Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'soc-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc

Pull ARM SoC platform changes from Arnd Bergmann:
"New and updated SoC support, notable changes include:

- bcm:
brcmstb SMP support
initial iproc/cygnus support
- exynos:
Exynos4415 SoC support
PMU and suspend support for Exynos5420
PMU support for Exynos3250
pm related maintenance
- imx:
new LS1021A SoC support
vybrid 610 global timer support
- integrator:
convert to using multiplatform configuration
- mediatek:
earlyprintk support for mt8127/mt8135
- meson:
Meson8 SoC and L2 cache controller support
- mvebu:
Armada 38x CPU hotplug support
drop support for prerelease Armada 375 Z1 stepping
extended suspend support, now works on Armada 370/XP
- omap:
hwmod related maintenance
prcm cleanup
- pxa:
initial pxa27x DT handling
- rockchip:
SMP support for rk3288
add cpu frequency scaling support
- shmobile:
r8a7740 power domain support
various small restart, timer, PCI and APMU changes
- sunxi:
Allwinner A80 (sun9i) earlyprintk support
- ux500:
power domain support

Overall, a significant chunk of changes, coming mostly from the usual
suspects: omap, shmobile, samsung and mvebu, all of which already
contain a lot of platform specific code in arch/arm"

* tag 'soc-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (187 commits)
ARM: mvebu: use the cpufreq-dt platform_data for independent clocks
soc: integrator: Add terminating entry for integrator_cm_match
ARM: mvebu: add SDRAM controller description for Armada XP
ARM: mvebu: adjust mbus controller description on Armada 370/XP
ARM: mvebu: add suspend/resume DT information for Armada XP GP
ARM: mvebu: synchronize secondary CPU clocks on resume
ARM: mvebu: make sure MMU is disabled in armada_370_xp_cpu_resume
ARM: mvebu: Armada XP GP specific suspend/resume code
ARM: mvebu: reserve the first 10 KB of each memory bank for suspend/resume
ARM: mvebu: implement suspend/resume support for Armada XP
clk: mvebu: add suspend/resume for gatable clocks
bus: mvebu-mbus: provide a mechanism to save SDRAM window configuration
bus: mvebu-mbus: suspend/resume support
clocksource: time-armada-370-xp: add suspend/resume support
irqchip: armada-370-xp: Add suspend/resume support
ARM: add lolevel debug support for asm9260
ARM: add mach-asm9260
ARM: EXYNOS: use u8 for val[] in struct exynos_pmu_conf
power: reset: imx-snvs-poweroff: add power off driver for i.mx6
ARM: imx: temporarily remove CONFIG_SOC_FSL from LS1021A
...

+6336 -2360
+5 -23
Documentation/arm/firmware.txt
···
 a need to provide an interface for such platforms to specify available firmware
 operations and call them when needed.
 
-Firmware operations can be specified using struct firmware_ops
-
-	struct firmware_ops {
-		/*
-		 * Enters CPU idle mode
-		 */
-		int (*do_idle)(void);
-		/*
-		 * Sets boot address of specified physical CPU
-		 */
-		int (*set_cpu_boot_addr)(int cpu, unsigned long boot_addr);
-		/*
-		 * Boots specified physical CPU
-		 */
-		int (*cpu_boot)(int cpu);
-		/*
-		 * Initializes L2 cache
-		 */
-		int (*l2x0_init)(void);
-	};
-
-and then registered with register_firmware_ops function
+Firmware operations can be specified by filling in a struct firmware_ops
+with appropriate callbacks and then registering it with register_firmware_ops()
+function.
 
 	void register_firmware_ops(const struct firmware_ops *ops)
 
-the ops pointer must be non-NULL.
+The ops pointer must be non-NULL. More information about struct firmware_ops
+and its members can be found in arch/arm/include/asm/firmware.h header.
 
 There is a default, empty set of operations provided, so there is no need to
 set anything if platform does not require firmware operations.
+13 -3
Documentation/arm/sunxi/README
···
     http://dl.linux-sunxi.org/A20/A20%20User%20Manual%202013-03-22.pdf
 
   - Allwinner A23
-    + Not Supported
+    + Datasheet
+	http://dl.linux-sunxi.org/A23/A23%20Datasheet%20V1.0%2020130830.pdf
+    + User Manual
+	http://dl.linux-sunxi.org/A23/A23%20User%20Manual%20V1.0%2020130830.pdf
 
 * Quad ARM Cortex-A7 based SoCs
   - Allwinner A31 (sun6i)
     + Datasheet
-	http://dl.linux-sunxi.org/A31/A31%20Datasheet%20-%20v1.00%20(2012-12-24).pdf
+	http://dl.linux-sunxi.org/A31/A3x_release_document/A31/IC/A31%20datasheet%20V1.3%2020131106.pdf
+    + User Manual
+	http://dl.linux-sunxi.org/A31/A3x_release_document/A31/IC/A31%20user%20manual%20V1.1%2020130630.pdf
 
   - Allwinner A31s (sun6i)
     + Not Supported
+    + Datasheet
+	http://dl.linux-sunxi.org/A31/A3x_release_document/A31s/IC/A31s%20datasheet%20V1.3%2020131106.pdf
+    + User Manual
+	http://dl.linux-sunxi.org/A31/A3x_release_document/A31s/IC/A31s%20User%20Manual%20%20V1.0%2020130322.pdf
 
 * Quad ARM Cortex-A15, Quad ARM Cortex-A7 based SoCs
   - Allwinner A80
-    + Not Supported
+    + Datasheet
+	http://dl.linux-sunxi.org/A80/A80_Datasheet_Revision_1.0_0404.pdf
+5 -3
Documentation/devicetree/bindings/arm/amlogic.txt
···
 -------------------------------------------
 
 Boards with the Amlogic Meson6 SoC shall have the following properties:
+  Required root node property:
+    compatible: "amlogic,meson6"
 
-Required root node property:
-
-compatible = "amlogic,meson6";
+Boards with the Amlogic Meson8 SoC shall have the following properties:
+  Required root node property:
+    compatible: "amlogic,meson8";
+9
Documentation/devicetree/bindings/arm/cpus.txt
···
 			# List of phandles to idle state nodes supported
 			  by this cpu [3].
 
+	- rockchip,pmu
+		Usage: optional for systems that have an "enable-method"
+		       property value of "rockchip,rk3066-smp"
+		       While optional, it is the preferred way to get access to
+		       the cpu-core power-domains.
+		Value type: <phandle>
+		Definition: Specifies the syscon node controlling the cpu core
+			    power domains.
+
 Example 1 (dual-cluster big.LITTLE system 32-bit):
 
 	cpus {
+12
Documentation/devicetree/bindings/arm/sunxi.txt
+Allwinner sunXi Platforms Device Tree Bindings
+
+Each device tree must specify which Allwinner SoC it uses,
+using one of the following compatible strings:
+
+  allwinner,sun4i-a10
+  allwinner,sun5i-a10s
+  allwinner,sun5i-a13
+  allwinner,sun6i-a31
+  allwinner,sun7i-a20
+  allwinner,sun8i-a23
+  allwinner,sun9i-a80
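A board's device tree would then carry one of these strings last in its root compatible list, after the board-specific string. A minimal sketch (the board compatible string here is only an illustration, not taken from this binding):

```dts
/dts-v1/;
/ {
	/* "vendor,board" first, then the SoC string from the list above */
	compatible = "vendor,some-a20-board", "allwinner,sun7i-a20";
	model = "Hypothetical Allwinner A20 based board";
};
```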
+35
Documentation/devicetree/bindings/arm/ux500/power_domain.txt
+* ST-Ericsson UX500 PM Domains
+
+UX500 supports multiple PM domains which are used to gate power to one or
+more peripherals on the SOC.
+
+The implementation of PM domains for UX500 are based upon the generic PM domain
+and use the corresponding DT bindings.
+
+==PM domain providers==
+
+Required properties:
+ - compatible: Must be "stericsson,ux500-pm-domains".
+ - #power-domain-cells : Number of cells in a power domain specifier, must be 1.
+
+Example:
+	pm_domains: pm_domains0 {
+		compatible = "stericsson,ux500-pm-domains";
+		#power-domain-cells = <1>;
+	};
+
+==PM domain consumers==
+
+Required properties:
+ - power-domains: A phandle and PM domain specifier. Below are the list of
+   valid specifiers:
+
+   Index	Specifier
+   -----	---------
+   0		DOMAIN_VAPE
+
+Example:
+	sdi0_per1@80126000 {
+		compatible = "arm,pl18x", "arm,primecell";
+		power-domains = <&pm_domains DOMAIN_VAPE>
+	};
+10 -7
Documentation/devicetree/bindings/bus/mvebu-mbus.txt
···
 - compatible: Should be set to "marvell,mbus-controller".
 
 - reg: Device's register space.
-   Two entries are expected (see the examples below):
-   the first one controls the devices decoding window and
-   the second one controls the SDRAM decoding window.
+   Two or three entries are expected (see the examples below):
+   the first one controls the devices decoding window,
+   the second one controls the SDRAM decoding window and
+   the third controls the MBus bridge (only with the
+   marvell,armada370-mbus and marvell,armadaxp-mbus
+   compatible strings)
 
 Example:
···
 		mbusc: mbus-controller@20000 {
 			compatible = "marvell,mbus-controller";
-			reg = <0x20000 0x100>, <0x20180 0x20>;
+			reg = <0x20000 0x100>, <0x20180 0x20>, <0x20250 0x8>;
 		};
 
 		/* more children ...*/
···
 		mbusc: mbus-controller@20000 {
 			compatible = "marvell,mbus-controller";
-			reg = <0x20000 0x100>, <0x20180 0x20>;
+			reg = <0x20000 0x100>, <0x20180 0x20>, <0x20250 0x8>;
 		};
 
 		/* more children ...*/
···
 		mbusc: mbus-controller@20000 {
 			compatible = "marvell,mbus-controller";
-			reg = <0x20000 0x100>, <0x20180 0x20>;
+			reg = <0x20000 0x100>, <0x20180 0x20>, <0x20250 0x8>;
 		};
 
 		/* other children */
···
 		ranges = <0 MBUS_ID(0xf0, 0x01) 0 0x100000>;
 
 		mbusc: mbus-controller@20000 {
-			reg = <0x20000 0x100>, <0x20180 0x20>;
+			reg = <0x20000 0x100>, <0x20180 0x20>, <0x20250 0x8>;
 		};
 
 		interrupt-controller@20000 {
+21
Documentation/devicetree/bindings/memory-controllers/mvebu-sdram-controller.txt
+Device Tree bindings for MVEBU SDRAM controllers
+
+The Marvell EBU SoCs all have a SDRAM controller. The SDRAM controller
+differs from one SoC variant to another, but they also share a number
+of commonalities.
+
+For now, this Device Tree binding documentation only documents the
+Armada XP SDRAM controller.
+
+Required properties:
+
+ - compatible: for Armada XP, "marvell,armada-xp-sdram-controller"
+ - reg: a resource specifier for the register space, which should
+   include all SDRAM controller registers as per the datasheet.
+
+Example:
+
+sdramc@1400 {
+	compatible = "marvell,armada-xp-sdram-controller";
+	reg = <0x1400 0x500>;
+};
+23
Documentation/devicetree/bindings/power_supply/imx-snvs-poweroff.txt
+i.mx6 Poweroff Driver
+
+SNVS_LPCR in SNVS module can power off the whole system by pull
+PMIC_ON_REQ low if PMIC_ON_REQ is connected with external PMIC.
+If you don't want to use PMIC_ON_REQ as power on/off control,
+please set status='disabled' to disable this driver.
+
+Required Properties:
+-compatible: "fsl,sec-v4.0-poweroff"
+-reg: Specifies the physical address of the SNVS_LPCR register
+
+Example:
+	snvs@020cc000 {
+		compatible = "fsl,sec-v4.0-mon", "simple-bus";
+		#address-cells = <1>;
+		#size-cells = <1>;
+		ranges = <0 0x020cc000 0x4000>;
+		.....
+		snvs_poweroff: snvs-poweroff@38 {
+			compatible = "fsl,sec-v4.0-poweroff";
+			reg = <0x38 0x4>;
+		};
+	}
+1
Documentation/devicetree/bindings/vendor-prefixes.txt
···
 marvell	Marvell Technology Group Ltd.
 maxim	Maxim Integrated Products
 mediatek	MediaTek Inc.
+merrii	Merrii Technology Co., Ltd.
 micrel	Micrel Inc.
 microchip	Microchip Technology Inc.
 micron	Micron Technology Inc.
+1
MAINTAINERS
···
 F:	arch/arm/configs/mackerel_defconfig
 F:	arch/arm/configs/marzen_defconfig
 F:	arch/arm/configs/shmobile_defconfig
+F:	arch/arm/include/debug/renesas-scif.S
 F:	arch/arm/mach-shmobile/
 F:	drivers/sh/
+2 -18
arch/arm/Kconfig
···
 	select SPARSE_IRQ
 	select USE_OF
 
-config ARCH_INTEGRATOR
-	bool "ARM Ltd. Integrator family"
-	select ARM_AMBA
-	select ARM_PATCH_PHYS_VIRT if MMU
-	select AUTO_ZRELADDR
-	select COMMON_CLK
-	select COMMON_CLK_VERSATILE
-	select GENERIC_CLOCKEVENTS
-	select HAVE_TCM
-	select ICST
-	select MULTI_IRQ_HANDLER
-	select PLAT_VERSATILE
-	select SPARSE_IRQ
-	select USE_OF
-	select VERSATILE_FPGA_IRQ
-	help
-	  Support for ARM's Integrator platform.
-
 config ARCH_REALVIEW
 	bool "ARM Ltd. RealView family"
 	select ARCH_WANT_OPTIONAL_GPIOLIB
···
 # plat- suffix) or along side the corresponding mach-* source.
 #
 source "arch/arm/mach-mvebu/Kconfig"
+
+source "arch/arm/mach-asm9260/Kconfig"
 
 source "arch/arm/mach-at91/Kconfig"
+161 -6
arch/arm/Kconfig.debug
···
 	prompt "Kernel low-level debugging port"
 	depends on DEBUG_LL
 
+config DEBUG_ASM9260_UART
+	bool "Kernel low-level debugging via asm9260 UART"
+	depends on MACH_ASM9260
+	help
+	  Say Y here if you want the debug print routines to direct
+	  their output to an UART or USART port on asm9260 based
+	  machines.
+
+	    DEBUG_UART_PHYS | DEBUG_UART_VIRT
+
+	    0x80000000      | 0xf0000000     | UART0
+	    0x80004000      | 0xf0004000     | UART1
+	    0x80008000      | 0xf0008000     | UART2
+	    0x8000c000      | 0xf000c000     | UART3
+	    0x80010000      | 0xf0010000     | UART4
+	    0x80014000      | 0xf0014000     | UART5
+	    0x80018000      | 0xf0018000     | UART6
+	    0x8001c000      | 0xf001c000     | UART7
+	    0x80020000      | 0xf0020000     | UART8
+	    0x80024000      | 0xf0024000     | UART9
+
 config AT91_DEBUG_LL_DBGU0
 	bool "Kernel low-level debugging on rm9200, 9260/9g20, 9261/9g10 and 9rl"
 	depends on HAVE_AT91_DBGU0
···
 config DEBUG_BCM_5301X
 	bool "Kernel low-level debugging on BCM5301X UART1"
 	depends on ARCH_BCM_5301X
-	select DEBUG_UART_PL01X
+	select DEBUG_UART_8250
···
 	help
 	  Say Y here if you want kernel low-level debugging support
 	  on Marvell Berlin SoC based platforms.
+
+config DEBUG_BRCMSTB_UART
+	bool "Use BRCMSTB UART for low-level debug"
+	depends on ARCH_BRCMSTB
+	select DEBUG_UART_8250
+	help
+	  Say Y here if you want the debug print routines to direct
+	  their output to the first serial port on these devices.
+
+	  If you have a Broadcom STB chip and would like early print
+	  messages to appear over the UART, select this option.
 
 config DEBUG_CLPS711X_UART1
 	bool "Kernel low-level debugging messages via UART1"
···
 	  Say Y here if you want kernel low-level debugging support
 	  on Rockchip RK32xx based platforms.
 
+config DEBUG_R7S72100_SCIF2
+	bool "Kernel low-level debugging messages via SCIF2 on R7S72100"
+	depends on ARCH_R7S72100
+	help
+	  Say Y here if you want kernel low-level debugging support
+	  via SCIF2 on Renesas RZ/A1H (R7S72100).
+
+config DEBUG_RCAR_GEN1_SCIF0
+	bool "Kernel low-level debugging messages via SCIF0 on R8A7778"
+	depends on ARCH_R8A7778
+	help
+	  Say Y here if you want kernel low-level debugging support
+	  via SCIF0 on Renesas R-Car M1A (R8A7778).
+
+config DEBUG_RCAR_GEN1_SCIF2
+	bool "Kernel low-level debugging messages via SCIF2 on R8A7779"
+	depends on ARCH_R8A7779
+	help
+	  Say Y here if you want kernel low-level debugging support
+	  via SCIF2 on Renesas R-Car H1 (R8A7779).
+
+config DEBUG_RCAR_GEN2_SCIF0
+	bool "Kernel low-level debugging messages via SCIF0 on R8A7790/R8A7791/R8A7793)"
+	depends on ARCH_R8A7790 || ARCH_R8A7791 || ARCH_R8A7793
+	help
+	  Say Y here if you want kernel low-level debugging support
+	  via SCIF0 on Renesas R-Car H2 (R8A7790), M2-W (R8A7791), or
+	  M2-N (R8A7793).
+
+config DEBUG_RCAR_GEN2_SCIF2
+	bool "Kernel low-level debugging messages via SCIF2 on R8A7794"
+	depends on ARCH_R8A7794
+	help
+	  Say Y here if you want kernel low-level debugging support
+	  via SCIF2 on Renesas R-Car E2 (R8A7794).
+
+config DEBUG_RMOBILE_SCIFA0
+	bool "Kernel low-level debugging messages via SCIFA0 on R8A73A4/SH7372"
+	depends on ARCH_R8A73A4 || ARCH_SH7372
+	help
+	  Say Y here if you want kernel low-level debugging support
+	  via SCIFA0 on Renesas R-Mobile APE6 (R8A73A4) or SH-Mobile
+	  AP4 (SH7372).
+
+config DEBUG_RMOBILE_SCIFA1
+	bool "Kernel low-level debugging messages via SCIFA1 on R8A7740"
+	depends on ARCH_R8A7740
+	help
+	  Say Y here if you want kernel low-level debugging support
+	  via SCIFA1 on Renesas R-Mobile A1 (R8A7740).
+
+config DEBUG_RMOBILE_SCIFA4
+	bool "Kernel low-level debugging messages via SCIFA4 on SH73A0"
+	depends on ARCH_SH73A0
+	help
+	  Say Y here if you want kernel low-level debugging support
+	  via SCIFA4 on Renesas SH-Mobile AG5 (SH73A0).
+
 config DEBUG_S3C_UART0
 	depends on PLAT_SAMSUNG
 	select DEBUG_EXYNOS_UART if ARCH_EXYNOS
···
 	  their output to UART 2. The port must have been initialised
 	  by the boot-loader before use.
 
+config DEBUG_SA1100
+	depends on ARCH_SA1100
+	bool "Use SA1100 UARTs for low-level debug"
+	help
+	  Say Y here if you want kernel low-level debugging support
+	  on SA-11x0 UART ports. The kernel will check for the first
+	  enabled UART in a sequence 3-1-2.
+
 config DEBUG_SOCFPGA_UART
 	depends on ARCH_SOCFPGA
 	bool "Use SOCFPGA UART for low-level debug"
···
 	help
 	  Say Y here if you want kernel low-level debugging support
 	  on SOCFPGA based platforms.
+
+config DEBUG_SUN9I_UART0
+	bool "Kernel low-level debugging messages via sun9i UART0"
+	depends on MACH_SUN9I
+	select DEBUG_UART_8250
+	help
+	  Say Y here if you want kernel low-level debugging support
+	  on Allwinner A80 based platforms on the UART0.
 
 config DEBUG_SUNXI_UART0
 	bool "Kernel low-level debugging messages via sunXi UART0"
···
 	help
 	  Say Y here if you want kernel low-level debugging support
 	  for Mediatek mt6589 based platforms on UART0.
+
+config DEBUG_MT8127_UART0
+	bool "Mediatek mt8127 UART0"
+	depends on ARCH_MEDIATEK
+	select DEBUG_UART_8250
+	help
+	  Say Y here if you want kernel low-level debugging support
+	  for Mediatek mt8127 based platforms on UART0.
+
+config DEBUG_MT8135_UART3
+	bool "Mediatek mt8135 UART3"
+	depends on ARCH_MEDIATEK
+	select DEBUG_UART_8250
+	help
+	  Say Y here if you want kernel low-level debugging support
+	  for Mediatek mt8135 based platforms on UART3.
 
 config DEBUG_VEXPRESS_UART0_DETECT
 	bool "Autodetect UART0 on Versatile Express Cortex-A core tiles"
···
 config DEBUG_LL_INCLUDE
 	string
+	default "debug/sa1100.S" if DEBUG_SA1100
 	default "debug/8250.S" if DEBUG_LL_UART_8250 || DEBUG_UART_8250
+	default "debug/asm9260.S" if DEBUG_ASM9260_UART
 	default "debug/clps711x.S" if DEBUG_CLPS711X_UART1 || DEBUG_CLPS711X_UART2
 	default "debug/meson.S" if DEBUG_MESON_UARTAO
 	default "debug/pl01x.S" if DEBUG_LL_UART_PL01X || DEBUG_UART_PL01X
···
 		DEBUG_IMX6SX_UART
 	default "debug/msm.S" if DEBUG_MSM_UART || DEBUG_QCOM_UARTDM
 	default "debug/omap2plus.S" if DEBUG_OMAP2PLUS_UART
+	default "debug/renesas-scif.S" if DEBUG_R7S72100_SCIF2
+	default "debug/renesas-scif.S" if DEBUG_RCAR_GEN1_SCIF0
+	default "debug/renesas-scif.S" if DEBUG_RCAR_GEN1_SCIF2
+	default "debug/renesas-scif.S" if DEBUG_RCAR_GEN2_SCIF0
+	default "debug/renesas-scif.S" if DEBUG_RCAR_GEN2_SCIF2
+	default "debug/renesas-scif.S" if DEBUG_RMOBILE_SCIFA0
+	default "debug/renesas-scif.S" if DEBUG_RMOBILE_SCIFA1
+	default "debug/renesas-scif.S" if DEBUG_RMOBILE_SCIFA4
 	default "debug/s3c24xx.S" if DEBUG_S3C24XX_UART
 	default "debug/s5pv210.S" if DEBUG_S5PV210_UART
 	default "debug/sirf.S" if DEBUG_SIRFPRIMA2_UART1 || DEBUG_SIRFMARCO_UART1
···
 	default 0x02530c00 if DEBUG_KEYSTONE_UART0
 	default 0x02531000 if DEBUG_KEYSTONE_UART1
 	default 0x03010fe0 if ARCH_RPC
+	default 0x07000000 if DEBUG_SUN9I_UART0
 	default 0x10009000 if DEBUG_REALVIEW_STD_PORT || \
 				DEBUG_VEXPRESS_UART0_CA9
 	default 0x1010c000 if DEBUG_REALVIEW_PB1176_PORT
···
 	default 0x10126000 if DEBUG_RK3X_UART1
 	default 0x101f1000 if ARCH_VERSATILE
 	default 0x101fb000 if DEBUG_NOMADIK_UART
+	default 0x11002000 if DEBUG_MT8127_UART0
 	default 0x11006000 if DEBUG_MT6589_UART0
+	default 0x11009000 if DEBUG_MT8135_UART3
 	default 0x16000000 if ARCH_INTEGRATOR
 	default 0x18000300 if DEBUG_BCM_5301X
 	default 0x1c090000 if DEBUG_VEXPRESS_UART0_RS1
···
 	default 0x78000000 if DEBUG_CNS3XXX
 	default 0x7c0003f8 if FOOTBRIDGE
 	default 0x78000000 if DEBUG_CNS3XXX
+	default 0x80010000 if DEBUG_ASM9260_UART
 	default 0x80070000 if DEBUG_IMX23_UART
 	default 0x80074000 if DEBUG_IMX28_UART
 	default 0x80230000 if DEBUG_PICOXCELL_UART
···
 	default 0xd4018000 if DEBUG_MMP_UART3
 	default 0xe0000000 if ARCH_SPEAR13XX
 	default 0xe4007000 if DEBUG_HIP04_UART
+	default 0xe6c40000 if DEBUG_RMOBILE_SCIFA0
+	default 0xe6c50000 if DEBUG_RMOBILE_SCIFA1
+	default 0xe6c80000 if DEBUG_RMOBILE_SCIFA4
+	default 0xe6e58000 if DEBUG_RCAR_GEN2_SCIF2
+	default 0xe6e60000 if DEBUG_RCAR_GEN2_SCIF0
+	default 0xe8008000 if DEBUG_R7S72100_SCIF2
 	default 0xf0000be0 if ARCH_EBSA110
+	default 0xf040ab00 if DEBUG_BRCMSTB_UART
 	default 0xf1012000 if DEBUG_MVEBU_UART_ALTERNATE
 	default 0xf1012000 if ARCH_DOVE || ARCH_MV78XX0 || \
 				ARCH_ORION5X
···
 	default 0xff690000 if DEBUG_RK32_UART2
 	default 0xffc02000 if DEBUG_SOCFPGA_UART
 	default 0xffd82340 if ARCH_IOP13XX
+	default 0xffe40000 if DEBUG_RCAR_GEN1_SCIF0
+	default 0xffe42000 if DEBUG_RCAR_GEN1_SCIF2
 	default 0xfff36000 if DEBUG_HIGHBANK_UART
 	default 0xfffe8600 if DEBUG_UART_BCM63XX
 	default 0xfffff700 if ARCH_IOP33X
 	depends on DEBUG_LL_UART_8250 || DEBUG_LL_UART_PL01X || \
 		DEBUG_LL_UART_EFM32 || \
 		DEBUG_UART_8250 || DEBUG_UART_PL01X || DEBUG_MESON_UARTAO || \
-		DEBUG_MSM_UART || DEBUG_QCOM_UARTDM || DEBUG_S3C24XX_UART || \
-		DEBUG_UART_BCM63XX
+		DEBUG_MSM_UART || DEBUG_QCOM_UARTDM || DEBUG_R7S72100_SCIF2 || \
+		DEBUG_RCAR_GEN1_SCIF0 || DEBUG_RCAR_GEN1_SCIF2 || \
+		DEBUG_RCAR_GEN2_SCIF0 || DEBUG_RCAR_GEN2_SCIF2 || \
+		DEBUG_RMOBILE_SCIFA0 || DEBUG_RMOBILE_SCIFA1 || \
+		DEBUG_RMOBILE_SCIFA4 || DEBUG_S3C24XX_UART || \
+		DEBUG_UART_BCM63XX || DEBUG_ASM9260_UART
 
 config DEBUG_UART_VIRT
 	hex "Virtual base address of debug UART"
 	default 0xe0010fe0 if ARCH_RPC
 	default 0xe1000000 if DEBUG_MSM_UART
 	default 0xf0000be0 if ARCH_EBSA110
+	default 0xf0010000 if DEBUG_ASM9260_UART
 	default 0xf01fb000 if DEBUG_NOMADIK_UART
 	default 0xf0201000 if DEBUG_BCM2835
 	default 0xf1000300 if DEBUG_BCM_5301X
+	default 0xf1002000 if DEBUG_MT8127_UART0
 	default 0xf1006000 if DEBUG_MT6589_UART0
+	default 0xf1009000 if DEBUG_MT8135_UART3
 	default 0xf11f1000 if ARCH_VERSATILE
 	default 0xf1600000 if ARCH_INTEGRATOR
 	default 0xf1c28000 if DEBUG_SUNXI_UART0
···
 	default 0xf6200000 if DEBUG_PXA_UART1
 	default 0xf4090000 if ARCH_LPC32XX
 	default 0xf4200000 if ARCH_GEMINI
+	default 0xf7000000 if DEBUG_SUN9I_UART0
 	default 0xf7000000 if DEBUG_S3C24XX_UART && (DEBUG_S3C_UART0 || \
 				DEBUG_S3C2410_UART0)
 	default 0xf7004000 if DEBUG_S3C24XX_UART && (DEBUG_S3C_UART1 || \
···
 	default 0xfb002000 if DEBUG_CNS3XXX
 	default 0xfb009000 if DEBUG_REALVIEW_STD_PORT
 	default 0xfb10c000 if DEBUG_REALVIEW_PB1176_PORT
+	default 0xfc40ab00 if DEBUG_BRCMSTB_UART
 	default 0xfcfe8600 if DEBUG_UART_BCM63XX
 	default 0xfd000000 if ARCH_SPEAR3XX || ARCH_SPEAR6XX
 	default 0xfd000000 if ARCH_SPEAR13XX
···
 	depends on DEBUG_LL_UART_8250 || DEBUG_LL_UART_PL01X || \
 		DEBUG_UART_8250 || DEBUG_UART_PL01X || DEBUG_MESON_UARTAO || \
 		DEBUG_MSM_UART || DEBUG_QCOM_UARTDM || DEBUG_S3C24XX_UART || \
-		DEBUG_UART_BCM63XX
+		DEBUG_UART_BCM63XX || DEBUG_ASM9260_UART
 
 config DEBUG_UART_8250_SHIFT
 	int "Register offset shift for the 8250 debug UART"
 	depends on DEBUG_LL_UART_8250 || DEBUG_UART_8250
-	default 0 if FOOTBRIDGE || ARCH_IOP32X
+	default 0 if FOOTBRIDGE || ARCH_IOP32X || DEBUG_BCM_5301X
 	default 2
 
 config DEBUG_UART_8250_WORD
···
 		ARCH_KEYSTONE || \
 		DEBUG_DAVINCI_DMx_UART0 || DEBUG_DAVINCI_DA8XX_UART1 || \
 		DEBUG_DAVINCI_DA8XX_UART2 || \
-		DEBUG_BCM_KONA_UART || DEBUG_RK32_UART2
+		DEBUG_BCM_KONA_UART || DEBUG_RK32_UART2 || \
+		DEBUG_BRCMSTB_UART
 
 config DEBUG_UART_8250_FLOW_CONTROL
 	bool "Enable flow control for 8250 UART"
+2 -1
arch/arm/boot/dts/armada-370-xp.dtsi
···
 			mbusc: mbus-controller@20000 {
 				compatible = "marvell,mbus-controller";
-				reg = <0x20000 0x100>, <0x20180 0x20>;
+				reg = <0x20000 0x100>, <0x20180 0x20>,
+				      <0x20250 0x8>;
 			};
 
 			mpic: interrupt-controller@20000 {
+18 -1
arch/arm/boot/dts/armada-xp-gp.dts
···
  */
 
 /dts-v1/;
+#include <dt-bindings/gpio/gpio.h>
 #include "armada-xp-mv78460.dtsi"
 
 / {
···
 		 */
 		reg = <0x00000000 0x00000000 0x00000000 0xf0000000>,
 		      <0x00000001 0x00000000 0x00000001 0x00000000>;
+	};
+
+	cpus {
+		pm_pic {
+			ctrl-gpios = <&gpio0 16 GPIO_ACTIVE_LOW>,
+				     <&gpio0 17 GPIO_ACTIVE_LOW>,
+				     <&gpio0 18 GPIO_ACTIVE_LOW>;
+		};
 	};
 
 	soc {
···
 			serial@12300 {
 				status = "okay";
 			};
-
+			pinctrl {
+				pinctrl-0 = <&pic_pins>;
+				pinctrl-names = "default";
+				pic_pins: pic-pins-0 {
+					marvell,pins = "mpp16", "mpp17",
+						       "mpp18";
+					marvell,function = "gpio";
+				};
+			};
 			sata@a0000 {
 				nr-ports = <2>;
 				status = "okay";
+5
arch/arm/boot/dts/armada-xp.dtsi
···
 	};
 
 	internal-regs {
+		sdramc@1400 {
+			compatible = "marvell,armada-xp-sdram-controller";
+			reg = <0x1400 0x500>;
+		};
+
 		L2: l2-cache {
 			compatible = "marvell,aurora-system-cache";
 			reg = <0x08000 0x1000>;
+47 -1
arch/arm/boot/dts/integrator.dtsi
···
 / {
 	core-module@10000000 {
-		compatible = "arm,core-module-integrator";
+		compatible = "arm,core-module-integrator", "syscon";
 		reg = <0x10000000 0x200>;
+
+		/* Use core module LED to indicate CPU load */
+		led@0c.0 {
+			compatible = "register-bit-led";
+			offset = <0x0c>;
+			mask = <0x01>;
+			label = "integrator:core_module";
+			linux,default-trigger = "cpu0";
+			default-state = "on";
+		};
 	};
 
 	ebi@12000000 {
···
 	kmi@19000000 {
 		reg = <0x19000000 0x1000>;
 		interrupts = <4>;
+	};
+
+	syscon {
+		/* Debug registers mapped as syscon */
+		compatible = "syscon";
+		reg = <0x1a000000 0x10>;
+
+		led@04.0 {
+			compatible = "register-bit-led";
+			offset = <0x04>;
+			mask = <0x01>;
+			label = "integrator:green0";
+			linux,default-trigger = "heartbeat";
+			default-state = "on";
+		};
+		led@04.1 {
+			compatible = "register-bit-led";
+			offset = <0x04>;
+			mask = <0x02>;
+			label = "integrator:yellow";
+			default-state = "off";
+		};
+		led@04.2 {
+			compatible = "register-bit-led";
+			offset = <0x04>;
+			mask = <0x04>;
+			label = "integrator:red";
+			default-state = "off";
+		};
+		led@04.3 {
+			compatible = "register-bit-led";
+			offset = <0x04>;
+			mask = <0x08>;
+			label = "integrator:green1";
+			default-state = "off";
+		};
 	};
 };
+17
arch/arm/boot/dts/omap3-cm-t3x30.dtsi
···
 #include "twl4030.dtsi"
 #include "twl4030_omap3.dtsi"
+#include <dt-bindings/input/input.h>
 
 &mmc1 {
 	vmmc-supply = <&vmmc1>;
···
 	ti,use-leds;
 	/* pullups: BIT(0) */
 	ti,pullups = <0x000001>;
+};
+
+&twl_keypad {
+	linux,keymap = <
+		MATRIX_KEY(0x00, 0x01, KEY_A)
+		MATRIX_KEY(0x00, 0x02, KEY_B)
+		MATRIX_KEY(0x00, 0x03, KEY_LEFT)
+
+		MATRIX_KEY(0x01, 0x01, KEY_UP)
+		MATRIX_KEY(0x01, 0x02, KEY_ENTER)
+		MATRIX_KEY(0x01, 0x03, KEY_DOWN)
+
+		MATRIX_KEY(0x02, 0x01, KEY_RIGHT)
+		MATRIX_KEY(0x02, 0x02, KEY_C)
+		MATRIX_KEY(0x02, 0x03, KEY_D)
+	>;
 };
 
 &hsusb1_phy {
+1 -1
arch/arm/boot/dts/omap4.dtsi
···
 		reg = <0x58002000 0x1000>;
 		status = "disabled";
 		ti,hwmods = "dss_rfbi";
-		clocks = <&dss_dss_clk>, <&dss_fck>;
+		clocks = <&dss_dss_clk>, <&l3_div_ck>;
 		clock-names = "fck", "ick";
 	};
-8
arch/arm/boot/dts/omap44xx-clocks.dtsi
···
 		reg = <0x1120>;
 	};
 
-	dss_fck: dss_fck {
-		#clock-cells = <0>;
-		compatible = "ti,gate-clock";
-		clocks = <&l3_div_ck>;
-		ti,bit-shift = <1>;
-		reg = <0x1120>;
-	};
-
 	fdif_fck: fdif_fck {
 		#clock-cells = <0>;
 		compatible = "ti,divider-clock";
+22
arch/arm/boot/dts/ste-dbx5x0.dtsi
···
 #include <dt-bindings/interrupt-controller/irq.h>
 #include <dt-bindings/mfd/dbx500-prcmu.h>
+#include <dt-bindings/arm/ux500_pm_domains.h>
 #include "skeleton.dtsi"
 
 / {
···
 			interrupts = <0 7 IRQ_TYPE_LEVEL_HIGH>;
 		};
 
+		pm_domains: pm_domains0 {
+			compatible = "stericsson,ux500-pm-domains";
+			#power-domain-cells = <1>;
+		};
 
 		clocks {
 			compatible = "stericsson,u8500-clks";
···
 			clock-frequency = <400000>;
 			clocks = <&prcc_kclk 3 3>, <&prcc_pclk 3 3>;
 			clock-names = "i2cclk", "apb_pclk";
+			power-domains = <&pm_domains DOMAIN_VAPE>;
 		};
 
 		i2c@80122000 {
···
 			clocks = <&prcc_kclk 1 2>, <&prcc_pclk 1 2>;
 			clock-names = "i2cclk", "apb_pclk";
+			power-domains = <&pm_domains DOMAIN_VAPE>;
 		};
 
 		i2c@80128000 {
···
 			clocks = <&prcc_kclk 1 6>, <&prcc_pclk 1 6>;
 			clock-names = "i2cclk", "apb_pclk";
+			power-domains = <&pm_domains DOMAIN_VAPE>;
 		};
 
 		i2c@80110000 {
···
 			clocks = <&prcc_kclk 2 0>, <&prcc_pclk 2 0>;
 			clock-names = "i2cclk", "apb_pclk";
+			power-domains = <&pm_domains DOMAIN_VAPE>;
 		};
 
 		i2c@8012a000 {
···
 			clocks = <&prcc_kclk 1 9>, <&prcc_pclk 1 10>;
 			clock-names = "i2cclk", "apb_pclk";
+			power-domains = <&pm_domains DOMAIN_VAPE>;
 		};
 
 		ssp@80002000 {
···
 			dmas = <&dma 8 0 0x2>, /* Logical - DevToMem */
 			       <&dma 8 0 0x0>; /* Logical - MemToDev */
 			dma-names = "rx", "tx";
+			power-domains = <&pm_domains DOMAIN_VAPE>;
 		};
 
 		ssp@80003000 {
···
 			dmas = <&dma 9 0 0x2>, /* Logical - DevToMem */
 			       <&dma 9 0 0x0>; /* Logical - MemToDev */
 			dma-names = "rx", "tx";
+			power-domains = <&pm_domains DOMAIN_VAPE>;
 		};
 
 		spi@8011a000 {
···
 			dmas = <&dma 0 0 0x2>, /* Logical - DevToMem */
 			       <&dma 0 0 0x0>; /* Logical - MemToDev */
 			dma-names = "rx", "tx";
+			power-domains = <&pm_domains DOMAIN_VAPE>;
 		};
 
 		spi@80112000 {
···
 			dmas = <&dma 35 0 0x2>, /* Logical - DevToMem */
 			       <&dma 35 0 0x0>; /* Logical - MemToDev */
 			dma-names = "rx", "tx";
+			power-domains = <&pm_domains DOMAIN_VAPE>;
 		};
 
 		spi@80111000 {
···
 			dmas = <&dma 33 0 0x2>, /* Logical - DevToMem */
 			       <&dma 33 0 0x0>; /* Logical - MemToDev */
 			dma-names = "rx", "tx";
+			power-domains = <&pm_domains DOMAIN_VAPE>;
 		};
 
 		spi@80129000 {
···
 			dmas = <&dma 40 0 0x2>, /* Logical - DevToMem */
 			       <&dma 40 0 0x0>; /* Logical - MemToDev */
 			dma-names = "rx", "tx";
+			power-domains = <&pm_domains DOMAIN_VAPE>;
 		};
 
 		uart@80120000 {
···
 			clocks = <&prcc_kclk 1 5>, <&prcc_pclk 1 5>;
 			clock-names = "sdi", "apb_pclk";
+			power-domains = <&pm_domains DOMAIN_VAPE>;
 
 			status = "disabled";
 		};
···
 			clocks = <&prcc_kclk 2 4>, <&prcc_pclk 2 6>;
 			clock-names = "sdi", "apb_pclk";
+			power-domains = <&pm_domains DOMAIN_VAPE>;
 
 			status = "disabled";
 		};
···
 			clocks = <&prcc_kclk 3 4>, <&prcc_pclk 3 4>;
 			clock-names = "sdi", "apb_pclk";
+			power-domains = <&pm_domains DOMAIN_VAPE>;
 
 			status = "disabled";
 		};
···
 			clocks = <&prcc_kclk 2 5>, <&prcc_pclk 2 7>;
 			clock-names = "sdi", "apb_pclk";
+			power-domains = <&pm_domains DOMAIN_VAPE>;
 
 			status = "disabled";
 		};
···
 			clocks = <&prcc_kclk 2 2>, <&prcc_pclk 2 4>;
 			clock-names = "sdi", "apb_pclk";
+			power-domains = <&pm_domains DOMAIN_VAPE>;
 
 			status = "disabled";
 		};
···
 			clocks = <&prcc_kclk 3 7>, <&prcc_pclk 3 7>;
 			clock-names = "sdi", "apb_pclk";
+			power-domains = <&pm_domains DOMAIN_VAPE>;
 
 			status = "disabled";
 		};
+2 -1
arch/arm/configs/bcm_defconfig
···
 # CONFIG_BLK_DEV_BSG is not set
 CONFIG_PARTITION_ADVANCED=y
 CONFIG_ARCH_BCM=y
-CONFIG_ARCH_BCM_MOBILE=y
+CONFIG_ARCH_BCM_21664=y
+CONFIG_ARCH_BCM_281XX=y
 CONFIG_ARM_THUMBEE=y
 CONFIG_SMP=y
 CONFIG_PREEMPT=y
+3
arch/arm/configs/integrator_defconfig
···
 CONFIG_MODULES=y
 CONFIG_MODULE_UNLOAD=y
 CONFIG_PARTITION_ADVANCED=y
+CONFIG_ARCH_MULTI_V4T=y
+CONFIG_ARCH_MULTI_V5=y
+# CONFIG_ARCH_MULTI_V7 is not set
 CONFIG_ARCH_INTEGRATOR=y
 CONFIG_ARCH_INTEGRATOR_AP=y
 CONFIG_ARCH_INTEGRATOR_CP=y
+9 -1
arch/arm/include/asm/firmware.h
···
 	/*
 	 * Enters CPU idle mode
 	 */
-	int (*do_idle)(void);
+	int (*do_idle)(unsigned long mode);
 	/*
 	 * Sets boot address of specified physical CPU
 	 */
···
 	/*
 	 * Initializes L2 cache
 	 */
 	int (*l2x0_init)(void);
+	/*
+	 * Enter system-wide suspend.
+	 */
+	int (*suspend)(void);
+	/*
+	 * Restore state of privileged hardware after system-wide suspend.
+	 */
+	int (*resume)(void);
 };
 
 /* Global pointer for current firmware_ops structure, can't be NULL. */
+29
arch/arm/include/debug/asm9260.S
/* Debugging macro include header
 *
 * Copyright (C) 1994-1999 Russell King
 * Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
 * Modified for ASM9260 by Oleksij Rempel <linux@rempel-privat.de>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 */

	.macro	addruart, rp, rv, tmp
	ldr	\rp, = CONFIG_DEBUG_UART_PHYS
	ldr	\rv, = CONFIG_DEBUG_UART_VIRT
	.endm

	.macro	waituart, rd, rx
	.endm

	.macro	senduart, rd, rx
	str	\rd, [\rx, #0x50]	@ TXDATA
	.endm

	.macro	busyuart, rd, rx
1002:	ldr	\rd, [\rx, #0x60]	@ STAT
	tst	\rd, #1 << 27		@ TXEMPTY
	beq	1002b			@ wait until transmit done
	.endm
+52
arch/arm/include/debug/renesas-scif.S
/*
 * Renesas SCIF(A) debugging macro include header
 *
 * Based on r8a7790.S
 *
 * Copyright (C) 2012-2013 Renesas Electronics Corporation
 * Copyright (C) 1994-1999 Russell King
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 */

#define SCIF_PHYS	CONFIG_DEBUG_UART_PHYS
#define SCIF_VIRT	((SCIF_PHYS & 0x00ffffff) | 0xfd000000)

#if CONFIG_DEBUG_UART_PHYS < 0xe6e00000
/* SCIFA */
#define FTDR		0x20
#define FSR		0x14
#else
/* SCIF */
#define FTDR		0x0c
#define FSR		0x10
#endif

#define TDFE	(1 << 5)
#define TEND	(1 << 6)

	.macro	addruart, rp, rv, tmp
	ldr	\rp, =SCIF_PHYS
	ldr	\rv, =SCIF_VIRT
	.endm

	.macro	waituart, rd, rx
1001:	ldrh	\rd, [\rx, #FSR]
	tst	\rd, #TDFE
	beq	1001b
	.endm

	.macro	senduart, rd, rx
	strb	\rd, [\rx, #FTDR]
	ldrh	\rd, [\rx, #FSR]
	bic	\rd, \rd, #TEND
	strh	\rd, [\rx, #FSR]
	.endm

	.macro	busyuart, rd, rx
1001:	ldrh	\rd, [\rx, #FSR]
	tst	\rd, #TEND
	beq	1001b
	.endm
+6
arch/arm/mach-asm9260/Kconfig
config MACH_ASM9260
	bool "Alphascale ASM9260"
	depends on ARCH_MULTI_V5
	select CPU_ARM926T
	help
	  Support for Alphascale ASM9260 based platform.
+57 -39
arch/arm/mach-bcm/Kconfig
···
 
 if ARCH_BCM
 
+comment "IPROC architected SoCs"
+
+config ARCH_BCM_IPROC
+	bool
+	select ARM_GIC
+	select CACHE_L2X0
+	select HAVE_ARM_SCU if SMP
+	select HAVE_ARM_TWD if SMP
+	select ARM_GLOBAL_TIMER
+
+	select CLKSRC_MMIO
+	select ARCH_REQUIRE_GPIOLIB
+	select ARM_AMBA
+	select PINCTRL
+	help
+	  This enables support for systems based on Broadcom IPROC architected SoCs.
+	  The IPROC complex contains one or more ARM CPUs along with common
+	  core peripherals. Application specific SoCs are created by adding a
+	  uArchitecture containing peripherals outside of the IPROC complex.
+	  Currently supported SoCs are Cygnus.
+
+config ARCH_BCM_CYGNUS
+	bool "Broadcom Cygnus Support" if ARCH_MULTI_V7
+	select ARCH_BCM_IPROC
+	help
+	  Enable support for the Cygnus family,
+	  which includes the following variants:
+	  BCM11300, BCM11320, BCM11350, BCM11360,
+	  BCM58300, BCM58302, BCM58303, BCM58305.
+
+config ARCH_BCM_5301X
+	bool "Broadcom BCM470X / BCM5301X ARM SoC" if ARCH_MULTI_V7
+	select ARCH_BCM_IPROC
+	help
+	  Support for Broadcom BCM470X and BCM5301X SoCs with ARM CPU cores.
+
+	  This is a network SoC line mostly used in home routers and
+	  wifi access points, its internal name is Northstar.
+	  This includes the following SoCs: BCM53010, BCM53011, BCM53012,
+	  BCM53014, BCM53015, BCM53016, BCM53017, BCM53018, BCM4707,
+	  BCM4708 and BCM4709.
+
+	  Do not confuse this with the BCM4760 which is a totally
+	  different SoC or with the older BCM47XX and BCM53XX based
+	  network SoCs using a MIPS CPU; they are supported by arch/mips/bcm47xx.
+
+comment "KONA architected SoCs"
+
 config ARCH_BCM_MOBILE
-	bool "Broadcom Mobile SoC Support" if ARCH_MULTI_V7
+	bool
 	select ARCH_REQUIRE_GPIOLIB
 	select ARM_ERRATA_754322
 	select ARM_ERRATA_775420
···
 	select TICK_ONESHOT
 	select HAVE_ARM_ARCH_TIMER
 	select PINCTRL
+	select ARCH_BCM_MOBILE_SMP if SMP
 	help
 	  This enables support for systems based on Broadcom mobile SoCs.
 
-if ARCH_BCM_MOBILE
-
-menu "Broadcom Mobile SoC Selection"
-
 config ARCH_BCM_281XX
 	bool "Broadcom BCM281XX SoC family"
-	default y
+	select ARCH_BCM_MOBILE
 	select HAVE_SMP
 	help
 	  Enable support for the BCM281XX family, which includes
···
 config ARCH_BCM_21664
 	bool "Broadcom BCM21664 SoC family"
-	default y
+	select ARCH_BCM_MOBILE
 	select HAVE_SMP
 	help
 	  Enable support for the BCM21664 family, which includes
···
 config ARCH_BCM_MOBILE_L2_CACHE
 	bool "Broadcom mobile SoC level 2 cache support"
-	depends on (ARCH_BCM_281XX || ARCH_BCM_21664)
+	depends on ARCH_BCM_MOBILE
 	default y
 	select CACHE_L2X0
 	select ARCH_BCM_MOBILE_SMC
 
 config ARCH_BCM_MOBILE_SMC
 	bool
-	depends on ARCH_BCM_281XX || ARCH_BCM_21664
+	depends on ARCH_BCM_MOBILE
 
 config ARCH_BCM_MOBILE_SMP
-	bool "Broadcom mobile SoC SMP support"
-	depends on (ARCH_BCM_281XX || ARCH_BCM_21664) && SMP
-	default y
+	bool
+	depends on ARCH_BCM_MOBILE
 	select HAVE_ARM_SCU
 	select ARM_ERRATA_764369
 	help
···
 	  Provided as an option so SMP support for SoCs of this type
 	  can be disabled for an SMP-enabled kernel.
 
-endmenu
-
-endif
+comment "Other Architectures"
 
 config ARCH_BCM2835
 	bool "Broadcom BCM2835 family" if ARCH_MULTI_V6
···
 	help
 	  This enables support for the Broadcom BCM2835 SoC. This SoC is
 	  used in the Raspberry Pi and Roku 2 devices.
-
-config ARCH_BCM_5301X
-	bool "Broadcom BCM470X / BCM5301X ARM SoC" if ARCH_MULTI_V7
-	select ARM_GIC
-	select CACHE_L2X0
-	select HAVE_ARM_SCU if SMP
-	select HAVE_ARM_TWD if SMP
-	select ARM_GLOBAL_TIMER
-	select CLKSRC_ARM_GLOBAL_TIMER_SCHED_CLOCK
-	help
-	  Support for Broadcom BCM470X and BCM5301X SoCs with ARM CPU cores.
-
-	  This is a network SoC line mostly used in home routers and
-	  wifi access points, it's internal name is Northstar.
-	  This inclused the following SoC: BCM53010, BCM53011, BCM53012,
-	  BCM53014, BCM53015, BCM53016, BCM53017, BCM53018, BCM4707,
-	  BCM4708 and BCM4709.
-
-	  Do not confuse this with the BCM4760 which is a totally
-	  different SoC or with the older BCM47XX and BCM53XX based
-	  network SoC using a MIPS CPU, they are supported by arch/mips/bcm47xx
 
 config ARCH_BCM_63XX
 	bool "Broadcom BCM63xx DSL SoC" if ARCH_MULTI_V7
···
 config ARCH_BRCMSTB
 	bool "Broadcom BCM7XXX based boards" if ARCH_MULTI_V7
-	depends on MMU
 	select ARM_GIC
-	select MIGHT_HAVE_PCI
-	select HAVE_SMP
 	select HAVE_ARM_ARCH_TIMER
 	select BRCMSTB_GISB_ARB
 	select BRCMSTB_L2_IRQ
+5
arch/arm/mach-bcm/Makefile
···
 # of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 # GNU General Public License for more details.
 
+# Cygnus
+obj-$(CONFIG_ARCH_BCM_CYGNUS)	+= bcm_cygnus.o
+
 # BCM281XX
 obj-$(CONFIG_ARCH_BCM_281XX)	+= board_bcm281xx.o
 
···
 obj-$(CONFIG_ARCH_BCM_63XX)	:= bcm63xx.o
 
 ifeq ($(CONFIG_ARCH_BRCMSTB),y)
+CFLAGS_platsmp-brcmstb.o	+= -march=armv7-a
 obj-y				+= brcmstb.o
+obj-$(CONFIG_SMP)		+= headsmp-brcmstb.o platsmp-brcmstb.o
 endif
+25
arch/arm/mach-bcm/bcm_cygnus.c
/*
 * Copyright (C) 2014 Broadcom Corporation
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License as
 * published by the Free Software Foundation version 2.
 *
 * This program is distributed "as is" WITHOUT ANY WARRANTY of any
 * kind, whether express or implied; without even the implied warranty
 * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 */

#include <asm/mach/arch.h>

static const char * const bcm_cygnus_dt_compat[] = {
	"brcm,cygnus",
	NULL,
};

DT_MACHINE_START(BCM_CYGNUS_DT, "Broadcom Cygnus SoC")
	.l2c_aux_val	= 0,
	.l2c_aux_mask	= ~0,
	.dt_compat	= bcm_cygnus_dt_compat,
MACHINE_END
+19
arch/arm/mach-bcm/brcmstb.h
/*
 * Copyright (C) 2013-2014 Broadcom Corporation
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License as
 * published by the Free Software Foundation version 2.
 *
 * This program is distributed "as is" WITHOUT ANY WARRANTY of any
 * kind, whether express or implied; without even the implied warranty
 * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 */

#ifndef __BRCMSTB_H__
#define __BRCMSTB_H__

void brcmstb_secondary_startup(void);

#endif /* __BRCMSTB_H__ */
+33
arch/arm/mach-bcm/headsmp-brcmstb.S
/*
 * SMP boot code for secondary CPUs
 * Based on arch/arm/mach-tegra/headsmp.S
 *
 * Copyright (C) 2010 NVIDIA, Inc.
 * Copyright (C) 2013-2014 Broadcom Corporation
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License as
 * published by the Free Software Foundation version 2.
 *
 * This program is distributed "as is" WITHOUT ANY WARRANTY of any
 * kind, whether express or implied; without even the implied warranty
 * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 */

#include <asm/assembler.h>
#include <linux/linkage.h>
#include <linux/init.h>

	.section ".text.head", "ax"

ENTRY(brcmstb_secondary_startup)
	/*
	 * Ensure CPU is in a sane state by disabling all IRQs and switching
	 * into SVC mode.
	 */
	setmode	PSR_I_BIT | PSR_F_BIT | SVC_MODE, r0

	bl	v7_invalidate_l1
	b	secondary_startup
ENDPROC(brcmstb_secondary_startup)
+329
arch/arm/mach-bcm/platsmp-brcmstb.c
/*
 * Broadcom STB CPU SMP and hotplug support for ARM
 *
 * Copyright (C) 2013-2014 Broadcom Corporation
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License as
 * published by the Free Software Foundation version 2.
 *
 * This program is distributed "as is" WITHOUT ANY WARRANTY of any
 * kind, whether express or implied; without even the implied warranty
 * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 */

#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/io.h>
#include <linux/of_address.h>
#include <linux/of_platform.h>
#include <linux/printk.h>
#include <linux/regmap.h>
#include <linux/smp.h>
#include <linux/mfd/syscon.h>

#include <asm/cacheflush.h>
#include <asm/cp15.h>
#include <asm/mach-types.h>
#include <asm/smp_plat.h>

#include "brcmstb.h"

enum {
	ZONE_MAN_CLKEN_MASK		= BIT(0),
	ZONE_MAN_RESET_CNTL_MASK	= BIT(1),
	ZONE_MAN_MEM_PWR_MASK		= BIT(4),
	ZONE_RESERVED_1_MASK		= BIT(5),
	ZONE_MAN_ISO_CNTL_MASK		= BIT(6),
	ZONE_MANUAL_CONTROL_MASK	= BIT(7),
	ZONE_PWR_DN_REQ_MASK		= BIT(9),
	ZONE_PWR_UP_REQ_MASK		= BIT(10),
	ZONE_BLK_RST_ASSERT_MASK	= BIT(12),
	ZONE_PWR_OFF_STATE_MASK		= BIT(25),
	ZONE_PWR_ON_STATE_MASK		= BIT(26),
	ZONE_DPG_PWR_STATE_MASK		= BIT(28),
	ZONE_MEM_PWR_STATE_MASK		= BIT(29),
	ZONE_RESET_STATE_MASK		= BIT(31),
	CPU0_PWR_ZONE_CTRL_REG		= 1,
	CPU_RESET_CONFIG_REG		= 2,
};

static void __iomem *cpubiuctrl_block;
static void __iomem *hif_cont_block;
static u32 cpu0_pwr_zone_ctrl_reg;
static u32 cpu_rst_cfg_reg;
static u32 hif_cont_reg;

#ifdef CONFIG_HOTPLUG_CPU
/*
 * We must quiesce a dying CPU before it can be killed by the boot CPU. Because
 * one or more caches may be disabled, we must flush to ensure coherency. We
 * cannot use traditional completion structures or spinlocks as they rely on
 * coherency.
 */
static DEFINE_PER_CPU_ALIGNED(int, per_cpu_sw_state);

static int per_cpu_sw_state_rd(u32 cpu)
{
	sync_cache_r(SHIFT_PERCPU_PTR(&per_cpu_sw_state, per_cpu_offset(cpu)));
	return per_cpu(per_cpu_sw_state, cpu);
}

static void per_cpu_sw_state_wr(u32 cpu, int val)
{
	dmb();
	per_cpu(per_cpu_sw_state, cpu) = val;
	sync_cache_w(SHIFT_PERCPU_PTR(&per_cpu_sw_state, per_cpu_offset(cpu)));
}
#else
static inline void per_cpu_sw_state_wr(u32 cpu, int val) { }
#endif

static void __iomem *pwr_ctrl_get_base(u32 cpu)
{
	void __iomem *base = cpubiuctrl_block + cpu0_pwr_zone_ctrl_reg;
	base += (cpu_logical_map(cpu) * 4);
	return base;
}

static u32 pwr_ctrl_rd(u32 cpu)
{
	void __iomem *base = pwr_ctrl_get_base(cpu);
	return readl_relaxed(base);
}

static void pwr_ctrl_wr(u32 cpu, u32 val)
{
	void __iomem *base = pwr_ctrl_get_base(cpu);
	writel(val, base);
}

static void cpu_rst_cfg_set(u32 cpu, int set)
{
	u32 val;
	val = readl_relaxed(cpubiuctrl_block + cpu_rst_cfg_reg);
	if (set)
		val |= BIT(cpu_logical_map(cpu));
	else
		val &= ~BIT(cpu_logical_map(cpu));
	writel_relaxed(val, cpubiuctrl_block + cpu_rst_cfg_reg);
}

static void cpu_set_boot_addr(u32 cpu, unsigned long boot_addr)
{
	const int reg_ofs = cpu_logical_map(cpu) * 8;
	writel_relaxed(0, hif_cont_block + hif_cont_reg + reg_ofs);
	writel_relaxed(boot_addr, hif_cont_block + hif_cont_reg + 4 + reg_ofs);
}

static void brcmstb_cpu_boot(u32 cpu)
{
	/* Mark this CPU as "up" */
	per_cpu_sw_state_wr(cpu, 1);

	/*
	 * Set the reset vector to point to the secondary_startup
	 * routine
	 */
	cpu_set_boot_addr(cpu, virt_to_phys(brcmstb_secondary_startup));

	/* Unhalt the cpu */
	cpu_rst_cfg_set(cpu, 0);
}

static void brcmstb_cpu_power_on(u32 cpu)
{
	/*
	 * The secondary cores power was cut, so we must go through
	 * power-on initialization.
	 */
	u32 tmp;

	/* Request zone power up */
	pwr_ctrl_wr(cpu, ZONE_PWR_UP_REQ_MASK);

	/* Wait for the power up FSM to complete */
	do {
		tmp = pwr_ctrl_rd(cpu);
	} while (!(tmp & ZONE_PWR_ON_STATE_MASK));
}

static int brcmstb_cpu_get_power_state(u32 cpu)
{
	int tmp = pwr_ctrl_rd(cpu);
	return (tmp & ZONE_RESET_STATE_MASK) ? 0 : 1;
}

#ifdef CONFIG_HOTPLUG_CPU

static void brcmstb_cpu_die(u32 cpu)
{
	v7_exit_coherency_flush(all);

	per_cpu_sw_state_wr(cpu, 0);

	/* Sit and wait to die */
	wfi();

	/* We should never get here... */
	while (1)
		;
}

static int brcmstb_cpu_kill(u32 cpu)
{
	u32 tmp;

	while (per_cpu_sw_state_rd(cpu))
		;

	/* Program zone reset */
	pwr_ctrl_wr(cpu, ZONE_RESET_STATE_MASK | ZONE_BLK_RST_ASSERT_MASK |
			 ZONE_PWR_DN_REQ_MASK);

	/* Verify zone reset */
	tmp = pwr_ctrl_rd(cpu);
	if (!(tmp & ZONE_RESET_STATE_MASK))
		pr_err("%s: Zone reset bit for CPU %d not asserted!\n",
			__func__, cpu);

	/* Wait for power down */
	do {
		tmp = pwr_ctrl_rd(cpu);
	} while (!(tmp & ZONE_PWR_OFF_STATE_MASK));

	/* Flush pipeline before resetting CPU */
	mb();

	/* Assert reset on the CPU */
	cpu_rst_cfg_set(cpu, 1);

	return 1;
}

#endif /* CONFIG_HOTPLUG_CPU */

static int __init setup_hifcpubiuctrl_regs(struct device_node *np)
{
	int rc = 0;
	char *name;
	struct device_node *syscon_np = NULL;

	name = "syscon-cpu";

	syscon_np = of_parse_phandle(np, name, 0);
	if (!syscon_np) {
		pr_err("can't find phandle %s\n", name);
		rc = -EINVAL;
		goto cleanup;
	}

	cpubiuctrl_block = of_iomap(syscon_np, 0);
	if (!cpubiuctrl_block) {
		pr_err("iomap failed for cpubiuctrl_block\n");
		rc = -EINVAL;
		goto cleanup;
	}

	rc = of_property_read_u32_index(np, name, CPU0_PWR_ZONE_CTRL_REG,
					&cpu0_pwr_zone_ctrl_reg);
	if (rc) {
		pr_err("failed to read 1st entry from %s property (%d)\n", name,
			rc);
		rc = -EINVAL;
		goto cleanup;
	}

	rc = of_property_read_u32_index(np, name, CPU_RESET_CONFIG_REG,
					&cpu_rst_cfg_reg);
	if (rc) {
		pr_err("failed to read 2nd entry from %s property (%d)\n", name,
			rc);
		rc = -EINVAL;
		goto cleanup;
	}

cleanup:
	of_node_put(syscon_np);
	return rc;
}

static int __init setup_hifcont_regs(struct device_node *np)
{
	int rc = 0;
	char *name;
	struct device_node *syscon_np = NULL;

	name = "syscon-cont";

	syscon_np = of_parse_phandle(np, name, 0);
	if (!syscon_np) {
		pr_err("can't find phandle %s\n", name);
		rc = -EINVAL;
		goto cleanup;
	}

	hif_cont_block = of_iomap(syscon_np, 0);
	if (!hif_cont_block) {
		pr_err("iomap failed for hif_cont_block\n");
		rc = -EINVAL;
		goto cleanup;
	}

	/* Offset is at top of hif_cont_block */
	hif_cont_reg = 0;

cleanup:
	of_node_put(syscon_np);
	return rc;
}

static void __init brcmstb_cpu_ctrl_setup(unsigned int max_cpus)
{
	int rc;
	struct device_node *np;
	char *name;

	name = "brcm,brcmstb-smpboot";
	np = of_find_compatible_node(NULL, NULL, name);
	if (!np) {
		pr_err("can't find compatible node %s\n", name);
		return;
	}

	rc = setup_hifcpubiuctrl_regs(np);
	if (rc)
		return;

	rc = setup_hifcont_regs(np);
	if (rc)
		return;
}

static int brcmstb_boot_secondary(unsigned int cpu, struct task_struct *idle)
{
	/* Missing the brcm,brcmstb-smpboot DT node? */
	if (!cpubiuctrl_block || !hif_cont_block)
		return -ENODEV;

	/* Bring up power to the core if necessary */
	if (brcmstb_cpu_get_power_state(cpu) == 0)
		brcmstb_cpu_power_on(cpu);

	brcmstb_cpu_boot(cpu);

	return 0;
}

static struct smp_operations brcmstb_smp_ops __initdata = {
	.smp_prepare_cpus	= brcmstb_cpu_ctrl_setup,
	.smp_boot_secondary	= brcmstb_boot_secondary,
#ifdef CONFIG_HOTPLUG_CPU
	.cpu_kill		= brcmstb_cpu_kill,
	.cpu_die		= brcmstb_cpu_die,
#endif
};

CPU_METHOD_OF_DECLARE(brcmstb_smp, "brcm,brahma-b15", &brcmstb_smp_ops);
+2 -1
arch/arm/mach-berlin/Kconfig
 menuconfig ARCH_BERLIN
 	bool "Marvell Berlin SoCs" if ARCH_MULTI_V7
+	select ARCH_HAS_RESET_CONTROLLER
 	select ARCH_REQUIRE_GPIOLIB
 	select ARM_GIC
-	select GENERIC_IRQ_CHIP
 	select DW_APB_ICTL
 	select DW_APB_TIMER_OF
+	select GENERIC_IRQ_CHIP
 	select PINCTRL
 
 if ARCH_BERLIN
+11
arch/arm/mach-exynos/Kconfig
···
 	select PM_GENERIC_DOMAINS if PM_RUNTIME
 	select S5P_DEV_MFC
 	select SRAM
+	select MFD_SYSCON
 	help
 	  Support for SAMSUNG EXYNOS SoCs (EXYNOS4/5)
 
···
 	default y
 	depends on ARCH_EXYNOS4
 
+config SOC_EXYNOS4415
+	bool "SAMSUNG EXYNOS4415"
+	default y
+	depends on ARCH_EXYNOS4
+
 config SOC_EXYNOS5250
 	bool "SAMSUNG EXYNOS5250"
 	default y
···
 	help
 	  This is needed to provide CPU and cluster power management
 	  on Exynos5420 implementing big.LITTLE.
+
+config EXYNOS_CPU_SUSPEND
+	bool
+	select ARM_CPU_SUSPEND
+	default PM_SLEEP || ARM_EXYNOS_CPUIDLE
 
 endif
+3 -1
arch/arm/mach-exynos/Makefile
···
 
 obj-$(CONFIG_ARCH_EXYNOS)	+= exynos.o pmu.o exynos-smc.o firmware.o
 
-obj-$(CONFIG_PM_SLEEP)		+= pm.o sleep.o
+obj-$(CONFIG_EXYNOS_CPU_SUSPEND) += pm.o sleep.o
+obj-$(CONFIG_PM_SLEEP)		+= suspend.o
 obj-$(CONFIG_PM_GENERIC_DOMAINS) += pm_domains.o
 
 obj-$(CONFIG_SMP)		+= platsmp.o headsmp.o
 
 plus_sec := $(call as-instr,.arch_extension sec,+sec)
 AFLAGS_exynos-smc.o	:=-Wa,-march=armv7-a$(plus_sec)
+AFLAGS_sleep.o		:=-Wa,-march=armv7-a$(plus_sec)
 
 obj-$(CONFIG_EXYNOS5420_MCPM)	+= mcpm-exynos.o
 CFLAGS_mcpm-exynos.o		+= -march=armv7-a
+13 -18
arch/arm/mach-exynos/common.h
···
 #ifndef __ARCH_ARM_MACH_EXYNOS_COMMON_H
 #define __ARCH_ARM_MACH_EXYNOS_COMMON_H
 
-#include <linux/reboot.h>
 #include <linux/of.h>
 
 #define EXYNOS3250_SOC_ID	0xE3472000
···
 #define soc_is_exynos5()	(soc_is_exynos5250() || soc_is_exynos5410() || \
 				 soc_is_exynos5420() || soc_is_exynos5800())
 
+extern u32 cp15_save_diag;
+extern u32 cp15_save_power;
+
 extern void __iomem *sysram_ns_base_addr;
 extern void __iomem *sysram_base_addr;
 extern void __iomem *pmu_base_addr;
 void exynos_sysram_init(void);
+
+enum {
+	FW_DO_IDLE_SLEEP,
+	FW_DO_IDLE_AFTR,
+};
 
 void exynos_firmware_init(void);
 
···
 #endif
 
 extern void exynos_cpu_resume(void);
+extern void exynos_cpu_resume_ns(void);
 
 extern struct smp_operations exynos_smp_ops;
 
-/* PMU(Power Management Unit) support */
-
-#define PMU_TABLE_END	(-1U)
-
-enum sys_powerdown {
-	SYS_AFTR,
-	SYS_LPA,
-	SYS_SLEEP,
-	NUM_SYS_POWERDOWN,
-};
-
-struct exynos_pmu_conf {
-	unsigned int offset;
-	unsigned int val[NUM_SYS_POWERDOWN];
-};
-
-extern void exynos_sys_powerdown_conf(enum sys_powerdown mode);
 extern void exynos_cpu_power_down(int cpu);
 extern void exynos_cpu_power_up(int cpu);
 extern int exynos_cpu_power_state(int cpu);
 extern void exynos_cluster_power_down(int cluster);
 extern void exynos_cluster_power_up(int cluster);
 extern int exynos_cluster_power_state(int cluster);
+extern void exynos_cpu_save_register(void);
+extern void exynos_cpu_restore_register(void);
+extern void exynos_pm_central_suspend(void);
+extern int exynos_pm_central_resume(void);
 extern void exynos_enter_aftr(void);
 
 extern void s5p_init_cpu(void __iomem *cpuid_addr);
+24
arch/arm/mach-exynos/exynos-pmu.h
/*
 * Copyright (c) 2014 Samsung Electronics Co., Ltd.
 *		http://www.samsung.com
 *
 * Header for EXYNOS PMU Driver support
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 */

#ifndef __EXYNOS_PMU_H
#define __EXYNOS_PMU_H

enum sys_powerdown {
	SYS_AFTR,
	SYS_LPA,
	SYS_SLEEP,
	NUM_SYS_POWERDOWN,
};

extern void exynos_sys_powerdown_conf(enum sys_powerdown mode);

#endif /* __EXYNOS_PMU_H */
+6 -24
arch/arm/mach-exynos/exynos.c
···
 	},
 };
 
-static void exynos_restart(enum reboot_mode mode, const char *cmd)
-{
-	struct device_node *np;
-	u32 val = 0x1;
-	void __iomem *addr = pmu_base_addr + EXYNOS_SWRESET;
-
-	if (of_machine_is_compatible("samsung,exynos5440")) {
-		u32 status;
-		np = of_find_compatible_node(NULL, NULL, "samsung,exynos5440-clock");
-
-		addr = of_iomap(np, 0) + 0xbc;
-		status = __raw_readl(addr);
-
-		addr = of_iomap(np, 0) + 0xcc;
-		val = __raw_readl(addr);
-
-		val = (val & 0xffff0000) | (status & 0xffff);
-	}
-
-	__raw_writel(val, addr);
-}
-
 static struct platform_device exynos_cpuidle = {
 	.name		= "exynos_cpuidle",
 #ifdef CONFIG_ARM_EXYNOS_CPUIDLE
···
 	{ .compatible = "samsung,exynos4210-pmu" },
 	{ .compatible = "samsung,exynos4212-pmu" },
 	{ .compatible = "samsung,exynos4412-pmu" },
+	{ .compatible = "samsung,exynos4415-pmu" },
 	{ .compatible = "samsung,exynos5250-pmu" },
 	{ .compatible = "samsung,exynos5260-pmu" },
 	{ .compatible = "samsung,exynos5410-pmu" },
···
 	exynos_sysram_init();
 
 	if (of_machine_is_compatible("samsung,exynos4210") ||
-	    of_machine_is_compatible("samsung,exynos5250"))
+	    of_machine_is_compatible("samsung,exynos4212") ||
+	    (of_machine_is_compatible("samsung,exynos4412") &&
+	     of_machine_is_compatible("samsung,trats2")) ||
+	    of_machine_is_compatible("samsung,exynos5250"))
 		platform_device_register(&exynos_cpuidle);
 
 	platform_device_register_simple("exynos-cpufreq", -1, NULL, 0);
···
 	"samsung,exynos4210",
 	"samsung,exynos4212",
 	"samsung,exynos4412",
+	"samsung,exynos4415",
 	"samsung,exynos5",
 	"samsung,exynos5250",
 	"samsung,exynos5260",
···
 	.init_machine	= exynos_dt_machine_init,
 	.init_late	= exynos_init_late,
 	.dt_compat	= exynos_dt_compat,
-	.restart	= exynos_restart,
 	.reserve	= exynos_reserve,
 	.dt_fixup	= exynos_dt_fixup,
 MACHINE_END
+64 -3
arch/arm/mach-exynos/firmware.c
···
 #include <linux/of.h>
 #include <linux/of_address.h>
 
+#include <asm/cacheflush.h>
+#include <asm/cputype.h>
 #include <asm/firmware.h>
+#include <asm/suspend.h>
 
 #include <mach/map.h>
 
 #include "common.h"
 #include "smc.h"
 
-static int exynos_do_idle(void)
+#define EXYNOS_SLEEP_MAGIC	0x00000bad
+#define EXYNOS_AFTR_MAGIC	0xfcba0d10
+#define EXYNOS_BOOT_ADDR	0x8
+#define EXYNOS_BOOT_FLAG	0xc
+
+static void exynos_save_cp15(void)
 {
-	exynos_smc(SMC_CMD_SLEEP, 0, 0, 0);
+	/* Save Power control and Diagnostic registers */
+	asm ("mrc p15, 0, %0, c15, c0, 0\n"
+	     "mrc p15, 0, %1, c15, c0, 1\n"
+	     : "=r" (cp15_save_power), "=r" (cp15_save_diag)
+	     : : "cc");
+}
+
+static int exynos_do_idle(unsigned long mode)
+{
+	switch (mode) {
+	case FW_DO_IDLE_AFTR:
+		if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A9)
+			exynos_save_cp15();
+		__raw_writel(virt_to_phys(exynos_cpu_resume_ns),
+			     sysram_ns_base_addr + 0x24);
+		__raw_writel(EXYNOS_AFTR_MAGIC, sysram_ns_base_addr + 0x20);
+		exynos_smc(SMC_CMD_CPU0AFTR, 0, 0, 0);
+		break;
+	case FW_DO_IDLE_SLEEP:
+		exynos_smc(SMC_CMD_SLEEP, 0, 0, 0);
+	}
 	return 0;
 }
 
···
 	return 0;
 }
 
+static int exynos_cpu_suspend(unsigned long arg)
+{
+	flush_cache_all();
+	outer_flush_all();
+
+	exynos_smc(SMC_CMD_SLEEP, 0, 0, 0);
+
+	pr_info("Failed to suspend the system\n");
+	writel(0, sysram_ns_base_addr + EXYNOS_BOOT_FLAG);
+	return 1;
+}
+
+static int exynos_suspend(void)
+{
+	if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A9)
+		exynos_save_cp15();
+
+	writel(EXYNOS_SLEEP_MAGIC, sysram_ns_base_addr + EXYNOS_BOOT_FLAG);
+	writel(virt_to_phys(exynos_cpu_resume_ns),
+	       sysram_ns_base_addr + EXYNOS_BOOT_ADDR);
+
+	return cpu_suspend(0, exynos_cpu_suspend);
+}
+
+static int exynos_resume(void)
+{
+	writel(0, sysram_ns_base_addr + EXYNOS_BOOT_FLAG);
+
+	return 0;
+}
+
 static const struct firmware_ops exynos_firmware_ops = {
-	.do_idle		= exynos_do_idle,
+	.do_idle		= IS_ENABLED(CONFIG_EXYNOS_CPU_SUSPEND) ? exynos_do_idle : NULL,
 	.set_cpu_boot_addr	= exynos_set_cpu_boot_addr,
 	.cpu_boot		= exynos_cpu_boot,
+	.suspend		= IS_ENABLED(CONFIG_PM_SLEEP) ? exynos_suspend : NULL,
+	.resume			= IS_ENABLED(CONFIG_EXYNOS_CPU_SUSPEND) ? exynos_resume : NULL,
 };
 
 void __init exynos_firmware_init(void)
+22 -10
arch/arm/mach-exynos/mcpm-exynos.c
···
 #include <linux/delay.h>
 #include <linux/io.h>
 #include <linux/of_address.h>
+#include <linux/syscore_ops.h>
 
 #include <asm/cputype.h>
 #include <asm/cp15.h>
···
 #define EXYNOS5420_ENABLE_AUTOMATIC_CORE_DOWN	BIT(9)
 #define EXYNOS5420_USE_ARM_CORE_DOWN_STATE	BIT(29)
 #define EXYNOS5420_USE_L2_COMMON_UP_STATE	BIT(30)
+
+static void __iomem *ns_sram_base_addr;
 
 /*
  * The common v7_exit_coherency_flush API could not be used because of the
···
 	{},
 };
 
+static void exynos_mcpm_setup_entry_point(void)
+{
+	/*
+	 * U-Boot SPL is hardcoded to jump to the start of ns_sram_base_addr
+	 * as part of secondary_cpu_start(). Let's redirect it to the
+	 * mcpm_entry_point(). This is done during both secondary boot-up as
+	 * well as system resume.
+	 */
+	__raw_writel(0xe59f0000, ns_sram_base_addr);     /* ldr r0, [pc, #0] */
+	__raw_writel(0xe12fff10, ns_sram_base_addr + 4); /* bx r0 */
+	__raw_writel(virt_to_phys(mcpm_entry_point), ns_sram_base_addr + 8);
+}
+
+static struct syscore_ops exynos_mcpm_syscore_ops = {
+	.resume	= exynos_mcpm_setup_entry_point,
+};
+
 static int __init exynos_mcpm_init(void)
 {
 	struct device_node *node;
-	void __iomem *ns_sram_base_addr;
 	unsigned int value, i;
 	int ret;
 
···
 		pmu_raw_writel(value, EXYNOS_COMMON_OPTION(i));
 	}
 
-	/*
-	 * U-Boot SPL is hardcoded to jump to the start of ns_sram_base_addr
-	 * as part of secondary_cpu_start(). Let's redirect it to the
-	 * mcpm_entry_point().
-	 */
-	__raw_writel(0xe59f0000, ns_sram_base_addr);     /* ldr r0, [pc, #0] */
-	__raw_writel(0xe12fff10, ns_sram_base_addr + 4); /* bx r0 */
-	__raw_writel(virt_to_phys(mcpm_entry_point), ns_sram_base_addr + 8);
+	exynos_mcpm_setup_entry_point();
 
-	iounmap(ns_sram_base_addr);
+	register_syscore_ops(&exynos_mcpm_syscore_ops);
 
 	return ret;
 }
arch/arm/mach-exynos/platsmp.c  (+35)
···
  */
 void exynos_cpu_power_down(int cpu)
 {
+        if (cpu == 0 && (of_machine_is_compatible("samsung,exynos5420") ||
+                of_machine_is_compatible("samsung,exynos5800"))) {
+                /*
+                 * Bypass power down for CPU0 during suspend. Check for
+                 * the SYS_PWR_REG value to decide if we are suspending
+                 * the system.
+                 */
+                int val = pmu_raw_readl(EXYNOS5_ARM_CORE0_SYS_PWR_REG);
+
+                if (!(val & S5P_CORE_LOCAL_PWR_EN))
+                        return;
+        }
         pmu_raw_writel(0, EXYNOS_ARM_CORE_CONFIGURATION(cpu));
 }
···
 }
 
 /*
+ * Set wake up by local power mode and execute software reset for given core.
+ *
+ * Currently this is needed only when booting secondary CPU on Exynos3250.
+ */
+static void exynos_core_restart(u32 core_id)
+{
+        u32 val;
+
+        if (!of_machine_is_compatible("samsung,exynos3250"))
+                return;
+
+        val = pmu_raw_readl(EXYNOS_ARM_CORE_STATUS(core_id));
+        val |= S5P_CORE_WAKEUP_FROM_LOCAL_CFG;
+        pmu_raw_writel(val, EXYNOS_ARM_CORE_STATUS(core_id));
+
+        pr_info("CPU%u: Software reset\n", core_id);
+        pmu_raw_writel(EXYNOS_CORE_PO_RESET(core_id), EXYNOS_SWRESET);
+}
+
+/*
  * Write pen_release in a way that is guaranteed to be visible to all
  * observers, irrespective of whether they're taking part in coherency
  * or not. This is necessary for the hotplug code to work reliably.
···
                         return -ETIMEDOUT;
                 }
         }
+
+        exynos_core_restart(core_id);
+
         /*
          * Send the secondary CPU a soft interrupt, thereby causing
          * the boot monitor to read the system wide flags register,
arch/arm/mach-exynos/pm.c  (+40 -271)
···
 /*
- * Copyright (c) 2011-2012 Samsung Electronics Co., Ltd.
+ * Copyright (c) 2011-2014 Samsung Electronics Co., Ltd.
  *		http://www.samsung.com
  *
  * EXYNOS - Power Management support
···
 
 #include <linux/init.h>
 #include <linux/suspend.h>
-#include <linux/syscore_ops.h>
 #include <linux/cpu_pm.h>
 #include <linux/io.h>
-#include <linux/irqchip/arm-gic.h>
 #include <linux/err.h>
-#include <linux/clk.h>
 
-#include <asm/cacheflush.h>
-#include <asm/hardware/cache-l2x0.h>
+#include <asm/firmware.h>
 #include <asm/smp_scu.h>
 #include <asm/suspend.h>
 
 #include <plat/pm-common.h>
-#include <plat/regs-srom.h>
-
-#include <mach/map.h>
 
 #include "common.h"
+#include "exynos-pmu.h"
 #include "regs-pmu.h"
 #include "regs-sys.h"
 
-/**
- * struct exynos_wkup_irq - Exynos GIC to PMU IRQ mapping
- * @hwirq: Hardware IRQ signal of the GIC
- * @mask: Mask in PMU wake-up mask register
- */
-struct exynos_wkup_irq {
-        unsigned int hwirq;
-        u32 mask;
-};
-
-static struct sleep_save exynos5_sys_save[] = {
-        SAVE_ITEM(EXYNOS5_SYS_I2C_CFG),
-};
-
-static struct sleep_save exynos_core_save[] = {
-        /* SROM side */
-        SAVE_ITEM(S5P_SROM_BW),
-        SAVE_ITEM(S5P_SROM_BC0),
-        SAVE_ITEM(S5P_SROM_BC1),
-        SAVE_ITEM(S5P_SROM_BC2),
-        SAVE_ITEM(S5P_SROM_BC3),
-};
-
-/*
- * GIC wake-up support
- */
-
-static u32 exynos_irqwake_intmask = 0xffffffff;
-
-static const struct exynos_wkup_irq exynos4_wkup_irq[] = {
-        { 76, BIT(1) }, /* RTC alarm */
-        { 77, BIT(2) }, /* RTC tick */
-        { /* sentinel */ },
-};
-
-static const struct exynos_wkup_irq exynos5250_wkup_irq[] = {
-        { 75, BIT(1) }, /* RTC alarm */
-        { 76, BIT(2) }, /* RTC tick */
-        { /* sentinel */ },
-};
-
-static int exynos_irq_set_wake(struct irq_data *data, unsigned int state)
+static inline void __iomem *exynos_boot_vector_addr(void)
 {
-        const struct exynos_wkup_irq *wkup_irq;
-
-        if (soc_is_exynos5250())
-                wkup_irq = exynos5250_wkup_irq;
-        else
-                wkup_irq = exynos4_wkup_irq;
-
-        while (wkup_irq->mask) {
-                if (wkup_irq->hwirq == data->hwirq) {
-                        if (!state)
-                                exynos_irqwake_intmask |= wkup_irq->mask;
-                        else
-                                exynos_irqwake_intmask &= ~wkup_irq->mask;
-                        return 0;
-                }
-                ++wkup_irq;
-        }
-
-        return -ENOENT;
+        if (samsung_rev() == EXYNOS4210_REV_1_1)
+                return pmu_base_addr + S5P_INFORM7;
+        else if (samsung_rev() == EXYNOS4210_REV_1_0)
+                return sysram_base_addr + 0x24;
+        return pmu_base_addr + S5P_INFORM0;
 }
 
-#define EXYNOS_BOOT_VECTOR_ADDR (samsung_rev() == EXYNOS4210_REV_1_1 ? \
-                        pmu_base_addr + S5P_INFORM7 : \
-                        (samsung_rev() == EXYNOS4210_REV_1_0 ? \
-                        (sysram_base_addr + 0x24) : \
-                        pmu_base_addr + S5P_INFORM0))
-#define EXYNOS_BOOT_VECTOR_FLAG (samsung_rev() == EXYNOS4210_REV_1_1 ? \
-                        pmu_base_addr + S5P_INFORM6 : \
-                        (samsung_rev() == EXYNOS4210_REV_1_0 ? \
-                        (sysram_base_addr + 0x20) : \
-                        pmu_base_addr + S5P_INFORM1))
+static inline void __iomem *exynos_boot_vector_flag(void)
+{
+        if (samsung_rev() == EXYNOS4210_REV_1_1)
+                return pmu_base_addr + S5P_INFORM6;
+        else if (samsung_rev() == EXYNOS4210_REV_1_0)
+                return sysram_base_addr + 0x20;
+        return pmu_base_addr + S5P_INFORM1;
+}
 
 #define S5P_CHECK_AFTR  0xFCBA0D10
-#define S5P_CHECK_SLEEP 0x00000BAD
 
 /* For Cortex-A9 Diagnostic and Power control register */
 static unsigned int save_arm_register[2];
 
-static void exynos_cpu_save_register(void)
+void exynos_cpu_save_register(void)
 {
         unsigned long tmp;
 
···
         save_arm_register[1] = tmp;
 }
 
-static void exynos_cpu_restore_register(void)
+void exynos_cpu_restore_register(void)
 {
         unsigned long tmp;
 
···
                 : "cc");
 }
 
-static void exynos_pm_central_suspend(void)
+void exynos_pm_central_suspend(void)
 {
         unsigned long tmp;
 
···
         tmp = pmu_raw_readl(S5P_CENTRAL_SEQ_CONFIGURATION);
         tmp &= ~S5P_CENTRAL_LOWPWR_CFG;
         pmu_raw_writel(tmp, S5P_CENTRAL_SEQ_CONFIGURATION);
+
+        /* Setting SEQ_OPTION register */
+        pmu_raw_writel(S5P_USE_STANDBY_WFI0 | S5P_USE_STANDBY_WFE0,
+                       S5P_CENTRAL_SEQ_OPTION);
 }
 
-static int exynos_pm_central_resume(void)
+int exynos_pm_central_resume(void)
 {
         unsigned long tmp;
 
···
 
 static void exynos_cpu_set_boot_vector(long flags)
 {
-        __raw_writel(virt_to_phys(exynos_cpu_resume), EXYNOS_BOOT_VECTOR_ADDR);
-        __raw_writel(flags, EXYNOS_BOOT_VECTOR_FLAG);
+        __raw_writel(virt_to_phys(exynos_cpu_resume),
+                     exynos_boot_vector_addr());
+        __raw_writel(flags, exynos_boot_vector_flag());
 }
 
 static int exynos_aftr_finisher(unsigned long flags)
 {
+        int ret;
+
         exynos_set_wakeupmask(0x0000ff3e);
-        exynos_cpu_set_boot_vector(S5P_CHECK_AFTR);
         /* Set value of power down register for aftr mode */
         exynos_sys_powerdown_conf(SYS_AFTR);
-        cpu_do_idle();
+
+        ret = call_firmware_op(do_idle, FW_DO_IDLE_AFTR);
+        if (ret == -ENOSYS) {
+                if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A9)
+                        exynos_cpu_save_register();
+                exynos_cpu_set_boot_vector(S5P_CHECK_AFTR);
+                cpu_do_idle();
+        }
 
         return 1;
 }
···
         cpu_pm_enter();
 
         exynos_pm_central_suspend();
-        if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A9)
-                exynos_cpu_save_register();
 
         cpu_suspend(0, exynos_aftr_finisher);
 
         if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A9) {
                 scu_enable(S5P_VA_SCU);
-                exynos_cpu_restore_register();
+                if (call_firmware_op(resume) == -ENOSYS)
+                        exynos_cpu_restore_register();
         }
 
         exynos_pm_central_resume();
 
         cpu_pm_exit();
-}
-
-static int exynos_cpu_suspend(unsigned long arg)
-{
-#ifdef CONFIG_CACHE_L2X0
-        outer_flush_all();
-#endif
-
-        if (soc_is_exynos5250())
-                flush_cache_all();
-
-        /* issue the standby signal into the pm unit. */
-        cpu_do_idle();
-
-        pr_info("Failed to suspend the system\n");
-        return 1; /* Aborting suspend */
-}
-
-static void exynos_pm_prepare(void)
-{
-        unsigned int tmp;
-
-        /* Set wake-up mask registers */
-        pmu_raw_writel(exynos_get_eint_wake_mask(), S5P_EINT_WAKEUP_MASK);
-        pmu_raw_writel(exynos_irqwake_intmask & ~(1 << 31), S5P_WAKEUP_MASK);
-
-        s3c_pm_do_save(exynos_core_save, ARRAY_SIZE(exynos_core_save));
-
-        if (soc_is_exynos5250()) {
-                s3c_pm_do_save(exynos5_sys_save, ARRAY_SIZE(exynos5_sys_save));
-                /* Disable USE_RETENTION of JPEG_MEM_OPTION */
-                tmp = pmu_raw_readl(EXYNOS5_JPEG_MEM_OPTION);
-                tmp &= ~EXYNOS5_OPTION_USE_RETENTION;
-                pmu_raw_writel(tmp, EXYNOS5_JPEG_MEM_OPTION);
-        }
-
-        /* Set value of power down register for sleep mode */
-
-        exynos_sys_powerdown_conf(SYS_SLEEP);
-        pmu_raw_writel(S5P_CHECK_SLEEP, S5P_INFORM1);
-
-        /* ensure at least INFORM0 has the resume address */
-
-        pmu_raw_writel(virt_to_phys(exynos_cpu_resume), S5P_INFORM0);
-}
-
-static int exynos_pm_suspend(void)
-{
-        unsigned long tmp;
-
-        exynos_pm_central_suspend();
-
-        /* Setting SEQ_OPTION register */
-
-        tmp = (S5P_USE_STANDBY_WFI0 | S5P_USE_STANDBY_WFE0);
-        pmu_raw_writel(tmp, S5P_CENTRAL_SEQ_OPTION);
-
-        if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A9)
-                exynos_cpu_save_register();
-
-        return 0;
-}
-
-static void exynos_pm_resume(void)
-{
-        if (exynos_pm_central_resume())
-                goto early_wakeup;
-
-        if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A9)
-                exynos_cpu_restore_register();
-
-        /* For release retention */
-
-        pmu_raw_writel((1 << 28), S5P_PAD_RET_MAUDIO_OPTION);
-        pmu_raw_writel((1 << 28), S5P_PAD_RET_GPIO_OPTION);
-        pmu_raw_writel((1 << 28), S5P_PAD_RET_UART_OPTION);
-        pmu_raw_writel((1 << 28), S5P_PAD_RET_MMCA_OPTION);
-        pmu_raw_writel((1 << 28), S5P_PAD_RET_MMCB_OPTION);
-        pmu_raw_writel((1 << 28), S5P_PAD_RET_EBIA_OPTION);
-        pmu_raw_writel((1 << 28), S5P_PAD_RET_EBIB_OPTION);
-
-        if (soc_is_exynos5250())
-                s3c_pm_do_restore(exynos5_sys_save,
-                        ARRAY_SIZE(exynos5_sys_save));
-
-        s3c_pm_do_restore_core(exynos_core_save, ARRAY_SIZE(exynos_core_save));
-
-        if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A9)
-                scu_enable(S5P_VA_SCU);
-
-early_wakeup:
-
-        /* Clear SLEEP mode set in INFORM1 */
-        pmu_raw_writel(0x0, S5P_INFORM1);
-
-        return;
-}
-
-static struct syscore_ops exynos_pm_syscore_ops = {
-        .suspend        = exynos_pm_suspend,
-        .resume         = exynos_pm_resume,
-};
-
-/*
- * Suspend Ops
- */
-
-static int exynos_suspend_enter(suspend_state_t state)
-{
-        int ret;
-
-        s3c_pm_debug_init();
-
-        S3C_PMDBG("%s: suspending the system...\n", __func__);
-
-        S3C_PMDBG("%s: wakeup masks: %08x,%08x\n", __func__,
-                        exynos_irqwake_intmask, exynos_get_eint_wake_mask());
-
-        if (exynos_irqwake_intmask == -1U
-            && exynos_get_eint_wake_mask() == -1U) {
-                pr_err("%s: No wake-up sources!\n", __func__);
-                pr_err("%s: Aborting sleep\n", __func__);
-                return -EINVAL;
-        }
-
-        s3c_pm_save_uarts();
-        exynos_pm_prepare();
-        flush_cache_all();
-        s3c_pm_check_store();
-
-        ret = cpu_suspend(0, exynos_cpu_suspend);
-        if (ret)
-                return ret;
-
-        s3c_pm_restore_uarts();
-
-        S3C_PMDBG("%s: wakeup stat: %08x\n", __func__,
-                        pmu_raw_readl(S5P_WAKEUP_STAT));
-
-        s3c_pm_check_restore();
-
-        S3C_PMDBG("%s: resuming the system...\n", __func__);
-
-        return 0;
-}
-
-static int exynos_suspend_prepare(void)
-{
-        s3c_pm_check_prepare();
-
-        return 0;
-}
-
-static void exynos_suspend_finish(void)
-{
-        s3c_pm_check_cleanup();
-}
-
-static const struct platform_suspend_ops exynos_suspend_ops = {
-        .enter          = exynos_suspend_enter,
-        .prepare        = exynos_suspend_prepare,
-        .finish         = exynos_suspend_finish,
-        .valid          = suspend_valid_only_mem,
-};
-
-void __init exynos_pm_init(void)
-{
-        u32 tmp;
-
-        /* Platform-specific GIC callback */
-        gic_arch_extn.irq_set_wake = exynos_irq_set_wake;
-
-        /* All wakeup disable */
-        tmp = pmu_raw_readl(S5P_WAKEUP_MASK);
-        tmp |= ((0xFF << 8) | (0x1F << 1));
-        pmu_raw_writel(tmp, S5P_WAKEUP_MASK);
-
-        register_syscore_ops(&exynos_pm_syscore_ops);
-        suspend_set_ops(&exynos_suspend_ops);
 }
arch/arm/mach-exynos/pmu.c  (+623 -40)
···
 /*
- * Copyright (c) 2011-2012 Samsung Electronics Co., Ltd.
+ * Copyright (c) 2011-2014 Samsung Electronics Co., Ltd.
  *		http://www.samsung.com/
  *
  * EXYNOS - CPU PMU(Power Management Unit) support
···
  */
 
 #include <linux/io.h>
-#include <linux/kernel.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/platform_device.h>
+#include <linux/delay.h>
+#include <linux/notifier.h>
+#include <linux/reboot.h>
 
-#include "common.h"
+
+#include "exynos-pmu.h"
 #include "regs-pmu.h"
 
-static const struct exynos_pmu_conf *exynos_pmu_config;
+#define PMU_TABLE_END   (-1U)
+
+struct exynos_pmu_conf {
+        unsigned int offset;
+        u8 val[NUM_SYS_POWERDOWN];
+};
+
+struct exynos_pmu_data {
+        const struct exynos_pmu_conf *pmu_config;
+        const struct exynos_pmu_conf *pmu_config_extra;
+
+        void (*pmu_init)(void);
+        void (*powerdown_conf)(enum sys_powerdown);
+        void (*powerdown_conf_extra)(enum sys_powerdown);
+};
+
+struct exynos_pmu_context {
+        struct device *dev;
+        const struct exynos_pmu_data *pmu_data;
+};
+
+static void __iomem *pmu_base_addr;
+static struct exynos_pmu_context *pmu_context;
+
+static inline void pmu_raw_writel(u32 val, u32 offset)
+{
+        writel_relaxed(val, pmu_base_addr + offset);
+}
+
+static inline u32 pmu_raw_readl(u32 offset)
+{
+        return readl_relaxed(pmu_base_addr + offset);
+}
+
+static struct exynos_pmu_conf exynos3250_pmu_config[] = {
+        /* { .offset = offset, .val = { AFTR, W-AFTR, SLEEP } */
+        { EXYNOS3_ARM_CORE0_SYS_PWR_REG, { 0x0, 0x0, 0x2} },
+        { EXYNOS3_DIS_IRQ_ARM_CORE0_LOCAL_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS3_DIS_IRQ_ARM_CORE0_CENTRAL_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS3_ARM_CORE1_SYS_PWR_REG, { 0x0, 0x0, 0x2} },
+        { EXYNOS3_DIS_IRQ_ARM_CORE1_LOCAL_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS3_DIS_IRQ_ARM_CORE1_CENTRAL_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS3_ISP_ARM_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS3_DIS_IRQ_ISP_ARM_LOCAL_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS3_DIS_IRQ_ISP_ARM_CENTRAL_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS3_ARM_COMMON_SYS_PWR_REG, { 0x0, 0x0, 0x2} },
+        { EXYNOS3_ARM_L2_SYS_PWR_REG, { 0x0, 0x0, 0x3} },
+        { EXYNOS3_CMU_ACLKSTOP_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS3_CMU_SCLKSTOP_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS3_CMU_RESET_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS3_DRAM_FREQ_DOWN_SYS_PWR_REG, { 0x1, 0x1, 0x1} },
+        { EXYNOS3_DDRPHY_DLLOFF_SYS_PWR_REG, { 0x1, 0x1, 0x1} },
+        { EXYNOS3_LPDDR_PHY_DLL_LOCK_SYS_PWR_REG, { 0x1, 0x1, 0x1} },
+        { EXYNOS3_CMU_ACLKSTOP_COREBLK_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS3_CMU_SCLKSTOP_COREBLK_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS3_CMU_RESET_COREBLK_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS3_APLL_SYSCLK_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS3_MPLL_SYSCLK_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS3_BPLL_SYSCLK_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS3_VPLL_SYSCLK_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS3_EPLL_SYSCLK_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS3_UPLL_SYSCLK_SYS_PWR_REG, { 0x1, 0x1, 0x1} },
+        { EXYNOS3_EPLLUSER_SYSCLK_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS3_MPLLUSER_SYSCLK_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS3_BPLLUSER_SYSCLK_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS3_CMU_CLKSTOP_CAM_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS3_CMU_CLKSTOP_MFC_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS3_CMU_CLKSTOP_G3D_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS3_CMU_CLKSTOP_LCD0_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS3_CMU_CLKSTOP_ISP_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS3_CMU_CLKSTOP_MAUDIO_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS3_CMU_RESET_CAM_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS3_CMU_RESET_MFC_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS3_CMU_RESET_G3D_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS3_CMU_RESET_LCD0_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS3_CMU_RESET_ISP_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS3_CMU_RESET_MAUDIO_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS3_TOP_BUS_SYS_PWR_REG, { 0x3, 0x0, 0x0} },
+        { EXYNOS3_TOP_RETENTION_SYS_PWR_REG, { 0x1, 0x1, 0x1} },
+        { EXYNOS3_TOP_PWR_SYS_PWR_REG, { 0x3, 0x3, 0x3} },
+        { EXYNOS3_TOP_BUS_COREBLK_SYS_PWR_REG, { 0x3, 0x0, 0x0} },
+        { EXYNOS3_TOP_RETENTION_COREBLK_SYS_PWR_REG, { 0x1, 0x1, 0x1} },
+        { EXYNOS3_TOP_PWR_COREBLK_SYS_PWR_REG, { 0x3, 0x3, 0x3} },
+        { EXYNOS3_LOGIC_RESET_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS3_OSCCLK_GATE_SYS_PWR_REG, { 0x1, 0x1, 0x1} },
+        { EXYNOS3_LOGIC_RESET_COREBLK_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS3_OSCCLK_GATE_COREBLK_SYS_PWR_REG, { 0x1, 0x0, 0x1} },
+        { EXYNOS3_PAD_RETENTION_DRAM_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS3_PAD_RETENTION_MAUDIO_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS3_PAD_RETENTION_GPIO_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS3_PAD_RETENTION_UART_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS3_PAD_RETENTION_MMC0_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS3_PAD_RETENTION_MMC1_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS3_PAD_RETENTION_MMC2_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS3_PAD_RETENTION_SPI_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS3_PAD_RETENTION_EBIA_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS3_PAD_RETENTION_EBIB_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS3_PAD_RETENTION_JTAG_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS3_PAD_ISOLATION_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS3_PAD_ALV_SEL_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS3_XUSBXTI_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS3_XXTI_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS3_EXT_REGULATOR_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS3_EXT_REGULATOR_COREBLK_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS3_GPIO_MODE_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS3_GPIO_MODE_MAUDIO_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS3_TOP_ASB_RESET_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS3_TOP_ASB_ISOLATION_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS3_TOP_ASB_RESET_COREBLK_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS3_TOP_ASB_ISOLATION_COREBLK_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS3_CAM_SYS_PWR_REG, { 0x7, 0x0, 0x0} },
+        { EXYNOS3_MFC_SYS_PWR_REG, { 0x7, 0x0, 0x0} },
+        { EXYNOS3_G3D_SYS_PWR_REG, { 0x7, 0x0, 0x0} },
+        { EXYNOS3_LCD0_SYS_PWR_REG, { 0x7, 0x0, 0x0} },
+        { EXYNOS3_ISP_SYS_PWR_REG, { 0x7, 0x0, 0x0} },
+        { EXYNOS3_MAUDIO_SYS_PWR_REG, { 0x7, 0x0, 0x0} },
+        { EXYNOS3_CMU_SYSCLK_ISP_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { PMU_TABLE_END,},
+};
 
 static const struct exynos_pmu_conf exynos4210_pmu_config[] = {
         /* { .offset = offset, .val = { AFTR, LPA, SLEEP } */
···
         { EXYNOS5_INTRAM_MEM_SYS_PWR_REG, { 0x3, 0x0, 0x0} },
         { EXYNOS5_INTROM_MEM_SYS_PWR_REG, { 0x3, 0x0, 0x0} },
         { EXYNOS5_JPEG_MEM_SYS_PWR_REG, { 0x3, 0x0, 0x0} },
+        { EXYNOS5_JPEG_MEM_OPTION, { 0x10, 0x10, 0x0} },
         { EXYNOS5_HSI_MEM_SYS_PWR_REG, { 0x3, 0x0, 0x0} },
         { EXYNOS5_MCUIOP_MEM_SYS_PWR_REG, { 0x3, 0x0, 0x0} },
         { EXYNOS5_SATA_MEM_SYS_PWR_REG, { 0x3, 0x0, 0x0} },
···
         { PMU_TABLE_END,},
 };
 
+static struct exynos_pmu_conf exynos5420_pmu_config[] = {
+        /* { .offset = offset, .val = { AFTR, LPA, SLEEP } */
+        { EXYNOS5_ARM_CORE0_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5_DIS_IRQ_ARM_CORE0_LOCAL_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5_DIS_IRQ_ARM_CORE0_CENTRAL_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5_ARM_CORE1_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5_DIS_IRQ_ARM_CORE1_LOCAL_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5_DIS_IRQ_ARM_CORE1_CENTRAL_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_ARM_CORE2_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_DIS_IRQ_ARM_CORE2_LOCAL_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_DIS_IRQ_ARM_CORE2_CENTRAL_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_ARM_CORE3_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_DIS_IRQ_ARM_CORE3_LOCAL_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_DIS_IRQ_ARM_CORE3_CENTRAL_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_KFC_CORE0_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_DIS_IRQ_KFC_CORE0_LOCAL_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_DIS_IRQ_KFC_CORE0_CENTRAL_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_KFC_CORE1_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_DIS_IRQ_KFC_CORE1_LOCAL_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_DIS_IRQ_KFC_CORE1_CENTRAL_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_KFC_CORE2_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_DIS_IRQ_KFC_CORE2_LOCAL_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_DIS_IRQ_KFC_CORE2_CENTRAL_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_KFC_CORE3_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_DIS_IRQ_KFC_CORE3_LOCAL_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_DIS_IRQ_KFC_CORE3_CENTRAL_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5_ISP_ARM_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5_DIS_IRQ_ISP_ARM_LOCAL_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5_DIS_IRQ_ISP_ARM_CENTRAL_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5420_ARM_COMMON_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_KFC_COMMON_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5_ARM_L2_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_KFC_L2_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5_CMU_ACLKSTOP_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5_CMU_SCLKSTOP_SYS_PWR_REG, { 0x1, 0x0, 0x1} },
+        { EXYNOS5_CMU_RESET_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS5_CMU_ACLKSTOP_SYSMEM_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5_CMU_SCLKSTOP_SYSMEM_SYS_PWR_REG, { 0x1, 0x0, 0x1} },
+        { EXYNOS5_CMU_RESET_SYSMEM_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS5_DRAM_FREQ_DOWN_SYS_PWR_REG, { 0x1, 0x0, 0x1} },
+        { EXYNOS5_DDRPHY_DLLOFF_SYS_PWR_REG, { 0x1, 0x1, 0x1} },
+        { EXYNOS5_DDRPHY_DLLLOCK_SYS_PWR_REG, { 0x1, 0x0, 0x1} },
+        { EXYNOS5_APLL_SYSCLK_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5_MPLL_SYSCLK_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5_VPLL_SYSCLK_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5_EPLL_SYSCLK_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS5_BPLL_SYSCLK_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5_CPLL_SYSCLK_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5420_DPLL_SYSCLK_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5420_IPLL_SYSCLK_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5420_KPLL_SYSCLK_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5_MPLLUSER_SYSCLK_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5_BPLLUSER_SYSCLK_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5420_RPLL_SYSCLK_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5420_SPLL_SYSCLK_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5_TOP_BUS_SYS_PWR_REG, { 0x3, 0x0, 0x0} },
+        { EXYNOS5_TOP_RETENTION_SYS_PWR_REG, { 0x1, 0x1, 0x1} },
+        { EXYNOS5_TOP_PWR_SYS_PWR_REG, { 0x3, 0x3, 0x0} },
+        { EXYNOS5_TOP_BUS_SYSMEM_SYS_PWR_REG, { 0x3, 0x0, 0x0} },
+        { EXYNOS5_TOP_RETENTION_SYSMEM_SYS_PWR_REG, { 0x1, 0x0, 0x1} },
+        { EXYNOS5_TOP_PWR_SYSMEM_SYS_PWR_REG, { 0x3, 0x0, 0x0} },
+        { EXYNOS5_LOGIC_RESET_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS5_OSCCLK_GATE_SYS_PWR_REG, { 0x1, 0x0, 0x1} },
+        { EXYNOS5_LOGIC_RESET_SYSMEM_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5_OSCCLK_GATE_SYSMEM_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5420_INTRAM_MEM_SYS_PWR_REG, { 0x3, 0x0, 0x3} },
+        { EXYNOS5420_INTROM_MEM_SYS_PWR_REG, { 0x3, 0x0, 0x3} },
+        { EXYNOS5_PAD_RETENTION_DRAM_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5_PAD_RETENTION_MAU_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS5420_PAD_RETENTION_JTAG_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS5420_PAD_RETENTION_DRAM_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5420_PAD_RETENTION_UART_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5420_PAD_RETENTION_MMC0_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5420_PAD_RETENTION_MMC1_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5420_PAD_RETENTION_MMC2_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5420_PAD_RETENTION_HSI_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5420_PAD_RETENTION_EBIA_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5420_PAD_RETENTION_EBIB_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5420_PAD_RETENTION_SPI_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5420_PAD_RETENTION_DRAM_COREBLK_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5_PAD_ISOLATION_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS5_PAD_ISOLATION_SYSMEM_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5_PAD_ALV_SEL_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5_XUSBXTI_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS5_XXTI_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS5_EXT_REGULATOR_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS5_GPIO_MODE_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5_GPIO_MODE_SYSMEM_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS5_GPIO_MODE_MAU_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS5_TOP_ASB_RESET_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+        { EXYNOS5_TOP_ASB_ISOLATION_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+        { EXYNOS5_GSCL_SYS_PWR_REG, { 0x7, 0x0, 0x0} },
+        { EXYNOS5_ISP_SYS_PWR_REG, { 0x7, 0x0, 0x0} },
+        { EXYNOS5_MFC_SYS_PWR_REG, { 0x7, 0x0, 0x0} },
+        { EXYNOS5_G3D_SYS_PWR_REG, { 0x7, 0x0, 0x0} },
+        { EXYNOS5420_DISP1_SYS_PWR_REG, { 0x7, 0x0, 0x0} },
+        { EXYNOS5420_MAU_SYS_PWR_REG, { 0x7, 0x7, 0x0} },
+        { EXYNOS5420_G2D_SYS_PWR_REG, { 0x7, 0x0, 0x0} },
+        { EXYNOS5420_MSC_SYS_PWR_REG, { 0x7, 0x0, 0x0} },
+        { EXYNOS5420_FSYS_SYS_PWR_REG, { 0x7, 0x0, 0x0} },
+        { EXYNOS5420_FSYS2_SYS_PWR_REG, { 0x7, 0x0, 0x0} },
+        { EXYNOS5420_PSGEN_SYS_PWR_REG, { 0x7, 0x0, 0x0} },
+        { EXYNOS5420_PERIC_SYS_PWR_REG, { 0x7, 0x0, 0x0} },
+        { EXYNOS5420_WCORE_SYS_PWR_REG, { 0x7, 0x0, 0x0} },
+        { EXYNOS5_CMU_CLKSTOP_GSCL_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5_CMU_CLKSTOP_ISP_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5_CMU_CLKSTOP_MFC_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5_CMU_CLKSTOP_G3D_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_CMU_CLKSTOP_DISP1_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_CMU_CLKSTOP_MAU_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_CMU_CLKSTOP_G2D_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_CMU_CLKSTOP_MSC_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_CMU_CLKSTOP_FSYS_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_CMU_CLKSTOP_PSGEN_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_CMU_CLKSTOP_PERIC_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_CMU_CLKSTOP_WCORE_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5_CMU_SYSCLK_GSCL_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5_CMU_SYSCLK_ISP_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5_CMU_SYSCLK_MFC_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5_CMU_SYSCLK_G3D_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_CMU_SYSCLK_DISP1_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_CMU_SYSCLK_MAU_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_CMU_SYSCLK_G2D_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_CMU_SYSCLK_MSC_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_CMU_SYSCLK_FSYS_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_CMU_SYSCLK_FSYS2_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_CMU_SYSCLK_PSGEN_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_CMU_SYSCLK_PERIC_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_CMU_SYSCLK_WCORE_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_CMU_RESET_FSYS2_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_CMU_RESET_PSGEN_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_CMU_RESET_PERIC_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_CMU_RESET_WCORE_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5_CMU_RESET_GSCL_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5_CMU_RESET_ISP_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5_CMU_RESET_MFC_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5_CMU_RESET_G3D_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_CMU_RESET_DISP1_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_CMU_RESET_MAU_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_CMU_RESET_G2D_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_CMU_RESET_MSC_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { EXYNOS5420_CMU_RESET_FSYS_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+        { PMU_TABLE_END,},
+};
+
+static unsigned int const exynos3250_list_feed[] = {
+        EXYNOS3_ARM_CORE_OPTION(0),
+        EXYNOS3_ARM_CORE_OPTION(1),
+        EXYNOS3_ARM_CORE_OPTION(2),
+        EXYNOS3_ARM_CORE_OPTION(3),
+        EXYNOS3_ARM_COMMON_OPTION,
+        EXYNOS3_TOP_PWR_OPTION,
+        EXYNOS3_CORE_TOP_PWR_OPTION,
+        S5P_CAM_OPTION,
+        S5P_MFC_OPTION,
+        S5P_G3D_OPTION,
+        S5P_LCD0_OPTION,
+        S5P_ISP_OPTION,
+};
+
+static void exynos3250_powerdown_conf_extra(enum sys_powerdown mode)
+{
+        unsigned int i;
+        unsigned int tmp;
+
+        /* Enable only SC_FEEDBACK */
+        for (i = 0; i < ARRAY_SIZE(exynos3250_list_feed); i++) {
+                tmp = pmu_raw_readl(exynos3250_list_feed[i]);
+                tmp &= ~(EXYNOS3_OPTION_USE_SC_COUNTER);
+                tmp |= EXYNOS3_OPTION_USE_SC_FEEDBACK;
+                pmu_raw_writel(tmp, exynos3250_list_feed[i]);
+        }
+
+        if (mode != SYS_SLEEP)
+                return;
+
+        pmu_raw_writel(XUSBXTI_DURATION, EXYNOS3_XUSBXTI_DURATION);
+        pmu_raw_writel(XXTI_DURATION, EXYNOS3_XXTI_DURATION);
+        pmu_raw_writel(EXT_REGULATOR_DURATION, EXYNOS3_EXT_REGULATOR_DURATION);
+        pmu_raw_writel(EXT_REGULATOR_COREBLK_DURATION,
+                       EXYNOS3_EXT_REGULATOR_COREBLK_DURATION);
+}
+
 static unsigned int const exynos5_list_both_cnt_feed[] = {
         EXYNOS5_ARM_CORE0_OPTION,
         EXYNOS5_ARM_CORE1_OPTION,
···
         EXYNOS5_ISP_ARM_OPTION,
 };
 
-static void exynos5_init_pmu(void)
+static unsigned int const exynos5420_list_disable_pmu_reg[] = {
+        EXYNOS5_CMU_CLKSTOP_GSCL_SYS_PWR_REG,
+        EXYNOS5_CMU_CLKSTOP_ISP_SYS_PWR_REG,
+        EXYNOS5_CMU_CLKSTOP_G3D_SYS_PWR_REG,
+        EXYNOS5420_CMU_CLKSTOP_DISP1_SYS_PWR_REG,
+        EXYNOS5420_CMU_CLKSTOP_MAU_SYS_PWR_REG,
+        EXYNOS5420_CMU_CLKSTOP_G2D_SYS_PWR_REG,
+        EXYNOS5420_CMU_CLKSTOP_MSC_SYS_PWR_REG,
+        EXYNOS5420_CMU_CLKSTOP_FSYS_SYS_PWR_REG,
+        EXYNOS5420_CMU_CLKSTOP_PSGEN_SYS_PWR_REG,
+        EXYNOS5420_CMU_CLKSTOP_PERIC_SYS_PWR_REG,
+        EXYNOS5420_CMU_CLKSTOP_WCORE_SYS_PWR_REG,
+        EXYNOS5_CMU_SYSCLK_GSCL_SYS_PWR_REG,
+        EXYNOS5_CMU_SYSCLK_ISP_SYS_PWR_REG,
+        EXYNOS5_CMU_SYSCLK_G3D_SYS_PWR_REG,
+        EXYNOS5420_CMU_SYSCLK_DISP1_SYS_PWR_REG,
+        EXYNOS5420_CMU_SYSCLK_MAU_SYS_PWR_REG,
+        EXYNOS5420_CMU_SYSCLK_G2D_SYS_PWR_REG,
+        EXYNOS5420_CMU_SYSCLK_MSC_SYS_PWR_REG,
+        EXYNOS5420_CMU_SYSCLK_FSYS_SYS_PWR_REG,
+        EXYNOS5420_CMU_SYSCLK_FSYS2_SYS_PWR_REG,
+        EXYNOS5420_CMU_SYSCLK_PSGEN_SYS_PWR_REG,
+        EXYNOS5420_CMU_SYSCLK_PERIC_SYS_PWR_REG,
+        EXYNOS5420_CMU_SYSCLK_WCORE_SYS_PWR_REG,
+        EXYNOS5420_CMU_RESET_FSYS2_SYS_PWR_REG,
+        EXYNOS5420_CMU_RESET_PSGEN_SYS_PWR_REG,
+        EXYNOS5420_CMU_RESET_PERIC_SYS_PWR_REG,
+        EXYNOS5420_CMU_RESET_WCORE_SYS_PWR_REG,
+        EXYNOS5_CMU_RESET_GSCL_SYS_PWR_REG,
+        EXYNOS5_CMU_RESET_ISP_SYS_PWR_REG,
+        EXYNOS5_CMU_RESET_G3D_SYS_PWR_REG,
+        EXYNOS5420_CMU_RESET_DISP1_SYS_PWR_REG,
+        EXYNOS5420_CMU_RESET_MAU_SYS_PWR_REG,
+        EXYNOS5420_CMU_RESET_G2D_SYS_PWR_REG,
+        EXYNOS5420_CMU_RESET_MSC_SYS_PWR_REG,
+        EXYNOS5420_CMU_RESET_FSYS_SYS_PWR_REG,
+};
+
+static void exynos5_power_off(void)
+{
+        unsigned int tmp;
+
+        pr_info("Power down.\n");
+        tmp = pmu_raw_readl(EXYNOS_PS_HOLD_CONTROL);
+        tmp ^= (1 << 8);
+        pmu_raw_writel(tmp, EXYNOS_PS_HOLD_CONTROL);
+
+        /* Wait a little so we don't give a false warning below */
+        mdelay(100);
+
+        pr_err("Power down failed, please power off system manually.\n");
+        while (1)
+                ;
+}
+
+void exynos5420_powerdown_conf(enum sys_powerdown mode)
+{
+        u32 this_cluster;
+
+        this_cluster = MPIDR_AFFINITY_LEVEL(read_cpuid_mpidr(), 1);
+
+        /*
+         * set the cluster id to IROM register to ensure that we wake
+         * up with the current cluster.
+         */
+        pmu_raw_writel(this_cluster, EXYNOS_IROM_DATA2);
+}
+
+
+static void exynos5_powerdown_conf(enum sys_powerdown mode)
 {
         unsigned int i;
         unsigned int tmp;
···
         /*
          * Enable both SC_FEEDBACK and SC_COUNTER
          */
-        for (i = 0 ; i < ARRAY_SIZE(exynos5_list_both_cnt_feed) ; i++) {
+        for (i = 0; i < ARRAY_SIZE(exynos5_list_both_cnt_feed); i++) {
                 tmp = pmu_raw_readl(exynos5_list_both_cnt_feed[i]);
                 tmp |= (EXYNOS5_USE_SC_FEEDBACK |
                         EXYNOS5_USE_SC_COUNTER);
···
         /*
          * Disable WFI/WFE on XXX_OPTION
          */
-        for (i = 0 ; i < ARRAY_SIZE(exynos5_list_disable_wfi_wfe) ; i++) {
+        for (i = 0; i < ARRAY_SIZE(exynos5_list_disable_wfi_wfe); i++) {
                 tmp = pmu_raw_readl(exynos5_list_disable_wfi_wfe[i]);
                 tmp &= ~(EXYNOS5_OPTION_USE_STANDBYWFE |
                          EXYNOS5_OPTION_USE_STANDBYWFI);
···
 {
         unsigned int i;
 
-        if (soc_is_exynos5250())
-                exynos5_init_pmu();
+        const struct exynos_pmu_data *pmu_data = pmu_context->pmu_data;
 
-        for (i = 0; (exynos_pmu_config[i].offset != PMU_TABLE_END) ; i++
756 - pmu_raw_writel(exynos_pmu_config[i].val[mode], 757 - exynos_pmu_config[i].offset); 377 + if (pmu_data->powerdown_conf) 378 + pmu_data->powerdown_conf(mode); 758 379 759 - if (soc_is_exynos4412()) { 760 - for (i = 0; exynos4412_pmu_config[i].offset != PMU_TABLE_END ; i++) 761 - pmu_raw_writel(exynos4412_pmu_config[i].val[mode], 762 - exynos4412_pmu_config[i].offset); 380 + if (pmu_data->pmu_config) { 381 + for (i = 0; (pmu_data->pmu_config[i].offset != PMU_TABLE_END); i++) 382 + pmu_raw_writel(pmu_data->pmu_config[i].val[mode], 383 + pmu_data->pmu_config[i].offset); 384 + } 385 + 386 + if (pmu_data->powerdown_conf_extra) 387 + pmu_data->powerdown_conf_extra(mode); 388 + 389 + if (pmu_data->pmu_config_extra) { 390 + for (i = 0; pmu_data->pmu_config_extra[i].offset != PMU_TABLE_END; i++) 391 + pmu_raw_writel(pmu_data->pmu_config_extra[i].val[mode], 392 + pmu_data->pmu_config_extra[i].offset); 763 393 } 764 394 } 765 395 766 - static int __init exynos_pmu_init(void) 396 + static void exynos3250_pmu_init(void) 767 397 { 768 398 unsigned int value; 769 399 770 - exynos_pmu_config = exynos4210_pmu_config; 400 + /* 401 + * To prevent the L2 memory system from issuing new bus requests: 402 + * when a core is powered down, the L2 power-down bit should be set to '1'. 403 + */ 404 + value = pmu_raw_readl(EXYNOS3_ARM_COMMON_OPTION); 405 + value |= EXYNOS3_OPTION_SKIP_DEACTIVATE_ACEACP_IN_PWDN; 406 + pmu_raw_writel(value, EXYNOS3_ARM_COMMON_OPTION); 771 407 772 - if (soc_is_exynos4210()) { 773 - exynos_pmu_config = exynos4210_pmu_config; 774 - pr_info("EXYNOS4210 PMU Initialize\n"); 775 - } else if (soc_is_exynos4212() || soc_is_exynos4412()) { 776 - exynos_pmu_config = exynos4x12_pmu_config; 777 - pr_info("EXYNOS4x12 PMU Initialize\n"); 778 - } else if (soc_is_exynos5250()) { 779 - /* 780 - * When SYS_WDTRESET is set, watchdog timer reset request 781 - * is ignored by power management unit. 
782 - */ 783 - value = pmu_raw_readl(EXYNOS5_AUTO_WDTRESET_DISABLE); 784 - value &= ~EXYNOS5_SYS_WDTRESET; 785 - pmu_raw_writel(value, EXYNOS5_AUTO_WDTRESET_DISABLE); 408 + /* Enable USE_STANDBY_WFI for all CORE */ 409 + pmu_raw_writel(S5P_USE_STANDBY_WFI_ALL, S5P_CENTRAL_SEQ_OPTION); 786 410 787 - value = pmu_raw_readl(EXYNOS5_MASK_WDTRESET_REQUEST); 788 - value &= ~EXYNOS5_SYS_WDTRESET; 789 - pmu_raw_writel(value, EXYNOS5_MASK_WDTRESET_REQUEST); 411 + /* 412 + * Set PSHOLD port for output high 413 + */ 414 + value = pmu_raw_readl(S5P_PS_HOLD_CONTROL); 415 + value |= S5P_PS_HOLD_OUTPUT_HIGH; 416 + pmu_raw_writel(value, S5P_PS_HOLD_CONTROL); 790 417 791 - exynos_pmu_config = exynos5250_pmu_config; 792 - pr_info("EXYNOS5250 PMU Initialize\n"); 793 - } else { 794 - pr_info("EXYNOS: PMU not supported\n"); 418 + /* 419 + * Enable signal for PSHOLD port 420 + */ 421 + value = pmu_raw_readl(S5P_PS_HOLD_CONTROL); 422 + value |= S5P_PS_HOLD_EN; 423 + pmu_raw_writel(value, S5P_PS_HOLD_CONTROL); 424 + } 425 + 426 + static void exynos5250_pmu_init(void) 427 + { 428 + unsigned int value; 429 + /* 430 + * When SYS_WDTRESET is set, watchdog timer reset request 431 + * is ignored by power management unit. 432 + */ 433 + value = pmu_raw_readl(EXYNOS5_AUTO_WDTRESET_DISABLE); 434 + value &= ~EXYNOS5_SYS_WDTRESET; 435 + pmu_raw_writel(value, EXYNOS5_AUTO_WDTRESET_DISABLE); 436 + 437 + value = pmu_raw_readl(EXYNOS5_MASK_WDTRESET_REQUEST); 438 + value &= ~EXYNOS5_SYS_WDTRESET; 439 + pmu_raw_writel(value, EXYNOS5_MASK_WDTRESET_REQUEST); 440 + } 441 + 442 + static void exynos5420_pmu_init(void) 443 + { 444 + unsigned int value; 445 + int i; 446 + 447 + /* 448 + * Set the CMU_RESET, CMU_SYSCLK and CMU_CLKSTOP registers 449 + * for local power blocks to Low initially as per Table 8-4: 450 + * "System-Level Power-Down Configuration Registers". 
451 + */ 452 + for (i = 0; i < ARRAY_SIZE(exynos5420_list_disable_pmu_reg); i++) 453 + pmu_raw_writel(0, exynos5420_list_disable_pmu_reg[i]); 454 + 455 + /* Enable USE_STANDBY_WFI for all CORE */ 456 + pmu_raw_writel(EXYNOS5420_USE_STANDBY_WFI_ALL, S5P_CENTRAL_SEQ_OPTION); 457 + 458 + value = pmu_raw_readl(EXYNOS_L2_OPTION(0)); 459 + value &= ~EXYNOS5_USE_RETENTION; 460 + pmu_raw_writel(value, EXYNOS_L2_OPTION(0)); 461 + 462 + value = pmu_raw_readl(EXYNOS_L2_OPTION(1)); 463 + value &= ~EXYNOS5_USE_RETENTION; 464 + pmu_raw_writel(value, EXYNOS_L2_OPTION(1)); 465 + 466 + /* 467 + * If L2_COMMON is turned off, clocks related to ATB async 468 + * bridge are gated. Thus, when ISP power is gated, LPI 469 + * may get stuck. 470 + */ 471 + value = pmu_raw_readl(EXYNOS5420_LPI_MASK); 472 + value |= EXYNOS5420_ATB_ISP_ARM; 473 + pmu_raw_writel(value, EXYNOS5420_LPI_MASK); 474 + 475 + value = pmu_raw_readl(EXYNOS5420_LPI_MASK1); 476 + value |= EXYNOS5420_ATB_KFC; 477 + pmu_raw_writel(value, EXYNOS5420_LPI_MASK1); 478 + 479 + /* Prevent issue of new bus request from L2 memory */ 480 + value = pmu_raw_readl(EXYNOS5420_ARM_COMMON_OPTION); 481 + value |= EXYNOS5_SKIP_DEACTIVATE_ACEACP_IN_PWDN; 482 + pmu_raw_writel(value, EXYNOS5420_ARM_COMMON_OPTION); 483 + 484 + value = pmu_raw_readl(EXYNOS5420_KFC_COMMON_OPTION); 485 + value |= EXYNOS5_SKIP_DEACTIVATE_ACEACP_IN_PWDN; 486 + pmu_raw_writel(value, EXYNOS5420_KFC_COMMON_OPTION); 487 + 488 + /* This setting is to reduce suspend/resume time */ 489 + pmu_raw_writel(DUR_WAIT_RESET, EXYNOS5420_LOGIC_RESET_DURATION3); 490 + 491 + /* Serialized CPU wakeup of Eagle */ 492 + pmu_raw_writel(SPREAD_ENABLE, EXYNOS5420_ARM_INTR_SPREAD_ENABLE); 493 + 494 + pmu_raw_writel(SPREAD_USE_STANDWFI, 495 + EXYNOS5420_ARM_INTR_SPREAD_USE_STANDBYWFI); 496 + 497 + pmu_raw_writel(0x1, EXYNOS5420_UP_SCHEDULER); 498 + 499 + pm_power_off = exynos5_power_off; 500 + pr_info("EXYNOS5420 PMU initialized\n"); 501 + } 502 + 503 + static int pmu_restart_notify(struct 
notifier_block *this, 504 + unsigned long code, void *unused) 505 + { 506 + pmu_raw_writel(0x1, EXYNOS_SWRESET); 507 + 508 + return NOTIFY_DONE; 509 + } 510 + 511 + static const struct exynos_pmu_data exynos3250_pmu_data = { 512 + .pmu_config = exynos3250_pmu_config, 513 + .pmu_init = exynos3250_pmu_init, 514 + .powerdown_conf_extra = exynos3250_powerdown_conf_extra, 515 + }; 516 + 517 + static const struct exynos_pmu_data exynos4210_pmu_data = { 518 + .pmu_config = exynos4210_pmu_config, 519 + }; 520 + 521 + static const struct exynos_pmu_data exynos4212_pmu_data = { 522 + .pmu_config = exynos4x12_pmu_config, 523 + }; 524 + 525 + static const struct exynos_pmu_data exynos4412_pmu_data = { 526 + .pmu_config = exynos4x12_pmu_config, 527 + .pmu_config_extra = exynos4412_pmu_config, 528 + }; 529 + 530 + static const struct exynos_pmu_data exynos5250_pmu_data = { 531 + .pmu_config = exynos5250_pmu_config, 532 + .pmu_init = exynos5250_pmu_init, 533 + .powerdown_conf = exynos5_powerdown_conf, 534 + }; 535 + 536 + static struct exynos_pmu_data exynos5420_pmu_data = { 537 + .pmu_config = exynos5420_pmu_config, 538 + .pmu_init = exynos5420_pmu_init, 539 + .powerdown_conf = exynos5420_powerdown_conf, 540 + }; 541 + 542 + /* 543 + * PMU platform driver and devicetree bindings. 
544 + */ 545 + static const struct of_device_id exynos_pmu_of_device_ids[] = { 546 + { 547 + .compatible = "samsung,exynos3250-pmu", 548 + .data = &exynos3250_pmu_data, 549 + }, { 550 + .compatible = "samsung,exynos4210-pmu", 551 + .data = &exynos4210_pmu_data, 552 + }, { 553 + .compatible = "samsung,exynos4212-pmu", 554 + .data = &exynos4212_pmu_data, 555 + }, { 556 + .compatible = "samsung,exynos4412-pmu", 557 + .data = &exynos4412_pmu_data, 558 + }, { 559 + .compatible = "samsung,exynos5250-pmu", 560 + .data = &exynos5250_pmu_data, 561 + }, { 562 + .compatible = "samsung,exynos5420-pmu", 563 + .data = &exynos5420_pmu_data, 564 + }, 565 + { /*sentinel*/ }, 566 + }; 567 + 568 + /* 569 + * Exynos PMU restart notifier, handles restart functionality 570 + */ 571 + static struct notifier_block pmu_restart_handler = { 572 + .notifier_call = pmu_restart_notify, 573 + .priority = 128, 574 + }; 575 + 576 + static int exynos_pmu_probe(struct platform_device *pdev) 577 + { 578 + const struct of_device_id *match; 579 + struct device *dev = &pdev->dev; 580 + struct resource *res; 581 + int ret; 582 + 583 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 584 + pmu_base_addr = devm_ioremap_resource(dev, res); 585 + if (IS_ERR(pmu_base_addr)) 586 + return PTR_ERR(pmu_base_addr); 587 + 588 + pmu_context = devm_kzalloc(&pdev->dev, 589 + sizeof(struct exynos_pmu_context), 590 + GFP_KERNEL); 591 + if (!pmu_context) { 592 + dev_err(dev, "Cannot allocate memory.\n"); 593 + return -ENOMEM; 795 594 } 595 + pmu_context->dev = dev; 796 596 597 + match = of_match_node(exynos_pmu_of_device_ids, dev->of_node); 598 + 599 + pmu_context->pmu_data = match->data; 600 + 601 + if (pmu_context->pmu_data->pmu_init) 602 + pmu_context->pmu_data->pmu_init(); 603 + 604 + platform_set_drvdata(pdev, pmu_context); 605 + 606 + ret = register_restart_handler(&pmu_restart_handler); 607 + if (ret) 608 + dev_warn(dev, "can't register restart handler err=%d\n", ret); 609 + 610 + dev_dbg(dev, "Exynos PMU 
Driver probe done\n"); 797 611 return 0; 798 612 } 799 - arch_initcall(exynos_pmu_init); 613 + 614 + static struct platform_driver exynos_pmu_driver = { 615 + .driver = { 616 + .name = "exynos-pmu", 617 + .owner = THIS_MODULE, 618 + .of_match_table = exynos_pmu_of_device_ids, 619 + }, 620 + .probe = exynos_pmu_probe, 621 + }; 622 + 623 + static int __init exynos_pmu_init(void) 624 + { 625 + return platform_driver_register(&exynos_pmu_driver); 626 + 627 + } 628 + postcore_initcall(exynos_pmu_init);
arch/arm/mach-exynos/regs-pmu.h (+358 lines)
··· 19 19 #define S5P_CENTRAL_SEQ_OPTION 0x0208 20 20 21 21 #define S5P_USE_STANDBY_WFI0 (1 << 16) 22 + #define S5P_USE_STANDBY_WFI1 (1 << 17) 23 + #define S5P_USE_STANDBY_WFI2 (1 << 19) 24 + #define S5P_USE_STANDBY_WFI3 (1 << 20) 22 25 #define S5P_USE_STANDBY_WFE0 (1 << 24) 26 + #define S5P_USE_STANDBY_WFE1 (1 << 25) 27 + #define S5P_USE_STANDBY_WFE2 (1 << 27) 28 + #define S5P_USE_STANDBY_WFE3 (1 << 28) 29 + 30 + #define S5P_USE_STANDBY_WFI_ALL \ 31 + (S5P_USE_STANDBY_WFI0 | S5P_USE_STANDBY_WFI1 | \ 32 + S5P_USE_STANDBY_WFI2 | S5P_USE_STANDBY_WFI3 | \ 33 + S5P_USE_STANDBY_WFE0 | S5P_USE_STANDBY_WFE1 | \ 34 + S5P_USE_STANDBY_WFE2 | S5P_USE_STANDBY_WFE3) 35 + 23 36 #define S5P_USE_DELAYED_RESET_ASSERTION BIT(12) 24 37 38 + #define EXYNOS_CORE_PO_RESET(n) ((1 << 4) << n) 39 + #define EXYNOS_WAKEUP_FROM_LOWPWR (1 << 28) 25 40 #define EXYNOS_SWRESET 0x0400 26 41 #define EXYNOS5440_SWRESET 0x00C4 27 42 ··· 51 36 #define S5P_INFORM7 0x081C 52 37 #define S5P_PMU_SPARE3 0x090C 53 38 39 + #define EXYNOS_IROM_DATA2 0x0988 54 40 #define S5P_ARM_CORE0_LOWPWR 0x1000 55 41 #define S5P_DIS_IRQ_CORE0 0x1004 56 42 #define S5P_DIS_IRQ_CENTRAL0 0x1008 ··· 134 118 #define EXYNOS_COMMON_OPTION(_nr) \ 135 119 (EXYNOS_COMMON_CONFIGURATION(_nr) + 0x8) 136 120 121 + #define EXYNOS_CORE_LOCAL_PWR_EN 0x3 122 + 123 + #define EXYNOS_ARM_COMMON_STATUS 0x2504 124 + #define EXYNOS_COMMON_OPTION(_nr) \ 125 + (EXYNOS_COMMON_CONFIGURATION(_nr) + 0x8) 126 + 127 + #define EXYNOS_ARM_L2_CONFIGURATION 0x2600 128 + #define EXYNOS_L2_CONFIGURATION(_nr) \ 129 + (EXYNOS_ARM_L2_CONFIGURATION + ((_nr) * 0x80)) 130 + #define EXYNOS_L2_STATUS(_nr) \ 131 + (EXYNOS_L2_CONFIGURATION(_nr) + 0x4) 132 + #define EXYNOS_L2_OPTION(_nr) \ 133 + (EXYNOS_L2_CONFIGURATION(_nr) + 0x8) 134 + #define EXYNOS_L2_COMMON_PWR_EN 0x3 135 + 136 + #define EXYNOS_ARM_CORE_X_STATUS_OFFSET 0x4 137 + 138 + #define EXYNOS5_APLL_SYSCLK_CONFIGURATION 0x2A00 139 + #define EXYNOS5_APLL_SYSCLK_STATUS 0x2A04 140 + 141 + #define 
EXYNOS5_ARM_L2_OPTION 0x2608 142 + #define EXYNOS5_USE_RETENTION BIT(4) 143 + 144 + #define EXYNOS5_L2RSTDISABLE_VALUE BIT(3) 145 + 137 146 #define S5P_PAD_RET_MAUDIO_OPTION 0x3028 138 147 #define S5P_PAD_RET_GPIO_OPTION 0x3108 139 148 #define S5P_PAD_RET_UART_OPTION 0x3128 ··· 167 126 #define S5P_PAD_RET_EBIA_OPTION 0x3188 168 127 #define S5P_PAD_RET_EBIB_OPTION 0x31A8 169 128 129 + #define S5P_PS_HOLD_CONTROL 0x330C 130 + #define S5P_PS_HOLD_EN (1 << 31) 131 + #define S5P_PS_HOLD_OUTPUT_HIGH (3 << 8) 132 + 133 + #define S5P_CAM_OPTION 0x3C08 134 + #define S5P_MFC_OPTION 0x3C48 135 + #define S5P_G3D_OPTION 0x3C68 136 + #define S5P_LCD0_OPTION 0x3C88 137 + #define S5P_LCD1_OPTION 0x3CA8 138 + #define S5P_ISP_OPTION S5P_LCD1_OPTION 139 + 170 140 #define S5P_CORE_LOCAL_PWR_EN 0x3 141 + #define S5P_CORE_WAKEUP_FROM_LOCAL_CFG (0x3 << 8) 171 142 172 143 /* Only for EXYNOS4210 */ 173 144 #define S5P_CMU_CLKSTOP_LCD1_LOWPWR 0x1154 ··· 238 185 #define S5P_DIS_IRQ_CORE3 0x1034 239 186 #define S5P_DIS_IRQ_CENTRAL3 0x1038 240 187 188 + /* Only for EXYNOS3XXX */ 189 + #define EXYNOS3_ARM_CORE0_SYS_PWR_REG 0x1000 190 + #define EXYNOS3_DIS_IRQ_ARM_CORE0_LOCAL_SYS_PWR_REG 0x1004 191 + #define EXYNOS3_DIS_IRQ_ARM_CORE0_CENTRAL_SYS_PWR_REG 0x1008 192 + #define EXYNOS3_ARM_CORE1_SYS_PWR_REG 0x1010 193 + #define EXYNOS3_DIS_IRQ_ARM_CORE1_LOCAL_SYS_PWR_REG 0x1014 194 + #define EXYNOS3_DIS_IRQ_ARM_CORE1_CENTRAL_SYS_PWR_REG 0x1018 195 + #define EXYNOS3_ISP_ARM_SYS_PWR_REG 0x1050 196 + #define EXYNOS3_DIS_IRQ_ISP_ARM_LOCAL_SYS_PWR_REG 0x1054 197 + #define EXYNOS3_DIS_IRQ_ISP_ARM_CENTRAL_SYS_PWR_REG 0x1058 198 + #define EXYNOS3_ARM_COMMON_SYS_PWR_REG 0x1080 199 + #define EXYNOS3_ARM_L2_SYS_PWR_REG 0x10C0 200 + #define EXYNOS3_CMU_ACLKSTOP_SYS_PWR_REG 0x1100 201 + #define EXYNOS3_CMU_SCLKSTOP_SYS_PWR_REG 0x1104 202 + #define EXYNOS3_CMU_RESET_SYS_PWR_REG 0x110C 203 + #define EXYNOS3_CMU_ACLKSTOP_COREBLK_SYS_PWR_REG 0x1110 204 + #define EXYNOS3_CMU_SCLKSTOP_COREBLK_SYS_PWR_REG 0x1114 205 + 
#define EXYNOS3_CMU_RESET_COREBLK_SYS_PWR_REG 0x111C 206 + #define EXYNOS3_APLL_SYSCLK_SYS_PWR_REG 0x1120 207 + #define EXYNOS3_MPLL_SYSCLK_SYS_PWR_REG 0x1124 208 + #define EXYNOS3_VPLL_SYSCLK_SYS_PWR_REG 0x1128 209 + #define EXYNOS3_EPLL_SYSCLK_SYS_PWR_REG 0x112C 210 + #define EXYNOS3_MPLLUSER_SYSCLK_SYS_PWR_REG 0x1130 211 + #define EXYNOS3_BPLLUSER_SYSCLK_SYS_PWR_REG 0x1134 212 + #define EXYNOS3_EPLLUSER_SYSCLK_SYS_PWR_REG 0x1138 213 + #define EXYNOS3_CMU_CLKSTOP_CAM_SYS_PWR_REG 0x1140 214 + #define EXYNOS3_CMU_CLKSTOP_MFC_SYS_PWR_REG 0x1148 215 + #define EXYNOS3_CMU_CLKSTOP_G3D_SYS_PWR_REG 0x114C 216 + #define EXYNOS3_CMU_CLKSTOP_LCD0_SYS_PWR_REG 0x1150 217 + #define EXYNOS3_CMU_CLKSTOP_ISP_SYS_PWR_REG 0x1154 218 + #define EXYNOS3_CMU_CLKSTOP_MAUDIO_SYS_PWR_REG 0x1158 219 + #define EXYNOS3_CMU_RESET_CAM_SYS_PWR_REG 0x1160 220 + #define EXYNOS3_CMU_RESET_MFC_SYS_PWR_REG 0x1168 221 + #define EXYNOS3_CMU_RESET_G3D_SYS_PWR_REG 0x116C 222 + #define EXYNOS3_CMU_RESET_LCD0_SYS_PWR_REG 0x1170 223 + #define EXYNOS3_CMU_RESET_ISP_SYS_PWR_REG 0x1174 224 + #define EXYNOS3_CMU_RESET_MAUDIO_SYS_PWR_REG 0x1178 225 + #define EXYNOS3_TOP_BUS_SYS_PWR_REG 0x1180 226 + #define EXYNOS3_TOP_RETENTION_SYS_PWR_REG 0x1184 227 + #define EXYNOS3_TOP_PWR_SYS_PWR_REG 0x1188 228 + #define EXYNOS3_TOP_BUS_COREBLK_SYS_PWR_REG 0x1190 229 + #define EXYNOS3_TOP_RETENTION_COREBLK_SYS_PWR_REG 0x1194 230 + #define EXYNOS3_TOP_PWR_COREBLK_SYS_PWR_REG 0x1198 231 + #define EXYNOS3_LOGIC_RESET_SYS_PWR_REG 0x11A0 232 + #define EXYNOS3_OSCCLK_GATE_SYS_PWR_REG 0x11A4 233 + #define EXYNOS3_LOGIC_RESET_COREBLK_SYS_PWR_REG 0x11B0 234 + #define EXYNOS3_OSCCLK_GATE_COREBLK_SYS_PWR_REG 0x11B4 235 + #define EXYNOS3_PAD_RETENTION_DRAM_SYS_PWR_REG 0x1200 236 + #define EXYNOS3_PAD_RETENTION_MAUDIO_SYS_PWR_REG 0x1204 237 + #define EXYNOS3_PAD_RETENTION_SPI_SYS_PWR_REG 0x1208 238 + #define EXYNOS3_PAD_RETENTION_MMC2_SYS_PWR_REG 0x1218 239 + #define EXYNOS3_PAD_RETENTION_GPIO_SYS_PWR_REG 0x1220 240 + #define 
EXYNOS3_PAD_RETENTION_UART_SYS_PWR_REG 0x1224 241 + #define EXYNOS3_PAD_RETENTION_MMC0_SYS_PWR_REG 0x1228 242 + #define EXYNOS3_PAD_RETENTION_MMC1_SYS_PWR_REG 0x122C 243 + #define EXYNOS3_PAD_RETENTION_EBIA_SYS_PWR_REG 0x1230 244 + #define EXYNOS3_PAD_RETENTION_EBIB_SYS_PWR_REG 0x1234 245 + #define EXYNOS3_PAD_RETENTION_JTAG_SYS_PWR_REG 0x1238 246 + #define EXYNOS3_PAD_ISOLATION_SYS_PWR_REG 0x1240 247 + #define EXYNOS3_PAD_ALV_SEL_SYS_PWR_REG 0x1260 248 + #define EXYNOS3_XUSBXTI_SYS_PWR_REG 0x1280 249 + #define EXYNOS3_XXTI_SYS_PWR_REG 0x1284 250 + #define EXYNOS3_EXT_REGULATOR_SYS_PWR_REG 0x12C0 251 + #define EXYNOS3_EXT_REGULATOR_COREBLK_SYS_PWR_REG 0x12C4 252 + #define EXYNOS3_GPIO_MODE_SYS_PWR_REG 0x1300 253 + #define EXYNOS3_GPIO_MODE_MAUDIO_SYS_PWR_REG 0x1340 254 + #define EXYNOS3_TOP_ASB_RESET_SYS_PWR_REG 0x1344 255 + #define EXYNOS3_TOP_ASB_ISOLATION_SYS_PWR_REG 0x1348 256 + #define EXYNOS3_TOP_ASB_RESET_COREBLK_SYS_PWR_REG 0x1350 257 + #define EXYNOS3_TOP_ASB_ISOLATION_COREBLK_SYS_PWR_REG 0x1354 258 + #define EXYNOS3_CAM_SYS_PWR_REG 0x1380 259 + #define EXYNOS3_MFC_SYS_PWR_REG 0x1388 260 + #define EXYNOS3_G3D_SYS_PWR_REG 0x138C 261 + #define EXYNOS3_LCD0_SYS_PWR_REG 0x1390 262 + #define EXYNOS3_ISP_SYS_PWR_REG 0x1394 263 + #define EXYNOS3_MAUDIO_SYS_PWR_REG 0x1398 264 + #define EXYNOS3_DRAM_FREQ_DOWN_SYS_PWR_REG 0x13B0 265 + #define EXYNOS3_DDRPHY_DLLOFF_SYS_PWR_REG 0x13B4 266 + #define EXYNOS3_CMU_SYSCLK_ISP_SYS_PWR_REG 0x13B8 267 + #define EXYNOS3_LPDDR_PHY_DLL_LOCK_SYS_PWR_REG 0x13C0 268 + #define EXYNOS3_BPLL_SYSCLK_SYS_PWR_REG 0x13C4 269 + #define EXYNOS3_UPLL_SYSCLK_SYS_PWR_REG 0x13C8 270 + 271 + #define EXYNOS3_ARM_CORE0_OPTION 0x2008 272 + #define EXYNOS3_ARM_CORE_OPTION(_nr) \ 273 + (EXYNOS3_ARM_CORE0_OPTION + ((_nr) * 0x80)) 274 + 275 + #define EXYNOS3_ARM_COMMON_OPTION 0x2408 276 + #define EXYNOS3_TOP_PWR_OPTION 0x2C48 277 + #define EXYNOS3_CORE_TOP_PWR_OPTION 0x2CA8 278 + #define EXYNOS3_XUSBXTI_DURATION 0x341C 279 + #define 
EXYNOS3_XXTI_DURATION 0x343C 280 + #define EXYNOS3_EXT_REGULATOR_DURATION 0x361C 281 + #define EXYNOS3_EXT_REGULATOR_COREBLK_DURATION 0x363C 282 + #define XUSBXTI_DURATION 0x00000BB8 283 + #define XXTI_DURATION XUSBXTI_DURATION 284 + #define EXT_REGULATOR_DURATION 0x00001D4C 285 + #define EXT_REGULATOR_COREBLK_DURATION EXT_REGULATOR_DURATION 286 + 287 + /* for XXX_OPTION */ 288 + #define EXYNOS3_OPTION_USE_SC_COUNTER (1 << 0) 289 + #define EXYNOS3_OPTION_USE_SC_FEEDBACK (1 << 1) 290 + #define EXYNOS3_OPTION_SKIP_DEACTIVATE_ACEACP_IN_PWDN (1 << 7) 291 + 241 292 /* For EXYNOS5 */ 242 293 243 294 #define EXYNOS5_AUTO_WDTRESET_DISABLE 0x0408 244 295 #define EXYNOS5_MASK_WDTRESET_REQUEST 0x040C 245 296 297 + #define EXYNOS5_USE_RETENTION BIT(4) 246 298 #define EXYNOS5_SYS_WDTRESET (1 << 20) 247 299 248 300 #define EXYNOS5_ARM_CORE0_SYS_PWR_REG 0x1000 ··· 486 328 return ((MPIDR_AFFINITY_LEVEL(mpidr, 1) * MAX_CPUS_IN_CLUSTER) 487 329 + MPIDR_AFFINITY_LEVEL(mpidr, 0)); 488 330 } 331 + 332 + /* Only for EXYNOS5420 */ 333 + #define EXYNOS5420_ISP_ARM_OPTION 0x2488 334 + #define EXYNOS5420_L2RSTDISABLE_VALUE BIT(3) 335 + 336 + #define EXYNOS5420_LPI_MASK 0x0004 337 + #define EXYNOS5420_LPI_MASK1 0x0008 338 + #define EXYNOS5420_UFS BIT(8) 339 + #define EXYNOS5420_ATB_KFC BIT(13) 340 + #define EXYNOS5420_ATB_ISP_ARM BIT(19) 341 + #define EXYNOS5420_EMULATION BIT(31) 342 + #define ATB_ISP_ARM BIT(12) 343 + #define ATB_KFC BIT(13) 344 + #define ATB_NOC BIT(14) 345 + 346 + #define EXYNOS5420_ARM_INTR_SPREAD_ENABLE 0x0100 347 + #define EXYNOS5420_ARM_INTR_SPREAD_USE_STANDBYWFI 0x0104 348 + #define EXYNOS5420_UP_SCHEDULER 0x0120 349 + #define SPREAD_ENABLE 0xF 350 + #define SPREAD_USE_STANDWFI 0xF 351 + 352 + #define EXYNOS5420_BB_CON1 0x0784 353 + #define EXYNOS5420_BB_SEL_EN BIT(31) 354 + #define EXYNOS5420_BB_PMOS_EN BIT(7) 355 + #define EXYNOS5420_BB_1300X 0XF 356 + 357 + #define EXYNOS5420_ARM_CORE2_SYS_PWR_REG 0x1020 358 + #define EXYNOS5420_DIS_IRQ_ARM_CORE2_LOCAL_SYS_PWR_REG 
0x1024 359 + #define EXYNOS5420_DIS_IRQ_ARM_CORE2_CENTRAL_SYS_PWR_REG 0x1028 360 + #define EXYNOS5420_ARM_CORE3_SYS_PWR_REG 0x1030 361 + #define EXYNOS5420_DIS_IRQ_ARM_CORE3_LOCAL_SYS_PWR_REG 0x1034 362 + #define EXYNOS5420_DIS_IRQ_ARM_CORE3_CENTRAL_SYS_PWR_REG 0x1038 363 + #define EXYNOS5420_KFC_CORE0_SYS_PWR_REG 0x1040 364 + #define EXYNOS5420_DIS_IRQ_KFC_CORE0_LOCAL_SYS_PWR_REG 0x1044 365 + #define EXYNOS5420_DIS_IRQ_KFC_CORE0_CENTRAL_SYS_PWR_REG 0x1048 366 + #define EXYNOS5420_KFC_CORE1_SYS_PWR_REG 0x1050 367 + #define EXYNOS5420_DIS_IRQ_KFC_CORE1_LOCAL_SYS_PWR_REG 0x1054 368 + #define EXYNOS5420_DIS_IRQ_KFC_CORE1_CENTRAL_SYS_PWR_REG 0x1058 369 + #define EXYNOS5420_KFC_CORE2_SYS_PWR_REG 0x1060 370 + #define EXYNOS5420_DIS_IRQ_KFC_CORE2_LOCAL_SYS_PWR_REG 0x1064 371 + #define EXYNOS5420_DIS_IRQ_KFC_CORE2_CENTRAL_SYS_PWR_REG 0x1068 372 + #define EXYNOS5420_KFC_CORE3_SYS_PWR_REG 0x1070 373 + #define EXYNOS5420_DIS_IRQ_KFC_CORE3_LOCAL_SYS_PWR_REG 0x1074 374 + #define EXYNOS5420_DIS_IRQ_KFC_CORE3_CENTRAL_SYS_PWR_REG 0x1078 375 + #define EXYNOS5420_ISP_ARM_SYS_PWR_REG 0x1090 376 + #define EXYNOS5420_DIS_IRQ_ISP_ARM_LOCAL_SYS_PWR_REG 0x1094 377 + #define EXYNOS5420_DIS_IRQ_ISP_ARM_CENTRAL_SYS_PWR_REG 0x1098 378 + #define EXYNOS5420_ARM_COMMON_SYS_PWR_REG 0x10A0 379 + #define EXYNOS5420_KFC_COMMON_SYS_PWR_REG 0x10B0 380 + #define EXYNOS5420_KFC_L2_SYS_PWR_REG 0x10D0 381 + #define EXYNOS5420_DPLL_SYSCLK_SYS_PWR_REG 0x1158 382 + #define EXYNOS5420_IPLL_SYSCLK_SYS_PWR_REG 0x115C 383 + #define EXYNOS5420_KPLL_SYSCLK_SYS_PWR_REG 0x1160 384 + #define EXYNOS5420_RPLL_SYSCLK_SYS_PWR_REG 0x1174 385 + #define EXYNOS5420_SPLL_SYSCLK_SYS_PWR_REG 0x1178 386 + #define EXYNOS5420_INTRAM_MEM_SYS_PWR_REG 0x11B8 387 + #define EXYNOS5420_INTROM_MEM_SYS_PWR_REG 0x11BC 388 + #define EXYNOS5420_ONENANDXL_MEM_SYS_PWR 0x11C0 389 + #define EXYNOS5420_USBDEV_MEM_SYS_PWR 0x11CC 390 + #define EXYNOS5420_USBDEV1_MEM_SYS_PWR 0x11D0 391 + #define EXYNOS5420_SDMMC_MEM_SYS_PWR 0x11D4 392 + #define 
EXYNOS5420_CSSYS_MEM_SYS_PWR 0x11D8 393 + #define EXYNOS5420_SECSS_MEM_SYS_PWR 0x11DC 394 + #define EXYNOS5420_ROTATOR_MEM_SYS_PWR 0x11E0 395 + #define EXYNOS5420_INTRAM_MEM_SYS_PWR 0x11E4 396 + #define EXYNOS5420_INTROM_MEM_SYS_PWR 0x11E8 397 + #define EXYNOS5420_PAD_RETENTION_JTAG_SYS_PWR_REG 0x1208 398 + #define EXYNOS5420_PAD_RETENTION_DRAM_SYS_PWR_REG 0x1210 399 + #define EXYNOS5420_PAD_RETENTION_UART_SYS_PWR_REG 0x1214 400 + #define EXYNOS5420_PAD_RETENTION_MMC0_SYS_PWR_REG 0x1218 401 + #define EXYNOS5420_PAD_RETENTION_MMC1_SYS_PWR_REG 0x121C 402 + #define EXYNOS5420_PAD_RETENTION_MMC2_SYS_PWR_REG 0x1220 403 + #define EXYNOS5420_PAD_RETENTION_HSI_SYS_PWR_REG 0x1224 404 + #define EXYNOS5420_PAD_RETENTION_EBIA_SYS_PWR_REG 0x1228 405 + #define EXYNOS5420_PAD_RETENTION_EBIB_SYS_PWR_REG 0x122C 406 + #define EXYNOS5420_PAD_RETENTION_SPI_SYS_PWR_REG 0x1230 407 + #define EXYNOS5420_PAD_RETENTION_DRAM_COREBLK_SYS_PWR_REG 0x1234 408 + #define EXYNOS5420_DISP1_SYS_PWR_REG 0x1410 409 + #define EXYNOS5420_MAU_SYS_PWR_REG 0x1414 410 + #define EXYNOS5420_G2D_SYS_PWR_REG 0x1418 411 + #define EXYNOS5420_MSC_SYS_PWR_REG 0x141C 412 + #define EXYNOS5420_FSYS_SYS_PWR_REG 0x1420 413 + #define EXYNOS5420_FSYS2_SYS_PWR_REG 0x1424 414 + #define EXYNOS5420_PSGEN_SYS_PWR_REG 0x1428 415 + #define EXYNOS5420_PERIC_SYS_PWR_REG 0x142C 416 + #define EXYNOS5420_WCORE_SYS_PWR_REG 0x1430 417 + #define EXYNOS5420_CMU_CLKSTOP_DISP1_SYS_PWR_REG 0x1490 418 + #define EXYNOS5420_CMU_CLKSTOP_MAU_SYS_PWR_REG 0x1494 419 + #define EXYNOS5420_CMU_CLKSTOP_G2D_SYS_PWR_REG 0x1498 420 + #define EXYNOS5420_CMU_CLKSTOP_MSC_SYS_PWR_REG 0x149C 421 + #define EXYNOS5420_CMU_CLKSTOP_FSYS_SYS_PWR_REG 0x14A0 422 + #define EXYNOS5420_CMU_CLKSTOP_FSYS2_SYS_PWR_REG 0x14A4 423 + #define EXYNOS5420_CMU_CLKSTOP_PSGEN_SYS_PWR_REG 0x14A8 424 + #define EXYNOS5420_CMU_CLKSTOP_PERIC_SYS_PWR_REG 0x14AC 425 + #define EXYNOS5420_CMU_CLKSTOP_WCORE_SYS_PWR_REG 0x14B0 426 + #define EXYNOS5420_CMU_SYSCLK_TOPPWR_SYS_PWR_REG 0x14BC 427 
+ #define EXYNOS5420_CMU_SYSCLK_DISP1_SYS_PWR_REG 0x14D0 428 + #define EXYNOS5420_CMU_SYSCLK_MAU_SYS_PWR_REG 0x14D4 429 + #define EXYNOS5420_CMU_SYSCLK_G2D_SYS_PWR_REG 0x14D8 430 + #define EXYNOS5420_CMU_SYSCLK_MSC_SYS_PWR_REG 0x14DC 431 + #define EXYNOS5420_CMU_SYSCLK_FSYS_SYS_PWR_REG 0x14E0 432 + #define EXYNOS5420_CMU_SYSCLK_FSYS2_SYS_PWR_REG 0x14E4 433 + #define EXYNOS5420_CMU_SYSCLK_PSGEN_SYS_PWR_REG 0x14E8 434 + #define EXYNOS5420_CMU_SYSCLK_PERIC_SYS_PWR_REG 0x14EC 435 + #define EXYNOS5420_CMU_SYSCLK_WCORE_SYS_PWR_REG 0x14F0 436 + #define EXYNOS5420_CMU_SYSCLK_SYSMEM_TOPPWR_SYS_PWR_REG 0x14F4 437 + #define EXYNOS5420_CMU_RESET_FSYS2_SYS_PWR_REG 0x1570 438 + #define EXYNOS5420_CMU_RESET_PSGEN_SYS_PWR_REG 0x1574 439 + #define EXYNOS5420_CMU_RESET_PERIC_SYS_PWR_REG 0x1578 440 + #define EXYNOS5420_CMU_RESET_WCORE_SYS_PWR_REG 0x157C 441 + #define EXYNOS5420_CMU_RESET_DISP1_SYS_PWR_REG 0x1590 442 + #define EXYNOS5420_CMU_RESET_MAU_SYS_PWR_REG 0x1594 443 + #define EXYNOS5420_CMU_RESET_G2D_SYS_PWR_REG 0x1598 444 + #define EXYNOS5420_CMU_RESET_MSC_SYS_PWR_REG 0x159C 445 + #define EXYNOS5420_CMU_RESET_FSYS_SYS_PWR_REG 0x15A0 446 + #define EXYNOS5420_SFR_AXI_CGDIS1 0x15E4 447 + #define EXYNOS_ARM_CORE2_CONFIGURATION 0x2100 448 + #define EXYNOS5420_ARM_CORE2_OPTION 0x2108 449 + #define EXYNOS_ARM_CORE3_CONFIGURATION 0x2180 450 + #define EXYNOS5420_ARM_CORE3_OPTION 0x2188 451 + #define EXYNOS5420_ARM_COMMON_STATUS 0x2504 452 + #define EXYNOS5420_ARM_COMMON_OPTION 0x2508 453 + #define EXYNOS5420_KFC_COMMON_STATUS 0x2584 454 + #define EXYNOS5420_KFC_COMMON_OPTION 0x2588 455 + #define EXYNOS5420_LOGIC_RESET_DURATION3 0x2D1C 456 + 457 + #define EXYNOS5420_PAD_RET_GPIO_OPTION 0x30C8 458 + #define EXYNOS5420_PAD_RET_UART_OPTION 0x30E8 459 + #define EXYNOS5420_PAD_RET_MMCA_OPTION 0x3108 460 + #define EXYNOS5420_PAD_RET_MMCB_OPTION 0x3128 461 + #define EXYNOS5420_PAD_RET_MMCC_OPTION 0x3148 462 + #define EXYNOS5420_PAD_RET_HSI_OPTION 0x3168 463 + #define 
EXYNOS5420_PAD_RET_SPI_OPTION 0x31C8 464 + #define EXYNOS5420_PAD_RET_DRAM_COREBLK_OPTION 0x31E8 465 + #define EXYNOS_PAD_RET_DRAM_OPTION 0x3008 466 + #define EXYNOS_PAD_RET_MAUDIO_OPTION 0x3028 467 + #define EXYNOS_PAD_RET_JTAG_OPTION 0x3048 468 + #define EXYNOS_PAD_RET_GPIO_OPTION 0x3108 469 + #define EXYNOS_PAD_RET_UART_OPTION 0x3128 470 + #define EXYNOS_PAD_RET_MMCA_OPTION 0x3148 471 + #define EXYNOS_PAD_RET_MMCB_OPTION 0x3168 472 + #define EXYNOS_PAD_RET_EBIA_OPTION 0x3188 473 + #define EXYNOS_PAD_RET_EBIB_OPTION 0x31A8 474 + 475 + #define EXYNOS_PS_HOLD_CONTROL 0x330C 476 + 477 + /* For SYS_PWR_REG */ 478 + #define EXYNOS_SYS_PWR_CFG BIT(0) 479 + 480 + #define EXYNOS5420_MFC_CONFIGURATION 0x4060 481 + #define EXYNOS5420_MFC_STATUS 0x4064 482 + #define EXYNOS5420_MFC_OPTION 0x4068 483 + #define EXYNOS5420_G3D_CONFIGURATION 0x4080 484 + #define EXYNOS5420_G3D_STATUS 0x4084 485 + #define EXYNOS5420_G3D_OPTION 0x4088 486 + #define EXYNOS5420_DISP0_CONFIGURATION 0x40A0 487 + #define EXYNOS5420_DISP0_STATUS 0x40A4 488 + #define EXYNOS5420_DISP0_OPTION 0x40A8 489 + #define EXYNOS5420_DISP1_CONFIGURATION 0x40C0 490 + #define EXYNOS5420_DISP1_STATUS 0x40C4 491 + #define EXYNOS5420_DISP1_OPTION 0x40C8 492 + #define EXYNOS5420_MAU_CONFIGURATION 0x40E0 493 + #define EXYNOS5420_MAU_STATUS 0x40E4 494 + #define EXYNOS5420_MAU_OPTION 0x40E8 495 + #define EXYNOS5420_FSYS2_OPTION 0x4168 496 + #define EXYNOS5420_PSGEN_OPTION 0x4188 497 + 498 + /* For EXYNOS_CENTRAL_SEQ_OPTION */ 499 + #define EXYNOS5_USE_STANDBYWFI_ARM_CORE0 BIT(16) 500 + #define EXYNOS5_USE_STANDBYWFI_ARM_CORE1 BIT(17) 501 + #define EXYNOS5_USE_STANDBYWFE_ARM_CORE0 BIT(24) 502 + #define EXYNOS5_USE_STANDBYWFE_ARM_CORE1 BIT(25) 503 + 504 + #define EXYNOS5420_ARM_USE_STANDBY_WFI0 BIT(4) 505 + #define EXYNOS5420_ARM_USE_STANDBY_WFI1 BIT(5) 506 + #define EXYNOS5420_ARM_USE_STANDBY_WFI2 BIT(6) 507 + #define EXYNOS5420_ARM_USE_STANDBY_WFI3 BIT(7) 508 + #define EXYNOS5420_KFC_USE_STANDBY_WFI0 BIT(8) 509 + #define 
EXYNOS5420_KFC_USE_STANDBY_WFI1 BIT(9) 510 + #define EXYNOS5420_KFC_USE_STANDBY_WFI2 BIT(10) 511 + #define EXYNOS5420_KFC_USE_STANDBY_WFI3 BIT(11) 512 + #define EXYNOS5420_ARM_USE_STANDBY_WFE0 BIT(16) 513 + #define EXYNOS5420_ARM_USE_STANDBY_WFE1 BIT(17) 514 + #define EXYNOS5420_ARM_USE_STANDBY_WFE2 BIT(18) 515 + #define EXYNOS5420_ARM_USE_STANDBY_WFE3 BIT(19) 516 + #define EXYNOS5420_KFC_USE_STANDBY_WFE0 BIT(20) 517 + #define EXYNOS5420_KFC_USE_STANDBY_WFE1 BIT(21) 518 + #define EXYNOS5420_KFC_USE_STANDBY_WFE2 BIT(22) 519 + #define EXYNOS5420_KFC_USE_STANDBY_WFE3 BIT(23) 520 + 521 + #define DUR_WAIT_RESET 0xF 522 + 523 + #define EXYNOS5420_USE_STANDBY_WFI_ALL (EXYNOS5420_ARM_USE_STANDBY_WFI0 \ 524 + | EXYNOS5420_ARM_USE_STANDBY_WFI1 \ 525 + | EXYNOS5420_ARM_USE_STANDBY_WFI2 \ 526 + | EXYNOS5420_ARM_USE_STANDBY_WFI3 \ 527 + | EXYNOS5420_KFC_USE_STANDBY_WFI0 \ 528 + | EXYNOS5420_KFC_USE_STANDBY_WFI1 \ 529 + | EXYNOS5420_KFC_USE_STANDBY_WFI2 \ 530 + | EXYNOS5420_KFC_USE_STANDBY_WFI3) 489 531 490 532 #endif /* __ASM_ARCH_REGS_PMU_H */
arch/arm/mach-exynos/sleep.S (+28 lines)
··· 16 16 */ 17 17 18 18 #include <linux/linkage.h> 19 + #include "smc.h" 19 20 20 21 #define CPU_MASK 0xff0ffff0 21 22 #define CPU_CORTEX_A9 0x410fc090 ··· 56 55 #endif 57 56 b cpu_resume 58 57 ENDPROC(exynos_cpu_resume) 58 + 59 + .align 60 + 61 + ENTRY(exynos_cpu_resume_ns) 62 + mrc p15, 0, r0, c0, c0, 0 63 + ldr r1, =CPU_MASK 64 + and r0, r0, r1 65 + ldr r1, =CPU_CORTEX_A9 66 + cmp r0, r1 67 + bne skip_cp15 68 + 69 + adr r0, cp15_save_power 70 + ldr r1, [r0] 71 + adr r0, cp15_save_diag 72 + ldr r2, [r0] 73 + mov r0, #SMC_CMD_C15RESUME 74 + dsb 75 + smc #0 76 + skip_cp15: 77 + b cpu_resume 78 + ENDPROC(exynos_cpu_resume_ns) 79 + .globl cp15_save_diag 80 + cp15_save_diag: 81 + .long 0 @ cp15 diagnostic 82 + .globl cp15_save_power 83 + cp15_save_power: 84 + .long 0 @ cp15 power control
arch/arm/mach-exynos/smc.h (+4 lines)
··· 26 26 #define SMC_CMD_L2X0INVALL (-24) 27 27 #define SMC_CMD_L2X0DEBUG (-25) 28 28 29 + #ifndef __ASSEMBLY__ 30 + 29 31 extern void exynos_smc(u32 cmd, u32 arg1, u32 arg2, u32 arg3); 32 + 33 + #endif /* __ASSEMBLY__ */ 30 34 31 35 #endif
arch/arm/mach-exynos/suspend.c (+566 lines)
··· 1 + /* 2 + * Copyright (c) 2011-2014 Samsung Electronics Co., Ltd. 3 + * http://www.samsung.com 4 + * 5 + * EXYNOS - Suspend support 6 + * 7 + * Based on arch/arm/mach-s3c2410/pm.c 8 + * Copyright (c) 2006 Simtec Electronics 9 + * Ben Dooks <ben@simtec.co.uk> 10 + * 11 + * This program is free software; you can redistribute it and/or modify 12 + * it under the terms of the GNU General Public License version 2 as 13 + * published by the Free Software Foundation. 14 + */ 15 + 16 + #include <linux/init.h> 17 + #include <linux/suspend.h> 18 + #include <linux/syscore_ops.h> 19 + #include <linux/cpu_pm.h> 20 + #include <linux/io.h> 21 + #include <linux/irqchip/arm-gic.h> 22 + #include <linux/err.h> 23 + #include <linux/regulator/machine.h> 24 + 25 + #include <asm/cacheflush.h> 26 + #include <asm/hardware/cache-l2x0.h> 27 + #include <asm/firmware.h> 28 + #include <asm/mcpm.h> 29 + #include <asm/smp_scu.h> 30 + #include <asm/suspend.h> 31 + 32 + #include <plat/pm-common.h> 33 + #include <plat/regs-srom.h> 34 + 35 + #include "common.h" 36 + #include "regs-pmu.h" 37 + #include "regs-sys.h" 38 + #include "exynos-pmu.h" 39 + 40 + #define S5P_CHECK_SLEEP 0x00000BAD 41 + 42 + #define REG_TABLE_END (-1U) 43 + 44 + #define EXYNOS5420_CPU_STATE 0x28 45 + 46 + /** 47 + * struct exynos_wkup_irq - Exynos GIC to PMU IRQ mapping 48 + * @hwirq: Hardware IRQ signal of the GIC 49 + * @mask: Mask in PMU wake-up mask register 50 + */ 51 + struct exynos_wkup_irq { 52 + unsigned int hwirq; 53 + u32 mask; 54 + }; 55 + 56 + static struct sleep_save exynos5_sys_save[] = { 57 + SAVE_ITEM(EXYNOS5_SYS_I2C_CFG), 58 + }; 59 + 60 + static struct sleep_save exynos_core_save[] = { 61 + /* SROM side */ 62 + SAVE_ITEM(S5P_SROM_BW), 63 + SAVE_ITEM(S5P_SROM_BC0), 64 + SAVE_ITEM(S5P_SROM_BC1), 65 + SAVE_ITEM(S5P_SROM_BC2), 66 + SAVE_ITEM(S5P_SROM_BC3), 67 + }; 68 + 69 + struct exynos_pm_data { 70 + const struct exynos_wkup_irq *wkup_irq; 71 + struct sleep_save *extra_save; 72 + int num_extra_save; 73 + 
unsigned int wake_disable_mask; 74 + unsigned int *release_ret_regs; 75 + 76 + void (*pm_prepare)(void); 77 + void (*pm_resume_prepare)(void); 78 + void (*pm_resume)(void); 79 + int (*pm_suspend)(void); 80 + int (*cpu_suspend)(unsigned long); 81 + }; 82 + 83 + struct exynos_pm_data *pm_data; 84 + 85 + static int exynos5420_cpu_state; 86 + static unsigned int exynos_pmu_spare3; 87 + 88 + /* 89 + * GIC wake-up support 90 + */ 91 + 92 + static u32 exynos_irqwake_intmask = 0xffffffff; 93 + 94 + static const struct exynos_wkup_irq exynos4_wkup_irq[] = { 95 + { 76, BIT(1) }, /* RTC alarm */ 96 + { 77, BIT(2) }, /* RTC tick */ 97 + { /* sentinel */ }, 98 + }; 99 + 100 + static const struct exynos_wkup_irq exynos5250_wkup_irq[] = { 101 + { 75, BIT(1) }, /* RTC alarm */ 102 + { 76, BIT(2) }, /* RTC tick */ 103 + { /* sentinel */ }, 104 + }; 105 + 106 + unsigned int exynos_release_ret_regs[] = { 107 + S5P_PAD_RET_MAUDIO_OPTION, 108 + S5P_PAD_RET_GPIO_OPTION, 109 + S5P_PAD_RET_UART_OPTION, 110 + S5P_PAD_RET_MMCA_OPTION, 111 + S5P_PAD_RET_MMCB_OPTION, 112 + S5P_PAD_RET_EBIA_OPTION, 113 + S5P_PAD_RET_EBIB_OPTION, 114 + REG_TABLE_END, 115 + }; 116 + 117 + unsigned int exynos5420_release_ret_regs[] = { 118 + EXYNOS_PAD_RET_DRAM_OPTION, 119 + EXYNOS_PAD_RET_MAUDIO_OPTION, 120 + EXYNOS_PAD_RET_JTAG_OPTION, 121 + EXYNOS5420_PAD_RET_GPIO_OPTION, 122 + EXYNOS5420_PAD_RET_UART_OPTION, 123 + EXYNOS5420_PAD_RET_MMCA_OPTION, 124 + EXYNOS5420_PAD_RET_MMCB_OPTION, 125 + EXYNOS5420_PAD_RET_MMCC_OPTION, 126 + EXYNOS5420_PAD_RET_HSI_OPTION, 127 + EXYNOS_PAD_RET_EBIA_OPTION, 128 + EXYNOS_PAD_RET_EBIB_OPTION, 129 + EXYNOS5420_PAD_RET_SPI_OPTION, 130 + EXYNOS5420_PAD_RET_DRAM_COREBLK_OPTION, 131 + REG_TABLE_END, 132 + }; 133 + 134 + static int exynos_irq_set_wake(struct irq_data *data, unsigned int state) 135 + { 136 + const struct exynos_wkup_irq *wkup_irq; 137 + 138 + if (!pm_data->wkup_irq) 139 + return -ENOENT; 140 + wkup_irq = pm_data->wkup_irq; 141 + 142 + while (wkup_irq->mask) { 143 + if 
(wkup_irq->hwirq == data->hwirq) { 144 + if (!state) 145 + exynos_irqwake_intmask |= wkup_irq->mask; 146 + else 147 + exynos_irqwake_intmask &= ~wkup_irq->mask; 148 + return 0; 149 + } 150 + ++wkup_irq; 151 + } 152 + 153 + return -ENOENT; 154 + } 155 + 156 + static int exynos_cpu_do_idle(void) 157 + { 158 + /* issue the standby signal into the pm unit. */ 159 + cpu_do_idle(); 160 + 161 + pr_info("Failed to suspend the system\n"); 162 + return 1; /* Aborting suspend */ 163 + } 164 + static void exynos_flush_cache_all(void) 165 + { 166 + flush_cache_all(); 167 + outer_flush_all(); 168 + } 169 + 170 + static int exynos_cpu_suspend(unsigned long arg) 171 + { 172 + exynos_flush_cache_all(); 173 + return exynos_cpu_do_idle(); 174 + } 175 + 176 + static int exynos5420_cpu_suspend(unsigned long arg) 177 + { 178 + /* MCPM works with HW CPU identifiers */ 179 + unsigned int mpidr = read_cpuid_mpidr(); 180 + unsigned int cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1); 181 + unsigned int cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0); 182 + 183 + __raw_writel(0x0, sysram_base_addr + EXYNOS5420_CPU_STATE); 184 + 185 + if (IS_ENABLED(CONFIG_EXYNOS5420_MCPM)) { 186 + mcpm_set_entry_vector(cpu, cluster, exynos_cpu_resume); 187 + 188 + /* 189 + * Residency value passed to mcpm_cpu_suspend back-end 190 + * has to be given clear semantics. Set to 0 as a 191 + * temporary value. 
192 + */ 193 + mcpm_cpu_suspend(0); 194 + } 195 + 196 + pr_info("Failed to suspend the system\n"); 197 + 198 + /* return value != 0 means failure */ 199 + return 1; 200 + } 201 + 202 + static void exynos_pm_set_wakeup_mask(void) 203 + { 204 + /* Set wake-up mask registers */ 205 + pmu_raw_writel(exynos_get_eint_wake_mask(), S5P_EINT_WAKEUP_MASK); 206 + pmu_raw_writel(exynos_irqwake_intmask & ~(1 << 31), S5P_WAKEUP_MASK); 207 + } 208 + 209 + static void exynos_pm_enter_sleep_mode(void) 210 + { 211 + /* Set value of power down register for sleep mode */ 212 + exynos_sys_powerdown_conf(SYS_SLEEP); 213 + pmu_raw_writel(S5P_CHECK_SLEEP, S5P_INFORM1); 214 + } 215 + 216 + static void exynos_pm_prepare(void) 217 + { 218 + /* Set wake-up mask registers */ 219 + exynos_pm_set_wakeup_mask(); 220 + 221 + s3c_pm_do_save(exynos_core_save, ARRAY_SIZE(exynos_core_save)); 222 + 223 + if (pm_data->extra_save) 224 + s3c_pm_do_save(pm_data->extra_save, 225 + pm_data->num_extra_save); 226 + 227 + exynos_pm_enter_sleep_mode(); 228 + 229 + /* ensure at least INFORM0 has the resume address */ 230 + pmu_raw_writel(virt_to_phys(exynos_cpu_resume), S5P_INFORM0); 231 + } 232 + 233 + static void exynos5420_pm_prepare(void) 234 + { 235 + unsigned int tmp; 236 + 237 + /* Set wake-up mask registers */ 238 + exynos_pm_set_wakeup_mask(); 239 + 240 + s3c_pm_do_save(exynos_core_save, ARRAY_SIZE(exynos_core_save)); 241 + 242 + exynos_pmu_spare3 = pmu_raw_readl(S5P_PMU_SPARE3); 243 + /* 244 + * The cpu state needs to be saved and restored so that the 245 + * secondary CPUs will enter low power start. Though the U-Boot 246 + * is setting the cpu state with low power flag, the kernel 247 + * needs to restore it back in case, the primary cpu fails to 248 + * suspend for any reason. 
249 + */ 250 + exynos5420_cpu_state = __raw_readl(sysram_base_addr + 251 + EXYNOS5420_CPU_STATE); 252 + 253 + exynos_pm_enter_sleep_mode(); 254 + 255 + /* ensure at least INFORM0 has the resume address */ 256 + if (IS_ENABLED(CONFIG_EXYNOS5420_MCPM)) 257 + pmu_raw_writel(virt_to_phys(mcpm_entry_point), S5P_INFORM0); 258 + 259 + tmp = pmu_raw_readl(EXYNOS5_ARM_L2_OPTION); 260 + tmp &= ~EXYNOS5_USE_RETENTION; 261 + pmu_raw_writel(tmp, EXYNOS5_ARM_L2_OPTION); 262 + 263 + tmp = pmu_raw_readl(EXYNOS5420_SFR_AXI_CGDIS1); 264 + tmp |= EXYNOS5420_UFS; 265 + pmu_raw_writel(tmp, EXYNOS5420_SFR_AXI_CGDIS1); 266 + 267 + tmp = pmu_raw_readl(EXYNOS5420_ARM_COMMON_OPTION); 268 + tmp &= ~EXYNOS5420_L2RSTDISABLE_VALUE; 269 + pmu_raw_writel(tmp, EXYNOS5420_ARM_COMMON_OPTION); 270 + 271 + tmp = pmu_raw_readl(EXYNOS5420_FSYS2_OPTION); 272 + tmp |= EXYNOS5420_EMULATION; 273 + pmu_raw_writel(tmp, EXYNOS5420_FSYS2_OPTION); 274 + 275 + tmp = pmu_raw_readl(EXYNOS5420_PSGEN_OPTION); 276 + tmp |= EXYNOS5420_EMULATION; 277 + pmu_raw_writel(tmp, EXYNOS5420_PSGEN_OPTION); 278 + } 279 + 280 + 281 + static int exynos_pm_suspend(void) 282 + { 283 + exynos_pm_central_suspend(); 284 + 285 + if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A9) 286 + exynos_cpu_save_register(); 287 + 288 + return 0; 289 + } 290 + 291 + static int exynos5420_pm_suspend(void) 292 + { 293 + u32 this_cluster; 294 + 295 + exynos_pm_central_suspend(); 296 + 297 + /* Setting SEQ_OPTION register */ 298 + 299 + this_cluster = MPIDR_AFFINITY_LEVEL(read_cpuid_mpidr(), 1); 300 + if (!this_cluster) 301 + pmu_raw_writel(EXYNOS5420_ARM_USE_STANDBY_WFI0, 302 + S5P_CENTRAL_SEQ_OPTION); 303 + else 304 + pmu_raw_writel(EXYNOS5420_KFC_USE_STANDBY_WFI0, 305 + S5P_CENTRAL_SEQ_OPTION); 306 + return 0; 307 + } 308 + 309 + static void exynos_pm_release_retention(void) 310 + { 311 + unsigned int i; 312 + 313 + for (i = 0; (pm_data->release_ret_regs[i] != REG_TABLE_END); i++) 314 + pmu_raw_writel(EXYNOS_WAKEUP_FROM_LOWPWR, 315 + 
pm_data->release_ret_regs[i]); 316 + } 317 + 318 + static void exynos_pm_resume(void) 319 + { 320 + u32 cpuid = read_cpuid_part(); 321 + 322 + if (exynos_pm_central_resume()) 323 + goto early_wakeup; 324 + 325 + /* For release retention */ 326 + exynos_pm_release_retention(); 327 + 328 + if (pm_data->extra_save) 329 + s3c_pm_do_restore_core(pm_data->extra_save, 330 + pm_data->num_extra_save); 331 + 332 + s3c_pm_do_restore_core(exynos_core_save, ARRAY_SIZE(exynos_core_save)); 333 + 334 + if (cpuid == ARM_CPU_PART_CORTEX_A9) 335 + scu_enable(S5P_VA_SCU); 336 + 337 + if (call_firmware_op(resume) == -ENOSYS 338 + && cpuid == ARM_CPU_PART_CORTEX_A9) 339 + exynos_cpu_restore_register(); 340 + 341 + early_wakeup: 342 + 343 + /* Clear SLEEP mode set in INFORM1 */ 344 + pmu_raw_writel(0x0, S5P_INFORM1); 345 + } 346 + 347 + static void exynos5420_prepare_pm_resume(void) 348 + { 349 + if (IS_ENABLED(CONFIG_EXYNOS5420_MCPM)) 350 + WARN_ON(mcpm_cpu_powered_up()); 351 + } 352 + 353 + static void exynos5420_pm_resume(void) 354 + { 355 + unsigned long tmp; 356 + 357 + /* Restore the CPU0 low power state register */ 358 + tmp = pmu_raw_readl(EXYNOS5_ARM_CORE0_SYS_PWR_REG); 359 + pmu_raw_writel(tmp | S5P_CORE_LOCAL_PWR_EN, 360 + EXYNOS5_ARM_CORE0_SYS_PWR_REG); 361 + 362 + /* Restore the sysram cpu state register */ 363 + __raw_writel(exynos5420_cpu_state, 364 + sysram_base_addr + EXYNOS5420_CPU_STATE); 365 + 366 + pmu_raw_writel(EXYNOS5420_USE_STANDBY_WFI_ALL, 367 + S5P_CENTRAL_SEQ_OPTION); 368 + 369 + if (exynos_pm_central_resume()) 370 + goto early_wakeup; 371 + 372 + /* For release retention */ 373 + exynos_pm_release_retention(); 374 + 375 + pmu_raw_writel(exynos_pmu_spare3, S5P_PMU_SPARE3); 376 + 377 + s3c_pm_do_restore_core(exynos_core_save, ARRAY_SIZE(exynos_core_save)); 378 + 379 + early_wakeup: 380 + 381 + tmp = pmu_raw_readl(EXYNOS5420_SFR_AXI_CGDIS1); 382 + tmp &= ~EXYNOS5420_UFS; 383 + pmu_raw_writel(tmp, EXYNOS5420_SFR_AXI_CGDIS1); 384 + 385 + tmp = 
pmu_raw_readl(EXYNOS5420_FSYS2_OPTION); 386 + tmp &= ~EXYNOS5420_EMULATION; 387 + pmu_raw_writel(tmp, EXYNOS5420_FSYS2_OPTION); 388 + 389 + tmp = pmu_raw_readl(EXYNOS5420_PSGEN_OPTION); 390 + tmp &= ~EXYNOS5420_EMULATION; 391 + pmu_raw_writel(tmp, EXYNOS5420_PSGEN_OPTION); 392 + 393 + /* Clear SLEEP mode set in INFORM1 */ 394 + pmu_raw_writel(0x0, S5P_INFORM1); 395 + } 396 + 397 + /* 398 + * Suspend Ops 399 + */ 400 + 401 + static int exynos_suspend_enter(suspend_state_t state) 402 + { 403 + int ret; 404 + 405 + s3c_pm_debug_init(); 406 + 407 + S3C_PMDBG("%s: suspending the system...\n", __func__); 408 + 409 + S3C_PMDBG("%s: wakeup masks: %08x,%08x\n", __func__, 410 + exynos_irqwake_intmask, exynos_get_eint_wake_mask()); 411 + 412 + if (exynos_irqwake_intmask == -1U 413 + && exynos_get_eint_wake_mask() == -1U) { 414 + pr_err("%s: No wake-up sources!\n", __func__); 415 + pr_err("%s: Aborting sleep\n", __func__); 416 + return -EINVAL; 417 + } 418 + 419 + s3c_pm_save_uarts(); 420 + if (pm_data->pm_prepare) 421 + pm_data->pm_prepare(); 422 + flush_cache_all(); 423 + s3c_pm_check_store(); 424 + 425 + ret = call_firmware_op(suspend); 426 + if (ret == -ENOSYS) 427 + ret = cpu_suspend(0, pm_data->cpu_suspend); 428 + if (ret) 429 + return ret; 430 + 431 + if (pm_data->pm_resume_prepare) 432 + pm_data->pm_resume_prepare(); 433 + s3c_pm_restore_uarts(); 434 + 435 + S3C_PMDBG("%s: wakeup stat: %08x\n", __func__, 436 + pmu_raw_readl(S5P_WAKEUP_STAT)); 437 + 438 + s3c_pm_check_restore(); 439 + 440 + S3C_PMDBG("%s: resuming the system...\n", __func__); 441 + 442 + return 0; 443 + } 444 + 445 + static int exynos_suspend_prepare(void) 446 + { 447 + int ret; 448 + 449 + /* 450 + * REVISIT: It would be better if struct platform_suspend_ops 451 + * .prepare handler get the suspend_state_t as a parameter to 452 + * avoid hard-coding the suspend to mem state. 
It's safe to do 453 + * it now only because the suspend_valid_only_mem function is 454 + * used as the .valid callback used to check if a given state 455 + * is supported by the platform anyways. 456 + */ 457 + ret = regulator_suspend_prepare(PM_SUSPEND_MEM); 458 + if (ret) { 459 + pr_err("Failed to prepare regulators for suspend (%d)\n", ret); 460 + return ret; 461 + } 462 + 463 + s3c_pm_check_prepare(); 464 + 465 + return 0; 466 + } 467 + 468 + static void exynos_suspend_finish(void) 469 + { 470 + int ret; 471 + 472 + s3c_pm_check_cleanup(); 473 + 474 + ret = regulator_suspend_finish(); 475 + if (ret) 476 + pr_warn("Failed to resume regulators from suspend (%d)\n", ret); 477 + } 478 + 479 + static const struct platform_suspend_ops exynos_suspend_ops = { 480 + .enter = exynos_suspend_enter, 481 + .prepare = exynos_suspend_prepare, 482 + .finish = exynos_suspend_finish, 483 + .valid = suspend_valid_only_mem, 484 + }; 485 + 486 + static const struct exynos_pm_data exynos4_pm_data = { 487 + .wkup_irq = exynos4_wkup_irq, 488 + .wake_disable_mask = ((0xFF << 8) | (0x1F << 1)), 489 + .release_ret_regs = exynos_release_ret_regs, 490 + .pm_suspend = exynos_pm_suspend, 491 + .pm_resume = exynos_pm_resume, 492 + .pm_prepare = exynos_pm_prepare, 493 + .cpu_suspend = exynos_cpu_suspend, 494 + }; 495 + 496 + static const struct exynos_pm_data exynos5250_pm_data = { 497 + .wkup_irq = exynos5250_wkup_irq, 498 + .wake_disable_mask = ((0xFF << 8) | (0x1F << 1)), 499 + .release_ret_regs = exynos_release_ret_regs, 500 + .extra_save = exynos5_sys_save, 501 + .num_extra_save = ARRAY_SIZE(exynos5_sys_save), 502 + .pm_suspend = exynos_pm_suspend, 503 + .pm_resume = exynos_pm_resume, 504 + .pm_prepare = exynos_pm_prepare, 505 + .cpu_suspend = exynos_cpu_suspend, 506 + }; 507 + 508 + static struct exynos_pm_data exynos5420_pm_data = { 509 + .wkup_irq = exynos5250_wkup_irq, 510 + .wake_disable_mask = (0x7F << 7) | (0x1F << 1), 511 + .release_ret_regs = exynos5420_release_ret_regs, 512 + 
.pm_resume_prepare = exynos5420_prepare_pm_resume, 513 + .pm_resume = exynos5420_pm_resume, 514 + .pm_suspend = exynos5420_pm_suspend, 515 + .pm_prepare = exynos5420_pm_prepare, 516 + .cpu_suspend = exynos5420_cpu_suspend, 517 + }; 518 + 519 + static struct of_device_id exynos_pmu_of_device_ids[] = { 520 + { 521 + .compatible = "samsung,exynos4210-pmu", 522 + .data = &exynos4_pm_data, 523 + }, { 524 + .compatible = "samsung,exynos4212-pmu", 525 + .data = &exynos4_pm_data, 526 + }, { 527 + .compatible = "samsung,exynos4412-pmu", 528 + .data = &exynos4_pm_data, 529 + }, { 530 + .compatible = "samsung,exynos5250-pmu", 531 + .data = &exynos5250_pm_data, 532 + }, { 533 + .compatible = "samsung,exynos5420-pmu", 534 + .data = &exynos5420_pm_data, 535 + }, 536 + { /*sentinel*/ }, 537 + }; 538 + 539 + static struct syscore_ops exynos_pm_syscore_ops; 540 + 541 + void __init exynos_pm_init(void) 542 + { 543 + const struct of_device_id *match; 544 + u32 tmp; 545 + 546 + of_find_matching_node_and_match(NULL, exynos_pmu_of_device_ids, &match); 547 + if (!match) { 548 + pr_err("Failed to find PMU node\n"); 549 + return; 550 + } 551 + pm_data = (struct exynos_pm_data *) match->data; 552 + 553 + /* Platform-specific GIC callback */ 554 + gic_arch_extn.irq_set_wake = exynos_irq_set_wake; 555 + 556 + /* All wakeup disable */ 557 + tmp = pmu_raw_readl(S5P_WAKEUP_MASK); 558 + tmp |= pm_data->wake_disable_mask; 559 + pmu_raw_writel(tmp, S5P_WAKEUP_MASK); 560 + 561 + exynos_pm_syscore_ops.suspend = pm_data->pm_suspend; 562 + exynos_pm_syscore_ops.resume = pm_data->pm_resume; 563 + 564 + register_syscore_ops(&exynos_pm_syscore_ops); 565 + suspend_set_ops(&exynos_suspend_ops); 566 + }
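The `exynos_irq_set_wake` callback in the suspend.c patch above walks a sentinel-terminated table of `{hwirq, mask}` pairs and updates an active-low wake-up mask (a set bit *blocks* the wake source, which is why arming a wake source clears the bit). A minimal standalone sketch of the same pattern, with illustrative names that are not kernel API:

```c
#include <stdint.h>
#include <stddef.h>

struct wkup_irq { unsigned int hwirq; uint32_t mask; };

/* Table terminated by an all-zero sentinel, like exynos4_wkup_irq[]. */
static const struct wkup_irq demo_wkup_irq[] = {
    { 76, 1u << 1 },  /* RTC alarm */
    { 77, 1u << 2 },  /* RTC tick  */
    { 0, 0 },         /* sentinel  */
};

/* Active-low wake mask: everything masked (blocked) by default. */
static uint32_t irqwake_intmask = 0xffffffffu;

/* Returns 0 on success, -1 if hwirq has no wake-up mask bit. */
int demo_irq_set_wake(unsigned int hwirq, int state)
{
    const struct wkup_irq *w;

    for (w = demo_wkup_irq; w->mask; w++) {
        if (w->hwirq == hwirq) {
            if (state)
                irqwake_intmask &= ~w->mask; /* clear bit = allow wake */
            else
                irqwake_intmask |= w->mask;  /* set bit = block wake */
            return 0;
        }
    }
    return -1;
}

uint32_t demo_get_intmask(void) { return irqwake_intmask; }
```

The sentinel terminator is what lets one generic walker serve both the exynos4 and exynos5250 tables selected through `pm_data->wkup_irq`.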
+30 -1
arch/arm/mach-imx/Kconfig
··· 633 633 bool "Vybrid Family VF610 support" 634 634 select ARM_GIC 635 635 select PINCTRL_VF610 636 - select VF_PIT_TIMER 637 636 select PL310_ERRATA_769419 if CACHE_L2X0 638 637 639 638 help 640 639 This enable support for Freescale Vybrid VF610 processor. 640 + 641 + choice 642 + prompt "Clocksource for scheduler clock" 643 + depends on SOC_VF610 644 + default VF_USE_ARM_GLOBAL_TIMER 645 + 646 + config VF_USE_ARM_GLOBAL_TIMER 647 + bool "Use ARM Global Timer" 648 + select ARM_GLOBAL_TIMER 649 + select CLKSRC_ARM_GLOBAL_TIMER_SCHED_CLOCK 650 + help 651 + Use the ARM Global Timer as clocksource 652 + 653 + config VF_USE_PIT_TIMER 654 + bool "Use PIT timer" 655 + select VF_PIT_TIMER 656 + help 657 + Use SoC Periodic Interrupt Timer (PIT) as clocksource 658 + 659 + endchoice 660 + 661 + config SOC_LS1021A 662 + bool "Freescale LS1021A support" 663 + select ARM_GIC 664 + select HAVE_ARM_ARCH_TIMER 665 + select PCI_DOMAINS if PCI 666 + select ZONE_DMA if ARM_LPAE 667 + 668 + help 669 + This enable support for Freescale LS1021A processor. 641 670 642 671 endif 643 672
+4 -2
arch/arm/mach-imx/Makefile
··· 12 12 obj-$(CONFIG_SOC_IMX35) += mm-imx3.o cpu-imx35.o clk-imx35.o ehci-imx35.o pm-imx3.o 13 13 14 14 imx5-pm-$(CONFIG_PM) += pm-imx5.o 15 - obj-$(CONFIG_SOC_IMX5) += cpu-imx5.o clk-imx51-imx53.o $(imx5-pm-y) 15 + obj-$(CONFIG_SOC_IMX5) += cpu-imx5.o clk-imx51-imx53.o clk-cpu.o $(imx5-pm-y) 16 16 17 17 obj-$(CONFIG_COMMON_CLK) += clk-pllv1.o clk-pllv2.o clk-pllv3.o clk-gate2.o \ 18 18 clk-pfd.o clk-busy.o clk.o \ ··· 89 89 obj-$(CONFIG_HAVE_IMX_GPC) += gpc.o 90 90 obj-$(CONFIG_HAVE_IMX_MMDC) += mmdc.o 91 91 obj-$(CONFIG_HAVE_IMX_SRC) += src.o 92 - ifdef CONFIG_SOC_IMX6 92 + ifneq ($(CONFIG_SOC_IMX6)$(CONFIG_SOC_LS1021A),) 93 93 AFLAGS_headsmp.o :=-Wa,-march=armv7-a 94 94 obj-$(CONFIG_SMP) += headsmp.o platsmp.o 95 95 obj-$(CONFIG_HOTPLUG_CPU) += hotplug.o ··· 109 109 obj-$(CONFIG_SOC_IMX53) += mach-imx53.o 110 110 111 111 obj-$(CONFIG_SOC_VF610) += clk-vf610.o mach-vf610.o 112 + 113 + obj-$(CONFIG_SOC_LS1021A) += mach-ls1021a.o 112 114 113 115 obj-y += devices/
+32 -2
arch/arm/mach-imx/anatop.c
··· 30 30 #define ANADIG_DIGPROG_IMX6SL 0x280 31 31 32 32 #define BM_ANADIG_REG_2P5_ENABLE_WEAK_LINREG 0x40000 33 + #define BM_ANADIG_REG_2P5_ENABLE_PULLDOWN 0x8 33 34 #define BM_ANADIG_REG_CORE_FET_ODRIVE 0x20000000 34 35 #define BM_ANADIG_ANA_MISC0_STOP_MODE_CONFIG 0x1000 36 + /* Below MISC0_DISCON_HIGH_SNVS is only for i.MX6SL */ 37 + #define BM_ANADIG_ANA_MISC0_DISCON_HIGH_SNVS 0x2000 35 38 #define BM_ANADIG_USB_CHRG_DETECT_CHK_CHRG_B 0x80000 36 39 #define BM_ANADIG_USB_CHRG_DETECT_EN_B 0x100000 37 40 ··· 59 56 BM_ANADIG_REG_CORE_FET_ODRIVE); 60 57 } 61 58 59 + static inline void imx_anatop_enable_2p5_pulldown(bool enable) 60 + { 61 + regmap_write(anatop, ANADIG_REG_2P5 + (enable ? REG_SET : REG_CLR), 62 + BM_ANADIG_REG_2P5_ENABLE_PULLDOWN); 63 + } 64 + 65 + static inline void imx_anatop_disconnect_high_snvs(bool enable) 66 + { 67 + regmap_write(anatop, ANADIG_ANA_MISC0 + (enable ? REG_SET : REG_CLR), 68 + BM_ANADIG_ANA_MISC0_DISCON_HIGH_SNVS); 69 + } 70 + 62 71 void imx_anatop_pre_suspend(void) 63 72 { 64 - imx_anatop_enable_weak2p5(true); 73 + if (imx_mmdc_get_ddr_type() == IMX_DDR_TYPE_LPDDR2) 74 + imx_anatop_enable_2p5_pulldown(true); 75 + else 76 + imx_anatop_enable_weak2p5(true); 77 + 65 78 imx_anatop_enable_fet_odrive(true); 79 + 80 + if (cpu_is_imx6sl()) 81 + imx_anatop_disconnect_high_snvs(true); 66 82 } 67 83 68 84 void imx_anatop_post_resume(void) 69 85 { 86 + if (imx_mmdc_get_ddr_type() == IMX_DDR_TYPE_LPDDR2) 87 + imx_anatop_enable_2p5_pulldown(false); 88 + else 89 + imx_anatop_enable_weak2p5(false); 90 + 70 91 imx_anatop_enable_fet_odrive(false); 71 - imx_anatop_enable_weak2p5(false); 92 + 93 + if (cpu_is_imx6sl()) 94 + imx_anatop_disconnect_high_snvs(false); 95 + 72 96 } 73 97 74 98 static void imx_anatop_usb_chrg_detect_disable(void)
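The anatop.c patch repeatedly uses `regmap_write(anatop, REG + (enable ? REG_SET : REG_CLR), bit)`: on anatop-style blocks each register has write-1-to-set and write-1-to-clear aliases at fixed offsets, so a bit can be flipped atomically without a read-modify-write. A toy model of that idiom (the 0x10 register stride and the 0x4/0x8 alias offsets are assumptions of this sketch, matching common i.MX layouts):

```c
#include <stdint.h>

#define REG_SET 0x4  /* write-1-to-set alias offset (assumed) */
#define REG_CLR 0x8  /* write-1-to-clear alias offset (assumed) */

static uint32_t regs[16]; /* toy register file, one register per 0x10 stride */

static void toy_regmap_write(uint32_t offset, uint32_t val)
{
    uint32_t idx = offset >> 4; /* register index from the 0x10 stride */

    switch (offset & 0xfu) {
    case REG_SET: regs[idx] |= val;  break; /* set bits, leave others alone */
    case REG_CLR: regs[idx] &= ~val; break; /* clear bits */
    default:      regs[idx] = val;   break; /* plain write */
    }
}

/* Mirrors the shape of imx_anatop_enable_2p5_pulldown(): one write, no RMW. */
void toy_enable_bit(uint32_t reg, uint32_t bit, int enable)
{
    toy_regmap_write(reg + (enable ? REG_SET : REG_CLR), bit);
}

uint32_t toy_read(uint32_t reg) { return regs[reg >> 4]; }
```

Because SET/CLR writes touch only the bits passed in, two drivers can update different bits of the same register without locking against each other.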
+107
arch/arm/mach-imx/clk-cpu.c
··· 1 + /* 2 + * Copyright (c) 2014 Lucas Stach <l.stach@pengutronix.de>, Pengutronix 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License version 2 as 6 + * published by the Free Software Foundation. 7 + * 8 + * http://www.opensource.org/licenses/gpl-license.html 9 + * http://www.gnu.org/copyleft/gpl.html 10 + */ 11 + 12 + #include <linux/clk.h> 13 + #include <linux/clk-provider.h> 14 + #include <linux/slab.h> 15 + 16 + struct clk_cpu { 17 + struct clk_hw hw; 18 + struct clk *div; 19 + struct clk *mux; 20 + struct clk *pll; 21 + struct clk *step; 22 + }; 23 + 24 + static inline struct clk_cpu *to_clk_cpu(struct clk_hw *hw) 25 + { 26 + return container_of(hw, struct clk_cpu, hw); 27 + } 28 + 29 + static unsigned long clk_cpu_recalc_rate(struct clk_hw *hw, 30 + unsigned long parent_rate) 31 + { 32 + struct clk_cpu *cpu = to_clk_cpu(hw); 33 + 34 + return clk_get_rate(cpu->div); 35 + } 36 + 37 + static long clk_cpu_round_rate(struct clk_hw *hw, unsigned long rate, 38 + unsigned long *prate) 39 + { 40 + struct clk_cpu *cpu = to_clk_cpu(hw); 41 + 42 + return clk_round_rate(cpu->pll, rate); 43 + } 44 + 45 + static int clk_cpu_set_rate(struct clk_hw *hw, unsigned long rate, 46 + unsigned long parent_rate) 47 + { 48 + struct clk_cpu *cpu = to_clk_cpu(hw); 49 + int ret; 50 + 51 + /* switch to PLL bypass clock */ 52 + ret = clk_set_parent(cpu->mux, cpu->step); 53 + if (ret) 54 + return ret; 55 + 56 + /* reprogram PLL */ 57 + ret = clk_set_rate(cpu->pll, rate); 58 + if (ret) { 59 + clk_set_parent(cpu->mux, cpu->pll); 60 + return ret; 61 + } 62 + /* switch back to PLL clock */ 63 + clk_set_parent(cpu->mux, cpu->pll); 64 + 65 + /* Ensure the divider is what we expect */ 66 + clk_set_rate(cpu->div, rate); 67 + 68 + return 0; 69 + } 70 + 71 + static const struct clk_ops clk_cpu_ops = { 72 + .recalc_rate = clk_cpu_recalc_rate, 73 + .round_rate = clk_cpu_round_rate, 74 + .set_rate = 
clk_cpu_set_rate, 75 + }; 76 + 77 + struct clk *imx_clk_cpu(const char *name, const char *parent_name, 78 + struct clk *div, struct clk *mux, struct clk *pll, 79 + struct clk *step) 80 + { 81 + struct clk_cpu *cpu; 82 + struct clk *clk; 83 + struct clk_init_data init; 84 + 85 + cpu = kzalloc(sizeof(*cpu), GFP_KERNEL); 86 + if (!cpu) 87 + return ERR_PTR(-ENOMEM); 88 + 89 + cpu->div = div; 90 + cpu->mux = mux; 91 + cpu->pll = pll; 92 + cpu->step = step; 93 + 94 + init.name = name; 95 + init.ops = &clk_cpu_ops; 96 + init.flags = 0; 97 + init.parent_names = &parent_name; 98 + init.num_parents = 1; 99 + 100 + cpu->hw.init = &init; 101 + 102 + clk = clk_register(NULL, &cpu->hw); 103 + if (IS_ERR(clk)) 104 + kfree(cpu); 105 + 106 + return clk; 107 + }
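`clk_cpu_set_rate()` in clk-cpu.c above implements glitch-free CPU reclocking: park the CPU mux on the fixed "step" bypass clock, reprogram the PLL while nothing runs from it, then switch back. A self-contained toy model of that sequence (the structs and helpers are stand-ins, not the kernel clk API):

```c
#include <stddef.h>

struct toy_clk {
    unsigned long rate;
    struct toy_clk *parent;
};

static int toy_set_parent(struct toy_clk *mux, struct toy_clk *parent)
{
    mux->parent = parent;
    mux->rate = parent->rate; /* mux passes its parent's rate through */
    return 0;
}

static int toy_set_rate(struct toy_clk *clk, unsigned long rate)
{
    clk->rate = rate;
    return 0;
}

/* Same three-step dance as clk_cpu_set_rate(). */
int toy_cpu_set_rate(struct toy_clk *mux, struct toy_clk *pll,
                     struct toy_clk *step, unsigned long rate)
{
    int ret;

    ret = toy_set_parent(mux, step);   /* 1. park CPU on the bypass clock */
    if (ret)
        return ret;

    ret = toy_set_rate(pll, rate);     /* 2. reprogram the PLL */
    if (ret) {
        toy_set_parent(mux, pll);      /*    restore the old parent on error */
        return ret;
    }

    return toy_set_parent(mux, pll);   /* 3. switch back to the PLL */
}

/* Run the sequence against a 24 MHz step clock and report the final rate. */
unsigned long toy_demo(unsigned long rate)
{
    struct toy_clk step = { 24000000, NULL };
    struct toy_clk pll  = { 800000000, NULL };
    struct toy_clk mux  = { 800000000, &pll };

    toy_cpu_set_rate(&mux, &pll, &step, rate);
    return mux.rate;
}
```

The error path matters: if the PLL rejects the rate, the mux is moved back to the PLL so the CPU is never left running from the slow bypass clock.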
+13 -1
arch/arm/mach-imx/clk-imx51-imx53.c
··· 125 125 static const char *spdif_sel[] = { "pll1_sw", "pll2_sw", "pll3_sw", "spdif_xtal_sel", }; 126 126 static const char *spdif0_com_sel[] = { "spdif0_podf", "ssi1_root_gate", }; 127 127 static const char *mx51_spdif1_com_sel[] = { "spdif1_podf", "ssi2_root_gate", }; 128 + static const char *step_sels[] = { "lp_apm", }; 129 + static const char *cpu_podf_sels[] = { "pll1_sw", "step_sel" }; 128 130 129 131 static struct clk *clk[IMX5_CLK_END]; 130 132 static struct clk_onecell_data clk_data; ··· 195 193 clk[IMX5_CLK_USB_PHY_PODF] = imx_clk_divider("usb_phy_podf", "usb_phy_pred", MXC_CCM_CDCDR, 0, 3); 196 194 clk[IMX5_CLK_USB_PHY_SEL] = imx_clk_mux("usb_phy_sel", MXC_CCM_CSCMR1, 26, 1, 197 195 usb_phy_sel_str, ARRAY_SIZE(usb_phy_sel_str)); 198 - clk[IMX5_CLK_CPU_PODF] = imx_clk_divider("cpu_podf", "pll1_sw", MXC_CCM_CACRR, 0, 3); 196 + clk[IMX5_CLK_STEP_SEL] = imx_clk_mux("step_sel", MXC_CCM_CCSR, 7, 2, step_sels, ARRAY_SIZE(step_sels)); 197 + clk[IMX5_CLK_CPU_PODF_SEL] = imx_clk_mux("cpu_podf_sel", MXC_CCM_CCSR, 2, 1, cpu_podf_sels, ARRAY_SIZE(cpu_podf_sels)); 198 + clk[IMX5_CLK_CPU_PODF] = imx_clk_divider("cpu_podf", "cpu_podf_sel", MXC_CCM_CACRR, 0, 3); 199 199 clk[IMX5_CLK_DI_PRED] = imx_clk_divider("di_pred", "pll3_sw", MXC_CCM_CDCDR, 6, 3); 200 200 clk[IMX5_CLK_IIM_GATE] = imx_clk_gate2("iim_gate", "ipg", MXC_CCM_CCGR0, 30); 201 201 clk[IMX5_CLK_UART1_IPG_GATE] = imx_clk_gate2("uart1_ipg_gate", "ipg", MXC_CCM_CCGR1, 6); ··· 541 537 clk[IMX5_CLK_CKO2] = imx_clk_gate2("cko2", "cko2_podf", MXC_CCM_CCOSR, 24); 542 538 clk[IMX5_CLK_SPDIF_XTAL_SEL] = imx_clk_mux("spdif_xtal_sel", MXC_CCM_CSCMR1, 2, 2, 543 539 mx53_spdif_xtal_sel, ARRAY_SIZE(mx53_spdif_xtal_sel)); 540 + clk[IMX5_CLK_ARM] = imx_clk_cpu("arm", "cpu_podf", 541 + clk[IMX5_CLK_CPU_PODF], 542 + clk[IMX5_CLK_CPU_PODF_SEL], 543 + clk[IMX5_CLK_PLL1_SW], 544 + clk[IMX5_CLK_STEP_SEL]); 544 545 545 546 imx_check_clocks(clk, ARRAY_SIZE(clk)); 546 547 ··· 559 550 560 551 /* move can bus clk to 24MHz */ 561 552 
clk_set_parent(clk[IMX5_CLK_CAN_SEL], clk[IMX5_CLK_LP_APM]); 553 + 554 + /* make sure step clock is running from 24MHz */ 555 + clk_set_parent(clk[IMX5_CLK_STEP_SEL], clk[IMX5_CLK_LP_APM]); 562 556 563 557 clk_prepare_enable(clk[IMX5_CLK_IIM_GATE]); 564 558 imx_print_silicon_rev("i.MX53", mx53_revision());
+16 -5
arch/arm/mach-imx/clk-vf610.c
··· 120 120 VF610_CLK_DDR_SEL, 121 121 }; 122 122 123 + static struct clk * __init vf610_get_fixed_clock( 124 + struct device_node *ccm_node, const char *name) 125 + { 126 + struct clk *clk = of_clk_get_by_name(ccm_node, name); 127 + 128 + /* Backward compatibility if device tree is missing clks assignments */ 129 + if (IS_ERR(clk)) 130 + clk = imx_obtain_fixed_clock(name, 0); 131 + return clk; 132 + }; 133 + 123 134 static void __init vf610_clocks_init(struct device_node *ccm_node) 124 135 { 125 136 struct device_node *np; ··· 141 130 clk[VF610_CLK_SIRC_32K] = imx_clk_fixed("sirc_32k", 32000); 142 131 clk[VF610_CLK_FIRC] = imx_clk_fixed("firc", 24000000); 143 132 144 - clk[VF610_CLK_SXOSC] = imx_obtain_fixed_clock("sxosc", 0); 145 - clk[VF610_CLK_FXOSC] = imx_obtain_fixed_clock("fxosc", 0); 146 - clk[VF610_CLK_AUDIO_EXT] = imx_obtain_fixed_clock("audio_ext", 0); 147 - clk[VF610_CLK_ENET_EXT] = imx_obtain_fixed_clock("enet_ext", 0); 133 + clk[VF610_CLK_SXOSC] = vf610_get_fixed_clock(ccm_node, "sxosc"); 134 + clk[VF610_CLK_FXOSC] = vf610_get_fixed_clock(ccm_node, "fxosc"); 135 + clk[VF610_CLK_AUDIO_EXT] = vf610_get_fixed_clock(ccm_node, "audio_ext"); 136 + clk[VF610_CLK_ENET_EXT] = vf610_get_fixed_clock(ccm_node, "enet_ext"); 148 137 149 138 /* Clock source from external clock via LVDs PAD */ 150 - clk[VF610_CLK_ANACLK1] = imx_obtain_fixed_clock("anaclk1", 0); 139 + clk[VF610_CLK_ANACLK1] = vf610_get_fixed_clock(ccm_node, "anaclk1"); 151 140 152 141 clk[VF610_CLK_FXOSC_HALF] = imx_clk_fixed_factor("fxosc_half", "fxosc", 1, 2); 153 142
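`vf610_get_fixed_clock()` above is a backward-compatibility shim: prefer the clock named in the device tree, and only fall back to a locally created fixed clock when an older DTB lacks the assignment. The lookup machinery below is a hypothetical stand-in that sketches the same try-then-fallback shape:

```c
#include <stddef.h>
#include <string.h>

struct toy_clk { const char *name; unsigned long rate; };

/* Clocks the (toy) device tree happens to provide. */
static struct toy_clk dt_clks[] = {
    { "fxosc", 24000000 },
};

/* Stands in for of_clk_get_by_name(); NULL plays the role of ERR_PTR(). */
static struct toy_clk *toy_of_clk_get_by_name(const char *name)
{
    size_t i;

    for (i = 0; i < sizeof(dt_clks) / sizeof(dt_clks[0]); i++)
        if (strcmp(dt_clks[i].name, name) == 0)
            return &dt_clks[i];
    return NULL;
}

static struct toy_clk fallback = { "dummy", 0 };

struct toy_clk *toy_get_fixed_clock(const char *name)
{
    struct toy_clk *clk = toy_of_clk_get_by_name(name);

    /* Backward compatibility if the DT is missing clock assignments */
    return clk ? clk : &fallback;
}
```

Callers never see the failure: a missing DT clock quietly degrades to the fallback instead of aborting early boot clock setup.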
+4
arch/arm/mach-imx/clk.h
··· 131 131 CLK_SET_RATE_PARENT, mult, div); 132 132 } 133 133 134 + struct clk *imx_clk_cpu(const char *name, const char *parent_name, 135 + struct clk *div, struct clk *mux, struct clk *pll, 136 + struct clk *step); 137 + 134 138 #endif
+2
arch/arm/mach-imx/common.h
··· 115 115 int imx6q_set_lpm(enum mxc_cpu_pwr_mode mode); 116 116 void imx6q_set_int_mem_clk_lpm(bool enable); 117 117 void imx6sl_set_wait_clk(bool enter); 118 + int imx_mmdc_get_ddr_type(void); 118 119 119 120 void imx_cpu_die(unsigned int cpu); 120 121 int imx_cpu_kill(unsigned int cpu); ··· 157 156 #endif 158 157 159 158 extern struct smp_operations imx_smp_ops; 159 + extern struct smp_operations ls1021a_smp_ops; 160 160 161 161 #endif
+2
arch/arm/mach-imx/mach-imx53.c
··· 40 40 static void __init imx53_init_late(void) 41 41 { 42 42 imx53_pm_init(); 43 + 44 + platform_device_register_simple("cpufreq-dt", -1, NULL, 0); 43 45 } 44 46 45 47 static const char * const imx53_dt_board_compat[] __initconst = {
+51
arch/arm/mach-imx/mach-imx6sx.c
··· 8 8 9 9 #include <linux/irqchip.h> 10 10 #include <linux/of_platform.h> 11 + #include <linux/phy.h> 12 + #include <linux/regmap.h> 13 + #include <linux/mfd/syscon.h> 14 + #include <linux/mfd/syscon/imx6q-iomuxc-gpr.h> 11 15 #include <asm/mach/arch.h> 12 16 #include <asm/mach/map.h> 13 17 14 18 #include "common.h" 15 19 #include "cpuidle.h" 20 + 21 + static int ar8031_phy_fixup(struct phy_device *dev) 22 + { 23 + u16 val; 24 + 25 + /* Set RGMII IO voltage to 1.8V */ 26 + phy_write(dev, 0x1d, 0x1f); 27 + phy_write(dev, 0x1e, 0x8); 28 + 29 + /* introduce tx clock delay */ 30 + phy_write(dev, 0x1d, 0x5); 31 + val = phy_read(dev, 0x1e); 32 + val |= 0x0100; 33 + phy_write(dev, 0x1e, val); 34 + 35 + return 0; 36 + } 37 + 38 + #define PHY_ID_AR8031 0x004dd074 39 + static void __init imx6sx_enet_phy_init(void) 40 + { 41 + if (IS_BUILTIN(CONFIG_PHYLIB)) 42 + phy_register_fixup_for_uid(PHY_ID_AR8031, 0xffffffff, 43 + ar8031_phy_fixup); 44 + } 45 + 46 + static void __init imx6sx_enet_clk_sel(void) 47 + { 48 + struct regmap *gpr; 49 + 50 + gpr = syscon_regmap_lookup_by_compatible("fsl,imx6sx-iomuxc-gpr"); 51 + if (!IS_ERR(gpr)) { 52 + regmap_update_bits(gpr, IOMUXC_GPR1, 53 + IMX6SX_GPR1_FEC_CLOCK_MUX_SEL_MASK, 0); 54 + regmap_update_bits(gpr, IOMUXC_GPR1, 55 + IMX6SX_GPR1_FEC_CLOCK_PAD_DIR_MASK, 0); 56 + } else { 57 + pr_err("failed to find fsl,imx6sx-iomux-gpr regmap\n"); 58 + } 59 + } 60 + 61 + static inline void imx6sx_enet_init(void) 62 + { 63 + imx6sx_enet_phy_init(); 64 + imx6sx_enet_clk_sel(); 65 + } 16 66 17 67 static void __init imx6sx_init_machine(void) 18 68 { ··· 74 24 75 25 of_platform_populate(NULL, of_default_bus_match_table, NULL, parent); 76 26 27 + imx6sx_enet_init(); 77 28 imx_anatop_init(); 78 29 imx6sx_pm_init(); 79 30 }
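The AR8031 fixup in mach-imx6sx.c programs vendor "debug" registers indirectly: write the debug-register index to MII register 0x1d, then access the value through MII register 0x1e. A toy PHY model of that address/data scheme (register layout per the diff above; the struct and helpers are illustrative):

```c
#include <stdint.h>

#define MII_DBG_ADDR 0x1d  /* index latch */
#define MII_DBG_DATA 0x1e  /* data window onto the selected debug register */

struct toy_phy {
    uint16_t dbg_addr;  /* last index written to 0x1d */
    uint16_t dbg[32];   /* debug register bank behind the window */
};

void toy_phy_write(struct toy_phy *phy, int reg, uint16_t val)
{
    if (reg == MII_DBG_ADDR)
        phy->dbg_addr = val & 0x1f;
    else if (reg == MII_DBG_DATA)
        phy->dbg[phy->dbg_addr] = val;
}

uint16_t toy_phy_read(struct toy_phy *phy, int reg)
{
    return (reg == MII_DBG_DATA) ? phy->dbg[phy->dbg_addr] : 0;
}

/* Same read-modify-write as the tx-clock-delay part of ar8031_phy_fixup(). */
void toy_enable_tx_delay(struct toy_phy *phy)
{
    uint16_t val;

    toy_phy_write(phy, MII_DBG_ADDR, 0x5);          /* select debug reg 5 */
    val = toy_phy_read(phy, MII_DBG_DATA);
    toy_phy_write(phy, MII_DBG_DATA, val | 0x0100); /* set the delay bit */
}

/* Run the fixup on a fresh PHY and return the resulting debug reg 5. */
uint16_t toy_demo_tx_delay(void)
{
    struct toy_phy phy = { 0, { 0 } };

    toy_enable_tx_delay(&phy);
    return phy.dbg[5];
}
```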
+22
arch/arm/mach-imx/mach-ls1021a.c
··· 1 + /* 2 + * Copyright 2013-2014 Freescale Semiconductor, Inc. 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License as published by 6 + * the Free Software Foundation; either version 2 of the License, or 7 + * (at your option) any later version. 8 + */ 9 + 10 + #include <asm/mach/arch.h> 11 + 12 + #include "common.h" 13 + 14 + static const char * const ls1021a_dt_compat[] __initconst = { 15 + "fsl,ls1021a", 16 + NULL, 17 + }; 18 + 19 + DT_MACHINE_START(LS1021A, "Freescale LS1021A") 20 + .smp = smp_ops(ls1021a_smp_ops), 21 + .dt_compat = ls1021a_dt_compat, 22 + MACHINE_END
+17
arch/arm/mach-imx/mmdc.c
··· 21 21 #define BP_MMDC_MAPSR_PSD 0 22 22 #define BP_MMDC_MAPSR_PSS 4 23 23 24 + #define MMDC_MDMISC 0x18 25 + #define BM_MMDC_MDMISC_DDR_TYPE 0x18 26 + #define BP_MMDC_MDMISC_DDR_TYPE 0x3 27 + 28 + static int ddr_type; 29 + 24 30 static int imx_mmdc_probe(struct platform_device *pdev) 25 31 { 26 32 struct device_node *np = pdev->dev.of_node; ··· 36 30 37 31 mmdc_base = of_iomap(np, 0); 38 32 WARN_ON(!mmdc_base); 33 + 34 + reg = mmdc_base + MMDC_MDMISC; 35 + /* Get ddr type */ 36 + val = readl_relaxed(reg); 37 + ddr_type = (val & BM_MMDC_MDMISC_DDR_TYPE) >> 38 + BP_MMDC_MDMISC_DDR_TYPE; 39 39 40 40 reg = mmdc_base + MMDC_MAPSR; 41 41 ··· 61 49 } 62 50 63 51 return 0; 52 + } 53 + 54 + int imx_mmdc_get_ddr_type(void) 55 + { 56 + return ddr_type; 64 57 } 65 58 66 59 static struct of_device_id imx_mmdc_dt_ids[] = {
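mmdc.c decodes the DDR type with the kernel's usual BM/BP pair: `BM_MMDC_MDMISC_DDR_TYPE` (0x18) masks the field in bits 4:3 and `BP_MMDC_MDMISC_DDR_TYPE` (3) shifts it down; per mxc.h, a field value of 1 means LPDDR2. The decode in isolation:

```c
#include <stdint.h>

#define BM_DDR_TYPE     0x18  /* field mask: bits 4:3 of MDMISC */
#define BP_DDR_TYPE     3     /* field position */
#define DDR_TYPE_LPDDR2 1     /* IMX_DDR_TYPE_LPDDR2 */

/* Extract the DDR type field from an MDMISC register value. */
int toy_decode_ddr_type(uint32_t mdmisc)
{
    return (int)((mdmisc & BM_DDR_TYPE) >> BP_DDR_TYPE);
}
```

This is the value `imx_mmdc_get_ddr_type()` caches at probe time, which pm-imx6.c and suspend-imx6.S then branch on instead of the CPU type.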
+2
arch/arm/mach-imx/mxc.h
··· 55 55 #define IMX_CHIP_REVISION_3_3 0x33 56 56 #define IMX_CHIP_REVISION_UNKNOWN 0xff 57 57 58 + #define IMX_DDR_TYPE_LPDDR2 1 59 + 58 60 #ifndef __ASSEMBLY__ 59 61 extern unsigned int __mxc_cpu_type; 60 62 #endif
+33
arch/arm/mach-imx/platsmp.c
··· 11 11 */ 12 12 13 13 #include <linux/init.h> 14 + #include <linux/of_address.h> 15 + #include <linux/of.h> 14 16 #include <linux/smp.h> 17 + 15 18 #include <asm/cacheflush.h> 16 19 #include <asm/page.h> 17 20 #include <asm/smp_scu.h> ··· 96 93 .cpu_die = imx_cpu_die, 97 94 .cpu_kill = imx_cpu_kill, 98 95 #endif 96 + }; 97 + 98 + #define DCFG_CCSR_SCRATCHRW1 0x200 99 + 100 + static int ls1021a_boot_secondary(unsigned int cpu, struct task_struct *idle) 101 + { 102 + arch_send_wakeup_ipi_mask(cpumask_of(cpu)); 103 + 104 + return 0; 105 + } 106 + 107 + static void __init ls1021a_smp_prepare_cpus(unsigned int max_cpus) 108 + { 109 + struct device_node *np; 110 + void __iomem *dcfg_base; 111 + unsigned long paddr; 112 + 113 + np = of_find_compatible_node(NULL, NULL, "fsl,ls1021a-dcfg"); 114 + dcfg_base = of_iomap(np, 0); 115 + BUG_ON(!dcfg_base); 116 + 117 + paddr = virt_to_phys(secondary_startup); 118 + writel_relaxed(cpu_to_be32(paddr), dcfg_base + DCFG_CCSR_SCRATCHRW1); 119 + 120 + iounmap(dcfg_base); 121 + } 122 + 123 + struct smp_operations ls1021a_smp_ops __initdata = { 124 + .smp_prepare_cpus = ls1021a_smp_prepare_cpus, 125 + .smp_boot_secondary = ls1021a_boot_secondary, 99 126 };
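`ls1021a_smp_prepare_cpus()` stores the secondary entry point with `writel_relaxed(cpu_to_be32(paddr), ...)`: the DCFG scratch register is big-endian, so on a little-endian CPU the address must be byte-swapped before the write. A standalone version of that swap (assuming a little-endian host, where `cpu_to_be32` reduces to a byte reversal):

```c
#include <stdint.h>

/* Byte-reverse a 32-bit value, as cpu_to_be32() does on little-endian. */
uint32_t toy_cpu_to_be32(uint32_t x)
{
    return ((x & 0x000000ffu) << 24) |
           ((x & 0x0000ff00u) <<  8) |
           ((x & 0x00ff0000u) >>  8) |
           ((x & 0xff000000u) >> 24);
}
```

The swap is its own inverse, which is why the secondary core's big-endian read of the scratch register recovers the original physical address.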
+3 -7
arch/arm/mach-imx/pm-imx6.c
··· 88 88 }; 89 89 90 90 struct imx6_pm_socdata { 91 - u32 cpu_type; 91 + u32 ddr_type; 92 92 const char *mmdc_compat; 93 93 const char *src_compat; 94 94 const char *iomuxc_compat; ··· 138 138 }; 139 139 140 140 static const struct imx6_pm_socdata imx6q_pm_data __initconst = { 141 - .cpu_type = MXC_CPU_IMX6Q, 142 141 .mmdc_compat = "fsl,imx6q-mmdc", 143 142 .src_compat = "fsl,imx6q-src", 144 143 .iomuxc_compat = "fsl,imx6q-iomuxc", ··· 147 148 }; 148 149 149 150 static const struct imx6_pm_socdata imx6dl_pm_data __initconst = { 150 - .cpu_type = MXC_CPU_IMX6DL, 151 151 .mmdc_compat = "fsl,imx6q-mmdc", 152 152 .src_compat = "fsl,imx6q-src", 153 153 .iomuxc_compat = "fsl,imx6dl-iomuxc", ··· 156 158 }; 157 159 158 160 static const struct imx6_pm_socdata imx6sl_pm_data __initconst = { 159 - .cpu_type = MXC_CPU_IMX6SL, 160 161 .mmdc_compat = "fsl,imx6sl-mmdc", 161 162 .src_compat = "fsl,imx6sl-src", 162 163 .iomuxc_compat = "fsl,imx6sl-iomuxc", ··· 165 168 }; 166 169 167 170 static const struct imx6_pm_socdata imx6sx_pm_data __initconst = { 168 - .cpu_type = MXC_CPU_IMX6SX, 169 171 .mmdc_compat = "fsl,imx6sx-mmdc", 170 172 .src_compat = "fsl,imx6sx-src", 171 173 .iomuxc_compat = "fsl,imx6sx-iomuxc", ··· 183 187 struct imx6_cpu_pm_info { 184 188 phys_addr_t pbase; /* The physical address of pm_info. */ 185 189 phys_addr_t resume_addr; /* The physical resume address for asm code */ 186 - u32 cpu_type; 190 + u32 ddr_type; 187 191 u32 pm_info_size; /* Size of pm_info. */ 188 192 struct imx6_pm_base mmdc_base; 189 193 struct imx6_pm_base src_base; ··· 517 521 goto pl310_cache_map_failed; 518 522 } 519 523 520 - pm_info->cpu_type = socdata->cpu_type; 524 + pm_info->ddr_type = imx_mmdc_get_ddr_type(); 521 525 pm_info->mmdc_io_num = socdata->mmdc_io_num; 522 526 mmdc_offset_array = socdata->mmdc_io_offset; 523 527
+7 -7
arch/arm/mach-imx/suspend-imx6.S
··· 45 45 */ 46 46 #define PM_INFO_PBASE_OFFSET 0x0 47 47 #define PM_INFO_RESUME_ADDR_OFFSET 0x4 48 - #define PM_INFO_CPU_TYPE_OFFSET 0x8 48 + #define PM_INFO_DDR_TYPE_OFFSET 0x8 49 49 #define PM_INFO_PM_INFO_SIZE_OFFSET 0xC 50 50 #define PM_INFO_MX6Q_MMDC_P_OFFSET 0x10 51 51 #define PM_INFO_MX6Q_MMDC_V_OFFSET 0x14 ··· 110 110 ldreq r11, [r0, #PM_INFO_MX6Q_MMDC_V_OFFSET] 111 111 ldrne r11, [r0, #PM_INFO_MX6Q_MMDC_P_OFFSET] 112 112 113 - cmp r3, #MXC_CPU_IMX6SL 113 + cmp r3, #IMX_DDR_TYPE_LPDDR2 114 114 bne 4f 115 115 116 116 /* reset read FIFO, RST_RD_FIFO */ ··· 151 151 ENTRY(imx6_suspend) 152 152 ldr r1, [r0, #PM_INFO_PBASE_OFFSET] 153 153 ldr r2, [r0, #PM_INFO_RESUME_ADDR_OFFSET] 154 - ldr r3, [r0, #PM_INFO_CPU_TYPE_OFFSET] 154 + ldr r3, [r0, #PM_INFO_DDR_TYPE_OFFSET] 155 155 ldr r4, [r0, #PM_INFO_PM_INFO_SIZE_OFFSET] 156 156 157 157 /* ··· 209 209 ldr r7, [r0, #PM_INFO_MMDC_IO_NUM_OFFSET] 210 210 ldr r8, =PM_INFO_MMDC_IO_VAL_OFFSET 211 211 add r8, r8, r0 212 - /* i.MX6SL's last 3 IOs need special setting */ 213 - cmp r3, #MXC_CPU_IMX6SL 212 + /* LPDDR2's last 3 IOs need special setting */ 213 + cmp r3, #IMX_DDR_TYPE_LPDDR2 214 214 subeq r7, r7, #0x3 215 215 set_mmdc_io_lpm: 216 216 ldr r9, [r8], #0x8 ··· 218 218 subs r7, r7, #0x1 219 219 bne set_mmdc_io_lpm 220 220 221 - cmp r3, #MXC_CPU_IMX6SL 221 + cmp r3, #IMX_DDR_TYPE_LPDDR2 222 222 bne set_mmdc_io_lpm_done 223 223 ldr r6, =0x1000 224 224 ldr r9, [r8], #0x8 ··· 324 324 str r7, [r11, #MX6Q_SRC_GPR1] 325 325 str r7, [r11, #MX6Q_SRC_GPR2] 326 326 327 - ldr r3, [r0, #PM_INFO_CPU_TYPE_OFFSET] 327 + ldr r3, [r0, #PM_INFO_DDR_TYPE_OFFSET] 328 328 mov r5, #0x1 329 329 resume_mmdc 330 330
+23
arch/arm/mach-integrator/Kconfig
··· 1 + config ARCH_INTEGRATOR 2 + bool "ARM Ltd. Integrator family" if (ARCH_MULTI_V4T || ARCH_MULTI_V5 || ARCH_MULTI_V6) 3 + select ARM_AMBA 4 + select ARM_PATCH_PHYS_VIRT if MMU 5 + select AUTO_ZRELADDR 6 + select COMMON_CLK 7 + select COMMON_CLK_VERSATILE 8 + select GENERIC_CLOCKEVENTS 9 + select HAVE_TCM 10 + select ICST 11 + select MFD_SYSCON 12 + select MULTI_IRQ_HANDLER 13 + select PLAT_VERSATILE 14 + select POWER_RESET 15 + select POWER_RESET_VERSATILE 16 + select POWER_SUPPLY 17 + select SOC_INTEGRATOR_CM 18 + select SPARSE_IRQ 19 + select USE_OF 20 + select VERSATILE_FPGA_IRQ 21 + help 22 + Support for ARM's Integrator platform. 23 + 1 24 if ARCH_INTEGRATOR 2 25 3 26 menu "Integrator Options"
+1 -1
arch/arm/mach-integrator/Makefile
··· 4 4 5 5 # Object file lists. 6 6 7 - obj-y := core.o lm.o leds.o 7 + obj-y := core.o lm.o 8 8 obj-$(CONFIG_ARCH_INTEGRATOR_AP) += integrator_ap.o 9 9 obj-$(CONFIG_ARCH_INTEGRATOR_CP) += integrator_cp.o 10 10
-1
arch/arm/mach-integrator/cm.h
··· 11 11 #define CM_CTRL_LED (1 << 0) 12 12 #define CM_CTRL_nMBDET (1 << 1) 13 13 #define CM_CTRL_REMAP (1 << 2) 14 - #define CM_CTRL_RESET (1 << 3) 15 14 16 15 /* 17 16 * Integrator/AP,PP2 specific
-2
arch/arm/mach-integrator/common.h
··· 4 4 void integrator_init_early(void); 5 5 int integrator_init(bool is_cp); 6 6 void integrator_reserve(void); 7 - void integrator_restart(enum reboot_mode, const char *); 8 - void integrator_init_sysfs(struct device *parent, u32 id);
-103
arch/arm/mach-integrator/core.c
··· 60 60 raw_spin_unlock_irqrestore(&cm_lock, flags); 61 61 } 62 62 63 - static const char *integrator_arch_str(u32 id) 64 - { 65 - switch ((id >> 16) & 0xff) { 66 - case 0x00: 67 - return "ASB little-endian"; 68 - case 0x01: 69 - return "AHB little-endian"; 70 - case 0x03: 71 - return "AHB-Lite system bus, bi-endian"; 72 - case 0x04: 73 - return "AHB"; 74 - case 0x08: 75 - return "AHB system bus, ASB processor bus"; 76 - default: 77 - return "Unknown"; 78 - } 79 - } 80 - 81 - static const char *integrator_fpga_str(u32 id) 82 - { 83 - switch ((id >> 12) & 0xf) { 84 - case 0x01: 85 - return "XC4062"; 86 - case 0x02: 87 - return "XC4085"; 88 - case 0x03: 89 - return "XVC600"; 90 - case 0x04: 91 - return "EPM7256AE (Altera PLD)"; 92 - default: 93 - return "Unknown"; 94 - } 95 - } 96 - 97 63 void cm_clear_irqs(void) 98 64 { 99 65 /* disable core module IRQs */ ··· 75 109 void cm_init(void) 76 110 { 77 111 struct device_node *cm = of_find_matching_node(NULL, cm_match); 78 - u32 val; 79 112 80 113 if (!cm) { 81 114 pr_crit("no core module node found in device tree\n"); ··· 86 121 return; 87 122 } 88 123 cm_clear_irqs(); 89 - val = readl(cm_base + INTEGRATOR_HDR_ID_OFFSET); 90 - pr_info("Detected ARM core module:\n"); 91 - pr_info(" Manufacturer: %02x\n", (val >> 24)); 92 - pr_info(" Architecture: %s\n", integrator_arch_str(val)); 93 - pr_info(" FPGA: %s\n", integrator_fpga_str(val)); 94 - pr_info(" Build: %02x\n", (val >> 4) & 0xFF); 95 - pr_info(" Rev: %c\n", ('A' + (val & 0x03))); 96 124 } 97 125 98 126 /* ··· 96 138 void __init integrator_reserve(void) 97 139 { 98 140 memblock_reserve(PHYS_OFFSET, __pa(swapper_pg_dir) - PHYS_OFFSET); 99 - } 100 - 101 - /* 102 - * To reset, we hit the on-board reset register in the system FPGA 103 - */ 104 - void integrator_restart(enum reboot_mode mode, const char *cmd) 105 - { 106 - cm_control(CM_CTRL_RESET, CM_CTRL_RESET); 107 - } 108 - 109 - static u32 integrator_id; 110 - 111 - static ssize_t intcp_get_manf(struct device *dev, 
112 - struct device_attribute *attr, 113 - char *buf) 114 - { 115 - return sprintf(buf, "%02x\n", integrator_id >> 24); 116 - } 117 - 118 - static struct device_attribute intcp_manf_attr = 119 - __ATTR(manufacturer, S_IRUGO, intcp_get_manf, NULL); 120 - 121 - static ssize_t intcp_get_arch(struct device *dev, 122 - struct device_attribute *attr, 123 - char *buf) 124 - { 125 - return sprintf(buf, "%s\n", integrator_arch_str(integrator_id)); 126 - } 127 - 128 - static struct device_attribute intcp_arch_attr = 129 - __ATTR(architecture, S_IRUGO, intcp_get_arch, NULL); 130 - 131 - static ssize_t intcp_get_fpga(struct device *dev, 132 - struct device_attribute *attr, 133 - char *buf) 134 - { 135 - return sprintf(buf, "%s\n", integrator_fpga_str(integrator_id)); 136 - } 137 - 138 - static struct device_attribute intcp_fpga_attr = 139 - __ATTR(fpga, S_IRUGO, intcp_get_fpga, NULL); 140 - 141 - static ssize_t intcp_get_build(struct device *dev, 142 - struct device_attribute *attr, 143 - char *buf) 144 - { 145 - return sprintf(buf, "%02x\n", (integrator_id >> 4) & 0xFF); 146 - } 147 - 148 - static struct device_attribute intcp_build_attr = 149 - __ATTR(build, S_IRUGO, intcp_get_build, NULL); 150 - 151 - 152 - 153 - void integrator_init_sysfs(struct device *parent, u32 id) 154 - { 155 - integrator_id = id; 156 - device_create_file(parent, &intcp_manf_attr); 157 - device_create_file(parent, &intcp_arch_attr); 158 - device_create_file(parent, &intcp_fpga_attr); 159 - device_create_file(parent, &intcp_build_attr); 160 141 }
-48
arch/arm/mach-integrator/include/mach/uncompress.h
··· 1 - /* 2 - * arch/arm/mach-integrator/include/mach/uncompress.h 3 - * 4 - * Copyright (C) 1999 ARM Limited 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License as published by 8 - * the Free Software Foundation; either version 2 of the License, or 9 - * (at your option) any later version. 10 - * 11 - * This program is distributed in the hope that it will be useful, 12 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 13 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 - * GNU General Public License for more details. 15 - * 16 - * You should have received a copy of the GNU General Public License 17 - * along with this program; if not, write to the Free Software 18 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 19 - */ 20 - 21 - #define AMBA_UART_DR (*(volatile unsigned char *)0x16000000) 22 - #define AMBA_UART_LCRH (*(volatile unsigned char *)0x16000008) 23 - #define AMBA_UART_LCRM (*(volatile unsigned char *)0x1600000c) 24 - #define AMBA_UART_LCRL (*(volatile unsigned char *)0x16000010) 25 - #define AMBA_UART_CR (*(volatile unsigned char *)0x16000014) 26 - #define AMBA_UART_FR (*(volatile unsigned char *)0x16000018) 27 - 28 - /* 29 - * This does not append a newline 30 - */ 31 - static void putc(int c) 32 - { 33 - while (AMBA_UART_FR & (1 << 5)) 34 - barrier(); 35 - 36 - AMBA_UART_DR = c; 37 - } 38 - 39 - static inline void flush(void) 40 - { 41 - while (AMBA_UART_FR & (1 << 3)) 42 - barrier(); 43 - } 44 - 45 - /* 46 - * nothing to do 47 - */ 48 - #define arch_decomp_setup()
-218
arch/arm/mach-integrator/integrator_ap.c
··· 27 27 #include <linux/syscore_ops.h> 28 28 #include <linux/amba/bus.h> 29 29 #include <linux/amba/kmi.h> 30 - #include <linux/clocksource.h> 31 - #include <linux/clockchips.h> 32 - #include <linux/interrupt.h> 33 30 #include <linux/io.h> 34 31 #include <linux/irqchip.h> 35 32 #include <linux/mtd/physmap.h> 36 - #include <linux/clk.h> 37 33 #include <linux/platform_data/clk-integrator.h> 38 34 #include <linux/of_irq.h> 39 35 #include <linux/of_address.h> 40 36 #include <linux/of_platform.h> 41 37 #include <linux/stat.h> 42 - #include <linux/sys_soc.h> 43 38 #include <linux/termios.h> 44 - #include <linux/sched_clock.h> 45 - #include <linux/clk-provider.h> 46 39 47 40 #include <asm/hardware/arm_timer.h> 48 41 #include <asm/setup.h> ··· 82 89 83 90 static struct map_desc ap_io_desc[] __initdata __maybe_unused = { 84 91 { 85 - .virtual = IO_ADDRESS(INTEGRATOR_CT_BASE), 86 - .pfn = __phys_to_pfn(INTEGRATOR_CT_BASE), 87 - .length = SZ_4K, 88 - .type = MT_DEVICE 89 - }, { 90 92 .virtual = IO_ADDRESS(INTEGRATOR_IC_BASE), 91 93 .pfn = __phys_to_pfn(INTEGRATOR_IC_BASE), 92 94 .length = SZ_4K, ··· 245 257 .set_mctrl = integrator_uart_set_mctrl, 246 258 }; 247 259 248 - /* 249 - * Where is the timer (VA)? 
250 - */ 251 - #define TIMER0_VA_BASE __io_address(INTEGRATOR_TIMER0_BASE) 252 - #define TIMER1_VA_BASE __io_address(INTEGRATOR_TIMER1_BASE) 253 - #define TIMER2_VA_BASE __io_address(INTEGRATOR_TIMER2_BASE) 254 - 255 - static unsigned long timer_reload; 256 - 257 - static u64 notrace integrator_read_sched_clock(void) 258 - { 259 - return -readl((void __iomem *) TIMER2_VA_BASE + TIMER_VALUE); 260 - } 261 - 262 - static void integrator_clocksource_init(unsigned long inrate, 263 - void __iomem *base) 264 - { 265 - u32 ctrl = TIMER_CTRL_ENABLE | TIMER_CTRL_PERIODIC; 266 - unsigned long rate = inrate; 267 - 268 - if (rate >= 1500000) { 269 - rate /= 16; 270 - ctrl |= TIMER_CTRL_DIV16; 271 - } 272 - 273 - writel(0xffff, base + TIMER_LOAD); 274 - writel(ctrl, base + TIMER_CTRL); 275 - 276 - clocksource_mmio_init(base + TIMER_VALUE, "timer2", 277 - rate, 200, 16, clocksource_mmio_readl_down); 278 - sched_clock_register(integrator_read_sched_clock, 16, rate); 279 - } 280 - 281 - static void __iomem * clkevt_base; 282 - 283 - /* 284 - * IRQ handler for the timer 285 - */ 286 - static irqreturn_t integrator_timer_interrupt(int irq, void *dev_id) 287 - { 288 - struct clock_event_device *evt = dev_id; 289 - 290 - /* clear the interrupt */ 291 - writel(1, clkevt_base + TIMER_INTCLR); 292 - 293 - evt->event_handler(evt); 294 - 295 - return IRQ_HANDLED; 296 - } 297 - 298 - static void clkevt_set_mode(enum clock_event_mode mode, struct clock_event_device *evt) 299 - { 300 - u32 ctrl = readl(clkevt_base + TIMER_CTRL) & ~TIMER_CTRL_ENABLE; 301 - 302 - /* Disable timer */ 303 - writel(ctrl, clkevt_base + TIMER_CTRL); 304 - 305 - switch (mode) { 306 - case CLOCK_EVT_MODE_PERIODIC: 307 - /* Enable the timer and start the periodic tick */ 308 - writel(timer_reload, clkevt_base + TIMER_LOAD); 309 - ctrl |= TIMER_CTRL_PERIODIC | TIMER_CTRL_ENABLE; 310 - writel(ctrl, clkevt_base + TIMER_CTRL); 311 - break; 312 - case CLOCK_EVT_MODE_ONESHOT: 313 - /* Leave the timer disabled, .set_next_event 
will enable it */ 314 - ctrl &= ~TIMER_CTRL_PERIODIC; 315 - writel(ctrl, clkevt_base + TIMER_CTRL); 316 - break; 317 - case CLOCK_EVT_MODE_UNUSED: 318 - case CLOCK_EVT_MODE_SHUTDOWN: 319 - case CLOCK_EVT_MODE_RESUME: 320 - default: 321 - /* Just leave in disabled state */ 322 - break; 323 - } 324 - 325 - } 326 - 327 - static int clkevt_set_next_event(unsigned long next, struct clock_event_device *evt) 328 - { 329 - unsigned long ctrl = readl(clkevt_base + TIMER_CTRL); 330 - 331 - writel(ctrl & ~TIMER_CTRL_ENABLE, clkevt_base + TIMER_CTRL); 332 - writel(next, clkevt_base + TIMER_LOAD); 333 - writel(ctrl | TIMER_CTRL_ENABLE, clkevt_base + TIMER_CTRL); 334 - 335 - return 0; 336 - } 337 - 338 - static struct clock_event_device integrator_clockevent = { 339 - .name = "timer1", 340 - .features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT, 341 - .set_mode = clkevt_set_mode, 342 - .set_next_event = clkevt_set_next_event, 343 - .rating = 300, 344 - }; 345 - 346 - static struct irqaction integrator_timer_irq = { 347 - .name = "timer", 348 - .flags = IRQF_TIMER | IRQF_IRQPOLL, 349 - .handler = integrator_timer_interrupt, 350 - .dev_id = &integrator_clockevent, 351 - }; 352 - 353 - static void integrator_clockevent_init(unsigned long inrate, 354 - void __iomem *base, int irq) 355 - { 356 - unsigned long rate = inrate; 357 - unsigned int ctrl = 0; 358 - 359 - clkevt_base = base; 360 - /* Calculate and program a divisor */ 361 - if (rate > 0x100000 * HZ) { 362 - rate /= 256; 363 - ctrl |= TIMER_CTRL_DIV256; 364 - } else if (rate > 0x10000 * HZ) { 365 - rate /= 16; 366 - ctrl |= TIMER_CTRL_DIV16; 367 - } 368 - timer_reload = rate / HZ; 369 - writel(ctrl, clkevt_base + TIMER_CTRL); 370 - 371 - setup_irq(irq, &integrator_timer_irq); 372 - clockevents_config_and_register(&integrator_clockevent, 373 - rate, 374 - 1, 375 - 0xffffU); 376 - } 377 - 378 260 void __init ap_init_early(void) 379 261 { 380 - } 381 - 382 - static void __init ap_of_timer_init(void) 383 - { 384 - struct 
device_node *node; 385 - const char *path; 386 - void __iomem *base; 387 - int err; 388 - int irq; 389 - struct clk *clk; 390 - unsigned long rate; 391 - 392 - of_clk_init(NULL); 393 - 394 - err = of_property_read_string(of_aliases, 395 - "arm,timer-primary", &path); 396 - if (WARN_ON(err)) 397 - return; 398 - node = of_find_node_by_path(path); 399 - base = of_iomap(node, 0); 400 - if (WARN_ON(!base)) 401 - return; 402 - 403 - clk = of_clk_get(node, 0); 404 - BUG_ON(IS_ERR(clk)); 405 - clk_prepare_enable(clk); 406 - rate = clk_get_rate(clk); 407 - 408 - writel(0, base + TIMER_CTRL); 409 - integrator_clocksource_init(rate, base); 410 - 411 - err = of_property_read_string(of_aliases, 412 - "arm,timer-secondary", &path); 413 - if (WARN_ON(err)) 414 - return; 415 - node = of_find_node_by_path(path); 416 - base = of_iomap(node, 0); 417 - if (WARN_ON(!base)) 418 - return; 419 - irq = irq_of_parse_and_map(node, 0); 420 - 421 - clk = of_clk_get(node, 0); 422 - BUG_ON(IS_ERR(clk)); 423 - clk_prepare_enable(clk); 424 - rate = clk_get_rate(clk); 425 - 426 - writel(0, base + TIMER_CTRL); 427 - integrator_clockevent_init(rate, base, irq); 428 262 } 429 263 430 264 static void __init ap_init_irq_of(void) ··· 287 477 unsigned long sc_dec; 288 478 struct device_node *syscon; 289 479 struct device_node *ebi; 290 - struct device *parent; 291 - struct soc_device *soc_dev; 292 - struct soc_device_attribute *soc_dev_attr; 293 - u32 ap_sc_id; 294 480 int i; 295 481 296 482 syscon = of_find_matching_node(NULL, ap_syscon_match); ··· 305 499 306 500 of_platform_populate(NULL, of_default_bus_match_table, 307 501 ap_auxdata_lookup, NULL); 308 - 309 - ap_sc_id = readl(ap_syscon_base); 310 - 311 - soc_dev_attr = kzalloc(sizeof(*soc_dev_attr), GFP_KERNEL); 312 - if (!soc_dev_attr) 313 - return; 314 - 315 - soc_dev_attr->soc_id = "XVC"; 316 - soc_dev_attr->machine = "Integrator/AP"; 317 - soc_dev_attr->family = "Integrator"; 318 - soc_dev_attr->revision = kasprintf(GFP_KERNEL, "%c", 319 - 'A' + 
(ap_sc_id & 0x0f)); 320 - 321 - soc_dev = soc_device_register(soc_dev_attr); 322 - if (IS_ERR(soc_dev)) { 323 - kfree(soc_dev_attr->revision); 324 - kfree(soc_dev_attr); 325 - return; 326 - } 327 - 328 - parent = soc_device_to_device(soc_dev); 329 - integrator_init_sysfs(parent, ap_sc_id); 330 502 331 503 sc_dec = readl(ap_syscon_base + INTEGRATOR_SC_DEC_OFFSET); 332 504 for (i = 0; i < 4; i++) { ··· 337 553 .map_io = ap_map_io, 338 554 .init_early = ap_init_early, 339 555 .init_irq = ap_init_irq_of, 340 - .init_time = ap_of_timer_init, 341 556 .init_machine = ap_init_of, 342 - .restart = integrator_restart, 343 557 .dt_compat = ap_dt_board_compat, 344 558 MACHINE_END
-28
arch/arm/mach-integrator/integrator_cp.c
··· 27 27 #include <linux/of_irq.h> 28 28 #include <linux/of_address.h> 29 29 #include <linux/of_platform.h> 30 - #include <linux/sys_soc.h> 31 30 #include <linux/sched_clock.h> 32 31 33 32 #include <asm/setup.h> ··· 273 274 static void __init intcp_init_of(void) 274 275 { 275 276 struct device_node *cpcon; 276 - struct device *parent; 277 - struct soc_device *soc_dev; 278 - struct soc_device_attribute *soc_dev_attr; 279 - u32 intcp_sc_id; 280 277 281 278 cpcon = of_find_matching_node(NULL, intcp_syscon_match); 282 279 if (!cpcon) ··· 284 289 285 290 of_platform_populate(NULL, of_default_bus_match_table, 286 291 intcp_auxdata_lookup, NULL); 287 - 288 - intcp_sc_id = readl(intcp_con_base); 289 - 290 - soc_dev_attr = kzalloc(sizeof(*soc_dev_attr), GFP_KERNEL); 291 - if (!soc_dev_attr) 292 - return; 293 - 294 - soc_dev_attr->soc_id = "XCV"; 295 - soc_dev_attr->machine = "Integrator/CP"; 296 - soc_dev_attr->family = "Integrator"; 297 - soc_dev_attr->revision = kasprintf(GFP_KERNEL, "%c", 298 - 'A' + (intcp_sc_id & 0x0f)); 299 - 300 - soc_dev = soc_device_register(soc_dev_attr); 301 - if (IS_ERR(soc_dev)) { 302 - kfree(soc_dev_attr->revision); 303 - kfree(soc_dev_attr); 304 - return; 305 - } 306 - 307 - parent = soc_device_to_device(soc_dev); 308 - integrator_init_sysfs(parent, intcp_sc_id); 309 292 } 310 293 311 294 static const char * intcp_dt_board_compat[] = { ··· 297 324 .init_early = intcp_init_early, 298 325 .init_irq = intcp_init_irq_of, 299 326 .init_machine = intcp_init_of, 300 - .restart = integrator_restart, 301 327 .dt_compat = intcp_dt_board_compat, 302 328 MACHINE_END
-124
arch/arm/mach-integrator/leds.c
··· 1 - /* 2 - * Driver for the 4 user LEDs found on the Integrator AP/CP baseboard 3 - * Based on Versatile and RealView machine LED code 4 - * 5 - * License terms: GNU General Public License (GPL) version 2 6 - * Author: Bryan Wu <bryan.wu@canonical.com> 7 - */ 8 - #include <linux/kernel.h> 9 - #include <linux/init.h> 10 - #include <linux/io.h> 11 - #include <linux/slab.h> 12 - #include <linux/leds.h> 13 - 14 - #include "hardware.h" 15 - #include "cm.h" 16 - 17 - #if defined(CONFIG_NEW_LEDS) && defined(CONFIG_LEDS_CLASS) 18 - 19 - #define ALPHA_REG __io_address(INTEGRATOR_DBG_BASE) 20 - #define LEDREG (__io_address(INTEGRATOR_DBG_BASE) + INTEGRATOR_DBG_LEDS_OFFSET) 21 - 22 - struct integrator_led { 23 - struct led_classdev cdev; 24 - u8 mask; 25 - }; 26 - 27 - /* 28 - * The triggers lines up below will only be used if the 29 - * LED triggers are compiled in. 30 - */ 31 - static const struct { 32 - const char *name; 33 - const char *trigger; 34 - } integrator_leds[] = { 35 - { "integrator:green0", "heartbeat", }, 36 - { "integrator:yellow", }, 37 - { "integrator:red", }, 38 - { "integrator:green1", }, 39 - { "integrator:core_module", "cpu0", }, 40 - }; 41 - 42 - static void integrator_led_set(struct led_classdev *cdev, 43 - enum led_brightness b) 44 - { 45 - struct integrator_led *led = container_of(cdev, 46 - struct integrator_led, cdev); 47 - u32 reg = __raw_readl(LEDREG); 48 - 49 - if (b != LED_OFF) 50 - reg |= led->mask; 51 - else 52 - reg &= ~led->mask; 53 - 54 - while (__raw_readl(ALPHA_REG) & 1) 55 - cpu_relax(); 56 - 57 - __raw_writel(reg, LEDREG); 58 - } 59 - 60 - static enum led_brightness integrator_led_get(struct led_classdev *cdev) 61 - { 62 - struct integrator_led *led = container_of(cdev, 63 - struct integrator_led, cdev); 64 - u32 reg = __raw_readl(LEDREG); 65 - 66 - return (reg & led->mask) ? 
LED_FULL : LED_OFF; 67 - } 68 - 69 - static void cm_led_set(struct led_classdev *cdev, 70 - enum led_brightness b) 71 - { 72 - if (b != LED_OFF) 73 - cm_control(CM_CTRL_LED, CM_CTRL_LED); 74 - else 75 - cm_control(CM_CTRL_LED, 0); 76 - } 77 - 78 - static enum led_brightness cm_led_get(struct led_classdev *cdev) 79 - { 80 - u32 reg = cm_get(); 81 - 82 - return (reg & CM_CTRL_LED) ? LED_FULL : LED_OFF; 83 - } 84 - 85 - static int __init integrator_leds_init(void) 86 - { 87 - int i; 88 - 89 - for (i = 0; i < ARRAY_SIZE(integrator_leds); i++) { 90 - struct integrator_led *led; 91 - 92 - led = kzalloc(sizeof(*led), GFP_KERNEL); 93 - if (!led) 94 - break; 95 - 96 - 97 - led->cdev.name = integrator_leds[i].name; 98 - 99 - if (i == 4) { /* Setting for LED in core module */ 100 - led->cdev.brightness_set = cm_led_set; 101 - led->cdev.brightness_get = cm_led_get; 102 - } else { 103 - led->cdev.brightness_set = integrator_led_set; 104 - led->cdev.brightness_get = integrator_led_get; 105 - } 106 - 107 - led->cdev.default_trigger = integrator_leds[i].trigger; 108 - led->mask = BIT(i); 109 - 110 - if (led_classdev_register(NULL, &led->cdev) < 0) { 111 - kfree(led); 112 - break; 113 - } 114 - } 115 - 116 - return 0; 117 - } 118 - 119 - /* 120 - * Since we may have triggers on any subsystem, defer registration 121 - * until after subsystem_init. 122 - */ 123 - fs_initcall(integrator_leds_init); 124 - #endif
+2 -2
arch/arm/mach-mediatek/Kconfig
··· 1 1 config ARCH_MEDIATEK 2 - bool "Mediatek MT6589 SoC" if ARCH_MULTI_V7 2 + bool "Mediatek MT65xx & MT81xx SoC" if ARCH_MULTI_V7 3 3 select ARM_GIC 4 4 select MTK_TIMER 5 5 help 6 - Support for Mediatek Cortex-A7 Quad-Core-SoC MT6589. 6 + Support for Mediatek MT65xx & MT81xx SoCs
+6
arch/arm/mach-meson/Kconfig
··· 2 2 bool "Amlogic Meson SoCs" if ARCH_MULTI_V7 3 3 select GENERIC_IRQ_CHIP 4 4 select ARM_GIC 5 + select CACHE_L2X0 5 6 6 7 if ARCH_MESON 7 8 8 9 config MACH_MESON6 9 10 bool "Amlogic Meson6 (8726MX) SoCs support" 11 + default ARCH_MESON 12 + select MESON6_TIMER 13 + 14 + config MACH_MESON8 15 + bool "Amlogic Meson8 SoCs support" 10 16 default ARCH_MESON 11 17 select MESON6_TIMER 12 18
+6 -4
arch/arm/mach-meson/meson.c
··· 16 16 #include <linux/of_platform.h> 17 17 #include <asm/mach/arch.h> 18 18 19 - static const char * const m6_common_board_compat[] = { 19 + static const char * const meson_common_board_compat[] = { 20 20 "amlogic,meson6", 21 + "amlogic,meson8", 21 22 NULL, 22 23 }; 23 24 24 - DT_MACHINE_START(AML8726_MX, "Amlogic Meson6 platform") 25 - .dt_compat = m6_common_board_compat, 25 + DT_MACHINE_START(MESON, "Amlogic Meson platform") 26 + .dt_compat = meson_common_board_compat, 27 + .l2c_aux_val = 0, 28 + .l2c_aux_mask = ~0, 26 29 MACHINE_END 27 -
+1 -1
arch/arm/mach-mvebu/Makefile
··· 7 7 obj-$(CONFIG_MACH_MVEBU_ANY) += system-controller.o mvebu-soc-id.o 8 8 9 9 ifeq ($(CONFIG_MACH_MVEBU_V7),y) 10 - obj-y += cpu-reset.o board-v7.o coherency.o coherency_ll.o pmsu.o pmsu_ll.o 10 + obj-y += cpu-reset.o board-v7.o coherency.o coherency_ll.o pmsu.o pmsu_ll.o pm.o pm-board.o 11 11 obj-$(CONFIG_SMP) += platsmp.o headsmp.o platsmp-a9.o headsmp-a9.o 12 12 endif 13 13
-6
arch/arm/mach-mvebu/armada-370-xp.h
··· 16 16 #define __MACH_ARMADA_370_XP_H 17 17 18 18 #ifdef CONFIG_SMP 19 - #include <linux/cpumask.h> 20 - 21 - #define ARMADA_XP_MAX_CPUS 4 22 - 23 19 void armada_xp_secondary_startup(void); 24 20 extern struct smp_operations armada_xp_smp_ops; 25 21 #endif 26 - 27 - int armada_370_xp_pmsu_idle_enter(unsigned long deepidle); 28 22 29 23 #endif /* __MACH_ARMADA_370_XP_H */
+57 -65
arch/arm/mach-mvebu/board-v7.c
··· 16 16 #include <linux/init.h> 17 17 #include <linux/clk-provider.h> 18 18 #include <linux/of_address.h> 19 + #include <linux/of_fdt.h> 19 20 #include <linux/of_platform.h> 20 21 #include <linux/io.h> 21 22 #include <linux/clocksource.h> 22 23 #include <linux/dma-mapping.h> 24 + #include <linux/memblock.h> 23 25 #include <linux/mbus.h> 24 26 #include <linux/signal.h> 25 27 #include <linux/slab.h> ··· 57 55 { 58 56 return scu_base; 59 57 } 58 + 59 + /* 60 + * When returning from suspend, the platform goes through the 61 + * bootloader, which executes its DDR3 training code. This code has 62 + * the unfortunate idea of using the first 10 KB of each DRAM bank to 63 + * exercise the RAM and calculate the optimal timings. Therefore, this 64 + * area of RAM is overwritten, and shouldn't be used by the kernel if 65 + * suspend/resume is supported. 66 + */ 67 + 68 + #ifdef CONFIG_SUSPEND 69 + #define MVEBU_DDR_TRAINING_AREA_SZ (10 * SZ_1K) 70 + static int __init mvebu_scan_mem(unsigned long node, const char *uname, 71 + int depth, void *data) 72 + { 73 + const char *type = of_get_flat_dt_prop(node, "device_type", NULL); 74 + const __be32 *reg, *endp; 75 + int l; 76 + 77 + if (type == NULL || strcmp(type, "memory")) 78 + return 0; 79 + 80 + reg = of_get_flat_dt_prop(node, "linux,usable-memory", &l); 81 + if (reg == NULL) 82 + reg = of_get_flat_dt_prop(node, "reg", &l); 83 + if (reg == NULL) 84 + return 0; 85 + 86 + endp = reg + (l / sizeof(__be32)); 87 + while ((endp - reg) >= (dt_root_addr_cells + dt_root_size_cells)) { 88 + u64 base, size; 89 + 90 + base = dt_mem_next_cell(dt_root_addr_cells, &reg); 91 + size = dt_mem_next_cell(dt_root_size_cells, &reg); 92 + 93 + memblock_reserve(base, MVEBU_DDR_TRAINING_AREA_SZ); 94 + } 95 + 96 + return 0; 97 + } 98 + 99 + static void __init mvebu_memblock_reserve(void) 100 + { 101 + of_scan_flat_dt(mvebu_scan_mem, NULL); 102 + } 103 + #else 104 + static void __init mvebu_memblock_reserve(void) {} 105 + #endif 60 106 61 107 /* 62 108 
* Early versions of Armada 375 SoC have a bug where the BootROM ··· 174 124 return; 175 125 } 176 126 177 - #define A375_Z1_THERMAL_FIXUP_OFFSET 0xc 178 - 179 - static void __init thermal_quirk(void) 180 - { 181 - struct device_node *np; 182 - u32 dev, rev; 183 - int res; 184 - 185 - /* 186 - * The early SoC Z1 revision needs a quirk to be applied in order 187 - * for the thermal controller to work properly. This quirk breaks 188 - * the thermal support if applied on a SoC that doesn't need it, 189 - * so we enforce the SoC revision to be known. 190 - */ 191 - res = mvebu_get_soc_id(&dev, &rev); 192 - if (res < 0 || (res == 0 && rev > ARMADA_375_Z1_REV)) 193 - return; 194 - 195 - for_each_compatible_node(np, NULL, "marvell,armada375-thermal") { 196 - struct property *prop; 197 - __be32 newval, *newprop, *oldprop; 198 - int len; 199 - 200 - /* 201 - * The register offset is at a wrong location. This quirk 202 - * creates a new reg property as a clone of the previous 203 - * one and corrects the offset. 204 - */ 205 - oldprop = (__be32 *)of_get_property(np, "reg", &len); 206 - if (!oldprop) 207 - continue; 208 - 209 - /* Create a duplicate of the 'reg' property */ 210 - prop = kzalloc(sizeof(*prop), GFP_KERNEL); 211 - prop->length = len; 212 - prop->name = kstrdup("reg", GFP_KERNEL); 213 - prop->value = kzalloc(len, GFP_KERNEL); 214 - memcpy(prop->value, oldprop, len); 215 - 216 - /* Fixup the register offset of the second entry */ 217 - oldprop += 2; 218 - newprop = (__be32 *)prop->value + 2; 219 - newval = cpu_to_be32(be32_to_cpu(*oldprop) - 220 - A375_Z1_THERMAL_FIXUP_OFFSET); 221 - *newprop = newval; 222 - of_update_property(np, prop); 223 - 224 - /* 225 - * The thermal controller needs some quirk too, so let's change 226 - * the compatible string to reflect this and allow the driver 227 - * the take the necessary action. 
228 - */ 229 - prop = kzalloc(sizeof(*prop), GFP_KERNEL); 230 - prop->name = kstrdup("compatible", GFP_KERNEL); 231 - prop->length = sizeof("marvell,armada375-z1-thermal"); 232 - prop->value = kstrdup("marvell,armada375-z1-thermal", 233 - GFP_KERNEL); 234 - of_update_property(np, prop); 235 - } 236 - return; 237 - } 238 - 239 127 static void __init mvebu_dt_init(void) 240 128 { 241 129 if (of_machine_is_compatible("marvell,armadaxp")) 242 130 i2c_quirk(); 243 - if (of_machine_is_compatible("marvell,a375-db")) { 131 + if (of_machine_is_compatible("marvell,a375-db")) 244 132 external_abort_quirk(); 245 - thermal_quirk(); 246 - } 247 133 248 134 of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL); 249 135 } ··· 192 206 DT_MACHINE_START(ARMADA_370_XP_DT, "Marvell Armada 370/XP (Device Tree)") 193 207 .l2c_aux_val = 0, 194 208 .l2c_aux_mask = ~0, 209 + /* 210 + * The following field (.smp) is still needed to ensure backward 211 + * compatibility with old Device Trees that were not specifying the 212 + * cpus enable-method property. 213 + */ 195 214 .smp = smp_ops(armada_xp_smp_ops), 196 215 .init_machine = mvebu_dt_init, 197 216 .init_irq = mvebu_init_irq, 198 217 .restart = mvebu_restart, 218 + .reserve = mvebu_memblock_reserve, 199 219 .dt_compat = armada_370_xp_dt_compat, 200 220 MACHINE_END 201 221
+36 -185
arch/arm/mach-mvebu/coherency.c
··· 1 1 /* 2 - * Coherency fabric (Aurora) support for Armada 370 and XP platforms. 2 + * Coherency fabric (Aurora) support for Armada 370, 375, 38x and XP 3 + * platforms. 3 4 * 4 5 * Copyright (C) 2012 Marvell 5 6 * ··· 12 11 * License version 2. This program is licensed "as is" without any 13 12 * warranty of any kind, whether express or implied. 14 13 * 15 - * The Armada 370 and Armada XP SOCs have a coherency fabric which is 14 + * The Armada 370, 375, 38x and XP SOCs have a coherency fabric which is 16 15 * responsible for ensuring hardware coherency between all CPUs and between 17 16 * CPUs and I/O masters. This file initializes the coherency fabric and 18 17 * supplies basic routines for configuring and controlling hardware coherency ··· 29 28 #include <linux/platform_device.h> 30 29 #include <linux/slab.h> 31 30 #include <linux/mbus.h> 32 - #include <linux/clk.h> 33 31 #include <linux/pci.h> 34 32 #include <asm/smp_plat.h> 35 33 #include <asm/cacheflush.h> 36 34 #include <asm/mach/map.h> 37 - #include "armada-370-xp.h" 38 35 #include "coherency.h" 39 36 #include "mvebu-soc-id.h" 40 37 ··· 41 42 static void __iomem *coherency_cpu_base; 42 43 43 44 /* Coherency fabric registers */ 44 - #define COHERENCY_FABRIC_CFG_OFFSET 0x4 45 - 46 45 #define IO_SYNC_BARRIER_CTL_OFFSET 0x0 47 46 48 47 enum { ··· 76 79 return ll_enable_coherency(); 77 80 } 78 81 79 - /* 80 - * The below code implements the I/O coherency workaround on Armada 81 - * 375. This workaround consists in using the two channels of the 82 - * first XOR engine to trigger a XOR transaction that serves as the 83 - * I/O coherency barrier. 
84 - */ 85 - 86 - static void __iomem *xor_base, *xor_high_base; 87 - static dma_addr_t coherency_wa_buf_phys[CONFIG_NR_CPUS]; 88 - static void *coherency_wa_buf[CONFIG_NR_CPUS]; 89 - static bool coherency_wa_enabled; 90 - 91 - #define XOR_CONFIG(chan) (0x10 + (chan * 4)) 92 - #define XOR_ACTIVATION(chan) (0x20 + (chan * 4)) 93 - #define WINDOW_BAR_ENABLE(chan) (0x240 + ((chan) << 2)) 94 - #define WINDOW_BASE(w) (0x250 + ((w) << 2)) 95 - #define WINDOW_SIZE(w) (0x270 + ((w) << 2)) 96 - #define WINDOW_REMAP_HIGH(w) (0x290 + ((w) << 2)) 97 - #define WINDOW_OVERRIDE_CTRL(chan) (0x2A0 + ((chan) << 2)) 98 - #define XOR_DEST_POINTER(chan) (0x2B0 + (chan * 4)) 99 - #define XOR_BLOCK_SIZE(chan) (0x2C0 + (chan * 4)) 100 - #define XOR_INIT_VALUE_LOW 0x2E0 101 - #define XOR_INIT_VALUE_HIGH 0x2E4 102 - 103 - static inline void mvebu_hwcc_armada375_sync_io_barrier_wa(void) 104 - { 105 - int idx = smp_processor_id(); 106 - 107 - /* Write '1' to the first word of the buffer */ 108 - writel(0x1, coherency_wa_buf[idx]); 109 - 110 - /* Wait until the engine is idle */ 111 - while ((readl(xor_base + XOR_ACTIVATION(idx)) >> 4) & 0x3) 112 - ; 113 - 114 - dmb(); 115 - 116 - /* Trigger channel */ 117 - writel(0x1, xor_base + XOR_ACTIVATION(idx)); 118 - 119 - /* Poll the data until it is cleared by the XOR transaction */ 120 - while (readl(coherency_wa_buf[idx])) 121 - ; 122 - } 123 - 124 - static void __init armada_375_coherency_init_wa(void) 125 - { 126 - const struct mbus_dram_target_info *dram; 127 - struct device_node *xor_node; 128 - struct property *xor_status; 129 - struct clk *xor_clk; 130 - u32 win_enable = 0; 131 - int i; 132 - 133 - pr_warn("enabling coherency workaround for Armada 375 Z1, one XOR engine disabled\n"); 134 - 135 - /* 136 - * Since the workaround uses one XOR engine, we grab a 137 - * reference to its Device Tree node first. 
138 - */ 139 - xor_node = of_find_compatible_node(NULL, NULL, "marvell,orion-xor"); 140 - BUG_ON(!xor_node); 141 - 142 - /* 143 - * Then we mark it as disabled so that the real XOR driver 144 - * will not use it. 145 - */ 146 - xor_status = kzalloc(sizeof(struct property), GFP_KERNEL); 147 - BUG_ON(!xor_status); 148 - 149 - xor_status->value = kstrdup("disabled", GFP_KERNEL); 150 - BUG_ON(!xor_status->value); 151 - 152 - xor_status->length = 8; 153 - xor_status->name = kstrdup("status", GFP_KERNEL); 154 - BUG_ON(!xor_status->name); 155 - 156 - of_update_property(xor_node, xor_status); 157 - 158 - /* 159 - * And we remap the registers, get the clock, and do the 160 - * initial configuration of the XOR engine. 161 - */ 162 - xor_base = of_iomap(xor_node, 0); 163 - xor_high_base = of_iomap(xor_node, 1); 164 - 165 - xor_clk = of_clk_get_by_name(xor_node, NULL); 166 - BUG_ON(!xor_clk); 167 - 168 - clk_prepare_enable(xor_clk); 169 - 170 - dram = mv_mbus_dram_info(); 171 - 172 - for (i = 0; i < 8; i++) { 173 - writel(0, xor_base + WINDOW_BASE(i)); 174 - writel(0, xor_base + WINDOW_SIZE(i)); 175 - if (i < 4) 176 - writel(0, xor_base + WINDOW_REMAP_HIGH(i)); 177 - } 178 - 179 - for (i = 0; i < dram->num_cs; i++) { 180 - const struct mbus_dram_window *cs = dram->cs + i; 181 - writel((cs->base & 0xffff0000) | 182 - (cs->mbus_attr << 8) | 183 - dram->mbus_dram_target_id, xor_base + WINDOW_BASE(i)); 184 - writel((cs->size - 1) & 0xffff0000, xor_base + WINDOW_SIZE(i)); 185 - 186 - win_enable |= (1 << i); 187 - win_enable |= 3 << (16 + (2 * i)); 188 - } 189 - 190 - writel(win_enable, xor_base + WINDOW_BAR_ENABLE(0)); 191 - writel(win_enable, xor_base + WINDOW_BAR_ENABLE(1)); 192 - writel(0, xor_base + WINDOW_OVERRIDE_CTRL(0)); 193 - writel(0, xor_base + WINDOW_OVERRIDE_CTRL(1)); 194 - 195 - for (i = 0; i < CONFIG_NR_CPUS; i++) { 196 - coherency_wa_buf[i] = kzalloc(PAGE_SIZE, GFP_KERNEL); 197 - BUG_ON(!coherency_wa_buf[i]); 198 - 199 - /* 200 - * We can't use the DMA mapping API, 
since we don't 201 - * have a valid 'struct device' pointer 202 - */ 203 - coherency_wa_buf_phys[i] = 204 - virt_to_phys(coherency_wa_buf[i]); 205 - BUG_ON(!coherency_wa_buf_phys[i]); 206 - 207 - /* 208 - * Configure the XOR engine for memset operation, with 209 - * a 128 bytes block size 210 - */ 211 - writel(0x444, xor_base + XOR_CONFIG(i)); 212 - writel(128, xor_base + XOR_BLOCK_SIZE(i)); 213 - writel(coherency_wa_buf_phys[i], 214 - xor_base + XOR_DEST_POINTER(i)); 215 - } 216 - 217 - writel(0x0, xor_base + XOR_INIT_VALUE_LOW); 218 - writel(0x0, xor_base + XOR_INIT_VALUE_HIGH); 219 - 220 - coherency_wa_enabled = true; 221 - } 222 - 223 82 static inline void mvebu_hwcc_sync_io_barrier(void) 224 83 { 225 - if (coherency_wa_enabled) { 226 - mvebu_hwcc_armada375_sync_io_barrier_wa(); 227 - return; 228 - } 229 - 230 84 writel(0x1, coherency_cpu_base + IO_SYNC_BARRIER_CTL_OFFSET); 231 85 while (readl(coherency_cpu_base + IO_SYNC_BARRIER_CTL_OFFSET) & 0x1); 232 86 } ··· 209 361 { 210 362 struct device_node *np; 211 363 const struct of_device_id *match; 364 + int type; 365 + 366 + /* 367 + * The coherency fabric is needed: 368 + * - For coherency between processors on Armada XP, so only 369 + * when SMP is enabled. 370 + * - For coherency between the processor and I/O devices, but 371 + * this coherency requires many pre-requisites (write 372 + * allocate cache policy, shareable pages, SMP bit set) that 373 + * are only meant in SMP situations. 374 + * 375 + * Note that this means that on Armada 370, there is currently 376 + * no way to use hardware I/O coherency, because even when 377 + * CONFIG_SMP is enabled, is_smp() returns false due to the 378 + * Armada 370 being a single-core processor. To lift this 379 + * limitation, we would have to find a way to make the cache 380 + * policy set to write-allocate (on all Armada SoCs), and to 381 + * set the shareable attribute in page tables (on all Armada 382 + * SoCs except the Armada 370). 
Unfortunately, such decisions 383 + * are taken very early in the kernel boot process, at a point 384 + * where we don't know yet on which SoC we are running. 385 + * 386 + */ 387 + if (!is_smp()) 388 + return COHERENCY_FABRIC_TYPE_NONE; 212 389 213 390 np = of_find_matching_node_and_match(NULL, of_coherency_table, &match); 214 - if (np) { 215 - int type = (int) match->data; 391 + if (!np) 392 + return COHERENCY_FABRIC_TYPE_NONE; 216 393 217 - /* Armada 370/XP coherency works in both UP and SMP */ 218 - if (type == COHERENCY_FABRIC_TYPE_ARMADA_370_XP) 219 - return type; 394 + type = (int) match->data; 220 395 221 - /* Armada 375 coherency works only on SMP */ 222 - else if (type == COHERENCY_FABRIC_TYPE_ARMADA_375 && is_smp()) 223 - return type; 396 + of_node_put(np); 224 397 225 - /* Armada 380 coherency works only on SMP */ 226 - else if (type == COHERENCY_FABRIC_TYPE_ARMADA_380 && is_smp()) 227 - return type; 228 - } 229 - 230 - return COHERENCY_FABRIC_TYPE_NONE; 398 + return type; 231 399 } 232 400 233 401 int coherency_available(void) ··· 271 407 272 408 static int __init coherency_late_init(void) 273 409 { 274 - int type = coherency_type(); 275 - 276 - if (type == COHERENCY_FABRIC_TYPE_NONE) 277 - return 0; 278 - 279 - if (type == COHERENCY_FABRIC_TYPE_ARMADA_375) { 280 - u32 dev, rev; 281 - 282 - if (mvebu_get_soc_id(&dev, &rev) == 0 && 283 - rev == ARMADA_375_Z1_REV) 284 - armada_375_coherency_init_wa(); 285 - } 286 - 287 - bus_register_notifier(&platform_bus_type, 288 - &mvebu_hwcc_nb); 289 - 410 + if (coherency_available()) 411 + bus_register_notifier(&platform_bus_type, 412 + &mvebu_hwcc_nb); 290 413 return 0; 291 414 } 292 415
+19 -2
arch/arm/mach-mvebu/coherency_ll.S
··· 24 24 #include <asm/cp15.h> 25 25 26 26 .text 27 - /* Returns the coherency base address in r1 (r0 is untouched) */ 27 + /* 28 + * Returns the coherency base address in r1 (r0 is untouched), or 0 if 29 + * the coherency fabric is not enabled. 30 + */ 28 31 ENTRY(ll_get_coherency_base) 29 32 mrc p15, 0, r1, c1, c0, 0 30 33 tst r1, #CR_M @ Check MMU bit enabled ··· 35 32 36 33 /* 37 34 * MMU is disabled, use the physical address of the coherency 38 - * base address. 35 + * base address. However, if the coherency fabric isn't mapped 36 + * (i.e. its virtual address is zero), it means coherency is 37 + * not enabled, so we return 0. 39 38 */ 39 + ldr r1, =coherency_base 40 + cmp r1, #0 41 + beq 2f 40 42 adr r1, 3f 41 43 ldr r3, [r1] 42 44 ldr r1, [r1, r3] ··· 93 85 */ 94 86 mov r0, lr 95 87 bl ll_get_coherency_base 88 + /* Bail out if the coherency is not enabled */ 89 + cmp r1, #0 90 + reteq r0 96 91 bl ll_get_coherency_cpumask 97 92 mov lr, r0 98 93 add r0, r1, #ARMADA_XP_CFB_CFG_REG_OFFSET ··· 118 107 */ 119 108 mov r0, lr 120 109 bl ll_get_coherency_base 110 + /* Bail out if the coherency is not enabled */ 111 + cmp r1, #0 112 + reteq r0 121 113 bl ll_get_coherency_cpumask 122 114 mov lr, r0 123 115 add r0, r1, #ARMADA_XP_CFB_CTL_REG_OFFSET ··· 145 131 */ 146 132 mov r0, lr 147 133 bl ll_get_coherency_base 134 + /* Bail out if the coherency is not enabled */ 135 + cmp r1, #0 136 + reteq r0 148 137 bl ll_get_coherency_cpumask 149 138 mov lr, r0 150 139 add r0, r1, #ARMADA_XP_CFB_CTL_REG_OFFSET
+2
arch/arm/mach-mvebu/common.h
··· 25 25 26 26 void __iomem *mvebu_get_scu_base(void); 27 27 28 + int mvebu_pm_init(void (*board_pm_enter)(void __iomem *sdram_reg, u32 srcmd)); 29 + 28 30 #endif
-1
arch/arm/mach-mvebu/cpu-reset.c
··· 15 15 #include <linux/of_address.h> 16 16 #include <linux/io.h> 17 17 #include <linux/resource.h> 18 - #include "armada-370-xp.h" 19 18 20 19 static void __iomem *cpu_reset_base; 21 20 static size_t cpu_reset_size;
+1
arch/arm/mach-mvebu/headsmp-a9.S
··· 22 22 ENTRY(mvebu_cortex_a9_secondary_startup) 23 23 ARM_BE8(setend be) 24 24 bl v7_invalidate_l1 25 + bl armada_38x_scu_power_up 25 26 b secondary_startup 26 27 ENDPROC(mvebu_cortex_a9_secondary_startup)
+51 -2
arch/arm/mach-mvebu/platsmp-a9.c
··· 43 43 else 44 44 mvebu_pmsu_set_cpu_boot_addr(hw_cpu, mvebu_cortex_a9_secondary_startup); 45 45 smp_wmb(); 46 + 47 + /* 48 + * Doing this before deasserting the CPUs is needed to wake up CPUs 49 + * in the offline state after using CPU hotplug. 50 + */ 51 + arch_send_wakeup_ipi_mask(cpumask_of(cpu)); 52 + 46 53 ret = mvebu_cpu_reset_deassert(hw_cpu); 47 54 if (ret) { 48 55 pr_err("Could not start the secondary CPU: %d\n", ret); 49 56 return ret; 50 57 } 51 - arch_send_wakeup_ipi_mask(cpumask_of(cpu)); 52 58 53 59 return 0; 54 60 } 61 + /* 62 + * When a CPU is brought back online, either through CPU hotplug, or 63 + * because of the boot of a kexec'ed kernel, the PMSU configuration 64 + * for this CPU might be in the deep idle state, preventing this CPU 65 + * from receiving interrupts. Here, we therefore take out the current 66 + * CPU from this state, which was entered by armada_38x_cpu_die() 67 + * below. 68 + */ 69 + static void armada_38x_secondary_init(unsigned int cpu) 70 + { 71 + mvebu_v7_pmsu_idle_exit(); 72 + } 73 + 74 + #ifdef CONFIG_HOTPLUG_CPU 75 + static void armada_38x_cpu_die(unsigned int cpu) 76 + { 77 + /* 78 + * CPU hotplug is implemented by putting offline CPUs into the 79 + * deep idle sleep state. 80 + */ 81 + armada_38x_do_cpu_suspend(true); 82 + } 83 + 84 + /* 85 + * We need a dummy function, so that platform_can_cpu_hotplug() knows 86 + * we support CPU hotplug. However, the function does not need to do 87 + * anything, because CPUs going offline can enter the deep idle state 88 + * by themselves, without any help from a still alive CPU. 
89 + */ 90 + static int armada_38x_cpu_kill(unsigned int cpu) 91 + { 92 + return 1; 93 + } 94 + #endif 55 95 56 96 static struct smp_operations mvebu_cortex_a9_smp_ops __initdata = { 57 97 .smp_boot_secondary = mvebu_cortex_a9_boot_secondary, 58 98 }; 59 99 100 + static struct smp_operations armada_38x_smp_ops __initdata = { 101 + .smp_boot_secondary = mvebu_cortex_a9_boot_secondary, 102 + .smp_secondary_init = armada_38x_secondary_init, 103 + #ifdef CONFIG_HOTPLUG_CPU 104 + .cpu_die = armada_38x_cpu_die, 105 + .cpu_kill = armada_38x_cpu_kill, 106 + #endif 107 + }; 108 + 60 109 CPU_METHOD_OF_DECLARE(mvebu_armada_375_smp, "marvell,armada-375-smp", 61 110 &mvebu_cortex_a9_smp_ops); 62 111 CPU_METHOD_OF_DECLARE(mvebu_armada_380_smp, "marvell,armada-380-smp", 63 - &mvebu_cortex_a9_smp_ops); 112 + &armada_38x_smp_ops);
+17 -16
arch/arm/mach-mvebu/platsmp.c
··· 30 30 #include "pmsu.h" 31 31 #include "coherency.h" 32 32 33 + #define ARMADA_XP_MAX_CPUS 4 34 + 33 35 #define AXP_BOOTROM_BASE 0xfff00000 34 36 #define AXP_BOOTROM_SIZE 0x100000 35 37 36 - static struct clk *__init get_cpu_clk(int cpu) 38 + static struct clk *get_cpu_clk(int cpu) 37 39 { 38 40 struct clk *cpu_clk; 39 41 struct device_node *np = of_get_cpu_node(cpu, NULL); ··· 48 46 return cpu_clk; 49 47 } 50 48 51 - static void __init set_secondary_cpus_clock(void) 49 + static void set_secondary_cpu_clock(unsigned int cpu) 52 50 { 53 - int thiscpu, cpu; 51 + int thiscpu; 54 52 unsigned long rate; 55 53 struct clk *cpu_clk; 56 54 57 - thiscpu = smp_processor_id(); 55 + thiscpu = get_cpu(); 56 + 58 57 cpu_clk = get_cpu_clk(thiscpu); 59 58 if (!cpu_clk) 60 - return; 59 + goto out; 61 60 clk_prepare_enable(cpu_clk); 62 61 rate = clk_get_rate(cpu_clk); 63 62 64 - /* set all the other CPU clk to the same rate than the boot CPU */ 65 - for_each_possible_cpu(cpu) { 66 - if (cpu == thiscpu) 67 - continue; 68 - cpu_clk = get_cpu_clk(cpu); 69 - if (!cpu_clk) 70 - return; 71 - clk_set_rate(cpu_clk, rate); 72 - clk_prepare_enable(cpu_clk); 73 - } 63 + cpu_clk = get_cpu_clk(cpu); 64 + if (!cpu_clk) 65 + goto out; 66 + clk_set_rate(cpu_clk, rate); 67 + clk_prepare_enable(cpu_clk); 68 + 69 + out: 70 + put_cpu(); 74 71 } 75 72 76 73 static int armada_xp_boot_secondary(unsigned int cpu, struct task_struct *idle) ··· 79 78 pr_info("Booting CPU %d\n", cpu); 80 79 81 80 hw_cpu = cpu_logical_map(cpu); 81 + set_secondary_cpu_clock(hw_cpu); 82 82 mvebu_pmsu_set_cpu_boot_addr(hw_cpu, armada_xp_secondary_startup); 83 83 84 84 /* ··· 128 126 struct resource res; 129 127 int err; 130 128 131 - set_secondary_cpus_clock(); 132 129 flush_cache_all(); 133 130 set_cpu_coherent(); 134 131
+141
arch/arm/mach-mvebu/pm-board.c
··· 1 + /* 2 + * Board-level suspend/resume support. 3 + * 4 + * Copyright (C) 2014 Marvell 5 + * 6 + * Thomas Petazzoni <thomas.petazzoni@free-electrons.com> 7 + * 8 + * This file is licensed under the terms of the GNU General Public 9 + * License version 2. This program is licensed "as is" without any 10 + * warranty of any kind, whether express or implied. 11 + */ 12 + 13 + #include <linux/delay.h> 14 + #include <linux/gpio.h> 15 + #include <linux/init.h> 16 + #include <linux/io.h> 17 + #include <linux/of.h> 18 + #include <linux/of_address.h> 19 + #include <linux/of_gpio.h> 20 + #include <linux/slab.h> 21 + #include "common.h" 22 + 23 + #define ARMADA_XP_GP_PIC_NR_GPIOS 3 24 + 25 + static void __iomem *gpio_ctrl; 26 + static int pic_gpios[ARMADA_XP_GP_PIC_NR_GPIOS]; 27 + static int pic_raw_gpios[ARMADA_XP_GP_PIC_NR_GPIOS]; 28 + 29 + static void mvebu_armada_xp_gp_pm_enter(void __iomem *sdram_reg, u32 srcmd) 30 + { 31 + u32 reg, ackcmd; 32 + int i; 33 + 34 + /* Put 001 as value on the GPIOs */ 35 + reg = readl(gpio_ctrl); 36 + for (i = 0; i < ARMADA_XP_GP_PIC_NR_GPIOS; i++) 37 + reg &= ~BIT(pic_raw_gpios[i]); 38 + reg |= BIT(pic_raw_gpios[0]); 39 + writel(reg, gpio_ctrl); 40 + 41 + /* Prepare to write 111 to the GPIOs */ 42 + ackcmd = readl(gpio_ctrl); 43 + for (i = 0; i < ARMADA_XP_GP_PIC_NR_GPIOS; i++) 44 + ackcmd |= BIT(pic_raw_gpios[i]); 45 + 46 + /* 47 + * Wait a while, the PIC needs quite a bit of time between the 48 + * two GPIO commands. 49 + */ 50 + mdelay(3000); 51 + 52 + asm volatile ( 53 + /* Align to a cache line */ 54 + ".balign 32\n\t" 55 + 56 + /* Enter self refresh */ 57 + "str %[srcmd], [%[sdram_reg]]\n\t" 58 + 59 + /* 60 + * Wait 100 cycles for DDR to enter self refresh, by 61 + * running a two-instruction loop 50 times. 
62 + */ 63 + "mov r1, #50\n\t" 64 + "1: subs r1, r1, #1\n\t" 65 + "bne 1b\n\t" 66 + 67 + /* Issue the command ACK */ 68 + "str %[ackcmd], [%[gpio_ctrl]]\n\t" 69 + 70 + /* Trap the processor */ 71 + "b .\n\t" 72 + : : [srcmd] "r" (srcmd), [sdram_reg] "r" (sdram_reg), 73 + [ackcmd] "r" (ackcmd), [gpio_ctrl] "r" (gpio_ctrl) : "r1"); 74 + } 75 + 76 + static int mvebu_armada_xp_gp_pm_init(void) 77 + { 78 + struct device_node *np; 79 + struct device_node *gpio_ctrl_np; 80 + int ret = 0, i; 81 + 82 + if (!of_machine_is_compatible("marvell,axp-gp")) 83 + return -ENODEV; 84 + 85 + np = of_find_node_by_name(NULL, "pm_pic"); 86 + if (!np) 87 + return -ENODEV; 88 + 89 + for (i = 0; i < ARMADA_XP_GP_PIC_NR_GPIOS; i++) { 90 + char *name; 91 + struct of_phandle_args args; 92 + 93 + pic_gpios[i] = of_get_named_gpio(np, "ctrl-gpios", i); 94 + if (pic_gpios[i] < 0) { 95 + ret = -ENODEV; 96 + goto out; 97 + } 98 + 99 + name = kasprintf(GFP_KERNEL, "pic-pin%d", i); 100 + if (!name) { 101 + ret = -ENOMEM; 102 + goto out; 103 + } 104 + 105 + ret = gpio_request(pic_gpios[i], name); 106 + if (ret < 0) { 107 + kfree(name); 108 + goto out; 109 + } 110 + 111 + ret = gpio_direction_output(pic_gpios[i], 0); 112 + if (ret < 0) { 113 + gpio_free(pic_gpios[i]); 114 + kfree(name); 115 + goto out; 116 + } 117 + 118 + ret = of_parse_phandle_with_fixed_args(np, "ctrl-gpios", 2, 119 + i, &args); 120 + if (ret < 0) { 121 + gpio_free(pic_gpios[i]); 122 + kfree(name); 123 + goto out; 124 + } 125 + 126 + gpio_ctrl_np = args.np; 127 + pic_raw_gpios[i] = args.args[0]; 128 + } 129 + 130 + gpio_ctrl = of_iomap(gpio_ctrl_np, 0); 131 + if (!gpio_ctrl) 132 + return -ENOMEM; 133 + 134 + mvebu_pm_init(mvebu_armada_xp_gp_pm_enter); 135 + 136 + out: 137 + of_node_put(np); 138 + return ret; 139 + } 140 + 141 + late_initcall(mvebu_armada_xp_gp_pm_init);
+218
arch/arm/mach-mvebu/pm.c
··· 1 + /* 2 + * Suspend/resume support. Currently supporting Armada XP only. 3 + * 4 + * Copyright (C) 2014 Marvell 5 + * 6 + * Thomas Petazzoni <thomas.petazzoni@free-electrons.com> 7 + * 8 + * This file is licensed under the terms of the GNU General Public 9 + * License version 2. This program is licensed "as is" without any 10 + * warranty of any kind, whether express or implied. 11 + */ 12 + 13 + #include <linux/cpu_pm.h> 14 + #include <linux/delay.h> 15 + #include <linux/gpio.h> 16 + #include <linux/io.h> 17 + #include <linux/kernel.h> 18 + #include <linux/mbus.h> 19 + #include <linux/of_address.h> 20 + #include <linux/suspend.h> 21 + #include <asm/cacheflush.h> 22 + #include <asm/outercache.h> 23 + #include <asm/suspend.h> 24 + 25 + #include "coherency.h" 26 + #include "pmsu.h" 27 + 28 + #define SDRAM_CONFIG_OFFS 0x0 29 + #define SDRAM_CONFIG_SR_MODE_BIT BIT(24) 30 + #define SDRAM_OPERATION_OFFS 0x18 31 + #define SDRAM_OPERATION_SELF_REFRESH 0x7 32 + #define SDRAM_DLB_EVICTION_OFFS 0x30c 33 + #define SDRAM_DLB_EVICTION_THRESHOLD_MASK 0xff 34 + 35 + static void (*mvebu_board_pm_enter)(void __iomem *sdram_reg, u32 srcmd); 36 + static void __iomem *sdram_ctrl; 37 + 38 + static int mvebu_pm_powerdown(unsigned long data) 39 + { 40 + u32 reg, srcmd; 41 + 42 + flush_cache_all(); 43 + outer_flush_all(); 44 + 45 + /* 46 + * Issue a Data Synchronization Barrier instruction to ensure 47 + * that all state saving has been completed. 
48 + */ 49 + dsb(); 50 + 51 + /* Flush the DLB and wait ~7 usec */ 52 + reg = readl(sdram_ctrl + SDRAM_DLB_EVICTION_OFFS); 53 + reg &= ~SDRAM_DLB_EVICTION_THRESHOLD_MASK; 54 + writel(reg, sdram_ctrl + SDRAM_DLB_EVICTION_OFFS); 55 + 56 + udelay(7); 57 + 58 + /* Set DRAM in battery backup mode */ 59 + reg = readl(sdram_ctrl + SDRAM_CONFIG_OFFS); 60 + reg &= ~SDRAM_CONFIG_SR_MODE_BIT; 61 + writel(reg, sdram_ctrl + SDRAM_CONFIG_OFFS); 62 + 63 + /* Prepare to go to self-refresh */ 64 + 65 + srcmd = readl(sdram_ctrl + SDRAM_OPERATION_OFFS); 66 + srcmd &= ~0x1F; 67 + srcmd |= SDRAM_OPERATION_SELF_REFRESH; 68 + 69 + mvebu_board_pm_enter(sdram_ctrl + SDRAM_OPERATION_OFFS, srcmd); 70 + 71 + return 0; 72 + } 73 + 74 + #define BOOT_INFO_ADDR 0x3000 75 + #define BOOT_MAGIC_WORD 0xdeadb002 76 + #define BOOT_MAGIC_LIST_END 0xffffffff 77 + 78 + /* 79 + * Those registers are accessed before switching the internal register 80 + * base, which is why we hardcode the 0xd0000000 base address, the one 81 + * used by the SoC out of reset. 82 + */ 83 + #define MBUS_WINDOW_12_CTRL 0xd00200b0 84 + #define MBUS_INTERNAL_REG_ADDRESS 0xd0020080 85 + 86 + #define SDRAM_WIN_BASE_REG(x) (0x20180 + (0x8*x)) 87 + #define SDRAM_WIN_CTRL_REG(x) (0x20184 + (0x8*x)) 88 + 89 + static phys_addr_t mvebu_internal_reg_base(void) 90 + { 91 + struct device_node *np; 92 + __be32 in_addr[2]; 93 + 94 + np = of_find_node_by_name(NULL, "internal-regs"); 95 + BUG_ON(!np); 96 + 97 + /* 98 + * Ask the DT what is the internal register address on this 99 + * platform. In the mvebu-mbus DT binding, 0xf0010000 100 + * corresponds to the internal register window. 
101 + */ 102 + in_addr[0] = cpu_to_be32(0xf0010000); 103 + in_addr[1] = 0x0; 104 + 105 + return of_translate_address(np, in_addr); 106 + } 107 + 108 + static void mvebu_pm_store_bootinfo(void) 109 + { 110 + u32 *store_addr; 111 + phys_addr_t resume_pc; 112 + 113 + store_addr = phys_to_virt(BOOT_INFO_ADDR); 114 + resume_pc = virt_to_phys(armada_370_xp_cpu_resume); 115 + 116 + /* 117 + * The bootloader expects the first two words to be a magic 118 + * value (BOOT_MAGIC_WORD), followed by the address of the 119 + * resume code to jump to. Then, it expects a sequence of 120 + * (address, value) pairs, which can be used to restore the 121 + * value of certain registers. This sequence must end with the 122 + * BOOT_MAGIC_LIST_END magic value. 123 + */ 124 + 125 + writel(BOOT_MAGIC_WORD, store_addr++); 126 + writel(resume_pc, store_addr++); 127 + 128 + /* 129 + * Some platforms remap their internal register base address 130 + * to 0xf1000000. However, out of reset, window 12 starts at 131 + * 0xf0000000 and ends at 0xf7ffffff, which would overlap with 132 + * the internal registers. Therefore, disable window 12. 133 + */ 134 + writel(MBUS_WINDOW_12_CTRL, store_addr++); 135 + writel(0x0, store_addr++); 136 + 137 + /* 138 + * Set the internal register base address to the value 139 + * expected by Linux, as read from the Device Tree. 140 + */ 141 + writel(MBUS_INTERNAL_REG_ADDRESS, store_addr++); 142 + writel(mvebu_internal_reg_base(), store_addr++); 143 + 144 + /* 145 + * Ask the mvebu-mbus driver to store the SDRAM window 146 + * configuration, which has to be restored by the bootloader 147 + * before re-entering the kernel on resume. 
148 + */ 149 + store_addr += mvebu_mbus_save_cpu_target(store_addr); 150 + 151 + writel(BOOT_MAGIC_LIST_END, store_addr); 152 + } 153 + 154 + static int mvebu_pm_enter(suspend_state_t state) 155 + { 156 + if (state != PM_SUSPEND_MEM) 157 + return -EINVAL; 158 + 159 + cpu_pm_enter(); 160 + 161 + mvebu_pm_store_bootinfo(); 162 + cpu_suspend(0, mvebu_pm_powerdown); 163 + 164 + outer_resume(); 165 + 166 + mvebu_v7_pmsu_idle_exit(); 167 + 168 + set_cpu_coherent(); 169 + 170 + cpu_pm_exit(); 171 + 172 + return 0; 173 + } 174 + 175 + static const struct platform_suspend_ops mvebu_pm_ops = { 176 + .enter = mvebu_pm_enter, 177 + .valid = suspend_valid_only_mem, 178 + }; 179 + 180 + int mvebu_pm_init(void (*board_pm_enter)(void __iomem *sdram_reg, u32 srcmd)) 181 + { 182 + struct device_node *np; 183 + struct resource res; 184 + 185 + if (!of_machine_is_compatible("marvell,armadaxp")) 186 + return -ENODEV; 187 + 188 + np = of_find_compatible_node(NULL, NULL, 189 + "marvell,armada-xp-sdram-controller"); 190 + if (!np) 191 + return -ENODEV; 192 + 193 + if (of_address_to_resource(np, 0, &res)) { 194 + of_node_put(np); 195 + return -ENODEV; 196 + } 197 + 198 + if (!request_mem_region(res.start, resource_size(&res), 199 + np->full_name)) { 200 + of_node_put(np); 201 + return -EBUSY; 202 + } 203 + 204 + sdram_ctrl = ioremap(res.start, resource_size(&res)); 205 + if (!sdram_ctrl) { 206 + release_mem_region(res.start, resource_size(&res)); 207 + of_node_put(np); 208 + return -ENOMEM; 209 + } 210 + 211 + of_node_put(np); 212 + 213 + mvebu_board_pm_enter = board_pm_enter; 214 + 215 + suspend_set_ops(&mvebu_pm_ops); 216 + 217 + return 0; 218 + }
+8 -3
arch/arm/mach-mvebu/pmsu.c
··· 20 20 21 21 #include <linux/clk.h> 22 22 #include <linux/cpu_pm.h> 23 + #include <linux/cpufreq-dt.h> 23 24 #include <linux/delay.h> 24 25 #include <linux/init.h> 25 26 #include <linux/io.h> ··· 40 39 #include <asm/suspend.h> 41 40 #include <asm/tlbflush.h> 42 41 #include "common.h" 43 - #include "armada-370-xp.h" 44 42 45 43 46 44 #define PMSU_BASE_OFFSET 0x100 ··· 312 312 return cpu_suspend(deepidle, armada_370_xp_pmsu_idle_enter); 313 313 } 314 314 315 - static int armada_38x_do_cpu_suspend(unsigned long deepidle) 315 + int armada_38x_do_cpu_suspend(unsigned long deepidle) 316 316 { 317 317 unsigned long flags = 0; 318 318 ··· 572 572 return 0; 573 573 } 574 574 575 + struct cpufreq_dt_platform_data cpufreq_dt_pd = { 576 + .independent_clocks = true, 577 + }; 578 + 575 579 static int __init armada_xp_pmsu_cpufreq_init(void) 576 580 { 577 581 struct device_node *np; ··· 648 644 } 649 645 } 650 646 651 - platform_device_register_simple("cpufreq-dt", -1, NULL, 0); 647 + platform_device_register_data(NULL, "cpufreq-dt", -1, 648 + &cpufreq_dt_pd, sizeof(cpufreq_dt_pd)); 652 649 return 0; 653 650 } 654 651
+3
arch/arm/mach-mvebu/pmsu.h
··· 17 17 phys_addr_t resume_addr_reg); 18 18 19 19 void mvebu_v7_pmsu_idle_exit(void); 20 + void armada_370_xp_cpu_resume(void); 20 21 22 + int armada_370_xp_pmsu_idle_enter(unsigned long deepidle); 23 + int armada_38x_do_cpu_suspend(unsigned long deepidle); 21 24 #endif /* __MACH_370_XP_PMSU_H */
+21 -7
arch/arm/mach-mvebu/pmsu_ll.S
··· 12 12 #include <linux/linkage.h> 13 13 #include <asm/assembler.h> 14 14 15 + 16 + ENTRY(armada_38x_scu_power_up) 17 + mrc p15, 4, r1, c15, c0 @ get SCU base address 18 + orr r1, r1, #0x8 @ SCU CPU Power Status Register 19 + mrc 15, 0, r0, cr0, cr0, 5 @ get the CPU ID 20 + and r0, r0, #15 21 + add r1, r1, r0 22 + mov r0, #0x0 23 + strb r0, [r1] @ switch SCU power state to Normal mode 24 + ret lr 25 + ENDPROC(armada_38x_scu_power_up) 26 + 15 27 /* 16 28 * This is the entry point through which CPUs exiting cpuidle deep 17 29 * idle state are going. 18 30 */ 19 31 ENTRY(armada_370_xp_cpu_resume) 20 32 ARM_BE8(setend be ) @ go BE8 if entered LE 33 + /* 34 + * Disable the MMU that might have been enabled in BootROM if 35 + * this code is used in the resume path of a suspend/resume 36 + * cycle. 37 + */ 38 + mrc p15, 0, r1, c1, c0, 0 39 + bic r1, #1 40 + mcr p15, 0, r1, c1, c0, 0 21 41 bl ll_add_cpu_to_smp_group 22 42 bl ll_enable_coherency 23 43 b cpu_resume ··· 47 27 /* do we need it for Armada 38x*/ 48 28 ARM_BE8(setend be ) @ go BE8 if entered LE 49 29 bl v7_invalidate_l1 50 - mrc p15, 4, r1, c15, c0 @ get SCU base address 51 - orr r1, r1, #0x8 @ SCU CPU Power Status Register 52 - mrc 15, 0, r0, cr0, cr0, 5 @ get the CPU ID 53 - and r0, r0, #15 54 - add r1, r1, r0 55 - mov r0, #0x0 56 - strb r0, [r1] @ switch SCU power state to Normal mode 30 + bl armada_38x_scu_power_up 57 31 b cpu_resume 58 32 ENDPROC(armada_38x_cpu_resume) 59 33
+1 -1
arch/arm/mach-omap2/Makefile
··· 113 113 obj-$(CONFIG_ARCH_OMAP2) += prm2xxx_3xxx.o prm2xxx.o cm2xxx.o 114 114 obj-$(CONFIG_ARCH_OMAP3) += prm2xxx_3xxx.o prm3xxx.o cm3xxx.o 115 115 obj-$(CONFIG_ARCH_OMAP3) += vc3xxx_data.o vp3xxx_data.o 116 - omap-prcm-4-5-common = cminst44xx.o cm44xx.o prm44xx.o \ 116 + omap-prcm-4-5-common = cminst44xx.o prm44xx.o \ 117 117 prcm_mpu44xx.o prminst44xx.o \ 118 118 vc44xx_data.o vp44xx_data.o 119 119 obj-$(CONFIG_ARCH_OMAP4) += $(omap-prcm-4-5-common)
+2 -10
arch/arm/mach-omap2/am33xx-restart.c
··· 9 9 #include <linux/reboot.h> 10 10 11 11 #include "common.h" 12 - #include "prm-regbits-33xx.h" 13 - #include "prm33xx.h" 12 + #include "prm.h" 14 13 15 14 /** 16 15 * am3xx_restart - trigger a software restart of the SoC ··· 23 24 { 24 25 /* TODO: Handle mode and cmd if necessary */ 25 26 26 - am33xx_prm_rmw_reg_bits(AM33XX_RST_GLOBAL_WARM_SW_MASK, 27 - AM33XX_RST_GLOBAL_WARM_SW_MASK, 28 - AM33XX_PRM_DEVICE_MOD, 29 - AM33XX_PRM_RSTCTRL_OFFSET); 30 - 31 - /* OCP barrier */ 32 - (void)am33xx_prm_read_reg(AM33XX_PRM_DEVICE_MOD, 33 - AM33XX_PRM_RSTCTRL_OFFSET); 27 + omap_prm_reset_system(); 34 28 }
+6
arch/arm/mach-omap2/cclock3xxx_data.c
··· 257 257 .get_parent = &omap2_init_dpll_parent, 258 258 .recalc_rate = &omap3_dpll_recalc, 259 259 .set_rate = &omap3_noncore_dpll_set_rate, 260 + .set_parent = &omap3_noncore_dpll_set_parent, 261 + .set_rate_and_parent = &omap3_noncore_dpll_set_rate_and_parent, 262 + .determine_rate = &omap3_noncore_dpll_determine_rate, 260 263 .round_rate = &omap2_dpll_round_rate, 261 264 }; 262 265 ··· 370 367 .get_parent = &omap2_init_dpll_parent, 371 368 .recalc_rate = &omap3_dpll_recalc, 372 369 .set_rate = &omap3_dpll4_set_rate, 370 + .set_parent = &omap3_noncore_dpll_set_parent, 371 + .set_rate_and_parent = &omap3_dpll4_set_rate_and_parent, 372 + .determine_rate = &omap3_noncore_dpll_determine_rate, 373 373 .round_rate = &omap2_dpll_round_rate, 374 374 }; 375 375
+6 -1
arch/arm/mach-omap2/clock.c
··· 171 171 _wait_idlest_generic(clk, idlest_reg, (1 << idlest_bit), 172 172 idlest_val, __clk_get_name(clk->hw.clk)); 173 173 } else { 174 - cm_wait_module_ready(prcm_mod, idlest_reg_id, idlest_bit); 174 + omap_cm_wait_module_ready(0, prcm_mod, idlest_reg_id, 175 + idlest_bit); 175 176 }; 176 177 } 177 178 ··· 772 771 ti_clk_features.cm_idlest_val = OMAP24XX_CM_IDLEST_VAL; 773 772 else if (cpu_is_omap34xx()) 774 773 ti_clk_features.cm_idlest_val = OMAP34XX_CM_IDLEST_VAL; 774 + 775 + /* On OMAP3430 ES1.0, DPLL4 can't be re-programmed */ 776 + if (omap_rev() == OMAP3430_REV_ES1_0) 777 + ti_clk_features.flags |= TI_CLK_DPLL4_DENY_REPROGRAM; 775 778 }
+1
arch/arm/mach-omap2/clock.h
··· 234 234 }; 235 235 236 236 #define TI_CLK_DPLL_HAS_FREQSEL (1 << 0) 237 + #define TI_CLK_DPLL4_DENY_REPROGRAM (1 << 1) 237 238 238 239 extern struct ti_clk_features ti_clk_features; 239 240
+37 -1
arch/arm/mach-omap2/clock3xxx.c
··· 38 38 39 39 /* needed by omap3_core_dpll_m2_set_rate() */ 40 40 struct clk *sdrc_ick_p, *arm_fck_p; 41 + 42 + /** 43 + * omap3_dpll4_set_rate - set rate for omap3 per-dpll 44 + * @hw: clock to change 45 + * @rate: target rate for clock 46 + * @parent_rate: rate of the parent clock 47 + * 48 + * Check if the current SoC supports the per-dpll reprogram operation 49 + * or not, and then do the rate change if supported. Returns -EINVAL 50 + * if not supported, 0 for success, and potential error codes from the 51 + * clock rate change. 52 + */ 41 53 int omap3_dpll4_set_rate(struct clk_hw *hw, unsigned long rate, 42 54 unsigned long parent_rate) 43 55 { ··· 58 46 * on 3430ES1 prevents us from changing DPLL multipliers or dividers 59 47 * on DPLL4. 60 48 */ 61 - if (omap_rev() == OMAP3430_REV_ES1_0) { 49 + if (ti_clk_features.flags & TI_CLK_DPLL4_DENY_REPROGRAM) { 62 50 pr_err("clock: DPLL4 cannot change rate due to silicon 'Limitation 2.5' on 3430ES1.\n"); 63 51 return -EINVAL; 64 52 } 65 53 66 54 return omap3_noncore_dpll_set_rate(hw, rate, parent_rate); 55 + } 56 + 57 + /** 58 + * omap3_dpll4_set_rate_and_parent - set rate and parent for omap3 per-dpll 59 + * @hw: clock to change 60 + * @rate: target rate for clock 61 + * @parent_rate: rate of the parent clock 62 + * @index: parent index, 0 - reference clock, 1 - bypass clock 63 + * 64 + * Check if the current SoC supports the per-dpll reprogram operation 65 + * or not, and then do the rate + parent change if supported. Returns 66 + * -EINVAL if not supported, 0 for success, and potential error codes 67 + * from the clock rate change. 
68 + */ 69 + int omap3_dpll4_set_rate_and_parent(struct clk_hw *hw, unsigned long rate, 70 + unsigned long parent_rate, u8 index) 71 + { 72 + if (ti_clk_features.flags & TI_CLK_DPLL4_DENY_REPROGRAM) { 73 + pr_err("clock: DPLL4 cannot change rate due to silicon 'Limitation 2.5' on 3430ES1.\n"); 74 + return -EINVAL; 75 + } 76 + 77 + return omap3_noncore_dpll_set_rate_and_parent(hw, rate, parent_rate, 78 + index); 67 79 } 68 80 69 81 void __init omap3_clk_lock_dpll5(void)
+15 -3
arch/arm/mach-omap2/cm.h
··· 45 45 * struct cm_ll_data - fn ptrs to per-SoC CM function implementations 46 46 * @split_idlest_reg: ptr to the SoC CM-specific split_idlest_reg impl 47 47 * @wait_module_ready: ptr to the SoC CM-specific wait_module_ready impl 48 + * @wait_module_idle: ptr to the SoC CM-specific wait_module_idle impl 49 + * @module_enable: ptr to the SoC CM-specific module_enable impl 50 + * @module_disable: ptr to the SoC CM-specific module_disable impl 48 51 */ 49 52 struct cm_ll_data { 50 53 int (*split_idlest_reg)(void __iomem *idlest_reg, s16 *prcm_inst, 51 54 u8 *idlest_reg_id); 52 - int (*wait_module_ready)(s16 prcm_mod, u8 idlest_id, u8 idlest_shift); 55 + int (*wait_module_ready)(u8 part, s16 prcm_mod, u16 idlest_reg, 56 + u8 idlest_shift); 57 + int (*wait_module_idle)(u8 part, s16 prcm_mod, u16 idlest_reg, 58 + u8 idlest_shift); 59 + void (*module_enable)(u8 mode, u8 part, u16 inst, u16 clkctrl_offs); 60 + void (*module_disable)(u8 part, u16 inst, u16 clkctrl_offs); 53 61 }; 54 62 55 63 extern int cm_split_idlest_reg(void __iomem *idlest_reg, s16 *prcm_inst, 56 64 u8 *idlest_reg_id); 57 - extern int cm_wait_module_ready(s16 prcm_mod, u8 idlest_id, u8 idlest_shift); 58 - 65 + int omap_cm_wait_module_ready(u8 part, s16 prcm_mod, u16 idlest_reg, 66 + u8 idlest_shift); 67 + int omap_cm_wait_module_idle(u8 part, s16 prcm_mod, u16 idlest_reg, 68 + u8 idlest_shift); 69 + int omap_cm_module_enable(u8 mode, u8 part, u16 inst, u16 clkctrl_offs); 70 + int omap_cm_module_disable(u8 part, u16 inst, u16 clkctrl_offs); 59 71 extern int cm_register(struct cm_ll_data *cld); 60 72 extern int cm_unregister(struct cm_ll_data *cld); 61 73
-2
arch/arm/mach-omap2/cm1_44xx.h
··· 25 25 #ifndef __ARCH_ARM_MACH_OMAP2_CM1_44XX_H 26 26 #define __ARCH_ARM_MACH_OMAP2_CM1_44XX_H 27 27 28 - #include "cm_44xx_54xx.h" 29 - 30 28 /* CM1 base address */ 31 29 #define OMAP4430_CM1_BASE 0x4a004000 32 30
-2
arch/arm/mach-omap2/cm1_54xx.h
···
  #ifndef __ARCH_ARM_MACH_OMAP2_CM1_54XX_H
  #define __ARCH_ARM_MACH_OMAP2_CM1_54XX_H

- #include "cm_44xx_54xx.h"
-
  /* CM1 base address */
  #define OMAP54XX_CM_CORE_AON_BASE 0x4a004000
-2
arch/arm/mach-omap2/cm1_7xx.h
···
  #ifndef __ARCH_ARM_MACH_OMAP2_CM1_7XX_H
  #define __ARCH_ARM_MACH_OMAP2_CM1_7XX_H

- #include "cm_44xx_54xx.h"
-
  /* CM1 base address */
  #define DRA7XX_CM_CORE_AON_BASE 0x4a005000
-2
arch/arm/mach-omap2/cm2_44xx.h
···
  #ifndef __ARCH_ARM_MACH_OMAP2_CM2_44XX_H
  #define __ARCH_ARM_MACH_OMAP2_CM2_44XX_H

- #include "cm_44xx_54xx.h"
-
  /* CM2 base address */
  #define OMAP4430_CM2_BASE 0x4a008000
-2
arch/arm/mach-omap2/cm2_54xx.h
···
  #ifndef __ARCH_ARM_MACH_OMAP2_CM2_54XX_H
  #define __ARCH_ARM_MACH_OMAP2_CM2_54XX_H

- #include "cm_44xx_54xx.h"
-
  /* CM2 base address */
  #define OMAP54XX_CM_CORE_BASE 0x4a008000
-2
arch/arm/mach-omap2/cm2_7xx.h
···
  #ifndef __ARCH_ARM_MACH_OMAP2_CM2_7XX_H
  #define __ARCH_ARM_MACH_OMAP2_CM2_7XX_H

- #include "cm_44xx_54xx.h"
-
  /* CM2 base address */
  #define DRA7XX_CM_CORE_BASE 0x4a008000
+10 -7
arch/arm/mach-omap2/cm2xxx.c
···
          omap2_cm_write_mod_reg(v, module, OMAP2_CM_CLKSTCTRL);
  }

- bool omap2xxx_cm_is_clkdm_in_hwsup(s16 module, u32 mask)
+ static bool omap2xxx_cm_is_clkdm_in_hwsup(s16 module, u32 mask)
  {
          u32 v;
···
          return (v == OMAP24XX_CLKSTCTRL_ENABLE_AUTO) ? 1 : 0;
  }

- void omap2xxx_cm_clkdm_enable_hwsup(s16 module, u32 mask)
+ static void omap2xxx_cm_clkdm_enable_hwsup(s16 module, u32 mask)
  {
          _write_clktrctrl(OMAP24XX_CLKSTCTRL_ENABLE_AUTO, module, mask);
  }

- void omap2xxx_cm_clkdm_disable_hwsup(s16 module, u32 mask)
+ static void omap2xxx_cm_clkdm_disable_hwsup(s16 module, u32 mask)
  {
          _write_clktrctrl(OMAP24XX_CLKSTCTRL_DISABLE_AUTO, module, mask);
  }
···
          v |= m;
          omap2_cm_write_mod_reg(v, PLL_MOD, CM_CLKEN);

-         omap2xxx_cm_wait_module_ready(PLL_MOD, 1, status_bit);
+         omap2xxx_cm_wait_module_ready(0, PLL_MOD, 1, status_bit);

          /*
           * REVISIT: Should we return an error code if
···
  * XXX This function is only needed until absolute register addresses are
  * removed from the OMAP struct clk records.
  */
- int omap2xxx_cm_split_idlest_reg(void __iomem *idlest_reg, s16 *prcm_inst,
-                                  u8 *idlest_reg_id)
+ static int omap2xxx_cm_split_idlest_reg(void __iomem *idlest_reg,
+                                         s16 *prcm_inst,
+                                         u8 *idlest_reg_id)
  {
          unsigned long offs;
          u8 idlest_offs;
···

  /**
  * omap2xxx_cm_wait_module_ready - wait for a module to leave idle or standby
+ * @part: PRCM partition, ignored for OMAP2
  * @prcm_mod: PRCM module offset
  * @idlest_id: CM_IDLESTx register ID (i.e., x = 1, 2, 3)
  * @idlest_shift: shift of the bit in the CM_IDLEST* register to check
···
  * (@prcm_mod, @idlest_id, @idlest_shift) is clocked. Return 0 upon
  * success or -EBUSY if the module doesn't enable in time.
  */
- int omap2xxx_cm_wait_module_ready(s16 prcm_mod, u8 idlest_id, u8 idlest_shift)
+ int omap2xxx_cm_wait_module_ready(u8 part, s16 prcm_mod, u16 idlest_id,
+                                   u8 idlest_shift)
  {
          int ena = 0, i = 0;
          u8 cm_idlest_reg;
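The `*_wait_module_ready()` implementations above all rest on the same `omap_test_timeout()` idiom: poll a status condition a bounded number of times, then report `-EBUSY` on timeout. A self-contained sketch of that idiom, with hypothetical names (`wait_module_ready_sketch`, `reads_until_ready`) and the real hardware read replaced by a test knob:

```c
/* Sketch of the omap_test_timeout() polling idiom used by the
 * *_wait_module_ready() functions: spin until the condition holds or a
 * bounded iteration count is exhausted, then return -EBUSY (-16). */
#define MAX_MODULE_READY_TIME 100

/* Test knob standing in for a CM_IDLEST register read: the "module"
 * reports ready only after this many polls. */
static int reads_until_ready;

static int module_is_ready(void)
{
        if (reads_until_ready > 0) {
                reads_until_ready--;
                return 0;       /* still idle/in transition */
        }
        return 1;               /* functional: registers accessible */
}

int wait_module_ready_sketch(void)
{
        int i = 0;

        /* The kernel macro also udelay(1)s between polls; omitted here. */
        while (!module_is_ready() && i < MAX_MODULE_READY_TIME)
                i++;

        return (i < MAX_MODULE_READY_TIME) ? 0 : -16;   /* -EBUSY */
}
```

The bounded loop matters because reading a module's registers before IDLEST reports it functional can raise an imprecise external abort, so the caller must get a definite ready/timeout answer rather than block forever.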
+2 -8
arch/arm/mach-omap2/cm2xxx.h
···

  #ifndef __ASSEMBLER__

- extern void omap2xxx_cm_clkdm_enable_hwsup(s16 module, u32 mask);
- extern void omap2xxx_cm_clkdm_disable_hwsup(s16 module, u32 mask);
-
  extern void omap2xxx_cm_set_dpll_disable_autoidle(void);
  extern void omap2xxx_cm_set_dpll_auto_low_power_stop(void);
···
  extern void omap2xxx_cm_set_apll96_disable_autoidle(void);
  extern void omap2xxx_cm_set_apll96_auto_low_power_stop(void);

- extern bool omap2xxx_cm_is_clkdm_in_hwsup(s16 module, u32 mask);
- extern int omap2xxx_cm_wait_module_ready(s16 prcm_mod, u8 idlest_id,
-                                          u8 idlest_shift);
- extern int omap2xxx_cm_split_idlest_reg(void __iomem *idlest_reg,
-                                         s16 *prcm_inst, u8 *idlest_reg_id);
+ int omap2xxx_cm_wait_module_ready(u8 part, s16 prcm_mod, u16 idlest_id,
+                                   u8 idlest_shift);
  extern int omap2xxx_cm_fclks_active(void);
  extern int omap2xxx_cm_mpu_retention_allowed(void);
  extern u32 omap2xxx_cm_get_core_clk_src(void);
+41 -20
arch/arm/mach-omap2/cm33xx.c
···
  /**
  * _clkctrl_idlest - read a CM_*_CLKCTRL register; mask & shift IDLEST bitfield
  * @inst: CM instance register offset (*_INST macro)
- * @cdoffs: Clockdomain register offset (*_CDOFFS macro)
  * @clkctrl_offs: Module clock control register offset (*_CLKCTRL macro)
  *
  * Return the IDLEST bitfield of a CM_*_CLKCTRL register, shifted down to
  * bit 0.
  */
- static u32 _clkctrl_idlest(u16 inst, s16 cdoffs, u16 clkctrl_offs)
+ static u32 _clkctrl_idlest(u16 inst, u16 clkctrl_offs)
  {
          u32 v = am33xx_cm_read_reg(inst, clkctrl_offs);
          v &= AM33XX_IDLEST_MASK;
···
  /**
  * _is_module_ready - can module registers be accessed without causing an abort?
  * @inst: CM instance register offset (*_INST macro)
- * @cdoffs: Clockdomain register offset (*_CDOFFS macro)
  * @clkctrl_offs: Module clock control register offset (*_CLKCTRL macro)
  *
  * Returns true if the module's CM_*_CLKCTRL.IDLEST bitfield is either
  * *FUNCTIONAL or *INTERFACE_IDLE; false otherwise.
  */
- static bool _is_module_ready(u16 inst, s16 cdoffs, u16 clkctrl_offs)
+ static bool _is_module_ready(u16 inst, u16 clkctrl_offs)
  {
          u32 v;

-         v = _clkctrl_idlest(inst, cdoffs, clkctrl_offs);
+         v = _clkctrl_idlest(inst, clkctrl_offs);

          return (v == CLKCTRL_IDLEST_FUNCTIONAL ||
                  v == CLKCTRL_IDLEST_INTERFACE_IDLE) ? true : false;
···
  * Returns true if the clockdomain referred to by (@inst, @cdoffs)
  * is in hardware-supervised idle mode, or 0 otherwise.
  */
- bool am33xx_cm_is_clkdm_in_hwsup(u16 inst, u16 cdoffs)
+ static bool am33xx_cm_is_clkdm_in_hwsup(u16 inst, u16 cdoffs)
  {
          u32 v;
···
  * Put a clockdomain referred to by (@inst, @cdoffs) into
  * hardware-supervised idle mode. No return value.
  */
- void am33xx_cm_clkdm_enable_hwsup(u16 inst, u16 cdoffs)
+ static void am33xx_cm_clkdm_enable_hwsup(u16 inst, u16 cdoffs)
  {
          _clktrctrl_write(OMAP34XX_CLKSTCTRL_ENABLE_AUTO, inst, cdoffs);
  }
···
  * software-supervised idle mode, i.e., controlled manually by the
  * Linux OMAP clockdomain code. No return value.
  */
- void am33xx_cm_clkdm_disable_hwsup(u16 inst, u16 cdoffs)
+ static void am33xx_cm_clkdm_disable_hwsup(u16 inst, u16 cdoffs)
  {
          _clktrctrl_write(OMAP34XX_CLKSTCTRL_DISABLE_AUTO, inst, cdoffs);
  }
···
  * Put a clockdomain referred to by (@inst, @cdoffs) into idle
  * No return value.
  */
- void am33xx_cm_clkdm_force_sleep(u16 inst, u16 cdoffs)
+ static void am33xx_cm_clkdm_force_sleep(u16 inst, u16 cdoffs)
  {
          _clktrctrl_write(OMAP34XX_CLKSTCTRL_FORCE_SLEEP, inst, cdoffs);
  }
···
  * Take a clockdomain referred to by (@inst, @cdoffs) out of idle,
  * waking it up. No return value.
  */
- void am33xx_cm_clkdm_force_wakeup(u16 inst, u16 cdoffs)
+ static void am33xx_cm_clkdm_force_wakeup(u16 inst, u16 cdoffs)
  {
          _clktrctrl_write(OMAP34XX_CLKSTCTRL_FORCE_WAKEUP, inst, cdoffs);
  }
···

  /**
  * am33xx_cm_wait_module_ready - wait for a module to be in 'func' state
+ * @part: PRCM partition, ignored for AM33xx
  * @inst: CM instance register offset (*_INST macro)
- * @cdoffs: Clockdomain register offset (*_CDOFFS macro)
  * @clkctrl_offs: Module clock control register offset (*_CLKCTRL macro)
+ * @bit_shift: bit shift for the register, ignored for AM33xx
  *
  * Wait for the module IDLEST to be functional. If the idle state is in any
  * the non functional state (trans, idle or disabled), module and thus the
  * sysconfig cannot be accessed and will probably lead to an "imprecise
  * external abort"
  */
- int am33xx_cm_wait_module_ready(u16 inst, s16 cdoffs, u16 clkctrl_offs)
+ static int am33xx_cm_wait_module_ready(u8 part, s16 inst, u16 clkctrl_offs,
+                                        u8 bit_shift)
  {
          int i = 0;

-         omap_test_timeout(_is_module_ready(inst, cdoffs, clkctrl_offs),
+         omap_test_timeout(_is_module_ready(inst, clkctrl_offs),
                            MAX_MODULE_READY_TIME, i);

          return (i < MAX_MODULE_READY_TIME) ? 0 : -EBUSY;
···
  /**
  * am33xx_cm_wait_module_idle - wait for a module to be in 'disabled'
  * state
+ * @part: CM partition, ignored for AM33xx
  * @inst: CM instance register offset (*_INST macro)
- * @cdoffs: Clockdomain register offset (*_CDOFFS macro)
  * @clkctrl_offs: Module clock control register offset (*_CLKCTRL macro)
+ * @bit_shift: bit shift for the register, ignored for AM33xx
  *
  * Wait for the module IDLEST to be disabled. Some PRCM transition,
  * like reset assertion or parent clock de-activation must wait the
  * module to be fully disabled.
  */
- int am33xx_cm_wait_module_idle(u16 inst, s16 cdoffs, u16 clkctrl_offs)
+ static int am33xx_cm_wait_module_idle(u8 part, s16 inst, u16 clkctrl_offs,
+                                       u8 bit_shift)
  {
          int i = 0;

          if (!clkctrl_offs)
                  return 0;

-         omap_test_timeout((_clkctrl_idlest(inst, cdoffs, clkctrl_offs) ==
+         omap_test_timeout((_clkctrl_idlest(inst, clkctrl_offs) ==
                             CLKCTRL_IDLEST_DISABLED),
                            MAX_MODULE_READY_TIME, i);
···
  /**
  * am33xx_cm_module_enable - Enable the modulemode inside CLKCTRL
  * @mode: Module mode (SW or HW)
+ * @part: CM partition, ignored for AM33xx
  * @inst: CM instance register offset (*_INST macro)
- * @cdoffs: Clockdomain register offset (*_CDOFFS macro)
  * @clkctrl_offs: Module clock control register offset (*_CLKCTRL macro)
  *
  * No return value.
  */
- void am33xx_cm_module_enable(u8 mode, u16 inst, s16 cdoffs, u16 clkctrl_offs)
+ static void am33xx_cm_module_enable(u8 mode, u8 part, u16 inst,
+                                     u16 clkctrl_offs)
  {
          u32 v;
···
  /**
  * am33xx_cm_module_disable - Disable the module inside CLKCTRL
+ * @part: CM partition, ignored for AM33xx
  * @inst: CM instance register offset (*_INST macro)
- * @cdoffs: Clockdomain register offset (*_CDOFFS macro)
  * @clkctrl_offs: Module clock control register offset (*_CLKCTRL macro)
  *
  * No return value.
  */
- void am33xx_cm_module_disable(u16 inst, s16 cdoffs, u16 clkctrl_offs)
+ static void am33xx_cm_module_disable(u8 part, u16 inst, u16 clkctrl_offs)
  {
          u32 v;
···
          .clkdm_clk_enable = am33xx_clkdm_clk_enable,
          .clkdm_clk_disable = am33xx_clkdm_clk_disable,
  };
+
+ static struct cm_ll_data am33xx_cm_ll_data = {
+         .wait_module_ready = &am33xx_cm_wait_module_ready,
+         .wait_module_idle = &am33xx_cm_wait_module_idle,
+         .module_enable = &am33xx_cm_module_enable,
+         .module_disable = &am33xx_cm_module_disable,
+ };
+
+ int __init am33xx_cm_init(void)
+ {
+         return cm_register(&am33xx_cm_ll_data);
+ }
+
+ static void __exit am33xx_cm_exit(void)
+ {
+         cm_unregister(&am33xx_cm_ll_data);
+ }
+ __exitcall(am33xx_cm_exit);
+1 -36
arch/arm/mach-omap2/cm33xx.h
···

  #ifndef __ASSEMBLER__
- bool am33xx_cm_is_clkdm_in_hwsup(u16 inst, u16 cdoffs);
- void am33xx_cm_clkdm_enable_hwsup(u16 inst, u16 cdoffs);
- void am33xx_cm_clkdm_disable_hwsup(u16 inst, u16 cdoffs);
- void am33xx_cm_clkdm_force_sleep(u16 inst, u16 cdoffs);
- void am33xx_cm_clkdm_force_wakeup(u16 inst, u16 cdoffs);
-
- #if defined(CONFIG_SOC_AM33XX) || defined(CONFIG_SOC_AM43XX)
- extern int am33xx_cm_wait_module_idle(u16 inst, s16 cdoffs,
-                                       u16 clkctrl_offs);
- extern void am33xx_cm_module_enable(u8 mode, u16 inst, s16 cdoffs,
-                                     u16 clkctrl_offs);
- extern void am33xx_cm_module_disable(u16 inst, s16 cdoffs,
-                                      u16 clkctrl_offs);
- extern int am33xx_cm_wait_module_ready(u16 inst, s16 cdoffs,
-                                        u16 clkctrl_offs);
- #else
- static inline int am33xx_cm_wait_module_idle(u16 inst, s16 cdoffs,
-                                              u16 clkctrl_offs)
- {
-         return 0;
- }
- static inline void am33xx_cm_module_enable(u8 mode, u16 inst, s16 cdoffs,
-                                            u16 clkctrl_offs)
- {
- }
- static inline void am33xx_cm_module_disable(u16 inst, s16 cdoffs,
-                                             u16 clkctrl_offs)
- {
- }
- static inline int am33xx_cm_wait_module_ready(u16 inst, s16 cdoffs,
-                                               u16 clkctrl_offs)
- {
-         return 0;
- }
- #endif
-
+ int am33xx_cm_init(void);
  #endif /* ASSEMBLER */
  #endif
+11 -8
arch/arm/mach-omap2/cm3xxx.c
···
          omap2_cm_write_mod_reg(v, module, OMAP2_CM_CLKSTCTRL);
  }

- bool omap3xxx_cm_is_clkdm_in_hwsup(s16 module, u32 mask)
+ static bool omap3xxx_cm_is_clkdm_in_hwsup(s16 module, u32 mask)
  {
          u32 v;
···
          return (v == OMAP34XX_CLKSTCTRL_ENABLE_AUTO) ? 1 : 0;
  }

- void omap3xxx_cm_clkdm_enable_hwsup(s16 module, u32 mask)
+ static void omap3xxx_cm_clkdm_enable_hwsup(s16 module, u32 mask)
  {
          _write_clktrctrl(OMAP34XX_CLKSTCTRL_ENABLE_AUTO, module, mask);
  }

- void omap3xxx_cm_clkdm_disable_hwsup(s16 module, u32 mask)
+ static void omap3xxx_cm_clkdm_disable_hwsup(s16 module, u32 mask)
  {
          _write_clktrctrl(OMAP34XX_CLKSTCTRL_DISABLE_AUTO, module, mask);
  }

- void omap3xxx_cm_clkdm_force_sleep(s16 module, u32 mask)
+ static void omap3xxx_cm_clkdm_force_sleep(s16 module, u32 mask)
  {
          _write_clktrctrl(OMAP34XX_CLKSTCTRL_FORCE_SLEEP, module, mask);
  }

- void omap3xxx_cm_clkdm_force_wakeup(s16 module, u32 mask)
+ static void omap3xxx_cm_clkdm_force_wakeup(s16 module, u32 mask)
  {
          _write_clktrctrl(OMAP34XX_CLKSTCTRL_FORCE_WAKEUP, module, mask);
  }
···

  /**
  * omap3xxx_cm_wait_module_ready - wait for a module to leave idle or standby
+ * @part: PRCM partition, ignored for OMAP3
  * @prcm_mod: PRCM module offset
  * @idlest_id: CM_IDLESTx register ID (i.e., x = 1, 2, 3)
  * @idlest_shift: shift of the bit in the CM_IDLEST* register to check
···
  * (@prcm_mod, @idlest_id, @idlest_shift) is clocked. Return 0 upon
  * success or -EBUSY if the module doesn't enable in time.
  */
- int omap3xxx_cm_wait_module_ready(s16 prcm_mod, u8 idlest_id, u8 idlest_shift)
+ static int omap3xxx_cm_wait_module_ready(u8 part, s16 prcm_mod, u16 idlest_id,
+                                          u8 idlest_shift)
  {
          int ena = 0, i = 0;
          u8 cm_idlest_reg;
···
  * XXX This function is only needed until absolute register addresses are
  * removed from the OMAP struct clk records.
  */
- int omap3xxx_cm_split_idlest_reg(void __iomem *idlest_reg, s16 *prcm_inst,
-                                  u8 *idlest_reg_id)
+ static int omap3xxx_cm_split_idlest_reg(void __iomem *idlest_reg,
+                                         s16 *prcm_inst,
+                                         u8 *idlest_reg_id)
  {
          unsigned long offs;
          u8 idlest_offs;
-12
arch/arm/mach-omap2/cm3xxx.h
···

  #ifndef __ASSEMBLER__

- extern void omap3xxx_cm_clkdm_enable_hwsup(s16 module, u32 mask);
- extern void omap3xxx_cm_clkdm_disable_hwsup(s16 module, u32 mask);
- extern void omap3xxx_cm_clkdm_force_sleep(s16 module, u32 mask);
- extern void omap3xxx_cm_clkdm_force_wakeup(s16 module, u32 mask);
-
- extern bool omap3xxx_cm_is_clkdm_in_hwsup(s16 module, u32 mask);
- extern int omap3xxx_cm_wait_module_ready(s16 prcm_mod, u8 idlest_id,
-                                          u8 idlest_shift);
-
- extern int omap3xxx_cm_split_idlest_reg(void __iomem *idlest_reg,
-                                         s16 *prcm_inst, u8 *idlest_reg_id);
-
  extern void omap3_cm_save_context(void);
  extern void omap3_cm_restore_context(void);
  extern void omap3_cm_save_scratchpad_contents(u32 *ptr);
-49
arch/arm/mach-omap2/cm44xx.c
- /*
-  * OMAP4 CM1, CM2 module low-level functions
-  *
-  * Copyright (C) 2010 Nokia Corporation
-  * Paul Walmsley
-  *
-  * This program is free software; you can redistribute it and/or modify
-  * it under the terms of the GNU General Public License version 2 as
-  * published by the Free Software Foundation.
-  *
-  * These functions are intended to be used only by the cminst44xx.c file.
-  * XXX Perhaps we should just move them there and make them static.
-  */
-
- #include <linux/kernel.h>
- #include <linux/types.h>
- #include <linux/errno.h>
- #include <linux/err.h>
- #include <linux/io.h>
-
- #include "cm.h"
- #include "cm1_44xx.h"
- #include "cm2_44xx.h"
-
- /* CM1 hardware module low-level functions */
-
- /* Read a register in CM1 */
- u32 omap4_cm1_read_inst_reg(s16 inst, u16 reg)
- {
-         return readl_relaxed(cm_base + inst + reg);
- }
-
- /* Write into a register in CM1 */
- void omap4_cm1_write_inst_reg(u32 val, s16 inst, u16 reg)
- {
-         writel_relaxed(val, cm_base + inst + reg);
- }
-
- /* Read a register in CM2 */
- u32 omap4_cm2_read_inst_reg(s16 inst, u16 reg)
- {
-         return readl_relaxed(cm2_base + inst + reg);
- }
-
- /* Write into a register in CM2 */
- void omap4_cm2_write_inst_reg(u32 val, s16 inst, u16 reg)
- {
-         writel_relaxed(val, cm2_base + inst + reg);
- }
+3
arch/arm/mach-omap2/cm44xx.h
···
  #define OMAP4_CM_CLKSTCTRL 0x0000
  #define OMAP4_CM_STATICDEP 0x0004

+ void omap_cm_base_init(void);
+ int omap4_cm_init(void);
+
  #endif
-36
arch/arm/mach-omap2/cm_44xx_54xx.h
- /*
-  * OMAP44xx and OMAP54xx CM1/CM2 function prototypes
-  *
-  * Copyright (C) 2009-2013 Texas Instruments, Inc.
-  * Copyright (C) 2009-2010 Nokia Corporation
-  *
-  * Paul Walmsley (paul@pwsan.com)
-  * Rajendra Nayak (rnayak@ti.com)
-  * Benoit Cousson (b-cousson@ti.com)
-  *
-  * This file is automatically generated from the OMAP hardware databases.
-  * We respectfully ask that any modifications to this file be coordinated
-  * with the public linux-omap@vger.kernel.org mailing list and the
-  * authors above to ensure that the autogeneration scripts are kept
-  * up-to-date with the file contents.
-  *
-  * This program is free software; you can redistribute it and/or modify
-  * it under the terms of the GNU General Public License version 2 as
-  * published by the Free Software Foundation.
-  *
-  */
-
- #ifndef __ARCH_ARM_MACH_OMAP2_CM_44XX_54XX_H
- #define __ARCH_ARM_MACH_OMAP2_CM_44XX_55XX_H
-
- /* CM1 Function prototypes */
- extern u32 omap4_cm1_read_inst_reg(s16 inst, u16 idx);
- extern void omap4_cm1_write_inst_reg(u32 val, s16 inst, u16 idx);
- extern u32 omap4_cm1_rmw_inst_reg_bits(u32 mask, u32 bits, s16 inst, s16 idx);
-
- /* CM2 Function prototypes */
- extern u32 omap4_cm2_read_inst_reg(s16 inst, u16 idx);
- extern void omap4_cm2_write_inst_reg(u32 val, s16 inst, u16 idx);
- extern u32 omap4_cm2_rmw_inst_reg_bits(u32 mask, u32 bits, s16 inst, s16 idx);
-
- #endif
+78 -4
arch/arm/mach-omap2/cm_common.c
···
  }

  /**
- * cm_wait_module_ready - wait for a module to leave idle or standby
+ * omap_cm_wait_module_ready - wait for a module to leave idle or standby
+ * @part: PRCM partition
  * @prcm_mod: PRCM module offset
- * @idlest_id: CM_IDLESTx register ID (i.e., x = 1, 2, 3)
+ * @idlest_reg: CM_IDLESTx register
  * @idlest_shift: shift of the bit in the CM_IDLEST* register to check
  *
  * Wait for the PRCM to indicate that the module identified by
···
  * no per-SoC wait_module_ready() function pointer has been registered
  * or if the idlest register is unknown on the SoC.
  */
- int cm_wait_module_ready(s16 prcm_mod, u8 idlest_id, u8 idlest_shift)
+ int omap_cm_wait_module_ready(u8 part, s16 prcm_mod, u16 idlest_reg,
+                               u8 idlest_shift)
  {
          if (!cm_ll_data->wait_module_ready) {
                  WARN_ONCE(1, "cm: %s: no low-level function defined\n",
···
                  return -EINVAL;
          }

-         return cm_ll_data->wait_module_ready(prcm_mod, idlest_id, idlest_shift);
+         return cm_ll_data->wait_module_ready(part, prcm_mod, idlest_reg,
+                                              idlest_shift);
+ }
+
+ /**
+ * omap_cm_wait_module_idle - wait for a module to enter idle or standby
+ * @part: PRCM partition
+ * @prcm_mod: PRCM module offset
+ * @idlest_reg: CM_IDLESTx register
+ * @idlest_shift: shift of the bit in the CM_IDLEST* register to check
+ *
+ * Wait for the PRCM to indicate that the module identified by
+ * (@prcm_mod, @idlest_id, @idlest_shift) is no longer clocked. Return
+ * 0 upon success, -EBUSY if the module doesn't enable in time, or
+ * -EINVAL if no per-SoC wait_module_idle() function pointer has been
+ * registered or if the idlest register is unknown on the SoC.
+ */
+ int omap_cm_wait_module_idle(u8 part, s16 prcm_mod, u16 idlest_reg,
+                              u8 idlest_shift)
+ {
+         if (!cm_ll_data->wait_module_idle) {
+                 WARN_ONCE(1, "cm: %s: no low-level function defined\n",
+                           __func__);
+                 return -EINVAL;
+         }
+
+         return cm_ll_data->wait_module_idle(part, prcm_mod, idlest_reg,
+                                             idlest_shift);
+ }
+
+ /**
+ * omap_cm_module_enable - enable a module
+ * @mode: target mode for the module
+ * @part: PRCM partition
+ * @inst: PRCM instance
+ * @clkctrl_offs: CM_CLKCTRL register offset for the module
+ *
+ * Enables clocks for a module identified by (@part, @inst, @clkctrl_offs)
+ * making its IO space accessible. Return 0 upon success, -EINVAL if no
+ * per-SoC module_enable() function pointer has been registered.
+ */
+ int omap_cm_module_enable(u8 mode, u8 part, u16 inst, u16 clkctrl_offs)
+ {
+         if (!cm_ll_data->module_enable) {
+                 WARN_ONCE(1, "cm: %s: no low-level function defined\n",
+                           __func__);
+                 return -EINVAL;
+         }
+
+         cm_ll_data->module_enable(mode, part, inst, clkctrl_offs);
+         return 0;
+ }
+
+ /**
+ * omap_cm_module_disable - disable a module
+ * @part: PRCM partition
+ * @inst: PRCM instance
+ * @clkctrl_offs: CM_CLKCTRL register offset for the module
+ *
+ * Disables clocks for a module identified by (@part, @inst, @clkctrl_offs)
+ * makings its IO space inaccessible. Return 0 upon success, -EINVAL if
+ * no per-SoC module_disable() function pointer has been registered.
+ */
+ int omap_cm_module_disable(u8 part, u16 inst, u16 clkctrl_offs)
+ {
+         if (!cm_ll_data->module_disable) {
+                 WARN_ONCE(1, "cm: %s: no low-level function defined\n",
+                           __func__);
+                 return -EINVAL;
+         }
+
+         cm_ll_data->module_disable(part, inst, clkctrl_offs);
+         return 0;
  }

  /**
+47 -33
arch/arm/mach-omap2/cminst44xx.c
···
  #include "cm1_44xx.h"
  #include "cm2_44xx.h"
  #include "cm44xx.h"
- #include "cminst44xx.h"
  #include "cm-regbits-34xx.h"
  #include "prcm44xx.h"
  #include "prm44xx.h"
···

  /* Private functions */

+ static u32 omap4_cminst_read_inst_reg(u8 part, u16 inst, u16 idx);
+
  /**
  * _clkctrl_idlest - read a CM_*_CLKCTRL register; mask & shift IDLEST bitfield
  * @part: PRCM partition ID that the CM_CLKCTRL register exists in
  * @inst: CM instance register offset (*_INST macro)
- * @cdoffs: Clockdomain register offset (*_CDOFFS macro)
  * @clkctrl_offs: Module clock control register offset (*_CLKCTRL macro)
  *
  * Return the IDLEST bitfield of a CM_*_CLKCTRL register, shifted down to
  * bit 0.
  */
- static u32 _clkctrl_idlest(u8 part, u16 inst, s16 cdoffs, u16 clkctrl_offs)
+ static u32 _clkctrl_idlest(u8 part, u16 inst, u16 clkctrl_offs)
  {
          u32 v = omap4_cminst_read_inst_reg(part, inst, clkctrl_offs);
          v &= OMAP4430_IDLEST_MASK;
···
  * _is_module_ready - can module registers be accessed without causing an abort?
  * @part: PRCM partition ID that the CM_CLKCTRL register exists in
  * @inst: CM instance register offset (*_INST macro)
- * @cdoffs: Clockdomain register offset (*_CDOFFS macro)
  * @clkctrl_offs: Module clock control register offset (*_CLKCTRL macro)
  *
  * Returns true if the module's CM_*_CLKCTRL.IDLEST bitfield is either
  * *FUNCTIONAL or *INTERFACE_IDLE; false otherwise.
  */
- static bool _is_module_ready(u8 part, u16 inst, s16 cdoffs, u16 clkctrl_offs)
+ static bool _is_module_ready(u8 part, u16 inst, u16 clkctrl_offs)
  {
          u32 v;

-         v = _clkctrl_idlest(part, inst, cdoffs, clkctrl_offs);
+         v = _clkctrl_idlest(part, inst, clkctrl_offs);

          return (v == CLKCTRL_IDLEST_FUNCTIONAL ||
                  v == CLKCTRL_IDLEST_INTERFACE_IDLE) ? true : false;
  }

- /* Public functions */
-
  /* Read a register in a CM instance */
- u32 omap4_cminst_read_inst_reg(u8 part, u16 inst, u16 idx)
+ static u32 omap4_cminst_read_inst_reg(u8 part, u16 inst, u16 idx)
  {
          BUG_ON(part >= OMAP4_MAX_PRCM_PARTITIONS ||
                 part == OMAP4430_INVALID_PRCM_PARTITION ||
···
  }

  /* Write into a register in a CM instance */
- void omap4_cminst_write_inst_reg(u32 val, u8 part, u16 inst, u16 idx)
+ static void omap4_cminst_write_inst_reg(u32 val, u8 part, u16 inst, u16 idx)
  {
          BUG_ON(part >= OMAP4_MAX_PRCM_PARTITIONS ||
                 part == OMAP4430_INVALID_PRCM_PARTITION ||
···
  }

  /* Read-modify-write a register in CM1. Caller must lock */
- u32 omap4_cminst_rmw_inst_reg_bits(u32 mask, u32 bits, u8 part, u16 inst,
-                                    s16 idx)
+ static u32 omap4_cminst_rmw_inst_reg_bits(u32 mask, u32 bits, u8 part, u16 inst,
+                                           s16 idx)
  {
          u32 v;
···
          return v;
  }

- u32 omap4_cminst_set_inst_reg_bits(u32 bits, u8 part, u16 inst, s16 idx)
+ static u32 omap4_cminst_set_inst_reg_bits(u32 bits, u8 part, u16 inst, s16 idx)
  {
          return omap4_cminst_rmw_inst_reg_bits(bits, bits, part, inst, idx);
  }

- u32 omap4_cminst_clear_inst_reg_bits(u32 bits, u8 part, u16 inst, s16 idx)
+ static u32 omap4_cminst_clear_inst_reg_bits(u32 bits, u8 part, u16 inst,
+                                             s16 idx)
  {
          return omap4_cminst_rmw_inst_reg_bits(bits, 0x0, part, inst, idx);
  }

- u32 omap4_cminst_read_inst_reg_bits(u8 part, u16 inst, s16 idx, u32 mask)
+ static u32 omap4_cminst_read_inst_reg_bits(u8 part, u16 inst, s16 idx, u32 mask)
  {
          u32 v;
···
  * Returns true if the clockdomain referred to by (@part, @inst, @cdoffs)
  * is in hardware-supervised idle mode, or 0 otherwise.
  */
- bool omap4_cminst_is_clkdm_in_hwsup(u8 part, u16 inst, u16 cdoffs)
+ static bool omap4_cminst_is_clkdm_in_hwsup(u8 part, u16 inst, u16 cdoffs)
  {
          u32 v;
···
  * Put a clockdomain referred to by (@part, @inst, @cdoffs) into
  * hardware-supervised idle mode. No return value.
  */
- void omap4_cminst_clkdm_enable_hwsup(u8 part, u16 inst, u16 cdoffs)
+ static void omap4_cminst_clkdm_enable_hwsup(u8 part, u16 inst, u16 cdoffs)
  {
          _clktrctrl_write(OMAP34XX_CLKSTCTRL_ENABLE_AUTO, part, inst, cdoffs);
  }
···
  * software-supervised idle mode, i.e., controlled manually by the
  * Linux OMAP clockdomain code. No return value.
  */
- void omap4_cminst_clkdm_disable_hwsup(u8 part, u16 inst, u16 cdoffs)
+ static void omap4_cminst_clkdm_disable_hwsup(u8 part, u16 inst, u16 cdoffs)
  {
          _clktrctrl_write(OMAP34XX_CLKSTCTRL_DISABLE_AUTO, part, inst, cdoffs);
  }
···
  * Take a clockdomain referred to by (@part, @inst, @cdoffs) out of idle,
  * waking it up. No return value.
  */
- void omap4_cminst_clkdm_force_wakeup(u8 part, u16 inst, u16 cdoffs)
+ static void omap4_cminst_clkdm_force_wakeup(u8 part, u16 inst, u16 cdoffs)
  {
          _clktrctrl_write(OMAP34XX_CLKSTCTRL_FORCE_WAKEUP, part, inst, cdoffs);
  }
···
  *
  */

- void omap4_cminst_clkdm_force_sleep(u8 part, u16 inst, u16 cdoffs)
+ static void omap4_cminst_clkdm_force_sleep(u8 part, u16 inst, u16 cdoffs)
  {
          _clktrctrl_write(OMAP34XX_CLKSTCTRL_FORCE_SLEEP, part, inst, cdoffs);
  }
···
  * omap4_cminst_wait_module_ready - wait for a module to be in 'func' state
  * @part: PRCM partition ID that the CM_CLKCTRL register exists in
  * @inst: CM instance register offset (*_INST macro)
- * @cdoffs: Clockdomain register offset (*_CDOFFS macro)
  * @clkctrl_offs: Module clock control register offset (*_CLKCTRL macro)
+ * @bit_shift: bit shift for the register, ignored for OMAP4+
  *
  * Wait for the module IDLEST to be functional. If the idle state is in any
  * the non functional state (trans, idle or disabled), module and thus the
  * sysconfig cannot be accessed and will probably lead to an "imprecise
  * external abort"
  */
- int omap4_cminst_wait_module_ready(u8 part, u16 inst, s16 cdoffs,
-                                    u16 clkctrl_offs)
+ static int omap4_cminst_wait_module_ready(u8 part, s16 inst, u16 clkctrl_offs,
+                                           u8 bit_shift)
  {
          int i = 0;

          if (!clkctrl_offs)
                  return 0;

-         omap_test_timeout(_is_module_ready(part, inst, cdoffs, clkctrl_offs),
+         omap_test_timeout(_is_module_ready(part, inst, clkctrl_offs),
                            MAX_MODULE_READY_TIME, i);

          return (i < MAX_MODULE_READY_TIME) ? 0 : -EBUSY;
···
  * state
  * @part: PRCM partition ID that the CM_CLKCTRL register exists in
  * @inst: CM instance register offset (*_INST macro)
- * @cdoffs: Clockdomain register offset (*_CDOFFS macro)
  * @clkctrl_offs: Module clock control register offset (*_CLKCTRL macro)
+ * @bit_shift: Bit shift for the register, ignored for OMAP4+
  *
  * Wait for the module IDLEST to be disabled. Some PRCM transition,
  * like reset assertion or parent clock de-activation must wait the
  * module to be fully disabled.
  */
- int omap4_cminst_wait_module_idle(u8 part, u16 inst, s16 cdoffs, u16 clkctrl_offs)
+ static int omap4_cminst_wait_module_idle(u8 part, s16 inst, u16 clkctrl_offs,
+                                          u8 bit_shift)
  {
          int i = 0;

          if (!clkctrl_offs)
                  return 0;

-         omap_test_timeout((_clkctrl_idlest(part, inst, cdoffs, clkctrl_offs) ==
+         omap_test_timeout((_clkctrl_idlest(part, inst, clkctrl_offs) ==
                             CLKCTRL_IDLEST_DISABLED),
                            MAX_MODULE_DISABLE_TIME, i);
···
  * @mode: Module mode (SW or HW)
  * @part: PRCM partition ID that the CM_CLKCTRL register exists in
  * @inst: CM instance register offset (*_INST macro)
- * @cdoffs: Clockdomain register offset (*_CDOFFS macro)
  * @clkctrl_offs: Module clock control register offset (*_CLKCTRL macro)
  *
  * No return value.
  */
- void omap4_cminst_module_enable(u8 mode, u8 part, u16 inst, s16 cdoffs,
-                                 u16 clkctrl_offs)
+ static void omap4_cminst_module_enable(u8 mode, u8 part, u16 inst,
+                                        u16 clkctrl_offs)
  {
          u32 v;
···
  * omap4_cminst_module_disable - Disable the module inside CLKCTRL
  * @part: PRCM partition ID that the CM_CLKCTRL register exists in
  * @inst: CM instance register offset (*_INST macro)
- * @cdoffs: Clockdomain register offset (*_CDOFFS macro)
  * @clkctrl_offs: Module clock control register offset (*_CLKCTRL macro)
  *
  * No return value.
  */
- void omap4_cminst_module_disable(u8 part, u16 inst, s16 cdoffs,
-                                  u16 clkctrl_offs)
+ static void omap4_cminst_module_disable(u8 part, u16 inst, u16 clkctrl_offs)
  {
          u32 v;
···
          .clkdm_clk_enable = omap4_clkdm_clk_enable,
          .clkdm_clk_disable = omap4_clkdm_clk_disable,
  };
+
+ static struct cm_ll_data omap4xxx_cm_ll_data = {
+         .wait_module_ready = &omap4_cminst_wait_module_ready,
+         .wait_module_idle = &omap4_cminst_wait_module_idle,
+         .module_enable = &omap4_cminst_module_enable,
+         .module_disable = &omap4_cminst_module_disable,
+ };
+
+ int __init omap4_cm_init(void)
+ {
+         return cm_register(&omap4xxx_cm_ll_data);
+ }
+
+ static void __exit omap4_cm_exit(void)
+ {
+         cm_unregister(&omap4xxx_cm_ll_data);
+ }
+ __exitcall(omap4_cm_exit);
arch/arm/mach-omap2/cminst44xx.h  (deleted: -43)
···
-/*
- * OMAP4 Clock Management (CM) function prototypes
- *
- * Copyright (C) 2010 Nokia Corporation
- * Paul Walmsley
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-#ifndef __ARCH_ASM_MACH_OMAP2_CMINST44XX_H
-#define __ARCH_ASM_MACH_OMAP2_CMINST44XX_H
-
-bool omap4_cminst_is_clkdm_in_hwsup(u8 part, u16 inst, u16 cdoffs);
-void omap4_cminst_clkdm_enable_hwsup(u8 part, u16 inst, u16 cdoffs);
-void omap4_cminst_clkdm_disable_hwsup(u8 part, u16 inst, u16 cdoffs);
-void omap4_cminst_clkdm_force_sleep(u8 part, u16 inst, u16 cdoffs);
-void omap4_cminst_clkdm_force_wakeup(u8 part, u16 inst, u16 cdoffs);
-extern int omap4_cminst_wait_module_ready(u8 part, u16 inst, s16 cdoffs, u16 clkctrl_offs);
-extern int omap4_cminst_wait_module_idle(u8 part, u16 inst, s16 cdoffs,
-					 u16 clkctrl_offs);
-extern void omap4_cminst_module_enable(u8 mode, u8 part, u16 inst, s16 cdoffs,
-				       u16 clkctrl_offs);
-extern void omap4_cminst_module_disable(u8 part, u16 inst, s16 cdoffs,
-					u16 clkctrl_offs);
-/*
- * In an ideal world, we would not export these low-level functions,
- * but this will probably take some time to fix properly
- */
-u32 omap4_cminst_read_inst_reg(u8 part, u16 inst, u16 idx);
-void omap4_cminst_write_inst_reg(u32 val, u8 part, u16 inst, u16 idx);
-u32 omap4_cminst_rmw_inst_reg_bits(u32 mask, u32 bits, u8 part,
-				   u16 inst, s16 idx);
-u32 omap4_cminst_set_inst_reg_bits(u32 bits, u8 part, u16 inst,
-				   s16 idx);
-u32 omap4_cminst_clear_inst_reg_bits(u32 bits, u8 part, u16 inst,
-				     s16 idx);
-extern u32 omap4_cminst_read_inst_reg_bits(u8 part, u16 inst, s16 idx,
-					   u32 mask);
-
-extern void omap_cm_base_init(void);
-
-#endif
arch/arm/mach-omap2/dpll3xxx.c  (+124 -65)
···
 /* Non-CORE DPLL rate set code */

 /**
- * omap3_noncore_dpll_set_rate - set non-core DPLL rate
- * @clk: struct clk * of DPLL to set
- * @rate: rounded target rate
+ * omap3_noncore_dpll_determine_rate - determine rate for a DPLL
+ * @hw: pointer to the clock to determine rate for
+ * @rate: target rate for the DPLL
+ * @best_parent_rate: pointer for returning best parent rate
+ * @best_parent_clk: pointer for returning best parent clock
  *
- * Set the DPLL CLKOUT to the target rate.  If the DPLL can enter
- * low-power bypass, and the target rate is the bypass source clock
- * rate, then configure the DPLL for bypass.  Otherwise, round the
- * target rate if it hasn't been done already, then program and lock
- * the DPLL.  Returns -EINVAL upon error, or 0 upon success.
+ * Determines which DPLL mode to use for reaching a desired target rate.
+ * Checks whether the DPLL shall be in bypass or locked mode, and if
+ * locked, calculates the M,N values for the DPLL via round-rate.
+ * Returns a positive clock rate with success, negative error value
+ * in failure.
  */
-int omap3_noncore_dpll_set_rate(struct clk_hw *hw, unsigned long rate,
-				unsigned long parent_rate)
+long omap3_noncore_dpll_determine_rate(struct clk_hw *hw, unsigned long rate,
+				       unsigned long *best_parent_rate,
+				       struct clk **best_parent_clk)
 {
 	struct clk_hw_omap *clk = to_clk_hw_omap(hw);
-	struct clk *new_parent = NULL;
-	unsigned long rrate;
-	u16 freqsel = 0;
 	struct dpll_data *dd;
-	int ret;

 	if (!hw || !rate)
 		return -EINVAL;
···
 	if (__clk_get_rate(dd->clk_bypass) == rate &&
 	    (dd->modes & (1 << DPLL_LOW_POWER_BYPASS))) {
-		pr_debug("%s: %s: set rate: entering bypass.\n",
-			 __func__, __clk_get_name(hw->clk));
-
-		__clk_prepare(dd->clk_bypass);
-		clk_enable(dd->clk_bypass);
-		ret = _omap3_noncore_dpll_bypass(clk);
-		if (!ret)
-			new_parent = dd->clk_bypass;
-		clk_disable(dd->clk_bypass);
-		__clk_unprepare(dd->clk_bypass);
+		*best_parent_clk = dd->clk_bypass;
 	} else {
-		__clk_prepare(dd->clk_ref);
-		clk_enable(dd->clk_ref);
-
-		/* XXX this check is probably pointless in the CCF context */
-		if (dd->last_rounded_rate != rate) {
-			rrate = __clk_round_rate(hw->clk, rate);
-			if (rrate != rate) {
-				pr_warn("%s: %s: final rate %lu does not match desired rate %lu\n",
-					__func__, __clk_get_name(hw->clk),
-					rrate, rate);
-				rate = rrate;
-			}
-		}
-
-		if (dd->last_rounded_rate == 0)
-			return -EINVAL;
-
-		/* Freqsel is available only on OMAP343X devices */
-		if (ti_clk_features.flags & TI_CLK_DPLL_HAS_FREQSEL) {
-			freqsel = _omap3_dpll_compute_freqsel(clk,
-						dd->last_rounded_n);
-			WARN_ON(!freqsel);
-		}
-
-		pr_debug("%s: %s: set rate: locking rate to %lu.\n",
-			 __func__, __clk_get_name(hw->clk), rate);
-
-		ret = omap3_noncore_dpll_program(clk, freqsel);
-		if (!ret)
-			new_parent = dd->clk_ref;
-		clk_disable(dd->clk_ref);
-		__clk_unprepare(dd->clk_ref);
+		rate = omap2_dpll_round_rate(hw, rate, best_parent_rate);
+		*best_parent_clk = dd->clk_ref;
 	}
+
+	*best_parent_rate = rate;
+
+	return rate;
+}
+
+/**
+ * omap3_noncore_dpll_set_parent - set parent for a DPLL clock
+ * @hw: pointer to the clock to set parent for
+ * @index: parent index to select
+ *
+ * Sets parent for a DPLL clock. This sets the DPLL into bypass or
+ * locked mode. Returns 0 with success, negative error value otherwise.
+ */
+int omap3_noncore_dpll_set_parent(struct clk_hw *hw, u8 index)
+{
+	struct clk_hw_omap *clk = to_clk_hw_omap(hw);
+	int ret;
+
+	if (!hw)
+		return -EINVAL;
+
+	if (index)
+		ret = _omap3_noncore_dpll_bypass(clk);
+	else
+		ret = _omap3_noncore_dpll_lock(clk);
+
+	return ret;
+}
+
+/**
+ * omap3_noncore_dpll_set_rate - set rate for a DPLL clock
+ * @hw: pointer to the clock to set parent for
+ * @rate: target rate for the clock
+ * @parent_rate: rate of the parent clock
+ *
+ * Sets rate for a DPLL clock. First checks if the clock parent is
+ * reference clock (in bypass mode, the rate of the clock can't be
+ * changed) and proceeds with the rate change operation. Returns 0
+ * with success, negative error value otherwise.
+ */
+int omap3_noncore_dpll_set_rate(struct clk_hw *hw, unsigned long rate,
+				unsigned long parent_rate)
+{
+	struct clk_hw_omap *clk = to_clk_hw_omap(hw);
+	struct dpll_data *dd;
+	u16 freqsel = 0;
+	int ret;
+
+	if (!hw || !rate)
+		return -EINVAL;
+
+	dd = clk->dpll_data;
+	if (!dd)
+		return -EINVAL;
+
+	if (__clk_get_parent(hw->clk) != dd->clk_ref)
+		return -EINVAL;
+
+	if (dd->last_rounded_rate == 0)
+		return -EINVAL;
+
+	/* Freqsel is available only on OMAP343X devices */
+	if (ti_clk_features.flags & TI_CLK_DPLL_HAS_FREQSEL) {
+		freqsel = _omap3_dpll_compute_freqsel(clk, dd->last_rounded_n);
+		WARN_ON(!freqsel);
+	}
+
+	pr_debug("%s: %s: set rate: locking rate to %lu.\n", __func__,
+		 __clk_get_name(hw->clk), rate);
+
+	ret = omap3_noncore_dpll_program(clk, freqsel);
+
+	return ret;
+}
+
+/**
+ * omap3_noncore_dpll_set_rate_and_parent - set rate and parent for a DPLL clock
+ * @hw: pointer to the clock to set rate and parent for
+ * @rate: target rate for the DPLL
+ * @parent_rate: clock rate of the DPLL parent
+ * @index: new parent index for the DPLL, 0 - reference, 1 - bypass
+ *
+ * Sets rate and parent for a DPLL clock. If new parent is the bypass
+ * clock, only selects the parent. Otherwise proceeds with a rate
+ * change, as this will effectively also change the parent as the
+ * DPLL is put into locked mode. Returns 0 with success, negative error
+ * value otherwise.
+ */
+int omap3_noncore_dpll_set_rate_and_parent(struct clk_hw *hw,
+					   unsigned long rate,
+					   unsigned long parent_rate,
+					   u8 index)
+{
+	int ret;
+
+	if (!hw || !rate)
+		return -EINVAL;
+
 	/*
-	 * FIXME - this is all wrong.  common code handles reparenting and
-	 * migrating prepare/enable counts.  dplls should be a multiplexer
-	 * clock and this should be a set_parent operation so that all of that
-	 * stuff is inherited for free
-	 */
+	 * clk-ref at index[0], in which case we only need to set rate,
+	 * the parent will be changed automatically with the lock sequence.
+	 * With clk-bypass case we only need to change parent.
+	 */
+	if (index)
+		ret = omap3_noncore_dpll_set_parent(hw, index);
+	else
+		ret = omap3_noncore_dpll_set_rate(hw, rate, parent_rate);

-	if (!ret && clk_get_parent(hw->clk) != new_parent)
-		__clk_reparent(hw->clk, new_parent);
-
-	return 0;
+	return ret;
 }

 /* DPLL autoidle read/set code */
arch/arm/mach-omap2/dpll44xx.c  (+41)
···
 	return dd->last_rounded_rate;
 }
+
+/**
+ * omap4_dpll_regm4xen_determine_rate - determine rate for a DPLL
+ * @hw: pointer to the clock to determine rate for
+ * @rate: target rate for the DPLL
+ * @best_parent_rate: pointer for returning best parent rate
+ * @best_parent_clk: pointer for returning best parent clock
+ *
+ * Determines which DPLL mode to use for reaching a desired rate.
+ * Checks whether the DPLL shall be in bypass or locked mode, and if
+ * locked, calculates the M,N values for the DPLL via round-rate.
+ * Returns a positive clock rate with success, negative error value
+ * in failure.
+ */
+long omap4_dpll_regm4xen_determine_rate(struct clk_hw *hw, unsigned long rate,
+					unsigned long *best_parent_rate,
+					struct clk **best_parent_clk)
+{
+	struct clk_hw_omap *clk = to_clk_hw_omap(hw);
+	struct dpll_data *dd;
+
+	if (!hw || !rate)
+		return -EINVAL;
+
+	dd = clk->dpll_data;
+	if (!dd)
+		return -EINVAL;
+
+	if (__clk_get_rate(dd->clk_bypass) == rate &&
+	    (dd->modes & (1 << DPLL_LOW_POWER_BYPASS))) {
+		*best_parent_clk = dd->clk_bypass;
+	} else {
+		rate = omap4_dpll_regm4xen_round_rate(hw, rate,
+						      best_parent_rate);
+		*best_parent_clk = dd->clk_ref;
+	}
+
+	*best_parent_rate = rate;
+
+	return rate;
+}
arch/arm/mach-omap2/io.c  (+10 -1)
···
 #include "sram.h"
 #include "cm2xxx.h"
 #include "cm3xxx.h"
+#include "cm33xx.h"
+#include "cm44xx.h"
 #include "prm.h"
 #include "cm.h"
 #include "prcm_mpu44xx.h"
 #include "prminst44xx.h"
-#include "cminst44xx.h"
 #include "prm2xxx.h"
 #include "prm3xxx.h"
+#include "prm33xx.h"
 #include "prm44xx.h"
 #include "opp2xxx.h"
···
 	omap2_set_globals_cm(AM33XX_L4_WK_IO_ADDRESS(AM33XX_PRCM_BASE), NULL);
 	omap3xxx_check_revision();
 	am33xx_check_features();
+	am33xx_prm_init();
+	am33xx_cm_init();
 	am33xx_powerdomains_init();
 	am33xx_clockdomains_init();
 	am33xx_hwmod_init();
···
 	omap_cm_base_init();
 	omap3xxx_check_revision();
 	am33xx_check_features();
+	omap44xx_prm_init();
+	omap4_cm_init();
 	am43xx_powerdomains_init();
 	am43xx_clockdomains_init();
 	am43xx_hwmod_init();
···
 	omap_cm_base_init();
 	omap4xxx_check_revision();
 	omap4xxx_check_features();
+	omap4_cm_init();
 	omap4_pm_init_early();
 	omap44xx_prm_init();
 	omap44xx_voltagedomains_init();
···
 	omap_cm_base_init();
 	omap44xx_prm_init();
 	omap5xxx_check_revision();
+	omap4_cm_init();
 	omap54xx_voltagedomains_init();
 	omap54xx_powerdomains_init();
 	omap54xx_clockdomains_init();
···
 	omap_cm_base_init();
 	omap44xx_prm_init();
 	dra7xxx_check_revision();
+	omap4_cm_init();
 	dra7xx_powerdomains_init();
 	dra7xx_clockdomains_init();
 	dra7xx_hwmod_init();
arch/arm/mach-omap2/omap-mpuss-lowpower.c  (+3 -1)
···
 int omap4_enter_lowpower(unsigned int cpu, unsigned int power_state)
 {
 	struct omap4_cpu_pm_info *pm_info = &per_cpu(omap4_pm_info, cpu);
-	unsigned int save_state = 0;
+	unsigned int save_state = 0, cpu_logic_state = PWRDM_POWER_RET;
 	unsigned int wakeup_cpu;

 	if (omap_rev() == OMAP4430_REV_ES1_0)
···
 		save_state = 0;
 		break;
 	case PWRDM_POWER_OFF:
+		cpu_logic_state = PWRDM_POWER_OFF;
 		save_state = 1;
 		break;
 	case PWRDM_POWER_RET:
···
 	cpu_clear_prev_logic_pwrst(cpu);
 	pwrdm_set_next_pwrst(pm_info->pwrdm, power_state);
+	pwrdm_set_logic_retst(pm_info->pwrdm, cpu_logic_state);
 	set_cpu_wakeup_addr(cpu, virt_to_phys(omap_pm_ops.resume));
 	omap_pm_ops.scu_prepare(cpu, power_state);
 	l2x0_pwrst_prepare(cpu, save_state);
arch/arm/mach-omap2/omap2-restart.c  (+2 -3)
···
 #include "soc.h"
 #include "common.h"
-#include "prm2xxx.h"
+#include "prm.h"

 /*
  * reset_virt_prcm_set_ck, reset_sys_ck: pointers to the virt_prcm_set
···
 	/* XXX Should save the cmd argument for use after the reboot */

-	omap2xxx_prm_dpll_reset(); /* never returns */
-	while (1);
+	omap_prm_reset_system();
 }

 /**
arch/arm/mach-omap2/omap3-restart.c  (+2 -5)
···
 #include <linux/init.h>
 #include <linux/reboot.h>

-#include "iomap.h"
-#include "common.h"
 #include "control.h"
-#include "prm3xxx.h"
+#include "prm.h"

 /* Global address base setup code */
···
 void omap3xxx_restart(enum reboot_mode mode, const char *cmd)
 {
 	omap3_ctrl_write_boot_mode((cmd ? (u8)*cmd : 0));
-	omap3xxx_prm_dpll3_reset(); /* never returns */
-	while (1);
+	omap_prm_reset_system();
 }
arch/arm/mach-omap2/omap4-restart.c  (+2 -4)
···
 #include <linux/types.h>
 #include <linux/reboot.h>
-#include "prminst44xx.h"
+#include "prm.h"

 /**
  * omap44xx_restart - trigger a software restart of the SoC
···
 void omap44xx_restart(enum reboot_mode mode, const char *cmd)
 {
 	/* XXX Should save 'cmd' into scratchpad for use after reboot */
-	omap4_prminst_global_warm_sw_reset(); /* never returns */
-	while (1)
-		;
+	omap_prm_reset_system();
 }
arch/arm/mach-omap2/omap_hwmod.c  (+76 -196)
···
 #include "powerdomain.h"
 #include "cm2xxx.h"
 #include "cm3xxx.h"
-#include "cminst44xx.h"
 #include "cm33xx.h"
 #include "prm.h"
 #include "prm3xxx.h"
···
 	pr_debug("omap_hwmod: %s: %s: %d\n",
 		 oh->name, __func__, oh->prcm.omap4.modulemode);

-	omap4_cminst_module_enable(oh->prcm.omap4.modulemode,
-				   oh->clkdm->prcm_partition,
-				   oh->clkdm->cm_inst,
-				   oh->clkdm->clkdm_offs,
-				   oh->prcm.omap4.clkctrl_offs);
-}
-
-/**
- * _am33xx_enable_module - enable CLKCTRL modulemode on AM33XX
- * @oh: struct omap_hwmod *
- *
- * Enables the PRCM module mode related to the hwmod @oh.
- * No return value.
- */
-static void _am33xx_enable_module(struct omap_hwmod *oh)
-{
-	if (!oh->clkdm || !oh->prcm.omap4.modulemode)
-		return;
-
-	pr_debug("omap_hwmod: %s: %s: %d\n",
-		 oh->name, __func__, oh->prcm.omap4.modulemode);
-
-	am33xx_cm_module_enable(oh->prcm.omap4.modulemode, oh->clkdm->cm_inst,
-				oh->clkdm->clkdm_offs,
-				oh->prcm.omap4.clkctrl_offs);
+	omap_cm_module_enable(oh->prcm.omap4.modulemode,
+			      oh->clkdm->prcm_partition,
+			      oh->clkdm->cm_inst, oh->prcm.omap4.clkctrl_offs);
 }

 /**
···
 	if (oh->flags & HWMOD_NO_IDLEST)
 		return 0;

-	return omap4_cminst_wait_module_idle(oh->clkdm->prcm_partition,
-					     oh->clkdm->cm_inst,
-					     oh->clkdm->clkdm_offs,
-					     oh->prcm.omap4.clkctrl_offs);
-}
-
-/**
- * _am33xx_wait_target_disable - wait for a module to be disabled on AM33XX
- * @oh: struct omap_hwmod *
- *
- * Wait for a module @oh to enter slave idle.  Returns 0 if the module
- * does not have an IDLEST bit or if the module successfully enters
- * slave idle; otherwise, pass along the return value of the
- * appropriate *_cm*_wait_module_idle() function.
- */
-static int _am33xx_wait_target_disable(struct omap_hwmod *oh)
-{
-	if (!oh)
-		return -EINVAL;
-
-	if (oh->_int_flags & _HWMOD_NO_MPU_PORT)
-		return 0;
-
-	if (oh->flags & HWMOD_NO_IDLEST)
-		return 0;
-
-	return am33xx_cm_wait_module_idle(oh->clkdm->cm_inst,
-					  oh->clkdm->clkdm_offs,
-					  oh->prcm.omap4.clkctrl_offs);
+	return omap_cm_wait_module_idle(oh->clkdm->prcm_partition,
+					oh->clkdm->cm_inst,
+					oh->prcm.omap4.clkctrl_offs, 0);
 }

 /**
···
 	pr_debug("omap_hwmod: %s: %s\n", oh->name, __func__);

-	omap4_cminst_module_disable(oh->clkdm->prcm_partition,
-				    oh->clkdm->cm_inst,
-				    oh->clkdm->clkdm_offs,
-				    oh->prcm.omap4.clkctrl_offs);
+	omap_cm_module_disable(oh->clkdm->prcm_partition, oh->clkdm->cm_inst,
+			       oh->prcm.omap4.clkctrl_offs);

 	v = _omap4_wait_target_disable(oh);
-	if (v)
-		pr_warn("omap_hwmod: %s: _wait_target_disable failed\n",
-			oh->name);
-
-	return 0;
-}
-
-/**
- * _am33xx_disable_module - enable CLKCTRL modulemode on AM33XX
- * @oh: struct omap_hwmod *
- *
- * Disable the PRCM module mode related to the hwmod @oh.
- * Return EINVAL if the modulemode is not supported and 0 in case of success.
- */
-static int _am33xx_disable_module(struct omap_hwmod *oh)
-{
-	int v;
-
-	if (!oh->clkdm || !oh->prcm.omap4.modulemode)
-		return -EINVAL;
-
-	pr_debug("omap_hwmod: %s: %s\n", oh->name, __func__);
-
-	if (_are_any_hardreset_lines_asserted(oh))
-		return 0;
-
-	am33xx_cm_module_disable(oh->clkdm->cm_inst, oh->clkdm->clkdm_offs,
-				 oh->prcm.omap4.clkctrl_offs);
-
-	v = _am33xx_wait_target_disable(oh);
 	if (v)
 		pr_warn("omap_hwmod: %s: _wait_target_disable failed\n",
 			oh->name);
···
 	spin_lock_irqsave(&io_chain_lock, flags);

-	if (cpu_is_omap34xx())
-		omap3xxx_prm_reconfigure_io_chain();
-	else if (cpu_is_omap44xx())
-		omap44xx_prm_reconfigure_io_chain();
+	omap_prm_reconfigure_io_chain();

 	spin_unlock_irqrestore(&io_chain_lock, flags);
 }
···
 	if (oh->_state != _HWMOD_STATE_INITIALIZED)
 		return 0;

+	if (oh->parent_hwmod) {
+		int r;
+
+		r = _enable(oh->parent_hwmod);
+		WARN(r, "hwmod: %s: setup: failed to enable parent hwmod %s\n",
+		     oh->name, oh->parent_hwmod->name);
+	}
+
 	_setup_iclk_autoidle(oh);

 	if (!_setup_reset(oh))
 		_setup_postsetup(oh);
+
+	if (oh->parent_hwmod) {
+		u8 postsetup_state;
+
+		postsetup_state = oh->parent_hwmod->_postsetup_state;
+
+		if (postsetup_state == _HWMOD_STATE_IDLE)
+			_idle(oh->parent_hwmod);
+		else if (postsetup_state == _HWMOD_STATE_DISABLED)
+			_shutdown(oh->parent_hwmod);
+		else if (postsetup_state != _HWMOD_STATE_ENABLED)
+			WARN(1, "hwmod: %s: unknown postsetup state %d! defaulting to enabled\n",
+			     oh->parent_hwmod->name, postsetup_state);
+	}

 	return 0;
 }
···
 	_alloc_links(&ml, &sl);

 	ml->ocp_if = oi;
-	INIT_LIST_HEAD(&ml->node);
 	list_add(&ml->node, &oi->master->master_ports);
 	oi->master->masters_cnt++;

 	sl->ocp_if = oi;
-	INIT_LIST_HEAD(&sl->node);
 	list_add(&sl->node, &oi->slave->slave_ports);
 	oi->slave->slaves_cnt++;

···
 /* Static functions intended only for use in soc_ops field function pointers */

 /**
- * _omap2xxx_wait_target_ready - wait for a module to leave slave idle
+ * _omap2xxx_3xxx_wait_target_ready - wait for a module to leave slave idle
  * @oh: struct omap_hwmod *
  *
  * Wait for a module @oh to leave slave idle.  Returns 0 if the module
···
  * slave idle; otherwise, pass along the return value of the
  * appropriate *_cm*_wait_module_ready() function.
  */
-static int _omap2xxx_wait_target_ready(struct omap_hwmod *oh)
+static int _omap2xxx_3xxx_wait_target_ready(struct omap_hwmod *oh)
 {
 	if (!oh)
 		return -EINVAL;
···
 	/* XXX check module SIDLEMODE, hardreset status, enabled clocks */

-	return omap2xxx_cm_wait_module_ready(oh->prcm.omap2.module_offs,
-					     oh->prcm.omap2.idlest_reg_id,
-					     oh->prcm.omap2.idlest_idle_bit);
-}
-
-/**
- * _omap3xxx_wait_target_ready - wait for a module to leave slave idle
- * @oh: struct omap_hwmod *
- *
- * Wait for a module @oh to leave slave idle.  Returns 0 if the module
- * does not have an IDLEST bit or if the module successfully leaves
- * slave idle; otherwise, pass along the return value of the
- * appropriate *_cm*_wait_module_ready() function.
- */
-static int _omap3xxx_wait_target_ready(struct omap_hwmod *oh)
-{
-	if (!oh)
-		return -EINVAL;
-
-	if (oh->flags & HWMOD_NO_IDLEST)
-		return 0;
-
-	if (!_find_mpu_rt_port(oh))
-		return 0;
-
-	/* XXX check module SIDLEMODE, hardreset status, enabled clocks */
-
-	return omap3xxx_cm_wait_module_ready(oh->prcm.omap2.module_offs,
-					     oh->prcm.omap2.idlest_reg_id,
-					     oh->prcm.omap2.idlest_idle_bit);
+	return omap_cm_wait_module_ready(0, oh->prcm.omap2.module_offs,
+					 oh->prcm.omap2.idlest_reg_id,
+					 oh->prcm.omap2.idlest_idle_bit);
 }

 /**
···
 	/* XXX check module SIDLEMODE, hardreset status */

-	return omap4_cminst_wait_module_ready(oh->clkdm->prcm_partition,
-					      oh->clkdm->cm_inst,
-					      oh->clkdm->clkdm_offs,
-					      oh->prcm.omap4.clkctrl_offs);
-}
-
-/**
- * _am33xx_wait_target_ready - wait for a module to leave slave idle
- * @oh: struct omap_hwmod *
- *
- * Wait for a module @oh to leave slave idle.  Returns 0 if the module
- * does not have an IDLEST bit or if the module successfully leaves
- * slave idle; otherwise, pass along the return value of the
- * appropriate *_cm*_wait_module_ready() function.
- */
-static int _am33xx_wait_target_ready(struct omap_hwmod *oh)
-{
-	if (!oh || !oh->clkdm)
-		return -EINVAL;
-
-	if (oh->flags & HWMOD_NO_IDLEST)
-		return 0;
-
-	if (!_find_mpu_rt_port(oh))
-		return 0;
-
-	/* XXX check module SIDLEMODE, hardreset status */
-
-	return am33xx_cm_wait_module_ready(oh->clkdm->cm_inst,
-					   oh->clkdm->clkdm_offs,
-					   oh->prcm.omap4.clkctrl_offs);
+	return omap_cm_wait_module_ready(oh->clkdm->prcm_partition,
+					 oh->clkdm->cm_inst,
+					 oh->prcm.omap4.clkctrl_offs, 0);
 }

 /**
···
 static int _omap2_assert_hardreset(struct omap_hwmod *oh,
 				   struct omap_hwmod_rst_info *ohri)
 {
-	return omap2_prm_assert_hardreset(oh->prcm.omap2.module_offs,
-					  ohri->rst_shift);
+	return omap_prm_assert_hardreset(ohri->rst_shift, 0,
+					 oh->prcm.omap2.module_offs, 0);
 }

 /**
···
 static int _omap2_deassert_hardreset(struct omap_hwmod *oh,
 				     struct omap_hwmod_rst_info *ohri)
 {
-	return omap2_prm_deassert_hardreset(oh->prcm.omap2.module_offs,
-					    ohri->rst_shift,
-					    ohri->st_shift);
+	return omap_prm_deassert_hardreset(ohri->rst_shift, ohri->st_shift, 0,
+					   oh->prcm.omap2.module_offs, 0, 0);
 }

 /**
···
 static int _omap2_is_hardreset_asserted(struct omap_hwmod *oh,
 					struct omap_hwmod_rst_info *ohri)
 {
-	return omap2_prm_is_hardreset_asserted(oh->prcm.omap2.module_offs,
-					       ohri->st_shift);
+	return omap_prm_is_hardreset_asserted(ohri->st_shift, 0,
+					      oh->prcm.omap2.module_offs, 0);
 }

 /**
···
 	if (!oh->clkdm)
 		return -EINVAL;

-	return omap4_prminst_assert_hardreset(ohri->rst_shift,
-				oh->clkdm->pwrdm.ptr->prcm_partition,
-				oh->clkdm->pwrdm.ptr->prcm_offs,
-				oh->prcm.omap4.rstctrl_offs);
+	return omap_prm_assert_hardreset(ohri->rst_shift,
+					 oh->clkdm->pwrdm.ptr->prcm_partition,
+					 oh->clkdm->pwrdm.ptr->prcm_offs,
+					 oh->prcm.omap4.rstctrl_offs);
 }

 /**
···
 	if (ohri->st_shift)
 		pr_err("omap_hwmod: %s: %s: hwmod data error: OMAP4 does not support st_shift\n",
 		       oh->name, ohri->name);
-	return omap4_prminst_deassert_hardreset(ohri->rst_shift,
-				oh->clkdm->pwrdm.ptr->prcm_partition,
-				oh->clkdm->pwrdm.ptr->prcm_offs,
-				oh->prcm.omap4.rstctrl_offs);
+	return omap_prm_deassert_hardreset(ohri->rst_shift, 0,
+					   oh->clkdm->pwrdm.ptr->prcm_partition,
+					   oh->clkdm->pwrdm.ptr->prcm_offs,
+					   oh->prcm.omap4.rstctrl_offs, 0);
 }

 /**
···
 	if (!oh->clkdm)
 		return -EINVAL;

-	return omap4_prminst_is_hardreset_asserted(ohri->rst_shift,
-				oh->clkdm->pwrdm.ptr->prcm_partition,
-				oh->clkdm->pwrdm.ptr->prcm_offs,
-				oh->prcm.omap4.rstctrl_offs);
+	return omap_prm_is_hardreset_asserted(ohri->rst_shift,
+					      oh->clkdm->pwrdm.ptr->
+					      prcm_partition,
+					      oh->clkdm->pwrdm.ptr->prcm_offs,
+					      oh->prcm.omap4.rstctrl_offs);
 }

 /**
···
 				   struct omap_hwmod_rst_info *ohri)

 {
-	return am33xx_prm_assert_hardreset(ohri->rst_shift,
-					   oh->clkdm->pwrdm.ptr->prcm_offs,
-					   oh->prcm.omap4.rstctrl_offs);
+	return omap_prm_assert_hardreset(ohri->rst_shift, 0,
+					 oh->clkdm->pwrdm.ptr->prcm_offs,
+					 oh->prcm.omap4.rstctrl_offs);
 }

 /**
···
 static int _am33xx_deassert_hardreset(struct omap_hwmod *oh,
 				      struct omap_hwmod_rst_info *ohri)
 {
-	return am33xx_prm_deassert_hardreset(ohri->rst_shift,
-					     ohri->st_shift,
-					     oh->clkdm->pwrdm.ptr->prcm_offs,
-					     oh->prcm.omap4.rstctrl_offs,
-					     oh->prcm.omap4.rstst_offs);
+	return omap_prm_deassert_hardreset(ohri->rst_shift, ohri->st_shift, 0,
+					   oh->clkdm->pwrdm.ptr->prcm_offs,
+					   oh->prcm.omap4.rstctrl_offs,
+					   oh->prcm.omap4.rstst_offs);
 }

 /**
···
 static int _am33xx_is_hardreset_asserted(struct omap_hwmod *oh,
 					 struct omap_hwmod_rst_info *ohri)
 {
-	return am33xx_prm_is_hardreset_asserted(ohri->rst_shift,
-						oh->clkdm->pwrdm.ptr->prcm_offs,
-						oh->prcm.omap4.rstctrl_offs);
+	return omap_prm_is_hardreset_asserted(ohri->rst_shift, 0,
+					      oh->clkdm->pwrdm.ptr->prcm_offs,
+					      oh->prcm.omap4.rstctrl_offs);
 }

 /* Public functions */
···
 void __init omap_hwmod_init(void)
 {
 	if (cpu_is_omap24xx()) {
-		soc_ops.wait_target_ready = _omap2xxx_wait_target_ready;
+		soc_ops.wait_target_ready = _omap2xxx_3xxx_wait_target_ready;
 		soc_ops.assert_hardreset = _omap2_assert_hardreset;
 		soc_ops.deassert_hardreset = _omap2_deassert_hardreset;
 		soc_ops.is_hardreset_asserted = _omap2_is_hardreset_asserted;
 	} else if (cpu_is_omap34xx()) {
-		soc_ops.wait_target_ready = _omap3xxx_wait_target_ready;
+		soc_ops.wait_target_ready = _omap2xxx_3xxx_wait_target_ready;
 		soc_ops.assert_hardreset = _omap2_assert_hardreset;
 		soc_ops.deassert_hardreset = _omap2_deassert_hardreset;
 		soc_ops.is_hardreset_asserted = _omap2_is_hardreset_asserted;
···
 		soc_ops.enable_module = _omap4_enable_module;
 		soc_ops.disable_module = _omap4_disable_module;
 		soc_ops.wait_target_ready = _omap4_wait_target_ready;
-		soc_ops.assert_hardreset = _am33xx_assert_hardreset;
-		soc_ops.deassert_hardreset = _am33xx_deassert_hardreset;
-		soc_ops.is_hardreset_asserted = _am33xx_is_hardreset_asserted;
+		soc_ops.assert_hardreset = _omap4_assert_hardreset;
+		soc_ops.deassert_hardreset = _omap4_deassert_hardreset;
+		soc_ops.is_hardreset_asserted = _omap4_is_hardreset_asserted;
 		soc_ops.init_clkdm = _init_clkdm;
 	} else if (soc_is_am33xx()) {
-		soc_ops.enable_module = _am33xx_enable_module;
-		soc_ops.disable_module = _am33xx_disable_module;
-		soc_ops.wait_target_ready = _am33xx_wait_target_ready;
+		soc_ops.enable_module = _omap4_enable_module;
+		soc_ops.disable_module = _omap4_disable_module;
+		soc_ops.wait_target_ready = _omap4_wait_target_ready;
 		soc_ops.assert_hardreset = _am33xx_assert_hardreset;
 		soc_ops.deassert_hardreset = _am33xx_deassert_hardreset;
 		soc_ops.is_hardreset_asserted = _am33xx_is_hardreset_asserted;
arch/arm/mach-omap2/omap_hwmod.h  (+8)
···
  * @flags: hwmod flags (documented below)
  * @_lock: spinlock serializing operations on this hwmod
  * @node: list node for hwmod list (internal use)
+ * @parent_hwmod: (temporary) a pointer to the hierarchical parent of this hwmod
  *
  * @main_clk refers to this module's "main clock," which for our
  * purposes is defined as "the functional clock needed for register
···
  * the omap_hwmod code and should not be set during initialization.
  *
  * @masters and @slaves are now deprecated.
+ *
+ * @parent_hwmod is temporary; there should be no need for it, as this
+ * information should already be expressed in the OCP interface
+ * structures.  @parent_hwmod is present as a workaround until we improve
+ * handling for hwmods with multiple parents (e.g., OMAP4+ DSS with
+ * multiple register targets across different interconnects).
  */
 struct omap_hwmod {
 	const char			*name;
···
 	u8				_int_flags;
 	u8				_state;
 	u8				_postsetup_state;
+	struct omap_hwmod		*parent_hwmod;
 };

 struct omap_hwmod *omap_hwmod_lookup(const char *name);
+39
arch/arm/mach-omap2/omap_hwmod_43xx_data.c
···
 	},
 };
 
+/*
+ * 'adc/tsc' class
+ * TouchScreen Controller (Analog-To-Digital Converter)
+ */
+static struct omap_hwmod_class_sysconfig am43xx_adc_tsc_sysc = {
+	.rev_offs = 0x00,
+	.sysc_offs = 0x10,
+	.sysc_flags = SYSC_HAS_SIDLEMODE,
+	.idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART |
+		      SIDLE_SMART_WKUP),
+	.sysc_fields = &omap_hwmod_sysc_type2,
+};
+
+static struct omap_hwmod_class am43xx_adc_tsc_hwmod_class = {
+	.name = "adc_tsc",
+	.sysc = &am43xx_adc_tsc_sysc,
+};
+
+static struct omap_hwmod am43xx_adc_tsc_hwmod = {
+	.name = "adc_tsc",
+	.class = &am43xx_adc_tsc_hwmod_class,
+	.clkdm_name = "l3s_tsc_clkdm",
+	.main_clk = "adc_tsc_fck",
+	.prcm = {
+		.omap4 = {
+			.clkctrl_offs = AM43XX_CM_WKUP_ADC_TSC_CLKCTRL_OFFSET,
+			.modulemode = MODULEMODE_SWCTRL,
+		},
+	},
+};
+
 /* dss */
 
 static struct omap_hwmod am43xx_dss_core_hwmod = {
···
 	.slave = &am43xx_gpio0_hwmod,
 	.clk = "sys_clkin_ck",
 	.user = OCP_USER_MPU | OCP_USER_SDMA,
+};
+
+static struct omap_hwmod_ocp_if am43xx_l4_wkup__adc_tsc = {
+	.master = &am33xx_l4_wkup_hwmod,
+	.slave = &am43xx_adc_tsc_hwmod,
+	.clk = "dpll_core_m4_div2_ck",
+	.user = OCP_USER_MPU,
 };
 
 static struct omap_hwmod_ocp_if am43xx_l4_hs__cpgmac0 = {
···
 	&am43xx_l4_wkup__i2c1,
 	&am43xx_l4_wkup__gpio0,
 	&am43xx_l4_wkup__wd_timer1,
+	&am43xx_l4_wkup__adc_tsc,
 	&am43xx_l3_s__qspi,
 	&am33xx_l4_per__dcan0,
 	&am33xx_l4_per__dcan1,
+16 -9
arch/arm/mach-omap2/omap_hwmod_44xx_data.c
···
 		.omap4 = {
 			.clkctrl_offs = OMAP4_CM_DSS_DSS_CLKCTRL_OFFSET,
 			.context_offs = OMAP4_RM_DSS_DSS_CONTEXT_OFFSET,
+			.modulemode = MODULEMODE_SWCTRL,
 		},
 	},
 	.opt_clks = dss_opt_clks,
···
 			.context_offs = OMAP4_RM_DSS_DSS_CONTEXT_OFFSET,
 		},
 	},
-	.dev_attr = &omap44xx_dss_dispc_dev_attr
+	.dev_attr = &omap44xx_dss_dispc_dev_attr,
+	.parent_hwmod = &omap44xx_dss_hwmod,
 };
 
 /*
···
 	},
 	.opt_clks = dss_dsi1_opt_clks,
 	.opt_clks_cnt = ARRAY_SIZE(dss_dsi1_opt_clks),
+	.parent_hwmod = &omap44xx_dss_hwmod,
 };
 
 /* dss_dsi2 */
···
 	},
 	.opt_clks = dss_dsi2_opt_clks,
 	.opt_clks_cnt = ARRAY_SIZE(dss_dsi2_opt_clks),
+	.parent_hwmod = &omap44xx_dss_hwmod,
 };
 
 /*
···
 	},
 	.opt_clks = dss_hdmi_opt_clks,
 	.opt_clks_cnt = ARRAY_SIZE(dss_hdmi_opt_clks),
+	.parent_hwmod = &omap44xx_dss_hwmod,
 };
 
 /*
···
 };
 
 static struct omap_hwmod_opt_clk dss_rfbi_opt_clks[] = {
-	{ .role = "ick", .clk = "dss_fck" },
+	{ .role = "ick", .clk = "l3_div_ck" },
 };
 
 static struct omap_hwmod omap44xx_dss_rfbi_hwmod = {
···
 	},
 	.opt_clks = dss_rfbi_opt_clks,
 	.opt_clks_cnt = ARRAY_SIZE(dss_rfbi_opt_clks),
+	.parent_hwmod = &omap44xx_dss_hwmod,
 };
 
 /*
···
 			.context_offs = OMAP4_RM_DSS_DSS_CONTEXT_OFFSET,
 		},
 	},
+	.parent_hwmod = &omap44xx_dss_hwmod,
 };
 
 /*
···
 static struct omap_hwmod_ocp_if omap44xx_l3_main_2__dss = {
 	.master = &omap44xx_l3_main_2_hwmod,
 	.slave = &omap44xx_dss_hwmod,
-	.clk = "dss_fck",
+	.clk = "l3_div_ck",
 	.addr = omap44xx_dss_dma_addrs,
 	.user = OCP_USER_SDMA,
 };
···
 static struct omap_hwmod_ocp_if omap44xx_l3_main_2__dss_dispc = {
 	.master = &omap44xx_l3_main_2_hwmod,
 	.slave = &omap44xx_dss_dispc_hwmod,
-	.clk = "dss_fck",
+	.clk = "l3_div_ck",
 	.addr = omap44xx_dss_dispc_dma_addrs,
 	.user = OCP_USER_SDMA,
 };
···
 static struct omap_hwmod_ocp_if omap44xx_l3_main_2__dss_dsi1 = {
 	.master = &omap44xx_l3_main_2_hwmod,
 	.slave = &omap44xx_dss_dsi1_hwmod,
-	.clk = "dss_fck",
+	.clk = "l3_div_ck",
 	.addr = omap44xx_dss_dsi1_dma_addrs,
 	.user = OCP_USER_SDMA,
 };
···
 static struct omap_hwmod_ocp_if omap44xx_l3_main_2__dss_dsi2 = {
 	.master = &omap44xx_l3_main_2_hwmod,
 	.slave = &omap44xx_dss_dsi2_hwmod,
-	.clk = "dss_fck",
+	.clk = "l3_div_ck",
 	.addr = omap44xx_dss_dsi2_dma_addrs,
 	.user = OCP_USER_SDMA,
 };
···
 static struct omap_hwmod_ocp_if omap44xx_l3_main_2__dss_hdmi = {
 	.master = &omap44xx_l3_main_2_hwmod,
 	.slave = &omap44xx_dss_hdmi_hwmod,
-	.clk = "dss_fck",
+	.clk = "l3_div_ck",
 	.addr = omap44xx_dss_hdmi_dma_addrs,
 	.user = OCP_USER_SDMA,
 };
···
 static struct omap_hwmod_ocp_if omap44xx_l3_main_2__dss_rfbi = {
 	.master = &omap44xx_l3_main_2_hwmod,
 	.slave = &omap44xx_dss_rfbi_hwmod,
-	.clk = "dss_fck",
+	.clk = "l3_div_ck",
 	.addr = omap44xx_dss_rfbi_dma_addrs,
 	.user = OCP_USER_SDMA,
 };
···
 static struct omap_hwmod_ocp_if omap44xx_l3_main_2__dss_venc = {
 	.master = &omap44xx_l3_main_2_hwmod,
 	.slave = &omap44xx_dss_venc_hwmod,
-	.clk = "dss_fck",
+	.clk = "l3_div_ck",
 	.addr = omap44xx_dss_venc_dma_addrs,
 	.user = OCP_USER_SDMA,
 };
+5
arch/arm/mach-omap2/omap_hwmod_54xx_data.c
···
 	.opt_clks = dss_dispc_opt_clks,
 	.opt_clks_cnt = ARRAY_SIZE(dss_dispc_opt_clks),
 	.dev_attr = &dss_dispc_dev_attr,
+	.parent_hwmod = &omap54xx_dss_hwmod,
 };
 
 /*
···
 	},
 	.opt_clks = dss_dsi1_a_opt_clks,
 	.opt_clks_cnt = ARRAY_SIZE(dss_dsi1_a_opt_clks),
+	.parent_hwmod = &omap54xx_dss_hwmod,
 };
 
 /* dss_dsi1_c */
···
 	},
 	.opt_clks = dss_dsi1_c_opt_clks,
 	.opt_clks_cnt = ARRAY_SIZE(dss_dsi1_c_opt_clks),
+	.parent_hwmod = &omap54xx_dss_hwmod,
 };
 
 /*
···
 	},
 	.opt_clks = dss_hdmi_opt_clks,
 	.opt_clks_cnt = ARRAY_SIZE(dss_hdmi_opt_clks),
+	.parent_hwmod = &omap54xx_dss_hwmod,
 };
 
 /*
···
 	},
 	.opt_clks = dss_rfbi_opt_clks,
 	.opt_clks_cnt = ARRAY_SIZE(dss_rfbi_opt_clks),
+	.parent_hwmod = &omap54xx_dss_hwmod,
 };
 
 /*
+100
arch/arm/mach-omap2/omap_hwmod_7xx_data.c
···
 	},
 };
 
+/* uart7 */
+static struct omap_hwmod dra7xx_uart7_hwmod = {
+	.name = "uart7",
+	.class = &dra7xx_uart_hwmod_class,
+	.clkdm_name = "l4per2_clkdm",
+	.main_clk = "uart7_gfclk_mux",
+	.flags = HWMOD_SWSUP_SIDLE_ACT,
+	.prcm = {
+		.omap4 = {
+			.clkctrl_offs = DRA7XX_CM_L4PER2_UART7_CLKCTRL_OFFSET,
+			.context_offs = DRA7XX_RM_L4PER2_UART7_CONTEXT_OFFSET,
+			.modulemode = MODULEMODE_SWCTRL,
+		},
+	},
+};
+
+/* uart8 */
+static struct omap_hwmod dra7xx_uart8_hwmod = {
+	.name = "uart8",
+	.class = &dra7xx_uart_hwmod_class,
+	.clkdm_name = "l4per2_clkdm",
+	.main_clk = "uart8_gfclk_mux",
+	.flags = HWMOD_SWSUP_SIDLE_ACT,
+	.prcm = {
+		.omap4 = {
+			.clkctrl_offs = DRA7XX_CM_L4PER2_UART8_CLKCTRL_OFFSET,
+			.context_offs = DRA7XX_RM_L4PER2_UART8_CONTEXT_OFFSET,
+			.modulemode = MODULEMODE_SWCTRL,
+		},
+	},
+};
+
+/* uart9 */
+static struct omap_hwmod dra7xx_uart9_hwmod = {
+	.name = "uart9",
+	.class = &dra7xx_uart_hwmod_class,
+	.clkdm_name = "l4per2_clkdm",
+	.main_clk = "uart9_gfclk_mux",
+	.flags = HWMOD_SWSUP_SIDLE_ACT,
+	.prcm = {
+		.omap4 = {
+			.clkctrl_offs = DRA7XX_CM_L4PER2_UART9_CLKCTRL_OFFSET,
+			.context_offs = DRA7XX_RM_L4PER2_UART9_CONTEXT_OFFSET,
+			.modulemode = MODULEMODE_SWCTRL,
+		},
+	},
+};
+
+/* uart10 */
+static struct omap_hwmod dra7xx_uart10_hwmod = {
+	.name = "uart10",
+	.class = &dra7xx_uart_hwmod_class,
+	.clkdm_name = "wkupaon_clkdm",
+	.main_clk = "uart10_gfclk_mux",
+	.flags = HWMOD_SWSUP_SIDLE_ACT,
+	.prcm = {
+		.omap4 = {
+			.clkctrl_offs = DRA7XX_CM_WKUPAON_UART10_CLKCTRL_OFFSET,
+			.context_offs = DRA7XX_RM_WKUPAON_UART10_CONTEXT_OFFSET,
+			.modulemode = MODULEMODE_SWCTRL,
+		},
+	},
+};
+
 /*
  * 'usb_otg_ss' class
  *
···
 	.user = OCP_USER_MPU | OCP_USER_SDMA,
 };
 
+/* l4_per2 -> uart7 */
+static struct omap_hwmod_ocp_if dra7xx_l4_per2__uart7 = {
+	.master = &dra7xx_l4_per2_hwmod,
+	.slave = &dra7xx_uart7_hwmod,
+	.clk = "l3_iclk_div",
+	.user = OCP_USER_MPU | OCP_USER_SDMA,
+};
+
+/* l4_per2 -> uart8 */
+static struct omap_hwmod_ocp_if dra7xx_l4_per2__uart8 = {
+	.master = &dra7xx_l4_per2_hwmod,
+	.slave = &dra7xx_uart8_hwmod,
+	.clk = "l3_iclk_div",
+	.user = OCP_USER_MPU | OCP_USER_SDMA,
+};
+
+/* l4_per2 -> uart9 */
+static struct omap_hwmod_ocp_if dra7xx_l4_per2__uart9 = {
+	.master = &dra7xx_l4_per2_hwmod,
+	.slave = &dra7xx_uart9_hwmod,
+	.clk = "l3_iclk_div",
+	.user = OCP_USER_MPU | OCP_USER_SDMA,
+};
+
+/* l4_wkup -> uart10 */
+static struct omap_hwmod_ocp_if dra7xx_l4_wkup__uart10 = {
+	.master = &dra7xx_l4_wkup_hwmod,
+	.slave = &dra7xx_uart10_hwmod,
+	.clk = "wkupaon_iclk_mux",
+	.user = OCP_USER_MPU | OCP_USER_SDMA,
+};
+
 /* l4_per3 -> usb_otg_ss1 */
 static struct omap_hwmod_ocp_if dra7xx_l4_per3__usb_otg_ss1 = {
 	.master = &dra7xx_l4_per3_hwmod,
···
 	&dra7xx_l4_per1__uart4,
 	&dra7xx_l4_per1__uart5,
 	&dra7xx_l4_per1__uart6,
+	&dra7xx_l4_per2__uart7,
+	&dra7xx_l4_per2__uart8,
+	&dra7xx_l4_per2__uart9,
+	&dra7xx_l4_wkup__uart10,
 	&dra7xx_l4_per3__usb_otg_ss1,
 	&dra7xx_l4_per3__usb_otg_ss2,
 	&dra7xx_l4_per3__usb_otg_ss3,
+66 -80
arch/arm/mach-omap2/pm44xx.c
···
 	struct list_head node;
 };
 
+/**
+ * struct static_dep_map - Static dependency map
+ * @from: from clockdomain
+ * @to: to clockdomain
+ */
+struct static_dep_map {
+	const char *from;
+	const char *to;
+};
+
 static u32 cpu_suspend_state = PWRDM_POWER_OFF;
 
 static LIST_HEAD(pwrst_list);
···
 	omap_do_wfi();
 }
 
-/**
- * omap4_init_static_deps - Add OMAP4 static dependencies
- *
- * Add needed static clockdomain dependencies on OMAP4 devices.
- * Return: 0 on success or 'err' on failures
+/*
+ * The dynamic dependency between MPUSS -> MEMIF and
+ * MPUSS -> L4_PER/L3_* and DUCATI -> L3_* doesn't work as
+ * expected. The hardware recommendation is to enable static
+ * dependencies for these to avoid system lock ups or random crashes.
+ * The L4 wakeup depedency is added to workaround the OCP sync hardware
+ * BUG with 32K synctimer which lead to incorrect timer value read
+ * from the 32K counter. The BUG applies for GPTIMER1 and WDT2 which
+ * are part of L4 wakeup clockdomain.
  */
-static inline int omap4_init_static_deps(void)
-{
-	struct clockdomain *emif_clkdm, *mpuss_clkdm, *l3_1_clkdm;
-	struct clockdomain *ducati_clkdm, *l3_2_clkdm;
-	int ret = 0;
+static const struct static_dep_map omap4_static_dep_map[] = {
+	{.from = "mpuss_clkdm", .to = "l3_emif_clkdm"},
+	{.from = "mpuss_clkdm", .to = "l3_1_clkdm"},
+	{.from = "mpuss_clkdm", .to = "l3_2_clkdm"},
+	{.from = "ducati_clkdm", .to = "l3_1_clkdm"},
+	{.from = "ducati_clkdm", .to = "l3_2_clkdm"},
+	{.from = NULL} /* TERMINATION */
+};
 
-	if (omap_rev() == OMAP4430_REV_ES1_0) {
-		WARN(1, "Power Management not supported on OMAP4430 ES1.0\n");
-		return -ENODEV;
-	}
-
-	pr_err("Power Management for TI OMAP4.\n");
-	/*
-	 * OMAP4 chip PM currently works only with certain (newer)
-	 * versions of bootloaders. This is due to missing code in the
-	 * kernel to properly reset and initialize some devices.
-	 * http://www.spinics.net/lists/arm-kernel/msg218641.html
-	 */
-	pr_warn("OMAP4 PM: u-boot >= v2012.07 is required for full PM support\n");
-
-	ret = pwrdm_for_each(pwrdms_setup, NULL);
-	if (ret) {
-		pr_err("Failed to setup powerdomains\n");
-		return ret;
-	}
-
-	/*
-	 * The dynamic dependency between MPUSS -> MEMIF and
-	 * MPUSS -> L4_PER/L3_* and DUCATI -> L3_* doesn't work as
-	 * expected. The hardware recommendation is to enable static
-	 * dependencies for these to avoid system lock ups or random crashes.
-	 * The L4 wakeup depedency is added to workaround the OCP sync hardware
-	 * BUG with 32K synctimer which lead to incorrect timer value read
-	 * from the 32K counter. The BUG applies for GPTIMER1 and WDT2 which
-	 * are part of L4 wakeup clockdomain.
-	 */
-	mpuss_clkdm = clkdm_lookup("mpuss_clkdm");
-	emif_clkdm = clkdm_lookup("l3_emif_clkdm");
-	l3_1_clkdm = clkdm_lookup("l3_1_clkdm");
-	l3_2_clkdm = clkdm_lookup("l3_2_clkdm");
-	ducati_clkdm = clkdm_lookup("ducati_clkdm");
-	if ((!mpuss_clkdm) || (!emif_clkdm) || (!l3_1_clkdm) ||
-	    (!l3_2_clkdm) || (!ducati_clkdm))
-		return -EINVAL;
-
-	ret = clkdm_add_wkdep(mpuss_clkdm, emif_clkdm);
-	ret |= clkdm_add_wkdep(mpuss_clkdm, l3_1_clkdm);
-	ret |= clkdm_add_wkdep(mpuss_clkdm, l3_2_clkdm);
-	ret |= clkdm_add_wkdep(ducati_clkdm, l3_1_clkdm);
-	ret |= clkdm_add_wkdep(ducati_clkdm, l3_2_clkdm);
-	if (ret) {
-		pr_err("Failed to add MPUSS -> L3/EMIF/L4PER, DUCATI -> L3 wakeup dependency\n");
-		return -EINVAL;
-	}
-
-	return ret;
-}
+static const struct static_dep_map omap5_dra7_static_dep_map[] = {
+	{.from = "mpu_clkdm", .to = "emif_clkdm"},
+	{.from = NULL} /* TERMINATION */
+};
 
 /**
- * omap5_dra7_init_static_deps - Init static clkdm dependencies on OMAP5 and
- * DRA7
- *
- * The dynamic dependency between MPUSS -> EMIF is broken and has
- * not worked as expected. The hardware recommendation is to
- * enable static dependencies for these to avoid system
- * lock ups or random crashes.
+ * omap4plus_init_static_deps() - Initialize a static dependency map
+ * @map: Mapping of clock domains
  */
-static inline int omap5_dra7_init_static_deps(void)
+static inline int omap4plus_init_static_deps(const struct static_dep_map *map)
 {
-	struct clockdomain *mpuss_clkdm, *emif_clkdm;
 	int ret;
+	struct clockdomain *from, *to;
 
-	mpuss_clkdm = clkdm_lookup("mpu_clkdm");
-	emif_clkdm = clkdm_lookup("emif_clkdm");
-	if (!mpuss_clkdm || !emif_clkdm)
-		return -EINVAL;
+	if (!map)
+		return 0;
 
-	ret = clkdm_add_wkdep(mpuss_clkdm, emif_clkdm);
-	if (ret)
-		pr_err("Failed to add MPUSS -> EMIF wakeup dependency\n");
+	while (map->from) {
+		from = clkdm_lookup(map->from);
+		to = clkdm_lookup(map->to);
+		if (!from || !to) {
+			pr_err("Failed lookup %s or %s for wakeup dependency\n",
+			       map->from, map->to);
+			return -EINVAL;
+		}
+		ret = clkdm_add_wkdep(from, to);
+		if (ret) {
+			pr_err("Failed to add %s -> %s wakeup dependency(%d)\n",
+			       map->from, map->to, ret);
+			return ret;
+		}
 
-	return ret;
+		map++;
+	};
+
+	return 0;
 }
 
 /**
···
 
 	pr_info("Power Management for TI OMAP4+ devices.\n");
 
+	/*
+	 * OMAP4 chip PM currently works only with certain (newer)
+	 * versions of bootloaders. This is due to missing code in the
+	 * kernel to properly reset and initialize some devices.
+	 * http://www.spinics.net/lists/arm-kernel/msg218641.html
+	 */
+	if (cpu_is_omap44xx())
+		pr_warn("OMAP4 PM: u-boot >= v2012.07 is required for full PM support\n");
+
 	ret = pwrdm_for_each(pwrdms_setup, NULL);
 	if (ret) {
 		pr_err("Failed to setup powerdomains.\n");
···
 	}
 
 	if (cpu_is_omap44xx())
-		ret = omap4_init_static_deps();
+		ret = omap4plus_init_static_deps(omap4_static_dep_map);
 	else if (soc_is_omap54xx() || soc_is_dra7xx())
-		ret = omap5_dra7_init_static_deps();
+		ret = omap4plus_init_static_deps(omap5_dra7_static_dep_map);
 
 	if (ret) {
 		pr_err("Failed to initialise static dependencies.\n");
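The pm44xx.c change above replaces two open-coded dependency setup functions with a single walker over NULL-terminated from/to tables. The table-walk pattern can be sketched in isolation as below; this is not the kernel code itself — the `clkdm_lookup()`/`clkdm_add_wkdep()` stand-ins and the `known_clkdms` table are invented here so the sketch compiles outside the kernel.

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Illustrative stand-ins for the kernel's clockdomain API (assumptions). */
struct clockdomain { const char *name; };

static struct clockdomain known_clkdms[] = {
	{ "mpuss_clkdm" }, { "l3_emif_clkdm" },
	{ "l3_1_clkdm" }, { "ducati_clkdm" },
};

static struct clockdomain *clkdm_lookup(const char *name)
{
	size_t i;

	for (i = 0; i < sizeof(known_clkdms) / sizeof(known_clkdms[0]); i++)
		if (!strcmp(known_clkdms[i].name, name))
			return &known_clkdms[i];
	return NULL;	/* unknown clockdomain */
}

static int clkdm_add_wkdep(struct clockdomain *from, struct clockdomain *to)
{
	printf("wkdep: %s -> %s\n", from->name, to->name);
	return 0;	/* pretend the register write succeeded */
}

/* Same shape as the patch: a from/to table ended by a NULL sentinel. */
struct static_dep_map { const char *from; const char *to; };

static const struct static_dep_map omap4_static_dep_map[] = {
	{ .from = "mpuss_clkdm", .to = "l3_emif_clkdm" },
	{ .from = "ducati_clkdm", .to = "l3_1_clkdm" },
	{ .from = NULL } /* TERMINATION */
};

/* Walk the table until the sentinel; fail fast on any bad entry. */
static int init_static_deps(const struct static_dep_map *map)
{
	while (map->from) {
		struct clockdomain *from = clkdm_lookup(map->from);
		struct clockdomain *to = clkdm_lookup(map->to);

		if (!from || !to)
			return -1;	/* -EINVAL in the kernel */
		if (clkdm_add_wkdep(from, to))
			return -1;
		map++;
	}
	return 0;
}
```

The sentinel entry is what lets one generic function serve both the OMAP4 and the OMAP5/DRA7 tables: adding a dependency becomes a one-line data change rather than more lookup-and-check code.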
+16
arch/arm/mach-omap2/prm.h
···
  * PRM_HAS_VOLTAGE: has voltage domains
  */
 #define PRM_HAS_IO_WAKEUP	(1 << 0)
+#define PRM_HAS_VOLTAGE	(1 << 1)
 
 /*
  * MAX_MODULE_SOFTRESET_WAIT: Maximum microseconds to wait for OMAP
···
  * @was_any_context_lost_old: ptr to the SoC PRM context loss test fn
  * @clear_context_loss_flags_old: ptr to the SoC PRM context loss flag clear fn
  * @late_init: ptr to the late init function
+ * @assert_hardreset: ptr to the SoC PRM hardreset assert impl
+ * @deassert_hardreset: ptr to the SoC PRM hardreset deassert impl
  *
  * XXX @was_any_context_lost_old and @clear_context_loss_flags_old are
  * deprecated.
···
 	bool (*was_any_context_lost_old)(u8 part, s16 inst, u16 idx);
 	void (*clear_context_loss_flags_old)(u8 part, s16 inst, u16 idx);
 	int (*late_init)(void);
+	int (*assert_hardreset)(u8 shift, u8 part, s16 prm_mod, u16 offset);
+	int (*deassert_hardreset)(u8 shift, u8 st_shift, u8 part, s16 prm_mod,
+				  u16 offset, u16 st_offset);
+	int (*is_hardreset_asserted)(u8 shift, u8 part, s16 prm_mod,
+				     u16 offset);
+	void (*reset_system)(void);
 };
 
 extern int prm_register(struct prm_ll_data *pld);
 extern int prm_unregister(struct prm_ll_data *pld);
 
+int omap_prm_assert_hardreset(u8 shift, u8 part, s16 prm_mod, u16 offset);
+int omap_prm_deassert_hardreset(u8 shift, u8 st_shift, u8 part, s16 prm_mod,
+				u16 offset, u16 st_offset);
+int omap_prm_is_hardreset_asserted(u8 shift, u8 part, s16 prm_mod, u16 offset);
 extern u32 prm_read_reset_sources(void);
 extern bool prm_was_any_context_lost_old(u8 part, s16 inst, u16 idx);
 extern void prm_clear_context_loss_flags_old(u8 part, s16 inst, u16 idx);
+void omap_prm_reset_system(void);
+
+void omap_prm_reconfigure_io_chain(void);
 
 #endif
 
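The prm.h hunks above grow `struct prm_ll_data` so that generic `omap_prm_*` entry points can dispatch to whichever SoC backend registered itself via `prm_register()`. A minimal sketch of that register-and-dispatch pattern, with plain C types instead of the kernel's `u8`/`s16`/`u16` and a fabricated backend standing in for the real omap2/am33xx implementations:

```c
#include <assert.h>
#include <stddef.h>

/* Reduced illustrative version of the kernel's prm_ll_data ops table. */
struct prm_ll_data {
	int (*assert_hardreset)(unsigned char shift, unsigned char part,
				short prm_mod, unsigned short offset);
	void (*reset_system)(void);
};

static struct prm_ll_data *prm_ll_data;	/* filled in by the SoC init */

/* Register a backend; refuse a NULL pointer or double registration. */
static int prm_register(struct prm_ll_data *pld)
{
	if (!pld || prm_ll_data)
		return -1;	/* kernel uses -EINVAL / -EEXIST */
	prm_ll_data = pld;
	return 0;
}

/* Generic entry point: dispatch to the registered backend, if any. */
static int omap_prm_assert_hardreset(unsigned char shift, unsigned char part,
				     short prm_mod, unsigned short offset)
{
	if (!prm_ll_data || !prm_ll_data->assert_hardreset)
		return -1;	/* no backend support */
	return prm_ll_data->assert_hardreset(shift, part, prm_mod, offset);
}

/* Fake SoC backend (assumption, for illustration only). */
static int fake_assert_hardreset(unsigned char shift, unsigned char part,
				 short prm_mod, unsigned short offset)
{
	(void)part; (void)prm_mod; (void)offset;
	return shift;	/* echo an argument so the dispatch is observable */
}

static struct prm_ll_data fake_prm_ll_data = {
	.assert_hardreset = fake_assert_hardreset,
};
```

This is the shape that lets omap_hwmod.c call one `omap_prm_*` function on OMAP2, OMAP3, OMAP4, and AM33xx alike, as the omap_hwmod.c hunk at the top of this diff does.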
+5 -1
arch/arm/mach-omap2/prm2xxx.c
···
  * Set the DPLL reset bit, which should reboot the SoC.  This is the
  * recommended way to restart the SoC.  No return value.
  */
-void omap2xxx_prm_dpll_reset(void)
+static void omap2xxx_prm_dpll_reset(void)
 {
 	omap2_prm_set_mod_reg_bits(OMAP_RST_DPLL3_MASK, WKUP_MOD,
 				   OMAP2_RM_RSTCTRL);
···
 
 static struct prm_ll_data omap2xxx_prm_ll_data = {
 	.read_reset_sources = &omap2xxx_prm_read_reset_sources,
+	.assert_hardreset = &omap2_prm_assert_hardreset,
+	.deassert_hardreset = &omap2_prm_deassert_hardreset,
+	.is_hardreset_asserted = &omap2_prm_is_hardreset_asserted,
+	.reset_system = &omap2xxx_prm_dpll_reset,
 };
 
 int __init omap2xxx_prm_init(void)
-1
arch/arm/mach-omap2/prm2xxx.h
···
 extern int omap2xxx_clkdm_sleep(struct clockdomain *clkdm);
 extern int omap2xxx_clkdm_wakeup(struct clockdomain *clkdm);
 
-extern void omap2xxx_prm_dpll_reset(void);
 void omap2xxx_prm_clear_mod_irqs(s16 module, u8 regs, u32 wkst_mask);
 
 extern int __init omap2xxx_prm_init(void);
+14 -5
arch/arm/mach-omap2/prm2xxx_3xxx.c
···
 /**
  * omap2_prm_is_hardreset_asserted - read the HW reset line state of
  * submodules contained in the hwmod module
- * @prm_mod: PRM submodule base (e.g. CORE_MOD)
  * @shift: register bit shift corresponding to the reset line to check
+ * @part: PRM partition, ignored for OMAP2
+ * @prm_mod: PRM submodule base (e.g. CORE_MOD)
+ * @offset: register offset, ignored for OMAP2
  *
  * Returns 1 if the (sub)module hardreset line is currently asserted,
  * 0 if the (sub)module hardreset line is not currently asserted, or
  * -EINVAL if called while running on a non-OMAP2/3 chip.
  */
-int omap2_prm_is_hardreset_asserted(s16 prm_mod, u8 shift)
+int omap2_prm_is_hardreset_asserted(u8 shift, u8 part, s16 prm_mod, u16 offset)
 {
 	return omap2_prm_read_mod_bits_shift(prm_mod, OMAP2_RM_RSTCTRL,
 					     (1 << shift));
···
 
 /**
  * omap2_prm_assert_hardreset - assert the HW reset line of a submodule
- * @prm_mod: PRM submodule base (e.g. CORE_MOD)
  * @shift: register bit shift corresponding to the reset line to assert
+ * @part: PRM partition, ignored for OMAP2
+ * @prm_mod: PRM submodule base (e.g. CORE_MOD)
+ * @offset: register offset, ignored for OMAP2
  *
  * Some IPs like dsp or iva contain processors that require an HW
  * reset line to be asserted / deasserted in order to fully enable the
···
  * place the submodule into reset.  Returns 0 upon success or -EINVAL
  * upon an argument error.
  */
-int omap2_prm_assert_hardreset(s16 prm_mod, u8 shift)
+int omap2_prm_assert_hardreset(u8 shift, u8 part, s16 prm_mod, u16 offset)
 {
 	u32 mask;
 
···
- * @prm_mod: PRM submodule base (e.g. CORE_MOD)
  * @rst_shift: register bit shift corresponding to the reset line to deassert
  * @st_shift: register bit shift for the status of the deasserted submodule
+ * @part: PRM partition, not used for OMAP2
+ * @prm_mod: PRM submodule base (e.g. CORE_MOD)
+ * @rst_offset: reset register offset, not used for OMAP2
+ * @st_offset: reset status register offset, not used for OMAP2
  *
  * Some IPs like dsp or iva contain processors that require an HW
  * reset line to be asserted / deasserted in order to fully enable the
···
  * -EINVAL upon an argument error, -EEXIST if the submodule was already out
  * of reset, or -EBUSY if the submodule did not exit reset promptly.
  */
-int omap2_prm_deassert_hardreset(s16 prm_mod, u8 rst_shift, u8 st_shift)
+int omap2_prm_deassert_hardreset(u8 rst_shift, u8 st_shift, u8 part,
+				 s16 prm_mod, u16 rst_offset, u16 st_offset)
 {
 	u32 rst, st;
 	int c;
+6 -3
arch/arm/mach-omap2/prm2xxx_3xxx.h
···
 }
 
 /* These omap2_ PRM functions apply to both OMAP2 and 3 */
-extern int omap2_prm_is_hardreset_asserted(s16 prm_mod, u8 shift);
-extern int omap2_prm_assert_hardreset(s16 prm_mod, u8 shift);
-extern int omap2_prm_deassert_hardreset(s16 prm_mod, u8 rst_shift, u8 st_shift);
+int omap2_prm_is_hardreset_asserted(u8 shift, u8 part, s16 prm_mod, u16 offset);
+int omap2_prm_assert_hardreset(u8 shift, u8 part, s16 prm_mod,
+			       u16 offset);
+int omap2_prm_deassert_hardreset(u8 rst_shift, u8 st_shift, u8 part,
+				 s16 prm_mod, u16 reset_offset,
+				 u16 st_offset);
 
 extern int omap2_pwrdm_set_next_pwrst(struct powerdomain *pwrdm, u8 pwrst);
 extern int omap2_pwrdm_read_next_pwrst(struct powerdomain *pwrdm);
+55 -9
arch/arm/mach-omap2/prm33xx.c
···
 #include "prm33xx.h"
 #include "prm-regbits-33xx.h"
 
+#define AM33XX_PRM_RSTCTRL_OFFSET	0x0000
+
+#define AM33XX_RST_GLOBAL_WARM_SW_MASK	(1 << 0)
+
 /* Read a register in a PRM instance */
-u32 am33xx_prm_read_reg(s16 inst, u16 idx)
+static u32 am33xx_prm_read_reg(s16 inst, u16 idx)
 {
 	return readl_relaxed(prm_base + inst + idx);
 }
 
 /* Write into a register in a PRM instance */
-void am33xx_prm_write_reg(u32 val, s16 inst, u16 idx)
+static void am33xx_prm_write_reg(u32 val, s16 inst, u16 idx)
 {
 	writel_relaxed(val, prm_base + inst + idx);
 }
 
 /* Read-modify-write a register in PRM. Caller must lock */
-u32 am33xx_prm_rmw_reg_bits(u32 mask, u32 bits, s16 inst, s16 idx)
+static u32 am33xx_prm_rmw_reg_bits(u32 mask, u32 bits, s16 inst, s16 idx)
 {
 	u32 v;
 
···
  * am33xx_prm_is_hardreset_asserted - read the HW reset line state of
  * submodules contained in the hwmod module
  * @shift: register bit shift corresponding to the reset line to check
+ * @part: PRM partition, ignored for AM33xx
  * @inst: CM instance register offset (*_INST macro)
  * @rstctrl_offs: RM_RSTCTRL register address offset for this module
  *
···
  * 0 if the (sub)module hardreset line is not currently asserted, or
  * -EINVAL upon parameter error.
  */
-int am33xx_prm_is_hardreset_asserted(u8 shift, s16 inst, u16 rstctrl_offs)
+static int am33xx_prm_is_hardreset_asserted(u8 shift, u8 part, s16 inst,
+					    u16 rstctrl_offs)
 {
 	u32 v;
 
···
 /**
  * am33xx_prm_assert_hardreset - assert the HW reset line of a submodule
  * @shift: register bit shift corresponding to the reset line to assert
+ * @part: CM partition, ignored for AM33xx
  * @inst: CM instance register offset (*_INST macro)
  * @rstctrl_reg: RM_RSTCTRL register address for this module
  *
···
  * place the submodule into reset.  Returns 0 upon success or -EINVAL
  * upon an argument error.
  */
-int am33xx_prm_assert_hardreset(u8 shift, s16 inst, u16 rstctrl_offs)
+static int am33xx_prm_assert_hardreset(u8 shift, u8 part, s16 inst,
+				       u16 rstctrl_offs)
 {
 	u32 mask = 1 << shift;
 
···
  * am33xx_prm_deassert_hardreset - deassert a submodule hardreset line and
  * wait
  * @shift: register bit shift corresponding to the reset line to deassert
+ * @st_shift: reset status register bit shift corresponding to the reset line
+ * @part: PRM partition, not used for AM33xx
  * @inst: CM instance register offset (*_INST macro)
  * @rstctrl_reg: RM_RSTCTRL register address for this module
  * @rstst_reg: RM_RSTST register address for this module
···
  * -EINVAL upon an argument error, -EEXIST if the submodule was already out
  * of reset, or -EBUSY if the submodule did not exit reset promptly.
  */
-int am33xx_prm_deassert_hardreset(u8 shift, u8 st_shift, s16 inst,
-				  u16 rstctrl_offs, u16 rstst_offs)
+static int am33xx_prm_deassert_hardreset(u8 shift, u8 st_shift, u8 part,
+					 s16 inst, u16 rstctrl_offs,
+					 u16 rstst_offs)
 {
 	int c;
 	u32 mask = 1 << st_shift;
 
 	/* Check the current status to avoid de-asserting the line twice */
-	if (am33xx_prm_is_hardreset_asserted(shift, inst, rstctrl_offs) == 0)
+	if (am33xx_prm_is_hardreset_asserted(shift, 0, inst, rstctrl_offs) == 0)
 		return -EEXIST;
 
 	/* Clear the reset status by writing 1 to the status bit */
···
 	am33xx_prm_rmw_reg_bits(mask, 0, inst, rstctrl_offs);
 
 	/* wait the status to be set */
-	omap_test_timeout(am33xx_prm_is_hardreset_asserted(st_shift, inst,
+	omap_test_timeout(am33xx_prm_is_hardreset_asserted(st_shift, 0, inst,
 							   rstst_offs),
 			  MAX_MODULE_HARDRESET_WAIT, c);
 
···
 	return 0;
 }
 
+/**
+ * am33xx_prm_global_warm_sw_reset - reboot the device via warm reset
+ *
+ * Immediately reboots the device through warm reset.
+ */
+static void am33xx_prm_global_warm_sw_reset(void)
+{
+	am33xx_prm_rmw_reg_bits(AM33XX_RST_GLOBAL_WARM_SW_MASK,
+				AM33XX_RST_GLOBAL_WARM_SW_MASK,
+				AM33XX_PRM_DEVICE_MOD,
+				AM33XX_PRM_RSTCTRL_OFFSET);
+
+	/* OCP barrier */
+	(void)am33xx_prm_read_reg(AM33XX_PRM_DEVICE_MOD,
+				  AM33XX_PRM_RSTCTRL_OFFSET);
+}
+
 struct pwrdm_ops am33xx_pwrdm_operations = {
 	.pwrdm_set_next_pwrst = am33xx_pwrdm_set_next_pwrst,
 	.pwrdm_read_next_pwrst = am33xx_pwrdm_read_next_pwrst,
···
 	.pwrdm_wait_transition = am33xx_pwrdm_wait_transition,
 	.pwrdm_has_voltdm = am33xx_check_vcvp,
 };
+
+static struct prm_ll_data am33xx_prm_ll_data = {
+	.assert_hardreset = am33xx_prm_assert_hardreset,
+	.deassert_hardreset = am33xx_prm_deassert_hardreset,
+	.is_hardreset_asserted = am33xx_prm_is_hardreset_asserted,
+	.reset_system = am33xx_prm_global_warm_sw_reset,
+};
+
+int __init am33xx_prm_init(void)
+{
+	return prm_register(&am33xx_prm_ll_data);
+}
+
+static void __exit am33xx_prm_exit(void)
+{
+	prm_unregister(&am33xx_prm_ll_data);
+}
+__exitcall(am33xx_prm_exit);
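The deassert path in prm33xx.c above follows a fixed handshake: refuse if the line is already out of reset, clear the old status bit, clear the control bit, then poll the status register with a bounded loop (the kernel's `omap_test_timeout()`). The control/status handshake can be modelled in userspace as below; the two "registers" and `hw_deassert()` are a fake hardware model invented for this sketch, and line 5 starts held in reset so the happy path is exercised first.

```c
#include <assert.h>

#define MAX_WAIT 10000	/* loop bound, like MAX_MODULE_HARDRESET_WAIT */

static unsigned int rstctrl = 1u << 5;	/* 1 bits = lines held in reset */
static unsigned int rstst;		/* 1 bits = "reset done" status */

static int is_hardreset_asserted(unsigned int shift)
{
	return (rstctrl >> shift) & 1;
}

/* Fake hardware: deasserting the control bit immediately latches status. */
static void hw_deassert(unsigned int shift)
{
	rstctrl &= ~(1u << shift);
	rstst |= 1u << shift;
}

static int deassert_hardreset(unsigned int shift, unsigned int st_shift)
{
	int c = 0;

	/* Avoid de-asserting the line twice. */
	if (!is_hardreset_asserted(shift))
		return -2;		/* -EEXIST in the kernel */

	/* Clear stale status (the real register is write-1-to-clear). */
	rstst &= ~(1u << st_shift);

	hw_deassert(shift);

	/* Poll for the status bit, giving up after MAX_WAIT iterations. */
	while (!((rstst >> st_shift) & 1) && ++c < MAX_WAIT)
		;

	return (c < MAX_WAIT) ? 0 : -3;	/* -EBUSY on timeout */
}
```

The -EEXIST early return is what lets callers like omap_hwmod treat "already out of reset" differently from a genuine failure.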
+2 -9
arch/arm/mach-omap2/prm33xx.h
···
 #define AM33XX_PM_CEFUSE_PWRSTST	AM33XX_PRM_REGADDR(AM33XX_PRM_CEFUSE_MOD, 0x0004)
 
 #ifndef __ASSEMBLER__
-extern u32 am33xx_prm_read_reg(s16 inst, u16 idx);
-extern void am33xx_prm_write_reg(u32 val, s16 inst, u16 idx);
-extern u32 am33xx_prm_rmw_reg_bits(u32 mask, u32 bits, s16 inst, s16 idx);
-extern void am33xx_prm_global_warm_sw_reset(void);
-extern int am33xx_prm_is_hardreset_asserted(u8 shift, s16 inst,
-					    u16 rstctrl_offs);
-extern int am33xx_prm_assert_hardreset(u8 shift, s16 inst, u16 rstctrl_offs);
-extern int am33xx_prm_deassert_hardreset(u8 shift, u8 st_shift, s16 inst,
-					 u16 rstctrl_offs, u16 rstst_offs);
+int am33xx_prm_init(void);
+
 #endif /* ASSEMBLER */
 #endif
+16 -16
arch/arm/mach-omap2/prm3xxx.c
···
 #include "cm3xxx.h"
 #include "cm-regbits-34xx.h"
 
+static void omap3xxx_prm_read_pending_irqs(unsigned long *events);
+static void omap3xxx_prm_ocp_barrier(void);
+static void omap3xxx_prm_save_and_clear_irqen(u32 *saved_mask);
+static void omap3xxx_prm_restore_irqen(u32 *saved_mask);
+
 static const struct omap_prcm_irq omap3_prcm_irqs[] = {
 	OMAP_PRCM_IRQ("wkup",	0,	0),
 	OMAP_PRCM_IRQ("io",	9,	1),
···
  * recommended way to restart the SoC, considering Errata i520.  No
  * return value.
  */
-void omap3xxx_prm_dpll3_reset(void)
+static void omap3xxx_prm_dpll3_reset(void)
 {
 	omap2_prm_set_mod_reg_bits(OMAP_RST_DPLL3_MASK, OMAP3430_GR_MOD,
 				   OMAP2_RM_RSTCTRL);
···
  * MPU IRQs, and store the result into the u32 pointed to by @events.
  * No return value.
  */
-void omap3xxx_prm_read_pending_irqs(unsigned long *events)
+static void omap3xxx_prm_read_pending_irqs(unsigned long *events)
 {
 	u32 mask, st;
 
···
  * block, to avoid race conditions after acknowledging or clearing IRQ
  * bits.  No return value.
  */
-void omap3xxx_prm_ocp_barrier(void)
+static void omap3xxx_prm_ocp_barrier(void)
 {
 	omap2_prm_read_mod_reg(OCP_MOD, OMAP3_PRM_REVISION_OFFSET);
 }
···
  * returning; otherwise, spurious interrupts might occur.  No return
  * value.
  */
-void omap3xxx_prm_save_and_clear_irqen(u32 *saved_mask)
+static void omap3xxx_prm_save_and_clear_irqen(u32 *saved_mask)
 {
 	saved_mask[0] = omap2_prm_read_mod_reg(OCP_MOD,
 					       OMAP3_PRM_IRQENABLE_MPU_OFFSET);
···
  * barrier should be needed here; any pending PRM interrupts will fire
  * once the writes reach the PRM.  No return value.
  */
-void omap3xxx_prm_restore_irqen(u32 *saved_mask)
+static void omap3xxx_prm_restore_irqen(u32 *saved_mask)
 {
 	omap2_prm_write_mod_reg(saved_mask[0], OCP_MOD,
 				OMAP3_PRM_IRQENABLE_MPU_OFFSET);
···
  * The ST_IO_CHAIN bit does not exist in 3430 before es3.1. The only
  * thing we can do is toggle EN_IO bit for earlier omaps.
  */
-void omap3430_pre_es3_1_reconfigure_io_chain(void)
+static void omap3430_pre_es3_1_reconfigure_io_chain(void)
 {
 	omap2_prm_clear_mod_reg_bits(OMAP3430_EN_IO_MASK, WKUP_MOD,
 				     PM_WKEN);
···
  * deasserting WUCLKIN and clearing the ST_IO_CHAIN WKST bit.  No
  * return value.  These registers are only available in 3430 es3.1 and later.
  */
-void omap3_prm_reconfigure_io_chain(void)
+static void omap3_prm_reconfigure_io_chain(void)
 {
 	int i = 0;
 
···
 				  PM_WKST);
 
 	omap2_prm_read_mod_reg(WKUP_MOD, PM_WKST);
-}
-
-/**
- * omap3xxx_prm_reconfigure_io_chain - reconfigure I/O chain
- */
-void omap3xxx_prm_reconfigure_io_chain(void)
-{
-	if (omap3_prcm_irq_setup.reconfigure_io_chain)
-		omap3_prcm_irq_setup.reconfigure_io_chain();
 }
 
 /**
···
 static struct prm_ll_data omap3xxx_prm_ll_data = {
 	.read_reset_sources = &omap3xxx_prm_read_reset_sources,
 	.late_init = &omap3xxx_prm_late_init,
+	.assert_hardreset = &omap2_prm_assert_hardreset,
+	.deassert_hardreset = &omap2_prm_deassert_hardreset,
+	.is_hardreset_asserted = &omap2_prm_is_hardreset_asserted,
+	.reset_system = &omap3xxx_prm_dpll3_reset,
 };
 
 int __init omap3xxx_prm_init(void)
-16
arch/arm/mach-omap2/prm3xxx.h
··· 144 144 extern void omap3_prm_vcvp_write(u32 val, u8 offset); 145 145 extern u32 omap3_prm_vcvp_rmw(u32 mask, u32 bits, u8 offset); 146 146 147 - #ifdef CONFIG_ARCH_OMAP3 148 - void omap3xxx_prm_reconfigure_io_chain(void); 149 - #else 150 - static inline void omap3xxx_prm_reconfigure_io_chain(void) 151 - { 152 - } 153 - #endif 154 - 155 - /* PRM interrupt-related functions */ 156 - extern void omap3xxx_prm_read_pending_irqs(unsigned long *events); 157 - extern void omap3xxx_prm_ocp_barrier(void); 158 - extern void omap3xxx_prm_save_and_clear_irqen(u32 *saved_mask); 159 - extern void omap3xxx_prm_restore_irqen(u32 *saved_mask); 160 - 161 - extern void omap3xxx_prm_dpll3_reset(void); 162 - 163 147 extern int __init omap3xxx_prm_init(void); 164 148 extern u32 omap3xxx_prm_get_reset_sources(void); 165 149 int omap3xxx_prm_clear_mod_irqs(s16 module, u8 regs, u32 ignore_bits);
+24 -12
arch/arm/mach-omap2/prm44xx.c
··· 32 32 33 33 /* Static data */ 34 34 35 + static void omap44xx_prm_read_pending_irqs(unsigned long *events); 36 + static void omap44xx_prm_ocp_barrier(void); 37 + static void omap44xx_prm_save_and_clear_irqen(u32 *saved_mask); 38 + static void omap44xx_prm_restore_irqen(u32 *saved_mask); 39 + static void omap44xx_prm_reconfigure_io_chain(void); 40 + 35 41 static const struct omap_prcm_irq omap4_prcm_irqs[] = { 36 42 OMAP_PRCM_IRQ("io", 9, 1), 37 43 }; ··· 86 80 /* PRM low-level functions */ 87 81 88 82 /* Read a register in a CM/PRM instance in the PRM module */ 89 - u32 omap4_prm_read_inst_reg(s16 inst, u16 reg) 83 + static u32 omap4_prm_read_inst_reg(s16 inst, u16 reg) 90 84 { 91 85 return readl_relaxed(prm_base + inst + reg); 92 86 } 93 87 94 88 /* Write into a register in a CM/PRM instance in the PRM module */ 95 - void omap4_prm_write_inst_reg(u32 val, s16 inst, u16 reg) 89 + static void omap4_prm_write_inst_reg(u32 val, s16 inst, u16 reg) 96 90 { 97 91 writel_relaxed(val, prm_base + inst + reg); 98 92 } 99 93 100 94 /* Read-modify-write a register in a PRM module. Caller must lock */ 101 - u32 omap4_prm_rmw_inst_reg_bits(u32 mask, u32 bits, s16 inst, s16 reg) 95 + static u32 omap4_prm_rmw_inst_reg_bits(u32 mask, u32 bits, s16 inst, s16 reg) 102 96 { 103 97 u32 v; 104 98 ··· 213 207 * MPU IRQs, and store the result into the two u32s pointed to by @events. 214 208 * No return value. 215 209 */ 216 - void omap44xx_prm_read_pending_irqs(unsigned long *events) 210 + static void omap44xx_prm_read_pending_irqs(unsigned long *events) 217 211 { 218 212 events[0] = _read_pending_irq_reg(OMAP4_PRM_IRQENABLE_MPU_OFFSET, 219 213 OMAP4_PRM_IRQSTATUS_MPU_OFFSET); ··· 230 224 * block, to avoid race conditions after acknowledging or clearing IRQ 231 225 * bits. No return value. 
232 226 */ 233 - void omap44xx_prm_ocp_barrier(void) 227 + static void omap44xx_prm_ocp_barrier(void) 234 228 { 235 229 omap4_prm_read_inst_reg(OMAP4430_PRM_OCP_SOCKET_INST, 236 230 OMAP4_REVISION_PRM_OFFSET); ··· 247 241 * interrupts reaches the PRM before returning; otherwise, spurious 248 242 * interrupts might occur. No return value. 249 243 */ 250 - void omap44xx_prm_save_and_clear_irqen(u32 *saved_mask) 244 + static void omap44xx_prm_save_and_clear_irqen(u32 *saved_mask) 251 245 { 252 246 saved_mask[0] = 253 247 omap4_prm_read_inst_reg(OMAP4430_PRM_OCP_SOCKET_INST, ··· 276 270 * No OCP barrier should be needed here; any pending PRM interrupts will fire 277 271 * once the writes reach the PRM. No return value. 278 272 */ 279 - void omap44xx_prm_restore_irqen(u32 *saved_mask) 273 + static void omap44xx_prm_restore_irqen(u32 *saved_mask) 280 274 { 281 275 omap4_prm_write_inst_reg(saved_mask[0], OMAP4430_PRM_OCP_SOCKET_INST, 282 276 OMAP4_PRM_IRQENABLE_MPU_OFFSET); ··· 293 287 * deasserting WUCLKIN and waiting for WUCLKOUT to be deasserted. 294 288 * No return value. XXX Are the final two steps necessary? 
295 289 */ 296 - void omap44xx_prm_reconfigure_io_chain(void) 290 + static void omap44xx_prm_reconfigure_io_chain(void) 297 291 { 298 292 int i = 0; 299 293 s32 inst = omap4_prmst_get_prm_dev_inst(); ··· 658 652 659 653 static int omap4_check_vcvp(void) 660 654 { 661 - /* No VC/VP on dra7xx devices */ 662 - if (soc_is_dra7xx()) 663 - return 0; 655 + if (prm_features & PRM_HAS_VOLTAGE) 656 + return 1; 664 657 665 - return 1; 658 + return 0; 666 659 } 667 660 668 661 struct pwrdm_ops omap4_pwrdm_operations = { ··· 694 689 .was_any_context_lost_old = &omap44xx_prm_was_any_context_lost_old, 695 690 .clear_context_loss_flags_old = &omap44xx_prm_clear_context_loss_flags_old, 696 691 .late_init = &omap44xx_prm_late_init, 692 + .assert_hardreset = omap4_prminst_assert_hardreset, 693 + .deassert_hardreset = omap4_prminst_deassert_hardreset, 694 + .is_hardreset_asserted = omap4_prminst_is_hardreset_asserted, 695 + .reset_system = omap4_prminst_global_warm_sw_reset, 697 696 }; 698 697 699 698 int __init omap44xx_prm_init(void) 700 699 { 701 700 if (cpu_is_omap44xx() || soc_is_omap54xx() || soc_is_dra7xx()) 702 701 prm_features |= PRM_HAS_IO_WAKEUP; 702 + 703 + if (!soc_is_dra7xx()) 704 + prm_features |= PRM_HAS_VOLTAGE; 703 705 704 706 return prm_register(&omap44xx_prm_ll_data); 705 707 }
-19
arch/arm/mach-omap2/prm44xx_54xx.h
··· 26 26 /* Function prototypes */ 27 27 #ifndef __ASSEMBLER__ 28 28 29 - extern u32 omap4_prm_read_inst_reg(s16 inst, u16 idx); 30 - extern void omap4_prm_write_inst_reg(u32 val, s16 inst, u16 idx); 31 - extern u32 omap4_prm_rmw_inst_reg_bits(u32 mask, u32 bits, s16 inst, s16 idx); 32 - 33 29 /* OMAP4/OMAP5-specific VP functions */ 34 30 u32 omap4_prm_vp_check_txdone(u8 vp_id); 35 31 void omap4_prm_vp_clear_txdone(u8 vp_id); ··· 37 41 extern u32 omap4_prm_vcvp_read(u8 offset); 38 42 extern void omap4_prm_vcvp_write(u32 val, u8 offset); 39 43 extern u32 omap4_prm_vcvp_rmw(u32 mask, u32 bits, u8 offset); 40 - 41 - #if defined(CONFIG_ARCH_OMAP4) || defined(CONFIG_SOC_OMAP5) || \ 42 - defined(CONFIG_SOC_DRA7XX) || defined(CONFIG_SOC_AM43XX) 43 - void omap44xx_prm_reconfigure_io_chain(void); 44 - #else 45 - static inline void omap44xx_prm_reconfigure_io_chain(void) 46 - { 47 - } 48 - #endif 49 - 50 - /* PRM interrupt-related functions */ 51 - extern void omap44xx_prm_read_pending_irqs(unsigned long *events); 52 - extern void omap44xx_prm_ocp_barrier(void); 53 - extern void omap44xx_prm_save_and_clear_irqen(u32 *saved_mask); 54 - extern void omap44xx_prm_restore_irqen(u32 *saved_mask); 55 44 56 45 extern int __init omap44xx_prm_init(void); 57 46 extern u32 omap44xx_prm_get_reset_sources(void);
+99
arch/arm/mach-omap2/prm_common.c
··· 423 423 } 424 424 425 425 /** 426 + * omap_prm_assert_hardreset - assert hardreset for an IP block 427 + * @shift: register bit shift corresponding to the reset line 428 + * @part: PRM partition 429 + * @prm_mod: PRM submodule base or instance offset 430 + * @offset: register offset 431 + * 432 + * Asserts a hardware reset line for an IP block. 433 + */ 434 + int omap_prm_assert_hardreset(u8 shift, u8 part, s16 prm_mod, u16 offset) 435 + { 436 + if (!prm_ll_data->assert_hardreset) { 437 + WARN_ONCE(1, "prm: %s: no mapping function defined\n", 438 + __func__); 439 + return -EINVAL; 440 + } 441 + 442 + return prm_ll_data->assert_hardreset(shift, part, prm_mod, offset); 443 + } 444 + 445 + /** 446 + * omap_prm_deassert_hardreset - deassert hardreset for an IP block 447 + * @shift: register bit shift corresponding to the reset line 448 + * @st_shift: reset status bit shift corresponding to the reset line 449 + * @part: PRM partition 450 + * @prm_mod: PRM submodule base or instance offset 451 + * @offset: register offset 452 + * @st_offset: status register offset 453 + * 454 + * Deasserts a hardware reset line for an IP block. 455 + */ 456 + int omap_prm_deassert_hardreset(u8 shift, u8 st_shift, u8 part, s16 prm_mod, 457 + u16 offset, u16 st_offset) 458 + { 459 + if (!prm_ll_data->deassert_hardreset) { 460 + WARN_ONCE(1, "prm: %s: no mapping function defined\n", 461 + __func__); 462 + return -EINVAL; 463 + } 464 + 465 + return prm_ll_data->deassert_hardreset(shift, st_shift, part, prm_mod, 466 + offset, st_offset); 467 + } 468 + 469 + /** 470 + * omap_prm_is_hardreset_asserted - check the hardreset status for an IP block 471 + * @shift: register bit shift corresponding to the reset line 472 + * @part: PRM partition 473 + * @prm_mod: PRM submodule base or instance offset 474 + * @offset: register offset 475 + * 476 + * Checks if a hardware reset line for an IP block is enabled or not. 
477 + */ 478 + int omap_prm_is_hardreset_asserted(u8 shift, u8 part, s16 prm_mod, u16 offset) 479 + { 480 + if (!prm_ll_data->is_hardreset_asserted) { 481 + WARN_ONCE(1, "prm: %s: no mapping function defined\n", 482 + __func__); 483 + return -EINVAL; 484 + } 485 + 486 + return prm_ll_data->is_hardreset_asserted(shift, part, prm_mod, offset); 487 + } 488 + 489 + /** 490 + * omap_prm_reconfigure_io_chain - clear latches and reconfigure I/O chain 491 + * 492 + * Clear any previously-latched I/O wakeup events and ensure that the 493 + * I/O wakeup gates are aligned with the current mux settings. 494 + * Calls SoC specific I/O chain reconfigure function if available, 495 + * otherwise does nothing. 496 + */ 497 + void omap_prm_reconfigure_io_chain(void) 498 + { 499 + if (!prcm_irq_setup || !prcm_irq_setup->reconfigure_io_chain) 500 + return; 501 + 502 + prcm_irq_setup->reconfigure_io_chain(); 503 + } 504 + 505 + /** 506 + * omap_prm_reset_system - trigger global SW reset 507 + * 508 + * Triggers SoC specific global warm reset to reboot the device. 509 + */ 510 + void omap_prm_reset_system(void) 511 + { 512 + if (!prm_ll_data->reset_system) { 513 + WARN_ONCE(1, "prm: %s: no mapping function defined\n", 514 + __func__); 515 + return; 516 + } 517 + 518 + prm_ll_data->reset_system(); 519 + 520 + while (1) 521 + cpu_relax(); 522 + } 523 + 524 + /** 426 525 * prm_register - register per-SoC low-level data with the PRM 427 526 * @pld: low-level per-SoC OMAP PRM data & function pointers to register 428 527 *
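The new `omap_prm_*()` entry points in prm_common.c all share one shape: a thin generic wrapper that forwards to a per-SoC function pointer registered in `prm_ll_data`, warning once and failing with `-EINVAL` when no implementation is present. A minimal userspace sketch of that dispatch pattern (the struct, helper names, and one-shot warning flag here are illustrative stand-ins, not the kernel's own symbols):

```c
#include <stdio.h>

#define EINVAL_ 22  /* local stand-in for the kernel's -EINVAL */

/* Illustrative ops table, mirroring the prm_ll_data function pointers. */
struct prm_ops {
    int (*assert_hardreset)(unsigned shift);
};

static int warned;  /* crude stand-in for the kernel's WARN_ONCE() */

static int prm_assert_hardreset(const struct prm_ops *ops, unsigned shift)
{
    if (!ops->assert_hardreset) {
        if (!warned) {
            fprintf(stderr, "prm: no mapping function defined\n");
            warned = 1;
        }
        return -EINVAL_;
    }
    return ops->assert_hardreset(shift);
}

/* Toy per-SoC implementation, just echoing its argument back. */
static int toy_assert(unsigned shift)
{
    return (int)shift;
}

/* Exercise both paths: registered implementation and missing one. */
static int demo(void)
{
    struct prm_ops with_impl = { .assert_hardreset = toy_assert };
    struct prm_ops without = { .assert_hardreset = 0 };

    if (prm_assert_hardreset(&with_impl, 3) != 3)
        return -1;
    if (prm_assert_hardreset(&without, 3) != -EINVAL_)
        return -1;
    return 0;
}
```

The payoff of this indirection, visible in the diff, is that OMAP2/3 and OMAP4+ can plug in different register-access implementations while callers use one SoC-independent API.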
+7 -3
arch/arm/mach-omap2/prminst44xx.c
··· 148 148 /** 149 149 * omap4_prminst_deassert_hardreset - deassert a submodule hardreset line and 150 150 * wait 151 - * @rstctrl_reg: RM_RSTCTRL register address for this module 152 151 * @shift: register bit shift corresponding to the reset line to deassert 152 + * @st_shift: status bit offset, not used for OMAP4+ 153 + * @part: PRM partition 154 + * @inst: PRM instance offset 155 + * @rstctrl_offs: reset register offset 156 + * @st_offs: reset status register offset, not used for OMAP4+ 153 157 * 154 158 * Some IPs like dsp, ipu or iva contain processors that require an HW 155 159 * reset line to be asserted / deasserted in order to fully enable the ··· 164 160 * -EINVAL upon an argument error, -EEXIST if the submodule was already out 165 161 * of reset, or -EBUSY if the submodule did not exit reset promptly. 166 162 */ 167 - int omap4_prminst_deassert_hardreset(u8 shift, u8 part, s16 inst, 168 - u16 rstctrl_offs) 163 + int omap4_prminst_deassert_hardreset(u8 shift, u8 st_shift, u8 part, s16 inst, 164 + u16 rstctrl_offs, u16 st_offs) 169 165 { 170 166 int c; 171 167 u32 mask = 1 << shift;
+3 -2
arch/arm/mach-omap2/prminst44xx.h
··· 30 30 u16 rstctrl_offs); 31 31 extern int omap4_prminst_assert_hardreset(u8 shift, u8 part, s16 inst, 32 32 u16 rstctrl_offs); 33 - extern int omap4_prminst_deassert_hardreset(u8 shift, u8 part, s16 inst, 34 - u16 rstctrl_offs); 33 + int omap4_prminst_deassert_hardreset(u8 shift, u8 st_shift, u8 part, 34 + s16 inst, u16 rstctrl_offs, 35 + u16 rstst_offs); 35 36 36 37 extern void omap_prm_base_init(void); 37 38
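The `omap4_prminst_deassert_hardreset()` function whose prototype grows the `st_shift`/`rstst_offs` arguments above follows the usual OMAP hard-reset sequence: bail out with `-EEXIST` if the line is already deasserted, clear the stale status bit, release the line, then poll the status register in a bounded loop and report `-EBUSY` on timeout. A self-contained sketch of that sequence against a toy register pair (the "hardware" latching is simulated, and the error constants are local stand-ins):

```c
#define EEXIST_ 17
#define EBUSY_  16
#define MAX_MODULE_HARDRESET_WAIT 10000

/* Toy PRM instance: one reset-control and one reset-status register. */
struct toy_prm {
    unsigned rstctrl;   /* bit set = reset line asserted */
    unsigned rstst;     /* bit set = reset completed on that line */
};

/* Simulated hardware write: releasing a line latches completion status. */
static void toy_write_rstctrl(struct toy_prm *p, unsigned val)
{
    unsigned released = p->rstctrl & ~val;  /* bits going 1 -> 0 */

    p->rstctrl = val;
    p->rstst |= released;
}

static int toy_deassert_hardreset(struct toy_prm *p, unsigned shift)
{
    unsigned mask = 1u << shift;
    int c = 0;

    /* Check current state to avoid de-asserting the line twice. */
    if (!(p->rstctrl & mask))
        return -EEXIST_;

    p->rstst &= ~mask;                        /* clear stale status */
    toy_write_rstctrl(p, p->rstctrl & ~mask); /* de-assert the line */

    while (!(p->rstst & mask) && c < MAX_MODULE_HARDRESET_WAIT)
        c++;                                  /* bounded busy-wait */

    return (c == MAX_MODULE_HARDRESET_WAIT) ? -EBUSY_ : 0;
}

/* Release line 1 once (should succeed), then again (already released). */
static int toy_demo(void)
{
    struct toy_prm p = { .rstctrl = 1u << 1, .rstst = 0 };

    if (toy_deassert_hardreset(&p, 1) != 0)
        return -1;
    if (toy_deassert_hardreset(&p, 1) != -EEXIST_)
        return -1;
    return 0;
}
```

In the real driver the poll uses `omap_test_timeout()` against MMIO registers rather than a plain counter, but the control flow is the same.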
+11
arch/arm/mach-pxa/Kconfig
··· 4 4 5 5 comment "Intel/Marvell Dev Platforms (sorted by hardware release time)" 6 6 7 + config MACH_PXA27X_DT 8 + bool "Support PXA27x platforms from device tree" 9 + select CPU_PXA27x 10 + select POWER_SUPPLY 11 + select PXA27x 12 + select USE_OF 13 + help 14 + Include support for Marvell PXA27x based platforms using 15 + the device tree. There is no need to select any other 16 + machine type when MACH_PXA27X_DT is enabled. 17 + 7 18 config MACH_PXA3XX_DT 8 19 bool "Support PXA3xx platforms from device tree" 9 20 select CPU_PXA300
+1
arch/arm/mach-pxa/Makefile
··· 21 21 22 22 # Device Tree support 23 23 obj-$(CONFIG_MACH_PXA3XX_DT) += pxa-dt.o 24 + obj-$(CONFIG_MACH_PXA27X_DT) += pxa-dt.o 24 25 25 26 # Intel/Marvell Dev Platforms 26 27 obj-$(CONFIG_ARCH_LUBBOCK) += lubbock.o
+2 -2
arch/arm/mach-pxa/em-x270.c
··· 378 378 379 379 err = gpio_request(GPIO11_NAND_CS, "NAND CS"); 380 380 if (err) { 381 - pr_warning("EM-X270: failed to request NAND CS gpio\n"); 381 + pr_warn("EM-X270: failed to request NAND CS gpio\n"); 382 382 return; 383 383 } 384 384 ··· 386 386 387 387 err = gpio_request(nand_rb, "NAND R/B"); 388 388 if (err) { 389 - pr_warning("EM-X270: failed to request NAND R/B gpio\n"); 389 + pr_warn("EM-X270: failed to request NAND R/B gpio\n"); 390 390 gpio_free(GPIO11_NAND_CS); 391 391 return; 392 392 }
+44 -25
arch/arm/mach-pxa/generic.h
··· 13 13 14 14 struct irq_data; 15 15 16 - extern void pxa_timer_init(void); 17 - 18 - extern void __init pxa_map_io(void); 19 - 20 16 extern unsigned int get_clk_frequency_khz(int info); 17 + extern void __init pxa_dt_irq_init(int (*fn)(struct irq_data *, 18 + unsigned int)); 19 + extern void __init pxa_map_io(void); 20 + extern void pxa_timer_init(void); 21 21 22 22 #define SET_BANK(__nr,__start,__size) \ 23 23 mi->bank[__nr].start = (__start), \ ··· 25 25 26 26 #define ARRAY_AND_SIZE(x) (x), ARRAY_SIZE(x) 27 27 28 - #ifdef CONFIG_PXA25x 29 - extern unsigned pxa25x_get_clk_frequency_khz(int); 30 - #else 31 - #define pxa25x_get_clk_frequency_khz(x) (0) 32 - #endif 28 + #define pxa25x_handle_irq icip_handle_irq 29 + extern void __init pxa25x_init_irq(void); 30 + extern void __init pxa25x_map_io(void); 31 + extern void __init pxa26x_init_irq(void); 33 32 34 - #ifdef CONFIG_PXA27x 35 - extern unsigned pxa27x_get_clk_frequency_khz(int); 36 - #else 37 - #define pxa27x_get_clk_frequency_khz(x) (0) 38 - #endif 33 + #define pxa27x_handle_irq ichp_handle_irq 34 + extern void __init pxa27x_dt_init_irq(void); 35 + extern unsigned pxa27x_get_clk_frequency_khz(int); 36 + extern void __init pxa27x_init_irq(void); 37 + extern void __init pxa27x_map_io(void); 39 38 40 - #if defined(CONFIG_PXA25x) || defined(CONFIG_PXA27x) 41 - extern void pxa2xx_clear_reset_status(unsigned int); 42 - #else 43 - static inline void pxa2xx_clear_reset_status(unsigned int mask) {} 44 - #endif 45 - 46 - #ifdef CONFIG_PXA3xx 47 - extern unsigned pxa3xx_get_clk_frequency_khz(int); 48 - #else 49 - #define pxa3xx_get_clk_frequency_khz(x) (0) 50 - #endif 39 + #define pxa3xx_handle_irq ichp_handle_irq 40 + extern void __init pxa3xx_dt_init_irq(void); 41 + extern void __init pxa3xx_init_irq(void); 42 + extern void __init pxa3xx_map_io(void); 51 43 52 44 extern struct syscore_ops pxa_irq_syscore_ops; 53 45 extern struct syscore_ops pxa2xx_mfp_syscore_ops; ··· 51 59 void __init pxa_set_hwuart_info(void 
*info); 52 60 53 61 void pxa_restart(enum reboot_mode, const char *); 62 + 63 + #if defined(CONFIG_PXA25x) || defined(CONFIG_PXA27x) 64 + extern void pxa2xx_clear_reset_status(unsigned int); 65 + #else 66 + static inline void pxa2xx_clear_reset_status(unsigned int mask) {} 67 + #endif 68 + 69 + /* 70 + * Once fully converted to the clock framework, all these functions should be 71 + * removed, and replaced with a clk_get(NULL, "core"). 72 + */ 73 + #ifdef CONFIG_PXA25x 74 + extern unsigned pxa25x_get_clk_frequency_khz(int); 75 + #else 76 + #define pxa25x_get_clk_frequency_khz(x) (0) 77 + #endif 78 + 79 + #ifdef CONFIG_PXA27x 80 + #else 81 + #define pxa27x_get_clk_frequency_khz(x) (0) 82 + #endif 83 + 84 + #ifdef CONFIG_PXA3xx 85 + extern unsigned pxa3xx_get_clk_frequency_khz(int); 86 + #else 87 + #define pxa3xx_get_clk_frequency_khz(x) (0) 88 + #endif
+1 -2
arch/arm/mach-pxa/gumstix.c
··· 140 140 int timeout = 500; 141 141 142 142 if (!(OSCC & OSCC_OOK)) 143 - pr_warning("32kHz clock was not on. Bootloader may need to " 144 - "be updated\n"); 143 + pr_warn("32kHz clock was not on. Bootloader may need to be updated\n"); 145 144 else 146 145 return; 147 146
-8
arch/arm/mach-pxa/include/mach/pxa25x.h
··· 6 6 #include <mach/mfp-pxa25x.h> 7 7 #include <mach/irqs.h> 8 8 9 - extern void __init pxa25x_map_io(void); 10 - extern void __init pxa25x_init_irq(void); 11 - #ifdef CONFIG_CPU_PXA26x 12 - extern void __init pxa26x_init_irq(void); 13 - #endif 14 - 15 - #define pxa25x_handle_irq icip_handle_irq 16 - 17 9 #endif /* __MACH_PXA25x_H */
-4
arch/arm/mach-pxa/include/mach/pxa27x.h
··· 19 19 #define ARB_CORE_PARK (1<<24) /* Be parked with core when idle */ 20 20 #define ARB_LOCK_FLAG (1<<23) /* Only Locking masters gain access to the bus */ 21 21 22 - extern void __init pxa27x_map_io(void); 23 - extern void __init pxa27x_init_irq(void); 24 22 extern int __init pxa27x_set_pwrmode(unsigned int mode); 25 23 extern void pxa27x_cpu_pm_enter(suspend_state_t state); 26 - 27 - #define pxa27x_handle_irq ichp_handle_irq 28 24 29 25 #endif /* __MACH_PXA27x_H */
-5
arch/arm/mach-pxa/include/mach/pxa3xx.h
··· 5 5 #include <mach/pxa3xx-regs.h> 6 6 #include <mach/irqs.h> 7 7 8 - extern void __init pxa3xx_map_io(void); 9 - extern void __init pxa3xx_init_irq(void); 10 - 11 - #define pxa3xx_handle_irq ichp_handle_irq 12 - 13 8 #endif /* __MACH_PXA3XX_H */
+5 -7
arch/arm/mach-pxa/mfp-pxa2xx.c
··· 93 93 break; 94 94 default: 95 95 /* warning and fall through, treat as MFP_LPM_DEFAULT */ 96 - pr_warning("%s: GPIO%d: unsupported low power mode\n", 97 - __func__, gpio); 96 + pr_warn("%s: GPIO%d: unsupported low power mode\n", 97 + __func__, gpio); 98 98 break; 99 99 } 100 100 ··· 107 107 * configurations of those pins not able to wakeup 108 108 */ 109 109 if ((c & MFP_LPM_CAN_WAKEUP) && !gpio_desc[gpio].can_wakeup) { 110 - pr_warning("%s: GPIO%d unable to wakeup\n", 111 - __func__, gpio); 110 + pr_warn("%s: GPIO%d unable to wakeup\n", __func__, gpio); 112 111 return -EINVAL; 113 112 } 114 113 115 114 if ((c & MFP_LPM_CAN_WAKEUP) && is_out) { 116 - pr_warning("%s: output GPIO%d unable to wakeup\n", 117 - __func__, gpio); 115 + pr_warn("%s: output GPIO%d unable to wakeup\n", __func__, gpio); 118 116 return -EINVAL; 119 117 } 120 118 ··· 124 126 int gpio = mfp_to_gpio(mfp); 125 127 126 128 if ((mfp > MFP_PIN_GPIO127) || !gpio_desc[gpio].valid) { 127 - pr_warning("%s: GPIO%d is invalid pin\n", __func__, gpio); 129 + pr_warn("%s: GPIO%d is invalid pin\n", __func__, gpio); 128 130 return -1; 129 131 } 130 132
+1 -1
arch/arm/mach-pxa/poodle.c
··· 446 446 447 447 ret = platform_add_devices(devices, ARRAY_SIZE(devices)); 448 448 if (ret) 449 - pr_warning("poodle: Unable to register LoCoMo device\n"); 449 + pr_warn("poodle: Unable to register LoCoMo device\n"); 450 450 451 451 pxa_set_fb_info(&poodle_locomo_device.dev, &poodle_fb_info); 452 452 pxa_set_udc_info(&udc_info);
+15 -3
arch/arm/mach-pxa/pxa-dt.c
··· 15 15 #include <asm/mach/arch.h> 16 16 #include <asm/mach/time.h> 17 17 #include <mach/irqs.h> 18 - #include <mach/pxa3xx.h> 19 18 20 19 #include "generic.h" 21 20 22 21 #ifdef CONFIG_PXA3xx 23 - extern void __init pxa3xx_dt_init_irq(void); 24 - 25 22 static const struct of_dev_auxdata pxa3xx_auxdata_lookup[] __initconst = { 26 23 OF_DEV_AUXDATA("mrvl,pxa-uart", 0x40100000, "pxa2xx-uart.0", NULL), 27 24 OF_DEV_AUXDATA("mrvl,pxa-uart", 0x40200000, "pxa2xx-uart.1", NULL), ··· 56 59 .restart = pxa_restart, 57 60 .init_machine = pxa3xx_dt_init, 58 61 .dt_compat = pxa3xx_dt_board_compat, 62 + MACHINE_END 63 + #endif 64 + 65 + #ifdef CONFIG_PXA27x 66 + static const char * const pxa27x_dt_board_compat[] __initconst = { 67 + "marvell,pxa270", 68 + NULL, 69 + }; 70 + 71 + DT_MACHINE_START(PXA27X_DT, "Marvell PXA2xx (Device Tree Support)") 72 + .map_io = pxa27x_map_io, 73 + .init_irq = pxa27x_dt_init_irq, 74 + .handle_irq = pxa27x_handle_irq, 75 + .restart = pxa_restart, 76 + .dt_compat = pxa27x_dt_board_compat, 59 77 MACHINE_END 60 78 #endif
+6
arch/arm/mach-pxa/pxa27x.c
··· 398 398 pxa_init_irq(34, pxa27x_set_wake); 399 399 } 400 400 401 + void __init pxa27x_dt_init_irq(void) 402 + { 403 + if (IS_ENABLED(CONFIG_OF)) 404 + pxa_dt_irq_init(pxa27x_set_wake); 405 + } 406 + 401 407 static struct map_desc pxa27x_io_desc[] __initdata = { 402 408 { /* Mem Ctl */ 403 409 .virtual = (unsigned long)SMEMC_VIRT,
+3 -3
arch/arm/mach-pxa/pxa3xx-ulpi.c
··· 74 74 cpu_relax(); 75 75 } 76 76 77 - pr_warning("%s: ULPI access timed out!\n", __func__); 77 + pr_warn("%s: ULPI access timed out!\n", __func__); 78 78 79 79 return -ETIMEDOUT; 80 80 } ··· 84 84 int err; 85 85 86 86 if (pxa310_ulpi_get_phymode() != SYNCH) { 87 - pr_warning("%s: PHY is not in SYNCH mode!\n", __func__); 87 + pr_warn("%s: PHY is not in SYNCH mode!\n", __func__); 88 88 return -EBUSY; 89 89 } 90 90 ··· 101 101 static int pxa310_ulpi_write(struct usb_phy *otg, u32 val, u32 reg) 102 102 { 103 103 if (pxa310_ulpi_get_phymode() != SYNCH) { 104 - pr_warning("%s: PHY is not in SYNCH mode!\n", __func__); 104 + pr_warn("%s: PHY is not in SYNCH mode!\n", __func__); 105 105 return -EBUSY; 106 106 } 107 107
+13 -13
arch/arm/mach-pxa/raumfeld.c
··· 521 521 "W1 external pullup enable"); 522 522 523 523 if (ret < 0) 524 - pr_warning("Unable to request GPIO_W1_PULLUP_ENABLE\n"); 524 + pr_warn("Unable to request GPIO_W1_PULLUP_ENABLE\n"); 525 525 else 526 526 gpio_direction_output(GPIO_W1_PULLUP_ENABLE, 0); 527 527 ··· 600 600 601 601 ret = gpio_request(GPIO_TFT_VA_EN, "display VA enable"); 602 602 if (ret < 0) 603 - pr_warning("Unable to request GPIO_TFT_VA_EN\n"); 603 + pr_warn("Unable to request GPIO_TFT_VA_EN\n"); 604 604 else 605 605 gpio_direction_output(GPIO_TFT_VA_EN, 1); 606 606 ··· 608 608 609 609 ret = gpio_request(GPIO_DISPLAY_ENABLE, "display enable"); 610 610 if (ret < 0) 611 - pr_warning("Unable to request GPIO_DISPLAY_ENABLE\n"); 611 + pr_warn("Unable to request GPIO_DISPLAY_ENABLE\n"); 612 612 else 613 613 gpio_direction_output(GPIO_DISPLAY_ENABLE, 1); 614 614 ··· 814 814 /* Set PEN2 high to enable maximum charge current */ 815 815 ret = gpio_request(GPIO_CHRG_PEN2, "CHRG_PEN2"); 816 816 if (ret < 0) 817 - pr_warning("Unable to request GPIO_CHRG_PEN2\n"); 817 + pr_warn("Unable to request GPIO_CHRG_PEN2\n"); 818 818 else 819 819 gpio_direction_output(GPIO_CHRG_PEN2, 1); 820 820 821 821 ret = gpio_request(GPIO_CHARGE_DC_OK, "CABLE_DC_OK"); 822 822 if (ret < 0) 823 - pr_warning("Unable to request GPIO_CHARGE_DC_OK\n"); 823 + pr_warn("Unable to request GPIO_CHARGE_DC_OK\n"); 824 824 825 825 ret = gpio_request(GPIO_CHARGE_USB_SUSP, "CHARGE_USB_SUSP"); 826 826 if (ret < 0) 827 - pr_warning("Unable to request GPIO_CHARGE_USB_SUSP\n"); 827 + pr_warn("Unable to request GPIO_CHARGE_USB_SUSP\n"); 828 828 else 829 829 gpio_direction_output(GPIO_CHARGE_USB_SUSP, 0); 830 830 ··· 976 976 977 977 ret = gpio_request(GPIO_CODEC_RESET, "cs4270 reset"); 978 978 if (ret < 0) 979 - pr_warning("unable to request GPIO_CODEC_RESET\n"); 979 + pr_warn("unable to request GPIO_CODEC_RESET\n"); 980 980 else 981 981 gpio_direction_output(GPIO_CODEC_RESET, 1); 982 982 983 983 ret = gpio_request(GPIO_SPDIF_RESET, "ak4104 
s/pdif reset"); 984 984 if (ret < 0) 985 - pr_warning("unable to request GPIO_SPDIF_RESET\n"); 985 + pr_warn("unable to request GPIO_SPDIF_RESET\n"); 986 986 else 987 987 gpio_direction_output(GPIO_SPDIF_RESET, 1); 988 988 989 989 ret = gpio_request(GPIO_MCLK_RESET, "MCLK reset"); 990 990 if (ret < 0) 991 - pr_warning("unable to request GPIO_MCLK_RESET\n"); 991 + pr_warn("unable to request GPIO_MCLK_RESET\n"); 992 992 else 993 993 gpio_direction_output(GPIO_MCLK_RESET, 1); 994 994 ··· 1019 1019 1020 1020 ret = gpio_request(GPIO_W2W_RESET, "Wi2Wi reset"); 1021 1021 if (ret < 0) 1022 - pr_warning("Unable to request GPIO_W2W_RESET\n"); 1022 + pr_warn("Unable to request GPIO_W2W_RESET\n"); 1023 1023 else 1024 1024 gpio_direction_output(GPIO_W2W_RESET, 0); 1025 1025 1026 1026 ret = gpio_request(GPIO_W2W_PDN, "Wi2Wi powerup"); 1027 1027 if (ret < 0) 1028 - pr_warning("Unable to request GPIO_W2W_PDN\n"); 1028 + pr_warn("Unable to request GPIO_W2W_PDN\n"); 1029 1029 else 1030 1030 gpio_direction_output(GPIO_W2W_PDN, 0); 1031 1031 1032 1032 /* this can be used to switch off the device */ 1033 1033 ret = gpio_request(GPIO_SHUTDOWN_SUPPLY, "supply shutdown"); 1034 1034 if (ret < 0) 1035 - pr_warning("Unable to request GPIO_SHUTDOWN_SUPPLY\n"); 1035 + pr_warn("Unable to request GPIO_SHUTDOWN_SUPPLY\n"); 1036 1036 else 1037 1037 gpio_direction_output(GPIO_SHUTDOWN_SUPPLY, 0); 1038 1038 ··· 1051 1051 1052 1052 ret = gpio_request(GPIO_SHUTDOWN_BATT, "battery shutdown"); 1053 1053 if (ret < 0) 1054 - pr_warning("Unable to request GPIO_SHUTDOWN_BATT\n"); 1054 + pr_warn("Unable to request GPIO_SHUTDOWN_BATT\n"); 1055 1055 else 1056 1056 gpio_direction_output(GPIO_SHUTDOWN_BATT, 0); 1057 1057
+7 -34
arch/arm/mach-pxa/tosa.c
··· 30 30 #include <linux/gpio_keys.h> 31 31 #include <linux/input.h> 32 32 #include <linux/gpio.h> 33 - #include <linux/pda_power.h> 33 + #include <linux/power/gpio-charger.h> 34 34 #include <linux/spi/spi.h> 35 35 #include <linux/spi/pxa2xx_spi.h> 36 36 #include <linux/input/matrix_keypad.h> ··· 361 361 /* 362 362 * Tosa AC IN 363 363 */ 364 - static int tosa_power_init(struct device *dev) 365 - { 366 - int ret = gpio_request(TOSA_GPIO_AC_IN, "ac in"); 367 - if (ret) 368 - goto err_gpio_req; 369 - 370 - ret = gpio_direction_input(TOSA_GPIO_AC_IN); 371 - if (ret) 372 - goto err_gpio_in; 373 - 374 - return 0; 375 - 376 - err_gpio_in: 377 - gpio_free(TOSA_GPIO_AC_IN); 378 - err_gpio_req: 379 - return ret; 380 - } 381 - 382 - static void tosa_power_exit(struct device *dev) 383 - { 384 - gpio_free(TOSA_GPIO_AC_IN); 385 - } 386 - 387 - static int tosa_power_ac_online(void) 388 - { 389 - return gpio_get_value(TOSA_GPIO_AC_IN) == 0; 390 - } 391 - 392 364 static char *tosa_ac_supplied_to[] = { 393 365 "main-battery", 394 366 "backup-battery", 395 367 "jacket-battery", 396 368 }; 397 369 398 - static struct pda_power_pdata tosa_power_data = { 399 - .init = tosa_power_init, 400 - .is_ac_online = tosa_power_ac_online, 401 - .exit = tosa_power_exit, 370 + static struct gpio_charger_platform_data tosa_power_data = { 371 + .name = "charger", 372 + .type = POWER_SUPPLY_TYPE_MAINS, 373 + .gpio = TOSA_GPIO_AC_IN, 374 + .gpio_active_low = 1, 402 375 .supplied_to = tosa_ac_supplied_to, 403 376 .num_supplicants = ARRAY_SIZE(tosa_ac_supplied_to), 404 377 }; ··· 388 415 }; 389 416 390 417 static struct platform_device tosa_power_device = { 391 - .name = "pda-power", 418 + .name = "gpio-charger", 392 419 .id = -1, 393 420 .dev.platform_data = &tosa_power_data, 394 421 .resource = tosa_power_resource,
+4 -1
arch/arm/mach-rockchip/headsmp.S
··· 16 16 #include <linux/init.h> 17 17 18 18 ENTRY(rockchip_secondary_startup) 19 - bl v7_invalidate_l1 19 + mrc p15, 0, r0, c0, c0, 0 @ read main ID register 20 + ldr r1, =0x00000c09 @ Cortex-A9 primary part number 21 + teq r0, r1 22 + beq v7_invalidate_l1 20 23 b secondary_startup 21 24 ENDPROC(rockchip_secondary_startup) 22 25
+184 -47
arch/arm/mach-rockchip/platsmp.c
··· 19 19 #include <linux/io.h> 20 20 #include <linux/of.h> 21 21 #include <linux/of_address.h> 22 + #include <linux/regmap.h> 23 + #include <linux/mfd/syscon.h> 22 24 25 + #include <linux/reset.h> 26 + #include <linux/cpu.h> 23 27 #include <asm/cacheflush.h> 24 28 #include <asm/cp15.h> 25 29 #include <asm/smp_scu.h> ··· 41 37 42 38 #define PMU_PWRDN_SCU 4 43 39 44 - static void __iomem *pmu_base_addr; 40 + static struct regmap *pmu; 45 41 46 - static inline bool pmu_power_domain_is_on(int pd) 42 + static int pmu_power_domain_is_on(int pd) 47 43 { 48 - return !(readl_relaxed(pmu_base_addr + PMU_PWRDN_ST) & BIT(pd)); 44 + u32 val; 45 + int ret; 46 + 47 + ret = regmap_read(pmu, PMU_PWRDN_ST, &val); 48 + if (ret < 0) 49 + return ret; 50 + 51 + return !(val & BIT(pd)); 49 52 } 50 53 51 - static void pmu_set_power_domain(int pd, bool on) 54 + struct reset_control *rockchip_get_core_reset(int cpu) 52 55 { 53 - u32 val = readl_relaxed(pmu_base_addr + PMU_PWRDN_CON); 54 - if (on) 55 - val &= ~BIT(pd); 56 - else 57 - val |= BIT(pd); 58 - writel(val, pmu_base_addr + PMU_PWRDN_CON); 56 + struct device *dev = get_cpu_device(cpu); 57 + struct device_node *np; 59 58 60 - while (pmu_power_domain_is_on(pd) != on) { } 59 + /* The cpu device is only available after the initial core bringup */ 60 + if (dev) 61 + np = dev->of_node; 62 + else 63 + np = of_get_cpu_node(cpu, 0); 64 + 65 + return of_reset_control_get(np, NULL); 66 + } 67 + 68 + static int pmu_set_power_domain(int pd, bool on) 69 + { 70 + u32 val = (on) ? 0 : BIT(pd); 71 + int ret; 72 + 73 + /* 74 + * We need to soft reset the cpu when we turn off the cpu power domain, 75 + * or else the active processors might be stalled when the individual 76 + * processor is powered down. 
77 + */ 78 + if (read_cpuid_part() != ARM_CPU_PART_CORTEX_A9) { 79 + struct reset_control *rstc = rockchip_get_core_reset(pd); 80 + 81 + if (IS_ERR(rstc)) { 82 + pr_err("%s: could not get reset control for core %d\n", 83 + __func__, pd); 84 + return PTR_ERR(rstc); 85 + } 86 + 87 + if (on) 88 + reset_control_deassert(rstc); 89 + else 90 + reset_control_assert(rstc); 91 + 92 + reset_control_put(rstc); 93 + } 94 + 95 + ret = regmap_update_bits(pmu, PMU_PWRDN_CON, BIT(pd), val); 96 + if (ret < 0) { 97 + pr_err("%s: could not update power domain\n", __func__); 98 + return ret; 99 + } 100 + 101 + ret = -1; 102 + while (ret != on) { 103 + ret = pmu_power_domain_is_on(pd); 104 + if (ret < 0) { 105 + pr_err("%s: could not read power domain state\n", 106 + __func__); 107 + return ret; 108 + } 109 + } 110 + 111 + return 0; 61 112 } 62 113 63 114 /* ··· 122 63 static int __cpuinit rockchip_boot_secondary(unsigned int cpu, 123 64 struct task_struct *idle) 124 65 { 125 - if (!sram_base_addr || !pmu_base_addr) { 66 + int ret; 67 + 68 + if (!sram_base_addr || !pmu) { 126 69 pr_err("%s: sram or pmu missing for cpu boot\n", __func__); 127 70 return -ENXIO; 128 71 } ··· 136 75 } 137 76 138 77 /* start the core */ 139 - pmu_set_power_domain(0 + cpu, true); 78 + ret = pmu_set_power_domain(0 + cpu, true); 79 + if (ret < 0) 80 + return ret; 81 + 82 + if (read_cpuid_part() != ARM_CPU_PART_CORTEX_A9) { 83 + /* We communicate with the bootrom to activate the cpus other 84 + * than cpu0; after a blob of initialization code they will 85 + * stay in wfe state, and once they are activated they will 86 + * check the mailbox: 87 + * sram_base_addr + 4: 0xdeadbeaf 88 + * sram_base_addr + 8: start address for pc 89 + */ 90 + udelay(10); 91 + writel(virt_to_phys(rockchip_secondary_startup), 92 + sram_base_addr + 8); 93 + writel(0xDEADBEAF, sram_base_addr + 4); 94 + dsb_sev(); 95 + } 140 96 141 97 return 0; 142 98 } ··· 188 110 return -EINVAL; 189 111 } 190 112 191 - sram_base_addr = of_iomap(node, 0); 192 -
193 113 /* set the boot function for the sram code */ 194 114 rockchip_boot_fn = virt_to_phys(rockchip_secondary_startup); 195 115 ··· 201 125 return 0; 202 126 } 203 127 128 + static struct regmap_config rockchip_pmu_regmap_config = { 129 + .reg_bits = 32, 130 + .val_bits = 32, 131 + .reg_stride = 4, 132 + }; 133 + 134 + static int __init rockchip_smp_prepare_pmu(void) 135 + { 136 + struct device_node *node; 137 + void __iomem *pmu_base; 138 + 139 + /* 140 + * This function is only called via smp_ops->smp_prepare_cpu(). 141 + * That only happens if a "/cpus" device tree node exists 142 + * and has an "enable-method" property that selects the SMP 143 + * operations defined herein. 144 + */ 145 + node = of_find_node_by_path("/cpus"); 146 + 147 + pmu = syscon_regmap_lookup_by_phandle(node, "rockchip,pmu"); 148 + of_node_put(node); 149 + if (!IS_ERR(pmu)) 150 + return 0; 151 + 152 + pmu = syscon_regmap_lookup_by_compatible("rockchip,rk3066-pmu"); 153 + if (!IS_ERR(pmu)) 154 + return 0; 155 + 156 + /* fallback, create our own regmap for the pmu area */ 157 + pmu = NULL; 158 + node = of_find_compatible_node(NULL, NULL, "rockchip,rk3066-pmu"); 159 + if (!node) { 160 + pr_err("%s: could not find pmu dt node\n", __func__); 161 + return -ENODEV; 162 + } 163 + 164 + pmu_base = of_iomap(node, 0); 165 + if (!pmu_base) { 166 + pr_err("%s: could not map pmu registers\n", __func__); 167 + return -ENOMEM; 168 + } 169 + 170 + pmu = regmap_init_mmio(NULL, pmu_base, &rockchip_pmu_regmap_config); 171 + if (IS_ERR(pmu)) { 172 + int ret = PTR_ERR(pmu); 173 + 174 + iounmap(pmu_base); 175 + pmu = NULL; 176 + pr_err("%s: regmap init failed\n", __func__); 177 + return ret; 178 + } 179 + 180 + return 0; 181 + } 182 + 204 183 static void __init rockchip_smp_prepare_cpus(unsigned int max_cpus) 205 184 { 206 185 struct device_node *node; 207 186 unsigned int i; 208 - 209 - node = of_find_compatible_node(NULL, NULL, "arm,cortex-a9-scu"); 210 - if (!node) { 211 - pr_err("%s: missing scu\n", 
__func__); 212 - return; 213 - } 214 - 215 - scu_base_addr = of_iomap(node, 0); 216 - if (!scu_base_addr) { 217 - pr_err("%s: could not map scu registers\n", __func__); 218 - return; 219 - } 220 187 221 188 node = of_find_compatible_node(NULL, NULL, "rockchip,rk3066-smp-sram"); 222 189 if (!node) { ··· 267 148 return; 268 149 } 269 150 270 - if (rockchip_smp_prepare_sram(node)) 271 - return; 272 - 273 - node = of_find_compatible_node(NULL, NULL, "rockchip,rk3066-pmu"); 274 - if (!node) { 275 - pr_err("%s: could not find pmu dt node\n", __func__); 151 + sram_base_addr = of_iomap(node, 0); 152 + if (!sram_base_addr) { 153 + pr_err("%s: could not map sram registers\n", __func__); 276 154 return; 277 155 } 278 156 279 - pmu_base_addr = of_iomap(node, 0); 280 - if (!pmu_base_addr) { 281 - pr_err("%s: could not map pmu registers\n", __func__); 157 + if (rockchip_smp_prepare_pmu()) 282 158 return; 159 + 160 + if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A9) { 161 + if (rockchip_smp_prepare_sram(node)) 162 + return; 163 + 164 + /* enable the SCU power domain */ 165 + pmu_set_power_domain(PMU_PWRDN_SCU, true); 166 + 167 + node = of_find_compatible_node(NULL, NULL, "arm,cortex-a9-scu"); 168 + if (!node) { 169 + pr_err("%s: missing scu\n", __func__); 170 + return; 171 + } 172 + 173 + scu_base_addr = of_iomap(node, 0); 174 + if (!scu_base_addr) { 175 + pr_err("%s: could not map scu registers\n", __func__); 176 + return; 177 + } 178 + 179 + /* 180 + * While the number of cpus is gathered from dt, also get the 181 + * number of cores from the scu to verify this value when 182 + * booting the cores. 
183 + */ 184 + ncores = scu_get_core_count(scu_base_addr); 185 + pr_err("%s: ncores %d\n", __func__, ncores); 186 + 187 + scu_enable(scu_base_addr); 188 + } else { 189 + unsigned int l2ctlr; 190 + 191 + asm ("mrc p15, 1, %0, c9, c0, 2\n" : "=r" (l2ctlr)); 192 + ncores = ((l2ctlr >> 24) & 0x3) + 1; 283 193 } 284 - 285 - /* enable the SCU power domain */ 286 - pmu_set_power_domain(PMU_PWRDN_SCU, true); 287 - 288 - /* 289 - * While the number of cpus is gathered from dt, also get the number 290 - * of cores from the scu to verify this value when booting the cores. 291 - */ 292 - ncores = scu_get_core_count(scu_base_addr); 293 - 294 - scu_enable(scu_base_addr); 295 194 296 195 /* Make sure that all cores except the first are really off */ 297 196 for (i = 1; i < ncores; i++)
+7
arch/arm/mach-rockchip/rockchip.c
··· 24 24 #include <asm/hardware/cache-l2x0.h> 25 25 #include "core.h" 26 26 27 + static void __init rockchip_dt_init(void) 28 + { 29 + of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL); 30 + platform_device_register_simple("cpufreq-dt", 0, NULL, 0); 31 + } 32 + 27 33 static const char * const rockchip_board_dt_compat[] = { 28 34 "rockchip,rk2928", 29 35 "rockchip,rk3066a", ··· 43 37 .l2c_aux_val = 0, 44 38 .l2c_aux_mask = ~0, 45 39 .dt_compat = rockchip_board_dt_compat, 40 + .init_machine = rockchip_dt_init, 46 41 MACHINE_END
+8 -2
arch/arm/mach-sa1100/include/mach/debug-macro.S → arch/arm/include/debug/sa1100.S
··· 1 - /* arch/arm/mach-sa1100/include/mach/debug-macro.S 1 + /* arch/arm/include/debug/sa1100.S 2 2 * 3 3 * Debugging macro include header 4 4 * ··· 10 10 * published by the Free Software Foundation. 11 11 * 12 12 */ 13 - #include <mach/hardware.h> 13 + 14 + #define UTCR3 0x0c 15 + #define UTDR 0x14 16 + #define UTSR1 0x20 17 + #define UTCR3_TXE 0x00000002 /* Transmit Enable */ 18 + #define UTSR1_TBY 0x00000001 /* Transmitter BusY (read) */ 19 + #define UTSR1_TNF 0x00000004 /* Transmit FIFO Not Full (read) */ 14 20 15 21 .macro addruart, rp, rv, tmp 16 22 mrc p15, 0, \rp, c1, c0
+2
arch/arm/mach-shmobile/Kconfig
··· 1 1 config ARCH_SHMOBILE 2 2 bool 3 + select ZONE_DMA if ARM_LPAE 3 4 4 5 config PM_RCAR 5 6 bool ··· 19 18 select PM_RCAR if PM || SMP 20 19 select RENESAS_IRQC 21 20 select SYS_SUPPORTS_SH_CMT 21 + select PCI_DOMAINS if PCI 22 22 23 23 config ARCH_RMOBILE 24 24 bool
+1
arch/arm/mach-shmobile/Makefile
··· 35 35 36 36 # Shared SoC family objects 37 37 obj-$(CONFIG_ARCH_RCAR_GEN2) += setup-rcar-gen2.o platsmp-apmu.o $(cpu-y) 38 + CFLAGS_setup-rcar-gen2.o += -march=armv7-a 38 39 39 40 # SMP objects 40 41 smp-y := $(cpu-y)
+11 -1
arch/arm/mach-shmobile/board-armadillo800eva.c
··· 1229 1229 static struct pm_domain_device domain_devices[] __initdata = { 1230 1230 { "A4LC", &lcdc0_device }, 1231 1231 { "A4LC", &hdmi_lcdc_device }, 1232 + { "A4MP", &hdmi_device }, 1233 + { "A4MP", &fsi_device }, 1234 + { "A4R", &ceu0_device }, 1235 + { "A4S", &sh_eth_device }, 1236 + { "A3SP", &pwm_device }, 1237 + { "A3SP", &sdhi0_device }, 1238 + { "A3SP", &sh_mmcif_device }, 1232 1239 }; 1233 - struct platform_device *usb = NULL; 1240 + struct platform_device *usb = NULL, *sdhi1 = NULL; 1234 1241 1235 1242 regulator_register_always_on(0, "fixed-3.3V", fixed3v3_power_consumers, 1236 1243 ARRAY_SIZE(fixed3v3_power_consumers), 3300000); ··· 1306 1299 1307 1300 platform_device_register(&vcc_sdhi1); 1308 1301 platform_device_register(&sdhi1_device); 1302 + sdhi1 = &sdhi1_device; 1309 1303 } 1310 1304 1311 1305 ··· 1327 1319 ARRAY_SIZE(domain_devices)); 1328 1320 if (usb) 1329 1321 rmobile_add_device_to_domain("A3SP", usb); 1322 + if (sdhi1) 1323 + rmobile_add_device_to_domain("A3SP", sdhi1); 1330 1324 1331 1325 r8a7740_pm_init(); 1332 1326 }
+8
arch/arm/mach-shmobile/board-kzm9g-reference.c
··· 39 39 #endif 40 40 } 41 41 42 + #define RESCNT2 IOMEM(0xe6188020) 43 + static void kzm9g_restart(enum reboot_mode mode, const char *cmd) 44 + { 45 + /* Do soft power on reset */ 46 + writel((1 << 31), RESCNT2); 47 + } 48 + 42 49 static const char *kzm9g_boards_compat_dt[] __initdata = { 43 50 "renesas,kzm9g-reference", 44 51 NULL, ··· 57 50 .init_early = shmobile_init_delay, 58 51 .init_machine = kzm_init, 59 52 .init_late = shmobile_init_late, 53 + .restart = kzm9g_restart, 60 54 .dt_compat = kzm9g_boards_compat_dt, 61 55 MACHINE_END
-5
arch/arm/mach-shmobile/common.h
··· 19 19 extern void shmobile_smp_scu_prepare_cpus(unsigned int max_cpus); 20 20 extern void shmobile_smp_scu_cpu_die(unsigned int cpu); 21 21 extern int shmobile_smp_scu_cpu_kill(unsigned int cpu); 22 - extern void shmobile_smp_apmu_prepare_cpus(unsigned int max_cpus); 23 - extern int shmobile_smp_apmu_boot_secondary(unsigned int cpu, 24 - struct task_struct *idle); 25 - extern void shmobile_smp_apmu_cpu_die(unsigned int cpu); 26 - extern int shmobile_smp_apmu_cpu_kill(unsigned int cpu); 27 22 struct clk; 28 23 extern int shmobile_clk_init(void); 29 24 extern void shmobile_handle_irq_intc(struct pt_regs *);
+9 -18
arch/arm/mach-shmobile/platsmp-apmu.c
··· 1 1 /* 2 2 * SMP support for SoCs with APMU 3 3 * 4 + * Copyright (C) 2014 Renesas Electronics Corporation 4 5 * Copyright (C) 2013 Magnus Damm 5 6 * 6 7 * This program is free software; you can redistribute it and/or modify ··· 23 22 #include <asm/smp_plat.h> 24 23 #include <asm/suspend.h> 25 24 #include "common.h" 25 + #include "platsmp-apmu.h" 26 26 27 27 static struct { 28 28 void __iomem *iomem; ··· 85 83 pr_debug("apmu ioremap %d %d %pr\n", cpu, bit, res); 86 84 } 87 85 88 - static struct { 89 - struct resource iomem; 90 - int cpus[4]; 91 - } apmu_config[] = { 92 - { 93 - .iomem = DEFINE_RES_MEM(0xe6152000, 0x88), 94 - .cpus = { 0, 1, 2, 3 }, 95 - }, 96 - { 97 - .iomem = DEFINE_RES_MEM(0xe6151000, 0x88), 98 - .cpus = { 0x100, 0x101, 0x102, 0x103 }, 99 - } 100 - }; 101 - 102 - static void apmu_parse_cfg(void (*fn)(struct resource *res, int cpu, int bit)) 86 + static void apmu_parse_cfg(void (*fn)(struct resource *res, int cpu, int bit), 87 + struct rcar_apmu_config *apmu_config, int num) 103 88 { 104 89 u32 id; 105 90 int k; 106 91 int bit, index; 107 92 bool is_allowed; 108 93 109 - for (k = 0; k < ARRAY_SIZE(apmu_config); k++) { 94 + for (k = 0; k < num; k++) { 110 95 /* only enable the cluster that includes the boot CPU */ 111 96 is_allowed = false; 112 97 for (bit = 0; bit < ARRAY_SIZE(apmu_config[k].cpus); bit++) { ··· 117 128 } 118 129 } 119 130 120 - void __init shmobile_smp_apmu_prepare_cpus(unsigned int max_cpus) 131 + void __init shmobile_smp_apmu_prepare_cpus(unsigned int max_cpus, 132 + struct rcar_apmu_config *apmu_config, 133 + int num) 121 134 { 122 135 /* install boot code shared by all CPUs */ 123 136 shmobile_boot_fn = virt_to_phys(shmobile_smp_boot); 124 137 shmobile_boot_arg = MPIDR_HWID_BITMASK; 125 138 126 139 /* perform per-cpu setup */ 127 - apmu_parse_cfg(apmu_init_cpu); 140 + apmu_parse_cfg(apmu_init_cpu, apmu_config, num); 128 141 } 129 142 130 143 #ifdef CONFIG_SMP
+32
arch/arm/mach-shmobile/platsmp-apmu.h
··· 1 + /* 2 + * rmobile apmu definition 3 + * 4 + * Copyright (C) 2014 Renesas Electronics Corporation 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License as published by 8 + * the Free Software Foundation; version 2 of the License. 9 + * 10 + * This program is distributed in the hope that it will be useful, 11 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 12 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 + * GNU General Public License for more details. 14 + */ 15 + 16 + #ifndef PLATSMP_APMU_H 17 + #define PLATSMP_APMU_H 18 + 19 + struct rcar_apmu_config { 20 + struct resource iomem; 21 + int cpus[4]; 22 + }; 23 + 24 + extern void shmobile_smp_apmu_prepare_cpus(unsigned int max_cpus, 25 + struct rcar_apmu_config *apmu_config, 26 + int num); 27 + extern int shmobile_smp_apmu_boot_secondary(unsigned int cpu, 28 + struct task_struct *idle); 29 + extern void shmobile_smp_apmu_cpu_die(unsigned int cpu); 30 + extern int shmobile_smp_apmu_cpu_kill(unsigned int cpu); 31 + 32 + #endif /* PLATSMP_APMU_H */
+40 -4
arch/arm/mach-shmobile/pm-r8a7740.c
··· 14 14 #include "pm-rmobile.h" 15 15 16 16 #if defined(CONFIG_PM) && !defined(CONFIG_ARCH_MULTIPLATFORM) 17 - static int r8a7740_pd_a4s_suspend(void) 17 + static int r8a7740_pd_a3sm_suspend(void) 18 18 { 19 19 /* 20 - * The A4S domain contains the CPU core and therefore it should 20 + * The A3SM domain contains the CPU core and therefore it should 21 21 * only be turned off if the CPU is not in use. 22 22 */ 23 23 return -EBUSY; ··· 32 32 return console_suspend_enabled ? 0 : -EBUSY; 33 33 } 34 34 35 + static int r8a7740_pd_d4_suspend(void) 36 + { 37 + /* 38 + * The D4 domain contains the Coresight-ETM hardware block and 39 + * therefore it should only be turned off if the debug module is 40 + * not in use. 41 + */ 42 + return -EBUSY; 43 + } 44 + 35 45 static struct rmobile_pm_domain r8a7740_pm_domains[] = { 36 46 { 37 47 .genpd.name = "A4LC", 38 48 .bit_shift = 1, 39 49 }, { 50 + .genpd.name = "A4MP", 51 + .bit_shift = 2, 52 + }, { 53 + .genpd.name = "D4", 54 + .bit_shift = 3, 55 + .gov = &pm_domain_always_on_gov, 56 + .suspend = r8a7740_pd_d4_suspend, 57 + }, { 58 + .genpd.name = "A4R", 59 + .bit_shift = 5, 60 + }, { 61 + .genpd.name = "A3RV", 62 + .bit_shift = 6, 63 + }, { 40 64 .genpd.name = "A4S", 41 65 .bit_shift = 10, 42 - .gov = &pm_domain_always_on_gov, 43 66 .no_debug = true, 44 - .suspend = r8a7740_pd_a4s_suspend, 45 67 }, { 46 68 .genpd.name = "A3SP", 47 69 .bit_shift = 11, 48 70 .gov = &pm_domain_always_on_gov, 49 71 .no_debug = true, 50 72 .suspend = r8a7740_pd_a3sp_suspend, 73 + }, { 74 + .genpd.name = "A3SM", 75 + .bit_shift = 12, 76 + .gov = &pm_domain_always_on_gov, 77 + .suspend = r8a7740_pd_a3sm_suspend, 78 + }, { 79 + .genpd.name = "A3SG", 80 + .bit_shift = 13, 81 + }, { 82 + .genpd.name = "A4SU", 83 + .bit_shift = 20, 51 84 }, 52 85 }; 53 86 54 87 void __init r8a7740_init_pm_domains(void) 55 88 { 56 89 rmobile_init_domains(r8a7740_pm_domains, ARRAY_SIZE(r8a7740_pm_domains)); 90 + pm_genpd_add_subdomain_names("A4R", "A3RV"); 57 91 
pm_genpd_add_subdomain_names("A4S", "A3SP"); 92 + pm_genpd_add_subdomain_names("A4S", "A3SM"); 93 + pm_genpd_add_subdomain_names("A4S", "A3SG"); 58 94 } 59 95 #endif /* CONFIG_PM && !CONFIG_ARCH_MULTIPLATFORM */ 60 96
+12
arch/arm/mach-shmobile/setup-r8a7740.c
··· 67 67 68 68 void __init r8a7740_map_io(void) 69 69 { 70 + debug_ll_io_init(); 70 71 iotable_init(r8a7740_io_desc, ARRAY_SIZE(r8a7740_io_desc)); 71 72 } 72 73 ··· 743 742 void __init r8a7740_add_standard_devices(void) 744 743 { 745 744 static struct pm_domain_device domain_devices[] __initdata = { 745 + { "A4R", &tmu0_device }, 746 + { "A4R", &i2c0_device }, 747 + { "A4S", &irqpin0_device }, 748 + { "A4S", &irqpin1_device }, 749 + { "A4S", &irqpin2_device }, 750 + { "A4S", &irqpin3_device }, 746 751 { "A3SP", &scif0_device }, 747 752 { "A3SP", &scif1_device }, 748 753 { "A3SP", &scif2_device }, ··· 759 752 { "A3SP", &scif7_device }, 760 753 { "A3SP", &scif8_device }, 761 754 { "A3SP", &i2c1_device }, 755 + { "A3SP", &ipmmu_device }, 756 + { "A3SP", &dma0_device }, 757 + { "A3SP", &dma1_device }, 758 + { "A3SP", &dma2_device }, 759 + { "A3SP", &usb_dma_device }, 762 760 }; 763 761 764 762 /* I2C work-around */
+1
arch/arm/mach-shmobile/setup-r8a7779.c
··· 66 66 67 67 void __init r8a7779_map_io(void) 68 68 { 69 + debug_ll_io_init(); 69 70 iotable_init(r8a7779_io_desc, ARRAY_SIZE(r8a7779_io_desc)); 70 71 } 71 72
+48 -22
arch/arm/mach-shmobile/setup-rcar-gen2.c
··· 3 3 * 4 4 * Copyright (C) 2013 Renesas Solutions Corp. 5 5 * Copyright (C) 2013 Magnus Damm 6 + * Copyright (C) 2014 Ulrich Hecht 6 7 * 7 8 * This program is free software; you can redistribute it and/or modify 8 9 * it under the terms of the GNU General Public License as published by ··· 21 20 #include <linux/dma-contiguous.h> 22 21 #include <linux/io.h> 23 22 #include <linux/kernel.h> 23 + #include <linux/of.h> 24 24 #include <linux/of_fdt.h> 25 25 #include <asm/mach/arch.h> 26 26 #include "common.h" ··· 52 50 { 53 51 #if defined(CONFIG_ARM_ARCH_TIMER) || defined(CONFIG_COMMON_CLK) 54 52 u32 mode = rcar_gen2_read_mode_pins(); 53 + bool is_e2 = (bool)of_find_compatible_node(NULL, NULL, 54 + "renesas,r8a7794"); 55 55 #endif 56 56 #ifdef CONFIG_ARM_ARCH_TIMER 57 57 void __iomem *base; 58 58 int extal_mhz = 0; 59 59 u32 freq; 60 60 61 - /* At Linux boot time the r8a7790 arch timer comes up 62 - * with the counter disabled. Moreover, it may also report 63 - * a potentially incorrect fixed 13 MHz frequency. To be 64 - * correct these registers need to be updated to use the 65 - * frequency EXTAL / 2 which can be determined by the MD pins. 66 - */ 61 + if (is_e2) { 62 + freq = 260000000 / 8; /* ZS / 8 */ 63 + /* CNTVOFF has to be initialized either from non-secure 64 + * Hypervisor mode or secure Monitor mode with SCR.NS==1. 65 + * If TrustZone is enabled then it should be handled by the 66 + * secure code. 67 + */ 68 + asm volatile( 69 + " cps 0x16\n" 70 + " mrc p15, 0, r1, c1, c1, 0\n" 71 + " orr r0, r1, #1\n" 72 + " mcr p15, 0, r0, c1, c1, 0\n" 73 + " isb\n" 74 + " mov r0, #0\n" 75 + " mcrr p15, 4, r0, r0, c14\n" 76 + " isb\n" 77 + " mcr p15, 0, r1, c1, c1, 0\n" 78 + " isb\n" 79 + " cps 0x13\n" 80 + : : : "r0", "r1"); 81 + } else { 82 + /* At Linux boot time the r8a7790 arch timer comes up 83 + * with the counter disabled. Moreover, it may also report 84 + * a potentially incorrect fixed 13 MHz frequency. 
To be 85 + * correct these registers need to be updated to use the 86 + * frequency EXTAL / 2 which can be determined by the MD pins. 87 + */ 67 88 68 - switch (mode & (MD(14) | MD(13))) { 69 - case 0: 70 - extal_mhz = 15; 71 - break; 72 - case MD(13): 73 - extal_mhz = 20; 74 - break; 75 - case MD(14): 76 - extal_mhz = 26; 77 - break; 78 - case MD(13) | MD(14): 79 - extal_mhz = 30; 80 - break; 89 + switch (mode & (MD(14) | MD(13))) { 90 + case 0: 91 + extal_mhz = 15; 92 + break; 93 + case MD(13): 94 + extal_mhz = 20; 95 + break; 96 + case MD(14): 97 + extal_mhz = 26; 98 + break; 99 + case MD(13) | MD(14): 100 + extal_mhz = 30; 101 + break; 102 + } 103 + 104 + /* The arch timer frequency equals EXTAL / 2 */ 105 + freq = extal_mhz * (1000000 / 2); 81 106 } 82 - 83 - /* The arch timer frequency equals EXTAL / 2 */ 84 - freq = extal_mhz * (1000000 / 2); 85 107 86 108 /* Remap "armgcnt address map" space */ 87 109 base = ioremap(0xe6080000, PAGE_SIZE);
+2
arch/arm/mach-shmobile/setup-sh7372.c
··· 56 56 57 57 void __init sh7372_map_io(void) 58 58 { 59 + debug_ll_io_init(); 59 60 iotable_init(sh7372_io_desc, ARRAY_SIZE(sh7372_io_desc)); 60 61 } 61 62 ··· 1009 1008 .init_irq = sh7372_init_irq, 1010 1009 .handle_irq = shmobile_handle_irq_intc, 1011 1010 .init_machine = sh7372_add_standard_devices_dt, 1011 + .init_late = shmobile_init_late, 1012 1012 .dt_compat = sh7372_boards_compat_dt, 1013 1013 MACHINE_END 1014 1014
+9
arch/arm/mach-shmobile/setup-sh73a0.c
··· 55 55 56 56 void __init sh73a0_map_io(void) 57 57 { 58 + debug_ll_io_init(); 58 59 iotable_init(sh73a0_io_desc, ARRAY_SIZE(sh73a0_io_desc)); 59 60 } 60 61 ··· 787 786 of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL); 788 787 } 789 788 789 + #define RESCNT2 IOMEM(0xe6188020) 790 + static void sh73a0_restart(enum reboot_mode mode, const char *cmd) 791 + { 792 + /* Do soft power on reset */ 793 + writel((1 << 31), RESCNT2); 794 + } 795 + 790 796 static const char *sh73a0_boards_compat_dt[] __initdata = { 791 797 "renesas,sh73a0", 792 798 NULL, ··· 805 797 .init_early = shmobile_init_delay, 806 798 .init_machine = sh73a0_add_standard_devices_dt, 807 799 .init_late = shmobile_init_late, 800 + .restart = sh73a0_restart, 808 801 .dt_compat = sh73a0_boards_compat_dt, 809 802 MACHINE_END 810 803 #endif /* CONFIG_USE_OF */
+15 -1
arch/arm/mach-shmobile/smp-r8a7790.c
··· 21 21 #include <asm/smp_plat.h> 22 22 23 23 #include "common.h" 24 + #include "platsmp-apmu.h" 24 25 #include "pm-rcar.h" 25 26 #include "r8a7790.h" 26 27 ··· 35 34 .isr_bit = 21, /* CA7-SCU */ 36 35 }; 37 36 37 + static struct rcar_apmu_config r8a7790_apmu_config[] = { 38 + { 39 + .iomem = DEFINE_RES_MEM(0xe6152000, 0x88), 40 + .cpus = { 0, 1, 2, 3 }, 41 + }, 42 + { 43 + .iomem = DEFINE_RES_MEM(0xe6151000, 0x88), 44 + .cpus = { 0x100, 0x0101, 0x102, 0x103 }, 45 + } 46 + }; 47 + 38 48 static void __init r8a7790_smp_prepare_cpus(unsigned int max_cpus) 39 49 { 40 50 /* let APMU code install data related to shmobile_boot_vector */ 41 - shmobile_smp_apmu_prepare_cpus(max_cpus); 51 + shmobile_smp_apmu_prepare_cpus(max_cpus, 52 + r8a7790_apmu_config, 53 + ARRAY_SIZE(r8a7790_apmu_config)); 42 54 43 55 /* turn on power to SCU */ 44 56 r8a7790_pm_init();
+11 -1
arch/arm/mach-shmobile/smp-r8a7791.c
··· 21 21 #include <asm/smp_plat.h> 22 22 23 23 #include "common.h" 24 + #include "platsmp-apmu.h" 24 25 #include "r8a7791.h" 25 26 #include "rcar-gen2.h" 27 + 28 + static struct rcar_apmu_config r8a7791_apmu_config[] = { 29 + { 30 + .iomem = DEFINE_RES_MEM(0xe6152000, 0x88), 31 + .cpus = { 0, 1 }, 32 + } 33 + }; 26 34 27 35 static void __init r8a7791_smp_prepare_cpus(unsigned int max_cpus) 28 36 { 29 37 /* let APMU code install data related to shmobile_boot_vector */ 30 - shmobile_smp_apmu_prepare_cpus(max_cpus); 38 + shmobile_smp_apmu_prepare_cpus(max_cpus, 39 + r8a7791_apmu_config, 40 + ARRAY_SIZE(r8a7791_apmu_config)); 31 41 32 42 r8a7791_pm_init(); 33 43 }
+15 -8
arch/arm/mach-shmobile/timer.c
··· 40 40 struct device_node *np, *cpus; 41 41 bool is_a7_a8_a9 = false; 42 42 bool is_a15 = false; 43 + bool has_arch_timer = false; 43 44 u32 max_freq = 0; 44 45 45 46 cpus = of_find_node_by_path("/cpus"); ··· 53 52 if (!of_property_read_u32(np, "clock-frequency", &freq)) 54 53 max_freq = max(max_freq, freq); 55 54 56 - if (of_device_is_compatible(np, "arm,cortex-a7") || 57 - of_device_is_compatible(np, "arm,cortex-a8") || 58 - of_device_is_compatible(np, "arm,cortex-a9")) 55 + if (of_device_is_compatible(np, "arm,cortex-a8") || 56 + of_device_is_compatible(np, "arm,cortex-a9")) { 59 57 is_a7_a8_a9 = true; 60 - else if (of_device_is_compatible(np, "arm,cortex-a15")) 58 + } else if (of_device_is_compatible(np, "arm,cortex-a7")) { 59 + is_a7_a8_a9 = true; 60 + has_arch_timer = true; 61 + } else if (of_device_is_compatible(np, "arm,cortex-a15")) { 61 62 is_a15 = true; 63 + has_arch_timer = true; 64 + } 62 65 } 63 66 64 67 of_node_put(cpus); ··· 70 65 if (!max_freq) 71 66 return; 72 67 73 - if (is_a7_a8_a9) 74 - shmobile_setup_delay_hz(max_freq, 1, 3); 75 - else if (is_a15 && !IS_ENABLED(CONFIG_ARM_ARCH_TIMER)) 76 - shmobile_setup_delay_hz(max_freq, 2, 4); 68 + if (!has_arch_timer || !IS_ENABLED(CONFIG_ARM_ARCH_TIMER)) { 69 + if (is_a7_a8_a9) 70 + shmobile_setup_delay_hz(max_freq, 1, 3); 71 + else if (is_a15) 72 + shmobile_setup_delay_hz(max_freq, 2, 4); 73 + } 77 74 } 78 75 79 76 static void __init shmobile_late_time_init(void)
+3
arch/arm/mach-socfpga/core.h
··· 21 21 #define __MACH_CORE_H 22 22 23 23 #define SOCFPGA_RSTMGR_CTRL 0x04 24 + #define SOCFPGA_RSTMGR_MODMPURST 0x10 24 25 #define SOCFPGA_RSTMGR_MODPERRST 0x14 25 26 #define SOCFPGA_RSTMGR_BRGMODRST 0x1c 26 27 27 28 /* System Manager bits */ 28 29 #define RSTMGR_CTRL_SWCOLDRSTREQ 0x1 /* Cold Reset */ 29 30 #define RSTMGR_CTRL_SWWARMRSTREQ 0x2 /* Warm Reset */ 31 + 32 + #define RSTMGR_MPUMODRST_CPU1 0x2 /* CPU1 Reset */ 30 33 31 34 extern void socfpga_secondary_startup(void); 32 35 extern void __iomem *socfpga_scu_base_addr;
+11 -8
arch/arm/mach-socfpga/platsmp.c
··· 34 34 int trampoline_size = &secondary_trampoline_end - &secondary_trampoline; 35 35 36 36 if (socfpga_cpu1start_addr) { 37 + /* This will put CPU #1 into reset. */ 38 + writel(RSTMGR_MPUMODRST_CPU1, 39 + rst_manager_base_addr + SOCFPGA_RSTMGR_MODMPURST); 40 + 37 41 memcpy(phys_to_virt(0), &secondary_trampoline, trampoline_size); 38 42 39 - __raw_writel(virt_to_phys(socfpga_secondary_startup), 40 - (sys_manager_base_addr + (socfpga_cpu1start_addr & 0x000000ff))); 43 + writel(virt_to_phys(socfpga_secondary_startup), 44 + sys_manager_base_addr + (socfpga_cpu1start_addr & 0x000000ff)); 41 45 42 46 flush_cache_all(); 43 47 smp_wmb(); 44 48 outer_clean_range(0, trampoline_size); 45 49 46 - /* This will release CPU #1 out of reset.*/ 47 - __raw_writel(0, rst_manager_base_addr + 0x10); 50 + /* This will release CPU #1 out of reset. */ 51 + writel(0, rst_manager_base_addr + SOCFPGA_RSTMGR_MODMPURST); 48 52 } 49 53 50 54 return 0; ··· 90 86 */ 91 87 static void socfpga_cpu_die(unsigned int cpu) 92 88 { 93 - cpu_do_idle(); 94 - 95 - /* We should have never returned from idle */ 96 - panic("cpu %d unexpectedly exit from shutdown\n", cpu); 89 + /* Do WFI. If we wake up early, go back into WFI */ 90 + while (1) 91 + cpu_do_idle(); 97 92 } 98 93 99 94 struct smp_operations socfpga_smp_ops __initdata = {
+7
arch/arm/mach-sunxi/Kconfig
··· 42 42 select MFD_SUN6I_PRCM 43 43 select RESET_CONTROLLER 44 44 45 + config MACH_SUN9I 46 + bool "Allwinner (sun9i) SoCs support" 47 + default ARCH_SUNXI 48 + select ARCH_HAS_RESET_CONTROLLER 49 + select ARM_GIC 50 + select RESET_CONTROLLER 51 + 45 52 endif
+1 -1
arch/arm/mach-sunxi/platsmp.c
··· 116 116 return 0; 117 117 } 118 118 119 - struct smp_operations sun6i_smp_ops __initdata = { 119 + static struct smp_operations sun6i_smp_ops __initdata = { 120 120 .smp_prepare_cpus = sun6i_smp_prepare_cpus, 121 121 .smp_boot_secondary = sun6i_smp_boot_secondary, 122 122 };
+9
arch/arm/mach-sunxi/sunxi.c
··· 63 63 DT_MACHINE_START(SUN8I_DT, "Allwinner sun8i (A23) Family") 64 64 .dt_compat = sun8i_board_dt_compat, 65 65 MACHINE_END 66 + 67 + static const char * const sun9i_board_dt_compat[] = { 68 + "allwinner,sun9i-a80", 69 + NULL, 70 + }; 71 + 72 + DT_MACHINE_START(SUN9I_DT, "Allwinner sun9i Family") 73 + .dt_compat = sun9i_board_dt_compat, 74 + MACHINE_END
+1 -1
arch/arm/mach-tegra/cpuidle-tegra114.c
··· 49 49 call_firmware_op(prepare_idle); 50 50 51 51 /* Do suspend by ourselves if the firmware does not implement it */ 52 - if (call_firmware_op(do_idle) == -ENOSYS) 52 + if (call_firmware_op(do_idle, 0) == -ENOSYS) 53 53 cpu_suspend(0, tegra30_sleep_cpu_secondary_finish); 54 54 55 55 clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_EXIT, &dev->cpu);
+29 -36
arch/arm/mach-u300/dummyspichip.c
··· 80 80 "in 8bit mode\n"); 81 81 status = spi_w8r8(spi, 0xAA); 82 82 if (status < 0) 83 - pr_warning("Siple test 1: FAILURE: spi_write_then_read " 84 - "failed with status %d\n", status); 83 + pr_warn("Simple test 1: FAILURE: spi_write_then_read failed with status %d\n", 84 + status); 85 85 else 86 86 pr_info("Simple test 1: SUCCESS!\n"); 87 87 ··· 89 89 "in 8bit mode (full FIFO)\n"); 90 90 status = spi_write_then_read(spi, &txbuf[0], 8, &rxbuf[0], 8); 91 91 if (status < 0) 92 - pr_warning("Simple test 2: FAILURE: spi_write_then_read() " 93 - "failed with status %d\n", status); 92 + pr_warn("Simple test 2: FAILURE: spi_write_then_read() failed with status %d\n", 93 + status); 94 94 else 95 95 pr_info("Simple test 2: SUCCESS!\n"); 96 96 ··· 98 98 "in 8bit mode (see if we overflow FIFO)\n"); 99 99 status = spi_write_then_read(spi, &txbuf[0], 14, &rxbuf[0], 14); 100 100 if (status < 0) 101 - pr_warning("Simple test 3: FAILURE: failed with status %d " 102 - "(probably FIFO overrun)\n", status); 101 + pr_warn("Simple test 3: FAILURE: failed with status %d (probably FIFO overrun)\n", 102 + status); 103 103 else 104 104 pr_info("Simple test 3: SUCCESS!\n"); 105 105 ··· 107 107 "bytes garbage with spi_read() in 8bit mode\n"); 108 108 status = spi_write(spi, &txbuf[0], 8); 109 109 if (status < 0) 110 - pr_warning("Simple test 4 step 1: FAILURE: spi_write() " 111 - "failed with status %d\n", status); 110 + pr_warn("Simple test 4 step 1: FAILURE: spi_write() failed with status %d\n", 111 + status); 112 112 else 113 113 pr_info("Simple test 4 step 1: SUCCESS!\n"); 114 114 status = spi_read(spi, &rxbuf[0], 8); 115 115 if (status < 0) 116 - pr_warning("Simple test 4 step 2: FAILURE: spi_read() " 117 - "failed with status %d\n", status); 116 + pr_warn("Simple test 4 step 2: FAILURE: spi_read() failed with status %d\n", 117 + status); 118 118 else 119 119 pr_info("Simple test 4 step 2: SUCCESS!\n"); 120 120 ··· 122 122 "14 bytes garbage with spi_read() in 8bit mode\n"); 123 123 
status = spi_write(spi, &txbuf[0], 14); 124 124 if (status < 0) 125 - pr_warning("Simple test 5 step 1: FAILURE: spi_write() " 126 - "failed with status %d (probably FIFO overrun)\n", 127 - status); 125 + pr_warn("Simple test 5 step 1: FAILURE: spi_write() failed with status %d (probably FIFO overrun)\n", 126 + status); 128 127 else 129 128 pr_info("Simple test 5 step 1: SUCCESS!\n"); 130 129 status = spi_read(spi, &rxbuf[0], 14); 131 130 if (status < 0) 132 - pr_warning("Simple test 5 step 2: FAILURE: spi_read() " 133 - "failed with status %d (probably FIFO overrun)\n", 134 - status); 131 + pr_warn("Simple test 5 step 2: FAILURE: spi_read() failed with status %d (probably FIFO overrun)\n", 132 + status); 135 133 else 136 134 pr_info("Simple test 5: SUCCESS!\n"); 137 135 ··· 138 140 DMA_TEST_SIZE, DMA_TEST_SIZE); 139 141 status = spi_write(spi, &bigtxbuf_virtual[0], DMA_TEST_SIZE); 140 142 if (status < 0) 141 - pr_warning("Simple test 6 step 1: FAILURE: spi_write() " 142 - "failed with status %d (probably FIFO overrun)\n", 143 - status); 143 + pr_warn("Simple test 6 step 1: FAILURE: spi_write() failed with status %d (probably FIFO overrun)\n", 144 + status); 144 145 else 145 146 pr_info("Simple test 6 step 1: SUCCESS!\n"); 146 147 status = spi_read(spi, &bigrxbuf_virtual[0], DMA_TEST_SIZE); 147 148 if (status < 0) 148 - pr_warning("Simple test 6 step 2: FAILURE: spi_read() " 149 - "failed with status %d (probably FIFO overrun)\n", 150 - status); 149 + pr_warn("Simple test 6 step 2: FAILURE: spi_read() failed with status %d (probably FIFO overrun)\n", 150 + status); 151 151 else 152 152 pr_info("Simple test 6: SUCCESS!\n"); 153 153 ··· 165 169 pr_info("Simple test 7: SUCCESS! 
(expected failure with " 166 170 "status EIO)\n"); 167 171 else if (status < 0) 168 - pr_warning("Siple test 7: FAILURE: spi_write_then_read " 169 - "failed with status %d\n", status); 172 + pr_warn("Simple test 7: FAILURE: spi_write_then_read failed with status %d\n", 173 + status); 170 174 else 171 - pr_warning("Siple test 7: FAILURE: spi_write_then_read " 172 - "succeeded but it was expected to fail!\n"); 175 + pr_warn("Simple test 7: FAILURE: spi_write_then_read succeeded but it was expected to fail!\n"); 173 176 174 177 pr_info("Simple test 8: write 8 bytes, read back 8 bytes garbage " 175 178 "in 16bit mode (full FIFO)\n"); 176 179 status = spi_write_then_read(spi, &txbuf[0], 8, &rxbuf[0], 8); 177 180 if (status < 0) 178 - pr_warning("Simple test 8: FAILURE: spi_write_then_read() " 179 - "failed with status %d\n", status); 181 + pr_warn("Simple test 8: FAILURE: spi_write_then_read() failed with status %d\n", 182 + status); 180 183 else 181 184 pr_info("Simple test 8: SUCCESS!\n"); 182 185 ··· 183 188 "in 16bit mode (see if we overflow FIFO)\n"); 184 189 status = spi_write_then_read(spi, &txbuf[0], 14, &rxbuf[0], 14); 185 190 if (status < 0) 186 - pr_warning("Simple test 9: FAILURE: failed with status %d " 187 - "(probably FIFO overrun)\n", status); 191 + pr_warn("Simple test 9: FAILURE: failed with status %d (probably FIFO overrun)\n", 192 + status); 188 193 else 189 194 pr_info("Simple test 9: SUCCESS!\n"); 190 195 ··· 193 198 DMA_TEST_SIZE, DMA_TEST_SIZE); 194 199 status = spi_write(spi, &bigtxbuf_virtual[0], DMA_TEST_SIZE); 195 200 if (status < 0) 196 - pr_warning("Simple test 10 step 1: FAILURE: spi_write() " 197 - "failed with status %d (probably FIFO overrun)\n", 198 - status); 201 + pr_warn("Simple test 10 step 1: FAILURE: spi_write() failed with status %d (probably FIFO overrun)\n", 202 + status); 199 203 else 200 204 pr_info("Simple test 10 step 1: SUCCESS!\n"); 201 205 202 206 status = spi_read(spi, &bigrxbuf_virtual[0], DMA_TEST_SIZE); 203 207 if 
(status < 0) 204 - pr_warning("Simple test 10 step 2: FAILURE: spi_read() " 205 - "failed with status %d (probably FIFO overrun)\n", 206 - status); 208 + pr_warn("Simple test 10 step 2: FAILURE: spi_read() failed with status %d (probably FIFO overrun)\n", 209 + status); 207 210 else 208 211 pr_info("Simple test 10: SUCCESS!\n"); 209 212
+1
arch/arm/mach-ux500/Kconfig
··· 32 32 select PINCTRL_AB8540 33 33 select REGULATOR 34 34 select REGULATOR_DB8500_PRCMU 35 + select PM_GENERIC_DOMAINS if PM 35 36 36 37 config MACH_MOP500 37 38 bool "U8500 Development platform, MOP500 versions"
+1
arch/arm/mach-ux500/Makefile
··· 9 9 board-mop500-audio.o 10 10 obj-$(CONFIG_SMP) += platsmp.o headsmp.o 11 11 obj-$(CONFIG_HOTPLUG_CPU) += hotplug.o 12 + obj-$(CONFIG_PM_GENERIC_DOMAINS) += pm_domains.o 12 13 13 14 CFLAGS_hotplug.o += -march=armv7-a
+4
arch/arm/mach-ux500/pm.c
··· 17 17 #include <linux/platform_data/arm-ux500-pm.h> 18 18 19 19 #include "db8500-regs.h" 20 + #include "pm_domains.h" 20 21 21 22 /* ARM WFI Standby signal register */ 22 23 #define PRCM_ARM_WFI_STANDBY (prcmu_base + 0x130) ··· 192 191 193 192 /* Set up ux500 suspend callbacks. */ 194 193 suspend_set_ops(UX500_SUSPEND_OPS); 194 + 195 + /* Initialize ux500 power domains */ 196 + ux500_pm_domains_init(); 195 197 }
+79
arch/arm/mach-ux500/pm_domains.c
··· 1 + /* 2 + * Copyright (C) 2014 Linaro Ltd. 3 + * 4 + * Author: Ulf Hansson <ulf.hansson@linaro.org> 5 + * License terms: GNU General Public License (GPL) version 2 6 + * 7 + * Implements PM domains using the generic PM domain for ux500. 8 + */ 9 + #include <linux/printk.h> 10 + #include <linux/slab.h> 11 + #include <linux/err.h> 12 + #include <linux/of.h> 13 + #include <linux/pm_domain.h> 14 + 15 + #include <dt-bindings/arm/ux500_pm_domains.h> 16 + #include "pm_domains.h" 17 + 18 + static int pd_power_off(struct generic_pm_domain *domain) 19 + { 20 + /* 21 + * Handle the gating of the PM domain regulator here. 22 + * 23 + * Drivers/subsystems handling devices in the PM domain needs to perform 24 + * register context save/restore from their respective runtime PM 25 + * callbacks, to be able to enable PM domain gating/ungating. 26 + */ 27 + return 0; 28 + } 29 + 30 + static int pd_power_on(struct generic_pm_domain *domain) 31 + { 32 + /* 33 + * Handle the ungating of the PM domain regulator here. 34 + * 35 + * Drivers/subsystems handling devices in the PM domain needs to perform 36 + * register context save/restore from their respective runtime PM 37 + * callbacks, to be able to enable PM domain gating/ungating. 
38 + */ 39 + return 0; 40 + } 41 + 42 + static struct generic_pm_domain ux500_pm_domain_vape = { 43 + .name = "VAPE", 44 + .power_off = pd_power_off, 45 + .power_on = pd_power_on, 46 + }; 47 + 48 + static struct generic_pm_domain *ux500_pm_domains[NR_DOMAINS] = { 49 + [DOMAIN_VAPE] = &ux500_pm_domain_vape, 50 + }; 51 + 52 + static struct of_device_id ux500_pm_domain_matches[] = { 53 + { .compatible = "stericsson,ux500-pm-domains", }, 54 + { }, 55 + }; 56 + 57 + int __init ux500_pm_domains_init(void) 58 + { 59 + struct device_node *np; 60 + struct genpd_onecell_data *genpd_data; 61 + int i; 62 + 63 + np = of_find_matching_node(NULL, ux500_pm_domain_matches); 64 + if (!np) 65 + return -ENODEV; 66 + 67 + genpd_data = kzalloc(sizeof(*genpd_data), GFP_KERNEL); 68 + if (!genpd_data) 69 + return -ENOMEM; 70 + 71 + genpd_data->domains = ux500_pm_domains; 72 + genpd_data->num_domains = ARRAY_SIZE(ux500_pm_domains); 73 + 74 + for (i = 0; i < ARRAY_SIZE(ux500_pm_domains); ++i) 75 + pm_genpd_init(ux500_pm_domains[i], NULL, false); 76 + 77 + of_genpd_add_provider_onecell(np, genpd_data); 78 + return 0; 79 + }
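The new pm_domains.c follows the generic PM domain pattern: each domain gets power_on/power_off callbacks, pm_genpd_init() registers it (the `false` third argument means initially powered), and a onecell provider maps a DT index such as DOMAIN_VAPE to the domain. A minimal userspace sketch of that callback and lookup contract (all `sketch_*` names are illustrative, not the kernel's genpd API):

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for struct generic_pm_domain: just the fields the
 * on/off contract needs. power_off gates the domain's regulator,
 * power_on ungates it; here a flag models the regulator state. */
struct sketch_pm_domain {
	const char *name;
	int powered;                              /* 1 = ungated */
	int (*power_on)(struct sketch_pm_domain *);
	int (*power_off)(struct sketch_pm_domain *);
};

static int sketch_power_on(struct sketch_pm_domain *pd)
{
	pd->powered = 1;   /* real code would ungate the PM domain regulator */
	return 0;
}

static int sketch_power_off(struct sketch_pm_domain *pd)
{
	pd->powered = 0;   /* real code would gate the regulator */
	return 0;
}

/* Mirrors the DOMAIN_VAPE slot in ux500_pm_domains[]. */
enum { SKETCH_DOMAIN_VAPE, SKETCH_NR_DOMAINS };

static struct sketch_pm_domain sketch_vape = {
	.name = "VAPE",
	.power_on = sketch_power_on,
	.power_off = sketch_power_off,
};

/* The onecell provider resolves a cell index to one of these slots. */
static struct sketch_pm_domain *sketch_domains[SKETCH_NR_DOMAINS] = {
	[SKETCH_DOMAIN_VAPE] = &sketch_vape,
};

/* Equivalent of the pm_genpd_init(..., false) loop: every domain is
 * registered in the powered-on state. */
static void sketch_domains_init(void)
{
	for (size_t i = 0; i < SKETCH_NR_DOMAINS; i++)
		sketch_domains[i]->power_on(sketch_domains[i]);
}
```

Drivers for devices inside the domain still have to save and restore their own register context from runtime PM callbacks, as the kernel comments above note; the domain callbacks only gate the shared supply.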
+17
arch/arm/mach-ux500/pm_domains.h
··· 1 + /* 2 + * Copyright (C) 2014 Linaro Ltd. 3 + * 4 + * Author: Ulf Hansson <ulf.hansson@linaro.org> 5 + * License terms: GNU General Public License (GPL) version 2 6 + */ 7 + 8 + #ifndef __MACH_UX500_PM_DOMAINS_H 9 + #define __MACH_UX500_PM_DOMAINS_H 10 + 11 + #ifdef CONFIG_PM_GENERIC_DOMAINS 12 + extern int __init ux500_pm_domains_init(void); 13 + #else 14 + static inline int ux500_pm_domains_init(void) { return 0; } 15 + #endif 16 + 17 + #endif
+14 -14
arch/arm/mm/Kconfig
··· 21 21 22 22 # ARM720T 23 23 config CPU_ARM720T 24 - bool "Support ARM720T processor" if ARCH_INTEGRATOR 24 + bool "Support ARM720T processor" if (ARCH_MULTI_V4T && ARCH_INTEGRATOR) 25 25 select CPU_32v4T 26 26 select CPU_ABRT_LV4T 27 27 select CPU_CACHE_V4 ··· 39 39 40 40 # ARM740T 41 41 config CPU_ARM740T 42 - bool "Support ARM740T processor" if ARCH_INTEGRATOR 42 + bool "Support ARM740T processor" if (ARCH_MULTI_V4T && ARCH_INTEGRATOR) 43 43 depends on !MMU 44 44 select CPU_32v4T 45 45 select CPU_ABRT_LV4T ··· 71 71 72 72 # ARM920T 73 73 config CPU_ARM920T 74 - bool "Support ARM920T processor" if ARCH_INTEGRATOR 74 + bool "Support ARM920T processor" if (ARCH_MULTI_V4T && ARCH_INTEGRATOR) 75 75 select CPU_32v4T 76 76 select CPU_ABRT_EV4T 77 77 select CPU_CACHE_V4WT ··· 89 89 90 90 # ARM922T 91 91 config CPU_ARM922T 92 - bool "Support ARM922T processor" if ARCH_INTEGRATOR 92 + bool "Support ARM922T processor" if (ARCH_MULTI_V4T && ARCH_INTEGRATOR) 93 93 select CPU_32v4T 94 94 select CPU_ABRT_EV4T 95 95 select CPU_CACHE_V4WT ··· 127 127 128 128 # ARM926T 129 129 config CPU_ARM926T 130 - bool "Support ARM926T processor" if ARCH_INTEGRATOR || MACH_REALVIEW_EB 130 + bool "Support ARM926T processor" if (!ARCH_MULTIPLATFORM || ARCH_MULTI_V5) && (ARCH_INTEGRATOR || MACH_REALVIEW_EB) 131 131 select CPU_32v5 132 132 select CPU_ABRT_EV5TJ 133 133 select CPU_CACHE_VIVT ··· 163 163 164 164 # ARM940T 165 165 config CPU_ARM940T 166 - bool "Support ARM940T processor" if ARCH_INTEGRATOR 166 + bool "Support ARM940T processor" if (ARCH_MULTI_V4T && ARCH_INTEGRATOR) 167 167 depends on !MMU 168 168 select CPU_32v4T 169 169 select CPU_ABRT_NOMMU ··· 181 181 182 182 # ARM946E-S 183 183 config CPU_ARM946E 184 - bool "Support ARM946E-S processor" if ARCH_INTEGRATOR 184 + bool "Support ARM946E-S processor" if (ARCH_MULTI_V5 && ARCH_INTEGRATOR) 185 185 depends on !MMU 186 186 select CPU_32v5 187 187 select CPU_ABRT_NOMMU ··· 198 198 199 199 # ARM1020 - needs validating 200 200 config 
CPU_ARM1020 201 - bool "Support ARM1020T (rev 0) processor" if ARCH_INTEGRATOR 201 + bool "Support ARM1020T (rev 0) processor" if (ARCH_MULTI_V5 && ARCH_INTEGRATOR) 202 202 select CPU_32v5 203 203 select CPU_ABRT_EV4T 204 204 select CPU_CACHE_V4WT ··· 216 216 217 217 # ARM1020E - needs validating 218 218 config CPU_ARM1020E 219 - bool "Support ARM1020E processor" if ARCH_INTEGRATOR 219 + bool "Support ARM1020E processor" if (ARCH_MULTI_V5 && ARCH_INTEGRATOR) 220 220 depends on n 221 221 select CPU_32v5 222 222 select CPU_ABRT_EV4T ··· 229 229 230 230 # ARM1022E 231 231 config CPU_ARM1022 232 - bool "Support ARM1022E processor" if ARCH_INTEGRATOR 232 + bool "Support ARM1022E processor" if (ARCH_MULTI_V5 && ARCH_INTEGRATOR) 233 233 select CPU_32v5 234 234 select CPU_ABRT_EV4T 235 235 select CPU_CACHE_VIVT ··· 247 247 248 248 # ARM1026EJ-S 249 249 config CPU_ARM1026 250 - bool "Support ARM1026EJ-S processor" if ARCH_INTEGRATOR 250 + bool "Support ARM1026EJ-S processor" if (ARCH_MULTI_V5 && ARCH_INTEGRATOR) 251 251 select CPU_32v5 252 252 select CPU_ABRT_EV5T # But need Jazelle, but EV5TJ ignores bit 10 253 253 select CPU_CACHE_VIVT ··· 358 358 359 359 # ARMv6 360 360 config CPU_V6 361 - bool "Support ARM V6 processor" if ARCH_INTEGRATOR || MACH_REALVIEW_EB || MACH_REALVIEW_PBX 361 + bool "Support ARM V6 processor" if (!ARCH_MULTIPLATFORM || ARCH_MULTI_V6) && (ARCH_INTEGRATOR || MACH_REALVIEW_EB || MACH_REALVIEW_PBX) 362 362 select CPU_32v6 363 363 select CPU_ABRT_EV6 364 364 select CPU_CACHE_V6 ··· 371 371 372 372 # ARMv6k 373 373 config CPU_V6K 374 - bool "Support ARM V6K processor" if ARCH_INTEGRATOR || MACH_REALVIEW_EB || MACH_REALVIEW_PBX 374 + bool "Support ARM V6K processor" if (!ARCH_MULTIPLATFORM || ARCH_MULTI_V6) && (ARCH_INTEGRATOR || MACH_REALVIEW_EB || MACH_REALVIEW_PBX) 375 375 select CPU_32v6 376 376 select CPU_32v6K 377 377 select CPU_ABRT_EV6 ··· 385 385 386 386 # ARMv7 387 387 config CPU_V7 388 - bool "Support ARM V7 processor" if ARCH_INTEGRATOR || 
MACH_REALVIEW_EB || MACH_REALVIEW_PBX 388 + bool "Support ARM V7 processor" if (!ARCH_MULTIPLATFORM || ARCH_MULTI_V7) && (ARCH_INTEGRATOR || MACH_REALVIEW_EB || MACH_REALVIEW_PBX) 389 389 select CPU_32v6K 390 390 select CPU_32v7 391 391 select CPU_ABRT_EV7
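The mm/Kconfig changes all apply one pattern: a conditional prompt (`bool "..." if EXPR`) so that each CPU option is only user-visible when the matching multiplatform architecture level is enabled. An illustrative before/after for one entry:

```
# Before: the prompt appears whenever ARCH_INTEGRATOR is set, even in
# multiplatform builds where ARMv4T support cannot be used.
config CPU_ARM720T
	bool "Support ARM720T processor" if ARCH_INTEGRATOR

# After: the prompt additionally requires the ARMv4T multiplatform
# level, so it only shows up in an ARCH_MULTI_V4T configuration.
config CPU_ARM720T
	bool "Support ARM720T processor" if (ARCH_MULTI_V4T && ARCH_INTEGRATOR)
```

Note that `if` on a prompt only hides the question; the symbol can still be selected by other options, unlike a `depends on` clause.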
+1
arch/arm/plat-samsung/Makefile
··· 35 35 # PM support 36 36 37 37 obj-$(CONFIG_PM_SLEEP) += pm-common.o 38 + obj-$(CONFIG_EXYNOS_CPU_SUSPEND) += pm-common.o 38 39 obj-$(CONFIG_SAMSUNG_PM) += pm.o 39 40 obj-$(CONFIG_SAMSUNG_PM_GPIO) += pm-gpio.o 40 41 obj-$(CONFIG_SAMSUNG_PM_CHECK) += pm-check.o
+39 -6
drivers/bus/brcmstb_gisb.c
··· 23 23 #include <linux/list.h> 24 24 #include <linux/of.h> 25 25 #include <linux/bitops.h> 26 + #include <linux/pm.h> 26 27 27 28 #include <asm/bug.h> 28 29 #include <asm/signal.h> ··· 49 48 struct list_head next; 50 49 u32 valid_mask; 51 50 const char *master_names[sizeof(u32) * BITS_PER_BYTE]; 51 + u32 saved_timeout; 52 52 }; 53 53 54 54 static LIST_HEAD(brcmstb_gisb_arb_device_list); ··· 162 160 return ret; 163 161 } 164 162 165 - void __init brcmstb_hook_fault_code(void) 166 - { 167 - hook_fault_code(22, brcmstb_bus_error_handler, SIGBUS, 0, 168 - "imprecise external abort"); 169 - } 170 - 171 163 static irqreturn_t brcmstb_gisb_timeout_handler(int irq, void *dev_id) 172 164 { 173 165 brcmstb_gisb_arb_decode_addr(dev_id, "timeout"); ··· 257 261 258 262 list_add_tail(&gdev->next, &brcmstb_gisb_arb_device_list); 259 263 264 + hook_fault_code(22, brcmstb_bus_error_handler, SIGBUS, 0, 265 + "imprecise external abort"); 266 + 260 267 dev_info(&pdev->dev, "registered mem: %p, irqs: %d, %d\n", 261 268 gdev->base, timeout_irq, tea_irq); 262 269 263 270 return 0; 264 271 } 272 + 273 + #ifdef CONFIG_PM_SLEEP 274 + static int brcmstb_gisb_arb_suspend(struct device *dev) 275 + { 276 + struct platform_device *pdev = to_platform_device(dev); 277 + struct brcmstb_gisb_arb_device *gdev = platform_get_drvdata(pdev); 278 + 279 + gdev->saved_timeout = ioread32(gdev->base + ARB_TIMER); 280 + 281 + return 0; 282 + } 283 + 284 + /* Make sure we provide the same timeout value that was configured before, and 285 + * do this before the GISB timeout interrupt handler has any chance to run. 
286 + */ 287 + static int brcmstb_gisb_arb_resume_noirq(struct device *dev) 288 + { 289 + struct platform_device *pdev = to_platform_device(dev); 290 + struct brcmstb_gisb_arb_device *gdev = platform_get_drvdata(pdev); 291 + 292 + iowrite32(gdev->saved_timeout, gdev->base + ARB_TIMER); 293 + 294 + return 0; 295 + } 296 + #else 297 + #define brcmstb_gisb_arb_suspend NULL 298 + #define brcmstb_gisb_arb_resume_noirq NULL 299 + #endif 300 + 301 + static const struct dev_pm_ops brcmstb_gisb_arb_pm_ops = { 302 + .suspend = brcmstb_gisb_arb_suspend, 303 + .resume_noirq = brcmstb_gisb_arb_resume_noirq, 304 + }; 265 305 266 306 static const struct of_device_id brcmstb_gisb_arb_of_match[] = { 267 307 { .compatible = "brcm,gisb-arb" }, ··· 310 278 .name = "brcm-gisb-arb", 311 279 .owner = THIS_MODULE, 312 280 .of_match_table = brcmstb_gisb_arb_of_match, 281 + .pm = &brcmstb_gisb_arb_pm_ops, 313 282 }, 314 283 }; 315 284
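The brcmstb_gisb PM hooks are a plain save/restore pair, with the restore deliberately placed in the `resume_noirq` phase so ARB_TIMER is rewritten before the timeout interrupt handler can run. The data flow can be sketched in userspace (a struct stands in for the MMIO registers; names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Models the driver's device state: one live "register" and the
 * shadow copy held across suspend. */
struct sketch_gisb {
	uint32_t arb_timer;      /* stands in for the ARB_TIMER register */
	uint32_t saved_timeout;
};

static int sketch_gisb_suspend(struct sketch_gisb *gdev)
{
	/* ioread32(gdev->base + ARB_TIMER) in the real driver */
	gdev->saved_timeout = gdev->arb_timer;
	return 0;
}

/* Runs with interrupts still disabled, so the timeout IRQ cannot
 * observe a stale/reset ARB_TIMER value. */
static int sketch_gisb_resume_noirq(struct sketch_gisb *gdev)
{
	/* iowrite32(gdev->saved_timeout, gdev->base + ARB_TIMER) */
	gdev->arb_timer = gdev->saved_timeout;
	return 0;
}
```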
+176 -4
drivers/bus/mvebu-mbus.c
··· 57 57 #include <linux/of_address.h> 58 58 #include <linux/debugfs.h> 59 59 #include <linux/log2.h> 60 + #include <linux/syscore_ops.h> 60 61 61 62 /* 62 63 * DDR target is the same on all platforms. ··· 95 94 96 95 #define DOVE_DDR_BASE_CS_OFF(n) ((n) << 4) 97 96 97 + /* Relative to mbusbridge_base */ 98 + #define MBUS_BRIDGE_CTRL_OFF 0x0 99 + #define MBUS_BRIDGE_BASE_OFF 0x4 100 + 101 + /* Maximum number of windows, for all known platforms */ 102 + #define MBUS_WINS_MAX 20 103 + 98 104 struct mvebu_mbus_state; 99 105 100 106 struct mvebu_mbus_soc_data { 101 107 unsigned int num_wins; 102 108 unsigned int num_remappable_wins; 109 + bool has_mbus_bridge; 103 110 unsigned int (*win_cfg_offset)(const int win); 104 111 void (*setup_cpu_target)(struct mvebu_mbus_state *s); 112 + int (*save_cpu_target)(struct mvebu_mbus_state *s, 113 + u32 *store_addr); 105 114 int (*show_cpu_target)(struct mvebu_mbus_state *s, 106 115 struct seq_file *seq, void *v); 116 + }; 117 + 118 + /* 119 + * Used to store the state of one MBus window accross suspend/resume. 
120 + */ 121 + struct mvebu_mbus_win_data { 122 + u32 ctrl; 123 + u32 base; 124 + u32 remap_lo; 125 + u32 remap_hi; 107 126 }; 108 127 109 128 struct mvebu_mbus_state { 110 129 void __iomem *mbuswins_base; 111 130 void __iomem *sdramwins_base; 131 + void __iomem *mbusbridge_base; 132 + phys_addr_t sdramwins_phys_base; 112 133 struct dentry *debugfs_root; 113 134 struct dentry *debugfs_sdram; 114 135 struct dentry *debugfs_devs; ··· 138 115 struct resource pcie_io_aperture; 139 116 const struct mvebu_mbus_soc_data *soc; 140 117 int hw_io_coherency; 118 + 119 + /* Used during suspend/resume */ 120 + u32 mbus_bridge_ctrl; 121 + u32 mbus_bridge_base; 122 + struct mvebu_mbus_win_data wins[MBUS_WINS_MAX]; 141 123 }; 142 124 143 125 static struct mvebu_mbus_state mbus_state; ··· 544 516 mvebu_mbus_dram_info.num_cs = cs; 545 517 } 546 518 519 + static int 520 + mvebu_mbus_default_save_cpu_target(struct mvebu_mbus_state *mbus, 521 + u32 *store_addr) 522 + { 523 + int i; 524 + 525 + for (i = 0; i < 4; i++) { 526 + u32 base = readl(mbus->sdramwins_base + DDR_BASE_CS_OFF(i)); 527 + u32 size = readl(mbus->sdramwins_base + DDR_SIZE_CS_OFF(i)); 528 + 529 + writel(mbus->sdramwins_phys_base + DDR_BASE_CS_OFF(i), 530 + store_addr++); 531 + writel(base, store_addr++); 532 + writel(mbus->sdramwins_phys_base + DDR_SIZE_CS_OFF(i), 533 + store_addr++); 534 + writel(size, store_addr++); 535 + } 536 + 537 + /* We've written 16 words to the store address */ 538 + return 16; 539 + } 540 + 547 541 static void __init 548 542 mvebu_mbus_dove_setup_cpu_target(struct mvebu_mbus_state *mbus) 549 543 { ··· 596 546 mvebu_mbus_dram_info.num_cs = cs; 597 547 } 598 548 549 + static int 550 + mvebu_mbus_dove_save_cpu_target(struct mvebu_mbus_state *mbus, 551 + u32 *store_addr) 552 + { 553 + int i; 554 + 555 + for (i = 0; i < 2; i++) { 556 + u32 map = readl(mbus->sdramwins_base + DOVE_DDR_BASE_CS_OFF(i)); 557 + 558 + writel(mbus->sdramwins_phys_base + DOVE_DDR_BASE_CS_OFF(i), 559 + store_addr++); 560 + 
writel(map, store_addr++); 561 + } 562 + 563 + /* We've written 4 words to the store address */ 564 + return 4; 565 + } 566 + 567 + int mvebu_mbus_save_cpu_target(u32 *store_addr) 568 + { 569 + return mbus_state.soc->save_cpu_target(&mbus_state, store_addr); 570 + } 571 + 599 572 static const struct mvebu_mbus_soc_data armada_370_xp_mbus_data = { 600 573 .num_wins = 20, 601 574 .num_remappable_wins = 8, 575 + .has_mbus_bridge = true, 602 576 .win_cfg_offset = armada_370_xp_mbus_win_offset, 577 + .save_cpu_target = mvebu_mbus_default_save_cpu_target, 603 578 .setup_cpu_target = mvebu_mbus_default_setup_cpu_target, 604 579 .show_cpu_target = mvebu_sdram_debug_show_orion, 605 580 }; ··· 633 558 .num_wins = 8, 634 559 .num_remappable_wins = 4, 635 560 .win_cfg_offset = orion_mbus_win_offset, 561 + .save_cpu_target = mvebu_mbus_default_save_cpu_target, 636 562 .setup_cpu_target = mvebu_mbus_default_setup_cpu_target, 637 563 .show_cpu_target = mvebu_sdram_debug_show_orion, 638 564 }; ··· 642 566 .num_wins = 8, 643 567 .num_remappable_wins = 4, 644 568 .win_cfg_offset = orion_mbus_win_offset, 569 + .save_cpu_target = mvebu_mbus_dove_save_cpu_target, 645 570 .setup_cpu_target = mvebu_mbus_dove_setup_cpu_target, 646 571 .show_cpu_target = mvebu_sdram_debug_show_dove, 647 572 }; ··· 655 578 .num_wins = 8, 656 579 .num_remappable_wins = 4, 657 580 .win_cfg_offset = orion_mbus_win_offset, 581 + .save_cpu_target = mvebu_mbus_default_save_cpu_target, 658 582 .setup_cpu_target = mvebu_mbus_default_setup_cpu_target, 659 583 .show_cpu_target = mvebu_sdram_debug_show_orion, 660 584 }; ··· 664 586 .num_wins = 8, 665 587 .num_remappable_wins = 2, 666 588 .win_cfg_offset = orion_mbus_win_offset, 589 + .save_cpu_target = mvebu_mbus_default_save_cpu_target, 667 590 .setup_cpu_target = mvebu_mbus_default_setup_cpu_target, 668 591 .show_cpu_target = mvebu_sdram_debug_show_orion, 669 592 }; ··· 673 594 .num_wins = 14, 674 595 .num_remappable_wins = 8, 675 596 .win_cfg_offset = 
mv78xx0_mbus_win_offset, 597 + .save_cpu_target = mvebu_mbus_default_save_cpu_target, 676 598 .setup_cpu_target = mvebu_mbus_default_setup_cpu_target, 677 599 .show_cpu_target = mvebu_sdram_debug_show_orion, 678 600 }; ··· 778 698 } 779 699 fs_initcall(mvebu_mbus_debugfs_init); 780 700 701 + static int mvebu_mbus_suspend(void) 702 + { 703 + struct mvebu_mbus_state *s = &mbus_state; 704 + int win; 705 + 706 + if (!s->mbusbridge_base) 707 + return -ENODEV; 708 + 709 + for (win = 0; win < s->soc->num_wins; win++) { 710 + void __iomem *addr = s->mbuswins_base + 711 + s->soc->win_cfg_offset(win); 712 + 713 + s->wins[win].base = readl(addr + WIN_BASE_OFF); 714 + s->wins[win].ctrl = readl(addr + WIN_CTRL_OFF); 715 + 716 + if (win >= s->soc->num_remappable_wins) 717 + continue; 718 + 719 + s->wins[win].remap_lo = readl(addr + WIN_REMAP_LO_OFF); 720 + s->wins[win].remap_hi = readl(addr + WIN_REMAP_HI_OFF); 721 + } 722 + 723 + s->mbus_bridge_ctrl = readl(s->mbusbridge_base + 724 + MBUS_BRIDGE_CTRL_OFF); 725 + s->mbus_bridge_base = readl(s->mbusbridge_base + 726 + MBUS_BRIDGE_BASE_OFF); 727 + 728 + return 0; 729 + } 730 + 731 + static void mvebu_mbus_resume(void) 732 + { 733 + struct mvebu_mbus_state *s = &mbus_state; 734 + int win; 735 + 736 + writel(s->mbus_bridge_ctrl, 737 + s->mbusbridge_base + MBUS_BRIDGE_CTRL_OFF); 738 + writel(s->mbus_bridge_base, 739 + s->mbusbridge_base + MBUS_BRIDGE_BASE_OFF); 740 + 741 + for (win = 0; win < s->soc->num_wins; win++) { 742 + void __iomem *addr = s->mbuswins_base + 743 + s->soc->win_cfg_offset(win); 744 + 745 + writel(s->wins[win].base, addr + WIN_BASE_OFF); 746 + writel(s->wins[win].ctrl, addr + WIN_CTRL_OFF); 747 + 748 + if (win >= s->soc->num_remappable_wins) 749 + continue; 750 + 751 + writel(s->wins[win].remap_lo, addr + WIN_REMAP_LO_OFF); 752 + writel(s->wins[win].remap_hi, addr + WIN_REMAP_HI_OFF); 753 + } 754 + } 755 + 756 + struct syscore_ops mvebu_mbus_syscore_ops = { 757 + .suspend = mvebu_mbus_suspend, 758 + .resume = 
mvebu_mbus_resume, 759 + }; 760 + 781 761 static int __init mvebu_mbus_common_init(struct mvebu_mbus_state *mbus, 782 762 phys_addr_t mbuswins_phys_base, 783 763 size_t mbuswins_size, 784 764 phys_addr_t sdramwins_phys_base, 785 - size_t sdramwins_size) 765 + size_t sdramwins_size, 766 + phys_addr_t mbusbridge_phys_base, 767 + size_t mbusbridge_size) 786 768 { 787 769 int win; 788 770 ··· 858 716 return -ENOMEM; 859 717 } 860 718 719 + mbus->sdramwins_phys_base = sdramwins_phys_base; 720 + 721 + if (mbusbridge_phys_base) { 722 + mbus->mbusbridge_base = ioremap(mbusbridge_phys_base, 723 + mbusbridge_size); 724 + if (!mbus->mbusbridge_base) { 725 + iounmap(mbus->sdramwins_base); 726 + iounmap(mbus->mbuswins_base); 727 + return -ENOMEM; 728 + } 729 + } else 730 + mbus->mbusbridge_base = NULL; 731 + 861 732 for (win = 0; win < mbus->soc->num_wins; win++) 862 733 mvebu_mbus_disable_window(mbus, win); 863 734 864 735 mbus->soc->setup_cpu_target(mbus); 736 + 737 + register_syscore_ops(&mvebu_mbus_syscore_ops); 865 738 866 739 return 0; 867 740 } ··· 903 746 mbuswins_phys_base, 904 747 mbuswins_size, 905 748 sdramwins_phys_base, 906 - sdramwins_size); 749 + sdramwins_size, 0, 0); 907 750 } 908 751 909 752 #ifdef CONFIG_OF ··· 1044 887 1045 888 int __init mvebu_mbus_dt_init(bool is_coherent) 1046 889 { 1047 - struct resource mbuswins_res, sdramwins_res; 890 + struct resource mbuswins_res, sdramwins_res, mbusbridge_res; 1048 891 struct device_node *np, *controller; 1049 892 const struct of_device_id *of_id; 1050 893 const __be32 *prop; ··· 1080 923 return -EINVAL; 1081 924 } 1082 925 926 + /* 927 + * Set the resource to 0 so that it can be left unmapped by 928 + * mvebu_mbus_common_init() if the DT doesn't carry the 929 + * necessary information. This is needed to preserve backward 930 + * compatibility. 
931 + */ 932 + memset(&mbusbridge_res, 0, sizeof(mbusbridge_res)); 933 + 934 + if (mbus_state.soc->has_mbus_bridge) { 935 + if (of_address_to_resource(controller, 2, &mbusbridge_res)) 936 + pr_warn(FW_WARN "deprecated mbus-mvebu Device Tree, suspend/resume will not work\n"); 937 + } 938 + 1083 939 mbus_state.hw_io_coherency = is_coherent; 1084 940 1085 941 /* Get optional pcie-{mem,io}-aperture properties */ ··· 1103 933 mbuswins_res.start, 1104 934 resource_size(&mbuswins_res), 1105 935 sdramwins_res.start, 1106 - resource_size(&sdramwins_res)); 936 + resource_size(&sdramwins_res), 937 + mbusbridge_res.start, 938 + resource_size(&mbusbridge_res)); 1107 939 if (ret) 1108 940 return ret; 1109 941
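mvebu_mbus_default_save_cpu_target() emits, for each of the four chip selects, an (address, value) pair for both the base and the size register: 4 CS x 2 registers x 2 words = 16 words, which is the value it returns. The same bookkeeping as a standalone function (the offsets below are illustrative stand-ins, not the real DDR_BASE_CS_OFF/DDR_SIZE_CS_OFF values):

```c
#include <assert.h>
#include <stdint.h>

#define SKETCH_CS_COUNT    4
#define SKETCH_BASE_OFF(n) (0x180 + (n) * 8)   /* hypothetical layout */
#define SKETCH_SIZE_OFF(n) (0x184 + (n) * 8)

/* regs[] stands in for readl() on the SDRAM window registers, laid out
 * as base0, size0, base1, size1, ...  store[] receives alternating
 * register-address / register-value words, as consumed by the
 * suspend/resume code that replays them. */
static int sketch_save_cpu_target(uint32_t phys_base, const uint32_t regs[],
				  uint32_t *store)
{
	int words = 0;

	for (int i = 0; i < SKETCH_CS_COUNT; i++) {
		store[words++] = phys_base + SKETCH_BASE_OFF(i);
		store[words++] = regs[2 * i];       /* base register value */
		store[words++] = phys_base + SKETCH_SIZE_OFF(i);
		store[words++] = regs[2 * i + 1];   /* size register value */
	}
	return words;   /* caller learns how much of the store was used */
}
```

Returning the word count is what lets the platform suspend code pack several such register dumps into one store area back to back.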
+30 -2
drivers/clk/mvebu/common.c
··· 19 19 #include <linux/io.h> 20 20 #include <linux/of.h> 21 21 #include <linux/of_address.h> 22 + #include <linux/syscore_ops.h> 22 23 23 24 #include "common.h" 24 25 ··· 178 177 spinlock_t *lock; 179 178 struct clk **gates; 180 179 int num_gates; 180 + void __iomem *base; 181 + u32 saved_reg; 181 182 }; 182 183 183 184 #define to_clk_gate(_hw) container_of(_hw, struct clk_gate, hw) 184 185 186 + static struct clk_gating_ctrl *ctrl; 187 + 185 188 static struct clk *clk_gating_get_src( 186 189 struct of_phandle_args *clkspec, void *data) 187 190 { 188 - struct clk_gating_ctrl *ctrl = (struct clk_gating_ctrl *)data; 189 191 int n; 190 192 191 193 if (clkspec->args_count < 1) ··· 203 199 return ERR_PTR(-ENODEV); 204 200 } 205 201 202 + static int mvebu_clk_gating_suspend(void) 203 + { 204 + ctrl->saved_reg = readl(ctrl->base); 205 + return 0; 206 + } 207 + 208 + static void mvebu_clk_gating_resume(void) 209 + { 210 + writel(ctrl->saved_reg, ctrl->base); 211 + } 212 + 213 + static struct syscore_ops clk_gate_syscore_ops = { 214 + .suspend = mvebu_clk_gating_suspend, 215 + .resume = mvebu_clk_gating_resume, 216 + }; 217 + 206 218 void __init mvebu_clk_gating_setup(struct device_node *np, 207 219 const struct clk_gating_soc_desc *desc) 208 220 { 209 - struct clk_gating_ctrl *ctrl; 210 221 struct clk *clk; 211 222 void __iomem *base; 212 223 const char *default_parent = NULL; 213 224 int n; 225 + 226 + if (ctrl) { 227 + pr_err("mvebu-clk-gating: cannot instantiate more than one gatable clock device\n"); 228 + return; 229 + } 214 230 215 231 base = of_iomap(np, 0); 216 232 if (WARN_ON(!base)) ··· 248 224 249 225 /* lock must already be initialized */ 250 226 ctrl->lock = &ctrl_gating_lock; 227 + 228 + ctrl->base = base; 251 229 252 230 /* Count, allocate, and register clock gates */ 253 231 for (n = 0; desc[n].name;) ··· 271 245 } 272 246 273 247 of_clk_add_provider(np, clk_gating_get_src, ctrl); 248 + 249 + register_syscore_ops(&clk_gate_syscore_ops); 274 250 275 251 
return; 276 252 gates_out:
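The clk/mvebu change converts `ctrl` from a local into a file-scope singleton so the syscore suspend/resume hooks can reach it, and guards against a second controller instance overwriting the first. The shape of that pattern, sketched in userspace with illustrative names:

```c
#include <assert.h>
#include <stdint.h>

struct sketch_gating_ctrl {
	uint32_t reg;        /* stands in for the gating register at base */
	uint32_t saved_reg;
};

/* File-scope singleton, like the driver's static ctrl pointer. */
static struct sketch_gating_ctrl *sketch_ctrl;

static int sketch_gating_setup(struct sketch_gating_ctrl *c)
{
	if (sketch_ctrl)
		return -1;   /* "cannot instantiate more than one" path */
	sketch_ctrl = c;
	return 0;
}

static int sketch_gating_suspend(void)
{
	if (!sketch_ctrl)
		return -1;
	sketch_ctrl->saved_reg = sketch_ctrl->reg;   /* readl(ctrl->base) */
	return 0;
}

static void sketch_gating_resume(void)
{
	sketch_ctrl->reg = sketch_ctrl->saved_reg;   /* writel(..., ctrl->base) */
}
```

Syscore ops take no device argument, which is exactly why the driver needs the singleton: there is no `dev` to hang per-instance state on.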
+28 -1
drivers/clk/samsung/clk-exynos5440.c
··· 15 15 #include <linux/clk-provider.h> 16 16 #include <linux/of.h> 17 17 #include <linux/of_address.h> 18 + #include <linux/notifier.h> 19 + #include <linux/reboot.h> 18 20 19 21 #include "clk.h" 20 22 #include "clk-pll.h" ··· 24 22 #define CLKEN_OV_VAL 0xf8 25 23 #define CPU_CLK_STATUS 0xfc 26 24 #define MISC_DOUT1 0x558 25 + 26 + static void __iomem *reg_base; 27 27 28 28 /* parent clock name list */ 29 29 PNAME(mout_armclk_p) = { "cplla", "cpllb" }; ··· 93 89 {}, 94 90 }; 95 91 92 + static int exynos5440_clk_restart_notify(struct notifier_block *this, 93 + unsigned long code, void *unused) 94 + { 95 + u32 val, status; 96 + 97 + status = readl_relaxed(reg_base + 0xbc); 98 + val = readl_relaxed(reg_base + 0xcc); 99 + val = (val & 0xffff0000) | (status & 0xffff); 100 + writel_relaxed(val, reg_base + 0xcc); 101 + 102 + return NOTIFY_DONE; 103 + } 104 + 105 + /* 106 + * Exynos5440 Clock restart notifier, handles restart functionality 107 + */ 108 + static struct notifier_block exynos5440_clk_restart_handler = { 109 + .notifier_call = exynos5440_clk_restart_notify, 110 + .priority = 128, 111 + }; 112 + 96 113 /* register exynos5440 clocks */ 97 114 static void __init exynos5440_clk_init(struct device_node *np) 98 115 { 99 - void __iomem *reg_base; 100 116 struct samsung_clk_provider *ctx; 101 117 102 118 reg_base = of_iomap(np, 0); ··· 148 124 ARRAY_SIZE(exynos5440_gate_clks)); 149 125 150 126 samsung_clk_of_add_provider(np, ctx); 127 + 128 + if (register_restart_handler(&exynos5440_clk_restart_handler)) 129 + pr_warn("exynos5440 clock can't register restart handler\n"); 151 130 152 131 pr_info("Exynos5440: arm_clk = %ldHz\n", _get_rate("arm_clk")); 153 132 pr_info("exynos5440 clock initialization complete\n");
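The Exynos5440 restart notifier's register update is a pure bit-merge: keep the upper halfword of one register and splice in the lower halfword of the status register. Pulled out as a standalone helper (hypothetical name), it is trivial to verify:

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors the arithmetic in exynos5440_clk_restart_notify():
 * val = (val & 0xffff0000) | (status & 0xffff); */
static uint32_t sketch_restart_merge(uint32_t val, uint32_t status)
{
	return (val & 0xffff0000u) | (status & 0xffffu);
}
```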
+15
drivers/clk/ti/dpll.c
··· 33 33 .recalc_rate = &omap4_dpll_regm4xen_recalc, 34 34 .round_rate = &omap4_dpll_regm4xen_round_rate, 35 35 .set_rate = &omap3_noncore_dpll_set_rate, 36 + .set_parent = &omap3_noncore_dpll_set_parent, 37 + .set_rate_and_parent = &omap3_noncore_dpll_set_rate_and_parent, 38 + .determine_rate = &omap4_dpll_regm4xen_determine_rate, 36 39 .get_parent = &omap2_init_dpll_parent, 37 40 }; 38 41 #else ··· 56 53 .recalc_rate = &omap3_dpll_recalc, 57 54 .round_rate = &omap2_dpll_round_rate, 58 55 .set_rate = &omap3_noncore_dpll_set_rate, 56 + .set_parent = &omap3_noncore_dpll_set_parent, 57 + .set_rate_and_parent = &omap3_noncore_dpll_set_rate_and_parent, 58 + .determine_rate = &omap3_noncore_dpll_determine_rate, 59 59 .get_parent = &omap2_init_dpll_parent, 60 60 }; 61 61 ··· 67 61 .get_parent = &omap2_init_dpll_parent, 68 62 .round_rate = &omap2_dpll_round_rate, 69 63 .set_rate = &omap3_noncore_dpll_set_rate, 64 + .set_parent = &omap3_noncore_dpll_set_parent, 65 + .set_rate_and_parent = &omap3_noncore_dpll_set_rate_and_parent, 66 + .determine_rate = &omap3_noncore_dpll_determine_rate, 70 67 }; 71 68 #else 72 69 static const struct clk_ops dpll_core_ck_ops = {}; ··· 106 97 .get_parent = &omap2_init_dpll_parent, 107 98 .recalc_rate = &omap3_dpll_recalc, 108 99 .set_rate = &omap3_noncore_dpll_set_rate, 100 + .set_parent = &omap3_noncore_dpll_set_parent, 101 + .set_rate_and_parent = &omap3_noncore_dpll_set_rate_and_parent, 102 + .determine_rate = &omap3_noncore_dpll_determine_rate, 109 103 .round_rate = &omap2_dpll_round_rate, 110 104 }; 111 105 ··· 118 106 .get_parent = &omap2_init_dpll_parent, 119 107 .recalc_rate = &omap3_dpll_recalc, 120 108 .set_rate = &omap3_dpll4_set_rate, 109 + .set_parent = &omap3_noncore_dpll_set_parent, 110 + .set_rate_and_parent = &omap3_dpll4_set_rate_and_parent, 111 + .determine_rate = &omap3_noncore_dpll_determine_rate, 121 112 .round_rate = &omap2_dpll_round_rate, 122 113 }; 123 114 #endif
+1
drivers/clocksource/Kconfig
··· 32 32 33 33 config MESON6_TIMER 34 34 bool 35 + select CLKSRC_MMIO 35 36 36 37 config ORION_TIMER 37 38 select CLKSRC_OF
+1
drivers/clocksource/Makefile
··· 45 45 obj-$(CONFIG_CLKSRC_METAG_GENERIC) += metag_generic.o 46 46 obj-$(CONFIG_ARCH_HAS_TICK_BROADCAST) += dummy_timer.o 47 47 obj-$(CONFIG_ARCH_KEYSTONE) += timer-keystone.o 48 + obj-$(CONFIG_ARCH_INTEGRATOR_AP) += timer-integrator-ap.o 48 49 obj-$(CONFIG_CLKSRC_VERSATILE) += versatile.o
+25
drivers/clocksource/time-armada-370-xp.c
··· 43 43 #include <linux/module.h> 44 44 #include <linux/sched_clock.h> 45 45 #include <linux/percpu.h> 46 + #include <linux/syscore_ops.h> 46 47 47 48 /* 48 49 * Timer block registers. ··· 224 223 .notifier_call = armada_370_xp_timer_cpu_notify, 225 224 }; 226 225 226 + static u32 timer0_ctrl_reg, timer0_local_ctrl_reg; 227 + 228 + static int armada_370_xp_timer_suspend(void) 229 + { 230 + timer0_ctrl_reg = readl(timer_base + TIMER_CTRL_OFF); 231 + timer0_local_ctrl_reg = readl(local_base + TIMER_CTRL_OFF); 232 + return 0; 233 + } 234 + 235 + static void armada_370_xp_timer_resume(void) 236 + { 237 + writel(0xffffffff, timer_base + TIMER0_VAL_OFF); 238 + writel(0xffffffff, timer_base + TIMER0_RELOAD_OFF); 239 + writel(timer0_ctrl_reg, timer_base + TIMER_CTRL_OFF); 240 + writel(timer0_local_ctrl_reg, local_base + TIMER_CTRL_OFF); 241 + } 242 + 243 + struct syscore_ops armada_370_xp_timer_syscore_ops = { 244 + .suspend = armada_370_xp_timer_suspend, 245 + .resume = armada_370_xp_timer_resume, 246 + }; 247 + 227 248 static void __init armada_370_xp_timer_common_init(struct device_node *np) 228 249 { 229 250 u32 clr = 0, set = 0; ··· 308 285 /* Immediately configure the timer on the boot CPU */ 309 286 if (!res) 310 287 armada_370_xp_timer_setup(this_cpu_ptr(armada_370_xp_evt)); 288 + 289 + register_syscore_ops(&armada_370_xp_timer_syscore_ops); 311 290 } 312 291 313 292 static void __init armada_xp_timer_init(struct device_node *np)
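A detail worth noticing in armada_370_xp_timer_resume(): the value and reload registers are rewritten with 0xffffffff *before* the saved control word re-enables the timer, so it never restarts from a stale count. That ordering can be sketched by logging the register writes (register names here are illustrative stand-ins for TIMER0_VAL_OFF etc.):

```c
#include <assert.h>
#include <stdint.h>

enum { SK_VAL, SK_RELOAD, SK_CTRL };

struct sk_write {
	int reg;
	uint32_t value;
};

/* Performs the resume sequence against a write log instead of MMIO,
 * returning the number of writes so the order can be inspected. */
static int sketch_timer_resume(uint32_t saved_ctrl, struct sk_write log[3])
{
	log[0] = (struct sk_write){ SK_VAL,    0xffffffffu };
	log[1] = (struct sk_write){ SK_RELOAD, 0xffffffffu };
	log[2] = (struct sk_write){ SK_CTRL,   saved_ctrl };  /* enable last */
	return 3;
}
```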
+210
drivers/clocksource/timer-integrator-ap.c
··· 1 + /* 2 + * Integrator/AP timer driver 3 + * Copyright (C) 2000-2003 Deep Blue Solutions Ltd 4 + * Copyright (c) 2014, Linaro Limited 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License as published by 8 + * the Free Software Foundation; either version 2 of the License, or 9 + * (at your option) any later version. 10 + * 11 + * This program is distributed in the hope that it will be useful, 12 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 13 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 + * GNU General Public License for more details. 15 + * 16 + * You should have received a copy of the GNU General Public License 17 + * along with this program; if not, write to the Free Software 18 + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 19 + */ 20 + 21 + #include <linux/clk.h> 22 + #include <linux/clocksource.h> 23 + #include <linux/of_irq.h> 24 + #include <linux/of_address.h> 25 + #include <linux/of_platform.h> 26 + #include <linux/clockchips.h> 27 + #include <linux/interrupt.h> 28 + #include <linux/sched_clock.h> 29 + #include <asm/hardware/arm_timer.h> 30 + 31 + static void __iomem * sched_clk_base; 32 + 33 + static u64 notrace integrator_read_sched_clock(void) 34 + { 35 + return -readl(sched_clk_base + TIMER_VALUE); 36 + } 37 + 38 + static void integrator_clocksource_init(unsigned long inrate, 39 + void __iomem *base) 40 + { 41 + u32 ctrl = TIMER_CTRL_ENABLE | TIMER_CTRL_PERIODIC; 42 + unsigned long rate = inrate; 43 + 44 + if (rate >= 1500000) { 45 + rate /= 16; 46 + ctrl |= TIMER_CTRL_DIV16; 47 + } 48 + 49 + writel(0xffff, base + TIMER_LOAD); 50 + writel(ctrl, base + TIMER_CTRL); 51 + 52 + clocksource_mmio_init(base + TIMER_VALUE, "timer2", 53 + rate, 200, 16, clocksource_mmio_readl_down); 54 + 55 + sched_clk_base = base; 56 + sched_clock_register(integrator_read_sched_clock, 16, rate); 57 + } 58 + 59 + 
+static unsigned long timer_reload;
+static void __iomem *clkevt_base;
+
+/*
+ * IRQ handler for the timer
+ */
+static irqreturn_t integrator_timer_interrupt(int irq, void *dev_id)
+{
+	struct clock_event_device *evt = dev_id;
+
+	/* clear the interrupt */
+	writel(1, clkevt_base + TIMER_INTCLR);
+
+	evt->event_handler(evt);
+
+	return IRQ_HANDLED;
+}
+
+static void clkevt_set_mode(enum clock_event_mode mode, struct clock_event_device *evt)
+{
+	u32 ctrl = readl(clkevt_base + TIMER_CTRL) & ~TIMER_CTRL_ENABLE;
+
+	/* Disable timer */
+	writel(ctrl, clkevt_base + TIMER_CTRL);
+
+	switch (mode) {
+	case CLOCK_EVT_MODE_PERIODIC:
+		/* Enable the timer and start the periodic tick */
+		writel(timer_reload, clkevt_base + TIMER_LOAD);
+		ctrl |= TIMER_CTRL_PERIODIC | TIMER_CTRL_ENABLE;
+		writel(ctrl, clkevt_base + TIMER_CTRL);
+		break;
+	case CLOCK_EVT_MODE_ONESHOT:
+		/* Leave the timer disabled, .set_next_event will enable it */
+		ctrl &= ~TIMER_CTRL_PERIODIC;
+		writel(ctrl, clkevt_base + TIMER_CTRL);
+		break;
+	case CLOCK_EVT_MODE_UNUSED:
+	case CLOCK_EVT_MODE_SHUTDOWN:
+	case CLOCK_EVT_MODE_RESUME:
+	default:
+		/* Just leave in disabled state */
+		break;
+	}
+}
+
+static int clkevt_set_next_event(unsigned long next, struct clock_event_device *evt)
+{
+	unsigned long ctrl = readl(clkevt_base + TIMER_CTRL);
+
+	writel(ctrl & ~TIMER_CTRL_ENABLE, clkevt_base + TIMER_CTRL);
+	writel(next, clkevt_base + TIMER_LOAD);
+	writel(ctrl | TIMER_CTRL_ENABLE, clkevt_base + TIMER_CTRL);
+
+	return 0;
+}
+
+static struct clock_event_device integrator_clockevent = {
+	.name		= "timer1",
+	.features	= CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT,
+	.set_mode	= clkevt_set_mode,
+	.set_next_event	= clkevt_set_next_event,
+	.rating		= 300,
+};
+
+static struct irqaction integrator_timer_irq = {
+	.name		= "timer",
+	.flags		= IRQF_TIMER | IRQF_IRQPOLL,
+	.handler	= integrator_timer_interrupt,
+	.dev_id		= &integrator_clockevent,
+};
+
+static void integrator_clockevent_init(unsigned long inrate,
+				       void __iomem *base, int irq)
+{
+	unsigned long rate = inrate;
+	unsigned int ctrl = 0;
+
+	clkevt_base = base;
+	/* Calculate and program a divisor */
+	if (rate > 0x100000 * HZ) {
+		rate /= 256;
+		ctrl |= TIMER_CTRL_DIV256;
+	} else if (rate > 0x10000 * HZ) {
+		rate /= 16;
+		ctrl |= TIMER_CTRL_DIV16;
+	}
+	timer_reload = rate / HZ;
+	writel(ctrl, clkevt_base + TIMER_CTRL);
+
+	setup_irq(irq, &integrator_timer_irq);
+	clockevents_config_and_register(&integrator_clockevent,
+					rate,
+					1,
+					0xffffU);
+}
+
+static void __init integrator_ap_timer_init_of(struct device_node *node)
+{
+	const char *path;
+	void __iomem *base;
+	int err;
+	int irq;
+	struct clk *clk;
+	unsigned long rate;
+	struct device_node *pri_node;
+	struct device_node *sec_node;
+
+	base = of_io_request_and_map(node, 0, "integrator-timer");
+	if (!base)
+		return;
+
+	clk = of_clk_get(node, 0);
+	if (IS_ERR(clk)) {
+		pr_err("No clock for %s\n", node->name);
+		return;
+	}
+	clk_prepare_enable(clk);
+	rate = clk_get_rate(clk);
+	writel(0, base + TIMER_CTRL);
+
+	err = of_property_read_string(of_aliases,
+				      "arm,timer-primary", &path);
+	if (WARN_ON(err))
+		return;
+	pri_node = of_find_node_by_path(path);
+	err = of_property_read_string(of_aliases,
+				      "arm,timer-secondary", &path);
+	if (WARN_ON(err))
+		return;
+	sec_node = of_find_node_by_path(path);
+
+	if (node == pri_node) {
+		/* The primary timer lacks IRQ, use as clocksource */
+		integrator_clocksource_init(rate, base);
+		return;
+	}
+
+	if (node == sec_node) {
+		/* The secondary timer will drive the clock event */
+		irq = irq_of_parse_and_map(node, 0);
+		integrator_clockevent_init(rate, base, irq);
+		return;
+	}
+
+	pr_info("Timer @%p unused\n", base);
+	clk_disable_unprepare(clk);
+}
+
+CLOCKSOURCE_OF_DECLARE(integrator_ap_timer, "arm,integrator-timer",
+		       integrator_ap_timer_init_of);
+52
drivers/irqchip/irq-armada-370-xp.c
···
 #include <linux/of_pci.h>
 #include <linux/irqdomain.h>
 #include <linux/slab.h>
+#include <linux/syscore_ops.h>
 #include <linux/msi.h>
 #include <asm/mach/arch.h>
 #include <asm/exception.h>
···
 static void __iomem *per_cpu_int_base;
 static void __iomem *main_int_base;
 static struct irq_domain *armada_370_xp_mpic_domain;
+static u32 doorbell_mask_reg;
 #ifdef CONFIG_PCI_MSI
 static struct irq_domain *armada_370_xp_msi_domain;
 static DECLARE_BITMAP(msi_used, PCI_MSI_DOORBELL_NR);
···
 	} while (1);
 }
 
+static int armada_370_xp_mpic_suspend(void)
+{
+	doorbell_mask_reg = readl(per_cpu_int_base +
+				  ARMADA_370_XP_IN_DRBEL_MSK_OFFS);
+	return 0;
+}
+
+static void armada_370_xp_mpic_resume(void)
+{
+	int nirqs;
+	irq_hw_number_t irq;
+
+	/* Re-enable interrupts */
+	nirqs = (readl(main_int_base + ARMADA_370_XP_INT_CONTROL) >> 2) & 0x3ff;
+	for (irq = 0; irq < nirqs; irq++) {
+		struct irq_data *data;
+		int virq;
+
+		virq = irq_linear_revmap(armada_370_xp_mpic_domain, irq);
+		if (virq == 0)
+			continue;
+
+		if (irq != ARMADA_370_XP_TIMER0_PER_CPU_IRQ)
+			writel(irq, per_cpu_int_base +
+			       ARMADA_370_XP_INT_CLEAR_MASK_OFFS);
+		else
+			writel(irq, main_int_base +
+			       ARMADA_370_XP_INT_SET_ENABLE_OFFS);
+
+		data = irq_get_irq_data(virq);
+		if (!irqd_irq_disabled(data))
+			armada_370_xp_irq_unmask(data);
+	}
+
+	/* Reconfigure doorbells for IPIs and MSIs */
+	writel(doorbell_mask_reg,
+	       per_cpu_int_base + ARMADA_370_XP_IN_DRBEL_MSK_OFFS);
+	if (doorbell_mask_reg & IPI_DOORBELL_MASK)
+		writel(0, per_cpu_int_base + ARMADA_370_XP_INT_CLEAR_MASK_OFFS);
+	if (doorbell_mask_reg & PCI_MSI_DOORBELL_MASK)
+		writel(1, per_cpu_int_base + ARMADA_370_XP_INT_CLEAR_MASK_OFFS);
+}
+
+static struct syscore_ops armada_370_xp_mpic_syscore_ops = {
+	.suspend	= armada_370_xp_mpic_suspend,
+	.resume		= armada_370_xp_mpic_resume,
+};
+
 static int __init armada_370_xp_mpic_of_init(struct device_node *node,
 					     struct device_node *parent)
 {
···
 		irq_set_chained_handler(parent_irq,
 					armada_370_xp_mpic_handle_cascade_irq);
 	}
+
+	register_syscore_ops(&armada_370_xp_mpic_syscore_ops);
 
 	return 0;
 }
+9
drivers/power/reset/Kconfig
···
 	help
 	  Reboot support for Hisilicon boards.
 
+config POWER_RESET_IMX
+	bool "IMX6 power-off driver"
+	depends on POWER_RESET && SOC_IMX6
+	help
+	  This driver supports powering off an external PMIC via PMIC_ON_REQ
+	  on i.MX6 boards. If you want to use another pin to control external
+	  power, say N here or disable this driver in the DT to make sure
+	  pm_power_off is never wrongly overwritten by it.
+
 config POWER_RESET_MSM
 	bool "Qualcomm MSM power-off driver"
 	depends on ARCH_QCOM
+1
drivers/power/reset/Makefile
···
 obj-$(CONFIG_POWER_RESET_GPIO) += gpio-poweroff.o
 obj-$(CONFIG_POWER_RESET_GPIO_RESTART) += gpio-restart.o
 obj-$(CONFIG_POWER_RESET_HISI) += hisi-reboot.o
+obj-$(CONFIG_POWER_RESET_IMX) += imx-snvs-poweroff.o
 obj-$(CONFIG_POWER_RESET_MSM) += msm-poweroff.o
 obj-$(CONFIG_POWER_RESET_LTC2952) += ltc2952-poweroff.o
 obj-$(CONFIG_POWER_RESET_QNAP) += qnap-poweroff.o
+66
drivers/power/reset/imx-snvs-poweroff.c
···
+/* Power off driver for i.mx6
+ * Copyright (c) 2014, FREESCALE CORPORATION. All rights reserved.
+ *
+ * based on msm-poweroff.c
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/io.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/platform_device.h>
+
+static void __iomem *snvs_base;
+
+static void do_imx_poweroff(void)
+{
+	u32 value = readl(snvs_base);
+
+	/* set TOP and DP_EN bit */
+	writel(value | 0x60, snvs_base);
+}
+
+static int imx_poweroff_probe(struct platform_device *pdev)
+{
+	snvs_base = of_iomap(pdev->dev.of_node, 0);
+	if (!snvs_base) {
+		dev_err(&pdev->dev, "failed to get memory\n");
+		return -ENODEV;
+	}
+
+	pm_power_off = do_imx_poweroff;
+	return 0;
+}
+
+static const struct of_device_id of_imx_poweroff_match[] = {
+	{ .compatible = "fsl,sec-v4.0-poweroff", },
+	{},
+};
+MODULE_DEVICE_TABLE(of, of_imx_poweroff_match);
+
+static struct platform_driver imx_poweroff_driver = {
+	.probe = imx_poweroff_probe,
+	.driver = {
+		.name = "imx-snvs-poweroff",
+		.of_match_table = of_match_ptr(of_imx_poweroff_match),
+	},
+};
+
+static int __init imx_poweroff_init(void)
+{
+	return platform_driver_register(&imx_poweroff_driver);
+}
+device_initcall(imx_poweroff_init);
+9
drivers/soc/versatile/Kconfig
···
 #
 # ARM Versatile SoC drivers
 #
+config SOC_INTEGRATOR_CM
+	bool "SoC bus device for the ARM Integrator platform core modules"
+	depends on ARCH_INTEGRATOR
+	select SOC_BUS
+	help
+	  Include support for the SoC bus on the ARM Integrator platform
+	  core modules providing some sysfs information about the ASIC
+	  variant.
+
 config SOC_REALVIEW
 	bool "SoC bus device for the ARM RealView platforms"
 	depends on ARCH_REALVIEW
+1
drivers/soc/versatile/Makefile
···
+obj-$(CONFIG_SOC_INTEGRATOR_CM)	+= soc-integrator.o
 obj-$(CONFIG_SOC_REALVIEW)	+= soc-realview.o
+155
drivers/soc/versatile/soc-integrator.c
···
+/*
+ * Copyright (C) 2014 Linaro Ltd.
+ *
+ * Author: Linus Walleij <linus.walleij@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
+ *
+ */
+#include <linux/init.h>
+#include <linux/io.h>
+#include <linux/slab.h>
+#include <linux/sys_soc.h>
+#include <linux/platform_device.h>
+#include <linux/mfd/syscon.h>
+#include <linux/regmap.h>
+#include <linux/of.h>
+
+#define INTEGRATOR_HDR_ID_OFFSET 0x00
+
+static u32 integrator_coreid;
+
+static const struct of_device_id integrator_cm_match[] = {
+	{ .compatible = "arm,core-module-integrator", },
+	{ }
+};
+
+static const char *integrator_arch_str(u32 id)
+{
+	switch ((id >> 16) & 0xff) {
+	case 0x00:
+		return "ASB little-endian";
+	case 0x01:
+		return "AHB little-endian";
+	case 0x03:
+		return "AHB-Lite system bus, bi-endian";
+	case 0x04:
+		return "AHB";
+	case 0x08:
+		return "AHB system bus, ASB processor bus";
+	default:
+		return "Unknown";
+	}
+}
+
+static const char *integrator_fpga_str(u32 id)
+{
+	switch ((id >> 12) & 0xf) {
+	case 0x01:
+		return "XC4062";
+	case 0x02:
+		return "XC4085";
+	case 0x03:
+		return "XVC600";
+	case 0x04:
+		return "EPM7256AE (Altera PLD)";
+	default:
+		return "Unknown";
+	}
+}
+
+static ssize_t integrator_get_manf(struct device *dev,
+				   struct device_attribute *attr,
+				   char *buf)
+{
+	return sprintf(buf, "%02x\n", integrator_coreid >> 24);
+}
+
+static struct device_attribute integrator_manf_attr =
+	__ATTR(manufacturer, S_IRUGO, integrator_get_manf, NULL);
+
+static ssize_t integrator_get_arch(struct device *dev,
+				   struct device_attribute *attr,
+				   char *buf)
+{
+	return sprintf(buf, "%s\n", integrator_arch_str(integrator_coreid));
+}
+
+static struct device_attribute integrator_arch_attr =
+	__ATTR(arch, S_IRUGO, integrator_get_arch, NULL);
+
+static ssize_t integrator_get_fpga(struct device *dev,
+				   struct device_attribute *attr,
+				   char *buf)
+{
+	return sprintf(buf, "%s\n", integrator_fpga_str(integrator_coreid));
+}
+
+static struct device_attribute integrator_fpga_attr =
+	__ATTR(fpga, S_IRUGO, integrator_get_fpga, NULL);
+
+static ssize_t integrator_get_build(struct device *dev,
+				    struct device_attribute *attr,
+				    char *buf)
+{
+	return sprintf(buf, "%02x\n", (integrator_coreid >> 4) & 0xFF);
+}
+
+static struct device_attribute integrator_build_attr =
+	__ATTR(build, S_IRUGO, integrator_get_build, NULL);
+
+static int __init integrator_soc_init(void)
+{
+	static struct regmap *syscon_regmap;
+	struct soc_device *soc_dev;
+	struct soc_device_attribute *soc_dev_attr;
+	struct device_node *np;
+	struct device *dev;
+	u32 val;
+	int ret;
+
+	np = of_find_matching_node(NULL, integrator_cm_match);
+	if (!np)
+		return -ENODEV;
+
+	syscon_regmap = syscon_node_to_regmap(np);
+	if (IS_ERR(syscon_regmap))
+		return PTR_ERR(syscon_regmap);
+
+	ret = regmap_read(syscon_regmap, INTEGRATOR_HDR_ID_OFFSET,
+			  &val);
+	if (ret)
+		return -ENODEV;
+	integrator_coreid = val;
+
+	soc_dev_attr = kzalloc(sizeof(*soc_dev_attr), GFP_KERNEL);
+	if (!soc_dev_attr)
+		return -ENOMEM;
+
+	soc_dev_attr->soc_id = "Integrator";
+	soc_dev_attr->machine = "Integrator";
+	soc_dev_attr->family = "Versatile";
+	soc_dev = soc_device_register(soc_dev_attr);
+	if (IS_ERR(soc_dev)) {
+		kfree(soc_dev_attr);
+		return -ENODEV;
+	}
+	dev = soc_device_to_device(soc_dev);
+
+	device_create_file(dev, &integrator_manf_attr);
+	device_create_file(dev, &integrator_arch_attr);
+	device_create_file(dev, &integrator_fpga_attr);
+	device_create_file(dev, &integrator_build_attr);
+
+	dev_info(dev, "Detected ARM core module:\n");
+	dev_info(dev, "    Manufacturer: %02x\n", (val >> 24));
+	dev_info(dev, "    Architecture: %s\n", integrator_arch_str(val));
+	dev_info(dev, "    FPGA: %s\n", integrator_fpga_str(val));
+	dev_info(dev, "    Build: %02x\n", (val >> 4) & 0xFF);
+	dev_info(dev, "    Rev: %c\n", ('A' + (val & 0x03)));
+
+	return 0;
+}
+device_initcall(integrator_soc_init);
+15
include/dt-bindings/arm/ux500_pm_domains.h
···
+/*
+ * Copyright (C) 2014 Linaro Ltd.
+ *
+ * Author: Ulf Hansson <ulf.hansson@linaro.org>
+ * License terms: GNU General Public License (GPL) version 2
+ */
+#ifndef _DT_BINDINGS_ARM_UX500_PM_DOMAINS_H
+#define _DT_BINDINGS_ARM_UX500_PM_DOMAINS_H
+
+#define DOMAIN_VAPE	0
+
+/* Number of PM domains. */
+#define NR_DOMAINS	(DOMAIN_VAPE + 1)
+
+#endif
+4 -1
include/dt-bindings/clock/imx5-clock.h
···
 #define IMX5_CLK_OCRAM			186
 #define IMX5_CLK_SAHARA_IPG_GATE	187
 #define IMX5_CLK_SATA_REF		188
-#define IMX5_CLK_END			189
+#define IMX5_CLK_STEP_SEL		189
+#define IMX5_CLK_CPU_PODF_SEL		190
+#define IMX5_CLK_ARM			191
+#define IMX5_CLK_END			192
 
 #endif /* __DT_BINDINGS_CLOCK_IMX5_H */
+15
include/linux/clk/ti.h
···
 void omap2_init_clk_hw_omap_clocks(struct clk *clk);
 int omap3_noncore_dpll_enable(struct clk_hw *hw);
 void omap3_noncore_dpll_disable(struct clk_hw *hw);
+int omap3_noncore_dpll_set_parent(struct clk_hw *hw, u8 index);
 int omap3_noncore_dpll_set_rate(struct clk_hw *hw, unsigned long rate,
 				unsigned long parent_rate);
+int omap3_noncore_dpll_set_rate_and_parent(struct clk_hw *hw,
+					   unsigned long rate,
+					   unsigned long parent_rate,
+					   u8 index);
+long omap3_noncore_dpll_determine_rate(struct clk_hw *hw,
+				       unsigned long rate,
+				       unsigned long *best_parent_rate,
+				       struct clk **best_parent_clk);
 unsigned long omap4_dpll_regm4xen_recalc(struct clk_hw *hw,
 					 unsigned long parent_rate);
 long omap4_dpll_regm4xen_round_rate(struct clk_hw *hw,
 				    unsigned long target_rate,
 				    unsigned long *parent_rate);
+long omap4_dpll_regm4xen_determine_rate(struct clk_hw *hw,
+					unsigned long rate,
+					unsigned long *best_parent_rate,
+					struct clk **best_parent_clk);
 u8 omap2_init_dpll_parent(struct clk_hw *hw);
 unsigned long omap3_dpll_recalc(struct clk_hw *hw, unsigned long parent_rate);
 long omap2_dpll_round_rate(struct clk_hw *hw, unsigned long target_rate,
···
 void omap2_clk_enable_init_clocks(const char **clk_names, u8 num_clocks);
 int omap3_dpll4_set_rate(struct clk_hw *clk, unsigned long rate,
 			 unsigned long parent_rate);
+int omap3_dpll4_set_rate_and_parent(struct clk_hw *hw, unsigned long rate,
+				    unsigned long parent_rate, u8 index);
 int omap2_dflt_clk_enable(struct clk_hw *hw);
 void omap2_dflt_clk_disable(struct clk_hw *hw);
 int omap2_dflt_clk_is_enabled(struct clk_hw *hw);
+1
include/linux/mbus.h
···
 }
 #endif
 
+int mvebu_mbus_save_cpu_target(u32 *store_addr);
 void mvebu_mbus_get_pcie_mem_aperture(struct resource *res);
 void mvebu_mbus_get_pcie_io_aperture(struct resource *res);
 int mvebu_mbus_add_window_remap_by_id(unsigned int target,
+39
include/linux/mfd/syscon/imx6q-iomuxc-gpr.h
···
 #define IMX6SL_GPR1_FEC_CLOCK_MUX1_SEL_MASK	(0x3 << 17)
 #define IMX6SL_GPR1_FEC_CLOCK_MUX2_SEL_MASK	(0x1 << 14)
 
+/* For imx6sx iomux gpr register field define */
+#define IMX6SX_GPR1_VDEC_SW_RST_MASK			(0x1 << 20)
+#define IMX6SX_GPR1_VDEC_SW_RST_RESET			(0x1 << 20)
+#define IMX6SX_GPR1_VDEC_SW_RST_RELEASE			(0x0 << 20)
+#define IMX6SX_GPR1_VADC_SW_RST_MASK			(0x1 << 19)
+#define IMX6SX_GPR1_VADC_SW_RST_RESET			(0x1 << 19)
+#define IMX6SX_GPR1_VADC_SW_RST_RELEASE			(0x0 << 19)
+#define IMX6SX_GPR1_FEC_CLOCK_MUX_SEL_MASK		(0x3 << 13)
+#define IMX6SX_GPR1_FEC_CLOCK_PAD_DIR_MASK		(0x3 << 17)
+#define IMX6SX_GPR1_FEC_CLOCK_MUX_SEL_EXT		(0x3 << 13)
+
+#define IMX6SX_GPR4_FEC_ENET1_STOP_REQ			(0x1 << 3)
+#define IMX6SX_GPR4_FEC_ENET2_STOP_REQ			(0x1 << 4)
+
+#define IMX6SX_GPR5_DISP_MUX_LDB_CTRL_MASK		(0x1 << 3)
+#define IMX6SX_GPR5_DISP_MUX_LDB_CTRL_LCDIF1		(0x0 << 3)
+#define IMX6SX_GPR5_DISP_MUX_LDB_CTRL_LCDIF2		(0x1 << 3)
+
+#define IMX6SX_GPR5_CSI2_MUX_CTRL_MASK			(0x3 << 27)
+#define IMX6SX_GPR5_CSI2_MUX_CTRL_EXT_PIN		(0x0 << 27)
+#define IMX6SX_GPR5_CSI2_MUX_CTRL_CVD			(0x1 << 27)
+#define IMX6SX_GPR5_CSI2_MUX_CTRL_VDAC_TO_CSI		(0x2 << 27)
+#define IMX6SX_GPR5_CSI2_MUX_CTRL_GND			(0x3 << 27)
+#define IMX6SX_GPR5_VADC_TO_CSI_CAPTURE_EN_MASK		(0x1 << 26)
+#define IMX6SX_GPR5_VADC_TO_CSI_CAPTURE_EN_ENABLE	(0x1 << 26)
+#define IMX6SX_GPR5_VADC_TO_CSI_CAPTURE_EN_DISABLE	(0x0 << 26)
+#define IMX6SX_GPR5_CSI1_MUX_CTRL_MASK			(0x3 << 4)
+#define IMX6SX_GPR5_CSI1_MUX_CTRL_EXT_PIN		(0x0 << 4)
+#define IMX6SX_GPR5_CSI1_MUX_CTRL_CVD			(0x1 << 4)
+#define IMX6SX_GPR5_CSI1_MUX_CTRL_VDAC_TO_CSI		(0x2 << 4)
+#define IMX6SX_GPR5_CSI1_MUX_CTRL_GND			(0x3 << 4)
+
+#define IMX6SX_GPR5_DISP_MUX_DCIC2_LCDIF2		(0x0 << 2)
+#define IMX6SX_GPR5_DISP_MUX_DCIC2_LVDS			(0x1 << 2)
+#define IMX6SX_GPR5_DISP_MUX_DCIC2_MASK			(0x1 << 2)
+
+#define IMX6SX_GPR5_DISP_MUX_DCIC1_LCDIF1		(0x0 << 1)
+#define IMX6SX_GPR5_DISP_MUX_DCIC1_LVDS			(0x1 << 1)
+#define IMX6SX_GPR5_DISP_MUX_DCIC1_MASK			(0x1 << 1)
+
 #endif /* __LINUX_IMX6Q_IOMUXC_GPR_H */