
Merge tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc

Pull ARM SoC driver updates from Arnd Bergmann:
"This branch contains platform-related driver updates for ARM and
ARM64. These are the areas that bring the changes:

New drivers:

- driver support for Renesas R-Car V3M (R8A77970)

- power management support for Amlogic GX

- a new driver for the Tegra BPMP thermal sensor

- a new bus driver for Technologic Systems NBUS

Changes for subsystems that prefer to merge through arm-soc:

- the usual updates for reset controller drivers from Philipp Zabel,
with five added drivers for SoCs in the arc, meson, socfpga,
uniphier and mediatek families

- updates to the ARM SCPI and PSCI frameworks, from Sudeep Holla,
Heiner Kallweit and Lorenzo Pieralisi

Changes specific to some ARM-based SoCs:

- the Freescale/NXP DPAA QBMan drivers from PowerPC can now work on
ARM as well

- several changes for power management on Broadcom SoCs

- various improvements on Qualcomm, Broadcom, Amlogic, Atmel,
Mediatek

- minor cleanups for Samsung and TI OMAP SoCs"

[ NOTE! This doesn't work without the previous ARM SoC device-tree pull,
because the R8A77970 driver is missing a header file that came from
that pull.

The fact that this got merged afterwards only fixes it at this
point, but bisection of that driver will fail if/when you walk into
the history of that driver. - Linus ]
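[ The situation the note describes is the classic "unbuildable commit
mid-bisect" case. A minimal sketch of the usual workaround, `git bisect
skip`, run in a synthetic throwaway repository rather than the kernel
tree (the repo, file, and commit names here are invented for
illustration; assumes a plain `git` install is available):

```shell
# Reproduce the "commit doesn't build during bisect" situation in a
# throwaway repo and step around it with `git bisect skip`, as one
# would when walking into the R8A77970 driver history without the
# header file from the device-tree pull.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email tester@example.com
git config user.name tester

# Five commits; pretend the one bisect lands on first fails to build.
for i in 1 2 3 4 5; do
    echo "$i" > file
    git add file
    git commit -qm "commit $i"
done

git bisect start HEAD HEAD~4   # bad = commit 5, good = commit 1
git bisect skip                # current pick won't build: try a neighbor
log=$(git bisect log)          # skip is recorded; bisect continues around it
git bisect reset >/dev/null
echo "$log"
```

`git bisect skip` tells bisect to pick a nearby untested commit instead,
so a handful of unbuildable commits only widens the final suspect range
rather than aborting the bisection. ]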

* tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (96 commits)
soc: amlogic: meson-gx-pwrc-vpu: fix power-off when powered by bootloader
bus: add driver for the Technologic Systems NBUS
memory: omap-gpmc: Remove deprecated gpmc_update_nand_reg()
soc: qcom: remove unused label
soc: amlogic: gx pm domain: add PM and OF dependencies
drivers/firmware: psci_checker: Add missing destroy_timer_on_stack()
dt-bindings: power: add amlogic meson power domain bindings
soc: amlogic: add Meson GX VPU Domains driver
soc: qcom: Remote filesystem memory driver
dt-binding: soc: qcom: Add binding for rmtfs memory
of: reserved_mem: Accessor for acquiring reserved_mem
of/platform: Generalize /reserved-memory handling
soc: mediatek: pwrap: fix fatal compiler error
soc: mediatek: pwrap: fix compiler errors
arm64: mediatek: cleanup message for platform selection
soc: Allow test-building of MediaTek drivers
soc: mediatek: place Kconfig for all SoC drivers under menu
soc: mediatek: pwrap: add support for MT7622 SoC
soc: mediatek: pwrap: add common way for setup CS timing extenstion
soc: mediatek: pwrap: add MediaTek MT6380 as one slave of pwrap
...

+6940 -1003
+5 -1
Documentation/devicetree/bindings/arm/bcm/brcm,brcmstb.txt
··· 164 164 165 165 Required properties: 166 166 - compatible : should contain one of these 167 + "brcm,brcmstb-ddr-phy-v71.1" 168 + "brcm,brcmstb-ddr-phy-v72.0" 167 169 "brcm,brcmstb-ddr-phy-v225.1" 168 170 "brcm,brcmstb-ddr-phy-v240.1" 169 171 "brcm,brcmstb-ddr-phy-v240.2" ··· 186 184 Power-Down (SRPD), among other things. 187 185 188 186 Required properties: 189 - - compatible : should contain "brcm,brcmstb-memc-ddr" 187 + - compatible : should contain one of these 188 + "brcm,brcmstb-memc-ddr-rev-b.2.2" 189 + "brcm,brcmstb-memc-ddr" 190 190 - reg : the MEMC DDR register range 191 191 192 192 Example:
-1
Documentation/devicetree/bindings/arm/samsung/pmu.txt
··· 4 4 - compatible : should contain two values. First value must be one from following list: 5 5 - "samsung,exynos3250-pmu" - for Exynos3250 SoC, 6 6 - "samsung,exynos4210-pmu" - for Exynos4210 SoC, 7 - - "samsung,exynos4212-pmu" - for Exynos4212 SoC, 8 7 - "samsung,exynos4412-pmu" - for Exynos4412 SoC, 9 8 - "samsung,exynos5250-pmu" - for Exynos5250 SoC, 10 9 - "samsung,exynos5260-pmu" - for Exynos5260 SoC.
+2
Documentation/devicetree/bindings/firmware/qcom,scm.txt
··· 18 18 * Core, iface, and bus clocks required for "qcom,scm" 19 19 - clock-names: Must contain "core" for the core clock, "iface" for the interface 20 20 clock and "bus" for the bus clock per the requirements of the compatible. 21 + - qcom,dload-mode: phandle to the TCSR hardware block and offset of the 22 + download mode control register (optional) 21 23 22 24 Example for MSM8916: 23 25
+27
Documentation/devicetree/bindings/memory-controllers/brcm,dpfe-cpu.txt
··· 1 + DDR PHY Front End (DPFE) for Broadcom STB 2 + ========================================= 3 + 4 + DPFE and the DPFE firmware provide an interface for the host CPU to 5 + communicate with the DCPU, which resides inside the DDR PHY. 6 + 7 + There are three memory regions for interacting with the DCPU. These are 8 + specified in a single reg property. 9 + 10 + Required properties: 11 + - compatible: must be "brcm,bcm7271-dpfe-cpu", "brcm,bcm7268-dpfe-cpu" 12 + or "brcm,dpfe-cpu" 13 + - reg: must reference three register ranges 14 + - start address and length of the DCPU register space 15 + - start address and length of the DCPU data memory space 16 + - start address and length of the DCPU instruction memory space 17 + - reg-names: must contain "dpfe-cpu", "dpfe-dmem", and "dpfe-imem"; 18 + they must be in the same order as the register declarations 19 + 20 + Example: 21 + dpfe_cpu0: dpfe-cpu@f1132000 { 22 + compatible = "brcm,bcm7271-dpfe-cpu", "brcm,dpfe-cpu"; 23 + reg = <0xf1132000 0x180 24 + 0xf1134000 0x1000 25 + 0xf1138000 0x4000>; 26 + reg-names = "dpfe-cpu", "dpfe-dmem", "dpfe-imem"; 27 + };
+153
Documentation/devicetree/bindings/mips/brcm/soc.txt
··· 11 11 12 12 The experimental -viper variants are for running Linux on the 3384's 13 13 BMIPS4355 cable modem CPU instead of the BMIPS5000 application processor. 14 + 15 + Power management 16 + ---------------- 17 + 18 + For power management (particularly, S2/S3/S5 system suspend), the following SoC 19 + components are needed: 20 + 21 + = Always-On control block (AON CTRL) 22 + 23 + This hardware provides control registers for the "always-on" (even in low-power 24 + modes) hardware, such as the Power Management State Machine (PMSM). 25 + 26 + Required properties: 27 + - compatible : should be one of 28 + "brcm,bcm7425-aon-ctrl" 29 + "brcm,bcm7429-aon-ctrl" 30 + "brcm,bcm7435-aon-ctrl" and 31 + "brcm,brcmstb-aon-ctrl" 32 + - reg : the register start and length for the AON CTRL block 33 + 34 + Example: 35 + 36 + syscon@410000 { 37 + compatible = "brcm,bcm7425-aon-ctrl", "brcm,brcmstb-aon-ctrl"; 38 + reg = <0x410000 0x400>; 39 + }; 40 + 41 + = Memory controllers 42 + 43 + A Broadcom STB SoC typically has a number of independent memory controllers, 44 + each of which may have several associated hardware blocks, which are versioned 45 + independently (control registers, DDR PHYs, etc.). One might consider 46 + describing these controllers as a parent "memory controllers" block, which 47 + contains N sub-nodes (one for each controller in the system), each of which is 48 + associated with a number of hardware register resources (e.g., its PHY. 49 + 50 + == MEMC (MEMory Controller) 51 + 52 + Represents a single memory controller instance. 53 + 54 + Required properties: 55 + - compatible : should contain "brcm,brcmstb-memc" and "simple-bus" 56 + - ranges : should contain the child address in the parent address 57 + space, must be 0 here, and the register start and length of 58 + the entire memory controller (including all sub nodes: DDR PHY, 59 + arbiter, etc.) 
60 + - #address-cells : must be 1 61 + - #size-cells : must be 1 62 + 63 + Example: 64 + 65 + memory-controller@0 { 66 + compatible = "brcm,brcmstb-memc", "simple-bus"; 67 + ranges = <0x0 0x0 0xa000>; 68 + #address-cells = <1>; 69 + #size-cells = <1>; 70 + 71 + memc-arb@1000 { 72 + ... 73 + }; 74 + 75 + memc-ddr@2000 { 76 + ... 77 + }; 78 + 79 + ddr-phy@6000 { 80 + ... 81 + }; 82 + }; 83 + 84 + Should contain subnodes for any of the following relevant hardware resources: 85 + 86 + == DDR PHY control 87 + 88 + Control registers for this memory controller's DDR PHY. 89 + 90 + Required properties: 91 + - compatible : should contain one of these 92 + "brcm,brcmstb-ddr-phy-v64.5" 93 + "brcm,brcmstb-ddr-phy" 94 + 95 + - reg : the DDR PHY register range and length 96 + 97 + Example: 98 + 99 + ddr-phy@6000 { 100 + compatible = "brcm,brcmstb-ddr-phy-v64.5"; 101 + reg = <0x6000 0xc8>; 102 + }; 103 + 104 + == DDR memory controller sequencer 105 + 106 + Control registers for this memory controller's DDR memory sequencer 107 + 108 + Required properties: 109 + - compatible : should contain one of these 110 + "brcm,bcm7425-memc-ddr" 111 + "brcm,bcm7429-memc-ddr" 112 + "brcm,bcm7435-memc-ddr" and 113 + "brcm,brcmstb-memc-ddr" 114 + 115 + - reg : the DDR sequencer register range and length 116 + 117 + Example: 118 + 119 + memc-ddr@2000 { 120 + compatible = "brcm,bcm7425-memc-ddr", "brcm,brcmstb-memc-ddr"; 121 + reg = <0x2000 0x300>; 122 + }; 123 + 124 + == MEMC Arbiter 125 + 126 + The memory controller arbiter is responsible for memory clients allocation 127 + (bandwidth, priorities etc.) and needs to have its contents restored during 128 + deep sleep states (S3). 
129 + 130 + Required properties: 131 + 132 + - compatible : should contain one of these 133 + "brcm,brcmstb-memc-arb-v10.0.0.0" 134 + "brcm,brcmstb-memc-arb" 135 + 136 + - reg : the DDR Arbiter register range and length 137 + 138 + Example: 139 + 140 + memc-arb@1000 { 141 + compatible = "brcm,brcmstb-memc-arb-v10.0.0.0"; 142 + reg = <0x1000 0x248>; 143 + }; 144 + 145 + == Timers 146 + 147 + The Broadcom STB chips contain a timer block with several general purpose 148 + timers that can be used. 149 + 150 + Required properties: 151 + 152 + - compatible : should contain one of: 153 + "brcm,bcm7425-timers" 154 + "brcm,bcm7429-timers" 155 + "brcm,bcm7435-timers and 156 + "brcm,brcmstb-timers" 157 + - reg : the timers register range 158 + - interrupts : the interrupt line for this timer block 159 + 160 + Example: 161 + 162 + timers: timer@4067c0 { 163 + compatible = "brcm,bcm7425-timers", "brcm,brcmstb-timers"; 164 + reg = <0x4067c0 0x40>; 165 + interrupts = <&periph_intc 19>; 166 + };
+61
Documentation/devicetree/bindings/power/amlogic,meson-gx-pwrc.txt
··· 1 + Amlogic Meson Power Controller 2 + ============================== 3 + 4 + The Amlogic Meson SoCs embeds an internal Power domain controller. 5 + 6 + VPU Power Domain 7 + ---------------- 8 + 9 + The Video Processing Unit power domain is controlled by this power controller, 10 + but the domain requires some external resources to meet the correct power 11 + sequences. 12 + The bindings must respect the power domain bindings as described in the file 13 + power_domain.txt 14 + 15 + Device Tree Bindings: 16 + --------------------- 17 + 18 + Required properties: 19 + - compatible: should be "amlogic,meson-gx-pwrc-vpu" for the Meson GX SoCs 20 + - #power-domain-cells: should be 0 21 + - amlogic,hhi-sysctrl: phandle to the HHI sysctrl node 22 + - resets: phandles to the reset lines needed for this power demain sequence 23 + as described in ../reset/reset.txt 24 + - clocks: from common clock binding: handle to VPU and VAPB clocks 25 + - clock-names: from common clock binding: must contain "vpu", "vapb" 26 + corresponding to entry in the clocks property. 27 + 28 + Parent node should have the following properties : 29 + - compatible: "amlogic,meson-gx-ao-sysctrl", "syscon", "simple-mfd" 30 + - reg: base address and size of the AO system control register space. 
31 + 32 + Example: 33 + ------- 34 + 35 + ao_sysctrl: sys-ctrl@0 { 36 + compatible = "amlogic,meson-gx-ao-sysctrl", "syscon", "simple-mfd"; 37 + reg = <0x0 0x0 0x0 0x100>; 38 + 39 + pwrc_vpu: power-controller-vpu { 40 + compatible = "amlogic,meson-gx-pwrc-vpu"; 41 + #power-domain-cells = <0>; 42 + amlogic,hhi-sysctrl = <&sysctrl>; 43 + resets = <&reset RESET_VIU>, 44 + <&reset RESET_VENC>, 45 + <&reset RESET_VCBUS>, 46 + <&reset RESET_BT656>, 47 + <&reset RESET_DVIN_RESET>, 48 + <&reset RESET_RDMA>, 49 + <&reset RESET_VENCI>, 50 + <&reset RESET_VENCP>, 51 + <&reset RESET_VDAC>, 52 + <&reset RESET_VDI6>, 53 + <&reset RESET_VENCL>, 54 + <&reset RESET_VID_LOCK>; 55 + clocks = <&clkc CLKID_VPU>, 56 + <&clkc CLKID_VAPB>; 57 + clock-names = "vpu", "vapb"; 58 + }; 59 + }; 60 + 61 +
+1
Documentation/devicetree/bindings/power/renesas,rcar-sysc.txt
··· 17 17 - "renesas,r8a7794-sysc" (R-Car E2) 18 18 - "renesas,r8a7795-sysc" (R-Car H3) 19 19 - "renesas,r8a7796-sysc" (R-Car M3-W) 20 + - "renesas,r8a77970-sysc" (R-Car V3M) 20 21 - "renesas,r8a77995-sysc" (R-Car D3) 21 22 - reg: Address start and address range for the device. 22 23 - #power-domain-cells: Must be 1.
+51
Documentation/devicetree/bindings/reserved-memory/qcom,rmtfs-mem.txt
··· 1 + Qualcomm Remote File System Memory binding 2 + 3 + This binding describes the Qualcomm remote filesystem memory, which serves the 4 + purpose of describing the shared memory region used for remote processors to 5 + access block device data using the Remote Filesystem protocol. 6 + 7 + - compatible: 8 + Usage: required 9 + Value type: <stringlist> 10 + Definition: must be: 11 + "qcom,rmtfs-mem" 12 + 13 + - reg: 14 + Usage: required for static allocation 15 + Value type: <prop-encoded-array> 16 + Definition: must specify base address and size of the memory region, 17 + as described in reserved-memory.txt 18 + 19 + - size: 20 + Usage: required for dynamic allocation 21 + Value type: <prop-encoded-array> 22 + Definition: must specify a size of the memory region, as described in 23 + reserved-memory.txt 24 + 25 + - qcom,client-id: 26 + Usage: required 27 + Value type: <u32> 28 + Definition: identifier of the client to use this region for buffers. 29 + 30 + - qcom,vmid: 31 + Usage: optional 32 + Value type: <u32> 33 + Definition: vmid of the remote processor, to set up memory protection. 34 + 35 + = EXAMPLE 36 + The following example shows the remote filesystem memory setup for APQ8016, 37 + with the rmtfs region for the Hexagon DSP (id #1) located at 0x86700000. 38 + 39 + reserved-memory { 40 + #address-cells = <2>; 41 + #size-cells = <2>; 42 + ranges; 43 + 44 + rmtfs@86700000 { 45 + compatible = "qcom,rmtfs-mem"; 46 + reg = <0x0 0x86700000 0x0 0xe0000>; 47 + no-map; 48 + 49 + qcom,client-id = <1>; 50 + }; 51 + };
+1
Documentation/devicetree/bindings/reset/renesas,rst.txt
··· 26 26 - "renesas,r8a7794-rst" (R-Car E2) 27 27 - "renesas,r8a7795-rst" (R-Car H3) 28 28 - "renesas,r8a7796-rst" (R-Car M3-W) 29 + - "renesas,r8a77970-rst" (R-Car V3M) 29 30 - "renesas,r8a77995-rst" (R-Car D3) 30 31 - reg: Address start and address range for the device. 31 32
+33
Documentation/devicetree/bindings/reset/snps,axs10x-reset.txt
··· 1 + Binding for the AXS10x reset controller 2 + 3 + This binding describes the ARC AXS10x boards custom IP-block which allows 4 + to control reset signals of selected peripherals. For example DW GMAC, etc... 5 + This block is controlled via memory-mapped register (AKA CREG) which 6 + represents up-to 32 reset lines. 7 + 8 + As of today only the following lines are used: 9 + - DW GMAC - line 5 10 + 11 + This binding uses the common reset binding[1]. 12 + 13 + [1] Documentation/devicetree/bindings/reset/reset.txt 14 + 15 + Required properties: 16 + - compatible: should be "snps,axs10x-reset". 17 + - reg: should always contain pair address - length: for creg reset 18 + bits register. 19 + - #reset-cells: from common reset binding; Should always be set to 1. 20 + 21 + Example: 22 + reset: reset-controller@11220 { 23 + compatible = "snps,axs10x-reset"; 24 + #reset-cells = <1>; 25 + reg = <0x11220 0x4>; 26 + }; 27 + 28 + Specifying reset lines connected to IP modules: 29 + ethernet@.... { 30 + .... 31 + resets = <&reset 5>; 32 + .... 33 + };
+3
Documentation/devicetree/bindings/reset/uniphier-reset.txt
··· 13 13 "socionext,uniphier-pxs2-reset" - for PXs2/LD6b SoC 14 14 "socionext,uniphier-ld11-reset" - for LD11 SoC 15 15 "socionext,uniphier-ld20-reset" - for LD20 SoC 16 + "socionext,uniphier-pxs3-reset" - for PXs3 SoC 16 17 - #reset-cells: should be 1. 17 18 18 19 Example: ··· 45 44 "socionext,uniphier-ld11-mio-reset" - for LD11 SoC (MIO) 46 45 "socionext,uniphier-ld11-sd-reset" - for LD11 SoC (SD) 47 46 "socionext,uniphier-ld20-sd-reset" - for LD20 SoC 47 + "socionext,uniphier-pxs3-sd-reset" - for PXs3 SoC 48 48 - #reset-cells: should be 1. 49 49 50 50 Example: ··· 76 74 "socionext,uniphier-pxs2-peri-reset" - for PXs2/LD6b SoC 77 75 "socionext,uniphier-ld11-peri-reset" - for LD11 SoC 78 76 "socionext,uniphier-ld20-peri-reset" - for LD20 SoC 77 + "socionext,uniphier-pxs3-peri-reset" - for PXs3 SoC 79 78 - #reset-cells: should be 1. 80 79 81 80 Example:
+7 -5
Documentation/devicetree/bindings/soc/fsl/bman.txt
··· 65 65 BMan Private Memory Node 66 66 67 67 BMan requires a contiguous range of physical memory used for the backing store 68 - for BMan Free Buffer Proxy Records (FBPR). This memory is reserved/allocated as a 69 - node under the /reserved-memory node 68 + for BMan Free Buffer Proxy Records (FBPR). This memory is reserved/allocated as 69 + a node under the /reserved-memory node. 70 70 71 71 The BMan FBPR memory node must be named "bman-fbpr" 72 72 ··· 75 75 - compatible 76 76 Usage: required 77 77 Value type: <stringlist> 78 - Definition: Must inclide "fsl,bman-fbpr" 78 + Definition: PPC platforms: Must include "fsl,bman-fbpr" 79 + ARM platforms: Must include "shared-dma-pool" 80 + as well as the "no-map" property 79 81 80 82 The following constraints are relevant to the FBPR private memory: 81 83 - The size must be 2^(size + 1), with size = 11..33. That is 4 KiB to ··· 102 100 ranges; 103 101 104 102 bman_fbpr: bman-fbpr { 105 - compatible = "fsl,bman-fbpr"; 106 - alloc-ranges = <0 0 0x10 0>; 103 + compatible = "shared-mem-pool"; 107 104 size = <0 0x1000000>; 108 105 alignment = <0 0x1000000>; 106 + no-map; 109 107 }; 110 108 }; 111 109
+19 -7
Documentation/devicetree/bindings/soc/fsl/qman.txt
··· 60 60 Value type: <prop-encoded-array> 61 61 Definition: Reference input clock. Its frequency is half of the 62 62 platform clock 63 + - memory-regions 64 + Usage: Required for ARM 65 + Value type: <phandle array> 66 + Definition: List of phandles referencing the QMan private memory 67 + nodes (described below). The qman-fqd node must be 68 + first followed by qman-pfdr node. Only used on ARM 63 69 64 70 Devices connected to a QMan instance via Direct Connect Portals (DCP) must link 65 71 to the respective QMan instance ··· 80 74 81 75 QMan requires two contiguous range of physical memory used for the backing store 82 76 for QMan Frame Queue Descriptor (FQD) and Packed Frame Descriptor Record (PFDR). 83 - This memory is reserved/allocated as a nodes under the /reserved-memory node 77 + This memory is reserved/allocated as a node under the /reserved-memory node. 78 + 79 + For additional details about reserved memory regions see reserved-memory.txt 84 80 85 81 The QMan FQD memory node must be named "qman-fqd" 86 82 ··· 91 83 - compatible 92 84 Usage: required 93 85 Value type: <stringlist> 94 - Definition: Must inclide "fsl,qman-fqd" 86 + Definition: PPC platforms: Must include "fsl,qman-fqd" 87 + ARM platforms: Must include "shared-dma-pool" 88 + as well as the "no-map" property 95 89 96 90 The QMan PFDR memory node must be named "qman-pfdr" 97 91 ··· 102 92 - compatible 103 93 Usage: required 104 94 Value type: <stringlist> 105 - Definition: Must inclide "fsl,qman-pfdr" 95 + Definition: PPC platforms: Must include "fsl,qman-pfdr" 96 + ARM platforms: Must include "shared-dma-pool" 97 + as well as the "no-map" property 106 98 107 99 The following constraints are relevant to the FQD and PFDR private memory: 108 100 - The size must be 2^(size + 1), with size = 11..29. 
That is 4 KiB to ··· 129 117 ranges; 130 118 131 119 qman_fqd: qman-fqd { 132 - compatible = "fsl,qman-fqd"; 133 - alloc-ranges = <0 0 0x10 0>; 120 + compatible = "shared-dma-pool"; 134 121 size = <0 0x400000>; 135 122 alignment = <0 0x400000>; 123 + no-map; 136 124 }; 137 125 qman_pfdr: qman-pfdr { 138 - compatible = "fsl,qman-pfdr"; 139 - alloc-ranges = <0 0 0x10 0>; 126 + compatible = "shared-dma-pool"; 140 127 size = <0 0x2000000>; 141 128 alignment = <0 0x2000000>; 129 + no-map; 142 130 }; 143 131 }; 144 132
+5 -1
Documentation/devicetree/bindings/soc/mediatek/pwrap.txt
··· 19 19 Required properties in pwrap device node. 20 20 - compatible: 21 21 "mediatek,mt2701-pwrap" for MT2701/7623 SoCs 22 + "mediatek,mt7622-pwrap" for MT7622 SoCs 22 23 "mediatek,mt8135-pwrap" for MT8135 SoCs 23 24 "mediatek,mt8173-pwrap" for MT8173 SoCs 24 25 - interrupts: IRQ for pwrap in SOC ··· 37 36 - clocks: Must contain an entry for each entry in clock-names. 38 37 39 38 Optional properities: 40 - - pmic: Mediatek PMIC MFD is the child device of pwrap 39 + - pmic: Using either MediaTek PMIC MFD as the child device of pwrap 41 40 See the following for child node definitions: 42 41 Documentation/devicetree/bindings/mfd/mt6397.txt 42 + or the regulator-only device as the child device of pwrap, such as MT6380. 43 + See the following definitions for such kinds of devices. 44 + Documentation/devicetree/bindings/regulator/mt6380-regulator.txt 43 45 44 46 Example: 45 47 pwrap: pwrap@1000f000 {
+19 -1
MAINTAINERS
··· 1219 1219 W: http://www.linux4sam.org 1220 1220 T: git git://git.kernel.org/pub/scm/linux/kernel/git/nferre/linux-at91.git 1221 1221 S: Supported 1222 + N: at91 1223 + N: atmel 1222 1224 F: arch/arm/mach-at91/ 1223 1225 F: include/soc/at91/ 1224 1226 F: arch/arm/boot/dts/at91*.dts ··· 1229 1227 F: arch/arm/boot/dts/sama*.dtsi 1230 1228 F: arch/arm/include/debug/at91.S 1231 1229 F: drivers/memory/atmel* 1230 + F: drivers/watchdog/sama5d4_wdt.c 1231 + X: drivers/input/touchscreen/atmel_mxt_ts.c 1232 + X: drivers/net/wireless/atmel/ 1232 1233 1233 1234 ARM/CALXEDA HIGHBANK ARCHITECTURE 1234 1235 M: Rob Herring <robh@kernel.org> ··· 2146 2141 F: drivers/i2c/busses/i2c-zx2967.c 2147 2142 F: drivers/mmc/host/dw_mmc-zx.* 2148 2143 F: drivers/pinctrl/zte/ 2149 - F: drivers/reset/reset-zx2967.c 2150 2144 F: drivers/soc/zte/ 2151 2145 F: drivers/thermal/zx2967_thermal.c 2152 2146 F: drivers/watchdog/zx2967_wdt.c ··· 2993 2989 L: bcm-kernel-feedback-list@broadcom.com 2994 2990 S: Maintained 2995 2991 F: drivers/mtd/nand/brcmnand/ 2992 + 2993 + BROADCOM STB DPFE DRIVER 2994 + M: Markus Mayer <mmayer@broadcom.com> 2995 + M: bcm-kernel-feedback-list@broadcom.com 2996 + L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 2997 + S: Maintained 2998 + F: Documentation/devicetree/bindings/memory-controllers/brcm,dpfe-cpu.txt 2999 + F: drivers/memory/brcmstb_dpfe.c 2996 3000 2997 3001 BROADCOM SYSTEMPORT ETHERNET DRIVER 2998 3002 M: Florian Fainelli <f.fainelli@gmail.com> ··· 13015 13003 F: arch/arc/plat-axs10x 13016 13004 F: arch/arc/boot/dts/ax* 13017 13005 F: Documentation/devicetree/bindings/arc/axs10* 13006 + 13007 + SYNOPSYS AXS10x RESET CONTROLLER DRIVER 13008 + M: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com> 13009 + S: Supported 13010 + F: drivers/reset/reset-axs10x.c 13011 + F: Documentation/devicetree/bindings/reset/snps,axs10x-reset.txt 13018 13012 13019 13013 SYNOPSYS DESIGNWARE APB GPIO DRIVER 13020 13014 M: Hoan Tran <hotran@apm.com>
+2
arch/arm/mach-mediatek/platsmp.c
··· 54 54 { .compatible = "mediatek,mt8135", .data = &mtk_mt8135_tz_boot }, 55 55 { .compatible = "mediatek,mt8127", .data = &mtk_mt8135_tz_boot }, 56 56 { .compatible = "mediatek,mt2701", .data = &mtk_mt8135_tz_boot }, 57 + {}, 57 58 }; 58 59 59 60 static const struct of_device_id mtk_smp_boot_infos[] __initconst = { 60 61 { .compatible = "mediatek,mt6589", .data = &mtk_mt6589_boot }, 61 62 { .compatible = "mediatek,mt7623", .data = &mtk_mt7623_boot }, 62 63 { .compatible = "mediatek,mt7623a", .data = &mtk_mt7623_boot }, 64 + {}, 63 65 }; 64 66 65 67 static void __iomem *mtk_smp_base;
+3 -2
arch/arm64/Kconfig.platforms
··· 91 91 This enables support for Hisilicon ARMv8 SoC family 92 92 93 93 config ARCH_MEDIATEK 94 - bool "Mediatek MT65xx & MT81xx ARMv8 SoC" 94 + bool "MediaTek SoC Family" 95 95 select ARM_GIC 96 96 select PINCTRL 97 97 select MTK_TIMER 98 98 help 99 - Support for Mediatek MT65xx & MT81xx ARMv8 SoCs 99 + This enables support for MediaTek MT27xx, MT65xx, MT76xx 100 + & MT81xx ARMv8 SoCs 100 101 101 102 config ARCH_MESON 102 103 bool "Amlogic Platforms"
+8
drivers/bus/Kconfig
··· 165 165 Generic driver for Texas Instruments interconnect target module 166 166 found on many TI SoCs. 167 167 168 + config TS_NBUS 169 + tristate "Technologic Systems NBUS Driver" 170 + depends on SOC_IMX28 171 + depends on OF_GPIO && PWM 172 + help 173 + Driver for the Technologic Systems NBUS which is used to interface 174 + with the peripherals in the FPGA of the TS-4600 SoM. 175 + 168 176 config UNIPHIER_SYSTEM_BUS 169 177 tristate "UniPhier System Bus driver" 170 178 depends on ARCH_UNIPHIER && OF
+1
drivers/bus/Makefile
··· 22 22 obj-$(CONFIG_TEGRA_ACONNECT) += tegra-aconnect.o 23 23 obj-$(CONFIG_TEGRA_GMI) += tegra-gmi.o 24 24 obj-$(CONFIG_TI_SYSC) += ti-sysc.o 25 + obj-$(CONFIG_TS_NBUS) += ts-nbus.o 25 26 obj-$(CONFIG_UNIPHIER_SYSTEM_BUS) += uniphier-system-bus.o 26 27 obj-$(CONFIG_VEXPRESS_CONFIG) += vexpress-config.o 27 28
+375
drivers/bus/ts-nbus.c
··· 1 + /* 2 + * NBUS driver for TS-4600 based boards 3 + * 4 + * Copyright (c) 2016 - Savoir-faire Linux 5 + * Author: Sebastien Bourdelin <sebastien.bourdelin@savoirfairelinux.com> 6 + * 7 + * This file is licensed under the terms of the GNU General Public 8 + * License version 2. This program is licensed "as is" without any 9 + * warranty of any kind, whether express or implied. 10 + * 11 + * This driver implements a GPIOs bit-banged bus, called the NBUS by Technologic 12 + * Systems. It is used to communicate with the peripherals in the FPGA on the 13 + * TS-4600 SoM. 14 + */ 15 + 16 + #include <linux/bitops.h> 17 + #include <linux/gpio/consumer.h> 18 + #include <linux/kernel.h> 19 + #include <linux/module.h> 20 + #include <linux/mutex.h> 21 + #include <linux/of_platform.h> 22 + #include <linux/platform_device.h> 23 + #include <linux/pwm.h> 24 + #include <linux/ts-nbus.h> 25 + 26 + #define TS_NBUS_DIRECTION_IN 0 27 + #define TS_NBUS_DIRECTION_OUT 1 28 + #define TS_NBUS_WRITE_ADR 0 29 + #define TS_NBUS_WRITE_VAL 1 30 + 31 + struct ts_nbus { 32 + struct pwm_device *pwm; 33 + struct gpio_descs *data; 34 + struct gpio_desc *csn; 35 + struct gpio_desc *txrx; 36 + struct gpio_desc *strobe; 37 + struct gpio_desc *ale; 38 + struct gpio_desc *rdy; 39 + struct mutex lock; 40 + }; 41 + 42 + /* 43 + * request all gpios required by the bus. 
44 + */ 45 + static int ts_nbus_init_pdata(struct platform_device *pdev, struct ts_nbus 46 + *ts_nbus) 47 + { 48 + ts_nbus->data = devm_gpiod_get_array(&pdev->dev, "ts,data", 49 + GPIOD_OUT_HIGH); 50 + if (IS_ERR(ts_nbus->data)) { 51 + dev_err(&pdev->dev, "failed to retrieve ts,data-gpio from dts\n"); 52 + return PTR_ERR(ts_nbus->data); 53 + } 54 + 55 + ts_nbus->csn = devm_gpiod_get(&pdev->dev, "ts,csn", GPIOD_OUT_HIGH); 56 + if (IS_ERR(ts_nbus->csn)) { 57 + dev_err(&pdev->dev, "failed to retrieve ts,csn-gpio from dts\n"); 58 + return PTR_ERR(ts_nbus->csn); 59 + } 60 + 61 + ts_nbus->txrx = devm_gpiod_get(&pdev->dev, "ts,txrx", GPIOD_OUT_HIGH); 62 + if (IS_ERR(ts_nbus->txrx)) { 63 + dev_err(&pdev->dev, "failed to retrieve ts,txrx-gpio from dts\n"); 64 + return PTR_ERR(ts_nbus->txrx); 65 + } 66 + 67 + ts_nbus->strobe = devm_gpiod_get(&pdev->dev, "ts,strobe", GPIOD_OUT_HIGH); 68 + if (IS_ERR(ts_nbus->strobe)) { 69 + dev_err(&pdev->dev, "failed to retrieve ts,strobe-gpio from dts\n"); 70 + return PTR_ERR(ts_nbus->strobe); 71 + } 72 + 73 + ts_nbus->ale = devm_gpiod_get(&pdev->dev, "ts,ale", GPIOD_OUT_HIGH); 74 + if (IS_ERR(ts_nbus->ale)) { 75 + dev_err(&pdev->dev, "failed to retrieve ts,ale-gpio from dts\n"); 76 + return PTR_ERR(ts_nbus->ale); 77 + } 78 + 79 + ts_nbus->rdy = devm_gpiod_get(&pdev->dev, "ts,rdy", GPIOD_IN); 80 + if (IS_ERR(ts_nbus->rdy)) { 81 + dev_err(&pdev->dev, "failed to retrieve ts,rdy-gpio from dts\n"); 82 + return PTR_ERR(ts_nbus->rdy); 83 + } 84 + 85 + return 0; 86 + } 87 + 88 + /* 89 + * the data gpios are used for reading and writing values, their directions 90 + * should be adjusted accordingly. 
91 + */ 92 + static void ts_nbus_set_direction(struct ts_nbus *ts_nbus, int direction) 93 + { 94 + int i; 95 + 96 + for (i = 0; i < 8; i++) { 97 + if (direction == TS_NBUS_DIRECTION_IN) 98 + gpiod_direction_input(ts_nbus->data->desc[i]); 99 + else 100 + /* when used as output the default state of the data 101 + * lines are set to high */ 102 + gpiod_direction_output(ts_nbus->data->desc[i], 1); 103 + } 104 + } 105 + 106 + /* 107 + * reset the bus in its initial state. 108 + * The data, csn, strobe and ale lines must be zero'ed to let the FPGA knows a 109 + * new transaction can be process. 110 + */ 111 + static void ts_nbus_reset_bus(struct ts_nbus *ts_nbus) 112 + { 113 + int i; 114 + int values[8]; 115 + 116 + for (i = 0; i < 8; i++) 117 + values[i] = 0; 118 + 119 + gpiod_set_array_value_cansleep(8, ts_nbus->data->desc, values); 120 + gpiod_set_value_cansleep(ts_nbus->csn, 0); 121 + gpiod_set_value_cansleep(ts_nbus->strobe, 0); 122 + gpiod_set_value_cansleep(ts_nbus->ale, 0); 123 + } 124 + 125 + /* 126 + * let the FPGA knows it can process. 127 + */ 128 + static void ts_nbus_start_transaction(struct ts_nbus *ts_nbus) 129 + { 130 + gpiod_set_value_cansleep(ts_nbus->strobe, 1); 131 + } 132 + 133 + /* 134 + * read a byte value from the data gpios. 135 + * return 0 on success or negative errno on failure. 136 + */ 137 + static int ts_nbus_read_byte(struct ts_nbus *ts_nbus, u8 *val) 138 + { 139 + struct gpio_descs *gpios = ts_nbus->data; 140 + int ret, i; 141 + 142 + *val = 0; 143 + for (i = 0; i < 8; i++) { 144 + ret = gpiod_get_value_cansleep(gpios->desc[i]); 145 + if (ret < 0) 146 + return ret; 147 + if (ret) 148 + *val |= BIT(i); 149 + } 150 + 151 + return 0; 152 + } 153 + 154 + /* 155 + * set the data gpios accordingly to the byte value. 
156 + */ 157 + static void ts_nbus_write_byte(struct ts_nbus *ts_nbus, u8 byte) 158 + { 159 + struct gpio_descs *gpios = ts_nbus->data; 160 + int i; 161 + int values[8]; 162 + 163 + for (i = 0; i < 8; i++) 164 + if (byte & BIT(i)) 165 + values[i] = 1; 166 + else 167 + values[i] = 0; 168 + 169 + gpiod_set_array_value_cansleep(8, gpios->desc, values); 170 + } 171 + 172 + /* 173 + * reading the bus consists of resetting the bus, then notifying the FPGA to 174 + * send the data in the data gpios and return the read value. 175 + * return 0 on success or negative errno on failure. 176 + */ 177 + static int ts_nbus_read_bus(struct ts_nbus *ts_nbus, u8 *val) 178 + { 179 + ts_nbus_reset_bus(ts_nbus); 180 + ts_nbus_start_transaction(ts_nbus); 181 + 182 + return ts_nbus_read_byte(ts_nbus, val); 183 + } 184 + 185 + /* 186 + * writing to the bus consists of resetting the bus, then define the type of 187 + * command (address/value), write the data and notify the FPGA to retrieve the 188 + * value in the data gpios. 189 + */ 190 + static void ts_nbus_write_bus(struct ts_nbus *ts_nbus, int cmd, u8 val) 191 + { 192 + ts_nbus_reset_bus(ts_nbus); 193 + 194 + if (cmd == TS_NBUS_WRITE_ADR) 195 + gpiod_set_value_cansleep(ts_nbus->ale, 1); 196 + 197 + ts_nbus_write_byte(ts_nbus, val); 198 + ts_nbus_start_transaction(ts_nbus); 199 + } 200 + 201 + /* 202 + * read the value in the FPGA register at the given address. 203 + * return 0 on success or negative errno on failure. 
204 + */ 205 + int ts_nbus_read(struct ts_nbus *ts_nbus, u8 adr, u16 *val) 206 + { 207 + int ret, i; 208 + u8 byte; 209 + 210 + /* bus access must be atomic */ 211 + mutex_lock(&ts_nbus->lock); 212 + 213 + /* set the bus in read mode */ 214 + gpiod_set_value_cansleep(ts_nbus->txrx, 0); 215 + 216 + /* write address */ 217 + ts_nbus_write_bus(ts_nbus, TS_NBUS_WRITE_ADR, adr); 218 + 219 + /* set the data gpios direction as input before reading */ 220 + ts_nbus_set_direction(ts_nbus, TS_NBUS_DIRECTION_IN); 221 + 222 + /* reading value MSB first */ 223 + do { 224 + *val = 0; 225 + byte = 0; 226 + for (i = 1; i >= 0; i--) { 227 + /* read a byte from the bus, leave on error */ 228 + ret = ts_nbus_read_bus(ts_nbus, &byte); 229 + if (ret < 0) 230 + goto err; 231 + 232 + /* append the byte read to the final value */ 233 + *val |= byte << (i * 8); 234 + } 235 + gpiod_set_value_cansleep(ts_nbus->csn, 1); 236 + ret = gpiod_get_value_cansleep(ts_nbus->rdy); 237 + } while (ret); 238 + 239 + err: 240 + /* restore the data gpios direction as output after reading */ 241 + ts_nbus_set_direction(ts_nbus, TS_NBUS_DIRECTION_OUT); 242 + 243 + mutex_unlock(&ts_nbus->lock); 244 + 245 + return ret; 246 + } 247 + EXPORT_SYMBOL_GPL(ts_nbus_read); 248 + 249 + /* 250 + * write the desired value in the FPGA register at the given address. 
251 + */ 252 + int ts_nbus_write(struct ts_nbus *ts_nbus, u8 adr, u16 val) 253 + { 254 + int i; 255 + 256 + /* bus access must be atomic */ 257 + mutex_lock(&ts_nbus->lock); 258 + 259 + /* set the bus in write mode */ 260 + gpiod_set_value_cansleep(ts_nbus->txrx, 1); 261 + 262 + /* write address */ 263 + ts_nbus_write_bus(ts_nbus, TS_NBUS_WRITE_ADR, adr); 264 + 265 + /* writing value MSB first */ 266 + for (i = 1; i >= 0; i--) 267 + ts_nbus_write_bus(ts_nbus, TS_NBUS_WRITE_VAL, (u8)(val >> (i * 8))); 268 + 269 + /* wait for completion */ 270 + gpiod_set_value_cansleep(ts_nbus->csn, 1); 271 + while (gpiod_get_value_cansleep(ts_nbus->rdy) != 0) { 272 + gpiod_set_value_cansleep(ts_nbus->csn, 0); 273 + gpiod_set_value_cansleep(ts_nbus->csn, 1); 274 + } 275 + 276 + mutex_unlock(&ts_nbus->lock); 277 + 278 + return 0; 279 + } 280 + EXPORT_SYMBOL_GPL(ts_nbus_write); 281 + 282 + static int ts_nbus_probe(struct platform_device *pdev) 283 + { 284 + struct pwm_device *pwm; 285 + struct pwm_args pargs; 286 + struct device *dev = &pdev->dev; 287 + struct ts_nbus *ts_nbus; 288 + int ret; 289 + 290 + ts_nbus = devm_kzalloc(dev, sizeof(*ts_nbus), GFP_KERNEL); 291 + if (!ts_nbus) 292 + return -ENOMEM; 293 + 294 + mutex_init(&ts_nbus->lock); 295 + 296 + ret = ts_nbus_init_pdata(pdev, ts_nbus); 297 + if (ret < 0) 298 + return ret; 299 + 300 + pwm = devm_pwm_get(dev, NULL); 301 + if (IS_ERR(pwm)) { 302 + ret = PTR_ERR(pwm); 303 + if (ret != -EPROBE_DEFER) 304 + dev_err(dev, "unable to request PWM\n"); 305 + return ret; 306 + } 307 + 308 + pwm_get_args(pwm, &pargs); 309 + if (!pargs.period) { 310 + dev_err(&pdev->dev, "invalid PWM period\n"); 311 + return -EINVAL; 312 + } 313 + 314 + /* 315 + * FIXME: pwm_apply_args() should be removed when switching to 316 + * the atomic PWM API. 317 + */ 318 + pwm_apply_args(pwm); 319 + ret = pwm_config(pwm, pargs.period, pargs.period); 320 + if (ret < 0) 321 + return ret; 322 + 323 + /* 324 + * we can now start the FPGA and populate the peripherals. 
325 + */ 326 + pwm_enable(pwm); 327 + ts_nbus->pwm = pwm; 328 + 329 + /* 330 + * let the child nodes retrieve this instance of the ts-nbus. 331 + */ 332 + dev_set_drvdata(dev, ts_nbus); 333 + 334 + ret = of_platform_populate(dev->of_node, NULL, NULL, dev); 335 + if (ret < 0) 336 + return ret; 337 + 338 + dev_info(dev, "initialized\n"); 339 + 340 + return 0; 341 + } 342 + 343 + static int ts_nbus_remove(struct platform_device *pdev) 344 + { 345 + struct ts_nbus *ts_nbus = dev_get_drvdata(&pdev->dev); 346 + 347 + /* shutdown the FPGA */ 348 + mutex_lock(&ts_nbus->lock); 349 + pwm_disable(ts_nbus->pwm); 350 + mutex_unlock(&ts_nbus->lock); 351 + 352 + return 0; 353 + } 354 + 355 + static const struct of_device_id ts_nbus_of_match[] = { 356 + { .compatible = "technologic,ts-nbus", }, 357 + { }, 358 + }; 359 + MODULE_DEVICE_TABLE(of, ts_nbus_of_match); 360 + 361 + static struct platform_driver ts_nbus_driver = { 362 + .probe = ts_nbus_probe, 363 + .remove = ts_nbus_remove, 364 + .driver = { 365 + .name = "ts_nbus", 366 + .of_match_table = ts_nbus_of_match, 367 + }, 368 + }; 369 + 370 + module_platform_driver(ts_nbus_driver); 371 + 372 + MODULE_ALIAS("platform:ts_nbus"); 373 + MODULE_AUTHOR("Sebastien Bourdelin <sebastien.bourdelin@savoirfairelinux.com>"); 374 + MODULE_DESCRIPTION("Technologic Systems NBUS"); 375 + MODULE_LICENSE("GPL v2");
+9
drivers/clk/bcm/Kconfig
··· 30 30 help 31 31 Enable common clock framework support for the Broadcom Cygnus SoC 32 32 33 + config CLK_BCM_HR2 34 + bool "Broadcom Hurricane 2 clock support" 35 + depends on ARCH_BCM_HR2 || COMPILE_TEST 36 + select COMMON_CLK_IPROC 37 + default ARCH_BCM_HR2 38 + help 39 + Enable common clock framework support for the Broadcom Hurricane 2 40 + SoC 41 + 33 42 config CLK_BCM_NSP 34 43 bool "Broadcom Northstar/Northstar Plus clock support" 35 44 depends on ARCH_BCM_5301X || ARCH_BCM_NSP || COMPILE_TEST
+1
drivers/clk/bcm/Makefile
··· 9 9 obj-$(CONFIG_ARCH_BCM2835) += clk-bcm2835-aux.o 10 10 obj-$(CONFIG_ARCH_BCM_53573) += clk-bcm53573-ilp.o 11 11 obj-$(CONFIG_CLK_BCM_CYGNUS) += clk-cygnus.o 12 + obj-$(CONFIG_CLK_BCM_HR2) += clk-hr2.o 12 13 obj-$(CONFIG_CLK_BCM_NSP) += clk-nsp.o 13 14 obj-$(CONFIG_CLK_BCM_NS2) += clk-ns2.o 14 15 obj-$(CONFIG_CLK_BCM_SR) += clk-sr.o
+27
drivers/clk/bcm/clk-hr2.c
··· 1 + /* 2 + * Copyright (C) 2017 Broadcom 3 + * 4 + * This program is free software; you can redistribute it and/or 5 + * modify it under the terms of the GNU General Public License as 6 + * published by the Free Software Foundation version 2. 7 + * 8 + * This program is distributed "as is" WITHOUT ANY WARRANTY of any 9 + * kind, whether express or implied; without even the implied warranty 10 + * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 11 + * GNU General Public License for more details. 12 + */ 13 + 14 + #include <linux/kernel.h> 15 + #include <linux/err.h> 16 + #include <linux/clk-provider.h> 17 + #include <linux/io.h> 18 + #include <linux/of.h> 19 + #include <linux/of_address.h> 20 + 21 + #include "clk-iproc.h" 22 + 23 + static void __init hr2_armpll_init(struct device_node *node) 24 + { 25 + iproc_armpll_setup(node); 26 + } 27 + CLK_OF_DECLARE(hr2_armpll, "brcm,hr2-armpll", hr2_armpll_init);
+11
drivers/firmware/Kconfig
··· 215 215 def_bool y 216 216 depends on QCOM_SCM && ARM64 217 217 218 + config QCOM_SCM_DOWNLOAD_MODE_DEFAULT 219 + bool "Qualcomm download mode enabled by default" 220 + depends on QCOM_SCM 221 + help 222 + A device with "download mode" enabled will, upon an unexpected 223 + warm restart, enter a special debug mode that allows the user to 224 + "download" memory content over USB for offline postmortem analysis. 225 + The feature can be enabled/disabled on the kernel command line. 226 + 227 + Say Y here to enable "download mode" by default. 228 + 218 229 config TI_SCI_PROTOCOL 219 230 tristate "TI System Control Interface (TISCI) Message Protocol" 220 231 depends on TI_MESSAGE_MANAGER
+85 -127
drivers/firmware/arm_scpi.c
··· 28 28 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 29 29 30 30 #include <linux/bitmap.h> 31 + #include <linux/bitfield.h> 31 32 #include <linux/device.h> 32 33 #include <linux/err.h> 33 34 #include <linux/export.h> ··· 73 72 74 73 #define MAX_DVFS_DOMAINS 8 75 74 #define MAX_DVFS_OPPS 16 76 - #define DVFS_LATENCY(hdr) (le32_to_cpu(hdr) >> 16) 77 - #define DVFS_OPP_COUNT(hdr) ((le32_to_cpu(hdr) >> 8) & 0xff) 78 75 79 - #define PROTOCOL_REV_MINOR_BITS 16 80 - #define PROTOCOL_REV_MINOR_MASK ((1U << PROTOCOL_REV_MINOR_BITS) - 1) 81 - #define PROTOCOL_REV_MAJOR(x) ((x) >> PROTOCOL_REV_MINOR_BITS) 82 - #define PROTOCOL_REV_MINOR(x) ((x) & PROTOCOL_REV_MINOR_MASK) 76 + #define PROTO_REV_MAJOR_MASK GENMASK(31, 16) 77 + #define PROTO_REV_MINOR_MASK GENMASK(15, 0) 83 78 84 - #define FW_REV_MAJOR_BITS 24 85 - #define FW_REV_MINOR_BITS 16 86 - #define FW_REV_PATCH_MASK ((1U << FW_REV_MINOR_BITS) - 1) 87 - #define FW_REV_MINOR_MASK ((1U << FW_REV_MAJOR_BITS) - 1) 88 - #define FW_REV_MAJOR(x) ((x) >> FW_REV_MAJOR_BITS) 89 - #define FW_REV_MINOR(x) (((x) & FW_REV_MINOR_MASK) >> FW_REV_MINOR_BITS) 90 - #define FW_REV_PATCH(x) ((x) & FW_REV_PATCH_MASK) 79 + #define FW_REV_MAJOR_MASK GENMASK(31, 24) 80 + #define FW_REV_MINOR_MASK GENMASK(23, 16) 81 + #define FW_REV_PATCH_MASK GENMASK(15, 0) 91 82 92 83 #define MAX_RX_TIMEOUT (msecs_to_jiffies(30)) 93 84 ··· 304 311 u8 name[20]; 305 312 } __packed; 306 313 307 - struct clk_get_value { 308 - __le32 rate; 309 - } __packed; 310 - 311 314 struct clk_set_value { 312 315 __le16 id; 313 316 __le16 reserved; ··· 317 328 } __packed; 318 329 319 330 struct dvfs_info { 320 - __le32 header; 331 + u8 domain; 332 + u8 opp_count; 333 + __le16 latency; 321 334 struct { 322 335 __le32 freq; 323 336 __le32 m_volt; ··· 341 350 u8 trigger_type; 342 351 char name[20]; 343 352 }; 344 - 345 - struct sensor_value { 346 - __le32 lo_val; 347 - __le32 hi_val; 348 - } __packed; 349 353 350 354 struct dev_pstate_set { 351 355 __le16 dev_id; ··· 405 419 
unsigned int len; 406 420 407 421 if (scpi_info->is_legacy) { 408 - struct legacy_scpi_shared_mem *mem = ch->rx_payload; 422 + struct legacy_scpi_shared_mem __iomem *mem = 423 + ch->rx_payload; 409 424 410 425 /* RX Length is not replied by the legacy Firmware */ 411 426 len = match->rx_len; 412 427 413 - match->status = le32_to_cpu(mem->status); 428 + match->status = ioread32(&mem->status); 414 429 memcpy_fromio(match->rx_buf, mem->payload, len); 415 430 } else { 416 - struct scpi_shared_mem *mem = ch->rx_payload; 431 + struct scpi_shared_mem __iomem *mem = ch->rx_payload; 417 432 418 433 len = min(match->rx_len, CMD_SIZE(cmd)); 419 434 420 - match->status = le32_to_cpu(mem->status); 435 + match->status = ioread32(&mem->status); 421 436 memcpy_fromio(match->rx_buf, mem->payload, len); 422 437 } 423 438 ··· 432 445 static void scpi_handle_remote_msg(struct mbox_client *c, void *msg) 433 446 { 434 447 struct scpi_chan *ch = container_of(c, struct scpi_chan, cl); 435 - struct scpi_shared_mem *mem = ch->rx_payload; 448 + struct scpi_shared_mem __iomem *mem = ch->rx_payload; 436 449 u32 cmd = 0; 437 450 438 451 if (!scpi_info->is_legacy) 439 - cmd = le32_to_cpu(mem->command); 452 + cmd = ioread32(&mem->command); 440 453 441 454 scpi_process_cmd(ch, cmd); 442 455 } ··· 446 459 unsigned long flags; 447 460 struct scpi_xfer *t = msg; 448 461 struct scpi_chan *ch = container_of(c, struct scpi_chan, cl); 449 - struct scpi_shared_mem *mem = (struct scpi_shared_mem *)ch->tx_payload; 462 + struct scpi_shared_mem __iomem *mem = ch->tx_payload; 450 463 451 464 if (t->tx_buf) { 452 465 if (scpi_info->is_legacy) ··· 465 478 } 466 479 467 480 if (!scpi_info->is_legacy) 468 - mem->command = cpu_to_le32(t->cmd); 481 + iowrite32(t->cmd, &mem->command); 469 482 } 470 483 471 484 static struct scpi_xfer *get_scpi_xfer(struct scpi_chan *ch) ··· 570 583 static unsigned long scpi_clk_get_val(u16 clk_id) 571 584 { 572 585 int ret; 573 - struct clk_get_value clk; 586 + __le32 rate; 574 587 
__le16 le_clk_id = cpu_to_le16(clk_id); 575 588 576 589 ret = scpi_send_message(CMD_GET_CLOCK_VALUE, &le_clk_id, 577 - sizeof(le_clk_id), &clk, sizeof(clk)); 590 + sizeof(le_clk_id), &rate, sizeof(rate)); 578 591 579 - return ret ? ret : le32_to_cpu(clk.rate); 592 + return ret ? ret : le32_to_cpu(rate); 580 593 } 581 594 582 595 static int scpi_clk_set_val(u16 clk_id, unsigned long rate) ··· 632 645 633 646 static struct scpi_dvfs_info *scpi_dvfs_get_info(u8 domain) 634 647 { 648 + if (domain >= MAX_DVFS_DOMAINS) 649 + return ERR_PTR(-EINVAL); 650 + 651 + return scpi_info->dvfs[domain] ?: ERR_PTR(-EINVAL); 652 + } 653 + 654 + static int scpi_dvfs_populate_info(struct device *dev, u8 domain) 655 + { 635 656 struct scpi_dvfs_info *info; 636 657 struct scpi_opp *opp; 637 658 struct dvfs_info buf; 638 659 int ret, i; 639 660 640 - if (domain >= MAX_DVFS_DOMAINS) 641 - return ERR_PTR(-EINVAL); 642 - 643 - if (scpi_info->dvfs[domain]) /* data already populated */ 644 - return scpi_info->dvfs[domain]; 645 - 646 661 ret = scpi_send_message(CMD_GET_DVFS_INFO, &domain, sizeof(domain), 647 662 &buf, sizeof(buf)); 648 663 if (ret) 649 - return ERR_PTR(ret); 664 + return ret; 650 665 651 - info = kmalloc(sizeof(*info), GFP_KERNEL); 666 + info = devm_kmalloc(dev, sizeof(*info), GFP_KERNEL); 652 667 if (!info) 653 - return ERR_PTR(-ENOMEM); 668 + return -ENOMEM; 654 669 655 - info->count = DVFS_OPP_COUNT(buf.header); 656 - info->latency = DVFS_LATENCY(buf.header) * 1000; /* uS to nS */ 670 + info->count = buf.opp_count; 671 + info->latency = le16_to_cpu(buf.latency) * 1000; /* uS to nS */ 657 672 658 - info->opps = kcalloc(info->count, sizeof(*opp), GFP_KERNEL); 659 - if (!info->opps) { 660 - kfree(info); 661 - return ERR_PTR(-ENOMEM); 662 - } 673 + info->opps = devm_kcalloc(dev, info->count, sizeof(*opp), GFP_KERNEL); 674 + if (!info->opps) 675 + return -ENOMEM; 663 676 664 677 for (i = 0, opp = info->opps; i < info->count; i++, opp++) { 665 678 opp->freq = 
le32_to_cpu(buf.opps[i].freq); ··· 669 682 sort(info->opps, info->count, sizeof(*opp), opp_cmp_func, NULL); 670 683 671 684 scpi_info->dvfs[domain] = info; 672 - return info; 685 + return 0; 686 + } 687 + 688 + static void scpi_dvfs_populate(struct device *dev) 689 + { 690 + int domain; 691 + 692 + for (domain = 0; domain < MAX_DVFS_DOMAINS; domain++) 693 + scpi_dvfs_populate_info(dev, domain); 673 694 } 674 695 675 696 static int scpi_dev_domain_id(struct device *dev) ··· 707 712 708 713 if (IS_ERR(info)) 709 714 return PTR_ERR(info); 710 - 711 - if (!info->latency) 712 - return 0; 713 715 714 716 return info->latency; 715 717 } ··· 768 776 static int scpi_sensor_get_value(u16 sensor, u64 *val) 769 777 { 770 778 __le16 id = cpu_to_le16(sensor); 771 - struct sensor_value buf; 779 + __le64 value; 772 780 int ret; 773 781 774 782 ret = scpi_send_message(CMD_SENSOR_VALUE, &id, sizeof(id), 775 - &buf, sizeof(buf)); 783 + &value, sizeof(value)); 776 784 if (ret) 777 785 return ret; 778 786 779 787 if (scpi_info->is_legacy) 780 - /* only 32-bits supported, hi_val can be junk */ 781 - *val = le32_to_cpu(buf.lo_val); 788 + /* only 32-bits supported, upper 32 bits can be junk */ 789 + *val = le32_to_cpup((__le32 *)&value); 782 790 else 783 - *val = (u64)le32_to_cpu(buf.hi_val) << 32 | 784 - le32_to_cpu(buf.lo_val); 791 + *val = le64_to_cpu(value); 785 792 786 793 return 0; 787 794 } ··· 853 862 static ssize_t protocol_version_show(struct device *dev, 854 863 struct device_attribute *attr, char *buf) 855 864 { 856 - struct scpi_drvinfo *scpi_info = dev_get_drvdata(dev); 857 - 858 - return sprintf(buf, "%d.%d\n", 859 - PROTOCOL_REV_MAJOR(scpi_info->protocol_version), 860 - PROTOCOL_REV_MINOR(scpi_info->protocol_version)); 865 + return sprintf(buf, "%lu.%lu\n", 866 + FIELD_GET(PROTO_REV_MAJOR_MASK, scpi_info->protocol_version), 867 + FIELD_GET(PROTO_REV_MINOR_MASK, scpi_info->protocol_version)); 861 868 } 862 869 static DEVICE_ATTR_RO(protocol_version); 863 870 864 871 static 
ssize_t firmware_version_show(struct device *dev, 865 872 struct device_attribute *attr, char *buf) 866 873 { 867 - struct scpi_drvinfo *scpi_info = dev_get_drvdata(dev); 868 - 869 - return sprintf(buf, "%d.%d.%d\n", 870 - FW_REV_MAJOR(scpi_info->firmware_version), 871 - FW_REV_MINOR(scpi_info->firmware_version), 872 - FW_REV_PATCH(scpi_info->firmware_version)); 874 + return sprintf(buf, "%lu.%lu.%lu\n", 875 + FIELD_GET(FW_REV_MAJOR_MASK, scpi_info->firmware_version), 876 + FIELD_GET(FW_REV_MINOR_MASK, scpi_info->firmware_version), 877 + FIELD_GET(FW_REV_PATCH_MASK, scpi_info->firmware_version)); 873 878 } 874 879 static DEVICE_ATTR_RO(firmware_version); 875 880 ··· 876 889 }; 877 890 ATTRIBUTE_GROUPS(versions); 878 891 879 - static void 880 - scpi_free_channels(struct device *dev, struct scpi_chan *pchan, int count) 892 + static void scpi_free_channels(void *data) 881 893 { 894 + struct scpi_drvinfo *info = data; 882 895 int i; 883 896 884 - for (i = 0; i < count && pchan->chan; i++, pchan++) { 885 - mbox_free_channel(pchan->chan); 886 - devm_kfree(dev, pchan->xfers); 887 - devm_iounmap(dev, pchan->rx_payload); 888 - } 889 - } 890 - 891 - static int scpi_remove(struct platform_device *pdev) 892 - { 893 - int i; 894 - struct device *dev = &pdev->dev; 895 - struct scpi_drvinfo *info = platform_get_drvdata(pdev); 896 - 897 - scpi_info = NULL; /* stop exporting SCPI ops through get_scpi_ops */ 898 - 899 - of_platform_depopulate(dev); 900 - sysfs_remove_groups(&dev->kobj, versions_groups); 901 - scpi_free_channels(dev, info->channels, info->num_chans); 902 - platform_set_drvdata(pdev, NULL); 903 - 904 - for (i = 0; i < MAX_DVFS_DOMAINS && info->dvfs[i]; i++) { 905 - kfree(info->dvfs[i]->opps); 906 - kfree(info->dvfs[i]); 907 - } 908 - devm_kfree(dev, info->channels); 909 - devm_kfree(dev, info); 910 - 911 - return 0; 897 + for (i = 0; i < info->num_chans; i++) 898 + mbox_free_channel(info->channels[i].chan); 912 899 } 913 900 914 901 #define MAX_SCPI_XFERS 10 ··· 913 
952 { 914 953 int count, idx, ret; 915 954 struct resource res; 916 - struct scpi_chan *scpi_chan; 917 955 struct device *dev = &pdev->dev; 918 956 struct device_node *np = dev->of_node; 919 957 ··· 929 969 return -ENODEV; 930 970 } 931 971 932 - scpi_chan = devm_kcalloc(dev, count, sizeof(*scpi_chan), GFP_KERNEL); 933 - if (!scpi_chan) 972 + scpi_info->channels = devm_kcalloc(dev, count, sizeof(struct scpi_chan), 973 + GFP_KERNEL); 974 + if (!scpi_info->channels) 934 975 return -ENOMEM; 935 976 936 - for (idx = 0; idx < count; idx++) { 977 + ret = devm_add_action(dev, scpi_free_channels, scpi_info); 978 + if (ret) 979 + return ret; 980 + 981 + for (; scpi_info->num_chans < count; scpi_info->num_chans++) { 937 982 resource_size_t size; 938 - struct scpi_chan *pchan = scpi_chan + idx; 983 + int idx = scpi_info->num_chans; 984 + struct scpi_chan *pchan = scpi_info->channels + idx; 939 985 struct mbox_client *cl = &pchan->cl; 940 986 struct device_node *shmem = of_parse_phandle(np, "shmem", idx); 941 987 ··· 949 983 of_node_put(shmem); 950 984 if (ret) { 951 985 dev_err(dev, "failed to get SCPI payload mem resource\n"); 952 - goto err; 986 + return ret; 953 987 } 954 988 955 989 size = resource_size(&res); 956 990 pchan->rx_payload = devm_ioremap(dev, res.start, size); 957 991 if (!pchan->rx_payload) { 958 992 dev_err(dev, "failed to ioremap SCPI payload\n"); 959 - ret = -EADDRNOTAVAIL; 960 - goto err; 993 + return -EADDRNOTAVAIL; 961 994 } 962 995 pchan->tx_payload = pchan->rx_payload + (size >> 1); 963 996 ··· 982 1017 dev_err(dev, "failed to get channel%d err %d\n", 983 1018 idx, ret); 984 1019 } 985 - err: 986 - scpi_free_channels(dev, scpi_chan, idx); 987 - scpi_info = NULL; 988 1020 return ret; 989 1021 } 990 1022 991 - scpi_info->channels = scpi_chan; 992 - scpi_info->num_chans = count; 993 1023 scpi_info->commands = scpi_std_commands; 994 - 995 - platform_set_drvdata(pdev, scpi_info); 1024 + scpi_info->scpi_ops = &scpi_ops; 996 1025 997 1026 if 
(scpi_info->is_legacy) { 998 1027 /* Replace with legacy variants */ ··· 1002 1043 ret = scpi_init_versions(scpi_info); 1003 1044 if (ret) { 1004 1045 dev_err(dev, "incorrect or no SCP firmware found\n"); 1005 - scpi_remove(pdev); 1006 1046 return ret; 1007 1047 } 1008 1048 1009 - _dev_info(dev, "SCP Protocol %d.%d Firmware %d.%d.%d version\n", 1010 - PROTOCOL_REV_MAJOR(scpi_info->protocol_version), 1011 - PROTOCOL_REV_MINOR(scpi_info->protocol_version), 1012 - FW_REV_MAJOR(scpi_info->firmware_version), 1013 - FW_REV_MINOR(scpi_info->firmware_version), 1014 - FW_REV_PATCH(scpi_info->firmware_version)); 1015 - scpi_info->scpi_ops = &scpi_ops; 1049 + scpi_dvfs_populate(dev); 1016 1050 1017 - ret = sysfs_create_groups(&dev->kobj, versions_groups); 1051 + _dev_info(dev, "SCP Protocol %lu.%lu Firmware %lu.%lu.%lu version\n", 1052 + FIELD_GET(PROTO_REV_MAJOR_MASK, scpi_info->protocol_version), 1053 + FIELD_GET(PROTO_REV_MINOR_MASK, scpi_info->protocol_version), 1054 + FIELD_GET(FW_REV_MAJOR_MASK, scpi_info->firmware_version), 1055 + FIELD_GET(FW_REV_MINOR_MASK, scpi_info->firmware_version), 1056 + FIELD_GET(FW_REV_PATCH_MASK, scpi_info->firmware_version)); 1057 + 1058 + ret = devm_device_add_groups(dev, versions_groups); 1018 1059 if (ret) 1019 1060 dev_err(dev, "unable to create sysfs version group\n"); 1020 1061 1021 - return of_platform_populate(dev->of_node, NULL, NULL, dev); 1062 + return devm_of_platform_populate(dev); 1022 1063 } 1023 1064 1024 1065 static const struct of_device_id scpi_of_match[] = { ··· 1035 1076 .of_match_table = scpi_of_match, 1036 1077 }, 1037 1078 .probe = scpi_probe, 1038 - .remove = scpi_remove, 1039 1079 }; 1040 1080 module_platform_driver(scpi_driver); 1041 1081
+1
drivers/firmware/psci_checker.c
··· 340 340 * later. 341 341 */ 342 342 del_timer(&wakeup_timer); 343 + destroy_timer_on_stack(&wakeup_timer); 343 344 344 345 if (atomic_dec_return_relaxed(&nb_active_threads) == 0) 345 346 complete(&suspend_threads_done);
+24
drivers/firmware/qcom_scm-32.c
··· 561 561 return ret ? : le32_to_cpu(out); 562 562 } 563 563 564 + int __qcom_scm_set_dload_mode(struct device *dev, bool enable) 565 + { 566 + return qcom_scm_call_atomic2(QCOM_SCM_SVC_BOOT, QCOM_SCM_SET_DLOAD_MODE, 567 + enable ? QCOM_SCM_SET_DLOAD_MODE : 0, 0); 568 + } 569 + 564 570 int __qcom_scm_set_remote_state(struct device *dev, u32 state, u32 id) 565 571 { 566 572 struct { ··· 601 595 u32 spare) 602 596 { 603 597 return -ENODEV; 598 + } 599 + 600 + int __qcom_scm_io_readl(struct device *dev, phys_addr_t addr, 601 + unsigned int *val) 602 + { 603 + int ret; 604 + 605 + ret = qcom_scm_call_atomic1(QCOM_SCM_SVC_IO, QCOM_SCM_IO_READ, addr); 606 + if (ret >= 0) 607 + *val = ret; 608 + 609 + return ret < 0 ? ret : 0; 610 + } 611 + 612 + int __qcom_scm_io_writel(struct device *dev, phys_addr_t addr, unsigned int val) 613 + { 614 + return qcom_scm_call_atomic2(QCOM_SCM_SVC_IO, QCOM_SCM_IO_WRITE, 615 + addr, val); 604 616 }
+44
drivers/firmware/qcom_scm-64.c
··· 439 439 440 440 return ret; 441 441 } 442 + 443 + int __qcom_scm_set_dload_mode(struct device *dev, bool enable) 444 + { 445 + struct qcom_scm_desc desc = {0}; 446 + struct arm_smccc_res res; 447 + 448 + desc.args[0] = QCOM_SCM_SET_DLOAD_MODE; 449 + desc.args[1] = enable ? QCOM_SCM_SET_DLOAD_MODE : 0; 450 + desc.arginfo = QCOM_SCM_ARGS(2); 451 + 452 + return qcom_scm_call(dev, QCOM_SCM_SVC_BOOT, QCOM_SCM_SET_DLOAD_MODE, 453 + &desc, &res); 454 + } 455 + 456 + int __qcom_scm_io_readl(struct device *dev, phys_addr_t addr, 457 + unsigned int *val) 458 + { 459 + struct qcom_scm_desc desc = {0}; 460 + struct arm_smccc_res res; 461 + int ret; 462 + 463 + desc.args[0] = addr; 464 + desc.arginfo = QCOM_SCM_ARGS(1); 465 + 466 + ret = qcom_scm_call(dev, QCOM_SCM_SVC_IO, QCOM_SCM_IO_READ, 467 + &desc, &res); 468 + if (ret >= 0) 469 + *val = res.a1; 470 + 471 + return ret < 0 ? ret : 0; 472 + } 473 + 474 + int __qcom_scm_io_writel(struct device *dev, phys_addr_t addr, unsigned int val) 475 + { 476 + struct qcom_scm_desc desc = {0}; 477 + struct arm_smccc_res res; 478 + 479 + desc.args[0] = addr; 480 + desc.args[1] = val; 481 + desc.arginfo = QCOM_SCM_ARGS(2); 482 + 483 + return qcom_scm_call(dev, QCOM_SCM_SVC_IO, QCOM_SCM_IO_WRITE, 484 + &desc, &res); 485 + }
+87
drivers/firmware/qcom_scm.c
··· 19 19 #include <linux/cpumask.h> 20 20 #include <linux/export.h> 21 21 #include <linux/dma-mapping.h> 22 + #include <linux/module.h> 22 23 #include <linux/types.h> 23 24 #include <linux/qcom_scm.h> 24 25 #include <linux/of.h> 26 + #include <linux/of_address.h> 25 27 #include <linux/of_platform.h> 26 28 #include <linux/clk.h> 27 29 #include <linux/reset-controller.h> 28 30 29 31 #include "qcom_scm.h" 32 + 33 + static bool download_mode = IS_ENABLED(CONFIG_QCOM_SCM_DOWNLOAD_MODE_DEFAULT); 34 + module_param(download_mode, bool, 0); 30 35 31 36 #define SCM_HAS_CORE_CLK BIT(0) 32 37 #define SCM_HAS_IFACE_CLK BIT(1) ··· 43 38 struct clk *iface_clk; 44 39 struct clk *bus_clk; 45 40 struct reset_controller_dev reset; 41 + 42 + u64 dload_mode_addr; 46 43 }; 47 44 48 45 static struct qcom_scm *__scm; ··· 340 333 } 341 334 EXPORT_SYMBOL(qcom_scm_iommu_secure_ptbl_init); 342 335 336 + int qcom_scm_io_readl(phys_addr_t addr, unsigned int *val) 337 + { 338 + return __qcom_scm_io_readl(__scm->dev, addr, val); 339 + } 340 + EXPORT_SYMBOL(qcom_scm_io_readl); 341 + 342 + int qcom_scm_io_writel(phys_addr_t addr, unsigned int val) 343 + { 344 + return __qcom_scm_io_writel(__scm->dev, addr, val); 345 + } 346 + EXPORT_SYMBOL(qcom_scm_io_writel); 347 + 348 + static void qcom_scm_set_download_mode(bool enable) 349 + { 350 + bool avail; 351 + int ret = 0; 352 + 353 + avail = __qcom_scm_is_call_available(__scm->dev, 354 + QCOM_SCM_SVC_BOOT, 355 + QCOM_SCM_SET_DLOAD_MODE); 356 + if (avail) { 357 + ret = __qcom_scm_set_dload_mode(__scm->dev, enable); 358 + } else if (__scm->dload_mode_addr) { 359 + ret = __qcom_scm_io_writel(__scm->dev, __scm->dload_mode_addr, 360 + enable ? 
QCOM_SCM_SET_DLOAD_MODE : 0); 361 + } else { 362 + dev_err(__scm->dev, 363 + "No available mechanism for setting download mode\n"); 364 + } 365 + 366 + if (ret) 367 + dev_err(__scm->dev, "failed to set download mode: %d\n", ret); 368 + } 369 + 370 + static int qcom_scm_find_dload_address(struct device *dev, u64 *addr) 371 + { 372 + struct device_node *tcsr; 373 + struct device_node *np = dev->of_node; 374 + struct resource res; 375 + u32 offset; 376 + int ret; 377 + 378 + tcsr = of_parse_phandle(np, "qcom,dload-mode", 0); 379 + if (!tcsr) 380 + return 0; 381 + 382 + ret = of_address_to_resource(tcsr, 0, &res); 383 + of_node_put(tcsr); 384 + if (ret) 385 + return ret; 386 + 387 + ret = of_property_read_u32_index(np, "qcom,dload-mode", 1, &offset); 388 + if (ret < 0) 389 + return ret; 390 + 391 + *addr = res.start + offset; 392 + 393 + return 0; 394 + } 395 + 343 396 /** 344 397 * qcom_scm_is_available() - Checks if SCM is available 345 398 */ ··· 424 357 scm = devm_kzalloc(&pdev->dev, sizeof(*scm), GFP_KERNEL); 425 358 if (!scm) 426 359 return -ENOMEM; 360 + 361 + ret = qcom_scm_find_dload_address(&pdev->dev, &scm->dload_mode_addr); 362 + if (ret < 0) 363 + return ret; 427 364 428 365 clks = (unsigned long)of_device_get_match_data(&pdev->dev); 429 366 if (clks & SCM_HAS_CORE_CLK) { ··· 477 406 478 407 __qcom_scm_init(); 479 408 409 + /* 410 + * If requested, enable "download mode": from this point on warmboot 411 + * will cause the boot stages to enter download mode, unless 412 + * disabled below by a clean shutdown/reboot. 
413 + */ 414 + if (download_mode) 415 + qcom_scm_set_download_mode(true); 416 + 480 417 return 0; 418 + } 419 + 420 + static void qcom_scm_shutdown(struct platform_device *pdev) 421 + { 422 + /* Clean shutdown, disable download mode to allow normal restart */ 423 + if (download_mode) 424 + qcom_scm_set_download_mode(false); 481 425 } 482 426 483 427 static const struct of_device_id qcom_scm_dt_match[] = { ··· 522 436 .of_match_table = qcom_scm_dt_match, 523 437 }, 524 438 .probe = qcom_scm_probe, 439 + .shutdown = qcom_scm_shutdown, 525 440 }; 526 441 527 442 static int __init qcom_scm_init(void)
+8
drivers/firmware/qcom_scm.h
··· 14 14 15 15 #define QCOM_SCM_SVC_BOOT 0x1 16 16 #define QCOM_SCM_BOOT_ADDR 0x1 17 + #define QCOM_SCM_SET_DLOAD_MODE 0x10 17 18 #define QCOM_SCM_BOOT_ADDR_MC 0x11 18 19 #define QCOM_SCM_SET_REMOTE_STATE 0xa 19 20 extern int __qcom_scm_set_remote_state(struct device *dev, u32 state, u32 id); 21 + extern int __qcom_scm_set_dload_mode(struct device *dev, bool enable); 20 22 21 23 #define QCOM_SCM_FLAG_HLOS 0x01 22 24 #define QCOM_SCM_FLAG_COLDBOOT_MC 0x02 ··· 31 29 #define QCOM_SCM_FLUSH_FLAG_MASK 0x3 32 30 #define QCOM_SCM_CMD_CORE_HOTPLUGGED 0x10 33 31 extern void __qcom_scm_cpu_power_down(u32 flags); 32 + 33 + #define QCOM_SCM_SVC_IO 0x5 34 + #define QCOM_SCM_IO_READ 0x1 35 + #define QCOM_SCM_IO_WRITE 0x2 36 + extern int __qcom_scm_io_readl(struct device *dev, phys_addr_t addr, unsigned int *val); 37 + extern int __qcom_scm_io_writel(struct device *dev, phys_addr_t addr, unsigned int val); 34 38 35 39 #define QCOM_SCM_SVC_INFO 0x6 36 40 #define QCOM_IS_CALL_AVAIL_CMD 0x1
+3 -1
drivers/firmware/tegra/Makefile
··· 1 - obj-$(CONFIG_TEGRA_BPMP) += bpmp.o 1 + tegra-bpmp-y = bpmp.o 2 + tegra-bpmp-$(CONFIG_DEBUG_FS) += bpmp-debugfs.o 3 + obj-$(CONFIG_TEGRA_BPMP) += tegra-bpmp.o 2 4 obj-$(CONFIG_TEGRA_IVC) += ivc.o
+444
drivers/firmware/tegra/bpmp-debugfs.c
··· 1 + /* 2 + * Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved. 3 + * 4 + * This program is free software; you can redistribute it and/or modify it 5 + * under the terms and conditions of the GNU General Public License, 6 + * version 2, as published by the Free Software Foundation. 7 + * 8 + * This program is distributed in the hope it will be useful, but WITHOUT 9 + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 10 + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 11 + * more details. 12 + * 13 + */ 14 + #include <linux/debugfs.h> 15 + #include <linux/dma-mapping.h> 16 + #include <linux/uaccess.h> 17 + 18 + #include <soc/tegra/bpmp.h> 19 + #include <soc/tegra/bpmp-abi.h> 20 + 21 + struct seqbuf { 22 + char *buf; 23 + size_t pos; 24 + size_t size; 25 + }; 26 + 27 + static void seqbuf_init(struct seqbuf *seqbuf, void *buf, size_t size) 28 + { 29 + seqbuf->buf = buf; 30 + seqbuf->size = size; 31 + seqbuf->pos = 0; 32 + } 33 + 34 + static size_t seqbuf_avail(struct seqbuf *seqbuf) 35 + { 36 + return seqbuf->pos < seqbuf->size ? seqbuf->size - seqbuf->pos : 0; 37 + } 38 + 39 + static size_t seqbuf_status(struct seqbuf *seqbuf) 40 + { 41 + return seqbuf->pos <= seqbuf->size ? 
0 : -EOVERFLOW; 42 + } 43 + 44 + static int seqbuf_eof(struct seqbuf *seqbuf) 45 + { 46 + return seqbuf->pos >= seqbuf->size; 47 + } 48 + 49 + static int seqbuf_read(struct seqbuf *seqbuf, void *buf, size_t nbyte) 50 + { 51 + nbyte = min(nbyte, seqbuf_avail(seqbuf)); 52 + memcpy(buf, seqbuf->buf + seqbuf->pos, nbyte); 53 + seqbuf->pos += nbyte; 54 + return seqbuf_status(seqbuf); 55 + } 56 + 57 + static int seqbuf_read_u32(struct seqbuf *seqbuf, uint32_t *v) 58 + { 59 + int err; 60 + 61 + err = seqbuf_read(seqbuf, v, 4); 62 + *v = le32_to_cpu(*v); 63 + return err; 64 + } 65 + 66 + static int seqbuf_read_str(struct seqbuf *seqbuf, const char **str) 67 + { 68 + *str = seqbuf->buf + seqbuf->pos; 69 + seqbuf->pos += strnlen(*str, seqbuf_avail(seqbuf)); 70 + seqbuf->pos++; 71 + return seqbuf_status(seqbuf); 72 + } 73 + 74 + static void seqbuf_seek(struct seqbuf *seqbuf, ssize_t offset) 75 + { 76 + seqbuf->pos += offset; 77 + } 78 + 79 + /* map filename in Linux debugfs to corresponding entry in BPMP */ 80 + static const char *get_filename(struct tegra_bpmp *bpmp, 81 + const struct file *file, char *buf, int size) 82 + { 83 + char root_path_buf[512]; 84 + const char *root_path; 85 + const char *filename; 86 + size_t root_len; 87 + 88 + root_path = dentry_path(bpmp->debugfs_mirror, root_path_buf, 89 + sizeof(root_path_buf)); 90 + if (IS_ERR(root_path)) 91 + return NULL; 92 + 93 + root_len = strlen(root_path); 94 + 95 + filename = dentry_path(file->f_path.dentry, buf, size); 96 + if (IS_ERR(filename)) 97 + return NULL; 98 + 99 + if (strlen(filename) < root_len || 100 + strncmp(filename, root_path, root_len)) 101 + return NULL; 102 + 103 + filename += root_len; 104 + 105 + return filename; 106 + } 107 + 108 + static int mrq_debugfs_read(struct tegra_bpmp *bpmp, 109 + dma_addr_t name, size_t sz_name, 110 + dma_addr_t data, size_t sz_data, 111 + size_t *nbytes) 112 + { 113 + struct mrq_debugfs_request req = { 114 + .cmd = cpu_to_le32(CMD_DEBUGFS_READ), 115 + .fop = { 116 + 
			.fnameaddr = cpu_to_le32((uint32_t)name),
			.fnamelen = cpu_to_le32((uint32_t)sz_name),
			.dataaddr = cpu_to_le32((uint32_t)data),
			.datalen = cpu_to_le32((uint32_t)sz_data),
		},
	};
	struct mrq_debugfs_response resp;
	struct tegra_bpmp_message msg = {
		.mrq = MRQ_DEBUGFS,
		.tx = {
			.data = &req,
			.size = sizeof(req),
		},
		.rx = {
			.data = &resp,
			.size = sizeof(resp),
		},
	};
	int err;

	err = tegra_bpmp_transfer(bpmp, &msg);
	if (err < 0)
		return err;

	*nbytes = (size_t)resp.fop.nbytes;

	return 0;
}

static int mrq_debugfs_write(struct tegra_bpmp *bpmp,
			     dma_addr_t name, size_t sz_name,
			     dma_addr_t data, size_t sz_data)
{
	const struct mrq_debugfs_request req = {
		.cmd = cpu_to_le32(CMD_DEBUGFS_WRITE),
		.fop = {
			.fnameaddr = cpu_to_le32((uint32_t)name),
			.fnamelen = cpu_to_le32((uint32_t)sz_name),
			.dataaddr = cpu_to_le32((uint32_t)data),
			.datalen = cpu_to_le32((uint32_t)sz_data),
		},
	};
	struct tegra_bpmp_message msg = {
		.mrq = MRQ_DEBUGFS,
		.tx = {
			.data = &req,
			.size = sizeof(req),
		},
	};

	return tegra_bpmp_transfer(bpmp, &msg);
}

static int mrq_debugfs_dumpdir(struct tegra_bpmp *bpmp, dma_addr_t addr,
			       size_t size, size_t *nbytes)
{
	const struct mrq_debugfs_request req = {
		.cmd = cpu_to_le32(CMD_DEBUGFS_DUMPDIR),
		.dumpdir = {
			.dataaddr = cpu_to_le32((uint32_t)addr),
			.datalen = cpu_to_le32((uint32_t)size),
		},
	};
	struct mrq_debugfs_response resp;
	struct tegra_bpmp_message msg = {
		.mrq = MRQ_DEBUGFS,
		.tx = {
			.data = &req,
			.size = sizeof(req),
		},
		.rx = {
			.data = &resp,
			.size = sizeof(resp),
		},
	};
	int err;

	err = tegra_bpmp_transfer(bpmp, &msg);
	if (err < 0)
		return err;

	*nbytes = (size_t)resp.dumpdir.nbytes;

	return 0;
}

static int debugfs_show(struct seq_file *m, void *p)
{
	struct file *file = m->private;
	struct inode *inode = file_inode(file);
	struct tegra_bpmp *bpmp = inode->i_private;
	const size_t datasize = m->size;
	const size_t namesize = SZ_256;
	void *datavirt, *namevirt;
	dma_addr_t dataphys, namephys;
	char buf[256];
	const char *filename;
	size_t len, nbytes;
	int ret;

	filename = get_filename(bpmp, file, buf, sizeof(buf));
	if (!filename)
		return -ENOENT;

	namevirt = dma_alloc_coherent(bpmp->dev, namesize, &namephys,
				      GFP_KERNEL | GFP_DMA32);
	if (!namevirt)
		return -ENOMEM;

	datavirt = dma_alloc_coherent(bpmp->dev, datasize, &dataphys,
				      GFP_KERNEL | GFP_DMA32);
	if (!datavirt) {
		ret = -ENOMEM;
		goto free_namebuf;
	}

	len = strlen(filename);
	strncpy(namevirt, filename, namesize);

	ret = mrq_debugfs_read(bpmp, namephys, len, dataphys, datasize,
			       &nbytes);

	if (!ret)
		seq_write(m, datavirt, nbytes);

	dma_free_coherent(bpmp->dev, datasize, datavirt, dataphys);
free_namebuf:
	dma_free_coherent(bpmp->dev, namesize, namevirt, namephys);

	return ret;
}

static int debugfs_open(struct inode *inode, struct file *file)
{
	return single_open_size(file, debugfs_show, file, SZ_128K);
}

static ssize_t debugfs_store(struct file *file, const char __user *buf,
			     size_t count, loff_t *f_pos)
{
	struct inode *inode = file_inode(file);
	struct tegra_bpmp *bpmp = inode->i_private;
	const size_t datasize = count;
	const size_t namesize = SZ_256;
	void *datavirt, *namevirt;
	dma_addr_t dataphys, namephys;
	char fnamebuf[256];
	const char *filename;
	size_t len;
	int ret;

	filename = get_filename(bpmp, file, fnamebuf, sizeof(fnamebuf));
	if (!filename)
		return -ENOENT;

	namevirt = dma_alloc_coherent(bpmp->dev, namesize, &namephys,
				      GFP_KERNEL | GFP_DMA32);
	if (!namevirt)
		return -ENOMEM;

	datavirt = dma_alloc_coherent(bpmp->dev, datasize, &dataphys,
				      GFP_KERNEL | GFP_DMA32);
	if (!datavirt) {
		ret = -ENOMEM;
		goto free_namebuf;
	}

	len = strlen(filename);
	strncpy(namevirt, filename, namesize);

	if (copy_from_user(datavirt, buf, count)) {
		ret = -EFAULT;
		goto free_databuf;
	}

	ret = mrq_debugfs_write(bpmp, namephys, len, dataphys,
				count);

free_databuf:
	dma_free_coherent(bpmp->dev, datasize, datavirt, dataphys);
free_namebuf:
	dma_free_coherent(bpmp->dev, namesize, namevirt, namephys);

	return ret ?: count;
}

static const struct file_operations debugfs_fops = {
	.open = debugfs_open,
	.read = seq_read,
	.llseek = seq_lseek,
	.write = debugfs_store,
	.release = single_release,
};

static int bpmp_populate_dir(struct tegra_bpmp *bpmp, struct seqbuf *seqbuf,
			     struct dentry *parent, uint32_t depth)
{
	int err;
	uint32_t d, t;
	const char *name;
	struct dentry *dentry;

	while (!seqbuf_eof(seqbuf)) {
		err = seqbuf_read_u32(seqbuf, &d);
		if (err < 0)
			return err;

		if (d < depth) {
			seqbuf_seek(seqbuf, -4);
			/* go up a level */
			return 0;
		} else if (d != depth) {
			/* malformed data received from BPMP */
			return -EIO;
		}

		err = seqbuf_read_u32(seqbuf, &t);
		if (err < 0)
			return err;
		err = seqbuf_read_str(seqbuf, &name);
		if (err < 0)
			return err;

		if (t & DEBUGFS_S_ISDIR) {
			dentry = debugfs_create_dir(name, parent);
			if (!dentry)
				return -ENOMEM;
			err = bpmp_populate_dir(bpmp, seqbuf, dentry, depth+1);
			if (err < 0)
				return err;
		} else {
			umode_t mode;

			mode = t & DEBUGFS_S_IRUSR ? S_IRUSR : 0;
			mode |= t & DEBUGFS_S_IWUSR ? S_IWUSR : 0;
			dentry = debugfs_create_file(name, mode,
						     parent, bpmp,
						     &debugfs_fops);
			if (!dentry)
				return -ENOMEM;
		}
	}

	return 0;
}

static int create_debugfs_mirror(struct tegra_bpmp *bpmp, void *buf,
				 size_t bufsize, struct dentry *root)
{
	struct seqbuf seqbuf;
	int err;

	bpmp->debugfs_mirror = debugfs_create_dir("debug", root);
	if (!bpmp->debugfs_mirror)
		return -ENOMEM;

	seqbuf_init(&seqbuf, buf, bufsize);
	err = bpmp_populate_dir(bpmp, &seqbuf, bpmp->debugfs_mirror, 0);
	if (err < 0) {
		debugfs_remove_recursive(bpmp->debugfs_mirror);
		bpmp->debugfs_mirror = NULL;
	}

	return err;
}

static int mrq_is_supported(struct tegra_bpmp *bpmp, unsigned int mrq)
{
	struct mrq_query_abi_request req = { .mrq = cpu_to_le32(mrq) };
	struct mrq_query_abi_response resp;
	struct tegra_bpmp_message msg = {
		.mrq = MRQ_QUERY_ABI,
		.tx = {
			.data = &req,
			.size = sizeof(req),
		},
		.rx = {
			.data = &resp,
			.size = sizeof(resp),
		},
	};
	int ret;

	ret = tegra_bpmp_transfer(bpmp, &msg);
	if (ret < 0) {
		/* something went wrong; assume not supported */
		dev_warn(bpmp->dev, "tegra_bpmp_transfer failed (%d)\n", ret);
		return 0;
	}

	return resp.status ? 0 : 1;
}

int tegra_bpmp_init_debugfs(struct tegra_bpmp *bpmp)
{
	dma_addr_t phys;
	void *virt;
	const size_t sz = SZ_256K;
	size_t nbytes;
	int ret;
	struct dentry *root;

	if (!mrq_is_supported(bpmp, MRQ_DEBUGFS))
		return 0;

	root = debugfs_create_dir("bpmp", NULL);
	if (!root)
		return -ENOMEM;

	virt = dma_alloc_coherent(bpmp->dev, sz, &phys,
				  GFP_KERNEL | GFP_DMA32);
	if (!virt) {
		ret = -ENOMEM;
		goto out;
	}

	ret = mrq_debugfs_dumpdir(bpmp, phys, sz, &nbytes);
	if (ret < 0)
		goto free;

	ret = create_debugfs_mirror(bpmp, virt, nbytes, root);
free:
	dma_free_coherent(bpmp->dev, sz, virt, phys);
out:
	if (ret < 0)
		debugfs_remove(root);

	return ret;
}
+23 -8
drivers/firmware/tegra/bpmp.c
···
 }

 static ssize_t __tegra_bpmp_channel_read(struct tegra_bpmp_channel *channel,
-					 void *data, size_t size)
+					 void *data, size_t size, int *ret)
 {
+	int err;
+
 	if (data && size > 0)
 		memcpy(data, channel->ib->data, size);

-	return tegra_ivc_read_advance(channel->ivc);
+	err = tegra_ivc_read_advance(channel->ivc);
+	if (err < 0)
+		return err;
+
+	*ret = channel->ib->code;
+
+	return 0;
 }

 static ssize_t tegra_bpmp_channel_read(struct tegra_bpmp_channel *channel,
-				       void *data, size_t size)
+				       void *data, size_t size, int *ret)
 {
 	struct tegra_bpmp *bpmp = channel->bpmp;
 	unsigned long flags;
···
 	}

 	spin_lock_irqsave(&bpmp->lock, flags);
-	err = __tegra_bpmp_channel_read(channel, data, size);
+	err = __tegra_bpmp_channel_read(channel, data, size, ret);
 	clear_bit(index, bpmp->threaded.allocated);
 	spin_unlock_irqrestore(&bpmp->lock, flags);
···
 	if (err < 0)
 		return err;

-	return __tegra_bpmp_channel_read(channel, msg->rx.data, msg->rx.size);
+	return __tegra_bpmp_channel_read(channel, msg->rx.data, msg->rx.size,
+					 &msg->rx.ret);
 }
 EXPORT_SYMBOL_GPL(tegra_bpmp_transfer_atomic);
···
 	if (err == 0)
 		return -ETIMEDOUT;

-	return tegra_bpmp_channel_read(channel, msg->rx.data, msg->rx.size);
+	return tegra_bpmp_channel_read(channel, msg->rx.data, msg->rx.size,
+				       &msg->rx.ret);
 }
 EXPORT_SYMBOL_GPL(tegra_bpmp_transfer);
···
 		return NULL;
 }

-static void tegra_bpmp_mrq_return(struct tegra_bpmp_channel *channel,
-				  int code, const void *data, size_t size)
+void tegra_bpmp_mrq_return(struct tegra_bpmp_channel *channel, int code,
+			   const void *data, size_t size)
 {
 	unsigned long flags = channel->ib->flags;
 	struct tegra_bpmp *bpmp = channel->bpmp;
···
 		mbox_client_txdone(bpmp->mbox.channel, 0);
 	}
 }
+EXPORT_SYMBOL_GPL(tegra_bpmp_mrq_return);

 static void tegra_bpmp_handle_mrq(struct tegra_bpmp *bpmp,
 				  unsigned int mrq,
···
 	err = tegra_bpmp_init_powergates(bpmp);
 	if (err < 0)
 		goto free_mrq;
+
+	err = tegra_bpmp_init_debugfs(bpmp);
+	if (err < 0)
+		dev_err(&pdev->dev, "debugfs initialization failed: %d\n", err);

 	return 0;
+1 -1
drivers/firmware/ti_sci.c
···
 	/* And we wait for the response. */
 	timeout = msecs_to_jiffies(info->desc->max_rx_timeout_ms);
 	if (!wait_for_completion_timeout(&xfer->done, timeout)) {
-		dev_err(dev, "Mbox timedout in resp(caller: %pF)\n",
+		dev_err(dev, "Mbox timedout in resp(caller: %pS)\n",
 			(void *)_RET_IP_);
 		ret = -ETIMEDOUT;
 	}
+1
drivers/memory/Makefile
···
 obj-$(CONFIG_ARM_PL172_MPMC)	+= pl172.o
 obj-$(CONFIG_ATMEL_SDRAMC)	+= atmel-sdramc.o
 obj-$(CONFIG_ATMEL_EBI)		+= atmel-ebi.o
+obj-$(CONFIG_ARCH_BRCMSTB)	+= brcmstb_dpfe.o
 obj-$(CONFIG_TI_AEMIF)		+= ti-aemif.o
 obj-$(CONFIG_TI_EMIF)		+= emif.o
 obj-$(CONFIG_OMAP_GPMC)		+= omap-gpmc.o
+722
drivers/memory/brcmstb_dpfe.c
/*
 * DDR PHY Front End (DPFE) driver for Broadcom set top box SoCs
 *
 * Copyright (c) 2017 Broadcom
 *
 * Released under the GPLv2 only.
 * SPDX-License-Identifier: GPL-2.0
 */

/*
 * This driver provides access to the DPFE interface of Broadcom STB SoCs.
 * The firmware running on the DCPU inside the DDR PHY can provide current
 * information about the system's RAM, for instance the DRAM refresh rate.
 * This can be used as an indirect indicator for the DRAM's temperature.
 * Slower refresh rate means cooler RAM, higher refresh rate means hotter
 * RAM.
 *
 * Throughout the driver, we use readl_relaxed() and writel_relaxed(), which
 * already contain the appropriate le32_to_cpu()/cpu_to_le32() calls.
 *
 * Note regarding the loading of the firmware image: we use be32_to_cpu()
 * and le32_to_cpu(), so we can support the following four cases:
 *     - LE kernel + LE firmware image (the most common case)
 *     - LE kernel + BE firmware image
 *     - BE kernel + LE firmware image
 *     - BE kernel + BE firmware image
 *
 * The DCPU always runs in big endian mode. The firmware image, however, can
 * be in either format. Also, communication between host CPU and DCPU is
 * always in little endian.
 */

#include <linux/delay.h>
#include <linux/firmware.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/of_address.h>
#include <linux/platform_device.h>

#define DRVNAME			"brcmstb-dpfe"
#define FIRMWARE_NAME		"dpfe.bin"

/* DCPU register offsets */
#define REG_DCPU_RESET		0x0
#define REG_TO_DCPU_MBOX	0x10
#define REG_TO_HOST_MBOX	0x14

/* Message RAM */
#define DCPU_MSG_RAM(x)		(0x100 + (x) * sizeof(u32))

/* DRAM Info Offsets & Masks */
#define DRAM_INFO_INTERVAL	0x0
#define DRAM_INFO_MR4		0x4
#define DRAM_INFO_ERROR		0x8
#define DRAM_INFO_MR4_MASK	0xff

/* DRAM MR4 Offsets & Masks */
#define DRAM_MR4_REFRESH	0x0	/* Refresh rate */
#define DRAM_MR4_SR_ABORT	0x3	/* Self Refresh Abort */
#define DRAM_MR4_PPRE		0x4	/* Post-package repair entry/exit */
#define DRAM_MR4_TH_OFFS	0x5	/* Thermal Offset; vendor specific */
#define DRAM_MR4_TUF		0x7	/* Temperature Update Flag */

#define DRAM_MR4_REFRESH_MASK	0x7
#define DRAM_MR4_SR_ABORT_MASK	0x1
#define DRAM_MR4_PPRE_MASK	0x1
#define DRAM_MR4_TH_OFFS_MASK	0x3
#define DRAM_MR4_TUF_MASK	0x1

/* DRAM Vendor Offsets & Masks */
#define DRAM_VENDOR_MR5		0x0
#define DRAM_VENDOR_MR6		0x4
#define DRAM_VENDOR_MR7		0x8
#define DRAM_VENDOR_MR8		0xc
#define DRAM_VENDOR_ERROR	0x10
#define DRAM_VENDOR_MASK	0xff

/* Reset register bits & masks */
#define DCPU_RESET_SHIFT	0x0
#define DCPU_RESET_MASK		0x1
#define DCPU_CLK_DISABLE_SHIFT	0x2

/* DCPU return codes */
#define DCPU_RET_ERROR_BIT	BIT(31)
#define DCPU_RET_SUCCESS	0x1
#define DCPU_RET_ERR_HEADER	(DCPU_RET_ERROR_BIT | BIT(0))
#define DCPU_RET_ERR_INVAL	(DCPU_RET_ERROR_BIT | BIT(1))
#define DCPU_RET_ERR_CHKSUM	(DCPU_RET_ERROR_BIT | BIT(2))
#define DCPU_RET_ERR_COMMAND	(DCPU_RET_ERROR_BIT | BIT(3))
/* This error code is not firmware defined and only used in the driver. */
#define DCPU_RET_ERR_TIMEDOUT	(DCPU_RET_ERROR_BIT | BIT(4))

/* Firmware magic */
#define DPFE_BE_MAGIC	0xfe1010fe
#define DPFE_LE_MAGIC	0xfe0101fe

/* Error codes */
#define ERR_INVALID_MAGIC	-1
#define ERR_INVALID_SIZE	-2
#define ERR_INVALID_CHKSUM	-3

/* Message types */
#define DPFE_MSG_TYPE_COMMAND	1
#define DPFE_MSG_TYPE_RESPONSE	2

#define DELAY_LOOP_MAX		200000

enum dpfe_msg_fields {
	MSG_HEADER,
	MSG_COMMAND,
	MSG_ARG_COUNT,
	MSG_ARG0,
	MSG_CHKSUM,
	MSG_FIELD_MAX	/* Last entry */
};

enum dpfe_commands {
	DPFE_CMD_GET_INFO,
	DPFE_CMD_GET_REFRESH,
	DPFE_CMD_GET_VENDOR,
	DPFE_CMD_MAX	/* Last entry */
};

struct dpfe_msg {
	u32 header;
	u32 command;
	u32 arg_count;
	u32 arg0;
	u32 chksum;	/* This is the sum of all other entries. */
};

/*
 * Format of the binary firmware file:
 *
 *   entry
 *      0    header
 *           value:  0xfe0101fe  <== little endian
 *                   0xfe1010fe  <== big endian
 *      1    sequence:
 *           [31:16] total segments on this build
 *           [15:0]  this segment sequence.
 *      2    FW version
 *      3    IMEM byte size
 *      4    DMEM byte size
 *           IMEM
 *           DMEM
 *      last checksum ==> sum of everything
 */
struct dpfe_firmware_header {
	u32 magic;
	u32 sequence;
	u32 version;
	u32 imem_size;
	u32 dmem_size;
};

/* Things we only need during initialization. */
struct init_data {
	unsigned int dmem_len;
	unsigned int imem_len;
	unsigned int chksum;
	bool is_big_endian;
};

/* Things we need for as long as we are active. */
struct private_data {
	void __iomem *regs;
	void __iomem *dmem;
	void __iomem *imem;
	struct device *dev;
	unsigned int index;
	struct mutex lock;
};

static const char *error_text[] = {
	"Success", "Header code incorrect", "Unknown command or argument",
	"Incorrect checksum", "Malformed command", "Timed out",
};

/* List of supported firmware commands */
static const u32 dpfe_commands[DPFE_CMD_MAX][MSG_FIELD_MAX] = {
	[DPFE_CMD_GET_INFO] = {
		[MSG_HEADER] = DPFE_MSG_TYPE_COMMAND,
		[MSG_COMMAND] = 1,
		[MSG_ARG_COUNT] = 1,
		[MSG_ARG0] = 1,
		[MSG_CHKSUM] = 4,
	},
	[DPFE_CMD_GET_REFRESH] = {
		[MSG_HEADER] = DPFE_MSG_TYPE_COMMAND,
		[MSG_COMMAND] = 2,
		[MSG_ARG_COUNT] = 1,
		[MSG_ARG0] = 1,
		[MSG_CHKSUM] = 5,
	},
	[DPFE_CMD_GET_VENDOR] = {
		[MSG_HEADER] = DPFE_MSG_TYPE_COMMAND,
		[MSG_COMMAND] = 2,
		[MSG_ARG_COUNT] = 1,
		[MSG_ARG0] = 2,
		[MSG_CHKSUM] = 6,
	},
};

static bool is_dcpu_enabled(void __iomem *regs)
{
	u32 val;

	val = readl_relaxed(regs + REG_DCPU_RESET);

	return !(val & DCPU_RESET_MASK);
}

static void __disable_dcpu(void __iomem *regs)
{
	u32 val;

	if (!is_dcpu_enabled(regs))
		return;

	/* Put DCPU in reset if it's running. */
	val = readl_relaxed(regs + REG_DCPU_RESET);
	val |= (1 << DCPU_RESET_SHIFT);
	writel_relaxed(val, regs + REG_DCPU_RESET);
}

static void __enable_dcpu(void __iomem *regs)
{
	u32 val;

	/* Clear mailbox registers. */
	writel_relaxed(0, regs + REG_TO_DCPU_MBOX);
	writel_relaxed(0, regs + REG_TO_HOST_MBOX);

	/* Disable DCPU clock gating */
	val = readl_relaxed(regs + REG_DCPU_RESET);
	val &= ~(1 << DCPU_CLK_DISABLE_SHIFT);
	writel_relaxed(val, regs + REG_DCPU_RESET);

	/* Take DCPU out of reset */
	val = readl_relaxed(regs + REG_DCPU_RESET);
	val &= ~(1 << DCPU_RESET_SHIFT);
	writel_relaxed(val, regs + REG_DCPU_RESET);
}

static unsigned int get_msg_chksum(const u32 msg[])
{
	unsigned int sum = 0;
	unsigned int i;

	/* Don't include the last field in the checksum. */
	for (i = 0; i < MSG_FIELD_MAX - 1; i++)
		sum += msg[i];

	return sum;
}

static int __send_command(struct private_data *priv, unsigned int cmd,
			  u32 result[])
{
	const u32 *msg = dpfe_commands[cmd];
	void __iomem *regs = priv->regs;
	unsigned int i, chksum;
	int ret = 0;
	u32 resp;

	if (cmd >= DPFE_CMD_MAX)
		return -1;

	mutex_lock(&priv->lock);

	/* Write command and arguments to message area */
	for (i = 0; i < MSG_FIELD_MAX; i++)
		writel_relaxed(msg[i], regs + DCPU_MSG_RAM(i));

	/* Tell DCPU there is a command waiting */
	writel_relaxed(1, regs + REG_TO_DCPU_MBOX);

	/* Wait for DCPU to process the command */
	for (i = 0; i < DELAY_LOOP_MAX; i++) {
		/* Read response code */
		resp = readl_relaxed(regs + REG_TO_HOST_MBOX);
		if (resp > 0)
			break;
		udelay(5);
	}

	if (i == DELAY_LOOP_MAX) {
		resp = (DCPU_RET_ERR_TIMEDOUT & ~DCPU_RET_ERROR_BIT);
		ret = -ffs(resp);
	} else {
		/* Read response data */
		for (i = 0; i < MSG_FIELD_MAX; i++)
			result[i] = readl_relaxed(regs + DCPU_MSG_RAM(i));
	}

	/* Tell DCPU we are done */
	writel_relaxed(0, regs + REG_TO_HOST_MBOX);

	mutex_unlock(&priv->lock);

	if (ret)
		return ret;

	/* Verify response */
	chksum = get_msg_chksum(result);
	if (chksum != result[MSG_CHKSUM])
		resp = DCPU_RET_ERR_CHKSUM;

	if (resp != DCPU_RET_SUCCESS) {
		resp &= ~DCPU_RET_ERROR_BIT;
		ret = -ffs(resp);
	}

	return ret;
}

/* Ensure that the firmware file loaded meets all the requirements. */
static int __verify_firmware(struct init_data *init,
			     const struct firmware *fw)
{
	const struct dpfe_firmware_header *header = (void *)fw->data;
	unsigned int dmem_size, imem_size, total_size;
	bool is_big_endian = false;
	const u32 *chksum_ptr;

	if (header->magic == DPFE_BE_MAGIC)
		is_big_endian = true;
	else if (header->magic != DPFE_LE_MAGIC)
		return ERR_INVALID_MAGIC;

	if (is_big_endian) {
		dmem_size = be32_to_cpu(header->dmem_size);
		imem_size = be32_to_cpu(header->imem_size);
	} else {
		dmem_size = le32_to_cpu(header->dmem_size);
		imem_size = le32_to_cpu(header->imem_size);
	}

	/* Data and instruction sections are 32 bit words. */
	if ((dmem_size % sizeof(u32)) != 0 || (imem_size % sizeof(u32)) != 0)
		return ERR_INVALID_SIZE;

	/*
	 * The header + the data section + the instruction section + the
	 * checksum must be equal to the total firmware size.
	 */
	total_size = dmem_size + imem_size + sizeof(*header) +
		sizeof(*chksum_ptr);
	if (total_size != fw->size)
		return ERR_INVALID_SIZE;

	/* The checksum comes at the very end. */
	chksum_ptr = (void *)fw->data + sizeof(*header) + dmem_size + imem_size;

	init->is_big_endian = is_big_endian;
	init->dmem_len = dmem_size;
	init->imem_len = imem_size;
	init->chksum = (is_big_endian)
		? be32_to_cpu(*chksum_ptr) : le32_to_cpu(*chksum_ptr);

	return 0;
}

/* Verify checksum by reading back the firmware from co-processor RAM. */
static int __verify_fw_checksum(struct init_data *init,
				struct private_data *priv,
				const struct dpfe_firmware_header *header,
				u32 checksum)
{
	u32 magic, sequence, version, sum;
	u32 __iomem *dmem = priv->dmem;
	u32 __iomem *imem = priv->imem;
	unsigned int i;

	if (init->is_big_endian) {
		magic = be32_to_cpu(header->magic);
		sequence = be32_to_cpu(header->sequence);
		version = be32_to_cpu(header->version);
	} else {
		magic = le32_to_cpu(header->magic);
		sequence = le32_to_cpu(header->sequence);
		version = le32_to_cpu(header->version);
	}

	sum = magic + sequence + version + init->dmem_len + init->imem_len;

	for (i = 0; i < init->dmem_len / sizeof(u32); i++)
		sum += readl_relaxed(dmem + i);

	for (i = 0; i < init->imem_len / sizeof(u32); i++)
		sum += readl_relaxed(imem + i);

	return (sum == checksum) ? 0 : -1;
}

static int __write_firmware(u32 __iomem *mem, const u32 *fw,
			    unsigned int size, bool is_big_endian)
{
	unsigned int i;

	/* Convert size to 32-bit words. */
	size /= sizeof(u32);

	/* It is recommended to clear the firmware area first. */
	for (i = 0; i < size; i++)
		writel_relaxed(0, mem + i);

	/* Now copy it. */
	if (is_big_endian) {
		for (i = 0; i < size; i++)
			writel_relaxed(be32_to_cpu(fw[i]), mem + i);
	} else {
		for (i = 0; i < size; i++)
			writel_relaxed(le32_to_cpu(fw[i]), mem + i);
	}

	return 0;
}

static int brcmstb_dpfe_download_firmware(struct platform_device *pdev,
					  struct init_data *init)
{
	const struct dpfe_firmware_header *header;
	unsigned int dmem_size, imem_size;
	struct device *dev = &pdev->dev;
	bool is_big_endian = false;
	struct private_data *priv;
	const struct firmware *fw;
	const u32 *dmem, *imem;
	const void *fw_blob;
	int ret;

	priv = platform_get_drvdata(pdev);

	/*
	 * Skip downloading the firmware if the DCPU is already running and
	 * responding to commands.
	 */
	if (is_dcpu_enabled(priv->regs)) {
		u32 response[MSG_FIELD_MAX];

		ret = __send_command(priv, DPFE_CMD_GET_INFO, response);
		if (!ret)
			return 0;
	}

	ret = request_firmware(&fw, FIRMWARE_NAME, dev);
	/* request_firmware() prints its own error messages. */
	if (ret)
		return ret;

	ret = __verify_firmware(init, fw);
	if (ret)
		return -EFAULT;

	__disable_dcpu(priv->regs);

	is_big_endian = init->is_big_endian;
	dmem_size = init->dmem_len;
	imem_size = init->imem_len;

	/* At the beginning of the firmware blob is a header. */
	header = (struct dpfe_firmware_header *)fw->data;
	/* Void pointer to the beginning of the actual firmware. */
	fw_blob = fw->data + sizeof(*header);
	/* IMEM comes right after the header. */
	imem = fw_blob;
	/* DMEM follows after IMEM. */
	dmem = fw_blob + imem_size;

	ret = __write_firmware(priv->dmem, dmem, dmem_size, is_big_endian);
	if (ret)
		return ret;
	ret = __write_firmware(priv->imem, imem, imem_size, is_big_endian);
	if (ret)
		return ret;

	ret = __verify_fw_checksum(init, priv, header, init->chksum);
	if (ret)
		return ret;

	__enable_dcpu(priv->regs);

	return 0;
}

static ssize_t generic_show(unsigned int command, u32 response[],
			    struct device *dev, char *buf)
{
	struct private_data *priv;
	int ret;

	priv = dev_get_drvdata(dev);
	if (!priv)
		return sprintf(buf, "ERROR: driver private data not set\n");

	ret = __send_command(priv, command, response);
	if (ret < 0)
		return sprintf(buf, "ERROR: %s\n", error_text[-ret]);

	return 0;
}

static ssize_t show_info(struct device *dev, struct device_attribute *devattr,
			 char *buf)
{
	u32 response[MSG_FIELD_MAX];
	unsigned int info;
	int ret;

	ret = generic_show(DPFE_CMD_GET_INFO, response, dev, buf);
	if (ret)
		return ret;

	info = response[MSG_ARG0];

	return sprintf(buf, "%u.%u.%u.%u\n",
		       (info >> 24) & 0xff,
		       (info >> 16) & 0xff,
		       (info >> 8) & 0xff,
		       info & 0xff);
}

static ssize_t show_refresh(struct device *dev,
			    struct device_attribute *devattr, char *buf)
{
	u32 response[MSG_FIELD_MAX];
	void __iomem *info;
	struct private_data *priv;
	unsigned int offset;
	u8 refresh, sr_abort, ppre, thermal_offs, tuf;
	u32 mr4;
	int ret;

	ret = generic_show(DPFE_CMD_GET_REFRESH, response, dev, buf);
	if (ret)
		return ret;

	priv = dev_get_drvdata(dev);
	offset = response[MSG_ARG0];
	info = priv->dmem + offset;

	mr4 = readl_relaxed(info + DRAM_INFO_MR4) & DRAM_INFO_MR4_MASK;

	refresh = (mr4 >> DRAM_MR4_REFRESH) & DRAM_MR4_REFRESH_MASK;
	sr_abort = (mr4 >> DRAM_MR4_SR_ABORT) & DRAM_MR4_SR_ABORT_MASK;
	ppre = (mr4 >> DRAM_MR4_PPRE) & DRAM_MR4_PPRE_MASK;
	thermal_offs = (mr4 >> DRAM_MR4_TH_OFFS) & DRAM_MR4_TH_OFFS_MASK;
	tuf = (mr4 >> DRAM_MR4_TUF) & DRAM_MR4_TUF_MASK;

	return sprintf(buf, "%#x %#x %#x %#x %#x %#x %#x\n",
		       readl_relaxed(info + DRAM_INFO_INTERVAL),
		       refresh, sr_abort, ppre, thermal_offs, tuf,
		       readl_relaxed(info + DRAM_INFO_ERROR));
}

static ssize_t store_refresh(struct device *dev, struct device_attribute *attr,
			     const char *buf, size_t count)
{
	u32 response[MSG_FIELD_MAX];
	struct private_data *priv;
	void __iomem *info;
	unsigned int offset;
	unsigned long val;
	int ret;

	if (kstrtoul(buf, 0, &val) < 0)
		return -EINVAL;

	priv = dev_get_drvdata(dev);

	ret = __send_command(priv, DPFE_CMD_GET_REFRESH, response);
	if (ret)
		return ret;

	offset = response[MSG_ARG0];
	info = priv->dmem + offset;
	writel_relaxed(val, info + DRAM_INFO_INTERVAL);

	return count;
}

static ssize_t show_vendor(struct device *dev, struct device_attribute *devattr,
			   char *buf)
{
	u32 response[MSG_FIELD_MAX];
	struct private_data *priv;
	void __iomem *info;
	unsigned int offset;
	int ret;

	ret = generic_show(DPFE_CMD_GET_VENDOR, response, dev, buf);
	if (ret)
		return ret;

	offset = response[MSG_ARG0];
	priv = dev_get_drvdata(dev);
	info = priv->dmem + offset;

	return sprintf(buf, "%#x %#x %#x %#x %#x\n",
		       readl_relaxed(info + DRAM_VENDOR_MR5) & DRAM_VENDOR_MASK,
		       readl_relaxed(info + DRAM_VENDOR_MR6) & DRAM_VENDOR_MASK,
		       readl_relaxed(info + DRAM_VENDOR_MR7) & DRAM_VENDOR_MASK,
		       readl_relaxed(info + DRAM_VENDOR_MR8) & DRAM_VENDOR_MASK,
		       readl_relaxed(info + DRAM_VENDOR_ERROR));
}

static int brcmstb_dpfe_resume(struct platform_device *pdev)
{
	struct init_data init;

	return brcmstb_dpfe_download_firmware(pdev, &init);
}

static DEVICE_ATTR(dpfe_info, 0444, show_info, NULL);
static DEVICE_ATTR(dpfe_refresh, 0644, show_refresh, store_refresh);
static DEVICE_ATTR(dpfe_vendor, 0444, show_vendor, NULL);
static struct attribute *dpfe_attrs[] = {
	&dev_attr_dpfe_info.attr,
	&dev_attr_dpfe_refresh.attr,
	&dev_attr_dpfe_vendor.attr,
	NULL
};
ATTRIBUTE_GROUPS(dpfe);

static int brcmstb_dpfe_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	struct private_data *priv;
	struct device *dpfe_dev;
	struct init_data init;
	struct resource *res;
	u32 index;
	int ret;

	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;

	mutex_init(&priv->lock);
	platform_set_drvdata(pdev, priv);

	/* Cell index is optional; default to 0 if not present. */
	ret = of_property_read_u32(dev->of_node, "cell-index", &index);
	if (ret)
		index = 0;

	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dpfe-cpu");
	priv->regs = devm_ioremap_resource(dev, res);
	if (IS_ERR(priv->regs)) {
		dev_err(dev, "couldn't map DCPU registers\n");
		return -ENODEV;
	}

	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dpfe-dmem");
	priv->dmem = devm_ioremap_resource(dev, res);
	if (IS_ERR(priv->dmem)) {
		dev_err(dev, "Couldn't map DCPU data memory\n");
		return -ENOENT;
	}

	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dpfe-imem");
	priv->imem = devm_ioremap_resource(dev, res);
	if (IS_ERR(priv->imem)) {
		dev_err(dev, "Couldn't map DCPU instruction memory\n");
		return -ENOENT;
	}

	ret = brcmstb_dpfe_download_firmware(pdev, &init);
	if (ret)
		goto err;

	dpfe_dev = devm_kzalloc(dev, sizeof(*dpfe_dev), GFP_KERNEL);
	if (!dpfe_dev) {
		ret = -ENOMEM;
		goto err;
	}

	priv->dev = dpfe_dev;
	priv->index = index;

	dpfe_dev->parent = dev;
	dpfe_dev->groups = dpfe_groups;
	dpfe_dev->of_node = dev->of_node;
	dev_set_drvdata(dpfe_dev, priv);
	dev_set_name(dpfe_dev, "dpfe%u", index);

	ret = device_register(dpfe_dev);
	if (ret)
		goto err;

	dev_info(dev, "registered.\n");

	return 0;

err:
	dev_err(dev, "failed to initialize -- error %d\n", ret);

	return ret;
}

static const struct of_device_id brcmstb_dpfe_of_match[] = {
	{ .compatible = "brcm,dpfe-cpu", },
	{}
};
MODULE_DEVICE_TABLE(of, brcmstb_dpfe_of_match);

static struct platform_driver brcmstb_dpfe_driver = {
	.driver	= {
		.name = DRVNAME,
		.of_match_table = brcmstb_dpfe_of_match,
	},
	.probe = brcmstb_dpfe_probe,
	.resume = brcmstb_dpfe_resume,
};

module_platform_driver(brcmstb_dpfe_driver);

MODULE_AUTHOR("Markus Mayer <mmayer@broadcom.com>");
MODULE_DESCRIPTION("BRCMSTB DDR PHY Front End Driver");
MODULE_LICENSE("GPL");
+24 -30
drivers/memory/omap-gpmc.c
···
 }
 EXPORT_SYMBOL(gpmc_configure);

-void gpmc_update_nand_reg(struct gpmc_nand_regs *reg, int cs)
+static bool gpmc_nand_writebuffer_empty(void)
+{
+	if (gpmc_read_reg(GPMC_STATUS) & GPMC_STATUS_EMPTYWRITEBUFFERSTATUS)
+		return true;
+
+	return false;
+}
+
+static struct gpmc_nand_ops nand_ops = {
+	.nand_writebuffer_empty = gpmc_nand_writebuffer_empty,
+};
+
+/**
+ * gpmc_omap_get_nand_ops - Get the GPMC NAND interface
+ * @regs: the GPMC NAND register map exclusive for NAND use.
+ * @cs: GPMC chip select number on which the NAND sits. The
+ *      register map returned will be specific to this chip select.
+ *
+ * Returns NULL on error e.g. invalid cs.
+ */
+struct gpmc_nand_ops *gpmc_omap_get_nand_ops(struct gpmc_nand_regs *reg, int cs)
 {
 	int i;

-	reg->gpmc_status = NULL;	/* deprecated */
+	if (cs >= gpmc_cs_num)
+		return NULL;
+
 	reg->gpmc_nand_command = gpmc_base + GPMC_CS0_OFFSET +
 		GPMC_CS_NAND_COMMAND + GPMC_CS_SIZE * cs;
 	reg->gpmc_nand_address = gpmc_base + GPMC_CS0_OFFSET +
···
 		reg->gpmc_bch_result6[i] = gpmc_base + GPMC_ECC_BCH_RESULT_6 +
 					   i * GPMC_BCH_SIZE;
 	}
-}
-
-static bool gpmc_nand_writebuffer_empty(void)
-{
-	if (gpmc_read_reg(GPMC_STATUS) & GPMC_STATUS_EMPTYWRITEBUFFERSTATUS)
-		return true;
-
-	return false;
-}
-
-static struct gpmc_nand_ops nand_ops = {
-	.nand_writebuffer_empty = gpmc_nand_writebuffer_empty,
-};
-
-/**
- * gpmc_omap_get_nand_ops - Get the GPMC NAND interface
- * @regs: the GPMC NAND register map exclusive for NAND use.
- * @cs: GPMC chip select number on which the NAND sits. The
- *      register map returned will be specific to this chip select.
- *
- * Returns NULL on error e.g. invalid cs.
- */
-struct gpmc_nand_ops *gpmc_omap_get_nand_ops(struct gpmc_nand_regs *reg, int cs)
-{
-	if (cs >= gpmc_cs_num)
-		return NULL;
-
-	gpmc_update_nand_reg(reg, cs);

 	return &nand_ops;
 }
+26
drivers/of/of_reserved_mem.c
··· 397 397 rmem->ops->device_release(rmem, dev); 398 398 } 399 399 EXPORT_SYMBOL_GPL(of_reserved_mem_device_release); 400 + 401 + /** 402 + * of_reserved_mem_lookup() - acquire reserved_mem from a device node 403 + * @np: node pointer of the desired reserved-memory region 404 + * 405 + * This function allows drivers to acquire a reference to the reserved_mem 406 + * struct based on a device node handle. 407 + * 408 + * Returns a reserved_mem reference, or NULL on error. 409 + */ 410 + struct reserved_mem *of_reserved_mem_lookup(struct device_node *np) 411 + { 412 + const char *name; 413 + int i; 414 + 415 + if (!np->full_name) 416 + return NULL; 417 + 418 + name = kbasename(np->full_name); 419 + for (i = 0; i < reserved_mem_count; i++) 420 + if (!strcmp(reserved_mem[i].name, name)) 421 + return &reserved_mem[i]; 422 + 423 + return NULL; 424 + } 425 + EXPORT_SYMBOL_GPL(of_reserved_mem_lookup);
+11 -8
drivers/of/platform.c
··· 497 497 EXPORT_SYMBOL_GPL(of_platform_default_populate); 498 498 499 499 #ifndef CONFIG_PPC 500 + static const struct of_device_id reserved_mem_matches[] = { 501 + { .compatible = "qcom,rmtfs-mem" }, 502 + { .compatible = "ramoops" }, 503 + {} 504 + }; 505 + 500 506 static int __init of_platform_default_populate_init(void) 501 507 { 502 508 struct device_node *node; ··· 511 505 return -ENODEV; 512 506 513 507 /* 514 - * Handle ramoops explicitly, since it is inside /reserved-memory, 515 - * which lacks a "compatible" property. 508 + * Handle certain compatibles explicitly, since we don't want to create 509 + * platform_devices for every node in /reserved-memory with a 510 + * "compatible", 516 511 */ 517 - node = of_find_node_by_path("/reserved-memory"); 518 - if (node) { 519 - node = of_find_compatible_node(node, NULL, "ramoops"); 520 - if (node) 521 - of_platform_device_create(node, NULL, NULL); 522 - } 512 + for_each_matching_node(node, reserved_mem_matches) 513 + of_platform_device_create(node, NULL, NULL); 523 514 524 515 /* Populate everything else. */ 525 516 of_platform_default_populate(NULL, NULL, NULL);
+15 -15
drivers/reset/Kconfig
··· 28 28 This enables the ATH79 reset controller driver that supports the 29 29 AR71xx SoC reset controller. 30 30 31 + config RESET_AXS10X 32 + bool "AXS10x Reset Driver" if COMPILE_TEST 33 + default ARC_PLAT_AXS10X 34 + help 35 + This enables the reset controller driver for AXS10x. 36 + 31 37 config RESET_BERLIN 32 38 bool "Berlin Reset Driver" if COMPILE_TEST 33 39 default ARCH_BERLIN ··· 81 75 help 82 76 This enables the reset driver for ImgTec Pistachio SoCs. 83 77 84 - config RESET_SOCFPGA 85 - bool "SoCFPGA Reset Driver" if COMPILE_TEST 86 - default ARCH_SOCFPGA 78 + config RESET_SIMPLE 79 + bool "Simple Reset Controller Driver" if COMPILE_TEST 80 + default ARCH_SOCFPGA || ARCH_STM32 || ARCH_STRATIX10 || ARCH_SUNXI || ARCH_ZX 87 81 help 88 - This enables the reset controller driver for Altera SoCFPGAs. 82 + This enables a simple reset controller driver for reset lines that 83 + that can be asserted and deasserted by toggling bits in a contiguous, 84 + exclusive register space. 89 85 90 - config RESET_STM32 91 - bool "STM32 Reset Driver" if COMPILE_TEST 92 - default ARCH_STM32 93 - help 94 - This enables the RCC reset controller driver for STM32 MCUs. 86 + Currently this driver supports Altera SoCFPGAs, the RCC reset 87 + controller in STM32 MCUs, Allwinner SoCs, and ZTE's zx2967 family. 95 88 96 89 config RESET_SUNXI 97 90 bool "Allwinner SoCs Reset Driver" if COMPILE_TEST && !ARCH_SUNXI 98 91 default ARCH_SUNXI 92 + select RESET_SIMPLE 99 93 help 100 94 This enables the reset driver for Allwinner SoCs. 101 95 ··· 126 120 Support for reset controllers on UniPhier SoCs. 127 121 Say Y if you want to control reset signals provided by System Control 128 122 block, Media I/O block, Peripheral Block. 129 - 130 - config RESET_ZX2967 131 - bool "ZTE ZX2967 Reset Driver" 132 - depends on ARCH_ZX || COMPILE_TEST 133 - help 134 - This enables the reset controller driver for ZTE's zx2967 family. 
135 123 136 124 config RESET_ZYNQ 137 125 bool "ZYNQ Reset Driver" if COMPILE_TEST
+2 -3
drivers/reset/Makefile
··· 5 5 obj-$(CONFIG_ARCH_TEGRA) += tegra/ 6 6 obj-$(CONFIG_RESET_A10SR) += reset-a10sr.o 7 7 obj-$(CONFIG_RESET_ATH79) += reset-ath79.o 8 + obj-$(CONFIG_RESET_AXS10X) += reset-axs10x.o 8 9 obj-$(CONFIG_RESET_BERLIN) += reset-berlin.o 9 10 obj-$(CONFIG_RESET_HSDK) += reset-hsdk.o 10 11 obj-$(CONFIG_RESET_IMX7) += reset-imx7.o ··· 14 13 obj-$(CONFIG_RESET_MESON) += reset-meson.o 15 14 obj-$(CONFIG_RESET_OXNAS) += reset-oxnas.o 16 15 obj-$(CONFIG_RESET_PISTACHIO) += reset-pistachio.o 17 - obj-$(CONFIG_RESET_SOCFPGA) += reset-socfpga.o 18 - obj-$(CONFIG_RESET_STM32) += reset-stm32.o 16 + obj-$(CONFIG_RESET_SIMPLE) += reset-simple.o 19 17 obj-$(CONFIG_RESET_SUNXI) += reset-sunxi.o 20 18 obj-$(CONFIG_RESET_TI_SCI) += reset-ti-sci.o 21 19 obj-$(CONFIG_RESET_TI_SYSCON) += reset-ti-syscon.o 22 20 obj-$(CONFIG_RESET_UNIPHIER) += reset-uniphier.o 23 - obj-$(CONFIG_RESET_ZX2967) += reset-zx2967.o 24 21 obj-$(CONFIG_RESET_ZYNQ) += reset-zynq.o 25 22
+83
drivers/reset/reset-axs10x.c
··· 1 + /* 2 + * Copyright (C) 2017 Synopsys. 3 + * 4 + * Synopsys AXS10x reset driver. 5 + * 6 + * This file is licensed under the terms of the GNU General Public 7 + * License version 2. This program is licensed "as is" without any 8 + * warranty of any kind, whether express or implied. 9 + */ 10 + 11 + #include <linux/io.h> 12 + #include <linux/module.h> 13 + #include <linux/platform_device.h> 14 + #include <linux/reset-controller.h> 15 + 16 + #define to_axs10x_rst(p) container_of((p), struct axs10x_rst, rcdev) 17 + 18 + #define AXS10X_MAX_RESETS 32 19 + 20 + struct axs10x_rst { 21 + void __iomem *regs_rst; 22 + spinlock_t lock; 23 + struct reset_controller_dev rcdev; 24 + }; 25 + 26 + static int axs10x_reset_reset(struct reset_controller_dev *rcdev, 27 + unsigned long id) 28 + { 29 + struct axs10x_rst *rst = to_axs10x_rst(rcdev); 30 + unsigned long flags; 31 + 32 + spin_lock_irqsave(&rst->lock, flags); 33 + writel(BIT(id), rst->regs_rst); 34 + spin_unlock_irqrestore(&rst->lock, flags); 35 + 36 + return 0; 37 + } 38 + 39 + static const struct reset_control_ops axs10x_reset_ops = { 40 + .reset = axs10x_reset_reset, 41 + }; 42 + 43 + static int axs10x_reset_probe(struct platform_device *pdev) 44 + { 45 + struct axs10x_rst *rst; 46 + struct resource *mem; 47 + 48 + rst = devm_kzalloc(&pdev->dev, sizeof(*rst), GFP_KERNEL); 49 + if (!rst) 50 + return -ENOMEM; 51 + 52 + mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); 53 + rst->regs_rst = devm_ioremap_resource(&pdev->dev, mem); 54 + if (IS_ERR(rst->regs_rst)) 55 + return PTR_ERR(rst->regs_rst); 56 + 57 + spin_lock_init(&rst->lock); 58 + 59 + rst->rcdev.owner = THIS_MODULE; 60 + rst->rcdev.ops = &axs10x_reset_ops; 61 + rst->rcdev.of_node = pdev->dev.of_node; 62 + rst->rcdev.nr_resets = AXS10X_MAX_RESETS; 63 + 64 + return devm_reset_controller_register(&pdev->dev, &rst->rcdev); 65 + } 66 + 67 + static const struct of_device_id axs10x_reset_dt_match[] = { 68 + { .compatible = "snps,axs10x-reset" }, 69 + { }, 70 + }; 
71 + 72 + static struct platform_driver axs10x_reset_driver = { 73 + .probe = axs10x_reset_probe, 74 + .driver = { 75 + .name = "axs10x-reset", 76 + .of_match_table = axs10x_reset_dt_match, 77 + }, 78 + }; 79 + builtin_platform_driver(axs10x_reset_driver); 80 + 81 + MODULE_AUTHOR("Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>"); 82 + MODULE_DESCRIPTION("Synopsys AXS10x reset driver"); 83 + MODULE_LICENSE("GPL v2");
+58 -7
drivers/reset/reset-meson.c
··· 62 62 #include <linux/reset-controller.h> 63 63 #include <linux/slab.h> 64 64 #include <linux/types.h> 65 + #include <linux/of_device.h> 65 66 66 67 #define REG_COUNT 8 67 68 #define BITS_PER_REG 32 69 + #define LEVEL_OFFSET 0x7c 68 70 69 71 struct meson_reset { 70 72 void __iomem *reg_base; 71 73 struct reset_controller_dev rcdev; 74 + spinlock_t lock; 72 75 }; 73 76 74 77 static int meson_reset_reset(struct reset_controller_dev *rcdev, ··· 83 80 unsigned int offset = id % BITS_PER_REG; 84 81 void __iomem *reg_addr = data->reg_base + (bank << 2); 85 82 86 - if (bank >= REG_COUNT) 87 - return -EINVAL; 88 - 89 83 writel(BIT(offset), reg_addr); 90 84 91 85 return 0; 92 86 } 93 87 94 - static const struct reset_control_ops meson_reset_ops = { 88 + static int meson_reset_level(struct reset_controller_dev *rcdev, 89 + unsigned long id, bool assert) 90 + { 91 + struct meson_reset *data = 92 + container_of(rcdev, struct meson_reset, rcdev); 93 + unsigned int bank = id / BITS_PER_REG; 94 + unsigned int offset = id % BITS_PER_REG; 95 + void __iomem *reg_addr = data->reg_base + LEVEL_OFFSET + (bank << 2); 96 + unsigned long flags; 97 + u32 reg; 98 + 99 + spin_lock_irqsave(&data->lock, flags); 100 + 101 + reg = readl(reg_addr); 102 + if (assert) 103 + writel(reg & ~BIT(offset), reg_addr); 104 + else 105 + writel(reg | BIT(offset), reg_addr); 106 + 107 + spin_unlock_irqrestore(&data->lock, flags); 108 + 109 + return 0; 110 + } 111 + 112 + static int meson_reset_assert(struct reset_controller_dev *rcdev, 113 + unsigned long id) 114 + { 115 + return meson_reset_level(rcdev, id, true); 116 + } 117 + 118 + static int meson_reset_deassert(struct reset_controller_dev *rcdev, 119 + unsigned long id) 120 + { 121 + return meson_reset_level(rcdev, id, false); 122 + } 123 + 124 + static const struct reset_control_ops meson_reset_meson8_ops = { 95 125 .reset = meson_reset_reset, 96 126 }; 97 127 128 + static const struct reset_control_ops meson_reset_gx_ops = { 129 + .reset = 
meson_reset_reset, 130 + .assert = meson_reset_assert, 131 + .deassert = meson_reset_deassert, 132 + }; 133 + 98 134 static const struct of_device_id meson_reset_dt_ids[] = { 99 - { .compatible = "amlogic,meson8b-reset", }, 100 - { .compatible = "amlogic,meson-gxbb-reset", }, 135 + { .compatible = "amlogic,meson8b-reset", 136 + .data = &meson_reset_meson8_ops, }, 137 + { .compatible = "amlogic,meson-gxbb-reset", 138 + .data = &meson_reset_gx_ops, }, 101 139 { /* sentinel */ }, 102 140 }; 103 141 104 142 static int meson_reset_probe(struct platform_device *pdev) 105 143 { 144 + const struct reset_control_ops *ops; 106 145 struct meson_reset *data; 107 146 struct resource *res; 108 147 109 148 data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL); 110 149 if (!data) 111 150 return -ENOMEM; 151 + 152 + ops = of_device_get_match_data(&pdev->dev); 153 + if (!ops) 154 + return -EINVAL; 112 155 113 156 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 114 157 data->reg_base = devm_ioremap_resource(&pdev->dev, res); ··· 163 114 164 115 platform_set_drvdata(pdev, data); 165 116 117 + spin_lock_init(&data->lock); 118 + 166 119 data->rcdev.owner = THIS_MODULE; 167 120 data->rcdev.nr_resets = REG_COUNT * BITS_PER_REG; 168 - data->rcdev.ops = &meson_reset_ops; 121 + data->rcdev.ops = ops; 169 122 data->rcdev.of_node = pdev->dev.of_node; 170 123 171 124 return devm_reset_controller_register(&pdev->dev, &data->rcdev);
+186
drivers/reset/reset-simple.c
··· 1 + /* 2 + * Simple Reset Controller Driver 3 + * 4 + * Copyright (C) 2017 Pengutronix, Philipp Zabel <kernel@pengutronix.de> 5 + * 6 + * Based on Allwinner SoCs Reset Controller driver 7 + * 8 + * Copyright 2013 Maxime Ripard 9 + * 10 + * Maxime Ripard <maxime.ripard@free-electrons.com> 11 + * 12 + * This program is free software; you can redistribute it and/or modify 13 + * it under the terms of the GNU General Public License as published by 14 + * the Free Software Foundation; either version 2 of the License, or 15 + * (at your option) any later version. 16 + */ 17 + 18 + #include <linux/device.h> 19 + #include <linux/err.h> 20 + #include <linux/io.h> 21 + #include <linux/of.h> 22 + #include <linux/of_device.h> 23 + #include <linux/platform_device.h> 24 + #include <linux/reset-controller.h> 25 + #include <linux/spinlock.h> 26 + 27 + #include "reset-simple.h" 28 + 29 + static inline struct reset_simple_data * 30 + to_reset_simple_data(struct reset_controller_dev *rcdev) 31 + { 32 + return container_of(rcdev, struct reset_simple_data, rcdev); 33 + } 34 + 35 + static int reset_simple_update(struct reset_controller_dev *rcdev, 36 + unsigned long id, bool assert) 37 + { 38 + struct reset_simple_data *data = to_reset_simple_data(rcdev); 39 + int reg_width = sizeof(u32); 40 + int bank = id / (reg_width * BITS_PER_BYTE); 41 + int offset = id % (reg_width * BITS_PER_BYTE); 42 + unsigned long flags; 43 + u32 reg; 44 + 45 + spin_lock_irqsave(&data->lock, flags); 46 + 47 + reg = readl(data->membase + (bank * reg_width)); 48 + if (assert ^ data->active_low) 49 + reg |= BIT(offset); 50 + else 51 + reg &= ~BIT(offset); 52 + writel(reg, data->membase + (bank * reg_width)); 53 + 54 + spin_unlock_irqrestore(&data->lock, flags); 55 + 56 + return 0; 57 + } 58 + 59 + static int reset_simple_assert(struct reset_controller_dev *rcdev, 60 + unsigned long id) 61 + { 62 + return reset_simple_update(rcdev, id, true); 63 + } 64 + 65 + static int reset_simple_deassert(struct 
reset_controller_dev *rcdev, 66 + unsigned long id) 67 + { 68 + return reset_simple_update(rcdev, id, false); 69 + } 70 + 71 + static int reset_simple_status(struct reset_controller_dev *rcdev, 72 + unsigned long id) 73 + { 74 + struct reset_simple_data *data = to_reset_simple_data(rcdev); 75 + int reg_width = sizeof(u32); 76 + int bank = id / (reg_width * BITS_PER_BYTE); 77 + int offset = id % (reg_width * BITS_PER_BYTE); 78 + u32 reg; 79 + 80 + reg = readl(data->membase + (bank * reg_width)); 81 + 82 + return !(reg & BIT(offset)) ^ !data->status_active_low; 83 + } 84 + 85 + const struct reset_control_ops reset_simple_ops = { 86 + .assert = reset_simple_assert, 87 + .deassert = reset_simple_deassert, 88 + .status = reset_simple_status, 89 + }; 90 + 91 + /** 92 + * struct reset_simple_devdata - simple reset controller properties 93 + * @reg_offset: offset between base address and first reset register. 94 + * @nr_resets: number of resets. If not set, default to resource size in bits. 95 + * @active_low: if true, bits are cleared to assert the reset. Otherwise, bits 96 + * are set to assert the reset. 97 + * @status_active_low: if true, bits read back as cleared while the reset is 98 + * asserted. Otherwise, bits read back as set while the 99 + * reset is asserted. 
100 + */ 101 + struct reset_simple_devdata { 102 + u32 reg_offset; 103 + u32 nr_resets; 104 + bool active_low; 105 + bool status_active_low; 106 + }; 107 + 108 + #define SOCFPGA_NR_BANKS 8 109 + 110 + static const struct reset_simple_devdata reset_simple_socfpga = { 111 + .reg_offset = 0x10, 112 + .nr_resets = SOCFPGA_NR_BANKS * 32, 113 + .status_active_low = true, 114 + }; 115 + 116 + static const struct reset_simple_devdata reset_simple_active_low = { 117 + .active_low = true, 118 + .status_active_low = true, 119 + }; 120 + 121 + static const struct of_device_id reset_simple_dt_ids[] = { 122 + { .compatible = "altr,rst-mgr", .data = &reset_simple_socfpga }, 123 + { .compatible = "st,stm32-rcc", }, 124 + { .compatible = "allwinner,sun6i-a31-clock-reset", 125 + .data = &reset_simple_active_low }, 126 + { .compatible = "zte,zx296718-reset", 127 + .data = &reset_simple_active_low }, 128 + { /* sentinel */ }, 129 + }; 130 + 131 + static int reset_simple_probe(struct platform_device *pdev) 132 + { 133 + struct device *dev = &pdev->dev; 134 + const struct reset_simple_devdata *devdata; 135 + struct reset_simple_data *data; 136 + void __iomem *membase; 137 + struct resource *res; 138 + u32 reg_offset = 0; 139 + 140 + devdata = of_device_get_match_data(dev); 141 + 142 + data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL); 143 + if (!data) 144 + return -ENOMEM; 145 + 146 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 147 + membase = devm_ioremap_resource(dev, res); 148 + if (IS_ERR(membase)) 149 + return PTR_ERR(membase); 150 + 151 + spin_lock_init(&data->lock); 152 + data->membase = membase; 153 + data->rcdev.owner = THIS_MODULE; 154 + data->rcdev.nr_resets = resource_size(res) * BITS_PER_BYTE; 155 + data->rcdev.ops = &reset_simple_ops; 156 + data->rcdev.of_node = dev->of_node; 157 + 158 + if (devdata) { 159 + reg_offset = devdata->reg_offset; 160 + if (devdata->nr_resets) 161 + data->rcdev.nr_resets = devdata->nr_resets; 162 + data->active_low = 
devdata->active_low; 163 + data->status_active_low = devdata->status_active_low; 164 + } 165 + 166 + if (of_device_is_compatible(dev->of_node, "altr,rst-mgr") && 167 + of_property_read_u32(dev->of_node, "altr,modrst-offset", 168 + &reg_offset)) { 169 + dev_warn(dev, 170 + "missing altr,modrst-offset property, assuming 0x%x!\n", 171 + reg_offset); 172 + } 173 + 174 + data->membase += reg_offset; 175 + 176 + return devm_reset_controller_register(dev, &data->rcdev); 177 + } 178 + 179 + static struct platform_driver reset_simple_driver = { 180 + .probe = reset_simple_probe, 181 + .driver = { 182 + .name = "simple-reset", 183 + .of_match_table = reset_simple_dt_ids, 184 + }, 185 + }; 186 + builtin_platform_driver(reset_simple_driver);
+45
drivers/reset/reset-simple.h
··· 1 + /* 2 + * Simple Reset Controller ops 3 + * 4 + * Based on Allwinner SoCs Reset Controller driver 5 + * 6 + * Copyright 2013 Maxime Ripard 7 + * 8 + * Maxime Ripard <maxime.ripard@free-electrons.com> 9 + * 10 + * This program is free software; you can redistribute it and/or modify 11 + * it under the terms of the GNU General Public License as published by 12 + * the Free Software Foundation; either version 2 of the License, or 13 + * (at your option) any later version. 14 + */ 15 + 16 + #ifndef __RESET_SIMPLE_H__ 17 + #define __RESET_SIMPLE_H__ 18 + 19 + #include <linux/io.h> 20 + #include <linux/reset-controller.h> 21 + #include <linux/spinlock.h> 22 + 23 + /** 24 + * struct reset_simple_data - driver data for simple reset controllers 25 + * @lock: spinlock to protect registers during read-modify-write cycles 26 + * @membase: memory mapped I/O register range 27 + * @rcdev: reset controller device base structure 28 + * @active_low: if true, bits are cleared to assert the reset. Otherwise, bits 29 + * are set to assert the reset. Note that this says nothing about 30 + * the voltage level of the actual reset line. 31 + * @status_active_low: if true, bits read back as cleared while the reset is 32 + * asserted. Otherwise, bits read back as set while the 33 + * reset is asserted. 34 + */ 35 + struct reset_simple_data { 36 + spinlock_t lock; 37 + void __iomem *membase; 38 + struct reset_controller_dev rcdev; 39 + bool active_low; 40 + bool status_active_low; 41 + }; 42 + 43 + extern const struct reset_control_ops reset_simple_ops; 44 + 45 + #endif /* __RESET_SIMPLE_H__ */
-157
drivers/reset/reset-socfpga.c
··· 1 - /* 2 - * Socfpga Reset Controller Driver 3 - * 4 - * Copyright 2014 Steffen Trumtrar <s.trumtrar@pengutronix.de> 5 - * 6 - * based on 7 - * Allwinner SoCs Reset Controller driver 8 - * 9 - * Copyright 2013 Maxime Ripard 10 - * 11 - * Maxime Ripard <maxime.ripard@free-electrons.com> 12 - * 13 - * This program is free software; you can redistribute it and/or modify 14 - * it under the terms of the GNU General Public License as published by 15 - * the Free Software Foundation; either version 2 of the License, or 16 - * (at your option) any later version. 17 - */ 18 - 19 - #include <linux/err.h> 20 - #include <linux/io.h> 21 - #include <linux/init.h> 22 - #include <linux/of.h> 23 - #include <linux/platform_device.h> 24 - #include <linux/reset-controller.h> 25 - #include <linux/spinlock.h> 26 - #include <linux/types.h> 27 - 28 - #define BANK_INCREMENT 4 29 - #define NR_BANKS 8 30 - 31 - struct socfpga_reset_data { 32 - spinlock_t lock; 33 - void __iomem *membase; 34 - struct reset_controller_dev rcdev; 35 - }; 36 - 37 - static int socfpga_reset_assert(struct reset_controller_dev *rcdev, 38 - unsigned long id) 39 - { 40 - struct socfpga_reset_data *data = container_of(rcdev, 41 - struct socfpga_reset_data, 42 - rcdev); 43 - int reg_width = sizeof(u32); 44 - int bank = id / (reg_width * BITS_PER_BYTE); 45 - int offset = id % (reg_width * BITS_PER_BYTE); 46 - unsigned long flags; 47 - u32 reg; 48 - 49 - spin_lock_irqsave(&data->lock, flags); 50 - 51 - reg = readl(data->membase + (bank * BANK_INCREMENT)); 52 - writel(reg | BIT(offset), data->membase + (bank * BANK_INCREMENT)); 53 - spin_unlock_irqrestore(&data->lock, flags); 54 - 55 - return 0; 56 - } 57 - 58 - static int socfpga_reset_deassert(struct reset_controller_dev *rcdev, 59 - unsigned long id) 60 - { 61 - struct socfpga_reset_data *data = container_of(rcdev, 62 - struct socfpga_reset_data, 63 - rcdev); 64 - 65 - int reg_width = sizeof(u32); 66 - int bank = id / (reg_width * BITS_PER_BYTE); 67 - int offset = 
id % (reg_width * BITS_PER_BYTE); 68 - unsigned long flags; 69 - u32 reg; 70 - 71 - spin_lock_irqsave(&data->lock, flags); 72 - 73 - reg = readl(data->membase + (bank * BANK_INCREMENT)); 74 - writel(reg & ~BIT(offset), data->membase + (bank * BANK_INCREMENT)); 75 - 76 - spin_unlock_irqrestore(&data->lock, flags); 77 - 78 - return 0; 79 - } 80 - 81 - static int socfpga_reset_status(struct reset_controller_dev *rcdev, 82 - unsigned long id) 83 - { 84 - struct socfpga_reset_data *data = container_of(rcdev, 85 - struct socfpga_reset_data, rcdev); 86 - int reg_width = sizeof(u32); 87 - int bank = id / (reg_width * BITS_PER_BYTE); 88 - int offset = id % (reg_width * BITS_PER_BYTE); 89 - u32 reg; 90 - 91 - reg = readl(data->membase + (bank * BANK_INCREMENT)); 92 - 93 - return !(reg & BIT(offset)); 94 - } 95 - 96 - static const struct reset_control_ops socfpga_reset_ops = { 97 - .assert = socfpga_reset_assert, 98 - .deassert = socfpga_reset_deassert, 99 - .status = socfpga_reset_status, 100 - }; 101 - 102 - static int socfpga_reset_probe(struct platform_device *pdev) 103 - { 104 - struct socfpga_reset_data *data; 105 - struct resource *res; 106 - struct device *dev = &pdev->dev; 107 - struct device_node *np = dev->of_node; 108 - u32 modrst_offset; 109 - 110 - /* 111 - * The binding was mainlined without the required property. 112 - * Do not continue, when we encounter an old DT. 
113 - */ 114 - if (!of_find_property(pdev->dev.of_node, "#reset-cells", NULL)) { 115 - dev_err(&pdev->dev, "%pOF missing #reset-cells property\n", 116 - pdev->dev.of_node); 117 - return -EINVAL; 118 - } 119 - 120 - data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL); 121 - if (!data) 122 - return -ENOMEM; 123 - 124 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 125 - data->membase = devm_ioremap_resource(&pdev->dev, res); 126 - if (IS_ERR(data->membase)) 127 - return PTR_ERR(data->membase); 128 - 129 - if (of_property_read_u32(np, "altr,modrst-offset", &modrst_offset)) { 130 - dev_warn(dev, "missing altr,modrst-offset property, assuming 0x10!\n"); 131 - modrst_offset = 0x10; 132 - } 133 - data->membase += modrst_offset; 134 - 135 - spin_lock_init(&data->lock); 136 - 137 - data->rcdev.owner = THIS_MODULE; 138 - data->rcdev.nr_resets = NR_BANKS * (sizeof(u32) * BITS_PER_BYTE); 139 - data->rcdev.ops = &socfpga_reset_ops; 140 - data->rcdev.of_node = pdev->dev.of_node; 141 - 142 - return devm_reset_controller_register(dev, &data->rcdev); 143 - } 144 - 145 - static const struct of_device_id socfpga_reset_dt_ids[] = { 146 - { .compatible = "altr,rst-mgr", }, 147 - { /* sentinel */ }, 148 - }; 149 - 150 - static struct platform_driver socfpga_reset_driver = { 151 - .probe = socfpga_reset_probe, 152 - .driver = { 153 - .name = "socfpga-reset", 154 - .of_match_table = socfpga_reset_dt_ids, 155 - }, 156 - }; 157 - builtin_platform_driver(socfpga_reset_driver);
-108
drivers/reset/reset-stm32.c
··· 1 - /* 2 - * Copyright (C) Maxime Coquelin 2015 3 - * Author: Maxime Coquelin <mcoquelin.stm32@gmail.com> 4 - * License terms: GNU General Public License (GPL), version 2 5 - * 6 - * Heavily based on sunxi driver from Maxime Ripard. 7 - */ 8 - 9 - #include <linux/err.h> 10 - #include <linux/io.h> 11 - #include <linux/of.h> 12 - #include <linux/of_address.h> 13 - #include <linux/platform_device.h> 14 - #include <linux/reset-controller.h> 15 - #include <linux/slab.h> 16 - #include <linux/spinlock.h> 17 - #include <linux/types.h> 18 - 19 - struct stm32_reset_data { 20 - spinlock_t lock; 21 - void __iomem *membase; 22 - struct reset_controller_dev rcdev; 23 - }; 24 - 25 - static int stm32_reset_assert(struct reset_controller_dev *rcdev, 26 - unsigned long id) 27 - { 28 - struct stm32_reset_data *data = container_of(rcdev, 29 - struct stm32_reset_data, 30 - rcdev); 31 - int bank = id / BITS_PER_LONG; 32 - int offset = id % BITS_PER_LONG; 33 - unsigned long flags; 34 - u32 reg; 35 - 36 - spin_lock_irqsave(&data->lock, flags); 37 - 38 - reg = readl(data->membase + (bank * 4)); 39 - writel(reg | BIT(offset), data->membase + (bank * 4)); 40 - 41 - spin_unlock_irqrestore(&data->lock, flags); 42 - 43 - return 0; 44 - } 45 - 46 - static int stm32_reset_deassert(struct reset_controller_dev *rcdev, 47 - unsigned long id) 48 - { 49 - struct stm32_reset_data *data = container_of(rcdev, 50 - struct stm32_reset_data, 51 - rcdev); 52 - int bank = id / BITS_PER_LONG; 53 - int offset = id % BITS_PER_LONG; 54 - unsigned long flags; 55 - u32 reg; 56 - 57 - spin_lock_irqsave(&data->lock, flags); 58 - 59 - reg = readl(data->membase + (bank * 4)); 60 - writel(reg & ~BIT(offset), data->membase + (bank * 4)); 61 - 62 - spin_unlock_irqrestore(&data->lock, flags); 63 - 64 - return 0; 65 - } 66 - 67 - static const struct reset_control_ops stm32_reset_ops = { 68 - .assert = stm32_reset_assert, 69 - .deassert = stm32_reset_deassert, 70 - }; 71 - 72 - static const struct of_device_id 
stm32_reset_dt_ids[] = { 73 - { .compatible = "st,stm32-rcc", }, 74 - { /* sentinel */ }, 75 - }; 76 - 77 - static int stm32_reset_probe(struct platform_device *pdev) 78 - { 79 - struct stm32_reset_data *data; 80 - struct resource *res; 81 - 82 - data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL); 83 - if (!data) 84 - return -ENOMEM; 85 - 86 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 87 - data->membase = devm_ioremap_resource(&pdev->dev, res); 88 - if (IS_ERR(data->membase)) 89 - return PTR_ERR(data->membase); 90 - 91 - spin_lock_init(&data->lock); 92 - 93 - data->rcdev.owner = THIS_MODULE; 94 - data->rcdev.nr_resets = resource_size(res) * 8; 95 - data->rcdev.ops = &stm32_reset_ops; 96 - data->rcdev.of_node = pdev->dev.of_node; 97 - 98 - return devm_reset_controller_register(&pdev->dev, &data->rcdev); 99 - } 100 - 101 - static struct platform_driver stm32_reset_driver = { 102 - .probe = stm32_reset_probe, 103 - .driver = { 104 - .name = "stm32-rcc-reset", 105 - .of_match_table = stm32_reset_dt_ids, 106 - }, 107 - }; 108 - builtin_platform_driver(stm32_reset_driver);
+6 -98
drivers/reset/reset-sunxi.c
··· 22 22 #include <linux/spinlock.h> 23 23 #include <linux/types.h> 24 24 25 - struct sunxi_reset_data { 26 - spinlock_t lock; 27 - void __iomem *membase; 28 - struct reset_controller_dev rcdev; 29 - }; 30 - 31 - static int sunxi_reset_assert(struct reset_controller_dev *rcdev, 32 - unsigned long id) 33 - { 34 - struct sunxi_reset_data *data = container_of(rcdev, 35 - struct sunxi_reset_data, 36 - rcdev); 37 - int reg_width = sizeof(u32); 38 - int bank = id / (reg_width * BITS_PER_BYTE); 39 - int offset = id % (reg_width * BITS_PER_BYTE); 40 - unsigned long flags; 41 - u32 reg; 42 - 43 - spin_lock_irqsave(&data->lock, flags); 44 - 45 - reg = readl(data->membase + (bank * reg_width)); 46 - writel(reg & ~BIT(offset), data->membase + (bank * reg_width)); 47 - 48 - spin_unlock_irqrestore(&data->lock, flags); 49 - 50 - return 0; 51 - } 52 - 53 - static int sunxi_reset_deassert(struct reset_controller_dev *rcdev, 54 - unsigned long id) 55 - { 56 - struct sunxi_reset_data *data = container_of(rcdev, 57 - struct sunxi_reset_data, 58 - rcdev); 59 - int reg_width = sizeof(u32); 60 - int bank = id / (reg_width * BITS_PER_BYTE); 61 - int offset = id % (reg_width * BITS_PER_BYTE); 62 - unsigned long flags; 63 - u32 reg; 64 - 65 - spin_lock_irqsave(&data->lock, flags); 66 - 67 - reg = readl(data->membase + (bank * reg_width)); 68 - writel(reg | BIT(offset), data->membase + (bank * reg_width)); 69 - 70 - spin_unlock_irqrestore(&data->lock, flags); 71 - 72 - return 0; 73 - } 74 - 75 - static const struct reset_control_ops sunxi_reset_ops = { 76 - .assert = sunxi_reset_assert, 77 - .deassert = sunxi_reset_deassert, 78 - }; 25 + #include "reset-simple.h" 79 26 80 27 static int sunxi_reset_init(struct device_node *np) 81 28 { 82 - struct sunxi_reset_data *data; 29 + struct reset_simple_data *data; 83 30 struct resource res; 84 31 resource_size_t size; 85 32 int ret; ··· 55 108 56 109 data->rcdev.owner = THIS_MODULE; 57 110 data->rcdev.nr_resets = size * 8; 58 - data->rcdev.ops = 
&sunxi_reset_ops; 111 + data->rcdev.ops = &reset_simple_ops; 59 112 data->rcdev.of_node = np; 113 + data->active_low = true; 60 114 61 115 return reset_controller_register(&data->rcdev); 62 116 ··· 70 122 * These are the reset controller we need to initialize early on in 71 123 * our system, before we can even think of using a regular device 72 124 * driver for it. 125 + * The controllers that we can register through the regular device 126 + * model are handled by the simple reset driver directly. 73 127 */ 74 128 static const struct of_device_id sunxi_early_reset_dt_ids[] __initconst = { 75 129 { .compatible = "allwinner,sun6i-a31-ahb1-reset", }, ··· 85 135 for_each_matching_node(np, sunxi_early_reset_dt_ids) 86 136 sunxi_reset_init(np); 87 137 } 88 - 89 - /* 90 - * And these are the controllers we can register through the regular 91 - * device model. 92 - */ 93 - static const struct of_device_id sunxi_reset_dt_ids[] = { 94 - { .compatible = "allwinner,sun6i-a31-clock-reset", }, 95 - { /* sentinel */ }, 96 - }; 97 - 98 - static int sunxi_reset_probe(struct platform_device *pdev) 99 - { 100 - struct sunxi_reset_data *data; 101 - struct resource *res; 102 - 103 - data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL); 104 - if (!data) 105 - return -ENOMEM; 106 - 107 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 108 - data->membase = devm_ioremap_resource(&pdev->dev, res); 109 - if (IS_ERR(data->membase)) 110 - return PTR_ERR(data->membase); 111 - 112 - spin_lock_init(&data->lock); 113 - 114 - data->rcdev.owner = THIS_MODULE; 115 - data->rcdev.nr_resets = resource_size(res) * 8; 116 - data->rcdev.ops = &sunxi_reset_ops; 117 - data->rcdev.of_node = pdev->dev.of_node; 118 - 119 - return devm_reset_controller_register(&pdev->dev, &data->rcdev); 120 - } 121 - 122 - static struct platform_driver sunxi_reset_driver = { 123 - .probe = sunxi_reset_probe, 124 - .driver = { 125 - .name = "sunxi-reset", 126 - .of_match_table = sunxi_reset_dt_ids, 127 - }, 128 - }; 
129 - builtin_platform_driver(sunxi_reset_driver);
+30
drivers/reset/reset-uniphier.c
··· 58 58 59 59 static const struct uniphier_reset_data uniphier_pro4_sys_reset_data[] = { 60 60 UNIPHIER_RESETX(2, 0x2000, 2), /* NAND */ 61 + UNIPHIER_RESETX(6, 0x2000, 12), /* Ether */ 61 62 UNIPHIER_RESETX(8, 0x2000, 10), /* STDMAC (HSC, MIO, RLE) */ 62 63 UNIPHIER_RESETX(12, 0x2000, 6), /* GIO (Ether, SATA, USB3) */ 63 64 UNIPHIER_RESETX(14, 0x2000, 17), /* USB30 */ ··· 77 76 78 77 static const struct uniphier_reset_data uniphier_pxs2_sys_reset_data[] = { 79 78 UNIPHIER_RESETX(2, 0x2000, 2), /* NAND */ 79 + UNIPHIER_RESETX(6, 0x2000, 12), /* Ether */ 80 80 UNIPHIER_RESETX(8, 0x2000, 10), /* STDMAC (HSC, RLE) */ 81 81 UNIPHIER_RESETX(14, 0x2000, 17), /* USB30 */ 82 82 UNIPHIER_RESETX(15, 0x2004, 17), /* USB31 */ ··· 94 92 static const struct uniphier_reset_data uniphier_ld11_sys_reset_data[] = { 95 93 UNIPHIER_RESETX(2, 0x200c, 0), /* NAND */ 96 94 UNIPHIER_RESETX(4, 0x200c, 2), /* eMMC */ 95 + UNIPHIER_RESETX(6, 0x200c, 6), /* Ether */ 97 96 UNIPHIER_RESETX(8, 0x200c, 8), /* STDMAC (HSC, MIO) */ 98 97 UNIPHIER_RESETX(40, 0x2008, 0), /* AIO */ 99 98 UNIPHIER_RESETX(41, 0x2008, 1), /* EVEA */ ··· 105 102 static const struct uniphier_reset_data uniphier_ld20_sys_reset_data[] = { 106 103 UNIPHIER_RESETX(2, 0x200c, 0), /* NAND */ 107 104 UNIPHIER_RESETX(4, 0x200c, 2), /* eMMC */ 105 + UNIPHIER_RESETX(6, 0x200c, 6), /* Ether */ 108 106 UNIPHIER_RESETX(8, 0x200c, 8), /* STDMAC (HSC) */ 109 107 UNIPHIER_RESETX(12, 0x200c, 5), /* GIO (PCIe, USB3) */ 110 108 UNIPHIER_RESETX(16, 0x200c, 12), /* USB30-PHY0 */ ··· 115 111 UNIPHIER_RESETX(40, 0x2008, 0), /* AIO */ 116 112 UNIPHIER_RESETX(41, 0x2008, 1), /* EVEA */ 117 113 UNIPHIER_RESETX(42, 0x2010, 2), /* EXIV */ 114 + UNIPHIER_RESET_END, 115 + }; 116 + 117 + static const struct uniphier_reset_data uniphier_pxs3_sys_reset_data[] = { 118 + UNIPHIER_RESETX(2, 0x200c, 0), /* NAND */ 119 + UNIPHIER_RESETX(4, 0x200c, 2), /* eMMC */ 120 + UNIPHIER_RESETX(8, 0x200c, 12), /* STDMAC */ 121 + UNIPHIER_RESETX(12, 0x200c, 4), /* USB30 
link (GIO0) */ 122 + UNIPHIER_RESETX(13, 0x200c, 5), /* USB31 link (GIO1) */ 123 + UNIPHIER_RESETX(16, 0x200c, 16), /* USB30-PHY0 */ 124 + UNIPHIER_RESETX(17, 0x200c, 18), /* USB30-PHY1 */ 125 + UNIPHIER_RESETX(18, 0x200c, 20), /* USB30-PHY2 */ 126 + UNIPHIER_RESETX(20, 0x200c, 17), /* USB31-PHY0 */ 127 + UNIPHIER_RESETX(21, 0x200c, 19), /* USB31-PHY1 */ 118 128 UNIPHIER_RESET_END, 119 129 }; 120 130 ··· 377 359 .compatible = "socionext,uniphier-ld20-reset", 378 360 .data = uniphier_ld20_sys_reset_data, 379 361 }, 362 + { 363 + .compatible = "socionext,uniphier-pxs3-reset", 364 + .data = uniphier_pxs3_sys_reset_data, 365 + }, 380 366 /* Media I/O reset, SD reset */ 381 367 { 382 368 .compatible = "socionext,uniphier-ld4-mio-reset", ··· 414 392 .compatible = "socionext,uniphier-ld20-sd-reset", 415 393 .data = uniphier_pro5_sd_reset_data, 416 394 }, 395 + { 396 + .compatible = "socionext,uniphier-pxs3-sd-reset", 397 + .data = uniphier_pro5_sd_reset_data, 398 + }, 417 399 /* Peripheral reset */ 418 400 { 419 401 .compatible = "socionext,uniphier-ld4-peri-reset", ··· 445 419 }, 446 420 { 447 421 .compatible = "socionext,uniphier-ld20-peri-reset", 422 + .data = uniphier_pro4_peri_reset_data, 423 + }, 424 + { 425 + .compatible = "socionext,uniphier-pxs3-peri-reset", 448 426 .data = uniphier_pro4_peri_reset_data, 449 427 }, 450 428 /* Analog signal amplifiers reset */
-99
drivers/reset/reset-zx2967.c
··· 1 - /* 2 - * ZTE's zx2967 family reset controller driver 3 - * 4 - * Copyright (C) 2017 ZTE Ltd. 5 - * 6 - * Author: Baoyou Xie <baoyou.xie@linaro.org> 7 - * 8 - * License terms: GNU General Public License (GPL) version 2 9 - */ 10 - 11 - #include <linux/of_address.h> 12 - #include <linux/platform_device.h> 13 - #include <linux/reset-controller.h> 14 - 15 - struct zx2967_reset { 16 - void __iomem *reg_base; 17 - spinlock_t lock; 18 - struct reset_controller_dev rcdev; 19 - }; 20 - 21 - static int zx2967_reset_act(struct reset_controller_dev *rcdev, 22 - unsigned long id, bool assert) 23 - { 24 - struct zx2967_reset *reset = NULL; 25 - int bank = id / 32; 26 - int offset = id % 32; 27 - u32 reg; 28 - unsigned long flags; 29 - 30 - reset = container_of(rcdev, struct zx2967_reset, rcdev); 31 - 32 - spin_lock_irqsave(&reset->lock, flags); 33 - 34 - reg = readl_relaxed(reset->reg_base + (bank * 4)); 35 - if (assert) 36 - reg &= ~BIT(offset); 37 - else 38 - reg |= BIT(offset); 39 - writel_relaxed(reg, reset->reg_base + (bank * 4)); 40 - 41 - spin_unlock_irqrestore(&reset->lock, flags); 42 - 43 - return 0; 44 - } 45 - 46 - static int zx2967_reset_assert(struct reset_controller_dev *rcdev, 47 - unsigned long id) 48 - { 49 - return zx2967_reset_act(rcdev, id, true); 50 - } 51 - 52 - static int zx2967_reset_deassert(struct reset_controller_dev *rcdev, 53 - unsigned long id) 54 - { 55 - return zx2967_reset_act(rcdev, id, false); 56 - } 57 - 58 - static const struct reset_control_ops zx2967_reset_ops = { 59 - .assert = zx2967_reset_assert, 60 - .deassert = zx2967_reset_deassert, 61 - }; 62 - 63 - static int zx2967_reset_probe(struct platform_device *pdev) 64 - { 65 - struct zx2967_reset *reset; 66 - struct resource *res; 67 - 68 - reset = devm_kzalloc(&pdev->dev, sizeof(*reset), GFP_KERNEL); 69 - if (!reset) 70 - return -ENOMEM; 71 - 72 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 73 - reset->reg_base = devm_ioremap_resource(&pdev->dev, res); 74 - if 
(IS_ERR(reset->reg_base)) 75 - return PTR_ERR(reset->reg_base); 76 - 77 - spin_lock_init(&reset->lock); 78 - 79 - reset->rcdev.owner = THIS_MODULE; 80 - reset->rcdev.nr_resets = resource_size(res) * 8; 81 - reset->rcdev.ops = &zx2967_reset_ops; 82 - reset->rcdev.of_node = pdev->dev.of_node; 83 - 84 - return devm_reset_controller_register(&pdev->dev, &reset->rcdev); 85 - } 86 - 87 - static const struct of_device_id zx2967_reset_dt_ids[] = { 88 - { .compatible = "zte,zx296718-reset", }, 89 - {}, 90 - }; 91 - 92 - static struct platform_driver zx2967_reset_driver = { 93 - .probe = zx2967_reset_probe, 94 - .driver = { 95 - .name = "zx2967-reset", 96 - .of_match_table = zx2967_reset_dt_ids, 97 - }, 98 - }; 99 - builtin_platform_driver(zx2967_reset_driver);
+1 -1
drivers/soc/Makefile
··· 11 11 obj-y += fsl/ 12 12 obj-$(CONFIG_ARCH_MXC) += imx/ 13 13 obj-$(CONFIG_SOC_XWAY) += lantiq/ 14 - obj-$(CONFIG_ARCH_MEDIATEK) += mediatek/ 14 + obj-y += mediatek/ 15 15 obj-$(CONFIG_ARCH_MESON) += amlogic/ 16 16 obj-$(CONFIG_ARCH_QCOM) += qcom/ 17 17 obj-y += renesas/
+21
drivers/soc/amlogic/Kconfig
··· 9 9 Say yes to support decoding of Amlogic Meson GX SoC family 10 10 information about the type, package and version. 11 11 12 + config MESON_GX_PM_DOMAINS 13 + bool "Amlogic Meson GX Power Domains driver" 14 + depends on ARCH_MESON || COMPILE_TEST 15 + depends on PM && OF 16 + default ARCH_MESON 17 + select PM_GENERIC_DOMAINS 18 + select PM_GENERIC_DOMAINS_OF 19 + help 20 + Say yes to expose Amlogic Meson GX Power Domains as 21 + Generic Power Domains. 22 + 23 + config MESON_MX_SOCINFO 24 + bool "Amlogic Meson MX SoC Information driver" 25 + depends on ARCH_MESON || COMPILE_TEST 26 + default ARCH_MESON 27 + select SOC_BUS 28 + help 29 + Say yes to support decoding of Amlogic Meson6, Meson8, 30 + Meson8b and Meson8m2 SoC family information about the type 31 + and version. 32 + 12 33 endmenu
+2
drivers/soc/amlogic/Makefile
··· 1 1 obj-$(CONFIG_MESON_GX_SOCINFO) += meson-gx-socinfo.o 2 + obj-$(CONFIG_MESON_GX_PM_DOMAINS) += meson-gx-pwrc-vpu.o 3 + obj-$(CONFIG_MESON_MX_SOCINFO) += meson-mx-socinfo.o
+243
drivers/soc/amlogic/meson-gx-pwrc-vpu.c
··· 1 + /* 2 + * Copyright (c) 2017 BayLibre, SAS 3 + * Author: Neil Armstrong <narmstrong@baylibre.com> 4 + * 5 + * SPDX-License-Identifier: GPL-2.0+ 6 + */ 7 + 8 + #include <linux/of_address.h> 9 + #include <linux/platform_device.h> 10 + #include <linux/pm_domain.h> 11 + #include <linux/bitfield.h> 12 + #include <linux/regmap.h> 13 + #include <linux/mfd/syscon.h> 14 + #include <linux/reset.h> 15 + #include <linux/clk.h> 16 + 17 + /* AO Offsets */ 18 + 19 + #define AO_RTI_GEN_PWR_SLEEP0 (0x3a << 2) 20 + 21 + #define GEN_PWR_VPU_HDMI BIT(8) 22 + #define GEN_PWR_VPU_HDMI_ISO BIT(9) 23 + 24 + /* HHI Offsets */ 25 + 26 + #define HHI_MEM_PD_REG0 (0x40 << 2) 27 + #define HHI_VPU_MEM_PD_REG0 (0x41 << 2) 28 + #define HHI_VPU_MEM_PD_REG1 (0x42 << 2) 29 + 30 + struct meson_gx_pwrc_vpu { 31 + struct generic_pm_domain genpd; 32 + struct regmap *regmap_ao; 33 + struct regmap *regmap_hhi; 34 + struct reset_control *rstc; 35 + struct clk *vpu_clk; 36 + struct clk *vapb_clk; 37 + }; 38 + 39 + static inline 40 + struct meson_gx_pwrc_vpu *genpd_to_pd(struct generic_pm_domain *d) 41 + { 42 + return container_of(d, struct meson_gx_pwrc_vpu, genpd); 43 + } 44 + 45 + static int meson_gx_pwrc_vpu_power_off(struct generic_pm_domain *genpd) 46 + { 47 + struct meson_gx_pwrc_vpu *pd = genpd_to_pd(genpd); 48 + int i; 49 + 50 + regmap_update_bits(pd->regmap_ao, AO_RTI_GEN_PWR_SLEEP0, 51 + GEN_PWR_VPU_HDMI_ISO, GEN_PWR_VPU_HDMI_ISO); 52 + udelay(20); 53 + 54 + /* Power Down Memories */ 55 + for (i = 0; i < 32; i += 2) { 56 + regmap_update_bits(pd->regmap_hhi, HHI_VPU_MEM_PD_REG0, 57 + 0x2 << i, 0x3 << i); 58 + udelay(5); 59 + } 60 + for (i = 0; i < 32; i += 2) { 61 + regmap_update_bits(pd->regmap_hhi, HHI_VPU_MEM_PD_REG1, 62 + 0x2 << i, 0x3 << i); 63 + udelay(5); 64 + } 65 + for (i = 8; i < 16; i++) { 66 + regmap_update_bits(pd->regmap_hhi, HHI_MEM_PD_REG0, 67 + BIT(i), BIT(i)); 68 + udelay(5); 69 + } 70 + udelay(20); 71 + 72 + regmap_update_bits(pd->regmap_ao, AO_RTI_GEN_PWR_SLEEP0, 73 + 
GEN_PWR_VPU_HDMI, GEN_PWR_VPU_HDMI); 74 + 75 + msleep(20); 76 + 77 + clk_disable_unprepare(pd->vpu_clk); 78 + clk_disable_unprepare(pd->vapb_clk); 79 + 80 + return 0; 81 + } 82 + 83 + static int meson_gx_pwrc_vpu_setup_clk(struct meson_gx_pwrc_vpu *pd) 84 + { 85 + int ret; 86 + 87 + ret = clk_prepare_enable(pd->vpu_clk); 88 + if (ret) 89 + return ret; 90 + 91 + ret = clk_prepare_enable(pd->vapb_clk); 92 + if (ret) 93 + clk_disable_unprepare(pd->vpu_clk); 94 + 95 + return ret; 96 + } 97 + 98 + static int meson_gx_pwrc_vpu_power_on(struct generic_pm_domain *genpd) 99 + { 100 + struct meson_gx_pwrc_vpu *pd = genpd_to_pd(genpd); 101 + int ret; 102 + int i; 103 + 104 + regmap_update_bits(pd->regmap_ao, AO_RTI_GEN_PWR_SLEEP0, 105 + GEN_PWR_VPU_HDMI, 0); 106 + udelay(20); 107 + 108 + /* Power Up Memories */ 109 + for (i = 0; i < 32; i += 2) { 110 + regmap_update_bits(pd->regmap_hhi, HHI_VPU_MEM_PD_REG0, 111 + 0x2 << i, 0); 112 + udelay(5); 113 + } 114 + 115 + for (i = 0; i < 32; i += 2) { 116 + regmap_update_bits(pd->regmap_hhi, HHI_VPU_MEM_PD_REG1, 117 + 0x2 << i, 0); 118 + udelay(5); 119 + } 120 + 121 + for (i = 8; i < 16; i++) { 122 + regmap_update_bits(pd->regmap_hhi, HHI_MEM_PD_REG0, 123 + BIT(i), 0); 124 + udelay(5); 125 + } 126 + udelay(20); 127 + 128 + ret = reset_control_assert(pd->rstc); 129 + if (ret) 130 + return ret; 131 + 132 + regmap_update_bits(pd->regmap_ao, AO_RTI_GEN_PWR_SLEEP0, 133 + GEN_PWR_VPU_HDMI_ISO, 0); 134 + 135 + ret = reset_control_deassert(pd->rstc); 136 + if (ret) 137 + return ret; 138 + 139 + ret = meson_gx_pwrc_vpu_setup_clk(pd); 140 + if (ret) 141 + return ret; 142 + 143 + return 0; 144 + } 145 + 146 + static bool meson_gx_pwrc_vpu_get_power(struct meson_gx_pwrc_vpu *pd) 147 + { 148 + u32 reg; 149 + 150 + regmap_read(pd->regmap_ao, AO_RTI_GEN_PWR_SLEEP0, &reg); 151 + 152 + return (reg & GEN_PWR_VPU_HDMI); 153 + } 154 + 155 + static struct meson_gx_pwrc_vpu vpu_hdmi_pd = { 156 + .genpd = { 157 + .name = "vpu_hdmi", 158 + .power_off = 
meson_gx_pwrc_vpu_power_off, 159 + .power_on = meson_gx_pwrc_vpu_power_on, 160 + }, 161 + }; 162 + 163 + static int meson_gx_pwrc_vpu_probe(struct platform_device *pdev) 164 + { 165 + struct regmap *regmap_ao, *regmap_hhi; 166 + struct reset_control *rstc; 167 + struct clk *vpu_clk; 168 + struct clk *vapb_clk; 169 + bool powered_off; 170 + int ret; 171 + 172 + regmap_ao = syscon_node_to_regmap(of_get_parent(pdev->dev.of_node)); 173 + if (IS_ERR(regmap_ao)) { 174 + dev_err(&pdev->dev, "failed to get regmap\n"); 175 + return PTR_ERR(regmap_ao); 176 + } 177 + 178 + regmap_hhi = syscon_regmap_lookup_by_phandle(pdev->dev.of_node, 179 + "amlogic,hhi-sysctrl"); 180 + if (IS_ERR(regmap_hhi)) { 181 + dev_err(&pdev->dev, "failed to get HHI regmap\n"); 182 + return PTR_ERR(regmap_hhi); 183 + } 184 + 185 + rstc = devm_reset_control_array_get(&pdev->dev, false, false); 186 + if (IS_ERR(rstc)) { 187 + dev_err(&pdev->dev, "failed to get reset lines\n"); 188 + return PTR_ERR(rstc); 189 + } 190 + 191 + vpu_clk = devm_clk_get(&pdev->dev, "vpu"); 192 + if (IS_ERR(vpu_clk)) { 193 + dev_err(&pdev->dev, "vpu clock request failed\n"); 194 + return PTR_ERR(vpu_clk); 195 + } 196 + 197 + vapb_clk = devm_clk_get(&pdev->dev, "vapb"); 198 + if (IS_ERR(vapb_clk)) { 199 + dev_err(&pdev->dev, "vapb clock request failed\n"); 200 + return PTR_ERR(vapb_clk); 201 + } 202 + 203 + vpu_hdmi_pd.regmap_ao = regmap_ao; 204 + vpu_hdmi_pd.regmap_hhi = regmap_hhi; 205 + vpu_hdmi_pd.rstc = rstc; 206 + vpu_hdmi_pd.vpu_clk = vpu_clk; 207 + vpu_hdmi_pd.vapb_clk = vapb_clk; 208 + 209 + powered_off = meson_gx_pwrc_vpu_get_power(&vpu_hdmi_pd); 210 + 211 + /* If already powered, sync the clock states */ 212 + if (!powered_off) { 213 + ret = meson_gx_pwrc_vpu_setup_clk(&vpu_hdmi_pd); 214 + if (ret) 215 + return ret; 216 + } 217 + 218 + pm_genpd_init(&vpu_hdmi_pd.genpd, &pm_domain_always_on_gov, 219 + powered_off); 220 + 221 + return of_genpd_add_provider_simple(pdev->dev.of_node, 222 + &vpu_hdmi_pd.genpd); 223 + } 224 
+ 225 + static void meson_gx_pwrc_vpu_shutdown(struct platform_device *pdev) 226 + { 227 + meson_gx_pwrc_vpu_power_off(&vpu_hdmi_pd.genpd); 228 + } 229 + 230 + static const struct of_device_id meson_gx_pwrc_vpu_match_table[] = { 231 + { .compatible = "amlogic,meson-gx-pwrc-vpu" }, 232 + { /* sentinel */ } 233 + }; 234 + 235 + static struct platform_driver meson_gx_pwrc_vpu_driver = { 236 + .probe = meson_gx_pwrc_vpu_probe, 237 + .shutdown = meson_gx_pwrc_vpu_shutdown, 238 + .driver = { 239 + .name = "meson_gx_pwrc_vpu", 240 + .of_match_table = meson_gx_pwrc_vpu_match_table, 241 + }, 242 + }; 243 + builtin_platform_driver(meson_gx_pwrc_vpu_driver);
+175
drivers/soc/amlogic/meson-mx-socinfo.c
··· 1 + /* 2 + * Copyright (c) 2017 Martin Blumenstingl <martin.blumenstingl@googlemail.com> 3 + * 4 + * SPDX-License-Identifier: GPL-2.0+ 5 + */ 6 + 7 + #include <linux/io.h> 8 + #include <linux/of.h> 9 + #include <linux/of_address.h> 10 + #include <linux/of_platform.h> 11 + #include <linux/platform_device.h> 12 + #include <linux/slab.h> 13 + #include <linux/sys_soc.h> 14 + #include <linux/bitfield.h> 15 + #include <linux/regmap.h> 16 + #include <linux/mfd/syscon.h> 17 + 18 + #define MESON_SOCINFO_MAJOR_VER_MESON6 0x16 19 + #define MESON_SOCINFO_MAJOR_VER_MESON8 0x19 20 + #define MESON_SOCINFO_MAJOR_VER_MESON8B 0x1b 21 + 22 + #define MESON_MX_ASSIST_HW_REV 0x14c 23 + 24 + #define MESON_MX_ANALOG_TOP_METAL_REVISION 0x0 25 + 26 + #define MESON_MX_BOOTROM_MISC_VER 0x4 27 + 28 + static const char *meson_mx_socinfo_revision(unsigned int major_ver, 29 + unsigned int misc_ver, 30 + unsigned int metal_rev) 31 + { 32 + unsigned int minor_ver; 33 + 34 + switch (major_ver) { 35 + case MESON_SOCINFO_MAJOR_VER_MESON6: 36 + minor_ver = 0xa; 37 + break; 38 + 39 + case MESON_SOCINFO_MAJOR_VER_MESON8: 40 + if (metal_rev == 0x11111112) 41 + major_ver = 0x1d; 42 + 43 + if (metal_rev == 0x11111111 || metal_rev == 0x11111112) 44 + minor_ver = 0xa; 45 + else if (metal_rev == 0x11111113) 46 + minor_ver = 0xb; 47 + else if (metal_rev == 0x11111133) 48 + minor_ver = 0xc; 49 + else 50 + minor_ver = 0xd; 51 + 52 + break; 53 + 54 + case MESON_SOCINFO_MAJOR_VER_MESON8B: 55 + if (metal_rev == 0x11111111) 56 + minor_ver = 0xa; 57 + else 58 + minor_ver = 0xb; 59 + 60 + break; 61 + 62 + default: 63 + minor_ver = 0x0; 64 + break; 65 + } 66 + 67 + return kasprintf(GFP_KERNEL, "Rev%X (%x - 0:%X)", minor_ver, major_ver, 68 + misc_ver); 69 + } 70 + 71 + static const char *meson_mx_socinfo_soc_id(unsigned int major_ver, 72 + unsigned int metal_rev) 73 + { 74 + const char *soc_id; 75 + 76 + switch (major_ver) { 77 + case MESON_SOCINFO_MAJOR_VER_MESON6: 78 + soc_id = "Meson6 (AML8726-MX)"; 79 + break; 80 
+ 81 + case MESON_SOCINFO_MAJOR_VER_MESON8: 82 + if (metal_rev == 0x11111112) 83 + soc_id = "Meson8m2 (S812)"; 84 + else 85 + soc_id = "Meson8 (S802)"; 86 + 87 + break; 88 + 89 + case MESON_SOCINFO_MAJOR_VER_MESON8B: 90 + soc_id = "Meson8b (S805)"; 91 + break; 92 + 93 + default: 94 + soc_id = "Unknown"; 95 + break; 96 + } 97 + 98 + return kstrdup_const(soc_id, GFP_KERNEL); 99 + } 100 + 101 + static const struct of_device_id meson_mx_socinfo_analog_top_ids[] = { 102 + { .compatible = "amlogic,meson8-analog-top", }, 103 + { .compatible = "amlogic,meson8b-analog-top", }, 104 + { /* sentinel */ } 105 + }; 106 + 107 + int __init meson_mx_socinfo_init(void) 108 + { 109 + struct soc_device_attribute *soc_dev_attr; 110 + struct soc_device *soc_dev; 111 + struct device_node *np; 112 + struct regmap *assist_regmap, *bootrom_regmap, *analog_top_regmap; 113 + unsigned int major_ver, misc_ver, metal_rev = 0; 114 + int ret; 115 + 116 + assist_regmap = 117 + syscon_regmap_lookup_by_compatible("amlogic,meson-mx-assist"); 118 + if (IS_ERR(assist_regmap)) 119 + return PTR_ERR(assist_regmap); 120 + 121 + bootrom_regmap = 122 + syscon_regmap_lookup_by_compatible("amlogic,meson-mx-bootrom"); 123 + if (IS_ERR(bootrom_regmap)) 124 + return PTR_ERR(bootrom_regmap); 125 + 126 + np = of_find_matching_node(NULL, meson_mx_socinfo_analog_top_ids); 127 + if (np) { 128 + analog_top_regmap = syscon_node_to_regmap(np); 129 + if (IS_ERR(analog_top_regmap)) 130 + return PTR_ERR(analog_top_regmap); 131 + 132 + ret = regmap_read(analog_top_regmap, 133 + MESON_MX_ANALOG_TOP_METAL_REVISION, 134 + &metal_rev); 135 + if (ret) 136 + return ret; 137 + } 138 + 139 + ret = regmap_read(assist_regmap, MESON_MX_ASSIST_HW_REV, &major_ver); 140 + if (ret < 0) 141 + return ret; 142 + 143 + ret = regmap_read(bootrom_regmap, MESON_MX_BOOTROM_MISC_VER, 144 + &misc_ver); 145 + if (ret < 0) 146 + return ret; 147 + 148 + soc_dev_attr = kzalloc(sizeof(*soc_dev_attr), GFP_KERNEL); 149 + if (!soc_dev_attr) 150 + return 
-ENODEV; 151 + 152 + soc_dev_attr->family = "Amlogic Meson"; 153 + 154 + np = of_find_node_by_path("/"); 155 + of_property_read_string(np, "model", &soc_dev_attr->machine); 156 + of_node_put(np); 157 + 158 + soc_dev_attr->revision = meson_mx_socinfo_revision(major_ver, misc_ver, 159 + metal_rev); 160 + soc_dev_attr->soc_id = meson_mx_socinfo_soc_id(major_ver, metal_rev); 161 + 162 + soc_dev = soc_device_register(soc_dev_attr); 163 + if (IS_ERR(soc_dev)) { 164 + kfree_const(soc_dev_attr->revision); 165 + kfree_const(soc_dev_attr->soc_id); 166 + kfree(soc_dev_attr); 167 + return PTR_ERR(soc_dev); 168 + } 169 + 170 + dev_info(soc_device_to_device(soc_dev), "Amlogic %s %s detected\n", 171 + soc_dev_attr->soc_id, soc_dev_attr->revision); 172 + 173 + return 0; 174 + } 175 + device_initcall(meson_mx_socinfo_init);
+8
drivers/soc/atmel/soc.c
··· 72 72 "sama5d21", "sama5d2"), 73 73 AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D22CU_EXID_MATCH, 74 74 "sama5d22", "sama5d2"), 75 + AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D225C_D1M_EXID_MATCH, 76 + "sama5d225c 16MiB SiP", "sama5d2"), 75 77 AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D23CU_EXID_MATCH, 76 78 "sama5d23", "sama5d2"), 77 79 AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D24CX_EXID_MATCH, ··· 86 84 "sama5d27", "sama5d2"), 87 85 AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D27CN_EXID_MATCH, 88 86 "sama5d27", "sama5d2"), 87 + AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D27C_D1G_EXID_MATCH, 88 + "sama5d27c 128MiB SiP", "sama5d2"), 89 + AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D27C_D5M_EXID_MATCH, 90 + "sama5d27c 64MiB SiP", "sama5d2"), 89 91 AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D28CU_EXID_MATCH, 90 92 "sama5d28", "sama5d2"), 91 93 AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D28CN_EXID_MATCH, 92 94 "sama5d28", "sama5d2"), 95 + AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D28C_D1G_EXID_MATCH, 96 + "sama5d28c 128MiB SiP", "sama5d2"), 93 97 AT91_SOC(SAMA5D3_CIDR_MATCH, SAMA5D31_EXID_MATCH, 94 98 "sama5d31", "sama5d3"), 95 99 AT91_SOC(SAMA5D3_CIDR_MATCH, SAMA5D33_EXID_MATCH,
+4
drivers/soc/atmel/soc.h
··· 64 64 65 65 #define SAMA5D2_CIDR_MATCH 0x0a5c08c0 66 66 #define SAMA5D21CU_EXID_MATCH 0x0000005a 67 + #define SAMA5D225C_D1M_EXID_MATCH 0x00000053 67 68 #define SAMA5D22CU_EXID_MATCH 0x00000059 68 69 #define SAMA5D22CN_EXID_MATCH 0x00000069 69 70 #define SAMA5D23CU_EXID_MATCH 0x00000058 70 71 #define SAMA5D24CX_EXID_MATCH 0x00000004 71 72 #define SAMA5D24CU_EXID_MATCH 0x00000014 72 73 #define SAMA5D26CU_EXID_MATCH 0x00000012 74 + #define SAMA5D27C_D1G_EXID_MATCH 0x00000033 75 + #define SAMA5D27C_D5M_EXID_MATCH 0x00000032 73 76 #define SAMA5D27CU_EXID_MATCH 0x00000011 74 77 #define SAMA5D27CN_EXID_MATCH 0x00000021 78 + #define SAMA5D28C_D1G_EXID_MATCH 0x00000013 75 79 #define SAMA5D28CU_EXID_MATCH 0x00000010 76 80 #define SAMA5D28CN_EXID_MATCH 0x00000020 77 81
+2
drivers/soc/bcm/Kconfig
··· 20 20 21 21 If unsure, say N. 22 22 23 + source "drivers/soc/bcm/brcmstb/Kconfig" 24 + 23 25 endmenu
+10
drivers/soc/bcm/brcmstb/Kconfig
··· 1 + if SOC_BRCMSTB 2 + 3 + config BRCMSTB_PM 4 + bool "Support suspend/resume for STB platforms" 5 + default y 6 + depends on PM 7 + depends on ARCH_BRCMSTB || BMIPS_GENERIC 8 + select ARM_CPU_SUSPEND if ARM 9 + 10 + endif # SOC_BRCMSTB
+1
drivers/soc/bcm/brcmstb/Makefile
··· 1 1 obj-y += common.o biuctrl.o 2 + obj-$(CONFIG_BRCMSTB_PM) += pm/
+3
drivers/soc/bcm/brcmstb/pm/Makefile
··· 1 + obj-$(CONFIG_ARM) += s2-arm.o pm-arm.o 2 + AFLAGS_s2-arm.o := -march=armv7-a 3 + obj-$(CONFIG_BMIPS_GENERIC) += s2-mips.o s3-mips.o pm-mips.o
+113
drivers/soc/bcm/brcmstb/pm/aon_defs.h
··· 1 + /* 2 + * Always ON (AON) register interface between bootloader and Linux 3 + * 4 + * Copyright © 2014-2017 Broadcom 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + * 10 + * This program is distributed in the hope that it will be useful, 11 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 12 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 + * GNU General Public License for more details. 14 + */ 15 + 16 + #ifndef __BRCMSTB_AON_DEFS_H__ 17 + #define __BRCMSTB_AON_DEFS_H__ 18 + 19 + #include <linux/compiler.h> 20 + 21 + /* Magic number in upper 16-bits */ 22 + #define BRCMSTB_S3_MAGIC_MASK 0xffff0000 23 + #define BRCMSTB_S3_MAGIC_SHORT 0x5AFE0000 24 + 25 + enum { 26 + /* Restore random key for AES memory verification (off = fixed key) */ 27 + S3_FLAG_LOAD_RANDKEY = (1 << 0), 28 + 29 + /* Scratch buffer page table is present */ 30 + S3_FLAG_SCRATCH_BUFFER_TABLE = (1 << 1), 31 + 32 + /* Skip all memory verification */ 33 + S3_FLAG_NO_MEM_VERIFY = (1 << 2), 34 + 35 + /* 36 + * Modification of this bit reserved for bootloader only. 37 + * 1=PSCI started Linux, 0=Direct jump to Linux. 38 + */ 39 + S3_FLAG_PSCI_BOOT = (1 << 3), 40 + 41 + /* 42 + * Modification of this bit reserved for bootloader only. 43 + * 1=64 bit boot, 0=32 bit boot. 
44 + */ 45 + S3_FLAG_BOOTED64 = (1 << 4), 46 + }; 47 + 48 + #define BRCMSTB_HASH_LEN (128 / 8) /* 128-bit hash */ 49 + 50 + #define AON_REG_MAGIC_FLAGS 0x00 51 + #define AON_REG_CONTROL_LOW 0x04 52 + #define AON_REG_CONTROL_HIGH 0x08 53 + #define AON_REG_S3_HASH 0x0c /* hash of S3 params */ 54 + #define AON_REG_CONTROL_HASH_LEN 0x1c 55 + #define AON_REG_PANIC 0x20 56 + 57 + #define BRCMSTB_S3_MAGIC 0x5AFEB007 58 + #define BRCMSTB_PANIC_MAGIC 0x512E115E 59 + #define BOOTLOADER_SCRATCH_SIZE 64 60 + #define BRCMSTB_DTU_STATE_MAP_ENTRIES (8*1024) 61 + #define BRCMSTB_DTU_CONFIG_ENTRIES (512) 62 + #define BRCMSTB_DTU_COUNT (2) 63 + 64 + #define IMAGE_DESCRIPTORS_BUFSIZE (2 * 1024) 65 + #define S3_BOOTLOADER_RESERVED (S3_FLAG_PSCI_BOOT | S3_FLAG_BOOTED64) 66 + 67 + struct brcmstb_bootloader_dtu_table { 68 + uint32_t dtu_state_map[BRCMSTB_DTU_STATE_MAP_ENTRIES]; 69 + uint32_t dtu_config[BRCMSTB_DTU_CONFIG_ENTRIES]; 70 + }; 71 + 72 + /* 73 + * Bootloader utilizes a custom parameter block left in DRAM for handling S3 74 + * warm resume 75 + */ 76 + struct brcmstb_s3_params { 77 + /* scratch memory for bootloader */ 78 + uint8_t scratch[BOOTLOADER_SCRATCH_SIZE]; 79 + 80 + uint32_t magic; /* BRCMSTB_S3_MAGIC */ 81 + uint64_t reentry; /* PA */ 82 + 83 + /* descriptors */ 84 + uint32_t hash[BRCMSTB_HASH_LEN / 4]; 85 + 86 + /* 87 + * If 0, then ignore this parameter (there is only one set of 88 + * descriptors) 89 + * 90 + * If non-0, then a second set of descriptors is stored at: 91 + * 92 + * descriptors + desc_offset_2 93 + * 94 + * The MAC result of both descriptors is XOR'd and stored in @hash 95 + */ 96 + uint32_t desc_offset_2; 97 + 98 + /* 99 + * (Physical) address of a brcmstb_bootloader_scratch_table, for 100 + * providing a large DRAM buffer to the bootloader 101 + */ 102 + uint64_t buffer_table; 103 + 104 + uint32_t spare[70]; 105 + 106 + uint8_t descriptors[IMAGE_DESCRIPTORS_BUFSIZE]; 107 + /* 108 + * Must be last member of struct. 
See brcmstb_pm_s3_finish() for reason. 109 + */ 110 + struct brcmstb_bootloader_dtu_table dtu[BRCMSTB_DTU_COUNT]; 111 + } __packed; 112 + 113 + #endif /* __BRCMSTB_AON_DEFS_H__ */
+822
drivers/soc/bcm/brcmstb/pm/pm-arm.c
··· 1 + /* 2 + * ARM-specific support for Broadcom STB S2/S3/S5 power management 3 + * 4 + * S2: clock gate CPUs and as many peripherals as possible 5 + * S3: power off all of the chip except the Always ON (AON) island; keep DDR in 6 + * self-refresh 7 + * S5: (a.k.a. S3 cold boot) much like S3, except DDR is powered down, so we 8 + * treat this mode like a soft power-off, with wakeup allowed from AON 9 + * 10 + * Copyright © 2014-2017 Broadcom 11 + * 12 + * This program is free software; you can redistribute it and/or modify 13 + * it under the terms of the GNU General Public License version 2 as 14 + * published by the Free Software Foundation. 15 + * 16 + * This program is distributed in the hope that it will be useful, 17 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 18 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 19 + * GNU General Public License for more details. 20 + */ 21 + 22 + #define pr_fmt(fmt) "brcmstb-pm: " fmt 23 + 24 + #include <linux/bitops.h> 25 + #include <linux/compiler.h> 26 + #include <linux/delay.h> 27 + #include <linux/dma-mapping.h> 28 + #include <linux/err.h> 29 + #include <linux/init.h> 30 + #include <linux/io.h> 31 + #include <linux/ioport.h> 32 + #include <linux/kconfig.h> 33 + #include <linux/kernel.h> 34 + #include <linux/memblock.h> 35 + #include <linux/module.h> 36 + #include <linux/notifier.h> 37 + #include <linux/of.h> 38 + #include <linux/of_address.h> 39 + #include <linux/platform_device.h> 40 + #include <linux/pm.h> 41 + #include <linux/printk.h> 42 + #include <linux/proc_fs.h> 43 + #include <linux/sizes.h> 44 + #include <linux/slab.h> 45 + #include <linux/sort.h> 46 + #include <linux/suspend.h> 47 + #include <linux/types.h> 48 + #include <linux/uaccess.h> 49 + #include <linux/soc/brcmstb/brcmstb.h> 50 + 51 + #include <asm/fncpy.h> 52 + #include <asm/setup.h> 53 + #include <asm/suspend.h> 54 + 55 + #include "pm.h" 56 + #include "aon_defs.h" 57 + 58 + #define SHIMPHY_DDR_PAD_CNTRL 0x8c 59
+ 60 + /* Method #0 */ 61 + #define SHIMPHY_PAD_PLL_SEQUENCE BIT(8) 62 + #define SHIMPHY_PAD_GATE_PLL_S3 BIT(9) 63 + 64 + /* Method #1 */ 65 + #define PWRDWN_SEQ_NO_SEQUENCING 0 66 + #define PWRDWN_SEQ_HOLD_CHANNEL 1 67 + #define PWRDWN_SEQ_RESET_PLL 2 68 + #define PWRDWN_SEQ_POWERDOWN_PLL 3 69 + 70 + #define SHIMPHY_PAD_S3_PWRDWN_SEQ_MASK 0x00f00000 71 + #define SHIMPHY_PAD_S3_PWRDWN_SEQ_SHIFT 20 72 + 73 + #define DDR_FORCE_CKE_RST_N BIT(3) 74 + #define DDR_PHY_RST_N BIT(2) 75 + #define DDR_PHY_CKE BIT(1) 76 + 77 + #define DDR_PHY_NO_CHANNEL 0xffffffff 78 + 79 + #define MAX_NUM_MEMC 3 80 + 81 + struct brcmstb_memc { 82 + void __iomem *ddr_phy_base; 83 + void __iomem *ddr_shimphy_base; 84 + void __iomem *ddr_ctrl; 85 + }; 86 + 87 + struct brcmstb_pm_control { 88 + void __iomem *aon_ctrl_base; 89 + void __iomem *aon_sram; 90 + struct brcmstb_memc memcs[MAX_NUM_MEMC]; 91 + 92 + void __iomem *boot_sram; 93 + size_t boot_sram_len; 94 + 95 + bool support_warm_boot; 96 + size_t pll_status_offset; 97 + int num_memc; 98 + 99 + struct brcmstb_s3_params *s3_params; 100 + dma_addr_t s3_params_pa; 101 + int s3entry_method; 102 + u32 warm_boot_offset; 103 + u32 phy_a_standby_ctrl_offs; 104 + u32 phy_b_standby_ctrl_offs; 105 + bool needs_ddr_pad; 106 + struct platform_device *pdev; 107 + }; 108 + 109 + enum bsp_initiate_command { 110 + BSP_CLOCK_STOP = 0x00, 111 + BSP_GEN_RANDOM_KEY = 0x4A, 112 + BSP_RESTORE_RANDOM_KEY = 0x55, 113 + BSP_GEN_FIXED_KEY = 0x63, 114 + }; 115 + 116 + #define PM_INITIATE 0x01 117 + #define PM_INITIATE_SUCCESS 0x00 118 + #define PM_INITIATE_FAIL 0xfe 119 + 120 + static struct brcmstb_pm_control ctrl; 121 + 122 + static int (*brcmstb_pm_do_s2_sram)(void __iomem *aon_ctrl_base, 123 + void __iomem *ddr_phy_pll_status); 124 + 125 + static int brcmstb_init_sram(struct device_node *dn) 126 + { 127 + void __iomem *sram; 128 + struct resource res; 129 + int ret; 130 + 131 + ret = of_address_to_resource(dn, 0, &res); 132 + if (ret) 133 + return ret; 134 + 135 + 
/* Uncached, executable remapping of SRAM */ 136 + sram = __arm_ioremap_exec(res.start, resource_size(&res), false); 137 + if (!sram) 138 + return -ENOMEM; 139 + 140 + ctrl.boot_sram = sram; 141 + ctrl.boot_sram_len = resource_size(&res); 142 + 143 + return 0; 144 + } 145 + 146 + static const struct of_device_id sram_dt_ids[] = { 147 + { .compatible = "mmio-sram" }, 148 + { /* sentinel */ } 149 + }; 150 + 151 + static int do_bsp_initiate_command(enum bsp_initiate_command cmd) 152 + { 153 + void __iomem *base = ctrl.aon_ctrl_base; 154 + int ret; 155 + int timeo = 1000 * 1000; /* 1 second */ 156 + 157 + writel_relaxed(0, base + AON_CTRL_PM_INITIATE); 158 + (void)readl_relaxed(base + AON_CTRL_PM_INITIATE); 159 + 160 + /* Go! */ 161 + writel_relaxed((cmd << 1) | PM_INITIATE, base + AON_CTRL_PM_INITIATE); 162 + 163 + /* 164 + * If firmware doesn't support the 'ack', then just assume it's done 165 + * after 10ms. Note that this only works for command 0, BSP_CLOCK_STOP 166 + */ 167 + if (of_machine_is_compatible("brcm,bcm74371a0")) { 168 + (void)readl_relaxed(base + AON_CTRL_PM_INITIATE); 169 + mdelay(10); 170 + return 0; 171 + } 172 + 173 + for (;;) { 174 + ret = readl_relaxed(base + AON_CTRL_PM_INITIATE); 175 + if (!(ret & PM_INITIATE)) 176 + break; 177 + if (timeo <= 0) { 178 + pr_err("error: timeout waiting for BSP (%x)\n", ret); 179 + break; 180 + } 181 + timeo -= 50; 182 + udelay(50); 183 + } 184 + 185 + return (ret & 0xff) != PM_INITIATE_SUCCESS; 186 + } 187 + 188 + static int brcmstb_pm_handshake(void) 189 + { 190 + void __iomem *base = ctrl.aon_ctrl_base; 191 + u32 tmp; 192 + int ret; 193 + 194 + /* BSP power handshake, v1 */ 195 + tmp = readl_relaxed(base + AON_CTRL_HOST_MISC_CMDS); 196 + tmp &= ~1UL; 197 + writel_relaxed(tmp, base + AON_CTRL_HOST_MISC_CMDS); 198 + (void)readl_relaxed(base + AON_CTRL_HOST_MISC_CMDS); 199 + 200 + ret = do_bsp_initiate_command(BSP_CLOCK_STOP); 201 + if (ret) 202 + pr_err("BSP handshake failed\n"); 203 + 204 + /* 205 + * HACK: BSP 
may have internal race on the CLOCK_STOP command. 206 + * Avoid touching the BSP for a few milliseconds. 207 + */ 208 + mdelay(3); 209 + 210 + return ret; 211 + } 212 + 213 + static inline void shimphy_set(u32 value, u32 mask) 214 + { 215 + int i; 216 + 217 + if (!ctrl.needs_ddr_pad) 218 + return; 219 + 220 + for (i = 0; i < ctrl.num_memc; i++) { 221 + u32 tmp; 222 + 223 + tmp = readl_relaxed(ctrl.memcs[i].ddr_shimphy_base + 224 + SHIMPHY_DDR_PAD_CNTRL); 225 + tmp = value | (tmp & mask); 226 + writel_relaxed(tmp, ctrl.memcs[i].ddr_shimphy_base + 227 + SHIMPHY_DDR_PAD_CNTRL); 228 + } 229 + wmb(); /* Complete sequence in order. */ 230 + } 231 + 232 + static inline void ddr_ctrl_set(bool warmboot) 233 + { 234 + int i; 235 + 236 + for (i = 0; i < ctrl.num_memc; i++) { 237 + u32 tmp; 238 + 239 + tmp = readl_relaxed(ctrl.memcs[i].ddr_ctrl + 240 + ctrl.warm_boot_offset); 241 + if (warmboot) 242 + tmp |= 1; 243 + else 244 + tmp &= ~1; /* Cold boot */ 245 + writel_relaxed(tmp, ctrl.memcs[i].ddr_ctrl + 246 + ctrl.warm_boot_offset); 247 + } 248 + /* Complete sequence in order */ 249 + wmb(); 250 + } 251 + 252 + static inline void s3entry_method0(void) 253 + { 254 + shimphy_set(SHIMPHY_PAD_GATE_PLL_S3 | SHIMPHY_PAD_PLL_SEQUENCE, 255 + 0xffffffff); 256 + } 257 + 258 + static inline void s3entry_method1(void) 259 + { 260 + /* 261 + * S3 Entry Sequence 262 + * ----------------- 263 + * Step 1: SHIMPHY_ADDR_CNTL_0_DDR_PAD_CNTRL [ S3_PWRDWN_SEQ ] = 3 264 + * Step 2: MEMC_DDR_0_WARM_BOOT [ WARM_BOOT ] = 1 265 + */ 266 + shimphy_set((PWRDWN_SEQ_POWERDOWN_PLL << 267 + SHIMPHY_PAD_S3_PWRDWN_SEQ_SHIFT), 268 + ~SHIMPHY_PAD_S3_PWRDWN_SEQ_MASK); 269 + 270 + ddr_ctrl_set(true); 271 + } 272 + 273 + static inline void s5entry_method1(void) 274 + { 275 + int i; 276 + 277 + /* 278 + * S5 Entry Sequence 279 + * ----------------- 280 + * Step 1: SHIMPHY_ADDR_CNTL_0_DDR_PAD_CNTRL [ S3_PWRDWN_SEQ ] = 3 281 + * Step 2: MEMC_DDR_0_WARM_BOOT [ WARM_BOOT ] = 0 282 + * Step 3: 
DDR_PHY_CONTROL_REGS_[AB]_0_STANDBY_CONTROL[ CKE ] = 0 283 + * DDR_PHY_CONTROL_REGS_[AB]_0_STANDBY_CONTROL[ RST_N ] = 0 284 + */ 285 + shimphy_set((PWRDWN_SEQ_POWERDOWN_PLL << 286 + SHIMPHY_PAD_S3_PWRDWN_SEQ_SHIFT), 287 + ~SHIMPHY_PAD_S3_PWRDWN_SEQ_MASK); 288 + 289 + ddr_ctrl_set(false); 290 + 291 + for (i = 0; i < ctrl.num_memc; i++) { 292 + u32 tmp; 293 + 294 + /* Step 3: Channel A (RST_N = CKE = 0) */ 295 + tmp = readl_relaxed(ctrl.memcs[i].ddr_phy_base + 296 + ctrl.phy_a_standby_ctrl_offs); 297 + tmp &= ~(DDR_PHY_RST_N | DDR_PHY_CKE); 298 + writel_relaxed(tmp, ctrl.memcs[i].ddr_phy_base + 299 + ctrl.phy_a_standby_ctrl_offs); 300 + 301 + /* Step 3: Channel B? */ 302 + if (ctrl.phy_b_standby_ctrl_offs != DDR_PHY_NO_CHANNEL) { 303 + tmp = readl_relaxed(ctrl.memcs[i].ddr_phy_base + 304 + ctrl.phy_b_standby_ctrl_offs); 305 + tmp &= ~(DDR_PHY_RST_N | DDR_PHY_CKE); 306 + writel_relaxed(tmp, ctrl.memcs[i].ddr_phy_base + 307 + ctrl.phy_b_standby_ctrl_offs); 308 + } 309 + } 310 + /* Must complete */ 311 + wmb(); 312 + } 313 + 314 + /* 315 + * Run a Power Management State Machine (PMSM) shutdown command and put the CPU 316 + * into a low-power mode 317 + */ 318 + static void brcmstb_do_pmsm_power_down(unsigned long base_cmd, bool onewrite) 319 + { 320 + void __iomem *base = ctrl.aon_ctrl_base; 321 + 322 + if ((ctrl.s3entry_method == 1) && (base_cmd == PM_COLD_CONFIG)) 323 + s5entry_method1(); 324 + 325 + /* pm_start_pwrdn transition 0->1 */ 326 + writel_relaxed(base_cmd, base + AON_CTRL_PM_CTRL); 327 + 328 + if (!onewrite) { 329 + (void)readl_relaxed(base + AON_CTRL_PM_CTRL); 330 + 331 + writel_relaxed(base_cmd | PM_PWR_DOWN, base + AON_CTRL_PM_CTRL); 332 + (void)readl_relaxed(base + AON_CTRL_PM_CTRL); 333 + } 334 + wfi(); 335 + } 336 + 337 + /* Support S5 cold boot out of "poweroff" */ 338 + static void brcmstb_pm_poweroff(void) 339 + { 340 + brcmstb_pm_handshake(); 341 + 342 + /* Clear magic S3 warm-boot value */ 343 + writel_relaxed(0, ctrl.aon_sram +
AON_REG_MAGIC_FLAGS); 344 + (void)readl_relaxed(ctrl.aon_sram + AON_REG_MAGIC_FLAGS); 345 + 346 + /* Skip wait-for-interrupt signal; just use a countdown */ 347 + writel_relaxed(0x10, ctrl.aon_ctrl_base + AON_CTRL_PM_CPU_WAIT_COUNT); 348 + (void)readl_relaxed(ctrl.aon_ctrl_base + AON_CTRL_PM_CPU_WAIT_COUNT); 349 + 350 + if (ctrl.s3entry_method == 1) { 351 + shimphy_set((PWRDWN_SEQ_POWERDOWN_PLL << 352 + SHIMPHY_PAD_S3_PWRDWN_SEQ_SHIFT), 353 + ~SHIMPHY_PAD_S3_PWRDWN_SEQ_MASK); 354 + ddr_ctrl_set(false); 355 + brcmstb_do_pmsm_power_down(M1_PM_COLD_CONFIG, true); 356 + return; /* We should never actually get here */ 357 + } 358 + 359 + brcmstb_do_pmsm_power_down(PM_COLD_CONFIG, false); 360 + } 361 + 362 + static void *brcmstb_pm_copy_to_sram(void *fn, size_t len) 363 + { 364 + unsigned int size = ALIGN(len, FNCPY_ALIGN); 365 + 366 + if (ctrl.boot_sram_len < size) { 367 + pr_err("standby code will not fit in SRAM\n"); 368 + return NULL; 369 + } 370 + 371 + return fncpy(ctrl.boot_sram, fn, size); 372 + } 373 + 374 + /* 375 + * S2 suspend/resume picks up where we left off, so we must execute carefully 376 + * from SRAM, in order to allow DDR to come back up safely before we continue. 377 + */ 378 + static int brcmstb_pm_s2(void) 379 + { 380 + /* A previous S3 can set a value hazardous to S2, so make sure. */ 381 + if (ctrl.s3entry_method == 1) { 382 + shimphy_set((PWRDWN_SEQ_NO_SEQUENCING << 383 + SHIMPHY_PAD_S3_PWRDWN_SEQ_SHIFT), 384 + ~SHIMPHY_PAD_S3_PWRDWN_SEQ_MASK); 385 + ddr_ctrl_set(false); 386 + } 387 + 388 + brcmstb_pm_do_s2_sram = brcmstb_pm_copy_to_sram(&brcmstb_pm_do_s2, 389 + brcmstb_pm_do_s2_sz); 390 + if (!brcmstb_pm_do_s2_sram) 391 + return -EINVAL; 392 + 393 + return brcmstb_pm_do_s2_sram(ctrl.aon_ctrl_base, 394 + ctrl.memcs[0].ddr_phy_base + 395 + ctrl.pll_status_offset); 396 + } 397 + 398 + /* 399 + * This function is called on a new stack, so don't allow inlining (which will 400 + * generate stack references on the old stack). 
It cannot be made static because 401 + * it is referenced from brcmstb_pm_s3() 402 + */ 403 + noinline int brcmstb_pm_s3_finish(void) 404 + { 405 + struct brcmstb_s3_params *params = ctrl.s3_params; 406 + dma_addr_t params_pa = ctrl.s3_params_pa; 407 + phys_addr_t reentry = virt_to_phys(&cpu_resume); 408 + enum bsp_initiate_command cmd; 409 + u32 flags; 410 + 411 + /* 412 + * Clear parameter structure, but not DTU area, which has already been 413 + * filled in. We know DTU is at the end, so we can just subtract its 414 + * size. 415 + */ 416 + memset(params, 0, sizeof(*params) - sizeof(params->dtu)); 417 + 418 + flags = readl_relaxed(ctrl.aon_sram + AON_REG_MAGIC_FLAGS); 419 + 420 + flags &= S3_BOOTLOADER_RESERVED; 421 + flags |= S3_FLAG_NO_MEM_VERIFY; 422 + flags |= S3_FLAG_LOAD_RANDKEY; 423 + 424 + /* Load random / fixed key */ 425 + if (flags & S3_FLAG_LOAD_RANDKEY) 426 + cmd = BSP_GEN_RANDOM_KEY; 427 + else 428 + cmd = BSP_GEN_FIXED_KEY; 429 + if (do_bsp_initiate_command(cmd)) { 430 + pr_info("key loading failed\n"); 431 + return -EIO; 432 + } 433 + 434 + params->magic = BRCMSTB_S3_MAGIC; 435 + params->reentry = reentry; 436 + 437 + /* No more writes to DRAM */ 438 + flush_cache_all(); 439 + 440 + flags |= BRCMSTB_S3_MAGIC_SHORT; 441 + 442 + writel_relaxed(flags, ctrl.aon_sram + AON_REG_MAGIC_FLAGS); 443 + writel_relaxed(lower_32_bits(params_pa), 444 + ctrl.aon_sram + AON_REG_CONTROL_LOW); 445 + writel_relaxed(upper_32_bits(params_pa), 446 + ctrl.aon_sram + AON_REG_CONTROL_HIGH); 447 + 448 + switch (ctrl.s3entry_method) { 449 + case 0: 450 + s3entry_method0(); 451 + brcmstb_do_pmsm_power_down(PM_WARM_CONFIG, false); 452 + break; 453 + case 1: 454 + s3entry_method1(); 455 + brcmstb_do_pmsm_power_down(M1_PM_WARM_CONFIG, true); 456 + break; 457 + default: 458 + return -EINVAL; 459 + } 460 + 461 + /* Must have been interrupted from wfi()? 
*/ 462 + return -EINTR; 463 + } 464 + 465 + static int brcmstb_pm_do_s3(unsigned long sp) 466 + { 467 + unsigned long save_sp; 468 + int ret; 469 + 470 + asm volatile ( 471 + "mov %[save], sp\n" 472 + "mov sp, %[new]\n" 473 + "bl brcmstb_pm_s3_finish\n" 474 + "mov %[ret], r0\n" 475 + "mov %[new], sp\n" 476 + "mov sp, %[save]\n" 477 + : [save] "=&r" (save_sp), [ret] "=&r" (ret) 478 + : [new] "r" (sp) 479 + ); 480 + 481 + return ret; 482 + } 483 + 484 + static int brcmstb_pm_s3(void) 485 + { 486 + void __iomem *sp = ctrl.boot_sram + ctrl.boot_sram_len; 487 + 488 + return cpu_suspend((unsigned long)sp, brcmstb_pm_do_s3); 489 + } 490 + 491 + static int brcmstb_pm_standby(bool deep_standby) 492 + { 493 + int ret; 494 + 495 + if (brcmstb_pm_handshake()) 496 + return -EIO; 497 + 498 + if (deep_standby) 499 + ret = brcmstb_pm_s3(); 500 + else 501 + ret = brcmstb_pm_s2(); 502 + if (ret) 503 + pr_err("%s: standby failed\n", __func__); 504 + 505 + return ret; 506 + } 507 + 508 + static int brcmstb_pm_enter(suspend_state_t state) 509 + { 510 + int ret = -EINVAL; 511 + 512 + switch (state) { 513 + case PM_SUSPEND_STANDBY: 514 + ret = brcmstb_pm_standby(false); 515 + break; 516 + case PM_SUSPEND_MEM: 517 + ret = brcmstb_pm_standby(true); 518 + break; 519 + } 520 + 521 + return ret; 522 + } 523 + 524 + static int brcmstb_pm_valid(suspend_state_t state) 525 + { 526 + switch (state) { 527 + case PM_SUSPEND_STANDBY: 528 + return true; 529 + case PM_SUSPEND_MEM: 530 + return ctrl.support_warm_boot; 531 + default: 532 + return false; 533 + } 534 + } 535 + 536 + static const struct platform_suspend_ops brcmstb_pm_ops = { 537 + .enter = brcmstb_pm_enter, 538 + .valid = brcmstb_pm_valid, 539 + }; 540 + 541 + static const struct of_device_id aon_ctrl_dt_ids[] = { 542 + { .compatible = "brcm,brcmstb-aon-ctrl" }, 543 + {} 544 + }; 545 + 546 + struct ddr_phy_ofdata { 547 + bool supports_warm_boot; 548 + size_t pll_status_offset; 549 + int s3entry_method; 550 + u32 warm_boot_offset; 551 + u32 
phy_a_standby_ctrl_offs; 552 + u32 phy_b_standby_ctrl_offs; 553 + }; 554 + 555 + static struct ddr_phy_ofdata ddr_phy_71_1 = { 556 + .supports_warm_boot = true, 557 + .pll_status_offset = 0x0c, 558 + .s3entry_method = 1, 559 + .warm_boot_offset = 0x2c, 560 + .phy_a_standby_ctrl_offs = 0x198, 561 + .phy_b_standby_ctrl_offs = DDR_PHY_NO_CHANNEL 562 + }; 563 + 564 + static struct ddr_phy_ofdata ddr_phy_72_0 = { 565 + .supports_warm_boot = true, 566 + .pll_status_offset = 0x10, 567 + .s3entry_method = 1, 568 + .warm_boot_offset = 0x40, 569 + .phy_a_standby_ctrl_offs = 0x2a4, 570 + .phy_b_standby_ctrl_offs = 0x8a4 571 + }; 572 + 573 + static struct ddr_phy_ofdata ddr_phy_225_1 = { 574 + .supports_warm_boot = false, 575 + .pll_status_offset = 0x4, 576 + .s3entry_method = 0 577 + }; 578 + 579 + static struct ddr_phy_ofdata ddr_phy_240_1 = { 580 + .supports_warm_boot = true, 581 + .pll_status_offset = 0x4, 582 + .s3entry_method = 0 583 + }; 584 + 585 + static const struct of_device_id ddr_phy_dt_ids[] = { 586 + { 587 + .compatible = "brcm,brcmstb-ddr-phy-v71.1", 588 + .data = &ddr_phy_71_1, 589 + }, 590 + { 591 + .compatible = "brcm,brcmstb-ddr-phy-v72.0", 592 + .data = &ddr_phy_72_0, 593 + }, 594 + { 595 + .compatible = "brcm,brcmstb-ddr-phy-v225.1", 596 + .data = &ddr_phy_225_1, 597 + }, 598 + { 599 + .compatible = "brcm,brcmstb-ddr-phy-v240.1", 600 + .data = &ddr_phy_240_1, 601 + }, 602 + { 603 + /* Same as v240.1, for the registers we care about */ 604 + .compatible = "brcm,brcmstb-ddr-phy-v240.2", 605 + .data = &ddr_phy_240_1, 606 + }, 607 + {} 608 + }; 609 + 610 + struct ddr_seq_ofdata { 611 + bool needs_ddr_pad; 612 + u32 warm_boot_offset; 613 + }; 614 + 615 + static const struct ddr_seq_ofdata ddr_seq_b22 = { 616 + .needs_ddr_pad = false, 617 + .warm_boot_offset = 0x2c, 618 + }; 619 + 620 + static const struct ddr_seq_ofdata ddr_seq = { 621 + .needs_ddr_pad = true, 622 + }; 623 + 624 + static const struct of_device_id ddr_shimphy_dt_ids[] = { 625 + { .compatible = 
"brcm,brcmstb-ddr-shimphy-v1.0" }, 626 + {} 627 + }; 628 + 629 + static const struct of_device_id brcmstb_memc_of_match[] = { 630 + { 631 + .compatible = "brcm,brcmstb-memc-ddr-rev-b.2.2", 632 + .data = &ddr_seq_b22, 633 + }, 634 + { 635 + .compatible = "brcm,brcmstb-memc-ddr", 636 + .data = &ddr_seq, 637 + }, 638 + {}, 639 + }; 640 + 641 + static void __iomem *brcmstb_ioremap_match(const struct of_device_id *matches, 642 + int index, const void **ofdata) 643 + { 644 + struct device_node *dn; 645 + const struct of_device_id *match; 646 + 647 + dn = of_find_matching_node_and_match(NULL, matches, &match); 648 + if (!dn) 649 + return ERR_PTR(-EINVAL); 650 + 651 + if (ofdata) 652 + *ofdata = match->data; 653 + 654 + return of_io_request_and_map(dn, index, dn->full_name); 655 + } 656 + 657 + static int brcmstb_pm_panic_notify(struct notifier_block *nb, 658 + unsigned long action, void *data) 659 + { 660 + writel_relaxed(BRCMSTB_PANIC_MAGIC, ctrl.aon_sram + AON_REG_PANIC); 661 + 662 + return NOTIFY_DONE; 663 + } 664 + 665 + static struct notifier_block brcmstb_pm_panic_nb = { 666 + .notifier_call = brcmstb_pm_panic_notify, 667 + }; 668 + 669 + static int brcmstb_pm_probe(struct platform_device *pdev) 670 + { 671 + const struct ddr_phy_ofdata *ddr_phy_data; 672 + const struct ddr_seq_ofdata *ddr_seq_data; 673 + const struct of_device_id *of_id = NULL; 674 + struct device_node *dn; 675 + void __iomem *base; 676 + int ret, i; 677 + 678 + /* AON ctrl registers */ 679 + base = brcmstb_ioremap_match(aon_ctrl_dt_ids, 0, NULL); 680 + if (IS_ERR(base)) { 681 + pr_err("error mapping AON_CTRL\n"); 682 + return PTR_ERR(base); 683 + } 684 + ctrl.aon_ctrl_base = base; 685 + 686 + /* AON SRAM registers */ 687 + base = brcmstb_ioremap_match(aon_ctrl_dt_ids, 1, NULL); 688 + if (IS_ERR(base)) { 689 + /* Assume standard offset */ 690 + ctrl.aon_sram = ctrl.aon_ctrl_base + 691 + AON_CTRL_SYSTEM_DATA_RAM_OFS; 692 + } else { 693 + ctrl.aon_sram = base; 694 + } 695 + 696 + writel_relaxed(0, 
ctrl.aon_sram + AON_REG_PANIC); 697 + 698 + /* DDR PHY registers */ 699 + base = brcmstb_ioremap_match(ddr_phy_dt_ids, 0, 700 + (const void **)&ddr_phy_data); 701 + if (IS_ERR(base)) { 702 + pr_err("error mapping DDR PHY\n"); 703 + return PTR_ERR(base); 704 + } 705 + ctrl.support_warm_boot = ddr_phy_data->supports_warm_boot; 706 + ctrl.pll_status_offset = ddr_phy_data->pll_status_offset; 707 + /* Only need DDR PHY 0 for now? */ 708 + ctrl.memcs[0].ddr_phy_base = base; 709 + ctrl.s3entry_method = ddr_phy_data->s3entry_method; 710 + ctrl.phy_a_standby_ctrl_offs = ddr_phy_data->phy_a_standby_ctrl_offs; 711 + ctrl.phy_b_standby_ctrl_offs = ddr_phy_data->phy_b_standby_ctrl_offs; 712 + /* 713 + * Slightly gross to use the PHY version to get a MEMC 714 + * offset, but that is the only versioned thing so far 715 + * that we can test for. 716 + */ 717 + ctrl.warm_boot_offset = ddr_phy_data->warm_boot_offset; 718 + 719 + /* DDR SHIM-PHY registers */ 720 + for_each_matching_node(dn, ddr_shimphy_dt_ids) { 721 + i = ctrl.num_memc; 722 + if (i >= MAX_NUM_MEMC) { 723 + pr_warn("too many MEMCs (max %d)\n", MAX_NUM_MEMC); 724 + break; 725 + } 726 + 727 + base = of_io_request_and_map(dn, 0, dn->full_name); 728 + if (IS_ERR(base)) { 729 + if (!ctrl.support_warm_boot) 730 + break; 731 + 732 + pr_err("error mapping DDR SHIMPHY %d\n", i); 733 + return PTR_ERR(base); 734 + } 735 + ctrl.memcs[i].ddr_shimphy_base = base; 736 + ctrl.num_memc++; 737 + } 738 + 739 + /* Sequencer DRAM Param and Control Registers */ 740 + i = 0; 741 + for_each_matching_node(dn, brcmstb_memc_of_match) { 742 + base = of_iomap(dn, 0); 743 + if (!base) { 744 + pr_err("error mapping DDR Sequencer %d\n", i); 745 + return -ENOMEM; 746 + } 747 + 748 + of_id = of_match_node(brcmstb_memc_of_match, dn); 749 + if (!of_id) { 750 + iounmap(base); 751 + return -EINVAL; 752 + } 753 + 754 + ddr_seq_data = of_id->data; 755 + ctrl.needs_ddr_pad = ddr_seq_data->needs_ddr_pad; 756 + /* Adjust warm boot offset based on the DDR sequencer */ 
757 + if (ddr_seq_data->warm_boot_offset) 758 + ctrl.warm_boot_offset = ddr_seq_data->warm_boot_offset; 759 + 760 + ctrl.memcs[i].ddr_ctrl = base; 761 + i++; 762 + } 763 + 764 + pr_debug("PM: supports warm boot:%d, method:%d, wboffs:%x\n", 765 + ctrl.support_warm_boot, ctrl.s3entry_method, 766 + ctrl.warm_boot_offset); 767 + 768 + dn = of_find_matching_node(NULL, sram_dt_ids); 769 + if (!dn) { 770 + pr_err("SRAM not found\n"); 771 + return -EINVAL; 772 + } 773 + 774 + ret = brcmstb_init_sram(dn); 775 + if (ret) { 776 + pr_err("error setting up SRAM for PM\n"); 777 + return ret; 778 + } 779 + 780 + ctrl.pdev = pdev; 781 + 782 + ctrl.s3_params = kmalloc(sizeof(*ctrl.s3_params), GFP_KERNEL); 783 + if (!ctrl.s3_params) 784 + return -ENOMEM; 785 + ctrl.s3_params_pa = dma_map_single(&pdev->dev, ctrl.s3_params, 786 + sizeof(*ctrl.s3_params), 787 + DMA_TO_DEVICE); 788 + if (dma_mapping_error(&pdev->dev, ctrl.s3_params_pa)) { 789 + pr_err("error mapping DMA memory\n"); 790 + ret = -ENOMEM; 791 + goto out; 792 + } 793 + 794 + atomic_notifier_chain_register(&panic_notifier_list, 795 + &brcmstb_pm_panic_nb); 796 + 797 + pm_power_off = brcmstb_pm_poweroff; 798 + suspend_set_ops(&brcmstb_pm_ops); 799 + 800 + return 0; 801 + 802 + out: 803 + kfree(ctrl.s3_params); 804 + 805 + pr_warn("PM: initialization failed with code %d\n", ret); 806 + 807 + return ret; 808 + } 809 + 810 + static struct platform_driver brcmstb_pm_driver = { 811 + .driver = { 812 + .name = "brcmstb-pm", 813 + .of_match_table = aon_ctrl_dt_ids, 814 + }, 815 + }; 816 + 817 + static int __init brcmstb_pm_init(void) 818 + { 819 + return platform_driver_probe(&brcmstb_pm_driver, 820 + brcmstb_pm_probe); 821 + } 822 + module_init(brcmstb_pm_init);
+461
drivers/soc/bcm/brcmstb/pm/pm-mips.c
··· 1 + /* 2 + * MIPS-specific support for Broadcom STB S2/S3/S5 power management 3 + * 4 + * Copyright (C) 2016-2017 Broadcom 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + * 10 + * This program is distributed in the hope that it will be useful, 11 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 12 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 + * GNU General Public License for more details. 14 + */ 15 + 16 + #include <linux/kernel.h> 17 + #include <linux/printk.h> 18 + #include <linux/io.h> 19 + #include <linux/of.h> 20 + #include <linux/of_address.h> 21 + #include <linux/delay.h> 22 + #include <linux/suspend.h> 23 + #include <asm/bmips.h> 24 + #include <asm/tlbflush.h> 25 + 26 + #include "pm.h" 27 + 28 + #define S2_NUM_PARAMS 6 29 + #define MAX_NUM_MEMC 3 30 + 31 + /* S3 constants */ 32 + #define MAX_GP_REGS 16 33 + #define MAX_CP0_REGS 32 34 + #define NUM_MEMC_CLIENTS 128 35 + #define AON_CTRL_RAM_SIZE 128 36 + #define BRCMSTB_S3_MAGIC 0x5AFEB007 37 + 38 + #define CLEAR_RESET_MASK 0x01 39 + 40 + /* Index each CP0 register that needs to be saved */ 41 + #define CONTEXT 0 42 + #define USER_LOCAL 1 43 + #define PGMK 2 44 + #define HWRENA 3 45 + #define COMPARE 4 46 + #define STATUS 5 47 + #define CONFIG 6 48 + #define MODE 7 49 + #define EDSP 8 50 + #define BOOT_VEC 9 51 + #define EBASE 10 52 + 53 + struct brcmstb_memc { 54 + void __iomem *ddr_phy_base; 55 + void __iomem *arb_base; 56 + }; 57 + 58 + struct brcmstb_pm_control { 59 + void __iomem *aon_ctrl_base; 60 + void __iomem *aon_sram_base; 61 + void __iomem *timers_base; 62 + struct brcmstb_memc memcs[MAX_NUM_MEMC]; 63 + int num_memc; 64 + }; 65 + 66 + struct brcm_pm_s3_context { 67 + u32 cp0_regs[MAX_CP0_REGS]; 68 + u32 memc0_rts[NUM_MEMC_CLIENTS]; 69 + u32 sc_boot_vec; 70 + }; 71 + 72 + struct 
brcmstb_mem_transfer; 73 + 74 + struct brcmstb_mem_transfer { 75 + struct brcmstb_mem_transfer *next; 76 + void *src; 77 + void *dst; 78 + dma_addr_t pa_src; 79 + dma_addr_t pa_dst; 80 + u32 len; 81 + u8 key; 82 + u8 mode; 83 + u8 src_remapped; 84 + u8 dst_remapped; 85 + u8 src_dst_remapped; 86 + }; 87 + 88 + #define AON_SAVE_SRAM(base, idx, val) \ 89 + __raw_writel(val, base + (idx << 2)) 90 + 91 + /* Used for saving registers in asm */ 92 + u32 gp_regs[MAX_GP_REGS]; 93 + 94 + #define BSP_CLOCK_STOP 0x00 95 + #define PM_INITIATE 0x01 96 + 97 + static struct brcmstb_pm_control ctrl; 98 + 99 + static void brcm_pm_save_cp0_context(struct brcm_pm_s3_context *ctx) 100 + { 101 + /* Generic MIPS */ 102 + ctx->cp0_regs[CONTEXT] = read_c0_context(); 103 + ctx->cp0_regs[USER_LOCAL] = read_c0_userlocal(); 104 + ctx->cp0_regs[PGMK] = read_c0_pagemask(); 105 + ctx->cp0_regs[HWRENA] = read_c0_cache(); 106 + ctx->cp0_regs[COMPARE] = read_c0_compare(); 107 + ctx->cp0_regs[STATUS] = read_c0_status(); 108 + 109 + /* Broadcom specific */ 110 + ctx->cp0_regs[CONFIG] = read_c0_brcm_config(); 111 + ctx->cp0_regs[MODE] = read_c0_brcm_mode(); 112 + ctx->cp0_regs[EDSP] = read_c0_brcm_edsp(); 113 + ctx->cp0_regs[BOOT_VEC] = read_c0_brcm_bootvec(); 114 + ctx->cp0_regs[EBASE] = read_c0_ebase(); 115 + 116 + ctx->sc_boot_vec = bmips_read_zscm_reg(0xa0); 117 + } 118 + 119 + static void brcm_pm_restore_cp0_context(struct brcm_pm_s3_context *ctx) 120 + { 121 + /* Restore cp0 state */ 122 + bmips_write_zscm_reg(0xa0, ctx->sc_boot_vec); 123 + 124 + /* Generic MIPS */ 125 + write_c0_context(ctx->cp0_regs[CONTEXT]); 126 + write_c0_userlocal(ctx->cp0_regs[USER_LOCAL]); 127 + write_c0_pagemask(ctx->cp0_regs[PGMK]); 128 + write_c0_cache(ctx->cp0_regs[HWRENA]); 129 + write_c0_compare(ctx->cp0_regs[COMPARE]); 130 + write_c0_status(ctx->cp0_regs[STATUS]); 131 + 132 + /* Broadcom specific */ 133 + write_c0_brcm_config(ctx->cp0_regs[CONFIG]); 134 + write_c0_brcm_mode(ctx->cp0_regs[MODE]); 135 + 
write_c0_brcm_edsp(ctx->cp0_regs[EDSP]); 136 + write_c0_brcm_bootvec(ctx->cp0_regs[BOOT_VEC]); 137 + write_c0_ebase(ctx->cp0_regs[EBASE]); 138 + } 139 + 140 + static void brcmstb_pm_handshake(void) 141 + { 142 + void __iomem *base = ctrl.aon_ctrl_base; 143 + u32 tmp; 144 + 145 + /* BSP power handshake, v1 */ 146 + tmp = __raw_readl(base + AON_CTRL_HOST_MISC_CMDS); 147 + tmp &= ~1UL; 148 + __raw_writel(tmp, base + AON_CTRL_HOST_MISC_CMDS); 149 + (void)__raw_readl(base + AON_CTRL_HOST_MISC_CMDS); 150 + 151 + __raw_writel(0, base + AON_CTRL_PM_INITIATE); 152 + (void)__raw_readl(base + AON_CTRL_PM_INITIATE); 153 + __raw_writel(BSP_CLOCK_STOP | PM_INITIATE, 154 + base + AON_CTRL_PM_INITIATE); 155 + /* 156 + * HACK: BSP may have internal race on the CLOCK_STOP command. 157 + * Avoid touching the BSP for a few milliseconds. 158 + */ 159 + mdelay(3); 160 + } 161 + 162 + static void brcmstb_pm_s5(void) 163 + { 164 + void __iomem *base = ctrl.aon_ctrl_base; 165 + 166 + brcmstb_pm_handshake(); 167 + 168 + /* Clear magic s3 warm-boot value */ 169 + AON_SAVE_SRAM(ctrl.aon_sram_base, 0, 0); 170 + 171 + /* Set the countdown */ 172 + __raw_writel(0x10, base + AON_CTRL_PM_CPU_WAIT_COUNT); 173 + (void)__raw_readl(base + AON_CTRL_PM_CPU_WAIT_COUNT); 174 + 175 + /* Prepare for S5 cold boot */ 176 + __raw_writel(PM_COLD_CONFIG, base + AON_CTRL_PM_CTRL); 177 + (void)__raw_readl(base + AON_CTRL_PM_CTRL); 178 + 179 + __raw_writel((PM_COLD_CONFIG | PM_PWR_DOWN), base + 180 + AON_CTRL_PM_CTRL); 181 + (void)__raw_readl(base + AON_CTRL_PM_CTRL); 182 + 183 + __asm__ __volatile__( 184 + " wait\n" 185 + : : : "memory"); 186 + } 187 + 188 + static int brcmstb_pm_s3(void) 189 + { 190 + struct brcm_pm_s3_context s3_context; 191 + void __iomem *memc_arb_base; 192 + unsigned long flags; 193 + u32 tmp; 194 + int i; 195 + 196 + /* Prepare for s3 */ 197 + AON_SAVE_SRAM(ctrl.aon_sram_base, 0, BRCMSTB_S3_MAGIC); 198 + AON_SAVE_SRAM(ctrl.aon_sram_base, 1, (u32)&s3_reentry); 199 + 
AON_SAVE_SRAM(ctrl.aon_sram_base, 2, 0); 200 + 201 + /* Clear RESET_HISTORY */ 202 + tmp = __raw_readl(ctrl.aon_ctrl_base + AON_CTRL_RESET_CTRL); 203 + tmp &= ~CLEAR_RESET_MASK; 204 + __raw_writel(tmp, ctrl.aon_ctrl_base + AON_CTRL_RESET_CTRL); 205 + 206 + local_irq_save(flags); 207 + 208 + /* Inhibit DDR_RSTb pulse for both MEMCs */ 209 + for (i = 0; i < ctrl.num_memc; i++) { 210 + tmp = __raw_readl(ctrl.memcs[i].ddr_phy_base + 211 + DDR40_PHY_CONTROL_REGS_0_STANDBY_CTRL); 212 + 213 + tmp &= ~0x0f; 214 + __raw_writel(tmp, ctrl.memcs[i].ddr_phy_base + 215 + DDR40_PHY_CONTROL_REGS_0_STANDBY_CTRL); 216 + tmp |= (0x05 | BIT(5)); 217 + __raw_writel(tmp, ctrl.memcs[i].ddr_phy_base + 218 + DDR40_PHY_CONTROL_REGS_0_STANDBY_CTRL); 219 + } 220 + 221 + /* Save CP0 context */ 222 + brcm_pm_save_cp0_context(&s3_context); 223 + 224 + /* Save RTS (skip debug register) */ 225 + memc_arb_base = ctrl.memcs[0].arb_base + 4; 226 + for (i = 0; i < NUM_MEMC_CLIENTS; i++) { 227 + s3_context.memc0_rts[i] = __raw_readl(memc_arb_base); 228 + memc_arb_base += 4; 229 + } 230 + 231 + /* Save I/O context */ 232 + local_flush_tlb_all(); 233 + _dma_cache_wback_inv(0, ~0); 234 + 235 + brcm_pm_do_s3(ctrl.aon_ctrl_base, current_cpu_data.dcache.linesz); 236 + 237 + /* CPU reconfiguration */ 238 + local_flush_tlb_all(); 239 + bmips_cpu_setup(); 240 + cpumask_clear(&bmips_booted_mask); 241 + 242 + /* Restore RTS (skip debug register) */ 243 + memc_arb_base = ctrl.memcs[0].arb_base + 4; 244 + for (i = 0; i < NUM_MEMC_CLIENTS; i++) { 245 + __raw_writel(s3_context.memc0_rts[i], memc_arb_base); 246 + memc_arb_base += 4; 247 + } 248 + 249 + /* restore CP0 context */ 250 + brcm_pm_restore_cp0_context(&s3_context); 251 + 252 + local_irq_restore(flags); 253 + 254 + return 0; 255 + } 256 + 257 + static int brcmstb_pm_s2(void) 258 + { 259 + /* 260 + * We need to pass 6 arguments to an assembly function. Let's avoid the 261 + * stack and pass arguments in an explicit 4-byte array. 
The assembly 262 + * code assumes all arguments are 4 bytes and arguments are ordered 263 + * like so: 264 + * 265 + * 0: AON_CTRL base register 266 + * 1: DDR_PHY base register 267 + * 2: TIMERS base register 268 + * 3: I-Cache line size 269 + * 4: Restart vector address 270 + * 5: Restart vector size 271 + */ 272 + u32 s2_params[6]; 273 + 274 + /* Prepare s2 parameters */ 275 + s2_params[0] = (u32)ctrl.aon_ctrl_base; 276 + s2_params[1] = (u32)ctrl.memcs[0].ddr_phy_base; 277 + s2_params[2] = (u32)ctrl.timers_base; 278 + s2_params[3] = (u32)current_cpu_data.icache.linesz; 279 + s2_params[4] = (u32)BMIPS_WARM_RESTART_VEC; 280 + s2_params[5] = (u32)(bmips_smp_int_vec_end - 281 + bmips_smp_int_vec); 282 + 283 + /* Drop to standby */ 284 + brcm_pm_do_s2(s2_params); 285 + 286 + return 0; 287 + } 288 + 289 + static int brcmstb_pm_standby(bool deep_standby) 290 + { 291 + brcmstb_pm_handshake(); 292 + 293 + /* Send IRQs to BMIPS_WARM_RESTART_VEC */ 294 + clear_c0_cause(CAUSEF_IV); 295 + irq_disable_hazard(); 296 + set_c0_status(ST0_BEV); 297 + irq_disable_hazard(); 298 + 299 + if (deep_standby) 300 + brcmstb_pm_s3(); 301 + else 302 + brcmstb_pm_s2(); 303 + 304 + /* Send IRQs to normal runtime vectors */ 305 + clear_c0_status(ST0_BEV); 306 + irq_disable_hazard(); 307 + set_c0_cause(CAUSEF_IV); 308 + irq_disable_hazard(); 309 + 310 + return 0; 311 + } 312 + 313 + static int brcmstb_pm_enter(suspend_state_t state) 314 + { 315 + int ret = -EINVAL; 316 + 317 + switch (state) { 318 + case PM_SUSPEND_STANDBY: 319 + ret = brcmstb_pm_standby(false); 320 + break; 321 + case PM_SUSPEND_MEM: 322 + ret = brcmstb_pm_standby(true); 323 + break; 324 + } 325 + 326 + return ret; 327 + } 328 + 329 + static int brcmstb_pm_valid(suspend_state_t state) 330 + { 331 + switch (state) { 332 + case PM_SUSPEND_STANDBY: 333 + return true; 334 + case PM_SUSPEND_MEM: 335 + return true; 336 + default: 337 + return false; 338 + } 339 + } 340 + 341 + static const struct platform_suspend_ops brcmstb_pm_ops 
= { 342 + .enter = brcmstb_pm_enter, 343 + .valid = brcmstb_pm_valid, 344 + }; 345 + 346 + static const struct of_device_id aon_ctrl_dt_ids[] = { 347 + { .compatible = "brcm,brcmstb-aon-ctrl" }, 348 + { /* sentinel */ } 349 + }; 350 + 351 + static const struct of_device_id ddr_phy_dt_ids[] = { 352 + { .compatible = "brcm,brcmstb-ddr-phy" }, 353 + { /* sentinel */ } 354 + }; 355 + 356 + static const struct of_device_id arb_dt_ids[] = { 357 + { .compatible = "brcm,brcmstb-memc-arb" }, 358 + { /* sentinel */ } 359 + }; 360 + 361 + static const struct of_device_id timers_ids[] = { 362 + { .compatible = "brcm,brcmstb-timers" }, 363 + { /* sentinel */ } 364 + }; 365 + 366 + static inline void __iomem *brcmstb_ioremap_node(struct device_node *dn, 367 + int index) 368 + { 369 + return of_io_request_and_map(dn, index, dn->full_name); 370 + } 371 + 372 + static void __iomem *brcmstb_ioremap_match(const struct of_device_id *matches, 373 + int index, const void **ofdata) 374 + { 375 + struct device_node *dn; 376 + const struct of_device_id *match; 377 + 378 + dn = of_find_matching_node_and_match(NULL, matches, &match); 379 + if (!dn) 380 + return ERR_PTR(-EINVAL); 381 + 382 + if (ofdata) 383 + *ofdata = match->data; 384 + 385 + return brcmstb_ioremap_node(dn, index); 386 + } 387 + 388 + static int brcmstb_pm_init(void) 389 + { 390 + struct device_node *dn; 391 + void __iomem *base; 392 + int i; 393 + 394 + /* AON ctrl registers */ 395 + base = brcmstb_ioremap_match(aon_ctrl_dt_ids, 0, NULL); 396 + if (IS_ERR(base)) { 397 + pr_err("error mapping AON_CTRL\n"); 398 + goto aon_err; 399 + } 400 + ctrl.aon_ctrl_base = base; 401 + 402 + /* AON SRAM registers */ 403 + base = brcmstb_ioremap_match(aon_ctrl_dt_ids, 1, NULL); 404 + if (IS_ERR(base)) { 405 + pr_err("error mapping AON_SRAM\n"); 406 + goto sram_err; 407 + } 408 + ctrl.aon_sram_base = base; 409 + 410 + ctrl.num_memc = 0; 411 + /* Map MEMC DDR PHY registers */ 412 + for_each_matching_node(dn, ddr_phy_dt_ids) { 413 + i = 
ctrl.num_memc; 414 + if (i >= MAX_NUM_MEMC) { 415 + pr_warn("Too many MEMCs (max %d)\n", MAX_NUM_MEMC); 416 + break; 417 + } 418 + base = brcmstb_ioremap_node(dn, 0); 419 + if (IS_ERR(base)) 420 + goto ddr_err; 421 + 422 + ctrl.memcs[i].ddr_phy_base = base; 423 + ctrl.num_memc++; 424 + } 425 + 426 + /* MEMC ARB registers */ 427 + base = brcmstb_ioremap_match(arb_dt_ids, 0, NULL); 428 + if (IS_ERR(base)) { 429 + pr_err("error mapping MEMC ARB\n"); 430 + goto ddr_err; 431 + } 432 + ctrl.memcs[0].arb_base = base; 433 + 434 + /* Timer registers */ 435 + base = brcmstb_ioremap_match(timers_ids, 0, NULL); 436 + if (IS_ERR(base)) { 437 + pr_err("error mapping timers\n"); 438 + goto tmr_err; 439 + } 440 + ctrl.timers_base = base; 441 + 442 + /* s3 cold boot aka s5 */ 443 + pm_power_off = brcmstb_pm_s5; 444 + 445 + suspend_set_ops(&brcmstb_pm_ops); 446 + 447 + return 0; 448 + 449 + tmr_err: 450 + iounmap(ctrl.memcs[0].arb_base); 451 + ddr_err: 452 + for (i = 0; i < ctrl.num_memc; i++) 453 + iounmap(ctrl.memcs[i].ddr_phy_base); 454 + 455 + iounmap(ctrl.aon_sram_base); 456 + sram_err: 457 + iounmap(ctrl.aon_ctrl_base); 458 + aon_err: 459 + return PTR_ERR(base); 460 + } 461 + arch_initcall(brcmstb_pm_init);
+89
drivers/soc/bcm/brcmstb/pm/pm.h
··· 1 + /* 2 + * Definitions for Broadcom STB power management / Always ON (AON) block 3 + * 4 + * Copyright © 2016-2017 Broadcom 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + * 10 + * This program is distributed in the hope that it will be useful, 11 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 12 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 + * GNU General Public License for more details. 14 + */ 15 + 16 + #ifndef __BRCMSTB_PM_H__ 17 + #define __BRCMSTB_PM_H__ 18 + 19 + #define AON_CTRL_RESET_CTRL 0x00 20 + #define AON_CTRL_PM_CTRL 0x04 21 + #define AON_CTRL_PM_STATUS 0x08 22 + #define AON_CTRL_PM_CPU_WAIT_COUNT 0x10 23 + #define AON_CTRL_PM_INITIATE 0x88 24 + #define AON_CTRL_HOST_MISC_CMDS 0x8c 25 + #define AON_CTRL_SYSTEM_DATA_RAM_OFS 0x200 26 + 27 + /* MIPS PM constants */ 28 + /* MEMC0 offsets */ 29 + #define DDR40_PHY_CONTROL_REGS_0_PLL_STATUS 0x10 30 + #define DDR40_PHY_CONTROL_REGS_0_STANDBY_CTRL 0xa4 31 + 32 + /* TIMER offsets */ 33 + #define TIMER_TIMER1_CTRL 0x0c 34 + #define TIMER_TIMER1_STAT 0x1c 35 + 36 + /* TIMER defines */ 37 + #define RESET_TIMER 0x0 38 + #define START_TIMER 0xbfffffff 39 + #define TIMER_MASK 0x3fffffff 40 + 41 + /* PM_CTRL bitfield (Method #0) */ 42 + #define PM_FAST_PWRDOWN (1 << 6) 43 + #define PM_WARM_BOOT (1 << 5) 44 + #define PM_DEEP_STANDBY (1 << 4) 45 + #define PM_CPU_PWR (1 << 3) 46 + #define PM_USE_CPU_RDY (1 << 2) 47 + #define PM_PLL_PWRDOWN (1 << 1) 48 + #define PM_PWR_DOWN (1 << 0) 49 + 50 + /* PM_CTRL bitfield (Method #1) */ 51 + #define PM_DPHY_STANDBY_CLEAR (1 << 20) 52 + #define PM_MIN_S3_WIDTH_TIMER_BYPASS (1 << 7) 53 + 54 + #define PM_S2_COMMAND (PM_PLL_PWRDOWN | PM_USE_CPU_RDY | PM_PWR_DOWN) 55 + 56 + /* Method 0 bitmasks */ 57 + #define PM_COLD_CONFIG (PM_PLL_PWRDOWN | PM_DEEP_STANDBY) 58 + #define 
PM_WARM_CONFIG (PM_COLD_CONFIG | PM_USE_CPU_RDY | PM_WARM_BOOT) 59 + 60 + /* Method 1 bitmask */ 61 + #define M1_PM_WARM_CONFIG (PM_DPHY_STANDBY_CLEAR | \ 62 + PM_MIN_S3_WIDTH_TIMER_BYPASS | \ 63 + PM_WARM_BOOT | PM_DEEP_STANDBY | \ 64 + PM_PLL_PWRDOWN | PM_PWR_DOWN) 65 + 66 + #define M1_PM_COLD_CONFIG (PM_DPHY_STANDBY_CLEAR | \ 67 + PM_MIN_S3_WIDTH_TIMER_BYPASS | \ 68 + PM_DEEP_STANDBY | \ 69 + PM_PLL_PWRDOWN | PM_PWR_DOWN) 70 + 71 + #ifndef __ASSEMBLY__ 72 + 73 + #ifndef CONFIG_MIPS 74 + extern const unsigned long brcmstb_pm_do_s2_sz; 75 + extern asmlinkage int brcmstb_pm_do_s2(void __iomem *aon_ctrl_base, 76 + void __iomem *ddr_phy_pll_status); 77 + #else 78 + /* s2 asm */ 79 + extern asmlinkage int brcm_pm_do_s2(u32 *s2_params); 80 + 81 + /* s3 asm */ 82 + extern asmlinkage int brcm_pm_do_s3(void __iomem *aon_ctrl_base, 83 + int dcache_linesz); 84 + extern int s3_reentry; 85 + #endif /* CONFIG_MIPS */ 86 + 87 + #endif 88 + 89 + #endif /* __BRCMSTB_PM_H__ */
+76
drivers/soc/bcm/brcmstb/pm/s2-arm.S
··· 1 + /* 2 + * Copyright © 2014-2017 Broadcom 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License version 2 as 6 + * published by the Free Software Foundation. 7 + * 8 + * This program is distributed in the hope that it will be useful, 9 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 10 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 11 + * GNU General Public License for more details. 12 + */ 13 + 14 + #include <linux/linkage.h> 15 + #include <asm/assembler.h> 16 + 17 + #include "pm.h" 18 + 19 + .text 20 + .align 3 21 + 22 + #define AON_CTRL_REG r10 23 + #define DDR_PHY_STATUS_REG r11 24 + 25 + /* 26 + * r0: AON_CTRL base address 27 + * r1: DDR PHY PLL status register address 28 + */ 29 + ENTRY(brcmstb_pm_do_s2) 30 + stmfd sp!, {r4-r11, lr} 31 + mov AON_CTRL_REG, r0 32 + mov DDR_PHY_STATUS_REG, r1 33 + 34 + /* Flush memory transactions */ 35 + dsb 36 + 37 + /* Cache DDR_PHY_STATUS_REG translation */ 38 + ldr r0, [DDR_PHY_STATUS_REG] 39 + 40 + /* power down request */ 41 + ldr r0, =PM_S2_COMMAND 42 + ldr r1, =0 43 + str r1, [AON_CTRL_REG, #AON_CTRL_PM_CTRL] 44 + ldr r1, [AON_CTRL_REG, #AON_CTRL_PM_CTRL] 45 + str r0, [AON_CTRL_REG, #AON_CTRL_PM_CTRL] 46 + ldr r0, [AON_CTRL_REG, #AON_CTRL_PM_CTRL] 47 + 48 + /* Wait for interrupt */ 49 + wfi 50 + nop 51 + 52 + /* Bring MEMC back up */ 53 + 1: ldr r0, [DDR_PHY_STATUS_REG] 54 + ands r0, #1 55 + beq 1b 56 + 57 + /* Power-up handshake */ 58 + ldr r0, =1 59 + str r0, [AON_CTRL_REG, #AON_CTRL_HOST_MISC_CMDS] 60 + ldr r0, [AON_CTRL_REG, #AON_CTRL_HOST_MISC_CMDS] 61 + 62 + ldr r0, =0 63 + str r0, [AON_CTRL_REG, #AON_CTRL_PM_CTRL] 64 + ldr r0, [AON_CTRL_REG, #AON_CTRL_PM_CTRL] 65 + 66 + /* Return to caller */ 67 + ldr r0, =0 68 + ldmfd sp!, {r4-r11, pc} 69 + 70 + ENDPROC(brcmstb_pm_do_s2) 71 + 72 + /* Place literal pool here */ 73 + .ltorg 74 + 75 + ENTRY(brcmstb_pm_do_s2_sz) 76 + .word . 
- brcmstb_pm_do_s2
+200
drivers/soc/bcm/brcmstb/pm/s2-mips.S
··· 1 + /* 2 + * Copyright (C) 2016 Broadcom Corporation 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License version 2 as 6 + * published by the Free Software Foundation. 7 + * 8 + * This program is distributed in the hope that it will be useful, 9 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 10 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 11 + * GNU General Public License for more details. 12 + */ 13 + 14 + #include <asm/asm.h> 15 + #include <asm/regdef.h> 16 + #include <asm/mipsregs.h> 17 + #include <asm/stackframe.h> 18 + 19 + #include "pm.h" 20 + 21 + .text 22 + .set noreorder 23 + .align 5 24 + 25 + /* 26 + * a0: u32 params array 27 + */ 28 + LEAF(brcm_pm_do_s2) 29 + 30 + subu sp, 64 31 + sw ra, 0(sp) 32 + sw s0, 4(sp) 33 + sw s1, 8(sp) 34 + sw s2, 12(sp) 35 + sw s3, 16(sp) 36 + sw s4, 20(sp) 37 + sw s5, 24(sp) 38 + sw s6, 28(sp) 39 + sw s7, 32(sp) 40 + 41 + /* 42 + * Dereference the params array 43 + * s0: AON_CTRL base register 44 + * s1: DDR_PHY base register 45 + * s2: TIMERS base register 46 + * s3: I-Cache line size 47 + * s4: Restart vector address 48 + * s5: Restart vector size 49 + */ 50 + move t0, a0 51 + 52 + lw s0, 0(t0) 53 + lw s1, 4(t0) 54 + lw s2, 8(t0) 55 + lw s3, 12(t0) 56 + lw s4, 16(t0) 57 + lw s5, 20(t0) 58 + 59 + /* Lock this asm section into the I-cache */ 60 + addiu t1, s3, -1 61 + not t1 62 + 63 + la t0, brcm_pm_do_s2 64 + and t0, t1 65 + 66 + la t2, asm_end 67 + and t2, t1 68 + 69 + 1: cache 0x1c, 0(t0) 70 + bne t0, t2, 1b 71 + addu t0, s3 72 + 73 + /* Lock the interrupt vector into the I-cache */ 74 + move t0, zero 75 + 76 + 2: move t1, s4 77 + cache 0x1c, 0(t1) 78 + addu t1, s3 79 + addu t0, s3 80 + ble t0, s5, 2b 81 + nop 82 + 83 + sync 84 + 85 + /* Power down request */ 86 + li t0, PM_S2_COMMAND 87 + sw zero, AON_CTRL_PM_CTRL(s0) 88 + lw zero, AON_CTRL_PM_CTRL(s0) 89 + sw t0, AON_CTRL_PM_CTRL(s0) 90 + lw 
t0, AON_CTRL_PM_CTRL(s0) 91 + 92 + /* Enable CP0 interrupt 2 and wait for interrupt */ 93 + mfc0 t0, CP0_STATUS 94 + /* Save cp0 sr for restoring later */ 95 + move s6, t0 96 + 97 + li t1, ~(ST0_IM | ST0_IE) 98 + and t0, t1 99 + ori t0, STATUSF_IP2 100 + mtc0 t0, CP0_STATUS 101 + nop 102 + nop 103 + nop 104 + ori t0, ST0_IE 105 + mtc0 t0, CP0_STATUS 106 + 107 + /* Wait for interrupt */ 108 + wait 109 + nop 110 + 111 + /* Wait for memc0 */ 112 + 1: lw t0, DDR40_PHY_CONTROL_REGS_0_PLL_STATUS(s1) 113 + andi t0, 1 114 + beqz t0, 1b 115 + nop 116 + 117 + /* 1ms delay needed for stable recovery */ 118 + /* Use TIMER1 to count 1 ms */ 119 + li t0, RESET_TIMER 120 + sw t0, TIMER_TIMER1_CTRL(s2) 121 + lw t0, TIMER_TIMER1_CTRL(s2) 122 + 123 + li t0, START_TIMER 124 + sw t0, TIMER_TIMER1_CTRL(s2) 125 + lw t0, TIMER_TIMER1_CTRL(s2) 126 + 127 + /* Prepare delay */ 128 + li t0, TIMER_MASK 129 + lw t1, TIMER_TIMER1_STAT(s2) 130 + and t1, t0 131 + /* 1ms delay */ 132 + addi t1, 27000 133 + 134 + /* Wait for the timer value to exceed t1 */ 135 + 1: lw t0, TIMER_TIMER1_STAT(s2) 136 + sgtu t2, t1, t0 137 + bnez t2, 1b 138 + nop 139 + 140 + /* Power back up */ 141 + li t1, 1 142 + sw t1, AON_CTRL_HOST_MISC_CMDS(s0) 143 + lw t1, AON_CTRL_HOST_MISC_CMDS(s0) 144 + 145 + sw zero, AON_CTRL_PM_CTRL(s0) 146 + lw zero, AON_CTRL_PM_CTRL(s0) 147 + 148 + /* Unlock I-cache */ 149 + addiu t1, s3, -1 150 + not t1 151 + 152 + la t0, brcm_pm_do_s2 153 + and t0, t1 154 + 155 + la t2, asm_end 156 + and t2, t1 157 + 158 + 1: cache 0x00, 0(t0) 159 + bne t0, t2, 1b 160 + addu t0, s3 161 + 162 + /* Unlock interrupt vector */ 163 + move t0, zero 164 + 165 + 2: move t1, s4 166 + cache 0x00, 0(t1) 167 + addu t1, s3 168 + addu t0, s3 169 + ble t0, s5, 2b 170 + nop 171 + 172 + /* Restore cp0 sr */ 173 + sync 174 + nop 175 + mtc0 s6, CP0_STATUS 176 + nop 177 + 178 + /* Set return value to success */ 179 + li v0, 0 180 + 181 + /* Return to caller */ 182 + lw s7, 32(sp) 183 + lw s6, 28(sp) 184 + lw s5, 24(sp) 185 
+ lw s4, 20(sp) 186 + lw s3, 16(sp) 187 + lw s2, 12(sp) 188 + lw s1, 8(sp) 189 + lw s0, 4(sp) 190 + lw ra, 0(sp) 191 + addiu sp, 64 192 + 193 + jr ra 194 + nop 195 + END(brcm_pm_do_s2) 196 + 197 + .globl asm_end 198 + asm_end: 199 + nop 200 +
+146
drivers/soc/bcm/brcmstb/pm/s3-mips.S
··· 1 + /* 2 + * Copyright (C) 2016 Broadcom Corporation 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License version 2 as 6 + * published by the Free Software Foundation. 7 + * 8 + * This program is distributed in the hope that it will be useful, 9 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 10 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 11 + * GNU General Public License for more details. 12 + */ 13 + 14 + #include <asm/asm.h> 15 + #include <asm/regdef.h> 16 + #include <asm/mipsregs.h> 17 + #include <asm/bmips.h> 18 + 19 + #include "pm.h" 20 + 21 + .text 22 + .set noreorder 23 + .align 5 24 + .global s3_reentry 25 + 26 + /* 27 + * a0: AON_CTRL base register 28 + * a1: D-Cache line size 29 + */ 30 + LEAF(brcm_pm_do_s3) 31 + 32 + /* Get the address of s3_context */ 33 + la t0, gp_regs 34 + sw ra, 0(t0) 35 + sw s0, 4(t0) 36 + sw s1, 8(t0) 37 + sw s2, 12(t0) 38 + sw s3, 16(t0) 39 + sw s4, 20(t0) 40 + sw s5, 24(t0) 41 + sw s6, 28(t0) 42 + sw s7, 32(t0) 43 + sw gp, 36(t0) 44 + sw sp, 40(t0) 45 + sw fp, 44(t0) 46 + 47 + /* Save CP0 Status */ 48 + mfc0 t1, CP0_STATUS 49 + sw t1, 48(t0) 50 + 51 + /* Write-back gp registers - cache will be gone */ 52 + addiu t1, a1, -1 53 + not t1 54 + and t0, t1 55 + 56 + /* Flush at least 64 bytes */ 57 + addiu t2, t0, 64 58 + and t2, t1 59 + 60 + 1: cache 0x17, 0(t0) 61 + bne t0, t2, 1b 62 + addu t0, a1 63 + 64 + /* Drop to deep standby */ 65 + li t1, PM_WARM_CONFIG 66 + sw zero, AON_CTRL_PM_CTRL(a0) 67 + lw zero, AON_CTRL_PM_CTRL(a0) 68 + sw t1, AON_CTRL_PM_CTRL(a0) 69 + lw t1, AON_CTRL_PM_CTRL(a0) 70 + 71 + li t1, (PM_WARM_CONFIG | PM_PWR_DOWN) 72 + sw t1, AON_CTRL_PM_CTRL(a0) 73 + lw t1, AON_CTRL_PM_CTRL(a0) 74 + 75 + /* Enable CP0 interrupt 2 and wait for interrupt */ 76 + mfc0 t0, CP0_STATUS 77 + 78 + li t1, ~(ST0_IM | ST0_IE) 79 + and t0, t1 80 + ori t0, STATUSF_IP2 81 + mtc0 t0, CP0_STATUS 82 + nop 83 
+ nop 84 + nop 85 + ori t0, ST0_IE 86 + mtc0 t0, CP0_STATUS 87 + 88 + /* Wait for interrupt */ 89 + wait 90 + nop 91 + 92 + s3_reentry: 93 + 94 + /* Clear call/return stack */ 95 + li t0, (0x06 << 16) 96 + mtc0 t0, $22, 2 97 + ssnop 98 + ssnop 99 + ssnop 100 + 101 + /* Clear jump target buffer */ 102 + li t0, (0x04 << 16) 103 + mtc0 t0, $22, 2 104 + ssnop 105 + ssnop 106 + ssnop 107 + 108 + sync 109 + nop 110 + 111 + /* Setup mmu defaults */ 112 + mtc0 zero, CP0_WIRED 113 + mtc0 zero, CP0_ENTRYHI 114 + li k0, PM_DEFAULT_MASK 115 + mtc0 k0, CP0_PAGEMASK 116 + 117 + li sp, BMIPS_WARM_RESTART_VEC 118 + la k0, plat_wired_tlb_setup 119 + jalr k0 120 + nop 121 + 122 + /* Restore general purpose registers */ 123 + la t0, gp_regs 124 + lw fp, 44(t0) 125 + lw sp, 40(t0) 126 + lw gp, 36(t0) 127 + lw s7, 32(t0) 128 + lw s6, 28(t0) 129 + lw s5, 24(t0) 130 + lw s4, 20(t0) 131 + lw s3, 16(t0) 132 + lw s2, 12(t0) 133 + lw s1, 8(t0) 134 + lw s0, 4(t0) 135 + lw ra, 0(t0) 136 + 137 + /* Restore CP0 status */ 138 + lw t1, 48(t0) 139 + mtc0 t1, CP0_STATUS 140 + 141 + /* Return to caller */ 142 + li v0, 0 143 + jr ra 144 + nop 145 + 146 + END(brcm_pm_do_s3)
+1
drivers/soc/fsl/guts.c
··· 213 213 { .compatible = "fsl,ls1021a-dcfg", }, 214 214 { .compatible = "fsl,ls1043a-dcfg", }, 215 215 { .compatible = "fsl,ls2080a-dcfg", }, 216 + { .compatible = "fsl,ls1088a-dcfg", }, 216 217 {} 217 218 }; 218 219 MODULE_DEVICE_TABLE(of, fsl_guts_of_match);
+1 -1
drivers/soc/fsl/qbman/Kconfig
··· 1 1 menuconfig FSL_DPAA 2 2 bool "Freescale DPAA 1.x support" 3 - depends on FSL_SOC_BOOKE 3 + depends on (FSL_SOC_BOOKE || ARCH_LAYERSCAPE) 4 4 select GENERIC_ALLOCATOR 5 5 help 6 6 The Freescale Data Path Acceleration Architecture (DPAA) is a set of
+1 -1
drivers/soc/fsl/qbman/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 obj-$(CONFIG_FSL_DPAA) += bman_ccsr.o qman_ccsr.o \ 3 3 bman_portal.o qman_portal.o \ 4 - bman.o qman.o 4 + bman.o qman.o dpaa_sys.o 5 5 6 6 obj-$(CONFIG_FSL_BMAN_TEST) += bman-test.o 7 7 bman-test-y = bman_test.o
+33 -9
drivers/soc/fsl/qbman/bman.c
··· 35 35 36 36 /* Portal register assists */ 37 37 38 + #if defined(CONFIG_ARM) || defined(CONFIG_ARM64) 39 + /* Cache-inhibited register offsets */ 40 + #define BM_REG_RCR_PI_CINH 0x3000 41 + #define BM_REG_RCR_CI_CINH 0x3100 42 + #define BM_REG_RCR_ITR 0x3200 43 + #define BM_REG_CFG 0x3300 44 + #define BM_REG_SCN(n) (0x3400 + ((n) << 6)) 45 + #define BM_REG_ISR 0x3e00 46 + #define BM_REG_IER 0x3e40 47 + #define BM_REG_ISDR 0x3e80 48 + #define BM_REG_IIR 0x3ec0 49 + 50 + /* Cache-enabled register offsets */ 51 + #define BM_CL_CR 0x0000 52 + #define BM_CL_RR0 0x0100 53 + #define BM_CL_RR1 0x0140 54 + #define BM_CL_RCR 0x1000 55 + #define BM_CL_RCR_PI_CENA 0x3000 56 + #define BM_CL_RCR_CI_CENA 0x3100 57 + 58 + #else 38 59 /* Cache-inhibited register offsets */ 39 60 #define BM_REG_RCR_PI_CINH 0x0000 40 61 #define BM_REG_RCR_CI_CINH 0x0004 ··· 74 53 #define BM_CL_RCR 0x1000 75 54 #define BM_CL_RCR_PI_CENA 0x3000 76 55 #define BM_CL_RCR_CI_CENA 0x3100 56 + #endif 77 57 78 58 /* 79 59 * Portal modes. ··· 176 154 }; 177 155 178 156 struct bm_addr { 179 - void __iomem *ce; /* cache-enabled */ 157 + void *ce; /* cache-enabled */ 158 + __be32 *ce_be; /* Same as above but for direct access */ 180 159 void __iomem *ci; /* cache-inhibited */ 181 160 }; 182 161 ··· 190 167 /* Cache-inhibited register access. 
*/ 191 168 static inline u32 bm_in(struct bm_portal *p, u32 offset) 192 169 { 193 - return be32_to_cpu(__raw_readl(p->addr.ci + offset)); 170 + return ioread32be(p->addr.ci + offset); 194 171 } 195 172 196 173 static inline void bm_out(struct bm_portal *p, u32 offset, u32 val) 197 174 { 198 - __raw_writel(cpu_to_be32(val), p->addr.ci + offset); 175 + iowrite32be(val, p->addr.ci + offset); 199 176 } 200 177 201 178 /* Cache Enabled Portal Access */ ··· 211 188 212 189 static inline u32 bm_ce_in(struct bm_portal *p, u32 offset) 213 190 { 214 - return be32_to_cpu(__raw_readl(p->addr.ce + offset)); 191 + return be32_to_cpu(*(p->addr.ce_be + (offset/4))); 215 192 } 216 193 217 194 struct bman_portal { ··· 431 408 432 409 mc->cr = portal->addr.ce + BM_CL_CR; 433 410 mc->rr = portal->addr.ce + BM_CL_RR0; 434 - mc->rridx = (__raw_readb(&mc->cr->_ncw_verb) & BM_MCC_VERB_VBIT) ? 411 + mc->rridx = (mc->cr->_ncw_verb & BM_MCC_VERB_VBIT) ? 435 412 0 : 1; 436 413 mc->vbit = mc->rridx ? BM_MCC_VERB_VBIT : 0; 437 414 #ifdef CONFIG_FSL_DPAA_CHECKING ··· 489 466 * its command is submitted and completed. This includes the valid-bit, 490 467 * in case you were wondering... 491 468 */ 492 - if (!__raw_readb(&rr->verb)) { 469 + if (!rr->verb) { 493 470 dpaa_invalidate_touch_ro(rr); 494 471 return NULL; 495 472 } ··· 535 512 * config, everything that follows depends on it and "config" is more 536 513 * for (de)reference... 
537 514 */ 538 - p->addr.ce = c->addr_virt[DPAA_PORTAL_CE]; 539 - p->addr.ci = c->addr_virt[DPAA_PORTAL_CI]; 515 + p->addr.ce = c->addr_virt_ce; 516 + p->addr.ce_be = c->addr_virt_ce; 517 + p->addr.ci = c->addr_virt_ci; 540 518 if (bm_rcr_init(p, bm_rcr_pvb, bm_rcr_cce)) { 541 519 dev_err(c->dev, "RCR initialisation failed\n"); 542 520 goto fail_rcr; ··· 631 607 unsigned long irqflags; 632 608 633 609 local_irq_save(irqflags); 634 - set_bits(bits & BM_PIRQ_VISIBLE, &p->irq_sources); 610 + p->irq_sources |= bits & BM_PIRQ_VISIBLE; 635 611 bm_out(&p->p, BM_REG_IER, p->irq_sources); 636 612 local_irq_restore(irqflags); 637 613 return 0;
+15
drivers/soc/fsl/qbman/bman_ccsr.c
··· 201 201 return -ENODEV; 202 202 } 203 203 204 + /* 205 + * If FBPR memory wasn't defined using the qbman compatible string 206 + * try using the of_reserved_mem_device method 207 + */ 208 + if (!fbpr_a) { 209 + ret = qbman_init_private_mem(dev, 0, &fbpr_a, &fbpr_sz); 210 + if (ret) { 211 + dev_err(dev, "qbman_init_private_mem() failed 0x%x\n", 212 + ret); 213 + return -ENODEV; 214 + } 215 + } 216 + 217 + dev_dbg(dev, "Allocated FBPR 0x%llx 0x%zx\n", fbpr_a, fbpr_sz); 218 + 204 219 bm_set_memory(fbpr_a, fbpr_sz); 205 220 206 221 err_irq = platform_get_irq(pdev, 0);
+10 -13
drivers/soc/fsl/qbman/bman_portal.c
··· 91 91 struct device_node *node = dev->of_node; 92 92 struct bm_portal_config *pcfg; 93 93 struct resource *addr_phys[2]; 94 - void __iomem *va; 95 94 int irq, cpu; 96 95 97 96 pcfg = devm_kmalloc(dev, sizeof(*pcfg), GFP_KERNEL); ··· 122 123 } 123 124 pcfg->irq = irq; 124 125 125 - va = ioremap_prot(addr_phys[0]->start, resource_size(addr_phys[0]), 0); 126 - if (!va) { 127 - dev_err(dev, "ioremap::CE failed\n"); 126 + pcfg->addr_virt_ce = memremap(addr_phys[0]->start, 127 + resource_size(addr_phys[0]), 128 + QBMAN_MEMREMAP_ATTR); 129 + if (!pcfg->addr_virt_ce) { 130 + dev_err(dev, "memremap::CE failed\n"); 128 131 goto err_ioremap1; 129 132 } 130 133 131 - pcfg->addr_virt[DPAA_PORTAL_CE] = va; 132 - 133 - va = ioremap_prot(addr_phys[1]->start, resource_size(addr_phys[1]), 134 - _PAGE_GUARDED | _PAGE_NO_CACHE); 135 - if (!va) { 134 + pcfg->addr_virt_ci = ioremap(addr_phys[1]->start, 135 + resource_size(addr_phys[1])); 136 + if (!pcfg->addr_virt_ci) { 136 137 dev_err(dev, "ioremap::CI failed\n"); 137 138 goto err_ioremap2; 138 139 } 139 - 140 - pcfg->addr_virt[DPAA_PORTAL_CI] = va; 141 140 142 141 spin_lock(&bman_lock); 143 142 cpu = cpumask_next_zero(-1, &portal_cpus); ··· 161 164 return 0; 162 165 163 166 err_portal_init: 164 - iounmap(pcfg->addr_virt[DPAA_PORTAL_CI]); 167 + iounmap(pcfg->addr_virt_ci); 165 168 err_ioremap2: 166 - iounmap(pcfg->addr_virt[DPAA_PORTAL_CE]); 169 + memunmap(pcfg->addr_virt_ce); 167 170 err_ioremap1: 168 171 return -ENXIO; 169 172 }
+3 -5
drivers/soc/fsl/qbman/bman_priv.h
··· 46 46 extern struct gen_pool *bm_bpalloc; 47 47 48 48 struct bm_portal_config { 49 - /* 50 - * Corenet portal addresses; 51 - * [0]==cache-enabled, [1]==cache-inhibited. 52 - */ 53 - void __iomem *addr_virt[2]; 49 + /* Portal addresses */ 50 + void *addr_virt_ce; 51 + void __iomem *addr_virt_ci; 54 52 /* Allow these to be joined in lists */ 55 53 struct list_head list; 56 54 struct device *dev;
+78
drivers/soc/fsl/qbman/dpaa_sys.c
··· 1 + /* Copyright 2017 NXP Semiconductor, Inc. 2 + * 3 + * Redistribution and use in source and binary forms, with or without 4 + * modification, are permitted provided that the following conditions are met: 5 + * * Redistributions of source code must retain the above copyright 6 + * notice, this list of conditions and the following disclaimer. 7 + * * Redistributions in binary form must reproduce the above copyright 8 + * notice, this list of conditions and the following disclaimer in the 9 + * documentation and/or other materials provided with the distribution. 10 + * * Neither the name of NXP Semiconductor nor the 11 + * names of its contributors may be used to endorse or promote products 12 + * derived from this software without specific prior written permission. 13 + * 14 + * ALTERNATIVELY, this software may be distributed under the terms of the 15 + * GNU General Public License ("GPL") as published by the Free Software 16 + * Foundation, either version 2 of that License or (at your option) any 17 + * later version. 18 + * 19 + * THIS SOFTWARE IS PROVIDED BY NXP Semiconductor ``AS IS'' AND ANY 20 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 21 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 22 + * DISCLAIMED. IN NO EVENT SHALL NXP Semiconductor BE LIABLE FOR ANY 23 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES 24 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 25 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND 26 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 27 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS 28 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
29 + */ 30 + 31 + #include <linux/dma-mapping.h> 32 + #include "dpaa_sys.h" 33 + 34 + /* 35 + * Initialize a devices private memory region 36 + */ 37 + int qbman_init_private_mem(struct device *dev, int idx, dma_addr_t *addr, 38 + size_t *size) 39 + { 40 + int ret; 41 + struct device_node *mem_node; 42 + u64 size64; 43 + 44 + ret = of_reserved_mem_device_init_by_idx(dev, dev->of_node, idx); 45 + if (ret) { 46 + dev_err(dev, 47 + "of_reserved_mem_device_init_by_idx(%d) failed 0x%x\n", 48 + idx, ret); 49 + return -ENODEV; 50 + } 51 + mem_node = of_parse_phandle(dev->of_node, "memory-region", 0); 52 + if (mem_node) { 53 + ret = of_property_read_u64(mem_node, "size", &size64); 54 + if (ret) { 55 + dev_err(dev, "of_address_to_resource fails 0x%x\n", 56 + ret); 57 + return -ENODEV; 58 + } 59 + *size = size64; 60 + } else { 61 + dev_err(dev, "No memory-region found for index %d\n", idx); 62 + return -ENODEV; 63 + } 64 + 65 + if (!dma_zalloc_coherent(dev, *size, addr, 0)) { 66 + dev_err(dev, "DMA Alloc memory failed\n"); 67 + return -ENODEV; 68 + } 69 + 70 + /* 71 + * Disassociate the reserved memory area from the device 72 + * because a device can only have one DMA memory area. This 73 + * should be fine since the memory is allocated and initialized 74 + * and only ever accessed by the QBMan device from now on 75 + */ 76 + of_reserved_mem_device_release(dev); 77 + return 0; 78 + }
+17 -8
drivers/soc/fsl/qbman/dpaa_sys.h
··· 44 44 #include <linux/prefetch.h> 45 45 #include <linux/genalloc.h> 46 46 #include <asm/cacheflush.h> 47 + #include <linux/io.h> 48 + #include <linux/delay.h> 47 49 48 50 /* For 2-element tables related to cache-inhibited and cache-enabled mappings */ 49 51 #define DPAA_PORTAL_CE 0 50 52 #define DPAA_PORTAL_CI 1 51 53 52 - #if (L1_CACHE_BYTES != 32) && (L1_CACHE_BYTES != 64) 53 - #error "Unsupported Cacheline Size" 54 - #endif 55 - 56 54 static inline void dpaa_flush(void *p) 57 55 { 56 + /* 57 + * Only PPC needs to flush the cache currently - on ARM the mapping 58 + * is non cacheable 59 + */ 58 60 #ifdef CONFIG_PPC 59 61 flush_dcache_range((unsigned long)p, (unsigned long)p+64); 60 - #elif defined(CONFIG_ARM32) 61 - __cpuc_flush_dcache_area(p, 64); 62 - #elif defined(CONFIG_ARM64) 63 - __flush_dcache_area(p, 64); 64 62 #endif 65 63 } 66 64 ··· 99 101 100 102 /* Offset applied to genalloc pools due to zero being an error return */ 101 103 #define DPAA_GENALLOC_OFF 0x80000000 104 + 105 + /* Initialize the devices private memory region */ 106 + int qbman_init_private_mem(struct device *dev, int idx, dma_addr_t *addr, 107 + size_t *size); 108 + 109 + /* memremap() attributes for different platforms */ 110 + #ifdef CONFIG_PPC 111 + #define QBMAN_MEMREMAP_ATTR MEMREMAP_WB 112 + #else 113 + #define QBMAN_MEMREMAP_ATTR MEMREMAP_WC 114 + #endif 102 115 103 116 #endif /* __DPAA_SYS_H */
+56 -27
drivers/soc/fsl/qbman/qman.c
··· 41 41 42 42 /* Portal register assists */ 43 43 44 + #if defined(CONFIG_ARM) || defined(CONFIG_ARM64) 45 + /* Cache-inhibited register offsets */ 46 + #define QM_REG_EQCR_PI_CINH 0x3000 47 + #define QM_REG_EQCR_CI_CINH 0x3040 48 + #define QM_REG_EQCR_ITR 0x3080 49 + #define QM_REG_DQRR_PI_CINH 0x3100 50 + #define QM_REG_DQRR_CI_CINH 0x3140 51 + #define QM_REG_DQRR_ITR 0x3180 52 + #define QM_REG_DQRR_DCAP 0x31C0 53 + #define QM_REG_DQRR_SDQCR 0x3200 54 + #define QM_REG_DQRR_VDQCR 0x3240 55 + #define QM_REG_DQRR_PDQCR 0x3280 56 + #define QM_REG_MR_PI_CINH 0x3300 57 + #define QM_REG_MR_CI_CINH 0x3340 58 + #define QM_REG_MR_ITR 0x3380 59 + #define QM_REG_CFG 0x3500 60 + #define QM_REG_ISR 0x3600 61 + #define QM_REG_IER 0x3640 62 + #define QM_REG_ISDR 0x3680 63 + #define QM_REG_IIR 0x36C0 64 + #define QM_REG_ITPR 0x3740 65 + 66 + /* Cache-enabled register offsets */ 67 + #define QM_CL_EQCR 0x0000 68 + #define QM_CL_DQRR 0x1000 69 + #define QM_CL_MR 0x2000 70 + #define QM_CL_EQCR_PI_CENA 0x3000 71 + #define QM_CL_EQCR_CI_CENA 0x3040 72 + #define QM_CL_DQRR_PI_CENA 0x3100 73 + #define QM_CL_DQRR_CI_CENA 0x3140 74 + #define QM_CL_MR_PI_CENA 0x3300 75 + #define QM_CL_MR_CI_CENA 0x3340 76 + #define QM_CL_CR 0x3800 77 + #define QM_CL_RR0 0x3900 78 + #define QM_CL_RR1 0x3940 79 + 80 + #else 44 81 /* Cache-inhibited register offsets */ 45 82 #define QM_REG_EQCR_PI_CINH 0x0000 46 83 #define QM_REG_EQCR_CI_CINH 0x0004 ··· 112 75 #define QM_CL_CR 0x3800 113 76 #define QM_CL_RR0 0x3900 114 77 #define QM_CL_RR1 0x3940 78 + #endif 115 79 116 80 /* 117 81 * BTW, the drivers (and h/w programming model) already obtain the required ··· 338 300 }; 339 301 340 302 struct qm_addr { 341 - void __iomem *ce; /* cache-enabled */ 303 + void *ce; /* cache-enabled */ 304 + __be32 *ce_be; /* same value as above but for direct access */ 342 305 void __iomem *ci; /* cache-inhibited */ 343 306 }; 344 307 ··· 360 321 /* Cache-inhibited register access. 
*/ 361 322 static inline u32 qm_in(struct qm_portal *p, u32 offset) 362 323 { 363 - return be32_to_cpu(__raw_readl(p->addr.ci + offset)); 324 + return ioread32be(p->addr.ci + offset); 364 325 } 365 326 366 327 static inline void qm_out(struct qm_portal *p, u32 offset, u32 val) 367 328 { 368 - __raw_writel(cpu_to_be32(val), p->addr.ci + offset); 329 + iowrite32be(val, p->addr.ci + offset); 369 330 } 370 331 371 332 /* Cache Enabled Portal Access */ ··· 381 342 382 343 static inline u32 qm_ce_in(struct qm_portal *p, u32 offset) 383 344 { 384 - return be32_to_cpu(__raw_readl(p->addr.ce + offset)); 345 + return be32_to_cpu(*(p->addr.ce_be + (offset/4))); 385 346 } 386 347 387 348 /* --- EQCR API --- */ ··· 685 646 */ 686 647 dpaa_invalidate_touch_ro(res); 687 648 #endif 688 - /* 689 - * when accessing 'verb', use __raw_readb() to ensure that compiler 690 - * inlining doesn't try to optimise out "excess reads". 691 - */ 692 - if ((__raw_readb(&res->verb) & QM_DQRR_VERB_VBIT) == dqrr->vbit) { 649 + if ((res->verb & QM_DQRR_VERB_VBIT) == dqrr->vbit) { 693 650 dqrr->pi = (dqrr->pi + 1) & (QM_DQRR_SIZE - 1); 694 651 if (!dqrr->pi) 695 652 dqrr->vbit ^= QM_DQRR_VERB_VBIT; ··· 812 777 union qm_mr_entry *res = qm_cl(mr->ring, mr->pi); 813 778 814 779 DPAA_ASSERT(mr->pmode == qm_mr_pvb); 815 - /* 816 - * when accessing 'verb', use __raw_readb() to ensure that compiler 817 - * inlining doesn't try to optimise out "excess reads". 818 - */ 819 - if ((__raw_readb(&res->verb) & QM_MR_VERB_VBIT) == mr->vbit) { 780 + 781 + if ((res->verb & QM_MR_VERB_VBIT) == mr->vbit) { 820 782 mr->pi = (mr->pi + 1) & (QM_MR_SIZE - 1); 821 783 if (!mr->pi) 822 784 mr->vbit ^= QM_MR_VERB_VBIT; ··· 854 822 855 823 mc->cr = portal->addr.ce + QM_CL_CR; 856 824 mc->rr = portal->addr.ce + QM_CL_RR0; 857 - mc->rridx = (__raw_readb(&mc->cr->_ncw_verb) & QM_MCC_VERB_VBIT) 825 + mc->rridx = (mc->cr->_ncw_verb & QM_MCC_VERB_VBIT) 858 826 ? 0 : 1; 859 827 mc->vbit = mc->rridx ? 
QM_MCC_VERB_VBIT : 0; 860 828 #ifdef CONFIG_FSL_DPAA_CHECKING ··· 912 880 * its command is submitted and completed. This includes the valid-bit, 913 881 * in case you were wondering... 914 882 */ 915 - if (!__raw_readb(&rr->verb)) { 883 + if (!rr->verb) { 916 884 dpaa_invalidate_touch_ro(rr); 917 885 return NULL; 918 886 } ··· 941 909 942 910 static inline void fq_set(struct qman_fq *fq, u32 mask) 943 911 { 944 - set_bits(mask, &fq->flags); 912 + fq->flags |= mask; 945 913 } 946 914 947 915 static inline void fq_clear(struct qman_fq *fq, u32 mask) 948 916 { 949 - clear_bits(mask, &fq->flags); 917 + fq->flags &= ~mask; 950 918 } 951 919 952 920 static inline int fq_isset(struct qman_fq *fq, u32 mask) ··· 1116 1084 * entries well before the ring has been fully consumed, so 1117 1085 * we're being *really* paranoid here. 1118 1086 */ 1119 - u64 now, then = jiffies; 1120 - 1121 - do { 1122 - now = jiffies; 1123 - } while ((then + 10000) > now); 1087 + msleep(1); 1124 1088 msg = qm_mr_current(p); 1125 1089 if (!msg) 1126 1090 return 0; ··· 1152 1124 * config, everything that follows depends on it and "config" is more 1153 1125 * for (de)reference 1154 1126 */ 1155 - p->addr.ce = c->addr_virt[DPAA_PORTAL_CE]; 1156 - p->addr.ci = c->addr_virt[DPAA_PORTAL_CI]; 1127 + p->addr.ce = c->addr_virt_ce; 1128 + p->addr.ce_be = c->addr_virt_ce; 1129 + p->addr.ci = c->addr_virt_ci; 1157 1130 /* 1158 1131 * If CI-stashing is used, the current defaults use a threshold of 3, 1159 1132 * and stash with high-than-DQRR priority. 
··· 1595 1566 unsigned long irqflags; 1596 1567 1597 1568 local_irq_save(irqflags); 1598 - set_bits(bits & QM_PIRQ_VISIBLE, &p->irq_sources); 1569 + p->irq_sources |= bits & QM_PIRQ_VISIBLE; 1599 1570 qm_out(&p->p, QM_REG_IER, p->irq_sources); 1600 1571 local_irq_restore(irqflags); 1601 1572 } ··· 1618 1589 */ 1619 1590 local_irq_save(irqflags); 1620 1591 bits &= QM_PIRQ_VISIBLE; 1621 - clear_bits(bits, &p->irq_sources); 1592 + p->irq_sources &= ~bits; 1622 1593 qm_out(&p->p, QM_REG_IER, p->irq_sources); 1623 1594 ier = qm_in(&p->p, QM_REG_IER); 1624 1595 /*
+65 -30
drivers/soc/fsl/qbman/qman_ccsr.c
··· 401 401 } 402 402 403 403 /* 404 - * Ideally we would use the DMA API to turn rmem->base into a DMA address 405 - * (especially if iommu translations ever get involved). Unfortunately, the 406 - * DMA API currently does not allow mapping anything that is not backed with 407 - * a struct page. 404 + * QMan needs two global memory areas initialized at boot time: 405 + * 1) FQD: Frame Queue Descriptors used to manage frame queues 406 + * 2) PFDR: Packed Frame Queue Descriptor Records used to store frames 407 + * Both areas are reserved using the device tree reserved memory framework 408 + * and the addresses and sizes are initialized when the QMan device is probed 408 409 */ 409 410 static dma_addr_t fqd_a, pfdr_a; 410 411 static size_t fqd_sz, pfdr_sz; 412 + 413 + #ifdef CONFIG_PPC 414 + /* 415 + * Support for PPC Device Tree backward compatibility when compatible 416 + * string is set to fsl-qman-fqd and fsl-qman-pfdr 417 + */ 418 + static int zero_priv_mem(phys_addr_t addr, size_t sz) 419 + { 420 + /* map as cacheable, non-guarded */ 421 + void __iomem *tmpp = ioremap_prot(addr, sz, 0); 422 + 423 + if (!tmpp) 424 + return -ENOMEM; 425 + 426 + memset_io(tmpp, 0, sz); 427 + flush_dcache_range((unsigned long)tmpp, 428 + (unsigned long)tmpp + sz); 429 + iounmap(tmpp); 430 + 431 + return 0; 432 + } 411 433 412 434 static int qman_fqd(struct reserved_mem *rmem) 413 435 { ··· 437 415 fqd_sz = rmem->size; 438 416 439 417 WARN_ON(!(fqd_a && fqd_sz)); 440 - 441 418 return 0; 442 419 } 443 420 RESERVEDMEM_OF_DECLARE(qman_fqd, "fsl,qman-fqd", qman_fqd); ··· 452 431 } 453 432 RESERVEDMEM_OF_DECLARE(qman_pfdr, "fsl,qman-pfdr", qman_pfdr); 454 433 434 + #endif 435 + 455 436 static unsigned int qm_get_fqid_maxcnt(void) 456 437 { 457 438 return fqd_sz / 64; 458 - } 459 - 460 - /* 461 - * Flush this memory range from data cache so that QMAN originated 462 - * transactions for this memory region could be marked non-coherent. 
463 - */ 464 - static int zero_priv_mem(struct device *dev, struct device_node *node, 465 - phys_addr_t addr, size_t sz) 466 - { 467 - /* map as cacheable, non-guarded */ 468 - void __iomem *tmpp = ioremap_prot(addr, sz, 0); 469 - 470 - if (!tmpp) 471 - return -ENOMEM; 472 - 473 - memset_io(tmpp, 0, sz); 474 - flush_dcache_range((unsigned long)tmpp, 475 - (unsigned long)tmpp + sz); 476 - iounmap(tmpp); 477 - 478 - return 0; 479 439 } 480 440 481 441 static void log_edata_bits(struct device *dev, u32 bit_count) ··· 719 717 qman_ip_rev = QMAN_REV30; 720 718 else if (major == 3 && minor == 1) 721 719 qman_ip_rev = QMAN_REV31; 720 + else if (major == 3 && minor == 2) 721 + qman_ip_rev = QMAN_REV32; 722 722 else { 723 723 dev_err(dev, "Unknown QMan version\n"); 724 724 return -ENODEV; ··· 731 727 qm_channel_caam = QMAN_CHANNEL_CAAM_REV3; 732 728 } 733 729 734 - ret = zero_priv_mem(dev, node, fqd_a, fqd_sz); 735 - WARN_ON(ret); 736 - if (ret) 737 - return -ENODEV; 730 + if (fqd_a) { 731 + #ifdef CONFIG_PPC 732 + /* 733 + * For PPC backward DT compatibility 734 + * FQD memory MUST be zero'd by software 735 + */ 736 + zero_priv_mem(fqd_a, fqd_sz); 737 + #else 738 + WARN(1, "Unexpected architecture using non shared-dma-mem reservations"); 739 + #endif 740 + } else { 741 + /* 742 + * Order of memory regions is assumed as FQD followed by PFDR 743 + * in order to ensure allocations from the correct regions the 744 + * driver initializes then allocates each piece in order 745 + */ 746 + ret = qbman_init_private_mem(dev, 0, &fqd_a, &fqd_sz); 747 + if (ret) { 748 + dev_err(dev, "qbman_init_private_mem() for FQD failed 0x%x\n", 749 + ret); 750 + return -ENODEV; 751 + } 752 + } 753 + dev_dbg(dev, "Allocated FQD 0x%llx 0x%zx\n", fqd_a, fqd_sz); 754 + 755 + if (!pfdr_a) { 756 + /* Setup PFDR memory */ 757 + ret = qbman_init_private_mem(dev, 1, &pfdr_a, &pfdr_sz); 758 + if (ret) { 759 + dev_err(dev, "qbman_init_private_mem() for PFDR failed 0x%x\n", 760 + ret); 761 + return -ENODEV; 
762 + } 763 + } 764 + dev_dbg(dev, "Allocated PFDR 0x%llx 0x%zx\n", pfdr_a, pfdr_sz); 738 765 739 766 ret = qman_init_ccsr(dev); 740 767 if (ret) {
+10 -13
drivers/soc/fsl/qbman/qman_portal.c
··· 224 224 struct device_node *node = dev->of_node; 225 225 struct qm_portal_config *pcfg; 226 226 struct resource *addr_phys[2]; 227 - void __iomem *va; 228 227 int irq, cpu, err; 229 228 u32 val; 230 229 ··· 261 262 } 262 263 pcfg->irq = irq; 263 264 264 - va = ioremap_prot(addr_phys[0]->start, resource_size(addr_phys[0]), 0); 265 - if (!va) { 266 - dev_err(dev, "ioremap::CE failed\n"); 265 + pcfg->addr_virt_ce = memremap(addr_phys[0]->start, 266 + resource_size(addr_phys[0]), 267 + QBMAN_MEMREMAP_ATTR); 268 + if (!pcfg->addr_virt_ce) { 269 + dev_err(dev, "memremap::CE failed\n"); 267 270 goto err_ioremap1; 268 271 } 269 272 270 - pcfg->addr_virt[DPAA_PORTAL_CE] = va; 271 - 272 - va = ioremap_prot(addr_phys[1]->start, resource_size(addr_phys[1]), 273 - _PAGE_GUARDED | _PAGE_NO_CACHE); 274 - if (!va) { 273 + pcfg->addr_virt_ci = ioremap(addr_phys[1]->start, 274 + resource_size(addr_phys[1])); 275 + if (!pcfg->addr_virt_ci) { 275 276 dev_err(dev, "ioremap::CI failed\n"); 276 277 goto err_ioremap2; 277 278 } 278 - 279 - pcfg->addr_virt[DPAA_PORTAL_CI] = va; 280 279 281 280 pcfg->pools = qm_get_pools_sdqcr(); 282 281 ··· 307 310 return 0; 308 311 309 312 err_portal_init: 310 - iounmap(pcfg->addr_virt[DPAA_PORTAL_CI]); 313 + iounmap(pcfg->addr_virt_ci); 311 314 err_ioremap2: 312 - iounmap(pcfg->addr_virt[DPAA_PORTAL_CE]); 315 + memunmap(pcfg->addr_virt_ce); 313 316 err_ioremap1: 314 317 return -ENXIO; 315 318 }
+4 -7
drivers/soc/fsl/qbman/qman_priv.h
··· 28 28 * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 29 29 */ 30 30 31 - #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 32 - 33 31 #include "dpaa_sys.h" 34 32 35 33 #include <soc/fsl/qman.h> ··· 153 155 void qman_init_cgr_all(void); 154 156 155 157 struct qm_portal_config { 156 - /* 157 - * Corenet portal addresses; 158 - * [0]==cache-enabled, [1]==cache-inhibited. 159 - */ 160 - void __iomem *addr_virt[2]; 158 + /* Portal addresses */ 159 + void *addr_virt_ce; 160 + void __iomem *addr_virt_ci; 161 161 struct device *dev; 162 162 struct iommu_domain *iommu_domain; 163 163 /* Allow these to be joined in lists */ ··· 183 187 #define QMAN_REV20 0x0200 184 188 #define QMAN_REV30 0x0300 185 189 #define QMAN_REV31 0x0301 190 + #define QMAN_REV32 0x0302 186 191 extern u16 qman_ip_rev; /* 0 if uninitialised, otherwise QMAN_REVx */ 187 192 188 193 #define QM_FQID_RANGE_START 1 /* FQID 0 reserved for internal use */
-2
drivers/soc/fsl/qbman/qman_test.h
··· 30 30 31 31 #include "qman_priv.h" 32 32 33 - #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 34 - 35 33 int qman_test_stash(void); 36 34 int qman_test_api(void);
+5 -3
drivers/soc/mediatek/Kconfig
··· 1 1 # 2 2 # MediaTek SoC drivers 3 3 # 4 + menu "MediaTek SoC drivers" 5 + depends on ARCH_MEDIATEK || COMPILE_TEST 6 + 4 7 config MTK_INFRACFG 5 8 bool "MediaTek INFRACFG Support" 6 - depends on ARCH_MEDIATEK || COMPILE_TEST 7 9 select REGMAP 8 10 help 9 11 Say yes here to add support for the MediaTek INFRACFG controller. The ··· 14 12 15 13 config MTK_PMIC_WRAP 16 14 tristate "MediaTek PMIC Wrapper Support" 17 - depends on ARCH_MEDIATEK 18 15 depends on RESET_CONTROLLER 19 16 select REGMAP 20 17 help ··· 23 22 24 23 config MTK_SCPSYS 25 24 bool "MediaTek SCPSYS Support" 26 - depends on ARCH_MEDIATEK || COMPILE_TEST 27 25 default ARCH_MEDIATEK 28 26 select REGMAP 29 27 select MTK_INFRACFG ··· 30 30 help 31 31 Say yes here to add support for the MediaTek SCPSYS power domain 32 32 driver. 33 + 34 + endmenu
+439 -88
drivers/soc/mediatek/mtk-pmic-wrap.c
··· 70 70 PWRAP_WDT_SRC_EN_HARB_STAUPD_DLE | \ 71 71 PWRAP_WDT_SRC_EN_HARB_STAUPD_ALE) 72 72 73 + /* Group of bits used for shown slave capability */ 74 + #define PWRAP_SLV_CAP_SPI BIT(0) 75 + #define PWRAP_SLV_CAP_DUALIO BIT(1) 76 + #define PWRAP_SLV_CAP_SECURITY BIT(2) 77 + #define HAS_CAP(_c, _x) (((_c) & (_x)) == (_x)) 78 + 73 79 /* defines for slave device wrapper registers */ 74 80 enum dew_regs { 75 81 PWRAP_DEW_BASE, ··· 214 208 PWRAP_ADC_RDATA_ADDR1, 215 209 PWRAP_ADC_RDATA_ADDR2, 216 210 211 + /* MT7622 only regs */ 212 + PWRAP_EINT_STA0_ADR, 213 + PWRAP_EINT_STA1_ADR, 214 + PWRAP_STA, 215 + PWRAP_CLR, 216 + PWRAP_DVFS_ADR8, 217 + PWRAP_DVFS_WDATA8, 218 + PWRAP_DVFS_ADR9, 219 + PWRAP_DVFS_WDATA9, 220 + PWRAP_DVFS_ADR10, 221 + PWRAP_DVFS_WDATA10, 222 + PWRAP_DVFS_ADR11, 223 + PWRAP_DVFS_WDATA11, 224 + PWRAP_DVFS_ADR12, 225 + PWRAP_DVFS_WDATA12, 226 + PWRAP_DVFS_ADR13, 227 + PWRAP_DVFS_WDATA13, 228 + PWRAP_DVFS_ADR14, 229 + PWRAP_DVFS_WDATA14, 230 + PWRAP_DVFS_ADR15, 231 + PWRAP_DVFS_WDATA15, 232 + PWRAP_EXT_CK, 233 + PWRAP_ADC_RDATA_ADDR, 234 + PWRAP_GPS_STA, 235 + PWRAP_SW_RST, 236 + PWRAP_DVFS_STEP_CTRL0, 237 + PWRAP_DVFS_STEP_CTRL1, 238 + PWRAP_DVFS_STEP_CTRL2, 239 + PWRAP_SPI2_CTRL, 240 + 217 241 /* MT8135 only regs */ 218 242 PWRAP_CSHEXT, 219 243 PWRAP_EVENT_IN_EN, ··· 364 328 [PWRAP_ADC_RDY_ADDR] = 0x14c, 365 329 [PWRAP_ADC_RDATA_ADDR1] = 0x150, 366 330 [PWRAP_ADC_RDATA_ADDR2] = 0x154, 331 + }; 332 + 333 + static int mt7622_regs[] = { 334 + [PWRAP_MUX_SEL] = 0x0, 335 + [PWRAP_WRAP_EN] = 0x4, 336 + [PWRAP_DIO_EN] = 0x8, 337 + [PWRAP_SIDLY] = 0xC, 338 + [PWRAP_RDDMY] = 0x10, 339 + [PWRAP_SI_CK_CON] = 0x14, 340 + [PWRAP_CSHEXT_WRITE] = 0x18, 341 + [PWRAP_CSHEXT_READ] = 0x1C, 342 + [PWRAP_CSLEXT_START] = 0x20, 343 + [PWRAP_CSLEXT_END] = 0x24, 344 + [PWRAP_STAUPD_PRD] = 0x28, 345 + [PWRAP_STAUPD_GRPEN] = 0x2C, 346 + [PWRAP_EINT_STA0_ADR] = 0x30, 347 + [PWRAP_EINT_STA1_ADR] = 0x34, 348 + [PWRAP_STA] = 0x38, 349 + [PWRAP_CLR] = 0x3C, 350 + 
[PWRAP_STAUPD_MAN_TRIG] = 0x40, 351 + [PWRAP_STAUPD_STA] = 0x44, 352 + [PWRAP_WRAP_STA] = 0x48, 353 + [PWRAP_HARB_INIT] = 0x4C, 354 + [PWRAP_HARB_HPRIO] = 0x50, 355 + [PWRAP_HIPRIO_ARB_EN] = 0x54, 356 + [PWRAP_HARB_STA0] = 0x58, 357 + [PWRAP_HARB_STA1] = 0x5C, 358 + [PWRAP_MAN_EN] = 0x60, 359 + [PWRAP_MAN_CMD] = 0x64, 360 + [PWRAP_MAN_RDATA] = 0x68, 361 + [PWRAP_MAN_VLDCLR] = 0x6C, 362 + [PWRAP_WACS0_EN] = 0x70, 363 + [PWRAP_INIT_DONE0] = 0x74, 364 + [PWRAP_WACS0_CMD] = 0x78, 365 + [PWRAP_WACS0_RDATA] = 0x7C, 366 + [PWRAP_WACS0_VLDCLR] = 0x80, 367 + [PWRAP_WACS1_EN] = 0x84, 368 + [PWRAP_INIT_DONE1] = 0x88, 369 + [PWRAP_WACS1_CMD] = 0x8C, 370 + [PWRAP_WACS1_RDATA] = 0x90, 371 + [PWRAP_WACS1_VLDCLR] = 0x94, 372 + [PWRAP_WACS2_EN] = 0x98, 373 + [PWRAP_INIT_DONE2] = 0x9C, 374 + [PWRAP_WACS2_CMD] = 0xA0, 375 + [PWRAP_WACS2_RDATA] = 0xA4, 376 + [PWRAP_WACS2_VLDCLR] = 0xA8, 377 + [PWRAP_INT_EN] = 0xAC, 378 + [PWRAP_INT_FLG_RAW] = 0xB0, 379 + [PWRAP_INT_FLG] = 0xB4, 380 + [PWRAP_INT_CLR] = 0xB8, 381 + [PWRAP_SIG_ADR] = 0xBC, 382 + [PWRAP_SIG_MODE] = 0xC0, 383 + [PWRAP_SIG_VALUE] = 0xC4, 384 + [PWRAP_SIG_ERRVAL] = 0xC8, 385 + [PWRAP_CRC_EN] = 0xCC, 386 + [PWRAP_TIMER_EN] = 0xD0, 387 + [PWRAP_TIMER_STA] = 0xD4, 388 + [PWRAP_WDT_UNIT] = 0xD8, 389 + [PWRAP_WDT_SRC_EN] = 0xDC, 390 + [PWRAP_WDT_FLG] = 0xE0, 391 + [PWRAP_DEBUG_INT_SEL] = 0xE4, 392 + [PWRAP_DVFS_ADR0] = 0xE8, 393 + [PWRAP_DVFS_WDATA0] = 0xEC, 394 + [PWRAP_DVFS_ADR1] = 0xF0, 395 + [PWRAP_DVFS_WDATA1] = 0xF4, 396 + [PWRAP_DVFS_ADR2] = 0xF8, 397 + [PWRAP_DVFS_WDATA2] = 0xFC, 398 + [PWRAP_DVFS_ADR3] = 0x100, 399 + [PWRAP_DVFS_WDATA3] = 0x104, 400 + [PWRAP_DVFS_ADR4] = 0x108, 401 + [PWRAP_DVFS_WDATA4] = 0x10C, 402 + [PWRAP_DVFS_ADR5] = 0x110, 403 + [PWRAP_DVFS_WDATA5] = 0x114, 404 + [PWRAP_DVFS_ADR6] = 0x118, 405 + [PWRAP_DVFS_WDATA6] = 0x11C, 406 + [PWRAP_DVFS_ADR7] = 0x120, 407 + [PWRAP_DVFS_WDATA7] = 0x124, 408 + [PWRAP_DVFS_ADR8] = 0x128, 409 + [PWRAP_DVFS_WDATA8] = 0x12C, 410 + [PWRAP_DVFS_ADR9] = 0x130, 411 + 
[PWRAP_DVFS_WDATA9] = 0x134, 412 + [PWRAP_DVFS_ADR10] = 0x138, 413 + [PWRAP_DVFS_WDATA10] = 0x13C, 414 + [PWRAP_DVFS_ADR11] = 0x140, 415 + [PWRAP_DVFS_WDATA11] = 0x144, 416 + [PWRAP_DVFS_ADR12] = 0x148, 417 + [PWRAP_DVFS_WDATA12] = 0x14C, 418 + [PWRAP_DVFS_ADR13] = 0x150, 419 + [PWRAP_DVFS_WDATA13] = 0x154, 420 + [PWRAP_DVFS_ADR14] = 0x158, 421 + [PWRAP_DVFS_WDATA14] = 0x15C, 422 + [PWRAP_DVFS_ADR15] = 0x160, 423 + [PWRAP_DVFS_WDATA15] = 0x164, 424 + [PWRAP_SPMINF_STA] = 0x168, 425 + [PWRAP_CIPHER_KEY_SEL] = 0x16C, 426 + [PWRAP_CIPHER_IV_SEL] = 0x170, 427 + [PWRAP_CIPHER_EN] = 0x174, 428 + [PWRAP_CIPHER_RDY] = 0x178, 429 + [PWRAP_CIPHER_MODE] = 0x17C, 430 + [PWRAP_CIPHER_SWRST] = 0x180, 431 + [PWRAP_DCM_EN] = 0x184, 432 + [PWRAP_DCM_DBC_PRD] = 0x188, 433 + [PWRAP_EXT_CK] = 0x18C, 434 + [PWRAP_ADC_CMD_ADDR] = 0x190, 435 + [PWRAP_PWRAP_ADC_CMD] = 0x194, 436 + [PWRAP_ADC_RDATA_ADDR] = 0x198, 437 + [PWRAP_GPS_STA] = 0x19C, 438 + [PWRAP_SW_RST] = 0x1A0, 439 + [PWRAP_DVFS_STEP_CTRL0] = 0x238, 440 + [PWRAP_DVFS_STEP_CTRL1] = 0x23C, 441 + [PWRAP_DVFS_STEP_CTRL2] = 0x240, 442 + [PWRAP_SPI2_CTRL] = 0x244, 367 443 }; 368 444 369 445 static int mt8173_regs[] = { ··· 635 487 636 488 enum pmic_type { 637 489 PMIC_MT6323, 490 + PMIC_MT6380, 638 491 PMIC_MT6397, 639 492 }; 640 493 641 494 enum pwrap_type { 642 495 PWRAP_MT2701, 496 + PWRAP_MT7622, 643 497 PWRAP_MT8135, 644 498 PWRAP_MT8173, 645 499 }; 646 500 501 + struct pmic_wrapper; 647 502 struct pwrap_slv_type { 648 503 const u32 *dew_regs; 649 504 enum pmic_type type; 505 + const struct regmap_config *regmap; 506 + /* Flags indicating the capability for the target slave */ 507 + u32 caps; 508 + /* 509 + * pwrap operations are highly associated with the PMIC types, 510 + * so the pointers added increases flexibility allowing determination 511 + * which type is used by the detection through device tree. 
512 + */ 513 + int (*pwrap_read)(struct pmic_wrapper *wrp, u32 adr, u32 *rdata); 514 + int (*pwrap_write)(struct pmic_wrapper *wrp, u32 adr, u32 wdata); 650 515 }; 651 516 652 517 struct pmic_wrapper { ··· 683 522 u32 int_en_all; 684 523 u32 spi_w; 685 524 u32 wdt_src; 686 - int has_bridge:1; 525 + unsigned int has_bridge:1; 687 526 int (*init_reg_clock)(struct pmic_wrapper *wrp); 688 527 int (*init_soc_specific)(struct pmic_wrapper *wrp); 689 528 }; ··· 754 593 } while (1); 755 594 } 756 595 757 - static int pwrap_write(struct pmic_wrapper *wrp, u32 adr, u32 wdata) 758 - { 759 - int ret; 760 - 761 - ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_idle); 762 - if (ret) { 763 - pwrap_leave_fsm_vldclr(wrp); 764 - return ret; 765 - } 766 - 767 - pwrap_writel(wrp, (1 << 31) | ((adr >> 1) << 16) | wdata, 768 - PWRAP_WACS2_CMD); 769 - 770 - return 0; 771 - } 772 - 773 - static int pwrap_read(struct pmic_wrapper *wrp, u32 adr, u32 *rdata) 596 + static int pwrap_read16(struct pmic_wrapper *wrp, u32 adr, u32 *rdata) 774 597 { 775 598 int ret; 776 599 ··· 775 630 pwrap_writel(wrp, 1, PWRAP_WACS2_VLDCLR); 776 631 777 632 return 0; 633 + } 634 + 635 + static int pwrap_read32(struct pmic_wrapper *wrp, u32 adr, u32 *rdata) 636 + { 637 + int ret, msb; 638 + 639 + *rdata = 0; 640 + for (msb = 0; msb < 2; msb++) { 641 + ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_idle); 642 + if (ret) { 643 + pwrap_leave_fsm_vldclr(wrp); 644 + return ret; 645 + } 646 + 647 + pwrap_writel(wrp, ((msb << 30) | (adr << 16)), 648 + PWRAP_WACS2_CMD); 649 + 650 + ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_vldclr); 651 + if (ret) 652 + return ret; 653 + 654 + *rdata += (PWRAP_GET_WACS_RDATA(pwrap_readl(wrp, 655 + PWRAP_WACS2_RDATA)) << (16 * msb)); 656 + 657 + pwrap_writel(wrp, 1, PWRAP_WACS2_VLDCLR); 658 + } 659 + 660 + return 0; 661 + } 662 + 663 + static int pwrap_read(struct pmic_wrapper *wrp, u32 adr, u32 *rdata) 664 + { 665 + return wrp->slave->pwrap_read(wrp, adr, rdata); 666 + } 667 + 668 + static 
int pwrap_write16(struct pmic_wrapper *wrp, u32 adr, u32 wdata) 669 + { 670 + int ret; 671 + 672 + ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_idle); 673 + if (ret) { 674 + pwrap_leave_fsm_vldclr(wrp); 675 + return ret; 676 + } 677 + 678 + pwrap_writel(wrp, (1 << 31) | ((adr >> 1) << 16) | wdata, 679 + PWRAP_WACS2_CMD); 680 + 681 + return 0; 682 + } 683 + 684 + static int pwrap_write32(struct pmic_wrapper *wrp, u32 adr, u32 wdata) 685 + { 686 + int ret, msb, rdata; 687 + 688 + for (msb = 0; msb < 2; msb++) { 689 + ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_idle); 690 + if (ret) { 691 + pwrap_leave_fsm_vldclr(wrp); 692 + return ret; 693 + } 694 + 695 + pwrap_writel(wrp, (1 << 31) | (msb << 30) | (adr << 16) | 696 + ((wdata >> (msb * 16)) & 0xffff), 697 + PWRAP_WACS2_CMD); 698 + 699 + /* 700 + * The pwrap_read operation is the requirement of hardware used 701 + * for the synchronization between two successive 16-bit 702 + * pwrap_writel operations composing one 32-bit bus writing. 703 + * Otherwise, we'll find the result fails on the lower 16-bit 704 + * pwrap writing. 
705 + */ 706 + if (!msb) 707 + pwrap_read(wrp, adr, &rdata); 708 + } 709 + 710 + return 0; 711 + } 712 + 713 + static int pwrap_write(struct pmic_wrapper *wrp, u32 adr, u32 wdata) 714 + { 715 + return wrp->slave->pwrap_write(wrp, adr, wdata); 778 716 } 779 717 780 718 static int pwrap_regmap_read(void *context, u32 adr, u32 *rdata) ··· 939 711 return 0; 940 712 } 941 713 942 - static int pwrap_mt8135_init_reg_clock(struct pmic_wrapper *wrp) 714 + static int pwrap_init_dual_io(struct pmic_wrapper *wrp) 943 715 { 944 - pwrap_writel(wrp, 0x4, PWRAP_CSHEXT); 945 - pwrap_writel(wrp, 0x0, PWRAP_CSHEXT_WRITE); 946 - pwrap_writel(wrp, 0x4, PWRAP_CSHEXT_READ); 947 - pwrap_writel(wrp, 0x0, PWRAP_CSLEXT_START); 948 - pwrap_writel(wrp, 0x0, PWRAP_CSLEXT_END); 716 + int ret; 717 + u32 rdata; 718 + 719 + /* Enable dual IO mode */ 720 + pwrap_write(wrp, wrp->slave->dew_regs[PWRAP_DEW_DIO_EN], 1); 721 + 722 + /* Check IDLE & INIT_DONE in advance */ 723 + ret = pwrap_wait_for_state(wrp, 724 + pwrap_is_fsm_idle_and_sync_idle); 725 + if (ret) { 726 + dev_err(wrp->dev, "%s fail, ret=%d\n", __func__, ret); 727 + return ret; 728 + } 729 + 730 + pwrap_writel(wrp, 1, PWRAP_DIO_EN); 731 + 732 + /* Read Test */ 733 + pwrap_read(wrp, 734 + wrp->slave->dew_regs[PWRAP_DEW_READ_TEST], &rdata); 735 + if (rdata != PWRAP_DEW_READ_TEST_VAL) { 736 + dev_err(wrp->dev, 737 + "Read failed on DIO mode: 0x%04x!=0x%04x\n", 738 + PWRAP_DEW_READ_TEST_VAL, rdata); 739 + return -EFAULT; 740 + } 949 741 950 742 return 0; 951 743 } 952 744 953 - static int pwrap_mt8173_init_reg_clock(struct pmic_wrapper *wrp) 745 + /* 746 + * pwrap_init_chip_select_ext is used to configure CS extension time for each 747 + * phase during data transactions on the pwrap bus. 
748 + */ 749 + static void pwrap_init_chip_select_ext(struct pmic_wrapper *wrp, u8 hext_write, 750 + u8 hext_read, u8 lext_start, 751 + u8 lext_end) 954 752 { 955 - pwrap_writel(wrp, 0x0, PWRAP_CSHEXT_WRITE); 956 - pwrap_writel(wrp, 0x4, PWRAP_CSHEXT_READ); 957 - pwrap_writel(wrp, 0x2, PWRAP_CSLEXT_START); 958 - pwrap_writel(wrp, 0x2, PWRAP_CSLEXT_END); 753 + /* 754 + * After finishing a write and read transaction, extends CS high time 755 + * to be at least xT of BUS CLK as hext_write and hext_read specifies 756 + * respectively. 757 + */ 758 + pwrap_writel(wrp, hext_write, PWRAP_CSHEXT_WRITE); 759 + pwrap_writel(wrp, hext_read, PWRAP_CSHEXT_READ); 760 + 761 + /* 762 + * Extends CS low time after CSL and before CSH command to be at 763 + * least xT of BUS CLK as lext_start and lext_end specifies 764 + * respectively. 765 + */ 766 + pwrap_writel(wrp, lext_start, PWRAP_CSLEXT_START); 767 + pwrap_writel(wrp, lext_end, PWRAP_CSLEXT_END); 768 + } 769 + 770 + static int pwrap_common_init_reg_clock(struct pmic_wrapper *wrp) 771 + { 772 + switch (wrp->master->type) { 773 + case PWRAP_MT8173: 774 + pwrap_init_chip_select_ext(wrp, 0, 4, 2, 2); 775 + break; 776 + case PWRAP_MT8135: 777 + pwrap_writel(wrp, 0x4, PWRAP_CSHEXT); 778 + pwrap_init_chip_select_ext(wrp, 0, 4, 0, 0); 779 + break; 780 + default: 781 + break; 782 + } 959 783 960 784 return 0; 961 785 } ··· 1017 737 switch (wrp->slave->type) { 1018 738 case PMIC_MT6397: 1019 739 pwrap_writel(wrp, 0xc, PWRAP_RDDMY); 1020 - pwrap_writel(wrp, 0x4, PWRAP_CSHEXT_WRITE); 1021 - pwrap_writel(wrp, 0x0, PWRAP_CSHEXT_READ); 1022 - pwrap_writel(wrp, 0x2, PWRAP_CSLEXT_START); 1023 - pwrap_writel(wrp, 0x2, PWRAP_CSLEXT_END); 740 + pwrap_init_chip_select_ext(wrp, 4, 0, 2, 2); 1024 741 break; 1025 742 1026 743 case PMIC_MT6323: 1027 744 pwrap_writel(wrp, 0x8, PWRAP_RDDMY); 1028 745 pwrap_write(wrp, wrp->slave->dew_regs[PWRAP_DEW_RDDMY_NO], 1029 746 0x8); 1030 - pwrap_writel(wrp, 0x5, PWRAP_CSHEXT_WRITE); 1031 - pwrap_writel(wrp, 0x0, 
PWRAP_CSHEXT_READ); 1032 - pwrap_writel(wrp, 0x2, PWRAP_CSLEXT_START); 1033 - pwrap_writel(wrp, 0x2, PWRAP_CSLEXT_END); 747 + pwrap_init_chip_select_ext(wrp, 5, 0, 2, 2); 748 + break; 749 + default: 1034 750 break; 1035 751 } 1036 752 ··· 1070 794 case PWRAP_MT8173: 1071 795 pwrap_writel(wrp, 1, PWRAP_CIPHER_EN); 1072 796 break; 797 + case PWRAP_MT7622: 798 + pwrap_writel(wrp, 0, PWRAP_CIPHER_EN); 799 + break; 1073 800 } 1074 801 1075 802 /* Config cipher mode @PMIC */ ··· 1094 815 pwrap_write(wrp, wrp->slave->dew_regs[PWRAP_DEW_CIPHER_EN], 1095 816 0x1); 1096 817 break; 818 + default: 819 + break; 1097 820 } 1098 821 1099 822 /* wait for cipher data ready@AP */ ··· 1108 827 /* wait for cipher data ready@PMIC */ 1109 828 ret = pwrap_wait_for_state(wrp, pwrap_is_pmic_cipher_ready); 1110 829 if (ret) { 1111 - dev_err(wrp->dev, "timeout waiting for cipher data ready@PMIC\n"); 830 + dev_err(wrp->dev, 831 + "timeout waiting for cipher data ready@PMIC\n"); 1112 832 return ret; 1113 833 } 1114 834 ··· 1132 850 dev_err(wrp->dev, "rdata=0x%04X\n", rdata); 1133 851 return -EFAULT; 1134 852 } 853 + 854 + return 0; 855 + } 856 + 857 + static int pwrap_init_security(struct pmic_wrapper *wrp) 858 + { 859 + int ret; 860 + 861 + /* Enable encryption */ 862 + ret = pwrap_init_cipher(wrp); 863 + if (ret) 864 + return ret; 865 + 866 + /* Signature checking - using CRC */ 867 + if (pwrap_write(wrp, 868 + wrp->slave->dew_regs[PWRAP_DEW_CRC_EN], 0x1)) 869 + return -EFAULT; 870 + 871 + pwrap_writel(wrp, 0x1, PWRAP_CRC_EN); 872 + pwrap_writel(wrp, 0x0, PWRAP_SIG_MODE); 873 + pwrap_writel(wrp, wrp->slave->dew_regs[PWRAP_DEW_CRC_VAL], 874 + PWRAP_SIG_ADR); 875 + pwrap_writel(wrp, 876 + wrp->master->arb_en_all, PWRAP_HIPRIO_ARB_EN); 1135 877 1136 878 return 0; 1137 879 } ··· 1217 911 return 0; 1218 912 } 1219 913 914 + static int pwrap_mt7622_init_soc_specific(struct pmic_wrapper *wrp) 915 + { 916 + pwrap_writel(wrp, 0, PWRAP_STAUPD_PRD); 917 + /* enable 2wire SPI master */ 918 + 
pwrap_writel(wrp, 0x8000000, PWRAP_SPI2_CTRL); 919 + 920 + return 0; 921 + } 922 + 1220 923 static int pwrap_init(struct pmic_wrapper *wrp) 1221 924 { 1222 925 int ret; 1223 - u32 rdata; 1224 926 1225 927 reset_control_reset(wrp->rstc); 1226 928 if (wrp->rstc_bridge) ··· 1240 926 pwrap_writel(wrp, 0, PWRAP_DCM_DBC_PRD); 1241 927 } 1242 928 1243 - /* Reset SPI slave */ 1244 - ret = pwrap_reset_spislave(wrp); 1245 - if (ret) 1246 - return ret; 929 + if (HAS_CAP(wrp->slave->caps, PWRAP_SLV_CAP_SPI)) { 930 + /* Reset SPI slave */ 931 + ret = pwrap_reset_spislave(wrp); 932 + if (ret) 933 + return ret; 934 + } 1247 935 1248 936 pwrap_writel(wrp, 1, PWRAP_WRAP_EN); 1249 937 ··· 1257 941 if (ret) 1258 942 return ret; 1259 943 1260 - /* Setup serial input delay */ 1261 - ret = pwrap_init_sidly(wrp); 1262 - if (ret) 1263 - return ret; 1264 - 1265 - /* Enable dual IO mode */ 1266 - pwrap_write(wrp, wrp->slave->dew_regs[PWRAP_DEW_DIO_EN], 1); 1267 - 1268 - /* Check IDLE & INIT_DONE in advance */ 1269 - ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_idle_and_sync_idle); 1270 - if (ret) { 1271 - dev_err(wrp->dev, "%s fail, ret=%d\n", __func__, ret); 1272 - return ret; 944 + if (HAS_CAP(wrp->slave->caps, PWRAP_SLV_CAP_SPI)) { 945 + /* Setup serial input delay */ 946 + ret = pwrap_init_sidly(wrp); 947 + if (ret) 948 + return ret; 1273 949 } 1274 950 1275 - pwrap_writel(wrp, 1, PWRAP_DIO_EN); 1276 - 1277 - /* Read Test */ 1278 - pwrap_read(wrp, wrp->slave->dew_regs[PWRAP_DEW_READ_TEST], &rdata); 1279 - if (rdata != PWRAP_DEW_READ_TEST_VAL) { 1280 - dev_err(wrp->dev, "Read test failed after switch to DIO mode: 0x%04x != 0x%04x\n", 1281 - PWRAP_DEW_READ_TEST_VAL, rdata); 1282 - return -EFAULT; 951 + if (HAS_CAP(wrp->slave->caps, PWRAP_SLV_CAP_DUALIO)) { 952 + /* Enable dual I/O mode */ 953 + ret = pwrap_init_dual_io(wrp); 954 + if (ret) 955 + return ret; 1283 956 } 1284 957 1285 - /* Enable encryption */ 1286 - ret = pwrap_init_cipher(wrp); 1287 - if (ret) 1288 - return ret; 1289 - 
1290 - /* Signature checking - using CRC */ 1291 - if (pwrap_write(wrp, wrp->slave->dew_regs[PWRAP_DEW_CRC_EN], 0x1)) 1292 - return -EFAULT; 1293 - 1294 - pwrap_writel(wrp, 0x1, PWRAP_CRC_EN); 1295 - pwrap_writel(wrp, 0x0, PWRAP_SIG_MODE); 1296 - pwrap_writel(wrp, wrp->slave->dew_regs[PWRAP_DEW_CRC_VAL], 1297 - PWRAP_SIG_ADR); 1298 - pwrap_writel(wrp, wrp->master->arb_en_all, PWRAP_HIPRIO_ARB_EN); 958 + if (HAS_CAP(wrp->slave->caps, PWRAP_SLV_CAP_SECURITY)) { 959 + /* Enable security on bus */ 960 + ret = pwrap_init_security(wrp); 961 + if (ret) 962 + return ret; 963 + } 1299 964 1300 965 if (wrp->master->type == PWRAP_MT8135) 1301 966 pwrap_writel(wrp, 0x7, PWRAP_RRARB_EN); ··· 1320 1023 return IRQ_HANDLED; 1321 1024 } 1322 1025 1323 - static const struct regmap_config pwrap_regmap_config = { 1026 + static const struct regmap_config pwrap_regmap_config16 = { 1324 1027 .reg_bits = 16, 1325 1028 .val_bits = 16, 1326 1029 .reg_stride = 2, ··· 1329 1032 .max_register = 0xffff, 1330 1033 }; 1331 1034 1035 + static const struct regmap_config pwrap_regmap_config32 = { 1036 + .reg_bits = 32, 1037 + .val_bits = 32, 1038 + .reg_stride = 4, 1039 + .reg_read = pwrap_regmap_read, 1040 + .reg_write = pwrap_regmap_write, 1041 + .max_register = 0xffff, 1042 + }; 1043 + 1332 1044 static const struct pwrap_slv_type pmic_mt6323 = { 1333 1045 .dew_regs = mt6323_regs, 1334 1046 .type = PMIC_MT6323, 1047 + .regmap = &pwrap_regmap_config16, 1048 + .caps = PWRAP_SLV_CAP_SPI | PWRAP_SLV_CAP_DUALIO | 1049 + PWRAP_SLV_CAP_SECURITY, 1050 + .pwrap_read = pwrap_read16, 1051 + .pwrap_write = pwrap_write16, 1052 + }; 1053 + 1054 + static const struct pwrap_slv_type pmic_mt6380 = { 1055 + .dew_regs = NULL, 1056 + .type = PMIC_MT6380, 1057 + .regmap = &pwrap_regmap_config32, 1058 + .caps = 0, 1059 + .pwrap_read = pwrap_read32, 1060 + .pwrap_write = pwrap_write32, 1335 1061 }; 1336 1062 1337 1063 static const struct pwrap_slv_type pmic_mt6397 = { 1338 1064 .dew_regs = mt6397_regs, 1339 1065 .type = 
PMIC_MT6397, 1066 + .regmap = &pwrap_regmap_config16, 1067 + .caps = PWRAP_SLV_CAP_SPI | PWRAP_SLV_CAP_DUALIO | 1068 + PWRAP_SLV_CAP_SECURITY, 1069 + .pwrap_read = pwrap_read16, 1070 + .pwrap_write = pwrap_write16, 1340 1071 }; 1341 1072 1342 1073 static const struct of_device_id of_slave_match_tbl[] = { 1343 1074 { 1344 1075 .compatible = "mediatek,mt6323", 1345 1076 .data = &pmic_mt6323, 1077 + }, { 1078 + /* The MT6380 PMIC only implements a regulator, so we bind it 1079 + * directly instead of using a MFD. 1080 + */ 1081 + .compatible = "mediatek,mt6380-regulator", 1082 + .data = &pmic_mt6380, 1346 1083 }, { 1347 1084 .compatible = "mediatek,mt6397", 1348 1085 .data = &pmic_mt6397, ··· 1398 1067 .init_soc_specific = pwrap_mt2701_init_soc_specific, 1399 1068 }; 1400 1069 1070 + static const struct pmic_wrapper_type pwrap_mt7622 = { 1071 + .regs = mt7622_regs, 1072 + .type = PWRAP_MT7622, 1073 + .arb_en_all = 0xff, 1074 + .int_en_all = ~(u32)BIT(31), 1075 + .spi_w = PWRAP_MAN_CMD_SPI_WRITE, 1076 + .wdt_src = PWRAP_WDT_SRC_MASK_ALL, 1077 + .has_bridge = 0, 1078 + .init_reg_clock = pwrap_common_init_reg_clock, 1079 + .init_soc_specific = pwrap_mt7622_init_soc_specific, 1080 + }; 1081 + 1401 1082 static const struct pmic_wrapper_type pwrap_mt8135 = { 1402 1083 .regs = mt8135_regs, 1403 1084 .type = PWRAP_MT8135, ··· 1418 1075 .spi_w = PWRAP_MAN_CMD_SPI_WRITE, 1419 1076 .wdt_src = PWRAP_WDT_SRC_MASK_ALL, 1420 1077 .has_bridge = 1, 1421 - .init_reg_clock = pwrap_mt8135_init_reg_clock, 1078 + .init_reg_clock = pwrap_common_init_reg_clock, 1422 1079 .init_soc_specific = pwrap_mt8135_init_soc_specific, 1423 1080 }; 1424 1081 ··· 1430 1087 .spi_w = PWRAP_MAN_CMD_SPI_WRITE, 1431 1088 .wdt_src = PWRAP_WDT_SRC_MASK_NO_STAUPD, 1432 1089 .has_bridge = 0, 1433 - .init_reg_clock = pwrap_mt8173_init_reg_clock, 1090 + .init_reg_clock = pwrap_common_init_reg_clock, 1434 1091 .init_soc_specific = pwrap_mt8173_init_soc_specific, 1435 1092 }; 1436 1093 ··· 1438 1095 { 1439 1096 
.compatible = "mediatek,mt2701-pwrap", 1440 1097 .data = &pwrap_mt2701, 1098 + }, { 1099 + .compatible = "mediatek,mt7622-pwrap", 1100 + .data = &pwrap_mt7622, 1441 1101 }, { 1442 1102 .compatible = "mediatek,mt8135-pwrap", 1443 1103 .data = &pwrap_mt8135, ··· 1505 1159 if (IS_ERR(wrp->bridge_base)) 1506 1160 return PTR_ERR(wrp->bridge_base); 1507 1161 1508 - wrp->rstc_bridge = devm_reset_control_get(wrp->dev, "pwrap-bridge"); 1162 + wrp->rstc_bridge = devm_reset_control_get(wrp->dev, 1163 + "pwrap-bridge"); 1509 1164 if (IS_ERR(wrp->rstc_bridge)) { 1510 1165 ret = PTR_ERR(wrp->rstc_bridge); 1511 - dev_dbg(wrp->dev, "cannot get pwrap-bridge reset: %d\n", ret); 1166 + dev_dbg(wrp->dev, 1167 + "cannot get pwrap-bridge reset: %d\n", ret); 1512 1168 return ret; 1513 1169 } 1514 1170 } 1515 1171 1516 1172 wrp->clk_spi = devm_clk_get(wrp->dev, "spi"); 1517 1173 if (IS_ERR(wrp->clk_spi)) { 1518 - dev_dbg(wrp->dev, "failed to get clock: %ld\n", PTR_ERR(wrp->clk_spi)); 1174 + dev_dbg(wrp->dev, "failed to get clock: %ld\n", 1175 + PTR_ERR(wrp->clk_spi)); 1519 1176 return PTR_ERR(wrp->clk_spi); 1520 1177 } 1521 1178 1522 1179 wrp->clk_wrap = devm_clk_get(wrp->dev, "wrap"); 1523 1180 if (IS_ERR(wrp->clk_wrap)) { 1524 - dev_dbg(wrp->dev, "failed to get clock: %ld\n", PTR_ERR(wrp->clk_wrap)); 1181 + dev_dbg(wrp->dev, "failed to get clock: %ld\n", 1182 + PTR_ERR(wrp->clk_wrap)); 1525 1183 return PTR_ERR(wrp->clk_wrap); 1526 1184 } 1527 1185 ··· 1570 1220 pwrap_writel(wrp, wrp->master->int_en_all, PWRAP_INT_EN); 1571 1221 1572 1222 irq = platform_get_irq(pdev, 0); 1573 - ret = devm_request_irq(wrp->dev, irq, pwrap_interrupt, IRQF_TRIGGER_HIGH, 1574 - "mt-pmic-pwrap", wrp); 1223 + ret = devm_request_irq(wrp->dev, irq, pwrap_interrupt, 1224 + IRQF_TRIGGER_HIGH, 1225 + "mt-pmic-pwrap", wrp); 1575 1226 if (ret) 1576 1227 goto err_out2; 1577 1228 1578 - wrp->regmap = devm_regmap_init(wrp->dev, NULL, wrp, &pwrap_regmap_config); 1229 + wrp->regmap = devm_regmap_init(wrp->dev, NULL, wrp, 
wrp->slave->regmap); 1579 1230 if (IS_ERR(wrp->regmap)) { 1580 1231 ret = PTR_ERR(wrp->regmap); 1581 1232 goto err_out2;
+11
drivers/soc/qcom/Kconfig
···
35 35   modes. It interface with various system drivers to put the cores in
36 36   low power modes.
37 37
   38 + config QCOM_RMTFS_MEM
   39 +   tristate "Qualcomm Remote Filesystem memory driver"
   40 +   depends on ARCH_QCOM
   41 +   help
   42 +     The Qualcomm remote filesystem memory driver is used for allocating
   43 +     and exposing regions of shared memory with remote processors for the
   44 +     purpose of exchanging sector-data between the remote filesystem
   45 +     service and its clients.
   46 +
   47 +     Say y here if you intend to boot the modem remoteproc.
   48 +
38 49   config QCOM_SMEM
39 50   tristate "Qualcomm Shared Memory Manager (SMEM)"
40 51   depends on ARCH_QCOM
+1
drivers/soc/qcom/Makefile
···
3 3   obj-$(CONFIG_QCOM_GSBI) += qcom_gsbi.o
4 4   obj-$(CONFIG_QCOM_MDT_LOADER) += mdt_loader.o
5 5   obj-$(CONFIG_QCOM_PM) += spm.o
  6 + obj-$(CONFIG_QCOM_RMTFS_MEM) += rmtfs_mem.o
6 7   obj-$(CONFIG_QCOM_SMD_RPM) += smd-rpm.o
7 8   obj-$(CONFIG_QCOM_SMEM) += smem.o
8 9   obj-$(CONFIG_QCOM_SMEM_STATE) += smem_state.o
+269
drivers/soc/qcom/rmtfs_mem.c
··· 1 + /* 2 + * Copyright (c) 2017 Linaro Ltd. 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License version 2 and 6 + * only version 2 as published by the Free Software Foundation. 7 + * 8 + * This program is distributed in the hope that it will be useful, 9 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 10 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 11 + * GNU General Public License for more details. 12 + */ 13 + 14 + #include <linux/kernel.h> 15 + #include <linux/cdev.h> 16 + #include <linux/err.h> 17 + #include <linux/module.h> 18 + #include <linux/platform_device.h> 19 + #include <linux/of.h> 20 + #include <linux/of_reserved_mem.h> 21 + #include <linux/dma-mapping.h> 22 + #include <linux/slab.h> 23 + #include <linux/uaccess.h> 24 + #include <linux/io.h> 25 + #include <linux/qcom_scm.h> 26 + 27 + #define QCOM_RMTFS_MEM_DEV_MAX (MINORMASK + 1) 28 + 29 + static dev_t qcom_rmtfs_mem_major; 30 + 31 + struct qcom_rmtfs_mem { 32 + struct device dev; 33 + struct cdev cdev; 34 + 35 + void *base; 36 + phys_addr_t addr; 37 + phys_addr_t size; 38 + 39 + unsigned int client_id; 40 + }; 41 + 42 + static ssize_t qcom_rmtfs_mem_show(struct device *dev, 43 + struct device_attribute *attr, 44 + char *buf); 45 + 46 + static DEVICE_ATTR(phys_addr, 0400, qcom_rmtfs_mem_show, NULL); 47 + static DEVICE_ATTR(size, 0400, qcom_rmtfs_mem_show, NULL); 48 + static DEVICE_ATTR(client_id, 0400, qcom_rmtfs_mem_show, NULL); 49 + 50 + static ssize_t qcom_rmtfs_mem_show(struct device *dev, 51 + struct device_attribute *attr, 52 + char *buf) 53 + { 54 + struct qcom_rmtfs_mem *rmtfs_mem = container_of(dev, 55 + struct qcom_rmtfs_mem, 56 + dev); 57 + 58 + if (attr == &dev_attr_phys_addr) 59 + return sprintf(buf, "%pa\n", &rmtfs_mem->addr); 60 + if (attr == &dev_attr_size) 61 + return sprintf(buf, "%pa\n", &rmtfs_mem->size); 62 + if (attr == &dev_attr_client_id) 63 + 
return sprintf(buf, "%d\n", rmtfs_mem->client_id); 64 + 65 + return -EINVAL; 66 + } 67 + 68 + static struct attribute *qcom_rmtfs_mem_attrs[] = { 69 + &dev_attr_phys_addr.attr, 70 + &dev_attr_size.attr, 71 + &dev_attr_client_id.attr, 72 + NULL 73 + }; 74 + ATTRIBUTE_GROUPS(qcom_rmtfs_mem); 75 + 76 + static int qcom_rmtfs_mem_open(struct inode *inode, struct file *filp) 77 + { 78 + struct qcom_rmtfs_mem *rmtfs_mem = container_of(inode->i_cdev, 79 + struct qcom_rmtfs_mem, 80 + cdev); 81 + 82 + get_device(&rmtfs_mem->dev); 83 + filp->private_data = rmtfs_mem; 84 + 85 + return 0; 86 + } 87 + static ssize_t qcom_rmtfs_mem_read(struct file *filp, 88 + char __user *buf, size_t count, loff_t *f_pos) 89 + { 90 + struct qcom_rmtfs_mem *rmtfs_mem = filp->private_data; 91 + 92 + if (*f_pos >= rmtfs_mem->size) 93 + return 0; 94 + 95 + if (*f_pos + count >= rmtfs_mem->size) 96 + count = rmtfs_mem->size - *f_pos; 97 + 98 + if (copy_to_user(buf, rmtfs_mem->base + *f_pos, count)) 99 + return -EFAULT; 100 + 101 + *f_pos += count; 102 + return count; 103 + } 104 + 105 + static ssize_t qcom_rmtfs_mem_write(struct file *filp, 106 + const char __user *buf, size_t count, 107 + loff_t *f_pos) 108 + { 109 + struct qcom_rmtfs_mem *rmtfs_mem = filp->private_data; 110 + 111 + if (*f_pos >= rmtfs_mem->size) 112 + return 0; 113 + 114 + if (*f_pos + count >= rmtfs_mem->size) 115 + count = rmtfs_mem->size - *f_pos; 116 + 117 + if (copy_from_user(rmtfs_mem->base + *f_pos, buf, count)) 118 + return -EFAULT; 119 + 120 + *f_pos += count; 121 + return count; 122 + } 123 + 124 + static int qcom_rmtfs_mem_release(struct inode *inode, struct file *filp) 125 + { 126 + struct qcom_rmtfs_mem *rmtfs_mem = filp->private_data; 127 + 128 + put_device(&rmtfs_mem->dev); 129 + 130 + return 0; 131 + } 132 + 133 + static const struct file_operations qcom_rmtfs_mem_fops = { 134 + .owner = THIS_MODULE, 135 + .open = qcom_rmtfs_mem_open, 136 + .read = qcom_rmtfs_mem_read, 137 + .write = qcom_rmtfs_mem_write, 138 + 
.release = qcom_rmtfs_mem_release, 139 + .llseek = default_llseek, 140 + }; 141 + 142 + static void qcom_rmtfs_mem_release_device(struct device *dev) 143 + { 144 + struct qcom_rmtfs_mem *rmtfs_mem = container_of(dev, 145 + struct qcom_rmtfs_mem, 146 + dev); 147 + 148 + kfree(rmtfs_mem); 149 + } 150 + 151 + static int qcom_rmtfs_mem_probe(struct platform_device *pdev) 152 + { 153 + struct device_node *node = pdev->dev.of_node; 154 + struct reserved_mem *rmem; 155 + struct qcom_rmtfs_mem *rmtfs_mem; 156 + u32 client_id; 157 + int ret; 158 + 159 + rmem = of_reserved_mem_lookup(node); 160 + if (!rmem) { 161 + dev_err(&pdev->dev, "failed to acquire memory region\n"); 162 + return -EINVAL; 163 + } 164 + 165 + ret = of_property_read_u32(node, "qcom,client-id", &client_id); 166 + if (ret) { 167 + dev_err(&pdev->dev, "failed to parse \"qcom,client-id\"\n"); 168 + return ret; 169 + 170 + } 171 + 172 + rmtfs_mem = kzalloc(sizeof(*rmtfs_mem), GFP_KERNEL); 173 + if (!rmtfs_mem) 174 + return -ENOMEM; 175 + 176 + rmtfs_mem->addr = rmem->base; 177 + rmtfs_mem->client_id = client_id; 178 + rmtfs_mem->size = rmem->size; 179 + 180 + device_initialize(&rmtfs_mem->dev); 181 + rmtfs_mem->dev.parent = &pdev->dev; 182 + rmtfs_mem->dev.groups = qcom_rmtfs_mem_groups; 183 + 184 + rmtfs_mem->base = devm_memremap(&rmtfs_mem->dev, rmtfs_mem->addr, 185 + rmtfs_mem->size, MEMREMAP_WC); 186 + if (IS_ERR(rmtfs_mem->base)) { 187 + dev_err(&pdev->dev, "failed to remap rmtfs_mem region\n"); 188 + ret = PTR_ERR(rmtfs_mem->base); 189 + goto put_device; 190 + } 191 + 192 + cdev_init(&rmtfs_mem->cdev, &qcom_rmtfs_mem_fops); 193 + rmtfs_mem->cdev.owner = THIS_MODULE; 194 + 195 + dev_set_name(&rmtfs_mem->dev, "qcom_rmtfs_mem%d", client_id); 196 + rmtfs_mem->dev.id = client_id; 197 + rmtfs_mem->dev.devt = MKDEV(MAJOR(qcom_rmtfs_mem_major), client_id); 198 + 199 + ret = cdev_device_add(&rmtfs_mem->cdev, &rmtfs_mem->dev); 200 + if (ret) { 201 + dev_err(&pdev->dev, "failed to add cdev: %d\n", ret); 202 + goto 
put_device; 203 + } 204 + 205 + rmtfs_mem->dev.release = qcom_rmtfs_mem_release_device; 206 + 207 + dev_set_drvdata(&pdev->dev, rmtfs_mem); 208 + 209 + return 0; 210 + 211 + put_device: 212 + put_device(&rmtfs_mem->dev); 213 + 214 + return ret; 215 + } 216 + 217 + static int qcom_rmtfs_mem_remove(struct platform_device *pdev) 218 + { 219 + struct qcom_rmtfs_mem *rmtfs_mem = dev_get_drvdata(&pdev->dev); 220 + 221 + cdev_device_del(&rmtfs_mem->cdev, &rmtfs_mem->dev); 222 + put_device(&rmtfs_mem->dev); 223 + 224 + return 0; 225 + } 226 + 227 + static const struct of_device_id qcom_rmtfs_mem_of_match[] = { 228 + { .compatible = "qcom,rmtfs-mem" }, 229 + {} 230 + }; 231 + MODULE_DEVICE_TABLE(of, qcom_rmtfs_mem_of_match); 232 + 233 + static struct platform_driver qcom_rmtfs_mem_driver = { 234 + .probe = qcom_rmtfs_mem_probe, 235 + .remove = qcom_rmtfs_mem_remove, 236 + .driver = { 237 + .name = "qcom_rmtfs_mem", 238 + .of_match_table = qcom_rmtfs_mem_of_match, 239 + }, 240 + }; 241 + 242 + static int qcom_rmtfs_mem_init(void) 243 + { 244 + int ret; 245 + 246 + ret = alloc_chrdev_region(&qcom_rmtfs_mem_major, 0, 247 + QCOM_RMTFS_MEM_DEV_MAX, "qcom_rmtfs_mem"); 248 + if (ret < 0) { 249 + pr_err("qcom_rmtfs_mem: failed to allocate char dev region\n"); 250 + return ret; 251 + } 252 + 253 + ret = platform_driver_register(&qcom_rmtfs_mem_driver); 254 + if (ret < 0) { 255 + pr_err("qcom_rmtfs_mem: failed to register rmtfs_mem driver\n"); 256 + unregister_chrdev_region(qcom_rmtfs_mem_major, 257 + QCOM_RMTFS_MEM_DEV_MAX); 258 + } 259 + 260 + return ret; 261 + } 262 + module_init(qcom_rmtfs_mem_init); 263 + 264 + static void qcom_rmtfs_mem_exit(void) 265 + { 266 + platform_driver_unregister(&qcom_rmtfs_mem_driver); 267 + unregister_chrdev_region(qcom_rmtfs_mem_major, QCOM_RMTFS_MEM_DEV_MAX); 268 + } 269 + module_exit(qcom_rmtfs_mem_exit);
+266 -71
drivers/soc/qcom/smem.c
··· 52 52 * 53 53 * Items in the non-cached region are allocated from the start of the partition 54 54 * while items in the cached region are allocated from the end. The free area 55 - * is hence the region between the cached and non-cached offsets. 55 + * is hence the region between the cached and non-cached offsets. The header of 56 + * cached items comes after the data. 56 57 * 58 + * Version 12 (SMEM_GLOBAL_PART_VERSION) changes the item alloc/get procedure 59 + * for the global heap. A new global partition is created from the global heap 60 + * region with partition type (SMEM_GLOBAL_HOST) and the max smem item count is 61 + * set by the bootloader. 57 62 * 58 63 * To synchronize allocations in the shared memory heaps a remote spinlock must 59 64 * be held - currently lock number 3 of the sfpb or tcsr is used for this on all ··· 67 62 */ 68 63 69 64 /* 70 - * Item 3 of the global heap contains an array of versions for the various 71 - * software components in the SoC. We verify that the boot loader version is 72 - * what the expected version (SMEM_EXPECTED_VERSION) as a sanity check. 65 + * The version member of the smem header contains an array of versions for the 66 + * various software components in the SoC. We verify that the boot loader 67 + * version is a valid version as a sanity check. 
73 68 */ 74 - #define SMEM_ITEM_VERSION 3 75 - #define SMEM_MASTER_SBL_VERSION_INDEX 7 76 - #define SMEM_EXPECTED_VERSION 11 69 + #define SMEM_MASTER_SBL_VERSION_INDEX 7 70 + #define SMEM_GLOBAL_HEAP_VERSION 11 71 + #define SMEM_GLOBAL_PART_VERSION 12 77 72 78 73 /* 79 74 * The first 8 items are only to be allocated by the boot loader while ··· 87 82 /* Processor/host identifier for the application processor */ 88 83 #define SMEM_HOST_APPS 0 89 84 85 + /* Processor/host identifier for the global partition */ 86 + #define SMEM_GLOBAL_HOST 0xfffe 87 + 90 88 /* Max number of processors/hosts in a system */ 91 - #define SMEM_HOST_COUNT 9 89 + #define SMEM_HOST_COUNT 10 92 90 93 91 /** 94 92 * struct smem_proc_comm - proc_comm communication struct (legacy) ··· 148 140 * @flags: flags for the partition (currently unused) 149 141 * @host0: first processor/host with access to this partition 150 142 * @host1: second processor/host with access to this partition 143 + * @cacheline: alignment for "cached" entries 151 144 * @reserved: reserved entries for later use 152 145 */ 153 146 struct smem_ptable_entry { ··· 157 148 __le32 flags; 158 149 __le16 host0; 159 150 __le16 host1; 160 - __le32 reserved[8]; 151 + __le32 cacheline; 152 + __le32 reserved[7]; 161 153 }; 162 154 163 155 /** ··· 223 213 #define SMEM_PRIVATE_CANARY 0xa5a5 224 214 225 215 /** 216 + * struct smem_info - smem region info located after the table of contents 217 + * @magic: magic number, must be SMEM_INFO_MAGIC 218 + * @size: size of the smem region 219 + * @base_addr: base address of the smem region 220 + * @reserved: for now reserved entry 221 + * @num_items: highest accepted item number 222 + */ 223 + struct smem_info { 224 + u8 magic[4]; 225 + __le32 size; 226 + __le32 base_addr; 227 + __le32 reserved; 228 + __le16 num_items; 229 + }; 230 + 231 + static const u8 SMEM_INFO_MAGIC[] = { 0x53, 0x49, 0x49, 0x49 }; /* SIII */ 232 + 233 + /** 226 234 * struct smem_region - representation of a chunk of memory 
used for smem 227 235 * @aux_base: identifier of aux_mem base 228 236 * @virt_base: virtual base address of memory with this aux_mem identifier ··· 256 228 * struct qcom_smem - device data for the smem device 257 229 * @dev: device pointer 258 230 * @hwlock: reference to a hwspinlock 231 + * @global_partition: pointer to global partition when in use 232 + * @global_cacheline: cacheline size for global partition 259 233 * @partitions: list of pointers to partitions affecting the current 260 234 * processor/host 235 + * @cacheline: list of cacheline sizes for each host 236 + * @item_count: max accepted item number 261 237 * @num_regions: number of @regions 262 238 * @regions: list of the memory regions defining the shared memory 263 239 */ ··· 270 238 271 239 struct hwspinlock *hwlock; 272 240 241 + struct smem_partition_header *global_partition; 242 + size_t global_cacheline; 273 243 struct smem_partition_header *partitions[SMEM_HOST_COUNT]; 244 + size_t cacheline[SMEM_HOST_COUNT]; 245 + u32 item_count; 274 246 275 247 unsigned num_regions; 276 248 struct smem_region regions[0]; 277 249 }; 278 250 279 251 static struct smem_private_entry * 280 - phdr_to_last_private_entry(struct smem_partition_header *phdr) 252 + phdr_to_last_uncached_entry(struct smem_partition_header *phdr) 281 253 { 282 254 void *p = phdr; 283 255 284 256 return p + le32_to_cpu(phdr->offset_free_uncached); 285 257 } 286 258 287 - static void *phdr_to_first_cached_entry(struct smem_partition_header *phdr) 259 + static void *phdr_to_first_cached_entry(struct smem_partition_header *phdr, 260 + size_t cacheline) 261 + { 262 + void *p = phdr; 263 + 264 + return p + le32_to_cpu(phdr->size) - ALIGN(sizeof(*phdr), cacheline); 265 + } 266 + 267 + static void *phdr_to_last_cached_entry(struct smem_partition_header *phdr) 288 268 { 289 269 void *p = phdr; 290 270 ··· 304 260 } 305 261 306 262 static struct smem_private_entry * 307 - phdr_to_first_private_entry(struct smem_partition_header *phdr) 263 + 
phdr_to_first_uncached_entry(struct smem_partition_header *phdr) 308 264 { 309 265 void *p = phdr; 310 266 ··· 312 268 } 313 269 314 270 static struct smem_private_entry * 315 - private_entry_next(struct smem_private_entry *e) 271 + uncached_entry_next(struct smem_private_entry *e) 316 272 { 317 273 void *p = e; 318 274 ··· 320 276 le32_to_cpu(e->size); 321 277 } 322 278 323 - static void *entry_to_item(struct smem_private_entry *e) 279 + static struct smem_private_entry * 280 + cached_entry_next(struct smem_private_entry *e, size_t cacheline) 281 + { 282 + void *p = e; 283 + 284 + return p - le32_to_cpu(e->size) - ALIGN(sizeof(*e), cacheline); 285 + } 286 + 287 + static void *uncached_entry_to_item(struct smem_private_entry *e) 324 288 { 325 289 void *p = e; 326 290 327 291 return p + sizeof(*e) + le16_to_cpu(e->padding_hdr); 292 + } 293 + 294 + static void *cached_entry_to_item(struct smem_private_entry *e) 295 + { 296 + void *p = e; 297 + 298 + return p - le32_to_cpu(e->size); 328 299 } 329 300 330 301 /* Pointer to the one and only smem handle */ ··· 349 290 #define HWSPINLOCK_TIMEOUT 1000 350 291 351 292 static int qcom_smem_alloc_private(struct qcom_smem *smem, 352 - unsigned host, 293 + struct smem_partition_header *phdr, 353 294 unsigned item, 354 295 size_t size) 355 296 { 356 - struct smem_partition_header *phdr; 357 297 struct smem_private_entry *hdr, *end; 358 298 size_t alloc_size; 359 299 void *cached; 360 300 361 - phdr = smem->partitions[host]; 362 - hdr = phdr_to_first_private_entry(phdr); 363 - end = phdr_to_last_private_entry(phdr); 364 - cached = phdr_to_first_cached_entry(phdr); 301 + hdr = phdr_to_first_uncached_entry(phdr); 302 + end = phdr_to_last_uncached_entry(phdr); 303 + cached = phdr_to_last_cached_entry(phdr); 365 304 366 305 while (hdr < end) { 367 306 if (hdr->canary != SMEM_PRIVATE_CANARY) { 368 307 dev_err(smem->dev, 369 - "Found invalid canary in host %d partition\n", 370 - host); 308 + "Found invalid canary in hosts %d:%d 
partition\n", 309 + phdr->host0, phdr->host1); 371 310 return -EINVAL; 372 311 } 373 312 374 313 if (le16_to_cpu(hdr->item) == item) 375 314 return -EEXIST; 376 315 377 - hdr = private_entry_next(hdr); 316 + hdr = uncached_entry_next(hdr); 378 317 } 379 318 380 319 /* Check that we don't grow into the cached region */ ··· 403 346 unsigned item, 404 347 size_t size) 405 348 { 406 - struct smem_header *header; 407 349 struct smem_global_entry *entry; 408 - 409 - if (WARN_ON(item >= SMEM_ITEM_COUNT)) 410 - return -EINVAL; 350 + struct smem_header *header; 411 351 412 352 header = smem->regions[0].virt_base; 413 353 entry = &header->toc[item]; ··· 443 389 */ 444 390 int qcom_smem_alloc(unsigned host, unsigned item, size_t size) 445 391 { 392 + struct smem_partition_header *phdr; 446 393 unsigned long flags; 447 394 int ret; 448 395 ··· 456 401 return -EINVAL; 457 402 } 458 403 404 + if (WARN_ON(item >= __smem->item_count)) 405 + return -EINVAL; 406 + 459 407 ret = hwspin_lock_timeout_irqsave(__smem->hwlock, 460 408 HWSPINLOCK_TIMEOUT, 461 409 &flags); 462 410 if (ret) 463 411 return ret; 464 412 465 - if (host < SMEM_HOST_COUNT && __smem->partitions[host]) 466 - ret = qcom_smem_alloc_private(__smem, host, item, size); 467 - else 413 + if (host < SMEM_HOST_COUNT && __smem->partitions[host]) { 414 + phdr = __smem->partitions[host]; 415 + ret = qcom_smem_alloc_private(__smem, phdr, item, size); 416 + } else if (__smem->global_partition) { 417 + phdr = __smem->global_partition; 418 + ret = qcom_smem_alloc_private(__smem, phdr, item, size); 419 + } else { 468 420 ret = qcom_smem_alloc_global(__smem, item, size); 421 + } 469 422 470 423 hwspin_unlock_irqrestore(__smem->hwlock, &flags); 471 424 ··· 490 427 struct smem_global_entry *entry; 491 428 u32 aux_base; 492 429 unsigned i; 493 - 494 - if (WARN_ON(item >= SMEM_ITEM_COUNT)) 495 - return ERR_PTR(-EINVAL); 496 430 497 431 header = smem->regions[0].virt_base; 498 432 entry = &header->toc[item]; ··· 512 452 } 513 453 514 454 
static void *qcom_smem_get_private(struct qcom_smem *smem, 515 - unsigned host, 455 + struct smem_partition_header *phdr, 456 + size_t cacheline, 516 457 unsigned item, 517 458 size_t *size) 518 459 { 519 - struct smem_partition_header *phdr; 520 460 struct smem_private_entry *e, *end; 521 461 522 - phdr = smem->partitions[host]; 523 - e = phdr_to_first_private_entry(phdr); 524 - end = phdr_to_last_private_entry(phdr); 462 + e = phdr_to_first_uncached_entry(phdr); 463 + end = phdr_to_last_uncached_entry(phdr); 525 464 526 465 while (e < end) { 527 - if (e->canary != SMEM_PRIVATE_CANARY) { 528 - dev_err(smem->dev, 529 - "Found invalid canary in host %d partition\n", 530 - host); 531 - return ERR_PTR(-EINVAL); 532 - } 466 + if (e->canary != SMEM_PRIVATE_CANARY) 467 + goto invalid_canary; 533 468 534 469 if (le16_to_cpu(e->item) == item) { 535 470 if (size != NULL) 536 471 *size = le32_to_cpu(e->size) - 537 472 le16_to_cpu(e->padding_data); 538 473 539 - return entry_to_item(e); 474 + return uncached_entry_to_item(e); 540 475 } 541 476 542 - e = private_entry_next(e); 477 + e = uncached_entry_next(e); 478 + } 479 + 480 + /* Item was not found in the uncached list, search the cached list */ 481 + 482 + e = phdr_to_first_cached_entry(phdr, cacheline); 483 + end = phdr_to_last_cached_entry(phdr); 484 + 485 + while (e > end) { 486 + if (e->canary != SMEM_PRIVATE_CANARY) 487 + goto invalid_canary; 488 + 489 + if (le16_to_cpu(e->item) == item) { 490 + if (size != NULL) 491 + *size = le32_to_cpu(e->size) - 492 + le16_to_cpu(e->padding_data); 493 + 494 + return cached_entry_to_item(e); 495 + } 496 + 497 + e = cached_entry_next(e, cacheline); 543 498 } 544 499 545 500 return ERR_PTR(-ENOENT); 501 + 502 + invalid_canary: 503 + dev_err(smem->dev, "Found invalid canary in hosts %d:%d partition\n", 504 + phdr->host0, phdr->host1); 505 + 506 + return ERR_PTR(-EINVAL); 546 507 } 547 508 548 509 /** ··· 577 496 */ 578 497 void *qcom_smem_get(unsigned host, unsigned item, size_t 
*size) 579 498 { 499 + struct smem_partition_header *phdr; 580 500 unsigned long flags; 501 + size_t cacheln; 581 502 int ret; 582 503 void *ptr = ERR_PTR(-EPROBE_DEFER); 583 504 584 505 if (!__smem) 585 506 return ptr; 507 + 508 + if (WARN_ON(item >= __smem->item_count)) 509 + return ERR_PTR(-EINVAL); 586 510 587 511 ret = hwspin_lock_timeout_irqsave(__smem->hwlock, 588 512 HWSPINLOCK_TIMEOUT, ··· 595 509 if (ret) 596 510 return ERR_PTR(ret); 597 511 598 - if (host < SMEM_HOST_COUNT && __smem->partitions[host]) 599 - ptr = qcom_smem_get_private(__smem, host, item, size); 600 - else 512 + if (host < SMEM_HOST_COUNT && __smem->partitions[host]) { 513 + phdr = __smem->partitions[host]; 514 + cacheln = __smem->cacheline[host]; 515 + ptr = qcom_smem_get_private(__smem, phdr, cacheln, item, size); 516 + } else if (__smem->global_partition) { 517 + phdr = __smem->global_partition; 518 + cacheln = __smem->global_cacheline; 519 + ptr = qcom_smem_get_private(__smem, phdr, cacheln, item, size); 520 + } else { 601 521 ptr = qcom_smem_get_global(__smem, item, size); 522 + } 602 523 603 524 hwspin_unlock_irqrestore(__smem->hwlock, &flags); 604 525 ··· 634 541 phdr = __smem->partitions[host]; 635 542 ret = le32_to_cpu(phdr->offset_free_cached) - 636 543 le32_to_cpu(phdr->offset_free_uncached); 544 + } else if (__smem->global_partition) { 545 + phdr = __smem->global_partition; 546 + ret = le32_to_cpu(phdr->offset_free_cached) - 547 + le32_to_cpu(phdr->offset_free_uncached); 637 548 } else { 638 549 header = __smem->regions[0].virt_base; 639 550 ret = le32_to_cpu(header->available); ··· 649 552 650 553 static int qcom_smem_get_sbl_version(struct qcom_smem *smem) 651 554 { 555 + struct smem_header *header; 652 556 __le32 *versions; 653 - size_t size; 654 557 655 - versions = qcom_smem_get_global(smem, SMEM_ITEM_VERSION, &size); 656 - if (IS_ERR(versions)) { 657 - dev_err(smem->dev, "Unable to read the version item\n"); 658 - return -ENOENT; 659 - } 660 - 661 - if (size < 
sizeof(unsigned) * SMEM_MASTER_SBL_VERSION_INDEX) { 662 - dev_err(smem->dev, "Version item is too small\n"); 663 - return -EINVAL; 664 - } 558 + header = smem->regions[0].virt_base; 559 + versions = header->version; 665 560 666 561 return le32_to_cpu(versions[SMEM_MASTER_SBL_VERSION_INDEX]); 667 562 } 668 563 669 - static int qcom_smem_enumerate_partitions(struct qcom_smem *smem, 670 - unsigned local_host) 564 + static struct smem_ptable *qcom_smem_get_ptable(struct qcom_smem *smem) 671 565 { 672 - struct smem_partition_header *header; 673 - struct smem_ptable_entry *entry; 674 566 struct smem_ptable *ptable; 675 - unsigned remote_host; 676 - u32 version, host0, host1; 677 - int i; 567 + u32 version; 678 568 679 569 ptable = smem->regions[0].virt_base + smem->regions[0].size - SZ_4K; 680 570 if (memcmp(ptable->magic, SMEM_PTABLE_MAGIC, sizeof(ptable->magic))) 681 - return 0; 571 + return ERR_PTR(-ENOENT); 682 572 683 573 version = le32_to_cpu(ptable->version); 684 574 if (version != 1) { 685 575 dev_err(smem->dev, 686 576 "Unsupported partition header version %d\n", version); 577 + return ERR_PTR(-EINVAL); 578 + } 579 + return ptable; 580 + } 581 + 582 + static u32 qcom_smem_get_item_count(struct qcom_smem *smem) 583 + { 584 + struct smem_ptable *ptable; 585 + struct smem_info *info; 586 + 587 + ptable = qcom_smem_get_ptable(smem); 588 + if (IS_ERR_OR_NULL(ptable)) 589 + return SMEM_ITEM_COUNT; 590 + 591 + info = (struct smem_info *)&ptable->entry[ptable->num_entries]; 592 + if (memcmp(info->magic, SMEM_INFO_MAGIC, sizeof(info->magic))) 593 + return SMEM_ITEM_COUNT; 594 + 595 + return le16_to_cpu(info->num_items); 596 + } 597 + 598 + static int qcom_smem_set_global_partition(struct qcom_smem *smem) 599 + { 600 + struct smem_partition_header *header; 601 + struct smem_ptable_entry *entry = NULL; 602 + struct smem_ptable *ptable; 603 + u32 host0, host1, size; 604 + int i; 605 + 606 + ptable = qcom_smem_get_ptable(smem); 607 + if (IS_ERR(ptable)) 608 + return 
PTR_ERR(ptable); 609 + 610 + for (i = 0; i < le32_to_cpu(ptable->num_entries); i++) { 611 + entry = &ptable->entry[i]; 612 + host0 = le16_to_cpu(entry->host0); 613 + host1 = le16_to_cpu(entry->host1); 614 + 615 + if (host0 == SMEM_GLOBAL_HOST && host0 == host1) 616 + break; 617 + } 618 + 619 + if (!entry) { 620 + dev_err(smem->dev, "Missing entry for global partition\n"); 687 621 return -EINVAL; 688 622 } 623 + 624 + if (!le32_to_cpu(entry->offset) || !le32_to_cpu(entry->size)) { 625 + dev_err(smem->dev, "Invalid entry for global partition\n"); 626 + return -EINVAL; 627 + } 628 + 629 + if (smem->global_partition) { 630 + dev_err(smem->dev, "Already found the global partition\n"); 631 + return -EINVAL; 632 + } 633 + 634 + header = smem->regions[0].virt_base + le32_to_cpu(entry->offset); 635 + host0 = le16_to_cpu(header->host0); 636 + host1 = le16_to_cpu(header->host1); 637 + 638 + if (memcmp(header->magic, SMEM_PART_MAGIC, sizeof(header->magic))) { 639 + dev_err(smem->dev, "Global partition has invalid magic\n"); 640 + return -EINVAL; 641 + } 642 + 643 + if (host0 != SMEM_GLOBAL_HOST && host1 != SMEM_GLOBAL_HOST) { 644 + dev_err(smem->dev, "Global partition hosts are invalid\n"); 645 + return -EINVAL; 646 + } 647 + 648 + if (le32_to_cpu(header->size) != le32_to_cpu(entry->size)) { 649 + dev_err(smem->dev, "Global partition has invalid size\n"); 650 + return -EINVAL; 651 + } 652 + 653 + size = le32_to_cpu(header->offset_free_uncached); 654 + if (size > le32_to_cpu(header->size)) { 655 + dev_err(smem->dev, 656 + "Global partition has invalid free pointer\n"); 657 + return -EINVAL; 658 + } 659 + 660 + smem->global_partition = header; 661 + smem->global_cacheline = le32_to_cpu(entry->cacheline); 662 + 663 + return 0; 664 + } 665 + 666 + static int qcom_smem_enumerate_partitions(struct qcom_smem *smem, 667 + unsigned int local_host) 668 + { 669 + struct smem_partition_header *header; 670 + struct smem_ptable_entry *entry; 671 + struct smem_ptable *ptable; 672 + unsigned 
int remote_host; 673 + u32 host0, host1; 674 + int i; 675 + 676 + ptable = qcom_smem_get_ptable(smem); 677 + if (IS_ERR(ptable)) 678 + return PTR_ERR(ptable); 689 679 690 680 for (i = 0; i < le32_to_cpu(ptable->num_entries); i++) { 691 681 entry = &ptable->entry[i]; ··· 830 646 return -EINVAL; 831 647 } 832 648 833 - if (header->size != entry->size) { 649 + if (le32_to_cpu(header->size) != le32_to_cpu(entry->size)) { 834 650 dev_err(smem->dev, 835 651 "Partition %d has invalid size\n", i); 836 652 return -EINVAL; ··· 843 659 } 844 660 845 661 smem->partitions[remote_host] = header; 662 + smem->cacheline[remote_host] = le32_to_cpu(entry->cacheline); 846 663 } 847 664 848 665 return 0; ··· 914 729 } 915 730 916 731 version = qcom_smem_get_sbl_version(smem); 917 - if (version >> 16 != SMEM_EXPECTED_VERSION) { 732 + switch (version >> 16) { 733 + case SMEM_GLOBAL_PART_VERSION: 734 + ret = qcom_smem_set_global_partition(smem); 735 + if (ret < 0) 736 + return ret; 737 + smem->item_count = qcom_smem_get_item_count(smem); 738 + break; 739 + case SMEM_GLOBAL_HEAP_VERSION: 740 + smem->item_count = SMEM_ITEM_COUNT; 741 + break; 742 + default: 918 743 dev_err(&pdev->dev, "Unsupported SMEM version 0x%x\n", version); 919 744 return -EINVAL; 920 745 } 921 746 922 747 ret = qcom_smem_enumerate_partitions(smem, SMEM_HOST_APPS); 923 - if (ret < 0) 748 + if (ret < 0 && ret != -ENOENT) 924 749 return ret; 925 750 926 751 hwlock_id = of_hwspin_lock_get_id(pdev->dev.of_node, 0);
+7 -1
drivers/soc/renesas/Kconfig
··· 3 3 default y if ARCH_RENESAS 4 4 select SOC_BUS 5 5 select RST_RCAR if ARCH_RCAR_GEN1 || ARCH_RCAR_GEN2 || \ 6 - ARCH_R8A7795 || ARCH_R8A7796 || ARCH_R8A77995 6 + ARCH_R8A7795 || ARCH_R8A7796 || ARCH_R8A77970 || \ 7 + ARCH_R8A77995 7 8 select SYSC_R8A7743 if ARCH_R8A7743 8 9 select SYSC_R8A7745 if ARCH_R8A7745 9 10 select SYSC_R8A7779 if ARCH_R8A7779 ··· 14 13 select SYSC_R8A7794 if ARCH_R8A7794 15 14 select SYSC_R8A7795 if ARCH_R8A7795 16 15 select SYSC_R8A7796 if ARCH_R8A7796 16 + select SYSC_R8A77970 if ARCH_R8A77970 17 17 select SYSC_R8A77995 if ARCH_R8A77995 18 18 19 19 if SOC_RENESAS ··· 54 52 55 53 config SYSC_R8A7796 56 54 bool "R-Car M3-W System Controller support" if COMPILE_TEST 55 + select SYSC_RCAR 56 + 57 + config SYSC_R8A77970 58 + bool "R-Car V3M System Controller support" if COMPILE_TEST 57 59 select SYSC_RCAR 58 60 59 61 config SYSC_R8A77995
+1
drivers/soc/renesas/Makefile
··· 12 12 obj-$(CONFIG_SYSC_R8A7794) += r8a7794-sysc.o 13 13 obj-$(CONFIG_SYSC_R8A7795) += r8a7795-sysc.o 14 14 obj-$(CONFIG_SYSC_R8A7796) += r8a7796-sysc.o 15 + obj-$(CONFIG_SYSC_R8A77970) += r8a77970-sysc.o 15 16 obj-$(CONFIG_SYSC_R8A77995) += r8a77995-sysc.o 16 17 17 18 # Family
+39
drivers/soc/renesas/r8a77970-sysc.c
··· 1 + /* 2 + * Renesas R-Car V3M System Controller 3 + * 4 + * Copyright (C) 2017 Cogent Embedded Inc. 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + */ 10 + 11 + #include <linux/bug.h> 12 + #include <linux/kernel.h> 13 + 14 + #include <dt-bindings/power/r8a77970-sysc.h> 15 + 16 + #include "rcar-sysc.h" 17 + 18 + static const struct rcar_sysc_area r8a77970_areas[] __initconst = { 19 + { "always-on", 0, 0, R8A77970_PD_ALWAYS_ON, -1, PD_ALWAYS_ON }, 20 + { "ca53-scu", 0x140, 0, R8A77970_PD_CA53_SCU, R8A77970_PD_ALWAYS_ON, 21 + PD_SCU }, 22 + { "ca53-cpu0", 0x200, 0, R8A77970_PD_CA53_CPU0, R8A77970_PD_CA53_SCU, 23 + PD_CPU_NOCR }, 24 + { "ca53-cpu1", 0x200, 1, R8A77970_PD_CA53_CPU1, R8A77970_PD_CA53_SCU, 25 + PD_CPU_NOCR }, 26 + { "cr7", 0x240, 0, R8A77970_PD_CR7, R8A77970_PD_ALWAYS_ON }, 27 + { "a3ir", 0x180, 0, R8A77970_PD_A3IR, R8A77970_PD_ALWAYS_ON }, 28 + { "a2ir0", 0x400, 0, R8A77970_PD_A2IR0, R8A77970_PD_ALWAYS_ON }, 29 + { "a2ir1", 0x400, 1, R8A77970_PD_A2IR1, R8A77970_PD_A2IR0 }, 30 + { "a2ir2", 0x400, 2, R8A77970_PD_A2IR2, R8A77970_PD_A2IR0 }, 31 + { "a2ir3", 0x400, 3, R8A77970_PD_A2IR3, R8A77970_PD_A2IR0 }, 32 + { "a2sc0", 0x400, 4, R8A77970_PD_A2SC0, R8A77970_PD_ALWAYS_ON }, 33 + { "a2sc1", 0x400, 5, R8A77970_PD_A2SC1, R8A77970_PD_A2SC0 }, 34 + }; 35 + 36 + const struct rcar_sysc_info r8a77970_sysc_info __initconst = { 37 + .areas = r8a77970_areas, 38 + .num_areas = ARRAY_SIZE(r8a77970_areas), 39 + };
+1
drivers/soc/renesas/rcar-rst.c
··· 41 41 /* R-Car Gen3 is handled like R-Car Gen2 */ 42 42 { .compatible = "renesas,r8a7795-rst", .data = &rcar_rst_gen2 }, 43 43 { .compatible = "renesas,r8a7796-rst", .data = &rcar_rst_gen2 }, 44 + { .compatible = "renesas,r8a77970-rst", .data = &rcar_rst_gen2 }, 44 45 { .compatible = "renesas,r8a77995-rst", .data = &rcar_rst_gen2 }, 45 46 { /* sentinel */ } 46 47 };
+3
drivers/soc/renesas/rcar-sysc.c
··· 284 284 #ifdef CONFIG_SYSC_R8A7796 285 285 { .compatible = "renesas,r8a7796-sysc", .data = &r8a7796_sysc_info }, 286 286 #endif 287 + #ifdef CONFIG_SYSC_R8A77970 288 + { .compatible = "renesas,r8a77970-sysc", .data = &r8a77970_sysc_info }, 289 + #endif 287 290 #ifdef CONFIG_SYSC_R8A77995 288 291 { .compatible = "renesas,r8a77995-sysc", .data = &r8a77995_sysc_info }, 289 292 #endif
+1
drivers/soc/renesas/rcar-sysc.h
··· 58 58 extern const struct rcar_sysc_info r8a7794_sysc_info; 59 59 extern const struct rcar_sysc_info r8a7795_sysc_info; 60 60 extern const struct rcar_sysc_info r8a7796_sysc_info; 61 + extern const struct rcar_sysc_info r8a77970_sysc_info; 61 62 extern const struct rcar_sysc_info r8a77995_sysc_info; 62 63 63 64
+8
drivers/soc/renesas/renesas-soc.c
··· 144 144 .id = 0x52, 145 145 }; 146 146 147 + static const struct renesas_soc soc_rcar_v3m __initconst __maybe_unused = { 148 + .family = &fam_rcar_gen3, 149 + .id = 0x54, 150 + }; 151 + 147 152 static const struct renesas_soc soc_rcar_d3 __initconst __maybe_unused = { 148 153 .family = &fam_rcar_gen3, 149 154 .id = 0x58, ··· 208 203 #endif 209 204 #ifdef CONFIG_ARCH_R8A7796 210 205 { .compatible = "renesas,r8a7796", .data = &soc_rcar_m3_w }, 206 + #endif 207 + #ifdef CONFIG_ARCH_R8A77970 208 + { .compatible = "renesas,r8a77970", .data = &soc_rcar_v3m }, 211 209 #endif 212 210 #ifdef CONFIG_ARCH_R8A77995 213 211 { .compatible = "renesas,r8a77995", .data = &soc_rcar_d3 },
-9
drivers/soc/samsung/exynos-pmu.c
··· 60 60 61 61 if (pmu_data->powerdown_conf_extra) 62 62 pmu_data->powerdown_conf_extra(mode); 63 - 64 - if (pmu_data->pmu_config_extra) { 65 - for (i = 0; pmu_data->pmu_config_extra[i].offset != PMU_TABLE_END; i++) 66 - pmu_raw_writel(pmu_data->pmu_config_extra[i].val[mode], 67 - pmu_data->pmu_config_extra[i].offset); 68 - } 69 63 } 70 64 71 65 /* ··· 82 88 }, { 83 89 .compatible = "samsung,exynos4210-pmu", 84 90 .data = exynos_pmu_data_arm_ptr(exynos4210_pmu_data), 85 - }, { 86 - .compatible = "samsung,exynos4212-pmu", 87 - .data = exynos_pmu_data_arm_ptr(exynos4212_pmu_data), 88 91 }, { 89 92 .compatible = "samsung,exynos4412-pmu", 90 93 .data = exynos_pmu_data_arm_ptr(exynos4412_pmu_data),
-2
drivers/soc/samsung/exynos-pmu.h
··· 23 23 24 24 struct exynos_pmu_data { 25 25 const struct exynos_pmu_conf *pmu_config; 26 - const struct exynos_pmu_conf *pmu_config_extra; 27 26 28 27 void (*pmu_init)(void); 29 28 void (*powerdown_conf)(enum sys_powerdown); ··· 35 36 /* list of all exported SoC specific data */ 36 37 extern const struct exynos_pmu_data exynos3250_pmu_data; 37 38 extern const struct exynos_pmu_data exynos4210_pmu_data; 38 - extern const struct exynos_pmu_data exynos4212_pmu_data; 39 39 extern const struct exynos_pmu_data exynos4412_pmu_data; 40 40 extern const struct exynos_pmu_data exynos5250_pmu_data; 41 41 extern const struct exynos_pmu_data exynos5420_pmu_data;
+2 -11
drivers/soc/samsung/exynos4-pmu.c
··· 90 90 { PMU_TABLE_END,}, 91 91 }; 92 92 93 - static const struct exynos_pmu_conf exynos4x12_pmu_config[] = { 93 + static const struct exynos_pmu_conf exynos4412_pmu_config[] = { 94 94 { S5P_ARM_CORE0_LOWPWR, { 0x0, 0x0, 0x2 } }, 95 95 { S5P_DIS_IRQ_CORE0, { 0x0, 0x0, 0x0 } }, 96 96 { S5P_DIS_IRQ_CENTRAL0, { 0x0, 0x0, 0x0 } }, ··· 195 195 { S5P_GPS_ALIVE_LOWPWR, { 0x7, 0x0, 0x0 } }, 196 196 { S5P_CMU_SYSCLK_ISP_LOWPWR, { 0x1, 0x0, 0x0 } }, 197 197 { S5P_CMU_SYSCLK_GPS_LOWPWR, { 0x1, 0x0, 0x0 } }, 198 - { PMU_TABLE_END,}, 199 - }; 200 - 201 - static const struct exynos_pmu_conf exynos4412_pmu_config[] = { 202 198 { S5P_ARM_CORE2_LOWPWR, { 0x0, 0x0, 0x2 } }, 203 199 { S5P_DIS_IRQ_CORE2, { 0x0, 0x0, 0x0 } }, 204 200 { S5P_DIS_IRQ_CENTRAL2, { 0x0, 0x0, 0x0 } }, ··· 208 212 .pmu_config = exynos4210_pmu_config, 209 213 }; 210 214 211 - const struct exynos_pmu_data exynos4212_pmu_data = { 212 - .pmu_config = exynos4x12_pmu_config, 213 - }; 214 - 215 215 const struct exynos_pmu_data exynos4412_pmu_data = { 216 - .pmu_config = exynos4x12_pmu_config, 217 - .pmu_config_extra = exynos4412_pmu_config, 216 + .pmu_config = exynos4412_pmu_config, 218 217 };
+13 -2
drivers/soc/tegra/powergate-bpmp.c
··· 42 42 { 43 43 struct mrq_pg_request request; 44 44 struct tegra_bpmp_message msg; 45 + int err; 45 46 46 47 memset(&request, 0, sizeof(request)); 47 48 request.cmd = CMD_PG_SET_STATE; ··· 54 53 msg.tx.data = &request; 55 54 msg.tx.size = sizeof(request); 56 55 57 - return tegra_bpmp_transfer(bpmp, &msg); 56 + err = tegra_bpmp_transfer(bpmp, &msg); 57 + if (err < 0) 58 + return err; 59 + else if (msg.rx.ret < 0) 60 + return -EINVAL; 61 + 62 + return 0; 58 63 } 59 64 60 65 static int tegra_bpmp_powergate_get_state(struct tegra_bpmp *bpmp, ··· 87 80 err = tegra_bpmp_transfer(bpmp, &msg); 88 81 if (err < 0) 89 82 return PG_STATE_OFF; 83 + else if (msg.rx.ret < 0) 84 + return -EINVAL; 90 85 91 86 return response.get_state.state; 92 87 } ··· 115 106 err = tegra_bpmp_transfer(bpmp, &msg); 116 107 if (err < 0) 117 108 return err; 109 + else if (msg.rx.ret < 0) 110 + return -EINVAL; 118 111 119 112 return response.get_max_id.max_id; 120 113 } ··· 143 132 msg.rx.size = sizeof(response); 144 133 145 134 err = tegra_bpmp_transfer(bpmp, &msg); 146 - if (err < 0) 135 + if (err < 0 || msg.rx.ret < 0) 147 136 return NULL; 148 137 149 138 return kstrdup(response.get_name.name, GFP_KERNEL);
+1 -1
drivers/thermal/Makefile
··· 55 55 obj-$(CONFIG_INTEL_PCH_THERMAL) += intel_pch_thermal.o 56 56 obj-$(CONFIG_ST_THERMAL) += st/ 57 57 obj-$(CONFIG_QCOM_TSENS) += qcom/ 58 - obj-$(CONFIG_TEGRA_SOCTHERM) += tegra/ 58 + obj-y += tegra/ 59 59 obj-$(CONFIG_HISI_THERMAL) += hisi_thermal.o 60 60 obj-$(CONFIG_MTK_THERMAL) += mtk_thermal.o 61 61 obj-$(CONFIG_GENERIC_ADC_THERMAL) += thermal-generic-adc.o
+7
drivers/thermal/tegra/Kconfig
··· 10 10 zones to manage temperatures. This option is also required for the 11 11 emergency thermal reset (thermtrip) feature to function. 12 12 13 + config TEGRA_BPMP_THERMAL 14 + tristate "Tegra BPMP thermal sensing" 15 + depends on TEGRA_BPMP || COMPILE_TEST 16 + help 17 + Enable this option for support for sensing system temperature of NVIDIA 18 + Tegra systems-on-chip with the BPMP coprocessor (Tegra186). 19 + 13 20 endmenu
+2 -1
drivers/thermal/tegra/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 - obj-$(CONFIG_TEGRA_SOCTHERM) += tegra-soctherm.o 2 + obj-$(CONFIG_TEGRA_SOCTHERM) += tegra-soctherm.o 3 + obj-$(CONFIG_TEGRA_BPMP_THERMAL) += tegra-bpmp-thermal.o 3 4 4 5 tegra-soctherm-y := soctherm.o soctherm-fuse.o 5 6 tegra-soctherm-$(CONFIG_ARCH_TEGRA_124_SOC) += tegra124-soctherm.o
+263
drivers/thermal/tegra/tegra-bpmp-thermal.c
··· 1 + /* 2 + * Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. 3 + * 4 + * Author: 5 + * Mikko Perttunen <mperttunen@nvidia.com> 6 + * Aapo Vienamo <avienamo@nvidia.com> 7 + * 8 + * This software is licensed under the terms of the GNU General Public 9 + * License version 2, as published by the Free Software Foundation, and 10 + * may be copied, distributed, and modified under those terms. 11 + * 12 + * This program is distributed in the hope that it will be useful, 13 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 14 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 + * GNU General Public License for more details. 16 + * 17 + */ 18 + 19 + #include <linux/err.h> 20 + #include <linux/module.h> 21 + #include <linux/platform_device.h> 22 + #include <linux/thermal.h> 23 + #include <linux/workqueue.h> 24 + 25 + #include <soc/tegra/bpmp.h> 26 + #include <soc/tegra/bpmp-abi.h> 27 + 28 + struct tegra_bpmp_thermal_zone { 29 + struct tegra_bpmp_thermal *tegra; 30 + struct thermal_zone_device *tzd; 31 + struct work_struct tz_device_update_work; 32 + unsigned int idx; 33 + }; 34 + 35 + struct tegra_bpmp_thermal { 36 + struct device *dev; 37 + struct tegra_bpmp *bpmp; 38 + unsigned int num_zones; 39 + struct tegra_bpmp_thermal_zone **zones; 40 + }; 41 + 42 + static int tegra_bpmp_thermal_get_temp(void *data, int *out_temp) 43 + { 44 + struct tegra_bpmp_thermal_zone *zone = data; 45 + struct mrq_thermal_host_to_bpmp_request req; 46 + union mrq_thermal_bpmp_to_host_response reply; 47 + struct tegra_bpmp_message msg; 48 + int err; 49 + 50 + memset(&req, 0, sizeof(req)); 51 + req.type = CMD_THERMAL_GET_TEMP; 52 + req.get_temp.zone = zone->idx; 53 + 54 + memset(&msg, 0, sizeof(msg)); 55 + msg.mrq = MRQ_THERMAL; 56 + msg.tx.data = &req; 57 + msg.tx.size = sizeof(req); 58 + msg.rx.data = &reply; 59 + msg.rx.size = sizeof(reply); 60 + 61 + err = tegra_bpmp_transfer(zone->tegra->bpmp, &msg); 62 + if (err) 63 + return err; 64 + 65 
+ *out_temp = reply.get_temp.temp; 66 + 67 + return 0; 68 + } 69 + 70 + static int tegra_bpmp_thermal_set_trips(void *data, int low, int high) 71 + { 72 + struct tegra_bpmp_thermal_zone *zone = data; 73 + struct mrq_thermal_host_to_bpmp_request req; 74 + struct tegra_bpmp_message msg; 75 + 76 + memset(&req, 0, sizeof(req)); 77 + req.type = CMD_THERMAL_SET_TRIP; 78 + req.set_trip.zone = zone->idx; 79 + req.set_trip.enabled = true; 80 + req.set_trip.low = low; 81 + req.set_trip.high = high; 82 + 83 + memset(&msg, 0, sizeof(msg)); 84 + msg.mrq = MRQ_THERMAL; 85 + msg.tx.data = &req; 86 + msg.tx.size = sizeof(req); 87 + 88 + return tegra_bpmp_transfer(zone->tegra->bpmp, &msg); 89 + } 90 + 91 + static void tz_device_update_work_fn(struct work_struct *work) 92 + { 93 + struct tegra_bpmp_thermal_zone *zone; 94 + 95 + zone = container_of(work, struct tegra_bpmp_thermal_zone, 96 + tz_device_update_work); 97 + 98 + thermal_zone_device_update(zone->tzd, THERMAL_TRIP_VIOLATED); 99 + } 100 + 101 + static void bpmp_mrq_thermal(unsigned int mrq, struct tegra_bpmp_channel *ch, 102 + void *data) 103 + { 104 + struct mrq_thermal_bpmp_to_host_request *req; 105 + struct tegra_bpmp_thermal *tegra = data; 106 + int i; 107 + 108 + req = (struct mrq_thermal_bpmp_to_host_request *)ch->ib->data; 109 + 110 + if (req->type != CMD_THERMAL_HOST_TRIP_REACHED) { 111 + dev_err(tegra->dev, "%s: invalid request type: %d\n", 112 + __func__, req->type); 113 + tegra_bpmp_mrq_return(ch, -EINVAL, NULL, 0); 114 + return; 115 + } 116 + 117 + for (i = 0; i < tegra->num_zones; ++i) { 118 + if (tegra->zones[i]->idx != req->host_trip_reached.zone) 119 + continue; 120 + 121 + schedule_work(&tegra->zones[i]->tz_device_update_work); 122 + tegra_bpmp_mrq_return(ch, 0, NULL, 0); 123 + return; 124 + } 125 + 126 + dev_err(tegra->dev, "%s: invalid thermal zone: %d\n", __func__, 127 + req->host_trip_reached.zone); 128 + tegra_bpmp_mrq_return(ch, -EINVAL, NULL, 0); 129 + } 130 + 131 + static int 
tegra_bpmp_thermal_get_num_zones(struct tegra_bpmp *bpmp, 132 + int *num_zones) 133 + { 134 + struct mrq_thermal_host_to_bpmp_request req; 135 + union mrq_thermal_bpmp_to_host_response reply; 136 + struct tegra_bpmp_message msg; 137 + int err; 138 + 139 + memset(&req, 0, sizeof(req)); 140 + req.type = CMD_THERMAL_GET_NUM_ZONES; 141 + 142 + memset(&msg, 0, sizeof(msg)); 143 + msg.mrq = MRQ_THERMAL; 144 + msg.tx.data = &req; 145 + msg.tx.size = sizeof(req); 146 + msg.rx.data = &reply; 147 + msg.rx.size = sizeof(reply); 148 + 149 + err = tegra_bpmp_transfer(bpmp, &msg); 150 + if (err) 151 + return err; 152 + 153 + *num_zones = reply.get_num_zones.num; 154 + 155 + return 0; 156 + } 157 + 158 + static const struct thermal_zone_of_device_ops tegra_bpmp_of_thermal_ops = { 159 + .get_temp = tegra_bpmp_thermal_get_temp, 160 + .set_trips = tegra_bpmp_thermal_set_trips, 161 + }; 162 + 163 + static int tegra_bpmp_thermal_probe(struct platform_device *pdev) 164 + { 165 + struct tegra_bpmp *bpmp = dev_get_drvdata(pdev->dev.parent); 166 + struct tegra_bpmp_thermal *tegra; 167 + struct thermal_zone_device *tzd; 168 + unsigned int i, max_num_zones; 169 + int err; 170 + 171 + tegra = devm_kzalloc(&pdev->dev, sizeof(*tegra), GFP_KERNEL); 172 + if (!tegra) 173 + return -ENOMEM; 174 + 175 + tegra->dev = &pdev->dev; 176 + tegra->bpmp = bpmp; 177 + 178 + err = tegra_bpmp_thermal_get_num_zones(bpmp, &max_num_zones); 179 + if (err) { 180 + dev_err(&pdev->dev, "failed to get the number of zones: %d\n", 181 + err); 182 + return err; 183 + } 184 + 185 + tegra->zones = devm_kcalloc(&pdev->dev, max_num_zones, 186 + sizeof(*tegra->zones), GFP_KERNEL); 187 + if (!tegra->zones) 188 + return -ENOMEM; 189 + 190 + for (i = 0; i < max_num_zones; ++i) { 191 + struct tegra_bpmp_thermal_zone *zone; 192 + int temp; 193 + 194 + zone = devm_kzalloc(&pdev->dev, sizeof(*zone), GFP_KERNEL); 195 + if (!zone) 196 + return -ENOMEM; 197 + 198 + zone->idx = i; 199 + zone->tegra = tegra; 200 + 201 + err = 
tegra_bpmp_thermal_get_temp(zone, &temp); 202 + if (err < 0) { 203 + devm_kfree(&pdev->dev, zone); 204 + continue; 205 + } 206 + 207 + tzd = devm_thermal_zone_of_sensor_register( 208 + &pdev->dev, i, zone, &tegra_bpmp_of_thermal_ops); 209 + if (IS_ERR(tzd)) { 210 + if (PTR_ERR(tzd) == -EPROBE_DEFER) 211 + return -EPROBE_DEFER; 212 + devm_kfree(&pdev->dev, zone); 213 + continue; 214 + } 215 + 216 + zone->tzd = tzd; 217 + INIT_WORK(&zone->tz_device_update_work, 218 + tz_device_update_work_fn); 219 + 220 + tegra->zones[tegra->num_zones++] = zone; 221 + } 222 + 223 + err = tegra_bpmp_request_mrq(bpmp, MRQ_THERMAL, bpmp_mrq_thermal, 224 + tegra); 225 + if (err) { 226 + dev_err(&pdev->dev, "failed to register mrq handler: %d\n", 227 + err); 228 + return err; 229 + } 230 + 231 + platform_set_drvdata(pdev, tegra); 232 + 233 + return 0; 234 + } 235 + 236 + static int tegra_bpmp_thermal_remove(struct platform_device *pdev) 237 + { 238 + struct tegra_bpmp_thermal *tegra = platform_get_drvdata(pdev); 239 + 240 + tegra_bpmp_free_mrq(tegra->bpmp, MRQ_THERMAL, tegra); 241 + 242 + return 0; 243 + } 244 + 245 + static const struct of_device_id tegra_bpmp_thermal_of_match[] = { 246 + { .compatible = "nvidia,tegra186-bpmp-thermal" }, 247 + { }, 248 + }; 249 + MODULE_DEVICE_TABLE(of, tegra_bpmp_thermal_of_match); 250 + 251 + static struct platform_driver tegra_bpmp_thermal_driver = { 252 + .probe = tegra_bpmp_thermal_probe, 253 + .remove = tegra_bpmp_thermal_remove, 254 + .driver = { 255 + .name = "tegra-bpmp-thermal", 256 + .of_match_table = tegra_bpmp_thermal_of_match, 257 + }, 258 + }; 259 + module_platform_driver(tegra_bpmp_thermal_driver); 260 + 261 + MODULE_AUTHOR("Mikko Perttunen <mperttunen@nvidia.com>"); 262 + MODULE_DESCRIPTION("NVIDIA Tegra BPMP thermal sensor driver"); 263 + MODULE_LICENSE("GPL v2");
include/dt-bindings/reset/mt7622-reset.h (+94)
+/*
+ * Copyright (c) 2017 MediaTek Inc.
+ * Author: Sean Wang <sean.wang@mediatek.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _DT_BINDINGS_RESET_CONTROLLER_MT7622
+#define _DT_BINDINGS_RESET_CONTROLLER_MT7622
+
+/* INFRACFG resets */
+#define MT7622_INFRA_EMI_REG_RST	0
+#define MT7622_INFRA_DRAMC0_A0_RST	1
+#define MT7622_INFRA_APCIRQ_EINT_RST	3
+#define MT7622_INFRA_APXGPT_RST		4
+#define MT7622_INFRA_SCPSYS_RST		5
+#define MT7622_INFRA_PMIC_WRAP_RST	7
+#define MT7622_INFRA_IRRX_RST		9
+#define MT7622_INFRA_EMI_RST		16
+#define MT7622_INFRA_WED0_RST		17
+#define MT7622_INFRA_DRAMC_RST		18
+#define MT7622_INFRA_CCI_INTF_RST	19
+#define MT7622_INFRA_TRNG_RST		21
+#define MT7622_INFRA_SYSIRQ_RST		22
+#define MT7622_INFRA_WED1_RST		25
+
+/* PERICFG Subsystem resets */
+#define MT7622_PERI_UART0_SW_RST	0
+#define MT7622_PERI_UART1_SW_RST	1
+#define MT7622_PERI_UART2_SW_RST	2
+#define MT7622_PERI_UART3_SW_RST	3
+#define MT7622_PERI_UART4_SW_RST	4
+#define MT7622_PERI_BTIF_SW_RST		6
+#define MT7622_PERI_PWM_SW_RST		8
+#define MT7622_PERI_AUXADC_SW_RST	10
+#define MT7622_PERI_DMA_SW_RST		11
+#define MT7622_PERI_IRTX_SW_RST		13
+#define MT7622_PERI_NFI_SW_RST		14
+#define MT7622_PERI_THERM_SW_RST	16
+#define MT7622_PERI_MSDC0_SW_RST	19
+#define MT7622_PERI_MSDC1_SW_RST	20
+#define MT7622_PERI_I2C0_SW_RST		22
+#define MT7622_PERI_I2C1_SW_RST		23
+#define MT7622_PERI_I2C2_SW_RST		24
+#define MT7622_PERI_SPI0_SW_RST		33
+#define MT7622_PERI_SPI1_SW_RST		34
+#define MT7622_PERI_FLASHIF_SW_RST	36
+
+/* TOPRGU resets */
+#define MT7622_TOPRGU_INFRA_RST		0
+#define MT7622_TOPRGU_ETHDMA_RST	1
+#define MT7622_TOPRGU_DDRPHY_RST	6
+#define MT7622_TOPRGU_INFRA_AO_RST	8
+#define MT7622_TOPRGU_CONN_RST		9
+#define MT7622_TOPRGU_APMIXED_RST	10
+#define MT7622_TOPRGU_CONN_MCU_RST	12
+
+/* PCIe/SATA Subsystem resets */
+#define MT7622_SATA_PHY_REG_RST		12
+#define MT7622_SATA_PHY_SW_RST		13
+#define MT7622_SATA_AXI_BUS_RST		15
+#define MT7622_PCIE1_CORE_RST		19
+#define MT7622_PCIE1_MMIO_RST		20
+#define MT7622_PCIE1_HRST		21
+#define MT7622_PCIE1_USER_RST		22
+#define MT7622_PCIE1_PIPE_RST		23
+#define MT7622_PCIE0_CORE_RST		27
+#define MT7622_PCIE0_MMIO_RST		28
+#define MT7622_PCIE0_HRST		29
+#define MT7622_PCIE0_USER_RST		30
+#define MT7622_PCIE0_PIPE_RST		31
+
+/* SSUSB Subsystem resets */
+#define MT7622_SSUSB_PHY_PWR_RST	3
+#define MT7622_SSUSB_MAC_PWR_RST	4
+
+/* ETHSYS Subsystem resets */
+#define MT7622_ETHSYS_SYS_RST		0
+#define MT7622_ETHSYS_MCM_RST		2
+#define MT7622_ETHSYS_HSDMA_RST		5
+#define MT7622_ETHSYS_FE_RST		6
+#define MT7622_ETHSYS_GMAC_RST		23
+#define MT7622_ETHSYS_EPHY_RST		24
+#define MT7622_ETHSYS_CRYPTO_RST	29
+#define MT7622_ETHSYS_PPE_RST		31
+
+#endif /* _DT_BINDINGS_RESET_CONTROLLER_MT7622 */
include/linux/of_reserved_mem.h (+5)
···
 void fdt_init_reserved_mem(void);
 void fdt_reserved_mem_save_node(unsigned long node, const char *uname,
				phys_addr_t base, phys_addr_t size);
+struct reserved_mem *of_reserved_mem_lookup(struct device_node *np);
 #else
 static inline int of_reserved_mem_device_init_by_idx(struct device *dev,
					struct device_node *np, int idx)
···
 static inline void fdt_init_reserved_mem(void) { }
 static inline void fdt_reserved_mem_save_node(unsigned long node,
		const char *uname, phys_addr_t base, phys_addr_t size) { }
+static inline struct reserved_mem *of_reserved_mem_lookup(struct device_node *np)
+{
+	return NULL;
+}
 #endif

 /**
include/linux/omap-gpmc.h (-12)
···
 }
 #endif /* CONFIG_OMAP_GPMC */

-/*--------------------------------*/
-
-/* deprecated APIs */
-#if IS_ENABLED(CONFIG_OMAP_GPMC)
-void gpmc_update_nand_reg(struct gpmc_nand_regs *reg, int cs);
-#else
-static inline void gpmc_update_nand_reg(struct gpmc_nand_regs *reg, int cs)
-{
-}
-#endif /* CONFIG_OMAP_GPMC */
-/*--------------------------------*/
-
 extern int gpmc_calc_timings(struct gpmc_timings *gpmc_t,
			     struct gpmc_settings *gpmc_s,
			     struct gpmc_device_timings *dev_t);
include/linux/platform_data/mtd-nand-omap2.h (-2)
···
 	void __iomem *gpmc_bch_result4[GPMC_BCH_NUM_REMAINDER];
 	void __iomem *gpmc_bch_result5[GPMC_BCH_NUM_REMAINDER];
 	void __iomem *gpmc_bch_result6[GPMC_BCH_NUM_REMAINDER];
-	/* Deprecated. Do not use */
-	void __iomem *gpmc_status;
 };

 struct omap_nand_platform_data {
include/linux/qcom_scm.h (+4)
···
 extern int qcom_scm_restore_sec_cfg(u32 device_id, u32 spare);
 extern int qcom_scm_iommu_secure_ptbl_size(u32 spare, size_t *size);
 extern int qcom_scm_iommu_secure_ptbl_init(u64 addr, u32 size, u32 spare);
+extern int qcom_scm_io_readl(phys_addr_t addr, unsigned int *val);
+extern int qcom_scm_io_writel(phys_addr_t addr, unsigned int val);
 #else
 static inline
 int qcom_scm_set_cold_boot_addr(void *entry, const cpumask_t *cpus)
···
 static inline int qcom_scm_restore_sec_cfg(u32 device_id, u32 spare) { return -ENODEV; }
 static inline int qcom_scm_iommu_secure_ptbl_size(u32 spare, size_t *size) { return -ENODEV; }
 static inline int qcom_scm_iommu_secure_ptbl_init(u64 addr, u32 size, u32 spare) { return -ENODEV; }
+static inline int qcom_scm_io_readl(phys_addr_t addr, unsigned int *val) { return -ENODEV; }
+static inline int qcom_scm_io_writel(phys_addr_t addr, unsigned int val) { return -ENODEV; }
 #endif
 #endif
include/linux/ts-nbus.h (+18)
+/*
+ * Copyright (c) 2016 - Savoir-faire Linux
+ * Author: Sebastien Bourdelin <sebastien.bourdelin@savoirfairelinux.com>
+ *
+ * This file is licensed under the terms of the GNU General Public
+ * License version 2. This program is licensed "as is" without any
+ * warranty of any kind, whether express or implied.
+ */
+
+#ifndef _TS_NBUS_H
+#define _TS_NBUS_H
+
+struct ts_nbus;
+
+extern int ts_nbus_read(struct ts_nbus *ts_nbus, u8 adr, u16 *val);
+extern int ts_nbus_write(struct ts_nbus *ts_nbus, u8 adr, u16 val);
+
+#endif /* _TS_NBUS_H */
include/soc/tegra/bpmp.h (+56 -3)
···
 	struct reset_controller_dev rstc;

 	struct genpd_onecell_data genpd;
-};

-struct tegra_bpmp *tegra_bpmp_get(struct device *dev);
-void tegra_bpmp_put(struct tegra_bpmp *bpmp);
+#ifdef CONFIG_DEBUG_FS
+	struct dentry *debugfs_mirror;
+#endif
+};

 struct tegra_bpmp_message {
 	unsigned int mrq;
···
 	struct {
 		void *data;
 		size_t size;
+		int ret;
 	} rx;
 };

+#if IS_ENABLED(CONFIG_TEGRA_BPMP)
+struct tegra_bpmp *tegra_bpmp_get(struct device *dev);
+void tegra_bpmp_put(struct tegra_bpmp *bpmp);
 int tegra_bpmp_transfer_atomic(struct tegra_bpmp *bpmp,
			       struct tegra_bpmp_message *msg);
 int tegra_bpmp_transfer(struct tegra_bpmp *bpmp,
			struct tegra_bpmp_message *msg);
+void tegra_bpmp_mrq_return(struct tegra_bpmp_channel *channel, int code,
+			   const void *data, size_t size);

 int tegra_bpmp_request_mrq(struct tegra_bpmp *bpmp, unsigned int mrq,
			   tegra_bpmp_mrq_handler_t handler, void *data);
 void tegra_bpmp_free_mrq(struct tegra_bpmp *bpmp, unsigned int mrq,
			 void *data);
+#else
+static inline struct tegra_bpmp *tegra_bpmp_get(struct device *dev)
+{
+	return ERR_PTR(-ENOTSUPP);
+}
+static inline void tegra_bpmp_put(struct tegra_bpmp *bpmp)
+{
+}
+static inline int tegra_bpmp_transfer_atomic(struct tegra_bpmp *bpmp,
+					     struct tegra_bpmp_message *msg)
+{
+	return -ENOTSUPP;
+}
+static inline int tegra_bpmp_transfer(struct tegra_bpmp *bpmp,
+				      struct tegra_bpmp_message *msg)
+{
+	return -ENOTSUPP;
+}
+static inline void tegra_bpmp_mrq_return(struct tegra_bpmp_channel *channel,
+					 int code, const void *data,
+					 size_t size)
+{
+}
+
+static inline int tegra_bpmp_request_mrq(struct tegra_bpmp *bpmp,
+					 unsigned int mrq,
+					 tegra_bpmp_mrq_handler_t handler,
+					 void *data)
+{
+	return -ENOTSUPP;
+}
+static inline void tegra_bpmp_free_mrq(struct tegra_bpmp *bpmp,
+				       unsigned int mrq, void *data)
+{
+}
+#endif

 #if IS_ENABLED(CONFIG_CLK_TEGRA_BPMP)
 int tegra_bpmp_init_clocks(struct tegra_bpmp *bpmp);
···
 	return 0;
 }
 #endif
+
+#if IS_ENABLED(CONFIG_DEBUG_FS)
+int tegra_bpmp_init_debugfs(struct tegra_bpmp *bpmp);
+#else
+static inline int tegra_bpmp_init_debugfs(struct tegra_bpmp *bpmp)
+{
+	return 0;
+}
+#endif

 #endif /* __SOC_TEGRA_BPMP_H */