Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc

Pull ARM SoC driver updates from Olof Johansson:
"Some releases this branch is nearly empty, others we have more stuff.
It tends to gather drivers that need SoC modification or dependencies
such that they have to (also) go in through our tree.

For this release, we have merged in part of the reset controller tree
(with handshake that the parts we have merged in will remain stable),
as well as dependencies on a few clock branches.

In general, new items here are:

- Qualcomm driver for SMM/SMD, which is how they communicate with the
coprocessors on (some) of their platforms

- memory controller work for ARM's PL172 memory controller

- reset drivers for various platforms

- PMU power domain support for Marvell platforms

- Tegra support for T132/T210 SoCs: PMC, fuse, memory controller
per-SoC support"

* tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (49 commits)
ARM: tegra: cpuidle: implement cpuidle_state.enter_freeze()
ARM: tegra: Disable cpuidle if PSCI is available
soc/tegra: pmc: Use existing pclk reference
soc/tegra: pmc: Remove unnecessary return statement
soc: tegra: Remove redundant $(CONFIG_ARCH_TEGRA) in Makefile
memory: tegra: Add Tegra210 support
memory: tegra: Add support for a variable-size client ID bitfield
clk: shmobile: rz: Add CPG/MSTP Clock Domain support
clk: shmobile: rcar-gen2: Add CPG/MSTP Clock Domain support
clk: shmobile: r8a7779: Add CPG/MSTP Clock Domain support
clk: shmobile: r8a7778: Add CPG/MSTP Clock Domain support
clk: shmobile: Add CPG/MSTP Clock Domain support
ARM: dove: create a proper PMU driver for power domains, PMU IRQs and resets
reset: reset-zynq: Adding support for Xilinx Zynq reset controller.
docs: dts: Added documentation for Xilinx Zynq Reset Controller bindings.
MIPS: ath79: Add the reset controller to the AR9132 dtsi
reset: Add a driver for the reset controller on the AR71XX/AR9XXX
devicetree: Add bindings for the ATH79 reset controller
reset: socfpga: Update reset-socfpga to read the altr,modrst-offset property
doc: dt: add documentation for lpc1850-rgu reset driver
...

+6784 -501
+26 -3
Documentation/devicetree/bindings/clock/renesas,r8a7778-cpg-clocks.txt
···
 * Renesas R8A7778 Clock Pulse Generator (CPG)

 The CPG generates core clocks for the R8A7778. It includes two PLLs and
-several fixed ratio dividers
+several fixed ratio dividers.
+The CPG also provides a Clock Domain for SoC devices, in combination with the
+CPG Module Stop (MSTP) Clocks.

 Required Properties:
···
 - #clock-cells: Must be 1
 - clock-output-names: The names of the clocks. Supported clocks are
   "plla", "pllb", "b", "out", "p", "s", and "s1".
+- #power-domain-cells: Must be 0
+
+SoC devices that are part of the CPG/MSTP Clock Domain and can be power-managed
+through an MSTP clock should refer to the CPG device node in their
+"power-domains" property, as documented by the generic PM domain bindings in
+Documentation/devicetree/bindings/power/power_domain.txt.


-Example
--------
+Examples
+--------
+
+- CPG device node:

	cpg_clocks: cpg_clocks@ffc80000 {
		compatible = "renesas,r8a7778-cpg-clocks";
···
		clocks = <&extal_clk>;
		clock-output-names = "plla", "pllb", "b",
				     "out", "p", "s", "s1";
+		#power-domain-cells = <0>;
+	};
+
+
+- CPG/MSTP Clock Domain member device node:
+
+	sdhi0: sd@ffe4c000 {
+		compatible = "renesas,sdhi-r8a7778";
+		reg = <0xffe4c000 0x100>;
+		interrupts = <0 87 IRQ_TYPE_LEVEL_HIGH>;
+		clocks = <&mstp3_clks R8A7778_CLK_SDHI0>;
+		power-domains = <&cpg_clocks>;
+		status = "disabled";
	};
+26 -4
Documentation/devicetree/bindings/clock/renesas,r8a7779-cpg-clocks.txt
···
 * Renesas R8A7779 Clock Pulse Generator (CPG)

 The CPG generates core clocks for the R8A7779. It includes one PLL and
-several fixed ratio dividers
+several fixed ratio dividers.
+The CPG also provides a Clock Domain for SoC devices, in combination with the
+CPG Module Stop (MSTP) Clocks.

 Required Properties:
···
 - #clock-cells: Must be 1
 - clock-output-names: The names of the clocks. Supported clocks are "plla",
   "z", "zs", "s", "s1", "p", "b", "out".
+- #power-domain-cells: Must be 0
+
+SoC devices that are part of the CPG/MSTP Clock Domain and can be power-managed
+through an MSTP clock should refer to the CPG device node in their
+"power-domains" property, as documented by the generic PM domain bindings in
+Documentation/devicetree/bindings/power/power_domain.txt.


-Example
--------
+Examples
+--------
+
+- CPG device node:

	cpg_clocks: cpg_clocks@ffc80000 {
		compatible = "renesas,r8a7779-cpg-clocks";
-		reg = <0 0xffc80000 0 0x30>;
+		reg = <0xffc80000 0x30>;
		clocks = <&extal_clk>;
		#clock-cells = <1>;
		clock-output-names = "plla", "z", "zs", "s", "s1", "p",
				     "b", "out";
+		#power-domain-cells = <0>;
+	};
+
+
+- CPG/MSTP Clock Domain member device node:
+
+	sata: sata@fc600000 {
+		compatible = "renesas,sata-r8a7779", "renesas,rcar-sata";
+		reg = <0xfc600000 0x2000>;
+		interrupts = <0 100 IRQ_TYPE_LEVEL_HIGH>;
+		clocks = <&mstp1_clks R8A7779_CLK_SATA>;
+		power-domains = <&cpg_clocks>;
	};
+24 -2
Documentation/devicetree/bindings/clock/renesas,rcar-gen2-cpg-clocks.txt
···
 The CPG generates core clocks for the R-Car Gen2 SoCs. It includes three PLLs
 and several fixed ratio dividers.
+The CPG also provides a Clock Domain for SoC devices, in combination with the
+CPG Module Stop (MSTP) Clocks.

 Required Properties:
···
 - clock-output-names: The names of the clocks. Supported clocks are "main",
   "pll0", "pll1", "pll3", "lb", "qspi", "sdh", "sd0", "sd1", "z", "rcan", and
   "adsp"
+- #power-domain-cells: Must be 0
+
+SoC devices that are part of the CPG/MSTP Clock Domain and can be power-managed
+through an MSTP clock should refer to the CPG device node in their
+"power-domains" property, as documented by the generic PM domain bindings in
+Documentation/devicetree/bindings/power/power_domain.txt.


-Example
--------
+Examples
+--------
+
+- CPG device node:

	cpg_clocks: cpg_clocks@e6150000 {
		compatible = "renesas,r8a7790-cpg-clocks",
···
		clock-output-names = "main", "pll0, "pll1", "pll3",
				     "lb", "qspi", "sdh", "sd0", "sd1", "z",
				     "rcan", "adsp";
+		#power-domain-cells = <0>;
+	};
+
+
+- CPG/MSTP Clock Domain member device node:
+
+	thermal@e61f0000 {
+		compatible = "renesas,thermal-r8a7790", "renesas,rcar-thermal";
+		reg = <0 0xe61f0000 0 0x14>, <0 0xe61f0100 0 0x38>;
+		interrupts = <0 69 IRQ_TYPE_LEVEL_HIGH>;
+		clocks = <&mstp5_clks R8A7790_CLK_THERMAL>;
+		power-domains = <&cpg_clocks>;
	};
+27 -2
Documentation/devicetree/bindings/clock/renesas,rz-cpg-clocks.txt
···
 The CPG generates core clocks for the RZ SoCs. It includes the PLL, variable
 CPU and GPU clocks, and several fixed ratio dividers.
+The CPG also provides a Clock Domain for SoC devices, in combination with the
+CPG Module Stop (MSTP) Clocks.

 Required Properties:
···
 - #clock-cells: Must be 1
 - clock-output-names: The names of the clocks. Supported clocks are "pll",
   "i", and "g"
+- #power-domain-cells: Must be 0
+
+SoC devices that are part of the CPG/MSTP Clock Domain and can be power-managed
+through an MSTP clock should refer to the CPG device node in their
+"power-domains" property, as documented by the generic PM domain bindings in
+Documentation/devicetree/bindings/power/power_domain.txt.


-Example
--------
+Examples
+--------
+
+- CPG device node:

	cpg_clocks: cpg_clocks@fcfe0000 {
		#clock-cells = <1>;
···
		reg = <0xfcfe0000 0x18>;
		clocks = <&extal_clk>, <&usb_x1_clk>;
		clock-output-names = "pll", "i", "g";
+		#power-domain-cells = <0>;
+	};
+
+
+- CPG/MSTP Clock Domain member device node:
+
+	mtu2: timer@fcff0000 {
+		compatible = "renesas,mtu2-r7s72100", "renesas,mtu2";
+		reg = <0xfcff0000 0x400>;
+		interrupts = <0 107 IRQ_TYPE_LEVEL_HIGH>;
+		interrupt-names = "tgi0a";
+		clocks = <&mstp3_clks R7S72100_CLK_MTU2>;
+		clock-names = "fck";
+		power-domains = <&cpg_clocks>;
+		status = "disabled";
	};
+44
Documentation/devicetree/bindings/cpufreq/tegra124-cpufreq.txt
···
+Tegra124 CPU frequency scaling driver bindings
+----------------------------------------------
+
+Both required and optional properties listed below must be defined
+under node /cpus/cpu@0.
+
+Required properties:
+- clocks: Must contain an entry for each entry in clock-names.
+  See ../clocks/clock-bindings.txt for details.
+- clock-names: Must include the following entries:
+  - cpu_g: Clock mux for the fast CPU cluster.
+  - cpu_lp: Clock mux for the low-power CPU cluster.
+  - pll_x: Fast PLL clocksource.
+  - pll_p: Auxiliary PLL used during fast PLL rate changes.
+  - dfll: Fast DFLL clocksource that also automatically scales CPU voltage.
+- vdd-cpu-supply: Regulator for CPU voltage
+
+Optional properties:
+- clock-latency: Specify the possible maximum transition latency for clock,
+  in units of nanoseconds.
+
+Example:
+--------
+cpus {
+	#address-cells = <1>;
+	#size-cells = <0>;
+
+	cpu@0 {
+		device_type = "cpu";
+		compatible = "arm,cortex-a15";
+		reg = <0>;
+
+		clocks = <&tegra_car TEGRA124_CLK_CCLK_G>,
+			 <&tegra_car TEGRA124_CLK_CCLK_LP>,
+			 <&tegra_car TEGRA124_CLK_PLL_X>,
+			 <&tegra_car TEGRA124_CLK_PLL_P>,
+			 <&dfll>;
+		clock-names = "cpu_g", "cpu_lp", "pll_x", "pll_p", "dfll";
+		clock-latency = <300000>;
+		vdd-cpu-supply = <&vdd_cpu>;
+	};
+
+	<...>
+};
+125
Documentation/devicetree/bindings/memory-controllers/arm,pl172.txt
···
+* Device tree bindings for ARM PL172 MultiPort Memory Controller
+
+Required properties:
+
+- compatible:		"arm,pl172", "arm,primecell"
+
+- reg:			Must contain an offset/length value for the controller.
+
+- #address-cells:	Must be 2. The partition number has to be encoded in
+			the first address cell and it may accept values 0..N-1
+			(N - total number of partitions). The second cell is
+			the offset into the partition.
+
+- #size-cells:		Must be set to 1.
+
+- ranges:		Must contain one or more chip select memory regions.
+
+- clocks:		Must contain references to controller clocks.
+
+- clock-names:		Must contain "mpmcclk" and "apb_pclk".
+
+- clock-ranges:		Empty property indicating that child nodes can inherit
+			named clocks. Required only if clock tree data is
+			present in the device tree.
+			See clock-bindings.txt
+
+Child chip-select (cs) nodes contain the nodes of the memory devices connected
+to the controller, such as NOR (e.g. cfi-flash) and NAND.
+
+Required child cs node properties:
+
+- #address-cells:	Must be 2.
+
+- #size-cells:		Must be 1.
+
+- ranges:		Empty property indicating that child nodes can inherit
+			the memory layout.
+
+- clock-ranges:		Empty property indicating that child nodes can inherit
+			named clocks. Required only if clock tree data is
+			present in the device tree.
+
+- mpmc,cs:		Chip select number. Indicates to the pl172 driver
+			which chip select is used for accessing the memory.
+
+- mpmc,memory-width:	Width of the chip select memory. Must be equal to
+			either 8, 16 or 32.
+
+Optional child cs node config properties:
+
+- mpmc,async-page-mode:	Enable asynchronous page mode.
+
+- mpmc,cs-active-high:	Set chip select polarity to active high.
+
+- mpmc,byte-lane-low:	Set byte lane state to low.
+
+- mpmc,extended-wait:	Enable extended wait.
+
+- mpmc,buffer-enable:	Enable write buffer.
+
+- mpmc,write-protect:	Enable write protect.
+
+Optional child cs node timing properties:
+
+- mpmc,write-enable-delay:	Delay from chip select assertion to write
+				enable (WE signal) in nanoseconds.
+
+- mpmc,output-enable-delay:	Delay from chip select assertion to output
+				enable (OE signal) in nanoseconds.
+
+- mpmc,write-access-delay:	Delay from chip select assertion to write
+				access in nanoseconds.
+
+- mpmc,read-access-delay:	Delay from chip select assertion to read
+				access in nanoseconds.
+
+- mpmc,page-mode-read-delay:	Delay for asynchronous page mode sequential
+				accesses in nanoseconds.
+
+- mpmc,turn-round-delay:	Delay between accesses to memory banks in
+				nanoseconds.
+
+If any of the above timing parameters are absent, the current parameter value
+will be taken from the corresponding HW register.
+
+Example for a pl172 with NOR flash on chip select 0:
+
+	emc: memory-controller@40005000 {
+		compatible = "arm,pl172", "arm,primecell";
+		reg = <0x40005000 0x1000>;
+		clocks = <&ccu1 CLK_CPU_EMCDIV>, <&ccu1 CLK_CPU_EMC>;
+		clock-names = "mpmcclk", "apb_pclk";
+		#address-cells = <2>;
+		#size-cells = <1>;
+		ranges = <0 0 0x1c000000 0x1000000
+			  1 0 0x1d000000 0x1000000
+			  2 0 0x1e000000 0x1000000
+			  3 0 0x1f000000 0x1000000>;
+
+		cs0 {
+			#address-cells = <2>;
+			#size-cells = <1>;
+			ranges;
+
+			mpmc,cs = <0>;
+			mpmc,memory-width = <16>;
+			mpmc,byte-lane-low;
+			mpmc,write-enable-delay = <0>;
+			mpmc,output-enable-delay = <0>;
+			mpmc,read-access-delay = <70>;
+			mpmc,page-mode-read-delay = <70>;
+
+			flash@0,0 {
+				compatible = "sst,sst39vf320", "cfi-flash";
+				reg = <0 0 0x400000>;
+				bank-width = <2>;
+				#address-cells = <1>;
+				#size-cells = <1>;
+				partition@0 {
+					label = "data";
+					reg = <0 0x400000>;
+				};
+			};
+		};
+	};
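The timing properties above are given in nanoseconds, while the PL172 itself is programmed in mpmcclk cycles. A minimal sketch of the conversion a driver would have to perform, rounding up so the programmed delay is never shorter than requested (the helper name and the 120 MHz example rate are illustrative assumptions, not part of the pl172 driver):

```c
#include <stdint.h>

/* Hypothetical helper: convert a binding timing value in nanoseconds
 * into whole mpmcclk cycles, computing ceil(ns * f / 1e9) so the
 * hardware delay is never shorter than the one requested in the DT. */
static uint32_t ns_to_mpmc_cycles(uint32_t ns, uint64_t mpmcclk_hz)
{
	return (uint32_t)((ns * mpmcclk_hz + 999999999ULL) / 1000000000ULL);
}
```

For instance, `mpmc,read-access-delay = <70>` at an assumed 120 MHz mpmcclk (8.33 ns period) would program 9 cycles.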
+20
Documentation/devicetree/bindings/reset/ath79-reset.txt
···
+Binding for Qualcomm Atheros AR7xxx/AR9XXX reset controller
+
+Please also refer to reset.txt in this directory for common reset
+controller binding usage.
+
+Required Properties:
+- compatible: has to be "qca,<soctype>-reset", "qca,ar7100-reset"
+  as fallback
+- reg: Base address and size of the controller's memory area
+- #reset-cells: Specifies the number of cells needed to encode a reset
+  line; should be 1
+
+Example:
+
+	reset-controller@1806001c {
+		compatible = "qca,ar9132-reset", "qca,ar7100-reset";
+		reg = <0x1806001c 0x4>;
+
+		#reset-cells = <1>;
+	};
+84
Documentation/devicetree/bindings/reset/nxp,lpc1850-rgu.txt
···
+NXP LPC1850 Reset Generation Unit (RGU)
+========================================
+
+Please also refer to reset.txt in this directory for common reset
+controller binding usage.
+
+Required properties:
+- compatible: Should be "nxp,lpc1850-rgu"
+- reg: register base and length
+- clocks: phandle and clock specifier to RGU clocks
+- clock-names: should contain "delay" and "reg"
+- #reset-cells: should be 1
+
+See table below for valid peripheral reset numbers. Numbers not
+in the table below are either reserved or not applicable for
+normal operation.
+
+Reset	Peripheral
+  9	System control unit (SCU)
+ 12	ARM Cortex-M0 subsystem core (LPC43xx only)
+ 13	CPU core
+ 16	LCD controller
+ 17	USB0
+ 18	USB1
+ 19	DMA
+ 20	SDIO
+ 21	External memory controller (EMC)
+ 22	Ethernet
+ 25	Flash bank A
+ 27	EEPROM
+ 28	GPIO
+ 29	Flash bank B
+ 32	Timer0
+ 33	Timer1
+ 34	Timer2
+ 35	Timer3
+ 36	Repetitive Interrupt timer (RIT)
+ 37	State Configurable Timer (SCT)
+ 38	Motor control PWM (MCPWM)
+ 39	QEI
+ 40	ADC0
+ 41	ADC1
+ 42	DAC
+ 44	USART0
+ 45	UART1
+ 46	USART2
+ 47	USART3
+ 48	I2C0
+ 49	I2C1
+ 50	SSP0
+ 51	SSP1
+ 52	I2S0 and I2S1
+ 53	Serial Flash Interface (SPIFI)
+ 54	C_CAN1
+ 55	C_CAN0
+ 56	ARM Cortex-M0 application core (LPC4370 only)
+ 57	SGPIO (LPC43xx only)
+ 58	SPI (LPC43xx only)
+ 60	ADCHS (12-bit ADC) (LPC4370 only)
+
+Refer to the NXP LPC18xx or LPC43xx user manual for more details about
+the reset signals and the connected block/peripheral.
+
+Reset provider example:
+	rgu: reset-controller@40053000 {
+		compatible = "nxp,lpc1850-rgu";
+		reg = <0x40053000 0x1000>;
+		clocks = <&cgu BASE_SAFE_CLK>, <&ccu1 CLK_CPU_BUS>;
+		clock-names = "delay", "reg";
+		#reset-cells = <1>;
+	};
+
+Reset consumer example:
+	mac: ethernet@40010000 {
+		compatible = "nxp,lpc1850-dwmac", "snps,dwmac-3.611", "snps,dwmac";
+		reg = <0x40010000 0x2000>;
+		interrupts = <5>;
+		interrupt-names = "macirq";
+		clocks = <&ccu1 CLK_CPU_ETHERNET>;
+		clock-names = "stmmaceth";
+		resets = <&rgu 22>;
+		reset-names = "stmmaceth";
+		status = "disabled";
+	};
+1 -1
Documentation/devicetree/bindings/reset/st,sti-picophyreset.txt
···
	};

 Macro definitions for the supported reset channels can be found in:
-include/dt-bindings/reset-controller/stih407-resets.h
+include/dt-bindings/reset/stih407-resets.h
+2 -2
Documentation/devicetree/bindings/reset/st,sti-powerdown.txt
···
 Macro definitions for the supported reset channels can be found in:

-include/dt-bindings/reset-controller/stih415-resets.h
-include/dt-bindings/reset-controller/stih416-resets.h
+include/dt-bindings/reset/stih415-resets.h
+include/dt-bindings/reset/stih416-resets.h
+2 -2
Documentation/devicetree/bindings/reset/st,sti-softreset.txt
···
 Macro definitions for the supported reset channels can be found in:

-include/dt-bindings/reset-controller/stih415-resets.h
-include/dt-bindings/reset-controller/stih416-resets.h
+include/dt-bindings/reset/stih415-resets.h
+include/dt-bindings/reset/stih416-resets.h
+68
Documentation/devicetree/bindings/reset/zynq-reset.txt
···
+Xilinx Zynq Reset Manager
+
+The Zynq AP-SoC has several different resets.
+
+See Chapter 26 of the Zynq TRM (UG585) for more information about Zynq resets.
+
+Required properties:
+- compatible: "xlnx,zynq-reset"
+- reg: SLCR offset and size taken via syscon <0x200 0x48>
+- syscon: <&slcr>
+  This should be a phandle to the Zynq's SLCR registers.
+- #reset-cells: Must be 1
+
+The Zynq Reset Manager needs to be a child node of the SLCR.
+
+Example:
+	rstc: rstc@200 {
+		compatible = "xlnx,zynq-reset";
+		reg = <0x200 0x48>;
+		#reset-cells = <1>;
+		syscon = <&slcr>;
+	};
+
+Reset outputs:
+  0: soft reset
+ 32: ddr reset
+ 64: topsw reset
+ 96: dmac reset
+128: usb0 reset
+129: usb1 reset
+160: gem0 reset
+161: gem1 reset
+164: gem0 rx reset
+165: gem1 rx reset
+166: gem0 ref reset
+167: gem1 ref reset
+192: sdio0 reset
+193: sdio1 reset
+196: sdio0 ref reset
+197: sdio1 ref reset
+224: spi0 reset
+225: spi1 reset
+226: spi0 ref reset
+227: spi1 ref reset
+256: can0 reset
+257: can1 reset
+258: can0 ref reset
+259: can1 ref reset
+288: i2c0 reset
+289: i2c1 reset
+320: uart0 reset
+321: uart1 reset
+322: uart0 ref reset
+323: uart1 ref reset
+352: gpio reset
+384: lqspi reset
+385: qspi ref reset
+416: smc reset
+417: smc ref reset
+448: ocm reset
+512: fpga0 out reset
+513: fpga1 out reset
+514: fpga2 out reset
+515: fpga3 out reset
+544: a9 reset 0
+545: a9 reset 1
+552: peri reset
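The reset numbering above is not arbitrary: its spacing in steps of 32 suggests it encodes a 32-bit SLCR reset register word and a bit within it, as n = word * 32 + bit (e.g. usb1 reset = 129 would be word 4, bit 1). This decoding is an inference from the table, not something the binding text states, so treat it as a sketch:

```c
/* Decode a Zynq reset number from the table above into a 32-bit SLCR
 * reset-register word index and a bit position within that word.
 * Inferred from the table's spacing; illustrative, not part of the
 * official binding. */
struct zynq_rst {
	unsigned int word;	/* which 32-bit reset register */
	unsigned int bit;	/* bit within that register */
};

static struct zynq_rst zynq_rst_decode(unsigned int n)
{
	struct zynq_rst r = { n / 32, n % 32 };
	return r;
}
```

Under this reading, gpio reset (352) lands in word 11, bit 0, and the gem0/gem1 pairs sit in adjacent bits of the same word.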
+117
Documentation/devicetree/bindings/soc/qcom,smd-rpm.txt
···
+Qualcomm Resource Power Manager (RPM) over SMD
+
+This driver is used to interface with the Resource Power Manager (RPM) found in
+various Qualcomm platforms. The RPM allows each component in the system to vote
+for the state of system resources, such as clocks, regulators and bus
+frequencies.
+
+- compatible:
+	Usage: required
+	Value type: <string>
+	Definition: must be one of:
+		    "qcom,rpm-msm8974"
+
+- qcom,smd-channels:
+	Usage: required
+	Value type: <stringlist>
+	Definition: Shared Memory channel used for communication with the RPM
+
+= SUBDEVICES
+
+The RPM exposes resources to its subnodes. The below bindings specify the set
+of valid subnodes that can operate on these resources.
+
+== Regulators
+
+Regulator nodes are identified by their compatible:
+
+- compatible:
+	Usage: required
+	Value type: <string>
+	Definition: must be one of:
+		    "qcom,rpm-pm8841-regulators"
+		    "qcom,rpm-pm8941-regulators"
+
+- vdd_s1-supply:
+- vdd_s2-supply:
+- vdd_s3-supply:
+- vdd_s4-supply:
+- vdd_s5-supply:
+- vdd_s6-supply:
+- vdd_s7-supply:
+- vdd_s8-supply:
+	Usage: optional (pm8841 only)
+	Value type: <phandle>
+	Definition: reference to regulator supplying the input pin, as
+		    described in the data sheet
+
+- vdd_s1-supply:
+- vdd_s2-supply:
+- vdd_s3-supply:
+- vdd_l1_l3-supply:
+- vdd_l2_lvs1_2_3-supply:
+- vdd_l4_l11-supply:
+- vdd_l5_l7-supply:
+- vdd_l6_l12_l14_l15-supply:
+- vdd_l8_l16_l18_l19-supply:
+- vdd_l9_l10_l17_l22-supply:
+- vdd_l13_l20_l23_l24-supply:
+- vdd_l21-supply:
+- vin_5vs-supply:
+	Usage: optional (pm8941 only)
+	Value type: <phandle>
+	Definition: reference to regulator supplying the input pin, as
+		    described in the data sheet
+
+The regulator node houses sub-nodes for each regulator within the device. Each
+sub-node is identified using the node's name, with valid values listed for each
+of the pmics below.
+
+pm8841:
+	s1, s2, s3, s4, s5, s6, s7, s8
+
+pm8941:
+	s1, s2, s3, s4, l1, l2, l3, l4, l5, l6, l7, l8, l9, l10, l11, l12, l13,
+	l14, l15, l16, l17, l18, l19, l20, l21, l22, l23, l24, lvs1, lvs2,
+	lvs3, 5vs1, 5vs2
+
+The content of each sub-node is defined by the standard binding for regulators -
+see regulator.txt.
+
+= EXAMPLE
+
+	smd {
+		compatible = "qcom,smd";
+
+		rpm {
+			interrupts = <0 168 1>;
+			qcom,ipc = <&apcs 8 0>;
+			qcom,smd-edge = <15>;
+
+			rpm_requests {
+				compatible = "qcom,rpm-msm8974";
+				qcom,smd-channels = "rpm_requests";
+
+				pm8941-regulators {
+					compatible = "qcom,rpm-pm8941-regulators";
+					vdd_l13_l20_l23_l24-supply = <&pm8941_boost>;
+
+					pm8941_s3: s3 {
+						regulator-min-microvolt = <1800000>;
+						regulator-max-microvolt = <1800000>;
+					};
+
+					pm8941_boost: s4 {
+						regulator-min-microvolt = <5000000>;
+						regulator-max-microvolt = <5000000>;
+					};
+
+					pm8941_l20: l20 {
+						regulator-min-microvolt = <2950000>;
+						regulator-max-microvolt = <2950000>;
+					};
+				};
+			};
+		};
+	};
+79
Documentation/devicetree/bindings/soc/qcom/qcom,smd.txt
···
+Qualcomm Shared Memory Driver (SMD) binding
+
+This binding describes the Qualcomm Shared Memory Driver, a FIFO-based
+communication channel for sending data between the various subsystems in
+Qualcomm platforms.
+
+- compatible:
+	Usage: required
+	Value type: <stringlist>
+	Definition: must be "qcom,smd"
+
+= EDGES
+
+Each subnode of the SMD node represents a remote subsystem or a remote
+processor of some sort - or in SMD language an "edge". The names of the edges
+are not important.
+The edge is described by the following properties:
+
+- interrupts:
+	Usage: required
+	Value type: <prop-encoded-array>
+	Definition: should specify the IRQ used by the remote processor to
+		    signal this processor about communication related updates
+
+- qcom,ipc:
+	Usage: required
+	Value type: <prop-encoded-array>
+	Definition: three entries specifying the outgoing ipc bit used for
+		    signaling the remote processor:
+		    - phandle to a syscon node representing the apcs registers
+		    - u32 representing offset to the register within the syscon
+		    - u32 representing the ipc bit within the register
+
+- qcom,smd-edge:
+	Usage: required
+	Value type: <u32>
+	Definition: the identifier of the remote processor in the smd channel
+		    allocation table
+
+= SMD DEVICES
+
+In turn, subnodes of the "edges" represent devices tied to SMD channels on that
+"edge". The names of the devices are not important. The properties of these
+nodes are defined by the individual bindings for the SMD devices - but must
+contain the following property:
+
+- qcom,smd-channels:
+	Usage: required
+	Value type: <stringlist>
+	Definition: a list of channels tied to this device, used for matching
+		    the device to channels
+
+= EXAMPLE
+
+The following example represents an smd node, with one edge representing the
+"rpm" subsystem. For the "rpm" subsystem we have a device tied to the
+"rpm_request" channel.
+
+	apcs: syscon@f9011000 {
+		compatible = "syscon";
+		reg = <0xf9011000 0x1000>;
+	};
+
+	smd {
+		compatible = "qcom,smd";
+
+		rpm {
+			interrupts = <0 168 1>;
+			qcom,ipc = <&apcs 8 0>;
+			qcom,smd-edge = <15>;
+
+			rpm_requests {
+				compatible = "qcom,rpm-msm8974";
+				qcom,smd-channels = "rpm_requests";
+
+				...
+			};
+		};
+	};
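The three-cell qcom,ipc property boils down to a single doorbell write: set one bit at a fixed byte offset inside the apcs syscon. A sketch of how the two u32 cells after the phandle could be interpreted (the struct and function names are illustrative; the real driver goes through a regmap):

```c
#include <stdint.h>

/* Interpretation of qcom,ipc = <&apcs 8 0>: after the syscon phandle,
 * the first cell is a byte offset into the syscon and the second is
 * the bit to set. Kicking the remote processor is then a write of
 * (1 << bit) to that offset. Illustrative only. */
struct qcom_ipc {
	uint32_t offset;	/* byte offset into the apcs syscon */
	uint32_t mask;		/* value to write to ring the doorbell */
};

static struct qcom_ipc qcom_ipc_decode(uint32_t offset_cell, uint32_t bit_cell)
{
	struct qcom_ipc ipc = { offset_cell, (uint32_t)1 << bit_cell };
	return ipc;
}
```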
+1
MAINTAINERS
···
 S:	Maintained
 F:	drivers/reset/
 F:	Documentation/devicetree/bindings/reset/
+F:	include/dt-bindings/reset/
 F:	include/linux/reset.h
 F:	include/linux/reset-controller.h
+1 -1
arch/arm/boot/dts/stih407-family.dtsi
···
 #include "stih407-pinctrl.dtsi"
 #include <dt-bindings/mfd/st-lpc.h>
 #include <dt-bindings/phy/phy.h>
-#include <dt-bindings/reset-controller/stih407-resets.h>
+#include <dt-bindings/reset/stih407-resets.h>
 #include <dt-bindings/interrupt-controller/irq-st.h>
 / {
	#address-cells = <1>;
+1 -1
arch/arm/boot/dts/stih415.dtsi
···
 #include "stih415-clock.dtsi"
 #include "stih415-pinctrl.dtsi"
 #include <dt-bindings/interrupt-controller/arm-gic.h>
-#include <dt-bindings/reset-controller/stih415-resets.h>
+#include <dt-bindings/reset/stih415-resets.h>
 / {

	L2: cache-controller {
+1 -1
arch/arm/boot/dts/stih416.dtsi
···
 #include <dt-bindings/phy/phy.h>
 #include <dt-bindings/interrupt-controller/arm-gic.h>
-#include <dt-bindings/reset-controller/stih416-resets.h>
+#include <dt-bindings/reset/stih416-resets.h>
 #include <dt-bindings/interrupt-controller/irq-st.h>
 / {
	L2: cache-controller {
+1
arch/arm/mach-mvebu/Kconfig
···
	select MACH_MVEBU_ANY
	select ORION_IRQCHIP
	select ORION_TIMER
+	select PM_GENERIC_DOMAINS if PM
	select PINCTRL_DOVE
	help
	  Say 'Y' here if you want your kernel to support the
+2
arch/arm/mach-mvebu/dove.c
···
 #include <linux/mbus.h>
 #include <linux/of.h>
 #include <linux/of_platform.h>
+#include <linux/soc/dove/pmu.h>
 #include <asm/hardware/cache-tauros2.h>
 #include <asm/mach/arch.h>
 #include "common.h"
···
	tauros2_init(0);
 #endif
	BUG_ON(mvebu_mbus_dt_init(false));
+	dove_init_pmu();
	of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
 }
+2
arch/arm/mach-shmobile/Kconfig
···
 config PM_RCAR
	bool
+	select PM_GENERIC_DOMAINS if PM

 config PM_RMOBILE
	bool
···
 config ARCH_R7S72100
	bool "RZ/A1H (R7S72100)"
+	select PM_GENERIC_DOMAINS if PM
	select SYS_SUPPORTS_SH_MTU2

 config ARCH_R8A73A4
+14 -5
arch/arm/mach-tegra/cpuidle-tegra114.c
···
 #include <asm/cpuidle.h>
 #include <asm/smp_plat.h>
 #include <asm/suspend.h>
+#include <asm/psci.h>

 #include "pm.h"
 #include "sleep.h"
···
	tegra_set_cpu_in_lp2();
	cpu_pm_enter();

-	tick_broadcast_enter();
-
	call_firmware_op(prepare_idle);

	/* Do suspend by ourselves if the firmware does not implement it */
	if (call_firmware_op(do_idle, 0) == -ENOSYS)
		cpu_suspend(0, tegra30_sleep_cpu_secondary_finish);
-
-	tick_broadcast_exit();

	cpu_pm_exit();
	tegra_clear_cpu_in_lp2();
···
	local_fiq_enable();

	return index;
+}
+
+static void tegra114_idle_enter_freeze(struct cpuidle_device *dev,
+				       struct cpuidle_driver *drv,
+				       int index)
+{
+	tegra114_idle_power_down(dev, drv, index);
 }
 #endif

···
 #ifdef CONFIG_PM_SLEEP
	[1] = {
		.enter			= tegra114_idle_power_down,
+		.enter_freeze		= tegra114_idle_enter_freeze,
		.exit_latency		= 500,
		.target_residency	= 1000,
+		.flags			= CPUIDLE_FLAG_TIMER_STOP,
		.power_usage		= 0,
		.name			= "powered-down",
		.desc			= "CPU power gated",
···

 int __init tegra114_cpuidle_init(void)
 {
-	return cpuidle_register(&tegra_idle_driver, NULL);
+	if (!psci_smp_available())
+		return cpuidle_register(&tegra_idle_driver, NULL);
+
+	return 0;
 }
-3
arch/arm/mach-tegra/iomap.h
···
 #define TEGRA_EMC_BASE		0x7000F400
 #define TEGRA_EMC_SIZE		SZ_1K

-#define TEGRA_FUSE_BASE		0x7000F800
-#define TEGRA_FUSE_SIZE		SZ_1K
-
 #define TEGRA_EMC0_BASE		0x7001A000
 #define TEGRA_EMC0_SIZE		SZ_2K
+1
arch/mips/Kconfig
···
 config ATH79
	bool "Atheros AR71XX/AR724X/AR913X based boards"
+	select ARCH_HAS_RESET_CONTROLLER
	select ARCH_REQUIRE_GPIOLIB
	select BOOT_RAW
	select CEVT_R4K
+8
arch/mips/boot/dts/qca/ar9132.dtsi
···
		interrupt-controller;
		#interrupt-cells = <1>;
	};
+
+	rst: reset-controller@1806001c {
+		compatible = "qca,ar9132-reset",
+			     "qca,ar7100-reset";
+		reg = <0x1806001c 0x4>;
+
+		#reset-cells = <1>;
+	};
 };

 spi@1f000000 {
+87
drivers/clk/shmobile/clk-mstp.c
··· 2 2 * R-Car MSTP clocks 3 3 * 4 4 * Copyright (C) 2013 Ideas On Board SPRL 5 + * Copyright (C) 2015 Glider bvba 5 6 * 6 7 * Contact: Laurent Pinchart <laurent.pinchart@ideasonboard.com> 7 8 * ··· 11 10 * the Free Software Foundation; version 2 of the License. 12 11 */ 13 12 13 + #include <linux/clk.h> 14 14 #include <linux/clk-provider.h> 15 15 #include <linux/clkdev.h> 16 + #include <linux/clk/shmobile.h> 17 + #include <linux/device.h> 16 18 #include <linux/io.h> 17 19 #include <linux/of.h> 18 20 #include <linux/of_address.h> 21 + #include <linux/pm_clock.h> 22 + #include <linux/pm_domain.h> 19 23 #include <linux/spinlock.h> 20 24 21 25 /* ··· 242 236 of_clk_add_provider(np, of_clk_src_onecell_get, &group->data); 243 237 } 244 238 CLK_OF_DECLARE(cpg_mstp_clks, "renesas,cpg-mstp-clocks", cpg_mstp_clocks_init); 239 + 240 + 241 + #ifdef CONFIG_PM_GENERIC_DOMAINS_OF 242 + int cpg_mstp_attach_dev(struct generic_pm_domain *domain, struct device *dev) 243 + { 244 + struct device_node *np = dev->of_node; 245 + struct of_phandle_args clkspec; 246 + struct clk *clk; 247 + int i = 0; 248 + int error; 249 + 250 + while (!of_parse_phandle_with_args(np, "clocks", "#clock-cells", i, 251 + &clkspec)) { 252 + if (of_device_is_compatible(clkspec.np, 253 + "renesas,cpg-mstp-clocks")) 254 + goto found; 255 + 256 + of_node_put(clkspec.np); 257 + i++; 258 + } 259 + 260 + return 0; 261 + 262 + found: 263 + clk = of_clk_get_from_provider(&clkspec); 264 + of_node_put(clkspec.np); 265 + 266 + if (IS_ERR(clk)) 267 + return PTR_ERR(clk); 268 + 269 + error = pm_clk_create(dev); 270 + if (error) { 271 + dev_err(dev, "pm_clk_create failed %d\n", error); 272 + goto fail_put; 273 + } 274 + 275 + error = pm_clk_add_clk(dev, clk); 276 + if (error) { 277 + dev_err(dev, "pm_clk_add_clk %pC failed %d\n", clk, error); 278 + goto fail_destroy; 279 + } 280 + 281 + return 0; 282 + 283 + fail_destroy: 284 + pm_clk_destroy(dev); 285 + fail_put: 286 + clk_put(clk); 287 + return error; 288 + } 289 + 290 + 
+void cpg_mstp_detach_dev(struct generic_pm_domain *domain, struct device *dev)
+{
+	if (!list_empty(&dev->power.subsys_data->clock_list))
+		pm_clk_destroy(dev);
+}
+
+void __init cpg_mstp_add_clk_domain(struct device_node *np)
+{
+	struct generic_pm_domain *pd;
+	u32 ncells;
+
+	if (of_property_read_u32(np, "#power-domain-cells", &ncells)) {
+		pr_warn("%s lacks #power-domain-cells\n", np->full_name);
+		return;
+	}
+
+	pd = kzalloc(sizeof(*pd), GFP_KERNEL);
+	if (!pd)
+		return;
+
+	pd->name = np->name;
+
+	pd->flags = GENPD_FLAG_PM_CLK;
+	pm_genpd_init(pd, &simple_qos_governor, false);
+	pd->attach_dev = cpg_mstp_attach_dev;
+	pd->detach_dev = cpg_mstp_detach_dev;
+
+	of_genpd_add_provider_simple(np, pd);
+}
+#endif /* !CONFIG_PM_GENERIC_DOMAINS_OF */
drivers/clk/shmobile/clk-r8a7778.c | +2
···
 	}
 
 	of_clk_add_provider(np, of_clk_src_onecell_get, &cpg->data);
+
+	cpg_mstp_add_clk_domain(np);
 }
 
 CLK_OF_DECLARE(r8a7778_cpg_clks, "renesas,r8a7778-cpg-clocks",
drivers/clk/shmobile/clk-r8a7779.c | +2
···
 	}
 
 	of_clk_add_provider(np, of_clk_src_onecell_get, &cpg->data);
+
+	cpg_mstp_add_clk_domain(np);
 }
 CLK_OF_DECLARE(r8a7779_cpg_clks, "renesas,r8a7779-cpg-clocks",
 	       r8a7779_cpg_clocks_init);
drivers/clk/shmobile/clk-rcar-gen2.c | +2
···
 	}
 
 	of_clk_add_provider(np, of_clk_src_onecell_get, &cpg->data);
+
+	cpg_mstp_add_clk_domain(np);
 }
 CLK_OF_DECLARE(rcar_gen2_cpg_clks, "renesas,rcar-gen2-cpg-clocks",
 	       rcar_gen2_cpg_clocks_init);
drivers/clk/shmobile/clk-rz.c | +3
···
  */
 
 #include <linux/clk-provider.h>
+#include <linux/clk/shmobile.h>
 #include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/of.h>
···
 	}
 
 	of_clk_add_provider(np, of_clk_src_onecell_get, &cpg->data);
+
+	cpg_mstp_add_clk_domain(np);
 }
 CLK_OF_DECLARE(rz_cpg_clks, "renesas,rz-cpg-clocks", rz_cpg_clocks_init);
drivers/cpufreq/Kconfig.arm | +10 -3
···
 	help
 	  This adds the CPUFreq driver support for SPEAr SOCs.
 
-config ARM_TEGRA_CPUFREQ
-	bool "TEGRA CPUFreq support"
+config ARM_TEGRA20_CPUFREQ
+	bool "Tegra20 CPUFreq support"
 	depends on ARCH_TEGRA
 	default y
 	help
-	  This adds the CPUFreq driver support for TEGRA SOCs.
+	  This adds the CPUFreq driver support for Tegra20 SOCs.
+
+config ARM_TEGRA124_CPUFREQ
+	tristate "Tegra124 CPUFreq support"
+	depends on ARCH_TEGRA && CPUFREQ_DT
+	default y
+	help
+	  This adds the CPUFreq driver support for Tegra124 SOCs.
 
 config ARM_PXA2xx_CPUFREQ
 	tristate "Intel PXA2xx CPUfreq driver"
drivers/cpufreq/Makefile | +2 -1
···
 obj-$(CONFIG_ARM_SA1100_CPUFREQ)	+= sa1100-cpufreq.o
 obj-$(CONFIG_ARM_SA1110_CPUFREQ)	+= sa1110-cpufreq.o
 obj-$(CONFIG_ARM_SPEAR_CPUFREQ)		+= spear-cpufreq.o
-obj-$(CONFIG_ARM_TEGRA_CPUFREQ)		+= tegra-cpufreq.o
+obj-$(CONFIG_ARM_TEGRA20_CPUFREQ)	+= tegra20-cpufreq.o
+obj-$(CONFIG_ARM_TEGRA124_CPUFREQ)	+= tegra124-cpufreq.o
 obj-$(CONFIG_ARM_VEXPRESS_SPC_CPUFREQ)	+= vexpress-spc-cpufreq.o
 
 ##################################################################################
drivers/cpufreq/tegra-cpufreq.c → drivers/cpufreq/tegra20-cpufreq.c (renamed)
+214
drivers/cpufreq/tegra124-cpufreq.c
··· 1 + /* 2 + * Tegra 124 cpufreq driver 3 + * 4 + * This software is licensed under the terms of the GNU General Public 5 + * License version 2, as published by the Free Software Foundation, and 6 + * may be copied, distributed, and modified under those terms. 7 + * 8 + * This program is distributed in the hope that it will be useful, 9 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 10 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 11 + * GNU General Public License for more details. 12 + */ 13 + 14 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 15 + 16 + #include <linux/clk.h> 17 + #include <linux/cpufreq-dt.h> 18 + #include <linux/err.h> 19 + #include <linux/init.h> 20 + #include <linux/kernel.h> 21 + #include <linux/module.h> 22 + #include <linux/of_device.h> 23 + #include <linux/of.h> 24 + #include <linux/platform_device.h> 25 + #include <linux/pm_opp.h> 26 + #include <linux/regulator/consumer.h> 27 + #include <linux/types.h> 28 + 29 + struct tegra124_cpufreq_priv { 30 + struct regulator *vdd_cpu_reg; 31 + struct clk *cpu_clk; 32 + struct clk *pllp_clk; 33 + struct clk *pllx_clk; 34 + struct clk *dfll_clk; 35 + struct platform_device *cpufreq_dt_pdev; 36 + }; 37 + 38 + static int tegra124_cpu_switch_to_dfll(struct tegra124_cpufreq_priv *priv) 39 + { 40 + struct clk *orig_parent; 41 + int ret; 42 + 43 + ret = clk_set_rate(priv->dfll_clk, clk_get_rate(priv->cpu_clk)); 44 + if (ret) 45 + return ret; 46 + 47 + orig_parent = clk_get_parent(priv->cpu_clk); 48 + clk_set_parent(priv->cpu_clk, priv->pllp_clk); 49 + 50 + ret = clk_prepare_enable(priv->dfll_clk); 51 + if (ret) 52 + goto out; 53 + 54 + clk_set_parent(priv->cpu_clk, priv->dfll_clk); 55 + 56 + return 0; 57 + 58 + out: 59 + clk_set_parent(priv->cpu_clk, orig_parent); 60 + 61 + return ret; 62 + } 63 + 64 + static void tegra124_cpu_switch_to_pllx(struct tegra124_cpufreq_priv *priv) 65 + { 66 + clk_set_parent(priv->cpu_clk, priv->pllp_clk); 67 + 
clk_disable_unprepare(priv->dfll_clk); 68 + regulator_sync_voltage(priv->vdd_cpu_reg); 69 + clk_set_parent(priv->cpu_clk, priv->pllx_clk); 70 + } 71 + 72 + static struct cpufreq_dt_platform_data cpufreq_dt_pd = { 73 + .independent_clocks = false, 74 + }; 75 + 76 + static int tegra124_cpufreq_probe(struct platform_device *pdev) 77 + { 78 + struct tegra124_cpufreq_priv *priv; 79 + struct device_node *np; 80 + struct device *cpu_dev; 81 + struct platform_device_info cpufreq_dt_devinfo = {}; 82 + int ret; 83 + 84 + priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL); 85 + if (!priv) 86 + return -ENOMEM; 87 + 88 + cpu_dev = get_cpu_device(0); 89 + if (!cpu_dev) 90 + return -ENODEV; 91 + 92 + np = of_cpu_device_node_get(0); 93 + if (!np) 94 + return -ENODEV; 95 + 96 + priv->vdd_cpu_reg = regulator_get(cpu_dev, "vdd-cpu"); 97 + if (IS_ERR(priv->vdd_cpu_reg)) { 98 + ret = PTR_ERR(priv->vdd_cpu_reg); 99 + goto out_put_np; 100 + } 101 + 102 + priv->cpu_clk = of_clk_get_by_name(np, "cpu_g"); 103 + if (IS_ERR(priv->cpu_clk)) { 104 + ret = PTR_ERR(priv->cpu_clk); 105 + goto out_put_vdd_cpu_reg; 106 + } 107 + 108 + priv->dfll_clk = of_clk_get_by_name(np, "dfll"); 109 + if (IS_ERR(priv->dfll_clk)) { 110 + ret = PTR_ERR(priv->dfll_clk); 111 + goto out_put_cpu_clk; 112 + } 113 + 114 + priv->pllx_clk = of_clk_get_by_name(np, "pll_x"); 115 + if (IS_ERR(priv->pllx_clk)) { 116 + ret = PTR_ERR(priv->pllx_clk); 117 + goto out_put_dfll_clk; 118 + } 119 + 120 + priv->pllp_clk = of_clk_get_by_name(np, "pll_p"); 121 + if (IS_ERR(priv->pllp_clk)) { 122 + ret = PTR_ERR(priv->pllp_clk); 123 + goto out_put_pllx_clk; 124 + } 125 + 126 + ret = tegra124_cpu_switch_to_dfll(priv); 127 + if (ret) 128 + goto out_put_pllp_clk; 129 + 130 + cpufreq_dt_devinfo.name = "cpufreq-dt"; 131 + cpufreq_dt_devinfo.parent = &pdev->dev; 132 + cpufreq_dt_devinfo.data = &cpufreq_dt_pd; 133 + cpufreq_dt_devinfo.size_data = sizeof(cpufreq_dt_pd); 134 + 135 + priv->cpufreq_dt_pdev = 136 + 
platform_device_register_full(&cpufreq_dt_devinfo); 137 + if (IS_ERR(priv->cpufreq_dt_pdev)) { 138 + ret = PTR_ERR(priv->cpufreq_dt_pdev); 139 + goto out_switch_to_pllx; 140 + } 141 + 142 + platform_set_drvdata(pdev, priv); 143 + 144 + return 0; 145 + 146 + out_switch_to_pllx: 147 + tegra124_cpu_switch_to_pllx(priv); 148 + out_put_pllp_clk: 149 + clk_put(priv->pllp_clk); 150 + out_put_pllx_clk: 151 + clk_put(priv->pllx_clk); 152 + out_put_dfll_clk: 153 + clk_put(priv->dfll_clk); 154 + out_put_cpu_clk: 155 + clk_put(priv->cpu_clk); 156 + out_put_vdd_cpu_reg: 157 + regulator_put(priv->vdd_cpu_reg); 158 + out_put_np: 159 + of_node_put(np); 160 + 161 + return ret; 162 + } 163 + 164 + static int tegra124_cpufreq_remove(struct platform_device *pdev) 165 + { 166 + struct tegra124_cpufreq_priv *priv = platform_get_drvdata(pdev); 167 + 168 + platform_device_unregister(priv->cpufreq_dt_pdev); 169 + tegra124_cpu_switch_to_pllx(priv); 170 + 171 + clk_put(priv->pllp_clk); 172 + clk_put(priv->pllx_clk); 173 + clk_put(priv->dfll_clk); 174 + clk_put(priv->cpu_clk); 175 + regulator_put(priv->vdd_cpu_reg); 176 + 177 + return 0; 178 + } 179 + 180 + static struct platform_driver tegra124_cpufreq_platdrv = { 181 + .driver.name = "cpufreq-tegra124", 182 + .probe = tegra124_cpufreq_probe, 183 + .remove = tegra124_cpufreq_remove, 184 + }; 185 + 186 + static int __init tegra_cpufreq_init(void) 187 + { 188 + int ret; 189 + struct platform_device *pdev; 190 + 191 + if (!of_machine_is_compatible("nvidia,tegra124")) 192 + return -ENODEV; 193 + 194 + /* 195 + * Platform driver+device required for handling EPROBE_DEFER with 196 + * the regulator and the DFLL clock 197 + */ 198 + ret = platform_driver_register(&tegra124_cpufreq_platdrv); 199 + if (ret) 200 + return ret; 201 + 202 + pdev = platform_device_register_simple("cpufreq-tegra124", -1, NULL, 0); 203 + if (IS_ERR(pdev)) { 204 + platform_driver_unregister(&tegra124_cpufreq_platdrv); 205 + return PTR_ERR(pdev); 206 + } 207 + 208 + return 0; 
209 + } 210 + module_init(tegra_cpufreq_init); 211 + 212 + MODULE_AUTHOR("Tuomas Tynkkynen <ttynkkynen@nvidia.com>"); 213 + MODULE_DESCRIPTION("cpufreq driver for NVIDIA Tegra124"); 214 + MODULE_LICENSE("GPL v2");
drivers/iommu/Kconfig | +1 -1
···
 	select IOMMU_API
 	help
 	  This driver supports the IOMMU hardware (SMMU) found on NVIDIA Tegra
-	  SoCs (Tegra30 up to Tegra132).
+	  SoCs (Tegra30 up to Tegra210).
 
 config EXYNOS_IOMMU
 	bool "Exynos IOMMU Support"
drivers/memory/Kconfig | +8
···
 
 if MEMORY
 
+config ARM_PL172_MPMC
+	tristate "ARM PL172 MPMC driver"
+	depends on ARM_AMBA && OF
+	help
+	  This selects the ARM PrimeCell PL172 MultiPort Memory Controller.
+	  If you have an embedded system with an AMBA bus and a PL172
+	  controller, say Y or M here.
+
 config ATMEL_SDRAMC
 	bool "Atmel (Multi-port DDR-)SDRAM Controller"
 	default y
drivers/memory/Makefile | +1
···
 ifeq ($(CONFIG_DDR),y)
 obj-$(CONFIG_OF)		+= of_memory.o
 endif
+obj-$(CONFIG_ARM_PL172_MPMC)	+= pl172.o
 obj-$(CONFIG_ATMEL_SDRAMC)	+= atmel-sdramc.o
 obj-$(CONFIG_TI_AEMIF)		+= ti-aemif.o
 obj-$(CONFIG_TI_EMIF)		+= emif.o
+301
drivers/memory/pl172.c
··· 1 + /* 2 + * Memory controller driver for ARM PrimeCell PL172 3 + * PrimeCell MultiPort Memory Controller (PL172) 4 + * 5 + * Copyright (C) 2015 Joachim Eastwood <manabian@gmail.com> 6 + * 7 + * Based on: 8 + * TI AEMIF driver, Copyright (C) 2010 - 2013 Texas Instruments Inc. 9 + * 10 + * This file is licensed under the terms of the GNU General Public 11 + * License version 2. This program is licensed "as is" without any 12 + * warranty of any kind, whether express or implied. 13 + */ 14 + 15 + #include <linux/amba/bus.h> 16 + #include <linux/clk.h> 17 + #include <linux/device.h> 18 + #include <linux/err.h> 19 + #include <linux/init.h> 20 + #include <linux/io.h> 21 + #include <linux/kernel.h> 22 + #include <linux/module.h> 23 + #include <linux/of.h> 24 + #include <linux/of_platform.h> 25 + #include <linux/time.h> 26 + 27 + #define MPMC_STATIC_CFG(n) (0x200 + 0x20 * n) 28 + #define MPMC_STATIC_CFG_MW_8BIT 0x0 29 + #define MPMC_STATIC_CFG_MW_16BIT 0x1 30 + #define MPMC_STATIC_CFG_MW_32BIT 0x2 31 + #define MPMC_STATIC_CFG_PM BIT(3) 32 + #define MPMC_STATIC_CFG_PC BIT(6) 33 + #define MPMC_STATIC_CFG_PB BIT(7) 34 + #define MPMC_STATIC_CFG_EW BIT(8) 35 + #define MPMC_STATIC_CFG_B BIT(19) 36 + #define MPMC_STATIC_CFG_P BIT(20) 37 + #define MPMC_STATIC_WAIT_WEN(n) (0x204 + 0x20 * n) 38 + #define MPMC_STATIC_WAIT_WEN_MAX 0x0f 39 + #define MPMC_STATIC_WAIT_OEN(n) (0x208 + 0x20 * n) 40 + #define MPMC_STATIC_WAIT_OEN_MAX 0x0f 41 + #define MPMC_STATIC_WAIT_RD(n) (0x20c + 0x20 * n) 42 + #define MPMC_STATIC_WAIT_RD_MAX 0x1f 43 + #define MPMC_STATIC_WAIT_PAGE(n) (0x210 + 0x20 * n) 44 + #define MPMC_STATIC_WAIT_PAGE_MAX 0x1f 45 + #define MPMC_STATIC_WAIT_WR(n) (0x214 + 0x20 * n) 46 + #define MPMC_STATIC_WAIT_WR_MAX 0x1f 47 + #define MPMC_STATIC_WAIT_TURN(n) (0x218 + 0x20 * n) 48 + #define MPMC_STATIC_WAIT_TURN_MAX 0x0f 49 + 50 + /* Maximum number of static chip selects */ 51 + #define PL172_MAX_CS 4 52 + 53 + struct pl172_data { 54 + void __iomem *base; 55 + unsigned long rate; 
56 + struct clk *clk; 57 + }; 58 + 59 + static int pl172_timing_prop(struct amba_device *adev, 60 + const struct device_node *np, const char *name, 61 + u32 reg_offset, u32 max, int start) 62 + { 63 + struct pl172_data *pl172 = amba_get_drvdata(adev); 64 + int cycles; 65 + u32 val; 66 + 67 + if (!of_property_read_u32(np, name, &val)) { 68 + cycles = DIV_ROUND_UP(val * pl172->rate, NSEC_PER_MSEC) - start; 69 + if (cycles < 0) { 70 + cycles = 0; 71 + } else if (cycles > max) { 72 + dev_err(&adev->dev, "%s timing too tight\n", name); 73 + return -EINVAL; 74 + } 75 + 76 + writel(cycles, pl172->base + reg_offset); 77 + } 78 + 79 + dev_dbg(&adev->dev, "%s: %u cycle(s)\n", name, start + 80 + readl(pl172->base + reg_offset)); 81 + 82 + return 0; 83 + } 84 + 85 + static int pl172_setup_static(struct amba_device *adev, 86 + struct device_node *np, u32 cs) 87 + { 88 + struct pl172_data *pl172 = amba_get_drvdata(adev); 89 + u32 cfg; 90 + int ret; 91 + 92 + /* MPMC static memory configuration */ 93 + if (!of_property_read_u32(np, "mpmc,memory-width", &cfg)) { 94 + if (cfg == 8) { 95 + cfg = MPMC_STATIC_CFG_MW_8BIT; 96 + } else if (cfg == 16) { 97 + cfg = MPMC_STATIC_CFG_MW_16BIT; 98 + } else if (cfg == 32) { 99 + cfg = MPMC_STATIC_CFG_MW_32BIT; 100 + } else { 101 + dev_err(&adev->dev, "invalid memory width cs%u\n", cs); 102 + return -EINVAL; 103 + } 104 + } else { 105 + dev_err(&adev->dev, "memory-width property required\n"); 106 + return -EINVAL; 107 + } 108 + 109 + if (of_property_read_bool(np, "mpmc,async-page-mode")) 110 + cfg |= MPMC_STATIC_CFG_PM; 111 + 112 + if (of_property_read_bool(np, "mpmc,cs-active-high")) 113 + cfg |= MPMC_STATIC_CFG_PC; 114 + 115 + if (of_property_read_bool(np, "mpmc,byte-lane-low")) 116 + cfg |= MPMC_STATIC_CFG_PB; 117 + 118 + if (of_property_read_bool(np, "mpmc,extended-wait")) 119 + cfg |= MPMC_STATIC_CFG_EW; 120 + 121 + if (of_property_read_bool(np, "mpmc,buffer-enable")) 122 + cfg |= MPMC_STATIC_CFG_B; 123 + 124 + if 
(of_property_read_bool(np, "mpmc,write-protect")) 125 + cfg |= MPMC_STATIC_CFG_P; 126 + 127 + writel(cfg, pl172->base + MPMC_STATIC_CFG(cs)); 128 + dev_dbg(&adev->dev, "mpmc static config cs%u: 0x%08x\n", cs, cfg); 129 + 130 + /* MPMC static memory timing */ 131 + ret = pl172_timing_prop(adev, np, "mpmc,write-enable-delay", 132 + MPMC_STATIC_WAIT_WEN(cs), 133 + MPMC_STATIC_WAIT_WEN_MAX, 1); 134 + if (ret) 135 + goto fail; 136 + 137 + ret = pl172_timing_prop(adev, np, "mpmc,output-enable-delay", 138 + MPMC_STATIC_WAIT_OEN(cs), 139 + MPMC_STATIC_WAIT_OEN_MAX, 0); 140 + if (ret) 141 + goto fail; 142 + 143 + ret = pl172_timing_prop(adev, np, "mpmc,read-access-delay", 144 + MPMC_STATIC_WAIT_RD(cs), 145 + MPMC_STATIC_WAIT_RD_MAX, 1); 146 + if (ret) 147 + goto fail; 148 + 149 + ret = pl172_timing_prop(adev, np, "mpmc,page-mode-read-delay", 150 + MPMC_STATIC_WAIT_PAGE(cs), 151 + MPMC_STATIC_WAIT_PAGE_MAX, 1); 152 + if (ret) 153 + goto fail; 154 + 155 + ret = pl172_timing_prop(adev, np, "mpmc,write-access-delay", 156 + MPMC_STATIC_WAIT_WR(cs), 157 + MPMC_STATIC_WAIT_WR_MAX, 2); 158 + if (ret) 159 + goto fail; 160 + 161 + ret = pl172_timing_prop(adev, np, "mpmc,turn-round-delay", 162 + MPMC_STATIC_WAIT_TURN(cs), 163 + MPMC_STATIC_WAIT_TURN_MAX, 1); 164 + if (ret) 165 + goto fail; 166 + 167 + return 0; 168 + fail: 169 + dev_err(&adev->dev, "failed to configure cs%u\n", cs); 170 + return ret; 171 + } 172 + 173 + static int pl172_parse_cs_config(struct amba_device *adev, 174 + struct device_node *np) 175 + { 176 + u32 cs; 177 + 178 + if (!of_property_read_u32(np, "mpmc,cs", &cs)) { 179 + if (cs >= PL172_MAX_CS) { 180 + dev_err(&adev->dev, "cs%u invalid\n", cs); 181 + return -EINVAL; 182 + } 183 + 184 + return pl172_setup_static(adev, np, cs); 185 + } 186 + 187 + dev_err(&adev->dev, "cs property required\n"); 188 + 189 + return -EINVAL; 190 + } 191 + 192 + static const char * const pl172_revisions[] = {"r1", "r2", "r2p3", "r2p4"}; 193 + 194 + static int pl172_probe(struct 
amba_device *adev, const struct amba_id *id) 195 + { 196 + struct device_node *child_np, *np = adev->dev.of_node; 197 + struct device *dev = &adev->dev; 198 + static const char *rev = "?"; 199 + struct pl172_data *pl172; 200 + int ret; 201 + 202 + if (amba_part(adev) == 0x172) { 203 + if (amba_rev(adev) < ARRAY_SIZE(pl172_revisions)) 204 + rev = pl172_revisions[amba_rev(adev)]; 205 + } 206 + 207 + dev_info(dev, "ARM PL%x revision %s\n", amba_part(adev), rev); 208 + 209 + pl172 = devm_kzalloc(dev, sizeof(*pl172), GFP_KERNEL); 210 + if (!pl172) 211 + return -ENOMEM; 212 + 213 + pl172->clk = devm_clk_get(dev, "mpmcclk"); 214 + if (IS_ERR(pl172->clk)) { 215 + dev_err(dev, "no mpmcclk provided clock\n"); 216 + return PTR_ERR(pl172->clk); 217 + } 218 + 219 + ret = clk_prepare_enable(pl172->clk); 220 + if (ret) { 221 + dev_err(dev, "unable to mpmcclk enable clock\n"); 222 + return ret; 223 + } 224 + 225 + pl172->rate = clk_get_rate(pl172->clk) / MSEC_PER_SEC; 226 + if (!pl172->rate) { 227 + dev_err(dev, "unable to get mpmcclk clock rate\n"); 228 + ret = -EINVAL; 229 + goto err_clk_enable; 230 + } 231 + 232 + ret = amba_request_regions(adev, NULL); 233 + if (ret) { 234 + dev_err(dev, "unable to request AMBA regions\n"); 235 + goto err_clk_enable; 236 + } 237 + 238 + pl172->base = devm_ioremap(dev, adev->res.start, 239 + resource_size(&adev->res)); 240 + if (!pl172->base) { 241 + dev_err(dev, "ioremap failed\n"); 242 + ret = -ENOMEM; 243 + goto err_no_ioremap; 244 + } 245 + 246 + amba_set_drvdata(adev, pl172); 247 + 248 + /* 249 + * Loop through each child node, which represent a chip select, and 250 + * configure parameters and timing. If successful; populate devices 251 + * under that node. 
252 + */ 253 + for_each_available_child_of_node(np, child_np) { 254 + ret = pl172_parse_cs_config(adev, child_np); 255 + if (ret) 256 + continue; 257 + 258 + of_platform_populate(child_np, NULL, NULL, dev); 259 + } 260 + 261 + return 0; 262 + 263 + err_no_ioremap: 264 + amba_release_regions(adev); 265 + err_clk_enable: 266 + clk_disable_unprepare(pl172->clk); 267 + return ret; 268 + } 269 + 270 + static int pl172_remove(struct amba_device *adev) 271 + { 272 + struct pl172_data *pl172 = amba_get_drvdata(adev); 273 + 274 + clk_disable_unprepare(pl172->clk); 275 + amba_release_regions(adev); 276 + 277 + return 0; 278 + } 279 + 280 + static const struct amba_id pl172_ids[] = { 281 + { 282 + .id = 0x07341172, 283 + .mask = 0xffffffff, 284 + }, 285 + { 0, 0 }, 286 + }; 287 + MODULE_DEVICE_TABLE(amba, pl172_ids); 288 + 289 + static struct amba_driver pl172_driver = { 290 + .drv = { 291 + .name = "memory-pl172", 292 + }, 293 + .probe = pl172_probe, 294 + .remove = pl172_remove, 295 + .id_table = pl172_ids, 296 + }; 297 + module_amba_driver(pl172_driver); 298 + 299 + MODULE_AUTHOR("Joachim Eastwood <manabian@gmail.com>"); 300 + MODULE_DESCRIPTION("PL172 Memory Controller Driver"); 301 + MODULE_LICENSE("GPL v2");
drivers/memory/tegra/Makefile | +1
···
 tegra-mc-$(CONFIG_ARCH_TEGRA_114_SOC) += tegra114.o
 tegra-mc-$(CONFIG_ARCH_TEGRA_124_SOC) += tegra124.o
 tegra-mc-$(CONFIG_ARCH_TEGRA_132_SOC) += tegra124.o
+tegra-mc-$(CONFIG_ARCH_TEGRA_210_SOC) += tegra210.o
 
 obj-$(CONFIG_TEGRA_MC) += tegra-mc.o
drivers/memory/tegra/mc.c | +6 -2
···
 #define MC_ERR_STATUS_ADR_HI_MASK 0x3
 #define MC_ERR_STATUS_SECURITY (1 << 17)
 #define MC_ERR_STATUS_RW (1 << 16)
-#define MC_ERR_STATUS_CLIENT_MASK 0x7f
 
 #define MC_ERR_ADR 0x0c
···
 #endif
 #ifdef CONFIG_ARCH_TEGRA_132_SOC
 	{ .compatible = "nvidia,tegra132-mc", .data = &tegra132_mc_soc },
+#endif
+#ifdef CONFIG_ARCH_TEGRA_210_SOC
+	{ .compatible = "nvidia,tegra210-mc", .data = &tegra210_mc_soc },
 #endif
 	{ }
 };
···
 	else
 		secure = "";
 
-	id = value & MC_ERR_STATUS_CLIENT_MASK;
+	id = value & mc->soc->client_id_mask;
 
 	for (i = 0; i < mc->soc->num_clients; i++) {
 		if (mc->soc->clients[i].id == id) {
···
 			err);
 		return err;
 	}
+
+	WARN(!mc->soc->client_id_mask, "Missing client ID mask for this SoC\n");
 
 	value = MC_INT_DECERR_MTS | MC_INT_SECERR_SEC | MC_INT_DECERR_VPR |
 		MC_INT_INVALID_APB_ASID_UPDATE | MC_INT_INVALID_SMMU_PAGE |
drivers/memory/tegra/mc.h | +4
···
 extern const struct tegra_mc_soc tegra132_mc_soc;
 #endif
 
+#ifdef CONFIG_ARCH_TEGRA_210_SOC
+extern const struct tegra_mc_soc tegra210_mc_soc;
+#endif
+
 #endif /* MEMORY_TEGRA_MC_H */
drivers/memory/tegra/tegra114.c | +1
···
 	.num_clients = ARRAY_SIZE(tegra114_mc_clients),
 	.num_address_bits = 32,
 	.atom_size = 32,
+	.client_id_mask = 0x7f,
 	.smmu = &tegra114_smmu_soc,
 };
drivers/memory/tegra/tegra124-emc.c | +40 -2
···
 DEFINE_SIMPLE_ATTRIBUTE(emc_debug_rate_fops, emc_debug_rate_get,
			emc_debug_rate_set, "%lld\n");
 
-static void emc_debugfs_init(struct device *dev)
+static int emc_debug_supported_rates_show(struct seq_file *s, void *data)
+{
+	struct tegra_emc *emc = s->private;
+	const char *prefix = "";
+	unsigned int i;
+
+	for (i = 0; i < emc->num_timings; i++) {
+		struct emc_timing *timing = &emc->timings[i];
+
+		seq_printf(s, "%s%lu", prefix, timing->rate);
+
+		prefix = " ";
+	}
+
+	seq_puts(s, "\n");
+
+	return 0;
+}
+
+static int emc_debug_supported_rates_open(struct inode *inode,
+					  struct file *file)
+{
+	return single_open(file, emc_debug_supported_rates_show,
+			   inode->i_private);
+}
+
+static const struct file_operations emc_debug_supported_rates_fops = {
+	.open = emc_debug_supported_rates_open,
+	.read = seq_read,
+	.llseek = seq_lseek,
+	.release = single_release,
+};
+
+static void emc_debugfs_init(struct device *dev, struct tegra_emc *emc)
 {
 	struct dentry *root, *file;
 	struct clk *clk;
···
 	file = debugfs_create_file("rate", S_IRUGO | S_IWUSR, root, clk,
				   &emc_debug_rate_fops);
+	if (!file)
+		dev_err(dev, "failed to create debugfs entry\n");
+
+	file = debugfs_create_file("supported_rates", S_IRUGO, root, emc,
+				   &emc_debug_supported_rates_fops);
 	if (!file)
 		dev_err(dev, "failed to create debugfs entry\n");
 }
···
 	platform_set_drvdata(pdev, emc);
 
 	if (IS_ENABLED(CONFIG_DEBUG_FS))
-		emc_debugfs_init(&pdev->dev);
+		emc_debugfs_init(&pdev->dev, emc);
 
 	return 0;
 };
drivers/memory/tegra/tegra124.c | +2
···
 	.num_clients = ARRAY_SIZE(tegra124_mc_clients),
 	.num_address_bits = 34,
 	.atom_size = 32,
+	.client_id_mask = 0x7f,
 	.smmu = &tegra124_smmu_soc,
 	.emem_regs = tegra124_mc_emem_regs,
 	.num_emem_regs = ARRAY_SIZE(tegra124_mc_emem_regs),
···
 	.num_clients = ARRAY_SIZE(tegra124_mc_clients),
 	.num_address_bits = 34,
 	.atom_size = 32,
+	.client_id_mask = 0x7f,
 	.smmu = &tegra132_smmu_soc,
 };
 #endif /* CONFIG_ARCH_TEGRA_132_SOC */
+1080
drivers/memory/tegra/tegra210.c
··· 1 + /* 2 + * Copyright (C) 2015 NVIDIA CORPORATION. All rights reserved. 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License version 2 as 6 + * published by the Free Software Foundation. 7 + */ 8 + 9 + #include <linux/of.h> 10 + #include <linux/mm.h> 11 + 12 + #include <asm/cacheflush.h> 13 + 14 + #include <dt-bindings/memory/tegra210-mc.h> 15 + 16 + #include "mc.h" 17 + 18 + static const struct tegra_mc_client tegra210_mc_clients[] = { 19 + { 20 + .id = 0x00, 21 + .name = "ptcr", 22 + .swgroup = TEGRA_SWGROUP_PTC, 23 + }, { 24 + .id = 0x01, 25 + .name = "display0a", 26 + .swgroup = TEGRA_SWGROUP_DC, 27 + .smmu = { 28 + .reg = 0x228, 29 + .bit = 1, 30 + }, 31 + .la = { 32 + .reg = 0x2e8, 33 + .shift = 0, 34 + .mask = 0xff, 35 + .def = 0xc2, 36 + }, 37 + }, { 38 + .id = 0x02, 39 + .name = "display0ab", 40 + .swgroup = TEGRA_SWGROUP_DCB, 41 + .smmu = { 42 + .reg = 0x228, 43 + .bit = 2, 44 + }, 45 + .la = { 46 + .reg = 0x2f4, 47 + .shift = 0, 48 + .mask = 0xff, 49 + .def = 0xc6, 50 + }, 51 + }, { 52 + .id = 0x03, 53 + .name = "display0b", 54 + .swgroup = TEGRA_SWGROUP_DC, 55 + .smmu = { 56 + .reg = 0x228, 57 + .bit = 3, 58 + }, 59 + .la = { 60 + .reg = 0x2e8, 61 + .shift = 16, 62 + .mask = 0xff, 63 + .def = 0x50, 64 + }, 65 + }, { 66 + .id = 0x04, 67 + .name = "display0bb", 68 + .swgroup = TEGRA_SWGROUP_DCB, 69 + .smmu = { 70 + .reg = 0x228, 71 + .bit = 4, 72 + }, 73 + .la = { 74 + .reg = 0x2f4, 75 + .shift = 16, 76 + .mask = 0xff, 77 + .def = 0x50, 78 + }, 79 + }, { 80 + .id = 0x05, 81 + .name = "display0c", 82 + .swgroup = TEGRA_SWGROUP_DC, 83 + .smmu = { 84 + .reg = 0x228, 85 + .bit = 5, 86 + }, 87 + .la = { 88 + .reg = 0x2ec, 89 + .shift = 0, 90 + .mask = 0xff, 91 + .def = 0x50, 92 + }, 93 + }, { 94 + .id = 0x06, 95 + .name = "display0cb", 96 + .swgroup = TEGRA_SWGROUP_DCB, 97 + .smmu = { 98 + .reg = 0x228, 99 + .bit = 6, 100 + }, 101 + .la = { 102 + .reg = 0x2f8, 103 + 
.shift = 0, 104 + .mask = 0xff, 105 + .def = 0x50, 106 + }, 107 + }, { 108 + .id = 0x0e, 109 + .name = "afir", 110 + .swgroup = TEGRA_SWGROUP_AFI, 111 + .smmu = { 112 + .reg = 0x228, 113 + .bit = 14, 114 + }, 115 + .la = { 116 + .reg = 0x2e0, 117 + .shift = 0, 118 + .mask = 0xff, 119 + .def = 0x13, 120 + }, 121 + }, { 122 + .id = 0x0f, 123 + .name = "avpcarm7r", 124 + .swgroup = TEGRA_SWGROUP_AVPC, 125 + .smmu = { 126 + .reg = 0x228, 127 + .bit = 15, 128 + }, 129 + .la = { 130 + .reg = 0x2e4, 131 + .shift = 0, 132 + .mask = 0xff, 133 + .def = 0x04, 134 + }, 135 + }, { 136 + .id = 0x10, 137 + .name = "displayhc", 138 + .swgroup = TEGRA_SWGROUP_DC, 139 + .smmu = { 140 + .reg = 0x228, 141 + .bit = 16, 142 + }, 143 + .la = { 144 + .reg = 0x2f0, 145 + .shift = 0, 146 + .mask = 0xff, 147 + .def = 0x50, 148 + }, 149 + }, { 150 + .id = 0x11, 151 + .name = "displayhcb", 152 + .swgroup = TEGRA_SWGROUP_DCB, 153 + .smmu = { 154 + .reg = 0x228, 155 + .bit = 17, 156 + }, 157 + .la = { 158 + .reg = 0x2fc, 159 + .shift = 0, 160 + .mask = 0xff, 161 + .def = 0x50, 162 + }, 163 + }, { 164 + .id = 0x15, 165 + .name = "hdar", 166 + .swgroup = TEGRA_SWGROUP_HDA, 167 + .smmu = { 168 + .reg = 0x228, 169 + .bit = 21, 170 + }, 171 + .la = { 172 + .reg = 0x318, 173 + .shift = 0, 174 + .mask = 0xff, 175 + .def = 0x24, 176 + }, 177 + }, { 178 + .id = 0x16, 179 + .name = "host1xdmar", 180 + .swgroup = TEGRA_SWGROUP_HC, 181 + .smmu = { 182 + .reg = 0x228, 183 + .bit = 22, 184 + }, 185 + .la = { 186 + .reg = 0x310, 187 + .shift = 0, 188 + .mask = 0xff, 189 + .def = 0x1e, 190 + }, 191 + }, { 192 + .id = 0x17, 193 + .name = "host1xr", 194 + .swgroup = TEGRA_SWGROUP_HC, 195 + .smmu = { 196 + .reg = 0x228, 197 + .bit = 23, 198 + }, 199 + .la = { 200 + .reg = 0x310, 201 + .shift = 16, 202 + .mask = 0xff, 203 + .def = 0x50, 204 + }, 205 + }, { 206 + .id = 0x1c, 207 + .name = "nvencsrd", 208 + .swgroup = TEGRA_SWGROUP_NVENC, 209 + .smmu = { 210 + .reg = 0x228, 211 + .bit = 28, 212 + }, 213 + .la = { 214 
+ .reg = 0x328, 215 + .shift = 0, 216 + .mask = 0xff, 217 + .def = 0x23, 218 + }, 219 + }, { 220 + .id = 0x1d, 221 + .name = "ppcsahbdmar", 222 + .swgroup = TEGRA_SWGROUP_PPCS, 223 + .smmu = { 224 + .reg = 0x228, 225 + .bit = 29, 226 + }, 227 + .la = { 228 + .reg = 0x344, 229 + .shift = 0, 230 + .mask = 0xff, 231 + .def = 0x49, 232 + }, 233 + }, { 234 + .id = 0x1e, 235 + .name = "ppcsahbslvr", 236 + .swgroup = TEGRA_SWGROUP_PPCS, 237 + .smmu = { 238 + .reg = 0x228, 239 + .bit = 30, 240 + }, 241 + .la = { 242 + .reg = 0x344, 243 + .shift = 16, 244 + .mask = 0xff, 245 + .def = 0x1a, 246 + }, 247 + }, { 248 + .id = 0x1f, 249 + .name = "satar", 250 + .swgroup = TEGRA_SWGROUP_SATA, 251 + .smmu = { 252 + .reg = 0x228, 253 + .bit = 31, 254 + }, 255 + .la = { 256 + .reg = 0x350, 257 + .shift = 0, 258 + .mask = 0xff, 259 + .def = 0x65, 260 + }, 261 + }, { 262 + .id = 0x27, 263 + .name = "mpcorer", 264 + .swgroup = TEGRA_SWGROUP_MPCORE, 265 + .la = { 266 + .reg = 0x320, 267 + .shift = 0, 268 + .mask = 0xff, 269 + .def = 0x04, 270 + }, 271 + }, { 272 + .id = 0x2b, 273 + .name = "nvencswr", 274 + .swgroup = TEGRA_SWGROUP_NVENC, 275 + .smmu = { 276 + .reg = 0x22c, 277 + .bit = 11, 278 + }, 279 + .la = { 280 + .reg = 0x328, 281 + .shift = 16, 282 + .mask = 0xff, 283 + .def = 0x80, 284 + }, 285 + }, { 286 + .id = 0x31, 287 + .name = "afiw", 288 + .swgroup = TEGRA_SWGROUP_AFI, 289 + .smmu = { 290 + .reg = 0x22c, 291 + .bit = 17, 292 + }, 293 + .la = { 294 + .reg = 0x2e0, 295 + .shift = 16, 296 + .mask = 0xff, 297 + .def = 0x80, 298 + }, 299 + }, { 300 + .id = 0x32, 301 + .name = "avpcarm7w", 302 + .swgroup = TEGRA_SWGROUP_AVPC, 303 + .smmu = { 304 + .reg = 0x22c, 305 + .bit = 18, 306 + }, 307 + .la = { 308 + .reg = 0x2e4, 309 + .shift = 16, 310 + .mask = 0xff, 311 + .def = 0x80, 312 + }, 313 + }, { 314 + .id = 0x35, 315 + .name = "hdaw", 316 + .swgroup = TEGRA_SWGROUP_HDA, 317 + .smmu = { 318 + .reg = 0x22c, 319 + .bit = 21, 320 + }, 321 + .la = { 322 + .reg = 0x318, 323 + .shift 
(drivers/memory/tegra/tegra210.c, continued.) The remainder of the new `tegra210_mc_clients[]` table repeats one pattern per client: an ID, a name, a swgroup, an optional SMMU enable bit (register/bit), and a latency-allowance (LA) field at bit 0 or bit 16 of a 32-bit register. Every entry uses `.mask = 0xff`. The span opens with the tail of the preceding entry (`.shift = 16, .mask = 0xff, .def = 0x80`), then continues:

| ID | Name | SWGroup | SMMU reg / bit | LA reg | LA shift | LA def |
|------|-------------|-------------------------|----------------|--------|----------|--------|
| 0x36 | host1xw | TEGRA_SWGROUP_HC | 0x22c / 22 | 0x314 | 0 | 0x80 |
| 0x39 | mpcorew | TEGRA_SWGROUP_MPCORE | (none) | 0x320 | 16 | 0x80 |
| 0x3b | ppcsahbdmaw | TEGRA_SWGROUP_PPCS | 0x22c / 27 | 0x348 | 0 | 0x80 |
| 0x3c | ppcsahbslvw | TEGRA_SWGROUP_PPCS | 0x22c / 28 | 0x348 | 16 | 0x80 |
| 0x3d | sataw | TEGRA_SWGROUP_SATA | 0x22c / 29 | 0x350 | 16 | 0x65 |
| 0x44 | ispra | TEGRA_SWGROUP_ISP2 | 0x230 / 4 | 0x370 | 0 | 0x18 |
| 0x46 | ispwa | TEGRA_SWGROUP_ISP2 | 0x230 / 6 | 0x374 | 0 | 0x80 |
| 0x47 | ispwb | TEGRA_SWGROUP_ISP2 | 0x230 / 7 | 0x374 | 16 | 0x80 |
| 0x4a | xusb_hostr | TEGRA_SWGROUP_XUSB_HOST | 0x230 / 10 | 0x37c | 0 | 0x39 |
| 0x4b | xusb_hostw | TEGRA_SWGROUP_XUSB_HOST | 0x230 / 11 | 0x37c | 16 | 0x80 |
| 0x4c | xusb_devr | TEGRA_SWGROUP_XUSB_DEV | 0x230 / 12 | 0x380 | 0 | 0x39 |
| 0x4d | xusb_devw | TEGRA_SWGROUP_XUSB_DEV | 0x230 / 13 | 0x380 | 16 | 0x80 |
| 0x4e | isprab | TEGRA_SWGROUP_ISP2B | 0x230 / 14 | 0x384 | 0 | 0x18 |
| 0x50 | ispwab | TEGRA_SWGROUP_ISP2B | 0x230 / 16 | 0x388 | 0 | 0x80 |
| 0x51 | ispwbb | TEGRA_SWGROUP_ISP2B | 0x230 / 17 | 0x388 | 16 | 0x80 |
| 0x54 | tsecsrd | TEGRA_SWGROUP_TSEC | 0x230 / 20 | 0x390 | 0 | 0x9b |
| 0x55 | tsecswr | TEGRA_SWGROUP_TSEC | 0x230 / 21 | 0x390 | 16 | 0x80 |
| 0x56 | a9avpscr | TEGRA_SWGROUP_A9AVP | 0x230 / 22 | 0x3a4 | 0 | 0x04 |
| 0x57 | a9avpscw | TEGRA_SWGROUP_A9AVP | 0x230 / 23 | 0x3a4 | 16 | 0x80 |
| 0x58 | gpusrd | TEGRA_SWGROUP_GPU | 0x230 / 24 (read-only) | 0x3c8 | 0 | 0x1a |
| 0x59 | gpuswr | TEGRA_SWGROUP_GPU | 0x230 / 25 (read-only) | 0x3c8 | 16 | 0x80 |
| 0x5a | displayt | TEGRA_SWGROUP_DC | 0x230 / 26 | 0x2f0 | 16 | 0x50 |
| 0x60 | sdmmcra | TEGRA_SWGROUP_SDMMC1A | 0x234 / 0 | 0x3b8 | 0 | 0x49 |
| 0x61 | sdmmcraa | TEGRA_SWGROUP_SDMMC2A | 0x234 / 1 | 0x3bc | 0 | 0x49 |
| 0x62 | sdmmcr | TEGRA_SWGROUP_SDMMC3A | 0x234 / 2 | 0x3c0 | 0 | 0x49 |
| 0x63 | sdmmcrab | TEGRA_SWGROUP_SDMMC4A | 0x234 / 3 | 0x3c4 | 0 | 0x49 |
| 0x64 | sdmmcwa | TEGRA_SWGROUP_SDMMC1A | 0x234 / 4 | 0x3b8 | 16 | 0x80 |
| 0x65 | sdmmcwaa | TEGRA_SWGROUP_SDMMC2A | 0x234 / 5 | 0x3bc | 16 | 0x80 |
| 0x66 | sdmmcw | TEGRA_SWGROUP_SDMMC3A | 0x234 / 6 | 0x3c0 | 16 | 0x80 |
| 0x67 | sdmmcwab | TEGRA_SWGROUP_SDMMC4A | 0x234 / 7 | 0x3c4 | 16 | 0x80 |
| 0x6c | vicsrd | TEGRA_SWGROUP_VIC | 0x234 / 12 | 0x394 | 0 | 0x1a |
| 0x6d | vicswr | TEGRA_SWGROUP_VIC | 0x234 / 13 | 0x394 | 16 | 0x80 |
| 0x72 | viw | TEGRA_SWGROUP_VI | 0x234 / 18 | 0x398 | 0 | 0x80 |
| 0x73 | displayd | TEGRA_SWGROUP_DC | 0x234 / 19 | 0x3c8 | 0 | 0x50 |
| 0x78 | nvdecsrd | TEGRA_SWGROUP_NVDEC | 0x234 / 24 | 0x3d8 | 0 | 0x23 |
| 0x79 | nvdecswr | TEGRA_SWGROUP_NVDEC | 0x234 / 25 | 0x3d8 | 16 | 0x80 |
| 0x7a | aper | TEGRA_SWGROUP_APE | 0x234 / 26 | 0x3dc | 0 | 0xff |
| 0x7b | apew | TEGRA_SWGROUP_APE | 0x234 / 27 | 0x3dc | 0 | 0x80 |
| 0x7e | nvjpgsrd | TEGRA_SWGROUP_NVJPG | 0x234 / 30 | 0x3e4 | 0 | 0x23 |
| 0x7f | nvjpgswr | TEGRA_SWGROUP_NVJPG | 0x234 / 31 | 0x3e4 | 16 | 0x80 |
| 0x80 | sesrd | TEGRA_SWGROUP_SE | 0xb98 / 0 | 0x3e0 | 0 | 0x2e |
| 0x81 | seswr | TEGRA_SWGROUP_SE | 0xb98 / 1 | 0xb98 | 16 | 0x80 |
| 0x82 | axiapr | TEGRA_SWGROUP_AXIAP | 0xb98 / 2 | 0x3a0 | 0 | 0xff |
| 0x83 | axiapw | TEGRA_SWGROUP_AXIAP | 0xb98 / 3 | 0x3a0 | 16 | 0x80 |
| 0x84 | etrr | TEGRA_SWGROUP_ETR | 0xb98 / 4 | 0x3ec | 0 | 0xff |
| 0x85 | etrw | TEGRA_SWGROUP_ETR | 0xb98 / 5 | 0x3ec | 16 | 0xff |
| 0x86 | tsecsrdb | TEGRA_SWGROUP_TSECB | 0xb98 / 6 | 0x3f0 | 0 | 0x9b |
| 0x87 | tsecswrb | TEGRA_SWGROUP_TSECB | 0xb98 / 7 | 0x3f0 | 16 | 0x80 |
| 0x88 | gpusrd2 | TEGRA_SWGROUP_GPU | 0xb98 / 8 (read-only) | 0x3e8 | 0 | 0x1a |
| 0x89 | gpuswr2 | TEGRA_SWGROUP_GPU | 0xb98 / 9 (read-only) | 0x3e8 | 16 | 0x80 |

The file closes with the swgroup register table and the per-SoC descriptors:

static const struct tegra_smmu_swgroup tegra210_swgroups[] = {
	{ .name = "dc",        .swgroup = TEGRA_SWGROUP_DC,        .reg = 0x240 },
	{ .name = "dcb",       .swgroup = TEGRA_SWGROUP_DCB,       .reg = 0x244 },
	{ .name = "afi",       .swgroup = TEGRA_SWGROUP_AFI,       .reg = 0x238 },
	{ .name = "avpc",      .swgroup = TEGRA_SWGROUP_AVPC,      .reg = 0x23c },
	{ .name = "hda",       .swgroup = TEGRA_SWGROUP_HDA,       .reg = 0x254 },
	{ .name = "hc",        .swgroup = TEGRA_SWGROUP_HC,        .reg = 0x250 },
	{ .name = "nvenc",     .swgroup = TEGRA_SWGROUP_NVENC,     .reg = 0x264 },
	{ .name = "ppcs",      .swgroup = TEGRA_SWGROUP_PPCS,      .reg = 0x270 },
	{ .name = "sata",      .swgroup = TEGRA_SWGROUP_SATA,      .reg = 0x274 },
	{ .name = "isp2",      .swgroup = TEGRA_SWGROUP_ISP2,      .reg = 0x258 },
	{ .name = "xusb_host", .swgroup = TEGRA_SWGROUP_XUSB_HOST, .reg = 0x288 },
	{ .name = "xusb_dev",  .swgroup = TEGRA_SWGROUP_XUSB_DEV,  .reg = 0x28c },
	{ .name = "isp2b",     .swgroup = TEGRA_SWGROUP_ISP2B,     .reg = 0xaa4 },
	{ .name = "tsec",      .swgroup = TEGRA_SWGROUP_TSEC,      .reg = 0x294 },
	{ .name = "a9avp",     .swgroup = TEGRA_SWGROUP_A9AVP,     .reg = 0x290 },
	{ .name = "gpu",       .swgroup = TEGRA_SWGROUP_GPU,       .reg = 0xaac },
	{ .name = "sdmmc1a",   .swgroup = TEGRA_SWGROUP_SDMMC1A,   .reg = 0xa94 },
	{ .name = "sdmmc2a",   .swgroup = TEGRA_SWGROUP_SDMMC2A,   .reg = 0xa98 },
	{ .name = "sdmmc3a",   .swgroup = TEGRA_SWGROUP_SDMMC3A,   .reg = 0xa9c },
	{ .name = "sdmmc4a",   .swgroup = TEGRA_SWGROUP_SDMMC4A,   .reg = 0xaa0 },
	{ .name = "vic",       .swgroup = TEGRA_SWGROUP_VIC,       .reg = 0x284 },
	{ .name = "vi",        .swgroup = TEGRA_SWGROUP_VI,        .reg = 0x280 },
	{ .name = "nvdec",     .swgroup = TEGRA_SWGROUP_NVDEC,     .reg = 0xab4 },
	{ .name = "ape",       .swgroup = TEGRA_SWGROUP_APE,       .reg = 0xab8 },
	{ .name = "nvjpg",     .swgroup = TEGRA_SWGROUP_NVJPG,     .reg = 0xac0 },
	{ .name = "se",        .swgroup = TEGRA_SWGROUP_SE,        .reg = 0xabc },
	{ .name = "axiap",     .swgroup = TEGRA_SWGROUP_AXIAP,     .reg = 0xacc },
	{ .name = "etr",       .swgroup = TEGRA_SWGROUP_ETR,       .reg = 0xad0 },
	{ .name = "tsecb",     .swgroup = TEGRA_SWGROUP_TSECB,     .reg = 0xad4 },
};

static const struct tegra_smmu_soc tegra210_smmu_soc = {
	.clients = tegra210_mc_clients,
	.num_clients = ARRAY_SIZE(tegra210_mc_clients),
	.swgroups = tegra210_swgroups,
	.num_swgroups = ARRAY_SIZE(tegra210_swgroups),
	.supports_round_robin_arbitration = true,
	.supports_request_limit = true,
	.num_tlb_lines = 32,
	.num_asids = 128,
};

const struct tegra_mc_soc tegra210_mc_soc = {
	.clients = tegra210_mc_clients,
	.num_clients = ARRAY_SIZE(tegra210_mc_clients),
	.num_address_bits = 34,
	.atom_size = 64,
	.client_id_mask = 0xff,
	.smmu = &tegra210_smmu_soc,
};
drivers/memory/tegra/tegra30.c (+1)
···
 	.num_clients = ARRAY_SIZE(tegra30_mc_clients),
 	.num_address_bits = 32,
 	.atom_size = 16,
+	.client_id_mask = 0x7f,
 	.smmu = &tegra30_smmu_soc,
 };
drivers/reset/Makefile (+3)
 obj-$(CONFIG_RESET_CONTROLLER) += core.o
+obj-$(CONFIG_ARCH_LPC18XX) += reset-lpc18xx.o
 obj-$(CONFIG_ARCH_SOCFPGA) += reset-socfpga.o
 obj-$(CONFIG_ARCH_BERLIN) += reset-berlin.o
 obj-$(CONFIG_ARCH_SUNXI) += reset-sunxi.o
 obj-$(CONFIG_ARCH_STI) += sti/
+obj-$(CONFIG_ARCH_ZYNQ) += reset-zynq.o
+obj-$(CONFIG_ATH79) += reset-ath79.o
drivers/reset/reset-ath79.c (+128, new file)
/*
 * Copyright (C) 2015 Alban Bedel <albeu@free.fr>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 */

#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/reset-controller.h>

struct ath79_reset {
	struct reset_controller_dev rcdev;
	void __iomem *base;
	spinlock_t lock;
};

static int ath79_reset_update(struct reset_controller_dev *rcdev,
			      unsigned long id, bool assert)
{
	struct ath79_reset *ath79_reset =
		container_of(rcdev, struct ath79_reset, rcdev);
	unsigned long flags;
	u32 val;

	spin_lock_irqsave(&ath79_reset->lock, flags);
	val = readl(ath79_reset->base);
	if (assert)
		val |= BIT(id);
	else
		val &= ~BIT(id);
	writel(val, ath79_reset->base);
	spin_unlock_irqrestore(&ath79_reset->lock, flags);

	return 0;
}

static int ath79_reset_assert(struct reset_controller_dev *rcdev,
			      unsigned long id)
{
	return ath79_reset_update(rcdev, id, true);
}

static int ath79_reset_deassert(struct reset_controller_dev *rcdev,
				unsigned long id)
{
	return ath79_reset_update(rcdev, id, false);
}

static int ath79_reset_status(struct reset_controller_dev *rcdev,
			      unsigned long id)
{
	struct ath79_reset *ath79_reset =
		container_of(rcdev, struct ath79_reset, rcdev);
	u32 val;

	val = readl(ath79_reset->base);

	return !!(val & BIT(id));
}

static struct reset_control_ops ath79_reset_ops = {
	.assert = ath79_reset_assert,
	.deassert = ath79_reset_deassert,
	.status = ath79_reset_status,
};

static int ath79_reset_probe(struct platform_device *pdev)
{
	struct ath79_reset *ath79_reset;
	struct resource *res;

	ath79_reset = devm_kzalloc(&pdev->dev,
				   sizeof(*ath79_reset), GFP_KERNEL);
	if (!ath79_reset)
		return -ENOMEM;

	platform_set_drvdata(pdev, ath79_reset);

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	ath79_reset->base = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(ath79_reset->base))
		return PTR_ERR(ath79_reset->base);

	ath79_reset->rcdev.ops = &ath79_reset_ops;
	ath79_reset->rcdev.owner = THIS_MODULE;
	ath79_reset->rcdev.of_node = pdev->dev.of_node;
	ath79_reset->rcdev.of_reset_n_cells = 1;
	ath79_reset->rcdev.nr_resets = 32;

	return reset_controller_register(&ath79_reset->rcdev);
}

static int ath79_reset_remove(struct platform_device *pdev)
{
	struct ath79_reset *ath79_reset = platform_get_drvdata(pdev);

	reset_controller_unregister(&ath79_reset->rcdev);

	return 0;
}

static const struct of_device_id ath79_reset_dt_ids[] = {
	{ .compatible = "qca,ar7100-reset", },
	{ },
};
MODULE_DEVICE_TABLE(of, ath79_reset_dt_ids);

static struct platform_driver ath79_reset_driver = {
	.probe = ath79_reset_probe,
	.remove = ath79_reset_remove,
	.driver = {
		.name = "ath79-reset",
		.of_match_table = ath79_reset_dt_ids,
	},
};
module_platform_driver(ath79_reset_driver);

MODULE_AUTHOR("Alban Bedel <albeu@free.fr>");
MODULE_DESCRIPTION("AR71xx Reset Controller Driver");
MODULE_LICENSE("GPL");
drivers/reset/reset-lpc18xx.c (+258, new file)
/*
 * Reset driver for NXP LPC18xx/43xx Reset Generation Unit (RGU).
 *
 * Copyright (C) 2015 Joachim Eastwood <manabian@gmail.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 */

#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/err.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/reboot.h>
#include <linux/reset-controller.h>
#include <linux/spinlock.h>

/* LPC18xx RGU registers */
#define LPC18XX_RGU_CTRL0		0x100
#define LPC18XX_RGU_CTRL1		0x104
#define LPC18XX_RGU_ACTIVE_STATUS0	0x150
#define LPC18XX_RGU_ACTIVE_STATUS1	0x154

#define LPC18XX_RGU_RESETS_PER_REG	32

/* Internal reset outputs */
#define LPC18XX_RGU_CORE_RST	0
#define LPC43XX_RGU_M0SUB_RST	12
#define LPC43XX_RGU_M0APP_RST	56

struct lpc18xx_rgu_data {
	struct reset_controller_dev rcdev;
	struct clk *clk_delay;
	struct clk *clk_reg;
	void __iomem *base;
	spinlock_t lock;
	u32 delay_us;
};

#define to_rgu_data(p) container_of(p, struct lpc18xx_rgu_data, rcdev)

static void __iomem *rgu_base;

static int lpc18xx_rgu_restart(struct notifier_block *this, unsigned long mode,
			       void *cmd)
{
	writel(BIT(LPC18XX_RGU_CORE_RST), rgu_base + LPC18XX_RGU_CTRL0);
	mdelay(2000);

	pr_emerg("%s: unable to restart system\n", __func__);

	return NOTIFY_DONE;
}

static struct notifier_block lpc18xx_rgu_restart_nb = {
	.notifier_call = lpc18xx_rgu_restart,
	.priority = 192,
};

/*
 * The LPC18xx RGU has mostly self-deasserting resets, except for the
 * two reset lines going to the internal Cortex-M0 cores.
 *
 * To prevent the M0 core resets from accidentally getting deasserted,
 * the status register must be checked and bits in the control register
 * set to preserve the state.
 */
static int lpc18xx_rgu_setclear_reset(struct reset_controller_dev *rcdev,
				      unsigned long id, bool set)
{
	struct lpc18xx_rgu_data *rc = to_rgu_data(rcdev);
	u32 stat_offset = LPC18XX_RGU_ACTIVE_STATUS0;
	u32 ctrl_offset = LPC18XX_RGU_CTRL0;
	unsigned long flags;
	u32 stat, rst_bit;

	stat_offset += (id / LPC18XX_RGU_RESETS_PER_REG) * sizeof(u32);
	ctrl_offset += (id / LPC18XX_RGU_RESETS_PER_REG) * sizeof(u32);
	rst_bit = 1 << (id % LPC18XX_RGU_RESETS_PER_REG);

	spin_lock_irqsave(&rc->lock, flags);
	stat = ~readl(rc->base + stat_offset);
	if (set)
		writel(stat | rst_bit, rc->base + ctrl_offset);
	else
		writel(stat & ~rst_bit, rc->base + ctrl_offset);
	spin_unlock_irqrestore(&rc->lock, flags);

	return 0;
}

static int lpc18xx_rgu_assert(struct reset_controller_dev *rcdev,
			      unsigned long id)
{
	return lpc18xx_rgu_setclear_reset(rcdev, id, true);
}

static int lpc18xx_rgu_deassert(struct reset_controller_dev *rcdev,
				unsigned long id)
{
	return lpc18xx_rgu_setclear_reset(rcdev, id, false);
}

/* Only M0 cores require explicit reset deassert */
static int lpc18xx_rgu_reset(struct reset_controller_dev *rcdev,
			     unsigned long id)
{
	struct lpc18xx_rgu_data *rc = to_rgu_data(rcdev);

	lpc18xx_rgu_assert(rcdev, id);
	udelay(rc->delay_us);

	switch (id) {
	case LPC43XX_RGU_M0SUB_RST:
	case LPC43XX_RGU_M0APP_RST:
		lpc18xx_rgu_setclear_reset(rcdev, id, false);
	}

	return 0;
}

static int lpc18xx_rgu_status(struct reset_controller_dev *rcdev,
			      unsigned long id)
{
	struct lpc18xx_rgu_data *rc = to_rgu_data(rcdev);
	u32 bit, offset = LPC18XX_RGU_ACTIVE_STATUS0;

	offset += (id / LPC18XX_RGU_RESETS_PER_REG) * sizeof(u32);
	bit = 1 << (id % LPC18XX_RGU_RESETS_PER_REG);

	return !(readl(rc->base + offset) & bit);
}

static struct reset_control_ops lpc18xx_rgu_ops = {
	.reset = lpc18xx_rgu_reset,
	.assert = lpc18xx_rgu_assert,
	.deassert = lpc18xx_rgu_deassert,
	.status = lpc18xx_rgu_status,
};

static int lpc18xx_rgu_probe(struct platform_device *pdev)
{
	struct lpc18xx_rgu_data *rc;
	struct resource *res;
	u32 fcclk, firc;
	int ret;

	rc = devm_kzalloc(&pdev->dev, sizeof(*rc), GFP_KERNEL);
	if (!rc)
		return -ENOMEM;

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	rc->base = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(rc->base))
		return PTR_ERR(rc->base);

	rc->clk_reg = devm_clk_get(&pdev->dev, "reg");
	if (IS_ERR(rc->clk_reg)) {
		dev_err(&pdev->dev, "reg clock not found\n");
		return PTR_ERR(rc->clk_reg);
	}

	rc->clk_delay = devm_clk_get(&pdev->dev, "delay");
	if (IS_ERR(rc->clk_delay)) {
		dev_err(&pdev->dev, "delay clock not found\n");
		return PTR_ERR(rc->clk_delay);
	}

	ret = clk_prepare_enable(rc->clk_reg);
	if (ret) {
		dev_err(&pdev->dev, "unable to enable reg clock\n");
		return ret;
	}

	ret = clk_prepare_enable(rc->clk_delay);
	if (ret) {
		dev_err(&pdev->dev, "unable to enable delay clock\n");
		goto dis_clk_reg;
	}

	fcclk = clk_get_rate(rc->clk_reg) / USEC_PER_SEC;
	firc = clk_get_rate(rc->clk_delay) / USEC_PER_SEC;
	if (fcclk == 0 || firc == 0)
		rc->delay_us = 2;
	else
		rc->delay_us = DIV_ROUND_UP(fcclk, firc * firc);

	spin_lock_init(&rc->lock);

	rc->rcdev.owner = THIS_MODULE;
	rc->rcdev.nr_resets = 64;
	rc->rcdev.ops = &lpc18xx_rgu_ops;
	rc->rcdev.of_node = pdev->dev.of_node;

	platform_set_drvdata(pdev, rc);

	ret = reset_controller_register(&rc->rcdev);
	if (ret) {
		dev_err(&pdev->dev, "unable to register device\n");
		goto dis_clks;
	}

	rgu_base = rc->base;
	ret = register_restart_handler(&lpc18xx_rgu_restart_nb);
	if (ret)
		dev_warn(&pdev->dev, "failed to register restart handler\n");

	return 0;

dis_clks:
	clk_disable_unprepare(rc->clk_delay);
dis_clk_reg:
	clk_disable_unprepare(rc->clk_reg);

	return ret;
}

static int lpc18xx_rgu_remove(struct platform_device *pdev)
{
	struct lpc18xx_rgu_data *rc = platform_get_drvdata(pdev);
	int ret;

	ret = unregister_restart_handler(&lpc18xx_rgu_restart_nb);
	if (ret)
		dev_warn(&pdev->dev, "failed to unregister restart handler\n");

	reset_controller_unregister(&rc->rcdev);

	clk_disable_unprepare(rc->clk_delay);
	clk_disable_unprepare(rc->clk_reg);

	return 0;
}

static const struct of_device_id lpc18xx_rgu_match[] = {
	{ .compatible = "nxp,lpc1850-rgu" },
	{ }
};
MODULE_DEVICE_TABLE(of, lpc18xx_rgu_match);

static struct platform_driver lpc18xx_rgu_driver = {
	.probe = lpc18xx_rgu_probe,
	.remove = lpc18xx_rgu_remove,
	.driver = {
		.name = "lpc18xx-reset",
		.of_match_table = lpc18xx_rgu_match,
	},
};
module_platform_driver(lpc18xx_rgu_driver);

MODULE_AUTHOR("Joachim Eastwood <manabian@gmail.com>");
MODULE_DESCRIPTION("Reset driver for LPC18xx/43xx RGU");
MODULE_LICENSE("GPL v2");
drivers/reset/reset-socfpga.c (+13 −6)
···
 #include <linux/types.h>

 #define NR_BANKS		4
-#define OFFSET_MODRST		0x10

 struct socfpga_reset_data {
 	spinlock_t			lock;
 	void __iomem			*membase;
+	u32				modrst_offset;
 	struct reset_controller_dev	rcdev;
 };
···
 	spin_lock_irqsave(&data->lock, flags);

-	reg = readl(data->membase + OFFSET_MODRST + (bank * NR_BANKS));
-	writel(reg | BIT(offset), data->membase + OFFSET_MODRST +
+	reg = readl(data->membase + data->modrst_offset + (bank * NR_BANKS));
+	writel(reg | BIT(offset), data->membase + data->modrst_offset +
 		(bank * NR_BANKS));
 	spin_unlock_irqrestore(&data->lock, flags);
···
 	spin_lock_irqsave(&data->lock, flags);

-	reg = readl(data->membase + OFFSET_MODRST + (bank * NR_BANKS));
-	writel(reg & ~BIT(offset), data->membase + OFFSET_MODRST +
+	reg = readl(data->membase + data->modrst_offset + (bank * NR_BANKS));
+	writel(reg & ~BIT(offset), data->membase + data->modrst_offset +
 		(bank * NR_BANKS));

 	spin_unlock_irqrestore(&data->lock, flags);
···
 	int offset = id % BITS_PER_LONG;
 	u32 reg;

-	reg = readl(data->membase + OFFSET_MODRST + (bank * NR_BANKS));
+	reg = readl(data->membase + data->modrst_offset + (bank * NR_BANKS));

 	return !(reg & BIT(offset));
 }
···
 {
 	struct socfpga_reset_data *data;
 	struct resource *res;
+	struct device *dev = &pdev->dev;
+	struct device_node *np = dev->of_node;

 	/*
 	 * The binding was mainlined without the required property.
···
 	data->membase = devm_ioremap_resource(&pdev->dev, res);
 	if (IS_ERR(data->membase))
 		return PTR_ERR(data->membase);
+
+	if (of_property_read_u32(np, "altr,modrst-offset", &data->modrst_offset)) {
+		dev_warn(dev, "missing altr,modrst-offset property, assuming 0x10!\n");
+		data->modrst_offset = 0x10;
+	}

 	spin_lock_init(&data->lock);
drivers/reset/reset-zynq.c (+155, new file)
/*
 * Copyright (c) 2015, National Instruments Corp.
 *
 * Xilinx Zynq Reset controller driver
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; version 2 of the License.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 */

#include <linux/err.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/mfd/syscon.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/reset-controller.h>
#include <linux/regmap.h>
#include <linux/types.h>

struct zynq_reset_data {
	struct regmap *slcr;
	struct reset_controller_dev rcdev;
	u32 offset;
};

#define to_zynq_reset_data(p)		\
	container_of((p), struct zynq_reset_data, rcdev)

static int zynq_reset_assert(struct reset_controller_dev *rcdev,
			     unsigned long id)
{
	struct zynq_reset_data *priv = to_zynq_reset_data(rcdev);

	int bank = id / BITS_PER_LONG;
	int offset = id % BITS_PER_LONG;

	pr_debug("%s: %s reset bank %u offset %u\n", KBUILD_MODNAME, __func__,
		 bank, offset);

	return regmap_update_bits(priv->slcr,
				  priv->offset + (bank * 4),
				  BIT(offset),
				  BIT(offset));
}

static int zynq_reset_deassert(struct reset_controller_dev *rcdev,
			       unsigned long id)
{
	struct zynq_reset_data *priv = to_zynq_reset_data(rcdev);

	int bank = id / BITS_PER_LONG;
	int offset = id % BITS_PER_LONG;

	pr_debug("%s: %s reset bank %u offset %u\n", KBUILD_MODNAME, __func__,
		 bank, offset);

	return regmap_update_bits(priv->slcr,
				  priv->offset + (bank * 4),
				  BIT(offset),
				  ~BIT(offset));
}

static int zynq_reset_status(struct reset_controller_dev *rcdev,
			     unsigned long id)
{
	struct zynq_reset_data *priv = to_zynq_reset_data(rcdev);

	int bank = id / BITS_PER_LONG;
	int offset = id % BITS_PER_LONG;
	int ret;
	u32 reg;

	pr_debug("%s: %s reset bank %u offset %u\n", KBUILD_MODNAME, __func__,
		 bank, offset);

	ret = regmap_read(priv->slcr, priv->offset + (bank * 4), &reg);
	if (ret)
		return ret;

	return !!(reg & BIT(offset));
}

static struct reset_control_ops zynq_reset_ops = {
	.assert = zynq_reset_assert,
	.deassert = zynq_reset_deassert,
	.status = zynq_reset_status,
};

static int zynq_reset_probe(struct platform_device *pdev)
{
	struct resource *res;
	struct zynq_reset_data *priv;

	priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;
	platform_set_drvdata(pdev, priv);

	priv->slcr = syscon_regmap_lookup_by_phandle(pdev->dev.of_node,
						     "syscon");
	if (IS_ERR(priv->slcr)) {
		dev_err(&pdev->dev, "unable to get zynq-slcr regmap");
		return PTR_ERR(priv->slcr);
	}

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	if (!res) {
		dev_err(&pdev->dev, "missing IO resource\n");
		return -ENODEV;
	}

	priv->offset = res->start;

	priv->rcdev.owner = THIS_MODULE;
	priv->rcdev.nr_resets = resource_size(res) / 4 * BITS_PER_LONG;
	priv->rcdev.ops = &zynq_reset_ops;
	priv->rcdev.of_node = pdev->dev.of_node;
	reset_controller_register(&priv->rcdev);

	return 0;
}

static int zynq_reset_remove(struct platform_device *pdev)
{
	struct zynq_reset_data *priv = platform_get_drvdata(pdev);

	reset_controller_unregister(&priv->rcdev);

	return 0;
}

static const struct of_device_id zynq_reset_dt_ids[] = {
	{ .compatible = "xlnx,zynq-reset", },
	{ /* sentinel */ },
};

static struct platform_driver zynq_reset_driver = {
	.probe = zynq_reset_probe,
	.remove = zynq_reset_remove,
	.driver = {
		.name = KBUILD_MODNAME,
		.of_match_table = zynq_reset_dt_ids,
	},
};
module_platform_driver(zynq_reset_driver);

MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Moritz Fischer <moritz.fischer@ettus.com>");
MODULE_DESCRIPTION("Zynq Reset Controller Driver");
+2 -2
drivers/reset/sti/reset-stih407.c
··· 11 11 #include <linux/of.h> 12 12 #include <linux/of_platform.h> 13 13 #include <linux/platform_device.h> 14 - #include <dt-bindings/reset-controller/stih407-resets.h> 14 + #include <dt-bindings/reset/stih407-resets.h> 15 15 #include "reset-syscfg.h" 16 16 17 17 /* STiH407 Peripheral powerdown definitions. */ ··· 126 126 .channels = stih407_picophyresets, 127 127 }; 128 128 129 - static struct of_device_id stih407_reset_match[] = { 129 + static const struct of_device_id stih407_reset_match[] = { 130 130 { 131 131 .compatible = "st,stih407-powerdown", 132 132 .data = &stih407_powerdown_controller,
+2 -2
drivers/reset/sti/reset-stih415.c
··· 13 13 #include <linux/of_platform.h> 14 14 #include <linux/platform_device.h> 15 15 16 - #include <dt-bindings/reset-controller/stih415-resets.h> 16 + #include <dt-bindings/reset/stih415-resets.h> 17 17 18 18 #include "reset-syscfg.h" 19 19 ··· 89 89 .channels = stih415_softresets, 90 90 }; 91 91 92 - static struct of_device_id stih415_reset_match[] = { 92 + static const struct of_device_id stih415_reset_match[] = { 93 93 { .compatible = "st,stih415-powerdown", 94 94 .data = &stih415_powerdown_controller, }, 95 95 { .compatible = "st,stih415-softreset",
+2 -2
drivers/reset/sti/reset-stih416.c
··· 13 13 #include <linux/of_platform.h> 14 14 #include <linux/platform_device.h> 15 15 16 - #include <dt-bindings/reset-controller/stih416-resets.h> 16 + #include <dt-bindings/reset/stih416-resets.h> 17 17 18 18 #include "reset-syscfg.h" 19 19 ··· 120 120 .channels = stih416_softresets, 121 121 }; 122 122 123 - static struct of_device_id stih416_reset_match[] = { 123 + static const struct of_device_id stih416_reset_match[] = { 124 124 { .compatible = "st,stih416-powerdown", 125 125 .data = &stih416_powerdown_controller, }, 126 126 { .compatible = "st,stih416-softreset",
+1
drivers/soc/Makefile
··· 2 2 # Makefile for the Linux Kernel SOC specific device drivers. 3 3 # 4 4 5 + obj-$(CONFIG_MACH_DOVE) += dove/ 5 6 obj-$(CONFIG_ARCH_MEDIATEK) += mediatek/ 6 7 obj-$(CONFIG_ARCH_QCOM) += qcom/ 7 8 obj-$(CONFIG_ARCH_SUNXI) += sunxi/
+1
drivers/soc/dove/Makefile
··· 1 + obj-y += pmu.o
+412
drivers/soc/dove/pmu.c
··· 1 + /* 2 + * Marvell Dove PMU support 3 + */ 4 + #include <linux/io.h> 5 + #include <linux/irq.h> 6 + #include <linux/irqdomain.h> 7 + #include <linux/of.h> 8 + #include <linux/of_irq.h> 9 + #include <linux/of_address.h> 10 + #include <linux/platform_device.h> 11 + #include <linux/pm_domain.h> 12 + #include <linux/reset.h> 13 + #include <linux/reset-controller.h> 14 + #include <linux/sched.h> 15 + #include <linux/slab.h> 16 + #include <linux/soc/dove/pmu.h> 17 + #include <linux/spinlock.h> 18 + 19 + #define NR_PMU_IRQS 7 20 + 21 + #define PMC_SW_RST 0x30 22 + #define PMC_IRQ_CAUSE 0x50 23 + #define PMC_IRQ_MASK 0x54 24 + 25 + #define PMU_PWR 0x10 26 + #define PMU_ISO 0x58 27 + 28 + struct pmu_data { 29 + spinlock_t lock; 30 + struct device_node *of_node; 31 + void __iomem *pmc_base; 32 + void __iomem *pmu_base; 33 + struct irq_chip_generic *irq_gc; 34 + struct irq_domain *irq_domain; 35 + #ifdef CONFIG_RESET_CONTROLLER 36 + struct reset_controller_dev reset; 37 + #endif 38 + }; 39 + 40 + /* 41 + * The PMU contains a register to reset various subsystems within the 42 + * SoC. Export this as a reset controller. 
43 + */ 44 + #ifdef CONFIG_RESET_CONTROLLER 45 + #define rcdev_to_pmu(rcdev) container_of(rcdev, struct pmu_data, reset) 46 + 47 + static int pmu_reset_reset(struct reset_controller_dev *rc, unsigned long id) 48 + { 49 + struct pmu_data *pmu = rcdev_to_pmu(rc); 50 + unsigned long flags; 51 + u32 val; 52 + 53 + spin_lock_irqsave(&pmu->lock, flags); 54 + val = readl_relaxed(pmu->pmc_base + PMC_SW_RST); 55 + writel_relaxed(val & ~BIT(id), pmu->pmc_base + PMC_SW_RST); 56 + writel_relaxed(val | BIT(id), pmu->pmc_base + PMC_SW_RST); 57 + spin_unlock_irqrestore(&pmu->lock, flags); 58 + 59 + return 0; 60 + } 61 + 62 + static int pmu_reset_assert(struct reset_controller_dev *rc, unsigned long id) 63 + { 64 + struct pmu_data *pmu = rcdev_to_pmu(rc); 65 + unsigned long flags; 66 + u32 val = ~BIT(id); 67 + 68 + spin_lock_irqsave(&pmu->lock, flags); 69 + val &= readl_relaxed(pmu->pmc_base + PMC_SW_RST); 70 + writel_relaxed(val, pmu->pmc_base + PMC_SW_RST); 71 + spin_unlock_irqrestore(&pmu->lock, flags); 72 + 73 + return 0; 74 + } 75 + 76 + static int pmu_reset_deassert(struct reset_controller_dev *rc, unsigned long id) 77 + { 78 + struct pmu_data *pmu = rcdev_to_pmu(rc); 79 + unsigned long flags; 80 + u32 val = BIT(id); 81 + 82 + spin_lock_irqsave(&pmu->lock, flags); 83 + val |= readl_relaxed(pmu->pmc_base + PMC_SW_RST); 84 + writel_relaxed(val, pmu->pmc_base + PMC_SW_RST); 85 + spin_unlock_irqrestore(&pmu->lock, flags); 86 + 87 + return 0; 88 + } 89 + 90 + static struct reset_control_ops pmu_reset_ops = { 91 + .reset = pmu_reset_reset, 92 + .assert = pmu_reset_assert, 93 + .deassert = pmu_reset_deassert, 94 + }; 95 + 96 + static struct reset_controller_dev pmu_reset __initdata = { 97 + .ops = &pmu_reset_ops, 98 + .owner = THIS_MODULE, 99 + .nr_resets = 32, 100 + }; 101 + 102 + static void __init pmu_reset_init(struct pmu_data *pmu) 103 + { 104 + int ret; 105 + 106 + pmu->reset = pmu_reset; 107 + pmu->reset.of_node = pmu->of_node; 108 + 109 + ret = 
reset_controller_register(&pmu->reset); 110 + if (ret) 111 + pr_err("pmu: %s failed: %d\n", "reset_controller_register", ret); 112 + } 113 + #else 114 + static void __init pmu_reset_init(struct pmu_data *pmu) 115 + { 116 + } 117 + #endif 118 + 119 + struct pmu_domain { 120 + struct pmu_data *pmu; 121 + u32 pwr_mask; 122 + u32 rst_mask; 123 + u32 iso_mask; 124 + struct generic_pm_domain base; 125 + }; 126 + 127 + #define to_pmu_domain(dom) container_of(dom, struct pmu_domain, base) 128 + 129 + /* 130 + * This deals with the "old" Marvell sequence of bringing a power domain 131 + * down/up, which is: apply power, release reset, disable isolators. 132 + * 133 + * Later devices apparently use a different sequence: power up, disable 134 + * isolators, assert repair signal, enable SRMA clock, enable AXI clock, 135 + * enable module clock, deassert reset. 136 + * 137 + * Note: reading the assembly, it seems that the IO accessors have an 138 + * unfortunate side-effect - they cause memory already read into registers 139 + * for the if () to be re-read for the bit-set or bit-clear operation. 140 + * The code is written to avoid this. 
141 + */ 142 + static int pmu_domain_power_off(struct generic_pm_domain *domain) 143 + { 144 + struct pmu_domain *pmu_dom = to_pmu_domain(domain); 145 + struct pmu_data *pmu = pmu_dom->pmu; 146 + unsigned long flags; 147 + unsigned int val; 148 + void __iomem *pmu_base = pmu->pmu_base; 149 + void __iomem *pmc_base = pmu->pmc_base; 150 + 151 + spin_lock_irqsave(&pmu->lock, flags); 152 + 153 + /* Enable isolators */ 154 + if (pmu_dom->iso_mask) { 155 + val = ~pmu_dom->iso_mask; 156 + val &= readl_relaxed(pmu_base + PMU_ISO); 157 + writel_relaxed(val, pmu_base + PMU_ISO); 158 + } 159 + 160 + /* Reset unit */ 161 + if (pmu_dom->rst_mask) { 162 + val = ~pmu_dom->rst_mask; 163 + val &= readl_relaxed(pmc_base + PMC_SW_RST); 164 + writel_relaxed(val, pmc_base + PMC_SW_RST); 165 + } 166 + 167 + /* Power down */ 168 + val = readl_relaxed(pmu_base + PMU_PWR) | pmu_dom->pwr_mask; 169 + writel_relaxed(val, pmu_base + PMU_PWR); 170 + 171 + spin_unlock_irqrestore(&pmu->lock, flags); 172 + 173 + return 0; 174 + } 175 + 176 + static int pmu_domain_power_on(struct generic_pm_domain *domain) 177 + { 178 + struct pmu_domain *pmu_dom = to_pmu_domain(domain); 179 + struct pmu_data *pmu = pmu_dom->pmu; 180 + unsigned long flags; 181 + unsigned int val; 182 + void __iomem *pmu_base = pmu->pmu_base; 183 + void __iomem *pmc_base = pmu->pmc_base; 184 + 185 + spin_lock_irqsave(&pmu->lock, flags); 186 + 187 + /* Power on */ 188 + val = ~pmu_dom->pwr_mask & readl_relaxed(pmu_base + PMU_PWR); 189 + writel_relaxed(val, pmu_base + PMU_PWR); 190 + 191 + /* Release reset */ 192 + if (pmu_dom->rst_mask) { 193 + val = pmu_dom->rst_mask; 194 + val |= readl_relaxed(pmc_base + PMC_SW_RST); 195 + writel_relaxed(val, pmc_base + PMC_SW_RST); 196 + } 197 + 198 + /* Disable isolators */ 199 + if (pmu_dom->iso_mask) { 200 + val = pmu_dom->iso_mask; 201 + val |= readl_relaxed(pmu_base + PMU_ISO); 202 + writel_relaxed(val, pmu_base + PMU_ISO); 203 + } 204 + 205 + spin_unlock_irqrestore(&pmu->lock, flags); 206 + 
207 + return 0; 208 + } 209 + 210 + static void __pmu_domain_register(struct pmu_domain *domain, 211 + struct device_node *np) 212 + { 213 + unsigned int val = readl_relaxed(domain->pmu->pmu_base + PMU_PWR); 214 + 215 + domain->base.power_off = pmu_domain_power_off; 216 + domain->base.power_on = pmu_domain_power_on; 217 + 218 + pm_genpd_init(&domain->base, NULL, !(val & domain->pwr_mask)); 219 + 220 + if (np) 221 + of_genpd_add_provider_simple(np, &domain->base); 222 + } 223 + 224 + /* PMU IRQ controller */ 225 + static void pmu_irq_handler(unsigned int irq, struct irq_desc *desc) 226 + { 227 + struct pmu_data *pmu = irq_get_handler_data(irq); 228 + struct irq_chip_generic *gc = pmu->irq_gc; 229 + struct irq_domain *domain = pmu->irq_domain; 230 + void __iomem *base = gc->reg_base; 231 + u32 stat = readl_relaxed(base + PMC_IRQ_CAUSE) & gc->mask_cache; 232 + u32 done = ~0; 233 + 234 + if (stat == 0) { 235 + handle_bad_irq(irq, desc); 236 + return; 237 + } 238 + 239 + while (stat) { 240 + u32 hwirq = fls(stat) - 1; 241 + 242 + stat &= ~(1 << hwirq); 243 + done &= ~(1 << hwirq); 244 + 245 + generic_handle_irq(irq_find_mapping(domain, hwirq)); 246 + } 247 + 248 + /* 249 + * The PMU mask register is not RW0C: it is RW. This means that 250 + * the bits take whatever value is written to them; if you write 251 + * a '1', you will set the interrupt. 252 + * 253 + * Unfortunately this means there is NO race free way to clear 254 + * these interrupts. 255 + * 256 + * So, let's structure the code so that the window is as small as 257 + * possible. 
258 + */ 259 + irq_gc_lock(gc); 260 + done &= readl_relaxed(base + PMC_IRQ_CAUSE); 261 + writel_relaxed(done, base + PMC_IRQ_CAUSE); 262 + irq_gc_unlock(gc); 263 + } 264 + 265 + static int __init dove_init_pmu_irq(struct pmu_data *pmu, int irq) 266 + { 267 + const char *name = "pmu_irq"; 268 + struct irq_chip_generic *gc; 269 + struct irq_domain *domain; 270 + int ret; 271 + 272 + /* mask and clear all interrupts */ 273 + writel(0, pmu->pmc_base + PMC_IRQ_MASK); 274 + writel(0, pmu->pmc_base + PMC_IRQ_CAUSE); 275 + 276 + domain = irq_domain_add_linear(pmu->of_node, NR_PMU_IRQS, 277 + &irq_generic_chip_ops, NULL); 278 + if (!domain) { 279 + pr_err("%s: unable to add irq domain\n", name); 280 + return -ENOMEM; 281 + } 282 + 283 + ret = irq_alloc_domain_generic_chips(domain, NR_PMU_IRQS, 1, name, 284 + handle_level_irq, 285 + IRQ_NOREQUEST | IRQ_NOPROBE, 0, 286 + IRQ_GC_INIT_MASK_CACHE); 287 + if (ret) { 288 + pr_err("%s: unable to alloc irq domain gc: %d\n", name, ret); 289 + irq_domain_remove(domain); 290 + return ret; 291 + } 292 + 293 + gc = irq_get_domain_generic_chip(domain, 0); 294 + gc->reg_base = pmu->pmc_base; 295 + gc->chip_types[0].regs.mask = PMC_IRQ_MASK; 296 + gc->chip_types[0].chip.irq_mask = irq_gc_mask_clr_bit; 297 + gc->chip_types[0].chip.irq_unmask = irq_gc_mask_set_bit; 298 + 299 + pmu->irq_domain = domain; 300 + pmu->irq_gc = gc; 301 + 302 + irq_set_handler_data(irq, pmu); 303 + irq_set_chained_handler(irq, pmu_irq_handler); 304 + 305 + return 0; 306 + } 307 + 308 + /* 309 + * pmu: power-manager@d0000 { 310 + * compatible = "marvell,dove-pmu"; 311 + * reg = <0xd0000 0x8000> <0xd8000 0x8000>; 312 + * interrupts = <33>; 313 + * interrupt-controller; 314 + * #reset-cells = 1; 315 + * vpu_domain: vpu-domain { 316 + * #power-domain-cells = <0>; 317 + * marvell,pmu_pwr_mask = <0x00000008>; 318 + * marvell,pmu_iso_mask = <0x00000001>; 319 + * resets = <&pmu 16>; 320 + * }; 321 + * gpu_domain: gpu-domain { 322 + * #power-domain-cells = <0>; 323 + * 
marvell,pmu_pwr_mask = <0x00000004>; 324 + * marvell,pmu_iso_mask = <0x00000002>; 325 + * resets = <&pmu 18>; 326 + * }; 327 + * }; 328 + */ 329 + int __init dove_init_pmu(void) 330 + { 331 + struct device_node *np_pmu, *domains_node, *np; 332 + struct pmu_data *pmu; 333 + int ret, parent_irq; 334 + 335 + /* Lookup the PMU node */ 336 + np_pmu = of_find_compatible_node(NULL, NULL, "marvell,dove-pmu"); 337 + if (!np_pmu) 338 + return 0; 339 + 340 + domains_node = of_get_child_by_name(np_pmu, "domains"); 341 + if (!domains_node) { 342 + pr_err("%s: failed to find domains sub-node\n", np_pmu->name); 343 + return 0; 344 + } 345 + 346 + pmu = kzalloc(sizeof(*pmu), GFP_KERNEL); 347 + if (!pmu) 348 + return -ENOMEM; 349 + 350 + spin_lock_init(&pmu->lock); 351 + pmu->of_node = np_pmu; 352 + pmu->pmc_base = of_iomap(pmu->of_node, 0); 353 + pmu->pmu_base = of_iomap(pmu->of_node, 1); 354 + if (!pmu->pmc_base || !pmu->pmu_base) { 355 + pr_err("%s: failed to map PMU\n", np_pmu->name); 356 + iounmap(pmu->pmu_base); 357 + iounmap(pmu->pmc_base); 358 + kfree(pmu); 359 + return -ENOMEM; 360 + } 361 + 362 + pmu_reset_init(pmu); 363 + 364 + for_each_available_child_of_node(domains_node, np) { 365 + struct of_phandle_args args; 366 + struct pmu_domain *domain; 367 + 368 + domain = kzalloc(sizeof(*domain), GFP_KERNEL); 369 + if (!domain) 370 + break; 371 + 372 + domain->pmu = pmu; 373 + domain->base.name = kstrdup(np->name, GFP_KERNEL); 374 + if (!domain->base.name) { 375 + kfree(domain); 376 + break; 377 + } 378 + 379 + of_property_read_u32(np, "marvell,pmu_pwr_mask", 380 + &domain->pwr_mask); 381 + of_property_read_u32(np, "marvell,pmu_iso_mask", 382 + &domain->iso_mask); 383 + 384 + /* 385 + * We parse the reset controller property directly here 386 + * to ensure that we can operate when the reset controller 387 + * support is not configured into the kernel. 
388 + */ 389 + ret = of_parse_phandle_with_args(np, "resets", "#reset-cells", 390 + 0, &args); 391 + if (ret == 0) { 392 + if (args.np == pmu->of_node) 393 + domain->rst_mask = BIT(args.args[0]); 394 + of_node_put(args.np); 395 + } 396 + 397 + __pmu_domain_register(domain, np); 398 + } 399 + pm_genpd_poweroff_unused(); 400 + 401 + /* Loss of the interrupt controller is not a fatal error. */ 402 + parent_irq = irq_of_parse_and_map(pmu->of_node, 0); 403 + if (!parent_irq) { 404 + pr_err("%s: no interrupt specified\n", np_pmu->name); 405 + } else { 406 + ret = dove_init_pmu_irq(pmu, parent_irq); 407 + if (ret) 408 + pr_err("dove_init_pmu_irq() failed: %d\n", ret); 409 + } 410 + 411 + return 0; 412 + }
+31
drivers/soc/qcom/Kconfig
··· 13 13 config QCOM_PM 14 14 bool "Qualcomm Power Management" 15 15 depends on ARCH_QCOM && !ARM64 16 + select QCOM_SCM 16 17 help 17 18 QCOM Platform specific power driver to manage cores and L2 low power 18 19 modes. It interfaces with various system drivers to put the cores in 19 20 low power modes. 21 + 22 + config QCOM_SMD 23 + tristate "Qualcomm Shared Memory Driver (SMD)" 24 + depends on QCOM_SMEM 25 + help 26 + Say y here to enable support for the Qualcomm Shared Memory Driver 27 + providing communication channels to remote processors in Qualcomm 28 + platforms. 29 + 30 + config QCOM_SMD_RPM 31 + tristate "Qualcomm Resource Power Manager (RPM) over SMD" 32 + depends on QCOM_SMD && OF 33 + help 34 + If you say yes to this option, support will be included for the 35 + Resource Power Manager system found in the Qualcomm 8974 based 36 + devices. 37 + 38 + This is required to access many regulators, clocks and bus 39 + frequencies controlled by the RPM on these devices. 40 + 41 + Say M here if you want to include support for the Qualcomm RPM as a 42 + module. This will build a module called "qcom-smd-rpm". 43 + 44 + config QCOM_SMEM 45 + tristate "Qualcomm Shared Memory Manager (SMEM)" 46 + depends on ARCH_QCOM 47 + help 48 + Say y here to enable support for the Qualcomm Shared Memory Manager. 49 + The driver provides an interface to items in a heap shared among all 50 + processors in a Qualcomm platform.
+3
drivers/soc/qcom/Makefile
··· 1 1 obj-$(CONFIG_QCOM_GSBI) += qcom_gsbi.o 2 2 obj-$(CONFIG_QCOM_PM) += spm.o 3 + obj-$(CONFIG_QCOM_SMD) += smd.o 4 + obj-$(CONFIG_QCOM_SMD_RPM) += smd-rpm.o 5 + obj-$(CONFIG_QCOM_SMEM) += smem.o
+244
drivers/soc/qcom/smd-rpm.c
··· 1 + /* 2 + * Copyright (c) 2015, Sony Mobile Communications AB. 3 + * Copyright (c) 2012-2013, The Linux Foundation. All rights reserved. 4 + * 5 + * This program is free software; you can redistribute it and/or modify 6 + * it under the terms of the GNU General Public License version 2 and 7 + * only version 2 as published by the Free Software Foundation. 8 + * 9 + * This program is distributed in the hope that it will be useful, 10 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 + * GNU General Public License for more details. 13 + */ 14 + 15 + #include <linux/module.h> 16 + #include <linux/platform_device.h> 17 + #include <linux/of_platform.h> 18 + #include <linux/io.h> 19 + #include <linux/interrupt.h> 20 + 21 + #include <linux/soc/qcom/smd.h> 22 + #include <linux/soc/qcom/smd-rpm.h> 23 + 24 + #define RPM_REQUEST_TIMEOUT (5 * HZ) 25 + 26 + /** 27 + * struct qcom_smd_rpm - state of the rpm device driver 28 + * @rpm_channel: reference to the smd channel 29 + * @ack: completion for acks 30 + * @lock: mutual exclusion around the send/complete pair 31 + * @ack_status: result of the rpm request 32 + */ 33 + struct qcom_smd_rpm { 34 + struct qcom_smd_channel *rpm_channel; 35 + 36 + struct completion ack; 37 + struct mutex lock; 38 + int ack_status; 39 + }; 40 + 41 + /** 42 + * struct qcom_rpm_header - header for all rpm requests and responses 43 + * @service_type: identifier of the service 44 + * @length: length of the payload 45 + */ 46 + struct qcom_rpm_header { 47 + u32 service_type; 48 + u32 length; 49 + }; 50 + 51 + /** 52 + * struct qcom_rpm_request - request message to the rpm 53 + * @msg_id: identifier of the outgoing message 54 + * @flags: active/sleep state flags 55 + * @type: resource type 56 + * @id: resource id 57 + * @data_len: length of the payload following this header 58 + */ 59 + struct qcom_rpm_request { 60 + u32 msg_id; 61 + u32 flags; 62 + u32 type; 63 + 
u32 id; 64 + u32 data_len; 65 + }; 66 + 67 + /** 68 + * struct qcom_rpm_message - response message from the rpm 69 + * @msg_type: indicator of the type of message 70 + * @length: the size of this message, including the message header 71 + * @msg_id: message id 72 + * @message: textual message from the rpm 73 + * 74 + * Multiple of these messages can be stacked in an rpm message. 75 + */ 76 + struct qcom_rpm_message { 77 + u32 msg_type; 78 + u32 length; 79 + union { 80 + u32 msg_id; 81 + u8 message[0]; 82 + }; 83 + }; 84 + 85 + #define RPM_SERVICE_TYPE_REQUEST 0x00716572 /* "req\0" */ 86 + 87 + #define RPM_MSG_TYPE_ERR 0x00727265 /* "err\0" */ 88 + #define RPM_MSG_TYPE_MSG_ID 0x2367736d /* "msg#" */ 89 + 90 + /** 91 + * qcom_rpm_smd_write - write @buf to @type:@id 92 + * @rpm: rpm handle 93 + * @type: resource type 94 + * @id: resource identifier 95 + * @buf: the data to be written 96 + * @count: number of bytes in @buf 97 + */ 98 + int qcom_rpm_smd_write(struct qcom_smd_rpm *rpm, 99 + int state, 100 + u32 type, u32 id, 101 + void *buf, 102 + size_t count) 103 + { 104 + static unsigned msg_id = 1; 105 + int left; 106 + int ret; 107 + 108 + struct { 109 + struct qcom_rpm_header hdr; 110 + struct qcom_rpm_request req; 111 + u8 payload[count]; 112 + } pkt; 113 + 114 + /* SMD packets to the RPM may not exceed 256 bytes */ 115 + if (WARN_ON(sizeof(pkt) >= 256)) 116 + return -EINVAL; 117 + 118 + mutex_lock(&rpm->lock); 119 + 120 + pkt.hdr.service_type = RPM_SERVICE_TYPE_REQUEST; 121 + pkt.hdr.length = sizeof(struct qcom_rpm_request) + count; 122 + 123 + pkt.req.msg_id = msg_id++; 124 + pkt.req.flags = BIT(state); 125 + pkt.req.type = type; 126 + pkt.req.id = id; 127 + pkt.req.data_len = count; 128 + memcpy(pkt.payload, buf, count); 129 + 130 + ret = qcom_smd_send(rpm->rpm_channel, &pkt, sizeof(pkt)); 131 + if (ret) 132 + goto out; 133 + 134 + left = wait_for_completion_timeout(&rpm->ack, RPM_REQUEST_TIMEOUT); 135 + if (!left) 136 + ret = -ETIMEDOUT; 137 + else 138 + ret = 
rpm->ack_status; 139 + 140 + out: 141 + mutex_unlock(&rpm->lock); 142 + return ret; 143 + } 144 + EXPORT_SYMBOL(qcom_rpm_smd_write); 145 + 146 + static int qcom_smd_rpm_callback(struct qcom_smd_device *qsdev, 147 + const void *data, 148 + size_t count) 149 + { 150 + const struct qcom_rpm_header *hdr = data; 151 + const struct qcom_rpm_message *msg; 152 + struct qcom_smd_rpm *rpm = dev_get_drvdata(&qsdev->dev); 153 + const u8 *buf = data + sizeof(struct qcom_rpm_header); 154 + const u8 *end = buf + hdr->length; 155 + char msgbuf[32]; 156 + int status = 0; 157 + u32 len; 158 + 159 + if (hdr->service_type != RPM_SERVICE_TYPE_REQUEST || 160 + hdr->length < sizeof(struct qcom_rpm_message)) { 161 + dev_err(&qsdev->dev, "invalid request\n"); 162 + return 0; 163 + } 164 + 165 + while (buf < end) { 166 + msg = (struct qcom_rpm_message *)buf; 167 + switch (msg->msg_type) { 168 + case RPM_MSG_TYPE_MSG_ID: 169 + break; 170 + case RPM_MSG_TYPE_ERR: 171 + len = min_t(u32, ALIGN(msg->length, 4), sizeof(msgbuf)); 172 + memcpy_fromio(msgbuf, msg->message, len); 173 + msgbuf[len - 1] = 0; 174 + 175 + if (!strcmp(msgbuf, "resource does not exist")) 176 + status = -ENXIO; 177 + else 178 + status = -EINVAL; 179 + break; 180 + } 181 + 182 + buf = PTR_ALIGN(buf + 2 * sizeof(u32) + msg->length, 4); 183 + } 184 + 185 + rpm->ack_status = status; 186 + complete(&rpm->ack); 187 + return 0; 188 + } 189 + 190 + static int qcom_smd_rpm_probe(struct qcom_smd_device *sdev) 191 + { 192 + struct qcom_smd_rpm *rpm; 193 + 194 + rpm = devm_kzalloc(&sdev->dev, sizeof(*rpm), GFP_KERNEL); 195 + if (!rpm) 196 + return -ENOMEM; 197 + 198 + mutex_init(&rpm->lock); 199 + init_completion(&rpm->ack); 200 + 201 + rpm->rpm_channel = sdev->channel; 202 + 203 + dev_set_drvdata(&sdev->dev, rpm); 204 + 205 + return of_platform_populate(sdev->dev.of_node, NULL, NULL, &sdev->dev); 206 + } 207 + 208 + static void qcom_smd_rpm_remove(struct qcom_smd_device *sdev) 209 + { 210 + of_platform_depopulate(&sdev->dev); 211 + } 
212 + 213 + static const struct of_device_id qcom_smd_rpm_of_match[] = { 214 + { .compatible = "qcom,rpm-msm8974" }, 215 + {} 216 + }; 217 + MODULE_DEVICE_TABLE(of, qcom_smd_rpm_of_match); 218 + 219 + static struct qcom_smd_driver qcom_smd_rpm_driver = { 220 + .probe = qcom_smd_rpm_probe, 221 + .remove = qcom_smd_rpm_remove, 222 + .callback = qcom_smd_rpm_callback, 223 + .driver = { 224 + .name = "qcom_smd_rpm", 225 + .owner = THIS_MODULE, 226 + .of_match_table = qcom_smd_rpm_of_match, 227 + }, 228 + }; 229 + 230 + static int __init qcom_smd_rpm_init(void) 231 + { 232 + return qcom_smd_driver_register(&qcom_smd_rpm_driver); 233 + } 234 + arch_initcall(qcom_smd_rpm_init); 235 + 236 + static void __exit qcom_smd_rpm_exit(void) 237 + { 238 + qcom_smd_driver_unregister(&qcom_smd_rpm_driver); 239 + } 240 + module_exit(qcom_smd_rpm_exit); 241 + 242 + MODULE_AUTHOR("Bjorn Andersson <bjorn.andersson@sonymobile.com>"); 243 + MODULE_DESCRIPTION("Qualcomm SMD backed RPM driver"); 244 + MODULE_LICENSE("GPL v2");
+1319
drivers/soc/qcom/smd.c
··· 1 + /* 2 + * Copyright (c) 2015, Sony Mobile Communications AB. 3 + * Copyright (c) 2012-2013, The Linux Foundation. All rights reserved. 4 + * 5 + * This program is free software; you can redistribute it and/or modify 6 + * it under the terms of the GNU General Public License version 2 and 7 + * only version 2 as published by the Free Software Foundation. 8 + * 9 + * This program is distributed in the hope that it will be useful, 10 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 + * GNU General Public License for more details. 13 + */ 14 + 15 + #include <linux/interrupt.h> 16 + #include <linux/io.h> 17 + #include <linux/mfd/syscon.h> 18 + #include <linux/module.h> 19 + #include <linux/of_irq.h> 20 + #include <linux/of_platform.h> 21 + #include <linux/platform_device.h> 22 + #include <linux/regmap.h> 23 + #include <linux/sched.h> 24 + #include <linux/slab.h> 25 + #include <linux/soc/qcom/smd.h> 26 + #include <linux/soc/qcom/smem.h> 27 + #include <linux/wait.h> 28 + 29 + /* 30 + * The Qualcomm Shared Memory communication solution provides point-to-point 31 + * channels for clients to send and receive streaming or packet based data. 32 + * 33 + * Each channel consists of a control item (channel info) and a ring buffer 34 + * pair. The channel info carries information related to channel state, flow 35 + * control and the offsets within the ring buffer. 36 + * 37 + * All allocated channels are listed in an allocation table, identifying the 38 + * pair of items by name, type and remote processor. 39 + * 40 + * Upon creating a new channel the remote processor allocates channel info and 41 + * ring buffer items from the smem heap and populates the allocation table. An 42 + * interrupt is sent to the other end of the channel and a scan for new 43 + * channels should be done. A channel never goes away, it will only change 44 + * state. 
45 + * 46 + * The remote processor signals its intent to bring up the communication 47 + * channel by setting the state of its end of the channel to "opening" and 48 + * sends out an interrupt. We detect this change and register an smd device to 49 + * consume the channel. Upon finding a consumer we finish the handshake and the 50 + * channel is up. 51 + * 52 + * Upon closing a channel, the remote processor will update the state of its 53 + * end of the channel and signal us; we will then unregister any attached 54 + * device and close our end of the channel. 55 + * 56 + * Devices attached to a channel can use the qcom_smd_send function to push 57 + * data to the channel; this is done by copying the data into the tx ring 58 + * buffer, updating the pointers in the channel info and signaling the remote 59 + * processor. 60 + * 61 + * The remote processor does the equivalent when it transfers data, and upon 62 + * receiving the interrupt we check the channel info for new data and deliver 63 + * it to the attached device. If the device is not ready to receive the data 64 + * we leave it in the ring buffer for now. 65 + */ 66 + 67 + struct smd_channel_info; 68 + struct smd_channel_info_word; 69 + 70 + #define SMD_ALLOC_TBL_COUNT 2 71 + #define SMD_ALLOC_TBL_SIZE 64 72 + 73 + /* 74 + * This lists the various smem heap items relevant for the allocation table and 75 + * smd channel entries. 
76 + */ 77 + static const struct { 78 + unsigned alloc_tbl_id; 79 + unsigned info_base_id; 80 + unsigned fifo_base_id; 81 + } smem_items[SMD_ALLOC_TBL_COUNT] = { 82 + { 83 + .alloc_tbl_id = 13, 84 + .info_base_id = 14, 85 + .fifo_base_id = 338 86 + }, 87 + { 88 + .alloc_tbl_id = 14, 89 + .info_base_id = 266, 90 + .fifo_base_id = 202, 91 + }, 92 + }; 93 + 94 + /** 95 + * struct qcom_smd_edge - representing a remote processor 96 + * @smd: handle to qcom_smd 97 + * @of_node: of_node handle for information related to this edge 98 + * @edge_id: identifier of this edge 99 + * @irq: interrupt for signals on this edge 100 + * @ipc_regmap: regmap handle holding the outgoing ipc register 101 + * @ipc_offset: offset within @ipc_regmap of the register for ipc 102 + * @ipc_bit: bit in the register at @ipc_offset of @ipc_regmap 103 + * @channels: list of all channels detected on this edge 104 + * @channels_lock: guard for modifications of @channels 105 + * @allocated: array of bitmaps representing already allocated channels 106 + * @need_rescan: flag that the @work needs to scan smem for new channels 107 + * @smem_available: last available amount of smem triggering a channel scan 108 + * @work: work item for edge housekeeping 109 + */ 110 + struct qcom_smd_edge { 111 + struct qcom_smd *smd; 112 + struct device_node *of_node; 113 + unsigned edge_id; 114 + 115 + int irq; 116 + 117 + struct regmap *ipc_regmap; 118 + int ipc_offset; 119 + int ipc_bit; 120 + 121 + struct list_head channels; 122 + spinlock_t channels_lock; 123 + 124 + DECLARE_BITMAP(allocated[SMD_ALLOC_TBL_COUNT], SMD_ALLOC_TBL_SIZE); 125 + 126 + bool need_rescan; 127 + unsigned smem_available; 128 + 129 + struct work_struct work; 130 + }; 131 + 132 + /* 133 + * SMD channel states. 
134 + */ 135 + enum smd_channel_state { 136 + SMD_CHANNEL_CLOSED, 137 + SMD_CHANNEL_OPENING, 138 + SMD_CHANNEL_OPENED, 139 + SMD_CHANNEL_FLUSHING, 140 + SMD_CHANNEL_CLOSING, 141 + SMD_CHANNEL_RESET, 142 + SMD_CHANNEL_RESET_OPENING 143 + }; 144 + 145 + /** 146 + * struct qcom_smd_channel - smd channel struct 147 + * @edge: qcom_smd_edge this channel is living on 148 + * @qsdev: reference to an associated smd client device 149 + * @name: name of the channel 150 + * @state: local state of the channel 151 + * @remote_state: remote state of the channel 152 + * @tx_info: byte aligned outgoing channel info 153 + * @rx_info: byte aligned incoming channel info 154 + * @tx_info_word: word aligned outgoing channel info 155 + * @rx_info_word: word aligned incoming channel info 156 + * @tx_lock: lock to make writes to the channel mutually exclusive 157 + * @fblockread_event: wakeup event tied to tx fBLOCKREADINTR 158 + * @tx_fifo: pointer to the outgoing ring buffer 159 + * @rx_fifo: pointer to the incoming ring buffer 160 + * @fifo_size: size of each ring buffer 161 + * @bounce_buffer: bounce buffer for reading wrapped packets 162 + * @cb: callback function registered for this channel 163 + * @recv_lock: guard for rx info modifications and cb pointer 164 + * @pkt_size: size of the currently handled packet 165 + * @list: list entry for @channels in qcom_smd_edge 166 + */ 167 + struct qcom_smd_channel { 168 + struct qcom_smd_edge *edge; 169 + 170 + struct qcom_smd_device *qsdev; 171 + 172 + char *name; 173 + enum smd_channel_state state; 174 + enum smd_channel_state remote_state; 175 + 176 + struct smd_channel_info *tx_info; 177 + struct smd_channel_info *rx_info; 178 + 179 + struct smd_channel_info_word *tx_info_word; 180 + struct smd_channel_info_word *rx_info_word; 181 + 182 + struct mutex tx_lock; 183 + wait_queue_head_t fblockread_event; 184 + 185 + void *tx_fifo; 186 + void *rx_fifo; 187 + int fifo_size; 188 + 189 + void *bounce_buffer; 190 + int (*cb)(struct 
qcom_smd_device *, const void *, size_t); 191 + 192 + spinlock_t recv_lock; 193 + 194 + int pkt_size; 195 + 196 + struct list_head list; 197 + }; 198 + 199 + /** 200 + * struct qcom_smd - smd struct 201 + * @dev: device struct 202 + * @num_edges: number of entries in @edges 203 + * @edges: array of edges to be handled 204 + */ 205 + struct qcom_smd { 206 + struct device *dev; 207 + 208 + unsigned num_edges; 209 + struct qcom_smd_edge edges[0]; 210 + }; 211 + 212 + /* 213 + * Format of the smd_info smem items, for byte aligned channels. 214 + */ 215 + struct smd_channel_info { 216 + u32 state; 217 + u8 fDSR; 218 + u8 fCTS; 219 + u8 fCD; 220 + u8 fRI; 221 + u8 fHEAD; 222 + u8 fTAIL; 223 + u8 fSTATE; 224 + u8 fBLOCKREADINTR; 225 + u32 tail; 226 + u32 head; 227 + }; 228 + 229 + /* 230 + * Format of the smd_info smem items, for word aligned channels. 231 + */ 232 + struct smd_channel_info_word { 233 + u32 state; 234 + u32 fDSR; 235 + u32 fCTS; 236 + u32 fCD; 237 + u32 fRI; 238 + u32 fHEAD; 239 + u32 fTAIL; 240 + u32 fSTATE; 241 + u32 fBLOCKREADINTR; 242 + u32 tail; 243 + u32 head; 244 + }; 245 + 246 + #define GET_RX_CHANNEL_INFO(channel, param) \ 247 + (channel->rx_info_word ? \ 248 + channel->rx_info_word->param : \ 249 + channel->rx_info->param) 250 + 251 + #define SET_RX_CHANNEL_INFO(channel, param, value) \ 252 + (channel->rx_info_word ? \ 253 + (channel->rx_info_word->param = value) : \ 254 + (channel->rx_info->param = value)) 255 + 256 + #define GET_TX_CHANNEL_INFO(channel, param) \ 257 + (channel->tx_info_word ? \ 258 + channel->tx_info_word->param : \ 259 + channel->tx_info->param) 260 + 261 + #define SET_TX_CHANNEL_INFO(channel, param, value) \ 262 + (channel->tx_info_word ? 
\ 263 + (channel->tx_info_word->param = value) : \ 264 + (channel->tx_info->param = value)) 265 + 266 + /** 267 + * struct qcom_smd_alloc_entry - channel allocation entry 268 + * @name: channel name 269 + * @cid: channel index 270 + * @flags: channel flags and edge id 271 + * @ref_count: reference count of the channel 272 + */ 273 + struct qcom_smd_alloc_entry { 274 + u8 name[20]; 275 + u32 cid; 276 + u32 flags; 277 + u32 ref_count; 278 + } __packed; 279 + 280 + #define SMD_CHANNEL_FLAGS_EDGE_MASK 0xff 281 + #define SMD_CHANNEL_FLAGS_STREAM BIT(8) 282 + #define SMD_CHANNEL_FLAGS_PACKET BIT(9) 283 + 284 + /* 285 + * Each smd packet contains a 20 byte header, with the first 4 being the length 286 + * of the packet. 287 + */ 288 + #define SMD_PACKET_HEADER_LEN 20 289 + 290 + /* 291 + * Signal the remote processor associated with 'channel'. 292 + */ 293 + static void qcom_smd_signal_channel(struct qcom_smd_channel *channel) 294 + { 295 + struct qcom_smd_edge *edge = channel->edge; 296 + 297 + regmap_write(edge->ipc_regmap, edge->ipc_offset, BIT(edge->ipc_bit)); 298 + } 299 + 300 + /* 301 + * Initialize the tx channel info 302 + */ 303 + static void qcom_smd_channel_reset(struct qcom_smd_channel *channel) 304 + { 305 + SET_TX_CHANNEL_INFO(channel, state, SMD_CHANNEL_CLOSED); 306 + SET_TX_CHANNEL_INFO(channel, fDSR, 0); 307 + SET_TX_CHANNEL_INFO(channel, fCTS, 0); 308 + SET_TX_CHANNEL_INFO(channel, fCD, 0); 309 + SET_TX_CHANNEL_INFO(channel, fRI, 0); 310 + SET_TX_CHANNEL_INFO(channel, fHEAD, 0); 311 + SET_TX_CHANNEL_INFO(channel, fTAIL, 0); 312 + SET_TX_CHANNEL_INFO(channel, fSTATE, 1); 313 + SET_TX_CHANNEL_INFO(channel, fBLOCKREADINTR, 0); 314 + SET_TX_CHANNEL_INFO(channel, head, 0); 315 + SET_TX_CHANNEL_INFO(channel, tail, 0); 316 + 317 + qcom_smd_signal_channel(channel); 318 + 319 + channel->state = SMD_CHANNEL_CLOSED; 320 + channel->pkt_size = 0; 321 + } 322 + 323 + /* 324 + * Calculate the amount of data available in the rx fifo 325 + */ 326 + static size_t 
qcom_smd_channel_get_rx_avail(struct qcom_smd_channel *channel) 327 + { 328 + unsigned head; 329 + unsigned tail; 330 + 331 + head = GET_RX_CHANNEL_INFO(channel, head); 332 + tail = GET_RX_CHANNEL_INFO(channel, tail); 333 + 334 + return (head - tail) & (channel->fifo_size - 1); 335 + } 336 + 337 + /* 338 + * Set tx channel state and inform the remote processor 339 + */ 340 + static void qcom_smd_channel_set_state(struct qcom_smd_channel *channel, 341 + int state) 342 + { 343 + struct qcom_smd_edge *edge = channel->edge; 344 + bool is_open = state == SMD_CHANNEL_OPENED; 345 + 346 + if (channel->state == state) 347 + return; 348 + 349 + dev_dbg(edge->smd->dev, "set_state(%s, %d)\n", channel->name, state); 350 + 351 + SET_TX_CHANNEL_INFO(channel, fDSR, is_open); 352 + SET_TX_CHANNEL_INFO(channel, fCTS, is_open); 353 + SET_TX_CHANNEL_INFO(channel, fCD, is_open); 354 + 355 + SET_TX_CHANNEL_INFO(channel, state, state); 356 + SET_TX_CHANNEL_INFO(channel, fSTATE, 1); 357 + 358 + channel->state = state; 359 + qcom_smd_signal_channel(channel); 360 + } 361 + 362 + /* 363 + * Copy count bytes of data using 32bit accesses, if that's required. 364 + */ 365 + static void smd_copy_to_fifo(void __iomem *_dst, 366 + const void *_src, 367 + size_t count, 368 + bool word_aligned) 369 + { 370 + u32 *dst = (u32 *)_dst; 371 + u32 *src = (u32 *)_src; 372 + 373 + if (word_aligned) { 374 + count /= sizeof(u32); 375 + while (count--) 376 + writel_relaxed(*src++, dst++); 377 + } else { 378 + memcpy_toio(_dst, _src, count); 379 + } 380 + } 381 + 382 + /* 383 + * Copy count bytes of data using 32bit accesses, if that is required. 
 */
static void smd_copy_from_fifo(void *_dst,
			       const void __iomem *_src,
			       size_t count,
			       bool word_aligned)
{
	u32 *dst = (u32 *)_dst;
	u32 *src = (u32 *)_src;

	if (word_aligned) {
		count /= sizeof(u32);
		while (count--)
			*dst++ = readl_relaxed(src++);
	} else {
		memcpy_fromio(_dst, _src, count);
	}
}

/*
 * Read count bytes of data from the rx fifo into buf, but don't advance the
 * tail.
 */
static size_t qcom_smd_channel_peek(struct qcom_smd_channel *channel,
				    void *buf, size_t count)
{
	bool word_aligned;
	unsigned tail;
	size_t len;

	word_aligned = channel->rx_info_word != NULL;
	tail = GET_RX_CHANNEL_INFO(channel, tail);

	len = min_t(size_t, count, channel->fifo_size - tail);
	if (len) {
		smd_copy_from_fifo(buf,
				   channel->rx_fifo + tail,
				   len,
				   word_aligned);
	}

	if (len != count) {
		smd_copy_from_fifo(buf + len,
				   channel->rx_fifo,
				   count - len,
				   word_aligned);
	}

	return count;
}

/*
 * Advance the rx tail by count bytes.
 */
static void qcom_smd_channel_advance(struct qcom_smd_channel *channel,
				     size_t count)
{
	unsigned tail;

	tail = GET_RX_CHANNEL_INFO(channel, tail);
	tail += count;
	tail &= (channel->fifo_size - 1);
	SET_RX_CHANNEL_INFO(channel, tail, tail);
}

/*
 * Read out a single packet from the rx fifo and deliver it to the device
 */
static int qcom_smd_channel_recv_single(struct qcom_smd_channel *channel)
{
	struct qcom_smd_device *qsdev = channel->qsdev;
	unsigned tail;
	size_t len;
	void *ptr;
	int ret;

	if (!channel->cb)
		return 0;

	tail = GET_RX_CHANNEL_INFO(channel, tail);

	/* Use bounce buffer if the data wraps */
	if (tail + channel->pkt_size >= channel->fifo_size) {
		ptr = channel->bounce_buffer;
		len = qcom_smd_channel_peek(channel, ptr, channel->pkt_size);
	} else {
		ptr = channel->rx_fifo + tail;
		len = channel->pkt_size;
	}

	ret = channel->cb(qsdev, ptr, len);
	if (ret < 0)
		return ret;

	/* Only forward the tail if the client consumed the data */
	qcom_smd_channel_advance(channel, len);

	channel->pkt_size = 0;

	return 0;
}

/*
 * Per channel interrupt handling
 */
static bool qcom_smd_channel_intr(struct qcom_smd_channel *channel)
{
	bool need_state_scan = false;
	int remote_state;
	u32 pktlen;
	int avail;
	int ret;

	/* Handle state changes */
	remote_state = GET_RX_CHANNEL_INFO(channel, state);
	if (remote_state != channel->remote_state) {
		channel->remote_state = remote_state;
		need_state_scan = true;
	}
	/* Indicate that we have seen any state change */
	SET_RX_CHANNEL_INFO(channel, fSTATE, 0);

	/* Signal waiting qcom_smd_send() about the interrupt */
	if (!GET_TX_CHANNEL_INFO(channel, fBLOCKREADINTR))
		wake_up_interruptible(&channel->fblockread_event);

	/* Don't consume any data until we've opened the channel */
	if (channel->state != SMD_CHANNEL_OPENED)
		goto out;

	/* Indicate that we've seen the new data */
	SET_RX_CHANNEL_INFO(channel, fHEAD, 0);

	/* Consume data */
	for (;;) {
		avail = qcom_smd_channel_get_rx_avail(channel);

		if (!channel->pkt_size && avail >= SMD_PACKET_HEADER_LEN) {
			qcom_smd_channel_peek(channel, &pktlen, sizeof(pktlen));
			qcom_smd_channel_advance(channel, SMD_PACKET_HEADER_LEN);
			channel->pkt_size = pktlen;
		} else if (channel->pkt_size && avail >= channel->pkt_size) {
			ret = qcom_smd_channel_recv_single(channel);
			if (ret)
				break;
		} else {
			break;
		}
	}

	/* Indicate that we have seen and updated tail */
	SET_RX_CHANNEL_INFO(channel, fTAIL, 1);

	/* Signal the remote that we've consumed the data (if requested) */
	if (!GET_RX_CHANNEL_INFO(channel, fBLOCKREADINTR)) {
		/* Ensure ordering of channel info updates */
		wmb();

		qcom_smd_signal_channel(channel);
	}

out:
	return need_state_scan;
}

/*
 * The edge interrupts are triggered by the remote processor on state changes,
 * channel info updates or when new channels are created.
 */
static irqreturn_t qcom_smd_edge_intr(int irq, void *data)
{
	struct qcom_smd_edge *edge = data;
	struct qcom_smd_channel *channel;
	unsigned available;
	bool kick_worker = false;

	/*
	 * Handle state changes or data on each of the channels on this edge
	 */
	spin_lock(&edge->channels_lock);
	list_for_each_entry(channel, &edge->channels, list) {
		spin_lock(&channel->recv_lock);
		kick_worker |= qcom_smd_channel_intr(channel);
		spin_unlock(&channel->recv_lock);
	}
	spin_unlock(&edge->channels_lock);

	/*
	 * Creating a new channel requires allocating an smem entry, so we only
	 * have to scan if the amount of available space in smem has changed
	 * since the last scan.
	 */
	available = qcom_smem_get_free_space(edge->edge_id);
	if (available != edge->smem_available) {
		edge->smem_available = available;
		edge->need_rescan = true;
		kick_worker = true;
	}

	if (kick_worker)
		schedule_work(&edge->work);

	return IRQ_HANDLED;
}

/*
 * Delivers any outstanding packets in the rx fifo; can be used after probe of
 * the clients to deliver any packets that weren't delivered before the client
 * was set up.
 */
static void qcom_smd_channel_resume(struct qcom_smd_channel *channel)
{
	unsigned long flags;

	spin_lock_irqsave(&channel->recv_lock, flags);
	qcom_smd_channel_intr(channel);
	spin_unlock_irqrestore(&channel->recv_lock, flags);
}

/*
 * Calculate how much space is available in the tx fifo.
 */
static size_t qcom_smd_get_tx_avail(struct qcom_smd_channel *channel)
{
	unsigned head;
	unsigned tail;
	unsigned mask = channel->fifo_size - 1;

	head = GET_TX_CHANNEL_INFO(channel, head);
	tail = GET_TX_CHANNEL_INFO(channel, tail);

	return mask - ((head - tail) & mask);
}

/*
 * Write count bytes of data into channel, possibly wrapping in the ring buffer
 */
static int qcom_smd_write_fifo(struct qcom_smd_channel *channel,
			       const void *data,
			       size_t count)
{
	bool word_aligned;
	unsigned head;
	size_t len;

	word_aligned = channel->tx_info_word != NULL;
	head = GET_TX_CHANNEL_INFO(channel, head);

	len = min_t(size_t, count, channel->fifo_size - head);
	if (len) {
		smd_copy_to_fifo(channel->tx_fifo + head,
				 data,
				 len,
				 word_aligned);
	}

	if (len != count) {
		smd_copy_to_fifo(channel->tx_fifo,
				 data + len,
				 count - len,
				 word_aligned);
	}

	head += count;
	head &= (channel->fifo_size - 1);
	SET_TX_CHANNEL_INFO(channel, head, head);

	return count;
}

/**
 * qcom_smd_send - write data to smd channel
 * @channel:	channel handle
 * @data:	buffer of data to write
 * @len:	number of bytes to write
 *
 * This is a blocking write of len bytes into the channel's tx ring buffer,
 * followed by a signal to the remote end. It will sleep until there is enough
 * space available in the tx buffer, utilizing the fBLOCKREADINTR signaling
 * mechanism to avoid polling.
 */
int qcom_smd_send(struct qcom_smd_channel *channel, const void *data, int len)
{
	u32 hdr[5] = { len, };
	int tlen = sizeof(hdr) + len;
	int ret;

	/* Word aligned channels only accept word size aligned data */
	if (channel->rx_info_word != NULL && len % 4)
		return -EINVAL;

	ret = mutex_lock_interruptible(&channel->tx_lock);
	if (ret)
		return ret;

	while (qcom_smd_get_tx_avail(channel) < tlen) {
		if (channel->state != SMD_CHANNEL_OPENED) {
			ret = -EPIPE;
			goto out;
		}

		SET_TX_CHANNEL_INFO(channel, fBLOCKREADINTR, 1);

		ret = wait_event_interruptible(channel->fblockread_event,
				qcom_smd_get_tx_avail(channel) >= tlen ||
				channel->state != SMD_CHANNEL_OPENED);
		if (ret)
			goto out;

		SET_TX_CHANNEL_INFO(channel, fBLOCKREADINTR, 0);
	}

	SET_TX_CHANNEL_INFO(channel, fTAIL, 0);

	qcom_smd_write_fifo(channel, hdr, sizeof(hdr));
	qcom_smd_write_fifo(channel, data, len);

	SET_TX_CHANNEL_INFO(channel, fHEAD, 1);

	/* Ensure ordering of channel info updates */
	wmb();

	qcom_smd_signal_channel(channel);

out:
	mutex_unlock(&channel->tx_lock);

	return ret;
}
EXPORT_SYMBOL(qcom_smd_send);

static struct qcom_smd_device *to_smd_device(struct device *dev)
{
	return container_of(dev, struct qcom_smd_device, dev);
}

static struct qcom_smd_driver *to_smd_driver(struct device *dev)
{
	struct qcom_smd_device *qsdev = to_smd_device(dev);

	return container_of(qsdev->dev.driver, struct qcom_smd_driver, driver);
}

static int qcom_smd_dev_match(struct device *dev, struct device_driver *drv)
{
	return of_driver_match_device(dev, drv);
}

/*
 * Probe the smd client.
 *
 * The remote side has indicated that it wants the channel to be opened, so
 * complete the state handshake and probe our client driver.
 */
static int qcom_smd_dev_probe(struct device *dev)
{
	struct qcom_smd_device *qsdev = to_smd_device(dev);
	struct qcom_smd_driver *qsdrv = to_smd_driver(dev);
	struct qcom_smd_channel *channel = qsdev->channel;
	size_t bb_size;
	int ret;

	/*
	 * Packets are maximum 4k, but reduce if the fifo is smaller
	 */
	bb_size = min(channel->fifo_size, SZ_4K);
	channel->bounce_buffer = kmalloc(bb_size, GFP_KERNEL);
	if (!channel->bounce_buffer)
		return -ENOMEM;

	channel->cb = qsdrv->callback;

	qcom_smd_channel_set_state(channel, SMD_CHANNEL_OPENING);

	qcom_smd_channel_set_state(channel, SMD_CHANNEL_OPENED);

	ret = qsdrv->probe(qsdev);
	if (ret)
		goto err;

	qcom_smd_channel_resume(channel);

	return 0;

err:
	dev_err(&qsdev->dev, "probe failed\n");

	channel->cb = NULL;
	kfree(channel->bounce_buffer);
	channel->bounce_buffer = NULL;

	qcom_smd_channel_set_state(channel, SMD_CHANNEL_CLOSED);
	return ret;
}

/*
 * Remove the smd client.
 *
 * The channel is going away, for some reason, so remove the smd client and
 * reset the channel state.
 */
static int qcom_smd_dev_remove(struct device *dev)
{
	struct qcom_smd_device *qsdev = to_smd_device(dev);
	struct qcom_smd_driver *qsdrv = to_smd_driver(dev);
	struct qcom_smd_channel *channel = qsdev->channel;
	unsigned long flags;

	qcom_smd_channel_set_state(channel, SMD_CHANNEL_CLOSING);

	/*
	 * Make sure we don't race with the code receiving data.
	 */
	spin_lock_irqsave(&channel->recv_lock, flags);
	channel->cb = NULL;
	spin_unlock_irqrestore(&channel->recv_lock, flags);

	/* Wake up any sleepers in qcom_smd_send() */
	wake_up_interruptible(&channel->fblockread_event);

	/*
	 * We expect that the client might block in remove() waiting for any
	 * outstanding calls to qcom_smd_send() to wake up and finish.
	 */
	if (qsdrv->remove)
		qsdrv->remove(qsdev);

	/*
	 * The client is now gone, cleanup and reset the channel state.
	 */
	channel->qsdev = NULL;
	kfree(channel->bounce_buffer);
	channel->bounce_buffer = NULL;

	qcom_smd_channel_set_state(channel, SMD_CHANNEL_CLOSED);

	qcom_smd_channel_reset(channel);

	return 0;
}

static struct bus_type qcom_smd_bus = {
	.name = "qcom_smd",
	.match = qcom_smd_dev_match,
	.probe = qcom_smd_dev_probe,
	.remove = qcom_smd_dev_remove,
};

/*
 * Release function for the qcom_smd_device object.
 */
static void qcom_smd_release_device(struct device *dev)
{
	struct qcom_smd_device *qsdev = to_smd_device(dev);

	kfree(qsdev);
}

/*
 * Finds the device_node for the smd child interested in this channel.
 */
static struct device_node *qcom_smd_match_channel(struct device_node *edge_node,
						  const char *channel)
{
	struct device_node *child;
	const char *name;
	const char *key;
	int ret;

	for_each_available_child_of_node(edge_node, child) {
		key = "qcom,smd-channels";
		ret = of_property_read_string(child, key, &name);
		if (ret) {
			of_node_put(child);
			continue;
		}

		if (strcmp(name, channel) == 0)
			return child;
	}

	return NULL;
}

/*
 * Create an smd client device for a channel that is being opened.
 */
static int qcom_smd_create_device(struct qcom_smd_channel *channel)
{
	struct qcom_smd_device *qsdev;
	struct qcom_smd_edge *edge = channel->edge;
	struct device_node *node;
	struct qcom_smd *smd = edge->smd;
	int ret;

	if (channel->qsdev)
		return -EEXIST;

	node = qcom_smd_match_channel(edge->of_node, channel->name);
	if (!node) {
		dev_dbg(smd->dev, "no match for '%s'\n", channel->name);
		return -ENXIO;
	}

	dev_dbg(smd->dev, "registering '%s'\n", channel->name);

	qsdev = kzalloc(sizeof(*qsdev), GFP_KERNEL);
	if (!qsdev)
		return -ENOMEM;

	dev_set_name(&qsdev->dev, "%s.%s", edge->of_node->name, node->name);
	qsdev->dev.parent = smd->dev;
	qsdev->dev.bus = &qcom_smd_bus;
	qsdev->dev.release = qcom_smd_release_device;
	qsdev->dev.of_node = node;

	qsdev->channel = channel;

	channel->qsdev = qsdev;

	ret = device_register(&qsdev->dev);
	if (ret) {
		dev_err(smd->dev, "device_register failed: %d\n", ret);
		put_device(&qsdev->dev);
	}

	return ret;
}

/*
 * Destroy an smd client device for a channel that's going away.
 */
static void qcom_smd_destroy_device(struct qcom_smd_channel *channel)
{
	struct device *dev;

	BUG_ON(!channel->qsdev);

	dev = &channel->qsdev->dev;

	device_unregister(dev);
	of_node_put(dev->of_node);
	put_device(dev);
}

/**
 * qcom_smd_driver_register - register an smd driver
 * @qsdrv:	qcom_smd_driver struct
 */
int qcom_smd_driver_register(struct qcom_smd_driver *qsdrv)
{
	qsdrv->driver.bus = &qcom_smd_bus;
	return driver_register(&qsdrv->driver);
}
EXPORT_SYMBOL(qcom_smd_driver_register);

/**
 * qcom_smd_driver_unregister - unregister an smd driver
 * @qsdrv:	qcom_smd_driver struct
 */
void qcom_smd_driver_unregister(struct qcom_smd_driver *qsdrv)
{
	driver_unregister(&qsdrv->driver);
}
EXPORT_SYMBOL(qcom_smd_driver_unregister);

/*
 * Allocate the qcom_smd_channel object for a newly found smd channel,
 * retrieving and validating the smem items involved.
 */
static struct qcom_smd_channel *qcom_smd_create_channel(struct qcom_smd_edge *edge,
							unsigned smem_info_item,
							unsigned smem_fifo_item,
							char *name)
{
	struct qcom_smd_channel *channel;
	struct qcom_smd *smd = edge->smd;
	size_t fifo_size;
	size_t info_size;
	void *fifo_base;
	void *info;
	int ret;

	channel = devm_kzalloc(smd->dev, sizeof(*channel), GFP_KERNEL);
	if (!channel)
		return ERR_PTR(-ENOMEM);

	channel->edge = edge;
	channel->name = devm_kstrdup(smd->dev, name, GFP_KERNEL);
	if (!channel->name)
		return ERR_PTR(-ENOMEM);

	mutex_init(&channel->tx_lock);
	spin_lock_init(&channel->recv_lock);
	init_waitqueue_head(&channel->fblockread_event);

	ret = qcom_smem_get(edge->edge_id, smem_info_item, (void **)&info, &info_size);
	if (ret)
		goto free_name_and_channel;

	/*
	 * Use the size of the item to figure out which channel info struct to
	 * use.
	 */
	if (info_size == 2 * sizeof(struct smd_channel_info_word)) {
		channel->tx_info_word = info;
		channel->rx_info_word = info + sizeof(struct smd_channel_info_word);
	} else if (info_size == 2 * sizeof(struct smd_channel_info)) {
		channel->tx_info = info;
		channel->rx_info = info + sizeof(struct smd_channel_info);
	} else {
		dev_err(smd->dev,
			"channel info of size %zu not supported\n", info_size);
		ret = -EINVAL;
		goto free_name_and_channel;
	}

	ret = qcom_smem_get(edge->edge_id, smem_fifo_item, &fifo_base, &fifo_size);
	if (ret)
		goto free_name_and_channel;

	/* The channel consists of a rx and a tx fifo of equal size */
	fifo_size /= 2;

	dev_dbg(smd->dev, "new channel '%s' info-size: %zu fifo-size: %zu\n",
		name, info_size, fifo_size);

	channel->tx_fifo = fifo_base;
	channel->rx_fifo = fifo_base + fifo_size;
	channel->fifo_size = fifo_size;

	qcom_smd_channel_reset(channel);

	return channel;

free_name_and_channel:
	devm_kfree(smd->dev, channel->name);
	devm_kfree(smd->dev, channel);

	return ERR_PTR(ret);
}

/*
 * Scans the allocation table for any newly allocated channels, calls
 * qcom_smd_create_channel() to create representations of these and adds
 * them to the edge's list of channels.
 */
static void qcom_discover_channels(struct qcom_smd_edge *edge)
{
	struct qcom_smd_alloc_entry *alloc_tbl;
	struct qcom_smd_alloc_entry *entry;
	struct qcom_smd_channel *channel;
	struct qcom_smd *smd = edge->smd;
	unsigned long flags;
	unsigned fifo_id;
	unsigned info_id;
	int ret;
	int tbl;
	int i;

	for (tbl = 0; tbl < SMD_ALLOC_TBL_COUNT; tbl++) {
		ret = qcom_smem_get(edge->edge_id,
				    smem_items[tbl].alloc_tbl_id,
				    (void **)&alloc_tbl,
				    NULL);
		if (ret < 0)
			continue;

		for (i = 0; i < SMD_ALLOC_TBL_SIZE; i++) {
			entry = &alloc_tbl[i];
			if (test_bit(i, edge->allocated[tbl]))
				continue;

			if (entry->ref_count == 0)
				continue;

			if (!entry->name[0])
				continue;

			if (!(entry->flags & SMD_CHANNEL_FLAGS_PACKET))
				continue;

			if ((entry->flags & SMD_CHANNEL_FLAGS_EDGE_MASK) != edge->edge_id)
				continue;

			info_id = smem_items[tbl].info_base_id + entry->cid;
			fifo_id = smem_items[tbl].fifo_base_id + entry->cid;

			channel = qcom_smd_create_channel(edge, info_id, fifo_id, entry->name);
			if (IS_ERR(channel))
				continue;

			spin_lock_irqsave(&edge->channels_lock, flags);
			list_add(&channel->list, &edge->channels);
			spin_unlock_irqrestore(&edge->channels_lock, flags);

			dev_dbg(smd->dev, "new channel found: '%s'\n", channel->name);
			set_bit(i, edge->allocated[tbl]);
		}
	}

	schedule_work(&edge->work);
}

/*
 * This per edge worker scans smem for any new channels and registers these. It
 * then scans all registered channels for state changes that should be handled
 * by creating or destroying smd client devices for the registered channels.
 *
 * LOCKING: edge->channels_lock need not be held during the traversal of the
 * channels list, as it's done synchronously with the only writer.
 */
static void qcom_channel_state_worker(struct work_struct *work)
{
	struct qcom_smd_channel *channel;
	struct qcom_smd_edge *edge = container_of(work,
						  struct qcom_smd_edge,
						  work);
	unsigned remote_state;

	/*
	 * Rescan smem if we have reason to believe that there are new
	 * channels.
	 */
	if (edge->need_rescan) {
		edge->need_rescan = false;
		qcom_discover_channels(edge);
	}

	/*
	 * Register a device for any closed channel where the remote processor
	 * is showing interest in opening the channel.
	 */
	list_for_each_entry(channel, &edge->channels, list) {
		if (channel->state != SMD_CHANNEL_CLOSED)
			continue;

		remote_state = GET_RX_CHANNEL_INFO(channel, state);
		if (remote_state != SMD_CHANNEL_OPENING &&
		    remote_state != SMD_CHANNEL_OPENED)
			continue;

		qcom_smd_create_device(channel);
	}

	/*
	 * Unregister the device for any channel that is opened where the
	 * remote processor is closing the channel.
	 */
	list_for_each_entry(channel, &edge->channels, list) {
		if (channel->state != SMD_CHANNEL_OPENING &&
		    channel->state != SMD_CHANNEL_OPENED)
			continue;

		remote_state = GET_RX_CHANNEL_INFO(channel, state);
		if (remote_state == SMD_CHANNEL_OPENING ||
		    remote_state == SMD_CHANNEL_OPENED)
			continue;

		qcom_smd_destroy_device(channel);
	}
}

/*
 * Parses an of_node describing an edge.
 */
static int qcom_smd_parse_edge(struct device *dev,
			       struct device_node *node,
			       struct qcom_smd_edge *edge)
{
	struct device_node *syscon_np;
	const char *key;
	int irq;
	int ret;

	INIT_LIST_HEAD(&edge->channels);
	spin_lock_init(&edge->channels_lock);

	INIT_WORK(&edge->work, qcom_channel_state_worker);

	edge->of_node = of_node_get(node);

	irq = irq_of_parse_and_map(node, 0);
	if (irq < 0) {
		dev_err(dev, "required smd interrupt missing\n");
		return -EINVAL;
	}

	ret = devm_request_irq(dev, irq,
			       qcom_smd_edge_intr, IRQF_TRIGGER_RISING,
			       node->name, edge);
	if (ret) {
		dev_err(dev, "failed to request smd irq\n");
		return ret;
	}

	edge->irq = irq;

	key = "qcom,smd-edge";
	ret = of_property_read_u32(node, key, &edge->edge_id);
	if (ret) {
		dev_err(dev, "edge missing %s property\n", key);
		return -EINVAL;
	}

	syscon_np = of_parse_phandle(node, "qcom,ipc", 0);
	if (!syscon_np) {
		dev_err(dev, "no qcom,ipc node\n");
		return -ENODEV;
	}

	edge->ipc_regmap = syscon_node_to_regmap(syscon_np);
	if (IS_ERR(edge->ipc_regmap))
		return PTR_ERR(edge->ipc_regmap);

	key = "qcom,ipc";
	ret = of_property_read_u32_index(node, key, 1, &edge->ipc_offset);
	if (ret < 0) {
		dev_err(dev, "no offset in %s\n", key);
		return -EINVAL;
	}

	ret = of_property_read_u32_index(node, key, 2, &edge->ipc_bit);
	if (ret < 0) {
		dev_err(dev, "no bit in %s\n", key);
		return -EINVAL;
	}

	return 0;
}

static int qcom_smd_probe(struct platform_device *pdev)
{
	struct qcom_smd_edge *edge;
	struct device_node *node;
	struct qcom_smd *smd;
	size_t array_size;
	int num_edges;
	int ret;
	int i = 0;

	/* Wait for smem */
	ret = qcom_smem_get(QCOM_SMEM_HOST_ANY, smem_items[0].alloc_tbl_id, NULL, NULL);
	if (ret == -EPROBE_DEFER)
		return ret;

	num_edges = of_get_available_child_count(pdev->dev.of_node);
	array_size = sizeof(*smd) + num_edges * sizeof(struct qcom_smd_edge);
	smd = devm_kzalloc(&pdev->dev, array_size, GFP_KERNEL);
	if (!smd)
		return -ENOMEM;
	smd->dev = &pdev->dev;

	smd->num_edges = num_edges;
	for_each_available_child_of_node(pdev->dev.of_node, node) {
		edge = &smd->edges[i++];
		edge->smd = smd;

		ret = qcom_smd_parse_edge(&pdev->dev, node, edge);
		if (ret)
			continue;

		edge->need_rescan = true;
		schedule_work(&edge->work);
	}

	platform_set_drvdata(pdev, smd);

	return 0;
}

/*
 * Shut down all smd clients by making sure that each edge stops processing
 * events and scanning for new channels, then call destroy on the devices.
 */
static int qcom_smd_remove(struct platform_device *pdev)
{
	struct qcom_smd_channel *channel;
	struct qcom_smd_edge *edge;
	struct qcom_smd *smd = platform_get_drvdata(pdev);
	int i;

	for (i = 0; i < smd->num_edges; i++) {
		edge = &smd->edges[i];

		disable_irq(edge->irq);
		cancel_work_sync(&edge->work);

		list_for_each_entry(channel, &edge->channels, list) {
			if (!channel->qsdev)
				continue;

			qcom_smd_destroy_device(channel);
		}
	}

	return 0;
}

static const struct of_device_id qcom_smd_of_match[] = {
	{ .compatible = "qcom,smd" },
	{}
};
MODULE_DEVICE_TABLE(of, qcom_smd_of_match);

static struct platform_driver qcom_smd_driver = {
	.probe = qcom_smd_probe,
	.remove = qcom_smd_remove,
	.driver = {
		.name = "qcom-smd",
		.of_match_table = qcom_smd_of_match,
	},
};

static int __init qcom_smd_init(void)
{
	int ret;

	ret = bus_register(&qcom_smd_bus);
	if (ret) {
		pr_err("failed to register smd bus: %d\n", ret);
		return ret;
	}

	return platform_driver_register(&qcom_smd_driver);
}
postcore_initcall(qcom_smd_init);

static void __exit qcom_smd_exit(void)
{
	platform_driver_unregister(&qcom_smd_driver);
	bus_unregister(&qcom_smd_bus);
}
module_exit(qcom_smd_exit);

MODULE_AUTHOR("Bjorn Andersson <bjorn.andersson@sonymobile.com>");
MODULE_DESCRIPTION("Qualcomm Shared Memory Driver");
MODULE_LICENSE("GPL v2");
drivers/soc/qcom/smem.c
/*
 * Copyright (c) 2015, Sony Mobile Communications AB.
 * Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 and
 * only version 2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 */

#include <linux/hwspinlock.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/soc/qcom/smem.h>

/*
 * The Qualcomm shared memory system is an allocate-only heap structure that
 * consists of one or more memory areas that can be accessed by the processors
 * in the SoC.
 *
 * All systems contain a global heap, accessible by all processors in the SoC,
 * with a table of contents data structure (@smem_header) at the beginning of
 * the main shared memory block.
 *
 * The global header contains metadata for allocations as well as a fixed list
 * of 512 entries (@smem_global_entry) that can be initialized to reference
 * parts of the shared memory space.
 *
 * In addition to this global heap, a set of "private" heaps can be set up at
 * boot time with access restrictions so that only certain processor pairs can
 * access the data.
 *
 * These partitions are referenced from an optional partition table
 * (@smem_ptable), that is found 4kB from the end of the main smem region. The
 * partition table entries (@smem_ptable_entry) list the involved processors
 * (or hosts) and their location in the main shared memory region.
 *
 * Each partition starts with a header (@smem_partition_header) that identifies
 * the partition and holds properties for the two internal memory regions. The
 * two regions are cached and non-cached memory respectively. Each region
 * contains a linked list of allocation headers (@smem_private_entry) followed
 * by their data.
 *
 * Items in the non-cached region are allocated from the start of the partition
 * while items in the cached region are allocated from the end. The free area
 * is hence the region between the cached and non-cached offsets.
 *
 * To synchronize allocations in the shared memory heaps, a remote spinlock must
 * be held - currently lock number 3 of the sfpb or tcsr is used for this on all
 * platforms.
 */

/*
 * Item 3 of the global heap contains an array of versions for the various
 * software components in the SoC. We verify that the boot loader version
 * matches the expected version (SMEM_EXPECTED_VERSION) as a sanity check.
 */
#define SMEM_ITEM_VERSION		3
#define SMEM_MASTER_SBL_VERSION_INDEX	7
#define SMEM_EXPECTED_VERSION		11

/*
 * The first 8 items are only to be allocated by the boot loader while
 * initializing the heap.
 */
#define SMEM_ITEM_LAST_FIXED	8

/* Highest accepted item number, for both global and private heaps */
#define SMEM_ITEM_COUNT		512

/* Processor/host identifier for the application processor */
#define SMEM_HOST_APPS		0

/* Max number of processors/hosts in a system */
#define SMEM_HOST_COUNT		9

/**
 * struct smem_proc_comm - proc_comm communication struct (legacy)
 * @command:	current command to be executed
 * @status:	status of the currently requested command
 * @params:	parameters to the command
 */
struct smem_proc_comm {
	u32 command;
	u32 status;
	u32 params[2];
};

/**
 * struct smem_global_entry - entry to reference smem items on the heap
 * @allocated:	boolean to indicate if this entry is used
 * @offset:	offset to the allocated space
 * @size:	size of the allocated space, 8 byte aligned
 * @aux_base:	base address for the memory region used by this unit, or 0 for
 *		the default region. bits 0,1 are reserved
 */
struct smem_global_entry {
	u32 allocated;
	u32 offset;
	u32 size;
	u32 aux_base; /* bits 1:0 reserved */
};
#define AUX_BASE_MASK		0xfffffffc

/**
 * struct smem_header - header found in beginning of primary smem region
 * @proc_comm:		proc_comm communication interface (legacy)
 * @version:		array of versions for the various subsystems
 * @initialized:	boolean to indicate that smem is initialized
 * @free_offset:	index of the first unallocated byte in smem
 * @available:		number of bytes available for allocation
 * @reserved:		reserved field, must be 0
 * @toc:		array of references to items
 */
struct smem_header {
	struct smem_proc_comm proc_comm[4];
	u32 version[32];
	u32 initialized;
	u32 free_offset;
	u32 available;
	u32 reserved;
	struct smem_global_entry toc[SMEM_ITEM_COUNT];
};

/**
 * struct smem_ptable_entry - one entry in the @smem_ptable list
 * @offset:	offset, within the main shared memory region, of the partition
 * @size:	size of the partition
 * @flags:	flags for the partition (currently unused)
 * @host0:	first processor/host with access to this partition
 * @host1:	second processor/host with access to this partition
 * @reserved:	reserved entries for later use
 */
struct smem_ptable_entry {
	u32 offset;
	u32 size;
	u32 flags;
	u16 host0;
	u16 host1;
	u32 reserved[8];
};

/**
 * struct smem_ptable - partition table for the private partitions
 * @magic:	magic number, must be SMEM_PTABLE_MAGIC
 * @version:	version of the partition table
 * @num_entries: number of partitions in the table
 * @reserved:	for now reserved entries
 * @entry:	list of @smem_ptable_entry for the @num_entries partitions
 */
struct smem_ptable {
	u32 magic;
	u32 version;
	u32 num_entries;
	u32 reserved[5];
	struct smem_ptable_entry entry[];
};
#define SMEM_PTABLE_MAGIC	0x434f5424 /* "$TOC" */

/**
 * struct smem_partition_header - header of the partitions
 * @magic:	magic number, must be SMEM_PART_MAGIC
 * @host0:	first processor/host with access to this partition
 * @host1:	second processor/host with access to this partition
 * @size:	size of the partition
 * @offset_free_uncached: offset to the first free byte of uncached memory in
 *		this partition
 * @offset_free_cached: offset to the first free byte of cached memory in this
 *		partition
 * @reserved:	for now reserved entries
 */
struct smem_partition_header {
	u32 magic;
	u16 host0;
	u16 host1;
	u32 size;
	u32 offset_free_uncached;
	u32 offset_free_cached;
	u32 reserved[3];
};
#define SMEM_PART_MAGIC		0x54525024 /* "$PRT" */

/**
 * struct smem_private_entry - header of each item in the private partition
 * @canary:	magic number, must be SMEM_PRIVATE_CANARY
 * @item:	identifying number of the smem item
 * @size:	size of the data, including padding bytes
 * @padding_data: number of bytes of padding of data
 * @padding_hdr: number of bytes of padding between the header and the data
 * @reserved:	for now reserved entry
 */
struct smem_private_entry {
	u16 canary;
	u16 item;
	u32 size; /* includes padding bytes */
	u16 padding_data;
	u16 padding_hdr;
	u32 reserved;
};
#define SMEM_PRIVATE_CANARY	0xa5a5

/**
 * struct smem_region - representation of a chunk of memory used for smem
 * @aux_base:	identifier of aux_mem base
 * @virt_base:	virtual base address of memory with this aux_mem identifier
 * @size:	size of the memory region
 */
struct smem_region {
	u32 aux_base;
	void __iomem *virt_base;
	size_t size;
};

/**
 * struct qcom_smem - device data for the smem device
 * @dev:	device pointer
 * @hwlock:	reference to a hwspinlock
 * @partitions:	list of pointers to partitions affecting the current
 *		processor/host
 * @num_regions: number of @regions
 * @regions:	list of the memory regions defining the shared memory
 */
struct qcom_smem {
	struct device *dev;

	struct hwspinlock *hwlock;

	struct smem_partition_header *partitions[SMEM_HOST_COUNT];

	unsigned num_regions;
	struct smem_region regions[0];
};

/* Pointer to the one and only smem handle */
static struct qcom_smem *__smem;

/* Timeout (ms) for the trylock of remote spinlocks */
#define HWSPINLOCK_TIMEOUT	1000

static int qcom_smem_alloc_private(struct qcom_smem *smem,
				   unsigned host,
				   unsigned item,
				   size_t size)
{
	struct smem_partition_header *phdr;
	struct smem_private_entry *hdr;
	size_t alloc_size;
	void *p;

	/* We're not going to find it if there's no matching partition */
	if (host >= SMEM_HOST_COUNT || !smem->partitions[host])
		return -ENOENT;

	phdr = smem->partitions[host];

	p = (void *)phdr + sizeof(*phdr);
	while (p < (void *)phdr + phdr->offset_free_uncached) {
		hdr = p;

		if (hdr->canary != SMEM_PRIVATE_CANARY) {
			dev_err(smem->dev,
				"Found invalid canary in host %d partition\n",
				host);
			return -EINVAL;
		}

		if (hdr->item == item)
			return -EEXIST;

		p += sizeof(*hdr) + hdr->padding_hdr + hdr->size;
	}

	/* Check that we don't grow into the cached region */
	alloc_size = sizeof(*hdr) + ALIGN(size, 8);
	if (p + alloc_size >= (void *)phdr + phdr->offset_free_cached) {
		dev_err(smem->dev, "Out of memory\n");
		return -ENOSPC;
	}

	hdr = p;
	hdr->canary = SMEM_PRIVATE_CANARY;
	hdr->item = item;
	hdr->size = ALIGN(size, 8);
	hdr->padding_data = hdr->size - size;
	hdr->padding_hdr = 0;

	/*
	 * Ensure the header is written before we advance the free offset, so
	 * that remote processors that do not take the remote spinlock still
	 * get a consistent view of the linked list.
	 */
	wmb();
	phdr->offset_free_uncached += alloc_size;

	return 0;
}

static int qcom_smem_alloc_global(struct qcom_smem *smem,
				  unsigned item,
				  size_t size)
{
	struct smem_header *header;
	struct smem_global_entry *entry;

	if (WARN_ON(item >= SMEM_ITEM_COUNT))
		return -EINVAL;

	header = smem->regions[0].virt_base;
	entry = &header->toc[item];
	if (entry->allocated)
		return -EEXIST;

	size = ALIGN(size, 8);
	if (WARN_ON(size > header->available))
		return -ENOMEM;

	entry->offset = header->free_offset;
	entry->size = size;

	/*
	 * Ensure the header is consistent before we mark the item allocated,
	 * so that remote processors will get a consistent view of the item
	 * even though they do not take the spinlock on read.
	 */
	wmb();
	entry->allocated = 1;

	header->free_offset += size;
	header->available -= size;

	return 0;
}

/**
 * qcom_smem_alloc() - allocate space for a smem item
 * @host:	remote processor id, or -1
 * @item:	smem item handle
 * @size:	number of bytes to be allocated
 *
 * Allocate space for a given smem item of size @size, given that the item is
 * not yet allocated.
 */
int qcom_smem_alloc(unsigned host, unsigned item, size_t size)
{
	unsigned long flags;
	int ret;

	if (!__smem)
		return -EPROBE_DEFER;

	if (item < SMEM_ITEM_LAST_FIXED) {
		dev_err(__smem->dev,
			"Rejecting allocation of static entry %d\n", item);
		return -EINVAL;
	}

	ret = hwspin_lock_timeout_irqsave(__smem->hwlock,
					  HWSPINLOCK_TIMEOUT,
					  &flags);
	if (ret)
		return ret;

	ret = qcom_smem_alloc_private(__smem, host, item, size);
	if (ret == -ENOENT)
		ret = qcom_smem_alloc_global(__smem, item, size);

	hwspin_unlock_irqrestore(__smem->hwlock, &flags);

	return ret;
}
EXPORT_SYMBOL(qcom_smem_alloc);

static int qcom_smem_get_global(struct qcom_smem *smem,
				unsigned item,
				void **ptr,
				size_t *size)
{
	struct smem_header *header;
	struct smem_region *area;
	struct smem_global_entry *entry;
	u32 aux_base;
	unsigned i;

	if (WARN_ON(item >= SMEM_ITEM_COUNT))
		return -EINVAL;

	header = smem->regions[0].virt_base;
	entry = &header->toc[item];
	if (!entry->allocated)
		return -ENXIO;

	if (ptr != NULL) {
		aux_base = entry->aux_base & AUX_BASE_MASK;

		for (i = 0; i < smem->num_regions; i++) {
			area = &smem->regions[i];

			if (area->aux_base == aux_base || !aux_base) {
				*ptr = area->virt_base + entry->offset;
				break;
			}
		}
	}
	if (size != NULL)
		*size = entry->size;

	return 0;
}

static int qcom_smem_get_private(struct qcom_smem *smem,
				 unsigned host,
				 unsigned item,
				 void **ptr,
				 size_t *size)
{
	struct smem_partition_header *phdr;
	struct smem_private_entry *hdr;
	void *p;

	/* We're not going to find it if there's no matching partition */
	if (host >= SMEM_HOST_COUNT || !smem->partitions[host])
		return -ENOENT;

	phdr = smem->partitions[host];

	p = (void *)phdr + sizeof(*phdr);
	while (p < (void *)phdr + phdr->offset_free_uncached) {
		hdr = p;

		if (hdr->canary != SMEM_PRIVATE_CANARY) {
			dev_err(smem->dev,
				"Found invalid canary in host %d partition\n",
				host);
			return -EINVAL;
		}

		if (hdr->item == item) {
			if (ptr != NULL)
				*ptr = p + sizeof(*hdr) + hdr->padding_hdr;

			if (size != NULL)
				*size = hdr->size - hdr->padding_data;

			return 0;
		}

		p += sizeof(*hdr) + hdr->padding_hdr + hdr->size;
	}

	return -ENOENT;
}

/**
 * qcom_smem_get() - resolve ptr and size of a smem item
 * @host:	the remote processor, or -1
 * @item:	smem item handle
 * @ptr:	pointer to be filled out with address of the item
 * @size:	pointer to be filled out with size of the item
 *
 * Looks up pointer and size of a smem item.
 */
int qcom_smem_get(unsigned host, unsigned item, void **ptr, size_t *size)
{
	unsigned long flags;
	int ret;

	if (!__smem)
		return -EPROBE_DEFER;

	ret = hwspin_lock_timeout_irqsave(__smem->hwlock,
					  HWSPINLOCK_TIMEOUT,
					  &flags);
	if (ret)
		return ret;

	ret = qcom_smem_get_private(__smem, host, item, ptr, size);
	if (ret == -ENOENT)
		ret = qcom_smem_get_global(__smem, item, ptr, size);

	hwspin_unlock_irqrestore(__smem->hwlock, &flags);

	return ret;
}
EXPORT_SYMBOL(qcom_smem_get);

/**
 * qcom_smem_get_free_space() - retrieve amount of free space in a partition
 * @host:	the remote processor identifying a partition, or -1
 *
 * To be used by smem clients as a quick way to determine if any new
 * allocations have been made.
 */
int qcom_smem_get_free_space(unsigned host)
{
	struct smem_partition_header *phdr;
	struct smem_header *header;
	unsigned ret;

	if (!__smem)
		return -EPROBE_DEFER;

	if (host < SMEM_HOST_COUNT && __smem->partitions[host]) {
		phdr = __smem->partitions[host];
		ret = phdr->offset_free_cached - phdr->offset_free_uncached;
	} else {
		header = __smem->regions[0].virt_base;
		ret = header->available;
	}

	return ret;
}
EXPORT_SYMBOL(qcom_smem_get_free_space);

static int qcom_smem_get_sbl_version(struct qcom_smem *smem)
{
	unsigned *versions;
	size_t size;
	int ret;

	ret = qcom_smem_get_global(smem, SMEM_ITEM_VERSION,
				   (void **)&versions, &size);
	if (ret < 0) {
		dev_err(smem->dev, "Unable to read the version item\n");
		return -ENOENT;
	}

	if (size < sizeof(unsigned) * SMEM_MASTER_SBL_VERSION_INDEX) {
		dev_err(smem->dev, "Version item is too small\n");
		return -EINVAL;
	}

	return versions[SMEM_MASTER_SBL_VERSION_INDEX];
}

static int qcom_smem_enumerate_partitions(struct qcom_smem *smem,
					  unsigned local_host)
{
	struct smem_partition_header *header;
	struct smem_ptable_entry *entry;
	struct smem_ptable *ptable;
	unsigned remote_host;
	int i;

	ptable = smem->regions[0].virt_base + smem->regions[0].size - SZ_4K;
	if (ptable->magic != SMEM_PTABLE_MAGIC)
		return 0;

	if (ptable->version != 1) {
		dev_err(smem->dev,
			"Unsupported partition header version %d\n",
			ptable->version);
		return -EINVAL;
	}

	for (i = 0; i < ptable->num_entries; i++) {
		entry = &ptable->entry[i];

		if (entry->host0 != local_host && entry->host1 != local_host)
			continue;

		if (!entry->offset)
			continue;

		if (!entry->size)
			continue;

		if (entry->host0 == local_host)
			remote_host = entry->host1;
		else
			remote_host = entry->host0;

		if (remote_host >= SMEM_HOST_COUNT) {
			dev_err(smem->dev,
				"Invalid remote host %d\n",
				remote_host);
			return -EINVAL;
		}

		if (smem->partitions[remote_host]) {
			dev_err(smem->dev,
				"Already found a partition for host %d\n",
				remote_host);
			return -EINVAL;
		}

		header = smem->regions[0].virt_base + entry->offset;

		if (header->magic != SMEM_PART_MAGIC) {
			dev_err(smem->dev,
				"Partition %d has invalid magic\n", i);
			return -EINVAL;
		}

		if (header->host0 != local_host && header->host1 != local_host) {
			dev_err(smem->dev,
				"Partition %d hosts are invalid\n", i);
			return -EINVAL;
		}

		if (header->host0 != remote_host && header->host1 != remote_host) {
			dev_err(smem->dev,
				"Partition %d hosts are invalid\n", i);
			return -EINVAL;
		}

		if (header->size != entry->size) {
			dev_err(smem->dev,
				"Partition %d has invalid size\n", i);
			return -EINVAL;
		}

		if (header->offset_free_uncached > header->size) {
			dev_err(smem->dev,
				"Partition %d has invalid free pointer\n", i);
			return -EINVAL;
		}

		smem->partitions[remote_host] = header;
	}

	return 0;
}

static int qcom_smem_count_mem_regions(struct platform_device *pdev)
{
	struct resource *res;
	int num_regions = 0;
	int i;

	for (i = 0; i < pdev->num_resources; i++) {
		res = &pdev->resource[i];

		if (resource_type(res) == IORESOURCE_MEM)
			num_regions++;
	}

	return num_regions;
}

static int qcom_smem_probe(struct platform_device *pdev)
{
	struct smem_header *header;
	struct device_node *np;
	struct qcom_smem *smem;
	struct resource *res;
	struct resource r;
	size_t array_size;
	int num_regions = 0;
	int hwlock_id;
	u32 version;
	int ret;
	int i;

	num_regions = qcom_smem_count_mem_regions(pdev) + 1;

	array_size = num_regions * sizeof(struct smem_region);
	smem = devm_kzalloc(&pdev->dev, sizeof(*smem) + array_size, GFP_KERNEL);
	if (!smem)
		return -ENOMEM;

	smem->dev = &pdev->dev;
	smem->num_regions = num_regions;

	np = of_parse_phandle(pdev->dev.of_node, "memory-region", 0);
	if (!np) {
		dev_err(&pdev->dev, "No memory-region specified\n");
		return -EINVAL;
	}

	ret = of_address_to_resource(np, 0, &r);
	of_node_put(np);
	if (ret)
		return ret;

	smem->regions[0].aux_base = (u32)r.start;
	smem->regions[0].size = resource_size(&r);
	smem->regions[0].virt_base = devm_ioremap_nocache(&pdev->dev,
							  r.start,
							  resource_size(&r));
	if (!smem->regions[0].virt_base)
		return -ENOMEM;

	for (i = 1; i < num_regions; i++) {
		res = platform_get_resource(pdev, IORESOURCE_MEM, i - 1);

		smem->regions[i].aux_base = (u32)res->start;
		smem->regions[i].size = resource_size(res);
		smem->regions[i].virt_base = devm_ioremap_nocache(&pdev->dev,
								  res->start,
								  resource_size(res));
		if (!smem->regions[i].virt_base)
			return -ENOMEM;
	}

	header = smem->regions[0].virt_base;
	if (header->initialized != 1 || header->reserved) {
		dev_err(&pdev->dev, "SMEM is not initialized by SBL\n");
		return -EINVAL;
	}

	version = qcom_smem_get_sbl_version(smem);
	if (version >> 16 != SMEM_EXPECTED_VERSION) {
		dev_err(&pdev->dev, "Unsupported SMEM version 0x%x\n", version);
		return -EINVAL;
	}

	ret = qcom_smem_enumerate_partitions(smem, SMEM_HOST_APPS);
	if (ret < 0)
		return ret;

	hwlock_id = of_hwspin_lock_get_id(pdev->dev.of_node, 0);
	if (hwlock_id < 0) {
		dev_err(&pdev->dev, "failed to retrieve hwlock\n");
		return hwlock_id;
	}

	smem->hwlock = hwspin_lock_request_specific(hwlock_id);
	if (!smem->hwlock)
		return -ENXIO;

	__smem = smem;

	return 0;
}

static int qcom_smem_remove(struct platform_device *pdev)
{
	hwspin_lock_free(__smem->hwlock);
	__smem = NULL;

	return 0;
}

static const struct of_device_id qcom_smem_of_match[] = {
	{ .compatible = "qcom,smem" },
	{}
};
MODULE_DEVICE_TABLE(of, qcom_smem_of_match);

static struct platform_driver qcom_smem_driver = {
	.probe = qcom_smem_probe,
	.remove = qcom_smem_remove,
	.driver  = {
		.name = "qcom-smem",
		.of_match_table = qcom_smem_of_match,
		.suppress_bind_attrs = true,
	},
};

static int __init qcom_smem_init(void)
{
	return platform_driver_register(&qcom_smem_driver);
}
arch_initcall(qcom_smem_init);

static void __exit qcom_smem_exit(void)
{
	platform_driver_unregister(&qcom_smem_driver);
}
module_exit(qcom_smem_exit);

MODULE_AUTHOR("Bjorn Andersson <bjorn.andersson@sonymobile.com>");
MODULE_DESCRIPTION("Qualcomm Shared Memory Manager");
MODULE_LICENSE("GPL v2");
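The private-partition walk used by qcom_smem_alloc_private() and qcom_smem_get_private() above is a plain linear scan over variable-size headers in the uncached region. The following is a minimal user-space model of that layout, not the kernel code itself: the struct mirrors smem_private_entry, but the item numbers, buffer size, and helper names are hypothetical, and the remote spinlock, memory barriers, and cached region are deliberately left out.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define CANARY 0xa5a5

/* Mirrors struct smem_private_entry: a fixed header followed by padded data. */
struct entry {
	uint16_t canary;
	uint16_t item;
	uint32_t size;         /* data size including padding, 8-byte aligned */
	uint16_t padding_data;
	uint16_t padding_hdr;
	uint32_t reserved;
};

#define ALIGN8(x) (((x) + 7u) & ~7u)

/* Append an item at the current free offset; returns the new free offset. */
static size_t alloc_entry(uint8_t *base, size_t free_off, uint16_t item,
			  const void *data, uint32_t len)
{
	struct entry *hdr = (struct entry *)(base + free_off);

	hdr->canary = CANARY;
	hdr->item = item;
	hdr->size = ALIGN8(len);
	hdr->padding_data = hdr->size - len;
	hdr->padding_hdr = 0;
	memcpy(hdr + 1, data, len);

	return free_off + sizeof(*hdr) + hdr->size;
}

/* Walk the list like qcom_smem_get_private(); NULL if the item is absent. */
static void *find_entry(uint8_t *base, size_t free_off, uint16_t item,
			uint32_t *len)
{
	size_t off = 0;

	while (off < free_off) {
		struct entry *hdr = (struct entry *)(base + off);

		if (hdr->canary != CANARY)
			return NULL; /* corrupted heap, give up */

		if (hdr->item == item) {
			*len = hdr->size - hdr->padding_data;
			return (uint8_t *)(hdr + 1) + hdr->padding_hdr;
		}

		off += sizeof(*hdr) + hdr->padding_hdr + hdr->size;
	}

	return NULL;
}
```

Because each header records its own padded size, the walk needs no global index; this is also why a single corrupted canary makes the rest of the list unreachable, which is exactly the condition the kernel code reports as -EINVAL.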
+3 -3
drivers/soc/tegra/Makefile
-obj-$(CONFIG_ARCH_TEGRA) += fuse/
+obj-y += fuse/
 
-obj-$(CONFIG_ARCH_TEGRA) += common.o
-obj-$(CONFIG_ARCH_TEGRA) += pmc.o
+obj-y += common.o
+obj-y += pmc.o
+2
drivers/soc/tegra/common.c
···
 	{ .compatible = "nvidia,tegra30", },
 	{ .compatible = "nvidia,tegra114", },
 	{ .compatible = "nvidia,tegra124", },
+	{ .compatible = "nvidia,tegra132", },
+	{ .compatible = "nvidia,tegra210", },
 	{ }
 };
 
+2
drivers/soc/tegra/fuse/Makefile
···
 obj-$(CONFIG_ARCH_TEGRA_3x_SOC) += speedo-tegra30.o
 obj-$(CONFIG_ARCH_TEGRA_114_SOC) += speedo-tegra114.o
 obj-$(CONFIG_ARCH_TEGRA_124_SOC) += speedo-tegra124.o
+obj-$(CONFIG_ARCH_TEGRA_132_SOC) += speedo-tegra124.o
+obj-$(CONFIG_ARCH_TEGRA_210_SOC) += speedo-tegra210.o
+206 -55
drivers/soc/tegra/fuse/fuse-tegra.c
···
  *
  */
 
+#include <linux/clk.h>
 #include <linux/device.h>
 #include <linux/kobject.h>
-#include <linux/kernel.h>
+#include <linux/module.h>
 #include <linux/platform_device.h>
 #include <linux/of.h>
 #include <linux/of_address.h>
···
 #include "fuse.h"
 
-static u32 (*fuse_readl)(const unsigned int offset);
-static int fuse_size;
 struct tegra_sku_info tegra_sku_info;
 EXPORT_SYMBOL(tegra_sku_info);
···
 	[TEGRA_REVISION_A04] = "A04",
 };
 
-static u8 fuse_readb(const unsigned int offset)
+static u8 fuse_readb(struct tegra_fuse *fuse, unsigned int offset)
 {
 	u32 val;
 
-	val = fuse_readl(round_down(offset, 4));
+	val = fuse->read(fuse, round_down(offset, 4));
 	val >>= (offset % 4) * 8;
 	val &= 0xff;
···
 }
 
 static ssize_t fuse_read(struct file *fd, struct kobject *kobj,
-			struct bin_attribute *attr, char *buf,
-			loff_t pos, size_t size)
+			 struct bin_attribute *attr, char *buf,
+			 loff_t pos, size_t size)
 {
+	struct device *dev = kobj_to_dev(kobj);
+	struct tegra_fuse *fuse = dev_get_drvdata(dev);
 	int i;
 
-	if (pos < 0 || pos >= fuse_size)
+	if (pos < 0 || pos >= attr->size)
 		return 0;
 
-	if (size > fuse_size - pos)
-		size = fuse_size - pos;
+	if (size > attr->size - pos)
+		size = attr->size - pos;
 
 	for (i = 0; i < size; i++)
-		buf[i] = fuse_readb(pos + i);
+		buf[i] = fuse_readb(fuse, pos + i);
 
 	return i;
 }
···
 	.read = fuse_read,
 };
 
+static int tegra_fuse_create_sysfs(struct device *dev, unsigned int size,
+				   const struct tegra_fuse_info *info)
+{
+	fuse_bin_attr.size = size;
+
+	return device_create_bin_file(dev, &fuse_bin_attr);
+}
+
 static const struct of_device_id car_match[] __initconst = {
 	{ .compatible = "nvidia,tegra20-car", },
 	{ .compatible = "nvidia,tegra30-car", },
 	{ .compatible = "nvidia,tegra114-car", },
 	{ .compatible = "nvidia,tegra124-car", },
 	{ .compatible = "nvidia,tegra132-car", },
+	{ .compatible = "nvidia,tegra210-car", },
 	{},
 };
+
+static struct tegra_fuse *fuse = &(struct tegra_fuse) {
+	.base = NULL,
+	.soc = NULL,
+};
+
+static const struct of_device_id tegra_fuse_match[] = {
+#ifdef CONFIG_ARCH_TEGRA_210_SOC
+	{ .compatible = "nvidia,tegra210-efuse", .data = &tegra210_fuse_soc },
+#endif
+#ifdef CONFIG_ARCH_TEGRA_132_SOC
+	{ .compatible = "nvidia,tegra132-efuse", .data = &tegra124_fuse_soc },
+#endif
+#ifdef CONFIG_ARCH_TEGRA_124_SOC
+	{ .compatible = "nvidia,tegra124-efuse", .data = &tegra124_fuse_soc },
+#endif
+#ifdef CONFIG_ARCH_TEGRA_114_SOC
+	{ .compatible = "nvidia,tegra114-efuse", .data = &tegra114_fuse_soc },
+#endif
+#ifdef CONFIG_ARCH_TEGRA_3x_SOC
+	{ .compatible = "nvidia,tegra30-efuse", .data = &tegra30_fuse_soc },
+#endif
+#ifdef CONFIG_ARCH_TEGRA_2x_SOC
+	{ .compatible = "nvidia,tegra20-efuse", .data = &tegra20_fuse_soc },
+#endif
+	{ /* sentinel */ }
+};
+
+static int tegra_fuse_probe(struct platform_device *pdev)
+{
+	void __iomem *base = fuse->base;
+	struct resource *res;
+	int err;
+
+	/* take over the memory region from the early initialization */
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	fuse->base = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(fuse->base))
+		return PTR_ERR(fuse->base);
+
+	fuse->clk = devm_clk_get(&pdev->dev, "fuse");
+	if (IS_ERR(fuse->clk)) {
+		dev_err(&pdev->dev, "failed to get FUSE clock: %ld",
+			PTR_ERR(fuse->clk));
+		return PTR_ERR(fuse->clk);
+	}
+
+	platform_set_drvdata(pdev, fuse);
+	fuse->dev = &pdev->dev;
+
+	if (fuse->soc->probe) {
+		err = fuse->soc->probe(fuse);
+		if (err < 0)
+			return err;
+	}
+
+	if (tegra_fuse_create_sysfs(&pdev->dev, fuse->soc->info->size,
+				    fuse->soc->info))
+		return -ENODEV;
+
+	/* release the early I/O memory mapping */
+	iounmap(base);
+
+	return 0;
+}
+
+static struct platform_driver tegra_fuse_driver = {
+	.driver = {
+		.name = "tegra-fuse",
+		.of_match_table = tegra_fuse_match,
+		.suppress_bind_attrs = true,
+	},
+	.probe = tegra_fuse_probe,
+};
+module_platform_driver(tegra_fuse_driver);
+
+bool __init tegra_fuse_read_spare(unsigned int spare)
+{
+	unsigned int offset = fuse->soc->info->spare + spare * 4;
+
+	return fuse->read_early(fuse, offset) & 1;
+}
+
+u32 __init tegra_fuse_read_early(unsigned int offset)
+{
+	return fuse->read_early(fuse, offset);
+}
+
+int tegra_fuse_readl(unsigned long offset, u32 *value)
+{
+	if (!fuse->read)
+		return -EPROBE_DEFER;
+
+	*value = fuse->read(fuse, offset);
+
+	return 0;
+}
+EXPORT_SYMBOL(tegra_fuse_readl);
 
 static void tegra_enable_fuse_clk(void __iomem *base)
 {
···
 	writel(reg, base + 0x14);
 }
 
-int tegra_fuse_readl(unsigned long offset, u32 *value)
-{
-	if (!fuse_readl)
-		return -EPROBE_DEFER;
-
-	*value = fuse_readl(offset);
-
-	return 0;
-}
-EXPORT_SYMBOL(tegra_fuse_readl);
-
-int tegra_fuse_create_sysfs(struct device *dev, int size,
-			    u32 (*readl)(const unsigned int offset))
-{
-	if (fuse_size)
-		return -ENODEV;
-
-	fuse_bin_attr.size = size;
-	fuse_bin_attr.read = fuse_read;
-
-	fuse_size = size;
-	fuse_readl = readl;
-
-	return device_create_bin_file(dev, &fuse_bin_attr);
-}
-
 static int __init tegra_init_fuse(void)
 {
+	const struct of_device_id *match;
 	struct device_node *np;
-	void __iomem *car_base;
-
-	if (!soc_is_tegra())
-		return 0;
+	struct resource regs;
 
 	tegra_init_apbmisc();
 
-	np = of_find_matching_node(NULL, car_match);
-	car_base = of_iomap(np, 0);
-	if (car_base) {
-		tegra_enable_fuse_clk(car_base);
-		iounmap(car_base);
+	np = of_find_matching_node_and_match(NULL, tegra_fuse_match, &match);
+	if (!np) {
+		/*
+		 * Fall back to legacy initialization for 32-bit ARM only. All
+		 * 64-bit ARM device tree files for Tegra are required to have
+		 * a FUSE node.
+		 *
+		 * This is for backwards-compatibility with old device trees
+		 * that didn't contain a FUSE node.
+		 */
+		if (IS_ENABLED(CONFIG_ARM) && soc_is_tegra()) {
+			u8 chip = tegra_get_chip_id();
+
+			regs.start = 0x7000f800;
+			regs.end = 0x7000fbff;
+			regs.flags = IORESOURCE_MEM;
+
+			switch (chip) {
+#ifdef CONFIG_ARCH_TEGRA_2x_SOC
+			case TEGRA20:
+				fuse->soc = &tegra20_fuse_soc;
+				break;
+#endif
+
+#ifdef CONFIG_ARCH_TEGRA_3x_SOC
+			case TEGRA30:
+				fuse->soc = &tegra30_fuse_soc;
+				break;
+#endif
+
+#ifdef CONFIG_ARCH_TEGRA_114_SOC
+			case TEGRA114:
+				fuse->soc = &tegra114_fuse_soc;
+				break;
+#endif
+
+#ifdef CONFIG_ARCH_TEGRA_124_SOC
+			case TEGRA124:
+				fuse->soc = &tegra124_fuse_soc;
+				break;
+#endif
+
+			default:
+				pr_warn("Unsupported SoC: %02x\n", chip);
+				break;
+			}
+		} else {
+			/*
+			 * At this point we're not running on Tegra, so play
+			 * nice with multi-platform kernels.
+			 */
+			return 0;
+		}
 	} else {
-		pr_err("Could not enable fuse clk. ioremap tegra car failed.\n");
+		/*
+		 * Extract information from the device tree if we've found a
+		 * matching node.
+		 */
+		if (of_address_to_resource(np, 0, &regs) < 0) {
+			pr_err("failed to get FUSE register\n");
+			return -ENXIO;
+		}
+
+		fuse->soc = match->data;
+	}
+
+	np = of_find_matching_node(NULL, car_match);
+	if (np) {
+		void __iomem *base = of_iomap(np, 0);
+
+		if (base) {
+			tegra_enable_fuse_clk(base);
+			iounmap(base);
+		} else {
+			pr_err("failed to map clock registers\n");
+			return -ENXIO;
+		}
+	}
+
+	fuse->base = ioremap_nocache(regs.start, resource_size(&regs));
+	if (!fuse->base) {
+		pr_err("failed to map FUSE registers\n");
 		return -ENXIO;
 	}
 
-	if (tegra_get_chip_id() == TEGRA20)
-		tegra20_init_fuse_early();
-	else
-		tegra30_init_fuse_early();
+	fuse->soc->init(fuse);
 
-	pr_info("Tegra Revision: %s SKU: %d CPU Process: %d Core Process: %d\n",
+	pr_info("Tegra Revision: %s SKU: %d CPU Process: %d SoC Process: %d\n",
 		tegra_revision_name[tegra_sku_info.revision],
 		tegra_sku_info.sku_id, tegra_sku_info.cpu_process_id,
-		tegra_sku_info.core_process_id);
-	pr_debug("Tegra CPU Speedo ID %d, Soc Speedo ID %d\n",
-		 tegra_sku_info.cpu_speedo_id, tegra_sku_info.soc_speedo_id);
+		tegra_sku_info.soc_process_id);
+	pr_debug("Tegra CPU Speedo ID %d, SoC Speedo ID %d\n",
+		 tegra_sku_info.cpu_speedo_id, tegra_sku_info.soc_speedo_id);
 
 	return 0;
 }
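The byte-wide sysfs read path above (fuse_readb) extracts one byte from a 32-bit fuse word by rounding the offset down to the word boundary, shifting, and masking. Here is a small user-space model of just that arithmetic; the backing array of fake fuse words and the helper names are hypothetical, standing in for the memory-mapped fuse registers:

```c
#include <assert.h>
#include <stdint.h>

/* Fake 32-bit fuse words standing in for the memory-mapped register array. */
static const uint32_t fuse_words[] = { 0x44332211, 0x88776655 };

/* Word-granular read, like fuse->read(); offset is in bytes. */
static uint32_t fuse_read32(unsigned int offset)
{
	return fuse_words[offset / 4];
}

/* Same arithmetic as fuse_readb(): round down, shift, mask. */
static uint8_t fuse_read8(unsigned int offset)
{
	uint32_t val = fuse_read32(offset & ~3u);

	val >>= (offset % 4) * 8;
	return val & 0xff;
}
```

Byte 0 of a word is its least-significant byte, so the sysfs blob comes out in little-endian order regardless of how wide the underlying read is.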
+65 -112
drivers/soc/tegra/fuse/fuse-tegra20.c
···
 #include "fuse.h"
 
 #define FUSE_BEGIN	0x100
-#define FUSE_SIZE	0x1f8
 #define FUSE_UID_LOW	0x08
 #define FUSE_UID_HIGH	0x0c
 
-static phys_addr_t fuse_phys;
-static struct clk *fuse_clk;
-static void __iomem __initdata *fuse_base;
-
-static DEFINE_MUTEX(apb_dma_lock);
-static DECLARE_COMPLETION(apb_dma_wait);
-static struct dma_chan *apb_dma_chan;
-static struct dma_slave_config dma_sconfig;
-static u32 *apb_buffer;
-static dma_addr_t apb_buffer_phys;
+static u32 tegra20_fuse_read_early(struct tegra_fuse *fuse, unsigned int offset)
+{
+	return readl_relaxed(fuse->base + FUSE_BEGIN + offset);
+}
 
 static void apb_dma_complete(void *args)
 {
-	complete(&apb_dma_wait);
+	struct tegra_fuse *fuse = args;
+
+	complete(&fuse->apbdma.wait);
 }
 
-static u32 tegra20_fuse_readl(const unsigned int offset)
+static u32 tegra20_fuse_read(struct tegra_fuse *fuse, unsigned int offset)
 {
-	int ret;
-	u32 val = 0;
+	unsigned long flags = DMA_PREP_INTERRUPT | DMA_CTRL_ACK;
 	struct dma_async_tx_descriptor *dma_desc;
 	unsigned long time_left;
+	u32 value = 0;
+	int err;
 
-	mutex_lock(&apb_dma_lock);
+	mutex_lock(&fuse->apbdma.lock);
 
-	dma_sconfig.src_addr = fuse_phys + FUSE_BEGIN + offset;
-	ret = dmaengine_slave_config(apb_dma_chan, &dma_sconfig);
-	if (ret)
+	fuse->apbdma.config.src_addr = fuse->apbdma.phys + FUSE_BEGIN + offset;
+
+	err = dmaengine_slave_config(fuse->apbdma.chan, &fuse->apbdma.config);
+	if (err)
 		goto out;
 
-	dma_desc = dmaengine_prep_slave_single(apb_dma_chan, apb_buffer_phys,
-			sizeof(u32), DMA_DEV_TO_MEM,
-			DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+	dma_desc = dmaengine_prep_slave_single(fuse->apbdma.chan,
+					       fuse->apbdma.phys,
+					       sizeof(u32), DMA_DEV_TO_MEM,
+					       flags);
 	if (!dma_desc)
 		goto out;
 
 	dma_desc->callback = apb_dma_complete;
-	dma_desc->callback_param = NULL;
+	dma_desc->callback_param = fuse;
 
-	reinit_completion(&apb_dma_wait);
+	reinit_completion(&fuse->apbdma.wait);
 
-	clk_prepare_enable(fuse_clk);
+	clk_prepare_enable(fuse->clk);
 
 	dmaengine_submit(dma_desc);
-	dma_async_issue_pending(apb_dma_chan);
-	time_left = wait_for_completion_timeout(&apb_dma_wait,
+	dma_async_issue_pending(fuse->apbdma.chan);
+	time_left = wait_for_completion_timeout(&fuse->apbdma.wait,
 						msecs_to_jiffies(50));
 
 	if (WARN(time_left == 0, "apb read dma timed out"))
-		dmaengine_terminate_all(apb_dma_chan);
+		dmaengine_terminate_all(fuse->apbdma.chan);
 	else
-		val = *apb_buffer;
+		value = *fuse->apbdma.virt;
 
-	clk_disable_unprepare(fuse_clk);
+	clk_disable_unprepare(fuse->clk);
+
 out:
-	mutex_unlock(&apb_dma_lock);
-
-	return val;
+	mutex_unlock(&fuse->apbdma.lock);
+	return value;
 }
 
-static const struct of_device_id tegra20_fuse_of_match[] = {
-	{ .compatible = "nvidia,tegra20-efuse" },
-	{},
-};
-
-static int apb_dma_init(void)
+static int tegra20_fuse_probe(struct tegra_fuse *fuse)
 {
 	dma_cap_mask_t mask;
 
 	dma_cap_zero(mask);
 	dma_cap_set(DMA_SLAVE, mask);
-	apb_dma_chan = dma_request_channel(mask, NULL, NULL);
-	if (!apb_dma_chan)
+
+	fuse->apbdma.chan = dma_request_channel(mask, NULL, NULL);
+	if (!fuse->apbdma.chan)
 		return -EPROBE_DEFER;
 
-	apb_buffer = dma_alloc_coherent(NULL, sizeof(u32), &apb_buffer_phys,
-					GFP_KERNEL);
-	if (!apb_buffer) {
-		dma_release_channel(apb_dma_chan);
+	fuse->apbdma.virt = dma_alloc_coherent(fuse->dev, sizeof(u32),
+					       &fuse->apbdma.phys,
+					       GFP_KERNEL);
+	if (!fuse->apbdma.virt) {
+		dma_release_channel(fuse->apbdma.chan);
 		return -ENOMEM;
 	}
 
-	dma_sconfig.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
-	dma_sconfig.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
-	dma_sconfig.src_maxburst = 1;
-	dma_sconfig.dst_maxburst = 1;
+	fuse->apbdma.config.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+	fuse->apbdma.config.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+	fuse->apbdma.config.src_maxburst = 1;
+	fuse->apbdma.config.dst_maxburst = 1;
+
+	init_completion(&fuse->apbdma.wait);
+	mutex_init(&fuse->apbdma.lock);
+	fuse->read = tegra20_fuse_read;
 
 	return 0;
 }
 
-static int tegra20_fuse_probe(struct platform_device *pdev)
-{
-	struct resource *res;
-	int err;
-
-	fuse_clk = devm_clk_get(&pdev->dev, NULL);
-	if (IS_ERR(fuse_clk)) {
-		dev_err(&pdev->dev, "missing clock");
-		return PTR_ERR(fuse_clk);
-	}
-
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	if (!res)
-		return -EINVAL;
-	fuse_phys = res->start;
-
-	err = apb_dma_init();
-	if (err)
-		return err;
-
-	if (tegra_fuse_create_sysfs(&pdev->dev, FUSE_SIZE, tegra20_fuse_readl))
-		return -ENODEV;
-
-	dev_dbg(&pdev->dev, "loaded\n");
-
-	return 0;
-}
-
-static struct platform_driver tegra20_fuse_driver = {
-	.probe = tegra20_fuse_probe,
-	.driver = {
-		.name = "tegra20_fuse",
-		.of_match_table = tegra20_fuse_of_match,
-	}
+static const struct tegra_fuse_info tegra20_fuse_info = {
+	.read = tegra20_fuse_read,
+	.size = 0x1f8,
+	.spare = 0x100,
 };
 
-static int __init tegra20_fuse_init(void)
-{
-	return platform_driver_register(&tegra20_fuse_driver);
-}
-postcore_initcall(tegra20_fuse_init);
-
 /* Early boot code.
This code is called before the devices are created */ 174 - 175 - u32 __init tegra20_fuse_early(const unsigned int offset) 176 - { 177 - return readl_relaxed(fuse_base + FUSE_BEGIN + offset); 178 - } 179 - 180 - bool __init tegra20_spare_fuse_early(int spare_bit) 181 - { 182 - u32 offset = spare_bit * 4; 183 - bool value; 184 - 185 - value = tegra20_fuse_early(offset + 0x100); 186 - 187 - return value; 188 - } 189 137 190 138 static void __init tegra20_fuse_add_randomness(void) 191 139 { ··· 143 195 randomness[1] = tegra_read_straps(); 144 196 randomness[2] = tegra_read_chipid(); 145 197 randomness[3] = tegra_sku_info.cpu_process_id << 16; 146 - randomness[3] |= tegra_sku_info.core_process_id; 198 + randomness[3] |= tegra_sku_info.soc_process_id; 147 199 randomness[4] = tegra_sku_info.cpu_speedo_id << 16; 148 200 randomness[4] |= tegra_sku_info.soc_speedo_id; 149 - randomness[5] = tegra20_fuse_early(FUSE_UID_LOW); 150 - randomness[6] = tegra20_fuse_early(FUSE_UID_HIGH); 201 + randomness[5] = tegra_fuse_read_early(FUSE_UID_LOW); 202 + randomness[6] = tegra_fuse_read_early(FUSE_UID_HIGH); 151 203 152 204 add_device_randomness(randomness, sizeof(randomness)); 153 205 } 154 206 155 - void __init tegra20_init_fuse_early(void) 207 + static void __init tegra20_fuse_init(struct tegra_fuse *fuse) 156 208 { 157 - fuse_base = ioremap(TEGRA_FUSE_BASE, TEGRA_FUSE_SIZE); 209 + fuse->read_early = tegra20_fuse_read_early; 158 210 159 211 tegra_init_revision(); 160 - tegra20_init_speedo_data(&tegra_sku_info); 212 + fuse->soc->speedo_init(&tegra_sku_info); 161 213 tegra20_fuse_add_randomness(); 162 - 163 - iounmap(fuse_base); 164 214 } 215 + 216 + const struct tegra_fuse_soc tegra20_fuse_soc = { 217 + .init = tegra20_fuse_init, 218 + .speedo_init = tegra20_init_speedo_data, 219 + .probe = tegra20_fuse_probe, 220 + .info = &tegra20_fuse_info, 221 + };
+86 -150
drivers/soc/tegra/fuse/fuse-tegra30.c
···
 
 #define FUSE_HAS_REVISION_INFO	BIT(0)
 
-enum speedo_idx {
-	SPEEDO_TEGRA30 = 0,
-	SPEEDO_TEGRA114,
-	SPEEDO_TEGRA124,
-};
-
-struct tegra_fuse_info {
-	int size;
-	int spare_bit;
-	enum speedo_idx speedo_idx;
-};
-
-static void __iomem *fuse_base;
-static struct clk *fuse_clk;
-static const struct tegra_fuse_info *fuse_info;
-
-u32 tegra30_fuse_readl(const unsigned int offset)
+#if defined(CONFIG_ARCH_TEGRA_3x_SOC) || \
+    defined(CONFIG_ARCH_TEGRA_114_SOC) || \
+    defined(CONFIG_ARCH_TEGRA_124_SOC) || \
+    defined(CONFIG_ARCH_TEGRA_132_SOC) || \
+    defined(CONFIG_ARCH_TEGRA_210_SOC)
+static u32 tegra30_fuse_read_early(struct tegra_fuse *fuse, unsigned int offset)
 {
-	u32 val;
-
-	/*
-	 * early in the boot, the fuse clock will be enabled by
-	 * tegra_init_fuse()
-	 */
-
-	if (fuse_clk)
-		clk_prepare_enable(fuse_clk);
-
-	val = readl_relaxed(fuse_base + FUSE_BEGIN + offset);
-
-	if (fuse_clk)
-		clk_disable_unprepare(fuse_clk);
-
-	return val;
+	return readl_relaxed(fuse->base + FUSE_BEGIN + offset);
 }
 
-static const struct tegra_fuse_info tegra30_info = {
-	.size = 0x2a4,
-	.spare_bit = 0x144,
-	.speedo_idx = SPEEDO_TEGRA30,
-};
-
-static const struct tegra_fuse_info tegra114_info = {
-	.size = 0x2a0,
-	.speedo_idx = SPEEDO_TEGRA114,
-};
-
-static const struct tegra_fuse_info tegra124_info = {
-	.size = 0x300,
-	.speedo_idx = SPEEDO_TEGRA124,
-};
-
-static const struct of_device_id tegra30_fuse_of_match[] = {
-	{ .compatible = "nvidia,tegra30-efuse", .data = &tegra30_info },
-	{ .compatible = "nvidia,tegra114-efuse", .data = &tegra114_info },
-	{ .compatible = "nvidia,tegra124-efuse", .data = &tegra124_info },
-	{},
-};
-
-static int tegra30_fuse_probe(struct platform_device *pdev)
+static u32 tegra30_fuse_read(struct tegra_fuse *fuse, unsigned int offset)
 {
-	const struct of_device_id *of_dev_id;
+	u32 value;
+	int err;
 
-	of_dev_id = of_match_device(tegra30_fuse_of_match, &pdev->dev);
-	if (!of_dev_id)
-		return -ENODEV;
-
-	fuse_clk = devm_clk_get(&pdev->dev, NULL);
-	if (IS_ERR(fuse_clk)) {
-		dev_err(&pdev->dev, "missing clock");
-		return PTR_ERR(fuse_clk);
+	err = clk_prepare_enable(fuse->clk);
+	if (err < 0) {
+		dev_err(fuse->dev, "failed to enable FUSE clock: %d\n", err);
+		return 0;
 	}
 
-	platform_set_drvdata(pdev, NULL);
+	value = readl_relaxed(fuse->base + FUSE_BEGIN + offset);
 
-	if (tegra_fuse_create_sysfs(&pdev->dev, fuse_info->size,
-				    tegra30_fuse_readl))
-		return -ENODEV;
+	clk_disable_unprepare(fuse->clk);
 
-	dev_dbg(&pdev->dev, "loaded\n");
-
-	return 0;
+	return value;
 }
-
-static struct platform_driver tegra30_fuse_driver = {
-	.probe = tegra30_fuse_probe,
-	.driver = {
-		.name = "tegra_fuse",
-		.of_match_table = tegra30_fuse_of_match,
-	}
-};
-
-static int __init tegra30_fuse_init(void)
-{
-	return platform_driver_register(&tegra30_fuse_driver);
-}
-postcore_initcall(tegra30_fuse_init);
-
-/* Early boot code. This code is called before the devices are created */
-
-typedef void (*speedo_f)(struct tegra_sku_info *sku_info);
-
-static speedo_f __initdata speedo_tbl[] = {
-	[SPEEDO_TEGRA30] = tegra30_init_speedo_data,
-	[SPEEDO_TEGRA114] = tegra114_init_speedo_data,
-	[SPEEDO_TEGRA124] = tegra124_init_speedo_data,
-};
 
 static void __init tegra30_fuse_add_randomness(void)
 {
···
 	randomness[1] = tegra_read_straps();
 	randomness[2] = tegra_read_chipid();
 	randomness[3] = tegra_sku_info.cpu_process_id << 16;
-	randomness[3] |= tegra_sku_info.core_process_id;
+	randomness[3] |= tegra_sku_info.soc_process_id;
 	randomness[4] = tegra_sku_info.cpu_speedo_id << 16;
 	randomness[4] |= tegra_sku_info.soc_speedo_id;
-	randomness[5] = tegra30_fuse_readl(FUSE_VENDOR_CODE);
-	randomness[6] = tegra30_fuse_readl(FUSE_FAB_CODE);
-	randomness[7] = tegra30_fuse_readl(FUSE_LOT_CODE_0);
-	randomness[8] = tegra30_fuse_readl(FUSE_LOT_CODE_1);
-	randomness[9] = tegra30_fuse_readl(FUSE_WAFER_ID);
-	randomness[10] = tegra30_fuse_readl(FUSE_X_COORDINATE);
-	randomness[11] = tegra30_fuse_readl(FUSE_Y_COORDINATE);
+	randomness[5] = tegra_fuse_read_early(FUSE_VENDOR_CODE);
+	randomness[6] = tegra_fuse_read_early(FUSE_FAB_CODE);
+	randomness[7] = tegra_fuse_read_early(FUSE_LOT_CODE_0);
+	randomness[8] = tegra_fuse_read_early(FUSE_LOT_CODE_1);
+	randomness[9] = tegra_fuse_read_early(FUSE_WAFER_ID);
+	randomness[10] = tegra_fuse_read_early(FUSE_X_COORDINATE);
+	randomness[11] = tegra_fuse_read_early(FUSE_Y_COORDINATE);
 
 	add_device_randomness(randomness, sizeof(randomness));
 }
 
-static void __init legacy_fuse_init(void)
+static void __init tegra30_fuse_init(struct tegra_fuse *fuse)
 {
-	switch (tegra_get_chip_id()) {
-	case TEGRA30:
-		fuse_info = &tegra30_info;
-		break;
-	case TEGRA114:
-		fuse_info = &tegra114_info;
-		break;
-	case TEGRA124:
-	case TEGRA132:
-		fuse_info = &tegra124_info;
-		break;
-	default:
-		return;
-	}
-
-	fuse_base = ioremap(TEGRA_FUSE_BASE, TEGRA_FUSE_SIZE);
-}
-
-bool __init tegra30_spare_fuse(int spare_bit)
-{
-	u32 offset = fuse_info->spare_bit + spare_bit * 4;
-
-	return tegra30_fuse_readl(offset) & 1;
-}
-
-void __init tegra30_init_fuse_early(void)
-{
-	struct device_node *np;
-	const struct of_device_id *of_match;
-
-	np = of_find_matching_node_and_match(NULL, tegra30_fuse_of_match,
-					     &of_match);
-	if (np) {
-		fuse_base = of_iomap(np, 0);
-		fuse_info = (struct tegra_fuse_info *)of_match->data;
-	} else
-		legacy_fuse_init();
-
-	if (!fuse_base) {
-		pr_warn("fuse DT node missing and unknown chip id: 0x%02x\n",
-			tegra_get_chip_id());
-		return;
-	}
+	fuse->read_early = tegra30_fuse_read_early;
+	fuse->read = tegra30_fuse_read;
 
 	tegra_init_revision();
-	speedo_tbl[fuse_info->speedo_idx](&tegra_sku_info);
+	fuse->soc->speedo_init(&tegra_sku_info);
 	tegra30_fuse_add_randomness();
 }
+#endif
+
+#ifdef CONFIG_ARCH_TEGRA_3x_SOC
+static const struct tegra_fuse_info tegra30_fuse_info = {
+	.read = tegra30_fuse_read,
+	.size = 0x2a4,
+	.spare = 0x144,
+};
+
+const struct tegra_fuse_soc tegra30_fuse_soc = {
+	.init = tegra30_fuse_init,
+	.speedo_init = tegra30_init_speedo_data,
+	.info = &tegra30_fuse_info,
+};
+#endif
+
+#ifdef CONFIG_ARCH_TEGRA_114_SOC
+static const struct tegra_fuse_info tegra114_fuse_info = {
+	.read = tegra30_fuse_read,
+	.size = 0x2a0,
+	.spare = 0x180,
+};
+
+const struct tegra_fuse_soc tegra114_fuse_soc = {
+	.init = tegra30_fuse_init,
+	.speedo_init = tegra114_init_speedo_data,
+	.info = &tegra114_fuse_info,
+};
+#endif
+
+#if defined(CONFIG_ARCH_TEGRA_124_SOC) || defined(CONFIG_ARCH_TEGRA_132_SOC)
+static const struct tegra_fuse_info tegra124_fuse_info = {
+	.read = tegra30_fuse_read,
+	.size = 0x300,
+	.spare = 0x200,
+};
+
+const struct tegra_fuse_soc tegra124_fuse_soc = {
+	.init = tegra30_fuse_init,
+	.speedo_init = tegra124_init_speedo_data,
+	.info = &tegra124_fuse_info,
+};
+#endif
+
+#if defined(CONFIG_ARCH_TEGRA_210_SOC)
+static const struct tegra_fuse_info tegra210_fuse_info = {
+	.read = tegra30_fuse_read,
+	.size = 0x300,
+	.spare = 0x280,
+};
+
+const struct tegra_fuse_soc tegra210_fuse_soc = {
+	.init = tegra30_fuse_init,
+	.speedo_init = tegra210_init_speedo_data,
+	.info = &tegra210_fuse_info,
+};
+#endif
+66 -29
drivers/soc/tegra/fuse/fuse.h
···
 #ifndef __DRIVERS_MISC_TEGRA_FUSE_H
 #define __DRIVERS_MISC_TEGRA_FUSE_H
 
-#define TEGRA_FUSE_BASE		0x7000f800
-#define TEGRA_FUSE_SIZE		0x400
+#include <linux/dmaengine.h>
+#include <linux/types.h>
 
-int tegra_fuse_create_sysfs(struct device *dev, int size,
-			    u32 (*readl)(const unsigned int offset));
+struct tegra_fuse;
 
-bool tegra30_spare_fuse(int bit);
-u32 tegra30_fuse_readl(const unsigned int offset);
-void tegra30_init_fuse_early(void);
+struct tegra_fuse_info {
+	u32 (*read)(struct tegra_fuse *fuse, unsigned int offset);
+	unsigned int size;
+	unsigned int spare;
+};
+
+struct tegra_fuse_soc {
+	void (*init)(struct tegra_fuse *fuse);
+	void (*speedo_init)(struct tegra_sku_info *info);
+	int (*probe)(struct tegra_fuse *fuse);
+
+	const struct tegra_fuse_info *info;
+};
+
+struct tegra_fuse {
+	struct device *dev;
+	void __iomem *base;
+	phys_addr_t phys;
+	struct clk *clk;
+
+	u32 (*read_early)(struct tegra_fuse *fuse, unsigned int offset);
+	u32 (*read)(struct tegra_fuse *fuse, unsigned int offset);
+	const struct tegra_fuse_soc *soc;
+
+	/* APBDMA on Tegra20 */
+	struct {
+		struct mutex lock;
+		struct completion wait;
+		struct dma_chan *chan;
+		struct dma_slave_config config;
+		dma_addr_t phys;
+		u32 *virt;
+	} apbdma;
+};
+
 void tegra_init_revision(void);
 void tegra_init_apbmisc(void);
 
+bool __init tegra_fuse_read_spare(unsigned int spare);
+u32 __init tegra_fuse_read_early(unsigned int offset);
+
 #ifdef CONFIG_ARCH_TEGRA_2x_SOC
 void tegra20_init_speedo_data(struct tegra_sku_info *sku_info);
-bool tegra20_spare_fuse_early(int spare_bit);
-void tegra20_init_fuse_early(void);
-u32 tegra20_fuse_early(const unsigned int offset);
-#else
-static inline void tegra20_init_speedo_data(struct tegra_sku_info *sku_info) {}
-static inline bool tegra20_spare_fuse_early(int spare_bit)
-{
-	return false;
-}
-static inline void tegra20_init_fuse_early(void) {}
-static inline u32 tegra20_fuse_early(const unsigned int offset)
-{
-	return 0;
-}
 #endif
-
 
 #ifdef CONFIG_ARCH_TEGRA_3x_SOC
 void tegra30_init_speedo_data(struct tegra_sku_info *sku_info);
-#else
-static inline void tegra30_init_speedo_data(struct tegra_sku_info *sku_info) {}
 #endif
 
 #ifdef CONFIG_ARCH_TEGRA_114_SOC
 void tegra114_init_speedo_data(struct tegra_sku_info *sku_info);
-#else
-static inline void tegra114_init_speedo_data(struct tegra_sku_info *sku_info) {}
 #endif
 
-#ifdef CONFIG_ARCH_TEGRA_124_SOC
+#if defined(CONFIG_ARCH_TEGRA_124_SOC) || defined(CONFIG_ARCH_TEGRA_132_SOC)
 void tegra124_init_speedo_data(struct tegra_sku_info *sku_info);
-#else
-static inline void tegra124_init_speedo_data(struct tegra_sku_info *sku_info) {}
+#endif
+
+#ifdef CONFIG_ARCH_TEGRA_210_SOC
+void tegra210_init_speedo_data(struct tegra_sku_info *sku_info);
+#endif
+
+#ifdef CONFIG_ARCH_TEGRA_2x_SOC
+extern const struct tegra_fuse_soc tegra20_fuse_soc;
+#endif
+
+#ifdef CONFIG_ARCH_TEGRA_3x_SOC
+extern const struct tegra_fuse_soc tegra30_fuse_soc;
+#endif
+
+#ifdef CONFIG_ARCH_TEGRA_114_SOC
+extern const struct tegra_fuse_soc tegra114_fuse_soc;
+#endif
+
+#if defined(CONFIG_ARCH_TEGRA_124_SOC) || defined(CONFIG_ARCH_TEGRA_132_SOC)
+extern const struct tegra_fuse_soc tegra124_fuse_soc;
+#endif
+
+#ifdef CONFIG_ARCH_TEGRA_210_SOC
+extern const struct tegra_fuse_soc tegra210_fuse_soc;
 #endif
 
 #endif
+11 -11
drivers/soc/tegra/fuse/speedo-tegra114.c
···
 
 #include "fuse.h"
 
-#define CORE_PROCESS_CORNERS	2
+#define SOC_PROCESS_CORNERS	2
 #define CPU_PROCESS_CORNERS	2
 
 enum {
···
 	THRESHOLD_INDEX_COUNT,
 };
 
-static const u32 __initconst core_process_speedos[][CORE_PROCESS_CORNERS] = {
+static const u32 __initconst soc_process_speedos[][SOC_PROCESS_CORNERS] = {
 	{1123, UINT_MAX},
 	{0, UINT_MAX},
 };
···
 	}
 
 	if (rev == TEGRA_REVISION_A01) {
-		tmp = tegra30_fuse_readl(0x270) << 1;
-		tmp |= tegra30_fuse_readl(0x26c);
+		tmp = tegra_fuse_read_early(0x270) << 1;
+		tmp |= tegra_fuse_read_early(0x26c);
 		if (!tmp)
 			sku_info->cpu_speedo_id = 0;
 	}
···
 void __init tegra114_init_speedo_data(struct tegra_sku_info *sku_info)
 {
 	u32 cpu_speedo_val;
-	u32 core_speedo_val;
+	u32 soc_speedo_val;
 	int threshold;
 	int i;
 
 	BUILD_BUG_ON(ARRAY_SIZE(cpu_process_speedos) !=
			THRESHOLD_INDEX_COUNT);
-	BUILD_BUG_ON(ARRAY_SIZE(core_process_speedos) !=
+	BUILD_BUG_ON(ARRAY_SIZE(soc_process_speedos) !=
			THRESHOLD_INDEX_COUNT);
 
 	rev_sku_to_speedo_ids(sku_info, &threshold);
 
-	cpu_speedo_val = tegra30_fuse_readl(0x12c) + 1024;
-	core_speedo_val = tegra30_fuse_readl(0x134);
+	cpu_speedo_val = tegra_fuse_read_early(0x12c) + 1024;
+	soc_speedo_val = tegra_fuse_read_early(0x134);
 
 	for (i = 0; i < CPU_PROCESS_CORNERS; i++)
 		if (cpu_speedo_val < cpu_process_speedos[threshold][i])
 			break;
 	sku_info->cpu_process_id = i;
 
-	for (i = 0; i < CORE_PROCESS_CORNERS; i++)
-		if (core_speedo_val < core_process_speedos[threshold][i])
+	for (i = 0; i < SOC_PROCESS_CORNERS; i++)
+		if (soc_speedo_val < soc_process_speedos[threshold][i])
 			break;
-	sku_info->core_process_id = i;
+	sku_info->soc_process_id = i;
 }
+13 -13
drivers/soc/tegra/fuse/speedo-tegra124.c
···
 
 #define CPU_PROCESS_CORNERS	2
 #define GPU_PROCESS_CORNERS	2
-#define CORE_PROCESS_CORNERS	2
+#define SOC_PROCESS_CORNERS	2
 
 #define FUSE_CPU_SPEEDO_0	0x14
 #define FUSE_CPU_SPEEDO_1	0x2c
···
 	{0, UINT_MAX},
 };
 
-static const u32 __initconst core_process_speedos[][CORE_PROCESS_CORNERS] = {
+static const u32 __initconst soc_process_speedos[][SOC_PROCESS_CORNERS] = {
 	{2101, UINT_MAX},
 	{0, UINT_MAX},
 };
···
			THRESHOLD_INDEX_COUNT);
 	BUILD_BUG_ON(ARRAY_SIZE(gpu_process_speedos) !=
			THRESHOLD_INDEX_COUNT);
-	BUILD_BUG_ON(ARRAY_SIZE(core_process_speedos) !=
+	BUILD_BUG_ON(ARRAY_SIZE(soc_process_speedos) !=
			THRESHOLD_INDEX_COUNT);
 
-	cpu_speedo_0_value = tegra30_fuse_readl(FUSE_CPU_SPEEDO_0);
+	cpu_speedo_0_value = tegra_fuse_read_early(FUSE_CPU_SPEEDO_0);
 
 	/* GPU Speedo is stored in CPU_SPEEDO_2 */
-	sku_info->gpu_speedo_value = tegra30_fuse_readl(FUSE_CPU_SPEEDO_2);
+	sku_info->gpu_speedo_value = tegra_fuse_read_early(FUSE_CPU_SPEEDO_2);
 
-	soc_speedo_0_value = tegra30_fuse_readl(FUSE_SOC_SPEEDO_0);
+	soc_speedo_0_value = tegra_fuse_read_early(FUSE_SOC_SPEEDO_0);
 
-	cpu_iddq_value = tegra30_fuse_readl(FUSE_CPU_IDDQ);
-	soc_iddq_value = tegra30_fuse_readl(FUSE_SOC_IDDQ);
-	gpu_iddq_value = tegra30_fuse_readl(FUSE_GPU_IDDQ);
+	cpu_iddq_value = tegra_fuse_read_early(FUSE_CPU_IDDQ);
+	soc_iddq_value = tegra_fuse_read_early(FUSE_SOC_IDDQ);
+	gpu_iddq_value = tegra_fuse_read_early(FUSE_GPU_IDDQ);
 
 	sku_info->cpu_speedo_value = cpu_speedo_0_value;
 
···
 	rev_sku_to_speedo_ids(sku_info, &threshold);
 
-	sku_info->cpu_iddq_value = tegra30_fuse_readl(FUSE_CPU_IDDQ);
+	sku_info->cpu_iddq_value = tegra_fuse_read_early(FUSE_CPU_IDDQ);
 
 	for (i = 0; i < GPU_PROCESS_CORNERS; i++)
 		if (sku_info->gpu_speedo_value <
···
 			break;
 	sku_info->cpu_process_id = i;
 
-	for (i = 0; i < CORE_PROCESS_CORNERS; i++)
+	for (i = 0; i < SOC_PROCESS_CORNERS; i++)
 		if (soc_speedo_0_value <
-		    core_process_speedos[threshold][i])
+		    soc_process_speedos[threshold][i])
 			break;
-	sku_info->core_process_id = i;
+	sku_info->soc_process_id = i;
 
 	pr_debug("Tegra GPU Speedo ID=%d, Speedo Value=%d\n",
		 sku_info->gpu_speedo_id, sku_info->gpu_speedo_value);
+14 -14
drivers/soc/tegra/fuse/speedo-tegra20.c
···
 #define CPU_SPEEDO_REDUND_MSBIT	39
 #define CPU_SPEEDO_REDUND_OFFS	(CPU_SPEEDO_REDUND_MSBIT - CPU_SPEEDO_MSBIT)
 
-#define CORE_SPEEDO_LSBIT		40
-#define CORE_SPEEDO_MSBIT		47
-#define CORE_SPEEDO_REDUND_LSBIT	48
-#define CORE_SPEEDO_REDUND_MSBIT	55
-#define CORE_SPEEDO_REDUND_OFFS	(CORE_SPEEDO_REDUND_MSBIT - CORE_SPEEDO_MSBIT)
+#define SOC_SPEEDO_LSBIT		40
+#define SOC_SPEEDO_MSBIT		47
+#define SOC_SPEEDO_REDUND_LSBIT	48
+#define SOC_SPEEDO_REDUND_MSBIT	55
+#define SOC_SPEEDO_REDUND_OFFS	(SOC_SPEEDO_REDUND_MSBIT - SOC_SPEEDO_MSBIT)
 
 #define SPEEDO_MULT	4
 
···
 	{316, 331, 383, UINT_MAX},
 };
 
-static const u32 __initconst core_process_speedos[][PROCESS_CORNERS_NUM] = {
+static const u32 __initconst soc_process_speedos[][PROCESS_CORNERS_NUM] = {
 	{165, 195, 224, UINT_MAX},
 	{165, 195, 224, UINT_MAX},
 	{165, 195, 224, UINT_MAX},
···
 	int i;
 
 	BUILD_BUG_ON(ARRAY_SIZE(cpu_process_speedos) != SPEEDO_ID_COUNT);
-	BUILD_BUG_ON(ARRAY_SIZE(core_process_speedos) != SPEEDO_ID_COUNT);
+	BUILD_BUG_ON(ARRAY_SIZE(soc_process_speedos) != SPEEDO_ID_COUNT);
 
 	if (SPEEDO_ID_SELECT_0(sku_info->revision))
 		sku_info->soc_speedo_id = SPEEDO_ID_0;
···
 
 	val = 0;
 	for (i = CPU_SPEEDO_MSBIT; i >= CPU_SPEEDO_LSBIT; i--) {
-		reg = tegra20_spare_fuse_early(i) |
-			tegra20_spare_fuse_early(i + CPU_SPEEDO_REDUND_OFFS);
+		reg = tegra_fuse_read_spare(i) |
+			tegra_fuse_read_spare(i + CPU_SPEEDO_REDUND_OFFS);
 		val = (val << 1) | (reg & 0x1);
 	}
 	val = val * SPEEDO_MULT;
···
 	sku_info->cpu_process_id = i;
 
 	val = 0;
-	for (i = CORE_SPEEDO_MSBIT; i >= CORE_SPEEDO_LSBIT; i--) {
-		reg = tegra20_spare_fuse_early(i) |
-			tegra20_spare_fuse_early(i + CORE_SPEEDO_REDUND_OFFS);
+	for (i = SOC_SPEEDO_MSBIT; i >= SOC_SPEEDO_LSBIT; i--) {
+		reg = tegra_fuse_read_spare(i) |
+			tegra_fuse_read_spare(i + SOC_SPEEDO_REDUND_OFFS);
 		val = (val << 1) | (reg & 0x1);
 	}
 	val = val * SPEEDO_MULT;
 	pr_debug("Core speedo value %u\n", val);
 
 	for (i = 0; i < (PROCESS_CORNERS_NUM - 1); i++) {
-		if (val <= core_process_speedos[sku_info->soc_speedo_id][i])
+		if (val <= soc_process_speedos[sku_info->soc_speedo_id][i])
 			break;
 	}
-	sku_info->core_process_id = i;
+	sku_info->soc_process_id = i;
 }
+184
drivers/soc/tegra/fuse/speedo-tegra210.c
+/*
+ * Copyright (c) 2013-2015, NVIDIA CORPORATION.  All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/bug.h>
+
+#include <soc/tegra/fuse.h>
+
+#include "fuse.h"
+
+#define CPU_PROCESS_CORNERS	2
+#define GPU_PROCESS_CORNERS	2
+#define SOC_PROCESS_CORNERS	3
+
+#define FUSE_CPU_SPEEDO_0	0x014
+#define FUSE_CPU_SPEEDO_1	0x02c
+#define FUSE_CPU_SPEEDO_2	0x030
+#define FUSE_SOC_SPEEDO_0	0x034
+#define FUSE_SOC_SPEEDO_1	0x038
+#define FUSE_SOC_SPEEDO_2	0x03c
+#define FUSE_CPU_IDDQ		0x018
+#define FUSE_SOC_IDDQ		0x040
+#define FUSE_GPU_IDDQ		0x128
+#define FUSE_FT_REV		0x028
+
+enum {
+	THRESHOLD_INDEX_0,
+	THRESHOLD_INDEX_1,
+	THRESHOLD_INDEX_COUNT,
+};
+
+static const u32 __initconst cpu_process_speedos[][CPU_PROCESS_CORNERS] = {
+	{ 2119, UINT_MAX },
+	{ 2119, UINT_MAX },
+};
+
+static const u32 __initconst gpu_process_speedos[][GPU_PROCESS_CORNERS] = {
+	{ UINT_MAX, UINT_MAX },
+	{ UINT_MAX, UINT_MAX },
+};
+
+static const u32 __initconst soc_process_speedos[][SOC_PROCESS_CORNERS] = {
+	{ 1950, 2100, UINT_MAX },
+	{ 1950, 2100, UINT_MAX },
+};
+
+static u8 __init get_speedo_revision(void)
+{
+	return tegra_fuse_read_spare(4) << 2 |
+	       tegra_fuse_read_spare(3) << 1 |
+	       tegra_fuse_read_spare(2) << 0;
+}
+
+static void __init rev_sku_to_speedo_ids(struct tegra_sku_info *sku_info,
+					 u8 speedo_rev, unsigned int *threshold)
+{
+	int sku = sku_info->sku_id;
+
+	/* Assign to default */
+	sku_info->cpu_speedo_id = 0;
+	sku_info->soc_speedo_id = 0;
+	sku_info->gpu_speedo_id = 0;
+	*threshold = THRESHOLD_INDEX_0;
+
+	switch (sku) {
+	case 0x00: /* Engineering SKU */
+	case 0x01: /* Engineering SKU */
+	case 0x07:
+	case 0x17:
+	case 0x27:
+		if (speedo_rev >= 2)
+			sku_info->gpu_speedo_id = 1;
+		break;
+
+	case 0x13:
+		if (speedo_rev >= 2)
+			sku_info->gpu_speedo_id = 1;
+
+		sku_info->cpu_speedo_id = 1;
+		break;
+
+	default:
+		pr_err("Tegra210: unknown SKU %#04x\n", sku);
+		/* Using the default for the error case */
+		break;
+	}
+}
+
+static int get_process_id(int value, const u32 *speedos, unsigned int num)
+{
+	unsigned int i;
+
+	for (i = 0; i < num; i++)
+		if (value < speedos[i])
+			return i;
+
+	return -EINVAL;
+}
+
+void __init tegra210_init_speedo_data(struct tegra_sku_info *sku_info)
+{
+	int cpu_speedo[3], soc_speedo[3], cpu_iddq, gpu_iddq, soc_iddq;
+	unsigned int index;
+	u8 speedo_revision;
+
+	BUILD_BUG_ON(ARRAY_SIZE(cpu_process_speedos) !=
+			THRESHOLD_INDEX_COUNT);
+	BUILD_BUG_ON(ARRAY_SIZE(gpu_process_speedos) !=
+			THRESHOLD_INDEX_COUNT);
+	BUILD_BUG_ON(ARRAY_SIZE(soc_process_speedos) !=
+			THRESHOLD_INDEX_COUNT);
+
+	/* Read speedo/IDDQ fuses */
+	cpu_speedo[0] = tegra_fuse_read_early(FUSE_CPU_SPEEDO_0);
+	cpu_speedo[1] = tegra_fuse_read_early(FUSE_CPU_SPEEDO_1);
+	cpu_speedo[2] = tegra_fuse_read_early(FUSE_CPU_SPEEDO_2);
+
+	soc_speedo[0] = tegra_fuse_read_early(FUSE_SOC_SPEEDO_0);
+	soc_speedo[1] = tegra_fuse_read_early(FUSE_SOC_SPEEDO_1);
+	soc_speedo[2] = tegra_fuse_read_early(FUSE_SOC_SPEEDO_2);
+
+	cpu_iddq = tegra_fuse_read_early(FUSE_CPU_IDDQ) * 4;
+	soc_iddq = tegra_fuse_read_early(FUSE_SOC_IDDQ) * 4;
+	gpu_iddq = tegra_fuse_read_early(FUSE_GPU_IDDQ) * 5;
+
+	/*
+	 * Determine CPU, GPU and SoC speedo values depending on speedo fusing
+	 * revision. Note that GPU speedo value is fused in CPU_SPEEDO_2.
+	 */
+	speedo_revision = get_speedo_revision();
+	pr_info("Speedo Revision %u\n", speedo_revision);
+
+	if (speedo_revision >= 3) {
+		sku_info->cpu_speedo_value = cpu_speedo[0];
+		sku_info->gpu_speedo_value = cpu_speedo[2];
+		sku_info->soc_speedo_value = soc_speedo[0];
+	} else if (speedo_revision == 2) {
+		sku_info->cpu_speedo_value = (-1938 + (1095 * cpu_speedo[0] / 100)) / 10;
+		sku_info->gpu_speedo_value = (-1662 + (1082 * cpu_speedo[2] / 100)) / 10;
+		sku_info->soc_speedo_value = ( -705 + (1037 * soc_speedo[0] / 100)) / 10;
+	} else {
+		sku_info->cpu_speedo_value = 2100;
+		sku_info->gpu_speedo_value = cpu_speedo[2] - 75;
+		sku_info->soc_speedo_value = 1900;
+	}
+
+	if ((sku_info->cpu_speedo_value <= 0) ||
+	    (sku_info->gpu_speedo_value <= 0) ||
+	    (sku_info->soc_speedo_value <= 0)) {
+		WARN(1, "speedo value not fused\n");
+		return;
+	}
+
+	rev_sku_to_speedo_ids(sku_info, speedo_revision, &index);
+
+	sku_info->gpu_process_id = get_process_id(sku_info->gpu_speedo_value,
+						  gpu_process_speedos[index],
+						  GPU_PROCESS_CORNERS);
+
+	sku_info->cpu_process_id = get_process_id(sku_info->cpu_speedo_value,
+						  cpu_process_speedos[index],
+						  CPU_PROCESS_CORNERS);
+
+	sku_info->soc_process_id = get_process_id(sku_info->soc_speedo_value,
+						  soc_process_speedos[index],
+						  SOC_PROCESS_CORNERS);
+
+	pr_debug("Tegra GPU Speedo ID=%d, Speedo Value=%d\n",
+		 sku_info->gpu_speedo_id, sku_info->gpu_speedo_value);
+}
+24 -24
drivers/soc/tegra/fuse/speedo-tegra30.c
··· 22 22 23 23 #include "fuse.h" 24 24 25 - #define CORE_PROCESS_CORNERS 1 25 + #define SOC_PROCESS_CORNERS 1 26 26 #define CPU_PROCESS_CORNERS 6 27 27 28 28 #define FUSE_SPEEDO_CALIB_0 0x14 ··· 54 54 THRESHOLD_INDEX_COUNT, 55 55 }; 56 56 57 - static const u32 __initconst core_process_speedos[][CORE_PROCESS_CORNERS] = { 57 + static const u32 __initconst soc_process_speedos[][SOC_PROCESS_CORNERS] = { 58 58 {180}, 59 59 {170}, 60 60 {195}, ··· 93 93 int bit_minus1; 94 94 int bit_minus2; 95 95 96 - reg = tegra30_fuse_readl(FUSE_SPEEDO_CALIB_0); 96 + reg = tegra_fuse_read_early(FUSE_SPEEDO_CALIB_0); 97 97 98 98 *speedo_lp = (reg & 0xFFFF) * 4; 99 99 *speedo_g = ((reg >> 16) & 0xFFFF) * 4; 100 100 101 - ate_ver = tegra30_fuse_readl(FUSE_TEST_PROG_VER); 101 + ate_ver = tegra_fuse_read_early(FUSE_TEST_PROG_VER); 102 102 pr_debug("Tegra ATE prog ver %d.%d\n", ate_ver/10, ate_ver%10); 103 103 104 104 if (ate_ver >= 26) { 105 - bit_minus1 = tegra30_spare_fuse(LP_SPEEDO_BIT_MINUS1); 106 - bit_minus1 |= tegra30_spare_fuse(LP_SPEEDO_BIT_MINUS1_R); 107 - bit_minus2 = tegra30_spare_fuse(LP_SPEEDO_BIT_MINUS2); 108 - bit_minus2 |= tegra30_spare_fuse(LP_SPEEDO_BIT_MINUS2_R); 105 + bit_minus1 = tegra_fuse_read_spare(LP_SPEEDO_BIT_MINUS1); 106 + bit_minus1 |= tegra_fuse_read_spare(LP_SPEEDO_BIT_MINUS1_R); 107 + bit_minus2 = tegra_fuse_read_spare(LP_SPEEDO_BIT_MINUS2); 108 + bit_minus2 |= tegra_fuse_read_spare(LP_SPEEDO_BIT_MINUS2_R); 109 109 *speedo_lp |= (bit_minus1 << 1) | bit_minus2; 110 110 111 - bit_minus1 = tegra30_spare_fuse(G_SPEEDO_BIT_MINUS1); 112 - bit_minus1 |= tegra30_spare_fuse(G_SPEEDO_BIT_MINUS1_R); 113 - bit_minus2 = tegra30_spare_fuse(G_SPEEDO_BIT_MINUS2); 114 - bit_minus2 |= tegra30_spare_fuse(G_SPEEDO_BIT_MINUS2_R); 111 + bit_minus1 = tegra_fuse_read_spare(G_SPEEDO_BIT_MINUS1); 112 + bit_minus1 |= tegra_fuse_read_spare(G_SPEEDO_BIT_MINUS1_R); 113 + bit_minus2 = tegra_fuse_read_spare(G_SPEEDO_BIT_MINUS2); 114 + bit_minus2 |= tegra_fuse_read_spare(G_SPEEDO_BIT_MINUS2_R); 115 115 *speedo_g |= (bit_minus1 << 1) | bit_minus2; 116 116 } else { 117 117 *speedo_lp |= 0x3; ··· 121 121 122 122 static void __init rev_sku_to_speedo_ids(struct tegra_sku_info *sku_info) 123 123 { 124 - int package_id = tegra30_fuse_readl(FUSE_PACKAGE_INFO) & 0x0F; 124 + int package_id = tegra_fuse_read_early(FUSE_PACKAGE_INFO) & 0x0F; 125 125 126 126 switch (sku_info->revision) { 127 127 case TEGRA_REVISION_A01: ··· 246 246 void __init tegra30_init_speedo_data(struct tegra_sku_info *sku_info) 247 247 { 248 248 u32 cpu_speedo_val; 249 - u32 core_speedo_val; 249 + u32 soc_speedo_val; 250 250 int i; 251 251 252 252 BUILD_BUG_ON(ARRAY_SIZE(cpu_process_speedos) != 253 253 THRESHOLD_INDEX_COUNT); 254 - BUILD_BUG_ON(ARRAY_SIZE(core_process_speedos) != 254 + BUILD_BUG_ON(ARRAY_SIZE(soc_process_speedos) != 255 255 THRESHOLD_INDEX_COUNT); 256 256 257 257 258 258 rev_sku_to_speedo_ids(sku_info); 259 - fuse_speedo_calib(&cpu_speedo_val, &core_speedo_val); 259 + fuse_speedo_calib(&cpu_speedo_val, &soc_speedo_val); 260 260 pr_debug("Tegra CPU speedo value %u\n", cpu_speedo_val); 261 - pr_debug("Tegra Core speedo value %u\n", core_speedo_val); 261 + pr_debug("Tegra Core speedo value %u\n", soc_speedo_val); 262 262 263 263 for (i = 0; i < CPU_PROCESS_CORNERS; i++) { 264 264 if (cpu_speedo_val < cpu_process_speedos[threshold_index][i]) ··· 273 273 sku_info->cpu_speedo_id = 1; 274 274 } 275 275 276 - for (i = 0; i < CORE_PROCESS_CORNERS; i++) { 277 - if (core_speedo_val < core_process_speedos[threshold_index][i]) 276 + for (i = 0; i < SOC_PROCESS_CORNERS; i++) { 277 + if (soc_speedo_val < soc_process_speedos[threshold_index][i]) 278 278 break; 279 279 } 280 - sku_info->core_process_id = i - 1; 280 + sku_info->soc_process_id = i - 1; 281 281 282 - if (sku_info->core_process_id == -1) { 283 - pr_warn("Tegra CORE speedo value %3d out of range", 284 - core_speedo_val); 285 - sku_info->core_process_id = 0; 282 + if (sku_info->soc_process_id == -1) { 283 + pr_warn("Tegra SoC speedo value %3d out of range", 284 + soc_speedo_val); 285 + sku_info->soc_process_id = 0; 286 286 sku_info->soc_speedo_id = 1; 287 287 } 288 288 }
+61 -15
drivers/soc/tegra/fuse/tegra-apbmisc.c
··· 21 21 #include <linux/io.h> 22 22 23 23 #include <soc/tegra/fuse.h> 24 + #include <soc/tegra/common.h> 24 25 25 26 #include "fuse.h" 26 27 27 - #define APBMISC_BASE 0x70000800 28 - #define APBMISC_SIZE 0x64 29 28 #define FUSE_SKU_INFO 0x10 30 29 31 30 #define PMC_STRAPPING_OPT_A_RAM_CODE_SHIFT 4 ··· 94 95 rev = TEGRA_REVISION_A02; 95 96 break; 96 97 case 3: 97 - if (chip_id == TEGRA20 && (tegra20_spare_fuse_early(18) || 98 - tegra20_spare_fuse_early(19))) 98 + if (chip_id == TEGRA20 && (tegra_fuse_read_spare(18) || 99 + tegra_fuse_read_spare(19))) 99 100 rev = TEGRA_REVISION_A03p; 100 101 else 101 102 rev = TEGRA_REVISION_A03; ··· 109 110 110 111 tegra_sku_info.revision = rev; 111 112 112 - if (chip_id == TEGRA20) 113 - tegra_sku_info.sku_id = tegra20_fuse_early(FUSE_SKU_INFO); 114 - else 115 - tegra_sku_info.sku_id = tegra30_fuse_readl(FUSE_SKU_INFO); 113 + tegra_sku_info.sku_id = tegra_fuse_read_early(FUSE_SKU_INFO); 116 114 117 115 118 116 void __init tegra_init_apbmisc(void) 119 117 { 118 + struct resource apbmisc, straps; 120 119 struct device_node *np; 121 120 122 121 np = of_find_matching_node(NULL, apbmisc_match); 123 - apbmisc_base = of_iomap(np, 0); 124 - if (!apbmisc_base) { 125 - pr_warn("ioremap tegra apbmisc failed. using %08x instead\n", 126 - APBMISC_BASE); 127 - apbmisc_base = ioremap(APBMISC_BASE, APBMISC_SIZE); 122 + if (!np) { 123 + /* 124 + * Fall back to legacy initialization for 32-bit ARM only. All 125 + * 64-bit ARM device tree files for Tegra are required to have 126 + * an APBMISC node. 127 + * 128 + * This is for backwards-compatibility with old device trees 129 + * that didn't contain an APBMISC node. 130 + */ 131 + if (IS_ENABLED(CONFIG_ARM) && soc_is_tegra()) { 132 + /* APBMISC registers (chip revision, ...) */ 133 + apbmisc.start = 0x70000800; 134 + apbmisc.end = 0x70000863; 135 + apbmisc.flags = IORESOURCE_MEM; 136 + 137 + /* strapping options */ 138 + if (tegra_get_chip_id() == TEGRA124) { 139 + straps.start = 0x7000e864; 140 + straps.end = 0x7000e867; 141 + } else { 142 + straps.start = 0x70000008; 143 + straps.end = 0x7000000b; 144 + } 145 + 146 + straps.flags = IORESOURCE_MEM; 147 + 148 + pr_warn("Using APBMISC region %pR\n", &apbmisc); 149 + pr_warn("Using strapping options registers %pR\n", 150 + &straps); 151 + } else { 152 + /* 153 + * At this point we're not running on Tegra, so play 154 + * nice with multi-platform kernels. 155 + */ 156 + return; 157 + } 158 + } else { 159 + /* 160 + * Extract information from the device tree if we've found a 161 + * matching node. 162 + */ 163 + if (of_address_to_resource(np, 0, &apbmisc) < 0) { 164 + pr_err("failed to get APBMISC registers\n"); 165 + return; 166 + } 167 + 168 + if (of_address_to_resource(np, 1, &straps) < 0) { 169 + pr_err("failed to get strapping options registers\n"); 170 + return; 171 + } 128 172 } 129 173 130 - strapping_base = of_iomap(np, 1); 174 + apbmisc_base = ioremap_nocache(apbmisc.start, resource_size(&apbmisc)); 175 + if (!apbmisc_base) 176 + pr_err("failed to map APBMISC registers\n"); 177 + 178 + strapping_base = ioremap_nocache(straps.start, resource_size(&straps)); 131 179 if (!strapping_base) 132 - pr_err("ioremap tegra strapping_base failed\n"); 180 + pr_err("failed to map strapping options registers\n"); 133 181 134 182 long_ram_code = of_property_read_bool(np, "nvidia,long-ram-code"); 135 183 }
+96 -25
drivers/soc/tegra/pmc.c
··· 17 17 * 18 18 */ 19 19 20 + #define pr_fmt(fmt) "tegra-pmc: " fmt 21 + 20 22 #include <linux/kernel.h> 21 23 #include <linux/clk.h> 22 24 #include <linux/clk/tegra.h> ··· 459 457 unsigned long *status, unsigned int *bit) 460 458 { 461 459 unsigned long rate, value; 462 - struct clk *clk; 463 460 464 461 *bit = id % 32; 465 462 ··· 477 476 *request = IO_DPD2_REQ; 478 477 } 479 478 480 - clk = clk_get_sys(NULL, "pclk"); 481 - if (IS_ERR(clk)) 482 - return PTR_ERR(clk); 483 - 484 - rate = clk_get_rate(clk); 485 - clk_put(clk); 479 + rate = clk_get_rate(pmc->clk); 486 480 487 481 tegra_pmc_writel(DPD_SAMPLE_ENABLE, DPD_SAMPLE); 488 482 ··· 531 535 tegra_pmc_writel(value, request); 532 536 533 537 err = tegra_io_rail_poll(status, mask, 0, 250); 534 - if (err < 0) 538 + if (err < 0) { 539 + pr_info("tegra_io_rail_poll() failed: %d\n", err); 535 540 return err; 541 + } 536 542 537 543 tegra_io_rail_unprepare(); 538 544 ··· 549 551 int err; 550 552 551 553 err = tegra_io_rail_prepare(id, &request, &status, &bit); 552 - if (err < 0) 554 + if (err < 0) { 555 + pr_info("tegra_io_rail_prepare() failed: %d\n", err); 553 556 return err; 557 + } 554 558 555 559 mask = 1 << bit; 556 560 ··· 736 736 u32 value, checksum; 737 737 738 738 if (!pmc->soc->has_tsense_reset) 739 - goto out; 739 + return; 740 740 741 741 np = of_find_node_by_name(pmc->dev->of_node, "i2c-thermtrip"); 742 742 if (!np) { 743 743 dev_warn(dev, "i2c-thermtrip node not found, %s.\n", disabled); 744 - goto out; 744 + return; 745 745 } 746 746 747 747 if (of_property_read_u32(np, "nvidia,i2c-controller-id", &ctrl_id)) { ··· 801 801 802 802 out: 803 803 of_node_put(np); 804 - return; 805 804 } 806 805 807 806 static int tegra_pmc_probe(struct platform_device *pdev) ··· 1001 1002 .has_gpu_clamps = true, 1002 1003 }; 1003 1004 1005 + static const char * const tegra210_powergates[] = { 1006 + [TEGRA_POWERGATE_CPU] = "crail", 1007 + [TEGRA_POWERGATE_3D] = "3d", 1008 + [TEGRA_POWERGATE_VENC] = "venc", 1009 + [TEGRA_POWERGATE_PCIE] = "pcie", 1010 + [TEGRA_POWERGATE_L2] = "l2", 1011 + [TEGRA_POWERGATE_MPE] = "mpe", 1012 + [TEGRA_POWERGATE_HEG] = "heg", 1013 + [TEGRA_POWERGATE_SATA] = "sata", 1014 + [TEGRA_POWERGATE_CPU1] = "cpu1", 1015 + [TEGRA_POWERGATE_CPU2] = "cpu2", 1016 + [TEGRA_POWERGATE_CPU3] = "cpu3", 1017 + [TEGRA_POWERGATE_CELP] = "celp", 1018 + [TEGRA_POWERGATE_CPU0] = "cpu0", 1019 + [TEGRA_POWERGATE_C0NC] = "c0nc", 1020 + [TEGRA_POWERGATE_C1NC] = "c1nc", 1021 + [TEGRA_POWERGATE_SOR] = "sor", 1022 + [TEGRA_POWERGATE_DIS] = "dis", 1023 + [TEGRA_POWERGATE_DISB] = "disb", 1024 + [TEGRA_POWERGATE_XUSBA] = "xusba", 1025 + [TEGRA_POWERGATE_XUSBB] = "xusbb", 1026 + [TEGRA_POWERGATE_XUSBC] = "xusbc", 1027 + [TEGRA_POWERGATE_VIC] = "vic", 1028 + [TEGRA_POWERGATE_IRAM] = "iram", 1029 + [TEGRA_POWERGATE_NVDEC] = "nvdec", 1030 + [TEGRA_POWERGATE_NVJPG] = "nvjpg", 1031 + [TEGRA_POWERGATE_AUD] = "aud", 1032 + [TEGRA_POWERGATE_DFD] = "dfd", 1033 + [TEGRA_POWERGATE_VE2] = "ve2", 1034 + }; 1035 + 1036 + static const u8 tegra210_cpu_powergates[] = { 1037 + TEGRA_POWERGATE_CPU0, 1038 + TEGRA_POWERGATE_CPU1, 1039 + TEGRA_POWERGATE_CPU2, 1040 + TEGRA_POWERGATE_CPU3, 1041 + }; 1042 + 1043 + static const struct tegra_pmc_soc tegra210_pmc_soc = { 1044 + .num_powergates = ARRAY_SIZE(tegra210_powergates), 1045 + .powergates = tegra210_powergates, 1046 + .num_cpu_powergates = ARRAY_SIZE(tegra210_cpu_powergates), 1047 + .cpu_powergates = tegra210_cpu_powergates, 1048 + .has_tsense_reset = true, 1049 + .has_gpu_clamps = true, 1050 + }; 1051 + 1004 1052 static const struct of_device_id tegra_pmc_match[] = { 1053 + { .compatible = "nvidia,tegra210-pmc", .data = &tegra210_pmc_soc }, 1054 + { .compatible = "nvidia,tegra132-pmc", .data = &tegra124_pmc_soc }, 1005 1055 { .compatible = "nvidia,tegra124-pmc", .data = &tegra124_pmc_soc }, 1006 1056 { .compatible = "nvidia,tegra114-pmc", .data = &tegra114_pmc_soc }, 1007 1057 { .compatible = "nvidia,tegra30-pmc", .data = &tegra30_pmc_soc }, ···
1083 1035 bool invert; 1084 1036 u32 value; 1085 1037 1086 - if (!soc_is_tegra()) 1087 - return 0; 1088 - 1089 1038 np = of_find_matching_node_and_match(NULL, tegra_pmc_match, &match); 1090 1039 if (!np) { 1091 - pr_warn("PMC device node not found, disabling powergating\n"); 1040 + /* 1041 + * Fall back to legacy initialization for 32-bit ARM only. All 1042 + * 64-bit ARM device tree files for Tegra are required to have 1043 + * a PMC node. 1044 + * 1045 + * This is for backwards-compatibility with old device trees 1046 + * that didn't contain a PMC node. Note that in this case the 1047 + * SoC data can't be matched and therefore powergating is 1048 + * disabled. 1049 + */ 1050 + if (IS_ENABLED(CONFIG_ARM) && soc_is_tegra()) { 1051 + pr_warn("DT node not found, powergating disabled\n"); 1092 1052 1093 - regs.start = 0x7000e400; 1094 - regs.end = 0x7000e7ff; 1095 - regs.flags = IORESOURCE_MEM; 1053 + regs.start = 0x7000e400; 1054 + regs.end = 0x7000e7ff; 1055 + regs.flags = IORESOURCE_MEM; 1096 1056 1097 - pr_warn("Using memory region %pR\n", &regs); 1057 + pr_warn("Using memory region %pR\n", &regs); 1058 + } else { 1059 + /* 1060 + * At this point we're not running on Tegra, so play 1061 + * nice with multi-platform kernels. 1062 + */ 1063 + return 0; 1064 + } 1098 1065 } else { 1099 - pmc->soc = match->data; 1100 - } 1066 + /* 1067 + * Extract information from the device tree if we've found a 1068 + * matching node. 1069 + */ 1070 + if (of_address_to_resource(np, 0, &regs) < 0) { 1071 + pr_err("failed to get PMC registers\n"); 1072 + return -ENXIO; 1073 + } 1101 1074 1102 - if (of_address_to_resource(np, 0, &regs) < 0) { 1103 - pr_err("failed to get PMC registers\n"); 1104 - return -ENXIO; 1075 + pmc->soc = match->data; 1105 1076 } 1106 1077 1107 1078 pmc->base = ioremap_nocache(regs.start, resource_size(&regs)); ··· 1131 1064 1132 1065 mutex_init(&pmc->powergates_lock); 1133 1066 1067 + /* 1068 + * Invert the interrupt polarity if a PMC device tree node exists and 1069 + * contains the nvidia,invert-interrupt property. 1070 + */ 1134 1071 invert = of_property_read_bool(np, "nvidia,invert-interrupt"); 1135 1072 1136 1073 value = tegra_pmc_readl(PMC_CNTRL);
+36
include/dt-bindings/memory/tegra210-mc.h
··· 1 + #ifndef DT_BINDINGS_MEMORY_TEGRA210_MC_H 2 + #define DT_BINDINGS_MEMORY_TEGRA210_MC_H 3 + 4 + #define TEGRA_SWGROUP_PTC 0 5 + #define TEGRA_SWGROUP_DC 1 6 + #define TEGRA_SWGROUP_DCB 2 7 + #define TEGRA_SWGROUP_AFI 3 8 + #define TEGRA_SWGROUP_AVPC 4 9 + #define TEGRA_SWGROUP_HDA 5 10 + #define TEGRA_SWGROUP_HC 6 11 + #define TEGRA_SWGROUP_NVENC 7 12 + #define TEGRA_SWGROUP_PPCS 8 13 + #define TEGRA_SWGROUP_SATA 9 14 + #define TEGRA_SWGROUP_MPCORE 10 15 + #define TEGRA_SWGROUP_ISP2 11 16 + #define TEGRA_SWGROUP_XUSB_HOST 12 17 + #define TEGRA_SWGROUP_XUSB_DEV 13 18 + #define TEGRA_SWGROUP_ISP2B 14 19 + #define TEGRA_SWGROUP_TSEC 15 20 + #define TEGRA_SWGROUP_A9AVP 16 21 + #define TEGRA_SWGROUP_GPU 17 22 + #define TEGRA_SWGROUP_SDMMC1A 18 23 + #define TEGRA_SWGROUP_SDMMC2A 19 24 + #define TEGRA_SWGROUP_SDMMC3A 20 25 + #define TEGRA_SWGROUP_SDMMC4A 21 26 + #define TEGRA_SWGROUP_VIC 22 27 + #define TEGRA_SWGROUP_VI 23 28 + #define TEGRA_SWGROUP_NVDEC 24 29 + #define TEGRA_SWGROUP_APE 25 30 + #define TEGRA_SWGROUP_NVJPG 26 31 + #define TEGRA_SWGROUP_SE 27 32 + #define TEGRA_SWGROUP_AXIAP 28 33 + #define TEGRA_SWGROUP_ETR 29 34 + #define TEGRA_SWGROUP_TSECB 30 35 + 36 + #endif
include/dt-bindings/reset-controller/stih407-resets.h include/dt-bindings/reset/stih407-resets.h
include/dt-bindings/reset-controller/stih415-resets.h include/dt-bindings/reset/stih415-resets.h
include/dt-bindings/reset-controller/stih416-resets.h include/dt-bindings/reset/stih416-resets.h
+12
include/linux/clk/shmobile.h
··· 16 16 17 17 #include <linux/types.h> 18 18 19 + struct device; 20 + struct device_node; 21 + struct generic_pm_domain; 22 + 19 23 void r8a7778_clocks_init(u32 mode); 20 24 void r8a7779_clocks_init(u32 mode); 21 25 void rcar_gen2_clocks_init(u32 mode); 26 + 27 + #ifdef CONFIG_PM_GENERIC_DOMAINS_OF 28 + void cpg_mstp_add_clk_domain(struct device_node *np); 29 + int cpg_mstp_attach_dev(struct generic_pm_domain *domain, struct device *dev); 30 + void cpg_mstp_detach_dev(struct generic_pm_domain *domain, struct device *dev); 31 + #else 32 + static inline void cpg_mstp_add_clk_domain(struct device_node *np) {} 33 + #endif 22 34 23 35 #endif
+6
include/linux/soc/dove/pmu.h
··· 1 + #ifndef LINUX_SOC_DOVE_PMU_H 2 + #define LINUX_SOC_DOVE_PMU_H 3 + 4 + int dove_init_pmu(void); 5 + 6 + #endif
+35
include/linux/soc/qcom/smd-rpm.h
··· 1 + #ifndef __QCOM_SMD_RPM_H__ 2 + #define __QCOM_SMD_RPM_H__ 3 + 4 + struct qcom_smd_rpm; 5 + 6 + #define QCOM_SMD_RPM_ACTIVE_STATE 0 7 + #define QCOM_SMD_RPM_SLEEP_STATE 1 8 + 9 + /* 10 + * Constants used for addressing resources in the RPM. 11 + */ 12 + #define QCOM_SMD_RPM_BOOST 0x61747362 13 + #define QCOM_SMD_RPM_BUS_CLK 0x316b6c63 14 + #define QCOM_SMD_RPM_BUS_MASTER 0x73616d62 15 + #define QCOM_SMD_RPM_BUS_SLAVE 0x766c7362 16 + #define QCOM_SMD_RPM_CLK_BUF_A 0x616B6C63 17 + #define QCOM_SMD_RPM_LDOA 0x616f646c 18 + #define QCOM_SMD_RPM_LDOB 0x626F646C 19 + #define QCOM_SMD_RPM_MEM_CLK 0x326b6c63 20 + #define QCOM_SMD_RPM_MISC_CLK 0x306b6c63 21 + #define QCOM_SMD_RPM_NCPA 0x6170636E 22 + #define QCOM_SMD_RPM_NCPB 0x6270636E 23 + #define QCOM_SMD_RPM_OCMEM_PWR 0x706d636f 24 + #define QCOM_SMD_RPM_QPIC_CLK 0x63697071 25 + #define QCOM_SMD_RPM_SMPA 0x61706d73 26 + #define QCOM_SMD_RPM_SMPB 0x62706d73 27 + #define QCOM_SMD_RPM_SPDM 0x63707362 28 + #define QCOM_SMD_RPM_VSA 0x00617376 29 + 30 + int qcom_rpm_smd_write(struct qcom_smd_rpm *rpm, 31 + int state, 32 + u32 resource_type, u32 resource_id, 33 + void *buf, size_t count); 34 + 35 + #endif
+46
include/linux/soc/qcom/smd.h
··· 1 + #ifndef __QCOM_SMD_H__ 2 + #define __QCOM_SMD_H__ 3 + 4 + #include <linux/device.h> 5 + #include <linux/mod_devicetable.h> 6 + 7 + struct qcom_smd; 8 + struct qcom_smd_channel; 9 + struct qcom_smd_lookup; 10 + 11 + /** 12 + * struct qcom_smd_device - smd device struct 13 + * @dev: the device struct 14 + * @channel: handle to the smd channel for this device 15 + */ 16 + struct qcom_smd_device { 17 + struct device dev; 18 + struct qcom_smd_channel *channel; 19 + }; 20 + 21 + /** 22 + * struct qcom_smd_driver - smd driver struct 23 + * @driver: underlying device driver 24 + * @probe: invoked when the smd channel is found 25 + * @remove: invoked when the smd channel is closed 26 + * @callback: invoked when an inbound message is received on the channel, 27 + * should return 0 on success or -EBUSY if the data cannot be 28 + * consumed at this time 29 + */ 30 + struct qcom_smd_driver { 31 + struct device_driver driver; 32 + int (*probe)(struct qcom_smd_device *dev); 33 + void (*remove)(struct qcom_smd_device *dev); 34 + int (*callback)(struct qcom_smd_device *, const void *, size_t); 35 + }; 36 + 37 + int qcom_smd_driver_register(struct qcom_smd_driver *drv); 38 + void qcom_smd_driver_unregister(struct qcom_smd_driver *drv); 39 + 40 + #define module_qcom_smd_driver(__smd_driver) \ 41 + module_driver(__smd_driver, qcom_smd_driver_register, \ 42 + qcom_smd_driver_unregister) 43 + 44 + int qcom_smd_send(struct qcom_smd_channel *channel, const void *data, int len); 45 + 46 + #endif
+11
include/linux/soc/qcom/smem.h
··· 1 + #ifndef __QCOM_SMEM_H__ 2 + #define __QCOM_SMEM_H__ 3 + 4 + #define QCOM_SMEM_HOST_ANY -1 5 + 6 + int qcom_smem_alloc(unsigned host, unsigned item, size_t size); 7 + int qcom_smem_get(unsigned host, unsigned item, void **ptr, size_t *size); 8 + 9 + int qcom_smem_get_free_space(unsigned host); 10 + 11 + #endif
+4 -2
include/soc/tegra/fuse.h
··· 22 22 #define TEGRA114 0x35 23 23 #define TEGRA124 0x40 24 24 #define TEGRA132 0x13 25 + #define TEGRA210 0x21 25 26 26 27 #define TEGRA_FUSE_SKU_CALIB_0 0xf0 27 28 #define TEGRA30_FUSE_SATA_CALIB 0x124 ··· 48 47 int cpu_speedo_id; 49 48 int cpu_speedo_value; 50 49 int cpu_iddq_value; 51 - int core_process_id; 50 + int soc_process_id; 52 51 int soc_speedo_id; 53 - int gpu_speedo_id; 52 + int soc_speedo_value; 54 53 int gpu_process_id; 54 + int gpu_speedo_id; 55 55 int gpu_speedo_value; 56 56 enum tegra_revision revision; 57 57 };
+2
include/soc/tegra/mc.h
··· 102 102 unsigned int num_address_bits; 103 103 unsigned int atom_size; 104 104 105 + u8 client_id_mask; 106 + 105 107 const struct tegra_smmu_soc *smmu; 106 108 }; 107 109
+5
include/soc/tegra/pmc.h
··· 67 67 #define TEGRA_POWERGATE_XUSBC 22 68 68 #define TEGRA_POWERGATE_VIC 23 69 69 #define TEGRA_POWERGATE_IRAM 24 70 + #define TEGRA_POWERGATE_NVDEC 25 71 + #define TEGRA_POWERGATE_NVJPG 26 72 + #define TEGRA_POWERGATE_AUD 27 73 + #define TEGRA_POWERGATE_DFD 28 74 + #define TEGRA_POWERGATE_VE2 29 70 75 71 76 #define TEGRA_POWERGATE_3D0 TEGRA_POWERGATE_3D 72 77