
Merge tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc

Pull ARM SoC driver updates from Arnd Bergmann:
"Driver updates for ARM SoCs, including a couple of newly added
drivers:

- The Qualcomm external bus interface 2 (EBI2), used in some of their
mobile phone chips for connecting flash memory, LCD displays or
other peripherals

- Secure monitor firmware for Amlogic SoCs, and an NVMEM driver for
the EFUSE based on that firmware interface.

- Perf support for the AppliedMicro X-Gene performance monitor unit

- Reset driver for STMicroelectronics STM32

- Reset driver for SocioNext UniPhier SoCs

Aside from these, there are minor updates to SoC-specific bus,
clocksource, firmware, pinctrl, reset, rtc and pmic drivers"

* tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (50 commits)
bus: qcom-ebi2: depend on HAS_IOMEM
pinctrl: mvebu: orion5x: Generalise mv88f5181l support for 88f5181
clk: mvebu: Add clk support for the orion5x SoC mv88f5181
dt-bindings: EXYNOS: Add Exynos5433 PMU compatible
clocksource: exynos_mct: Add the support for ARM64
perf: xgene: Add APM X-Gene SoC Performance Monitoring Unit driver
Documentation: Add documentation for APM X-Gene SoC PMU DTS binding
MAINTAINERS: Add entry for APM X-Gene SoC PMU driver
bus: qcom: add EBI2 driver
bus: qcom: add EBI2 device tree bindings
rtc: rtc-pm8xxx: Add support for pm8018 rtc
nvmem: amlogic: Add Amlogic Meson EFUSE driver
firmware: Amlogic: Add secure monitor driver
soc: qcom: smd: Reset rx tail rather than tx
memory: atmel-sdramc: fix a possible NULL dereference
reset: hi6220: allow to compile test driver on other architectures
reset: zynq: add driver Kconfig option
reset: sunxi: add driver Kconfig option
reset: stm32: add driver Kconfig option
reset: socfpga: add driver Kconfig option
...

+3790 -291
+1
Documentation/devicetree/bindings/arm/samsung/pmu.txt
··· 10 10 - "samsung,exynos5260-pmu" - for Exynos5260 SoC. 11 11 - "samsung,exynos5410-pmu" - for Exynos5410 SoC, 12 12 - "samsung,exynos5420-pmu" - for Exynos5420 SoC. 13 + - "samsung,exynos5433-pmu" - for Exynos5433 SoC. 13 14 - "samsung,exynos7-pmu" - for Exynos7 SoC. 14 15 second value must be always "syscon". 15 16
+138
Documentation/devicetree/bindings/bus/qcom,ebi2.txt
··· 1 + Qualcomm External Bus Interface 2 (EBI2) 2 + 3 + The EBI2 contains two peripheral blocks: XMEM and LCDC. The XMEM handles any 4 + external memory (such as NAND or other memory-mapped peripherals) whereas 5 + LCDC handles LCD displays. 6 + 7 + As it says it connects devices to an external bus interface, meaning address 8 + lines (up to 9 address lines so can only address 1KiB external memory space), 9 + data lines (16 bits), OE (output enable), ADV (address valid, used on some 10 + NOR flash memories), WE (write enable). This on top of 6 different chip selects 11 + (CS0 thru CS5) so that in theory 6 different devices can be connected. 12 + 13 + Apparently this bus is clocked at 64MHz. It has dedicated pins on the package 14 + and the bus can only come out on these pins, however if some of the pins are 15 + unused they can be left unconnected or remuxed to be used as GPIO or in some 16 + cases other orthogonal functions as well. 17 + 18 + Also CS1 and CS2 have -A and -B signals. Why they have that is unclear to me. 19 + 20 + The chip selects have the following memory range assignments. This region of 21 + memory is referred to as "Chip Peripheral SS FPB0" and is 168MB big. 22 + 23 + Chip Select Physical address base 24 + CS0 GPIO134 0x1a800000-0x1b000000 (8MB) 25 + CS1 GPIO39 (A) / GPIO123 (B) 0x1b000000-0x1b800000 (8MB) 26 + CS2 GPIO40 (A) / GPIO124 (B) 0x1b800000-0x1c000000 (8MB) 27 + CS3 GPIO133 0x1d000000-0x25000000 (128 MB) 28 + CS4 GPIO132 0x1c800000-0x1d000000 (8MB) 29 + CS5 GPIO131 0x1c000000-0x1c800000 (8MB) 30 + 31 + The APQ8060 Qualcomm Application Processor User Guide, 80-N7150-14 Rev. A, 32 + August 6, 2012 contains some incomplete documentation of the EBI2. 33 + 34 + FIXME: the manual mentions "write precharge cycles" and "precharge cycles". 35 + We have not been able to figure out which bit fields these correspond to 36 + in the hardware, or what valid values exist. 
The current hypothesis is that 37 + this is something just used on the FAST chip selects and that the SLOW 38 + chip selects are understood fully. There is also a "byte device enable" 39 + flag somewhere for 8bit memories. 40 + 41 + FIXME: The chipselects have SLOW and FAST configuration registers. It's a bit 42 + unclear what this means, if they are mutually exclusive or can be used 43 + together, or if some chip selects are hardwired to be FAST and others are SLOW 44 + by design. 45 + 46 + The XMEM registers are totally undocumented but could be partially decoded 47 + because the Cypress AN49576 Antioch Westbridge apparently has suspiciously 48 + similar register layout, see: http://www.cypress.com/file/105771/download 49 + 50 + Required properties: 51 + - compatible: should be one of: 52 + "qcom,msm8660-ebi2" 53 + "qcom,apq8060-ebi2" 54 + - #address-cells: should be <2>: the first cell is the chipselect, 55 + the second cell is the offset inside the memory range 56 + - #size-cells: should be <1> 57 + - ranges: should be set to: 58 + ranges = <0 0x0 0x1a800000 0x00800000>, 59 + <1 0x0 0x1b000000 0x00800000>, 60 + <2 0x0 0x1b800000 0x00800000>, 61 + <3 0x0 0x1d000000 0x08000000>, 62 + <4 0x0 0x1c800000 0x00800000>, 63 + <5 0x0 0x1c000000 0x00800000>; 64 + - reg: two ranges of registers: EBI2 config and XMEM config areas 65 + - reg-names: should be "ebi2", "xmem" 66 + - clocks: two clocks, EBI_2X and EBI 67 + - clock-names: should be "ebi2x", "ebi2" 68 + 69 + Optional subnodes: 70 + - Nodes inside the EBI2 will be considered device nodes. 71 + 72 + The following optional properties are properties that can be tagged onto 73 + any device subnode. We are assuming that there can be only ONE device per 74 + chipselect subnode, else the properties will become ambiguous. 
75 + 76 + Optional properties arrays for SLOW chip selects: 77 + - qcom,xmem-recovery-cycles: recovery cycles is the time the memory continues to 78 + drive the data bus after OE is de-asserted, in order to avoid contention on 79 + the data bus. They are inserted when reading one CS and switching to another 80 + CS or read followed by write on the same CS. Valid values 0 thru 15. Minimum 81 + value is actually 1, so a value of 0 will still yield 1 recovery cycle. 82 + - qcom,xmem-write-hold-cycles: write hold cycles, these are extra cycles 83 + inserted after every write minimum 1. The data out is driven from the time 84 + WE is asserted until CS is asserted. With a hold of 1 (value = 0), the CS 85 + stays active for 1 extra cycle etc. Valid values 0 thru 15. 86 + - qcom,xmem-write-delta-cycles: initial latency for write cycles inserted for 87 + the first write to a page or burst memory. Valid values 0 thru 255. 88 + - qcom,xmem-read-delta-cycles: initial latency for read cycles inserted for the 89 + first read to a page or burst memory. Valid values 0 thru 255. 90 + - qcom,xmem-write-wait-cycles: number of wait cycles for every write access, 0=1 91 + cycle. Valid values 0 thru 15. 92 + - qcom,xmem-read-wait-cycles: number of wait cycles for every read access, 0=1 93 + cycle. Valid values 0 thru 15. 94 + 95 + Optional properties arrays for FAST chip selects: 96 + - qcom,xmem-address-hold-enable: this is a boolean property stating that we 97 + shall hold the address for an extra cycle to meet hold time requirements 98 + with ADV assertion. 99 + - qcom,xmem-adv-to-oe-recovery-cycles: the number of cycles elapsed before an OE 100 + assertion, with respect to the cycle where ADV (address valid) is asserted. 101 + 2 means 2 cycles between ADV and OE. Valid values 0, 1, 2 or 3. 102 + - qcom,xmem-read-hold-cycles: the length in cycles of the first segment of a 103 + read transfer. For a single read transfer this will be the time from CS 104 + assertion to OE assertion. 
Valid values 0 thru 15. 105 + 106 + 107 + Example: 108 + 109 + ebi2@1a100000 { 110 + compatible = "qcom,apq8060-ebi2"; 111 + #address-cells = <2>; 112 + #size-cells = <1>; 113 + ranges = <0 0x0 0x1a800000 0x00800000>, 114 + <1 0x0 0x1b000000 0x00800000>, 115 + <2 0x0 0x1b800000 0x00800000>, 116 + <3 0x0 0x1d000000 0x08000000>, 117 + <4 0x0 0x1c800000 0x00800000>, 118 + <5 0x0 0x1c000000 0x00800000>; 119 + reg = <0x1a100000 0x1000>, <0x1a110000 0x1000>; 120 + reg-names = "ebi2", "xmem"; 121 + clocks = <&gcc EBI2_2X_CLK>, <&gcc EBI2_CLK>; 122 + clock-names = "ebi2x", "ebi2"; 123 + /* Make sure to set up the pin control for the EBI2 */ 124 + pinctrl-names = "default"; 125 + pinctrl-0 = <&foo_ebi2_pins>; 126 + 127 + foo-ebi2@2,0 { 128 + compatible = "foo"; 129 + reg = <2 0x0 0x100>; 130 + (...) 131 + qcom,xmem-recovery-cycles = <0>; 132 + qcom,xmem-write-hold-cycles = <3>; 133 + qcom,xmem-write-delta-cycles = <31>; 134 + qcom,xmem-read-delta-cycles = <28>; 135 + qcom,xmem-write-wait-cycles = <9>; 136 + qcom,xmem-read-wait-cycles = <9>; 137 + }; 138 + };
+1
Documentation/devicetree/bindings/clock/mvebu-core-clock.txt
··· 52 52 "marvell,dove-core-clock" - for Dove SoC core clocks 53 53 "marvell,kirkwood-core-clock" - for Kirkwood SoC (except mv88f6180) 54 54 "marvell,mv88f6180-core-clock" - for Kirkwood MV88f6180 SoC 55 + "marvell,mv88f5181-core-clock" - for Orion MV88F5181 SoC 55 56 "marvell,mv88f5182-core-clock" - for Orion MV88F5182 SoC 56 57 "marvell,mv88f5281-core-clock" - for Orion MV88F5281 SoC 57 58 "marvell,mv88f6183-core-clock" - for Orion MV88F6183 SoC
+35 -7
Documentation/devicetree/bindings/clock/st,stm32-rcc.txt
··· 1 1 STMicroelectronics STM32 Reset and Clock Controller 2 2 =================================================== 3 3 4 - The RCC IP is both a reset and a clock controller. This documentation only 5 - describes the clock part. 4 + The RCC IP is both a reset and a clock controller. 6 5 7 - Please also refer to clock-bindings.txt in this directory for common clock 8 - controller binding usage. 6 + Please refer to clock-bindings.txt for common clock controller binding usage. 7 + Please also refer to reset.txt for common reset controller binding usage. 9 8 10 9 Required properties: 11 10 - compatible: Should be "st,stm32f42xx-rcc" 12 11 - reg: should be register base and length as documented in the 13 12 datasheet 13 + - #reset-cells: 1, see below 14 14 - #clock-cells: 2, device nodes should specify the clock in their "clocks" 15 15 property, containing a phandle to the clock device node, an index selecting 16 16 between gated clocks and other clocks and an index specifying the clock to ··· 19 19 Example: 20 20 21 21 rcc: rcc@40023800 { 22 + #reset-cells = <1>; 22 23 #clock-cells = <2> 23 24 compatible = "st,stm32f42xx-rcc", "st,stm32-rcc"; 24 25 reg = <0x40023800 0x400>; ··· 36 35 It is calculated as: index = register_offset / 4 * 32 + bit_offset. 37 36 Where bit_offset is the bit offset within the register (LSB is 0, MSB is 31). 38 37 38 + To simplify the usage and to share bit definitions with the reset and clock 39 + drivers of the RCC IP, macros are available to generate the index in 40 + human-readable format. 41 + 42 + For the STM32F4 series, the macros are available here: 43 + - include/dt-bindings/mfd/stm32f4-rcc.h 44 + 39 45 Example: 40 46 41 47 /* Gated clock, AHB1 bit 0 (GPIOA) */ 42 48 ... { 43 - clocks = <&rcc 0 0> 49 + clocks = <&rcc 0 STM32F4_AHB1_CLOCK(GPIOA)> 44 50 }; 45 51 46 52 /* Gated clock, AHB2 bit 4 (CRYP) */ 47 53 ... 
{ 48 - clocks = <&rcc 0 36> 54 + clocks = <&rcc 0 STM32F4_AHB2_CLOCK(CRYP)> 49 55 }; 50 56 51 57 Specifying other clocks ··· 69 61 70 62 /* Misc clock, FCLK */ 71 63 ... { 72 - clocks = <&rcc 1 1> 64 + clocks = <&rcc 1 STM32F4_APB1_CLOCK(TIM2)> 65 + }; 66 + 67 + 68 + Specifying softreset control of devices 69 + ======================================= 70 + 71 + Device nodes should specify the reset channel required in their "resets" 72 + property, containing a phandle to the reset device node and an index specifying 73 + which channel to use. 74 + The index is the bit number within the RCC registers bank, starting from RCC 75 + base address. 76 + It is calculated as: index = register_offset / 4 * 32 + bit_offset. 77 + Where bit_offset is the bit offset within the register. 78 + For example, for CRC reset: 79 + crc = AHB1RSTR_offset / 4 * 32 + CRCRST_bit_offset = 0x10 / 4 * 32 + 12 = 140 80 + 81 + example: 82 + 83 + timer2 { 84 + resets = <&rcc STM32F4_APB1_RESET(TIM2)>; 73 85 };
+112
Documentation/devicetree/bindings/perf/apm-xgene-pmu.txt
··· 1 + * APM X-Gene SoC PMU bindings 2 + 3 + This is the APM X-Gene SoC PMU (Performance Monitoring Unit) module. 4 + The following PMU devices are supported: 5 + 6 + L3C - L3 cache controller 7 + IOB - IO bridge 8 + MCB - Memory controller bridge 9 + MC - Memory controller 10 + 11 + The following section describes the SoC PMU DT node binding. 12 + 13 + Required properties: 14 + - compatible : Shall be "apm,xgene-pmu" for revision 1 or 15 + "apm,xgene-pmu-v2" for revision 2. 16 + - regmap-csw : Regmap of the CPU switch fabric (CSW) resource. 17 + - regmap-mcba : Regmap of the MCB-A (memory bridge) resource. 18 + - regmap-mcbb : Regmap of the MCB-B (memory bridge) resource. 19 + - reg : First resource shall be the CPU bus PMU resource. 20 + - interrupts : Interrupt-specifier for PMU IRQ. 21 + 22 + Required properties for L3C subnode: 23 + - compatible : Shall be "apm,xgene-pmu-l3c". 24 + - reg : First resource shall be the L3C PMU resource. 25 + 26 + Required properties for IOB subnode: 27 + - compatible : Shall be "apm,xgene-pmu-iob". 28 + - reg : First resource shall be the IOB PMU resource. 29 + 30 + Required properties for MCB subnode: 31 + - compatible : Shall be "apm,xgene-pmu-mcb". 32 + - reg : First resource shall be the MCB PMU resource. 33 + - enable-bit-index : The bit that indicates whether the corresponding MCB is enabled. 34 + 35 + Required properties for MC subnode: 36 + - compatible : Shall be "apm,xgene-pmu-mc". 37 + - reg : First resource shall be the MC PMU resource. 38 + - enable-bit-index : The bit that indicates whether the corresponding MC is enabled. 
39 + 40 + Example: 41 + csw: csw@7e200000 { 42 + compatible = "apm,xgene-csw", "syscon"; 43 + reg = <0x0 0x7e200000 0x0 0x1000>; 44 + }; 45 + 46 + mcba: mcba@7e700000 { 47 + compatible = "apm,xgene-mcb", "syscon"; 48 + reg = <0x0 0x7e700000 0x0 0x1000>; 49 + }; 50 + 51 + mcbb: mcbb@7e720000 { 52 + compatible = "apm,xgene-mcb", "syscon"; 53 + reg = <0x0 0x7e720000 0x0 0x1000>; 54 + }; 55 + 56 + pmu: pmu@78810000 { 57 + compatible = "apm,xgene-pmu-v2"; 58 + #address-cells = <2>; 59 + #size-cells = <2>; 60 + ranges; 61 + regmap-csw = <&csw>; 62 + regmap-mcba = <&mcba>; 63 + regmap-mcbb = <&mcbb>; 64 + reg = <0x0 0x78810000 0x0 0x1000>; 65 + interrupts = <0x0 0x22 0x4>; 66 + 67 + pmul3c@7e610000 { 68 + compatible = "apm,xgene-pmu-l3c"; 69 + reg = <0x0 0x7e610000 0x0 0x1000>; 70 + }; 71 + 72 + pmuiob@7e940000 { 73 + compatible = "apm,xgene-pmu-iob"; 74 + reg = <0x0 0x7e940000 0x0 0x1000>; 75 + }; 76 + 77 + pmucmcb@7e710000 { 78 + compatible = "apm,xgene-pmu-mcb"; 79 + reg = <0x0 0x7e710000 0x0 0x1000>; 80 + enable-bit-index = <0>; 81 + }; 82 + 83 + pmucmcb@7e730000 { 84 + compatible = "apm,xgene-pmu-mcb"; 85 + reg = <0x0 0x7e730000 0x0 0x1000>; 86 + enable-bit-index = <1>; 87 + }; 88 + 89 + pmucmc@7e810000 { 90 + compatible = "apm,xgene-pmu-mc"; 91 + reg = <0x0 0x7e810000 0x0 0x1000>; 92 + enable-bit-index = <0>; 93 + }; 94 + 95 + pmucmc@7e850000 { 96 + compatible = "apm,xgene-pmu-mc"; 97 + reg = <0x0 0x7e850000 0x0 0x1000>; 98 + enable-bit-index = <1>; 99 + }; 100 + 101 + pmucmc@7e890000 { 102 + compatible = "apm,xgene-pmu-mc"; 103 + reg = <0x0 0x7e890000 0x0 0x1000>; 104 + enable-bit-index = <2>; 105 + }; 106 + 107 + pmucmc@7e8d0000 { 108 + compatible = "apm,xgene-pmu-mc"; 109 + reg = <0x0 0x7e8d0000 0x0 0x1000>; 110 + enable-bit-index = <3>; 111 + }; 112 + };
+3 -1
Documentation/devicetree/bindings/pinctrl/marvell,orion-pinctrl.txt
··· 4 4 part and usage. 5 5 6 6 Required properties: 7 - - compatible: "marvell,88f5181l-pinctrl", "marvell,88f5182-pinctrl", 7 + - compatible: "marvell,88f5181-pinctrl", 8 + "marvell,88f5181l-pinctrl", 9 + "marvell,88f5182-pinctrl", 8 10 "marvell,88f5281-pinctrl" 9 11 10 12 - reg: two register areas, the first one describing the first two
+6
Documentation/devicetree/bindings/reset/st,stm32-rcc.txt
··· 1 + STMicroelectronics STM32 Peripheral Reset Controller 2 + ==================================================== 3 + 4 + The RCC IP is both a reset and a clock controller. 5 + 6 + Please see Documentation/devicetree/bindings/clock/st,stm32-rcc.txt
+93
Documentation/devicetree/bindings/reset/uniphier-reset.txt
··· 1 + UniPhier reset controller 2 + 3 + 4 + System reset 5 + ------------ 6 + 7 + Required properties: 8 + - compatible: should be one of the following: 9 + "socionext,uniphier-sld3-reset" - for PH1-sLD3 SoC. 10 + "socionext,uniphier-ld4-reset" - for PH1-LD4 SoC. 11 + "socionext,uniphier-pro4-reset" - for PH1-Pro4 SoC. 12 + "socionext,uniphier-sld8-reset" - for PH1-sLD8 SoC. 13 + "socionext,uniphier-pro5-reset" - for PH1-Pro5 SoC. 14 + "socionext,uniphier-pxs2-reset" - for ProXstream2/PH1-LD6b SoC. 15 + "socionext,uniphier-ld11-reset" - for PH1-LD11 SoC. 16 + "socionext,uniphier-ld20-reset" - for PH1-LD20 SoC. 17 + - #reset-cells: should be 1. 18 + 19 + Example: 20 + 21 + sysctrl@61840000 { 22 + compatible = "socionext,uniphier-ld20-sysctrl", 23 + "simple-mfd", "syscon"; 24 + reg = <0x61840000 0x4000>; 25 + 26 + reset { 27 + compatible = "socionext,uniphier-ld20-reset"; 28 + #reset-cells = <1>; 29 + }; 30 + 31 + other nodes ... 32 + }; 33 + 34 + 35 + Media I/O (MIO) reset 36 + --------------------- 37 + 38 + Required properties: 39 + - compatible: should be one of the following: 40 + "socionext,uniphier-sld3-mio-reset" - for PH1-sLD3 SoC. 41 + "socionext,uniphier-ld4-mio-reset" - for PH1-LD4 SoC. 42 + "socionext,uniphier-pro4-mio-reset" - for PH1-Pro4 SoC. 43 + "socionext,uniphier-sld8-mio-reset" - for PH1-sLD8 SoC. 44 + "socionext,uniphier-pro5-mio-reset" - for PH1-Pro5 SoC. 45 + "socionext,uniphier-pxs2-mio-reset" - for ProXstream2/PH1-LD6b SoC. 46 + "socionext,uniphier-ld11-mio-reset" - for PH1-LD11 SoC. 47 + "socionext,uniphier-ld20-mio-reset" - for PH1-LD20 SoC. 48 + - #reset-cells: should be 1. 49 + 50 + Example: 51 + 52 + mioctrl@59810000 { 53 + compatible = "socionext,uniphier-ld20-mioctrl", 54 + "simple-mfd", "syscon"; 55 + reg = <0x59810000 0x800>; 56 + 57 + reset { 58 + compatible = "socionext,uniphier-ld20-mio-reset"; 59 + #reset-cells = <1>; 60 + }; 61 + 62 + other nodes ... 
63 + }; 64 + 65 + 66 + Peripheral reset 67 + ---------------- 68 + 69 + Required properties: 70 + - compatible: should be one of the following: 71 + "socionext,uniphier-ld4-peri-reset" - for PH1-LD4 SoC. 72 + "socionext,uniphier-pro4-peri-reset" - for PH1-Pro4 SoC. 73 + "socionext,uniphier-sld8-peri-reset" - for PH1-sLD8 SoC. 74 + "socionext,uniphier-pro5-peri-reset" - for PH1-Pro5 SoC. 75 + "socionext,uniphier-pxs2-peri-reset" - for ProXstream2/PH1-LD6b SoC. 76 + "socionext,uniphier-ld11-peri-reset" - for PH1-LD11 SoC. 77 + "socionext,uniphier-ld20-peri-reset" - for PH1-LD20 SoC. 78 + - #reset-cells: should be 1. 79 + 80 + Example: 81 + 82 + perictrl@59820000 { 83 + compatible = "socionext,uniphier-ld20-perictrl", 84 + "simple-mfd", "syscon"; 85 + reg = <0x59820000 0x200>; 86 + 87 + reset { 88 + compatible = "socionext,uniphier-ld20-peri-reset"; 89 + #reset-cells = <1>; 90 + }; 91 + 92 + other nodes ... 93 + };
+48
Documentation/perf/xgene-pmu.txt
··· 1 + APM X-Gene SoC Performance Monitoring Unit (PMU) 2 + ================================================ 3 + 4 + X-Gene SoC PMU consists of various independent system device PMUs such as 5 + L3 cache(s), I/O bridge(s), memory controller bridge(s) and memory 6 + controller(s). These PMU devices are loosely architected to follow the 7 + same model as the PMU for ARM cores. The PMUs share the same top level 8 + interrupt and status CSR region. 9 + 10 + PMU (perf) driver 11 + ----------------- 12 + 13 + The xgene-pmu driver registers several perf PMU drivers. Each of the perf 14 + drivers provides a description of its available events and configuration options 15 + in sysfs, see /sys/devices/<l3cX/iobX/mcbX/mcX>/. 16 + 17 + The "format" directory describes the format of the config (event ID) and 18 + config1 (agent ID) fields of the perf_event_attr structure. The "events" 19 + directory provides configuration templates for all supported event types that 20 + can be used with the perf tool. For example, "l3c0/bank-fifo-full/" is 21 + equivalent to "l3c0/config=0x0b/". 22 + 23 + Most of the SoC PMUs have a specific list of agent IDs used for monitoring 24 + performance of a specific datapath. For example, agents of an L3 cache can be 25 + a specific CPU or an I/O bridge. Each PMU has a set of 2 registers capable of 26 + masking the agents from which requests come. If the bit with 27 + the bit number corresponding to the agent is set, the event is counted only if 28 + it is caused by a request from that agent. Each agent ID bit is inversely mapped 29 + to a corresponding bit in the "config1" field. By default, the event will be 30 + counted for all agent requests (config1 = 0x0). For all the supported agents of 31 + each PMU, please refer to the APM X-Gene User Manual. 32 + 33 + Each perf driver also provides a "cpumask" sysfs attribute, which contains a 34 + single CPU ID of the processor which will be used to handle all the PMU events. 
35 + 36 + Example for perf tool use: 37 + 38 + / # perf list | grep -e l3c -e iob -e mcb -e mc 39 + l3c0/ackq-full/ [Kernel PMU event] 40 + <...> 41 + mcb1/mcb-csw-stall/ [Kernel PMU event] 42 + 43 + / # perf stat -a -e l3c0/read-miss/,mcb1/csw-write-request/ sleep 1 44 + 45 + / # perf stat -a -e l3c0/read-miss,config1=0xfffffffffffffffe/ sleep 1 46 + 47 + The driver does not support sampling, therefore "perf record" will 48 + not work. Per-task (without "-a") perf sessions are not supported.
+8
MAINTAINERS
··· 866 866 F: Documentation/devicetree/bindings/net/apm-xgene-enet.txt 867 867 F: Documentation/devicetree/bindings/net/apm-xgene-mdio.txt 868 868 869 + APPLIED MICRO (APM) X-GENE SOC PMU 870 + M: Tai Nguyen <ttnguyen@apm.com> 871 + S: Supported 872 + F: drivers/perf/xgene_pmu.c 873 + F: Documentation/perf/xgene-pmu.txt 874 + F: Documentation/devicetree/bindings/perf/apm-xgene-pmu.txt 875 + 869 876 APTINA CAMERA SENSOR PLL 870 877 M: Laurent Pinchart <Laurent.pinchart@ideasonboard.com> 871 878 L: linux-media@vger.kernel.org ··· 1868 1861 F: drivers/clk/uniphier/ 1869 1862 F: drivers/i2c/busses/i2c-uniphier* 1870 1863 F: drivers/pinctrl/uniphier/ 1864 + F: drivers/reset/reset-uniphier.c 1871 1865 F: drivers/tty/serial/8250/8250_uniphier.c 1872 1866 N: uniphier 1873 1867
+1
arch/arm/boot/dts/stm32f429.dtsi
··· 334 334 }; 335 335 336 336 rcc: rcc@40023810 { 337 + #reset-cells = <1>; 337 338 #clock-cells = <2>; 338 339 compatible = "st,stm32f42xx-rcc", "st,stm32-rcc"; 339 340 reg = <0x40023800 0x400>;
+9 -5
drivers/bus/Kconfig
··· 108 108 OCP2SCP and in OMAP5, both USB PHY and SATA PHY is connected via 109 109 OCP2SCP. 110 110 111 + config QCOM_EBI2 112 + bool "Qualcomm External Bus Interface 2 (EBI2)" 113 + depends on HAS_IOMEM 114 + help 115 + Say y here to enable support for the Qualcomm External Bus 116 + Interface 2, which can be used to connect things like NAND Flash, 117 + SRAM, ethernet adapters, FPGAs and LCD displays. 118 + 111 119 config SIMPLE_PM_BUS 112 120 bool "Simple Power-Managed Bus Driver" 113 121 depends on OF && PM ··· 140 132 with various RSB based devices, such as AXP223, AXP8XX PMICs, 141 133 and AC100/AC200 ICs. 142 134 143 - # TODO: This uses pm_clk_*() symbols that aren't exported in v4.7 and hence 144 - # the driver will fail to build as a module. However there are patches to 145 - # address that queued for v4.8, so this can be turned into a tristate symbol 146 - # after v4.8-rc1. 147 135 config TEGRA_ACONNECT 148 - bool "Tegra ACONNECT Bus Driver" 136 + tristate "Tegra ACONNECT Bus Driver" 149 137 depends on ARCH_TEGRA_210_SOC 150 138 depends on OF && PM 151 139 select PM_CLK
+1
drivers/bus/Makefile
··· 15 15 obj-$(CONFIG_OMAP_INTERCONNECT) += omap_l3_smx.o omap_l3_noc.o 16 16 17 17 obj-$(CONFIG_OMAP_OCP2SCP) += omap-ocp2scp.o 18 + obj-$(CONFIG_QCOM_EBI2) += qcom-ebi2.o 18 19 obj-$(CONFIG_SUNXI_RSB) += sunxi-rsb.o 19 20 obj-$(CONFIG_SIMPLE_PM_BUS) += simple-pm-bus.o 20 21 obj-$(CONFIG_TEGRA_ACONNECT) += tegra-aconnect.o
+408
drivers/bus/qcom-ebi2.c
··· 1 + /* 2 + * Qualcomm External Bus Interface 2 (EBI2) driver 3 + * an older version of the Qualcomm Parallel Interface Controller (QPIC) 4 + * 5 + * Copyright (C) 2016 Linaro Ltd. 6 + * 7 + * Author: Linus Walleij <linus.walleij@linaro.org> 8 + * 9 + * This program is free software; you can redistribute it and/or modify 10 + * it under the terms of the GNU General Public License version 2, as 11 + * published by the Free Software Foundation. 12 + * 13 + * See the device tree bindings for this block for more details on the 14 + * hardware. 15 + */ 16 + 17 + #include <linux/module.h> 18 + #include <linux/clk.h> 19 + #include <linux/err.h> 20 + #include <linux/io.h> 21 + #include <linux/of.h> 22 + #include <linux/of_platform.h> 23 + #include <linux/init.h> 24 + #include <linux/io.h> 25 + #include <linux/slab.h> 26 + #include <linux/platform_device.h> 27 + #include <linux/bitops.h> 28 + 29 + /* 30 + * CS0, CS1, CS4 and CS5 are two bits wide, CS2 and CS3 are one bit. 31 + */ 32 + #define EBI2_CS0_ENABLE_MASK BIT(0)|BIT(1) 33 + #define EBI2_CS1_ENABLE_MASK BIT(2)|BIT(3) 34 + #define EBI2_CS2_ENABLE_MASK BIT(4) 35 + #define EBI2_CS3_ENABLE_MASK BIT(5) 36 + #define EBI2_CS4_ENABLE_MASK BIT(6)|BIT(7) 37 + #define EBI2_CS5_ENABLE_MASK BIT(8)|BIT(9) 38 + #define EBI2_CSN_MASK GENMASK(9, 0) 39 + 40 + #define EBI2_XMEM_CFG 0x0000 /* Power management etc */ 41 + 42 + /* 43 + * SLOW CSn CFG 44 + * 45 + * Bits 31-28: RECOVERY recovery cycles (0 = 1, 1 = 2 etc) this is the time the 46 + * memory continues to drive the data bus after OE is de-asserted. 47 + * Inserted when reading one CS and switching to another CS or read 48 + * followed by write on the same CS. Valid values 0 thru 15. 49 + * Bits 27-24: WR_HOLD write hold cycles, these are extra cycles inserted after 50 + * every write minimum 1. The data out is driven from the time WE is 51 + * asserted until CS is asserted. With a hold of 1, the CS stays 52 + * active for 1 extra cycle etc. Valid values 0 thru 15. 
53 + * Bits 23-16: WR_DELTA initial latency for write cycles inserted for the first 54 + * write to a page or burst memory 55 + * Bits 15-8: RD_DELTA initial latency for read cycles inserted for the first 56 + * read to a page or burst memory 57 + * Bits 7-4: WR_WAIT number of wait cycles for every write access, 0=1 cycle 58 + * so 1 thru 16 cycles. 59 + * Bits 3-0: RD_WAIT number of wait cycles for every read access, 0=1 cycle 60 + * so 1 thru 16 cycles. 61 + */ 62 + #define EBI2_XMEM_CS0_SLOW_CFG 0x0008 63 + #define EBI2_XMEM_CS1_SLOW_CFG 0x000C 64 + #define EBI2_XMEM_CS2_SLOW_CFG 0x0010 65 + #define EBI2_XMEM_CS3_SLOW_CFG 0x0014 66 + #define EBI2_XMEM_CS4_SLOW_CFG 0x0018 67 + #define EBI2_XMEM_CS5_SLOW_CFG 0x001C 68 + 69 + #define EBI2_XMEM_RECOVERY_SHIFT 28 70 + #define EBI2_XMEM_WR_HOLD_SHIFT 24 71 + #define EBI2_XMEM_WR_DELTA_SHIFT 16 72 + #define EBI2_XMEM_RD_DELTA_SHIFT 8 73 + #define EBI2_XMEM_WR_WAIT_SHIFT 4 74 + #define EBI2_XMEM_RD_WAIT_SHIFT 0 75 + 76 + /* 77 + * FAST CSn CFG 78 + * Bits 31-28: ? 79 + * Bits 27-24: RD_HOLD: the length in cycles of the first segment of a read 80 + * transfer. For a single read transfer this will be the time 81 + * from CS assertion to OE assertion. 82 + * Bits 18-24: ? 83 + * Bits 17-16: ADV_OE_RECOVERY, the number of cycles elapsed before an OE 84 + * assertion, with respect to the cycle where ADV is asserted. 85 + * 2 means 2 cycles between ADV and OE. Values 0, 1, 2 or 3. 86 + * Bits 5: ADDR_HOLD_ENA, The address is held for an extra cycle to meet 87 + * hold time requirements with ADV assertion. 88 + * 89 + * The manual mentions "write precharge cycles" and "precharge cycles". 90 + * We have not been able to figure out which bit fields these correspond to 91 + * in the hardware, or what valid values exist. The current hypothesis is that 92 + * this is something just used on the FAST chip selects. There is also a "byte 93 + * device enable" flag somewhere for 8bit memories. 
94 + */ 95 + #define EBI2_XMEM_CS0_FAST_CFG 0x0028 96 + #define EBI2_XMEM_CS1_FAST_CFG 0x002C 97 + #define EBI2_XMEM_CS2_FAST_CFG 0x0030 98 + #define EBI2_XMEM_CS3_FAST_CFG 0x0034 99 + #define EBI2_XMEM_CS4_FAST_CFG 0x0038 100 + #define EBI2_XMEM_CS5_FAST_CFG 0x003C 101 + 102 + #define EBI2_XMEM_RD_HOLD_SHIFT 24 103 + #define EBI2_XMEM_ADV_OE_RECOVERY_SHIFT 16 104 + #define EBI2_XMEM_ADDR_HOLD_ENA_SHIFT 5 105 + 106 + /** 107 + * struct cs_data - struct with info on a chipselect setting 108 + * @enable_mask: mask to enable the chipselect in the EBI2 config 109 + * @slow_cfg0: offset to XMEMC slow CS config 110 + * @fast_cfg1: offset to XMEMC fast CS config 111 + */ 112 + struct cs_data { 113 + u32 enable_mask; 114 + u16 slow_cfg; 115 + u16 fast_cfg; 116 + }; 117 + 118 + static const struct cs_data cs_info[] = { 119 + { 120 + /* CS0 */ 121 + .enable_mask = EBI2_CS0_ENABLE_MASK, 122 + .slow_cfg = EBI2_XMEM_CS0_SLOW_CFG, 123 + .fast_cfg = EBI2_XMEM_CS0_FAST_CFG, 124 + }, 125 + { 126 + /* CS1 */ 127 + .enable_mask = EBI2_CS1_ENABLE_MASK, 128 + .slow_cfg = EBI2_XMEM_CS1_SLOW_CFG, 129 + .fast_cfg = EBI2_XMEM_CS1_FAST_CFG, 130 + }, 131 + { 132 + /* CS2 */ 133 + .enable_mask = EBI2_CS2_ENABLE_MASK, 134 + .slow_cfg = EBI2_XMEM_CS2_SLOW_CFG, 135 + .fast_cfg = EBI2_XMEM_CS2_FAST_CFG, 136 + }, 137 + { 138 + /* CS3 */ 139 + .enable_mask = EBI2_CS3_ENABLE_MASK, 140 + .slow_cfg = EBI2_XMEM_CS3_SLOW_CFG, 141 + .fast_cfg = EBI2_XMEM_CS3_FAST_CFG, 142 + }, 143 + { 144 + /* CS4 */ 145 + .enable_mask = EBI2_CS4_ENABLE_MASK, 146 + .slow_cfg = EBI2_XMEM_CS4_SLOW_CFG, 147 + .fast_cfg = EBI2_XMEM_CS4_FAST_CFG, 148 + }, 149 + { 150 + /* CS5 */ 151 + .enable_mask = EBI2_CS5_ENABLE_MASK, 152 + .slow_cfg = EBI2_XMEM_CS5_SLOW_CFG, 153 + .fast_cfg = EBI2_XMEM_CS5_FAST_CFG, 154 + }, 155 + }; 156 + 157 + /** 158 + * struct ebi2_xmem_prop - describes an XMEM config property 159 + * @prop: the device tree binding name 160 + * @max: maximum value for the property 161 + * @slowreg: true if this 
property is in the SLOW CS config register 162 + * else it is assumed to be in the FAST config register 163 + * @shift: the bit field start in the SLOW or FAST register for this 164 + * property 165 + */ 166 + struct ebi2_xmem_prop { 167 + const char *prop; 168 + u32 max; 169 + bool slowreg; 170 + u16 shift; 171 + }; 172 + 173 + static const struct ebi2_xmem_prop xmem_props[] = { 174 + { 175 + .prop = "qcom,xmem-recovery-cycles", 176 + .max = 15, 177 + .slowreg = true, 178 + .shift = EBI2_XMEM_RECOVERY_SHIFT, 179 + }, 180 + { 181 + .prop = "qcom,xmem-write-hold-cycles", 182 + .max = 15, 183 + .slowreg = true, 184 + .shift = EBI2_XMEM_WR_HOLD_SHIFT, 185 + }, 186 + { 187 + .prop = "qcom,xmem-write-delta-cycles", 188 + .max = 255, 189 + .slowreg = true, 190 + .shift = EBI2_XMEM_WR_DELTA_SHIFT, 191 + }, 192 + { 193 + .prop = "qcom,xmem-read-delta-cycles", 194 + .max = 255, 195 + .slowreg = true, 196 + .shift = EBI2_XMEM_RD_DELTA_SHIFT, 197 + }, 198 + { 199 + .prop = "qcom,xmem-write-wait-cycles", 200 + .max = 15, 201 + .slowreg = true, 202 + .shift = EBI2_XMEM_WR_WAIT_SHIFT, 203 + }, 204 + { 205 + .prop = "qcom,xmem-read-wait-cycles", 206 + .max = 15, 207 + .slowreg = true, 208 + .shift = EBI2_XMEM_RD_WAIT_SHIFT, 209 + }, 210 + { 211 + .prop = "qcom,xmem-address-hold-enable", 212 + .max = 1, /* boolean prop */ 213 + .slowreg = false, 214 + .shift = EBI2_XMEM_ADDR_HOLD_ENA_SHIFT, 215 + }, 216 + { 217 + .prop = "qcom,xmem-adv-to-oe-recovery-cycles", 218 + .max = 3, 219 + .slowreg = false, 220 + .shift = EBI2_XMEM_ADV_OE_RECOVERY_SHIFT, 221 + }, 222 + { 223 + .prop = "qcom,xmem-read-hold-cycles", 224 + .max = 15, 225 + .slowreg = false, 226 + .shift = EBI2_XMEM_RD_HOLD_SHIFT, 227 + }, 228 + }; 229 + 230 + static void qcom_ebi2_setup_chipselect(struct device_node *np, 231 + struct device *dev, 232 + void __iomem *ebi2_base, 233 + void __iomem *ebi2_xmem, 234 + u32 csindex) 235 + { 236 + const struct cs_data *csd; 237 + u32 slowcfg, fastcfg; 238 + u32 val; 239 + int ret; 
240 + int i; 241 + 242 + csd = &cs_info[csindex]; 243 + val = readl(ebi2_base); 244 + val |= csd->enable_mask; 245 + writel(val, ebi2_base); 246 + dev_dbg(dev, "enabled CS%u\n", csindex); 247 + 248 + /* Next set up the XMEMC */ 249 + slowcfg = 0; 250 + fastcfg = 0; 251 + 252 + for (i = 0; i < ARRAY_SIZE(xmem_props); i++) { 253 + const struct ebi2_xmem_prop *xp = &xmem_props[i]; 254 + 255 + /* All are regular u32 values */ 256 + ret = of_property_read_u32(np, xp->prop, &val); 257 + if (ret) { 258 + dev_dbg(dev, "could not read %s for CS%d\n", 259 + xp->prop, csindex); 260 + continue; 261 + } 262 + 263 + /* First check boolean props */ 264 + if (xp->max == 1 && val) { 265 + if (xp->slowreg) 266 + slowcfg |= BIT(xp->shift); 267 + else 268 + fastcfg |= BIT(xp->shift); 269 + dev_dbg(dev, "set %s flag\n", xp->prop); 270 + continue; 271 + } 272 + 273 + /* We're dealing with an u32 */ 274 + if (val > xp->max) { 275 + dev_err(dev, 276 + "too high value for %s: %u, capped at %u\n", 277 + xp->prop, val, xp->max); 278 + val = xp->max; 279 + } 280 + if (xp->slowreg) 281 + slowcfg |= (val << xp->shift); 282 + else 283 + fastcfg |= (val << xp->shift); 284 + dev_dbg(dev, "set %s to %u\n", xp->prop, val); 285 + } 286 + 287 + dev_info(dev, "CS%u: SLOW CFG 0x%08x, FAST CFG 0x%08x\n", 288 + csindex, slowcfg, fastcfg); 289 + 290 + if (slowcfg) 291 + writel(slowcfg, ebi2_xmem + csd->slow_cfg); 292 + if (fastcfg) 293 + writel(fastcfg, ebi2_xmem + csd->fast_cfg); 294 + } 295 + 296 + static int qcom_ebi2_probe(struct platform_device *pdev) 297 + { 298 + struct device_node *np = pdev->dev.of_node; 299 + struct device_node *child; 300 + struct device *dev = &pdev->dev; 301 + struct resource *res; 302 + void __iomem *ebi2_base; 303 + void __iomem *ebi2_xmem; 304 + struct clk *ebi2xclk; 305 + struct clk *ebi2clk; 306 + bool have_children = false; 307 + u32 val; 308 + int ret; 309 + 310 + ebi2xclk = devm_clk_get(dev, "ebi2x"); 311 + if (IS_ERR(ebi2xclk)) 312 + return PTR_ERR(ebi2xclk); 313 + 
314 + ret = clk_prepare_enable(ebi2xclk); 315 + if (ret) { 316 + dev_err(dev, "could not enable EBI2X clk (%d)\n", ret); 317 + return ret; 318 + } 319 + 320 + ebi2clk = devm_clk_get(dev, "ebi2"); 321 + if (IS_ERR(ebi2clk)) { 322 + ret = PTR_ERR(ebi2clk); 323 + goto err_disable_2x_clk; 324 + } 325 + 326 + ret = clk_prepare_enable(ebi2clk); 327 + if (ret) { 328 + dev_err(dev, "could not enable EBI2 clk\n"); 329 + goto err_disable_2x_clk; 330 + } 331 + 332 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 333 + ebi2_base = devm_ioremap_resource(dev, res); 334 + if (IS_ERR(ebi2_base)) { 335 + ret = PTR_ERR(ebi2_base); 336 + goto err_disable_clk; 337 + } 338 + 339 + res = platform_get_resource(pdev, IORESOURCE_MEM, 1); 340 + ebi2_xmem = devm_ioremap_resource(dev, res); 341 + if (IS_ERR(ebi2_xmem)) { 342 + ret = PTR_ERR(ebi2_xmem); 343 + goto err_disable_clk; 344 + } 345 + 346 + /* Allegedly this turns the power save mode off */ 347 + writel(0UL, ebi2_xmem + EBI2_XMEM_CFG); 348 + 349 + /* Disable all chipselects */ 350 + val = readl(ebi2_base); 351 + val &= ~EBI2_CSN_MASK; 352 + writel(val, ebi2_base); 353 + 354 + /* Walk over the child nodes and see what chipselects we use */ 355 + for_each_available_child_of_node(np, child) { 356 + u32 csindex; 357 + 358 + /* Figure out the chipselect */ 359 + ret = of_property_read_u32(child, "reg", &csindex); 360 + if (ret) 361 + return ret; 362 + 363 + if (csindex > 5) { 364 + dev_err(dev, 365 + "invalid chipselect %u, we only support 0-5\n", 366 + csindex); 367 + continue; 368 + } 369 + 370 + qcom_ebi2_setup_chipselect(child, 371 + dev, 372 + ebi2_base, 373 + ebi2_xmem, 374 + csindex); 375 + 376 + /* We have at least one child */ 377 + have_children = true; 378 + } 379 + 380 + if (have_children) 381 + return of_platform_default_populate(np, NULL, dev); 382 + return 0; 383 + 384 + err_disable_clk: 385 + clk_disable_unprepare(ebi2clk); 386 + err_disable_2x_clk: 387 + clk_disable_unprepare(ebi2xclk); 388 + 389 + return ret; 390 
+ } 391 + 392 + static const struct of_device_id qcom_ebi2_of_match[] = { 393 + { .compatible = "qcom,msm8660-ebi2", }, 394 + { .compatible = "qcom,apq8060-ebi2", }, 395 + { } 396 + }; 397 + 398 + static struct platform_driver qcom_ebi2_driver = { 399 + .probe = qcom_ebi2_probe, 400 + .driver = { 401 + .name = "qcom-ebi2", 402 + .of_match_table = qcom_ebi2_of_match, 403 + }, 404 + }; 405 + module_platform_driver(qcom_ebi2_driver); 406 + MODULE_AUTHOR("Linus Walleij <linus.walleij@linaro.org>"); 407 + MODULE_DESCRIPTION("Qualcomm EBI2 driver"); 408 + MODULE_LICENSE("GPL");
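The chipselect setup above is table-driven: each device-tree property maps to a bit field in either the SLOW or FAST XMEMC config word, and out-of-range values are capped at the property's maximum. A minimal userspace sketch of that packing, with `struct xmem_prop` and `pack_prop` as illustrative stand-ins for `struct ebi2_xmem_prop` and the loop body of `qcom_ebi2_setup_chipselect()` (the shift value below is `EBI2_XMEM_RD_HOLD_SHIFT` from the patch):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative mirror of the driver's property table entry. */
struct xmem_prop {
	uint32_t max;	/* maximum legal value (1 marks a boolean flag) */
	bool slowreg;	/* true: SLOW config word, false: FAST */
	uint16_t shift;	/* bit position of the field */
};

static void pack_prop(const struct xmem_prop *xp, uint32_t val,
		      uint32_t *slowcfg, uint32_t *fastcfg)
{
	uint32_t *cfg = xp->slowreg ? slowcfg : fastcfg;

	if (val > xp->max)	/* the driver caps and warns via dev_err() */
		val = xp->max;
	*cfg |= val << xp->shift;
}
```

Because only non-zero config words are written back, unset properties simply leave both accumulators at their reset value.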
+2 -20
drivers/bus/tegra-aconnect.c
··· 15 15 #include <linux/pm_clock.h> 16 16 #include <linux/pm_runtime.h> 17 17 18 - static int tegra_aconnect_add_clock(struct device *dev, char *name) 19 - { 20 - struct clk *clk; 21 - int ret; 22 - 23 - clk = clk_get(dev, name); 24 - if (IS_ERR(clk)) { 25 - dev_err(dev, "%s clock not found\n", name); 26 - return PTR_ERR(clk); 27 - } 28 - 29 - ret = pm_clk_add_clk(dev, clk); 30 - if (ret) 31 - clk_put(clk); 32 - 33 - return ret; 34 - } 35 - 36 18 static int tegra_aconnect_probe(struct platform_device *pdev) 37 19 { 38 20 int ret; ··· 26 44 if (ret) 27 45 return ret; 28 46 29 - ret = tegra_aconnect_add_clock(&pdev->dev, "ape"); 47 + ret = of_pm_clk_add_clk(&pdev->dev, "ape"); 30 48 if (ret) 31 49 goto clk_destroy; 32 50 33 - ret = tegra_aconnect_add_clock(&pdev->dev, "apb2ape"); 51 + ret = of_pm_clk_add_clk(&pdev->dev, "apb2ape"); 34 52 if (ret) 35 53 goto clk_destroy; 36 54
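The deleted `tegra_aconnect_add_clock()` helper followed an ownership rule that `of_pm_clk_add_clk()` now handles internally: take a clock reference, hand it to the registration call, and drop it only if registration fails. A toy model of that rule (`fake_clk_get`, `fake_clk_put` and `add_clock` are invented stand-ins, not kernel APIs):

```c
#include <assert.h>

static int refs;	/* outstanding references in this toy model */

static void fake_clk_get(void) { refs++; }	/* stands in for clk_get() */
static void fake_clk_put(void) { refs--; }	/* stands in for clk_put() */

/* Stands in for the removed helper: on success, ownership of the
 * reference transfers to the PM clock list; on failure it is dropped. */
static int add_clock(int add_fails)
{
	fake_clk_get();
	if (add_fails) {
		fake_clk_put();	/* release the reference we just took */
		return -1;
	}
	return 0;
}
```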
+70
drivers/clk/mvebu/orion.c
··· 21 21 }; 22 22 23 23 /* 24 + * Orion 5181 25 + */ 26 + 27 + #define SAR_MV88F5181_TCLK_FREQ 8 28 + #define SAR_MV88F5181_TCLK_FREQ_MASK 0x3 29 + 30 + static u32 __init mv88f5181_get_tclk_freq(void __iomem *sar) 31 + { 32 + u32 opt = (readl(sar) >> SAR_MV88F5181_TCLK_FREQ) & 33 + SAR_MV88F5181_TCLK_FREQ_MASK; 34 + if (opt == 0) 35 + return 133333333; 36 + else if (opt == 1) 37 + return 150000000; 38 + else if (opt == 2) 39 + return 166666667; 40 + else 41 + return 0; 42 + } 43 + 44 + #define SAR_MV88F5181_CPU_FREQ 4 45 + #define SAR_MV88F5181_CPU_FREQ_MASK 0xf 46 + 47 + static u32 __init mv88f5181_get_cpu_freq(void __iomem *sar) 48 + { 49 + u32 opt = (readl(sar) >> SAR_MV88F5181_CPU_FREQ) & 50 + SAR_MV88F5181_CPU_FREQ_MASK; 51 + if (opt == 0) 52 + return 333333333; 53 + else if (opt == 1 || opt == 2) 54 + return 400000000; 55 + else if (opt == 3) 56 + return 500000000; 57 + else 58 + return 0; 59 + } 60 + 61 + static void __init mv88f5181_get_clk_ratio(void __iomem *sar, int id, 62 + int *mult, int *div) 63 + { 64 + u32 opt = (readl(sar) >> SAR_MV88F5181_CPU_FREQ) & 65 + SAR_MV88F5181_CPU_FREQ_MASK; 66 + if (opt == 0 || opt == 1) { 67 + *mult = 1; 68 + *div = 2; 69 + } else if (opt == 2 || opt == 3) { 70 + *mult = 1; 71 + *div = 3; 72 + } else { 73 + *mult = 0; 74 + *div = 1; 75 + } 76 + } 77 + 78 + static const struct coreclk_soc_desc mv88f5181_coreclks = { 79 + .get_tclk_freq = mv88f5181_get_tclk_freq, 80 + .get_cpu_freq = mv88f5181_get_cpu_freq, 81 + .get_clk_ratio = mv88f5181_get_clk_ratio, 82 + .ratios = orion_coreclk_ratios, 83 + .num_ratios = ARRAY_SIZE(orion_coreclk_ratios), 84 + }; 85 + 86 + static void __init mv88f5181_clk_init(struct device_node *np) 87 + { 88 + return mvebu_coreclk_setup(np, &mv88f5181_coreclks); 89 + } 90 + 91 + CLK_OF_DECLARE(mv88f5181_clk, "marvell,mv88f5181-core-clock", mv88f5181_clk_init); 92 + 93 + /* 24 94 * Orion 5182 25 95 */ 26 96
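The mv88f5181 callbacks above all decode fields of the sample-at-reset (SAR) register. A standalone sketch of the TCLK decode, using the shift and mask from the patch (`mv88f5181_tclk` is an illustrative rewrite of `mv88f5181_get_tclk_freq()`; reserved encodings decode to 0, matching the driver's fall-through):

```c
#include <assert.h>
#include <stdint.h>

#define SAR_TCLK_SHIFT	8	/* SAR_MV88F5181_TCLK_FREQ in the patch */
#define SAR_TCLK_MASK	0x3	/* SAR_MV88F5181_TCLK_FREQ_MASK */

static uint32_t mv88f5181_tclk(uint32_t sar)
{
	switch ((sar >> SAR_TCLK_SHIFT) & SAR_TCLK_MASK) {
	case 0:  return 133333333;
	case 1:  return 150000000;
	case 2:  return 166666667;
	default: return 0;	/* reserved encoding */
	}
}
```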
+1 -1
drivers/clocksource/Kconfig
··· 361 361 362 362 config CLKSRC_EXYNOS_MCT 363 363 bool "Exynos multi core timer driver" if COMPILE_TEST 364 - depends on ARM 364 + depends on ARM || ARM64 365 365 help 366 366 Support for Multi Core Timer controller on Exynos SoCs. 367 367
+4
drivers/clocksource/exynos_mct.c
··· 223 223 return exynos4_read_count_32(); 224 224 } 225 225 226 + #if defined(CONFIG_ARM) 226 227 static struct delay_timer exynos4_delay_timer; 227 228 228 229 static cycles_t exynos4_read_current_timer(void) ··· 232 231 "cycles_t needs to move to 32-bit for ARM64 usage"); 233 232 return exynos4_read_count_32(); 234 233 } 234 + #endif 235 235 236 236 static int __init exynos4_clocksource_init(void) 237 237 { 238 238 exynos4_mct_frc_start(); 239 239 240 + #if defined(CONFIG_ARM) 240 241 exynos4_delay_timer.read_current_timer = &exynos4_read_current_timer; 241 242 exynos4_delay_timer.freq = clk_rate; 242 243 register_current_timer_delay(&exynos4_delay_timer); 244 + #endif 243 245 244 246 if (clocksource_register_hz(&mct_frc, clk_rate)) 245 247 panic("%s: can't register clocksource\n", mct_frc.name);
+1
drivers/firmware/Kconfig
··· 209 209 source "drivers/firmware/broadcom/Kconfig" 210 210 source "drivers/firmware/google/Kconfig" 211 211 source "drivers/firmware/efi/Kconfig" 212 + source "drivers/firmware/meson/Kconfig" 212 213 213 214 endmenu
+1
drivers/firmware/Makefile
··· 22 22 CFLAGS_qcom_scm-32.o :=$(call as-instr,.arch armv7-a\n.arch_extension sec,-DREQUIRES_SEC=1) -march=armv7-a 23 23 24 24 obj-y += broadcom/ 25 + obj-y += meson/ 25 26 obj-$(CONFIG_GOOGLE_FIRMWARE) += google/ 26 27 obj-$(CONFIG_EFI) += efi/ 27 28 obj-$(CONFIG_UEFI_CPER) += efi/
+9
drivers/firmware/meson/Kconfig
··· 1 + # 2 + # Amlogic Secure Monitor driver 3 + # 4 + config MESON_SM 5 + bool 6 + default ARCH_MESON 7 + depends on ARM64_4K_PAGES 8 + help 9 + Say y here to enable the Amlogic secure monitor driver
+1
drivers/firmware/meson/Makefile
··· 1 + obj-$(CONFIG_MESON_SM) += meson_sm.o
+248
drivers/firmware/meson/meson_sm.c
··· 1 + /* 2 + * Amlogic Secure Monitor driver 3 + * 4 + * Copyright (C) 2016 Endless Mobile, Inc. 5 + * Author: Carlo Caione <carlo@endlessm.com> 6 + * 7 + * This program is free software; you can redistribute it and/or 8 + * modify it under the terms of the GNU General Public License 9 + * version 2 as published by the Free Software Foundation. 10 + * 11 + * You should have received a copy of the GNU General Public License 12 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 13 + */ 14 + 15 + #define pr_fmt(fmt) "meson-sm: " fmt 16 + 17 + #include <linux/arm-smccc.h> 18 + #include <linux/bug.h> 19 + #include <linux/io.h> 20 + #include <linux/of.h> 21 + #include <linux/of_device.h> 22 + #include <linux/printk.h> 23 + #include <linux/types.h> 24 + #include <linux/sizes.h> 25 + 26 + #include <linux/firmware/meson/meson_sm.h> 27 + 28 + struct meson_sm_cmd { 29 + unsigned int index; 30 + u32 smc_id; 31 + }; 32 + #define CMD(d, s) { .index = (d), .smc_id = (s), } 33 + 34 + struct meson_sm_chip { 35 + unsigned int shmem_size; 36 + u32 cmd_shmem_in_base; 37 + u32 cmd_shmem_out_base; 38 + struct meson_sm_cmd cmd[]; 39 + }; 40 + 41 + struct meson_sm_chip gxbb_chip = { 42 + .shmem_size = SZ_4K, 43 + .cmd_shmem_in_base = 0x82000020, 44 + .cmd_shmem_out_base = 0x82000021, 45 + .cmd = { 46 + CMD(SM_EFUSE_READ, 0x82000030), 47 + CMD(SM_EFUSE_WRITE, 0x82000031), 48 + CMD(SM_EFUSE_USER_MAX, 0x82000033), 49 + { /* sentinel */ }, 50 + }, 51 + }; 52 + 53 + struct meson_sm_firmware { 54 + const struct meson_sm_chip *chip; 55 + void __iomem *sm_shmem_in_base; 56 + void __iomem *sm_shmem_out_base; 57 + }; 58 + 59 + static struct meson_sm_firmware fw; 60 + 61 + static u32 meson_sm_get_cmd(const struct meson_sm_chip *chip, 62 + unsigned int cmd_index) 63 + { 64 + const struct meson_sm_cmd *cmd = chip->cmd; 65 + 66 + while (cmd->smc_id && cmd->index != cmd_index) 67 + cmd++; 68 + 69 + return cmd->smc_id; 70 + } 71 + 72 + static u32 __meson_sm_call(u32 cmd, u32 arg0, 
u32 arg1, u32 arg2, 73 + u32 arg3, u32 arg4) 74 + { 75 + struct arm_smccc_res res; 76 + 77 + arm_smccc_smc(cmd, arg0, arg1, arg2, arg3, arg4, 0, 0, &res); 78 + return res.a0; 79 + } 80 + 81 + static void __iomem *meson_sm_map_shmem(u32 cmd_shmem, unsigned int size) 82 + { 83 + u32 sm_phy_base; 84 + 85 + sm_phy_base = __meson_sm_call(cmd_shmem, 0, 0, 0, 0, 0); 86 + if (!sm_phy_base) 87 + return 0; 88 + 89 + return ioremap_cache(sm_phy_base, size); 90 + } 91 + 92 + /** 93 + * meson_sm_call - generic SMC32 call to the secure-monitor 94 + * 95 + * @cmd_index: Index of the SMC32 function ID 96 + * @ret: Returned value 97 + * @arg0: SMC32 Argument 0 98 + * @arg1: SMC32 Argument 1 99 + * @arg2: SMC32 Argument 2 100 + * @arg3: SMC32 Argument 3 101 + * @arg4: SMC32 Argument 4 102 + * 103 + * Return: 0 on success, a negative value on error 104 + */ 105 + int meson_sm_call(unsigned int cmd_index, u32 *ret, u32 arg0, 106 + u32 arg1, u32 arg2, u32 arg3, u32 arg4) 107 + { 108 + u32 cmd, lret; 109 + 110 + if (!fw.chip) 111 + return -ENOENT; 112 + 113 + cmd = meson_sm_get_cmd(fw.chip, cmd_index); 114 + if (!cmd) 115 + return -EINVAL; 116 + 117 + lret = __meson_sm_call(cmd, arg0, arg1, arg2, arg3, arg4); 118 + 119 + if (ret) 120 + *ret = lret; 121 + 122 + return 0; 123 + } 124 + EXPORT_SYMBOL(meson_sm_call); 125 + 126 + /** 127 + * meson_sm_call_read - retrieve data from secure-monitor 128 + * 129 + * @buffer: Buffer to store the retrieved data 130 + * @cmd_index: Index of the SMC32 function ID 131 + * @arg0: SMC32 Argument 0 132 + * @arg1: SMC32 Argument 1 133 + * @arg2: SMC32 Argument 2 134 + * @arg3: SMC32 Argument 3 135 + * @arg4: SMC32 Argument 4 136 + * 137 + * Return: size of read data on success, a negative value on error 138 + */ 139 + int meson_sm_call_read(void *buffer, unsigned int cmd_index, u32 arg0, 140 + u32 arg1, u32 arg2, u32 arg3, u32 arg4) 141 + { 142 + u32 size; 143 + 144 + if (!fw.chip) 145 + return -ENOENT; 146 + 147 + if (!fw.chip->cmd_shmem_out_base) 148 + 
return -EINVAL; 149 + 150 + if (meson_sm_call(cmd_index, &size, arg0, arg1, arg2, arg3, arg4) < 0) 151 + return -EINVAL; 152 + 153 + if (!size || size > fw.chip->shmem_size) 154 + return -EINVAL; 155 + 156 + if (buffer) 157 + memcpy(buffer, fw.sm_shmem_out_base, size); 158 + 159 + return size; 160 + } 161 + EXPORT_SYMBOL(meson_sm_call_read); 162 + 163 + /** 164 + * meson_sm_call_write - send data to secure-monitor 165 + * 166 + * @buffer: Buffer containing data to send 167 + * @size: Size of the data to send 168 + * @cmd_index: Index of the SMC32 function ID 169 + * @arg0: SMC32 Argument 0 170 + * @arg1: SMC32 Argument 1 171 + * @arg2: SMC32 Argument 2 172 + * @arg3: SMC32 Argument 3 173 + * @arg4: SMC32 Argument 4 174 + * 175 + * Return: size of sent data on success, a negative value on error 176 + */ 177 + int meson_sm_call_write(void *buffer, unsigned int size, unsigned int cmd_index, 178 + u32 arg0, u32 arg1, u32 arg2, u32 arg3, u32 arg4) 179 + { 180 + u32 written; 181 + 182 + if (!fw.chip) 183 + return -ENOENT; 184 + 185 + if (size > fw.chip->shmem_size) 186 + return -EINVAL; 187 + 188 + if (!fw.chip->cmd_shmem_in_base) 189 + return -EINVAL; 190 + 191 + memcpy(fw.sm_shmem_in_base, buffer, size); 192 + 193 + if (meson_sm_call(cmd_index, &written, arg0, arg1, arg2, arg3, arg4) < 0) 194 + return -EINVAL; 195 + 196 + if (!written) 197 + return -EINVAL; 198 + 199 + return written; 200 + } 201 + EXPORT_SYMBOL(meson_sm_call_write); 202 + 203 + static const struct of_device_id meson_sm_ids[] = { 204 + { .compatible = "amlogic,meson-gxbb-sm", .data = &gxbb_chip }, 205 + { /* sentinel */ }, 206 + }; 207 + 208 + int __init meson_sm_init(void) 209 + { 210 + const struct meson_sm_chip *chip; 211 + const struct of_device_id *matched_np; 212 + struct device_node *np; 213 + 214 + np = of_find_matching_node_and_match(NULL, meson_sm_ids, &matched_np); 215 + if (!np) 216 + return -ENODEV; 217 + 218 + chip = matched_np->data; 219 + if (!chip) { 220 + pr_err("unable to setup 
secure-monitor data\n"); 221 + goto out; 222 + } 223 + 224 + if (chip->cmd_shmem_in_base) { 225 + fw.sm_shmem_in_base = meson_sm_map_shmem(chip->cmd_shmem_in_base, 226 + chip->shmem_size); 227 + if (WARN_ON(!fw.sm_shmem_in_base)) 228 + goto out; 229 + } 230 + 231 + if (chip->cmd_shmem_out_base) { 232 + fw.sm_shmem_out_base = meson_sm_map_shmem(chip->cmd_shmem_out_base, 233 + chip->shmem_size); 234 + if (WARN_ON(!fw.sm_shmem_out_base)) 235 + goto out_in_base; 236 + } 237 + 238 + fw.chip = chip; 239 + pr_info("secure-monitor enabled\n"); 240 + 241 + return 0; 242 + 243 + out_in_base: 244 + iounmap(fw.sm_shmem_in_base); 245 + out: 246 + return -EINVAL; 247 + } 248 + device_initcall(meson_sm_init);
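`meson_sm_get_cmd()` resolves a chip-relative command index to a raw SMC function ID by walking a sentinel-terminated table; a zero `smc_id` both terminates the table and doubles as the "not found" result, which `meson_sm_call()` maps to `-EINVAL`. A userspace sketch of that lookup (the demo table's indices stand in for the real `SM_EFUSE_*` enum values):

```c
#include <assert.h>
#include <stdint.h>

struct sm_cmd {
	unsigned int index;
	uint32_t smc_id;	/* 0 terminates the table */
};

static uint32_t sm_get_cmd(const struct sm_cmd *cmd, unsigned int index)
{
	while (cmd->smc_id && cmd->index != index)
		cmd++;
	return cmd->smc_id;	/* 0 when the index is unknown */
}

/* demo table; indices are illustrative, not the real enum values */
static const struct sm_cmd demo_tbl[] = {
	{ 0, 0x82000030u },
	{ 1, 0x82000031u },
	{ 0, 0 },		/* sentinel */
};
```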
+5 -14
drivers/firmware/qcom_scm.c
··· 1 - /* Copyright (c) 2010,2015, The Linux Foundation. All rights reserved. 1 + /* 2 + * Qualcomm SCM driver 3 + * 4 + * Copyright (c) 2010,2015, The Linux Foundation. All rights reserved. 2 5 * Copyright (C) 2015 Linaro Ltd. 3 6 * 4 7 * This program is free software; you can redistribute it and/or modify ··· 15 12 * 16 13 */ 17 14 #include <linux/platform_device.h> 18 - #include <linux/module.h> 15 + #include <linux/init.h> 19 16 #include <linux/cpumask.h> 20 17 #include <linux/export.h> 21 18 #include <linux/dma-mapping.h> ··· 379 376 {} 380 377 }; 381 378 382 - MODULE_DEVICE_TABLE(of, qcom_scm_dt_match); 383 - 384 379 static struct platform_driver qcom_scm_driver = { 385 380 .driver = { 386 381 .name = "qcom_scm", ··· 415 414 416 415 return platform_driver_register(&qcom_scm_driver); 417 416 } 418 - 419 417 subsys_initcall(qcom_scm_init); 420 - 421 - static void __exit qcom_scm_exit(void) 422 - { 423 - platform_driver_unregister(&qcom_scm_driver); 424 - } 425 - module_exit(qcom_scm_exit); 426 - 427 - MODULE_DESCRIPTION("Qualcomm SCM driver"); 428 - MODULE_LICENSE("GPL v2");
+24 -5
drivers/media/rc/meson-ir.c
··· 24 24 25 25 #define DRIVER_NAME "meson-ir" 26 26 27 + /* valid on all Meson platforms */ 27 28 #define IR_DEC_LDR_ACTIVE 0x00 28 29 #define IR_DEC_LDR_IDLE 0x04 29 30 #define IR_DEC_LDR_REPEAT 0x08 ··· 33 32 #define IR_DEC_FRAME 0x14 34 33 #define IR_DEC_STATUS 0x18 35 34 #define IR_DEC_REG1 0x1c 35 + /* only available on Meson 8b and newer */ 36 + #define IR_DEC_REG2 0x20 36 37 37 38 #define REG0_RATE_MASK (BIT(11) - 1) 38 39 39 - #define REG1_MODE_MASK (BIT(7) | BIT(8)) 40 - #define REG1_MODE_NEC (0 << 7) 41 - #define REG1_MODE_GENERAL (2 << 7) 40 + #define DECODE_MODE_NEC 0x0 41 + #define DECODE_MODE_RAW 0x2 42 + 43 + /* Meson 6b uses REG1 to configure the mode */ 44 + #define REG1_MODE_MASK GENMASK(8, 7) 45 + #define REG1_MODE_SHIFT 7 46 + 47 + /* Meson 8b / GXBB use REG2 to configure the mode */ 48 + #define REG2_MODE_MASK GENMASK(3, 0) 49 + #define REG2_MODE_SHIFT 0 42 50 43 51 #define REG1_TIME_IV_SHIFT 16 44 52 #define REG1_TIME_IV_MASK ((BIT(13) - 1) << REG1_TIME_IV_SHIFT) ··· 168 158 /* Reset the decoder */ 169 159 meson_ir_set_mask(ir, IR_DEC_REG1, REG1_RESET, REG1_RESET); 170 160 meson_ir_set_mask(ir, IR_DEC_REG1, REG1_RESET, 0); 171 - /* Set general operation mode */ 172 - meson_ir_set_mask(ir, IR_DEC_REG1, REG1_MODE_MASK, REG1_MODE_GENERAL); 161 + 162 + /* Set general operation mode (= raw/software decoding) */ 163 + if (of_device_is_compatible(node, "amlogic,meson6-ir")) 164 + meson_ir_set_mask(ir, IR_DEC_REG1, REG1_MODE_MASK, 165 + DECODE_MODE_RAW << REG1_MODE_SHIFT); 166 + else 167 + meson_ir_set_mask(ir, IR_DEC_REG2, REG2_MODE_MASK, 168 + DECODE_MODE_RAW << REG2_MODE_SHIFT); 169 + 173 170 /* Set rate */ 174 171 meson_ir_set_mask(ir, IR_DEC_REG0, REG0_RATE_MASK, MESON_TRATE - 1); 175 172 /* IRQ on rising and falling edges */ ··· 214 197 215 198 static const struct of_device_id meson_ir_match[] = { 216 199 { .compatible = "amlogic,meson6-ir" }, 200 + { .compatible = "amlogic,meson8b-ir" }, 201 + { .compatible = "amlogic,meson-gxbb-ir" }, 217 202 
{ }, 218 203 }; 219 204
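Both the Meson 6 and Meson 8b/GXBB paths go through `meson_ir_set_mask()`, a read-modify-write that clears the mode field and ORs in the new value at the field's shift. A pure-function sketch of that helper (`set_mask` is a stand-in; the driver version also performs the MMIO readl/writel):

```c
#include <assert.h>
#include <stdint.h>

/* pure-function version of the driver's read-modify-write helper */
static uint32_t set_mask(uint32_t reg, uint32_t mask, uint32_t val)
{
	return (reg & ~mask) | (val & mask);
}

#define REG2_MODE_MASK	0xFu	/* GENMASK(3, 0), as in the patch */
#define DECODE_MODE_RAW	0x2u
```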
+2 -8
drivers/memory/atmel-ebi.c
··· 410 410 411 411 field.reg = AT91SAM9_SMC_MODE(AT91SAM9_SMC_GENERIC); 412 412 fields->mode = devm_regmap_field_alloc(ebi->dev, ebi->smc, field); 413 - if (IS_ERR(fields->mode)) 414 - return PTR_ERR(fields->mode); 415 - 416 - return 0; 413 + return PTR_ERR_OR_ZERO(fields->mode); 417 414 } 418 415 419 416 static int sama5d3_ebi_init(struct at91_ebi *ebi) ··· 438 441 439 442 field.reg = SAMA5_SMC_MODE(SAMA5_SMC_GENERIC); 440 443 fields->mode = devm_regmap_field_alloc(ebi->dev, ebi->smc, field); 441 - if (IS_ERR(fields->mode)) 442 - return PTR_ERR(fields->mode); 443 - 444 - return 0; 444 + return PTR_ERR_OR_ZERO(fields->mode); 445 445 } 446 446 447 447 static int at91_ebi_dev_setup(struct at91_ebi *ebi, struct device_node *np,
+1 -3
drivers/memory/atmel-sdramc.c
··· 53 53 54 54 static int atmel_ramc_probe(struct platform_device *pdev) 55 55 { 56 - const struct of_device_id *match; 57 56 const struct at91_ramc_caps *caps; 58 57 struct clk *clk; 59 58 60 - match = of_match_device(atmel_ramc_of_match, &pdev->dev); 61 - caps = match->data; 59 + caps = of_device_get_match_data(&pdev->dev); 62 60 63 61 if (caps->has_ddrck) { 64 62 clk = devm_clk_get(&pdev->dev, "ddrck");
+5 -15
drivers/memory/omap-gpmc.c
··· 350 350 return (time_ps + tick_ps - 1) / tick_ps; 351 351 } 352 352 353 - unsigned int gpmc_clk_ticks_to_ns(unsigned ticks, int cs, 354 - enum gpmc_clk_domain cd) 353 + static unsigned int gpmc_clk_ticks_to_ns(unsigned int ticks, int cs, 354 + enum gpmc_clk_domain cd) 355 355 { 356 356 return ticks * gpmc_get_clk_period(cs, cd) / 1000; 357 357 } ··· 2143 2143 ret = -ENODEV; 2144 2144 2145 2145 err_cs: 2146 - if (waitpin_desc) 2147 - gpiochip_free_own_desc(waitpin_desc); 2148 - 2146 + gpiochip_free_own_desc(waitpin_desc); 2149 2147 err: 2150 2148 gpmc_cs_free(cs); 2151 2149 ··· 2263 2265 gpmc->gpio_chip.get = gpmc_gpio_get; 2264 2266 gpmc->gpio_chip.base = -1; 2265 2267 2266 - ret = gpiochip_add(&gpmc->gpio_chip); 2268 + ret = devm_gpiochip_add_data(gpmc->dev, &gpmc->gpio_chip, NULL); 2267 2269 if (ret < 0) { 2268 2270 dev_err(gpmc->dev, "could not register gpio chip: %d\n", ret); 2269 2271 return ret; 2270 2272 } 2271 2273 2272 2274 return 0; 2273 - } 2274 - 2275 - static void gpmc_gpio_exit(struct gpmc_device *gpmc) 2276 - { 2277 - gpiochip_remove(&gpmc->gpio_chip); 2278 2275 } 2279 2276 2280 2277 static int gpmc_probe(struct platform_device *pdev) ··· 2358 2365 rc = gpmc_setup_irq(gpmc); 2359 2366 if (rc) { 2360 2367 dev_err(gpmc->dev, "gpmc_setup_irq failed\n"); 2361 - goto setup_irq_failed; 2368 + goto gpio_init_failed; 2362 2369 } 2363 2370 2364 2371 gpmc_probe_dt_children(pdev); 2365 2372 2366 2373 return 0; 2367 2374 2368 - setup_irq_failed: 2369 - gpmc_gpio_exit(gpmc); 2370 2375 gpio_init_failed: 2371 2376 gpmc_mem_exit(); 2372 2377 pm_runtime_put_sync(&pdev->dev); ··· 2378 2387 struct gpmc_device *gpmc = platform_get_drvdata(pdev); 2379 2388 2380 2389 gpmc_free_irq(gpmc); 2381 - gpmc_gpio_exit(gpmc); 2382 2390 gpmc_mem_exit(); 2383 2391 pm_runtime_put_sync(&pdev->dev); 2384 2392 pm_runtime_disable(&pdev->dev);
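The two conversions visible above round in opposite directions: time-to-ticks rounds up (`(time_ps + tick_ps - 1) / tick_ps`) so a requested timing is never shortened, while `gpmc_clk_ticks_to_ns()` truncates. The round-up division in isolation (`ps_to_ticks` is an illustrative name for that expression):

```c
#include <assert.h>

/* ceiling division, as in the driver's time-to-ticks conversion */
static unsigned int ps_to_ticks(unsigned int time_ps, unsigned int tick_ps)
{
	return (time_ps + tick_ps - 1) / tick_ps;
}
```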
+10
drivers/nvmem/Kconfig
··· 101 101 This driver can also be build as a module. If so, the module will 102 102 be called nvmem-vf610-ocotp. 103 103 104 + config MESON_EFUSE 105 + tristate "Amlogic eFuse Support" 106 + depends on (ARCH_MESON || COMPILE_TEST) && MESON_SM 107 + help 108 + This is a driver to retrieve specific values from the eFuse found on 109 + the Amlogic Meson SoCs. 110 + 111 + This driver can also be built as a module. If so, the module 112 + will be called nvmem_meson_efuse. 113 + 104 114 endif
+2
drivers/nvmem/Makefile
··· 22 22 nvmem_sunxi_sid-y := sunxi_sid.o 23 23 obj-$(CONFIG_NVMEM_VF610_OCOTP) += nvmem-vf610-ocotp.o 24 24 nvmem-vf610-ocotp-y := vf610-ocotp.o 25 + obj-$(CONFIG_MESON_EFUSE) += nvmem_meson_efuse.o 26 + nvmem_meson_efuse-y := meson-efuse.o
+93
drivers/nvmem/meson-efuse.c
··· 1 + /* 2 + * Amlogic eFuse Driver 3 + * 4 + * Copyright (c) 2016 Endless Computers, Inc. 5 + * Author: Carlo Caione <carlo@endlessm.com> 6 + * 7 + * This program is free software; you can redistribute it and/or modify it 8 + * under the terms of version 2 of the GNU General Public License as 9 + * published by the Free Software Foundation. 10 + * 11 + * This program is distributed in the hope that it will be useful, but WITHOUT 12 + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 13 + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 14 + * more details. 15 + */ 16 + 17 + #include <linux/module.h> 18 + #include <linux/nvmem-provider.h> 19 + #include <linux/of.h> 20 + #include <linux/platform_device.h> 21 + 22 + #include <linux/firmware/meson/meson_sm.h> 23 + 24 + static int meson_efuse_read(void *context, unsigned int offset, 25 + void *val, size_t bytes) 26 + { 27 + u8 *buf = val; 28 + int ret; 29 + 30 + ret = meson_sm_call_read(buf, SM_EFUSE_READ, offset, 31 + bytes, 0, 0, 0); 32 + if (ret < 0) 33 + return ret; 34 + 35 + return 0; 36 + } 37 + 38 + static struct nvmem_config econfig = { 39 + .name = "meson-efuse", 40 + .owner = THIS_MODULE, 41 + .stride = 1, 42 + .word_size = 1, 43 + .read_only = true, 44 + }; 45 + 46 + static const struct of_device_id meson_efuse_match[] = { 47 + { .compatible = "amlogic,meson-gxbb-efuse", }, 48 + { /* sentinel */ }, 49 + }; 50 + MODULE_DEVICE_TABLE(of, meson_efuse_match); 51 + 52 + static int meson_efuse_probe(struct platform_device *pdev) 53 + { 54 + struct nvmem_device *nvmem; 55 + unsigned int size; 56 + 57 + if (meson_sm_call(SM_EFUSE_USER_MAX, &size, 0, 0, 0, 0, 0) < 0) 58 + return -EINVAL; 59 + 60 + econfig.dev = &pdev->dev; 61 + econfig.reg_read = meson_efuse_read; 62 + econfig.size = size; 63 + 64 + nvmem = nvmem_register(&econfig); 65 + if (IS_ERR(nvmem)) 66 + return PTR_ERR(nvmem); 67 + 68 + platform_set_drvdata(pdev, nvmem); 69 + 70 + return 0; 71 + } 72 + 73 + 
static int meson_efuse_remove(struct platform_device *pdev) 74 + { 75 + struct nvmem_device *nvmem = platform_get_drvdata(pdev); 76 + 77 + return nvmem_unregister(nvmem); 78 + } 79 + 80 + static struct platform_driver meson_efuse_driver = { 81 + .probe = meson_efuse_probe, 82 + .remove = meson_efuse_remove, 83 + .driver = { 84 + .name = "meson-efuse", 85 + .of_match_table = meson_efuse_match, 86 + }, 87 + }; 88 + 89 + module_platform_driver(meson_efuse_driver); 90 + 91 + MODULE_AUTHOR("Carlo Caione <carlo@endlessm.com>"); 92 + MODULE_DESCRIPTION("Amlogic Meson NVMEM driver"); 93 + MODULE_LICENSE("GPL v2");
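With `word_size` and `stride` both 1, the nvmem core may request any (offset, bytes) window, which `meson_efuse_read()` forwards to the secure monitor through the shared-memory buffer. A bounds-checked userspace model of that read path (`efuse_read` and `demo_fuse` are illustrative; in the real driver the size check lives in `meson_sm_call_read()` and the fuse data comes from firmware):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* stand-in for the fuse contents normally fetched via the monitor */
static const unsigned char demo_fuse[4] = { 0x11, 0x22, 0x33, 0x44 };

static int efuse_read(const unsigned char *fuse, size_t fuse_size,
		      size_t offset, void *val, size_t bytes)
{
	if (offset > fuse_size || bytes > fuse_size - offset)
		return -1;	/* the real path returns a negative errno */
	memcpy(val, fuse + offset, bytes);
	return 0;
}
```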
+7
drivers/perf/Kconfig
··· 12 12 Say y if you want to use CPU performance monitors on ARM-based 13 13 systems. 14 14 15 + config XGENE_PMU 16 + depends on PERF_EVENTS && ARCH_XGENE 17 + bool "APM X-Gene SoC PMU" 18 + default n 19 + help 20 + Say y if you want to use APM X-Gene SoC performance monitors. 21 + 15 22 endmenu
+1
drivers/perf/Makefile
··· 1 1 obj-$(CONFIG_ARM_PMU) += arm_pmu.o 2 + obj-$(CONFIG_XGENE_PMU) += xgene_pmu.o
+1398
drivers/perf/xgene_pmu.c
··· 1 + /* 2 + * APM X-Gene SoC PMU (Performance Monitor Unit) 3 + * 4 + * Copyright (c) 2016, Applied Micro Circuits Corporation 5 + * Author: Hoan Tran <hotran@apm.com> 6 + * Tai Nguyen <ttnguyen@apm.com> 7 + * 8 + * This program is free software; you can redistribute it and/or modify it 9 + * under the terms of the GNU General Public License as published by the 10 + * Free Software Foundation; either version 2 of the License, or (at your 11 + * option) any later version. 12 + * 13 + * This program is distributed in the hope that it will be useful, 14 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 15 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 16 + * GNU General Public License for more details. 17 + * 18 + * You should have received a copy of the GNU General Public License 19 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 20 + */ 21 + 22 + #include <linux/acpi.h> 23 + #include <linux/clk.h> 24 + #include <linux/cpumask.h> 25 + #include <linux/interrupt.h> 26 + #include <linux/io.h> 27 + #include <linux/mfd/syscon.h> 28 + #include <linux/of_address.h> 29 + #include <linux/of_fdt.h> 30 + #include <linux/of_irq.h> 31 + #include <linux/of_platform.h> 32 + #include <linux/perf_event.h> 33 + #include <linux/platform_device.h> 34 + #include <linux/regmap.h> 35 + #include <linux/slab.h> 36 + 37 + #define CSW_CSWCR 0x0000 38 + #define CSW_CSWCR_DUALMCB_MASK BIT(0) 39 + #define MCBADDRMR 0x0000 40 + #define MCBADDRMR_DUALMCU_MODE_MASK BIT(2) 41 + 42 + #define PCPPMU_INTSTATUS_REG 0x000 43 + #define PCPPMU_INTMASK_REG 0x004 44 + #define PCPPMU_INTMASK 0x0000000F 45 + #define PCPPMU_INTENMASK 0xFFFFFFFF 46 + #define PCPPMU_INTCLRMASK 0xFFFFFFF0 47 + #define PCPPMU_INT_MCU BIT(0) 48 + #define PCPPMU_INT_MCB BIT(1) 49 + #define PCPPMU_INT_L3C BIT(2) 50 + #define PCPPMU_INT_IOB BIT(3) 51 + 52 + #define PMU_MAX_COUNTERS 4 53 + #define PMU_CNT_MAX_PERIOD 0x100000000ULL 54 + #define PMU_OVERFLOW_MASK 0xF 55 + 
#define PMU_PMCR_E BIT(0) 56 + #define PMU_PMCR_P BIT(1) 57 + 58 + #define PMU_PMEVCNTR0 0x000 59 + #define PMU_PMEVCNTR1 0x004 60 + #define PMU_PMEVCNTR2 0x008 61 + #define PMU_PMEVCNTR3 0x00C 62 + #define PMU_PMEVTYPER0 0x400 63 + #define PMU_PMEVTYPER1 0x404 64 + #define PMU_PMEVTYPER2 0x408 65 + #define PMU_PMEVTYPER3 0x40C 66 + #define PMU_PMAMR0 0xA00 67 + #define PMU_PMAMR1 0xA04 68 + #define PMU_PMCNTENSET 0xC00 69 + #define PMU_PMCNTENCLR 0xC20 70 + #define PMU_PMINTENSET 0xC40 71 + #define PMU_PMINTENCLR 0xC60 72 + #define PMU_PMOVSR 0xC80 73 + #define PMU_PMCR 0xE04 74 + 75 + #define to_pmu_dev(p) container_of(p, struct xgene_pmu_dev, pmu) 76 + #define GET_CNTR(ev) (ev->hw.idx) 77 + #define GET_EVENTID(ev) (ev->hw.config & 0xFFULL) 78 + #define GET_AGENTID(ev) (ev->hw.config_base & 0xFFFFFFFFUL) 79 + #define GET_AGENT1ID(ev) ((ev->hw.config_base >> 32) & 0xFFFFFFFFUL) 80 + 81 + struct hw_pmu_info { 82 + u32 type; 83 + u32 enable_mask; 84 + void __iomem *csr; 85 + }; 86 + 87 + struct xgene_pmu_dev { 88 + struct hw_pmu_info *inf; 89 + struct xgene_pmu *parent; 90 + struct pmu pmu; 91 + u8 max_counters; 92 + DECLARE_BITMAP(cntr_assign_mask, PMU_MAX_COUNTERS); 93 + u64 max_period; 94 + const struct attribute_group **attr_groups; 95 + struct perf_event *pmu_counter_event[PMU_MAX_COUNTERS]; 96 + }; 97 + 98 + struct xgene_pmu { 99 + struct device *dev; 100 + int version; 101 + void __iomem *pcppmu_csr; 102 + u32 mcb_active_mask; 103 + u32 mc_active_mask; 104 + cpumask_t cpu; 105 + raw_spinlock_t lock; 106 + struct list_head l3cpmus; 107 + struct list_head iobpmus; 108 + struct list_head mcbpmus; 109 + struct list_head mcpmus; 110 + }; 111 + 112 + struct xgene_pmu_dev_ctx { 113 + char *name; 114 + struct list_head next; 115 + struct xgene_pmu_dev *pmu_dev; 116 + struct hw_pmu_info inf; 117 + }; 118 + 119 + struct xgene_pmu_data { 120 + int id; 121 + u32 data; 122 + }; 123 + 124 + enum xgene_pmu_version { 125 + PCP_PMU_V1 = 1, 126 + PCP_PMU_V2, 127 + }; 128 + 129 
enum xgene_pmu_dev_type {
	PMU_TYPE_L3C = 0,
	PMU_TYPE_IOB,
	PMU_TYPE_MCB,
	PMU_TYPE_MC,
};

/*
 * sysfs format attributes
 */
static ssize_t xgene_pmu_format_show(struct device *dev,
				     struct device_attribute *attr, char *buf)
{
	struct dev_ext_attribute *eattr;

	eattr = container_of(attr, struct dev_ext_attribute, attr);
	return sprintf(buf, "%s\n", (char *) eattr->var);
}

#define XGENE_PMU_FORMAT_ATTR(_name, _config)		\
	(&((struct dev_ext_attribute[]) {		\
		{ .attr = __ATTR(_name, S_IRUGO, xgene_pmu_format_show, NULL), \
		  .var = (void *) _config, }		\
	})[0].attr.attr)

static struct attribute *l3c_pmu_format_attrs[] = {
	XGENE_PMU_FORMAT_ATTR(l3c_eventid, "config:0-7"),
	XGENE_PMU_FORMAT_ATTR(l3c_agentid, "config1:0-9"),
	NULL,
};

static struct attribute *iob_pmu_format_attrs[] = {
	XGENE_PMU_FORMAT_ATTR(iob_eventid, "config:0-7"),
	XGENE_PMU_FORMAT_ATTR(iob_agentid, "config1:0-63"),
	NULL,
};

static struct attribute *mcb_pmu_format_attrs[] = {
	XGENE_PMU_FORMAT_ATTR(mcb_eventid, "config:0-5"),
	XGENE_PMU_FORMAT_ATTR(mcb_agentid, "config1:0-9"),
	NULL,
};

static struct attribute *mc_pmu_format_attrs[] = {
	XGENE_PMU_FORMAT_ATTR(mc_eventid, "config:0-28"),
	NULL,
};

static const struct attribute_group l3c_pmu_format_attr_group = {
	.name = "format",
	.attrs = l3c_pmu_format_attrs,
};

static const struct attribute_group iob_pmu_format_attr_group = {
	.name = "format",
	.attrs = iob_pmu_format_attrs,
};

static const struct attribute_group mcb_pmu_format_attr_group = {
	.name = "format",
	.attrs = mcb_pmu_format_attrs,
};

static const struct attribute_group mc_pmu_format_attr_group = {
	.name = "format",
	.attrs = mc_pmu_format_attrs,
};

/*
 * sysfs event attributes
 */
static ssize_t xgene_pmu_event_show(struct device *dev,
				    struct device_attribute *attr, char *buf)
{
	struct dev_ext_attribute *eattr;

	eattr = container_of(attr, struct dev_ext_attribute, attr);
	return sprintf(buf, "config=0x%lx\n", (unsigned long) eattr->var);
}

#define XGENE_PMU_EVENT_ATTR(_name, _config)		\
	(&((struct dev_ext_attribute[]) {		\
		{ .attr = __ATTR(_name, S_IRUGO, xgene_pmu_event_show, NULL), \
		  .var = (void *) _config, }		\
	})[0].attr.attr)

static struct attribute *l3c_pmu_events_attrs[] = {
	XGENE_PMU_EVENT_ATTR(cycle-count, 0x00),
	XGENE_PMU_EVENT_ATTR(cycle-count-div-64, 0x01),
	XGENE_PMU_EVENT_ATTR(read-hit, 0x02),
	XGENE_PMU_EVENT_ATTR(read-miss, 0x03),
	XGENE_PMU_EVENT_ATTR(write-need-replacement, 0x06),
	XGENE_PMU_EVENT_ATTR(write-not-need-replacement, 0x07),
	XGENE_PMU_EVENT_ATTR(tq-full, 0x08),
	XGENE_PMU_EVENT_ATTR(ackq-full, 0x09),
	XGENE_PMU_EVENT_ATTR(wdb-full, 0x0a),
	XGENE_PMU_EVENT_ATTR(bank-fifo-full, 0x0b),
	XGENE_PMU_EVENT_ATTR(odb-full, 0x0c),
	XGENE_PMU_EVENT_ATTR(wbq-full, 0x0d),
	XGENE_PMU_EVENT_ATTR(bank-conflict-fifo-issue, 0x0e),
	XGENE_PMU_EVENT_ATTR(bank-fifo-issue, 0x0f),
	NULL,
};

static struct attribute *iob_pmu_events_attrs[] = {
	XGENE_PMU_EVENT_ATTR(cycle-count, 0x00),
	XGENE_PMU_EVENT_ATTR(cycle-count-div-64, 0x01),
	XGENE_PMU_EVENT_ATTR(axi0-read, 0x02),
	XGENE_PMU_EVENT_ATTR(axi0-read-partial, 0x03),
	XGENE_PMU_EVENT_ATTR(axi1-read, 0x04),
	XGENE_PMU_EVENT_ATTR(axi1-read-partial, 0x05),
	XGENE_PMU_EVENT_ATTR(csw-read-block, 0x06),
	XGENE_PMU_EVENT_ATTR(csw-read-partial, 0x07),
	XGENE_PMU_EVENT_ATTR(axi0-write, 0x10),
	XGENE_PMU_EVENT_ATTR(axi0-write-partial, 0x11),
	XGENE_PMU_EVENT_ATTR(axi1-write, 0x13),
	XGENE_PMU_EVENT_ATTR(axi1-write-partial, 0x14),
	XGENE_PMU_EVENT_ATTR(csw-inbound-dirty, 0x16),
	NULL,
};

static struct attribute *mcb_pmu_events_attrs[] = {
	XGENE_PMU_EVENT_ATTR(cycle-count, 0x00),
	XGENE_PMU_EVENT_ATTR(cycle-count-div-64, 0x01),
	XGENE_PMU_EVENT_ATTR(csw-read, 0x02),
	XGENE_PMU_EVENT_ATTR(csw-write-request, 0x03),
	XGENE_PMU_EVENT_ATTR(mcb-csw-stall, 0x04),
	XGENE_PMU_EVENT_ATTR(cancel-read-gack, 0x05),
	NULL,
};

static struct attribute *mc_pmu_events_attrs[] = {
	XGENE_PMU_EVENT_ATTR(cycle-count, 0x00),
	XGENE_PMU_EVENT_ATTR(cycle-count-div-64, 0x01),
	XGENE_PMU_EVENT_ATTR(act-cmd-sent, 0x02),
	XGENE_PMU_EVENT_ATTR(pre-cmd-sent, 0x03),
	XGENE_PMU_EVENT_ATTR(rd-cmd-sent, 0x04),
	XGENE_PMU_EVENT_ATTR(rda-cmd-sent, 0x05),
	XGENE_PMU_EVENT_ATTR(wr-cmd-sent, 0x06),
	XGENE_PMU_EVENT_ATTR(wra-cmd-sent, 0x07),
	XGENE_PMU_EVENT_ATTR(pde-cmd-sent, 0x08),
	XGENE_PMU_EVENT_ATTR(sre-cmd-sent, 0x09),
	XGENE_PMU_EVENT_ATTR(prea-cmd-sent, 0x0a),
	XGENE_PMU_EVENT_ATTR(ref-cmd-sent, 0x0b),
	XGENE_PMU_EVENT_ATTR(rd-rda-cmd-sent, 0x0c),
	XGENE_PMU_EVENT_ATTR(wr-wra-cmd-sent, 0x0d),
	XGENE_PMU_EVENT_ATTR(in-rd-collision, 0x0e),
	XGENE_PMU_EVENT_ATTR(in-wr-collision, 0x0f),
	XGENE_PMU_EVENT_ATTR(collision-queue-not-empty, 0x10),
	XGENE_PMU_EVENT_ATTR(collision-queue-full, 0x11),
	XGENE_PMU_EVENT_ATTR(mcu-request, 0x12),
	XGENE_PMU_EVENT_ATTR(mcu-rd-request, 0x13),
	XGENE_PMU_EVENT_ATTR(mcu-hp-rd-request, 0x14),
	XGENE_PMU_EVENT_ATTR(mcu-wr-request, 0x15),
	XGENE_PMU_EVENT_ATTR(mcu-rd-proceed-all, 0x16),
	XGENE_PMU_EVENT_ATTR(mcu-rd-proceed-cancel, 0x17),
	XGENE_PMU_EVENT_ATTR(mcu-rd-response, 0x18),
	XGENE_PMU_EVENT_ATTR(mcu-rd-proceed-speculative-all, 0x19),
	XGENE_PMU_EVENT_ATTR(mcu-rd-proceed-speculative-cancel, 0x1a),
	XGENE_PMU_EVENT_ATTR(mcu-wr-proceed-all, 0x1b),
	XGENE_PMU_EVENT_ATTR(mcu-wr-proceed-cancel, 0x1c),
	NULL,
};

static const struct attribute_group l3c_pmu_events_attr_group = {
	.name = "events",
	.attrs = l3c_pmu_events_attrs,
};

static const struct attribute_group iob_pmu_events_attr_group = {
	.name = "events",
	.attrs = iob_pmu_events_attrs,
};

static const struct attribute_group mcb_pmu_events_attr_group = {
	.name = "events",
	.attrs = mcb_pmu_events_attrs,
};

static const struct attribute_group mc_pmu_events_attr_group = {
	.name = "events",
	.attrs = mc_pmu_events_attrs,
};

/*
 * sysfs cpumask attributes
 */
static ssize_t xgene_pmu_cpumask_show(struct device *dev,
				      struct device_attribute *attr, char *buf)
{
	struct xgene_pmu_dev *pmu_dev = to_pmu_dev(dev_get_drvdata(dev));

	return cpumap_print_to_pagebuf(true, buf, &pmu_dev->parent->cpu);
}

static DEVICE_ATTR(cpumask, S_IRUGO, xgene_pmu_cpumask_show, NULL);

static struct attribute *xgene_pmu_cpumask_attrs[] = {
	&dev_attr_cpumask.attr,
	NULL,
};

static const struct attribute_group pmu_cpumask_attr_group = {
	.attrs = xgene_pmu_cpumask_attrs,
};

/*
 * Per PMU device attribute groups
 */
static const struct attribute_group *l3c_pmu_attr_groups[] = {
	&l3c_pmu_format_attr_group,
	&pmu_cpumask_attr_group,
	&l3c_pmu_events_attr_group,
	NULL
};

static const struct attribute_group *iob_pmu_attr_groups[] = {
	&iob_pmu_format_attr_group,
	&pmu_cpumask_attr_group,
	&iob_pmu_events_attr_group,
	NULL
};

static const struct attribute_group *mcb_pmu_attr_groups[] = {
	&mcb_pmu_format_attr_group,
	&pmu_cpumask_attr_group,
	&mcb_pmu_events_attr_group,
	NULL
};

static const struct attribute_group *mc_pmu_attr_groups[] = {
	&mc_pmu_format_attr_group,
	&pmu_cpumask_attr_group,
	&mc_pmu_events_attr_group,
	NULL
};

static int get_next_avail_cntr(struct xgene_pmu_dev *pmu_dev)
{
	int cntr;

	cntr = find_first_zero_bit(pmu_dev->cntr_assign_mask,
				   pmu_dev->max_counters);
	if (cntr == pmu_dev->max_counters)
		return -ENOSPC;
	set_bit(cntr, pmu_dev->cntr_assign_mask);

	return cntr;
}

static void clear_avail_cntr(struct xgene_pmu_dev *pmu_dev, int cntr)
{
	clear_bit(cntr, pmu_dev->cntr_assign_mask);
}

static inline void xgene_pmu_mask_int(struct xgene_pmu *xgene_pmu)
{
	writel(PCPPMU_INTENMASK, xgene_pmu->pcppmu_csr + PCPPMU_INTMASK_REG);
}

static inline void xgene_pmu_unmask_int(struct xgene_pmu *xgene_pmu)
{
	writel(PCPPMU_INTCLRMASK, xgene_pmu->pcppmu_csr + PCPPMU_INTMASK_REG);
}

static inline u32 xgene_pmu_read_counter(struct xgene_pmu_dev *pmu_dev, int idx)
{
	return readl(pmu_dev->inf->csr + PMU_PMEVCNTR0 + (4 * idx));
}

static inline void
xgene_pmu_write_counter(struct xgene_pmu_dev *pmu_dev, int idx, u32 val)
{
	writel(val, pmu_dev->inf->csr + PMU_PMEVCNTR0 + (4 * idx));
}

static inline void
xgene_pmu_write_evttype(struct xgene_pmu_dev *pmu_dev, int idx, u32 val)
{
	writel(val, pmu_dev->inf->csr + PMU_PMEVTYPER0 + (4 * idx));
}

static inline void
xgene_pmu_write_agentmsk(struct xgene_pmu_dev *pmu_dev, u32 val)
{
	writel(val, pmu_dev->inf->csr + PMU_PMAMR0);
}

static inline void
xgene_pmu_write_agent1msk(struct xgene_pmu_dev *pmu_dev, u32 val)
{
	writel(val, pmu_dev->inf->csr + PMU_PMAMR1);
}

static inline void
xgene_pmu_enable_counter(struct xgene_pmu_dev *pmu_dev, int idx)
{
	u32 val;

	val = readl(pmu_dev->inf->csr + PMU_PMCNTENSET);
	val |= 1 << idx;
	writel(val, pmu_dev->inf->csr + PMU_PMCNTENSET);
}

static inline void
xgene_pmu_disable_counter(struct xgene_pmu_dev *pmu_dev, int idx)
{
	u32 val;

	val = readl(pmu_dev->inf->csr + PMU_PMCNTENCLR);
	val |= 1 << idx;
	writel(val, pmu_dev->inf->csr + PMU_PMCNTENCLR);
}

static inline void
xgene_pmu_enable_counter_int(struct xgene_pmu_dev *pmu_dev, int idx)
{
	u32 val;

	val = readl(pmu_dev->inf->csr + PMU_PMINTENSET);
	val |= 1 << idx;
	writel(val, pmu_dev->inf->csr + PMU_PMINTENSET);
}

static inline void
xgene_pmu_disable_counter_int(struct xgene_pmu_dev *pmu_dev, int idx)
{
	u32 val;

	val = readl(pmu_dev->inf->csr + PMU_PMINTENCLR);
	val |= 1 << idx;
	writel(val, pmu_dev->inf->csr + PMU_PMINTENCLR);
}

static inline void xgene_pmu_reset_counters(struct xgene_pmu_dev *pmu_dev)
{
	u32 val;

	val = readl(pmu_dev->inf->csr + PMU_PMCR);
	val |= PMU_PMCR_P;
	writel(val, pmu_dev->inf->csr + PMU_PMCR);
}

static inline void xgene_pmu_start_counters(struct xgene_pmu_dev *pmu_dev)
{
	u32 val;

	val = readl(pmu_dev->inf->csr + PMU_PMCR);
	val |= PMU_PMCR_E;
	writel(val, pmu_dev->inf->csr + PMU_PMCR);
}

static inline void xgene_pmu_stop_counters(struct xgene_pmu_dev *pmu_dev)
{
	u32 val;

	val = readl(pmu_dev->inf->csr + PMU_PMCR);
	val &= ~PMU_PMCR_E;
	writel(val, pmu_dev->inf->csr + PMU_PMCR);
}

static void xgene_perf_pmu_enable(struct pmu *pmu)
{
	struct xgene_pmu_dev *pmu_dev = to_pmu_dev(pmu);
	int enabled = bitmap_weight(pmu_dev->cntr_assign_mask,
				    pmu_dev->max_counters);

	if (!enabled)
		return;

	xgene_pmu_start_counters(pmu_dev);
}

static void xgene_perf_pmu_disable(struct pmu *pmu)
{
	struct xgene_pmu_dev *pmu_dev = to_pmu_dev(pmu);

	xgene_pmu_stop_counters(pmu_dev);
}

static int xgene_perf_event_init(struct perf_event *event)
{
	struct xgene_pmu_dev *pmu_dev = to_pmu_dev(event->pmu);
	struct hw_perf_event *hw = &event->hw;
	struct perf_event *sibling;

	/* Test the event attr type check for PMU enumeration */
	if (event->attr.type != event->pmu->type)
		return -ENOENT;

	/*
	 * SOC PMU counters are shared across all cores.
	 * Therefore, it does not support per-process mode.
	 * Also, it does not support event sampling mode.
	 */
	if (is_sampling_event(event) || event->attach_state & PERF_ATTACH_TASK)
		return -EINVAL;

	/* SOC counters do not have usr/os/guest/host bits */
	if (event->attr.exclude_user || event->attr.exclude_kernel ||
	    event->attr.exclude_host || event->attr.exclude_guest)
		return -EINVAL;

	if (event->cpu < 0)
		return -EINVAL;
	/*
	 * Many perf core operations (eg. events rotation) operate on a
	 * single CPU context. This is obvious for CPU PMUs, where one
	 * expects the same sets of events being observed on all CPUs,
	 * but can lead to issues for off-core PMUs, where each
	 * event could be theoretically assigned to a different CPU. To
	 * mitigate this, we enforce CPU assignment to one, selected
	 * processor (the one described in the "cpumask" attribute).
	 */
	event->cpu = cpumask_first(&pmu_dev->parent->cpu);

	hw->config = event->attr.config;
	/*
	 * Each bit of the config1 field represents an agent from which the
	 * request for the event comes. The event is counted only if it is
	 * caused by a request from an agent whose bit is cleared.
	 * By default, the event is counted for all agents.
	 */
	hw->config_base = event->attr.config1;

	/*
	 * We must NOT create groups containing mixed PMUs, although software
	 * events are acceptable
	 */
	if (event->group_leader->pmu != event->pmu &&
	    !is_software_event(event->group_leader))
		return -EINVAL;

	list_for_each_entry(sibling, &event->group_leader->sibling_list,
			    group_entry)
		if (sibling->pmu != event->pmu &&
		    !is_software_event(sibling))
			return -EINVAL;

	return 0;
}

static void xgene_perf_enable_event(struct perf_event *event)
{
	struct xgene_pmu_dev *pmu_dev = to_pmu_dev(event->pmu);

	xgene_pmu_write_evttype(pmu_dev, GET_CNTR(event), GET_EVENTID(event));
	xgene_pmu_write_agentmsk(pmu_dev, ~((u32)GET_AGENTID(event)));
	if (pmu_dev->inf->type == PMU_TYPE_IOB)
		xgene_pmu_write_agent1msk(pmu_dev, ~((u32)GET_AGENT1ID(event)));

	xgene_pmu_enable_counter(pmu_dev, GET_CNTR(event));
	xgene_pmu_enable_counter_int(pmu_dev, GET_CNTR(event));
}

static void xgene_perf_disable_event(struct perf_event *event)
{
	struct xgene_pmu_dev *pmu_dev = to_pmu_dev(event->pmu);

	xgene_pmu_disable_counter(pmu_dev, GET_CNTR(event));
	xgene_pmu_disable_counter_int(pmu_dev, GET_CNTR(event));
}

static void xgene_perf_event_set_period(struct perf_event *event)
{
	struct xgene_pmu_dev *pmu_dev = to_pmu_dev(event->pmu);
	struct hw_perf_event *hw = &event->hw;
	/*
	 * The X-Gene PMU counters have a period of 2^32. To account for the
	 * possibility of extreme interrupt latency we program for a period of
	 * half that. Hopefully we can handle the interrupt before another 2^31
	 * events occur and the counter overtakes its previous value.
	 */
	u64 val = 1ULL << 31;

	local64_set(&hw->prev_count, val);
	xgene_pmu_write_counter(pmu_dev, hw->idx, (u32) val);
}

static void xgene_perf_event_update(struct perf_event *event)
{
	struct xgene_pmu_dev *pmu_dev = to_pmu_dev(event->pmu);
	struct hw_perf_event *hw = &event->hw;
	u64 delta, prev_raw_count, new_raw_count;

again:
	prev_raw_count = local64_read(&hw->prev_count);
	new_raw_count = xgene_pmu_read_counter(pmu_dev, GET_CNTR(event));

	if (local64_cmpxchg(&hw->prev_count, prev_raw_count,
			    new_raw_count) != prev_raw_count)
		goto again;

	delta = (new_raw_count - prev_raw_count) & pmu_dev->max_period;

	local64_add(delta, &event->count);
}

static void xgene_perf_read(struct perf_event *event)
{
	xgene_perf_event_update(event);
}

static void xgene_perf_start(struct perf_event *event, int flags)
{
	struct xgene_pmu_dev *pmu_dev = to_pmu_dev(event->pmu);
	struct hw_perf_event *hw = &event->hw;

	if (WARN_ON_ONCE(!(hw->state & PERF_HES_STOPPED)))
		return;

	WARN_ON_ONCE(!(hw->state & PERF_HES_UPTODATE));
	hw->state = 0;

	xgene_perf_event_set_period(event);

	if (flags & PERF_EF_RELOAD) {
		u64 prev_raw_count = local64_read(&hw->prev_count);

		xgene_pmu_write_counter(pmu_dev, GET_CNTR(event),
					(u32) prev_raw_count);
	}

	xgene_perf_enable_event(event);
	perf_event_update_userpage(event);
}

static void xgene_perf_stop(struct perf_event *event, int flags)
{
	struct hw_perf_event *hw = &event->hw;
	u64 config;

	if (hw->state & PERF_HES_UPTODATE)
		return;

	xgene_perf_disable_event(event);
	WARN_ON_ONCE(hw->state & PERF_HES_STOPPED);
	hw->state |= PERF_HES_STOPPED;

	if (hw->state & PERF_HES_UPTODATE)
		return;

	config = hw->config;
	xgene_perf_read(event);
	hw->state |= PERF_HES_UPTODATE;
}

static int xgene_perf_add(struct perf_event *event, int flags)
{
	struct xgene_pmu_dev *pmu_dev = to_pmu_dev(event->pmu);
	struct hw_perf_event *hw = &event->hw;

	hw->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;

	/* Allocate an event counter */
	hw->idx = get_next_avail_cntr(pmu_dev);
	if (hw->idx < 0)
		return -EAGAIN;

	/* Update counter event pointer for Interrupt handler */
	pmu_dev->pmu_counter_event[hw->idx] = event;

	if (flags & PERF_EF_START)
		xgene_perf_start(event, PERF_EF_RELOAD);

	return 0;
}

static void xgene_perf_del(struct perf_event *event, int flags)
{
	struct xgene_pmu_dev *pmu_dev = to_pmu_dev(event->pmu);
	struct hw_perf_event *hw = &event->hw;

	xgene_perf_stop(event, PERF_EF_UPDATE);

	/* clear the assigned counter */
	clear_avail_cntr(pmu_dev, GET_CNTR(event));

	perf_event_update_userpage(event);
	pmu_dev->pmu_counter_event[hw->idx] = NULL;
}

static int xgene_init_perf(struct xgene_pmu_dev *pmu_dev, char *name)
{
	struct xgene_pmu *xgene_pmu;

	pmu_dev->max_period = PMU_CNT_MAX_PERIOD - 1;
	/* First version PMU supports only single event counter */
	xgene_pmu = pmu_dev->parent;
	if (xgene_pmu->version == PCP_PMU_V1)
		pmu_dev->max_counters = 1;
	else
		pmu_dev->max_counters = PMU_MAX_COUNTERS;

	/* Perf driver registration */
	pmu_dev->pmu = (struct pmu) {
		.attr_groups	= pmu_dev->attr_groups,
		.task_ctx_nr	= perf_invalid_context,
		.pmu_enable	= xgene_perf_pmu_enable,
		.pmu_disable	= xgene_perf_pmu_disable,
		.event_init	= xgene_perf_event_init,
		.add		= xgene_perf_add,
		.del		= xgene_perf_del,
		.start		= xgene_perf_start,
		.stop		= xgene_perf_stop,
		.read		= xgene_perf_read,
	};

	/* Hardware counter init */
	xgene_pmu_stop_counters(pmu_dev);
	xgene_pmu_reset_counters(pmu_dev);

	return perf_pmu_register(&pmu_dev->pmu, name, -1);
}

static int
xgene_pmu_dev_add(struct xgene_pmu *xgene_pmu, struct xgene_pmu_dev_ctx *ctx)
{
	struct device *dev = xgene_pmu->dev;
	struct xgene_pmu_dev *pmu;
	int rc;

	pmu = devm_kzalloc(dev, sizeof(*pmu), GFP_KERNEL);
	if (!pmu)
		return -ENOMEM;
	pmu->parent = xgene_pmu;
	pmu->inf = &ctx->inf;
	ctx->pmu_dev = pmu;

	switch (pmu->inf->type) {
	case PMU_TYPE_L3C:
		pmu->attr_groups = l3c_pmu_attr_groups;
		break;
	case PMU_TYPE_IOB:
		pmu->attr_groups = iob_pmu_attr_groups;
		break;
	case PMU_TYPE_MCB:
		if (!(xgene_pmu->mcb_active_mask & pmu->inf->enable_mask))
			goto dev_err;
		pmu->attr_groups = mcb_pmu_attr_groups;
		break;
	case PMU_TYPE_MC:
		if (!(xgene_pmu->mc_active_mask & pmu->inf->enable_mask))
			goto dev_err;
		pmu->attr_groups = mc_pmu_attr_groups;
		break;
	default:
		return -EINVAL;
	}

	rc = xgene_init_perf(pmu, ctx->name);
	if (rc) {
		dev_err(dev, "%s PMU: Failed to init perf driver\n", ctx->name);
		goto dev_err;
	}

	dev_info(dev, "%s PMU registered\n", ctx->name);

	return rc;

dev_err:
	devm_kfree(dev, pmu);
	return -ENODEV;
}

static void _xgene_pmu_isr(int irq, struct xgene_pmu_dev *pmu_dev)
{
	struct xgene_pmu *xgene_pmu = pmu_dev->parent;
	u32 pmovsr;
	int idx;

	pmovsr = readl(pmu_dev->inf->csr + PMU_PMOVSR) & PMU_OVERFLOW_MASK;
	if (!pmovsr)
		return;

	/* Clear interrupt flag */
	if (xgene_pmu->version == PCP_PMU_V1)
		writel(0x0, pmu_dev->inf->csr + PMU_PMOVSR);
	else
		writel(pmovsr, pmu_dev->inf->csr + PMU_PMOVSR);

	for (idx = 0; idx < PMU_MAX_COUNTERS; idx++) {
		struct perf_event *event = pmu_dev->pmu_counter_event[idx];
		int overflowed = pmovsr & BIT(idx);

		/* Ignore if we don't have an event. */
		if (!event || !overflowed)
			continue;
		xgene_perf_event_update(event);
		xgene_perf_event_set_period(event);
	}
}

static irqreturn_t xgene_pmu_isr(int irq, void *dev_id)
{
	struct xgene_pmu_dev_ctx *ctx;
	struct xgene_pmu *xgene_pmu = dev_id;
	unsigned long flags;
	u32 val;

	raw_spin_lock_irqsave(&xgene_pmu->lock, flags);

	/* Get Interrupt PMU source */
	val = readl(xgene_pmu->pcppmu_csr + PCPPMU_INTSTATUS_REG);
	if (val & PCPPMU_INT_MCU) {
		list_for_each_entry(ctx, &xgene_pmu->mcpmus, next) {
			_xgene_pmu_isr(irq, ctx->pmu_dev);
		}
	}
	if (val & PCPPMU_INT_MCB) {
		list_for_each_entry(ctx, &xgene_pmu->mcbpmus, next) {
			_xgene_pmu_isr(irq, ctx->pmu_dev);
		}
	}
	if (val & PCPPMU_INT_L3C) {
		list_for_each_entry(ctx, &xgene_pmu->l3cpmus, next) {
			_xgene_pmu_isr(irq, ctx->pmu_dev);
		}
	}
	if (val & PCPPMU_INT_IOB) {
		list_for_each_entry(ctx, &xgene_pmu->iobpmus, next) {
			_xgene_pmu_isr(irq, ctx->pmu_dev);
		}
	}

	raw_spin_unlock_irqrestore(&xgene_pmu->lock, flags);

	return IRQ_HANDLED;
}

static int acpi_pmu_probe_active_mcb_mcu(struct xgene_pmu *xgene_pmu,
					 struct platform_device *pdev)
{
	void __iomem *csw_csr, *mcba_csr, *mcbb_csr;
	struct resource *res;
	unsigned int reg;

	res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
	csw_csr = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(csw_csr)) {
		dev_err(&pdev->dev, "ioremap failed for CSW CSR resource\n");
		return PTR_ERR(csw_csr);
	}

	res = platform_get_resource(pdev, IORESOURCE_MEM, 2);
	mcba_csr = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(mcba_csr)) {
		dev_err(&pdev->dev, "ioremap failed for MCBA CSR resource\n");
		return PTR_ERR(mcba_csr);
	}

	res = platform_get_resource(pdev, IORESOURCE_MEM, 3);
	mcbb_csr = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(mcbb_csr)) {
		dev_err(&pdev->dev, "ioremap failed for MCBB CSR resource\n");
		return PTR_ERR(mcbb_csr);
	}

	reg = readl(csw_csr + CSW_CSWCR);
	if (reg & CSW_CSWCR_DUALMCB_MASK) {
		/* Dual MCB active */
		xgene_pmu->mcb_active_mask = 0x3;
		/* Probe all active MC(s) */
		reg = readl(mcbb_csr + CSW_CSWCR);
		xgene_pmu->mc_active_mask =
			(reg & MCBADDRMR_DUALMCU_MODE_MASK) ? 0xF : 0x5;
	} else {
		/* Single MCB active */
		xgene_pmu->mcb_active_mask = 0x1;
		/* Probe all active MC(s) */
		reg = readl(mcba_csr + CSW_CSWCR);
		xgene_pmu->mc_active_mask =
			(reg & MCBADDRMR_DUALMCU_MODE_MASK) ? 0x3 : 0x1;
	}

	return 0;
}

static int fdt_pmu_probe_active_mcb_mcu(struct xgene_pmu *xgene_pmu,
					struct platform_device *pdev)
{
	struct regmap *csw_map, *mcba_map, *mcbb_map;
	struct device_node *np = pdev->dev.of_node;
	unsigned int reg;

	csw_map = syscon_regmap_lookup_by_phandle(np, "regmap-csw");
	if (IS_ERR(csw_map)) {
		dev_err(&pdev->dev, "unable to get syscon regmap csw\n");
		return PTR_ERR(csw_map);
	}

	mcba_map = syscon_regmap_lookup_by_phandle(np, "regmap-mcba");
	if (IS_ERR(mcba_map)) {
		dev_err(&pdev->dev, "unable to get syscon regmap mcba\n");
		return PTR_ERR(mcba_map);
	}

	mcbb_map = syscon_regmap_lookup_by_phandle(np, "regmap-mcbb");
	if (IS_ERR(mcbb_map)) {
		dev_err(&pdev->dev, "unable to get syscon regmap mcbb\n");
		return PTR_ERR(mcbb_map);
	}

	if (regmap_read(csw_map, CSW_CSWCR, &reg))
		return -EINVAL;

	if (reg & CSW_CSWCR_DUALMCB_MASK) {
		/* Dual MCB active */
		xgene_pmu->mcb_active_mask = 0x3;
		/* Probe all active MC(s) */
		if (regmap_read(mcbb_map, MCBADDRMR, &reg))
			return 0;
		xgene_pmu->mc_active_mask =
			(reg & MCBADDRMR_DUALMCU_MODE_MASK) ? 0xF : 0x5;
	} else {
		/* Single MCB active */
		xgene_pmu->mcb_active_mask = 0x1;
		/* Probe all active MC(s) */
		if (regmap_read(mcba_map, MCBADDRMR, &reg))
			return 0;
		xgene_pmu->mc_active_mask =
			(reg & MCBADDRMR_DUALMCU_MODE_MASK) ? 0x3 : 0x1;
	}

	return 0;
}

static int xgene_pmu_probe_active_mcb_mcu(struct xgene_pmu *xgene_pmu,
					  struct platform_device *pdev)
{
	if (has_acpi_companion(&pdev->dev))
		return acpi_pmu_probe_active_mcb_mcu(xgene_pmu, pdev);
	return fdt_pmu_probe_active_mcb_mcu(xgene_pmu, pdev);
}

static char *xgene_pmu_dev_name(struct device *dev, u32 type, int id)
{
	switch (type) {
	case PMU_TYPE_L3C:
		return devm_kasprintf(dev, GFP_KERNEL, "l3c%d", id);
	case PMU_TYPE_IOB:
		return devm_kasprintf(dev, GFP_KERNEL, "iob%d", id);
	case PMU_TYPE_MCB:
		return devm_kasprintf(dev, GFP_KERNEL, "mcb%d", id);
	case PMU_TYPE_MC:
		return devm_kasprintf(dev, GFP_KERNEL, "mc%d", id);
	default:
		return devm_kasprintf(dev, GFP_KERNEL, "unknown");
	}
}

#if defined(CONFIG_ACPI)
static int acpi_pmu_dev_add_resource(struct acpi_resource *ares, void *data)
{
	struct resource *res = data;

	if (ares->type == ACPI_RESOURCE_TYPE_FIXED_MEMORY32)
		acpi_dev_resource_memory(ares, res);

	/* Always tell the ACPI core to skip this resource */
	return 1;
}

static struct
xgene_pmu_dev_ctx *acpi_get_pmu_hw_inf(struct xgene_pmu *xgene_pmu,
					struct acpi_device *adev, u32 type)
{
	struct device *dev = xgene_pmu->dev;
	struct list_head resource_list;
	struct xgene_pmu_dev_ctx *ctx;
	const union acpi_object *obj;
	struct hw_pmu_info *inf;
	void __iomem *dev_csr;
	struct resource res;
	int enable_bit;
	int rc;

	ctx = devm_kzalloc(dev, sizeof(*ctx), GFP_KERNEL);
	if (!ctx)
		return NULL;

	INIT_LIST_HEAD(&resource_list);
	rc = acpi_dev_get_resources(adev, &resource_list,
				    acpi_pmu_dev_add_resource, &res);
	acpi_dev_free_resource_list(&resource_list);
	if (rc < 0 || IS_ERR(&res)) {
		dev_err(dev, "PMU type %d: No resource address found\n", type);
		goto err;
	}

	dev_csr = devm_ioremap_resource(dev, &res);
	if (IS_ERR(dev_csr)) {
		dev_err(dev, "PMU type %d: Fail to map resource\n", type);
		goto err;
	}

	/* A PMU device node without enable-bit-index is always enabled */
	rc = acpi_dev_get_property(adev, "enable-bit-index",
				   ACPI_TYPE_INTEGER, &obj);
	if (rc < 0)
		enable_bit = 0;
	else
		enable_bit = (int) obj->integer.value;

	ctx->name = xgene_pmu_dev_name(dev, type, enable_bit);
	if (!ctx->name) {
		dev_err(dev, "PMU type %d: Fail to get device name\n", type);
		goto err;
	}
	inf = &ctx->inf;
	inf->type = type;
	inf->csr = dev_csr;
	inf->enable_mask = 1 << enable_bit;

	return ctx;
err:
	devm_kfree(dev, ctx);
	return NULL;
}

static acpi_status acpi_pmu_dev_add(acpi_handle handle, u32 level,
				    void *data, void **return_value)
{
	struct xgene_pmu *xgene_pmu = data;
	struct xgene_pmu_dev_ctx *ctx;
	struct acpi_device *adev;

	if (acpi_bus_get_device(handle, &adev))
		return AE_OK;
	if (acpi_bus_get_status(adev) || !adev->status.present)
		return AE_OK;

	if (!strcmp(acpi_device_hid(adev), "APMC0D5D"))
		ctx = acpi_get_pmu_hw_inf(xgene_pmu, adev, PMU_TYPE_L3C);
	else if (!strcmp(acpi_device_hid(adev), "APMC0D5E"))
		ctx = acpi_get_pmu_hw_inf(xgene_pmu, adev, PMU_TYPE_IOB);
	else if (!strcmp(acpi_device_hid(adev), "APMC0D5F"))
		ctx = acpi_get_pmu_hw_inf(xgene_pmu, adev, PMU_TYPE_MCB);
	else if (!strcmp(acpi_device_hid(adev), "APMC0D60"))
		ctx = acpi_get_pmu_hw_inf(xgene_pmu, adev, PMU_TYPE_MC);
	else
		ctx = NULL;

	if (!ctx)
		return AE_OK;

	if (xgene_pmu_dev_add(xgene_pmu, ctx)) {
		/* Can't add the PMU device, skip it */
		devm_kfree(xgene_pmu->dev, ctx);
		return AE_OK;
	}

	switch (ctx->inf.type) {
	case PMU_TYPE_L3C:
		list_add(&ctx->next, &xgene_pmu->l3cpmus);
		break;
	case PMU_TYPE_IOB:
		list_add(&ctx->next, &xgene_pmu->iobpmus);
		break;
	case PMU_TYPE_MCB:
		list_add(&ctx->next, &xgene_pmu->mcbpmus);
		break;
	case PMU_TYPE_MC:
		list_add(&ctx->next, &xgene_pmu->mcpmus);
		break;
	}
	return AE_OK;
}

static int acpi_pmu_probe_pmu_dev(struct xgene_pmu *xgene_pmu,
				  struct platform_device *pdev)
{
	struct device *dev = xgene_pmu->dev;
	acpi_handle handle;
	acpi_status status;

	handle = ACPI_HANDLE(dev);
	if (!handle)
		return -EINVAL;

	status = acpi_walk_namespace(ACPI_TYPE_DEVICE, handle, 1,
				     acpi_pmu_dev_add, NULL, xgene_pmu, NULL);
	if (ACPI_FAILURE(status)) {
		dev_err(dev, "failed to probe PMU devices\n");
		return -ENODEV;
	}

	return 0;
}
#else
static int acpi_pmu_probe_pmu_dev(struct xgene_pmu *xgene_pmu,
				  struct platform_device *pdev)
{
	return 0;
}
#endif

static struct
xgene_pmu_dev_ctx *fdt_get_pmu_hw_inf(struct xgene_pmu *xgene_pmu,
				      struct device_node *np, u32 type)
{
	struct device *dev = xgene_pmu->dev;
	struct xgene_pmu_dev_ctx *ctx;
	struct hw_pmu_info *inf;
	void __iomem *dev_csr;
	struct resource res;
	int enable_bit;
	int rc;

	ctx = devm_kzalloc(dev, sizeof(*ctx), GFP_KERNEL);
	if (!ctx)
		return NULL;
	rc = of_address_to_resource(np, 0, &res);
	if (rc < 0) {
		dev_err(dev, "PMU type %d: No resource address found\n", type);
		goto err;
	}
	dev_csr = devm_ioremap_resource(dev, &res);
	if (IS_ERR(dev_csr)) {
		dev_err(dev, "PMU type %d: Fail to map resource\n", type);
		goto err;
	}

	/* A PMU device node without enable-bit-index is always enabled */
	if (of_property_read_u32(np, "enable-bit-index", &enable_bit))
		enable_bit = 0;

	ctx->name = xgene_pmu_dev_name(dev, type, enable_bit);
	if (!ctx->name) {
		dev_err(dev, "PMU type %d: Fail to get device name\n", type);
		goto err;
	}
	inf = &ctx->inf;
	inf->type = type;
	inf->csr = dev_csr;
	inf->enable_mask = 1 << enable_bit;

	return ctx;
err:
	devm_kfree(dev, ctx);
	return NULL;
}

static int fdt_pmu_probe_pmu_dev(struct xgene_pmu *xgene_pmu,
				 struct platform_device *pdev)
{
	struct xgene_pmu_dev_ctx *ctx;
	struct device_node *np;

	for_each_child_of_node(pdev->dev.of_node, np) {
		if (!of_device_is_available(np))
			continue;

		if (of_device_is_compatible(np, "apm,xgene-pmu-l3c"))
			ctx = fdt_get_pmu_hw_inf(xgene_pmu, np, PMU_TYPE_L3C);
		else if (of_device_is_compatible(np, "apm,xgene-pmu-iob"))
			ctx = fdt_get_pmu_hw_inf(xgene_pmu, np, PMU_TYPE_IOB);
		else if (of_device_is_compatible(np, "apm,xgene-pmu-mcb"))
			ctx = fdt_get_pmu_hw_inf(xgene_pmu, np, PMU_TYPE_MCB);
		else if (of_device_is_compatible(np, "apm,xgene-pmu-mc"))
			ctx = fdt_get_pmu_hw_inf(xgene_pmu, np, PMU_TYPE_MC);
		else
			ctx = NULL;

		if (!ctx)
			continue;

		if (xgene_pmu_dev_add(xgene_pmu, ctx)) {
			/* Can't add the PMU device, skip it */
			devm_kfree(xgene_pmu->dev, ctx);
			continue;
		}

		switch (ctx->inf.type) {
		case PMU_TYPE_L3C:
			list_add(&ctx->next, &xgene_pmu->l3cpmus);
			break;
		case PMU_TYPE_IOB:
			list_add(&ctx->next, &xgene_pmu->iobpmus);
			break;
		case PMU_TYPE_MCB:
			list_add(&ctx->next, &xgene_pmu->mcbpmus);
			break;
		case PMU_TYPE_MC:
			list_add(&ctx->next, &xgene_pmu->mcpmus);
			break;
		}
	}

	return 0;
}

static int xgene_pmu_probe_pmu_dev(struct xgene_pmu *xgene_pmu,
				   struct platform_device *pdev)
{
	if (has_acpi_companion(&pdev->dev))
		return acpi_pmu_probe_pmu_dev(xgene_pmu, pdev);
	return fdt_pmu_probe_pmu_dev(xgene_pmu, pdev);
}

static const struct xgene_pmu_data xgene_pmu_data = {
	.id = PCP_PMU_V1,
};

static const struct xgene_pmu_data xgene_pmu_v2_data = {
	.id = PCP_PMU_V2,
};

static const struct of_device_id xgene_pmu_of_match[] = {
	{ .compatible = "apm,xgene-pmu",	.data = &xgene_pmu_data },
	{ .compatible = "apm,xgene-pmu-v2",	.data = &xgene_pmu_v2_data },
	{},
};
MODULE_DEVICE_TABLE(of, xgene_pmu_of_match);
#ifdef CONFIG_ACPI
static const struct acpi_device_id xgene_pmu_acpi_match[] = {
	{"APMC0D5B", PCP_PMU_V1},
	{"APMC0D5C", PCP_PMU_V2},
	{},
};
MODULE_DEVICE_TABLE(acpi, xgene_pmu_acpi_match);
#endif

static int xgene_pmu_probe(struct platform_device *pdev)
{
	const struct xgene_pmu_data *dev_data;
	const struct of_device_id *of_id;
	struct xgene_pmu *xgene_pmu;
	struct resource *res;
	int irq, rc;
	int version;

	xgene_pmu = devm_kzalloc(&pdev->dev, sizeof(*xgene_pmu), GFP_KERNEL);
	if (!xgene_pmu)
		return -ENOMEM;
	xgene_pmu->dev = &pdev->dev;
	platform_set_drvdata(pdev, xgene_pmu);

	version = -EINVAL;
	of_id = of_match_device(xgene_pmu_of_match, &pdev->dev);
	if (of_id) {
		dev_data = (const struct xgene_pmu_data *) of_id->data;
		version = dev_data->id;
	}

#ifdef CONFIG_ACPI
	if (ACPI_COMPANION(&pdev->dev)) {
		const struct acpi_device_id *acpi_id;

		acpi_id = acpi_match_device(xgene_pmu_acpi_match, &pdev->dev);
		if (acpi_id)
			version = (int) acpi_id->driver_data;
	}
#endif
	if (version < 0)
		return -ENODEV;

	INIT_LIST_HEAD(&xgene_pmu->l3cpmus);
	INIT_LIST_HEAD(&xgene_pmu->iobpmus);
	INIT_LIST_HEAD(&xgene_pmu->mcbpmus);
	INIT_LIST_HEAD(&xgene_pmu->mcpmus);

	xgene_pmu->version = version;
	dev_info(&pdev->dev, "X-Gene PMU version %d\n", xgene_pmu->version);

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	xgene_pmu->pcppmu_csr = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(xgene_pmu->pcppmu_csr)) {
		dev_err(&pdev->dev, "ioremap failed for PCP PMU resource\n");
		rc = PTR_ERR(xgene_pmu->pcppmu_csr);
		goto err;
	}

	irq = platform_get_irq(pdev, 0);
	if (irq < 0) {
		dev_err(&pdev->dev, "No IRQ resource\n");
		rc = -EINVAL;
		goto err;
	}
	rc = devm_request_irq(&pdev->dev, irq, xgene_pmu_isr,
			      IRQF_NOBALANCING | IRQF_NO_THREAD,
			      dev_name(&pdev->dev), xgene_pmu);
	if (rc) {
		dev_err(&pdev->dev, "Could not request IRQ %d\n", irq);
		goto err;
	}

	raw_spin_lock_init(&xgene_pmu->lock);

	/* Check for active MCBs and MCUs */
	rc = xgene_pmu_probe_active_mcb_mcu(xgene_pmu, pdev);
	if (rc) {
		dev_warn(&pdev->dev, "Unknown MCB/MCU active status\n");
		xgene_pmu->mcb_active_mask = 0x1;
		xgene_pmu->mc_active_mask = 0x1;
	}

	/* Pick one core to use for cpumask attributes */
	cpumask_set_cpu(smp_processor_id(), &xgene_pmu->cpu);

	/* Make sure that the overflow interrupt is handled by this CPU */
	rc = irq_set_affinity(irq, &xgene_pmu->cpu);
	if (rc) {
		dev_err(&pdev->dev, "Failed to set interrupt
affinity!\n"); 1333 + goto err; 1334 + } 1335 + 1336 + /* Walk through the tree for all PMU perf devices */ 1337 + rc = xgene_pmu_probe_pmu_dev(xgene_pmu, pdev); 1338 + if (rc) { 1339 + dev_err(&pdev->dev, "No PMU perf devices found!\n"); 1340 + goto err; 1341 + } 1342 + 1343 + /* Enable interrupt */ 1344 + xgene_pmu_unmask_int(xgene_pmu); 1345 + 1346 + return 0; 1347 + 1348 + err: 1349 + if (xgene_pmu->pcppmu_csr) 1350 + devm_iounmap(&pdev->dev, xgene_pmu->pcppmu_csr); 1351 + devm_kfree(&pdev->dev, xgene_pmu); 1352 + 1353 + return rc; 1354 + } 1355 + 1356 + static void 1357 + xgene_pmu_dev_cleanup(struct xgene_pmu *xgene_pmu, struct list_head *pmus) 1358 + { 1359 + struct xgene_pmu_dev_ctx *ctx; 1360 + struct device *dev = xgene_pmu->dev; 1361 + struct xgene_pmu_dev *pmu_dev; 1362 + 1363 + list_for_each_entry(ctx, pmus, next) { 1364 + pmu_dev = ctx->pmu_dev; 1365 + if (pmu_dev->inf->csr) 1366 + devm_iounmap(dev, pmu_dev->inf->csr); 1367 + devm_kfree(dev, ctx); 1368 + devm_kfree(dev, pmu_dev); 1369 + } 1370 + } 1371 + 1372 + static int xgene_pmu_remove(struct platform_device *pdev) 1373 + { 1374 + struct xgene_pmu *xgene_pmu = dev_get_drvdata(&pdev->dev); 1375 + 1376 + xgene_pmu_dev_cleanup(xgene_pmu, &xgene_pmu->l3cpmus); 1377 + xgene_pmu_dev_cleanup(xgene_pmu, &xgene_pmu->iobpmus); 1378 + xgene_pmu_dev_cleanup(xgene_pmu, &xgene_pmu->mcbpmus); 1379 + xgene_pmu_dev_cleanup(xgene_pmu, &xgene_pmu->mcpmus); 1380 + 1381 + if (xgene_pmu->pcppmu_csr) 1382 + devm_iounmap(&pdev->dev, xgene_pmu->pcppmu_csr); 1383 + devm_kfree(&pdev->dev, xgene_pmu); 1384 + 1385 + return 0; 1386 + } 1387 + 1388 + static struct platform_driver xgene_pmu_driver = { 1389 + .probe = xgene_pmu_probe, 1390 + .remove = xgene_pmu_remove, 1391 + .driver = { 1392 + .name = "xgene-pmu", 1393 + .of_match_table = xgene_pmu_of_match, 1394 + .acpi_match_table = ACPI_PTR(xgene_pmu_acpi_match), 1395 + }, 1396 + }; 1397 + 1398 + builtin_platform_driver(xgene_pmu_driver);
+12 -11
drivers/pinctrl/mvebu/pinctrl-orion.c
···
 	return 0;
 }

-#define V(f5181l, f5182, f5281) \
-	((f5181l << 0) | (f5182 << 1) | (f5281 << 2))
+#define V(f5181, f5182, f5281) \
+	((f5181 << 0) | (f5182 << 1) | (f5281 << 2))

 enum orion_variant {
-	V_5181L = V(1, 0, 0),
+	V_5181 = V(1, 0, 0),
 	V_5182 = V(0, 1, 0),
 	V_5281 = V(0, 0, 1),
 	V_ALL = V(1, 1, 1),
···
 		MPP_VAR_FUNCTION(0x0, "gpio", NULL, V_ALL),
 		MPP_VAR_FUNCTION(0x2, "pci", "req5", V_ALL),
 		MPP_VAR_FUNCTION(0x4, "nand", "re0", V_5182 | V_5281),
-		MPP_VAR_FUNCTION(0x5, "pci-1", "clk", V_5181L),
+		MPP_VAR_FUNCTION(0x5, "pci-1", "clk", V_5181),
 		MPP_VAR_FUNCTION(0x5, "sata0", "act", V_5182)),
 	MPP_MODE(7,
 		MPP_VAR_FUNCTION(0x0, "gpio", NULL, V_ALL),
 		MPP_VAR_FUNCTION(0x2, "pci", "gnt5", V_ALL),
 		MPP_VAR_FUNCTION(0x4, "nand", "we0", V_5182 | V_5281),
-		MPP_VAR_FUNCTION(0x5, "pci-1", "clk", V_5181L),
+		MPP_VAR_FUNCTION(0x5, "pci-1", "clk", V_5181),
 		MPP_VAR_FUNCTION(0x5, "sata1", "act", V_5182)),
 	MPP_MODE(8,
 		MPP_VAR_FUNCTION(0x0, "gpio", NULL, V_ALL),
···
 	MPP_FUNC_CTRL(0, 19, NULL, orion_mpp_ctrl),
 };

-static struct pinctrl_gpio_range mv88f5181l_gpio_ranges[] = {
+static struct pinctrl_gpio_range mv88f5181_gpio_ranges[] = {
 	MPP_GPIO_RANGE(0, 0, 0, 16),
 };
···
 	MPP_GPIO_RANGE(0, 0, 0, 16),
 };

-static struct mvebu_pinctrl_soc_info mv88f5181l_info = {
-	.variant = V_5181L,
+static struct mvebu_pinctrl_soc_info mv88f5181_info = {
+	.variant = V_5181,
 	.controls = orion_mpp_controls,
 	.ncontrols = ARRAY_SIZE(orion_mpp_controls),
 	.modes = orion_mpp_modes,
 	.nmodes = ARRAY_SIZE(orion_mpp_modes),
-	.gpioranges = mv88f5181l_gpio_ranges,
-	.ngpioranges = ARRAY_SIZE(mv88f5181l_gpio_ranges),
+	.gpioranges = mv88f5181_gpio_ranges,
+	.ngpioranges = ARRAY_SIZE(mv88f5181_gpio_ranges),
 };

 static struct mvebu_pinctrl_soc_info mv88f5182_info = {
···
  * muxing, they are identical.
  */
 static const struct of_device_id orion_pinctrl_of_match[] = {
-	{ .compatible = "marvell,88f5181l-pinctrl", .data = &mv88f5181l_info },
+	{ .compatible = "marvell,88f5181-pinctrl", .data = &mv88f5181_info },
+	{ .compatible = "marvell,88f5181l-pinctrl", .data = &mv88f5181_info },
 	{ .compatible = "marvell,88f5182-pinctrl", .data = &mv88f5182_info },
 	{ .compatible = "marvell,88f5281-pinctrl", .data = &mv88f5281_info },
 	{ }
+65
drivers/reset/Kconfig
···

 if RESET_CONTROLLER

+config RESET_ATH79
+	bool "AR71xx Reset Driver" if COMPILE_TEST
+	default ATH79
+	help
+	  This enables the ATH79 reset controller driver that supports the
+	  AR71xx SoC reset controller.
+
+config RESET_BERLIN
+	bool "Berlin Reset Driver" if COMPILE_TEST
+	default ARCH_BERLIN
+	help
+	  This enables the reset controller driver for Marvell Berlin SoCs.
+
+config RESET_LPC18XX
+	bool "LPC18xx/43xx Reset Driver" if COMPILE_TEST
+	default ARCH_LPC18XX
+	help
+	  This enables the reset controller driver for NXP LPC18xx/43xx SoCs.
+
+config RESET_MESON
+	bool "Meson Reset Driver" if COMPILE_TEST
+	default ARCH_MESON
+	help
+	  This enables the reset driver for Amlogic Meson SoCs.
+
 config RESET_OXNAS
 	bool
+
+config RESET_PISTACHIO
+	bool "Pistachio Reset Driver" if COMPILE_TEST
+	default MACH_PISTACHIO
+	help
+	  This enables the reset driver for ImgTec Pistachio SoCs.
+
+config RESET_SOCFPGA
+	bool "SoCFPGA Reset Driver" if COMPILE_TEST
+	default ARCH_SOCFPGA
+	help
+	  This enables the reset controller driver for Altera SoCFPGAs.
+
+config RESET_STM32
+	bool "STM32 Reset Driver" if COMPILE_TEST
+	default ARCH_STM32
+	help
+	  This enables the RCC reset controller driver for STM32 MCUs.
+
+config RESET_SUNXI
+	bool "Allwinner SoCs Reset Driver" if COMPILE_TEST && !ARCH_SUNXI
+	default ARCH_SUNXI
+	help
+	  This enables the reset driver for Allwinner SoCs.

 config TI_SYSCON_RESET
 	tristate "TI SYSCON Reset Driver"
···
 	  memory-mapped reset registers as part of a syscon device node. If
 	  you wish to use the reset framework for such memory-mapped devices,
 	  say Y here. Otherwise, say N.
+
+config RESET_UNIPHIER
+	tristate "Reset controller driver for UniPhier SoCs"
+	depends on ARCH_UNIPHIER || COMPILE_TEST
+	depends on OF && MFD_SYSCON
+	default ARCH_UNIPHIER
+	help
+	  Support for reset controllers on UniPhier SoCs.
+	  Say Y if you want to control reset signals provided by System Control
+	  block, Media I/O block, Peripheral Block.
+
+config RESET_ZYNQ
+	bool "ZYNQ Reset Driver" if COMPILE_TEST
+	default ARCH_ZYNQ
+	help
+	  This enables the reset controller driver for Xilinx Zynq SoCs.

 source "drivers/reset/sti/Kconfig"
 source "drivers/reset/hisilicon/Kconfig"
+11 -9
drivers/reset/Makefile
···
 obj-y += core.o
-obj-$(CONFIG_ARCH_LPC18XX) += reset-lpc18xx.o
-obj-$(CONFIG_ARCH_SOCFPGA) += reset-socfpga.o
-obj-$(CONFIG_ARCH_BERLIN) += reset-berlin.o
-obj-$(CONFIG_MACH_PISTACHIO) += reset-pistachio.o
-obj-$(CONFIG_ARCH_MESON) += reset-meson.o
-obj-$(CONFIG_ARCH_SUNXI) += reset-sunxi.o
+obj-y += hisilicon/
 obj-$(CONFIG_ARCH_STI) += sti/
-obj-$(CONFIG_ARCH_HISI) += hisilicon/
-obj-$(CONFIG_ARCH_ZYNQ) += reset-zynq.o
-obj-$(CONFIG_ATH79) += reset-ath79.o
+obj-$(CONFIG_RESET_ATH79) += reset-ath79.o
+obj-$(CONFIG_RESET_BERLIN) += reset-berlin.o
+obj-$(CONFIG_RESET_LPC18XX) += reset-lpc18xx.o
+obj-$(CONFIG_RESET_MESON) += reset-meson.o
 obj-$(CONFIG_RESET_OXNAS) += reset-oxnas.o
+obj-$(CONFIG_RESET_PISTACHIO) += reset-pistachio.o
+obj-$(CONFIG_RESET_SOCFPGA) += reset-socfpga.o
+obj-$(CONFIG_RESET_STM32) += reset-stm32.o
+obj-$(CONFIG_RESET_SUNXI) += reset-sunxi.o
 obj-$(CONFIG_TI_SYSCON_RESET) += reset-ti-syscon.o
+obj-$(CONFIG_RESET_UNIPHIER) += reset-uniphier.o
+obj-$(CONFIG_RESET_ZYNQ) += reset-zynq.o
+11 -1
drivers/reset/core.c
···
  */
 int reset_control_reset(struct reset_control *rstc)
 {
-	if (WARN_ON(rstc->shared))
+	if (WARN_ON(IS_ERR_OR_NULL(rstc)) ||
+	    WARN_ON(rstc->shared))
 		return -EINVAL;

 	if (rstc->rcdev->ops->reset)
···
  */
 int reset_control_assert(struct reset_control *rstc)
 {
+	if (WARN_ON(IS_ERR_OR_NULL(rstc)))
+		return -EINVAL;
+
 	if (!rstc->rcdev->ops->assert)
 		return -ENOTSUPP;
···
  */
 int reset_control_deassert(struct reset_control *rstc)
 {
+	if (WARN_ON(IS_ERR_OR_NULL(rstc)))
+		return -EINVAL;
+
 	if (!rstc->rcdev->ops->deassert)
 		return -ENOTSUPP;
···
  */
 int reset_control_status(struct reset_control *rstc)
 {
+	if (WARN_ON(IS_ERR_OR_NULL(rstc)))
+		return -EINVAL;
+
 	if (rstc->rcdev->ops->status)
 		return rstc->rcdev->ops->status(rstc->rcdev, rstc->id);
+2 -1
drivers/reset/hisilicon/Kconfig
···
 config COMMON_RESET_HI6220
 	tristate "Hi6220 Reset Driver"
-	depends on (ARCH_HISI && RESET_CONTROLLER)
+	depends on ARCH_HISI || COMPILE_TEST
+	default ARCH_HISI
 	help
 	  Build the Hisilicon Hi6220 reset driver.
+1
drivers/reset/reset-ath79.c
···
  * GNU General Public License for more details.
  */

+#include <linux/io.h>
 #include <linux/module.h>
 #include <linux/platform_device.h>
 #include <linux/reset-controller.h>
+9 -10
drivers/reset/reset-socfpga.c
···
 struct socfpga_reset_data {
 	spinlock_t			lock;
 	void __iomem			*membase;
-	u32				modrst_offset;
 	struct reset_controller_dev	rcdev;
 };
···

 	spin_lock_irqsave(&data->lock, flags);

-	reg = readl(data->membase + data->modrst_offset + (bank * NR_BANKS));
-	writel(reg | BIT(offset), data->membase + data->modrst_offset +
-		(bank * NR_BANKS));
+	reg = readl(data->membase + (bank * NR_BANKS));
+	writel(reg | BIT(offset), data->membase + (bank * NR_BANKS));
 	spin_unlock_irqrestore(&data->lock, flags);

 	return 0;
···

 	spin_lock_irqsave(&data->lock, flags);

-	reg = readl(data->membase + data->modrst_offset + (bank * NR_BANKS));
-	writel(reg & ~BIT(offset), data->membase + data->modrst_offset +
-		(bank * NR_BANKS));
+	reg = readl(data->membase + (bank * NR_BANKS));
+	writel(reg & ~BIT(offset), data->membase + (bank * NR_BANKS));

 	spin_unlock_irqrestore(&data->lock, flags);
···
 	int offset = id % BITS_PER_LONG;
 	u32 reg;

-	reg = readl(data->membase + data->modrst_offset + (bank * NR_BANKS));
+	reg = readl(data->membase + (bank * NR_BANKS));

 	return !(reg & BIT(offset));
 }
···
 	struct resource *res;
 	struct device *dev = &pdev->dev;
 	struct device_node *np = dev->of_node;
+	u32 modrst_offset;

 	/*
 	 * The binding was mainlined without the required property.
···
 	if (IS_ERR(data->membase))
 		return PTR_ERR(data->membase);

-	if (of_property_read_u32(np, "altr,modrst-offset", &data->modrst_offset)) {
+	if (of_property_read_u32(np, "altr,modrst-offset", &modrst_offset)) {
 		dev_warn(dev, "missing altr,modrst-offset property, assuming 0x10!\n");
-		data->modrst_offset = 0x10;
+		modrst_offset = 0x10;
 	}
+	data->membase += modrst_offset;

 	spin_lock_init(&data->lock);
+108
drivers/reset/reset-stm32.c
/*
 * Copyright (C) Maxime Coquelin 2015
 * Author: Maxime Coquelin <mcoquelin.stm32@gmail.com>
 * License terms: GNU General Public License (GPL), version 2
 *
 * Heavily based on sunxi driver from Maxime Ripard.
 */

#include <linux/err.h>
#include <linux/io.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/platform_device.h>
#include <linux/reset-controller.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct stm32_reset_data {
	spinlock_t			lock;
	void __iomem			*membase;
	struct reset_controller_dev	rcdev;
};

static int stm32_reset_assert(struct reset_controller_dev *rcdev,
			      unsigned long id)
{
	struct stm32_reset_data *data = container_of(rcdev,
						     struct stm32_reset_data,
						     rcdev);
	int bank = id / BITS_PER_LONG;
	int offset = id % BITS_PER_LONG;
	unsigned long flags;
	u32 reg;

	spin_lock_irqsave(&data->lock, flags);

	reg = readl(data->membase + (bank * 4));
	writel(reg | BIT(offset), data->membase + (bank * 4));

	spin_unlock_irqrestore(&data->lock, flags);

	return 0;
}

static int stm32_reset_deassert(struct reset_controller_dev *rcdev,
				unsigned long id)
{
	struct stm32_reset_data *data = container_of(rcdev,
						     struct stm32_reset_data,
						     rcdev);
	int bank = id / BITS_PER_LONG;
	int offset = id % BITS_PER_LONG;
	unsigned long flags;
	u32 reg;

	spin_lock_irqsave(&data->lock, flags);

	reg = readl(data->membase + (bank * 4));
	writel(reg & ~BIT(offset), data->membase + (bank * 4));

	spin_unlock_irqrestore(&data->lock, flags);

	return 0;
}

static const struct reset_control_ops stm32_reset_ops = {
	.assert		= stm32_reset_assert,
	.deassert	= stm32_reset_deassert,
};

static const struct of_device_id stm32_reset_dt_ids[] = {
	{ .compatible = "st,stm32-rcc", },
	{ /* sentinel */ },
};

static int stm32_reset_probe(struct platform_device *pdev)
{
	struct stm32_reset_data *data;
	struct resource *res;

	data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
	if (!data)
		return -ENOMEM;

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	data->membase = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(data->membase))
		return PTR_ERR(data->membase);

	spin_lock_init(&data->lock);

	data->rcdev.owner = THIS_MODULE;
	data->rcdev.nr_resets = resource_size(res) * 8;
	data->rcdev.ops = &stm32_reset_ops;
	data->rcdev.of_node = pdev->dev.of_node;

	return devm_reset_controller_register(&pdev->dev, &data->rcdev);
}

static struct platform_driver stm32_reset_driver = {
	.probe	= stm32_reset_probe,
	.driver = {
		.name		= "stm32-rcc-reset",
		.of_match_table	= stm32_reset_dt_ids,
	},
};
builtin_platform_driver(stm32_reset_driver);
+440
drivers/reset/reset-uniphier.c
/*
 * Copyright (C) 2016 Socionext Inc.
 *   Author: Masahiro Yamada <yamada.masahiro@socionext.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 */

#include <linux/mfd/syscon.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>
#include <linux/reset-controller.h>

struct uniphier_reset_data {
	unsigned int id;
	unsigned int reg;
	unsigned int bit;
	unsigned int flags;
#define UNIPHIER_RESET_ACTIVE_LOW	BIT(0)
};

#define UNIPHIER_RESET_ID_END		(unsigned int)(-1)

#define UNIPHIER_RESET_END				\
	{ .id = UNIPHIER_RESET_ID_END }

#define UNIPHIER_RESET(_id, _reg, _bit)			\
	{						\
		.id = (_id),				\
		.reg = (_reg),				\
		.bit = (_bit),				\
	}

#define UNIPHIER_RESETX(_id, _reg, _bit)		\
	{						\
		.id = (_id),				\
		.reg = (_reg),				\
		.bit = (_bit),				\
		.flags = UNIPHIER_RESET_ACTIVE_LOW,	\
	}

/* System reset data */
#define UNIPHIER_SLD3_SYS_RESET_STDMAC(id)		\
	UNIPHIER_RESETX((id), 0x2000, 10)

#define UNIPHIER_LD11_SYS_RESET_STDMAC(id)		\
	UNIPHIER_RESETX((id), 0x200c, 8)

#define UNIPHIER_PRO4_SYS_RESET_GIO(id)			\
	UNIPHIER_RESETX((id), 0x2000, 6)

#define UNIPHIER_LD20_SYS_RESET_GIO(id)			\
	UNIPHIER_RESETX((id), 0x200c, 5)

#define UNIPHIER_PRO4_SYS_RESET_USB3(id, ch)		\
	UNIPHIER_RESETX((id), 0x2000 + 0x4 * (ch), 17)

const struct uniphier_reset_data uniphier_sld3_sys_reset_data[] = {
	UNIPHIER_SLD3_SYS_RESET_STDMAC(8),	/* Ether, HSC, MIO */
	UNIPHIER_RESET_END,
};

const struct uniphier_reset_data uniphier_pro4_sys_reset_data[] = {
	UNIPHIER_SLD3_SYS_RESET_STDMAC(8),	/* HSC, MIO, RLE */
	UNIPHIER_PRO4_SYS_RESET_GIO(12),	/* Ether, SATA, USB3 */
	UNIPHIER_PRO4_SYS_RESET_USB3(14, 0),
	UNIPHIER_PRO4_SYS_RESET_USB3(15, 1),
	UNIPHIER_RESET_END,
};

const struct uniphier_reset_data uniphier_pro5_sys_reset_data[] = {
	UNIPHIER_SLD3_SYS_RESET_STDMAC(8),	/* HSC */
	UNIPHIER_PRO4_SYS_RESET_GIO(12),	/* PCIe, USB3 */
	UNIPHIER_PRO4_SYS_RESET_USB3(14, 0),
	UNIPHIER_PRO4_SYS_RESET_USB3(15, 1),
	UNIPHIER_RESET_END,
};

const struct uniphier_reset_data uniphier_pxs2_sys_reset_data[] = {
	UNIPHIER_SLD3_SYS_RESET_STDMAC(8),	/* HSC, RLE */
	UNIPHIER_PRO4_SYS_RESET_USB3(14, 0),
	UNIPHIER_PRO4_SYS_RESET_USB3(15, 1),
	UNIPHIER_RESETX(16, 0x2014, 4),		/* USB30-PHY0 */
	UNIPHIER_RESETX(17, 0x2014, 0),		/* USB30-PHY1 */
	UNIPHIER_RESETX(18, 0x2014, 2),		/* USB30-PHY2 */
	UNIPHIER_RESETX(20, 0x2014, 5),		/* USB31-PHY0 */
	UNIPHIER_RESETX(21, 0x2014, 1),		/* USB31-PHY1 */
	UNIPHIER_RESETX(28, 0x2014, 12),	/* SATA */
	UNIPHIER_RESET(29, 0x2014, 8),		/* SATA-PHY (active high) */
	UNIPHIER_RESET_END,
};

const struct uniphier_reset_data uniphier_ld11_sys_reset_data[] = {
	UNIPHIER_LD11_SYS_RESET_STDMAC(8),	/* HSC, MIO */
	UNIPHIER_RESET_END,
};

const struct uniphier_reset_data uniphier_ld20_sys_reset_data[] = {
	UNIPHIER_LD11_SYS_RESET_STDMAC(8),	/* HSC */
	UNIPHIER_LD20_SYS_RESET_GIO(12),	/* PCIe, USB3 */
	UNIPHIER_RESETX(16, 0x200c, 12),	/* USB30-PHY0 */
	UNIPHIER_RESETX(17, 0x200c, 13),	/* USB30-PHY1 */
	UNIPHIER_RESETX(18, 0x200c, 14),	/* USB30-PHY2 */
	UNIPHIER_RESETX(19, 0x200c, 15),	/* USB30-PHY3 */
	UNIPHIER_RESET_END,
};

/* Media I/O reset data */
#define UNIPHIER_MIO_RESET_SD(id, ch)			\
	UNIPHIER_RESETX((id), 0x110 + 0x200 * (ch), 0)

#define UNIPHIER_MIO_RESET_SD_BRIDGE(id, ch)		\
	UNIPHIER_RESETX((id), 0x110 + 0x200 * (ch), 26)

#define UNIPHIER_MIO_RESET_EMMC_HW_RESET(id, ch)	\
	UNIPHIER_RESETX((id), 0x80 + 0x200 * (ch), 0)

#define UNIPHIER_MIO_RESET_USB2(id, ch)			\
	UNIPHIER_RESETX((id), 0x114 + 0x200 * (ch), 0)

#define UNIPHIER_MIO_RESET_USB2_BRIDGE(id, ch)		\
	UNIPHIER_RESETX((id), 0x110 + 0x200 * (ch), 24)

#define UNIPHIER_MIO_RESET_DMAC(id)			\
	UNIPHIER_RESETX((id), 0x110, 17)

const struct uniphier_reset_data uniphier_sld3_mio_reset_data[] = {
	UNIPHIER_MIO_RESET_SD(0, 0),
	UNIPHIER_MIO_RESET_SD(1, 1),
	UNIPHIER_MIO_RESET_SD(2, 2),
	UNIPHIER_MIO_RESET_SD_BRIDGE(3, 0),
	UNIPHIER_MIO_RESET_SD_BRIDGE(4, 1),
	UNIPHIER_MIO_RESET_SD_BRIDGE(5, 2),
	UNIPHIER_MIO_RESET_EMMC_HW_RESET(6, 1),
	UNIPHIER_MIO_RESET_DMAC(7),
	UNIPHIER_MIO_RESET_USB2(8, 0),
	UNIPHIER_MIO_RESET_USB2(9, 1),
	UNIPHIER_MIO_RESET_USB2(10, 2),
	UNIPHIER_MIO_RESET_USB2(11, 3),
	UNIPHIER_MIO_RESET_USB2_BRIDGE(12, 0),
	UNIPHIER_MIO_RESET_USB2_BRIDGE(13, 1),
	UNIPHIER_MIO_RESET_USB2_BRIDGE(14, 2),
	UNIPHIER_MIO_RESET_USB2_BRIDGE(15, 3),
	UNIPHIER_RESET_END,
};

const struct uniphier_reset_data uniphier_pro5_mio_reset_data[] = {
	UNIPHIER_MIO_RESET_SD(0, 0),
	UNIPHIER_MIO_RESET_SD(1, 1),
	UNIPHIER_MIO_RESET_EMMC_HW_RESET(6, 1),
	UNIPHIER_RESET_END,
};

/* Peripheral reset data */
#define UNIPHIER_PERI_RESET_UART(id, ch)		\
	UNIPHIER_RESETX((id), 0x114, 19 + (ch))

#define UNIPHIER_PERI_RESET_I2C(id, ch)			\
	UNIPHIER_RESETX((id), 0x114, 5 + (ch))

#define UNIPHIER_PERI_RESET_FI2C(id, ch)		\
	UNIPHIER_RESETX((id), 0x114, 24 + (ch))

const struct uniphier_reset_data uniphier_ld4_peri_reset_data[] = {
	UNIPHIER_PERI_RESET_UART(0, 0),
	UNIPHIER_PERI_RESET_UART(1, 1),
	UNIPHIER_PERI_RESET_UART(2, 2),
	UNIPHIER_PERI_RESET_UART(3, 3),
	UNIPHIER_PERI_RESET_I2C(4, 0),
	UNIPHIER_PERI_RESET_I2C(5, 1),
	UNIPHIER_PERI_RESET_I2C(6, 2),
	UNIPHIER_PERI_RESET_I2C(7, 3),
	UNIPHIER_PERI_RESET_I2C(8, 4),
	UNIPHIER_RESET_END,
};

const struct uniphier_reset_data uniphier_pro4_peri_reset_data[] = {
	UNIPHIER_PERI_RESET_UART(0, 0),
	UNIPHIER_PERI_RESET_UART(1, 1),
	UNIPHIER_PERI_RESET_UART(2, 2),
	UNIPHIER_PERI_RESET_UART(3, 3),
	UNIPHIER_PERI_RESET_FI2C(4, 0),
	UNIPHIER_PERI_RESET_FI2C(5, 1),
	UNIPHIER_PERI_RESET_FI2C(6, 2),
	UNIPHIER_PERI_RESET_FI2C(7, 3),
	UNIPHIER_PERI_RESET_FI2C(8, 4),
	UNIPHIER_PERI_RESET_FI2C(9, 5),
	UNIPHIER_PERI_RESET_FI2C(10, 6),
	UNIPHIER_RESET_END,
};

/* core implementation */
struct uniphier_reset_priv {
	struct reset_controller_dev rcdev;
	struct device *dev;
	struct regmap *regmap;
	const struct uniphier_reset_data *data;
};

#define to_uniphier_reset_priv(_rcdev) \
	container_of(_rcdev, struct uniphier_reset_priv, rcdev)

static int uniphier_reset_update(struct reset_controller_dev *rcdev,
				 unsigned long id, int assert)
{
	struct uniphier_reset_priv *priv = to_uniphier_reset_priv(rcdev);
	const struct uniphier_reset_data *p;

	for (p = priv->data; p->id != UNIPHIER_RESET_ID_END; p++) {
		unsigned int mask, val;

		if (p->id != id)
			continue;

		mask = BIT(p->bit);

		if (assert)
			val = mask;
		else
			val = ~mask;

		if (p->flags & UNIPHIER_RESET_ACTIVE_LOW)
			val = ~val;

		return regmap_write_bits(priv->regmap, p->reg, mask, val);
	}

	dev_err(priv->dev, "reset_id=%lu was not handled\n", id);
	return -EINVAL;
}

static int uniphier_reset_assert(struct reset_controller_dev *rcdev,
				 unsigned long id)
{
	return uniphier_reset_update(rcdev, id, 1);
}

static int uniphier_reset_deassert(struct reset_controller_dev *rcdev,
				   unsigned long id)
{
	return uniphier_reset_update(rcdev, id, 0);
}

static int uniphier_reset_status(struct reset_controller_dev *rcdev,
				 unsigned long id)
{
	struct uniphier_reset_priv *priv = to_uniphier_reset_priv(rcdev);
	const struct uniphier_reset_data *p;

	for (p = priv->data; p->id != UNIPHIER_RESET_ID_END; p++) {
		unsigned int val;
		int ret, asserted;

		if (p->id != id)
			continue;

		ret = regmap_read(priv->regmap, p->reg, &val);
		if (ret)
			return ret;

		asserted = !!(val & BIT(p->bit));

		if (p->flags & UNIPHIER_RESET_ACTIVE_LOW)
			asserted = !asserted;

		return asserted;
	}

	dev_err(priv->dev, "reset_id=%lu was not found\n", id);
	return -EINVAL;
}

static const struct reset_control_ops uniphier_reset_ops = {
	.assert = uniphier_reset_assert,
	.deassert = uniphier_reset_deassert,
	.status = uniphier_reset_status,
};

static int uniphier_reset_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	struct uniphier_reset_priv *priv;
	const struct uniphier_reset_data *p, *data;
	struct regmap *regmap;
	struct device_node *parent;
	unsigned int nr_resets = 0;

	data = of_device_get_match_data(dev);
	if (WARN_ON(!data))
		return -EINVAL;

	parent = of_get_parent(dev->of_node); /* parent should be syscon node */
	regmap = syscon_node_to_regmap(parent);
	of_node_put(parent);
	if (IS_ERR(regmap)) {
		dev_err(dev, "failed to get regmap (error %ld)\n",
			PTR_ERR(regmap));
		return PTR_ERR(regmap);
	}

	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;

	for (p = data; p->id != UNIPHIER_RESET_ID_END; p++)
		nr_resets = max(nr_resets, p->id + 1);

	priv->rcdev.ops = &uniphier_reset_ops;
	priv->rcdev.owner = dev->driver->owner;
	priv->rcdev.of_node = dev->of_node;
	priv->rcdev.nr_resets = nr_resets;
	priv->dev = dev;
	priv->regmap = regmap;
	priv->data = data;

	return devm_reset_controller_register(&pdev->dev, &priv->rcdev);
}

static const struct of_device_id uniphier_reset_match[] = {
	/* System reset */
	{
		.compatible = "socionext,uniphier-sld3-reset",
		.data = uniphier_sld3_sys_reset_data,
	},
	{
		.compatible = "socionext,uniphier-ld4-reset",
		.data = uniphier_sld3_sys_reset_data,
	},
	{
		.compatible = "socionext,uniphier-pro4-reset",
		.data = uniphier_pro4_sys_reset_data,
	},
	{
		.compatible = "socionext,uniphier-sld8-reset",
		.data = uniphier_sld3_sys_reset_data,
	},
	{
		.compatible = "socionext,uniphier-pro5-reset",
		.data = uniphier_pro5_sys_reset_data,
	},
	{
		.compatible = "socionext,uniphier-pxs2-reset",
		.data = uniphier_pxs2_sys_reset_data,
	},
	{
		.compatible = "socionext,uniphier-ld11-reset",
		.data = uniphier_ld11_sys_reset_data,
	},
	{
		.compatible = "socionext,uniphier-ld20-reset",
		.data = uniphier_ld20_sys_reset_data,
	},
	/* Media I/O reset */
	{
		.compatible = "socionext,uniphier-sld3-mio-reset",
		.data = uniphier_sld3_mio_reset_data,
	},
	{
		.compatible = "socionext,uniphier-ld4-mio-reset",
		.data = uniphier_sld3_mio_reset_data,
	},
	{
		.compatible = "socionext,uniphier-pro4-mio-reset",
		.data = uniphier_sld3_mio_reset_data,
	},
	{
		.compatible = "socionext,uniphier-sld8-mio-reset",
		.data = uniphier_sld3_mio_reset_data,
	},
	{
		.compatible = "socionext,uniphier-pro5-mio-reset",
		.data = uniphier_pro5_mio_reset_data,
	},
	{
		.compatible = "socionext,uniphier-pxs2-mio-reset",
		.data = uniphier_pro5_mio_reset_data,
	},
	{
		.compatible = "socionext,uniphier-ld11-mio-reset",
		.data = uniphier_sld3_mio_reset_data,
	},
	{
		.compatible = "socionext,uniphier-ld20-mio-reset",
		.data = uniphier_pro5_mio_reset_data,
	},
	/* Peripheral reset */
	{
		.compatible = "socionext,uniphier-ld4-peri-reset",
		.data = uniphier_ld4_peri_reset_data,
	},
	{
		.compatible = "socionext,uniphier-pro4-peri-reset",
		.data = uniphier_pro4_peri_reset_data,
	},
	{
		.compatible = "socionext,uniphier-sld8-peri-reset",
		.data = uniphier_ld4_peri_reset_data,
	},
	{
		.compatible = "socionext,uniphier-pro5-peri-reset",
		.data = uniphier_pro4_peri_reset_data,
	},
	{
		.compatible = "socionext,uniphier-pxs2-peri-reset",
		.data = uniphier_pro4_peri_reset_data,
	},
	{
		.compatible = "socionext,uniphier-ld11-peri-reset",
		.data = uniphier_pro4_peri_reset_data,
	},
	{
		.compatible = "socionext,uniphier-ld20-peri-reset",
		.data = uniphier_pro4_peri_reset_data,
	},
	{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, uniphier_reset_match);

static struct platform_driver uniphier_reset_driver = {
	.probe = uniphier_reset_probe,
	.driver = {
		.name = "uniphier-reset",
		.of_match_table = uniphier_reset_match,
	},
};
module_platform_driver(uniphier_reset_driver);

MODULE_AUTHOR("Masahiro Yamada <yamada.masahiro@socionext.com>");
MODULE_DESCRIPTION("UniPhier Reset Controller Driver");
MODULE_LICENSE("GPL");
+1 -1
drivers/soc/mediatek/mtk-pmic-wrap.c
···
 {
 	unsigned long timeout;
 
-	timeout = jiffies + usecs_to_jiffies(255);
+	timeout = jiffies + usecs_to_jiffies(10000);
 
 	do {
 		if (time_after(jiffies, timeout))
+156 -111
drivers/soc/qcom/smd.c
···
 
 /**
  * struct qcom_smd_edge - representing a remote processor
- * @smd:		handle to qcom_smd
+ * @dev:		device for this edge
  * @of_node:		of_node handle for information related to this edge
  * @edge_id:		identifier of this edge
  * @remote_pid:		identifier of remote processor
···
 * @state_work:		work item for edge state changes
 */
 struct qcom_smd_edge {
-	struct qcom_smd *smd;
+	struct device dev;
+
 	struct device_node *of_node;
 	unsigned edge_id;
 	unsigned remote_pid;
···
 	struct work_struct scan_work;
 	struct work_struct state_work;
 };
+
+#define to_smd_edge(d) container_of(d, struct qcom_smd_edge, dev)
 
 /*
  * SMD channel states.
···
 	void *drvdata;
 
 	struct list_head list;
-	struct list_head dev_list;
-};
-
-/**
- * struct qcom_smd - smd struct
- * @dev: device struct
- * @num_edges: number of entries in @edges
- * @edges: array of edges to be handled
- */
-struct qcom_smd {
-	struct device *dev;
-
-	unsigned num_edges;
-	struct qcom_smd_edge edges[0];
 };
 
 /*
···
 	SET_TX_CHANNEL_FLAG(channel, fSTATE, 1);
 	SET_TX_CHANNEL_FLAG(channel, fBLOCKREADINTR, 1);
 	SET_TX_CHANNEL_INFO(channel, head, 0);
-	SET_TX_CHANNEL_INFO(channel, tail, 0);
+	SET_RX_CHANNEL_INFO(channel, tail, 0);
 
 	qcom_smd_signal_channel(channel);
 
···
 	if (channel->state == state)
 		return;
 
-	dev_dbg(edge->smd->dev, "set_state(%s, %d)\n", channel->name, state);
+	dev_dbg(&edge->dev, "set_state(%s, %d)\n", channel->name, state);
 
 	SET_TX_CHANNEL_FLAG(channel, fDSR, is_open);
 	SET_TX_CHANNEL_FLAG(channel, fCTS, is_open);
···
 	struct qcom_smd_device *qsdev = to_smd_device(dev);
 	struct qcom_smd_driver *qsdrv = to_smd_driver(dev);
 	struct qcom_smd_channel *channel = qsdev->channel;
-	struct qcom_smd_channel *tmp;
-	struct qcom_smd_channel *ch;
 
 	qcom_smd_channel_set_state(channel, SMD_CHANNEL_CLOSING);
 
···
 	if (qsdrv->remove)
 		qsdrv->remove(qsdev);
 
-	/*
-	 * The client is now gone, close and release all channels associated
-	 * with this sdev
-	 */
-	list_for_each_entry_safe(ch, tmp, &channel->dev_list, dev_list) {
-		qcom_smd_channel_close(ch);
-		list_del(&ch->dev_list);
-		ch->qsdev = NULL;
-	}
+	/* The client is now gone, close the primary channel */
+	qcom_smd_channel_close(channel);
+	channel->qsdev = NULL;
 
 	return 0;
 }
···
 	struct qcom_smd_device *qsdev;
 	struct qcom_smd_edge *edge = channel->edge;
 	struct device_node *node;
-	struct qcom_smd *smd = edge->smd;
 	int ret;
 
 	if (channel->qsdev)
 		return -EEXIST;
 
-	dev_dbg(smd->dev, "registering '%s'\n", channel->name);
+	dev_dbg(&edge->dev, "registering '%s'\n", channel->name);
 
 	qsdev = kzalloc(sizeof(*qsdev), GFP_KERNEL);
 	if (!qsdev)
···
 		     edge->of_node->name,
 		     node ? node->name : channel->name);
 
-	qsdev->dev.parent = smd->dev;
+	qsdev->dev.parent = &edge->dev;
 	qsdev->dev.bus = &qcom_smd_bus;
 	qsdev->dev.release = qcom_smd_release_device;
 	qsdev->dev.of_node = node;
···
 
 	ret = device_register(&qsdev->dev);
 	if (ret) {
-		dev_err(smd->dev, "device_register failed: %d\n", ret);
+		dev_err(&edge->dev, "device_register failed: %d\n", ret);
 		put_device(&qsdev->dev);
 	}
 
···
  *
  * Returns a channel handle on success, or -EPROBE_DEFER if the channel isn't
  * ready.
+ *
+ * Any channels returned must be closed with a call to qcom_smd_close_channel()
  */
 struct qcom_smd_channel *qcom_smd_open_channel(struct qcom_smd_channel *parent,
 					       const char *name,
···
 		return ERR_PTR(ret);
 	}
 
-	/*
-	 * Append the list of channel to the channels associated with the sdev
-	 */
-	list_add_tail(&channel->dev_list, &sdev->channel->dev_list);
-
 	return channel;
 }
 EXPORT_SYMBOL(qcom_smd_open_channel);
+
+/**
+ * qcom_smd_close_channel() - close an additionally opened channel
+ * @channel:	channel handle, returned by qcom_smd_open_channel()
+ */
+void qcom_smd_close_channel(struct qcom_smd_channel *channel)
+{
+	qcom_smd_channel_close(channel);
+	channel->qsdev = NULL;
+}
+EXPORT_SYMBOL(qcom_smd_close_channel);
 
 /*
  * Allocate the qcom_smd_channel object for a newly found smd channel,
···
 					char *name)
 {
 	struct qcom_smd_channel *channel;
-	struct qcom_smd *smd = edge->smd;
 	size_t fifo_size;
 	size_t info_size;
 	void *fifo_base;
 	void *info;
 	int ret;
 
-	channel = devm_kzalloc(smd->dev, sizeof(*channel), GFP_KERNEL);
+	channel = devm_kzalloc(&edge->dev, sizeof(*channel), GFP_KERNEL);
 	if (!channel)
 		return ERR_PTR(-ENOMEM);
 
-	INIT_LIST_HEAD(&channel->dev_list);
 	channel->edge = edge;
-	channel->name = devm_kstrdup(smd->dev, name, GFP_KERNEL);
+	channel->name = devm_kstrdup(&edge->dev, name, GFP_KERNEL);
 	if (!channel->name)
 		return ERR_PTR(-ENOMEM);
 
···
 	} else if (info_size == 2 * sizeof(struct smd_channel_info)) {
 		channel->info = info;
 	} else {
-		dev_err(smd->dev,
+		dev_err(&edge->dev,
 			"channel info of size %zu not supported\n", info_size);
 		ret = -EINVAL;
 		goto free_name_and_channel;
···
 	/* The channel consist of a rx and tx fifo of equal size */
 	fifo_size /= 2;
 
-	dev_dbg(smd->dev, "new channel '%s' info-size: %zu fifo-size: %zu\n",
+	dev_dbg(&edge->dev, "new channel '%s' info-size: %zu fifo-size: %zu\n",
 		name, info_size, fifo_size);
 
 	channel->tx_fifo = fifo_base;
···
 	return channel;
 
 free_name_and_channel:
-	devm_kfree(smd->dev, channel->name);
-	devm_kfree(smd->dev, channel);
+	devm_kfree(&edge->dev, channel->name);
+	devm_kfree(&edge->dev, channel);
 
 	return ERR_PTR(ret);
 }
···
 	struct qcom_smd_alloc_entry *alloc_tbl;
 	struct qcom_smd_alloc_entry *entry;
 	struct qcom_smd_channel *channel;
-	struct qcom_smd *smd = edge->smd;
 	unsigned long flags;
 	unsigned fifo_id;
 	unsigned info_id;
···
 		list_add(&channel->list, &edge->channels);
 		spin_unlock_irqrestore(&edge->channels_lock, flags);
 
-		dev_dbg(smd->dev, "new channel found: '%s'\n", channel->name);
+		dev_dbg(&edge->dev, "new channel found: '%s'\n", channel->name);
 		set_bit(i, edge->allocated[tbl]);
 
 		wake_up_interruptible(&edge->new_channel_event);
···
 
 	edge->of_node = of_node_get(node);
 
-	irq = irq_of_parse_and_map(node, 0);
-	if (irq < 0) {
-		dev_err(dev, "required smd interrupt missing\n");
-		return -EINVAL;
-	}
-
-	ret = devm_request_irq(dev, irq,
-			       qcom_smd_edge_intr, IRQF_TRIGGER_RISING,
-			       node->name, edge);
-	if (ret) {
-		dev_err(dev, "failed to request smd irq\n");
-		return ret;
-	}
-
-	edge->irq = irq;
-
 	key = "qcom,smd-edge";
 	ret = of_property_read_u32(node, key, &edge->edge_id);
 	if (ret) {
···
 		return -EINVAL;
 	}
 
+	irq = irq_of_parse_and_map(node, 0);
+	if (irq < 0) {
+		dev_err(dev, "required smd interrupt missing\n");
+		return -EINVAL;
+	}
+
+	ret = devm_request_irq(dev, irq,
+			       qcom_smd_edge_intr, IRQF_TRIGGER_RISING,
+			       node->name, edge);
+	if (ret) {
+		dev_err(dev, "failed to request smd irq\n");
+		return ret;
+	}
+
+	edge->irq = irq;
+
 	return 0;
 }
 
-static int qcom_smd_probe(struct platform_device *pdev)
+/*
+ * Release function for an edge.
+ * Reset the state of each associated channel and free the edge context.
+ */
+static void qcom_smd_edge_release(struct device *dev)
+{
+	struct qcom_smd_channel *channel;
+	struct qcom_smd_edge *edge = to_smd_edge(dev);
+
+	list_for_each_entry(channel, &edge->channels, list) {
+		SET_RX_CHANNEL_INFO(channel, state, SMD_CHANNEL_CLOSED);
+		SET_RX_CHANNEL_INFO(channel, head, 0);
+		SET_RX_CHANNEL_INFO(channel, tail, 0);
+	}
+
+	kfree(edge);
+}
+
+/**
+ * qcom_smd_register_edge() - register an edge based on an device_node
+ * @parent:	parent device for the edge
+ * @node:	device_node describing the edge
+ *
+ * Returns an edge reference, or negative ERR_PTR() on failure.
+ */
+struct qcom_smd_edge *qcom_smd_register_edge(struct device *parent,
+					     struct device_node *node)
 {
 	struct qcom_smd_edge *edge;
-	struct device_node *node;
-	struct qcom_smd *smd;
-	size_t array_size;
-	int num_edges;
 	int ret;
-	int i = 0;
+
+	edge = kzalloc(sizeof(*edge), GFP_KERNEL);
+	if (!edge)
+		return ERR_PTR(-ENOMEM);
+
+	init_waitqueue_head(&edge->new_channel_event);
+
+	edge->dev.parent = parent;
+	edge->dev.release = qcom_smd_edge_release;
+	dev_set_name(&edge->dev, "%s:%s", dev_name(parent), node->name);
+	ret = device_register(&edge->dev);
+	if (ret) {
+		pr_err("failed to register smd edge\n");
+		return ERR_PTR(ret);
+	}
+
+	ret = qcom_smd_parse_edge(&edge->dev, node, edge);
+	if (ret) {
+		dev_err(&edge->dev, "failed to parse smd edge\n");
+		goto unregister_dev;
+	}
+
+	schedule_work(&edge->scan_work);
+
+	return edge;
+
+unregister_dev:
+	put_device(&edge->dev);
+	return ERR_PTR(ret);
+}
+EXPORT_SYMBOL(qcom_smd_register_edge);
+
+static int qcom_smd_remove_device(struct device *dev, void *data)
+{
+	device_unregister(dev);
+	of_node_put(dev->of_node);
+	put_device(dev);
+
+	return 0;
+}
+
+/**
+ * qcom_smd_unregister_edge() - release an edge and its children
+ * @edge:	edge reference acquired from qcom_smd_register_edge
+ */
+int qcom_smd_unregister_edge(struct qcom_smd_edge *edge)
+{
+	int ret;
+
+	disable_irq(edge->irq);
+	cancel_work_sync(&edge->scan_work);
+	cancel_work_sync(&edge->state_work);
+
+	ret = device_for_each_child(&edge->dev, NULL, qcom_smd_remove_device);
+	if (ret)
+		dev_warn(&edge->dev, "can't remove smd device: %d\n", ret);
+
+	device_unregister(&edge->dev);
+
+	return 0;
+}
+EXPORT_SYMBOL(qcom_smd_unregister_edge);
+
+static int qcom_smd_probe(struct platform_device *pdev)
+{
+	struct device_node *node;
 	void *p;
 
 	/* Wait for smem */
···
 	if (PTR_ERR(p) == -EPROBE_DEFER)
 		return PTR_ERR(p);
 
-	num_edges = of_get_available_child_count(pdev->dev.of_node);
-	array_size = sizeof(*smd) + num_edges * sizeof(struct qcom_smd_edge);
-	smd = devm_kzalloc(&pdev->dev, array_size, GFP_KERNEL);
-	if (!smd)
-		return -ENOMEM;
-	smd->dev = &pdev->dev;
-
-	smd->num_edges = num_edges;
-	for_each_available_child_of_node(pdev->dev.of_node, node) {
-		edge = &smd->edges[i++];
-		edge->smd = smd;
-		init_waitqueue_head(&edge->new_channel_event);
-
-		ret = qcom_smd_parse_edge(&pdev->dev, node, edge);
-		if (ret)
-			continue;
-
-		schedule_work(&edge->scan_work);
-	}
-
-	platform_set_drvdata(pdev, smd);
+	for_each_available_child_of_node(pdev->dev.of_node, node)
+		qcom_smd_register_edge(&pdev->dev, node);
 
 	return 0;
+}
+
+static int qcom_smd_remove_edge(struct device *dev, void *data)
+{
+	struct qcom_smd_edge *edge = to_smd_edge(dev);
+
+	return qcom_smd_unregister_edge(edge);
 }
 
 /*
···
 */
 static int qcom_smd_remove(struct platform_device *pdev)
 {
-	struct qcom_smd_channel *channel;
-	struct qcom_smd_edge *edge;
-	struct qcom_smd *smd = platform_get_drvdata(pdev);
-	int i;
+	int ret;
 
-	for (i = 0; i < smd->num_edges; i++) {
-		edge = &smd->edges[i];
+	ret = device_for_each_child(&pdev->dev, NULL, qcom_smd_remove_edge);
+	if (ret)
+		dev_warn(&pdev->dev, "can't remove smd device: %d\n", ret);
 
-		disable_irq(edge->irq);
-		cancel_work_sync(&edge->scan_work);
-		cancel_work_sync(&edge->state_work);
-
-		/* No need to lock here, because the writer is gone */
-		list_for_each_entry(channel, &edge->channels, list) {
-			if (!channel->qsdev)
-				continue;
-
-			qcom_smd_destroy_device(channel);
-		}
-	}
-
-	return 0;
+	return ret;
 }
 
 static const struct of_device_id qcom_smd_of_match[] = {
+2 -1
drivers/soc/qcom/smem.c
···
 
 	hwlock_id = of_hwspin_lock_get_id(pdev->dev.of_node, 0);
 	if (hwlock_id < 0) {
-		dev_err(&pdev->dev, "failed to retrieve hwlock\n");
+		if (hwlock_id != -EPROBE_DEFER)
+			dev_err(&pdev->dev, "failed to retrieve hwlock\n");
 		return hwlock_id;
 	}
 
+57 -43
drivers/soc/rockchip/pm_domains.c
···
 	int req_mask;
 	int idle_mask;
 	int ack_mask;
+	bool active_wakeup;
 };
 
 struct rockchip_pmu_info {
···
 
 #define to_rockchip_pd(gpd) container_of(gpd, struct rockchip_pm_domain, genpd)
 
-#define DOMAIN(pwr, status, req, idle, ack)			\
+#define DOMAIN(pwr, status, req, idle, ack, wakeup)		\
 {							\
 	.pwr_mask = (pwr >= 0) ? BIT(pwr) : 0,		\
 	.status_mask = (status >= 0) ? BIT(status) : 0,	\
 	.req_mask = (req >= 0) ? BIT(req) : 0,		\
 	.idle_mask = (idle >= 0) ? BIT(idle) : 0,	\
 	.ack_mask = (ack >= 0) ? BIT(ack) : 0,		\
+	.active_wakeup = wakeup,			\
 }
 
-#define DOMAIN_RK3288(pwr, status, req)			\
-	DOMAIN(pwr, status, req, req, (req) + 16)
+#define DOMAIN_RK3288(pwr, status, req, wakeup)		\
+	DOMAIN(pwr, status, req, req, (req) + 16, wakeup)
 
-#define DOMAIN_RK3368(pwr, status, req)			\
-	DOMAIN(pwr, status, req, (req) + 16, req)
+#define DOMAIN_RK3368(pwr, status, req, wakeup)		\
+	DOMAIN(pwr, status, req, (req) + 16, req, wakeup)
 
-#define DOMAIN_RK3399(pwr, status, req)			\
-	DOMAIN(pwr, status, req, req, req)
+#define DOMAIN_RK3399(pwr, status, req, wakeup)		\
+	DOMAIN(pwr, status, req, req, req, wakeup)
 
 static bool rockchip_pmu_domain_is_idle(struct rockchip_pm_domain *pd)
 {
···
 	pm_clk_destroy(dev);
 }
 
+static bool rockchip_active_wakeup(struct device *dev)
+{
+	struct generic_pm_domain *genpd;
+	struct rockchip_pm_domain *pd;
+
+	genpd = pd_to_genpd(dev->pm_domain);
+	pd = container_of(genpd, struct rockchip_pm_domain, genpd);
+
+	return pd->info->active_wakeup;
+}
+
 static int rockchip_pm_add_one_domain(struct rockchip_pmu *pmu,
 				      struct device_node *node)
 {
···
 	pd->genpd.power_on = rockchip_pd_power_on;
 	pd->genpd.attach_dev = rockchip_pd_attach_dev;
 	pd->genpd.detach_dev = rockchip_pd_detach_dev;
+	pd->genpd.dev_ops.active_wakeup = rockchip_active_wakeup;
 	pd->genpd.flags = GENPD_FLAG_PM_CLK;
 	pm_genpd_init(&pd->genpd, NULL, false);
 
···
 }
 
 static const struct rockchip_domain_info rk3288_pm_domains[] = {
-	[RK3288_PD_VIO]		= DOMAIN_RK3288(7, 7, 4),
-	[RK3288_PD_HEVC]	= DOMAIN_RK3288(14, 10, 9),
-	[RK3288_PD_VIDEO]	= DOMAIN_RK3288(8, 8, 3),
-	[RK3288_PD_GPU]		= DOMAIN_RK3288(9, 9, 2),
+	[RK3288_PD_VIO]		= DOMAIN_RK3288(7, 7, 4, false),
+	[RK3288_PD_HEVC]	= DOMAIN_RK3288(14, 10, 9, false),
+	[RK3288_PD_VIDEO]	= DOMAIN_RK3288(8, 8, 3, false),
+	[RK3288_PD_GPU]		= DOMAIN_RK3288(9, 9, 2, false),
 };
 
 static const struct rockchip_domain_info rk3368_pm_domains[] = {
-	[RK3368_PD_PERI]	= DOMAIN_RK3368(13, 12, 6),
-	[RK3368_PD_VIO]		= DOMAIN_RK3368(15, 14, 8),
-	[RK3368_PD_VIDEO]	= DOMAIN_RK3368(14, 13, 7),
-	[RK3368_PD_GPU_0]	= DOMAIN_RK3368(16, 15, 2),
-	[RK3368_PD_GPU_1]	= DOMAIN_RK3368(17, 16, 2),
+	[RK3368_PD_PERI]	= DOMAIN_RK3368(13, 12, 6, true),
+	[RK3368_PD_VIO]		= DOMAIN_RK3368(15, 14, 8, false),
+	[RK3368_PD_VIDEO]	= DOMAIN_RK3368(14, 13, 7, false),
+	[RK3368_PD_GPU_0]	= DOMAIN_RK3368(16, 15, 2, false),
+	[RK3368_PD_GPU_1]	= DOMAIN_RK3368(17, 16, 2, false),
 };
 
 static const struct rockchip_domain_info rk3399_pm_domains[] = {
-	[RK3399_PD_TCPD0]	= DOMAIN_RK3399(8, 8, -1),
-	[RK3399_PD_TCPD1]	= DOMAIN_RK3399(9, 9, -1),
-	[RK3399_PD_CCI]		= DOMAIN_RK3399(10, 10, -1),
-	[RK3399_PD_CCI0]	= DOMAIN_RK3399(-1, -1, 15),
-	[RK3399_PD_CCI1]	= DOMAIN_RK3399(-1, -1, 16),
-	[RK3399_PD_PERILP]	= DOMAIN_RK3399(11, 11, 1),
-	[RK3399_PD_PERIHP]	= DOMAIN_RK3399(12, 12, 2),
-	[RK3399_PD_CENTER]	= DOMAIN_RK3399(13, 13, 14),
-	[RK3399_PD_VIO]		= DOMAIN_RK3399(14, 14, 17),
-	[RK3399_PD_GPU]		= DOMAIN_RK3399(15, 15, 0),
-	[RK3399_PD_VCODEC]	= DOMAIN_RK3399(16, 16, 3),
-	[RK3399_PD_VDU]		= DOMAIN_RK3399(17, 17, 4),
-	[RK3399_PD_RGA]		= DOMAIN_RK3399(18, 18, 5),
-	[RK3399_PD_IEP]		= DOMAIN_RK3399(19, 19, 6),
-	[RK3399_PD_VO]		= DOMAIN_RK3399(20, 20, -1),
-	[RK3399_PD_VOPB]	= DOMAIN_RK3399(-1, -1, 7),
-	[RK3399_PD_VOPL]	= DOMAIN_RK3399(-1, -1, 8),
-	[RK3399_PD_ISP0]	= DOMAIN_RK3399(22, 22, 9),
-	[RK3399_PD_ISP1]	= DOMAIN_RK3399(23, 23, 10),
-	[RK3399_PD_HDCP]	= DOMAIN_RK3399(24, 24, 11),
-	[RK3399_PD_GMAC]	= DOMAIN_RK3399(25, 25, 23),
-	[RK3399_PD_EMMC]	= DOMAIN_RK3399(26, 26, 24),
-	[RK3399_PD_USB3]	= DOMAIN_RK3399(27, 27, 12),
-	[RK3399_PD_EDP]		= DOMAIN_RK3399(28, 28, 22),
-	[RK3399_PD_GIC]		= DOMAIN_RK3399(29, 29, 27),
-	[RK3399_PD_SD]		= DOMAIN_RK3399(30, 30, 28),
-	[RK3399_PD_SDIOAUDIO]	= DOMAIN_RK3399(31, 31, 29),
+	[RK3399_PD_TCPD0]	= DOMAIN_RK3399(8, 8, -1, false),
+	[RK3399_PD_TCPD1]	= DOMAIN_RK3399(9, 9, -1, false),
+	[RK3399_PD_CCI]		= DOMAIN_RK3399(10, 10, -1, true),
+	[RK3399_PD_CCI0]	= DOMAIN_RK3399(-1, -1, 15, true),
+	[RK3399_PD_CCI1]	= DOMAIN_RK3399(-1, -1, 16, true),
+	[RK3399_PD_PERILP]	= DOMAIN_RK3399(11, 11, 1, true),
+	[RK3399_PD_PERIHP]	= DOMAIN_RK3399(12, 12, 2, true),
+	[RK3399_PD_CENTER]	= DOMAIN_RK3399(13, 13, 14, true),
+	[RK3399_PD_VIO]		= DOMAIN_RK3399(14, 14, 17, false),
+	[RK3399_PD_GPU]		= DOMAIN_RK3399(15, 15, 0, false),
+	[RK3399_PD_VCODEC]	= DOMAIN_RK3399(16, 16, 3, false),
+	[RK3399_PD_VDU]		= DOMAIN_RK3399(17, 17, 4, false),
+	[RK3399_PD_RGA]		= DOMAIN_RK3399(18, 18, 5, false),
+	[RK3399_PD_IEP]		= DOMAIN_RK3399(19, 19, 6, false),
+	[RK3399_PD_VO]		= DOMAIN_RK3399(20, 20, -1, false),
+	[RK3399_PD_VOPB]	= DOMAIN_RK3399(-1, -1, 7, false),
+	[RK3399_PD_VOPL]	= DOMAIN_RK3399(-1, -1, 8, false),
+	[RK3399_PD_ISP0]	= DOMAIN_RK3399(22, 22, 9, false),
+	[RK3399_PD_ISP1]	= DOMAIN_RK3399(23, 23, 10, false),
+	[RK3399_PD_HDCP]	= DOMAIN_RK3399(24, 24, 11, false),
+	[RK3399_PD_GMAC]	= DOMAIN_RK3399(25, 25, 23, true),
+	[RK3399_PD_EMMC]	= DOMAIN_RK3399(26, 26, 24, true),
+	[RK3399_PD_USB3]	= DOMAIN_RK3399(27, 27, 12, true),
+	[RK3399_PD_EDP]		= DOMAIN_RK3399(28, 28, 22, false),
+	[RK3399_PD_GIC]		= DOMAIN_RK3399(29, 29, 27, true),
+	[RK3399_PD_SD]		= DOMAIN_RK3399(30, 30, 28, true),
+	[RK3399_PD_SDIOAUDIO]	= DOMAIN_RK3399(31, 31, 29, true),
 };
 
 static const struct rockchip_pmu_info rk3288_pmu = {
+8 -20
drivers/soc/tegra/pmc.c
···
 
 int tegra_io_rail_power_on(unsigned int id)
 {
-	unsigned long request, status, value;
-	unsigned int bit, mask;
+	unsigned long request, status;
+	unsigned int bit;
 	int err;
 
 	mutex_lock(&pmc->powergates_lock);
···
 	if (err)
 		goto error;
 
-	mask = 1 << bit;
+	tegra_pmc_writel(IO_DPD_REQ_CODE_OFF | BIT(bit), request);
 
-	value = tegra_pmc_readl(request);
-	value |= mask;
-	value &= ~IO_DPD_REQ_CODE_MASK;
-	value |= IO_DPD_REQ_CODE_OFF;
-	tegra_pmc_writel(value, request);
-
-	err = tegra_io_rail_poll(status, mask, 0, 250);
+	err = tegra_io_rail_poll(status, BIT(bit), 0, 250);
 	if (err) {
 		pr_info("tegra_io_rail_poll() failed: %d\n", err);
 		goto error;
···
 
 int tegra_io_rail_power_off(unsigned int id)
 {
-	unsigned long request, status, value;
-	unsigned int bit, mask;
+	unsigned long request, status;
+	unsigned int bit;
 	int err;
 
 	mutex_lock(&pmc->powergates_lock);
···
 		goto error;
 	}
 
-	mask = 1 << bit;
+	tegra_pmc_writel(IO_DPD_REQ_CODE_ON | BIT(bit), request);
 
-	value = tegra_pmc_readl(request);
-	value |= mask;
-	value &= ~IO_DPD_REQ_CODE_MASK;
-	value |= IO_DPD_REQ_CODE_ON;
-	tegra_pmc_writel(value, request);
-
-	err = tegra_io_rail_poll(status, mask, mask, 250);
+	err = tegra_io_rail_poll(status, BIT(bit), BIT(bit), 250);
 	if (err)
 		goto error;
 
+98
include/dt-bindings/mfd/stm32f4-rcc.h
···
+/*
+ * This header provides constants for the STM32F4 RCC IP
+ */
+
+#ifndef _DT_BINDINGS_MFD_STM32F4_RCC_H
+#define _DT_BINDINGS_MFD_STM32F4_RCC_H
+
+/* AHB1 */
+#define STM32F4_RCC_AHB1_GPIOA	0
+#define STM32F4_RCC_AHB1_GPIOB	1
+#define STM32F4_RCC_AHB1_GPIOC	2
+#define STM32F4_RCC_AHB1_GPIOD	3
+#define STM32F4_RCC_AHB1_GPIOE	4
+#define STM32F4_RCC_AHB1_GPIOF	5
+#define STM32F4_RCC_AHB1_GPIOG	6
+#define STM32F4_RCC_AHB1_GPIOH	7
+#define STM32F4_RCC_AHB1_GPIOI	8
+#define STM32F4_RCC_AHB1_GPIOJ	9
+#define STM32F4_RCC_AHB1_GPIOK	10
+#define STM32F4_RCC_AHB1_CRC	12
+#define STM32F4_RCC_AHB1_DMA1	21
+#define STM32F4_RCC_AHB1_DMA2	22
+#define STM32F4_RCC_AHB1_DMA2D	23
+#define STM32F4_RCC_AHB1_ETHMAC	25
+#define STM32F4_RCC_AHB1_OTGHS	29
+
+#define STM32F4_AHB1_RESET(bit)	(STM32F4_RCC_AHB1_##bit + (0x10 * 8))
+#define STM32F4_AHB1_CLOCK(bit)	(STM32F4_RCC_AHB1_##bit + (0x30 * 8))
+
+
+/* AHB2 */
+#define STM32F4_RCC_AHB2_DCMI	0
+#define STM32F4_RCC_AHB2_CRYP	4
+#define STM32F4_RCC_AHB2_HASH	5
+#define STM32F4_RCC_AHB2_RNG	6
+#define STM32F4_RCC_AHB2_OTGFS	7
+
+#define STM32F4_AHB2_RESET(bit)	(STM32F4_RCC_AHB2_##bit + (0x14 * 8))
+#define STM32F4_AHB2_CLOCK(bit)	(STM32F4_RCC_AHB2_##bit + (0x34 * 8))
+
+/* AHB3 */
+#define STM32F4_RCC_AHB3_FMC	0
+
+#define STM32F4_AHB3_RESET(bit)	(STM32F4_RCC_AHB3_##bit + (0x18 * 8))
+#define STM32F4_AHB3_CLOCK(bit)	(STM32F4_RCC_AHB3_##bit + (0x38 * 8))
+
+/* APB1 */
+#define STM32F4_RCC_APB1_TIM2	0
+#define STM32F4_RCC_APB1_TIM3	1
+#define STM32F4_RCC_APB1_TIM4	2
+#define STM32F4_RCC_APB1_TIM5	3
+#define STM32F4_RCC_APB1_TIM6	4
+#define STM32F4_RCC_APB1_TIM7	5
+#define STM32F4_RCC_APB1_TIM12	6
+#define STM32F4_RCC_APB1_TIM13	7
+#define STM32F4_RCC_APB1_TIM14	8
+#define STM32F4_RCC_APB1_WWDG	11
+#define STM32F4_RCC_APB1_SPI2	14
+#define STM32F4_RCC_APB1_SPI3	15
+#define STM32F4_RCC_APB1_UART2	17
+#define STM32F4_RCC_APB1_UART3	18
+#define STM32F4_RCC_APB1_UART4	19
+#define STM32F4_RCC_APB1_UART5	20
+#define STM32F4_RCC_APB1_I2C1	21
+#define STM32F4_RCC_APB1_I2C2	22
+#define STM32F4_RCC_APB1_I2C3	23
+#define STM32F4_RCC_APB1_CAN1	25
+#define STM32F4_RCC_APB1_CAN2	26
+#define STM32F4_RCC_APB1_PWR	28
+#define STM32F4_RCC_APB1_DAC	29
+#define STM32F4_RCC_APB1_UART7	30
+#define STM32F4_RCC_APB1_UART8	31
+
+#define STM32F4_APB1_RESET(bit)	(STM32F4_RCC_APB1_##bit + (0x20 * 8))
+#define STM32F4_APB1_CLOCK(bit)	(STM32F4_RCC_APB1_##bit + (0x40 * 8))
+
+/* APB2 */
+#define STM32F4_RCC_APB2_TIM1	0
+#define STM32F4_RCC_APB2_TIM8	1
+#define STM32F4_RCC_APB2_USART1	4
+#define STM32F4_RCC_APB2_USART6	5
+#define STM32F4_RCC_APB2_ADC	8
+#define STM32F4_RCC_APB2_SDIO	11
+#define STM32F4_RCC_APB2_SPI1	12
+#define STM32F4_RCC_APB2_SPI4	13
+#define STM32F4_RCC_APB2_SYSCFG	14
+#define STM32F4_RCC_APB2_TIM9	16
+#define STM32F4_RCC_APB2_TIM10	17
+#define STM32F4_RCC_APB2_TIM11	18
+#define STM32F4_RCC_APB2_SPI5	20
+#define STM32F4_RCC_APB2_SPI6	21
+#define STM32F4_RCC_APB2_SAI1	22
+#define STM32F4_RCC_APB2_LTDC	26
+
+#define STM32F4_APB2_RESET(bit)	(STM32F4_RCC_APB2_##bit + (0x24 * 8))
+#define STM32F4_APB2_CLOCK(bit)	(STM32F4_RCC_APB2_##bit + (0x44 * 8))
+
+#endif /* _DT_BINDINGS_MFD_STM32F4_RCC_H */
+31
include/linux/firmware/meson/meson_sm.h
···
+/*
+ * Copyright (C) 2016 Endless Mobile, Inc.
+ * Author: Carlo Caione <carlo@endlessm.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _MESON_SM_FW_H_
+#define _MESON_SM_FW_H_
+
+enum {
+	SM_EFUSE_READ,
+	SM_EFUSE_WRITE,
+	SM_EFUSE_USER_MAX,
+};
+
+struct meson_sm_firmware;
+
+int meson_sm_call(unsigned int cmd_index, u32 *ret, u32 arg0, u32 arg1,
+		  u32 arg2, u32 arg3, u32 arg4);
+int meson_sm_call_write(void *buffer, unsigned int b_size, unsigned int cmd_index,
+			u32 arg0, u32 arg1, u32 arg2, u32 arg3, u32 arg4);
+int meson_sm_call_read(void *buffer, unsigned int cmd_index, u32 arg0, u32 arg1,
+		       u32 arg2, u32 arg3, u32 arg4);
+
+#endif /* _MESON_SM_FW_H_ */
+2 -2
include/linux/omap-gpmc.h
···
 struct gpmc_nand_ops *gpmc_omap_get_nand_ops(struct gpmc_nand_regs *regs,
 					     int cs);
 #else
-static inline gpmc_nand_ops *gpmc_omap_get_nand_ops(struct gpmc_nand_regs *regs,
-						    int cs)
+static inline struct gpmc_nand_ops *gpmc_omap_get_nand_ops(struct gpmc_nand_regs *regs,
+							   int cs)
 {
 	return NULL;
 }
+27 -2
include/linux/soc/qcom/smd.h
···
 struct qcom_smd_channel *qcom_smd_open_channel(struct qcom_smd_channel *channel,
 					       const char *name,
 					       qcom_smd_cb_t cb);
+void qcom_smd_close_channel(struct qcom_smd_channel *channel);
 void *qcom_smd_get_drvdata(struct qcom_smd_channel *channel);
 void qcom_smd_set_drvdata(struct qcom_smd_channel *channel, void *data);
 int qcom_smd_send(struct qcom_smd_channel *channel, const void *data, int len);
 
+
+struct qcom_smd_edge *qcom_smd_register_edge(struct device *parent,
+					     struct device_node *node);
+int qcom_smd_unregister_edge(struct qcom_smd_edge *edge);
 
 #else
 
···
 	return NULL;
 }
 
-void *qcom_smd_get_drvdata(struct qcom_smd_channel *channel)
+static inline void qcom_smd_close_channel(struct qcom_smd_channel *channel)
+{
+	/* This shouldn't be possible */
+	WARN_ON(1);
+}
+
+static inline void *qcom_smd_get_drvdata(struct qcom_smd_channel *channel)
 {
 	/* This shouldn't be possible */
 	WARN_ON(1);
 	return NULL;
 }
 
-void qcom_smd_set_drvdata(struct qcom_smd_channel *channel, void *data)
+static inline void qcom_smd_set_drvdata(struct qcom_smd_channel *channel, void *data)
 {
 	/* This shouldn't be possible */
 	WARN_ON(1);
···
 
 static inline int qcom_smd_send(struct qcom_smd_channel *channel,
 				const void *data, int len)
+{
+	/* This shouldn't be possible */
+	WARN_ON(1);
+	return -ENXIO;
+}
+
+static inline struct qcom_smd_edge *
+qcom_smd_register_edge(struct device *parent,
+		       struct device_node *node)
+{
+	return ERR_PTR(-ENXIO);
+}
+
+static inline int qcom_smd_unregister_edge(struct qcom_smd_edge *edge)
 {
 	/* This shouldn't be possible */
 	WARN_ON(1);