Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'mvebu-dt64-5.4-2' of git://git.infradead.org/linux-mvebu into arm/late

mvebu dt64 for 5.4 (part 2)

Add support for Turris Mox board (Armada 3720 SoC based)

* tag 'mvebu-dt64-5.4-2' of git://git.infradead.org/linux-mvebu: (53 commits)
arm64: dts: marvell: add DTS for Turris Mox
dt-bindings: marvell: document Turris Mox compatible
arm64: dts: marvell: armada-37xx: add SPI CS1 pinctrl
arm64: dts: marvell: Add cpu clock node on Armada 7K/8K
arm64: dts: marvell: Convert 7k/8k usb-phy properties to phy-supply
arm64: dts: marvell: Add 7k/8k PHYs in PCIe nodes
arm64: dts: marvell: Add 7k/8k PHYs in USB3 nodes
arm64: dts: marvell: Add 7k/8k per-port PHYs in SATA nodes
arm64: dts: marvell: Add CP110 COMPHY clocks
arm64: dts: marvell: armada-37xx: add mailbox node
dt-bindings: gpio: Document GPIOs via Moxtet bus
drivers: gpio: Add support for GPIOs over Moxtet bus
bus: moxtet: Add sysfs and debugfs documentation
dt-bindings: bus: Document moxtet bus binding
bus: Add support for Moxtet bus
reset: Add support for resets provided by SCMI
firmware: arm_scmi: Add RESET protocol in SCMI v2.0
dt-bindings: arm: Extend SCMI to support new reset protocol
firmware: arm_scmi: Make use SCMI v2.0 fastchannel for performance protocol
firmware: arm_scmi: Add discovery of SCMI v2.0 performance fastchannels
...

Link: https://lore.kernel.org/r/87h85two0r.fsf@FE-laptop
Signed-off-by: Arnd Bergmann <arnd@arndb.de>

+3550 -617
+23
Documentation/ABI/testing/debugfs-moxtet
What:		/sys/kernel/debug/moxtet/input
Date:		March 2019
KernelVersion:	5.3
Contact:	Marek Behún <marek.behun@nic.cz>
Description:	(R) Read input from the shift registers, in hexadecimal.
		Returns N+1 bytes, where N is the number of Moxtet connected
		modules. The first byte is from the CPU board itself.
		Example: 101214
			 10: CPU board with SD card
			 12: 2 = PCIe module, 1 = IRQ not active
			 14: 4 = Peridot module, 1 = IRQ not active

What:		/sys/kernel/debug/moxtet/output
Date:		March 2019
KernelVersion:	5.3
Contact:	Marek Behún <marek.behun@nic.cz>
Description:	(RW) Read last written value to the shift registers, in
		hexadecimal, or write values to the shift registers, also
		in hexadecimal.
		Example: 0102
			 01: 01 was last written, or is to be written, to the
			     first module's shift register
			 02: the same for second module
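The byte layout documented above can be sketched as a small decoder. This is a hedged illustration, not driver code: going by the example, the low nibble is assumed to identify the module and bit 4 to carry the "IRQ not active" flag, and the name table only covers the IDs that appear in the example.

```python
# Sketch: decode the hex string read from /sys/kernel/debug/moxtet/input.
# Assumption (from the example above): low nibble = module type,
# bit 0x10 set = IRQ not active. Name table covers only the example IDs.
MODULE_NAMES = {0x0: "CPU board", 0x2: "PCIe module", 0x4: "Peridot module"}

def decode_moxtet_input(hexstr: str) -> list[dict]:
    """Split the N+1 byte hex dump into per-module records."""
    data = bytes.fromhex(hexstr.strip())
    records = []
    for i, byte in enumerate(data):
        records.append({
            "position": i,                    # 0 is the CPU board itself
            "module_id": byte & 0x0F,         # low nibble: module type
            "irq_active": not (byte & 0x10),  # example: 1 = IRQ not active
            "name": MODULE_NAMES.get(byte & 0x0F, "unknown"),
        })
    return records

if __name__ == "__main__":
    for rec in decode_moxtet_input("101214"):  # example string from the ABI doc
        print(rec)
```

Running it on the documented example `101214` yields one record per module, with the PCIe and Peridot entries matching the annotations above.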
+17
Documentation/ABI/testing/sysfs-bus-moxtet-devices
What:		/sys/bus/moxtet/devices/moxtet-<name>.<addr>/module_description
Date:		March 2019
KernelVersion:	5.3
Contact:	Marek Behún <marek.behun@nic.cz>
Description:	(R) Moxtet module description. Format: string

What:		/sys/bus/moxtet/devices/moxtet-<name>.<addr>/module_id
Date:		March 2019
KernelVersion:	5.3
Contact:	Marek Behún <marek.behun@nic.cz>
Description:	(R) Moxtet module ID. Format: %x

What:		/sys/bus/moxtet/devices/moxtet-<name>.<addr>/module_name
Date:		March 2019
KernelVersion:	5.3
Contact:	Marek Behún <marek.behun@nic.cz>
Description:	(R) Moxtet module name. Format: string
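These attributes are plain files, so they can be read with ordinary file I/O. The helper below is a sketch against the documented directory layout; since the real path only exists on a Mox, the demo stands up a fake tree with assumed attribute values in a temporary directory.

```python
import tempfile
from pathlib import Path

def read_moxtet_device(sysfs_root: str, name: str, addr: int) -> dict:
    """Read the three documented attributes of moxtet-<name>.<addr>."""
    dev = Path(sysfs_root) / f"moxtet-{name}.{addr}"
    return {
        "description": (dev / "module_description").read_text().strip(),
        "id": int((dev / "module_id").read_text(), 16),  # Format: %x
        "name": (dev / "module_name").read_text().strip(),
    }

if __name__ == "__main__":
    # Fake tree standing in for /sys/bus/moxtet/devices; values are invented.
    with tempfile.TemporaryDirectory() as root:
        dev = Path(root) / "moxtet-sfp.0"
        dev.mkdir()
        (dev / "module_description").write_text("SFP cage\n")
        (dev / "module_id").write_text("1\n")
        (dev / "module_name").write_text("sfp\n")
        print(read_moxtet_device(root, "sfp", 0))
```

On real hardware the `sysfs_root` would be `/sys/bus/moxtet/devices`; the device name and values here are purely illustrative.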
+17
Documentation/devicetree/bindings/arm/arm,scmi.txt
···
 as used by the firmware. Refer to platform details
 for your implementation for the IDs to use.

+Reset signal bindings for the reset domains based on SCMI Message Protocol
+--------------------------------------------------------------------------
+
+This binding for the SCMI reset domain providers uses the generic reset
+signal binding[5].
+
+Required properties:
+ - #reset-cells : Should be 1. Contains the reset domain ID value used
+		  by SCMI commands.
+
 SRAM and Shared Memory for SCMI
 -------------------------------

···
 [2] Documentation/devicetree/bindings/power/power_domain.txt
 [3] Documentation/devicetree/bindings/thermal/thermal.txt
 [4] Documentation/devicetree/bindings/sram/sram.txt
+[5] Documentation/devicetree/bindings/reset/reset.txt

 Example:

···
 		reg = <0x15>;
 		#thermal-sensor-cells = <1>;
 	};
+
+	scmi_reset: protocol@16 {
+		reg = <0x16>;
+		#reset-cells = <1>;
+	};
 };
};

···
 	reg = <0 0x7ff60000 0 0x1000>;
 	clocks = <&scmi_clk 4>;
 	power-domains = <&scmi_devpd 1>;
+	resets = <&scmi_reset 10>;
 };

 thermal-zones {
+8
Documentation/devicetree/bindings/arm/marvell/armada-37xx.txt
···
 	compatible = "marvell,armada-3700-avs", "syscon";
 	reg = <0x11500 0x40>;
 }
+
+
+CZ.NIC's Turris Mox SOHO router Device Tree Bindings
+----------------------------------------------------
+
+Required root node property:
+
+ - compatible: must contain "cznic,turris-mox"
+46
Documentation/devicetree/bindings/bus/moxtet.txt
Turris Mox module status and configuration bus (over SPI)

Required properties:
 - compatible		: Should be "cznic,moxtet"
 - #address-cells	: Has to be 1
 - #size-cells		: Has to be 0
 - spi-cpol		: Required inverted clock polarity
 - spi-cpha		: Required shifted clock phase
 - interrupts		: Must contain reference to the shared interrupt line
 - interrupt-controller	: Required
 - #interrupt-cells	: Has to be 1

For other required and optional properties of SPI slave nodes please refer to
../spi/spi-bus.txt.

Required properties of subnodes:
 - reg			: Should be position on the Moxtet bus (how many Moxtet
			  modules are between this module and CPU module, so
			  either 0 or a positive integer)

The driver finds the devices connected to the bus by itself, but it may be
needed to reference some of them from other parts of the device tree. In that
case the devices can be defined as subnodes of the moxtet node.

Example:

	moxtet@1 {
		compatible = "cznic,moxtet";
		#address-cells = <1>;
		#size-cells = <0>;
		reg = <1>;
		spi-max-frequency = <10000000>;
		spi-cpol;
		spi-cpha;
		interrupt-controller;
		#interrupt-cells = <1>;
		interrupt-parent = <&gpiosb>;
		interrupts = <5 IRQ_TYPE_EDGE_FALLING>;

		moxtet_sfp: gpio@0 {
			compatible = "cznic,moxtet-gpio";
			gpio-controller;
			#gpio-cells = <2>;
			reg = <0>;
		};
	};
+18
Documentation/devicetree/bindings/gpio/gpio-moxtet.txt
Turris Mox Moxtet GPIO expander via Moxtet bus

Required properties:
 - compatible		: Should be "cznic,moxtet-gpio".
 - gpio-controller	: Marks the device node as a GPIO controller.
 - #gpio-cells		: Should be two. For consumer use see gpio.txt.

Other properties are required for a Moxtet bus device, please refer to
Documentation/devicetree/bindings/bus/moxtet.txt.

Example:

	moxtet_sfp: gpio@0 {
		compatible = "cznic,moxtet-gpio";
		gpio-controller;
		#gpio-cells = <2>;
		reg = <0>;
	};
+4 -2
Documentation/devicetree/bindings/reset/fsl,imx7-src.txt
···
 - compatible:
 	- For i.MX7 SoCs should be "fsl,imx7d-src", "syscon"
 	- For i.MX8MQ SoCs should be "fsl,imx8mq-src", "syscon"
+	- For i.MX8MM SoCs should be "fsl,imx8mm-src", "fsl,imx8mq-src", "syscon"
 - reg: should be register base and length as documented in the
   datasheet
 - interrupts: Should contain SRC interrupt

···

 For list of all valid reset indices see
-<dt-bindings/reset/imx7-reset.h> for i.MX7 and
-<dt-bindings/reset/imx8mq-reset.h> for i.MX8MQ
+<dt-bindings/reset/imx7-reset.h> for i.MX7,
+<dt-bindings/reset/imx8mq-reset.h> for i.MX8MQ and
+<dt-bindings/reset/imx8mq-reset.h> for i.MX8MM
+30
Documentation/devicetree/bindings/reset/snps,dw-reset.txt
Synopsys DesignWare Reset controller
====================================

Please also refer to reset.txt in this directory for common reset
controller binding usage.

Required properties:

- compatible: should be one of the following.
	"snps,dw-high-reset" - for active high configuration
	"snps,dw-low-reset" - for active low configuration

- reg: physical base address of the controller and length of memory mapped
  region.

- #reset-cells: must be 1.

example:

	dw_rst_1: reset-controller@0000 {
		compatible = "snps,dw-high-reset";
		reg = <0x0000 0x4>;
		#reset-cells = <1>;
	};

	dw_rst_2: reset-controller@1000 {
		compatible = "snps,dw-low-reset";
		reg = <0x1000 0x8>;
		#reset-cells = <1>;
	};
+12 -1
Documentation/devicetree/bindings/soc/fsl/cpm_qe/qe.txt
···
 - reg : offset and length of the device registers.
 - bus-frequency : the clock frequency for QUICC Engine.
 - fsl,qe-num-riscs: define how many RISC engines the QE has.
-- fsl,qe-num-snums: define how many serial number(SNUM) the QE can use for the
+- fsl,qe-snums: This property has to be specified as '/bits/ 8' value,
+  defining the array of serial number (SNUM) values for the virtual
   threads.

 Optional properties:
···
 - brg-frequency : the internal clock source frequency for baud-rate
   generators in Hz.

+Deprecated properties
+- fsl,qe-num-snums: define how many serial numbers (SNUMs) the QE can use
+  for the threads. Use fsl,qe-snums instead to not only specify the
+  number of snums, but also their values.
+
 Example:
    qe@e0100000 {
 	#address-cells = <1>;
···
 	reg = <e0100000 480>;
 	brg-frequency = <0>;
 	bus-frequency = <179A7B00>;
+	fsl,qe-snums = /bits/ 8 <
+		0x04 0x05 0x0C 0x0D 0x14 0x15 0x1C 0x1D
+		0x24 0x25 0x2C 0x2D 0x34 0x35 0x88 0x89
+		0x98 0x99 0xA8 0xA9 0xB8 0xB9 0xC8 0xC9
+		0xD8 0xD9 0xE8 0xE9>;
    }

 * Multi-User RAM (MURAM)
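The new property is an array of 8-bit cells. As a purely illustrative sketch (not kernel code), the helper below renders a Python list of SNUM values in the `/bits/ 8` form used in the example, eight cells per line, and rejects values that do not fit in one byte:

```python
# Sketch: format a list of SNUM values as the 'fsl,qe-snums' /bits/ 8 cell
# array shown in the binding example. Illustrative helper, not kernel code.
SNUMS = [
    0x04, 0x05, 0x0C, 0x0D, 0x14, 0x15, 0x1C, 0x1D,
    0x24, 0x25, 0x2C, 0x2D, 0x34, 0x35, 0x88, 0x89,
    0x98, 0x99, 0xA8, 0xA9, 0xB8, 0xB9, 0xC8, 0xC9,
    0xD8, 0xD9, 0xE8, 0xE9,
]

def format_qe_snums(snums: list[int], per_line: int = 8) -> str:
    """Render the property body, 8 byte cells per line as in the example."""
    if any(not 0 <= s <= 0xFF for s in snums):
        raise ValueError("SNUM values must fit in one byte (/bits/ 8)")
    lines = []
    for i in range(0, len(snums), per_line):
        lines.append(" ".join(f"0x{s:02X}" for s in snums[i:i + per_line]))
    body = "\n\t\t".join(lines)
    return f"fsl,qe-snums = /bits/ 8 <\n\t\t{body}>;"

if __name__ == "__main__":
    print(format_qe_snums(SNUMS))  # 28 SNUMs, matching the example above
```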
+13
MAINTAINERS
···
 N:	[^a-z]sirf
 X:	drivers/gnss

+ARM/CZ.NIC TURRIS MOX SUPPORT
+M:	Marek Behun <marek.behun@nic.cz>
+W:	http://mox.turris.cz
+S:	Maintained
+F:	Documentation/ABI/testing/debugfs-moxtet
+F:	Documentation/ABI/testing/sysfs-bus-moxtet-devices
+F:	Documentation/devicetree/bindings/bus/moxtet.txt
+F:	Documentation/devicetree/bindings/gpio/gpio-moxtet.txt
+F:	include/linux/moxtet.h
+F:	drivers/bus/moxtet.c
+F:	drivers/gpio/gpio-moxtet.c
+
 ARM/EBSA110 MACHINE SUPPORT
 M:	Russell King <linux@armlinux.org.uk>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)

···
 F:	drivers/cpufreq/sc[mp]i-cpufreq.c
 F:	drivers/firmware/arm_scpi.c
 F:	drivers/firmware/arm_scmi/
+F:	drivers/reset/reset-scmi.c
 F:	include/linux/sc[mp]i_protocol.h

 SYSTEM RESET/SHUTDOWN DRIVERS
+1
arch/arm64/boot/dts/marvell/Makefile
···
 # Mvebu SoC Family
 dtb-$(CONFIG_ARCH_MVEBU) += armada-3720-db.dtb
 dtb-$(CONFIG_ARCH_MVEBU) += armada-3720-espressobin.dtb
+dtb-$(CONFIG_ARCH_MVEBU) += armada-3720-turris-mox.dtb
 dtb-$(CONFIG_ARCH_MVEBU) += armada-3720-uDPU.dtb
 dtb-$(CONFIG_ARCH_MVEBU) += armada-7040-db.dtb
 dtb-$(CONFIG_ARCH_MVEBU) += armada-8040-clearfog-gt-8k.dtb
+840
arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
// SPDX-License-Identifier: (GPL-2.0+ OR MIT)
/*
 * Device Tree file for CZ.NIC Turris Mox Board
 * 2019 by Marek Behun <marek.behun@nic.cz>
 */

/dts-v1/;

#include <dt-bindings/bus/moxtet.h>
#include <dt-bindings/gpio/gpio.h>
#include <dt-bindings/input/input.h>
#include "armada-372x.dtsi"

/ {
	model = "CZ.NIC Turris Mox Board";
	compatible = "cznic,turris-mox", "marvell,armada3720",
		     "marvell,armada3710";

	aliases {
		spi0 = &spi0;
		ethernet1 = &eth1;
	};

	chosen {
		stdout-path = "serial0:115200n8";
	};

	memory@0 {
		device_type = "memory";
		reg = <0x00000000 0x00000000 0x00000000 0x20000000>;
	};

	leds {
		compatible = "gpio-leds";
		red {
			label = "mox:red:activity";
			gpios = <&gpiosb 21 GPIO_ACTIVE_LOW>;
			linux,default-trigger = "default-on";
		};
	};

	gpio-keys {
		compatible = "gpio-keys";

		reset {
			label = "reset";
			linux,code = <KEY_RESTART>;
			gpios = <&gpiosb 20 GPIO_ACTIVE_LOW>;
			debounce-interval = <60>;
		};
	};

	exp_usb3_vbus: usb3-vbus {
		compatible = "regulator-fixed";
		regulator-name = "usb3-vbus";
		regulator-min-microvolt = <5000000>;
		regulator-max-microvolt = <5000000>;
		enable-active-high;
		regulator-always-on;
		gpio = <&gpiosb 0 GPIO_ACTIVE_HIGH>;
	};

	usb3_phy: usb3-phy {
		compatible = "usb-nop-xceiv";
		vcc-supply = <&exp_usb3_vbus>;
	};

	vsdc_reg: vsdc-reg {
		compatible = "regulator-gpio";
		regulator-name = "vsdc";
		regulator-min-microvolt = <1800000>;
		regulator-max-microvolt = <3300000>;
		regulator-boot-on;

		gpios = <&gpiosb 23 GPIO_ACTIVE_HIGH>;
		gpios-states = <0>;
		states = <1800000 0x1
			  3300000 0x0>;
		enable-active-high;
	};

	vsdio_reg: vsdio-reg {
		compatible = "regulator-gpio";
		regulator-name = "vsdio";
		regulator-min-microvolt = <1800000>;
		regulator-max-microvolt = <3300000>;
		regulator-boot-on;

		gpios = <&gpiosb 22 GPIO_ACTIVE_HIGH>;
		gpios-states = <0>;
		states = <1800000 0x1
			  3300000 0x0>;
		enable-active-high;
	};

	sdhci1_pwrseq: sdhci1-pwrseq {
		compatible = "mmc-pwrseq-simple";
		reset-gpios = <&gpionb 19 GPIO_ACTIVE_HIGH>;
		status = "okay";
	};

	sfp: sfp {
		compatible = "sff,sfp+";
		i2c-bus = <&i2c0>;
		los-gpio = <&moxtet_sfp 0 GPIO_ACTIVE_HIGH>;
		tx-fault-gpio = <&moxtet_sfp 1 GPIO_ACTIVE_HIGH>;
		mod-def0-gpio = <&moxtet_sfp 2 GPIO_ACTIVE_LOW>;
		tx-disable-gpio = <&moxtet_sfp 4 GPIO_ACTIVE_HIGH>;
		rate-select0-gpio = <&moxtet_sfp 5 GPIO_ACTIVE_HIGH>;

		/* enabled by U-Boot if SFP module is present */
		status = "disabled";
	};
};

&i2c0 {
	pinctrl-names = "default";
	pinctrl-0 = <&i2c1_pins>;
	clock-frequency = <100000>;
	status = "okay";

	rtc@6f {
		compatible = "microchip,mcp7940x";
		reg = <0x6f>;
	};
};

&pcie_reset_pins {
	function = "gpio";
};

&pcie0 {
	pinctrl-names = "default";
	pinctrl-0 = <&pcie_reset_pins &pcie_clkreq_pins>;
	status = "okay";
	max-link-speed = <2>;
	reset-gpios = <&gpiosb 3 GPIO_ACTIVE_LOW>;
	phys = <&comphy1 0>;

	/* enabled by U-Boot if PCIe module is present */
	status = "disabled";
};

&uart0 {
	status = "okay";
};

&eth0 {
	pinctrl-names = "default";
	pinctrl-0 = <&rgmii_pins>;
	phy-mode = "rgmii-id";
	phy = <&phy1>;
	status = "okay";
};

&eth1 {
	phy-mode = "2500base-x";
	managed = "in-band-status";
	phys = <&comphy0 1>;
};

&sdhci0 {
	wp-inverted;
	bus-width = <4>;
	cd-gpios = <&gpionb 10 GPIO_ACTIVE_HIGH>;
	vqmmc-supply = <&vsdc_reg>;
	marvell,pad-type = "sd";
	status = "okay";
};

&sdhci1 {
	pinctrl-names = "default";
	pinctrl-0 = <&sdio_pins>;
	non-removable;
	bus-width = <4>;
	marvell,pad-type = "sd";
	vqmmc-supply = <&vsdio_reg>;
	mmc-pwrseq = <&sdhci1_pwrseq>;
	status = "okay";
};

&spi0 {
	status = "okay";
	pinctrl-names = "default";
	pinctrl-0 = <&spi_quad_pins &spi_cs1_pins>;
	assigned-clocks = <&nb_periph_clk 7>;
	assigned-clock-parents = <&tbg 1>;
	assigned-clock-rates = <20000000>;

	spi-flash@0 {
		#address-cells = <1>;
		#size-cells = <1>;
		compatible = "jedec,spi-nor";
		reg = <0>;
		spi-max-frequency = <20000000>;

		partitions {
			compatible = "fixed-partitions";
			#address-cells = <1>;
			#size-cells = <1>;

			partition@0 {
				label = "secure-firmware";
				reg = <0x0 0x20000>;
			};

			partition@20000 {
				label = "u-boot";
				reg = <0x20000 0x160000>;
			};

			partition@180000 {
				label = "u-boot-env";
				reg = <0x180000 0x10000>;
			};

			partition@190000 {
				label = "Rescue system";
				reg = <0x190000 0x660000>;
			};

			partition@7f0000 {
				label = "dtb";
				reg = <0x7f0000 0x10000>;
			};
		};
	};

	moxtet: moxtet@1 {
		#address-cells = <1>;
		#size-cells = <0>;
		compatible = "cznic,moxtet";
		reg = <1>;
		reset-gpios = <&gpiosb 2 GPIO_ACTIVE_LOW>;
		spi-max-frequency = <10000000>;
		spi-cpol;
		spi-cpha;
		interrupt-controller;
		#interrupt-cells = <1>;
		interrupt-parent = <&gpiosb>;
		interrupts = <5 IRQ_TYPE_EDGE_FALLING>;
		status = "okay";

		moxtet_sfp: gpio@0 {
			compatible = "cznic,moxtet-gpio";
			gpio-controller;
			#gpio-cells = <2>;
			reg = <0>;
			status = "disabled";
		};
	};
};

&usb2 {
	status = "okay";
};

&usb3 {
	status = "okay";
	phys = <&comphy2 0>;
	usb-phy = <&usb3_phy>;
};

&mdio {
	pinctrl-names = "default";
	pinctrl-0 = <&smi_pins>;
	status = "okay";

	phy1: ethernet-phy@1 {
		reg = <1>;
	};

	/* switch nodes are enabled by U-Boot if modules are present */
	switch0@10 {
		compatible = "marvell,mv88e6190";
		reg = <0x10 0>;
		dsa,member = <0 0>;
		interrupt-parent = <&moxtet>;
		interrupts = <MOXTET_IRQ_PERIDOT(0)>;
		status = "disabled";

		mdio {
			#address-cells = <1>;
			#size-cells = <0>;

			switch0phy1: switch0phy1@1 {
				reg = <0x1>;
			};

			switch0phy2: switch0phy2@2 {
				reg = <0x2>;
			};

			switch0phy3: switch0phy3@3 {
				reg = <0x3>;
			};

			switch0phy4: switch0phy4@4 {
				reg = <0x4>;
			};

			switch0phy5: switch0phy5@5 {
				reg = <0x5>;
			};

			switch0phy6: switch0phy6@6 {
				reg = <0x6>;
			};

			switch0phy7: switch0phy7@7 {
				reg = <0x7>;
			};

			switch0phy8: switch0phy8@8 {
				reg = <0x8>;
			};
		};

		ports {
			#address-cells = <1>;
			#size-cells = <0>;

			port@1 {
				reg = <0x1>;
				label = "lan1";
				phy-handle = <&switch0phy1>;
			};

			port@2 {
				reg = <0x2>;
				label = "lan2";
				phy-handle = <&switch0phy2>;
			};

			port@3 {
				reg = <0x3>;
				label = "lan3";
				phy-handle = <&switch0phy3>;
			};

			port@4 {
				reg = <0x4>;
				label = "lan4";
				phy-handle = <&switch0phy4>;
			};

			port@5 {
				reg = <0x5>;
				label = "lan5";
				phy-handle = <&switch0phy5>;
			};

			port@6 {
				reg = <0x6>;
				label = "lan6";
				phy-handle = <&switch0phy6>;
			};

			port@7 {
				reg = <0x7>;
				label = "lan7";
				phy-handle = <&switch0phy7>;
			};

			port@8 {
				reg = <0x8>;
				label = "lan8";
				phy-handle = <&switch0phy8>;
			};

			port@9 {
				reg = <0x9>;
				label = "cpu";
				ethernet = <&eth1>;
				phy-mode = "2500base-x";
				managed = "in-band-status";
			};

			switch0port10: port@a {
				reg = <0xa>;
				label = "dsa";
				phy-mode = "2500base-x";
				managed = "in-band-status";
				link = <&switch1port9 &switch2port9>;
				status = "disabled";
			};

			port-sfp@a {
				reg = <0xa>;
				label = "sfp";
				sfp = <&sfp>;
				phy-mode = "sgmii";
				managed = "in-band-status";
				status = "disabled";
			};
		};
	};

	switch0@2 {
		compatible = "marvell,mv88e6085";
		reg = <0x2 0>;
		dsa,member = <0 0>;
		interrupt-parent = <&moxtet>;
		interrupts = <MOXTET_IRQ_TOPAZ>;
		status = "disabled";

		mdio {
			#address-cells = <1>;
			#size-cells = <0>;

			switch0phy1_topaz: switch0phy1@11 {
				reg = <0x11>;
			};

			switch0phy2_topaz: switch0phy2@12 {
				reg = <0x12>;
			};

			switch0phy3_topaz: switch0phy3@13 {
				reg = <0x13>;
			};

			switch0phy4_topaz: switch0phy4@14 {
				reg = <0x14>;
			};
		};

		ports {
			#address-cells = <1>;
			#size-cells = <0>;

			port@1 {
				reg = <0x1>;
				label = "lan1";
				phy-handle = <&switch0phy1_topaz>;
			};

			port@2 {
				reg = <0x2>;
				label = "lan2";
				phy-handle = <&switch0phy2_topaz>;
			};

			port@3 {
				reg = <0x3>;
				label = "lan3";
				phy-handle = <&switch0phy3_topaz>;
			};

			port@4 {
				reg = <0x4>;
				label = "lan4";
				phy-handle = <&switch0phy4_topaz>;
			};

			port@5 {
				reg = <0x5>;
				label = "cpu";
				phy-mode = "2500base-x";
				managed = "in-band-status";
				ethernet = <&eth1>;
			};
		};
	};

	switch1@11 {
		compatible = "marvell,mv88e6190";
		reg = <0x11 0>;
		dsa,member = <0 1>;
		interrupt-parent = <&moxtet>;
		interrupts = <MOXTET_IRQ_PERIDOT(1)>;
		status = "disabled";

		mdio {
			#address-cells = <1>;
			#size-cells = <0>;

			switch1phy1: switch1phy1@1 {
				reg = <0x1>;
			};

			switch1phy2: switch1phy2@2 {
				reg = <0x2>;
			};

			switch1phy3: switch1phy3@3 {
				reg = <0x3>;
			};

			switch1phy4: switch1phy4@4 {
				reg = <0x4>;
			};

			switch1phy5: switch1phy5@5 {
				reg = <0x5>;
			};

			switch1phy6: switch1phy6@6 {
				reg = <0x6>;
			};

			switch1phy7: switch1phy7@7 {
				reg = <0x7>;
			};

			switch1phy8: switch1phy8@8 {
				reg = <0x8>;
			};
		};

		ports {
			#address-cells = <1>;
			#size-cells = <0>;

			port@1 {
				reg = <0x1>;
				label = "lan9";
				phy-handle = <&switch1phy1>;
			};

			port@2 {
				reg = <0x2>;
				label = "lan10";
				phy-handle = <&switch1phy2>;
			};

			port@3 {
				reg = <0x3>;
				label = "lan11";
				phy-handle = <&switch1phy3>;
			};

			port@4 {
				reg = <0x4>;
				label = "lan12";
				phy-handle = <&switch1phy4>;
			};

			port@5 {
				reg = <0x5>;
				label = "lan13";
				phy-handle = <&switch1phy5>;
			};

			port@6 {
				reg = <0x6>;
				label = "lan14";
				phy-handle = <&switch1phy6>;
			};

			port@7 {
				reg = <0x7>;
				label = "lan15";
				phy-handle = <&switch1phy7>;
			};

			port@8 {
				reg = <0x8>;
				label = "lan16";
				phy-handle = <&switch1phy8>;
			};

			switch1port9: port@9 {
				reg = <0x9>;
				label = "dsa";
				phy-mode = "2500base-x";
				managed = "in-band-status";
				link = <&switch0port10>;
			};

			switch1port10: port@a {
				reg = <0xa>;
				label = "dsa";
				phy-mode = "2500base-x";
				managed = "in-band-status";
				link = <&switch2port9>;
				status = "disabled";
			};

			port-sfp@a {
				reg = <0xa>;
				label = "sfp";
				sfp = <&sfp>;
				phy-mode = "sgmii";
				managed = "in-band-status";
				status = "disabled";
			};
		};
	};

	switch1@2 {
		compatible = "marvell,mv88e6085";
		reg = <0x2 0>;
		dsa,member = <0 1>;
		interrupt-parent = <&moxtet>;
		interrupts = <MOXTET_IRQ_TOPAZ>;
		status = "disabled";

		mdio {
			#address-cells = <1>;
			#size-cells = <0>;

			switch1phy1_topaz: switch1phy1@11 {
				reg = <0x11>;
			};

			switch1phy2_topaz: switch1phy2@12 {
				reg = <0x12>;
			};

			switch1phy3_topaz: switch1phy3@13 {
				reg = <0x13>;
			};

			switch1phy4_topaz: switch1phy4@14 {
				reg = <0x14>;
			};
		};

		ports {
			#address-cells = <1>;
			#size-cells = <0>;

			port@1 {
				reg = <0x1>;
				label = "lan9";
				phy-handle = <&switch1phy1_topaz>;
			};

			port@2 {
				reg = <0x2>;
				label = "lan10";
				phy-handle = <&switch1phy2_topaz>;
			};

			port@3 {
				reg = <0x3>;
				label = "lan11";
				phy-handle = <&switch1phy3_topaz>;
			};

			port@4 {
				reg = <0x4>;
				label = "lan12";
				phy-handle = <&switch1phy4_topaz>;
			};

			port@5 {
				reg = <0x5>;
				label = "dsa";
				phy-mode = "2500base-x";
				managed = "in-band-status";
				link = <&switch0port10>;
			};
		};
	};

	switch2@12 {
		compatible = "marvell,mv88e6190";
		reg = <0x12 0>;
		dsa,member = <0 2>;
		interrupt-parent = <&moxtet>;
		interrupts = <MOXTET_IRQ_PERIDOT(2)>;
		status = "disabled";

		mdio {
			#address-cells = <1>;
			#size-cells = <0>;

			switch2phy1: switch2phy1@1 {
				reg = <0x1>;
			};

			switch2phy2: switch2phy2@2 {
				reg = <0x2>;
			};

			switch2phy3: switch2phy3@3 {
				reg = <0x3>;
			};

			switch2phy4: switch2phy4@4 {
				reg = <0x4>;
			};

			switch2phy5: switch2phy5@5 {
				reg = <0x5>;
			};

			switch2phy6: switch2phy6@6 {
				reg = <0x6>;
			};

			switch2phy7: switch2phy7@7 {
				reg = <0x7>;
			};

			switch2phy8: switch2phy8@8 {
				reg = <0x8>;
			};
		};

		ports {
			#address-cells = <1>;
			#size-cells = <0>;

			port@1 {
				reg = <0x1>;
				label = "lan17";
				phy-handle = <&switch2phy1>;
			};

			port@2 {
				reg = <0x2>;
				label = "lan18";
				phy-handle = <&switch2phy2>;
			};

			port@3 {
				reg = <0x3>;
				label = "lan19";
				phy-handle = <&switch2phy3>;
			};

			port@4 {
				reg = <0x4>;
				label = "lan20";
				phy-handle = <&switch2phy4>;
			};

			port@5 {
				reg = <0x5>;
				label = "lan21";
				phy-handle = <&switch2phy5>;
			};

			port@6 {
				reg = <0x6>;
				label = "lan22";
				phy-handle = <&switch2phy6>;
			};

			port@7 {
				reg = <0x7>;
				label = "lan23";
				phy-handle = <&switch2phy7>;
			};

			port@8 {
				reg = <0x8>;
				label = "lan24";
				phy-handle = <&switch2phy8>;
			};

			switch2port9: port@9 {
				reg = <0x9>;
				label = "dsa";
				phy-mode = "2500base-x";
				managed = "in-band-status";
				link = <&switch1port10 &switch0port10>;
			};

			port-sfp@a {
				reg = <0xa>;
				label = "sfp";
				sfp = <&sfp>;
				phy-mode = "sgmii";
				managed = "in-band-status";
				status = "disabled";
			};
		};
	};

	switch2@2 {
		compatible = "marvell,mv88e6085";
		reg = <0x2 0>;
		dsa,member = <0 2>;
		interrupt-parent = <&moxtet>;
		interrupts = <MOXTET_IRQ_TOPAZ>;
		status = "disabled";

		mdio {
			#address-cells = <1>;
			#size-cells = <0>;

			switch2phy1_topaz: switch2phy1@11 {
				reg = <0x11>;
			};

			switch2phy2_topaz: switch2phy2@12 {
				reg = <0x12>;
			};

			switch2phy3_topaz: switch2phy3@13 {
				reg = <0x13>;
			};

			switch2phy4_topaz: switch2phy4@14 {
				reg = <0x14>;
			};
		};

		ports {
			#address-cells = <1>;
			#size-cells = <0>;

			port@1 {
				reg = <0x1>;
				label = "lan17";
				phy-handle = <&switch2phy1_topaz>;
			};

			port@2 {
				reg = <0x2>;
				label = "lan18";
				phy-handle = <&switch2phy2_topaz>;
			};

			port@3 {
				reg = <0x3>;
				label = "lan19";
				phy-handle = <&switch2phy3_topaz>;
			};

			port@4 {
				reg = <0x4>;
				label = "lan20";
				phy-handle = <&switch2phy4_topaz>;
			};

			port@5 {
				reg = <0x5>;
				label = "dsa";
				phy-mode = "2500base-x";
				managed = "in-band-status";
				link = <&switch1port10 &switch0port10>;
			};
		};
	};
};
+12
arch/arm64/boot/dts/marvell/armada-37xx.dtsi
···
 		function = "spi";
 	};

+	spi_cs1_pins: spi-cs1-pins {
+		groups = "spi_cs1";
+		function = "spi";
+	};
+
 	i2c1_pins: i2c1-pins {
 		groups = "i2c1";
 		function = "i2c";

···
 	interrupt-names = "mem", "ring0", "ring1",
 			  "ring2", "ring3", "eip";
 	clocks = <&nb_periph_clk 15>;
+};
+
+rwtm: mailbox@b0000 {
+	compatible = "marvell,armada-3700-rwtm-mailbox";
+	reg = <0xb0000 0x100>;
+	interrupts = <GIC_SPI 18 IRQ_TYPE_LEVEL_HIGH>;
+	#mbox-cells = <1>;
 };

 sdhci1: sdhci@d0000 {
+25 -12
arch/arm64/boot/dts/marvell/armada-7040-db.dts
···
 	gpio = <&expander0 1 GPIO_ACTIVE_HIGH>;
 	vin-supply = <&cp0_exp_usb3_1_current_regulator>;
 };
-
-cp0_usb3_0_phy: cp0-usb3-0-phy {
-	compatible = "usb-nop-xceiv";
-	vcc-supply = <&cp0_reg_usb3_0_vbus>;
-};
-
-cp0_usb3_1_phy: cp0-usb3-1-phy {
-	compatible = "usb-nop-xceiv";
-	vcc-supply = <&cp0_reg_usb3_1_vbus>;
-};
};

&i2c0 {
···
&cp0_pcie2 {
 	status = "okay";
+	phys = <&cp0_comphy5 2>;
+	phy-names = "cp0-pcie2-x1-phy";
};

&cp0_i2c0 {
···
&cp0_sata0 {
 	status = "okay";
+
+	sata-port@1 {
+		phys = <&cp0_comphy3 1>;
+		phy-names = "cp0-sata0-1-phy";
+	};
+};
+
+&cp0_comphy1 {
+	cp0_usbh0_con: connector {
+		compatible = "usb-a-connector";
+		phy-supply = <&cp0_reg_usb3_0_vbus>;
+	};
};

&cp0_usb3_0 {
-	usb-phy = <&cp0_usb3_0_phy>;
+	phys = <&cp0_comphy1 0>;
+	phy-names = "cp0-usb3h0-comphy";
 	status = "okay";
};

+&cp0_comphy4 {
+	cp0_usbh1_con: connector {
+		compatible = "usb-a-connector";
+		phy-supply = <&cp0_reg_usb3_1_vbus>;
+	};
+};
+
&cp0_usb3_1 {
-	usb-phy = <&cp0_usb3_1_phy>;
+	phys = <&cp0_comphy4 1>;
+	phy-names = "cp0-usb3h1-comphy";
 	status = "okay";
};

+16 -6
arch/arm64/boot/dts/marvell/armada-8040-clearfog-gt-8k.dts
···
 	status = "okay";
 };

-usb3h0_phy: usb3_phy0 {
-	compatible = "usb-nop-xceiv";
-	vcc-supply = <&v_5v0_usb3_hst_vbus>;
-};
-
 sfp_cp0_eth0: sfp-cp0-eth0 {
 	compatible = "sff,sfp";
 	i2c-bus = <&cp0_i2c1>;
···
 	pinctrl-names = "default";
 	pinctrl-0 = <&cp0_pci0_reset_pins &cp0_wlan_disable_pins>;
 	reset-gpios = <&cp0_gpio2 0 GPIO_ACTIVE_LOW>;
+	phys = <&cp0_comphy0 0>;
+	phy-names = "cp0-pcie0-x1-phy";
 	status = "okay";
 };

···
&cp1_sata0 {
 	pinctrl-0 = <&cp0_pci1_reset_pins>;
 	status = "okay";
+
+	sata-port@1 {
+		phys = <&cp1_comphy0 1>;
+		phy-names = "cp1-sata0-1-phy";
+	};
};

&cp1_mdio {
···
 	};
 };

+&cp1_comphy2 {
+	cp1_usbh0_con: connector {
+		compatible = "usb-a-connector";
+		phy-supply = <&v_5v0_usb3_hst_vbus>;
+	};
+};
+
&cp1_usb3_0 {
-	usb-phy = <&usb3h0_phy>;
+	phys = <&cp1_comphy2 0>;
+	phy-names = "cp1-usb3h0-comphy";
 	status = "okay";
};
+37 -6
arch/arm64/boot/dts/marvell/armada-8040-db.dts
··· 54 54 vcc-supply = <&cp0_reg_usb3_0_vbus>; 55 55 }; 56 56 57 - cp0_usb3_1_phy: cp0-usb3-1-phy { 58 - compatible = "usb-nop-xceiv"; 59 - vcc-supply = <&cp0_reg_usb3_1_vbus>; 60 - }; 61 - 62 57 cp1_reg_usb3_0_vbus: cp1-usb3-0-vbus { 63 58 compatible = "regulator-fixed"; 64 59 regulator-name = "cp1-usb3h0-vbus"; ··· 103 108 104 109 /* CON6 on CP0 expansion */ 105 110 &cp0_pcie0 { 111 + phys = <&cp0_comphy0 0>; 112 + phy-names = "cp0-pcie0-x1-phy"; 106 113 status = "okay"; 107 114 }; 108 115 109 116 /* CON5 on CP0 expansion */ 110 117 &cp0_pcie2 { 118 + phys = <&cp0_comphy5 2>; 119 + phy-names = "cp0-pcie2-x1-phy"; 111 120 status = "okay"; 112 121 }; 113 122 ··· 142 143 /* CON4 on CP0 expansion */ 143 144 &cp0_sata0 { 144 145 status = "okay"; 146 + 147 + sata-port@0 { 148 + phys = <&cp0_comphy1 0>; 149 + phy-names = "cp0-sata0-0-phy"; 150 + }; 151 + sata-port@1 { 152 + phys = <&cp0_comphy3 1>; 153 + phy-names = "cp0-sata0-1-phy"; 154 + }; 145 155 }; 146 156 147 157 /* CON9 on CP0 expansion */ ··· 159 151 status = "okay"; 160 152 }; 161 153 154 + &cp0_comphy4 { 155 + cp0_usbh1_con: connector { 156 + compatible = "usb-a-connector"; 157 + phy-supply = <&cp0_reg_usb3_1_vbus>; 158 + }; 159 + }; 160 + 162 161 /* CON10 on CP0 expansion */ 163 162 &cp0_usb3_1 { 164 - usb-phy = <&cp0_usb3_1_phy>; 163 + phys = <&cp0_comphy4 1>; 164 + phy-names = "cp0-usb3h1-comphy"; 165 165 status = "okay"; 166 166 }; 167 167 ··· 203 187 204 188 /* CON6 on CP1 expansion */ 205 189 &cp1_pcie0 { 190 + phys = <&cp1_comphy0 0>; 191 + phy-names = "cp1-pcie0-x1-phy"; 206 192 status = "okay"; 207 193 }; 208 194 209 195 /* CON7 on CP1 expansion */ 210 196 &cp1_pcie1 { 197 + phys = <&cp1_comphy4 1>; 198 + phy-names = "cp1-pcie1-x1-phy"; 211 199 status = "okay"; 212 200 }; 213 201 214 202 /* CON5 on CP1 expansion */ 215 203 &cp1_pcie2 { 204 + phys = <&cp1_comphy5 2>; 205 + phy-names = "cp1-pcie2-x1-phy"; 216 206 status = "okay"; 217 207 }; 218 208 ··· 295 273 /* CON4 on CP1 expansion */ 296 274 
&cp1_sata0 { 297 275 status = "okay"; 276 + 277 + sata-port@0 { 278 + phys = <&cp1_comphy1 0>; 279 + phy-names = "cp1-sata0-0-phy"; 280 + }; 281 + sata-port@1 { 282 + phys = <&cp1_comphy3 1>; 283 + phy-names = "cp1-sata0-1-phy"; 284 + }; 298 285 }; 299 286 300 287 /* CON9 on CP1 expansion */
+31 -9
arch/arm64/boot/dts/marvell/armada-8040-mcbin.dtsi
··· 61 61 status = "okay"; 62 62 }; 63 63 64 - usb3h0_phy: usb3_phy0 { 65 - compatible = "usb-nop-xceiv"; 66 - vcc-supply = <&v_5v0_usb3_hst_vbus>; 67 - }; 68 - 69 64 sfp_eth0: sfp-eth0 { 70 65 /* CON15,16 - CPM lane 4 */ 71 66 compatible = "sff,sfp"; ··· 181 186 reset-gpios = <&cp0_gpio2 20 GPIO_ACTIVE_LOW>; 182 187 ranges = <0x81000000 0x0 0xf9010000 0x0 0xf9010000 0x0 0x10000 183 188 0x82000000 0x0 0xc0000000 0x0 0xc0000000 0x0 0x20000000>; 189 + phys = <&cp0_comphy0 0>, <&cp0_comphy1 0>, 190 + <&cp0_comphy2 0>, <&cp0_comphy3 0>; 191 + phy-names = "cp0-pcie0-x4-lane0-phy", "cp0-pcie0-x4-lane1-phy", 192 + "cp0-pcie0-x4-lane2-phy", "cp0-pcie0-x4-lane3-phy"; 184 193 status = "okay"; 185 194 }; 186 195 ··· 238 239 }; 239 240 240 241 &cp0_sata0 { 241 - /* CPM Lane 0 - U29 */ 242 242 status = "okay"; 243 + 244 + /* CPM Lane 5 - U29 */ 245 + sata-port@1 { 246 + phys = <&cp0_comphy5 1>; 247 + phy-names = "cp0-sata0-1-phy"; 248 + }; 243 249 }; 244 250 245 251 &cp0_sdhci0 { ··· 328 324 }; 329 325 330 326 &cp1_sata0 { 331 - /* CPS Lane 1 - U32 */ 332 - /* CPS Lane 3 - U31 */ 333 327 status = "okay"; 328 + 329 + /* CPS Lane 1 - U32 */ 330 + sata-port@0 { 331 + phys = <&cp1_comphy1 0>; 332 + phy-names = "cp1-sata0-0-phy"; 333 + }; 334 + 335 + /* CPS Lane 3 - U31 */ 336 + sata-port@1 { 337 + phys = <&cp1_comphy3 1>; 338 + phy-names = "cp1-sata0-1-phy"; 339 + }; 334 340 }; 335 341 336 342 &cp1_spi1 { ··· 355 341 }; 356 342 }; 357 343 344 + &cp1_comphy2 { 345 + cp1_usbh0_con: connector { 346 + compatible = "usb-a-connector"; 347 + phy-supply = <&v_5v0_usb3_hst_vbus>; 348 + }; 349 + }; 350 + 358 351 &cp1_usb3_0 { 359 352 /* CPS Lane 2 - CON7 */ 360 - usb-phy = <&usb3h0_phy>; 353 + phys = <&cp1_comphy2 0>; 354 + phy-names = "cp1-usb3h0-comphy"; 361 355 status = "okay"; 362 356 };
+4 -1
arch/arm64/boot/dts/marvell/armada-ap806-quad.dtsi
··· 21 21 reg = <0x000>; 22 22 enable-method = "psci"; 23 23 #cooling-cells = <2>; 24 + clocks = <&cpu_clk 0>; 24 25 }; 25 26 cpu1: cpu@1 { 26 27 device_type = "cpu"; ··· 29 28 reg = <0x001>; 30 29 enable-method = "psci"; 31 30 #cooling-cells = <2>; 31 + clocks = <&cpu_clk 0>; 32 32 }; 33 33 cpu2: cpu@100 { 34 34 device_type = "cpu"; ··· 37 35 reg = <0x100>; 38 36 enable-method = "psci"; 39 37 #cooling-cells = <2>; 38 + clocks = <&cpu_clk 1>; 40 39 }; 41 40 cpu3: cpu@101 { 42 41 device_type = "cpu"; ··· 45 42 reg = <0x101>; 46 43 enable-method = "psci"; 47 44 #cooling-cells = <2>; 45 + clocks = <&cpu_clk 1>; 48 46 }; 49 47 }; 50 - 51 48 };
+7
arch/arm64/boot/dts/marvell/armada-ap806.dtsi
··· 280 280 #address-cells = <1>; 281 281 #size-cells = <1>; 282 282 283 + cpu_clk: clock-cpu@278 { 284 + compatible = "marvell,ap806-cpu-clock"; 285 + clocks = <&ap_clk 0>, <&ap_clk 1>; 286 + #clock-cells = <1>; 287 + reg = <0x278 0xa30>; 288 + }; 289 + 283 290 ap_thermal: thermal-sensor@80 { 284 291 compatible = "marvell,armada-ap806-thermal"; 285 292 reg = <0x80 0x10>;
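The new `cpu_clk` node exposes one clock output per AP806 cluster, and the cpu nodes above consume it pairwise: cpu@0/cpu@1 reference `<&cpu_clk 0>` while cpu@100/cpu@101 reference `<&cpu_clk 1>`. The mapping follows from the `reg` values, which carry the cluster number in bits 15:8. A minimal sketch of that relationship (the helper name is hypothetical, not part of the commit):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper: the AP806 has two dual-core clusters, and the
 * "marvell,ap806-cpu-clock" node provides one clock cell per cluster.
 * The cpu nodes' reg values encode the cluster in bits 15:8, which is
 * why cpu@0/cpu@1 take <&cpu_clk 0> and cpu@100/cpu@101 take
 * <&cpu_clk 1> in the diff above. */
static unsigned int ap806_cluster_of(uint32_t cpu_reg)
{
	return (cpu_reg >> 8) & 0xff;
}
```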
+13
arch/arm64/boot/dts/marvell/armada-cp110.dtsi
··· 133 133 compatible = "marvell,comphy-cp110"; 134 134 reg = <0x120000 0x6000>; 135 135 marvell,system-controller = <&CP110_LABEL(syscon0)>; 136 + clocks = <&CP110_LABEL(clk) 1 5>, <&CP110_LABEL(clk) 1 6>, 137 + <&CP110_LABEL(clk) 1 18>; 138 + clock-names = "mg_clk", "mg_core_clk", "axi_clk"; 136 139 #address-cells = <1>; 137 140 #size-cells = <0>; 138 141 ··· 309 306 interrupts = <107 IRQ_TYPE_LEVEL_HIGH>; 310 307 clocks = <&CP110_LABEL(clk) 1 15>, 311 308 <&CP110_LABEL(clk) 1 16>; 309 + #address-cells = <1>; 310 + #size-cells = <0>; 312 311 status = "disabled"; 312 + 313 + sata-port@0 { 314 + reg = <0>; 315 + }; 316 + 317 + sata-port@1 { 318 + reg = <1>; 319 + }; 313 320 }; 314 321 315 322 CP110_LABEL(xor0): xor@6a0000 {
+10
drivers/bus/Kconfig
··· 29 29 arbiter. This driver provides timeout and target abort error handling 30 30 and internal bus master decoding. 31 31 32 + config MOXTET 33 + tristate "CZ.NIC Turris Mox module configuration bus" 34 + depends on SPI_MASTER && OF 35 + help 36 + Say yes here to add support for the module configuration bus found 37 + on CZ.NIC's Turris Mox. This is needed for the ability to discover 38 + the order in which the modules are connected and to get/set some of 39 + their settings. For example the GPIOs on Mox SFP module are 40 + configured through this bus. 41 + 32 42 config HISILICON_LPC 33 43 bool "Support for ISA I/O space on HiSilicon Hip06/7" 34 44 depends on ARM64 && (ARCH_HISI || COMPILE_TEST)
+1
drivers/bus/Makefile
··· 8 8 9 9 obj-$(CONFIG_HISILICON_LPC) += hisi_lpc.o 10 10 obj-$(CONFIG_BRCMSTB_GISB_ARB) += brcmstb_gisb.o 11 + obj-$(CONFIG_MOXTET) += moxtet.o 11 12 12 13 # DPAA2 fsl-mc bus 13 14 obj-$(CONFIG_FSL_MC_BUS) += fsl-mc/
+886
drivers/bus/moxtet.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Turris Mox module configuration bus driver 4 + * 5 + * Copyright (C) 2019 Marek Behun <marek.behun@nic.cz> 6 + */ 7 + 8 + #include <dt-bindings/bus/moxtet.h> 9 + #include <linux/bitops.h> 10 + #include <linux/debugfs.h> 11 + #include <linux/interrupt.h> 12 + #include <linux/module.h> 13 + #include <linux/moxtet.h> 14 + #include <linux/mutex.h> 15 + #include <linux/of_device.h> 16 + #include <linux/of_irq.h> 17 + #include <linux/spi/spi.h> 18 + 19 + /* 20 + * @name: module name for sysfs 21 + * @hwirq_base: base index for IRQ for this module (-1 if no IRQs) 22 + * @nirqs: how many interrupts does the shift register provide 23 + * @desc: module description for kernel log 24 + */ 25 + static const struct { 26 + const char *name; 27 + int hwirq_base; 28 + int nirqs; 29 + const char *desc; 30 + } mox_module_table[] = { 31 + /* do not change order of this array! */ 32 + { NULL, 0, 0, NULL }, 33 + { "sfp", -1, 0, "MOX D (SFP cage)" }, 34 + { "pci", MOXTET_IRQ_PCI, 1, "MOX B (Mini-PCIe)" }, 35 + { "topaz", MOXTET_IRQ_TOPAZ, 1, "MOX C (4 port switch)" }, 36 + { "peridot", MOXTET_IRQ_PERIDOT(0), 1, "MOX E (8 port switch)" }, 37 + { "usb3", MOXTET_IRQ_USB3, 2, "MOX F (USB 3.0)" }, 38 + { "pci-bridge", -1, 0, "MOX G (Mini-PCIe bridge)" }, 39 + }; 40 + 41 + static inline bool mox_module_known(unsigned int id) 42 + { 43 + return id >= TURRIS_MOX_MODULE_FIRST && id <= TURRIS_MOX_MODULE_LAST; 44 + } 45 + 46 + static inline const char *mox_module_name(unsigned int id) 47 + { 48 + if (mox_module_known(id)) 49 + return mox_module_table[id].name; 50 + else 51 + return "unknown"; 52 + } 53 + 54 + #define DEF_MODULE_ATTR(name, fmt, ...) 
\ 55 + static ssize_t \ 56 + module_##name##_show(struct device *dev, struct device_attribute *a, \ 57 + char *buf) \ 58 + { \ 59 + struct moxtet_device *mdev = to_moxtet_device(dev); \ 60 + return sprintf(buf, (fmt), __VA_ARGS__); \ 61 + } \ 62 + static DEVICE_ATTR_RO(module_##name) 63 + 64 + DEF_MODULE_ATTR(id, "0x%x\n", mdev->id); 65 + DEF_MODULE_ATTR(name, "%s\n", mox_module_name(mdev->id)); 66 + DEF_MODULE_ATTR(description, "%s\n", 67 + mox_module_known(mdev->id) ? mox_module_table[mdev->id].desc 68 + : ""); 69 + 70 + static struct attribute *moxtet_dev_attrs[] = { 71 + &dev_attr_module_id.attr, 72 + &dev_attr_module_name.attr, 73 + &dev_attr_module_description.attr, 74 + NULL, 75 + }; 76 + 77 + static const struct attribute_group moxtet_dev_group = { 78 + .attrs = moxtet_dev_attrs, 79 + }; 80 + 81 + static const struct attribute_group *moxtet_dev_groups[] = { 82 + &moxtet_dev_group, 83 + NULL, 84 + }; 85 + 86 + static int moxtet_match(struct device *dev, struct device_driver *drv) 87 + { 88 + struct moxtet_device *mdev = to_moxtet_device(dev); 89 + struct moxtet_driver *tdrv = to_moxtet_driver(drv); 90 + const enum turris_mox_module_id *t; 91 + 92 + if (of_driver_match_device(dev, drv)) 93 + return 1; 94 + 95 + if (!tdrv->id_table) 96 + return 0; 97 + 98 + for (t = tdrv->id_table; *t; ++t) 99 + if (*t == mdev->id) 100 + return 1; 101 + 102 + return 0; 103 + } 104 + 105 + struct bus_type moxtet_bus_type = { 106 + .name = "moxtet", 107 + .dev_groups = moxtet_dev_groups, 108 + .match = moxtet_match, 109 + }; 110 + EXPORT_SYMBOL_GPL(moxtet_bus_type); 111 + 112 + int __moxtet_register_driver(struct module *owner, 113 + struct moxtet_driver *mdrv) 114 + { 115 + mdrv->driver.owner = owner; 116 + mdrv->driver.bus = &moxtet_bus_type; 117 + return driver_register(&mdrv->driver); 118 + } 119 + EXPORT_SYMBOL_GPL(__moxtet_register_driver); 120 + 121 + static int moxtet_dev_check(struct device *dev, void *data) 122 + { 123 + struct moxtet_device *mdev = 
to_moxtet_device(dev); 124 + struct moxtet_device *new_dev = data; 125 + 126 + if (mdev->moxtet == new_dev->moxtet && mdev->id == new_dev->id && 127 + mdev->idx == new_dev->idx) 128 + return -EBUSY; 129 + return 0; 130 + } 131 + 132 + static void moxtet_dev_release(struct device *dev) 133 + { 134 + struct moxtet_device *mdev = to_moxtet_device(dev); 135 + 136 + put_device(mdev->moxtet->dev); 137 + kfree(mdev); 138 + } 139 + 140 + static struct moxtet_device * 141 + moxtet_alloc_device(struct moxtet *moxtet) 142 + { 143 + struct moxtet_device *dev; 144 + 145 + if (!get_device(moxtet->dev)) 146 + return NULL; 147 + 148 + dev = kzalloc(sizeof(*dev), GFP_KERNEL); 149 + if (!dev) { 150 + put_device(moxtet->dev); 151 + return NULL; 152 + } 153 + 154 + dev->moxtet = moxtet; 155 + dev->dev.parent = moxtet->dev; 156 + dev->dev.bus = &moxtet_bus_type; 157 + dev->dev.release = moxtet_dev_release; 158 + 159 + device_initialize(&dev->dev); 160 + 161 + return dev; 162 + } 163 + 164 + static int moxtet_add_device(struct moxtet_device *dev) 165 + { 166 + static DEFINE_MUTEX(add_mutex); 167 + int ret; 168 + 169 + if (dev->idx >= TURRIS_MOX_MAX_MODULES || dev->id > 0xf) 170 + return -EINVAL; 171 + 172 + dev_set_name(&dev->dev, "moxtet-%s.%u", mox_module_name(dev->id), 173 + dev->idx); 174 + 175 + mutex_lock(&add_mutex); 176 + 177 + ret = bus_for_each_dev(&moxtet_bus_type, NULL, dev, 178 + moxtet_dev_check); 179 + if (ret) 180 + goto done; 181 + 182 + ret = device_add(&dev->dev); 183 + if (ret < 0) 184 + dev_err(dev->moxtet->dev, "can't add %s, status %d\n", 185 + dev_name(dev->moxtet->dev), ret); 186 + 187 + done: 188 + mutex_unlock(&add_mutex); 189 + return ret; 190 + } 191 + 192 + static int __unregister(struct device *dev, void *null) 193 + { 194 + if (dev->of_node) { 195 + of_node_clear_flag(dev->of_node, OF_POPULATED); 196 + of_node_put(dev->of_node); 197 + } 198 + 199 + device_unregister(dev); 200 + 201 + return 0; 202 + } 203 + 204 + static struct moxtet_device * 205 + 
of_register_moxtet_device(struct moxtet *moxtet, struct device_node *nc) 206 + { 207 + struct moxtet_device *dev; 208 + u32 val; 209 + int ret; 210 + 211 + dev = moxtet_alloc_device(moxtet); 212 + if (!dev) { 213 + dev_err(moxtet->dev, 214 + "Moxtet device alloc error for %pOF\n", nc); 215 + return ERR_PTR(-ENOMEM); 216 + } 217 + 218 + ret = of_property_read_u32(nc, "reg", &val); 219 + if (ret) { 220 + dev_err(moxtet->dev, "%pOF has no valid 'reg' property (%d)\n", 221 + nc, ret); 222 + goto err_put; 223 + } 224 + 225 + dev->idx = val; 226 + 227 + if (dev->idx >= TURRIS_MOX_MAX_MODULES) { 228 + dev_err(moxtet->dev, "%pOF Moxtet address 0x%x out of range\n", 229 + nc, dev->idx); 230 + ret = -EINVAL; 231 + goto err_put; 232 + } 233 + 234 + dev->id = moxtet->modules[dev->idx]; 235 + 236 + if (!dev->id) { 237 + dev_err(moxtet->dev, "%pOF Moxtet address 0x%x is empty\n", nc, 238 + dev->idx); 239 + ret = -ENODEV; 240 + goto err_put; 241 + } 242 + 243 + of_node_get(nc); 244 + dev->dev.of_node = nc; 245 + 246 + ret = moxtet_add_device(dev); 247 + if (ret) { 248 + dev_err(moxtet->dev, 249 + "Moxtet device register error for %pOF\n", nc); 250 + of_node_put(nc); 251 + goto err_put; 252 + } 253 + 254 + return dev; 255 + 256 + err_put: 257 + put_device(&dev->dev); 258 + return ERR_PTR(ret); 259 + } 260 + 261 + static void of_register_moxtet_devices(struct moxtet *moxtet) 262 + { 263 + struct moxtet_device *dev; 264 + struct device_node *nc; 265 + 266 + if (!moxtet->dev->of_node) 267 + return; 268 + 269 + for_each_available_child_of_node(moxtet->dev->of_node, nc) { 270 + if (of_node_test_and_set_flag(nc, OF_POPULATED)) 271 + continue; 272 + dev = of_register_moxtet_device(moxtet, nc); 273 + if (IS_ERR(dev)) { 274 + dev_warn(moxtet->dev, 275 + "Failed to create Moxtet device for %pOF\n", 276 + nc); 277 + of_node_clear_flag(nc, OF_POPULATED); 278 + } 279 + } 280 + } 281 + 282 + static void 283 + moxtet_register_devices_from_topology(struct moxtet *moxtet) 284 + { 285 + struct 
moxtet_device *dev; 286 + int i, ret; 287 + 288 + for (i = 0; i < moxtet->count; ++i) { 289 + dev = moxtet_alloc_device(moxtet); 290 + if (!dev) { 291 + dev_err(moxtet->dev, "Moxtet device %u alloc error\n", 292 + i); 293 + continue; 294 + } 295 + 296 + dev->idx = i; 297 + dev->id = moxtet->modules[i]; 298 + 299 + ret = moxtet_add_device(dev); 300 + if (ret && ret != -EBUSY) { 301 + put_device(&dev->dev); 302 + dev_err(moxtet->dev, 303 + "Moxtet device %u register error: %i\n", i, 304 + ret); 305 + } 306 + } 307 + } 308 + 309 + /* 310 + * @nsame: how many modules with same id are already in moxtet->modules 311 + */ 312 + static int moxtet_set_irq(struct moxtet *moxtet, int idx, int id, int nsame) 313 + { 314 + int i, first; 315 + struct moxtet_irqpos *pos; 316 + 317 + first = mox_module_table[id].hwirq_base + 318 + nsame * mox_module_table[id].nirqs; 319 + 320 + if (first + mox_module_table[id].nirqs > MOXTET_NIRQS) 321 + return -EINVAL; 322 + 323 + for (i = 0; i < mox_module_table[id].nirqs; ++i) { 324 + pos = &moxtet->irq.position[first + i]; 325 + pos->idx = idx; 326 + pos->bit = i; 327 + moxtet->irq.exists |= BIT(first + i); 328 + } 329 + 330 + return 0; 331 + } 332 + 333 + static int moxtet_find_topology(struct moxtet *moxtet) 334 + { 335 + u8 buf[TURRIS_MOX_MAX_MODULES]; 336 + int cnts[TURRIS_MOX_MODULE_LAST]; 337 + int i, ret; 338 + 339 + memset(cnts, 0, sizeof(cnts)); 340 + 341 + ret = spi_read(to_spi_device(moxtet->dev), buf, TURRIS_MOX_MAX_MODULES); 342 + if (ret < 0) 343 + return ret; 344 + 345 + if (buf[0] == TURRIS_MOX_CPU_ID_EMMC) { 346 + dev_info(moxtet->dev, "Found MOX A (eMMC CPU) module\n"); 347 + } else if (buf[0] == TURRIS_MOX_CPU_ID_SD) { 348 + dev_info(moxtet->dev, "Found MOX A (CPU) module\n"); 349 + } else { 350 + dev_err(moxtet->dev, "Invalid Turris MOX A CPU module 0x%02x\n", 351 + buf[0]); 352 + return -ENODEV; 353 + } 354 + 355 + moxtet->count = 0; 356 + 357 + for (i = 1; i < TURRIS_MOX_MAX_MODULES; ++i) { 358 + int id; 359 + 360 + if 
(buf[i] == 0xff) 361 + break; 362 + 363 + id = buf[i] & 0xf; 364 + 365 + moxtet->modules[i-1] = id; 366 + ++moxtet->count; 367 + 368 + if (mox_module_known(id)) { 369 + dev_info(moxtet->dev, "Found %s module\n", 370 + mox_module_table[id].desc); 371 + 372 + if (moxtet_set_irq(moxtet, i-1, id, cnts[id]++) < 0) 373 + dev_err(moxtet->dev, 374 + " Cannot set IRQ for module %s\n", 375 + mox_module_table[id].desc); 376 + } else { 377 + dev_warn(moxtet->dev, 378 + "Unknown Moxtet module found (ID 0x%02x)\n", 379 + id); 380 + } 381 + } 382 + 383 + return 0; 384 + } 385 + 386 + static int moxtet_spi_read(struct moxtet *moxtet, u8 *buf) 387 + { 388 + struct spi_transfer xfer = { 389 + .rx_buf = buf, 390 + .tx_buf = moxtet->tx, 391 + .len = moxtet->count + 1 392 + }; 393 + int ret; 394 + 395 + mutex_lock(&moxtet->lock); 396 + 397 + ret = spi_sync_transfer(to_spi_device(moxtet->dev), &xfer, 1); 398 + 399 + mutex_unlock(&moxtet->lock); 400 + 401 + return ret; 402 + } 403 + 404 + int moxtet_device_read(struct device *dev) 405 + { 406 + struct moxtet_device *mdev = to_moxtet_device(dev); 407 + struct moxtet *moxtet = mdev->moxtet; 408 + u8 buf[TURRIS_MOX_MAX_MODULES]; 409 + int ret; 410 + 411 + if (mdev->idx >= moxtet->count) 412 + return -EINVAL; 413 + 414 + ret = moxtet_spi_read(moxtet, buf); 415 + if (ret < 0) 416 + return ret; 417 + 418 + return buf[mdev->idx + 1] >> 4; 419 + } 420 + EXPORT_SYMBOL_GPL(moxtet_device_read); 421 + 422 + int moxtet_device_write(struct device *dev, u8 val) 423 + { 424 + struct moxtet_device *mdev = to_moxtet_device(dev); 425 + struct moxtet *moxtet = mdev->moxtet; 426 + int ret; 427 + 428 + if (mdev->idx >= moxtet->count) 429 + return -EINVAL; 430 + 431 + mutex_lock(&moxtet->lock); 432 + 433 + moxtet->tx[moxtet->count - mdev->idx] = val; 434 + 435 + ret = spi_write(to_spi_device(moxtet->dev), moxtet->tx, 436 + moxtet->count + 1); 437 + 438 + mutex_unlock(&moxtet->lock); 439 + 440 + return ret; 441 + } 442 + EXPORT_SYMBOL_GPL(moxtet_device_write); 
443 + 444 + int moxtet_device_written(struct device *dev) 445 + { 446 + struct moxtet_device *mdev = to_moxtet_device(dev); 447 + struct moxtet *moxtet = mdev->moxtet; 448 + 449 + if (mdev->idx >= moxtet->count) 450 + return -EINVAL; 451 + 452 + return moxtet->tx[moxtet->count - mdev->idx]; 453 + } 454 + EXPORT_SYMBOL_GPL(moxtet_device_written); 455 + 456 + #ifdef CONFIG_DEBUG_FS 457 + static int moxtet_debug_open(struct inode *inode, struct file *file) 458 + { 459 + file->private_data = inode->i_private; 460 + 461 + return nonseekable_open(inode, file); 462 + } 463 + 464 + static ssize_t input_read(struct file *file, char __user *buf, size_t len, 465 + loff_t *ppos) 466 + { 467 + struct moxtet *moxtet = file->private_data; 468 + u8 bin[TURRIS_MOX_MAX_MODULES]; 469 + u8 hex[sizeof(buf) * 2 + 1]; 470 + int ret, n; 471 + 472 + ret = moxtet_spi_read(moxtet, bin); 473 + if (ret < 0) 474 + return ret; 475 + 476 + n = moxtet->count + 1; 477 + bin2hex(hex, bin, n); 478 + 479 + hex[2*n] = '\n'; 480 + 481 + return simple_read_from_buffer(buf, len, ppos, hex, 2*n + 1); 482 + } 483 + 484 + static const struct file_operations input_fops = { 485 + .owner = THIS_MODULE, 486 + .open = moxtet_debug_open, 487 + .read = input_read, 488 + .llseek = no_llseek, 489 + }; 490 + 491 + static ssize_t output_read(struct file *file, char __user *buf, size_t len, 492 + loff_t *ppos) 493 + { 494 + struct moxtet *moxtet = file->private_data; 495 + u8 hex[TURRIS_MOX_MAX_MODULES * 2 + 1]; 496 + u8 *p = hex; 497 + int i; 498 + 499 + mutex_lock(&moxtet->lock); 500 + 501 + for (i = 0; i < moxtet->count; ++i) 502 + p = hex_byte_pack(p, moxtet->tx[moxtet->count - i]); 503 + 504 + mutex_unlock(&moxtet->lock); 505 + 506 + *p++ = '\n'; 507 + 508 + return simple_read_from_buffer(buf, len, ppos, hex, p - hex); 509 + } 510 + 511 + static ssize_t output_write(struct file *file, const char __user *buf, 512 + size_t len, loff_t *ppos) 513 + { 514 + struct moxtet *moxtet = file->private_data; 515 + u8 
bin[TURRIS_MOX_MAX_MODULES]; 516 + u8 hex[sizeof(bin) * 2 + 1]; 517 + size_t res; 518 + loff_t dummy = 0; 519 + int err, i; 520 + 521 + if (len > 2 * moxtet->count + 1 || len < 2 * moxtet->count) 522 + return -EINVAL; 523 + 524 + res = simple_write_to_buffer(hex, sizeof(hex), &dummy, buf, len); 525 + if (res < 0) 526 + return res; 527 + 528 + if (len % 2 == 1 && hex[len - 1] != '\n') 529 + return -EINVAL; 530 + 531 + err = hex2bin(bin, hex, moxtet->count); 532 + if (err < 0) 533 + return -EINVAL; 534 + 535 + mutex_lock(&moxtet->lock); 536 + 537 + for (i = 0; i < moxtet->count; ++i) 538 + moxtet->tx[moxtet->count - i] = bin[i]; 539 + 540 + err = spi_write(to_spi_device(moxtet->dev), moxtet->tx, 541 + moxtet->count + 1); 542 + 543 + mutex_unlock(&moxtet->lock); 544 + 545 + return err < 0 ? err : len; 546 + } 547 + 548 + static const struct file_operations output_fops = { 549 + .owner = THIS_MODULE, 550 + .open = moxtet_debug_open, 551 + .read = output_read, 552 + .write = output_write, 553 + .llseek = no_llseek, 554 + }; 555 + 556 + static int moxtet_register_debugfs(struct moxtet *moxtet) 557 + { 558 + struct dentry *root, *entry; 559 + 560 + root = debugfs_create_dir("moxtet", NULL); 561 + 562 + if (IS_ERR(root)) 563 + return PTR_ERR(root); 564 + 565 + entry = debugfs_create_file_unsafe("input", 0444, root, moxtet, 566 + &input_fops); 567 + if (IS_ERR(entry)) 568 + goto err_remove; 569 + 570 + entry = debugfs_create_file_unsafe("output", 0644, root, moxtet, 571 + &output_fops); 572 + if (IS_ERR(entry)) 573 + goto err_remove; 574 + 575 + moxtet->debugfs_root = root; 576 + 577 + return 0; 578 + err_remove: 579 + debugfs_remove_recursive(root); 580 + return PTR_ERR(entry); 581 + } 582 + 583 + static void moxtet_unregister_debugfs(struct moxtet *moxtet) 584 + { 585 + debugfs_remove_recursive(moxtet->debugfs_root); 586 + } 587 + #else 588 + static inline int moxtet_register_debugfs(struct moxtet *moxtet) 589 + { 590 + return 0; 591 + } 592 + 593 + static inline void 
moxtet_unregister_debugfs(struct moxtet *moxtet) 594 + { 595 + } 596 + #endif 597 + 598 + static int moxtet_irq_domain_map(struct irq_domain *d, unsigned int irq, 599 + irq_hw_number_t hw) 600 + { 601 + struct moxtet *moxtet = d->host_data; 602 + 603 + if (hw >= MOXTET_NIRQS || !(moxtet->irq.exists & BIT(hw))) { 604 + dev_err(moxtet->dev, "Invalid hw irq number\n"); 605 + return -EINVAL; 606 + } 607 + 608 + irq_set_chip_data(irq, d->host_data); 609 + irq_set_chip_and_handler(irq, &moxtet->irq.chip, handle_level_irq); 610 + 611 + return 0; 612 + } 613 + 614 + static int moxtet_irq_domain_xlate(struct irq_domain *d, 615 + struct device_node *ctrlr, 616 + const u32 *intspec, unsigned int intsize, 617 + unsigned long *out_hwirq, 618 + unsigned int *out_type) 619 + { 620 + struct moxtet *moxtet = d->host_data; 621 + int irq; 622 + 623 + if (WARN_ON(intsize < 1)) 624 + return -EINVAL; 625 + 626 + irq = intspec[0]; 627 + 628 + if (irq >= MOXTET_NIRQS || !(moxtet->irq.exists & BIT(irq))) 629 + return -EINVAL; 630 + 631 + *out_hwirq = irq; 632 + *out_type = IRQ_TYPE_NONE; 633 + return 0; 634 + } 635 + 636 + static const struct irq_domain_ops moxtet_irq_domain = { 637 + .map = moxtet_irq_domain_map, 638 + .xlate = moxtet_irq_domain_xlate, 639 + }; 640 + 641 + static void moxtet_irq_mask(struct irq_data *d) 642 + { 643 + struct moxtet *moxtet = irq_data_get_irq_chip_data(d); 644 + 645 + moxtet->irq.masked |= BIT(d->hwirq); 646 + } 647 + 648 + static void moxtet_irq_unmask(struct irq_data *d) 649 + { 650 + struct moxtet *moxtet = irq_data_get_irq_chip_data(d); 651 + 652 + moxtet->irq.masked &= ~BIT(d->hwirq); 653 + } 654 + 655 + static void moxtet_irq_print_chip(struct irq_data *d, struct seq_file *p) 656 + { 657 + struct moxtet *moxtet = irq_data_get_irq_chip_data(d); 658 + struct moxtet_irqpos *pos = &moxtet->irq.position[d->hwirq]; 659 + int id; 660 + 661 + id = moxtet->modules[pos->idx]; 662 + 663 + seq_printf(p, " moxtet-%s.%i#%i", mox_module_name(id), pos->idx, 664 + 
pos->bit); 665 + } 666 + 667 + static const struct irq_chip moxtet_irq_chip = { 668 + .name = "moxtet", 669 + .irq_mask = moxtet_irq_mask, 670 + .irq_unmask = moxtet_irq_unmask, 671 + .irq_print_chip = moxtet_irq_print_chip, 672 + }; 673 + 674 + static int moxtet_irq_read(struct moxtet *moxtet, unsigned long *map) 675 + { 676 + struct moxtet_irqpos *pos = moxtet->irq.position; 677 + u8 buf[TURRIS_MOX_MAX_MODULES]; 678 + int i, ret; 679 + 680 + ret = moxtet_spi_read(moxtet, buf); 681 + if (ret < 0) 682 + return ret; 683 + 684 + *map = 0; 685 + 686 + for_each_set_bit(i, &moxtet->irq.exists, MOXTET_NIRQS) { 687 + if (!(buf[pos[i].idx + 1] & BIT(4 + pos[i].bit))) 688 + set_bit(i, map); 689 + } 690 + 691 + return 0; 692 + } 693 + 694 + static irqreturn_t moxtet_irq_thread_fn(int irq, void *data) 695 + { 696 + struct moxtet *moxtet = data; 697 + unsigned long set; 698 + int nhandled = 0, i, sub_irq, ret; 699 + 700 + ret = moxtet_irq_read(moxtet, &set); 701 + if (ret < 0) 702 + goto out; 703 + 704 + set &= ~moxtet->irq.masked; 705 + 706 + do { 707 + for_each_set_bit(i, &set, MOXTET_NIRQS) { 708 + sub_irq = irq_find_mapping(moxtet->irq.domain, i); 709 + handle_nested_irq(sub_irq); 710 + dev_dbg(moxtet->dev, "%i irq\n", i); 711 + ++nhandled; 712 + } 713 + 714 + ret = moxtet_irq_read(moxtet, &set); 715 + if (ret < 0) 716 + goto out; 717 + 718 + set &= ~moxtet->irq.masked; 719 + } while (set); 720 + 721 + out: 722 + return (nhandled > 0 ? 
IRQ_HANDLED : IRQ_NONE); 723 + } 724 + 725 + static void moxtet_irq_free(struct moxtet *moxtet) 726 + { 727 + int i, irq; 728 + 729 + for (i = 0; i < MOXTET_NIRQS; ++i) { 730 + if (moxtet->irq.exists & BIT(i)) { 731 + irq = irq_find_mapping(moxtet->irq.domain, i); 732 + irq_dispose_mapping(irq); 733 + } 734 + } 735 + 736 + irq_domain_remove(moxtet->irq.domain); 737 + } 738 + 739 + static int moxtet_irq_setup(struct moxtet *moxtet) 740 + { 741 + int i, ret; 742 + 743 + moxtet->irq.domain = irq_domain_add_simple(moxtet->dev->of_node, 744 + MOXTET_NIRQS, 0, 745 + &moxtet_irq_domain, moxtet); 746 + if (moxtet->irq.domain == NULL) { 747 + dev_err(moxtet->dev, "Could not add IRQ domain\n"); 748 + return -ENOMEM; 749 + } 750 + 751 + for (i = 0; i < MOXTET_NIRQS; ++i) 752 + if (moxtet->irq.exists & BIT(i)) 753 + irq_create_mapping(moxtet->irq.domain, i); 754 + 755 + moxtet->irq.chip = moxtet_irq_chip; 756 + moxtet->irq.masked = ~0; 757 + 758 + ret = request_threaded_irq(moxtet->dev_irq, NULL, moxtet_irq_thread_fn, 759 + IRQF_ONESHOT, "moxtet", moxtet); 760 + if (ret < 0) 761 + goto err_free; 762 + 763 + return 0; 764 + 765 + err_free: 766 + moxtet_irq_free(moxtet); 767 + return ret; 768 + } 769 + 770 + static int moxtet_probe(struct spi_device *spi) 771 + { 772 + struct moxtet *moxtet; 773 + int ret; 774 + 775 + ret = spi_setup(spi); 776 + if (ret < 0) 777 + return ret; 778 + 779 + moxtet = devm_kzalloc(&spi->dev, sizeof(struct moxtet), 780 + GFP_KERNEL); 781 + if (!moxtet) 782 + return -ENOMEM; 783 + 784 + moxtet->dev = &spi->dev; 785 + spi_set_drvdata(spi, moxtet); 786 + 787 + mutex_init(&moxtet->lock); 788 + 789 + moxtet->dev_irq = of_irq_get(moxtet->dev->of_node, 0); 790 + if (moxtet->dev_irq == -EPROBE_DEFER) 791 + return -EPROBE_DEFER; 792 + 793 + if (moxtet->dev_irq <= 0) { 794 + dev_err(moxtet->dev, "No IRQ resource found\n"); 795 + return -ENXIO; 796 + } 797 + 798 + ret = moxtet_find_topology(moxtet); 799 + if (ret < 0) 800 + return ret; 801 + 802 + if 
(moxtet->irq.exists) { 803 + ret = moxtet_irq_setup(moxtet); 804 + if (ret < 0) 805 + return ret; 806 + } 807 + 808 + of_register_moxtet_devices(moxtet); 809 + moxtet_register_devices_from_topology(moxtet); 810 + 811 + ret = moxtet_register_debugfs(moxtet); 812 + if (ret < 0) 813 + dev_warn(moxtet->dev, "Failed creating debugfs entries: %i\n", 814 + ret); 815 + 816 + return 0; 817 + } 818 + 819 + static int moxtet_remove(struct spi_device *spi) 820 + { 821 + struct moxtet *moxtet = spi_get_drvdata(spi); 822 + int dummy; 823 + 824 + free_irq(moxtet->dev_irq, moxtet); 825 + 826 + moxtet_irq_free(moxtet); 827 + 828 + moxtet_unregister_debugfs(moxtet); 829 + 830 + dummy = device_for_each_child(moxtet->dev, NULL, __unregister); 831 + 832 + mutex_destroy(&moxtet->lock); 833 + 834 + return 0; 835 + } 836 + 837 + static const struct of_device_id moxtet_dt_ids[] = { 838 + { .compatible = "cznic,moxtet" }, 839 + {}, 840 + }; 841 + MODULE_DEVICE_TABLE(of, moxtet_dt_ids); 842 + 843 + static struct spi_driver moxtet_spi_driver = { 844 + .driver = { 845 + .name = "moxtet", 846 + .of_match_table = moxtet_dt_ids, 847 + }, 848 + .probe = moxtet_probe, 849 + .remove = moxtet_remove, 850 + }; 851 + 852 + static int __init moxtet_init(void) 853 + { 854 + int ret; 855 + 856 + ret = bus_register(&moxtet_bus_type); 857 + if (ret < 0) { 858 + pr_err("moxtet bus registration failed: %d\n", ret); 859 + goto error; 860 + } 861 + 862 + ret = spi_register_driver(&moxtet_spi_driver); 863 + if (ret < 0) { 864 + pr_err("moxtet spi driver registration failed: %d\n", ret); 865 + goto error_bus; 866 + } 867 + 868 + return 0; 869 + 870 + error_bus: 871 + bus_unregister(&moxtet_bus_type); 872 + error: 873 + return ret; 874 + } 875 + postcore_initcall_sync(moxtet_init); 876 + 877 + static void __exit moxtet_exit(void) 878 + { 879 + spi_unregister_driver(&moxtet_spi_driver); 880 + bus_unregister(&moxtet_bus_type); 881 + } 882 + module_exit(moxtet_exit); 883 + 884 + MODULE_AUTHOR("Marek Behun 
<marek.behun@nic.cz>"); 885 + MODULE_DESCRIPTION("CZ.NIC's Turris Mox module configuration bus"); 886 + MODULE_LICENSE("GPL v2");
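The byte stream that `moxtet_find_topology()` and `moxtet_irq_read()` parse has a simple shape: byte 0 identifies the CPU board, each following byte describes one module (low nibble = module ID, high nibble = input bits, where a *cleared* bit signals an active IRQ), and `0xff` terminates the chain. This matches the debugfs example in the new ABI documentation (`101214` = SD CPU board, PCIe module, Peridot module, IRQs inactive). A standalone decoder sketch, with names invented for illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative decoder (names hypothetical) for the bytes the driver
 * shifts in over SPI: byte 0 is the CPU board ID, each later byte is one
 * Moxtet module (low nibble = module ID, high nibble = input bits, a
 * cleared bit meaning "IRQ active"), and 0xff ends the chain -- mirroring
 * moxtet_find_topology() and the BIT(4 + pos->bit) test in
 * moxtet_irq_read(). */
struct mox_module {
	uint8_t id;         /* e.g. 2 = MOX B (Mini-PCIe) */
	uint8_t irq_active; /* nonzero if input bit 0 is pulled low */
};

static size_t mox_decode(const uint8_t *buf, size_t len,
			 struct mox_module *out, size_t max)
{
	size_t i, n = 0;

	for (i = 1; i < len && n < max; i++) {
		if (buf[i] == 0xff)
			break;
		out[n].id = buf[i] & 0xf;
		out[n].irq_active = !(buf[i] & 0x10);
		n++;
	}
	return n;
}
```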
+1 -1
drivers/clk/clk-scmi.c
··· 69 69 { 70 70 struct scmi_clk *clk = to_scmi_clk(hw); 71 71 72 - return clk->handle->clk_ops->rate_set(clk->handle, clk->id, 0, rate); 72 + return clk->handle->clk_ops->rate_set(clk->handle, clk->id, rate); 73 73 } 74 74 75 75 static int scmi_clk_enable(struct clk_hw *hw)
+1 -1
drivers/firmware/arm_scmi/Makefile
··· 2 2 obj-y = scmi-bus.o scmi-driver.o scmi-protocols.o 3 3 scmi-bus-y = bus.o 4 4 scmi-driver-y = driver.o 5 - scmi-protocols-y = base.o clock.o perf.o power.o sensors.o 5 + scmi-protocols-y = base.o clock.o perf.o power.o reset.o sensors.o 6 6 obj-$(CONFIG_ARM_SCMI_POWER_DOMAIN) += scmi_pm_domain.o
+1 -1
drivers/firmware/arm_scmi/base.c
··· 204 204 if (ret) 205 205 return ret; 206 206 207 - *(__le32 *)t->tx.buf = cpu_to_le32(id); 207 + put_unaligned_le32(id, t->tx.buf); 208 208 209 209 ret = scmi_do_xfer(handle, t); 210 210 if (!ret)
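The base.c change replaces a direct `*(__le32 *)t->tx.buf = cpu_to_le32(id)` store with `put_unaligned_le32()`: the transmit buffer is not guaranteed to be 4-byte aligned, so the store must not assume alignment. A portable userspace sketch of the same idea (the kernel's actual helper lives in `<asm/unaligned.h>`; this reimplementation is only for illustration):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Sketch of what put_unaligned_le32() guarantees: a 32-bit value is
 * written byte-wise in little-endian order, so the destination pointer
 * may have any alignment. memcpy lets the compiler lower this to a
 * single store where the architecture allows it. */
static void my_put_unaligned_le32(uint32_t val, void *p)
{
	uint8_t b[4] = {
		val & 0xff,
		(val >> 8) & 0xff,
		(val >> 16) & 0xff,
		(val >> 24) & 0xff,
	};
	memcpy(p, b, sizeof(b));
}
```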
+21 -12
drivers/firmware/arm_scmi/clock.c
···
56 56 struct scmi_clock_set_rate {
57 57 	__le32 flags;
58 58 #define CLOCK_SET_ASYNC		BIT(0)
59 -  #define CLOCK_SET_DELAYED	BIT(1)
59 +  #define CLOCK_SET_IGNORE_RESP	BIT(1)
60 60 #define CLOCK_SET_ROUND_UP	BIT(2)
61 61 #define CLOCK_SET_ROUND_AUTO	BIT(3)
62 62 	__le32 id;
···
67 67 struct clock_info {
68 68 	int num_clocks;
69 69 	int max_async_req;
   70 +	atomic_t cur_async_req;
70 71 	struct scmi_clock_info *clk;
71 72 };
72 73
···
107 106 	if (ret)
108 107 		return ret;
109 108
110 -  	*(__le32 *)t->tx.buf = cpu_to_le32(clk_id);
109 +  	put_unaligned_le32(clk_id, t->tx.buf);
111 110 	attr = t->rx.buf;
112 111
113 112 	ret = scmi_do_xfer(handle, t);
···
204 203 	if (ret)
205 204 		return ret;
206 205
207 -  	*(__le32 *)t->tx.buf = cpu_to_le32(clk_id);
206 +  	put_unaligned_le32(clk_id, t->tx.buf);
208 207
209 208 	ret = scmi_do_xfer(handle, t);
210 -  	if (!ret) {
211 -  		__le32 *pval = t->rx.buf;
212 -
213 -  		*value = le32_to_cpu(*pval);
214 -  		*value |= (u64)le32_to_cpu(*(pval + 1)) << 32;
215 -  	}
209 +  	if (!ret)
210 +  		*value = get_unaligned_le64(t->rx.buf);
216 211
217 212 	scmi_xfer_put(handle, t);
218 213 	return ret;
219 214 }
220 215
221 216 static int scmi_clock_rate_set(const struct scmi_handle *handle, u32 clk_id,
222 -  			       u32 config, u64 rate)
217 +  			       u64 rate)
223 218 {
224 219 	int ret;
    220 +	u32 flags = 0;
225 221 	struct scmi_xfer *t;
226 222 	struct scmi_clock_set_rate *cfg;
    223 +	struct clock_info *ci = handle->clk_priv;
227 224
228 225 	ret = scmi_xfer_get_init(handle, CLOCK_RATE_SET, SCMI_PROTOCOL_CLOCK,
229 226 				 sizeof(*cfg), 0, &t);
230 227 	if (ret)
231 228 		return ret;
232 229
    230 +	if (ci->max_async_req &&
    231 +	    atomic_inc_return(&ci->cur_async_req) < ci->max_async_req)
    232 +		flags |= CLOCK_SET_ASYNC;
    233 +
233 234 	cfg = t->tx.buf;
234 -  	cfg->flags = cpu_to_le32(config);
    235 +	cfg->flags = cpu_to_le32(flags);
235 236 	cfg->id = cpu_to_le32(clk_id);
236 237 	cfg->value_low = cpu_to_le32(rate & 0xffffffff);
237 238 	cfg->value_high = cpu_to_le32(rate >> 32);
238 239
239 -  	ret = scmi_do_xfer(handle, t);
    240 +	if (flags & CLOCK_SET_ASYNC)
    241 +		ret = scmi_do_xfer_with_response(handle, t);
    242 +	else
    243 +		ret = scmi_do_xfer(handle, t);
    244 +
    245 +	if (ci->max_async_req)
    246 +		atomic_dec(&ci->cur_async_req);
240 247
241 248 	scmi_xfer_put(handle, t);
242 249 	return ret;
+12 -6
drivers/firmware/arm_scmi/common.h
···
15 15 #include <linux/scmi_protocol.h>
16 16 #include <linux/types.h>
17 17
   18 +#include <asm/unaligned.h>
   19 +
18 20 #define PROTOCOL_REV_MINOR_MASK	GENMASK(15, 0)
19 21 #define PROTOCOL_REV_MAJOR_MASK	GENMASK(31, 16)
20 22 #define PROTOCOL_REV_MAJOR(x)	(u16)(FIELD_GET(PROTOCOL_REV_MAJOR_MASK, (x)))
···
50 48 /**
51 49  * struct scmi_msg_hdr - Message(Tx/Rx) header
52 50  *
53 -   * @id: The identifier of the command being sent
54 -   * @protocol_id: The identifier of the protocol used to send @id command
55 -   * @seq: The token to identify the message. when a message/command returns,
56 -   *	the platform returns the whole message header unmodified including
57 -   *	the token
   51 + * @id: The identifier of the message being sent
   52 + * @protocol_id: The identifier of the protocol used to send @id message
   53 + * @seq: The token to identify the message. When a message returns, the
   54 + *	platform returns the whole message header unmodified including the
   55 + *	token
58 56  * @status: Status of the transfer once it's complete
59 57  * @poll_completion: Indicate if the transfer needs to be polled for
60 58  *	completion or interrupt mode is used
···
86 84  * @rx: Receive message, the buffer should be pre-allocated to store
87 85  *	message. If request-ACK protocol is used, we can reuse the same
88 86  *	buffer for the rx path as we use for the tx path.
89 -   * @done: completion event
   87 + * @done: command message transmit completion event
   88 + * @async: pointer to delayed response message received event completion
90 89  */
91 90 struct scmi_xfer {
92 91 	struct scmi_msg_hdr hdr;
93 92 	struct scmi_msg tx;
94 93 	struct scmi_msg rx;
95 94 	struct completion done;
   95 +	struct completion *async_done;
96 96 };
97 97
98 98 void scmi_xfer_put(const struct scmi_handle *h, struct scmi_xfer *xfer);
99 99 int scmi_do_xfer(const struct scmi_handle *h, struct scmi_xfer *xfer);
   100 +int scmi_do_xfer_with_response(const struct scmi_handle *h,
   101 +			       struct scmi_xfer *xfer);
100 102 int scmi_xfer_get_init(const struct scmi_handle *h, u8 msg_id, u8 prot_id,
101 103 		       size_t tx_size, size_t rx_size, struct scmi_xfer **p);
102 104 int scmi_handle_put(const struct scmi_handle *handle);
+334 -234
drivers/firmware/arm_scmi/driver.c
··· 30 30 #include "common.h" 31 31 32 32 #define MSG_ID_MASK GENMASK(7, 0) 33 + #define MSG_XTRACT_ID(hdr) FIELD_GET(MSG_ID_MASK, (hdr)) 33 34 #define MSG_TYPE_MASK GENMASK(9, 8) 35 + #define MSG_XTRACT_TYPE(hdr) FIELD_GET(MSG_TYPE_MASK, (hdr)) 36 + #define MSG_TYPE_COMMAND 0 37 + #define MSG_TYPE_DELAYED_RESP 2 38 + #define MSG_TYPE_NOTIFICATION 3 34 39 #define MSG_PROTOCOL_ID_MASK GENMASK(17, 10) 40 + #define MSG_XTRACT_PROT_ID(hdr) FIELD_GET(MSG_PROTOCOL_ID_MASK, (hdr)) 35 41 #define MSG_TOKEN_ID_MASK GENMASK(27, 18) 36 42 #define MSG_XTRACT_TOKEN(hdr) FIELD_GET(MSG_TOKEN_ID_MASK, (hdr)) 37 43 #define MSG_TOKEN_MAX (MSG_XTRACT_TOKEN(MSG_TOKEN_ID_MASK) + 1) ··· 92 86 }; 93 87 94 88 /** 95 - * struct scmi_chan_info - Structure representing a SCMI channel informfation 89 + * struct scmi_chan_info - Structure representing a SCMI channel information 96 90 * 97 91 * @cl: Mailbox Client 98 92 * @chan: Transmit/Receive mailbox channel ··· 117 111 * @handle: Instance of SCMI handle to send to clients 118 112 * @version: SCMI revision information containing protocol version, 119 113 * implementation version and (sub-)vendor identification. 
120 - * @minfo: Message info 121 - * @tx_idr: IDR object to map protocol id to channel info pointer 114 + * @tx_minfo: Universal Transmit Message management info 115 + * @tx_idr: IDR object to map protocol id to Tx channel info pointer 116 + * @rx_idr: IDR object to map protocol id to Rx channel info pointer 122 117 * @protocols_imp: List of protocols implemented, currently maximum of 123 118 * MAX_PROTOCOLS_IMP elements allocated by the base protocol 124 119 * @node: List head ··· 130 123 const struct scmi_desc *desc; 131 124 struct scmi_revision_info version; 132 125 struct scmi_handle handle; 133 - struct scmi_xfers_info minfo; 126 + struct scmi_xfers_info tx_minfo; 134 127 struct idr tx_idr; 128 + struct idr rx_idr; 135 129 u8 *protocols_imp; 136 130 struct list_head node; 137 131 int users; ··· 190 182 static inline void scmi_dump_header_dbg(struct device *dev, 191 183 struct scmi_msg_hdr *hdr) 192 184 { 193 - dev_dbg(dev, "Command ID: %x Sequence ID: %x Protocol: %x\n", 185 + dev_dbg(dev, "Message ID: %x Sequence ID: %x Protocol: %x\n", 194 186 hdr->id, hdr->seq, hdr->protocol_id); 195 187 } 196 188 ··· 198 190 struct scmi_shared_mem __iomem *mem) 199 191 { 200 192 xfer->hdr.status = ioread32(mem->msg_payload); 201 - /* Skip the length of header and statues in payload area i.e 8 bytes*/ 193 + /* Skip the length of header and status in payload area i.e 8 bytes */ 202 194 xfer->rx.len = min_t(size_t, xfer->rx.len, ioread32(&mem->length) - 8); 203 195 204 196 /* Take a copy to the rx buffer.. */ 205 197 memcpy_fromio(xfer->rx.buf, mem->msg_payload + 4, xfer->rx.len); 206 - } 207 - 208 - /** 209 - * scmi_rx_callback() - mailbox client callback for receive messages 210 - * 211 - * @cl: client pointer 212 - * @m: mailbox message 213 - * 214 - * Processes one received message to appropriate transfer information and 215 - * signals completion of the transfer. 
216 - * 217 - * NOTE: This function will be invoked in IRQ context, hence should be 218 - * as optimal as possible. 219 - */ 220 - static void scmi_rx_callback(struct mbox_client *cl, void *m) 221 - { 222 - u16 xfer_id; 223 - struct scmi_xfer *xfer; 224 - struct scmi_chan_info *cinfo = client_to_scmi_chan_info(cl); 225 - struct device *dev = cinfo->dev; 226 - struct scmi_info *info = handle_to_scmi_info(cinfo->handle); 227 - struct scmi_xfers_info *minfo = &info->minfo; 228 - struct scmi_shared_mem __iomem *mem = cinfo->payload; 229 - 230 - xfer_id = MSG_XTRACT_TOKEN(ioread32(&mem->msg_header)); 231 - 232 - /* Are we even expecting this? */ 233 - if (!test_bit(xfer_id, minfo->xfer_alloc_table)) { 234 - dev_err(dev, "message for %d is not expected!\n", xfer_id); 235 - return; 236 - } 237 - 238 - xfer = &minfo->xfer_block[xfer_id]; 239 - 240 - scmi_dump_header_dbg(dev, &xfer->hdr); 241 - /* Is the message of valid length? */ 242 - if (xfer->rx.len > info->desc->max_msg_size) { 243 - dev_err(dev, "unable to handle %zu xfer(max %d)\n", 244 - xfer->rx.len, info->desc->max_msg_size); 245 - return; 246 - } 247 - 248 - scmi_fetch_response(xfer, mem); 249 - complete(&xfer->done); 250 198 } 251 199 252 200 /** ··· 211 247 * @hdr: pointer to header containing all the information on message id, 212 248 * protocol id and sequence id. 213 249 * 214 - * Return: 32-bit packed command header to be sent to the platform. 250 + * Return: 32-bit packed message header to be sent to the platform. 215 251 */ 216 252 static inline u32 pack_scmi_header(struct scmi_msg_hdr *hdr) 217 253 { 218 254 return FIELD_PREP(MSG_ID_MASK, hdr->id) | 219 255 FIELD_PREP(MSG_TOKEN_ID_MASK, hdr->seq) | 220 256 FIELD_PREP(MSG_PROTOCOL_ID_MASK, hdr->protocol_id); 257 + } 258 + 259 + /** 260 + * unpack_scmi_header() - unpacks and records message and protocol id 261 + * 262 + * @msg_hdr: 32-bit packed message header sent from the platform 263 + * @hdr: pointer to header to fetch message and protocol id. 
264 + */ 265 + static inline void unpack_scmi_header(u32 msg_hdr, struct scmi_msg_hdr *hdr) 266 + { 267 + hdr->id = MSG_XTRACT_ID(msg_hdr); 268 + hdr->protocol_id = MSG_XTRACT_PROT_ID(msg_hdr); 221 269 } 222 270 223 271 /** ··· 247 271 struct scmi_chan_info *cinfo = client_to_scmi_chan_info(cl); 248 272 struct scmi_shared_mem __iomem *mem = cinfo->payload; 249 273 274 + /* 275 + * Ideally channel must be free by now unless OS timeout last 276 + * request and platform continued to process the same, wait 277 + * until it releases the shared memory, otherwise we may endup 278 + * overwriting its response with new message payload or vice-versa 279 + */ 280 + spin_until_cond(ioread32(&mem->channel_status) & 281 + SCMI_SHMEM_CHAN_STAT_CHANNEL_FREE); 250 282 /* Mark channel busy + clear error */ 251 283 iowrite32(0x0, &mem->channel_status); 252 284 iowrite32(t->hdr.poll_completion ? 0 : SCMI_SHMEM_FLAG_INTR_ENABLED, ··· 269 285 * scmi_xfer_get() - Allocate one message 270 286 * 271 287 * @handle: Pointer to SCMI entity handle 288 + * @minfo: Pointer to Tx/Rx Message management info based on channel type 272 289 * 273 - * Helper function which is used by various command functions that are 290 + * Helper function which is used by various message functions that are 274 291 * exposed to clients of this driver for allocating a message traffic event. 275 292 * 276 293 * This function can sleep depending on pending requests already in the system ··· 280 295 * 281 296 * Return: 0 if all went fine, else corresponding error. 
282 297 */ 283 - static struct scmi_xfer *scmi_xfer_get(const struct scmi_handle *handle) 298 + static struct scmi_xfer *scmi_xfer_get(const struct scmi_handle *handle, 299 + struct scmi_xfers_info *minfo) 284 300 { 285 301 u16 xfer_id; 286 302 struct scmi_xfer *xfer; 287 303 unsigned long flags, bit_pos; 288 304 struct scmi_info *info = handle_to_scmi_info(handle); 289 - struct scmi_xfers_info *minfo = &info->minfo; 290 305 291 306 /* Keep the locked section as small as possible */ 292 307 spin_lock_irqsave(&minfo->xfer_lock, flags); ··· 309 324 } 310 325 311 326 /** 312 - * scmi_xfer_put() - Release a message 327 + * __scmi_xfer_put() - Release a message 313 328 * 314 - * @handle: Pointer to SCMI entity handle 329 + * @minfo: Pointer to Tx/Rx Message management info based on channel type 315 330 * @xfer: message that was reserved by scmi_xfer_get 316 331 * 317 332 * This holds a spinlock to maintain integrity of internal data structures. 318 333 */ 319 - void scmi_xfer_put(const struct scmi_handle *handle, struct scmi_xfer *xfer) 334 + static void 335 + __scmi_xfer_put(struct scmi_xfers_info *minfo, struct scmi_xfer *xfer) 320 336 { 321 337 unsigned long flags; 322 - struct scmi_info *info = handle_to_scmi_info(handle); 323 - struct scmi_xfers_info *minfo = &info->minfo; 324 338 325 339 /* 326 340 * Keep the locked section as small as possible ··· 329 345 spin_lock_irqsave(&minfo->xfer_lock, flags); 330 346 clear_bit(xfer->hdr.seq, minfo->xfer_alloc_table); 331 347 spin_unlock_irqrestore(&minfo->xfer_lock, flags); 348 + } 349 + 350 + /** 351 + * scmi_rx_callback() - mailbox client callback for receive messages 352 + * 353 + * @cl: client pointer 354 + * @m: mailbox message 355 + * 356 + * Processes one received message to appropriate transfer information and 357 + * signals completion of the transfer. 358 + * 359 + * NOTE: This function will be invoked in IRQ context, hence should be 360 + * as optimal as possible. 
361 + */ 362 + static void scmi_rx_callback(struct mbox_client *cl, void *m) 363 + { 364 + u8 msg_type; 365 + u32 msg_hdr; 366 + u16 xfer_id; 367 + struct scmi_xfer *xfer; 368 + struct scmi_chan_info *cinfo = client_to_scmi_chan_info(cl); 369 + struct device *dev = cinfo->dev; 370 + struct scmi_info *info = handle_to_scmi_info(cinfo->handle); 371 + struct scmi_xfers_info *minfo = &info->tx_minfo; 372 + struct scmi_shared_mem __iomem *mem = cinfo->payload; 373 + 374 + msg_hdr = ioread32(&mem->msg_header); 375 + msg_type = MSG_XTRACT_TYPE(msg_hdr); 376 + xfer_id = MSG_XTRACT_TOKEN(msg_hdr); 377 + 378 + if (msg_type == MSG_TYPE_NOTIFICATION) 379 + return; /* Notifications not yet supported */ 380 + 381 + /* Are we even expecting this? */ 382 + if (!test_bit(xfer_id, minfo->xfer_alloc_table)) { 383 + dev_err(dev, "message for %d is not expected!\n", xfer_id); 384 + return; 385 + } 386 + 387 + xfer = &minfo->xfer_block[xfer_id]; 388 + 389 + scmi_dump_header_dbg(dev, &xfer->hdr); 390 + 391 + scmi_fetch_response(xfer, mem); 392 + 393 + if (msg_type == MSG_TYPE_DELAYED_RESP) 394 + complete(xfer->async_done); 395 + else 396 + complete(&xfer->done); 397 + } 398 + 399 + /** 400 + * scmi_xfer_put() - Release a transmit message 401 + * 402 + * @handle: Pointer to SCMI entity handle 403 + * @xfer: message that was reserved by scmi_xfer_get 404 + */ 405 + void scmi_xfer_put(const struct scmi_handle *handle, struct scmi_xfer *xfer) 406 + { 407 + struct scmi_info *info = handle_to_scmi_info(handle); 408 + 409 + __scmi_xfer_put(&info->tx_minfo, xfer); 332 410 } 333 411 334 412 static bool ··· 481 435 return ret; 482 436 } 483 437 438 + #define SCMI_MAX_RESPONSE_TIMEOUT (2 * MSEC_PER_SEC) 439 + 484 440 /** 485 - * scmi_xfer_get_init() - Allocate and initialise one message 441 + * scmi_do_xfer_with_response() - Do one transfer and wait until the delayed 442 + * response is received 443 + * 444 + * @handle: Pointer to SCMI entity handle 445 + * @xfer: Transfer to initiate and wait for 
response 446 + * 447 + * Return: -ETIMEDOUT in case of no delayed response, if transmit error, 448 + * return corresponding error, else if all goes well, return 0. 449 + */ 450 + int scmi_do_xfer_with_response(const struct scmi_handle *handle, 451 + struct scmi_xfer *xfer) 452 + { 453 + int ret, timeout = msecs_to_jiffies(SCMI_MAX_RESPONSE_TIMEOUT); 454 + DECLARE_COMPLETION_ONSTACK(async_response); 455 + 456 + xfer->async_done = &async_response; 457 + 458 + ret = scmi_do_xfer(handle, xfer); 459 + if (!ret && !wait_for_completion_timeout(xfer->async_done, timeout)) 460 + ret = -ETIMEDOUT; 461 + 462 + xfer->async_done = NULL; 463 + return ret; 464 + } 465 + 466 + /** 467 + * scmi_xfer_get_init() - Allocate and initialise one message for transmit 486 468 * 487 469 * @handle: Pointer to SCMI entity handle 488 470 * @msg_id: Message identifier ··· 531 457 int ret; 532 458 struct scmi_xfer *xfer; 533 459 struct scmi_info *info = handle_to_scmi_info(handle); 460 + struct scmi_xfers_info *minfo = &info->tx_minfo; 534 461 struct device *dev = info->dev; 535 462 536 463 /* Ensure we have sane transfer sizes */ ··· 539 464 tx_size > info->desc->max_msg_size) 540 465 return -ERANGE; 541 466 542 - xfer = scmi_xfer_get(handle); 467 + xfer = scmi_xfer_get(handle, minfo); 543 468 if (IS_ERR(xfer)) { 544 469 ret = PTR_ERR(xfer); 545 470 dev_err(dev, "failed to get free message slot(%d)\n", ret); ··· 672 597 return 0; 673 598 } 674 599 675 - static const struct scmi_desc scmi_generic_desc = { 676 - .max_rx_timeout_ms = 30, /* We may increase this if required */ 677 - .max_msg = 20, /* Limited by MBOX_TX_QUEUE_LEN */ 678 - .max_msg_size = 128, 679 - }; 680 - 681 - /* Each compatible listed below must have descriptor associated with it */ 682 - static const struct of_device_id scmi_of_match[] = { 683 - { .compatible = "arm,scmi", .data = &scmi_generic_desc }, 684 - { /* Sentinel */ }, 685 - }; 686 - 687 - MODULE_DEVICE_TABLE(of, scmi_of_match); 688 - 689 600 static int 
scmi_xfer_info_init(struct scmi_info *sinfo) 690 601 { 691 602 int i; 692 603 struct scmi_xfer *xfer; 693 604 struct device *dev = sinfo->dev; 694 605 const struct scmi_desc *desc = sinfo->desc; 695 - struct scmi_xfers_info *info = &sinfo->minfo; 606 + struct scmi_xfers_info *info = &sinfo->tx_minfo; 696 607 697 608 /* Pre-allocated messages, no more than what hdr.seq can support */ 698 609 if (WARN_ON(desc->max_msg >= MSG_TOKEN_MAX)) { ··· 713 652 return 0; 714 653 } 715 654 716 - static int scmi_mailbox_check(struct device_node *np) 655 + static int scmi_mailbox_check(struct device_node *np, int idx) 717 656 { 718 - return of_parse_phandle_with_args(np, "mboxes", "#mbox-cells", 0, NULL); 657 + return of_parse_phandle_with_args(np, "mboxes", "#mbox-cells", 658 + idx, NULL); 659 + } 660 + 661 + static int scmi_mbox_chan_setup(struct scmi_info *info, struct device *dev, 662 + int prot_id, bool tx) 663 + { 664 + int ret, idx; 665 + struct resource res; 666 + resource_size_t size; 667 + struct device_node *shmem, *np = dev->of_node; 668 + struct scmi_chan_info *cinfo; 669 + struct mbox_client *cl; 670 + struct idr *idr; 671 + const char *desc = tx ? "Tx" : "Rx"; 672 + 673 + /* Transmit channel is first entry i.e. index 0 */ 674 + idx = tx ? 0 : 1; 675 + idr = tx ? &info->tx_idr : &info->rx_idr; 676 + 677 + if (scmi_mailbox_check(np, idx)) { 678 + cinfo = idr_find(idr, SCMI_PROTOCOL_BASE); 679 + if (unlikely(!cinfo)) /* Possible only if platform has no Rx */ 680 + return -EINVAL; 681 + goto idr_alloc; 682 + } 683 + 684 + cinfo = devm_kzalloc(info->dev, sizeof(*cinfo), GFP_KERNEL); 685 + if (!cinfo) 686 + return -ENOMEM; 687 + 688 + cinfo->dev = dev; 689 + 690 + cl = &cinfo->cl; 691 + cl->dev = dev; 692 + cl->rx_callback = scmi_rx_callback; 693 + cl->tx_prepare = tx ? 
scmi_tx_prepare : NULL; 694 + cl->tx_block = false; 695 + cl->knows_txdone = tx; 696 + 697 + shmem = of_parse_phandle(np, "shmem", idx); 698 + ret = of_address_to_resource(shmem, 0, &res); 699 + of_node_put(shmem); 700 + if (ret) { 701 + dev_err(dev, "failed to get SCMI %s payload memory\n", desc); 702 + return ret; 703 + } 704 + 705 + size = resource_size(&res); 706 + cinfo->payload = devm_ioremap(info->dev, res.start, size); 707 + if (!cinfo->payload) { 708 + dev_err(dev, "failed to ioremap SCMI %s payload\n", desc); 709 + return -EADDRNOTAVAIL; 710 + } 711 + 712 + cinfo->chan = mbox_request_channel(cl, idx); 713 + if (IS_ERR(cinfo->chan)) { 714 + ret = PTR_ERR(cinfo->chan); 715 + if (ret != -EPROBE_DEFER) 716 + dev_err(dev, "failed to request SCMI %s mailbox\n", 717 + desc); 718 + return ret; 719 + } 720 + 721 + idr_alloc: 722 + ret = idr_alloc(idr, cinfo, prot_id, prot_id + 1, GFP_KERNEL); 723 + if (ret != prot_id) { 724 + dev_err(dev, "unable to allocate SCMI idr slot err %d\n", ret); 725 + return ret; 726 + } 727 + 728 + cinfo->handle = &info->handle; 729 + return 0; 730 + } 731 + 732 + static inline int 733 + scmi_mbox_txrx_setup(struct scmi_info *info, struct device *dev, int prot_id) 734 + { 735 + int ret = scmi_mbox_chan_setup(info, dev, prot_id, true); 736 + 737 + if (!ret) /* Rx is optional, hence no error check */ 738 + scmi_mbox_chan_setup(info, dev, prot_id, false); 739 + 740 + return ret; 741 + } 742 + 743 + static inline void 744 + scmi_create_protocol_device(struct device_node *np, struct scmi_info *info, 745 + int prot_id) 746 + { 747 + struct scmi_device *sdev; 748 + 749 + sdev = scmi_device_create(np, info->dev, prot_id); 750 + if (!sdev) { 751 + dev_err(info->dev, "failed to create %d protocol device\n", 752 + prot_id); 753 + return; 754 + } 755 + 756 + if (scmi_mbox_txrx_setup(info, &sdev->dev, prot_id)) { 757 + dev_err(&sdev->dev, "failed to setup transport\n"); 758 + scmi_device_destroy(sdev); 759 + return; 760 + } 761 + 762 + /* setup 
handle now as the transport is ready */ 763 + scmi_set_handle(sdev); 764 + } 765 + 766 + static int scmi_probe(struct platform_device *pdev) 767 + { 768 + int ret; 769 + struct scmi_handle *handle; 770 + const struct scmi_desc *desc; 771 + struct scmi_info *info; 772 + struct device *dev = &pdev->dev; 773 + struct device_node *child, *np = dev->of_node; 774 + 775 + /* Only mailbox method supported, check for the presence of one */ 776 + if (scmi_mailbox_check(np, 0)) { 777 + dev_err(dev, "no mailbox found in %pOF\n", np); 778 + return -EINVAL; 779 + } 780 + 781 + desc = of_device_get_match_data(dev); 782 + if (!desc) 783 + return -EINVAL; 784 + 785 + info = devm_kzalloc(dev, sizeof(*info), GFP_KERNEL); 786 + if (!info) 787 + return -ENOMEM; 788 + 789 + info->dev = dev; 790 + info->desc = desc; 791 + INIT_LIST_HEAD(&info->node); 792 + 793 + ret = scmi_xfer_info_init(info); 794 + if (ret) 795 + return ret; 796 + 797 + platform_set_drvdata(pdev, info); 798 + idr_init(&info->tx_idr); 799 + idr_init(&info->rx_idr); 800 + 801 + handle = &info->handle; 802 + handle->dev = info->dev; 803 + handle->version = &info->version; 804 + 805 + ret = scmi_mbox_txrx_setup(info, dev, SCMI_PROTOCOL_BASE); 806 + if (ret) 807 + return ret; 808 + 809 + ret = scmi_base_protocol_init(handle); 810 + if (ret) { 811 + dev_err(dev, "unable to communicate with SCMI(%d)\n", ret); 812 + return ret; 813 + } 814 + 815 + mutex_lock(&scmi_list_mutex); 816 + list_add_tail(&info->node, &scmi_list); 817 + mutex_unlock(&scmi_list_mutex); 818 + 819 + for_each_available_child_of_node(np, child) { 820 + u32 prot_id; 821 + 822 + if (of_property_read_u32(child, "reg", &prot_id)) 823 + continue; 824 + 825 + if (!FIELD_FIT(MSG_PROTOCOL_ID_MASK, prot_id)) 826 + dev_err(dev, "Out of range protocol %d\n", prot_id); 827 + 828 + if (!scmi_is_protocol_implemented(handle, prot_id)) { 829 + dev_err(dev, "SCMI protocol %d not implemented\n", 830 + prot_id); 831 + continue; 832 + } 833 + 834 + 
scmi_create_protocol_device(child, info, prot_id); 835 + } 836 + 837 + return 0; 719 838 } 720 839 721 840 static int scmi_mbox_free_channel(int id, void *p, void *data) ··· 933 692 ret = idr_for_each(idr, scmi_mbox_free_channel, idr); 934 693 idr_destroy(&info->tx_idr); 935 694 695 + idr = &info->rx_idr; 696 + ret = idr_for_each(idr, scmi_mbox_free_channel, idr); 697 + idr_destroy(&info->rx_idr); 698 + 936 699 return ret; 937 700 } 938 701 939 - static inline int 940 - scmi_mbox_chan_setup(struct scmi_info *info, struct device *dev, int prot_id) 941 - { 942 - int ret; 943 - struct resource res; 944 - resource_size_t size; 945 - struct device_node *shmem, *np = dev->of_node; 946 - struct scmi_chan_info *cinfo; 947 - struct mbox_client *cl; 702 + static const struct scmi_desc scmi_generic_desc = { 703 + .max_rx_timeout_ms = 30, /* We may increase this if required */ 704 + .max_msg = 20, /* Limited by MBOX_TX_QUEUE_LEN */ 705 + .max_msg_size = 128, 706 + }; 948 707 949 - if (scmi_mailbox_check(np)) { 950 - cinfo = idr_find(&info->tx_idr, SCMI_PROTOCOL_BASE); 951 - goto idr_alloc; 952 - } 708 + /* Each compatible listed below must have descriptor associated with it */ 709 + static const struct of_device_id scmi_of_match[] = { 710 + { .compatible = "arm,scmi", .data = &scmi_generic_desc }, 711 + { /* Sentinel */ }, 712 + }; 953 713 954 - cinfo = devm_kzalloc(info->dev, sizeof(*cinfo), GFP_KERNEL); 955 - if (!cinfo) 956 - return -ENOMEM; 957 - 958 - cinfo->dev = dev; 959 - 960 - cl = &cinfo->cl; 961 - cl->dev = dev; 962 - cl->rx_callback = scmi_rx_callback; 963 - cl->tx_prepare = scmi_tx_prepare; 964 - cl->tx_block = false; 965 - cl->knows_txdone = true; 966 - 967 - shmem = of_parse_phandle(np, "shmem", 0); 968 - ret = of_address_to_resource(shmem, 0, &res); 969 - of_node_put(shmem); 970 - if (ret) { 971 - dev_err(dev, "failed to get SCMI Tx payload mem resource\n"); 972 - return ret; 973 - } 974 - 975 - size = resource_size(&res); 976 - cinfo->payload = 
devm_ioremap(info->dev, res.start, size); 977 - if (!cinfo->payload) { 978 - dev_err(dev, "failed to ioremap SCMI Tx payload\n"); 979 - return -EADDRNOTAVAIL; 980 - } 981 - 982 - /* Transmit channel is first entry i.e. index 0 */ 983 - cinfo->chan = mbox_request_channel(cl, 0); 984 - if (IS_ERR(cinfo->chan)) { 985 - ret = PTR_ERR(cinfo->chan); 986 - if (ret != -EPROBE_DEFER) 987 - dev_err(dev, "failed to request SCMI Tx mailbox\n"); 988 - return ret; 989 - } 990 - 991 - idr_alloc: 992 - ret = idr_alloc(&info->tx_idr, cinfo, prot_id, prot_id + 1, GFP_KERNEL); 993 - if (ret != prot_id) { 994 - dev_err(dev, "unable to allocate SCMI idr slot err %d\n", ret); 995 - return ret; 996 - } 997 - 998 - cinfo->handle = &info->handle; 999 - return 0; 1000 - } 1001 - 1002 - static inline void 1003 - scmi_create_protocol_device(struct device_node *np, struct scmi_info *info, 1004 - int prot_id) 1005 - { 1006 - struct scmi_device *sdev; 1007 - 1008 - sdev = scmi_device_create(np, info->dev, prot_id); 1009 - if (!sdev) { 1010 - dev_err(info->dev, "failed to create %d protocol device\n", 1011 - prot_id); 1012 - return; 1013 - } 1014 - 1015 - if (scmi_mbox_chan_setup(info, &sdev->dev, prot_id)) { 1016 - dev_err(&sdev->dev, "failed to setup transport\n"); 1017 - scmi_device_destroy(sdev); 1018 - return; 1019 - } 1020 - 1021 - /* setup handle now as the transport is ready */ 1022 - scmi_set_handle(sdev); 1023 - } 1024 - 1025 - static int scmi_probe(struct platform_device *pdev) 1026 - { 1027 - int ret; 1028 - struct scmi_handle *handle; 1029 - const struct scmi_desc *desc; 1030 - struct scmi_info *info; 1031 - struct device *dev = &pdev->dev; 1032 - struct device_node *child, *np = dev->of_node; 1033 - 1034 - /* Only mailbox method supported, check for the presence of one */ 1035 - if (scmi_mailbox_check(np)) { 1036 - dev_err(dev, "no mailbox found in %pOF\n", np); 1037 - return -EINVAL; 1038 - } 1039 - 1040 - desc = of_device_get_match_data(dev); 1041 - if (!desc) 1042 - return 
-EINVAL; 1043 - 1044 - info = devm_kzalloc(dev, sizeof(*info), GFP_KERNEL); 1045 - if (!info) 1046 - return -ENOMEM; 1047 - 1048 - info->dev = dev; 1049 - info->desc = desc; 1050 - INIT_LIST_HEAD(&info->node); 1051 - 1052 - ret = scmi_xfer_info_init(info); 1053 - if (ret) 1054 - return ret; 1055 - 1056 - platform_set_drvdata(pdev, info); 1057 - idr_init(&info->tx_idr); 1058 - 1059 - handle = &info->handle; 1060 - handle->dev = info->dev; 1061 - handle->version = &info->version; 1062 - 1063 - ret = scmi_mbox_chan_setup(info, dev, SCMI_PROTOCOL_BASE); 1064 - if (ret) 1065 - return ret; 1066 - 1067 - ret = scmi_base_protocol_init(handle); 1068 - if (ret) { 1069 - dev_err(dev, "unable to communicate with SCMI(%d)\n", ret); 1070 - return ret; 1071 - } 1072 - 1073 - mutex_lock(&scmi_list_mutex); 1074 - list_add_tail(&info->node, &scmi_list); 1075 - mutex_unlock(&scmi_list_mutex); 1076 - 1077 - for_each_available_child_of_node(np, child) { 1078 - u32 prot_id; 1079 - 1080 - if (of_property_read_u32(child, "reg", &prot_id)) 1081 - continue; 1082 - 1083 - if (!FIELD_FIT(MSG_PROTOCOL_ID_MASK, prot_id)) 1084 - dev_err(dev, "Out of range protocol %d\n", prot_id); 1085 - 1086 - if (!scmi_is_protocol_implemented(handle, prot_id)) { 1087 - dev_err(dev, "SCMI protocol %d not implemented\n", 1088 - prot_id); 1089 - continue; 1090 - } 1091 - 1092 - scmi_create_protocol_device(child, info, prot_id); 1093 - } 1094 - 1095 - return 0; 1096 - } 714 + MODULE_DEVICE_TABLE(of, scmi_of_match); 1097 715 1098 716 static struct platform_driver scmi_driver = { 1099 717 .driver = {
+252 -12
drivers/firmware/arm_scmi/perf.c
··· 5 5 * Copyright (C) 2018 ARM Ltd. 6 6 */ 7 7 8 + #include <linux/bits.h> 8 9 #include <linux/of.h> 10 + #include <linux/io.h> 11 + #include <linux/io-64-nonatomic-hi-lo.h> 9 12 #include <linux/platform_device.h> 10 13 #include <linux/pm_opp.h> 11 14 #include <linux/sort.h> ··· 24 21 PERF_LEVEL_GET = 0x8, 25 22 PERF_NOTIFY_LIMITS = 0x9, 26 23 PERF_NOTIFY_LEVEL = 0xa, 24 + PERF_DESCRIBE_FASTCHANNEL = 0xb, 27 25 }; 28 26 29 27 struct scmi_opp { ··· 48 44 #define SUPPORTS_SET_PERF_LVL(x) ((x) & BIT(30)) 49 45 #define SUPPORTS_PERF_LIMIT_NOTIFY(x) ((x) & BIT(29)) 50 46 #define SUPPORTS_PERF_LEVEL_NOTIFY(x) ((x) & BIT(28)) 47 + #define SUPPORTS_PERF_FASTCHANNELS(x) ((x) & BIT(27)) 51 48 __le32 rate_limit_us; 52 49 __le32 sustained_freq_khz; 53 50 __le32 sustained_perf_level; ··· 92 87 } opp[0]; 93 88 }; 94 89 90 + struct scmi_perf_get_fc_info { 91 + __le32 domain; 92 + __le32 message_id; 93 + }; 94 + 95 + struct scmi_msg_resp_perf_desc_fc { 96 + __le32 attr; 97 + #define SUPPORTS_DOORBELL(x) ((x) & BIT(0)) 98 + #define DOORBELL_REG_WIDTH(x) FIELD_GET(GENMASK(2, 1), (x)) 99 + __le32 rate_limit; 100 + __le32 chan_addr_low; 101 + __le32 chan_addr_high; 102 + __le32 chan_size; 103 + __le32 db_addr_low; 104 + __le32 db_addr_high; 105 + __le32 db_set_lmask; 106 + __le32 db_set_hmask; 107 + __le32 db_preserve_lmask; 108 + __le32 db_preserve_hmask; 109 + }; 110 + 111 + struct scmi_fc_db_info { 112 + int width; 113 + u64 set; 114 + u64 mask; 115 + void __iomem *addr; 116 + }; 117 + 118 + struct scmi_fc_info { 119 + void __iomem *level_set_addr; 120 + void __iomem *limit_set_addr; 121 + void __iomem *level_get_addr; 122 + void __iomem *limit_get_addr; 123 + struct scmi_fc_db_info *level_set_db; 124 + struct scmi_fc_db_info *limit_set_db; 125 + }; 126 + 95 127 struct perf_dom_info { 96 128 bool set_limits; 97 129 bool set_perf; 98 130 bool perf_limit_notify; 99 131 bool perf_level_notify; 132 + bool perf_fastchannels; 100 133 u32 opp_count; 101 134 u32 sustained_freq_khz; 102 
135 u32 sustained_perf_level; 103 136 u32 mult_factor; 104 137 char name[SCMI_MAX_STR_SIZE]; 105 138 struct scmi_opp opp[MAX_OPPS]; 139 + struct scmi_fc_info *fc_info; 106 140 }; 107 141 108 142 struct scmi_perf_info { ··· 195 151 if (ret) 196 152 return ret; 197 153 198 - *(__le32 *)t->tx.buf = cpu_to_le32(domain); 154 + put_unaligned_le32(domain, t->tx.buf); 199 155 attr = t->rx.buf; 200 156 201 157 ret = scmi_do_xfer(handle, t); ··· 206 162 dom_info->set_perf = SUPPORTS_SET_PERF_LVL(flags); 207 163 dom_info->perf_limit_notify = SUPPORTS_PERF_LIMIT_NOTIFY(flags); 208 164 dom_info->perf_level_notify = SUPPORTS_PERF_LEVEL_NOTIFY(flags); 165 + dom_info->perf_fastchannels = SUPPORTS_PERF_FASTCHANNELS(flags); 209 166 dom_info->sustained_freq_khz = 210 167 le32_to_cpu(attr->sustained_freq_khz); 211 168 dom_info->sustained_perf_level = ··· 294 249 return ret; 295 250 } 296 251 297 - static int scmi_perf_limits_set(const struct scmi_handle *handle, u32 domain, 298 - u32 max_perf, u32 min_perf) 252 + #define SCMI_PERF_FC_RING_DB(w) \ 253 + do { \ 254 + u##w val = 0; \ 255 + \ 256 + if (db->mask) \ 257 + val = ioread##w(db->addr) & db->mask; \ 258 + iowrite##w((u##w)db->set | val, db->addr); \ 259 + } while (0) 260 + 261 + static void scmi_perf_fc_ring_db(struct scmi_fc_db_info *db) 262 + { 263 + if (!db || !db->addr) 264 + return; 265 + 266 + if (db->width == 1) 267 + SCMI_PERF_FC_RING_DB(8); 268 + else if (db->width == 2) 269 + SCMI_PERF_FC_RING_DB(16); 270 + else if (db->width == 4) 271 + SCMI_PERF_FC_RING_DB(32); 272 + else /* db->width == 8 */ 273 + #ifdef CONFIG_64BIT 274 + SCMI_PERF_FC_RING_DB(64); 275 + #else 276 + { 277 + u64 val = 0; 278 + 279 + if (db->mask) 280 + val = ioread64_hi_lo(db->addr) & db->mask; 281 + iowrite64_hi_lo(db->set, db->addr); 282 + } 283 + #endif 284 + } 285 + 286 + static int scmi_perf_mb_limits_set(const struct scmi_handle *handle, u32 domain, 287 + u32 max_perf, u32 min_perf) 299 288 { 300 289 int ret; 301 290 struct scmi_xfer *t; ··· 
351 272 return ret; 352 273 } 353 274 354 - static int scmi_perf_limits_get(const struct scmi_handle *handle, u32 domain, 355 - u32 *max_perf, u32 *min_perf) 275 + static int scmi_perf_limits_set(const struct scmi_handle *handle, u32 domain, 276 + u32 max_perf, u32 min_perf) 277 + { 278 + struct scmi_perf_info *pi = handle->perf_priv; 279 + struct perf_dom_info *dom = pi->dom_info + domain; 280 + 281 + if (dom->fc_info && dom->fc_info->limit_set_addr) { 282 + iowrite32(max_perf, dom->fc_info->limit_set_addr); 283 + iowrite32(min_perf, dom->fc_info->limit_set_addr + 4); 284 + scmi_perf_fc_ring_db(dom->fc_info->limit_set_db); 285 + return 0; 286 + } 287 + 288 + return scmi_perf_mb_limits_set(handle, domain, max_perf, min_perf); 289 + } 290 + 291 + static int scmi_perf_mb_limits_get(const struct scmi_handle *handle, u32 domain, 292 + u32 *max_perf, u32 *min_perf) 356 293 { 357 294 int ret; 358 295 struct scmi_xfer *t; ··· 379 284 if (ret) 380 285 return ret; 381 286 382 - *(__le32 *)t->tx.buf = cpu_to_le32(domain); 287 + put_unaligned_le32(domain, t->tx.buf); 383 288 384 289 ret = scmi_do_xfer(handle, t); 385 290 if (!ret) { ··· 393 298 return ret; 394 299 } 395 300 396 - static int scmi_perf_level_set(const struct scmi_handle *handle, u32 domain, 397 - u32 level, bool poll) 301 + static int scmi_perf_limits_get(const struct scmi_handle *handle, u32 domain, 302 + u32 *max_perf, u32 *min_perf) 303 + { 304 + struct scmi_perf_info *pi = handle->perf_priv; 305 + struct perf_dom_info *dom = pi->dom_info + domain; 306 + 307 + if (dom->fc_info && dom->fc_info->limit_get_addr) { 308 + *max_perf = ioread32(dom->fc_info->limit_get_addr); 309 + *min_perf = ioread32(dom->fc_info->limit_get_addr + 4); 310 + return 0; 311 + } 312 + 313 + return scmi_perf_mb_limits_get(handle, domain, max_perf, min_perf); 314 + } 315 + 316 + static int scmi_perf_mb_level_set(const struct scmi_handle *handle, u32 domain, 317 + u32 level, bool poll) 398 318 { 399 319 int ret; 400 320 struct scmi_xfer 
*t; ··· 431 321 return ret; 432 322 } 433 323 434 - static int scmi_perf_level_get(const struct scmi_handle *handle, u32 domain, 435 - u32 *level, bool poll) 324 + static int scmi_perf_level_set(const struct scmi_handle *handle, u32 domain, 325 + u32 level, bool poll) 326 + { 327 + struct scmi_perf_info *pi = handle->perf_priv; 328 + struct perf_dom_info *dom = pi->dom_info + domain; 329 + 330 + if (dom->fc_info && dom->fc_info->level_set_addr) { 331 + iowrite32(level, dom->fc_info->level_set_addr); 332 + scmi_perf_fc_ring_db(dom->fc_info->level_set_db); 333 + return 0; 334 + } 335 + 336 + return scmi_perf_mb_level_set(handle, domain, level, poll); 337 + } 338 + 339 + static int scmi_perf_mb_level_get(const struct scmi_handle *handle, u32 domain, 340 + u32 *level, bool poll) 436 341 { 437 342 int ret; 438 343 struct scmi_xfer *t; ··· 458 333 return ret; 459 334 460 335 t->hdr.poll_completion = poll; 461 - *(__le32 *)t->tx.buf = cpu_to_le32(domain); 336 + put_unaligned_le32(domain, t->tx.buf); 462 337 463 338 ret = scmi_do_xfer(handle, t); 464 339 if (!ret) 465 - *level = le32_to_cpu(*(__le32 *)t->rx.buf); 340 + *level = get_unaligned_le32(t->rx.buf); 466 341 467 342 scmi_xfer_put(handle, t); 468 343 return ret; 344 + } 345 + 346 + static int scmi_perf_level_get(const struct scmi_handle *handle, u32 domain, 347 + u32 *level, bool poll) 348 + { 349 + struct scmi_perf_info *pi = handle->perf_priv; 350 + struct perf_dom_info *dom = pi->dom_info + domain; 351 + 352 + if (dom->fc_info && dom->fc_info->level_get_addr) { 353 + *level = ioread32(dom->fc_info->level_get_addr); 354 + return 0; 355 + } 356 + 357 + return scmi_perf_mb_level_get(handle, domain, level, poll); 358 + } 359 + 360 + static bool scmi_perf_fc_size_is_valid(u32 msg, u32 size) 361 + { 362 + if ((msg == PERF_LEVEL_GET || msg == PERF_LEVEL_SET) && size == 4) 363 + return true; 364 + if ((msg == PERF_LIMITS_GET || msg == PERF_LIMITS_SET) && size == 8) 365 + return true; 366 + return false; 367 + } 368 + 369 
+ static void 370 + scmi_perf_domain_desc_fc(const struct scmi_handle *handle, u32 domain, 371 + u32 message_id, void __iomem **p_addr, 372 + struct scmi_fc_db_info **p_db) 373 + { 374 + int ret; 375 + u32 flags; 376 + u64 phys_addr; 377 + u8 size; 378 + void __iomem *addr; 379 + struct scmi_xfer *t; 380 + struct scmi_fc_db_info *db; 381 + struct scmi_perf_get_fc_info *info; 382 + struct scmi_msg_resp_perf_desc_fc *resp; 383 + 384 + if (!p_addr) 385 + return; 386 + 387 + ret = scmi_xfer_get_init(handle, PERF_DESCRIBE_FASTCHANNEL, 388 + SCMI_PROTOCOL_PERF, 389 + sizeof(*info), sizeof(*resp), &t); 390 + if (ret) 391 + return; 392 + 393 + info = t->tx.buf; 394 + info->domain = cpu_to_le32(domain); 395 + info->message_id = cpu_to_le32(message_id); 396 + 397 + ret = scmi_do_xfer(handle, t); 398 + if (ret) 399 + goto err_xfer; 400 + 401 + resp = t->rx.buf; 402 + flags = le32_to_cpu(resp->attr); 403 + size = le32_to_cpu(resp->chan_size); 404 + if (!scmi_perf_fc_size_is_valid(message_id, size)) 405 + goto err_xfer; 406 + 407 + phys_addr = le32_to_cpu(resp->chan_addr_low); 408 + phys_addr |= (u64)le32_to_cpu(resp->chan_addr_high) << 32; 409 + addr = devm_ioremap(handle->dev, phys_addr, size); 410 + if (!addr) 411 + goto err_xfer; 412 + *p_addr = addr; 413 + 414 + if (p_db && SUPPORTS_DOORBELL(flags)) { 415 + db = devm_kzalloc(handle->dev, sizeof(*db), GFP_KERNEL); 416 + if (!db) 417 + goto err_xfer; 418 + 419 + size = 1 << DOORBELL_REG_WIDTH(flags); 420 + phys_addr = le32_to_cpu(resp->db_addr_low); 421 + phys_addr |= (u64)le32_to_cpu(resp->db_addr_high) << 32; 422 + addr = devm_ioremap(handle->dev, phys_addr, size); 423 + if (!addr) 424 + goto err_xfer; 425 + 426 + db->addr = addr; 427 + db->width = size; 428 + db->set = le32_to_cpu(resp->db_set_lmask); 429 + db->set |= (u64)le32_to_cpu(resp->db_set_hmask) << 32; 430 + db->mask = le32_to_cpu(resp->db_preserve_lmask); 431 + db->mask |= (u64)le32_to_cpu(resp->db_preserve_hmask) << 32; 432 + *p_db = db; 433 + } 434 + err_xfer: 
435 + scmi_xfer_put(handle, t); 436 + } 437 + 438 + static void scmi_perf_domain_init_fc(const struct scmi_handle *handle, 439 + u32 domain, struct scmi_fc_info **p_fc) 440 + { 441 + struct scmi_fc_info *fc; 442 + 443 + fc = devm_kzalloc(handle->dev, sizeof(*fc), GFP_KERNEL); 444 + if (!fc) 445 + return; 446 + 447 + scmi_perf_domain_desc_fc(handle, domain, PERF_LEVEL_SET, 448 + &fc->level_set_addr, &fc->level_set_db); 449 + scmi_perf_domain_desc_fc(handle, domain, PERF_LEVEL_GET, 450 + &fc->level_get_addr, NULL); 451 + scmi_perf_domain_desc_fc(handle, domain, PERF_LIMITS_SET, 452 + &fc->limit_set_addr, &fc->limit_set_db); 453 + scmi_perf_domain_desc_fc(handle, domain, PERF_LIMITS_GET, 454 + &fc->limit_get_addr, NULL); 455 + *p_fc = fc; 469 456 } 470 457 471 458 /* Device specific ops */ ··· 731 494 732 495 scmi_perf_domain_attributes_get(handle, domain, dom); 733 496 scmi_perf_describe_levels_get(handle, domain, dom); 497 + 498 + if (dom->perf_fastchannels) 499 + scmi_perf_domain_init_fc(handle, domain, &dom->fc_info); 734 500 } 735 501 736 502 handle->perf_ops = &perf_ops;
+3 -3
drivers/firmware/arm_scmi/power.c
··· 96 96 if (ret) 97 97 return ret; 98 98 99 - *(__le32 *)t->tx.buf = cpu_to_le32(domain); 99 + put_unaligned_le32(domain, t->tx.buf); 100 100 attr = t->rx.buf; 101 101 102 102 ret = scmi_do_xfer(handle, t); ··· 147 147 if (ret) 148 148 return ret; 149 149 150 - *(__le32 *)t->tx.buf = cpu_to_le32(domain); 150 + put_unaligned_le32(domain, t->tx.buf); 151 151 152 152 ret = scmi_do_xfer(handle, t); 153 153 if (!ret) 154 - *state = le32_to_cpu(*(__le32 *)t->rx.buf); 154 + *state = get_unaligned_le32(t->rx.buf); 155 155 156 156 scmi_xfer_put(handle, t); 157 157 return ret;
+231
drivers/firmware/arm_scmi/reset.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * System Control and Management Interface (SCMI) Reset Protocol 4 + * 5 + * Copyright (C) 2019 ARM Ltd. 6 + */ 7 + 8 + #include "common.h" 9 + 10 + enum scmi_reset_protocol_cmd { 11 + RESET_DOMAIN_ATTRIBUTES = 0x3, 12 + RESET = 0x4, 13 + RESET_NOTIFY = 0x5, 14 + }; 15 + 16 + enum scmi_reset_protocol_notify { 17 + RESET_ISSUED = 0x0, 18 + }; 19 + 20 + #define NUM_RESET_DOMAIN_MASK 0xffff 21 + #define RESET_NOTIFY_ENABLE BIT(0) 22 + 23 + struct scmi_msg_resp_reset_domain_attributes { 24 + __le32 attributes; 25 + #define SUPPORTS_ASYNC_RESET(x) ((x) & BIT(31)) 26 + #define SUPPORTS_NOTIFY_RESET(x) ((x) & BIT(30)) 27 + __le32 latency; 28 + u8 name[SCMI_MAX_STR_SIZE]; 29 + }; 30 + 31 + struct scmi_msg_reset_domain_reset { 32 + __le32 domain_id; 33 + __le32 flags; 34 + #define AUTONOMOUS_RESET BIT(0) 35 + #define EXPLICIT_RESET_ASSERT BIT(1) 36 + #define ASYNCHRONOUS_RESET BIT(2) 37 + __le32 reset_state; 38 + #define ARCH_RESET_TYPE BIT(31) 39 + #define COLD_RESET_STATE BIT(0) 40 + #define ARCH_COLD_RESET (ARCH_RESET_TYPE | COLD_RESET_STATE) 41 + }; 42 + 43 + struct reset_dom_info { 44 + bool async_reset; 45 + bool reset_notify; 46 + u32 latency_us; 47 + char name[SCMI_MAX_STR_SIZE]; 48 + }; 49 + 50 + struct scmi_reset_info { 51 + int num_domains; 52 + struct reset_dom_info *dom_info; 53 + }; 54 + 55 + static int scmi_reset_attributes_get(const struct scmi_handle *handle, 56 + struct scmi_reset_info *pi) 57 + { 58 + int ret; 59 + struct scmi_xfer *t; 60 + u32 attr; 61 + 62 + ret = scmi_xfer_get_init(handle, PROTOCOL_ATTRIBUTES, 63 + SCMI_PROTOCOL_RESET, 0, sizeof(attr), &t); 64 + if (ret) 65 + return ret; 66 + 67 + ret = scmi_do_xfer(handle, t); 68 + if (!ret) { 69 + attr = get_unaligned_le32(t->rx.buf); 70 + pi->num_domains = attr & NUM_RESET_DOMAIN_MASK; 71 + } 72 + 73 + scmi_xfer_put(handle, t); 74 + return ret; 75 + } 76 + 77 + static int 78 + scmi_reset_domain_attributes_get(const struct scmi_handle *handle, u32 
domain, 79 + struct reset_dom_info *dom_info) 80 + { 81 + int ret; 82 + struct scmi_xfer *t; 83 + struct scmi_msg_resp_reset_domain_attributes *attr; 84 + 85 + ret = scmi_xfer_get_init(handle, RESET_DOMAIN_ATTRIBUTES, 86 + SCMI_PROTOCOL_RESET, sizeof(domain), 87 + sizeof(*attr), &t); 88 + if (ret) 89 + return ret; 90 + 91 + put_unaligned_le32(domain, t->tx.buf); 92 + attr = t->rx.buf; 93 + 94 + ret = scmi_do_xfer(handle, t); 95 + if (!ret) { 96 + u32 attributes = le32_to_cpu(attr->attributes); 97 + 98 + dom_info->async_reset = SUPPORTS_ASYNC_RESET(attributes); 99 + dom_info->reset_notify = SUPPORTS_NOTIFY_RESET(attributes); 100 + dom_info->latency_us = le32_to_cpu(attr->latency); 101 + if (dom_info->latency_us == U32_MAX) 102 + dom_info->latency_us = 0; 103 + strlcpy(dom_info->name, attr->name, SCMI_MAX_STR_SIZE); 104 + } 105 + 106 + scmi_xfer_put(handle, t); 107 + return ret; 108 + } 109 + 110 + static int scmi_reset_num_domains_get(const struct scmi_handle *handle) 111 + { 112 + struct scmi_reset_info *pi = handle->reset_priv; 113 + 114 + return pi->num_domains; 115 + } 116 + 117 + static char *scmi_reset_name_get(const struct scmi_handle *handle, u32 domain) 118 + { 119 + struct scmi_reset_info *pi = handle->reset_priv; 120 + struct reset_dom_info *dom = pi->dom_info + domain; 121 + 122 + return dom->name; 123 + } 124 + 125 + static int scmi_reset_latency_get(const struct scmi_handle *handle, u32 domain) 126 + { 127 + struct scmi_reset_info *pi = handle->reset_priv; 128 + struct reset_dom_info *dom = pi->dom_info + domain; 129 + 130 + return dom->latency_us; 131 + } 132 + 133 + static int scmi_domain_reset(const struct scmi_handle *handle, u32 domain, 134 + u32 flags, u32 state) 135 + { 136 + int ret; 137 + struct scmi_xfer *t; 138 + struct scmi_msg_reset_domain_reset *dom; 139 + struct scmi_reset_info *pi = handle->reset_priv; 140 + struct reset_dom_info *rdom = pi->dom_info + domain; 141 + 142 + if (rdom->async_reset) 143 + flags |= ASYNCHRONOUS_RESET; 144 + 
145 + ret = scmi_xfer_get_init(handle, RESET, SCMI_PROTOCOL_RESET, 146 + sizeof(*dom), 0, &t); 147 + if (ret) 148 + return ret; 149 + 150 + dom = t->tx.buf; 151 + dom->domain_id = cpu_to_le32(domain); 152 + dom->flags = cpu_to_le32(flags); 153 + dom->reset_state = cpu_to_le32(state); 154 + 155 + if (rdom->async_reset) 156 + ret = scmi_do_xfer_with_response(handle, t); 157 + else 158 + ret = scmi_do_xfer(handle, t); 159 + 160 + scmi_xfer_put(handle, t); 161 + return ret; 162 + } 163 + 164 + static int scmi_reset_domain_reset(const struct scmi_handle *handle, u32 domain) 165 + { 166 + return scmi_domain_reset(handle, domain, AUTONOMOUS_RESET, 167 + ARCH_COLD_RESET); 168 + } 169 + 170 + static int 171 + scmi_reset_domain_assert(const struct scmi_handle *handle, u32 domain) 172 + { 173 + return scmi_domain_reset(handle, domain, EXPLICIT_RESET_ASSERT, 174 + ARCH_COLD_RESET); 175 + } 176 + 177 + static int 178 + scmi_reset_domain_deassert(const struct scmi_handle *handle, u32 domain) 179 + { 180 + return scmi_domain_reset(handle, domain, 0, ARCH_COLD_RESET); 181 + } 182 + 183 + static struct scmi_reset_ops reset_ops = { 184 + .num_domains_get = scmi_reset_num_domains_get, 185 + .name_get = scmi_reset_name_get, 186 + .latency_get = scmi_reset_latency_get, 187 + .reset = scmi_reset_domain_reset, 188 + .assert = scmi_reset_domain_assert, 189 + .deassert = scmi_reset_domain_deassert, 190 + }; 191 + 192 + static int scmi_reset_protocol_init(struct scmi_handle *handle) 193 + { 194 + int domain; 195 + u32 version; 196 + struct scmi_reset_info *pinfo; 197 + 198 + scmi_version_get(handle, SCMI_PROTOCOL_RESET, &version); 199 + 200 + dev_dbg(handle->dev, "Reset Version %d.%d\n", 201 + PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version)); 202 + 203 + pinfo = devm_kzalloc(handle->dev, sizeof(*pinfo), GFP_KERNEL); 204 + if (!pinfo) 205 + return -ENOMEM; 206 + 207 + scmi_reset_attributes_get(handle, pinfo); 208 + 209 + pinfo->dom_info = devm_kcalloc(handle->dev, pinfo->num_domains,
210 + sizeof(*pinfo->dom_info), GFP_KERNEL); 211 + if (!pinfo->dom_info) 212 + return -ENOMEM; 213 + 214 + for (domain = 0; domain < pinfo->num_domains; domain++) { 215 + struct reset_dom_info *dom = pinfo->dom_info + domain; 216 + 217 + scmi_reset_domain_attributes_get(handle, domain, dom); 218 + } 219 + 220 + handle->reset_ops = &reset_ops; 221 + handle->reset_priv = pinfo; 222 + 223 + return 0; 224 + } 225 + 226 + static int __init scmi_reset_init(void) 227 + { 228 + return scmi_protocol_register(SCMI_PROTOCOL_RESET, 229 + &scmi_reset_protocol_init); 230 + } 231 + subsys_initcall(scmi_reset_init);
+34 -23
drivers/firmware/arm_scmi/sensors.c
··· 9 9 10 10 enum scmi_sensor_protocol_cmd { 11 11 SENSOR_DESCRIPTION_GET = 0x3, 12 - SENSOR_CONFIG_SET = 0x4, 13 - SENSOR_TRIP_POINT_SET = 0x5, 12 + SENSOR_TRIP_POINT_NOTIFY = 0x4, 13 + SENSOR_TRIP_POINT_CONFIG = 0x5, 14 14 SENSOR_READING_GET = 0x6, 15 15 }; 16 16 ··· 42 42 } desc[0]; 43 43 }; 44 44 45 - struct scmi_msg_set_sensor_config { 45 + struct scmi_msg_sensor_trip_point_notify { 46 46 __le32 id; 47 47 __le32 event_control; 48 + #define SENSOR_TP_NOTIFY_ALL BIT(0) 48 49 }; 49 50 50 51 struct scmi_msg_set_sensor_trip_point { ··· 120 119 121 120 do { 122 121 /* Set the number of sensors to be skipped/already read */ 123 - *(__le32 *)t->tx.buf = cpu_to_le32(desc_index); 122 + put_unaligned_le32(desc_index, t->tx.buf); 124 123 125 124 ret = scmi_do_xfer(handle, t); 126 125 if (ret) ··· 136 135 } 137 136 138 137 for (cnt = 0; cnt < num_returned; cnt++) { 139 - u32 attrh; 138 + u32 attrh, attrl; 140 139 struct scmi_sensor_info *s; 141 140 141 + attrl = le32_to_cpu(buf->desc[cnt].attributes_low); 142 142 attrh = le32_to_cpu(buf->desc[cnt].attributes_high); 143 143 s = &si->sensors[desc_index + cnt]; 144 144 s->id = le32_to_cpu(buf->desc[cnt].id); ··· 148 146 /* Sign extend to a full s8 */ 149 147 if (s->scale & SENSOR_SCALE_SIGN) 150 148 s->scale |= SENSOR_SCALE_EXTEND; 149 + s->async = SUPPORTS_ASYNC_READ(attrl); 150 + s->num_trip_points = NUM_TRIP_POINTS(attrl); 151 151 strlcpy(s->name, buf->desc[cnt].name, SCMI_MAX_STR_SIZE); 152 152 } 153 153 ··· 164 160 return ret; 165 161 } 166 162 167 - static int 168 - scmi_sensor_configuration_set(const struct scmi_handle *handle, u32 sensor_id) 163 + static int scmi_sensor_trip_point_notify(const struct scmi_handle *handle, 164 + u32 sensor_id, bool enable) 169 165 { 170 166 int ret; 171 - u32 evt_cntl = BIT(0); 167 + u32 evt_cntl = enable ? 
SENSOR_TP_NOTIFY_ALL : 0; 172 168 struct scmi_xfer *t; 173 - struct scmi_msg_set_sensor_config *cfg; 169 + struct scmi_msg_sensor_trip_point_notify *cfg; 174 170 175 - ret = scmi_xfer_get_init(handle, SENSOR_CONFIG_SET, 171 + ret = scmi_xfer_get_init(handle, SENSOR_TRIP_POINT_NOTIFY, 176 172 SCMI_PROTOCOL_SENSOR, sizeof(*cfg), 0, &t); 177 173 if (ret) 178 174 return ret; ··· 187 183 return ret; 188 184 } 189 185 190 - static int scmi_sensor_trip_point_set(const struct scmi_handle *handle, 191 - u32 sensor_id, u8 trip_id, u64 trip_value) 186 + static int 187 + scmi_sensor_trip_point_config(const struct scmi_handle *handle, u32 sensor_id, 188 + u8 trip_id, u64 trip_value) 192 189 { 193 190 int ret; 194 191 u32 evt_cntl = SENSOR_TP_BOTH; 195 192 struct scmi_xfer *t; 196 193 struct scmi_msg_set_sensor_trip_point *trip; 197 194 198 - ret = scmi_xfer_get_init(handle, SENSOR_TRIP_POINT_SET, 195 + ret = scmi_xfer_get_init(handle, SENSOR_TRIP_POINT_CONFIG, 199 196 SCMI_PROTOCOL_SENSOR, sizeof(*trip), 0, &t); 200 197 if (ret) 201 198 return ret; ··· 214 209 } 215 210 216 211 static int scmi_sensor_reading_get(const struct scmi_handle *handle, 217 - u32 sensor_id, bool async, u64 *value) 212 + u32 sensor_id, u64 *value) 218 213 { 219 214 int ret; 220 215 struct scmi_xfer *t; 221 216 struct scmi_msg_sensor_reading_get *sensor; 217 + struct sensors_info *si = handle->sensor_priv; 218 + struct scmi_sensor_info *s = si->sensors + sensor_id; 222 219 223 220 ret = scmi_xfer_get_init(handle, SENSOR_READING_GET, 224 221 SCMI_PROTOCOL_SENSOR, sizeof(*sensor), ··· 230 223 231 224 sensor = t->tx.buf; 232 225 sensor->id = cpu_to_le32(sensor_id); 233 - sensor->flags = cpu_to_le32(async ? 
SENSOR_READ_ASYNC : 0); 234 226 235 - ret = scmi_do_xfer(handle, t); 236 - if (!ret) { 237 - __le32 *pval = t->rx.buf; 238 - 239 - *value = le32_to_cpu(*pval); 240 - *value |= (u64)le32_to_cpu(*(pval + 1)) << 32; 227 + if (s->async) { 228 + sensor->flags = cpu_to_le32(SENSOR_READ_ASYNC); 229 + ret = scmi_do_xfer_with_response(handle, t); 230 + if (!ret) 231 + *value = get_unaligned_le64((void *) 232 + ((__le32 *)t->rx.buf + 1)); 233 + } else { 234 + sensor->flags = cpu_to_le32(0); 235 + ret = scmi_do_xfer(handle, t); 236 + if (!ret) 237 + *value = get_unaligned_le64(t->rx.buf); 241 238 } 242 239 243 240 scmi_xfer_put(handle, t); ··· 266 255 static struct scmi_sensor_ops sensor_ops = { 267 256 .count_get = scmi_sensor_count_get, 268 257 .info_get = scmi_sensor_info_get, 269 - .configuration_set = scmi_sensor_configuration_set, 270 - .trip_point_set = scmi_sensor_trip_point_set, 258 + .trip_point_notify = scmi_sensor_trip_point_notify, 259 + .trip_point_config = scmi_sensor_trip_point_config, 271 260 .reading_get = scmi_sensor_reading_get, 272 261 }; 273 262
+9
drivers/gpio/Kconfig
··· 1445 1445 help 1446 1446 GPIO driver for EXAR XRA1403 16-bit SPI-based GPIO expander. 1447 1447 1448 + config GPIO_MOXTET 1449 + tristate "Turris Mox Moxtet bus GPIO expander" 1450 + depends on MOXTET 1451 + help 1452 + Say yes here if you are building for the Turris Mox router. 1453 + This is the driver needed for configuring the GPIOs via the Moxtet 1454 + bus. For example the Mox module with SFP cage needs this driver 1455 + so that phylink can use corresponding GPIOs. 1456 + 1448 1457 endmenu 1449 1458 1450 1459 menu "USB GPIO expanders"
+1
drivers/gpio/Makefile
··· 93 93 obj-$(CONFIG_GPIO_MLXBF) += gpio-mlxbf.o 94 94 obj-$(CONFIG_GPIO_MM_LANTIQ) += gpio-mm-lantiq.o 95 95 obj-$(CONFIG_GPIO_MOCKUP) += gpio-mockup.o 96 + obj-$(CONFIG_GPIO_MOXTET) += gpio-moxtet.o 96 97 obj-$(CONFIG_GPIO_MPC5200) += gpio-mpc5200.o 97 98 obj-$(CONFIG_GPIO_MPC8XXX) += gpio-mpc8xxx.o 98 99 obj-$(CONFIG_GPIO_MSIC) += gpio-msic.o
+179
drivers/gpio/gpio-moxtet.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Turris Mox Moxtet GPIO expander 4 + * 5 + * Copyright (C) 2018 Marek Behun <marek.behun@nic.cz> 6 + */ 7 + 8 + #include <linux/bitops.h> 9 + #include <linux/gpio/driver.h> 10 + #include <linux/moxtet.h> 11 + #include <linux/module.h> 12 + 13 + #define MOXTET_GPIO_NGPIOS 12 14 + #define MOXTET_GPIO_INPUTS 4 15 + 16 + struct moxtet_gpio_desc { 17 + u16 in_mask; 18 + u16 out_mask; 19 + }; 20 + 21 + static const struct moxtet_gpio_desc descs[] = { 22 + [TURRIS_MOX_MODULE_SFP] = { 23 + .in_mask = GENMASK(2, 0), 24 + .out_mask = GENMASK(5, 4), 25 + }, 26 + }; 27 + 28 + struct moxtet_gpio_chip { 29 + struct device *dev; 30 + struct gpio_chip gpio_chip; 31 + const struct moxtet_gpio_desc *desc; 32 + }; 33 + 34 + static int moxtet_gpio_get_value(struct gpio_chip *gc, unsigned int offset) 35 + { 36 + struct moxtet_gpio_chip *chip = gpiochip_get_data(gc); 37 + int ret; 38 + 39 + if (chip->desc->in_mask & BIT(offset)) { 40 + ret = moxtet_device_read(chip->dev); 41 + } else if (chip->desc->out_mask & BIT(offset)) { 42 + ret = moxtet_device_written(chip->dev); 43 + if (ret >= 0) 44 + ret <<= MOXTET_GPIO_INPUTS; 45 + } else { 46 + return -EINVAL; 47 + } 48 + 49 + if (ret < 0) 50 + return ret; 51 + 52 + return !!(ret & BIT(offset)); 53 + } 54 + 55 + static void moxtet_gpio_set_value(struct gpio_chip *gc, unsigned int offset, 56 + int val) 57 + { 58 + struct moxtet_gpio_chip *chip = gpiochip_get_data(gc); 59 + int state; 60 + 61 + state = moxtet_device_written(chip->dev); 62 + if (state < 0) 63 + return; 64 + 65 + offset -= MOXTET_GPIO_INPUTS; 66 + 67 + if (val) 68 + state |= BIT(offset); 69 + else 70 + state &= ~BIT(offset); 71 + 72 + moxtet_device_write(chip->dev, state); 73 + } 74 + 75 + static int moxtet_gpio_get_direction(struct gpio_chip *gc, unsigned int offset) 76 + { 77 + struct moxtet_gpio_chip *chip = gpiochip_get_data(gc); 78 + 79 + /* All lines are hard wired to be either input or output, not both. 
*/ 80 + if (chip->desc->in_mask & BIT(offset)) 81 + return 1; 82 + else if (chip->desc->out_mask & BIT(offset)) 83 + return 0; 84 + else 85 + return -EINVAL; 86 + } 87 + 88 + static int moxtet_gpio_direction_input(struct gpio_chip *gc, 89 + unsigned int offset) 90 + { 91 + struct moxtet_gpio_chip *chip = gpiochip_get_data(gc); 92 + 93 + if (chip->desc->in_mask & BIT(offset)) 94 + return 0; 95 + else if (chip->desc->out_mask & BIT(offset)) 96 + return -ENOTSUPP; 97 + else 98 + return -EINVAL; 99 + } 100 + 101 + static int moxtet_gpio_direction_output(struct gpio_chip *gc, 102 + unsigned int offset, int val) 103 + { 104 + struct moxtet_gpio_chip *chip = gpiochip_get_data(gc); 105 + 106 + if (chip->desc->out_mask & BIT(offset)) 107 + moxtet_gpio_set_value(gc, offset, val); 108 + else if (chip->desc->in_mask & BIT(offset)) 109 + return -ENOTSUPP; 110 + else 111 + return -EINVAL; 112 + 113 + return 0; 114 + } 115 + 116 + static int moxtet_gpio_probe(struct device *dev) 117 + { 118 + struct moxtet_gpio_chip *chip; 119 + struct device_node *nc = dev->of_node; 120 + int id; 121 + 122 + id = to_moxtet_device(dev)->id; 123 + 124 + if (id >= ARRAY_SIZE(descs)) { 125 + dev_err(dev, "%pOF Moxtet device id 0x%x is not supported by gpio-moxtet driver\n", 126 + nc, id); 127 + return -ENOTSUPP; 128 + } 129 + 130 + chip = devm_kzalloc(dev, sizeof(*chip), GFP_KERNEL); 131 + if (!chip) 132 + return -ENOMEM; 133 + 134 + chip->dev = dev; 135 + chip->gpio_chip.parent = dev; 136 + chip->desc = &descs[id]; 137 + 138 + dev_set_drvdata(dev, chip); 139 + 140 + chip->gpio_chip.label = dev_name(dev); 141 + chip->gpio_chip.get_direction = moxtet_gpio_get_direction; 142 + chip->gpio_chip.direction_input = moxtet_gpio_direction_input; 143 + chip->gpio_chip.direction_output = moxtet_gpio_direction_output; 144 + chip->gpio_chip.get = moxtet_gpio_get_value; 145 + chip->gpio_chip.set = moxtet_gpio_set_value; 146 + chip->gpio_chip.base = -1; 147 + 148 + chip->gpio_chip.ngpio = MOXTET_GPIO_NGPIOS; 149 + 
150 + chip->gpio_chip.can_sleep = true; 151 + chip->gpio_chip.owner = THIS_MODULE; 152 + 153 + return devm_gpiochip_add_data(dev, &chip->gpio_chip, chip); 154 + } 155 + 156 + static const struct of_device_id moxtet_gpio_dt_ids[] = { 157 + { .compatible = "cznic,moxtet-gpio", }, 158 + {}, 159 + }; 160 + MODULE_DEVICE_TABLE(of, moxtet_gpio_dt_ids); 161 + 162 + static const enum turris_mox_module_id moxtet_gpio_module_table[] = { 163 + TURRIS_MOX_MODULE_SFP, 164 + 0, 165 + }; 166 + 167 + static struct moxtet_driver moxtet_gpio_driver = { 168 + .driver = { 169 + .name = "moxtet-gpio", 170 + .of_match_table = moxtet_gpio_dt_ids, 171 + .probe = moxtet_gpio_probe, 172 + }, 173 + .id_table = moxtet_gpio_module_table, 174 + }; 175 + module_moxtet_driver(moxtet_gpio_driver); 176 + 177 + MODULE_AUTHOR("Marek Behun <marek.behun@nic.cz>"); 178 + MODULE_DESCRIPTION("Turris Mox Moxtet GPIO expander"); 179 + MODULE_LICENSE("GPL v2");
+1 -1
drivers/hwmon/scmi-hwmon.c
··· 72 72 const struct scmi_handle *h = scmi_sensors->handle; 73 73 74 74 sensor = *(scmi_sensors->info[type] + channel); 75 - ret = h->sensor_ops->reading_get(h, sensor->id, false, &value); 75 + ret = h->sensor_ops->reading_get(h, sensor->id, &value); 76 76 if (ret) 77 77 return ret; 78 78
+12 -1
drivers/reset/Kconfig
··· 116 116 to control reset signals provided by PDC for Modem, Compute, 117 117 Display, GPU, Debug, AOP, Sensors, Audio, SP and APPS. 118 118 119 + config RESET_SCMI 120 + tristate "Reset driver controlled via ARM SCMI interface" 121 + depends on ARM_SCMI_PROTOCOL || COMPILE_TEST 122 + default ARM_SCMI_PROTOCOL 123 + help 124 + This driver provides support for reset signal/domains that are 125 + controlled by firmware that implements the SCMI interface. 126 + 127 + This driver uses SCMI Message Protocol to interact with the 128 + firmware controlling all the reset signals. 129 + 119 130 config RESET_SIMPLE 120 131 bool "Simple Reset Controller Driver" if COMPILE_TEST 121 - default ARCH_STM32 || ARCH_STRATIX10 || ARCH_SUNXI || ARCH_ZX || ARCH_ASPEED || ARCH_BITMAIN 132 + default ARCH_STM32 || ARCH_STRATIX10 || ARCH_SUNXI || ARCH_ZX || ARCH_ASPEED || ARCH_BITMAIN || ARC 122 133 help 123 134 This enables a simple reset controller driver for reset lines 124 135 that can be asserted and deasserted by toggling bits in a contiguous,
+1
drivers/reset/Makefile
··· 18 18 obj-$(CONFIG_RESET_PISTACHIO) += reset-pistachio.o 19 19 obj-$(CONFIG_RESET_QCOM_AOSS) += reset-qcom-aoss.o 20 20 obj-$(CONFIG_RESET_QCOM_PDC) += reset-qcom-pdc.o 21 + obj-$(CONFIG_RESET_SCMI) += reset-scmi.o 21 22 obj-$(CONFIG_RESET_SIMPLE) += reset-simple.o 22 23 obj-$(CONFIG_RESET_STM32MP157) += reset-stm32mp1.o 23 24 obj-$(CONFIG_RESET_SOCFPGA) += reset-socfpga.o
+6 -6
drivers/reset/reset-imx7.c
··· 169 169 [IMX8MQ_RESET_OTG2_PHY_RESET] = { SRC_USBOPHY2_RCR, BIT(0) }, 170 170 [IMX8MQ_RESET_MIPI_DSI_RESET_BYTE_N] = { SRC_MIPIPHY_RCR, BIT(1) }, 171 171 [IMX8MQ_RESET_MIPI_DSI_RESET_N] = { SRC_MIPIPHY_RCR, BIT(2) }, 172 - [IMX8MQ_RESET_MIPI_DIS_DPI_RESET_N] = { SRC_MIPIPHY_RCR, BIT(3) }, 173 - [IMX8MQ_RESET_MIPI_DIS_ESC_RESET_N] = { SRC_MIPIPHY_RCR, BIT(4) }, 174 - [IMX8MQ_RESET_MIPI_DIS_PCLK_RESET_N] = { SRC_MIPIPHY_RCR, BIT(5) }, 172 + [IMX8MQ_RESET_MIPI_DSI_DPI_RESET_N] = { SRC_MIPIPHY_RCR, BIT(3) }, 173 + [IMX8MQ_RESET_MIPI_DSI_ESC_RESET_N] = { SRC_MIPIPHY_RCR, BIT(4) }, 174 + [IMX8MQ_RESET_MIPI_DSI_PCLK_RESET_N] = { SRC_MIPIPHY_RCR, BIT(5) }, 175 175 [IMX8MQ_RESET_PCIEPHY] = { SRC_PCIEPHY_RCR, 176 176 BIT(2) | BIT(1) }, 177 177 [IMX8MQ_RESET_PCIEPHY_PERST] = { SRC_PCIEPHY_RCR, BIT(3) }, ··· 220 220 221 221 case IMX8MQ_RESET_PCIE_CTRL_APPS_EN: 222 222 case IMX8MQ_RESET_PCIE2_CTRL_APPS_EN: /* fallthrough */ 223 - case IMX8MQ_RESET_MIPI_DIS_PCLK_RESET_N: /* fallthrough */ 224 - case IMX8MQ_RESET_MIPI_DIS_ESC_RESET_N: /* fallthrough */ 225 - case IMX8MQ_RESET_MIPI_DIS_DPI_RESET_N: /* fallthrough */ 223 + case IMX8MQ_RESET_MIPI_DSI_PCLK_RESET_N: /* fallthrough */ 224 + case IMX8MQ_RESET_MIPI_DSI_ESC_RESET_N: /* fallthrough */ 225 + case IMX8MQ_RESET_MIPI_DSI_DPI_RESET_N: /* fallthrough */ 226 226 case IMX8MQ_RESET_MIPI_DSI_RESET_N: /* fallthrough */ 227 227 case IMX8MQ_RESET_MIPI_DSI_RESET_BYTE_N: /* fallthrough */ 228 228 value = assert ? 0 : bit;
+1 -50
drivers/reset/reset-meson.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause 1 2 /* 2 3 * Amlogic Meson Reset Controller driver 3 4 * 4 - * This file is provided under a dual BSD/GPLv2 license. When using or 5 - * redistributing this file, you may do so under either license. 6 - * 7 - * GPL LICENSE SUMMARY 8 - * 9 5 * Copyright (c) 2016 BayLibre, SAS. 10 6 * Author: Neil Armstrong <narmstrong@baylibre.com> 11 - * 12 - * This program is free software; you can redistribute it and/or modify 13 - * it under the terms of version 2 of the GNU General Public License as 14 - * published by the Free Software Foundation. 15 - * 16 - * This program is distributed in the hope that it will be useful, but 17 - * WITHOUT ANY WARRANTY; without even the implied warranty of 18 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 19 - * General Public License for more details. 20 - * 21 - * You should have received a copy of the GNU General Public License 22 - * along with this program; if not, see <http://www.gnu.org/licenses/>. 23 - * The full GNU General Public License is included in this distribution 24 - * in the file called COPYING. 25 - * 26 - * BSD LICENSE 27 - * 28 - * Copyright (c) 2016 BayLibre, SAS. 29 - * Author: Neil Armstrong <narmstrong@baylibre.com> 30 - * 31 - * Redistribution and use in source and binary forms, with or without 32 - * modification, are permitted provided that the following conditions 33 - * are met: 34 - * 35 - * * Redistributions of source code must retain the above copyright 36 - * notice, this list of conditions and the following disclaimer. 37 - * * Redistributions in binary form must reproduce the above copyright 38 - * notice, this list of conditions and the following disclaimer in 39 - * the documentation and/or other materials provided with the 40 - * distribution. 
41 - * * Neither the name of Intel Corporation nor the names of its 42 - * contributors may be used to endorse or promote products derived 43 - * from this software without specific prior written permission. 44 - * 45 - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 46 - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 47 - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR 48 - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT 49 - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 50 - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 51 - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 52 - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 53 - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 54 - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 55 - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 56 7 */ 57 8 #include <linux/err.h> 58 9 #include <linux/init.h>
+124
drivers/reset/reset-scmi.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * ARM System Control and Management Interface (ARM SCMI) reset driver 4 + * 5 + * Copyright (C) 2019 ARM Ltd. 6 + */ 7 + 8 + #include <linux/module.h> 9 + #include <linux/of.h> 10 + #include <linux/device.h> 11 + #include <linux/reset-controller.h> 12 + #include <linux/scmi_protocol.h> 13 + 14 + /** 15 + * struct scmi_reset_data - reset controller information structure 16 + * @rcdev: reset controller entity 17 + * @handle: ARM SCMI handle used for communication with system controller 18 + */ 19 + struct scmi_reset_data { 20 + struct reset_controller_dev rcdev; 21 + const struct scmi_handle *handle; 22 + }; 23 + 24 + #define to_scmi_reset_data(p) container_of((p), struct scmi_reset_data, rcdev) 25 + #define to_scmi_handle(p) (to_scmi_reset_data(p)->handle) 26 + 27 + /** 28 + * scmi_reset_assert() - assert device reset 29 + * @rcdev: reset controller entity 30 + * @id: ID of the reset to be asserted 31 + * 32 + * This function implements the reset driver op to assert a device's reset 33 + * using the ARM SCMI protocol. 34 + * 35 + * Return: 0 for successful request, else a corresponding error value 36 + */ 37 + static int 38 + scmi_reset_assert(struct reset_controller_dev *rcdev, unsigned long id) 39 + { 40 + const struct scmi_handle *handle = to_scmi_handle(rcdev); 41 + 42 + return handle->reset_ops->assert(handle, id); 43 + } 44 + 45 + /** 46 + * scmi_reset_deassert() - deassert device reset 47 + * @rcdev: reset controller entity 48 + * @id: ID of the reset to be deasserted 49 + * 50 + * This function implements the reset driver op to deassert a device's reset 51 + * using the ARM SCMI protocol. 
52 + * 53 + * Return: 0 for successful request, else a corresponding error value 54 + */ 55 + static int 56 + scmi_reset_deassert(struct reset_controller_dev *rcdev, unsigned long id) 57 + { 58 + const struct scmi_handle *handle = to_scmi_handle(rcdev); 59 + 60 + return handle->reset_ops->deassert(handle, id); 61 + } 62 + 63 + /** 64 + * scmi_reset_reset() - reset the device 65 + * @rcdev: reset controller entity 66 + * @id: ID of the reset signal to be reset(assert + deassert) 67 + * 68 + * This function implements the reset driver op to trigger a device's 69 + * reset signal using the ARM SCMI protocol. 70 + * 71 + * Return: 0 for successful request, else a corresponding error value 72 + */ 73 + static int 74 + scmi_reset_reset(struct reset_controller_dev *rcdev, unsigned long id) 75 + { 76 + const struct scmi_handle *handle = to_scmi_handle(rcdev); 77 + 78 + return handle->reset_ops->reset(handle, id); 79 + } 80 + 81 + static const struct reset_control_ops scmi_reset_ops = { 82 + .assert = scmi_reset_assert, 83 + .deassert = scmi_reset_deassert, 84 + .reset = scmi_reset_reset, 85 + }; 86 + 87 + static int scmi_reset_probe(struct scmi_device *sdev) 88 + { 89 + struct scmi_reset_data *data; 90 + struct device *dev = &sdev->dev; 91 + struct device_node *np = dev->of_node; 92 + const struct scmi_handle *handle = sdev->handle; 93 + 94 + if (!handle || !handle->reset_ops) 95 + return -ENODEV; 96 + 97 + data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL); 98 + if (!data) 99 + return -ENOMEM; 100 + 101 + data->rcdev.ops = &scmi_reset_ops; 102 + data->rcdev.owner = THIS_MODULE; 103 + data->rcdev.of_node = np; 104 + data->rcdev.nr_resets = handle->reset_ops->num_domains_get(handle); 105 + 106 + return devm_reset_controller_register(dev, &data->rcdev); 107 + } 108 + 109 + static const struct scmi_device_id scmi_id_table[] = { 110 + { SCMI_PROTOCOL_RESET }, 111 + { }, 112 + }; 113 + MODULE_DEVICE_TABLE(scmi, scmi_id_table); 114 + 115 + static struct scmi_driver 
scmi_reset_driver = { 116 + .name = "scmi-reset", 117 + .probe = scmi_reset_probe, 118 + .id_table = scmi_id_table, 119 + }; 120 + module_scmi_driver(scmi_reset_driver); 121 + 122 + MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>"); 123 + MODULE_DESCRIPTION("ARM SCMI reset controller driver"); 124 + MODULE_LICENSE("GPL v2");
+3
drivers/reset/reset-simple.c
··· 127 127 { .compatible = "aspeed,ast2500-lpc-reset" }, 128 128 { .compatible = "bitmain,bm1880-reset", 129 129 .data = &reset_simple_active_low }, 130 + { .compatible = "snps,dw-high-reset" }, 131 + { .compatible = "snps,dw-low-reset", 132 + .data = &reset_simple_active_low }, 130 133 { /* sentinel */ }, 131 134 }; 132 135
+1 -1
drivers/soc/fsl/dpaa2-console.c
··· 73 73 74 74 mcfbaregs = ioremap(mc_base_addr.start, resource_size(&mc_base_addr)); 75 75 if (!mcfbaregs) { 76 - pr_err("could not map MC Firmaware Base registers\n"); 76 + pr_err("could not map MC Firmware Base registers\n"); 77 77 return 0; 78 78 } 79 79
+68 -101
drivers/soc/fsl/qe/qe.c
··· 10 10 * General Purpose functions for the global management of the 11 11 * QUICC Engine (QE). 12 12 */ 13 + #include <linux/bitmap.h> 13 14 #include <linux/errno.h> 14 15 #include <linux/sched.h> 15 16 #include <linux/kernel.h> ··· 40 39 DEFINE_SPINLOCK(cmxgcr_lock); 41 40 EXPORT_SYMBOL(cmxgcr_lock); 42 41 43 - /* QE snum state */ 44 - enum qe_snum_state { 45 - QE_SNUM_STATE_USED, 46 - QE_SNUM_STATE_FREE 47 - }; 48 - 49 - /* QE snum */ 50 - struct qe_snum { 51 - u8 num; 52 - enum qe_snum_state state; 53 - }; 54 - 55 42 /* We allocate this here because it is used almost exclusively for 56 43 * the communication processor devices. 57 44 */ 58 45 struct qe_immap __iomem *qe_immr; 59 46 EXPORT_SYMBOL(qe_immr); 60 47 61 - static struct qe_snum snums[QE_NUM_OF_SNUM]; /* Dynamically allocated SNUMs */ 48 + static u8 snums[QE_NUM_OF_SNUM]; /* Dynamically allocated SNUMs */ 49 + static DECLARE_BITMAP(snum_state, QE_NUM_OF_SNUM); 62 50 static unsigned int qe_num_of_snum; 63 51 64 52 static phys_addr_t qebase = -1; 53 + 54 + static struct device_node *qe_get_device_node(void) 55 + { 56 + struct device_node *qe; 57 + 58 + /* 59 + * Newer device trees have an "fsl,qe" compatible property for the QE 60 + * node, but we still need to support older device trees. 
61 + */ 62 + qe = of_find_compatible_node(NULL, NULL, "fsl,qe"); 63 + if (qe) 64 + return qe; 65 + return of_find_node_by_type(NULL, "qe"); 66 + } 65 67 66 68 static phys_addr_t get_qe_base(void) 67 69 { ··· 75 71 if (qebase != -1) 76 72 return qebase; 77 73 78 - qe = of_find_compatible_node(NULL, NULL, "fsl,qe"); 79 - if (!qe) { 80 - qe = of_find_node_by_type(NULL, "qe"); 81 - if (!qe) 82 - return qebase; 83 - } 74 + qe = qe_get_device_node(); 75 + if (!qe) 76 + return qebase; 84 77 85 78 ret = of_address_to_resource(qe, 0, &res); 86 79 if (!ret) ··· 171 170 if (brg_clk) 172 171 return brg_clk; 173 172 174 - qe = of_find_compatible_node(NULL, NULL, "fsl,qe"); 175 - if (!qe) { 176 - qe = of_find_node_by_type(NULL, "qe"); 177 - if (!qe) 178 - return brg_clk; 179 - } 173 + qe = qe_get_device_node(); 174 + if (!qe) 175 + return brg_clk; 180 176 181 177 prop = of_get_property(qe, "brg-frequency", &size); 182 178 if (prop && size == sizeof(*prop)) ··· 279 281 */ 280 282 static void qe_snums_init(void) 281 283 { 282 - int i; 283 284 static const u8 snum_init_76[] = { 284 285 0x04, 0x05, 0x0C, 0x0D, 0x14, 0x15, 0x1C, 0x1D, 285 286 0x24, 0x25, 0x2C, 0x2D, 0x34, 0x35, 0x88, 0x89, ··· 299 302 0x28, 0x29, 0x38, 0x39, 0x48, 0x49, 0x58, 0x59, 300 303 0x68, 0x69, 0x78, 0x79, 0x80, 0x81, 301 304 }; 302 - static const u8 *snum_init; 305 + struct device_node *qe; 306 + const u8 *snum_init; 307 + int i; 303 308 304 - qe_num_of_snum = qe_get_num_of_snums(); 305 - 306 - if (qe_num_of_snum == 76) 307 - snum_init = snum_init_76; 308 - else 309 - snum_init = snum_init_46; 310 - 311 - for (i = 0; i < qe_num_of_snum; i++) { 312 - snums[i].num = snum_init[i]; 313 - snums[i].state = QE_SNUM_STATE_FREE; 309 + bitmap_zero(snum_state, QE_NUM_OF_SNUM); 310 + qe_num_of_snum = 28; /* The default number of snum for threads is 28 */ 311 + qe = qe_get_device_node(); 312 + if (qe) { 313 + i = of_property_read_variable_u8_array(qe, "fsl,qe-snums", 314 + snums, 1, QE_NUM_OF_SNUM); 315 + if (i > 0) { 316 
+ of_node_put(qe); 317 + qe_num_of_snum = i; 318 + return; 319 + } 320 + /* 321 + * Fall back to legacy binding of using the value of 322 + * fsl,qe-num-snums to choose one of the static arrays 323 + * above. 324 + */ 325 + of_property_read_u32(qe, "fsl,qe-num-snums", &qe_num_of_snum); 326 + of_node_put(qe); 314 327 } 328 + 329 + if (qe_num_of_snum == 76) { 330 + snum_init = snum_init_76; 331 + } else if (qe_num_of_snum == 28 || qe_num_of_snum == 46) { 332 + snum_init = snum_init_46; 333 + } else { 334 + pr_err("QE: unsupported value of fsl,qe-num-snums: %u\n", qe_num_of_snum); 335 + return; 336 + } 337 + memcpy(snums, snum_init, qe_num_of_snum); 315 338 } 316 339 317 340 int qe_get_snum(void) ··· 341 324 int i; 342 325 343 326 spin_lock_irqsave(&qe_lock, flags); 344 - for (i = 0; i < qe_num_of_snum; i++) { 345 - if (snums[i].state == QE_SNUM_STATE_FREE) { 346 - snums[i].state = QE_SNUM_STATE_USED; 347 - snum = snums[i].num; 348 - break; 349 - } 327 + i = find_first_zero_bit(snum_state, qe_num_of_snum); 328 + if (i < qe_num_of_snum) { 329 + set_bit(i, snum_state); 330 + snum = snums[i]; 350 331 } 351 332 spin_unlock_irqrestore(&qe_lock, flags); 352 333 ··· 354 339 355 340 void qe_put_snum(u8 snum) 356 341 { 357 - int i; 342 + const u8 *p = memchr(snums, snum, qe_num_of_snum); 358 343 359 - for (i = 0; i < qe_num_of_snum; i++) { 360 - if (snums[i].num == snum) { 361 - snums[i].state = QE_SNUM_STATE_FREE; 362 - break; 363 - } 364 - } 344 + if (p) 345 + clear_bit(p - snums, snum_state); 365 346 } 366 347 EXPORT_SYMBOL(qe_put_snum); 367 348 ··· 583 572 584 573 initialized = 1; 585 574 586 - /* 587 - * Newer device trees have an "fsl,qe" compatible property for the QE 588 - * node, but we still need to support older device trees. 
589 - */ 590 - qe = of_find_compatible_node(NULL, NULL, "fsl,qe"); 591 - if (!qe) { 592 - qe = of_find_node_by_type(NULL, "qe"); 593 - if (!qe) 594 - return NULL; 595 - } 575 + qe = qe_get_device_node(); 576 + if (!qe) 577 + return NULL; 596 578 597 579 /* Find the 'firmware' child node */ 598 580 fw = of_get_child_by_name(qe, "firmware"); ··· 631 627 unsigned int num_of_risc = 0; 632 628 const u32 *prop; 633 629 634 - qe = of_find_compatible_node(NULL, NULL, "fsl,qe"); 635 - if (!qe) { 636 - /* Older devices trees did not have an "fsl,qe" 637 - * compatible property, so we need to look for 638 - * the QE node by name. 639 - */ 640 - qe = of_find_node_by_type(NULL, "qe"); 641 - if (!qe) 642 - return num_of_risc; 643 - } 630 + qe = qe_get_device_node(); 631 + if (!qe) 632 + return num_of_risc; 644 633 645 634 prop = of_get_property(qe, "fsl,qe-num-riscs", &size); 646 635 if (prop && size == sizeof(*prop)) ··· 647 650 648 651 unsigned int qe_get_num_of_snums(void) 649 652 { 650 - struct device_node *qe; 651 - int size; 652 - unsigned int num_of_snums; 653 - const u32 *prop; 654 - 655 - num_of_snums = 28; /* The default number of snum for threads is 28 */ 656 - qe = of_find_compatible_node(NULL, NULL, "fsl,qe"); 657 - if (!qe) { 658 - /* Older devices trees did not have an "fsl,qe" 659 - * compatible property, so we need to look for 660 - * the QE node by name. 661 - */ 662 - qe = of_find_node_by_type(NULL, "qe"); 663 - if (!qe) 664 - return num_of_snums; 665 - } 666 - 667 - prop = of_get_property(qe, "fsl,qe-num-snums", &size); 668 - if (prop && size == sizeof(*prop)) { 669 - num_of_snums = *prop; 670 - if ((num_of_snums < 28) || (num_of_snums > QE_NUM_OF_SNUM)) { 671 - /* No QE ever has fewer than 28 SNUMs */ 672 - pr_err("QE: number of snum is invalid\n"); 673 - of_node_put(qe); 674 - return -EINVAL; 675 - } 676 - } 677 - 678 - of_node_put(qe); 679 - 680 - return num_of_snums; 653 + return qe_num_of_snum; 681 654 } 682 655 EXPORT_SYMBOL(qe_get_num_of_snums); 683 656
+1 -1
drivers/soc/renesas/rcar-sysc.c
··· 170 170 struct generic_pm_domain genpd; 171 171 struct rcar_sysc_ch ch; 172 172 unsigned int flags; 173 - char name[0]; 173 + char name[]; 174 174 }; 175 175 176 176 static inline struct rcar_sysc_pd *to_rcar_pd(struct generic_pm_domain *d)
+1
drivers/tee/optee/call.c
··· 148 148 */ 149 149 optee_cq_wait_for_completion(&optee->call_queue, &w); 150 150 } else if (OPTEE_SMC_RETURN_IS_RPC(res.a0)) { 151 + might_sleep(); 151 152 param.a0 = res.a0; 152 153 param.a1 = res.a1; 153 154 param.a2 = res.a2;
+16
include/dt-bindings/bus/moxtet.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Constant for device tree bindings for Turris Mox module configuration bus 4 + * 5 + * Copyright (C) 2019 Marek Behun <marek.behun@nic.cz> 6 + */ 7 + 8 + #ifndef _DT_BINDINGS_BUS_MOXTET_H 9 + #define _DT_BINDINGS_BUS_MOXTET_H 10 + 11 + #define MOXTET_IRQ_PCI 0 12 + #define MOXTET_IRQ_USB3 4 13 + #define MOXTET_IRQ_PERIDOT(n) (8 + (n)) 14 + #define MOXTET_IRQ_TOPAZ 12 15 + 16 + #endif /* _DT_BINDINGS_BUS_MOXTET_H */
+1 -50
include/dt-bindings/reset/amlogic,meson-gxbb-reset.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */ 1 2 /* 2 - * This file is provided under a dual BSD/GPLv2 license. When using or 3 - * redistributing this file, you may do so under either license. 4 - * 5 - * GPL LICENSE SUMMARY 6 - * 7 3 * Copyright (c) 2016 BayLibre, SAS. 8 4 * Author: Neil Armstrong <narmstrong@baylibre.com> 9 - * 10 - * This program is free software; you can redistribute it and/or modify 11 - * it under the terms of version 2 of the GNU General Public License as 12 - * published by the Free Software Foundation. 13 - * 14 - * This program is distributed in the hope that it will be useful, but 15 - * WITHOUT ANY WARRANTY; without even the implied warranty of 16 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 17 - * General Public License for more details. 18 - * 19 - * You should have received a copy of the GNU General Public License 20 - * along with this program; if not, see <http://www.gnu.org/licenses/>. 21 - * The full GNU General Public License is included in this distribution 22 - * in the file called COPYING. 23 - * 24 - * BSD LICENSE 25 - * 26 - * Copyright (c) 2016 BayLibre, SAS. 27 - * Author: Neil Armstrong <narmstrong@baylibre.com> 28 - * 29 - * Redistribution and use in source and binary forms, with or without 30 - * modification, are permitted provided that the following conditions 31 - * are met: 32 - * 33 - * * Redistributions of source code must retain the above copyright 34 - * notice, this list of conditions and the following disclaimer. 35 - * * Redistributions in binary form must reproduce the above copyright 36 - * notice, this list of conditions and the following disclaimer in 37 - * the documentation and/or other materials provided with the 38 - * distribution. 39 - * * Neither the name of Intel Corporation nor the names of its 40 - * contributors may be used to endorse or promote products derived 41 - * from this software without specific prior written permission. 
42 - * 43 - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 44 - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 45 - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR 46 - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT 47 - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 48 - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 49 - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 50 - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 51 - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 52 - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 53 - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 54 5 */ 55 6 #ifndef _DT_BINDINGS_AMLOGIC_MESON_GXBB_RESET_H 56 7 #define _DT_BINDINGS_AMLOGIC_MESON_GXBB_RESET_H
+1 -50
include/dt-bindings/reset/amlogic,meson8b-reset.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */ 1 2 /* 2 - * This file is provided under a dual BSD/GPLv2 license. When using or 3 - * redistributing this file, you may do so under either license. 4 - * 5 - * GPL LICENSE SUMMARY 6 - * 7 3 * Copyright (c) 2016 BayLibre, SAS. 8 4 * Author: Neil Armstrong <narmstrong@baylibre.com> 9 - * 10 - * This program is free software; you can redistribute it and/or modify 11 - * it under the terms of version 2 of the GNU General Public License as 12 - * published by the Free Software Foundation. 13 - * 14 - * This program is distributed in the hope that it will be useful, but 15 - * WITHOUT ANY WARRANTY; without even the implied warranty of 16 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 17 - * General Public License for more details. 18 - * 19 - * You should have received a copy of the GNU General Public License 20 - * along with this program; if not, see <http://www.gnu.org/licenses/>. 21 - * The full GNU General Public License is included in this distribution 22 - * in the file called COPYING. 23 - * 24 - * BSD LICENSE 25 - * 26 - * Copyright (c) 2016 BayLibre, SAS. 27 - * Author: Neil Armstrong <narmstrong@baylibre.com> 28 - * 29 - * Redistribution and use in source and binary forms, with or without 30 - * modification, are permitted provided that the following conditions 31 - * are met: 32 - * 33 - * * Redistributions of source code must retain the above copyright 34 - * notice, this list of conditions and the following disclaimer. 35 - * * Redistributions in binary form must reproduce the above copyright 36 - * notice, this list of conditions and the following disclaimer in 37 - * the documentation and/or other materials provided with the 38 - * distribution. 39 - * * Neither the name of Intel Corporation nor the names of its 40 - * contributors may be used to endorse or promote products derived 41 - * from this software without specific prior written permission. 
42 - * 43 - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 44 - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 45 - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR 46 - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT 47 - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 48 - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 49 - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 50 - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 51 - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 52 - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 53 - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 54 5 */ 55 6 #ifndef _DT_BINDINGS_AMLOGIC_MESON8B_RESET_H 56 7 #define _DT_BINDINGS_AMLOGIC_MESON8B_RESET_H
+17 -17
include/dt-bindings/reset/imx8mq-reset.h
··· 31 31 #define IMX8MQ_RESET_OTG2_PHY_RESET 20 32 32 #define IMX8MQ_RESET_MIPI_DSI_RESET_BYTE_N 21 33 33 #define IMX8MQ_RESET_MIPI_DSI_RESET_N 22 34 - #define IMX8MQ_RESET_MIPI_DIS_DPI_RESET_N 23 35 - #define IMX8MQ_RESET_MIPI_DIS_ESC_RESET_N 24 36 - #define IMX8MQ_RESET_MIPI_DIS_PCLK_RESET_N 25 34 + #define IMX8MQ_RESET_MIPI_DSI_DPI_RESET_N 23 35 + #define IMX8MQ_RESET_MIPI_DSI_ESC_RESET_N 24 36 + #define IMX8MQ_RESET_MIPI_DSI_PCLK_RESET_N 25 37 37 #define IMX8MQ_RESET_PCIEPHY 26 38 38 #define IMX8MQ_RESET_PCIEPHY_PERST 27 39 39 #define IMX8MQ_RESET_PCIE_CTRL_APPS_EN 28 40 40 #define IMX8MQ_RESET_PCIE_CTRL_APPS_TURNOFF 29 41 - #define IMX8MQ_RESET_HDMI_PHY_APB_RESET 30 41 + #define IMX8MQ_RESET_HDMI_PHY_APB_RESET 30 /* i.MX8MM does NOT support */ 42 42 #define IMX8MQ_RESET_DISP_RESET 31 43 43 #define IMX8MQ_RESET_GPU_RESET 32 44 44 #define IMX8MQ_RESET_VPU_RESET 33 45 - #define IMX8MQ_RESET_PCIEPHY2 34 46 - #define IMX8MQ_RESET_PCIEPHY2_PERST 35 47 - #define IMX8MQ_RESET_PCIE2_CTRL_APPS_EN 36 48 - #define IMX8MQ_RESET_PCIE2_CTRL_APPS_TURNOFF 37 49 - #define IMX8MQ_RESET_MIPI_CSI1_CORE_RESET 38 50 - #define IMX8MQ_RESET_MIPI_CSI1_PHY_REF_RESET 39 51 - #define IMX8MQ_RESET_MIPI_CSI1_ESC_RESET 40 52 - #define IMX8MQ_RESET_MIPI_CSI2_CORE_RESET 41 53 - #define IMX8MQ_RESET_MIPI_CSI2_PHY_REF_RESET 42 54 - #define IMX8MQ_RESET_MIPI_CSI2_ESC_RESET 43 45 + #define IMX8MQ_RESET_PCIEPHY2 34 /* i.MX8MM does NOT support */ 46 + #define IMX8MQ_RESET_PCIEPHY2_PERST 35 /* i.MX8MM does NOT support */ 47 + #define IMX8MQ_RESET_PCIE2_CTRL_APPS_EN 36 /* i.MX8MM does NOT support */ 48 + #define IMX8MQ_RESET_PCIE2_CTRL_APPS_TURNOFF 37 /* i.MX8MM does NOT support */ 49 + #define IMX8MQ_RESET_MIPI_CSI1_CORE_RESET 38 /* i.MX8MM does NOT support */ 50 + #define IMX8MQ_RESET_MIPI_CSI1_PHY_REF_RESET 39 /* i.MX8MM does NOT support */ 51 + #define IMX8MQ_RESET_MIPI_CSI1_ESC_RESET 40 /* i.MX8MM does NOT support */ 52 + #define IMX8MQ_RESET_MIPI_CSI2_CORE_RESET 41 /* i.MX8MM does NOT support 
*/ 53 + #define IMX8MQ_RESET_MIPI_CSI2_PHY_REF_RESET 42 /* i.MX8MM does NOT support */ 54 + #define IMX8MQ_RESET_MIPI_CSI2_ESC_RESET 43 /* i.MX8MM does NOT support */ 55 55 #define IMX8MQ_RESET_DDRC1_PRST 44 56 56 #define IMX8MQ_RESET_DDRC1_CORE_RESET 45 57 57 #define IMX8MQ_RESET_DDRC1_PHY_RESET 46 58 - #define IMX8MQ_RESET_DDRC2_PRST 47 59 - #define IMX8MQ_RESET_DDRC2_CORE_RESET 48 60 - #define IMX8MQ_RESET_DDRC2_PHY_RESET 49 58 + #define IMX8MQ_RESET_DDRC2_PRST 47 /* i.MX8MM does NOT support */ 59 + #define IMX8MQ_RESET_DDRC2_CORE_RESET 48 /* i.MX8MM does NOT support */ 60 + #define IMX8MQ_RESET_DDRC2_PHY_RESET 49 /* i.MX8MM does NOT support */ 61 61 62 62 #define IMX8MQ_RESET_NUM 50 63 63
+109
include/linux/moxtet.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Turris Mox module configuration bus driver 4 + * 5 + * Copyright (C) 2019 Marek Behun <marek.behun@nic.cz> 6 + */ 7 + 8 + #ifndef __LINUX_MOXTET_H 9 + #define __LINUX_MOXTET_H 10 + 11 + #include <linux/device.h> 12 + #include <linux/irq.h> 13 + #include <linux/irqdomain.h> 14 + #include <linux/mutex.h> 15 + 16 + #define TURRIS_MOX_MAX_MODULES 10 17 + 18 + enum turris_mox_cpu_module_id { 19 + TURRIS_MOX_CPU_ID_EMMC = 0x00, 20 + TURRIS_MOX_CPU_ID_SD = 0x10, 21 + }; 22 + 23 + enum turris_mox_module_id { 24 + TURRIS_MOX_MODULE_FIRST = 0x01, 25 + 26 + TURRIS_MOX_MODULE_SFP = 0x01, 27 + TURRIS_MOX_MODULE_PCI = 0x02, 28 + TURRIS_MOX_MODULE_TOPAZ = 0x03, 29 + TURRIS_MOX_MODULE_PERIDOT = 0x04, 30 + TURRIS_MOX_MODULE_USB3 = 0x05, 31 + TURRIS_MOX_MODULE_PCI_BRIDGE = 0x06, 32 + 33 + TURRIS_MOX_MODULE_LAST = 0x06, 34 + }; 35 + 36 + #define MOXTET_NIRQS 16 37 + 38 + extern struct bus_type moxtet_type; 39 + 40 + struct moxtet { 41 + struct device *dev; 42 + struct mutex lock; 43 + u8 modules[TURRIS_MOX_MAX_MODULES]; 44 + int count; 45 + u8 tx[TURRIS_MOX_MAX_MODULES]; 46 + int dev_irq; 47 + struct { 48 + struct irq_domain *domain; 49 + struct irq_chip chip; 50 + unsigned long masked, exists; 51 + struct moxtet_irqpos { 52 + u8 idx; 53 + u8 bit; 54 + } position[MOXTET_NIRQS]; 55 + } irq; 56 + #ifdef CONFIG_DEBUG_FS 57 + struct dentry *debugfs_root; 58 + #endif 59 + }; 60 + 61 + struct moxtet_driver { 62 + const enum turris_mox_module_id *id_table; 63 + struct device_driver driver; 64 + }; 65 + 66 + static inline struct moxtet_driver * 67 + to_moxtet_driver(struct device_driver *drv) 68 + { 69 + if (!drv) 70 + return NULL; 71 + return container_of(drv, struct moxtet_driver, driver); 72 + } 73 + 74 + extern int __moxtet_register_driver(struct module *owner, 75 + struct moxtet_driver *mdrv); 76 + 77 + static inline void moxtet_unregister_driver(struct moxtet_driver *mdrv) 78 + { 79 + if (mdrv) 80 + 
driver_unregister(&mdrv->driver); 81 + } 82 + 83 + #define moxtet_register_driver(driver) \ 84 + __moxtet_register_driver(THIS_MODULE, driver) 85 + 86 + #define module_moxtet_driver(__moxtet_driver) \ 87 + module_driver(__moxtet_driver, moxtet_register_driver, \ 88 + moxtet_unregister_driver) 89 + 90 + struct moxtet_device { 91 + struct device dev; 92 + struct moxtet *moxtet; 93 + enum turris_mox_module_id id; 94 + unsigned int idx; 95 + }; 96 + 97 + extern int moxtet_device_read(struct device *dev); 98 + extern int moxtet_device_write(struct device *dev, u8 val); 99 + extern int moxtet_device_written(struct device *dev); 100 + 101 + static inline struct moxtet_device * 102 + to_moxtet_device(struct device *dev) 103 + { 104 + if (!dev) 105 + return NULL; 106 + return container_of(dev, struct moxtet_device, dev); 107 + } 108 + 109 + #endif /* __LINUX_MOXTET_H */
+37 -9
include/linux/scmi_protocol.h
··· 1 - // SPDX-License-Identifier: GPL-2.0 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 2 /* 3 3 * SCMI Message Protocol driver header 4 4 * ··· 71 71 int (*rate_get)(const struct scmi_handle *handle, u32 clk_id, 72 72 u64 *rate); 73 73 int (*rate_set)(const struct scmi_handle *handle, u32 clk_id, 74 - u32 config, u64 rate); 74 + u64 rate); 75 75 int (*enable)(const struct scmi_handle *handle, u32 clk_id); 76 76 int (*disable)(const struct scmi_handle *handle, u32 clk_id); 77 77 }; ··· 145 145 u32 id; 146 146 u8 type; 147 147 s8 scale; 148 + u8 num_trip_points; 149 + bool async; 148 150 char name[SCMI_MAX_STR_SIZE]; 149 151 }; 150 152 ··· 169 167 * 170 168 * @count_get: get the count of sensors provided by SCMI 171 169 * @info_get: get the information of the specified sensor 172 - * @configuration_set: control notifications on cross-over events for 170 + * @trip_point_notify: control notifications on cross-over events for 173 171 * the trip-points 174 - * @trip_point_set: selects and configures a trip-point of interest 172 + * @trip_point_config: selects and configures a trip-point of interest 175 173 * @reading_get: gets the current value of the sensor 176 174 */ 177 175 struct scmi_sensor_ops { ··· 179 177 180 178 const struct scmi_sensor_info *(*info_get) 181 179 (const struct scmi_handle *handle, u32 sensor_id); 182 - int (*configuration_set)(const struct scmi_handle *handle, 183 - u32 sensor_id); 184 - int (*trip_point_set)(const struct scmi_handle *handle, u32 sensor_id, 185 - u8 trip_id, u64 trip_value); 180 + int (*trip_point_notify)(const struct scmi_handle *handle, 181 + u32 sensor_id, bool enable); 182 + int (*trip_point_config)(const struct scmi_handle *handle, 183 + u32 sensor_id, u8 trip_id, u64 trip_value); 186 184 int (*reading_get)(const struct scmi_handle *handle, u32 sensor_id, 187 - bool async, u64 *value); 185 + u64 *value); 186 + }; 187 + 188 + /** 189 + * struct scmi_reset_ops - represents the various operations provided 190 + * by SCMI 
Reset Protocol 191 + * 192 + * @num_domains_get: get the count of reset domains provided by SCMI 193 + * @name_get: gets the name of a reset domain 194 + * @latency_get: gets the reset latency for the specified reset domain 195 + * @reset: resets the specified reset domain 196 + * @assert: explicitly assert reset signal of the specified reset domain 197 + * @deassert: explicitly deassert reset signal of the specified reset domain 198 + */ 199 + struct scmi_reset_ops { 200 + int (*num_domains_get)(const struct scmi_handle *handle); 201 + char *(*name_get)(const struct scmi_handle *handle, u32 domain); 202 + int (*latency_get)(const struct scmi_handle *handle, u32 domain); 203 + int (*reset)(const struct scmi_handle *handle, u32 domain); 204 + int (*assert)(const struct scmi_handle *handle, u32 domain); 205 + int (*deassert)(const struct scmi_handle *handle, u32 domain); 188 206 }; 189 207 190 208 /** ··· 216 194 * @perf_ops: pointer to set of performance protocol operations 217 195 * @clk_ops: pointer to set of clock protocol operations 218 196 * @sensor_ops: pointer to set of sensor protocol operations 197 + * @reset_ops: pointer to set of reset protocol operations 219 198 * @perf_priv: pointer to private data structure specific to performance 220 199 * protocol(for internal use only) 221 200 * @clk_priv: pointer to private data structure specific to clock ··· 224 201 * @power_priv: pointer to private data structure specific to power 225 202 * protocol(for internal use only) 226 203 * @sensor_priv: pointer to private data structure specific to sensors 204 + * protocol(for internal use only) 205 + * @reset_priv: pointer to private data structure specific to reset 227 206 * protocol(for internal use only) 228 207 */ 229 208 struct scmi_handle { ··· 235 210 struct scmi_clk_ops *clk_ops; 236 211 struct scmi_power_ops *power_ops; 237 212 struct scmi_sensor_ops *sensor_ops; 213 + struct scmi_reset_ops *reset_ops; 238 214 /* for protocol internal use */ 239 215 void 
*perf_priv; 240 216 void *clk_priv; 241 217 void *power_priv; 242 218 void *sensor_priv; 219 + void *reset_priv; 243 220 }; 244 221 245 222 enum scmi_std_protocol { ··· 251 224 SCMI_PROTOCOL_PERF = 0x13, 252 225 SCMI_PROTOCOL_CLOCK = 0x14, 253 226 SCMI_PROTOCOL_SENSOR = 0x15, 227 + SCMI_PROTOCOL_RESET = 0x16, 254 228 }; 255 229 256 230 struct scmi_device {