Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc

Pull ARM SoC driver updates from Arnd Bergmann:
"The main addition this time around is the new ARM "SCMI" framework,
which is the latest in a series of standards coming from ARM to do
power management in a platform independent way.

This has been through many review cycles, and it relies on a rather
interesting way of using the mailbox subsystem, but in the end I
agreed that Sudeep's version was the best we could do after all.

Other changes include:

- the ARM CCN driver is moved out of drivers/bus into drivers/perf,
which makes more sense. Similarly, the performance monitoring
portion of the CCI driver is moved the same way and cleaned up a
little more.

- a series of updates to the SCPI framework

- support for the Mediatek mt7623a SoC in drivers/soc

- support for additional NVIDIA Tegra hardware in drivers/soc

- a new reset driver for Socionext Uniphier

- smaller bug fixes in drivers/soc, drivers/tee, drivers/memory,
drivers/firmware, and drivers/reset across platforms"

* tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (87 commits)
reset: uniphier: add ethernet reset control support for PXs3
reset: stm32mp1: Enable stm32mp1 reset driver
dt-bindings: reset: add STM32MP1 resets
reset: uniphier: add Pro4/Pro5/PXs2 audio systems reset control
reset: imx7: add 'depends on HAS_IOMEM' to fix unmet dependency
reset: modify the way reset lookup works for board files
reset: add support for non-DT systems
clk: scmi: use devm_of_clk_add_hw_provider() API and drop scmi_clocks_remove
firmware: arm_scmi: prevent accessing rate_discrete uninitialized
hwmon: (scmi) return -EINVAL when sensor information is unavailable
amlogic: meson-gx-socinfo: Update soc ids
soc/tegra: pmc: Use the new reset APIs to manage reset controllers
soc: mediatek: update power domain data of MT2712
dt-bindings: soc: update MT2712 power dt-bindings
cpufreq: scmi: add thermal dependency
soc: mediatek: fix the mistaken pointer accessed when subdomains are added
soc: mediatek: add SCPSYS power domain driver for MediaTek MT7623A SoC
soc: mediatek: avoid hardcoded value with bus_prot_mask
dt-bindings: soc: add header files required for MT7623A SCPSYS dt-binding
dt-bindings: soc: add SCPSYS binding for MT7623 and MT7623A SoC
...

+6861 -2189
Documentation/arm/CCN.txt → Documentation/perf/arm-ccn.txt
+179
Documentation/devicetree/bindings/arm/arm,scmi.txt
System Control and Management Interface (SCMI) Message Protocol
----------------------------------------------------------

The SCMI is intended to allow agents such as OSPM to manage various functions
that are provided by the hardware platform it is running on, including power
and performance functions.

This binding is intended to define the interface the firmware implementing
the SCMI as described in ARM document number ARM DUI 0922B ("ARM System Control
and Management Interface Platform Design Document")[0] provides for OSPM in
the device tree.

Required properties:

The scmi node with the following properties shall be under the /firmware/ node.

- compatible : shall be "arm,scmi"
- mboxes: List of phandle and mailbox channel specifiers. It should contain
	  exactly one or two mailboxes, one for transmitting messages("tx")
	  and another optional one for receiving the notifications("rx") if
	  supported.
- shmem : List of phandle pointing to the shared memory(SHM) area as per
	  generic mailbox client binding.
- #address-cells : should be '1' if the device has sub-nodes, maps to
	  protocol identifier for a given sub-node.
- #size-cells : should be '0' as 'reg' property doesn't have any size
	  associated with it.

Optional properties:

- mbox-names: shall be "tx" or "rx" depending on mboxes entries.

See Documentation/devicetree/bindings/mailbox/mailbox.txt for more details
about the generic mailbox controller and client driver bindings.

The mailbox is the only permitted method of calling the SCMI firmware.
The mailbox doorbell is used as a mechanism to signal the presence of a
message and/or notification.

Each supported protocol shall have a sub-node with the corresponding compatible
as described in the following sections. If the platform supports a dedicated
communication channel for a particular protocol, the 3 properties namely:
mboxes, mbox-names and shmem shall be present in the sub-node corresponding
to that protocol.

Clock/Performance bindings for the clocks/OPPs based on SCMI Message Protocol
------------------------------------------------------------

This binding uses the common clock binding[1].

Required properties:
- #clock-cells : Should be 1. Contains the Clock ID value used by SCMI commands.

Power domain bindings for the power domains based on SCMI Message Protocol
------------------------------------------------------------

This binding for the SCMI power domain providers uses the generic power
domain binding[2].

Required properties:
- #power-domain-cells : Should be 1. Contains the device or the power
			domain ID value used by SCMI commands.

Sensor bindings for the sensors based on SCMI Message Protocol
--------------------------------------------------------------
SCMI provides an API to access the various sensors on the SoC.

Required properties:
- #thermal-sensor-cells: should be set to 1. This property follows the
			 thermal device tree bindings[3].

			 Valid cell values are raw identifiers (Sensor ID)
			 as used by the firmware. Refer to platform details
			 for your implementation for the IDs to use.

SRAM and Shared Memory for SCMI
-------------------------------

A small area of SRAM is reserved for SCMI communication between application
processors and SCP.

The properties should follow the generic mmio-sram description found in [4].

Each sub-node represents the reserved area for SCMI.

Required sub-node properties:
- reg : The base offset and size of the reserved area within the SRAM
- compatible : should be "arm,scmi-shmem" for Non-secure SRAM based
	       shared memory

[0] http://infocenter.arm.com/help/topic/com.arm.doc.den0056a/index.html
[1] Documentation/devicetree/bindings/clock/clock-bindings.txt
[2] Documentation/devicetree/bindings/power/power_domain.txt
[3] Documentation/devicetree/bindings/thermal/thermal.txt
[4] Documentation/devicetree/bindings/sram/sram.txt

Example:

sram@50000000 {
	compatible = "mmio-sram";
	reg = <0x0 0x50000000 0x0 0x10000>;

	#address-cells = <1>;
	#size-cells = <1>;
	ranges = <0 0x0 0x50000000 0x10000>;

	cpu_scp_lpri: scp-shmem@0 {
		compatible = "arm,scmi-shmem";
		reg = <0x0 0x200>;
	};

	cpu_scp_hpri: scp-shmem@200 {
		compatible = "arm,scmi-shmem";
		reg = <0x200 0x200>;
	};
};

mailbox@40000000 {
	....
	#mbox-cells = <1>;
	reg = <0x0 0x40000000 0x0 0x10000>;
};

firmware {

	...

	scmi {
		compatible = "arm,scmi";
		mboxes = <&mailbox 0 &mailbox 1>;
		mbox-names = "tx", "rx";
		shmem = <&cpu_scp_lpri &cpu_scp_hpri>;
		#address-cells = <1>;
		#size-cells = <0>;

		scmi_devpd: protocol@11 {
			reg = <0x11>;
			#power-domain-cells = <1>;
		};

		scmi_dvfs: protocol@13 {
			reg = <0x13>;
			#clock-cells = <1>;
		};

		scmi_clk: protocol@14 {
			reg = <0x14>;
			#clock-cells = <1>;
		};

		scmi_sensors0: protocol@15 {
			reg = <0x15>;
			#thermal-sensor-cells = <1>;
		};
	};
};

cpu@0 {
	...
	reg = <0 0>;
	clocks = <&scmi_dvfs 0>;
};

hdlcd@7ff60000 {
	...
	reg = <0 0x7ff60000 0 0x1000>;
	clocks = <&scmi_clk 4>;
	power-domains = <&scmi_devpd 1>;
};

thermal-zones {
	soc_thermal {
		polling-delay-passive = <100>;
		polling-delay = <1000>;
				/* sensor ID */
		thermal-sensors = <&scmi_sensors0 3>;
		...
	};
};
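The channel rules in the binding above (exactly one or two mailboxes, a mandatory "tx" channel, an optional "rx" channel, and one shmem area per channel) can be sketched as a small check. This is an illustrative helper only, not kernel code; the function name `scmi_channels_valid` is hypothetical.

```python
# Hypothetical validator for the SCMI mboxes/mbox-names/shmem pairing rule
# described in the binding: one or two channels, "tx" first, optional "rx",
# and one shared-memory region per channel.
def scmi_channels_valid(mbox_names, shmem_count):
    if len(mbox_names) not in (1, 2):
        return False                    # exactly one or two mailboxes
    if mbox_names[0] != "tx":
        return False                    # transmit channel is mandatory
    if len(mbox_names) == 2 and mbox_names[1] != "rx":
        return False                    # second channel may only be "rx"
    return shmem_count == len(mbox_names)  # one SHM area per channel

print(scmi_channels_valid(["tx", "rx"], 2))  # the example node above: True
print(scmi_channels_valid(["rx"], 1))        # no "tx" channel: False
```

The example `scmi` node, with `mbox-names = "tx", "rx"` and two shmem phandles, passes this check.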
Documentation/devicetree/bindings/arm/ccn.txt → Documentation/devicetree/bindings/perf/arm-ccn.txt
+6
Documentation/devicetree/bindings/arm/samsung/pmu.txt
···
 - interrupt-parent: a phandle indicating which interrupt controller
   this PMU signals interrupts to.
 
+
+Optional nodes:
+
+- nodes defining the restart and poweroff syscon children
+
+
 Example :
 pmu_system_controller: system-controller@10040000 {
 	compatible = "samsung,exynos5250-pmu", "syscon";
+28
Documentation/devicetree/bindings/mailbox/mailbox.txt
···
 
 Optional property:
 - mbox-names: List of identifier strings for each mailbox channel.
+- shmem : List of phandle pointing to the shared memory(SHM) area between the
+	  users of these mailboxes for IPC, one for each mailbox. This shared
+	  memory can be part of any memory reserved for the purpose of this
+	  communication between the mailbox client and the remote.
+
 
 Example:
 pwr_cntrl: power {
 	...
 	mbox-names = "pwr-ctrl", "rpc";
 	mboxes = <&mailbox 0 &mailbox 1>;
+};
+
+Example with shared memory(shmem):
+
+	sram: sram@50000000 {
+		compatible = "mmio-sram";
+		reg = <0x50000000 0x10000>;
+
+		#address-cells = <1>;
+		#size-cells = <1>;
+		ranges = <0 0x50000000 0x10000>;
+
+		cl_shmem: shmem@0 {
+			compatible = "client-shmem";
+			reg = <0x0 0x200>;
+		};
+	};
+
+	client@2e000000 {
+		...
+		mboxes = <&mailbox 0>;
+		shmem = <&cl_shmem>;
+		..
+	};
 };
+21
Documentation/devicetree/bindings/mfd/aspeed-lpc.txt
···
 	compatible = "aspeed,ast2500-lhc";
 	reg = <0x20 0x24 0x48 0x8>;
 };
+
+LPC reset control
+-----------------
+
+The UARTs present in the ASPEED SoC can have their resets tied to the reset
+state of the LPC bus. Some systems may choose to modify this configuration.
+
+Required properties:
+
+ - compatible:		"aspeed,ast2500-lpc-reset" or
+			"aspeed,ast2400-lpc-reset"
+ - reg:			offset and length of the IP in the LHC memory region
+ - #reset-controller	indicates the number of reset cells expected
+
+Example:
+
+lpc_reset: reset-controller@18 {
+	compatible = "aspeed,ast2500-lpc-reset";
+	reg = <0x18 0x4>;
+	#reset-cells = <1>;
+};
+6
Documentation/devicetree/bindings/reset/st,stm32mp1-rcc.txt
STMicroelectronics STM32MP1 Peripheral Reset Controller
=======================================================

The RCC IP is both a reset and a clock controller.

Please see Documentation/devicetree/bindings/clock/st,stm32mp1-rcc.txt
+4 -1
Documentation/devicetree/bindings/soc/mediatek/scpsys.txt
···
 	- "mediatek,mt2712-scpsys"
 	- "mediatek,mt6797-scpsys"
 	- "mediatek,mt7622-scpsys"
+	- "mediatek,mt7623-scpsys", "mediatek,mt2701-scpsys": For MT7623 SoC
+	- "mediatek,mt7623a-scpsys": For MT7623A SoC
 	- "mediatek,mt8173-scpsys"
 - #power-domain-cells: Must be 1
 - reg: Address range of the SCPSYS unit
···
 - clock, clock-names: clocks according to the common clock binding.
 	These are clocks which hardware needs to be
 	enabled before enabling certain power domains.
-	Required clocks for MT2701: "mm", "mfg", "ethif"
+	Required clocks for MT2701 or MT7623: "mm", "mfg", "ethif"
 	Required clocks for MT2712: "mm", "mfg", "venc", "jpgdec", "audio", "vdec"
 	Required clocks for MT6797: "mm", "mfg", "vdec"
 	Required clocks for MT7622: "hif_sel"
+	Required clocks for MT7623A: "ethif"
 	Required clocks for MT8173: "mm", "mfg", "venc", "venc_lt"
 
 Optional properties:
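The per-SoC required-clock table in this hunk can be read as a simple lookup. The sketch below is illustrative only (the `missing_clocks` helper is hypothetical, not part of the kernel), and it reflects the binding's table as merged, where the MT7623A entry requires "ethif".

```python
# Hypothetical lookup mirroring the scpsys binding's required-clock table:
# SCPSYS compatible string -> clock names that must appear in clock-names.
REQUIRED_CLOCKS = {
    "mediatek,mt2701-scpsys":  ["mm", "mfg", "ethif"],
    "mediatek,mt2712-scpsys":  ["mm", "mfg", "venc", "jpgdec", "audio", "vdec"],
    "mediatek,mt6797-scpsys":  ["mm", "mfg", "vdec"],
    "mediatek,mt7622-scpsys":  ["hif_sel"],
    "mediatek,mt7623a-scpsys": ["ethif"],
    "mediatek,mt8173-scpsys":  ["mm", "mfg", "venc", "venc_lt"],
}

def missing_clocks(compatible, clock_names):
    """Return the required clocks that a node's clock-names list is missing."""
    required = REQUIRED_CLOCKS.get(compatible, [])
    return [name for name in required if name not in clock_names]

print(missing_clocks("mediatek,mt7623a-scpsys", ["ethif"]))  # []
```

A MT7623 node would use the MT2701 fallback compatible and thus the MT2701 clock set.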
+6 -5
MAINTAINERS
···
 S:	Supported
 F:	drivers/mfd/syscon.c
 
-SYSTEM CONTROL & POWER INTERFACE (SCPI) Message Protocol drivers
+SYSTEM CONTROL & POWER/MANAGEMENT INTERFACE (SCPI/SCMI) Message Protocol drivers
 M:	Sudeep Holla <sudeep.holla@arm.com>
 L:	linux-arm-kernel@lists.infradead.org
 S:	Maintained
-F:	Documentation/devicetree/bindings/arm/arm,scpi.txt
-F:	drivers/clk/clk-scpi.c
-F:	drivers/cpufreq/scpi-cpufreq.c
+F:	Documentation/devicetree/bindings/arm/arm,sc[mp]i.txt
+F:	drivers/clk/clk-sc[mp]i.c
+F:	drivers/cpufreq/sc[mp]i-cpufreq.c
 F:	drivers/firmware/arm_scpi.c
+F:	drivers/firmware/arm_scmi/
+F:	include/linux/sc[mp]i_protocol.h
-F:	include/linux/scpi_protocol.h
 
 SYSTEM RESET/SHUTDOWN DRIVERS
 M:	Sebastian Reichel <sre@kernel.org>
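The MAINTAINERS `F:` entries above use shell-style glob patterns, so a single `sc[mp]i` bracket expression covers both the SCPI and SCMI variants of each file. A quick sketch with Python's `fnmatch` (which implements the same bracket-expression semantics) shows what the pattern matches; the use of Python here is purely illustrative, the kernel itself matches these patterns via scripts/get_maintainer.pl.

```python
# Demonstrate the sc[mp]i bracket glob from the MAINTAINERS hunk above:
# it matches "scpi" and "scmi" but nothing else in that position.
from fnmatch import fnmatch

pattern = "drivers/cpufreq/sc[mp]i-cpufreq.c"
for path in ("drivers/cpufreq/scpi-cpufreq.c",
             "drivers/cpufreq/scmi-cpufreq.c",
             "drivers/cpufreq/scsi-cpufreq.c"):
    print(path, fnmatch(path, pattern))
```

The first two paths match; the third does not, since `[mp]` only admits `m` or `p`.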
-36
drivers/bus/Kconfig
···
 config ARM_CCI
 	bool
 
-config ARM_CCI_PMU
-	bool
-	select ARM_CCI
-
 config ARM_CCI400_COMMON
 	bool
 	select ARM_CCI
-
-config ARM_CCI400_PMU
-	bool "ARM CCI400 PMU support"
-	depends on (ARM && CPU_V7) || ARM64
-	depends on PERF_EVENTS
-	select ARM_CCI400_COMMON
-	select ARM_CCI_PMU
-	help
-	  Support for PMU events monitoring on the ARM CCI-400 (cache coherent
-	  interconnect). CCI-400 supports counting events related to the
-	  connected slave/master interfaces.
 
 config ARM_CCI400_PORT_CTRL
 	bool
···
 	help
 	  Low level power management driver for CCI400 cache coherent
 	  interconnect for ARM platforms.
-
-config ARM_CCI5xx_PMU
-	bool "ARM CCI-500/CCI-550 PMU support"
-	depends on (ARM && CPU_V7) || ARM64
-	depends on PERF_EVENTS
-	select ARM_CCI_PMU
-	help
-	  Support for PMU events monitoring on the ARM CCI-500/CCI-550 cache
-	  coherent interconnects. Both of them provide 8 independent event counters,
-	  which can count events pertaining to the slave/master interfaces as well
-	  as the internal events to the CCI.
-
-	  If unsure, say Y
-
-config ARM_CCN
-	tristate "ARM CCN driver support"
-	depends on ARM || ARM64
-	depends on PERF_EVENTS
-	help
-	  PMU (perf) driver supporting the ARM CCN (Cache Coherent Network)
-	  interconnect.
 
 config BRCMSTB_GISB_ARB
 	bool "Broadcom STB GISB bus arbiter"
-2
drivers/bus/Makefile
···
 
 # Interconnect bus drivers for ARM platforms
 obj-$(CONFIG_ARM_CCI)		+= arm-cci.o
-obj-$(CONFIG_ARM_CCN)		+= arm-ccn.o
-
 obj-$(CONFIG_BRCMSTB_GISB_ARB)	+= brcmstb_gisb.o
 
 # DPAA2 fsl-mc bus
+14 -1749
drivers/bus/arm-cci.c
··· 16 16 17 17 #include <linux/arm-cci.h> 18 18 #include <linux/io.h> 19 - #include <linux/interrupt.h> 20 19 #include <linux/module.h> 21 20 #include <linux/of_address.h> 22 - #include <linux/of_irq.h> 23 21 #include <linux/of_platform.h> 24 - #include <linux/perf_event.h> 25 22 #include <linux/platform_device.h> 26 23 #include <linux/slab.h> 27 - #include <linux/spinlock.h> 28 24 29 25 #include <asm/cacheflush.h> 30 26 #include <asm/smp_plat.h> 31 27 32 - static void __iomem *cci_ctrl_base; 33 - static unsigned long cci_ctrl_phys; 28 + static void __iomem *cci_ctrl_base __ro_after_init; 29 + static unsigned long cci_ctrl_phys __ro_after_init; 34 30 35 31 #ifdef CONFIG_ARM_CCI400_PORT_CTRL 36 32 struct cci_nb_ports { ··· 55 59 {}, 56 60 }; 57 61 58 - #ifdef CONFIG_ARM_CCI_PMU 62 + static const struct of_dev_auxdata arm_cci_auxdata[] = { 63 + OF_DEV_AUXDATA("arm,cci-400-pmu", 0, NULL, &cci_ctrl_base), 64 + OF_DEV_AUXDATA("arm,cci-400-pmu,r0", 0, NULL, &cci_ctrl_base), 65 + OF_DEV_AUXDATA("arm,cci-400-pmu,r1", 0, NULL, &cci_ctrl_base), 66 + OF_DEV_AUXDATA("arm,cci-500-pmu,r0", 0, NULL, &cci_ctrl_base), 67 + OF_DEV_AUXDATA("arm,cci-550-pmu,r0", 0, NULL, &cci_ctrl_base), 68 + {} 69 + }; 59 70 60 71 #define DRIVER_NAME "ARM-CCI" 61 - #define DRIVER_NAME_PMU DRIVER_NAME " PMU" 62 - 63 - #define CCI_PMCR 0x0100 64 - #define CCI_PID2 0x0fe8 65 - 66 - #define CCI_PMCR_CEN 0x00000001 67 - #define CCI_PMCR_NCNT_MASK 0x0000f800 68 - #define CCI_PMCR_NCNT_SHIFT 11 69 - 70 - #define CCI_PID2_REV_MASK 0xf0 71 - #define CCI_PID2_REV_SHIFT 4 72 - 73 - #define CCI_PMU_EVT_SEL 0x000 74 - #define CCI_PMU_CNTR 0x004 75 - #define CCI_PMU_CNTR_CTRL 0x008 76 - #define CCI_PMU_OVRFLW 0x00c 77 - 78 - #define CCI_PMU_OVRFLW_FLAG 1 79 - 80 - #define CCI_PMU_CNTR_SIZE(model) ((model)->cntr_size) 81 - #define CCI_PMU_CNTR_BASE(model, idx) ((idx) * CCI_PMU_CNTR_SIZE(model)) 82 - #define CCI_PMU_CNTR_MASK ((1ULL << 32) -1) 83 - #define CCI_PMU_CNTR_LAST(cci_pmu) (cci_pmu->num_cntrs - 1) 84 - 85 
- #define CCI_PMU_MAX_HW_CNTRS(model) \ 86 - ((model)->num_hw_cntrs + (model)->fixed_hw_cntrs) 87 - 88 - /* Types of interfaces that can generate events */ 89 - enum { 90 - CCI_IF_SLAVE, 91 - CCI_IF_MASTER, 92 - #ifdef CONFIG_ARM_CCI5xx_PMU 93 - CCI_IF_GLOBAL, 94 - #endif 95 - CCI_IF_MAX, 96 - }; 97 - 98 - struct event_range { 99 - u32 min; 100 - u32 max; 101 - }; 102 - 103 - struct cci_pmu_hw_events { 104 - struct perf_event **events; 105 - unsigned long *used_mask; 106 - raw_spinlock_t pmu_lock; 107 - }; 108 - 109 - struct cci_pmu; 110 - /* 111 - * struct cci_pmu_model: 112 - * @fixed_hw_cntrs - Number of fixed event counters 113 - * @num_hw_cntrs - Maximum number of programmable event counters 114 - * @cntr_size - Size of an event counter mapping 115 - */ 116 - struct cci_pmu_model { 117 - char *name; 118 - u32 fixed_hw_cntrs; 119 - u32 num_hw_cntrs; 120 - u32 cntr_size; 121 - struct attribute **format_attrs; 122 - struct attribute **event_attrs; 123 - struct event_range event_ranges[CCI_IF_MAX]; 124 - int (*validate_hw_event)(struct cci_pmu *, unsigned long); 125 - int (*get_event_idx)(struct cci_pmu *, struct cci_pmu_hw_events *, unsigned long); 126 - void (*write_counters)(struct cci_pmu *, unsigned long *); 127 - }; 128 - 129 - static struct cci_pmu_model cci_pmu_models[]; 130 - 131 - struct cci_pmu { 132 - void __iomem *base; 133 - struct pmu pmu; 134 - int nr_irqs; 135 - int *irqs; 136 - unsigned long active_irqs; 137 - const struct cci_pmu_model *model; 138 - struct cci_pmu_hw_events hw_events; 139 - struct platform_device *plat_device; 140 - int num_cntrs; 141 - atomic_t active_events; 142 - struct mutex reserve_mutex; 143 - struct hlist_node node; 144 - cpumask_t cpus; 145 - }; 146 - 147 - #define to_cci_pmu(c) (container_of(c, struct cci_pmu, pmu)) 148 - 149 - enum cci_models { 150 - #ifdef CONFIG_ARM_CCI400_PMU 151 - CCI400_R0, 152 - CCI400_R1, 153 - #endif 154 - #ifdef CONFIG_ARM_CCI5xx_PMU 155 - CCI500_R0, 156 - CCI550_R0, 157 - #endif 158 - 
CCI_MODEL_MAX 159 - }; 160 - 161 - static void pmu_write_counters(struct cci_pmu *cci_pmu, 162 - unsigned long *mask); 163 - static ssize_t cci_pmu_format_show(struct device *dev, 164 - struct device_attribute *attr, char *buf); 165 - static ssize_t cci_pmu_event_show(struct device *dev, 166 - struct device_attribute *attr, char *buf); 167 - 168 - #define CCI_EXT_ATTR_ENTRY(_name, _func, _config) \ 169 - &((struct dev_ext_attribute[]) { \ 170 - { __ATTR(_name, S_IRUGO, _func, NULL), (void *)_config } \ 171 - })[0].attr.attr 172 - 173 - #define CCI_FORMAT_EXT_ATTR_ENTRY(_name, _config) \ 174 - CCI_EXT_ATTR_ENTRY(_name, cci_pmu_format_show, (char *)_config) 175 - #define CCI_EVENT_EXT_ATTR_ENTRY(_name, _config) \ 176 - CCI_EXT_ATTR_ENTRY(_name, cci_pmu_event_show, (unsigned long)_config) 177 - 178 - /* CCI400 PMU Specific definitions */ 179 - 180 - #ifdef CONFIG_ARM_CCI400_PMU 181 - 182 - /* Port ids */ 183 - #define CCI400_PORT_S0 0 184 - #define CCI400_PORT_S1 1 185 - #define CCI400_PORT_S2 2 186 - #define CCI400_PORT_S3 3 187 - #define CCI400_PORT_S4 4 188 - #define CCI400_PORT_M0 5 189 - #define CCI400_PORT_M1 6 190 - #define CCI400_PORT_M2 7 191 - 192 - #define CCI400_R1_PX 5 193 - 194 - /* 195 - * Instead of an event id to monitor CCI cycles, a dedicated counter is 196 - * provided. Use 0xff to represent CCI cycles and hope that no future revisions 197 - * make use of this event in hardware. 198 - */ 199 - enum cci400_perf_events { 200 - CCI400_PMU_CYCLES = 0xff 201 - }; 202 - 203 - #define CCI400_PMU_CYCLE_CNTR_IDX 0 204 - #define CCI400_PMU_CNTR0_IDX 1 205 - 206 - /* 207 - * CCI PMU event id is an 8-bit value made of two parts - bits 7:5 for one of 8 208 - * ports and bits 4:0 are event codes. There are different event codes 209 - * associated with each port type. 210 - * 211 - * Additionally, the range of events associated with the port types changed 212 - * between Rev0 and Rev1. 
213 - * 214 - * The constants below define the range of valid codes for each port type for 215 - * the different revisions and are used to validate the event to be monitored. 216 - */ 217 - 218 - #define CCI400_PMU_EVENT_MASK 0xffUL 219 - #define CCI400_PMU_EVENT_SOURCE_SHIFT 5 220 - #define CCI400_PMU_EVENT_SOURCE_MASK 0x7 221 - #define CCI400_PMU_EVENT_CODE_SHIFT 0 222 - #define CCI400_PMU_EVENT_CODE_MASK 0x1f 223 - #define CCI400_PMU_EVENT_SOURCE(event) \ 224 - ((event >> CCI400_PMU_EVENT_SOURCE_SHIFT) & \ 225 - CCI400_PMU_EVENT_SOURCE_MASK) 226 - #define CCI400_PMU_EVENT_CODE(event) \ 227 - ((event >> CCI400_PMU_EVENT_CODE_SHIFT) & CCI400_PMU_EVENT_CODE_MASK) 228 - 229 - #define CCI400_R0_SLAVE_PORT_MIN_EV 0x00 230 - #define CCI400_R0_SLAVE_PORT_MAX_EV 0x13 231 - #define CCI400_R0_MASTER_PORT_MIN_EV 0x14 232 - #define CCI400_R0_MASTER_PORT_MAX_EV 0x1a 233 - 234 - #define CCI400_R1_SLAVE_PORT_MIN_EV 0x00 235 - #define CCI400_R1_SLAVE_PORT_MAX_EV 0x14 236 - #define CCI400_R1_MASTER_PORT_MIN_EV 0x00 237 - #define CCI400_R1_MASTER_PORT_MAX_EV 0x11 238 - 239 - #define CCI400_CYCLE_EVENT_EXT_ATTR_ENTRY(_name, _config) \ 240 - CCI_EXT_ATTR_ENTRY(_name, cci400_pmu_cycle_event_show, \ 241 - (unsigned long)_config) 242 - 243 - static ssize_t cci400_pmu_cycle_event_show(struct device *dev, 244 - struct device_attribute *attr, char *buf); 245 - 246 - static struct attribute *cci400_pmu_format_attrs[] = { 247 - CCI_FORMAT_EXT_ATTR_ENTRY(event, "config:0-4"), 248 - CCI_FORMAT_EXT_ATTR_ENTRY(source, "config:5-7"), 249 - NULL 250 - }; 251 - 252 - static struct attribute *cci400_r0_pmu_event_attrs[] = { 253 - /* Slave events */ 254 - CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_any, 0x0), 255 - CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_device, 0x01), 256 - CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_normal_or_nonshareable, 0x2), 257 - CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_inner_or_outershareable, 0x3), 258 - CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_cache_maintenance, 0x4), 259 - 
CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_mem_barrier, 0x5), 260 - CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_sync_barrier, 0x6), 261 - CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg, 0x7), 262 - CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg_sync, 0x8), 263 - CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_stall_tt_full, 0x9), 264 - CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_last_hs_snoop, 0xA), 265 - CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_stall_rvalids_h_rready_l, 0xB), 266 - CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_any, 0xC), 267 - CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_device, 0xD), 268 - CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_normal_or_nonshareable, 0xE), 269 - CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_inner_or_outershare_wback_wclean, 0xF), 270 - CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_write_unique, 0x10), 271 - CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_write_line_unique, 0x11), 272 - CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_evict, 0x12), 273 - CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_stall_tt_full, 0x13), 274 - /* Master events */ 275 - CCI_EVENT_EXT_ATTR_ENTRY(mi_retry_speculative_fetch, 0x14), 276 - CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_addr_hazard, 0x15), 277 - CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_id_hazard, 0x16), 278 - CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_tt_full, 0x17), 279 - CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_barrier_hazard, 0x18), 280 - CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_barrier_hazard, 0x19), 281 - CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_tt_full, 0x1A), 282 - /* Special event for cycles counter */ 283 - CCI400_CYCLE_EVENT_EXT_ATTR_ENTRY(cycles, 0xff), 284 - NULL 285 - }; 286 - 287 - static struct attribute *cci400_r1_pmu_event_attrs[] = { 288 - /* Slave events */ 289 - CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_any, 0x0), 290 - CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_device, 0x01), 291 - CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_normal_or_nonshareable, 0x2), 292 - CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_inner_or_outershareable, 0x3), 293 - CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_cache_maintenance, 0x4), 294 - CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_mem_barrier, 0x5), 295 - 
CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_sync_barrier, 0x6), 296 - CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg, 0x7), 297 - CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg_sync, 0x8), 298 - CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_stall_tt_full, 0x9), 299 - CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_last_hs_snoop, 0xA), 300 - CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_stall_rvalids_h_rready_l, 0xB), 301 - CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_any, 0xC), 302 - CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_device, 0xD), 303 - CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_normal_or_nonshareable, 0xE), 304 - CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_inner_or_outershare_wback_wclean, 0xF), 305 - CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_write_unique, 0x10), 306 - CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_write_line_unique, 0x11), 307 - CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_evict, 0x12), 308 - CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_stall_tt_full, 0x13), 309 - CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_stall_slave_id_hazard, 0x14), 310 - /* Master events */ 311 - CCI_EVENT_EXT_ATTR_ENTRY(mi_retry_speculative_fetch, 0x0), 312 - CCI_EVENT_EXT_ATTR_ENTRY(mi_stall_cycle_addr_hazard, 0x1), 313 - CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_master_id_hazard, 0x2), 314 - CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_hi_prio_rtq_full, 0x3), 315 - CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_barrier_hazard, 0x4), 316 - CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_barrier_hazard, 0x5), 317 - CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_wtq_full, 0x6), 318 - CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_low_prio_rtq_full, 0x7), 319 - CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_mid_prio_rtq_full, 0x8), 320 - CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_qvn_vn0, 0x9), 321 - CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_qvn_vn1, 0xA), 322 - CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_qvn_vn2, 0xB), 323 - CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_qvn_vn3, 0xC), 324 - CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_qvn_vn0, 0xD), 325 - CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_qvn_vn1, 0xE), 326 - CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_qvn_vn2, 0xF), 327 - 
CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_qvn_vn3, 0x10), 328 - CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_unique_or_line_unique_addr_hazard, 0x11), 329 - /* Special event for cycles counter */ 330 - CCI400_CYCLE_EVENT_EXT_ATTR_ENTRY(cycles, 0xff), 331 - NULL 332 - }; 333 - 334 - static ssize_t cci400_pmu_cycle_event_show(struct device *dev, 335 - struct device_attribute *attr, char *buf) 336 - { 337 - struct dev_ext_attribute *eattr = container_of(attr, 338 - struct dev_ext_attribute, attr); 339 - return snprintf(buf, PAGE_SIZE, "config=0x%lx\n", (unsigned long)eattr->var); 340 - } 341 - 342 - static int cci400_get_event_idx(struct cci_pmu *cci_pmu, 343 - struct cci_pmu_hw_events *hw, 344 - unsigned long cci_event) 345 - { 346 - int idx; 347 - 348 - /* cycles event idx is fixed */ 349 - if (cci_event == CCI400_PMU_CYCLES) { 350 - if (test_and_set_bit(CCI400_PMU_CYCLE_CNTR_IDX, hw->used_mask)) 351 - return -EAGAIN; 352 - 353 - return CCI400_PMU_CYCLE_CNTR_IDX; 354 - } 355 - 356 - for (idx = CCI400_PMU_CNTR0_IDX; idx <= CCI_PMU_CNTR_LAST(cci_pmu); ++idx) 357 - if (!test_and_set_bit(idx, hw->used_mask)) 358 - return idx; 359 - 360 - /* No counters available */ 361 - return -EAGAIN; 362 - } 363 - 364 - static int cci400_validate_hw_event(struct cci_pmu *cci_pmu, unsigned long hw_event) 365 - { 366 - u8 ev_source = CCI400_PMU_EVENT_SOURCE(hw_event); 367 - u8 ev_code = CCI400_PMU_EVENT_CODE(hw_event); 368 - int if_type; 369 - 370 - if (hw_event & ~CCI400_PMU_EVENT_MASK) 371 - return -ENOENT; 372 - 373 - if (hw_event == CCI400_PMU_CYCLES) 374 - return hw_event; 375 - 376 - switch (ev_source) { 377 - case CCI400_PORT_S0: 378 - case CCI400_PORT_S1: 379 - case CCI400_PORT_S2: 380 - case CCI400_PORT_S3: 381 - case CCI400_PORT_S4: 382 - /* Slave Interface */ 383 - if_type = CCI_IF_SLAVE; 384 - break; 385 - case CCI400_PORT_M0: 386 - case CCI400_PORT_M1: 387 - case CCI400_PORT_M2: 388 - /* Master Interface */ 389 - if_type = CCI_IF_MASTER; 390 - break; 391 - default: 392 - return -ENOENT; 
393 - } 394 - 395 - if (ev_code >= cci_pmu->model->event_ranges[if_type].min && 396 - ev_code <= cci_pmu->model->event_ranges[if_type].max) 397 - return hw_event; 398 - 399 - return -ENOENT; 400 - } 401 - 402 - static int probe_cci400_revision(void) 403 - { 404 - int rev; 405 - rev = readl_relaxed(cci_ctrl_base + CCI_PID2) & CCI_PID2_REV_MASK; 406 - rev >>= CCI_PID2_REV_SHIFT; 407 - 408 - if (rev < CCI400_R1_PX) 409 - return CCI400_R0; 410 - else 411 - return CCI400_R1; 412 - } 413 - 414 - static const struct cci_pmu_model *probe_cci_model(struct platform_device *pdev) 415 - { 416 - if (platform_has_secure_cci_access()) 417 - return &cci_pmu_models[probe_cci400_revision()]; 418 - return NULL; 419 - } 420 - #else /* !CONFIG_ARM_CCI400_PMU */ 421 - static inline struct cci_pmu_model *probe_cci_model(struct platform_device *pdev) 422 - { 423 - return NULL; 424 - } 425 - #endif /* CONFIG_ARM_CCI400_PMU */ 426 - 427 - #ifdef CONFIG_ARM_CCI5xx_PMU 428 - 429 - /* 430 - * CCI5xx PMU event id is an 9-bit value made of two parts. 
431 - * bits [8:5] - Source for the event 432 - * bits [4:0] - Event code (specific to type of interface) 433 - * 434 - * 435 - */ 436 - 437 - /* Port ids */ 438 - #define CCI5xx_PORT_S0 0x0 439 - #define CCI5xx_PORT_S1 0x1 440 - #define CCI5xx_PORT_S2 0x2 441 - #define CCI5xx_PORT_S3 0x3 442 - #define CCI5xx_PORT_S4 0x4 443 - #define CCI5xx_PORT_S5 0x5 444 - #define CCI5xx_PORT_S6 0x6 445 - 446 - #define CCI5xx_PORT_M0 0x8 447 - #define CCI5xx_PORT_M1 0x9 448 - #define CCI5xx_PORT_M2 0xa 449 - #define CCI5xx_PORT_M3 0xb 450 - #define CCI5xx_PORT_M4 0xc 451 - #define CCI5xx_PORT_M5 0xd 452 - #define CCI5xx_PORT_M6 0xe 453 - 454 - #define CCI5xx_PORT_GLOBAL 0xf 455 - 456 - #define CCI5xx_PMU_EVENT_MASK 0x1ffUL 457 - #define CCI5xx_PMU_EVENT_SOURCE_SHIFT 0x5 458 - #define CCI5xx_PMU_EVENT_SOURCE_MASK 0xf 459 - #define CCI5xx_PMU_EVENT_CODE_SHIFT 0x0 460 - #define CCI5xx_PMU_EVENT_CODE_MASK 0x1f 461 - 462 - #define CCI5xx_PMU_EVENT_SOURCE(event) \ 463 - ((event >> CCI5xx_PMU_EVENT_SOURCE_SHIFT) & CCI5xx_PMU_EVENT_SOURCE_MASK) 464 - #define CCI5xx_PMU_EVENT_CODE(event) \ 465 - ((event >> CCI5xx_PMU_EVENT_CODE_SHIFT) & CCI5xx_PMU_EVENT_CODE_MASK) 466 - 467 - #define CCI5xx_SLAVE_PORT_MIN_EV 0x00 468 - #define CCI5xx_SLAVE_PORT_MAX_EV 0x1f 469 - #define CCI5xx_MASTER_PORT_MIN_EV 0x00 470 - #define CCI5xx_MASTER_PORT_MAX_EV 0x06 471 - #define CCI5xx_GLOBAL_PORT_MIN_EV 0x00 472 - #define CCI5xx_GLOBAL_PORT_MAX_EV 0x0f 473 - 474 - 475 - #define CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(_name, _config) \ 476 - CCI_EXT_ATTR_ENTRY(_name, cci5xx_pmu_global_event_show, \ 477 - (unsigned long) _config) 478 - 479 - static ssize_t cci5xx_pmu_global_event_show(struct device *dev, 480 - struct device_attribute *attr, char *buf); 481 - 482 - static struct attribute *cci5xx_pmu_format_attrs[] = { 483 - CCI_FORMAT_EXT_ATTR_ENTRY(event, "config:0-4"), 484 - CCI_FORMAT_EXT_ATTR_ENTRY(source, "config:5-8"), 485 - NULL, 486 - }; 487 - 488 - static struct attribute *cci5xx_pmu_event_attrs[] = { 489 
- /* Slave events */ 490 - CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_arvalid, 0x0), 491 - CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_dev, 0x1), 492 - CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_nonshareable, 0x2), 493 - CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_shareable_non_alloc, 0x3), 494 - CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_shareable_alloc, 0x4), 495 - CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_invalidate, 0x5), 496 - CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_cache_maint, 0x6), 497 - CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg, 0x7), 498 - CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_rval, 0x8), 499 - CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_rlast_snoop, 0x9), 500 - CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_awalid, 0xA), 501 - CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_dev, 0xB), 502 - CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_non_shareable, 0xC), 503 - CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_share_wb, 0xD), 504 - CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_share_wlu, 0xE), 505 - CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_share_wunique, 0xF), 506 - CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_evict, 0x10), 507 - CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_wrevict, 0x11), 508 - CCI_EVENT_EXT_ATTR_ENTRY(si_w_data_beat, 0x12), 509 - CCI_EVENT_EXT_ATTR_ENTRY(si_srq_acvalid, 0x13), 510 - CCI_EVENT_EXT_ATTR_ENTRY(si_srq_read, 0x14), 511 - CCI_EVENT_EXT_ATTR_ENTRY(si_srq_clean, 0x15), 512 - CCI_EVENT_EXT_ATTR_ENTRY(si_srq_data_transfer_low, 0x16), 513 - CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_stall_arvalid, 0x17), 514 - CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_stall, 0x18), 515 - CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_stall, 0x19), 516 - CCI_EVENT_EXT_ATTR_ENTRY(si_w_data_stall, 0x1A), 517 - CCI_EVENT_EXT_ATTR_ENTRY(si_w_resp_stall, 0x1B), 518 - CCI_EVENT_EXT_ATTR_ENTRY(si_srq_stall, 0x1C), 519 - CCI_EVENT_EXT_ATTR_ENTRY(si_s_data_stall, 0x1D), 520 - CCI_EVENT_EXT_ATTR_ENTRY(si_rq_stall_ot_limit, 0x1E), 521 - CCI_EVENT_EXT_ATTR_ENTRY(si_r_stall_arbit, 0x1F), 522 - 523 - /* Master events */ 524 - CCI_EVENT_EXT_ATTR_ENTRY(mi_r_data_beat_any, 0x0), 525 - CCI_EVENT_EXT_ATTR_ENTRY(mi_w_data_beat_any, 0x1), 526 - 
	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall, 0x2),
	CCI_EVENT_EXT_ATTR_ENTRY(mi_r_data_stall, 0x3),
	CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall, 0x4),
	CCI_EVENT_EXT_ATTR_ENTRY(mi_w_data_stall, 0x5),
	CCI_EVENT_EXT_ATTR_ENTRY(mi_w_resp_stall, 0x6),

	/* Global events */
	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_filter_bank_0_1, 0x0),
	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_filter_bank_2_3, 0x1),
	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_filter_bank_4_5, 0x2),
	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_filter_bank_6_7, 0x3),
	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_miss_filter_bank_0_1, 0x4),
	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_miss_filter_bank_2_3, 0x5),
	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_miss_filter_bank_4_5, 0x6),
	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_miss_filter_bank_6_7, 0x7),
	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_back_invalidation, 0x8),
	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_stall_alloc_busy, 0x9),
	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_stall_tt_full, 0xA),
	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_wrq, 0xB),
	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_cd_hs, 0xC),
	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_rq_stall_addr_hazard, 0xD),
	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_rq_stall_tt_full, 0xE),
	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_rq_tzmp1_prot, 0xF),
	NULL
};

static ssize_t cci5xx_pmu_global_event_show(struct device *dev,
				struct device_attribute *attr, char *buf)
{
	struct dev_ext_attribute *eattr = container_of(attr,
					struct dev_ext_attribute, attr);
	/* Global events have single fixed source code */
	return snprintf(buf, PAGE_SIZE, "event=0x%lx,source=0x%x\n",
				(unsigned long)eattr->var, CCI5xx_PORT_GLOBAL);
}

/*
 * CCI500 provides 8 independent event counters that can count
 * any of the events available.
 * CCI500 PMU event source ids
 *	0x0-0x6 - Slave interfaces
 *	0x8-0xD - Master interfaces
 *	0xf     - Global Events
 *	0x7,0xe - Reserved
 */
static int cci500_validate_hw_event(struct cci_pmu *cci_pmu,
					unsigned long hw_event)
{
	u32 ev_source = CCI5xx_PMU_EVENT_SOURCE(hw_event);
	u32 ev_code = CCI5xx_PMU_EVENT_CODE(hw_event);
	int if_type;

	if (hw_event & ~CCI5xx_PMU_EVENT_MASK)
		return -ENOENT;

	switch (ev_source) {
	case CCI5xx_PORT_S0:
	case CCI5xx_PORT_S1:
	case CCI5xx_PORT_S2:
	case CCI5xx_PORT_S3:
	case CCI5xx_PORT_S4:
	case CCI5xx_PORT_S5:
	case CCI5xx_PORT_S6:
		if_type = CCI_IF_SLAVE;
		break;
	case CCI5xx_PORT_M0:
	case CCI5xx_PORT_M1:
	case CCI5xx_PORT_M2:
	case CCI5xx_PORT_M3:
	case CCI5xx_PORT_M4:
	case CCI5xx_PORT_M5:
		if_type = CCI_IF_MASTER;
		break;
	case CCI5xx_PORT_GLOBAL:
		if_type = CCI_IF_GLOBAL;
		break;
	default:
		return -ENOENT;
	}

	if (ev_code >= cci_pmu->model->event_ranges[if_type].min &&
		ev_code <= cci_pmu->model->event_ranges[if_type].max)
		return hw_event;

	return -ENOENT;
}

/*
 * CCI550 provides 8 independent event counters that can count
 * any of the events available.
 * CCI550 PMU event source ids
 *	0x0-0x6 - Slave interfaces
 *	0x8-0xe - Master interfaces
 *	0xf     - Global Events
 *	0x7	- Reserved
 */
static int cci550_validate_hw_event(struct cci_pmu *cci_pmu,
					unsigned long hw_event)
{
	u32 ev_source = CCI5xx_PMU_EVENT_SOURCE(hw_event);
	u32 ev_code = CCI5xx_PMU_EVENT_CODE(hw_event);
	int if_type;

	if (hw_event & ~CCI5xx_PMU_EVENT_MASK)
		return -ENOENT;

	switch (ev_source) {
	case CCI5xx_PORT_S0:
	case CCI5xx_PORT_S1:
	case CCI5xx_PORT_S2:
	case CCI5xx_PORT_S3:
	case CCI5xx_PORT_S4:
	case CCI5xx_PORT_S5:
	case CCI5xx_PORT_S6:
		if_type = CCI_IF_SLAVE;
		break;
	case CCI5xx_PORT_M0:
	case CCI5xx_PORT_M1:
	case CCI5xx_PORT_M2:
	case CCI5xx_PORT_M3:
	case CCI5xx_PORT_M4:
	case CCI5xx_PORT_M5:
	case CCI5xx_PORT_M6:
		if_type = CCI_IF_MASTER;
		break;
	case CCI5xx_PORT_GLOBAL:
		if_type = CCI_IF_GLOBAL;
		break;
	default:
		return -ENOENT;
	}

	if (ev_code >= cci_pmu->model->event_ranges[if_type].min &&
		ev_code <= cci_pmu->model->event_ranges[if_type].max)
		return hw_event;

	return -ENOENT;
}

#endif	/* CONFIG_ARM_CCI5xx_PMU */

/*
 * Program the CCI PMU counters which have PERF_HES_ARCH set
 * with the event period and mark them ready before we enable
 * PMU.
 */
static void cci_pmu_sync_counters(struct cci_pmu *cci_pmu)
{
	int i;
	struct cci_pmu_hw_events *cci_hw = &cci_pmu->hw_events;
	DECLARE_BITMAP(mask, cci_pmu->num_cntrs);

	bitmap_zero(mask, cci_pmu->num_cntrs);
	for_each_set_bit(i, cci_pmu->hw_events.used_mask, cci_pmu->num_cntrs) {
		struct perf_event *event = cci_hw->events[i];

		if (WARN_ON(!event))
			continue;

		/* Leave the events which are not counting */
		if (event->hw.state & PERF_HES_STOPPED)
			continue;
		if (event->hw.state & PERF_HES_ARCH) {
			set_bit(i, mask);
			event->hw.state &= ~PERF_HES_ARCH;
		}
	}

	pmu_write_counters(cci_pmu, mask);
}

/* Should be called with cci_pmu->hw_events->pmu_lock held */
static void __cci_pmu_enable_nosync(struct cci_pmu *cci_pmu)
{
	u32 val;

	/* Enable all the PMU counters. */
	val = readl_relaxed(cci_ctrl_base + CCI_PMCR) | CCI_PMCR_CEN;
	writel(val, cci_ctrl_base + CCI_PMCR);
}

/* Should be called with cci_pmu->hw_events->pmu_lock held */
static void __cci_pmu_enable_sync(struct cci_pmu *cci_pmu)
{
	cci_pmu_sync_counters(cci_pmu);
	__cci_pmu_enable_nosync(cci_pmu);
}

/* Should be called with cci_pmu->hw_events->pmu_lock held */
static void __cci_pmu_disable(void)
{
	u32 val;

	/* Disable all the PMU counters. */
	val = readl_relaxed(cci_ctrl_base + CCI_PMCR) & ~CCI_PMCR_CEN;
	writel(val, cci_ctrl_base + CCI_PMCR);
}

static ssize_t cci_pmu_format_show(struct device *dev,
			struct device_attribute *attr, char *buf)
{
	struct dev_ext_attribute *eattr = container_of(attr,
				struct dev_ext_attribute, attr);
	return snprintf(buf, PAGE_SIZE, "%s\n", (char *)eattr->var);
}

static ssize_t cci_pmu_event_show(struct device *dev,
			struct device_attribute *attr, char *buf)
{
	struct dev_ext_attribute *eattr = container_of(attr,
				struct dev_ext_attribute, attr);
	/* source parameter is mandatory for normal PMU events */
	return snprintf(buf, PAGE_SIZE, "source=?,event=0x%lx\n",
			(unsigned long)eattr->var);
}

static int pmu_is_valid_counter(struct cci_pmu *cci_pmu, int idx)
{
	return 0 <= idx && idx <= CCI_PMU_CNTR_LAST(cci_pmu);
}

static u32 pmu_read_register(struct cci_pmu *cci_pmu, int idx, unsigned int offset)
{
	return readl_relaxed(cci_pmu->base +
			     CCI_PMU_CNTR_BASE(cci_pmu->model, idx) + offset);
}

static void pmu_write_register(struct cci_pmu *cci_pmu, u32 value,
			       int idx, unsigned int offset)
{
	writel_relaxed(value, cci_pmu->base +
		       CCI_PMU_CNTR_BASE(cci_pmu->model, idx) + offset);
}

static void pmu_disable_counter(struct cci_pmu *cci_pmu, int idx)
{
	pmu_write_register(cci_pmu, 0, idx, CCI_PMU_CNTR_CTRL);
}

static void pmu_enable_counter(struct cci_pmu *cci_pmu, int idx)
{
	pmu_write_register(cci_pmu, 1, idx, CCI_PMU_CNTR_CTRL);
}

static bool __maybe_unused
pmu_counter_is_enabled(struct cci_pmu *cci_pmu, int idx)
{
	return (pmu_read_register(cci_pmu, idx, CCI_PMU_CNTR_CTRL) & 0x1) != 0;
}

static void pmu_set_event(struct cci_pmu *cci_pmu, int idx, unsigned long event)
{
	pmu_write_register(cci_pmu, event, idx, CCI_PMU_EVT_SEL);
}

/*
 * For all counters on the CCI-PMU, disable any 'enabled' counters,
 * saving the changed counters in the mask, so that we can restore
 * it later using pmu_restore_counters. The mask is private to the
 * caller. We cannot rely on the used_mask maintained by the CCI_PMU
 * as it only tells us if the counter is assigned to a perf_event or not.
 * The state of the perf_event cannot be locked by the PMU layer, hence
 * we check the individual counter status (which can be locked by
 * cci_pmu->hw_events->pmu_lock).
 *
 * @mask should be initialised to empty by the caller.
 */
static void __maybe_unused
pmu_save_counters(struct cci_pmu *cci_pmu, unsigned long *mask)
{
	int i;

	for (i = 0; i < cci_pmu->num_cntrs; i++) {
		if (pmu_counter_is_enabled(cci_pmu, i)) {
			set_bit(i, mask);
			pmu_disable_counter(cci_pmu, i);
		}
	}
}

/*
 * Restore the status of the counters. Reversal of the pmu_save_counters().
 * For each counter set in the mask, enable the counter back.
 */
static void __maybe_unused
pmu_restore_counters(struct cci_pmu *cci_pmu, unsigned long *mask)
{
	int i;

	for_each_set_bit(i, mask, cci_pmu->num_cntrs)
		pmu_enable_counter(cci_pmu, i);
}

/*
 * Returns the number of programmable counters actually implemented
 * by the cci
 */
static u32 pmu_get_max_counters(void)
{
	return (readl_relaxed(cci_ctrl_base + CCI_PMCR) &
		CCI_PMCR_NCNT_MASK) >> CCI_PMCR_NCNT_SHIFT;
}

static int pmu_get_event_idx(struct cci_pmu_hw_events *hw, struct perf_event *event)
{
	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
	unsigned long cci_event = event->hw.config_base;
	int idx;

	if (cci_pmu->model->get_event_idx)
		return cci_pmu->model->get_event_idx(cci_pmu, hw, cci_event);

	/* Generic code to find an unused idx from the mask */
	for (idx = 0; idx <= CCI_PMU_CNTR_LAST(cci_pmu); idx++)
		if (!test_and_set_bit(idx, hw->used_mask))
			return idx;

	/* No counters available */
	return -EAGAIN;
}

static int pmu_map_event(struct perf_event *event)
{
	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);

	if (event->attr.type < PERF_TYPE_MAX ||
			!cci_pmu->model->validate_hw_event)
		return -ENOENT;

	return cci_pmu->model->validate_hw_event(cci_pmu, event->attr.config);
}

static int pmu_request_irq(struct cci_pmu *cci_pmu, irq_handler_t handler)
{
	int i;
	struct platform_device *pmu_device = cci_pmu->plat_device;

	if (unlikely(!pmu_device))
		return -ENODEV;

	if (cci_pmu->nr_irqs < 1) {
		dev_err(&pmu_device->dev, "no irqs for CCI PMUs defined\n");
		return -ENODEV;
	}

	/*
	 * Register all available CCI PMU interrupts. In the interrupt handler
	 * we iterate over the counters checking for interrupt source (the
	 * overflowing counter) and clear it.
	 *
	 * This should allow handling of non-unique interrupt for the counters.
	 */
	for (i = 0; i < cci_pmu->nr_irqs; i++) {
		int err = request_irq(cci_pmu->irqs[i], handler, IRQF_SHARED,
				"arm-cci-pmu", cci_pmu);
		if (err) {
			dev_err(&pmu_device->dev, "unable to request IRQ%d for ARM CCI PMU counters\n",
				cci_pmu->irqs[i]);
			return err;
		}

		set_bit(i, &cci_pmu->active_irqs);
	}

	return 0;
}

static void pmu_free_irq(struct cci_pmu *cci_pmu)
{
	int i;

	for (i = 0; i < cci_pmu->nr_irqs; i++) {
		if (!test_and_clear_bit(i, &cci_pmu->active_irqs))
			continue;

		free_irq(cci_pmu->irqs[i], cci_pmu);
	}
}

static u32 pmu_read_counter(struct perf_event *event)
{
	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
	struct hw_perf_event *hw_counter = &event->hw;
	int idx = hw_counter->idx;
	u32 value;

	if (unlikely(!pmu_is_valid_counter(cci_pmu, idx))) {
		dev_err(&cci_pmu->plat_device->dev, "Invalid CCI PMU counter %d\n", idx);
		return 0;
	}
	value = pmu_read_register(cci_pmu, idx, CCI_PMU_CNTR);

	return value;
}

static void pmu_write_counter(struct cci_pmu *cci_pmu, u32 value, int idx)
{
	pmu_write_register(cci_pmu, value, idx, CCI_PMU_CNTR);
}

static void __pmu_write_counters(struct cci_pmu *cci_pmu, unsigned long *mask)
{
	int i;
	struct cci_pmu_hw_events *cci_hw = &cci_pmu->hw_events;

	for_each_set_bit(i, mask, cci_pmu->num_cntrs) {
		struct perf_event *event = cci_hw->events[i];

		if (WARN_ON(!event))
			continue;
		pmu_write_counter(cci_pmu, local64_read(&event->hw.prev_count), i);
	}
}

static void pmu_write_counters(struct cci_pmu *cci_pmu, unsigned long *mask)
{
	if (cci_pmu->model->write_counters)
		cci_pmu->model->write_counters(cci_pmu, mask);
	else
		__pmu_write_counters(cci_pmu, mask);
}

#ifdef CONFIG_ARM_CCI5xx_PMU

/*
 * CCI-500/CCI-550 has advanced power saving policies, which could gate the
 * clocks to the PMU counters, which makes the writes to them ineffective.
 * The only way to write to those counters is when the global counters
 * are enabled and the particular counter is enabled.
 *
 * So we do the following :
 *
 * 1) Disable all the PMU counters, saving their current state
 * 2) Enable the global PMU profiling, now that all counters are
 *    disabled.
 *
 * For each counter to be programmed, repeat steps 3-7:
 *
 * 3) Write an invalid event code to the event control register for the
 *    counter, so that the counters are not modified.
 * 4) Enable the counter control for the counter.
 * 5) Set the counter value
 * 6) Disable the counter
 * 7) Restore the event in the target counter
 *
 * 8) Disable the global PMU.
 * 9) Restore the status of the rest of the counters.
 *
 * We choose an event which for CCI-5xx is guaranteed not to count.
 * We use the highest possible event code (0x1f) for the master interface 0.
 */
#define CCI5xx_INVALID_EVENT	((CCI5xx_PORT_M0 << CCI5xx_PMU_EVENT_SOURCE_SHIFT) | \
				 (CCI5xx_PMU_EVENT_CODE_MASK << CCI5xx_PMU_EVENT_CODE_SHIFT))
static void cci5xx_pmu_write_counters(struct cci_pmu *cci_pmu, unsigned long *mask)
{
	int i;
	DECLARE_BITMAP(saved_mask, cci_pmu->num_cntrs);

	bitmap_zero(saved_mask, cci_pmu->num_cntrs);
	pmu_save_counters(cci_pmu, saved_mask);

	/*
	 * Now that all the counters are disabled, we can safely turn the PMU on,
	 * without syncing the status of the counters
	 */
	__cci_pmu_enable_nosync(cci_pmu);

	for_each_set_bit(i, mask, cci_pmu->num_cntrs) {
		struct perf_event *event = cci_pmu->hw_events.events[i];

		if (WARN_ON(!event))
			continue;

		pmu_set_event(cci_pmu, i, CCI5xx_INVALID_EVENT);
		pmu_enable_counter(cci_pmu, i);
		pmu_write_counter(cci_pmu, local64_read(&event->hw.prev_count), i);
		pmu_disable_counter(cci_pmu, i);
		pmu_set_event(cci_pmu, i, event->hw.config_base);
	}

	__cci_pmu_disable();

	pmu_restore_counters(cci_pmu, saved_mask);
}

#endif	/* CONFIG_ARM_CCI5xx_PMU */

static u64 pmu_event_update(struct perf_event *event)
{
	struct hw_perf_event *hwc = &event->hw;
	u64 delta, prev_raw_count, new_raw_count;

	do {
		prev_raw_count = local64_read(&hwc->prev_count);
		new_raw_count = pmu_read_counter(event);
	} while (local64_cmpxchg(&hwc->prev_count, prev_raw_count,
		 new_raw_count) != prev_raw_count);

	delta = (new_raw_count - prev_raw_count) & CCI_PMU_CNTR_MASK;

	local64_add(delta, &event->count);

	return new_raw_count;
}

static void pmu_read(struct perf_event *event)
{
	pmu_event_update(event);
}

static void pmu_event_set_period(struct perf_event *event)
{
	struct hw_perf_event *hwc = &event->hw;
	/*
	 * The CCI PMU counters have a period of 2^32. To account for the
	 * possibility of extreme interrupt latency we program for a period of
	 * half that. Hopefully we can handle the interrupt before another 2^31
	 * events occur and the counter overtakes its previous value.
	 */
	u64 val = 1ULL << 31;
	local64_set(&hwc->prev_count, val);

	/*
	 * CCI PMU uses PERF_HES_ARCH to keep track of the counters, whose
	 * values need to be sync-ed with the s/w state before the PMU is
	 * enabled.
	 * Mark this counter for sync.
	 */
	hwc->state |= PERF_HES_ARCH;
}

static irqreturn_t pmu_handle_irq(int irq_num, void *dev)
{
	unsigned long flags;
	struct cci_pmu *cci_pmu = dev;
	struct cci_pmu_hw_events *events = &cci_pmu->hw_events;
	int idx, handled = IRQ_NONE;

	raw_spin_lock_irqsave(&events->pmu_lock, flags);

	/* Disable the PMU while we walk through the counters */
	__cci_pmu_disable();
	/*
	 * Iterate over counters and update the corresponding perf events.
	 * This should work regardless of whether we have per-counter overflow
	 * interrupt or a combined overflow interrupt.
	 */
	for (idx = 0; idx <= CCI_PMU_CNTR_LAST(cci_pmu); idx++) {
		struct perf_event *event = events->events[idx];

		if (!event)
			continue;

		/* Did this counter overflow? */
		if (!(pmu_read_register(cci_pmu, idx, CCI_PMU_OVRFLW) &
		      CCI_PMU_OVRFLW_FLAG))
			continue;

		pmu_write_register(cci_pmu, CCI_PMU_OVRFLW_FLAG, idx,
				   CCI_PMU_OVRFLW);

		pmu_event_update(event);
		pmu_event_set_period(event);
		handled = IRQ_HANDLED;
	}

	/* Enable the PMU and sync possibly overflowed counters */
	__cci_pmu_enable_sync(cci_pmu);
	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);

	return IRQ_RETVAL(handled);
}

static int cci_pmu_get_hw(struct cci_pmu *cci_pmu)
{
	int ret = pmu_request_irq(cci_pmu, pmu_handle_irq);
	if (ret) {
		pmu_free_irq(cci_pmu);
		return ret;
	}
	return 0;
}

static void cci_pmu_put_hw(struct cci_pmu *cci_pmu)
{
	pmu_free_irq(cci_pmu);
}

static void hw_perf_event_destroy(struct perf_event *event)
{
	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
	atomic_t *active_events = &cci_pmu->active_events;
	struct mutex *reserve_mutex = &cci_pmu->reserve_mutex;

	if (atomic_dec_and_mutex_lock(active_events, reserve_mutex)) {
		cci_pmu_put_hw(cci_pmu);
		mutex_unlock(reserve_mutex);
	}
}

static void cci_pmu_enable(struct pmu *pmu)
{
	struct cci_pmu *cci_pmu = to_cci_pmu(pmu);
	struct cci_pmu_hw_events *hw_events = &cci_pmu->hw_events;
	int enabled = bitmap_weight(hw_events->used_mask, cci_pmu->num_cntrs);
	unsigned long flags;

	if (!enabled)
		return;

	raw_spin_lock_irqsave(&hw_events->pmu_lock, flags);
	__cci_pmu_enable_sync(cci_pmu);
	raw_spin_unlock_irqrestore(&hw_events->pmu_lock, flags);
}

static void cci_pmu_disable(struct pmu *pmu)
{
	struct cci_pmu *cci_pmu = to_cci_pmu(pmu);
	struct cci_pmu_hw_events *hw_events = &cci_pmu->hw_events;
	unsigned long flags;

	raw_spin_lock_irqsave(&hw_events->pmu_lock, flags);
	__cci_pmu_disable();
	raw_spin_unlock_irqrestore(&hw_events->pmu_lock, flags);
}

/*
 * Check if the idx represents a non-programmable counter.
 * All the fixed event counters are mapped before the programmable
 * counters.
 */
static bool pmu_fixed_hw_idx(struct cci_pmu *cci_pmu, int idx)
{
	return (idx >= 0) && (idx < cci_pmu->model->fixed_hw_cntrs);
}

static void cci_pmu_start(struct perf_event *event, int pmu_flags)
{
	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
	struct cci_pmu_hw_events *hw_events = &cci_pmu->hw_events;
	struct hw_perf_event *hwc = &event->hw;
	int idx = hwc->idx;
	unsigned long flags;

	/*
	 * To handle interrupt latency, we always reprogram the period
	 * regardless of PERF_EF_RELOAD.
	 */
	if (pmu_flags & PERF_EF_RELOAD)
		WARN_ON_ONCE(!(hwc->state & PERF_HES_UPTODATE));

	hwc->state = 0;

	if (unlikely(!pmu_is_valid_counter(cci_pmu, idx))) {
		dev_err(&cci_pmu->plat_device->dev, "Invalid CCI PMU counter %d\n", idx);
		return;
	}

	raw_spin_lock_irqsave(&hw_events->pmu_lock, flags);

	/* Configure the counter unless you are counting a fixed event */
	if (!pmu_fixed_hw_idx(cci_pmu, idx))
		pmu_set_event(cci_pmu, idx, hwc->config_base);

	pmu_event_set_period(event);
	pmu_enable_counter(cci_pmu, idx);

	raw_spin_unlock_irqrestore(&hw_events->pmu_lock, flags);
}

static void cci_pmu_stop(struct perf_event *event, int pmu_flags)
{
	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
	struct hw_perf_event *hwc = &event->hw;
	int idx = hwc->idx;

	if (hwc->state & PERF_HES_STOPPED)
		return;
	if (unlikely(!pmu_is_valid_counter(cci_pmu, idx))) {
		dev_err(&cci_pmu->plat_device->dev, "Invalid CCI PMU counter %d\n", idx);
		return;
	}

	/*
	 * We always reprogram the counter, so ignore PERF_EF_UPDATE. See
	 * cci_pmu_start()
	 */
	pmu_disable_counter(cci_pmu, idx);
	pmu_event_update(event);
	hwc->state |= PERF_HES_STOPPED | PERF_HES_UPTODATE;
}

static int cci_pmu_add(struct perf_event *event, int flags)
{
	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
	struct cci_pmu_hw_events *hw_events = &cci_pmu->hw_events;
	struct hw_perf_event *hwc = &event->hw;
	int idx;
	int err = 0;

	perf_pmu_disable(event->pmu);

	/* If we don't have a space for the counter then finish early. */
	idx = pmu_get_event_idx(hw_events, event);
	if (idx < 0) {
		err = idx;
		goto out;
	}

	event->hw.idx = idx;
	hw_events->events[idx] = event;

	hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE;
	if (flags & PERF_EF_START)
		cci_pmu_start(event, PERF_EF_RELOAD);

	/* Propagate our changes to the userspace mapping. */
	perf_event_update_userpage(event);

out:
	perf_pmu_enable(event->pmu);
	return err;
}

static void cci_pmu_del(struct perf_event *event, int flags)
{
	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
	struct cci_pmu_hw_events *hw_events = &cci_pmu->hw_events;
	struct hw_perf_event *hwc = &event->hw;
	int idx = hwc->idx;

	cci_pmu_stop(event, PERF_EF_UPDATE);
	hw_events->events[idx] = NULL;
	clear_bit(idx, hw_events->used_mask);

	perf_event_update_userpage(event);
}

static int
validate_event(struct pmu *cci_pmu,
	       struct cci_pmu_hw_events *hw_events,
	       struct perf_event *event)
{
	if (is_software_event(event))
		return 1;

	/*
	 * Reject groups spanning multiple HW PMUs (e.g. CPU + CCI). The
	 * core perf code won't check that the pmu->ctx == leader->ctx
	 * until after pmu->event_init(event).
	 */
	if (event->pmu != cci_pmu)
		return 0;

	if (event->state < PERF_EVENT_STATE_OFF)
		return 1;

	if (event->state == PERF_EVENT_STATE_OFF && !event->attr.enable_on_exec)
		return 1;

	return pmu_get_event_idx(hw_events, event) >= 0;
}

static int
validate_group(struct perf_event *event)
{
	struct perf_event *sibling, *leader = event->group_leader;
	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
	unsigned long mask[BITS_TO_LONGS(cci_pmu->num_cntrs)];
	struct cci_pmu_hw_events fake_pmu = {
		/*
		 * Initialise the fake PMU. We only need to populate the
		 * used_mask for the purposes of validation.
		 */
		.used_mask = mask,
	};
	memset(mask, 0, BITS_TO_LONGS(cci_pmu->num_cntrs) * sizeof(unsigned long));

	if (!validate_event(event->pmu, &fake_pmu, leader))
		return -EINVAL;

	for_each_sibling_event(sibling, leader) {
		if (!validate_event(event->pmu, &fake_pmu, sibling))
			return -EINVAL;
	}

	if (!validate_event(event->pmu, &fake_pmu, event))
		return -EINVAL;

	return 0;
}

static int
__hw_perf_event_init(struct perf_event *event)
{
	struct hw_perf_event *hwc = &event->hw;
	int mapping;

	mapping = pmu_map_event(event);

	if (mapping < 0) {
		pr_debug("event %x:%llx not supported\n", event->attr.type,
			 event->attr.config);
		return mapping;
	}

	/*
	 * We don't assign an index until we actually place the event onto
	 * hardware. Use -1 to signify that we haven't decided where to put it
	 * yet.
	 */
	hwc->idx		= -1;
	hwc->config_base	= 0;
	hwc->config		= 0;
	hwc->event_base		= 0;

	/*
	 * Store the event encoding into the config_base field.
	 */
	hwc->config_base	|= (unsigned long)mapping;

	/*
	 * Limit the sample_period to half of the counter width. That way, the
	 * new counter value is far less likely to overtake the previous one
	 * unless you have some serious IRQ latency issues.
	 */
	hwc->sample_period  = CCI_PMU_CNTR_MASK >> 1;
	hwc->last_period    = hwc->sample_period;
	local64_set(&hwc->period_left, hwc->sample_period);

	if (event->group_leader != event) {
		if (validate_group(event) != 0)
			return -EINVAL;
	}

	return 0;
}

static int cci_pmu_event_init(struct perf_event *event)
{
	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
	atomic_t *active_events = &cci_pmu->active_events;
	int err = 0;
	int cpu;

	if (event->attr.type != event->pmu->type)
		return -ENOENT;

	/* Shared by all CPUs, no meaningful state to sample */
	if (is_sampling_event(event) || event->attach_state & PERF_ATTACH_TASK)
		return -EOPNOTSUPP;

	/* We have no filtering of any kind */
	if (event->attr.exclude_user	||
	    event->attr.exclude_kernel	||
	    event->attr.exclude_hv	||
	    event->attr.exclude_idle	||
	    event->attr.exclude_host	||
	    event->attr.exclude_guest)
		return -EINVAL;

	/*
	 * Following the example set by other "uncore" PMUs, we accept any CPU
	 * and rewrite its affinity dynamically rather than having perf core
	 * handle cpu == -1 and pid == -1 for this case.
	 *
	 * The perf core will pin online CPUs for the duration of this call and
	 * the event being installed into its context, so the PMU's CPU can't
	 * change under our feet.
	 */
	cpu = cpumask_first(&cci_pmu->cpus);
	if (event->cpu < 0 || cpu < 0)
		return -EINVAL;
	event->cpu = cpu;

	event->destroy = hw_perf_event_destroy;
	if (!atomic_inc_not_zero(active_events)) {
		mutex_lock(&cci_pmu->reserve_mutex);
		if (atomic_read(active_events) == 0)
			err = cci_pmu_get_hw(cci_pmu);
		if (!err)
			atomic_inc(active_events);
		mutex_unlock(&cci_pmu->reserve_mutex);
	}
	if (err)
		return err;

	err = __hw_perf_event_init(event);
	if (err)
		hw_perf_event_destroy(event);

	return err;
}

static ssize_t pmu_cpumask_attr_show(struct device *dev,
				     struct device_attribute *attr, char *buf)
{
	struct pmu *pmu = dev_get_drvdata(dev);
	struct cci_pmu *cci_pmu = to_cci_pmu(pmu);

	int n = scnprintf(buf, PAGE_SIZE - 1, "%*pbl",
			  cpumask_pr_args(&cci_pmu->cpus));
	buf[n++] = '\n';
	buf[n] = '\0';
	return n;
}

static struct device_attribute pmu_cpumask_attr =
	__ATTR(cpumask, S_IRUGO, pmu_cpumask_attr_show, NULL);

static struct attribute *pmu_attrs[] = {
	&pmu_cpumask_attr.attr,
	NULL,
};

static struct attribute_group pmu_attr_group = {
	.attrs = pmu_attrs,
};

static struct attribute_group pmu_format_attr_group = {
	.name = "format",
	.attrs = NULL,		/* Filled in cci_pmu_init_attrs */
};

static struct attribute_group pmu_event_attr_group = {
	.name = "events",
	.attrs = NULL,		/* Filled in cci_pmu_init_attrs */
};

static const struct attribute_group *pmu_attr_groups[] = {
	&pmu_attr_group,
	&pmu_format_attr_group,
	&pmu_event_attr_group,
	NULL
};

static int cci_pmu_init(struct cci_pmu *cci_pmu, struct platform_device *pdev)
{
	const struct cci_pmu_model *model = cci_pmu->model;
	char *name = model->name;
	u32 num_cntrs;

	pmu_event_attr_group.attrs = model->event_attrs;
	pmu_format_attr_group.attrs = model->format_attrs;

	cci_pmu->pmu = (struct pmu) {
		.name		= cci_pmu->model->name,
		.task_ctx_nr	= perf_invalid_context,
		.pmu_enable	= cci_pmu_enable,
		.pmu_disable	= cci_pmu_disable,
		.event_init	= cci_pmu_event_init,
		.add		= cci_pmu_add,
		.del		= cci_pmu_del,
		.start		= cci_pmu_start,
		.stop		= cci_pmu_stop,
		.read		= pmu_read,
		.attr_groups	= pmu_attr_groups,
	};

	cci_pmu->plat_device = pdev;
	num_cntrs = pmu_get_max_counters();
	if (num_cntrs > cci_pmu->model->num_hw_cntrs) {
		dev_warn(&pdev->dev,
			"PMU implements more counters(%d) than supported by"
			" the model(%d), truncated.",
			num_cntrs, cci_pmu->model->num_hw_cntrs);
		num_cntrs = cci_pmu->model->num_hw_cntrs;
	}
	cci_pmu->num_cntrs = num_cntrs + cci_pmu->model->fixed_hw_cntrs;

	return perf_pmu_register(&cci_pmu->pmu, name, -1);
}

static int cci_pmu_offline_cpu(unsigned int cpu, struct hlist_node *node)
{
	struct cci_pmu *cci_pmu = hlist_entry_safe(node, struct cci_pmu, node);
	unsigned int target;

	if (!cpumask_test_and_clear_cpu(cpu, &cci_pmu->cpus))
		return 0;
	target = cpumask_any_but(cpu_online_mask, cpu);
	if (target >= nr_cpu_ids)
		return 0;
	/*
	 * TODO: migrate context once core races on event->ctx have
	 * been fixed.
	 */
	cpumask_set_cpu(target, &cci_pmu->cpus);
	return 0;
}

static struct cci_pmu_model cci_pmu_models[] = {
#ifdef CONFIG_ARM_CCI400_PMU
	[CCI400_R0] = {
		.name = "CCI_400",
		.fixed_hw_cntrs = 1,	/* Cycle counter */
		.num_hw_cntrs = 4,
		.cntr_size = SZ_4K,
		.format_attrs = cci400_pmu_format_attrs,
		.event_attrs = cci400_r0_pmu_event_attrs,
		.event_ranges = {
			[CCI_IF_SLAVE] = {
				CCI400_R0_SLAVE_PORT_MIN_EV,
				CCI400_R0_SLAVE_PORT_MAX_EV,
			},
			[CCI_IF_MASTER] = {
				CCI400_R0_MASTER_PORT_MIN_EV,
				CCI400_R0_MASTER_PORT_MAX_EV,
			},
		},
		.validate_hw_event = cci400_validate_hw_event,
		.get_event_idx = cci400_get_event_idx,
	},
	[CCI400_R1] = {
		.name = "CCI_400_r1",
		.fixed_hw_cntrs = 1,	/* Cycle counter */
		.num_hw_cntrs = 4,
		.cntr_size = SZ_4K,
		.format_attrs = cci400_pmu_format_attrs,
		.event_attrs = cci400_r1_pmu_event_attrs,
		.event_ranges = {
			[CCI_IF_SLAVE] = {
				CCI400_R1_SLAVE_PORT_MIN_EV,
				CCI400_R1_SLAVE_PORT_MAX_EV,
			},
			[CCI_IF_MASTER] = {
				CCI400_R1_MASTER_PORT_MIN_EV,
				CCI400_R1_MASTER_PORT_MAX_EV,
			},
		},
		.validate_hw_event = cci400_validate_hw_event,
		.get_event_idx = cci400_get_event_idx,
	},
#endif
#ifdef CONFIG_ARM_CCI5xx_PMU
	[CCI500_R0] = {
		.name = "CCI_500",
		.fixed_hw_cntrs = 0,
		.num_hw_cntrs = 8,
		.cntr_size = SZ_64K,
		.format_attrs = cci5xx_pmu_format_attrs,
		.event_attrs = cci5xx_pmu_event_attrs,
		.event_ranges = {
			[CCI_IF_SLAVE] = {
				CCI5xx_SLAVE_PORT_MIN_EV,
				CCI5xx_SLAVE_PORT_MAX_EV,
			},
			[CCI_IF_MASTER] = {
				CCI5xx_MASTER_PORT_MIN_EV,
				CCI5xx_MASTER_PORT_MAX_EV,
			},
			[CCI_IF_GLOBAL] = {
				CCI5xx_GLOBAL_PORT_MIN_EV,
				CCI5xx_GLOBAL_PORT_MAX_EV,
			},
		},
		.validate_hw_event = cci500_validate_hw_event,
		.write_counters	= cci5xx_pmu_write_counters,
	},
	[CCI550_R0] = {
		.name = "CCI_550",
		.fixed_hw_cntrs = 0,
		.num_hw_cntrs = 8,
		.cntr_size = SZ_64K,
		.format_attrs = cci5xx_pmu_format_attrs,
		.event_attrs = cci5xx_pmu_event_attrs,
		.event_ranges = {
			[CCI_IF_SLAVE] = {
				CCI5xx_SLAVE_PORT_MIN_EV,
				CCI5xx_SLAVE_PORT_MAX_EV,
			},
			[CCI_IF_MASTER] = {
				CCI5xx_MASTER_PORT_MIN_EV,
				CCI5xx_MASTER_PORT_MAX_EV,
			},
			[CCI_IF_GLOBAL] = {
				CCI5xx_GLOBAL_PORT_MIN_EV,
				CCI5xx_GLOBAL_PORT_MAX_EV,
			},
		},
		.validate_hw_event = cci550_validate_hw_event,
		.write_counters	= cci5xx_pmu_write_counters,
	},
#endif
};

static const struct of_device_id arm_cci_pmu_matches[] = {
#ifdef CONFIG_ARM_CCI400_PMU
	{
		.compatible = "arm,cci-400-pmu",
		.data	= NULL,
	},
	{
		.compatible = "arm,cci-400-pmu,r0",
		.data	= &cci_pmu_models[CCI400_R0],
	},
	{
		.compatible = "arm,cci-400-pmu,r1",
		.data	= &cci_pmu_models[CCI400_R1],
	},
#endif
#ifdef CONFIG_ARM_CCI5xx_PMU
	{
		.compatible = "arm,cci-500-pmu,r0",
		.data = &cci_pmu_models[CCI500_R0],
	},
	{
		.compatible = "arm,cci-550-pmu,r0",
		.data = &cci_pmu_models[CCI550_R0],
	},
#endif
	{},
};

static inline const struct cci_pmu_model *get_cci_model(struct platform_device *pdev)
{
	const struct of_device_id *match = of_match_node(arm_cci_pmu_matches,
							pdev->dev.of_node);
	if (!match)
		return NULL;
	if (match->data)
		return match->data;

	dev_warn(&pdev->dev, "DEPRECATED compatible property, "
			 "requires secure access to CCI registers");
return probe_cci_model(pdev); 1655 - } 1656 - 1657 - static bool is_duplicate_irq(int irq, int *irqs, int nr_irqs) 1658 - { 1659 - int i; 1660 - 1661 - for (i = 0; i < nr_irqs; i++) 1662 - if (irq == irqs[i]) 1663 - return true; 1664 - 1665 - return false; 1666 - } 1667 - 1668 - static struct cci_pmu *cci_pmu_alloc(struct platform_device *pdev) 1669 - { 1670 - struct cci_pmu *cci_pmu; 1671 - const struct cci_pmu_model *model; 1672 - 1673 - /* 1674 - * All allocations are devm_* hence we don't have to free 1675 - * them explicitly on an error, as it would end up in driver 1676 - * detach. 1677 - */ 1678 - model = get_cci_model(pdev); 1679 - if (!model) { 1680 - dev_warn(&pdev->dev, "CCI PMU version not supported\n"); 1681 - return ERR_PTR(-ENODEV); 1682 - } 1683 - 1684 - cci_pmu = devm_kzalloc(&pdev->dev, sizeof(*cci_pmu), GFP_KERNEL); 1685 - if (!cci_pmu) 1686 - return ERR_PTR(-ENOMEM); 1687 - 1688 - cci_pmu->model = model; 1689 - cci_pmu->irqs = devm_kcalloc(&pdev->dev, CCI_PMU_MAX_HW_CNTRS(model), 1690 - sizeof(*cci_pmu->irqs), GFP_KERNEL); 1691 - if (!cci_pmu->irqs) 1692 - return ERR_PTR(-ENOMEM); 1693 - cci_pmu->hw_events.events = devm_kcalloc(&pdev->dev, 1694 - CCI_PMU_MAX_HW_CNTRS(model), 1695 - sizeof(*cci_pmu->hw_events.events), 1696 - GFP_KERNEL); 1697 - if (!cci_pmu->hw_events.events) 1698 - return ERR_PTR(-ENOMEM); 1699 - cci_pmu->hw_events.used_mask = devm_kcalloc(&pdev->dev, 1700 - BITS_TO_LONGS(CCI_PMU_MAX_HW_CNTRS(model)), 1701 - sizeof(*cci_pmu->hw_events.used_mask), 1702 - GFP_KERNEL); 1703 - if (!cci_pmu->hw_events.used_mask) 1704 - return ERR_PTR(-ENOMEM); 1705 - 1706 - return cci_pmu; 1707 - } 1708 - 1709 - 1710 - static int cci_pmu_probe(struct platform_device *pdev) 1711 - { 1712 - struct resource *res; 1713 - struct cci_pmu *cci_pmu; 1714 - int i, ret, irq; 1715 - 1716 - cci_pmu = cci_pmu_alloc(pdev); 1717 - if (IS_ERR(cci_pmu)) 1718 - return PTR_ERR(cci_pmu); 1719 - 1720 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1721 - 
cci_pmu->base = devm_ioremap_resource(&pdev->dev, res); 1722 - if (IS_ERR(cci_pmu->base)) 1723 - return -ENOMEM; 1724 - 1725 - /* 1726 - * CCI PMU has one overflow interrupt per counter; but some may be tied 1727 - * together to a common interrupt. 1728 - */ 1729 - cci_pmu->nr_irqs = 0; 1730 - for (i = 0; i < CCI_PMU_MAX_HW_CNTRS(cci_pmu->model); i++) { 1731 - irq = platform_get_irq(pdev, i); 1732 - if (irq < 0) 1733 - break; 1734 - 1735 - if (is_duplicate_irq(irq, cci_pmu->irqs, cci_pmu->nr_irqs)) 1736 - continue; 1737 - 1738 - cci_pmu->irqs[cci_pmu->nr_irqs++] = irq; 1739 - } 1740 - 1741 - /* 1742 - * Ensure that the device tree has as many interrupts as the number 1743 - * of counters. 1744 - */ 1745 - if (i < CCI_PMU_MAX_HW_CNTRS(cci_pmu->model)) { 1746 - dev_warn(&pdev->dev, "In-correct number of interrupts: %d, should be %d\n", 1747 - i, CCI_PMU_MAX_HW_CNTRS(cci_pmu->model)); 1748 - return -EINVAL; 1749 - } 1750 - 1751 - raw_spin_lock_init(&cci_pmu->hw_events.pmu_lock); 1752 - mutex_init(&cci_pmu->reserve_mutex); 1753 - atomic_set(&cci_pmu->active_events, 0); 1754 - cpumask_set_cpu(get_cpu(), &cci_pmu->cpus); 1755 - 1756 - ret = cci_pmu_init(cci_pmu, pdev); 1757 - if (ret) { 1758 - put_cpu(); 1759 - return ret; 1760 - } 1761 - 1762 - cpuhp_state_add_instance_nocalls(CPUHP_AP_PERF_ARM_CCI_ONLINE, 1763 - &cci_pmu->node); 1764 - put_cpu(); 1765 - pr_info("ARM %s PMU driver probed", cci_pmu->model->name); 1766 - return 0; 1767 - } 1768 72 1769 73 static int cci_platform_probe(struct platform_device *pdev) 1770 74 { 1771 75 if (!cci_probed()) 1772 76 return -ENODEV; 1773 77 1774 - return of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev); 78 + return of_platform_populate(pdev->dev.of_node, NULL, 79 + arm_cci_auxdata, &pdev->dev); 1775 80 } 1776 - 1777 - static struct platform_driver cci_pmu_driver = { 1778 - .driver = { 1779 - .name = DRIVER_NAME_PMU, 1780 - .of_match_table = arm_cci_pmu_matches, 1781 - }, 1782 - .probe = cci_pmu_probe, 1783 - }; 1784 
81 1785 82 static struct platform_driver cci_platform_driver = { 1786 83 .driver = { ··· 85 1796 86 1797 static int __init cci_platform_init(void) 87 1798 { 88 - int ret; 89 - 90 - ret = cpuhp_setup_state_multi(CPUHP_AP_PERF_ARM_CCI_ONLINE, 91 - "perf/arm/cci:online", NULL, 92 - cci_pmu_offline_cpu); 93 - if (ret) 94 - return ret; 95 - 96 - ret = platform_driver_register(&cci_pmu_driver); 97 - if (ret) 98 - return ret; 99 - 100 1799 return platform_driver_register(&cci_platform_driver); 101 1800 } 102 - 103 - #else /* !CONFIG_ARM_CCI_PMU */ 104 - 105 - static int __init cci_platform_init(void) 106 - { 107 - return 0; 108 - } 109 - 110 - #endif /* CONFIG_ARM_CCI_PMU */ 111 1801 112 1802 #ifdef CONFIG_ARM_CCI400_PORT_CTRL 113 1803 ··· 457 2189 if (!ports) 458 2190 return -ENOMEM; 459 2191 460 - for_each_child_of_node(np, cp) { 2192 + for_each_available_child_of_node(np, cp) { 461 2193 if (!of_match_node(arm_cci_ctrl_if_matches, cp)) 462 - continue; 463 - 464 - if (!of_device_is_available(cp)) 465 2194 continue; 466 2195 467 2196 i = nb_ace + nb_ace_lite; ··· 540 2275 struct resource res; 541 2276 542 2277 np = of_find_matching_node(NULL, arm_cci_matches); 543 - if(!np || !of_device_is_available(np)) 2278 + if (!of_device_is_available(np)) 544 2279 return -ENODEV; 545 2280 546 2281 ret = of_address_to_resource(np, 0, &res);
drivers/bus/arm-ccn.c → drivers/perf/arm-ccn.c (renamed)

drivers/clk/Kconfig (+10)
··· 62 62 multi-function device has one fixed-rate oscillator, clocked 63 63 at 32KHz. 64 64 65 + config COMMON_CLK_SCMI 66 + tristate "Clock driver controlled via SCMI interface" 67 + depends on ARM_SCMI_PROTOCOL || COMPILE_TEST 68 + ---help--- 69 + This driver provides support for clocks that are controlled 70 + by firmware that implements the SCMI interface. 71 + 72 + This driver uses SCMI Message Protocol to interact with the 73 + firmware providing all the clock controls. 74 + 65 75 config COMMON_CLK_SCPI 66 76 tristate "Clock driver controlled via SCPI interface" 67 77 depends on ARM_SCPI_PROTOCOL || COMPILE_TEST
drivers/clk/Makefile (+1)
··· 41 41 obj-$(CONFIG_COMMON_CLK_RK808) += clk-rk808.o 42 42 obj-$(CONFIG_COMMON_CLK_HI655X) += clk-hi655x.o 43 43 obj-$(CONFIG_COMMON_CLK_S2MPS11) += clk-s2mps11.o 44 + obj-$(CONFIG_COMMON_CLK_SCMI) += clk-scmi.o 44 45 obj-$(CONFIG_COMMON_CLK_SCPI) += clk-scpi.o 45 46 obj-$(CONFIG_COMMON_CLK_SI5351) += clk-si5351.o 46 47 obj-$(CONFIG_COMMON_CLK_SI514) += clk-si514.o
drivers/clk/clk-scmi.c (+194)
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * System Control and Power Interface (SCMI) Protocol based clock driver 4 + * 5 + * Copyright (C) 2018 ARM Ltd. 6 + */ 7 + 8 + #include <linux/clk-provider.h> 9 + #include <linux/device.h> 10 + #include <linux/err.h> 11 + #include <linux/of.h> 12 + #include <linux/module.h> 13 + #include <linux/scmi_protocol.h> 14 + #include <asm/div64.h> 15 + 16 + struct scmi_clk { 17 + u32 id; 18 + struct clk_hw hw; 19 + const struct scmi_clock_info *info; 20 + const struct scmi_handle *handle; 21 + }; 22 + 23 + #define to_scmi_clk(clk) container_of(clk, struct scmi_clk, hw) 24 + 25 + static unsigned long scmi_clk_recalc_rate(struct clk_hw *hw, 26 + unsigned long parent_rate) 27 + { 28 + int ret; 29 + u64 rate; 30 + struct scmi_clk *clk = to_scmi_clk(hw); 31 + 32 + ret = clk->handle->clk_ops->rate_get(clk->handle, clk->id, &rate); 33 + if (ret) 34 + return 0; 35 + return rate; 36 + } 37 + 38 + static long scmi_clk_round_rate(struct clk_hw *hw, unsigned long rate, 39 + unsigned long *parent_rate) 40 + { 41 + int step; 42 + u64 fmin, fmax, ftmp; 43 + struct scmi_clk *clk = to_scmi_clk(hw); 44 + 45 + /* 46 + * We can't figure out what rate it will be, so just return the 47 + * rate back to the caller. scmi_clk_recalc_rate() will be called 48 + * after the rate is set and we'll know what rate the clock is 49 + * running at then. 
50 + */ 51 + if (clk->info->rate_discrete) 52 + return rate; 53 + 54 + fmin = clk->info->range.min_rate; 55 + fmax = clk->info->range.max_rate; 56 + if (rate <= fmin) 57 + return fmin; 58 + else if (rate >= fmax) 59 + return fmax; 60 + 61 + ftmp = rate - fmin; 62 + ftmp += clk->info->range.step_size - 1; /* to round up */ 63 + step = do_div(ftmp, clk->info->range.step_size); 64 + 65 + return step * clk->info->range.step_size + fmin; 66 + } 67 + 68 + static int scmi_clk_set_rate(struct clk_hw *hw, unsigned long rate, 69 + unsigned long parent_rate) 70 + { 71 + struct scmi_clk *clk = to_scmi_clk(hw); 72 + 73 + return clk->handle->clk_ops->rate_set(clk->handle, clk->id, 0, rate); 74 + } 75 + 76 + static int scmi_clk_enable(struct clk_hw *hw) 77 + { 78 + struct scmi_clk *clk = to_scmi_clk(hw); 79 + 80 + return clk->handle->clk_ops->enable(clk->handle, clk->id); 81 + } 82 + 83 + static void scmi_clk_disable(struct clk_hw *hw) 84 + { 85 + struct scmi_clk *clk = to_scmi_clk(hw); 86 + 87 + clk->handle->clk_ops->disable(clk->handle, clk->id); 88 + } 89 + 90 + static const struct clk_ops scmi_clk_ops = { 91 + .recalc_rate = scmi_clk_recalc_rate, 92 + .round_rate = scmi_clk_round_rate, 93 + .set_rate = scmi_clk_set_rate, 94 + /* 95 + * We can't provide enable/disable callback as we can't perform the same 96 + * in atomic context. Since the clock framework provides standard API 97 + * clk_prepare_enable that helps cases using clk_enable in non-atomic 98 + * context, it should be fine providing prepare/unprepare. 
99 + */ 100 + .prepare = scmi_clk_enable, 101 + .unprepare = scmi_clk_disable, 102 + }; 103 + 104 + static int scmi_clk_ops_init(struct device *dev, struct scmi_clk *sclk) 105 + { 106 + int ret; 107 + struct clk_init_data init = { 108 + .flags = CLK_GET_RATE_NOCACHE, 109 + .num_parents = 0, 110 + .ops = &scmi_clk_ops, 111 + .name = sclk->info->name, 112 + }; 113 + 114 + sclk->hw.init = &init; 115 + ret = devm_clk_hw_register(dev, &sclk->hw); 116 + if (!ret) 117 + clk_hw_set_rate_range(&sclk->hw, sclk->info->range.min_rate, 118 + sclk->info->range.max_rate); 119 + return ret; 120 + } 121 + 122 + static int scmi_clocks_probe(struct scmi_device *sdev) 123 + { 124 + int idx, count, err; 125 + struct clk_hw **hws; 126 + struct clk_hw_onecell_data *clk_data; 127 + struct device *dev = &sdev->dev; 128 + struct device_node *np = dev->of_node; 129 + const struct scmi_handle *handle = sdev->handle; 130 + 131 + if (!handle || !handle->clk_ops) 132 + return -ENODEV; 133 + 134 + count = handle->clk_ops->count_get(handle); 135 + if (count < 0) { 136 + dev_err(dev, "%s: invalid clock output count\n", np->name); 137 + return -EINVAL; 138 + } 139 + 140 + clk_data = devm_kzalloc(dev, sizeof(*clk_data) + 141 + sizeof(*clk_data->hws) * count, GFP_KERNEL); 142 + if (!clk_data) 143 + return -ENOMEM; 144 + 145 + clk_data->num = count; 146 + hws = clk_data->hws; 147 + 148 + for (idx = 0; idx < count; idx++) { 149 + struct scmi_clk *sclk; 150 + 151 + sclk = devm_kzalloc(dev, sizeof(*sclk), GFP_KERNEL); 152 + if (!sclk) 153 + return -ENOMEM; 154 + 155 + sclk->info = handle->clk_ops->info_get(handle, idx); 156 + if (!sclk->info) { 157 + dev_dbg(dev, "invalid clock info for idx %d\n", idx); 158 + continue; 159 + } 160 + 161 + sclk->id = idx; 162 + sclk->handle = handle; 163 + 164 + err = scmi_clk_ops_init(dev, sclk); 165 + if (err) { 166 + dev_err(dev, "failed to register clock %d\n", idx); 167 + devm_kfree(dev, sclk); 168 + hws[idx] = NULL; 169 + } else { 170 + dev_dbg(dev, "Registered 
clock:%s\n", sclk->info->name); 171 + hws[idx] = &sclk->hw; 172 + } 173 + } 174 + 175 + return devm_of_clk_add_hw_provider(dev, of_clk_hw_onecell_get, 176 + clk_data); 177 + } 178 + 179 + static const struct scmi_device_id scmi_id_table[] = { 180 + { SCMI_PROTOCOL_CLOCK }, 181 + { }, 182 + }; 183 + MODULE_DEVICE_TABLE(scmi, scmi_id_table); 184 + 185 + static struct scmi_driver scmi_clocks_driver = { 186 + .name = "scmi-clocks", 187 + .probe = scmi_clocks_probe, 188 + .id_table = scmi_id_table, 189 + }; 190 + module_scmi_driver(scmi_clocks_driver); 191 + 192 + MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>"); 193 + MODULE_DESCRIPTION("ARM SCMI clock driver"); 194 + MODULE_LICENSE("GPL v2");
drivers/cpufreq/Kconfig.arm (+12)
··· 239 239 config ARM_SA1110_CPUFREQ 240 240 bool 241 241 242 + config ARM_SCMI_CPUFREQ 243 + tristate "SCMI based CPUfreq driver" 244 + depends on ARM_SCMI_PROTOCOL || COMPILE_TEST 245 + depends on !CPU_THERMAL || THERMAL 246 + select PM_OPP 247 + help 248 + This adds the CPUfreq driver support for ARM platforms using SCMI 249 + protocol for CPU power management. 250 + 251 + This driver uses SCMI Message Protocol driver to interact with the 252 + firmware providing the CPU DVFS functionality. 253 + 242 254 config ARM_SPEAR_CPUFREQ 243 255 bool "SPEAr CPUFreq support" 244 256 depends on PLAT_SPEAR
drivers/cpufreq/Makefile (+1)
··· 75 75 obj-$(CONFIG_ARM_S5PV210_CPUFREQ) += s5pv210-cpufreq.o 76 76 obj-$(CONFIG_ARM_SA1100_CPUFREQ) += sa1100-cpufreq.o 77 77 obj-$(CONFIG_ARM_SA1110_CPUFREQ) += sa1110-cpufreq.o 78 + obj-$(CONFIG_ARM_SCMI_CPUFREQ) += scmi-cpufreq.o 78 79 obj-$(CONFIG_ARM_SCPI_CPUFREQ) += scpi-cpufreq.o 79 80 obj-$(CONFIG_ARM_SPEAR_CPUFREQ) += spear-cpufreq.o 80 81 obj-$(CONFIG_ARM_STI_CPUFREQ) += sti-cpufreq.o
drivers/cpufreq/scmi-cpufreq.c (+264)
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * System Control and Power Interface (SCMI) based CPUFreq Interface driver 4 + * 5 + * Copyright (C) 2018 ARM Ltd. 6 + * Sudeep Holla <sudeep.holla@arm.com> 7 + */ 8 + 9 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 10 + 11 + #include <linux/cpu.h> 12 + #include <linux/cpufreq.h> 13 + #include <linux/cpumask.h> 14 + #include <linux/cpu_cooling.h> 15 + #include <linux/export.h> 16 + #include <linux/module.h> 17 + #include <linux/pm_opp.h> 18 + #include <linux/slab.h> 19 + #include <linux/scmi_protocol.h> 20 + #include <linux/types.h> 21 + 22 + struct scmi_data { 23 + int domain_id; 24 + struct device *cpu_dev; 25 + struct thermal_cooling_device *cdev; 26 + }; 27 + 28 + static const struct scmi_handle *handle; 29 + 30 + static unsigned int scmi_cpufreq_get_rate(unsigned int cpu) 31 + { 32 + struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu); 33 + struct scmi_perf_ops *perf_ops = handle->perf_ops; 34 + struct scmi_data *priv = policy->driver_data; 35 + unsigned long rate; 36 + int ret; 37 + 38 + ret = perf_ops->freq_get(handle, priv->domain_id, &rate, false); 39 + if (ret) 40 + return 0; 41 + return rate / 1000; 42 + } 43 + 44 + /* 45 + * perf_ops->freq_set is not a synchronous, the actual OPP change will 46 + * happen asynchronously and can get notified if the events are 47 + * subscribed for by the SCMI firmware 48 + */ 49 + static int 50 + scmi_cpufreq_set_target(struct cpufreq_policy *policy, unsigned int index) 51 + { 52 + int ret; 53 + struct scmi_data *priv = policy->driver_data; 54 + struct scmi_perf_ops *perf_ops = handle->perf_ops; 55 + u64 freq = policy->freq_table[index].frequency * 1000; 56 + 57 + ret = perf_ops->freq_set(handle, priv->domain_id, freq, false); 58 + if (!ret) 59 + arch_set_freq_scale(policy->related_cpus, freq, 60 + policy->cpuinfo.max_freq); 61 + return ret; 62 + } 63 + 64 + static unsigned int scmi_cpufreq_fast_switch(struct cpufreq_policy *policy, 65 + unsigned int 
target_freq) 66 + { 67 + struct scmi_data *priv = policy->driver_data; 68 + struct scmi_perf_ops *perf_ops = handle->perf_ops; 69 + 70 + if (!perf_ops->freq_set(handle, priv->domain_id, 71 + target_freq * 1000, true)) { 72 + arch_set_freq_scale(policy->related_cpus, target_freq, 73 + policy->cpuinfo.max_freq); 74 + return target_freq; 75 + } 76 + 77 + return 0; 78 + } 79 + 80 + static int 81 + scmi_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask) 82 + { 83 + int cpu, domain, tdomain; 84 + struct device *tcpu_dev; 85 + 86 + domain = handle->perf_ops->device_domain_id(cpu_dev); 87 + if (domain < 0) 88 + return domain; 89 + 90 + for_each_possible_cpu(cpu) { 91 + if (cpu == cpu_dev->id) 92 + continue; 93 + 94 + tcpu_dev = get_cpu_device(cpu); 95 + if (!tcpu_dev) 96 + continue; 97 + 98 + tdomain = handle->perf_ops->device_domain_id(tcpu_dev); 99 + if (tdomain == domain) 100 + cpumask_set_cpu(cpu, cpumask); 101 + } 102 + 103 + return 0; 104 + } 105 + 106 + static int scmi_cpufreq_init(struct cpufreq_policy *policy) 107 + { 108 + int ret; 109 + unsigned int latency; 110 + struct device *cpu_dev; 111 + struct scmi_data *priv; 112 + struct cpufreq_frequency_table *freq_table; 113 + 114 + cpu_dev = get_cpu_device(policy->cpu); 115 + if (!cpu_dev) { 116 + pr_err("failed to get cpu%d device\n", policy->cpu); 117 + return -ENODEV; 118 + } 119 + 120 + ret = handle->perf_ops->add_opps_to_device(handle, cpu_dev); 121 + if (ret) { 122 + dev_warn(cpu_dev, "failed to add opps to the device\n"); 123 + return ret; 124 + } 125 + 126 + ret = scmi_get_sharing_cpus(cpu_dev, policy->cpus); 127 + if (ret) { 128 + dev_warn(cpu_dev, "failed to get sharing cpumask\n"); 129 + return ret; 130 + } 131 + 132 + ret = dev_pm_opp_set_sharing_cpus(cpu_dev, policy->cpus); 133 + if (ret) { 134 + dev_err(cpu_dev, "%s: failed to mark OPPs as shared: %d\n", 135 + __func__, ret); 136 + return ret; 137 + } 138 + 139 + ret = dev_pm_opp_get_opp_count(cpu_dev); 140 + if (ret <= 0) { 141 + 
dev_dbg(cpu_dev, "OPP table is not ready, deferring probe\n"); 142 + ret = -EPROBE_DEFER; 143 + goto out_free_opp; 144 + } 145 + 146 + priv = kzalloc(sizeof(*priv), GFP_KERNEL); 147 + if (!priv) { 148 + ret = -ENOMEM; 149 + goto out_free_opp; 150 + } 151 + 152 + ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table); 153 + if (ret) { 154 + dev_err(cpu_dev, "failed to init cpufreq table: %d\n", ret); 155 + goto out_free_priv; 156 + } 157 + 158 + priv->cpu_dev = cpu_dev; 159 + priv->domain_id = handle->perf_ops->device_domain_id(cpu_dev); 160 + 161 + policy->driver_data = priv; 162 + 163 + ret = cpufreq_table_validate_and_show(policy, freq_table); 164 + if (ret) { 165 + dev_err(cpu_dev, "%s: invalid frequency table: %d\n", __func__, 166 + ret); 167 + goto out_free_cpufreq_table; 168 + } 169 + 170 + /* SCMI allows DVFS request for any domain from any CPU */ 171 + policy->dvfs_possible_from_any_cpu = true; 172 + 173 + latency = handle->perf_ops->get_transition_latency(handle, cpu_dev); 174 + if (!latency) 175 + latency = CPUFREQ_ETERNAL; 176 + 177 + policy->cpuinfo.transition_latency = latency; 178 + 179 + policy->fast_switch_possible = true; 180 + return 0; 181 + 182 + out_free_cpufreq_table: 183 + dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table); 184 + out_free_priv: 185 + kfree(priv); 186 + out_free_opp: 187 + dev_pm_opp_cpumask_remove_table(policy->cpus); 188 + 189 + return ret; 190 + } 191 + 192 + static int scmi_cpufreq_exit(struct cpufreq_policy *policy) 193 + { 194 + struct scmi_data *priv = policy->driver_data; 195 + 196 + cpufreq_cooling_unregister(priv->cdev); 197 + dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table); 198 + kfree(priv); 199 + dev_pm_opp_cpumask_remove_table(policy->related_cpus); 200 + 201 + return 0; 202 + } 203 + 204 + static void scmi_cpufreq_ready(struct cpufreq_policy *policy) 205 + { 206 + struct scmi_data *priv = policy->driver_data; 207 + 208 + priv->cdev = of_cpufreq_cooling_register(policy); 209 + } 210 + 211 + 
static struct cpufreq_driver scmi_cpufreq_driver = { 212 + .name = "scmi", 213 + .flags = CPUFREQ_STICKY | CPUFREQ_HAVE_GOVERNOR_PER_POLICY | 214 + CPUFREQ_NEED_INITIAL_FREQ_CHECK, 215 + .verify = cpufreq_generic_frequency_table_verify, 216 + .attr = cpufreq_generic_attr, 217 + .target_index = scmi_cpufreq_set_target, 218 + .fast_switch = scmi_cpufreq_fast_switch, 219 + .get = scmi_cpufreq_get_rate, 220 + .init = scmi_cpufreq_init, 221 + .exit = scmi_cpufreq_exit, 222 + .ready = scmi_cpufreq_ready, 223 + }; 224 + 225 + static int scmi_cpufreq_probe(struct scmi_device *sdev) 226 + { 227 + int ret; 228 + 229 + handle = sdev->handle; 230 + 231 + if (!handle || !handle->perf_ops) 232 + return -ENODEV; 233 + 234 + ret = cpufreq_register_driver(&scmi_cpufreq_driver); 235 + if (ret) { 236 + dev_err(&sdev->dev, "%s: registering cpufreq failed, err: %d\n", 237 + __func__, ret); 238 + } 239 + 240 + return ret; 241 + } 242 + 243 + static void scmi_cpufreq_remove(struct scmi_device *sdev) 244 + { 245 + cpufreq_unregister_driver(&scmi_cpufreq_driver); 246 + } 247 + 248 + static const struct scmi_device_id scmi_id_table[] = { 249 + { SCMI_PROTOCOL_PERF }, 250 + { }, 251 + }; 252 + MODULE_DEVICE_TABLE(scmi, scmi_id_table); 253 + 254 + static struct scmi_driver scmi_cpufreq_drv = { 255 + .name = "scmi-cpufreq", 256 + .probe = scmi_cpufreq_probe, 257 + .remove = scmi_cpufreq_remove, 258 + .id_table = scmi_id_table, 259 + }; 260 + module_scmi_driver(scmi_cpufreq_drv); 261 + 262 + MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>"); 263 + MODULE_DESCRIPTION("ARM SCMI CPUFreq interface driver"); 264 + MODULE_LICENSE("GPL v2");
drivers/firmware/Kconfig (+34)
··· 19 19 on and off through hotplug, so for now torture tests and PSCI checker 20 20 are mutually exclusive. 21 21 22 + config ARM_SCMI_PROTOCOL 23 + bool "ARM System Control and Management Interface (SCMI) Message Protocol" 24 + depends on ARM || ARM64 || COMPILE_TEST 25 + depends on MAILBOX 26 + help 27 + The ARM System Control and Management Interface (SCMI) protocol is 28 + a set of operating system-independent software interfaces that are 29 + used in system management. SCMI is extensible and currently provides 30 + interfaces for: discovery and self-description of the interfaces 31 + it supports; power domain management, which is the ability to place 32 + a given device or domain into the various power-saving states that 33 + it supports; performance management, which is the ability to control 34 + the performance of a domain that is composed of compute engines 35 + such as application processors and other accelerators; clock 36 + management, which is the ability to set and inquire rates on platform 37 + managed clocks; and sensor management, which is the ability to read 38 + sensor data and be notified of sensor values. 39 + 40 + This protocol library provides an interface for all the client 41 + drivers making use of the features offered by SCMI. 42 + 43 + config ARM_SCMI_POWER_DOMAIN 44 + tristate "SCMI power domain driver" 45 + depends on ARM_SCMI_PROTOCOL || (COMPILE_TEST && OF) 46 + default y 47 + select PM_GENERIC_DOMAINS if PM 48 + help 49 + This enables support for the SCMI power domains which can be 50 + enabled or disabled via the SCP firmware. 51 + 52 + This driver can also be built as a module. If so, the module 53 + will be called scmi_pm_domain. Note this may be needed early in 54 + boot, before the rootfs is available. 55 + 22 56 config ARM_SCPI_PROTOCOL 23 57 tristate "ARM System Control and Power Interface (SCPI) Message Protocol" 24 58 depends on ARM || ARM64 || COMPILE_TEST
drivers/firmware/Makefile (+1)
··· 25 25 CFLAGS_qcom_scm-32.o :=$(call as-instr,.arch armv7-a\n.arch_extension sec,-DREQUIRES_SEC=1) -march=armv7-a 26 26 obj-$(CONFIG_TI_SCI_PROTOCOL) += ti_sci.o 27 27 28 + obj-$(CONFIG_ARM_SCMI_PROTOCOL) += arm_scmi/ 28 29 obj-y += broadcom/ 29 30 obj-y += meson/ 30 31 obj-$(CONFIG_GOOGLE_FIRMWARE) += google/
drivers/firmware/arm_scmi/Makefile (+5)
··· 1 + obj-y = scmi-bus.o scmi-driver.o scmi-protocols.o 2 + scmi-bus-y = bus.o 3 + scmi-driver-y = driver.o 4 + scmi-protocols-y = base.o clock.o perf.o power.o sensors.o 5 + obj-$(CONFIG_ARM_SCMI_POWER_DOMAIN) += scmi_pm_domain.o
drivers/firmware/arm_scmi/base.c (+253)
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * System Control and Management Interface (SCMI) Base Protocol 4 + * 5 + * Copyright (C) 2018 ARM Ltd. 6 + */ 7 + 8 + #include "common.h" 9 + 10 + enum scmi_base_protocol_cmd { 11 + BASE_DISCOVER_VENDOR = 0x3, 12 + BASE_DISCOVER_SUB_VENDOR = 0x4, 13 + BASE_DISCOVER_IMPLEMENT_VERSION = 0x5, 14 + BASE_DISCOVER_LIST_PROTOCOLS = 0x6, 15 + BASE_DISCOVER_AGENT = 0x7, 16 + BASE_NOTIFY_ERRORS = 0x8, 17 + }; 18 + 19 + struct scmi_msg_resp_base_attributes { 20 + u8 num_protocols; 21 + u8 num_agents; 22 + __le16 reserved; 23 + }; 24 + 25 + /** 26 + * scmi_base_attributes_get() - gets the implementation details 27 + * that are associated with the base protocol. 28 + * 29 + * @handle - SCMI entity handle 30 + * 31 + * Return: 0 on success, else appropriate SCMI error. 32 + */ 33 + static int scmi_base_attributes_get(const struct scmi_handle *handle) 34 + { 35 + int ret; 36 + struct scmi_xfer *t; 37 + struct scmi_msg_resp_base_attributes *attr_info; 38 + struct scmi_revision_info *rev = handle->version; 39 + 40 + ret = scmi_one_xfer_init(handle, PROTOCOL_ATTRIBUTES, 41 + SCMI_PROTOCOL_BASE, 0, sizeof(*attr_info), &t); 42 + if (ret) 43 + return ret; 44 + 45 + ret = scmi_do_xfer(handle, t); 46 + if (!ret) { 47 + attr_info = t->rx.buf; 48 + rev->num_protocols = attr_info->num_protocols; 49 + rev->num_agents = attr_info->num_agents; 50 + } 51 + 52 + scmi_one_xfer_put(handle, t); 53 + return ret; 54 + } 55 + 56 + /** 57 + * scmi_base_vendor_id_get() - gets vendor/subvendor identifier ASCII string. 58 + * 59 + * @handle - SCMI entity handle 60 + * @sub_vendor - specify true if sub-vendor ID is needed 61 + * 62 + * Return: 0 on success, else appropriate SCMI error. 
 */
static int
scmi_base_vendor_id_get(const struct scmi_handle *handle, bool sub_vendor)
{
        u8 cmd;
        int ret, size;
        char *vendor_id;
        struct scmi_xfer *t;
        struct scmi_revision_info *rev = handle->version;

        if (sub_vendor) {
                cmd = BASE_DISCOVER_SUB_VENDOR;
                vendor_id = rev->sub_vendor_id;
                size = ARRAY_SIZE(rev->sub_vendor_id);
        } else {
                cmd = BASE_DISCOVER_VENDOR;
                vendor_id = rev->vendor_id;
                size = ARRAY_SIZE(rev->vendor_id);
        }

        ret = scmi_one_xfer_init(handle, cmd, SCMI_PROTOCOL_BASE, 0, size, &t);
        if (ret)
                return ret;

        ret = scmi_do_xfer(handle, t);
        if (!ret)
                memcpy(vendor_id, t->rx.buf, size);

        scmi_one_xfer_put(handle, t);
        return ret;
}

/**
 * scmi_base_implementation_version_get() - gets a vendor-specific
 *      implementation 32-bit version. The format of the version number is
 *      vendor-specific
 *
 * @handle: SCMI entity handle
 *
 * Return: 0 on success, else appropriate SCMI error.
 */
static int
scmi_base_implementation_version_get(const struct scmi_handle *handle)
{
        int ret;
        __le32 *impl_ver;
        struct scmi_xfer *t;
        struct scmi_revision_info *rev = handle->version;

        ret = scmi_one_xfer_init(handle, BASE_DISCOVER_IMPLEMENT_VERSION,
                                 SCMI_PROTOCOL_BASE, 0, sizeof(*impl_ver), &t);
        if (ret)
                return ret;

        ret = scmi_do_xfer(handle, t);
        if (!ret) {
                impl_ver = t->rx.buf;
                rev->impl_ver = le32_to_cpu(*impl_ver);
        }

        scmi_one_xfer_put(handle, t);
        return ret;
}

/**
 * scmi_base_implementation_list_get() - gets the list of protocols the
 *      OSPM is allowed to access
 *
 * @handle: SCMI entity handle
 * @protocols_imp: pointer to hold the list of protocol identifiers
 *
 * Return: 0 on success, else appropriate SCMI error.
 */
static int scmi_base_implementation_list_get(const struct scmi_handle *handle,
                                             u8 *protocols_imp)
{
        u8 *list;
        int ret, loop;
        struct scmi_xfer *t;
        __le32 *num_skip, *num_ret;
        u32 tot_num_ret = 0, loop_num_ret;
        struct device *dev = handle->dev;

        ret = scmi_one_xfer_init(handle, BASE_DISCOVER_LIST_PROTOCOLS,
                                 SCMI_PROTOCOL_BASE, sizeof(*num_skip), 0, &t);
        if (ret)
                return ret;

        num_skip = t->tx.buf;
        num_ret = t->rx.buf;
        list = t->rx.buf + sizeof(*num_ret);

        do {
                /* Set the number of protocols to be skipped/already read */
                *num_skip = cpu_to_le32(tot_num_ret);

                ret = scmi_do_xfer(handle, t);
                if (ret)
                        break;

                loop_num_ret = le32_to_cpu(*num_ret);
                if (tot_num_ret + loop_num_ret > MAX_PROTOCOLS_IMP) {
                        dev_err(dev, "No. of protocols > MAX_PROTOCOLS_IMP");
                        break;
                }

                for (loop = 0; loop < loop_num_ret; loop++)
                        protocols_imp[tot_num_ret + loop] = *(list + loop);

                tot_num_ret += loop_num_ret;
        } while (loop_num_ret);

        scmi_one_xfer_put(handle, t);
        return ret;
}

/**
 * scmi_base_discover_agent_get() - discover the name of an agent
 *
 * @handle: SCMI entity handle
 * @id: Agent identifier
 * @name: Agent identifier ASCII string
 *
 * An agent id of 0 is reserved to identify the platform itself.
 * Generally the operating system is represented as "OSPM".
 *
 * Return: 0 on success, else appropriate SCMI error.
 */
static int scmi_base_discover_agent_get(const struct scmi_handle *handle,
                                        int id, char *name)
{
        int ret;
        struct scmi_xfer *t;

        ret = scmi_one_xfer_init(handle, BASE_DISCOVER_AGENT,
                                 SCMI_PROTOCOL_BASE, sizeof(__le32),
                                 SCMI_MAX_STR_SIZE, &t);
        if (ret)
                return ret;

        *(__le32 *)t->tx.buf = cpu_to_le32(id);

        ret = scmi_do_xfer(handle, t);
        if (!ret)
                memcpy(name, t->rx.buf, SCMI_MAX_STR_SIZE);

        scmi_one_xfer_put(handle, t);
        return ret;
}

int scmi_base_protocol_init(struct scmi_handle *h)
{
        int id, ret;
        u8 *prot_imp;
        u32 version;
        char name[SCMI_MAX_STR_SIZE];
        const struct scmi_handle *handle = h;
        struct device *dev = handle->dev;
        struct scmi_revision_info *rev = handle->version;

        ret = scmi_version_get(handle, SCMI_PROTOCOL_BASE, &version);
        if (ret)
                return ret;

        prot_imp = devm_kcalloc(dev, MAX_PROTOCOLS_IMP, sizeof(u8), GFP_KERNEL);
        if (!prot_imp)
                return -ENOMEM;

        rev->major_ver = PROTOCOL_REV_MAJOR(version);
        rev->minor_ver = PROTOCOL_REV_MINOR(version);

        scmi_base_attributes_get(handle);
        scmi_base_vendor_id_get(handle, false);
        scmi_base_vendor_id_get(handle, true);
        scmi_base_implementation_version_get(handle);
        scmi_base_implementation_list_get(handle, prot_imp);
        scmi_setup_protocol_implemented(handle, prot_imp);

        dev_info(dev, "SCMI Protocol v%d.%d '%s:%s' Firmware version 0x%x\n",
                 rev->major_ver, rev->minor_ver, rev->vendor_id,
                 rev->sub_vendor_id, rev->impl_ver);
        dev_dbg(dev, "Found %d protocol(s) %d agent(s)\n", rev->num_protocols,
                rev->num_agents);

        for (id = 0; id < rev->num_agents; id++) {
                scmi_base_discover_agent_get(handle, id, name);
                dev_dbg(dev, "Agent %d: %s\n", id, name);
        }

        return 0;
}
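The BASE_DISCOVER_LIST_PROTOCOLS exchange above is paginated: each reply carries the number of identifiers returned plus the number still remaining, and the loop re-issues the command with a skip count until nothing remains. A minimal userspace sketch of that accumulation loop, with a hypothetical `fw_discover_list()` standing in for the firmware reply (same returned/remaining bit layout, and the same bounds check before copying):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for platform firmware: returns up to 4 protocol
 * ids per call starting at 'skip', out of a fixed table. The reply packs
 * num_returned in the low bits and num_remaining in bits [31:16],
 * mirroring the SCMI response layout. */
static const uint8_t fw_protocols[] = { 0x10, 0x11, 0x13, 0x14, 0x15, 0x16 };

static uint32_t fw_discover_list(uint32_t skip, uint8_t *out)
{
        uint32_t total = sizeof(fw_protocols), n = 0;

        while (skip + n < total && n < 4) {
                out[n] = fw_protocols[skip + n];
                n++;
        }
        return n | ((total - skip - n) << 16);
}

/* Same shape as scmi_base_implementation_list_get(): receive a chunk,
 * validate against the caller's limit, then copy and advance. */
static int discover_all(uint8_t *list, uint32_t max)
{
        uint8_t chunk[4];
        uint32_t tot = 0, flags, returned;

        do {
                flags = fw_discover_list(tot, chunk);
                returned = flags & 0xfff;
                if (tot + returned > max)
                        return -1; /* more protocols than we can hold */
                memcpy(list + tot, chunk, returned);
                tot += returned;
        } while (returned && (flags >> 16));

        return (int)tot;
}
```

As in the kernel loop, terminating on both "returned" and "remaining" guards against firmware that keeps reporting a non-zero remainder forever.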
drivers/firmware/arm_scmi/bus.c (+221 lines)
// SPDX-License-Identifier: GPL-2.0
/*
 * System Control and Management Interface (SCMI) Message Protocol bus layer
 *
 * Copyright (C) 2018 ARM Ltd.
 */

#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/types.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/device.h>

#include "common.h"

static DEFINE_IDA(scmi_bus_id);
static DEFINE_IDR(scmi_protocols);
static DEFINE_SPINLOCK(protocol_lock);

static const struct scmi_device_id *
scmi_dev_match_id(struct scmi_device *scmi_dev, struct scmi_driver *scmi_drv)
{
        const struct scmi_device_id *id = scmi_drv->id_table;

        if (!id)
                return NULL;

        for (; id->protocol_id; id++)
                if (id->protocol_id == scmi_dev->protocol_id)
                        return id;

        return NULL;
}

static int scmi_dev_match(struct device *dev, struct device_driver *drv)
{
        struct scmi_driver *scmi_drv = to_scmi_driver(drv);
        struct scmi_device *scmi_dev = to_scmi_dev(dev);
        const struct scmi_device_id *id;

        id = scmi_dev_match_id(scmi_dev, scmi_drv);
        if (id)
                return 1;

        return 0;
}

static int scmi_protocol_init(int protocol_id, struct scmi_handle *handle)
{
        scmi_prot_init_fn_t fn = idr_find(&scmi_protocols, protocol_id);

        if (unlikely(!fn))
                return -EINVAL;
        return fn(handle);
}

static int scmi_dev_probe(struct device *dev)
{
        struct scmi_driver *scmi_drv = to_scmi_driver(dev->driver);
        struct scmi_device *scmi_dev = to_scmi_dev(dev);
        const struct scmi_device_id *id;
        int ret;

        id = scmi_dev_match_id(scmi_dev, scmi_drv);
        if (!id)
                return -ENODEV;

        if (!scmi_dev->handle)
                return -EPROBE_DEFER;

        ret = scmi_protocol_init(scmi_dev->protocol_id, scmi_dev->handle);
        if (ret)
                return ret;

        return scmi_drv->probe(scmi_dev);
}

static int scmi_dev_remove(struct device *dev)
{
        struct scmi_driver *scmi_drv = to_scmi_driver(dev->driver);
        struct scmi_device *scmi_dev = to_scmi_dev(dev);

        if (scmi_drv->remove)
                scmi_drv->remove(scmi_dev);

        return 0;
}

static struct bus_type scmi_bus_type = {
        .name = "scmi_protocol",
        .match = scmi_dev_match,
        .probe = scmi_dev_probe,
        .remove = scmi_dev_remove,
};

int scmi_driver_register(struct scmi_driver *driver, struct module *owner,
                         const char *mod_name)
{
        int retval;

        driver->driver.bus = &scmi_bus_type;
        driver->driver.name = driver->name;
        driver->driver.owner = owner;
        driver->driver.mod_name = mod_name;

        retval = driver_register(&driver->driver);
        if (!retval)
                pr_debug("registered new scmi driver %s\n", driver->name);

        return retval;
}
EXPORT_SYMBOL_GPL(scmi_driver_register);

void scmi_driver_unregister(struct scmi_driver *driver)
{
        driver_unregister(&driver->driver);
}
EXPORT_SYMBOL_GPL(scmi_driver_unregister);

struct scmi_device *
scmi_device_create(struct device_node *np, struct device *parent, int protocol)
{
        int id, retval;
        struct scmi_device *scmi_dev;

        id = ida_simple_get(&scmi_bus_id, 1, 0, GFP_KERNEL);
        if (id < 0)
                return NULL;

        scmi_dev = kzalloc(sizeof(*scmi_dev), GFP_KERNEL);
        if (!scmi_dev)
                goto no_mem;

        scmi_dev->id = id;
        scmi_dev->protocol_id = protocol;
        scmi_dev->dev.parent = parent;
        scmi_dev->dev.of_node = np;
        scmi_dev->dev.bus = &scmi_bus_type;
        dev_set_name(&scmi_dev->dev, "scmi_dev.%d", id);

        retval = device_register(&scmi_dev->dev);
        if (!retval)
                return scmi_dev;

        put_device(&scmi_dev->dev);
        kfree(scmi_dev);
no_mem:
        ida_simple_remove(&scmi_bus_id, id);
        return NULL;
}

void scmi_device_destroy(struct scmi_device *scmi_dev)
{
        scmi_handle_put(scmi_dev->handle);
        device_unregister(&scmi_dev->dev);
        ida_simple_remove(&scmi_bus_id, scmi_dev->id);
        kfree(scmi_dev);
}

void scmi_set_handle(struct scmi_device *scmi_dev)
{
        scmi_dev->handle = scmi_handle_get(&scmi_dev->dev);
}

int scmi_protocol_register(int protocol_id, scmi_prot_init_fn_t fn)
{
        int ret;

        spin_lock(&protocol_lock);
        ret = idr_alloc(&scmi_protocols, fn, protocol_id, protocol_id + 1,
                        GFP_ATOMIC);
        if (ret != protocol_id)
                pr_err("unable to allocate SCMI idr slot, err %d\n", ret);
        spin_unlock(&protocol_lock);

        return ret;
}
EXPORT_SYMBOL_GPL(scmi_protocol_register);

void scmi_protocol_unregister(int protocol_id)
{
        spin_lock(&protocol_lock);
        idr_remove(&scmi_protocols, protocol_id);
        spin_unlock(&protocol_lock);
}
EXPORT_SYMBOL_GPL(scmi_protocol_unregister);

static int __scmi_devices_unregister(struct device *dev, void *data)
{
        struct scmi_device *scmi_dev = to_scmi_dev(dev);

        scmi_device_destroy(scmi_dev);
        return 0;
}

static void scmi_devices_unregister(void)
{
        bus_for_each_dev(&scmi_bus_type, NULL, NULL, __scmi_devices_unregister);
}

static int __init scmi_bus_init(void)
{
        int retval;

        retval = bus_register(&scmi_bus_type);
        if (retval)
                pr_err("scmi protocol bus register failed (%d)\n", retval);

        return retval;
}
subsys_initcall(scmi_bus_init);

static void __exit scmi_bus_exit(void)
{
        scmi_devices_unregister();
        bus_unregister(&scmi_bus_type);
        ida_destroy(&scmi_bus_id);
}
module_exit(scmi_bus_exit);
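The bus binds drivers to devices in scmi_dev_match_id() by walking the driver's id table, which is terminated by an entry whose protocol_id is zero. The walk is self-contained enough to sketch in plain C; `mock_device` is a hypothetical stand-in for the kernel's `struct scmi_device`:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel structures used by the SCMI bus */
struct scmi_device_id { int protocol_id; };
struct mock_device { int protocol_id; };

/* Same walk as scmi_dev_match_id(): a NULL table never matches, and the
 * table is terminated by an entry with protocol_id == 0 */
static const struct scmi_device_id *
match_id(const struct scmi_device_id *table, const struct mock_device *dev)
{
        if (!table)
                return NULL;

        for (; table->protocol_id; table++)
                if (table->protocol_id == dev->protocol_id)
                        return table;

        return NULL;
}
```

scmi_dev_match() then reduces the returned pointer to the 0/1 answer the driver core expects from a bus match callback.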
drivers/firmware/arm_scmi/clock.c (+343 lines)
// SPDX-License-Identifier: GPL-2.0
/*
 * System Control and Management Interface (SCMI) Clock Protocol
 *
 * Copyright (C) 2018 ARM Ltd.
 */

#include "common.h"

enum scmi_clock_protocol_cmd {
        CLOCK_ATTRIBUTES = 0x3,
        CLOCK_DESCRIBE_RATES = 0x4,
        CLOCK_RATE_SET = 0x5,
        CLOCK_RATE_GET = 0x6,
        CLOCK_CONFIG_SET = 0x7,
};

struct scmi_msg_resp_clock_protocol_attributes {
        __le16 num_clocks;
        u8 max_async_req;
        u8 reserved;
};

struct scmi_msg_resp_clock_attributes {
        __le32 attributes;
#define CLOCK_ENABLE    BIT(0)
        u8 name[SCMI_MAX_STR_SIZE];
};

struct scmi_clock_set_config {
        __le32 id;
        __le32 attributes;
};

struct scmi_msg_clock_describe_rates {
        __le32 id;
        __le32 rate_index;
};

struct scmi_msg_resp_clock_describe_rates {
        __le32 num_rates_flags;
#define NUM_RETURNED(x)         ((x) & 0xfff)
#define RATE_DISCRETE(x)        !((x) & BIT(12))
#define NUM_REMAINING(x)        ((x) >> 16)
        struct {
                __le32 value_low;
                __le32 value_high;
        } rate[0];
#define RATE_TO_U64(X)          \
({                              \
        typeof(X) x = (X);      \
        le32_to_cpu((x).value_low) | (u64)le32_to_cpu((x).value_high) << 32; \
})
};

struct scmi_clock_set_rate {
        __le32 flags;
#define CLOCK_SET_ASYNC         BIT(0)
#define CLOCK_SET_DELAYED       BIT(1)
#define CLOCK_SET_ROUND_UP      BIT(2)
#define CLOCK_SET_ROUND_AUTO    BIT(3)
        __le32 id;
        __le32 value_low;
        __le32 value_high;
};

struct clock_info {
        int num_clocks;
        int max_async_req;
        struct scmi_clock_info *clk;
};

static int scmi_clock_protocol_attributes_get(const struct scmi_handle *handle,
                                              struct clock_info *ci)
{
        int ret;
        struct scmi_xfer *t;
        struct scmi_msg_resp_clock_protocol_attributes *attr;

        ret = scmi_one_xfer_init(handle, PROTOCOL_ATTRIBUTES,
                                 SCMI_PROTOCOL_CLOCK, 0, sizeof(*attr), &t);
        if (ret)
                return ret;

        attr = t->rx.buf;

        ret = scmi_do_xfer(handle, t);
        if (!ret) {
                ci->num_clocks = le16_to_cpu(attr->num_clocks);
                ci->max_async_req = attr->max_async_req;
        }

        scmi_one_xfer_put(handle, t);
        return ret;
}

static int scmi_clock_attributes_get(const struct scmi_handle *handle,
                                     u32 clk_id, struct scmi_clock_info *clk)
{
        int ret;
        struct scmi_xfer *t;
        struct scmi_msg_resp_clock_attributes *attr;

        ret = scmi_one_xfer_init(handle, CLOCK_ATTRIBUTES, SCMI_PROTOCOL_CLOCK,
                                 sizeof(clk_id), sizeof(*attr), &t);
        if (ret)
                return ret;

        *(__le32 *)t->tx.buf = cpu_to_le32(clk_id);
        attr = t->rx.buf;

        ret = scmi_do_xfer(handle, t);
        if (!ret)
                memcpy(clk->name, attr->name, SCMI_MAX_STR_SIZE);
        else
                clk->name[0] = '\0';

        scmi_one_xfer_put(handle, t);
        return ret;
}

static int
scmi_clock_describe_rates_get(const struct scmi_handle *handle, u32 clk_id,
                              struct scmi_clock_info *clk)
{
        u64 *rate;
        int ret, cnt;
        bool rate_discrete = false;
        u32 tot_rate_cnt = 0, rates_flag;
        u16 num_returned, num_remaining;
        struct scmi_xfer *t;
        struct scmi_msg_clock_describe_rates *clk_desc;
        struct scmi_msg_resp_clock_describe_rates *rlist;

        ret = scmi_one_xfer_init(handle, CLOCK_DESCRIBE_RATES,
                                 SCMI_PROTOCOL_CLOCK, sizeof(*clk_desc), 0, &t);
        if (ret)
                return ret;

        clk_desc = t->tx.buf;
        rlist = t->rx.buf;

        do {
                clk_desc->id = cpu_to_le32(clk_id);
                /* Set the number of rates to be skipped/already read */
                clk_desc->rate_index = cpu_to_le32(tot_rate_cnt);

                ret = scmi_do_xfer(handle, t);
                if (ret)
                        goto err;

                rates_flag = le32_to_cpu(rlist->num_rates_flags);
                num_remaining = NUM_REMAINING(rates_flag);
                rate_discrete = RATE_DISCRETE(rates_flag);
                num_returned = NUM_RETURNED(rates_flag);

                if (tot_rate_cnt + num_returned > SCMI_MAX_NUM_RATES) {
                        dev_err(handle->dev, "No. of rates > MAX_NUM_RATES");
                        break;
                }

                if (!rate_discrete) {
                        clk->range.min_rate = RATE_TO_U64(rlist->rate[0]);
                        clk->range.max_rate = RATE_TO_U64(rlist->rate[1]);
                        clk->range.step_size = RATE_TO_U64(rlist->rate[2]);
                        dev_dbg(handle->dev, "Min %llu Max %llu Step %llu Hz\n",
                                clk->range.min_rate, clk->range.max_rate,
                                clk->range.step_size);
                        break;
                }

                rate = &clk->list.rates[tot_rate_cnt];
                for (cnt = 0; cnt < num_returned; cnt++, rate++) {
                        *rate = RATE_TO_U64(rlist->rate[cnt]);
                        dev_dbg(handle->dev, "Rate %llu Hz\n", *rate);
                }

                tot_rate_cnt += num_returned;
                /*
                 * check for both returned and remaining to avoid infinite
                 * loop due to buggy firmware
                 */
        } while (num_returned && num_remaining);

        if (rate_discrete)
                clk->list.num_rates = tot_rate_cnt;

err:
        scmi_one_xfer_put(handle, t);
        return ret;
}

static int
scmi_clock_rate_get(const struct scmi_handle *handle, u32 clk_id, u64 *value)
{
        int ret;
        struct scmi_xfer *t;

        ret = scmi_one_xfer_init(handle, CLOCK_RATE_GET, SCMI_PROTOCOL_CLOCK,
                                 sizeof(__le32), sizeof(u64), &t);
        if (ret)
                return ret;

        *(__le32 *)t->tx.buf = cpu_to_le32(clk_id);

        ret = scmi_do_xfer(handle, t);
        if (!ret) {
                __le32 *pval = t->rx.buf;

                *value = le32_to_cpu(*pval);
                *value |= (u64)le32_to_cpu(*(pval + 1)) << 32;
        }

        scmi_one_xfer_put(handle, t);
        return ret;
}

static int scmi_clock_rate_set(const struct scmi_handle *handle, u32 clk_id,
                               u32 config, u64 rate)
{
        int ret;
        struct scmi_xfer *t;
        struct scmi_clock_set_rate *cfg;

        ret = scmi_one_xfer_init(handle, CLOCK_RATE_SET, SCMI_PROTOCOL_CLOCK,
                                 sizeof(*cfg), 0, &t);
        if (ret)
                return ret;

        cfg = t->tx.buf;
        cfg->flags = cpu_to_le32(config);
        cfg->id = cpu_to_le32(clk_id);
        cfg->value_low = cpu_to_le32(rate & 0xffffffff);
        cfg->value_high = cpu_to_le32(rate >> 32);

        ret = scmi_do_xfer(handle, t);

        scmi_one_xfer_put(handle, t);
        return ret;
}

static int
scmi_clock_config_set(const struct scmi_handle *handle, u32 clk_id, u32 config)
{
        int ret;
        struct scmi_xfer *t;
        struct scmi_clock_set_config *cfg;

        ret = scmi_one_xfer_init(handle, CLOCK_CONFIG_SET, SCMI_PROTOCOL_CLOCK,
                                 sizeof(*cfg), 0, &t);
        if (ret)
                return ret;

        cfg = t->tx.buf;
        cfg->id = cpu_to_le32(clk_id);
        cfg->attributes = cpu_to_le32(config);

        ret = scmi_do_xfer(handle, t);

        scmi_one_xfer_put(handle, t);
        return ret;
}

static int scmi_clock_enable(const struct scmi_handle *handle, u32 clk_id)
{
        return scmi_clock_config_set(handle, clk_id, CLOCK_ENABLE);
}

static int scmi_clock_disable(const struct scmi_handle *handle, u32 clk_id)
{
        return scmi_clock_config_set(handle, clk_id, 0);
}

static int scmi_clock_count_get(const struct scmi_handle *handle)
{
        struct clock_info *ci = handle->clk_priv;

        return ci->num_clocks;
}

static const struct scmi_clock_info *
scmi_clock_info_get(const struct scmi_handle *handle, u32 clk_id)
{
        struct clock_info *ci = handle->clk_priv;
        struct scmi_clock_info *clk = ci->clk + clk_id;

        if (!clk->name || !clk->name[0])
                return NULL;

        return clk;
}

static struct scmi_clk_ops clk_ops = {
        .count_get = scmi_clock_count_get,
        .info_get = scmi_clock_info_get,
        .rate_get = scmi_clock_rate_get,
        .rate_set = scmi_clock_rate_set,
        .enable = scmi_clock_enable,
        .disable = scmi_clock_disable,
};

static int scmi_clock_protocol_init(struct scmi_handle *handle)
{
        u32 version;
        int clkid, ret;
        struct clock_info *cinfo;

        scmi_version_get(handle, SCMI_PROTOCOL_CLOCK, &version);

        dev_dbg(handle->dev, "Clock Version %d.%d\n",
                PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));

        cinfo = devm_kzalloc(handle->dev, sizeof(*cinfo), GFP_KERNEL);
        if (!cinfo)
                return -ENOMEM;

        scmi_clock_protocol_attributes_get(handle, cinfo);

        cinfo->clk = devm_kcalloc(handle->dev, cinfo->num_clocks,
                                  sizeof(*cinfo->clk), GFP_KERNEL);
        if (!cinfo->clk)
                return -ENOMEM;

        for (clkid = 0; clkid < cinfo->num_clocks; clkid++) {
                struct scmi_clock_info *clk = cinfo->clk + clkid;

                ret = scmi_clock_attributes_get(handle, clkid, clk);
                if (!ret)
                        scmi_clock_describe_rates_get(handle, clkid, clk);
        }

        handle->clk_ops = &clk_ops;
        handle->clk_priv = cinfo;

        return 0;
}

static int __init scmi_clock_init(void)
{
        return scmi_protocol_register(SCMI_PROTOCOL_CLOCK,
                                      &scmi_clock_protocol_init);
}
subsys_initcall(scmi_clock_init);
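Clock rates cross the wire as two little-endian 32-bit words, recombined by the RATE_TO_U64() macro and split again by scmi_clock_rate_set(). A standalone sketch of that round-trip, assuming a little-endian host so le32_to_cpu() reduces to the identity:

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors the two-word rate layout in
 * struct scmi_msg_resp_clock_describe_rates (little-endian host assumed,
 * so no byte swapping is shown). */
struct rate_word {
        uint32_t value_low;
        uint32_t value_high;
};

/* Equivalent of the RATE_TO_U64() macro */
static uint64_t rate_to_u64(struct rate_word r)
{
        return (uint64_t)r.value_low | ((uint64_t)r.value_high << 32);
}

/* The inverse split used when filling struct scmi_clock_set_rate */
static struct rate_word u64_to_rate(uint64_t rate)
{
        struct rate_word r = {
                .value_low = (uint32_t)(rate & 0xffffffff),
                .value_high = (uint32_t)(rate >> 32),
        };
        return r;
}
```

Rates above 4 GHz exercise the high word, which is why the protocol carries 64 bits per rate even though most clocks fit in 32.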
drivers/firmware/arm_scmi/common.h (+105 lines)
// SPDX-License-Identifier: GPL-2.0
/*
 * System Control and Management Interface (SCMI) Message Protocol
 * driver common header file containing some definitions, structures
 * and function prototypes used in all the different SCMI protocols.
 *
 * Copyright (C) 2018 ARM Ltd.
 */

#include <linux/completion.h>
#include <linux/device.h>
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/scmi_protocol.h>
#include <linux/types.h>

#define PROTOCOL_REV_MINOR_BITS 16
#define PROTOCOL_REV_MINOR_MASK ((1U << PROTOCOL_REV_MINOR_BITS) - 1)
#define PROTOCOL_REV_MAJOR(x)   ((x) >> PROTOCOL_REV_MINOR_BITS)
#define PROTOCOL_REV_MINOR(x)   ((x) & PROTOCOL_REV_MINOR_MASK)
#define MAX_PROTOCOLS_IMP       16
#define MAX_OPPS                16

enum scmi_common_cmd {
        PROTOCOL_VERSION = 0x0,
        PROTOCOL_ATTRIBUTES = 0x1,
        PROTOCOL_MESSAGE_ATTRIBUTES = 0x2,
};

/**
 * struct scmi_msg_resp_prot_version - Response for a message
 *
 * @major_version: Major version of the ABI that firmware supports
 * @minor_version: Minor version of the ABI that firmware supports
 *
 * In general, ABI version changes follow the rule that minor version
 * increments are backward compatible. Major revision changes in ABI may
 * not be backward compatible.
 *
 * Response to a generic message with message type SCMI_MSG_VERSION
 */
struct scmi_msg_resp_prot_version {
        __le16 minor_version;
        __le16 major_version;
};

/**
 * struct scmi_msg_hdr - Message (Tx/Rx) header
 *
 * @id: The identifier of the command being sent
 * @protocol_id: The identifier of the protocol used to send @id command
 * @seq: The token to identify the message. When a message/command returns,
 *      the platform returns the whole message header unmodified including
 *      the token
 * @status: Status of the transfer once it's complete
 * @poll_completion: Indicate if the transfer needs to be polled for
 *      completion or interrupt mode is used
 */
struct scmi_msg_hdr {
        u8 id;
        u8 protocol_id;
        u16 seq;
        u32 status;
        bool poll_completion;
};

/**
 * struct scmi_msg - Message (Tx/Rx) structure
 *
 * @buf: Buffer pointer
 * @len: Length of data in the buffer
 */
struct scmi_msg {
        void *buf;
        size_t len;
};

/**
 * struct scmi_xfer - Structure representing a message flow
 *
 * @hdr: Transmit message header
 * @tx: Transmit message
 * @rx: Receive message. The buffer should be pre-allocated to store the
 *      message. If request-ACK protocol is used, we can reuse the same
 *      buffer for the rx path as we use for the tx path.
 * @done: completion event
 */
struct scmi_xfer {
        void *con_priv;
        struct scmi_msg_hdr hdr;
        struct scmi_msg tx;
        struct scmi_msg rx;
        struct completion done;
};

void scmi_one_xfer_put(const struct scmi_handle *h, struct scmi_xfer *xfer);
int scmi_do_xfer(const struct scmi_handle *h, struct scmi_xfer *xfer);
int scmi_one_xfer_init(const struct scmi_handle *h, u8 msg_id, u8 prot_id,
                       size_t tx_size, size_t rx_size, struct scmi_xfer **p);
int scmi_handle_put(const struct scmi_handle *handle);
struct scmi_handle *scmi_handle_get(struct device *dev);
void scmi_set_handle(struct scmi_device *scmi_dev);
int scmi_version_get(const struct scmi_handle *h, u8 protocol, u32 *version);
void scmi_setup_protocol_implemented(const struct scmi_handle *handle,
                                     u8 *prot_imp);

int scmi_base_protocol_init(struct scmi_handle *h);
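The PROTOCOL_REV_* macros split the 32-bit value returned by the PROTOCOL_VERSION command: major version in the upper 16 bits, minor in the lower 16. Copied into a standalone program (with the kernel's fixed-width types swapped for stdint ones), the split can be checked directly:

```c
#include <assert.h>
#include <stdint.h>

/* Same definitions as in drivers/firmware/arm_scmi/common.h */
#define PROTOCOL_REV_MINOR_BITS 16
#define PROTOCOL_REV_MINOR_MASK ((1U << PROTOCOL_REV_MINOR_BITS) - 1)
#define PROTOCOL_REV_MAJOR(x)   ((x) >> PROTOCOL_REV_MINOR_BITS)
#define PROTOCOL_REV_MINOR(x)   ((x) & PROTOCOL_REV_MINOR_MASK)

/* Reassemble a version word from its parts (illustrative inverse,
 * not part of the kernel header) */
static uint32_t make_version(uint16_t major, uint16_t minor)
{
        return ((uint32_t)major << PROTOCOL_REV_MINOR_BITS) | minor;
}
```

This is the value scmi_base_protocol_init() stores in rev->major_ver / rev->minor_ver and prints in its "SCMI Protocol v%d.%d" banner.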
drivers/firmware/arm_scmi/driver.c (+871 lines)
// SPDX-License-Identifier: GPL-2.0
/*
 * System Control and Management Interface (SCMI) Message Protocol driver
 *
 * SCMI Message Protocol is used between the System Control Processor(SCP)
 * and the Application Processors(AP). The Message Handling Unit(MHU)
 * provides a mechanism for inter-processor communication between SCP's
 * Cortex M3 and AP.
 *
 * SCP offers control and management of the core/cluster power states,
 * various power domain DVFS including the core/cluster, certain system
 * clocks configuration, thermal sensors and many others.
 *
 * Copyright (C) 2018 ARM Ltd.
 */

#include <linux/bitmap.h>
#include <linux/export.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/ktime.h>
#include <linux/mailbox_client.h>
#include <linux/module.h>
#include <linux/of_address.h>
#include <linux/of_device.h>
#include <linux/processor.h>
#include <linux/semaphore.h>
#include <linux/slab.h>

#include "common.h"

#define MSG_ID_SHIFT            0
#define MSG_ID_MASK             0xff
#define MSG_TYPE_SHIFT          8
#define MSG_TYPE_MASK           0x3
#define MSG_PROTOCOL_ID_SHIFT   10
#define MSG_PROTOCOL_ID_MASK    0xff
#define MSG_TOKEN_ID_SHIFT      18
#define MSG_TOKEN_ID_MASK       0x3ff
#define MSG_XTRACT_TOKEN(header)        \
        (((header) >> MSG_TOKEN_ID_SHIFT) & MSG_TOKEN_ID_MASK)

enum scmi_error_codes {
        SCMI_SUCCESS = 0,       /* Success */
        SCMI_ERR_SUPPORT = -1,  /* Not supported */
        SCMI_ERR_PARAMS = -2,   /* Invalid Parameters */
        SCMI_ERR_ACCESS = -3,   /* Invalid access/permission denied */
        SCMI_ERR_ENTRY = -4,    /* Not found */
        SCMI_ERR_RANGE = -5,    /* Value out of range */
        SCMI_ERR_BUSY = -6,     /* Device busy */
        SCMI_ERR_COMMS = -7,    /* Communication Error */
        SCMI_ERR_GENERIC = -8,  /* Generic Error */
        SCMI_ERR_HARDWARE = -9, /* Hardware Error */
        SCMI_ERR_PROTOCOL = -10,/* Protocol Error */
        SCMI_ERR_MAX
};

/* List of all SCMI devices active in system */
static LIST_HEAD(scmi_list);
/* Protection for the entire list */
static DEFINE_MUTEX(scmi_list_mutex);

/**
 * struct scmi_xfers_info - Structure to manage transfer information
 *
 * @xfer_block: Preallocated Message array
 * @xfer_alloc_table: Bitmap table for allocated messages.
 *      Index of this bitmap table is also used for message
 *      sequence identifier.
 * @xfer_lock: Protection for message allocation
 */
struct scmi_xfers_info {
        struct scmi_xfer *xfer_block;
        unsigned long *xfer_alloc_table;
        /* protect transfer allocation */
        spinlock_t xfer_lock;
};

/**
 * struct scmi_desc - Description of SoC integration
 *
 * @max_rx_timeout_ms: Timeout for communication with SoC (in milliseconds)
 * @max_msg: Maximum number of messages that can be pending
 *      simultaneously in the system
 * @max_msg_size: Maximum size of data per message that can be handled
 */
struct scmi_desc {
        int max_rx_timeout_ms;
        int max_msg;
        int max_msg_size;
};

/**
 * struct scmi_chan_info - Structure representing a SCMI channel information
 *
 * @cl: Mailbox Client
 * @chan: Transmit/Receive mailbox channel
 * @payload: Transmit/Receive mailbox channel payload area
 * @dev: Reference to device in the SCMI hierarchy corresponding to this
 *      channel
 * @handle: Pointer to SCMI entity handle
 */
struct scmi_chan_info {
        struct mbox_client cl;
        struct mbox_chan *chan;
        void __iomem *payload;
        struct device *dev;
        struct scmi_handle *handle;
};

/**
 * struct scmi_info - Structure representing a SCMI instance
 *
 * @dev: Device pointer
 * @desc: SoC description for this instance
 * @handle: Instance of SCMI handle to send to clients
 * @version: SCMI revision information containing protocol version,
 *      implementation version and (sub-)vendor identification.
 * @minfo: Message info
 * @tx_idr: IDR object to map protocol id to channel info pointer
 * @protocols_imp: list of protocols implemented, currently maximum of
 *      MAX_PROTOCOLS_IMP elements allocated by the base protocol
 * @node: list head
 * @users: Number of users of this instance
 */
struct scmi_info {
        struct device *dev;
        const struct scmi_desc *desc;
        struct scmi_revision_info version;
        struct scmi_handle handle;
        struct scmi_xfers_info minfo;
        struct idr tx_idr;
        u8 *protocols_imp;
        struct list_head node;
        int users;
};

#define client_to_scmi_chan_info(c) container_of(c, struct scmi_chan_info, cl)
#define handle_to_scmi_info(h)  container_of(h, struct scmi_info, handle)

/*
 * SCMI specification requires all parameters, message headers, return
 * arguments or any protocol data to be expressed in little endian
 * format only.
 */
struct scmi_shared_mem {
        __le32 reserved;
        __le32 channel_status;
#define SCMI_SHMEM_CHAN_STAT_CHANNEL_ERROR      BIT(1)
#define SCMI_SHMEM_CHAN_STAT_CHANNEL_FREE       BIT(0)
        __le32 reserved1[2];
        __le32 flags;
#define SCMI_SHMEM_FLAG_INTR_ENABLED    BIT(0)
        __le32 length;
        __le32 msg_header;
        u8 msg_payload[0];
};

static const int scmi_linux_errmap[] = {
        /* better than switch case as long as return value is continuous */
        0,                      /* SCMI_SUCCESS */
        -EOPNOTSUPP,            /* SCMI_ERR_SUPPORT */
        -EINVAL,                /* SCMI_ERR_PARAMS */
        -EACCES,                /* SCMI_ERR_ACCESS */
        -ENOENT,                /* SCMI_ERR_ENTRY */
        -ERANGE,                /* SCMI_ERR_RANGE */
        -EBUSY,                 /* SCMI_ERR_BUSY */
        -ECOMM,                 /* SCMI_ERR_COMMS */
        -EIO,                   /* SCMI_ERR_GENERIC */
        -EREMOTEIO,             /* SCMI_ERR_HARDWARE */
        -EPROTO,                /* SCMI_ERR_PROTOCOL */
};

static inline int scmi_to_linux_errno(int errno)
{
        if (errno < SCMI_SUCCESS && errno > SCMI_ERR_MAX)
                return scmi_linux_errmap[-errno];
        return -EIO;
}

/**
 * scmi_dump_header_dbg() - Helper to dump a message header.
 *
 * @dev: Device pointer corresponding to the SCMI entity
 * @hdr: pointer to header.
 */
static inline void scmi_dump_header_dbg(struct device *dev,
                                        struct scmi_msg_hdr *hdr)
{
        dev_dbg(dev, "Command ID: %x Sequence ID: %x Protocol: %x\n",
                hdr->id, hdr->seq, hdr->protocol_id);
}

static void scmi_fetch_response(struct scmi_xfer *xfer,
                                struct scmi_shared_mem __iomem *mem)
{
        xfer->hdr.status = ioread32(mem->msg_payload);
        /* Skip the length of header and status in payload area i.e 8 bytes */
        xfer->rx.len = min_t(size_t, xfer->rx.len, ioread32(&mem->length) - 8);

        /* Take a copy to the rx buffer.. */
        memcpy_fromio(xfer->rx.buf, mem->msg_payload + 4, xfer->rx.len);
}

/**
 * scmi_rx_callback() - mailbox client callback for receive messages
 *
 * @cl: client pointer
 * @m: mailbox message
 *
 * Processes one received message to appropriate transfer information and
 * signals completion of the transfer.
 *
 * NOTE: This function will be invoked in IRQ context, hence should be
 * as optimal as possible.
 */
static void scmi_rx_callback(struct mbox_client *cl, void *m)
{
        u16 xfer_id;
        struct scmi_xfer *xfer;
        struct scmi_chan_info *cinfo = client_to_scmi_chan_info(cl);
        struct device *dev = cinfo->dev;
        struct scmi_info *info = handle_to_scmi_info(cinfo->handle);
        struct scmi_xfers_info *minfo = &info->minfo;
        struct scmi_shared_mem __iomem *mem = cinfo->payload;

        xfer_id = MSG_XTRACT_TOKEN(ioread32(&mem->msg_header));

        /* Are we even expecting this? */
        if (!test_bit(xfer_id, minfo->xfer_alloc_table)) {
                dev_err(dev, "message for %d is not expected!\n", xfer_id);
                return;
        }

        xfer = &minfo->xfer_block[xfer_id];

        scmi_dump_header_dbg(dev, &xfer->hdr);
        /* Is the message of valid length? */
        if (xfer->rx.len > info->desc->max_msg_size) {
                dev_err(dev, "unable to handle %zu xfer(max %d)\n",
                        xfer->rx.len, info->desc->max_msg_size);
                return;
        }

        scmi_fetch_response(xfer, mem);
        complete(&xfer->done);
}

/**
 * pack_scmi_header() - packs and returns 32-bit header
 *
 * @hdr: pointer to header containing all the information on message id,
 *      protocol id and sequence id.
 */
static inline u32 pack_scmi_header(struct scmi_msg_hdr *hdr)
{
        return ((hdr->id & MSG_ID_MASK) << MSG_ID_SHIFT) |
           ((hdr->seq & MSG_TOKEN_ID_MASK) << MSG_TOKEN_ID_SHIFT) |
           ((hdr->protocol_id & MSG_PROTOCOL_ID_MASK) << MSG_PROTOCOL_ID_SHIFT);
}

/**
 * scmi_tx_prepare() - mailbox client callback to prepare for the transfer
 *
 * @cl: client pointer
 * @m: mailbox message
 *
 * This function prepares the shared memory which contains the header and the
 * payload.
 */
static void scmi_tx_prepare(struct mbox_client *cl, void *m)
{
        struct scmi_xfer *t = m;
        struct scmi_chan_info *cinfo = client_to_scmi_chan_info(cl);
        struct scmi_shared_mem __iomem *mem = cinfo->payload;

        /* Mark channel busy + clear error */
        iowrite32(0x0, &mem->channel_status);
        iowrite32(t->hdr.poll_completion ? 0 : SCMI_SHMEM_FLAG_INTR_ENABLED,
                  &mem->flags);
        iowrite32(sizeof(mem->msg_header) + t->tx.len, &mem->length);
        iowrite32(pack_scmi_header(&t->hdr), &mem->msg_header);
        if (t->tx.buf)
                memcpy_toio(mem->msg_payload, t->tx.buf, t->tx.len);
}

/**
 * scmi_one_xfer_get() - Allocate one message
 *
 * @handle: SCMI entity handle
 *
 * Helper function which is used by various command functions that are
 * exposed to clients of this driver for allocating a message traffic event.
 *
 * This function can sleep depending on pending requests already in the system
 * for the SCMI entity. Further, this also holds a spinlock to maintain
 * integrity of internal data structures.
 *
 * Return: requested xfer on success, else an error pointer.
 */
static struct scmi_xfer *scmi_one_xfer_get(const struct scmi_handle *handle)
{
        u16 xfer_id;
        struct scmi_xfer *xfer;
        unsigned long flags, bit_pos;
        struct scmi_info *info = handle_to_scmi_info(handle);
        struct scmi_xfers_info *minfo = &info->minfo;

        /* Keep the locked section as small as possible */
        spin_lock_irqsave(&minfo->xfer_lock, flags);
        bit_pos = find_first_zero_bit(minfo->xfer_alloc_table,
                                      info->desc->max_msg);
        if (bit_pos == info->desc->max_msg) {
                spin_unlock_irqrestore(&minfo->xfer_lock, flags);
                return ERR_PTR(-ENOMEM);
        }
        set_bit(bit_pos, minfo->xfer_alloc_table);
        spin_unlock_irqrestore(&minfo->xfer_lock, flags);

        xfer_id = bit_pos;

        xfer = &minfo->xfer_block[xfer_id];
        xfer->hdr.seq = xfer_id;
        reinit_completion(&xfer->done);

        return xfer;
}

/**
 * scmi_one_xfer_put() - Release a message
 *
 * @handle: SCMI entity handle
 * @xfer: message that was reserved by scmi_one_xfer_get
 *
 * This holds a spinlock to maintain integrity of internal data structures.
 */
void scmi_one_xfer_put(const struct scmi_handle *handle, struct scmi_xfer *xfer)
{
        unsigned long flags;
        struct scmi_info *info = handle_to_scmi_info(handle);
        struct scmi_xfers_info *minfo = &info->minfo;

        /*
         * Keep the locked section as small as possible
         * NOTE: we might escape with smp_mb and no lock here..
         * but just be conservative and symmetric.
         */
        spin_lock_irqsave(&minfo->xfer_lock, flags);
        clear_bit(xfer->hdr.seq, minfo->xfer_alloc_table);
        spin_unlock_irqrestore(&minfo->xfer_lock, flags);
}

static bool
scmi_xfer_poll_done(const struct scmi_chan_info *cinfo, struct scmi_xfer *xfer)
{
        struct scmi_shared_mem __iomem *mem = cinfo->payload;
        u16 xfer_id = MSG_XTRACT_TOKEN(ioread32(&mem->msg_header));

        if (xfer->hdr.seq != xfer_id)
                return false;

        return ioread32(&mem->channel_status) &
                (SCMI_SHMEM_CHAN_STAT_CHANNEL_ERROR |
                 SCMI_SHMEM_CHAN_STAT_CHANNEL_FREE);
}

#define SCMI_MAX_POLL_TO_NS     (100 * NSEC_PER_USEC)

static bool scmi_xfer_done_no_timeout(const struct scmi_chan_info *cinfo,
                                      struct scmi_xfer *xfer, ktime_t stop)
{
        ktime_t __cur = ktime_get();

        return scmi_xfer_poll_done(cinfo, xfer) || ktime_after(__cur, stop);
}

/**
 * scmi_do_xfer() - Do one transfer
 *
 * @handle: Pointer to SCMI entity handle
 * @xfer: Transfer to initiate and wait for response
 *
 * Return: -ETIMEDOUT in case of no response, if transmit error,
 *      return corresponding error, else if all goes well,
 *      return 0.
 */
387 + */ 388 + int scmi_do_xfer(const struct scmi_handle *handle, struct scmi_xfer *xfer) 389 + { 390 + int ret; 391 + int timeout; 392 + struct scmi_info *info = handle_to_scmi_info(handle); 393 + struct device *dev = info->dev; 394 + struct scmi_chan_info *cinfo; 395 + 396 + cinfo = idr_find(&info->tx_idr, xfer->hdr.protocol_id); 397 + if (unlikely(!cinfo)) 398 + return -EINVAL; 399 + 400 + ret = mbox_send_message(cinfo->chan, xfer); 401 + if (ret < 0) { 402 + dev_dbg(dev, "mbox send fail %d\n", ret); 403 + return ret; 404 + } 405 + 406 + /* mbox_send_message returns non-negative value on success, so reset */ 407 + ret = 0; 408 + 409 + if (xfer->hdr.poll_completion) { 410 + ktime_t stop = ktime_add_ns(ktime_get(), SCMI_MAX_POLL_TO_NS); 411 + 412 + spin_until_cond(scmi_xfer_done_no_timeout(cinfo, xfer, stop)); 413 + 414 + if (ktime_before(ktime_get(), stop)) 415 + scmi_fetch_response(xfer, cinfo->payload); 416 + else 417 + ret = -ETIMEDOUT; 418 + } else { 419 + /* And we wait for the response. */ 420 + timeout = msecs_to_jiffies(info->desc->max_rx_timeout_ms); 421 + if (!wait_for_completion_timeout(&xfer->done, timeout)) { 422 + dev_err(dev, "mbox timed out in resp(caller: %pS)\n", 423 + (void *)_RET_IP_); 424 + ret = -ETIMEDOUT; 425 + } 426 + } 427 + 428 + if (!ret && xfer->hdr.status) 429 + ret = scmi_to_linux_errno(xfer->hdr.status); 430 + 431 + /* 432 + * NOTE: we might prefer not to need the mailbox ticker to manage the 433 + * transfer queueing since the protocol layer queues things by itself. 434 + * Unfortunately, we have to kick the mailbox framework after we have 435 + * received our message. 
436 + */ 437 + mbox_client_txdone(cinfo->chan, ret); 438 + 439 + return ret; 440 + } 441 + 442 + /** 443 + * scmi_one_xfer_init() - Allocate and initialise one message 444 + * 445 + * @handle: SCMI entity handle 446 + * @msg_id: Message identifier 447 + * @msg_prot_id: Protocol identifier for the message 448 + * @tx_size: transmit message size 449 + * @rx_size: receive message size 450 + * @p: pointer to the allocated and initialised message 451 + * 452 + * This function allocates the message using @scmi_one_xfer_get and 453 + * initialise the header. 454 + * 455 + * Return: 0 if all went fine with @p pointing to message, else 456 + * corresponding error. 457 + */ 458 + int scmi_one_xfer_init(const struct scmi_handle *handle, u8 msg_id, u8 prot_id, 459 + size_t tx_size, size_t rx_size, struct scmi_xfer **p) 460 + { 461 + int ret; 462 + struct scmi_xfer *xfer; 463 + struct scmi_info *info = handle_to_scmi_info(handle); 464 + struct device *dev = info->dev; 465 + 466 + /* Ensure we have sane transfer sizes */ 467 + if (rx_size > info->desc->max_msg_size || 468 + tx_size > info->desc->max_msg_size) 469 + return -ERANGE; 470 + 471 + xfer = scmi_one_xfer_get(handle); 472 + if (IS_ERR(xfer)) { 473 + ret = PTR_ERR(xfer); 474 + dev_err(dev, "failed to get free message slot(%d)\n", ret); 475 + return ret; 476 + } 477 + 478 + xfer->tx.len = tx_size; 479 + xfer->rx.len = rx_size ? : info->desc->max_msg_size; 480 + xfer->hdr.id = msg_id; 481 + xfer->hdr.protocol_id = prot_id; 482 + xfer->hdr.poll_completion = false; 483 + 484 + *p = xfer; 485 + return 0; 486 + } 487 + 488 + /** 489 + * scmi_version_get() - command to get the revision of the SCMI entity 490 + * 491 + * @handle: Handle to SCMI entity information 492 + * 493 + * Updates the SCMI information in the internal data structure. 494 + * 495 + * Return: 0 if all went fine, else return appropriate error. 
496 + */ 497 + int scmi_version_get(const struct scmi_handle *handle, u8 protocol, 498 + u32 *version) 499 + { 500 + int ret; 501 + __le32 *rev_info; 502 + struct scmi_xfer *t; 503 + 504 + ret = scmi_one_xfer_init(handle, PROTOCOL_VERSION, protocol, 0, 505 + sizeof(*version), &t); 506 + if (ret) 507 + return ret; 508 + 509 + ret = scmi_do_xfer(handle, t); 510 + if (!ret) { 511 + rev_info = t->rx.buf; 512 + *version = le32_to_cpu(*rev_info); 513 + } 514 + 515 + scmi_one_xfer_put(handle, t); 516 + return ret; 517 + } 518 + 519 + void scmi_setup_protocol_implemented(const struct scmi_handle *handle, 520 + u8 *prot_imp) 521 + { 522 + struct scmi_info *info = handle_to_scmi_info(handle); 523 + 524 + info->protocols_imp = prot_imp; 525 + } 526 + 527 + static bool 528 + scmi_is_protocol_implemented(const struct scmi_handle *handle, u8 prot_id) 529 + { 530 + int i; 531 + struct scmi_info *info = handle_to_scmi_info(handle); 532 + 533 + if (!info->protocols_imp) 534 + return false; 535 + 536 + for (i = 0; i < MAX_PROTOCOLS_IMP; i++) 537 + if (info->protocols_imp[i] == prot_id) 538 + return true; 539 + return false; 540 + } 541 + 542 + /** 543 + * scmi_handle_get() - Get the SCMI handle for a device 544 + * 545 + * @dev: pointer to device for which we want SCMI handle 546 + * 547 + * NOTE: The function does not track individual clients of the framework 548 + * and is expected to be maintained by caller of SCMI protocol library. 
549 + * scmi_handle_put must be balanced with successful scmi_handle_get 550 + * 551 + * Return: pointer to handle if successful, NULL on error 552 + */ 553 + struct scmi_handle *scmi_handle_get(struct device *dev) 554 + { 555 + struct list_head *p; 556 + struct scmi_info *info; 557 + struct scmi_handle *handle = NULL; 558 + 559 + mutex_lock(&scmi_list_mutex); 560 + list_for_each(p, &scmi_list) { 561 + info = list_entry(p, struct scmi_info, node); 562 + if (dev->parent == info->dev) { 563 + handle = &info->handle; 564 + info->users++; 565 + break; 566 + } 567 + } 568 + mutex_unlock(&scmi_list_mutex); 569 + 570 + return handle; 571 + } 572 + 573 + /** 574 + * scmi_handle_put() - Release the handle acquired by scmi_handle_get 575 + * 576 + * @handle: handle acquired by scmi_handle_get 577 + * 578 + * NOTE: The function does not track individual clients of the framework 579 + * and is expected to be maintained by caller of SCMI protocol library. 580 + * scmi_handle_put must be balanced with successful scmi_handle_get 581 + * 582 + * Return: 0 is successfully released 583 + * if null was passed, it returns -EINVAL; 584 + */ 585 + int scmi_handle_put(const struct scmi_handle *handle) 586 + { 587 + struct scmi_info *info; 588 + 589 + if (!handle) 590 + return -EINVAL; 591 + 592 + info = handle_to_scmi_info(handle); 593 + mutex_lock(&scmi_list_mutex); 594 + if (!WARN_ON(!info->users)) 595 + info->users--; 596 + mutex_unlock(&scmi_list_mutex); 597 + 598 + return 0; 599 + } 600 + 601 + static const struct scmi_desc scmi_generic_desc = { 602 + .max_rx_timeout_ms = 30, /* we may increase this if required */ 603 + .max_msg = 20, /* Limited by MBOX_TX_QUEUE_LEN */ 604 + .max_msg_size = 128, 605 + }; 606 + 607 + /* Each compatible listed below must have descriptor associated with it */ 608 + static const struct of_device_id scmi_of_match[] = { 609 + { .compatible = "arm,scmi", .data = &scmi_generic_desc }, 610 + { /* Sentinel */ }, 611 + }; 612 + 613 + MODULE_DEVICE_TABLE(of, 
scmi_of_match); 614 + 615 + static int scmi_xfer_info_init(struct scmi_info *sinfo) 616 + { 617 + int i; 618 + struct scmi_xfer *xfer; 619 + struct device *dev = sinfo->dev; 620 + const struct scmi_desc *desc = sinfo->desc; 621 + struct scmi_xfers_info *info = &sinfo->minfo; 622 + 623 + /* Pre-allocated messages, no more than what hdr.seq can support */ 624 + if (WARN_ON(desc->max_msg >= (MSG_TOKEN_ID_MASK + 1))) { 625 + dev_err(dev, "Maximum message of %d exceeds supported %d\n", 626 + desc->max_msg, MSG_TOKEN_ID_MASK + 1); 627 + return -EINVAL; 628 + } 629 + 630 + info->xfer_block = devm_kcalloc(dev, desc->max_msg, 631 + sizeof(*info->xfer_block), GFP_KERNEL); 632 + if (!info->xfer_block) 633 + return -ENOMEM; 634 + 635 + info->xfer_alloc_table = devm_kcalloc(dev, BITS_TO_LONGS(desc->max_msg), 636 + sizeof(long), GFP_KERNEL); 637 + if (!info->xfer_alloc_table) 638 + return -ENOMEM; 639 + 640 + bitmap_zero(info->xfer_alloc_table, desc->max_msg); 641 + 642 + /* Pre-initialize the buffer pointer to pre-allocated buffers */ 643 + for (i = 0, xfer = info->xfer_block; i < desc->max_msg; i++, xfer++) { 644 + xfer->rx.buf = devm_kcalloc(dev, sizeof(u8), desc->max_msg_size, 645 + GFP_KERNEL); 646 + if (!xfer->rx.buf) 647 + return -ENOMEM; 648 + 649 + xfer->tx.buf = xfer->rx.buf; 650 + init_completion(&xfer->done); 651 + } 652 + 653 + spin_lock_init(&info->xfer_lock); 654 + 655 + return 0; 656 + } 657 + 658 + static int scmi_mailbox_check(struct device_node *np) 659 + { 660 + struct of_phandle_args arg; 661 + 662 + return of_parse_phandle_with_args(np, "mboxes", "#mbox-cells", 0, &arg); 663 + } 664 + 665 + static int scmi_mbox_free_channel(int id, void *p, void *data) 666 + { 667 + struct scmi_chan_info *cinfo = p; 668 + struct idr *idr = data; 669 + 670 + if (!IS_ERR_OR_NULL(cinfo->chan)) { 671 + mbox_free_channel(cinfo->chan); 672 + cinfo->chan = NULL; 673 + } 674 + 675 + idr_remove(idr, id); 676 + 677 + return 0; 678 + } 679 + 680 + static int scmi_remove(struct 
platform_device *pdev) 681 + { 682 + int ret = 0; 683 + struct scmi_info *info = platform_get_drvdata(pdev); 684 + struct idr *idr = &info->tx_idr; 685 + 686 + mutex_lock(&scmi_list_mutex); 687 + if (info->users) 688 + ret = -EBUSY; 689 + else 690 + list_del(&info->node); 691 + mutex_unlock(&scmi_list_mutex); 692 + 693 + if (!ret) { 694 + /* Safe to free channels since no more users */ 695 + ret = idr_for_each(idr, scmi_mbox_free_channel, idr); 696 + idr_destroy(&info->tx_idr); 697 + } 698 + 699 + return ret; 700 + } 701 + 702 + static inline int 703 + scmi_mbox_chan_setup(struct scmi_info *info, struct device *dev, int prot_id) 704 + { 705 + int ret; 706 + struct resource res; 707 + resource_size_t size; 708 + struct device_node *shmem, *np = dev->of_node; 709 + struct scmi_chan_info *cinfo; 710 + struct mbox_client *cl; 711 + 712 + if (scmi_mailbox_check(np)) { 713 + cinfo = idr_find(&info->tx_idr, SCMI_PROTOCOL_BASE); 714 + goto idr_alloc; 715 + } 716 + 717 + cinfo = devm_kzalloc(info->dev, sizeof(*cinfo), GFP_KERNEL); 718 + if (!cinfo) 719 + return -ENOMEM; 720 + 721 + cinfo->dev = dev; 722 + 723 + cl = &cinfo->cl; 724 + cl->dev = dev; 725 + cl->rx_callback = scmi_rx_callback; 726 + cl->tx_prepare = scmi_tx_prepare; 727 + cl->tx_block = false; 728 + cl->knows_txdone = true; 729 + 730 + shmem = of_parse_phandle(np, "shmem", 0); 731 + ret = of_address_to_resource(shmem, 0, &res); 732 + of_node_put(shmem); 733 + if (ret) { 734 + dev_err(dev, "failed to get SCMI Tx payload mem resource\n"); 735 + return ret; 736 + } 737 + 738 + size = resource_size(&res); 739 + cinfo->payload = devm_ioremap(info->dev, res.start, size); 740 + if (!cinfo->payload) { 741 + dev_err(dev, "failed to ioremap SCMI Tx payload\n"); 742 + return -EADDRNOTAVAIL; 743 + } 744 + 745 + /* Transmit channel is first entry i.e. 
index 0 */ 746 + cinfo->chan = mbox_request_channel(cl, 0); 747 + if (IS_ERR(cinfo->chan)) { 748 + ret = PTR_ERR(cinfo->chan); 749 + if (ret != -EPROBE_DEFER) 750 + dev_err(dev, "failed to request SCMI Tx mailbox\n"); 751 + return ret; 752 + } 753 + 754 + idr_alloc: 755 + ret = idr_alloc(&info->tx_idr, cinfo, prot_id, prot_id + 1, GFP_KERNEL); 756 + if (ret != prot_id) { 757 + dev_err(dev, "unable to allocate SCMI idr slot err %d\n", ret); 758 + return ret; 759 + } 760 + 761 + cinfo->handle = &info->handle; 762 + return 0; 763 + } 764 + 765 + static inline void 766 + scmi_create_protocol_device(struct device_node *np, struct scmi_info *info, 767 + int prot_id) 768 + { 769 + struct scmi_device *sdev; 770 + 771 + sdev = scmi_device_create(np, info->dev, prot_id); 772 + if (!sdev) { 773 + dev_err(info->dev, "failed to create %d protocol device\n", 774 + prot_id); 775 + return; 776 + } 777 + 778 + if (scmi_mbox_chan_setup(info, &sdev->dev, prot_id)) { 779 + dev_err(&sdev->dev, "failed to setup transport\n"); 780 + scmi_device_destroy(sdev); 781 + } 782 + 783 + /* setup handle now as the transport is ready */ 784 + scmi_set_handle(sdev); 785 + } 786 + 787 + static int scmi_probe(struct platform_device *pdev) 788 + { 789 + int ret; 790 + struct scmi_handle *handle; 791 + const struct scmi_desc *desc; 792 + struct scmi_info *info; 793 + struct device *dev = &pdev->dev; 794 + struct device_node *child, *np = dev->of_node; 795 + 796 + /* Only mailbox method supported, check for the presence of one */ 797 + if (scmi_mailbox_check(np)) { 798 + dev_err(dev, "no mailbox found in %pOF\n", np); 799 + return -EINVAL; 800 + } 801 + 802 + desc = of_match_device(scmi_of_match, dev)->data; 803 + 804 + info = devm_kzalloc(dev, sizeof(*info), GFP_KERNEL); 805 + if (!info) 806 + return -ENOMEM; 807 + 808 + info->dev = dev; 809 + info->desc = desc; 810 + INIT_LIST_HEAD(&info->node); 811 + 812 + ret = scmi_xfer_info_init(info); 813 + if (ret) 814 + return ret; 815 + 816 + 
platform_set_drvdata(pdev, info); 817 + idr_init(&info->tx_idr); 818 + 819 + handle = &info->handle; 820 + handle->dev = info->dev; 821 + handle->version = &info->version; 822 + 823 + ret = scmi_mbox_chan_setup(info, dev, SCMI_PROTOCOL_BASE); 824 + if (ret) 825 + return ret; 826 + 827 + ret = scmi_base_protocol_init(handle); 828 + if (ret) { 829 + dev_err(dev, "unable to communicate with SCMI(%d)\n", ret); 830 + return ret; 831 + } 832 + 833 + mutex_lock(&scmi_list_mutex); 834 + list_add_tail(&info->node, &scmi_list); 835 + mutex_unlock(&scmi_list_mutex); 836 + 837 + for_each_available_child_of_node(np, child) { 838 + u32 prot_id; 839 + 840 + if (of_property_read_u32(child, "reg", &prot_id)) 841 + continue; 842 + 843 + prot_id &= MSG_PROTOCOL_ID_MASK; 844 + 845 + if (!scmi_is_protocol_implemented(handle, prot_id)) { 846 + dev_err(dev, "SCMI protocol %d not implemented\n", 847 + prot_id); 848 + continue; 849 + } 850 + 851 + scmi_create_protocol_device(child, info, prot_id); 852 + } 853 + 854 + return 0; 855 + } 856 + 857 + static struct platform_driver scmi_driver = { 858 + .driver = { 859 + .name = "arm-scmi", 860 + .of_match_table = scmi_of_match, 861 + }, 862 + .probe = scmi_probe, 863 + .remove = scmi_remove, 864 + }; 865 + 866 + module_platform_driver(scmi_driver); 867 + 868 + MODULE_ALIAS("platform: arm-scmi"); 869 + MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>"); 870 + MODULE_DESCRIPTION("ARM SCMI protocol driver"); 871 + MODULE_LICENSE("GPL v2");
drivers/firmware/arm_scmi/perf.c (+481 lines)
// SPDX-License-Identifier: GPL-2.0
/*
 * System Control and Management Interface (SCMI) Performance Protocol
 *
 * Copyright (C) 2018 ARM Ltd.
 */

#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/pm_opp.h>
#include <linux/sort.h>

#include "common.h"

enum scmi_performance_protocol_cmd {
	PERF_DOMAIN_ATTRIBUTES = 0x3,
	PERF_DESCRIBE_LEVELS = 0x4,
	PERF_LIMITS_SET = 0x5,
	PERF_LIMITS_GET = 0x6,
	PERF_LEVEL_SET = 0x7,
	PERF_LEVEL_GET = 0x8,
	PERF_NOTIFY_LIMITS = 0x9,
	PERF_NOTIFY_LEVEL = 0xa,
};

struct scmi_opp {
	u32 perf;
	u32 power;
	u32 trans_latency_us;
};

struct scmi_msg_resp_perf_attributes {
	__le16 num_domains;
	__le16 flags;
#define POWER_SCALE_IN_MILLIWATT(x)	((x) & BIT(0))
	__le32 stats_addr_low;
	__le32 stats_addr_high;
	__le32 stats_size;
};

struct scmi_msg_resp_perf_domain_attributes {
	__le32 flags;
#define SUPPORTS_SET_LIMITS(x)		((x) & BIT(31))
#define SUPPORTS_SET_PERF_LVL(x)	((x) & BIT(30))
#define SUPPORTS_PERF_LIMIT_NOTIFY(x)	((x) & BIT(29))
#define SUPPORTS_PERF_LEVEL_NOTIFY(x)	((x) & BIT(28))
	__le32 rate_limit_us;
	__le32 sustained_freq_khz;
	__le32 sustained_perf_level;
	u8 name[SCMI_MAX_STR_SIZE];
};

struct scmi_msg_perf_describe_levels {
	__le32 domain;
	__le32 level_index;
};

struct scmi_perf_set_limits {
	__le32 domain;
	__le32 max_level;
	__le32 min_level;
};

struct scmi_perf_get_limits {
	__le32 max_level;
	__le32 min_level;
};

struct scmi_perf_set_level {
	__le32 domain;
	__le32 level;
};

struct scmi_perf_notify_level_or_limits {
	__le32 domain;
	__le32 notify_enable;
};

struct scmi_msg_resp_perf_describe_levels {
	__le16 num_returned;
	__le16 num_remaining;
	struct {
		__le32 perf_val;
		__le32 power;
		__le16 transition_latency_us;
		__le16 reserved;
	} opp[0];
};

struct perf_dom_info {
	bool set_limits;
	bool set_perf;
	bool perf_limit_notify;
	bool perf_level_notify;
	u32 opp_count;
	u32 sustained_freq_khz;
	u32 sustained_perf_level;
	u32 mult_factor;
	char name[SCMI_MAX_STR_SIZE];
	struct scmi_opp opp[MAX_OPPS];
};

struct scmi_perf_info {
	int num_domains;
	bool power_scale_mw;
	u64 stats_addr;
	u32 stats_size;
	struct perf_dom_info *dom_info;
};

static int scmi_perf_attributes_get(const struct scmi_handle *handle,
				    struct scmi_perf_info *pi)
{
	int ret;
	struct scmi_xfer *t;
	struct scmi_msg_resp_perf_attributes *attr;

	ret = scmi_one_xfer_init(handle, PROTOCOL_ATTRIBUTES,
				 SCMI_PROTOCOL_PERF, 0, sizeof(*attr), &t);
	if (ret)
		return ret;

	attr = t->rx.buf;

	ret = scmi_do_xfer(handle, t);
	if (!ret) {
		u16 flags = le16_to_cpu(attr->flags);

		pi->num_domains = le16_to_cpu(attr->num_domains);
		pi->power_scale_mw = POWER_SCALE_IN_MILLIWATT(flags);
		pi->stats_addr = le32_to_cpu(attr->stats_addr_low) |
				(u64)le32_to_cpu(attr->stats_addr_high) << 32;
		pi->stats_size = le32_to_cpu(attr->stats_size);
	}

	scmi_one_xfer_put(handle, t);
	return ret;
}

static int
scmi_perf_domain_attributes_get(const struct scmi_handle *handle, u32 domain,
				struct perf_dom_info *dom_info)
{
	int ret;
	struct scmi_xfer *t;
	struct scmi_msg_resp_perf_domain_attributes *attr;

	ret = scmi_one_xfer_init(handle, PERF_DOMAIN_ATTRIBUTES,
				 SCMI_PROTOCOL_PERF, sizeof(domain),
				 sizeof(*attr), &t);
	if (ret)
		return ret;

	*(__le32 *)t->tx.buf = cpu_to_le32(domain);
	attr = t->rx.buf;

	ret = scmi_do_xfer(handle, t);
	if (!ret) {
		u32 flags = le32_to_cpu(attr->flags);

		dom_info->set_limits = SUPPORTS_SET_LIMITS(flags);
		dom_info->set_perf = SUPPORTS_SET_PERF_LVL(flags);
		dom_info->perf_limit_notify = SUPPORTS_PERF_LIMIT_NOTIFY(flags);
		dom_info->perf_level_notify = SUPPORTS_PERF_LEVEL_NOTIFY(flags);
		dom_info->sustained_freq_khz =
					le32_to_cpu(attr->sustained_freq_khz);
		dom_info->sustained_perf_level =
					le32_to_cpu(attr->sustained_perf_level);
		dom_info->mult_factor = (dom_info->sustained_freq_khz * 1000) /
					dom_info->sustained_perf_level;
		memcpy(dom_info->name, attr->name, SCMI_MAX_STR_SIZE);
	}

	scmi_one_xfer_put(handle, t);
	return ret;
}

static int opp_cmp_func(const void *opp1, const void *opp2)
{
	const struct scmi_opp *t1 = opp1, *t2 = opp2;

	return t1->perf - t2->perf;
}

static int
scmi_perf_describe_levels_get(const struct scmi_handle *handle, u32 domain,
			      struct perf_dom_info *perf_dom)
{
	int ret, cnt;
	u32 tot_opp_cnt = 0;
	u16 num_returned, num_remaining;
	struct scmi_xfer *t;
	struct scmi_opp *opp;
	struct scmi_msg_perf_describe_levels *dom_info;
	struct scmi_msg_resp_perf_describe_levels *level_info;

	ret = scmi_one_xfer_init(handle, PERF_DESCRIBE_LEVELS,
				 SCMI_PROTOCOL_PERF, sizeof(*dom_info), 0, &t);
	if (ret)
		return ret;

	dom_info = t->tx.buf;
	level_info = t->rx.buf;

	do {
		dom_info->domain = cpu_to_le32(domain);
		/* Set the number of OPPs to be skipped/already read */
		dom_info->level_index = cpu_to_le32(tot_opp_cnt);

		ret = scmi_do_xfer(handle, t);
		if (ret)
			break;

		num_returned = le16_to_cpu(level_info->num_returned);
		num_remaining = le16_to_cpu(level_info->num_remaining);
		if (tot_opp_cnt + num_returned > MAX_OPPS) {
			dev_err(handle->dev, "No. of OPPs exceeded MAX_OPPS\n");
			break;
		}

		opp = &perf_dom->opp[tot_opp_cnt];
		for (cnt = 0; cnt < num_returned; cnt++, opp++) {
			opp->perf = le32_to_cpu(level_info->opp[cnt].perf_val);
			opp->power = le32_to_cpu(level_info->opp[cnt].power);
			opp->trans_latency_us = le16_to_cpu(
				level_info->opp[cnt].transition_latency_us);

			dev_dbg(handle->dev, "Level %d Power %d Latency %dus\n",
				opp->perf, opp->power, opp->trans_latency_us);
		}

		tot_opp_cnt += num_returned;
		/*
		 * check for both returned and remaining to avoid infinite
		 * loop due to buggy firmware
		 */
	} while (num_returned && num_remaining);

	perf_dom->opp_count = tot_opp_cnt;
	scmi_one_xfer_put(handle, t);

	sort(perf_dom->opp, tot_opp_cnt, sizeof(*opp), opp_cmp_func, NULL);
	return ret;
}

static int scmi_perf_limits_set(const struct scmi_handle *handle, u32 domain,
				u32 max_perf, u32 min_perf)
{
	int ret;
	struct scmi_xfer *t;
	struct scmi_perf_set_limits *limits;

	ret = scmi_one_xfer_init(handle, PERF_LIMITS_SET, SCMI_PROTOCOL_PERF,
				 sizeof(*limits), 0, &t);
	if (ret)
		return ret;

	limits = t->tx.buf;
	limits->domain = cpu_to_le32(domain);
	limits->max_level = cpu_to_le32(max_perf);
	limits->min_level = cpu_to_le32(min_perf);

	ret = scmi_do_xfer(handle, t);

	scmi_one_xfer_put(handle, t);
	return ret;
}

static int scmi_perf_limits_get(const struct scmi_handle *handle, u32 domain,
				u32 *max_perf, u32 *min_perf)
{
	int ret;
	struct scmi_xfer *t;
	struct scmi_perf_get_limits *limits;

	ret = scmi_one_xfer_init(handle, PERF_LIMITS_GET, SCMI_PROTOCOL_PERF,
				 sizeof(__le32), 0, &t);
	if (ret)
		return ret;

	*(__le32 *)t->tx.buf = cpu_to_le32(domain);

	ret = scmi_do_xfer(handle, t);
	if (!ret) {
		limits = t->rx.buf;

		*max_perf = le32_to_cpu(limits->max_level);
		*min_perf = le32_to_cpu(limits->min_level);
	}

	scmi_one_xfer_put(handle, t);
	return ret;
}

static int scmi_perf_level_set(const struct scmi_handle *handle, u32 domain,
			       u32 level, bool poll)
{
	int ret;
	struct scmi_xfer *t;
	struct scmi_perf_set_level *lvl;

	ret = scmi_one_xfer_init(handle, PERF_LEVEL_SET, SCMI_PROTOCOL_PERF,
				 sizeof(*lvl), 0, &t);
	if (ret)
		return ret;

	t->hdr.poll_completion = poll;
	lvl = t->tx.buf;
	lvl->domain = cpu_to_le32(domain);
	lvl->level = cpu_to_le32(level);

	ret = scmi_do_xfer(handle, t);

	scmi_one_xfer_put(handle, t);
	return ret;
}

static int scmi_perf_level_get(const struct scmi_handle *handle, u32 domain,
			       u32 *level, bool poll)
{
	int ret;
	struct scmi_xfer *t;

	ret = scmi_one_xfer_init(handle, PERF_LEVEL_GET, SCMI_PROTOCOL_PERF,
				 sizeof(u32), sizeof(u32), &t);
	if (ret)
		return ret;

	t->hdr.poll_completion = poll;
	*(__le32 *)t->tx.buf = cpu_to_le32(domain);

	ret = scmi_do_xfer(handle, t);
	if (!ret)
		*level = le32_to_cpu(*(__le32 *)t->rx.buf);

	scmi_one_xfer_put(handle, t);
	return ret;
}

/* Device specific ops */
static int scmi_dev_domain_id(struct device *dev)
{
	struct of_phandle_args clkspec;

	if (of_parse_phandle_with_args(dev->of_node, "clocks", "#clock-cells",
				       0, &clkspec))
		return -EINVAL;

	return clkspec.args[0];
}

static int scmi_dvfs_add_opps_to_device(const struct scmi_handle *handle,
					struct device *dev)
{
	int idx, ret, domain;
	unsigned long freq;
	struct scmi_opp *opp;
	struct perf_dom_info *dom;
	struct scmi_perf_info *pi = handle->perf_priv;

	domain = scmi_dev_domain_id(dev);
	if (domain < 0)
		return domain;

	dom = pi->dom_info + domain;
	if (!dom)
		return -EIO;

	for (opp = dom->opp, idx = 0; idx < dom->opp_count; idx++, opp++) {
		freq = opp->perf * dom->mult_factor;

		ret = dev_pm_opp_add(dev, freq, 0);
		if (ret) {
			dev_warn(dev, "failed to add opp %luHz\n", freq);

			while (idx-- > 0) {
				freq = (--opp)->perf * dom->mult_factor;
				dev_pm_opp_remove(dev, freq);
			}
			return ret;
		}
	}
	return 0;
}

static int scmi_dvfs_get_transition_latency(const struct scmi_handle *handle,
					    struct device *dev)
{
	struct perf_dom_info *dom;
	struct scmi_perf_info *pi = handle->perf_priv;
	int domain = scmi_dev_domain_id(dev);

	if (domain < 0)
		return domain;

	dom = pi->dom_info + domain;
	if (!dom)
		return -EIO;

	/* convert us to ns */
	return dom->opp[dom->opp_count - 1].trans_latency_us * 1000;
}

static int scmi_dvfs_freq_set(const struct scmi_handle *handle, u32 domain,
			      unsigned long freq, bool poll)
{
	struct scmi_perf_info *pi = handle->perf_priv;
	struct perf_dom_info *dom = pi->dom_info + domain;

	return scmi_perf_level_set(handle, domain, freq / dom->mult_factor,
				   poll);
}

static int scmi_dvfs_freq_get(const struct scmi_handle *handle, u32 domain,
			      unsigned long *freq, bool poll)
{
	int ret;
	u32 level;
	struct scmi_perf_info *pi = handle->perf_priv;
	struct perf_dom_info *dom = pi->dom_info + domain;

	ret = scmi_perf_level_get(handle, domain, &level, poll);
	if (!ret)
		*freq = level * dom->mult_factor;

	return ret;
}

static struct scmi_perf_ops perf_ops = {
	.limits_set = scmi_perf_limits_set,
	.limits_get = scmi_perf_limits_get,
	.level_set = scmi_perf_level_set,
	.level_get = scmi_perf_level_get,
	.device_domain_id = scmi_dev_domain_id,
	.get_transition_latency = scmi_dvfs_get_transition_latency,
	.add_opps_to_device = scmi_dvfs_add_opps_to_device,
	.freq_set = scmi_dvfs_freq_set,
	.freq_get = scmi_dvfs_freq_get,
};

static int scmi_perf_protocol_init(struct scmi_handle *handle)
{
	int domain;
	u32 version;
	struct scmi_perf_info *pinfo;

	scmi_version_get(handle, SCMI_PROTOCOL_PERF, &version);

	dev_dbg(handle->dev, "Performance Version %d.%d\n",
		PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));

	pinfo = devm_kzalloc(handle->dev, sizeof(*pinfo), GFP_KERNEL);
	if (!pinfo)
		return -ENOMEM;

	scmi_perf_attributes_get(handle, pinfo);

	pinfo->dom_info = devm_kcalloc(handle->dev, pinfo->num_domains,
				       sizeof(*pinfo->dom_info), GFP_KERNEL);
	if (!pinfo->dom_info)
		return -ENOMEM;

	for (domain = 0; domain < pinfo->num_domains; domain++) {
		struct perf_dom_info *dom = pinfo->dom_info + domain;

		scmi_perf_domain_attributes_get(handle, domain, dom);
		scmi_perf_describe_levels_get(handle, domain, dom);
	}

	handle->perf_ops = &perf_ops;
	handle->perf_priv = pinfo;

	return 0;
}

static int __init scmi_perf_init(void)
{
	return scmi_protocol_register(SCMI_PROTOCOL_PERF,
				      &scmi_perf_protocol_init);
}
subsys_initcall(scmi_perf_init);
drivers/firmware/arm_scmi/power.c (+221 lines)
// SPDX-License-Identifier: GPL-2.0
/*
 * System Control and Management Interface (SCMI) Power Protocol
 *
 * Copyright (C) 2018 ARM Ltd.
 */

#include "common.h"

enum scmi_power_protocol_cmd {
	POWER_DOMAIN_ATTRIBUTES = 0x3,
	POWER_STATE_SET = 0x4,
	POWER_STATE_GET = 0x5,
	POWER_STATE_NOTIFY = 0x6,
};

struct scmi_msg_resp_power_attributes {
	__le16 num_domains;
	__le16 reserved;
	__le32 stats_addr_low;
	__le32 stats_addr_high;
	__le32 stats_size;
};

struct scmi_msg_resp_power_domain_attributes {
	__le32 flags;
#define SUPPORTS_STATE_SET_NOTIFY(x)	((x) & BIT(31))
#define SUPPORTS_STATE_SET_ASYNC(x)	((x) & BIT(30))
#define SUPPORTS_STATE_SET_SYNC(x)	((x) & BIT(29))
	u8 name[SCMI_MAX_STR_SIZE];
};

struct scmi_power_set_state {
	__le32 flags;
#define STATE_SET_ASYNC		BIT(0)
	__le32 domain;
	__le32 state;
};

struct scmi_power_state_notify {
	__le32 domain;
	__le32 notify_enable;
};

struct power_dom_info {
	bool state_set_sync;
	bool state_set_async;
	bool state_set_notify;
	char name[SCMI_MAX_STR_SIZE];
};

struct scmi_power_info {
	int num_domains;
	u64 stats_addr;
	u32 stats_size;
	struct power_dom_info *dom_info;
};

static int scmi_power_attributes_get(const struct scmi_handle *handle,
				     struct scmi_power_info *pi)
{
	int ret;
	struct scmi_xfer *t;
	struct scmi_msg_resp_power_attributes *attr;

	ret = scmi_one_xfer_init(handle, PROTOCOL_ATTRIBUTES,
				 SCMI_PROTOCOL_POWER, 0, sizeof(*attr), &t);
	if (ret)
		return ret;

	attr = t->rx.buf;

	ret = scmi_do_xfer(handle, t);
	if (!ret) {
		pi->num_domains = le16_to_cpu(attr->num_domains);
		pi->stats_addr = le32_to_cpu(attr->stats_addr_low) |
				 (u64)le32_to_cpu(attr->stats_addr_high) << 32;
		pi->stats_size = le32_to_cpu(attr->stats_size);
	}

	scmi_one_xfer_put(handle, t);
	return ret;
}

static int
scmi_power_domain_attributes_get(const struct scmi_handle *handle, u32 domain,
				 struct power_dom_info *dom_info)
{
	int ret;
	struct scmi_xfer *t;
	struct scmi_msg_resp_power_domain_attributes *attr;

	ret = scmi_one_xfer_init(handle, POWER_DOMAIN_ATTRIBUTES,
				 SCMI_PROTOCOL_POWER, sizeof(domain),
				 sizeof(*attr), &t);
	if (ret)
		return ret;

	*(__le32 *)t->tx.buf = cpu_to_le32(domain);
	attr = t->rx.buf;

	ret = scmi_do_xfer(handle, t);
	if (!ret) {
		u32 flags = le32_to_cpu(attr->flags);

		dom_info->state_set_notify = SUPPORTS_STATE_SET_NOTIFY(flags);
		dom_info->state_set_async = SUPPORTS_STATE_SET_ASYNC(flags);
		dom_info->state_set_sync = SUPPORTS_STATE_SET_SYNC(flags);
		memcpy(dom_info->name, attr->name, SCMI_MAX_STR_SIZE);
	}

	scmi_one_xfer_put(handle, t);
	return ret;
}

static int
scmi_power_state_set(const struct scmi_handle *handle, u32 domain, u32 state)
{
	int ret;
	struct scmi_xfer *t;
	struct scmi_power_set_state *st;

	ret = scmi_one_xfer_init(handle, POWER_STATE_SET, SCMI_PROTOCOL_POWER,
				 sizeof(*st), 0, &t);
	if (ret)
		return ret;

	st = t->tx.buf;
	st->flags = cpu_to_le32(0);
	st->domain = cpu_to_le32(domain);
	st->state = cpu_to_le32(state);

	ret = scmi_do_xfer(handle, t);

	scmi_one_xfer_put(handle, t);
	return ret;
}

static int
scmi_power_state_get(const struct scmi_handle *handle, u32 domain, u32 *state)
{
	int ret;
	struct scmi_xfer *t;

	ret = scmi_one_xfer_init(handle, POWER_STATE_GET, SCMI_PROTOCOL_POWER,
				 sizeof(u32), sizeof(u32), &t);
	if (ret)
		return ret;

	*(__le32 *)t->tx.buf = cpu_to_le32(domain);

	ret = scmi_do_xfer(handle, t);
	if (!ret)
		*state = le32_to_cpu(*(__le32 *)t->rx.buf);

	scmi_one_xfer_put(handle, t);
	return ret;
}

static int scmi_power_num_domains_get(const struct scmi_handle *handle)
{
	struct scmi_power_info *pi = handle->power_priv;

	return pi->num_domains;
}

static char *scmi_power_name_get(const struct scmi_handle *handle, u32 domain)
{
	struct scmi_power_info *pi = handle->power_priv;
	struct power_dom_info *dom = pi->dom_info + domain;

	return dom->name;
}

static struct scmi_power_ops power_ops = {
	.num_domains_get = scmi_power_num_domains_get,
	.name_get = scmi_power_name_get,
	.state_set = scmi_power_state_set,
	.state_get = scmi_power_state_get,
};

static int scmi_power_protocol_init(struct scmi_handle *handle)
{
	int domain;
	u32 version;
	struct scmi_power_info *pinfo;

	scmi_version_get(handle, SCMI_PROTOCOL_POWER, &version);

	dev_dbg(handle->dev, "Power Version %d.%d\n",
		PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));

	pinfo = devm_kzalloc(handle->dev, sizeof(*pinfo), GFP_KERNEL);
	if (!pinfo)
		return -ENOMEM;

	scmi_power_attributes_get(handle, pinfo);

	pinfo->dom_info = devm_kcalloc(handle->dev, pinfo->num_domains,
				       sizeof(*pinfo->dom_info), GFP_KERNEL);
	if (!pinfo->dom_info)
		return -ENOMEM;

	for (domain = 0; domain < pinfo->num_domains; domain++) {
		struct power_dom_info *dom = pinfo->dom_info + domain;

		scmi_power_domain_attributes_get(handle, domain, dom);
	}

	handle->power_ops = &power_ops;
	handle->power_priv = pinfo;

	return 0;
}

static int __init scmi_power_init(void)
{
	return scmi_protocol_register(SCMI_PROTOCOL_POWER,
				      &scmi_power_protocol_init);
}
subsys_initcall(scmi_power_init);
drivers/firmware/arm_scmi/scmi_pm_domain.c (+129)
// SPDX-License-Identifier: GPL-2.0
/*
 * SCMI Generic power domain support.
 *
 * Copyright (C) 2018 ARM Ltd.
 */

#include <linux/err.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/pm_domain.h>
#include <linux/scmi_protocol.h>

struct scmi_pm_domain {
	struct generic_pm_domain genpd;
	const struct scmi_handle *handle;
	const char *name;
	u32 domain;
};

#define to_scmi_pd(gpd) container_of(gpd, struct scmi_pm_domain, genpd)

static int scmi_pd_power(struct generic_pm_domain *domain, bool power_on)
{
	int ret;
	u32 state, ret_state;
	struct scmi_pm_domain *pd = to_scmi_pd(domain);
	const struct scmi_power_ops *ops = pd->handle->power_ops;

	if (power_on)
		state = SCMI_POWER_STATE_GENERIC_ON;
	else
		state = SCMI_POWER_STATE_GENERIC_OFF;

	ret = ops->state_set(pd->handle, pd->domain, state);
	if (!ret)
		ret = ops->state_get(pd->handle, pd->domain, &ret_state);
	if (!ret && state != ret_state)
		return -EIO;

	return ret;
}

static int scmi_pd_power_on(struct generic_pm_domain *domain)
{
	return scmi_pd_power(domain, true);
}

static int scmi_pd_power_off(struct generic_pm_domain *domain)
{
	return scmi_pd_power(domain, false);
}

static int scmi_pm_domain_probe(struct scmi_device *sdev)
{
	int num_domains, i;
	struct device *dev = &sdev->dev;
	struct device_node *np = dev->of_node;
	struct scmi_pm_domain *scmi_pd;
	struct genpd_onecell_data *scmi_pd_data;
	struct generic_pm_domain **domains;
	const struct scmi_handle *handle = sdev->handle;

	if (!handle || !handle->power_ops)
		return -ENODEV;

	num_domains = handle->power_ops->num_domains_get(handle);
	if (num_domains < 0) {
		dev_err(dev, "number of domains not found\n");
		return num_domains;
	}

	scmi_pd = devm_kcalloc(dev, num_domains, sizeof(*scmi_pd), GFP_KERNEL);
	if (!scmi_pd)
		return -ENOMEM;

	scmi_pd_data = devm_kzalloc(dev, sizeof(*scmi_pd_data), GFP_KERNEL);
	if (!scmi_pd_data)
		return -ENOMEM;

	domains = devm_kcalloc(dev, num_domains, sizeof(*domains), GFP_KERNEL);
	if (!domains)
		return -ENOMEM;

	for (i = 0; i < num_domains; i++, scmi_pd++) {
		u32 state;

		domains[i] = &scmi_pd->genpd;

		scmi_pd->domain = i;
		scmi_pd->handle = handle;
		scmi_pd->name = handle->power_ops->name_get(handle, i);
		scmi_pd->genpd.name = scmi_pd->name;
		scmi_pd->genpd.power_off = scmi_pd_power_off;
		scmi_pd->genpd.power_on = scmi_pd_power_on;

		if (handle->power_ops->state_get(handle, i, &state)) {
			dev_warn(dev, "failed to get state for domain %d\n", i);
			continue;
		}

		pm_genpd_init(&scmi_pd->genpd, NULL,
			      state == SCMI_POWER_STATE_GENERIC_OFF);
	}

	scmi_pd_data->domains = domains;
	scmi_pd_data->num_domains = num_domains;

	of_genpd_add_provider_onecell(np, scmi_pd_data);

	return 0;
}

static const struct scmi_device_id scmi_id_table[] = {
	{ SCMI_PROTOCOL_POWER },
	{ },
};
MODULE_DEVICE_TABLE(scmi, scmi_id_table);

static struct scmi_driver scmi_power_domain_driver = {
	.name = "scmi-power-domain",
	.probe = scmi_pm_domain_probe,
	.id_table = scmi_id_table,
};
module_scmi_driver(scmi_power_domain_driver);

MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>");
MODULE_DESCRIPTION("ARM SCMI power domain driver");
MODULE_LICENSE("GPL v2");
drivers/firmware/arm_scmi/sensors.c (+291)
// SPDX-License-Identifier: GPL-2.0
/*
 * System Control and Management Interface (SCMI) Sensor Protocol
 *
 * Copyright (C) 2018 ARM Ltd.
 */

#include "common.h"

enum scmi_sensor_protocol_cmd {
	SENSOR_DESCRIPTION_GET = 0x3,
	SENSOR_CONFIG_SET = 0x4,
	SENSOR_TRIP_POINT_SET = 0x5,
	SENSOR_READING_GET = 0x6,
};

struct scmi_msg_resp_sensor_attributes {
	__le16 num_sensors;
	u8 max_requests;
	u8 reserved;
	__le32 reg_addr_low;
	__le32 reg_addr_high;
	__le32 reg_size;
};

struct scmi_msg_resp_sensor_description {
	__le16 num_returned;
	__le16 num_remaining;
	struct {
		__le32 id;
		__le32 attributes_low;
#define SUPPORTS_ASYNC_READ(x)	((x) & BIT(31))
#define NUM_TRIP_POINTS(x)	(((x) >> 4) & 0xff)
		__le32 attributes_high;
#define SENSOR_TYPE(x)		((x) & 0xff)
#define SENSOR_SCALE(x)		(((x) >> 11) & 0x3f)
#define SENSOR_UPDATE_SCALE(x)	(((x) >> 22) & 0x1f)
#define SENSOR_UPDATE_BASE(x)	(((x) >> 27) & 0x1f)
		u8 name[SCMI_MAX_STR_SIZE];
	} desc[0];
};

struct scmi_msg_set_sensor_config {
	__le32 id;
	__le32 event_control;
};

struct scmi_msg_set_sensor_trip_point {
	__le32 id;
	__le32 event_control;
#define SENSOR_TP_EVENT_MASK	(0x3)
#define SENSOR_TP_DISABLED	0x0
#define SENSOR_TP_POSITIVE	0x1
#define SENSOR_TP_NEGATIVE	0x2
#define SENSOR_TP_BOTH		0x3
#define SENSOR_TP_ID(x)		(((x) & 0xff) << 4)
	__le32 value_low;
	__le32 value_high;
};

struct scmi_msg_sensor_reading_get {
	__le32 id;
	__le32 flags;
#define SENSOR_READ_ASYNC	BIT(0)
};

struct sensors_info {
	int num_sensors;
	int max_requests;
	u64 reg_addr;
	u32 reg_size;
	struct scmi_sensor_info *sensors;
};

static int scmi_sensor_attributes_get(const struct scmi_handle *handle,
				      struct sensors_info *si)
{
	int ret;
	struct scmi_xfer *t;
	struct scmi_msg_resp_sensor_attributes *attr;

	ret = scmi_one_xfer_init(handle, PROTOCOL_ATTRIBUTES,
				 SCMI_PROTOCOL_SENSOR, 0, sizeof(*attr), &t);
	if (ret)
		return ret;

	attr = t->rx.buf;

	ret = scmi_do_xfer(handle, t);
	if (!ret) {
		si->num_sensors = le16_to_cpu(attr->num_sensors);
		si->max_requests = attr->max_requests;
		si->reg_addr = le32_to_cpu(attr->reg_addr_low) |
			       (u64)le32_to_cpu(attr->reg_addr_high) << 32;
		si->reg_size = le32_to_cpu(attr->reg_size);
	}

	scmi_one_xfer_put(handle, t);
	return ret;
}

static int scmi_sensor_description_get(const struct scmi_handle *handle,
				       struct sensors_info *si)
{
	int ret, cnt;
	u32 desc_index = 0;
	u16 num_returned, num_remaining;
	struct scmi_xfer *t;
	struct scmi_msg_resp_sensor_description *buf;

	ret = scmi_one_xfer_init(handle, SENSOR_DESCRIPTION_GET,
				 SCMI_PROTOCOL_SENSOR, sizeof(__le32), 0, &t);
	if (ret)
		return ret;

	buf = t->rx.buf;

	do {
		/* Set the number of sensors to be skipped/already read */
		*(__le32 *)t->tx.buf = cpu_to_le32(desc_index);

		ret = scmi_do_xfer(handle, t);
		if (ret)
			break;

		num_returned = le16_to_cpu(buf->num_returned);
		num_remaining = le16_to_cpu(buf->num_remaining);

		if (desc_index + num_returned > si->num_sensors) {
			dev_err(handle->dev, "No. of sensors can't exceed %d",
				si->num_sensors);
			break;
		}

		for (cnt = 0; cnt < num_returned; cnt++) {
			u32 attrh;
			struct scmi_sensor_info *s;

			attrh = le32_to_cpu(buf->desc[cnt].attributes_high);
			s = &si->sensors[desc_index + cnt];
			s->id = le32_to_cpu(buf->desc[cnt].id);
			s->type = SENSOR_TYPE(attrh);
			memcpy(s->name, buf->desc[cnt].name, SCMI_MAX_STR_SIZE);
		}

		desc_index += num_returned;
		/*
		 * check for both returned and remaining to avoid infinite
		 * loop due to buggy firmware
		 */
	} while (num_returned && num_remaining);

	scmi_one_xfer_put(handle, t);
	return ret;
}

static int
scmi_sensor_configuration_set(const struct scmi_handle *handle, u32 sensor_id)
{
	int ret;
	u32 evt_cntl = BIT(0);
	struct scmi_xfer *t;
	struct scmi_msg_set_sensor_config *cfg;

	ret = scmi_one_xfer_init(handle, SENSOR_CONFIG_SET,
				 SCMI_PROTOCOL_SENSOR, sizeof(*cfg), 0, &t);
	if (ret)
		return ret;

	cfg = t->tx.buf;
	cfg->id = cpu_to_le32(sensor_id);
	cfg->event_control = cpu_to_le32(evt_cntl);

	ret = scmi_do_xfer(handle, t);

	scmi_one_xfer_put(handle, t);
	return ret;
}

static int scmi_sensor_trip_point_set(const struct scmi_handle *handle,
				      u32 sensor_id, u8 trip_id, u64 trip_value)
{
	int ret;
	u32 evt_cntl = SENSOR_TP_BOTH;
	struct scmi_xfer *t;
	struct scmi_msg_set_sensor_trip_point *trip;

	ret = scmi_one_xfer_init(handle, SENSOR_TRIP_POINT_SET,
				 SCMI_PROTOCOL_SENSOR, sizeof(*trip), 0, &t);
	if (ret)
		return ret;

	trip = t->tx.buf;
	trip->id = cpu_to_le32(sensor_id);
	trip->event_control = cpu_to_le32(evt_cntl | SENSOR_TP_ID(trip_id));
	trip->value_low = cpu_to_le32(trip_value & 0xffffffff);
	trip->value_high = cpu_to_le32(trip_value >> 32);

	ret = scmi_do_xfer(handle, t);

	scmi_one_xfer_put(handle, t);
	return ret;
}

static int scmi_sensor_reading_get(const struct scmi_handle *handle,
				   u32 sensor_id, bool async, u64 *value)
{
	int ret;
	struct scmi_xfer *t;
	struct scmi_msg_sensor_reading_get *sensor;

	ret = scmi_one_xfer_init(handle, SENSOR_READING_GET,
				 SCMI_PROTOCOL_SENSOR, sizeof(*sensor),
				 sizeof(u64), &t);
	if (ret)
		return ret;

	sensor = t->tx.buf;
	sensor->id = cpu_to_le32(sensor_id);
	sensor->flags = cpu_to_le32(async ? SENSOR_READ_ASYNC : 0);

	ret = scmi_do_xfer(handle, t);
	if (!ret) {
		__le32 *pval = t->rx.buf;

		*value = le32_to_cpu(*pval);
		*value |= (u64)le32_to_cpu(*(pval + 1)) << 32;
	}

	scmi_one_xfer_put(handle, t);
	return ret;
}

static const struct scmi_sensor_info *
scmi_sensor_info_get(const struct scmi_handle *handle, u32 sensor_id)
{
	struct sensors_info *si = handle->sensor_priv;

	return si->sensors + sensor_id;
}

static int scmi_sensor_count_get(const struct scmi_handle *handle)
{
	struct sensors_info *si = handle->sensor_priv;

	return si->num_sensors;
}

static struct scmi_sensor_ops sensor_ops = {
	.count_get = scmi_sensor_count_get,
	.info_get = scmi_sensor_info_get,
	.configuration_set = scmi_sensor_configuration_set,
	.trip_point_set = scmi_sensor_trip_point_set,
	.reading_get = scmi_sensor_reading_get,
};

static int scmi_sensors_protocol_init(struct scmi_handle *handle)
{
	u32 version;
	struct sensors_info *sinfo;

	scmi_version_get(handle, SCMI_PROTOCOL_SENSOR, &version);

	dev_dbg(handle->dev, "Sensor Version %d.%d\n",
		PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));

	sinfo = devm_kzalloc(handle->dev, sizeof(*sinfo), GFP_KERNEL);
	if (!sinfo)
		return -ENOMEM;

	scmi_sensor_attributes_get(handle, sinfo);

	sinfo->sensors = devm_kcalloc(handle->dev, sinfo->num_sensors,
				      sizeof(*sinfo->sensors), GFP_KERNEL);
	if (!sinfo->sensors)
		return -ENOMEM;

	scmi_sensor_description_get(handle, sinfo);

	handle->sensor_ops = &sensor_ops;
	handle->sensor_priv = sinfo;

	return 0;
}

static int __init scmi_sensors_init(void)
{
	return scmi_protocol_register(SCMI_PROTOCOL_SENSOR,
				      &scmi_sensors_protocol_init);
}
subsys_initcall(scmi_sensors_init);
drivers/firmware/arm_scpi.c (+86 -119)
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
 #include <linux/bitmap.h>
+#include <linux/bitfield.h>
 #include <linux/device.h>
 #include <linux/err.h>
 #include <linux/export.h>
···
 #include <linux/sort.h>
 #include <linux/spinlock.h>
 
-#define CMD_ID_SHIFT		0
-#define CMD_ID_MASK		0x7f
-#define CMD_TOKEN_ID_SHIFT	8
-#define CMD_TOKEN_ID_MASK	0xff
-#define CMD_DATA_SIZE_SHIFT	16
-#define CMD_DATA_SIZE_MASK	0x1ff
-#define CMD_LEGACY_DATA_SIZE_SHIFT	20
-#define CMD_LEGACY_DATA_SIZE_MASK	0x1ff
-#define PACK_SCPI_CMD(cmd_id, tx_sz)	\
-	((((cmd_id) & CMD_ID_MASK) << CMD_ID_SHIFT) |	\
-	(((tx_sz) & CMD_DATA_SIZE_MASK) << CMD_DATA_SIZE_SHIFT))
-#define ADD_SCPI_TOKEN(cmd, token)	\
-	((cmd) |= (((token) & CMD_TOKEN_ID_MASK) << CMD_TOKEN_ID_SHIFT))
-#define PACK_LEGACY_SCPI_CMD(cmd_id, tx_sz)	\
-	((((cmd_id) & CMD_ID_MASK) << CMD_ID_SHIFT) |	\
-	(((tx_sz) & CMD_LEGACY_DATA_SIZE_MASK) << CMD_LEGACY_DATA_SIZE_SHIFT))
+#define CMD_ID_MASK		GENMASK(6, 0)
+#define CMD_TOKEN_ID_MASK	GENMASK(15, 8)
+#define CMD_DATA_SIZE_MASK	GENMASK(24, 16)
+#define CMD_LEGACY_DATA_SIZE_MASK	GENMASK(28, 20)
+#define PACK_SCPI_CMD(cmd_id, tx_sz)		\
+	(FIELD_PREP(CMD_ID_MASK, cmd_id) |	\
+	FIELD_PREP(CMD_DATA_SIZE_MASK, tx_sz))
+#define PACK_LEGACY_SCPI_CMD(cmd_id, tx_sz)	\
+	(FIELD_PREP(CMD_ID_MASK, cmd_id) |	\
+	FIELD_PREP(CMD_LEGACY_DATA_SIZE_MASK, tx_sz))
 
-#define CMD_SIZE(cmd)	(((cmd) >> CMD_DATA_SIZE_SHIFT) & CMD_DATA_SIZE_MASK)
-#define CMD_LEGACY_SIZE(cmd)	(((cmd) >> CMD_LEGACY_DATA_SIZE_SHIFT) & \
-					CMD_LEGACY_DATA_SIZE_MASK)
-#define CMD_UNIQ_MASK	(CMD_TOKEN_ID_MASK << CMD_TOKEN_ID_SHIFT | CMD_ID_MASK)
+#define CMD_SIZE(cmd)	FIELD_GET(CMD_DATA_SIZE_MASK, cmd)
+#define CMD_UNIQ_MASK	(CMD_TOKEN_ID_MASK | CMD_ID_MASK)
 #define CMD_XTRACT_UNIQ(cmd)	((cmd) & CMD_UNIQ_MASK)
 
 #define SCPI_SLOT		0
 
 #define MAX_DVFS_DOMAINS	8
 #define MAX_DVFS_OPPS		16
-#define DVFS_LATENCY(hdr)	(le32_to_cpu(hdr) >> 16)
-#define DVFS_OPP_COUNT(hdr)	((le32_to_cpu(hdr) >> 8) & 0xff)
 
-#define PROTOCOL_REV_MINOR_BITS	16
-#define PROTOCOL_REV_MINOR_MASK	((1U << PROTOCOL_REV_MINOR_BITS) - 1)
-#define PROTOCOL_REV_MAJOR(x)	((x) >> PROTOCOL_REV_MINOR_BITS)
-#define PROTOCOL_REV_MINOR(x)	((x) & PROTOCOL_REV_MINOR_MASK)
+#define PROTO_REV_MAJOR_MASK	GENMASK(31, 16)
+#define PROTO_REV_MINOR_MASK	GENMASK(15, 0)
 
-#define FW_REV_MAJOR_BITS	24
-#define FW_REV_MINOR_BITS	16
-#define FW_REV_PATCH_MASK	((1U << FW_REV_MINOR_BITS) - 1)
-#define FW_REV_MINOR_MASK	((1U << FW_REV_MAJOR_BITS) - 1)
-#define FW_REV_MAJOR(x)		((x) >> FW_REV_MAJOR_BITS)
-#define FW_REV_MINOR(x)		(((x) & FW_REV_MINOR_MASK) >> FW_REV_MINOR_BITS)
-#define FW_REV_PATCH(x)		((x) & FW_REV_PATCH_MASK)
+#define FW_REV_MAJOR_MASK	GENMASK(31, 24)
+#define FW_REV_MINOR_MASK	GENMASK(23, 16)
+#define FW_REV_PATCH_MASK	GENMASK(15, 0)
 
 #define MAX_RX_TIMEOUT		(msecs_to_jiffies(30))
···
 	u8 name[20];
 } __packed;
 
-struct clk_get_value {
-	__le32 rate;
-} __packed;
-
 struct clk_set_value {
 	__le16 id;
 	__le16 reserved;
···
 } __packed;
 
 struct dvfs_info {
-	__le32 header;
+	u8 domain;
+	u8 opp_count;
+	__le16 latency;
 	struct {
 		__le32 freq;
 		__le32 m_volt;
···
 	u8 index;
 } __packed;
 
-struct sensor_capabilities {
-	__le16 sensors;
-} __packed;
-
 struct _scpi_sensor_info {
 	__le16 sensor_id;
 	u8 class;
 	u8 trigger_type;
 	char name[20];
 };
-
-struct sensor_value {
-	__le32 lo_val;
-	__le32 hi_val;
-} __packed;
 
 struct dev_pstate_set {
 	__le16 dev_id;
···
 	unsigned int len;
 
 	if (scpi_info->is_legacy) {
-		struct legacy_scpi_shared_mem *mem = ch->rx_payload;
+		struct legacy_scpi_shared_mem __iomem *mem =
+							ch->rx_payload;
 
 		/* RX Length is not replied by the legacy Firmware */
 		len = match->rx_len;
 
-		match->status = le32_to_cpu(mem->status);
+		match->status = ioread32(&mem->status);
 		memcpy_fromio(match->rx_buf, mem->payload, len);
 	} else {
-		struct scpi_shared_mem *mem = ch->rx_payload;
+		struct scpi_shared_mem __iomem *mem = ch->rx_payload;
 
-		len = min(match->rx_len, CMD_SIZE(cmd));
+		len = min_t(unsigned int, match->rx_len, CMD_SIZE(cmd));
 
-		match->status = le32_to_cpu(mem->status);
+		match->status = ioread32(&mem->status);
 		memcpy_fromio(match->rx_buf, mem->payload, len);
 	}
···
 static void scpi_handle_remote_msg(struct mbox_client *c, void *msg)
 {
 	struct scpi_chan *ch = container_of(c, struct scpi_chan, cl);
-	struct scpi_shared_mem *mem = ch->rx_payload;
+	struct scpi_shared_mem __iomem *mem = ch->rx_payload;
 	u32 cmd = 0;
 
 	if (!scpi_info->is_legacy)
-		cmd = le32_to_cpu(mem->command);
+		cmd = ioread32(&mem->command);
 
 	scpi_process_cmd(ch, cmd);
 }
···
 	unsigned long flags;
 	struct scpi_xfer *t = msg;
 	struct scpi_chan *ch = container_of(c, struct scpi_chan, cl);
-	struct scpi_shared_mem *mem = (struct scpi_shared_mem *)ch->tx_payload;
+	struct scpi_shared_mem __iomem *mem = ch->tx_payload;
 
 	if (t->tx_buf) {
 		if (scpi_info->is_legacy)
···
 	if (t->rx_buf) {
 		if (!(++ch->token))
 			++ch->token;
-		ADD_SCPI_TOKEN(t->cmd, ch->token);
+		t->cmd |= FIELD_PREP(CMD_TOKEN_ID_MASK, ch->token);
 		spin_lock_irqsave(&ch->rx_lock, flags);
 		list_add_tail(&t->node, &ch->rx_pending);
 		spin_unlock_irqrestore(&ch->rx_lock, flags);
 	}
 
 	if (!scpi_info->is_legacy)
-		mem->command = cpu_to_le32(t->cmd);
+		iowrite32(t->cmd, &mem->command);
 }
 
 static struct scpi_xfer *get_scpi_xfer(struct scpi_chan *ch)
···
 static unsigned long scpi_clk_get_val(u16 clk_id)
 {
 	int ret;
-	struct clk_get_value clk;
+	__le32 rate;
 	__le16 le_clk_id = cpu_to_le16(clk_id);
 
 	ret = scpi_send_message(CMD_GET_CLOCK_VALUE, &le_clk_id,
-				sizeof(le_clk_id), &clk, sizeof(clk));
+				sizeof(le_clk_id), &rate, sizeof(rate));
 
-	return ret ? ret : le32_to_cpu(clk.rate);
+	return ret ? ret : le32_to_cpu(rate);
 }
 
 static int scpi_clk_set_val(u16 clk_id, unsigned long rate)
···
 	if (!info)
 		return ERR_PTR(-ENOMEM);
 
-	info->count = DVFS_OPP_COUNT(buf.header);
-	info->latency = DVFS_LATENCY(buf.header) * 1000; /* uS to nS */
+	info->count = buf.opp_count;
+	info->latency = le16_to_cpu(buf.latency) * 1000; /* uS to nS */
 
 	info->opps = kcalloc(info->count, sizeof(*opp), GFP_KERNEL);
 	if (!info->opps) {
···
 	if (IS_ERR(info))
 		return PTR_ERR(info);
 
-	if (!info->latency)
-		return 0;
-
 	return info->latency;
 }
···
 
 static int scpi_sensor_get_capability(u16 *sensors)
 {
-	struct sensor_capabilities cap_buf;
+	__le16 cap;
 	int ret;
 
-	ret = scpi_send_message(CMD_SENSOR_CAPABILITIES, NULL, 0, &cap_buf,
-				sizeof(cap_buf));
+	ret = scpi_send_message(CMD_SENSOR_CAPABILITIES, NULL, 0, &cap,
+				sizeof(cap));
 	if (!ret)
-		*sensors = le16_to_cpu(cap_buf.sensors);
+		*sensors = le16_to_cpu(cap);
 
 	return ret;
 }
···
 static int scpi_sensor_get_value(u16 sensor, u64 *val)
 {
 	__le16 id = cpu_to_le16(sensor);
-	struct sensor_value buf;
+	__le64 value;
 	int ret;
 
 	ret = scpi_send_message(CMD_SENSOR_VALUE, &id, sizeof(id),
-				&buf, sizeof(buf));
+				&value, sizeof(value));
 	if (ret)
 		return ret;
 
 	if (scpi_info->is_legacy)
-		/* only 32-bits supported, hi_val can be junk */
-		*val = le32_to_cpu(buf.lo_val);
+		/* only 32-bits supported, upper 32 bits can be junk */
+		*val = le32_to_cpup((__le32 *)&value);
 	else
-		*val = (u64)le32_to_cpu(buf.hi_val) << 32 |
-			le32_to_cpu(buf.lo_val);
+		*val = le64_to_cpu(value);
 
 	return 0;
 }
···
 {
 	struct scpi_drvinfo *scpi_info = dev_get_drvdata(dev);
 
-	return sprintf(buf, "%d.%d\n",
-		       PROTOCOL_REV_MAJOR(scpi_info->protocol_version),
-		       PROTOCOL_REV_MINOR(scpi_info->protocol_version));
+	return sprintf(buf, "%lu.%lu\n",
+		       FIELD_GET(PROTO_REV_MAJOR_MASK, scpi_info->protocol_version),
+		       FIELD_GET(PROTO_REV_MINOR_MASK, scpi_info->protocol_version));
 }
 static DEVICE_ATTR_RO(protocol_version);
···
 {
 	struct scpi_drvinfo *scpi_info = dev_get_drvdata(dev);
 
-	return sprintf(buf, "%d.%d.%d\n",
-		       FW_REV_MAJOR(scpi_info->firmware_version),
-		       FW_REV_MINOR(scpi_info->firmware_version),
-		       FW_REV_PATCH(scpi_info->firmware_version));
+	return sprintf(buf, "%lu.%lu.%lu\n",
+		       FIELD_GET(FW_REV_MAJOR_MASK, scpi_info->firmware_version),
+		       FIELD_GET(FW_REV_MINOR_MASK, scpi_info->firmware_version),
+		       FIELD_GET(FW_REV_PATCH_MASK, scpi_info->firmware_version));
 }
 static DEVICE_ATTR_RO(firmware_version);
···
 };
 ATTRIBUTE_GROUPS(versions);
 
-static void
-scpi_free_channels(struct device *dev, struct scpi_chan *pchan, int count)
+static void scpi_free_channels(void *data)
 {
+	struct scpi_drvinfo *info = data;
 	int i;
 
-	for (i = 0; i < count && pchan->chan; i++, pchan++) {
-		mbox_free_channel(pchan->chan);
-		devm_kfree(dev, pchan->xfers);
-		devm_iounmap(dev, pchan->rx_payload);
-	}
+	for (i = 0; i < info->num_chans; i++)
+		mbox_free_channel(info->channels[i].chan);
 }
 
 static int scpi_remove(struct platform_device *pdev)
 {
 	int i;
-	struct device *dev = &pdev->dev;
 	struct scpi_drvinfo *info = platform_get_drvdata(pdev);
 
 	scpi_info = NULL; /* stop exporting SCPI ops through get_scpi_ops */
-
-	of_platform_depopulate(dev);
-	sysfs_remove_groups(&dev->kobj, versions_groups);
-	scpi_free_channels(dev, info->channels, info->num_chans);
-	platform_set_drvdata(pdev, NULL);
 
 	for (i = 0; i < MAX_DVFS_DOMAINS && info->dvfs[i]; i++) {
 		kfree(info->dvfs[i]->opps);
 		kfree(info->dvfs[i]);
 	}
-	devm_kfree(dev, info->channels);
-	devm_kfree(dev, info);
 
 	return 0;
 }
···
 {
 	int count, idx, ret;
 	struct resource res;
-	struct scpi_chan *scpi_chan;
 	struct device *dev = &pdev->dev;
 	struct device_node *np = dev->of_node;
···
 		return -ENODEV;
 	}
 
-	scpi_chan = devm_kcalloc(dev, count, sizeof(*scpi_chan), GFP_KERNEL);
-	if (!scpi_chan)
+	scpi_info->channels = devm_kcalloc(dev, count, sizeof(struct scpi_chan),
+					   GFP_KERNEL);
+	if (!scpi_info->channels)
 		return -ENOMEM;
 
-	for (idx = 0; idx < count; idx++) {
+	ret = devm_add_action(dev, scpi_free_channels, scpi_info);
+	if (ret)
+		return ret;
+
+	for (; scpi_info->num_chans < count; scpi_info->num_chans++) {
 		resource_size_t size;
-		struct scpi_chan *pchan = scpi_chan + idx;
+		int idx = scpi_info->num_chans;
+		struct scpi_chan *pchan = scpi_info->channels + idx;
 		struct mbox_client *cl = &pchan->cl;
 		struct device_node *shmem = of_parse_phandle(np, "shmem", idx);
···
 		of_node_put(shmem);
 		if (ret) {
 			dev_err(dev, "failed to get SCPI payload mem resource\n");
-			goto err;
+			return ret;
 		}
 
 		size = resource_size(&res);
 		pchan->rx_payload = devm_ioremap(dev, res.start, size);
 		if (!pchan->rx_payload) {
 			dev_err(dev, "failed to ioremap SCPI payload\n");
-			ret = -EADDRNOTAVAIL;
-			goto err;
+			return -EADDRNOTAVAIL;
 		}
 		pchan->tx_payload = pchan->rx_payload + (size >> 1);
···
 			dev_err(dev, "failed to get channel%d err %d\n",
 				idx, ret);
 		}
-err:
-		scpi_free_channels(dev, scpi_chan, idx);
-		scpi_info = NULL;
 		return ret;
 	}
 
-	scpi_info->channels = scpi_chan;
-	scpi_info->num_chans = count;
 	scpi_info->commands = scpi_std_commands;
 
 	platform_set_drvdata(pdev, scpi_info);
···
 	ret = scpi_init_versions(scpi_info);
 	if (ret) {
 		dev_err(dev, "incorrect or no SCP firmware found\n");
-		scpi_remove(pdev);
 		return ret;
 	}
 
-	_dev_info(dev, "SCP Protocol %d.%d Firmware %d.%d.%d version\n",
-		  PROTOCOL_REV_MAJOR(scpi_info->protocol_version),
-		  PROTOCOL_REV_MINOR(scpi_info->protocol_version),
-		  FW_REV_MAJOR(scpi_info->firmware_version),
-		  FW_REV_MINOR(scpi_info->firmware_version),
-		  FW_REV_PATCH(scpi_info->firmware_version));
+	if (scpi_info->is_legacy && !scpi_info->protocol_version &&
+	    !scpi_info->firmware_version)
+		dev_info(dev, "SCP Protocol legacy pre-1.0 firmware\n");
+	else
+		dev_info(dev, "SCP Protocol %lu.%lu Firmware %lu.%lu.%lu version\n",
+			 FIELD_GET(PROTO_REV_MAJOR_MASK,
+				   scpi_info->protocol_version),
+			 FIELD_GET(PROTO_REV_MINOR_MASK,
+				   scpi_info->protocol_version),
+			 FIELD_GET(FW_REV_MAJOR_MASK,
+				   scpi_info->firmware_version),
+			 FIELD_GET(FW_REV_MINOR_MASK,
+				   scpi_info->firmware_version),
+			 FIELD_GET(FW_REV_PATCH_MASK,
+				   scpi_info->firmware_version));
 	scpi_info->scpi_ops = &scpi_ops;
 
-	ret = sysfs_create_groups(&dev->kobj, versions_groups);
+	ret = devm_device_add_groups(dev, versions_groups);
 	if (ret)
 		dev_err(dev, "unable to create sysfs version group\n");
 
-	return of_platform_populate(dev->of_node, NULL, NULL, dev);
+	return devm_of_platform_populate(dev);
 }
 
 static const struct of_device_id scpi_of_match[] = {
drivers/firmware/meson/meson_sm.c (+12 -13)
 #include <linux/arm-smccc.h>
 #include <linux/bug.h>
 #include <linux/io.h>
+#include <linux/module.h>
 #include <linux/of.h>
 #include <linux/of_device.h>
+#include <linux/platform_device.h>
 #include <linux/printk.h>
 #include <linux/types.h>
 #include <linux/sizes.h>
···
 	{ /* sentinel */ },
 };
 
-int __init meson_sm_init(void)
+static int __init meson_sm_probe(struct platform_device *pdev)
 {
 	const struct meson_sm_chip *chip;
-	const struct of_device_id *matched_np;
-	struct device_node *np;
 
-	np = of_find_matching_node_and_match(NULL, meson_sm_ids, &matched_np);
-	if (!np)
-		return -ENODEV;
-
-	chip = matched_np->data;
-	if (!chip) {
-		pr_err("unable to setup secure-monitor data\n");
-		goto out;
-	}
+	chip = of_match_device(meson_sm_ids, &pdev->dev)->data;
 
 	if (chip->cmd_shmem_in_base) {
 		fw.sm_shmem_in_base = meson_sm_map_shmem(chip->cmd_shmem_in_base,
···
 out:
 	return -EINVAL;
 }
-device_initcall(meson_sm_init);
+
+static struct platform_driver meson_sm_driver = {
+	.driver = {
+		.name = "meson-sm",
+		.of_match_table = of_match_ptr(meson_sm_ids),
+	},
+};
+module_platform_driver_probe(meson_sm_driver, meson_sm_probe);
+66 -80
drivers/firmware/tegra/bpmp.c
··· 70 70 } 71 71 EXPORT_SYMBOL_GPL(tegra_bpmp_put); 72 72 73 - static int tegra_bpmp_channel_get_index(struct tegra_bpmp_channel *channel) 74 - { 75 - return channel - channel->bpmp->channels; 76 - } 77 - 78 73 static int 79 74 tegra_bpmp_channel_get_thread_index(struct tegra_bpmp_channel *channel) 80 75 { 81 76 struct tegra_bpmp *bpmp = channel->bpmp; 82 - unsigned int offset, count; 77 + unsigned int count; 83 78 int index; 84 79 85 - offset = bpmp->soc->channels.thread.offset; 86 80 count = bpmp->soc->channels.thread.count; 87 81 88 - index = tegra_bpmp_channel_get_index(channel); 89 - if (index < 0) 90 - return index; 91 - 92 - if (index < offset || index >= offset + count) 82 + index = channel - channel->bpmp->threaded_channels; 83 + if (index < 0 || index >= count) 93 84 return -EINVAL; 94 85 95 - return index - offset; 96 - } 97 - 98 - static struct tegra_bpmp_channel * 99 - tegra_bpmp_channel_get_thread(struct tegra_bpmp *bpmp, unsigned int index) 100 - { 101 - unsigned int offset = bpmp->soc->channels.thread.offset; 102 - unsigned int count = bpmp->soc->channels.thread.count; 103 - 104 - if (index >= count) 105 - return NULL; 106 - 107 - return &bpmp->channels[offset + index]; 108 - } 109 - 110 - static struct tegra_bpmp_channel * 111 - tegra_bpmp_channel_get_tx(struct tegra_bpmp *bpmp) 112 - { 113 - unsigned int offset = bpmp->soc->channels.cpu_tx.offset; 114 - 115 - return &bpmp->channels[offset + smp_processor_id()]; 116 - } 117 - 118 - static struct tegra_bpmp_channel * 119 - tegra_bpmp_channel_get_rx(struct tegra_bpmp *bpmp) 120 - { 121 - unsigned int offset = bpmp->soc->channels.cpu_rx.offset; 122 - 123 - return &bpmp->channels[offset]; 86 + return index; 124 87 } 125 88 126 89 static bool tegra_bpmp_message_valid(const struct tegra_bpmp_message *msg) ··· 234 271 goto unlock; 235 272 } 236 273 237 - channel = tegra_bpmp_channel_get_thread(bpmp, index); 238 - if (!channel) { 239 - err = -EINVAL; 240 - goto unlock; 241 - } 274 + channel = 
&bpmp->threaded_channels[index]; 242 275 243 276 if (!tegra_bpmp_master_free(channel)) { 244 277 err = -EBUSY; ··· 287 328 if (!tegra_bpmp_message_valid(msg)) 288 329 return -EINVAL; 289 330 290 - channel = tegra_bpmp_channel_get_tx(bpmp); 331 + channel = bpmp->tx_channel; 332 + 333 + spin_lock(&bpmp->atomic_tx_lock); 291 334 292 335 err = tegra_bpmp_channel_write(channel, msg->mrq, MSG_ACK, 293 336 msg->tx.data, msg->tx.size); 294 - if (err < 0) 337 + if (err < 0) { 338 + spin_unlock(&bpmp->atomic_tx_lock); 295 339 return err; 340 + } 341 + 342 + spin_unlock(&bpmp->atomic_tx_lock); 296 343 297 344 err = mbox_send_message(bpmp->mbox.channel, NULL); 298 345 if (err < 0) ··· 572 607 unsigned int i, count; 573 608 unsigned long *busy; 574 609 575 - channel = tegra_bpmp_channel_get_rx(bpmp); 610 + channel = bpmp->rx_channel; 576 611 count = bpmp->soc->channels.thread.count; 577 612 busy = bpmp->threaded.busy; 578 613 ··· 584 619 for_each_set_bit(i, busy, count) { 585 620 struct tegra_bpmp_channel *channel; 586 621 587 - channel = tegra_bpmp_channel_get_thread(bpmp, i); 588 - if (!channel) 589 - continue; 622 + channel = &bpmp->threaded_channels[i]; 590 623 591 624 if (tegra_bpmp_master_acked(channel)) { 592 625 tegra_bpmp_channel_signal(channel); ··· 661 698 662 699 static int tegra_bpmp_probe(struct platform_device *pdev) 663 700 { 664 - struct tegra_bpmp_channel *channel; 665 701 struct tegra_bpmp *bpmp; 666 702 unsigned int i; 667 703 char tag[32]; ··· 694 732 } 695 733 696 734 bpmp->rx.virt = gen_pool_dma_alloc(bpmp->rx.pool, 4096, &bpmp->rx.phys); 697 - if (!bpmp->rx.pool) { 735 + if (!bpmp->rx.virt) { 698 736 dev_err(&pdev->dev, "failed to allocate from RX pool\n"); 699 737 err = -ENOMEM; 700 738 goto free_tx; ··· 720 758 goto free_rx; 721 759 } 722 760 723 - bpmp->num_channels = bpmp->soc->channels.cpu_tx.count + 724 - bpmp->soc->channels.thread.count + 725 - bpmp->soc->channels.cpu_rx.count; 726 - 727 - bpmp->channels = devm_kcalloc(&pdev->dev, 
bpmp->num_channels, 728 - sizeof(*channel), GFP_KERNEL); 729 - if (!bpmp->channels) { 761 + spin_lock_init(&bpmp->atomic_tx_lock); 762 + bpmp->tx_channel = devm_kzalloc(&pdev->dev, sizeof(*bpmp->tx_channel), 763 + GFP_KERNEL); 764 + if (!bpmp->tx_channel) { 730 765 err = -ENOMEM; 731 766 goto free_rx; 732 767 } 733 768 734 - /* message channel initialization */ 735 - for (i = 0; i < bpmp->num_channels; i++) { 736 - struct tegra_bpmp_channel *channel = &bpmp->channels[i]; 769 + bpmp->rx_channel = devm_kzalloc(&pdev->dev, sizeof(*bpmp->rx_channel), 770 + GFP_KERNEL); 771 + if (!bpmp->rx_channel) { 772 + err = -ENOMEM; 773 + goto free_rx; 774 + } 737 775 738 - err = tegra_bpmp_channel_init(channel, bpmp, i); 776 + bpmp->threaded_channels = devm_kcalloc(&pdev->dev, bpmp->threaded.count, 777 + sizeof(*bpmp->threaded_channels), 778 + GFP_KERNEL); 779 + if (!bpmp->threaded_channels) { 780 + err = -ENOMEM; 781 + goto free_rx; 782 + } 783 + 784 + err = tegra_bpmp_channel_init(bpmp->tx_channel, bpmp, 785 + bpmp->soc->channels.cpu_tx.offset); 786 + if (err < 0) 787 + goto free_rx; 788 + 789 + err = tegra_bpmp_channel_init(bpmp->rx_channel, bpmp, 790 + bpmp->soc->channels.cpu_rx.offset); 791 + if (err < 0) 792 + goto cleanup_tx_channel; 793 + 794 + for (i = 0; i < bpmp->threaded.count; i++) { 795 + err = tegra_bpmp_channel_init( 796 + &bpmp->threaded_channels[i], bpmp, 797 + bpmp->soc->channels.thread.offset + i); 739 798 if (err < 0) 740 - goto cleanup_channels; 799 + goto cleanup_threaded_channels; 741 800 } 742 801 743 802 /* mbox registration */ ··· 771 788 if (IS_ERR(bpmp->mbox.channel)) { 772 789 err = PTR_ERR(bpmp->mbox.channel); 773 790 dev_err(&pdev->dev, "failed to get HSP mailbox: %d\n", err); 774 - goto cleanup_channels; 791 + goto cleanup_threaded_channels; 775 792 } 776 793 777 794 /* reset message channels */ 778 - for (i = 0; i < bpmp->num_channels; i++) { 779 - struct tegra_bpmp_channel *channel = &bpmp->channels[i]; 780 - 781 - 
tegra_bpmp_channel_reset(channel); 782 - } 795 + tegra_bpmp_channel_reset(bpmp->tx_channel); 796 + tegra_bpmp_channel_reset(bpmp->rx_channel); 797 + for (i = 0; i < bpmp->threaded.count; i++) 798 + tegra_bpmp_channel_reset(&bpmp->threaded_channels[i]); 783 799 784 800 err = tegra_bpmp_request_mrq(bpmp, MRQ_PING, 785 801 tegra_bpmp_mrq_handle_ping, bpmp); ··· 827 845 tegra_bpmp_free_mrq(bpmp, MRQ_PING, bpmp); 828 846 free_mbox: 829 847 mbox_free_channel(bpmp->mbox.channel); 830 - cleanup_channels: 831 - while (i--) 832 - tegra_bpmp_channel_cleanup(&bpmp->channels[i]); 848 + cleanup_threaded_channels: 849 + for (i = 0; i < bpmp->threaded.count; i++) { 850 + if (bpmp->threaded_channels[i].bpmp) 851 + tegra_bpmp_channel_cleanup(&bpmp->threaded_channels[i]); 852 + } 853 + 854 + tegra_bpmp_channel_cleanup(bpmp->rx_channel); 855 + cleanup_tx_channel: 856 + tegra_bpmp_channel_cleanup(bpmp->tx_channel); 833 857 free_rx: 834 858 gen_pool_free(bpmp->rx.pool, (unsigned long)bpmp->rx.virt, 4096); 835 859 free_tx: ··· 846 858 static const struct tegra_bpmp_soc tegra186_soc = { 847 859 .channels = { 848 860 .cpu_tx = { 849 - .offset = 0, 850 - .count = 6, 861 + .offset = 3, 851 862 .timeout = 60 * USEC_PER_SEC, 852 863 }, 853 864 .thread = { 854 - .offset = 6, 855 - .count = 7, 865 + .offset = 0, 866 + .count = 3, 856 867 .timeout = 600 * USEC_PER_SEC, 857 868 }, 858 869 .cpu_rx = { 859 870 .offset = 13, 860 - .count = 1, 861 871 .timeout = 0, 862 872 }, 863 873 },
+12
drivers/hwmon/Kconfig
···
317 317           Say Y here if you have an applicable laptop and want to experience
318 318           the awesome power of applesmc.
319 319
    320 + config SENSORS_ARM_SCMI
    321 +         tristate "ARM SCMI Sensors"
    322 +         depends on ARM_SCMI_PROTOCOL
    323 +         depends on THERMAL || !THERMAL_OF
    324 +         help
    325 +           This driver provides support for temperature, voltage, current
    326 +           and power sensors available on SCMI based platforms. The actual
    327 +           number and type of sensors exported depend on the platform.
    328 +
    329 +           This driver can also be built as a module. If so, the module
    330 +           will be called scmi-hwmon.
    331 +
320 332   config SENSORS_ARM_SCPI
321 333           tristate "ARM SCPI Sensors"
322 334           depends on ARM_SCPI_PROTOCOL
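For trying out the new entry, a config fragment along these lines could be used (illustrative only; SENSORS_ARM_SCMI is selectable once ARM_SCMI_PROTOCOL is enabled, and the option's dependencies above still apply):

```
CONFIG_ARM_SCMI_PROTOCOL=y
CONFIG_HWMON=y
CONFIG_SENSORS_ARM_SCMI=m
```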
+1
drivers/hwmon/Makefile
···
46 46   obj-$(CONFIG_SENSORS_ADT7470) += adt7470.o
47 47   obj-$(CONFIG_SENSORS_ADT7475) += adt7475.o
48 48   obj-$(CONFIG_SENSORS_APPLESMC) += applesmc.o
   49 + obj-$(CONFIG_SENSORS_ARM_SCMI) += scmi-hwmon.o
49 50   obj-$(CONFIG_SENSORS_ARM_SCPI) += scpi-hwmon.o
50 51   obj-$(CONFIG_SENSORS_ASC7621) += asc7621.o
51 52   obj-$(CONFIG_SENSORS_ASPEED) += aspeed-pwm-tacho.o
+225
drivers/hwmon/scmi-hwmon.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * System Control and Management Interface(SCMI) based hwmon sensor driver 4 + * 5 + * Copyright (C) 2018 ARM Ltd. 6 + * Sudeep Holla <sudeep.holla@arm.com> 7 + */ 8 + 9 + #include <linux/hwmon.h> 10 + #include <linux/module.h> 11 + #include <linux/scmi_protocol.h> 12 + #include <linux/slab.h> 13 + #include <linux/sysfs.h> 14 + #include <linux/thermal.h> 15 + 16 + struct scmi_sensors { 17 + const struct scmi_handle *handle; 18 + const struct scmi_sensor_info **info[hwmon_max]; 19 + }; 20 + 21 + static int scmi_hwmon_read(struct device *dev, enum hwmon_sensor_types type, 22 + u32 attr, int channel, long *val) 23 + { 24 + int ret; 25 + u64 value; 26 + const struct scmi_sensor_info *sensor; 27 + struct scmi_sensors *scmi_sensors = dev_get_drvdata(dev); 28 + const struct scmi_handle *h = scmi_sensors->handle; 29 + 30 + sensor = *(scmi_sensors->info[type] + channel); 31 + ret = h->sensor_ops->reading_get(h, sensor->id, false, &value); 32 + if (!ret) 33 + *val = value; 34 + 35 + return ret; 36 + } 37 + 38 + static int 39 + scmi_hwmon_read_string(struct device *dev, enum hwmon_sensor_types type, 40 + u32 attr, int channel, const char **str) 41 + { 42 + const struct scmi_sensor_info *sensor; 43 + struct scmi_sensors *scmi_sensors = dev_get_drvdata(dev); 44 + 45 + sensor = *(scmi_sensors->info[type] + channel); 46 + *str = sensor->name; 47 + 48 + return 0; 49 + } 50 + 51 + static umode_t 52 + scmi_hwmon_is_visible(const void *drvdata, enum hwmon_sensor_types type, 53 + u32 attr, int channel) 54 + { 55 + const struct scmi_sensor_info *sensor; 56 + const struct scmi_sensors *scmi_sensors = drvdata; 57 + 58 + sensor = *(scmi_sensors->info[type] + channel); 59 + if (sensor && sensor->name) 60 + return S_IRUGO; 61 + 62 + return 0; 63 + } 64 + 65 + static const struct hwmon_ops scmi_hwmon_ops = { 66 + .is_visible = scmi_hwmon_is_visible, 67 + .read = scmi_hwmon_read, 68 + .read_string = scmi_hwmon_read_string, 69 + }; 70 + 71 
+ static struct hwmon_chip_info scmi_chip_info = { 72 + .ops = &scmi_hwmon_ops, 73 + .info = NULL, 74 + }; 75 + 76 + static int scmi_hwmon_add_chan_info(struct hwmon_channel_info *scmi_hwmon_chan, 77 + struct device *dev, int num, 78 + enum hwmon_sensor_types type, u32 config) 79 + { 80 + int i; 81 + u32 *cfg = devm_kcalloc(dev, num + 1, sizeof(*cfg), GFP_KERNEL); 82 + 83 + if (!cfg) 84 + return -ENOMEM; 85 + 86 + scmi_hwmon_chan->type = type; 87 + scmi_hwmon_chan->config = cfg; 88 + for (i = 0; i < num; i++, cfg++) 89 + *cfg = config; 90 + 91 + return 0; 92 + } 93 + 94 + static enum hwmon_sensor_types scmi_types[] = { 95 + [TEMPERATURE_C] = hwmon_temp, 96 + [VOLTAGE] = hwmon_in, 97 + [CURRENT] = hwmon_curr, 98 + [POWER] = hwmon_power, 99 + [ENERGY] = hwmon_energy, 100 + }; 101 + 102 + static u32 hwmon_attributes[] = { 103 + [hwmon_chip] = HWMON_C_REGISTER_TZ, 104 + [hwmon_temp] = HWMON_T_INPUT | HWMON_T_LABEL, 105 + [hwmon_in] = HWMON_I_INPUT | HWMON_I_LABEL, 106 + [hwmon_curr] = HWMON_C_INPUT | HWMON_C_LABEL, 107 + [hwmon_power] = HWMON_P_INPUT | HWMON_P_LABEL, 108 + [hwmon_energy] = HWMON_E_INPUT | HWMON_E_LABEL, 109 + }; 110 + 111 + static int scmi_hwmon_probe(struct scmi_device *sdev) 112 + { 113 + int i, idx; 114 + u16 nr_sensors; 115 + enum hwmon_sensor_types type; 116 + struct scmi_sensors *scmi_sensors; 117 + const struct scmi_sensor_info *sensor; 118 + int nr_count[hwmon_max] = {0}, nr_types = 0; 119 + const struct hwmon_chip_info *chip_info; 120 + struct device *hwdev, *dev = &sdev->dev; 121 + struct hwmon_channel_info *scmi_hwmon_chan; 122 + const struct hwmon_channel_info **ptr_scmi_ci; 123 + const struct scmi_handle *handle = sdev->handle; 124 + 125 + if (!handle || !handle->sensor_ops) 126 + return -ENODEV; 127 + 128 + nr_sensors = handle->sensor_ops->count_get(handle); 129 + if (!nr_sensors) 130 + return -EIO; 131 + 132 + scmi_sensors = devm_kzalloc(dev, sizeof(*scmi_sensors), GFP_KERNEL); 133 + if (!scmi_sensors) 134 + return -ENOMEM; 135 + 136 + 
scmi_sensors->handle = handle; 137 + 138 + for (i = 0; i < nr_sensors; i++) { 139 + sensor = handle->sensor_ops->info_get(handle, i); 140 + if (!sensor) 141 + return -EINVAL; 142 + 143 + switch (sensor->type) { 144 + case TEMPERATURE_C: 145 + case VOLTAGE: 146 + case CURRENT: 147 + case POWER: 148 + case ENERGY: 149 + type = scmi_types[sensor->type]; 150 + if (!nr_count[type]) 151 + nr_types++; 152 + nr_count[type]++; 153 + break; 154 + } 155 + } 156 + 157 + if (nr_count[hwmon_temp]) 158 + nr_count[hwmon_chip]++, nr_types++; 159 + 160 + scmi_hwmon_chan = devm_kcalloc(dev, nr_types, sizeof(*scmi_hwmon_chan), 161 + GFP_KERNEL); 162 + if (!scmi_hwmon_chan) 163 + return -ENOMEM; 164 + 165 + ptr_scmi_ci = devm_kcalloc(dev, nr_types + 1, sizeof(*ptr_scmi_ci), 166 + GFP_KERNEL); 167 + if (!ptr_scmi_ci) 168 + return -ENOMEM; 169 + 170 + scmi_chip_info.info = ptr_scmi_ci; 171 + chip_info = &scmi_chip_info; 172 + 173 + for (type = 0; type < hwmon_max && nr_count[type]; type++) { 174 + scmi_hwmon_add_chan_info(scmi_hwmon_chan, dev, nr_count[type], 175 + type, hwmon_attributes[type]); 176 + *ptr_scmi_ci++ = scmi_hwmon_chan++; 177 + 178 + scmi_sensors->info[type] = 179 + devm_kcalloc(dev, nr_count[type], 180 + sizeof(*scmi_sensors->info), GFP_KERNEL); 181 + if (!scmi_sensors->info[type]) 182 + return -ENOMEM; 183 + } 184 + 185 + for (i = nr_sensors - 1; i >= 0 ; i--) { 186 + sensor = handle->sensor_ops->info_get(handle, i); 187 + if (!sensor) 188 + continue; 189 + 190 + switch (sensor->type) { 191 + case TEMPERATURE_C: 192 + case VOLTAGE: 193 + case CURRENT: 194 + case POWER: 195 + case ENERGY: 196 + type = scmi_types[sensor->type]; 197 + idx = --nr_count[type]; 198 + *(scmi_sensors->info[type] + idx) = sensor; 199 + break; 200 + } 201 + } 202 + 203 + hwdev = devm_hwmon_device_register_with_info(dev, "scmi_sensors", 204 + scmi_sensors, chip_info, 205 + NULL); 206 + 207 + return PTR_ERR_OR_ZERO(hwdev); 208 + } 209 + 210 + static const struct scmi_device_id scmi_id_table[] = { 
211 + { SCMI_PROTOCOL_SENSOR }, 212 + { }, 213 + }; 214 + MODULE_DEVICE_TABLE(scmi, scmi_id_table); 215 + 216 + static struct scmi_driver scmi_hwmon_drv = { 217 + .name = "scmi-hwmon", 218 + .probe = scmi_hwmon_probe, 219 + .id_table = scmi_id_table, 220 + }; 221 + module_scmi_driver(scmi_hwmon_drv); 222 + 223 + MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>"); 224 + MODULE_DESCRIPTION("ARM SCMI HWMON interface driver"); 225 + MODULE_LICENSE("GPL v2");
+1 -1
drivers/memory/emif.c
···
127 127
128 128   for (i = 0; i < EMIF_MAX_NUM_FREQUENCIES && regs_cache[i]; i++) {
129 129           do_emif_regdump_show(s, emif, regs_cache[i]);
130     -         seq_printf(s, "\n");
    130 +         seq_putc(s, '\n');
131 131   }
132 132
133 133   return 0;
+1
drivers/memory/samsung/Kconfig
···
    1 + # SPDX-License-Identifier: GPL-2.0
1   2   config SAMSUNG_MC
2   3           bool "Samsung Exynos Memory Controller support" if COMPILE_TEST
3   4           help
+1
drivers/memory/samsung/Makefile
···
    1 + # SPDX-License-Identifier: GPL-2.0
1   2   obj-$(CONFIG_EXYNOS_SROM) += exynos-srom.o
+7 -11
drivers/memory/samsung/exynos-srom.c
···
 1    - /*
 2    -  * Copyright (c) 2015 Samsung Electronics Co., Ltd.
 3    -  * http://www.samsung.com/
 4    -  *
 5    -  * EXYNOS - SROM Controller support
 6    -  * Author: Pankaj Dubey <pankaj.dubey@samsung.com>
 7    -  *
 8    -  * This program is free software; you can redistribute it and/or modify
 9    -  * it under the terms of the GNU General Public License version 2 as
10    -  * published by the Free Software Foundation.
11    -  */
    1 + // SPDX-License-Identifier: GPL-2.0
    2 + //
    3 + // Copyright (c) 2015 Samsung Electronics Co., Ltd.
    4 + // http://www.samsung.com/
    5 + //
    6 + // EXYNOS - SROM Controller support
    7 + // Author: Pankaj Dubey <pankaj.dubey@samsung.com>
12  8
13  9   #include <linux/io.h>
14 10   #include <linux/init.h>
+2 -5
drivers/memory/samsung/exynos-srom.h
···
    1 + /* SPDX-License-Identifier: GPL-2.0 */
1   2   /*
2   3    * Copyright (c) 2015 Samsung Electronics Co., Ltd.
3   4    * http://www.samsung.com
4   5    *
5   6    * Exynos SROMC register definitions
6     -  *
7     -  * This program is free software; you can redistribute it and/or modify
8     -  * it under the terms of the GNU General Public License version 2 as
9     -  * published by the Free Software Foundation.
10    -  */
    7 +  */
11  8
12  9   #ifndef __EXYNOS_SROM_H
13 10   #define __EXYNOS_SROM_H __FILE__
-1
drivers/memory/ti-emif-pm.c
···
271 271   emif_data->pm_data.ti_emif_base_addr_virt = devm_ioremap_resource(dev,
272 272                                                                     res);
273 273   if (IS_ERR(emif_data->pm_data.ti_emif_base_addr_virt)) {
274     -         dev_err(dev, "could not ioremap emif mem\n");
275 274           ret = PTR_ERR(emif_data->pm_data.ti_emif_base_addr_virt);
276 275           return ret;
277 276   }
+33
drivers/perf/Kconfig
···
 5  5   menu "Performance monitor support"
 6  6           depends on PERF_EVENTS
 7  7
    8 + config ARM_CCI_PMU
    9 +         bool
   10 +         select ARM_CCI
   11 +
   12 + config ARM_CCI400_PMU
   13 +         bool "ARM CCI400 PMU support"
   14 +         depends on (ARM && CPU_V7) || ARM64
   15 +         select ARM_CCI400_COMMON
   16 +         select ARM_CCI_PMU
   17 +         help
   18 +           Support for PMU events monitoring on the ARM CCI-400 (cache coherent
   19 +           interconnect). CCI-400 supports counting events related to the
   20 +           connected slave/master interfaces.
   21 +
   22 + config ARM_CCI5xx_PMU
   23 +         bool "ARM CCI-500/CCI-550 PMU support"
   24 +         depends on (ARM && CPU_V7) || ARM64
   25 +         select ARM_CCI_PMU
   26 +         help
   27 +           Support for PMU events monitoring on the ARM CCI-500/CCI-550 cache
   28 +           coherent interconnects. Both of them provide 8 independent event counters,
   29 +           which can count events pertaining to the slave/master interfaces as well
   30 +           as the internal events to the CCI.
   31 +
   32 +           If unsure, say Y
   33 +
   34 + config ARM_CCN
   35 +         tristate "ARM CCN driver support"
   36 +         depends on ARM || ARM64
   37 +         help
   38 +           PMU (perf) driver supporting the ARM CCN (Cache Coherent Network)
   39 +           interconnect.
   40 +
 8 41   config ARM_PMU
 9 42           depends on ARM || ARM64
10 43           bool "ARM PMU framework"
+2
drivers/perf/Makefile
···
1 1   # SPDX-License-Identifier: GPL-2.0
  2 + obj-$(CONFIG_ARM_CCI_PMU) += arm-cci.o
  3 + obj-$(CONFIG_ARM_CCN) += arm-ccn.o
2 4   obj-$(CONFIG_ARM_DSU_PMU) += arm_dsu_pmu.o
3 5   obj-$(CONFIG_ARM_PMU) += arm_pmu.o arm_pmu_platform.o
4 6   obj-$(CONFIG_ARM_PMU_ACPI) += arm_pmu_acpi.o
+1722
drivers/perf/arm-cci.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + // CCI Cache Coherent Interconnect PMU driver 3 + // Copyright (C) 2013-2018 Arm Ltd. 4 + // Author: Punit Agrawal <punit.agrawal@arm.com>, Suzuki Poulose <suzuki.poulose@arm.com> 5 + 6 + #include <linux/arm-cci.h> 7 + #include <linux/io.h> 8 + #include <linux/interrupt.h> 9 + #include <linux/module.h> 10 + #include <linux/of_address.h> 11 + #include <linux/of_device.h> 12 + #include <linux/of_irq.h> 13 + #include <linux/of_platform.h> 14 + #include <linux/perf_event.h> 15 + #include <linux/platform_device.h> 16 + #include <linux/slab.h> 17 + #include <linux/spinlock.h> 18 + 19 + #define DRIVER_NAME "ARM-CCI PMU" 20 + 21 + #define CCI_PMCR 0x0100 22 + #define CCI_PID2 0x0fe8 23 + 24 + #define CCI_PMCR_CEN 0x00000001 25 + #define CCI_PMCR_NCNT_MASK 0x0000f800 26 + #define CCI_PMCR_NCNT_SHIFT 11 27 + 28 + #define CCI_PID2_REV_MASK 0xf0 29 + #define CCI_PID2_REV_SHIFT 4 30 + 31 + #define CCI_PMU_EVT_SEL 0x000 32 + #define CCI_PMU_CNTR 0x004 33 + #define CCI_PMU_CNTR_CTRL 0x008 34 + #define CCI_PMU_OVRFLW 0x00c 35 + 36 + #define CCI_PMU_OVRFLW_FLAG 1 37 + 38 + #define CCI_PMU_CNTR_SIZE(model) ((model)->cntr_size) 39 + #define CCI_PMU_CNTR_BASE(model, idx) ((idx) * CCI_PMU_CNTR_SIZE(model)) 40 + #define CCI_PMU_CNTR_MASK ((1ULL << 32) -1) 41 + #define CCI_PMU_CNTR_LAST(cci_pmu) (cci_pmu->num_cntrs - 1) 42 + 43 + #define CCI_PMU_MAX_HW_CNTRS(model) \ 44 + ((model)->num_hw_cntrs + (model)->fixed_hw_cntrs) 45 + 46 + /* Types of interfaces that can generate events */ 47 + enum { 48 + CCI_IF_SLAVE, 49 + CCI_IF_MASTER, 50 + #ifdef CONFIG_ARM_CCI5xx_PMU 51 + CCI_IF_GLOBAL, 52 + #endif 53 + CCI_IF_MAX, 54 + }; 55 + 56 + struct event_range { 57 + u32 min; 58 + u32 max; 59 + }; 60 + 61 + struct cci_pmu_hw_events { 62 + struct perf_event **events; 63 + unsigned long *used_mask; 64 + raw_spinlock_t pmu_lock; 65 + }; 66 + 67 + struct cci_pmu; 68 + /* 69 + * struct cci_pmu_model: 70 + * @fixed_hw_cntrs - Number of fixed event counters 
71 + * @num_hw_cntrs - Maximum number of programmable event counters 72 + * @cntr_size - Size of an event counter mapping 73 + */ 74 + struct cci_pmu_model { 75 + char *name; 76 + u32 fixed_hw_cntrs; 77 + u32 num_hw_cntrs; 78 + u32 cntr_size; 79 + struct attribute **format_attrs; 80 + struct attribute **event_attrs; 81 + struct event_range event_ranges[CCI_IF_MAX]; 82 + int (*validate_hw_event)(struct cci_pmu *, unsigned long); 83 + int (*get_event_idx)(struct cci_pmu *, struct cci_pmu_hw_events *, unsigned long); 84 + void (*write_counters)(struct cci_pmu *, unsigned long *); 85 + }; 86 + 87 + static struct cci_pmu_model cci_pmu_models[]; 88 + 89 + struct cci_pmu { 90 + void __iomem *base; 91 + void __iomem *ctrl_base; 92 + struct pmu pmu; 93 + int cpu; 94 + int nr_irqs; 95 + int *irqs; 96 + unsigned long active_irqs; 97 + const struct cci_pmu_model *model; 98 + struct cci_pmu_hw_events hw_events; 99 + struct platform_device *plat_device; 100 + int num_cntrs; 101 + atomic_t active_events; 102 + struct mutex reserve_mutex; 103 + }; 104 + 105 + #define to_cci_pmu(c) (container_of(c, struct cci_pmu, pmu)) 106 + 107 + static struct cci_pmu *g_cci_pmu; 108 + 109 + enum cci_models { 110 + #ifdef CONFIG_ARM_CCI400_PMU 111 + CCI400_R0, 112 + CCI400_R1, 113 + #endif 114 + #ifdef CONFIG_ARM_CCI5xx_PMU 115 + CCI500_R0, 116 + CCI550_R0, 117 + #endif 118 + CCI_MODEL_MAX 119 + }; 120 + 121 + static void pmu_write_counters(struct cci_pmu *cci_pmu, 122 + unsigned long *mask); 123 + static ssize_t cci_pmu_format_show(struct device *dev, 124 + struct device_attribute *attr, char *buf); 125 + static ssize_t cci_pmu_event_show(struct device *dev, 126 + struct device_attribute *attr, char *buf); 127 + 128 + #define CCI_EXT_ATTR_ENTRY(_name, _func, _config) \ 129 + &((struct dev_ext_attribute[]) { \ 130 + { __ATTR(_name, S_IRUGO, _func, NULL), (void *)_config } \ 131 + })[0].attr.attr 132 + 133 + #define CCI_FORMAT_EXT_ATTR_ENTRY(_name, _config) \ 134 + CCI_EXT_ATTR_ENTRY(_name, 
cci_pmu_format_show, (char *)_config) 135 + #define CCI_EVENT_EXT_ATTR_ENTRY(_name, _config) \ 136 + CCI_EXT_ATTR_ENTRY(_name, cci_pmu_event_show, (unsigned long)_config) 137 + 138 + /* CCI400 PMU Specific definitions */ 139 + 140 + #ifdef CONFIG_ARM_CCI400_PMU 141 + 142 + /* Port ids */ 143 + #define CCI400_PORT_S0 0 144 + #define CCI400_PORT_S1 1 145 + #define CCI400_PORT_S2 2 146 + #define CCI400_PORT_S3 3 147 + #define CCI400_PORT_S4 4 148 + #define CCI400_PORT_M0 5 149 + #define CCI400_PORT_M1 6 150 + #define CCI400_PORT_M2 7 151 + 152 + #define CCI400_R1_PX 5 153 + 154 + /* 155 + * Instead of an event id to monitor CCI cycles, a dedicated counter is 156 + * provided. Use 0xff to represent CCI cycles and hope that no future revisions 157 + * make use of this event in hardware. 158 + */ 159 + enum cci400_perf_events { 160 + CCI400_PMU_CYCLES = 0xff 161 + }; 162 + 163 + #define CCI400_PMU_CYCLE_CNTR_IDX 0 164 + #define CCI400_PMU_CNTR0_IDX 1 165 + 166 + /* 167 + * CCI PMU event id is an 8-bit value made of two parts - bits 7:5 for one of 8 168 + * ports and bits 4:0 are event codes. There are different event codes 169 + * associated with each port type. 170 + * 171 + * Additionally, the range of events associated with the port types changed 172 + * between Rev0 and Rev1. 173 + * 174 + * The constants below define the range of valid codes for each port type for 175 + * the different revisions and are used to validate the event to be monitored. 
176 + */ 177 + 178 + #define CCI400_PMU_EVENT_MASK 0xffUL 179 + #define CCI400_PMU_EVENT_SOURCE_SHIFT 5 180 + #define CCI400_PMU_EVENT_SOURCE_MASK 0x7 181 + #define CCI400_PMU_EVENT_CODE_SHIFT 0 182 + #define CCI400_PMU_EVENT_CODE_MASK 0x1f 183 + #define CCI400_PMU_EVENT_SOURCE(event) \ 184 + ((event >> CCI400_PMU_EVENT_SOURCE_SHIFT) & \ 185 + CCI400_PMU_EVENT_SOURCE_MASK) 186 + #define CCI400_PMU_EVENT_CODE(event) \ 187 + ((event >> CCI400_PMU_EVENT_CODE_SHIFT) & CCI400_PMU_EVENT_CODE_MASK) 188 + 189 + #define CCI400_R0_SLAVE_PORT_MIN_EV 0x00 190 + #define CCI400_R0_SLAVE_PORT_MAX_EV 0x13 191 + #define CCI400_R0_MASTER_PORT_MIN_EV 0x14 192 + #define CCI400_R0_MASTER_PORT_MAX_EV 0x1a 193 + 194 + #define CCI400_R1_SLAVE_PORT_MIN_EV 0x00 195 + #define CCI400_R1_SLAVE_PORT_MAX_EV 0x14 196 + #define CCI400_R1_MASTER_PORT_MIN_EV 0x00 197 + #define CCI400_R1_MASTER_PORT_MAX_EV 0x11 198 + 199 + #define CCI400_CYCLE_EVENT_EXT_ATTR_ENTRY(_name, _config) \ 200 + CCI_EXT_ATTR_ENTRY(_name, cci400_pmu_cycle_event_show, \ 201 + (unsigned long)_config) 202 + 203 + static ssize_t cci400_pmu_cycle_event_show(struct device *dev, 204 + struct device_attribute *attr, char *buf); 205 + 206 + static struct attribute *cci400_pmu_format_attrs[] = { 207 + CCI_FORMAT_EXT_ATTR_ENTRY(event, "config:0-4"), 208 + CCI_FORMAT_EXT_ATTR_ENTRY(source, "config:5-7"), 209 + NULL 210 + }; 211 + 212 + static struct attribute *cci400_r0_pmu_event_attrs[] = { 213 + /* Slave events */ 214 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_any, 0x0), 215 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_device, 0x01), 216 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_normal_or_nonshareable, 0x2), 217 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_inner_or_outershareable, 0x3), 218 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_cache_maintenance, 0x4), 219 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_mem_barrier, 0x5), 220 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_sync_barrier, 0x6), 221 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg, 0x7), 222 + 
CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg_sync, 0x8), 223 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_stall_tt_full, 0x9), 224 + CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_last_hs_snoop, 0xA), 225 + CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_stall_rvalids_h_rready_l, 0xB), 226 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_any, 0xC), 227 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_device, 0xD), 228 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_normal_or_nonshareable, 0xE), 229 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_inner_or_outershare_wback_wclean, 0xF), 230 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_write_unique, 0x10), 231 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_write_line_unique, 0x11), 232 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_evict, 0x12), 233 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_stall_tt_full, 0x13), 234 + /* Master events */ 235 + CCI_EVENT_EXT_ATTR_ENTRY(mi_retry_speculative_fetch, 0x14), 236 + CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_addr_hazard, 0x15), 237 + CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_id_hazard, 0x16), 238 + CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_tt_full, 0x17), 239 + CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_barrier_hazard, 0x18), 240 + CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_barrier_hazard, 0x19), 241 + CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_tt_full, 0x1A), 242 + /* Special event for cycles counter */ 243 + CCI400_CYCLE_EVENT_EXT_ATTR_ENTRY(cycles, 0xff), 244 + NULL 245 + }; 246 + 247 + static struct attribute *cci400_r1_pmu_event_attrs[] = { 248 + /* Slave events */ 249 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_any, 0x0), 250 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_device, 0x01), 251 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_normal_or_nonshareable, 0x2), 252 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_inner_or_outershareable, 0x3), 253 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_cache_maintenance, 0x4), 254 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_mem_barrier, 0x5), 255 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_sync_barrier, 0x6), 256 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg, 0x7), 257 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg_sync, 0x8), 258 + 
CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_stall_tt_full, 0x9), 259 + CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_last_hs_snoop, 0xA), 260 + CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_stall_rvalids_h_rready_l, 0xB), 261 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_any, 0xC), 262 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_device, 0xD), 263 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_normal_or_nonshareable, 0xE), 264 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_inner_or_outershare_wback_wclean, 0xF), 265 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_write_unique, 0x10), 266 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_write_line_unique, 0x11), 267 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_evict, 0x12), 268 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_stall_tt_full, 0x13), 269 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_stall_slave_id_hazard, 0x14), 270 + /* Master events */ 271 + CCI_EVENT_EXT_ATTR_ENTRY(mi_retry_speculative_fetch, 0x0), 272 + CCI_EVENT_EXT_ATTR_ENTRY(mi_stall_cycle_addr_hazard, 0x1), 273 + CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_master_id_hazard, 0x2), 274 + CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_hi_prio_rtq_full, 0x3), 275 + CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_barrier_hazard, 0x4), 276 + CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_barrier_hazard, 0x5), 277 + CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_wtq_full, 0x6), 278 + CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_low_prio_rtq_full, 0x7), 279 + CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_mid_prio_rtq_full, 0x8), 280 + CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_qvn_vn0, 0x9), 281 + CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_qvn_vn1, 0xA), 282 + CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_qvn_vn2, 0xB), 283 + CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_qvn_vn3, 0xC), 284 + CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_qvn_vn0, 0xD), 285 + CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_qvn_vn1, 0xE), 286 + CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_qvn_vn2, 0xF), 287 + CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_qvn_vn3, 0x10), 288 + CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_unique_or_line_unique_addr_hazard, 0x11), 289 + /* Special event for cycles counter */ 290 + 
	CCI400_CYCLE_EVENT_EXT_ATTR_ENTRY(cycles, 0xff),
	NULL
};

static ssize_t cci400_pmu_cycle_event_show(struct device *dev,
			struct device_attribute *attr, char *buf)
{
	struct dev_ext_attribute *eattr = container_of(attr,
				struct dev_ext_attribute, attr);
	return snprintf(buf, PAGE_SIZE, "config=0x%lx\n", (unsigned long)eattr->var);
}

static int cci400_get_event_idx(struct cci_pmu *cci_pmu,
				struct cci_pmu_hw_events *hw,
				unsigned long cci_event)
{
	int idx;

	/* cycles event idx is fixed */
	if (cci_event == CCI400_PMU_CYCLES) {
		if (test_and_set_bit(CCI400_PMU_CYCLE_CNTR_IDX, hw->used_mask))
			return -EAGAIN;

		return CCI400_PMU_CYCLE_CNTR_IDX;
	}

	for (idx = CCI400_PMU_CNTR0_IDX; idx <= CCI_PMU_CNTR_LAST(cci_pmu); ++idx)
		if (!test_and_set_bit(idx, hw->used_mask))
			return idx;

	/* No counters available */
	return -EAGAIN;
}

static int cci400_validate_hw_event(struct cci_pmu *cci_pmu, unsigned long hw_event)
{
	u8 ev_source = CCI400_PMU_EVENT_SOURCE(hw_event);
	u8 ev_code = CCI400_PMU_EVENT_CODE(hw_event);
	int if_type;

	if (hw_event & ~CCI400_PMU_EVENT_MASK)
		return -ENOENT;

	if (hw_event == CCI400_PMU_CYCLES)
		return hw_event;

	switch (ev_source) {
	case CCI400_PORT_S0:
	case CCI400_PORT_S1:
	case CCI400_PORT_S2:
	case CCI400_PORT_S3:
	case CCI400_PORT_S4:
		/* Slave Interface */
		if_type = CCI_IF_SLAVE;
		break;
	case CCI400_PORT_M0:
	case CCI400_PORT_M1:
	case CCI400_PORT_M2:
		/* Master Interface */
		if_type = CCI_IF_MASTER;
		break;
	default:
		return -ENOENT;
	}

	if (ev_code >= cci_pmu->model->event_ranges[if_type].min &&
	    ev_code <= cci_pmu->model->event_ranges[if_type].max)
		return hw_event;

	return -ENOENT;
}

static int probe_cci400_revision(struct cci_pmu *cci_pmu)
{
	int rev;

	rev = readl_relaxed(cci_pmu->ctrl_base + CCI_PID2) & CCI_PID2_REV_MASK;
	rev >>= CCI_PID2_REV_SHIFT;

	if (rev < CCI400_R1_PX)
		return CCI400_R0;
	else
		return CCI400_R1;
}

static const struct cci_pmu_model *probe_cci_model(struct cci_pmu *cci_pmu)
{
	if (platform_has_secure_cci_access())
		return &cci_pmu_models[probe_cci400_revision(cci_pmu)];
	return NULL;
}
#else	/* !CONFIG_ARM_CCI400_PMU */
static inline struct cci_pmu_model *probe_cci_model(struct cci_pmu *cci_pmu)
{
	return NULL;
}
#endif	/* CONFIG_ARM_CCI400_PMU */

#ifdef CONFIG_ARM_CCI5xx_PMU

/*
 * CCI5xx PMU event id is a 9-bit value made of two parts.
 *	 bits [8:5] - Source for the event
 *	 bits [4:0] - Event code (specific to type of interface)
 */

/* Port ids */
#define CCI5xx_PORT_S0			0x0
#define CCI5xx_PORT_S1			0x1
#define CCI5xx_PORT_S2			0x2
#define CCI5xx_PORT_S3			0x3
#define CCI5xx_PORT_S4			0x4
#define CCI5xx_PORT_S5			0x5
#define CCI5xx_PORT_S6			0x6

#define CCI5xx_PORT_M0			0x8
#define CCI5xx_PORT_M1			0x9
#define CCI5xx_PORT_M2			0xa
#define CCI5xx_PORT_M3			0xb
#define CCI5xx_PORT_M4			0xc
#define CCI5xx_PORT_M5			0xd
#define CCI5xx_PORT_M6			0xe

#define CCI5xx_PORT_GLOBAL		0xf

#define CCI5xx_PMU_EVENT_MASK		0x1ffUL
#define CCI5xx_PMU_EVENT_SOURCE_SHIFT	0x5
#define CCI5xx_PMU_EVENT_SOURCE_MASK	0xf
#define CCI5xx_PMU_EVENT_CODE_SHIFT	0x0
#define CCI5xx_PMU_EVENT_CODE_MASK	0x1f

#define CCI5xx_PMU_EVENT_SOURCE(event)	\
	((event >> CCI5xx_PMU_EVENT_SOURCE_SHIFT) & CCI5xx_PMU_EVENT_SOURCE_MASK)
#define CCI5xx_PMU_EVENT_CODE(event)	\
	((event >> CCI5xx_PMU_EVENT_CODE_SHIFT) & CCI5xx_PMU_EVENT_CODE_MASK)

#define CCI5xx_SLAVE_PORT_MIN_EV	0x00
#define CCI5xx_SLAVE_PORT_MAX_EV	0x1f
#define CCI5xx_MASTER_PORT_MIN_EV	0x00
#define CCI5xx_MASTER_PORT_MAX_EV	0x06
#define CCI5xx_GLOBAL_PORT_MIN_EV	0x00
#define CCI5xx_GLOBAL_PORT_MAX_EV	0x0f

#define CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(_name, _config) \
	CCI_EXT_ATTR_ENTRY(_name, cci5xx_pmu_global_event_show, \
					(unsigned long) _config)

static ssize_t cci5xx_pmu_global_event_show(struct device *dev,
				struct device_attribute *attr, char *buf);

static struct attribute *cci5xx_pmu_format_attrs[] = {
	CCI_FORMAT_EXT_ATTR_ENTRY(event, "config:0-4"),
	CCI_FORMAT_EXT_ATTR_ENTRY(source, "config:5-8"),
	NULL,
};

static struct attribute *cci5xx_pmu_event_attrs[] = {
	/* Slave events */
	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_arvalid, 0x0),
	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_dev, 0x1),
	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_nonshareable, 0x2),
	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_shareable_non_alloc, 0x3),
	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_shareable_alloc, 0x4),
	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_invalidate, 0x5),
	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_cache_maint, 0x6),
	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg, 0x7),
	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_rval, 0x8),
	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_rlast_snoop, 0x9),
	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_awalid, 0xA),
	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_dev, 0xB),
	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_non_shareable, 0xC),
	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_share_wb, 0xD),
	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_share_wlu, 0xE),
	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_share_wunique, 0xF),
	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_evict, 0x10),
	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_wrevict, 0x11),
	CCI_EVENT_EXT_ATTR_ENTRY(si_w_data_beat, 0x12),
	CCI_EVENT_EXT_ATTR_ENTRY(si_srq_acvalid, 0x13),
	CCI_EVENT_EXT_ATTR_ENTRY(si_srq_read, 0x14),
	CCI_EVENT_EXT_ATTR_ENTRY(si_srq_clean, 0x15),
	CCI_EVENT_EXT_ATTR_ENTRY(si_srq_data_transfer_low, 0x16),
	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_stall_arvalid, 0x17),
	CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_stall, 0x18),
	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_stall, 0x19),
	CCI_EVENT_EXT_ATTR_ENTRY(si_w_data_stall, 0x1A),
	CCI_EVENT_EXT_ATTR_ENTRY(si_w_resp_stall, 0x1B),
	CCI_EVENT_EXT_ATTR_ENTRY(si_srq_stall, 0x1C),
	CCI_EVENT_EXT_ATTR_ENTRY(si_s_data_stall, 0x1D),
	CCI_EVENT_EXT_ATTR_ENTRY(si_rq_stall_ot_limit, 0x1E),
	CCI_EVENT_EXT_ATTR_ENTRY(si_r_stall_arbit, 0x1F),

	/* Master events */
	CCI_EVENT_EXT_ATTR_ENTRY(mi_r_data_beat_any, 0x0),
	CCI_EVENT_EXT_ATTR_ENTRY(mi_w_data_beat_any, 0x1),
	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall, 0x2),
	CCI_EVENT_EXT_ATTR_ENTRY(mi_r_data_stall, 0x3),
	CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall, 0x4),
	CCI_EVENT_EXT_ATTR_ENTRY(mi_w_data_stall, 0x5),
	CCI_EVENT_EXT_ATTR_ENTRY(mi_w_resp_stall, 0x6),

	/* Global events */
	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_filter_bank_0_1, 0x0),
	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_filter_bank_2_3, 0x1),
	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_filter_bank_4_5, 0x2),
	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_filter_bank_6_7, 0x3),
	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_miss_filter_bank_0_1, 0x4),
	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_miss_filter_bank_2_3, 0x5),
	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_miss_filter_bank_4_5, 0x6),
	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_miss_filter_bank_6_7, 0x7),
	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_back_invalidation, 0x8),
	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_stall_alloc_busy, 0x9),
	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_stall_tt_full, 0xA),
	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_wrq, 0xB),
	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_cd_hs, 0xC),
	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_rq_stall_addr_hazard, 0xD),
	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_rq_stall_tt_full, 0xE),
	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_rq_tzmp1_prot, 0xF),
	NULL
};

static ssize_t cci5xx_pmu_global_event_show(struct device *dev,
				struct device_attribute *attr, char *buf)
{
	struct dev_ext_attribute *eattr = container_of(attr,
					struct dev_ext_attribute, attr);
	/* Global events have single fixed source code */
	return snprintf(buf, PAGE_SIZE, "event=0x%lx,source=0x%x\n",
			(unsigned long)eattr->var, CCI5xx_PORT_GLOBAL);
}

/*
 * CCI500 provides 8 independent event counters that can count
 * any of the events available.
 * CCI500 PMU event source ids
 *	0x0-0x6 - Slave interfaces
 *	0x8-0xD - Master interfaces
 *	0xf     - Global Events
 *	0x7,0xe - Reserved
 */
static int cci500_validate_hw_event(struct cci_pmu *cci_pmu,
				    unsigned long hw_event)
{
	u32 ev_source = CCI5xx_PMU_EVENT_SOURCE(hw_event);
	u32 ev_code = CCI5xx_PMU_EVENT_CODE(hw_event);
	int if_type;

	if (hw_event & ~CCI5xx_PMU_EVENT_MASK)
		return -ENOENT;

	switch (ev_source) {
	case CCI5xx_PORT_S0:
	case CCI5xx_PORT_S1:
	case CCI5xx_PORT_S2:
	case CCI5xx_PORT_S3:
	case CCI5xx_PORT_S4:
	case CCI5xx_PORT_S5:
	case CCI5xx_PORT_S6:
		if_type = CCI_IF_SLAVE;
		break;
	case CCI5xx_PORT_M0:
	case CCI5xx_PORT_M1:
	case CCI5xx_PORT_M2:
	case CCI5xx_PORT_M3:
	case CCI5xx_PORT_M4:
	case CCI5xx_PORT_M5:
		if_type = CCI_IF_MASTER;
		break;
	case CCI5xx_PORT_GLOBAL:
		if_type = CCI_IF_GLOBAL;
		break;
	default:
		return -ENOENT;
	}

	if (ev_code >= cci_pmu->model->event_ranges[if_type].min &&
	    ev_code <= cci_pmu->model->event_ranges[if_type].max)
		return hw_event;

	return -ENOENT;
}

/*
 * CCI550 provides 8 independent event counters that can count
 * any of the events available.
 * CCI550 PMU event source ids
 *	0x0-0x6 - Slave interfaces
 *	0x8-0xe - Master interfaces
 *	0xf     - Global Events
 *	0x7	- Reserved
 */
static int cci550_validate_hw_event(struct cci_pmu *cci_pmu,
				    unsigned long hw_event)
{
	u32 ev_source = CCI5xx_PMU_EVENT_SOURCE(hw_event);
	u32 ev_code = CCI5xx_PMU_EVENT_CODE(hw_event);
	int if_type;

	if (hw_event & ~CCI5xx_PMU_EVENT_MASK)
		return -ENOENT;

	switch (ev_source) {
	case CCI5xx_PORT_S0:
	case CCI5xx_PORT_S1:
	case CCI5xx_PORT_S2:
	case CCI5xx_PORT_S3:
	case CCI5xx_PORT_S4:
	case CCI5xx_PORT_S5:
	case CCI5xx_PORT_S6:
		if_type = CCI_IF_SLAVE;
		break;
	case CCI5xx_PORT_M0:
	case CCI5xx_PORT_M1:
	case CCI5xx_PORT_M2:
	case CCI5xx_PORT_M3:
	case CCI5xx_PORT_M4:
	case CCI5xx_PORT_M5:
	case CCI5xx_PORT_M6:
		if_type = CCI_IF_MASTER;
		break;
	case CCI5xx_PORT_GLOBAL:
		if_type = CCI_IF_GLOBAL;
		break;
	default:
		return -ENOENT;
	}

	if (ev_code >= cci_pmu->model->event_ranges[if_type].min &&
	    ev_code <= cci_pmu->model->event_ranges[if_type].max)
		return hw_event;

	return -ENOENT;
}

#endif	/* CONFIG_ARM_CCI5xx_PMU */

/*
 * Program the CCI PMU counters which have PERF_HES_ARCH set
 * with the event period and mark them ready before we enable
 * PMU.
 */
static void cci_pmu_sync_counters(struct cci_pmu *cci_pmu)
{
	int i;
	struct cci_pmu_hw_events *cci_hw = &cci_pmu->hw_events;

	DECLARE_BITMAP(mask, cci_pmu->num_cntrs);

	bitmap_zero(mask, cci_pmu->num_cntrs);
	for_each_set_bit(i, cci_pmu->hw_events.used_mask, cci_pmu->num_cntrs) {
		struct perf_event *event = cci_hw->events[i];

		if (WARN_ON(!event))
			continue;

		/* Leave the events which are not counting */
		if (event->hw.state & PERF_HES_STOPPED)
			continue;
		if (event->hw.state & PERF_HES_ARCH) {
			set_bit(i, mask);
			event->hw.state &= ~PERF_HES_ARCH;
		}
	}

	pmu_write_counters(cci_pmu, mask);
}

/* Should be called with cci_pmu->hw_events->pmu_lock held */
static void __cci_pmu_enable_nosync(struct cci_pmu *cci_pmu)
{
	u32 val;

	/* Enable all the PMU counters. */
	val = readl_relaxed(cci_pmu->ctrl_base + CCI_PMCR) | CCI_PMCR_CEN;
	writel(val, cci_pmu->ctrl_base + CCI_PMCR);
}

/* Should be called with cci_pmu->hw_events->pmu_lock held */
static void __cci_pmu_enable_sync(struct cci_pmu *cci_pmu)
{
	cci_pmu_sync_counters(cci_pmu);
	__cci_pmu_enable_nosync(cci_pmu);
}

/* Should be called with cci_pmu->hw_events->pmu_lock held */
static void __cci_pmu_disable(struct cci_pmu *cci_pmu)
{
	u32 val;

	/* Disable all the PMU counters. */
	val = readl_relaxed(cci_pmu->ctrl_base + CCI_PMCR) & ~CCI_PMCR_CEN;
	writel(val, cci_pmu->ctrl_base + CCI_PMCR);
}

static ssize_t cci_pmu_format_show(struct device *dev,
			struct device_attribute *attr, char *buf)
{
	struct dev_ext_attribute *eattr = container_of(attr,
				struct dev_ext_attribute, attr);
	return snprintf(buf, PAGE_SIZE, "%s\n", (char *)eattr->var);
}

static ssize_t cci_pmu_event_show(struct device *dev,
			struct device_attribute *attr, char *buf)
{
	struct dev_ext_attribute *eattr = container_of(attr,
				struct dev_ext_attribute, attr);
	/* source parameter is mandatory for normal PMU events */
	return snprintf(buf, PAGE_SIZE, "source=?,event=0x%lx\n",
			(unsigned long)eattr->var);
}

static int pmu_is_valid_counter(struct cci_pmu *cci_pmu, int idx)
{
	return 0 <= idx && idx <= CCI_PMU_CNTR_LAST(cci_pmu);
}

static u32 pmu_read_register(struct cci_pmu *cci_pmu, int idx, unsigned int offset)
{
	return readl_relaxed(cci_pmu->base +
			     CCI_PMU_CNTR_BASE(cci_pmu->model, idx) + offset);
}

static void pmu_write_register(struct cci_pmu *cci_pmu, u32 value,
			       int idx, unsigned int offset)
{
	writel_relaxed(value, cci_pmu->base +
		       CCI_PMU_CNTR_BASE(cci_pmu->model, idx) + offset);
}

static void pmu_disable_counter(struct cci_pmu *cci_pmu, int idx)
{
	pmu_write_register(cci_pmu, 0, idx, CCI_PMU_CNTR_CTRL);
}

static void pmu_enable_counter(struct cci_pmu *cci_pmu, int idx)
{
	pmu_write_register(cci_pmu, 1, idx, CCI_PMU_CNTR_CTRL);
}

static bool __maybe_unused
pmu_counter_is_enabled(struct cci_pmu *cci_pmu, int idx)
{
	return (pmu_read_register(cci_pmu, idx, CCI_PMU_CNTR_CTRL) & 0x1) != 0;
}

static void pmu_set_event(struct cci_pmu *cci_pmu, int idx, unsigned long event)
{
	pmu_write_register(cci_pmu, event, idx, CCI_PMU_EVT_SEL);
}

/*
 * For all counters on the CCI-PMU, disable any 'enabled' counters,
 * saving the changed counters in the mask, so that we can restore
 * it later using pmu_restore_counters. The mask is private to the
 * caller. We cannot rely on the used_mask maintained by the CCI_PMU
 * as it only tells us if the counter is assigned to a perf_event or not.
 * The state of the perf_event cannot be locked by the PMU layer, hence
 * we check the individual counter status (which can be locked by
 * cci_pmu->hw_events->pmu_lock).
 *
 * @mask should be initialised to empty by the caller.
 */
static void __maybe_unused
pmu_save_counters(struct cci_pmu *cci_pmu, unsigned long *mask)
{
	int i;

	for (i = 0; i < cci_pmu->num_cntrs; i++) {
		if (pmu_counter_is_enabled(cci_pmu, i)) {
			set_bit(i, mask);
			pmu_disable_counter(cci_pmu, i);
		}
	}
}

/*
 * Restore the status of the counters. Reversal of the pmu_save_counters().
 * For each counter set in the mask, enable the counter back.
 */
static void __maybe_unused
pmu_restore_counters(struct cci_pmu *cci_pmu, unsigned long *mask)
{
	int i;

	for_each_set_bit(i, mask, cci_pmu->num_cntrs)
		pmu_enable_counter(cci_pmu, i);
}

/*
 * Returns the number of programmable counters actually implemented
 * by the cci
 */
static u32 pmu_get_max_counters(struct cci_pmu *cci_pmu)
{
	return (readl_relaxed(cci_pmu->ctrl_base + CCI_PMCR) &
		CCI_PMCR_NCNT_MASK) >> CCI_PMCR_NCNT_SHIFT;
}

static int pmu_get_event_idx(struct cci_pmu_hw_events *hw, struct perf_event *event)
{
	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
	unsigned long cci_event = event->hw.config_base;
	int idx;

	if (cci_pmu->model->get_event_idx)
		return cci_pmu->model->get_event_idx(cci_pmu, hw, cci_event);

	/* Generic code to find an unused idx from the mask */
	for (idx = 0; idx <= CCI_PMU_CNTR_LAST(cci_pmu); idx++)
		if (!test_and_set_bit(idx, hw->used_mask))
			return idx;

	/* No counters available */
	return -EAGAIN;
}

static int pmu_map_event(struct perf_event *event)
{
	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);

	if (event->attr.type < PERF_TYPE_MAX ||
	    !cci_pmu->model->validate_hw_event)
		return -ENOENT;

	return cci_pmu->model->validate_hw_event(cci_pmu, event->attr.config);
}

static int pmu_request_irq(struct cci_pmu *cci_pmu, irq_handler_t handler)
{
	int i;
	struct platform_device *pmu_device = cci_pmu->plat_device;

	if (unlikely(!pmu_device))
		return -ENODEV;

	if (cci_pmu->nr_irqs < 1) {
		dev_err(&pmu_device->dev, "no irqs for CCI PMUs defined\n");
		return -ENODEV;
	}

	/*
	 * Register all available CCI PMU interrupts. In the interrupt handler
	 * we iterate over the counters checking for interrupt source (the
	 * overflowing counter) and clear it.
	 *
	 * This should allow handling of non-unique interrupt for the counters.
	 */
	for (i = 0; i < cci_pmu->nr_irqs; i++) {
		int err = request_irq(cci_pmu->irqs[i], handler, IRQF_SHARED,
				"arm-cci-pmu", cci_pmu);
		if (err) {
			dev_err(&pmu_device->dev, "unable to request IRQ%d for ARM CCI PMU counters\n",
				cci_pmu->irqs[i]);
			return err;
		}

		set_bit(i, &cci_pmu->active_irqs);
	}

	return 0;
}

static void pmu_free_irq(struct cci_pmu *cci_pmu)
{
	int i;

	for (i = 0; i < cci_pmu->nr_irqs; i++) {
		if (!test_and_clear_bit(i, &cci_pmu->active_irqs))
			continue;

		free_irq(cci_pmu->irqs[i], cci_pmu);
	}
}

static u32 pmu_read_counter(struct perf_event *event)
{
	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
	struct hw_perf_event *hw_counter = &event->hw;
	int idx = hw_counter->idx;
	u32 value;

	if (unlikely(!pmu_is_valid_counter(cci_pmu, idx))) {
		dev_err(&cci_pmu->plat_device->dev, "Invalid CCI PMU counter %d\n", idx);
		return 0;
	}
	value = pmu_read_register(cci_pmu, idx, CCI_PMU_CNTR);

	return value;
}

static void pmu_write_counter(struct cci_pmu *cci_pmu, u32 value, int idx)
{
	pmu_write_register(cci_pmu, value, idx, CCI_PMU_CNTR);
}

static void __pmu_write_counters(struct cci_pmu *cci_pmu, unsigned long *mask)
{
	int i;
	struct cci_pmu_hw_events *cci_hw = &cci_pmu->hw_events;

	for_each_set_bit(i, mask, cci_pmu->num_cntrs) {
		struct perf_event *event = cci_hw->events[i];

		if (WARN_ON(!event))
			continue;
		pmu_write_counter(cci_pmu, local64_read(&event->hw.prev_count), i);
	}
}

static void pmu_write_counters(struct cci_pmu *cci_pmu, unsigned long *mask)
{
	if (cci_pmu->model->write_counters)
		cci_pmu->model->write_counters(cci_pmu, mask);
	else
		__pmu_write_counters(cci_pmu, mask);
}

#ifdef CONFIG_ARM_CCI5xx_PMU

/*
 * CCI-500/CCI-550 has advanced power saving policies, which could gate the
 * clocks to the PMU counters, which makes the writes to them ineffective.
 * The only way to write to those counters is when the global counters
 * are enabled and the particular counter is enabled.
 *
 * So we do the following :
 *
 * 1) Disable all the PMU counters, saving their current state
 * 2) Enable the global PMU profiling, now that all counters are
 *    disabled.
 *
 * For each counter to be programmed, repeat steps 3-7:
 *
 * 3) Write an invalid event code to the event control register for the
 *    counter, so that the counters are not modified.
 * 4) Enable the counter control for the counter.
 * 5) Set the counter value
 * 6) Disable the counter
 * 7) Restore the event in the target counter
 *
 * 8) Disable the global PMU.
 * 9) Restore the status of the rest of the counters.
 *
 * We choose an event which for CCI-5xx is guaranteed not to count.
 * We use the highest possible event code (0x1f) for the master interface 0.
 */
#define CCI5xx_INVALID_EVENT	((CCI5xx_PORT_M0 << CCI5xx_PMU_EVENT_SOURCE_SHIFT) | \
				 (CCI5xx_PMU_EVENT_CODE_MASK << CCI5xx_PMU_EVENT_CODE_SHIFT))
static void cci5xx_pmu_write_counters(struct cci_pmu *cci_pmu, unsigned long *mask)
{
	int i;
	DECLARE_BITMAP(saved_mask, cci_pmu->num_cntrs);

	bitmap_zero(saved_mask, cci_pmu->num_cntrs);
	pmu_save_counters(cci_pmu, saved_mask);

	/*
	 * Now that all the counters are disabled, we can safely turn the PMU on,
	 * without syncing the status of the counters
	 */
	__cci_pmu_enable_nosync(cci_pmu);

	for_each_set_bit(i, mask, cci_pmu->num_cntrs) {
		struct perf_event *event = cci_pmu->hw_events.events[i];

		if (WARN_ON(!event))
			continue;

		pmu_set_event(cci_pmu, i, CCI5xx_INVALID_EVENT);
		pmu_enable_counter(cci_pmu, i);
		pmu_write_counter(cci_pmu, local64_read(&event->hw.prev_count), i);
		pmu_disable_counter(cci_pmu, i);
		pmu_set_event(cci_pmu, i, event->hw.config_base);
	}

	__cci_pmu_disable(cci_pmu);

	pmu_restore_counters(cci_pmu, saved_mask);
}

#endif	/* CONFIG_ARM_CCI5xx_PMU */

static u64 pmu_event_update(struct perf_event *event)
{
	struct hw_perf_event *hwc = &event->hw;
	u64 delta, prev_raw_count, new_raw_count;

	do {
		prev_raw_count = local64_read(&hwc->prev_count);
		new_raw_count = pmu_read_counter(event);
	} while (local64_cmpxchg(&hwc->prev_count, prev_raw_count,
		 new_raw_count) != prev_raw_count);

	delta = (new_raw_count - prev_raw_count) & CCI_PMU_CNTR_MASK;

	local64_add(delta, &event->count);

	return new_raw_count;
}

static void pmu_read(struct perf_event *event)
{
	pmu_event_update(event);
}

static void pmu_event_set_period(struct perf_event *event)
{
	struct hw_perf_event *hwc = &event->hw;
	/*
	 * The CCI PMU counters have a period of 2^32. To account for the
	 * possibility of extreme interrupt latency we program for a period of
	 * half that. Hopefully we can handle the interrupt before another 2^31
	 * events occur and the counter overtakes its previous value.
	 */
	u64 val = 1ULL << 31;
	local64_set(&hwc->prev_count, val);

	/*
	 * CCI PMU uses PERF_HES_ARCH to keep track of the counters, whose
	 * values need to be synced with the s/w state before the PMU is
	 * enabled.
	 * Mark this counter for sync.
	 */
	hwc->state |= PERF_HES_ARCH;
}

static irqreturn_t pmu_handle_irq(int irq_num, void *dev)
{
	unsigned long flags;
	struct cci_pmu *cci_pmu = dev;
	struct cci_pmu_hw_events *events = &cci_pmu->hw_events;
	int idx, handled = IRQ_NONE;

	raw_spin_lock_irqsave(&events->pmu_lock, flags);

	/* Disable the PMU while we walk through the counters */
	__cci_pmu_disable(cci_pmu);
	/*
	 * Iterate over counters and update the corresponding perf events.
	 * This should work regardless of whether we have per-counter overflow
	 * interrupt or a combined overflow interrupt.
	 */
	for (idx = 0; idx <= CCI_PMU_CNTR_LAST(cci_pmu); idx++) {
		struct perf_event *event = events->events[idx];

		if (!event)
			continue;

		/* Did this counter overflow? */
		if (!(pmu_read_register(cci_pmu, idx, CCI_PMU_OVRFLW) &
		      CCI_PMU_OVRFLW_FLAG))
			continue;

		pmu_write_register(cci_pmu, CCI_PMU_OVRFLW_FLAG, idx,
				   CCI_PMU_OVRFLW);

		pmu_event_update(event);
		pmu_event_set_period(event);
		handled = IRQ_HANDLED;
	}

	/* Enable the PMU and sync possibly overflowed counters */
	__cci_pmu_enable_sync(cci_pmu);
	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);

	return IRQ_RETVAL(handled);
}

static int cci_pmu_get_hw(struct cci_pmu *cci_pmu)
{
	int ret = pmu_request_irq(cci_pmu, pmu_handle_irq);
	if (ret) {
		pmu_free_irq(cci_pmu);
		return ret;
	}
	return 0;
}

static void cci_pmu_put_hw(struct cci_pmu *cci_pmu)
{
	pmu_free_irq(cci_pmu);
}

static void hw_perf_event_destroy(struct perf_event *event)
{
	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
	atomic_t *active_events = &cci_pmu->active_events;
	struct mutex *reserve_mutex = &cci_pmu->reserve_mutex;

	if (atomic_dec_and_mutex_lock(active_events, reserve_mutex)) {
		cci_pmu_put_hw(cci_pmu);
		mutex_unlock(reserve_mutex);
	}
}

static void cci_pmu_enable(struct pmu *pmu)
{
	struct cci_pmu *cci_pmu = to_cci_pmu(pmu);
	struct cci_pmu_hw_events *hw_events = &cci_pmu->hw_events;
	int enabled = bitmap_weight(hw_events->used_mask, cci_pmu->num_cntrs);
	unsigned long flags;

	if (!enabled)
		return;

	raw_spin_lock_irqsave(&hw_events->pmu_lock, flags);
	__cci_pmu_enable_sync(cci_pmu);
	raw_spin_unlock_irqrestore(&hw_events->pmu_lock, flags);
}

static void cci_pmu_disable(struct pmu *pmu)
{
	struct cci_pmu *cci_pmu = to_cci_pmu(pmu);
	struct cci_pmu_hw_events *hw_events = &cci_pmu->hw_events;
	unsigned long flags;

	raw_spin_lock_irqsave(&hw_events->pmu_lock, flags);
	__cci_pmu_disable(cci_pmu);
	raw_spin_unlock_irqrestore(&hw_events->pmu_lock, flags);
}

/*
 * Check if the idx represents a non-programmable counter.
 * All the fixed event counters are mapped before the programmable
 * counters.
 */
static bool pmu_fixed_hw_idx(struct cci_pmu *cci_pmu, int idx)
{
	return (idx >= 0) && (idx < cci_pmu->model->fixed_hw_cntrs);
}

static void cci_pmu_start(struct perf_event *event, int pmu_flags)
{
	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
	struct cci_pmu_hw_events *hw_events = &cci_pmu->hw_events;
	struct hw_perf_event *hwc = &event->hw;
	int idx = hwc->idx;
	unsigned long flags;

	/*
	 * To handle interrupt latency, we always reprogram the period
	 * regardless of PERF_EF_RELOAD.
	 */
	if (pmu_flags & PERF_EF_RELOAD)
		WARN_ON_ONCE(!(hwc->state & PERF_HES_UPTODATE));

	hwc->state = 0;

	if (unlikely(!pmu_is_valid_counter(cci_pmu, idx))) {
		dev_err(&cci_pmu->plat_device->dev, "Invalid CCI PMU counter %d\n", idx);
		return;
	}

	raw_spin_lock_irqsave(&hw_events->pmu_lock, flags);

	/* Configure the counter unless you are counting a fixed event */
	if (!pmu_fixed_hw_idx(cci_pmu, idx))
		pmu_set_event(cci_pmu, idx, hwc->config_base);

	pmu_event_set_period(event);
	pmu_enable_counter(cci_pmu, idx);

	raw_spin_unlock_irqrestore(&hw_events->pmu_lock, flags);
}

static void cci_pmu_stop(struct perf_event *event, int pmu_flags)
{
	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
	struct hw_perf_event *hwc = &event->hw;
	int idx = hwc->idx;

	if (hwc->state & PERF_HES_STOPPED)
		return;

	if (unlikely(!pmu_is_valid_counter(cci_pmu, idx))) {
		dev_err(&cci_pmu->plat_device->dev, "Invalid CCI PMU counter %d\n", idx);
		return;
	}

	/*
	 * We always reprogram the counter, so ignore PERF_EF_UPDATE. See
	 * cci_pmu_start()
	 */
	pmu_disable_counter(cci_pmu, idx);
	pmu_event_update(event);
	hwc->state |= PERF_HES_STOPPED | PERF_HES_UPTODATE;
}

static int cci_pmu_add(struct perf_event *event, int flags)
{
	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
	struct cci_pmu_hw_events *hw_events = &cci_pmu->hw_events;
	struct hw_perf_event *hwc = &event->hw;
	int idx;
	int err = 0;

	perf_pmu_disable(event->pmu);

	/* If we don't have a space for the counter then finish early. */
	idx = pmu_get_event_idx(hw_events, event);
	if (idx < 0) {
		err = idx;
		goto out;
	}

	event->hw.idx = idx;
	hw_events->events[idx] = event;

	hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE;
	if (flags & PERF_EF_START)
		cci_pmu_start(event, PERF_EF_RELOAD);

	/* Propagate our changes to the userspace mapping. */
	perf_event_update_userpage(event);

out:
	perf_pmu_enable(event->pmu);
	return err;
}

static void cci_pmu_del(struct perf_event *event, int flags)
{
	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
	struct cci_pmu_hw_events *hw_events = &cci_pmu->hw_events;
	struct hw_perf_event *hwc = &event->hw;
	int idx = hwc->idx;

	cci_pmu_stop(event, PERF_EF_UPDATE);
	hw_events->events[idx] = NULL;
	clear_bit(idx, hw_events->used_mask);

	perf_event_update_userpage(event);
}

static int validate_event(struct pmu *cci_pmu,
			  struct cci_pmu_hw_events *hw_events,
			  struct perf_event *event)
{
	if (is_software_event(event))
		return 1;

	/*
	 * Reject groups spanning multiple HW PMUs (e.g. CPU + CCI). The
	 * core perf code won't check that the pmu->ctx == leader->ctx
	 * until after pmu->event_init(event).
	 */
	if (event->pmu != cci_pmu)
		return 0;

	if (event->state < PERF_EVENT_STATE_OFF)
		return 1;

	if (event->state == PERF_EVENT_STATE_OFF && !event->attr.enable_on_exec)
		return 1;

	return pmu_get_event_idx(hw_events, event) >= 0;
}

static int validate_group(struct perf_event *event)
{
	struct perf_event *sibling, *leader = event->group_leader;
	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
	unsigned long mask[BITS_TO_LONGS(cci_pmu->num_cntrs)];
	struct cci_pmu_hw_events fake_pmu = {
		/*
		 * Initialise the fake PMU. We only need to populate the
		 * used_mask for the purposes of validation.
		 */
		.used_mask = mask,
	};
	memset(mask, 0, BITS_TO_LONGS(cci_pmu->num_cntrs) * sizeof(unsigned long));

	if (!validate_event(event->pmu, &fake_pmu, leader))
		return -EINVAL;

	for_each_sibling_event(sibling, leader) {
		if (!validate_event(event->pmu, &fake_pmu, sibling))
			return -EINVAL;
	}

	if (!validate_event(event->pmu, &fake_pmu, event))
		return -EINVAL;

	return 0;
}

static int __hw_perf_event_init(struct perf_event *event)
{
	struct hw_perf_event *hwc = &event->hw;
	int mapping;

	mapping = pmu_map_event(event);

	if (mapping < 0) {
		pr_debug("event %x:%llx not supported\n", event->attr.type,
			 event->attr.config);
		return mapping;
	}

	/*
	 * We don't assign an index until we actually place the event onto
	 * hardware. Use -1 to signify that we haven't decided where to put it
	 * yet.
	 */
	hwc->idx		= -1;
	hwc->config_base	= 0;
	hwc->config		= 0;
	hwc->event_base		= 0;

	/*
	 * Store the event encoding into the config_base field.
	 */
	hwc->config_base	    |= (unsigned long)mapping;

	/*
	 * Limit the sample_period to half of the counter width. That way, the
	 * new counter value is far less likely to overtake the previous one
	 * unless you have some serious IRQ latency issues.
	 */
	hwc->sample_period  = CCI_PMU_CNTR_MASK >> 1;
	hwc->last_period    = hwc->sample_period;
	local64_set(&hwc->period_left, hwc->sample_period);

	if (event->group_leader != event) {
		if (validate_group(event) != 0)
			return -EINVAL;
	}

	return 0;
}

static int cci_pmu_event_init(struct perf_event *event)
{
	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
	atomic_t *active_events = &cci_pmu->active_events;
	int err = 0;

	if (event->attr.type != event->pmu->type)
		return -ENOENT;

	/* Shared by all CPUs, no meaningful state to sample */
	if (is_sampling_event(event) || event->attach_state & PERF_ATTACH_TASK)
		return -EOPNOTSUPP;

	/* We have no filtering of any kind */
	if (event->attr.exclude_user	||
	    event->attr.exclude_kernel	||
	    event->attr.exclude_hv	||
	    event->attr.exclude_idle	||
	    event->attr.exclude_host	||
	    event->attr.exclude_guest)
		return -EINVAL;

	/*
	 * Following the example set by other "uncore" PMUs, we accept any CPU
	 * and rewrite its affinity dynamically rather than having perf core
	 * handle cpu == -1 and pid == -1 for this case.
	 *
	 * The perf core will pin online CPUs for the duration of this call and
	 * the event being installed into its context, so the PMU's CPU can't
	 * change under our feet.
1354 + */ 1355 + if (event->cpu < 0) 1356 + return -EINVAL; 1357 + event->cpu = cci_pmu->cpu; 1358 + 1359 + event->destroy = hw_perf_event_destroy; 1360 + if (!atomic_inc_not_zero(active_events)) { 1361 + mutex_lock(&cci_pmu->reserve_mutex); 1362 + if (atomic_read(active_events) == 0) 1363 + err = cci_pmu_get_hw(cci_pmu); 1364 + if (!err) 1365 + atomic_inc(active_events); 1366 + mutex_unlock(&cci_pmu->reserve_mutex); 1367 + } 1368 + if (err) 1369 + return err; 1370 + 1371 + err = __hw_perf_event_init(event); 1372 + if (err) 1373 + hw_perf_event_destroy(event); 1374 + 1375 + return err; 1376 + } 1377 + 1378 + static ssize_t pmu_cpumask_attr_show(struct device *dev, 1379 + struct device_attribute *attr, char *buf) 1380 + { 1381 + struct pmu *pmu = dev_get_drvdata(dev); 1382 + struct cci_pmu *cci_pmu = to_cci_pmu(pmu); 1383 + 1384 + return cpumap_print_to_pagebuf(true, buf, cpumask_of(cci_pmu->cpu)); 1385 + } 1386 + 1387 + static struct device_attribute pmu_cpumask_attr = 1388 + __ATTR(cpumask, S_IRUGO, pmu_cpumask_attr_show, NULL); 1389 + 1390 + static struct attribute *pmu_attrs[] = { 1391 + &pmu_cpumask_attr.attr, 1392 + NULL, 1393 + }; 1394 + 1395 + static struct attribute_group pmu_attr_group = { 1396 + .attrs = pmu_attrs, 1397 + }; 1398 + 1399 + static struct attribute_group pmu_format_attr_group = { 1400 + .name = "format", 1401 + .attrs = NULL, /* Filled in cci_pmu_init_attrs */ 1402 + }; 1403 + 1404 + static struct attribute_group pmu_event_attr_group = { 1405 + .name = "events", 1406 + .attrs = NULL, /* Filled in cci_pmu_init_attrs */ 1407 + }; 1408 + 1409 + static const struct attribute_group *pmu_attr_groups[] = { 1410 + &pmu_attr_group, 1411 + &pmu_format_attr_group, 1412 + &pmu_event_attr_group, 1413 + NULL 1414 + }; 1415 + 1416 + static int cci_pmu_init(struct cci_pmu *cci_pmu, struct platform_device *pdev) 1417 + { 1418 + const struct cci_pmu_model *model = cci_pmu->model; 1419 + char *name = model->name; 1420 + u32 num_cntrs; 1421 + 1422 + 
pmu_event_attr_group.attrs = model->event_attrs; 1423 + pmu_format_attr_group.attrs = model->format_attrs; 1424 + 1425 + cci_pmu->pmu = (struct pmu) { 1426 + .name = cci_pmu->model->name, 1427 + .task_ctx_nr = perf_invalid_context, 1428 + .pmu_enable = cci_pmu_enable, 1429 + .pmu_disable = cci_pmu_disable, 1430 + .event_init = cci_pmu_event_init, 1431 + .add = cci_pmu_add, 1432 + .del = cci_pmu_del, 1433 + .start = cci_pmu_start, 1434 + .stop = cci_pmu_stop, 1435 + .read = pmu_read, 1436 + .attr_groups = pmu_attr_groups, 1437 + }; 1438 + 1439 + cci_pmu->plat_device = pdev; 1440 + num_cntrs = pmu_get_max_counters(cci_pmu); 1441 + if (num_cntrs > cci_pmu->model->num_hw_cntrs) { 1442 + dev_warn(&pdev->dev, 1443 + "PMU implements more counters(%d) than supported by" 1444 + " the model(%d), truncated.", 1445 + num_cntrs, cci_pmu->model->num_hw_cntrs); 1446 + num_cntrs = cci_pmu->model->num_hw_cntrs; 1447 + } 1448 + cci_pmu->num_cntrs = num_cntrs + cci_pmu->model->fixed_hw_cntrs; 1449 + 1450 + return perf_pmu_register(&cci_pmu->pmu, name, -1); 1451 + } 1452 + 1453 + static int cci_pmu_offline_cpu(unsigned int cpu) 1454 + { 1455 + int target; 1456 + 1457 + if (!g_cci_pmu || cpu != g_cci_pmu->cpu) 1458 + return 0; 1459 + 1460 + target = cpumask_any_but(cpu_online_mask, cpu); 1461 + if (target >= nr_cpu_ids) 1462 + return 0; 1463 + 1464 + perf_pmu_migrate_context(&g_cci_pmu->pmu, cpu, target); 1465 + g_cci_pmu->cpu = target; 1466 + return 0; 1467 + } 1468 + 1469 + static struct cci_pmu_model cci_pmu_models[] = { 1470 + #ifdef CONFIG_ARM_CCI400_PMU 1471 + [CCI400_R0] = { 1472 + .name = "CCI_400", 1473 + .fixed_hw_cntrs = 1, /* Cycle counter */ 1474 + .num_hw_cntrs = 4, 1475 + .cntr_size = SZ_4K, 1476 + .format_attrs = cci400_pmu_format_attrs, 1477 + .event_attrs = cci400_r0_pmu_event_attrs, 1478 + .event_ranges = { 1479 + [CCI_IF_SLAVE] = { 1480 + CCI400_R0_SLAVE_PORT_MIN_EV, 1481 + CCI400_R0_SLAVE_PORT_MAX_EV, 1482 + }, 1483 + [CCI_IF_MASTER] = { 1484 + 
CCI400_R0_MASTER_PORT_MIN_EV, 1485 + CCI400_R0_MASTER_PORT_MAX_EV, 1486 + }, 1487 + }, 1488 + .validate_hw_event = cci400_validate_hw_event, 1489 + .get_event_idx = cci400_get_event_idx, 1490 + }, 1491 + [CCI400_R1] = { 1492 + .name = "CCI_400_r1", 1493 + .fixed_hw_cntrs = 1, /* Cycle counter */ 1494 + .num_hw_cntrs = 4, 1495 + .cntr_size = SZ_4K, 1496 + .format_attrs = cci400_pmu_format_attrs, 1497 + .event_attrs = cci400_r1_pmu_event_attrs, 1498 + .event_ranges = { 1499 + [CCI_IF_SLAVE] = { 1500 + CCI400_R1_SLAVE_PORT_MIN_EV, 1501 + CCI400_R1_SLAVE_PORT_MAX_EV, 1502 + }, 1503 + [CCI_IF_MASTER] = { 1504 + CCI400_R1_MASTER_PORT_MIN_EV, 1505 + CCI400_R1_MASTER_PORT_MAX_EV, 1506 + }, 1507 + }, 1508 + .validate_hw_event = cci400_validate_hw_event, 1509 + .get_event_idx = cci400_get_event_idx, 1510 + }, 1511 + #endif 1512 + #ifdef CONFIG_ARM_CCI5xx_PMU 1513 + [CCI500_R0] = { 1514 + .name = "CCI_500", 1515 + .fixed_hw_cntrs = 0, 1516 + .num_hw_cntrs = 8, 1517 + .cntr_size = SZ_64K, 1518 + .format_attrs = cci5xx_pmu_format_attrs, 1519 + .event_attrs = cci5xx_pmu_event_attrs, 1520 + .event_ranges = { 1521 + [CCI_IF_SLAVE] = { 1522 + CCI5xx_SLAVE_PORT_MIN_EV, 1523 + CCI5xx_SLAVE_PORT_MAX_EV, 1524 + }, 1525 + [CCI_IF_MASTER] = { 1526 + CCI5xx_MASTER_PORT_MIN_EV, 1527 + CCI5xx_MASTER_PORT_MAX_EV, 1528 + }, 1529 + [CCI_IF_GLOBAL] = { 1530 + CCI5xx_GLOBAL_PORT_MIN_EV, 1531 + CCI5xx_GLOBAL_PORT_MAX_EV, 1532 + }, 1533 + }, 1534 + .validate_hw_event = cci500_validate_hw_event, 1535 + .write_counters = cci5xx_pmu_write_counters, 1536 + }, 1537 + [CCI550_R0] = { 1538 + .name = "CCI_550", 1539 + .fixed_hw_cntrs = 0, 1540 + .num_hw_cntrs = 8, 1541 + .cntr_size = SZ_64K, 1542 + .format_attrs = cci5xx_pmu_format_attrs, 1543 + .event_attrs = cci5xx_pmu_event_attrs, 1544 + .event_ranges = { 1545 + [CCI_IF_SLAVE] = { 1546 + CCI5xx_SLAVE_PORT_MIN_EV, 1547 + CCI5xx_SLAVE_PORT_MAX_EV, 1548 + }, 1549 + [CCI_IF_MASTER] = { 1550 + CCI5xx_MASTER_PORT_MIN_EV, 1551 + CCI5xx_MASTER_PORT_MAX_EV, 
1552 + }, 1553 + [CCI_IF_GLOBAL] = { 1554 + CCI5xx_GLOBAL_PORT_MIN_EV, 1555 + CCI5xx_GLOBAL_PORT_MAX_EV, 1556 + }, 1557 + }, 1558 + .validate_hw_event = cci550_validate_hw_event, 1559 + .write_counters = cci5xx_pmu_write_counters, 1560 + }, 1561 + #endif 1562 + }; 1563 + 1564 + static const struct of_device_id arm_cci_pmu_matches[] = { 1565 + #ifdef CONFIG_ARM_CCI400_PMU 1566 + { 1567 + .compatible = "arm,cci-400-pmu", 1568 + .data = NULL, 1569 + }, 1570 + { 1571 + .compatible = "arm,cci-400-pmu,r0", 1572 + .data = &cci_pmu_models[CCI400_R0], 1573 + }, 1574 + { 1575 + .compatible = "arm,cci-400-pmu,r1", 1576 + .data = &cci_pmu_models[CCI400_R1], 1577 + }, 1578 + #endif 1579 + #ifdef CONFIG_ARM_CCI5xx_PMU 1580 + { 1581 + .compatible = "arm,cci-500-pmu,r0", 1582 + .data = &cci_pmu_models[CCI500_R0], 1583 + }, 1584 + { 1585 + .compatible = "arm,cci-550-pmu,r0", 1586 + .data = &cci_pmu_models[CCI550_R0], 1587 + }, 1588 + #endif 1589 + {}, 1590 + }; 1591 + 1592 + static bool is_duplicate_irq(int irq, int *irqs, int nr_irqs) 1593 + { 1594 + int i; 1595 + 1596 + for (i = 0; i < nr_irqs; i++) 1597 + if (irq == irqs[i]) 1598 + return true; 1599 + 1600 + return false; 1601 + } 1602 + 1603 + static struct cci_pmu *cci_pmu_alloc(struct device *dev) 1604 + { 1605 + struct cci_pmu *cci_pmu; 1606 + const struct cci_pmu_model *model; 1607 + 1608 + /* 1609 + * All allocations are devm_* hence we don't have to free 1610 + * them explicitly on an error, as it would end up in driver 1611 + * detach. 
1612 + */ 1613 + cci_pmu = devm_kzalloc(dev, sizeof(*cci_pmu), GFP_KERNEL); 1614 + if (!cci_pmu) 1615 + return ERR_PTR(-ENOMEM); 1616 + 1617 + cci_pmu->ctrl_base = *(void __iomem **)dev->platform_data; 1618 + 1619 + model = of_device_get_match_data(dev); 1620 + if (!model) { 1621 + dev_warn(dev, 1622 + "DEPRECATED compatible property, requires secure access to CCI registers"); 1623 + model = probe_cci_model(cci_pmu); 1624 + } 1625 + if (!model) { 1626 + dev_warn(dev, "CCI PMU version not supported\n"); 1627 + return ERR_PTR(-ENODEV); 1628 + } 1629 + 1630 + cci_pmu->model = model; 1631 + cci_pmu->irqs = devm_kcalloc(dev, CCI_PMU_MAX_HW_CNTRS(model), 1632 + sizeof(*cci_pmu->irqs), GFP_KERNEL); 1633 + if (!cci_pmu->irqs) 1634 + return ERR_PTR(-ENOMEM); 1635 + cci_pmu->hw_events.events = devm_kcalloc(dev, 1636 + CCI_PMU_MAX_HW_CNTRS(model), 1637 + sizeof(*cci_pmu->hw_events.events), 1638 + GFP_KERNEL); 1639 + if (!cci_pmu->hw_events.events) 1640 + return ERR_PTR(-ENOMEM); 1641 + cci_pmu->hw_events.used_mask = devm_kcalloc(dev, 1642 + BITS_TO_LONGS(CCI_PMU_MAX_HW_CNTRS(model)), 1643 + sizeof(*cci_pmu->hw_events.used_mask), 1644 + GFP_KERNEL); 1645 + if (!cci_pmu->hw_events.used_mask) 1646 + return ERR_PTR(-ENOMEM); 1647 + 1648 + return cci_pmu; 1649 + } 1650 + 1651 + static int cci_pmu_probe(struct platform_device *pdev) 1652 + { 1653 + struct resource *res; 1654 + struct cci_pmu *cci_pmu; 1655 + int i, ret, irq; 1656 + 1657 + cci_pmu = cci_pmu_alloc(&pdev->dev); 1658 + if (IS_ERR(cci_pmu)) 1659 + return PTR_ERR(cci_pmu); 1660 + 1661 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1662 + cci_pmu->base = devm_ioremap_resource(&pdev->dev, res); 1663 + if (IS_ERR(cci_pmu->base)) 1664 + return -ENOMEM; 1665 + 1666 + /* 1667 + * CCI PMU has one overflow interrupt per counter; but some may be tied 1668 + * together to a common interrupt. 
1669 + */ 1670 + cci_pmu->nr_irqs = 0; 1671 + for (i = 0; i < CCI_PMU_MAX_HW_CNTRS(cci_pmu->model); i++) { 1672 + irq = platform_get_irq(pdev, i); 1673 + if (irq < 0) 1674 + break; 1675 + 1676 + if (is_duplicate_irq(irq, cci_pmu->irqs, cci_pmu->nr_irqs)) 1677 + continue; 1678 + 1679 + cci_pmu->irqs[cci_pmu->nr_irqs++] = irq; 1680 + } 1681 + 1682 + /* 1683 + * Ensure that the device tree has as many interrupts as the number 1684 + * of counters. 1685 + */ 1686 + if (i < CCI_PMU_MAX_HW_CNTRS(cci_pmu->model)) { 1687 + dev_warn(&pdev->dev, "In-correct number of interrupts: %d, should be %d\n", 1688 + i, CCI_PMU_MAX_HW_CNTRS(cci_pmu->model)); 1689 + return -EINVAL; 1690 + } 1691 + 1692 + raw_spin_lock_init(&cci_pmu->hw_events.pmu_lock); 1693 + mutex_init(&cci_pmu->reserve_mutex); 1694 + atomic_set(&cci_pmu->active_events, 0); 1695 + cci_pmu->cpu = get_cpu(); 1696 + 1697 + ret = cci_pmu_init(cci_pmu, pdev); 1698 + if (ret) { 1699 + put_cpu(); 1700 + return ret; 1701 + } 1702 + 1703 + cpuhp_setup_state_nocalls(CPUHP_AP_PERF_ARM_CCI_ONLINE, 1704 + "perf/arm/cci:online", NULL, 1705 + cci_pmu_offline_cpu); 1706 + put_cpu(); 1707 + g_cci_pmu = cci_pmu; 1708 + pr_info("ARM %s PMU driver probed", cci_pmu->model->name); 1709 + return 0; 1710 + } 1711 + 1712 + static struct platform_driver cci_pmu_driver = { 1713 + .driver = { 1714 + .name = DRIVER_NAME, 1715 + .of_match_table = arm_cci_pmu_matches, 1716 + }, 1717 + .probe = cci_pmu_probe, 1718 + }; 1719 + 1720 + builtin_platform_driver(cci_pmu_driver); 1721 + MODULE_LICENSE("GPL"); 1722 + MODULE_DESCRIPTION("ARM CCI PMU support");
+14 -3
drivers/reset/Kconfig
··· 49 49 50 50 config RESET_IMX7 51 51 bool "i.MX7 Reset Driver" if COMPILE_TEST 52 + depends on HAS_IOMEM 52 53 default SOC_IMX7D 53 54 select MFD_SYSCON 54 55 help ··· 84 83 85 84 config RESET_SIMPLE 86 85 bool "Simple Reset Controller Driver" if COMPILE_TEST 87 - default ARCH_SOCFPGA || ARCH_STM32 || ARCH_STRATIX10 || ARCH_SUNXI || ARCH_ZX 86 + default ARCH_SOCFPGA || ARCH_STM32 || ARCH_STRATIX10 || ARCH_SUNXI || ARCH_ZX || ARCH_ASPEED 88 87 help 89 88 This enables a simple reset controller driver for reset lines that 90 89 that can be asserted and deasserted by toggling bits in a contiguous, 91 90 exclusive register space. 92 91 93 - Currently this driver supports Altera SoCFPGAs, the RCC reset 94 - controller in STM32 MCUs, Allwinner SoCs, and ZTE's zx2967 family. 92 + Currently this driver supports: 93 + - Altera SoCFPGAs 94 + - ASPEED BMC SoCs 95 + - RCC reset controller in STM32 MCUs 96 + - Allwinner SoCs 97 + - ZTE's zx2967 family 98 + 99 + config RESET_STM32MP157 100 + bool "STM32MP157 Reset Driver" if COMPILE_TEST 101 + default MACH_STM32MP157 102 + help 103 + This enables the RCC reset controller driver for STM32 MPUs. 95 104 96 105 config RESET_SUNXI 97 106 bool "Allwinner SoCs Reset Driver" if COMPILE_TEST && !ARCH_SUNXI
+1
drivers/reset/Makefile
··· 15 15 obj-$(CONFIG_RESET_OXNAS) += reset-oxnas.o 16 16 obj-$(CONFIG_RESET_PISTACHIO) += reset-pistachio.o 17 17 obj-$(CONFIG_RESET_SIMPLE) += reset-simple.o 18 + obj-$(CONFIG_RESET_STM32MP157) += reset-stm32mp1.o 18 19 obj-$(CONFIG_RESET_SUNXI) += reset-sunxi.o 19 20 obj-$(CONFIG_RESET_TI_SCI) += reset-ti-sci.o 20 21 obj-$(CONFIG_RESET_TI_SYSCON) += reset-ti-syscon.o
+95 -1
drivers/reset/core.c
··· 23 23 static DEFINE_MUTEX(reset_list_mutex); 24 24 static LIST_HEAD(reset_controller_list); 25 25 26 + static DEFINE_MUTEX(reset_lookup_mutex); 27 + static LIST_HEAD(reset_lookup_list); 28 + 26 29 /** 27 30 * struct reset_control - a reset control 28 31 * @rcdev: a pointer to the reset controller device ··· 150 147 return ret; 151 148 } 152 149 EXPORT_SYMBOL_GPL(devm_reset_controller_register); 150 + 151 + /** 152 + * reset_controller_add_lookup - register a set of lookup entries 153 + * @lookup: array of reset lookup entries 154 + * @num_entries: number of entries in the lookup array 155 + */ 156 + void reset_controller_add_lookup(struct reset_control_lookup *lookup, 157 + unsigned int num_entries) 158 + { 159 + struct reset_control_lookup *entry; 160 + unsigned int i; 161 + 162 + mutex_lock(&reset_lookup_mutex); 163 + for (i = 0; i < num_entries; i++) { 164 + entry = &lookup[i]; 165 + 166 + if (!entry->dev_id || !entry->provider) { 167 + pr_warn("%s(): reset lookup entry badly specified, skipping\n", 168 + __func__); 169 + continue; 170 + } 171 + 172 + list_add_tail(&entry->list, &reset_lookup_list); 173 + } 174 + mutex_unlock(&reset_lookup_mutex); 175 + } 176 + EXPORT_SYMBOL_GPL(reset_controller_add_lookup); 153 177 154 178 static inline struct reset_control_array * 155 179 rstc_to_array(struct reset_control *rstc) { ··· 523 493 } 524 494 EXPORT_SYMBOL_GPL(__of_reset_control_get); 525 495 496 + static struct reset_controller_dev * 497 + __reset_controller_by_name(const char *name) 498 + { 499 + struct reset_controller_dev *rcdev; 500 + 501 + lockdep_assert_held(&reset_list_mutex); 502 + 503 + list_for_each_entry(rcdev, &reset_controller_list, list) { 504 + if (!rcdev->dev) 505 + continue; 506 + 507 + if (!strcmp(name, dev_name(rcdev->dev))) 508 + return rcdev; 509 + } 510 + 511 + return NULL; 512 + } 513 + 514 + static struct reset_control * 515 + __reset_control_get_from_lookup(struct device *dev, const char *con_id, 516 + bool shared, bool optional) 517 + 
{ 518 + const struct reset_control_lookup *lookup; 519 + struct reset_controller_dev *rcdev; 520 + const char *dev_id = dev_name(dev); 521 + struct reset_control *rstc = NULL; 522 + 523 + if (!dev) 524 + return ERR_PTR(-EINVAL); 525 + 526 + mutex_lock(&reset_lookup_mutex); 527 + 528 + list_for_each_entry(lookup, &reset_lookup_list, list) { 529 + if (strcmp(lookup->dev_id, dev_id)) 530 + continue; 531 + 532 + if ((!con_id && !lookup->con_id) || 533 + ((con_id && lookup->con_id) && 534 + !strcmp(con_id, lookup->con_id))) { 535 + mutex_lock(&reset_list_mutex); 536 + rcdev = __reset_controller_by_name(lookup->provider); 537 + if (!rcdev) { 538 + mutex_unlock(&reset_list_mutex); 539 + mutex_unlock(&reset_lookup_mutex); 540 + /* Reset provider may not be ready yet. */ 541 + return ERR_PTR(-EPROBE_DEFER); 542 + } 543 + 544 + rstc = __reset_control_get_internal(rcdev, 545 + lookup->index, 546 + shared); 547 + mutex_unlock(&reset_list_mutex); 548 + break; 549 + } 550 + } 551 + 552 + mutex_unlock(&reset_lookup_mutex); 553 + 554 + if (!rstc) 555 + return optional ? NULL : ERR_PTR(-ENOENT); 556 + 557 + return rstc; 558 + } 559 + 526 560 struct reset_control *__reset_control_get(struct device *dev, const char *id, 527 561 int index, bool shared, bool optional) 528 562 { ··· 594 500 return __of_reset_control_get(dev->of_node, id, index, shared, 595 501 optional); 596 502 597 - return optional ? NULL : ERR_PTR(-EINVAL); 503 + return __reset_control_get_from_lookup(dev, id, shared, optional); 598 504 } 599 505 EXPORT_SYMBOL_GPL(__reset_control_get); 600 506
+5 -17
drivers/reset/reset-meson.c
··· 124 124 return meson_reset_level(rcdev, id, false); 125 125 } 126 126 127 - static const struct reset_control_ops meson_reset_meson8_ops = { 128 - .reset = meson_reset_reset, 129 - }; 130 - 131 - static const struct reset_control_ops meson_reset_gx_ops = { 127 + static const struct reset_control_ops meson_reset_ops = { 132 128 .reset = meson_reset_reset, 133 129 .assert = meson_reset_assert, 134 130 .deassert = meson_reset_deassert, 135 131 }; 136 132 137 133 static const struct of_device_id meson_reset_dt_ids[] = { 138 - { .compatible = "amlogic,meson8b-reset", 139 - .data = &meson_reset_meson8_ops, }, 140 - { .compatible = "amlogic,meson-gxbb-reset", 141 - .data = &meson_reset_gx_ops, }, 142 - { .compatible = "amlogic,meson-axg-reset", 143 - .data = &meson_reset_gx_ops, }, 134 + { .compatible = "amlogic,meson8b-reset" }, 135 + { .compatible = "amlogic,meson-gxbb-reset" }, 136 + { .compatible = "amlogic,meson-axg-reset" }, 144 137 { /* sentinel */ }, 145 138 }; 146 139 147 140 static int meson_reset_probe(struct platform_device *pdev) 148 141 { 149 - const struct reset_control_ops *ops; 150 142 struct meson_reset *data; 151 143 struct resource *res; 152 144 153 145 data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL); 154 146 if (!data) 155 147 return -ENOMEM; 156 - 157 - ops = of_device_get_match_data(&pdev->dev); 158 - if (!ops) 159 - return -EINVAL; 160 148 161 149 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 162 150 data->reg_base = devm_ioremap_resource(&pdev->dev, res); ··· 157 169 158 170 data->rcdev.owner = THIS_MODULE; 159 171 data->rcdev.nr_resets = REG_COUNT * BITS_PER_REG; 160 - data->rcdev.ops = ops; 172 + data->rcdev.ops = &meson_reset_ops; 161 173 data->rcdev.of_node = pdev->dev.of_node; 162 174 163 175 return devm_reset_controller_register(&pdev->dev, &data->rcdev);
+2
drivers/reset/reset-simple.c
··· 125 125 .data = &reset_simple_active_low }, 126 126 { .compatible = "zte,zx296718-reset", 127 127 .data = &reset_simple_active_low }, 128 + { .compatible = "aspeed,ast2400-lpc-reset" }, 129 + { .compatible = "aspeed,ast2500-lpc-reset" }, 128 130 { /* sentinel */ }, 129 131 }; 130 132
+115
drivers/reset/reset-stm32mp1.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (C) STMicroelectronics 2018 - All Rights Reserved 4 + * Author: Gabriel Fernandez <gabriel.fernandez@st.com> for STMicroelectronics. 5 + */ 6 + 7 + #include <linux/device.h> 8 + #include <linux/err.h> 9 + #include <linux/io.h> 10 + #include <linux/of.h> 11 + #include <linux/platform_device.h> 12 + #include <linux/reset-controller.h> 13 + 14 + #define CLR_OFFSET 0x4 15 + 16 + struct stm32_reset_data { 17 + struct reset_controller_dev rcdev; 18 + void __iomem *membase; 19 + }; 20 + 21 + static inline struct stm32_reset_data * 22 + to_stm32_reset_data(struct reset_controller_dev *rcdev) 23 + { 24 + return container_of(rcdev, struct stm32_reset_data, rcdev); 25 + } 26 + 27 + static int stm32_reset_update(struct reset_controller_dev *rcdev, 28 + unsigned long id, bool assert) 29 + { 30 + struct stm32_reset_data *data = to_stm32_reset_data(rcdev); 31 + int reg_width = sizeof(u32); 32 + int bank = id / (reg_width * BITS_PER_BYTE); 33 + int offset = id % (reg_width * BITS_PER_BYTE); 34 + void __iomem *addr; 35 + 36 + addr = data->membase + (bank * reg_width); 37 + if (!assert) 38 + addr += CLR_OFFSET; 39 + 40 + writel(BIT(offset), addr); 41 + 42 + return 0; 43 + } 44 + 45 + static int stm32_reset_assert(struct reset_controller_dev *rcdev, 46 + unsigned long id) 47 + { 48 + return stm32_reset_update(rcdev, id, true); 49 + } 50 + 51 + static int stm32_reset_deassert(struct reset_controller_dev *rcdev, 52 + unsigned long id) 53 + { 54 + return stm32_reset_update(rcdev, id, false); 55 + } 56 + 57 + static int stm32_reset_status(struct reset_controller_dev *rcdev, 58 + unsigned long id) 59 + { 60 + struct stm32_reset_data *data = to_stm32_reset_data(rcdev); 61 + int reg_width = sizeof(u32); 62 + int bank = id / (reg_width * BITS_PER_BYTE); 63 + int offset = id % (reg_width * BITS_PER_BYTE); 64 + u32 reg; 65 + 66 + reg = readl(data->membase + (bank * reg_width)); 67 + 68 + return !!(reg & BIT(offset)); 69 + } 70 
+ 71 + static const struct reset_control_ops stm32_reset_ops = { 72 + .assert = stm32_reset_assert, 73 + .deassert = stm32_reset_deassert, 74 + .status = stm32_reset_status, 75 + }; 76 + 77 + static const struct of_device_id stm32_reset_dt_ids[] = { 78 + { .compatible = "st,stm32mp1-rcc"}, 79 + { /* sentinel */ }, 80 + }; 81 + 82 + static int stm32_reset_probe(struct platform_device *pdev) 83 + { 84 + struct device *dev = &pdev->dev; 85 + struct stm32_reset_data *data; 86 + void __iomem *membase; 87 + struct resource *res; 88 + 89 + data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL); 90 + if (!data) 91 + return -ENOMEM; 92 + 93 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 94 + membase = devm_ioremap_resource(dev, res); 95 + if (IS_ERR(membase)) 96 + return PTR_ERR(membase); 97 + 98 + data->membase = membase; 99 + data->rcdev.owner = THIS_MODULE; 100 + data->rcdev.nr_resets = resource_size(res) * BITS_PER_BYTE; 101 + data->rcdev.ops = &stm32_reset_ops; 102 + data->rcdev.of_node = dev->of_node; 103 + 104 + return devm_reset_controller_register(dev, &data->rcdev); 105 + } 106 + 107 + static struct platform_driver stm32_reset_driver = { 108 + .probe = stm32_reset_probe, 109 + .driver = { 110 + .name = "stm32mp1-reset", 111 + .of_match_table = stm32_reset_dt_ids, 112 + }, 113 + }; 114 + 115 + builtin_platform_driver(stm32_reset_driver);
+5
drivers/reset/reset-uniphier.c
··· 63 63 UNIPHIER_RESETX(12, 0x2000, 6), /* GIO (Ether, SATA, USB3) */ 64 64 UNIPHIER_RESETX(14, 0x2000, 17), /* USB30 */ 65 65 UNIPHIER_RESETX(15, 0x2004, 17), /* USB31 */ 66 + UNIPHIER_RESETX(40, 0x2000, 13), /* AIO */ 66 67 UNIPHIER_RESET_END, 67 68 }; 68 69 ··· 73 72 UNIPHIER_RESETX(12, 0x2000, 6), /* GIO (PCIe, USB3) */ 74 73 UNIPHIER_RESETX(14, 0x2000, 17), /* USB30 */ 75 74 UNIPHIER_RESETX(15, 0x2004, 17), /* USB31 */ 75 + UNIPHIER_RESETX(40, 0x2000, 13), /* AIO */ 76 76 UNIPHIER_RESET_END, 77 77 }; 78 78 ··· 90 88 UNIPHIER_RESETX(21, 0x2014, 1), /* USB31-PHY1 */ 91 89 UNIPHIER_RESETX(28, 0x2014, 12), /* SATA */ 92 90 UNIPHIER_RESET(29, 0x2014, 8), /* SATA-PHY (active high) */ 91 + UNIPHIER_RESETX(40, 0x2000, 13), /* AIO */ 93 92 UNIPHIER_RESET_END, 94 93 }; 95 94 ··· 124 121 static const struct uniphier_reset_data uniphier_pxs3_sys_reset_data[] = { 125 122 UNIPHIER_RESETX(2, 0x200c, 0), /* NAND */ 126 123 UNIPHIER_RESETX(4, 0x200c, 2), /* eMMC */ 124 + UNIPHIER_RESETX(6, 0x200c, 9), /* Ether0 */ 125 + UNIPHIER_RESETX(7, 0x200c, 10), /* Ether1 */ 127 126 UNIPHIER_RESETX(8, 0x200c, 12), /* STDMAC */ 128 127 UNIPHIER_RESETX(12, 0x200c, 4), /* USB30 link (GIO0) */ 129 128 UNIPHIER_RESETX(13, 0x200c, 5), /* USB31 link (GIO1) */
+7 -2
drivers/soc/amlogic/meson-gx-pwrc-vpu.c
··· 184 184 185 185 rstc = devm_reset_control_array_get(&pdev->dev, false, false); 186 186 if (IS_ERR(rstc)) { 187 - dev_err(&pdev->dev, "failed to get reset lines\n"); 187 + if (PTR_ERR(rstc) != -EPROBE_DEFER) 188 + dev_err(&pdev->dev, "failed to get reset lines\n"); 188 189 return PTR_ERR(rstc); 189 190 } 190 191 ··· 225 224 226 225 static void meson_gx_pwrc_vpu_shutdown(struct platform_device *pdev) 227 226 { 228 - meson_gx_pwrc_vpu_power_off(&vpu_hdmi_pd.genpd); 227 + bool powered_off; 228 + 229 + powered_off = meson_gx_pwrc_vpu_get_power(&vpu_hdmi_pd); 230 + if (!powered_off) 231 + meson_gx_pwrc_vpu_power_off(&vpu_hdmi_pd.genpd); 229 232 } 230 233 231 234 static const struct of_device_id meson_gx_pwrc_vpu_match_table[] = {
+10 -1
drivers/soc/amlogic/meson-gx-socinfo.c
··· 33 33 { "GXL", 0x21 }, 34 34 { "GXM", 0x22 }, 35 35 { "TXL", 0x23 }, 36 + { "TXLX", 0x24 }, 37 + { "AXG", 0x25 }, 38 + { "GXLX", 0x26 }, 39 + { "TXHD", 0x27 }, 36 40 }; 37 41 38 42 static const struct meson_gx_package_id { ··· 49 45 { "S905M", 0x1f, 0x20 }, 50 46 { "S905D", 0x21, 0 }, 51 47 { "S905X", 0x21, 0x80 }, 48 + { "S905W", 0x21, 0xa0 }, 52 49 { "S905L", 0x21, 0xc0 }, 53 50 { "S905M2", 0x21, 0xe0 }, 54 51 { "S912", 0x22, 0 }, 52 + { "962X", 0x24, 0x10 }, 53 + { "962E", 0x24, 0x20 }, 54 + { "A113X", 0x25, 0x37 }, 55 + { "A113D", 0x25, 0x22 }, 55 56 }; 56 57 57 58 static inline unsigned int socinfo_to_major(u32 socinfo) ··· 107 98 return "Unknown"; 108 99 } 109 100 110 - int __init meson_gx_socinfo_init(void) 101 + static int __init meson_gx_socinfo_init(void) 111 102 { 112 103 struct soc_device_attribute *soc_dev_attr; 113 104 struct soc_device *soc_dev;
+1 -1
drivers/soc/amlogic/meson-mx-socinfo.c
··· 104 104 { /* sentinel */ } 105 105 }; 106 106 107 - int __init meson_mx_socinfo_init(void) 107 + static int __init meson_mx_socinfo_init(void) 108 108 { 109 109 struct soc_device_attribute *soc_dev_attr; 110 110 struct soc_device *soc_dev;
+1
drivers/soc/imx/gpc.c
··· 254 254 { 255 255 .base = { 256 256 .name = "ARM", 257 + .flags = GENPD_FLAG_ALWAYS_ON, 257 258 }, 258 259 }, { 259 260 .base = {
+99 -5
drivers/soc/mediatek/mtk-scpsys.c
··· 24 24 #include <dt-bindings/power/mt2712-power.h> 25 25 #include <dt-bindings/power/mt6797-power.h> 26 26 #include <dt-bindings/power/mt7622-power.h> 27 + #include <dt-bindings/power/mt7623a-power.h> 27 28 #include <dt-bindings/power/mt8173-power.h> 28 29 29 30 #define SPM_VDE_PWR_CON 0x0210 ··· 519 518 .name = "conn", 520 519 .sta_mask = PWR_STATUS_CONN, 521 520 .ctl_offs = SPM_CONN_PWR_CON, 522 - .bus_prot_mask = 0x0104, 521 + .bus_prot_mask = MT2701_TOP_AXI_PROT_EN_CONN_M | 522 + MT2701_TOP_AXI_PROT_EN_CONN_S, 523 523 .clk_id = {CLK_NONE}, 524 524 .active_wakeup = true, 525 525 }, ··· 530 528 .ctl_offs = SPM_DIS_PWR_CON, 531 529 .sram_pdn_bits = GENMASK(11, 8), 532 530 .clk_id = {CLK_MM}, 533 - .bus_prot_mask = 0x0002, 531 + .bus_prot_mask = MT2701_TOP_AXI_PROT_EN_MM_M0, 534 532 .active_wakeup = true, 535 533 }, 536 534 [MT2701_POWER_DOMAIN_MFG] = { ··· 666 664 .name = "mfg", 667 665 .sta_mask = PWR_STATUS_MFG, 668 666 .ctl_offs = SPM_MFG_PWR_CON, 669 - .sram_pdn_bits = GENMASK(11, 8), 670 - .sram_pdn_ack_bits = GENMASK(19, 16), 667 + .sram_pdn_bits = GENMASK(8, 8), 668 + .sram_pdn_ack_bits = GENMASK(16, 16), 671 669 .clk_id = {CLK_MFG}, 672 670 .bus_prot_mask = BIT(14) | BIT(21) | BIT(23), 673 671 .active_wakeup = true, 674 672 }, 673 + [MT2712_POWER_DOMAIN_MFG_SC1] = { 674 + .name = "mfg_sc1", 675 + .sta_mask = BIT(22), 676 + .ctl_offs = 0x02c0, 677 + .sram_pdn_bits = GENMASK(8, 8), 678 + .sram_pdn_ack_bits = GENMASK(16, 16), 679 + .clk_id = {CLK_NONE}, 680 + .active_wakeup = true, 681 + }, 682 + [MT2712_POWER_DOMAIN_MFG_SC2] = { 683 + .name = "mfg_sc2", 684 + .sta_mask = BIT(23), 685 + .ctl_offs = 0x02c4, 686 + .sram_pdn_bits = GENMASK(8, 8), 687 + .sram_pdn_ack_bits = GENMASK(16, 16), 688 + .clk_id = {CLK_NONE}, 689 + .active_wakeup = true, 690 + }, 691 + [MT2712_POWER_DOMAIN_MFG_SC3] = { 692 + .name = "mfg_sc3", 693 + .sta_mask = BIT(30), 694 + .ctl_offs = 0x01f8, 695 + .sram_pdn_bits = GENMASK(8, 8), 696 + .sram_pdn_ack_bits = GENMASK(16, 16), 697 + 
.clk_id = {CLK_NONE}, 698 + .active_wakeup = true, 699 + }, 700 + }; 701 + 702 + static const struct scp_subdomain scp_subdomain_mt2712[] = { 703 + {MT2712_POWER_DOMAIN_MM, MT2712_POWER_DOMAIN_VDEC}, 704 + {MT2712_POWER_DOMAIN_MM, MT2712_POWER_DOMAIN_VENC}, 705 + {MT2712_POWER_DOMAIN_MM, MT2712_POWER_DOMAIN_ISP}, 706 + {MT2712_POWER_DOMAIN_MFG, MT2712_POWER_DOMAIN_MFG_SC1}, 707 + {MT2712_POWER_DOMAIN_MFG_SC1, MT2712_POWER_DOMAIN_MFG_SC2}, 708 + {MT2712_POWER_DOMAIN_MFG_SC2, MT2712_POWER_DOMAIN_MFG_SC3}, 675 709 }; 676 710 677 711 /* ··· 832 794 }; 833 795 834 796 /* 797 + * MT7623A power domain support 798 + */ 799 + 800 + static const struct scp_domain_data scp_domain_data_mt7623a[] = { 801 + [MT7623A_POWER_DOMAIN_CONN] = { 802 + .name = "conn", 803 + .sta_mask = PWR_STATUS_CONN, 804 + .ctl_offs = SPM_CONN_PWR_CON, 805 + .bus_prot_mask = MT2701_TOP_AXI_PROT_EN_CONN_M | 806 + MT2701_TOP_AXI_PROT_EN_CONN_S, 807 + .clk_id = {CLK_NONE}, 808 + .active_wakeup = true, 809 + }, 810 + [MT7623A_POWER_DOMAIN_ETH] = { 811 + .name = "eth", 812 + .sta_mask = PWR_STATUS_ETH, 813 + .ctl_offs = SPM_ETH_PWR_CON, 814 + .sram_pdn_bits = GENMASK(11, 8), 815 + .sram_pdn_ack_bits = GENMASK(15, 12), 816 + .clk_id = {CLK_ETHIF}, 817 + .active_wakeup = true, 818 + }, 819 + [MT7623A_POWER_DOMAIN_HIF] = { 820 + .name = "hif", 821 + .sta_mask = PWR_STATUS_HIF, 822 + .ctl_offs = SPM_HIF_PWR_CON, 823 + .sram_pdn_bits = GENMASK(11, 8), 824 + .sram_pdn_ack_bits = GENMASK(15, 12), 825 + .clk_id = {CLK_ETHIF}, 826 + .active_wakeup = true, 827 + }, 828 + [MT7623A_POWER_DOMAIN_IFR_MSC] = { 829 + .name = "ifr_msc", 830 + .sta_mask = PWR_STATUS_IFR_MSC, 831 + .ctl_offs = SPM_IFR_MSC_PWR_CON, 832 + .clk_id = {CLK_NONE}, 833 + .active_wakeup = true, 834 + }, 835 + }; 836 + 837 + /* 835 838 * MT8173 power domain support 836 839 */ 837 840 ··· 984 905 static const struct scp_soc_data mt2712_data = { 985 906 .domains = scp_domain_data_mt2712, 986 907 .num_domains = ARRAY_SIZE(scp_domain_data_mt2712), 908 + 
.subdomains = scp_subdomain_mt2712, 909 + .num_subdomains = ARRAY_SIZE(scp_subdomain_mt2712), 987 910 .regs = { 988 911 .pwr_sta_offs = SPM_PWR_STATUS, 989 912 .pwr_sta2nd_offs = SPM_PWR_STATUS_2ND ··· 1008 927 static const struct scp_soc_data mt7622_data = { 1009 928 .domains = scp_domain_data_mt7622, 1010 929 .num_domains = ARRAY_SIZE(scp_domain_data_mt7622), 930 + .regs = { 931 + .pwr_sta_offs = SPM_PWR_STATUS, 932 + .pwr_sta2nd_offs = SPM_PWR_STATUS_2ND 933 + }, 934 + .bus_prot_reg_update = true, 935 + }; 936 + 937 + static const struct scp_soc_data mt7623a_data = { 938 + .domains = scp_domain_data_mt7623a, 939 + .num_domains = ARRAY_SIZE(scp_domain_data_mt7623a), 1011 940 .regs = { 1012 941 .pwr_sta_offs = SPM_PWR_STATUS, 1013 942 .pwr_sta2nd_offs = SPM_PWR_STATUS_2ND ··· 1055 964 .compatible = "mediatek,mt7622-scpsys", 1056 965 .data = &mt7622_data, 1057 966 }, { 967 + .compatible = "mediatek,mt7623a-scpsys", 968 + .data = &mt7623a_data, 969 + }, { 1058 970 .compatible = "mediatek,mt8173-scpsys", 1059 971 .data = &mt8173_data, 1060 972 }, { ··· 1086 992 1087 993 pd_data = &scp->pd_data; 1088 994 1089 - for (i = 0, sd = soc->subdomains ; i < soc->num_subdomains ; i++) { 995 + for (i = 0, sd = soc->subdomains; i < soc->num_subdomains; i++, sd++) { 1090 996 ret = pm_genpd_add_subdomain(pd_data->domains[sd->origin], 1091 997 pd_data->domains[sd->subdomain]); 1092 998 if (ret && IS_ENABLED(CONFIG_PM))
+1
drivers/soc/qcom/Kconfig
··· 47 47 config QCOM_RMTFS_MEM 48 48 tristate "Qualcomm Remote Filesystem memory driver" 49 49 depends on ARCH_QCOM 50 + select QCOM_SCM 50 51 help 51 52 The Qualcomm remote filesystem memory driver is used for allocating 52 53 and exposing regions of shared memory with remote processors for the
+34
drivers/soc/qcom/rmtfs_mem.c
··· 37 37 phys_addr_t size; 38 38 39 39 unsigned int client_id; 40 + 41 + unsigned int perms; 40 42 }; 41 43 42 44 static ssize_t qcom_rmtfs_mem_show(struct device *dev, ··· 153 151 static int qcom_rmtfs_mem_probe(struct platform_device *pdev) 154 152 { 155 153 struct device_node *node = pdev->dev.of_node; 154 + struct qcom_scm_vmperm perms[2]; 156 155 struct reserved_mem *rmem; 157 156 struct qcom_rmtfs_mem *rmtfs_mem; 158 157 u32 client_id; 158 + u32 vmid; 159 159 int ret; 160 160 161 161 rmem = of_reserved_mem_lookup(node); ··· 208 204 209 205 rmtfs_mem->dev.release = qcom_rmtfs_mem_release_device; 210 206 207 + ret = of_property_read_u32(node, "qcom,vmid", &vmid); 208 + if (ret < 0 && ret != -EINVAL) { 209 + dev_err(&pdev->dev, "failed to parse qcom,vmid\n"); 210 + goto remove_cdev; 211 + } else if (!ret) { 212 + perms[0].vmid = QCOM_SCM_VMID_HLOS; 213 + perms[0].perm = QCOM_SCM_PERM_RW; 214 + perms[1].vmid = vmid; 215 + perms[1].perm = QCOM_SCM_PERM_RW; 216 + 217 + rmtfs_mem->perms = BIT(QCOM_SCM_VMID_HLOS); 218 + ret = qcom_scm_assign_mem(rmtfs_mem->addr, rmtfs_mem->size, 219 + &rmtfs_mem->perms, perms, 2); 220 + if (ret < 0) { 221 + dev_err(&pdev->dev, "assign memory failed\n"); 222 + goto remove_cdev; 223 + } 224 + } 225 + 211 226 dev_set_drvdata(&pdev->dev, rmtfs_mem); 212 227 213 228 return 0; 214 229 230 + remove_cdev: 231 + cdev_device_del(&rmtfs_mem->cdev, &rmtfs_mem->dev); 215 232 put_device: 216 233 put_device(&rmtfs_mem->dev); 217 234 ··· 242 217 static int qcom_rmtfs_mem_remove(struct platform_device *pdev) 243 218 { 244 219 struct qcom_rmtfs_mem *rmtfs_mem = dev_get_drvdata(&pdev->dev); 220 + struct qcom_scm_vmperm perm; 221 + 222 + if (rmtfs_mem->perms) { 223 + perm.vmid = QCOM_SCM_VMID_HLOS; 224 + perm.perm = QCOM_SCM_PERM_RW; 225 + 226 + qcom_scm_assign_mem(rmtfs_mem->addr, rmtfs_mem->size, 227 + &rmtfs_mem->perms, &perm, 1); 228 + } 245 229 246 230 cdev_device_del(&rmtfs_mem->cdev, &rmtfs_mem->dev); 247 231 put_device(&rmtfs_mem->dev);
+1 -1
drivers/soc/qcom/wcnss_ctrl.c
··· 249 249 /* Increment for next fragment */ 250 250 req->seq++; 251 251 252 - data += req->hdr.len; 252 + data += NV_FRAGMENT_SIZE; 253 253 left -= NV_FRAGMENT_SIZE; 254 254 } while (left > 0); 255 255
+28
drivers/soc/rockchip/grf.c
··· 43 43 .num_values = ARRAY_SIZE(rk3036_defaults), 44 44 }; 45 45 46 + #define RK3128_GRF_SOC_CON0 0x140 47 + 48 + static const struct rockchip_grf_value rk3128_defaults[] __initconst = { 49 + { "jtag switching", RK3128_GRF_SOC_CON0, HIWORD_UPDATE(0, 1, 8) }, 50 + }; 51 + 52 + static const struct rockchip_grf_info rk3128_grf __initconst = { 53 + .values = rk3128_defaults, 54 + .num_values = ARRAY_SIZE(rk3128_defaults), 55 + }; 56 + 57 + #define RK3228_GRF_SOC_CON6 0x418 58 + 59 + static const struct rockchip_grf_value rk3228_defaults[] __initconst = { 60 + { "jtag switching", RK3228_GRF_SOC_CON6, HIWORD_UPDATE(0, 1, 8) }, 61 + }; 62 + 63 + static const struct rockchip_grf_info rk3228_grf __initconst = { 64 + .values = rk3228_defaults, 65 + .num_values = ARRAY_SIZE(rk3228_defaults), 66 + }; 67 + 46 68 #define RK3288_GRF_SOC_CON0 0x244 47 69 48 70 static const struct rockchip_grf_value rk3288_defaults[] __initconst = { ··· 113 91 { 114 92 .compatible = "rockchip,rk3036-grf", 115 93 .data = (void *)&rk3036_grf, 94 + }, { 95 + .compatible = "rockchip,rk3128-grf", 96 + .data = (void *)&rk3128_grf, 97 + }, { 98 + .compatible = "rockchip,rk3228-grf", 99 + .data = (void *)&rk3228_grf, 116 100 }, { 117 101 .compatible = "rockchip,rk3288-grf", 118 102 .data = (void *)&rk3288_grf,
+47 -48
drivers/soc/rockchip/pm_domains.c
··· 67 67 struct regmap **qos_regmap; 68 68 u32 *qos_save_regs[MAX_QOS_REGS_NUM]; 69 69 int num_clks; 70 - struct clk *clks[]; 70 + struct clk_bulk_data *clks; 71 71 }; 72 72 73 73 struct rockchip_pmu { ··· 274 274 275 275 static int rockchip_pd_power(struct rockchip_pm_domain *pd, bool power_on) 276 276 { 277 - int i; 277 + struct rockchip_pmu *pmu = pd->pmu; 278 + int ret; 278 279 279 - mutex_lock(&pd->pmu->mutex); 280 + mutex_lock(&pmu->mutex); 280 281 281 282 if (rockchip_pmu_domain_is_on(pd) != power_on) { 282 - for (i = 0; i < pd->num_clks; i++) 283 - clk_enable(pd->clks[i]); 283 + ret = clk_bulk_enable(pd->num_clks, pd->clks); 284 + if (ret < 0) { 285 + dev_err(pmu->dev, "failed to enable clocks\n"); 286 + mutex_unlock(&pmu->mutex); 287 + return ret; 288 + } 284 289 285 290 if (!power_on) { 286 291 rockchip_pmu_save_qos(pd); ··· 303 298 rockchip_pmu_restore_qos(pd); 304 299 } 305 300 306 - for (i = pd->num_clks - 1; i >= 0; i--) 307 - clk_disable(pd->clks[i]); 301 + clk_bulk_disable(pd->num_clks, pd->clks); 308 302 } 309 303 310 - mutex_unlock(&pd->pmu->mutex); 304 + mutex_unlock(&pmu->mutex); 311 305 return 0; 312 306 } 313 307 ··· 368 364 const struct rockchip_domain_info *pd_info; 369 365 struct rockchip_pm_domain *pd; 370 366 struct device_node *qos_node; 371 - struct clk *clk; 372 - int clk_cnt; 373 367 int i, j; 374 368 u32 id; 375 369 int error; ··· 393 391 return -EINVAL; 394 392 } 395 393 396 - clk_cnt = of_count_phandle_with_args(node, "clocks", "#clock-cells"); 397 - pd = devm_kzalloc(pmu->dev, 398 - sizeof(*pd) + clk_cnt * sizeof(pd->clks[0]), 399 - GFP_KERNEL); 394 + pd = devm_kzalloc(pmu->dev, sizeof(*pd), GFP_KERNEL); 400 395 if (!pd) 401 396 return -ENOMEM; 402 397 403 398 pd->info = pd_info; 404 399 pd->pmu = pmu; 405 400 406 - for (i = 0; i < clk_cnt; i++) { 407 - clk = of_clk_get(node, i); 408 - if (IS_ERR(clk)) { 409 - error = PTR_ERR(clk); 401 + pd->num_clks = of_count_phandle_with_args(node, "clocks", 402 + "#clock-cells"); 403 + if 
(pd->num_clks > 0) { 404 + pd->clks = devm_kcalloc(pmu->dev, pd->num_clks, 405 + sizeof(*pd->clks), GFP_KERNEL); 406 + if (!pd->clks) 407 + return -ENOMEM; 408 + } else { 409 + dev_dbg(pmu->dev, "%s: doesn't have clocks: %d\n", 410 + node->name, pd->num_clks); 411 + pd->num_clks = 0; 412 + } 413 + 414 + for (i = 0; i < pd->num_clks; i++) { 415 + pd->clks[i].clk = of_clk_get(node, i); 416 + if (IS_ERR(pd->clks[i].clk)) { 417 + error = PTR_ERR(pd->clks[i].clk); 410 418 dev_err(pmu->dev, 411 419 "%s: failed to get clk at index %d: %d\n", 412 420 node->name, i, error); 413 - goto err_out; 421 + return error; 414 422 } 415 - 416 - error = clk_prepare(clk); 417 - if (error) { 418 - dev_err(pmu->dev, 419 - "%s: failed to prepare clk %pC (index %d): %d\n", 420 - node->name, clk, i, error); 421 - clk_put(clk); 422 - goto err_out; 423 - } 424 - 425 - pd->clks[pd->num_clks++] = clk; 426 - 427 - dev_dbg(pmu->dev, "added clock '%pC' to domain '%s'\n", 428 - clk, node->name); 429 423 } 424 + 425 + error = clk_bulk_prepare(pd->num_clks, pd->clks); 426 + if (error) 427 + goto err_put_clocks; 430 428 431 429 pd->num_qos = of_count_phandle_with_args(node, "pm_qos", 432 430 NULL); ··· 437 435 GFP_KERNEL); 438 436 if (!pd->qos_regmap) { 439 437 error = -ENOMEM; 440 - goto err_out; 438 + goto err_unprepare_clocks; 441 439 } 442 440 443 441 for (j = 0; j < MAX_QOS_REGS_NUM; j++) { ··· 447 445 GFP_KERNEL); 448 446 if (!pd->qos_save_regs[j]) { 449 447 error = -ENOMEM; 450 - goto err_out; 448 + goto err_unprepare_clocks; 451 449 } 452 450 } 453 451 ··· 455 453 qos_node = of_parse_phandle(node, "pm_qos", j); 456 454 if (!qos_node) { 457 455 error = -ENODEV; 458 - goto err_out; 456 + goto err_unprepare_clocks; 459 457 } 460 458 pd->qos_regmap[j] = syscon_node_to_regmap(qos_node); 461 459 if (IS_ERR(pd->qos_regmap[j])) { 462 460 error = -ENODEV; 463 461 of_node_put(qos_node); 464 - goto err_out; 462 + goto err_unprepare_clocks; 465 463 } 466 464 of_node_put(qos_node); 467 465 } ··· 472 470 
dev_err(pmu->dev, 473 471 "failed to power on domain '%s': %d\n", 474 472 node->name, error); 475 - goto err_out; 473 + goto err_unprepare_clocks; 476 474 } 477 475 478 476 pd->genpd.name = node->name; ··· 488 486 pmu->genpd_data.domains[id] = &pd->genpd; 489 487 return 0; 490 488 491 - err_out: 492 - while (--i >= 0) { 493 - clk_unprepare(pd->clks[i]); 494 - clk_put(pd->clks[i]); 495 - } 489 + err_unprepare_clocks: 490 + clk_bulk_unprepare(pd->num_clks, pd->clks); 491 + err_put_clocks: 492 + clk_bulk_put(pd->num_clks, pd->clks); 496 493 return error; 497 494 } 498 495 499 496 static void rockchip_pm_remove_one_domain(struct rockchip_pm_domain *pd) 500 497 { 501 - int i, ret; 498 + int ret; 502 499 503 500 /* 504 501 * We're in the error cleanup already, so we only complain, ··· 508 507 dev_err(pd->pmu->dev, "failed to remove domain '%s' : %d - state may be inconsistent\n", 509 508 pd->genpd.name, ret); 510 509 511 - for (i = 0; i < pd->num_clks; i++) { 512 - clk_unprepare(pd->clks[i]); 513 - clk_put(pd->clks[i]); 514 - } 510 + clk_bulk_unprepare(pd->num_clks, pd->clks); 511 + clk_bulk_put(pd->num_clks, pd->clks); 515 512 516 513 /* protect the zeroing of pm->num_clks */ 517 514 mutex_lock(&pd->pmu->mutex);
+7
drivers/soc/samsung/exynos-pmu.c
··· 85 85 .compatible = "samsung,exynos5250-pmu", 86 86 .data = exynos_pmu_data_arm_ptr(exynos5250_pmu_data), 87 87 }, { 88 + .compatible = "samsung,exynos5410-pmu", 89 + }, { 88 90 .compatible = "samsung,exynos5420-pmu", 89 91 .data = exynos_pmu_data_arm_ptr(exynos5420_pmu_data), 90 92 }, { 91 93 .compatible = "samsung,exynos5433-pmu", 94 + }, { 95 + .compatible = "samsung,exynos7-pmu", 92 96 }, 93 97 { /*sentinel*/ }, 94 98 }; ··· 129 125 pmu_context->pmu_data->pmu_init(); 130 126 131 127 platform_set_drvdata(pdev, pmu_context); 128 + 129 + if (devm_of_platform_populate(dev)) 130 + dev_err(dev, "Error populating children, reboot and poweroff might not work properly\n"); 132 131 133 132 dev_dbg(dev, "Exynos PMU Driver probe done\n"); 134 133 return 0;
+10
drivers/soc/tegra/Kconfig
··· 104 104 multi-format support, ISP for image capture processing and BPMP for 105 105 power management. 106 106 107 + config ARCH_TEGRA_194_SOC 108 + bool "NVIDIA Tegra194 SoC" 109 + select MAILBOX 110 + select TEGRA_BPMP 111 + select TEGRA_HSP_MBOX 112 + select TEGRA_IVC 113 + select SOC_TEGRA_PMC 114 + help 115 + Enable support for the NVIDIA Tegra194 SoC. 116 + 107 117 endif 108 118 endif 109 119
+28 -70
drivers/soc/tegra/pmc.c
··· 127 127 unsigned int id; 128 128 struct clk **clks; 129 129 unsigned int num_clks; 130 - struct reset_control **resets; 131 - unsigned int num_resets; 130 + struct reset_control *reset; 132 131 }; 133 132 134 133 struct tegra_io_pad_soc { ··· 152 153 153 154 bool has_tsense_reset; 154 155 bool has_gpu_clamps; 156 + bool needs_mbist_war; 155 157 156 158 const struct tegra_io_pad_soc *io_pads; 157 159 unsigned int num_io_pads; ··· 368 368 return err; 369 369 } 370 370 371 - static int tegra_powergate_reset_assert(struct tegra_powergate *pg) 371 + int __weak tegra210_clk_handle_mbist_war(unsigned int id) 372 372 { 373 - unsigned int i; 374 - int err; 375 - 376 - for (i = 0; i < pg->num_resets; i++) { 377 - err = reset_control_assert(pg->resets[i]); 378 - if (err) 379 - return err; 380 - } 381 - 382 - return 0; 383 - } 384 - 385 - static int tegra_powergate_reset_deassert(struct tegra_powergate *pg) 386 - { 387 - unsigned int i; 388 - int err; 389 - 390 - for (i = 0; i < pg->num_resets; i++) { 391 - err = reset_control_deassert(pg->resets[i]); 392 - if (err) 393 - return err; 394 - } 395 - 396 373 return 0; 397 374 } 398 375 ··· 378 401 { 379 402 int err; 380 403 381 - err = tegra_powergate_reset_assert(pg); 404 + err = reset_control_assert(pg->reset); 382 405 if (err) 383 406 return err; 384 407 ··· 402 425 403 426 usleep_range(10, 20); 404 427 405 - err = tegra_powergate_reset_deassert(pg); 428 + err = reset_control_deassert(pg->reset); 406 429 if (err) 407 430 goto powergate_off; 408 431 409 432 usleep_range(10, 20); 433 + 434 + if (pg->pmc->soc->needs_mbist_war) 435 + err = tegra210_clk_handle_mbist_war(pg->id); 436 + if (err) 437 + goto disable_clks; 410 438 411 439 if (disable_clocks) 412 440 tegra_powergate_disable_clocks(pg); ··· 438 456 439 457 usleep_range(10, 20); 440 458 441 - err = tegra_powergate_reset_assert(pg); 459 + err = reset_control_assert(pg->reset); 442 460 if (err) 443 461 goto disable_clks; 444 462 ··· 457 475 assert_resets: 458 476 
tegra_powergate_enable_clocks(pg); 459 477 usleep_range(10, 20); 460 - tegra_powergate_reset_deassert(pg); 478 + reset_control_deassert(pg->reset); 461 479 usleep_range(10, 20); 462 480 463 481 disable_clks: ··· 568 586 pg.id = id; 569 587 pg.clks = &clk; 570 588 pg.num_clks = 1; 571 - pg.resets = &rst; 572 - pg.num_resets = 1; 589 + pg.reset = rst; 590 + pg.pmc = pmc; 573 591 574 592 err = tegra_powergate_power_up(&pg, false); 575 593 if (err) ··· 757 775 static int tegra_powergate_of_get_resets(struct tegra_powergate *pg, 758 776 struct device_node *np, bool off) 759 777 { 760 - struct reset_control *rst; 761 - unsigned int i, count; 762 778 int err; 763 779 764 - count = of_count_phandle_with_args(np, "resets", "#reset-cells"); 765 - if (count == 0) 766 - return -ENODEV; 767 - 768 - pg->resets = kcalloc(count, sizeof(rst), GFP_KERNEL); 769 - if (!pg->resets) 770 - return -ENOMEM; 771 - 772 - for (i = 0; i < count; i++) { 773 - pg->resets[i] = of_reset_control_get_by_index(np, i); 774 - if (IS_ERR(pg->resets[i])) { 775 - err = PTR_ERR(pg->resets[i]); 776 - goto error; 777 - } 778 - 779 - if (off) 780 - err = reset_control_assert(pg->resets[i]); 781 - else 782 - err = reset_control_deassert(pg->resets[i]); 783 - 784 - if (err) { 785 - reset_control_put(pg->resets[i]); 786 - goto error; 787 - } 780 + pg->reset = of_reset_control_array_get_exclusive(np); 781 + if (IS_ERR(pg->reset)) { 782 + err = PTR_ERR(pg->reset); 783 + pr_err("failed to get device resets: %d\n", err); 784 + return err; 788 785 } 789 786 790 - pg->num_resets = count; 787 + if (off) 788 + err = reset_control_assert(pg->reset); 789 + else 790 + err = reset_control_deassert(pg->reset); 791 791 792 - return 0; 793 - 794 - error: 795 - while (i--) 796 - reset_control_put(pg->resets[i]); 797 - 798 - kfree(pg->resets); 792 + if (err) 793 + reset_control_put(pg->reset); 799 794 800 795 return err; 801 796 } ··· 864 905 pm_genpd_remove(&pg->genpd); 865 906 866 907 remove_resets: 867 - while 
(pg->num_resets--) 868 - reset_control_put(pg->resets[pg->num_resets]); 869 - 870 - kfree(pg->resets); 908 + reset_control_put(pg->reset); 871 909 872 910 remove_clks: 873 911 while (pg->num_clks--) ··· 1771 1815 .cpu_powergates = tegra210_cpu_powergates, 1772 1816 .has_tsense_reset = true, 1773 1817 .has_gpu_clamps = true, 1818 + .needs_mbist_war = true, 1774 1819 .num_io_pads = ARRAY_SIZE(tegra210_io_pads), 1775 1820 .io_pads = tegra210_io_pads, 1776 1821 .regs = &tegra20_pmc_regs, ··· 1877 1920 }; 1878 1921 1879 1922 static const struct of_device_id tegra_pmc_match[] = { 1923 + { .compatible = "nvidia,tegra194-pmc", .data = &tegra186_pmc_soc }, 1880 1924 { .compatible = "nvidia,tegra186-pmc", .data = &tegra186_pmc_soc }, 1881 1925 { .compatible = "nvidia,tegra210-pmc", .data = &tegra210_pmc_soc }, 1882 1926 { .compatible = "nvidia,tegra132-pmc", .data = &tegra124_pmc_soc },
+23
drivers/tee/optee/core.c
··· 356 356 return false; 357 357 } 358 358 359 + static void optee_msg_get_os_revision(optee_invoke_fn *invoke_fn) 360 + { 361 + union { 362 + struct arm_smccc_res smccc; 363 + struct optee_smc_call_get_os_revision_result result; 364 + } res = { 365 + .result = { 366 + .build_id = 0 367 + } 368 + }; 369 + 370 + invoke_fn(OPTEE_SMC_CALL_GET_OS_REVISION, 0, 0, 0, 0, 0, 0, 0, 371 + &res.smccc); 372 + 373 + if (res.result.build_id) 374 + pr_info("revision %lu.%lu (%08lx)", res.result.major, 375 + res.result.minor, res.result.build_id); 376 + else 377 + pr_info("revision %lu.%lu", res.result.major, res.result.minor); 378 + } 379 + 359 380 static bool optee_msg_api_revision_is_compatible(optee_invoke_fn *invoke_fn) 360 381 { 361 382 union { ··· 567 546 pr_warn("api uid mismatch\n"); 568 547 return ERR_PTR(-EINVAL); 569 548 } 549 + 550 + optee_msg_get_os_revision(invoke_fn); 570 551 571 552 if (!optee_msg_api_revision_is_compatible(invoke_fn)) { 572 553 pr_warn("api revision mismatch\n");
+9 -1
drivers/tee/optee/optee_smc.h
··· 112 112 * Trusted OS, not of the API. 113 113 * 114 114 * Returns revision in a0-1 in the same way as OPTEE_SMC_CALLS_REVISION 115 - * described above. 115 + * described above. May optionally return a 32-bit build identifier in a2, 116 + * with zero meaning unspecified. 116 117 */ 117 118 #define OPTEE_SMC_FUNCID_GET_OS_REVISION OPTEE_MSG_FUNCID_GET_OS_REVISION 118 119 #define OPTEE_SMC_CALL_GET_OS_REVISION \ 119 120 OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_GET_OS_REVISION) 121 + 122 + struct optee_smc_call_get_os_revision_result { 123 + unsigned long major; 124 + unsigned long minor; 125 + unsigned long build_id; 126 + unsigned long reserved1; 127 + }; 120 128 121 129 /* 122 130 * Call with struct optee_msg_arg as argument
+9 -5
drivers/tee/tee_core.c
··· 693 693 { 694 694 struct tee_device *teedev; 695 695 void *ret; 696 - int rc; 696 + int rc, max_id; 697 697 int offs = 0; 698 698 699 699 if (!teedesc || !teedesc->name || !teedesc->ops || ··· 707 707 goto err; 708 708 } 709 709 710 - if (teedesc->flags & TEE_DESC_PRIVILEGED) 710 + max_id = TEE_NUM_DEVICES / 2; 711 + 712 + if (teedesc->flags & TEE_DESC_PRIVILEGED) { 711 713 offs = TEE_NUM_DEVICES / 2; 714 + max_id = TEE_NUM_DEVICES; 715 + } 712 716 713 717 spin_lock(&driver_lock); 714 - teedev->id = find_next_zero_bit(dev_mask, TEE_NUM_DEVICES, offs); 715 - if (teedev->id < TEE_NUM_DEVICES) 718 + teedev->id = find_next_zero_bit(dev_mask, max_id, offs); 719 + if (teedev->id < max_id) 716 720 set_bit(teedev->id, dev_mask); 717 721 spin_unlock(&driver_lock); 718 722 719 - if (teedev->id >= TEE_NUM_DEVICES) { 723 + if (teedev->id >= max_id) { 720 724 ret = ERR_PTR(-ENOMEM); 721 725 goto err; 722 726 }
+3
include/dt-bindings/power/mt2712-power.h
··· 22 22 #define MT2712_POWER_DOMAIN_USB 5 23 23 #define MT2712_POWER_DOMAIN_USB2 6 24 24 #define MT2712_POWER_DOMAIN_MFG 7 25 + #define MT2712_POWER_DOMAIN_MFG_SC1 8 26 + #define MT2712_POWER_DOMAIN_MFG_SC2 9 27 + #define MT2712_POWER_DOMAIN_MFG_SC3 10 25 28 26 29 #endif /* _DT_BINDINGS_POWER_MT2712_POWER_H */
+10
include/dt-bindings/power/mt7623a-power.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef _DT_BINDINGS_POWER_MT7623A_POWER_H 3 + #define _DT_BINDINGS_POWER_MT7623A_POWER_H 4 + 5 + #define MT7623A_POWER_DOMAIN_CONN 0 6 + #define MT7623A_POWER_DOMAIN_ETH 1 7 + #define MT7623A_POWER_DOMAIN_HIF 2 8 + #define MT7623A_POWER_DOMAIN_IFR_MSC 3 9 + 10 + #endif /* _DT_BINDINGS_POWER_MT7623A_POWER_H */
+108
include/dt-bindings/reset/stm32mp1-resets.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 or BSD-3-Clause */ 2 + /* 3 + * Copyright (C) STMicroelectronics 2018 - All Rights Reserved 4 + * Author: Gabriel Fernandez <gabriel.fernandez@st.com> for STMicroelectronics. 5 + */ 6 + 7 + #ifndef _DT_BINDINGS_STM32MP1_RESET_H_ 8 + #define _DT_BINDINGS_STM32MP1_RESET_H_ 9 + 10 + #define LTDC_R 3072 11 + #define DSI_R 3076 12 + #define DDRPERFM_R 3080 13 + #define USBPHY_R 3088 14 + #define SPI6_R 3136 15 + #define I2C4_R 3138 16 + #define I2C6_R 3139 17 + #define USART1_R 3140 18 + #define STGEN_R 3156 19 + #define GPIOZ_R 3200 20 + #define CRYP1_R 3204 21 + #define HASH1_R 3205 22 + #define RNG1_R 3206 23 + #define AXIM_R 3216 24 + #define GPU_R 3269 25 + #define ETHMAC_R 3274 26 + #define FMC_R 3276 27 + #define QSPI_R 3278 28 + #define SDMMC1_R 3280 29 + #define SDMMC2_R 3281 30 + #define CRC1_R 3284 31 + #define USBH_R 3288 32 + #define MDMA_R 3328 33 + #define MCU_R 8225 34 + #define TIM2_R 19456 35 + #define TIM3_R 19457 36 + #define TIM4_R 19458 37 + #define TIM5_R 19459 38 + #define TIM6_R 19460 39 + #define TIM7_R 19461 40 + #define TIM12_R 16462 41 + #define TIM13_R 16463 42 + #define TIM14_R 16464 43 + #define LPTIM1_R 19465 44 + #define SPI2_R 19467 45 + #define SPI3_R 19468 46 + #define USART2_R 19470 47 + #define USART3_R 19471 48 + #define UART4_R 19472 49 + #define UART5_R 19473 50 + #define UART7_R 19474 51 + #define UART8_R 19475 52 + #define I2C1_R 19477 53 + #define I2C2_R 19478 54 + #define I2C3_R 19479 55 + #define I2C5_R 19480 56 + #define SPDIF_R 19482 57 + #define CEC_R 19483 58 + #define DAC12_R 19485 59 + #define MDIO_R 19847 60 + #define TIM1_R 19520 61 + #define TIM8_R 19521 62 + #define TIM15_R 19522 63 + #define TIM16_R 19523 64 + #define TIM17_R 19524 65 + #define SPI1_R 19528 66 + #define SPI4_R 19529 67 + #define SPI5_R 19530 68 + #define USART6_R 19533 69 + #define SAI1_R 19536 70 + #define SAI2_R 19537 71 + #define SAI3_R 19538 72 + #define DFSDM_R 19540 73 + #define FDCAN_R 19544 74 + 
#define LPTIM2_R 19584 75 + #define LPTIM3_R 19585 76 + #define LPTIM4_R 19586 77 + #define LPTIM5_R 19587 78 + #define SAI4_R 19592 79 + #define SYSCFG_R 19595 80 + #define VREF_R 19597 81 + #define TMPSENS_R 19600 82 + #define PMBCTRL_R 19601 83 + #define DMA1_R 19648 84 + #define DMA2_R 19649 85 + #define DMAMUX_R 19650 86 + #define ADC12_R 19653 87 + #define USBO_R 19656 88 + #define SDMMC3_R 19664 89 + #define CAMITF_R 19712 90 + #define CRYP2_R 19716 91 + #define HASH2_R 19717 92 + #define RNG2_R 19718 93 + #define CRC2_R 19719 94 + #define HSEM_R 19723 95 + #define MBOX_R 19724 96 + #define GPIOA_R 19776 97 + #define GPIOB_R 19777 98 + #define GPIOC_R 19778 99 + #define GPIOD_R 19779 100 + #define GPIOE_R 19780 101 + #define GPIOF_R 19781 102 + #define GPIOG_R 19782 103 + #define GPIOH_R 19783 104 + #define GPIOI_R 19784 105 + #define GPIOJ_R 19785 106 + #define GPIOK_R 19786 107 + 108 + #endif /* _DT_BINDINGS_STM32MP1_RESET_H_ */
+1
include/linux/hwmon.h
··· 29 29 hwmon_humidity, 30 30 hwmon_fan, 31 31 hwmon_pwm, 32 + hwmon_max, 32 33 }; 33 34 34 35 enum hwmon_chip_attributes {
+30
include/linux/reset-controller.h
··· 27 27 struct of_phandle_args; 28 28 29 29 /** 30 + * struct reset_control_lookup - represents a single lookup entry 31 + * 32 + * @list: internal list of all reset lookup entries 33 + * @provider: name of the reset controller device controlling this reset line 34 + * @index: ID of the reset controller in the reset controller device 35 + * @dev_id: name of the device associated with this reset line 36 + * @con_id: name of the reset line (can be NULL) 37 + */ 38 + struct reset_control_lookup { 39 + struct list_head list; 40 + const char *provider; 41 + unsigned int index; 42 + const char *dev_id; 43 + const char *con_id; 44 + }; 45 + 46 + #define RESET_LOOKUP(_provider, _index, _dev_id, _con_id) \ 47 + { \ 48 + .provider = _provider, \ 49 + .index = _index, \ 50 + .dev_id = _dev_id, \ 51 + .con_id = _con_id, \ 52 + } 53 + 54 + /** 30 55 * struct reset_controller_dev - reset controller entity that might 31 56 * provide multiple reset controls 32 57 * @ops: a pointer to device specific struct reset_control_ops 33 58 * @owner: kernel module of the reset controller driver 34 59 * @list: internal list of reset controller devices 35 60 * @reset_control_head: head of internal list of requested reset controls 61 + * @dev: corresponding driver model device struct 36 62 * @of_node: corresponding device tree node as phandle target 37 63 * @of_reset_n_cells: number of cells in reset line specifiers 38 64 * @of_xlate: translation function to translate from specifier as found in the ··· 70 44 struct module *owner; 71 45 struct list_head list; 72 46 struct list_head reset_control_head; 47 + struct device *dev; 73 48 struct device_node *of_node; 74 49 int of_reset_n_cells; 75 50 int (*of_xlate)(struct reset_controller_dev *rcdev, ··· 84 57 struct device; 85 58 int devm_reset_controller_register(struct device *dev, 86 59 struct reset_controller_dev *rcdev); 60 + 61 + void reset_controller_add_lookup(struct reset_control_lookup *lookup, 62 + unsigned int num_entries); 87 63 88 64 #endif
+277
include/linux/scmi_protocol.h
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * SCMI Message Protocol driver header 4 + * 5 + * Copyright (C) 2018 ARM Ltd. 6 + */ 7 + #include <linux/device.h> 8 + #include <linux/types.h> 9 + 10 + #define SCMI_MAX_STR_SIZE 16 11 + #define SCMI_MAX_NUM_RATES 16 12 + 13 + /** 14 + * struct scmi_revision_info - version information structure 15 + * 16 + * @major_ver: Major ABI version. Change here implies risk of backward 17 + * compatibility break. 18 + * @minor_ver: Minor ABI version. Change here implies new feature addition, 19 + * or compatible change in ABI. 20 + * @num_protocols: Number of protocols that are implemented, excluding the 21 + * base protocol. 22 + * @num_agents: Number of agents in the system. 23 + * @impl_ver: A vendor-specific implementation version. 24 + * @vendor_id: A vendor identifier(Null terminated ASCII string) 25 + * @sub_vendor_id: A sub-vendor identifier(Null terminated ASCII string) 26 + */ 27 + struct scmi_revision_info { 28 + u16 major_ver; 29 + u16 minor_ver; 30 + u8 num_protocols; 31 + u8 num_agents; 32 + u32 impl_ver; 33 + char vendor_id[SCMI_MAX_STR_SIZE]; 34 + char sub_vendor_id[SCMI_MAX_STR_SIZE]; 35 + }; 36 + 37 + struct scmi_clock_info { 38 + char name[SCMI_MAX_STR_SIZE]; 39 + bool rate_discrete; 40 + union { 41 + struct { 42 + int num_rates; 43 + u64 rates[SCMI_MAX_NUM_RATES]; 44 + } list; 45 + struct { 46 + u64 min_rate; 47 + u64 max_rate; 48 + u64 step_size; 49 + } range; 50 + }; 51 + }; 52 + 53 + struct scmi_handle; 54 + 55 + /** 56 + * struct scmi_clk_ops - represents the various operations provided 57 + * by SCMI Clock Protocol 58 + * 59 + * @count_get: get the count of clocks provided by SCMI 60 + * @info_get: get the information of the specified clock 61 + * @rate_get: request the current clock rate of a clock 62 + * @rate_set: set the clock rate of a clock 63 + * @enable: enables the specified clock 64 + * @disable: disables the specified clock 65 + */ 66 + struct scmi_clk_ops { 67 + int (*count_get)(const 
struct scmi_handle *handle); 68 + 69 + const struct scmi_clock_info *(*info_get) 70 + (const struct scmi_handle *handle, u32 clk_id); 71 + int (*rate_get)(const struct scmi_handle *handle, u32 clk_id, 72 + u64 *rate); 73 + int (*rate_set)(const struct scmi_handle *handle, u32 clk_id, 74 + u32 config, u64 rate); 75 + int (*enable)(const struct scmi_handle *handle, u32 clk_id); 76 + int (*disable)(const struct scmi_handle *handle, u32 clk_id); 77 + }; 78 + 79 + /** 80 + * struct scmi_perf_ops - represents the various operations provided 81 + * by SCMI Performance Protocol 82 + * 83 + * @limits_set: sets limits on the performance level of a domain 84 + * @limits_get: gets limits on the performance level of a domain 85 + * @level_set: sets the performance level of a domain 86 + * @level_get: gets the performance level of a domain 87 + * @device_domain_id: gets the scmi domain id for a given device 88 + * @get_transition_latency: gets the DVFS transition latency for a given device 89 + * @add_opps_to_device: adds all the OPPs for a given device 90 + * @freq_set: sets the frequency for a given device using sustained frequency 91 + * to sustained performance level mapping 92 + * @freq_get: gets the frequency for a given device using sustained frequency 93 + * to sustained performance level mapping 94 + */ 95 + struct scmi_perf_ops { 96 + int (*limits_set)(const struct scmi_handle *handle, u32 domain, 97 + u32 max_perf, u32 min_perf); 98 + int (*limits_get)(const struct scmi_handle *handle, u32 domain, 99 + u32 *max_perf, u32 *min_perf); 100 + int (*level_set)(const struct scmi_handle *handle, u32 domain, 101 + u32 level, bool poll); 102 + int (*level_get)(const struct scmi_handle *handle, u32 domain, 103 + u32 *level, bool poll); 104 + int (*device_domain_id)(struct device *dev); 105 + int (*get_transition_latency)(const struct scmi_handle *handle, 106 + struct device *dev); 107 + int (*add_opps_to_device)(const struct scmi_handle *handle, 108 + struct device *dev); 109 + 
int (*freq_set)(const struct scmi_handle *handle, u32 domain, 110 + unsigned long rate, bool poll); 111 + int (*freq_get)(const struct scmi_handle *handle, u32 domain, 112 + unsigned long *rate, bool poll); 113 + }; 114 + 115 + /** 116 + * struct scmi_power_ops - represents the various operations provided 117 + * by SCMI Power Protocol 118 + * 119 + * @num_domains_get: get the count of power domains provided by SCMI 120 + * @name_get: gets the name of a power domain 121 + * @state_set: sets the power state of a power domain 122 + * @state_get: gets the power state of a power domain 123 + */ 124 + struct scmi_power_ops { 125 + int (*num_domains_get)(const struct scmi_handle *handle); 126 + char *(*name_get)(const struct scmi_handle *handle, u32 domain); 127 + #define SCMI_POWER_STATE_TYPE_SHIFT 30 128 + #define SCMI_POWER_STATE_ID_MASK (BIT(28) - 1) 129 + #define SCMI_POWER_STATE_PARAM(type, id) \ 130 + ((((type) & BIT(0)) << SCMI_POWER_STATE_TYPE_SHIFT) | \ 131 + ((id) & SCMI_POWER_STATE_ID_MASK)) 132 + #define SCMI_POWER_STATE_GENERIC_ON SCMI_POWER_STATE_PARAM(0, 0) 133 + #define SCMI_POWER_STATE_GENERIC_OFF SCMI_POWER_STATE_PARAM(1, 0) 134 + int (*state_set)(const struct scmi_handle *handle, u32 domain, 135 + u32 state); 136 + int (*state_get)(const struct scmi_handle *handle, u32 domain, 137 + u32 *state); 138 + }; 139 + 140 + struct scmi_sensor_info { 141 + u32 id; 142 + u8 type; 143 + char name[SCMI_MAX_STR_SIZE]; 144 + }; 145 + 146 + /* 147 + * Partial list from Distributed Management Task Force (DMTF) specification: 148 + * DSP0249 (Platform Level Data Model specification) 149 + */ 150 + enum scmi_sensor_class { 151 + NONE = 0x0, 152 + TEMPERATURE_C = 0x2, 153 + VOLTAGE = 0x5, 154 + CURRENT = 0x6, 155 + POWER = 0x7, 156 + ENERGY = 0x8, 157 + }; 158 + 159 + /** 160 + * struct scmi_sensor_ops - represents the various operations provided 161 + * by SCMI Sensor Protocol 162 + * 163 + * @count_get: get the count of sensors provided by SCMI 164 + * @info_get: get 
the information of the specified sensor 165 + * @configuration_set: control notifications on cross-over events for 166 + * the trip-points 167 + * @trip_point_set: selects and configures a trip-point of interest 168 + * @reading_get: gets the current value of the sensor 169 + */ 170 + struct scmi_sensor_ops { 171 + int (*count_get)(const struct scmi_handle *handle); 172 + 173 + const struct scmi_sensor_info *(*info_get) 174 + (const struct scmi_handle *handle, u32 sensor_id); 175 + int (*configuration_set)(const struct scmi_handle *handle, 176 + u32 sensor_id); 177 + int (*trip_point_set)(const struct scmi_handle *handle, u32 sensor_id, 178 + u8 trip_id, u64 trip_value); 179 + int (*reading_get)(const struct scmi_handle *handle, u32 sensor_id, 180 + bool async, u64 *value); 181 + }; 182 + 183 + /** 184 + * struct scmi_handle - Handle returned to ARM SCMI clients for usage. 185 + * 186 + * @dev: pointer to the SCMI device 187 + * @version: pointer to the structure containing SCMI version information 188 + * @power_ops: pointer to set of power protocol operations 189 + * @perf_ops: pointer to set of performance protocol operations 190 + * @clk_ops: pointer to set of clock protocol operations 191 + * @sensor_ops: pointer to set of sensor protocol operations 192 + */ 193 + struct scmi_handle { 194 + struct device *dev; 195 + struct scmi_revision_info *version; 196 + struct scmi_perf_ops *perf_ops; 197 + struct scmi_clk_ops *clk_ops; 198 + struct scmi_power_ops *power_ops; 199 + struct scmi_sensor_ops *sensor_ops; 200 + /* for protocol internal use */ 201 + void *perf_priv; 202 + void *clk_priv; 203 + void *power_priv; 204 + void *sensor_priv; 205 + }; 206 + 207 + enum scmi_std_protocol { 208 + SCMI_PROTOCOL_BASE = 0x10, 209 + SCMI_PROTOCOL_POWER = 0x11, 210 + SCMI_PROTOCOL_SYSTEM = 0x12, 211 + SCMI_PROTOCOL_PERF = 0x13, 212 + SCMI_PROTOCOL_CLOCK = 0x14, 213 + SCMI_PROTOCOL_SENSOR = 0x15, 214 + }; 215 + 216 + struct scmi_device { 217 + u32 id; 218 + u8 protocol_id; 219 
+ struct device dev; 220 + struct scmi_handle *handle; 221 + }; 222 + 223 + #define to_scmi_dev(d) container_of(d, struct scmi_device, dev) 224 + 225 + struct scmi_device * 226 + scmi_device_create(struct device_node *np, struct device *parent, int protocol); 227 + void scmi_device_destroy(struct scmi_device *scmi_dev); 228 + 229 + struct scmi_device_id { 230 + u8 protocol_id; 231 + }; 232 + 233 + struct scmi_driver { 234 + const char *name; 235 + int (*probe)(struct scmi_device *sdev); 236 + void (*remove)(struct scmi_device *sdev); 237 + const struct scmi_device_id *id_table; 238 + 239 + struct device_driver driver; 240 + }; 241 + 242 + #define to_scmi_driver(d) container_of(d, struct scmi_driver, driver) 243 + 244 + #ifdef CONFIG_ARM_SCMI_PROTOCOL 245 + int scmi_driver_register(struct scmi_driver *driver, 246 + struct module *owner, const char *mod_name); 247 + void scmi_driver_unregister(struct scmi_driver *driver); 248 + #else 249 + static inline int 250 + scmi_driver_register(struct scmi_driver *driver, struct module *owner, 251 + const char *mod_name) 252 + { 253 + return -EINVAL; 254 + } 255 + 256 + static inline void scmi_driver_unregister(struct scmi_driver *driver) {} 257 + #endif /* CONFIG_ARM_SCMI_PROTOCOL */ 258 + 259 + #define scmi_register(driver) \ 260 + scmi_driver_register(driver, THIS_MODULE, KBUILD_MODNAME) 261 + #define scmi_unregister(driver) \ 262 + scmi_driver_unregister(driver) 263 + 264 + /** 265 + * module_scmi_driver() - Helper macro for registering a scmi driver 266 + * @__scmi_driver: scmi_driver structure 267 + * 268 + * Helper macro for scmi drivers to set up proper module init / exit 269 + * functions. Replaces module_init() and module_exit() and keeps people from 270 + * printing pointless things to the kernel log when their driver is loaded. 
271 + */ 272 + #define module_scmi_driver(__scmi_driver) \ 273 + module_driver(__scmi_driver, scmi_register, scmi_unregister) 274 + 275 + typedef int (*scmi_prot_init_fn_t)(struct scmi_handle *); 276 + int scmi_protocol_register(int protocol_id, scmi_prot_init_fn_t fn); 277 + void scmi_protocol_unregister(int protocol_id);
+4
include/linux/soc/mediatek/infracfg.h
··· 21 21 #define MT8173_TOP_AXI_PROT_EN_MFG_M1 BIT(22) 22 22 #define MT8173_TOP_AXI_PROT_EN_MFG_SNOOP_OUT BIT(23) 23 23 24 + #define MT2701_TOP_AXI_PROT_EN_MM_M0 BIT(1) 25 + #define MT2701_TOP_AXI_PROT_EN_CONN_M BIT(2) 26 + #define MT2701_TOP_AXI_PROT_EN_CONN_S BIT(8) 27 + 24 28 #define MT7622_TOP_AXI_PROT_EN_ETHSYS (BIT(3) | BIT(17)) 25 29 #define MT7622_TOP_AXI_PROT_EN_HIF0 (BIT(24) | BIT(25)) 26 30 #define MT7622_TOP_AXI_PROT_EN_HIF1 (BIT(26) | BIT(27) | \
+1 -4
include/linux/soc/samsung/exynos-pmu.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 /* 2 3 * Copyright (c) 2014 Samsung Electronics Co., Ltd. 3 4 * http://www.samsung.com 4 5 * 5 6 * Header for EXYNOS PMU Driver support 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 7 */ 11 8 12 9 #ifndef __LINUX_SOC_EXYNOS_PMU_H
+1 -5
include/linux/soc/samsung/exynos-regs-pmu.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 /* 2 3 * Copyright (c) 2010-2015 Samsung Electronics Co., Ltd. 3 4 * http://www.samsung.com 4 5 * 5 6 * EXYNOS - Power management unit definition 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 - * 11 7 * 12 8 * Notice: 13 9 * This is not a list of all Exynos Power Management Unit SFRs.
+2 -2
include/soc/tegra/bpmp.h
··· 75 75 struct mbox_chan *channel; 76 76 } mbox; 77 77 78 - struct tegra_bpmp_channel *channels; 79 - unsigned int num_channels; 78 + spinlock_t atomic_tx_lock; 79 + struct tegra_bpmp_channel *tx_channel, *rx_channel, *threaded_channels; 80 80 81 81 struct { 82 82 unsigned long *allocated;