
Merge tag 'arm-drivers-5.13' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc

Pull ARM SoC driver updates from Arnd Bergmann:
"Updates for SoC specific drivers include a few subsystems that have
their own maintainers but send them through the soc tree:

TEE/OP-TEE:
- Add tracepoints around calls to secure world

Memory controller drivers:
- Minor fixes for Renesas, Exynos, Mediatek and Tegra platforms
- Add debug statistics to Tegra20 memory controller
- Update Tegra bindings and convert to dtschema

ARM SCMI Firmware:
- Support for modular SCMI protocols and vendor specific extensions
- New SCMI IIO driver
- Per-cpu DVFS

The other driver changes are all from the platform maintainers
directly and reflect the drivers that don't fit into any other
subsystem as well as treewide changes for a particular platform.

SoCFPGA:
- Various cleanups contributed by Krzysztof Kozlowski

Mediatek:
- add MT8183 support to mutex driver
- MMSYS: use per SoC array to describe the possible routing
- add MMSYS support for MT8183 and MT8167
- add support for PMIC wrapper with integrated arbiter
- add support for MT8192/MT6873

Tegra:
- Bug fixes to PMC and clock drivers

NXP/i.MX:
- Update SCU power domain driver to keep console domain power on.
- Add missing ADC1 power domain to SCU power domain driver.
- Update comments for single global power domain in SCU power domain
driver.
- Add i.MX51/i.MX53 unique id support to i.MX SoC driver.

NXP/FSL SoC driver updates for v5.13
- Add ACPI support for RCPM driver
- Use generic io{read,write} accessors for QE drivers, now that the
generic accessors have been performance-optimized for PowerPC
- Fix QBMAN probe to cleanup HW states correctly for kexec
- Various cleanup and style fix for QBMAN/QE/GUTS drivers

OMAP:
- Preparation to use devicetree for genpd
- ti-sysc needs iorange check improved when the interconnect target
module has no control registers listed
- ti-sysc needs to probe l4_wkup and l4_cfg interconnects first to
avoid issues with missing resources and unnecessary deferred probe
- ti-sysc debug option can now detect more devices
- ti-sysc now warns if old, incomplete devicetree data is found, as we
now rely on it being complete for am3 and am4
- soc init code needs to check for prcm and prm nodes for omap4/5 and
dra7
- omap-prm driver needs to enable autoidle retention support for
omap4
- omap5 clocks are missing gpmc and ocmc clock registers
- pci-dra7xx now needs to use builtin_platform_driver instead of
using builtin_platform_driver_probe for deferred probe to work
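The pci-dra7xx item is worth unpacking: builtin_platform_driver_probe() registers the driver through platform_driver_probe(), whose one-shot probe routine is discarded after init and so can never be retried once probe returns -EPROBE_DEFER, while builtin_platform_driver() keeps a normal .probe callback registered so the driver core can re-run it when missing resources appear. A rough sketch of the shape of such a change (abbreviated and hypothetical, not the exact pci-dra7xx code):

```c
/* Before: the probe routine is passed separately and freed after init,
 * so a probe deferred with -EPROBE_DEFER is never retried. */
builtin_platform_driver_probe(dra7xx_pcie_driver, dra7xx_pcie_probe);

/* After: .probe stays in the platform_driver, so the driver core can
 * re-probe the device later when its dependencies become available. */
static struct platform_driver dra7xx_pcie_driver = {
	.probe = dra7xx_pcie_probe,
	.driver = {
		.name = "dra7-pcie",
		.of_match_table = of_dra7xx_pcie_match,
	},
};
builtin_platform_driver(dra7xx_pcie_driver);
```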

Raspberry Pi:
- Fix up all RPi firmware drivers so that unbind happens in an
orderly fashion
- Support for RPi's PoE hat PWM bus

Qualcomm:
- Improved detection for SCM calling conventions
- Support for OEM specific wifi firmware path
- Added drivers for SC7280/SM8350: RPMH, LLCC, AOSS QMP"

* tag 'arm-drivers-5.13' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (165 commits)
soc: aspeed: fix a ternary sign expansion bug
memory: mtk-smi: Add device-link between smi-larb and smi-common
memory: samsung: exynos5422-dmc: handle clk_set_parent() failure
memory: renesas-rpc-if: fix possible NULL pointer dereference of resource
clk: socfpga: fix iomem pointer cast on 64-bit
soc: aspeed: Adapt to new LPC device tree layout
pinctrl: aspeed-g5: Adapt to new LPC device tree layout
ipmi: kcs: aspeed: Adapt to new LPC DTS layout
ARM: dts: Remove LPC BMC and Host partitions
dt-bindings: aspeed-lpc: Remove LPC partitioning
soc: fsl: enable acpi support in RCPM driver
soc: qcom: mdt_loader: Detect truncated read of segments
soc: qcom: mdt_loader: Validate that p_filesz < p_memsz
soc: qcom: pdr: Fix error return code in pdr_register_listener
firmware: qcom_scm: Fix kernel-doc function names to match
firmware: qcom_scm: Suppress sysfs bind attributes
firmware: qcom_scm: Workaround lack of "is available" call on SC7180
firmware: qcom_scm: Reduce locking section for __get_convention()
firmware: qcom_scm: Make __qcom_scm_is_call_available() return bool
Revert "soc: fsl: qe: introduce qe_io{read,write}* wrappers"
...

+4767 -2130
+20
Documentation/devicetree/bindings/arm/bcm/raspberrypi,bcm2835-firmware.yaml
···
       - compatible
       - "#reset-cells"

+  pwm:
+    type: object
+
+    properties:
+      compatible:
+        const: raspberrypi,firmware-poe-pwm
+
+      "#pwm-cells":
+        # See pwm.yaml in this directory for a description of the cells format.
+        const: 2
+
+    required:
+      - compatible
+      - "#pwm-cells"
+
 additionalProperties: false

 required:
···
     reset: reset {
         compatible = "raspberrypi,firmware-reset";
         #reset-cells = <1>;
+    };
+
+    pwm: pwm {
+        compatible = "raspberrypi,firmware-poe-pwm";
+        #pwm-cells = <2>;
     };
 };
···
+1
Documentation/devicetree/bindings/arm/mediatek/mediatek,mmsys.txt
···
 	- "mediatek,mt6779-mmsys", "syscon"
 	- "mediatek,mt6797-mmsys", "syscon"
 	- "mediatek,mt7623-mmsys", "mediatek,mt2701-mmsys", "syscon"
+	- "mediatek,mt8167-mmsys", "syscon"
 	- "mediatek,mt8173-mmsys", "syscon"
 	- "mediatek,mt8183-mmsys", "syscon"
 - #clock-cells: Must be 1
+1
Documentation/devicetree/bindings/arm/msm/qcom,llcc.yaml
···
   compatible:
     enum:
       - qcom,sc7180-llcc
+      - qcom,sc7280-llcc
       - qcom,sdm845-llcc
       - qcom,sm8150-llcc
       - qcom,sm8250-llcc
+1
Documentation/devicetree/bindings/firmware/qcom,scm.txt
···
 	    * "qcom,scm-msm8996"
 	    * "qcom,scm-msm8998"
 	    * "qcom,scm-sc7180"
+	    * "qcom,scm-sc7280"
 	    * "qcom,scm-sdm845"
 	    * "qcom,scm-sm8150"
 	    * "qcom,scm-sm8250"
+4 -3
Documentation/devicetree/bindings/memory-controllers/nvidia,tegra124-emc.yaml
···
     description:
       phandle of the memory controller node

-  core-supply:
+  power-domains:
+    maxItems: 1
     description:
-      Phandle of voltage regulator of the SoC "core" power domain.
+      Phandle of the SoC "core" power domain.

   operating-points-v2:
     description:
···

         nvidia,memory-controller = <&mc>;
         operating-points-v2 = <&dvfs_opp_table>;
-        core-supply = <&vdd_core>;
+        power-domains = <&domain>;

         #interconnect-cells = <0>;
+2 -2
Documentation/devicetree/bindings/memory-controllers/nvidia,tegra20-emc.txt
···
   matches, the OPP gets enabled.

 Optional properties:
-- core-supply: Phandle of voltage regulator of the SoC "core" power domain.
+- power-domains: Phandle of the SoC "core" power domain.

 Child device nodes describe the memory settings for different configurations and clock rates.
···
 	interrupts = <0 78 0x04>;
 	clocks = <&tegra_car TEGRA20_CLK_EMC>;
 	nvidia,memory-controller = <&mc>;
-	core-supply = <&core_vdd_reg>;
+	power-domains = <&domain>;
 	operating-points-v2 = <&opp_table>;
 }
-40
Documentation/devicetree/bindings/memory-controllers/nvidia,tegra20-mc.txt
···
-NVIDIA Tegra20 MC(Memory Controller)
-
-Required properties:
-- compatible : "nvidia,tegra20-mc-gart"
-- reg : Should contain 2 register ranges: physical base address and length of
-  the controller's registers and the GART aperture respectively.
-- clocks: Must contain an entry for each entry in clock-names.
-  See ../clocks/clock-bindings.txt for details.
-- clock-names: Must include the following entries:
-  - mc: the module's clock input
-- interrupts : Should contain MC General interrupt.
-- #reset-cells : Should be 1. This cell represents memory client module ID.
-  The assignments may be found in header file <dt-bindings/memory/tegra20-mc.h>
-  or in the TRM documentation.
-- #iommu-cells: Should be 0. This cell represents the number of cells in an
-  IOMMU specifier needed to encode an address. GART supports only a single
-  address space that is shared by all devices, therefore no additional
-  information needed for the address encoding.
-- #interconnect-cells : Should be 1. This cell represents memory client.
-  The assignments may be found in header file <dt-bindings/memory/tegra20-mc.h>.
-
-Example:
-	mc: memory-controller@7000f000 {
-		compatible = "nvidia,tegra20-mc-gart";
-		reg = <0x7000f000 0x400		/* controller registers */
-		       0x58000000 0x02000000>;	/* GART aperture */
-		clocks = <&tegra_car TEGRA20_CLK_MC>;
-		clock-names = "mc";
-		interrupts = <GIC_SPI 77 0x04>;
-		#reset-cells = <1>;
-		#iommu-cells = <0>;
-		#interconnect-cells = <1>;
-	};
-
-	video-codec@6001a000 {
-		compatible = "nvidia,tegra20-vde";
-		...
-		resets = <&mc TEGRA20_MC_RESET_VDE>;
-		iommus = <&mc>;
-	};
+79
Documentation/devicetree/bindings/memory-controllers/nvidia,tegra20-mc.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/memory-controllers/nvidia,tegra20-mc.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: NVIDIA Tegra20 SoC Memory Controller
+
+maintainers:
+  - Dmitry Osipenko <digetx@gmail.com>
+  - Jon Hunter <jonathanh@nvidia.com>
+  - Thierry Reding <thierry.reding@gmail.com>
+
+description: |
+  The Tegra20 Memory Controller merges request streams from various client
+  interfaces into request stream(s) for the various memory target devices,
+  and returns response data to the various clients. The Memory Controller
+  has a configurable arbitration algorithm to allow the user to fine-tune
+  performance among the various clients.
+
+  Tegra20 Memory Controller includes the GART (Graphics Address Relocation
+  Table) which allows Memory Controller to provide a linear view of a
+  fragmented memory pages.
+
+properties:
+  compatible:
+    const: nvidia,tegra20-mc-gart
+
+  reg:
+    items:
+      - description: controller registers
+      - description: GART registers
+
+  clocks:
+    maxItems: 1
+
+  clock-names:
+    items:
+      - const: mc
+
+  interrupts:
+    maxItems: 1
+
+  "#reset-cells":
+    const: 1
+
+  "#iommu-cells":
+    const: 0
+
+  "#interconnect-cells":
+    const: 1
+
+required:
+  - compatible
+  - reg
+  - interrupts
+  - clocks
+  - clock-names
+  - "#reset-cells"
+  - "#iommu-cells"
+  - "#interconnect-cells"
+
+additionalProperties: false
+
+examples:
+  - |
+    memory-controller@7000f000 {
+        compatible = "nvidia,tegra20-mc-gart";
+        reg = <0x7000f000 0x400>,      /* Controller registers */
+              <0x58000000 0x02000000>; /* GART aperture */
+        clocks = <&clock_controller 32>;
+        clock-names = "mc";
+
+        interrupts = <0 77 4>;
+
+        #iommu-cells = <0>;
+        #reset-cells = <1>;
+        #interconnect-cells = <1>;
+    };
+4 -3
Documentation/devicetree/bindings/memory-controllers/nvidia,tegra30-emc.yaml
···
     description:
       Phandle of the Memory Controller node.

-  core-supply:
+  power-domains:
+    maxItems: 1
     description:
-      Phandle of voltage regulator of the SoC "core" power domain.
+      Phandle of the SoC "core" power domain.

   operating-points-v2:
     description:
···

         nvidia,memory-controller = <&mc>;
         operating-points-v2 = <&dvfs_opp_table>;
-        core-supply = <&vdd_core>;
+        power-domains = <&domain>;

         #interconnect-cells = <0>;
+25 -75
Documentation/devicetree/bindings/mfd/aspeed-lpc.txt
···
 conditions it can also take the role of bus master.

 The LPC controller is represented as a multi-function device to account for the
-mix of functionality it provides. The principle split is between the register
-layout at the start of the I/O space which is, to quote the Aspeed datasheet,
-"basically compatible with the [LPC registers from the] popular BMC controller
-H8S/2168[1]", and everything else, where everything else is an eclectic
-collection of functions with a esoteric register layout. "Everything else",
-here labeled the "host" portion of the controller, includes, but is not limited
-to:
+mix of functionality, which includes, but is not limited to:

 * An IPMI Block Transfer[2] Controller

···
 ===================

 - compatible:	One of:
-		"aspeed,ast2400-lpc", "simple-mfd"
-		"aspeed,ast2500-lpc", "simple-mfd"
-		"aspeed,ast2600-lpc", "simple-mfd"
+		"aspeed,ast2400-lpc-v2", "simple-mfd", "syscon"
+		"aspeed,ast2500-lpc-v2", "simple-mfd", "syscon"
+		"aspeed,ast2600-lpc-v2", "simple-mfd", "syscon"

 - reg:		contains the physical address and length values of the Aspeed
 		LPC memory region.

 - #address-cells: <1>
 - #size-cells: <1>
-- ranges:	Maps 0 to the physical address and length of the LPC memory
-		region
-
-Required LPC Child nodes
-========================
-
-BMC Node
---------
-
-- compatible: One of:
-		"aspeed,ast2400-lpc-bmc"
-		"aspeed,ast2500-lpc-bmc"
-		"aspeed,ast2600-lpc-bmc"
-
-- reg: contains the physical address and length values of the
-	H8S/2168-compatible LPC controller memory region
-
-Host Node
----------
-
-- compatible: One of:
-		"aspeed,ast2400-lpc-host", "simple-mfd", "syscon"
-		"aspeed,ast2500-lpc-host", "simple-mfd", "syscon"
-		"aspeed,ast2600-lpc-host", "simple-mfd", "syscon"
-
-- reg: contains the address and length values of the host-related
-	register space for the Aspeed LPC controller
-
-- #address-cells: <1>
-- #size-cells: <1>
-- ranges: Maps 0 to the address and length of the host-related LPC memory
+- ranges:	Maps 0 to the physical address and length of the LPC memory
 		region

 Example:

 lpc: lpc@1e789000 {
-	compatible = "aspeed,ast2500-lpc", "simple-mfd";
+	compatible = "aspeed,ast2500-lpc-v2", "simple-mfd", "syscon";
 	reg = <0x1e789000 0x1000>;

 	#address-cells = <1>;
 	#size-cells = <1>;
 	ranges = <0x0 0x1e789000 0x1000>;

-	lpc_bmc: lpc-bmc@0 {
-		compatible = "aspeed,ast2500-lpc-bmc";
+	lpc_snoop: lpc-snoop@0 {
+		compatible = "aspeed,ast2600-lpc-snoop";
 		reg = <0x0 0x80>;
-	};
-
-	lpc_host: lpc-host@80 {
-		compatible = "aspeed,ast2500-lpc-host", "simple-mfd", "syscon";
-		reg = <0x80 0x1e0>;
-		reg-io-width = <4>;
-
-		#address-cells = <1>;
-		#size-cells = <1>;
-		ranges = <0x0 0x80 0x1e0>;
+		interrupts = <GIC_SPI 144 IRQ_TYPE_LEVEL_HIGH>;
+		snoop-ports = <0x80>;
 	};
 };

-BMC Node Children
-==================
-
-
-Host Node Children
-==================

 LPC Host Interface Controller
 -------------------
···

 Example:

-lpc-host@80 {
-	lpc_ctrl: lpc-ctrl@0 {
-		compatible = "aspeed,ast2500-lpc-ctrl";
-		reg = <0x0 0x80>;
-		clocks = <&syscon ASPEED_CLK_GATE_LCLK>;
-		memory-region = <&flash_memory>;
-		flash = <&spi>;
-	};
+lpc_ctrl: lpc-ctrl@80 {
+	compatible = "aspeed,ast2500-lpc-ctrl";
+	reg = <0x80 0x80>;
+	clocks = <&syscon ASPEED_CLK_GATE_LCLK>;
+	memory-region = <&flash_memory>;
+	flash = <&spi>;
 };

 LPC Host Controller
···

 Example:

-lhc: lhc@20 {
+lhc: lhc@a0 {
 	compatible = "aspeed,ast2500-lhc";
-	reg = <0x20 0x24 0x48 0x8>;
+	reg = <0xa0 0x24 0xc8 0x8>;
 };

 LPC reset control
···

 Required properties:

-- compatible: "aspeed,ast2600-lpc-reset" or
-	      "aspeed,ast2500-lpc-reset"
-	      "aspeed,ast2400-lpc-reset"
+- compatible: One of:
+	"aspeed,ast2600-lpc-reset";
+	"aspeed,ast2500-lpc-reset";
+	"aspeed,ast2400-lpc-reset";
+
 - reg: offset and length of the IP in the LHC memory region
 - #reset-controller indicates the number of reset cells expected

 Example:

-lpc_reset: reset-controller@18 {
+lpc_reset: reset-controller@98 {
 	compatible = "aspeed,ast2500-lpc-reset";
-	reg = <0x18 0x4>;
+	reg = <0x98 0x4>;
 	#reset-cells = <1>;
 };
+1
Documentation/devicetree/bindings/power/brcm,bcm-pmb.yaml
···
   compatible:
     enum:
       - brcm,bcm4908-pmb
+      - brcm,bcm63138-pmb

   reg:
     description: register space of one or more buses
+2
Documentation/devicetree/bindings/power/qcom,rpmpd.yaml
···
       - qcom,qcs404-rpmpd
       - qcom,sdm660-rpmpd
       - qcom,sc7180-rpmhpd
+      - qcom,sc7280-rpmhpd
       - qcom,sdm845-rpmhpd
       - qcom,sdx55-rpmhpd
       - qcom,sm8150-rpmhpd
       - qcom,sm8250-rpmhpd
+      - qcom,sm8350-rpmhpd

   '#power-domain-cells':
     const: 1
+1
Documentation/devicetree/bindings/soc/mediatek/pwrap.txt
···
 	"mediatek,mt6765-pwrap" for MT6765 SoCs
 	"mediatek,mt6779-pwrap" for MT6779 SoCs
 	"mediatek,mt6797-pwrap" for MT6797 SoCs
+	"mediatek,mt6873-pwrap" for MT6873/8192 SoCs
 	"mediatek,mt7622-pwrap" for MT7622 SoCs
 	"mediatek,mt8135-pwrap" for MT8135 SoCs
 	"mediatek,mt8173-pwrap" for MT8173 SoCs
+1
Documentation/devicetree/bindings/soc/qcom/qcom,aoss-qmp.txt
···
 	Value type: <string>
 	Definition: must be one of:
 		    "qcom,sc7180-aoss-qmp"
+		    "qcom,sc7280-aoss-qmp"
 		    "qcom,sdm845-aoss-qmp"
 		    "qcom,sm8150-aoss-qmp"
 		    "qcom,sm8250-aoss-qmp"
+7
Documentation/devicetree/bindings/soc/qcom/qcom,wcnss.txt
···
 		    "qcom,riva",
 		    "qcom,pronto"

+- firmware-name:
+	Usage: optional
+	Value type: <string>
+	Definition: specifies the relative firmware image path for the WLAN NV
+		    blob. Defaults to "wlan/prima/WCNSS_qcom_wlan_nv.bin" if
+		    not specified.
+
 = SUBNODES
 The subnodes of the wcnss node are optional and describe the individual blocks in
 the WCNSS.
+2 -29
Documentation/driver-api/xilinx/eemi.rst
···
 device to communicate with a power management controller (PMC) on a
 device to issue or respond to power management requests.

-EEMI ops is a structure containing all eemi APIs supported by Zynq MPSoC.
-The zynqmp-firmware driver maintain all EEMI APIs in zynqmp_eemi_ops
-structure. Any driver who want to communicate with PMC using EEMI APIs
-can call zynqmp_pm_get_eemi_ops().
-
-Example of EEMI ops::
-
-	/* zynqmp-firmware driver maintain all EEMI APIs */
-	struct zynqmp_eemi_ops {
-		int (*get_api_version)(u32 *version);
-		int (*query_data)(struct zynqmp_pm_query_data qdata, u32 *out);
-	};
-
-	static const struct zynqmp_eemi_ops eemi_ops = {
-		.get_api_version = zynqmp_pm_get_api_version,
-		.query_data = zynqmp_pm_query_data,
-	};
-
-Example of EEMI ops usage::
-
-	static const struct zynqmp_eemi_ops *eemi_ops;
-	u32 ret_payload[PAYLOAD_ARG_CNT];
-	int ret;
-
-	eemi_ops = zynqmp_pm_get_eemi_ops();
-	if (IS_ERR(eemi_ops))
-		return PTR_ERR(eemi_ops);
-
-	ret = eemi_ops->query_data(qdata, ret_payload);
+Any driver who wants to communicate with PMC using EEMI APIs use the
+functions provided for each function.

 IOCTL
 ------
+1
MAINTAINERS
···
 F:	drivers/usb/dwc3/dwc3-qcom.c
 F:	include/dt-bindings/*/qcom*
 F:	include/linux/*/qcom*
+F:	include/linux/soc/qcom/

 ARM/RADISYS ENP2611 MACHINE SUPPORT
 M:	Lennert Buytenhek <kernel@wantstofly.org>
+1 -1
arch/arm/Kconfig
···
 # selected platforms.
 config ARCH_NR_GPIO
 	int
-	default 2048 if ARCH_SOCFPGA
+	default 2048 if ARCH_INTEL_SOCFPGA
 	default 1024 if ARCH_BRCMSTB || ARCH_RENESAS || ARCH_TEGRA || \
 		ARCH_ZYNQ || ARCH_ASPEED
 	default 512 if ARCH_EXYNOS || ARCH_KEYSTONE || SOC_OMAP5 || \
+3 -3
arch/arm/Kconfig.debug
···
 	  on SD5203 UART.

 config DEBUG_SOCFPGA_UART0
-	depends on ARCH_SOCFPGA
+	depends on ARCH_INTEL_SOCFPGA
 	bool "Use SOCFPGA UART0 for low-level debug"
 	select DEBUG_UART_8250
 	help
···
 	  on SOCFPGA(Cyclone 5 and Arria 5) based platforms.

 config DEBUG_SOCFPGA_ARRIA10_UART1
-	depends on ARCH_SOCFPGA
+	depends on ARCH_INTEL_SOCFPGA
 	bool "Use SOCFPGA Arria10 UART1 for low-level debug"
 	select DEBUG_UART_8250
 	help
···
 	  on SOCFPGA(Arria 10) based platforms.

 config DEBUG_SOCFPGA_CYCLONE5_UART1
-	depends on ARCH_SOCFPGA
+	depends on ARCH_INTEL_SOCFPGA
 	bool "Use SOCFPGA Cyclone 5 UART1 for low-level debug"
 	select DEBUG_UART_8250
 	help
+1 -1
arch/arm/Makefile
···
 machine-$(CONFIG_ARCH_S5PV210)		+= s5pv210
 machine-$(CONFIG_ARCH_SA1100)		+= sa1100
 machine-$(CONFIG_ARCH_RENESAS)		+= shmobile
-machine-$(CONFIG_ARCH_SOCFPGA)		+= socfpga
+machine-$(CONFIG_ARCH_INTEL_SOCFPGA)	+= socfpga
 machine-$(CONFIG_ARCH_STI)		+= sti
 machine-$(CONFIG_ARCH_STM32)		+= stm32
 machine-$(CONFIG_ARCH_SUNXI)		+= sunxi
+1 -1
arch/arm/boot/dts/Makefile
···
 	s5pv210-smdkc110.dtb \
 	s5pv210-smdkv210.dtb \
 	s5pv210-torbreck.dtb
-dtb-$(CONFIG_ARCH_SOCFPGA) += \
+dtb-$(CONFIG_ARCH_INTEL_SOCFPGA) += \
 	socfpga_arria5_socdk.dtb \
 	socfpga_arria10_socdk_nand.dtb \
 	socfpga_arria10_socdk_qspi.dtb \
+28 -42
arch/arm/boot/dts/aspeed-g4.dtsi
···
 		};

 		lpc: lpc@1e789000 {
-			compatible = "aspeed,ast2400-lpc", "simple-mfd";
+			compatible = "aspeed,ast2400-lpc-v2", "simple-mfd", "syscon";
 			reg = <0x1e789000 0x1000>;
+			reg-io-width = <4>;

 			#address-cells = <1>;
 			#size-cells = <1>;
 			ranges = <0x0 0x1e789000 0x1000>;

-			lpc_bmc: lpc-bmc@0 {
-				compatible = "aspeed,ast2400-lpc-bmc";
-				reg = <0x0 0x80>;
+			lpc_ctrl: lpc-ctrl@80 {
+				compatible = "aspeed,ast2400-lpc-ctrl";
+				reg = <0x80 0x10>;
+				clocks = <&syscon ASPEED_CLK_GATE_LCLK>;
+				status = "disabled";
 			};

-			lpc_host: lpc-host@80 {
-				compatible = "aspeed,ast2400-lpc-host", "simple-mfd", "syscon";
-				reg = <0x80 0x1e0>;
-				reg-io-width = <4>;
+			lpc_snoop: lpc-snoop@90 {
+				compatible = "aspeed,ast2400-lpc-snoop";
+				reg = <0x90 0x8>;
+				interrupts = <8>;
+				clocks = <&syscon ASPEED_CLK_GATE_LCLK>;
+				status = "disabled";
+			};

-				#address-cells = <1>;
-				#size-cells = <1>;
-				ranges = <0x0 0x80 0x1e0>;
+			lhc: lhc@a0 {
+				compatible = "aspeed,ast2400-lhc";
+				reg = <0xa0 0x24 0xc8 0x8>;
+			};

-				lpc_ctrl: lpc-ctrl@0 {
-					compatible = "aspeed,ast2400-lpc-ctrl";
-					reg = <0x0 0x10>;
-					clocks = <&syscon ASPEED_CLK_GATE_LCLK>;
-					status = "disabled";
-				};
+			lpc_reset: reset-controller@98 {
+				compatible = "aspeed,ast2400-lpc-reset";
+				reg = <0x98 0x4>;
+				#reset-cells = <1>;
+			};

-				lpc_snoop: lpc-snoop@10 {
-					compatible = "aspeed,ast2400-lpc-snoop";
-					reg = <0x10 0x8>;
-					interrupts = <8>;
-					clocks = <&syscon ASPEED_CLK_GATE_LCLK>;
-					status = "disabled";
-				};
-
-				lhc: lhc@20 {
-					compatible = "aspeed,ast2400-lhc";
-					reg = <0x20 0x24 0x48 0x8>;
-				};
-
-				lpc_reset: reset-controller@18 {
-					compatible = "aspeed,ast2400-lpc-reset";
-					reg = <0x18 0x4>;
-					#reset-cells = <1>;
-				};
-
-				ibt: ibt@c0 {
-					compatible = "aspeed,ast2400-ibt-bmc";
-					reg = <0xc0 0x18>;
-					interrupts = <8>;
-					status = "disabled";
-				};
+			ibt: ibt@140 {
+				compatible = "aspeed,ast2400-ibt-bmc";
+				reg = <0x140 0x18>;
+				interrupts = <8>;
+				status = "disabled";
 			};
 		};
+52 -69
arch/arm/boot/dts/aspeed-g5.dtsi
···
 		};

 		lpc: lpc@1e789000 {
-			compatible = "aspeed,ast2500-lpc", "simple-mfd";
+			compatible = "aspeed,ast2500-lpc-v2", "simple-mfd", "syscon";
 			reg = <0x1e789000 0x1000>;
+			reg-io-width = <4>;

 			#address-cells = <1>;
 			#size-cells = <1>;
 			ranges = <0x0 0x1e789000 0x1000>;

-			lpc_bmc: lpc-bmc@0 {
-				compatible = "aspeed,ast2500-lpc-bmc", "simple-mfd", "syscon";
-				reg = <0x0 0x80>;
-				reg-io-width = <4>;
-
-				#address-cells = <1>;
-				#size-cells = <1>;
-				ranges = <0x0 0x0 0x80>;
-
-				kcs1: kcs@24 {
-					compatible = "aspeed,ast2500-kcs-bmc-v2";
-					reg = <0x24 0x1>, <0x30 0x1>, <0x3c 0x1>;
-					interrupts = <8>;
-					status = "disabled";
-				};
-				kcs2: kcs@28 {
-					compatible = "aspeed,ast2500-kcs-bmc-v2";
-					reg = <0x28 0x1>, <0x34 0x1>, <0x40 0x1>;
-					interrupts = <8>;
-					status = "disabled";
-				};
-				kcs3: kcs@2c {
-					compatible = "aspeed,ast2500-kcs-bmc-v2";
-					reg = <0x2c 0x1>, <0x38 0x1>, <0x44 0x1>;
-					interrupts = <8>;
-					status = "disabled";
-				};
+			kcs1: kcs@24 {
+				compatible = "aspeed,ast2500-kcs-bmc-v2";
+				reg = <0x24 0x1>, <0x30 0x1>, <0x3c 0x1>;
+				interrupts = <8>;
+				status = "disabled";
 			};

-			lpc_host: lpc-host@80 {
-				compatible = "aspeed,ast2500-lpc-host", "simple-mfd", "syscon";
-				reg = <0x80 0x1e0>;
-				reg-io-width = <4>;
+			kcs2: kcs@28 {
+				compatible = "aspeed,ast2500-kcs-bmc-v2";
+				reg = <0x28 0x1>, <0x34 0x1>, <0x40 0x1>;
+				interrupts = <8>;
+				status = "disabled";
+			};

-				#address-cells = <1>;
-				#size-cells = <1>;
-				ranges = <0x0 0x80 0x1e0>;
+			kcs3: kcs@2c {
+				compatible = "aspeed,ast2500-kcs-bmc-v2";
+				reg = <0x2c 0x1>, <0x38 0x1>, <0x44 0x1>;
+				interrupts = <8>;
+				status = "disabled";
+			};

-				kcs4: kcs@94 {
-					compatible = "aspeed,ast2500-kcs-bmc-v2";
-					reg = <0x94 0x1>, <0x98 0x1>, <0x9c 0x1>;
-					interrupts = <8>;
-					status = "disabled";
-				};
+			kcs4: kcs@114 {
+				compatible = "aspeed,ast2500-kcs-bmc-v2";
+				reg = <0x114 0x1>, <0x118 0x1>, <0x11c 0x1>;
+				interrupts = <8>;
+				status = "disabled";
+			};

-				lpc_ctrl: lpc-ctrl@0 {
-					compatible = "aspeed,ast2500-lpc-ctrl";
-					reg = <0x0 0x10>;
-					clocks = <&syscon ASPEED_CLK_GATE_LCLK>;
-					status = "disabled";
-				};
+			lpc_ctrl: lpc-ctrl@80 {
+				compatible = "aspeed,ast2500-lpc-ctrl";
+				reg = <0x80 0x10>;
+				clocks = <&syscon ASPEED_CLK_GATE_LCLK>;
+				status = "disabled";
+			};

-				lpc_snoop: lpc-snoop@10 {
-					compatible = "aspeed,ast2500-lpc-snoop";
-					reg = <0x10 0x8>;
-					interrupts = <8>;
-					clocks = <&syscon ASPEED_CLK_GATE_LCLK>;
-					status = "disabled";
-				};
+			lpc_snoop: lpc-snoop@90 {
+				compatible = "aspeed,ast2500-lpc-snoop";
+				reg = <0x90 0x8>;
+				interrupts = <8>;
+				clocks = <&syscon ASPEED_CLK_GATE_LCLK>;
+				status = "disabled";
+			};

-				lpc_reset: reset-controller@18 {
-					compatible = "aspeed,ast2500-lpc-reset";
-					reg = <0x18 0x4>;
-					#reset-cells = <1>;
-				};
+			lpc_reset: reset-controller@98 {
+				compatible = "aspeed,ast2500-lpc-reset";
+				reg = <0x98 0x4>;
+				#reset-cells = <1>;
+			};

-				lhc: lhc@20 {
-					compatible = "aspeed,ast2500-lhc";
-					reg = <0x20 0x24 0x48 0x8>;
-				};
+			lhc: lhc@a0 {
+				compatible = "aspeed,ast2500-lhc";
+				reg = <0xa0 0x24 0xc8 0x8>;
+			};


-			ibt: ibt@c0 {
-				compatible = "aspeed,ast2500-ibt-bmc";
-				reg = <0xc0 0x18>;
-				interrupts = <8>;
-				status = "disabled";
-			};
+			ibt: ibt@140 {
+				compatible = "aspeed,ast2500-ibt-bmc";
+				reg = <0x140 0x18>;
+				interrupts = <8>;
+				status = "disabled";
 			};
 		};
+53 -70
arch/arm/boot/dts/aspeed-g6.dtsi
···
 		};

 		lpc: lpc@1e789000 {
-			compatible = "aspeed,ast2600-lpc", "simple-mfd";
+			compatible = "aspeed,ast2600-lpc-v2", "simple-mfd", "syscon";
 			reg = <0x1e789000 0x1000>;
+			reg-io-width = <4>;

 			#address-cells = <1>;
 			#size-cells = <1>;
 			ranges = <0x0 0x1e789000 0x1000>;

-			lpc_bmc: lpc-bmc@0 {
-				compatible = "aspeed,ast2600-lpc-bmc", "simple-mfd", "syscon";
-				reg = <0x0 0x80>;
-				reg-io-width = <4>;
-
-				#address-cells = <1>;
-				#size-cells = <1>;
-				ranges = <0x0 0x0 0x80>;
-
-				kcs1: kcs@24 {
-					compatible = "aspeed,ast2500-kcs-bmc-v2";
-					reg = <0x24 0x1>, <0x30 0x1>, <0x3c 0x1>;
-					interrupts = <GIC_SPI 138 IRQ_TYPE_LEVEL_HIGH>;
-					kcs_chan = <1>;
-					status = "disabled";
-				};
-				kcs2: kcs@28 {
-					compatible = "aspeed,ast2500-kcs-bmc-v2";
-					reg = <0x28 0x1>, <0x34 0x1>, <0x40 0x1>;
-					interrupts = <GIC_SPI 139 IRQ_TYPE_LEVEL_HIGH>;
-					status = "disabled";
-				};
-				kcs3: kcs@2c {
-					compatible = "aspeed,ast2500-kcs-bmc-v2";
-					reg = <0x2c 0x1>, <0x38 0x1>, <0x44 0x1>;
-					interrupts = <GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>;
-					status = "disabled";
-				};
+			kcs1: kcs@24 {
+				compatible = "aspeed,ast2500-kcs-bmc-v2";
+				reg = <0x24 0x1>, <0x30 0x1>, <0x3c 0x1>;
+				interrupts = <GIC_SPI 138 IRQ_TYPE_LEVEL_HIGH>;
+				kcs_chan = <1>;
+				status = "disabled";
 			};

-			lpc_host: lpc-host@80 {
-				compatible = "aspeed,ast2600-lpc-host", "simple-mfd", "syscon";
-				reg = <0x80 0x1e0>;
-				reg-io-width = <4>;
+			kcs2: kcs@28 {
+				compatible = "aspeed,ast2500-kcs-bmc-v2";
+				reg = <0x28 0x1>, <0x34 0x1>, <0x40 0x1>;
+				interrupts = <GIC_SPI 139 IRQ_TYPE_LEVEL_HIGH>;
+				status = "disabled";
+			};

-				#address-cells = <1>;
-				#size-cells = <1>;
-				ranges = <0x0 0x80 0x1e0>;
+			kcs3: kcs@2c {
+				compatible = "aspeed,ast2500-kcs-bmc-v2";
+				reg = <0x2c 0x1>, <0x38 0x1>, <0x44 0x1>;
+				interrupts = <GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>;
+				status = "disabled";
+			};

-				kcs4: kcs@94 {
-					compatible = "aspeed,ast2500-kcs-bmc-v2";
-					reg = <0x94 0x1>, <0x98 0x1>, <0x9c 0x1>;
-					interrupts = <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>;
-					status = "disabled";
-				};
+			kcs4: kcs@114 {
+				compatible = "aspeed,ast2500-kcs-bmc-v2";
+				reg = <0x114 0x1>, <0x118 0x1>, <0x11c 0x1>;
+				interrupts = <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>;
+				status = "disabled";
+			};

-				lpc_ctrl: lpc-ctrl@0 {
-					compatible = "aspeed,ast2600-lpc-ctrl";
-					reg = <0x0 0x80>;
-					clocks = <&syscon ASPEED_CLK_GATE_LCLK>;
-					status = "disabled";
-				};
+			lpc_ctrl: lpc-ctrl@80 {
+				compatible = "aspeed,ast2600-lpc-ctrl";
+				reg = <0x80 0x80>;
+				clocks = <&syscon ASPEED_CLK_GATE_LCLK>;
+				status = "disabled";
+			};

-				lpc_snoop: lpc-snoop@0 {
-					compatible = "aspeed,ast2600-lpc-snoop";
-					reg = <0x0 0x80>;
-					interrupts = <GIC_SPI 144 IRQ_TYPE_LEVEL_HIGH>;
-					clocks = <&syscon ASPEED_CLK_GATE_LCLK>;
-					status = "disabled";
-				};
+			lpc_snoop: lpc-snoop@80 {
+				compatible = "aspeed,ast2600-lpc-snoop";
+				reg = <0x80 0x80>;
+				interrupts = <GIC_SPI 144 IRQ_TYPE_LEVEL_HIGH>;
+				clocks = <&syscon ASPEED_CLK_GATE_LCLK>;
+				status = "disabled";
+			};

-				lhc: lhc@20 {
-					compatible = "aspeed,ast2600-lhc";
-					reg = <0x20 0x24 0x48 0x8>;
-				};
+			lhc: lhc@a0 {
+				compatible = "aspeed,ast2600-lhc";
+				reg = <0xa0 0x24 0xc8 0x8>;
+			};

-				lpc_reset: reset-controller@18 {
-					compatible = "aspeed,ast2600-lpc-reset";
-					reg = <0x18 0x4>;
-					#reset-cells = <1>;
-				};
+			lpc_reset: reset-controller@98 {
+				compatible = "aspeed,ast2600-lpc-reset";
+				reg = <0x98 0x4>;
+				#reset-cells = <1>;
+			};

-			ibt: ibt@c0 {
-				compatible = "aspeed,ast2600-ibt-bmc";
-				reg = <0xc0 0x18>;
-				interrupts = <GIC_SPI 143 IRQ_TYPE_LEVEL_HIGH>;
-				status = "disabled";
-			};
+			ibt: ibt@140 {
+				compatible = "aspeed,ast2600-ibt-bmc";
+				reg = <0x140 0x18>;
+				interrupts = <GIC_SPI 143 IRQ_TYPE_LEVEL_HIGH>;
+				status = "disabled";
 			};
 		};
+1 -1
arch/arm/configs/multi_v7_defconfig
···
 CONFIG_ARCH_MSM8974=y
 CONFIG_ARCH_ROCKCHIP=y
 CONFIG_ARCH_RENESAS=y
-CONFIG_ARCH_SOCFPGA=y
+CONFIG_ARCH_INTEL_SOCFPGA=y
 CONFIG_PLAT_SPEAR=y
 CONFIG_ARCH_SPEAR13XX=y
 CONFIG_MACH_SPEAR1310=y
+1 -1
arch/arm/configs/socfpga_defconfig
···
 CONFIG_BLK_DEV_INITRD=y
 CONFIG_EMBEDDED=y
 CONFIG_PROFILING=y
-CONFIG_ARCH_SOCFPGA=y
+CONFIG_ARCH_INTEL_SOCFPGA=y
 CONFIG_ARM_THUMBEE=y
 CONFIG_SMP=y
 CONFIG_NR_CPUS=2
+1 -1
arch/arm/mach-omap2/pdata-quirks.c
···
 	"prm",
 };

-void __init
+static void __init
 pdata_quirks_init_clocks(const struct of_device_id *omap_dt_match_table)
 {
 	struct device_node *np;
+2 -2
arch/arm/mach-socfpga/Kconfig
···
 # SPDX-License-Identifier: GPL-2.0-only
-menuconfig ARCH_SOCFPGA
+menuconfig ARCH_INTEL_SOCFPGA
 	bool "Altera SOCFPGA family"
 	depends on ARCH_MULTI_V7
 	select ARCH_SUPPORTS_BIG_ENDIAN
···
 	select PL310_ERRATA_753970 if PL310
 	select PL310_ERRATA_769419

-if ARCH_SOCFPGA
+if ARCH_INTEL_SOCFPGA
 config SOCFPGA_SUSPEND
 	bool "Suspend to RAM on SOCFPGA"
 	help
+4 -13
arch/arm64/Kconfig.platforms
···
 	help
 	  This enables support for the Actions Semiconductor S900 SoC family.

-config ARCH_AGILEX
-	bool "Intel's Agilex SoCFPGA Family"
-	help
-	  This enables support for Intel's Agilex SoCFPGA Family.
-
-config ARCH_N5X
-	bool "Intel's eASIC N5X SoCFPGA Family"
-	help
-	  This enables support for Intel's eASIC N5X SoCFPGA Family.
-
 config ARCH_SUNXI
 	bool "Allwinner sunxi 64-bit SoC Family"
 	select ARCH_HAS_RESET_CONTROLLER
···
 	help
 	  This enables support for AMD Seattle SOC Family

-config ARCH_STRATIX10
-	bool "Altera's Stratix 10 SoCFPGA Family"
+config ARCH_INTEL_SOCFPGA
+	bool "Intel's SoCFPGA ARMv8 Families"
 	help
-	  This enables support for Altera's Stratix 10 SoCFPGA Family.
+	  This enables support for Intel's SoCFPGA ARMv8 families:
+	  Stratix 10 (ex. Altera), Agilex and eASIC N5X.

 config ARCH_SYNQUACER
 	bool "Socionext SynQuacer SoC Family"
+1 -1
arch/arm64/boot/dts/altera/Makefile
···
 # SPDX-License-Identifier: GPL-2.0-only
-dtb-$(CONFIG_ARCH_STRATIX10) += socfpga_stratix10_socdk.dtb \
+dtb-$(CONFIG_ARCH_INTEL_SOCFPGA) += socfpga_stratix10_socdk.dtb \
 				socfpga_stratix10_socdk_nand.dtb
+3 -3
arch/arm64/boot/dts/intel/Makefile
···
 # SPDX-License-Identifier: GPL-2.0-only
-dtb-$(CONFIG_ARCH_AGILEX) += socfpga_agilex_socdk.dtb \
-			     socfpga_agilex_socdk_nand.dtb
+dtb-$(CONFIG_ARCH_INTEL_SOCFPGA) += socfpga_agilex_socdk.dtb \
+				    socfpga_agilex_socdk_nand.dtb \
+				    socfpga_n5x_socdk.dtb
 dtb-$(CONFIG_ARCH_KEEMBAY) += keembay-evm.dtb
-dtb-$(CONFIG_ARCH_N5X) += socfpga_n5x_socdk.dtb
+1 -1
arch/arm64/configs/defconfig
···
 CONFIG_ARCH_ROCKCHIP=y
 CONFIG_ARCH_S32=y
 CONFIG_ARCH_SEATTLE=y
-CONFIG_ARCH_STRATIX10=y
+CONFIG_ARCH_INTEL_SOCFPGA=y
 CONFIG_ARCH_SYNQUACER=y
 CONFIG_ARCH_TEGRA=y
 CONFIG_ARCH_SPRD=y
+3 -1
drivers/bus/qcom-ebi2.c
···

 	/* Figure out the chipselect */
 	ret = of_property_read_u32(child, "reg", &csindex);
-	if (ret)
+	if (ret) {
+		of_node_put(child);
 		return ret;
+	}

 	if (csindex > 5) {
 		dev_err(dev,
+3 -3
drivers/bus/ti-sysc.c
···
  * limit for clk_get(). If cl ever needs to be freed, it should be done
  * with clkdev_drop().
  */
-	cl = kcalloc(1, sizeof(*cl), GFP_KERNEL);
+	cl = kzalloc(sizeof(*cl), GFP_KERNEL);
 	if (!cl)
 		return -ENOMEM;
···
 	case SOC_UNKNOWN:
 	default:
 		return 0;
-	};
+	}

 	/* Remap the whole module range to be able to reset dispc outputs */
 	devm_iounmap(ddata->dev, ddata->module_va);
···
 		break;
 	default:
 		break;
-	};
+	}
 }

 	match = soc_device_match(sysc_soc_feat_match);
+16 -11
drivers/char/ipmi/kcs_bmc_aspeed.c
···

 #define KCS_CHANNEL_MAX     4

-/* mapped to lpc-bmc@0 IO space */
 #define LPC_HICR0            0x000
 #define LPC_HICR0_LPC3E          BIT(7)
 #define LPC_HICR0_LPC2E          BIT(6)
···
 #define LPC_STR1             0x03C
 #define LPC_STR2             0x040
 #define LPC_STR3             0x044
-
-/* mapped to lpc-host@80 IO space */
-#define LPC_HICRB            0x080
+#define LPC_HICRB            0x100
 #define LPC_HICRB_IBFIF4         BIT(1)
 #define LPC_HICRB_LPC4E          BIT(0)
-#define LPC_LADR4            0x090
-#define LPC_IDR4             0x094
-#define LPC_ODR4             0x098
-#define LPC_STR4             0x09C
+#define LPC_LADR4            0x110
+#define LPC_IDR4             0x114
+#define LPC_ODR4             0x118
+#define LPC_STR4             0x11C

 struct aspeed_kcs_bmc {
 	struct regmap *map;
···
 	struct device_node *np;
 	int rc;

-	np = pdev->dev.of_node;
+	np = dev->of_node->parent;
+	if (!of_device_is_compatible(np, "aspeed,ast2400-lpc-v2") &&
+	    !of_device_is_compatible(np, "aspeed,ast2500-lpc-v2") &&
+	    !of_device_is_compatible(np, "aspeed,ast2600-lpc-v2")) {
+		dev_err(dev, "unsupported LPC device binding\n");
+		return -ENODEV;
+	}
+
+	np = dev->of_node;
 	if (of_device_is_compatible(np, "aspeed,ast2400-kcs-bmc") ||
-	    of_device_is_compatible(np, "aspeed,ast2500-kcs-bmc"))
+	    of_device_is_compatible(np, "aspeed,ast2500-kcs-bmc"))
 		kcs_bmc = aspeed_kcs_probe_of_v1(pdev);
 	else if (of_device_is_compatible(np, "aspeed,ast2400-kcs-bmc-v2") ||
-		 of_device_is_compatible(np, "aspeed,ast2500-kcs-bmc-v2"))
+		 of_device_is_compatible(np, "aspeed,ast2500-kcs-bmc-v2"))
 		kcs_bmc = aspeed_kcs_probe_of_v2(pdev);
 	else
 		return -EINVAL;
+1
drivers/clk/Kconfig
···
 source "drivers/clk/rockchip/Kconfig"
 source "drivers/clk/samsung/Kconfig"
 source "drivers/clk/sifive/Kconfig"
+source "drivers/clk/socfpga/Kconfig"
 source "drivers/clk/sprd/Kconfig"
 source "drivers/clk/sunxi/Kconfig"
 source "drivers/clk/sunxi-ng/Kconfig"
+1 -3
drivers/clk/Makefile
···
 obj-$(CONFIG_ARCH_ROCKCHIP)		+= rockchip/
 obj-$(CONFIG_COMMON_CLK_SAMSUNG)	+= samsung/
 obj-$(CONFIG_CLK_SIFIVE)		+= sifive/
-obj-$(CONFIG_ARCH_SOCFPGA)		+= socfpga/
-obj-$(CONFIG_ARCH_AGILEX)		+= socfpga/
-obj-$(CONFIG_ARCH_STRATIX10)		+= socfpga/
+obj-y					+= socfpga/
 obj-$(CONFIG_PLAT_SPEAR)		+= spear/
 obj-y					+= sprd/
 obj-$(CONFIG_ARCH_STI)			+= st/
+1 -1
drivers/clk/bcm/clk-raspberrypi.c
···
 		return -ENOENT;
 	}

-	firmware = rpi_firmware_get(firmware_node);
+	firmware = devm_rpi_firmware_get(&pdev->dev, firmware_node);
 	of_node_put(firmware_node);
 	if (!firmware)
 		return -EPROBE_DEFER;
+18 -10
drivers/clk/clk-scmi.c
···
 /*
  * System Control and Power Interface (SCMI) Protocol based clock driver
  *
- * Copyright (C) 2018 ARM Ltd.
+ * Copyright (C) 2018-2021 ARM Ltd.
  */

 #include <linux/clk-provider.h>
···
 #include <linux/scmi_protocol.h>
 #include <asm/div64.h>

+static const struct scmi_clk_proto_ops *scmi_proto_clk_ops;
+
 struct scmi_clk {
 	u32 id;
 	struct clk_hw hw;
 	const struct scmi_clock_info *info;
-	const struct scmi_handle *handle;
+	const struct scmi_protocol_handle *ph;
 };

 #define to_scmi_clk(clk) container_of(clk, struct scmi_clk, hw)
···
 	u64 rate;
 	struct scmi_clk *clk = to_scmi_clk(hw);

-	ret = clk->handle->clk_ops->rate_get(clk->handle, clk->id, &rate);
+	ret = scmi_proto_clk_ops->rate_get(clk->ph, clk->id, &rate);
 	if (ret)
 		return 0;
 	return rate;
···
 {
 	struct scmi_clk *clk = to_scmi_clk(hw);

-	return clk->handle->clk_ops->rate_set(clk->handle, clk->id, rate);
+	return scmi_proto_clk_ops->rate_set(clk->ph, clk->id, rate);
 }

 static int scmi_clk_enable(struct clk_hw *hw)
 {
 	struct scmi_clk *clk = to_scmi_clk(hw);

-	return clk->handle->clk_ops->enable(clk->handle, clk->id);
+	return scmi_proto_clk_ops->enable(clk->ph, clk->id);
 }

 static void scmi_clk_disable(struct clk_hw *hw)
 {
 	struct scmi_clk *clk = to_scmi_clk(hw);

-	clk->handle->clk_ops->disable(clk->handle, clk->id);
+	scmi_proto_clk_ops->disable(clk->ph, clk->id);
 }

 static const struct clk_ops scmi_clk_ops = {
···
 	struct device *dev = &sdev->dev;
 	struct device_node *np = dev->of_node;
 	const struct scmi_handle *handle = sdev->handle;
+	struct scmi_protocol_handle *ph;

-	if (!handle || !handle->clk_ops)
+	if (!handle)
 		return -ENODEV;

-	count = handle->clk_ops->count_get(handle);
+	scmi_proto_clk_ops =
+		handle->devm_protocol_get(sdev, SCMI_PROTOCOL_CLOCK, &ph);
+	if (IS_ERR(scmi_proto_clk_ops))
+		return PTR_ERR(scmi_proto_clk_ops);
+
+	count = scmi_proto_clk_ops->count_get(ph);
 	if (count < 0) {
 		dev_err(dev, "%pOFn: invalid clock output count\n", np);
 		return -EINVAL;
···
 		if (!sclk)
 			return -ENOMEM;

-		sclk->info = handle->clk_ops->info_get(handle, idx);
+		sclk->info = scmi_proto_clk_ops->info_get(ph, idx);
 		if (!sclk->info) {
 			dev_dbg(dev, "invalid clock info for idx %d\n", idx);
 			continue;
 		}

 		sclk->id = idx;
-		sclk->handle = handle;
+		sclk->ph = ph;

 		err = scmi_clk_ops_init(dev, sclk);
 		if (err) {
+19
drivers/clk/socfpga/Kconfig
···
+# SPDX-License-Identifier: GPL-2.0
+config CLK_INTEL_SOCFPGA
+	bool "Intel SoCFPGA family clock support" if COMPILE_TEST && !ARCH_INTEL_SOCFPGA
+	default ARCH_INTEL_SOCFPGA
+	help
+	  Support for the clock controllers present on Intel SoCFPGA and eASIC
+	  devices like Aria, Cyclone, Stratix 10, Agilex and N5X eASIC.
+
+if CLK_INTEL_SOCFPGA
+
+config CLK_INTEL_SOCFPGA32
+	bool "Intel Aria / Cyclone clock controller support" if COMPILE_TEST && (!ARM || !ARCH_INTEL_SOCFPGA)
+	default ARM && ARCH_INTEL_SOCFPGA
+
+config CLK_INTEL_SOCFPGA64
+	bool "Intel Stratix / Agilex / N5X clock controller support" if COMPILE_TEST && (!ARM64 || !ARCH_INTEL_SOCFPGA)
+	default ARM64 && ARCH_INTEL_SOCFPGA
+
+endif # CLK_INTEL_SOCFPGA
+5 -6
drivers/clk/socfpga/Makefile
···
 # SPDX-License-Identifier: GPL-2.0
-obj-$(CONFIG_ARCH_SOCFPGA) += clk.o clk-gate.o clk-pll.o clk-periph.o
-obj-$(CONFIG_ARCH_SOCFPGA) += clk-pll-a10.o clk-periph-a10.o clk-gate-a10.o
-obj-$(CONFIG_ARCH_STRATIX10) += clk-s10.o
-obj-$(CONFIG_ARCH_STRATIX10) += clk-pll-s10.o clk-periph-s10.o clk-gate-s10.o
-obj-$(CONFIG_ARCH_AGILEX) += clk-agilex.o
-obj-$(CONFIG_ARCH_AGILEX) += clk-pll-s10.o clk-periph-s10.o clk-gate-s10.o
+obj-$(CONFIG_CLK_INTEL_SOCFPGA32) += clk.o clk-gate.o clk-pll.o clk-periph.o \
+				     clk-pll-a10.o clk-periph-a10.o clk-gate-a10.o
+obj-$(CONFIG_CLK_INTEL_SOCFPGA64) += clk-s10.o \
+				     clk-pll-s10.o clk-periph-s10.o clk-gate-s10.o \
+				     clk-agilex.o
-12
drivers/clk/tegra/clk-pll.c
···
 	pll_writel(val, PLLE_SS_CTRL, pll);
 	udelay(1);

-	val = pll_readl_misc(pll);
-	val &= ~PLLE_MISC_IDDQ_SW_CTRL;
-	pll_writel_misc(val, pll);
-
-	val = pll_readl(pll->params->aux_reg, pll);
-	val |= (PLLE_AUX_USE_LOCKDET | PLLE_AUX_SS_SEQ_INCLUDE);
-	val &= ~(PLLE_AUX_ENABLE_SWCTL | PLLE_AUX_SS_SWCTL);
-	pll_writel(val, pll->params->aux_reg, pll);
-	udelay(1);
-	val |= PLLE_AUX_SEQ_ENABLE;
-	pll_writel(val, pll->params->aux_reg, pll);
-
 out:
 	if (pll->lock)
 		spin_unlock_irqrestore(pll->lock, flags);
+52 -1
drivers/clk/tegra/clk-tegra210.c
···
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * Copyright (c) 2012-2014 NVIDIA CORPORATION.  All rights reserved.
+ * Copyright (c) 2012-2020 NVIDIA CORPORATION.  All rights reserved.
  */

 #include <linux/io.h>
···
 #define PLLRE_BASE_DEFAULT_MASK		0x1c000000
 #define PLLRE_MISC0_WRITE_MASK		0x67ffffff

+/* PLLE */
+#define PLLE_MISC_IDDQ_SW_CTRL		(1 << 14)
+#define PLLE_AUX_USE_LOCKDET		(1 << 3)
+#define PLLE_AUX_SS_SEQ_INCLUDE		(1 << 31)
+#define PLLE_AUX_ENABLE_SWCTL		(1 << 4)
+#define PLLE_AUX_SS_SWCTL		(1 << 6)
+#define PLLE_AUX_SEQ_ENABLE		(1 << 24)
+
 /* PLLX */
 #define PLLX_USE_DYN_RAMP		1
 #define PLLX_BASE_LOCK			(1 << 27)
···
 #define PLLU_MISC0_WRITE_MASK		0xbfffffff
 #define PLLU_MISC1_WRITE_MASK		0x00000007
+
+bool tegra210_plle_hw_sequence_is_enabled(void)
+{
+	u32 value;
+
+	value = readl_relaxed(clk_base + PLLE_AUX);
+	if (value & PLLE_AUX_SEQ_ENABLE)
+		return true;
+
+	return false;
+}
+EXPORT_SYMBOL_GPL(tegra210_plle_hw_sequence_is_enabled);
+
+int tegra210_plle_hw_sequence_start(void)
+{
+	u32 value;
+
+	if (tegra210_plle_hw_sequence_is_enabled())
+		return 0;
+
+	/* skip if PLLE is not enabled yet */
+	value = readl_relaxed(clk_base + PLLE_MISC0);
+	if (!(value & PLLE_MISC_LOCK))
+		return -EIO;
+
+	value &= ~PLLE_MISC_IDDQ_SW_CTRL;
+	writel_relaxed(value, clk_base + PLLE_MISC0);
+
+	value = readl_relaxed(clk_base + PLLE_AUX);
+	value |= (PLLE_AUX_USE_LOCKDET | PLLE_AUX_SS_SEQ_INCLUDE);
+	value &= ~(PLLE_AUX_ENABLE_SWCTL | PLLE_AUX_SS_SWCTL);
+	writel_relaxed(value, clk_base + PLLE_AUX);
+
+	fence_udelay(1, clk_base);
+
+	value |= PLLE_AUX_SEQ_ENABLE;
+	writel_relaxed(value, clk_base + PLLE_AUX);
+
+	fence_udelay(1, clk_base);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(tegra210_plle_hw_sequence_start);

 void tegra210_xusb_pll_hw_control_enable(void)
 {
+73 -34
drivers/cpufreq/scmi-cpufreq.c
···
 /*
  * System Control and Power Interface (SCMI) based CPUFreq Interface driver
  *
- * Copyright (C) 2018 ARM Ltd.
+ * Copyright (C) 2018-2021 ARM Ltd.
  * Sudeep Holla <sudeep.holla@arm.com>
  */
···
 	struct device *cpu_dev;
 };

-static const struct scmi_handle *handle;
+static struct scmi_protocol_handle *ph;
+static const struct scmi_perf_proto_ops *perf_ops;

 static unsigned int scmi_cpufreq_get_rate(unsigned int cpu)
 {
 	struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu);
-	const struct scmi_perf_ops *perf_ops = handle->perf_ops;
 	struct scmi_data *priv = policy->driver_data;
 	unsigned long rate;
 	int ret;

-	ret = perf_ops->freq_get(handle, priv->domain_id, &rate, false);
+	ret = perf_ops->freq_get(ph, priv->domain_id, &rate, false);
 	if (ret)
 		return 0;
 	return rate / 1000;
···
 scmi_cpufreq_set_target(struct cpufreq_policy *policy, unsigned int index)
 {
 	struct scmi_data *priv = policy->driver_data;
-	const struct scmi_perf_ops *perf_ops = handle->perf_ops;
 	u64 freq = policy->freq_table[index].frequency;

-	return perf_ops->freq_set(handle, priv->domain_id, freq * 1000, false);
+	return perf_ops->freq_set(ph, priv->domain_id, freq * 1000, false);
 }

 static unsigned int scmi_cpufreq_fast_switch(struct cpufreq_policy *policy,
 					     unsigned int target_freq)
 {
 	struct scmi_data *priv = policy->driver_data;
-	const struct scmi_perf_ops *perf_ops = handle->perf_ops;

-	if (!perf_ops->freq_set(handle, priv->domain_id,
+	if (!perf_ops->freq_set(ph, priv->domain_id,
 				target_freq * 1000, true))
 		return target_freq;
···
 	int cpu, domain, tdomain;
 	struct device *tcpu_dev;

-	domain = handle->perf_ops->device_domain_id(cpu_dev);
+	domain = perf_ops->device_domain_id(cpu_dev);
 	if (domain < 0)
 		return domain;
···
 		if (!tcpu_dev)
 			continue;

-		tdomain = handle->perf_ops->device_domain_id(tcpu_dev);
+		tdomain = perf_ops->device_domain_id(tcpu_dev);
 		if (tdomain == domain)
 			cpumask_set_cpu(cpu, cpumask);
 	}
···
 	unsigned long Hz;
 	int ret, domain;

-	domain = handle->perf_ops->device_domain_id(cpu_dev);
+	domain = perf_ops->device_domain_id(cpu_dev);
 	if (domain < 0)
 		return domain;

 	/* Get the power cost of the performance domain. */
 	Hz = *KHz * 1000;
-	ret = handle->perf_ops->est_power_get(handle, domain, &Hz, power);
+	ret = perf_ops->est_power_get(ph, domain, &Hz, power);
 	if (ret)
 		return ret;
···
 	struct scmi_data *priv;
 	struct cpufreq_frequency_table *freq_table;
 	struct em_data_callback em_cb = EM_DATA_CB(scmi_get_cpu_power);
+	cpumask_var_t opp_shared_cpus;
 	bool power_scale_mw;

 	cpu_dev = get_cpu_device(policy->cpu);
···
 		return -ENODEV;
 	}

-	ret = handle->perf_ops->device_opps_add(handle, cpu_dev);
-	if (ret) {
-		dev_warn(cpu_dev, "failed to add opps to the device\n");
-		return ret;
-	}
+	if (!zalloc_cpumask_var(&opp_shared_cpus, GFP_KERNEL))
+		ret = -ENOMEM;

+	/* Obtain CPUs that share SCMI performance controls */
 	ret = scmi_get_sharing_cpus(cpu_dev, policy->cpus);
 	if (ret) {
 		dev_warn(cpu_dev, "failed to get sharing cpumask\n");
-		return ret;
+		goto out_free_cpumask;
 	}

-	ret = dev_pm_opp_set_sharing_cpus(cpu_dev, policy->cpus);
-	if (ret) {
-		dev_err(cpu_dev, "%s: failed to mark OPPs as shared: %d\n",
-			__func__, ret);
-		return ret;
+	/*
+	 * Obtain CPUs that share performance levels.
+	 * The OPP 'sharing cpus' info may come from DT through an empty opp
+	 * table and opp-shared.
+	 */
+	ret = dev_pm_opp_of_get_sharing_cpus(cpu_dev, opp_shared_cpus);
+	if (ret || !cpumask_weight(opp_shared_cpus)) {
+		/*
+		 * Either opp-table is not set or no opp-shared was found.
+		 * Use the CPU mask from SCMI to designate CPUs sharing an OPP
+		 * table.
+		 */
+		cpumask_copy(opp_shared_cpus, policy->cpus);
 	}

+	/*
+	 * A previous CPU may have marked OPPs as shared for a few CPUs, based on
+	 * what OPP core provided. If the current CPU is part of those few, then
+	 * there is no need to add OPPs again.
+	 */
 	nr_opp = dev_pm_opp_get_opp_count(cpu_dev);
 	if (nr_opp <= 0) {
-		dev_dbg(cpu_dev, "OPP table is not ready, deferring probe\n");
-		ret = -EPROBE_DEFER;
-		goto out_free_opp;
+		ret = perf_ops->device_opps_add(ph, cpu_dev);
+		if (ret) {
+			dev_warn(cpu_dev, "failed to add opps to the device\n");
+			goto out_free_cpumask;
+		}
+
+		nr_opp = dev_pm_opp_get_opp_count(cpu_dev);
+		if (nr_opp <= 0) {
+			dev_err(cpu_dev, "%s: No OPPs for this device: %d\n",
+				__func__, ret);
+
+			ret = -ENODEV;
+			goto out_free_opp;
+		}
+
+		ret = dev_pm_opp_set_sharing_cpus(cpu_dev, opp_shared_cpus);
+		if (ret) {
+			dev_err(cpu_dev, "%s: failed to mark OPPs as shared: %d\n",
+				__func__, ret);
+
+			goto out_free_opp;
+		}
+
+		power_scale_mw = perf_ops->power_scale_mw_get(ph);
+		em_dev_register_perf_domain(cpu_dev, nr_opp, &em_cb,
+					    opp_shared_cpus, power_scale_mw);
 	}

 	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
···
 	}

 	priv->cpu_dev = cpu_dev;
-	priv->domain_id = handle->perf_ops->device_domain_id(cpu_dev);
+	priv->domain_id = perf_ops->device_domain_id(cpu_dev);

 	policy->driver_data = priv;
 	policy->freq_table = freq_table;
···
 	/* SCMI allows DVFS request for any domain from any CPU */
 	policy->dvfs_possible_from_any_cpu = true;

-	latency = handle->perf_ops->transition_latency_get(handle, cpu_dev);
+	latency = perf_ops->transition_latency_get(ph, cpu_dev);
 	if (!latency)
 		latency = CPUFREQ_ETERNAL;

 	policy->cpuinfo.transition_latency = latency;

 	policy->fast_switch_possible =
-		handle->perf_ops->fast_switch_possible(handle, cpu_dev);
+		perf_ops->fast_switch_possible(ph, cpu_dev);

-	power_scale_mw = handle->perf_ops->power_scale_mw_get(handle);
-	em_dev_register_perf_domain(cpu_dev, nr_opp, &em_cb, policy->cpus,
-				    power_scale_mw);
-
+	free_cpumask_var(opp_shared_cpus);
 	return 0;

 out_free_priv:
 	kfree(priv);
+
 out_free_opp:
 	dev_pm_opp_remove_all_dynamic(cpu_dev);
+
+out_free_cpumask:
+	free_cpumask_var(opp_shared_cpus);

 	return ret;
 }
···
 {
 	int ret;
 	struct device *dev = &sdev->dev;
+	const struct scmi_handle *handle;

 	handle = sdev->handle;

-	if (!handle || !handle->perf_ops)
+	if (!handle)
 		return -ENODEV;
+
+	perf_ops = handle->devm_protocol_get(sdev, SCMI_PROTOCOL_PERF, &ph);
+	if (IS_ERR(perf_ops))
+		return PTR_ERR(perf_ops);

 #ifdef CONFIG_COMMON_CLK
 	/* dummy clock provider as needed by OPP if clocks property is used */
+1 -1
drivers/dma/Kconfig
···

 config AXI_DMAC
 	tristate "Analog Devices AXI-DMAC DMA support"
-	depends on MICROBLAZE || NIOS2 || ARCH_ZYNQ || ARCH_ZYNQMP || ARCH_SOCFPGA || COMPILE_TEST
+	depends on MICROBLAZE || NIOS2 || ARCH_ZYNQ || ARCH_ZYNQMP || ARCH_INTEL_SOCFPGA || COMPILE_TEST
 	select DMA_ENGINE
 	select DMA_VIRTUAL_CHANNELS
 	select REGMAP_MMIO
+1 -1
drivers/edac/Kconfig
···

 config EDAC_ALTERA
 	bool "Altera SOCFPGA ECC"
-	depends on EDAC=y && (ARCH_SOCFPGA || ARCH_STRATIX10)
+	depends on EDAC=y && ARCH_INTEL_SOCFPGA
 	help
 	  Support for error detection and correction on the
 	  Altera SOCs.  This is the global enable for the
+11 -6
drivers/edac/altera_edac.c
···
 	dci->mod_name = ecc_name;
 	dci->dev_name = ecc_name;

-	/* Update the PortB IRQs - A10 has 4, S10 has 2, Index accordingly */
-#ifdef CONFIG_ARCH_STRATIX10
+	/*
+	 * Update the PortB IRQs - A10 has 4, S10 has 2, Index accordingly
+	 *
+	 * FIXME: Instead of ifdefs with different architectures the driver
+	 * should properly use compatibles.
+	 */
+#ifdef CONFIG_64BIT
 	altdev->sb_irq = irq_of_parse_and_map(np, 1);
 #else
 	altdev->sb_irq = irq_of_parse_and_map(np, 2);
···
 		goto err_release_group_1;
 	}

-#ifdef CONFIG_ARCH_STRATIX10
+#ifdef CONFIG_64BIT
 	/* Use IRQ to determine SError origin instead of assigning IRQ */
 	rc = of_property_read_u32_index(np, "interrupts", 1, &altdev->db_irq);
 	if (rc) {
···
 		goto err_release_group1;
 	}

-#ifdef CONFIG_ARCH_STRATIX10
+#ifdef CONFIG_64BIT
 	/* Use IRQ to determine SError origin instead of assigning IRQ */
 	rc = of_property_read_u32_index(np, "interrupts", 0, &altdev->db_irq);
 	if (rc) {
···
 /************** Stratix 10 EDAC Double Bit Error Handler ************/
 #define to_a10edac(p, m) container_of(p, struct altr_arria10_edac, m)

-#ifdef CONFIG_ARCH_STRATIX10
+#ifdef CONFIG_64BIT
 /* panic routine issues reboot on non-zero panic_timeout */
 extern int panic_timeout;
···
 					altr_edac_a10_irq_handler,
 					edac);

-#ifdef CONFIG_ARCH_STRATIX10
+#ifdef CONFIG_64BIT
 	{
 		int dberror, err_addr;
+1 -1
drivers/firmware/Kconfig
···

 config INTEL_STRATIX10_SERVICE
 	tristate "Intel Stratix10 Service Layer"
-	depends on (ARCH_STRATIX10 || ARCH_AGILEX) && HAVE_ARM_SMCCC
+	depends on ARCH_INTEL_SOCFPGA && ARM64 && HAVE_ARM_SMCCC
 	default n
 	help
 	  Intel Stratix10 service layer runs at privileged exception level,
+79 -63
drivers/firmware/arm_scmi/base.c
···
 /*
  * System Control and Management Interface (SCMI) Base Protocol
  *
- * Copyright (C) 2018 ARM Ltd.
+ * Copyright (C) 2018-2021 ARM Ltd.
  */

 #define pr_fmt(fmt) "SCMI Notifications BASE - " fmt

+#include <linux/module.h>
 #include <linux/scmi_protocol.h>

 #include "common.h"
···
 /**
  * scmi_base_attributes_get() - gets the implementation details
  *	that are associated with the base protocol.
  *
- * @handle: SCMI entity handle
+ * @ph: SCMI protocol handle
  *
  * Return: 0 on success, else appropriate SCMI error.
  */
-static int scmi_base_attributes_get(const struct scmi_handle *handle)
+static int scmi_base_attributes_get(const struct scmi_protocol_handle *ph)
 {
 	int ret;
 	struct scmi_xfer *t;
 	struct scmi_msg_resp_base_attributes *attr_info;
-	struct scmi_revision_info *rev = handle->version;
+	struct scmi_revision_info *rev = ph->get_priv(ph);

-	ret = scmi_xfer_get_init(handle, PROTOCOL_ATTRIBUTES,
-				 SCMI_PROTOCOL_BASE, 0, sizeof(*attr_info), &t);
+	ret = ph->xops->xfer_get_init(ph, PROTOCOL_ATTRIBUTES,
+				      0, sizeof(*attr_info), &t);
 	if (ret)
 		return ret;

-	ret = scmi_do_xfer(handle, t);
+	ret = ph->xops->do_xfer(ph, t);
 	if (!ret) {
 		attr_info = t->rx.buf;
 		rev->num_protocols = attr_info->num_protocols;
 		rev->num_agents = attr_info->num_agents;
 	}

-	scmi_xfer_put(handle, t);
+	ph->xops->xfer_put(ph, t);

 	return ret;
 }
···
 /**
  * scmi_base_vendor_id_get() - gets vendor/subvendor identifier ASCII string.
  *
- * @handle: SCMI entity handle
+ * @ph: SCMI protocol handle
  * @sub_vendor: specify true if sub-vendor ID is needed
  *
  * Return: 0 on success, else appropriate SCMI error.
  */
 static int
-scmi_base_vendor_id_get(const struct scmi_handle *handle, bool sub_vendor)
+scmi_base_vendor_id_get(const struct scmi_protocol_handle *ph, bool sub_vendor)
 {
 	u8 cmd;
 	int ret, size;
 	char *vendor_id;
 	struct scmi_xfer *t;
-	struct scmi_revision_info *rev = handle->version;
+	struct scmi_revision_info *rev = ph->get_priv(ph);
+

 	if (sub_vendor) {
 		cmd = BASE_DISCOVER_SUB_VENDOR;
···
 		size = ARRAY_SIZE(rev->vendor_id);
 	}

-	ret = scmi_xfer_get_init(handle, cmd, SCMI_PROTOCOL_BASE, 0, size, &t);
+	ret = ph->xops->xfer_get_init(ph, cmd, 0, size, &t);
 	if (ret)
 		return ret;

-	ret = scmi_do_xfer(handle, t);
+	ret = ph->xops->do_xfer(ph, t);
 	if (!ret)
 		memcpy(vendor_id, t->rx.buf, size);

-	scmi_xfer_put(handle, t);
+	ph->xops->xfer_put(ph, t);

 	return ret;
 }
···
  * implementation 32-bit version. The format of the version number is
  * vendor-specific
  *
- * @handle: SCMI entity handle
+ * @ph: SCMI protocol handle
  *
  * Return: 0 on success, else appropriate SCMI error.
  */
 static int
-scmi_base_implementation_version_get(const struct scmi_handle *handle)
+scmi_base_implementation_version_get(const struct scmi_protocol_handle *ph)
 {
 	int ret;
 	__le32 *impl_ver;
 	struct scmi_xfer *t;
-	struct scmi_revision_info *rev = handle->version;
+	struct scmi_revision_info *rev = ph->get_priv(ph);

-	ret = scmi_xfer_get_init(handle, BASE_DISCOVER_IMPLEMENT_VERSION,
-				 SCMI_PROTOCOL_BASE, 0, sizeof(*impl_ver), &t);
+	ret = ph->xops->xfer_get_init(ph, BASE_DISCOVER_IMPLEMENT_VERSION,
+				      0, sizeof(*impl_ver), &t);
 	if (ret)
 		return ret;

-	ret = scmi_do_xfer(handle, t);
+	ret = ph->xops->do_xfer(ph, t);
 	if (!ret) {
 		impl_ver = t->rx.buf;
 		rev->impl_ver = le32_to_cpu(*impl_ver);
 	}

-	scmi_xfer_put(handle, t);
+	ph->xops->xfer_put(ph, t);

 	return ret;
 }
···
  * scmi_base_implementation_list_get() - gets the list of protocols it is
  *	OSPM is allowed to access
  *
- * @handle: SCMI entity handle
+ * @ph: SCMI protocol handle
  * @protocols_imp: pointer to hold the list of protocol identifiers
  *
  * Return: 0 on success, else appropriate SCMI error.
  */
-static int scmi_base_implementation_list_get(const struct scmi_handle *handle,
-					     u8 *protocols_imp)
+static int
+scmi_base_implementation_list_get(const struct scmi_protocol_handle *ph,
+				  u8 *protocols_imp)
 {
 	u8 *list;
 	int ret, loop;
 	struct scmi_xfer *t;
 	__le32 *num_skip, *num_ret;
 	u32 tot_num_ret = 0, loop_num_ret;
-	struct device *dev = handle->dev;
+	struct device *dev = ph->dev;

-	ret = scmi_xfer_get_init(handle, BASE_DISCOVER_LIST_PROTOCOLS,
-				 SCMI_PROTOCOL_BASE, sizeof(*num_skip), 0, &t);
+	ret = ph->xops->xfer_get_init(ph, BASE_DISCOVER_LIST_PROTOCOLS,
+				      sizeof(*num_skip), 0, &t);
 	if (ret)
 		return ret;
···
 		/* Set the number of protocols to be skipped/already read */
 		*num_skip = cpu_to_le32(tot_num_ret);

-		ret = scmi_do_xfer(handle, t);
+		ret = ph->xops->do_xfer(ph, t);
 		if (ret)
 			break;
···
 		tot_num_ret += loop_num_ret;

-		scmi_reset_rx_to_maxsz(handle, t);
+		ph->xops->reset_rx_to_maxsz(ph, t);
 	} while (loop_num_ret);

-	scmi_xfer_put(handle, t);
+	ph->xops->xfer_put(ph, t);

 	return ret;
 }
···
 /**
  * scmi_base_discover_agent_get() - discover the name of an agent
  *
- * @handle: SCMI entity handle
+ * @ph: SCMI protocol handle
  * @id: Agent identifier
  * @name: Agent identifier ASCII string
  *
···
  *
  * Return: 0 on success, else appropriate SCMI error.
  */
-static int scmi_base_discover_agent_get(const struct scmi_handle *handle,
+static int scmi_base_discover_agent_get(const struct scmi_protocol_handle *ph,
 					int id, char *name)
 {
 	int ret;
 	struct scmi_xfer *t;

-	ret = scmi_xfer_get_init(handle, BASE_DISCOVER_AGENT,
-				 SCMI_PROTOCOL_BASE, sizeof(__le32),
-				 SCMI_MAX_STR_SIZE, &t);
+	ret = ph->xops->xfer_get_init(ph, BASE_DISCOVER_AGENT,
+				      sizeof(__le32), SCMI_MAX_STR_SIZE, &t);
 	if (ret)
 		return ret;

 	put_unaligned_le32(id, t->tx.buf);

-	ret = scmi_do_xfer(handle, t);
+	ret = ph->xops->do_xfer(ph, t);
 	if (!ret)
 		strlcpy(name, t->rx.buf, SCMI_MAX_STR_SIZE);

-	scmi_xfer_put(handle, t);
+	ph->xops->xfer_put(ph, t);

 	return ret;
 }

-static int scmi_base_error_notify(const struct scmi_handle *handle, bool enable)
+static int scmi_base_error_notify(const struct scmi_protocol_handle *ph,
+				  bool enable)
 {
 	int ret;
 	u32 evt_cntl = enable ? BASE_TP_NOTIFY_ALL : 0;
 	struct scmi_xfer *t;
 	struct scmi_msg_base_error_notify *cfg;

-	ret = scmi_xfer_get_init(handle, BASE_NOTIFY_ERRORS,
-				 SCMI_PROTOCOL_BASE, sizeof(*cfg), 0, &t);
+	ret = ph->xops->xfer_get_init(ph, BASE_NOTIFY_ERRORS,
+				      sizeof(*cfg), 0, &t);
 	if (ret)
 		return ret;

 	cfg = t->tx.buf;
 	cfg->event_control = cpu_to_le32(evt_cntl);

-	ret = scmi_do_xfer(handle, t);
+	ret = ph->xops->do_xfer(ph, t);

-	scmi_xfer_put(handle, t);
+	ph->xops->xfer_put(ph, t);
 	return ret;
 }

-static int scmi_base_set_notify_enabled(const struct scmi_handle *handle,
+static int scmi_base_set_notify_enabled(const struct scmi_protocol_handle *ph,
 					u8 evt_id, u32 src_id, bool enable)
 {
 	int ret;

-	ret = scmi_base_error_notify(handle, enable);
+	ret = scmi_base_error_notify(ph, enable);
 	if (ret)
 		pr_debug("FAIL_ENABLED - evt[%X] ret:%d\n", evt_id, ret);

 	return ret;
 }

-static void *scmi_base_fill_custom_report(const struct scmi_handle *handle,
+static void *scmi_base_fill_custom_report(const struct scmi_protocol_handle *ph,
 					  u8 evt_id, ktime_t timestamp,
 					  const void *payld, size_t payld_sz,
 					  void *report, u32 *src_id)
···
 	.fill_custom_report = scmi_base_fill_custom_report,
 };

-int scmi_base_protocol_init(struct scmi_handle *h)
+static const struct scmi_protocol_events base_protocol_events = {
+	.queue_sz = 4 * SCMI_PROTO_QUEUE_SZ,
+	.ops = &base_event_ops,
+	.evts = base_events,
+	.num_events = ARRAY_SIZE(base_events),
+	.num_sources = SCMI_BASE_NUM_SOURCES,
+};
+
+static int scmi_base_protocol_init(const struct scmi_protocol_handle *ph)
 {
 	int id, ret;
 	u8 *prot_imp;
 	u32 version;
 	char name[SCMI_MAX_STR_SIZE];
-	const struct scmi_handle *handle = h;
-	struct device *dev = handle->dev;
-	struct scmi_revision_info *rev = handle->version;
+	struct device *dev = ph->dev;
+	struct scmi_revision_info *rev = scmi_revision_area_get(ph);

-	ret = scmi_version_get(handle, SCMI_PROTOCOL_BASE, &version);
+	ret = ph->xops->version_get(ph, &version);
 	if (ret)
 		return ret;
···
 	rev->major_ver = PROTOCOL_REV_MAJOR(version),
 	rev->minor_ver = PROTOCOL_REV_MINOR(version);
+	ph->set_priv(ph, rev);

-	scmi_base_attributes_get(handle);
-	scmi_base_vendor_id_get(handle, false);
-	scmi_base_vendor_id_get(handle, true);
-	scmi_base_implementation_version_get(handle);
-	scmi_base_implementation_list_get(handle, prot_imp);
-	scmi_setup_protocol_implemented(handle, prot_imp);
+	scmi_base_attributes_get(ph);
+	scmi_base_vendor_id_get(ph, false);
+	scmi_base_vendor_id_get(ph, true);
+	scmi_base_implementation_version_get(ph);
+	scmi_base_implementation_list_get(ph, prot_imp);
+
+	scmi_setup_protocol_implemented(ph, prot_imp);

 	dev_info(dev, "SCMI Protocol v%d.%d '%s:%s' Firmware version 0x%x\n",
 		 rev->major_ver, rev->minor_ver, rev->vendor_id,
···
 	dev_dbg(dev, "Found %d protocol(s) %d agent(s)\n", rev->num_protocols,
 		rev->num_agents);

-	scmi_register_protocol_events(handle, SCMI_PROTOCOL_BASE,
-				      (4 * SCMI_PROTO_QUEUE_SZ),
-				      &base_event_ops, base_events,
-				      ARRAY_SIZE(base_events),
-				      SCMI_BASE_NUM_SOURCES);
-
 	for (id = 0; id < rev->num_agents; id++) {
-		scmi_base_discover_agent_get(handle, id, name);
+		scmi_base_discover_agent_get(ph, id, name);
 		dev_dbg(dev, "Agent %d: %s\n", id, name);
 	}

 	return 0;
 }
+
+static const struct scmi_protocol scmi_base = {
+	.id = SCMI_PROTOCOL_BASE,
+	.owner = NULL,
+	.instance_init = &scmi_base_protocol_init,
+	.ops = NULL,
+	.events = &base_protocol_events,
+};
+
+DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(base, scmi_base)
+77 -27
drivers/firmware/arm_scmi/bus.c
··· 2 2 /* 3 3 * System Control and Management Interface (SCMI) Message Protocol bus layer 4 4 * 5 - * Copyright (C) 2018 ARM Ltd. 5 + * Copyright (C) 2018-2021 ARM Ltd. 6 6 */ 7 7 8 8 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt ··· 51 51 return 0; 52 52 } 53 53 54 - static int scmi_protocol_init(int protocol_id, struct scmi_handle *handle) 54 + static int scmi_match_by_id_table(struct device *dev, void *data) 55 55 { 56 - scmi_prot_init_fn_t fn = idr_find(&scmi_protocols, protocol_id); 56 + struct scmi_device *sdev = to_scmi_dev(dev); 57 + struct scmi_device_id *id_table = data; 57 58 58 - if (unlikely(!fn)) 59 - return -EINVAL; 60 - return fn(handle); 59 + return sdev->protocol_id == id_table->protocol_id && 60 + !strcmp(sdev->name, id_table->name); 61 61 } 62 62 63 - static int scmi_protocol_dummy_init(struct scmi_handle *handle) 63 + struct scmi_device *scmi_child_dev_find(struct device *parent, 64 + int prot_id, const char *name) 64 65 { 65 - return 0; 66 + struct scmi_device_id id_table; 67 + struct device *dev; 68 + 69 + id_table.protocol_id = prot_id; 70 + id_table.name = name; 71 + 72 + dev = device_find_child(parent, &id_table, scmi_match_by_id_table); 73 + if (!dev) 74 + return NULL; 75 + 76 + return to_scmi_dev(dev); 77 + } 78 + 79 + const struct scmi_protocol *scmi_protocol_get(int protocol_id) 80 + { 81 + const struct scmi_protocol *proto; 82 + 83 + proto = idr_find(&scmi_protocols, protocol_id); 84 + if (!proto || !try_module_get(proto->owner)) { 85 + pr_warn("SCMI Protocol 0x%x not found!\n", protocol_id); 86 + return NULL; 87 + } 88 + 89 + pr_debug("Found SCMI Protocol 0x%x\n", protocol_id); 90 + 91 + return proto; 92 + } 93 + 94 + void scmi_protocol_put(int protocol_id) 95 + { 96 + const struct scmi_protocol *proto; 97 + 98 + proto = idr_find(&scmi_protocols, protocol_id); 99 + if (proto) 100 + module_put(proto->owner); 66 101 } 67 102 68 103 static int scmi_dev_probe(struct device *dev) ··· 105 70 struct scmi_driver *scmi_drv = 
to_scmi_driver(dev->driver); 106 71 struct scmi_device *scmi_dev = to_scmi_dev(dev); 107 72 const struct scmi_device_id *id; 108 - int ret; 109 73 110 74 id = scmi_dev_match_id(scmi_dev, scmi_drv); 111 75 if (!id) ··· 112 78 113 79 if (!scmi_dev->handle) 114 80 return -EPROBE_DEFER; 115 - 116 - ret = scmi_protocol_init(scmi_dev->protocol_id, scmi_dev->handle); 117 - if (ret) 118 - return ret; 119 - 120 - /* Skip protocol initialisation for additional devices */ 121 - idr_replace(&scmi_protocols, &scmi_protocol_dummy_init, 122 - scmi_dev->protocol_id); 123 81 124 82 return scmi_drv->probe(scmi_dev); 125 83 } ··· 139 113 { 140 114 int retval; 141 115 116 + retval = scmi_protocol_device_request(driver->id_table); 117 + if (retval) 118 + return retval; 119 + 142 120 driver->driver.bus = &scmi_bus_type; 143 121 driver->driver.name = driver->name; 144 122 driver->driver.owner = owner; ··· 159 129 void scmi_driver_unregister(struct scmi_driver *driver) 160 130 { 161 131 driver_unregister(&driver->driver); 132 + scmi_protocol_device_unrequest(driver->id_table); 162 133 } 163 134 EXPORT_SYMBOL_GPL(scmi_driver_unregister); 164 135 ··· 225 194 scmi_dev->handle = scmi_handle_get(&scmi_dev->dev); 226 195 } 227 196 228 - int scmi_protocol_register(int protocol_id, scmi_prot_init_fn_t fn) 197 + int scmi_protocol_register(const struct scmi_protocol *proto) 229 198 { 230 199 int ret; 231 200 232 - spin_lock(&protocol_lock); 233 - ret = idr_alloc(&scmi_protocols, fn, protocol_id, protocol_id + 1, 234 - GFP_ATOMIC); 235 - spin_unlock(&protocol_lock); 236 - if (ret != protocol_id) 237 - pr_err("unable to allocate SCMI idr slot, err %d\n", ret); 201 + if (!proto) { 202 + pr_err("invalid protocol\n"); 203 + return -EINVAL; 204 + } 238 205 239 - return ret; 206 + if (!proto->instance_init) { 207 + pr_err("missing init for protocol 0x%x\n", proto->id); 208 + return -EINVAL; 209 + } 210 + 211 + spin_lock(&protocol_lock); 212 + ret = idr_alloc(&scmi_protocols, (void *)proto, 213 + 
proto->id, proto->id + 1, GFP_ATOMIC); 214 + spin_unlock(&protocol_lock); 215 + if (ret != proto->id) { 216 + pr_err("unable to allocate SCMI idr slot for 0x%x - err %d\n", 217 + proto->id, ret); 218 + return ret; 219 + } 220 + 221 + pr_debug("Registered SCMI Protocol 0x%x\n", proto->id); 222 + 223 + return 0; 240 224 } 241 225 EXPORT_SYMBOL_GPL(scmi_protocol_register); 242 226 243 - void scmi_protocol_unregister(int protocol_id) 227 + void scmi_protocol_unregister(const struct scmi_protocol *proto) 244 228 { 245 229 spin_lock(&protocol_lock); 246 - idr_remove(&scmi_protocols, protocol_id); 230 + idr_remove(&scmi_protocols, proto->id); 247 231 spin_unlock(&protocol_lock); 232 + 233 + pr_debug("Unregistered SCMI Protocol 0x%x\n", proto->id); 234 + 235 + return; 248 236 } 249 237 EXPORT_SYMBOL_GPL(scmi_protocol_unregister); 250 238
+68 -61
drivers/firmware/arm_scmi/clock.c
··· 2 2 /* 3 3 * System Control and Management Interface (SCMI) Clock Protocol 4 4 * 5 - * Copyright (C) 2018 ARM Ltd. 5 + * Copyright (C) 2018-2021 ARM Ltd. 6 6 */ 7 7 8 + #include <linux/module.h> 8 9 #include <linux/sort.h> 9 10 10 11 #include "common.h" ··· 75 74 struct scmi_clock_info *clk; 76 75 }; 77 76 78 - static int scmi_clock_protocol_attributes_get(const struct scmi_handle *handle, 79 - struct clock_info *ci) 77 + static int 78 + scmi_clock_protocol_attributes_get(const struct scmi_protocol_handle *ph, 79 + struct clock_info *ci) 80 80 { 81 81 int ret; 82 82 struct scmi_xfer *t; 83 83 struct scmi_msg_resp_clock_protocol_attributes *attr; 84 84 85 - ret = scmi_xfer_get_init(handle, PROTOCOL_ATTRIBUTES, 86 - SCMI_PROTOCOL_CLOCK, 0, sizeof(*attr), &t); 85 + ret = ph->xops->xfer_get_init(ph, PROTOCOL_ATTRIBUTES, 86 + 0, sizeof(*attr), &t); 87 87 if (ret) 88 88 return ret; 89 89 90 90 attr = t->rx.buf; 91 91 92 - ret = scmi_do_xfer(handle, t); 92 + ret = ph->xops->do_xfer(ph, t); 93 93 if (!ret) { 94 94 ci->num_clocks = le16_to_cpu(attr->num_clocks); 95 95 ci->max_async_req = attr->max_async_req; 96 96 } 97 97 98 - scmi_xfer_put(handle, t); 98 + ph->xops->xfer_put(ph, t); 99 99 return ret; 100 100 } 101 101 102 - static int scmi_clock_attributes_get(const struct scmi_handle *handle, 102 + static int scmi_clock_attributes_get(const struct scmi_protocol_handle *ph, 103 103 u32 clk_id, struct scmi_clock_info *clk) 104 104 { 105 105 int ret; 106 106 struct scmi_xfer *t; 107 107 struct scmi_msg_resp_clock_attributes *attr; 108 108 109 - ret = scmi_xfer_get_init(handle, CLOCK_ATTRIBUTES, SCMI_PROTOCOL_CLOCK, 110 - sizeof(clk_id), sizeof(*attr), &t); 109 + ret = ph->xops->xfer_get_init(ph, CLOCK_ATTRIBUTES, 110 + sizeof(clk_id), sizeof(*attr), &t); 111 111 if (ret) 112 112 return ret; 113 113 114 114 put_unaligned_le32(clk_id, t->tx.buf); 115 115 attr = t->rx.buf; 116 116 117 - ret = scmi_do_xfer(handle, t); 117 + ret = ph->xops->do_xfer(ph, t); 118 118 if (!ret) 
119 119 strlcpy(clk->name, attr->name, SCMI_MAX_STR_SIZE); 120 120 else 121 121 clk->name[0] = '\0'; 122 122 123 - scmi_xfer_put(handle, t); 123 + ph->xops->xfer_put(ph, t); 124 124 return ret; 125 125 } 126 126 ··· 138 136 } 139 137 140 138 static int 141 - scmi_clock_describe_rates_get(const struct scmi_handle *handle, u32 clk_id, 139 + scmi_clock_describe_rates_get(const struct scmi_protocol_handle *ph, u32 clk_id, 142 140 struct scmi_clock_info *clk) 143 141 { 144 142 u64 *rate = NULL; ··· 150 148 struct scmi_msg_clock_describe_rates *clk_desc; 151 149 struct scmi_msg_resp_clock_describe_rates *rlist; 152 150 153 - ret = scmi_xfer_get_init(handle, CLOCK_DESCRIBE_RATES, 154 - SCMI_PROTOCOL_CLOCK, sizeof(*clk_desc), 0, &t); 151 + ret = ph->xops->xfer_get_init(ph, CLOCK_DESCRIBE_RATES, 152 + sizeof(*clk_desc), 0, &t); 155 153 if (ret) 156 154 return ret; 157 155 ··· 163 161 /* Set the number of rates to be skipped/already read */ 164 162 clk_desc->rate_index = cpu_to_le32(tot_rate_cnt); 165 163 166 - ret = scmi_do_xfer(handle, t); 164 + ret = ph->xops->do_xfer(ph, t); 167 165 if (ret) 168 166 goto err; 169 167 ··· 173 171 num_returned = NUM_RETURNED(rates_flag); 174 172 175 173 if (tot_rate_cnt + num_returned > SCMI_MAX_NUM_RATES) { 176 - dev_err(handle->dev, "No. of rates > MAX_NUM_RATES"); 174 + dev_err(ph->dev, "No. 
of rates > MAX_NUM_RATES"); 177 175 break; 178 176 } 179 177 ··· 181 179 clk->range.min_rate = RATE_TO_U64(rlist->rate[0]); 182 180 clk->range.max_rate = RATE_TO_U64(rlist->rate[1]); 183 181 clk->range.step_size = RATE_TO_U64(rlist->rate[2]); 184 - dev_dbg(handle->dev, "Min %llu Max %llu Step %llu Hz\n", 182 + dev_dbg(ph->dev, "Min %llu Max %llu Step %llu Hz\n", 185 183 clk->range.min_rate, clk->range.max_rate, 186 184 clk->range.step_size); 187 185 break; ··· 190 188 rate = &clk->list.rates[tot_rate_cnt]; 191 189 for (cnt = 0; cnt < num_returned; cnt++, rate++) { 192 190 *rate = RATE_TO_U64(rlist->rate[cnt]); 193 - dev_dbg(handle->dev, "Rate %llu Hz\n", *rate); 191 + dev_dbg(ph->dev, "Rate %llu Hz\n", *rate); 194 192 } 195 193 196 194 tot_rate_cnt += num_returned; 197 195 198 - scmi_reset_rx_to_maxsz(handle, t); 196 + ph->xops->reset_rx_to_maxsz(ph, t); 199 197 /* 200 198 * check for both returned and remaining to avoid infinite 201 199 * loop due to buggy firmware ··· 210 208 clk->rate_discrete = rate_discrete; 211 209 212 210 err: 213 - scmi_xfer_put(handle, t); 211 + ph->xops->xfer_put(ph, t); 214 212 return ret; 215 213 } 216 214 217 215 static int 218 - scmi_clock_rate_get(const struct scmi_handle *handle, u32 clk_id, u64 *value) 216 + scmi_clock_rate_get(const struct scmi_protocol_handle *ph, 217 + u32 clk_id, u64 *value) 219 218 { 220 219 int ret; 221 220 struct scmi_xfer *t; 222 221 223 - ret = scmi_xfer_get_init(handle, CLOCK_RATE_GET, SCMI_PROTOCOL_CLOCK, 224 - sizeof(__le32), sizeof(u64), &t); 222 + ret = ph->xops->xfer_get_init(ph, CLOCK_RATE_GET, 223 + sizeof(__le32), sizeof(u64), &t); 225 224 if (ret) 226 225 return ret; 227 226 228 227 put_unaligned_le32(clk_id, t->tx.buf); 229 228 230 - ret = scmi_do_xfer(handle, t); 229 + ret = ph->xops->do_xfer(ph, t); 231 230 if (!ret) 232 231 *value = get_unaligned_le64(t->rx.buf); 233 232 234 - scmi_xfer_put(handle, t); 233 + ph->xops->xfer_put(ph, t); 235 234 return ret; 236 235 } 237 236 238 - static int 
scmi_clock_rate_set(const struct scmi_handle *handle, u32 clk_id, 239 - u64 rate) 237 + static int scmi_clock_rate_set(const struct scmi_protocol_handle *ph, 238 + u32 clk_id, u64 rate) 240 239 { 241 240 int ret; 242 241 u32 flags = 0; 243 242 struct scmi_xfer *t; 244 243 struct scmi_clock_set_rate *cfg; 245 - struct clock_info *ci = handle->clk_priv; 244 + struct clock_info *ci = ph->get_priv(ph); 246 245 247 - ret = scmi_xfer_get_init(handle, CLOCK_RATE_SET, SCMI_PROTOCOL_CLOCK, 248 - sizeof(*cfg), 0, &t); 246 + ret = ph->xops->xfer_get_init(ph, CLOCK_RATE_SET, sizeof(*cfg), 0, &t); 249 247 if (ret) 250 248 return ret; 251 249 ··· 260 258 cfg->value_high = cpu_to_le32(rate >> 32); 261 259 262 260 if (flags & CLOCK_SET_ASYNC) 263 - ret = scmi_do_xfer_with_response(handle, t); 261 + ret = ph->xops->do_xfer_with_response(ph, t); 264 262 else 265 - ret = scmi_do_xfer(handle, t); 263 + ret = ph->xops->do_xfer(ph, t); 266 264 267 265 if (ci->max_async_req) 268 266 atomic_dec(&ci->cur_async_req); 269 267 270 - scmi_xfer_put(handle, t); 268 + ph->xops->xfer_put(ph, t); 271 269 return ret; 272 270 } 273 271 274 272 static int 275 - scmi_clock_config_set(const struct scmi_handle *handle, u32 clk_id, u32 config) 273 + scmi_clock_config_set(const struct scmi_protocol_handle *ph, u32 clk_id, 274 + u32 config) 276 275 { 277 276 int ret; 278 277 struct scmi_xfer *t; 279 278 struct scmi_clock_set_config *cfg; 280 279 281 - ret = scmi_xfer_get_init(handle, CLOCK_CONFIG_SET, SCMI_PROTOCOL_CLOCK, 282 - sizeof(*cfg), 0, &t); 280 + ret = ph->xops->xfer_get_init(ph, CLOCK_CONFIG_SET, 281 + sizeof(*cfg), 0, &t); 283 282 if (ret) 284 283 return ret; 285 284 ··· 288 285 cfg->id = cpu_to_le32(clk_id); 289 286 cfg->attributes = cpu_to_le32(config); 290 287 291 - ret = scmi_do_xfer(handle, t); 288 + ret = ph->xops->do_xfer(ph, t); 292 289 293 - scmi_xfer_put(handle, t); 290 + ph->xops->xfer_put(ph, t); 294 291 return ret; 295 292 } 296 293 297 - static int scmi_clock_enable(const struct 
scmi_handle *handle, u32 clk_id) 294 + static int scmi_clock_enable(const struct scmi_protocol_handle *ph, u32 clk_id) 298 295 { 299 - return scmi_clock_config_set(handle, clk_id, CLOCK_ENABLE); 296 + return scmi_clock_config_set(ph, clk_id, CLOCK_ENABLE); 300 297 } 301 298 302 - static int scmi_clock_disable(const struct scmi_handle *handle, u32 clk_id) 299 + static int scmi_clock_disable(const struct scmi_protocol_handle *ph, u32 clk_id) 303 300 { 304 - return scmi_clock_config_set(handle, clk_id, 0); 301 + return scmi_clock_config_set(ph, clk_id, 0); 305 302 } 306 303 307 - static int scmi_clock_count_get(const struct scmi_handle *handle) 304 + static int scmi_clock_count_get(const struct scmi_protocol_handle *ph) 308 305 { 309 - struct clock_info *ci = handle->clk_priv; 306 + struct clock_info *ci = ph->get_priv(ph); 310 307 311 308 return ci->num_clocks; 312 309 } 313 310 314 311 static const struct scmi_clock_info * 315 - scmi_clock_info_get(const struct scmi_handle *handle, u32 clk_id) 312 + scmi_clock_info_get(const struct scmi_protocol_handle *ph, u32 clk_id) 316 313 { 317 - struct clock_info *ci = handle->clk_priv; 314 + struct clock_info *ci = ph->get_priv(ph); 318 315 struct scmi_clock_info *clk = ci->clk + clk_id; 319 316 320 317 if (!clk->name[0]) ··· 323 320 return clk; 324 321 } 325 322 326 - static const struct scmi_clk_ops clk_ops = { 323 + static const struct scmi_clk_proto_ops clk_proto_ops = { 327 324 .count_get = scmi_clock_count_get, 328 325 .info_get = scmi_clock_info_get, 329 326 .rate_get = scmi_clock_rate_get, ··· 332 329 .disable = scmi_clock_disable, 333 330 }; 334 331 335 - static int scmi_clock_protocol_init(struct scmi_handle *handle) 332 + static int scmi_clock_protocol_init(const struct scmi_protocol_handle *ph) 336 333 { 337 334 u32 version; 338 335 int clkid, ret; 339 336 struct clock_info *cinfo; 340 337 341 - scmi_version_get(handle, SCMI_PROTOCOL_CLOCK, &version); 338 + ph->xops->version_get(ph, &version); 342 339 343 - 
dev_dbg(handle->dev, "Clock Version %d.%d\n", 340 + dev_dbg(ph->dev, "Clock Version %d.%d\n", 344 341 PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version)); 345 342 346 - cinfo = devm_kzalloc(handle->dev, sizeof(*cinfo), GFP_KERNEL); 343 + cinfo = devm_kzalloc(ph->dev, sizeof(*cinfo), GFP_KERNEL); 347 344 if (!cinfo) 348 345 return -ENOMEM; 349 346 350 - scmi_clock_protocol_attributes_get(handle, cinfo); 347 + scmi_clock_protocol_attributes_get(ph, cinfo); 351 348 352 - cinfo->clk = devm_kcalloc(handle->dev, cinfo->num_clocks, 349 + cinfo->clk = devm_kcalloc(ph->dev, cinfo->num_clocks, 353 350 sizeof(*cinfo->clk), GFP_KERNEL); 354 351 if (!cinfo->clk) 355 352 return -ENOMEM; ··· 357 354 for (clkid = 0; clkid < cinfo->num_clocks; clkid++) { 358 355 struct scmi_clock_info *clk = cinfo->clk + clkid; 359 356 360 - ret = scmi_clock_attributes_get(handle, clkid, clk); 357 + ret = scmi_clock_attributes_get(ph, clkid, clk); 361 358 if (!ret) 362 - scmi_clock_describe_rates_get(handle, clkid, clk); 359 + scmi_clock_describe_rates_get(ph, clkid, clk); 363 360 } 364 361 365 362 cinfo->version = version; 366 - handle->clk_ops = &clk_ops; 367 - handle->clk_priv = cinfo; 368 - 369 - return 0; 363 + return ph->set_priv(ph, cinfo); 370 364 } 371 365 372 - DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(SCMI_PROTOCOL_CLOCK, clock) 366 + static const struct scmi_protocol scmi_clock = { 367 + .id = SCMI_PROTOCOL_CLOCK, 368 + .owner = THIS_MODULE, 369 + .instance_init = &scmi_clock_protocol_init, 370 + .ops = &clk_proto_ops, 371 + }; 372 + 373 + DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(clock, scmi_clock)
+112 -21
drivers/firmware/arm_scmi/common.h
··· 4 4 * driver common header file containing some definitions, structures 5 5 * and function prototypes used in all the different SCMI protocols. 6 6 * 7 - * Copyright (C) 2018 ARM Ltd. 7 + * Copyright (C) 2018-2021 ARM Ltd. 8 8 */ 9 9 #ifndef _SCMI_COMMON_H 10 10 #define _SCMI_COMMON_H ··· 14 14 #include <linux/device.h> 15 15 #include <linux/errno.h> 16 16 #include <linux/kernel.h> 17 + #include <linux/module.h> 17 18 #include <linux/scmi_protocol.h> 18 19 #include <linux/types.h> 19 20 20 21 #include <asm/unaligned.h> 22 + 23 + #include "notify.h" 21 24 22 25 #define PROTOCOL_REV_MINOR_MASK GENMASK(15, 0) 23 26 #define PROTOCOL_REV_MAJOR_MASK GENMASK(31, 16) ··· 144 141 struct completion *async_done; 145 142 }; 146 143 147 - void scmi_xfer_put(const struct scmi_handle *h, struct scmi_xfer *xfer); 148 - int scmi_do_xfer(const struct scmi_handle *h, struct scmi_xfer *xfer); 149 - int scmi_do_xfer_with_response(const struct scmi_handle *h, 150 - struct scmi_xfer *xfer); 151 - int scmi_xfer_get_init(const struct scmi_handle *h, u8 msg_id, u8 prot_id, 152 - size_t tx_size, size_t rx_size, struct scmi_xfer **p); 153 - void scmi_reset_rx_to_maxsz(const struct scmi_handle *handle, 154 - struct scmi_xfer *xfer); 144 + struct scmi_xfer_ops; 145 + 146 + /** 147 + * struct scmi_protocol_handle - Reference to an initialized protocol instance 148 + * 149 + * @dev: A reference to the associated SCMI instance device (handle->dev). 150 + * @xops: A reference to a struct holding refs to the core xfer operations that 151 + * can be used by the protocol implementation to generate SCMI messages. 152 + * @set_priv: A method to set protocol private data for this instance. 153 + * @get_priv: A method to get protocol private data previously set. 
154 + * 155 + * This structure represents a protocol initialized against a specific SCMI 156 + * instance and will be used as follows: 157 + * - as a parameter fed from the core to the protocol initialization code so 158 + * that it can access the core xfer operations to build and generate SCMI 159 + * messages exclusively for the specific underlying protocol instance. 160 + * - as an opaque handle fed by an SCMI driver user when it tries to access 161 + * this protocol through its own protocol operations. 162 + * In this case, the handle will be returned as an opaque object together 163 + * with the related protocol operations when the SCMI driver tries to access 164 + * the protocol. 165 + */ 166 + struct scmi_protocol_handle { 167 + struct device *dev; 168 + const struct scmi_xfer_ops *xops; 169 + int (*set_priv)(const struct scmi_protocol_handle *ph, void *priv); 170 + void *(*get_priv)(const struct scmi_protocol_handle *ph); 171 + }; 172 + 173 + /** 174 + * struct scmi_xfer_ops - References to the core SCMI xfer operations. 175 + * @version_get: Get the version of this protocol. 176 + * @xfer_get_init: Initialize one struct xfer if any xfer slot is free. 177 + * @reset_rx_to_maxsz: Reset rx size to max transport size. 178 + * @do_xfer: Do the SCMI transfer. 179 + * @do_xfer_with_response: Do the SCMI transfer waiting for a response. 180 + * @xfer_put: Free the xfer slot. 181 + * 182 + * Note that all these operations expect a protocol handle as their first 183 + * parameter; they internally use it to infer the underlying protocol number, 184 + * so it is not possible for a protocol implementation to forge messages for 185 + * another protocol. 
186 + */ 187 + struct scmi_xfer_ops { 188 + int (*version_get)(const struct scmi_protocol_handle *ph, u32 *version); 189 + int (*xfer_get_init)(const struct scmi_protocol_handle *ph, u8 msg_id, 190 + size_t tx_size, size_t rx_size, 191 + struct scmi_xfer **p); 192 + void (*reset_rx_to_maxsz)(const struct scmi_protocol_handle *ph, 193 + struct scmi_xfer *xfer); 194 + int (*do_xfer)(const struct scmi_protocol_handle *ph, 195 + struct scmi_xfer *xfer); 196 + int (*do_xfer_with_response)(const struct scmi_protocol_handle *ph, 197 + struct scmi_xfer *xfer); 198 + void (*xfer_put)(const struct scmi_protocol_handle *ph, 199 + struct scmi_xfer *xfer); 200 + }; 201 + 202 + struct scmi_revision_info * 203 + scmi_revision_area_get(const struct scmi_protocol_handle *ph); 155 204 int scmi_handle_put(const struct scmi_handle *handle); 156 205 struct scmi_handle *scmi_handle_get(struct device *dev); 157 206 void scmi_set_handle(struct scmi_device *scmi_dev); 158 - int scmi_version_get(const struct scmi_handle *h, u8 protocol, u32 *version); 159 - void scmi_setup_protocol_implemented(const struct scmi_handle *handle, 207 + void scmi_setup_protocol_implemented(const struct scmi_protocol_handle *ph, 160 208 u8 *prot_imp); 161 209 162 - int scmi_base_protocol_init(struct scmi_handle *h); 210 + typedef int (*scmi_prot_init_ph_fn_t)(const struct scmi_protocol_handle *); 211 + 212 + /** 213 + * struct scmi_protocol - Protocol descriptor 214 + * @id: Protocol ID. 215 + * @owner: Module reference if any. 216 + * @instance_init: Mandatory protocol initialization function. 217 + * @instance_deinit: Optional protocol de-initialization function. 218 + * @ops: Optional reference to the operations provided by the protocol and 219 + * exposed in scmi_protocol.h. 220 + * @events: An optional reference to the events supported by this protocol. 
221 + */ 222 + struct scmi_protocol { 223 + const u8 id; 224 + struct module *owner; 225 + const scmi_prot_init_ph_fn_t instance_init; 226 + const scmi_prot_init_ph_fn_t instance_deinit; 227 + const void *ops; 228 + const struct scmi_protocol_events *events; 229 + }; 163 230 164 231 int __init scmi_bus_init(void); 165 232 void __exit scmi_bus_exit(void); ··· 237 164 #define DECLARE_SCMI_REGISTER_UNREGISTER(func) \ 238 165 int __init scmi_##func##_register(void); \ 239 166 void __exit scmi_##func##_unregister(void) 167 + DECLARE_SCMI_REGISTER_UNREGISTER(base); 240 168 DECLARE_SCMI_REGISTER_UNREGISTER(clock); 241 169 DECLARE_SCMI_REGISTER_UNREGISTER(perf); 242 170 DECLARE_SCMI_REGISTER_UNREGISTER(power); ··· 246 172 DECLARE_SCMI_REGISTER_UNREGISTER(voltage); 247 173 DECLARE_SCMI_REGISTER_UNREGISTER(system); 248 174 249 - #define DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(id, name) \ 250 - int __init scmi_##name##_register(void) \ 251 - { \ 252 - return scmi_protocol_register((id), &scmi_##name##_protocol_init); \ 253 - } \ 254 - \ 255 - void __exit scmi_##name##_unregister(void) \ 256 - { \ 257 - scmi_protocol_unregister((id)); \ 175 + #define DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(name, proto) \ 176 + static const struct scmi_protocol *__this_proto = &(proto); \ 177 + \ 178 + int __init scmi_##name##_register(void) \ 179 + { \ 180 + return scmi_protocol_register(__this_proto); \ 181 + } \ 182 + \ 183 + void __exit scmi_##name##_unregister(void) \ 184 + { \ 185 + scmi_protocol_unregister(__this_proto); \ 258 186 } 187 + 188 + const struct scmi_protocol *scmi_protocol_get(int protocol_id); 189 + void scmi_protocol_put(int protocol_id); 190 + 191 + int scmi_protocol_acquire(const struct scmi_handle *handle, u8 protocol_id); 192 + void scmi_protocol_release(const struct scmi_handle *handle, u8 protocol_id); 259 193 260 194 /* SCMI Transport */ 261 195 /** ··· 309 227 bool (*poll_done)(struct scmi_chan_info *cinfo, struct scmi_xfer *xfer); 310 228 }; 311 229 230 + int 
scmi_protocol_device_request(const struct scmi_device_id *id_table); 231 + void scmi_protocol_device_unrequest(const struct scmi_device_id *id_table); 232 + struct scmi_device *scmi_child_dev_find(struct device *parent, 233 + int prot_id, const char *name); 234 + 312 235 /** 313 236 * struct scmi_desc - Description of SoC integration 314 237 * ··· 351 264 void shmem_clear_channel(struct scmi_shared_mem __iomem *shmem); 352 265 bool shmem_poll_done(struct scmi_shared_mem __iomem *shmem, 353 266 struct scmi_xfer *xfer); 267 + 268 + void scmi_notification_instance_data_set(const struct scmi_handle *handle, 269 + void *priv); 270 + void *scmi_notification_instance_data_get(const struct scmi_handle *handle); 354 271 355 272 #endif /* _SCMI_COMMON_H */
+733 -81
drivers/firmware/arm_scmi/driver.c
··· 11 11 * various power domain DVFS including the core/cluster, certain system 12 12 * clocks configuration, thermal sensors and many others. 13 13 * 14 - * Copyright (C) 2018 ARM Ltd. 14 + * Copyright (C) 2018-2021 ARM Ltd. 15 15 */ 16 16 17 17 #include <linux/bitmap.h> 18 + #include <linux/device.h> 18 19 #include <linux/export.h> 20 + #include <linux/idr.h> 19 21 #include <linux/io.h> 20 22 #include <linux/kernel.h> 21 23 #include <linux/ktime.h> 24 + #include <linux/list.h> 22 25 #include <linux/module.h> 23 26 #include <linux/of_address.h> 24 27 #include <linux/of_device.h> 25 28 #include <linux/processor.h> 29 + #include <linux/refcount.h> 26 30 #include <linux/slab.h> 27 31 28 32 #include "common.h" ··· 57 53 /* Track the unique id for the transfers for debug & profiling purpose */ 58 54 static atomic_t transfer_last_id; 59 55 56 + static DEFINE_IDR(scmi_requested_devices); 57 + static DEFINE_MUTEX(scmi_requested_devices_mtx); 58 + 59 + struct scmi_requested_dev { 60 + const struct scmi_device_id *id_table; 61 + struct list_head node; 62 + }; 63 + 60 64 /** 61 65 * struct scmi_xfers_info - Structure to manage transfer information 62 66 * ··· 81 69 }; 82 70 83 71 /** 72 + * struct scmi_protocol_instance - Describe an initialized protocol instance. 73 + * @handle: Reference to the SCMI handle associated to this protocol instance. 74 + * @proto: A reference to the protocol descriptor. 75 + * @gid: A reference for per-protocol devres management. 76 + * @users: A refcount to track effective users of this protocol. 77 + * @priv: Reference for optional protocol private data. 78 + * @ph: An embedded protocol handle that will be passed down to protocol 79 + * initialization code to identify this instance. 80 + * 81 + * Each protocol is initialized independently once for each SCMI platform in 82 + * which is defined by DT and implemented by the SCMI server fw. 
83 + */ 84 + struct scmi_protocol_instance { 85 + const struct scmi_handle *handle; 86 + const struct scmi_protocol *proto; 87 + void *gid; 88 + refcount_t users; 89 + void *priv; 90 + struct scmi_protocol_handle ph; 91 + }; 92 + 93 + #define ph_to_pi(h) container_of(h, struct scmi_protocol_instance, ph) 94 + 95 + /** 84 96 * struct scmi_info - Structure representing a SCMI instance 85 97 * 86 98 * @dev: Device pointer ··· 116 80 * @rx_minfo: Universal Receive Message management info 117 81 * @tx_idr: IDR object to map protocol id to Tx channel info pointer 118 82 * @rx_idr: IDR object to map protocol id to Rx channel info pointer 83 + * @protocols: IDR for protocols' instance descriptors initialized for 84 + * this SCMI instance: populated on protocol's first attempted 85 + * usage. 86 + * @protocols_mtx: A mutex to protect protocols instances initialization. 119 87 * @protocols_imp: List of protocols implemented, currently maximum of 120 88 * MAX_PROTOCOLS_IMP elements allocated by the base protocol 89 + * @active_protocols: IDR storing device_nodes for protocols actually defined 90 + * in the DT and confirmed as implemented by fw. 91 + * @notify_priv: Pointer to private data structure specific to notifications. 
121 92 * @node: List head 122 93 * @users: Number of users of this instance 123 94 */ ··· 137 94 struct scmi_xfers_info rx_minfo; 138 95 struct idr tx_idr; 139 96 struct idr rx_idr; 97 + struct idr protocols; 98 + /* Ensure mutual exclusive access to protocols instance array */ 99 + struct mutex protocols_mtx; 140 100 u8 *protocols_imp; 101 + struct idr active_protocols; 102 + void *notify_priv; 141 103 struct list_head node; 142 104 int users; 143 105 }; ··· 182 134 { 183 135 dev_dbg(dev, "Message ID: %x Sequence ID: %x Protocol: %x\n", 184 136 hdr->id, hdr->seq, hdr->protocol_id); 137 + } 138 + 139 + void scmi_notification_instance_data_set(const struct scmi_handle *handle, 140 + void *priv) 141 + { 142 + struct scmi_info *info = handle_to_scmi_info(handle); 143 + 144 + info->notify_priv = priv; 145 + /* Ensure updated protocol private date are visible */ 146 + smp_wmb(); 147 + } 148 + 149 + void *scmi_notification_instance_data_get(const struct scmi_handle *handle) 150 + { 151 + struct scmi_info *info = handle_to_scmi_info(handle); 152 + 153 + /* Ensure protocols_private_data has been updated */ 154 + smp_rmb(); 155 + return info->notify_priv; 185 156 } 186 157 187 158 /** ··· 383 316 } 384 317 385 318 /** 386 - * scmi_xfer_put() - Release a transmit message 319 + * xfer_put() - Release a transmit message 387 320 * 388 - * @handle: Pointer to SCMI entity handle 321 + * @ph: Pointer to SCMI protocol handle 389 322 * @xfer: message that was reserved by scmi_xfer_get 390 323 */ 391 - void scmi_xfer_put(const struct scmi_handle *handle, struct scmi_xfer *xfer) 324 + static void xfer_put(const struct scmi_protocol_handle *ph, 325 + struct scmi_xfer *xfer) 392 326 { 393 - struct scmi_info *info = handle_to_scmi_info(handle); 327 + const struct scmi_protocol_instance *pi = ph_to_pi(ph); 328 + struct scmi_info *info = handle_to_scmi_info(pi->handle); 394 329 395 330 __scmi_xfer_put(&info->tx_minfo, xfer); 396 331 } ··· 409 340 } 410 341 411 342 /** 412 - * 
scmi_do_xfer() - Do one transfer 343 + * do_xfer() - Do one transfer 413 344 * 414 - * @handle: Pointer to SCMI entity handle 345 + * @ph: Pointer to SCMI protocol handle 415 346 * @xfer: Transfer to initiate and wait for response 416 347 * 417 348 * Return: -ETIMEDOUT in case of no response, if transmit error, 418 349 * return corresponding error, else if all goes well, 419 350 * return 0. 420 351 */ 421 - int scmi_do_xfer(const struct scmi_handle *handle, struct scmi_xfer *xfer) 352 + static int do_xfer(const struct scmi_protocol_handle *ph, 353 + struct scmi_xfer *xfer) 422 354 { 423 355 int ret; 424 356 int timeout; 425 - struct scmi_info *info = handle_to_scmi_info(handle); 357 + const struct scmi_protocol_instance *pi = ph_to_pi(ph); 358 + struct scmi_info *info = handle_to_scmi_info(pi->handle); 426 359 struct device *dev = info->dev; 427 360 struct scmi_chan_info *cinfo; 361 + 362 + /* 363 + * Re-instate protocol id here from protocol handle so that cannot be 364 + * overridden by mistake (or malice) by the protocol code mangling with 365 + * the scmi_xfer structure. 
366 + */ 367 + xfer->hdr.protocol_id = pi->proto->id; 428 368 429 369 cinfo = idr_find(&info->tx_idr, xfer->hdr.protocol_id); 430 370 if (unlikely(!cinfo)) ··· 480 402 return ret; 481 403 } 482 404 483 - void scmi_reset_rx_to_maxsz(const struct scmi_handle *handle, 484 - struct scmi_xfer *xfer) 405 + static void reset_rx_to_maxsz(const struct scmi_protocol_handle *ph, 406 + struct scmi_xfer *xfer) 485 407 { 486 - struct scmi_info *info = handle_to_scmi_info(handle); 408 + const struct scmi_protocol_instance *pi = ph_to_pi(ph); 409 + struct scmi_info *info = handle_to_scmi_info(pi->handle); 487 410 488 411 xfer->rx.len = info->desc->max_msg_size; 489 412 } ··· 492 413 #define SCMI_MAX_RESPONSE_TIMEOUT (2 * MSEC_PER_SEC) 493 414 494 415 /** 495 - * scmi_do_xfer_with_response() - Do one transfer and wait until the delayed 416 + * do_xfer_with_response() - Do one transfer and wait until the delayed 496 417 * response is received 497 418 * 498 - * @handle: Pointer to SCMI entity handle 419 + * @ph: Pointer to SCMI protocol handle 499 420 * @xfer: Transfer to initiate and wait for response 500 421 * 501 422 * Return: -ETIMEDOUT in case of no delayed response, if transmit error, 502 423 * return corresponding error, else if all goes well, return 0. 
503 424 */ 504 - int scmi_do_xfer_with_response(const struct scmi_handle *handle, 505 - struct scmi_xfer *xfer) 425 + static int do_xfer_with_response(const struct scmi_protocol_handle *ph, 426 + struct scmi_xfer *xfer) 506 427 { 507 428 int ret, timeout = msecs_to_jiffies(SCMI_MAX_RESPONSE_TIMEOUT); 429 + const struct scmi_protocol_instance *pi = ph_to_pi(ph); 508 430 DECLARE_COMPLETION_ONSTACK(async_response); 431 + 432 + xfer->hdr.protocol_id = pi->proto->id; 509 433 510 434 xfer->async_done = &async_response; 511 435 512 - ret = scmi_do_xfer(handle, xfer); 436 + ret = do_xfer(ph, xfer); 513 437 if (!ret && !wait_for_completion_timeout(xfer->async_done, timeout)) 514 438 ret = -ETIMEDOUT; 515 439 ··· 521 439 } 522 440 523 441 /** 524 - * scmi_xfer_get_init() - Allocate and initialise one message for transmit 442 + * xfer_get_init() - Allocate and initialise one message for transmit 525 443 * 526 - * @handle: Pointer to SCMI entity handle 444 + * @ph: Pointer to SCMI protocol handle 527 445 * @msg_id: Message identifier 528 - * @prot_id: Protocol identifier for the message 529 446 * @tx_size: transmit message size 530 447 * @rx_size: receive message size 531 448 * @p: pointer to the allocated and initialised message ··· 535 454 * Return: 0 if all went fine with @p pointing to message, else 536 455 * corresponding error. 
537 456 */ 538 - int scmi_xfer_get_init(const struct scmi_handle *handle, u8 msg_id, u8 prot_id, 539 - size_t tx_size, size_t rx_size, struct scmi_xfer **p) 457 + static int xfer_get_init(const struct scmi_protocol_handle *ph, 458 + u8 msg_id, size_t tx_size, size_t rx_size, 459 + struct scmi_xfer **p) 540 460 { 541 461 int ret; 542 462 struct scmi_xfer *xfer; 543 - struct scmi_info *info = handle_to_scmi_info(handle); 463 + const struct scmi_protocol_instance *pi = ph_to_pi(ph); 464 + struct scmi_info *info = handle_to_scmi_info(pi->handle); 544 465 struct scmi_xfers_info *minfo = &info->tx_minfo; 545 466 struct device *dev = info->dev; 546 467 ··· 551 468 tx_size > info->desc->max_msg_size) 552 469 return -ERANGE; 553 470 554 - xfer = scmi_xfer_get(handle, minfo); 471 + xfer = scmi_xfer_get(pi->handle, minfo); 555 472 if (IS_ERR(xfer)) { 556 473 ret = PTR_ERR(xfer); 557 474 dev_err(dev, "failed to get free message slot(%d)\n", ret); ··· 561 478 xfer->tx.len = tx_size; 562 479 xfer->rx.len = rx_size ? : info->desc->max_msg_size; 563 480 xfer->hdr.id = msg_id; 564 - xfer->hdr.protocol_id = prot_id; 481 + xfer->hdr.protocol_id = pi->proto->id; 565 482 xfer->hdr.poll_completion = false; 566 483 567 484 *p = xfer; ··· 570 487 } 571 488 572 489 /** 573 - * scmi_version_get() - command to get the revision of the SCMI entity 490 + * version_get() - command to get the revision of the SCMI entity 574 491 * 575 - * @handle: Pointer to SCMI entity handle 576 - * @protocol: Protocol identifier for the message 492 + * @ph: Pointer to SCMI protocol handle 577 493 * @version: Holds returned version of protocol. 578 494 * 579 495 * Updates the SCMI information in the internal data structure. 580 496 * 581 497 * Return: 0 if all went fine, else return appropriate error. 
582 498 */ 583 - int scmi_version_get(const struct scmi_handle *handle, u8 protocol, 584 - u32 *version) 499 + static int version_get(const struct scmi_protocol_handle *ph, u32 *version) 585 500 { 586 501 int ret; 587 502 __le32 *rev_info; 588 503 struct scmi_xfer *t; 589 504 590 - ret = scmi_xfer_get_init(handle, PROTOCOL_VERSION, protocol, 0, 591 - sizeof(*version), &t); 505 + ret = xfer_get_init(ph, PROTOCOL_VERSION, 0, sizeof(*version), &t); 592 506 if (ret) 593 507 return ret; 594 508 595 - ret = scmi_do_xfer(handle, t); 509 + ret = do_xfer(ph, t); 596 510 if (!ret) { 597 511 rev_info = t->rx.buf; 598 512 *version = le32_to_cpu(*rev_info); 599 513 } 600 514 601 - scmi_xfer_put(handle, t); 515 + xfer_put(ph, t); 602 516 return ret; 603 517 } 604 518 605 - void scmi_setup_protocol_implemented(const struct scmi_handle *handle, 606 - u8 *prot_imp) 519 + /** 520 + * scmi_set_protocol_priv - Set protocol specific data at init time 521 + * 522 + * @ph: A reference to the protocol handle. 523 + * @priv: The private data to set. 524 + * 525 + * Return: 0 on Success 526 + */ 527 + static int scmi_set_protocol_priv(const struct scmi_protocol_handle *ph, 528 + void *priv) 529 + { 530 + struct scmi_protocol_instance *pi = ph_to_pi(ph); 531 + 532 + pi->priv = priv; 533 + 534 + return 0; 535 + } 536 + 537 + /** 538 + * scmi_get_protocol_priv - Set protocol specific data at init time 539 + * 540 + * @ph: A reference to the protocol handle. 541 + * 542 + * Return: Protocol private data if any was set. 
543 + */ 544 + static void *scmi_get_protocol_priv(const struct scmi_protocol_handle *ph) 545 + { 546 + const struct scmi_protocol_instance *pi = ph_to_pi(ph); 547 + 548 + return pi->priv; 549 + } 550 + 551 + static const struct scmi_xfer_ops xfer_ops = { 552 + .version_get = version_get, 553 + .xfer_get_init = xfer_get_init, 554 + .reset_rx_to_maxsz = reset_rx_to_maxsz, 555 + .do_xfer = do_xfer, 556 + .do_xfer_with_response = do_xfer_with_response, 557 + .xfer_put = xfer_put, 558 + }; 559 + 560 + /** 561 + * scmi_revision_area_get - Retrieve version memory area. 562 + * 563 + * @ph: A reference to the protocol handle. 564 + * 565 + * A helper to grab the version memory area reference during SCMI Base protocol 566 + * initialization. 567 + * 568 + * Return: A reference to the version memory area associated to the SCMI 569 + * instance underlying this protocol handle. 570 + */ 571 + struct scmi_revision_info * 572 + scmi_revision_area_get(const struct scmi_protocol_handle *ph) 573 + { 574 + const struct scmi_protocol_instance *pi = ph_to_pi(ph); 575 + 576 + return pi->handle->version; 577 + } 578 + 579 + /** 580 + * scmi_alloc_init_protocol_instance - Allocate and initialize a protocol 581 + * instance descriptor. 582 + * @info: The reference to the related SCMI instance. 583 + * @proto: The protocol descriptor. 584 + * 585 + * Allocate a new protocol instance descriptor, using the provided @proto 586 + * description, against the specified SCMI instance @info, and initialize it; 587 + * all resources management is handled via a dedicated per-protocol devres 588 + * group. 589 + * 590 + * Context: Assumes to be called with @protocols_mtx already acquired. 591 + * Return: A reference to a freshly allocated and initialized protocol instance 592 + * or ERR_PTR on failure. On failure the @proto reference is at first 593 + * put using @scmi_protocol_put() before releasing all the devres group. 
594 + */ 595 + static struct scmi_protocol_instance * 596 + scmi_alloc_init_protocol_instance(struct scmi_info *info, 597 + const struct scmi_protocol *proto) 598 + { 599 + int ret = -ENOMEM; 600 + void *gid; 601 + struct scmi_protocol_instance *pi; 602 + const struct scmi_handle *handle = &info->handle; 603 + 604 + /* Protocol specific devres group */ 605 + gid = devres_open_group(handle->dev, NULL, GFP_KERNEL); 606 + if (!gid) { 607 + scmi_protocol_put(proto->id); 608 + goto out; 609 + } 610 + 611 + pi = devm_kzalloc(handle->dev, sizeof(*pi), GFP_KERNEL); 612 + if (!pi) 613 + goto clean; 614 + 615 + pi->gid = gid; 616 + pi->proto = proto; 617 + pi->handle = handle; 618 + pi->ph.dev = handle->dev; 619 + pi->ph.xops = &xfer_ops; 620 + pi->ph.set_priv = scmi_set_protocol_priv; 621 + pi->ph.get_priv = scmi_get_protocol_priv; 622 + refcount_set(&pi->users, 1); 623 + /* proto->init is assured NON NULL by scmi_protocol_register */ 624 + ret = pi->proto->instance_init(&pi->ph); 625 + if (ret) 626 + goto clean; 627 + 628 + ret = idr_alloc(&info->protocols, pi, proto->id, proto->id + 1, 629 + GFP_KERNEL); 630 + if (ret != proto->id) 631 + goto clean; 632 + 633 + /* 634 + * Warn but ignore events registration errors since we do not want 635 + * to skip whole protocols if their notifications are messed up. 
636 + */ 637 + if (pi->proto->events) { 638 + ret = scmi_register_protocol_events(handle, pi->proto->id, 639 + &pi->ph, 640 + pi->proto->events); 641 + if (ret) 642 + dev_warn(handle->dev, 643 + "Protocol:%X - Events Registration Failed - err:%d\n", 644 + pi->proto->id, ret); 645 + } 646 + 647 + devres_close_group(handle->dev, pi->gid); 648 + dev_dbg(handle->dev, "Initialized protocol: 0x%X\n", pi->proto->id); 649 + 650 + return pi; 651 + 652 + clean: 653 + /* Take care to put the protocol module's owner before releasing all */ 654 + scmi_protocol_put(proto->id); 655 + devres_release_group(handle->dev, gid); 656 + out: 657 + return ERR_PTR(ret); 658 + } 659 + 660 + /** 661 + * scmi_get_protocol_instance - Protocol initialization helper. 662 + * @handle: A reference to the SCMI platform instance. 663 + * @protocol_id: The protocol being requested. 664 + * 665 + * In case the required protocol has never been requested before for this 666 + * instance, allocate and initialize all the needed structures while handling 667 + * resource allocation with a dedicated per-protocol devres subgroup. 668 + * 669 + * Return: A reference to an initialized protocol instance or error on failure: 670 + * in particular returns -EPROBE_DEFER when the desired protocol could 671 + * NOT be found. 
672 + */ 673 + static struct scmi_protocol_instance * __must_check 674 + scmi_get_protocol_instance(const struct scmi_handle *handle, u8 protocol_id) 675 + { 676 + struct scmi_protocol_instance *pi; 677 + struct scmi_info *info = handle_to_scmi_info(handle); 678 + 679 + mutex_lock(&info->protocols_mtx); 680 + pi = idr_find(&info->protocols, protocol_id); 681 + 682 + if (pi) { 683 + refcount_inc(&pi->users); 684 + } else { 685 + const struct scmi_protocol *proto; 686 + 687 + /* Fails if protocol not registered on bus */ 688 + proto = scmi_protocol_get(protocol_id); 689 + if (proto) 690 + pi = scmi_alloc_init_protocol_instance(info, proto); 691 + else 692 + pi = ERR_PTR(-EPROBE_DEFER); 693 + } 694 + mutex_unlock(&info->protocols_mtx); 695 + 696 + return pi; 697 + } 698 + 699 + /** 700 + * scmi_protocol_acquire - Protocol acquire 701 + * @handle: A reference to the SCMI platform instance. 702 + * @protocol_id: The protocol being requested. 703 + * 704 + * Register a new user for the requested protocol on the specified SCMI 705 + * platform instance, possibly triggering its initialization on first user. 706 + * 707 + * Return: 0 if protocol was acquired successfully. 708 + */ 709 + int scmi_protocol_acquire(const struct scmi_handle *handle, u8 protocol_id) 710 + { 711 + return PTR_ERR_OR_ZERO(scmi_get_protocol_instance(handle, protocol_id)); 712 + } 713 + 714 + /** 715 + * scmi_protocol_release - Protocol de-initialization helper. 716 + * @handle: A reference to the SCMI platform instance. 717 + * @protocol_id: The protocol being requested. 718 + * 719 + * Remove one user for the specified protocol and triggers de-initialization 720 + * and resources de-allocation once the last user has gone. 
721 + */ 722 + void scmi_protocol_release(const struct scmi_handle *handle, u8 protocol_id) 607 723 { 608 724 struct scmi_info *info = handle_to_scmi_info(handle); 725 + struct scmi_protocol_instance *pi; 726 + 727 + mutex_lock(&info->protocols_mtx); 728 + pi = idr_find(&info->protocols, protocol_id); 729 + if (WARN_ON(!pi)) 730 + goto out; 731 + 732 + if (refcount_dec_and_test(&pi->users)) { 733 + void *gid = pi->gid; 734 + 735 + if (pi->proto->events) 736 + scmi_deregister_protocol_events(handle, protocol_id); 737 + 738 + if (pi->proto->instance_deinit) 739 + pi->proto->instance_deinit(&pi->ph); 740 + 741 + idr_remove(&info->protocols, protocol_id); 742 + 743 + scmi_protocol_put(protocol_id); 744 + 745 + devres_release_group(handle->dev, gid); 746 + dev_dbg(handle->dev, "De-Initialized protocol: 0x%X\n", 747 + protocol_id); 748 + } 749 + 750 + out: 751 + mutex_unlock(&info->protocols_mtx); 752 + } 753 + 754 + void scmi_setup_protocol_implemented(const struct scmi_protocol_handle *ph, 755 + u8 *prot_imp) 756 + { 757 + const struct scmi_protocol_instance *pi = ph_to_pi(ph); 758 + struct scmi_info *info = handle_to_scmi_info(pi->handle); 609 759 610 760 info->protocols_imp = prot_imp; 611 761 } ··· 856 540 if (info->protocols_imp[i] == prot_id) 857 541 return true; 858 542 return false; 543 + } 544 + 545 + struct scmi_protocol_devres { 546 + const struct scmi_handle *handle; 547 + u8 protocol_id; 548 + }; 549 + 550 + static void scmi_devm_release_protocol(struct device *dev, void *res) 551 + { 552 + struct scmi_protocol_devres *dres = res; 553 + 554 + scmi_protocol_release(dres->handle, dres->protocol_id); 555 + } 556 + 557 + /** 558 + * scmi_devm_protocol_get - Devres managed get protocol operations and handle 559 + * @sdev: A reference to an scmi_device whose embedded struct device is to 560 + * be used for devres accounting. 561 + * @protocol_id: The protocol being requested. 562 + * @ph: A pointer reference used to pass back the associated protocol handle. 
563 + * 564 + * Get hold of a protocol accounting for its usage, eventually triggering its 565 + * initialization, and returning the protocol specific operations and related 566 + * protocol handle which will be used as first argument in most of the 567 + * protocols operations methods. 568 + * Being a devres based managed method, protocol hold will be automatically 569 + * released, and possibly de-initialized on last user, once the SCMI driver 570 + * owning the scmi_device is unbound from it. 571 + * 572 + * Return: A reference to the requested protocol operations or error. 573 + * Must be checked for errors by caller. 574 + */ 575 + static const void __must_check * 576 + scmi_devm_protocol_get(struct scmi_device *sdev, u8 protocol_id, 577 + struct scmi_protocol_handle **ph) 578 + { 579 + struct scmi_protocol_instance *pi; 580 + struct scmi_protocol_devres *dres; 581 + struct scmi_handle *handle = sdev->handle; 582 + 583 + if (!ph) 584 + return ERR_PTR(-EINVAL); 585 + 586 + dres = devres_alloc(scmi_devm_release_protocol, 587 + sizeof(*dres), GFP_KERNEL); 588 + if (!dres) 589 + return ERR_PTR(-ENOMEM); 590 + 591 + pi = scmi_get_protocol_instance(handle, protocol_id); 592 + if (IS_ERR(pi)) { 593 + devres_free(dres); 594 + return pi; 595 + } 596 + 597 + dres->handle = handle; 598 + dres->protocol_id = protocol_id; 599 + devres_add(&sdev->dev, dres); 600 + 601 + *ph = &pi->ph; 602 + 603 + return pi->proto->ops; 604 + } 605 + 606 + static int scmi_devm_protocol_match(struct device *dev, void *res, void *data) 607 + { 608 + struct scmi_protocol_devres *dres = res; 609 + 610 + if (WARN_ON(!dres || !data)) 611 + return 0; 612 + 613 + return dres->protocol_id == *((u8 *)data); 614 + } 615 + 616 + /** 617 + * scmi_devm_protocol_put - Devres managed put protocol operations and handle 618 + * @sdev: A reference to an scmi_device whose embedded struct device is to 619 + * be used for devres accounting. 620 + * @protocol_id: The protocol being requested. 
621 + * 622 + * Explicitly release a protocol hold previously obtained calling the above 623 + * @scmi_devm_protocol_get. 624 + */ 625 + static void scmi_devm_protocol_put(struct scmi_device *sdev, u8 protocol_id) 626 + { 627 + int ret; 628 + 629 + ret = devres_release(&sdev->dev, scmi_devm_release_protocol, 630 + scmi_devm_protocol_match, &protocol_id); 631 + WARN_ON(ret); 632 + } 633 + 634 + static inline 635 + struct scmi_handle *scmi_handle_get_from_info_unlocked(struct scmi_info *info) 636 + { 637 + info->users++; 638 + return &info->handle; 859 639 } 860 640 861 641 /** ··· 975 563 list_for_each(p, &scmi_list) { 976 564 info = list_entry(p, struct scmi_info, node); 977 565 if (dev->parent == info->dev) { 978 - handle = &info->handle; 979 - info->users++; 566 + handle = scmi_handle_get_from_info_unlocked(info); 980 567 break; 981 568 } 982 569 } ··· 1118 707 return ret; 1119 708 } 1120 709 710 + /** 711 + * scmi_get_protocol_device - Helper to get/create an SCMI device. 712 + * 713 + * @np: A device node representing a valid active protocols for the referred 714 + * SCMI instance. 715 + * @info: The referred SCMI instance for which we are getting/creating this 716 + * device. 717 + * @prot_id: The protocol ID. 718 + * @name: The device name. 719 + * 720 + * Referring to the specific SCMI instance identified by @info, this helper 721 + * takes care to return a properly initialized device matching the requested 722 + * @proto_id and @name: if device was still not existent it is created as a 723 + * child of the specified SCMI instance @info and its transport properly 724 + * initialized as usual. 725 + */ 726 + static inline struct scmi_device * 727 + scmi_get_protocol_device(struct device_node *np, struct scmi_info *info, 728 + int prot_id, const char *name) 729 + { 730 + struct scmi_device *sdev; 731 + 732 + /* Already created for this parent SCMI instance ? 
*/ 733 + sdev = scmi_child_dev_find(info->dev, prot_id, name); 734 + if (sdev) 735 + return sdev; 736 + 737 + pr_debug("Creating SCMI device (%s) for protocol %x\n", name, prot_id); 738 + 739 + sdev = scmi_device_create(np, info->dev, prot_id, name); 740 + if (!sdev) { 741 + dev_err(info->dev, "failed to create %d protocol device\n", 742 + prot_id); 743 + return NULL; 744 + } 745 + 746 + if (scmi_txrx_setup(info, &sdev->dev, prot_id)) { 747 + dev_err(&sdev->dev, "failed to setup transport\n"); 748 + scmi_device_destroy(sdev); 749 + return NULL; 750 + } 751 + 752 + return sdev; 753 + } 754 + 1121 755 static inline void 1122 756 scmi_create_protocol_device(struct device_node *np, struct scmi_info *info, 1123 757 int prot_id, const char *name) 1124 758 { 1125 759 struct scmi_device *sdev; 1126 760 1127 - sdev = scmi_device_create(np, info->dev, prot_id, name); 1128 - if (!sdev) { 1129 - dev_err(info->dev, "failed to create %d protocol device\n", 1130 - prot_id); 761 + sdev = scmi_get_protocol_device(np, info, prot_id, name); 762 + if (!sdev) 1131 763 return; 1132 - } 1133 - 1134 - if (scmi_txrx_setup(info, &sdev->dev, prot_id)) { 1135 - dev_err(&sdev->dev, "failed to setup transport\n"); 1136 - scmi_device_destroy(sdev); 1137 - return; 1138 - } 1139 764 1140 765 /* setup handle now as the transport is ready */ 1141 766 scmi_set_handle(sdev); 1142 767 } 1143 768 1144 - #define MAX_SCMI_DEV_PER_PROTOCOL 2 1145 - struct scmi_prot_devnames { 1146 - int protocol_id; 1147 - char *names[MAX_SCMI_DEV_PER_PROTOCOL]; 1148 - }; 1149 - 1150 - static struct scmi_prot_devnames devnames[] = { 1151 - { SCMI_PROTOCOL_POWER, { "genpd" },}, 1152 - { SCMI_PROTOCOL_SYSTEM, { "syspower" },}, 1153 - { SCMI_PROTOCOL_PERF, { "cpufreq" },}, 1154 - { SCMI_PROTOCOL_CLOCK, { "clocks" },}, 1155 - { SCMI_PROTOCOL_SENSOR, { "hwmon", "iiodev" },}, 1156 - { SCMI_PROTOCOL_RESET, { "reset" },}, 1157 - { SCMI_PROTOCOL_VOLTAGE, { "regulator" },}, 1158 - }; 1159 - 1160 - static inline void 1161 - 
scmi_create_protocol_devices(struct device_node *np, struct scmi_info *info, 1162 - int prot_id) 769 + /** 770 + * scmi_create_protocol_devices - Create devices for all pending requests for 771 + * this SCMI instance. 772 + * 773 + * @np: The device node describing the protocol 774 + * @info: The SCMI instance descriptor 775 + * @prot_id: The protocol ID 776 + * 777 + * All devices previously requested for this instance (if any) are found and 778 + * created by scanning the proper @&scmi_requested_devices entry. 779 + */ 780 + static void scmi_create_protocol_devices(struct device_node *np, 781 + struct scmi_info *info, int prot_id) 1163 782 { 1164 - int loop, cnt; 783 + struct list_head *phead; 1165 784 1166 - for (loop = 0; loop < ARRAY_SIZE(devnames); loop++) { 1167 - if (devnames[loop].protocol_id != prot_id) 1168 - continue; 785 + mutex_lock(&scmi_requested_devices_mtx); 786 + phead = idr_find(&scmi_requested_devices, prot_id); 787 + if (phead) { 788 + struct scmi_requested_dev *rdev; 1169 789 1170 - for (cnt = 0; cnt < ARRAY_SIZE(devnames[loop].names); cnt++) { 1171 - const char *name = devnames[loop].names[cnt]; 790 + list_for_each_entry(rdev, phead, node) 791 + scmi_create_protocol_device(np, info, prot_id, 792 + rdev->id_table->name); 793 + } 794 + mutex_unlock(&scmi_requested_devices_mtx); 795 + } 1172 796 1173 - if (name) 1174 - scmi_create_protocol_device(np, info, prot_id, 1175 - name); 797 + /** 798 + * scmi_protocol_device_request - Helper to request a device 799 + * 800 + * @id_table: A protocol/name pair descriptor for the device to be created. 801 + * 802 + * This helper let an SCMI driver request specific devices identified by the 803 + * @id_table to be created for each active SCMI instance. 
804 + * 805 + * The requested device name MUST NOT be already existent for any protocol; 806 + * at first the freshly requested @id_table is annotated in the IDR table 807 + * @scmi_requested_devices, then a matching device is created for each already 808 + * active SCMI instance. (if any) 809 + * 810 + * This way the requested device is created straight-away for all the already 811 + * initialized(probed) SCMI instances (handles) and it remains also annotated 812 + * as pending creation if the requesting SCMI driver was loaded before some 813 + * SCMI instance and related transports were available: when such late instance 814 + * is probed, its probe will take care to scan the list of pending requested 815 + * devices and create those on its own (see @scmi_create_protocol_devices and 816 + * its enclosing loop) 817 + * 818 + * Return: 0 on Success 819 + */ 820 + int scmi_protocol_device_request(const struct scmi_device_id *id_table) 821 + { 822 + int ret = 0; 823 + unsigned int id = 0; 824 + struct list_head *head, *phead = NULL; 825 + struct scmi_requested_dev *rdev; 826 + struct scmi_info *info; 827 + 828 + pr_debug("Requesting SCMI device (%s) for protocol %x\n", 829 + id_table->name, id_table->protocol_id); 830 + 831 + /* 832 + * Search for the matching protocol rdev list and then search 833 + * of any existent equally named device...fails if any duplicate found. 
834 + */ 835 + mutex_lock(&scmi_requested_devices_mtx); 836 + idr_for_each_entry(&scmi_requested_devices, head, id) { 837 + if (!phead) { 838 + /* A list found registered in the IDR is never empty */ 839 + rdev = list_first_entry(head, struct scmi_requested_dev, 840 + node); 841 + if (rdev->id_table->protocol_id == 842 + id_table->protocol_id) 843 + phead = head; 844 + } 845 + list_for_each_entry(rdev, head, node) { 846 + if (!strcmp(rdev->id_table->name, id_table->name)) { 847 + pr_err("Ignoring duplicate request [%d] %s\n", 848 + rdev->id_table->protocol_id, 849 + rdev->id_table->name); 850 + ret = -EINVAL; 851 + goto out; 852 + } 1176 853 } 1177 854 } 855 + 856 + /* 857 + * No duplicate found for requested id_table, so let's create a new 858 + * requested device entry for this new valid request. 859 + */ 860 + rdev = kzalloc(sizeof(*rdev), GFP_KERNEL); 861 + if (!rdev) { 862 + ret = -ENOMEM; 863 + goto out; 864 + } 865 + rdev->id_table = id_table; 866 + 867 + /* 868 + * Append the new requested device table descriptor to the head of the 869 + * related protocol list, eventually creating such head if not already 870 + * there. 871 + */ 872 + if (!phead) { 873 + phead = kzalloc(sizeof(*phead), GFP_KERNEL); 874 + if (!phead) { 875 + kfree(rdev); 876 + ret = -ENOMEM; 877 + goto out; 878 + } 879 + INIT_LIST_HEAD(phead); 880 + 881 + ret = idr_alloc(&scmi_requested_devices, (void *)phead, 882 + id_table->protocol_id, 883 + id_table->protocol_id + 1, GFP_KERNEL); 884 + if (ret != id_table->protocol_id) { 885 + pr_err("Failed to save SCMI device - ret:%d\n", ret); 886 + kfree(rdev); 887 + kfree(phead); 888 + ret = -EINVAL; 889 + goto out; 890 + } 891 + ret = 0; 892 + } 893 + list_add(&rdev->node, phead); 894 + 895 + /* 896 + * Now effectively create and initialize the requested device for every 897 + * already initialized SCMI instance which has registered the requested 898 + * protocol as a valid active one: i.e. 
defined in DT and supported by 899 + * current platform FW. 900 + */ 901 + mutex_lock(&scmi_list_mutex); 902 + list_for_each_entry(info, &scmi_list, node) { 903 + struct device_node *child; 904 + 905 + child = idr_find(&info->active_protocols, 906 + id_table->protocol_id); 907 + if (child) { 908 + struct scmi_device *sdev; 909 + 910 + sdev = scmi_get_protocol_device(child, info, 911 + id_table->protocol_id, 912 + id_table->name); 913 + /* Set handle if not already set: device existed */ 914 + if (sdev && !sdev->handle) 915 + sdev->handle = 916 + scmi_handle_get_from_info_unlocked(info); 917 + } else { 918 + dev_err(info->dev, 919 + "Failed. SCMI protocol %d not active.\n", 920 + id_table->protocol_id); 921 + } 922 + } 923 + mutex_unlock(&scmi_list_mutex); 924 + 925 + out: 926 + mutex_unlock(&scmi_requested_devices_mtx); 927 + 928 + return ret; 929 + } 930 + 931 + /** 932 + * scmi_protocol_device_unrequest - Helper to unrequest a device 933 + * 934 + * @id_table: A protocol/name pair descriptor for the device to be unrequested. 935 + * 936 + * An helper to let an SCMI driver release its request about devices; note that 937 + * devices are created and initialized once the first SCMI driver request them 938 + * but they destroyed only on SCMI core unloading/unbinding. 939 + * 940 + * The current SCMI transport layer uses such devices as internal references and 941 + * as such they could be shared as same transport between multiple drivers so 942 + * that cannot be safely destroyed till the whole SCMI stack is removed. 943 + * (unless adding further burden of refcounting.) 
944 + */ 945 + void scmi_protocol_device_unrequest(const struct scmi_device_id *id_table) 946 + { 947 + struct list_head *phead; 948 + 949 + pr_debug("Unrequesting SCMI device (%s) for protocol %x\n", 950 + id_table->name, id_table->protocol_id); 951 + 952 + mutex_lock(&scmi_requested_devices_mtx); 953 + phead = idr_find(&scmi_requested_devices, id_table->protocol_id); 954 + if (phead) { 955 + struct scmi_requested_dev *victim, *tmp; 956 + 957 + list_for_each_entry_safe(victim, tmp, phead, node) { 958 + if (!strcmp(victim->id_table->name, id_table->name)) { 959 + list_del(&victim->node); 960 + kfree(victim); 961 + break; 962 + } 963 + } 964 + 965 + if (list_empty(phead)) { 966 + idr_remove(&scmi_requested_devices, 967 + id_table->protocol_id); 968 + kfree(phead); 969 + } 970 + } 971 + mutex_unlock(&scmi_requested_devices_mtx); 1178 972 } 1179 973 1180 974 static int scmi_probe(struct platform_device *pdev) ··· 1402 786 info->dev = dev; 1403 787 info->desc = desc; 1404 788 INIT_LIST_HEAD(&info->node); 789 + idr_init(&info->protocols); 790 + mutex_init(&info->protocols_mtx); 791 + idr_init(&info->active_protocols); 1405 792 1406 793 platform_set_drvdata(pdev, info); 1407 794 idr_init(&info->tx_idr); ··· 1413 794 handle = &info->handle; 1414 795 handle->dev = info->dev; 1415 796 handle->version = &info->version; 797 + handle->devm_protocol_get = scmi_devm_protocol_get; 798 + handle->devm_protocol_put = scmi_devm_protocol_put; 1416 799 1417 800 ret = scmi_txrx_setup(info, dev, SCMI_PROTOCOL_BASE); 1418 801 if (ret) ··· 1427 806 if (scmi_notification_init(handle)) 1428 807 dev_err(dev, "SCMI Notifications NOT available.\n"); 1429 808 1430 - ret = scmi_base_protocol_init(handle); 809 + /* 810 + * Trigger SCMI Base protocol initialization. 811 + * It's mandatory and won't be ever released/deinit until the 812 + * SCMI stack is shutdown/unloaded as a whole. 
813 + */ 814 + ret = scmi_protocol_acquire(handle, SCMI_PROTOCOL_BASE); 1431 815 if (ret) { 1432 - dev_err(dev, "unable to communicate with SCMI(%d)\n", ret); 816 + dev_err(dev, "unable to communicate with SCMI\n"); 1433 817 return ret; 1434 818 } 1435 819 ··· 1457 831 continue; 1458 832 } 1459 833 834 + /* 835 + * Save this valid DT protocol descriptor amongst 836 + * @active_protocols for this SCMI instance/ 837 + */ 838 + ret = idr_alloc(&info->active_protocols, child, 839 + prot_id, prot_id + 1, GFP_KERNEL); 840 + if (ret != prot_id) { 841 + dev_err(dev, "SCMI protocol %d already activated. Skip\n", 842 + prot_id); 843 + continue; 844 + } 845 + 846 + of_node_get(child); 1460 847 scmi_create_protocol_devices(child, info, prot_id); 1461 848 } 1462 849 ··· 1483 844 1484 845 static int scmi_remove(struct platform_device *pdev) 1485 846 { 1486 - int ret = 0; 847 + int ret = 0, id; 1487 848 struct scmi_info *info = platform_get_drvdata(pdev); 1488 849 struct idr *idr = &info->tx_idr; 850 + struct device_node *child; 1489 851 1490 852 mutex_lock(&scmi_list_mutex); 1491 853 if (info->users) ··· 1499 859 return ret; 1500 860 1501 861 scmi_notification_exit(&info->handle); 862 + 863 + mutex_lock(&info->protocols_mtx); 864 + idr_destroy(&info->protocols); 865 + mutex_unlock(&info->protocols_mtx); 866 + 867 + idr_for_each_entry(&info->active_protocols, child, id) 868 + of_node_put(child); 869 + idr_destroy(&info->active_protocols); 1502 870 1503 871 /* Safe to free channels since no more users */ 1504 872 ret = idr_for_each(idr, info->desc->ops->chan_free, idr); ··· 1590 942 { 1591 943 scmi_bus_init(); 1592 944 945 + scmi_base_register(); 946 + 1593 947 scmi_clock_register(); 1594 948 scmi_perf_register(); 1595 949 scmi_power_register(); ··· 1606 956 1607 957 static void __exit scmi_driver_exit(void) 1608 958 { 1609 - scmi_bus_exit(); 959 + scmi_base_unregister(); 1610 960 1611 961 scmi_clock_unregister(); 1612 962 scmi_perf_unregister(); ··· 1615 965 
scmi_sensors_unregister(); 1616 966 scmi_voltage_unregister(); 1617 967 scmi_system_unregister(); 968 + 969 + scmi_bus_exit(); 1618 970 1619 971 platform_driver_unregister(&scmi_driver); 1620 972 }
drivers/firmware/arm_scmi/notify.c (+253 -77)
··· 2 2 /* 3 3 * System Control and Management Interface (SCMI) Notification support 4 4 * 5 - * Copyright (C) 2020 ARM Ltd. 5 + * Copyright (C) 2020-2021 ARM Ltd. 6 6 */ 7 7 /** 8 8 * DOC: Theory of operation ··· 91 91 #include <linux/types.h> 92 92 #include <linux/workqueue.h> 93 93 94 + #include "common.h" 94 95 #include "notify.h" 95 96 96 97 #define SCMI_MAX_PROTO 256 ··· 178 177 #define REVT_NOTIFY_SET_STATUS(revt, eid, sid, state) \ 179 178 ({ \ 180 179 typeof(revt) r = revt; \ 181 - r->proto->ops->set_notify_enabled(r->proto->ni->handle, \ 180 + r->proto->ops->set_notify_enabled(r->proto->ph, \ 182 181 (eid), (sid), (state)); \ 183 182 }) 184 183 ··· 191 190 #define REVT_FILL_REPORT(revt, ...) \ 192 191 ({ \ 193 192 typeof(revt) r = revt; \ 194 - r->proto->ops->fill_custom_report(r->proto->ni->handle, \ 193 + r->proto->ops->fill_custom_report(r->proto->ph, \ 195 194 __VA_ARGS__); \ 196 195 }) 197 196 ··· 279 278 * events' descriptors, whose fixed-size is determined at 280 279 * compile time. 
281 280 * @registered_mtx: A mutex to protect @registered_events_handlers 281 + * @ph: SCMI protocol handle reference 282 282 * @registered_events_handlers: An hashtable containing all events' handlers 283 283 * descriptors registered for this protocol 284 284 * ··· 304 302 struct scmi_registered_event **registered_events; 305 303 /* mutex to protect registered_events_handlers */ 306 304 struct mutex registered_mtx; 305 + const struct scmi_protocol_handle *ph; 307 306 DECLARE_HASHTABLE(registered_events_handlers, SCMI_REGISTERED_HASH_SZ); 308 307 }; 309 308 ··· 371 368 scmi_get_active_handler(struct scmi_notify_instance *ni, u32 evt_key); 372 369 static void scmi_put_active_handler(struct scmi_notify_instance *ni, 373 370 struct scmi_event_handler *hndl); 374 - static void scmi_put_handler_unlocked(struct scmi_notify_instance *ni, 371 + static bool scmi_put_handler_unlocked(struct scmi_notify_instance *ni, 375 372 struct scmi_event_handler *hndl); 376 373 377 374 /** ··· 582 579 struct scmi_event_header eh; 583 580 struct scmi_notify_instance *ni; 584 581 585 - /* Ensure notify_priv is updated */ 586 - smp_rmb(); 587 - if (!handle->notify_priv) 582 + ni = scmi_notification_instance_data_get(handle); 583 + if (!ni) 588 584 return 0; 589 - ni = handle->notify_priv; 590 585 591 586 r_evt = SCMI_GET_REVT(ni, proto_id, evt_id); 592 587 if (!r_evt) ··· 733 732 /** 734 733 * scmi_register_protocol_events() - Register Protocol Events with the core 735 734 * @handle: The handle identifying the platform instance against which the 736 - * the protocol's events are registered 735 + * protocol's events are registered 737 736 * @proto_id: Protocol ID 738 - * @queue_sz: Size in bytes of the associated queue to be allocated 739 - * @ops: Protocol specific event-related operations 740 - * @evt: Event descriptor array 741 - * @num_events: Number of events in @evt array 742 - * @num_sources: Number of possible sources for this protocol on this 743 - * platform. 
737 + * @ph: SCMI protocol handle. 738 + * @ee: A structure describing the events supported by this protocol. 744 739 * 745 740 * Used by SCMI Protocols initialization code to register with the notification 746 741 * core the list of supported events and their descriptors: takes care to ··· 745 748 * 746 749 * Return: 0 on Success 747 750 */ 748 - int scmi_register_protocol_events(const struct scmi_handle *handle, 749 - u8 proto_id, size_t queue_sz, 750 - const struct scmi_event_ops *ops, 751 - const struct scmi_event *evt, int num_events, 752 - int num_sources) 751 + int scmi_register_protocol_events(const struct scmi_handle *handle, u8 proto_id, 752 + const struct scmi_protocol_handle *ph, 753 + const struct scmi_protocol_events *ee) 753 754 { 754 755 int i; 756 + unsigned int num_sources; 755 757 size_t payld_sz = 0; 756 758 struct scmi_registered_events_desc *pd; 757 759 struct scmi_notify_instance *ni; 760 + const struct scmi_event *evt; 758 761 759 - if (!ops || !evt) 762 + if (!ee || !ee->ops || !ee->evts || !ph || 763 + (!ee->num_sources && !ee->ops->get_num_sources)) 760 764 return -EINVAL; 761 765 762 - /* Ensure notify_priv is updated */ 763 - smp_rmb(); 764 - if (!handle->notify_priv) 765 - return -ENOMEM; 766 - ni = handle->notify_priv; 767 - 768 - /* Attach to the notification main devres group */ 769 - if (!devres_open_group(ni->handle->dev, ni->gid, GFP_KERNEL)) 766 + ni = scmi_notification_instance_data_get(handle); 767 + if (!ni) 770 768 return -ENOMEM; 771 769 772 - for (i = 0; i < num_events; i++) 770 + /* num_sources cannot be <= 0 */ 771 + if (ee->num_sources) { 772 + num_sources = ee->num_sources; 773 + } else { 774 + int nsrc = ee->ops->get_num_sources(ph); 775 + 776 + if (nsrc <= 0) 777 + return -EINVAL; 778 + num_sources = nsrc; 779 + } 780 + 781 + evt = ee->evts; 782 + for (i = 0; i < ee->num_events; i++) 773 783 payld_sz = max_t(size_t, payld_sz, evt[i].max_payld_sz); 774 784 payld_sz += sizeof(struct scmi_event_header); 775 785 776 - pd 
= scmi_allocate_registered_events_desc(ni, proto_id, queue_sz, 777 - payld_sz, num_events, ops); 786 + pd = scmi_allocate_registered_events_desc(ni, proto_id, ee->queue_sz, 787 + payld_sz, ee->num_events, 788 + ee->ops); 778 789 if (IS_ERR(pd)) 779 - goto err; 790 + return PTR_ERR(pd); 780 791 781 - for (i = 0; i < num_events; i++, evt++) { 792 + pd->ph = ph; 793 + for (i = 0; i < ee->num_events; i++, evt++) { 782 794 struct scmi_registered_event *r_evt; 783 795 784 796 r_evt = devm_kzalloc(ni->handle->dev, sizeof(*r_evt), 785 797 GFP_KERNEL); 786 798 if (!r_evt) 787 - goto err; 799 + return -ENOMEM; 788 800 r_evt->proto = pd; 789 801 r_evt->evt = evt; 790 802 791 803 r_evt->sources = devm_kcalloc(ni->handle->dev, num_sources, 792 804 sizeof(refcount_t), GFP_KERNEL); 793 805 if (!r_evt->sources) 794 - goto err; 806 + return -ENOMEM; 795 807 r_evt->num_sources = num_sources; 796 808 mutex_init(&r_evt->sources_mtx); 797 809 798 810 r_evt->report = devm_kzalloc(ni->handle->dev, 799 811 evt->max_report_sz, GFP_KERNEL); 800 812 if (!r_evt->report) 801 - goto err; 813 + return -ENOMEM; 802 814 803 815 pd->registered_events[i] = r_evt; 804 816 /* Ensure events are updated */ ··· 821 815 /* Ensure protocols are updated */ 822 816 smp_wmb(); 823 817 824 - devres_close_group(ni->handle->dev, ni->gid); 825 - 826 818 /* 827 819 * Finalize any pending events' handler which could have been waiting 828 820 * for this protocol's events registration. 
··· 828 824 schedule_work(&ni->init_work); 829 825 830 826 return 0; 827 + } 831 828 832 - err: 833 - dev_warn(handle->dev, "Proto:%X - Registration Failed !\n", proto_id); 834 - /* A failing protocol registration does not trigger full failure */ 835 - devres_close_group(ni->handle->dev, ni->gid); 829 + /** 830 + * scmi_deregister_protocol_events - Deregister protocol events with the core 831 + * @handle: The handle identifying the platform instance against which the 832 + * protocol's events are registered 833 + * @proto_id: Protocol ID 834 + */ 835 + void scmi_deregister_protocol_events(const struct scmi_handle *handle, 836 + u8 proto_id) 837 + { 838 + struct scmi_notify_instance *ni; 839 + struct scmi_registered_events_desc *pd; 836 840 837 - return -ENOMEM; 841 + ni = scmi_notification_instance_data_get(handle); 842 + if (!ni) 843 + return; 844 + 845 + pd = ni->registered_protocols[proto_id]; 846 + if (!pd) 847 + return; 848 + 849 + ni->registered_protocols[proto_id] = NULL; 850 + /* Ensure protocols are updated */ 851 + smp_wmb(); 852 + 853 + cancel_work_sync(&pd->equeue.notify_work); 838 854 } 839 855 840 856 /** ··· 924 900 if (!r_evt) 925 901 return -EINVAL; 926 902 927 - /* Remove from pending and insert into registered */ 903 + /* 904 + * Remove from pending and insert into registered while getting hold 905 + * of protocol instance. 906 + */ 928 907 hash_del(&hndl->hash); 908 + /* 909 + * Acquire protocols only for NON pending handlers, so as NOT to trigger 910 + * protocol initialization when a notifier is registered against a still 911 + * not registered protocol, since it would make little sense to force init 912 + * protocols for which still no SCMI driver user exists: they wouldn't 913 + * emit any event anyway till some SCMI driver starts using it. 
914 + */ 915 + scmi_protocol_acquire(ni->handle, KEY_XTRACT_PROTO_ID(hndl->key)); 929 916 hndl->r_evt = r_evt; 917 + 930 918 mutex_lock(&r_evt->proto->registered_mtx); 931 919 hash_add(r_evt->proto->registered_events_handlers, 932 920 &hndl->hash, hndl->key); ··· 1229 1193 * * unregister and free the handler itself 1230 1194 * 1231 1195 * Context: Assumes all the proper locking has been managed by the caller. 1196 + * 1197 + * Return: True if handler was freed (users dropped to zero) 1232 1198 */ 1233 - static void scmi_put_handler_unlocked(struct scmi_notify_instance *ni, 1199 + static bool scmi_put_handler_unlocked(struct scmi_notify_instance *ni, 1234 1200 struct scmi_event_handler *hndl) 1235 1201 { 1202 + bool freed = false; 1203 + 1236 1204 if (refcount_dec_and_test(&hndl->users)) { 1237 1205 if (!IS_HNDL_PENDING(hndl)) 1238 1206 scmi_disable_events(hndl); 1239 1207 scmi_free_event_handler(hndl); 1208 + freed = true; 1240 1209 } 1210 + 1211 + return freed; 1241 1212 } 1242 1213 1243 1214 static void scmi_put_handler(struct scmi_notify_instance *ni, 1244 1215 struct scmi_event_handler *hndl) 1245 1216 { 1217 + bool freed; 1218 + u8 protocol_id; 1246 1219 struct scmi_registered_event *r_evt = hndl->r_evt; 1247 1220 1248 1221 mutex_lock(&ni->pending_mtx); 1249 - if (r_evt) 1222 + if (r_evt) { 1223 + protocol_id = r_evt->proto->id; 1250 1224 mutex_lock(&r_evt->proto->registered_mtx); 1225 + } 1251 1226 1252 - scmi_put_handler_unlocked(ni, hndl); 1227 + freed = scmi_put_handler_unlocked(ni, hndl); 1253 1228 1254 - if (r_evt) 1229 + if (r_evt) { 1255 1230 mutex_unlock(&r_evt->proto->registered_mtx); 1231 + /* 1232 + * Only registered handler acquired protocol; must be here 1233 + * released only AFTER unlocking registered_mtx, since 1234 + * releasing a protocol can trigger its de-initialization 1235 + * (ie. 
including r_evt and registered_mtx) 1236 + */ 1237 + if (freed) 1238 + scmi_protocol_release(ni->handle, protocol_id); 1239 + } 1256 1240 mutex_unlock(&ni->pending_mtx); 1257 1241 } 1258 1242 1259 1243 static void scmi_put_active_handler(struct scmi_notify_instance *ni, 1260 1244 struct scmi_event_handler *hndl) 1261 1245 { 1246 + bool freed; 1262 1247 struct scmi_registered_event *r_evt = hndl->r_evt; 1248 + u8 protocol_id = r_evt->proto->id; 1263 1249 1264 1250 mutex_lock(&r_evt->proto->registered_mtx); 1265 - scmi_put_handler_unlocked(ni, hndl); 1251 + freed = scmi_put_handler_unlocked(ni, hndl); 1266 1252 mutex_unlock(&r_evt->proto->registered_mtx); 1253 + if (freed) 1254 + scmi_protocol_release(ni->handle, protocol_id); 1267 1255 } 1268 1256 1269 1257 /** ··· 1307 1247 } 1308 1248 1309 1249 /** 1310 - * scmi_register_notifier() - Register a notifier_block for an event 1250 + * scmi_notifier_register() - Register a notifier_block for an event 1311 1251 * @handle: The handle identifying the platform instance against which the 1312 1252 * callback is registered 1313 1253 * @proto_id: Protocol ID ··· 1339 1279 * 1340 1280 * Return: 0 on Success 1341 1281 */ 1342 - static int scmi_register_notifier(const struct scmi_handle *handle, 1343 - u8 proto_id, u8 evt_id, u32 *src_id, 1282 + static int scmi_notifier_register(const struct scmi_handle *handle, 1283 + u8 proto_id, u8 evt_id, const u32 *src_id, 1344 1284 struct notifier_block *nb) 1345 1285 { 1346 1286 int ret = 0; ··· 1348 1288 struct scmi_event_handler *hndl; 1349 1289 struct scmi_notify_instance *ni; 1350 1290 1351 - /* Ensure notify_priv is updated */ 1352 - smp_rmb(); 1353 - if (!handle->notify_priv) 1291 + ni = scmi_notification_instance_data_get(handle); 1292 + if (!ni) 1354 1293 return -ENODEV; 1355 - ni = handle->notify_priv; 1356 1294 1357 1295 evt_key = MAKE_HASH_KEY(proto_id, evt_id, 1358 1296 src_id ? 
*src_id : SRC_ID_MASK); ··· 1371 1313 } 1372 1314 1373 1315 /** 1374 - * scmi_unregister_notifier() - Unregister a notifier_block for an event 1316 + * scmi_notifier_unregister() - Unregister a notifier_block for an event 1375 1317 * @handle: The handle identifying the platform instance against which the 1376 1318 * callback is unregistered 1377 1319 * @proto_id: Protocol ID ··· 1386 1328 * 1387 1329 * Return: 0 on Success 1388 1330 */ 1389 - static int scmi_unregister_notifier(const struct scmi_handle *handle, 1390 - u8 proto_id, u8 evt_id, u32 *src_id, 1331 + static int scmi_notifier_unregister(const struct scmi_handle *handle, 1332 + u8 proto_id, u8 evt_id, const u32 *src_id, 1391 1333 struct notifier_block *nb) 1392 1334 { 1393 1335 u32 evt_key; 1394 1336 struct scmi_event_handler *hndl; 1395 1337 struct scmi_notify_instance *ni; 1396 1338 1397 - /* Ensure notify_priv is updated */ 1398 - smp_rmb(); 1399 - if (!handle->notify_priv) 1339 + ni = scmi_notification_instance_data_get(handle); 1340 + if (!ni) 1400 1341 return -ENODEV; 1401 - ni = handle->notify_priv; 1402 1342 1403 1343 evt_key = MAKE_HASH_KEY(proto_id, evt_id, 1404 1344 src_id ? *src_id : SRC_ID_MASK); ··· 1412 1356 scmi_put_handler(ni, hndl); 1413 1357 1414 1358 /* 1415 - * This balances the initial get issued in @scmi_register_notifier. 1359 + * This balances the initial get issued in @scmi_notifier_register. 1416 1360 * If this notifier_block happened to be the last known user callback 1417 1361 * for this event, the handler is here freed and the event's generation 1418 1362 * stopped. 
··· 1425 1369 scmi_put_handler(ni, hndl); 1426 1370 1427 1371 return 0; 1372 + } 1373 + 1374 + struct scmi_notifier_devres { 1375 + const struct scmi_handle *handle; 1376 + u8 proto_id; 1377 + u8 evt_id; 1378 + u32 __src_id; 1379 + u32 *src_id; 1380 + struct notifier_block *nb; 1381 + }; 1382 + 1383 + static void scmi_devm_release_notifier(struct device *dev, void *res) 1384 + { 1385 + struct scmi_notifier_devres *dres = res; 1386 + 1387 + scmi_notifier_unregister(dres->handle, dres->proto_id, dres->evt_id, 1388 + dres->src_id, dres->nb); 1389 + } 1390 + 1391 + /** 1392 + * scmi_devm_notifier_register() - Managed registration of a notifier_block 1393 + * for an event 1394 + * @sdev: A reference to an scmi_device whose embedded struct device is to 1395 + * be used for devres accounting. 1396 + * @proto_id: Protocol ID 1397 + * @evt_id: Event ID 1398 + * @src_id: Source ID, when NULL register for events coming from ALL possible 1399 + * sources 1400 + * @nb: A standard notifier block to register for the specified event 1401 + * 1402 + * Generic devres managed helper to register a notifier_block against a 1403 + * protocol event.
1404 + */ 1405 + static int scmi_devm_notifier_register(struct scmi_device *sdev, 1406 + u8 proto_id, u8 evt_id, 1407 + const u32 *src_id, 1408 + struct notifier_block *nb) 1409 + { 1410 + int ret; 1411 + struct scmi_notifier_devres *dres; 1412 + 1413 + dres = devres_alloc(scmi_devm_release_notifier, 1414 + sizeof(*dres), GFP_KERNEL); 1415 + if (!dres) 1416 + return -ENOMEM; 1417 + 1418 + ret = scmi_notifier_register(sdev->handle, proto_id, 1419 + evt_id, src_id, nb); 1420 + if (ret) { 1421 + devres_free(dres); 1422 + return ret; 1423 + } 1424 + 1425 + dres->handle = sdev->handle; 1426 + dres->proto_id = proto_id; 1427 + dres->evt_id = evt_id; 1428 + dres->nb = nb; 1429 + if (src_id) { 1430 + dres->__src_id = *src_id; 1431 + dres->src_id = &dres->__src_id; 1432 + } else { 1433 + dres->src_id = NULL; 1434 + } 1435 + devres_add(&sdev->dev, dres); 1436 + 1437 + return ret; 1438 + } 1439 + 1440 + static int scmi_devm_notifier_match(struct device *dev, void *res, void *data) 1441 + { 1442 + struct scmi_notifier_devres *dres = res; 1443 + struct scmi_notifier_devres *xres = data; 1444 + 1445 + if (WARN_ON(!dres || !xres)) 1446 + return 0; 1447 + 1448 + return dres->proto_id == xres->proto_id && 1449 + dres->evt_id == xres->evt_id && 1450 + dres->nb == xres->nb && 1451 + ((!dres->src_id && !xres->src_id) || 1452 + (dres->src_id && xres->src_id && 1453 + dres->__src_id == xres->__src_id)); 1454 + } 1455 + 1456 + /** 1457 + * scmi_devm_notifier_unregister() - Managed un-registration of a 1458 + * notifier_block for an event 1459 + * @sdev: A reference to an scmi_device whose embedded struct device is to 1460 + * be used for devres accounting. 
1461 + * @proto_id: Protocol ID 1462 + * @evt_id: Event ID 1463 + * @src_id: Source ID, when NULL unregister events coming from ALL possible 1464 + * sources 1465 + * @nb: A standard notifier block to unregister for the specified event 1466 + * 1467 + * Generic devres managed helper to explicitly un-register a notifier_block 1468 + * against a protocol event, which was previously registered using the above 1469 + * @scmi_devm_notifier_register. 1470 + */ 1471 + static int scmi_devm_notifier_unregister(struct scmi_device *sdev, 1472 + u8 proto_id, u8 evt_id, 1473 + const u32 *src_id, 1474 + struct notifier_block *nb) 1475 + { 1476 + int ret; 1477 + struct scmi_notifier_devres dres; 1478 + 1479 + dres.handle = sdev->handle; 1480 + dres.proto_id = proto_id; 1481 + dres.evt_id = evt_id; 1482 + if (src_id) { 1483 + dres.__src_id = *src_id; 1484 + dres.src_id = &dres.__src_id; 1485 + } else { 1486 + dres.src_id = NULL; 1487 + } 1488 + 1489 + ret = devres_release(&sdev->dev, scmi_devm_release_notifier, 1490 + scmi_devm_notifier_match, &dres); 1491 + 1492 + WARN_ON(ret); 1493 + 1494 + return ret; 1428 1495 } 1429 1496 1430 1497 /** ··· 1607 1428 * directly from an scmi_driver to register its own notifiers. 
1608 1429 */ 1609 1430 static const struct scmi_notify_ops notify_ops = { 1610 - .register_event_notifier = scmi_register_notifier, 1611 - .unregister_event_notifier = scmi_unregister_notifier, 1431 + .devm_event_notifier_register = scmi_devm_notifier_register, 1432 + .devm_event_notifier_unregister = scmi_devm_notifier_unregister, 1433 + .event_notifier_register = scmi_notifier_register, 1434 + .event_notifier_unregister = scmi_notifier_unregister, 1612 1435 }; 1613 1436 1614 1437 /** ··· 1671 1490 1672 1491 INIT_WORK(&ni->init_work, scmi_protocols_late_init); 1673 1492 1493 + scmi_notification_instance_data_set(handle, ni); 1674 1494 handle->notify_ops = &notify_ops; 1675 - handle->notify_priv = ni; 1676 1495 /* Ensure handle is up to date */ 1677 1496 smp_wmb(); 1678 1497 ··· 1684 1503 1685 1504 err: 1686 1505 dev_warn(handle->dev, "Initialization Failed.\n"); 1687 - devres_release_group(handle->dev, NULL); 1506 + devres_release_group(handle->dev, gid); 1688 1507 return -ENOMEM; 1689 1508 } 1690 1509 ··· 1696 1515 { 1697 1516 struct scmi_notify_instance *ni; 1698 1517 1699 - /* Ensure notify_priv is updated */ 1700 - smp_rmb(); 1701 - if (!handle->notify_priv) 1518 + ni = scmi_notification_instance_data_get(handle); 1519 + if (!ni) 1702 1520 return; 1703 - ni = handle->notify_priv; 1704 - 1705 - handle->notify_priv = NULL; 1706 - /* Ensure handle is up to date */ 1707 - smp_wmb(); 1521 + scmi_notification_instance_data_set(handle, NULL); 1708 1522 1709 1523 /* Destroy while letting pending work complete */ 1710 1524 destroy_workqueue(ni->notify_wq);
+32 -8
drivers/firmware/arm_scmi/notify.h
··· 4 4 * notification header file containing some definitions, structures 5 5 * and function prototypes related to SCMI Notification handling. 6 6 * 7 - * Copyright (C) 2020 ARM Ltd. 7 + * Copyright (C) 2020-2021 ARM Ltd. 8 8 */ 9 9 #ifndef _SCMI_NOTIFY_H 10 10 #define _SCMI_NOTIFY_H ··· 31 31 size_t max_report_sz; 32 32 }; 33 33 34 + struct scmi_protocol_handle; 35 + 34 36 /** 35 37 * struct scmi_event_ops - Protocol helpers called by the notification core. 38 + * @get_num_sources: Returns the number of possible events' sources for this 39 + * protocol 36 40 * @set_notify_enabled: Enable/disable the required evt_id/src_id notifications 37 41 * using the proper custom protocol commands. 38 42 * Return 0 on Success ··· 50 46 * process context. 51 47 */ 52 48 struct scmi_event_ops { 53 - int (*set_notify_enabled)(const struct scmi_handle *handle, 49 + int (*get_num_sources)(const struct scmi_protocol_handle *ph); 50 + int (*set_notify_enabled)(const struct scmi_protocol_handle *ph, 54 51 u8 evt_id, u32 src_id, bool enabled); 55 - void *(*fill_custom_report)(const struct scmi_handle *handle, 52 + void *(*fill_custom_report)(const struct scmi_protocol_handle *ph, 56 53 u8 evt_id, ktime_t timestamp, 57 54 const void *payld, size_t payld_sz, 58 55 void *report, u32 *src_id); 59 56 }; 60 57 58 + /** 59 + * struct scmi_protocol_events - Per-protocol description of available events 60 + * @queue_sz: Size in bytes of the per-protocol queue to use. 61 + * @ops: Array of protocol-specific events operations. 62 + * @evts: Array of supported protocol's events. 63 + * @num_events: Number of supported protocol's events described in @evts. 64 + * @num_sources: Number of protocol's sources, should be greater than 0; if not 65 + * available at compile time, it will be provided at run-time via 66 + * @get_num_sources. 
67 + */ 68 + struct scmi_protocol_events { 69 + size_t queue_sz; 70 + const struct scmi_event_ops *ops; 71 + const struct scmi_event *evts; 72 + unsigned int num_events; 73 + unsigned int num_sources; 74 + }; 75 + 61 76 int scmi_notification_init(struct scmi_handle *handle); 62 77 void scmi_notification_exit(struct scmi_handle *handle); 63 78 64 - int scmi_register_protocol_events(const struct scmi_handle *handle, 65 - u8 proto_id, size_t queue_sz, 66 - const struct scmi_event_ops *ops, 67 - const struct scmi_event *evt, int num_events, 68 - int num_sources); 79 + struct scmi_protocol_handle; 80 + int scmi_register_protocol_events(const struct scmi_handle *handle, u8 proto_id, 81 + const struct scmi_protocol_handle *ph, 82 + const struct scmi_protocol_events *ee); 83 + void scmi_deregister_protocol_events(const struct scmi_handle *handle, 84 + u8 proto_id); 69 85 int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id, 70 86 const void *buf, size_t len, ktime_t ts); 71 87
+139 -123
drivers/firmware/arm_scmi/perf.c
··· 2 2 /* 3 3 * System Control and Management Interface (SCMI) Performance Protocol 4 4 * 5 - * Copyright (C) 2018 ARM Ltd. 5 + * Copyright (C) 2018-2021 ARM Ltd. 6 6 */ 7 7 8 8 #define pr_fmt(fmt) "SCMI Notifications PERF - " fmt ··· 11 11 #include <linux/of.h> 12 12 #include <linux/io.h> 13 13 #include <linux/io-64-nonatomic-hi-lo.h> 14 + #include <linux/module.h> 14 15 #include <linux/platform_device.h> 15 16 #include <linux/pm_opp.h> 16 17 #include <linux/scmi_protocol.h> ··· 176 175 PERF_NOTIFY_LEVEL, 177 176 }; 178 177 179 - static int scmi_perf_attributes_get(const struct scmi_handle *handle, 178 + static int scmi_perf_attributes_get(const struct scmi_protocol_handle *ph, 180 179 struct scmi_perf_info *pi) 181 180 { 182 181 int ret; 183 182 struct scmi_xfer *t; 184 183 struct scmi_msg_resp_perf_attributes *attr; 185 184 186 - ret = scmi_xfer_get_init(handle, PROTOCOL_ATTRIBUTES, 187 - SCMI_PROTOCOL_PERF, 0, sizeof(*attr), &t); 185 + ret = ph->xops->xfer_get_init(ph, PROTOCOL_ATTRIBUTES, 0, 186 + sizeof(*attr), &t); 188 187 if (ret) 189 188 return ret; 190 189 191 190 attr = t->rx.buf; 192 191 193 - ret = scmi_do_xfer(handle, t); 192 + ret = ph->xops->do_xfer(ph, t); 194 193 if (!ret) { 195 194 u16 flags = le16_to_cpu(attr->flags); 196 195 ··· 201 200 pi->stats_size = le32_to_cpu(attr->stats_size); 202 201 } 203 202 204 - scmi_xfer_put(handle, t); 203 + ph->xops->xfer_put(ph, t); 205 204 return ret; 206 205 } 207 206 208 207 static int 209 - scmi_perf_domain_attributes_get(const struct scmi_handle *handle, u32 domain, 210 - struct perf_dom_info *dom_info) 208 + scmi_perf_domain_attributes_get(const struct scmi_protocol_handle *ph, 209 + u32 domain, struct perf_dom_info *dom_info) 211 210 { 212 211 int ret; 213 212 struct scmi_xfer *t; 214 213 struct scmi_msg_resp_perf_domain_attributes *attr; 215 214 216 - ret = scmi_xfer_get_init(handle, PERF_DOMAIN_ATTRIBUTES, 217 - SCMI_PROTOCOL_PERF, sizeof(domain), 218 - sizeof(*attr), &t); 215 + ret = 
ph->xops->xfer_get_init(ph, PERF_DOMAIN_ATTRIBUTES, 216 + sizeof(domain), sizeof(*attr), &t); 219 217 if (ret) 220 218 return ret; 221 219 222 220 put_unaligned_le32(domain, t->tx.buf); 223 221 attr = t->rx.buf; 224 222 225 - ret = scmi_do_xfer(handle, t); 223 + ret = ph->xops->do_xfer(ph, t); 226 224 if (!ret) { 227 225 u32 flags = le32_to_cpu(attr->flags); 228 226 ··· 245 245 strlcpy(dom_info->name, attr->name, SCMI_MAX_STR_SIZE); 246 246 } 247 247 248 - scmi_xfer_put(handle, t); 248 + ph->xops->xfer_put(ph, t); 249 249 return ret; 250 250 } 251 251 ··· 257 257 } 258 258 259 259 static int 260 - scmi_perf_describe_levels_get(const struct scmi_handle *handle, u32 domain, 260 + scmi_perf_describe_levels_get(const struct scmi_protocol_handle *ph, u32 domain, 261 261 struct perf_dom_info *perf_dom) 262 262 { 263 263 int ret, cnt; ··· 268 268 struct scmi_msg_perf_describe_levels *dom_info; 269 269 struct scmi_msg_resp_perf_describe_levels *level_info; 270 270 271 - ret = scmi_xfer_get_init(handle, PERF_DESCRIBE_LEVELS, 272 - SCMI_PROTOCOL_PERF, sizeof(*dom_info), 0, &t); 271 + ret = ph->xops->xfer_get_init(ph, PERF_DESCRIBE_LEVELS, 272 + sizeof(*dom_info), 0, &t); 273 273 if (ret) 274 274 return ret; 275 275 ··· 281 281 /* Set the number of OPPs to be skipped/already read */ 282 282 dom_info->level_index = cpu_to_le32(tot_opp_cnt); 283 283 284 - ret = scmi_do_xfer(handle, t); 284 + ret = ph->xops->do_xfer(ph, t); 285 285 if (ret) 286 286 break; 287 287 288 288 num_returned = le16_to_cpu(level_info->num_returned); 289 289 num_remaining = le16_to_cpu(level_info->num_remaining); 290 290 if (tot_opp_cnt + num_returned > MAX_OPPS) { 291 - dev_err(handle->dev, "No. of OPPs exceeded MAX_OPPS"); 291 + dev_err(ph->dev, "No. 
of OPPs exceeded MAX_OPPS"); 292 292 break; 293 293 } 294 294 ··· 299 299 opp->trans_latency_us = le16_to_cpu 300 300 (level_info->opp[cnt].transition_latency_us); 301 301 302 - dev_dbg(handle->dev, "Level %d Power %d Latency %dus\n", 302 + dev_dbg(ph->dev, "Level %d Power %d Latency %dus\n", 303 303 opp->perf, opp->power, opp->trans_latency_us); 304 304 } 305 305 306 306 tot_opp_cnt += num_returned; 307 307 308 - scmi_reset_rx_to_maxsz(handle, t); 308 + ph->xops->reset_rx_to_maxsz(ph, t); 309 309 /* 310 310 * check for both returned and remaining to avoid infinite 311 311 * loop due to buggy firmware ··· 313 313 } while (num_returned && num_remaining); 314 314 315 315 perf_dom->opp_count = tot_opp_cnt; 316 - scmi_xfer_put(handle, t); 316 + ph->xops->xfer_put(ph, t); 317 317 318 318 sort(perf_dom->opp, tot_opp_cnt, sizeof(*opp), opp_cmp_func, NULL); 319 319 return ret; ··· 353 353 #endif 354 354 } 355 355 356 - static int scmi_perf_mb_limits_set(const struct scmi_handle *handle, u32 domain, 357 - u32 max_perf, u32 min_perf) 356 + static int scmi_perf_mb_limits_set(const struct scmi_protocol_handle *ph, 357 + u32 domain, u32 max_perf, u32 min_perf) 358 358 { 359 359 int ret; 360 360 struct scmi_xfer *t; 361 361 struct scmi_perf_set_limits *limits; 362 362 363 - ret = scmi_xfer_get_init(handle, PERF_LIMITS_SET, SCMI_PROTOCOL_PERF, 364 - sizeof(*limits), 0, &t); 363 + ret = ph->xops->xfer_get_init(ph, PERF_LIMITS_SET, 364 + sizeof(*limits), 0, &t); 365 365 if (ret) 366 366 return ret; 367 367 ··· 370 370 limits->max_level = cpu_to_le32(max_perf); 371 371 limits->min_level = cpu_to_le32(min_perf); 372 372 373 - ret = scmi_do_xfer(handle, t); 373 + ret = ph->xops->do_xfer(ph, t); 374 374 375 - scmi_xfer_put(handle, t); 375 + ph->xops->xfer_put(ph, t); 376 376 return ret; 377 377 } 378 378 379 - static int scmi_perf_limits_set(const struct scmi_handle *handle, u32 domain, 380 - u32 max_perf, u32 min_perf) 379 + static int scmi_perf_limits_set(const struct 
scmi_protocol_handle *ph, 380 + u32 domain, u32 max_perf, u32 min_perf) 381 381 { 382 - struct scmi_perf_info *pi = handle->perf_priv; 382 + struct scmi_perf_info *pi = ph->get_priv(ph); 383 383 struct perf_dom_info *dom = pi->dom_info + domain; 384 384 385 385 if (dom->fc_info && dom->fc_info->limit_set_addr) { ··· 389 389 return 0; 390 390 } 391 391 392 - return scmi_perf_mb_limits_set(handle, domain, max_perf, min_perf); 392 + return scmi_perf_mb_limits_set(ph, domain, max_perf, min_perf); 393 393 } 394 394 395 - static int scmi_perf_mb_limits_get(const struct scmi_handle *handle, u32 domain, 396 - u32 *max_perf, u32 *min_perf) 395 + static int scmi_perf_mb_limits_get(const struct scmi_protocol_handle *ph, 396 + u32 domain, u32 *max_perf, u32 *min_perf) 397 397 { 398 398 int ret; 399 399 struct scmi_xfer *t; 400 400 struct scmi_perf_get_limits *limits; 401 401 402 - ret = scmi_xfer_get_init(handle, PERF_LIMITS_GET, SCMI_PROTOCOL_PERF, 403 - sizeof(__le32), 0, &t); 402 + ret = ph->xops->xfer_get_init(ph, PERF_LIMITS_GET, 403 + sizeof(__le32), 0, &t); 404 404 if (ret) 405 405 return ret; 406 406 407 407 put_unaligned_le32(domain, t->tx.buf); 408 408 409 - ret = scmi_do_xfer(handle, t); 409 + ret = ph->xops->do_xfer(ph, t); 410 410 if (!ret) { 411 411 limits = t->rx.buf; 412 412 ··· 414 414 *min_perf = le32_to_cpu(limits->min_level); 415 415 } 416 416 417 - scmi_xfer_put(handle, t); 417 + ph->xops->xfer_put(ph, t); 418 418 return ret; 419 419 } 420 420 421 - static int scmi_perf_limits_get(const struct scmi_handle *handle, u32 domain, 422 - u32 *max_perf, u32 *min_perf) 421 + static int scmi_perf_limits_get(const struct scmi_protocol_handle *ph, 422 + u32 domain, u32 *max_perf, u32 *min_perf) 423 423 { 424 - struct scmi_perf_info *pi = handle->perf_priv; 424 + struct scmi_perf_info *pi = ph->get_priv(ph); 425 425 struct perf_dom_info *dom = pi->dom_info + domain; 426 426 427 427 if (dom->fc_info && dom->fc_info->limit_get_addr) { ··· 430 430 return 0; 431 431 } 432 
432 433 - return scmi_perf_mb_limits_get(handle, domain, max_perf, min_perf); 433 + return scmi_perf_mb_limits_get(ph, domain, max_perf, min_perf); 434 434 } 435 435 436 - static int scmi_perf_mb_level_set(const struct scmi_handle *handle, u32 domain, 437 - u32 level, bool poll) 436 + static int scmi_perf_mb_level_set(const struct scmi_protocol_handle *ph, 437 + u32 domain, u32 level, bool poll) 438 438 { 439 439 int ret; 440 440 struct scmi_xfer *t; 441 441 struct scmi_perf_set_level *lvl; 442 442 443 - ret = scmi_xfer_get_init(handle, PERF_LEVEL_SET, SCMI_PROTOCOL_PERF, 444 - sizeof(*lvl), 0, &t); 443 + ret = ph->xops->xfer_get_init(ph, PERF_LEVEL_SET, sizeof(*lvl), 0, &t); 445 444 if (ret) 446 445 return ret; 447 446 ··· 449 450 lvl->domain = cpu_to_le32(domain); 450 451 lvl->level = cpu_to_le32(level); 451 452 452 - ret = scmi_do_xfer(handle, t); 453 + ret = ph->xops->do_xfer(ph, t); 453 454 454 - scmi_xfer_put(handle, t); 455 + ph->xops->xfer_put(ph, t); 455 456 return ret; 456 457 } 457 458 458 - static int scmi_perf_level_set(const struct scmi_handle *handle, u32 domain, 459 - u32 level, bool poll) 459 + static int scmi_perf_level_set(const struct scmi_protocol_handle *ph, 460 + u32 domain, u32 level, bool poll) 460 461 { 461 - struct scmi_perf_info *pi = handle->perf_priv; 462 + struct scmi_perf_info *pi = ph->get_priv(ph); 462 463 struct perf_dom_info *dom = pi->dom_info + domain; 463 464 464 465 if (dom->fc_info && dom->fc_info->level_set_addr) { ··· 467 468 return 0; 468 469 } 469 470 470 - return scmi_perf_mb_level_set(handle, domain, level, poll); 471 + return scmi_perf_mb_level_set(ph, domain, level, poll); 471 472 } 472 473 473 - static int scmi_perf_mb_level_get(const struct scmi_handle *handle, u32 domain, 474 - u32 *level, bool poll) 474 + static int scmi_perf_mb_level_get(const struct scmi_protocol_handle *ph, 475 + u32 domain, u32 *level, bool poll) 475 476 { 476 477 int ret; 477 478 struct scmi_xfer *t; 478 479 479 - ret = 
-        ret = scmi_xfer_get_init(handle, PERF_LEVEL_GET, SCMI_PROTOCOL_PERF,
-                                 sizeof(u32), sizeof(u32), &t);
+        ret = ph->xops->xfer_get_init(ph, PERF_LEVEL_GET,
+                                      sizeof(u32), sizeof(u32), &t);
         if (ret)
                 return ret;
 
         t->hdr.poll_completion = poll;
         put_unaligned_le32(domain, t->tx.buf);
 
-        ret = scmi_do_xfer(handle, t);
+        ret = ph->xops->do_xfer(ph, t);
         if (!ret)
                 *level = get_unaligned_le32(t->rx.buf);
 
-        scmi_xfer_put(handle, t);
+        ph->xops->xfer_put(ph, t);
         return ret;
 }
 
-static int scmi_perf_level_get(const struct scmi_handle *handle, u32 domain,
-                               u32 *level, bool poll)
+static int scmi_perf_level_get(const struct scmi_protocol_handle *ph,
+                               u32 domain, u32 *level, bool poll)
 {
-        struct scmi_perf_info *pi = handle->perf_priv;
+        struct scmi_perf_info *pi = ph->get_priv(ph);
         struct perf_dom_info *dom = pi->dom_info + domain;
 
         if (dom->fc_info && dom->fc_info->level_get_addr) {
···
                 return 0;
         }
 
-        return scmi_perf_mb_level_get(handle, domain, level, poll);
+        return scmi_perf_mb_level_get(ph, domain, level, poll);
 }
 
-static int scmi_perf_level_limits_notify(const struct scmi_handle *handle,
+static int scmi_perf_level_limits_notify(const struct scmi_protocol_handle *ph,
                                          u32 domain, int message_id,
                                          bool enable)
 {
···
         struct scmi_xfer *t;
         struct scmi_perf_notify_level_or_limits *notify;
 
-        ret = scmi_xfer_get_init(handle, message_id, SCMI_PROTOCOL_PERF,
-                                 sizeof(*notify), 0, &t);
+        ret = ph->xops->xfer_get_init(ph, message_id, sizeof(*notify), 0, &t);
         if (ret)
                 return ret;
 
···
         notify->domain = cpu_to_le32(domain);
         notify->notify_enable = enable ? cpu_to_le32(BIT(0)) : 0;
 
-        ret = scmi_do_xfer(handle, t);
+        ret = ph->xops->do_xfer(ph, t);
 
-        scmi_xfer_put(handle, t);
+        ph->xops->xfer_put(ph, t);
         return ret;
 }
 
···
 }
 
 static void
-scmi_perf_domain_desc_fc(const struct scmi_handle *handle, u32 domain,
+scmi_perf_domain_desc_fc(const struct scmi_protocol_handle *ph, u32 domain,
                          u32 message_id, void __iomem **p_addr,
                          struct scmi_fc_db_info **p_db)
 {
···
         if (!p_addr)
                 return;
 
-        ret = scmi_xfer_get_init(handle, PERF_DESCRIBE_FASTCHANNEL,
-                                 SCMI_PROTOCOL_PERF,
-                                 sizeof(*info), sizeof(*resp), &t);
+        ret = ph->xops->xfer_get_init(ph, PERF_DESCRIBE_FASTCHANNEL,
+                                      sizeof(*info), sizeof(*resp), &t);
         if (ret)
                 return;
 
···
         info->domain = cpu_to_le32(domain);
         info->message_id = cpu_to_le32(message_id);
 
-        ret = scmi_do_xfer(handle, t);
+        ret = ph->xops->do_xfer(ph, t);
         if (ret)
                 goto err_xfer;
 
···
         phys_addr = le32_to_cpu(resp->chan_addr_low);
         phys_addr |= (u64)le32_to_cpu(resp->chan_addr_high) << 32;
-        addr = devm_ioremap(handle->dev, phys_addr, size);
+        addr = devm_ioremap(ph->dev, phys_addr, size);
         if (!addr)
                 goto err_xfer;
         *p_addr = addr;
 
         if (p_db && SUPPORTS_DOORBELL(flags)) {
-                db = devm_kzalloc(handle->dev, sizeof(*db), GFP_KERNEL);
+                db = devm_kzalloc(ph->dev, sizeof(*db), GFP_KERNEL);
                 if (!db)
                         goto err_xfer;
 
                 size = 1 << DOORBELL_REG_WIDTH(flags);
                 phys_addr = le32_to_cpu(resp->db_addr_low);
                 phys_addr |= (u64)le32_to_cpu(resp->db_addr_high) << 32;
-                addr = devm_ioremap(handle->dev, phys_addr, size);
+                addr = devm_ioremap(ph->dev, phys_addr, size);
                 if (!addr)
                         goto err_xfer;
 
···
                 *p_db = db;
         }
 err_xfer:
-        scmi_xfer_put(handle, t);
+        ph->xops->xfer_put(ph, t);
 }
 
-static void scmi_perf_domain_init_fc(const struct scmi_handle *handle,
+static void scmi_perf_domain_init_fc(const struct scmi_protocol_handle *ph,
                                      u32 domain, struct scmi_fc_info **p_fc)
 {
         struct scmi_fc_info *fc;
 
-        fc = devm_kzalloc(handle->dev, sizeof(*fc), GFP_KERNEL);
+        fc = devm_kzalloc(ph->dev, sizeof(*fc), GFP_KERNEL);
         if (!fc)
                 return;
 
-        scmi_perf_domain_desc_fc(handle, domain, PERF_LEVEL_SET,
+        scmi_perf_domain_desc_fc(ph, domain, PERF_LEVEL_SET,
                                  &fc->level_set_addr, &fc->level_set_db);
-        scmi_perf_domain_desc_fc(handle, domain, PERF_LEVEL_GET,
+        scmi_perf_domain_desc_fc(ph, domain, PERF_LEVEL_GET,
                                  &fc->level_get_addr, NULL);
-        scmi_perf_domain_desc_fc(handle, domain, PERF_LIMITS_SET,
+        scmi_perf_domain_desc_fc(ph, domain, PERF_LIMITS_SET,
                                  &fc->limit_set_addr, &fc->limit_set_db);
-        scmi_perf_domain_desc_fc(handle, domain, PERF_LIMITS_GET,
+        scmi_perf_domain_desc_fc(ph, domain, PERF_LIMITS_GET,
                                  &fc->limit_get_addr, NULL);
         *p_fc = fc;
 }
···
         return clkspec.args[0];
 }
 
-static int scmi_dvfs_device_opps_add(const struct scmi_handle *handle,
+static int scmi_dvfs_device_opps_add(const struct scmi_protocol_handle *ph,
                                      struct device *dev)
 {
         int idx, ret, domain;
         unsigned long freq;
         struct scmi_opp *opp;
         struct perf_dom_info *dom;
-        struct scmi_perf_info *pi = handle->perf_priv;
+        struct scmi_perf_info *pi = ph->get_priv(ph);
 
         domain = scmi_dev_domain_id(dev);
         if (domain < 0)
···
         return 0;
 }
 
-static int scmi_dvfs_transition_latency_get(const struct scmi_handle *handle,
-                                            struct device *dev)
+static int
+scmi_dvfs_transition_latency_get(const struct scmi_protocol_handle *ph,
+                                 struct device *dev)
 {
         struct perf_dom_info *dom;
-        struct scmi_perf_info *pi = handle->perf_priv;
+        struct scmi_perf_info *pi = ph->get_priv(ph);
         int domain = scmi_dev_domain_id(dev);
 
         if (domain < 0)
···
         return dom->opp[dom->opp_count - 1].trans_latency_us * 1000;
 }
 
-static int scmi_dvfs_freq_set(const struct scmi_handle *handle, u32 domain,
+static int scmi_dvfs_freq_set(const struct scmi_protocol_handle *ph, u32 domain,
                               unsigned long freq, bool poll)
 {
-        struct scmi_perf_info *pi = handle->perf_priv;
+        struct scmi_perf_info *pi = ph->get_priv(ph);
         struct perf_dom_info *dom = pi->dom_info + domain;
 
-        return scmi_perf_level_set(handle, domain, freq / dom->mult_factor,
-                                   poll);
+        return scmi_perf_level_set(ph, domain, freq / dom->mult_factor, poll);
 }
 
-static int scmi_dvfs_freq_get(const struct scmi_handle *handle, u32 domain,
+static int scmi_dvfs_freq_get(const struct scmi_protocol_handle *ph, u32 domain,
                               unsigned long *freq, bool poll)
 {
         int ret;
         u32 level;
-        struct scmi_perf_info *pi = handle->perf_priv;
+        struct scmi_perf_info *pi = ph->get_priv(ph);
         struct perf_dom_info *dom = pi->dom_info + domain;
 
-        ret = scmi_perf_level_get(handle, domain, &level, poll);
+        ret = scmi_perf_level_get(ph, domain, &level, poll);
         if (!ret)
                 *freq = level * dom->mult_factor;
 
         return ret;
 }
 
-static int scmi_dvfs_est_power_get(const struct scmi_handle *handle, u32 domain,
-                                   unsigned long *freq, unsigned long *power)
+static int scmi_dvfs_est_power_get(const struct scmi_protocol_handle *ph,
+                                   u32 domain, unsigned long *freq,
+                                   unsigned long *power)
 {
-        struct scmi_perf_info *pi = handle->perf_priv;
+        struct scmi_perf_info *pi = ph->get_priv(ph);
         struct perf_dom_info *dom;
         unsigned long opp_freq;
         int idx, ret = -EINVAL;
···
         return ret;
 }
 
-static bool scmi_fast_switch_possible(const struct scmi_handle *handle,
+static bool scmi_fast_switch_possible(const struct scmi_protocol_handle *ph,
                                       struct device *dev)
 {
         struct perf_dom_info *dom;
-        struct scmi_perf_info *pi = handle->perf_priv;
+        struct scmi_perf_info *pi = ph->get_priv(ph);
 
         dom = pi->dom_info + scmi_dev_domain_id(dev);
 
         return dom->fc_info && dom->fc_info->level_set_addr;
 }
 
-static bool scmi_power_scale_mw_get(const struct scmi_handle *handle)
+static bool scmi_power_scale_mw_get(const struct scmi_protocol_handle *ph)
 {
-        struct scmi_perf_info *pi = handle->perf_priv;
+        struct scmi_perf_info *pi = ph->get_priv(ph);
 
         return pi->power_scale_mw;
 }
 
-static const struct scmi_perf_ops perf_ops = {
+static const struct scmi_perf_proto_ops perf_proto_ops = {
         .limits_set = scmi_perf_limits_set,
         .limits_get = scmi_perf_limits_get,
         .level_set = scmi_perf_level_set,
···
         .power_scale_mw_get = scmi_power_scale_mw_get,
 };
 
-static int scmi_perf_set_notify_enabled(const struct scmi_handle *handle,
+static int scmi_perf_set_notify_enabled(const struct scmi_protocol_handle *ph,
                                         u8 evt_id, u32 src_id, bool enable)
 {
         int ret, cmd_id;
···
                 return -EINVAL;
 
         cmd_id = evt_2_cmd[evt_id];
-        ret = scmi_perf_level_limits_notify(handle, src_id, cmd_id, enable);
+        ret = scmi_perf_level_limits_notify(ph, src_id, cmd_id, enable);
         if (ret)
                 pr_debug("FAIL_ENABLED - evt[%X] dom[%d] - ret:%d\n",
                          evt_id, src_id, ret);
···
         return ret;
 }
 
-static void *scmi_perf_fill_custom_report(const struct scmi_handle *handle,
+static void
+*scmi_perf_fill_custom_report(const struct scmi_protocol_handle *ph,
                                           u8 evt_id, ktime_t timestamp,
                                           const void *payld, size_t payld_sz,
                                           void *report, u32 *src_id)
···
         return rep;
 }
 
+static int scmi_perf_get_num_sources(const struct scmi_protocol_handle *ph)
+{
+        struct scmi_perf_info *pi = ph->get_priv(ph);
+
+        if (!pi)
+                return -EINVAL;
+
+        return pi->num_domains;
+}
+
 static const struct scmi_event perf_events[] = {
         {
                 .id = SCMI_EVENT_PERFORMANCE_LIMITS_CHANGED,
···
 };
 
 static const struct scmi_event_ops perf_event_ops = {
+        .get_num_sources = scmi_perf_get_num_sources,
         .set_notify_enabled = scmi_perf_set_notify_enabled,
         .fill_custom_report = scmi_perf_fill_custom_report,
 };
 
-static int scmi_perf_protocol_init(struct scmi_handle *handle)
+static const struct scmi_protocol_events perf_protocol_events = {
+        .queue_sz = SCMI_PROTO_QUEUE_SZ,
+        .ops = &perf_event_ops,
+        .evts = perf_events,
+        .num_events = ARRAY_SIZE(perf_events),
+};
+
+static int scmi_perf_protocol_init(const struct scmi_protocol_handle *ph)
 {
         int domain;
         u32 version;
         struct scmi_perf_info *pinfo;
 
-        scmi_version_get(handle, SCMI_PROTOCOL_PERF, &version);
+        ph->xops->version_get(ph, &version);
 
-        dev_dbg(handle->dev, "Performance Version %d.%d\n",
+        dev_dbg(ph->dev, "Performance Version %d.%d\n",
                 PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));
 
-        pinfo = devm_kzalloc(handle->dev, sizeof(*pinfo), GFP_KERNEL);
+        pinfo = devm_kzalloc(ph->dev, sizeof(*pinfo), GFP_KERNEL);
         if (!pinfo)
                 return -ENOMEM;
 
-        scmi_perf_attributes_get(handle, pinfo);
+        scmi_perf_attributes_get(ph, pinfo);
 
-        pinfo->dom_info = devm_kcalloc(handle->dev, pinfo->num_domains,
+        pinfo->dom_info = devm_kcalloc(ph->dev, pinfo->num_domains,
                                        sizeof(*pinfo->dom_info), GFP_KERNEL);
         if (!pinfo->dom_info)
                 return -ENOMEM;
···
         for (domain = 0; domain < pinfo->num_domains; domain++) {
                 struct perf_dom_info *dom = pinfo->dom_info + domain;
 
-                scmi_perf_domain_attributes_get(handle, domain, dom);
-                scmi_perf_describe_levels_get(handle, domain, dom);
+                scmi_perf_domain_attributes_get(ph, domain, dom);
+                scmi_perf_describe_levels_get(ph, domain, dom);
 
                 if (dom->perf_fastchannels)
-                        scmi_perf_domain_init_fc(handle, domain, &dom->fc_info);
+                        scmi_perf_domain_init_fc(ph, domain, &dom->fc_info);
         }
 
-        scmi_register_protocol_events(handle,
-                                      SCMI_PROTOCOL_PERF, SCMI_PROTO_QUEUE_SZ,
-                                      &perf_event_ops, perf_events,
-                                      ARRAY_SIZE(perf_events),
-                                      pinfo->num_domains);
-
         pinfo->version = version;
-        handle->perf_ops = &perf_ops;
-        handle->perf_priv = pinfo;
 
-        return 0;
+        return ph->set_priv(ph, pinfo);
 }
 
-DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(SCMI_PROTOCOL_PERF, perf)
+static const struct scmi_protocol scmi_perf = {
+        .id = SCMI_PROTOCOL_PERF,
+        .owner = THIS_MODULE,
+        .instance_init = &scmi_perf_protocol_init,
+        .ops = &perf_proto_ops,
+        .events = &perf_protocol_events,
+};
+
+DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(perf, scmi_perf)
+76 -58
drivers/firmware/arm_scmi/power.c
···
 /*
  * System Control and Management Interface (SCMI) Power Protocol
  *
- * Copyright (C) 2018 ARM Ltd.
+ * Copyright (C) 2018-2021 ARM Ltd.
  */
 
 #define pr_fmt(fmt) "SCMI Notifications POWER - " fmt
 
+#include <linux/module.h>
 #include <linux/scmi_protocol.h>
 
 #include "common.h"
···
         struct power_dom_info *dom_info;
 };
 
-static int scmi_power_attributes_get(const struct scmi_handle *handle,
+static int scmi_power_attributes_get(const struct scmi_protocol_handle *ph,
                                      struct scmi_power_info *pi)
 {
         int ret;
         struct scmi_xfer *t;
         struct scmi_msg_resp_power_attributes *attr;
 
-        ret = scmi_xfer_get_init(handle, PROTOCOL_ATTRIBUTES,
-                                 SCMI_PROTOCOL_POWER, 0, sizeof(*attr), &t);
+        ret = ph->xops->xfer_get_init(ph, PROTOCOL_ATTRIBUTES,
+                                      0, sizeof(*attr), &t);
         if (ret)
                 return ret;
 
         attr = t->rx.buf;
 
-        ret = scmi_do_xfer(handle, t);
+        ret = ph->xops->do_xfer(ph, t);
         if (!ret) {
                 pi->num_domains = le16_to_cpu(attr->num_domains);
                 pi->stats_addr = le32_to_cpu(attr->stats_addr_low) |
···
                 pi->stats_size = le32_to_cpu(attr->stats_size);
         }
 
-        scmi_xfer_put(handle, t);
+        ph->xops->xfer_put(ph, t);
         return ret;
 }
 
 static int
-scmi_power_domain_attributes_get(const struct scmi_handle *handle, u32 domain,
-                                 struct power_dom_info *dom_info)
+scmi_power_domain_attributes_get(const struct scmi_protocol_handle *ph,
+                                 u32 domain, struct power_dom_info *dom_info)
 {
         int ret;
         struct scmi_xfer *t;
         struct scmi_msg_resp_power_domain_attributes *attr;
 
-        ret = scmi_xfer_get_init(handle, POWER_DOMAIN_ATTRIBUTES,
-                                 SCMI_PROTOCOL_POWER, sizeof(domain),
-                                 sizeof(*attr), &t);
+        ret = ph->xops->xfer_get_init(ph, POWER_DOMAIN_ATTRIBUTES,
+                                      sizeof(domain), sizeof(*attr), &t);
         if (ret)
                 return ret;
 
         put_unaligned_le32(domain, t->tx.buf);
         attr = t->rx.buf;
 
-        ret = scmi_do_xfer(handle, t);
+        ret = ph->xops->do_xfer(ph, t);
         if (!ret) {
                 u32 flags = le32_to_cpu(attr->flags);
 
···
                 strlcpy(dom_info->name, attr->name, SCMI_MAX_STR_SIZE);
         }
 
-        scmi_xfer_put(handle, t);
+        ph->xops->xfer_put(ph, t);
         return ret;
 }
 
-static int
-scmi_power_state_set(const struct scmi_handle *handle, u32 domain, u32 state)
+static int scmi_power_state_set(const struct scmi_protocol_handle *ph,
+                                u32 domain, u32 state)
 {
         int ret;
         struct scmi_xfer *t;
         struct scmi_power_set_state *st;
 
-        ret = scmi_xfer_get_init(handle, POWER_STATE_SET, SCMI_PROTOCOL_POWER,
-                                 sizeof(*st), 0, &t);
+        ret = ph->xops->xfer_get_init(ph, POWER_STATE_SET, sizeof(*st), 0, &t);
         if (ret)
                 return ret;
 
···
         st->domain = cpu_to_le32(domain);
         st->state = cpu_to_le32(state);
 
-        ret = scmi_do_xfer(handle, t);
+        ret = ph->xops->do_xfer(ph, t);
 
-        scmi_xfer_put(handle, t);
+        ph->xops->xfer_put(ph, t);
         return ret;
 }
 
-static int
-scmi_power_state_get(const struct scmi_handle *handle, u32 domain, u32 *state)
+static int scmi_power_state_get(const struct scmi_protocol_handle *ph,
+                                u32 domain, u32 *state)
 {
         int ret;
         struct scmi_xfer *t;
 
-        ret = scmi_xfer_get_init(handle, POWER_STATE_GET, SCMI_PROTOCOL_POWER,
-                                 sizeof(u32), sizeof(u32), &t);
+        ret = ph->xops->xfer_get_init(ph, POWER_STATE_GET, sizeof(u32), sizeof(u32), &t);
         if (ret)
                 return ret;
 
         put_unaligned_le32(domain, t->tx.buf);
 
-        ret = scmi_do_xfer(handle, t);
+        ret = ph->xops->do_xfer(ph, t);
         if (!ret)
                 *state = get_unaligned_le32(t->rx.buf);
 
-        scmi_xfer_put(handle, t);
+        ph->xops->xfer_put(ph, t);
         return ret;
 }
 
-static int scmi_power_num_domains_get(const struct scmi_handle *handle)
+static int scmi_power_num_domains_get(const struct scmi_protocol_handle *ph)
 {
-        struct scmi_power_info *pi = handle->power_priv;
+        struct scmi_power_info *pi = ph->get_priv(ph);
 
         return pi->num_domains;
 }
 
-static char *scmi_power_name_get(const struct scmi_handle *handle, u32 domain)
+static char *scmi_power_name_get(const struct scmi_protocol_handle *ph,
+                                 u32 domain)
 {
-        struct scmi_power_info *pi = handle->power_priv;
+        struct scmi_power_info *pi = ph->get_priv(ph);
         struct power_dom_info *dom = pi->dom_info + domain;
 
         return dom->name;
 }
 
-static const struct scmi_power_ops power_ops = {
+static const struct scmi_power_proto_ops power_proto_ops = {
         .num_domains_get = scmi_power_num_domains_get,
         .name_get = scmi_power_name_get,
         .state_set = scmi_power_state_set,
         .state_get = scmi_power_state_get,
 };
 
-static int scmi_power_request_notify(const struct scmi_handle *handle,
+static int scmi_power_request_notify(const struct scmi_protocol_handle *ph,
                                      u32 domain, bool enable)
 {
         int ret;
         struct scmi_xfer *t;
         struct scmi_power_state_notify *notify;
 
-        ret = scmi_xfer_get_init(handle, POWER_STATE_NOTIFY,
-                                 SCMI_PROTOCOL_POWER, sizeof(*notify), 0, &t);
+        ret = ph->xops->xfer_get_init(ph, POWER_STATE_NOTIFY,
+                                      sizeof(*notify), 0, &t);
         if (ret)
                 return ret;
 
···
         notify->domain = cpu_to_le32(domain);
         notify->notify_enable = enable ? cpu_to_le32(BIT(0)) : 0;
 
-        ret = scmi_do_xfer(handle, t);
+        ret = ph->xops->do_xfer(ph, t);
 
-        scmi_xfer_put(handle, t);
+        ph->xops->xfer_put(ph, t);
         return ret;
 }
 
-static int scmi_power_set_notify_enabled(const struct scmi_handle *handle,
+static int scmi_power_set_notify_enabled(const struct scmi_protocol_handle *ph,
                                          u8 evt_id, u32 src_id, bool enable)
 {
         int ret;
 
-        ret = scmi_power_request_notify(handle, src_id, enable);
+        ret = scmi_power_request_notify(ph, src_id, enable);
         if (ret)
                 pr_debug("FAIL_ENABLE - evt[%X] dom[%d] - ret:%d\n",
                          evt_id, src_id, ret);
···
         return ret;
 }
 
-static void *scmi_power_fill_custom_report(const struct scmi_handle *handle,
-                                           u8 evt_id, ktime_t timestamp,
-                                           const void *payld, size_t payld_sz,
-                                           void *report, u32 *src_id)
+static void *
+scmi_power_fill_custom_report(const struct scmi_protocol_handle *ph,
+                              u8 evt_id, ktime_t timestamp,
+                              const void *payld, size_t payld_sz,
+                              void *report, u32 *src_id)
 {
         const struct scmi_power_state_notify_payld *p = payld;
         struct scmi_power_state_changed_report *r = report;
···
         return r;
 }
 
+static int scmi_power_get_num_sources(const struct scmi_protocol_handle *ph)
+{
+        struct scmi_power_info *pinfo = ph->get_priv(ph);
+
+        if (!pinfo)
+                return -EINVAL;
+
+        return pinfo->num_domains;
+}
+
 static const struct scmi_event power_events[] = {
         {
                 .id = SCMI_EVENT_POWER_STATE_CHANGED,
···
 };
 
 static const struct scmi_event_ops power_event_ops = {
+        .get_num_sources = scmi_power_get_num_sources,
         .set_notify_enabled = scmi_power_set_notify_enabled,
         .fill_custom_report = scmi_power_fill_custom_report,
 };
 
-static int scmi_power_protocol_init(struct scmi_handle *handle)
+static const struct scmi_protocol_events power_protocol_events = {
+        .queue_sz = SCMI_PROTO_QUEUE_SZ,
+        .ops = &power_event_ops,
+        .evts = power_events,
+        .num_events = ARRAY_SIZE(power_events),
+};
+
+static int scmi_power_protocol_init(const struct scmi_protocol_handle *ph)
 {
         int domain;
         u32 version;
         struct scmi_power_info *pinfo;
 
-        scmi_version_get(handle, SCMI_PROTOCOL_POWER, &version);
+        ph->xops->version_get(ph, &version);
 
-        dev_dbg(handle->dev, "Power Version %d.%d\n",
+        dev_dbg(ph->dev, "Power Version %d.%d\n",
                 PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));
 
-        pinfo = devm_kzalloc(handle->dev, sizeof(*pinfo), GFP_KERNEL);
+        pinfo = devm_kzalloc(ph->dev, sizeof(*pinfo), GFP_KERNEL);
         if (!pinfo)
                 return -ENOMEM;
 
-        scmi_power_attributes_get(handle, pinfo);
+        scmi_power_attributes_get(ph, pinfo);
 
-        pinfo->dom_info = devm_kcalloc(handle->dev, pinfo->num_domains,
+        pinfo->dom_info = devm_kcalloc(ph->dev, pinfo->num_domains,
                                        sizeof(*pinfo->dom_info), GFP_KERNEL);
         if (!pinfo->dom_info)
                 return -ENOMEM;
···
         for (domain = 0; domain < pinfo->num_domains; domain++) {
                 struct power_dom_info *dom = pinfo->dom_info + domain;
 
-                scmi_power_domain_attributes_get(handle, domain, dom);
+                scmi_power_domain_attributes_get(ph, domain, dom);
         }
 
-        scmi_register_protocol_events(handle,
-                                      SCMI_PROTOCOL_POWER, SCMI_PROTO_QUEUE_SZ,
-                                      &power_event_ops, power_events,
-                                      ARRAY_SIZE(power_events),
-                                      pinfo->num_domains);
-
         pinfo->version = version;
-        handle->power_ops = &power_ops;
-        handle->power_priv = pinfo;
 
-        return 0;
+        return ph->set_priv(ph, pinfo);
 }
 
-DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(SCMI_PROTOCOL_POWER, power)
+static const struct scmi_protocol scmi_power = {
+        .id = SCMI_PROTOCOL_POWER,
+        .owner = THIS_MODULE,
+        .instance_init = &scmi_power_protocol_init,
+        .ops = &power_proto_ops,
+        .events = &power_protocol_events,
+};
+
+DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(power, scmi_power)
+83 -63
drivers/firmware/arm_scmi/reset.c
···
 /*
  * System Control and Management Interface (SCMI) Reset Protocol
  *
- * Copyright (C) 2019 ARM Ltd.
+ * Copyright (C) 2019-2021 ARM Ltd.
  */
 
 #define pr_fmt(fmt) "SCMI Notifications RESET - " fmt
 
+#include <linux/module.h>
 #include <linux/scmi_protocol.h>
 
 #include "common.h"
···
         struct reset_dom_info *dom_info;
 };
 
-static int scmi_reset_attributes_get(const struct scmi_handle *handle,
+static int scmi_reset_attributes_get(const struct scmi_protocol_handle *ph,
                                      struct scmi_reset_info *pi)
 {
         int ret;
         struct scmi_xfer *t;
         u32 attr;
 
-        ret = scmi_xfer_get_init(handle, PROTOCOL_ATTRIBUTES,
-                                 SCMI_PROTOCOL_RESET, 0, sizeof(attr), &t);
+        ret = ph->xops->xfer_get_init(ph, PROTOCOL_ATTRIBUTES,
+                                      0, sizeof(attr), &t);
         if (ret)
                 return ret;
 
-        ret = scmi_do_xfer(handle, t);
+        ret = ph->xops->do_xfer(ph, t);
         if (!ret) {
                 attr = get_unaligned_le32(t->rx.buf);
                 pi->num_domains = attr & NUM_RESET_DOMAIN_MASK;
         }
 
-        scmi_xfer_put(handle, t);
+        ph->xops->xfer_put(ph, t);
         return ret;
 }
 
 static int
-scmi_reset_domain_attributes_get(const struct scmi_handle *handle, u32 domain,
-                                 struct reset_dom_info *dom_info)
+scmi_reset_domain_attributes_get(const struct scmi_protocol_handle *ph,
+                                 u32 domain, struct reset_dom_info *dom_info)
 {
         int ret;
         struct scmi_xfer *t;
         struct scmi_msg_resp_reset_domain_attributes *attr;
 
-        ret = scmi_xfer_get_init(handle, RESET_DOMAIN_ATTRIBUTES,
-                                 SCMI_PROTOCOL_RESET, sizeof(domain),
-                                 sizeof(*attr), &t);
+        ret = ph->xops->xfer_get_init(ph, RESET_DOMAIN_ATTRIBUTES,
+                                      sizeof(domain), sizeof(*attr), &t);
         if (ret)
                 return ret;
 
         put_unaligned_le32(domain, t->tx.buf);
         attr = t->rx.buf;
 
-        ret = scmi_do_xfer(handle, t);
+        ret = ph->xops->do_xfer(ph, t);
         if (!ret) {
                 u32 attributes = le32_to_cpu(attr->attributes);
 
···
                 strlcpy(dom_info->name, attr->name, SCMI_MAX_STR_SIZE);
         }
 
-        scmi_xfer_put(handle, t);
+        ph->xops->xfer_put(ph, t);
         return ret;
 }
 
-static int scmi_reset_num_domains_get(const struct scmi_handle *handle)
+static int scmi_reset_num_domains_get(const struct scmi_protocol_handle *ph)
 {
-        struct scmi_reset_info *pi = handle->reset_priv;
+        struct scmi_reset_info *pi = ph->get_priv(ph);
 
         return pi->num_domains;
 }
 
-static char *scmi_reset_name_get(const struct scmi_handle *handle, u32 domain)
+static char *scmi_reset_name_get(const struct scmi_protocol_handle *ph,
+                                 u32 domain)
 {
-        struct scmi_reset_info *pi = handle->reset_priv;
+        struct scmi_reset_info *pi = ph->get_priv(ph);
+
         struct reset_dom_info *dom = pi->dom_info + domain;
 
         return dom->name;
 }
 
-static int scmi_reset_latency_get(const struct scmi_handle *handle, u32 domain)
+static int scmi_reset_latency_get(const struct scmi_protocol_handle *ph,
+                                  u32 domain)
 {
-        struct scmi_reset_info *pi = handle->reset_priv;
+        struct scmi_reset_info *pi = ph->get_priv(ph);
         struct reset_dom_info *dom = pi->dom_info + domain;
 
         return dom->latency_us;
 }
 
-static int scmi_domain_reset(const struct scmi_handle *handle, u32 domain,
+static int scmi_domain_reset(const struct scmi_protocol_handle *ph, u32 domain,
                              u32 flags, u32 state)
 {
         int ret;
         struct scmi_xfer *t;
         struct scmi_msg_reset_domain_reset *dom;
-        struct scmi_reset_info *pi = handle->reset_priv;
+        struct scmi_reset_info *pi = ph->get_priv(ph);
         struct reset_dom_info *rdom = pi->dom_info + domain;
 
         if (rdom->async_reset)
                 flags |= ASYNCHRONOUS_RESET;
 
-        ret = scmi_xfer_get_init(handle, RESET, SCMI_PROTOCOL_RESET,
-                                 sizeof(*dom), 0, &t);
+        ret = ph->xops->xfer_get_init(ph, RESET, sizeof(*dom), 0, &t);
         if (ret)
                 return ret;
 
···
         dom->reset_state = cpu_to_le32(state);
 
         if (rdom->async_reset)
-                ret = scmi_do_xfer_with_response(handle, t);
+                ret = ph->xops->do_xfer_with_response(ph, t);
         else
-                ret = scmi_do_xfer(handle, t);
+                ret = ph->xops->do_xfer(ph, t);
 
-        scmi_xfer_put(handle, t);
+        ph->xops->xfer_put(ph, t);
         return ret;
 }
 
-static int scmi_reset_domain_reset(const struct scmi_handle *handle, u32 domain)
+static int scmi_reset_domain_reset(const struct scmi_protocol_handle *ph,
+                                   u32 domain)
 {
-        return scmi_domain_reset(handle, domain, AUTONOMOUS_RESET,
+        return scmi_domain_reset(ph, domain, AUTONOMOUS_RESET,
                                  ARCH_COLD_RESET);
 }
 
 static int
-scmi_reset_domain_assert(const struct scmi_handle *handle, u32 domain)
+scmi_reset_domain_assert(const struct scmi_protocol_handle *ph, u32 domain)
 {
-        return scmi_domain_reset(handle, domain, EXPLICIT_RESET_ASSERT,
+        return scmi_domain_reset(ph, domain, EXPLICIT_RESET_ASSERT,
                                  ARCH_COLD_RESET);
 }
 
 static int
-scmi_reset_domain_deassert(const struct scmi_handle *handle, u32 domain)
+scmi_reset_domain_deassert(const struct scmi_protocol_handle *ph, u32 domain)
 {
-        return scmi_domain_reset(handle, domain, 0, ARCH_COLD_RESET);
+        return scmi_domain_reset(ph, domain, 0, ARCH_COLD_RESET);
 }
 
-static const struct scmi_reset_ops reset_ops = {
+static const struct scmi_reset_proto_ops reset_proto_ops = {
         .num_domains_get = scmi_reset_num_domains_get,
         .name_get = scmi_reset_name_get,
         .latency_get = scmi_reset_latency_get,
···
         .deassert = scmi_reset_domain_deassert,
 };
 
-static int scmi_reset_notify(const struct scmi_handle *handle, u32 domain_id,
-                             bool enable)
+static int scmi_reset_notify(const struct scmi_protocol_handle *ph,
+                             u32 domain_id, bool enable)
 {
         int ret;
         u32 evt_cntl = enable ? RESET_TP_NOTIFY_ALL : 0;
         struct scmi_xfer *t;
         struct scmi_msg_reset_notify *cfg;
 
-        ret = scmi_xfer_get_init(handle, RESET_NOTIFY,
-                                 SCMI_PROTOCOL_RESET, sizeof(*cfg), 0, &t);
+        ret = ph->xops->xfer_get_init(ph, RESET_NOTIFY, sizeof(*cfg), 0, &t);
         if (ret)
                 return ret;
 
···
         cfg->id = cpu_to_le32(domain_id);
         cfg->event_control = cpu_to_le32(evt_cntl);
 
-        ret = scmi_do_xfer(handle, t);
+        ret = ph->xops->do_xfer(ph, t);
 
-        scmi_xfer_put(handle, t);
+        ph->xops->xfer_put(ph, t);
         return ret;
 }
 
-static int scmi_reset_set_notify_enabled(const struct scmi_handle *handle,
+static int scmi_reset_set_notify_enabled(const struct scmi_protocol_handle *ph,
                                          u8 evt_id, u32 src_id, bool enable)
 {
         int ret;
 
-        ret = scmi_reset_notify(handle, src_id, enable);
+        ret = scmi_reset_notify(ph, src_id, enable);
         if (ret)
                 pr_debug("FAIL_ENABLED - evt[%X] dom[%d] - ret:%d\n",
                          evt_id, src_id, ret);
···
         return ret;
 }
 
-static void *scmi_reset_fill_custom_report(const struct scmi_handle *handle,
-                                           u8 evt_id, ktime_t timestamp,
-                                           const void *payld, size_t payld_sz,
-                                           void *report, u32 *src_id)
+static void *
+scmi_reset_fill_custom_report(const struct scmi_protocol_handle *ph,
+                              u8 evt_id, ktime_t timestamp,
+                              const void *payld, size_t payld_sz,
+                              void *report, u32 *src_id)
 {
         const struct scmi_reset_issued_notify_payld *p = payld;
         struct scmi_reset_issued_report *r = report;
···
         return r;
 }
 
+static int scmi_reset_get_num_sources(const struct scmi_protocol_handle *ph)
+{
+        struct scmi_reset_info *pinfo = ph->get_priv(ph);
+
+        if (!pinfo)
+                return -EINVAL;
+
+        return pinfo->num_domains;
+}
+
 static const struct scmi_event reset_events[] = {
         {
                 .id = SCMI_EVENT_RESET_ISSUED,
···
 };
 
 static const struct scmi_event_ops reset_event_ops = {
+        .get_num_sources = scmi_reset_get_num_sources,
         .set_notify_enabled = scmi_reset_set_notify_enabled,
         .fill_custom_report = scmi_reset_fill_custom_report,
 };
 
-static int scmi_reset_protocol_init(struct scmi_handle *handle)
+static const struct scmi_protocol_events reset_protocol_events = {
+        .queue_sz = SCMI_PROTO_QUEUE_SZ,
+        .ops = &reset_event_ops,
+        .evts = reset_events,
+        .num_events = ARRAY_SIZE(reset_events),
+};
+
+static int scmi_reset_protocol_init(const struct scmi_protocol_handle *ph)
 {
         int domain;
         u32 version;
         struct scmi_reset_info *pinfo;
 
-        scmi_version_get(handle, SCMI_PROTOCOL_RESET, &version);
+        ph->xops->version_get(ph, &version);
 
-        dev_dbg(handle->dev, "Reset Version %d.%d\n",
+        dev_dbg(ph->dev, "Reset Version %d.%d\n",
                 PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));
 
-        pinfo = devm_kzalloc(handle->dev, sizeof(*pinfo), GFP_KERNEL);
+        pinfo = devm_kzalloc(ph->dev, sizeof(*pinfo), GFP_KERNEL);
         if (!pinfo)
                 return -ENOMEM;
 
-        scmi_reset_attributes_get(handle, pinfo);
+        scmi_reset_attributes_get(ph, pinfo);
 
-        pinfo->dom_info = devm_kcalloc(handle->dev, pinfo->num_domains,
+        pinfo->dom_info = devm_kcalloc(ph->dev, pinfo->num_domains,
                                        sizeof(*pinfo->dom_info), GFP_KERNEL);
         if (!pinfo->dom_info)
                 return -ENOMEM;
···
         for (domain = 0; domain < pinfo->num_domains; domain++) {
                 struct reset_dom_info *dom = pinfo->dom_info + domain;
 
-                scmi_reset_domain_attributes_get(handle, domain, dom);
+                scmi_reset_domain_attributes_get(ph, domain, dom);
         }
 
-        scmi_register_protocol_events(handle,
-                                      SCMI_PROTOCOL_RESET, SCMI_PROTO_QUEUE_SZ,
-                                      &reset_event_ops, reset_events,
-                                      ARRAY_SIZE(reset_events),
-                                      pinfo->num_domains);
-
         pinfo->version = version;
-        handle->reset_ops = &reset_ops;
-        handle->reset_priv = pinfo;
-
-        return 0;
+        return ph->set_priv(ph, pinfo);
 }
 
-DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(SCMI_PROTOCOL_RESET, reset)
+static const struct scmi_protocol scmi_reset = {
+        .id = SCMI_PROTOCOL_RESET,
+        .owner = THIS_MODULE,
+        .instance_init = &scmi_reset_protocol_init,
+        .ops = &reset_proto_ops,
+        .events = &reset_protocol_events,
+};
+
+DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(reset, scmi_reset)
+16 -10
drivers/firmware/arm_scmi/scmi_pm_domain.c
···
 /*
  * SCMI Generic power domain support.
  *
- * Copyright (C) 2018 ARM Ltd.
+ * Copyright (C) 2018-2021 ARM Ltd.
  */
 
 #include <linux/err.h>
···
 #include <linux/pm_domain.h>
 #include <linux/scmi_protocol.h>
 
+static const struct scmi_power_proto_ops *power_ops;
+
 struct scmi_pm_domain {
         struct generic_pm_domain genpd;
-        const struct scmi_handle *handle;
+        const struct scmi_protocol_handle *ph;
         const char *name;
         u32 domain;
 };
···
         int ret;
         u32 state, ret_state;
         struct scmi_pm_domain *pd = to_scmi_pd(domain);
-        const struct scmi_power_ops *ops = pd->handle->power_ops;
 
         if (power_on)
                 state = SCMI_POWER_STATE_GENERIC_ON;
         else
                 state = SCMI_POWER_STATE_GENERIC_OFF;
 
-        ret = ops->state_set(pd->handle, pd->domain, state);
+        ret = power_ops->state_set(pd->ph, pd->domain, state);
         if (!ret)
-                ret = ops->state_get(pd->handle, pd->domain, &ret_state);
+                ret = power_ops->state_get(pd->ph, pd->domain, &ret_state);
         if (!ret && state != ret_state)
                 return -EIO;
 
···
         struct genpd_onecell_data *scmi_pd_data;
         struct generic_pm_domain **domains;
         const struct scmi_handle *handle = sdev->handle;
+        struct scmi_protocol_handle *ph;
 
-        if (!handle || !handle->power_ops)
+        if (!handle)
                 return -ENODEV;
 
-        num_domains = handle->power_ops->num_domains_get(handle);
+        power_ops = handle->devm_protocol_get(sdev, SCMI_PROTOCOL_POWER, &ph);
+        if (IS_ERR(power_ops))
+                return PTR_ERR(power_ops);
+
+        num_domains = power_ops->num_domains_get(ph);
         if (num_domains < 0) {
                 dev_err(dev, "number of domains not found\n");
                 return num_domains;
···
         for (i = 0; i < num_domains; i++, scmi_pd++) {
                 u32 state;
 
-                if (handle->power_ops->state_get(handle, i, &state)) {
+                if (power_ops->state_get(ph, i, &state)) {
                         dev_warn(dev, "failed to get state for domain %d\n", i);
                         continue;
                 }
 
                 scmi_pd->domain = i;
-                scmi_pd->handle = handle;
-                scmi_pd->name = handle->power_ops->name_get(handle, i);
+                scmi_pd->ph = ph;
+                scmi_pd->name = power_ops->name_get(ph, i);
                 scmi_pd->genpd.name = scmi_pd->name;
                 scmi_pd->genpd.power_off = scmi_pd_power_off;
                 scmi_pd->genpd.power_on = scmi_pd_power_on;
+123 -109
drivers/firmware/arm_scmi/sensors.c
···
 /*
  * System Control and Management Interface (SCMI) Sensor Protocol
  *
- * Copyright (C) 2018-2020 ARM Ltd.
+ * Copyright (C) 2018-2021 ARM Ltd.
  */

 #define pr_fmt(fmt) "SCMI Notifications SENSOR - " fmt

 #include <linux/bitfield.h>
+#include <linux/module.h>
 #include <linux/scmi_protocol.h>

 #include "common.h"
···
 	struct scmi_sensor_info *sensors;
 };

-static int scmi_sensor_attributes_get(const struct scmi_handle *handle,
+static int scmi_sensor_attributes_get(const struct scmi_protocol_handle *ph,
 				      struct sensors_info *si)
 {
 	int ret;
 	struct scmi_xfer *t;
 	struct scmi_msg_resp_sensor_attributes *attr;

-	ret = scmi_xfer_get_init(handle, PROTOCOL_ATTRIBUTES,
-				 SCMI_PROTOCOL_SENSOR, 0, sizeof(*attr), &t);
+	ret = ph->xops->xfer_get_init(ph, PROTOCOL_ATTRIBUTES,
+				      0, sizeof(*attr), &t);
 	if (ret)
 		return ret;

 	attr = t->rx.buf;

-	ret = scmi_do_xfer(handle, t);
+	ret = ph->xops->do_xfer(ph, t);
 	if (!ret) {
 		si->num_sensors = le16_to_cpu(attr->num_sensors);
 		si->max_requests = attr->max_requests;
···
 		si->reg_size = le32_to_cpu(attr->reg_size);
 	}

-	scmi_xfer_put(handle, t);
+	ph->xops->xfer_put(ph, t);
 	return ret;
 }
···
 	out->max_range = get_unaligned_le64((void *)&in->max_range_low);
 }

-static int scmi_sensor_update_intervals(const struct scmi_handle *handle,
+static int scmi_sensor_update_intervals(const struct scmi_protocol_handle *ph,
 					struct scmi_sensor_info *s)
 {
 	int ret, cnt;
···
 	struct scmi_msg_resp_sensor_list_update_intervals *buf;
 	struct scmi_msg_sensor_list_update_intervals *msg;

-	ret = scmi_xfer_get_init(handle, SENSOR_LIST_UPDATE_INTERVALS,
-				 SCMI_PROTOCOL_SENSOR, sizeof(*msg), 0, &ti);
+	ret = ph->xops->xfer_get_init(ph, SENSOR_LIST_UPDATE_INTERVALS,
+				      sizeof(*msg), 0, &ti);
 	if (ret)
 		return ret;
···
 		msg->id = cpu_to_le32(s->id);
 		msg->index = cpu_to_le32(desc_index);

-		ret = scmi_do_xfer(handle, ti);
+		ret = ph->xops->do_xfer(ph, ti);
 		if (ret)
 			break;
···
 		/* segmented intervals are reported in one triplet */
 		if (s->intervals.segmented &&
 		    (num_remaining || num_returned != 3)) {
-			dev_err(handle->dev,
+			dev_err(ph->dev,
 				"Sensor ID:%d advertises an invalid segmented interval (%d)\n",
 				s->id, s->intervals.count);
 			s->intervals.segmented = false;
···
 			/* Direct allocation when exceeding pre-allocated */
 			if (s->intervals.count >= SCMI_MAX_PREALLOC_POOL) {
 				s->intervals.desc =
-					devm_kcalloc(handle->dev,
+					devm_kcalloc(ph->dev,
 						     s->intervals.count,
 						     sizeof(*s->intervals.desc),
 						     GFP_KERNEL);
···
 			}
 		}
 	} else if (desc_index + num_returned > s->intervals.count) {
-		dev_err(handle->dev,
+		dev_err(ph->dev,
 			"No. of update intervals can't exceed %d\n",
 			s->intervals.count);
 		ret = -EINVAL;
···

 		desc_index += num_returned;

-		scmi_reset_rx_to_maxsz(handle, ti);
+		ph->xops->reset_rx_to_maxsz(ph, ti);
 		/*
 		 * check for both returned and remaining to avoid infinite
 		 * loop due to buggy firmware
 		 */
 	} while (num_returned && num_remaining);

-	scmi_xfer_put(handle, ti);
+	ph->xops->xfer_put(ph, ti);
 	return ret;
 }

-static int scmi_sensor_axis_description(const struct scmi_handle *handle,
+static int scmi_sensor_axis_description(const struct scmi_protocol_handle *ph,
 					struct scmi_sensor_info *s)
 {
 	int ret, cnt;
···
 	struct scmi_msg_resp_sensor_axis_description *buf;
 	struct scmi_msg_sensor_axis_description_get *msg;

-	s->axis = devm_kcalloc(handle->dev, s->num_axis,
+	s->axis = devm_kcalloc(ph->dev, s->num_axis,
 			       sizeof(*s->axis), GFP_KERNEL);
 	if (!s->axis)
 		return -ENOMEM;

-	ret = scmi_xfer_get_init(handle, SENSOR_AXIS_DESCRIPTION_GET,
-				 SCMI_PROTOCOL_SENSOR, sizeof(*msg), 0, &te);
+	ret = ph->xops->xfer_get_init(ph, SENSOR_AXIS_DESCRIPTION_GET,
+				      sizeof(*msg), 0, &te);
 	if (ret)
 		return ret;
···
 		msg->id = cpu_to_le32(s->id);
 		msg->axis_desc_index = cpu_to_le32(desc_index);

-		ret = scmi_do_xfer(handle, te);
+		ret = ph->xops->do_xfer(ph, te);
 		if (ret)
 			break;
···
 		num_remaining = NUM_AXIS_REMAINING(flags);

 		if (desc_index + num_returned > s->num_axis) {
-			dev_err(handle->dev, "No. of axis can't exceed %d\n",
+			dev_err(ph->dev, "No. of axis can't exceed %d\n",
 				s->num_axis);
 			break;
 		}
···

 		desc_index += num_returned;

-		scmi_reset_rx_to_maxsz(handle, te);
+		ph->xops->reset_rx_to_maxsz(ph, te);
 		/*
 		 * check for both returned and remaining to avoid infinite
 		 * loop due to buggy firmware
 		 */
 	} while (num_returned && num_remaining);

-	scmi_xfer_put(handle, te);
+	ph->xops->xfer_put(ph, te);
 	return ret;
 }

-static int scmi_sensor_description_get(const struct scmi_handle *handle,
+static int scmi_sensor_description_get(const struct scmi_protocol_handle *ph,
 				       struct sensors_info *si)
 {
 	int ret, cnt;
···
 	struct scmi_xfer *t;
 	struct scmi_msg_resp_sensor_description *buf;

-	ret = scmi_xfer_get_init(handle, SENSOR_DESCRIPTION_GET,
-				 SCMI_PROTOCOL_SENSOR, sizeof(__le32), 0, &t);
+	ret = ph->xops->xfer_get_init(ph, SENSOR_DESCRIPTION_GET,
+				      sizeof(__le32), 0, &t);
 	if (ret)
 		return ret;
···

 		/* Set the number of sensors to be skipped/already read */
 		put_unaligned_le32(desc_index, t->tx.buf);
-		ret = scmi_do_xfer(handle, t);
+
+		ret = ph->xops->do_xfer(ph, t);
 		if (ret)
 			break;
···
 		num_remaining = le16_to_cpu(buf->num_remaining);

 		if (desc_index + num_returned > si->num_sensors) {
-			dev_err(handle->dev, "No. of sensors can't exceed %d",
+			dev_err(ph->dev, "No. of sensors can't exceed %d",
 				si->num_sensors);
 			break;
 		}
···
 			 * Since the command is optional, on error carry
 			 * on without any update interval.
 			 */
-			if (scmi_sensor_update_intervals(handle, s))
-				dev_dbg(handle->dev,
+			if (scmi_sensor_update_intervals(ph, s))
+				dev_dbg(ph->dev,
 					"Update Intervals not available for sensor ID:%d\n",
 					s->id);
 		}
···
 			}
 		}
 		if (s->num_axis > 0) {
-			ret = scmi_sensor_axis_description(handle, s);
+			ret = scmi_sensor_axis_description(ph, s);
 			if (ret)
 				goto out;
 		}
···

 		desc_index += num_returned;

-		scmi_reset_rx_to_maxsz(handle, t);
+		ph->xops->reset_rx_to_maxsz(ph, t);
 		/*
 		 * check for both returned and remaining to avoid infinite
 		 * loop due to buggy firmware
···
 	} while (num_returned && num_remaining);

 out:
-	scmi_xfer_put(handle, t);
+	ph->xops->xfer_put(ph, t);
 	return ret;
 }

 static inline int
-scmi_sensor_request_notify(const struct scmi_handle *handle, u32 sensor_id,
+scmi_sensor_request_notify(const struct scmi_protocol_handle *ph, u32 sensor_id,
 			   u8 message_id, bool enable)
 {
 	int ret;
···
 	struct scmi_xfer *t;
 	struct scmi_msg_sensor_request_notify *cfg;

-	ret = scmi_xfer_get_init(handle, message_id,
-				 SCMI_PROTOCOL_SENSOR, sizeof(*cfg), 0, &t);
+	ret = ph->xops->xfer_get_init(ph, message_id, sizeof(*cfg), 0, &t);
 	if (ret)
 		return ret;
···
 	cfg->id = cpu_to_le32(sensor_id);
 	cfg->event_control = cpu_to_le32(evt_cntl);

-	ret = scmi_do_xfer(handle, t);
+	ret = ph->xops->do_xfer(ph, t);

-	scmi_xfer_put(handle, t);
+	ph->xops->xfer_put(ph, t);
 	return ret;
 }

-static int scmi_sensor_trip_point_notify(const struct scmi_handle *handle,
+static int scmi_sensor_trip_point_notify(const struct scmi_protocol_handle *ph,
 					 u32 sensor_id, bool enable)
 {
-	return scmi_sensor_request_notify(handle, sensor_id,
+	return scmi_sensor_request_notify(ph, sensor_id,
 					  SENSOR_TRIP_POINT_NOTIFY,
 					  enable);
 }

 static int
-scmi_sensor_continuous_update_notify(const struct scmi_handle *handle,
+scmi_sensor_continuous_update_notify(const struct scmi_protocol_handle *ph,
 				     u32 sensor_id, bool enable)
 {
-	return scmi_sensor_request_notify(handle, sensor_id,
+	return scmi_sensor_request_notify(ph, sensor_id,
 					  SENSOR_CONTINUOUS_UPDATE_NOTIFY,
 					  enable);
 }

 static int
-scmi_sensor_trip_point_config(const struct scmi_handle *handle, u32 sensor_id,
-			      u8 trip_id, u64 trip_value)
+scmi_sensor_trip_point_config(const struct scmi_protocol_handle *ph,
+			      u32 sensor_id, u8 trip_id, u64 trip_value)
 {
 	int ret;
 	u32 evt_cntl = SENSOR_TP_BOTH;
 	struct scmi_xfer *t;
 	struct scmi_msg_set_sensor_trip_point *trip;

-	ret = scmi_xfer_get_init(handle, SENSOR_TRIP_POINT_CONFIG,
-				 SCMI_PROTOCOL_SENSOR, sizeof(*trip), 0, &t);
+	ret = ph->xops->xfer_get_init(ph, SENSOR_TRIP_POINT_CONFIG,
+				      sizeof(*trip), 0, &t);
 	if (ret)
 		return ret;
···
 	trip->value_low = cpu_to_le32(trip_value & 0xffffffff);
 	trip->value_high = cpu_to_le32(trip_value >> 32);

-	ret = scmi_do_xfer(handle, t);
+	ret = ph->xops->do_xfer(ph, t);

-	scmi_xfer_put(handle, t);
+	ph->xops->xfer_put(ph, t);
 	return ret;
 }

-static int scmi_sensor_config_get(const struct scmi_handle *handle,
+static int scmi_sensor_config_get(const struct scmi_protocol_handle *ph,
 				  u32 sensor_id, u32 *sensor_config)
 {
 	int ret;
 	struct scmi_xfer *t;

-	ret = scmi_xfer_get_init(handle, SENSOR_CONFIG_GET,
-				 SCMI_PROTOCOL_SENSOR, sizeof(__le32),
-				 sizeof(__le32), &t);
+	ret = ph->xops->xfer_get_init(ph, SENSOR_CONFIG_GET,
+				      sizeof(__le32), sizeof(__le32), &t);
 	if (ret)
 		return ret;

 	put_unaligned_le32(cpu_to_le32(sensor_id), t->tx.buf);
-	ret = scmi_do_xfer(handle, t);
+	ret = ph->xops->do_xfer(ph, t);
 	if (!ret) {
-		struct sensors_info *si = handle->sensor_priv;
+		struct sensors_info *si = ph->get_priv(ph);
 		struct scmi_sensor_info *s = si->sensors + sensor_id;

 		*sensor_config = get_unaligned_le64(t->rx.buf);
 		s->sensor_config = *sensor_config;
 	}

-	scmi_xfer_put(handle, t);
+	ph->xops->xfer_put(ph, t);
 	return ret;
 }

-static int scmi_sensor_config_set(const struct scmi_handle *handle,
+static int scmi_sensor_config_set(const struct scmi_protocol_handle *ph,
 				  u32 sensor_id, u32 sensor_config)
 {
 	int ret;
 	struct scmi_xfer *t;
 	struct scmi_msg_sensor_config_set *msg;

-	ret = scmi_xfer_get_init(handle, SENSOR_CONFIG_SET,
-				 SCMI_PROTOCOL_SENSOR, sizeof(*msg), 0, &t);
+	ret = ph->xops->xfer_get_init(ph, SENSOR_CONFIG_SET,
+				      sizeof(*msg), 0, &t);
 	if (ret)
 		return ret;
···
 	msg->id = cpu_to_le32(sensor_id);
 	msg->sensor_config = cpu_to_le32(sensor_config);

-	ret = scmi_do_xfer(handle, t);
+	ret = ph->xops->do_xfer(ph, t);
 	if (!ret) {
-		struct sensors_info *si = handle->sensor_priv;
+		struct sensors_info *si = ph->get_priv(ph);
 		struct scmi_sensor_info *s = si->sensors + sensor_id;

 		s->sensor_config = sensor_config;
 	}

-	scmi_xfer_put(handle, t);
+	ph->xops->xfer_put(ph, t);
 	return ret;
 }

 /**
  * scmi_sensor_reading_get - Read scalar sensor value
- * @handle: Platform handle
+ * @ph: Protocol handle
  * @sensor_id: Sensor ID
  * @value: The 64bit value sensor reading
  *
···
 *
 * Return: 0 on Success
 */
-static int scmi_sensor_reading_get(const struct scmi_handle *handle,
+static int scmi_sensor_reading_get(const struct scmi_protocol_handle *ph,
 				   u32 sensor_id, u64 *value)
 {
 	int ret;
 	struct scmi_xfer *t;
 	struct scmi_msg_sensor_reading_get *sensor;
-	struct sensors_info *si = handle->sensor_priv;
+	struct sensors_info *si = ph->get_priv(ph);
 	struct scmi_sensor_info *s = si->sensors + sensor_id;

-	ret = scmi_xfer_get_init(handle, SENSOR_READING_GET,
-				 SCMI_PROTOCOL_SENSOR, sizeof(*sensor), 0, &t);
+	ret = ph->xops->xfer_get_init(ph, SENSOR_READING_GET,
+				      sizeof(*sensor), 0, &t);
 	if (ret)
 		return ret;
···
 	sensor->id = cpu_to_le32(sensor_id);
 	if (s->async) {
 		sensor->flags = cpu_to_le32(SENSOR_READ_ASYNC);
-		ret = scmi_do_xfer_with_response(handle, t);
+		ret = ph->xops->do_xfer_with_response(ph, t);
 		if (!ret) {
 			struct scmi_resp_sensor_reading_complete *resp;
···
 		}
 	} else {
 		sensor->flags = cpu_to_le32(0);
-		ret = scmi_do_xfer(handle, t);
+		ret = ph->xops->do_xfer(ph, t);
 		if (!ret)
 			*value = get_unaligned_le64(t->rx.buf);
 	}

-	scmi_xfer_put(handle, t);
+	ph->xops->xfer_put(ph, t);
 	return ret;
 }
···

 /**
  * scmi_sensor_reading_get_timestamped - Read multiple-axis timestamped values
- * @handle: Platform handle
+ * @ph: Protocol handle
  * @sensor_id: Sensor ID
  * @count: The length of the provided @readings array
  * @readings: An array of elements each representing a timestamped per-axis
···
 * Return: 0 on Success
 */
 static int
-scmi_sensor_reading_get_timestamped(const struct scmi_handle *handle,
+scmi_sensor_reading_get_timestamped(const struct scmi_protocol_handle *ph,
 				    u32 sensor_id, u8 count,
 				    struct scmi_sensor_reading *readings)
 {
 	int ret;
 	struct scmi_xfer *t;
 	struct scmi_msg_sensor_reading_get *sensor;
-	struct sensors_info *si = handle->sensor_priv;
+	struct sensors_info *si = ph->get_priv(ph);
 	struct scmi_sensor_info *s = si->sensors + sensor_id;

 	if (!count || !readings ||
 	    (!s->num_axis && count > 1) || (s->num_axis && count > s->num_axis))
 		return -EINVAL;

-	ret = scmi_xfer_get_init(handle, SENSOR_READING_GET,
-				 SCMI_PROTOCOL_SENSOR, sizeof(*sensor), 0, &t);
+	ret = ph->xops->xfer_get_init(ph, SENSOR_READING_GET,
+				      sizeof(*sensor), 0, &t);
 	if (ret)
 		return ret;
···
 	sensor->id = cpu_to_le32(sensor_id);
 	if (s->async) {
 		sensor->flags = cpu_to_le32(SENSOR_READ_ASYNC);
-		ret = scmi_do_xfer_with_response(handle, t);
+		ret = ph->xops->do_xfer_with_response(ph, t);
 		if (!ret) {
 			int i;
 			struct scmi_resp_sensor_reading_complete_v3 *resp;
···
 		}
 	} else {
 		sensor->flags = cpu_to_le32(0);
-		ret = scmi_do_xfer(handle, t);
+		ret = ph->xops->do_xfer(ph, t);
 		if (!ret) {
 			int i;
 			struct scmi_sensor_reading_resp *resp_readings;
···
 		}
 	}

-	scmi_xfer_put(handle, t);
+	ph->xops->xfer_put(ph, t);
 	return ret;
 }

 static const struct scmi_sensor_info *
-scmi_sensor_info_get(const struct scmi_handle *handle, u32 sensor_id)
+scmi_sensor_info_get(const struct scmi_protocol_handle *ph, u32 sensor_id)
 {
-	struct sensors_info *si = handle->sensor_priv;
+	struct sensors_info *si = ph->get_priv(ph);

 	return si->sensors + sensor_id;
 }

-static int scmi_sensor_count_get(const struct scmi_handle *handle)
+static int scmi_sensor_count_get(const struct scmi_protocol_handle *ph)
 {
-	struct sensors_info *si = handle->sensor_priv;
+	struct sensors_info *si = ph->get_priv(ph);

 	return si->num_sensors;
 }

-static const struct scmi_sensor_ops sensor_ops = {
+static const struct scmi_sensor_proto_ops sensor_proto_ops = {
 	.count_get = scmi_sensor_count_get,
 	.info_get = scmi_sensor_info_get,
 	.trip_point_config = scmi_sensor_trip_point_config,
···
 	.config_set = scmi_sensor_config_set,
 };

-static int scmi_sensor_set_notify_enabled(const struct scmi_handle *handle,
+static int scmi_sensor_set_notify_enabled(const struct scmi_protocol_handle *ph,
 					  u8 evt_id, u32 src_id, bool enable)
 {
 	int ret;

 	switch (evt_id) {
 	case SCMI_EVENT_SENSOR_TRIP_POINT_EVENT:
-		ret = scmi_sensor_trip_point_notify(handle, src_id, enable);
+		ret = scmi_sensor_trip_point_notify(ph, src_id, enable);
 		break;
 	case SCMI_EVENT_SENSOR_UPDATE:
-		ret = scmi_sensor_continuous_update_notify(handle, src_id,
-							   enable);
+		ret = scmi_sensor_continuous_update_notify(ph, src_id, enable);
 		break;
 	default:
 		ret = -EINVAL;
···
 	return ret;
 }

-static void *scmi_sensor_fill_custom_report(const struct scmi_handle *handle,
-					    u8 evt_id, ktime_t timestamp,
-					    const void *payld, size_t payld_sz,
-					    void *report, u32 *src_id)
+static void *
+scmi_sensor_fill_custom_report(const struct scmi_protocol_handle *ph,
+			       u8 evt_id, ktime_t timestamp,
+			       const void *payld, size_t payld_sz,
+			       void *report, u32 *src_id)
 {
 	void *rep = NULL;

···
 		struct scmi_sensor_info *s;
 		const struct scmi_sensor_update_notify_payld *p = payld;
 		struct scmi_sensor_update_report *r = report;
-		struct sensors_info *sinfo = handle->sensor_priv;
+		struct sensors_info *sinfo = ph->get_priv(ph);

 		/* payld_sz is variable for this event */
 		r->sensor_id = le32_to_cpu(p->sensor_id);
···
 	return rep;
 }

+static int scmi_sensor_get_num_sources(const struct scmi_protocol_handle *ph)
+{
+	struct sensors_info *si = ph->get_priv(ph);
+
+	return si->num_sensors;
+}
+
 static const struct scmi_event sensor_events[] = {
 	{
 		.id = SCMI_EVENT_SENSOR_TRIP_POINT_EVENT,
···
 };

 static const struct scmi_event_ops sensor_event_ops = {
+	.get_num_sources = scmi_sensor_get_num_sources,
 	.set_notify_enabled = scmi_sensor_set_notify_enabled,
 	.fill_custom_report = scmi_sensor_fill_custom_report,
 };

-static int scmi_sensors_protocol_init(struct scmi_handle *handle)
+static const struct scmi_protocol_events sensor_protocol_events = {
+	.queue_sz = SCMI_PROTO_QUEUE_SZ,
+	.ops = &sensor_event_ops,
+	.evts = sensor_events,
+	.num_events = ARRAY_SIZE(sensor_events),
+};
+
+static int scmi_sensors_protocol_init(const struct scmi_protocol_handle *ph)
 {
 	u32 version;
 	int ret;
 	struct sensors_info *sinfo;

-	scmi_version_get(handle, SCMI_PROTOCOL_SENSOR, &version);
+	ph->xops->version_get(ph, &version);

-	dev_dbg(handle->dev, "Sensor Version %d.%d\n",
+	dev_dbg(ph->dev, "Sensor Version %d.%d\n",
 		PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));

-	sinfo = devm_kzalloc(handle->dev, sizeof(*sinfo), GFP_KERNEL);
+	sinfo = devm_kzalloc(ph->dev, sizeof(*sinfo), GFP_KERNEL);
 	if (!sinfo)
 		return -ENOMEM;
 	sinfo->version = version;

-	ret = scmi_sensor_attributes_get(handle, sinfo);
+	ret = scmi_sensor_attributes_get(ph, sinfo);
 	if (ret)
 		return ret;
-	sinfo->sensors = devm_kcalloc(handle->dev, sinfo->num_sensors,
+	sinfo->sensors = devm_kcalloc(ph->dev, sinfo->num_sensors,
 				      sizeof(*sinfo->sensors), GFP_KERNEL);
 	if (!sinfo->sensors)
 		return -ENOMEM;

-	ret = scmi_sensor_description_get(handle, sinfo);
+	ret = scmi_sensor_description_get(ph, sinfo);
 	if (ret)
 		return ret;

-	scmi_register_protocol_events(handle,
-				      SCMI_PROTOCOL_SENSOR, SCMI_PROTO_QUEUE_SZ,
-				      &sensor_event_ops, sensor_events,
-				      ARRAY_SIZE(sensor_events),
-				      sinfo->num_sensors);
-
-	handle->sensor_priv = sinfo;
-	handle->sensor_ops = &sensor_ops;
-
-	return 0;
+	return ph->set_priv(ph, sinfo);
 }

-DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(SCMI_PROTOCOL_SENSOR, sensors)
+static const struct scmi_protocol scmi_sensors = {
+	.id = SCMI_PROTOCOL_SENSOR,
+	.owner = THIS_MODULE,
+	.instance_init = &scmi_sensors_protocol_init,
+	.ops = &sensor_proto_ops,
+	.events = &sensor_protocol_events,
+};
+
+DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(sensors, scmi_sensors)
+36 -27
drivers/firmware/arm_scmi/system.c
···
 /*
  * System Control and Management Interface (SCMI) System Power Protocol
  *
- * Copyright (C) 2020 ARM Ltd.
+ * Copyright (C) 2020-2021 ARM Ltd.
  */

 #define pr_fmt(fmt) "SCMI Notifications SYSTEM - " fmt

+#include <linux/module.h>
 #include <linux/scmi_protocol.h>

 #include "common.h"
···
 	u32 version;
 };

-static int scmi_system_request_notify(const struct scmi_handle *handle,
+static int scmi_system_request_notify(const struct scmi_protocol_handle *ph,
 				      bool enable)
 {
 	int ret;
 	struct scmi_xfer *t;
 	struct scmi_system_power_state_notify *notify;

-	ret = scmi_xfer_get_init(handle, SYSTEM_POWER_STATE_NOTIFY,
-				 SCMI_PROTOCOL_SYSTEM, sizeof(*notify), 0, &t);
+	ret = ph->xops->xfer_get_init(ph, SYSTEM_POWER_STATE_NOTIFY,
+				      sizeof(*notify), 0, &t);
 	if (ret)
 		return ret;

 	notify = t->tx.buf;
 	notify->notify_enable = enable ? cpu_to_le32(BIT(0)) : 0;

-	ret = scmi_do_xfer(handle, t);
+	ret = ph->xops->do_xfer(ph, t);

-	scmi_xfer_put(handle, t);
+	ph->xops->xfer_put(ph, t);
 	return ret;
 }

-static int scmi_system_set_notify_enabled(const struct scmi_handle *handle,
+static int scmi_system_set_notify_enabled(const struct scmi_protocol_handle *ph,
 					  u8 evt_id, u32 src_id, bool enable)
 {
 	int ret;

-	ret = scmi_system_request_notify(handle, enable);
+	ret = scmi_system_request_notify(ph, enable);
 	if (ret)
 		pr_debug("FAIL_ENABLE - evt[%X] - ret:%d\n", evt_id, ret);

 	return ret;
 }

-static void *scmi_system_fill_custom_report(const struct scmi_handle *handle,
-					    u8 evt_id, ktime_t timestamp,
-					    const void *payld, size_t payld_sz,
-					    void *report, u32 *src_id)
+static void *
+scmi_system_fill_custom_report(const struct scmi_protocol_handle *ph,
+			       u8 evt_id, ktime_t timestamp,
+			       const void *payld, size_t payld_sz,
+			       void *report, u32 *src_id)
 {
 	const struct scmi_system_power_state_notifier_payld *p = payld;
 	struct scmi_system_power_state_notifier_report *r = report;
···
 	.fill_custom_report = scmi_system_fill_custom_report,
 };

-static int scmi_system_protocol_init(struct scmi_handle *handle)
+static const struct scmi_protocol_events system_protocol_events = {
+	.queue_sz = SCMI_PROTO_QUEUE_SZ,
+	.ops = &system_event_ops,
+	.evts = system_events,
+	.num_events = ARRAY_SIZE(system_events),
+	.num_sources = SCMI_SYSTEM_NUM_SOURCES,
+};
+
+static int scmi_system_protocol_init(const struct scmi_protocol_handle *ph)
 {
 	u32 version;
 	struct scmi_system_info *pinfo;

-	scmi_version_get(handle, SCMI_PROTOCOL_SYSTEM, &version);
+	ph->xops->version_get(ph, &version);

-	dev_dbg(handle->dev, "System Power Version %d.%d\n",
+	dev_dbg(ph->dev, "System Power Version %d.%d\n",
 		PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));

-	pinfo = devm_kzalloc(handle->dev, sizeof(*pinfo), GFP_KERNEL);
+	pinfo = devm_kzalloc(ph->dev, sizeof(*pinfo), GFP_KERNEL);
 	if (!pinfo)
 		return -ENOMEM;

-	scmi_register_protocol_events(handle,
-				      SCMI_PROTOCOL_SYSTEM, SCMI_PROTO_QUEUE_SZ,
-				      &system_event_ops,
-				      system_events,
-				      ARRAY_SIZE(system_events),
-				      SCMI_SYSTEM_NUM_SOURCES);
-
 	pinfo->version = version;
-	handle->system_priv = pinfo;
-
-	return 0;
+	return ph->set_priv(ph, pinfo);
 }

-DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(SCMI_PROTOCOL_SYSTEM, system)
+static const struct scmi_protocol scmi_system = {
+	.id = SCMI_PROTOCOL_SYSTEM,
+	.owner = THIS_MODULE,
+	.instance_init = &scmi_system_protocol_init,
+	.ops = NULL,
+	.events = &system_protocol_events,
+};
+
+DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(system, scmi_system)
+63 -63
drivers/firmware/arm_scmi/voltage.c
···
 /*
  * System Control and Management Interface (SCMI) Voltage Protocol
  *
- * Copyright (C) 2020 ARM Ltd.
+ * Copyright (C) 2020-2021 ARM Ltd.
  */

+#include <linux/module.h>
 #include <linux/scmi_protocol.h>

 #include "common.h"
···
 	struct scmi_voltage_info *domains;
 };

-static int scmi_protocol_attributes_get(const struct scmi_handle *handle,
+static int scmi_protocol_attributes_get(const struct scmi_protocol_handle *ph,
 					struct voltage_info *vinfo)
 {
 	int ret;
 	struct scmi_xfer *t;

-	ret = scmi_xfer_get_init(handle, PROTOCOL_ATTRIBUTES,
-				 SCMI_PROTOCOL_VOLTAGE, 0, sizeof(__le32), &t);
+	ret = ph->xops->xfer_get_init(ph, PROTOCOL_ATTRIBUTES, 0,
+				      sizeof(__le32), &t);
 	if (ret)
 		return ret;

-	ret = scmi_do_xfer(handle, t);
+	ret = ph->xops->do_xfer(ph, t);
 	if (!ret)
 		vinfo->num_domains =
 			NUM_VOLTAGE_DOMAINS(get_unaligned_le32(t->rx.buf));

-	scmi_xfer_put(handle, t);
+	ph->xops->xfer_put(ph, t);
 	return ret;
 }
···
 	return 0;
 }

-static int scmi_voltage_descriptors_get(const struct scmi_handle *handle,
+static int scmi_voltage_descriptors_get(const struct scmi_protocol_handle *ph,
 					struct voltage_info *vinfo)
 {
 	int ret, dom;
 	struct scmi_xfer *td, *tl;
-	struct device *dev = handle->dev;
+	struct device *dev = ph->dev;
 	struct scmi_msg_resp_domain_attributes *resp_dom;
 	struct scmi_msg_resp_describe_levels *resp_levels;

-	ret = scmi_xfer_get_init(handle, VOLTAGE_DOMAIN_ATTRIBUTES,
-				 SCMI_PROTOCOL_VOLTAGE, sizeof(__le32),
-				 sizeof(*resp_dom), &td);
+	ret = ph->xops->xfer_get_init(ph, VOLTAGE_DOMAIN_ATTRIBUTES,
+				      sizeof(__le32), sizeof(*resp_dom), &td);
 	if (ret)
 		return ret;
 	resp_dom = td->rx.buf;

-	ret = scmi_xfer_get_init(handle, VOLTAGE_DESCRIBE_LEVELS,
-				 SCMI_PROTOCOL_VOLTAGE, sizeof(__le64), 0, &tl);
+	ret = ph->xops->xfer_get_init(ph, VOLTAGE_DESCRIBE_LEVELS,
+				      sizeof(__le64), 0, &tl);
 	if (ret)
 		goto outd;
 	resp_levels = tl->rx.buf;
···

 		/* Retrieve domain attributes at first ... */
 		put_unaligned_le32(dom, td->tx.buf);
-		ret = scmi_do_xfer(handle, td);
+		ret = ph->xops->do_xfer(ph, td);
 		/* Skip domain on comms error */
 		if (ret)
 			continue;
···

 			cmd->domain_id = cpu_to_le32(v->id);
 			cmd->level_index = desc_index;
-			ret = scmi_do_xfer(handle, tl);
+			ret = ph->xops->do_xfer(ph, tl);
 			if (ret)
 				break;
···
 			}

 			if (desc_index + num_returned > v->num_levels) {
-				dev_err(handle->dev,
+				dev_err(ph->dev,
 					"No. of voltage levels can't exceed %d\n",
 					v->num_levels);
 				ret = -EINVAL;
···

 			desc_index += num_returned;

-			scmi_reset_rx_to_maxsz(handle, tl);
+			ph->xops->reset_rx_to_maxsz(ph, tl);
 			/* check both to avoid infinite loop due to buggy fw */
 		} while (num_returned && num_remaining);
···
 			devm_kfree(dev, v->levels_uv);
 		}

-		scmi_reset_rx_to_maxsz(handle, td);
+		ph->xops->reset_rx_to_maxsz(ph, td);
 	}

-	scmi_xfer_put(handle, tl);
+	ph->xops->xfer_put(ph, tl);
 outd:
-	scmi_xfer_put(handle, td);
+	ph->xops->xfer_put(ph, td);

 	return ret;
 }

-static int __scmi_voltage_get_u32(const struct scmi_handle *handle,
+static int __scmi_voltage_get_u32(const struct scmi_protocol_handle *ph,
 				  u8 cmd_id, u32 domain_id, u32 *value)
 {
 	int ret;
 	struct scmi_xfer *t;
-	struct voltage_info *vinfo = handle->voltage_priv;
+	struct voltage_info *vinfo = ph->get_priv(ph);

 	if (domain_id >= vinfo->num_domains)
 		return -EINVAL;

-	ret = scmi_xfer_get_init(handle, cmd_id,
-				 SCMI_PROTOCOL_VOLTAGE,
-				 sizeof(__le32), 0, &t);
+	ret = ph->xops->xfer_get_init(ph, cmd_id, sizeof(__le32), 0, &t);
 	if (ret)
 		return ret;

 	put_unaligned_le32(domain_id, t->tx.buf);
-	ret = scmi_do_xfer(handle, t);
+	ret = ph->xops->do_xfer(ph, t);
 	if (!ret)
 		*value = get_unaligned_le32(t->rx.buf);

-	scmi_xfer_put(handle, t);
+	ph->xops->xfer_put(ph, t);
 	return ret;
 }

-static int scmi_voltage_config_set(const struct scmi_handle *handle,
+static int scmi_voltage_config_set(const struct scmi_protocol_handle *ph,
 				   u32 domain_id, u32 config)
 {
 	int ret;
 	struct scmi_xfer *t;
-	struct voltage_info *vinfo = handle->voltage_priv;
+	struct voltage_info *vinfo = ph->get_priv(ph);
 	struct scmi_msg_cmd_config_set *cmd;

 	if (domain_id >= vinfo->num_domains)
 		return -EINVAL;

-	ret = scmi_xfer_get_init(handle, VOLTAGE_CONFIG_SET,
-				 SCMI_PROTOCOL_VOLTAGE,
-				 sizeof(*cmd), 0, &t);
+	ret = ph->xops->xfer_get_init(ph, VOLTAGE_CONFIG_SET,
+				      sizeof(*cmd), 0, &t);
 	if (ret)
 		return ret;
···
 	cmd->domain_id = cpu_to_le32(domain_id);
 	cmd->config = cpu_to_le32(config & GENMASK(3, 0));

-	ret = scmi_do_xfer(handle, t);
+	ret = ph->xops->do_xfer(ph, t);

-	scmi_xfer_put(handle, t);
+	ph->xops->xfer_put(ph, t);
 	return ret;
 }

-static int scmi_voltage_config_get(const struct scmi_handle *handle,
+static int scmi_voltage_config_get(const struct scmi_protocol_handle *ph,
 				   u32 domain_id, u32 *config)
 {
-	return __scmi_voltage_get_u32(handle, VOLTAGE_CONFIG_GET,
+	return __scmi_voltage_get_u32(ph, VOLTAGE_CONFIG_GET,
 				      domain_id, config);
 }

-static int scmi_voltage_level_set(const struct scmi_handle *handle,
+static int scmi_voltage_level_set(const struct scmi_protocol_handle *ph,
 				  u32 domain_id, u32 flags, s32 volt_uV)
 {
 	int ret;
 	struct scmi_xfer *t;
-	struct voltage_info *vinfo = handle->voltage_priv;
+	struct voltage_info *vinfo = ph->get_priv(ph);
 	struct scmi_msg_cmd_level_set *cmd;

 	if (domain_id >= vinfo->num_domains)
 		return -EINVAL;

-	ret = scmi_xfer_get_init(handle, VOLTAGE_LEVEL_SET,
-				 SCMI_PROTOCOL_VOLTAGE,
-				 sizeof(*cmd), 0, &t);
+	ret = ph->xops->xfer_get_init(ph, VOLTAGE_LEVEL_SET,
+				      sizeof(*cmd), 0, &t);
 	if (ret)
 		return ret;
···
 	cmd->flags = cpu_to_le32(flags);
 	cmd->voltage_level = cpu_to_le32(volt_uV);

-	ret = scmi_do_xfer(handle, t);
+	ret = ph->xops->do_xfer(ph, t);

-	scmi_xfer_put(handle, t);
+	ph->xops->xfer_put(ph, t);
 	return ret;
 }

-static int scmi_voltage_level_get(const struct scmi_handle *handle,
+static int scmi_voltage_level_get(const struct scmi_protocol_handle *ph,
 				  u32 domain_id, s32 *volt_uV)
 {
-	return __scmi_voltage_get_u32(handle, VOLTAGE_LEVEL_GET,
+	return __scmi_voltage_get_u32(ph, VOLTAGE_LEVEL_GET,
 				      domain_id, (u32 *)volt_uV);
 }

 static const struct scmi_voltage_info * __must_check
-scmi_voltage_info_get(const struct scmi_handle *handle, u32 domain_id)
+scmi_voltage_info_get(const struct scmi_protocol_handle *ph, u32 domain_id)
 {
-	struct voltage_info *vinfo = handle->voltage_priv;
+	struct voltage_info *vinfo = ph->get_priv(ph);

 	if (domain_id >= vinfo->num_domains ||
 	    !vinfo->domains[domain_id].num_levels)
···
 	return vinfo->domains + domain_id;
 }

-static int scmi_voltage_domains_num_get(const struct scmi_handle *handle)
+static int scmi_voltage_domains_num_get(const struct scmi_protocol_handle *ph)
 {
-	struct voltage_info *vinfo = handle->voltage_priv;
+	struct voltage_info *vinfo = ph->get_priv(ph);

 	return vinfo->num_domains;
 }

-static struct scmi_voltage_ops voltage_ops = {
+static struct scmi_voltage_proto_ops voltage_proto_ops = {
 	.num_domains_get = scmi_voltage_domains_num_get,
 	.info_get = scmi_voltage_info_get,
 	.config_set = scmi_voltage_config_set,
···
 	.level_get = scmi_voltage_level_get,
 };

-static int scmi_voltage_protocol_init(struct scmi_handle *handle)
+static int scmi_voltage_protocol_init(const struct scmi_protocol_handle *ph)
 {
 	int ret;
 	u32 version;
 	struct voltage_info *vinfo;

-	ret = scmi_version_get(handle, SCMI_PROTOCOL_VOLTAGE, &version);
+	ret = ph->xops->version_get(ph, &version);
 	if (ret)
 		return ret;

-	dev_dbg(handle->dev, "Voltage Version %d.%d\n",
+	dev_dbg(ph->dev, "Voltage Version %d.%d\n",
 		PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));

-	vinfo = devm_kzalloc(handle->dev, sizeof(*vinfo), GFP_KERNEL);
+	vinfo = devm_kzalloc(ph->dev, sizeof(*vinfo), GFP_KERNEL);
 	if (!vinfo)
 		return -ENOMEM;
 	vinfo->version = version;

-	ret = scmi_protocol_attributes_get(handle, vinfo);
+	ret = scmi_protocol_attributes_get(ph, vinfo);
 	if (ret)
 		return ret;

 	if (vinfo->num_domains) {
-		vinfo->domains = devm_kcalloc(handle->dev, vinfo->num_domains,
+		vinfo->domains = devm_kcalloc(ph->dev, vinfo->num_domains,
 					      sizeof(*vinfo->domains),
 					      GFP_KERNEL);
 		if (!vinfo->domains)
 			return -ENOMEM;
-		ret = scmi_voltage_descriptors_get(handle, vinfo);
+		ret = scmi_voltage_descriptors_get(ph, vinfo);
 		if (ret)
 			return ret;
 	} else {
dev_warn(handle->dev, "No Voltage domains found.\n"); 371 + dev_warn(ph->dev, "No Voltage domains found.\n"); 368 372 } 369 373 370 - handle->voltage_ops = &voltage_ops; 371 - handle->voltage_priv = vinfo; 372 - 373 - return 0; 374 + return ph->set_priv(ph, vinfo); 374 375 } 375 376 376 - DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(SCMI_PROTOCOL_VOLTAGE, voltage) 377 + static const struct scmi_protocol scmi_voltage = { 378 + .id = SCMI_PROTOCOL_VOLTAGE, 379 + .owner = THIS_MODULE, 380 + .instance_init = &scmi_voltage_protocol_init, 381 + .ops = &voltage_proto_ops, 382 + }; 383 + 384 + DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(voltage, scmi_voltage)
+37 -4
drivers/firmware/imx/scu-pd.c
···
  * The framework needs some proper extension to support multi power
  * domain cases.
  *
+ * Update: Genpd assigns the ->of_node for the virtual device before it
+ * invokes ->attach_dev() callback, hence parsing for device resources via
+ * DT should work fine.
+ *
  * 2. It also breaks most of current drivers as the driver probe sequence
  * behavior changed if removing ->power_on|off() callback and use
  * ->start() and ->stop() instead. genpd_dev_pm_attach will only power
···
  * domain enabled will trigger a HW access error. That means we need fix
  * most drivers probe sequence with proper runtime pm.
  *
- * In summary, we need fix above two issue before being able to switch to
- * the "single global power domain" way.
+ * Update: Runtime PM support isn't necessary. Instead, this can easily be
+ * fixed in drivers by adding a call to dev_pm_domain_start() during probe.
+ *
+ * In summary, the second part needs to be addressed via minor updates to the
+ * relevant drivers, before the "single global power domain" model can be used.
  *
  */
 
···
 	const struct imx_sc_pd_range *pd_ranges;
 	u8 num_ranges;
 };
+
+static int imx_con_rsrc;
 
 static const struct imx_sc_pd_range imx8qxp_scu_pd_ranges[] = {
 	/* LSIO SS */
···
 	{ "can", IMX_SC_R_CAN_0, 3, true, 0 },
 	{ "ftm", IMX_SC_R_FTM_0, 2, true, 0 },
 	{ "lpi2c", IMX_SC_R_I2C_0, 4, true, 0 },
-	{ "adc", IMX_SC_R_ADC_0, 1, true, 0 },
+	{ "adc", IMX_SC_R_ADC_0, 2, true, 0 },
 	{ "lcd", IMX_SC_R_LCD_0, 1, true, 0 },
 	{ "lcd0-pwm", IMX_SC_R_LCD_0_PWM_0, 1, true, 0 },
 	{ "lpuart", IMX_SC_R_UART_0, 4, true, 0 },
···
 	return container_of(genpd, struct imx_sc_pm_domain, pd);
 }
 
+static void imx_sc_pd_get_console_rsrc(void)
+{
+	struct of_phandle_args specs;
+	int ret;
+
+	if (!of_stdout)
+		return;
+
+	ret = of_parse_phandle_with_args(of_stdout, "power-domains",
+					 "#power-domain-cells",
+					 0, &specs);
+	if (ret)
+		return;
+
+	imx_con_rsrc = specs.args[0];
+}
+
 static int imx_sc_pd_power(struct generic_pm_domain *domain, bool power_on)
 {
 	struct imx_sc_msg_req_set_resource_power_mode msg;
···
 		   const struct imx_sc_pd_range *pd_ranges)
 {
 	struct imx_sc_pm_domain *sc_pd;
+	bool is_off = true;
 	int ret;
 
 	if (!imx_sc_rm_is_resource_owned(pm_ipc_handle, pd_ranges->rsrc + idx))
···
 		 "%s", pd_ranges->name);
 
 	sc_pd->pd.name = sc_pd->name;
+	if (imx_con_rsrc == sc_pd->rsrc) {
+		sc_pd->pd.flags = GENPD_FLAG_RPM_ALWAYS_ON;
+		is_off = false;
+	}
 
 	if (sc_pd->rsrc >= IMX_SC_R_LAST) {
 		dev_warn(dev, "invalid pd %s rsrc id %d found",
···
 		return NULL;
 	}
 
-	ret = pm_genpd_init(&sc_pd->pd, NULL, true);
+	ret = pm_genpd_init(&sc_pd->pd, NULL, is_off);
 	if (ret) {
 		dev_warn(dev, "failed to init pd %s rsrc id %d",
 			 sc_pd->name, sc_pd->rsrc);
···
 	pd_soc = of_device_get_match_data(&pdev->dev);
 	if (!pd_soc)
 		return -ENODEV;
+
+	imx_sc_pd_get_console_rsrc();
 
 	return imx_scu_init_pm_domains(&pdev->dev, pd_soc);
 }
+2 -2
drivers/firmware/qcom_scm-legacy.c
···
 }
 
 /**
- * qcom_scm_call() - Sends a command to the SCM and waits for the command to
+ * scm_legacy_call() - Sends a command to the SCM and waits for the command to
  * finish processing.
  *
  * A note on cache maintenance:
···
 	(n & 0xf))
 
 /**
- * qcom_scm_call_atomic() - Send an atomic SCM command with up to 5 arguments
+ * scm_legacy_call_atomic() - Send an atomic SCM command with up to 5 arguments
  * and 3 return values
  * @desc: SCM call descriptor containing arguments
  * @res:  SCM call return values
+7 -5
drivers/firmware/qcom_scm-smc.c
···
 	} while (res->a0 == QCOM_SCM_V2_EBUSY);
 }
 
-int scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc,
-		 struct qcom_scm_res *res, bool atomic)
+
+int __scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc,
+		   enum qcom_scm_convention qcom_convention,
+		   struct qcom_scm_res *res, bool atomic)
 {
 	int arglen = desc->arginfo & 0xf;
 	int i;
···
 	size_t alloc_len;
 	gfp_t flag = atomic ? GFP_ATOMIC : GFP_KERNEL;
 	u32 smccc_call_type = atomic ? ARM_SMCCC_FAST_CALL : ARM_SMCCC_STD_CALL;
-	u32 qcom_smccc_convention =
-			(qcom_scm_convention == SMC_CONVENTION_ARM_32) ?
-			ARM_SMCCC_SMC_32 : ARM_SMCCC_SMC_64;
+	u32 qcom_smccc_convention = (qcom_convention == SMC_CONVENTION_ARM_32) ?
+				    ARM_SMCCC_SMC_32 : ARM_SMCCC_SMC_64;
 	struct arm_smccc_res smc_res;
 	struct arm_smccc_args smc = {0};
···
 	}
 
 	return (long)smc_res.a0 ? qcom_scm_remap_error(smc_res.a0) : 0;
+
 }
+49 -38
drivers/firmware/qcom_scm.c
···
 	clk_disable_unprepare(__scm->bus_clk);
 }
 
-static int __qcom_scm_is_call_available(struct device *dev, u32 svc_id,
-					u32 cmd_id);
+enum qcom_scm_convention qcom_scm_convention = SMC_CONVENTION_UNKNOWN;
+static DEFINE_SPINLOCK(scm_query_lock);
 
-enum qcom_scm_convention qcom_scm_convention;
-static bool has_queried __read_mostly;
-static DEFINE_SPINLOCK(query_lock);
-
-static void __query_convention(void)
+static enum qcom_scm_convention __get_convention(void)
 {
 	unsigned long flags;
 	struct qcom_scm_desc desc = {
···
 		.owner = ARM_SMCCC_OWNER_SIP,
 	};
 	struct qcom_scm_res res;
+	enum qcom_scm_convention probed_convention;
 	int ret;
+	bool forced = false;
 
-	spin_lock_irqsave(&query_lock, flags);
-	if (has_queried)
-		goto out;
+	if (likely(qcom_scm_convention != SMC_CONVENTION_UNKNOWN))
+		return qcom_scm_convention;
 
-	qcom_scm_convention = SMC_CONVENTION_ARM_64;
-	// Device isn't required as there is only one argument - no device
-	// needed to dma_map_single to secure world
-	ret = scm_smc_call(NULL, &desc, &res, true);
+	/*
+	 * Device isn't required as there is only one argument - no device
+	 * needed to dma_map_single to secure world
+	 */
+	probed_convention = SMC_CONVENTION_ARM_64;
+	ret = __scm_smc_call(NULL, &desc, probed_convention, &res, true);
 	if (!ret && res.result[0] == 1)
-		goto out;
+		goto found;
 
-	qcom_scm_convention = SMC_CONVENTION_ARM_32;
-	ret = scm_smc_call(NULL, &desc, &res, true);
+	/*
+	 * Some SC7180 firmwares didn't implement the
+	 * QCOM_SCM_INFO_IS_CALL_AVAIL call, so we fallback to forcing ARM_64
+	 * calling conventions on these firmwares. Luckily we don't make any
+	 * early calls into the firmware on these SoCs so the device pointer
+	 * will be valid here to check if the compatible matches.
+	 */
+	if (of_device_is_compatible(__scm ? __scm->dev->of_node : NULL, "qcom,scm-sc7180")) {
+		forced = true;
+		goto found;
+	}
+
+	probed_convention = SMC_CONVENTION_ARM_32;
+	ret = __scm_smc_call(NULL, &desc, probed_convention, &res, true);
 	if (!ret && res.result[0] == 1)
-		goto out;
+		goto found;
 
-	qcom_scm_convention = SMC_CONVENTION_LEGACY;
-out:
-	has_queried = true;
-	spin_unlock_irqrestore(&query_lock, flags);
-	pr_info("qcom_scm: convention: %s\n",
-		qcom_scm_convention_names[qcom_scm_convention]);
-}
+	probed_convention = SMC_CONVENTION_LEGACY;
+found:
+	spin_lock_irqsave(&scm_query_lock, flags);
+	if (probed_convention != qcom_scm_convention) {
+		qcom_scm_convention = probed_convention;
+		pr_info("qcom_scm: convention: %s%s\n",
+			qcom_scm_convention_names[qcom_scm_convention],
+			forced ? " (forced)" : "");
+	}
+	spin_unlock_irqrestore(&scm_query_lock, flags);
 
-static inline enum qcom_scm_convention __get_convention(void)
-{
-	if (unlikely(!has_queried))
-		__query_convention();
 	return qcom_scm_convention;
 }
 
···
 	}
 }
 
-static int __qcom_scm_is_call_available(struct device *dev, u32 svc_id,
-					u32 cmd_id)
+static bool __qcom_scm_is_call_available(struct device *dev, u32 svc_id,
+					 u32 cmd_id)
 {
 	int ret;
 	struct qcom_scm_desc desc = {
···
 
 	ret = qcom_scm_call(dev, &desc, &res);
 
-	return ret ? : res.result[0];
+	return ret ? false : !!res.result[0];
 }
 
 /**
···
 	};
 	struct qcom_scm_res res;
 
-	ret = __qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_PIL,
-					   QCOM_SCM_PIL_PAS_IS_SUPPORTED);
-	if (ret <= 0)
+	if (!__qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_PIL,
+					  QCOM_SCM_PIL_PAS_IS_SUPPORTED))
 		return false;
 
 	ret = qcom_scm_call(__scm->dev, &desc, &res);
···
  */
 bool qcom_scm_hdcp_available(void)
 {
+	bool avail;
 	int ret = qcom_scm_clk_enable();
 
 	if (ret)
 		return ret;
 
-	ret = __qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_HDCP,
+	avail = __qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_HDCP,
 						QCOM_SCM_HDCP_INVOKE);
 
 	qcom_scm_clk_disable();
 
-	return ret > 0;
+	return avail;
 }
 EXPORT_SYMBOL(qcom_scm_hdcp_available);
 
···
 	__scm = scm;
 	__scm->dev = &pdev->dev;
 
-	__query_convention();
+	__get_convention();
 
 	/*
 	 * If requested enable "download mode", from this point on warmboot
···
 	.driver = {
 		.name	= "qcom_scm",
 		.of_match_table = qcom_scm_dt_match,
+		.suppress_bind_attrs = true,
 	},
 	.probe = qcom_scm_probe,
 	.shutdown = qcom_scm_shutdown,
+5 -2
drivers/firmware/qcom_scm.h
···
 };
 
 #define SCM_SMC_FNID(s, c)	((((s) & 0xFF) << 8) | ((c) & 0xFF))
-extern int scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc,
-			struct qcom_scm_res *res, bool atomic);
+extern int __scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc,
+			  enum qcom_scm_convention qcom_convention,
+			  struct qcom_scm_res *res, bool atomic);
+#define scm_smc_call(dev, desc, res, atomic) \
+	__scm_smc_call((dev), (desc), qcom_scm_convention, (res), (atomic))
 
 #define SCM_LEGACY_FNID(s, c)	(((s) << 10) | ((c) & 0x3ff))
 extern int scm_legacy_call_atomic(struct device *dev,
+66 -3
drivers/firmware/raspberrypi.c
···
  */
 
 #include <linux/dma-mapping.h>
+#include <linux/kref.h>
 #include <linux/mailbox_client.h>
 #include <linux/module.h>
 #include <linux/of_platform.h>
···
 	struct mbox_chan *chan; /* The property channel. */
 	struct completion c;
 	u32 enabled;
+
+	struct kref consumers;
 };
 
 static DEFINE_MUTEX(transaction_lock);
···
 				    -1, NULL, 0);
 }
 
+static void rpi_firmware_delete(struct kref *kref)
+{
+	struct rpi_firmware *fw = container_of(kref, struct rpi_firmware,
+					       consumers);
+
+	mbox_free_channel(fw->chan);
+	kfree(fw);
+}
+
+void rpi_firmware_put(struct rpi_firmware *fw)
+{
+	kref_put(&fw->consumers, rpi_firmware_delete);
+}
+EXPORT_SYMBOL_GPL(rpi_firmware_put);
+
+static void devm_rpi_firmware_put(void *data)
+{
+	struct rpi_firmware *fw = data;
+
+	rpi_firmware_put(fw);
+}
+
 static int rpi_firmware_probe(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
 	struct rpi_firmware *fw;
 
-	fw = devm_kzalloc(dev, sizeof(*fw), GFP_KERNEL);
+	/*
+	 * Memory will be freed by rpi_firmware_delete() once all users have
+	 * released their firmware handles. Don't use devm_kzalloc() here.
+	 */
+	fw = kzalloc(sizeof(*fw), GFP_KERNEL);
 	if (!fw)
 		return -ENOMEM;
 
···
 	}
 
 	init_completion(&fw->c);
+	kref_init(&fw->consumers);
 
 	platform_set_drvdata(pdev, fw);
 
···
 	rpi_hwmon = NULL;
 	platform_device_unregister(rpi_clk);
 	rpi_clk = NULL;
-	mbox_free_channel(fw->chan);
+
+	rpi_firmware_put(fw);
 
 	return 0;
 }
···
  * rpi_firmware_get - Get pointer to rpi_firmware structure.
  * @firmware_node:    Pointer to the firmware Device Tree node.
  *
+ * The reference to rpi_firmware has to be released with rpi_firmware_put().
+ *
  * Returns NULL is the firmware device is not ready.
  */
 struct rpi_firmware *rpi_firmware_get(struct device_node *firmware_node)
 {
 	struct platform_device *pdev = of_find_device_by_node(firmware_node);
+	struct rpi_firmware *fw;
 
 	if (!pdev)
 		return NULL;
 
-	return platform_get_drvdata(pdev);
+	fw = platform_get_drvdata(pdev);
+	if (!fw)
+		return NULL;
+
+	if (!kref_get_unless_zero(&fw->consumers))
+		return NULL;
+
+	return fw;
 }
 EXPORT_SYMBOL_GPL(rpi_firmware_get);
+
+/**
+ * devm_rpi_firmware_get - Get pointer to rpi_firmware structure.
+ * @firmware_node:    Pointer to the firmware Device Tree node.
+ *
+ * Returns NULL is the firmware device is not ready.
+ */
+struct rpi_firmware *devm_rpi_firmware_get(struct device *dev,
+					   struct device_node *firmware_node)
+{
+	struct rpi_firmware *fw;
+
+	fw = rpi_firmware_get(firmware_node);
+	if (!fw)
+		return NULL;
+
+	if (devm_add_action_or_reset(dev, devm_rpi_firmware_put, fw))
+		return NULL;
+
+	return fw;
+}
+EXPORT_SYMBOL_GPL(devm_rpi_firmware_get);
 
 static const struct of_device_id rpi_firmware_of_match[] = {
 	{ .compatible = "raspberrypi,bcm2835-firmware", },
+3 -2
drivers/firmware/xilinx/zynqmp.c
···
 /*
  * Xilinx Zynq MPSoC Firmware layer
  *
- *  Copyright (C) 2014-2020 Xilinx, Inc.
+ *  Copyright (C) 2014-2021 Xilinx, Inc.
  *
  *  Michal Simek <michal.simek@xilinx.com>
  *  Davorin Mista <davorin.mista@aggios.com>
···
 static int zynqmp_firmware_remove(struct platform_device *pdev)
 {
 	struct pm_api_feature_data *feature_data;
+	struct hlist_node *tmp;
 	int i;
 
 	mfd_remove_devices(&pdev->dev);
 	zynqmp_pm_api_debugfs_exit();
 
-	hash_for_each(pm_api_features_map, i, feature_data, hentry) {
+	hash_for_each_safe(pm_api_features_map, i, tmp, feature_data, hentry) {
 		hash_del(&feature_data->hentry);
 		kfree(feature_data);
 	}
+4 -4
drivers/fpga/Kconfig
···
 
 config FPGA_MGR_SOCFPGA
 	tristate "Altera SOCFPGA FPGA Manager"
-	depends on ARCH_SOCFPGA || COMPILE_TEST
+	depends on ARCH_INTEL_SOCFPGA || COMPILE_TEST
 	help
 	  FPGA manager driver support for Altera SOCFPGA.
 
 config FPGA_MGR_SOCFPGA_A10
 	tristate "Altera SoCFPGA Arria10"
-	depends on ARCH_SOCFPGA || COMPILE_TEST
+	depends on ARCH_INTEL_SOCFPGA || COMPILE_TEST
 	select REGMAP_MMIO
 	help
 	  FPGA manager driver support for Altera Arria10 SoCFPGA.
···
 
 config FPGA_MGR_STRATIX10_SOC
 	tristate "Intel Stratix10 SoC FPGA Manager"
-	depends on (ARCH_STRATIX10 && INTEL_STRATIX10_SERVICE)
+	depends on (ARCH_INTEL_SOCFPGA && INTEL_STRATIX10_SERVICE)
 	help
 	  FPGA manager driver support for the Intel Stratix10 SoC.
···
 
 config SOCFPGA_FPGA_BRIDGE
 	tristate "Altera SoCFPGA FPGA Bridges"
-	depends on ARCH_SOCFPGA && FPGA_BRIDGE
+	depends on ARCH_INTEL_SOCFPGA && FPGA_BRIDGE
 	help
 	  Say Y to enable drivers for FPGA bridges for Altera SOCFPGA
 	  devices.
+1 -1
drivers/gpio/gpio-raspberrypi-exp.c
···
 		return -ENOENT;
 	}
 
-	fw = rpi_firmware_get(fw_node);
+	fw = devm_rpi_firmware_get(&pdev->dev, fw_node);
 	of_node_put(fw_node);
 	if (!fw)
 		return -EPROBE_DEFER;
+15 -9
drivers/hwmon/scmi-hwmon.c
···
 /*
  * System Control and Management Interface(SCMI) based hwmon sensor driver
  *
- * Copyright (C) 2018 ARM Ltd.
+ * Copyright (C) 2018-2021 ARM Ltd.
  * Sudeep Holla <sudeep.holla@arm.com>
  */
 
···
 #include <linux/sysfs.h>
 #include <linux/thermal.h>
 
+static const struct scmi_sensor_proto_ops *sensor_ops;
+
 struct scmi_sensors {
-	const struct scmi_handle *handle;
+	const struct scmi_protocol_handle *ph;
 	const struct scmi_sensor_info **info[hwmon_max];
 };
 
···
 	u64 value;
 	const struct scmi_sensor_info *sensor;
 	struct scmi_sensors *scmi_sensors = dev_get_drvdata(dev);
-	const struct scmi_handle *h = scmi_sensors->handle;
 
 	sensor = *(scmi_sensors->info[type] + channel);
-	ret = h->sensor_ops->reading_get(h, sensor->id, &value);
+	ret = sensor_ops->reading_get(scmi_sensors->ph, sensor->id, &value);
 	if (ret)
 		return ret;
 
···
 	struct hwmon_channel_info *scmi_hwmon_chan;
 	const struct hwmon_channel_info **ptr_scmi_ci;
 	const struct scmi_handle *handle = sdev->handle;
+	struct scmi_protocol_handle *ph;
 
-	if (!handle || !handle->sensor_ops)
+	if (!handle)
 		return -ENODEV;
 
-	nr_sensors = handle->sensor_ops->count_get(handle);
+	sensor_ops = handle->devm_protocol_get(sdev, SCMI_PROTOCOL_SENSOR, &ph);
+	if (IS_ERR(sensor_ops))
+		return PTR_ERR(sensor_ops);
+
+	nr_sensors = sensor_ops->count_get(ph);
 	if (!nr_sensors)
 		return -EIO;
 
···
 	if (!scmi_sensors)
 		return -ENOMEM;
 
-	scmi_sensors->handle = handle;
+	scmi_sensors->ph = ph;
 
 	for (i = 0; i < nr_sensors; i++) {
-		sensor = handle->sensor_ops->info_get(handle, i);
+		sensor = sensor_ops->info_get(ph, i);
 		if (!sensor)
 			return -EINVAL;
 
···
 	}
 
 	for (i = nr_sensors - 1; i >= 0 ; i--) {
-		sensor = handle->sensor_ops->info_get(handle, i);
+		sensor = sensor_ops->info_get(ph, i);
 		if (!sensor)
 			continue;
 
+1 -1
drivers/i2c/busses/Kconfig
···
 
 config I2C_ALTERA
 	tristate "Altera Soft IP I2C"
-	depends on ARCH_SOCFPGA || NIOS2 || COMPILE_TEST
+	depends on ARCH_INTEL_SOCFPGA || NIOS2 || COMPILE_TEST
 	depends on OF
 	help
 	  If you say yes to this option, support will be included for the
+50 -50
drivers/iio/common/scmi_sensors/scmi_iio.c
···
 #define SCMI_IIO_NUM_OF_AXIS 3
 
 struct scmi_iio_priv {
-	struct scmi_handle *handle;
+	const struct scmi_sensor_proto_ops *sensor_ops;
+	struct scmi_protocol_handle *ph;
 	const struct scmi_sensor_info *sensor_info;
 	struct iio_dev *indio_dev;
 	/* adding one additional channel for timestamp */
···
 static int scmi_iio_buffer_preenable(struct iio_dev *iio_dev)
 {
 	struct scmi_iio_priv *sensor = iio_priv(iio_dev);
-	u32 sensor_id = sensor->sensor_info->id;
 	u32 sensor_config = 0;
 	int err;
 
···
 
 	sensor_config |= FIELD_PREP(SCMI_SENS_CFG_SENSOR_ENABLED_MASK,
 				    SCMI_SENS_CFG_SENSOR_ENABLE);
-
-	err = sensor->handle->notify_ops->register_event_notifier(sensor->handle,
-			SCMI_PROTOCOL_SENSOR, SCMI_EVENT_SENSOR_UPDATE,
-			&sensor_id, &sensor->sensor_update_nb);
-	if (err) {
-		dev_err(&iio_dev->dev,
-			"Error in registering sensor update notifier for sensor %s err %d",
-			sensor->sensor_info->name, err);
-		return err;
-	}
-
-	err = sensor->handle->sensor_ops->config_set(sensor->handle,
-			sensor->sensor_info->id, sensor_config);
-	if (err) {
-		sensor->handle->notify_ops->unregister_event_notifier(sensor->handle,
-				SCMI_PROTOCOL_SENSOR,
-				SCMI_EVENT_SENSOR_UPDATE, &sensor_id,
-				&sensor->sensor_update_nb);
+	err = sensor->sensor_ops->config_set(sensor->ph,
+					     sensor->sensor_info->id,
+					     sensor_config);
+	if (err)
 		dev_err(&iio_dev->dev, "Error in enabling sensor %s err %d",
 			sensor->sensor_info->name, err);
-	}
 
 	return err;
 }
···
 static int scmi_iio_buffer_postdisable(struct iio_dev *iio_dev)
 {
 	struct scmi_iio_priv *sensor = iio_priv(iio_dev);
-	u32 sensor_id = sensor->sensor_info->id;
 	u32 sensor_config = 0;
 	int err;
 
 	sensor_config |= FIELD_PREP(SCMI_SENS_CFG_SENSOR_ENABLED_MASK,
 				    SCMI_SENS_CFG_SENSOR_DISABLE);
-
-	err = sensor->handle->notify_ops->unregister_event_notifier(sensor->handle,
-			SCMI_PROTOCOL_SENSOR, SCMI_EVENT_SENSOR_UPDATE,
-			&sensor_id, &sensor->sensor_update_nb);
-	if (err) {
-		dev_err(&iio_dev->dev,
-			"Error in unregistering sensor update notifier for sensor %s err %d",
-			sensor->sensor_info->name, err);
-		return err;
-	}
-
-	err = sensor->handle->sensor_ops->config_set(sensor->handle, sensor_id,
-						     sensor_config);
+	err = sensor->sensor_ops->config_set(sensor->ph,
+					     sensor->sensor_info->id,
+					     sensor_config);
 	if (err) {
 		dev_err(&iio_dev->dev,
 			"Error in disabling sensor %s with err %d",
···
 	u32 sensor_config;
 	char buf[32];
 
-	int err = sensor->handle->sensor_ops->config_get(sensor->handle,
-			sensor->sensor_info->id, &sensor_config);
+	int err = sensor->sensor_ops->config_get(sensor->ph,
+						 sensor->sensor_info->id,
+						 &sensor_config);
 	if (err) {
 		dev_err(&iio_dev->dev,
 			"Error in getting sensor config for sensor %s err %d",
···
 	sensor_config |=
 		FIELD_PREP(SCMI_SENS_CFG_ROUND_MASK, SCMI_SENS_CFG_ROUND_AUTO);
 
-	err = sensor->handle->sensor_ops->config_set(sensor->handle,
-			sensor->sensor_info->id, sensor_config);
+	err = sensor->sensor_ops->config_set(sensor->ph,
+					     sensor->sensor_info->id,
+					     sensor_config);
 	if (err)
 		dev_err(&iio_dev->dev,
 			"Error in setting sensor update interval for sensor %s value %u err %d",
···
 	u32 sensor_config;
 	int mult;
 
-	int err = sensor->handle->sensor_ops->config_get(sensor->handle,
-			sensor->sensor_info->id, &sensor_config);
+	int err = sensor->sensor_ops->config_get(sensor->ph,
+						 sensor->sensor_info->id,
+						 &sensor_config);
 	if (err) {
 		dev_err(&iio_dev->dev,
 			"Error in getting sensor config for sensor %s err %d",
···
 	return 0;
 }
 
-static struct iio_dev *scmi_alloc_iiodev(struct device *dev,
-					 struct scmi_handle *handle,
-					 const struct scmi_sensor_info *sensor_info)
+static struct iio_dev *
+scmi_alloc_iiodev(struct scmi_device *sdev,
+		  const struct scmi_sensor_proto_ops *ops,
+		  struct scmi_protocol_handle *ph,
+		  const struct scmi_sensor_info *sensor_info)
 {
 	struct iio_chan_spec *iio_channels;
 	struct scmi_iio_priv *sensor;
 	enum iio_modifier modifier;
 	enum iio_chan_type type;
 	struct iio_dev *iiodev;
+	struct device *dev = &sdev->dev;
+	const struct scmi_handle *handle = sdev->handle;
 	int i, ret;
 
 	iiodev = devm_iio_device_alloc(dev, sizeof(*sensor));
···
 	iiodev->modes = INDIO_DIRECT_MODE;
 	iiodev->dev.parent = dev;
 	sensor = iio_priv(iiodev);
-	sensor->handle = handle;
+	sensor->sensor_ops = ops;
+	sensor->ph = ph;
 	sensor->sensor_info = sensor_info;
 	sensor->sensor_update_nb.notifier_call = scmi_iio_sensor_update_cb;
 	sensor->indio_dev = iiodev;
···
 					  sensor_info->axis[i].id);
 	}
 
+	ret = handle->notify_ops->devm_event_notifier_register(sdev,
+			SCMI_PROTOCOL_SENSOR, SCMI_EVENT_SENSOR_UPDATE,
+			&sensor->sensor_info->id,
+			&sensor->sensor_update_nb);
+	if (ret) {
+		dev_err(&iiodev->dev,
+			"Error in registering sensor update notifier for sensor %s err %d",
+			sensor->sensor_info->name, ret);
+		return ERR_PTR(ret);
+	}
+
 	scmi_iio_set_timestamp_channel(&iio_channels[i], i);
 	iiodev->channels = iio_channels;
 	return iiodev;
···
 {
 	const struct scmi_sensor_info *sensor_info;
 	struct scmi_handle *handle = sdev->handle;
+	const struct scmi_sensor_proto_ops *sensor_ops;
+	struct scmi_protocol_handle *ph;
 	struct device *dev = &sdev->dev;
 	struct iio_dev *scmi_iio_dev;
 	u16 nr_sensors;
 	int err = -ENODEV, i;
 
-	if (!handle || !handle->sensor_ops) {
+	if (!handle)
+		return -ENODEV;
+
+	sensor_ops = handle->devm_protocol_get(sdev, SCMI_PROTOCOL_SENSOR, &ph);
+	if (IS_ERR(sensor_ops)) {
 		dev_err(dev, "SCMI device has no sensor interface\n");
-		return -EINVAL;
+		return PTR_ERR(sensor_ops);
 	}
 
-	nr_sensors = handle->sensor_ops->count_get(handle);
+	nr_sensors = sensor_ops->count_get(ph);
 	if (!nr_sensors) {
 		dev_dbg(dev, "0 sensors found via SCMI bus\n");
 		return -ENODEV;
 	}
 
 	for (i = 0; i < nr_sensors; i++) {
-		sensor_info = handle->sensor_ops->info_get(handle, i);
+		sensor_info = sensor_ops->info_get(ph, i);
 		if (!sensor_info) {
 			dev_err(dev, "SCMI sensor %d has missing info\n", i);
 			return -EINVAL;
···
 		    sensor_info->axis[0].type != RADIANS_SEC)
 			continue;
 
-		scmi_iio_dev = scmi_alloc_iiodev(dev, handle, sensor_info);
+		scmi_iio_dev = scmi_alloc_iiodev(sdev, sensor_ops, ph,
+						 sensor_info);
 		if (IS_ERR(scmi_iio_dev)) {
 			dev_err(dev,
 				"failed to allocate IIO device for sensor %s: %ld\n",
+1 -1
drivers/input/touchscreen/raspberrypi-ts.c
···
 	touchbuf = (u32)ts->fw_regs_phys;
 	error = rpi_firmware_property(fw, RPI_FIRMWARE_FRAMEBUFFER_SET_TOUCHBUF,
 				      &touchbuf, sizeof(touchbuf));
-
+	rpi_firmware_put(fw);
 	if (error || touchbuf != 0) {
 		dev_warn(dev, "Failed to set touchbuf, %d\n", error);
 		return error;
+1 -3
drivers/memory/fsl-corenet-cf.c
···
 	}
 
 	ccf->regs = devm_ioremap_resource(&pdev->dev, r);
-	if (IS_ERR(ccf->regs)) {
-		dev_err(&pdev->dev, "%s: can't map mem resource\n", __func__);
+	if (IS_ERR(ccf->regs))
 		return PTR_ERR(ccf->regs);
-	}
 
 	ccf->dev = &pdev->dev;
 	ccf->info = match->data;
+10 -9
drivers/memory/mtk-smi.c
···
 	struct device *dev = &pdev->dev;
 	struct device_node *smi_node;
 	struct platform_device *smi_pdev;
+	struct device_link *link;
 
 	larb = devm_kzalloc(dev, sizeof(*larb), GFP_KERNEL);
 	if (!larb)
···
 		if (!platform_get_drvdata(smi_pdev))
 			return -EPROBE_DEFER;
 		larb->smi_common_dev = &smi_pdev->dev;
+		link = device_link_add(dev, larb->smi_common_dev,
+				       DL_FLAG_PM_RUNTIME | DL_FLAG_STATELESS);
+		if (!link) {
+			dev_err(dev, "Unable to link smi-common dev\n");
+			return -ENODEV;
+		}
 	} else {
 		dev_err(dev, "Failed to get the smi_common device\n");
 		return -EINVAL;
···
 
 static int mtk_smi_larb_remove(struct platform_device *pdev)
 {
+	struct mtk_smi_larb *larb = platform_get_drvdata(pdev);
+
+	device_link_remove(&pdev->dev, larb->smi_common_dev);
 	pm_runtime_disable(&pdev->dev);
 	component_del(&pdev->dev, &mtk_smi_larb_component_ops);
 	return 0;
···
 	const struct mtk_smi_larb_gen *larb_gen = larb->larb_gen;
 	int ret;
 
-	/* Power on smi-common. */
-	ret = pm_runtime_resume_and_get(larb->smi_common_dev);
-	if (ret < 0) {
-		dev_err(dev, "Failed to pm get for smi-common(%d).\n", ret);
-		return ret;
-	}
-
 	ret = mtk_smi_clk_enable(&larb->smi);
 	if (ret < 0) {
 		dev_err(dev, "Failed to enable clock(%d).\n", ret);
-		pm_runtime_put_sync(larb->smi_common_dev);
 		return ret;
 	}
 
···
 	struct mtk_smi_larb *larb = dev_get_drvdata(dev);
 
 	mtk_smi_clk_disable(&larb->smi);
-	pm_runtime_put_sync(larb->smi_common_dev);
 	return 0;
 }
 
+5 -2
drivers/memory/omap-gpmc.c
··· 1009 1009 1010 1010 void gpmc_cs_free(int cs) 1011 1011 { 1012 - struct gpmc_cs_data *gpmc = &gpmc_cs[cs]; 1013 - struct resource *res = &gpmc->mem; 1012 + struct gpmc_cs_data *gpmc; 1013 + struct resource *res; 1014 1014 1015 1015 spin_lock(&gpmc_mem_lock); 1016 1016 if (cs >= gpmc_cs_num || cs < 0 || !gpmc_cs_reserved(cs)) { ··· 1018 1018 spin_unlock(&gpmc_mem_lock); 1019 1019 return; 1020 1020 } 1021 + gpmc = &gpmc_cs[cs]; 1022 + res = &gpmc->mem; 1023 + 1021 1024 gpmc_cs_disable_mem(cs); 1022 1025 if (res->flags) 1023 1026 release_resource(res);
+1 -1
drivers/memory/pl353-smc.c
··· 63 63 /* ECC memory config register specific constants */ 64 64 #define PL353_SMC_ECC_MEMCFG_MODE_MASK 0xC 65 65 #define PL353_SMC_ECC_MEMCFG_MODE_SHIFT 2 66 - #define PL353_SMC_ECC_MEMCFG_PGSIZE_MASK 0xC 66 + #define PL353_SMC_ECC_MEMCFG_PGSIZE_MASK 0x3 67 67 68 68 #define PL353_SMC_DC_UPT_NAND_REGS ((4 << 23) | /* CS: NAND chip */ \ 69 69 (2 << 21)) /* UpdateRegs operation */
+1 -1
drivers/memory/renesas-rpc-if.c
··· 192 192 } 193 193 194 194 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dirmap"); 195 - rpc->size = resource_size(res); 196 195 rpc->dirmap = devm_ioremap_resource(&pdev->dev, res); 197 196 if (IS_ERR(rpc->dirmap)) 198 197 rpc->dirmap = NULL; 198 + rpc->size = resource_size(res); 199 199 200 200 rpc->rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL); 201 201
+3 -1
drivers/memory/samsung/exynos5422-dmc.c
··· 1298 1298 1299 1299 dmc->curr_volt = target_volt; 1300 1300 1301 - clk_set_parent(dmc->mout_mx_mspll_ccore, dmc->mout_spll); 1301 + ret = clk_set_parent(dmc->mout_mx_mspll_ccore, dmc->mout_spll); 1302 + if (ret) 1303 + return ret; 1302 1304 1303 1305 clk_prepare_enable(dmc->fout_bpll); 1304 1306 clk_prepare_enable(dmc->mout_bpll);
+9
drivers/memory/tegra/mc.c
··· 827 827 return err; 828 828 } 829 829 830 + mc->debugfs.root = debugfs_create_dir("mc", NULL); 831 + 832 + if (mc->soc->init) { 833 + err = mc->soc->init(mc); 834 + if (err < 0) 835 + dev_err(&pdev->dev, "failed to initialize SoC driver: %d\n", 836 + err); 837 + } 838 + 830 839 err = tegra_mc_reset_setup(mc); 831 840 if (err < 0) 832 841 dev_err(&pdev->dev, "failed to register reset controller: %d\n",
+2 -2
drivers/memory/tegra/mc.h
··· 92 92 return container_of(provider, struct tegra_mc, provider); 93 93 } 94 94 95 - static inline u32 mc_readl(struct tegra_mc *mc, unsigned long offset) 95 + static inline u32 mc_readl(const struct tegra_mc *mc, unsigned long offset) 96 96 { 97 97 return readl_relaxed(mc->regs + offset); 98 98 } 99 99 100 - static inline void mc_writel(struct tegra_mc *mc, u32 value, 100 + static inline void mc_writel(const struct tegra_mc *mc, u32 value, 101 101 unsigned long offset) 102 102 { 103 103 writel_relaxed(value, mc->regs + offset);
+8 -8
drivers/memory/tegra/tegra124-emc.c
··· 905 905 else 906 906 emc->dram_bus_width = 32; 907 907 908 - dev_info(emc->dev, "%ubit DRAM bus\n", emc->dram_bus_width); 908 + dev_info_once(emc->dev, "%ubit DRAM bus\n", emc->dram_bus_width); 909 909 910 910 emc->dram_type &= EMC_FBIO_CFG5_DRAM_TYPE_MASK; 911 911 emc->dram_type >>= EMC_FBIO_CFG5_DRAM_TYPE_SHIFT; ··· 1204 1204 return 0; 1205 1205 } 1206 1206 1207 - DEFINE_SIMPLE_ATTRIBUTE(tegra_emc_debug_min_rate_fops, 1207 + DEFINE_DEBUGFS_ATTRIBUTE(tegra_emc_debug_min_rate_fops, 1208 1208 tegra_emc_debug_min_rate_get, 1209 1209 tegra_emc_debug_min_rate_set, "%llu\n"); 1210 1210 ··· 1234 1234 return 0; 1235 1235 } 1236 1236 1237 - DEFINE_SIMPLE_ATTRIBUTE(tegra_emc_debug_max_rate_fops, 1237 + DEFINE_DEBUGFS_ATTRIBUTE(tegra_emc_debug_max_rate_fops, 1238 1238 tegra_emc_debug_max_rate_get, 1239 1239 tegra_emc_debug_max_rate_set, "%llu\n"); 1240 1240 ··· 1419 1419 goto put_hw_table; 1420 1420 } 1421 1421 1422 - dev_info(emc->dev, "OPP HW ver. 0x%x, current clock rate %lu MHz\n", 1423 - hw_version, clk_get_rate(emc->clk) / 1000000); 1422 + dev_info_once(emc->dev, "OPP HW ver. 0x%x, current clock rate %lu MHz\n", 1423 + hw_version, clk_get_rate(emc->clk) / 1000000); 1424 1424 1425 1425 /* first dummy rate-set initializes voltage state */ 1426 1426 err = dev_pm_opp_set_rate(emc->dev, clk_get_rate(emc->clk)); ··· 1475 1475 if (err) 1476 1476 return err; 1477 1477 } else { 1478 - dev_info(&pdev->dev, 1479 - "no memory timings for RAM code %u found in DT\n", 1480 - ram_code); 1478 + dev_info_once(&pdev->dev, 1479 + "no memory timings for RAM code %u found in DT\n", 1480 + ram_code); 1481 1481 } 1482 1482 1483 1483 err = emc_init(emc);
+10 -10
drivers/memory/tegra/tegra20-emc.c
··· 411 411 sort(emc->timings, emc->num_timings, sizeof(*timing), cmp_timings, 412 412 NULL); 413 413 414 - dev_info(emc->dev, 415 - "got %u timings for RAM code %u (min %luMHz max %luMHz)\n", 416 - emc->num_timings, 417 - tegra_read_ram_code(), 418 - emc->timings[0].rate / 1000000, 419 - emc->timings[emc->num_timings - 1].rate / 1000000); 414 + dev_info_once(emc->dev, 415 + "got %u timings for RAM code %u (min %luMHz max %luMHz)\n", 416 + emc->num_timings, 417 + tegra_read_ram_code(), 418 + emc->timings[0].rate / 1000000, 419 + emc->timings[emc->num_timings - 1].rate / 1000000); 420 420 421 421 return 0; 422 422 } ··· 429 429 int err; 430 430 431 431 if (of_get_child_count(dev->of_node) == 0) { 432 - dev_info(dev, "device-tree doesn't have memory timings\n"); 432 + dev_info_once(dev, "device-tree doesn't have memory timings\n"); 433 433 return NULL; 434 434 } 435 435 ··· 496 496 else 497 497 emc->dram_bus_width = 32; 498 498 499 - dev_info(emc->dev, "%ubit DRAM bus\n", emc->dram_bus_width); 499 + dev_info_once(emc->dev, "%ubit DRAM bus\n", emc->dram_bus_width); 500 500 501 501 return 0; 502 502 } ··· 931 931 goto put_hw_table; 932 932 } 933 933 934 - dev_info(emc->dev, "OPP HW ver. 0x%x, current clock rate %lu MHz\n", 935 - hw_version, clk_get_rate(emc->clk) / 1000000); 934 + dev_info_once(emc->dev, "OPP HW ver. 0x%x, current clock rate %lu MHz\n", 935 + hw_version, clk_get_rate(emc->clk) / 1000000); 936 936 937 937 /* first dummy rate-set initializes voltage state */ 938 938 err = dev_pm_opp_set_rate(emc->dev, clk_get_rate(emc->clk));
+332
drivers/memory/tegra/tegra20.c
··· 3 3 * Copyright (C) 2012 NVIDIA CORPORATION. All rights reserved. 4 4 */ 5 5 6 + #include <linux/bitfield.h> 7 + #include <linux/delay.h> 8 + #include <linux/mutex.h> 6 9 #include <linux/of_device.h> 7 10 #include <linux/slab.h> 8 11 #include <linux/string.h> ··· 13 10 #include <dt-bindings/memory/tegra20-mc.h> 14 11 15 12 #include "mc.h" 13 + 14 + #define MC_STAT_CONTROL 0x90 15 + #define MC_STAT_EMC_CLOCK_LIMIT 0xa0 16 + #define MC_STAT_EMC_CLOCKS 0xa4 17 + #define MC_STAT_EMC_CONTROL_0 0xa8 18 + #define MC_STAT_EMC_CONTROL_1 0xac 19 + #define MC_STAT_EMC_COUNT_0 0xb8 20 + #define MC_STAT_EMC_COUNT_1 0xbc 21 + 22 + #define MC_STAT_CONTROL_CLIENT_ID GENMASK(13, 8) 23 + #define MC_STAT_CONTROL_EVENT GENMASK(23, 16) 24 + #define MC_STAT_CONTROL_PRI_EVENT GENMASK(25, 24) 25 + #define MC_STAT_CONTROL_FILTER_CLIENT_ENABLE GENMASK(26, 26) 26 + #define MC_STAT_CONTROL_FILTER_PRI GENMASK(29, 28) 27 + 28 + #define MC_STAT_CONTROL_PRI_EVENT_HP 0 29 + #define MC_STAT_CONTROL_PRI_EVENT_TM 1 30 + #define MC_STAT_CONTROL_PRI_EVENT_BW 2 31 + 32 + #define MC_STAT_CONTROL_FILTER_PRI_DISABLE 0 33 + #define MC_STAT_CONTROL_FILTER_PRI_NO 1 34 + #define MC_STAT_CONTROL_FILTER_PRI_YES 2 35 + 36 + #define MC_STAT_CONTROL_EVENT_QUALIFIED 0 37 + #define MC_STAT_CONTROL_EVENT_ANY_READ 1 38 + #define MC_STAT_CONTROL_EVENT_ANY_WRITE 2 39 + #define MC_STAT_CONTROL_EVENT_RD_WR_CHANGE 3 40 + #define MC_STAT_CONTROL_EVENT_SUCCESSIVE 4 41 + #define MC_STAT_CONTROL_EVENT_ARB_BANK_AA 5 42 + #define MC_STAT_CONTROL_EVENT_ARB_BANK_BB 6 43 + #define MC_STAT_CONTROL_EVENT_PAGE_MISS 7 44 + #define MC_STAT_CONTROL_EVENT_AUTO_PRECHARGE 8 45 + 46 + #define EMC_GATHER_RST (0 << 8) 47 + #define EMC_GATHER_CLEAR (1 << 8) 48 + #define EMC_GATHER_DISABLE (2 << 8) 49 + #define EMC_GATHER_ENABLE (3 << 8) 50 + 51 + #define MC_STAT_SAMPLE_TIME_USEC 16000 52 + 53 + /* we store collected statistics as fixed-point values */ 54 + #define MC_FX_FRAC_SCALE 100 55 + 56 + static DEFINE_MUTEX(tegra20_mc_stat_lock); 57 
+ 58 + struct tegra20_mc_stat_gather { 59 + unsigned int pri_filter; 60 + unsigned int pri_event; 61 + unsigned int result; 62 + unsigned int client; 63 + unsigned int event; 64 + bool client_enb; 65 + }; 66 + 67 + struct tegra20_mc_stat { 68 + struct tegra20_mc_stat_gather gather0; 69 + struct tegra20_mc_stat_gather gather1; 70 + unsigned int sample_time_usec; 71 + const struct tegra_mc *mc; 72 + }; 73 + 74 + struct tegra20_mc_client_stat { 75 + unsigned int events; 76 + unsigned int arb_high_prio; 77 + unsigned int arb_timeout; 78 + unsigned int arb_bandwidth; 79 + unsigned int rd_wr_change; 80 + unsigned int successive; 81 + unsigned int page_miss; 82 + unsigned int auto_precharge; 83 + unsigned int arb_bank_aa; 84 + unsigned int arb_bank_bb; 85 + }; 16 86 17 87 static const struct tegra_mc_client tegra20_mc_clients[] = { 18 88 { ··· 432 356 .set = tegra20_mc_icc_set, 433 357 }; 434 358 359 + static u32 tegra20_mc_stat_gather_control(const struct tegra20_mc_stat_gather *g) 360 + { 361 + u32 control; 362 + 363 + control = FIELD_PREP(MC_STAT_CONTROL_EVENT, g->event); 364 + control |= FIELD_PREP(MC_STAT_CONTROL_CLIENT_ID, g->client); 365 + control |= FIELD_PREP(MC_STAT_CONTROL_PRI_EVENT, g->pri_event); 366 + control |= FIELD_PREP(MC_STAT_CONTROL_FILTER_PRI, g->pri_filter); 367 + control |= FIELD_PREP(MC_STAT_CONTROL_FILTER_CLIENT_ENABLE, g->client_enb); 368 + 369 + return control; 370 + } 371 + 372 + static void tegra20_mc_stat_gather(struct tegra20_mc_stat *stat) 373 + { 374 + u32 clocks, count0, count1, control_0, control_1; 375 + const struct tegra_mc *mc = stat->mc; 376 + 377 + control_0 = tegra20_mc_stat_gather_control(&stat->gather0); 378 + control_1 = tegra20_mc_stat_gather_control(&stat->gather1); 379 + 380 + /* 381 + * Reset statistic gathers state, select statistics collection mode 382 + * and set clocks counter saturation limit to maximum. 
383 + */ 384 + mc_writel(mc, 0x00000000, MC_STAT_CONTROL); 385 + mc_writel(mc, control_0, MC_STAT_EMC_CONTROL_0); 386 + mc_writel(mc, control_1, MC_STAT_EMC_CONTROL_1); 387 + mc_writel(mc, 0xffffffff, MC_STAT_EMC_CLOCK_LIMIT); 388 + 389 + mc_writel(mc, EMC_GATHER_ENABLE, MC_STAT_CONTROL); 390 + fsleep(stat->sample_time_usec); 391 + mc_writel(mc, EMC_GATHER_DISABLE, MC_STAT_CONTROL); 392 + 393 + count0 = mc_readl(mc, MC_STAT_EMC_COUNT_0); 394 + count1 = mc_readl(mc, MC_STAT_EMC_COUNT_1); 395 + clocks = mc_readl(mc, MC_STAT_EMC_CLOCKS); 396 + clocks = max(clocks / 100 / MC_FX_FRAC_SCALE, 1u); 397 + 398 + stat->gather0.result = DIV_ROUND_UP(count0, clocks); 399 + stat->gather1.result = DIV_ROUND_UP(count1, clocks); 400 + } 401 + 402 + static void tegra20_mc_stat_events(const struct tegra_mc *mc, 403 + const struct tegra_mc_client *client0, 404 + const struct tegra_mc_client *client1, 405 + unsigned int pri_filter, 406 + unsigned int pri_event, 407 + unsigned int event, 408 + unsigned int *result0, 409 + unsigned int *result1) 410 + { 411 + struct tegra20_mc_stat stat = {}; 412 + 413 + stat.gather0.client = client0 ? client0->id : 0; 414 + stat.gather0.pri_filter = pri_filter; 415 + stat.gather0.client_enb = !!client0; 416 + stat.gather0.pri_event = pri_event; 417 + stat.gather0.event = event; 418 + 419 + stat.gather1.client = client1 ? 
client1->id : 0; 420 + stat.gather1.pri_filter = pri_filter; 421 + stat.gather1.client_enb = !!client1; 422 + stat.gather1.pri_event = pri_event; 423 + stat.gather1.event = event; 424 + 425 + stat.sample_time_usec = MC_STAT_SAMPLE_TIME_USEC; 426 + stat.mc = mc; 427 + 428 + tegra20_mc_stat_gather(&stat); 429 + 430 + *result0 = stat.gather0.result; 431 + *result1 = stat.gather1.result; 432 + } 433 + 434 + static void tegra20_mc_collect_stats(const struct tegra_mc *mc, 435 + struct tegra20_mc_client_stat *stats) 436 + { 437 + const struct tegra_mc_client *client0, *client1; 438 + unsigned int i; 439 + 440 + /* collect memory controller utilization percent for each client */ 441 + for (i = 0; i < mc->soc->num_clients; i += 2) { 442 + client0 = &mc->soc->clients[i]; 443 + client1 = &mc->soc->clients[i + 1]; 444 + 445 + if (i + 1 == mc->soc->num_clients) 446 + client1 = NULL; 447 + 448 + tegra20_mc_stat_events(mc, client0, client1, 449 + MC_STAT_CONTROL_FILTER_PRI_DISABLE, 450 + MC_STAT_CONTROL_PRI_EVENT_HP, 451 + MC_STAT_CONTROL_EVENT_QUALIFIED, 452 + &stats[i + 0].events, 453 + &stats[i + 1].events); 454 + } 455 + 456 + /* collect more info from active clients */ 457 + for (i = 0; i < mc->soc->num_clients; i++) { 458 + unsigned int clienta, clientb = mc->soc->num_clients; 459 + 460 + for (client0 = NULL; i < mc->soc->num_clients; i++) { 461 + if (stats[i].events) { 462 + client0 = &mc->soc->clients[i]; 463 + clienta = i++; 464 + break; 465 + } 466 + } 467 + 468 + for (client1 = NULL; i < mc->soc->num_clients; i++) { 469 + if (stats[i].events) { 470 + client1 = &mc->soc->clients[i]; 471 + clientb = i; 472 + break; 473 + } 474 + } 475 + 476 + if (!client0 && !client1) 477 + break; 478 + 479 + tegra20_mc_stat_events(mc, client0, client1, 480 + MC_STAT_CONTROL_FILTER_PRI_YES, 481 + MC_STAT_CONTROL_PRI_EVENT_HP, 482 + MC_STAT_CONTROL_EVENT_QUALIFIED, 483 + &stats[clienta].arb_high_prio, 484 + &stats[clientb].arb_high_prio); 485 + 486 + tegra20_mc_stat_events(mc, client0, 
client1, 487 + MC_STAT_CONTROL_FILTER_PRI_YES, 488 + MC_STAT_CONTROL_PRI_EVENT_TM, 489 + MC_STAT_CONTROL_EVENT_QUALIFIED, 490 + &stats[clienta].arb_timeout, 491 + &stats[clientb].arb_timeout); 492 + 493 + tegra20_mc_stat_events(mc, client0, client1, 494 + MC_STAT_CONTROL_FILTER_PRI_YES, 495 + MC_STAT_CONTROL_PRI_EVENT_BW, 496 + MC_STAT_CONTROL_EVENT_QUALIFIED, 497 + &stats[clienta].arb_bandwidth, 498 + &stats[clientb].arb_bandwidth); 499 + 500 + tegra20_mc_stat_events(mc, client0, client1, 501 + MC_STAT_CONTROL_FILTER_PRI_DISABLE, 502 + MC_STAT_CONTROL_PRI_EVENT_HP, 503 + MC_STAT_CONTROL_EVENT_RD_WR_CHANGE, 504 + &stats[clienta].rd_wr_change, 505 + &stats[clientb].rd_wr_change); 506 + 507 + tegra20_mc_stat_events(mc, client0, client1, 508 + MC_STAT_CONTROL_FILTER_PRI_DISABLE, 509 + MC_STAT_CONTROL_PRI_EVENT_HP, 510 + MC_STAT_CONTROL_EVENT_SUCCESSIVE, 511 + &stats[clienta].successive, 512 + &stats[clientb].successive); 513 + 514 + tegra20_mc_stat_events(mc, client0, client1, 515 + MC_STAT_CONTROL_FILTER_PRI_DISABLE, 516 + MC_STAT_CONTROL_PRI_EVENT_HP, 517 + MC_STAT_CONTROL_EVENT_PAGE_MISS, 518 + &stats[clienta].page_miss, 519 + &stats[clientb].page_miss); 520 + } 521 + } 522 + 523 + static void tegra20_mc_printf_percents(struct seq_file *s, 524 + const char *fmt, 525 + unsigned int percents_fx) 526 + { 527 + char percents_str[8]; 528 + 529 + snprintf(percents_str, ARRAY_SIZE(percents_str), "%3u.%02u%%", 530 + percents_fx / MC_FX_FRAC_SCALE, percents_fx % MC_FX_FRAC_SCALE); 531 + 532 + seq_printf(s, fmt, percents_str); 533 + } 534 + 535 + static int tegra20_mc_stats_show(struct seq_file *s, void *unused) 536 + { 537 + const struct tegra_mc *mc = dev_get_drvdata(s->private); 538 + struct tegra20_mc_client_stat *stats; 539 + unsigned int i; 540 + 541 + stats = kcalloc(mc->soc->num_clients + 1, sizeof(*stats), GFP_KERNEL); 542 + if (!stats) 543 + return -ENOMEM; 544 + 545 + mutex_lock(&tegra20_mc_stat_lock); 546 + 547 + tegra20_mc_collect_stats(mc, stats); 548 + 549 + 
mutex_unlock(&tegra20_mc_stat_lock); 550 + 551 + seq_puts(s, "Memory client Events Timeout High priority Bandwidth ARB RW change Successive Page miss\n"); 552 + seq_puts(s, "-----------------------------------------------------------------------------------------------------\n"); 553 + 554 + for (i = 0; i < mc->soc->num_clients; i++) { 555 + seq_printf(s, "%-14s ", mc->soc->clients[i].name); 556 + 557 + /* An event is generated when client performs R/W request. */ 558 + tegra20_mc_printf_percents(s, "%-9s", stats[i].events); 559 + 560 + /* 561 + * An event is generated based on the timeout (TM) signal 562 + * accompanying a request for arbitration. 563 + */ 564 + tegra20_mc_printf_percents(s, "%-10s", stats[i].arb_timeout); 565 + 566 + /* 567 + * An event is generated based on the high-priority (HP) signal 568 + * accompanying a request for arbitration. 569 + */ 570 + tegra20_mc_printf_percents(s, "%-16s", stats[i].arb_high_prio); 571 + 572 + /* 573 + * An event is generated based on the bandwidth (BW) signal 574 + * accompanying a request for arbitration. 575 + */ 576 + tegra20_mc_printf_percents(s, "%-16s", stats[i].arb_bandwidth); 577 + 578 + /* 579 + * An event is generated when the memory controller switches 580 + * from making a read request to making a write request. 581 + */ 582 + tegra20_mc_printf_percents(s, "%-12s", stats[i].rd_wr_change); 583 + 584 + /* 585 + * An event is generated when the chosen client wins arbitration 586 + * when it was also the winner at the previous request. If a 587 + * client makes N requests in a row that are honored, SUCCESSIVE 588 + * will be counted (N-1) times. Large values for this event 589 + * imply that if we were patient enough, all of those requests 590 + * could have been coalesced. 591 + */ 592 + tegra20_mc_printf_percents(s, "%-13s", stats[i].successive); 593 + 594 + /* 595 + * An event is generated when the memory controller detects a 596 + * page miss for the current request. 
597 + */ 598 + tegra20_mc_printf_percents(s, "%-12s\n", stats[i].page_miss); 599 + } 600 + 601 + kfree(stats); 602 + 603 + return 0; 604 + } 605 + 606 + static int tegra20_mc_init(struct tegra_mc *mc) 607 + { 608 + debugfs_create_devm_seqfile(mc->dev, "stats", mc->debugfs.root, 609 + tegra20_mc_stats_show); 610 + 611 + return 0; 612 + } 613 + 435 614 const struct tegra_mc_soc tegra20_mc_soc = { 436 615 .clients = tegra20_mc_clients, 437 616 .num_clients = ARRAY_SIZE(tegra20_mc_clients), ··· 698 367 .resets = tegra20_mc_resets, 699 368 .num_resets = ARRAY_SIZE(tegra20_mc_resets), 700 369 .icc_ops = &tegra20_mc_icc_ops, 370 + .init = tegra20_mc_init, 701 371 };
+9 -9
drivers/memory/tegra/tegra30-emc.c
··· 998 998 if (err) 999 999 return err; 1000 1000 1001 - dev_info(emc->dev, 1002 - "got %u timings for RAM code %u (min %luMHz max %luMHz)\n", 1003 - emc->num_timings, 1004 - tegra_read_ram_code(), 1005 - emc->timings[0].rate / 1000000, 1006 - emc->timings[emc->num_timings - 1].rate / 1000000); 1001 + dev_info_once(emc->dev, 1002 + "got %u timings for RAM code %u (min %luMHz max %luMHz)\n", 1003 + emc->num_timings, 1004 + tegra_read_ram_code(), 1005 + emc->timings[0].rate / 1000000, 1006 + emc->timings[emc->num_timings - 1].rate / 1000000); 1007 1007 1008 1008 return 0; 1009 1009 } ··· 1015 1015 int err; 1016 1016 1017 1017 if (of_get_child_count(dev->of_node) == 0) { 1018 - dev_info(dev, "device-tree doesn't have memory timings\n"); 1018 + dev_info_once(dev, "device-tree doesn't have memory timings\n"); 1019 1019 return NULL; 1020 1020 } 1021 1021 ··· 1503 1503 goto put_hw_table; 1504 1504 } 1505 1505 1506 - dev_info(emc->dev, "OPP HW ver. 0x%x, current clock rate %lu MHz\n", 1507 - hw_version, clk_get_rate(emc->clk) / 1000000); 1506 + dev_info_once(emc->dev, "OPP HW ver. 0x%x, current clock rate %lu MHz\n", 1507 + hw_version, clk_get_rate(emc->clk) / 1000000); 1508 1508 1509 1509 /* first dummy rate-set initializes voltage state */ 1510 1510 err = dev_pm_opp_set_rate(emc->dev, clk_get_rate(emc->clk));
+2 -2
drivers/mfd/Kconfig
··· 21 21 22 22 config MFD_ALTERA_A10SR 23 23 bool "Altera Arria10 DevKit System Resource chip" 24 - depends on ARCH_SOCFPGA && SPI_MASTER=y && OF 24 + depends on ARCH_INTEL_SOCFPGA && SPI_MASTER=y && OF 25 25 select REGMAP_SPI 26 26 select MFD_CORE 27 27 help ··· 32 32 33 33 config MFD_ALTERA_SYSMGR 34 34 bool "Altera SOCFPGA System Manager" 35 - depends on (ARCH_SOCFPGA || ARCH_STRATIX10) && OF 35 + depends on ARCH_INTEL_SOCFPGA && OF 36 36 select MFD_SYSCON 37 37 help 38 38 Select this to get System Manager support for all Altera branded
+2 -2
drivers/net/ethernet/stmicro/stmmac/Kconfig
··· 140 140 141 141 config DWMAC_SOCFPGA 142 142 tristate "SOCFPGA dwmac support" 143 - default (ARCH_SOCFPGA || ARCH_STRATIX10) 144 - depends on OF && (ARCH_SOCFPGA || ARCH_STRATIX10 || COMPILE_TEST) 143 + default ARCH_INTEL_SOCFPGA 144 + depends on OF && (ARCH_INTEL_SOCFPGA || COMPILE_TEST) 145 145 select MFD_SYSCON 146 146 help 147 147 Support for ethernet controller on Altera SOCFPGA
+11 -6
drivers/pinctrl/aspeed/pinctrl-aspeed-g5.c
··· 60 60 #define COND2 { ASPEED_IP_SCU, SCU94, GENMASK(1, 0), 0, 0 } 61 61 62 62 /* LHCR0 is offset from the end of the H8S/2168-compatible registers */ 63 - #define LHCR0 0x20 63 + #define LHCR0 0xa0 64 64 #define GFX064 0x64 65 65 66 66 #define B14 0 ··· 2648 2648 } 2649 2649 2650 2650 if (ip == ASPEED_IP_LPC) { 2651 - struct device_node *node; 2651 + struct device_node *np; 2652 2652 struct regmap *map; 2653 2653 2654 - node = of_parse_phandle(ctx->dev->of_node, 2654 + np = of_parse_phandle(ctx->dev->of_node, 2655 2655 "aspeed,external-nodes", 1); 2656 - if (node) { 2657 - map = syscon_node_to_regmap(node->parent); 2658 - of_node_put(node); 2656 + if (np) { 2657 + if (!of_device_is_compatible(np->parent, "aspeed,ast2400-lpc-v2") && 2658 + !of_device_is_compatible(np->parent, "aspeed,ast2500-lpc-v2") && 2659 + !of_device_is_compatible(np->parent, "aspeed,ast2600-lpc-v2")) 2660 + return ERR_PTR(-ENODEV); 2661 + 2662 + map = syscon_node_to_regmap(np->parent); 2663 + of_node_put(np); 2659 2664 if (IS_ERR(map)) 2660 2665 return map; 2661 2666 } else
+9
drivers/pwm/Kconfig
··· 423 423 To compile this driver as a module, choose M here: the module 424 424 will be called pwm-pxa. 425 425 426 + config PWM_RASPBERRYPI_POE 427 + tristate "Raspberry Pi Firmware PoE Hat PWM support" 428 + # Make sure not 'y' when RASPBERRYPI_FIRMWARE is 'm'. This can only 429 + # happen when COMPILE_TEST=y, hence the added !RASPBERRYPI_FIRMWARE. 430 + depends on RASPBERRYPI_FIRMWARE || (COMPILE_TEST && !RASPBERRYPI_FIRMWARE) 431 + help 432 + Enable Raspberry Pi firmware controller PWM bus used to control the 433 + official RPI PoE hat 434 + 426 435 config PWM_RCAR 427 436 tristate "Renesas R-Car PWM support" 428 437 depends on ARCH_RENESAS || COMPILE_TEST
+1
drivers/pwm/Makefile
··· 38 38 obj-$(CONFIG_PWM_OMAP_DMTIMER) += pwm-omap-dmtimer.o 39 39 obj-$(CONFIG_PWM_PCA9685) += pwm-pca9685.o 40 40 obj-$(CONFIG_PWM_PXA) += pwm-pxa.o 41 + obj-$(CONFIG_PWM_RASPBERRYPI_POE) += pwm-raspberrypi-poe.o 41 42 obj-$(CONFIG_PWM_RCAR) += pwm-rcar.o 42 43 obj-$(CONFIG_PWM_RENESAS_TPU) += pwm-renesas-tpu.o 43 44 obj-$(CONFIG_PWM_ROCKCHIP) += pwm-rockchip.o
+206
drivers/pwm/pwm-raspberrypi-poe.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright 2021 Nicolas Saenz Julienne <nsaenzjulienne@suse.de> 4 + * For more information on Raspberry Pi's PoE hat see: 5 + * https://www.raspberrypi.org/products/poe-hat/ 6 + * 7 + * Limitations: 8 + * - No disable bit, so a disabled PWM is simulated by duty_cycle 0 9 + * - Only normal polarity 10 + * - Fixed 12.5 kHz period 11 + * 12 + * The current period is completed when HW is reconfigured. 13 + */ 14 + 15 + #include <linux/module.h> 16 + #include <linux/of.h> 17 + #include <linux/platform_device.h> 18 + #include <linux/pwm.h> 19 + 20 + #include <soc/bcm2835/raspberrypi-firmware.h> 21 + #include <dt-bindings/pwm/raspberrypi,firmware-poe-pwm.h> 22 + 23 + #define RPI_PWM_MAX_DUTY 255 24 + #define RPI_PWM_PERIOD_NS 80000 /* 12.5 kHz */ 25 + 26 + #define RPI_PWM_CUR_DUTY_REG 0x0 27 + 28 + struct raspberrypi_pwm { 29 + struct rpi_firmware *firmware; 30 + struct pwm_chip chip; 31 + unsigned int duty_cycle; 32 + }; 33 + 34 + struct raspberrypi_pwm_prop { 35 + __le32 reg; 36 + __le32 val; 37 + __le32 ret; 38 + } __packed; 39 + 40 + static inline 41 + struct raspberrypi_pwm *raspberrypi_pwm_from_chip(struct pwm_chip *chip) 42 + { 43 + return container_of(chip, struct raspberrypi_pwm, chip); 44 + } 45 + 46 + static int raspberrypi_pwm_set_property(struct rpi_firmware *firmware, 47 + u32 reg, u32 val) 48 + { 49 + struct raspberrypi_pwm_prop msg = { 50 + .reg = cpu_to_le32(reg), 51 + .val = cpu_to_le32(val), 52 + }; 53 + int ret; 54 + 55 + ret = rpi_firmware_property(firmware, RPI_FIRMWARE_SET_POE_HAT_VAL, 56 + &msg, sizeof(msg)); 57 + if (ret) 58 + return ret; 59 + if (msg.ret) 60 + return -EIO; 61 + 62 + return 0; 63 + } 64 + 65 + static int raspberrypi_pwm_get_property(struct rpi_firmware *firmware, 66 + u32 reg, u32 *val) 67 + { 68 + struct raspberrypi_pwm_prop msg = { 69 + .reg = reg 70 + }; 71 + int ret; 72 + 73 + ret = rpi_firmware_property(firmware, RPI_FIRMWARE_GET_POE_HAT_VAL, 74 + &msg, sizeof(msg)); 75 
+ if (ret) 76 + return ret; 77 + if (msg.ret) 78 + return -EIO; 79 + 80 + *val = le32_to_cpu(msg.val); 81 + 82 + return 0; 83 + } 84 + 85 + static void raspberrypi_pwm_get_state(struct pwm_chip *chip, 86 + struct pwm_device *pwm, 87 + struct pwm_state *state) 88 + { 89 + struct raspberrypi_pwm *rpipwm = raspberrypi_pwm_from_chip(chip); 90 + 91 + state->period = RPI_PWM_PERIOD_NS; 92 + state->duty_cycle = DIV_ROUND_UP(rpipwm->duty_cycle * RPI_PWM_PERIOD_NS, 93 + RPI_PWM_MAX_DUTY); 94 + state->enabled = !!(rpipwm->duty_cycle); 95 + state->polarity = PWM_POLARITY_NORMAL; 96 + } 97 + 98 + static int raspberrypi_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm, 99 + const struct pwm_state *state) 100 + { 101 + struct raspberrypi_pwm *rpipwm = raspberrypi_pwm_from_chip(chip); 102 + unsigned int duty_cycle; 103 + int ret; 104 + 105 + if (state->period < RPI_PWM_PERIOD_NS || 106 + state->polarity != PWM_POLARITY_NORMAL) 107 + return -EINVAL; 108 + 109 + if (!state->enabled) 110 + duty_cycle = 0; 111 + else if (state->duty_cycle < RPI_PWM_PERIOD_NS) 112 + duty_cycle = DIV_ROUND_DOWN_ULL(state->duty_cycle * RPI_PWM_MAX_DUTY, 113 + RPI_PWM_PERIOD_NS); 114 + else 115 + duty_cycle = RPI_PWM_MAX_DUTY; 116 + 117 + if (duty_cycle == rpipwm->duty_cycle) 118 + return 0; 119 + 120 + ret = raspberrypi_pwm_set_property(rpipwm->firmware, RPI_PWM_CUR_DUTY_REG, 121 + duty_cycle); 122 + if (ret) { 123 + dev_err(chip->dev, "Failed to set duty cycle: %pe\n", 124 + ERR_PTR(ret)); 125 + return ret; 126 + } 127 + 128 + rpipwm->duty_cycle = duty_cycle; 129 + 130 + return 0; 131 + } 132 + 133 + static const struct pwm_ops raspberrypi_pwm_ops = { 134 + .get_state = raspberrypi_pwm_get_state, 135 + .apply = raspberrypi_pwm_apply, 136 + .owner = THIS_MODULE, 137 + }; 138 + 139 + static int raspberrypi_pwm_probe(struct platform_device *pdev) 140 + { 141 + struct device_node *firmware_node; 142 + struct device *dev = &pdev->dev; 143 + struct rpi_firmware *firmware; 144 + struct raspberrypi_pwm 
*rpipwm; 145 + int ret; 146 + 147 + firmware_node = of_get_parent(dev->of_node); 148 + if (!firmware_node) { 149 + dev_err(dev, "Missing firmware node\n"); 150 + return -ENOENT; 151 + } 152 + 153 + firmware = devm_rpi_firmware_get(&pdev->dev, firmware_node); 154 + of_node_put(firmware_node); 155 + if (!firmware) 156 + return dev_err_probe(dev, -EPROBE_DEFER, 157 + "Failed to get firmware handle\n"); 158 + 159 + rpipwm = devm_kzalloc(&pdev->dev, sizeof(*rpipwm), GFP_KERNEL); 160 + if (!rpipwm) 161 + return -ENOMEM; 162 + 163 + rpipwm->firmware = firmware; 164 + rpipwm->chip.dev = dev; 165 + rpipwm->chip.ops = &raspberrypi_pwm_ops; 166 + rpipwm->chip.base = -1; 167 + rpipwm->chip.npwm = RASPBERRYPI_FIRMWARE_PWM_NUM; 168 + 169 + platform_set_drvdata(pdev, rpipwm); 170 + 171 + ret = raspberrypi_pwm_get_property(rpipwm->firmware, RPI_PWM_CUR_DUTY_REG, 172 + &rpipwm->duty_cycle); 173 + if (ret) { 174 + dev_err(dev, "Failed to get duty cycle: %pe\n", ERR_PTR(ret)); 175 + return ret; 176 + } 177 + 178 + return pwmchip_add(&rpipwm->chip); 179 + } 180 + 181 + static int raspberrypi_pwm_remove(struct platform_device *pdev) 182 + { 183 + struct raspberrypi_pwm *rpipwm = platform_get_drvdata(pdev); 184 + 185 + return pwmchip_remove(&rpipwm->chip); 186 + } 187 + 188 + static const struct of_device_id raspberrypi_pwm_of_match[] = { 189 + { .compatible = "raspberrypi,firmware-poe-pwm", }, 190 + { } 191 + }; 192 + MODULE_DEVICE_TABLE(of, raspberrypi_pwm_of_match); 193 + 194 + static struct platform_driver raspberrypi_pwm_driver = { 195 + .driver = { 196 + .name = "raspberrypi-poe-pwm", 197 + .of_match_table = raspberrypi_pwm_of_match, 198 + }, 199 + .probe = raspberrypi_pwm_probe, 200 + .remove = raspberrypi_pwm_remove, 201 + }; 202 + module_platform_driver(raspberrypi_pwm_driver); 203 + 204 + MODULE_AUTHOR("Nicolas Saenz Julienne <nsaenzjulienne@suse.de>"); 205 + MODULE_DESCRIPTION("Raspberry Pi Firmware Based PWM Bus Driver"); 206 + MODULE_LICENSE("GPL v2");
+23 -19
drivers/regulator/scmi-regulator.c
··· 2 2 // 3 3 // System Control and Management Interface (SCMI) based regulator driver 4 4 // 5 - // Copyright (C) 2020 ARM Ltd. 5 + // Copyright (C) 2020-2021 ARM Ltd. 6 6 // 7 7 // Implements a regulator driver on top of the SCMI Voltage Protocol. 8 8 // ··· 33 33 #include <linux/slab.h> 34 34 #include <linux/types.h> 35 35 36 + static const struct scmi_voltage_proto_ops *voltage_ops; 37 + 36 38 struct scmi_regulator { 37 39 u32 id; 38 40 struct scmi_device *sdev; 41 + struct scmi_protocol_handle *ph; 39 42 struct regulator_dev *rdev; 40 43 struct device_node *of_node; 41 44 struct regulator_desc desc; ··· 53 50 static int scmi_reg_enable(struct regulator_dev *rdev) 54 51 { 55 52 struct scmi_regulator *sreg = rdev_get_drvdata(rdev); 56 - const struct scmi_handle *handle = sreg->sdev->handle; 57 53 58 - return handle->voltage_ops->config_set(handle, sreg->id, 59 - SCMI_VOLTAGE_ARCH_STATE_ON); 54 + return voltage_ops->config_set(sreg->ph, sreg->id, 55 + SCMI_VOLTAGE_ARCH_STATE_ON); 60 56 } 61 57 62 58 static int scmi_reg_disable(struct regulator_dev *rdev) 63 59 { 64 60 struct scmi_regulator *sreg = rdev_get_drvdata(rdev); 65 - const struct scmi_handle *handle = sreg->sdev->handle; 66 61 67 - return handle->voltage_ops->config_set(handle, sreg->id, 68 - SCMI_VOLTAGE_ARCH_STATE_OFF); 62 + return voltage_ops->config_set(sreg->ph, sreg->id, 63 + SCMI_VOLTAGE_ARCH_STATE_OFF); 69 64 } 70 65 71 66 static int scmi_reg_is_enabled(struct regulator_dev *rdev) ··· 71 70 int ret; 72 71 u32 config; 73 72 struct scmi_regulator *sreg = rdev_get_drvdata(rdev); 74 - const struct scmi_handle *handle = sreg->sdev->handle; 75 73 76 - ret = handle->voltage_ops->config_get(handle, sreg->id, 77 - &config); 74 + ret = voltage_ops->config_get(sreg->ph, sreg->id, &config); 78 75 if (ret) { 79 76 dev_err(&sreg->sdev->dev, 80 77 "Error %d reading regulator %s status.\n", ··· 88 89 int ret; 89 90 s32 volt_uV; 90 91 struct scmi_regulator *sreg = rdev_get_drvdata(rdev); 91 - const struct 
scmi_handle *handle = sreg->sdev->handle; 92 92 93 - ret = handle->voltage_ops->level_get(handle, sreg->id, &volt_uV); 93 + ret = voltage_ops->level_get(sreg->ph, sreg->id, &volt_uV); 94 94 if (ret) 95 95 return ret; 96 96 ··· 101 103 { 102 104 s32 volt_uV; 103 105 struct scmi_regulator *sreg = rdev_get_drvdata(rdev); 104 - const struct scmi_handle *handle = sreg->sdev->handle; 105 106 106 107 volt_uV = sreg->desc.ops->list_voltage(rdev, selector); 107 108 if (volt_uV <= 0) 108 109 return -EINVAL; 109 110 110 - return handle->voltage_ops->level_set(handle, sreg->id, 0x0, volt_uV); 111 + return voltage_ops->level_set(sreg->ph, sreg->id, 0x0, volt_uV); 111 112 } 112 113 113 114 static const struct regulator_ops scmi_reg_fixed_ops = { ··· 201 204 static int scmi_regulator_common_init(struct scmi_regulator *sreg) 202 205 { 203 206 int ret; 204 - const struct scmi_handle *handle = sreg->sdev->handle; 205 207 struct device *dev = &sreg->sdev->dev; 206 208 const struct scmi_voltage_info *vinfo; 207 209 208 - vinfo = handle->voltage_ops->info_get(handle, sreg->id); 210 + vinfo = voltage_ops->info_get(sreg->ph, sreg->id); 209 211 if (!vinfo) { 210 212 dev_warn(dev, "Failure to get voltage domain %d\n", 211 213 sreg->id); ··· 253 257 } 254 258 255 259 static int process_scmi_regulator_of_node(struct scmi_device *sdev, 260 + struct scmi_protocol_handle *ph, 256 261 struct device_node *np, 257 262 struct scmi_regulator_info *rinfo) 258 263 { ··· 281 284 282 285 rinfo->sregv[dom]->id = dom; 283 286 rinfo->sregv[dom]->sdev = sdev; 287 + rinfo->sregv[dom]->ph = ph; 284 288 285 289 /* get hold of good nodes */ 286 290 of_node_get(np); ··· 300 302 struct device_node *np, *child; 301 303 const struct scmi_handle *handle = sdev->handle; 302 304 struct scmi_regulator_info *rinfo; 305 + struct scmi_protocol_handle *ph; 303 306 304 - if (!handle || !handle->voltage_ops) 307 + if (!handle) 305 308 return -ENODEV; 306 309 307 - num_doms = handle->voltage_ops->num_domains_get(handle); 310 
+ voltage_ops = handle->devm_protocol_get(sdev, 311 + SCMI_PROTOCOL_VOLTAGE, &ph); 312 + if (IS_ERR(voltage_ops)) 313 + return PTR_ERR(voltage_ops); 314 + 315 + num_doms = voltage_ops->num_domains_get(ph); 308 316 if (num_doms <= 0) { 309 317 if (!num_doms) { 310 318 dev_err(&sdev->dev, ··· 345 341 */ 346 342 np = of_find_node_by_name(handle->dev->of_node, "regulators"); 347 343 for_each_child_of_node(np, child) { 348 - ret = process_scmi_regulator_of_node(sdev, child, rinfo); 344 + ret = process_scmi_regulator_of_node(sdev, ph, child, rinfo); 349 345 /* abort on any mem issue */ 350 346 if (ret == -ENOMEM) 351 347 return ret;
+3 -3
drivers/reset/Kconfig
··· 183 183 184 184 config RESET_SIMPLE 185 185 bool "Simple Reset Controller Driver" if COMPILE_TEST 186 - default ARCH_AGILEX || ARCH_ASPEED || ARCH_BCM4908 || ARCH_BITMAIN || ARCH_REALTEK || ARCH_STM32 || ARCH_STRATIX10 || ARCH_SUNXI || ARC 186 + default ARCH_ASPEED || ARCH_BCM4908 || ARCH_BITMAIN || ARCH_REALTEK || ARCH_STM32 || (ARCH_INTEL_SOCFPGA && ARM64) || ARCH_SUNXI || ARC 187 187 help 188 188 This enables a simple reset controller driver for reset lines that 189 189 that can be asserted and deasserted by toggling bits in a contiguous, ··· 205 205 This enables the RCC reset controller driver for STM32 MPUs. 206 206 207 207 config RESET_SOCFPGA 208 - bool "SoCFPGA Reset Driver" if COMPILE_TEST && !ARCH_SOCFPGA 209 - default ARCH_SOCFPGA 208 + bool "SoCFPGA Reset Driver" if COMPILE_TEST && (!ARM || !ARCH_INTEL_SOCFPGA) 209 + default ARM && ARCH_INTEL_SOCFPGA 210 210 select RESET_SIMPLE 211 211 help 212 212 This enables the reset driver for the SoCFPGA ARMv7 platforms. This
+1 -1
drivers/reset/reset-raspberrypi.c
··· 82 82 return -ENOENT; 83 83 } 84 84 85 - fw = rpi_firmware_get(np); 85 + fw = devm_rpi_firmware_get(&pdev->dev, np); 86 86 of_node_put(np); 87 87 if (!fw) 88 88 return -EPROBE_DEFER;
+20 -13
drivers/reset/reset-scmi.c
··· 2 2 /* 3 3 * ARM System Control and Management Interface (ARM SCMI) reset driver 4 4 * 5 - * Copyright (C) 2019 ARM Ltd. 5 + * Copyright (C) 2019-2021 ARM Ltd. 6 6 */ 7 7 8 8 #include <linux/module.h> ··· 11 11 #include <linux/reset-controller.h> 12 12 #include <linux/scmi_protocol.h> 13 13 14 + static const struct scmi_reset_proto_ops *reset_ops; 15 + 14 16 /** 15 17 * struct scmi_reset_data - reset controller information structure 16 18 * @rcdev: reset controller entity 17 - * @handle: ARM SCMI handle used for communication with system controller 19 + * @ph: ARM SCMI protocol handle used for communication with system controller 18 20 */ 19 21 struct scmi_reset_data { 20 22 struct reset_controller_dev rcdev; 21 - const struct scmi_handle *handle; 23 + const struct scmi_protocol_handle *ph; 22 24 }; 23 25 24 26 #define to_scmi_reset_data(p) container_of((p), struct scmi_reset_data, rcdev) 25 - #define to_scmi_handle(p) (to_scmi_reset_data(p)->handle) 27 + #define to_scmi_handle(p) (to_scmi_reset_data(p)->ph) 26 28 27 29 /** 28 30 * scmi_reset_assert() - assert device reset ··· 39 37 static int 40 38 scmi_reset_assert(struct reset_controller_dev *rcdev, unsigned long id) 41 39 { 42 - const struct scmi_handle *handle = to_scmi_handle(rcdev); 40 + const struct scmi_protocol_handle *ph = to_scmi_handle(rcdev); 43 41 44 - return handle->reset_ops->assert(handle, id); 42 + return reset_ops->assert(ph, id); 45 43 } 46 44 47 45 /** ··· 57 55 static int 58 56 scmi_reset_deassert(struct reset_controller_dev *rcdev, unsigned long id) 59 57 { 60 - const struct scmi_handle *handle = to_scmi_handle(rcdev); 58 + const struct scmi_protocol_handle *ph = to_scmi_handle(rcdev); 61 59 62 - return handle->reset_ops->deassert(handle, id); 60 + return reset_ops->deassert(ph, id); 63 61 } 64 62 65 63 /** ··· 75 73 static int 76 74 scmi_reset_reset(struct reset_controller_dev *rcdev, unsigned long id) 77 75 { 78 - const struct scmi_handle *handle = to_scmi_handle(rcdev); 76 + const 
struct scmi_protocol_handle *ph = to_scmi_handle(rcdev); 79 77 80 - return handle->reset_ops->reset(handle, id); 78 + return reset_ops->reset(ph, id); 81 79 } 82 80 83 81 static const struct reset_control_ops scmi_reset_ops = { ··· 92 90 struct device *dev = &sdev->dev; 93 91 struct device_node *np = dev->of_node; 94 92 const struct scmi_handle *handle = sdev->handle; 93 + struct scmi_protocol_handle *ph; 95 94 96 - if (!handle || !handle->reset_ops) 95 + if (!handle) 97 96 return -ENODEV; 97 + 98 + reset_ops = handle->devm_protocol_get(sdev, SCMI_PROTOCOL_RESET, &ph); 99 + if (IS_ERR(reset_ops)) 100 + return PTR_ERR(reset_ops); 98 101 99 102 data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL); 100 103 if (!data) ··· 108 101 data->rcdev.ops = &scmi_reset_ops; 109 102 data->rcdev.owner = THIS_MODULE; 110 103 data->rcdev.of_node = np; 111 - data->rcdev.nr_resets = handle->reset_ops->num_domains_get(handle); 112 - data->handle = handle; 104 + data->rcdev.nr_resets = reset_ops->num_domains_get(ph); 105 + data->ph = ph; 113 106 114 107 return devm_reset_controller_register(dev, &data->rcdev); 115 108 }
+14 -6
drivers/soc/aspeed/aspeed-lpc-ctrl.c
··· 18 18 19 19 #define DEVICE_NAME "aspeed-lpc-ctrl" 20 20 21 - #define HICR5 0x0 21 + #define HICR5 0x80 22 22 #define HICR5_ENL2H BIT(8) 23 23 #define HICR5_ENFWH BIT(10) 24 24 25 - #define HICR6 0x4 25 + #define HICR6 0x84 26 26 #define SW_FWH2AHB BIT(17) 27 27 28 - #define HICR7 0x8 29 - #define HICR8 0xc 28 + #define HICR7 0x88 29 + #define HICR8 0x8c 30 30 31 31 struct aspeed_lpc_ctrl { 32 32 struct miscdevice miscdev; ··· 215 215 struct device_node *node; 216 216 struct resource resm; 217 217 struct device *dev; 218 + struct device_node *np; 218 219 int rc; 219 220 220 221 dev = &pdev->dev; ··· 271 270 } 272 271 } 273 272 274 - lpc_ctrl->regmap = syscon_node_to_regmap( 275 - pdev->dev.parent->of_node); 273 + np = pdev->dev.parent->of_node; 274 + if (!of_device_is_compatible(np, "aspeed,ast2400-lpc-v2") && 275 + !of_device_is_compatible(np, "aspeed,ast2500-lpc-v2") && 276 + !of_device_is_compatible(np, "aspeed,ast2600-lpc-v2")) { 277 + dev_err(dev, "unsupported LPC device binding\n"); 278 + return -ENODEV; 279 + } 280 + 281 + lpc_ctrl->regmap = syscon_node_to_regmap(np); 276 282 if (IS_ERR(lpc_ctrl->regmap)) { 277 283 dev_err(dev, "Couldn't get regmap\n"); 278 284 return -ENODEV;
+18 -9
drivers/soc/aspeed/aspeed-lpc-snoop.c
··· 29 29 #define NUM_SNOOP_CHANNELS 2 30 30 #define SNOOP_FIFO_SIZE 2048 31 31 32 - #define HICR5 0x0 32 + #define HICR5 0x80 33 33 #define HICR5_EN_SNP0W BIT(0) 34 34 #define HICR5_ENINT_SNP0W BIT(1) 35 35 #define HICR5_EN_SNP1W BIT(2) 36 36 #define HICR5_ENINT_SNP1W BIT(3) 37 - 38 - #define HICR6 0x4 37 + #define HICR6 0x84 39 38 #define HICR6_STR_SNP0W BIT(0) 40 39 #define HICR6_STR_SNP1W BIT(1) 41 - #define SNPWADR 0x10 40 + #define SNPWADR 0x90 42 41 #define SNPWADR_CH0_MASK GENMASK(15, 0) 43 42 #define SNPWADR_CH0_SHIFT 0 44 43 #define SNPWADR_CH1_MASK GENMASK(31, 16) 45 44 #define SNPWADR_CH1_SHIFT 16 46 - #define SNPWDR 0x14 45 + #define SNPWDR 0x94 47 46 #define SNPWDR_CH0_MASK GENMASK(7, 0) 48 47 #define SNPWDR_CH0_SHIFT 0 49 48 #define SNPWDR_CH1_MASK GENMASK(15, 8) 50 49 #define SNPWDR_CH1_SHIFT 8 51 - #define HICRB 0x80 50 + #define HICRB 0x100 52 51 #define HICRB_ENSNP0D BIT(14) 53 52 #define HICRB_ENSNP1D BIT(15) 54 53 ··· 94 95 return -EINTR; 95 96 } 96 97 ret = kfifo_to_user(&chan->fifo, buffer, count, &copied); 98 + if (ret) 99 + return ret; 97 100 98 - return ret ? 
ret : copied; 101 + return copied; 99 102 } 100 103 101 104 static __poll_t snoop_file_poll(struct file *file, ··· 261 260 { 262 261 struct aspeed_lpc_snoop *lpc_snoop; 263 262 struct device *dev; 263 + struct device_node *np; 264 264 u32 port; 265 265 int rc; 266 266 ··· 271 269 if (!lpc_snoop) 272 270 return -ENOMEM; 273 271 274 - lpc_snoop->regmap = syscon_node_to_regmap( 275 - pdev->dev.parent->of_node); 272 + np = pdev->dev.parent->of_node; 273 + if (!of_device_is_compatible(np, "aspeed,ast2400-lpc-v2") && 274 + !of_device_is_compatible(np, "aspeed,ast2500-lpc-v2") && 275 + !of_device_is_compatible(np, "aspeed,ast2600-lpc-v2")) { 276 + dev_err(dev, "unsupported LPC device binding\n"); 277 + return -ENODEV; 278 + } 279 + 280 + lpc_snoop->regmap = syscon_node_to_regmap(np); 276 281 if (IS_ERR(lpc_snoop->regmap)) { 277 282 dev_err(dev, "Couldn't get regmap\n"); 278 283 return -ENODEV;
+30
drivers/soc/bcm/bcm63xx/bcm-pmb.c
··· 209 209 return err; 210 210 } 211 211 212 + static int bcm_pmb_power_on_sata(struct bcm_pmb *pmb, int bus, u8 device) 213 + { 214 + int err; 215 + 216 + err = bcm_pmb_power_on_zone(pmb, bus, device, 0); 217 + if (err) 218 + return err; 219 + 220 + /* Does not apply to the BCM963158 */ 221 + err = bcm_pmb_bpcm_write(pmb, bus, device, BPCM_MISC_CONTROL, 0); 222 + if (err) 223 + return err; 224 + 225 + err = bcm_pmb_bpcm_write(pmb, bus, device, BPCM_SR_CONTROL, 0xffffffff); 226 + if (err) 227 + return err; 228 + 229 + err = bcm_pmb_bpcm_write(pmb, bus, device, BPCM_SR_CONTROL, 0); 230 + 231 + return err; 232 + } 233 + 212 234 static int bcm_pmb_power_on(struct generic_pm_domain *genpd) 213 235 { 214 236 struct bcm_pmb_pm_domain *pd = container_of(genpd, struct bcm_pmb_pm_domain, genpd); ··· 244 222 return bcm_pmb_power_on_zone(pmb, data->bus, data->device, 0); 245 223 case BCM_PMB_HOST_USB: 246 224 return bcm_pmb_power_on_device(pmb, data->bus, data->device); 225 + case BCM_PMB_SATA: 226 + return bcm_pmb_power_on_sata(pmb, data->bus, data->device); 247 227 default: 248 228 dev_err(pmb->dev, "unsupported device id: %d\n", data->id); 249 229 return -EINVAL; ··· 341 317 { }, 342 318 }; 343 319 320 + static const struct bcm_pmb_pd_data bcm_pmb_bcm63138_data[] = { 321 + { .name = "sata", .id = BCM_PMB_SATA, .bus = 0, .device = 3, }, 322 + { }, 323 + }; 324 + 344 325 static const struct of_device_id bcm_pmb_of_match[] = { 345 326 { .compatible = "brcm,bcm4908-pmb", .data = &bcm_pmb_bcm4908_data, }, 327 + { .compatible = "brcm,bcm63138-pmb", .data = &bcm_pmb_bcm63138_data, }, 346 328 { }, 347 329 }; 348 330
+1 -1
drivers/soc/bcm/raspberrypi-power.c
··· 177 177 return -ENODEV; 178 178 } 179 179 180 - rpi_domains->fw = rpi_firmware_get(fw_np); 180 + rpi_domains->fw = devm_rpi_firmware_get(&pdev->dev, fw_np); 181 181 of_node_put(fw_np); 182 182 if (!rpi_domains->fw) 183 183 return -EPROBE_DEFER;
+1 -1
drivers/soc/fsl/guts.c
··· 117 117 if (matches->svr == (svr & matches->mask)) 118 118 return matches; 119 119 matches++; 120 - }; 120 + } 121 121 return NULL; 122 122 } 123 123
-1
drivers/soc/fsl/qbman/bman.c
··· 709 709 return pool; 710 710 err: 711 711 bm_release_bpid(bpid); 712 - kfree(pool); 713 712 return NULL; 714 713 } 715 714 EXPORT_SYMBOL(bman_new_pool);
+2 -1
drivers/soc/fsl/qbman/bman_portal.c
··· 160 160 __bman_portals_probed = 1; 161 161 /* unassigned portal, skip init */ 162 162 spin_unlock(&bman_lock); 163 - return 0; 163 + goto check_cleanup; 164 164 } 165 165 166 166 cpumask_set_cpu(cpu, &portal_cpus); ··· 176 176 if (!cpu_online(cpu)) 177 177 bman_offline_cpu(cpu); 178 178 179 + check_cleanup: 179 180 if (__bman_portals_probed == 1 && bman_requires_cleanup()) { 180 181 /* 181 182 * BMan wasn't reset prior to boot (Kexec for example)
+2 -1
drivers/soc/fsl/qbman/qman_portal.c
··· 302 302 __qman_portals_probed = 1; 303 303 /* unassigned portal, skip init */ 304 304 spin_unlock(&qman_lock); 305 - return 0; 305 + goto check_cleanup; 306 306 } 307 307 308 308 cpumask_set_cpu(cpu, &portal_cpus); ··· 323 323 if (!cpu_online(cpu)) 324 324 qman_offline_cpu(cpu); 325 325 326 + check_cleanup: 326 327 if (__qman_portals_probed == 1 && qman_requires_cleanup()) { 327 328 /* 328 329 * QMan wasn't reset prior to boot (Kexec for example)
+10 -10
drivers/soc/fsl/qe/gpio.c
··· 41 41 container_of(mm_gc, struct qe_gpio_chip, mm_gc); 42 42 struct qe_pio_regs __iomem *regs = mm_gc->regs; 43 43 44 - qe_gc->cpdata = qe_ioread32be(&regs->cpdata); 44 + qe_gc->cpdata = ioread32be(&regs->cpdata); 45 45 qe_gc->saved_regs.cpdata = qe_gc->cpdata; 46 - qe_gc->saved_regs.cpdir1 = qe_ioread32be(&regs->cpdir1); 47 - qe_gc->saved_regs.cpdir2 = qe_ioread32be(&regs->cpdir2); 48 - qe_gc->saved_regs.cppar1 = qe_ioread32be(&regs->cppar1); 49 - qe_gc->saved_regs.cppar2 = qe_ioread32be(&regs->cppar2); 50 - qe_gc->saved_regs.cpodr = qe_ioread32be(&regs->cpodr); 46 + qe_gc->saved_regs.cpdir1 = ioread32be(&regs->cpdir1); 47 + qe_gc->saved_regs.cpdir2 = ioread32be(&regs->cpdir2); 48 + qe_gc->saved_regs.cppar1 = ioread32be(&regs->cppar1); 49 + qe_gc->saved_regs.cppar2 = ioread32be(&regs->cppar2); 50 + qe_gc->saved_regs.cpodr = ioread32be(&regs->cpodr); 51 51 } 52 52 53 53 static int qe_gpio_get(struct gpio_chip *gc, unsigned int gpio) ··· 56 56 struct qe_pio_regs __iomem *regs = mm_gc->regs; 57 57 u32 pin_mask = 1 << (QE_PIO_PINS - 1 - gpio); 58 58 59 - return !!(qe_ioread32be(&regs->cpdata) & pin_mask); 59 + return !!(ioread32be(&regs->cpdata) & pin_mask); 60 60 } 61 61 62 62 static void qe_gpio_set(struct gpio_chip *gc, unsigned int gpio, int val) ··· 74 74 else 75 75 qe_gc->cpdata &= ~pin_mask; 76 76 77 - qe_iowrite32be(qe_gc->cpdata, &regs->cpdata); 77 + iowrite32be(qe_gc->cpdata, &regs->cpdata); 78 78 79 79 spin_unlock_irqrestore(&qe_gc->lock, flags); 80 80 } ··· 101 101 } 102 102 } 103 103 104 - qe_iowrite32be(qe_gc->cpdata, &regs->cpdata); 104 + iowrite32be(qe_gc->cpdata, &regs->cpdata); 105 105 106 106 spin_unlock_irqrestore(&qe_gc->lock, flags); 107 107 } ··· 269 269 else 270 270 qe_gc->cpdata &= ~mask1; 271 271 272 - qe_iowrite32be(qe_gc->cpdata, &regs->cpdata); 272 + iowrite32be(qe_gc->cpdata, &regs->cpdata); 273 273 qe_clrsetbits_be32(&regs->cpodr, mask1, sregs->cpodr & mask1); 274 274 275 275 spin_unlock_irqrestore(&qe_gc->lock, flags);
+12 -12
drivers/soc/fsl/qe/qe.c
··· 109 109 110 110 spin_lock_irqsave(&qe_lock, flags); 111 111 if (cmd == QE_RESET) { 112 - qe_iowrite32be((u32)(cmd | QE_CR_FLG), &qe_immr->cp.cecr); 112 + iowrite32be((u32)(cmd | QE_CR_FLG), &qe_immr->cp.cecr); 113 113 } else { 114 114 if (cmd == QE_ASSIGN_PAGE) { 115 115 /* Here device is the SNUM, not sub-block */ ··· 126 126 mcn_shift = QE_CR_MCN_NORMAL_SHIFT; 127 127 } 128 128 129 - qe_iowrite32be(cmd_input, &qe_immr->cp.cecdr); 130 - qe_iowrite32be((cmd | QE_CR_FLG | ((u32)device << dev_shift) | (u32)mcn_protocol << mcn_shift), 129 + iowrite32be(cmd_input, &qe_immr->cp.cecdr); 130 + iowrite32be((cmd | QE_CR_FLG | ((u32)device << dev_shift) | (u32)mcn_protocol << mcn_shift), 131 131 &qe_immr->cp.cecr); 132 132 } 133 133 134 134 /* wait for the QE_CR_FLG to clear */ 135 - ret = readx_poll_timeout_atomic(qe_ioread32be, &qe_immr->cp.cecr, val, 135 + ret = readx_poll_timeout_atomic(ioread32be, &qe_immr->cp.cecr, val, 136 136 (val & QE_CR_FLG) == 0, 0, 100); 137 137 /* On timeout, ret is -ETIMEDOUT, otherwise it will be 0. 
*/ 138 138 spin_unlock_irqrestore(&qe_lock, flags); ··· 231 231 tempval = ((divisor - 1) << QE_BRGC_DIVISOR_SHIFT) | 232 232 QE_BRGC_ENABLE | div16; 233 233 234 - qe_iowrite32be(tempval, &qe_immr->brg.brgc[brg - QE_BRG1]); 234 + iowrite32be(tempval, &qe_immr->brg.brgc[brg - QE_BRG1]); 235 235 236 236 return 0; 237 237 } ··· 375 375 return -ENOMEM; 376 376 } 377 377 378 - qe_iowrite32be((u32)sdma_buf_offset & QE_SDEBCR_BA_MASK, 378 + iowrite32be((u32)sdma_buf_offset & QE_SDEBCR_BA_MASK, 379 379 &sdma->sdebcr); 380 - qe_iowrite32be((QE_SDMR_GLB_1_MSK | (0x1 << QE_SDMR_CEN_SHIFT)), 380 + iowrite32be((QE_SDMR_GLB_1_MSK | (0x1 << QE_SDMR_CEN_SHIFT)), 381 381 &sdma->sdmr); 382 382 383 383 return 0; ··· 416 416 "uploading microcode '%s'\n", ucode->id); 417 417 418 418 /* Use auto-increment */ 419 - qe_iowrite32be(be32_to_cpu(ucode->iram_offset) | QE_IRAM_IADD_AIE | QE_IRAM_IADD_BADDR, 419 + iowrite32be(be32_to_cpu(ucode->iram_offset) | QE_IRAM_IADD_AIE | QE_IRAM_IADD_BADDR, 420 420 &qe_immr->iram.iadd); 421 421 422 422 for (i = 0; i < be32_to_cpu(ucode->count); i++) 423 - qe_iowrite32be(be32_to_cpu(code[i]), &qe_immr->iram.idata); 423 + iowrite32be(be32_to_cpu(code[i]), &qe_immr->iram.idata); 424 424 425 425 /* Set I-RAM Ready Register */ 426 - qe_iowrite32be(QE_IRAM_READY, &qe_immr->iram.iready); 426 + iowrite32be(QE_IRAM_READY, &qe_immr->iram.iready); 427 427 } 428 428 429 429 /* ··· 542 542 u32 trap = be32_to_cpu(ucode->traps[j]); 543 543 544 544 if (trap) 545 - qe_iowrite32be(trap, 545 + iowrite32be(trap, 546 546 &qe_immr->rsp[i].tibcr[j]); 547 547 } 548 548 549 549 /* Enable traps */ 550 - qe_iowrite32be(be32_to_cpu(ucode->eccr), 550 + iowrite32be(be32_to_cpu(ucode->eccr), 551 551 &qe_immr->rsp[i].eccr); 552 552 } 553 553
+1 -2
drivers/soc/fsl/qe/qe_common.c
··· 26 26 #include <soc/fsl/qe/qe.h> 27 27 28 28 static struct gen_pool *muram_pool; 29 - static spinlock_t cpm_muram_lock; 29 + static DEFINE_SPINLOCK(cpm_muram_lock); 30 30 static void __iomem *muram_vbase; 31 31 static phys_addr_t muram_pbase; 32 32 ··· 54 54 if (muram_pbase) 55 55 return 0; 56 56 57 - spin_lock_init(&cpm_muram_lock); 58 57 np = of_find_compatible_node(NULL, NULL, "fsl,cpm-muram-data"); 59 58 if (!np) { 60 59 /* try legacy bindings */
+2 -2
drivers/soc/fsl/qe/qe_ic.c
··· 222 222 223 223 static inline u32 qe_ic_read(__be32 __iomem *base, unsigned int reg) 224 224 { 225 - return qe_ioread32be(base + (reg >> 2)); 225 + return ioread32be(base + (reg >> 2)); 226 226 } 227 227 228 228 static inline void qe_ic_write(__be32 __iomem *base, unsigned int reg, 229 229 u32 value) 230 230 { 231 - qe_iowrite32be(value, base + (reg >> 2)); 231 + iowrite32be(value, base + (reg >> 2)); 232 232 } 233 233 234 234 static inline struct qe_ic *qe_ic_from_irq(unsigned int virq)
+18 -18
drivers/soc/fsl/qe/qe_io.c
··· 54 54 pin_mask1bit = (u32) (1 << (QE_PIO_PINS - (pin + 1))); 55 55 56 56 /* Set open drain, if required */ 57 - tmp_val = qe_ioread32be(&par_io->cpodr); 57 + tmp_val = ioread32be(&par_io->cpodr); 58 58 if (open_drain) 59 - qe_iowrite32be(pin_mask1bit | tmp_val, &par_io->cpodr); 59 + iowrite32be(pin_mask1bit | tmp_val, &par_io->cpodr); 60 60 else 61 - qe_iowrite32be(~pin_mask1bit & tmp_val, &par_io->cpodr); 61 + iowrite32be(~pin_mask1bit & tmp_val, &par_io->cpodr); 62 62 63 63 /* define direction */ 64 64 tmp_val = (pin > (QE_PIO_PINS / 2) - 1) ? 65 - qe_ioread32be(&par_io->cpdir2) : 66 - qe_ioread32be(&par_io->cpdir1); 65 + ioread32be(&par_io->cpdir2) : 66 + ioread32be(&par_io->cpdir1); 67 67 68 68 /* get all bits mask for 2 bit per port */ 69 69 pin_mask2bits = (u32) (0x3 << (QE_PIO_PINS - ··· 75 75 76 76 /* clear and set 2 bits mask */ 77 77 if (pin > (QE_PIO_PINS / 2) - 1) { 78 - qe_iowrite32be(~pin_mask2bits & tmp_val, &par_io->cpdir2); 78 + iowrite32be(~pin_mask2bits & tmp_val, &par_io->cpdir2); 79 79 tmp_val &= ~pin_mask2bits; 80 - qe_iowrite32be(new_mask2bits | tmp_val, &par_io->cpdir2); 80 + iowrite32be(new_mask2bits | tmp_val, &par_io->cpdir2); 81 81 } else { 82 - qe_iowrite32be(~pin_mask2bits & tmp_val, &par_io->cpdir1); 82 + iowrite32be(~pin_mask2bits & tmp_val, &par_io->cpdir1); 83 83 tmp_val &= ~pin_mask2bits; 84 - qe_iowrite32be(new_mask2bits | tmp_val, &par_io->cpdir1); 84 + iowrite32be(new_mask2bits | tmp_val, &par_io->cpdir1); 85 85 } 86 86 /* define pin assignment */ 87 87 tmp_val = (pin > (QE_PIO_PINS / 2) - 1) ? 
88 - qe_ioread32be(&par_io->cppar2) : 89 - qe_ioread32be(&par_io->cppar1); 88 + ioread32be(&par_io->cppar2) : 89 + ioread32be(&par_io->cppar1); 90 90 91 91 new_mask2bits = (u32) (assignment << (QE_PIO_PINS - 92 92 (pin % (QE_PIO_PINS / 2) + 1) * 2)); 93 93 /* clear and set 2 bits mask */ 94 94 if (pin > (QE_PIO_PINS / 2) - 1) { 95 - qe_iowrite32be(~pin_mask2bits & tmp_val, &par_io->cppar2); 95 + iowrite32be(~pin_mask2bits & tmp_val, &par_io->cppar2); 96 96 tmp_val &= ~pin_mask2bits; 97 - qe_iowrite32be(new_mask2bits | tmp_val, &par_io->cppar2); 97 + iowrite32be(new_mask2bits | tmp_val, &par_io->cppar2); 98 98 } else { 99 - qe_iowrite32be(~pin_mask2bits & tmp_val, &par_io->cppar1); 99 + iowrite32be(~pin_mask2bits & tmp_val, &par_io->cppar1); 100 100 tmp_val &= ~pin_mask2bits; 101 - qe_iowrite32be(new_mask2bits | tmp_val, &par_io->cppar1); 101 + iowrite32be(new_mask2bits | tmp_val, &par_io->cppar1); 102 102 } 103 103 } 104 104 EXPORT_SYMBOL(__par_io_config_pin); ··· 126 126 /* calculate pin location */ 127 127 pin_mask = (u32) (1 << (QE_PIO_PINS - 1 - pin)); 128 128 129 - tmp_val = qe_ioread32be(&par_io[port].cpdata); 129 + tmp_val = ioread32be(&par_io[port].cpdata); 130 130 131 131 if (val == 0) /* clear */ 132 - qe_iowrite32be(~pin_mask & tmp_val, &par_io[port].cpdata); 132 + iowrite32be(~pin_mask & tmp_val, &par_io[port].cpdata); 133 133 else /* set */ 134 - qe_iowrite32be(pin_mask | tmp_val, &par_io[port].cpdata); 134 + iowrite32be(pin_mask | tmp_val, &par_io[port].cpdata); 135 135 136 136 return 0; 137 137 }
+34 -34
drivers/soc/fsl/qe/ucc_fast.c
··· 29 29 printk(KERN_INFO "Base address: 0x%p\n", uccf->uf_regs); 30 30 31 31 printk(KERN_INFO "gumr : addr=0x%p, val=0x%08x\n", 32 - &uccf->uf_regs->gumr, qe_ioread32be(&uccf->uf_regs->gumr)); 32 + &uccf->uf_regs->gumr, ioread32be(&uccf->uf_regs->gumr)); 33 33 printk(KERN_INFO "upsmr : addr=0x%p, val=0x%08x\n", 34 - &uccf->uf_regs->upsmr, qe_ioread32be(&uccf->uf_regs->upsmr)); 34 + &uccf->uf_regs->upsmr, ioread32be(&uccf->uf_regs->upsmr)); 35 35 printk(KERN_INFO "utodr : addr=0x%p, val=0x%04x\n", 36 - &uccf->uf_regs->utodr, qe_ioread16be(&uccf->uf_regs->utodr)); 36 + &uccf->uf_regs->utodr, ioread16be(&uccf->uf_regs->utodr)); 37 37 printk(KERN_INFO "udsr : addr=0x%p, val=0x%04x\n", 38 - &uccf->uf_regs->udsr, qe_ioread16be(&uccf->uf_regs->udsr)); 38 + &uccf->uf_regs->udsr, ioread16be(&uccf->uf_regs->udsr)); 39 39 printk(KERN_INFO "ucce : addr=0x%p, val=0x%08x\n", 40 - &uccf->uf_regs->ucce, qe_ioread32be(&uccf->uf_regs->ucce)); 40 + &uccf->uf_regs->ucce, ioread32be(&uccf->uf_regs->ucce)); 41 41 printk(KERN_INFO "uccm : addr=0x%p, val=0x%08x\n", 42 - &uccf->uf_regs->uccm, qe_ioread32be(&uccf->uf_regs->uccm)); 42 + &uccf->uf_regs->uccm, ioread32be(&uccf->uf_regs->uccm)); 43 43 printk(KERN_INFO "uccs : addr=0x%p, val=0x%02x\n", 44 - &uccf->uf_regs->uccs, qe_ioread8(&uccf->uf_regs->uccs)); 44 + &uccf->uf_regs->uccs, ioread8(&uccf->uf_regs->uccs)); 45 45 printk(KERN_INFO "urfb : addr=0x%p, val=0x%08x\n", 46 - &uccf->uf_regs->urfb, qe_ioread32be(&uccf->uf_regs->urfb)); 46 + &uccf->uf_regs->urfb, ioread32be(&uccf->uf_regs->urfb)); 47 47 printk(KERN_INFO "urfs : addr=0x%p, val=0x%04x\n", 48 - &uccf->uf_regs->urfs, qe_ioread16be(&uccf->uf_regs->urfs)); 48 + &uccf->uf_regs->urfs, ioread16be(&uccf->uf_regs->urfs)); 49 49 printk(KERN_INFO "urfet : addr=0x%p, val=0x%04x\n", 50 - &uccf->uf_regs->urfet, qe_ioread16be(&uccf->uf_regs->urfet)); 50 + &uccf->uf_regs->urfet, ioread16be(&uccf->uf_regs->urfet)); 51 51 printk(KERN_INFO "urfset: addr=0x%p, val=0x%04x\n", 52 52 
&uccf->uf_regs->urfset, 53 - qe_ioread16be(&uccf->uf_regs->urfset)); 53 + ioread16be(&uccf->uf_regs->urfset)); 54 54 printk(KERN_INFO "utfb : addr=0x%p, val=0x%08x\n", 55 - &uccf->uf_regs->utfb, qe_ioread32be(&uccf->uf_regs->utfb)); 55 + &uccf->uf_regs->utfb, ioread32be(&uccf->uf_regs->utfb)); 56 56 printk(KERN_INFO "utfs : addr=0x%p, val=0x%04x\n", 57 - &uccf->uf_regs->utfs, qe_ioread16be(&uccf->uf_regs->utfs)); 57 + &uccf->uf_regs->utfs, ioread16be(&uccf->uf_regs->utfs)); 58 58 printk(KERN_INFO "utfet : addr=0x%p, val=0x%04x\n", 59 - &uccf->uf_regs->utfet, qe_ioread16be(&uccf->uf_regs->utfet)); 59 + &uccf->uf_regs->utfet, ioread16be(&uccf->uf_regs->utfet)); 60 60 printk(KERN_INFO "utftt : addr=0x%p, val=0x%04x\n", 61 - &uccf->uf_regs->utftt, qe_ioread16be(&uccf->uf_regs->utftt)); 61 + &uccf->uf_regs->utftt, ioread16be(&uccf->uf_regs->utftt)); 62 62 printk(KERN_INFO "utpt : addr=0x%p, val=0x%04x\n", 63 - &uccf->uf_regs->utpt, qe_ioread16be(&uccf->uf_regs->utpt)); 63 + &uccf->uf_regs->utpt, ioread16be(&uccf->uf_regs->utpt)); 64 64 printk(KERN_INFO "urtry : addr=0x%p, val=0x%08x\n", 65 - &uccf->uf_regs->urtry, qe_ioread32be(&uccf->uf_regs->urtry)); 65 + &uccf->uf_regs->urtry, ioread32be(&uccf->uf_regs->urtry)); 66 66 printk(KERN_INFO "guemr : addr=0x%p, val=0x%02x\n", 67 - &uccf->uf_regs->guemr, qe_ioread8(&uccf->uf_regs->guemr)); 67 + &uccf->uf_regs->guemr, ioread8(&uccf->uf_regs->guemr)); 68 68 } 69 69 EXPORT_SYMBOL(ucc_fast_dump_regs); 70 70 ··· 86 86 87 87 void ucc_fast_transmit_on_demand(struct ucc_fast_private * uccf) 88 88 { 89 - qe_iowrite16be(UCC_FAST_TOD, &uccf->uf_regs->utodr); 89 + iowrite16be(UCC_FAST_TOD, &uccf->uf_regs->utodr); 90 90 } 91 91 EXPORT_SYMBOL(ucc_fast_transmit_on_demand); 92 92 ··· 98 98 uf_regs = uccf->uf_regs; 99 99 100 100 /* Enable reception and/or transmission on this UCC. 
*/ 101 - gumr = qe_ioread32be(&uf_regs->gumr); 101 + gumr = ioread32be(&uf_regs->gumr); 102 102 if (mode & COMM_DIR_TX) { 103 103 gumr |= UCC_FAST_GUMR_ENT; 104 104 uccf->enabled_tx = 1; ··· 107 107 gumr |= UCC_FAST_GUMR_ENR; 108 108 uccf->enabled_rx = 1; 109 109 } 110 - qe_iowrite32be(gumr, &uf_regs->gumr); 110 + iowrite32be(gumr, &uf_regs->gumr); 111 111 } 112 112 EXPORT_SYMBOL(ucc_fast_enable); 113 113 ··· 119 119 uf_regs = uccf->uf_regs; 120 120 121 121 /* Disable reception and/or transmission on this UCC. */ 122 - gumr = qe_ioread32be(&uf_regs->gumr); 122 + gumr = ioread32be(&uf_regs->gumr); 123 123 if (mode & COMM_DIR_TX) { 124 124 gumr &= ~UCC_FAST_GUMR_ENT; 125 125 uccf->enabled_tx = 0; ··· 128 128 gumr &= ~UCC_FAST_GUMR_ENR; 129 129 uccf->enabled_rx = 0; 130 130 } 131 - qe_iowrite32be(gumr, &uf_regs->gumr); 131 + iowrite32be(gumr, &uf_regs->gumr); 132 132 } 133 133 EXPORT_SYMBOL(ucc_fast_disable); 134 134 ··· 262 262 gumr |= uf_info->tenc; 263 263 gumr |= uf_info->tcrc; 264 264 gumr |= uf_info->mode; 265 - qe_iowrite32be(gumr, &uf_regs->gumr); 265 + iowrite32be(gumr, &uf_regs->gumr); 266 266 267 267 /* Allocate memory for Tx Virtual Fifo */ 268 268 uccf->ucc_fast_tx_virtual_fifo_base_offset = ··· 287 287 } 288 288 289 289 /* Set Virtual Fifo registers */ 290 - qe_iowrite16be(uf_info->urfs, &uf_regs->urfs); 291 - qe_iowrite16be(uf_info->urfet, &uf_regs->urfet); 292 - qe_iowrite16be(uf_info->urfset, &uf_regs->urfset); 293 - qe_iowrite16be(uf_info->utfs, &uf_regs->utfs); 294 - qe_iowrite16be(uf_info->utfet, &uf_regs->utfet); 295 - qe_iowrite16be(uf_info->utftt, &uf_regs->utftt); 290 + iowrite16be(uf_info->urfs, &uf_regs->urfs); 291 + iowrite16be(uf_info->urfet, &uf_regs->urfet); 292 + iowrite16be(uf_info->urfset, &uf_regs->urfset); 293 + iowrite16be(uf_info->utfs, &uf_regs->utfs); 294 + iowrite16be(uf_info->utfet, &uf_regs->utfet); 295 + iowrite16be(uf_info->utftt, &uf_regs->utftt); 296 296 /* utfb, urfb are offsets from MURAM base */ 297 - 
qe_iowrite32be(uccf->ucc_fast_tx_virtual_fifo_base_offset, 297 + iowrite32be(uccf->ucc_fast_tx_virtual_fifo_base_offset, 298 298 &uf_regs->utfb); 299 - qe_iowrite32be(uccf->ucc_fast_rx_virtual_fifo_base_offset, 299 + iowrite32be(uccf->ucc_fast_rx_virtual_fifo_base_offset, 300 300 &uf_regs->urfb); 301 301 302 302 /* Mux clocking */ ··· 365 365 } 366 366 367 367 /* Set interrupt mask register at UCC level. */ 368 - qe_iowrite32be(uf_info->uccm_mask, &uf_regs->uccm); 368 + iowrite32be(uf_info->uccm_mask, &uf_regs->uccm); 369 369 370 370 /* First, clear anything pending at UCC level, 371 371 * otherwise, old garbage may come through 372 372 * as soon as the dam is opened. */ 373 373 374 374 /* Writing '1' clears */ 375 - qe_iowrite32be(0xffffffff, &uf_regs->ucce); 375 + iowrite32be(0xffffffff, &uf_regs->ucce); 376 376 377 377 *uccf_ret = uccf; 378 378 return 0;
+21 -21
drivers/soc/fsl/qe/ucc_slow.c
··· 78 78 us_regs = uccs->us_regs; 79 79 80 80 /* Enable reception and/or transmission on this UCC. */ 81 - gumr_l = qe_ioread32be(&us_regs->gumr_l); 81 + gumr_l = ioread32be(&us_regs->gumr_l); 82 82 if (mode & COMM_DIR_TX) { 83 83 gumr_l |= UCC_SLOW_GUMR_L_ENT; 84 84 uccs->enabled_tx = 1; ··· 87 87 gumr_l |= UCC_SLOW_GUMR_L_ENR; 88 88 uccs->enabled_rx = 1; 89 89 } 90 - qe_iowrite32be(gumr_l, &us_regs->gumr_l); 90 + iowrite32be(gumr_l, &us_regs->gumr_l); 91 91 } 92 92 EXPORT_SYMBOL(ucc_slow_enable); 93 93 ··· 99 99 us_regs = uccs->us_regs; 100 100 101 101 /* Disable reception and/or transmission on this UCC. */ 102 - gumr_l = qe_ioread32be(&us_regs->gumr_l); 102 + gumr_l = ioread32be(&us_regs->gumr_l); 103 103 if (mode & COMM_DIR_TX) { 104 104 gumr_l &= ~UCC_SLOW_GUMR_L_ENT; 105 105 uccs->enabled_tx = 0; ··· 108 108 gumr_l &= ~UCC_SLOW_GUMR_L_ENR; 109 109 uccs->enabled_rx = 0; 110 110 } 111 - qe_iowrite32be(gumr_l, &us_regs->gumr_l); 111 + iowrite32be(gumr_l, &us_regs->gumr_l); 112 112 } 113 113 EXPORT_SYMBOL(ucc_slow_disable); 114 114 ··· 194 194 return ret; 195 195 } 196 196 197 - qe_iowrite16be(us_info->max_rx_buf_length, &uccs->us_pram->mrblr); 197 + iowrite16be(us_info->max_rx_buf_length, &uccs->us_pram->mrblr); 198 198 199 199 INIT_LIST_HEAD(&uccs->confQ); 200 200 ··· 222 222 bd = uccs->confBd = uccs->tx_bd = qe_muram_addr(uccs->tx_base_offset); 223 223 for (i = 0; i < us_info->tx_bd_ring_len - 1; i++) { 224 224 /* clear bd buffer */ 225 - qe_iowrite32be(0, &bd->buf); 225 + iowrite32be(0, &bd->buf); 226 226 /* set bd status and length */ 227 - qe_iowrite32be(0, (u32 __iomem *)bd); 227 + iowrite32be(0, (u32 __iomem *)bd); 228 228 bd++; 229 229 } 230 230 /* for last BD set Wrap bit */ 231 - qe_iowrite32be(0, &bd->buf); 232 - qe_iowrite32be(T_W, (u32 __iomem *)bd); 231 + iowrite32be(0, &bd->buf); 232 + iowrite32be(T_W, (u32 __iomem *)bd); 233 233 234 234 /* Init Rx bds */ 235 235 bd = uccs->rx_bd = qe_muram_addr(uccs->rx_base_offset); 236 236 for (i = 0; i < 
us_info->rx_bd_ring_len - 1; i++) { 237 237 /* set bd status and length */ 238 - qe_iowrite32be(0, (u32 __iomem *)bd); 238 + iowrite32be(0, (u32 __iomem *)bd); 239 239 /* clear bd buffer */ 240 - qe_iowrite32be(0, &bd->buf); 240 + iowrite32be(0, &bd->buf); 241 241 bd++; 242 242 } 243 243 /* for last BD set Wrap bit */ 244 - qe_iowrite32be(R_W, (u32 __iomem *)bd); 245 - qe_iowrite32be(0, &bd->buf); 244 + iowrite32be(R_W, (u32 __iomem *)bd); 245 + iowrite32be(0, &bd->buf); 246 246 247 247 /* Set GUMR (For more details see the hardware spec.). */ 248 248 /* gumr_h */ ··· 263 263 gumr |= UCC_SLOW_GUMR_H_TXSY; 264 264 if (us_info->rtsm) 265 265 gumr |= UCC_SLOW_GUMR_H_RTSM; 266 - qe_iowrite32be(gumr, &us_regs->gumr_h); 266 + iowrite32be(gumr, &us_regs->gumr_h); 267 267 268 268 /* gumr_l */ 269 269 gumr = (u32)us_info->tdcr | (u32)us_info->rdcr | (u32)us_info->tenc | ··· 276 276 gumr |= UCC_SLOW_GUMR_L_TINV; 277 277 if (us_info->tend) 278 278 gumr |= UCC_SLOW_GUMR_L_TEND; 279 - qe_iowrite32be(gumr, &us_regs->gumr_l); 279 + iowrite32be(gumr, &us_regs->gumr_l); 280 280 281 281 /* Function code registers */ 282 282 283 283 /* if the data is in cachable memory, the 'global' */ 284 284 /* in the function code should be set. */ 285 - qe_iowrite8(UCC_BMR_BO_BE, &uccs->us_pram->tbmr); 286 - qe_iowrite8(UCC_BMR_BO_BE, &uccs->us_pram->rbmr); 285 + iowrite8(UCC_BMR_BO_BE, &uccs->us_pram->tbmr); 286 + iowrite8(UCC_BMR_BO_BE, &uccs->us_pram->rbmr); 287 287 288 288 /* rbase, tbase are offsets from MURAM base */ 289 - qe_iowrite16be(uccs->rx_base_offset, &uccs->us_pram->rbase); 290 - qe_iowrite16be(uccs->tx_base_offset, &uccs->us_pram->tbase); 289 + iowrite16be(uccs->rx_base_offset, &uccs->us_pram->rbase); 290 + iowrite16be(uccs->tx_base_offset, &uccs->us_pram->tbase); 291 291 292 292 /* Mux clocking */ 293 293 /* Grant Support */ ··· 317 317 } 318 318 319 319 /* Set interrupt mask register at UCC level. 
*/ 320 - qe_iowrite16be(us_info->uccm_mask, &us_regs->uccm); 320 + iowrite16be(us_info->uccm_mask, &us_regs->uccm); 321 321 322 322 /* First, clear anything pending at UCC level, 323 323 * otherwise, old garbage may come through 324 324 * as soon as the dam is opened. */ 325 325 326 326 /* Writing '1' clears */ 327 - qe_iowrite16be(0xffff, &us_regs->ucce); 327 + iowrite16be(0xffff, &us_regs->ucce); 328 328 329 329 /* Issue QE Init command */ 330 330 if (us_info->init_tx && us_info->init_rx)
+22 -2
drivers/soc/fsl/rcpm.c
··· 13 13 #include <linux/slab.h> 14 14 #include <linux/suspend.h> 15 15 #include <linux/kernel.h> 16 + #include <linux/acpi.h> 16 17 17 18 #define RCPM_WAKEUP_CELL_MAX_SIZE 7 18 19 ··· 79 78 "fsl,rcpm-wakeup", value, 80 79 rcpm->wakeup_cells + 1); 81 80 82 - /* Wakeup source should refer to current rcpm device */ 83 - if (ret || (np->phandle != value[0])) 81 + if (ret) 84 82 continue; 83 + 84 + /* 85 + * For DT mode, would handle devices with "fsl,rcpm-wakeup" 86 + * pointing to the current RCPM node. 87 + * 88 + * For ACPI mode, currently we assume there is only one 89 + * RCPM controller existing. 90 + */ 91 + if (is_of_node(dev->fwnode)) 92 + if (np->phandle != value[0]) 93 + continue; 85 94 86 95 /* Property "#fsl,rcpm-wakeup-cells" of rcpm node defines the 87 96 * number of IPPDEXPCR register cells, and "fsl,rcpm-wakeup" ··· 183 172 }; 184 173 MODULE_DEVICE_TABLE(of, rcpm_of_match); 185 174 175 + #ifdef CONFIG_ACPI 176 + static const struct acpi_device_id rcpm_acpi_ids[] = { 177 + {"NXP0015",}, 178 + { } 179 + }; 180 + MODULE_DEVICE_TABLE(acpi, rcpm_acpi_ids); 181 + #endif 182 + 186 183 static struct platform_driver rcpm_driver = { 187 184 .driver = { 188 185 .name = "rcpm", 189 186 .of_match_table = rcpm_of_match, 187 + .acpi_match_table = ACPI_PTR(rcpm_acpi_ids), 190 188 .pm = &rcpm_pm_ops, 191 189 }, 192 190 .probe = rcpm_probe,
drivers/soc/imx/soc-imx.c (+12)
··· 13 13 #include <soc/imx/cpu.h> 14 14 #include <soc/imx/revision.h> 15 15 16 + #define IIM_UID 0x820 17 + 16 18 #define OCOTP_UID_H 0x420 17 19 #define OCOTP_UID_L 0x410 18 20 ··· 34 32 u64 soc_uid = 0; 35 33 u32 val; 36 34 int ret; 35 + int i; 37 36 38 37 if (of_machine_is_compatible("fsl,ls1021a")) 39 38 return 0; ··· 71 68 soc_id = "i.MX35"; 72 69 break; 73 70 case MXC_CPU_MX51: 71 + ocotp_compat = "fsl,imx51-iim"; 74 72 soc_id = "i.MX51"; 75 73 break; 76 74 case MXC_CPU_MX53: 75 + ocotp_compat = "fsl,imx53-iim"; 77 76 soc_id = "i.MX53"; 78 77 break; 79 78 case MXC_CPU_IMX6SL: ··· 158 153 regmap_read(ocotp, OCOTP_ULP_UID_1, &val); 159 154 soc_uid <<= 16; 160 155 soc_uid |= val & 0xffff; 156 + } else if (__mxc_cpu_type == MXC_CPU_MX51 || 157 + __mxc_cpu_type == MXC_CPU_MX53) { 158 + for (i=0; i < 8; i++) { 159 + regmap_read(ocotp, IIM_UID + i*4, &val); 160 + soc_uid <<= 8; 161 + soc_uid |= (val & 0xff); 162 + } 161 163 } else { 162 164 regmap_read(ocotp, OCOTP_UID_H, &val); 163 165 soc_uid = val;
drivers/soc/mediatek/mt8167-mmsys.h (+35)
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + 3 + #ifndef __SOC_MEDIATEK_MT8167_MMSYS_H 4 + #define __SOC_MEDIATEK_MT8167_MMSYS_H 5 + 6 + #define MT8167_DISP_REG_CONFIG_DISP_OVL0_MOUT_EN 0x030 7 + #define MT8167_DISP_REG_CONFIG_DISP_DITHER_MOUT_EN 0x038 8 + #define MT8167_DISP_REG_CONFIG_DISP_COLOR0_SEL_IN 0x058 9 + #define MT8167_DISP_REG_CONFIG_DISP_DSI0_SEL_IN 0x064 10 + #define MT8167_DISP_REG_CONFIG_DISP_RDMA0_SOUT_SEL_IN 0x06c 11 + 12 + #define MT8167_DITHER_MOUT_EN_RDMA0 0x1 13 + #define MT8167_RDMA0_SOUT_DSI0 0x2 14 + #define MT8167_DSI0_SEL_IN_RDMA0 0x1 15 + 16 + static const struct mtk_mmsys_routes mt8167_mmsys_routing_table[] = { 17 + { 18 + DDP_COMPONENT_OVL0, DDP_COMPONENT_COLOR0, 19 + MT8167_DISP_REG_CONFIG_DISP_OVL0_MOUT_EN, OVL0_MOUT_EN_COLOR0, 20 + }, { 21 + DDP_COMPONENT_DITHER, DDP_COMPONENT_RDMA0, 22 + MT8167_DISP_REG_CONFIG_DISP_DITHER_MOUT_EN, MT8167_DITHER_MOUT_EN_RDMA0 23 + }, { 24 + DDP_COMPONENT_OVL0, DDP_COMPONENT_COLOR0, 25 + MT8167_DISP_REG_CONFIG_DISP_COLOR0_SEL_IN, COLOR0_SEL_IN_OVL0 26 + }, { 27 + DDP_COMPONENT_RDMA0, DDP_COMPONENT_DSI0, 28 + MT8167_DISP_REG_CONFIG_DISP_DSI0_SEL_IN, MT8167_DSI0_SEL_IN_RDMA0 29 + }, { 30 + DDP_COMPONENT_RDMA0, DDP_COMPONENT_DSI0, 31 + MT8167_DISP_REG_CONFIG_DISP_RDMA0_SOUT_SEL_IN, MT8167_RDMA0_SOUT_DSI0 32 + }, 33 + }; 34 + 35 + #endif /* __SOC_MEDIATEK_MT8167_MMSYS_H */
drivers/soc/mediatek/mt8167-pm-domains.h (+7)
··· 15 15 16 16 static const struct scpsys_domain_data scpsys_domain_data_mt8167[] = { 17 17 [MT8167_POWER_DOMAIN_MM] = { 18 + .name = "mm", 18 19 .sta_mask = PWR_STATUS_DISP, 19 20 .ctl_offs = SPM_DIS_PWR_CON, 20 21 .sram_pdn_bits = GENMASK(11, 8), ··· 27 26 .caps = MTK_SCPD_ACTIVE_WAKEUP, 28 27 }, 29 28 [MT8167_POWER_DOMAIN_VDEC] = { 29 + .name = "vdec", 30 30 .sta_mask = PWR_STATUS_VDEC, 31 31 .ctl_offs = SPM_VDE_PWR_CON, 32 32 .sram_pdn_bits = GENMASK(8, 8), ··· 35 33 .caps = MTK_SCPD_ACTIVE_WAKEUP, 36 34 }, 37 35 [MT8167_POWER_DOMAIN_ISP] = { 36 + .name = "isp", 38 37 .sta_mask = PWR_STATUS_ISP, 39 38 .ctl_offs = SPM_ISP_PWR_CON, 40 39 .sram_pdn_bits = GENMASK(11, 8), ··· 43 40 .caps = MTK_SCPD_ACTIVE_WAKEUP, 44 41 }, 45 42 [MT8167_POWER_DOMAIN_MFG_ASYNC] = { 43 + .name = "mfg_async", 46 44 .sta_mask = MT8167_PWR_STATUS_MFG_ASYNC, 47 45 .ctl_offs = SPM_MFG_ASYNC_PWR_CON, 48 46 .sram_pdn_bits = 0, ··· 54 50 }, 55 51 }, 56 52 [MT8167_POWER_DOMAIN_MFG_2D] = { 53 + .name = "mfg_2d", 57 54 .sta_mask = MT8167_PWR_STATUS_MFG_2D, 58 55 .ctl_offs = SPM_MFG_2D_PWR_CON, 59 56 .sram_pdn_bits = GENMASK(11, 8), 60 57 .sram_pdn_ack_bits = GENMASK(15, 12), 61 58 }, 62 59 [MT8167_POWER_DOMAIN_MFG] = { 60 + .name = "mfg", 63 61 .sta_mask = PWR_STATUS_MFG, 64 62 .ctl_offs = SPM_MFG_PWR_CON, 65 63 .sram_pdn_bits = GENMASK(11, 8), 66 64 .sram_pdn_ack_bits = GENMASK(15, 12), 67 65 }, 68 66 [MT8167_POWER_DOMAIN_CONN] = { 67 + .name = "conn", 69 68 .sta_mask = PWR_STATUS_CONN, 70 69 .ctl_offs = SPM_CONN_PWR_CON, 71 70 .sram_pdn_bits = GENMASK(8, 8),
drivers/soc/mediatek/mt8173-pm-domains.h (+10)
··· 12 12 13 13 static const struct scpsys_domain_data scpsys_domain_data_mt8173[] = { 14 14 [MT8173_POWER_DOMAIN_VDEC] = { 15 + .name = "vdec", 15 16 .sta_mask = PWR_STATUS_VDEC, 16 17 .ctl_offs = SPM_VDE_PWR_CON, 17 18 .sram_pdn_bits = GENMASK(11, 8), 18 19 .sram_pdn_ack_bits = GENMASK(12, 12), 19 20 }, 20 21 [MT8173_POWER_DOMAIN_VENC] = { 22 + .name = "venc", 21 23 .sta_mask = PWR_STATUS_VENC, 22 24 .ctl_offs = SPM_VEN_PWR_CON, 23 25 .sram_pdn_bits = GENMASK(11, 8), 24 26 .sram_pdn_ack_bits = GENMASK(15, 12), 25 27 }, 26 28 [MT8173_POWER_DOMAIN_ISP] = { 29 + .name = "isp", 27 30 .sta_mask = PWR_STATUS_ISP, 28 31 .ctl_offs = SPM_ISP_PWR_CON, 29 32 .sram_pdn_bits = GENMASK(11, 8), 30 33 .sram_pdn_ack_bits = GENMASK(13, 12), 31 34 }, 32 35 [MT8173_POWER_DOMAIN_MM] = { 36 + .name = "mm", 33 37 .sta_mask = PWR_STATUS_DISP, 34 38 .ctl_offs = SPM_DIS_PWR_CON, 35 39 .sram_pdn_bits = GENMASK(11, 8), ··· 44 40 }, 45 41 }, 46 42 [MT8173_POWER_DOMAIN_VENC_LT] = { 43 + .name = "venc_lt", 47 44 .sta_mask = PWR_STATUS_VENC_LT, 48 45 .ctl_offs = SPM_VEN2_PWR_CON, 49 46 .sram_pdn_bits = GENMASK(11, 8), 50 47 .sram_pdn_ack_bits = GENMASK(15, 12), 51 48 }, 52 49 [MT8173_POWER_DOMAIN_AUDIO] = { 50 + .name = "audio", 53 51 .sta_mask = PWR_STATUS_AUDIO, 54 52 .ctl_offs = SPM_AUDIO_PWR_CON, 55 53 .sram_pdn_bits = GENMASK(11, 8), 56 54 .sram_pdn_ack_bits = GENMASK(15, 12), 57 55 }, 58 56 [MT8173_POWER_DOMAIN_USB] = { 57 + .name = "usb", 59 58 .sta_mask = PWR_STATUS_USB, 60 59 .ctl_offs = SPM_USB_PWR_CON, 61 60 .sram_pdn_bits = GENMASK(11, 8), ··· 66 59 .caps = MTK_SCPD_ACTIVE_WAKEUP, 67 60 }, 68 61 [MT8173_POWER_DOMAIN_MFG_ASYNC] = { 62 + .name = "mfg_async", 69 63 .sta_mask = PWR_STATUS_MFG_ASYNC, 70 64 .ctl_offs = SPM_MFG_ASYNC_PWR_CON, 71 65 .sram_pdn_bits = GENMASK(11, 8), 72 66 .sram_pdn_ack_bits = 0, 73 67 }, 74 68 [MT8173_POWER_DOMAIN_MFG_2D] = { 69 + .name = "mfg_2d", 75 70 .sta_mask = PWR_STATUS_MFG_2D, 76 71 .ctl_offs = SPM_MFG_2D_PWR_CON, 77 72 .sram_pdn_bits = GENMASK(11, 
8), 78 73 .sram_pdn_ack_bits = GENMASK(13, 12), 79 74 }, 80 75 [MT8173_POWER_DOMAIN_MFG] = { 76 + .name = "mfg", 81 77 .sta_mask = PWR_STATUS_MFG, 82 78 .ctl_offs = SPM_MFG_PWR_CON, 83 79 .sram_pdn_bits = GENMASK(13, 8),
drivers/soc/mediatek/mt8183-mmsys.h (+54)
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + 3 + #ifndef __SOC_MEDIATEK_MT8183_MMSYS_H 4 + #define __SOC_MEDIATEK_MT8183_MMSYS_H 5 + 6 + #define MT8183_DISP_OVL0_MOUT_EN 0xf00 7 + #define MT8183_DISP_OVL0_2L_MOUT_EN 0xf04 8 + #define MT8183_DISP_OVL1_2L_MOUT_EN 0xf08 9 + #define MT8183_DISP_DITHER0_MOUT_EN 0xf0c 10 + #define MT8183_DISP_PATH0_SEL_IN 0xf24 11 + #define MT8183_DISP_DSI0_SEL_IN 0xf2c 12 + #define MT8183_DISP_DPI0_SEL_IN 0xf30 13 + #define MT8183_DISP_RDMA0_SOUT_SEL_IN 0xf50 14 + #define MT8183_DISP_RDMA1_SOUT_SEL_IN 0xf54 15 + 16 + #define MT8183_OVL0_MOUT_EN_OVL0_2L BIT(4) 17 + #define MT8183_OVL0_2L_MOUT_EN_DISP_PATH0 BIT(0) 18 + #define MT8183_OVL1_2L_MOUT_EN_RDMA1 BIT(4) 19 + #define MT8183_DITHER0_MOUT_IN_DSI0 BIT(0) 20 + #define MT8183_DISP_PATH0_SEL_IN_OVL0_2L 0x1 21 + #define MT8183_DSI0_SEL_IN_RDMA0 0x1 22 + #define MT8183_DSI0_SEL_IN_RDMA1 0x3 23 + #define MT8183_DPI0_SEL_IN_RDMA0 0x1 24 + #define MT8183_DPI0_SEL_IN_RDMA1 0x2 25 + #define MT8183_RDMA0_SOUT_COLOR0 0x1 26 + #define MT8183_RDMA1_SOUT_DSI0 0x1 27 + 28 + static const struct mtk_mmsys_routes mmsys_mt8183_routing_table[] = { 29 + { 30 + DDP_COMPONENT_OVL0, DDP_COMPONENT_OVL_2L0, 31 + MT8183_DISP_OVL0_MOUT_EN, MT8183_OVL0_MOUT_EN_OVL0_2L 32 + }, { 33 + DDP_COMPONENT_OVL_2L0, DDP_COMPONENT_RDMA0, 34 + MT8183_DISP_OVL0_2L_MOUT_EN, MT8183_OVL0_2L_MOUT_EN_DISP_PATH0 35 + }, { 36 + DDP_COMPONENT_OVL_2L1, DDP_COMPONENT_RDMA1, 37 + MT8183_DISP_OVL1_2L_MOUT_EN, MT8183_OVL1_2L_MOUT_EN_RDMA1 38 + }, { 39 + DDP_COMPONENT_DITHER, DDP_COMPONENT_DSI0, 40 + MT8183_DISP_DITHER0_MOUT_EN, MT8183_DITHER0_MOUT_IN_DSI0 41 + }, { 42 + DDP_COMPONENT_OVL_2L0, DDP_COMPONENT_RDMA0, 43 + MT8183_DISP_PATH0_SEL_IN, MT8183_DISP_PATH0_SEL_IN_OVL0_2L 44 + }, { 45 + DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI0, 46 + MT8183_DISP_DPI0_SEL_IN, MT8183_DPI0_SEL_IN_RDMA1 47 + }, { 48 + DDP_COMPONENT_RDMA0, DDP_COMPONENT_COLOR0, 49 + MT8183_DISP_RDMA0_SOUT_SEL_IN, MT8183_RDMA0_SOUT_COLOR0 50 + } 51 + }; 52 + 53 + 
#endif /* __SOC_MEDIATEK_MT8183_MMSYS_H */ 54 +
drivers/soc/mediatek/mt8183-pm-domains.h (+15)
··· 12 12 13 13 static const struct scpsys_domain_data scpsys_domain_data_mt8183[] = { 14 14 [MT8183_POWER_DOMAIN_AUDIO] = { 15 + .name = "audio", 15 16 .sta_mask = PWR_STATUS_AUDIO, 16 17 .ctl_offs = 0x0314, 17 18 .sram_pdn_bits = GENMASK(11, 8), 18 19 .sram_pdn_ack_bits = GENMASK(15, 12), 19 20 }, 20 21 [MT8183_POWER_DOMAIN_CONN] = { 22 + .name = "conn", 21 23 .sta_mask = PWR_STATUS_CONN, 22 24 .ctl_offs = 0x032c, 23 25 .sram_pdn_bits = 0, ··· 30 28 }, 31 29 }, 32 30 [MT8183_POWER_DOMAIN_MFG_ASYNC] = { 31 + .name = "mfg_async", 33 32 .sta_mask = PWR_STATUS_MFG_ASYNC, 34 33 .ctl_offs = 0x0334, 35 34 .sram_pdn_bits = 0, 36 35 .sram_pdn_ack_bits = 0, 37 36 }, 38 37 [MT8183_POWER_DOMAIN_MFG] = { 38 + .name = "mfg", 39 39 .sta_mask = PWR_STATUS_MFG, 40 40 .ctl_offs = 0x0338, 41 41 .sram_pdn_bits = GENMASK(8, 8), ··· 45 41 .caps = MTK_SCPD_DOMAIN_SUPPLY, 46 42 }, 47 43 [MT8183_POWER_DOMAIN_MFG_CORE0] = { 44 + .name = "mfg_core0", 48 45 .sta_mask = BIT(7), 49 46 .ctl_offs = 0x034c, 50 47 .sram_pdn_bits = GENMASK(8, 8), 51 48 .sram_pdn_ack_bits = GENMASK(12, 12), 52 49 }, 53 50 [MT8183_POWER_DOMAIN_MFG_CORE1] = { 51 + .name = "mfg_core1", 54 52 .sta_mask = BIT(20), 55 53 .ctl_offs = 0x0310, 56 54 .sram_pdn_bits = GENMASK(8, 8), 57 55 .sram_pdn_ack_bits = GENMASK(12, 12), 58 56 }, 59 57 [MT8183_POWER_DOMAIN_MFG_2D] = { 58 + .name = "mfg_2d", 60 59 .sta_mask = PWR_STATUS_MFG_2D, 61 60 .ctl_offs = 0x0348, 62 61 .sram_pdn_bits = GENMASK(8, 8), ··· 72 65 }, 73 66 }, 74 67 [MT8183_POWER_DOMAIN_DISP] = { 68 + .name = "disp", 75 69 .sta_mask = PWR_STATUS_DISP, 76 70 .ctl_offs = 0x030c, 77 71 .sram_pdn_bits = GENMASK(8, 8), ··· 91 83 }, 92 84 }, 93 85 [MT8183_POWER_DOMAIN_CAM] = { 86 + .name = "cam", 94 87 .sta_mask = BIT(25), 95 88 .ctl_offs = 0x0344, 96 89 .sram_pdn_bits = GENMASK(9, 8), ··· 114 105 }, 115 106 }, 116 107 [MT8183_POWER_DOMAIN_ISP] = { 108 + .name = "isp", 117 109 .sta_mask = PWR_STATUS_ISP, 118 110 .ctl_offs = 0x0308, 119 111 .sram_pdn_bits = GENMASK(9, 8), ··· 
137 127 }, 138 128 }, 139 129 [MT8183_POWER_DOMAIN_VDEC] = { 130 + .name = "vdec", 140 131 .sta_mask = BIT(31), 141 132 .ctl_offs = 0x0300, 142 133 .sram_pdn_bits = GENMASK(8, 8), ··· 150 139 }, 151 140 }, 152 141 [MT8183_POWER_DOMAIN_VENC] = { 142 + .name = "venc", 153 143 .sta_mask = PWR_STATUS_VENC, 154 144 .ctl_offs = 0x0304, 155 145 .sram_pdn_bits = GENMASK(11, 8), ··· 163 151 }, 164 152 }, 165 153 [MT8183_POWER_DOMAIN_VPU_TOP] = { 154 + .name = "vpu_top", 166 155 .sta_mask = BIT(26), 167 156 .ctl_offs = 0x0324, 168 157 .sram_pdn_bits = GENMASK(8, 8), ··· 190 177 }, 191 178 }, 192 179 [MT8183_POWER_DOMAIN_VPU_CORE0] = { 180 + .name = "vpu_core0", 193 181 .sta_mask = BIT(27), 194 182 .ctl_offs = 0x33c, 195 183 .sram_pdn_bits = GENMASK(11, 8), ··· 208 194 .caps = MTK_SCPD_SRAM_ISO, 209 195 }, 210 196 [MT8183_POWER_DOMAIN_VPU_CORE1] = { 197 + .name = "vpu_core1", 211 198 .sta_mask = BIT(28), 212 199 .ctl_offs = 0x0340, 213 200 .sram_pdn_bits = GENMASK(11, 8),
drivers/soc/mediatek/mt8192-pm-domains.h (+21)
··· 12 12 13 13 static const struct scpsys_domain_data scpsys_domain_data_mt8192[] = { 14 14 [MT8192_POWER_DOMAIN_AUDIO] = { 15 + .name = "audio", 15 16 .sta_mask = BIT(21), 16 17 .ctl_offs = 0x0354, 17 18 .sram_pdn_bits = GENMASK(8, 8), ··· 25 24 }, 26 25 }, 27 26 [MT8192_POWER_DOMAIN_CONN] = { 27 + .name = "conn", 28 28 .sta_mask = PWR_STATUS_CONN, 29 29 .ctl_offs = 0x0304, 30 30 .sram_pdn_bits = 0, ··· 47 45 .caps = MTK_SCPD_KEEP_DEFAULT_OFF, 48 46 }, 49 47 [MT8192_POWER_DOMAIN_MFG0] = { 48 + .name = "mfg0", 50 49 .sta_mask = BIT(2), 51 50 .ctl_offs = 0x0308, 52 51 .sram_pdn_bits = GENMASK(8, 8), 53 52 .sram_pdn_ack_bits = GENMASK(12, 12), 54 53 }, 55 54 [MT8192_POWER_DOMAIN_MFG1] = { 55 + .name = "mfg1", 56 56 .sta_mask = BIT(3), 57 57 .ctl_offs = 0x030c, 58 58 .sram_pdn_bits = GENMASK(8, 8), ··· 79 75 }, 80 76 }, 81 77 [MT8192_POWER_DOMAIN_MFG2] = { 78 + .name = "mfg2", 82 79 .sta_mask = BIT(4), 83 80 .ctl_offs = 0x0310, 84 81 .sram_pdn_bits = GENMASK(8, 8), 85 82 .sram_pdn_ack_bits = GENMASK(12, 12), 86 83 }, 87 84 [MT8192_POWER_DOMAIN_MFG3] = { 85 + .name = "mfg3", 88 86 .sta_mask = BIT(5), 89 87 .ctl_offs = 0x0314, 90 88 .sram_pdn_bits = GENMASK(8, 8), 91 89 .sram_pdn_ack_bits = GENMASK(12, 12), 92 90 }, 93 91 [MT8192_POWER_DOMAIN_MFG4] = { 92 + .name = "mfg4", 94 93 .sta_mask = BIT(6), 95 94 .ctl_offs = 0x0318, 96 95 .sram_pdn_bits = GENMASK(8, 8), 97 96 .sram_pdn_ack_bits = GENMASK(12, 12), 98 97 }, 99 98 [MT8192_POWER_DOMAIN_MFG5] = { 99 + .name = "mfg5", 100 100 .sta_mask = BIT(7), 101 101 .ctl_offs = 0x031c, 102 102 .sram_pdn_bits = GENMASK(8, 8), 103 103 .sram_pdn_ack_bits = GENMASK(12, 12), 104 104 }, 105 105 [MT8192_POWER_DOMAIN_MFG6] = { 106 + .name = "mfg6", 106 107 .sta_mask = BIT(8), 107 108 .ctl_offs = 0x0320, 108 109 .sram_pdn_bits = GENMASK(8, 8), 109 110 .sram_pdn_ack_bits = GENMASK(12, 12), 110 111 }, 111 112 [MT8192_POWER_DOMAIN_DISP] = { 113 + .name = "disp", 112 114 .sta_mask = BIT(20), 113 115 .ctl_offs = 0x0350, 114 116 .sram_pdn_bits 
= GENMASK(8, 8), ··· 143 133 }, 144 134 }, 145 135 [MT8192_POWER_DOMAIN_IPE] = { 136 + .name = "ipe", 146 137 .sta_mask = BIT(14), 147 138 .ctl_offs = 0x0338, 148 139 .sram_pdn_bits = GENMASK(8, 8), ··· 160 149 }, 161 150 }, 162 151 [MT8192_POWER_DOMAIN_ISP] = { 152 + .name = "isp", 163 153 .sta_mask = BIT(12), 164 154 .ctl_offs = 0x0330, 165 155 .sram_pdn_bits = GENMASK(8, 8), ··· 177 165 }, 178 166 }, 179 167 [MT8192_POWER_DOMAIN_ISP2] = { 168 + .name = "isp2", 180 169 .sta_mask = BIT(13), 181 170 .ctl_offs = 0x0334, 182 171 .sram_pdn_bits = GENMASK(8, 8), ··· 194 181 }, 195 182 }, 196 183 [MT8192_POWER_DOMAIN_MDP] = { 184 + .name = "mdp", 197 185 .sta_mask = BIT(19), 198 186 .ctl_offs = 0x034c, 199 187 .sram_pdn_bits = GENMASK(8, 8), ··· 211 197 }, 212 198 }, 213 199 [MT8192_POWER_DOMAIN_VENC] = { 200 + .name = "venc", 214 201 .sta_mask = BIT(17), 215 202 .ctl_offs = 0x0344, 216 203 .sram_pdn_bits = GENMASK(8, 8), ··· 228 213 }, 229 214 }, 230 215 [MT8192_POWER_DOMAIN_VDEC] = { 216 + .name = "vdec", 231 217 .sta_mask = BIT(15), 232 218 .ctl_offs = 0x033c, 233 219 .sram_pdn_bits = GENMASK(8, 8), ··· 245 229 }, 246 230 }, 247 231 [MT8192_POWER_DOMAIN_VDEC2] = { 232 + .name = "vdec2", 248 233 .sta_mask = BIT(16), 249 234 .ctl_offs = 0x0340, 250 235 .sram_pdn_bits = GENMASK(8, 8), 251 236 .sram_pdn_ack_bits = GENMASK(12, 12), 252 237 }, 253 238 [MT8192_POWER_DOMAIN_CAM] = { 239 + .name = "cam", 254 240 .sta_mask = BIT(23), 255 241 .ctl_offs = 0x035c, 256 242 .sram_pdn_bits = GENMASK(8, 8), ··· 281 263 }, 282 264 }, 283 265 [MT8192_POWER_DOMAIN_CAM_RAWA] = { 266 + .name = "cam_rawa", 284 267 .sta_mask = BIT(24), 285 268 .ctl_offs = 0x0360, 286 269 .sram_pdn_bits = GENMASK(8, 8), 287 270 .sram_pdn_ack_bits = GENMASK(12, 12), 288 271 }, 289 272 [MT8192_POWER_DOMAIN_CAM_RAWB] = { 273 + .name = "cam_rawb", 290 274 .sta_mask = BIT(25), 291 275 .ctl_offs = 0x0364, 292 276 .sram_pdn_bits = GENMASK(8, 8), 293 277 .sram_pdn_ack_bits = GENMASK(12, 12), 294 278 }, 295 279 
[MT8192_POWER_DOMAIN_CAM_RAWC] = { 280 + .name = "cam_rawc", 296 281 .sta_mask = BIT(26), 297 282 .ctl_offs = 0x0368, 298 283 .sram_pdn_bits = GENMASK(8, 8),
drivers/soc/mediatek/mtk-mmsys.c (+54 -256)
··· 10 10 #include <linux/platform_device.h> 11 11 #include <linux/soc/mediatek/mtk-mmsys.h> 12 12 13 - #define DISP_REG_CONFIG_DISP_OVL0_MOUT_EN 0x040 14 - #define DISP_REG_CONFIG_DISP_OVL1_MOUT_EN 0x044 15 - #define DISP_REG_CONFIG_DISP_OD_MOUT_EN 0x048 16 - #define DISP_REG_CONFIG_DISP_GAMMA_MOUT_EN 0x04c 17 - #define DISP_REG_CONFIG_DISP_UFOE_MOUT_EN 0x050 18 - #define DISP_REG_CONFIG_DISP_COLOR0_SEL_IN 0x084 19 - #define DISP_REG_CONFIG_DISP_COLOR1_SEL_IN 0x088 20 - #define DISP_REG_CONFIG_DSIE_SEL_IN 0x0a4 21 - #define DISP_REG_CONFIG_DSIO_SEL_IN 0x0a8 22 - #define DISP_REG_CONFIG_DPI_SEL_IN 0x0ac 23 - #define DISP_REG_CONFIG_DISP_RDMA2_SOUT 0x0b8 24 - #define DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN 0x0c4 25 - #define DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN 0x0c8 26 - #define DISP_REG_CONFIG_MMSYS_CG_CON0 0x100 27 - 28 - #define DISP_REG_CONFIG_DISP_OVL_MOUT_EN 0x030 29 - #define DISP_REG_CONFIG_OUT_SEL 0x04c 30 - #define DISP_REG_CONFIG_DSI_SEL 0x050 31 - #define DISP_REG_CONFIG_DPI_SEL 0x064 32 - 33 - #define OVL0_MOUT_EN_COLOR0 0x1 34 - #define OD_MOUT_EN_RDMA0 0x1 35 - #define OD1_MOUT_EN_RDMA1 BIT(16) 36 - #define UFOE_MOUT_EN_DSI0 0x1 37 - #define COLOR0_SEL_IN_OVL0 0x1 38 - #define OVL1_MOUT_EN_COLOR1 0x1 39 - #define GAMMA_MOUT_EN_RDMA1 0x1 40 - #define RDMA0_SOUT_DPI0 0x2 41 - #define RDMA0_SOUT_DPI1 0x3 42 - #define RDMA0_SOUT_DSI1 0x1 43 - #define RDMA0_SOUT_DSI2 0x4 44 - #define RDMA0_SOUT_DSI3 0x5 45 - #define RDMA1_SOUT_DPI0 0x2 46 - #define RDMA1_SOUT_DPI1 0x3 47 - #define RDMA1_SOUT_DSI1 0x1 48 - #define RDMA1_SOUT_DSI2 0x4 49 - #define RDMA1_SOUT_DSI3 0x5 50 - #define RDMA2_SOUT_DPI0 0x2 51 - #define RDMA2_SOUT_DPI1 0x3 52 - #define RDMA2_SOUT_DSI1 0x1 53 - #define RDMA2_SOUT_DSI2 0x4 54 - #define RDMA2_SOUT_DSI3 0x5 55 - #define DPI0_SEL_IN_RDMA1 0x1 56 - #define DPI0_SEL_IN_RDMA2 0x3 57 - #define DPI1_SEL_IN_RDMA1 (0x1 << 8) 58 - #define DPI1_SEL_IN_RDMA2 (0x3 << 8) 59 - #define DSI0_SEL_IN_RDMA1 0x1 60 - #define DSI0_SEL_IN_RDMA2 0x4 61 - #define 
DSI1_SEL_IN_RDMA1 0x1 62 - #define DSI1_SEL_IN_RDMA2 0x4 63 - #define DSI2_SEL_IN_RDMA1 (0x1 << 16) 64 - #define DSI2_SEL_IN_RDMA2 (0x4 << 16) 65 - #define DSI3_SEL_IN_RDMA1 (0x1 << 16) 66 - #define DSI3_SEL_IN_RDMA2 (0x4 << 16) 67 - #define COLOR1_SEL_IN_OVL1 0x1 68 - 69 - #define OVL_MOUT_EN_RDMA 0x1 70 - #define BLS_TO_DSI_RDMA1_TO_DPI1 0x8 71 - #define BLS_TO_DPI_RDMA1_TO_DSI 0x2 72 - #define DSI_SEL_IN_BLS 0x0 73 - #define DPI_SEL_IN_BLS 0x0 74 - #define DSI_SEL_IN_RDMA 0x1 75 - 76 - struct mtk_mmsys_driver_data { 77 - const char *clk_driver; 78 - }; 13 + #include "mtk-mmsys.h" 14 + #include "mt8167-mmsys.h" 15 + #include "mt8183-mmsys.h" 79 16 80 17 static const struct mtk_mmsys_driver_data mt2701_mmsys_driver_data = { 81 18 .clk_driver = "clk-mt2701-mm", 19 + .routes = mmsys_default_routing_table, 20 + .num_routes = ARRAY_SIZE(mmsys_default_routing_table), 82 21 }; 83 22 84 23 static const struct mtk_mmsys_driver_data mt2712_mmsys_driver_data = { 85 24 .clk_driver = "clk-mt2712-mm", 25 + .routes = mmsys_default_routing_table, 26 + .num_routes = ARRAY_SIZE(mmsys_default_routing_table), 86 27 }; 87 28 88 29 static const struct mtk_mmsys_driver_data mt6779_mmsys_driver_data = { ··· 34 93 .clk_driver = "clk-mt6797-mm", 35 94 }; 36 95 96 + static const struct mtk_mmsys_driver_data mt8167_mmsys_driver_data = { 97 + .clk_driver = "clk-mt8167-mm", 98 + .routes = mt8167_mmsys_routing_table, 99 + .num_routes = ARRAY_SIZE(mt8167_mmsys_routing_table), 100 + }; 101 + 37 102 static const struct mtk_mmsys_driver_data mt8173_mmsys_driver_data = { 38 103 .clk_driver = "clk-mt8173-mm", 104 + .routes = mmsys_default_routing_table, 105 + .num_routes = ARRAY_SIZE(mmsys_default_routing_table), 39 106 }; 40 107 41 108 static const struct mtk_mmsys_driver_data mt8183_mmsys_driver_data = { 42 109 .clk_driver = "clk-mt8183-mm", 110 + .routes = mmsys_mt8183_routing_table, 111 + .num_routes = ARRAY_SIZE(mmsys_mt8183_routing_table), 43 112 }; 44 113 45 - static unsigned int 
mtk_mmsys_ddp_mout_en(enum mtk_ddp_comp_id cur, 46 - enum mtk_ddp_comp_id next, 47 - unsigned int *addr) 48 - { 49 - unsigned int value; 50 - 51 - if (cur == DDP_COMPONENT_OVL0 && next == DDP_COMPONENT_COLOR0) { 52 - *addr = DISP_REG_CONFIG_DISP_OVL0_MOUT_EN; 53 - value = OVL0_MOUT_EN_COLOR0; 54 - } else if (cur == DDP_COMPONENT_OVL0 && next == DDP_COMPONENT_RDMA0) { 55 - *addr = DISP_REG_CONFIG_DISP_OVL_MOUT_EN; 56 - value = OVL_MOUT_EN_RDMA; 57 - } else if (cur == DDP_COMPONENT_OD0 && next == DDP_COMPONENT_RDMA0) { 58 - *addr = DISP_REG_CONFIG_DISP_OD_MOUT_EN; 59 - value = OD_MOUT_EN_RDMA0; 60 - } else if (cur == DDP_COMPONENT_UFOE && next == DDP_COMPONENT_DSI0) { 61 - *addr = DISP_REG_CONFIG_DISP_UFOE_MOUT_EN; 62 - value = UFOE_MOUT_EN_DSI0; 63 - } else if (cur == DDP_COMPONENT_OVL1 && next == DDP_COMPONENT_COLOR1) { 64 - *addr = DISP_REG_CONFIG_DISP_OVL1_MOUT_EN; 65 - value = OVL1_MOUT_EN_COLOR1; 66 - } else if (cur == DDP_COMPONENT_GAMMA && next == DDP_COMPONENT_RDMA1) { 67 - *addr = DISP_REG_CONFIG_DISP_GAMMA_MOUT_EN; 68 - value = GAMMA_MOUT_EN_RDMA1; 69 - } else if (cur == DDP_COMPONENT_OD1 && next == DDP_COMPONENT_RDMA1) { 70 - *addr = DISP_REG_CONFIG_DISP_OD_MOUT_EN; 71 - value = OD1_MOUT_EN_RDMA1; 72 - } else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DPI0) { 73 - *addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN; 74 - value = RDMA0_SOUT_DPI0; 75 - } else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DPI1) { 76 - *addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN; 77 - value = RDMA0_SOUT_DPI1; 78 - } else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DSI1) { 79 - *addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN; 80 - value = RDMA0_SOUT_DSI1; 81 - } else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DSI2) { 82 - *addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN; 83 - value = RDMA0_SOUT_DSI2; 84 - } else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DSI3) { 85 - *addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN; 86 - value = 
RDMA0_SOUT_DSI3; 87 - } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI1) { 88 - *addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN; 89 - value = RDMA1_SOUT_DSI1; 90 - } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI2) { 91 - *addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN; 92 - value = RDMA1_SOUT_DSI2; 93 - } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI3) { 94 - *addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN; 95 - value = RDMA1_SOUT_DSI3; 96 - } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DPI0) { 97 - *addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN; 98 - value = RDMA1_SOUT_DPI0; 99 - } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DPI1) { 100 - *addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN; 101 - value = RDMA1_SOUT_DPI1; 102 - } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DPI0) { 103 - *addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT; 104 - value = RDMA2_SOUT_DPI0; 105 - } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DPI1) { 106 - *addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT; 107 - value = RDMA2_SOUT_DPI1; 108 - } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI1) { 109 - *addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT; 110 - value = RDMA2_SOUT_DSI1; 111 - } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI2) { 112 - *addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT; 113 - value = RDMA2_SOUT_DSI2; 114 - } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI3) { 115 - *addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT; 116 - value = RDMA2_SOUT_DSI3; 117 - } else { 118 - value = 0; 119 - } 120 - 121 - return value; 122 - } 123 - 124 - static unsigned int mtk_mmsys_ddp_sel_in(enum mtk_ddp_comp_id cur, 125 - enum mtk_ddp_comp_id next, 126 - unsigned int *addr) 127 - { 128 - unsigned int value; 129 - 130 - if (cur == DDP_COMPONENT_OVL0 && next == DDP_COMPONENT_COLOR0) { 131 - *addr = DISP_REG_CONFIG_DISP_COLOR0_SEL_IN; 132 - value = COLOR0_SEL_IN_OVL0; 133 - } else if (cur == 
DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DPI0) { 134 - *addr = DISP_REG_CONFIG_DPI_SEL_IN; 135 - value = DPI0_SEL_IN_RDMA1; 136 - } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DPI1) { 137 - *addr = DISP_REG_CONFIG_DPI_SEL_IN; 138 - value = DPI1_SEL_IN_RDMA1; 139 - } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI0) { 140 - *addr = DISP_REG_CONFIG_DSIE_SEL_IN; 141 - value = DSI0_SEL_IN_RDMA1; 142 - } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI1) { 143 - *addr = DISP_REG_CONFIG_DSIO_SEL_IN; 144 - value = DSI1_SEL_IN_RDMA1; 145 - } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI2) { 146 - *addr = DISP_REG_CONFIG_DSIE_SEL_IN; 147 - value = DSI2_SEL_IN_RDMA1; 148 - } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI3) { 149 - *addr = DISP_REG_CONFIG_DSIO_SEL_IN; 150 - value = DSI3_SEL_IN_RDMA1; 151 - } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DPI0) { 152 - *addr = DISP_REG_CONFIG_DPI_SEL_IN; 153 - value = DPI0_SEL_IN_RDMA2; 154 - } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DPI1) { 155 - *addr = DISP_REG_CONFIG_DPI_SEL_IN; 156 - value = DPI1_SEL_IN_RDMA2; 157 - } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI0) { 158 - *addr = DISP_REG_CONFIG_DSIE_SEL_IN; 159 - value = DSI0_SEL_IN_RDMA2; 160 - } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI1) { 161 - *addr = DISP_REG_CONFIG_DSIO_SEL_IN; 162 - value = DSI1_SEL_IN_RDMA2; 163 - } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI2) { 164 - *addr = DISP_REG_CONFIG_DSIE_SEL_IN; 165 - value = DSI2_SEL_IN_RDMA2; 166 - } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI3) { 167 - *addr = DISP_REG_CONFIG_DSIE_SEL_IN; 168 - value = DSI3_SEL_IN_RDMA2; 169 - } else if (cur == DDP_COMPONENT_OVL1 && next == DDP_COMPONENT_COLOR1) { 170 - *addr = DISP_REG_CONFIG_DISP_COLOR1_SEL_IN; 171 - value = COLOR1_SEL_IN_OVL1; 172 - } else if (cur == 
DDP_COMPONENT_BLS && next == DDP_COMPONENT_DSI0) { 173 - *addr = DISP_REG_CONFIG_DSI_SEL; 174 - value = DSI_SEL_IN_BLS; 175 - } else { 176 - value = 0; 177 - } 178 - 179 - return value; 180 - } 181 - 182 - static void mtk_mmsys_ddp_sout_sel(void __iomem *config_regs, 183 - enum mtk_ddp_comp_id cur, 184 - enum mtk_ddp_comp_id next) 185 - { 186 - if (cur == DDP_COMPONENT_BLS && next == DDP_COMPONENT_DSI0) { 187 - writel_relaxed(BLS_TO_DSI_RDMA1_TO_DPI1, 188 - config_regs + DISP_REG_CONFIG_OUT_SEL); 189 - } else if (cur == DDP_COMPONENT_BLS && next == DDP_COMPONENT_DPI0) { 190 - writel_relaxed(BLS_TO_DPI_RDMA1_TO_DSI, 191 - config_regs + DISP_REG_CONFIG_OUT_SEL); 192 - writel_relaxed(DSI_SEL_IN_RDMA, 193 - config_regs + DISP_REG_CONFIG_DSI_SEL); 194 - writel_relaxed(DPI_SEL_IN_BLS, 195 - config_regs + DISP_REG_CONFIG_DPI_SEL); 196 - } 197 - } 114 + struct mtk_mmsys { 115 + void __iomem *regs; 116 + const struct mtk_mmsys_driver_data *data; 117 + }; 198 118 199 119 void mtk_mmsys_ddp_connect(struct device *dev, 200 120 enum mtk_ddp_comp_id cur, 201 121 enum mtk_ddp_comp_id next) 202 122 { 203 - void __iomem *config_regs = dev_get_drvdata(dev); 204 - unsigned int addr, value, reg; 123 + struct mtk_mmsys *mmsys = dev_get_drvdata(dev); 124 + const struct mtk_mmsys_routes *routes = mmsys->data->routes; 125 + u32 reg; 126 + int i; 205 127 206 - value = mtk_mmsys_ddp_mout_en(cur, next, &addr); 207 - if (value) { 208 - reg = readl_relaxed(config_regs + addr) | value; 209 - writel_relaxed(reg, config_regs + addr); 210 - } 211 - 212 - mtk_mmsys_ddp_sout_sel(config_regs, cur, next); 213 - 214 - value = mtk_mmsys_ddp_sel_in(cur, next, &addr); 215 - if (value) { 216 - reg = readl_relaxed(config_regs + addr) | value; 217 - writel_relaxed(reg, config_regs + addr); 218 - } 128 + for (i = 0; i < mmsys->data->num_routes; i++) 129 + if (cur == routes[i].from_comp && next == routes[i].to_comp) { 130 + reg = readl_relaxed(mmsys->regs + routes[i].addr) | routes[i].val; 131 + 
writel_relaxed(reg, mmsys->regs + routes[i].addr); 132 + } 219 133 } 220 134 EXPORT_SYMBOL_GPL(mtk_mmsys_ddp_connect); 221 135 ··· 78 282 enum mtk_ddp_comp_id cur, 79 283 enum mtk_ddp_comp_id next) 80 284 { 81 - void __iomem *config_regs = dev_get_drvdata(dev); 82 - unsigned int addr, value, reg; 285 + struct mtk_mmsys *mmsys = dev_get_drvdata(dev); 286 + const struct mtk_mmsys_routes *routes = mmsys->data->routes; 287 + u32 reg; 288 + int i; 83 289 84 - value = mtk_mmsys_ddp_mout_en(cur, next, &addr); 85 - if (value) { 86 - reg = readl_relaxed(config_regs + addr) & ~value; 87 - writel_relaxed(reg, config_regs + addr); 88 - } 89 - 90 - value = mtk_mmsys_ddp_sel_in(cur, next, &addr); 91 - if (value) { 92 - reg = readl_relaxed(config_regs + addr) & ~value; 93 - writel_relaxed(reg, config_regs + addr); 94 - } 290 + for (i = 0; i < mmsys->data->num_routes; i++) 291 + if (cur == routes[i].from_comp && next == routes[i].to_comp) { 292 + reg = readl_relaxed(mmsys->regs + routes[i].addr) & ~routes[i].val; 293 + writel_relaxed(reg, mmsys->regs + routes[i].addr); 294 + } 95 295 } 96 296 EXPORT_SYMBOL_GPL(mtk_mmsys_ddp_disconnect); 97 297 98 298 static int mtk_mmsys_probe(struct platform_device *pdev) 99 299 { 100 - const struct mtk_mmsys_driver_data *data; 101 300 struct device *dev = &pdev->dev; 102 301 struct platform_device *clks; 103 302 struct platform_device *drm; 104 - void __iomem *config_regs; 303 + struct mtk_mmsys *mmsys; 105 304 int ret; 106 305 107 - config_regs = devm_platform_ioremap_resource(pdev, 0); 108 - if (IS_ERR(config_regs)) { 109 - ret = PTR_ERR(config_regs); 306 + mmsys = devm_kzalloc(dev, sizeof(*mmsys), GFP_KERNEL); 307 + if (!mmsys) 308 + return -ENOMEM; 309 + 310 + mmsys->regs = devm_platform_ioremap_resource(pdev, 0); 311 + if (IS_ERR(mmsys->regs)) { 312 + ret = PTR_ERR(mmsys->regs); 110 313 dev_err(dev, "Failed to ioremap mmsys registers: %d\n", ret); 111 314 return ret; 112 315 } 113 316 114 - platform_set_drvdata(pdev, config_regs); 317 + 
mmsys->data = of_device_get_match_data(&pdev->dev); 318 + platform_set_drvdata(pdev, mmsys); 115 319 116 - data = of_device_get_match_data(&pdev->dev); 117 - 118 - clks = platform_device_register_data(&pdev->dev, data->clk_driver, 320 + clks = platform_device_register_data(&pdev->dev, mmsys->data->clk_driver, 119 321 PLATFORM_DEVID_AUTO, NULL, 0); 120 322 if (IS_ERR(clks)) 121 323 return PTR_ERR(clks); ··· 144 350 { 145 351 .compatible = "mediatek,mt6797-mmsys", 146 352 .data = &mt6797_mmsys_driver_data, 353 + }, 354 + { 355 + .compatible = "mediatek,mt8167-mmsys", 356 + .data = &mt8167_mmsys_driver_data, 147 357 }, 148 358 { 149 359 .compatible = "mediatek,mt8173-mmsys",
drivers/soc/mediatek/mtk-mmsys.h (+215)
···
+ /* SPDX-License-Identifier: GPL-2.0-only */
+
+ #ifndef __SOC_MEDIATEK_MTK_MMSYS_H
+ #define __SOC_MEDIATEK_MTK_MMSYS_H
+
+ #define DISP_REG_CONFIG_DISP_OVL0_MOUT_EN   0x040
+ #define DISP_REG_CONFIG_DISP_OVL1_MOUT_EN   0x044
+ #define DISP_REG_CONFIG_DISP_OD_MOUT_EN     0x048
+ #define DISP_REG_CONFIG_DISP_GAMMA_MOUT_EN  0x04c
+ #define DISP_REG_CONFIG_DISP_UFOE_MOUT_EN   0x050
+ #define DISP_REG_CONFIG_DISP_COLOR0_SEL_IN  0x084
+ #define DISP_REG_CONFIG_DISP_COLOR1_SEL_IN  0x088
+ #define DISP_REG_CONFIG_DSIE_SEL_IN         0x0a4
+ #define DISP_REG_CONFIG_DSIO_SEL_IN         0x0a8
+ #define DISP_REG_CONFIG_DPI_SEL_IN          0x0ac
+ #define DISP_REG_CONFIG_DISP_RDMA2_SOUT     0x0b8
+ #define DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN  0x0c4
+ #define DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN  0x0c8
+ #define DISP_REG_CONFIG_MMSYS_CG_CON0       0x100
+
+ #define DISP_REG_CONFIG_DISP_OVL_MOUT_EN    0x030
+ #define DISP_REG_CONFIG_OUT_SEL             0x04c
+ #define DISP_REG_CONFIG_DSI_SEL             0x050
+ #define DISP_REG_CONFIG_DPI_SEL             0x064
+
+ #define OVL0_MOUT_EN_COLOR0         0x1
+ #define OD_MOUT_EN_RDMA0            0x1
+ #define OD1_MOUT_EN_RDMA1           BIT(16)
+ #define UFOE_MOUT_EN_DSI0           0x1
+ #define COLOR0_SEL_IN_OVL0          0x1
+ #define OVL1_MOUT_EN_COLOR1         0x1
+ #define GAMMA_MOUT_EN_RDMA1         0x1
+ #define RDMA0_SOUT_DPI0             0x2
+ #define RDMA0_SOUT_DPI1             0x3
+ #define RDMA0_SOUT_DSI1             0x1
+ #define RDMA0_SOUT_DSI2             0x4
+ #define RDMA0_SOUT_DSI3             0x5
+ #define RDMA1_SOUT_DPI0             0x2
+ #define RDMA1_SOUT_DPI1             0x3
+ #define RDMA1_SOUT_DSI1             0x1
+ #define RDMA1_SOUT_DSI2             0x4
+ #define RDMA1_SOUT_DSI3             0x5
+ #define RDMA2_SOUT_DPI0             0x2
+ #define RDMA2_SOUT_DPI1             0x3
+ #define RDMA2_SOUT_DSI1             0x1
+ #define RDMA2_SOUT_DSI2             0x4
+ #define RDMA2_SOUT_DSI3             0x5
+ #define DPI0_SEL_IN_RDMA1           0x1
+ #define DPI0_SEL_IN_RDMA2           0x3
+ #define DPI1_SEL_IN_RDMA1           (0x1 << 8)
+ #define DPI1_SEL_IN_RDMA2           (0x3 << 8)
+ #define DSI0_SEL_IN_RDMA1           0x1
+ #define DSI0_SEL_IN_RDMA2           0x4
+ #define DSI1_SEL_IN_RDMA1           0x1
+ #define DSI1_SEL_IN_RDMA2           0x4
+ #define DSI2_SEL_IN_RDMA1           (0x1 << 16)
+ #define DSI2_SEL_IN_RDMA2           (0x4 << 16)
+ #define DSI3_SEL_IN_RDMA1           (0x1 << 16)
+ #define DSI3_SEL_IN_RDMA2           (0x4 << 16)
+ #define COLOR1_SEL_IN_OVL1          0x1
+
+ #define OVL_MOUT_EN_RDMA            0x1
+ #define BLS_TO_DSI_RDMA1_TO_DPI1    0x8
+ #define BLS_TO_DPI_RDMA1_TO_DSI     0x2
+ #define DSI_SEL_IN_BLS              0x0
+ #define DPI_SEL_IN_BLS              0x0
+ #define DSI_SEL_IN_RDMA             0x1
+
+ struct mtk_mmsys_routes {
+ 	u32 from_comp;
+ 	u32 to_comp;
+ 	u32 addr;
+ 	u32 val;
+ };
+
+ struct mtk_mmsys_driver_data {
+ 	const char *clk_driver;
+ 	const struct mtk_mmsys_routes *routes;
+ 	const unsigned int num_routes;
+ };
+
+ /*
+  * The routes on mt8173, mt2701 and mt2712 differ: the same register
+  * address controls a different input/output selection on each SoC.
+  * For now all three share this table, because the default routes meet
+  * their requirements and we do not have complete routing information
+  * for these SoCs. Once more information is available, mt2701 and
+  * mt2712 can be split out into independent tables.
+  */
+ static const struct mtk_mmsys_routes mmsys_default_routing_table[] = {
+ 	{
+ 		DDP_COMPONENT_BLS, DDP_COMPONENT_DSI0,
+ 		DISP_REG_CONFIG_OUT_SEL, BLS_TO_DSI_RDMA1_TO_DPI1
+ 	}, {
+ 		DDP_COMPONENT_BLS, DDP_COMPONENT_DSI0,
+ 		DISP_REG_CONFIG_DSI_SEL, DSI_SEL_IN_BLS
+ 	}, {
+ 		DDP_COMPONENT_BLS, DDP_COMPONENT_DPI0,
+ 		DISP_REG_CONFIG_OUT_SEL, BLS_TO_DPI_RDMA1_TO_DSI
+ 	}, {
+ 		DDP_COMPONENT_BLS, DDP_COMPONENT_DPI0,
+ 		DISP_REG_CONFIG_DSI_SEL, DSI_SEL_IN_RDMA
+ 	}, {
+ 		DDP_COMPONENT_BLS, DDP_COMPONENT_DPI0,
+ 		DISP_REG_CONFIG_DPI_SEL, DPI_SEL_IN_BLS
+ 	}, {
+ 		DDP_COMPONENT_GAMMA, DDP_COMPONENT_RDMA1,
+ 		DISP_REG_CONFIG_DISP_GAMMA_MOUT_EN, GAMMA_MOUT_EN_RDMA1
+ 	}, {
+ 		DDP_COMPONENT_OD0, DDP_COMPONENT_RDMA0,
+ 		DISP_REG_CONFIG_DISP_OD_MOUT_EN, OD_MOUT_EN_RDMA0
+ 	}, {
+ 		DDP_COMPONENT_OD1, DDP_COMPONENT_RDMA1,
+ 		DISP_REG_CONFIG_DISP_OD_MOUT_EN, OD1_MOUT_EN_RDMA1
+ 	}, {
+ 		DDP_COMPONENT_OVL0, DDP_COMPONENT_COLOR0,
+ 		DISP_REG_CONFIG_DISP_OVL0_MOUT_EN, OVL0_MOUT_EN_COLOR0
+ 	}, {
+ 		DDP_COMPONENT_OVL0, DDP_COMPONENT_COLOR0,
+ 		DISP_REG_CONFIG_DISP_COLOR0_SEL_IN, COLOR0_SEL_IN_OVL0
+ 	}, {
+ 		DDP_COMPONENT_OVL0, DDP_COMPONENT_RDMA0,
+ 		DISP_REG_CONFIG_DISP_OVL_MOUT_EN, OVL_MOUT_EN_RDMA
+ 	}, {
+ 		DDP_COMPONENT_OVL1, DDP_COMPONENT_COLOR1,
+ 		DISP_REG_CONFIG_DISP_OVL1_MOUT_EN, OVL1_MOUT_EN_COLOR1
+ 	}, {
+ 		DDP_COMPONENT_OVL1, DDP_COMPONENT_COLOR1,
+ 		DISP_REG_CONFIG_DISP_COLOR1_SEL_IN, COLOR1_SEL_IN_OVL1
+ 	}, {
+ 		DDP_COMPONENT_RDMA0, DDP_COMPONENT_DPI0,
+ 		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_DPI0
+ 	}, {
+ 		DDP_COMPONENT_RDMA0, DDP_COMPONENT_DPI1,
+ 		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_DPI1
+ 	}, {
+ 		DDP_COMPONENT_RDMA0, DDP_COMPONENT_DSI1,
+ 		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_DSI1
+ 	}, {
+ 		DDP_COMPONENT_RDMA0, DDP_COMPONENT_DSI2,
+ 		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_DSI2
+ 	}, {
+ 		DDP_COMPONENT_RDMA0, DDP_COMPONENT_DSI3,
+ 		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_DSI3
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI0,
+ 		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_DPI0
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI0,
+ 		DISP_REG_CONFIG_DPI_SEL_IN, DPI0_SEL_IN_RDMA1
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI1,
+ 		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_DPI1
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI1,
+ 		DISP_REG_CONFIG_DPI_SEL_IN, DPI1_SEL_IN_RDMA1
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DSI0,
+ 		DISP_REG_CONFIG_DSIE_SEL_IN, DSI0_SEL_IN_RDMA1
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DSI1,
+ 		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_DSI1
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DSI1,
+ 		DISP_REG_CONFIG_DSIO_SEL_IN, DSI1_SEL_IN_RDMA1
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DSI2,
+ 		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_DSI2
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DSI2,
+ 		DISP_REG_CONFIG_DSIE_SEL_IN, DSI2_SEL_IN_RDMA1
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DSI3,
+ 		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_DSI3
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DSI3,
+ 		DISP_REG_CONFIG_DSIO_SEL_IN, DSI3_SEL_IN_RDMA1
+ 	}, {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DPI0,
+ 		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_DPI0
+ 	}, {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DPI0,
+ 		DISP_REG_CONFIG_DPI_SEL_IN, DPI0_SEL_IN_RDMA2
+ 	}, {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DPI1,
+ 		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_DPI1
+ 	}, {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DPI1,
+ 		DISP_REG_CONFIG_DPI_SEL_IN, DPI1_SEL_IN_RDMA2
+ 	}, {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DSI0,
+ 		DISP_REG_CONFIG_DSIE_SEL_IN, DSI0_SEL_IN_RDMA2
+ 	}, {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DSI1,
+ 		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_DSI1
+ 	}, {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DSI1,
+ 		DISP_REG_CONFIG_DSIO_SEL_IN, DSI1_SEL_IN_RDMA2
+ 	}, {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DSI2,
+ 		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_DSI2
+ 	}, {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DSI2,
+ 		DISP_REG_CONFIG_DSIE_SEL_IN, DSI2_SEL_IN_RDMA2
+ 	}, {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DSI3,
+ 		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_DSI3
+ 	}, {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DSI3,
+ 		DISP_REG_CONFIG_DSIO_SEL_IN, DSI3_SEL_IN_RDMA2
+ 	}
+ };
+
+ #endif /* __SOC_MEDIATEK_MTK_MMSYS_H */
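The routing table above is what the mmsys core walks when a display driver asks to connect two DDP components: every row matching the (from, to) pair has its value written to its register, and a single connection may touch several registers. A minimal userspace model of that lookup (the component IDs, register values and write callback here are invented for illustration, not the driver's real API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical component IDs standing in for the DDP_COMPONENT_* enum. */
enum { COMP_OVL0, COMP_COLOR0, COMP_RDMA0, COMP_DPI0 };

struct route {
	uint32_t from, to;	/* endpoints of the connection */
	uint32_t addr, val;	/* register offset and routing value */
};

/* A tiny stand-in for mmsys_default_routing_table[]. */
static const struct route table[] = {
	{ COMP_OVL0,  COMP_COLOR0, 0x040, 0x1 },
	{ COMP_OVL0,  COMP_COLOR0, 0x084, 0x1 },
	{ COMP_RDMA0, COMP_DPI0,   0x0c4, 0x2 },
};

/*
 * Apply every row matching (from, to), the way the mmsys core applies
 * its routes; returns how many registers were written via the callback.
 */
static int connect(uint32_t from, uint32_t to,
		   void (*write_reg)(uint32_t addr, uint32_t val))
{
	int hits = 0;

	for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++) {
		if (table[i].from == from && table[i].to == to) {
			write_reg(table[i].addr, table[i].val);
			hits++;
		}
	}
	return hits;
}

/* Record the last write so the behaviour can be checked without hardware. */
static uint32_t last_addr, last_val;
static void fake_write(uint32_t addr, uint32_t val)
{
	last_addr = addr;
	last_val = val;
}
```

Note how OVL0 to COLOR0 needs two writes (MOUT enable plus SEL input), which is why the real table carries duplicate (from, to) pairs.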
+51 -1
drivers/soc/mediatek/mtk-mutex.c
···
 
 #define MT2701_MUTEX0_MOD0          0x2c
 #define MT2701_MUTEX0_SOF0          0x30
+#define MT8183_MUTEX0_MOD0          0x30
+#define MT8183_MUTEX0_SOF0          0x2c
 
 #define DISP_REG_MUTEX_EN(n)        (0x20 + 0x20 * (n))
 #define DISP_REG_MUTEX(n)           (0x24 + 0x20 * (n))
···
 #define MT8167_MUTEX_MOD_DISP_GAMMA     14
 #define MT8167_MUTEX_MOD_DISP_DITHER    15
 #define MT8167_MUTEX_MOD_DISP_UFOE      16
+
+#define MT8183_MUTEX_MOD_DISP_RDMA0     0
+#define MT8183_MUTEX_MOD_DISP_RDMA1     1
+#define MT8183_MUTEX_MOD_DISP_OVL0      9
+#define MT8183_MUTEX_MOD_DISP_OVL0_2L   10
+#define MT8183_MUTEX_MOD_DISP_OVL1_2L   11
+#define MT8183_MUTEX_MOD_DISP_WDMA0     12
+#define MT8183_MUTEX_MOD_DISP_COLOR0    13
+#define MT8183_MUTEX_MOD_DISP_CCORR0    14
+#define MT8183_MUTEX_MOD_DISP_AAL0      15
+#define MT8183_MUTEX_MOD_DISP_GAMMA0    16
+#define MT8183_MUTEX_MOD_DISP_DITHER0   17
 
 #define MT8173_MUTEX_MOD_DISP_OVL0      11
 #define MT8173_MUTEX_MOD_DISP_OVL1      12
···
 #define MT2712_MUTEX_SOF_DSI3           6
 #define MT8167_MUTEX_SOF_DPI0           2
 #define MT8167_MUTEX_SOF_DPI1           3
+#define MT8183_MUTEX_SOF_DSI0           1
+#define MT8183_MUTEX_SOF_DPI0           2
+
+#define MT8183_MUTEX_EOF_DSI0           (MT8183_MUTEX_SOF_DSI0 << 6)
+#define MT8183_MUTEX_EOF_DPI0           (MT8183_MUTEX_SOF_DPI0 << 6)
 
 struct mtk_mutex {
 	int id;
···
 	[DDP_COMPONENT_WDMA1] = MT8173_MUTEX_MOD_DISP_WDMA1,
 };
 
+static const unsigned int mt8183_mutex_mod[DDP_COMPONENT_ID_MAX] = {
+	[DDP_COMPONENT_AAL0] = MT8183_MUTEX_MOD_DISP_AAL0,
+	[DDP_COMPONENT_CCORR] = MT8183_MUTEX_MOD_DISP_CCORR0,
+	[DDP_COMPONENT_COLOR0] = MT8183_MUTEX_MOD_DISP_COLOR0,
+	[DDP_COMPONENT_DITHER] = MT8183_MUTEX_MOD_DISP_DITHER0,
+	[DDP_COMPONENT_GAMMA] = MT8183_MUTEX_MOD_DISP_GAMMA0,
+	[DDP_COMPONENT_OVL0] = MT8183_MUTEX_MOD_DISP_OVL0,
+	[DDP_COMPONENT_OVL_2L0] = MT8183_MUTEX_MOD_DISP_OVL0_2L,
+	[DDP_COMPONENT_OVL_2L1] = MT8183_MUTEX_MOD_DISP_OVL1_2L,
+	[DDP_COMPONENT_RDMA0] = MT8183_MUTEX_MOD_DISP_RDMA0,
+	[DDP_COMPONENT_RDMA1] = MT8183_MUTEX_MOD_DISP_RDMA1,
+	[DDP_COMPONENT_WDMA0] = MT8183_MUTEX_MOD_DISP_WDMA0,
+};
+
 static const unsigned int mt2712_mutex_sof[MUTEX_SOF_DSI3 + 1] = {
 	[MUTEX_SOF_SINGLE_MODE] = MUTEX_SOF_SINGLE_MODE,
 	[MUTEX_SOF_DSI0] = MUTEX_SOF_DSI0,
···
 	[MUTEX_SOF_DSI0] = MUTEX_SOF_DSI0,
 	[MUTEX_SOF_DPI0] = MT8167_MUTEX_SOF_DPI0,
 	[MUTEX_SOF_DPI1] = MT8167_MUTEX_SOF_DPI1,
+};
+
+/* Add EOF setting so overlay hardware can receive frame done irq */
+static const unsigned int mt8183_mutex_sof[MUTEX_SOF_DSI3 + 1] = {
+	[MUTEX_SOF_SINGLE_MODE] = MUTEX_SOF_SINGLE_MODE,
+	[MUTEX_SOF_DSI0] = MUTEX_SOF_DSI0 | MT8183_MUTEX_EOF_DSI0,
+	[MUTEX_SOF_DPI0] = MT8183_MUTEX_SOF_DPI0 | MT8183_MUTEX_EOF_DPI0,
 };
 
 static const struct mtk_mutex_data mt2701_mutex_driver_data = {
···
 	.mutex_sof = mt2712_mutex_sof,
 	.mutex_mod_reg = MT2701_MUTEX0_MOD0,
 	.mutex_sof_reg = MT2701_MUTEX0_SOF0,
+};
+
+static const struct mtk_mutex_data mt8183_mutex_driver_data = {
+	.mutex_mod = mt8183_mutex_mod,
+	.mutex_sof = mt8183_mutex_sof,
+	.mutex_mod_reg = MT8183_MUTEX0_MOD0,
+	.mutex_sof_reg = MT8183_MUTEX0_SOF0,
+	.no_clk = true,
 };
 
 struct mtk_mutex *mtk_mutex_get(struct device *dev)
···
 	  .data = &mt8167_mutex_driver_data},
 	{ .compatible = "mediatek,mt8173-disp-mutex",
 	  .data = &mt8173_mutex_driver_data},
+	{ .compatible = "mediatek,mt8183-disp-mutex",
+	  .data = &mt8183_mutex_driver_data},
 	{},
 };
 MODULE_DEVICE_TABLE(of, mutex_driver_dt_match);
 
-struct platform_driver mtk_mutex_driver = {
+static struct platform_driver mtk_mutex_driver = {
 	.probe = mtk_mutex_probe,
 	.remove = mtk_mutex_remove,
 	.driver = {
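The mt8183_mutex_sof table works because, judging from the defines, the MT8183 SOF register appears to carry the EOF source six bits above the SOF field, so one register value encodes both the start-of-frame and the end-of-frame trigger. A small sketch of that packing (constants copied from the defines in the hunk above):

```c
#include <assert.h>
#include <stdint.h>

/* Values taken from the mt8183 defines in the diff above. */
#define MUTEX_SOF_DSI0          1
#define MT8183_MUTEX_SOF_DSI0   1
#define MT8183_MUTEX_SOF_DPI0   2
#define MT8183_MUTEX_EOF_DSI0   (MT8183_MUTEX_SOF_DSI0 << 6)
#define MT8183_MUTEX_EOF_DPI0   (MT8183_MUTEX_SOF_DPI0 << 6)

/* Combine a start-of-frame source and an end-of-frame source into the
 * single register value the mutex driver programs. */
static uint32_t mt8183_sof_reg(uint32_t sof, uint32_t eof)
{
	return sof | eof;
}
```

For DSI0 this yields 0x41: bit 0 selects DSI0 as the SOF source and bit 6 selects it as the EOF source, which is what lets the overlay hardware raise a frame-done interrupt.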
+8 -3
drivers/soc/mediatek/mtk-pm-domains.c
···
 		goto err_unprepare_subsys_clocks;
 	}
 
-	pd->genpd.name = node->name;
+	if (!pd->data->name)
+		pd->genpd.name = node->name;
+	else
+		pd->genpd.name = pd->data->name;
+
 	pd->genpd.power_off = scpsys_power_off;
 	pd->genpd.power_on = scpsys_power_on;
···
 
 		child_pd = scpsys_add_one_domain(scpsys, child);
 		if (IS_ERR(child_pd)) {
-			dev_err_probe(scpsys->dev, PTR_ERR(child_pd),
-				      "%pOF: failed to get child domain id\n", child);
+			ret = PTR_ERR(child_pd);
+			dev_err_probe(scpsys->dev, ret, "%pOF: failed to get child domain id\n",
+				      child);
 			goto err_put_node;
 		}
 
+2
drivers/soc/mediatek/mtk-pm-domains.h
···
 
 /**
  * struct scpsys_domain_data - scp domain data for power on/off flow
+ * @name: The name of the power domain.
  * @sta_mask: The mask for power on/off status bit.
  * @ctl_offs: The offset for main power control register.
  * @sram_pdn_bits: The mask for sram power control bits.
···
  * @bp_smi: bus protection for smi subsystem
  */
 struct scpsys_domain_data {
+	const char *name;
 	u32 sta_mask;
 	int ctl_offs;
 	u32 sram_pdn_bits;
+82 -15
drivers/soc/mediatek/mtk-pmic-wrap.c
···
 
 /* macro for wrapper status */
 #define PWRAP_GET_WACS_RDATA(x)     (((x) >> 0) & 0x0000ffff)
+#define PWRAP_GET_WACS_ARB_FSM(x)   (((x) >> 1) & 0x00000007)
 #define PWRAP_GET_WACS_FSM(x)       (((x) >> 16) & 0x00000007)
 #define PWRAP_GET_WACS_REQ(x)       (((x) >> 19) & 0x00000001)
-#define PWRAP_STATE_SYNC_IDLE0      (1 << 20)
-#define PWRAP_STATE_INIT_DONE0      (1 << 21)
+#define PWRAP_STATE_SYNC_IDLE0      BIT(20)
+#define PWRAP_STATE_INIT_DONE0      BIT(21)
+#define PWRAP_STATE_INIT_DONE1      BIT(15)
 
 /* macro for WACS FSM */
 #define PWRAP_WACS_FSM_IDLE         0x00
···
 #define PWRAP_CAP_DCM       BIT(2)
 #define PWRAP_CAP_INT1_EN   BIT(3)
 #define PWRAP_CAP_WDT_SRC1  BIT(4)
+#define PWRAP_CAP_ARB       BIT(5)
 
 /* defines for slave device wrapper registers */
 enum dew_regs {
···
 	PWRAP_DCM_DBC_PRD,
 	PWRAP_EINT_STA0_ADR,
 	PWRAP_EINT_STA1_ADR,
+	PWRAP_SWINF_2_WDATA_31_0,
+	PWRAP_SWINF_2_RDATA_31_0,
 
 	/* MT2701 only regs */
 	PWRAP_ADC_CMD_ADDR,
···
 	[PWRAP_WDT_SRC_EN] = 0x100,
 	[PWRAP_DCM_EN] = 0x1CC,
 	[PWRAP_DCM_DBC_PRD] = 0x1D4,
+};
+
+static int mt6873_regs[] = {
+	[PWRAP_INIT_DONE2] = 0x0,
+	[PWRAP_TIMER_EN] = 0x3E0,
+	[PWRAP_INT_EN] = 0x448,
+	[PWRAP_WACS2_CMD] = 0xC80,
+	[PWRAP_SWINF_2_WDATA_31_0] = 0xC84,
+	[PWRAP_SWINF_2_RDATA_31_0] = 0xC94,
+	[PWRAP_WACS2_VLDCLR] = 0xCA4,
+	[PWRAP_WACS2_RDATA] = 0xCA8,
 };
 
 static int mt7622_regs[] = {
···
 	PWRAP_MT6765,
 	PWRAP_MT6779,
 	PWRAP_MT6797,
+	PWRAP_MT6873,
 	PWRAP_MT7622,
 	PWRAP_MT8135,
 	PWRAP_MT8173,
···
 	writel(val, wrp->base + wrp->master->regs[reg]);
 }
 
+static u32 pwrap_get_fsm_state(struct pmic_wrapper *wrp)
+{
+	u32 val;
+
+	val = pwrap_readl(wrp, PWRAP_WACS2_RDATA);
+	if (HAS_CAP(wrp->master->caps, PWRAP_CAP_ARB))
+		return PWRAP_GET_WACS_ARB_FSM(val);
+	else
+		return PWRAP_GET_WACS_FSM(val);
+}
+
 static bool pwrap_is_fsm_idle(struct pmic_wrapper *wrp)
 {
-	u32 val = pwrap_readl(wrp, PWRAP_WACS2_RDATA);
-
-	return PWRAP_GET_WACS_FSM(val) == PWRAP_WACS_FSM_IDLE;
+	return pwrap_get_fsm_state(wrp) == PWRAP_WACS_FSM_IDLE;
 }
 
 static bool pwrap_is_fsm_vldclr(struct pmic_wrapper *wrp)
 {
-	u32 val = pwrap_readl(wrp, PWRAP_WACS2_RDATA);
-
-	return PWRAP_GET_WACS_FSM(val) == PWRAP_WACS_FSM_WFVLDCLR;
+	return pwrap_get_fsm_state(wrp) == PWRAP_WACS_FSM_WFVLDCLR;
 }
 
 /*
···
 static int pwrap_read16(struct pmic_wrapper *wrp, u32 adr, u32 *rdata)
 {
 	int ret;
+	u32 val;
 
 	ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_idle);
 	if (ret) {
···
 		return ret;
 	}
 
-	pwrap_writel(wrp, (adr >> 1) << 16, PWRAP_WACS2_CMD);
+	if (HAS_CAP(wrp->master->caps, PWRAP_CAP_ARB))
+		val = adr;
+	else
+		val = (adr >> 1) << 16;
+	pwrap_writel(wrp, val, PWRAP_WACS2_CMD);
 
 	ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_vldclr);
 	if (ret)
 		return ret;
 
-	*rdata = PWRAP_GET_WACS_RDATA(pwrap_readl(wrp, PWRAP_WACS2_RDATA));
+	if (HAS_CAP(wrp->master->caps, PWRAP_CAP_ARB))
+		val = pwrap_readl(wrp, PWRAP_SWINF_2_RDATA_31_0);
+	else
+		val = pwrap_readl(wrp, PWRAP_WACS2_RDATA);
+	*rdata = PWRAP_GET_WACS_RDATA(val);
 
 	pwrap_writel(wrp, 1, PWRAP_WACS2_VLDCLR);
 
···
 		return ret;
 	}
 
-	pwrap_writel(wrp, (1 << 31) | ((adr >> 1) << 16) | wdata,
-		     PWRAP_WACS2_CMD);
+	if (HAS_CAP(wrp->master->caps, PWRAP_CAP_ARB)) {
+		pwrap_writel(wrp, wdata, PWRAP_SWINF_2_WDATA_31_0);
+		pwrap_writel(wrp, BIT(29) | adr, PWRAP_WACS2_CMD);
+	} else {
+		pwrap_writel(wrp, BIT(31) | ((adr >> 1) << 16) | wdata,
+			     PWRAP_WACS2_CMD);
+	}
 
 	return 0;
 }
···
 	case PWRAP_MT7622:
 		pwrap_writel(wrp, 0, PWRAP_CIPHER_EN);
 		break;
+	case PWRAP_MT6873:
 	case PWRAP_MT8183:
 		break;
 	}
···
 	.init_soc_specific = NULL,
 };
 
+static const struct pmic_wrapper_type pwrap_mt6873 = {
+	.regs = mt6873_regs,
+	.type = PWRAP_MT6873,
+	.arb_en_all = 0x777f,
+	.int_en_all = BIT(4) | BIT(5),
+	.int1_en_all = 0,
+	.spi_w = PWRAP_MAN_CMD_SPI_WRITE,
+	.wdt_src = PWRAP_WDT_SRC_MASK_ALL,
+	.caps = PWRAP_CAP_ARB,
+	.init_reg_clock = pwrap_common_init_reg_clock,
+	.init_soc_specific = NULL,
+};
+
 static const struct pmic_wrapper_type pwrap_mt7622 = {
 	.regs = mt7622_regs,
 	.type = PWRAP_MT7622,
···
 	.compatible = "mediatek,mt6797-pwrap",
 	.data = &pwrap_mt6797,
 }, {
+	.compatible = "mediatek,mt6873-pwrap",
+	.data = &pwrap_mt6873,
+}, {
 	.compatible = "mediatek,mt7622-pwrap",
 	.data = &pwrap_mt7622,
 }, {
···
 static int pwrap_probe(struct platform_device *pdev)
 {
 	int ret, irq;
+	u32 mask_done;
 	struct pmic_wrapper *wrp;
 	struct device_node *np = pdev->dev.of_node;
 	const struct of_device_id *of_slave_id = NULL;
···
 		}
 	}
 
-	if (!(pwrap_readl(wrp, PWRAP_WACS2_RDATA) & PWRAP_STATE_INIT_DONE0)) {
+	if (HAS_CAP(wrp->master->caps, PWRAP_CAP_ARB))
+		mask_done = PWRAP_STATE_INIT_DONE1;
+	else
+		mask_done = PWRAP_STATE_INIT_DONE0;
+
+	if (!(pwrap_readl(wrp, PWRAP_WACS2_RDATA) & mask_done)) {
 		dev_dbg(wrp->dev, "initialization isn't finished\n");
 		ret = -ENODEV;
 		goto err_out2;
 	}
 
 	/* Initialize watchdog, may not be done by the bootloader */
-	pwrap_writel(wrp, 0xf, PWRAP_WDT_UNIT);
+	if (!HAS_CAP(wrp->master->caps, PWRAP_CAP_ARB))
+		pwrap_writel(wrp, 0xf, PWRAP_WDT_UNIT);
+
 	/*
 	 * Since STAUPD was not used on mt8173 platform,
 	 * so STAUPD of WDT_SRC which should be turned off
···
 	if (HAS_CAP(wrp->master->caps, PWRAP_CAP_WDT_SRC1))
 		pwrap_writel(wrp, wrp->master->wdt_src, PWRAP_WDT_SRC_EN_1);
 
-	pwrap_writel(wrp, 0x1, PWRAP_TIMER_EN);
+	if (HAS_CAP(wrp->master->caps, PWRAP_CAP_ARB))
+		pwrap_writel(wrp, 0x3, PWRAP_TIMER_EN);
+	else
+		pwrap_writel(wrp, 0x1, PWRAP_TIMER_EN);
+
 	pwrap_writel(wrp, wrp->master->int_en_all, PWRAP_INT_EN);
 	/*
 	 * We add INT1 interrupt to handle starvation and request exception
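The MT6873 arbiter support above changes how a WACS command is encoded: the legacy interface packs a half-word register address into the upper bits with the write flag in bit 31 and the write data in the low half, while the arbiter interface takes the byte address directly and uses bit 29 as the write flag, with write data going to a separate SWINF register. A hedged userspace sketch of the two encodings, mirroring the pwrap_read16()/pwrap_write16() hunks (a simplified model, not the driver's exact register semantics):

```c
#include <assert.h>
#include <stdint.h>

#define BIT(n) (1u << (n))

/* Legacy WACS2 encoding: write flag in bit 31, half-word address shifted
 * into bits 30..16, write data in the low 16 bits. */
static uint32_t wacs_cmd_legacy(uint32_t adr, uint32_t wdata, int write)
{
	uint32_t cmd = (adr >> 1) << 16;

	if (write)
		cmd |= BIT(31) | wdata;
	return cmd;
}

/* Arbiter (MT6873-style) encoding: byte address used as-is, write flag in
 * bit 29; the write data is programmed into PWRAP_SWINF_2_WDATA_31_0
 * separately, so it does not appear in the command word. */
static uint32_t wacs_cmd_arb(uint32_t adr, int write)
{
	return (write ? BIT(29) : 0) | adr;
}
```

This is why the arbiter path in pwrap_write16() needs two register writes where the legacy path needs only one.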
+19
drivers/soc/qcom/llcc-qcom.c
···
 	{ LLCC_GPU,      12, 128,  1, 0, 0xf,  0x0, 0, 0, 0, 1, 0 },
 };
 
+static const struct llcc_slice_config sc7280_data[] =  {
+	{ LLCC_CPUSS,    1,  768, 1, 0, 0x3f, 0x0, 0, 0, 0, 1, 1, 0},
+	{ LLCC_MDMHPGRW, 7,  512, 2, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
+	{ LLCC_CMPT,     10, 768, 1, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
+	{ LLCC_GPUHTW,   11, 256, 1, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
+	{ LLCC_GPU,      12, 512, 1, 0, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
+	{ LLCC_MMUHWT,   13, 256, 1, 1, 0x3f, 0x0, 0, 0, 0, 1, 1, 0},
+	{ LLCC_MDMPNG,   21, 768, 0, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
+	{ LLCC_WLHW,     24, 256, 1, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
+	{ LLCC_MODPE,    29, 64,  1, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
+};
+
 static const struct llcc_slice_config sdm845_data[] =  {
 	{ LLCC_CPUSS,    1,  2816, 1, 0, 0xffc, 0x2,   0, 0, 1, 1, 1 },
 	{ LLCC_VIDSC0,   2,  512,  2, 1, 0x0,   0x0f0, 0, 0, 1, 1, 0 },
···
 static const struct qcom_llcc_config sc7180_cfg = {
 	.sct_data	= sc7180_data,
 	.size		= ARRAY_SIZE(sc7180_data),
+	.need_llcc_cfg	= true,
+};
+
+static const struct qcom_llcc_config sc7280_cfg = {
+	.sct_data	= sc7280_data,
+	.size		= ARRAY_SIZE(sc7280_data),
 	.need_llcc_cfg	= true,
 };
 
···
 
 static const struct of_device_id qcom_llcc_of_match[] = {
 	{ .compatible = "qcom,sc7180-llcc", .data = &sc7180_cfg },
+	{ .compatible = "qcom,sc7280-llcc", .data = &sc7280_cfg },
 	{ .compatible = "qcom,sdm845-llcc", .data = &sdm845_cfg },
 	{ .compatible = "qcom,sm8150-llcc", .data = &sm8150_cfg },
 	{ .compatible = "qcom,sm8250-llcc", .data = &sm8250_cfg },
+17
drivers/soc/qcom/mdt_loader.c
···
 			break;
 		}
 
+		if (phdr->p_filesz > phdr->p_memsz) {
+			dev_err(dev,
+				"refusing to load segment %d with p_filesz > p_memsz\n",
+				i);
+			ret = -EINVAL;
+			break;
+		}
+
 		ptr = mem_region + offset;
 
 		if (phdr->p_filesz && phdr->p_offset < fw->size) {
···
 							ptr, phdr->p_filesz);
 			if (ret) {
 				dev_err(dev, "failed to load %s\n", fw_name);
+				break;
+			}
+
+			if (seg_fw->size != phdr->p_filesz) {
+				dev_err(dev,
+					"failed to load segment %d from truncated file %s\n",
+					i, fw_name);
+				release_firmware(seg_fw);
+				ret = -EINVAL;
 				break;
 			}
 
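The two checks added to mdt_loader above guard against a corrupt ELF program header (p_filesz larger than p_memsz, which would overflow the zero-initialized region) and a firmware file shorter than the segment data it claims to carry. A simplified userspace model of the same validation (the struct names here are illustrative stand-ins, not the kernel's ELF types):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-ins for the ELF program header and the firmware blob. */
struct phdr { uint64_t p_offset, p_filesz, p_memsz; };
struct fw   { size_t size; };

/*
 * Mirror the spirit of the checks above: a segment may not carry more file
 * data than it occupies in memory, and the firmware image must be large
 * enough to actually contain the segment's file data.  (The kernel checks
 * truncation via the size returned by request_firmware_into_buf(); here
 * the file size is compared directly.)
 */
static int segment_ok(const struct phdr *p, const struct fw *fw)
{
	if (p->p_filesz > p->p_memsz)
		return 0;	/* corrupt header */
	if (p->p_filesz && p->p_offset + p->p_filesz > fw->size)
		return 0;	/* truncated firmware file */
	return 1;
}
```

Without the first check, a crafted header could make the loader copy more bytes than the reserved memory region accounts for; without the second, a truncated file would silently load garbage.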
+1 -1
drivers/soc/qcom/pdr_interface.c
···
 	if (resp.resp.result != QMI_RESULT_SUCCESS_V01) {
 		pr_err("PDR: %s register listener failed: 0x%x\n",
 		       pds->service_path, resp.resp.error);
-		return ret;
+		return -EREMOTEIO;
 	}
 
 	pds->state = resp.curr_state;
+1
drivers/soc/qcom/qcom_aoss.c
···
 
 static const struct of_device_id qmp_dt_match[] = {
 	{ .compatible = "qcom,sc7180-aoss-qmp", },
+	{ .compatible = "qcom,sc7280-aoss-qmp", },
 	{ .compatible = "qcom,sdm845-aoss-qmp", },
 	{ .compatible = "qcom,sm8150-aoss-qmp", },
 	{ .compatible = "qcom,sm8250-aoss-qmp", },
+4 -4
drivers/soc/qcom/qmi_encdec.c
···
 
 /**
  * qmi_decode_struct_elem() - Decodes elements of struct data type
- * @ei_array: Struct info array descibing the struct element.
+ * @ei_array: Struct info array describing the struct element.
  * @buf_dst: Buffer to store the decoded element.
  * @buf_src: Buffer containing the elements in QMI wire format.
  * @elem_len: Number of elements to be decoded.
- * @tlv_len: Total size of the encoded inforation corresponding to
+ * @tlv_len: Total size of the encoded information corresponding to
  *           this struct element.
  * @dec_level: Depth of the nested structure from the main structure.
  *
···
 
 /**
  * qmi_decode_string_elem() - Decodes elements of string data type
- * @ei_array: Struct info array descibing the string element.
+ * @ei_array: Struct info array describing the string element.
  * @buf_dst: Buffer to store the decoded element.
  * @buf_src: Buffer containing the elements in QMI wire format.
- * @tlv_len: Total size of the encoded inforation corresponding to
+ * @tlv_len: Total size of the encoded information corresponding to
  *           this string element.
  * @dec_level: Depth of the string element from the main structure.
  *
+22 -43
drivers/soc/qcom/rpmh-rsc.c
···
 }
 
 /**
- * tcs_is_free() - Return if a TCS is totally free.
- * @drv: The RSC controller.
- * @tcs_id: The global ID of this TCS.
- *
- * Returns true if nobody has claimed this TCS (by setting tcs_in_use).
- *
- * Context: Must be called with the drv->lock held.
- *
- * Return: true if the given TCS is free.
- */
-static bool tcs_is_free(struct rsc_drv *drv, int tcs_id)
-{
-	return !test_bit(tcs_id, drv->tcs_in_use);
-}
-
-/**
  * tcs_invalidate() - Invalidate all TCSes of the given type (sleep or wake).
  * @drv: The RSC controller.
  * @type: SLEEP_TCS or WAKE_TCS
···
 
 	irq_status = readl_relaxed(drv->tcs_base + RSC_DRV_IRQ_STATUS);
 
-	for_each_set_bit(i, &irq_status, BITS_PER_LONG) {
+	for_each_set_bit(i, &irq_status, BITS_PER_TYPE(u32)) {
 		req = get_req_from_tcs(drv, i);
-		if (!req) {
-			WARN_ON(1);
+		if (WARN_ON(!req))
 			goto skip;
-		}
 
 		err = 0;
 		for (j = 0; j < req->num_cmds; j++) {
···
  *
  * Return: 0 if nothing in flight or -EBUSY if we should try again later.
  *         The caller must re-enable interrupts between tries since that's
- *         the only way tcs_is_free() will ever return true and the only way
+ *         the only way tcs_in_use will ever be updated and the only way
  *         RSC_DRV_CMD_ENABLE will ever be cleared.
  */
 static int check_for_req_inflight(struct rsc_drv *drv, struct tcs_group *tcs,
···
 {
 	unsigned long curr_enabled;
 	u32 addr;
-	int i, j, k;
-	int tcs_id = tcs->offset;
+	int j, k;
+	int i = tcs->offset;
 
-	for (i = 0; i < tcs->num_tcs; i++, tcs_id++) {
-		if (tcs_is_free(drv, tcs_id))
-			continue;
-
-		curr_enabled = read_tcs_reg(drv, RSC_DRV_CMD_ENABLE, tcs_id);
+	for_each_set_bit_from(i, drv->tcs_in_use, tcs->offset + tcs->num_tcs) {
+		curr_enabled = read_tcs_reg(drv, RSC_DRV_CMD_ENABLE, i);
 
 		for_each_set_bit(j, &curr_enabled, MAX_CMDS_PER_TCS) {
-			addr = read_tcs_cmd(drv, RSC_DRV_CMD_ADDR, tcs_id, j);
+			addr = read_tcs_cmd(drv, RSC_DRV_CMD_ADDR, i, j);
 			for (k = 0; k < msg->num_cmds; k++) {
 				if (addr == msg->cmds[k].addr)
 					return -EBUSY;
···
  *
  * Must be called with the drv->lock held since that protects tcs_in_use.
  *
- * Return: The first tcs that's free.
+ * Return: The first tcs that's free or -EBUSY if all in use.
  */
 static int find_free_tcs(struct tcs_group *tcs)
 {
-	int i;
+	const struct rsc_drv *drv = tcs->drv;
+	unsigned long i;
+	unsigned long max = tcs->offset + tcs->num_tcs;
 
-	for (i = 0; i < tcs->num_tcs; i++) {
-		if (tcs_is_free(tcs->drv, tcs->offset + i))
-			return tcs->offset + i;
-	}
+	i = find_next_zero_bit(drv->tcs_in_use, max, tcs->offset);
+	if (i >= max)
+		return -EBUSY;
 
-	return -EBUSY;
+	return i;
 }
 
 /**
···
  */
 static bool rpmh_rsc_ctrlr_is_busy(struct rsc_drv *drv)
 {
-	int m;
-	struct tcs_group *tcs = &drv->tcs[ACTIVE_TCS];
+	unsigned long set;
+	const struct tcs_group *tcs = &drv->tcs[ACTIVE_TCS];
+	unsigned long max;
 
 	/*
 	 * If we made an active request on a RSC that does not have a
···
 	if (!tcs->num_tcs)
 		tcs = &drv->tcs[WAKE_TCS];
 
-	for (m = tcs->offset; m < tcs->offset + tcs->num_tcs; m++) {
-		if (!tcs_is_free(drv, m))
-			return true;
-	}
+	max = tcs->offset + tcs->num_tcs;
+	set = find_next_bit(drv->tcs_in_use, max, tcs->offset);
 
-	return false;
+	return set < max;
 }
 
 /**
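The rework above drops the tcs_is_free() helper in favour of direct bitmap searches: find_free_tcs() becomes a single find_next_zero_bit() over the group's slice of tcs_in_use, and the busy check becomes a find_next_bit(). A userspace model of that search, with a plain 32-bit word standing in for the kernel bitmap (names here mirror, but are not, the driver's API):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Userspace model of the reworked find_free_tcs(): scan the in-use bitmap
 * from 'offset' for the first clear bit below 'offset + num', the way
 * find_next_zero_bit(bitmap, max, offset) does.  Returns the TCS id, or
 * -1 when every TCS in the group is claimed (the driver returns -EBUSY).
 */
static int find_free_tcs(uint32_t in_use, int offset, int num)
{
	for (int i = offset; i < offset + num; i++)
		if (!(in_use & (1u << i)))
			return i;
	return -1;
}
```

For a group of three TCSes starting at global id 2, bits 2..4 of the bitmap are the only ones consulted; bits owned by other groups are never scanned.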
+56
drivers/soc/qcom/rpmhpd.c
···
 	.num_pds = ARRAY_SIZE(sm8250_rpmhpds),
 };
 
+/* SM8350 Power domains */
+static struct rpmhpd sm8350_mxc_ao;
+static struct rpmhpd sm8350_mxc = {
+	.pd = { .name = "mxc", },
+	.peer = &sm8350_mxc_ao,
+	.res_name = "mxc.lvl",
+};
+
+static struct rpmhpd sm8350_mxc_ao = {
+	.pd = { .name = "mxc_ao", },
+	.active_only = true,
+	.peer = &sm8350_mxc,
+	.res_name = "mxc.lvl",
+};
+
+static struct rpmhpd *sm8350_rpmhpds[] = {
+	[SM8350_CX] = &sdm845_cx,
+	[SM8350_CX_AO] = &sdm845_cx_ao,
+	[SM8350_EBI] = &sdm845_ebi,
+	[SM8350_GFX] = &sdm845_gfx,
+	[SM8350_LCX] = &sdm845_lcx,
+	[SM8350_LMX] = &sdm845_lmx,
+	[SM8350_MMCX] = &sm8150_mmcx,
+	[SM8350_MMCX_AO] = &sm8150_mmcx_ao,
+	[SM8350_MX] = &sdm845_mx,
+	[SM8350_MX_AO] = &sdm845_mx_ao,
+	[SM8350_MXC] = &sm8350_mxc,
+	[SM8350_MXC_AO] = &sm8350_mxc_ao,
+	[SM8350_MSS] = &sdm845_mss,
+};
+
+static const struct rpmhpd_desc sm8350_desc = {
+	.rpmhpds = sm8350_rpmhpds,
+	.num_pds = ARRAY_SIZE(sm8350_rpmhpds),
+};
+
 /* SC7180 RPMH powerdomains */
 static struct rpmhpd *sc7180_rpmhpds[] = {
 	[SC7180_CX] = &sdm845_cx,
···
 	.num_pds = ARRAY_SIZE(sc7180_rpmhpds),
 };
 
+/* SC7280 RPMH powerdomains */
+static struct rpmhpd *sc7280_rpmhpds[] = {
+	[SC7280_CX] = &sdm845_cx,
+	[SC7280_CX_AO] = &sdm845_cx_ao,
+	[SC7280_EBI] = &sdm845_ebi,
+	[SC7280_GFX] = &sdm845_gfx,
+	[SC7280_MX] = &sdm845_mx,
+	[SC7280_MX_AO] = &sdm845_mx_ao,
+	[SC7280_LMX] = &sdm845_lmx,
+	[SC7280_LCX] = &sdm845_lcx,
+	[SC7280_MSS] = &sdm845_mss,
+};
+
+static const struct rpmhpd_desc sc7280_desc = {
+	.rpmhpds = sc7280_rpmhpds,
+	.num_pds = ARRAY_SIZE(sc7280_rpmhpds),
+};
+
 static const struct of_device_id rpmhpd_match_table[] = {
 	{ .compatible = "qcom,sc7180-rpmhpd", .data = &sc7180_desc },
+	{ .compatible = "qcom,sc7280-rpmhpd", .data = &sc7280_desc },
 	{ .compatible = "qcom,sdm845-rpmhpd", .data = &sdm845_desc },
 	{ .compatible = "qcom,sdx55-rpmhpd", .data = &sdx55_desc},
 	{ .compatible = "qcom,sm8150-rpmhpd", .data = &sm8150_desc },
 	{ .compatible = "qcom,sm8250-rpmhpd", .data = &sm8250_desc },
+	{ .compatible = "qcom,sm8350-rpmhpd", .data = &sm8350_desc },
 	{ }
 };
 MODULE_DEVICE_TABLE(of, rpmhpd_match_table);
+1 -1
drivers/soc/qcom/smem.c
···
 #define SMEM_GLOBAL_HOST	0xfffe
 
 /* Max number of processors/hosts in a system */
-#define SMEM_HOST_COUNT		11
+#define SMEM_HOST_COUNT		14
 
 /**
  * struct smem_proc_comm - proc_comm communication struct (legacy)
+10 -5
drivers/soc/qcom/wcnss_ctrl.c
···
 {
 	struct wcnss_download_nv_req *req;
 	const struct firmware *fw;
+	struct device *dev = wcnss->dev;
+	const char *nvbin = NVBIN_FILE;
 	const void *data;
 	ssize_t left;
 	int ret;
···
 	if (!req)
 		return -ENOMEM;
 
-	ret = request_firmware(&fw, NVBIN_FILE, wcnss->dev);
+	ret = of_property_read_string(dev->of_node, "firmware-name", &nvbin);
+	if (ret < 0 && ret != -EINVAL)
+		goto free_req;
+
+	ret = request_firmware(&fw, nvbin, dev);
 	if (ret < 0) {
-		dev_err(wcnss->dev, "Failed to load nv file %s: %d\n",
-			NVBIN_FILE, ret);
+		dev_err(dev, "Failed to load nv file %s: %d\n", nvbin, ret);
 		goto free_req;
 	}
 
···
 
 	ret = rpmsg_send(wcnss->channel, req, req->hdr.len);
 	if (ret < 0) {
-		dev_err(wcnss->dev, "failed to send smd packet\n");
+		dev_err(dev, "failed to send smd packet\n");
 		goto release_fw;
 	}
 
···
 
 	ret = wait_for_completion_timeout(&wcnss->ack, WCNSS_REQUEST_TIMEOUT);
 	if (!ret) {
-		dev_err(wcnss->dev, "timeout waiting for nv upload ack\n");
+		dev_err(dev, "timeout waiting for nv upload ack\n");
 		ret = -ETIMEDOUT;
 	} else {
 		*expect_cbc = wcnss->ack_status == WCNSS_ACK_COLD_BOOTING;
+2 -2
drivers/soc/renesas/rmobile-sysc.c
···
 #include <linux/delay.h>
 #include <linux/of.h>
 #include <linux/of_address.h>
-#include <linux/of_platform.h>
-#include <linux/platform_device.h>
 #include <linux/pm.h>
 #include <linux/pm_clock.h>
 #include <linux/pm_domain.h>
···
 			of_node_put(np);
 			break;
 		}
+
+		fwnode_dev_initialized(&np->fwnode, true);
 	}
 
 	put_special_pds();
+251 -8
drivers/soc/tegra/pmc.c
···
 #include <linux/platform_device.h>
 #include <linux/pm_domain.h>
 #include <linux/reboot.h>
+#include <linux/regmap.h>
 #include <linux/reset.h>
 #include <linux/seq_file.h>
 #include <linux/slab.h>
···
 
 #define PMC_PWR_DET_VALUE		0xe4
 
+#define PMC_USB_DEBOUNCE_DEL		0xec
+#define PMC_USB_AO			0xf0
+
 #define PMC_SCRATCH41			0x140
 
 #define PMC_WAKE2_MASK			0x160
···
 #define IO_DPD2_STATUS			0x1c4
 #define SEL_DPD_TIM			0x1c8
 
+#define PMC_UTMIP_UHSIC_TRIGGERS	0x1ec
+#define PMC_UTMIP_UHSIC_SAVED_STATE	0x1f0
+
+#define PMC_UTMIP_TERM_PAD_CFG		0x1f8
+#define PMC_UTMIP_UHSIC_SLEEP_CFG	0x1fc
+#define PMC_UTMIP_UHSIC_FAKE		0x218
+
 #define PMC_SCRATCH54			0x258
 #define PMC_SCRATCH54_DATA_SHIFT	8
 #define PMC_SCRATCH54_ADDR_SHIFT	0
···
 #define PMC_SCRATCH55_CHECKSUM_SHIFT	16
 #define PMC_SCRATCH55_I2CSLV1_SHIFT	0
 
+#define PMC_UTMIP_UHSIC_LINE_WAKEUP	0x26c
+
+#define PMC_UTMIP_BIAS_MASTER_CNTRL	0x270
+#define PMC_UTMIP_MASTER_CONFIG		0x274
+#define PMC_UTMIP_UHSIC2_TRIGGERS	0x27c
+#define PMC_UTMIP_MASTER2_CONFIG	0x29c
+
 #define GPU_RG_CNTRL			0x2d4
 
+#define PMC_UTMIP_PAD_CFG0		0x4c0
+#define PMC_UTMIP_UHSIC_SLEEP_CFG1	0x4d0
+#define PMC_UTMIP_SLEEPWALK_P3		0x4e0
+
 /* Tegra186 and later */
 #define WAKE_AOWAKE_CNTRL(x)		(0x000 + ((x) << 2))
 #define WAKE_AOWAKE_CNTRL_LEVEL		(1 << 3)
···
 	unsigned int id;
 	struct clk **clks;
 	unsigned int num_clks;
+	unsigned long *clk_rates;
 	struct reset_control *reset;
 };
···
 			    bool invert);
 	int (*irq_set_wake)(struct irq_data *data, unsigned int on);
 	int (*irq_set_type)(struct irq_data *data, unsigned int type);
+	int (*powergate_set)(struct tegra_pmc *pmc, unsigned int id,
+			     bool new_state);
 
 	const char * const *reset_sources;
 	unsigned int num_reset_sources;
···
 	const struct pmc_clk_init_data *pmc_clks_data;
 	unsigned int num_pmc_clks;
 	bool has_blink_output;
+	bool has_usb_sleepwalk;
 };
 
 /**
···
 	return -ENODEV;
 }
 
+static int tegra20_powergate_set(struct tegra_pmc *pmc, unsigned int id,
+				 bool new_state)
+{
+	unsigned int retries = 100;
+	bool status;
+	int ret;
+
+	/*
+	 * As per TRM documentation, the toggle command will be dropped by PMC
+	 * if there is contention with a HW-initiated toggling (i.e. CPU core
+	 * power-gated), the command should be retried in that case.
+	 */
+	do {
+		tegra_pmc_writel(pmc, PWRGATE_TOGGLE_START | id, PWRGATE_TOGGLE);
+
+		/* wait for PMC to execute the command */
+		ret = readx_poll_timeout(tegra_powergate_state, id, status,
+					 status == new_state, 1, 10);
+	} while (ret == -ETIMEDOUT && retries--);
+
+	return ret;
+}
+
+static inline bool tegra_powergate_toggle_ready(struct tegra_pmc *pmc)
+{
+	return !(tegra_pmc_readl(pmc, PWRGATE_TOGGLE) & PWRGATE_TOGGLE_START);
+}
+
+static int tegra114_powergate_set(struct tegra_pmc *pmc, unsigned int id,
+				  bool new_state)
+{
+	bool status;
+	int err;
+
+	/* wait while PMC power gating is contended */
+	err = readx_poll_timeout(tegra_powergate_toggle_ready, pmc, status,
+				 status == true, 1, 100);
+	if (err)
+		return err;
+
+	tegra_pmc_writel(pmc, PWRGATE_TOGGLE_START | id, PWRGATE_TOGGLE);
+
+	/* wait for PMC to accept the command */
+	err = readx_poll_timeout(tegra_powergate_toggle_ready, pmc, status,
+				 status == true, 1, 100);
+	if (err)
+		return err;
+
+	/* wait for PMC to execute the command */
+	err = readx_poll_timeout(tegra_powergate_state, id, status,
+				 status == new_state, 10, 100000);
571 + if (err) 572 + return err; 573 + 574 + return 0; 575 + } 576 + 545 577 /** 546 578 * tegra_powergate_set() - set the state of a partition 547 579 * @pmc: power management controller ··· 608 526 static int tegra_powergate_set(struct tegra_pmc *pmc, unsigned int id, 609 527 bool new_state) 610 528 { 611 - bool status; 612 529 int err; 613 530 614 531 if (id == TEGRA_POWERGATE_3D && pmc->soc->has_gpu_clamps) ··· 620 539 return 0; 621 540 } 622 541 623 - tegra_pmc_writel(pmc, PWRGATE_TOGGLE_START | id, PWRGATE_TOGGLE); 624 - 625 - err = readx_poll_timeout(tegra_powergate_state, id, status, 626 - status == new_state, 10, 100000); 542 + err = pmc->soc->powergate_set(pmc, id, new_state); 627 543 628 544 mutex_unlock(&pmc->powergates_lock); 629 545 ··· 660 582 661 583 out: 662 584 mutex_unlock(&pmc->powergates_lock); 585 + 586 + return 0; 587 + } 588 + 589 + static int tegra_powergate_prepare_clocks(struct tegra_powergate *pg) 590 + { 591 + unsigned long safe_rate = 100 * 1000 * 1000; 592 + unsigned int i; 593 + int err; 594 + 595 + for (i = 0; i < pg->num_clks; i++) { 596 + pg->clk_rates[i] = clk_get_rate(pg->clks[i]); 597 + 598 + if (!pg->clk_rates[i]) { 599 + err = -EINVAL; 600 + goto out; 601 + } 602 + 603 + if (pg->clk_rates[i] <= safe_rate) 604 + continue; 605 + 606 + /* 607 + * We don't know whether voltage state is okay for the 608 + * current clock rate, hence it's better to temporally 609 + * switch clock to a safe rate which is suitable for 610 + * all voltages, before enabling the clock. 
611 + */ 612 + err = clk_set_rate(pg->clks[i], safe_rate); 613 + if (err) 614 + goto out; 615 + } 616 + 617 + return 0; 618 + 619 + out: 620 + while (i--) 621 + clk_set_rate(pg->clks[i], pg->clk_rates[i]); 622 + 623 + return err; 624 + } 625 + 626 + static int tegra_powergate_unprepare_clocks(struct tegra_powergate *pg) 627 + { 628 + unsigned int i; 629 + int err; 630 + 631 + for (i = 0; i < pg->num_clks; i++) { 632 + err = clk_set_rate(pg->clks[i], pg->clk_rates[i]); 633 + if (err) 634 + return err; 635 + } 663 636 664 637 return 0; 665 638 } ··· 765 636 766 637 usleep_range(10, 20); 767 638 639 + err = tegra_powergate_prepare_clocks(pg); 640 + if (err) 641 + goto powergate_off; 642 + 768 643 err = tegra_powergate_enable_clocks(pg); 769 644 if (err) 770 - goto disable_clks; 645 + goto unprepare_clks; 771 646 772 647 usleep_range(10, 20); 773 648 ··· 795 662 if (disable_clocks) 796 663 tegra_powergate_disable_clocks(pg); 797 664 665 + err = tegra_powergate_unprepare_clocks(pg); 666 + if (err) 667 + return err; 668 + 798 669 return 0; 799 670 800 671 disable_clks: 801 672 tegra_powergate_disable_clocks(pg); 802 673 usleep_range(10, 20); 674 + 675 + unprepare_clks: 676 + tegra_powergate_unprepare_clocks(pg); 803 677 804 678 powergate_off: 805 679 tegra_powergate_set(pg->pmc, pg->id, false); ··· 818 678 { 819 679 int err; 820 680 821 - err = tegra_powergate_enable_clocks(pg); 681 + err = tegra_powergate_prepare_clocks(pg); 822 682 if (err) 823 683 return err; 684 + 685 + err = tegra_powergate_enable_clocks(pg); 686 + if (err) 687 + goto unprepare_clks; 824 688 825 689 usleep_range(10, 20); 826 690 ··· 842 698 if (err) 843 699 goto assert_resets; 844 700 701 + err = tegra_powergate_unprepare_clocks(pg); 702 + if (err) 703 + return err; 704 + 845 705 return 0; 846 706 847 707 assert_resets: ··· 856 708 857 709 disable_clks: 858 710 tegra_powergate_disable_clocks(pg); 711 + 712 + unprepare_clks: 713 + tegra_powergate_unprepare_clocks(pg); 859 714 860 715 return err; 861 
716 } ··· 890 739 891 740 err = reset_control_acquire(pg->reset); 892 741 if (err < 0) { 893 - pr_err("failed to acquire resets: %d\n", err); 742 + dev_err(dev, "failed to acquire resets for PM domain %s: %d\n", 743 + pg->genpd.name, err); 894 744 return err; 895 745 } 896 746 ··· 978 826 if (!pg) 979 827 return -ENOMEM; 980 828 829 + pg->clk_rates = kzalloc(sizeof(*pg->clk_rates), GFP_KERNEL); 830 + if (!pg->clk_rates) { 831 + kfree(pg->clks); 832 + return -ENOMEM; 833 + } 834 + 981 835 pg->id = id; 982 836 pg->clks = &clk; 983 837 pg->num_clks = 1; ··· 995 837 dev_err(pmc->dev, "failed to turn on partition %d: %d\n", id, 996 838 err); 997 839 840 + kfree(pg->clk_rates); 998 841 kfree(pg); 999 842 1000 843 return err; ··· 1146 987 if (!pg->clks) 1147 988 return -ENOMEM; 1148 989 990 + pg->clk_rates = kcalloc(count, sizeof(*pg->clk_rates), GFP_KERNEL); 991 + if (!pg->clk_rates) { 992 + kfree(pg->clks); 993 + return -ENOMEM; 994 + } 995 + 1149 996 for (i = 0; i < count; i++) { 1150 997 pg->clks[i] = of_clk_get(np, i); 1151 998 if (IS_ERR(pg->clks[i])) { ··· 1168 1003 while (i--) 1169 1004 clk_put(pg->clks[i]); 1170 1005 1006 + kfree(pg->clk_rates); 1171 1007 kfree(pg->clks); 1172 1008 1173 1009 return err; ··· 2609 2443 err); 2610 2444 } 2611 2445 2446 + static const struct regmap_range pmc_usb_sleepwalk_ranges[] = { 2447 + regmap_reg_range(PMC_USB_DEBOUNCE_DEL, PMC_USB_AO), 2448 + regmap_reg_range(PMC_UTMIP_UHSIC_TRIGGERS, PMC_UTMIP_UHSIC_SAVED_STATE), 2449 + regmap_reg_range(PMC_UTMIP_TERM_PAD_CFG, PMC_UTMIP_UHSIC_FAKE), 2450 + regmap_reg_range(PMC_UTMIP_UHSIC_LINE_WAKEUP, PMC_UTMIP_UHSIC_LINE_WAKEUP), 2451 + regmap_reg_range(PMC_UTMIP_BIAS_MASTER_CNTRL, PMC_UTMIP_MASTER_CONFIG), 2452 + regmap_reg_range(PMC_UTMIP_UHSIC2_TRIGGERS, PMC_UTMIP_MASTER2_CONFIG), 2453 + regmap_reg_range(PMC_UTMIP_PAD_CFG0, PMC_UTMIP_UHSIC_SLEEP_CFG1), 2454 + regmap_reg_range(PMC_UTMIP_SLEEPWALK_P3, PMC_UTMIP_SLEEPWALK_P3), 2455 + }; 2456 + 2457 + static const struct regmap_access_table 
pmc_usb_sleepwalk_table = { 2458 + .yes_ranges = pmc_usb_sleepwalk_ranges, 2459 + .n_yes_ranges = ARRAY_SIZE(pmc_usb_sleepwalk_ranges), 2460 + }; 2461 + 2462 + static int tegra_pmc_regmap_readl(void *context, unsigned int offset, unsigned int *value) 2463 + { 2464 + struct tegra_pmc *pmc = context; 2465 + 2466 + *value = tegra_pmc_readl(pmc, offset); 2467 + return 0; 2468 + } 2469 + 2470 + static int tegra_pmc_regmap_writel(void *context, unsigned int offset, unsigned int value) 2471 + { 2472 + struct tegra_pmc *pmc = context; 2473 + 2474 + tegra_pmc_writel(pmc, value, offset); 2475 + return 0; 2476 + } 2477 + 2478 + static const struct regmap_config usb_sleepwalk_regmap_config = { 2479 + .name = "usb_sleepwalk", 2480 + .reg_bits = 32, 2481 + .val_bits = 32, 2482 + .reg_stride = 4, 2483 + .fast_io = true, 2484 + .rd_table = &pmc_usb_sleepwalk_table, 2485 + .wr_table = &pmc_usb_sleepwalk_table, 2486 + .reg_read = tegra_pmc_regmap_readl, 2487 + .reg_write = tegra_pmc_regmap_writel, 2488 + }; 2489 + 2490 + static int tegra_pmc_regmap_init(struct tegra_pmc *pmc) 2491 + { 2492 + struct regmap *regmap; 2493 + int err; 2494 + 2495 + if (pmc->soc->has_usb_sleepwalk) { 2496 + regmap = devm_regmap_init(pmc->dev, NULL, pmc, &usb_sleepwalk_regmap_config); 2497 + if (IS_ERR(regmap)) { 2498 + err = PTR_ERR(regmap); 2499 + dev_err(pmc->dev, "failed to allocate register map (%d)\n", err); 2500 + return err; 2501 + } 2502 + } 2503 + 2504 + return 0; 2505 + } 2506 + 2612 2507 static int tegra_pmc_probe(struct platform_device *pdev) 2613 2508 { 2614 2509 void __iomem *base; ··· 2773 2546 2774 2547 err = tegra_pmc_pinctrl_init(pmc); 2775 2548 if (err) 2549 + goto cleanup_restart_handler; 2550 + 2551 + err = tegra_pmc_regmap_init(pmc); 2552 + if (err < 0) 2776 2553 goto cleanup_restart_handler; 2777 2554 2778 2555 err = tegra_powergate_init(pmc, pdev->dev.of_node); ··· 2930 2699 .regs = &tegra20_pmc_regs, 2931 2700 .init = tegra20_pmc_init, 2932 2701 .setup_irq_polarity = 
tegra20_pmc_setup_irq_polarity, 2702 + .powergate_set = tegra20_powergate_set, 2933 2703 .reset_sources = NULL, 2934 2704 .num_reset_sources = 0, 2935 2705 .reset_levels = NULL, ··· 2938 2706 .pmc_clks_data = NULL, 2939 2707 .num_pmc_clks = 0, 2940 2708 .has_blink_output = true, 2709 + .has_usb_sleepwalk = false, 2941 2710 }; 2942 2711 2943 2712 static const char * const tegra30_powergates[] = { ··· 2990 2757 .regs = &tegra20_pmc_regs, 2991 2758 .init = tegra20_pmc_init, 2992 2759 .setup_irq_polarity = tegra20_pmc_setup_irq_polarity, 2760 + .powergate_set = tegra20_powergate_set, 2993 2761 .reset_sources = tegra30_reset_sources, 2994 2762 .num_reset_sources = ARRAY_SIZE(tegra30_reset_sources), 2995 2763 .reset_levels = NULL, ··· 2998 2764 .pmc_clks_data = tegra_pmc_clks_data, 2999 2765 .num_pmc_clks = ARRAY_SIZE(tegra_pmc_clks_data), 3000 2766 .has_blink_output = true, 2767 + .has_usb_sleepwalk = false, 3001 2768 }; 3002 2769 3003 2770 static const char * const tegra114_powergates[] = { ··· 3046 2811 .regs = &tegra20_pmc_regs, 3047 2812 .init = tegra20_pmc_init, 3048 2813 .setup_irq_polarity = tegra20_pmc_setup_irq_polarity, 2814 + .powergate_set = tegra114_powergate_set, 3049 2815 .reset_sources = tegra30_reset_sources, 3050 2816 .num_reset_sources = ARRAY_SIZE(tegra30_reset_sources), 3051 2817 .reset_levels = NULL, ··· 3054 2818 .pmc_clks_data = tegra_pmc_clks_data, 3055 2819 .num_pmc_clks = ARRAY_SIZE(tegra_pmc_clks_data), 3056 2820 .has_blink_output = true, 2821 + .has_usb_sleepwalk = false, 3057 2822 }; 3058 2823 3059 2824 static const char * const tegra124_powergates[] = { ··· 3162 2925 .regs = &tegra20_pmc_regs, 3163 2926 .init = tegra20_pmc_init, 3164 2927 .setup_irq_polarity = tegra20_pmc_setup_irq_polarity, 2928 + .powergate_set = tegra114_powergate_set, 3165 2929 .reset_sources = tegra30_reset_sources, 3166 2930 .num_reset_sources = ARRAY_SIZE(tegra30_reset_sources), 3167 2931 .reset_levels = NULL, ··· 3170 2932 .pmc_clks_data = tegra_pmc_clks_data, 3171 
2933 .num_pmc_clks = ARRAY_SIZE(tegra_pmc_clks_data), 3172 2934 .has_blink_output = true, 2935 + .has_usb_sleepwalk = true, 3173 2936 }; 3174 2937 3175 2938 static const char * const tegra210_powergates[] = { ··· 3287 3048 .regs = &tegra20_pmc_regs, 3288 3049 .init = tegra20_pmc_init, 3289 3050 .setup_irq_polarity = tegra20_pmc_setup_irq_polarity, 3051 + .powergate_set = tegra114_powergate_set, 3290 3052 .irq_set_wake = tegra210_pmc_irq_set_wake, 3291 3053 .irq_set_type = tegra210_pmc_irq_set_type, 3292 3054 .reset_sources = tegra210_reset_sources, ··· 3299 3059 .pmc_clks_data = tegra_pmc_clks_data, 3300 3060 .num_pmc_clks = ARRAY_SIZE(tegra_pmc_clks_data), 3301 3061 .has_blink_output = true, 3062 + .has_usb_sleepwalk = true, 3302 3063 }; 3303 3064 3304 3065 #define TEGRA186_IO_PAD_TABLE(_pad) \ ··· 3455 3214 .pmc_clks_data = NULL, 3456 3215 .num_pmc_clks = 0, 3457 3216 .has_blink_output = false, 3217 + .has_usb_sleepwalk = false, 3458 3218 }; 3459 3219 3460 3220 #define TEGRA194_IO_PAD_TABLE(_pad) \ ··· 3589 3347 .pmc_clks_data = NULL, 3590 3348 .num_pmc_clks = 0, 3591 3349 .has_blink_output = false, 3350 + .has_usb_sleepwalk = false, 3592 3351 }; 3593 3352 3594 3353 static const struct tegra_pmc_regs tegra234_pmc_regs = {
drivers/soc/tegra/regulators-tegra30.c (+1 -1)
···
	 * survive the voltage drop if it's running on a higher frequency.
	 */
	if (!cpu_min_uV_consumers)
-		cpu_min_uV = cpu_uV;
+		cpu_min_uV = max(cpu_uV, cpu_min_uV);

	/*
	 * Bootloader shall set up voltages correctly, but if it
drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c (+1 -1)
···
		return -ENOENT;
	}

-	drvdata->fw = rpi_firmware_get(fw_node);
+	drvdata->fw = devm_rpi_firmware_get(&pdev->dev, fw_node);
	of_node_put(fw_node);
	if (!drvdata->fw)
		return -EPROBE_DEFER;
drivers/tee/optee/Makefile (+3)
···
 optee-objs += supp.o
 optee-objs += shm_pool.o
 optee-objs += device.o
+
+# for tracing framework to find optee_trace.h
+CFLAGS_call.o := -I$(src)
drivers/tee/optee/call.c (+4)
···
 #include <linux/uaccess.h>
 #include "optee_private.h"
 #include "optee_smc.h"
+#define CREATE_TRACE_POINTS
+#include "optee_trace.h"

 struct optee_call_waiter {
 	struct list_head list_node;
···
 	while (true) {
 		struct arm_smccc_res res;

+		trace_optee_invoke_fn_begin(&param);
 		optee->invoke_fn(param.a0, param.a1, param.a2, param.a3,
 				 param.a4, param.a5, param.a6, param.a7,
 				 &res);
+		trace_optee_invoke_fn_end(&param, &res);

 		if (res.a0 == OPTEE_SMC_RETURN_ETHREAD_LIMIT) {
 			/*
drivers/tee/optee/core.c (-10)
···
 			return rc;
 		p->u.memref.shm_offs = mp->u.tmem.buf_ptr - pa;
 		p->u.memref.shm = shm;
-
-		/* Check that the memref is covered by the shm object */
-		if (p->u.memref.size) {
-			size_t o = p->u.memref.shm_offs +
-				   p->u.memref.size - 1;
-
-			rc = tee_shm_get_pa(shm, o, NULL);
-			if (rc)
-				return rc;
-		}
 		break;
 	case OPTEE_MSG_ATTR_TYPE_RMEM_INPUT:
 	case OPTEE_MSG_ATTR_TYPE_RMEM_OUTPUT:
drivers/tee/optee/optee_trace.h (+67, new file)
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * optee trace points
+ *
+ * Copyright (C) 2021 Synaptics Incorporated
+ * Author: Jisheng Zhang <jszhang@kernel.org>
+ */
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM optee
+
+#if !defined(_TRACE_OPTEE_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_OPTEE_H
+
+#include <linux/arm-smccc.h>
+#include <linux/tracepoint.h>
+#include "optee_private.h"
+
+TRACE_EVENT(optee_invoke_fn_begin,
+	TP_PROTO(struct optee_rpc_param *param),
+	TP_ARGS(param),
+
+	TP_STRUCT__entry(
+		__field(void *, param)
+		__array(u32, args, 8)
+	),
+
+	TP_fast_assign(
+		__entry->param = param;
+		BUILD_BUG_ON(sizeof(*param) < sizeof(__entry->args));
+		memcpy(__entry->args, param, sizeof(__entry->args));
+	),
+
+	TP_printk("param=%p (%x, %x, %x, %x, %x, %x, %x, %x)", __entry->param,
+		  __entry->args[0], __entry->args[1], __entry->args[2],
+		  __entry->args[3], __entry->args[4], __entry->args[5],
+		  __entry->args[6], __entry->args[7])
+);
+
+TRACE_EVENT(optee_invoke_fn_end,
+	TP_PROTO(struct optee_rpc_param *param, struct arm_smccc_res *res),
+	TP_ARGS(param, res),
+
+	TP_STRUCT__entry(
+		__field(void *, param)
+		__array(unsigned long, rets, 4)
+	),
+
+	TP_fast_assign(
+		__entry->param = param;
+		BUILD_BUG_ON(sizeof(*res) < sizeof(__entry->rets));
+		memcpy(__entry->rets, res, sizeof(__entry->rets));
+	),
+
+	TP_printk("param=%p ret (%lx, %lx, %lx, %lx)", __entry->param,
+		  __entry->rets[0], __entry->rets[1], __entry->rets[2],
+		  __entry->rets[3])
+);
+#endif /* _TRACE_OPTEE_H */
+
+#undef TRACE_INCLUDE_PATH
+#define TRACE_INCLUDE_PATH .
+#undef TRACE_INCLUDE_FILE
+#define TRACE_INCLUDE_FILE optee_trace
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
drivers/tty/serial/ucc_uart.c (+62 -62)
··· 261 261 struct qe_bd *bdp = qe_port->tx_bd_base; 262 262 263 263 while (1) { 264 - if (qe_ioread16be(&bdp->status) & BD_SC_READY) 264 + if (ioread16be(&bdp->status) & BD_SC_READY) 265 265 /* This BD is not done, so return "not done" */ 266 266 return 0; 267 267 268 - if (qe_ioread16be(&bdp->status) & BD_SC_WRAP) 268 + if (ioread16be(&bdp->status) & BD_SC_WRAP) 269 269 /* 270 270 * This BD is done and it's the last one, so return 271 271 * "done" ··· 344 344 p = qe2cpu_addr(be32_to_cpu(bdp->buf), qe_port); 345 345 346 346 *p++ = port->x_char; 347 - qe_iowrite16be(1, &bdp->length); 347 + iowrite16be(1, &bdp->length); 348 348 qe_setbits_be16(&bdp->status, BD_SC_READY); 349 349 /* Get next BD. */ 350 - if (qe_ioread16be(&bdp->status) & BD_SC_WRAP) 350 + if (ioread16be(&bdp->status) & BD_SC_WRAP) 351 351 bdp = qe_port->tx_bd_base; 352 352 else 353 353 bdp++; ··· 366 366 /* Pick next descriptor and fill from buffer */ 367 367 bdp = qe_port->tx_cur; 368 368 369 - while (!(qe_ioread16be(&bdp->status) & BD_SC_READY) && 369 + while (!(ioread16be(&bdp->status) & BD_SC_READY) && 370 370 (xmit->tail != xmit->head)) { 371 371 count = 0; 372 372 p = qe2cpu_addr(be32_to_cpu(bdp->buf), qe_port); ··· 379 379 break; 380 380 } 381 381 382 - qe_iowrite16be(count, &bdp->length); 382 + iowrite16be(count, &bdp->length); 383 383 qe_setbits_be16(&bdp->status, BD_SC_READY); 384 384 385 385 /* Get next BD. 
*/ 386 - if (qe_ioread16be(&bdp->status) & BD_SC_WRAP) 386 + if (ioread16be(&bdp->status) & BD_SC_WRAP) 387 387 bdp = qe_port->tx_bd_base; 388 388 else 389 389 bdp++; ··· 416 416 container_of(port, struct uart_qe_port, port); 417 417 418 418 /* If we currently are transmitting, then just return */ 419 - if (qe_ioread16be(&qe_port->uccp->uccm) & UCC_UART_UCCE_TX) 419 + if (ioread16be(&qe_port->uccp->uccm) & UCC_UART_UCCE_TX) 420 420 return; 421 421 422 422 /* Otherwise, pump the port and start transmission */ ··· 471 471 */ 472 472 bdp = qe_port->rx_cur; 473 473 while (1) { 474 - status = qe_ioread16be(&bdp->status); 474 + status = ioread16be(&bdp->status); 475 475 476 476 /* If this one is empty, then we assume we've read them all */ 477 477 if (status & BD_SC_EMPTY) 478 478 break; 479 479 480 480 /* get number of characters, and check space in RX buffer */ 481 - i = qe_ioread16be(&bdp->length); 481 + i = ioread16be(&bdp->length); 482 482 483 483 /* If we don't have enough room in RX buffer for the entire BD, 484 484 * then we try later, which will be the next RX interrupt. 
··· 512 512 qe_clrsetbits_be16(&bdp->status, 513 513 BD_SC_BR | BD_SC_FR | BD_SC_PR | BD_SC_OV | BD_SC_ID, 514 514 BD_SC_EMPTY); 515 - if (qe_ioread16be(&bdp->status) & BD_SC_WRAP) 515 + if (ioread16be(&bdp->status) & BD_SC_WRAP) 516 516 bdp = qe_port->rx_bd_base; 517 517 else 518 518 bdp++; ··· 569 569 u16 events; 570 570 571 571 /* Clear the interrupts */ 572 - events = qe_ioread16be(&uccp->ucce); 573 - qe_iowrite16be(events, &uccp->ucce); 572 + events = ioread16be(&uccp->ucce); 573 + iowrite16be(events, &uccp->ucce); 574 574 575 575 if (events & UCC_UART_UCCE_BRKE) 576 576 uart_handle_break(&qe_port->port); ··· 601 601 bdp = qe_port->rx_bd_base; 602 602 qe_port->rx_cur = qe_port->rx_bd_base; 603 603 for (i = 0; i < (qe_port->rx_nrfifos - 1); i++) { 604 - qe_iowrite16be(BD_SC_EMPTY | BD_SC_INTRPT, &bdp->status); 605 - qe_iowrite32be(cpu2qe_addr(bd_virt, qe_port), &bdp->buf); 606 - qe_iowrite16be(0, &bdp->length); 604 + iowrite16be(BD_SC_EMPTY | BD_SC_INTRPT, &bdp->status); 605 + iowrite32be(cpu2qe_addr(bd_virt, qe_port), &bdp->buf); 606 + iowrite16be(0, &bdp->length); 607 607 bd_virt += qe_port->rx_fifosize; 608 608 bdp++; 609 609 } 610 610 611 611 /* */ 612 - qe_iowrite16be(BD_SC_WRAP | BD_SC_EMPTY | BD_SC_INTRPT, &bdp->status); 613 - qe_iowrite32be(cpu2qe_addr(bd_virt, qe_port), &bdp->buf); 614 - qe_iowrite16be(0, &bdp->length); 612 + iowrite16be(BD_SC_WRAP | BD_SC_EMPTY | BD_SC_INTRPT, &bdp->status); 613 + iowrite32be(cpu2qe_addr(bd_virt, qe_port), &bdp->buf); 614 + iowrite16be(0, &bdp->length); 615 615 616 616 /* Set the physical address of the host memory 617 617 * buffers in the buffer descriptors, and the ··· 622 622 qe_port->tx_cur = qe_port->tx_bd_base; 623 623 bdp = qe_port->tx_bd_base; 624 624 for (i = 0; i < (qe_port->tx_nrfifos - 1); i++) { 625 - qe_iowrite16be(BD_SC_INTRPT, &bdp->status); 626 - qe_iowrite32be(cpu2qe_addr(bd_virt, qe_port), &bdp->buf); 627 - qe_iowrite16be(0, &bdp->length); 625 + iowrite16be(BD_SC_INTRPT, &bdp->status); 626 + 
iowrite32be(cpu2qe_addr(bd_virt, qe_port), &bdp->buf); 627 + iowrite16be(0, &bdp->length); 628 628 bd_virt += qe_port->tx_fifosize; 629 629 bdp++; 630 630 } ··· 634 634 qe_setbits_be16(&qe_port->tx_cur->status, BD_SC_P); 635 635 #endif 636 636 637 - qe_iowrite16be(BD_SC_WRAP | BD_SC_INTRPT, &bdp->status); 638 - qe_iowrite32be(cpu2qe_addr(bd_virt, qe_port), &bdp->buf); 639 - qe_iowrite16be(0, &bdp->length); 637 + iowrite16be(BD_SC_WRAP | BD_SC_INTRPT, &bdp->status); 638 + iowrite32be(cpu2qe_addr(bd_virt, qe_port), &bdp->buf); 639 + iowrite16be(0, &bdp->length); 640 640 } 641 641 642 642 /* ··· 658 658 ucc_slow_disable(qe_port->us_private, COMM_DIR_RX_AND_TX); 659 659 660 660 /* Program the UCC UART parameter RAM */ 661 - qe_iowrite8(UCC_BMR_GBL | UCC_BMR_BO_BE, &uccup->common.rbmr); 662 - qe_iowrite8(UCC_BMR_GBL | UCC_BMR_BO_BE, &uccup->common.tbmr); 663 - qe_iowrite16be(qe_port->rx_fifosize, &uccup->common.mrblr); 664 - qe_iowrite16be(0x10, &uccup->maxidl); 665 - qe_iowrite16be(1, &uccup->brkcr); 666 - qe_iowrite16be(0, &uccup->parec); 667 - qe_iowrite16be(0, &uccup->frmec); 668 - qe_iowrite16be(0, &uccup->nosec); 669 - qe_iowrite16be(0, &uccup->brkec); 670 - qe_iowrite16be(0, &uccup->uaddr[0]); 671 - qe_iowrite16be(0, &uccup->uaddr[1]); 672 - qe_iowrite16be(0, &uccup->toseq); 661 + iowrite8(UCC_BMR_GBL | UCC_BMR_BO_BE, &uccup->common.rbmr); 662 + iowrite8(UCC_BMR_GBL | UCC_BMR_BO_BE, &uccup->common.tbmr); 663 + iowrite16be(qe_port->rx_fifosize, &uccup->common.mrblr); 664 + iowrite16be(0x10, &uccup->maxidl); 665 + iowrite16be(1, &uccup->brkcr); 666 + iowrite16be(0, &uccup->parec); 667 + iowrite16be(0, &uccup->frmec); 668 + iowrite16be(0, &uccup->nosec); 669 + iowrite16be(0, &uccup->brkec); 670 + iowrite16be(0, &uccup->uaddr[0]); 671 + iowrite16be(0, &uccup->uaddr[1]); 672 + iowrite16be(0, &uccup->toseq); 673 673 for (i = 0; i < 8; i++) 674 - qe_iowrite16be(0xC000, &uccup->cchars[i]); 675 - qe_iowrite16be(0xc0ff, &uccup->rccm); 674 + iowrite16be(0xC000, 
&uccup->cchars[i]); 675 + iowrite16be(0xc0ff, &uccup->rccm); 676 676 677 677 /* Configure the GUMR registers for UART */ 678 678 if (soft_uart) { ··· 702 702 #endif 703 703 704 704 /* Disable rx interrupts and clear all pending events. */ 705 - qe_iowrite16be(0, &uccp->uccm); 706 - qe_iowrite16be(0xffff, &uccp->ucce); 707 - qe_iowrite16be(0x7e7e, &uccp->udsr); 705 + iowrite16be(0, &uccp->uccm); 706 + iowrite16be(0xffff, &uccp->ucce); 707 + iowrite16be(0x7e7e, &uccp->udsr); 708 708 709 709 /* Initialize UPSMR */ 710 - qe_iowrite16be(0, &uccp->upsmr); 710 + iowrite16be(0, &uccp->upsmr); 711 711 712 712 if (soft_uart) { 713 - qe_iowrite16be(0x30, &uccup->supsmr); 714 - qe_iowrite16be(0, &uccup->res92); 715 - qe_iowrite32be(0, &uccup->rx_state); 716 - qe_iowrite32be(0, &uccup->rx_cnt); 717 - qe_iowrite8(0, &uccup->rx_bitmark); 718 - qe_iowrite8(10, &uccup->rx_length); 719 - qe_iowrite32be(0x4000, &uccup->dump_ptr); 720 - qe_iowrite8(0, &uccup->rx_temp_dlst_qe); 721 - qe_iowrite32be(0, &uccup->rx_frame_rem); 722 - qe_iowrite8(0, &uccup->rx_frame_rem_size); 713 + iowrite16be(0x30, &uccup->supsmr); 714 + iowrite16be(0, &uccup->res92); 715 + iowrite32be(0, &uccup->rx_state); 716 + iowrite32be(0, &uccup->rx_cnt); 717 + iowrite8(0, &uccup->rx_bitmark); 718 + iowrite8(10, &uccup->rx_length); 719 + iowrite32be(0x4000, &uccup->dump_ptr); 720 + iowrite8(0, &uccup->rx_temp_dlst_qe); 721 + iowrite32be(0, &uccup->rx_frame_rem); 722 + iowrite8(0, &uccup->rx_frame_rem_size); 723 723 /* Soft-UART requires TX to be 1X */ 724 - qe_iowrite8(UCC_UART_TX_STATE_UART | UCC_UART_TX_STATE_X1, 724 + iowrite8(UCC_UART_TX_STATE_UART | UCC_UART_TX_STATE_X1, 725 725 &uccup->tx_mode); 726 - qe_iowrite16be(0, &uccup->tx_state); 727 - qe_iowrite8(0, &uccup->resD4); 728 - qe_iowrite16be(0, &uccup->resD5); 726 + iowrite16be(0, &uccup->tx_state); 727 + iowrite8(0, &uccup->resD4); 728 + iowrite16be(0, &uccup->resD5); 729 729 730 730 /* Set UART mode. 731 731 * Enable receive and transmit. 
··· 850 850 struct ucc_slow __iomem *uccp = qe_port->uccp; 851 851 unsigned int baud; 852 852 unsigned long flags; 853 - u16 upsmr = qe_ioread16be(&uccp->upsmr); 853 + u16 upsmr = ioread16be(&uccp->upsmr); 854 854 struct ucc_uart_pram __iomem *uccup = qe_port->uccup; 855 - u16 supsmr = qe_ioread16be(&uccup->supsmr); 855 + u16 supsmr = ioread16be(&uccup->supsmr); 856 856 u8 char_length = 2; /* 1 + CL + PEN + 1 + SL */ 857 857 858 858 /* Character length programmed into the mode register is the ··· 950 950 /* Update the per-port timeout. */ 951 951 uart_update_timeout(port, termios->c_cflag, baud); 952 952 953 - qe_iowrite16be(upsmr, &uccp->upsmr); 953 + iowrite16be(upsmr, &uccp->upsmr); 954 954 if (soft_uart) { 955 - qe_iowrite16be(supsmr, &uccup->supsmr); 956 - qe_iowrite8(char_length, &uccup->rx_length); 955 + iowrite16be(supsmr, &uccup->supsmr); 956 + iowrite8(char_length, &uccup->rx_length); 957 957 958 958 /* Soft-UART requires a 1X multiplier for TX */ 959 959 qe_setbrg(qe_port->us_info.rx_clock, baud, 16);
include/dt-bindings/power/qcom-rpmpd.h (+26)
···
 #define SM8250_MX	8
 #define SM8250_MX_AO	9

+/* SM8350 Power Domain Indexes */
+#define SM8350_CX	0
+#define SM8350_CX_AO	1
+#define SM8350_EBI	2
+#define SM8350_GFX	3
+#define SM8350_LCX	4
+#define SM8350_LMX	5
+#define SM8350_MMCX	6
+#define SM8350_MMCX_AO	7
+#define SM8350_MX	8
+#define SM8350_MX_AO	9
+#define SM8350_MXC	10
+#define SM8350_MXC_AO	11
+#define SM8350_MSS	12
+
 /* SC7180 Power Domain Indexes */
 #define SC7180_CX	0
 #define SC7180_CX_AO	1
···
 #define SC7180_LMX	5
 #define SC7180_LCX	6
 #define SC7180_MSS	7
+
+/* SC7280 Power Domain Indexes */
+#define SC7280_CX	0
+#define SC7280_CX_AO	1
+#define SC7280_EBI	2
+#define SC7280_GFX	3
+#define SC7280_MX	4
+#define SC7280_MX_AO	5
+#define SC7280_LMX	6
+#define SC7280_LCX	7
+#define SC7280_MSS	8

 /* SDM845 Power Domain performance levels */
 #define RPMH_REGULATOR_LEVEL_RETENTION	16
include/dt-bindings/pwm/raspberrypi,firmware-poe-pwm.h (+13, new file)
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2020 Nicolas Saenz Julienne
+ * Author: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
+ */
+
+#ifndef _DT_BINDINGS_RASPBERRYPI_FIRMWARE_PWM_H
+#define _DT_BINDINGS_RASPBERRYPI_FIRMWARE_PWM_H
+
+#define RASPBERRYPI_FIRMWARE_PWM_POE	0
+#define RASPBERRYPI_FIRMWARE_PWM_NUM	1
+
+#endif
include/dt-bindings/soc/bcm-pmb.h (+1)
···
 #define BCM_PMB_PCIE1		0x02
 #define BCM_PMB_PCIE2		0x03
 #define BCM_PMB_HOST_USB	0x04
+#define BCM_PMB_SATA		0x05

 #endif
include/linux/clk/tegra.h (+3 -1)
···
 /* SPDX-License-Identifier: GPL-2.0-only */
 /*
- * Copyright (c) 2012, NVIDIA CORPORATION.  All rights reserved.
+ * Copyright (c) 2012-2020, NVIDIA CORPORATION.  All rights reserved.
  */

 #ifndef __LINUX_CLK_TEGRA_H_
···
 }
 #endif

+extern int tegra210_plle_hw_sequence_start(void);
+extern bool tegra210_plle_hw_sequence_is_enabled(void);
 extern void tegra210_xusb_pll_hw_control_enable(void);
 extern void tegra210_xusb_pll_hw_sequence_start(void);
 extern void tegra210_sata_pll_hw_control_enable(void);
include/linux/firmware/xlnx-zynqmp.h (-5)
···
 int zynqmp_pm_system_shutdown(const u32 type, const u32 subtype);
 int zynqmp_pm_set_boot_health_status(u32 value);
 #else
-static inline struct zynqmp_eemi_ops *zynqmp_pm_get_eemi_ops(void)
-{
-	return ERR_PTR(-ENODEV);
-}
-
 static inline int zynqmp_pm_get_api_version(u32 *version)
 {
 	return -ENODEV;
include/linux/fsl/guts.h (+2 -2)
···
 /* SPDX-License-Identifier: GPL-2.0-or-later */
-/**
+/*
  * Freecale 85xx and 86xx Global Utilties register set
  *
  * Authors: Jeff Brown
···
 #include <linux/types.h>
 #include <linux/io.h>

-/**
+/*
  * Global Utility Registers.
  *
  * Not all registers defined in this structure are available on all chips, so
include/linux/scmi_protocol.h (+102 -98)
···
  /*
   * SCMI Message Protocol driver header
   *
-  * Copyright (C) 2018 ARM Ltd.
+  * Copyright (C) 2018-2021 ARM Ltd.
   */

  #ifndef _LINUX_SCMI_PROTOCOL_H
···
  };

  struct scmi_handle;
+ struct scmi_device;
+ struct scmi_protocol_handle;

  /**
-  * struct scmi_clk_ops - represents the various operations provided
+  * struct scmi_clk_proto_ops - represents the various operations provided
   * by SCMI Clock Protocol
   *
   * @count_get: get the count of clocks provided by SCMI
···
   * @enable: enables the specified clock
   * @disable: disables the specified clock
   */
- struct scmi_clk_ops {
-         int (*count_get)(const struct scmi_handle *handle);
+ struct scmi_clk_proto_ops {
+         int (*count_get)(const struct scmi_protocol_handle *ph);

          const struct scmi_clock_info *(*info_get)
-                 (const struct scmi_handle *handle, u32 clk_id);
-         int (*rate_get)(const struct scmi_handle *handle, u32 clk_id,
+                 (const struct scmi_protocol_handle *ph, u32 clk_id);
+         int (*rate_get)(const struct scmi_protocol_handle *ph, u32 clk_id,
                          u64 *rate);
-         int (*rate_set)(const struct scmi_handle *handle, u32 clk_id,
+         int (*rate_set)(const struct scmi_protocol_handle *ph, u32 clk_id,
                          u64 rate);
-         int (*enable)(const struct scmi_handle *handle, u32 clk_id);
-         int (*disable)(const struct scmi_handle *handle, u32 clk_id);
+         int (*enable)(const struct scmi_protocol_handle *ph, u32 clk_id);
+         int (*disable)(const struct scmi_protocol_handle *ph, u32 clk_id);
  };

  /**
-  * struct scmi_perf_ops - represents the various operations provided
+  * struct scmi_perf_proto_ops - represents the various operations provided
   * by SCMI Performance Protocol
   *
   * @limits_set: sets limits on the performance level of a domain
···
   * @est_power_get: gets the estimated power cost for a given performance domain
   *	at a given frequency
   */
- struct scmi_perf_ops {
-         int (*limits_set)(const struct scmi_handle *handle, u32 domain,
+ struct scmi_perf_proto_ops {
+         int (*limits_set)(const struct scmi_protocol_handle *ph, u32 domain,
                            u32 max_perf, u32 min_perf);
-         int (*limits_get)(const struct scmi_handle *handle, u32 domain,
+         int (*limits_get)(const struct scmi_protocol_handle *ph, u32 domain,
                            u32 *max_perf, u32 *min_perf);
-         int (*level_set)(const struct scmi_handle *handle, u32 domain,
+         int (*level_set)(const struct scmi_protocol_handle *ph, u32 domain,
                           u32 level, bool poll);
-         int (*level_get)(const struct scmi_handle *handle, u32 domain,
+         int (*level_get)(const struct scmi_protocol_handle *ph, u32 domain,
                           u32 *level, bool poll);
          int (*device_domain_id)(struct device *dev);
-         int (*transition_latency_get)(const struct scmi_handle *handle,
+         int (*transition_latency_get)(const struct scmi_protocol_handle *ph,
                                        struct device *dev);
-         int (*device_opps_add)(const struct scmi_handle *handle,
+         int (*device_opps_add)(const struct scmi_protocol_handle *ph,
                                 struct device *dev);
-         int (*freq_set)(const struct scmi_handle *handle, u32 domain,
+         int (*freq_set)(const struct scmi_protocol_handle *ph, u32 domain,
                          unsigned long rate, bool poll);
-         int (*freq_get)(const struct scmi_handle *handle, u32 domain,
+         int (*freq_get)(const struct scmi_protocol_handle *ph, u32 domain,
                          unsigned long *rate, bool poll);
-         int (*est_power_get)(const struct scmi_handle *handle, u32 domain,
+         int (*est_power_get)(const struct scmi_protocol_handle *ph, u32 domain,
                               unsigned long *rate, unsigned long *power);
-         bool (*fast_switch_possible)(const struct scmi_handle *handle,
+         bool (*fast_switch_possible)(const struct scmi_protocol_handle *ph,
                                       struct device *dev);
-         bool (*power_scale_mw_get)(const struct scmi_handle *handle);
+         bool (*power_scale_mw_get)(const struct scmi_protocol_handle *ph);
  };

  /**
-  * struct scmi_power_ops - represents the various operations provided
+  * struct scmi_power_proto_ops - represents the various operations provided
   * by SCMI Power Protocol
   *
   * @num_domains_get: get the count of power domains provided by SCMI
···
   * @state_set: sets the power state of a power domain
   * @state_get: gets the power state of a power domain
   */
- struct scmi_power_ops {
-         int (*num_domains_get)(const struct scmi_handle *handle);
-         char *(*name_get)(const struct scmi_handle *handle, u32 domain);
+ struct scmi_power_proto_ops {
+         int (*num_domains_get)(const struct scmi_protocol_handle *ph);
+         char *(*name_get)(const struct scmi_protocol_handle *ph, u32 domain);
  #define SCMI_POWER_STATE_TYPE_SHIFT	30
  #define SCMI_POWER_STATE_ID_MASK	(BIT(28) - 1)
  #define SCMI_POWER_STATE_PARAM(type, id) \
···
  		((id) & SCMI_POWER_STATE_ID_MASK))
  #define SCMI_POWER_STATE_GENERIC_ON	SCMI_POWER_STATE_PARAM(0, 0)
  #define SCMI_POWER_STATE_GENERIC_OFF	SCMI_POWER_STATE_PARAM(1, 0)
-         int (*state_set)(const struct scmi_handle *handle, u32 domain,
+         int (*state_set)(const struct scmi_protocol_handle *ph, u32 domain,
                           u32 state);
-         int (*state_get)(const struct scmi_handle *handle, u32 domain,
+         int (*state_get)(const struct scmi_protocol_handle *ph, u32 domain,
                           u32 *state);
  };
···
  };

  /**
-  * struct scmi_sensor_ops - represents the various operations provided
+  * struct scmi_sensor_proto_ops - represents the various operations provided
   * by SCMI Sensor Protocol
   *
   * @count_get: get the count of sensors provided by SCMI
···
   * @config_get: Get sensor current configuration
   * @config_set: Set sensor current configuration
   */
- struct scmi_sensor_ops {
-         int (*count_get)(const struct scmi_handle *handle);
+ struct scmi_sensor_proto_ops {
+         int (*count_get)(const struct scmi_protocol_handle *ph);
          const struct scmi_sensor_info *(*info_get)
-                 (const struct scmi_handle *handle, u32 sensor_id);
-         int (*trip_point_config)(const struct scmi_handle *handle,
+                 (const struct scmi_protocol_handle *ph, u32 sensor_id);
+         int (*trip_point_config)(const struct scmi_protocol_handle *ph,
                                   u32 sensor_id, u8 trip_id, u64 trip_value);
-         int (*reading_get)(const struct scmi_handle *handle, u32 sensor_id,
+         int (*reading_get)(const struct scmi_protocol_handle *ph, u32 sensor_id,
                             u64 *value);
-         int (*reading_get_timestamped)(const struct scmi_handle *handle,
+         int (*reading_get_timestamped)(const struct scmi_protocol_handle *ph,
                                         u32 sensor_id, u8 count,
                                         struct scmi_sensor_reading *readings);
-         int (*config_get)(const struct scmi_handle *handle,
+         int (*config_get)(const struct scmi_protocol_handle *ph,
                            u32 sensor_id, u32 *sensor_config);
-         int (*config_set)(const struct scmi_handle *handle,
+         int (*config_set)(const struct scmi_protocol_handle *ph,
                            u32 sensor_id, u32 sensor_config);
  };

  /**
-  * struct scmi_reset_ops - represents the various operations provided
+  * struct scmi_reset_proto_ops - represents the various operations provided
   * by SCMI Reset Protocol
   *
   * @num_domains_get: get the count of reset domains provided by SCMI
···
   * @assert: explicitly assert reset signal of the specified reset domain
   * @deassert: explicitly deassert reset signal of the specified reset domain
   */
- struct scmi_reset_ops {
-         int (*num_domains_get)(const struct scmi_handle *handle);
-         char *(*name_get)(const struct scmi_handle *handle, u32 domain);
-         int (*latency_get)(const struct scmi_handle *handle, u32 domain);
-         int (*reset)(const struct scmi_handle *handle, u32 domain);
-         int (*assert)(const struct scmi_handle *handle, u32 domain);
-         int (*deassert)(const struct scmi_handle *handle, u32 domain);
+ struct scmi_reset_proto_ops {
+         int (*num_domains_get)(const struct scmi_protocol_handle *ph);
+         char *(*name_get)(const struct scmi_protocol_handle *ph, u32 domain);
+         int (*latency_get)(const struct scmi_protocol_handle *ph, u32 domain);
+         int (*reset)(const struct scmi_protocol_handle *ph, u32 domain);
+         int (*assert)(const struct scmi_protocol_handle *ph, u32 domain);
+         int (*deassert)(const struct scmi_protocol_handle *ph, u32 domain);
  };

  /**
···
  };

  /**
-  * struct scmi_voltage_ops - represents the various operations provided
+  * struct scmi_voltage_proto_ops - represents the various operations provided
   * by SCMI Voltage Protocol
   *
   * @num_domains_get: get the count of voltage domains provided by SCMI
···
   * @level_set: set the voltage level for the specified domain
   * @level_get: get the voltage level of the specified domain
   */
- struct scmi_voltage_ops {
-         int (*num_domains_get)(const struct scmi_handle *handle);
+ struct scmi_voltage_proto_ops {
+         int (*num_domains_get)(const struct scmi_protocol_handle *ph);
          const struct scmi_voltage_info __must_check *(*info_get)
-                 (const struct scmi_handle *handle, u32 domain_id);
-         int (*config_set)(const struct scmi_handle *handle, u32 domain_id,
+                 (const struct scmi_protocol_handle *ph, u32 domain_id);
+         int (*config_set)(const struct scmi_protocol_handle *ph, u32 domain_id,
                            u32 config);
  #define	SCMI_VOLTAGE_ARCH_STATE_OFF		0x0
  #define	SCMI_VOLTAGE_ARCH_STATE_ON		0x7
-         int (*config_get)(const struct scmi_handle *handle, u32 domain_id,
+         int (*config_get)(const struct scmi_protocol_handle *ph, u32 domain_id,
                            u32 *config);
-         int (*level_set)(const struct scmi_handle *handle, u32 domain_id,
+         int (*level_set)(const struct scmi_protocol_handle *ph, u32 domain_id,
                           u32 flags, s32 volt_uV);
-         int (*level_get)(const struct scmi_handle *handle, u32 domain_id,
+         int (*level_get)(const struct scmi_protocol_handle *ph, u32 domain_id,
                           s32 *volt_uV);
  };

  /**
   * struct scmi_notify_ops - represents notifications' operations provided by
   *			     SCMI core
-  * @register_event_notifier: Register a notifier_block for the requested event
-  * @unregister_event_notifier: Unregister a notifier_block for the requested
+  * @devm_event_notifier_register: Managed registration of a notifier_block for
+  *				   the requested event
+  * @devm_event_notifier_unregister: Managed unregistration of a notifier_block
+  *				     for the requested event
+  * @event_notifier_register: Register a notifier_block for the requested event
+  * @event_notifier_unregister: Unregister a notifier_block for the requested
   *			       event
   *
   * A user can register/unregister its own notifier_block against the wanted
···
   * tuple: (proto_id, evt_id, src_id) using the provided register/unregister
   * interface where:
   *
-  * @handle: The handle identifying the platform instance to use
+  * @sdev: The scmi_device to use when calling the devres managed ops devm_
+  * @handle: The handle identifying the platform instance to use, when not
+  *	     calling the managed ops devm_
   * @proto_id: The protocol ID as in SCMI Specification
   * @evt_id: The message ID of the desired event as in SCMI Specification
   * @src_id: A pointer to the desired source ID if different sources are
···
   * @report: A custom struct describing the specific event delivered
   */
  struct scmi_notify_ops {
-         int (*register_event_notifier)(const struct scmi_handle *handle,
-                                        u8 proto_id, u8 evt_id, u32 *src_id,
+         int (*devm_event_notifier_register)(struct scmi_device *sdev,
+                                             u8 proto_id, u8 evt_id,
+                                             const u32 *src_id,
+                                             struct notifier_block *nb);
+         int (*devm_event_notifier_unregister)(struct scmi_device *sdev,
+                                               u8 proto_id, u8 evt_id,
+                                               const u32 *src_id,
+                                               struct notifier_block *nb);
+         int (*event_notifier_register)(const struct scmi_handle *handle,
+                                        u8 proto_id, u8 evt_id,
+                                        const u32 *src_id,
                                         struct notifier_block *nb);
-         int (*unregister_event_notifier)(const struct scmi_handle *handle,
-                                          u8 proto_id, u8 evt_id, u32 *src_id,
+         int (*event_notifier_unregister)(const struct scmi_handle *handle,
+                                          u8 proto_id, u8 evt_id,
+                                          const u32 *src_id,
                                           struct notifier_block *nb);
  };

···
   *
   * @dev: pointer to the SCMI device
   * @version: pointer to the structure containing SCMI version information
-  * @power_ops: pointer to set of power protocol operations
-  * @perf_ops: pointer to set of performance protocol operations
-  * @clk_ops: pointer to set of clock protocol operations
-  * @sensor_ops: pointer to set of sensor protocol operations
-  * @reset_ops: pointer to set of reset protocol operations
-  * @voltage_ops: pointer to set of voltage protocol operations
+  * @devm_protocol_get: devres managed method to acquire a protocol and get specific
+  *		        operations and a dedicated protocol handler
+  * @devm_protocol_put: devres managed method to release a protocol
   * @notify_ops: pointer to set of notifications related operations
-  * @perf_priv: pointer to private data structure specific to performance
-  *	protocol(for internal use only)
-  * @clk_priv: pointer to private data structure specific to clock
-  *	protocol(for internal use only)
-  * @power_priv: pointer to private data structure specific to power
-  *	protocol(for internal use only)
-  * @sensor_priv: pointer to private data structure specific to sensors
-  *	protocol(for internal use only)
-  * @reset_priv: pointer to private data structure specific to reset
-  *	protocol(for internal use only)
-  * @voltage_priv: pointer to private data structure specific to voltage
-  *	protocol(for internal use only)
-  * @notify_priv: pointer to private data structure specific to notifications
-  *	(for internal use only)
   */
  struct scmi_handle {
          struct device *dev;
          struct scmi_revision_info *version;
-         const struct scmi_perf_ops *perf_ops;
-         const struct scmi_clk_ops *clk_ops;
-         const struct scmi_power_ops *power_ops;
-         const struct scmi_sensor_ops *sensor_ops;
-         const struct scmi_reset_ops *reset_ops;
-         const struct scmi_voltage_ops *voltage_ops;
+
+         const void __must_check *
+                 (*devm_protocol_get)(struct scmi_device *sdev, u8 proto,
+                                      struct scmi_protocol_handle **ph);
+         void (*devm_protocol_put)(struct scmi_device *sdev, u8 proto);
+
          const struct scmi_notify_ops *notify_ops;
-         /* for protocol internal use */
-         void *perf_priv;
-         void *clk_priv;
-         void *power_priv;
-         void *sensor_priv;
-         void *reset_priv;
-         void *voltage_priv;
-         void *notify_priv;
-         void *system_priv;
  };

  enum scmi_std_protocol {
···
  #define module_scmi_driver(__scmi_driver)	\
  	module_driver(__scmi_driver, scmi_register, scmi_unregister)

- typedef int (*scmi_prot_init_fn_t)(struct scmi_handle *);
- int scmi_protocol_register(int protocol_id, scmi_prot_init_fn_t fn);
- void scmi_protocol_unregister(int protocol_id);
+ /**
+  * module_scmi_protocol() - Helper macro for registering a scmi protocol
+  * @__scmi_protocol: scmi_protocol structure
+  *
+  * Helper macro for scmi drivers to set up proper module init / exit
+  * functions. Replaces module_init() and module_exit() and keeps people from
+  * printing pointless things to the kernel log when their driver is loaded.
+  */
+ #define module_scmi_protocol(__scmi_protocol)	\
+ 	module_driver(__scmi_protocol,		\
+ 		      scmi_protocol_register, scmi_protocol_unregister)
+
+ struct scmi_protocol;
+ int scmi_protocol_register(const struct scmi_protocol *proto);
+ void scmi_protocol_unregister(const struct scmi_protocol *proto);

  /* SCMI Notification API - Custom Event Reports */
  enum scmi_notification_events {
+1 -1
include/linux/soc/qcom/apr.h
···
  /**
   * module_apr_driver() - Helper macro for registering a aprbus driver
-  * @__aprbus_driver: aprbus_driver struct
+  * @__apr_driver: apr_driver struct
   *
   * Helper macro for aprbus drivers which do not do anything special in
   * module init/exit. This eliminates a lot of boilerplate. Each module
+1 -1
include/linux/soc/qcom/irq.h
···
  #define GPIO_NO_WAKE_IRQ	~0U

- /**
+ /*
   * QCOM specific IRQ domain flags that distinguishes the handling of wakeup
   * capable interrupts by different interrupt controllers.
   *
+3 -3
include/linux/soc/qcom/llcc-qcom.h
···
  #define LLCC_WRCACHE	31

  /**
-  * llcc_slice_desc - Cache slice descriptor
+  * struct llcc_slice_desc - Cache slice descriptor
   * @slice_id: llcc slice id
   * @slice_size: Size allocated for the llcc slice
   */
···
  };

  /**
-  * llcc_edac_reg_data - llcc edac registers data for each error type
+  * struct llcc_edac_reg_data - llcc edac registers data for each error type
   * @name: Name of the error
   * @synd_reg: Syndrome register address
   * @count_status_reg: Status register address to read the error count
···
  };

  /**
-  * llcc_drv_data - Data associated with the llcc driver
+  * struct llcc_drv_data - Data associated with the llcc driver
   * @regmap: regmap associated with the llcc device
   * @bcast_regmap: regmap associated with llcc broadcast offset
   * @cfg: pointer to the data structure for slice configuration
+2 -2
include/linux/soc/qcom/qmi.h
···
  struct socket;

  /**
-  * qmi_header - wireformat header of QMI messages
+  * struct qmi_header - wireformat header of QMI messages
   * @type: type of message
   * @txn_id: transaction id
   * @msg_id: message id
···
  #define QMI_ERR_NOT_SUPPORTED_V01	94

  /**
-  * qmi_response_type_v01 - common response header (decoded)
+  * struct qmi_response_type_v01 - common response header (decoded)
   * @result: result of the transaction
   * @error: error value, when @result is QMI_RESULT_FAILURE_V01
   */
+10
include/soc/bcm2835/raspberrypi-firmware.h
···
  				u32 tag, void *data, size_t len);
  int rpi_firmware_property_list(struct rpi_firmware *fw,
  			       void *data, size_t tag_size);
+ void rpi_firmware_put(struct rpi_firmware *fw);
  struct rpi_firmware *rpi_firmware_get(struct device_node *firmware_node);
+ struct rpi_firmware *devm_rpi_firmware_get(struct device *dev,
+ 					   struct device_node *firmware_node);
  #else
  static inline int rpi_firmware_property(struct rpi_firmware *fw, u32 tag,
  					void *data, size_t len)
···
  	return -ENOSYS;
  }

+ static inline void rpi_firmware_put(struct rpi_firmware *fw) { }
  static inline struct rpi_firmware *rpi_firmware_get(struct device_node *firmware_node)
+ {
+ 	return NULL;
+ }
+
+ static inline struct rpi_firmware *devm_rpi_firmware_get(struct device *dev,
+ 							 struct device_node *firmware_node)
  {
  	return NULL;
  }
+9 -25
include/soc/fsl/qe/qe.h
···
  #define qe_muram_dma cpm_muram_dma
  #define qe_muram_free_addr cpm_muram_free_addr

- #ifdef CONFIG_PPC32
- #define qe_iowrite8(val, addr)     out_8(addr, val)
- #define qe_iowrite16be(val, addr)  out_be16(addr, val)
- #define qe_iowrite32be(val, addr)  out_be32(addr, val)
- #define qe_ioread8(addr)           in_8(addr)
- #define qe_ioread16be(addr)        in_be16(addr)
- #define qe_ioread32be(addr)        in_be32(addr)
- #else
- #define qe_iowrite8(val, addr)     iowrite8(val, addr)
- #define qe_iowrite16be(val, addr)  iowrite16be(val, addr)
- #define qe_iowrite32be(val, addr)  iowrite32be(val, addr)
- #define qe_ioread8(addr)           ioread8(addr)
- #define qe_ioread16be(addr)        ioread16be(addr)
- #define qe_ioread32be(addr)        ioread32be(addr)
- #endif
+ #define qe_setbits_be32(_addr, _v) iowrite32be(ioread32be(_addr) | (_v), (_addr))
+ #define qe_clrbits_be32(_addr, _v) iowrite32be(ioread32be(_addr) & ~(_v), (_addr))

- #define qe_setbits_be32(_addr, _v) qe_iowrite32be(qe_ioread32be(_addr) | (_v), (_addr))
- #define qe_clrbits_be32(_addr, _v) qe_iowrite32be(qe_ioread32be(_addr) & ~(_v), (_addr))
+ #define qe_setbits_be16(_addr, _v) iowrite16be(ioread16be(_addr) | (_v), (_addr))
+ #define qe_clrbits_be16(_addr, _v) iowrite16be(ioread16be(_addr) & ~(_v), (_addr))

- #define qe_setbits_be16(_addr, _v) qe_iowrite16be(qe_ioread16be(_addr) | (_v), (_addr))
- #define qe_clrbits_be16(_addr, _v) qe_iowrite16be(qe_ioread16be(_addr) & ~(_v), (_addr))
-
- #define qe_setbits_8(_addr, _v) qe_iowrite8(qe_ioread8(_addr) | (_v), (_addr))
- #define qe_clrbits_8(_addr, _v) qe_iowrite8(qe_ioread8(_addr) & ~(_v), (_addr))
+ #define qe_setbits_8(_addr, _v) iowrite8(ioread8(_addr) | (_v), (_addr))
+ #define qe_clrbits_8(_addr, _v) iowrite8(ioread8(_addr) & ~(_v), (_addr))

  #define qe_clrsetbits_be32(addr, clear, set) \
- 	qe_iowrite32be((qe_ioread32be(addr) & ~(clear)) | (set), (addr))
+ 	iowrite32be((ioread32be(addr) & ~(clear)) | (set), (addr))
  #define qe_clrsetbits_be16(addr, clear, set) \
- 	qe_iowrite16be((qe_ioread16be(addr) & ~(clear)) | (set), (addr))
+ 	iowrite16be((ioread16be(addr) & ~(clear)) | (set), (addr))
  #define qe_clrsetbits_8(addr, clear, set) \
- 	qe_iowrite8((qe_ioread8(addr) & ~(clear)) | (set), (addr))
+ 	iowrite8((ioread8(addr) & ~(clear)) | (set), (addr))

  /* Structure that defines QE firmware binary files.
   *
+7
include/soc/tegra/mc.h
···
  #define __SOC_TEGRA_MC_H__

  #include <linux/bits.h>
+ #include <linux/debugfs.h>
  #include <linux/err.h>
  #include <linux/interconnect-provider.h>
  #include <linux/reset-controller.h>
···
  	unsigned int num_resets;

  	const struct tegra_mc_icc_ops *icc_ops;
+
+ 	int (*init)(struct tegra_mc *mc);
  };

  struct tegra_mc {
···
  	struct icc_provider provider;

  	spinlock_t lock;
+
+ 	struct {
+ 		struct dentry *root;
+ 	} debugfs;
  };

  int tegra_mc_write_emem_configuration(struct tegra_mc *mc, unsigned long rate);