Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'arm-drivers-v5.12' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc

Pull ARM SoC driver updates from Arnd Bergmann:
"Updates for SoC specific drivers include a few subsystems that have
their own maintainers but send them through the soc tree:

SCMI firmware:
- add support for a completion interrupt

Reset controllers:
- new driver for BCM4908
- new devm_reset_control_get_optional_exclusive_released() function

Memory controllers:
- Renesas RZ/G2 support
- Tegra124 interconnect support
- Allow more drivers to be loadable modules

TEE/optee firmware:
- minor code cleanup

The other half of this is SoC specific drivers that do not belong in
any other subsystem, most of them living in drivers/soc:

- Allwinner/sunxi power management work
- Allwinner H616 support

- ASpeed AST2600 system identification support

- AT91 SAMA7G5 SoC ID driver
- AT91 SoC driver cleanups

- Broadcom BCM4908 power management bus support

- Marvell mbus cleanups

- Mediatek MT8167 power domain support

- Qualcomm socinfo driver support for PMIC
- Qualcomm SoC identification for many more products

- TI Keystone driver cleanups for PRUSS and elsewhere"

* tag 'arm-drivers-v5.12' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (89 commits)
soc: aspeed: socinfo: Add new systems
soc: aspeed: snoop: Add clock control logic
memory: tegra186-emc: Replace DEFINE_SIMPLE_ATTRIBUTE with DEFINE_DEBUGFS_ATTRIBUTE
memory: samsung: exynos5422-dmc: Correct function names in kerneldoc
memory: ti-emif-pm: Drop of_match_ptr from of_device_id table
optee: simplify i2c access
drivers: soc: atmel: fix type for same7
tee: optee: remove need_resched() before cond_resched()
soc: qcom: ocmem: don't return NULL in of_get_ocmem
optee: sync OP-TEE headers
tee: optee: fix 'physical' typos
drivers: optee: use flexible-array member instead of zero-length array
tee: fix some comment typos in header files
soc: ti: k3-ringacc: Use of_device_get_match_data()
soc: ti: pruss: Refactor the CFG sub-module init
soc: mediatek: pm-domains: Don't print an error if child domain is deferred
soc: mediatek: pm-domains: Add domain regulator supply
dt-bindings: power: Add domain regulator supply
soc: mediatek: cmdq: Remove cmdq_pkt_flush()
soc: mediatek: pm-domains: Add support for mt8167
...

+2446 -799
+8
Documentation/devicetree/bindings/arm/arm,scmi.txt
···
  - mbox-names: shall be "tx" or "rx" depending on mboxes entries.

+ - interrupts : when using smc or hvc transports, this optional
+   property indicates that msg completion by the platform is indicated
+   by an interrupt rather than by the return of the smc call. This
+   should not be used except when the platform requires such behavior.
+
+ - interrupt-names : if "interrupts" is present, interrupt-names must also
+   be present and have the value "a2p".
+
  See Documentation/devicetree/bindings/mailbox/mailbox.txt for more details
  about the generic mailbox controller and client driver bindings.
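For illustration, a firmware node using the new optional completion interrupt might look like the sketch below. Only the interrupts/interrupt-names pairing comes from the binding text above; the node layout, SMC function id, and interrupt number are hypothetical:

```dts
firmware {
    scmi {
        compatible = "arm,scmi-smc";       /* smc transport */
        arm,smc-id = <0x82000010>;         /* hypothetical SMC function id */
        shmem = <&cpu_scp_shmem>;
        /* platform signals msg completion via interrupt, not smc return */
        interrupts = <GIC_SPI 30 IRQ_TYPE_LEVEL_HIGH>;
        interrupt-names = "a2p";
        #address-cells = <1>;
        #size-cells = <0>;
    };
};
```

Without the "interrupts" property, the transport keeps the default behavior of treating the smc return as the completion signal.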
+1 -1
Documentation/devicetree/bindings/arm/atmel-sysregs.txt
···
  Atmel system registers

  Chipid required properties:
- - compatible: Should be "atmel,sama5d2-chipid"
+ - compatible: Should be "atmel,sama5d2-chipid" or "microchip,sama7g5-chipid"
  - reg : Should contain registers location and length

  PIT Timer required properties:
+1
Documentation/devicetree/bindings/arm/msm/qcom,llcc.yaml
···
        - qcom,sc7180-llcc
        - qcom,sdm845-llcc
        - qcom,sm8150-llcc
+       - qcom,sm8250-llcc

  reg:
    items:
+3 -1
Documentation/devicetree/bindings/bus/allwinner,sun8i-a23-rsb.yaml
···
    oneOf:
      - const: allwinner,sun8i-a23-rsb
      - items:
-         - const: allwinner,sun8i-a83t-rsb
+         - enum:
+             - allwinner,sun8i-a83t-rsb
+             - allwinner,sun50i-h616-rsb
          - const: allwinner,sun8i-a23-rsb

  reg:
+5 -1
Documentation/devicetree/bindings/memory-controllers/renesas,rpc-if.yaml
···
  compatible:
    items:
      - enum:
+         - renesas,r8a774a1-rpc-if       # RZ/G2M
+         - renesas,r8a774b1-rpc-if       # RZ/G2N
+         - renesas,r8a774c0-rpc-if       # RZ/G2E
+         - renesas,r8a774e1-rpc-if       # RZ/G2H
          - renesas,r8a77970-rpc-if       # R-Car V3M
          - renesas,r8a77980-rpc-if       # R-Car V3H
          - renesas,r8a77995-rpc-if       # R-Car D3
-     - const: renesas,rcar-gen3-rpc-if   # a generic R-Car gen3 device
+     - const: renesas,rcar-gen3-rpc-if   # a generic R-Car gen3 or RZ/G2 device

  reg:
    items:
+50
Documentation/devicetree/bindings/power/brcm,bcm-pmb.yaml
···
+ # SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
+ %YAML 1.2
+ ---
+ $id: http://devicetree.org/schemas/power/brcm,bcm-pmb.yaml#
+ $schema: http://devicetree.org/meta-schemas/core.yaml#
+
+ title: Broadcom PMB (Power Management Bus) controller
+
+ description: This document describes Broadcom's PMB controller. It supports
+   powering various types of connected devices (e.g. PCIe, USB, SATA).
+
+ maintainers:
+   - Rafał Miłecki <rafal@milecki.pl>
+
+ properties:
+   compatible:
+     enum:
+       - brcm,bcm4908-pmb
+
+   reg:
+     description: register space of one or more buses
+     maxItems: 1
+
+   big-endian:
+     $ref: /schemas/types.yaml#/definitions/flag
+     description: Flag to use for block working in big endian mode.
+
+   "#power-domain-cells":
+     description: cell specifies device ID (see bcm-pmb.h)
+     const: 1
+
+ required:
+   - reg
+   - "#power-domain-cells"
+
+ additionalProperties: false
+
+ examples:
+   - |
+     #include <dt-bindings/soc/bcm-pmb.h>
+
+     pmb: power-controller@802800e0 {
+         compatible = "brcm,bcm4908-pmb";
+         reg = <0x802800e0 0x40>;
+         #power-domain-cells = <1>;
+     };
+
+     foo {
+         power-domains = <&pmb BCM_PMB_PCIE0>;
+     };
+11
Documentation/devicetree/bindings/power/mediatek,power-controller.yaml
···
  compatible:
    enum:
+     - mediatek,mt8167-power-controller
      - mediatek,mt8173-power-controller
      - mediatek,mt8183-power-controller
      - mediatek,mt8192-power-controller
···
  reg:
    description: |
      Power domain index. Valid values are defined in:
+       "include/dt-bindings/power/mt8167-power.h" - for MT8167 type power domain.
        "include/dt-bindings/power/mt8173-power.h" - for MT8173 type power domain.
        "include/dt-bindings/power/mt8183-power.h" - for MT8183 type power domain.
        "include/dt-bindings/power/mt8192-power.h" - for MT8192 type power domain.
···
      In order to follow properly the power-up sequencing, the clocks must
      be specified by order, adding first the BASIC clocks followed by the
      SUSBSYS clocks.
+
+ domain-supply:
+   description: domain regulator supply.

  mediatek,infracfg:
    $ref: /schemas/types.yaml#/definitions/phandle
···
      be specified by order, adding first the BASIC clocks followed by the
      SUSBSYS clocks.

+ domain-supply:
+   description: domain regulator supply.
+
  mediatek,infracfg:
    $ref: /schemas/types.yaml#/definitions/phandle
    description: phandle to the device containing the INFRACFG register range.
···
      In order to follow properly the power-up sequencing, the clocks must
      be specified by order, adding first the BASIC clocks followed by the
      SUSBSYS clocks.
+
+ domain-supply:
+   description: domain regulator supply.

  mediatek,infracfg:
    $ref: /schemas/types.yaml#/definitions/phandle
+1
Documentation/devicetree/bindings/power/qcom,rpmpd.yaml
···
      - qcom,msm8916-rpmpd
      - qcom,msm8939-rpmpd
      - qcom,msm8976-rpmpd
+     - qcom,msm8994-rpmpd
      - qcom,msm8996-rpmpd
      - qcom,msm8998-rpmpd
      - qcom,qcs404-rpmpd
+39
Documentation/devicetree/bindings/reset/brcm,bcm4908-misc-pcie-reset.yaml
···
+ # SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
+ %YAML 1.2
+ ---
+ $id: http://devicetree.org/schemas/reset/brcm,bcm4908-misc-pcie-reset.yaml#
+ $schema: http://devicetree.org/meta-schemas/core.yaml#
+
+ title: Broadcom MISC block PCIe reset controller
+
+ description: This document describes reset controller handling PCIe PERST#
+   signals. On BCM4908 it's a part of the MISC block.
+
+ maintainers:
+   - Rafał Miłecki <rafal@milecki.pl>
+
+ properties:
+   compatible:
+     const: brcm,bcm4908-misc-pcie-reset
+
+   reg:
+     maxItems: 1
+
+   "#reset-cells":
+     description: PCIe core id
+     const: 1
+
+ required:
+   - compatible
+   - reg
+   - "#reset-cells"
+
+ additionalProperties: false
+
+ examples:
+   - |
+     reset-controller@ff802644 {
+         compatible = "brcm,bcm4908-misc-pcie-reset";
+         reg = <0xff802644 0x04>;
+         #reset-cells = <1>;
+     };
-44
Documentation/devicetree/bindings/reset/hisilicon,hi3660-reset.txt
···
- Hisilicon System Reset Controller
- ======================================
-
- Please also refer to reset.txt in this directory for common reset
- controller binding usage.
-
- The reset controller registers are part of the system-ctl block on
- hi3660 and hi3670 SoCs.
-
- Required properties:
- - compatible: should be one of the following:
-   "hisilicon,hi3660-reset" for HI3660
-   "hisilicon,hi3670-reset", "hisilicon,hi3660-reset" for HI3670
- - hisi,rst-syscon: phandle of the reset's syscon.
- - #reset-cells : Specifies the number of cells needed to encode a
-   reset source. The type shall be a <u32> and the value shall be 2.
-
-   Cell #1 : offset of the reset assert control
-             register from the syscon register base
-             offset + 4: deassert control register
-             offset + 8: status control register
-   Cell #2 : bit position of the reset in the reset control register
-
- Example:
-   iomcu: iomcu@ffd7e000 {
-     compatible = "hisilicon,hi3660-iomcu", "syscon";
-     reg = <0x0 0xffd7e000 0x0 0x1000>;
-   };
-
-   iomcu_rst: iomcu_rst_controller {
-     compatible = "hisilicon,hi3660-reset";
-     hisi,rst-syscon = <&iomcu>;
-     #reset-cells = <2>;
-   };
-
- Specifying reset lines connected to IP modules
- ==============================================
- example:
-
-   i2c0: i2c@..... {
-     ...
-     resets = <&iomcu_rst 0x20 3>; /* offset: 0x20; bit: 3 */
-     ...
-   };
+77
Documentation/devicetree/bindings/reset/hisilicon,hi3660-reset.yaml
···
+ # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+ %YAML 1.2
+ ---
+ $id: http://devicetree.org/schemas/reset/hisilicon,hi3660-reset.yaml#
+ $schema: http://devicetree.org/meta-schemas/core.yaml#
+
+ title: Hisilicon System Reset Controller
+
+ maintainers:
+   - Wei Xu <xuwei5@hisilicon.com>
+
+ description: |
+   Please also refer to reset.txt in this directory for common reset
+   controller binding usage.
+   The reset controller registers are part of the system-ctl block on
+   hi3660 and hi3670 SoCs.
+
+ properties:
+   compatible:
+     oneOf:
+       - items:
+           - const: hisilicon,hi3660-reset
+       - items:
+           - const: hisilicon,hi3670-reset
+           - const: hisilicon,hi3660-reset
+
+   hisilicon,rst-syscon:
+     description: phandle of the reset's syscon.
+     $ref: /schemas/types.yaml#/definitions/phandle
+
+   '#reset-cells':
+     description: |
+       Specifies the number of cells needed to encode a reset source.
+       Cell #1 : offset of the reset assert control register from the syscon
+                 register base
+                 offset + 4: deassert control register
+                 offset + 8: status control register
+       Cell #2 : bit position of the reset in the reset control register
+     const: 2
+
+ required:
+   - compatible
+
+ additionalProperties: false
+
+ examples:
+   - |
+     #include <dt-bindings/interrupt-controller/irq.h>
+     #include <dt-bindings/interrupt-controller/arm-gic.h>
+     #include <dt-bindings/clock/hi3660-clock.h>
+
+     iomcu: iomcu@ffd7e000 {
+       compatible = "hisilicon,hi3660-iomcu", "syscon";
+       reg = <0xffd7e000 0x1000>;
+     };
+
+     iomcu_rst: iomcu_rst_controller {
+       compatible = "hisilicon,hi3660-reset";
+       hisilicon,rst-syscon = <&iomcu>;
+       #reset-cells = <2>;
+     };
+
+     /* Specifying reset lines connected to IP modules */
+     i2c@ffd71000 {
+       compatible = "snps,designware-i2c";
+       reg = <0xffd71000 0x1000>;
+       interrupts = <GIC_SPI 118 IRQ_TYPE_LEVEL_HIGH>;
+       #address-cells = <1>;
+       #size-cells = <0>;
+       clock-frequency = <400000>;
+       clocks = <&crg_ctrl HI3660_CLK_GATE_I2C0>;
+       resets = <&iomcu_rst 0x20 3>;
+       pinctrl-names = "default";
+       pinctrl-0 = <&i2c0_pmx_func &i2c0_cfg_func>;
+       status = "disabled";
+     };
+ ...
+1
Documentation/devicetree/bindings/soc/qcom/qcom,aoss-qmp.txt
···
     "qcom,sdm845-aoss-qmp"
     "qcom,sm8150-aoss-qmp"
     "qcom,sm8250-aoss-qmp"
+    "qcom,sm8350-aoss-qmp"

  - reg:
    Usage: required
-57
Documentation/devicetree/bindings/soc/qcom/qcom,smem.txt
···
- Qualcomm Shared Memory Manager binding
-
- This binding describes the Qualcomm Shared Memory Manager, used to share data
- between various subsystems and OSes in Qualcomm platforms.
-
- - compatible:
-   Usage: required
-   Value type: <stringlist>
-   Definition: must be:
-               "qcom,smem"
-
- - memory-region:
-   Usage: required
-   Value type: <prop-encoded-array>
-   Definition: handle to memory reservation for main SMEM memory region.
-
- - qcom,rpm-msg-ram:
-   Usage: required
-   Value type: <prop-encoded-array>
-   Definition: handle to RPM message memory resource
-
- - hwlocks:
-   Usage: required
-   Value type: <prop-encoded-array>
-   Definition: reference to a hwspinlock used to protect allocations from
-               the shared memory
-
- = EXAMPLE
- The following example shows the SMEM setup for MSM8974, with a main SMEM region
- at 0xfa00000 and the RPM message ram at 0xfc428000:
-
-   reserved-memory {
-     #address-cells = <1>;
-     #size-cells = <1>;
-     ranges;
-
-     smem_region: smem@fa00000 {
-       reg = <0xfa00000 0x200000>;
-       no-map;
-     };
-   };
-
-   smem@fa00000 {
-     compatible = "qcom,smem";
-
-     memory-region = <&smem_region>;
-     qcom,rpm-msg-ram = <&rpm_msg_ram>;
-
-     hwlocks = <&tcsr_mutex 3>;
-   };
-
-   soc {
-     rpm_msg_ram: memory@fc428000 {
-       compatible = "qcom,rpm-msg-ram";
-       reg = <0xfc428000 0x4000>;
-     };
-   };
+72
Documentation/devicetree/bindings/soc/qcom/qcom,smem.yaml
···
+ # SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+ %YAML 1.2
+ ---
+ $id: "http://devicetree.org/schemas/soc/qcom/qcom,smem.yaml#"
+ $schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+ title: Qualcomm Shared Memory Manager binding
+
+ maintainers:
+   - Andy Gross <agross@kernel.org>
+   - Bjorn Andersson <bjorn.andersson@linaro.org>
+
+ description: |
+   This binding describes the Qualcomm Shared Memory Manager, used to share data
+   between various subsystems and OSes in Qualcomm platforms.
+
+ properties:
+   compatible:
+     const: qcom,smem
+
+   memory-region:
+     maxItems: 1
+     description: handle to memory reservation for main SMEM memory region.
+
+   hwlocks:
+     maxItems: 1
+
+   qcom,rpm-msg-ram:
+     $ref: /schemas/types.yaml#/definitions/phandle
+     description: handle to RPM message memory resource
+
+ required:
+   - compatible
+   - memory-region
+   - hwlocks
+
+ additionalProperties: false
+
+ examples:
+   - |
+     reserved-memory {
+         #address-cells = <1>;
+         #size-cells = <1>;
+         ranges;
+
+         smem_region: smem@fa00000 {
+             reg = <0xfa00000 0x200000>;
+             no-map;
+         };
+     };
+
+     smem {
+         compatible = "qcom,smem";
+
+         memory-region = <&smem_region>;
+         qcom,rpm-msg-ram = <&rpm_msg_ram>;
+
+         hwlocks = <&tcsr_mutex 3>;
+     };
+
+     soc {
+         #address-cells = <1>;
+         #size-cells = <1>;
+         ranges;
+
+         rpm_msg_ram: sram@fc428000 {
+             compatible = "qcom,rpm-msg-ram";
+             reg = <0xfc428000 0x4000>;
+         };
+     };
+
+ ...
+76
Documentation/devicetree/bindings/soc/ti/ti,pruss.yaml
···
  ranges:
    maxItems: 1

+ dma-ranges:
+   maxItems: 1
+
  power-domains:
    description: |
      This property is as per sci-pm-domain.txt.
···
      that is common to all the PRU cores. This should be represented as an
      interrupt-controller node.

+   allOf:
+     - $ref: /schemas/interrupt-controller/ti,pruss-intc.yaml#
+
    type: object

  mdio@[a-f0-9]+$:
···
      inactive by using the standard DT string property, "status". The ICSSG IP
      present on K3 SoCs have additional auxiliary PRU cores with slightly
      different IP integration.
+
+   allOf:
+     - $ref: /schemas/remoteproc/ti,pru-rproc.yaml#

    type: object
···
          reg = <0x32000 0x58>;
      };

+     pruss_intc: interrupt-controller@20000 {
+         compatible = "ti,pruss-intc";
+         reg = <0x20000 0x2000>;
+         interrupt-controller;
+         #interrupt-cells = <3>;
+         interrupts = <20 21 22 23 24 25 26 27>;
+         interrupt-names = "host_intr0", "host_intr1",
+                           "host_intr2", "host_intr3",
+                           "host_intr4", "host_intr5",
+                           "host_intr6", "host_intr7";
+     };
+
+     pru0: pru@34000 {
+         compatible = "ti,am3356-pru";
+         reg = <0x34000 0x2000>,
+               <0x22000 0x400>,
+               <0x22400 0x100>;
+         reg-names = "iram", "control", "debug";
+         firmware-name = "am335x-pru0-fw";
+     };
+
+     pru1: pru@38000 {
+         compatible = "ti,am3356-pru";
+         reg = <0x38000 0x2000>,
+               <0x24000 0x400>,
+               <0x24400 0x100>;
+         reg-names = "iram", "control", "debug";
+         firmware-name = "am335x-pru1-fw";
+     };
+
      pruss_mdio: mdio@32400 {
          compatible = "ti,davinci_mdio";
          reg = <0x32400 0x90>;
···
      pruss1_mii_rt: mii-rt@32000 {
          compatible = "ti,pruss-mii", "syscon";
          reg = <0x32000 0x58>;
+     };
+
+     pruss1_intc: interrupt-controller@20000 {
+         compatible = "ti,pruss-intc";
+         reg = <0x20000 0x2000>;
+         interrupt-controller;
+         #interrupt-cells = <3>;
+         interrupts = <GIC_SPI 20 IRQ_TYPE_LEVEL_HIGH>,
+                      <GIC_SPI 21 IRQ_TYPE_LEVEL_HIGH>,
+                      <GIC_SPI 22 IRQ_TYPE_LEVEL_HIGH>,
+                      <GIC_SPI 23 IRQ_TYPE_LEVEL_HIGH>,
+                      <GIC_SPI 24 IRQ_TYPE_LEVEL_HIGH>,
+                      <GIC_SPI 26 IRQ_TYPE_LEVEL_HIGH>,
+                      <GIC_SPI 27 IRQ_TYPE_LEVEL_HIGH>;
+         interrupt-names = "host_intr0", "host_intr1",
+                           "host_intr2", "host_intr3",
+                           "host_intr4",
+                           "host_intr6", "host_intr7";
+         ti,irqs-reserved = /bits/ 8 <0x20>; /* BIT(5) */
+     };
+
+     pru1_0: pru@34000 {
+         compatible = "ti,am4376-pru";
+         reg = <0x34000 0x3000>,
+               <0x22000 0x400>,
+               <0x22400 0x100>;
+         reg-names = "iram", "control", "debug";
+         firmware-name = "am437x-pru1_0-fw";
+     };
+
+     pru1_1: pru@38000 {
+         compatible = "ti,am4376-pru";
+         reg = <0x38000 0x3000>,
+               <0x24000 0x400>,
+               <0x24400 0x100>;
+         reg-names = "iram", "control", "debug";
+         firmware-name = "am437x-pru1_1-fw";
      };

      pruss1_mdio: mdio@32400 {
+1
Documentation/devicetree/bindings/sram/allwinner,sun4i-a10-system-control.yaml
···
      - items:
          - const: allwinner,suniv-f1c100s-system-control
          - const: allwinner,sun4i-a10-system-control
+     - const: allwinner,sun50i-h616-system-control

  reg:
    maxItems: 1
+12
MAINTAINERS
···
  S:	Maintained
  F:	drivers/firmware/broadcom/*

+ BROADCOM PMB (POWER MANAGEMENT BUS) DRIVER
+ M:	Rafał Miłecki <rafal@milecki.pl>
+ M:	Florian Fainelli <f.fainelli@gmail.com>
+ M:	bcm-kernel-feedback-list@broadcom.com
+ L:	linux-pm@vger.kernel.org
+ S:	Maintained
+ T:	git git://github.com/broadcom/stblinux.git
+ F:	drivers/soc/bcm/bcm-pmb.c
+ F:	include/dt-bindings/soc/bcm-pmb.h
+
  BROADCOM SPECIFIC AMBA DRIVER (BCMA)
  M:	Rafał Miłecki <zajec5@gmail.com>
  L:	linux-wireless@vger.kernel.org
···
  SYSTEM CONTROL & POWER/MANAGEMENT INTERFACE (SCPI/SCMI) Message Protocol drivers
  M:	Sudeep Holla <sudeep.holla@arm.com>
+ R:	Cristian Marussi <cristian.marussi@arm.com>
  L:	linux-arm-kernel@lists.infradead.org
  S:	Maintained
  F:	Documentation/devicetree/bindings/arm/arm,sc[mp]i.txt
···
  F:	drivers/cpufreq/sc[mp]i-cpufreq.c
  F:	drivers/firmware/arm_scmi/
  F:	drivers/firmware/arm_scpi.c
+ F:	drivers/regulator/scmi-regulator.c
  F:	drivers/reset/reset-scmi.c
  F:	include/linux/sc[mp]i_protocol.h
  F:	include/trace/events/scmi.h
+1 -1
drivers/bus/mvebu-mbus.c
···
  	mbus->sdramwins_base = ioremap(sdramwins_phys_base, sdramwins_size);
  	if (!mbus->sdramwins_base) {
- 		iounmap(mbus_state.mbuswins_base);
+ 		iounmap(mbus->mbuswins_base);
  		return -ENOMEM;
  	}
+153 -62
drivers/bus/sunxi-rsb.c
···
  #include <linux/of_irq.h>
  #include <linux/of_platform.h>
  #include <linux/platform_device.h>
+ #include <linux/pm.h>
+ #include <linux/pm_runtime.h>
  #include <linux/regmap.h>
  #include <linux/reset.h>
  #include <linux/slab.h>
···
  	struct completion complete;
  	struct mutex lock;
  	unsigned int status;
+ 	u32 clk_freq;
  };

  /* bus / slave device related functions */
···
  {
  	const struct sunxi_rsb_driver *drv = to_sunxi_rsb_driver(dev->driver);

- 	return drv->remove(to_sunxi_rsb_device(dev));
+ 	drv->remove(to_sunxi_rsb_device(dev));
+
+ 	return 0;
  }

  static struct bus_type sunxi_rsb_bus = {
···
  		return -EINVAL;
  	}

+ 	ret = pm_runtime_resume_and_get(rsb->dev);
+ 	if (ret)
+ 		return ret;
+
  	mutex_lock(&rsb->lock);

  	writel(addr, rsb->regs + RSB_ADDR);
···
  unlock:
  	mutex_unlock(&rsb->lock);
+
+ 	pm_runtime_mark_last_busy(rsb->dev);
+ 	pm_runtime_put_autosuspend(rsb->dev);

  	return ret;
  }
···
  		return -EINVAL;
  	}

+ 	ret = pm_runtime_resume_and_get(rsb->dev);
+ 	if (ret)
+ 		return ret;
+
  	mutex_lock(&rsb->lock);

  	writel(addr, rsb->regs + RSB_ADDR);
···
  	ret = _sunxi_rsb_run_xfer(rsb);

  	mutex_unlock(&rsb->lock);
+
+ 	pm_runtime_mark_last_busy(rsb->dev);
+ 	pm_runtime_put_autosuspend(rsb->dev);

  	return ret;
  }
···
  	return 0;
  }

- static const struct of_device_id sunxi_rsb_of_match_table[] = {
- 	{ .compatible = "allwinner,sun8i-a23-rsb" },
- 	{}
- };
- MODULE_DEVICE_TABLE(of, sunxi_rsb_of_match_table);
+ static int sunxi_rsb_hw_init(struct sunxi_rsb *rsb)
+ {
+ 	struct device *dev = rsb->dev;
+ 	unsigned long p_clk_freq;
+ 	u32 clk_delay, reg;
+ 	int clk_div, ret;
+
+ 	ret = clk_prepare_enable(rsb->clk);
+ 	if (ret) {
+ 		dev_err(dev, "failed to enable clk: %d\n", ret);
+ 		return ret;
+ 	}
+
+ 	ret = reset_control_deassert(rsb->rstc);
+ 	if (ret) {
+ 		dev_err(dev, "failed to deassert reset line: %d\n", ret);
+ 		goto err_clk_disable;
+ 	}
+
+ 	/* reset the controller */
+ 	writel(RSB_CTRL_SOFT_RST, rsb->regs + RSB_CTRL);
+ 	readl_poll_timeout(rsb->regs + RSB_CTRL, reg,
+ 			   !(reg & RSB_CTRL_SOFT_RST), 1000, 100000);
+
+ 	/*
+ 	 * Clock frequency and delay calculation code is from
+ 	 * Allwinner U-boot sources.
+ 	 *
+ 	 * From A83 user manual:
+ 	 * bus clock frequency = parent clock frequency / (2 * (divider + 1))
+ 	 */
+ 	p_clk_freq = clk_get_rate(rsb->clk);
+ 	clk_div = p_clk_freq / rsb->clk_freq / 2;
+ 	if (!clk_div)
+ 		clk_div = 1;
+ 	else if (clk_div > RSB_CCR_MAX_CLK_DIV + 1)
+ 		clk_div = RSB_CCR_MAX_CLK_DIV + 1;
+
+ 	clk_delay = clk_div >> 1;
+ 	if (!clk_delay)
+ 		clk_delay = 1;
+
+ 	dev_info(dev, "RSB running at %lu Hz\n", p_clk_freq / clk_div / 2);
+ 	writel(RSB_CCR_SDA_OUT_DELAY(clk_delay) | RSB_CCR_CLK_DIV(clk_div - 1),
+ 	       rsb->regs + RSB_CCR);
+
+ 	return 0;
+
+ err_clk_disable:
+ 	clk_disable_unprepare(rsb->clk);
+
+ 	return ret;
+ }
+
+ static void sunxi_rsb_hw_exit(struct sunxi_rsb *rsb)
+ {
+ 	/* Keep the clock and PM reference counts consistent. */
+ 	if (pm_runtime_status_suspended(rsb->dev))
+ 		pm_runtime_resume(rsb->dev);
+ 	reset_control_assert(rsb->rstc);
+ 	clk_disable_unprepare(rsb->clk);
+ }
+
+ static int __maybe_unused sunxi_rsb_runtime_suspend(struct device *dev)
+ {
+ 	struct sunxi_rsb *rsb = dev_get_drvdata(dev);
+
+ 	clk_disable_unprepare(rsb->clk);
+
+ 	return 0;
+ }
+
+ static int __maybe_unused sunxi_rsb_runtime_resume(struct device *dev)
+ {
+ 	struct sunxi_rsb *rsb = dev_get_drvdata(dev);
+
+ 	return clk_prepare_enable(rsb->clk);
+ }
+
+ static int __maybe_unused sunxi_rsb_suspend(struct device *dev)
+ {
+ 	struct sunxi_rsb *rsb = dev_get_drvdata(dev);
+
+ 	sunxi_rsb_hw_exit(rsb);
+
+ 	return 0;
+ }
+
+ static int __maybe_unused sunxi_rsb_resume(struct device *dev)
+ {
+ 	struct sunxi_rsb *rsb = dev_get_drvdata(dev);
+
+ 	return sunxi_rsb_hw_init(rsb);
+ }

  static int sunxi_rsb_probe(struct platform_device *pdev)
  {
···
  	struct device_node *np = dev->of_node;
  	struct resource *r;
  	struct sunxi_rsb *rsb;
- 	unsigned long p_clk_freq;
- 	u32 clk_delay, clk_freq = 3000000;
- 	int clk_div, irq, ret;
- 	u32 reg;
+ 	u32 clk_freq = 3000000;
+ 	int irq, ret;

  	of_property_read_u32(np, "clock-frequency", &clk_freq);
  	if (clk_freq > RSB_MAX_FREQ) {
···
  		return -ENOMEM;

  	rsb->dev = dev;
+ 	rsb->clk_freq = clk_freq;
  	platform_set_drvdata(pdev, rsb);
  	r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
  	rsb->regs = devm_ioremap_resource(dev, r);
···
  		return ret;
  	}

- 	ret = clk_prepare_enable(rsb->clk);
- 	if (ret) {
- 		dev_err(dev, "failed to enable clk: %d\n", ret);
- 		return ret;
- 	}
-
- 	p_clk_freq = clk_get_rate(rsb->clk);
-
  	rsb->rstc = devm_reset_control_get(dev, NULL);
  	if (IS_ERR(rsb->rstc)) {
  		ret = PTR_ERR(rsb->rstc);
  		dev_err(dev, "failed to retrieve reset controller: %d\n", ret);
- 		goto err_clk_disable;
- 	}
-
- 	ret = reset_control_deassert(rsb->rstc);
- 	if (ret) {
- 		dev_err(dev, "failed to deassert reset line: %d\n", ret);
- 		goto err_clk_disable;
+ 		return ret;
  	}

  	init_completion(&rsb->complete);
  	mutex_init(&rsb->lock);

- 	/* reset the controller */
- 	writel(RSB_CTRL_SOFT_RST, rsb->regs + RSB_CTRL);
- 	readl_poll_timeout(rsb->regs + RSB_CTRL, reg,
- 			   !(reg & RSB_CTRL_SOFT_RST), 1000, 100000);
-
- 	/*
- 	 * Clock frequency and delay calculation code is from
- 	 * Allwinner U-boot sources.
- 	 *
- 	 * From A83 user manual:
- 	 * bus clock frequency = parent clock frequency / (2 * (divider + 1))
- 	 */
- 	clk_div = p_clk_freq / clk_freq / 2;
- 	if (!clk_div)
- 		clk_div = 1;
- 	else if (clk_div > RSB_CCR_MAX_CLK_DIV + 1)
- 		clk_div = RSB_CCR_MAX_CLK_DIV + 1;
-
- 	clk_delay = clk_div >> 1;
- 	if (!clk_delay)
- 		clk_delay = 1;
-
- 	dev_info(dev, "RSB running at %lu Hz\n", p_clk_freq / clk_div / 2);
- 	writel(RSB_CCR_SDA_OUT_DELAY(clk_delay) | RSB_CCR_CLK_DIV(clk_div - 1),
- 	       rsb->regs + RSB_CCR);
-
  	ret = devm_request_irq(dev, irq, sunxi_rsb_irq, 0, RSB_CTRL_NAME, rsb);
  	if (ret) {
  		dev_err(dev, "can't register interrupt handler irq %d: %d\n",
  			irq, ret);
- 		goto err_reset_assert;
+ 		return ret;
  	}
+
+ 	ret = sunxi_rsb_hw_init(rsb);
+ 	if (ret)
+ 		return ret;

  	/* initialize all devices on the bus into RSB mode */
  	ret = sunxi_rsb_init_device_mode(rsb);
  	if (ret)
  		dev_warn(dev, "Initialize device mode failed: %d\n", ret);

+ 	pm_suspend_ignore_children(dev, true);
+ 	pm_runtime_set_active(dev);
+ 	pm_runtime_set_autosuspend_delay(dev, MSEC_PER_SEC);
+ 	pm_runtime_use_autosuspend(dev);
+ 	pm_runtime_enable(dev);
+
  	of_rsb_register_devices(rsb);

  	return 0;
-
- err_reset_assert:
- 	reset_control_assert(rsb->rstc);
-
- err_clk_disable:
- 	clk_disable_unprepare(rsb->clk);
-
- 	return ret;
  }

  static int sunxi_rsb_remove(struct platform_device *pdev)
···
  	struct sunxi_rsb *rsb = platform_get_drvdata(pdev);

  	device_for_each_child(rsb->dev, NULL, sunxi_rsb_remove_devices);
- 	reset_control_assert(rsb->rstc);
- 	clk_disable_unprepare(rsb->clk);
+ 	pm_runtime_disable(&pdev->dev);
+ 	sunxi_rsb_hw_exit(rsb);

  	return 0;
  }

+ static void sunxi_rsb_shutdown(struct platform_device *pdev)
+ {
+ 	struct sunxi_rsb *rsb = platform_get_drvdata(pdev);
+
+ 	pm_runtime_disable(&pdev->dev);
+ 	sunxi_rsb_hw_exit(rsb);
+ }
+
+ static const struct dev_pm_ops sunxi_rsb_dev_pm_ops = {
+ 	SET_RUNTIME_PM_OPS(sunxi_rsb_runtime_suspend,
+ 			   sunxi_rsb_runtime_resume, NULL)
+ 	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(sunxi_rsb_suspend, sunxi_rsb_resume)
+ };
+
+ static const struct of_device_id sunxi_rsb_of_match_table[] = {
+ 	{ .compatible = "allwinner,sun8i-a23-rsb" },
+ 	{}
+ };
+ MODULE_DEVICE_TABLE(of, sunxi_rsb_of_match_table);
+
  static struct platform_driver sunxi_rsb_driver = {
  	.probe = sunxi_rsb_probe,
  	.remove = sunxi_rsb_remove,
+ 	.shutdown = sunxi_rsb_shutdown,
  	.driver = {
  		.name = RSB_CTRL_NAME,
  		.of_match_table = sunxi_rsb_of_match_table,
+ 		.pm = &sunxi_rsb_dev_pm_ops,
  	},
  };
+3
drivers/clk/tegra/Kconfig
···
  	depends on ARCH_TEGRA_124_SOC || ARCH_TEGRA_210_SOC
  	select PM_OPP
  	def_bool y
+
+ config TEGRA124_CLK_EMC
+ 	bool
+1 -1
drivers/clk/tegra/Makefile
···
  obj-$(CONFIG_ARCH_TEGRA_114_SOC) += clk-tegra114.o
  obj-$(CONFIG_ARCH_TEGRA_124_SOC) += clk-tegra124.o
  obj-$(CONFIG_TEGRA_CLK_DFLL) += clk-tegra124-dfll-fcpu.o
- obj-$(CONFIG_TEGRA124_EMC) += clk-tegra124-emc.o
+ obj-$(CONFIG_TEGRA124_CLK_EMC) += clk-tegra124-emc.o
  obj-$(CONFIG_ARCH_TEGRA_132_SOC) += clk-tegra124.o
  obj-y += cvb.o
  obj-$(CONFIG_ARCH_TEGRA_210_SOC) += clk-tegra210.o
+36 -5
drivers/clk/tegra/clk-tegra124-emc.c
··· 11 11 #include <linux/clk-provider.h> 12 12 #include <linux/clk.h> 13 13 #include <linux/clkdev.h> 14 + #include <linux/clk/tegra.h> 14 15 #include <linux/delay.h> 16 + #include <linux/export.h> 15 17 #include <linux/io.h> 16 18 #include <linux/module.h> 17 19 #include <linux/of_address.h> ··· 23 21 #include <linux/string.h> 24 22 25 23 #include <soc/tegra/fuse.h> 26 - #include <soc/tegra/emc.h> 27 24 28 25 #include "clk.h" 29 26 ··· 81 80 int num_timings; 82 81 struct emc_timing *timings; 83 82 spinlock_t *lock; 83 + 84 + tegra124_emc_prepare_timing_change_cb *prepare_timing_change; 85 + tegra124_emc_complete_timing_change_cb *complete_timing_change; 84 86 }; 85 87 86 88 /* Common clock framework callback implementations */ ··· 180 176 if (tegra->emc) 181 177 return tegra->emc; 182 178 179 + if (!tegra->prepare_timing_change || !tegra->complete_timing_change) 180 + return NULL; 181 + 183 182 if (!tegra->emc_node) 184 183 return NULL; 185 184 ··· 248 241 249 242 div = timing->parent_rate / (timing->rate / 2) - 2; 250 243 251 - err = tegra_emc_prepare_timing_change(emc, timing->rate); 244 + err = tegra->prepare_timing_change(emc, timing->rate); 252 245 if (err) 253 246 return err; 254 247 ··· 266 259 267 260 spin_unlock_irqrestore(tegra->lock, flags); 268 261 269 - tegra_emc_complete_timing_change(emc, timing->rate); 262 + tegra->complete_timing_change(emc, timing->rate); 270 263 271 264 clk_hw_reparent(&tegra->hw, __clk_get_hw(timing->parent)); 272 265 clk_disable_unprepare(tegra->prev_parent); ··· 480 473 .get_parent = emc_get_parent, 481 474 }; 482 475 483 - struct clk *tegra_clk_register_emc(void __iomem *base, struct device_node *np, 484 - spinlock_t *lock) 476 + struct clk *tegra124_clk_register_emc(void __iomem *base, struct device_node *np, 477 + spinlock_t *lock) 485 478 { 486 479 struct tegra_clk_emc *tegra; 487 480 struct clk_init_data init; ··· 545 538 546 539 return clk; 547 540 }; 541 + 542 + void 
tegra124_clk_set_emc_callbacks(tegra124_emc_prepare_timing_change_cb *prep_cb, 543 + tegra124_emc_complete_timing_change_cb *complete_cb) 544 + { 545 + struct clk *clk = __clk_lookup("emc"); 546 + struct tegra_clk_emc *tegra; 547 + struct clk_hw *hw; 548 + 549 + if (clk) { 550 + hw = __clk_get_hw(clk); 551 + tegra = container_of(hw, struct tegra_clk_emc, hw); 552 + 553 + tegra->prepare_timing_change = prep_cb; 554 + tegra->complete_timing_change = complete_cb; 555 + } 556 + } 557 + EXPORT_SYMBOL_GPL(tegra124_clk_set_emc_callbacks); 558 + 559 + bool tegra124_clk_emc_driver_available(struct clk_hw *hw) 560 + { 561 + struct tegra_clk_emc *tegra = container_of(hw, struct tegra_clk_emc, hw); 562 + 563 + return tegra->prepare_timing_change && tegra->complete_timing_change; 564 + }
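The clk-tegra124-emc.c change above replaces direct calls into the EMC memory driver with a callback handshake: the memory driver installs `prepare`/`complete` callbacks at probe time, and the clock side treats the EMC clock as unavailable until both are set. The handshake can be sketched in plain userspace C (all names below are illustrative stand-ins, not the kernel API):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the callback handshake: the clock driver only considers the
 * EMC clock usable once the memory driver has installed both callbacks. */

typedef int (*prepare_cb)(unsigned long rate);
typedef void (*complete_cb)(unsigned long rate);

struct emc_clk {
	prepare_cb prepare_timing_change;
	complete_cb complete_timing_change;
};

static struct emc_clk emc;

/* Plays the role of tegra124_clk_set_emc_callbacks(): both pointers are
 * set (or cleared) together by the memory-controller driver. */
static void set_emc_callbacks(prepare_cb prep, complete_cb done)
{
	emc.prepare_timing_change = prep;
	emc.complete_timing_change = done;
}

/* Plays the role of tegra124_clk_emc_driver_available(): the clock
 * provider keeps returning -EPROBE_DEFER while this is false. */
static int emc_driver_available(void)
{
	return emc.prepare_timing_change && emc.complete_timing_change;
}

/* Dummy callbacks standing in for the memory driver's implementations. */
static int dummy_prepare(unsigned long rate) { (void)rate; return 0; }
static void dummy_complete(unsigned long rate) { (void)rate; }
```

The same check is what the new `tegra124_clk_src_onecell_get()` in clk-tegra124.c uses to defer consumers of the EMC clock until the memory driver has probed.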
+23 -3
drivers/clk/tegra/clk-tegra124.c
··· 1500 1500 writel(plld_base, clk_base + PLLD_BASE); 1501 1501 } 1502 1502 1503 + static struct clk *tegra124_clk_src_onecell_get(struct of_phandle_args *clkspec, 1504 + void *data) 1505 + { 1506 + struct clk_hw *hw; 1507 + struct clk *clk; 1508 + 1509 + clk = of_clk_src_onecell_get(clkspec, data); 1510 + if (IS_ERR(clk)) 1511 + return clk; 1512 + 1513 + hw = __clk_get_hw(clk); 1514 + 1515 + if (clkspec->args[0] == TEGRA124_CLK_EMC) { 1516 + if (!tegra124_clk_emc_driver_available(hw)) 1517 + return ERR_PTR(-EPROBE_DEFER); 1518 + } 1519 + 1520 + return clk; 1521 + } 1522 + 1503 1523 /** 1504 1524 * tegra124_132_clock_init_post - clock initialization postamble for T124/T132 1505 1525 * @np: struct device_node * of the DT node for the SoC CAR IP block ··· 1536 1516 &pll_x_params); 1537 1517 tegra_init_special_resets(1, tegra124_reset_assert, 1538 1518 tegra124_reset_deassert); 1539 - tegra_add_of_provider(np, of_clk_src_onecell_get); 1519 + tegra_add_of_provider(np, tegra124_clk_src_onecell_get); 1540 1520 1541 - clks[TEGRA124_CLK_EMC] = tegra_clk_register_emc(clk_base, np, 1542 - &emc_lock); 1521 + clks[TEGRA124_CLK_EMC] = tegra124_clk_register_emc(clk_base, np, 1522 + &emc_lock); 1543 1523 1544 1524 tegra_register_devclks(devclks, ARRAY_SIZE(devclks)); 1545 1525
+12 -6
drivers/clk/tegra/clk.h
··· 881 881 void __iomem *pmc_base, struct tegra_clk *tegra_clks, 882 882 struct tegra_clk_pll_params *pll_params); 883 883 884 - #ifdef CONFIG_TEGRA124_EMC 885 - struct clk *tegra_clk_register_emc(void __iomem *base, struct device_node *np, 886 - spinlock_t *lock); 884 + #ifdef CONFIG_TEGRA124_CLK_EMC 885 + struct clk *tegra124_clk_register_emc(void __iomem *base, struct device_node *np, 886 + spinlock_t *lock); 887 + bool tegra124_clk_emc_driver_available(struct clk_hw *emc_hw); 887 888 #else 888 - static inline struct clk *tegra_clk_register_emc(void __iomem *base, 889 - struct device_node *np, 890 - spinlock_t *lock) 889 + static inline struct clk * 890 + tegra124_clk_register_emc(void __iomem *base, struct device_node *np, 891 + spinlock_t *lock) 891 892 { 892 893 return NULL; 894 + } 895 + 896 + static inline bool tegra124_clk_emc_driver_available(struct clk_hw *emc_hw) 897 + { 898 + return false; 893 899 } 894 900 #endif 895 901
+2 -2
drivers/firmware/arm_scmi/driver.c
··· 848 848 struct scmi_info *info = platform_get_drvdata(pdev); 849 849 struct idr *idr = &info->tx_idr; 850 850 851 - scmi_notification_exit(&info->handle); 852 - 853 851 mutex_lock(&scmi_list_mutex); 854 852 if (info->users) 855 853 ret = -EBUSY; ··· 857 859 858 860 if (ret) 859 861 return ret; 862 + 863 + scmi_notification_exit(&info->handle); 860 864 861 865 /* Safe to free channels since no more users */ 862 866 ret = idr_for_each(idr, info->desc->ops->chan_free, idr);
+41 -1
drivers/firmware/arm_scmi/smc.c
··· 9 9 #include <linux/arm-smccc.h> 10 10 #include <linux/device.h> 11 11 #include <linux/err.h> 12 + #include <linux/interrupt.h> 12 13 #include <linux/mutex.h> 13 14 #include <linux/of.h> 14 15 #include <linux/of_address.h> 16 + #include <linux/of_irq.h> 15 17 #include <linux/slab.h> 16 18 17 19 #include "common.h" ··· 25 23 * @shmem: Transmit/Receive shared memory area 26 24 * @shmem_lock: Lock to protect access to Tx/Rx shared memory area 27 25 * @func_id: smc/hvc call function id 26 + * @irq: Optional; employed when platforms indicates msg completion by intr. 27 + * @tx_complete: Optional, employed only when irq is valid. 28 28 */ 29 29 30 30 struct scmi_smc { ··· 34 30 struct scmi_shared_mem __iomem *shmem; 35 31 struct mutex shmem_lock; 36 32 u32 func_id; 33 + int irq; 34 + struct completion tx_complete; 37 35 }; 36 + 37 + static irqreturn_t smc_msg_done_isr(int irq, void *data) 38 + { 39 + struct scmi_smc *scmi_info = data; 40 + 41 + complete(&scmi_info->tx_complete); 42 + 43 + return IRQ_HANDLED; 44 + } 38 45 39 46 static bool smc_chan_available(struct device *dev, int idx) 40 47 { ··· 66 51 struct resource res; 67 52 struct device_node *np; 68 53 u32 func_id; 69 - int ret; 54 + int ret, irq; 70 55 71 56 if (!tx) 72 57 return -ENODEV; ··· 93 78 ret = of_property_read_u32(dev->of_node, "arm,smc-id", &func_id); 94 79 if (ret < 0) 95 80 return ret; 81 + 82 + /* 83 + * If there is an interrupt named "a2p", then the service and 84 + * completion of a message is signaled by an interrupt rather than by 85 + * the return of the SMC call. 
86 + */ 87 + irq = of_irq_get_byname(cdev->of_node, "a2p"); 88 + if (irq > 0) { 89 + ret = devm_request_irq(dev, irq, smc_msg_done_isr, 90 + IRQF_NO_SUSPEND, 91 + dev_name(dev), scmi_info); 92 + if (ret) { 93 + dev_err(dev, "failed to setup SCMI smc irq\n"); 94 + return ret; 95 + } 96 + init_completion(&scmi_info->tx_complete); 97 + scmi_info->irq = irq; 98 + } 96 99 97 100 scmi_info->func_id = func_id; 98 101 scmi_info->cinfo = cinfo; ··· 143 110 144 111 shmem_tx_prepare(scmi_info->shmem, xfer); 145 112 113 + if (scmi_info->irq) 114 + reinit_completion(&scmi_info->tx_complete); 115 + 146 116 arm_smccc_1_1_invoke(scmi_info->func_id, 0, 0, 0, 0, 0, 0, 0, &res); 117 + 118 + if (scmi_info->irq) 119 + wait_for_completion(&scmi_info->tx_complete); 120 + 147 121 scmi_rx_callback(scmi_info->cinfo, shmem_read_header(scmi_info->shmem)); 148 122 149 123 mutex_unlock(&scmi_info->shmem_lock);
+4 -4
drivers/memory/Kconfig
··· 173 173 memory devices such as NAND and SRAM. 174 174 175 175 config MTK_SMI 176 - bool "Mediatek SoC Memory Controller driver" if COMPILE_TEST 176 + tristate "MediaTek SoC Memory Controller driver" if COMPILE_TEST 177 177 depends on ARCH_MEDIATEK || COMPILE_TEST 178 178 help 179 179 This driver is for the Memory Controller module in MediaTek SoCs, ··· 202 202 depends on ARCH_RENESAS || COMPILE_TEST 203 203 select REGMAP_MMIO 204 204 help 205 - This supports Renesas R-Car Gen3 RPC-IF which provides either SPI 206 - host or HyperFlash. You'll have to select individual components 207 - under the corresponding menu. 205 + This supports Renesas R-Car Gen3 or RZ/G2 RPC-IF which provides 206 + either SPI host or HyperFlash. You'll have to select individual 207 + components under the corresponding menu. 208 208 209 209 config STM32_FMC2_EBI 210 210 tristate "Support for FMC2 External Bus Interface on STM32MP SoCs"
+1 -2
drivers/memory/emif.c
··· 70 70 }; 71 71 72 72 static struct emif_data *emif1; 73 - static spinlock_t emif_lock; 73 + static DEFINE_SPINLOCK(emif_lock); 74 74 static unsigned long irq_state; 75 75 static u32 t_ck; /* DDR clock period in ps */ 76 76 static LIST_HEAD(device_list); ··· 1531 1531 /* One-time actions taken on probing the first device */ 1532 1532 if (!emif1) { 1533 1533 emif1 = emif; 1534 - spin_lock_init(&emif_lock); 1535 1534 1536 1535 /* 1537 1536 * TODO: register notifiers for frequency and voltage
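The emif.c change swaps a runtime `spin_lock_init()` inside the first probe for `DEFINE_SPINLOCK()`, so the lock is valid from the moment the module loads with no "has probe run yet?" ordering concern. The userspace analogue is a statically initialized mutex (illustrative sketch, not the kernel code):

```c
#include <pthread.h>

/* Statically initialized lock: usable from the very first call, the same
 * role DEFINE_SPINLOCK(emif_lock) plays in the kernel driver. */
static pthread_mutex_t emif_lock = PTHREAD_MUTEX_INITIALIZER;
static int irq_state;

/* Returns the previous state; safe even before any "probe" has run. */
static int set_irq_state(int state)
{
	int old;

	pthread_mutex_lock(&emif_lock);
	old = irq_state;
	irq_state = state;
	pthread_mutex_unlock(&emif_lock);
	return old;
}
```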
+17 -21
drivers/memory/mtk-smi.c
··· 130 130 131 131 int mtk_smi_larb_get(struct device *larbdev) 132 132 { 133 - int ret = pm_runtime_get_sync(larbdev); 133 + int ret = pm_runtime_resume_and_get(larbdev); 134 134 135 135 return (ret < 0) ? ret : 0; 136 136 } ··· 374 374 int ret; 375 375 376 376 /* Power on smi-common. */ 377 - ret = pm_runtime_get_sync(larb->smi_common_dev); 377 + ret = pm_runtime_resume_and_get(larb->smi_common_dev); 378 378 if (ret < 0) { 379 379 dev_err(dev, "Failed to pm get for smi-common(%d).\n", ret); 380 380 return ret; ··· 587 587 } 588 588 }; 589 589 590 + static struct platform_driver * const smidrivers[] = { 591 + &mtk_smi_common_driver, 592 + &mtk_smi_larb_driver, 593 + }; 594 + 590 595 static int __init mtk_smi_init(void) 591 596 { 592 - int ret; 593 - 594 - ret = platform_driver_register(&mtk_smi_common_driver); 595 - if (ret != 0) { 596 - pr_err("Failed to register SMI driver\n"); 597 - return ret; 598 - } 599 - 600 - ret = platform_driver_register(&mtk_smi_larb_driver); 601 - if (ret != 0) { 602 - pr_err("Failed to register SMI-LARB driver\n"); 603 - goto err_unreg_smi; 604 - } 605 - return ret; 606 - 607 - err_unreg_smi: 608 - platform_driver_unregister(&mtk_smi_common_driver); 609 - return ret; 597 + return platform_register_drivers(smidrivers, ARRAY_SIZE(smidrivers)); 610 598 } 611 - 612 599 module_init(mtk_smi_init); 600 + 601 + static void __exit mtk_smi_exit(void) 602 + { 603 + platform_unregister_drivers(smidrivers, ARRAY_SIZE(smidrivers)); 604 + } 605 + module_exit(mtk_smi_exit); 606 + 607 + MODULE_DESCRIPTION("MediaTek SMI driver"); 608 + MODULE_LICENSE("GPL v2");
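The mtk-smi.c cleanup replaces the hand-rolled register/unwind sequence with `platform_register_drivers()`, which registers an array of drivers and, on failure, unregisters the already-registered ones in reverse order. That helper's behavior can be sketched generically (hypothetical types and names, for illustration only):

```c
#include <stddef.h>

struct driver {
	int (*probe)(void);	/* returns 0 on success */
	int registered;
};

static void unregister_driver(struct driver *drv)
{
	drv->registered = 0;
}

static int register_driver(struct driver *drv)
{
	int err = drv->probe();

	if (!err)
		drv->registered = 1;
	return err;
}

/* Sketch of platform_register_drivers(): register each entry, and on
 * failure unwind the earlier registrations in reverse order. */
static int register_drivers(struct driver **drvs, size_t count)
{
	size_t i;
	int err;

	for (i = 0; i < count; i++) {
		err = register_driver(drvs[i]);
		if (err) {
			while (i-- > 0)
				unregister_driver(drvs[i]);
			return err;
		}
	}
	return 0;
}

/* Probes standing in for the smi-common and smi-larb drivers. */
static int ok_probe(void) { return 0; }
static int fail_probe(void) { return -1; }

static struct driver a = { ok_probe, 0 }, b = { fail_probe, 0 };
static struct driver c = { ok_probe, 0 }, d = { ok_probe, 0 };
static struct driver *both_fail[] = { &a, &b };
static struct driver *both_ok[] = { &c, &d };
```

Centralizing the unwind is what lets the driver drop its `err_unreg_smi:` label and symmetric `module_exit` loop.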
+2 -2
drivers/memory/samsung/exynos5422-dmc.c
··· 278 278 } 279 279 280 280 /** 281 - * find_target_freq_id() - Finds requested frequency in local DMC configuration 281 + * find_target_freq_idx() - Finds requested frequency in local DMC configuration 282 282 * @dmc: device for which the information is checked 283 283 * @target_rate: requested frequency in KHz 284 284 * ··· 998 998 }; 999 999 1000 1000 /** 1001 - * exynos5_dmc_align_initial_frequency() - Align initial frequency value 1001 + * exynos5_dmc_align_init_freq() - Align initial frequency value 1002 1002 * @dmc: device for which the frequency is going to be set 1003 1003 * @bootloader_init_freq: initial frequency set by the bootloader in KHz 1004 1004 *
+3 -1
drivers/memory/tegra/Kconfig
··· 32 32 external memory. 33 33 34 34 config TEGRA124_EMC 35 - bool "NVIDIA Tegra124 External Memory Controller driver" 35 + tristate "NVIDIA Tegra124 External Memory Controller driver" 36 36 default y 37 37 depends on TEGRA_MC && ARCH_TEGRA_124_SOC 38 + select TEGRA124_CLK_EMC 39 + select PM_OPP 38 40 help 39 41 This driver is for the External Memory Controller (EMC) found on 40 42 Tegra124 chips. The EMC controls the external DRAM on the board.
+7
drivers/memory/tegra/mc.c
··· 176 176 if (!rst_ops) 177 177 return -ENODEV; 178 178 179 + /* DMA flushing will fail if reset is already asserted */ 180 + if (rst_ops->reset_status) { 181 + /* check whether reset is asserted */ 182 + if (rst_ops->reset_status(mc, rst)) 183 + return 0; 184 + } 185 + 179 186 if (rst_ops->block_dma) { 180 187 /* block clients DMA requests */ 181 188 err = rst_ops->block_dma(mc, rst);
+330 -38
drivers/memory/tegra/tegra124-emc.c
··· 9 9 #include <linux/clk-provider.h> 10 10 #include <linux/clk.h> 11 11 #include <linux/clkdev.h> 12 + #include <linux/clk/tegra.h> 12 13 #include <linux/debugfs.h> 13 14 #include <linux/delay.h> 15 + #include <linux/interconnect-provider.h> 14 16 #include <linux/io.h> 17 + #include <linux/module.h> 18 + #include <linux/mutex.h> 15 19 #include <linux/of_address.h> 16 20 #include <linux/of_platform.h> 17 21 #include <linux/platform_device.h> 22 + #include <linux/pm_opp.h> 18 23 #include <linux/sort.h> 19 24 #include <linux/string.h> 20 25 21 - #include <soc/tegra/emc.h> 22 26 #include <soc/tegra/fuse.h> 23 27 #include <soc/tegra/mc.h> 28 + 29 + #include "mc.h" 24 30 25 31 #define EMC_FBIO_CFG5 0x104 26 32 #define EMC_FBIO_CFG5_DRAM_TYPE_MASK 0x3 27 33 #define EMC_FBIO_CFG5_DRAM_TYPE_SHIFT 0 34 + #define EMC_FBIO_CFG5_DRAM_WIDTH_X64 BIT(4) 28 35 29 36 #define EMC_INTSTATUS 0x0 30 37 #define EMC_INTSTATUS_CLKCHANGE_COMPLETE BIT(4) ··· 467 460 u32 emc_zcal_interval; 468 461 }; 469 462 463 + enum emc_rate_request_type { 464 + EMC_RATE_DEBUG, 465 + EMC_RATE_ICC, 466 + EMC_RATE_TYPE_MAX, 467 + }; 468 + 469 + struct emc_rate_request { 470 + unsigned long min_rate; 471 + unsigned long max_rate; 472 + }; 473 + 470 474 struct tegra_emc { 471 475 struct device *dev; 472 476 ··· 488 470 struct clk *clk; 489 471 490 472 enum emc_dram_type dram_type; 473 + unsigned int dram_bus_width; 491 474 unsigned int dram_num; 492 475 493 476 struct emc_timing last_timing; ··· 500 481 unsigned long min_rate; 501 482 unsigned long max_rate; 502 483 } debugfs; 484 + 485 + struct icc_provider provider; 486 + 487 + /* 488 + * There are multiple sources in the EMC driver which could request 489 + * a min/max clock rate, these rates are contained in this array. 
490 + */ 491 + struct emc_rate_request requested_rate[EMC_RATE_TYPE_MAX]; 492 + 493 + /* protect shared rate-change code path */ 494 + struct mutex rate_lock; 503 495 }; 504 496 505 497 /* Timing change sequence functions */ ··· 592 562 return timing; 593 563 } 594 564 595 - int tegra_emc_prepare_timing_change(struct tegra_emc *emc, 596 - unsigned long rate) 565 + static int tegra_emc_prepare_timing_change(struct tegra_emc *emc, 566 + unsigned long rate) 597 567 { 598 568 struct emc_timing *timing = tegra_emc_find_timing(emc, rate); 599 569 struct emc_timing *last = &emc->last_timing; ··· 820 790 return 0; 821 791 } 822 792 823 - void tegra_emc_complete_timing_change(struct tegra_emc *emc, 824 - unsigned long rate) 793 + static void tegra_emc_complete_timing_change(struct tegra_emc *emc, 794 + unsigned long rate) 825 795 { 826 796 struct emc_timing *timing = tegra_emc_find_timing(emc, rate); 827 797 struct emc_timing *last = &emc->last_timing; ··· 899 869 static int emc_init(struct tegra_emc *emc) 900 870 { 901 871 emc->dram_type = readl(emc->regs + EMC_FBIO_CFG5); 872 + 873 + if (emc->dram_type & EMC_FBIO_CFG5_DRAM_WIDTH_X64) 874 + emc->dram_bus_width = 64; 875 + else 876 + emc->dram_bus_width = 32; 877 + 878 + dev_info(emc->dev, "%ubit DRAM bus\n", emc->dram_bus_width); 879 + 902 880 emc->dram_type &= EMC_FBIO_CFG5_DRAM_TYPE_MASK; 903 881 emc->dram_type >>= EMC_FBIO_CFG5_DRAM_TYPE_SHIFT; 904 882 ··· 1025 987 { .compatible = "nvidia,tegra132-emc" }, 1026 988 {} 1027 989 }; 990 + MODULE_DEVICE_TABLE(of, tegra_emc_of_match); 1028 991 1029 992 static struct device_node * 1030 993 tegra_emc_find_node_by_ram_code(struct device_node *node, u32 ram_code) ··· 1044 1005 } 1045 1006 1046 1007 return NULL; 1008 + } 1009 + 1010 + static void tegra_emc_rate_requests_init(struct tegra_emc *emc) 1011 + { 1012 + unsigned int i; 1013 + 1014 + for (i = 0; i < EMC_RATE_TYPE_MAX; i++) { 1015 + emc->requested_rate[i].min_rate = 0; 1016 + emc->requested_rate[i].max_rate = ULONG_MAX; 
1017 + } 1018 + } 1019 + 1020 + static int emc_request_rate(struct tegra_emc *emc, 1021 + unsigned long new_min_rate, 1022 + unsigned long new_max_rate, 1023 + enum emc_rate_request_type type) 1024 + { 1025 + struct emc_rate_request *req = emc->requested_rate; 1026 + unsigned long min_rate = 0, max_rate = ULONG_MAX; 1027 + unsigned int i; 1028 + int err; 1029 + 1030 + /* select minimum and maximum rates among the requested rates */ 1031 + for (i = 0; i < EMC_RATE_TYPE_MAX; i++, req++) { 1032 + if (i == type) { 1033 + min_rate = max(new_min_rate, min_rate); 1034 + max_rate = min(new_max_rate, max_rate); 1035 + } else { 1036 + min_rate = max(req->min_rate, min_rate); 1037 + max_rate = min(req->max_rate, max_rate); 1038 + } 1039 + } 1040 + 1041 + if (min_rate > max_rate) { 1042 + dev_err_ratelimited(emc->dev, "%s: type %u: out of range: %lu %lu\n", 1043 + __func__, type, min_rate, max_rate); 1044 + return -ERANGE; 1045 + } 1046 + 1047 + /* 1048 + * EMC rate-changes should go via OPP API because it manages voltage 1049 + * changes. 
1050 + */ 1051 + err = dev_pm_opp_set_rate(emc->dev, min_rate); 1052 + if (err) 1053 + return err; 1054 + 1055 + emc->requested_rate[type].min_rate = new_min_rate; 1056 + emc->requested_rate[type].max_rate = new_max_rate; 1057 + 1058 + return 0; 1059 + } 1060 + 1061 + static int emc_set_min_rate(struct tegra_emc *emc, unsigned long rate, 1062 + enum emc_rate_request_type type) 1063 + { 1064 + struct emc_rate_request *req = &emc->requested_rate[type]; 1065 + int ret; 1066 + 1067 + mutex_lock(&emc->rate_lock); 1068 + ret = emc_request_rate(emc, rate, req->max_rate, type); 1069 + mutex_unlock(&emc->rate_lock); 1070 + 1071 + return ret; 1072 + } 1073 + 1074 + static int emc_set_max_rate(struct tegra_emc *emc, unsigned long rate, 1075 + enum emc_rate_request_type type) 1076 + { 1077 + struct emc_rate_request *req = &emc->requested_rate[type]; 1078 + int ret; 1079 + 1080 + mutex_lock(&emc->rate_lock); 1081 + ret = emc_request_rate(emc, req->min_rate, rate, type); 1082 + mutex_unlock(&emc->rate_lock); 1083 + 1084 + return ret; 1047 1085 } 1048 1086 1049 1087 /* ··· 1195 1079 if (!tegra_emc_validate_rate(emc, rate)) 1196 1080 return -EINVAL; 1197 1081 1198 - err = clk_set_min_rate(emc->clk, rate); 1082 + err = emc_set_min_rate(emc, rate, EMC_RATE_DEBUG); 1199 1083 if (err < 0) 1200 1084 return err; 1201 1085 ··· 1225 1109 if (!tegra_emc_validate_rate(emc, rate)) 1226 1110 return -EINVAL; 1227 1111 1228 - err = clk_set_max_rate(emc->clk, rate); 1112 + err = emc_set_max_rate(emc, rate, EMC_RATE_DEBUG); 1229 1113 if (err < 0) 1230 1114 return err; 1231 1115 ··· 1242 1126 { 1243 1127 unsigned int i; 1244 1128 int err; 1245 - 1246 - emc->clk = devm_clk_get(dev, "emc"); 1247 - if (IS_ERR(emc->clk)) { 1248 - if (PTR_ERR(emc->clk) != -ENODEV) { 1249 - dev_err(dev, "failed to get EMC clock: %ld\n", 1250 - PTR_ERR(emc->clk)); 1251 - return; 1252 - } 1253 - } 1254 1129 1255 1130 emc->debugfs.min_rate = ULONG_MAX; 1256 1131 emc->debugfs.max_rate = 0; ··· 1282 1175 emc, 
&tegra_emc_debug_max_rate_fops); 1283 1176 } 1284 1177 1178 + static inline struct tegra_emc * 1179 + to_tegra_emc_provider(struct icc_provider *provider) 1180 + { 1181 + return container_of(provider, struct tegra_emc, provider); 1182 + } 1183 + 1184 + static struct icc_node_data * 1185 + emc_of_icc_xlate_extended(struct of_phandle_args *spec, void *data) 1186 + { 1187 + struct icc_provider *provider = data; 1188 + struct icc_node_data *ndata; 1189 + struct icc_node *node; 1190 + 1191 + /* External Memory is the only possible ICC route */ 1192 + list_for_each_entry(node, &provider->nodes, node_list) { 1193 + if (node->id != TEGRA_ICC_EMEM) 1194 + continue; 1195 + 1196 + ndata = kzalloc(sizeof(*ndata), GFP_KERNEL); 1197 + if (!ndata) 1198 + return ERR_PTR(-ENOMEM); 1199 + 1200 + /* 1201 + * SRC and DST nodes should have matching TAG in order to have 1202 + * it set by default for a requested path. 1203 + */ 1204 + ndata->tag = TEGRA_MC_ICC_TAG_ISO; 1205 + ndata->node = node; 1206 + 1207 + return ndata; 1208 + } 1209 + 1210 + return ERR_PTR(-EPROBE_DEFER); 1211 + } 1212 + 1213 + static int emc_icc_set(struct icc_node *src, struct icc_node *dst) 1214 + { 1215 + struct tegra_emc *emc = to_tegra_emc_provider(dst->provider); 1216 + unsigned long long peak_bw = icc_units_to_bps(dst->peak_bw); 1217 + unsigned long long avg_bw = icc_units_to_bps(dst->avg_bw); 1218 + unsigned long long rate = max(avg_bw, peak_bw); 1219 + unsigned int dram_data_bus_width_bytes; 1220 + const unsigned int ddr = 2; 1221 + int err; 1222 + 1223 + /* 1224 + * Tegra124 EMC runs on a clock rate of SDRAM bus. This means that 1225 + * EMC clock rate is twice smaller than the peak data rate because 1226 + * data is sampled on both EMC clock edges. 
1227 + */ 1228 + dram_data_bus_width_bytes = emc->dram_bus_width / 8; 1229 + do_div(rate, ddr * dram_data_bus_width_bytes); 1230 + rate = min_t(u64, rate, U32_MAX); 1231 + 1232 + err = emc_set_min_rate(emc, rate, EMC_RATE_ICC); 1233 + if (err) 1234 + return err; 1235 + 1236 + return 0; 1237 + } 1238 + 1239 + static int tegra_emc_interconnect_init(struct tegra_emc *emc) 1240 + { 1241 + const struct tegra_mc_soc *soc = emc->mc->soc; 1242 + struct icc_node *node; 1243 + int err; 1244 + 1245 + emc->provider.dev = emc->dev; 1246 + emc->provider.set = emc_icc_set; 1247 + emc->provider.data = &emc->provider; 1248 + emc->provider.aggregate = soc->icc_ops->aggregate; 1249 + emc->provider.xlate_extended = emc_of_icc_xlate_extended; 1250 + 1251 + err = icc_provider_add(&emc->provider); 1252 + if (err) 1253 + goto err_msg; 1254 + 1255 + /* create External Memory Controller node */ 1256 + node = icc_node_create(TEGRA_ICC_EMC); 1257 + if (IS_ERR(node)) { 1258 + err = PTR_ERR(node); 1259 + goto del_provider; 1260 + } 1261 + 1262 + node->name = "External Memory Controller"; 1263 + icc_node_add(node, &emc->provider); 1264 + 1265 + /* link External Memory Controller to External Memory (DRAM) */ 1266 + err = icc_link_create(node, TEGRA_ICC_EMEM); 1267 + if (err) 1268 + goto remove_nodes; 1269 + 1270 + /* create External Memory node */ 1271 + node = icc_node_create(TEGRA_ICC_EMEM); 1272 + if (IS_ERR(node)) { 1273 + err = PTR_ERR(node); 1274 + goto remove_nodes; 1275 + } 1276 + 1277 + node->name = "External Memory (DRAM)"; 1278 + icc_node_add(node, &emc->provider); 1279 + 1280 + return 0; 1281 + 1282 + remove_nodes: 1283 + icc_nodes_remove(&emc->provider); 1284 + del_provider: 1285 + icc_provider_del(&emc->provider); 1286 + err_msg: 1287 + dev_err(emc->dev, "failed to initialize ICC: %d\n", err); 1288 + 1289 + return err; 1290 + } 1291 + 1292 + static int tegra_emc_opp_table_init(struct tegra_emc *emc) 1293 + { 1294 + u32 hw_version = BIT(tegra_sku_info.soc_speedo_id); 1295 + struct 
opp_table *hw_opp_table; 1296 + int err; 1297 + 1298 + hw_opp_table = dev_pm_opp_set_supported_hw(emc->dev, &hw_version, 1); 1299 + err = PTR_ERR_OR_ZERO(hw_opp_table); 1300 + if (err) { 1301 + dev_err(emc->dev, "failed to set OPP supported HW: %d\n", err); 1302 + return err; 1303 + } 1304 + 1305 + err = dev_pm_opp_of_add_table(emc->dev); 1306 + if (err) { 1307 + if (err == -ENODEV) 1308 + dev_err(emc->dev, "OPP table not found, please update your device tree\n"); 1309 + else 1310 + dev_err(emc->dev, "failed to add OPP table: %d\n", err); 1311 + 1312 + goto put_hw_table; 1313 + } 1314 + 1315 + dev_info(emc->dev, "OPP HW ver. 0x%x, current clock rate %lu MHz\n", 1316 + hw_version, clk_get_rate(emc->clk) / 1000000); 1317 + 1318 + /* first dummy rate-set initializes voltage state */ 1319 + err = dev_pm_opp_set_rate(emc->dev, clk_get_rate(emc->clk)); 1320 + if (err) { 1321 + dev_err(emc->dev, "failed to initialize OPP clock: %d\n", err); 1322 + goto remove_table; 1323 + } 1324 + 1325 + return 0; 1326 + 1327 + remove_table: 1328 + dev_pm_opp_of_remove_table(emc->dev); 1329 + put_hw_table: 1330 + dev_pm_opp_put_supported_hw(hw_opp_table); 1331 + 1332 + return err; 1333 + } 1334 + 1335 + static void devm_tegra_emc_unset_callback(void *data) 1336 + { 1337 + tegra124_clk_set_emc_callbacks(NULL, NULL); 1338 + } 1339 + 1285 1340 static int tegra_emc_probe(struct platform_device *pdev) 1286 1341 { 1287 1342 struct device_node *np; ··· 1455 1186 if (!emc) 1456 1187 return -ENOMEM; 1457 1188 1189 + mutex_init(&emc->rate_lock); 1458 1190 emc->dev = &pdev->dev; 1459 1191 1460 1192 emc->regs = devm_platform_ioremap_resource(pdev, 0); ··· 1469 1199 ram_code = tegra_read_ram_code(); 1470 1200 1471 1201 np = tegra_emc_find_node_by_ram_code(pdev->dev.of_node, ram_code); 1472 - if (!np) { 1473 - dev_err(&pdev->dev, 1474 - "no memory timings for RAM code %u found in DT\n", 1475 - ram_code); 1476 - return -ENOENT; 1477 - } 1478 - 1479 - err = tegra_emc_load_timings_from_dt(emc, np); 1480 
- of_node_put(np); 1481 - if (err) 1482 - return err; 1483 - 1484 - if (emc->num_timings == 0) { 1485 - dev_err(&pdev->dev, 1486 - "no memory timings for RAM code %u registered\n", 1487 - ram_code); 1488 - return -ENOENT; 1202 + if (np) { 1203 + err = tegra_emc_load_timings_from_dt(emc, np); 1204 + of_node_put(np); 1205 + if (err) 1206 + return err; 1207 + } else { 1208 + dev_info(&pdev->dev, 1209 + "no memory timings for RAM code %u found in DT\n", 1210 + ram_code); 1489 1211 } 1490 1212 1491 1213 err = emc_init(emc); ··· 1488 1226 1489 1227 platform_set_drvdata(pdev, emc); 1490 1228 1229 + tegra124_clk_set_emc_callbacks(tegra_emc_prepare_timing_change, 1230 + tegra_emc_complete_timing_change); 1231 + 1232 + err = devm_add_action_or_reset(&pdev->dev, devm_tegra_emc_unset_callback, 1233 + NULL); 1234 + if (err) 1235 + return err; 1236 + 1237 + emc->clk = devm_clk_get(&pdev->dev, "emc"); 1238 + if (IS_ERR(emc->clk)) { 1239 + err = PTR_ERR(emc->clk); 1240 + dev_err(&pdev->dev, "failed to get EMC clock: %d\n", err); 1241 + return err; 1242 + } 1243 + 1244 + err = tegra_emc_opp_table_init(emc); 1245 + if (err) 1246 + return err; 1247 + 1248 + tegra_emc_rate_requests_init(emc); 1249 + 1491 1250 if (IS_ENABLED(CONFIG_DEBUG_FS)) 1492 1251 emc_debugfs_init(&pdev->dev, emc); 1252 + 1253 + tegra_emc_interconnect_init(emc); 1254 + 1255 + /* 1256 + * Don't allow the kernel module to be unloaded. Unloading adds some 1257 + * extra complexity which doesn't really worth the effort in a case of 1258 + * this driver. 
1259 + */ 1260 + try_module_get(THIS_MODULE); 1493 1261 1494 1262 return 0; 1495 1263 }; ··· 1530 1238 .name = "tegra-emc", 1531 1239 .of_match_table = tegra_emc_of_match, 1532 1240 .suppress_bind_attrs = true, 1241 + .sync_state = icc_sync_state, 1533 1242 }, 1534 1243 }; 1244 + module_platform_driver(tegra_emc_driver); 1535 1245 1536 - static int tegra_emc_init(void) 1537 - { 1538 - return platform_driver_register(&tegra_emc_driver); 1539 - } 1540 - subsys_initcall(tegra_emc_init); 1246 + MODULE_AUTHOR("Mikko Perttunen <mperttunen@nvidia.com>"); 1247 + MODULE_DESCRIPTION("NVIDIA Tegra124 EMC driver"); 1248 + MODULE_LICENSE("GPL v2");
+81 -1
drivers/memory/tegra/tegra124.c
··· 4 4 */ 5 5 6 6 #include <linux/of.h> 7 - #include <linux/mm.h> 7 + #include <linux/of_device.h> 8 + #include <linux/slab.h> 8 9 9 10 #include <dt-bindings/memory/tegra124-mc.h> 10 11 ··· 1011 1010 TEGRA124_MC_RESET(GPU, 0x970, 0x974, 2), 1012 1011 }; 1013 1012 1013 + static int tegra124_mc_icc_set(struct icc_node *src, struct icc_node *dst) 1014 + { 1015 + /* TODO: program PTSA */ 1016 + return 0; 1017 + } 1018 + 1019 + static int tegra124_mc_icc_aggreate(struct icc_node *node, u32 tag, u32 avg_bw, 1020 + u32 peak_bw, u32 *agg_avg, u32 *agg_peak) 1021 + { 1022 + /* 1023 + * ISO clients need to reserve extra bandwidth up-front because 1024 + * there could be high bandwidth pressure during initial filling 1025 + * of the client's FIFO buffers. Secondly, we need to take into 1026 + * account impurities of the memory subsystem. 1027 + */ 1028 + if (tag & TEGRA_MC_ICC_TAG_ISO) 1029 + peak_bw = tegra_mc_scale_percents(peak_bw, 400); 1030 + 1031 + *agg_avg += avg_bw; 1032 + *agg_peak = max(*agg_peak, peak_bw); 1033 + 1034 + return 0; 1035 + } 1036 + 1037 + static struct icc_node_data * 1038 + tegra124_mc_of_icc_xlate_extended(struct of_phandle_args *spec, void *data) 1039 + { 1040 + struct tegra_mc *mc = icc_provider_to_tegra_mc(data); 1041 + const struct tegra_mc_client *client; 1042 + unsigned int i, idx = spec->args[0]; 1043 + struct icc_node_data *ndata; 1044 + struct icc_node *node; 1045 + 1046 + list_for_each_entry(node, &mc->provider.nodes, node_list) { 1047 + if (node->id != idx) 1048 + continue; 1049 + 1050 + ndata = kzalloc(sizeof(*ndata), GFP_KERNEL); 1051 + if (!ndata) 1052 + return ERR_PTR(-ENOMEM); 1053 + 1054 + client = &mc->soc->clients[idx]; 1055 + ndata->node = node; 1056 + 1057 + switch (client->swgroup) { 1058 + case TEGRA_SWGROUP_DC: 1059 + case TEGRA_SWGROUP_DCB: 1060 + case TEGRA_SWGROUP_PTC: 1061 + case TEGRA_SWGROUP_VI: 1062 + /* these clients are isochronous by default */ 1063 + ndata->tag = TEGRA_MC_ICC_TAG_ISO; 1064 + break; 1065 + 1066 + 
default: 1067 + ndata->tag = TEGRA_MC_ICC_TAG_DEFAULT; 1068 + break; 1069 + } 1070 + 1071 + return ndata; 1072 + } 1073 + 1074 + for (i = 0; i < mc->soc->num_clients; i++) { 1075 + if (mc->soc->clients[i].id == idx) 1076 + return ERR_PTR(-EPROBE_DEFER); 1077 + } 1078 + 1079 + dev_err(mc->dev, "invalid ICC client ID %u\n", idx); 1080 + 1081 + return ERR_PTR(-EINVAL); 1082 + } 1083 + 1084 + static const struct tegra_mc_icc_ops tegra124_mc_icc_ops = { 1085 + .xlate_extended = tegra124_mc_of_icc_xlate_extended, 1086 + .aggregate = tegra124_mc_icc_aggreate, 1087 + .set = tegra124_mc_icc_set, 1088 + }; 1089 + 1014 1090 #ifdef CONFIG_ARCH_TEGRA_124_SOC 1015 1091 static const unsigned long tegra124_mc_emem_regs[] = { 1016 1092 MC_EMEM_ARB_CFG, ··· 1139 1061 .reset_ops = &tegra_mc_reset_ops_common, 1140 1062 .resets = tegra124_mc_resets, 1141 1063 .num_resets = ARRAY_SIZE(tegra124_mc_resets), 1064 + .icc_ops = &tegra124_mc_icc_ops, 1142 1065 }; 1143 1066 #endif /* CONFIG_ARCH_TEGRA_124_SOC */ 1144 1067 ··· 1170 1091 .reset_ops = &tegra_mc_reset_ops_common, 1171 1092 .resets = tegra124_mc_resets, 1172 1093 .num_resets = ARRAY_SIZE(tegra124_mc_resets), 1094 + .icc_ops = &tegra124_mc_icc_ops, 1173 1095 }; 1174 1096 #endif /* CONFIG_ARCH_TEGRA_132_SOC */
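The aggregation callback added in tegra124.c sums average bandwidths, takes the maximum of peaks, and inflates an isochronous client's peak to 400% before aggregating, per the comment about FIFO fill pressure. As standalone C (with `tegra_mc_scale_percents()` modelled as a simple percentage scale, which is an assumption of this sketch):

```c
#define TAG_ISO 1u	/* stand-in for TEGRA_MC_ICC_TAG_ISO */

static unsigned int scale_percents(unsigned int bw, unsigned int percents)
{
	return (unsigned int)((unsigned long long)bw * percents / 100);
}

/* Mirrors tegra124_mc_icc_aggreate(): ISO clients reserve 4x their peak
 * bandwidth up-front; averages accumulate, peaks take the maximum. */
static void aggregate(unsigned int tag, unsigned int avg_bw,
		      unsigned int peak_bw,
		      unsigned int *agg_avg, unsigned int *agg_peak)
{
	if (tag & TAG_ISO)
		peak_bw = scale_percents(peak_bw, 400);

	*agg_avg += avg_bw;
	if (peak_bw > *agg_peak)
		*agg_peak = peak_bw;
}
```

This is why the `xlate_extended` callback above tags display, PTC, and VI clients as ISO by default: their requests automatically get the 400% headroom without each consumer having to ask for it.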
+6 -6
drivers/memory/tegra/tegra186-emc.c
··· 125 125 return 0; 126 126 } 127 127 128 - DEFINE_SIMPLE_ATTRIBUTE(tegra186_emc_debug_min_rate_fops, 129 - tegra186_emc_debug_min_rate_get, 130 - tegra186_emc_debug_min_rate_set, "%llu\n"); 128 + DEFINE_DEBUGFS_ATTRIBUTE(tegra186_emc_debug_min_rate_fops, 129 + tegra186_emc_debug_min_rate_get, 130 + tegra186_emc_debug_min_rate_set, "%llu\n"); 131 131 132 132 static int tegra186_emc_debug_max_rate_get(void *data, u64 *rate) 133 133 { ··· 155 155 return 0; 156 156 } 157 157 158 - DEFINE_SIMPLE_ATTRIBUTE(tegra186_emc_debug_max_rate_fops, 159 - tegra186_emc_debug_max_rate_get, 160 - tegra186_emc_debug_max_rate_set, "%llu\n"); 158 + DEFINE_DEBUGFS_ATTRIBUTE(tegra186_emc_debug_max_rate_fops, 159 + tegra186_emc_debug_max_rate_get, 160 + tegra186_emc_debug_max_rate_set, "%llu\n"); 161 161 162 162 static int tegra186_emc_probe(struct platform_device *pdev) 163 163 {
+2 -11
drivers/memory/tegra/tegra20-emc.c
··· 911 911 static int tegra_emc_opp_table_init(struct tegra_emc *emc) 912 912 { 913 913 u32 hw_version = BIT(tegra_sku_info.soc_process_id); 914 - struct opp_table *clk_opp_table, *hw_opp_table; 914 + struct opp_table *hw_opp_table; 915 915 int err; 916 - 917 - clk_opp_table = dev_pm_opp_set_clkname(emc->dev, NULL); 918 - err = PTR_ERR_OR_ZERO(clk_opp_table); 919 - if (err) { 920 - dev_err(emc->dev, "failed to set OPP clk: %d\n", err); 921 - return err; 922 - } 923 916 924 917 hw_opp_table = dev_pm_opp_set_supported_hw(emc->dev, &hw_version, 1); 925 918 err = PTR_ERR_OR_ZERO(hw_opp_table); 926 919 if (err) { 927 920 dev_err(emc->dev, "failed to set OPP supported HW: %d\n", err); 928 - goto put_clk_table; 921 + return err; 929 922 } 930 923 931 924 err = dev_pm_opp_of_add_table(emc->dev); ··· 947 954 dev_pm_opp_of_remove_table(emc->dev); 948 955 put_hw_table: 949 956 dev_pm_opp_put_supported_hw(hw_opp_table); 950 - put_clk_table: 951 - dev_pm_opp_put_clkname(clk_opp_table); 952 957 953 958 return err; 954 959 }
+2 -11
drivers/memory/tegra/tegra30-emc.c
··· 1483 1483 static int tegra_emc_opp_table_init(struct tegra_emc *emc) 1484 1484 { 1485 1485 u32 hw_version = BIT(tegra_sku_info.soc_speedo_id); 1486 - struct opp_table *clk_opp_table, *hw_opp_table; 1486 + struct opp_table *hw_opp_table; 1487 1487 int err; 1488 - 1489 - clk_opp_table = dev_pm_opp_set_clkname(emc->dev, NULL); 1490 - err = PTR_ERR_OR_ZERO(clk_opp_table); 1491 - if (err) { 1492 - dev_err(emc->dev, "failed to set OPP clk: %d\n", err); 1493 - return err; 1494 - } 1495 1488 1496 1489 hw_opp_table = dev_pm_opp_set_supported_hw(emc->dev, &hw_version, 1); 1497 1490 err = PTR_ERR_OR_ZERO(hw_opp_table); 1498 1491 if (err) { 1499 1492 dev_err(emc->dev, "failed to set OPP supported HW: %d\n", err); 1500 - goto put_clk_table; 1493 + return err; 1501 1494 } 1502 1495 1503 1496 err = dev_pm_opp_of_add_table(emc->dev); ··· 1519 1526 dev_pm_opp_of_remove_table(emc->dev); 1520 1527 put_hw_table: 1521 1528 dev_pm_opp_put_supported_hw(hw_opp_table); 1522 - put_clk_table: 1523 - dev_pm_opp_put_clkname(clk_opp_table); 1524 1529 1525 1530 return err; 1526 1531 }
+6 -2
drivers/memory/ti-aemif.c
··· 378 378 */ 379 379 for_each_available_child_of_node(np, child_np) { 380 380 ret = of_aemif_parse_abus_config(pdev, child_np); 381 - if (ret < 0) 381 + if (ret < 0) { 382 + of_node_put(child_np); 382 383 goto error; 384 + } 383 385 } 384 386 } else if (pdata && pdata->num_abus_data > 0) { 385 387 for (i = 0; i < pdata->num_abus_data; i++, aemif->num_cs++) { ··· 407 405 for_each_available_child_of_node(np, child_np) { 408 406 ret = of_platform_populate(child_np, NULL, 409 407 dev_lookup, dev); 410 - if (ret < 0) 408 + if (ret < 0) { 409 + of_node_put(child_np); 411 410 goto error; 411 + } 412 412 } 413 413 } else if (pdata) { 414 414 for (i = 0; i < pdata->num_sub_devices; i++) {
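The ti-aemif fix adds `of_node_put()` before each early `goto error`, because `for_each_available_child_of_node()` holds a reference on the child it yields and only drops it when advancing. A toy refcount model of that pattern (hypothetical `node_get`/`node_put` counters, not the real OF API):

```c
#include <assert.h>

/* Toy model: the iterator takes a reference on each child it yields
 * and drops it when advancing; on an early break the caller must drop
 * the reference on the current child itself. */
static int refcount;

static void node_get(void) { refcount++; }
static void node_put(void) { refcount--; }

/* Walk nchildren children; fail_at < 0 means no failure. */
static int walk_children(int nchildren, int fail_at)
{
	for (int i = 0; i < nchildren; i++) {
		node_get();		/* iterator grabs child i */
		if (i == fail_at) {
			node_put();	/* the fix: put before bailing out */
			return -1;
		}
		node_put();		/* iterator drops on advance */
	}
	return 0;
}
```

Without the extra put on the failure path, the current child's refcount would stay elevated forever, which is exactly the leak the patch closes.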
+1 -1
drivers/memory/ti-emif-pm.c
··· 340 340 .remove = ti_emif_remove, 341 341 .driver = { 342 342 .name = KBUILD_MODNAME, 343 - .of_match_table = of_match_ptr(ti_emif_of_match), 343 + .of_match_table = ti_emif_of_match, 344 344 .pm = &ti_emif_pm_ops, 345 345 }, 346 346 };
+3 -1
drivers/mfd/axp20x-i2c.c
··· 54 54 { 55 55 struct axp20x_dev *axp20x = i2c_get_clientdata(i2c); 56 56 57 - return axp20x_device_remove(axp20x); 57 + axp20x_device_remove(axp20x); 58 + 59 + return 0; 58 60 } 59 61 60 62 #ifdef CONFIG_OF
+2 -2
drivers/mfd/axp20x-rsb.c
··· 49 49 return axp20x_device_probe(axp20x); 50 50 } 51 51 52 - static int axp20x_rsb_remove(struct sunxi_rsb_device *rdev) 52 + static void axp20x_rsb_remove(struct sunxi_rsb_device *rdev) 53 53 { 54 54 struct axp20x_dev *axp20x = sunxi_rsb_device_get_drvdata(rdev); 55 55 56 - return axp20x_device_remove(axp20x); 56 + axp20x_device_remove(axp20x); 57 57 } 58 58 59 59 static const struct of_device_id axp20x_rsb_of_match[] = {
+1 -3
drivers/mfd/axp20x.c
··· 987 987 } 988 988 EXPORT_SYMBOL(axp20x_device_probe); 989 989 990 - int axp20x_device_remove(struct axp20x_dev *axp20x) 990 + void axp20x_device_remove(struct axp20x_dev *axp20x) 991 991 { 992 992 if (axp20x == axp20x_pm_power_off) { 993 993 axp20x_pm_power_off = NULL; ··· 996 996 997 997 mfd_remove_devices(axp20x->dev); 998 998 regmap_del_irq_chip(axp20x->irq, axp20x->regmap_irqc); 999 - 1000 - return 0; 1001 999 } 1002 1000 EXPORT_SYMBOL(axp20x_device_remove); 1003 1001
+1 -1
drivers/reset/Kconfig
··· 173 173 174 174 config RESET_SIMPLE 175 175 bool "Simple Reset Controller Driver" if COMPILE_TEST 176 - default ARCH_AGILEX || ARCH_ASPEED || ARCH_BITMAIN || ARCH_REALTEK || ARCH_STM32 || ARCH_STRATIX10 || ARCH_SUNXI || ARC 176 + default ARCH_AGILEX || ARCH_ASPEED || ARCH_BCM4908 || ARCH_BITMAIN || ARCH_REALTEK || ARCH_STM32 || ARCH_STRATIX10 || ARCH_SUNXI || ARC 177 177 help 178 178 This enables a simple reset controller driver for reset lines that 179 179 that can be asserted and deasserted by toggling bits in a contiguous,
+2 -2
drivers/reset/core.c
··· 875 875 EXPORT_SYMBOL_GPL(__devm_reset_control_get); 876 876 877 877 /** 878 - * device_reset - find reset controller associated with the device 879 - * and perform reset 878 + * __device_reset - find reset controller associated with the device 879 + * and perform reset 880 880 * @dev: device to be reset by the controller 881 881 * @optional: whether it is optional to reset the device 882 882 *
+7 -2
drivers/reset/hisilicon/reset-hi3660.c
··· 83 83 if (!rc) 84 84 return -ENOMEM; 85 85 86 - rc->map = syscon_regmap_lookup_by_phandle(np, "hisi,rst-syscon"); 86 + rc->map = syscon_regmap_lookup_by_phandle(np, "hisilicon,rst-syscon"); 87 + if (rc->map == ERR_PTR(-ENODEV)) { 88 + /* fall back to the deprecated compatible */ 89 + rc->map = syscon_regmap_lookup_by_phandle(np, 90 + "hisi,rst-syscon"); 91 + } 87 92 if (IS_ERR(rc->map)) { 88 - dev_err(dev, "failed to get hi3660,rst-syscon\n"); 93 + dev_err(dev, "failed to get hisilicon,rst-syscon\n"); 89 94 return PTR_ERR(rc->map); 90 95 } 91 96
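The hi3660 change prefers the corrected `hisilicon,rst-syscon` property and only falls back to the deprecated `hisi,rst-syscon` spelling when the new one is absent (-ENODEV), so other errors still propagate. The lookup-with-fallback shape can be modeled like this (mock `lookup()` over a fake property table — the names stand in for `syscon_regmap_lookup_by_phandle()`):

```c
#include <assert.h>
#include <string.h>

#define ENODEV 19

/* Mock property table standing in for an old device-tree node that
 * only carries the deprecated spelling. */
static const char *present_prop = "hisi,rst-syscon";

static int lookup(const char *prop, int *out)
{
	if (strcmp(prop, present_prop) != 0)
		return -ENODEV;
	*out = 1;
	return 0;
}

/* Prefer the new property name; fall back to the deprecated one only
 * when the new one is missing, mirroring the patch's ERR_PTR(-ENODEV)
 * check. */
static int get_rst_syscon(int *out)
{
	int err = lookup("hisilicon,rst-syscon", out);

	if (err == -ENODEV)
		err = lookup("hisi,rst-syscon", out);
	return err;
}
```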
+2
drivers/reset/reset-simple.c
··· 146 146 { .compatible = "aspeed,ast2500-lpc-reset" }, 147 147 { .compatible = "bitmain,bm1880-reset", 148 148 .data = &reset_simple_active_low }, 149 + { .compatible = "brcm,bcm4908-misc-pcie-reset", 150 + .data = &reset_simple_active_low }, 149 151 { .compatible = "snps,dw-high-reset" }, 150 152 { .compatible = "snps,dw-low-reset", 151 153 .data = &reset_simple_active_low },
+27 -3
drivers/soc/aspeed/aspeed-lpc-snoop.c
··· 11 11 */ 12 12 13 13 #include <linux/bitops.h> 14 + #include <linux/clk.h> 14 15 #include <linux/interrupt.h> 15 16 #include <linux/fs.h> 16 17 #include <linux/kfifo.h> ··· 68 67 struct aspeed_lpc_snoop { 69 68 struct regmap *regmap; 70 69 int irq; 70 + struct clk *clk; 71 71 struct aspeed_lpc_snoop_channel chan[NUM_SNOOP_CHANNELS]; 72 72 }; 73 73 ··· 284 282 return -ENODEV; 285 283 } 286 284 285 + lpc_snoop->clk = devm_clk_get(dev, NULL); 286 + if (IS_ERR(lpc_snoop->clk)) { 287 + rc = PTR_ERR(lpc_snoop->clk); 288 + if (rc != -EPROBE_DEFER) 289 + dev_err(dev, "couldn't get clock\n"); 290 + return rc; 291 + } 292 + rc = clk_prepare_enable(lpc_snoop->clk); 293 + if (rc) { 294 + dev_err(dev, "couldn't enable clock\n"); 295 + return rc; 296 + } 297 + 287 298 rc = aspeed_lpc_snoop_config_irq(lpc_snoop, pdev); 288 299 if (rc) 289 - return rc; 300 + goto err; 290 301 291 302 rc = aspeed_lpc_enable_snoop(lpc_snoop, dev, 0, port); 292 303 if (rc) 293 - return rc; 304 + goto err; 294 305 295 306 /* Configuration of 2nd snoop channel port is optional */ 296 307 if (of_property_read_u32_index(dev->of_node, "snoop-ports", 297 308 1, &port) == 0) { 298 309 rc = aspeed_lpc_enable_snoop(lpc_snoop, dev, 1, port); 299 - if (rc) 310 + if (rc) { 300 311 aspeed_lpc_disable_snoop(lpc_snoop, 0); 312 + goto err; 313 + } 301 314 } 315 + 316 + return 0; 317 + 318 + err: 319 + clk_disable_unprepare(lpc_snoop->clk); 302 320 303 321 return rc; 304 322 } ··· 330 308 /* Disable both snoop channels */ 331 309 aspeed_lpc_disable_snoop(lpc_snoop, 0); 332 310 aspeed_lpc_disable_snoop(lpc_snoop, 1); 311 + 312 + clk_disable_unprepare(lpc_snoop->clk); 333 313 334 314 return 0; 335 315 }
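The snoop patch wires a clock into probe and makes every later failure unwind through a single `err:` label that calls `clk_disable_unprepare()`. The enable/disable bookkeeping of that goto-style cleanup can be checked with plain counters (a toy model with hypothetical `clk_on`/`clk_off`, not the common clock framework):

```c
#include <assert.h>

static int enables, disables;

static int clk_on(void)  { enables++; return 0; }
static void clk_off(void) { disables++; }

/* Probe sketch: every failure after the clock is enabled must funnel
 * through one cleanup label that disables it again. */
static int probe(int fail_step)
{
	int rc = clk_on();
	if (rc)
		return rc;

	if (fail_step == 1) { rc = -1; goto err; }	/* config_irq fails */
	if (fail_step == 2) { rc = -1; goto err; }	/* enable_snoop fails */

	return 0;	/* success: clock intentionally stays enabled */

err:
	clk_off();
	return rc;
}
```

After one successful probe and any number of failed ones, enables should exceed disables by exactly one, matching the remove path that does the final `clk_disable_unprepare()`.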
+24 -7
drivers/soc/aspeed/aspeed-socinfo.c
··· 25 25 /* AST2600 */ 26 26 { "AST2600", 0x05000303 }, 27 27 { "AST2620", 0x05010203 }, 28 + { "AST2605", 0x05030103 }, 28 29 }; 29 30 30 31 static const char *siliconid_to_name(u32 siliconid) ··· 44 43 static const char *siliconid_to_rev(u32 siliconid) 45 44 { 46 45 unsigned int rev = (siliconid >> 16) & 0xff; 46 + unsigned int gen = (siliconid >> 24) & 0xff; 47 47 48 - switch (rev) { 49 - case 0: 50 - return "A0"; 51 - case 1: 52 - return "A1"; 53 - case 3: 54 - return "A2"; 48 + if (gen < 0x5) { 49 + /* AST2500 and below */ 50 + switch (rev) { 51 + case 0: 52 + return "A0"; 53 + case 1: 54 + return "A1"; 55 + case 3: 56 + return "A2"; 57 + } 58 + } else { 59 + /* AST2600 */ 60 + switch (rev) { 61 + case 0: 62 + return "A0"; 63 + case 1: 64 + return "A1"; 65 + case 2: 66 + return "A2"; 67 + case 3: 68 + return "A3"; 69 + } 55 70 } 56 71 57 72 return "??";
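The socinfo change splits revision decoding by silicon generation (byte 3 of the id): pre-AST2600 parts map rev 3 to "A2", while AST2600 maps revs 0..3 directly to A0..A3. The resulting decode, lifted from the hunk above into a standalone function:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Decode an Aspeed silicon id: byte 3 is the generation, byte 2 the
 * revision.  Mirrors siliconid_to_rev() in aspeed-socinfo.c. */
static const char *siliconid_to_rev(uint32_t siliconid)
{
	unsigned int rev = (siliconid >> 16) & 0xff;
	unsigned int gen = (siliconid >> 24) & 0xff;

	if (gen < 0x5) {	/* AST2500 and below */
		switch (rev) {
		case 0: return "A0";
		case 1: return "A1";
		case 3: return "A2";
		}
	} else {		/* AST2600 */
		switch (rev) {
		case 0: return "A0";
		case 1: return "A1";
		case 2: return "A2";
		case 3: return "A3";
		}
	}
	return "??";
}
```

Using the ids added to the table above: 0x05000303 (AST2600) decodes as A0, 0x05010203 (AST2620) as A1, and 0x05030103 (AST2605) as A3.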
+155 -72
drivers/soc/atmel/soc.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 1 2 /* 2 3 * Copyright (C) 2015 Atmel 3 4 * 4 5 * Alexandre Belloni <alexandre.belloni@free-electrons.com 5 6 * Boris Brezillon <boris.brezillon@free-electrons.com 6 - * 7 - * This file is licensed under the terms of the GNU General Public 8 - * License version 2. This program is licensed "as is" without any 9 - * warranty of any kind, whether express or implied. 10 - * 11 7 */ 12 8 13 9 #define pr_fmt(fmt) "AT91: " fmt ··· 21 25 #define AT91_DBGU_EXID 0x44 22 26 #define AT91_CHIPID_CIDR 0x00 23 27 #define AT91_CHIPID_EXID 0x04 24 - #define AT91_CIDR_VERSION(x) ((x) & 0x1f) 28 + #define AT91_CIDR_VERSION(x, m) ((x) & (m)) 29 + #define AT91_CIDR_VERSION_MASK GENMASK(4, 0) 30 + #define AT91_CIDR_VERSION_MASK_SAMA7G5 GENMASK(3, 0) 25 31 #define AT91_CIDR_EXT BIT(31) 26 - #define AT91_CIDR_MATCH_MASK 0x7fffffe0 32 + #define AT91_CIDR_MATCH_MASK GENMASK(30, 5) 33 + #define AT91_CIDR_MASK_SAMA7G5 GENMASK(27, 5) 27 34 28 - static const struct at91_soc __initconst socs[] = { 35 + static const struct at91_soc socs[] __initconst = { 29 36 #ifdef CONFIG_SOC_AT91RM9200 30 - AT91_SOC(AT91RM9200_CIDR_MATCH, 0, "at91rm9200 BGA", "at91rm9200"), 37 + AT91_SOC(AT91RM9200_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 38 + AT91_CIDR_VERSION_MASK, 0, "at91rm9200 BGA", "at91rm9200"), 31 39 #endif 32 40 #ifdef CONFIG_SOC_AT91SAM9 33 - AT91_SOC(AT91SAM9260_CIDR_MATCH, 0, "at91sam9260", NULL), 34 - AT91_SOC(AT91SAM9261_CIDR_MATCH, 0, "at91sam9261", NULL), 35 - AT91_SOC(AT91SAM9263_CIDR_MATCH, 0, "at91sam9263", NULL), 36 - AT91_SOC(AT91SAM9G20_CIDR_MATCH, 0, "at91sam9g20", NULL), 37 - AT91_SOC(AT91SAM9RL64_CIDR_MATCH, 0, "at91sam9rl64", NULL), 38 - AT91_SOC(AT91SAM9G45_CIDR_MATCH, AT91SAM9M11_EXID_MATCH, 41 + AT91_SOC(AT91SAM9260_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 42 + AT91_CIDR_VERSION_MASK, 0, "at91sam9260", NULL), 43 + AT91_SOC(AT91SAM9261_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 44 + AT91_CIDR_VERSION_MASK, 0, "at91sam9261", NULL), 45 + AT91_SOC(AT91SAM9263_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 46 + AT91_CIDR_VERSION_MASK, 0, "at91sam9263", NULL), 47 + AT91_SOC(AT91SAM9G20_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 48 + AT91_CIDR_VERSION_MASK, 0, "at91sam9g20", NULL), 49 + AT91_SOC(AT91SAM9RL64_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 50 + AT91_CIDR_VERSION_MASK, 0, "at91sam9rl64", NULL), 51 + AT91_SOC(AT91SAM9G45_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 52 + AT91_CIDR_VERSION_MASK, AT91SAM9M11_EXID_MATCH, 39 53 "at91sam9m11", "at91sam9g45"), 40 - AT91_SOC(AT91SAM9G45_CIDR_MATCH, AT91SAM9M10_EXID_MATCH, 54 + AT91_SOC(AT91SAM9G45_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 55 + AT91_CIDR_VERSION_MASK, AT91SAM9M10_EXID_MATCH, 41 56 "at91sam9m10", "at91sam9g45"), 42 - AT91_SOC(AT91SAM9G45_CIDR_MATCH, AT91SAM9G46_EXID_MATCH, 57 + AT91_SOC(AT91SAM9G45_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 58 + AT91_CIDR_VERSION_MASK, AT91SAM9G46_EXID_MATCH, 43 59 "at91sam9g46", "at91sam9g45"), 44 - AT91_SOC(AT91SAM9G45_CIDR_MATCH, AT91SAM9G45_EXID_MATCH, 60 + AT91_SOC(AT91SAM9G45_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 61 + AT91_CIDR_VERSION_MASK, AT91SAM9G45_EXID_MATCH, 45 62 "at91sam9g45", "at91sam9g45"), 46 - AT91_SOC(AT91SAM9X5_CIDR_MATCH, AT91SAM9G15_EXID_MATCH, 63 + AT91_SOC(AT91SAM9X5_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 64 + AT91_CIDR_VERSION_MASK, AT91SAM9G15_EXID_MATCH, 47 65 "at91sam9g15", "at91sam9x5"), 48 - AT91_SOC(AT91SAM9X5_CIDR_MATCH, AT91SAM9G35_EXID_MATCH, 66 + AT91_SOC(AT91SAM9X5_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 67 + AT91_CIDR_VERSION_MASK, AT91SAM9G35_EXID_MATCH, 49 68 "at91sam9g35", "at91sam9x5"), 50 - AT91_SOC(AT91SAM9X5_CIDR_MATCH, AT91SAM9X35_EXID_MATCH, 69 + AT91_SOC(AT91SAM9X5_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 70 + AT91_CIDR_VERSION_MASK, AT91SAM9X35_EXID_MATCH, 51 71 "at91sam9x35", "at91sam9x5"), 52 - AT91_SOC(AT91SAM9X5_CIDR_MATCH, AT91SAM9G25_EXID_MATCH, 72 + AT91_SOC(AT91SAM9X5_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 73 + AT91_CIDR_VERSION_MASK, AT91SAM9G25_EXID_MATCH, 53 74 "at91sam9g25", "at91sam9x5"), 54 - AT91_SOC(AT91SAM9X5_CIDR_MATCH, AT91SAM9X25_EXID_MATCH, 75 + AT91_SOC(AT91SAM9X5_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 76 + AT91_CIDR_VERSION_MASK, AT91SAM9X25_EXID_MATCH, 55 77 "at91sam9x25", "at91sam9x5"), 56 - AT91_SOC(AT91SAM9N12_CIDR_MATCH, AT91SAM9CN12_EXID_MATCH, 78 + AT91_SOC(AT91SAM9N12_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 79 + AT91_CIDR_VERSION_MASK, AT91SAM9CN12_EXID_MATCH, 57 80 "at91sam9cn12", "at91sam9n12"), 58 - AT91_SOC(AT91SAM9N12_CIDR_MATCH, AT91SAM9N12_EXID_MATCH, 81 + AT91_SOC(AT91SAM9N12_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 82 + AT91_CIDR_VERSION_MASK, AT91SAM9N12_EXID_MATCH, 59 83 "at91sam9n12", "at91sam9n12"), 60 - AT91_SOC(AT91SAM9N12_CIDR_MATCH, AT91SAM9CN11_EXID_MATCH, 84 + AT91_SOC(AT91SAM9N12_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 85 + AT91_CIDR_VERSION_MASK, AT91SAM9CN11_EXID_MATCH, 61 86 "at91sam9cn11", "at91sam9n12"), 62 - AT91_SOC(AT91SAM9XE128_CIDR_MATCH, 0, "at91sam9xe128", "at91sam9xe128"), 63 - AT91_SOC(AT91SAM9XE256_CIDR_MATCH, 0, "at91sam9xe256", "at91sam9xe256"), 64 - AT91_SOC(AT91SAM9XE512_CIDR_MATCH, 0, "at91sam9xe512", "at91sam9xe512"), 87 + AT91_SOC(AT91SAM9XE128_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 88 + AT91_CIDR_VERSION_MASK, 0, "at91sam9xe128", "at91sam9xe128"), 89 + AT91_SOC(AT91SAM9XE256_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 90 + AT91_CIDR_VERSION_MASK, 0, "at91sam9xe256", "at91sam9xe256"), 91 + AT91_SOC(AT91SAM9XE512_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 92 + AT91_CIDR_VERSION_MASK, 0, "at91sam9xe512", "at91sam9xe512"), 65 93 #endif 66 94 #ifdef CONFIG_SOC_SAM9X60 67 - AT91_SOC(SAM9X60_CIDR_MATCH, SAM9X60_EXID_MATCH, "sam9x60", "sam9x60"), 95 + AT91_SOC(SAM9X60_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 96 + AT91_CIDR_VERSION_MASK, SAM9X60_EXID_MATCH, 97 + "sam9x60", "sam9x60"), 68 98 AT91_SOC(SAM9X60_CIDR_MATCH, SAM9X60_D5M_EXID_MATCH, 99 + AT91_CIDR_VERSION_MASK, SAM9X60_EXID_MATCH, 69 100 "sam9x60 64MiB DDR2 SiP", "sam9x60"), 70 101 AT91_SOC(SAM9X60_CIDR_MATCH, SAM9X60_D1G_EXID_MATCH, 102 + AT91_CIDR_VERSION_MASK, SAM9X60_EXID_MATCH, 71 103 "sam9x60 128MiB DDR2 SiP", "sam9x60"), 72 104 AT91_SOC(SAM9X60_CIDR_MATCH, SAM9X60_D6K_EXID_MATCH, 105 + AT91_CIDR_VERSION_MASK, SAM9X60_EXID_MATCH, 73 106 "sam9x60 8MiB SDRAM SiP", "sam9x60"), 74 107 #endif 75 108 #ifdef CONFIG_SOC_SAMA5 76 - AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D21CU_EXID_MATCH, 109 + AT91_SOC(SAMA5D2_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 110 + AT91_CIDR_VERSION_MASK, SAMA5D21CU_EXID_MATCH, 77 111 "sama5d21", "sama5d2"), 78 - AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D22CU_EXID_MATCH, 112 + AT91_SOC(SAMA5D2_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 113 + AT91_CIDR_VERSION_MASK, SAMA5D22CU_EXID_MATCH, 79 114 "sama5d22", "sama5d2"), 80 - AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D225C_D1M_EXID_MATCH, 115 + AT91_SOC(SAMA5D2_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 116 + AT91_CIDR_VERSION_MASK, SAMA5D225C_D1M_EXID_MATCH, 81 117 "sama5d225c 16MiB SiP", "sama5d2"), 82 - AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D23CU_EXID_MATCH, 118 + AT91_SOC(SAMA5D2_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 119 + AT91_CIDR_VERSION_MASK, SAMA5D23CU_EXID_MATCH, 83 120 "sama5d23", "sama5d2"), 84 - AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D24CX_EXID_MATCH, 121 + AT91_SOC(SAMA5D2_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 122 + AT91_CIDR_VERSION_MASK, SAMA5D24CX_EXID_MATCH, 85 123 "sama5d24", "sama5d2"), 86 - AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D24CU_EXID_MATCH, 124 + AT91_SOC(SAMA5D2_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 125 + AT91_CIDR_VERSION_MASK, SAMA5D24CU_EXID_MATCH, 87 126 "sama5d24", "sama5d2"), 88 - AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D26CU_EXID_MATCH, 127 + AT91_SOC(SAMA5D2_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 128 + AT91_CIDR_VERSION_MASK, SAMA5D26CU_EXID_MATCH, 89 129 "sama5d26", "sama5d2"), 90 - AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D27CU_EXID_MATCH, 130 + AT91_SOC(SAMA5D2_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 131 + AT91_CIDR_VERSION_MASK, SAMA5D27CU_EXID_MATCH, 91 132 "sama5d27", "sama5d2"), 92 - AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D27CN_EXID_MATCH, 133 + AT91_SOC(SAMA5D2_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 134 + AT91_CIDR_VERSION_MASK, SAMA5D27CN_EXID_MATCH, 93 135 "sama5d27", "sama5d2"), 94 - AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D27C_D1G_EXID_MATCH, 136 + AT91_SOC(SAMA5D2_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 137 + AT91_CIDR_VERSION_MASK, SAMA5D27C_D1G_EXID_MATCH, 95 138 "sama5d27c 128MiB SiP", "sama5d2"), 96 - AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D27C_D5M_EXID_MATCH, 139 + AT91_SOC(SAMA5D2_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 140 + AT91_CIDR_VERSION_MASK, SAMA5D27C_D5M_EXID_MATCH, 97 141 "sama5d27c 64MiB SiP", "sama5d2"), 98 - AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D27C_LD1G_EXID_MATCH, 142 + AT91_SOC(SAMA5D2_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 143 + AT91_CIDR_VERSION_MASK, SAMA5D27C_LD1G_EXID_MATCH, 99 144 "sama5d27c 128MiB LPDDR2 SiP", "sama5d2"), 100 - AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D27C_LD2G_EXID_MATCH, 145 + AT91_SOC(SAMA5D2_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 146 + AT91_CIDR_VERSION_MASK, SAMA5D27C_LD2G_EXID_MATCH, 101 147 "sama5d27c 256MiB LPDDR2 SiP", "sama5d2"), 102 - AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D28CU_EXID_MATCH, 148 + AT91_SOC(SAMA5D2_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 149 + AT91_CIDR_VERSION_MASK, SAMA5D28CU_EXID_MATCH, 103 150 "sama5d28", "sama5d2"), 104 - AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D28CN_EXID_MATCH, 151 + AT91_SOC(SAMA5D2_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 152 + AT91_CIDR_VERSION_MASK, SAMA5D28CN_EXID_MATCH, 105 153 "sama5d28", "sama5d2"), 106 - AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D28C_D1G_EXID_MATCH, 154 + AT91_SOC(SAMA5D2_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 155 + AT91_CIDR_VERSION_MASK, SAMA5D28C_D1G_EXID_MATCH, 107 156 "sama5d28c 128MiB SiP", "sama5d2"), 108 - AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D28C_LD1G_EXID_MATCH, 157 + AT91_SOC(SAMA5D2_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 158 + AT91_CIDR_VERSION_MASK, SAMA5D28C_LD1G_EXID_MATCH, 109 159 "sama5d28c 128MiB LPDDR2 SiP", "sama5d2"), 110 - AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D28C_LD2G_EXID_MATCH, 160 + AT91_SOC(SAMA5D2_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 161 + AT91_CIDR_VERSION_MASK, SAMA5D28C_LD2G_EXID_MATCH, 111 162 "sama5d28c 256MiB LPDDR2 SiP", "sama5d2"), 112 - AT91_SOC(SAMA5D3_CIDR_MATCH, SAMA5D31_EXID_MATCH, 163 + AT91_SOC(SAMA5D3_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 164 + AT91_CIDR_VERSION_MASK, SAMA5D31_EXID_MATCH, 113 165 "sama5d31", "sama5d3"), 114 - AT91_SOC(SAMA5D3_CIDR_MATCH, SAMA5D33_EXID_MATCH, 166 + AT91_SOC(SAMA5D3_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 167 + AT91_CIDR_VERSION_MASK, SAMA5D33_EXID_MATCH, 115 168 "sama5d33", "sama5d3"), 116 - AT91_SOC(SAMA5D3_CIDR_MATCH, SAMA5D34_EXID_MATCH, 169 + AT91_SOC(SAMA5D3_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 170 + AT91_CIDR_VERSION_MASK, SAMA5D34_EXID_MATCH, 117 171 "sama5d34", "sama5d3"), 118 - AT91_SOC(SAMA5D3_CIDR_MATCH, SAMA5D35_EXID_MATCH, 172 + AT91_SOC(SAMA5D3_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 173 + AT91_CIDR_VERSION_MASK, SAMA5D35_EXID_MATCH, 119 174 "sama5d35", "sama5d3"), 120 - AT91_SOC(SAMA5D3_CIDR_MATCH, SAMA5D36_EXID_MATCH, 175 + AT91_SOC(SAMA5D3_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 176 + AT91_CIDR_VERSION_MASK, SAMA5D36_EXID_MATCH, 121 177 "sama5d36", "sama5d3"), 122 - AT91_SOC(SAMA5D4_CIDR_MATCH, SAMA5D41_EXID_MATCH, 178 + AT91_SOC(SAMA5D4_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 179 + AT91_CIDR_VERSION_MASK, SAMA5D41_EXID_MATCH, 123 180 "sama5d41", "sama5d4"), 124 - AT91_SOC(SAMA5D4_CIDR_MATCH, SAMA5D42_EXID_MATCH, 181 + AT91_SOC(SAMA5D4_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 182 + AT91_CIDR_VERSION_MASK, SAMA5D42_EXID_MATCH, 125 183 "sama5d42", "sama5d4"), 126 - AT91_SOC(SAMA5D4_CIDR_MATCH, SAMA5D43_EXID_MATCH, 184 + AT91_SOC(SAMA5D4_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 185 + AT91_CIDR_VERSION_MASK, SAMA5D43_EXID_MATCH, 127 186 "sama5d43", "sama5d4"), 128 - AT91_SOC(SAMA5D4_CIDR_MATCH, SAMA5D44_EXID_MATCH, 187 + AT91_SOC(SAMA5D4_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 188 + AT91_CIDR_VERSION_MASK, SAMA5D44_EXID_MATCH, 129 189 "sama5d44", "sama5d4"), 130 190 #endif 131 191 #ifdef CONFIG_SOC_SAMV7 132 - AT91_SOC(SAME70Q21_CIDR_MATCH, SAME70Q21_EXID_MATCH, 192 + AT91_SOC(SAME70Q21_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 193 + AT91_CIDR_VERSION_MASK, SAME70Q21_EXID_MATCH, 133 194 "same70q21", "same7"), 134 - AT91_SOC(SAME70Q20_CIDR_MATCH, SAME70Q20_EXID_MATCH, 195 + AT91_SOC(SAME70Q20_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 196 + AT91_CIDR_VERSION_MASK, SAME70Q20_EXID_MATCH, 135 197 "same70q20", "same7"), 136 - AT91_SOC(SAME70Q19_CIDR_MATCH, SAME70Q19_EXID_MATCH, 198 + AT91_SOC(SAME70Q19_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 199 + AT91_CIDR_VERSION_MASK, SAME70Q19_EXID_MATCH, 137 200 "same70q19", "same7"), 138 - AT91_SOC(SAMS70Q21_CIDR_MATCH, SAMS70Q21_EXID_MATCH, 201 + AT91_SOC(SAMS70Q21_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 202 + AT91_CIDR_VERSION_MASK, SAMS70Q21_EXID_MATCH, 139 203 "sams70q21", "sams7"), 140 - AT91_SOC(SAMS70Q20_CIDR_MATCH, SAMS70Q20_EXID_MATCH, 204 + AT91_SOC(SAMS70Q20_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 205 + AT91_CIDR_VERSION_MASK, SAMS70Q20_EXID_MATCH, 141 206 "sams70q20", "sams7"), 142 - AT91_SOC(SAMS70Q19_CIDR_MATCH, SAMS70Q19_EXID_MATCH, 207 + AT91_SOC(SAMS70Q19_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 208 + AT91_CIDR_VERSION_MASK, SAMS70Q19_EXID_MATCH, 143 209 "sams70q19", "sams7"), 144 - AT91_SOC(SAMV71Q21_CIDR_MATCH, SAMV71Q21_EXID_MATCH, 210 + AT91_SOC(SAMV71Q21_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 211 + AT91_CIDR_VERSION_MASK, SAMV71Q21_EXID_MATCH, 145 212 "samv71q21", "samv7"), 146 - AT91_SOC(SAMV71Q20_CIDR_MATCH, SAMV71Q20_EXID_MATCH, 213 + AT91_SOC(SAMV71Q20_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 214 + AT91_CIDR_VERSION_MASK, SAMV71Q20_EXID_MATCH, 147 215 "samv71q20", "samv7"), 148 - AT91_SOC(SAMV71Q19_CIDR_MATCH, SAMV71Q19_EXID_MATCH, 216 + AT91_SOC(SAMV71Q19_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 217 + AT91_CIDR_VERSION_MASK, SAMV71Q19_EXID_MATCH, 149 218 "samv71q19", "samv7"), 150 - AT91_SOC(SAMV70Q20_CIDR_MATCH, SAMV70Q20_EXID_MATCH, 219 + AT91_SOC(SAMV70Q20_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 220 + AT91_CIDR_VERSION_MASK, SAMV70Q20_EXID_MATCH, 151 221 "samv70q20", "samv7"), 152 - AT91_SOC(SAMV70Q19_CIDR_MATCH, SAMV70Q19_EXID_MATCH, 222 + AT91_SOC(SAMV70Q19_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 223 + AT91_CIDR_VERSION_MASK, SAMV70Q19_EXID_MATCH, 153 224 "samv70q19", "samv7"), 225 + #endif 226 + #ifdef CONFIG_SOC_SAMA7 227 + AT91_SOC(SAMA7G5_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 228 + AT91_CIDR_VERSION_MASK_SAMA7G5, SAMA7G51_EXID_MATCH, 229 + "sama7g51", "sama7g5"), 230 + AT91_SOC(SAMA7G5_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 231 + AT91_CIDR_VERSION_MASK_SAMA7G5, SAMA7G52_EXID_MATCH, 232 + "sama7g52", "sama7g5"), 233 + AT91_SOC(SAMA7G5_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 234 + AT91_CIDR_VERSION_MASK_SAMA7G5, SAMA7G53_EXID_MATCH, 235 + "sama7g53", "sama7g5"), 236 + AT91_SOC(SAMA7G5_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 237 + AT91_CIDR_VERSION_MASK_SAMA7G5, SAMA7G54_EXID_MATCH, 238 + "sama7g54", "sama7g5"), 154 239 #endif 155 240 { /* sentinel */ }, 156 241 }; ··· 268 191 { 269 192 struct device_node *np; 270 193 void __iomem *regs; 194 + static const struct of_device_id chipids[] = { 195 + { .compatible = "atmel,sama5d2-chipid" }, 196 + { .compatible = "microchip,sama7g5-chipid" }, 197 + { }, 198 + }; 271 199 272 - np = of_find_compatible_node(NULL, NULL, "atmel,sama5d2-chipid"); 200 + np = of_find_matching_node(NULL, chipids); 273 201 if (!np) 274 202 return -ENODEV; 275 203 ··· 317 235 } 318 236 319 237 for (soc = socs; soc->name; soc++) { 320 - if (soc->cidr_match != (cidr & AT91_CIDR_MATCH_MASK)) 238 + if (soc->cidr_match != (cidr & soc->cidr_mask)) 321 239 continue; 322 240 323 241 if (!(cidr & AT91_CIDR_EXT) || soc->exid_match == exid) ··· 336 254 soc_dev_attr->family = soc->family; 337 255 soc_dev_attr->soc_id = soc->name; 338 256 soc_dev_attr->revision = kasprintf(GFP_KERNEL, "%X", 339 - AT91_CIDR_VERSION(cidr)); 257 + AT91_CIDR_VERSION(cidr, soc->version_mask)); 340 258 soc_dev = soc_device_register(soc_dev_attr); 341 259 if (IS_ERR(soc_dev)) { 342 260 kfree(soc_dev_attr->revision); ··· 348 266 if (soc->family) 349 267 pr_info("Detected SoC family: %s\n", soc->family); 350 268 pr_info("Detected SoC: %s, revision %X\n", soc->name, 351 - AT91_CIDR_VERSION(cidr)); 269 + AT91_CIDR_VERSION(cidr, soc->version_mask)); 352 270 353 271 return soc_dev; 354 272 } ··· 358 276 { .compatible = "atmel,at91sam9", }, 359 277 { .compatible = "atmel,sama5", }, 360 278 { .compatible = "atmel,samv7", }, 279 + { .compatible = "microchip,sama7g5", }, 361 280 { } 362 281 };
+13 -6
drivers/soc/atmel/soc.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 1 2 /* 2 3 * Copyright (C) 2015 Atmel 3 4 * 4 5 * Boris Brezillon <boris.brezillon@free-electrons.com 5 - * 6 - * This file is licensed under the terms of the GNU General Public 7 - * License version 2. This program is licensed "as is" without any 8 - * warranty of any kind, whether express or implied. 9 - * 10 6 */ 11 7 12 8 #ifndef __AT91_SOC_H ··· 12 16 13 17 struct at91_soc { 14 18 u32 cidr_match; 19 + u32 cidr_mask; 20 + u32 version_mask; 15 21 u32 exid_match; 16 22 const char *name; 17 23 const char *family; 18 24 }; 19 25 20 - #define AT91_SOC(__cidr, __exid, __name, __family) \ 26 + #define AT91_SOC(__cidr, __cidr_mask, __version_mask, __exid, \ 27 + __name, __family) \ 21 28 { \ 22 29 .cidr_match = (__cidr), \ 30 + .cidr_mask = (__cidr_mask), \ 31 + .version_mask = (__version_mask), \ 23 32 .exid_match = (__exid), \ 24 33 .name = (__name), \ 25 34 .family = (__family), \ ··· 44 43 #define AT91SAM9X5_CIDR_MATCH 0x019a05a0 45 44 #define AT91SAM9N12_CIDR_MATCH 0x019a07a0 46 45 #define SAM9X60_CIDR_MATCH 0x019b35a0 46 + #define SAMA7G5_CIDR_MATCH 0x00162100 47 47 48 48 #define AT91SAM9M11_EXID_MATCH 0x00000001 49 49 #define AT91SAM9M10_EXID_MATCH 0x00000002 ··· 65 63 #define SAM9X60_D5M_EXID_MATCH 0x00000001 66 64 #define SAM9X60_D1G_EXID_MATCH 0x00000010 67 65 #define SAM9X60_D6K_EXID_MATCH 0x00000011 66 + 67 + #define SAMA7G51_EXID_MATCH 0x3 68 + #define SAMA7G52_EXID_MATCH 0x2 69 + #define SAMA7G53_EXID_MATCH 0x1 70 + #define SAMA7G54_EXID_MATCH 0x0 68 71 69 72 #define AT91SAM9XE128_CIDR_MATCH 0x329973a0 70 73 #define AT91SAM9XE256_CIDR_MATCH 0x329a93a0
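With the reworked `AT91_SOC()` macro each entry carries its own CIDR and version masks, and matching in `at91_get_soc()` becomes `(cidr & soc->cidr_mask) == soc->cidr_match`. The masks come from `GENMASK()`: `GENMASK(30, 5)` reproduces the old hard-coded `0x7fffffe0` and `GENMASK(4, 0)` the old `0x1f`. A small user-space check of those values (the `GENMASK` here is a 32-bit rendering of the kernel macro, and `cidr_matches()` is a hypothetical helper name):

```c
#include <assert.h>
#include <stdint.h>

/* User-space rendering of the kernel's GENMASK(h, l): bits l..h set. */
#define GENMASK(h, l) \
	((~0u >> (31 - (h))) & (~0u << (l)))

#define AT91_CIDR_MATCH_MASK		GENMASK(30, 5)
#define AT91_CIDR_VERSION_MASK		GENMASK(4, 0)
#define AT91_CIDR_VERSION_MASK_SAMA7G5	GENMASK(3, 0)

/* The matching rule from at91_get_soc(): mask first, then compare. */
static int cidr_matches(uint32_t cidr, uint32_t match, uint32_t mask)
{
	return match == (cidr & mask);
}
```

Keeping the mask per-entry is what lets SAMA7G5 use a narrower 4-bit version field without touching the older entries.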
+1 -1
drivers/soc/bcm/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 obj-$(CONFIG_BCM2835_POWER) += bcm2835-power.o 3 3 obj-$(CONFIG_RASPBERRYPI_POWER) += raspberrypi-power.o 4 - obj-$(CONFIG_SOC_BCM63XX) += bcm63xx/ 4 + obj-y += bcm63xx/ 5 5 obj-$(CONFIG_SOC_BRCMSTB) += brcmstb/
+9
drivers/soc/bcm/bcm63xx/Kconfig
··· 10 10 BCM6318, BCM6328, BCM6362 and BCM63268 SoCs. 11 11 12 12 endif # SOC_BCM63XX 13 + 14 + config BCM_PMB 15 + bool "Broadcom PMB (Power Management Bus) driver" 16 + depends on ARCH_BCM4908 || (COMPILE_TEST && OF) 17 + default ARCH_BCM4908 18 + select PM_GENERIC_DOMAINS if PM 19 + help 20 + This enables support for the Broadcom's PMB (Power Management Bus) that 21 + is used for disabling and enabling SoC devices.
+1
drivers/soc/bcm/bcm63xx/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 obj-$(CONFIG_BCM63XX_POWER) += bcm63xx-power.o 3 + obj-$(CONFIG_BCM_PMB) += bcm-pmb.o
+333
drivers/soc/bcm/bcm63xx/bcm-pmb.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-or-later 2 + /* 3 + * Copyright (c) 2013 Broadcom 4 + * Copyright (C) 2020 Rafał Miłecki <rafal@milecki.pl> 5 + */ 6 + 7 + #include <dt-bindings/soc/bcm-pmb.h> 8 + #include <linux/io.h> 9 + #include <linux/module.h> 10 + #include <linux/of.h> 11 + #include <linux/of_device.h> 12 + #include <linux/platform_device.h> 13 + #include <linux/pm_domain.h> 14 + #include <linux/reset/bcm63xx_pmb.h> 15 + 16 + #define BPCM_ID_REG 0x00 17 + #define BPCM_CAPABILITIES 0x04 18 + #define BPCM_CAP_NUM_ZONES 0x000000ff 19 + #define BPCM_CAP_SR_REG_BITS 0x0000ff00 20 + #define BPCM_CAP_PLLTYPE 0x00030000 21 + #define BPCM_CAP_UBUS 0x00080000 22 + #define BPCM_CONTROL 0x08 23 + #define BPCM_STATUS 0x0c 24 + #define BPCM_ROSC_CONTROL 0x10 25 + #define BPCM_ROSC_THRESH_H 0x14 26 + #define BPCM_ROSC_THRESHOLD_BCM6838 0x14 27 + #define BPCM_ROSC_THRESH_S 0x18 28 + #define BPCM_ROSC_COUNT_BCM6838 0x18 29 + #define BPCM_ROSC_COUNT 0x1c 30 + #define BPCM_PWD_CONTROL_BCM6838 0x1c 31 + #define BPCM_PWD_CONTROL 0x20 32 + #define BPCM_SR_CONTROL_BCM6838 0x20 33 + #define BPCM_PWD_ACCUM_CONTROL 0x24 34 + #define BPCM_SR_CONTROL 0x28 35 + #define BPCM_GLOBAL_CONTROL 0x2c 36 + #define BPCM_MISC_CONTROL 0x30 37 + #define BPCM_MISC_CONTROL2 0x34 38 + #define BPCM_SGPHY_CNTL 0x38 39 + #define BPCM_SGPHY_STATUS 0x3c 40 + #define BPCM_ZONE0 0x40 41 + #define BPCM_ZONE_CONTROL 0x00 42 + #define BPCM_ZONE_CONTROL_MANUAL_CLK_EN 0x00000001 43 + #define BPCM_ZONE_CONTROL_MANUAL_RESET_CTL 0x00000002 44 + #define BPCM_ZONE_CONTROL_FREQ_SCALE_USED 0x00000004 /* R/O */ 45 + #define BPCM_ZONE_CONTROL_DPG_CAPABLE 0x00000008 /* R/O */ 46 + #define BPCM_ZONE_CONTROL_MANUAL_MEM_PWR 0x00000030 47 + #define BPCM_ZONE_CONTROL_MANUAL_ISO_CTL 0x00000040 48 + #define BPCM_ZONE_CONTROL_MANUAL_CTL 0x00000080 49 + #define BPCM_ZONE_CONTROL_DPG_CTL_EN 0x00000100 50 + #define BPCM_ZONE_CONTROL_PWR_DN_REQ 0x00000200 51 + #define BPCM_ZONE_CONTROL_PWR_UP_REQ 0x00000400 52 + #define BPCM_ZONE_CONTROL_MEM_PWR_CTL_EN 0x00000800 53 + #define BPCM_ZONE_CONTROL_BLK_RESET_ASSERT 0x00001000 54 + #define BPCM_ZONE_CONTROL_MEM_STBY 0x00002000 55 + #define BPCM_ZONE_CONTROL_RESERVED 0x0007c000 56 + #define BPCM_ZONE_CONTROL_PWR_CNTL_STATE 0x00f80000 57 + #define BPCM_ZONE_CONTROL_FREQ_SCALAR_DYN_SEL 0x01000000 /* R/O */ 58 + #define BPCM_ZONE_CONTROL_PWR_OFF_STATE 0x02000000 /* R/O */ 59 + #define BPCM_ZONE_CONTROL_PWR_ON_STATE 0x04000000 /* R/O */ 60 + #define BPCM_ZONE_CONTROL_PWR_GOOD 0x08000000 /* R/O */ 61 + #define BPCM_ZONE_CONTROL_DPG_PWR_STATE 0x10000000 /* R/O */ 62 + #define BPCM_ZONE_CONTROL_MEM_PWR_STATE 0x20000000 /* R/O */ 63 + #define BPCM_ZONE_CONTROL_ISO_STATE 0x40000000 /* R/O */ 64 + #define BPCM_ZONE_CONTROL_RESET_STATE 0x80000000 /* R/O */ 65 + #define BPCM_ZONE_CONFIG1 0x04 66 + #define BPCM_ZONE_CONFIG2 0x08 67 + #define BPCM_ZONE_FREQ_SCALAR_CONTROL 0x0c 68 + #define BPCM_ZONE_SIZE 0x10 69 + 70 + struct bcm_pmb { 71 + struct device *dev; 72 + void __iomem *base; 73 + spinlock_t lock; 74 + bool little_endian; 75 + struct genpd_onecell_data genpd_onecell_data; 76 + }; 77 + 78 + struct bcm_pmb_pd_data { 79 + const char * const name; 80 + int id; 81 + u8 bus; 82 + u8 device; 83 + }; 84 + 85 + struct bcm_pmb_pm_domain { 86 + struct bcm_pmb *pmb; 87 + const struct bcm_pmb_pd_data *data; 88 + struct generic_pm_domain genpd; 89 + }; 90 + 91 + static int bcm_pmb_bpcm_read(struct bcm_pmb *pmb, int bus, u8 device, 92 + int offset, u32 *val) 93 + { 94 + void __iomem *base = pmb->base + bus * 0x20; 95 + unsigned long flags; 96 + int err; 97 + 98 + spin_lock_irqsave(&pmb->lock, flags); 99 + err = bpcm_rd(base, device, offset, val); 100 + spin_unlock_irqrestore(&pmb->lock, flags); 101 + 102 + if (!err) 103 + *val = pmb->little_endian ? le32_to_cpu(*val) : be32_to_cpu(*val); 104 + 105 + return err; 106 + } 107 + 108 + static int bcm_pmb_bpcm_write(struct bcm_pmb *pmb, int bus, u8 device, 109 + int offset, u32 val) 110 + { 111 + void __iomem *base = pmb->base + bus * 0x20; 112 + unsigned long flags; 113 + int err; 114 + 115 + val = pmb->little_endian ? cpu_to_le32(val) : cpu_to_be32(val); 116 + 117 + spin_lock_irqsave(&pmb->lock, flags); 118 + err = bpcm_wr(base, device, offset, val); 119 + spin_unlock_irqrestore(&pmb->lock, flags); 120 + 121 + return err; 122 + } 123 + 124 + static int bcm_pmb_power_off_zone(struct bcm_pmb *pmb, int bus, u8 device, 125 + int zone) 126 + { 127 + int offset; 128 + u32 val; 129 + int err; 130 + 131 + offset = BPCM_ZONE0 + zone * BPCM_ZONE_SIZE + BPCM_ZONE_CONTROL; 132 + 133 + err = bcm_pmb_bpcm_read(pmb, bus, device, offset, &val); 134 + if (err) 135 + return err; 136 + 137 + val |= BPCM_ZONE_CONTROL_PWR_DN_REQ; 138 + val &= ~BPCM_ZONE_CONTROL_PWR_UP_REQ; 139 + 140 + err = bcm_pmb_bpcm_write(pmb, bus, device, offset, val); 141 + 142 + return err; 143 + } 144 + 145 + static int bcm_pmb_power_on_zone(struct bcm_pmb *pmb, int bus, u8 device, 146 + int zone) 147 + { 148 + int offset; 149 + u32 val; 150 + int err; 151 + 152 + offset = BPCM_ZONE0 + zone * BPCM_ZONE_SIZE + BPCM_ZONE_CONTROL; 153 + 154 + err = bcm_pmb_bpcm_read(pmb, bus, device, offset, &val); 155 + if (err) 156 + return err; 157 + 158 + if (!(val & BPCM_ZONE_CONTROL_PWR_ON_STATE)) { 159 + val &= ~BPCM_ZONE_CONTROL_PWR_DN_REQ; 160 + val |= BPCM_ZONE_CONTROL_DPG_CTL_EN; 161 + val |= BPCM_ZONE_CONTROL_PWR_UP_REQ; 162 + val |= BPCM_ZONE_CONTROL_MEM_PWR_CTL_EN; 163 + val |= BPCM_ZONE_CONTROL_BLK_RESET_ASSERT; 164 + 165 + err = bcm_pmb_bpcm_write(pmb, bus, device, offset, val); 166 + } 167 + 168 + return err; 169 + } 170 + 171 + static int bcm_pmb_power_off_device(struct bcm_pmb *pmb, int bus, u8 device) 172 + { 173 + int offset; 174 + u32 val; 175 + int err; 176 + 177 + /* Entire device can be powered off by powering off the 0th zone */ 178 + offset = BPCM_ZONE0 + BPCM_ZONE_CONTROL; 179 + 180 + err = bcm_pmb_bpcm_read(pmb, bus, device, offset, &val); 181 + if (err) 182 + return err; 183 + 184 + if (!(val & BPCM_ZONE_CONTROL_PWR_OFF_STATE)) { 185 + val = BPCM_ZONE_CONTROL_PWR_DN_REQ; 186 + 187 + err = bcm_pmb_bpcm_write(pmb, bus, device, offset, val); 188 + } 189 + 190 + return err; 191 + } 192 + 193 + static int bcm_pmb_power_on_device(struct bcm_pmb *pmb, int bus, u8 device) 194 + { 195 + u32 val; 196 + int err; 197 + int i; 198 + 199 + err = bcm_pmb_bpcm_read(pmb, bus, device, BPCM_CAPABILITIES, &val); 200 + if (err) 201 + return err; 202 + 203 + for (i = 0; i < (val & BPCM_CAP_NUM_ZONES); i++) { 204 + err = bcm_pmb_power_on_zone(pmb, bus, device, i); 205 + if (err) 206 + return err; 207 + } 208 + 209 + return err; 210 + } 211 + 212 + static int bcm_pmb_power_on(struct generic_pm_domain *genpd) 213 + { 214 + struct bcm_pmb_pm_domain *pd = container_of(genpd, struct bcm_pmb_pm_domain, genpd); 215 + const struct bcm_pmb_pd_data *data = pd->data; 216 + struct bcm_pmb *pmb = pd->pmb; 217 + 218 + switch (data->id) { 219 + case BCM_PMB_PCIE0: 220 + case BCM_PMB_PCIE1: 221 + case BCM_PMB_PCIE2: 222 + return bcm_pmb_power_on_zone(pmb, data->bus, data->device, 0); 223 + case BCM_PMB_HOST_USB: 224 + return bcm_pmb_power_on_device(pmb, data->bus, data->device); 225 + default: 226 + dev_err(pmb->dev, "unsupported device id: %d\n", data->id); 227 + return -EINVAL; 228 + } 229 + } 230 + 231 + static int bcm_pmb_power_off(struct generic_pm_domain *genpd) 232 + { 233 + struct bcm_pmb_pm_domain *pd = container_of(genpd, struct bcm_pmb_pm_domain, genpd); 234 + const struct bcm_pmb_pd_data *data = pd->data; 235 + struct bcm_pmb *pmb = pd->pmb; 236 + 237 + switch (data->id) { 238 + case BCM_PMB_PCIE0: 239 + case BCM_PMB_PCIE1: 240 + case BCM_PMB_PCIE2: 241 + return bcm_pmb_power_off_zone(pmb, data->bus, data->device, 0); 242 + case BCM_PMB_HOST_USB: 243 + return bcm_pmb_power_off_device(pmb, data->bus, data->device); 244 + default: 245 + dev_err(pmb->dev, "unsupported device id: %d\n", data->id); 246 + return -EINVAL; 247 + } 248 + } 249 + 250 + static int bcm_pmb_probe(struct platform_device *pdev) 251 + { 252 + struct device *dev = &pdev->dev; 253 + const struct bcm_pmb_pd_data *table; 254 + const struct bcm_pmb_pd_data *e; 255 + struct resource *res; 256 + struct bcm_pmb *pmb; 257 + int max_id; 258 + int err; 259 + 260 + pmb = devm_kzalloc(dev, sizeof(*pmb), GFP_KERNEL); 261 + if (!pmb) 262 + return -ENOMEM; 263 + 264 + pmb->dev = dev; 265 + 266 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 267 + pmb->base = devm_ioremap_resource(&pdev->dev, res); 268 + if (IS_ERR(pmb->base)) 269 + return PTR_ERR(pmb->base); 270 + 271 + spin_lock_init(&pmb->lock); 272 + 273 + pmb->little_endian = !of_device_is_big_endian(dev->of_node); 274 + 275 + table = of_device_get_match_data(dev); 276 + if (!table) 277 + return -EINVAL; 278 + 279 + max_id = 0; 280 + for (e = table; e->name; e++) 281 + max_id = max(max_id, e->id); 282 + 283 + pmb->genpd_onecell_data.num_domains = max_id + 1; 284 + pmb->genpd_onecell_data.domains = 285 + devm_kcalloc(dev, pmb->genpd_onecell_data.num_domains, 286 + sizeof(struct generic_pm_domain *), GFP_KERNEL); 287 + if (!pmb->genpd_onecell_data.domains) 288 + return -ENOMEM; 289 + 290 + for (e = table; e->name; e++) { 291 + struct bcm_pmb_pm_domain *pd = devm_kzalloc(dev, sizeof(*pd), GFP_KERNEL); 292 + 293 + pd->pmb = pmb; 294 + pd->data = e; 295 + pd->genpd.name = e->name; 296 + pd->genpd.power_on = bcm_pmb_power_on; 297 + pd->genpd.power_off = bcm_pmb_power_off; 298 + 299 + pm_genpd_init(&pd->genpd, NULL, true); 300 + pmb->genpd_onecell_data.domains[e->id] = &pd->genpd; 301 + } 302 + 303 + err = of_genpd_add_provider_onecell(dev->of_node, &pmb->genpd_onecell_data); 304 + if (err) { 305 + dev_err(dev, "failed to add genpd provider: %d\n", err); 306 + return err; 307 + } 308 + 309 + return 0; 310 + } 311
+ 312 + static const struct bcm_pmb_pd_data bcm_pmb_bcm4908_data[] = { 313 + { .name = "pcie2", .id = BCM_PMB_PCIE2, .bus = 0, .device = 2, }, 314 + { .name = "pcie0", .id = BCM_PMB_PCIE0, .bus = 1, .device = 14, }, 315 + { .name = "pcie1", .id = BCM_PMB_PCIE1, .bus = 1, .device = 15, }, 316 + { .name = "usb", .id = BCM_PMB_HOST_USB, .bus = 1, .device = 17, }, 317 + { }, 318 + }; 319 + 320 + static const struct of_device_id bcm_pmb_of_match[] = { 321 + { .compatible = "brcm,bcm4908-pmb", .data = &bcm_pmb_bcm4908_data, }, 322 + { }, 323 + }; 324 + 325 + static struct platform_driver bcm_pmb_driver = { 326 + .driver = { 327 + .name = "bcm-pmb", 328 + .of_match_table = bcm_pmb_of_match, 329 + }, 330 + .probe = bcm_pmb_probe, 331 + }; 332 + 333 + builtin_platform_driver(bcm_pmb_driver);
-17
drivers/soc/bcm/brcmstb/common.c
··· 11 11 #include <linux/soc/brcmstb/brcmstb.h> 12 12 #include <linux/sys_soc.h> 13 13 14 - #include <soc/brcmstb/common.h> 15 - 16 14 static u32 family_id; 17 15 static u32 product_id; 18 16 ··· 18 20 { .compatible = "brcm,brcmstb", }, 19 21 { } 20 22 }; 21 - 22 - bool soc_is_brcmstb(void) 23 - { 24 - const struct of_device_id *match; 25 - struct device_node *root; 26 - 27 - root = of_find_node_by_path("/"); 28 - if (!root) 29 - return false; 30 - 31 - match = of_match_node(brcmstb_machine_match, root); 32 - of_node_put(root); 33 - 34 - return match != NULL; 35 - } 36 23 37 24 u32 brcmstb_get_family_id(void) 38 25 {
+72 -12
drivers/soc/imx/soc-imx8m.c
··· 5 5 6 6 #include <linux/init.h> 7 7 #include <linux/io.h> 8 + #include <linux/module.h> 9 + #include <linux/nvmem-consumer.h> 8 10 #include <linux/of_address.h> 9 11 #include <linux/slab.h> 10 12 #include <linux/sys_soc.h> ··· 31 29 32 30 struct imx8_soc_data { 33 31 char *name; 34 - u32 (*soc_revision)(void); 32 + u32 (*soc_revision)(struct device *dev); 35 33 }; 36 34 37 35 static u64 soc_uid; ··· 52 50 static inline u32 imx8mq_soc_revision_from_atf(void) { return 0; }; 53 51 #endif 54 52 55 - static u32 __init imx8mq_soc_revision(void) 53 + static u32 __init imx8mq_soc_revision(struct device *dev) 56 54 { 57 55 struct device_node *np; 58 56 void __iomem *ocotp_base; ··· 77 75 rev = REV_B1; 78 76 } 79 77 80 - soc_uid = readl_relaxed(ocotp_base + OCOTP_UID_HIGH); 81 - soc_uid <<= 32; 82 - soc_uid |= readl_relaxed(ocotp_base + OCOTP_UID_LOW); 78 + if (dev) { 79 + int ret; 80 + 81 + ret = nvmem_cell_read_u64(dev, "soc_unique_id", &soc_uid); 82 + if (ret) { 83 + iounmap(ocotp_base); 84 + of_node_put(np); 85 + return ret; 86 + } 87 + } else { 88 + soc_uid = readl_relaxed(ocotp_base + OCOTP_UID_HIGH); 89 + soc_uid <<= 32; 90 + soc_uid |= readl_relaxed(ocotp_base + OCOTP_UID_LOW); 91 + } 83 92 84 93 iounmap(ocotp_base); 85 94 of_node_put(np); ··· 120 107 of_node_put(np); 121 108 } 122 109 123 - static u32 __init imx8mm_soc_revision(void) 110 + static u32 __init imx8mm_soc_revision(struct device *dev) 124 111 { 125 112 struct device_node *np; 126 113 void __iomem *anatop_base; ··· 138 125 iounmap(anatop_base); 139 126 of_node_put(np); 140 127 141 - imx8mm_soc_uid(); 128 + if (dev) { 129 + int ret; 130 + 131 + ret = nvmem_cell_read_u64(dev, "soc_unique_id", &soc_uid); 132 + if (ret) 133 + return ret; 134 + } else { 135 + imx8mm_soc_uid(); 136 + } 142 137 143 138 return rev; 144 139 } ··· 171 150 .soc_revision = imx8mm_soc_revision, 172 151 }; 173 152 174 - static __maybe_unused const struct of_device_id imx8_soc_match[] = { 153 + static __maybe_unused const struct 
of_device_id imx8_machine_match[] = { 175 154 { .compatible = "fsl,imx8mq", .data = &imx8mq_soc_data, }, 176 155 { .compatible = "fsl,imx8mm", .data = &imx8mm_soc_data, }, 177 156 { .compatible = "fsl,imx8mn", .data = &imx8mn_soc_data, }, 178 157 { .compatible = "fsl,imx8mp", .data = &imx8mp_soc_data, }, 158 + { } 159 + }; 160 + 161 + static __maybe_unused const struct of_device_id imx8_soc_match[] = { 162 + { .compatible = "fsl,imx8mq-soc", .data = &imx8mq_soc_data, }, 163 + { .compatible = "fsl,imx8mm-soc", .data = &imx8mm_soc_data, }, 164 + { .compatible = "fsl,imx8mn-soc", .data = &imx8mn_soc_data, }, 165 + { .compatible = "fsl,imx8mp-soc", .data = &imx8mp_soc_data, }, 179 166 { } 180 167 }; 181 168 ··· 192 163 kasprintf(GFP_KERNEL, "%d.%d", (soc_rev >> 4) & 0xf, soc_rev & 0xf) : \ 193 164 "unknown" 194 165 195 - static int __init imx8_soc_init(void) 166 + static int imx8_soc_info(struct platform_device *pdev) 196 167 { 197 168 struct soc_device_attribute *soc_dev_attr; 198 169 struct soc_device *soc_dev; ··· 211 182 if (ret) 212 183 goto free_soc; 213 184 214 - id = of_match_node(imx8_soc_match, of_root); 185 + if (pdev) 186 + id = of_match_node(imx8_soc_match, pdev->dev.of_node); 187 + else 188 + id = of_match_node(imx8_machine_match, of_root); 215 189 if (!id) { 216 190 ret = -ENODEV; 217 191 goto free_soc; ··· 223 191 data = id->data; 224 192 if (data) { 225 193 soc_dev_attr->soc_id = data->name; 226 - if (data->soc_revision) 227 - soc_rev = data->soc_revision(); 194 + if (data->soc_revision) { 195 + if (pdev) { 196 + soc_rev = data->soc_revision(&pdev->dev); 197 + ret = soc_rev; 198 + if (ret < 0) 199 + goto free_soc; 200 + } else { 201 + soc_rev = data->soc_revision(NULL); 202 + } 203 + } 228 204 } 229 205 230 206 soc_dev_attr->revision = imx8_revision(soc_rev); ··· 270 230 kfree(soc_dev_attr); 271 231 return ret; 272 232 } 233 + 234 + /* Retain device_initcall is for backward compatibility with DTS. 
*/ 235 + static int __init imx8_soc_init(void) 236 + { 237 + if (of_find_matching_node_and_match(NULL, imx8_soc_match, NULL)) 238 + return 0; 239 + 240 + return imx8_soc_info(NULL); 241 + } 273 242 device_initcall(imx8_soc_init); 243 + 244 + static struct platform_driver imx8_soc_info_driver = { 245 + .probe = imx8_soc_info, 246 + .driver = { 247 + .name = "imx8_soc_info", 248 + .of_match_table = imx8_soc_match, 249 + }, 250 + }; 251 + 252 + module_platform_driver(imx8_soc_info_driver); 253 + MODULE_LICENSE("GPL v2");
+86
drivers/soc/mediatek/mt8167-pm-domains.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + 3 + #ifndef __SOC_MEDIATEK_MT8167_PM_DOMAINS_H 4 + #define __SOC_MEDIATEK_MT8167_PM_DOMAINS_H 5 + 6 + #include "mtk-pm-domains.h" 7 + #include <dt-bindings/power/mt8167-power.h> 8 + 9 + #define MT8167_PWR_STATUS_MFG_2D BIT(24) 10 + #define MT8167_PWR_STATUS_MFG_ASYNC BIT(25) 11 + 12 + /* 13 + * MT8167 power domain support 14 + */ 15 + 16 + static const struct scpsys_domain_data scpsys_domain_data_mt8167[] = { 17 + [MT8167_POWER_DOMAIN_MM] = { 18 + .sta_mask = PWR_STATUS_DISP, 19 + .ctl_offs = SPM_DIS_PWR_CON, 20 + .sram_pdn_bits = GENMASK(11, 8), 21 + .sram_pdn_ack_bits = GENMASK(12, 12), 22 + .bp_infracfg = { 23 + BUS_PROT_UPDATE_TOPAXI(MT8167_TOP_AXI_PROT_EN_MM_EMI | 24 + MT8167_TOP_AXI_PROT_EN_MCU_MM), 25 + }, 26 + .caps = MTK_SCPD_ACTIVE_WAKEUP, 27 + }, 28 + [MT8167_POWER_DOMAIN_VDEC] = { 29 + .sta_mask = PWR_STATUS_VDEC, 30 + .ctl_offs = SPM_VDE_PWR_CON, 31 + .sram_pdn_bits = GENMASK(8, 8), 32 + .sram_pdn_ack_bits = GENMASK(12, 12), 33 + .caps = MTK_SCPD_ACTIVE_WAKEUP, 34 + }, 35 + [MT8167_POWER_DOMAIN_ISP] = { 36 + .sta_mask = PWR_STATUS_ISP, 37 + .ctl_offs = SPM_ISP_PWR_CON, 38 + .sram_pdn_bits = GENMASK(11, 8), 39 + .sram_pdn_ack_bits = GENMASK(13, 12), 40 + .caps = MTK_SCPD_ACTIVE_WAKEUP, 41 + }, 42 + [MT8167_POWER_DOMAIN_MFG_ASYNC] = { 43 + .sta_mask = MT8167_PWR_STATUS_MFG_ASYNC, 44 + .ctl_offs = SPM_MFG_ASYNC_PWR_CON, 45 + .sram_pdn_bits = 0, 46 + .sram_pdn_ack_bits = 0, 47 + .bp_infracfg = { 48 + BUS_PROT_UPDATE_TOPAXI(MT8167_TOP_AXI_PROT_EN_MCU_MFG | 49 + MT8167_TOP_AXI_PROT_EN_MFG_EMI), 50 + }, 51 + }, 52 + [MT8167_POWER_DOMAIN_MFG_2D] = { 53 + .sta_mask = MT8167_PWR_STATUS_MFG_2D, 54 + .ctl_offs = SPM_MFG_2D_PWR_CON, 55 + .sram_pdn_bits = GENMASK(11, 8), 56 + .sram_pdn_ack_bits = GENMASK(15, 12), 57 + }, 58 + [MT8167_POWER_DOMAIN_MFG] = { 59 + .sta_mask = PWR_STATUS_MFG, 60 + .ctl_offs = SPM_MFG_PWR_CON, 61 + .sram_pdn_bits = GENMASK(11, 8), 62 + .sram_pdn_ack_bits = GENMASK(15, 12), 63 + }, 64 
+ [MT8167_POWER_DOMAIN_CONN] = { 65 + .sta_mask = PWR_STATUS_CONN, 66 + .ctl_offs = SPM_CONN_PWR_CON, 67 + .sram_pdn_bits = GENMASK(8, 8), 68 + .sram_pdn_ack_bits = 0, 69 + .caps = MTK_SCPD_ACTIVE_WAKEUP, 70 + .bp_infracfg = { 71 + BUS_PROT_UPDATE_TOPAXI(MT8167_TOP_AXI_PROT_EN_CONN_EMI | 72 + MT8167_TOP_AXI_PROT_EN_CONN_MCU | 73 + MT8167_TOP_AXI_PROT_EN_MCU_CONN), 74 + }, 75 + }, 76 + }; 77 + 78 + static const struct scpsys_soc_data mt8167_scpsys_data = { 79 + .domains_data = scpsys_domain_data_mt8167, 80 + .num_domains = ARRAY_SIZE(scpsys_domain_data_mt8167), 81 + .pwr_sta_offs = SPM_PWR_STATUS, 82 + .pwr_sta2nd_offs = SPM_PWR_STATUS_2ND, 83 + }; 84 + 85 + #endif /* __SOC_MEDIATEK_MT8167_PM_DOMAINS_H */ 86 +
+1
drivers/soc/mediatek/mt8183-pm-domains.h
··· 38 38 .ctl_offs = 0x0338, 39 39 .sram_pdn_bits = GENMASK(8, 8), 40 40 .sram_pdn_ack_bits = GENMASK(12, 12), 41 + .caps = MTK_SCPD_DOMAIN_SUPPLY, 41 42 }, 42 43 [MT8183_POWER_DOMAIN_MFG_CORE0] = { 43 44 .sta_mask = BIT(7),
-32
drivers/soc/mediatek/mtk-cmdq-helper.c
··· 463 463 } 464 464 EXPORT_SYMBOL(cmdq_pkt_flush_async); 465 465 466 - struct cmdq_flush_completion { 467 - struct completion cmplt; 468 - bool err; 469 - }; 470 - 471 - static void cmdq_pkt_flush_cb(struct cmdq_cb_data data) 472 - { 473 - struct cmdq_flush_completion *cmplt; 474 - 475 - cmplt = (struct cmdq_flush_completion *)data.data; 476 - if (data.sta != CMDQ_CB_NORMAL) 477 - cmplt->err = true; 478 - else 479 - cmplt->err = false; 480 - complete(&cmplt->cmplt); 481 - } 482 - 483 - int cmdq_pkt_flush(struct cmdq_pkt *pkt) 484 - { 485 - struct cmdq_flush_completion cmplt; 486 - int err; 487 - 488 - init_completion(&cmplt.cmplt); 489 - err = cmdq_pkt_flush_async(pkt, cmdq_pkt_flush_cb, &cmplt); 490 - if (err < 0) 491 - return err; 492 - wait_for_completion(&cmplt.cmplt); 493 - 494 - return cmplt.err ? -EFAULT : 0; 495 - } 496 - EXPORT_SYMBOL(cmdq_pkt_flush); 497 - 498 466 MODULE_LICENSE("GPL v2");
+48 -3
drivers/soc/mediatek/mtk-pm-domains.c
··· 13 13 #include <linux/platform_device.h> 14 14 #include <linux/pm_domain.h> 15 15 #include <linux/regmap.h> 16 + #include <linux/regulator/consumer.h> 16 17 #include <linux/soc/mediatek/infracfg.h> 17 18 19 + #include "mt8167-pm-domains.h" 18 20 #include "mt8173-pm-domains.h" 19 21 #include "mt8183-pm-domains.h" 20 22 #include "mt8192-pm-domains.h" ··· 42 40 struct clk_bulk_data *subsys_clks; 43 41 struct regmap *infracfg; 44 42 struct regmap *smi; 43 + struct regulator *supply; 45 44 }; 46 45 47 46 struct scpsys { ··· 190 187 return _scpsys_bus_protect_disable(pd->data->bp_infracfg, pd->infracfg); 191 188 } 192 189 190 + static int scpsys_regulator_enable(struct regulator *supply) 191 + { 192 + return supply ? regulator_enable(supply) : 0; 193 + } 194 + 195 + static int scpsys_regulator_disable(struct regulator *supply) 196 + { 197 + return supply ? regulator_disable(supply) : 0; 198 + } 199 + 193 200 static int scpsys_power_on(struct generic_pm_domain *genpd) 194 201 { 195 202 struct scpsys_domain *pd = container_of(genpd, struct scpsys_domain, genpd); ··· 207 194 bool tmp; 208 195 int ret; 209 196 210 - ret = clk_bulk_enable(pd->num_clks, pd->clks); 197 + ret = scpsys_regulator_enable(pd->supply); 211 198 if (ret) 212 199 return ret; 200 + 201 + ret = clk_bulk_enable(pd->num_clks, pd->clks); 202 + if (ret) 203 + goto err_reg; 213 204 214 205 /* subsys power on */ 215 206 regmap_set_bits(scpsys->base, pd->data->ctl_offs, PWR_ON_BIT); ··· 249 232 clk_bulk_disable(pd->num_subsys_clks, pd->subsys_clks); 250 233 err_pwr_ack: 251 234 clk_bulk_disable(pd->num_clks, pd->clks); 235 + err_reg: 236 + scpsys_regulator_disable(pd->supply); 252 237 return ret; 253 238 } 254 239 ··· 286 267 287 268 clk_bulk_disable(pd->num_clks, pd->clks); 288 269 270 + scpsys_regulator_disable(pd->supply); 271 + 289 272 return 0; 290 273 } 291 274 ··· 296 275 { 297 276 const struct scpsys_domain_data *domain_data; 298 277 struct scpsys_domain *pd; 278 + struct device_node *root_node = 
scpsys->dev->of_node; 299 279 struct property *prop; 300 280 const char *clk_name; 301 281 int i, ret, num_clks; ··· 328 306 329 307 pd->data = domain_data; 330 308 pd->scpsys = scpsys; 309 + 310 + if (MTK_SCPD_CAPS(pd, MTK_SCPD_DOMAIN_SUPPLY)) { 311 + /* 312 + * Find regulator in current power domain node. 313 + * devm_regulator_get() finds regulator in a node and its child 314 + * node, so set of_node to current power domain node then change 315 + * back to original node after regulator is found for current 316 + * power domain node. 317 + */ 318 + scpsys->dev->of_node = node; 319 + pd->supply = devm_regulator_get(scpsys->dev, "domain"); 320 + scpsys->dev->of_node = root_node; 321 + if (IS_ERR(pd->supply)) { 322 + dev_err_probe(scpsys->dev, PTR_ERR(pd->supply), 323 + "%pOF: failed to get power supply.\n", 324 + node); 325 + return ERR_CAST(pd->supply); 326 + } 327 + } 331 328 332 329 pd->infracfg = syscon_regmap_lookup_by_phandle_optional(node, "mediatek,infracfg"); 333 330 if (IS_ERR(pd->infracfg)) ··· 487 446 488 447 child_pd = scpsys_add_one_domain(scpsys, child); 489 448 if (IS_ERR(child_pd)) { 490 - ret = PTR_ERR(child_pd); 491 - dev_err(scpsys->dev, "%pOF: failed to get child domain id\n", child); 449 + dev_err_probe(scpsys->dev, PTR_ERR(child_pd), 450 + "%pOF: failed to get child domain id\n", child); 492 451 goto err_put_node; 493 452 } 494 453 ··· 555 514 } 556 515 557 516 static const struct of_device_id scpsys_of_match[] = { 517 + { 518 + .compatible = "mediatek,mt8167-power-controller", 519 + .data = &mt8167_scpsys_data, 520 + }, 558 521 { 559 522 .compatible = "mediatek,mt8173-power-controller", 560 523 .data = &mt8173_scpsys_data,
+2
drivers/soc/mediatek/mtk-pm-domains.h
··· 7 7 #define MTK_SCPD_FWAIT_SRAM BIT(1) 8 8 #define MTK_SCPD_SRAM_ISO BIT(2) 9 9 #define MTK_SCPD_KEEP_DEFAULT_OFF BIT(3) 10 + #define MTK_SCPD_DOMAIN_SUPPLY BIT(4) 10 11 #define MTK_SCPD_CAPS(_scpd, _x) ((_scpd)->data->caps & (_x)) 11 12 12 13 #define SPM_VDE_PWR_CON 0x0210 ··· 15 14 #define SPM_VEN_PWR_CON 0x0230 16 15 #define SPM_ISP_PWR_CON 0x0238 17 16 #define SPM_DIS_PWR_CON 0x023c 17 + #define SPM_CONN_PWR_CON 0x0280 18 18 #define SPM_VEN2_PWR_CON 0x0298 19 19 #define SPM_AUDIO_PWR_CON 0x029c 20 20 #define SPM_MFG_2D_PWR_CON 0x02c0
+50
drivers/soc/qcom/llcc-qcom.c
··· 4 4 * 5 5 */ 6 6 7 + #include <linux/bitfield.h> 7 8 #include <linux/bitmap.h> 8 9 #include <linux/bitops.h> 9 10 #include <linux/device.h> ··· 36 35 37 36 #define CACHE_LINE_SIZE_SHIFT 6 38 37 38 + #define LLCC_COMMON_HW_INFO 0x00030000 39 + #define LLCC_MAJOR_VERSION_MASK GENMASK(31, 24) 40 + 39 41 #define LLCC_COMMON_STATUS0 0x0003000c 40 42 #define LLCC_LB_CNT_MASK GENMASK(31, 28) 41 43 #define LLCC_LB_CNT_SHIFT 28 ··· 51 47 52 48 #define LLCC_TRP_SCID_DIS_CAP_ALLOC 0x21f00 53 49 #define LLCC_TRP_PCB_ACT 0x21f04 50 + #define LLCC_TRP_WRSC_EN 0x21f20 54 51 55 52 #define BANK_OFFSET_STRIDE 0x80000 56 53 ··· 78 73 * then the ways assigned to this client are not flushed on power 79 74 * collapse. 80 75 * @activate_on_init: Activate the slice immediately after it is programmed 76 + * @write_scid_en: Bit enables write cache support for a given scid. 81 77 */ 82 78 struct llcc_slice_config { 83 79 u32 usecase_id; ··· 93 87 bool dis_cap_alloc; 94 88 bool retain_on_pc; 95 89 bool activate_on_init; 90 + bool write_scid_en; 96 91 }; 97 92 98 93 struct qcom_llcc_config { ··· 154 147 { LLCC_WRCACHE, 31, 128, 1, 1, 0xFFF, 0x0, 0, 0, 0, 0, 0 }, 155 148 }; 156 149 150 + static const struct llcc_slice_config sm8250_data[] = { 151 + { LLCC_CPUSS, 1, 3072, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 1, 0 }, 152 + { LLCC_VIDSC0, 2, 512, 3, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 }, 153 + { LLCC_AUDIO, 6, 1024, 1, 0, 0xfff, 0x0, 0, 0, 0, 0, 0, 0 }, 154 + { LLCC_CMPT, 10, 1024, 1, 0, 0xfff, 0x0, 0, 0, 0, 0, 0, 0 }, 155 + { LLCC_GPUHTW, 11, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 }, 156 + { LLCC_GPU, 12, 1024, 1, 0, 0xfff, 0x0, 0, 0, 0, 1, 0, 1 }, 157 + { LLCC_MMUHWT, 13, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 }, 158 + { LLCC_CMPTDMA, 15, 1024, 1, 0, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 }, 159 + { LLCC_DISP, 16, 3072, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 }, 160 + { LLCC_VIDFW, 17, 512, 1, 0, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 }, 161 + { LLCC_AUDHW, 22, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 }, 162 + { LLCC_NPU, 
23, 3072, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 }, 163 + { LLCC_WLHW, 24, 1024, 1, 0, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 }, 164 + { LLCC_CVP, 28, 256, 3, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 }, 165 + { LLCC_APTCM, 30, 128, 3, 0, 0x0, 0x3, 1, 0, 0, 1, 0, 0 }, 166 + { LLCC_WRCACHE, 31, 256, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 }, 167 + }; 168 + 157 169 static const struct qcom_llcc_config sc7180_cfg = { 158 170 .sct_data = sc7180_data, 159 171 .size = ARRAY_SIZE(sc7180_data), ··· 188 162 static const struct qcom_llcc_config sm8150_cfg = { 189 163 .sct_data = sm8150_data, 190 164 .size = ARRAY_SIZE(sm8150_data), 165 + }; 166 + 167 + static const struct qcom_llcc_config sm8250_cfg = { 168 + .sct_data = sm8250_data, 169 + .size = ARRAY_SIZE(sm8250_data), 191 170 }; 192 171 193 172 static struct llcc_drv_data *drv_data = (void *) -EPROBE_DEFER; ··· 444 413 return ret; 445 414 } 446 415 416 + if (drv_data->major_version == 2) { 417 + u32 wren; 418 + 419 + wren = config->write_scid_en << config->slice_id; 420 + ret = regmap_update_bits(drv_data->bcast_regmap, LLCC_TRP_WRSC_EN, 421 + BIT(config->slice_id), wren); 422 + if (ret) 423 + return ret; 424 + } 425 + 447 426 if (config->activate_on_init) { 448 427 desc.slice_id = config->slice_id; 449 428 ret = llcc_slice_activate(&desc); ··· 517 476 const struct qcom_llcc_config *cfg; 518 477 const struct llcc_slice_config *llcc_cfg; 519 478 u32 sz; 479 + u32 version; 520 480 521 481 drv_data = devm_kzalloc(dev, sizeof(*drv_data), GFP_KERNEL); 522 482 if (!drv_data) { ··· 537 495 ret = PTR_ERR(drv_data->bcast_regmap); 538 496 goto err; 539 497 } 498 + 499 + /* Extract major version of the IP */ 500 + ret = regmap_read(drv_data->bcast_regmap, LLCC_COMMON_HW_INFO, &version); 501 + if (ret) 502 + goto err; 503 + 504 + drv_data->major_version = FIELD_GET(LLCC_MAJOR_VERSION_MASK, version); 540 505 541 506 ret = regmap_read(drv_data->regmap, LLCC_COMMON_STATUS0, 542 507 &num_banks); ··· 608 559 { .compatible = "qcom,sc7180-llcc", .data = &sc7180_cfg }, 
609 560 { .compatible = "qcom,sdm845-llcc", .data = &sdm845_cfg }, 610 561 { .compatible = "qcom,sm8150-llcc", .data = &sm8150_cfg }, 562 + { .compatible = "qcom,sm8250-llcc", .data = &sm8250_cfg }, 611 563 { } 612 564 }; 613 565
+7 -1
drivers/soc/qcom/ocmem.c
··· 189 189 { 190 190 struct platform_device *pdev; 191 191 struct device_node *devnode; 192 + struct ocmem *ocmem; 192 193 193 194 devnode = of_parse_phandle(dev->of_node, "sram", 0); 194 195 if (!devnode || !devnode->parent) { ··· 203 202 return ERR_PTR(-EPROBE_DEFER); 204 203 } 205 204 206 - return platform_get_drvdata(pdev); 205 + ocmem = platform_get_drvdata(pdev); 206 + if (!ocmem) { 207 + dev_err(dev, "Cannot get ocmem\n"); 208 + return ERR_PTR(-ENODEV); 209 + } 210 + return ocmem; 207 211 } 208 212 EXPORT_SYMBOL(of_get_ocmem); 209 213
+1
drivers/soc/qcom/qcom_aoss.c
··· 600 600 { .compatible = "qcom,sdm845-aoss-qmp", }, 601 601 { .compatible = "qcom,sm8150-aoss-qmp", }, 602 602 { .compatible = "qcom,sm8250-aoss-qmp", }, 603 + { .compatible = "qcom,sm8350-aoss-qmp", }, 603 604 {} 604 605 }; 605 606 MODULE_DEVICE_TABLE(of, qmp_dt_match);
+10 -14
drivers/soc/qcom/rpmh-rsc.c
··· 231 231 if (bitmap_empty(tcs->slots, MAX_TCS_SLOTS)) 232 232 return; 233 233 234 - for (m = tcs->offset; m < tcs->offset + tcs->num_tcs; m++) { 234 + for (m = tcs->offset; m < tcs->offset + tcs->num_tcs; m++) 235 235 write_tcs_reg_sync(drv, RSC_DRV_CMD_ENABLE, m, 0); 236 - write_tcs_reg_sync(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, m, 0); 237 - } 236 + 238 237 bitmap_zero(tcs->slots, MAX_TCS_SLOTS); 239 238 } 240 239 ··· 363 364 enable = TCS_AMC_MODE_ENABLE; 364 365 write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable); 365 366 enable |= TCS_AMC_MODE_TRIGGER; 366 - write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable); 367 + write_tcs_reg(drv, RSC_DRV_CONTROL, tcs_id, enable); 367 368 } 368 369 } 369 370 ··· 442 443 skip: 443 444 /* Reclaim the TCS */ 444 445 write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, i, 0); 445 - write_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, i, 0); 446 446 writel_relaxed(BIT(i), drv->tcs_base + RSC_DRV_IRQ_CLEAR); 447 447 spin_lock(&drv->lock); 448 448 clear_bit(i, drv->tcs_in_use); ··· 474 476 static void __tcs_buffer_write(struct rsc_drv *drv, int tcs_id, int cmd_id, 475 477 const struct tcs_request *msg) 476 478 { 477 - u32 msgid, cmd_msgid; 479 + u32 msgid; 480 + u32 cmd_msgid = CMD_MSGID_LEN | CMD_MSGID_WRITE; 478 481 u32 cmd_enable = 0; 479 - u32 cmd_complete; 480 482 struct tcs_cmd *cmd; 481 483 int i, j; 482 484 483 - cmd_msgid = CMD_MSGID_LEN; 485 + /* Convert all commands to RR when the request has wait_for_compl set */ 484 486 cmd_msgid |= msg->wait_for_compl ? CMD_MSGID_RESP_REQ : 0; 485 - cmd_msgid |= CMD_MSGID_WRITE; 486 - 487 - cmd_complete = read_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, tcs_id); 488 487 489 488 for (i = 0, j = cmd_id; i < msg->num_cmds; i++, j++) { 490 489 cmd = &msg->cmds[i]; 491 490 cmd_enable |= BIT(j); 492 - cmd_complete |= cmd->wait << j; 493 491 msgid = cmd_msgid; 492 + /* 493 + * Additionally, if the cmd->wait is set, make the command 494 + * response reqd even if the overall request was fire-n-forget. 
495 + */ 494 496 msgid |= cmd->wait ? CMD_MSGID_RESP_REQ : 0; 495 497 496 498 write_tcs_cmd(drv, RSC_DRV_CMD_MSGID, tcs_id, j, msgid); ··· 499 501 trace_rpmh_send_msg(drv, tcs_id, j, msgid, cmd); 500 502 } 501 503 502 - write_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, tcs_id, cmd_complete); 503 504 cmd_enable |= read_tcs_reg(drv, RSC_DRV_CMD_ENABLE, tcs_id); 504 505 write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, tcs_id, cmd_enable); 505 506 } ··· 649 652 * cleaned from rpmh_flush() by invoking rpmh_rsc_invalidate() 650 653 */ 651 654 write_tcs_reg_sync(drv, RSC_DRV_CMD_ENABLE, tcs_id, 0); 652 - write_tcs_reg_sync(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, tcs_id, 0); 653 655 enable_tcs_irq(drv, tcs_id, true); 654 656 } 655 657 spin_unlock_irqrestore(&drv->lock, flags);
+28
drivers/soc/qcom/rpmpd.c
··· 21 21 * RPMPD_X is X encoded as a little-endian, lower-case, ASCII string */ 22 22 #define RPMPD_SMPA 0x61706d73 23 23 #define RPMPD_LDOA 0x616f646c 24 + #define RPMPD_SMPB 0x62706d73 25 + #define RPMPD_LDOB 0x626f646c 24 26 #define RPMPD_RWCX 0x78637772 25 27 #define RPMPD_RWMX 0x786d7772 26 28 #define RPMPD_RWLC 0x636c7772 ··· 186 184 .max_state = RPM_SMD_LEVEL_TURBO_HIGH, 187 185 }; 188 186 187 + /* msm8994 RPM Power domains */ 188 + DEFINE_RPMPD_PAIR(msm8994, vddcx, vddcx_ao, SMPA, CORNER, 1); 189 + DEFINE_RPMPD_PAIR(msm8994, vddmx, vddmx_ao, SMPA, CORNER, 2); 190 + /* Attention! *Some* 8994 boards with pm8004 may use SMPC here! */ 191 + DEFINE_RPMPD_CORNER(msm8994, vddgfx, SMPB, 2); 192 + 193 + DEFINE_RPMPD_VFC(msm8994, vddcx_vfc, SMPA, 1); 194 + DEFINE_RPMPD_VFC(msm8994, vddgfx_vfc, SMPB, 2); 195 + 196 + static struct rpmpd *msm8994_rpmpds[] = { 197 + [MSM8994_VDDCX] = &msm8994_vddcx, 198 + [MSM8994_VDDCX_AO] = &msm8994_vddcx_ao, 199 + [MSM8994_VDDCX_VFC] = &msm8994_vddcx_vfc, 200 + [MSM8994_VDDMX] = &msm8994_vddmx, 201 + [MSM8994_VDDMX_AO] = &msm8994_vddmx_ao, 202 + [MSM8994_VDDGFX] = &msm8994_vddgfx, 203 + [MSM8994_VDDGFX_VFC] = &msm8994_vddgfx_vfc, 204 + }; 205 + 206 + static const struct rpmpd_desc msm8994_desc = { 207 + .rpmpds = msm8994_rpmpds, 208 + .num_pds = ARRAY_SIZE(msm8994_rpmpds), 209 + .max_state = MAX_CORNER_RPMPD_STATE, 210 + }; 211 + 189 212 /* msm8996 RPM Power domains */ 190 213 DEFINE_RPMPD_PAIR(msm8996, vddcx, vddcx_ao, SMPA, CORNER, 1); 191 214 DEFINE_RPMPD_PAIR(msm8996, vddmx, vddmx_ao, SMPA, CORNER, 2); ··· 329 302 { .compatible = "qcom,msm8916-rpmpd", .data = &msm8916_desc }, 330 303 { .compatible = "qcom,msm8939-rpmpd", .data = &msm8939_desc }, 331 304 { .compatible = "qcom,msm8976-rpmpd", .data = &msm8976_desc }, 305 + { .compatible = "qcom,msm8994-rpmpd", .data = &msm8994_desc }, 332 306 { .compatible = "qcom,msm8996-rpmpd", .data = &msm8996_desc }, 333 307 { .compatible = "qcom,msm8998-rpmpd", .data = &msm8998_desc }, 334 308 
{ .compatible = "qcom,qcs404-rpmpd", .data = &qcs404_desc },
+1 -3
drivers/soc/qcom/smem.c
··· 732 732 header = smem->regions[0].virt_base + le32_to_cpu(entry->offset); 733 733 734 734 if (memcmp(header->magic, SMEM_PART_MAGIC, sizeof(header->magic))) { 735 - dev_err(smem->dev, "bad partition magic %02x %02x %02x %02x\n", 736 - header->magic[0], header->magic[1], 737 - header->magic[2], header->magic[3]); 735 + dev_err(smem->dev, "bad partition magic %4ph\n", header->magic); 738 736 return NULL; 739 737 } 740 738
+81 -24
drivers/soc/qcom/socinfo.c
··· 15 15 #include <linux/sys_soc.h> 16 16 #include <linux/types.h> 17 17 18 + #include <asm/unaligned.h> 19 + 18 20 /* 19 21 * SoC version type with major number in the upper 16 bits and minor 20 22 * number in the lower 16 bits. ··· 85 83 [23] = "PM8038", 86 84 [24] = "PM8922", 87 85 [25] = "PM8917", 86 + [30] = "PM8150", 87 + [31] = "PM8150L", 88 + [32] = "PM8150B", 89 + [33] = "PMK8002", 90 + [36] = "PM8009", 88 91 }; 89 92 #endif /* CONFIG_DEBUG_FS */ 90 93 ··· 224 217 { 250, "MSM8616" }, 225 218 { 251, "MSM8992" }, 226 219 { 253, "APQ8094" }, 220 + { 290, "MDM9607" }, 227 221 { 291, "APQ8096" }, 222 + { 292, "MSM8998" }, 228 223 { 293, "MSM8953" }, 224 + { 296, "MDM8207" }, 225 + { 297, "MDM9207" }, 226 + { 298, "MDM9307" }, 227 + { 299, "MDM9628" }, 229 228 { 304, "APQ8053" }, 230 229 { 305, "MSM8996SG" }, 231 230 { 310, "MSM8996AU" }, 232 231 { 311, "APQ8096AU" }, 233 232 { 312, "APQ8096SG" }, 233 + { 317, "SDM660" }, 234 234 { 318, "SDM630" }, 235 + { 319, "APQ8098" }, 235 236 { 321, "SDM845" }, 237 + { 322, "MDM9206" }, 238 + { 324, "SDA660" }, 239 + { 325, "SDM658" }, 240 + { 326, "SDA658" }, 241 + { 327, "SDA630" }, 236 242 { 338, "SDM450" }, 237 243 { 341, "SDA845" }, 244 + { 345, "SDM636" }, 245 + { 346, "SDA636" }, 238 246 { 349, "SDM632" }, 239 247 { 350, "SDA632" }, 240 248 { 351, "SDA450" }, 241 249 { 356, "SM8250" }, 242 250 { 402, "IPQ6018" }, 243 251 { 425, "SC7180" }, 252 + { 455, "QRB5165" }, 244 253 }; 245 254 246 255 static const char *socinfo_machine(struct device *dev, unsigned int id) ··· 287 264 } 288 265 289 266 #define DEBUGFS_ADD(info, name) \ 290 - debugfs_create_file(__stringify(name), 0400, \ 267 + debugfs_create_file(__stringify(name), 0444, \ 291 268 qcom_socinfo->dbg_root, \ 292 269 info, &qcom_ ##name## _ops) 293 270 ··· 309 286 if (model < 0) 310 287 return -EINVAL; 311 288 312 - if (model <= ARRAY_SIZE(pmic_models) && pmic_models[model]) 289 + if (model < ARRAY_SIZE(pmic_models) && pmic_models[model]) 313 290 seq_printf(seq, 
"%s\n", pmic_models[model]); 314 291 else 315 292 seq_printf(seq, "unknown (%d)\n", model); 293 + 294 + return 0; 295 + } 296 + 297 + static int qcom_show_pmic_model_array(struct seq_file *seq, void *p) 298 + { 299 + struct socinfo *socinfo = seq->private; 300 + unsigned int num_pmics = le32_to_cpu(socinfo->num_pmics); 301 + unsigned int pmic_array_offset = le32_to_cpu(socinfo->pmic_array_offset); 302 + int i; 303 + void *ptr = socinfo; 304 + 305 + ptr += pmic_array_offset; 306 + 307 + /* No need for bounds checking, it happened at socinfo_debugfs_init */ 308 + for (i = 0; i < num_pmics; i++) { 309 + unsigned int model = SOCINFO_MINOR(get_unaligned_le32(ptr + 2 * i * sizeof(u32))); 310 + unsigned int die_rev = get_unaligned_le32(ptr + (2 * i + 1) * sizeof(u32)); 311 + 312 + if (model < ARRAY_SIZE(pmic_models) && pmic_models[model]) 313 + seq_printf(seq, "%s %u.%u\n", pmic_models[model], 314 + SOCINFO_MAJOR(die_rev), 315 + SOCINFO_MINOR(die_rev)); 316 + else 317 + seq_printf(seq, "unknown (%d)\n", model); 318 + } 316 319 317 320 return 0; 318 321 } ··· 365 316 366 317 QCOM_OPEN(build_id, qcom_show_build_id); 367 318 QCOM_OPEN(pmic_model, qcom_show_pmic_model); 319 + QCOM_OPEN(pmic_model_array, qcom_show_pmic_model_array); 368 320 QCOM_OPEN(pmic_die_rev, qcom_show_pmic_die_revision); 369 321 QCOM_OPEN(chip_id, qcom_show_chip_id); 370 322 ··· 394 344 DEFINE_IMAGE_OPS(oem); 395 345 396 346 static void socinfo_debugfs_init(struct qcom_socinfo *qcom_socinfo, 397 - struct socinfo *info) 347 + struct socinfo *info, size_t info_size) 398 348 { 399 349 struct smem_image_version *versions; 400 350 struct dentry *dentry; 401 351 size_t size; 402 352 int i; 353 + unsigned int num_pmics; 354 + unsigned int pmic_array_offset; 403 355 404 356 qcom_socinfo->dbg_root = debugfs_create_dir("qcom_socinfo", NULL); 405 357 406 358 qcom_socinfo->info.fmt = __le32_to_cpu(info->fmt); 407 359 408 - debugfs_create_x32("info_fmt", 0400, qcom_socinfo->dbg_root, 360 + 
debugfs_create_x32("info_fmt", 0444, qcom_socinfo->dbg_root, 409 361 &qcom_socinfo->info.fmt); 410 362 411 363 switch (qcom_socinfo->info.fmt) { 412 364 case SOCINFO_VERSION(0, 15): 413 365 qcom_socinfo->info.nmodem_supported = __le32_to_cpu(info->nmodem_supported); 414 366 415 - debugfs_create_u32("nmodem_supported", 0400, qcom_socinfo->dbg_root, 367 + debugfs_create_u32("nmodem_supported", 0444, qcom_socinfo->dbg_root, 416 368 &qcom_socinfo->info.nmodem_supported); 417 369 fallthrough; 418 370 case SOCINFO_VERSION(0, 14): ··· 423 371 qcom_socinfo->info.num_defective_parts = __le32_to_cpu(info->num_defective_parts); 424 372 qcom_socinfo->info.ndefective_parts_array_offset = __le32_to_cpu(info->ndefective_parts_array_offset); 425 373 426 - debugfs_create_u32("num_clusters", 0400, qcom_socinfo->dbg_root, 374 + debugfs_create_u32("num_clusters", 0444, qcom_socinfo->dbg_root, 427 375 &qcom_socinfo->info.num_clusters); 428 - debugfs_create_u32("ncluster_array_offset", 0400, qcom_socinfo->dbg_root, 376 + debugfs_create_u32("ncluster_array_offset", 0444, qcom_socinfo->dbg_root, 429 377 &qcom_socinfo->info.ncluster_array_offset); 430 - debugfs_create_u32("num_defective_parts", 0400, qcom_socinfo->dbg_root, 378 + debugfs_create_u32("num_defective_parts", 0444, qcom_socinfo->dbg_root, 431 379 &qcom_socinfo->info.num_defective_parts); 432 - debugfs_create_u32("ndefective_parts_array_offset", 0400, qcom_socinfo->dbg_root, 380 + debugfs_create_u32("ndefective_parts_array_offset", 0444, qcom_socinfo->dbg_root, 433 381 &qcom_socinfo->info.ndefective_parts_array_offset); 434 382 fallthrough; 435 383 case SOCINFO_VERSION(0, 13): 436 384 qcom_socinfo->info.nproduct_id = __le32_to_cpu(info->nproduct_id); 437 385 438 - debugfs_create_u32("nproduct_id", 0400, qcom_socinfo->dbg_root, 386 + debugfs_create_u32("nproduct_id", 0444, qcom_socinfo->dbg_root, 439 387 &qcom_socinfo->info.nproduct_id); 440 388 DEBUGFS_ADD(info, chip_id); 441 389 fallthrough; ··· 447 395 
qcom_socinfo->info.raw_device_num = 448 396 __le32_to_cpu(info->raw_device_num); 449 397 450 - debugfs_create_x32("chip_family", 0400, qcom_socinfo->dbg_root, 398 + debugfs_create_x32("chip_family", 0444, qcom_socinfo->dbg_root, 451 399 &qcom_socinfo->info.chip_family); 452 - debugfs_create_x32("raw_device_family", 0400, 400 + debugfs_create_x32("raw_device_family", 0444, 453 401 qcom_socinfo->dbg_root, 454 402 &qcom_socinfo->info.raw_device_family); 455 - debugfs_create_x32("raw_device_number", 0400, 403 + debugfs_create_x32("raw_device_number", 0444, 456 404 qcom_socinfo->dbg_root, 457 405 &qcom_socinfo->info.raw_device_num); 458 406 fallthrough; 459 407 case SOCINFO_VERSION(0, 11): 408 + num_pmics = le32_to_cpu(info->num_pmics); 409 + pmic_array_offset = le32_to_cpu(info->pmic_array_offset); 410 + if (pmic_array_offset + 2 * num_pmics * sizeof(u32) <= info_size) 411 + DEBUGFS_ADD(info, pmic_model_array); 412 + fallthrough; 460 413 case SOCINFO_VERSION(0, 10): 461 414 case SOCINFO_VERSION(0, 9): 462 415 qcom_socinfo->info.foundry_id = __le32_to_cpu(info->foundry_id); 463 416 464 - debugfs_create_u32("foundry_id", 0400, qcom_socinfo->dbg_root, 417 + debugfs_create_u32("foundry_id", 0444, qcom_socinfo->dbg_root, 465 418 &qcom_socinfo->info.foundry_id); 466 419 fallthrough; 467 420 case SOCINFO_VERSION(0, 8): ··· 478 421 qcom_socinfo->info.hw_plat_subtype = 479 422 __le32_to_cpu(info->hw_plat_subtype); 480 423 481 - debugfs_create_u32("hardware_platform_subtype", 0400, 424 + debugfs_create_u32("hardware_platform_subtype", 0444, 482 425 qcom_socinfo->dbg_root, 483 426 &qcom_socinfo->info.hw_plat_subtype); 484 427 fallthrough; ··· 486 429 qcom_socinfo->info.accessory_chip = 487 430 __le32_to_cpu(info->accessory_chip); 488 431 489 - debugfs_create_u32("accessory_chip", 0400, 432 + debugfs_create_u32("accessory_chip", 0444, 490 433 qcom_socinfo->dbg_root, 491 434 &qcom_socinfo->info.accessory_chip); 492 435 fallthrough; 493 436 case SOCINFO_VERSION(0, 4): 494 437 
qcom_socinfo->info.plat_ver = __le32_to_cpu(info->plat_ver); 495 438 496 - debugfs_create_u32("platform_version", 0400, 439 + debugfs_create_u32("platform_version", 0444, 497 440 qcom_socinfo->dbg_root, 498 441 &qcom_socinfo->info.plat_ver); 499 442 fallthrough; 500 443 case SOCINFO_VERSION(0, 3): 501 444 qcom_socinfo->info.hw_plat = __le32_to_cpu(info->hw_plat); 502 445 503 - debugfs_create_u32("hardware_platform", 0400, 446 + debugfs_create_u32("hardware_platform", 0444, 504 447 qcom_socinfo->dbg_root, 505 448 &qcom_socinfo->info.hw_plat); 506 449 fallthrough; 507 450 case SOCINFO_VERSION(0, 2): 508 451 qcom_socinfo->info.raw_ver = __le32_to_cpu(info->raw_ver); 509 452 510 - debugfs_create_u32("raw_version", 0400, qcom_socinfo->dbg_root, 453 + debugfs_create_u32("raw_version", 0444, qcom_socinfo->dbg_root, 511 454 &qcom_socinfo->info.raw_ver); 512 455 fallthrough; 513 456 case SOCINFO_VERSION(0, 1): ··· 524 467 525 468 dentry = debugfs_create_dir(socinfo_image_names[i], 526 469 qcom_socinfo->dbg_root); 527 - debugfs_create_file("name", 0400, dentry, &versions[i], 470 + debugfs_create_file("name", 0444, dentry, &versions[i], 528 471 &qcom_image_name_ops); 529 - debugfs_create_file("variant", 0400, dentry, &versions[i], 472 + debugfs_create_file("variant", 0444, dentry, &versions[i], 530 473 &qcom_image_variant_ops); 531 - debugfs_create_file("oem", 0400, dentry, &versions[i], 474 + debugfs_create_file("oem", 0444, dentry, &versions[i], 532 475 &qcom_image_oem_ops); 533 476 } 534 477 } ··· 539 482 } 540 483 #else 541 484 static void socinfo_debugfs_init(struct qcom_socinfo *qcom_socinfo, 542 - struct socinfo *info) 485 + struct socinfo *info, size_t info_size) 543 486 { 544 487 } 545 488 static void socinfo_debugfs_exit(struct qcom_socinfo *qcom_socinfo) { } ··· 579 522 if (IS_ERR(qs->soc_dev)) 580 523 return PTR_ERR(qs->soc_dev); 581 524 582 - socinfo_debugfs_init(qs, info); 525 + socinfo_debugfs_init(qs, info, item_size); 583 526 584 527 /* Feed the soc specific 
unique data into entropy pool */ 585 528 add_device_randomness(info, item_size);
+23 -8
drivers/soc/sunxi/sunxi_sram.c
···
 EXPORT_SYMBOL(sunxi_sram_release);

 struct sunxi_sramc_variant {
-	bool has_emac_clock;
+	int num_emac_clocks;
 };

 static const struct sunxi_sramc_variant sun4i_a10_sramc_variant = {
···
 };

 static const struct sunxi_sramc_variant sun8i_h3_sramc_variant = {
-	.has_emac_clock = true,
+	.num_emac_clocks = 1,
 };

 static const struct sunxi_sramc_variant sun50i_a64_sramc_variant = {
-	.has_emac_clock = true,
+	.num_emac_clocks = 1,
+};
+
+static const struct sunxi_sramc_variant sun50i_h616_sramc_variant = {
+	.num_emac_clocks = 2,
 };

 #define SUNXI_SRAM_EMAC_CLOCK_REG	0x30
 static bool sunxi_sram_regmap_accessible_reg(struct device *dev,
					     unsigned int reg)
 {
-	if (reg == SUNXI_SRAM_EMAC_CLOCK_REG)
-		return true;
-	return false;
+	const struct sunxi_sramc_variant *variant;
+
+	variant = of_device_get_match_data(dev);
+
+	if (reg < SUNXI_SRAM_EMAC_CLOCK_REG)
+		return false;
+	if (reg > SUNXI_SRAM_EMAC_CLOCK_REG + variant->num_emac_clocks * 4)
+		return false;
+
+	return true;
 }

 static struct regmap_config sunxi_sram_emac_clock_regmap = {
···
 	.val_bits = 32,
 	.reg_stride = 4,
 	/* last defined register */
-	.max_register = SUNXI_SRAM_EMAC_CLOCK_REG,
+	.max_register = SUNXI_SRAM_EMAC_CLOCK_REG + 4,
 	/* other devices have no business accessing other registers */
 	.readable_reg = sunxi_sram_regmap_accessible_reg,
 	.writeable_reg = sunxi_sram_regmap_accessible_reg,
···
 	if (!d)
 		return -ENOMEM;

-	if (variant->has_emac_clock) {
+	if (variant->num_emac_clocks > 0) {
 		emac_clock = devm_regmap_init_mmio(&pdev->dev, base,
						   &sunxi_sram_emac_clock_regmap);
···
 	{
 		.compatible = "allwinner,sun50i-h5-system-control",
 		.data = &sun50i_a64_sramc_variant,
+	},
+	{
+		.compatible = "allwinner,sun50i-h616-system-control",
+		.data = &sun50i_h616_sramc_variant,
 	},
 	{ },
 };
+3 -4
drivers/soc/ti/k3-ringacc.c
···
 #include <linux/io.h>
 #include <linux/init.h>
 #include <linux/of.h>
+#include <linux/of_device.h>
 #include <linux/platform_device.h>
 #include <linux/sys_soc.h>
 #include <linux/dma/ti-cppi5.h>
···
 static int k3_ringacc_probe(struct platform_device *pdev)
 {
 	const struct ringacc_match_data *match_data;
-	const struct of_device_id *match;
 	struct device *dev = &pdev->dev;
 	struct k3_ringacc *ringacc;
 	int ret;

-	match = of_match_node(k3_ringacc_of_match, dev->of_node);
-	if (!match)
+	match_data = of_device_get_match_data(&pdev->dev);
+	if (!match_data)
 		return -ENODEV;
-	match_data = match->data;

 	ringacc = devm_kzalloc(dev, sizeof(*ringacc), GFP_KERNEL);
 	if (!ringacc)
+1
drivers/soc/ti/knav_dma.c
···
 	for_each_child_of_node(node, child) {
 		ret = dma_init(node, child);
 		if (ret) {
+			of_node_put(child);
 			dev_err(&pdev->dev, "init failed with %d\n", ret);
 			break;
 		}
+3
drivers/soc/ti/knav_qmss_queue.c
···
 	for_each_child_of_node(regions, child) {
 		region = devm_kzalloc(dev, sizeof(*region), GFP_KERNEL);
 		if (!region) {
+			of_node_put(child);
 			dev_err(dev, "out of memory allocating region\n");
 			return -ENOMEM;
 		}
···
 	for_each_child_of_node(qmgrs, child) {
 		qmgr = devm_kzalloc(dev, sizeof(*qmgr), GFP_KERNEL);
 		if (!qmgr) {
+			of_node_put(child);
 			dev_err(dev, "out of memory allocating qmgr\n");
 			return -ENOMEM;
 		}
···
 	for_each_child_of_node(pdsps, child) {
 		pdsp = devm_kzalloc(dev, sizeof(*pdsp), GFP_KERNEL);
 		if (!pdsp) {
+			of_node_put(child);
 			dev_err(dev, "out of memory allocating pdsp\n");
 			return -ENOMEM;
 		}
+4 -1
drivers/soc/ti/pm33xx.c
···
 	ret = am33xx_push_sram_idle();
 	if (ret)
-		goto err_free_sram;
+		goto err_unsetup_rtc;

 	am33xx_pm_set_ipc_ops();
···
 err_pm_runtime_disable:
 	pm_runtime_disable(dev);
 	wkup_m3_ipc_put(m3_ipc);
+err_unsetup_rtc:
+	iounmap(rtc_base_virt);
+	clk_put(rtc_fck);
 err_free_sram:
 	am33xx_pm_free_sram();
 	pm33xx_dev = NULL;
+50 -41
drivers/soc/ti/pruss.c
···
 	.reg_stride = 4,
 };

+static int pruss_cfg_of_init(struct device *dev, struct pruss *pruss)
+{
+	struct device_node *np = dev_of_node(dev);
+	struct device_node *child;
+	struct resource res;
+	int ret;
+
+	child = of_get_child_by_name(np, "cfg");
+	if (!child) {
+		dev_err(dev, "%pOF is missing its 'cfg' node\n", child);
+		return -ENODEV;
+	}
+
+	if (of_address_to_resource(child, 0, &res)) {
+		ret = -ENOMEM;
+		goto node_put;
+	}
+
+	pruss->cfg_base = devm_ioremap(dev, res.start, resource_size(&res));
+	if (!pruss->cfg_base) {
+		ret = -ENOMEM;
+		goto node_put;
+	}
+
+	regmap_conf.name = kasprintf(GFP_KERNEL, "%pOFn@%llx", child,
+				     (u64)res.start);
+	regmap_conf.max_register = resource_size(&res) - 4;
+
+	pruss->cfg_regmap = devm_regmap_init_mmio(dev, pruss->cfg_base,
+						  &regmap_conf);
+	kfree(regmap_conf.name);
+	if (IS_ERR(pruss->cfg_regmap)) {
+		dev_err(dev, "regmap_init_mmio failed for cfg, ret = %ld\n",
+			PTR_ERR(pruss->cfg_regmap));
+		ret = PTR_ERR(pruss->cfg_regmap);
+		goto node_put;
+	}
+
+	ret = pruss_clk_init(pruss, child);
+	if (ret)
+		dev_err(dev, "pruss_clk_init failed, ret = %d\n", ret);
+
+node_put:
+	of_node_put(child);
+	return ret;
+}
+
 static int pruss_probe(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
···
 		goto rpm_disable;
 	}

-	child = of_get_child_by_name(np, "cfg");
-	if (!child) {
-		dev_err(dev, "%pOF is missing its 'cfg' node\n", child);
-		ret = -ENODEV;
+	ret = pruss_cfg_of_init(dev, pruss);
+	if (ret < 0)
 		goto rpm_put;
-	}
-
-	if (of_address_to_resource(child, 0, &res)) {
-		ret = -ENOMEM;
-		goto node_put;
-	}
-
-	pruss->cfg_base = devm_ioremap(dev, res.start, resource_size(&res));
-	if (!pruss->cfg_base) {
-		ret = -ENOMEM;
-		goto node_put;
-	}
-
-	regmap_conf.name = kasprintf(GFP_KERNEL, "%pOFn@%llx", child,
-				     (u64)res.start);
-	regmap_conf.max_register = resource_size(&res) - 4;
-
-	pruss->cfg_regmap = devm_regmap_init_mmio(dev, pruss->cfg_base,
-						  &regmap_conf);
-	kfree(regmap_conf.name);
-	if (IS_ERR(pruss->cfg_regmap)) {
-		dev_err(dev, "regmap_init_mmio failed for cfg, ret = %ld\n",
-			PTR_ERR(pruss->cfg_regmap));
-		ret = PTR_ERR(pruss->cfg_regmap);
-		goto node_put;
-	}
-
-	ret = pruss_clk_init(pruss, child);
-	if (ret) {
-		dev_err(dev, "failed to setup coreclk-mux\n");
-		goto node_put;
-	}

 	ret = devm_of_platform_populate(dev);
 	if (ret) {
 		dev_err(dev, "failed to register child devices\n");
-		goto node_put;
+		goto rpm_put;
 	}
-
-	of_node_put(child);

 	return 0;

-node_put:
-	of_node_put(child);
 rpm_put:
 	pm_runtime_put_sync(dev);
 rpm_disable:
+1 -2
drivers/tee/optee/call.c
···
 	 */
 		optee_cq_wait_for_completion(&optee->call_queue, &w);
 	} else if (OPTEE_SMC_RETURN_IS_RPC(res.a0)) {
-		if (need_resched())
-			cond_resched();
+		cond_resched();
 		param.a0 = res.a0;
 		param.a1 = res.a1;
 		param.a2 = res.a2;
+9 -149
drivers/tee/optee/optee_msg.h
··· 1 1 /* SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) */ 2 2 /* 3 - * Copyright (c) 2015-2019, Linaro Limited 3 + * Copyright (c) 2015-2021, Linaro Limited 4 4 */ 5 5 #ifndef _OPTEE_MSG_H 6 6 #define _OPTEE_MSG_H ··· 12 12 * This file defines the OP-TEE message protocol used to communicate 13 13 * with an instance of OP-TEE running in secure world. 14 14 * 15 - * This file is divided into three sections. 15 + * This file is divided into two sections. 16 16 * 1. Formatting of messages. 17 17 * 2. Requests from normal world 18 - * 3. Requests from secure world, Remote Procedure Call (RPC), handled by 19 - * tee-supplicant. 20 18 */ 21 19 22 20 /***************************************************************************** ··· 52 54 * Every entry in buffer should point to a 4k page beginning (12 least 53 55 * significant bits must be equal to zero). 54 56 * 55 - * 12 least significant bints of optee_msg_param.u.tmem.buf_ptr should hold page 56 - * offset of the user buffer. 57 + * 12 least significant bits of optee_msg_param.u.tmem.buf_ptr should hold 58 + * page offset of user buffer. 57 59 * 58 60 * So, entries should be placed like members of this structure: 59 61 * ··· 174 176 * @params: the parameters supplied to the OS Command 175 177 * 176 178 * All normal calls to Trusted OS uses this struct. If cmd requires further 177 - * information than what these field holds it can be passed as a parameter 179 + * information than what these fields hold it can be passed as a parameter 178 180 * tagged as meta (setting the OPTEE_MSG_ATTR_META bit in corresponding 179 - * attrs field). All parameters tagged as meta has to come first. 180 - * 181 - * Temp memref parameters can be fragmented if supported by the Trusted OS 182 - * (when optee_smc.h is bearer of this protocol this is indicated with 183 - * OPTEE_SMC_SEC_CAP_UNREGISTERED_SHM). 
If a logical memref parameter is 184 - * fragmented then has all but the last fragment the 185 - * OPTEE_MSG_ATTR_FRAGMENT bit set in attrs. Even if a memref is fragmented 186 - * it will still be presented as a single logical memref to the Trusted 187 - * Application. 181 + * attrs field). All parameters tagged as meta have to come first. 188 182 */ 189 183 struct optee_msg_arg { 190 184 u32 cmd; ··· 189 199 u32 num_params; 190 200 191 201 /* num_params tells the actual number of element in params */ 192 - struct optee_msg_param params[0]; 202 + struct optee_msg_param params[]; 193 203 }; 194 204 195 205 /** ··· 280 290 * OPTEE_MSG_CMD_REGISTER_SHM registers a shared memory reference. The 281 291 * information is passed as: 282 292 * [in] param[0].attr OPTEE_MSG_ATTR_TYPE_TMEM_INPUT 283 - * [| OPTEE_MSG_ATTR_FRAGMENT] 293 + * [| OPTEE_MSG_ATTR_NONCONTIG] 284 294 * [in] param[0].u.tmem.buf_ptr physical address (of first fragment) 285 295 * [in] param[0].u.tmem.size size (of first fragment) 286 296 * [in] param[0].u.tmem.shm_ref holds shared memory reference 287 - * ... 288 - * The shared memory can optionally be fragmented, temp memrefs can follow 289 - * each other with all but the last with the OPTEE_MSG_ATTR_FRAGMENT bit set. 290 297 * 291 - * OPTEE_MSG_CMD_UNREGISTER_SHM unregisteres a previously registered shared 298 + * OPTEE_MSG_CMD_UNREGISTER_SHM unregisters a previously registered shared 292 299 * memory reference. 
The information is passed as: 293 300 * [in] param[0].attr OPTEE_MSG_ATTR_TYPE_RMEM_INPUT 294 301 * [in] param[0].u.rmem.shm_ref holds shared memory reference ··· 299 312 #define OPTEE_MSG_CMD_REGISTER_SHM 4 300 313 #define OPTEE_MSG_CMD_UNREGISTER_SHM 5 301 314 #define OPTEE_MSG_FUNCID_CALL_WITH_ARG 0x0004 302 - 303 - /***************************************************************************** 304 - * Part 3 - Requests from secure world, RPC 305 - *****************************************************************************/ 306 - 307 - /* 308 - * All RPC is done with a struct optee_msg_arg as bearer of information, 309 - * struct optee_msg_arg::arg holds values defined by OPTEE_MSG_RPC_CMD_* below 310 - * 311 - * RPC communication with tee-supplicant is reversed compared to normal 312 - * client communication desribed above. The supplicant receives requests 313 - * and sends responses. 314 - */ 315 - 316 - /* 317 - * Load a TA into memory, defined in tee-supplicant 318 - */ 319 - #define OPTEE_MSG_RPC_CMD_LOAD_TA 0 320 - 321 - /* 322 - * Reserved 323 - */ 324 - #define OPTEE_MSG_RPC_CMD_RPMB 1 325 - 326 - /* 327 - * File system access, defined in tee-supplicant 328 - */ 329 - #define OPTEE_MSG_RPC_CMD_FS 2 330 - 331 - /* 332 - * Get time 333 - * 334 - * Returns number of seconds and nano seconds since the Epoch, 335 - * 1970-01-01 00:00:00 +0000 (UTC). 336 - * 337 - * [out] param[0].u.value.a Number of seconds 338 - * [out] param[0].u.value.b Number of nano seconds. 339 - */ 340 - #define OPTEE_MSG_RPC_CMD_GET_TIME 3 341 - 342 - /* 343 - * Wait queue primitive, helper for secure world to implement a wait queue. 344 - * 345 - * If secure world need to wait for a secure world mutex it issues a sleep 346 - * request instead of spinning in secure world. Conversely is a wakeup 347 - * request issued when a secure world mutex with a thread waiting thread is 348 - * unlocked. 
349 - * 350 - * Waiting on a key 351 - * [in] param[0].u.value.a OPTEE_MSG_RPC_WAIT_QUEUE_SLEEP 352 - * [in] param[0].u.value.b wait key 353 - * 354 - * Waking up a key 355 - * [in] param[0].u.value.a OPTEE_MSG_RPC_WAIT_QUEUE_WAKEUP 356 - * [in] param[0].u.value.b wakeup key 357 - */ 358 - #define OPTEE_MSG_RPC_CMD_WAIT_QUEUE 4 359 - #define OPTEE_MSG_RPC_WAIT_QUEUE_SLEEP 0 360 - #define OPTEE_MSG_RPC_WAIT_QUEUE_WAKEUP 1 361 - 362 - /* 363 - * Suspend execution 364 - * 365 - * [in] param[0].value .a number of milliseconds to suspend 366 - */ 367 - #define OPTEE_MSG_RPC_CMD_SUSPEND 5 368 - 369 - /* 370 - * Allocate a piece of shared memory 371 - * 372 - * Shared memory can optionally be fragmented, to support that additional 373 - * spare param entries are allocated to make room for eventual fragments. 374 - * The spare param entries has .attr = OPTEE_MSG_ATTR_TYPE_NONE when 375 - * unused. All returned temp memrefs except the last should have the 376 - * OPTEE_MSG_ATTR_FRAGMENT bit set in the attr field. 377 - * 378 - * [in] param[0].u.value.a type of memory one of 379 - * OPTEE_MSG_RPC_SHM_TYPE_* below 380 - * [in] param[0].u.value.b requested size 381 - * [in] param[0].u.value.c required alignment 382 - * 383 - * [out] param[0].u.tmem.buf_ptr physical address (of first fragment) 384 - * [out] param[0].u.tmem.size size (of first fragment) 385 - * [out] param[0].u.tmem.shm_ref shared memory reference 386 - * ... 
387 - * [out] param[n].u.tmem.buf_ptr physical address 388 - * [out] param[n].u.tmem.size size 389 - * [out] param[n].u.tmem.shm_ref shared memory reference (same value 390 - * as in param[n-1].u.tmem.shm_ref) 391 - */ 392 - #define OPTEE_MSG_RPC_CMD_SHM_ALLOC 6 393 - /* Memory that can be shared with a non-secure user space application */ 394 - #define OPTEE_MSG_RPC_SHM_TYPE_APPL 0 395 - /* Memory only shared with non-secure kernel */ 396 - #define OPTEE_MSG_RPC_SHM_TYPE_KERNEL 1 397 - 398 - /* 399 - * Free shared memory previously allocated with OPTEE_MSG_RPC_CMD_SHM_ALLOC 400 - * 401 - * [in] param[0].u.value.a type of memory one of 402 - * OPTEE_MSG_RPC_SHM_TYPE_* above 403 - * [in] param[0].u.value.b value of shared memory reference 404 - * returned in param[0].u.tmem.shm_ref 405 - * above 406 - */ 407 - #define OPTEE_MSG_RPC_CMD_SHM_FREE 7 408 - 409 - /* 410 - * Access a device on an i2c bus 411 - * 412 - * [in] param[0].u.value.a mode: RD(0), WR(1) 413 - * [in] param[0].u.value.b i2c adapter 414 - * [in] param[0].u.value.c i2c chip 415 - * 416 - * [in] param[1].u.value.a i2c control flags 417 - * 418 - * [in/out] memref[2] buffer to exchange the transfer data 419 - * with the secure world 420 - * 421 - * [out] param[3].u.value.a bytes transferred by the driver 422 - */ 423 - #define OPTEE_MSG_RPC_CMD_I2C_TRANSFER 21 424 - /* I2C master transfer modes */ 425 - #define OPTEE_MSG_RPC_CMD_I2C_TRANSFER_RD 0 426 - #define OPTEE_MSG_RPC_CMD_I2C_TRANSFER_WR 1 427 - /* I2C master control flags */ 428 - #define OPTEE_MSG_RPC_CMD_I2C_FLAGS_TEN_BIT BIT(0) 429 315 430 316 #endif /* _OPTEE_MSG_H */
+103
drivers/tee/optee/optee_rpc_cmd.h
···
+/* SPDX-License-Identifier: BSD-2-Clause */
+/*
+ * Copyright (c) 2016-2021, Linaro Limited
+ */
+
+#ifndef __OPTEE_RPC_CMD_H
+#define __OPTEE_RPC_CMD_H
+
+/*
+ * All RPC is done with a struct optee_msg_arg as bearer of information,
+ * struct optee_msg_arg::arg holds values defined by OPTEE_RPC_CMD_* below.
+ * Only the commands handled by the kernel driver are defined here.
+ *
+ * RPC communication with tee-supplicant is reversed compared to normal
+ * client communication described above. The supplicant receives requests
+ * and sends responses.
+ */
+
+/*
+ * Get time
+ *
+ * Returns number of seconds and nano seconds since the Epoch,
+ * 1970-01-01 00:00:00 +0000 (UTC).
+ *
+ * [out]    value[0].a	    Number of seconds
+ * [out]    value[0].b	    Number of nano seconds.
+ */
+#define OPTEE_RPC_CMD_GET_TIME		3
+
+/*
+ * Wait queue primitive, helper for secure world to implement a wait queue.
+ *
+ * If secure world needs to wait for a secure world mutex it issues a sleep
+ * request instead of spinning in secure world. Conversely is a wakeup
+ * request issued when a secure world mutex with a waiting thread is
+ * unlocked.
+ *
+ * Waiting on a key
+ * [in]    value[0].a	    OPTEE_RPC_WAIT_QUEUE_SLEEP
+ * [in]    value[0].b	    Wait key
+ *
+ * Waking up a key
+ * [in]    value[0].a	    OPTEE_RPC_WAIT_QUEUE_WAKEUP
+ * [in]    value[0].b	    Wakeup key
+ */
+#define OPTEE_RPC_CMD_WAIT_QUEUE	4
+#define OPTEE_RPC_WAIT_QUEUE_SLEEP	0
+#define OPTEE_RPC_WAIT_QUEUE_WAKEUP	1
+
+/*
+ * Suspend execution
+ *
+ * [in]    value[0].a	Number of milliseconds to suspend
+ */
+#define OPTEE_RPC_CMD_SUSPEND		5
+
+/*
+ * Allocate a piece of shared memory
+ *
+ * [in]    value[0].a	    Type of memory one of
+ *			    OPTEE_RPC_SHM_TYPE_* below
+ * [in]    value[0].b	    Requested size
+ * [in]    value[0].c	    Required alignment
+ * [out]   memref[0]	    Buffer
+ */
+#define OPTEE_RPC_CMD_SHM_ALLOC		6
+/* Memory that can be shared with a non-secure user space application */
+#define OPTEE_RPC_SHM_TYPE_APPL		0
+/* Memory only shared with non-secure kernel */
+#define OPTEE_RPC_SHM_TYPE_KERNEL	1
+
+/*
+ * Free shared memory previously allocated with OPTEE_RPC_CMD_SHM_ALLOC
+ *
+ * [in]    value[0].a	    Type of memory one of
+ *			    OPTEE_RPC_SHM_TYPE_* above
+ * [in]    value[0].b	    Value of shared memory reference or cookie
+ */
+#define OPTEE_RPC_CMD_SHM_FREE		7
+
+/*
+ * Issue master requests (read and write operations) to an I2C chip.
+ *
+ * [in]    value[0].a	    Transfer mode (OPTEE_RPC_I2C_TRANSFER_*)
+ * [in]    value[0].b	    The I2C bus (a.k.a adapter).
+ *			    16 bit field.
+ * [in]    value[0].c	    The I2C chip (a.k.a address).
+ *			    16 bit field (either 7 or 10 bit effective).
+ * [in]    value[1].a	    The I2C master control flags (ie, 10 bit address).
+ *			    16 bit field.
+ * [in/out] memref[2]	    Buffer used for data transfers.
+ * [out]   value[3].a	    Number of bytes transferred by the REE.
+ */
+#define OPTEE_RPC_CMD_I2C_TRANSFER	21
+
+/* I2C master transfer modes */
+#define OPTEE_RPC_I2C_TRANSFER_RD	0
+#define OPTEE_RPC_I2C_TRANSFER_WR	1
+
+/* I2C master control flags */
+#define OPTEE_RPC_I2C_FLAGS_TEN_BIT	BIT(0)
+
+#endif /*__OPTEE_RPC_CMD_H*/
+49 -23
drivers/tee/optee/optee_smc.h
··· 1 1 /* SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) */ 2 2 /* 3 - * Copyright (c) 2015-2019, Linaro Limited 3 + * Copyright (c) 2015-2021, Linaro Limited 4 4 */ 5 5 #ifndef OPTEE_SMC_H 6 6 #define OPTEE_SMC_H ··· 39 39 /* 40 40 * Function specified by SMC Calling convention 41 41 * 42 - * Return one of the following UIDs if using API specified in this file 43 - * without further extentions: 44 - * 65cb6b93-af0c-4617-8ed6-644a8d1140f8 45 - * see also OPTEE_SMC_UID_* in optee_msg.h 42 + * Return the following UID if using API specified in this file 43 + * without further extensions: 44 + * 384fb3e0-e7f8-11e3-af63-0002a5d5c51b. 45 + * see also OPTEE_MSG_UID_* in optee_msg.h 46 46 */ 47 47 #define OPTEE_SMC_FUNCID_CALLS_UID OPTEE_MSG_FUNCID_CALLS_UID 48 48 #define OPTEE_SMC_CALLS_UID \ ··· 53 53 /* 54 54 * Function specified by SMC Calling convention 55 55 * 56 - * Returns 2.0 if using API specified in this file without further extentions. 56 + * Returns 2.0 if using API specified in this file without further extensions. 57 57 * see also OPTEE_MSG_REVISION_* in optee_msg.h 58 58 */ 59 59 #define OPTEE_SMC_FUNCID_CALLS_REVISION OPTEE_MSG_FUNCID_CALLS_REVISION ··· 109 109 * 110 110 * Call register usage: 111 111 * a0 SMC Function ID, OPTEE_SMC*CALL_WITH_ARG 112 - * a1 Upper 32bit of a 64bit physical pointer to a struct optee_msg_arg 113 - * a2 Lower 32bit of a 64bit physical pointer to a struct optee_msg_arg 112 + * a1 Upper 32 bits of a 64-bit physical pointer to a struct optee_msg_arg 113 + * a2 Lower 32 bits of a 64-bit physical pointer to a struct optee_msg_arg 114 114 * a3 Cache settings, not used if physical pointer is in a predefined shared 115 115 * memory area else per OPTEE_SMC_SHM_* 116 116 * a4-6 Not used ··· 139 139 * optee_msg_arg. 140 140 * OPTEE_SMC_RETURN_ETHREAD_LIMIT Number of Trusted OS threads exceeded, 141 141 * try again later. 
142 - * OPTEE_SMC_RETURN_EBADADDR Bad physcial pointer to struct 142 + * OPTEE_SMC_RETURN_EBADADDR Bad physical pointer to struct 143 143 * optee_msg_arg. 144 144 * OPTEE_SMC_RETURN_EBADCMD Bad/unknown cmd in struct optee_msg_arg 145 145 * OPTEE_SMC_RETURN_IS_RPC() Call suspended by RPC call to normal ··· 214 214 * secure world accepts command buffers located in any parts of non-secure RAM 215 215 */ 216 216 #define OPTEE_SMC_SEC_CAP_DYNAMIC_SHM BIT(2) 217 - 218 - /* Secure world supports Shared Memory with a NULL buffer reference */ 217 + /* Secure world is built with virtualization support */ 218 + #define OPTEE_SMC_SEC_CAP_VIRTUALIZATION BIT(3) 219 + /* Secure world supports Shared Memory with a NULL reference */ 219 220 #define OPTEE_SMC_SEC_CAP_MEMREF_NULL BIT(4) 220 221 221 222 #define OPTEE_SMC_FUNCID_EXCHANGE_CAPABILITIES 9 ··· 246 245 * 247 246 * Normal return register usage: 248 247 * a0 OPTEE_SMC_RETURN_OK 249 - * a1 Upper 32bit of a 64bit Shared memory cookie 250 - * a2 Lower 32bit of a 64bit Shared memory cookie 248 + * a1 Upper 32 bits of a 64-bit Shared memory cookie 249 + * a2 Lower 32 bits of a 64-bit Shared memory cookie 251 250 * a3-7 Preserved 252 251 * 253 252 * Cache empty return register usage: ··· 293 292 #define OPTEE_SMC_FUNCID_ENABLE_SHM_CACHE 11 294 293 #define OPTEE_SMC_ENABLE_SHM_CACHE \ 295 294 OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_ENABLE_SHM_CACHE) 295 + 296 + /* 297 + * Query OP-TEE about number of supported threads 298 + * 299 + * Normal World OS or Hypervisor issues this call to find out how many 300 + * threads OP-TEE supports. That is how many standard calls can be issued 301 + * in parallel before OP-TEE will return OPTEE_SMC_RETURN_ETHREAD_LIMIT. 
302 + * 303 + * Call requests usage: 304 + * a0 SMC Function ID, OPTEE_SMC_GET_THREAD_COUNT 305 + * a1-6 Not used 306 + * a7 Hypervisor Client ID register 307 + * 308 + * Normal return register usage: 309 + * a0 OPTEE_SMC_RETURN_OK 310 + * a1 Number of threads 311 + * a2-7 Preserved 312 + * 313 + * Error return: 314 + * a0 OPTEE_SMC_RETURN_UNKNOWN_FUNCTION Requested call is not implemented 315 + * a1-7 Preserved 316 + */ 317 + #define OPTEE_SMC_FUNCID_GET_THREAD_COUNT 15 318 + #define OPTEE_SMC_GET_THREAD_COUNT \ 319 + OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_GET_THREAD_COUNT) 296 320 297 321 /* 298 322 * Resume from RPC (for example after processing a foreign interrupt) ··· 367 341 * 368 342 * "Return" register usage: 369 343 * a0 SMC Function ID, OPTEE_SMC_CALL_RETURN_FROM_RPC. 370 - * a1 Upper 32bits of 64bit physical pointer to allocated 344 + * a1 Upper 32 bits of 64-bit physical pointer to allocated 371 345 * memory, (a1 == 0 && a2 == 0) if size was 0 or if memory can't 372 346 * be allocated. 
373 - * a2 Lower 32bits of 64bit physical pointer to allocated 347 + * a2 Lower 32 bits of 64-bit physical pointer to allocated 374 348 * memory, (a1 == 0 && a2 == 0) if size was 0 or if memory can't 375 349 * be allocated 376 350 * a3 Preserved 377 - * a4 Upper 32bits of 64bit Shared memory cookie used when freeing 351 + * a4 Upper 32 bits of 64-bit Shared memory cookie used when freeing 378 352 * the memory or doing an RPC 379 - * a5 Lower 32bits of 64bit Shared memory cookie used when freeing 353 + * a5 Lower 32 bits of 64-bit Shared memory cookie used when freeing 380 354 * the memory or doing an RPC 381 355 * a6-7 Preserved 382 356 */ ··· 389 363 * 390 364 * "Call" register usage: 391 365 * a0 This value, OPTEE_SMC_RETURN_RPC_FREE 392 - * a1 Upper 32bits of 64bit shared memory cookie belonging to this 366 + * a1 Upper 32 bits of 64-bit shared memory cookie belonging to this 393 367 * argument memory 394 - * a2 Lower 32bits of 64bit shared memory cookie belonging to this 368 + * a2 Lower 32 bits of 64-bit shared memory cookie belonging to this 395 369 * argument memory 396 370 * a3-7 Resume information, must be preserved 397 371 * ··· 405 379 OPTEE_SMC_RPC_VAL(OPTEE_SMC_RPC_FUNC_FREE) 406 380 407 381 /* 408 - * Deliver foreign interrupt to normal world. 382 + * Deliver a foreign interrupt in normal world. 409 383 * 410 384 * "Call" register usage: 411 385 * a0 OPTEE_SMC_RETURN_RPC_FOREIGN_INTR ··· 415 389 * a0 SMC Function ID, OPTEE_SMC_CALL_RETURN_FROM_RPC. 
416 390 * a1-7 Preserved 417 391 */ 418 - #define OPTEE_SMC_RPC_FUNC_FOREIGN_INTR 4 392 + #define OPTEE_SMC_RPC_FUNC_FOREIGN_INTR 4 419 393 #define OPTEE_SMC_RETURN_RPC_FOREIGN_INTR \ 420 394 OPTEE_SMC_RPC_VAL(OPTEE_SMC_RPC_FUNC_FOREIGN_INTR) 421 395 ··· 431 405 * 432 406 * "Call" register usage: 433 407 * a0 OPTEE_SMC_RETURN_RPC_CMD 434 - * a1 Upper 32bit of a 64bit Shared memory cookie holding a 408 + * a1 Upper 32 bits of a 64-bit Shared memory cookie holding a 435 409 * struct optee_msg_arg, must be preserved, only the data should 436 410 * be updated 437 - * a2 Lower 32bit of a 64bit Shared memory cookie holding a 411 + * a2 Lower 32 bits of a 64-bit Shared memory cookie holding a 438 412 * struct optee_msg_arg, must be preserved, only the data should 439 413 * be updated 440 414 * a3-7 Resume information, must be preserved
+36 -34
drivers/tee/optee/rpc.c
···
12 12	#include <linux/tee_drv.h>
13 13	#include "optee_private.h"
14 14	#include "optee_smc.h"
15 +	#include "optee_rpc_cmd.h"
15 16	
16 17	struct wq_entry {
17 18		struct list_head link;
···
55 54	static void handle_rpc_func_cmd_i2c_transfer(struct tee_context *ctx,
56 55						     struct optee_msg_arg *arg)
57 56	{
58 -		struct i2c_client client = { 0 };
59 57		struct tee_param *params;
58 +		struct i2c_adapter *adapter;
59 +		struct i2c_msg msg = { };
60 60		size_t i;
61 61		int ret = -EOPNOTSUPP;
62 62		u8 attr[] = {
···
87 85			goto bad;
88 86		}
89 87	
90 -		adapter = i2c_get_adapter(params[0].u.value.b);
91 -		if (!client.adapter)
88 +		adapter = i2c_get_adapter(params[0].u.value.b);
89 +		if (!adapter)
92 90			goto bad;
93 91	
94 -		if (params[1].u.value.a & OPTEE_MSG_RPC_CMD_I2C_FLAGS_TEN_BIT) {
95 -			if (!i2c_check_functionality(client.adapter,
92 +		if (params[1].u.value.a & OPTEE_RPC_I2C_FLAGS_TEN_BIT) {
93 +			if (!i2c_check_functionality(adapter,
96 94						     I2C_FUNC_10BIT_ADDR)) {
97 -				i2c_put_adapter(client.adapter);
95 +				i2c_put_adapter(adapter);
98 96				goto bad;
99 97			}
100 98	
101 -			client.flags = I2C_CLIENT_TEN;
99 +			msg.flags = I2C_M_TEN;
102 100		}
103 101	
104 -		client.addr = params[0].u.value.c;
105 -		snprintf(client.name, I2C_NAME_SIZE, "i2c%d", client.adapter->nr);
102 +		msg.addr = params[0].u.value.c;
103 +		msg.buf = params[2].u.memref.shm->kaddr;
104 +		msg.len = params[2].u.memref.size;
106 105	
107 106		switch (params[0].u.value.a) {
108 -		case OPTEE_MSG_RPC_CMD_I2C_TRANSFER_RD:
109 -			ret = i2c_master_recv(&client, params[2].u.memref.shm->kaddr,
110 -					      params[2].u.memref.size);
107 +		case OPTEE_RPC_I2C_TRANSFER_RD:
108 +			msg.flags |= I2C_M_RD;
111 109			break;
112 -		case OPTEE_MSG_RPC_CMD_I2C_TRANSFER_WR:
113 -			ret = i2c_master_send(&client, params[2].u.memref.shm->kaddr,
114 -					      params[2].u.memref.size);
110 +		case OPTEE_RPC_I2C_TRANSFER_WR:
115 111			break;
116 112		default:
117 -			i2c_put_adapter(client.adapter);
113 +			i2c_put_adapter(adapter);
118 114			goto bad;
119 115		}
116 +	
117 +		ret = i2c_transfer(adapter, &msg, 1);
120 118	
121 119		if (ret < 0) {
122 120			arg->ret = TEEC_ERROR_COMMUNICATION;
123 121		} else {
124 -			params[3].u.value.a = ret;
122 +			params[3].u.value.a = msg.len;
125 123			if (optee_to_msg_param(arg->params, arg->num_params, params))
126 124				arg->ret = TEEC_ERROR_BAD_PARAMETERS;
127 125			else
128 126				arg->ret = TEEC_SUCCESS;
129 127		}
130 128	
131 -		i2c_put_adapter(client.adapter);
129 +		i2c_put_adapter(adapter);
132 130		kfree(params);
133 131		return;
134 132	bad:
···
196 194		goto bad;
197 195	
198 196		switch (arg->params[0].u.value.a) {
199 -		case OPTEE_MSG_RPC_WAIT_QUEUE_SLEEP:
197 +		case OPTEE_RPC_WAIT_QUEUE_SLEEP:
200 198			wq_sleep(&optee->wait_queue, arg->params[0].u.value.b);
201 199			break;
202 -		case OPTEE_MSG_RPC_WAIT_QUEUE_WAKEUP:
200 +		case OPTEE_RPC_WAIT_QUEUE_WAKEUP:
203 201			wq_wakeup(&optee->wait_queue, arg->params[0].u.value.b);
204 202			break;
205 203		default:
···
269 267		struct tee_shm *shm;
270 268	
271 269		param.attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT;
272 -		param.u.value.a = OPTEE_MSG_RPC_SHM_TYPE_APPL;
270 +		param.u.value.a = OPTEE_RPC_SHM_TYPE_APPL;
273 271		param.u.value.b = sz;
274 272		param.u.value.c = 0;
275 273	
276 -		ret = optee_supp_thrd_req(ctx, OPTEE_MSG_RPC_CMD_SHM_ALLOC, 1, &param);
274 +		ret = optee_supp_thrd_req(ctx, OPTEE_RPC_CMD_SHM_ALLOC, 1, &param);
277 275		if (ret)
278 276			return ERR_PTR(-ENOMEM);
279 277	
···
310 308	
311 309		sz = arg->params[0].u.value.b;
312 310		switch (arg->params[0].u.value.a) {
313 -		case OPTEE_MSG_RPC_SHM_TYPE_APPL:
311 +		case OPTEE_RPC_SHM_TYPE_APPL:
314 312			shm = cmd_alloc_suppl(ctx, sz);
315 313			break;
316 -		case OPTEE_MSG_RPC_SHM_TYPE_KERNEL:
314 +		case OPTEE_RPC_SHM_TYPE_KERNEL:
317 315			shm = tee_shm_alloc(ctx, sz, TEE_SHM_MAPPED);
318 316			break;
319 317		default:
···
385 383		struct tee_param param;
386 384	
387 385		param.attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT;
388 -		param.u.value.a = OPTEE_MSG_RPC_SHM_TYPE_APPL;
386 +		param.u.value.a = OPTEE_RPC_SHM_TYPE_APPL;
389 387		param.u.value.b = tee_shm_get_id(shm);
390 388		param.u.value.c = 0;
391 389	
···
402 400		 */
403 401		tee_shm_put(shm);
404 402	
405 -		optee_supp_thrd_req(ctx, OPTEE_MSG_RPC_CMD_SHM_FREE, 1, &param);
403 +		optee_supp_thrd_req(ctx, OPTEE_RPC_CMD_SHM_FREE, 1, &param);
406 404	}
407 405	
408 406	static void handle_rpc_func_cmd_shm_free(struct tee_context *ctx,
···
420 418	
421 419		shm = (struct tee_shm *)(unsigned long)arg->params[0].u.value.b;
422 420		switch (arg->params[0].u.value.a) {
423 -		case OPTEE_MSG_RPC_SHM_TYPE_APPL:
421 +		case OPTEE_RPC_SHM_TYPE_APPL:
424 422			cmd_free_suppl(ctx, shm);
425 423			break;
426 -		case OPTEE_MSG_RPC_SHM_TYPE_KERNEL:
424 +		case OPTEE_RPC_SHM_TYPE_KERNEL:
427 425			tee_shm_free(shm);
428 426			break;
429 427		default:
···
460 458		}
461 459	
462 460		switch (arg->cmd) {
463 -		case OPTEE_MSG_RPC_CMD_GET_TIME:
461 +		case OPTEE_RPC_CMD_GET_TIME:
464 462			handle_rpc_func_cmd_get_time(arg);
465 463			break;
466 -		case OPTEE_MSG_RPC_CMD_WAIT_QUEUE:
464 +		case OPTEE_RPC_CMD_WAIT_QUEUE:
467 465			handle_rpc_func_cmd_wq(optee, arg);
468 466			break;
469 -		case OPTEE_MSG_RPC_CMD_SUSPEND:
467 +		case OPTEE_RPC_CMD_SUSPEND:
470 468			handle_rpc_func_cmd_wait(arg);
471 469			break;
472 -		case OPTEE_MSG_RPC_CMD_SHM_ALLOC:
470 +		case OPTEE_RPC_CMD_SHM_ALLOC:
473 471			free_pages_list(call_ctx);
474 472			handle_rpc_func_cmd_shm_alloc(ctx, arg, call_ctx);
475 473			break;
476 -		case OPTEE_MSG_RPC_CMD_SHM_FREE:
474 +		case OPTEE_RPC_CMD_SHM_FREE:
477 475			handle_rpc_func_cmd_shm_free(ctx, arg);
478 476			break;
479 -		case OPTEE_MSG_RPC_CMD_I2C_TRANSFER:
477 +		case OPTEE_RPC_CMD_I2C_TRANSFER:
480 478			handle_rpc_func_cmd_i2c_transfer(ctx, arg);
481 479			break;
482 480		default:
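The rpc.c hunk above ("optee: simplify i2c access") drops the temporary on-stack `struct i2c_client` plus the `i2c_master_recv()`/`i2c_master_send()` pair in favor of a single `struct i2c_msg` handed to `i2c_transfer()`: direction and 10-bit addressing are now encoded in `msg.flags` (`I2C_M_RD`, `I2C_M_TEN`). A minimal userspace sketch of that flag logic, using stand-in definitions rather than the kernel headers (the flag values match `<uapi/linux/i2c.h>`; `build_msg` is a hypothetical helper, not kernel code):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Stand-in flag bits; values match <uapi/linux/i2c.h>. */
#define I2C_M_RD	0x0001
#define I2C_M_TEN	0x0010

/* Simplified stand-in for the kernel's struct i2c_msg. */
struct i2c_msg {
	uint16_t addr;	/* slave address */
	uint16_t flags;	/* direction and addressing mode */
	uint16_t len;	/* buffer length */
	uint8_t *buf;	/* data buffer */
};

/*
 * Build one message the way the refactored handler does: reads and
 * writes share the same setup and differ only in the I2C_M_RD bit,
 * while 10-bit addressing is just the I2C_M_TEN bit.
 */
struct i2c_msg build_msg(uint16_t addr, uint8_t *buf, uint16_t len,
			 int is_read, int ten_bit)
{
	struct i2c_msg msg = { 0 };

	msg.addr = addr;
	msg.buf = buf;
	msg.len = len;
	if (ten_bit)
		msg.flags |= I2C_M_TEN;
	if (is_read)
		msg.flags |= I2C_M_RD;
	return msg;
}
```

This is why the switch statement in the new code shrinks to setting or clearing one bit: the transfer itself is a single `i2c_transfer(adapter, &msg, 1)` call for both directions.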
+17
include/dt-bindings/power/mt8167-power.h
···
1 +	/* SPDX-License-Identifier: GPL-2.0
2 +	 *
3 +	 * Copyright (c) 2020 MediaTek Inc.
4 +	 */
5 +	
6 +	#ifndef _DT_BINDINGS_POWER_MT8167_POWER_H
7 +	#define _DT_BINDINGS_POWER_MT8167_POWER_H
8 +	
9 +	#define MT8167_POWER_DOMAIN_MM		0
10 +	#define MT8167_POWER_DOMAIN_VDEC	1
11 +	#define MT8167_POWER_DOMAIN_ISP	2
12 +	#define MT8167_POWER_DOMAIN_CONN	3
13 +	#define MT8167_POWER_DOMAIN_MFG_ASYNC	4
14 +	#define MT8167_POWER_DOMAIN_MFG_2D	5
15 +	#define MT8167_POWER_DOMAIN_MFG	6
16 +	
17 +	#endif /* _DT_BINDINGS_POWER_MT8167_POWER_H */
+9
include/dt-bindings/power/qcom-rpmpd.h
···
94 94	#define MSM8976_VDDMX_AO	4
95 95	#define MSM8976_VDDMX_VFL	5
96 96	
97 +	/* MSM8994 Power Domain Indexes */
98 +	#define MSM8994_VDDCX		0
99 +	#define MSM8994_VDDCX_AO	1
100 +	#define MSM8994_VDDCX_VFC	2
101 +	#define MSM8994_VDDMX		3
102 +	#define MSM8994_VDDMX_AO	4
103 +	#define MSM8994_VDDGFX		5
104 +	#define MSM8994_VDDGFX_VFC	6
105 +	
97 106	/* MSM8996 Power Domain Indexes */
98 107	#define MSM8996_VDDCX		0
99 108	#define MSM8996_VDDCX_AO	1
+11
include/dt-bindings/soc/bcm-pmb.h
···
1 +	/* SPDX-License-Identifier: GPL-2.0-or-later OR MIT */
2 +	
3 +	#ifndef __DT_BINDINGS_SOC_BCM_PMB_H
4 +	#define __DT_BINDINGS_SOC_BCM_PMB_H
5 +	
6 +	#define BCM_PMB_PCIE0		0x01
7 +	#define BCM_PMB_PCIE1		0x02
8 +	#define BCM_PMB_PCIE2		0x03
9 +	#define BCM_PMB_HOST_USB	0x04
10 +	
11 +	#endif
+8
include/linux/clk/tegra.h
···
136 136	extern void tegra210_clk_emc_update_setting(u32 emc_src_value);
137 137	
138 138	struct clk;
139 +	struct tegra_emc;
139 140	
140 141	typedef long (tegra20_clk_emc_round_cb)(unsigned long rate,
141 142						unsigned long min_rate,
···
146 145	void tegra20_clk_set_emc_round_callback(tegra20_clk_emc_round_cb *round_cb,
147 146						void *cb_arg);
148 147	int tegra20_clk_prepare_emc_mc_same_freq(struct clk *emc_clk, bool same);
148 +	
149 +	typedef int (tegra124_emc_prepare_timing_change_cb)(struct tegra_emc *emc,
150 +							    unsigned long rate);
151 +	typedef void (tegra124_emc_complete_timing_change_cb)(struct tegra_emc *emc,
152 +							      unsigned long rate);
153 +	void tegra124_clk_set_emc_callbacks(tegra124_emc_prepare_timing_change_cb *prep_cb,
154 +					    tegra124_emc_complete_timing_change_cb *complete_cb);
149 155	
150 156	struct tegra210_clk_emc_config {
151 157		unsigned long rate;
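The clk/tegra.h hunk above, together with the removal of include/soc/tegra/emc.h further down, replaces direct `tegra_emc_*` externs with registered callbacks, so the Tegra124 EMC driver no longer needs to be linked in ("Allow more drivers to be loadable modules"). A hedged userspace sketch of that callback-registration pattern with stand-in types (the real `tegra124_clk_set_emc_callbacks()` takes only the two callbacks; passing the instance pointer through registration here is a simplification for self-containment):

```c
#include <assert.h>
#include <stddef.h>

struct emc;	/* opaque handle, like the kernel's struct tegra_emc */

/* Callback types mirroring the tegra124_* typedefs added in the diff. */
typedef int (emc_prepare_cb)(struct emc *emc, unsigned long rate);
typedef void (emc_complete_cb)(struct emc *emc, unsigned long rate);

/* "Clock driver" side: remembers whatever the memory driver registered. */
static emc_prepare_cb *prepare_cb;
static emc_complete_cb *complete_cb;
static struct emc *emc_instance;

void clk_set_emc_callbacks(emc_prepare_cb *prep, emc_complete_cb *done,
			   struct emc *emc)
{
	prepare_cb = prep;
	complete_cb = done;
	emc_instance = emc;
}

/* A rate change only proceeds once a provider has registered. */
int clk_change_rate(unsigned long rate)
{
	int err;

	if (!prepare_cb || !complete_cb)
		return -1;	/* no EMC driver loaded (it may be a module) */

	err = prepare_cb(emc_instance, rate);
	if (err)
		return err;
	/* ...reprogram the clock here... */
	complete_cb(emc_instance, rate);
	return 0;
}

/* "Memory driver" side. */
struct emc {
	int in_transition;
	unsigned long rate;
};

int emc_prepare(struct emc *emc, unsigned long rate)
{
	emc->in_transition = 1;
	emc->rate = rate;
	return 0;
}

void emc_complete(struct emc *emc, unsigned long rate)
{
	emc->in_transition = 0;
	emc->rate = rate;
}
```

The design point: the compile-time dependency now runs one way only (EMC driver depends on the clk driver's registration function), which is what lets the EMC side become a module.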
+1 -1
include/linux/mfd/axp20x.h
···
696 696	 *
697 697	 * This tells the axp20x core to remove the associated mfd devices
698 698	 */
699 -	int axp20x_device_remove(struct axp20x_dev *axp20x);
699 +	void axp20x_device_remove(struct axp20x_dev *axp20x);
700 700	
701 701	#endif /* __LINUX_MFD_AXP20X_H */
+19
include/linux/reset.h
···
363 363	}
364 364	
365 365	/**
366 +	 * devm_reset_control_get_optional_exclusive_released - resource managed
367 +	 *                                                      reset_control_get_optional_exclusive_released()
368 +	 * @dev: device to be reset by the controller
369 +	 * @id: reset line name
370 +	 *
371 +	 * Managed-and-optional variant of reset_control_get_exclusive_released(). For
372 +	 * reset controllers returned from this function, reset_control_put() is called
373 +	 * automatically on driver detach.
374 +	 *
375 +	 * See reset_control_get_exclusive_released() for more information.
376 +	 */
377 +	static inline struct reset_control *
378 +	__must_check devm_reset_control_get_optional_exclusive_released(struct device *dev,
379 +									const char *id)
380 +	{
381 +		return __devm_reset_control_get(dev, id, 0, false, true, false);
382 +	}
383 +	
384 +	/**
366 385	 * devm_reset_control_get_shared - resource managed reset_control_get_shared()
367 386	 * @dev: device to be reset by the controller
368 387	 * @id: reset line name
+16
include/linux/soc/brcmstb/brcmstb.h
···
2 2	#ifndef __BRCMSTB_SOC_H
3 3	#define __BRCMSTB_SOC_H
4 4	
5 +	#include <linux/kconfig.h>
6 +	
5 7	static inline u32 BRCM_ID(u32 reg)
6 8	{
7 9		return reg >> 28 ? reg >> 16 : reg >> 8;
···
14 12		return reg & 0xff;
15 13	}
16 14	
15 +	#if IS_ENABLED(CONFIG_SOC_BRCMSTB)
16 +	
17 17	/*
18 18	 * Helper functions for getting family or product id from the
19 19	 * SoC driver.
20 20	 */
21 21	u32 brcmstb_get_family_id(void);
22 22	u32 brcmstb_get_product_id(void);
23 +	
24 +	#else
25 +	static inline u32 brcmstb_get_family_id(void)
26 +	{
27 +		return 0;
28 +	}
29 +	
30 +	static inline u32 brcmstb_get_product_id(void)
31 +	{
32 +		return 0;
33 +	}
34 +	#endif
23 35	
24 36	#endif /* __BRCMSTB_SOC_H */
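The brcmstb.h hunk above is the standard kernel pattern for optional subsystems: real declarations when the config symbol is enabled, `static inline` stubs returning a default otherwise, so callers never need `#ifdef` guards. A small userspace sketch of the same pattern; the config macro and function names here are hypothetical stand-ins (a kernel header would test `IS_ENABLED(CONFIG_SOC_BRCMSTB)` from `<linux/kconfig.h>`):

```c
#include <stdint.h>

/* Hypothetical config switch standing in for a Kconfig symbol. */
#define CONFIG_SOC_EXAMPLE 0

#if CONFIG_SOC_EXAMPLE
/* Real declarations, provided by the driver when it is built in. */
uint32_t soc_get_family_id(void);
uint32_t soc_get_product_id(void);
#else
/* Inline stubs keep every caller compiling when the driver is absent. */
static inline uint32_t soc_get_family_id(void)
{
	return 0;
}

static inline uint32_t soc_get_product_id(void)
{
	return 0;
}
#endif

/* Callers branch on the returned value instead of wrapping calls in #ifdef. */
int soc_ids_available(void)
{
	return soc_get_family_id() != 0 && soc_get_product_id() != 0;
}
```

With the stubs in place, code like `soc_ids_available()` compiles and behaves sensibly in both configurations, which is exactly what the diff enables for `brcmstb_get_family_id()`/`brcmstb_get_product_id()`.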
+8
include/linux/soc/mediatek/infracfg.h
···
123 123	#define MT8173_TOP_AXI_PROT_EN_MFG_M1		BIT(22)
124 124	#define MT8173_TOP_AXI_PROT_EN_MFG_SNOOP_OUT	BIT(23)
125 125	
126 +	#define MT8167_TOP_AXI_PROT_EN_MM_EMI		BIT(1)
127 +	#define MT8167_TOP_AXI_PROT_EN_MCU_MFG		BIT(2)
128 +	#define MT8167_TOP_AXI_PROT_EN_CONN_EMI		BIT(4)
129 +	#define MT8167_TOP_AXI_PROT_EN_MFG_EMI		BIT(5)
130 +	#define MT8167_TOP_AXI_PROT_EN_CONN_MCU		BIT(8)
131 +	#define MT8167_TOP_AXI_PROT_EN_MCU_CONN		BIT(9)
132 +	#define MT8167_TOP_AXI_PROT_EN_MCU_MM		BIT(11)
133 +	
126 134	#define MT2701_TOP_AXI_PROT_EN_MM_M0		BIT(1)
127 135	#define MT2701_TOP_AXI_PROT_EN_CONN_M		BIT(2)
128 136	#define MT2701_TOP_AXI_PROT_EN_CONN_S		BIT(8)
-12
include/linux/soc/mediatek/mtk-cmdq.h
···
280 280	int cmdq_pkt_flush_async(struct cmdq_pkt *pkt, cmdq_async_flush_cb cb,
281 281				 void *data);
282 282	
283 -	/**
284 -	 * cmdq_pkt_flush() - trigger CMDQ to execute the CMDQ packet
285 -	 * @pkt: the CMDQ packet
286 -	 *
287 -	 * Return: 0 for success; else the error code is returned
288 -	 *
289 -	 * Trigger CMDQ to execute the CMDQ packet. Note that this is a
290 -	 * synchronous flush function. When the function returned, the recorded
291 -	 * commands have been done.
292 -	 */
293 -	int cmdq_pkt_flush(struct cmdq_pkt *pkt);
294 -	
295 283	#endif /* __MTK_CMDQ_H__ */
+3
include/linux/soc/qcom/llcc-qcom.h
···
29 29	#define LLCC_AUDHW	22
30 30	#define LLCC_NPU	23
31 31	#define LLCC_WLHW	24
32 +	#define LLCC_CVP	28
32 33	#define LLCC_MODPE	29
33 34	#define LLCC_APTCM	30
34 35	#define LLCC_WRCACHE	31
···
80 79	 * @bitmap: Bit map to track the active slice ids
81 80	 * @offsets: Pointer to the bank offsets array
82 81	 * @ecc_irq: interrupt for llcc cache error detection and reporting
82 +	 * @major_version: Indicates the LLCC major version
83 83	 */
84 84	struct llcc_drv_data {
85 85		struct regmap *regmap;
···
93 91		unsigned long *bitmap;
94 92		u32 *offsets;
95 93		int ecc_irq;
94 +		u32 major_version;
96 95	};
97 96	
98 97	#if IS_ENABLED(CONFIG_QCOM_LLCC)
+1 -1
include/linux/sunxi-rsb.h
···
59 59	struct sunxi_rsb_driver {
60 60		struct device_driver driver;
61 61		int (*probe)(struct sunxi_rsb_device *rdev);
62 -		int (*remove)(struct sunxi_rsb_device *rdev);
62 +		void (*remove)(struct sunxi_rsb_device *rdev);
63 63	};
64 64	
65 65	static inline struct sunxi_rsb_driver *to_sunxi_rsb_driver(struct device_driver *d)
+1 -1
include/linux/tee_drv.h
···
88 88	 * @close_session: close a session
89 89	 * @invoke_func: invoke a trusted function
90 90	 * @cancel_req: request cancel of an ongoing invoke or open
91 -	 * @supp_revc: called for supplicant to get a command
91 +	 * @supp_recv: called for supplicant to get a command
92 92	 * @supp_send: called for supplicant to send a response
93 93	 * @shm_register: register shared memory buffer in TEE
94 94	 * @shm_unregister: unregister shared memory buffer in TEE
-12
include/soc/brcmstb/common.h
···
1 -	/* SPDX-License-Identifier: GPL-2.0-only */
2 -	/*
3 -	 * Copyright © 2014 NVIDIA Corporation
4 -	 * Copyright © 2015 Broadcom Corporation
5 -	 */
6 -	
7 -	#ifndef __SOC_BRCMSTB_COMMON_H__
8 -	#define __SOC_BRCMSTB_COMMON_H__
9 -	
10 -	bool soc_is_brcmstb(void);
11 -	
12 -	#endif /* __SOC_BRCMSTB_COMMON_H__ */
+1 -1
include/soc/mediatek/smi.h
···
9 9	#include <linux/bitops.h>
10 10	#include <linux/device.h>
11 11	
12 -	#ifdef CONFIG_MTK_SMI
12 +	#if IS_ENABLED(CONFIG_MTK_SMI)
13 13	
14 14	#define MTK_LARB_NR_MAX		16
15 15	
+8 -1
include/soc/qcom/tcs.h
···
30 30	 *
31 31	 * @addr: the address of the resource slv_id:18:16 | offset:0:15
32 32	 * @data: the resource state request
33 -	 * @wait: wait for this request to be complete before sending the next
33 +	 * @wait: ensure that this command is complete before returning.
34 +	 *        Setting "wait" here only makes sense during rpmh_write_batch() for
35 +	 *        active-only transfers, this is because:
36 +	 *        rpmh_write() - Always waits.
37 +	 *                       (DEFINE_RPMH_MSG_ONSTACK will set .wait_for_compl)
38 +	 *        rpmh_write_async() - Never waits.
39 +	 *                       (There's no request completion callback)
34 40	 */
35 41	struct tcs_cmd {
36 42		u32 addr;
···
49 43	 *
50 44	 * @state: state for the request.
51 45	 * @wait_for_compl: wait until we get a response from the h/w accelerator
46 +	 *                  (same as setting cmd->wait for all commands in the request)
52 47	 * @num_cmds: the number of @cmds in this request
53 48	 * @cmds: an array of tcs_cmds
54 49	 */
-16
include/soc/tegra/emc.h
···
1 -	/* SPDX-License-Identifier: GPL-2.0-only */
2 -	/*
3 -	 * Copyright (c) 2014 NVIDIA Corporation. All rights reserved.
4 -	 */
5 -	
6 -	#ifndef __SOC_TEGRA_EMC_H__
7 -	#define __SOC_TEGRA_EMC_H__
8 -	
9 -	struct tegra_emc;
10 -	
11 -	int tegra_emc_prepare_timing_change(struct tegra_emc *emc,
12 -					    unsigned long rate);
13 -	void tegra_emc_complete_timing_change(struct tegra_emc *emc,
14 -					      unsigned long rate);
15 -	
16 -	#endif /* __SOC_TEGRA_EMC_H__ */
+1 -1
include/uapi/linux/tee.h
···
355 355	};
356 356	
357 357	/**
358 -	 * TEE_IOC_SUPPL_SEND - Receive a request for a supplicant function
358 +	 * TEE_IOC_SUPPL_SEND - Send a response to a received request
359 359	 *
360 360	 * Takes a struct tee_ioctl_buf_data which contains a struct
361 361	 * tee_iocl_supp_send_arg followed by any array of struct tee_param