Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'arm-drivers-5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc

Pull ARM driver updates from Arnd Bergmann:
"There are minor updates to SoC specific drivers for chips by Rockchip,
Samsung, NVIDIA, TI, NXP, i.MX, Qualcomm, and Broadcom.

Noteworthy driver changes include:

- Several conversions of DT bindings to yaml format.

- Renesas adds driver support for R-Car V4H, RZ/V2M and RZ/G2UL SoCs.

- Qualcomm adds a bus driver for the SSC (Snapdragon Sensor Core),
and support for more chips in the RPMh power domain and soc-id
drivers.

- NXP has a new driver for the HDMI blk-ctrl on i.MX8MP.

- Apple M1 gains support for the on-chip NVMe controller, making it
possible to finally use the internal disks. This also includes SoC
drivers for their RTKit IPC and for the SART DMA address filter.

For other subsystems that merge their drivers through the SoC tree, we
have

- Firmware drivers for the ARM firmware stack including TEE, OP-TEE,
SCMI and FF-A get a number of smaller updates and cleanups. OP-TEE
now has a cache for firmware argument structures as an
optimization, and SCMI now supports the 3.1 version of the
specification.

- Reset controller updates to Amlogic, ASpeed, Renesas and ACPI
drivers

- Memory controller updates for Tegra, and a few updates for other
platforms"

* tag 'arm-drivers-5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (159 commits)
memory: tegra: Add MC error logging on Tegra186 onward
memory: tegra: Add memory controller channels support
memory: tegra: Add APE memory clients for Tegra234
memory: tegra: Add Tegra234 support
nvme-apple: fix sparse endianess warnings
soc/tegra: pmc: Document core domain fields
soc: qcom: pdr: use static for servreg_* variables
soc: imx: fix semicolon.cocci warnings
soc: renesas: R-Car V3U is R-Car Gen4
soc: imx: add i.MX8MP HDMI blk-ctrl
soc: imx: imx8m-blk-ctrl: Add i.MX8MP media blk-ctrl
soc: imx: add i.MX8MP HSIO blk-ctrl
soc: imx: imx8m-blk-ctrl: set power device name
soc: qcom: llcc: Add sc8180x and sc8280xp configurations
dt-bindings: arm: msm: Add sc8180x and sc8280xp LLCC compatibles
soc/tegra: pmc: Select REGMAP
dt-bindings: reset: st,sti-powerdown: Convert to yaml
dt-bindings: reset: st,sti-picophyreset: Convert to yaml
dt-bindings: reset: socfpga: Convert to yaml
dt-bindings: reset: snps,axs10x-reset: Convert to yaml
...

+10351 -2264
+2
Documentation/devicetree/bindings/arm/msm/qcom,llcc.yaml
···
         enum:
           - qcom,sc7180-llcc
           - qcom,sc7280-llcc
+          - qcom,sc8180x-llcc
+          - qcom,sc8280xp-llcc
           - qcom,sdm845-llcc
           - qcom,sm6350-llcc
           - qcom,sm8150-llcc
+20
Documentation/devicetree/bindings/arm/qcom.yaml
···
         msm8994
         msm8996
         sa8155p
+        sa8540p
         sc7180
         sc7280
+        sc8180x
+        sc8280xp
         sdm630
         sdm632
         sdm660
···
 
   - items:
       - enum:
+          - lenovo,flex-5g
+          - microsoft,surface-prox
+          - qcom,sc8180x-primus
+      - const: qcom,sc8180x
+
+  - items:
+      - enum:
+          - qcom,sc8280xp-qrd
+      - const: qcom,sc8280xp
+
+  - items:
+      - enum:
           - fairphone,fp3
       - const: qcom,sdm632
 
···
       - enum:
           - qcom,sa8155p-adp
       - const: qcom,sa8155p
+
+  - items:
+      - enum:
+          - qcom,sa8295p-adp
+      - const: qcom,sa8540p
 
   - items:
       - enum:
+147
Documentation/devicetree/bindings/bus/qcom,ssc-block-bus.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/bus/qcom,ssc-block-bus.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: The AHB Bus Providing a Global View of the SSC Block on (some) qcom SoCs
+
+maintainers:
+  - Michael Srba <Michael.Srba@seznam.cz>
+
+description: |
+  This binding describes the dependencies (clocks, resets, power domains) which
+  need to be turned on in a sequence before communication over the AHB bus
+  becomes possible.
+
+  Additionally, the reg property is used to pass to the driver the location of
+  two sadly undocumented registers which need to be poked as part of the sequence.
+
+  The SSC (Snapdragon Sensor Core) block contains a gpio controller, i2c/spi/uart
+  controllers, a hexagon core, and a clock controller which provides clocks for
+  the above.
+
+properties:
+  compatible:
+    items:
+      - const: qcom,msm8998-ssc-block-bus
+      - const: qcom,ssc-block-bus
+
+  reg:
+    description: |
+      Shall contain the addresses of the SSCAON_CONFIG0 and SSCAON_CONFIG1
+      registers
+    minItems: 2
+    maxItems: 2
+
+  reg-names:
+    items:
+      - const: mpm_sscaon_config0
+      - const: mpm_sscaon_config1
+
+  '#address-cells':
+    enum: [ 1, 2 ]
+
+  '#size-cells':
+    enum: [ 1, 2 ]
+
+  ranges: true
+
+  clocks:
+    minItems: 6
+    maxItems: 6
+
+  clock-names:
+    items:
+      - const: xo
+      - const: aggre2
+      - const: gcc_im_sleep
+      - const: aggre2_north
+      - const: ssc_xo
+      - const: ssc_ahbs
+
+  power-domains:
+    description: Power domain phandles for the ssc_cx and ssc_mx power domains
+    minItems: 2
+    maxItems: 2
+
+  power-domain-names:
+    items:
+      - const: ssc_cx
+      - const: ssc_mx
+
+  resets:
+    description: |
+      Reset phandles for the ssc_reset and ssc_bcr resets (note: ssc_bcr is the
+      branch control register associated with the ssc_xo and ssc_ahbs clocks)
+    minItems: 2
+    maxItems: 2
+
+  reset-names:
+    items:
+      - const: ssc_reset
+      - const: ssc_bcr
+
+  qcom,halt-regs:
+    $ref: /schemas/types.yaml#/definitions/phandle-array
+    description: describes how to locate the ssc AXI halt register
+    items:
+      - items:
+          - description: Phandle reference to a syscon representing TCSR
+          - description: offset for the ssc AXI halt register
+
+required:
+  - compatible
+  - reg
+  - reg-names
+  - '#address-cells'
+  - '#size-cells'
+  - ranges
+  - clocks
+  - clock-names
+  - power-domains
+  - power-domain-names
+  - resets
+  - reset-names
+  - qcom,halt-regs
+
+additionalProperties:
+  type: object
+
+examples:
+  - |
+    #include <dt-bindings/clock/qcom,gcc-msm8998.h>
+    #include <dt-bindings/clock/qcom,rpmcc.h>
+    #include <dt-bindings/power/qcom-rpmpd.h>
+
+    soc {
+        #address-cells = <1>;
+        #size-cells = <1>;
+
+        // devices under this node are physically located in the SSC block, connected to an ssc-internal bus;
+        ssc_ahb_slave: bus@10ac008 {
+            #address-cells = <1>;
+            #size-cells = <1>;
+            ranges;
+
+            compatible = "qcom,msm8998-ssc-block-bus", "qcom,ssc-block-bus";
+            reg = <0x10ac008 0x4>, <0x10ac010 0x4>;
+            reg-names = "mpm_sscaon_config0", "mpm_sscaon_config1";
+
+            clocks = <&xo>,
+                     <&rpmcc RPM_SMD_AGGR2_NOC_CLK>,
+                     <&gcc GCC_IM_SLEEP>,
+                     <&gcc AGGRE2_SNOC_NORTH_AXI>,
+                     <&gcc SSC_XO>,
+                     <&gcc SSC_CNOC_AHBS_CLK>;
+            clock-names = "xo", "aggre2", "gcc_im_sleep", "aggre2_north", "ssc_xo", "ssc_ahbs";
+
+            resets = <&gcc GCC_SSC_RESET>, <&gcc GCC_SSC_BCR>;
+            reset-names = "ssc_reset", "ssc_bcr";
+
+            power-domains = <&rpmpd MSM8998_SSCCX>, <&rpmpd MSM8998_SSCMX>;
+            power-domain-names = "ssc_cx", "ssc_mx";
+
+            qcom,halt-regs = <&tcsr_mutex_regs 0x26000>;
+        };
+    };
+2 -1
Documentation/devicetree/bindings/firmware/qcom,scm.txt
···
     * "qcom,scm-msm8953"
     * "qcom,scm-msm8960"
     * "qcom,scm-msm8974"
+    * "qcom,scm-msm8976"
     * "qcom,scm-msm8994"
     * "qcom,scm-msm8996"
     * "qcom,scm-msm8998"
···
  * core clock required for "qcom,scm-apq8064", "qcom,scm-msm8660" and
    "qcom,scm-msm8960"
  * core, iface and bus clocks required for "qcom,scm-apq8084",
-   "qcom,scm-msm8916", "qcom,scm-msm8953" and "qcom,scm-msm8974"
+   "qcom,scm-msm8916", "qcom,scm-msm8953", "qcom,scm-msm8974" and "qcom,scm-msm8976"
 - clock-names: Must contain "core" for the core clock, "iface" for the interface
   clock and "bus" for the bus clock per the requirements of the compatible.
 - qcom,dload-mode: phandle to the TCSR hardware block and offset of the
+4 -4
Documentation/devicetree/bindings/interconnect/qcom,bcm-voter.yaml
···
 
 examples:
   # Example 1: apps bcm_voter on SDM845 SoC should be defined inside &apps_rsc node
-  # as defined in Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt
+  # as defined in Documentation/devicetree/bindings/soc/qcom/qcom,rpmh-rsc.yaml
   - |
 
-    apps_bcm_voter: bcm_voter {
+    apps_bcm_voter: bcm-voter {
         compatible = "qcom,bcm-voter";
     };
 
   # Example 2: disp bcm_voter on SDM845 should be defined inside &disp_rsc node
-  # as defined in Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt
+  # as defined in Documentation/devicetree/bindings/soc/qcom/qcom,rpmh-rsc.yaml
   - |
 
     #include <dt-bindings/interconnect/qcom,icc.h>
 
-    disp_bcm_voter: bcm_voter {
+    disp_bcm_voter: bcm-voter {
         compatible = "qcom,bcm-voter";
         qcom,tcs-wait = <QCOM_ICC_TAG_AMC>;
     };
+52
Documentation/devicetree/bindings/iommu/apple,sart.yaml
···
+# SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/iommu/apple,sart.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Apple SART DMA address filter
+
+maintainers:
+  - Sven Peter <sven@svenpeter.dev>
+
+description:
+  Apple SART is a simple address filter for DMA transactions. Regions of
+  physical memory must be added to the SART's allow list before any
+  DMA can target these. Unlike a proper IOMMU no remapping can be done and
+  special support in the consumer driver is required since not all DMA
+  transactions of a single device are subject to SART filtering.
+
+  SART1 has first been used since at least the A11 (iPhone 8 and iPhone X)
+  and allows 36 bit of physical address space and filter entries with sizes
+  up to 24 bit.
+
+  SART2, first seen in A14 and M1, allows 36 bit of physical address space
+  and filter entry size up to 36 bit.
+
+  SART3, first seen in M1 Pro/Max, extends both the address space and filter
+  entry size to 42 bit.
+
+properties:
+  compatible:
+    enum:
+      - apple,t6000-sart
+      - apple,t8103-sart
+
+  reg:
+    maxItems: 1
+
+  power-domains:
+    maxItems: 1
+
+required:
+  - compatible
+  - reg
+
+additionalProperties: false
+
+examples:
+  - |
+    iommu@7bc50000 {
+        compatible = "apple,t8103-sart";
+        reg = <0x7bc50000 0x4000>;
+    };
+5
Documentation/devicetree/bindings/memory-controllers/renesas,rpc-if.yaml
···
           - renesas,r8a774b1-rpc-if       # RZ/G2N
           - renesas,r8a774c0-rpc-if       # RZ/G2E
           - renesas,r8a774e1-rpc-if       # RZ/G2H
+          - renesas,r8a7795-rpc-if        # R-Car H3
+          - renesas,r8a7796-rpc-if        # R-Car M3-W
+          - renesas,r8a77961-rpc-if       # R-Car M3-W+
+          - renesas,r8a77965-rpc-if       # R-Car M3-N
           - renesas,r8a77970-rpc-if       # R-Car V3M
           - renesas,r8a77980-rpc-if       # R-Car V3H
+          - renesas,r8a77990-rpc-if       # R-Car E3
           - renesas,r8a77995-rpc-if       # R-Car D3
           - renesas,r8a779a0-rpc-if       # R-Car V3U
       - const: renesas,rcar-gen3-rpc-if   # a generic R-Car gen3 or RZ/G2{E,H,M,N} device
+111
Documentation/devicetree/bindings/nvme/apple,nvme-ans.yaml
···
+# SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/nvme/apple,nvme-ans.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Apple ANS NVM Express host controller
+
+maintainers:
+  - Sven Peter <sven@svenpeter.dev>
+
+properties:
+  compatible:
+    items:
+      - enum:
+          - apple,t8103-nvme-ans2
+          - apple,t6000-nvme-ans2
+      - const: apple,nvme-ans2
+
+  reg:
+    items:
+      - description: NVMe and NVMMU registers
+      - description: ANS2 co-processor control registers
+
+  reg-names:
+    items:
+      - const: nvme
+      - const: ans
+
+  resets:
+    maxItems: 1
+
+  power-domains:
+    # two domains for t8103, three for t6000
+    minItems: 2
+    items:
+      - description: power domain for the NVMe controller.
+      - description: power domain for the first PCIe bus connecting the NVMe
+          controller to the storage modules.
+      - description: optional power domain for the second PCIe bus
+          connecting the NVMe controller to the storage modules.
+
+  power-domain-names:
+    minItems: 2
+    items:
+      - const: ans
+      - const: apcie0
+      - const: apcie1
+
+  mboxes:
+    maxItems: 1
+    description: Mailbox of the ANS2 co-processor
+
+  interrupts:
+    maxItems: 1
+
+  apple,sart:
+    maxItems: 1
+    $ref: /schemas/types.yaml#/definitions/phandle
+    description: |
+      Reference to the SART address filter.
+
+      The SART address filter is documented in iommu/apple,sart.yaml.
+
+if:
+  properties:
+    compatible:
+      contains:
+        const: apple,t8103-nvme-ans2
+then:
+  properties:
+    power-domains:
+      maxItems: 2
+    power-domain-names:
+      maxItems: 2
+else:
+  properties:
+    power-domains:
+      minItems: 3
+    power-domain-names:
+      minItems: 3
+
+required:
+  - compatible
+  - reg
+  - reg-names
+  - resets
+  - power-domains
+  - power-domain-names
+  - mboxes
+  - interrupts
+  - apple,sart
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/interrupt-controller/apple-aic.h>
+    #include <dt-bindings/interrupt-controller/irq.h>
+
+    nvme@7bcc0000 {
+        compatible = "apple,t8103-nvme-ans2", "apple,nvme-ans2";
+        reg = <0x7bcc0000 0x40000>, <0x77400000 0x4000>;
+        reg-names = "nvme", "ans";
+        interrupts = <AIC_IRQ 590 IRQ_TYPE_LEVEL_HIGH>;
+        mboxes = <&ans>;
+        apple,sart = <&sart>;
+        power-domains = <&ps_ans2>, <&ps_apcie_st>;
+        power-domain-names = "ans", "apcie0";
+        resets = <&ps_ans2>;
+    };
+3
Documentation/devicetree/bindings/power/qcom,rpmpd.yaml
···
       - qcom,msm8998-rpmpd
       - qcom,qcm2290-rpmpd
       - qcom,qcs404-rpmpd
+      - qcom,sa8540p-rpmhpd
       - qcom,sdm660-rpmpd
       - qcom,sc7180-rpmhpd
       - qcom,sc7280-rpmhpd
       - qcom,sc8180x-rpmhpd
+      - qcom,sc8280xp-rpmhpd
       - qcom,sdm845-rpmhpd
       - qcom,sdx55-rpmhpd
+      - qcom,sdx65-rpmhpd
       - qcom,sm6115-rpmpd
       - qcom,sm6125-rpmpd
       - qcom,sm6350-rpmhpd
+3 -2
Documentation/devicetree/bindings/regulator/qcom,smd-rpm-regulator.yaml
···
   resides as a subnode of the SMD. As such, the SMD-RPM regulator requires
   that the SMD and RPM nodes be present.
 
-  Please refer to Documentation/devicetree/bindings/soc/qcom/qcom,smd.txt for
+  Please refer to Documentation/devicetree/bindings/soc/qcom/qcom,smd.yaml for
   information pertaining to the SMD node.
 
   Please refer to Documentation/devicetree/bindings/soc/qcom/qcom,smd-rpm.yaml
···
     l12, l13, l14, l15, l16, l17, l18, l19, l20, l21, l22
 
 maintainers:
-  - Kathiravan T <kathirav@codeaurora.org>
+  - Andy Gross <agross@kernel.org>
+  - Bjorn Andersson <bjorn.andersson@linaro.org>
 
 properties:
   compatible:
+1 -1
Documentation/devicetree/bindings/remoteproc/qcom,q6v5.txt
···
 
 The Hexagon node may also have an subnode named either "smd-edge" or
 "glink-edge" that describes the communication edge, channels and devices
-related to the Hexagon. See ../soc/qcom/qcom,smd.txt and
+related to the Hexagon. See ../soc/qcom/qcom,smd.yaml and
 ../soc/qcom/qcom,glink.txt for details on how to describe these.
 
 = EXAMPLE
+1 -1
Documentation/devicetree/bindings/remoteproc/qcom,wcnss-pil.txt
···
 
 The wcnss node can also have an subnode named "smd-edge" that describes the SMD
 edge, channels and devices related to the WCNSS.
-See ../soc/qcom/qcom,smd.txt for details on how to describe the SMD edge.
+See ../soc/qcom/qcom,smd.yaml for details on how to describe the SMD edge.
 
 = EXAMPLE
 The following example describes the resources needed to boot control the WCNSS,
+47
Documentation/devicetree/bindings/reset/altr,rst-mgr.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/reset/altr,rst-mgr.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Altera SOCFPGA Reset Manager
+
+maintainers:
+  - Dinh Nguyen <dinguyen@altera.com>
+
+properties:
+  compatible:
+    oneOf:
+      - description: Cyclone5/Arria5/Arria10
+        const: altr,rst-mgr
+      - description: Stratix10 ARM64 SoC
+        items:
+          - const: altr,stratix10-rst-mgr
+          - const: altr,rst-mgr
+
+  reg:
+    maxItems: 1
+
+  altr,modrst-offset:
+    $ref: /schemas/types.yaml#/definitions/uint32
+    description: Offset of the first modrst register
+
+  '#reset-cells':
+    const: 1
+
+required:
+  - compatible
+  - reg
+  - altr,modrst-offset
+  - '#reset-cells'
+
+additionalProperties: false
+
+examples:
+  - |
+    rstmgr@ffd05000 {
+        compatible = "altr,rst-mgr";
+        reg = <0xffd05000 0x1000>;
+        altr,modrst-offset = <0x10>;
+        #reset-cells = <1>;
+    };
-22
Documentation/devicetree/bindings/reset/amlogic,meson-axg-audio-arb.txt
···
-* Amlogic audio memory arbiter controller
-
-The Amlogic Audio ARB is a simple device which enables or
-disables the access of Audio FIFOs to DDR on AXG based SoC.
-
-Required properties:
-- compatible: 'amlogic,meson-axg-audio-arb' or
-              'amlogic,meson-sm1-audio-arb'
-- reg: physical base address of the controller and length of memory
-  mapped region.
-- clocks: phandle to the fifo peripheral clock provided by the audio
-  clock controller.
-- #reset-cells: must be 1.
-
-Example on the A113 SoC:
-
-arb: reset-controller@280 {
-	compatible = "amlogic,meson-axg-audio-arb";
-	reg = <0x0 0x280 0x0 0x4>;
-	#reset-cells = <1>;
-	clocks = <&clkc_audio AUD_CLKID_DDR_ARB>;
-};
+56
Documentation/devicetree/bindings/reset/amlogic,meson-axg-audio-arb.yaml
···
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+# Copyright 2019 BayLibre, SAS
+%YAML 1.2
+---
+$id: "http://devicetree.org/schemas/reset/amlogic,meson-axg-audio-arb.yaml#"
+$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+title: Amlogic audio memory arbiter controller
+
+maintainers:
+  - Jerome Brunet <jbrunet@baylibre.com>
+
+description: The Amlogic Audio ARB is a simple device which enables or disables
+  the access of Audio FIFOs to DDR on AXG based SoC.
+
+properties:
+  compatible:
+    enum:
+      - amlogic,meson-axg-audio-arb
+      - amlogic,meson-sm1-audio-arb
+
+  reg:
+    maxItems: 1
+
+  clocks:
+    maxItems: 1
+    description: |
+      phandle to the fifo peripheral clock provided by the audio clock
+      controller.
+
+  "#reset-cells":
+    const: 1
+
+required:
+  - compatible
+  - reg
+  - clocks
+  - "#reset-cells"
+
+additionalProperties: false
+
+examples:
+  - |
+    // on the A113 SoC:
+    #include <dt-bindings/clock/axg-audio-clkc.h>
+    bus {
+        #address-cells = <2>;
+        #size-cells = <2>;
+
+        arb: reset-controller@280 {
+            compatible = "amlogic,meson-axg-audio-arb";
+            reg = <0x0 0x280 0x0 0x4>;
+            #reset-cells = <1>;
+            clocks = <&clkc_audio AUD_CLKID_DDR_ARB>;
+        };
+    };
+1
Documentation/devicetree/bindings/reset/amlogic,meson-reset.yaml
···
       - amlogic,meson-gxbb-reset # Reset Controller on GXBB and compatible SoCs
       - amlogic,meson-axg-reset # Reset Controller on AXG and compatible SoCs
       - amlogic,meson-a1-reset # Reset Controller on A1 and compatible SoCs
+      - amlogic,meson-s4-reset # Reset Controller on S4 and compatible SoCs
 
   reg:
     maxItems: 1
-20
Documentation/devicetree/bindings/reset/ath79-reset.txt
···
-Binding for Qualcomm Atheros AR7xxx/AR9XXX reset controller
-
-Please also refer to reset.txt in this directory for common reset
-controller binding usage.
-
-Required Properties:
-- compatible: has to be "qca,<soctype>-reset", "qca,ar7100-reset"
-              as fallback
-- reg: Base address and size of the controllers memory area
-- #reset-cells : Specifies the number of cells needed to encode reset
-                 line, should be 1
-
-Example:
-
-	reset-controller@1806001c {
-		compatible = "qca,ar9132-reset", "qca,ar7100-reset";
-		reg = <0x1806001c 0x4>;
-
-		#reset-cells = <1>;
-	};
-23
Documentation/devicetree/bindings/reset/berlin,reset.txt
···
-Marvell Berlin reset controller
-===============================
-
-Please also refer to reset.txt in this directory for common reset
-controller binding usage.
-
-The reset controller node must be a sub-node of the chip controller
-node on Berlin SoCs.
-
-Required properties:
-- compatible: should be "marvell,berlin2-reset"
-- #reset-cells: must be set to 2
-
-Example:
-
-chip_rst: reset {
-	compatible = "marvell,berlin2-reset";
-	#reset-cells = <2>;
-};
-
-&usb_phy0 {
-	resets = <&chip_rst 0x104 12>;
-};
-18
Documentation/devicetree/bindings/reset/bitmain,bm1880-reset.txt
···
-Bitmain BM1880 SoC Reset Controller
-===================================
-
-Please also refer to reset.txt in this directory for common reset
-controller binding usage.
-
-Required properties:
-- compatible: Should be "bitmain,bm1880-reset"
-- reg: Offset and length of reset controller space in SCTRL.
-- #reset-cells: Must be 1.
-
-Example:
-
-rst: reset-controller@c00 {
-	compatible = "bitmain,bm1880-reset";
-	reg = <0xc00 0x8>;
-	#reset-cells = <1>;
-};
+36
Documentation/devicetree/bindings/reset/bitmain,bm1880-reset.yaml
···
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+# Copyright 2019 Manivannan Sadhasivam <mani@kernel.org>
+%YAML 1.2
+---
+$id: "http://devicetree.org/schemas/reset/bitmain,bm1880-reset.yaml#"
+$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+title: Bitmain BM1880 SoC Reset Controller
+
+maintainers:
+  - Manivannan Sadhasivam <mani@kernel.org>
+
+properties:
+  compatible:
+    const: bitmain,bm1880-reset
+
+  reg:
+    maxItems: 1
+
+  "#reset-cells":
+    const: 1
+
+required:
+  - compatible
+  - reg
+  - "#reset-cells"
+
+additionalProperties: false
+
+examples:
+  - |
+    rst: reset-controller@c00 {
+        compatible = "bitmain,bm1880-reset";
+        reg = <0xc00 0x8>;
+        #reset-cells = <1>;
+    };
-30
Documentation/devicetree/bindings/reset/lantiq,reset.txt
···
-Lantiq XWAY SoC RCU reset controller binding
-============================================
-
-This binding describes a reset-controller found on the RCU module on Lantiq
-XWAY SoCs.
-
-This node has to be a sub node of the Lantiq RCU block.
-
--------------------------------------------------------------------------------
-Required properties:
-- compatible	: Should be one of
-			"lantiq,danube-reset"
-			"lantiq,xrx200-reset"
-- reg		: Defines the following sets of registers in the parent
-		  syscon device
-			- Offset of the reset set register
-			- Offset of the reset status register
-- #reset-cells	: Specifies the number of cells needed to encode the
-		  reset line, should be 2.
-		  The first cell takes the reset set bit and the
-		  second cell takes the status bit.
-
--------------------------------------------------------------------------------
-Example for the reset-controllers on the xRX200 SoCs:
-	reset0: reset-controller@10 {
-		compatible = "lantiq,xrx200-reset";
-		reg <0x10 0x04>, <0x14 0x04>;
-
-		#reset-cells = <2>;
-	};
+49
Documentation/devicetree/bindings/reset/lantiq,reset.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/reset/lantiq,reset.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Lantiq XWAY SoC RCU reset controller
+
+maintainers:
+  - Martin Blumenstingl <martin.blumenstingl@googlemail.com>
+
+description: |
+  This binding describes a reset-controller found on the RCU module on Lantiq
+  XWAY SoCs. This node has to be a sub node of the Lantiq RCU block.
+
+properties:
+  compatible:
+    enum:
+      - lantiq,danube-reset
+      - lantiq,xrx200-reset
+
+  reg:
+    description: |
+      Defines the following sets of registers in the parent syscon device
+      Offset of the reset set register
+      Offset of the reset status register
+    maxItems: 2
+
+  '#reset-cells':
+    description: |
+      The first cell takes the reset set bit and the second cell takes the
+      status bit.
+    const: 2
+
+required:
+  - compatible
+  - reg
+  - '#reset-cells'
+
+additionalProperties: false
+
+examples:
+  - |
+    // On the xRX200 SoCs:
+    reset0: reset-controller@10 {
+        compatible = "lantiq,xrx200-reset";
+        reg = <0x10 0x04>, <0x14 0x04>;
+        #reset-cells = <2>;
+    };
+38
Documentation/devicetree/bindings/reset/marvell,berlin2-reset.yaml
···
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+# Copyright 2015 Antoine Tenart <atenart@kernel.org>
+%YAML 1.2
+---
+$id: "http://devicetree.org/schemas/reset/marvell,berlin2-reset.yaml#"
+$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+title: Marvell Berlin reset controller
+
+maintainers:
+  - Antoine Tenart <atenart@kernel.org>
+
+description: The reset controller node must be a sub-node of the chip
+  controller node on Berlin SoCs.
+
+properties:
+  compatible:
+    const: marvell,berlin2-reset
+
+  "#reset-cells":
+    const: 2
+
+required:
+  - compatible
+  - "#reset-cells"
+
+additionalProperties: false
+
+examples:
+  - |
+    chip: chip-control@ea0000 {
+        reg = <0xea0000 0x400>;
+
+        chip_rst: reset {
+            compatible = "marvell,berlin2-reset";
+            #reset-cells = <2>;
+        };
+    };
-32
Documentation/devicetree/bindings/reset/nuvoton,npcm-reset.txt
···
-Nuvoton NPCM Reset controller
-
-Required properties:
-- compatible : "nuvoton,npcm750-reset" for NPCM7XX BMC
-- reg : specifies physical base address and size of the register.
-- #reset-cells: must be set to 2
-
-Optional property:
-- nuvoton,sw-reset-number - Contains the software reset number to restart the SoC.
-  NPCM7xx contain four software reset that represent numbers 1 to 4.
-
-If 'nuvoton,sw-reset-number' is not specified software reset is disabled.
-
-Example:
-	rstc: rstc@f0801000 {
-		compatible = "nuvoton,npcm750-reset";
-		reg = <0xf0801000 0x70>;
-		#reset-cells = <2>;
-		nuvoton,sw-reset-number = <2>;
-	};
-
-Specifying reset lines connected to IP NPCM7XX modules
-======================================================
-example:
-
-	spi0: spi@..... {
-		...
-		resets = <&rstc NPCM7XX_RESET_IPSRST2 NPCM7XX_RESET_PSPI1>;
-		...
-	};
-
-The index could be found in <dt-bindings/reset/nuvoton,npcm7xx-reset.h>.
+50
Documentation/devicetree/bindings/reset/nuvoton,npcm750-reset.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/reset/nuvoton,npcm750-reset.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Nuvoton NPCM Reset controller
+
+maintainers:
+  - Tomer Maimon <tmaimon77@gmail.com>
+
+properties:
+  compatible:
+    const: nuvoton,npcm750-reset
+
+  reg:
+    maxItems: 1
+
+  '#reset-cells':
+    const: 2
+
+  nuvoton,sw-reset-number:
+    $ref: /schemas/types.yaml#/definitions/uint32
+    minimum: 1
+    maximum: 4
+    description: |
+      Contains the software reset number to restart the SoC.
+      If not specified, software reset is disabled.
+
+required:
+  - compatible
+  - reg
+  - '#reset-cells'
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/reset/nuvoton,npcm7xx-reset.h>
+    rstc: rstc@f0801000 {
+        compatible = "nuvoton,npcm750-reset";
+        reg = <0xf0801000 0x70>;
+        #reset-cells = <2>;
+        nuvoton,sw-reset-number = <2>;
+    };
+
+    // Specifying reset lines connected to IP NPCM7XX modules
+    spi0: spi {
+        resets = <&rstc NPCM7XX_RESET_IPSRST2 NPCM7XX_RESET_PSPI1>;
+    };
+40
Documentation/devicetree/bindings/reset/qca,ar7100-reset.yaml
···
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+# Copyright 2015 Alban Bedel <albeu@free.fr>
+%YAML 1.2
+---
+$id: "http://devicetree.org/schemas/reset/qca,ar7100-reset.yaml#"
+$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+title: Qualcomm Atheros AR7xxx/AR9XXX reset controller
+
+maintainers:
+  - Alban Bedel <albeu@free.fr>
+
+properties:
+  compatible:
+    items:
+      - enum:
+          - qca,ar9132-reset
+          - qca,ar9331-reset
+      - const: qca,ar7100-reset
+
+  reg:
+    maxItems: 1
+
+  "#reset-cells":
+    const: 1
+
+required:
+  - compatible
+  - reg
+  - "#reset-cells"
+
+additionalProperties: false
+
+examples:
+  - |
+    reset-controller@1806001c {
+        compatible = "qca,ar9132-reset", "qca,ar7100-reset";
+        reg = <0x1806001c 0x4>;
+        #reset-cells = <1>;
+    };
-33
Documentation/devicetree/bindings/reset/snps,axs10x-reset.txt
···
-Binding for the AXS10x reset controller
-
-This binding describes the ARC AXS10x boards custom IP-block which allows
-to control reset signals of selected peripherals. For example DW GMAC, etc...
-This block is controlled via memory-mapped register (AKA CREG) which
-represents up-to 32 reset lines.
-
-As of today only the following lines are used:
- - DW GMAC - line 5
-
-This binding uses the common reset binding[1].
-
-[1] Documentation/devicetree/bindings/reset/reset.txt
-
-Required properties:
-- compatible: should be "snps,axs10x-reset".
-- reg: should always contain pair address - length: for creg reset
-  bits register.
-- #reset-cells: from common reset binding; Should always be set to 1.
-
-Example:
-	reset: reset-controller@11220 {
-		compatible = "snps,axs10x-reset";
-		#reset-cells = <1>;
-		reg = <0x11220 0x4>;
-	};
-
-Specifying reset lines connected to IP modules:
-	ethernet@.... {
-		....
-		resets = <&reset 5>;
-		....
-	};
+48
Documentation/devicetree/bindings/reset/snps,axs10x-reset.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/reset/snps,axs10x-reset.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: AXS10x reset controller
+
+maintainers:
+  - Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
+
+description: |
+  This binding describes the ARC AXS10x boards custom IP-block which allows
+  to control reset signals of selected peripherals. For example DW GMAC, etc...
+  This block is controlled via memory-mapped register (AKA CREG) which
+  represents up-to 32 reset lines.
+  As of today only the following lines are used:
+   - DW GMAC - line 5
+
+properties:
+  compatible:
+    const: snps,axs10x-reset
+
+  reg:
+    maxItems: 1
+
+  '#reset-cells':
+    const: 1
+
+required:
+  - compatible
+  - reg
+  - '#reset-cells'
+
+additionalProperties: false
+
+examples:
+  - |
+    reset: reset-controller@11220 {
+        compatible = "snps,axs10x-reset";
+        #reset-cells = <1>;
+        reg = <0x11220 0x4>;
+    };
+
+    // Specifying reset lines connected to IP modules:
+    ethernet {
+        resets = <&reset 5>;
+    };
-16
Documentation/devicetree/bindings/reset/socfpga-reset.txt
··· 1 - Altera SOCFPGA Reset Manager 2 - 3 - Required properties: 4 - - compatible : "altr,rst-mgr" for (Cyclone5/Arria5/Arria10) 5 - "altr,stratix10-rst-mgr","altr,rst-mgr" for Stratix10 ARM64 SoC 6 - - reg : Should contain 1 register ranges(address and length) 7 - - altr,modrst-offset : Should contain the offset of the first modrst register. 8 - - #reset-cells: 1 9 - 10 - Example: 11 - rstmgr@ffd05000 { 12 - #reset-cells = <1>; 13 - compatible = "altr,rst-mgr"; 14 - reg = <0xffd05000 0x1000>; 15 - altr,modrst-offset = <0x10>; 16 - };
+38 -14
Documentation/devicetree/bindings/reset/socionext,uniphier-glue-reset.yaml
··· 38 38 minItems: 1 39 39 maxItems: 2 40 40 41 - clock-names: 42 - oneOf: 43 - - items: # for Pro4, Pro5 44 - - const: gio 45 - - const: link 46 - - items: # for others 47 - - const: link 41 + clock-names: true 48 42 49 43 resets: 50 44 minItems: 1 51 45 maxItems: 2 52 46 53 - reset-names: 54 - oneOf: 55 - - items: # for Pro4, Pro5 56 - - const: gio 57 - - const: link 58 - - items: # for others 59 - - const: link 47 + reset-names: true 48 + 49 + allOf: 50 + - if: 51 + properties: 52 + compatible: 53 + contains: 54 + enum: 55 + - socionext,uniphier-pro4-usb3-reset 56 + - socionext,uniphier-pro5-usb3-reset 57 + - socionext,uniphier-pro4-ahci-reset 58 + then: 59 + properties: 60 + clocks: 61 + minItems: 2 62 + maxItems: 2 63 + clock-names: 64 + items: 65 + - const: gio 66 + - const: link 67 + resets: 68 + minItems: 2 69 + maxItems: 2 70 + reset-names: 71 + items: 72 + - const: gio 73 + - const: link 74 + else: 75 + properties: 76 + clocks: 77 + maxItems: 1 78 + clock-names: 79 + const: link 80 + resets: 81 + maxItems: 1 82 + reset-names: 83 + const: link 60 84 61 85 additionalProperties: false 62 86
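The if/then/else block added above ties the clock and reset names to the compatible string. A hand-rolled sketch of that rule, assuming the three listed compatibles are the only two-clock ("gio" + "link") variants; real validation is done by the dt-schema tooling, not by code like this.

```python
# Sketch of the constraint the if/then/else schema encodes: Pro4/Pro5 USB3
# and Pro4 AHCI glue need both "gio" and "link" clocks/resets, everything
# else needs only "link".

TWO_CLOCK_COMPATIBLES = {
    "socionext,uniphier-pro4-usb3-reset",
    "socionext,uniphier-pro5-usb3-reset",
    "socionext,uniphier-pro4-ahci-reset",
}

def expected_names(compatible: str) -> list:
    """Clock/reset names the schema requires for this compatible."""
    if compatible in TWO_CLOCK_COMPATIBLES:
        return ["gio", "link"]
    return ["link"]

def node_ok(compatible: str, clock_names: list) -> bool:
    """True if the node's clock-names match what the schema expects."""
    return clock_names == expected_names(compatible)
```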
-42
Documentation/devicetree/bindings/reset/st,sti-picophyreset.txt
··· 1 - STMicroelectronics STi family Sysconfig Picophy SoftReset Controller 2 - ============================================================================= 3 - 4 - This binding describes a reset controller device that is used to enable and 5 - disable on-chip PicoPHY USB2 phy(s) using "softreset" control bits found in 6 - the STi family SoC system configuration registers. 7 - 8 - The actual action taken when softreset is asserted is hardware dependent. 9 - However, when asserted it may not be possible to access the hardware's 10 - registers and after an assert/deassert sequence the hardware's previous state 11 - may no longer be valid. 12 - 13 - Please refer to Documentation/devicetree/bindings/reset/reset.txt 14 - for common reset controller binding usage. 15 - 16 - Required properties: 17 - - compatible: Should be "st,stih407-picophyreset" 18 - - #reset-cells: 1, see below 19 - 20 - Example: 21 - 22 - picophyreset: picophyreset-controller { 23 - compatible = "st,stih407-picophyreset"; 24 - #reset-cells = <1>; 25 - }; 26 - 27 - Specifying picophyreset control of devices 28 - ======================================= 29 - 30 - Device nodes should specify the reset channel required in their "resets" 31 - property, containing a phandle to the picophyreset device node and an 32 - index specifying which channel to use, as described in 33 - Documentation/devicetree/bindings/reset/reset.txt. 34 - 35 - Example: 36 - 37 - usb2_picophy0: usbpicophy@0 { 38 - resets = <&picophyreset STIH407_PICOPHY0_RESET>; 39 - }; 40 - 41 - Macro definitions for the supported reset channels can be found in: 42 - include/dt-bindings/reset/stih407-resets.h
-45
Documentation/devicetree/bindings/reset/st,sti-powerdown.txt
··· 1 - STMicroelectronics STi family Sysconfig Peripheral Powerdown Reset Controller 2 - ============================================================================= 3 - 4 - This binding describes a reset controller device that is used to enable and 5 - disable on-chip peripheral controllers such as USB and SATA, using 6 - "powerdown" control bits found in the STi family SoC system configuration 7 - registers. These have been grouped together into a single reset controller 8 - device for convenience. 9 - 10 - The actual action taken when powerdown is asserted is hardware dependent. 11 - However, when asserted it may not be possible to access the hardware's 12 - registers and after an assert/deassert sequence the hardware's previous state 13 - may no longer be valid. 14 - 15 - Please refer to reset.txt in this directory for common reset 16 - controller binding usage. 17 - 18 - Required properties: 19 - - compatible: Should be "st,stih407-powerdown" 20 - - #reset-cells: 1, see below 21 - 22 - example: 23 - 24 - powerdown: powerdown-controller { 25 - compatible = "st,stih407-powerdown"; 26 - #reset-cells = <1>; 27 - }; 28 - 29 - 30 - Specifying powerdown control of devices 31 - ======================================= 32 - 33 - Device nodes should specify the reset channel required in their "resets" 34 - property, containing a phandle to the powerdown device node and an 35 - index specifying which channel to use, as described in reset.txt 36 - 37 - example: 38 - 39 - st_dwc3: dwc3@8f94000 { 40 - resets = <&powerdown STIH407_USB3_POWERDOWN>, 41 - }; 42 - 43 - Macro definitions for the supported reset channels can be found in: 44 - 45 - include/dt-bindings/reset/stih407-resets.h
+47
Documentation/devicetree/bindings/reset/st,stih407-picophyreset.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/reset/st,stih407-picophyreset.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: STMicroelectronics STi family Sysconfig Picophy SoftReset Controller 8 + 9 + maintainers: 10 + - Peter Griffin <peter.griffin@linaro.org> 11 + 12 + description: | 13 + This binding describes a reset controller device that is used to enable and 14 + disable on-chip PicoPHY USB2 phy(s) using "softreset" control bits found in 15 + the STi family SoC system configuration registers. 16 + 17 + The actual action taken when softreset is asserted is hardware dependent. 18 + However, when asserted it may not be possible to access the hardware's 19 + registers and after an assert/deassert sequence the hardware's previous state 20 + may no longer be valid. 21 + 22 + properties: 23 + compatible: 24 + const: st,stih407-picophyreset 25 + 26 + '#reset-cells': 27 + const: 1 28 + 29 + required: 30 + - compatible 31 + - '#reset-cells' 32 + 33 + additionalProperties: false 34 + 35 + examples: 36 + - | 37 + #include <dt-bindings/reset/stih407-resets.h> 38 + 39 + picophyreset: picophyreset-controller { 40 + compatible = "st,stih407-picophyreset"; 41 + #reset-cells = <1>; 42 + }; 43 + 44 + // Specifying picophyreset control of devices 45 + usb2_picophy0: usbpicophy { 46 + resets = <&picophyreset STIH407_PICOPHY0_RESET>; 47 + };
+49
Documentation/devicetree/bindings/reset/st,stih407-powerdown.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/reset/st,stih407-powerdown.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: STMicroelectronics STi family Sysconfig Peripheral Powerdown Reset Controller 8 + 9 + maintainers: 10 + - Srinivas Kandagatla <srinivas.kandagatla@st.com> 11 + 12 + description: | 13 + This binding describes a reset controller device that is used to enable and 14 + disable on-chip peripheral controllers such as USB and SATA, using 15 + "powerdown" control bits found in the STi family SoC system configuration 16 + registers. These have been grouped together into a single reset controller 17 + device for convenience. 18 + 19 + The actual action taken when powerdown is asserted is hardware dependent. 20 + However, when asserted it may not be possible to access the hardware's 21 + registers and after an assert/deassert sequence the hardware's previous state 22 + may no longer be valid. 23 + 24 + properties: 25 + compatible: 26 + const: st,stih407-powerdown 27 + 28 + '#reset-cells': 29 + const: 1 30 + 31 + required: 32 + - compatible 33 + - '#reset-cells' 34 + 35 + additionalProperties: false 36 + 37 + examples: 38 + - | 39 + #include <dt-bindings/reset/stih407-resets.h> 40 + 41 + powerdown: powerdown-controller { 42 + compatible = "st,stih407-powerdown"; 43 + #reset-cells = <1>; 44 + }; 45 + 46 + // Specifying powerdown control of devices: 47 + st_dwc3: dwc3 { 48 + resets = <&powerdown STIH407_USB3_POWERDOWN>; 49 + };
+3 -96
Documentation/devicetree/bindings/soc/qcom/qcom,geni-se.yaml
··· 63 63 - ranges 64 64 65 65 patternProperties: 66 - "^.*@[0-9a-f]+$": 67 - type: object 68 - description: Common properties for GENI Serial Engine based I2C, SPI and 69 - UART controller. 70 - 71 - properties: 72 - reg: 73 - description: GENI Serial Engine register address and length. 74 - maxItems: 1 75 - 76 - clock-names: 77 - const: se 78 - 79 - clocks: 80 - description: Serial engine core clock needed by the device. 81 - maxItems: 1 82 - 83 - interconnects: 84 - minItems: 2 85 - maxItems: 3 86 - 87 - interconnect-names: 88 - minItems: 2 89 - items: 90 - - const: qup-core 91 - - const: qup-config 92 - - const: qup-memory 93 - 94 - required: 95 - - reg 96 - - clock-names 97 - - clocks 98 - 99 66 "spi@[0-9a-f]+$": 100 67 type: object 101 68 description: GENI serial engine based SPI controller. SPI in master mode 102 69 supports up to 50MHz, up to four chip selects, programmable 103 70 data path from 4 bits to 32 bits and numerous protocol 104 71 variants. 105 - $ref: /schemas/spi/spi-controller.yaml# 106 - 107 - properties: 108 - compatible: 109 - enum: 110 - - qcom,geni-spi 111 - 112 - interrupts: 113 - maxItems: 1 114 - 115 - "#address-cells": 116 - const: 1 117 - 118 - "#size-cells": 119 - const: 0 120 - 121 - required: 122 - - compatible 123 - - interrupts 124 - - "#address-cells" 125 - - "#size-cells" 72 + $ref: /schemas/spi/qcom,spi-geni-qcom.yaml# 126 73 127 74 "i2c@[0-9a-f]+$": 128 75 type: object 129 76 description: GENI serial engine based I2C controller. 130 - $ref: /schemas/i2c/i2c-controller.yaml# 131 - 132 - properties: 133 - compatible: 134 - enum: 135 - - qcom,geni-i2c 136 - 137 - interrupts: 138 - maxItems: 1 139 - 140 - "#address-cells": 141 - const: 1 142 - 143 - "#size-cells": 144 - const: 0 145 - 146 - clock-frequency: 147 - description: Desired I2C bus clock frequency in Hz. 
148 - default: 100000 149 - 150 - required: 151 - - compatible 152 - - interrupts 153 - - "#address-cells" 154 - - "#size-cells" 77 + $ref: /schemas/i2c/qcom,i2c-geni-qcom.yaml# 155 78 156 79 "serial@[0-9a-f]+$": 157 80 type: object 158 81 description: GENI Serial Engine based UART Controller. 159 - $ref: /schemas/serial.yaml# 160 - 161 - properties: 162 - compatible: 163 - enum: 164 - - qcom,geni-uart 165 - - qcom,geni-debug-uart 166 - 167 - interrupts: 168 - minItems: 1 169 - items: 170 - - description: UART core irq 171 - - description: Wakeup irq (RX GPIO) 172 - 173 - required: 174 - - compatible 175 - - interrupts 82 + $ref: /schemas/serial/qcom,serial-geni-qcom.yaml# 176 83 177 84 additionalProperties: false 178 85
+272
Documentation/devicetree/bindings/soc/qcom/qcom,rpmh-rsc.yaml
··· 1 + # SPDX-License-Identifier: GPL-2.0-only 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/soc/qcom/qcom,rpmh-rsc.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Qualcomm RPMH RSC 8 + 9 + maintainers: 10 + - Bjorn Andersson <bjorn.andersson@linaro.org> 11 + 12 + description: | 13 + Resource Power Manager Hardened (RPMH) is the mechanism for communicating 14 + with the hardened resource accelerators on Qualcomm SoCs. Requests to the 15 + resources can be written to the Trigger Command Set (TCS) registers using 16 + an (addr, val) pair and then triggered. Messages in the TCS are then sent in 17 + sequence over an internal bus. 18 + 19 + The hardware block (Direct Resource Voter or DRV) is a part of the h/w entity 20 + (Resource State Coordinator a.k.a RSC) that can handle multiple sleep and 21 + active/wake resource requests. Multiple such DRVs can exist in a SoC and can 22 + be written to from Linux. The structure of each DRV follows the same template 23 + with a few variations that are captured by the properties here. 24 + 25 + A TCS may be triggered from Linux or triggered by the F/W after all the CPUs 26 + have powered off to facilitate idle power saving. TCS could be classified as:: 27 + ACTIVE - Triggered by Linux 28 + SLEEP - Triggered by F/W 29 + WAKE - Triggered by F/W 30 + CONTROL - Triggered by F/W 31 + See also:: <dt-bindings/soc/qcom,rpmh-rsc.h> 32 + 33 + The order in which they are described in the DT should match the hardware 34 + configuration. 35 + 36 + Requests can be made for the state of a resource when the subsystem is 37 + active or idle. When all subsystems like Modem, GPU, CPU are idle, the 38 + resource state will be an aggregate of the sleep votes from each of those 39 + subsystems. Clients may request a sleep value for their shared resources in 40 + addition to the active mode requests. 
41 + 42 + Drivers that want to use the RSC to communicate with RPMH must specify their 43 + bindings as child nodes of the RSC controllers they wish to communicate with. 44 + 45 + properties: 46 + compatible: 47 + const: qcom,rpmh-rsc 48 + 49 + interrupts: 50 + minItems: 1 51 + maxItems: 4 52 + description: 53 + The interrupt that trips when a message complete/response is received for 54 + this DRV from the accelerators. 55 + Number of interrupts must match number of DRV blocks. 56 + 57 + label: 58 + description: 59 + Name for the RSC. The name would be used in trace logs. 60 + 61 + qcom,drv-id: 62 + $ref: /schemas/types.yaml#/definitions/uint32 63 + description: 64 + The ID of the DRV in the RSC block that will be used by this controller. 65 + 66 + qcom,tcs-config: 67 + $ref: /schemas/types.yaml#/definitions/uint32-matrix 68 + items: 69 + - items: 70 + - description: TCS type 71 + enum: [ 0, 1, 2, 3 ] 72 + - description: Number of TCS 73 + - items: 74 + - description: TCS type 75 + enum: [ 0, 1, 2, 3 ] 76 + - description: Number of TCS 77 + - items: 78 + - description: TCS type 79 + enum: [ 0, 1, 2, 3 ] 80 + - description: Number of TCS 81 + - items: 82 + - description: TCS type 83 + enum: [ 0, 1, 2, 3 ] 84 + - description: Number of TCS 85 + description: | 86 + The tuple defining the configuration of TCS. Must have two cells which 87 + describe each TCS type. The order of the TCS must match the hardware 88 + configuration. 89 + Cell 1 (TCS Type):: TCS types to be specified:: 90 + - ACTIVE_TCS 91 + - SLEEP_TCS 92 + - WAKE_TCS 93 + - CONTROL_TCS 94 + Cell 2 (Number of TCS):: <u32> 95 + 96 + qcom,tcs-offset: 97 + $ref: /schemas/types.yaml#/definitions/uint32 98 + description: 99 + The offset of the TCS blocks. 
100 + 101 + reg: 102 + minItems: 1 103 + maxItems: 4 104 + 105 + reg-names: 106 + minItems: 1 107 + items: 108 + - const: drv-0 109 + - const: drv-1 110 + - const: drv-2 111 + - const: drv-3 112 + 113 + bcm-voter: 114 + $ref: /schemas/interconnect/qcom,bcm-voter.yaml# 115 + 116 + clock-controller: 117 + $ref: /schemas/clock/qcom,rpmhcc.yaml# 118 + 119 + power-controller: 120 + $ref: /schemas/power/qcom,rpmpd.yaml# 121 + 122 + patternProperties: 123 + '-regulators$': 124 + $ref: /schemas/regulator/qcom,rpmh-regulator.yaml# 125 + 126 + required: 127 + - compatible 128 + - interrupts 129 + - qcom,drv-id 130 + - qcom,tcs-config 131 + - qcom,tcs-offset 132 + - reg 133 + - reg-names 134 + 135 + additionalProperties: false 136 + 137 + examples: 138 + - | 139 + // For a TCS whose RSC base address is 0x179C0000 and is at a DRV id of 140 + // 2, the register offsets for DRV2 start at 0D00, the register 141 + // calculations are like this:: 142 + // DRV0: 0x179C0000 143 + // DRV1: 0x179C0000 + 0x10000 = 0x179D0000 144 + // DRV2: 0x179C0000 + 0x10000 * 2 = 0x179E0000 145 + // TCS-OFFSET: 0xD00 146 + #include <dt-bindings/interrupt-controller/arm-gic.h> 147 + #include <dt-bindings/soc/qcom,rpmh-rsc.h> 148 + 149 + rsc@179c0000 { 150 + compatible = "qcom,rpmh-rsc"; 151 + reg = <0x179c0000 0x10000>, 152 + <0x179d0000 0x10000>, 153 + <0x179e0000 0x10000>; 154 + reg-names = "drv-0", "drv-1", "drv-2"; 155 + interrupts = <GIC_SPI 3 IRQ_TYPE_LEVEL_HIGH>, 156 + <GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>, 157 + <GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>; 158 + label = "apps_rsc"; 159 + qcom,tcs-offset = <0xd00>; 160 + qcom,drv-id = <2>; 161 + qcom,tcs-config = <ACTIVE_TCS 2>, 162 + <SLEEP_TCS 3>, 163 + <WAKE_TCS 3>, 164 + <CONTROL_TCS 1>; 165 + }; 166 + 167 + - | 168 + // For a TCS whose RSC base address is 0xAF20000 and is at DRV id of 0, the 169 + // register offsets for DRV0 start at 01C00, the register calculations are 170 + // like this:: 171 + // DRV0: 0xAF20000 172 + // TCS-OFFSET: 0x1C00 173 + 
#include <dt-bindings/interrupt-controller/arm-gic.h> 174 + #include <dt-bindings/soc/qcom,rpmh-rsc.h> 175 + 176 + rsc@af20000 { 177 + compatible = "qcom,rpmh-rsc"; 178 + reg = <0xaf20000 0x10000>; 179 + reg-names = "drv-0"; 180 + interrupts = <GIC_SPI 129 IRQ_TYPE_LEVEL_HIGH>; 181 + label = "disp_rsc"; 182 + qcom,tcs-offset = <0x1c00>; 183 + qcom,drv-id = <0>; 184 + qcom,tcs-config = <ACTIVE_TCS 0>, 185 + <SLEEP_TCS 1>, 186 + <WAKE_TCS 1>, 187 + <CONTROL_TCS 0>; 188 + }; 189 + 190 + - | 191 + #include <dt-bindings/interrupt-controller/arm-gic.h> 192 + #include <dt-bindings/soc/qcom,rpmh-rsc.h> 193 + #include <dt-bindings/power/qcom-rpmpd.h> 194 + 195 + rsc@18200000 { 196 + compatible = "qcom,rpmh-rsc"; 197 + reg = <0x18200000 0x10000>, 198 + <0x18210000 0x10000>, 199 + <0x18220000 0x10000>; 200 + reg-names = "drv-0", "drv-1", "drv-2"; 201 + interrupts = <GIC_SPI 3 IRQ_TYPE_LEVEL_HIGH>, 202 + <GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>, 203 + <GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>; 204 + label = "apps_rsc"; 205 + qcom,tcs-offset = <0xd00>; 206 + qcom,drv-id = <2>; 207 + qcom,tcs-config = <ACTIVE_TCS 2>, 208 + <SLEEP_TCS 3>, 209 + <WAKE_TCS 3>, 210 + <CONTROL_TCS 0>; 211 + 212 + clock-controller { 213 + compatible = "qcom,sm8350-rpmh-clk"; 214 + #clock-cells = <1>; 215 + clock-names = "xo"; 216 + clocks = <&xo_board>; 217 + }; 218 + 219 + power-controller { 220 + compatible = "qcom,sm8350-rpmhpd"; 221 + #power-domain-cells = <1>; 222 + operating-points-v2 = <&rpmhpd_opp_table>; 223 + 224 + rpmhpd_opp_table: opp-table { 225 + compatible = "operating-points-v2"; 226 + 227 + rpmhpd_opp_ret: opp1 { 228 + opp-level = <RPMH_REGULATOR_LEVEL_RETENTION>; 229 + }; 230 + 231 + rpmhpd_opp_min_svs: opp2 { 232 + opp-level = <RPMH_REGULATOR_LEVEL_MIN_SVS>; 233 + }; 234 + 235 + rpmhpd_opp_low_svs: opp3 { 236 + opp-level = <RPMH_REGULATOR_LEVEL_LOW_SVS>; 237 + }; 238 + 239 + rpmhpd_opp_svs: opp4 { 240 + opp-level = <RPMH_REGULATOR_LEVEL_SVS>; 241 + }; 242 + 243 + rpmhpd_opp_svs_l1: opp5 { 244 + 
opp-level = <RPMH_REGULATOR_LEVEL_SVS_L1>; 245 + }; 246 + 247 + rpmhpd_opp_nom: opp6 { 248 + opp-level = <RPMH_REGULATOR_LEVEL_NOM>; 249 + }; 250 + 251 + rpmhpd_opp_nom_l1: opp7 { 252 + opp-level = <RPMH_REGULATOR_LEVEL_NOM_L1>; 253 + }; 254 + 255 + rpmhpd_opp_nom_l2: opp8 { 256 + opp-level = <RPMH_REGULATOR_LEVEL_NOM_L2>; 257 + }; 258 + 259 + rpmhpd_opp_turbo: opp9 { 260 + opp-level = <RPMH_REGULATOR_LEVEL_TURBO>; 261 + }; 262 + 263 + rpmhpd_opp_turbo_l1: opp10 { 264 + opp-level = <RPMH_REGULATOR_LEVEL_TURBO_L1>; 265 + }; 266 + }; 267 + }; 268 + 269 + bcm-voter { 270 + compatible = "qcom,bcm-voter"; 271 + }; 272 + };
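The register math in the rpmh-rsc example comments can be sketched as follows; the 0x10000 per-DRV stride is read off the example's reg entries and should be treated as illustrative, not as a normative part of the binding.

```python
# DRV base-address math from the example comments: each DRV occupies a
# window after the RSC base, so DRVn sits at base + stride * n. The
# 0x10000 stride is taken from the example's reg entries (illustrative).

DRV_STRIDE = 0x10000

def drv_base(rsc_base: int, drv_id: int) -> int:
    """MMIO base of DRV number drv_id within an RSC at rsc_base."""
    return rsc_base + DRV_STRIDE * drv_id

# apps_rsc example: base 0x179C0000, qcom,drv-id = 2
assert drv_base(0x179C0000, 0) == 0x179C0000
assert drv_base(0x179C0000, 1) == 0x179D0000
assert drv_base(0x179C0000, 2) == 0x179E0000

# disp_rsc example: base 0xAF20000, qcom,drv-id = 0
assert drv_base(0xAF20000, 0) == 0xAF20000
```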
+4 -3
Documentation/devicetree/bindings/soc/qcom/qcom,smd-rpm.yaml
··· 12 12 to vote for state of the system resources, such as clocks, regulators and bus 13 13 frequencies. 14 14 15 - The SMD information for the RPM edge should be filled out. See qcom,smd.txt 15 + The SMD information for the RPM edge should be filled out. See qcom,smd.yaml 16 16 for the required edge properties. All SMD related properties will reside 17 17 within the RPM node itself. 18 18 ··· 25 25 rpm_requests. 26 26 27 27 maintainers: 28 - - Kathiravan T <kathirav@codeaurora.org> 28 + - Andy Gross <agross@kernel.org> 29 + - Bjorn Andersson <bjorn.andersson@linaro.org> 29 30 30 31 properties: 31 32 compatible: ··· 84 83 qcom,ipc = <&apcs 8 0>; 85 84 qcom,smd-edge = <15>; 86 85 87 - rpm_requests { 86 + rpm-requests { 88 87 compatible = "qcom,rpm-msm8974"; 89 88 qcom,smd-channels = "rpm_requests"; 90 89
-98
Documentation/devicetree/bindings/soc/qcom/qcom,smd.txt
··· 1 - Qualcomm Shared Memory Driver (SMD) binding 2 - 3 - This binding describes the Qualcomm Shared Memory Driver, a fifo based 4 - communication channel for sending data between the various subsystems in 5 - Qualcomm platforms. 6 - 7 - - compatible: 8 - Usage: required 9 - Value type: <stringlist> 10 - Definition: must be "qcom,smd" 11 - 12 - = EDGES 13 - 14 - Each subnode of the SMD node represents a remote subsystem or a remote 15 - processor of some sort - or in SMD language an "edge". The name of the edges 16 - are not important. 17 - The edge is described by the following properties: 18 - 19 - - interrupts: 20 - Usage: required 21 - Value type: <prop-encoded-array> 22 - Definition: should specify the IRQ used by the remote processor to 23 - signal this processor about communication related updates 24 - 25 - - mboxes: 26 - Usage: required 27 - Value type: <prop-encoded-array> 28 - Definition: reference to the associated doorbell in APCS, as described 29 - in mailbox/mailbox.txt 30 - 31 - - qcom,ipc: 32 - Usage: required, unless mboxes is specified 33 - Value type: <prop-encoded-array> 34 - Definition: three entries specifying the outgoing ipc bit used for 35 - signaling the remote processor: 36 - - phandle to a syscon node representing the apcs registers 37 - - u32 representing offset to the register within the syscon 38 - - u32 representing the ipc bit within the register 39 - 40 - - qcom,smd-edge: 41 - Usage: required 42 - Value type: <u32> 43 - Definition: the identifier of the remote processor in the smd channel 44 - allocation table 45 - 46 - - qcom,remote-pid: 47 - Usage: optional 48 - Value type: <u32> 49 - Definition: the identifier for the remote processor as known by the rest 50 - of the system. 51 - 52 - - label: 53 - Usage: optional 54 - Value type: <string> 55 - Definition: name of the edge, used for debugging and identification 56 - purposes. The node name will be used if this is not 57 - present. 
58 - 59 - = SMD DEVICES 60 - 61 - In turn, subnodes of the "edges" represent devices tied to SMD channels on that 62 - "edge". The names of the devices are not important. The properties of these 63 - nodes are defined by the individual bindings for the SMD devices - but must 64 - contain the following property: 65 - 66 - - qcom,smd-channels: 67 - Usage: required 68 - Value type: <stringlist> 69 - Definition: a list of channels tied to this device, used for matching 70 - the device to channels 71 - 72 - = EXAMPLE 73 - 74 - The following example represents a smd node, with one edge representing the 75 - "rpm" subsystem. For the "rpm" subsystem we have a device tied to the 76 - "rpm_request" channel. 77 - 78 - apcs: syscon@f9011000 { 79 - compatible = "syscon"; 80 - reg = <0xf9011000 0x1000>; 81 - }; 82 - 83 - smd { 84 - compatible = "qcom,smd"; 85 - 86 - rpm { 87 - interrupts = <0 168 1>; 88 - qcom,ipc = <&apcs 8 0>; 89 - qcom,smd-edge = <15>; 90 - 91 - rpm_requests { 92 - compatible = "qcom,rpm-msm8974"; 93 - qcom,smd-channels = "rpm_requests"; 94 - 95 - ... 96 - }; 97 - }; 98 - };
+137
Documentation/devicetree/bindings/soc/qcom/qcom,smd.yaml
··· 1 + # SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/soc/qcom/qcom,smd.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Qualcomm Shared Memory Driver 8 + 9 + maintainers: 10 + - Andy Gross <agross@kernel.org> 11 + - Bjorn Andersson <bjorn.andersson@linaro.org> 12 + - Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> 13 + 14 + description: 15 + The Qualcomm Shared Memory Driver is a FIFO-based communication channel for 16 + sending data between the various subsystems in Qualcomm platforms. 17 + 18 + properties: 19 + compatible: 20 + const: qcom,smd 21 + 22 + patternProperties: 23 + "^.*-edge|rpm$": 24 + type: object 25 + description: 26 + Each subnode of the SMD node represents a remote subsystem or a remote 27 + processor of some sort - or in SMD language an "edge". The names of the 28 + edges are not important. 29 + 30 + properties: 31 + interrupts: 32 + maxItems: 1 33 + 34 + label: 35 + $ref: /schemas/types.yaml#/definitions/string 36 + description: 37 + Name of the edge, used for debugging and identification purposes. The 38 + node name will be used if this is not present. 39 + 40 + mboxes: 41 + maxItems: 1 42 + description: 43 + Reference to the mailbox representing the outgoing doorbell in APCS for 44 + this client. 45 + 46 + qcom,ipc: 47 + $ref: /schemas/types.yaml#/definitions/phandle-array 48 + items: 49 + - items: 50 + - description: phandle to a syscon node representing the APCS registers 51 + - description: u32 representing offset to the register within the syscon 52 + - description: u32 representing the ipc bit within the register 53 + description: 54 + Three entries specifying the outgoing ipc bit used for signaling the 55 + remote processor. 56 + 57 + qcom,smd-edge: 58 + $ref: /schemas/types.yaml#/definitions/uint32 59 + description: 60 + The identifier of the remote processor in the smd channel allocation 61 + table. 
62 + 63 + qcom,remote-pid: 64 + $ref: /schemas/types.yaml#/definitions/uint32 65 + description: 66 + The identifier for the remote processor as known by the rest of the 67 + system. 68 + 69 + # Binding for edge subnodes is not complete 70 + patternProperties: 71 + "^rpm-requests$": 72 + type: object 73 + description: 74 + In turn, subnodes of the "edges" represent devices tied to SMD 75 + channels on that "edge". The names of the devices are not 76 + important. The properties of these nodes are defined by the 77 + individual bindings for the SMD devices. 78 + 79 + properties: 80 + qcom,smd-channels: 81 + $ref: /schemas/types.yaml#/definitions/string-array 82 + minItems: 1 83 + maxItems: 32 84 + description: 85 + A list of channels tied to this device, used for matching the 86 + device to channels. 87 + 88 + required: 89 + - compatible 90 + - qcom,smd-channels 91 + 92 + additionalProperties: true 93 + 94 + required: 95 + - interrupts 96 + - qcom,smd-edge 97 + 98 + oneOf: 99 + - required: 100 + - mboxes 101 + - required: 102 + - qcom,ipc 103 + 104 + additionalProperties: false 105 + 106 + required: 107 + - compatible 108 + 109 + additionalProperties: false 110 + 111 + examples: 112 + # The following example represents a smd node, with one edge representing the 113 + # "rpm" subsystem. For the "rpm" subsystem we have a device tied to the 114 + # "rpm_request" channel. 115 + - | 116 + #include <dt-bindings/interrupt-controller/arm-gic.h> 117 + 118 + shared-memory { 119 + compatible = "qcom,smd"; 120 + 121 + rpm { 122 + interrupts = <GIC_SPI 168 IRQ_TYPE_EDGE_RISING>; 123 + qcom,ipc = <&apcs 8 0>; 124 + qcom,smd-edge = <15>; 125 + 126 + rpm-requests { 127 + compatible = "qcom,rpm-msm8974"; 128 + qcom,smd-channels = "rpm_requests"; 129 + 130 + clock-controller { 131 + compatible = "qcom,rpmcc-msm8974", "qcom,rpmcc"; 132 + #clock-cells = <1>; 133 + }; 134 + 135 + }; 136 + }; 137 + };
-104
Documentation/devicetree/bindings/soc/qcom/qcom,smsm.txt
··· 1 - Qualcomm Shared Memory State Machine 2 - 3 - The Shared Memory State Machine facilitates broadcasting of single bit state 4 - information between the processors in a Qualcomm SoC. Each processor is 5 - assigned 32 bits of state that can be modified. A processor can through a 6 - matrix of bitmaps signal subscription of notifications upon changes to a 7 - certain bit owned by a certain remote processor. 8 - 9 - - compatible: 10 - Usage: required 11 - Value type: <string> 12 - Definition: must be one of: 13 - "qcom,smsm" 14 - 15 - - qcom,ipc-N: 16 - Usage: required 17 - Value type: <prop-encoded-array> 18 - Definition: three entries specifying the outgoing ipc bit used for 19 - signaling the N:th remote processor 20 - - phandle to a syscon node representing the apcs registers 21 - - u32 representing offset to the register within the syscon 22 - - u32 representing the ipc bit within the register 23 - 24 - - qcom,local-host: 25 - Usage: optional 26 - Value type: <u32> 27 - Definition: identifier of the local processor in the list of hosts, or 28 - in other words specifier of the column in the subscription 29 - matrix representing the local processor 30 - defaults to host 0 31 - 32 - - #address-cells: 33 - Usage: required 34 - Value type: <u32> 35 - Definition: must be 1 36 - 37 - - #size-cells: 38 - Usage: required 39 - Value type: <u32> 40 - Definition: must be 0 41 - 42 - = SUBNODES 43 - Each processor's state bits are described by a subnode of the smsm device node. 44 - Nodes can either be flagged as an interrupt-controller to denote a remote 45 - processor's state bits or the local processors bits. The node names are not 46 - important. 
47 - 48 - - reg: 49 - Usage: required 50 - Value type: <u32> 51 - Definition: specifies the offset, in words, of the first bit for this 52 - entry 53 - 54 - - #qcom,smem-state-cells: 55 - Usage: required for local entry 56 - Value type: <u32> 57 - Definition: must be 1 - denotes bit number 58 - 59 - - interrupt-controller: 60 - Usage: required for remote entries 61 - Value type: <empty> 62 - Definition: marks the entry as a interrupt-controller and the state bits 63 - to belong to a remote processor 64 - 65 - - #interrupt-cells: 66 - Usage: required for remote entries 67 - Value type: <u32> 68 - Definition: must be 2 - denotes bit number and IRQ flags 69 - 70 - - interrupts: 71 - Usage: required for remote entries 72 - Value type: <prop-encoded-array> 73 - Definition: one entry specifying remote IRQ used by the remote processor 74 - to signal changes of its state bits 75 - 76 - 77 - = EXAMPLE 78 - The following example shows the SMEM setup for controlling properties of the 79 - wireless processor, defined from the 8974 apps processor's point-of-view. It 80 - encompasses one outbound entry and the outgoing interrupt for the wireless 81 - processor. 82 - 83 - smsm { 84 - compatible = "qcom,smsm"; 85 - 86 - #address-cells = <1>; 87 - #size-cells = <0>; 88 - 89 - qcom,ipc-3 = <&apcs 8 19>; 90 - 91 - apps_smsm: apps@0 { 92 - reg = <0>; 93 - 94 - #qcom,smem-state-cells = <1>; 95 - }; 96 - 97 - wcnss_smsm: wcnss@7 { 98 - reg = <7>; 99 - interrupts = <0 144 1>; 100 - 101 - interrupt-controller; 102 - #interrupt-cells = <2>; 103 - }; 104 - };
+138
Documentation/devicetree/bindings/soc/qcom/qcom,smsm.yaml
··· 1 + # SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/soc/qcom/qcom,smsm.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Qualcomm Shared Memory State Machine 8 + 9 + maintainers: 10 + - Andy Gross <agross@kernel.org> 11 + - Bjorn Andersson <bjorn.andersson@linaro.org> 12 + - Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> 13 + 14 + description: 15 + The Shared Memory State Machine facilitates broadcasting of single-bit state 16 + information between the processors in a Qualcomm SoC. Each processor is 17 + assigned 32 bits of state that can be modified. A processor can, through a 18 + matrix of bitmaps, signal subscription to notifications upon changes to a 19 + certain bit owned by a certain remote processor. 20 + 21 + properties: 22 + compatible: 23 + const: qcom,smsm 24 + 25 + '#address-cells': 26 + const: 1 27 + 28 + qcom,local-host: 29 + $ref: /schemas/types.yaml#/definitions/uint32 30 + default: 0 31 + description: 32 + Identifier of the local processor in the list of hosts, or in other words 33 + specifier of the column in the subscription matrix representing the local 34 + processor. 35 + 36 + '#size-cells': 37 + const: 0 38 + 39 + patternProperties: 40 + "^qcom,ipc-[1-4]$": 41 + $ref: /schemas/types.yaml#/definitions/phandle-array 42 + items: 43 + - items: 44 + - description: phandle to a syscon node representing the APCS registers 45 + - description: u32 representing offset to the register within the syscon 46 + - description: u32 representing the ipc bit within the register 47 + description: 48 + Three entries specifying the outgoing ipc bit used for signaling the N:th 49 + remote processor. 50 + 51 + "@[0-9a-f]$": 52 + type: object 53 + description: 54 + Each processor's state bits are described by a subnode of the SMSM device 55 + node. 
Nodes can either be flagged as an interrupt-controller to denote a 56 + remote processor's state bits or the local processor's bits. The node 57 + names are not important. 58 + 59 + properties: 60 + reg: 61 + maxItems: 1 62 + 63 + interrupt-controller: 64 + description: 65 + Marks the entry as an interrupt-controller whose state bits 66 + belong to a remote processor. 67 + 68 + '#interrupt-cells': 69 + const: 2 70 + 71 + interrupts: 72 + maxItems: 1 73 + description: 74 + One entry specifying the remote IRQ used by the remote processor to 75 + signal changes of its state bits. 76 + 77 + '#qcom,smem-state-cells': 78 + $ref: /schemas/types.yaml#/definitions/uint32 79 + const: 1 80 + description: 81 + Required for the local entry. Denotes the bit number. 82 + 83 + required: 84 + - reg 85 + 86 + oneOf: 87 + - required: 88 + - '#qcom,smem-state-cells' 89 + - required: 90 + - interrupt-controller 91 + - '#interrupt-cells' 92 + - interrupts 93 + 94 + additionalProperties: false 95 + 96 + required: 97 + - compatible 98 + - '#address-cells' 99 + - '#size-cells' 100 + 101 + anyOf: 102 + - required: 103 + - qcom,ipc-1 104 + - required: 105 + - qcom,ipc-2 106 + - required: 107 + - qcom,ipc-3 108 + - required: 109 + - qcom,ipc-4 110 + 111 + additionalProperties: false 112 + 113 + examples: 114 + # The following example shows the SMEM setup for controlling properties of 115 + # the wireless processor, defined from the 8974 apps processor's 116 + # point-of-view. It encompasses one outbound entry and the outgoing interrupt 117 + # for the wireless processor. 
118 + - | 119 + #include <dt-bindings/interrupt-controller/arm-gic.h> 120 + 121 + shared-memory { 122 + compatible = "qcom,smsm"; 123 + #address-cells = <1>; 124 + #size-cells = <0>; 125 + qcom,ipc-3 = <&apcs 8 19>; 126 + 127 + apps_smsm: apps@0 { 128 + reg = <0>; 129 + #qcom,smem-state-cells = <1>; 130 + }; 131 + 132 + wcnss_smsm: wcnss@7 { 133 + reg = <7>; 134 + interrupts = <GIC_SPI 144 IRQ_TYPE_EDGE_RISING>; 135 + interrupt-controller; 136 + #interrupt-cells = <2>; 137 + }; 138 + };
-131
Documentation/devicetree/bindings/soc/qcom/qcom,wcnss.txt
··· 1 - Qualcomm WCNSS Binding 2 - 3 - This binding describes the Qualcomm WCNSS hardware. It consists of control 4 - block and a BT, WiFi and FM radio block, all using SMD as command channels. 5 - 6 - - compatible: 7 - Usage: required 8 - Value type: <string> 9 - Definition: must be: "qcom,wcnss", 10 - 11 - - qcom,smd-channel: 12 - Usage: required 13 - Value type: <string> 14 - Definition: standard SMD property specifying the SMD channel used for 15 - communication with the WiFi firmware. 16 - Should be "WCNSS_CTRL". 17 - 18 - - qcom,mmio: 19 - Usage: required 20 - Value type: <prop-encoded-array> 21 - Definition: reference to a node specifying the wcnss "ccu" and "dxe" 22 - register blocks. The node must be compatible with one of 23 - the following: 24 - "qcom,riva", 25 - "qcom,pronto" 26 - 27 - - firmware-name: 28 - Usage: optional 29 - Value type: <string> 30 - Definition: specifies the relative firmware image path for the WLAN NV 31 - blob. Defaults to "wlan/prima/WCNSS_qcom_wlan_nv.bin" if 32 - not specified. 33 - 34 - = SUBNODES 35 - The subnodes of the wcnss node are optional and describe the individual blocks in 36 - the WCNSS. 
37 - 38 - == Bluetooth 39 - The following properties are defined to the bluetooth node: 40 - 41 - - compatible: 42 - Usage: required 43 - Value type: <string> 44 - Definition: must be: 45 - "qcom,wcnss-bt" 46 - 47 - - local-bd-address: 48 - Usage: optional 49 - Value type: <u8 array> 50 - Definition: see Documentation/devicetree/bindings/net/bluetooth.txt 51 - 52 - == WiFi 53 - The following properties are defined to the WiFi node: 54 - 55 - - compatible: 56 - Usage: required 57 - Value type: <string> 58 - Definition: must be one of: 59 - "qcom,wcnss-wlan", 60 - 61 - - interrupts: 62 - Usage: required 63 - Value type: <prop-encoded-array> 64 - Definition: should specify the "rx" and "tx" interrupts 65 - 66 - - interrupt-names: 67 - Usage: required 68 - Value type: <stringlist> 69 - Definition: must contain "rx" and "tx" 70 - 71 - - qcom,smem-state: 72 - Usage: required 73 - Value type: <prop-encoded-array> 74 - Definition: should reference the tx-enable and tx-rings-empty SMEM states 75 - 76 - - qcom,smem-state-names: 77 - Usage: required 78 - Value type: <stringlist> 79 - Definition: must contain "tx-enable" and "tx-rings-empty" 80 - 81 - = EXAMPLE 82 - The following example represents a SMD node, with one edge representing the 83 - "pronto" subsystem, with the wcnss device and its wcn3680 BT and WiFi blocks 84 - described; as found on the 8974 platform. 
85 - 86 - smd { 87 - compatible = "qcom,smd"; 88 - 89 - pronto-edge { 90 - interrupts = <0 142 1>; 91 - 92 - qcom,ipc = <&apcs 8 17>; 93 - qcom,smd-edge = <6>; 94 - 95 - wcnss { 96 - compatible = "qcom,wcnss"; 97 - qcom,smd-channels = "WCNSS_CTRL"; 98 - 99 - #address-cells = <1>; 100 - #size-cells = <1>; 101 - 102 - qcom,mmio = <&pronto>; 103 - 104 - bt { 105 - compatible = "qcom,wcnss-bt"; 106 - 107 - /* BD address 00:11:22:33:44:55 */ 108 - local-bd-address = [ 55 44 33 22 11 00 ]; 109 - }; 110 - 111 - wlan { 112 - compatible = "qcom,wcnss-wlan"; 113 - 114 - interrupts = <0 145 0>, <0 146 0>; 115 - interrupt-names = "tx", "rx"; 116 - 117 - qcom,smem-state = <&apps_smsm 10>, <&apps_smsm 9>; 118 - qcom,smem-state-names = "tx-enable", "tx-rings-empty"; 119 - }; 120 - }; 121 - }; 122 - }; 123 - 124 - soc { 125 - pronto: pronto { 126 - compatible = "qcom,pronto"; 127 - 128 - reg = <0xfb204000 0x2000>, <0xfb202000 0x1000>, <0xfb21b000 0x3000>; 129 - reg-names = "ccu", "dxe", "pmu"; 130 - }; 131 - };
+137
Documentation/devicetree/bindings/soc/qcom/qcom,wcnss.yaml
··· 1 + # SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/soc/qcom/qcom,wcnss.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Qualcomm WCNSS 8 + 9 + maintainers: 10 + - Andy Gross <agross@kernel.org> 11 + - Bjorn Andersson <bjorn.andersson@linaro.org> 12 + 13 + description: 14 + The Qualcomm WCNSS hardware consists of a control block and a BT, WiFi and FM 15 + radio block, all using SMD as command channels. 16 + 17 + properties: 18 + compatible: 19 + const: qcom,wcnss 20 + 21 + firmware-name: 22 + $ref: /schemas/types.yaml#/definitions/string 23 + default: "wlan/prima/WCNSS_qcom_wlan_nv.bin" 24 + description: 25 + Relative firmware image path for the WLAN NV blob. 26 + 27 + qcom,mmio: 28 + $ref: /schemas/types.yaml#/definitions/phandle 29 + description: | 30 + Reference to a node specifying the wcnss "ccu" and "dxe" register blocks. 31 + The node must be compatible with one of the following: 32 + - "qcom,riva" 33 + - "qcom,pronto" 34 + 35 + qcom,smd-channels: 36 + $ref: /schemas/types.yaml#/definitions/string 37 + const: WCNSS_CTRL 38 + description: 39 + Standard SMD property specifying the SMD channel used for communication 40 + with the WiFi firmware. 
41 + 42 + bluetooth: 43 + type: object 44 + additionalProperties: false 45 + properties: 46 + compatible: 47 + const: qcom,wcnss-bt 48 + 49 + local-bd-address: 50 + $ref: /schemas/types.yaml#/definitions/uint8-array 51 + maxItems: 6 52 + description: 53 + See Documentation/devicetree/bindings/net/bluetooth.txt 54 + 55 + required: 56 + - compatible 57 + 58 + wifi: 59 + additionalProperties: false 60 + type: object 61 + properties: 62 + compatible: 63 + const: qcom,wcnss-wlan 64 + 65 + interrupts: 66 + maxItems: 2 67 + 68 + interrupt-names: 69 + items: 70 + - const: tx 71 + - const: rx 72 + 73 + qcom,smem-states: 74 + $ref: /schemas/types.yaml#/definitions/phandle-array 75 + maxItems: 2 76 + description: 77 + Should reference the tx-enable and tx-rings-empty SMEM states. 78 + 79 + qcom,smem-state-names: 80 + $ref: /schemas/types.yaml#/definitions/string-array 81 + items: 82 + - const: tx-enable 83 + - const: tx-rings-empty 84 + description: 85 + Names of SMEM states. 86 + 87 + required: 88 + - compatible 89 + - interrupts 90 + - interrupt-names 91 + - qcom,smem-states 92 + - qcom,smem-state-names 93 + 94 + required: 95 + - compatible 96 + - qcom,mmio 97 + - qcom,smd-channels 98 + 99 + additionalProperties: false 100 + 101 + examples: 102 + - | 103 + #include <dt-bindings/interrupt-controller/arm-gic.h> 104 + 105 + smd-edge { 106 + interrupts = <GIC_SPI 142 IRQ_TYPE_EDGE_RISING>; 107 + 108 + qcom,ipc = <&apcs 8 17>; 109 + qcom,smd-edge = <6>; 110 + qcom,remote-pid = <4>; 111 + 112 + label = "pronto"; 113 + 114 + wcnss { 115 + compatible = "qcom,wcnss"; 116 + qcom,smd-channels = "WCNSS_CTRL"; 117 + 118 + qcom,mmio = <&pronto>; 119 + 120 + bluetooth { 121 + compatible = "qcom,wcnss-bt"; 122 + /* BD address 00:11:22:33:44:55 */ 123 + local-bd-address = [ 55 44 33 22 11 00 ]; 124 + }; 125 + 126 + wifi { 127 + compatible = "qcom,wcnss-wlan"; 128 + 129 + interrupts = <GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>, 130 + <GIC_SPI 146 IRQ_TYPE_LEVEL_HIGH>; 131 + interrupt-names = "tx", 
"rx"; 132 + 133 + qcom,smem-states = <&apps_smsm 10>, <&apps_smsm 9>; 134 + qcom,smem-state-names = "tx-enable", "tx-rings-empty"; 135 + }; 136 + }; 137 + };
-137
Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt
··· 1 - RPMH RSC: 2 - ------------ 3 - 4 - Resource Power Manager Hardened (RPMH) is the mechanism for communicating with 5 - the hardened resource accelerators on Qualcomm SoCs. Requests to the resources 6 - can be written to the Trigger Command Set (TCS) registers and using a (addr, 7 - val) pair and triggered. Messages in the TCS are then sent in sequence over an 8 - internal bus. 9 - 10 - The hardware block (Direct Resource Voter or DRV) is a part of the h/w entity 11 - (Resource State Coordinator a.k.a RSC) that can handle multiple sleep and 12 - active/wake resource requests. Multiple such DRVs can exist in a SoC and can 13 - be written to from Linux. The structure of each DRV follows the same template 14 - with a few variations that are captured by the properties here. 15 - 16 - A TCS may be triggered from Linux or triggered by the F/W after all the CPUs 17 - have powered off to facilitate idle power saving. TCS could be classified as - 18 - 19 - ACTIVE /* Triggered by Linux */ 20 - SLEEP /* Triggered by F/W */ 21 - WAKE /* Triggered by F/W */ 22 - CONTROL /* Triggered by F/W */ 23 - 24 - The order in which they are described in the DT, should match the hardware 25 - configuration. 26 - 27 - Requests can be made for the state of a resource, when the subsystem is active 28 - or idle. When all subsystems like Modem, GPU, CPU are idle, the resource state 29 - will be an aggregate of the sleep votes from each of those subsystems. Clients 30 - may request a sleep value for their shared resources in addition to the active 31 - mode requests. 32 - 33 - Properties: 34 - 35 - - compatible: 36 - Usage: required 37 - Value type: <string> 38 - Definition: Should be "qcom,rpmh-rsc". 39 - 40 - - reg: 41 - Usage: required 42 - Value type: <prop-encoded-array> 43 - Definition: The first register specifies the base address of the 44 - DRV(s). The number of DRVs in the dependent on the RSC. 45 - The tcs-offset specifies the start address of the 46 - TCS in the DRVs. 
47 - 48 - - reg-names: 49 - Usage: required 50 - Value type: <string> 51 - Definition: Maps the register specified in the reg property. Must be 52 - "drv-0", "drv-1", "drv-2" etc and "tcs-offset". The 53 - 54 - - interrupts: 55 - Usage: required 56 - Value type: <prop-encoded-interrupt> 57 - Definition: The interrupt that trips when a message complete/response 58 - is received for this DRV from the accelerators. 59 - 60 - - qcom,drv-id: 61 - Usage: required 62 - Value type: <u32> 63 - Definition: The id of the DRV in the RSC block that will be used by 64 - this controller. 65 - 66 - - qcom,tcs-config: 67 - Usage: required 68 - Value type: <prop-encoded-array> 69 - Definition: The tuple defining the configuration of TCS. 70 - Must have 2 cells which describe each TCS type. 71 - <type number_of_tcs>. 72 - The order of the TCS must match the hardware 73 - configuration. 74 - - Cell #1 (TCS Type): TCS types to be specified - 75 - ACTIVE_TCS 76 - SLEEP_TCS 77 - WAKE_TCS 78 - CONTROL_TCS 79 - - Cell #2 (Number of TCS): <u32> 80 - 81 - - label: 82 - Usage: optional 83 - Value type: <string> 84 - Definition: Name for the RSC. The name would be used in trace logs. 85 - 86 - Drivers that want to use the RSC to communicate with RPMH must specify their 87 - bindings as child nodes of the RSC controllers they wish to communicate with. 
88 - 89 - Example 1: 90 - 91 - For a TCS whose RSC base address is is 0x179C0000 and is at a DRV id of 2, the 92 - register offsets for DRV2 start at 0D00, the register calculations are like 93 - this - 94 - DRV0: 0x179C0000 95 - DRV2: 0x179C0000 + 0x10000 = 0x179D0000 96 - DRV2: 0x179C0000 + 0x10000 * 2 = 0x179E0000 97 - TCS-OFFSET: 0xD00 98 - 99 - apps_rsc: rsc@179c0000 { 100 - label = "apps_rsc"; 101 - compatible = "qcom,rpmh-rsc"; 102 - reg = <0x179c0000 0x10000>, 103 - <0x179d0000 0x10000>, 104 - <0x179e0000 0x10000>; 105 - reg-names = "drv-0", "drv-1", "drv-2"; 106 - interrupts = <GIC_SPI 3 IRQ_TYPE_LEVEL_HIGH>, 107 - <GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>, 108 - <GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>; 109 - qcom,tcs-offset = <0xd00>; 110 - qcom,drv-id = <2>; 111 - qcom,tcs-config = <ACTIVE_TCS 2>, 112 - <SLEEP_TCS 3>, 113 - <WAKE_TCS 3>, 114 - <CONTROL_TCS 1>; 115 - }; 116 - 117 - Example 2: 118 - 119 - For a TCS whose RSC base address is 0xAF20000 and is at DRV id of 0, the 120 - register offsets for DRV0 start at 01C00, the register calculations are like 121 - this - 122 - DRV0: 0xAF20000 123 - TCS-OFFSET: 0x1C00 124 - 125 - disp_rsc: rsc@af20000 { 126 - label = "disp_rsc"; 127 - compatible = "qcom,rpmh-rsc"; 128 - reg = <0xaf20000 0x10000>; 129 - reg-names = "drv-0"; 130 - interrupts = <GIC_SPI 129 IRQ_TYPE_LEVEL_HIGH>; 131 - qcom,tcs-offset = <0x1c00>; 132 - qcom,drv-id = <0>; 133 - qcom,tcs-config = <ACTIVE_TCS 0>, 134 - <SLEEP_TCS 1>, 135 - <WAKE_TCS 1>, 136 - <CONTROL_TCS 0>; 137 - };
+3
Documentation/devicetree/bindings/soc/rockchip/grf.yaml
··· 15 15 - items: 16 16 - enum: 17 17 - rockchip,rk3288-sgrf 18 + - rockchip,rk3566-pipe-grf 19 + - rockchip,rk3568-pipe-grf 20 + - rockchip,rk3568-pipe-phy-grf 18 21 - rockchip,rk3568-usb2phy-grf 19 22 - rockchip,rv1108-usbgrf 20 23 - const: syscon
+1 -1
Documentation/devicetree/bindings/soc/samsung/exynos-usi.yaml
··· 77 77 description: Child node describing underlying UART/serial 78 78 79 79 "^spi@[0-9a-f]+$": 80 - type: object 80 + $ref: /schemas/spi/samsung,spi.yaml 81 81 description: Child node describing underlying SPI 82 82 83 83 required:
-39
Documentation/devicetree/bindings/spi/qcom,spi-geni-qcom.txt
··· 1 - GENI based Qualcomm Universal Peripheral (QUP) Serial Peripheral Interface (SPI) 2 - 3 - The QUP v3 core is a GENI based AHB slave that provides a common data path 4 - (an output FIFO and an input FIFO) for serial peripheral interface (SPI) 5 - mini-core. 6 - 7 - SPI in master mode supports up to 50MHz, up to four chip selects, programmable 8 - data path from 4 bits to 32 bits and numerous protocol variants. 9 - 10 - Required properties: 11 - - compatible: Must contain "qcom,geni-spi". 12 - - reg: Must contain SPI register location and length. 13 - - interrupts: Must contain SPI controller interrupts. 14 - - clock-names: Must contain "se". 15 - - clocks: Serial engine core clock needed by the device. 16 - - #address-cells: Must be <1> to define a chip select address on 17 - the SPI bus. 18 - - #size-cells: Must be <0>. 19 - 20 - SPI Controller nodes must be child of GENI based Qualcomm Universal 21 - Peripharal. Please refer GENI based QUP wrapper controller node bindings 22 - described in Documentation/devicetree/bindings/soc/qcom/qcom,geni-se.yaml. 23 - 24 - SPI slave nodes must be children of the SPI master node and conform to SPI bus 25 - binding as described in Documentation/devicetree/bindings/spi/spi-bus.txt. 26 - 27 - Example: 28 - spi0: spi@a84000 { 29 - compatible = "qcom,geni-spi"; 30 - reg = <0xa84000 0x4000>; 31 - interrupts = <GIC_SPI 354 IRQ_TYPE_LEVEL_HIGH>; 32 - clock-names = "se"; 33 - clocks = <&clock_gcc GCC_QUPV3_WRAP0_S0_CLK>; 34 - pinctrl-names = "default", "sleep"; 35 - pinctrl-0 = <&qup_1_spi_2_active>; 36 - pinctrl-1 = <&qup_1_spi_2_sleep>; 37 - #address-cells = <1>; 38 - #size-cells = <0>; 39 - };
+116
Documentation/devicetree/bindings/spi/qcom,spi-geni-qcom.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/spi/qcom,spi-geni-qcom.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: GENI based Qualcomm Universal Peripheral (QUP) Serial Peripheral Interface (SPI) 8 + 9 + maintainers: 10 + - Andy Gross <agross@kernel.org> 11 + - Bjorn Andersson <bjorn.andersson@linaro.org> 12 + - Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> 13 + 14 + description: 15 + The QUP v3 core is a GENI based AHB slave that provides a common data path 16 + (an output FIFO and an input FIFO) for serial peripheral interface (SPI) 17 + mini-core. 18 + 19 + SPI in master mode supports up to 50MHz, up to four chip selects, 20 + programmable data path from 4 bits to 32 bits and numerous protocol variants. 21 + 22 + SPI controller nodes must be children of the GENI based Qualcomm Universal 23 + Peripheral. Please refer to the GENI based QUP wrapper controller node 24 + bindings described in Documentation/devicetree/bindings/soc/qcom/qcom,geni-se.yaml. 
25 + 26 + allOf: 27 + - $ref: /schemas/spi/spi-controller.yaml# 28 + 29 + properties: 30 + compatible: 31 + const: qcom,geni-spi 32 + 33 + clocks: 34 + maxItems: 1 35 + 36 + clock-names: 37 + const: se 38 + 39 + dmas: 40 + maxItems: 2 41 + 42 + dma-names: 43 + items: 44 + - const: tx 45 + - const: rx 46 + 47 + interconnects: 48 + maxItems: 2 49 + 50 + interconnect-names: 51 + items: 52 + - const: qup-core 53 + - const: qup-config 54 + 55 + interrupts: 56 + maxItems: 1 57 + 58 + operating-points-v2: true 59 + 60 + power-domains: 61 + maxItems: 1 62 + 63 + reg: 64 + maxItems: 1 65 + 66 + required: 67 + - compatible 68 + - clocks 69 + - clock-names 70 + - interrupts 71 + - reg 72 + 73 + unevaluatedProperties: false 74 + 75 + examples: 76 + - | 77 + #include <dt-bindings/clock/qcom,gcc-sc7180.h> 78 + #include <dt-bindings/interconnect/qcom,sc7180.h> 79 + #include <dt-bindings/interrupt-controller/arm-gic.h> 80 + #include <dt-bindings/power/qcom-rpmpd.h> 81 + 82 + spi@880000 { 83 + compatible = "qcom,geni-spi"; 84 + reg = <0x00880000 0x4000>; 85 + clock-names = "se"; 86 + clocks = <&gcc GCC_QUPV3_WRAP0_S0_CLK>; 87 + pinctrl-names = "default"; 88 + pinctrl-0 = <&qup_spi0_default>; 89 + interrupts = <GIC_SPI 601 IRQ_TYPE_LEVEL_HIGH>; 90 + #address-cells = <1>; 91 + #size-cells = <0>; 92 + power-domains = <&rpmhpd SC7180_CX>; 93 + operating-points-v2 = <&qup_opp_table>; 94 + interconnects = <&qup_virt MASTER_QUP_CORE_0 0 &qup_virt SLAVE_QUP_CORE_0 0>, 95 + <&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_0 0>; 96 + interconnect-names = "qup-core", "qup-config"; 97 + }; 98 + 99 + - | 100 + #include <dt-bindings/dma/qcom-gpi.h> 101 + 102 + spi@884000 { 103 + compatible = "qcom,geni-spi"; 104 + reg = <0x00884000 0x4000>; 105 + clock-names = "se"; 106 + clocks = <&gcc GCC_QUPV3_WRAP0_S1_CLK>; 107 + dmas = <&gpi_dma0 0 1 QCOM_GPI_SPI>, 108 + <&gpi_dma0 1 1 QCOM_GPI_SPI>; 109 + dma-names = "tx", "rx"; 110 + pinctrl-names = "default"; 111 + pinctrl-0 = <&qup_spi1_default>; 112 
+ interrupts = <GIC_SPI 602 IRQ_TYPE_LEVEL_HIGH>; 113 + spi-max-frequency = <50000000>; 114 + #address-cells = <1>; 115 + #size-cells = <0>; 116 + };
+64 -3
Documentation/devicetree/bindings/timer/samsung,exynos4210-mct.yaml
··· 19 19 20 20 properties: 21 21 compatible: 22 - enum: 23 - - samsung,exynos4210-mct 24 - - samsung,exynos4412-mct 22 + oneOf: 23 + - enum: 24 + - samsung,exynos4210-mct 25 + - samsung,exynos4412-mct 26 + - items: 27 + - enum: 28 + - samsung,exynos3250-mct 29 + - samsung,exynos5250-mct 30 + - samsung,exynos5260-mct 31 + - samsung,exynos5420-mct 32 + - samsung,exynos5433-mct 33 + - samsung,exynos850-mct 34 + - tesla,fsd-mct 35 + - const: samsung,exynos4210-mct 25 36 26 37 clocks: 27 38 maxItems: 2 ··· 72 61 - clocks 73 62 - interrupts 74 63 - reg 64 + 65 + allOf: 66 + - if: 67 + properties: 68 + compatible: 69 + contains: 70 + const: samsung,exynos3250-mct 71 + then: 72 + properties: 73 + interrupts: 74 + minItems: 8 75 + maxItems: 8 76 + 77 + - if: 78 + properties: 79 + compatible: 80 + contains: 81 + const: samsung,exynos5250-mct 82 + then: 83 + properties: 84 + interrupts: 85 + minItems: 6 86 + maxItems: 6 87 + 88 + - if: 89 + properties: 90 + compatible: 91 + contains: 92 + enum: 93 + - samsung,exynos5260-mct 94 + - samsung,exynos5420-mct 95 + - samsung,exynos5433-mct 96 + - samsung,exynos850-mct 97 + then: 98 + properties: 99 + interrupts: 100 + minItems: 12 101 + maxItems: 12 102 + 103 + - if: 104 + properties: 105 + compatible: 106 + contains: 107 + enum: 108 + - tesla,fsd-mct 109 + then: 110 + properties: 111 + interrupts: 112 + minItems: 16 113 + maxItems: 16 75 114 76 115 additionalProperties: false 77 116
+4
MAINTAINERS
··· 1837 1837 F: Documentation/devicetree/bindings/clock/apple,nco.yaml 1838 1838 F: Documentation/devicetree/bindings/i2c/apple,i2c.yaml 1839 1839 F: Documentation/devicetree/bindings/interrupt-controller/apple,* 1840 + F: Documentation/devicetree/bindings/iommu/apple,sart.yaml 1840 1841 F: Documentation/devicetree/bindings/mailbox/apple,mailbox.yaml 1842 + F: Documentation/devicetree/bindings/nvme/apple,nvme-ans.yaml 1841 1843 F: Documentation/devicetree/bindings/pci/apple,pcie.yaml 1842 1844 F: Documentation/devicetree/bindings/pinctrl/apple,pinctrl.yaml 1843 1845 F: Documentation/devicetree/bindings/power/apple* ··· 1850 1848 F: drivers/i2c/busses/i2c-pasemi-platform.c 1851 1849 F: drivers/irqchip/irq-apple-aic.c 1852 1850 F: drivers/mailbox/apple-mailbox.c 1851 + F: drivers/nvme/host/apple.c 1853 1852 F: drivers/pinctrl/pinctrl-apple-gpio.c 1854 1853 F: drivers/soc/apple/* 1855 1854 F: drivers/watchdog/apple_wdt.c 1856 1855 F: include/dt-bindings/interrupt-controller/apple-aic.h 1857 1856 F: include/dt-bindings/pinctrl/apple.h 1858 1857 F: include/linux/apple-mailbox.h 1858 + F: include/linux/soc/apple/* 1859 1859 1860 1860 ARM/ARTPEC MACHINE SUPPORT 1861 1861 M: Jesper Nilsson <jesper.nilsson@axis.com>
+11
drivers/bus/Kconfig
··· 152 152 Interface 2, which can be used to connect things like NAND Flash, 153 153 SRAM, ethernet adapters, FPGAs and LCD displays. 154 154 155 + config QCOM_SSC_BLOCK_BUS 156 + bool "Qualcomm SSC Block Bus Init Driver" 157 + depends on ARCH_QCOM 158 + help 159 + Say y here to enable support for initializing the bus that connects 160 + the SSC block's internal bus to the cNoC (configuration NoC) on 161 + (some) qcom SoCs. 162 + The SSC (Snapdragon Sensor Core) block contains a gpio controller, 163 + i2c/spi/uart controllers, a hexagon core, and a clock controller 164 + which provides clocks for the above. 165 + 155 166 config SUN50I_DE2_BUS 156 167 bool "Allwinner A64 DE2 Bus Driver" 157 168 default ARM64
+1
drivers/bus/Makefile
··· 25 25 26 26 obj-$(CONFIG_OMAP_OCP2SCP) += omap-ocp2scp.o 27 27 obj-$(CONFIG_QCOM_EBI2) += qcom-ebi2.o 28 + obj-$(CONFIG_QCOM_SSC_BLOCK_BUS) += qcom-ssc-block-bus.o 28 29 obj-$(CONFIG_SUN50I_DE2_BUS) += sun50i-de2.o 29 30 obj-$(CONFIG_SUNXI_RSB) += sunxi-rsb.o 30 31 obj-$(CONFIG_OF) += simple-pm-bus.o
-1
drivers/bus/brcmstb_gisb.c
··· 536 536 .name = "brcm-gisb-arb", 537 537 .of_match_table = brcmstb_gisb_arb_of_match, 538 538 .pm = &brcmstb_gisb_arb_pm_ops, 539 - .suppress_bind_attrs = true, 540 539 }, 541 540 }; 542 541
+389
drivers/bus/qcom-ssc-block-bus.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + // Copyright (c) 2021, Michael Srba 3 + 4 + #include <linux/clk.h> 5 + #include <linux/delay.h> 6 + #include <linux/io.h> 7 + #include <linux/mfd/syscon.h> 8 + #include <linux/module.h> 9 + #include <linux/of_platform.h> 10 + #include <linux/platform_device.h> 11 + #include <linux/pm_clock.h> 12 + #include <linux/pm_domain.h> 13 + #include <linux/pm_runtime.h> 14 + #include <linux/regmap.h> 15 + #include <linux/reset.h> 16 + 17 + /* AXI Halt Register Offsets */ 18 + #define AXI_HALTREQ_REG 0x0 19 + #define AXI_HALTACK_REG 0x4 20 + #define AXI_IDLE_REG 0x8 21 + 22 + #define SSCAON_CONFIG0_CLAMP_EN_OVRD BIT(4) 23 + #define SSCAON_CONFIG0_CLAMP_EN_OVRD_VAL BIT(5) 24 + 25 + static const char *const qcom_ssc_block_pd_names[] = { 26 + "ssc_cx", 27 + "ssc_mx" 28 + }; 29 + 30 + struct qcom_ssc_block_bus_data { 31 + const char *const *pd_names; 32 + struct device *pds[ARRAY_SIZE(qcom_ssc_block_pd_names)]; 33 + char __iomem *reg_mpm_sscaon_config0; 34 + char __iomem *reg_mpm_sscaon_config1; 35 + struct regmap *halt_map; 36 + struct clk *xo_clk; 37 + struct clk *aggre2_clk; 38 + struct clk *gcc_im_sleep_clk; 39 + struct clk *aggre2_north_clk; 40 + struct clk *ssc_xo_clk; 41 + struct clk *ssc_ahbs_clk; 42 + struct reset_control *ssc_bcr; 43 + struct reset_control *ssc_reset; 44 + u32 ssc_axi_halt; 45 + int num_pds; 46 + }; 47 + 48 + static void reg32_set_bits(char __iomem *reg, u32 value) 49 + { 50 + u32 tmp = ioread32(reg); 51 + 52 + iowrite32(tmp | value, reg); 53 + } 54 + 55 + static void reg32_clear_bits(char __iomem *reg, u32 value) 56 + { 57 + u32 tmp = ioread32(reg); 58 + 59 + iowrite32(tmp & (~value), reg); 60 + } 61 + 62 + static int qcom_ssc_block_bus_init(struct device *dev) 63 + { 64 + int ret; 65 + 66 + struct qcom_ssc_block_bus_data *data = dev_get_drvdata(dev); 67 + 68 + ret = clk_prepare_enable(data->xo_clk); 69 + if (ret) { 70 + dev_err(dev, "error enabling xo_clk: %d\n", ret); 71 + goto err_xo_clk; 72 + } 
73 + 74 + ret = clk_prepare_enable(data->aggre2_clk); 75 + if (ret) { 76 + dev_err(dev, "error enabling aggre2_clk: %d\n", ret); 77 + goto err_aggre2_clk; 78 + } 79 + 80 + ret = clk_prepare_enable(data->gcc_im_sleep_clk); 81 + if (ret) { 82 + dev_err(dev, "error enabling gcc_im_sleep_clk: %d\n", ret); 83 + goto err_gcc_im_sleep_clk; 84 + } 85 + 86 + /* 87 + * We need to intervene here because the HW logic driving these signals cannot handle 88 + * initialization after power collapse by itself. 89 + */ 90 + reg32_clear_bits(data->reg_mpm_sscaon_config0, 91 + SSCAON_CONFIG0_CLAMP_EN_OVRD | SSCAON_CONFIG0_CLAMP_EN_OVRD_VAL); 92 + /* override few_ack/rest_ack */ 93 + reg32_clear_bits(data->reg_mpm_sscaon_config1, BIT(31)); 94 + 95 + ret = clk_prepare_enable(data->aggre2_north_clk); 96 + if (ret) { 97 + dev_err(dev, "error enabling aggre2_north_clk: %d\n", ret); 98 + goto err_aggre2_north_clk; 99 + } 100 + 101 + ret = reset_control_deassert(data->ssc_reset); 102 + if (ret) { 103 + dev_err(dev, "error deasserting ssc_reset: %d\n", ret); 104 + goto err_ssc_reset; 105 + } 106 + 107 + ret = reset_control_deassert(data->ssc_bcr); 108 + if (ret) { 109 + dev_err(dev, "error deasserting ssc_bcr: %d\n", ret); 110 + goto err_ssc_bcr; 111 + } 112 + 113 + regmap_write(data->halt_map, data->ssc_axi_halt + AXI_HALTREQ_REG, 0); 114 + 115 + ret = clk_prepare_enable(data->ssc_xo_clk); 116 + if (ret) { 117 + dev_err(dev, "error enabling ssc_xo_clk: %d\n", ret); 118 + goto err_ssc_xo_clk; 119 + } 120 + 121 + ret = clk_prepare_enable(data->ssc_ahbs_clk); 122 + if (ret) { 123 + dev_err(dev, "error enabling ssc_ahbs_clk: %d\n", ret); 124 + goto err_ssc_ahbs_clk; 125 + } 126 + 127 + return 0; 128 + 129 + err_ssc_ahbs_clk: 130 + clk_disable(data->ssc_xo_clk); 131 + 132 + err_ssc_xo_clk: 133 + regmap_write(data->halt_map, data->ssc_axi_halt + AXI_HALTREQ_REG, 1); 134 + 135 + reset_control_assert(data->ssc_bcr); 136 + 137 + err_ssc_bcr: 138 + reset_control_assert(data->ssc_reset); 139 + 
140 + err_ssc_reset: 141 + clk_disable(data->aggre2_north_clk); 142 + 143 + err_aggre2_north_clk: 144 + reg32_set_bits(data->reg_mpm_sscaon_config0, BIT(4) | BIT(5)); 145 + reg32_set_bits(data->reg_mpm_sscaon_config1, BIT(31)); 146 + 147 + clk_disable(data->gcc_im_sleep_clk); 148 + 149 + err_gcc_im_sleep_clk: 150 + clk_disable(data->aggre2_clk); 151 + 152 + err_aggre2_clk: 153 + clk_disable(data->xo_clk); 154 + 155 + err_xo_clk: 156 + return ret; 157 + } 158 + 159 + static void qcom_ssc_block_bus_deinit(struct device *dev) 160 + { 161 + int ret; 162 + 163 + struct qcom_ssc_block_bus_data *data = dev_get_drvdata(dev); 164 + 165 + clk_disable(data->ssc_xo_clk); 166 + clk_disable(data->ssc_ahbs_clk); 167 + 168 + ret = reset_control_assert(data->ssc_bcr); 169 + if (ret) 170 + dev_err(dev, "error asserting ssc_bcr: %d\n", ret); 171 + 172 + regmap_write(data->halt_map, data->ssc_axi_halt + AXI_HALTREQ_REG, 1); 173 + 174 + reg32_set_bits(data->reg_mpm_sscaon_config1, BIT(31)); 175 + reg32_set_bits(data->reg_mpm_sscaon_config0, BIT(4) | BIT(5)); 176 + 177 + ret = reset_control_assert(data->ssc_reset); 178 + if (ret) 179 + dev_err(dev, "error asserting ssc_reset: %d\n", ret); 180 + 181 + clk_disable(data->gcc_im_sleep_clk); 182 + 183 + clk_disable(data->aggre2_north_clk); 184 + 185 + clk_disable(data->aggre2_clk); 186 + clk_disable(data->xo_clk); 187 + } 188 + 189 + static int qcom_ssc_block_bus_pds_attach(struct device *dev, struct device **pds, 190 + const char *const *pd_names, size_t num_pds) 191 + { 192 + int ret; 193 + int i; 194 + 195 + for (i = 0; i < num_pds; i++) { 196 + pds[i] = dev_pm_domain_attach_by_name(dev, pd_names[i]); 197 + if (IS_ERR_OR_NULL(pds[i])) { 198 + ret = PTR_ERR(pds[i]) ? 
: -ENODATA; 199 + goto unroll_attach; 200 + } 201 + } 202 + 203 + return num_pds; 204 + 205 + unroll_attach: 206 + for (i--; i >= 0; i--) 207 + dev_pm_domain_detach(pds[i], false); 208 + 209 + return ret; 210 + }; 211 + 212 + static void qcom_ssc_block_bus_pds_detach(struct device *dev, struct device **pds, size_t num_pds) 213 + { 214 + int i; 215 + 216 + for (i = 0; i < num_pds; i++) 217 + dev_pm_domain_detach(pds[i], false); 218 + } 219 + 220 + static int qcom_ssc_block_bus_pds_enable(struct device **pds, size_t num_pds) 221 + { 222 + int ret; 223 + int i; 224 + 225 + for (i = 0; i < num_pds; i++) { 226 + dev_pm_genpd_set_performance_state(pds[i], INT_MAX); 227 + ret = pm_runtime_get_sync(pds[i]); 228 + if (ret < 0) 229 + goto unroll_pd_votes; 230 + } 231 + 232 + return 0; 233 + 234 + unroll_pd_votes: 235 + for (i--; i >= 0; i--) { 236 + dev_pm_genpd_set_performance_state(pds[i], 0); 237 + pm_runtime_put(pds[i]); 238 + } 239 + 240 + return ret; 241 + }; 242 + 243 + static void qcom_ssc_block_bus_pds_disable(struct device **pds, size_t num_pds) 244 + { 245 + int i; 246 + 247 + for (i = 0; i < num_pds; i++) { 248 + dev_pm_genpd_set_performance_state(pds[i], 0); 249 + pm_runtime_put(pds[i]); 250 + } 251 + } 252 + 253 + static int qcom_ssc_block_bus_probe(struct platform_device *pdev) 254 + { 255 + struct qcom_ssc_block_bus_data *data; 256 + struct device_node *np = pdev->dev.of_node; 257 + struct of_phandle_args halt_args; 258 + struct resource *res; 259 + int ret; 260 + 261 + data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL); 262 + if (!data) 263 + return -ENOMEM; 264 + 265 + platform_set_drvdata(pdev, data); 266 + 267 + data->pd_names = qcom_ssc_block_pd_names; 268 + data->num_pds = ARRAY_SIZE(qcom_ssc_block_pd_names); 269 + 270 + /* power domains */ 271 + ret = qcom_ssc_block_bus_pds_attach(&pdev->dev, data->pds, data->pd_names, data->num_pds); 272 + if (ret < 0) 273 + return dev_err_probe(&pdev->dev, ret, "error when attaching power domains\n"); 274 + 
275 + ret = qcom_ssc_block_bus_pds_enable(data->pds, data->num_pds); 276 + if (ret < 0) 277 + return dev_err_probe(&pdev->dev, ret, "error when enabling power domains\n"); 278 + 279 + /* low level overrides for when the HW logic doesn't "just work" */ 280 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "mpm_sscaon_config0"); 281 + data->reg_mpm_sscaon_config0 = devm_ioremap_resource(&pdev->dev, res); 282 + if (IS_ERR(data->reg_mpm_sscaon_config0)) 283 + return dev_err_probe(&pdev->dev, PTR_ERR(data->reg_mpm_sscaon_config0), 284 + "Failed to ioremap mpm_sscaon_config0\n"); 285 + 286 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "mpm_sscaon_config1"); 287 + data->reg_mpm_sscaon_config1 = devm_ioremap_resource(&pdev->dev, res); 288 + if (IS_ERR(data->reg_mpm_sscaon_config1)) 289 + return dev_err_probe(&pdev->dev, PTR_ERR(data->reg_mpm_sscaon_config1), 290 + "Failed to ioremap mpm_sscaon_config1\n"); 291 + 292 + /* resets */ 293 + data->ssc_bcr = devm_reset_control_get_exclusive(&pdev->dev, "ssc_bcr"); 294 + if (IS_ERR(data->ssc_bcr)) 295 + return dev_err_probe(&pdev->dev, PTR_ERR(data->ssc_bcr), 296 + "Failed to acquire reset: scc_bcr\n"); 297 + 298 + data->ssc_reset = devm_reset_control_get_exclusive(&pdev->dev, "ssc_reset"); 299 + if (IS_ERR(data->ssc_reset)) 300 + return dev_err_probe(&pdev->dev, PTR_ERR(data->ssc_reset), 301 + "Failed to acquire reset: ssc_reset:\n"); 302 + 303 + /* clocks */ 304 + data->xo_clk = devm_clk_get(&pdev->dev, "xo"); 305 + if (IS_ERR(data->xo_clk)) 306 + return dev_err_probe(&pdev->dev, PTR_ERR(data->xo_clk), 307 + "Failed to get clock: xo\n"); 308 + 309 + data->aggre2_clk = devm_clk_get(&pdev->dev, "aggre2"); 310 + if (IS_ERR(data->aggre2_clk)) 311 + return dev_err_probe(&pdev->dev, PTR_ERR(data->aggre2_clk), 312 + "Failed to get clock: aggre2\n"); 313 + 314 + data->gcc_im_sleep_clk = devm_clk_get(&pdev->dev, "gcc_im_sleep"); 315 + if (IS_ERR(data->gcc_im_sleep_clk)) 316 + return dev_err_probe(&pdev->dev, 
PTR_ERR(data->gcc_im_sleep_clk), 317 + "Failed to get clock: gcc_im_sleep\n"); 318 + 319 + data->aggre2_north_clk = devm_clk_get(&pdev->dev, "aggre2_north"); 320 + if (IS_ERR(data->aggre2_north_clk)) 321 + return dev_err_probe(&pdev->dev, PTR_ERR(data->aggre2_north_clk), 322 + "Failed to get clock: aggre2_north\n"); 323 + 324 + data->ssc_xo_clk = devm_clk_get(&pdev->dev, "ssc_xo"); 325 + if (IS_ERR(data->ssc_xo_clk)) 326 + return dev_err_probe(&pdev->dev, PTR_ERR(data->ssc_xo_clk), 327 + "Failed to get clock: ssc_xo\n"); 328 + 329 + data->ssc_ahbs_clk = devm_clk_get(&pdev->dev, "ssc_ahbs"); 330 + if (IS_ERR(data->ssc_ahbs_clk)) 331 + return dev_err_probe(&pdev->dev, PTR_ERR(data->ssc_ahbs_clk), 332 + "Failed to get clock: ssc_ahbs\n"); 333 + 334 + ret = of_parse_phandle_with_fixed_args(pdev->dev.of_node, "qcom,halt-regs", 1, 0, 335 + &halt_args); 336 + if (ret < 0) 337 + return dev_err_probe(&pdev->dev, ret, "Failed to parse qcom,halt-regs\n"); 338 + 339 + data->halt_map = syscon_node_to_regmap(halt_args.np); 340 + of_node_put(halt_args.np); 341 + if (IS_ERR(data->halt_map)) 342 + return PTR_ERR(data->halt_map); 343 + 344 + data->ssc_axi_halt = halt_args.args[0]; 345 + 346 + qcom_ssc_block_bus_init(&pdev->dev); 347 + 348 + of_platform_populate(np, NULL, NULL, &pdev->dev); 349 + 350 + return 0; 351 + } 352 + 353 + static int qcom_ssc_block_bus_remove(struct platform_device *pdev) 354 + { 355 + struct qcom_ssc_block_bus_data *data = platform_get_drvdata(pdev); 356 + 357 + qcom_ssc_block_bus_deinit(&pdev->dev); 358 + 359 + iounmap(data->reg_mpm_sscaon_config0); 360 + iounmap(data->reg_mpm_sscaon_config1); 361 + 362 + qcom_ssc_block_bus_pds_disable(data->pds, data->num_pds); 363 + qcom_ssc_block_bus_pds_detach(&pdev->dev, data->pds, data->num_pds); 364 + pm_runtime_disable(&pdev->dev); 365 + pm_clk_destroy(&pdev->dev); 366 + 367 + return 0; 368 + } 369 + 370 + static const struct of_device_id qcom_ssc_block_bus_of_match[] = { 371 + { .compatible = "qcom,ssc-block-bus", 
}, 372 + { /* sentinel */ } 373 + }; 374 + MODULE_DEVICE_TABLE(of, qcom_ssc_block_bus_of_match); 375 + 376 + static struct platform_driver qcom_ssc_block_bus_driver = { 377 + .probe = qcom_ssc_block_bus_probe, 378 + .remove = qcom_ssc_block_bus_remove, 379 + .driver = { 380 + .name = "qcom-ssc-block-bus", 381 + .of_match_table = qcom_ssc_block_bus_of_match, 382 + }, 383 + }; 384 + 385 + module_platform_driver(qcom_ssc_block_bus_driver); 386 + 387 + MODULE_DESCRIPTION("A driver for handling the init sequence needed for accessing the SSC block on (some) qcom SoCs over AHB"); 388 + MODULE_AUTHOR("Michael Srba <Michael.Srba@seznam.cz>"); 389 + MODULE_LICENSE("GPL v2");
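The qcom-ssc-block-bus probe/attach paths above lean on a common kernel idiom: acquire resources in order and, on the first failure, unwind the already-acquired ones in reverse (the `unroll_attach`/`unroll_pd_votes` labels). A minimal userspace sketch of that idiom; `acquire()`/`release()` and `fail_at` are illustrative stand-ins, not kernel APIs:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Illustrative model of the unwind-on-error idiom used in
 * qcom_ssc_block_bus_pds_attach()/_pds_enable(): if step i fails,
 * steps i-1..0 are undone in reverse order. fail_at injects a
 * failure so the rollback can be exercised.
 */
static int acquire(int id, int fail_at, int *acquired)
{
	if (id == fail_at)
		return -1;	/* e.g. -ENODEV in the real driver */
	acquired[id] = 1;
	return 0;
}

static void release(int id, int *acquired)
{
	acquired[id] = 0;
}

static int acquire_all(size_t n, int fail_at, int *acquired)
{
	size_t i;
	int ret = 0;

	for (i = 0; i < n; i++) {
		ret = acquire((int)i, fail_at, acquired);
		if (ret < 0)
			goto unroll;
	}
	return 0;

unroll:
	while (i-- > 0)		/* mirrors: for (i--; i >= 0; i--) */
		release((int)i, acquired);
	return ret;
}
```

The driver applies the same shape twice, once for attaching power domains and once for the performance-state votes, so each failure point leaves the device in the state it started from.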
+2 -2
drivers/bus/ti-sysc.c
···
3049 3049      SOC_FLAG("AM43*", SOC_AM4),
3050 3050      SOC_FLAG("DRA7*", SOC_DRA7),
3051 3051
3052      -    { /* sentinel */ },
     3052 +    { /* sentinel */ }
3053 3053  };
3054 3054
3055 3055  /*
···
3070 3070      SOC_FLAG("OMAP3615/AM3715", DIS_IVA),
3071 3071      SOC_FLAG("OMAP3621", DIS_ISP),
3072 3072
3073      -    { /* sentinel */ },
     3073 +    { /* sentinel */ }
3074 3074  };
3075 3075
3076 3076  static int sysc_add_disabled(unsigned long base)
+14 -10
drivers/firmware/arm_ffa/driver.c
··· 398 398 if (ret.a0 == FFA_ERROR) 399 399 return ffa_to_linux_errno((int)ret.a2); 400 400 401 - if (ret.a0 != FFA_SUCCESS) 401 + if (ret.a0 == FFA_SUCCESS) { 402 + if (handle) 403 + *handle = PACK_HANDLE(ret.a2, ret.a3); 404 + } else if (ret.a0 == FFA_MEM_FRAG_RX) { 405 + if (handle) 406 + *handle = PACK_HANDLE(ret.a1, ret.a2); 407 + } else { 402 408 return -EOPNOTSUPP; 403 - 404 - if (handle) 405 - *handle = PACK_HANDLE(ret.a2, ret.a3); 409 + } 406 410 407 411 return frag_len; 408 412 } ··· 430 426 if (ret.a0 == FFA_ERROR) 431 427 return ffa_to_linux_errno((int)ret.a2); 432 428 433 - if (ret.a0 != FFA_MEM_FRAG_RX) 434 - return -EOPNOTSUPP; 429 + if (ret.a0 == FFA_MEM_FRAG_RX) 430 + return ret.a3; 431 + else if (ret.a0 == FFA_SUCCESS) 432 + return 0; 435 433 436 - return ret.a3; 434 + return -EOPNOTSUPP; 437 435 } 438 436 439 437 static int ··· 588 582 return -ENODEV; 589 583 } 590 584 591 - count = ffa_partition_probe(&uuid_null, &pbuf); 585 + count = ffa_partition_probe(&uuid, &pbuf); 592 586 if (count <= 0) 593 587 return -ENOENT; 594 588 ··· 694 688 __func__, tpbuf->id); 695 689 continue; 696 690 } 697 - 698 - ffa_dev_set_drvdata(ffa_dev, drv_info); 699 691 } 700 692 kfree(pbuf); 701 693 }
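The fragmented-transmit path above now recognises `FFA_MEM_FRAG_RX` replies and reassembles the 64-bit memory handle from two 32-bit SMC return registers via `PACK_HANDLE`. A minimal sketch of that packing, under the assumption (matching the driver's use) that the first argument supplies the low word and the second the high word:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of how a 64-bit FF-A memory handle is rebuilt from two
 * 32-bit SMC return registers: low word from the first argument,
 * high word from the second (an assumption mirroring how the hunk
 * above uses PACK_HANDLE, not a copy of the kernel macro).
 */
static uint64_t pack_handle(uint32_t low, uint32_t high)
{
	return ((uint64_t)high << 32) | low;
}
```

Note the hunk also shows why the split matters: on `FFA_SUCCESS` the handle halves arrive in `a2`/`a3`, but on `FFA_MEM_FRAG_RX` they shift to `a1`/`a2`.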
+1
drivers/firmware/arm_scmi/Kconfig
···
59 59      depends on OPTEE=y || OPTEE=ARM_SCMI_PROTOCOL
60 60      select ARM_SCMI_HAVE_TRANSPORT
61 61      select ARM_SCMI_HAVE_SHMEM
   62 +    select ARM_SCMI_HAVE_MSG
62 63      default y
63 64      help
64 65        This enables the OP-TEE service based transport for SCMI.
+38 -8
drivers/firmware/arm_scmi/base.c
··· 178 178 __le32 *num_skip, *num_ret; 179 179 u32 tot_num_ret = 0, loop_num_ret; 180 180 struct device *dev = ph->dev; 181 + struct scmi_revision_info *rev = ph->get_priv(ph); 181 182 182 183 ret = ph->xops->xfer_get_init(ph, BASE_DISCOVER_LIST_PROTOCOLS, 183 184 sizeof(*num_skip), 0, &t); ··· 190 189 list = t->rx.buf + sizeof(*num_ret); 191 190 192 191 do { 192 + size_t real_list_sz; 193 + u32 calc_list_sz; 194 + 193 195 /* Set the number of protocols to be skipped/already read */ 194 196 *num_skip = cpu_to_le32(tot_num_ret); 195 197 ··· 201 197 break; 202 198 203 199 loop_num_ret = le32_to_cpu(*num_ret); 204 - if (tot_num_ret + loop_num_ret > MAX_PROTOCOLS_IMP) { 205 - dev_err(dev, "No. of Protocol > MAX_PROTOCOLS_IMP"); 200 + if (!loop_num_ret) 201 + break; 202 + 203 + if (loop_num_ret > rev->num_protocols - tot_num_ret) { 204 + dev_err(dev, 205 + "No. Returned protocols > Total protocols.\n"); 206 + break; 207 + } 208 + 209 + if (t->rx.len < (sizeof(u32) * 2)) { 210 + dev_err(dev, "Truncated reply - rx.len:%zd\n", 211 + t->rx.len); 212 + ret = -EPROTO; 213 + break; 214 + } 215 + 216 + real_list_sz = t->rx.len - sizeof(u32); 217 + calc_list_sz = (1 + (loop_num_ret - 1) / sizeof(u32)) * 218 + sizeof(u32); 219 + if (calc_list_sz != real_list_sz) { 220 + dev_err(dev, 221 + "Malformed reply - real_sz:%zd calc_sz:%u\n", 222 + real_list_sz, calc_list_sz); 223 + ret = -EPROTO; 206 224 break; 207 225 } 208 226 ··· 234 208 tot_num_ret += loop_num_ret; 235 209 236 210 ph->xops->reset_rx_to_maxsz(ph, t); 237 - } while (loop_num_ret); 211 + } while (tot_num_ret < rev->num_protocols); 238 212 239 213 ph->xops->xfer_put(ph, t); 240 214 ··· 377 351 if (ret) 378 352 return ret; 379 353 380 - prot_imp = devm_kcalloc(dev, MAX_PROTOCOLS_IMP, sizeof(u8), GFP_KERNEL); 381 - if (!prot_imp) 382 - return -ENOMEM; 383 - 384 354 rev->major_ver = PROTOCOL_REV_MAJOR(version), 385 355 rev->minor_ver = PROTOCOL_REV_MINOR(version); 386 356 ph->set_priv(ph, rev); 387 357 388 - 
scmi_base_attributes_get(ph); 358 + ret = scmi_base_attributes_get(ph); 359 + if (ret) 360 + return ret; 361 + 362 + prot_imp = devm_kcalloc(dev, rev->num_protocols, sizeof(u8), 363 + GFP_KERNEL); 364 + if (!prot_imp) 365 + return -ENOMEM; 366 + 389 367 scmi_base_vendor_id_get(ph, false); 390 368 scmi_base_vendor_id_get(ph, true); 391 369 scmi_base_implementation_version_get(ph);
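The base.c hunk above hardens `BASE_DISCOVER_LIST_PROTOCOLS` by checking the reply payload against the count the platform claims: each protocol ID is one byte, packed four per 32-bit word, so a reply carrying `loop_num_ret` IDs must be exactly that count rounded up to a whole word. A small model of the `calc_list_sz` arithmetic; note the caller must pass a count of at least 1, since the kernel loop breaks out earlier when `loop_num_ret` is zero:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Model of the truncated/malformed-reply check: protocol IDs are
 * one byte each, four per 32-bit word, so the expected payload is
 * num_ret rounded up to a multiple of sizeof(u32). Requires
 * num_ret >= 1 (the (num_ret - 1) term would otherwise underflow).
 */
static uint32_t calc_list_sz(uint32_t num_ret)
{
	return (uint32_t)((1 + (num_ret - 1) / sizeof(uint32_t)) *
			  sizeof(uint32_t));
}
```

Comparing this computed size against the actual `rx.len` (minus the leading count word) lets the driver reject firmware replies whose length and claimed count disagree, instead of reading past the buffer.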
+273 -80
drivers/firmware/arm_scmi/clock.c
··· 2 2 /* 3 3 * System Control and Management Interface (SCMI) Clock Protocol 4 4 * 5 - * Copyright (C) 2018-2021 ARM Ltd. 5 + * Copyright (C) 2018-2022 ARM Ltd. 6 6 */ 7 7 8 8 #include <linux/module.h> 9 + #include <linux/limits.h> 9 10 #include <linux/sort.h> 10 11 11 - #include "common.h" 12 + #include "protocols.h" 13 + #include "notify.h" 12 14 13 15 enum scmi_clock_protocol_cmd { 14 16 CLOCK_ATTRIBUTES = 0x3, ··· 18 16 CLOCK_RATE_SET = 0x5, 19 17 CLOCK_RATE_GET = 0x6, 20 18 CLOCK_CONFIG_SET = 0x7, 19 + CLOCK_NAME_GET = 0x8, 20 + CLOCK_RATE_NOTIFY = 0x9, 21 + CLOCK_RATE_CHANGE_REQUESTED_NOTIFY = 0xA, 21 22 }; 22 23 23 24 struct scmi_msg_resp_clock_protocol_attributes { ··· 32 27 struct scmi_msg_resp_clock_attributes { 33 28 __le32 attributes; 34 29 #define CLOCK_ENABLE BIT(0) 35 - u8 name[SCMI_MAX_STR_SIZE]; 30 + #define SUPPORTS_RATE_CHANGED_NOTIF(x) ((x) & BIT(31)) 31 + #define SUPPORTS_RATE_CHANGE_REQUESTED_NOTIF(x) ((x) & BIT(30)) 32 + #define SUPPORTS_EXTENDED_NAMES(x) ((x) & BIT(29)) 33 + u8 name[SCMI_SHORT_NAME_MAX_SIZE]; 36 34 __le32 clock_enable_latency; 37 35 }; 38 36 ··· 76 68 __le32 value_high; 77 69 }; 78 70 71 + struct scmi_msg_resp_set_rate_complete { 72 + __le32 id; 73 + __le32 rate_low; 74 + __le32 rate_high; 75 + }; 76 + 77 + struct scmi_msg_clock_rate_notify { 78 + __le32 clk_id; 79 + __le32 notify_enable; 80 + }; 81 + 82 + struct scmi_clock_rate_notify_payld { 83 + __le32 agent_id; 84 + __le32 clock_id; 85 + __le32 rate_low; 86 + __le32 rate_high; 87 + }; 88 + 79 89 struct clock_info { 80 90 u32 version; 81 91 int num_clocks; 82 92 int max_async_req; 83 93 atomic_t cur_async_req; 84 94 struct scmi_clock_info *clk; 95 + }; 96 + 97 + static enum scmi_clock_protocol_cmd evt_2_cmd[] = { 98 + CLOCK_RATE_NOTIFY, 99 + CLOCK_RATE_CHANGE_REQUESTED_NOTIFY, 85 100 }; 86 101 87 102 static int ··· 133 102 } 134 103 135 104 static int scmi_clock_attributes_get(const struct scmi_protocol_handle *ph, 136 - u32 clk_id, struct scmi_clock_info *clk) 105 + 
u32 clk_id, struct scmi_clock_info *clk, 106 + u32 version) 137 107 { 138 108 int ret; 109 + u32 attributes; 139 110 struct scmi_xfer *t; 140 111 struct scmi_msg_resp_clock_attributes *attr; 141 112 ··· 151 118 152 119 ret = ph->xops->do_xfer(ph, t); 153 120 if (!ret) { 121 + u32 latency = 0; 122 + attributes = le32_to_cpu(attr->attributes); 154 123 strlcpy(clk->name, attr->name, SCMI_MAX_STR_SIZE); 155 - /* Is optional field clock_enable_latency provided ? */ 156 - if (t->rx.len == sizeof(*attr)) 157 - clk->enable_latency = 158 - le32_to_cpu(attr->clock_enable_latency); 159 - } else { 160 - clk->name[0] = '\0'; 124 + /* clock_enable_latency field is present only since SCMI v3.1 */ 125 + if (PROTOCOL_REV_MAJOR(version) >= 0x2) 126 + latency = le32_to_cpu(attr->clock_enable_latency); 127 + clk->enable_latency = latency ? : U32_MAX; 161 128 } 162 129 163 130 ph->xops->xfer_put(ph, t); 131 + 132 + /* 133 + * If supported overwrite short name with the extended one; 134 + * on error just carry on and use already provided short name. 
135 + */ 136 + if (!ret && PROTOCOL_REV_MAJOR(version) >= 0x2) { 137 + if (SUPPORTS_EXTENDED_NAMES(attributes)) 138 + ph->hops->extended_name_get(ph, CLOCK_NAME_GET, clk_id, 139 + clk->name, 140 + SCMI_MAX_STR_SIZE); 141 + 142 + if (SUPPORTS_RATE_CHANGED_NOTIF(attributes)) 143 + clk->rate_changed_notifications = true; 144 + if (SUPPORTS_RATE_CHANGE_REQUESTED_NOTIF(attributes)) 145 + clk->rate_change_requested_notifications = true; 146 + } 147 + 164 148 return ret; 165 149 } 166 150 ··· 193 143 return 1; 194 144 } 195 145 146 + struct scmi_clk_ipriv { 147 + u32 clk_id; 148 + struct scmi_clock_info *clk; 149 + }; 150 + 151 + static void iter_clk_describe_prepare_message(void *message, 152 + const unsigned int desc_index, 153 + const void *priv) 154 + { 155 + struct scmi_msg_clock_describe_rates *msg = message; 156 + const struct scmi_clk_ipriv *p = priv; 157 + 158 + msg->id = cpu_to_le32(p->clk_id); 159 + /* Set the number of rates to be skipped/already read */ 160 + msg->rate_index = cpu_to_le32(desc_index); 161 + } 162 + 163 + static int 164 + iter_clk_describe_update_state(struct scmi_iterator_state *st, 165 + const void *response, void *priv) 166 + { 167 + u32 flags; 168 + struct scmi_clk_ipriv *p = priv; 169 + const struct scmi_msg_resp_clock_describe_rates *r = response; 170 + 171 + flags = le32_to_cpu(r->num_rates_flags); 172 + st->num_remaining = NUM_REMAINING(flags); 173 + st->num_returned = NUM_RETURNED(flags); 174 + p->clk->rate_discrete = RATE_DISCRETE(flags); 175 + 176 + return 0; 177 + } 178 + 179 + static int 180 + iter_clk_describe_process_response(const struct scmi_protocol_handle *ph, 181 + const void *response, 182 + struct scmi_iterator_state *st, void *priv) 183 + { 184 + int ret = 0; 185 + struct scmi_clk_ipriv *p = priv; 186 + const struct scmi_msg_resp_clock_describe_rates *r = response; 187 + 188 + if (!p->clk->rate_discrete) { 189 + switch (st->desc_index + st->loop_idx) { 190 + case 0: 191 + p->clk->range.min_rate = RATE_TO_U64(r->rate[0]); 
192 + break; 193 + case 1: 194 + p->clk->range.max_rate = RATE_TO_U64(r->rate[1]); 195 + break; 196 + case 2: 197 + p->clk->range.step_size = RATE_TO_U64(r->rate[2]); 198 + break; 199 + default: 200 + ret = -EINVAL; 201 + break; 202 + } 203 + } else { 204 + u64 *rate = &p->clk->list.rates[st->desc_index + st->loop_idx]; 205 + 206 + *rate = RATE_TO_U64(r->rate[st->loop_idx]); 207 + p->clk->list.num_rates++; 208 + //XXX dev_dbg(ph->dev, "Rate %llu Hz\n", *rate); 209 + } 210 + 211 + return ret; 212 + } 213 + 196 214 static int 197 215 scmi_clock_describe_rates_get(const struct scmi_protocol_handle *ph, u32 clk_id, 198 216 struct scmi_clock_info *clk) 199 217 { 200 - u64 *rate = NULL; 201 - int ret, cnt; 202 - bool rate_discrete = false; 203 - u32 tot_rate_cnt = 0, rates_flag; 204 - u16 num_returned, num_remaining; 205 - struct scmi_xfer *t; 206 - struct scmi_msg_clock_describe_rates *clk_desc; 207 - struct scmi_msg_resp_clock_describe_rates *rlist; 218 + int ret; 208 219 209 - ret = ph->xops->xfer_get_init(ph, CLOCK_DESCRIBE_RATES, 210 - sizeof(*clk_desc), 0, &t); 220 + void *iter; 221 + struct scmi_msg_clock_describe_rates *msg; 222 + struct scmi_iterator_ops ops = { 223 + .prepare_message = iter_clk_describe_prepare_message, 224 + .update_state = iter_clk_describe_update_state, 225 + .process_response = iter_clk_describe_process_response, 226 + }; 227 + struct scmi_clk_ipriv cpriv = { 228 + .clk_id = clk_id, 229 + .clk = clk, 230 + }; 231 + 232 + iter = ph->hops->iter_response_init(ph, &ops, SCMI_MAX_NUM_RATES, 233 + CLOCK_DESCRIBE_RATES, 234 + sizeof(*msg), &cpriv); 235 + if (IS_ERR(iter)) 236 + return PTR_ERR(iter); 237 + 238 + ret = ph->hops->iter_response_run(iter); 211 239 if (ret) 212 240 return ret; 213 241 214 - clk_desc = t->tx.buf; 215 - rlist = t->rx.buf; 216 - 217 - do { 218 - clk_desc->id = cpu_to_le32(clk_id); 219 - /* Set the number of rates to be skipped/already read */ 220 - clk_desc->rate_index = cpu_to_le32(tot_rate_cnt); 221 - 222 - ret = 
ph->xops->do_xfer(ph, t); 223 - if (ret) 224 - goto err; 225 - 226 - rates_flag = le32_to_cpu(rlist->num_rates_flags); 227 - num_remaining = NUM_REMAINING(rates_flag); 228 - rate_discrete = RATE_DISCRETE(rates_flag); 229 - num_returned = NUM_RETURNED(rates_flag); 230 - 231 - if (tot_rate_cnt + num_returned > SCMI_MAX_NUM_RATES) { 232 - dev_err(ph->dev, "No. of rates > MAX_NUM_RATES"); 233 - break; 234 - } 235 - 236 - if (!rate_discrete) { 237 - clk->range.min_rate = RATE_TO_U64(rlist->rate[0]); 238 - clk->range.max_rate = RATE_TO_U64(rlist->rate[1]); 239 - clk->range.step_size = RATE_TO_U64(rlist->rate[2]); 240 - dev_dbg(ph->dev, "Min %llu Max %llu Step %llu Hz\n", 241 - clk->range.min_rate, clk->range.max_rate, 242 - clk->range.step_size); 243 - break; 244 - } 245 - 246 - rate = &clk->list.rates[tot_rate_cnt]; 247 - for (cnt = 0; cnt < num_returned; cnt++, rate++) { 248 - *rate = RATE_TO_U64(rlist->rate[cnt]); 249 - dev_dbg(ph->dev, "Rate %llu Hz\n", *rate); 250 - } 251 - 252 - tot_rate_cnt += num_returned; 253 - 254 - ph->xops->reset_rx_to_maxsz(ph, t); 255 - /* 256 - * check for both returned and remaining to avoid infinite 257 - * loop due to buggy firmware 258 - */ 259 - } while (num_returned && num_remaining); 260 - 261 - if (rate_discrete && rate) { 262 - clk->list.num_rates = tot_rate_cnt; 263 - sort(clk->list.rates, tot_rate_cnt, sizeof(*rate), 264 - rate_cmp_func, NULL); 242 + if (!clk->rate_discrete) { 243 + dev_dbg(ph->dev, "Min %llu Max %llu Step %llu Hz\n", 244 + clk->range.min_rate, clk->range.max_rate, 245 + clk->range.step_size); 246 + } else if (clk->list.num_rates) { 247 + sort(clk->list.rates, clk->list.num_rates, 248 + sizeof(clk->list.rates[0]), rate_cmp_func, NULL); 265 249 } 266 250 267 - clk->rate_discrete = rate_discrete; 268 - 269 - err: 270 - ph->xops->xfer_put(ph, t); 271 251 return ret; 272 252 } 273 253 ··· 346 266 cfg->value_low = cpu_to_le32(rate & 0xffffffff); 347 267 cfg->value_high = cpu_to_le32(rate >> 32); 348 268 349 - if 
(flags & CLOCK_SET_ASYNC) 269 + if (flags & CLOCK_SET_ASYNC) { 350 270 ret = ph->xops->do_xfer_with_response(ph, t); 351 - else 271 + if (!ret) { 272 + struct scmi_msg_resp_set_rate_complete *resp; 273 + 274 + resp = t->rx.buf; 275 + if (le32_to_cpu(resp->id) == clk_id) 276 + dev_dbg(ph->dev, 277 + "Clk ID %d set async to %llu\n", clk_id, 278 + get_unaligned_le64(&resp->rate_low)); 279 + else 280 + ret = -EPROTO; 281 + } 282 + } else { 352 283 ret = ph->xops->do_xfer(ph, t); 284 + } 353 285 354 286 if (ci->max_async_req) 355 287 atomic_dec(&ci->cur_async_req); ··· 447 355 .disable_atomic = scmi_clock_disable_atomic, 448 356 }; 449 357 358 + static int scmi_clk_rate_notify(const struct scmi_protocol_handle *ph, 359 + u32 clk_id, int message_id, bool enable) 360 + { 361 + int ret; 362 + struct scmi_xfer *t; 363 + struct scmi_msg_clock_rate_notify *notify; 364 + 365 + ret = ph->xops->xfer_get_init(ph, message_id, sizeof(*notify), 0, &t); 366 + if (ret) 367 + return ret; 368 + 369 + notify = t->tx.buf; 370 + notify->clk_id = cpu_to_le32(clk_id); 371 + notify->notify_enable = enable ? 
cpu_to_le32(BIT(0)) : 0; 372 + 373 + ret = ph->xops->do_xfer(ph, t); 374 + 375 + ph->xops->xfer_put(ph, t); 376 + return ret; 377 + } 378 + 379 + static int scmi_clk_set_notify_enabled(const struct scmi_protocol_handle *ph, 380 + u8 evt_id, u32 src_id, bool enable) 381 + { 382 + int ret, cmd_id; 383 + 384 + if (evt_id >= ARRAY_SIZE(evt_2_cmd)) 385 + return -EINVAL; 386 + 387 + cmd_id = evt_2_cmd[evt_id]; 388 + ret = scmi_clk_rate_notify(ph, src_id, cmd_id, enable); 389 + if (ret) 390 + pr_debug("FAIL_ENABLED - evt[%X] dom[%d] - ret:%d\n", 391 + evt_id, src_id, ret); 392 + 393 + return ret; 394 + } 395 + 396 + static void *scmi_clk_fill_custom_report(const struct scmi_protocol_handle *ph, 397 + u8 evt_id, ktime_t timestamp, 398 + const void *payld, size_t payld_sz, 399 + void *report, u32 *src_id) 400 + { 401 + const struct scmi_clock_rate_notify_payld *p = payld; 402 + struct scmi_clock_rate_notif_report *r = report; 403 + 404 + if (sizeof(*p) != payld_sz || 405 + (evt_id != SCMI_EVENT_CLOCK_RATE_CHANGED && 406 + evt_id != SCMI_EVENT_CLOCK_RATE_CHANGE_REQUESTED)) 407 + return NULL; 408 + 409 + r->timestamp = timestamp; 410 + r->agent_id = le32_to_cpu(p->agent_id); 411 + r->clock_id = le32_to_cpu(p->clock_id); 412 + r->rate = get_unaligned_le64(&p->rate_low); 413 + *src_id = r->clock_id; 414 + 415 + return r; 416 + } 417 + 418 + static int scmi_clk_get_num_sources(const struct scmi_protocol_handle *ph) 419 + { 420 + struct clock_info *ci = ph->get_priv(ph); 421 + 422 + if (!ci) 423 + return -EINVAL; 424 + 425 + return ci->num_clocks; 426 + } 427 + 428 + static const struct scmi_event clk_events[] = { 429 + { 430 + .id = SCMI_EVENT_CLOCK_RATE_CHANGED, 431 + .max_payld_sz = sizeof(struct scmi_clock_rate_notify_payld), 432 + .max_report_sz = sizeof(struct scmi_clock_rate_notif_report), 433 + }, 434 + { 435 + .id = SCMI_EVENT_CLOCK_RATE_CHANGE_REQUESTED, 436 + .max_payld_sz = sizeof(struct scmi_clock_rate_notify_payld), 437 + .max_report_sz = sizeof(struct 
scmi_clock_rate_notif_report), 438 + }, 439 + }; 440 + 441 + static const struct scmi_event_ops clk_event_ops = { 442 + .get_num_sources = scmi_clk_get_num_sources, 443 + .set_notify_enabled = scmi_clk_set_notify_enabled, 444 + .fill_custom_report = scmi_clk_fill_custom_report, 445 + }; 446 + 447 + static const struct scmi_protocol_events clk_protocol_events = { 448 + .queue_sz = SCMI_PROTO_QUEUE_SZ, 449 + .ops = &clk_event_ops, 450 + .evts = clk_events, 451 + .num_events = ARRAY_SIZE(clk_events), 452 + }; 453 + 450 454 static int scmi_clock_protocol_init(const struct scmi_protocol_handle *ph) 451 455 { 452 456 u32 version; 453 457 int clkid, ret; 454 458 struct clock_info *cinfo; 455 459 456 - ph->xops->version_get(ph, &version); 460 + ret = ph->xops->version_get(ph, &version); 461 + if (ret) 462 + return ret; 457 463 458 464 dev_dbg(ph->dev, "Clock Version %d.%d\n", 459 465 PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version)); ··· 560 370 if (!cinfo) 561 371 return -ENOMEM; 562 372 563 - scmi_clock_protocol_attributes_get(ph, cinfo); 373 + ret = scmi_clock_protocol_attributes_get(ph, cinfo); 374 + if (ret) 375 + return ret; 564 376 565 377 cinfo->clk = devm_kcalloc(ph->dev, cinfo->num_clocks, 566 378 sizeof(*cinfo->clk), GFP_KERNEL); ··· 572 380 for (clkid = 0; clkid < cinfo->num_clocks; clkid++) { 573 381 struct scmi_clock_info *clk = cinfo->clk + clkid; 574 382 575 - ret = scmi_clock_attributes_get(ph, clkid, clk); 383 + ret = scmi_clock_attributes_get(ph, clkid, clk, version); 576 384 if (!ret) 577 385 scmi_clock_describe_rates_get(ph, clkid, clk); 578 386 } ··· 586 394 .owner = THIS_MODULE, 587 395 .instance_init = &scmi_clock_protocol_init, 588 396 .ops = &clk_proto_ops, 397 + .events = &clk_protocol_events, 589 398 }; 590 399 591 400 DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(clock, scmi_clock)
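The clock.c rework above converts the hand-rolled `CLOCK_DESCRIBE_RATES` paging loop into the new common iterator (`iter_response_init`/`iter_response_run`), driven by `prepare_message`/`update_state`/`process_response` callbacks. A userspace model of the loop shape that iterator factors out; `fake_xfer()` and the `TOTAL`/`PER_MSG` sizes are invented for illustration and stand in for a real firmware exchange:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Model of a multi-part SCMI command: re-issue the request with an
 * advancing descriptor index until the platform reports nothing
 * remaining. fake_xfer() plays the role of prepare_message +
 * do_xfer + update_state against a pretend 10-entry table.
 */
#define TOTAL	10	/* entries the "firmware" exposes */
#define PER_MSG	4	/* entries that fit in one reply */

struct iter_state {
	size_t desc_index;
	size_t num_returned;
	size_t num_remaining;
};

static void fake_xfer(struct iter_state *st)
{
	size_t left = TOTAL - st->desc_index;

	st->num_returned = left < PER_MSG ? left : PER_MSG;
	st->num_remaining = left - st->num_returned;
}

static size_t iterate_all(int *out)
{
	struct iter_state st = { 0 };
	size_t i, n = 0;

	do {
		fake_xfer(&st);
		for (i = 0; i < st.num_returned; i++)
			out[n++] = (int)(st.desc_index + i); /* process_response */
		st.desc_index += st.num_returned;
		/* checking both counts guards against buggy firmware looping */
	} while (st.num_returned && st.num_remaining);

	return n;
}
```

Centralising this loop is what lets clock.c shed roughly 80 lines: the protocol code now only supplies the three callbacks plus a private cursor struct (`scmi_clk_ipriv`).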
+2 -223
drivers/firmware/arm_scmi/common.h
··· 4 4 * driver common header file containing some definitions, structures 5 5 * and function prototypes used in all the different SCMI protocols. 6 6 * 7 - * Copyright (C) 2018-2021 ARM Ltd. 7 + * Copyright (C) 2018-2022 ARM Ltd. 8 8 */ 9 9 #ifndef _SCMI_COMMON_H 10 10 #define _SCMI_COMMON_H ··· 24 24 25 25 #include <asm/unaligned.h> 26 26 27 + #include "protocols.h" 27 28 #include "notify.h" 28 - 29 - #define PROTOCOL_REV_MINOR_MASK GENMASK(15, 0) 30 - #define PROTOCOL_REV_MAJOR_MASK GENMASK(31, 16) 31 - #define PROTOCOL_REV_MAJOR(x) (u16)(FIELD_GET(PROTOCOL_REV_MAJOR_MASK, (x))) 32 - #define PROTOCOL_REV_MINOR(x) (u16)(FIELD_GET(PROTOCOL_REV_MINOR_MASK, (x))) 33 - #define MAX_PROTOCOLS_IMP 16 34 - #define MAX_OPPS 16 35 - 36 - enum scmi_common_cmd { 37 - PROTOCOL_VERSION = 0x0, 38 - PROTOCOL_ATTRIBUTES = 0x1, 39 - PROTOCOL_MESSAGE_ATTRIBUTES = 0x2, 40 - }; 41 - 42 - /** 43 - * struct scmi_msg_resp_prot_version - Response for a message 44 - * 45 - * @minor_version: Minor version of the ABI that firmware supports 46 - * @major_version: Major version of the ABI that firmware supports 47 - * 48 - * In general, ABI version changes follow the rule that minor version increments 49 - * are backward compatible. Major revision changes in ABI may not be 50 - * backward compatible. 51 - * 52 - * Response to a generic message with message type SCMI_MSG_VERSION 53 - */ 54 - struct scmi_msg_resp_prot_version { 55 - __le16 minor_version; 56 - __le16 major_version; 57 - }; 58 29 59 30 #define MSG_ID_MASK GENMASK(7, 0) 60 31 #define MSG_XTRACT_ID(hdr) FIELD_GET(MSG_ID_MASK, (hdr)) ··· 49 78 * fit the whole table into one 4k page. 50 79 */ 51 80 #define SCMI_PENDING_XFERS_HT_ORDER_SZ 9 52 - 53 - /** 54 - * struct scmi_msg_hdr - Message(Tx/Rx) header 55 - * 56 - * @id: The identifier of the message being sent 57 - * @protocol_id: The identifier of the protocol used to send @id message 58 - * @type: The SCMI type for this message 59 - * @seq: The token to identify the message. 
When a message returns, the 60 - * platform returns the whole message header unmodified including the 61 - * token 62 - * @status: Status of the transfer once it's complete 63 - * @poll_completion: Indicate if the transfer needs to be polled for 64 - * completion or interrupt mode is used 65 - */ 66 - struct scmi_msg_hdr { 67 - u8 id; 68 - u8 protocol_id; 69 - u8 type; 70 - u16 seq; 71 - u32 status; 72 - bool poll_completion; 73 - }; 74 81 75 82 /** 76 83 * pack_scmi_header() - packs and returns 32-bit header ··· 79 130 hdr->type = MSG_XTRACT_TYPE(msg_hdr); 80 131 } 81 132 82 - /** 83 - * struct scmi_msg - Message(Tx/Rx) structure 84 - * 85 - * @buf: Buffer pointer 86 - * @len: Length of data in the Buffer 87 - */ 88 - struct scmi_msg { 89 - void *buf; 90 - size_t len; 91 - }; 92 - 93 - /** 94 - * struct scmi_xfer - Structure representing a message flow 95 - * 96 - * @transfer_id: Unique ID for debug & profiling purpose 97 - * @hdr: Transmit message header 98 - * @tx: Transmit message 99 - * @rx: Receive message, the buffer should be pre-allocated to store 100 - * message. If request-ACK protocol is used, we can reuse the same 101 - * buffer for the rx path as we use for the tx path. 102 - * @done: command message transmit completion event 103 - * @async_done: pointer to delayed response message received event completion 104 - * @pending: True for xfers added to @pending_xfers hashtable 105 - * @node: An hlist_node reference used to store this xfer, alternatively, on 106 - * the free list @free_xfers or in the @pending_xfers hashtable 107 - * @users: A refcount to track the active users for this xfer. 
108 - * This is meant to protect against the possibility that, when a command 109 - * transaction times out concurrently with the reception of a valid 110 - * response message, the xfer could be finally put on the TX path, and 111 - * so vanish, while on the RX path scmi_rx_callback() is still 112 - * processing it: in such a case this refcounting will ensure that, even 113 - * though the timed-out transaction will anyway cause the command 114 - * request to be reported as failed by time-out, the underlying xfer 115 - * cannot be discarded and possibly reused until the last one user on 116 - * the RX path has released it. 117 - * @busy: An atomic flag to ensure exclusive write access to this xfer 118 - * @state: The current state of this transfer, with states transitions deemed 119 - * valid being: 120 - * - SCMI_XFER_SENT_OK -> SCMI_XFER_RESP_OK [ -> SCMI_XFER_DRESP_OK ] 121 - * - SCMI_XFER_SENT_OK -> SCMI_XFER_DRESP_OK 122 - * (Missing synchronous response is assumed OK and ignored) 123 - * @lock: A spinlock to protect state and busy fields. 124 - * @priv: A pointer for transport private usage. 125 - */ 126 - struct scmi_xfer { 127 - int transfer_id; 128 - struct scmi_msg_hdr hdr; 129 - struct scmi_msg tx; 130 - struct scmi_msg rx; 131 - struct completion done; 132 - struct completion *async_done; 133 - bool pending; 134 - struct hlist_node node; 135 - refcount_t users; 136 - #define SCMI_XFER_FREE 0 137 - #define SCMI_XFER_BUSY 1 138 - atomic_t busy; 139 - #define SCMI_XFER_SENT_OK 0 140 - #define SCMI_XFER_RESP_OK 1 141 - #define SCMI_XFER_DRESP_OK 2 142 - int state; 143 - /* A lock to protect state and busy fields */ 144 - spinlock_t lock; 145 - void *priv; 146 - }; 147 - 148 133 /* 149 134 * An helper macro to lookup an xfer from the @pending_xfers hashtable 150 135 * using the message sequence number token as a key. 
··· 94 211 xfer_; \ 95 212 }) 96 213 97 - struct scmi_xfer_ops; 98 - 99 - /** 100 - * struct scmi_protocol_handle - Reference to an initialized protocol instance 101 - * 102 - * @dev: A reference to the associated SCMI instance device (handle->dev). 103 - * @xops: A reference to a struct holding refs to the core xfer operations that 104 - * can be used by the protocol implementation to generate SCMI messages. 105 - * @set_priv: A method to set protocol private data for this instance. 106 - * @get_priv: A method to get protocol private data previously set. 107 - * 108 - * This structure represents a protocol initialized against specific SCMI 109 - * instance and it will be used as follows: 110 - * - as a parameter fed from the core to the protocol initialization code so 111 - * that it can access the core xfer operations to build and generate SCMI 112 - * messages exclusively for the specific underlying protocol instance. 113 - * - as an opaque handle fed by an SCMI driver user when it tries to access 114 - * this protocol through its own protocol operations. 115 - * In this case this handle will be returned as an opaque object together 116 - * with the related protocol operations when the SCMI driver tries to access 117 - * the protocol. 118 - */ 119 - struct scmi_protocol_handle { 120 - struct device *dev; 121 - const struct scmi_xfer_ops *xops; 122 - int (*set_priv)(const struct scmi_protocol_handle *ph, void *priv); 123 - void *(*get_priv)(const struct scmi_protocol_handle *ph); 124 - }; 125 - 126 - /** 127 - * struct scmi_xfer_ops - References to the core SCMI xfer operations. 128 - * @version_get: Get this version protocol. 129 - * @xfer_get_init: Initialize one struct xfer if any xfer slot is free. 130 - * @reset_rx_to_maxsz: Reset rx size to max transport size. 131 - * @do_xfer: Do the SCMI transfer. 132 - * @do_xfer_with_response: Do the SCMI transfer waiting for a response. 133 - * @xfer_put: Free the xfer slot. 
134 - * 135 - * Note that all this operations expect a protocol handle as first parameter; 136 - * they then internally use it to infer the underlying protocol number: this 137 - * way is not possible for a protocol implementation to forge messages for 138 - * another protocol. 139 - */ 140 - struct scmi_xfer_ops { 141 - int (*version_get)(const struct scmi_protocol_handle *ph, u32 *version); 142 - int (*xfer_get_init)(const struct scmi_protocol_handle *ph, u8 msg_id, 143 - size_t tx_size, size_t rx_size, 144 - struct scmi_xfer **p); 145 - void (*reset_rx_to_maxsz)(const struct scmi_protocol_handle *ph, 146 - struct scmi_xfer *xfer); 147 - int (*do_xfer)(const struct scmi_protocol_handle *ph, 148 - struct scmi_xfer *xfer); 149 - int (*do_xfer_with_response)(const struct scmi_protocol_handle *ph, 150 - struct scmi_xfer *xfer); 151 - void (*xfer_put)(const struct scmi_protocol_handle *ph, 152 - struct scmi_xfer *xfer); 153 - }; 154 - 155 214 struct scmi_revision_info * 156 215 scmi_revision_area_get(const struct scmi_protocol_handle *ph); 157 216 int scmi_handle_put(const struct scmi_handle *handle); ··· 102 277 void scmi_setup_protocol_implemented(const struct scmi_protocol_handle *ph, 103 278 u8 *prot_imp); 104 279 105 - typedef int (*scmi_prot_init_ph_fn_t)(const struct scmi_protocol_handle *); 106 - 107 - /** 108 - * struct scmi_protocol - Protocol descriptor 109 - * @id: Protocol ID. 110 - * @owner: Module reference if any. 111 - * @instance_init: Mandatory protocol initialization function. 112 - * @instance_deinit: Optional protocol de-initialization function. 113 - * @ops: Optional reference to the operations provided by the protocol and 114 - * exposed in scmi_protocol.h. 115 - * @events: An optional reference to the events supported by this protocol. 
116 - */ 117 - struct scmi_protocol { 118 - const u8 id; 119 - struct module *owner; 120 - const scmi_prot_init_ph_fn_t instance_init; 121 - const scmi_prot_init_ph_fn_t instance_deinit; 122 - const void *ops; 123 - const struct scmi_protocol_events *events; 124 - }; 125 - 126 280 int __init scmi_bus_init(void); 127 281 void __exit scmi_bus_exit(void); 128 - 129 - #define DECLARE_SCMI_REGISTER_UNREGISTER(func) \ 130 - int __init scmi_##func##_register(void); \ 131 - void __exit scmi_##func##_unregister(void) 132 - DECLARE_SCMI_REGISTER_UNREGISTER(base); 133 - DECLARE_SCMI_REGISTER_UNREGISTER(clock); 134 - DECLARE_SCMI_REGISTER_UNREGISTER(perf); 135 - DECLARE_SCMI_REGISTER_UNREGISTER(power); 136 - DECLARE_SCMI_REGISTER_UNREGISTER(reset); 137 - DECLARE_SCMI_REGISTER_UNREGISTER(sensors); 138 - DECLARE_SCMI_REGISTER_UNREGISTER(voltage); 139 - DECLARE_SCMI_REGISTER_UNREGISTER(system); 140 - 141 - #define DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(name, proto) \ 142 - static const struct scmi_protocol *__this_proto = &(proto); \ 143 - \ 144 - int __init scmi_##name##_register(void) \ 145 - { \ 146 - return scmi_protocol_register(__this_proto); \ 147 - } \ 148 - \ 149 - void __exit scmi_##name##_unregister(void) \ 150 - { \ 151 - scmi_protocol_unregister(__this_proto); \ 152 - } 153 282 154 283 const struct scmi_protocol *scmi_protocol_get(int protocol_id); 155 284 void scmi_protocol_put(int protocol_id);
+166 -2
drivers/firmware/arm_scmi/driver.c
··· 128 128 * usage. 129 129 * @protocols_mtx: A mutex to protect protocols instances initialization. 130 130 * @protocols_imp: List of protocols implemented, currently maximum of 131 - * MAX_PROTOCOLS_IMP elements allocated by the base protocol 131 + * scmi_revision_info.num_protocols elements allocated by the 132 + * base protocol 132 133 * @active_protocols: IDR storing device_nodes for protocols actually defined 133 134 * in the DT and confirmed as implemented by fw. 134 135 * @atomic_threshold: Optional system wide DT-configured threshold, expressed ··· 1103 1102 .xfer_put = xfer_put, 1104 1103 }; 1105 1104 1105 + struct scmi_msg_resp_domain_name_get { 1106 + __le32 flags; 1107 + u8 name[SCMI_MAX_STR_SIZE]; 1108 + }; 1109 + 1110 + /** 1111 + * scmi_common_extended_name_get - Common helper to get extended resources name 1112 + * @ph: A protocol handle reference. 1113 + * @cmd_id: The specific command ID to use. 1114 + * @res_id: The specific resource ID to use. 1115 + * @name: A pointer to the preallocated area where the retrieved name will be 1116 + * stored as a NULL terminated string. 1117 + * @len: The len in bytes of the @name char array. 1118 + * 1119 + * Return: 0 on Success 1120 + */ 1121 + static int scmi_common_extended_name_get(const struct scmi_protocol_handle *ph, 1122 + u8 cmd_id, u32 res_id, char *name, 1123 + size_t len) 1124 + { 1125 + int ret; 1126 + struct scmi_xfer *t; 1127 + struct scmi_msg_resp_domain_name_get *resp; 1128 + 1129 + ret = ph->xops->xfer_get_init(ph, cmd_id, sizeof(res_id), 1130 + sizeof(*resp), &t); 1131 + if (ret) 1132 + goto out; 1133 + 1134 + put_unaligned_le32(res_id, t->tx.buf); 1135 + resp = t->rx.buf; 1136 + 1137 + ret = ph->xops->do_xfer(ph, t); 1138 + if (!ret) 1139 + strscpy(name, resp->name, len); 1140 + 1141 + ph->xops->xfer_put(ph, t); 1142 + out: 1143 + if (ret) 1144 + dev_warn(ph->dev, 1145 + "Failed to get extended name - id:%u (ret:%d). Using %s\n", 1146 + res_id, ret, name); 1147 + return ret; 1148 + }
1149 + 1150 + /** 1151 + * struct scmi_iterator - Iterator descriptor 1152 + * @msg: A reference to the message TX buffer; filled by @prepare_message with 1153 + * a proper custom command payload for each multi-part command request. 1154 + * @resp: A reference to the response RX buffer; used by @update_state and 1155 + * @process_response to parse the multi-part replies. 1156 + * @t: A reference to the underlying xfer initialized and used transparently by 1157 + * the iterator internal routines. 1158 + * @ph: A reference to the associated protocol handle to be used. 1159 + * @ops: A reference to the custom provided iterator operations. 1160 + * @state: The current iterator state; used and updated in turn by the iterator's 1161 + * internal routines and by the caller-provided @scmi_iterator_ops. 1162 + * @priv: A reference to optional private data as provided by the caller and 1163 + * passed back to the @scmi_iterator_ops.
1164 + */ 1165 + struct scmi_iterator { 1166 + void *msg; 1167 + void *resp; 1168 + struct scmi_xfer *t; 1169 + const struct scmi_protocol_handle *ph; 1170 + struct scmi_iterator_ops *ops; 1171 + struct scmi_iterator_state state; 1172 + void *priv; 1173 + }; 1174 + 1175 + static void *scmi_iterator_init(const struct scmi_protocol_handle *ph, 1176 + struct scmi_iterator_ops *ops, 1177 + unsigned int max_resources, u8 msg_id, 1178 + size_t tx_size, void *priv) 1179 + { 1180 + int ret; 1181 + struct scmi_iterator *i; 1182 + 1183 + i = devm_kzalloc(ph->dev, sizeof(*i), GFP_KERNEL); 1184 + if (!i) 1185 + return ERR_PTR(-ENOMEM); 1186 + 1187 + i->ph = ph; 1188 + i->ops = ops; 1189 + i->priv = priv; 1190 + 1191 + ret = ph->xops->xfer_get_init(ph, msg_id, tx_size, 0, &i->t); 1192 + if (ret) { 1193 + devm_kfree(ph->dev, i); 1194 + return ERR_PTR(ret); 1195 + } 1196 + 1197 + i->state.max_resources = max_resources; 1198 + i->msg = i->t->tx.buf; 1199 + i->resp = i->t->rx.buf; 1200 + 1201 + return i; 1202 + } 1203 + 1204 + static int scmi_iterator_run(void *iter) 1205 + { 1206 + int ret = -EINVAL; 1207 + struct scmi_iterator_ops *iops; 1208 + const struct scmi_protocol_handle *ph; 1209 + struct scmi_iterator_state *st; 1210 + struct scmi_iterator *i = iter; 1211 + 1212 + if (!i || !i->ops || !i->ph) 1213 + return ret; 1214 + 1215 + iops = i->ops; 1216 + ph = i->ph; 1217 + st = &i->state; 1218 + 1219 + do { 1220 + iops->prepare_message(i->msg, st->desc_index, i->priv); 1221 + ret = ph->xops->do_xfer(ph, i->t); 1222 + if (ret) 1223 + break; 1224 + 1225 + ret = iops->update_state(st, i->resp, i->priv); 1226 + if (ret) 1227 + break; 1228 + 1229 + if (st->num_returned > st->max_resources - st->desc_index) { 1230 + dev_err(ph->dev, 1231 + "No. of resources can't exceed %d\n", 1232 + st->max_resources);
1233 + ret = -EINVAL; 1234 + break; 1235 + } 1236 + 1237 + for (st->loop_idx = 0; st->loop_idx < st->num_returned; 1238 + st->loop_idx++) { 1239 + ret = iops->process_response(ph, i->resp, st, i->priv); 1240 + if (ret) 1241 + goto out; 1242 + } 1243 + 1244 + st->desc_index += st->num_returned; 1245 + ph->xops->reset_rx_to_maxsz(ph, i->t); 1246 + /* 1247 + * check for both returned and remaining to avoid infinite 1248 + * loop due to buggy firmware 1249 + */ 1250 + } while (st->num_returned && st->num_remaining); 1251 + 1252 + out: 1253 + /* Finalize and destroy iterator */ 1254 + ph->xops->xfer_put(ph, i->t); 1255 + devm_kfree(ph->dev, i); 1256 + 1257 + return ret; 1258 + } 1259 + 1260 + static const struct scmi_proto_helpers_ops helpers_ops = { 1261 + .extended_name_get = scmi_common_extended_name_get, 1262 + .iter_response_init = scmi_iterator_init, 1263 + .iter_response_run = scmi_iterator_run, 1264 + }; 1265 + 1106 1266 /** 1107 1267 * scmi_revision_area_get - Retrieve version memory area. 1108 1268 * ··· 1324 1162 pi->handle = handle; 1325 1163 pi->ph.dev = handle->dev; 1326 1164 pi->ph.xops = &xfer_ops; 1165 + pi->ph.hops = &helpers_ops; 1327 1166 pi->ph.set_priv = scmi_set_protocol_priv; 1328 1167 pi->ph.get_priv = scmi_get_protocol_priv; 1329 1168 refcount_set(&pi->users, 1); ··· 1473 1310 { 1474 1311 int i; 1475 1312 struct scmi_info *info = handle_to_scmi_info(handle); 1313 + struct scmi_revision_info *rev = handle->version; 1476 1314 1477 1315 if (!info->protocols_imp) 1478 1316 return false; 1479 1317 1480 - for (i = 0; i < MAX_PROTOCOLS_IMP; i++) 1318 + for (i = 0; i < rev->num_protocols; i++) 1481 1319 if (info->protocols_imp[i] == prot_id) 1482 1320 return true; 1483 1321 return false;
+108 -36
drivers/firmware/arm_scmi/optee.c
··· 64 64 * [in] value[0].b: Requested capabilities mask (enum pta_scmi_caps) 65 65 */ 66 66 PTA_SCMI_CMD_GET_CHANNEL = 3, 67 + 68 + /* 69 + * PTA_SCMI_CMD_PROCESS_MSG_CHANNEL - Process SCMI message in a MSG 70 + * buffer pointed by memref parameters 71 + * 72 + * [in] value[0].a: Channel handle 73 + * [in] memref[1]: Message buffer (MSG and SCMI payload) 74 + * [out] memref[2]: Response buffer (MSG and SCMI payload) 75 + * 76 + * Shared memories used for SCMI message/response are MSG buffers 77 + * referenced by param[1] and param[2]. MSG transport protocol 78 + * uses a 32bit header to carry SCMI meta-data (protocol ID and 79 + * protocol message ID) followed by the effective SCMI message 80 + * payload. 81 + */ 82 + PTA_SCMI_CMD_PROCESS_MSG_CHANNEL = 4, 67 83 }; 68 84 69 85 /* ··· 88 72 * PTA_SCMI_CAPS_SMT_HEADER 89 73 * When set, OP-TEE supports command using SMT header protocol (SCMI shmem) in 90 74 * shared memory buffers to carry SCMI protocol synchronisation information. 75 + * 76 + * PTA_SCMI_CAPS_MSG_HEADER 77 + * When set, OP-TEE supports command using MSG header protocol in an OP-TEE 78 + * shared memory to carry SCMI protocol synchronisation information and SCMI 79 + * message payload. 
91 80 */ 92 81 #define PTA_SCMI_CAPS_NONE 0 93 82 #define PTA_SCMI_CAPS_SMT_HEADER BIT(0) 83 + #define PTA_SCMI_CAPS_MSG_HEADER BIT(1) 84 + #define PTA_SCMI_CAPS_MASK (PTA_SCMI_CAPS_SMT_HEADER | \ 85 + PTA_SCMI_CAPS_MSG_HEADER) 94 86 95 87 /** 96 88 * struct scmi_optee_channel - Description of an OP-TEE SCMI channel ··· 109 85 * @mu: Mutex protection on channel access 110 86 * @cinfo: SCMI channel information 111 87 * @shmem: Virtual base address of the shared memory 112 - * @tee_shm: Reference to TEE shared memory or NULL if using static shmem 88 + * @req: Shared memory protocol handle for SCMI request and synchronous response 89 + * @tee_shm: TEE shared memory handle @req or NULL if using IOMEM shmem 113 90 * @link: Reference in agent's channel list 114 91 */ 115 92 struct scmi_optee_channel { ··· 119 94 u32 caps; 120 95 struct mutex mu; 121 96 struct scmi_chan_info *cinfo; 122 - struct scmi_shared_mem __iomem *shmem; 97 + union { 98 + struct scmi_shared_mem __iomem *shmem; 99 + struct scmi_msg_payld *msg; 100 + } req; 123 101 struct tee_shm *tee_shm; 124 102 struct list_head link; 125 103 }; ··· 206 178 207 179 caps = param[0].u.value.a; 208 180 209 - if (!(caps & PTA_SCMI_CAPS_SMT_HEADER)) { 210 - dev_err(agent->dev, "OP-TEE SCMI PTA doesn't support SMT\n"); 181 + if (!(caps & (PTA_SCMI_CAPS_SMT_HEADER | PTA_SCMI_CAPS_MSG_HEADER))) { 182 + dev_err(agent->dev, "OP-TEE SCMI PTA doesn't support SMT and MSG\n"); 211 183 return -EOPNOTSUPP; 212 184 } 213 185 ··· 221 193 struct device *dev = scmi_optee_private->dev; 222 194 struct tee_ioctl_invoke_arg arg = { }; 223 195 struct tee_param param[1] = { }; 224 - unsigned int caps = PTA_SCMI_CAPS_SMT_HEADER; 196 + unsigned int caps = 0; 225 197 int ret; 198 + 199 + if (channel->tee_shm) 200 + caps = PTA_SCMI_CAPS_MSG_HEADER; 201 + else 202 + caps = PTA_SCMI_CAPS_SMT_HEADER; 226 203 227 204 arg.func = PTA_SCMI_CMD_GET_CHANNEL; 228 205 arg.session = channel->tee_session; ··· 253 220
254 221 static int invoke_process_smt_channel(struct scmi_optee_channel *channel) 255 222 { 256 - struct tee_ioctl_invoke_arg arg = { }; 257 - struct tee_param param[2] = { }; 223 + struct tee_ioctl_invoke_arg arg = { 224 + .func = PTA_SCMI_CMD_PROCESS_SMT_CHANNEL, 225 + .session = channel->tee_session, 226 + .num_params = 1, 227 + }; 228 + struct tee_param param[1] = { }; 258 229 int ret; 259 230 260 - arg.session = channel->tee_session; 261 231 param[0].attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT; 262 232 param[0].u.value.a = channel->channel_id; 263 233 264 - if (channel->tee_shm) { 265 - param[1].attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT; 266 - param[1].u.memref.shm = channel->tee_shm; 267 - param[1].u.memref.size = SCMI_OPTEE_MAX_MSG_SIZE; 268 - arg.num_params = 2; 269 - arg.func = PTA_SCMI_CMD_PROCESS_SMT_CHANNEL_MESSAGE; 270 - } else { 271 - arg.num_params = 1; 272 - arg.func = PTA_SCMI_CMD_PROCESS_SMT_CHANNEL; 234 + ret = tee_client_invoke_func(scmi_optee_private->tee_ctx, &arg, param); 235 + if (ret < 0 || arg.ret) { 236 + dev_err(scmi_optee_private->dev, "Can't invoke channel %u: %d / %#x\n", 237 + channel->channel_id, ret, arg.ret); 238 + return -EIO; 273 239 } 240 + 241 + return 0; 242 + } 243 + 244 + static int invoke_process_msg_channel(struct scmi_optee_channel *channel, size_t msg_size) 245 + { 246 + struct tee_ioctl_invoke_arg arg = { 247 + .func = PTA_SCMI_CMD_PROCESS_MSG_CHANNEL, 248 + .session = channel->tee_session, 249 + .num_params = 3, 250 + }; 251 + struct tee_param param[3] = { }; 252 + int ret; 253 + 254 + param[0].attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT; 255 + param[0].u.value.a = channel->channel_id; 256 + 257 + param[1].attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT; 258 + param[1].u.memref.shm = channel->tee_shm; 259 + param[1].u.memref.size = msg_size; 260 + 261 + param[2].attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT; 262 + param[2].u.memref.shm = channel->tee_shm; 263 + param[2].u.memref.size = SCMI_OPTEE_MAX_MSG_SIZE; 274 264
275 265 ret = tee_client_invoke_func(scmi_optee_private->tee_ctx, &arg, param); 276 266 if (ret < 0 || arg.ret) { ··· 335 279 { 336 280 struct scmi_optee_channel *channel = cinfo->transport_info; 337 281 338 - shmem_clear_channel(channel->shmem); 282 + if (!channel->tee_shm) 283 + shmem_clear_channel(channel->req.shmem); 284 + } 285 + 286 + static int setup_dynamic_shmem(struct device *dev, struct scmi_optee_channel *channel) 287 + { 288 + const size_t msg_size = SCMI_OPTEE_MAX_MSG_SIZE; 289 + void *shbuf; 290 + 291 + channel->tee_shm = tee_shm_alloc_kernel_buf(scmi_optee_private->tee_ctx, msg_size); 292 + if (IS_ERR(channel->tee_shm)) { 293 + dev_err(channel->cinfo->dev, "shmem allocation failed\n"); 294 + return -ENOMEM; 295 + } 296 + 297 + shbuf = tee_shm_get_va(channel->tee_shm, 0); 298 + memset(shbuf, 0, msg_size); 299 + channel->req.msg = shbuf; 300 + 301 + return 0; 339 302 } 340 303 341 304 static int setup_static_shmem(struct device *dev, struct scmi_chan_info *cinfo, ··· 379 304 380 305 size = resource_size(&res); 381 306 382 - channel->shmem = devm_ioremap(dev, res.start, size); 383 - if (!channel->shmem) { 307 + channel->req.shmem = devm_ioremap(dev, res.start, size); 308 + if (!channel->req.shmem) { 384 309 dev_err(dev, "Failed to ioremap SCMI Tx shared memory\n"); 385 310 ret = -EADDRNOTAVAIL; 386 311 goto out; ··· 400 325 if (of_find_property(cinfo->dev->of_node, "shmem", NULL)) 401 326 return setup_static_shmem(dev, cinfo, channel); 402 327 else 403 - return -ENOMEM; 328 + return setup_dynamic_shmem(dev, channel); 404 329 } 405 330 406 331 static int scmi_optee_chan_setup(struct scmi_chan_info *cinfo, struct device *dev, bool tx) ··· 480 405 return 0; 481 406 } 482 407 483 - static struct scmi_shared_mem __iomem * 484 - get_channel_shm(struct scmi_optee_channel *chan, struct scmi_xfer *xfer) 485 - { 486 - if (!chan) 487 - return NULL; 488 - 489 - return chan->shmem; 490 - } 491 - 492 - 493 408 static int scmi_optee_send_message(struct scmi_chan_info *cinfo,
494 409 struct scmi_xfer *xfer) 495 410 { 496 411 struct scmi_optee_channel *channel = cinfo->transport_info; 497 - struct scmi_shared_mem __iomem *shmem = get_channel_shm(channel, xfer); 498 412 int ret; 499 413 500 414 mutex_lock(&channel->mu); 501 - shmem_tx_prepare(shmem, xfer); 502 415 503 - ret = invoke_process_smt_channel(channel); 416 + if (channel->tee_shm) { 417 + msg_tx_prepare(channel->req.msg, xfer); 418 + ret = invoke_process_msg_channel(channel, msg_command_size(xfer)); 419 + } else { 420 + shmem_tx_prepare(channel->req.shmem, xfer); 421 + ret = invoke_process_smt_channel(channel); 422 + } 423 + 504 424 if (ret) 505 425 mutex_unlock(&channel->mu); 506 426 ··· 506 436 struct scmi_xfer *xfer) 507 437 { 508 438 struct scmi_optee_channel *channel = cinfo->transport_info; 509 - struct scmi_shared_mem __iomem *shmem = get_channel_shm(channel, xfer); 510 439 511 - shmem_fetch_response(shmem, xfer); 440 + if (channel->tee_shm) 441 + msg_fetch_response(channel->req.msg, SCMI_OPTEE_MAX_MSG_SIZE, xfer); 442 + else 443 + shmem_fetch_response(channel->req.shmem, xfer); 512 444 } 513 445 514 446 static void scmi_optee_mark_txdone(struct scmi_chan_info *cinfo, int ret,
+108 -60
drivers/firmware/arm_scmi/perf.c
··· 2 2 /* 3 3 * System Control and Management Interface (SCMI) Performance Protocol 4 4 * 5 - * Copyright (C) 2018-2021 ARM Ltd. 5 + * Copyright (C) 2018-2022 ARM Ltd. 6 6 */ 7 7 8 8 #define pr_fmt(fmt) "SCMI Notifications PERF - " fmt ··· 17 17 #include <linux/scmi_protocol.h> 18 18 #include <linux/sort.h> 19 19 20 - #include "common.h" 20 + #include "protocols.h" 21 21 #include "notify.h" 22 + 23 + #define MAX_OPPS 16 22 24 23 25 enum scmi_performance_protocol_cmd { 24 26 PERF_DOMAIN_ATTRIBUTES = 0x3, ··· 32 30 PERF_NOTIFY_LIMITS = 0x9, 33 31 PERF_NOTIFY_LEVEL = 0xa, 34 32 PERF_DESCRIBE_FASTCHANNEL = 0xb, 33 + PERF_DOMAIN_NAME_GET = 0xc, 35 34 }; 36 35 37 36 struct scmi_opp { ··· 45 42 __le16 num_domains; 46 43 __le16 flags; 47 44 #define POWER_SCALE_IN_MILLIWATT(x) ((x) & BIT(0)) 45 + #define POWER_SCALE_IN_MICROWATT(x) ((x) & BIT(1)) 48 46 __le32 stats_addr_low; 49 47 __le32 stats_addr_high; 50 48 __le32 stats_size; ··· 58 54 #define SUPPORTS_PERF_LIMIT_NOTIFY(x) ((x) & BIT(29)) 59 55 #define SUPPORTS_PERF_LEVEL_NOTIFY(x) ((x) & BIT(28)) 60 56 #define SUPPORTS_PERF_FASTCHANNELS(x) ((x) & BIT(27)) 57 + #define SUPPORTS_EXTENDED_NAMES(x) ((x) & BIT(26)) 61 58 __le32 rate_limit_us; 62 59 __le32 sustained_freq_khz; 63 60 __le32 sustained_perf_level; 64 - u8 name[SCMI_MAX_STR_SIZE]; 61 + u8 name[SCMI_SHORT_NAME_MAX_SIZE]; 65 62 }; 66 63 67 64 struct scmi_msg_perf_describe_levels { ··· 171 166 u32 version; 172 167 int num_domains; 173 168 bool power_scale_mw; 169 + bool power_scale_uw; 174 170 u64 stats_addr; 175 171 u32 stats_size; 176 172 struct perf_dom_info *dom_info; ··· 202 196 203 197 pi->num_domains = le16_to_cpu(attr->num_domains); 204 198 pi->power_scale_mw = POWER_SCALE_IN_MILLIWATT(flags); 199 + if (PROTOCOL_REV_MAJOR(pi->version) >= 0x3) 200 + pi->power_scale_uw = POWER_SCALE_IN_MICROWATT(flags); 205 201 pi->stats_addr = le32_to_cpu(attr->stats_addr_low) | 206 202 (u64)le32_to_cpu(attr->stats_addr_high) << 32; 207 203 pi->stats_size = le32_to_cpu(attr->stats_size); ··· 215 207
216 208 static int 217 209 scmi_perf_domain_attributes_get(const struct scmi_protocol_handle *ph, 218 - u32 domain, struct perf_dom_info *dom_info) 210 + u32 domain, struct perf_dom_info *dom_info, 211 + u32 version) 219 212 { 220 213 int ret; 214 + u32 flags; 221 215 struct scmi_xfer *t; 222 216 struct scmi_msg_resp_perf_domain_attributes *attr; 223 217 ··· 233 223 234 224 ret = ph->xops->do_xfer(ph, t); 235 225 if (!ret) { 236 - u32 flags = le32_to_cpu(attr->flags); 226 + flags = le32_to_cpu(attr->flags); 237 227 238 228 dom_info->set_limits = SUPPORTS_SET_LIMITS(flags); 239 229 dom_info->set_perf = SUPPORTS_SET_PERF_LVL(flags); ··· 256 246 } 257 247 258 248 ph->xops->xfer_put(ph, t); 249 + 250 + /* 251 + * If supported overwrite short name with the extended one; 252 + * on error just carry on and use already provided short name. 253 + */ 254 + if (!ret && PROTOCOL_REV_MAJOR(version) >= 0x3 && 255 + SUPPORTS_EXTENDED_NAMES(flags)) 256 + ph->hops->extended_name_get(ph, PERF_DOMAIN_NAME_GET, domain, 257 + dom_info->name, SCMI_MAX_STR_SIZE); 258 + 259 259 return ret; 260 260 } 261 261 ··· 276 256 return t1->perf - t2->perf; 277 257 } 278 258 259 + struct scmi_perf_ipriv { 260 + u32 domain; 261 + struct perf_dom_info *perf_dom; 262 + }; 263 + 264 + static void iter_perf_levels_prepare_message(void *message, 265 + unsigned int desc_index, 266 + const void *priv) 267 + { 268 + struct scmi_msg_perf_describe_levels *msg = message; 269 + const struct scmi_perf_ipriv *p = priv; 270 + 271 + msg->domain = cpu_to_le32(p->domain); 272 + /* Set the number of OPPs to be skipped/already read */ 273 + msg->level_index = cpu_to_le32(desc_index); 274 + } 275 + 276 + static int iter_perf_levels_update_state(struct scmi_iterator_state *st, 277 + const void *response, void *priv) 278 + { 279 + const struct scmi_msg_resp_perf_describe_levels *r = response; 280 + 281 + st->num_returned = le16_to_cpu(r->num_returned); 282 + st->num_remaining = le16_to_cpu(r->num_remaining); 283 + 284 + return 0; 285 + } 286 +
287 + static int 288 + iter_perf_levels_process_response(const struct scmi_protocol_handle *ph, 289 + const void *response, 290 + struct scmi_iterator_state *st, void *priv) 291 + { 292 + struct scmi_opp *opp; 293 + const struct scmi_msg_resp_perf_describe_levels *r = response; 294 + struct scmi_perf_ipriv *p = priv; 295 + 296 + opp = &p->perf_dom->opp[st->desc_index + st->loop_idx]; 297 + opp->perf = le32_to_cpu(r->opp[st->loop_idx].perf_val); 298 + opp->power = le32_to_cpu(r->opp[st->loop_idx].power); 299 + opp->trans_latency_us = 300 + le16_to_cpu(r->opp[st->loop_idx].transition_latency_us); 301 + p->perf_dom->opp_count++; 302 + 303 + dev_dbg(ph->dev, "Level %d Power %d Latency %dus\n", 304 + opp->perf, opp->power, opp->trans_latency_us); 305 + 306 + return 0; 307 + } 308 + 279 309 static int 280 310 scmi_perf_describe_levels_get(const struct scmi_protocol_handle *ph, u32 domain, 281 311 struct perf_dom_info *perf_dom) 282 312 { 283 - int ret, cnt; 284 - u32 tot_opp_cnt = 0; 285 - u16 num_returned, num_remaining; 286 - struct scmi_xfer *t; 287 - struct scmi_opp *opp; 288 - struct scmi_msg_perf_describe_levels *dom_info; 289 - struct scmi_msg_resp_perf_describe_levels *level_info; 313 + int ret; 314 + void *iter; 315 + struct scmi_msg_perf_describe_levels *msg; 316 + struct scmi_iterator_ops ops = { 317 + .prepare_message = iter_perf_levels_prepare_message, 318 + .update_state = iter_perf_levels_update_state, 319 + .process_response = iter_perf_levels_process_response, 320 + }; 321 + struct scmi_perf_ipriv ppriv = { 322 + .domain = domain, 323 + .perf_dom = perf_dom, 324 + }; 290 325 291 - ret = ph->xops->xfer_get_init(ph, PERF_DESCRIBE_LEVELS, 292 - sizeof(*dom_info), 0, &t); 326 + iter = ph->hops->iter_response_init(ph, &ops, MAX_OPPS, 327 + PERF_DESCRIBE_LEVELS, 328 + sizeof(*msg), &ppriv); 329 + if (IS_ERR(iter)) 330 + return PTR_ERR(iter); 331 + 332 + ret = ph->hops->iter_response_run(iter);
293 333 if (ret) 294 334 return ret; 295 335 296 - dom_info = t->tx.buf; 297 - level_info = t->rx.buf; 336 + if (perf_dom->opp_count) 337 + sort(perf_dom->opp, perf_dom->opp_count, 338 + sizeof(struct scmi_opp), opp_cmp_func, NULL); 298 339 299 - do { 300 - dom_info->domain = cpu_to_le32(domain); 301 - /* Set the number of OPPs to be skipped/already read */ 302 - dom_info->level_index = cpu_to_le32(tot_opp_cnt); 303 - 304 - ret = ph->xops->do_xfer(ph, t); 305 - if (ret) 306 - break; 307 - 308 - num_returned = le16_to_cpu(level_info->num_returned); 309 - num_remaining = le16_to_cpu(level_info->num_remaining); 310 - if (tot_opp_cnt + num_returned > MAX_OPPS) { 311 - dev_err(ph->dev, "No. of OPPs exceeded MAX_OPPS"); 312 - break; 313 - } 314 - 315 - opp = &perf_dom->opp[tot_opp_cnt]; 316 - for (cnt = 0; cnt < num_returned; cnt++, opp++) { 317 - opp->perf = le32_to_cpu(level_info->opp[cnt].perf_val); 318 - opp->power = le32_to_cpu(level_info->opp[cnt].power); 319 - opp->trans_latency_us = le16_to_cpu 320 - (level_info->opp[cnt].transition_latency_us); 321 - 322 - dev_dbg(ph->dev, "Level %d Power %d Latency %dus\n", 323 - opp->perf, opp->power, opp->trans_latency_us); 324 - } 325 - 326 - tot_opp_cnt += num_returned; 327 - 328 - ph->xops->reset_rx_to_maxsz(ph, t); 329 - /* 330 - * check for both returned and remaining to avoid infinite 331 - * loop due to buggy firmware 332 - */ 333 - } while (num_returned && num_remaining); 334 - 335 - perf_dom->opp_count = tot_opp_cnt; 336 - ph->xops->xfer_put(ph, t); 337 - 338 - sort(perf_dom->opp, tot_opp_cnt, sizeof(*opp), opp_cmp_func, NULL); 339 340 return ret; 340 341 } 341 342 ··· 422 381 { 423 382 struct scmi_perf_info *pi = ph->get_priv(ph); 424 383 struct perf_dom_info *dom = pi->dom_info + domain; 384 + 385 + if (PROTOCOL_REV_MAJOR(pi->version) >= 0x3 && !max_perf && !min_perf) 386 + return -EINVAL; 425 387 426 388 if (dom->fc_info && dom->fc_info->limit_set_addr) { 427 389 iowrite32(max_perf, dom->fc_info->limit_set_addr); 
··· 917 873 918 874 static int scmi_perf_protocol_init(const struct scmi_protocol_handle *ph) 919 875 { 920 - int domain; 876 + int domain, ret; 921 877 u32 version; 922 878 struct scmi_perf_info *pinfo; 923 879 924 - ph->xops->version_get(ph, &version); 880 + ret = ph->xops->version_get(ph, &version); 881 + if (ret) 882 + return ret; 925 883 926 884 dev_dbg(ph->dev, "Performance Version %d.%d\n", 927 885 PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version)); ··· 932 886 if (!pinfo) 933 887 return -ENOMEM; 934 888 935 - scmi_perf_attributes_get(ph, pinfo); 889 + ret = scmi_perf_attributes_get(ph, pinfo); 890 + if (ret) 891 + return ret; 936 892 937 893 pinfo->dom_info = devm_kcalloc(ph->dev, pinfo->num_domains, 938 894 sizeof(*pinfo->dom_info), GFP_KERNEL); ··· 944 896 for (domain = 0; domain < pinfo->num_domains; domain++) { 945 897 struct perf_dom_info *dom = pinfo->dom_info + domain; 946 898 947 - scmi_perf_domain_attributes_get(ph, domain, dom); 899 + scmi_perf_domain_attributes_get(ph, domain, dom, version); 948 900 scmi_perf_describe_levels_get(ph, domain, dom); 949 901 950 902 if (dom->perf_fastchannels)
+32 -12
drivers/firmware/arm_scmi/power.c
··· 2 2 /* 3 3 * System Control and Management Interface (SCMI) Power Protocol 4 4 * 5 - * Copyright (C) 2018-2021 ARM Ltd. 5 + * Copyright (C) 2018-2022 ARM Ltd. 6 6 */ 7 7 8 8 #define pr_fmt(fmt) "SCMI Notifications POWER - " fmt ··· 10 10 #include <linux/module.h> 11 11 #include <linux/scmi_protocol.h> 12 12 13 - #include "common.h" 13 + #include "protocols.h" 14 14 #include "notify.h" 15 15 16 16 enum scmi_power_protocol_cmd { ··· 18 18 POWER_STATE_SET = 0x4, 19 19 POWER_STATE_GET = 0x5, 20 20 POWER_STATE_NOTIFY = 0x6, 21 + POWER_DOMAIN_NAME_GET = 0x8, 21 22 }; 22 23 23 24 struct scmi_msg_resp_power_attributes { ··· 34 33 #define SUPPORTS_STATE_SET_NOTIFY(x) ((x) & BIT(31)) 35 34 #define SUPPORTS_STATE_SET_ASYNC(x) ((x) & BIT(30)) 36 35 #define SUPPORTS_STATE_SET_SYNC(x) ((x) & BIT(29)) 37 - u8 name[SCMI_MAX_STR_SIZE]; 36 + #define SUPPORTS_EXTENDED_NAMES(x) ((x) & BIT(27)) 37 + u8 name[SCMI_SHORT_NAME_MAX_SIZE]; 38 38 }; 39 39 40 40 struct scmi_power_set_state { ··· 99 97 100 98 static int 101 99 scmi_power_domain_attributes_get(const struct scmi_protocol_handle *ph, 102 - u32 domain, struct power_dom_info *dom_info) 100 + u32 domain, struct power_dom_info *dom_info, 101 + u32 version) 103 102 { 104 103 int ret; 104 + u32 flags; 105 105 struct scmi_xfer *t; 106 106 struct scmi_msg_resp_power_domain_attributes *attr; 107 107 ··· 117 113 118 114 ret = ph->xops->do_xfer(ph, t); 119 115 if (!ret) { 120 - u32 flags = le32_to_cpu(attr->flags); 116 + flags = le32_to_cpu(attr->flags); 121 117 122 118 dom_info->state_set_notify = SUPPORTS_STATE_SET_NOTIFY(flags); 123 119 dom_info->state_set_async = SUPPORTS_STATE_SET_ASYNC(flags); 124 120 dom_info->state_set_sync = SUPPORTS_STATE_SET_SYNC(flags); 125 121 strlcpy(dom_info->name, attr->name, SCMI_MAX_STR_SIZE); 126 122 } 127 - 128 123 ph->xops->xfer_put(ph, t); 124 + 125 + /* 126 + * If supported overwrite short name with the extended one; 127 + * on error just carry on and use already provided short name. 
128 + */ 129 + if (!ret && PROTOCOL_REV_MAJOR(version) >= 0x3 && 130 + SUPPORTS_EXTENDED_NAMES(flags)) { 131 + ph->hops->extended_name_get(ph, POWER_DOMAIN_NAME_GET, 132 + domain, dom_info->name, 133 + SCMI_MAX_STR_SIZE); 134 + } 135 + 129 136 return ret; 130 137 } 131 138 ··· 189 174 return pi->num_domains; 190 175 } 191 176 192 - static char *scmi_power_name_get(const struct scmi_protocol_handle *ph, 193 - u32 domain) 177 + static const char * 178 + scmi_power_name_get(const struct scmi_protocol_handle *ph, 179 + u32 domain) 194 180 { 195 181 struct scmi_power_info *pi = ph->get_priv(ph); 196 182 struct power_dom_info *dom = pi->dom_info + domain; ··· 296 280 297 281 static int scmi_power_protocol_init(const struct scmi_protocol_handle *ph) 298 282 { 299 - int domain; 283 + int domain, ret; 300 284 u32 version; 301 285 struct scmi_power_info *pinfo; 302 286 303 - ph->xops->version_get(ph, &version); 287 + ret = ph->xops->version_get(ph, &version); 288 + if (ret) 289 + return ret; 304 290 305 291 dev_dbg(ph->dev, "Power Version %d.%d\n", 306 292 PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version)); ··· 311 293 if (!pinfo) 312 294 return -ENOMEM; 313 295 314 - scmi_power_attributes_get(ph, pinfo); 296 + ret = scmi_power_attributes_get(ph, pinfo); 297 + if (ret) 298 + return ret; 315 299 316 300 pinfo->dom_info = devm_kcalloc(ph->dev, pinfo->num_domains, 317 301 sizeof(*pinfo->dom_info), GFP_KERNEL); ··· 323 303 for (domain = 0; domain < pinfo->num_domains; domain++) { 324 304 struct power_dom_info *dom = pinfo->dom_info + domain; 325 305 326 - scmi_power_domain_attributes_get(ph, domain, dom); 306 + scmi_power_domain_attributes_get(ph, domain, dom, version); 327 307 } 328 308 329 309 pinfo->version = version;
+318
drivers/firmware/arm_scmi/protocols.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * System Control and Management Interface (SCMI) Message Protocol 4 + * protocols common header file containing some definitions, structures 5 + * and function prototypes used in all the different SCMI protocols. 6 + * 7 + * Copyright (C) 2022 ARM Ltd. 8 + */ 9 + #ifndef _SCMI_PROTOCOLS_H 10 + #define _SCMI_PROTOCOLS_H 11 + 12 + #include <linux/bitfield.h> 13 + #include <linux/completion.h> 14 + #include <linux/device.h> 15 + #include <linux/errno.h> 16 + #include <linux/kernel.h> 17 + #include <linux/hashtable.h> 18 + #include <linux/list.h> 19 + #include <linux/module.h> 20 + #include <linux/refcount.h> 21 + #include <linux/scmi_protocol.h> 22 + #include <linux/spinlock.h> 23 + #include <linux/types.h> 24 + 25 + #include <asm/unaligned.h> 26 + 27 + #define SCMI_SHORT_NAME_MAX_SIZE 16 28 + 29 + #define PROTOCOL_REV_MINOR_MASK GENMASK(15, 0) 30 + #define PROTOCOL_REV_MAJOR_MASK GENMASK(31, 16) 31 + #define PROTOCOL_REV_MAJOR(x) ((u16)(FIELD_GET(PROTOCOL_REV_MAJOR_MASK, (x)))) 32 + #define PROTOCOL_REV_MINOR(x) ((u16)(FIELD_GET(PROTOCOL_REV_MINOR_MASK, (x)))) 33 + 34 + enum scmi_common_cmd { 35 + PROTOCOL_VERSION = 0x0, 36 + PROTOCOL_ATTRIBUTES = 0x1, 37 + PROTOCOL_MESSAGE_ATTRIBUTES = 0x2, 38 + }; 39 + 40 + /** 41 + * struct scmi_msg_resp_prot_version - Response for a message 42 + * 43 + * @minor_version: Minor version of the ABI that firmware supports 44 + * @major_version: Major version of the ABI that firmware supports 45 + * 46 + * In general, ABI version changes follow the rule that minor version increments 47 + * are backward compatible. Major revision changes in ABI may not be 48 + * backward compatible. 
49 + * 50 + * Response to a generic message with message type SCMI_MSG_VERSION 51 + */ 52 + struct scmi_msg_resp_prot_version { 53 + __le16 minor_version; 54 + __le16 major_version; 55 + }; 56 + 57 + /** 58 + * struct scmi_msg - Message(Tx/Rx) structure 59 + * 60 + * @buf: Buffer pointer 61 + * @len: Length of data in the Buffer 62 + */ 63 + struct scmi_msg { 64 + void *buf; 65 + size_t len; 66 + }; 67 + 68 + /** 69 + * struct scmi_msg_hdr - Message(Tx/Rx) header 70 + * 71 + * @id: The identifier of the message being sent 72 + * @protocol_id: The identifier of the protocol used to send @id message 73 + * @type: The SCMI type for this message 74 + * @seq: The token to identify the message. When a message returns, the 75 + * platform returns the whole message header unmodified including the 76 + * token 77 + * @status: Status of the transfer once it's complete 78 + * @poll_completion: Indicate if the transfer needs to be polled for 79 + * completion or interrupt mode is used 80 + */ 81 + struct scmi_msg_hdr { 82 + u8 id; 83 + u8 protocol_id; 84 + u8 type; 85 + u16 seq; 86 + u32 status; 87 + bool poll_completion; 88 + }; 89 + 90 + /** 91 + * struct scmi_xfer - Structure representing a message flow 92 + * 93 + * @transfer_id: Unique ID for debug & profiling purpose 94 + * @hdr: Transmit message header 95 + * @tx: Transmit message 96 + * @rx: Receive message, the buffer should be pre-allocated to store 97 + * message. If request-ACK protocol is used, we can reuse the same 98 + * buffer for the rx path as we use for the tx path. 99 + * @done: command message transmit completion event 100 + * @async_done: pointer to delayed response message received event completion 101 + * @pending: True for xfers added to @pending_xfers hashtable 102 + * @node: An hlist_node reference used to store this xfer, alternatively, on 103 + * the free list @free_xfers or in the @pending_xfers hashtable 104 + * @users: A refcount to track the active users for this xfer. 
105 + * This is meant to protect against the possibility that, when a command 106 + * transaction times out concurrently with the reception of a valid 107 + * response message, the xfer could be finally put on the TX path, and 108 + * so vanish, while on the RX path scmi_rx_callback() is still 109 + * processing it: in such a case this refcounting will ensure that, even 110 + * though the timed-out transaction will anyway cause the command 111 + * request to be reported as failed by time-out, the underlying xfer 112 + * cannot be discarded and possibly reused until the last one user on 113 + * the RX path has released it. 114 + * @busy: An atomic flag to ensure exclusive write access to this xfer 115 + * @state: The current state of this transfer, with states transitions deemed 116 + * valid being: 117 + * - SCMI_XFER_SENT_OK -> SCMI_XFER_RESP_OK [ -> SCMI_XFER_DRESP_OK ] 118 + * - SCMI_XFER_SENT_OK -> SCMI_XFER_DRESP_OK 119 + * (Missing synchronous response is assumed OK and ignored) 120 + * @lock: A spinlock to protect state and busy fields. 121 + * @priv: A pointer for transport private usage. 122 + */ 123 + struct scmi_xfer { 124 + int transfer_id; 125 + struct scmi_msg_hdr hdr; 126 + struct scmi_msg tx; 127 + struct scmi_msg rx; 128 + struct completion done; 129 + struct completion *async_done; 130 + bool pending; 131 + struct hlist_node node; 132 + refcount_t users; 133 + #define SCMI_XFER_FREE 0 134 + #define SCMI_XFER_BUSY 1 135 + atomic_t busy; 136 + #define SCMI_XFER_SENT_OK 0 137 + #define SCMI_XFER_RESP_OK 1 138 + #define SCMI_XFER_DRESP_OK 2 139 + int state; 140 + /* A lock to protect state and busy fields */ 141 + spinlock_t lock; 142 + void *priv; 143 + }; 144 + 145 + struct scmi_xfer_ops; 146 + struct scmi_proto_helpers_ops; 147 + 148 + /** 149 + * struct scmi_protocol_handle - Reference to an initialized protocol instance 150 + * 151 + * @dev: A reference to the associated SCMI instance device (handle->dev). 
152 + * @xops: A reference to a struct holding refs to the core xfer operations that 153 + * can be used by the protocol implementation to generate SCMI messages. 154 + * @set_priv: A method to set protocol private data for this instance. 155 + * @get_priv: A method to get protocol private data previously set. 156 + * 157 + * This structure represents a protocol initialized against a specific SCMI 158 + * instance and it will be used as follows: 159 + * - as a parameter fed from the core to the protocol initialization code so 160 + * that it can access the core xfer operations to build and generate SCMI 161 + * messages exclusively for the specific underlying protocol instance. 162 + * - as an opaque handle fed by an SCMI driver user when it tries to access 163 + * this protocol through its own protocol operations. 164 + * In this case this handle will be returned as an opaque object together 165 + * with the related protocol operations when the SCMI driver tries to access 166 + * the protocol. 167 + */ 168 + struct scmi_protocol_handle { 169 + struct device *dev; 170 + const struct scmi_xfer_ops *xops; 171 + const struct scmi_proto_helpers_ops *hops; 172 + int (*set_priv)(const struct scmi_protocol_handle *ph, void *priv); 173 + void *(*get_priv)(const struct scmi_protocol_handle *ph); 174 + }; 175 + 176 + /** 177 + * struct scmi_iterator_state - Iterator current state descriptor 178 + * @desc_index: Starting index for the current multi-part request. 179 + * @num_returned: Number of returned items in the last multi-part reply. 180 + * @num_remaining: Number of remaining items in the multi-part message. 181 + * @max_resources: Maximum acceptable number of items, configured by the caller 182 + * depending on the underlying resources that it is querying. 183 + * @loop_idx: The iterator loop index in the current multi-part reply. 184 + * @priv: Optional pointer to some additional state-related private data set up 185 + * by the caller during the iterations.
186 + */ 187 + struct scmi_iterator_state { 188 + unsigned int desc_index; 189 + unsigned int num_returned; 190 + unsigned int num_remaining; 191 + unsigned int max_resources; 192 + unsigned int loop_idx; 193 + void *priv; 194 + }; 195 + 196 + /** 197 + * struct scmi_iterator_ops - Custom iterator operations 198 + * @prepare_message: An operation to provide the custom logic to fill in the 199 + * SCMI command request pointed by @message. @desc_index is 200 + * a reference to the next index to use in the multi-part 201 + * request. 202 + * @update_state: An operation to provide the custom logic to update the 203 + * iterator state from the actual message response. 204 + * @process_response: An operation to provide the custom logic needed to process 205 + * each chunk of the multi-part message. 206 + */ 207 + struct scmi_iterator_ops { 208 + void (*prepare_message)(void *message, unsigned int desc_index, 209 + const void *priv); 210 + int (*update_state)(struct scmi_iterator_state *st, 211 + const void *response, void *priv); 212 + int (*process_response)(const struct scmi_protocol_handle *ph, 213 + const void *response, 214 + struct scmi_iterator_state *st, void *priv); 215 + }; 216 + 217 + /** 218 + * struct scmi_proto_helpers_ops - References to common protocol helpers 219 + * @extended_name_get: A common helper function to retrieve extended naming 220 + * for the specified resource using the specified command. 221 + * Result is returned as a NULL terminated string in the 222 + * pre-allocated area pointed to by @name with maximum 223 + * capacity of @len bytes. 224 + * @iter_response_init: A common helper to initialize a generic iterator to 225 + * parse multi-message responses: when run the iterator 226 + * will take care to send the initial command request as 227 + * specified by @msg_id and @tx_size and then to parse the 228 + * multi-part responses using the custom operations 229 + * provided in @ops. 
230 + * @iter_response_run: A common helper to trigger the run of a previously 231 + * initialized iterator. 232 + */ 233 + struct scmi_proto_helpers_ops { 234 + int (*extended_name_get)(const struct scmi_protocol_handle *ph, 235 + u8 cmd_id, u32 res_id, char *name, size_t len); 236 + void *(*iter_response_init)(const struct scmi_protocol_handle *ph, 237 + struct scmi_iterator_ops *ops, 238 + unsigned int max_resources, u8 msg_id, 239 + size_t tx_size, void *priv); 240 + int (*iter_response_run)(void *iter); 241 + }; 242 + 243 + /** 244 + * struct scmi_xfer_ops - References to the core SCMI xfer operations. 245 + * @version_get: Get the version of this protocol. 246 + * @xfer_get_init: Initialize one struct xfer if any xfer slot is free. 247 + * @reset_rx_to_maxsz: Reset rx size to max transport size. 248 + * @do_xfer: Do the SCMI transfer. 249 + * @do_xfer_with_response: Do the SCMI transfer waiting for a response. 250 + * @xfer_put: Free the xfer slot. 251 + * 252 + * Note that all these operations expect a protocol handle as their first 253 + * parameter; they then internally use it to infer the underlying protocol 254 + * number: this way it is not possible for a protocol implementation to forge 255 + * messages for another protocol. 
256 + */ 257 + struct scmi_xfer_ops { 258 + int (*version_get)(const struct scmi_protocol_handle *ph, u32 *version); 259 + int (*xfer_get_init)(const struct scmi_protocol_handle *ph, u8 msg_id, 260 + size_t tx_size, size_t rx_size, 261 + struct scmi_xfer **p); 262 + void (*reset_rx_to_maxsz)(const struct scmi_protocol_handle *ph, 263 + struct scmi_xfer *xfer); 264 + int (*do_xfer)(const struct scmi_protocol_handle *ph, 265 + struct scmi_xfer *xfer); 266 + int (*do_xfer_with_response)(const struct scmi_protocol_handle *ph, 267 + struct scmi_xfer *xfer); 268 + void (*xfer_put)(const struct scmi_protocol_handle *ph, 269 + struct scmi_xfer *xfer); 270 + }; 271 + 272 + typedef int (*scmi_prot_init_ph_fn_t)(const struct scmi_protocol_handle *); 273 + 274 + /** 275 + * struct scmi_protocol - Protocol descriptor 276 + * @id: Protocol ID. 277 + * @owner: Module reference if any. 278 + * @instance_init: Mandatory protocol initialization function. 279 + * @instance_deinit: Optional protocol de-initialization function. 280 + * @ops: Optional reference to the operations provided by the protocol and 281 + * exposed in scmi_protocol.h. 282 + * @events: An optional reference to the events supported by this protocol. 
283 + */ 284 + struct scmi_protocol { 285 + const u8 id; 286 + struct module *owner; 287 + const scmi_prot_init_ph_fn_t instance_init; 288 + const scmi_prot_init_ph_fn_t instance_deinit; 289 + const void *ops; 290 + const struct scmi_protocol_events *events; 291 + }; 292 + 293 + #define DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(name, proto) \ 294 + static const struct scmi_protocol *__this_proto = &(proto); \ 295 + \ 296 + int __init scmi_##name##_register(void) \ 297 + { \ 298 + return scmi_protocol_register(__this_proto); \ 299 + } \ 300 + \ 301 + void __exit scmi_##name##_unregister(void) \ 302 + { \ 303 + scmi_protocol_unregister(__this_proto); \ 304 + } 305 + 306 + #define DECLARE_SCMI_REGISTER_UNREGISTER(func) \ 307 + int __init scmi_##func##_register(void); \ 308 + void __exit scmi_##func##_unregister(void) 309 + DECLARE_SCMI_REGISTER_UNREGISTER(base); 310 + DECLARE_SCMI_REGISTER_UNREGISTER(clock); 311 + DECLARE_SCMI_REGISTER_UNREGISTER(perf); 312 + DECLARE_SCMI_REGISTER_UNREGISTER(power); 313 + DECLARE_SCMI_REGISTER_UNREGISTER(reset); 314 + DECLARE_SCMI_REGISTER_UNREGISTER(sensors); 315 + DECLARE_SCMI_REGISTER_UNREGISTER(voltage); 316 + DECLARE_SCMI_REGISTER_UNREGISTER(system); 317 + 318 + #endif /* _SCMI_PROTOCOLS_H */
+29 -11
drivers/firmware/arm_scmi/reset.c
··· 2 2 /* 3 3 * System Control and Management Interface (SCMI) Reset Protocol 4 4 * 5 - * Copyright (C) 2019-2021 ARM Ltd. 5 + * Copyright (C) 2019-2022 ARM Ltd. 6 6 */ 7 7 8 8 #define pr_fmt(fmt) "SCMI Notifications RESET - " fmt ··· 10 10 #include <linux/module.h> 11 11 #include <linux/scmi_protocol.h> 12 12 13 - #include "common.h" 13 + #include "protocols.h" 14 14 #include "notify.h" 15 15 16 16 enum scmi_reset_protocol_cmd { 17 17 RESET_DOMAIN_ATTRIBUTES = 0x3, 18 18 RESET = 0x4, 19 19 RESET_NOTIFY = 0x5, 20 + RESET_DOMAIN_NAME_GET = 0x6, 20 21 }; 21 22 22 23 #define NUM_RESET_DOMAIN_MASK 0xffff ··· 27 26 __le32 attributes; 28 27 #define SUPPORTS_ASYNC_RESET(x) ((x) & BIT(31)) 29 28 #define SUPPORTS_NOTIFY_RESET(x) ((x) & BIT(30)) 29 + #define SUPPORTS_EXTENDED_NAMES(x) ((x) & BIT(29)) 30 30 __le32 latency; 31 - u8 name[SCMI_MAX_STR_SIZE]; 31 + u8 name[SCMI_SHORT_NAME_MAX_SIZE]; 32 32 }; 33 33 34 34 struct scmi_msg_reset_domain_reset { ··· 91 89 92 90 static int 93 91 scmi_reset_domain_attributes_get(const struct scmi_protocol_handle *ph, 94 - u32 domain, struct reset_dom_info *dom_info) 92 + u32 domain, struct reset_dom_info *dom_info, 93 + u32 version) 95 94 { 96 95 int ret; 96 + u32 attributes; 97 97 struct scmi_xfer *t; 98 98 struct scmi_msg_resp_reset_domain_attributes *attr; 99 99 ··· 109 105 110 106 ret = ph->xops->do_xfer(ph, t); 111 107 if (!ret) { 112 - u32 attributes = le32_to_cpu(attr->attributes); 108 + attributes = le32_to_cpu(attr->attributes); 113 109 114 110 dom_info->async_reset = SUPPORTS_ASYNC_RESET(attributes); 115 111 dom_info->reset_notify = SUPPORTS_NOTIFY_RESET(attributes); ··· 120 116 } 121 117 122 118 ph->xops->xfer_put(ph, t); 119 + 120 + /* 121 + * If supported overwrite short name with the extended one; 122 + * on error just carry on and use already provided short name. 
123 + */ 124 + if (!ret && PROTOCOL_REV_MAJOR(version) >= 0x3 && 125 + SUPPORTS_EXTENDED_NAMES(attributes)) 126 + ph->hops->extended_name_get(ph, RESET_DOMAIN_NAME_GET, domain, 127 + dom_info->name, SCMI_MAX_STR_SIZE); 128 + 123 129 return ret; 124 130 } 125 131 ··· 140 126 return pi->num_domains; 141 127 } 142 128 143 - static char *scmi_reset_name_get(const struct scmi_protocol_handle *ph, 144 - u32 domain) 129 + static const char * 130 + scmi_reset_name_get(const struct scmi_protocol_handle *ph, u32 domain) 145 131 { 146 132 struct scmi_reset_info *pi = ph->get_priv(ph); 147 133 ··· 307 293 308 294 static int scmi_reset_protocol_init(const struct scmi_protocol_handle *ph) 309 295 { 310 - int domain; 296 + int domain, ret; 311 297 u32 version; 312 298 struct scmi_reset_info *pinfo; 313 299 314 - ph->xops->version_get(ph, &version); 300 + ret = ph->xops->version_get(ph, &version); 301 + if (ret) 302 + return ret; 315 303 316 304 dev_dbg(ph->dev, "Reset Version %d.%d\n", 317 305 PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version)); ··· 322 306 if (!pinfo) 323 307 return -ENOMEM; 324 308 325 - scmi_reset_attributes_get(ph, pinfo); 309 + ret = scmi_reset_attributes_get(ph, pinfo); 310 + if (ret) 311 + return ret; 326 312 327 313 pinfo->dom_info = devm_kcalloc(ph->dev, pinfo->num_domains, 328 314 sizeof(*pinfo->dom_info), GFP_KERNEL); ··· 334 316 for (domain = 0; domain < pinfo->num_domains; domain++) { 335 317 struct reset_dom_info *dom = pinfo->dom_info + domain; 336 318 337 - scmi_reset_domain_attributes_get(ph, domain, dom); 319 + scmi_reset_domain_attributes_get(ph, domain, dom, version); 338 320 } 339 321 340 322 pinfo->version = version;
+374 -277
drivers/firmware/arm_scmi/sensors.c
··· 2 2 /* 3 3 * System Control and Management Interface (SCMI) Sensor Protocol 4 4 * 5 - * Copyright (C) 2018-2021 ARM Ltd. 5 + * Copyright (C) 2018-2022 ARM Ltd. 6 6 */ 7 7 8 8 #define pr_fmt(fmt) "SCMI Notifications SENSOR - " fmt ··· 11 11 #include <linux/module.h> 12 12 #include <linux/scmi_protocol.h> 13 13 14 - #include "common.h" 14 + #include "protocols.h" 15 15 #include "notify.h" 16 16 17 17 #define SCMI_MAX_NUM_SENSOR_AXIS 63 ··· 27 27 SENSOR_CONFIG_GET = 0x9, 28 28 SENSOR_CONFIG_SET = 0xA, 29 29 SENSOR_CONTINUOUS_UPDATE_NOTIFY = 0xB, 30 + SENSOR_NAME_GET = 0xC, 31 + SENSOR_AXIS_NAME_GET = 0xD, 30 32 }; 31 33 32 34 struct scmi_msg_resp_sensor_attributes { ··· 65 63 __le32 max_range_high; 66 64 }; 67 65 66 + struct scmi_msg_sensor_description { 67 + __le32 desc_index; 68 + }; 69 + 68 70 struct scmi_msg_resp_sensor_description { 69 71 __le16 num_returned; 70 72 __le16 num_remaining; ··· 77 71 __le32 attributes_low; 78 72 /* Common attributes_low macros */ 79 73 #define SUPPORTS_ASYNC_READ(x) FIELD_GET(BIT(31), (x)) 74 + #define SUPPORTS_EXTENDED_NAMES(x) FIELD_GET(BIT(29), (x)) 80 75 #define NUM_TRIP_POINTS(x) FIELD_GET(GENMASK(7, 0), (x)) 81 76 __le32 attributes_high; 82 77 /* Common attributes_high macros */ ··· 85 78 #define SENSOR_SCALE_SIGN BIT(4) 86 79 #define SENSOR_SCALE_EXTEND GENMASK(31, 5) 87 80 #define SENSOR_TYPE(x) FIELD_GET(GENMASK(7, 0), (x)) 88 - u8 name[SCMI_MAX_STR_SIZE]; 81 + u8 name[SCMI_SHORT_NAME_MAX_SIZE]; 89 82 /* only for version > 2.0 */ 90 83 __le32 power; 91 84 __le32 resolution; ··· 118 111 struct scmi_axis_descriptor { 119 112 __le32 id; 120 113 __le32 attributes_low; 114 + #define SUPPORTS_EXTENDED_AXIS_NAMES(x) FIELD_GET(BIT(9), (x)) 121 115 __le32 attributes_high; 122 - u8 name[SCMI_MAX_STR_SIZE]; 116 + u8 name[SCMI_SHORT_NAME_MAX_SIZE]; 123 117 __le32 resolution; 124 118 struct scmi_msg_resp_attrs attrs; 119 + } desc[]; 120 + }; 121 + 122 + struct scmi_msg_resp_sensor_axis_names_description { 123 + __le32 num_axis_flags; 
124 + struct scmi_sensor_axis_name_descriptor { 125 + __le32 axis_id; 126 + u8 name[SCMI_MAX_STR_SIZE]; 125 127 } desc[]; 126 128 }; 127 129 ··· 247 231 } 248 232 249 233 static inline void scmi_parse_range_attrs(struct scmi_range_attrs *out, 250 - struct scmi_msg_resp_attrs *in) 234 + const struct scmi_msg_resp_attrs *in) 251 235 { 252 236 out->min_range = get_unaligned_le64((void *)&in->min_range_low); 253 237 out->max_range = get_unaligned_le64((void *)&in->max_range_low); 254 238 } 255 239 240 + struct scmi_sens_ipriv { 241 + void *priv; 242 + struct device *dev; 243 + }; 244 + 245 + static void iter_intervals_prepare_message(void *message, 246 + unsigned int desc_index, 247 + const void *p) 248 + { 249 + struct scmi_msg_sensor_list_update_intervals *msg = message; 250 + const struct scmi_sensor_info *s; 251 + 252 + s = ((const struct scmi_sens_ipriv *)p)->priv; 253 + /* Set the number of sensors to be skipped/already read */ 254 + msg->id = cpu_to_le32(s->id); 255 + msg->index = cpu_to_le32(desc_index); 256 + } 257 + 258 + static int iter_intervals_update_state(struct scmi_iterator_state *st, 259 + const void *response, void *p) 260 + { 261 + u32 flags; 262 + struct scmi_sensor_info *s = ((struct scmi_sens_ipriv *)p)->priv; 263 + struct device *dev = ((struct scmi_sens_ipriv *)p)->dev; 264 + const struct scmi_msg_resp_sensor_list_update_intervals *r = response; 265 + 266 + flags = le32_to_cpu(r->num_intervals_flags); 267 + st->num_returned = NUM_INTERVALS_RETURNED(flags); 268 + st->num_remaining = NUM_INTERVALS_REMAINING(flags); 269 + 270 + /* 271 + * Max intervals is not declared previously anywhere so we 272 + * assume it's returned+remaining on first call. 
273 + */ 274 + if (!st->max_resources) { 275 + s->intervals.segmented = SEGMENTED_INTVL_FORMAT(flags); 276 + s->intervals.count = st->num_returned + st->num_remaining; 277 + /* segmented intervals are reported in one triplet */ 278 + if (s->intervals.segmented && 279 + (st->num_remaining || st->num_returned != 3)) { 280 + dev_err(dev, 281 + "Sensor ID:%d advertises an invalid segmented interval (%d)\n", 282 + s->id, s->intervals.count); 283 + s->intervals.segmented = false; 284 + s->intervals.count = 0; 285 + return -EINVAL; 286 + } 287 + /* Direct allocation when exceeding pre-allocated */ 288 + if (s->intervals.count >= SCMI_MAX_PREALLOC_POOL) { 289 + s->intervals.desc = 290 + devm_kcalloc(dev, 291 + s->intervals.count, 292 + sizeof(*s->intervals.desc), 293 + GFP_KERNEL); 294 + if (!s->intervals.desc) { 295 + s->intervals.segmented = false; 296 + s->intervals.count = 0; 297 + return -ENOMEM; 298 + } 299 + } 300 + 301 + st->max_resources = s->intervals.count; 302 + } 303 + 304 + return 0; 305 + } 306 + 307 + static int 308 + iter_intervals_process_response(const struct scmi_protocol_handle *ph, 309 + const void *response, 310 + struct scmi_iterator_state *st, void *p) 311 + { 312 + const struct scmi_msg_resp_sensor_list_update_intervals *r = response; 313 + struct scmi_sensor_info *s = ((struct scmi_sens_ipriv *)p)->priv; 314 + 315 + s->intervals.desc[st->desc_index + st->loop_idx] = 316 + le32_to_cpu(r->intervals[st->loop_idx]); 317 + 318 + return 0; 319 + } 320 + 256 321 static int scmi_sensor_update_intervals(const struct scmi_protocol_handle *ph, 257 322 struct scmi_sensor_info *s) 258 323 { 259 - int ret, cnt; 260 - u32 desc_index = 0; 261 - u16 num_returned, num_remaining; 262 - struct scmi_xfer *ti; 263 - struct scmi_msg_resp_sensor_list_update_intervals *buf; 324 + void *iter; 264 325 struct scmi_msg_sensor_list_update_intervals *msg; 326 + struct scmi_iterator_ops ops = { 327 + .prepare_message = iter_intervals_prepare_message, 328 + .update_state = 
iter_intervals_update_state, 329 + .process_response = iter_intervals_process_response, 330 + }; 331 + struct scmi_sens_ipriv upriv = { 332 + .priv = s, 333 + .dev = ph->dev, 334 + }; 265 335 266 - ret = ph->xops->xfer_get_init(ph, SENSOR_LIST_UPDATE_INTERVALS, 267 - sizeof(*msg), 0, &ti); 268 - if (ret) 269 - return ret; 336 + iter = ph->hops->iter_response_init(ph, &ops, s->intervals.count, 337 + SENSOR_LIST_UPDATE_INTERVALS, 338 + sizeof(*msg), &upriv); 339 + if (IS_ERR(iter)) 340 + return PTR_ERR(iter); 270 341 271 - buf = ti->rx.buf; 272 - do { 273 - u32 flags; 342 + return ph->hops->iter_response_run(iter); 343 + } 274 344 275 - msg = ti->tx.buf; 276 - /* Set the number of sensors to be skipped/already read */ 277 - msg->id = cpu_to_le32(s->id); 278 - msg->index = cpu_to_le32(desc_index); 345 + static void iter_axes_desc_prepare_message(void *message, 346 + const unsigned int desc_index, 347 + const void *priv) 348 + { 349 + struct scmi_msg_sensor_axis_description_get *msg = message; 350 + const struct scmi_sensor_info *s = priv; 279 351 280 - ret = ph->xops->do_xfer(ph, ti); 281 - if (ret) 282 - break; 352 + /* Set the number of sensors to be skipped/already read */ 353 + msg->id = cpu_to_le32(s->id); 354 + msg->axis_desc_index = cpu_to_le32(desc_index); 355 + } 283 356 284 - flags = le32_to_cpu(buf->num_intervals_flags); 285 - num_returned = NUM_INTERVALS_RETURNED(flags); 286 - num_remaining = NUM_INTERVALS_REMAINING(flags); 357 + static int 358 + iter_axes_desc_update_state(struct scmi_iterator_state *st, 359 + const void *response, void *priv) 360 + { 361 + u32 flags; 362 + const struct scmi_msg_resp_sensor_axis_description *r = response; 287 363 288 - /* 289 - * Max intervals is not declared previously anywhere so we 290 - * assume it's returned+remaining. 
291 - */ 292 - if (!s->intervals.count) { 293 - s->intervals.segmented = SEGMENTED_INTVL_FORMAT(flags); 294 - s->intervals.count = num_returned + num_remaining; 295 - /* segmented intervals are reported in one triplet */ 296 - if (s->intervals.segmented && 297 - (num_remaining || num_returned != 3)) { 298 - dev_err(ph->dev, 299 - "Sensor ID:%d advertises an invalid segmented interval (%d)\n", 300 - s->id, s->intervals.count); 301 - s->intervals.segmented = false; 302 - s->intervals.count = 0; 303 - ret = -EINVAL; 304 - break; 305 - } 306 - /* Direct allocation when exceeding pre-allocated */ 307 - if (s->intervals.count >= SCMI_MAX_PREALLOC_POOL) { 308 - s->intervals.desc = 309 - devm_kcalloc(ph->dev, 310 - s->intervals.count, 311 - sizeof(*s->intervals.desc), 312 - GFP_KERNEL); 313 - if (!s->intervals.desc) { 314 - s->intervals.segmented = false; 315 - s->intervals.count = 0; 316 - ret = -ENOMEM; 317 - break; 318 - } 319 - } 320 - } else if (desc_index + num_returned > s->intervals.count) { 321 - dev_err(ph->dev, 322 - "No. 
of update intervals can't exceed %d\n", 323 - s->intervals.count); 324 - ret = -EINVAL; 325 - break; 326 - } 364 + flags = le32_to_cpu(r->num_axis_flags); 365 + st->num_returned = NUM_AXIS_RETURNED(flags); 366 + st->num_remaining = NUM_AXIS_REMAINING(flags); 367 + st->priv = (void *)&r->desc[0]; 327 368 328 - for (cnt = 0; cnt < num_returned; cnt++) 329 - s->intervals.desc[desc_index + cnt] = 330 - le32_to_cpu(buf->intervals[cnt]); 369 + return 0; 370 + } 331 371 332 - desc_index += num_returned; 372 + static int 373 + iter_axes_desc_process_response(const struct scmi_protocol_handle *ph, 374 + const void *response, 375 + struct scmi_iterator_state *st, void *priv) 376 + { 377 + u32 attrh, attrl; 378 + struct scmi_sensor_axis_info *a; 379 + size_t dsize = SCMI_MSG_RESP_AXIS_DESCR_BASE_SZ; 380 + struct scmi_sensor_info *s = priv; 381 + const struct scmi_axis_descriptor *adesc = st->priv; 333 382 334 - ph->xops->reset_rx_to_maxsz(ph, ti); 335 - /* 336 - * check for both returned and remaining to avoid infinite 337 - * loop due to buggy firmware 338 - */ 339 - } while (num_returned && num_remaining); 383 + attrl = le32_to_cpu(adesc->attributes_low); 340 384 341 - ph->xops->xfer_put(ph, ti); 342 - return ret; 385 + a = &s->axis[st->desc_index + st->loop_idx]; 386 + a->id = le32_to_cpu(adesc->id); 387 + a->extended_attrs = SUPPORTS_EXTEND_ATTRS(attrl); 388 + 389 + attrh = le32_to_cpu(adesc->attributes_high); 390 + a->scale = S32_EXT(SENSOR_SCALE(attrh)); 391 + a->type = SENSOR_TYPE(attrh); 392 + strscpy(a->name, adesc->name, SCMI_MAX_STR_SIZE); 393 + 394 + if (a->extended_attrs) { 395 + unsigned int ares = le32_to_cpu(adesc->resolution); 396 + 397 + a->resolution = SENSOR_RES(ares); 398 + a->exponent = S32_EXT(SENSOR_RES_EXP(ares)); 399 + dsize += sizeof(adesc->resolution); 400 + 401 + scmi_parse_range_attrs(&a->attrs, &adesc->attrs); 402 + dsize += sizeof(adesc->attrs); 403 + } 404 + st->priv = ((u8 *)adesc + dsize); 405 + 406 + return 0; 407 + } 408 + 409 + static int 
410 + iter_axes_extended_name_update_state(struct scmi_iterator_state *st, 411 + const void *response, void *priv) 412 + { 413 + u32 flags; 414 + const struct scmi_msg_resp_sensor_axis_names_description *r = response; 415 + 416 + flags = le32_to_cpu(r->num_axis_flags); 417 + st->num_returned = NUM_AXIS_RETURNED(flags); 418 + st->num_remaining = NUM_AXIS_REMAINING(flags); 419 + st->priv = (void *)&r->desc[0]; 420 + 421 + return 0; 422 + } 423 + 424 + static int 425 + iter_axes_extended_name_process_response(const struct scmi_protocol_handle *ph, 426 + const void *response, 427 + struct scmi_iterator_state *st, 428 + void *priv) 429 + { 430 + struct scmi_sensor_axis_info *a; 431 + const struct scmi_sensor_info *s = priv; 432 + struct scmi_sensor_axis_name_descriptor *adesc = st->priv; 433 + 434 + a = &s->axis[st->desc_index + st->loop_idx]; 435 + strscpy(a->name, adesc->name, SCMI_MAX_STR_SIZE); 436 + st->priv = ++adesc; 437 + 438 + return 0; 439 + } 440 + 441 + static int 442 + scmi_sensor_axis_extended_names_get(const struct scmi_protocol_handle *ph, 443 + struct scmi_sensor_info *s) 444 + { 445 + void *iter; 446 + struct scmi_msg_sensor_axis_description_get *msg; 447 + struct scmi_iterator_ops ops = { 448 + .prepare_message = iter_axes_desc_prepare_message, 449 + .update_state = iter_axes_extended_name_update_state, 450 + .process_response = iter_axes_extended_name_process_response, 451 + }; 452 + 453 + iter = ph->hops->iter_response_init(ph, &ops, s->num_axis, 454 + SENSOR_AXIS_NAME_GET, 455 + sizeof(*msg), s); 456 + if (IS_ERR(iter)) 457 + return PTR_ERR(iter); 458 + 459 + return ph->hops->iter_response_run(iter); 343 460 } 344 461 345 462 static int scmi_sensor_axis_description(const struct scmi_protocol_handle *ph, 346 - struct scmi_sensor_info *s) 463 + struct scmi_sensor_info *s, 464 + u32 version) 347 465 { 348 - int ret, cnt; 349 - u32 desc_index = 0; 350 - u16 num_returned, num_remaining; 351 - struct scmi_xfer *te; 352 - struct 
scmi_msg_resp_sensor_axis_description *buf; 466 + int ret; 467 + void *iter; 353 468 struct scmi_msg_sensor_axis_description_get *msg; 469 + struct scmi_iterator_ops ops = { 470 + .prepare_message = iter_axes_desc_prepare_message, 471 + .update_state = iter_axes_desc_update_state, 472 + .process_response = iter_axes_desc_process_response, 473 + }; 354 474 355 475 s->axis = devm_kcalloc(ph->dev, s->num_axis, 356 476 sizeof(*s->axis), GFP_KERNEL); 357 477 if (!s->axis) 358 478 return -ENOMEM; 359 479 360 - ret = ph->xops->xfer_get_init(ph, SENSOR_AXIS_DESCRIPTION_GET, 361 - sizeof(*msg), 0, &te); 480 + iter = ph->hops->iter_response_init(ph, &ops, s->num_axis, 481 + SENSOR_AXIS_DESCRIPTION_GET, 482 + sizeof(*msg), s); 483 + if (IS_ERR(iter)) 484 + return PTR_ERR(iter); 485 + 486 + ret = ph->hops->iter_response_run(iter); 362 487 if (ret) 363 488 return ret; 364 489 365 - buf = te->rx.buf; 366 - do { 367 - u32 flags; 368 - struct scmi_axis_descriptor *adesc; 490 + if (PROTOCOL_REV_MAJOR(version) >= 0x3) 491 + ret = scmi_sensor_axis_extended_names_get(ph, s); 369 492 370 - msg = te->tx.buf; 371 - /* Set the number of sensors to be skipped/already read */ 372 - msg->id = cpu_to_le32(s->id); 373 - msg->axis_desc_index = cpu_to_le32(desc_index); 493 + return ret; 494 + } 374 495 375 - ret = ph->xops->do_xfer(ph, te); 376 - if (ret) 377 - break; 496 + static void iter_sens_descr_prepare_message(void *message, 497 + unsigned int desc_index, 498 + const void *priv) 499 + { 500 + struct scmi_msg_sensor_description *msg = message; 378 501 379 - flags = le32_to_cpu(buf->num_axis_flags); 380 - num_returned = NUM_AXIS_RETURNED(flags); 381 - num_remaining = NUM_AXIS_REMAINING(flags); 502 + msg->desc_index = cpu_to_le32(desc_index); 503 + } 382 504 383 - if (desc_index + num_returned > s->num_axis) { 384 - dev_err(ph->dev, "No. 
of axis can't exceed %d\n", 385 - s->num_axis); 386 - break; 387 - } 505 + static int iter_sens_descr_update_state(struct scmi_iterator_state *st, 506 + const void *response, void *priv) 507 + { 508 + const struct scmi_msg_resp_sensor_description *r = response; 388 509 389 - adesc = &buf->desc[0]; 390 - for (cnt = 0; cnt < num_returned; cnt++) { 391 - u32 attrh, attrl; 392 - struct scmi_sensor_axis_info *a; 393 - size_t dsize = SCMI_MSG_RESP_AXIS_DESCR_BASE_SZ; 510 + st->num_returned = le16_to_cpu(r->num_returned); 511 + st->num_remaining = le16_to_cpu(r->num_remaining); 512 + st->priv = (void *)&r->desc[0]; 394 513 395 - attrl = le32_to_cpu(adesc->attributes_low); 514 + return 0; 515 + } 396 516 397 - a = &s->axis[desc_index + cnt]; 517 + static int 518 + iter_sens_descr_process_response(const struct scmi_protocol_handle *ph, 519 + const void *response, 520 + struct scmi_iterator_state *st, void *priv) 398 521 399 - a->id = le32_to_cpu(adesc->id); 400 - a->extended_attrs = SUPPORTS_EXTEND_ATTRS(attrl); 522 + { 523 + int ret = 0; 524 + u32 attrh, attrl; 525 + size_t dsize = SCMI_MSG_RESP_SENS_DESCR_BASE_SZ; 526 + struct scmi_sensor_info *s; 527 + struct sensors_info *si = priv; 528 + const struct scmi_sensor_descriptor *sdesc = st->priv; 401 529 402 - attrh = le32_to_cpu(adesc->attributes_high); 403 - a->scale = S32_EXT(SENSOR_SCALE(attrh)); 404 - a->type = SENSOR_TYPE(attrh); 405 - strlcpy(a->name, adesc->name, SCMI_MAX_STR_SIZE); 530 + s = &si->sensors[st->desc_index + st->loop_idx]; 531 + s->id = le32_to_cpu(sdesc->id); 406 532 407 - if (a->extended_attrs) { 408 - unsigned int ares = 409 - le32_to_cpu(adesc->resolution); 533 + attrl = le32_to_cpu(sdesc->attributes_low); 534 + /* common bitfields parsing */ 535 + s->async = SUPPORTS_ASYNC_READ(attrl); 536 + s->num_trip_points = NUM_TRIP_POINTS(attrl); 537 + /** 538 + * only SCMIv3.0 specific bitfield below. 
539 + * Such bitfields are assumed to be zeroed on non 540 + * relevant fw versions...assuming fw not buggy ! 541 + */ 542 + s->update = SUPPORTS_UPDATE_NOTIFY(attrl); 543 + s->timestamped = SUPPORTS_TIMESTAMP(attrl); 544 + if (s->timestamped) 545 + s->tstamp_scale = S32_EXT(SENSOR_TSTAMP_EXP(attrl)); 546 + s->extended_scalar_attrs = SUPPORTS_EXTEND_ATTRS(attrl); 410 547 411 - a->resolution = SENSOR_RES(ares); 412 - a->exponent = 413 - S32_EXT(SENSOR_RES_EXP(ares)); 414 - dsize += sizeof(adesc->resolution); 415 - 416 - scmi_parse_range_attrs(&a->attrs, 417 - &adesc->attrs); 418 - dsize += sizeof(adesc->attrs); 419 - } 420 - 421 - adesc = (typeof(adesc))((u8 *)adesc + dsize); 422 - } 423 - 424 - desc_index += num_returned; 425 - 426 - ph->xops->reset_rx_to_maxsz(ph, te); 548 + attrh = le32_to_cpu(sdesc->attributes_high); 549 + /* common bitfields parsing */ 550 + s->scale = S32_EXT(SENSOR_SCALE(attrh)); 551 + s->type = SENSOR_TYPE(attrh); 552 + /* Use pre-allocated pool wherever possible */ 553 + s->intervals.desc = s->intervals.prealloc_pool; 554 + if (si->version == SCMIv2_SENSOR_PROTOCOL) { 555 + s->intervals.segmented = false; 556 + s->intervals.count = 1; 427 557 /* 428 - * check for both returned and remaining to avoid infinite 429 - * loop due to buggy firmware 558 + * Convert SCMIv2.0 update interval format to 559 + * SCMIv3.0 to be used as the common exposed 560 + * descriptor, accessible via common macros. 430 561 */ 431 - } while (num_returned && num_remaining); 562 + s->intervals.desc[0] = (SENSOR_UPDATE_BASE(attrh) << 5) | 563 + SENSOR_UPDATE_SCALE(attrh); 564 + } else { 565 + /* 566 + * From SCMIv3.0 update intervals are retrieved 567 + * via a dedicated (optional) command. 568 + * Since the command is optional, on error carry 569 + * on without any update interval. 
570 + */ 571 + if (scmi_sensor_update_intervals(ph, s)) 572 + dev_dbg(ph->dev, 573 + "Update Intervals not available for sensor ID:%d\n", 574 + s->id); 575 + } 576 + /** 577 + * only > SCMIv2.0 specific bitfield below. 578 + * Such bitfields are assumed to be zeroed on non 579 + * relevant fw versions...assuming fw not buggy ! 580 + */ 581 + s->num_axis = min_t(unsigned int, 582 + SUPPORTS_AXIS(attrh) ? 583 + SENSOR_AXIS_NUMBER(attrh) : 0, 584 + SCMI_MAX_NUM_SENSOR_AXIS); 585 + strscpy(s->name, sdesc->name, SCMI_MAX_STR_SIZE); 432 586 433 - ph->xops->xfer_put(ph, te); 587 + /* 588 + * If supported overwrite short name with the extended 589 + * one; on error just carry on and use already provided 590 + * short name. 591 + */ 592 + if (PROTOCOL_REV_MAJOR(si->version) >= 0x3 && 593 + SUPPORTS_EXTENDED_NAMES(attrl)) 594 + ph->hops->extended_name_get(ph, SENSOR_NAME_GET, s->id, 595 + s->name, SCMI_MAX_STR_SIZE); 596 + 597 + if (s->extended_scalar_attrs) { 598 + s->sensor_power = le32_to_cpu(sdesc->power); 599 + dsize += sizeof(sdesc->power); 600 + 601 + /* Only for sensors reporting scalar values */ 602 + if (s->num_axis == 0) { 603 + unsigned int sres = le32_to_cpu(sdesc->resolution); 604 + 605 + s->resolution = SENSOR_RES(sres); 606 + s->exponent = S32_EXT(SENSOR_RES_EXP(sres)); 607 + dsize += sizeof(sdesc->resolution); 608 + 609 + scmi_parse_range_attrs(&s->scalar_attrs, 610 + &sdesc->scalar_attrs); 611 + dsize += sizeof(sdesc->scalar_attrs); 612 + } 613 + } 614 + 615 + if (s->num_axis > 0) 616 + ret = scmi_sensor_axis_description(ph, s, si->version); 617 + 618 + st->priv = ((u8 *)sdesc + dsize); 619 + 434 620 return ret; 435 621 } 436 622 437 623 static int scmi_sensor_description_get(const struct scmi_protocol_handle *ph, 438 624 struct sensors_info *si) 439 625 { 440 - int ret, cnt; 441 - u32 desc_index = 0; 442 - u16 num_returned, num_remaining; 443 - struct scmi_xfer *t; 444 - struct scmi_msg_resp_sensor_description *buf; 626 + void *iter; 627 + struct 
scmi_iterator_ops ops = {
+		.prepare_message = iter_sens_descr_prepare_message,
+		.update_state = iter_sens_descr_update_state,
+		.process_response = iter_sens_descr_process_response,
+	};

-	ret = ph->xops->xfer_get_init(ph, SENSOR_DESCRIPTION_GET,
-				      sizeof(__le32), 0, &t);
-	if (ret)
-		return ret;
+	iter = ph->hops->iter_response_init(ph, &ops, si->num_sensors,
+					    SENSOR_DESCRIPTION_GET,
+					    sizeof(__le32), si);
+	if (IS_ERR(iter))
+		return PTR_ERR(iter);

-	buf = t->rx.buf;
-
-	do {
-		struct scmi_sensor_descriptor *sdesc;
-
-		/* Set the number of sensors to be skipped/already read */
-		put_unaligned_le32(desc_index, t->tx.buf);
-
-		ret = ph->xops->do_xfer(ph, t);
-		if (ret)
-			break;
-
-		num_returned = le16_to_cpu(buf->num_returned);
-		num_remaining = le16_to_cpu(buf->num_remaining);
-
-		if (desc_index + num_returned > si->num_sensors) {
-			dev_err(ph->dev, "No. of sensors can't exceed %d",
-				si->num_sensors);
-			break;
-		}
-
-		sdesc = &buf->desc[0];
-		for (cnt = 0; cnt < num_returned; cnt++) {
-			u32 attrh, attrl;
-			struct scmi_sensor_info *s;
-			size_t dsize = SCMI_MSG_RESP_SENS_DESCR_BASE_SZ;
-
-			s = &si->sensors[desc_index + cnt];
-			s->id = le32_to_cpu(sdesc->id);
-
-			attrl = le32_to_cpu(sdesc->attributes_low);
-			/* common bitfields parsing */
-			s->async = SUPPORTS_ASYNC_READ(attrl);
-			s->num_trip_points = NUM_TRIP_POINTS(attrl);
-			/**
-			 * only SCMIv3.0 specific bitfield below.
-			 * Such bitfields are assumed to be zeroed on non
-			 * relevant fw versions...assuming fw not buggy !
-			 */
-			s->update = SUPPORTS_UPDATE_NOTIFY(attrl);
-			s->timestamped = SUPPORTS_TIMESTAMP(attrl);
-			if (s->timestamped)
-				s->tstamp_scale =
-					S32_EXT(SENSOR_TSTAMP_EXP(attrl));
-			s->extended_scalar_attrs =
-				SUPPORTS_EXTEND_ATTRS(attrl);
-
-			attrh = le32_to_cpu(sdesc->attributes_high);
-			/* common bitfields parsing */
-			s->scale = S32_EXT(SENSOR_SCALE(attrh));
-			s->type = SENSOR_TYPE(attrh);
-			/* Use pre-allocated pool wherever possible */
-			s->intervals.desc = s->intervals.prealloc_pool;
-			if (si->version == SCMIv2_SENSOR_PROTOCOL) {
-				s->intervals.segmented = false;
-				s->intervals.count = 1;
-				/*
-				 * Convert SCMIv2.0 update interval format to
-				 * SCMIv3.0 to be used as the common exposed
-				 * descriptor, accessible via common macros.
-				 */
-				s->intervals.desc[0] =
-					(SENSOR_UPDATE_BASE(attrh) << 5) |
-					SENSOR_UPDATE_SCALE(attrh);
-			} else {
-				/*
-				 * From SCMIv3.0 update intervals are retrieved
-				 * via a dedicated (optional) command.
-				 * Since the command is optional, on error carry
-				 * on without any update interval.
-				 */
-				if (scmi_sensor_update_intervals(ph, s))
-					dev_dbg(ph->dev,
-						"Update Intervals not available for sensor ID:%d\n",
-						s->id);
-			}
-			/**
-			 * only > SCMIv2.0 specific bitfield below.
-			 * Such bitfields are assumed to be zeroed on non
-			 * relevant fw versions...assuming fw not buggy !
-			 */
-			s->num_axis = min_t(unsigned int,
-					    SUPPORTS_AXIS(attrh) ?
-					    SENSOR_AXIS_NUMBER(attrh) : 0,
-					    SCMI_MAX_NUM_SENSOR_AXIS);
-			strlcpy(s->name, sdesc->name, SCMI_MAX_STR_SIZE);
-
-			if (s->extended_scalar_attrs) {
-				s->sensor_power = le32_to_cpu(sdesc->power);
-				dsize += sizeof(sdesc->power);
-				/* Only for sensors reporting scalar values */
-				if (s->num_axis == 0) {
-					unsigned int sres =
-						le32_to_cpu(sdesc->resolution);
-
-					s->resolution = SENSOR_RES(sres);
-					s->exponent =
-						S32_EXT(SENSOR_RES_EXP(sres));
-					dsize += sizeof(sdesc->resolution);
-
-					scmi_parse_range_attrs(&s->scalar_attrs,
-							       &sdesc->scalar_attrs);
-					dsize += sizeof(sdesc->scalar_attrs);
-				}
-			}
-			if (s->num_axis > 0) {
-				ret = scmi_sensor_axis_description(ph, s);
-				if (ret)
-					goto out;
-			}
-
-			sdesc = (typeof(sdesc))((u8 *)sdesc + dsize);
-		}
-
-		desc_index += num_returned;
-
-		ph->xops->reset_rx_to_maxsz(ph, t);
-		/*
-		 * check for both returned and remaining to avoid infinite
-		 * loop due to buggy firmware
-		 */
-	} while (num_returned && num_remaining);
-
-out:
-	ph->xops->xfer_put(ph, t);
-	return ret;
+	return ph->hops->iter_response_run(iter);
 }

 static inline int
···
 	int ret;
 	struct sensors_info *sinfo;

-	ph->xops->version_get(ph, &version);
+	ret = ph->xops->version_get(ph, &version);
+	if (ret)
+		return ret;

 	dev_dbg(ph->dev, "Sensor Version %d.%d\n",
 		PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));
+6 -3
drivers/firmware/arm_scmi/system.c
···
 /*
  * System Control and Management Interface (SCMI) System Power Protocol
  *
- * Copyright (C) 2020-2021 ARM Ltd.
+ * Copyright (C) 2020-2022 ARM Ltd.
  */

 #define pr_fmt(fmt) "SCMI Notifications SYSTEM - " fmt
···
 #include <linux/module.h>
 #include <linux/scmi_protocol.h>

-#include "common.h"
+#include "protocols.h"
 #include "notify.h"

 #define SCMI_SYSTEM_NUM_SOURCES		1
···

 static int scmi_system_protocol_init(const struct scmi_protocol_handle *ph)
 {
+	int ret;
 	u32 version;
 	struct scmi_system_info *pinfo;

-	ph->xops->version_get(ph, &version);
+	ret = ph->xops->version_get(ph, &version);
+	if (ret)
+		return ret;

 	dev_dbg(ph->dev, "System Power Version %d.%d\n",
 		PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));
+144 -74
drivers/firmware/arm_scmi/voltage.c
···
 /*
  * System Control and Management Interface (SCMI) Voltage Protocol
  *
- * Copyright (C) 2020-2021 ARM Ltd.
+ * Copyright (C) 2020-2022 ARM Ltd.
  */

 #include <linux/module.h>
 #include <linux/scmi_protocol.h>

-#include "common.h"
+#include "protocols.h"

 #define VOLTAGE_DOMS_NUM_MASK		GENMASK(15, 0)
 #define REMAINING_LEVELS_MASK		GENMASK(31, 16)
···
 	VOLTAGE_CONFIG_GET = 0x6,
 	VOLTAGE_LEVEL_SET = 0x7,
 	VOLTAGE_LEVEL_GET = 0x8,
+	VOLTAGE_DOMAIN_NAME_GET = 0x09,
 };

 #define NUM_VOLTAGE_DOMAINS(x)	((u16)(FIELD_GET(VOLTAGE_DOMS_NUM_MASK, (x))))

 struct scmi_msg_resp_domain_attributes {
 	__le32 attr;
+#define SUPPORTS_ASYNC_LEVEL_SET(x)	((x) & BIT(31))
+#define SUPPORTS_EXTENDED_NAMES(x)	((x) & BIT(30))
-	u8 name[SCMI_MAX_STR_SIZE];
+	u8 name[SCMI_SHORT_NAME_MAX_SIZE];
 };

 struct scmi_msg_cmd_describe_levels {
···
 struct scmi_msg_cmd_level_set {
 	__le32 domain_id;
 	__le32 flags;
+	__le32 voltage_level;
+};
+
+struct scmi_resp_voltage_level_set_complete {
+	__le32 domain_id;
 	__le32 voltage_level;
 };
···
 	return 0;
 }

+struct scmi_volt_ipriv {
+	struct device *dev;
+	struct scmi_voltage_info *v;
+};
+
+static void iter_volt_levels_prepare_message(void *message,
+					     unsigned int desc_index,
+					     const void *priv)
+{
+	struct scmi_msg_cmd_describe_levels *msg = message;
+	const struct scmi_volt_ipriv *p = priv;
+
+	msg->domain_id = cpu_to_le32(p->v->id);
+	msg->level_index = cpu_to_le32(desc_index);
+}
+
+static int iter_volt_levels_update_state(struct scmi_iterator_state *st,
+					 const void *response, void *priv)
+{
+	int ret = 0;
+	u32 flags;
+	const struct scmi_msg_resp_describe_levels *r = response;
+	struct scmi_volt_ipriv *p = priv;
+
+	flags = le32_to_cpu(r->flags);
+	st->num_returned = NUM_RETURNED_LEVELS(flags);
+	st->num_remaining = NUM_REMAINING_LEVELS(flags);
+
+	/* Allocate space for num_levels if not already done */
+	if (!p->v->num_levels) {
+		ret = scmi_init_voltage_levels(p->dev, p->v, st->num_returned,
+					       st->num_remaining,
+					       SUPPORTS_SEGMENTED_LEVELS(flags));
+		if (!ret)
+			st->max_resources = p->v->num_levels;
+	}
+
+	return ret;
+}
+
+static int
+iter_volt_levels_process_response(const struct scmi_protocol_handle *ph,
+				  const void *response,
+				  struct scmi_iterator_state *st, void *priv)
+{
+	s32 val;
+	const struct scmi_msg_resp_describe_levels *r = response;
+	struct scmi_volt_ipriv *p = priv;
+
+	val = (s32)le32_to_cpu(r->voltage[st->loop_idx]);
+	p->v->levels_uv[st->desc_index + st->loop_idx] = val;
+	if (val < 0)
+		p->v->negative_volts_allowed = true;
+
+	return 0;
+}
+
+static int scmi_voltage_levels_get(const struct scmi_protocol_handle *ph,
+				   struct scmi_voltage_info *v)
+{
+	int ret;
+	void *iter;
+	struct scmi_msg_cmd_describe_levels *msg;
+	struct scmi_iterator_ops ops = {
+		.prepare_message = iter_volt_levels_prepare_message,
+		.update_state = iter_volt_levels_update_state,
+		.process_response = iter_volt_levels_process_response,
+	};
+	struct scmi_volt_ipriv vpriv = {
+		.dev = ph->dev,
+		.v = v,
+	};
+
+	iter = ph->hops->iter_response_init(ph, &ops, v->num_levels,
+					    VOLTAGE_DESCRIBE_LEVELS,
+					    sizeof(*msg), &vpriv);
+	if (IS_ERR(iter))
+		return PTR_ERR(iter);
+
+	ret = ph->hops->iter_response_run(iter);
+	if (ret) {
+		v->num_levels = 0;
+		devm_kfree(ph->dev, v->levels_uv);
+	}
+
+	return ret;
+}
+
 static int scmi_voltage_descriptors_get(const struct scmi_protocol_handle *ph,
 					struct voltage_info *vinfo)
 {
 	int ret, dom;
-	struct scmi_xfer *td, *tl;
-	struct device *dev = ph->dev;
+	struct scmi_xfer *td;
 	struct scmi_msg_resp_domain_attributes *resp_dom;
-	struct scmi_msg_resp_describe_levels *resp_levels;

 	ret = ph->xops->xfer_get_init(ph, VOLTAGE_DOMAIN_ATTRIBUTES,
 				      sizeof(__le32), sizeof(*resp_dom), &td);
···
 		return ret;
 	resp_dom = td->rx.buf;

-	ret = ph->xops->xfer_get_init(ph, VOLTAGE_DESCRIBE_LEVELS,
-				      sizeof(__le64), 0, &tl);
-	if (ret)
-		goto outd;
-	resp_levels = tl->rx.buf;
-
 	for (dom = 0; dom < vinfo->num_domains; dom++) {
-		u32 desc_index = 0;
-		u16 num_returned = 0, num_remaining = 0;
-		struct scmi_msg_cmd_describe_levels *cmd;
+		u32 attributes;
 		struct scmi_voltage_info *v;

 		/* Retrieve domain attributes at first ... */
···
 		v = vinfo->domains + dom;
 		v->id = dom;
-		v->attributes = le32_to_cpu(resp_dom->attr);
+		attributes = le32_to_cpu(resp_dom->attr);
 		strlcpy(v->name, resp_dom->name, SCMI_MAX_STR_SIZE);

-		cmd = tl->tx.buf;
-		/* ...then retrieve domain levels descriptions */
-		do {
-			u32 flags;
-			int cnt;
-
-			cmd->domain_id = cpu_to_le32(v->id);
-			cmd->level_index = cpu_to_le32(desc_index);
-			ret = ph->xops->do_xfer(ph, tl);
-			if (ret)
-				break;
-
-			flags = le32_to_cpu(resp_levels->flags);
-			num_returned = NUM_RETURNED_LEVELS(flags);
-			num_remaining = NUM_REMAINING_LEVELS(flags);
-
-			/* Allocate space for num_levels if not already done */
-			if (!v->num_levels) {
-				ret = scmi_init_voltage_levels(dev, v,
-							       num_returned,
-							       num_remaining,
-							       SUPPORTS_SEGMENTED_LEVELS(flags));
-				if (ret)
-					break;
-			}
-
-			if (desc_index + num_returned > v->num_levels) {
-				dev_err(ph->dev,
-					"No. of voltage levels can't exceed %d\n",
-					v->num_levels);
-				ret = -EINVAL;
-				break;
-			}
-
-			for (cnt = 0; cnt < num_returned; cnt++) {
-				s32 val;
-
-				val = (s32)le32_to_cpu(resp_levels->voltage[cnt]);
-				v->levels_uv[desc_index + cnt] = val;
-				if (val < 0)
-					v->negative_volts_allowed = true;
-			}
-
-			desc_index += num_returned;
-
-			ph->xops->reset_rx_to_maxsz(ph, tl);
-			/* check both to avoid infinite loop due to buggy fw */
-		} while (num_returned && num_remaining);
-
-		if (ret) {
-			v->num_levels = 0;
-			devm_kfree(dev, v->levels_uv);
+		/*
+		 * If supported overwrite short name with the extended one;
+		 * on error just carry on and use already provided short name.
+		 */
+		if (PROTOCOL_REV_MAJOR(vinfo->version) >= 0x2) {
+			if (SUPPORTS_EXTENDED_NAMES(attributes))
+				ph->hops->extended_name_get(ph,
+							VOLTAGE_DOMAIN_NAME_GET,
+							v->id, v->name,
+							SCMI_MAX_STR_SIZE);
+			if (SUPPORTS_ASYNC_LEVEL_SET(attributes))
+				v->async_level_set = true;
 		}
+
+		ret = scmi_voltage_levels_get(ph, v);
+		/* Skip invalid voltage descriptors */
+		if (ret)
+			continue;

 		ph->xops->reset_rx_to_maxsz(ph, td);
 	}

-	ph->xops->xfer_put(ph, tl);
-outd:
 	ph->xops->xfer_put(ph, td);

 	return ret;
···
 }

 static int scmi_voltage_level_set(const struct scmi_protocol_handle *ph,
-				  u32 domain_id, u32 flags, s32 volt_uV)
+				  u32 domain_id,
+				  enum scmi_voltage_level_mode mode,
+				  s32 volt_uV)
 {
 	int ret;
 	struct scmi_xfer *t;
 	struct voltage_info *vinfo = ph->get_priv(ph);
 	struct scmi_msg_cmd_level_set *cmd;
+	struct scmi_voltage_info *v;

 	if (domain_id >= vinfo->num_domains)
 		return -EINVAL;
···
 	if (ret)
 		return ret;

+	v = vinfo->domains + domain_id;
+
 	cmd = t->tx.buf;
 	cmd->domain_id = cpu_to_le32(domain_id);
-	cmd->flags = cpu_to_le32(flags);
 	cmd->voltage_level = cpu_to_le32(volt_uV);

-	ret = ph->xops->do_xfer(ph, t);
+	if (!v->async_level_set || mode != SCMI_VOLTAGE_LEVEL_SET_AUTO) {
+		cmd->flags = cpu_to_le32(0x0);
+		ret = ph->xops->do_xfer(ph, t);
+	} else {
+		cmd->flags = cpu_to_le32(0x1);
+		ret = ph->xops->do_xfer_with_response(ph, t);
+		if (!ret) {
+			struct scmi_resp_voltage_level_set_complete *resp;
+
+			resp = t->rx.buf;
+			if (le32_to_cpu(resp->domain_id) == domain_id)
+				dev_dbg(ph->dev,
+					"Voltage domain %d set async to %d\n",
+					v->id,
+					le32_to_cpu(resp->voltage_level));
+			else
+				ret = -EPROTO;
+		}
+	}

 	ph->xops->xfer_put(ph, t);
 	return ret;
+4
drivers/firmware/qcom_scm.c
···
 						     SCM_HAS_IFACE_CLK |
 						     SCM_HAS_BUS_CLK)
 	},
+	{ .compatible = "qcom,scm-msm8976", .data = (void *)(SCM_HAS_CORE_CLK |
+							     SCM_HAS_IFACE_CLK |
+							     SCM_HAS_BUS_CLK)
+	},
 	{ .compatible = "qcom,scm-msm8994" },
 	{ .compatible = "qcom,scm-msm8996" },
 	{ .compatible = "qcom,scm" },
+55 -6
drivers/firmware/ti_sci.c
···
 /*
  * Texas Instruments System Control Interface Protocol Driver
  *
- * Copyright (C) 2015-2016 Texas Instruments Incorporated - https://www.ti.com/
+ * Copyright (C) 2015-2022 Texas Instruments Incorporated - https://www.ti.com/
  *	Nishanth Menon
  */
···
 #include <linux/debugfs.h>
 #include <linux/export.h>
 #include <linux/io.h>
+#include <linux/iopoll.h>
 #include <linux/kernel.h>
 #include <linux/mailbox_client.h>
 #include <linux/module.h>
···
  * @node:	list head
  * @host_id:	Host ID
  * @users:	Number of users of this instance
+ * @is_suspending: Flag set to indicate in suspend path.
  */
 struct ti_sci_info {
 	struct device *dev;
···
 	u8 host_id;
 	/* protected by ti_sci_list_mutex */
 	int users;
-
+	bool is_suspending;
 };

 #define cl_to_ti_sci_info(c)	container_of(c, struct ti_sci_info, cl)
···
 	hdr = (struct ti_sci_msg_hdr *)xfer->tx_message.buf;
 	xfer->tx_message.len = tx_message_size;
+	xfer->tx_message.chan_rx = info->chan_rx;
+	xfer->tx_message.timeout_rx_ms = info->desc->max_rx_timeout_ms;
 	xfer->rx_len = (u8)rx_message_size;

 	reinit_completion(&xfer->done);
···
 	int ret;
 	int timeout;
 	struct device *dev = info->dev;
+	bool done_state = true;

 	ret = mbox_send_message(info->chan_tx, &xfer->tx_message);
 	if (ret < 0)
···
 	ret = 0;

-	/* And we wait for the response. */
-	timeout = msecs_to_jiffies(info->desc->max_rx_timeout_ms);
-	if (!wait_for_completion_timeout(&xfer->done, timeout)) {
+	if (!info->is_suspending) {
+		/* And we wait for the response. */
+		timeout = msecs_to_jiffies(info->desc->max_rx_timeout_ms);
+		if (!wait_for_completion_timeout(&xfer->done, timeout))
+			ret = -ETIMEDOUT;
+	} else {
+		/*
+		 * If we are suspending, we cannot use wait_for_completion_timeout
+		 * during noirq phase, so we must manually poll the completion.
+		 */
+		ret = read_poll_timeout_atomic(try_wait_for_completion, done_state,
+					       true, 1,
+					       info->desc->max_rx_timeout_ms * 1000,
+					       false, &xfer->done);
+	}
+
+	if (ret == -ETIMEDOUT || !done_state) {
 		dev_err(dev, "Mbox timedout in resp(caller: %pS)\n",
 			(void *)_RET_IP_);
-		ret = -ETIMEDOUT;
 	}
+
 	/*
 	 * NOTE: we might prefer not to need the mailbox ticker to manage the
 	 * transfer queueing since the protocol layer queues things by itself.
···
 	return NOTIFY_BAD;
 }

+static void ti_sci_set_is_suspending(struct ti_sci_info *info, bool is_suspending)
+{
+	info->is_suspending = is_suspending;
+}
+
+static int ti_sci_suspend(struct device *dev)
+{
+	struct ti_sci_info *info = dev_get_drvdata(dev);
+	/*
+	 * We must switch operation to polled mode now as drivers and the genpd
+	 * layer may make late TI SCI calls to change clock and device states
+	 * from the noirq phase of suspend.
+	 */
+	ti_sci_set_is_suspending(info, true);
+
+	return 0;
+}
+
+static int ti_sci_resume(struct device *dev)
+{
+	struct ti_sci_info *info = dev_get_drvdata(dev);
+
+	ti_sci_set_is_suspending(info, false);
+
+	return 0;
+}
+
+static DEFINE_SIMPLE_DEV_PM_OPS(ti_sci_pm_ops, ti_sci_suspend, ti_sci_resume);
+
 /* Description for K2G */
 static const struct ti_sci_desc ti_sci_pmmc_k2g_desc = {
 	.default_host_id = 2,
···
 	.driver = {
 		   .name = "ti-sci",
 		   .of_match_table = of_match_ptr(ti_sci_of_match),
+		   .pm = &ti_sci_pm_ops,
 	},
 };
 module_platform_driver(ti_sci_driver);
+1 -1
drivers/memory/Kconfig
···
 	  temperature changes

 config OMAP_GPMC
-	bool "Texas Instruments OMAP SoC GPMC driver" if COMPILE_TEST
+	tristate "Texas Instruments OMAP SoC GPMC driver"
 	depends on OF_ADDRESS
 	select GPIOLIB
 	help
+3 -7
drivers/memory/brcmstb_dpfe.c
···
 {
 	struct device *dev = &pdev->dev;
 	struct brcmstb_dpfe_priv *priv;
-	struct resource *res;
 	int ret;

 	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
···
 	mutex_init(&priv->lock);
 	platform_set_drvdata(pdev, priv);

-	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dpfe-cpu");
-	priv->regs = devm_ioremap_resource(dev, res);
+	priv->regs = devm_platform_ioremap_resource_byname(pdev, "dpfe-cpu");
 	if (IS_ERR(priv->regs)) {
 		dev_err(dev, "couldn't map DCPU registers\n");
 		return -ENODEV;
 	}

-	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dpfe-dmem");
-	priv->dmem = devm_ioremap_resource(dev, res);
+	priv->dmem = devm_platform_ioremap_resource_byname(pdev, "dpfe-dmem");
 	if (IS_ERR(priv->dmem)) {
 		dev_err(dev, "Couldn't map DCPU data memory\n");
 		return -ENOENT;
 	}

-	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dpfe-imem");
-	priv->imem = devm_ioremap_resource(dev, res);
+	priv->imem = devm_platform_ioremap_resource_byname(pdev, "dpfe-imem");
 	if (IS_ERR(priv->imem)) {
 		dev_err(dev, "Couldn't map DCPU instruction memory\n");
 		return -ENOENT;
+1 -2
drivers/memory/da8xx-ddrctl.c
···
 		return -EINVAL;
 	}

-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	ddrctl = devm_ioremap_resource(dev, res);
+	ddrctl = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
 	if (IS_ERR(ddrctl)) {
 		dev_err(dev, "unable to map memory controller registers\n");
 		return PTR_ERR(ddrctl);
+2 -13
drivers/memory/emif.c
···
 	temp = devm_kzalloc(dev, sizeof(*pd), GFP_KERNEL);
 	dev_info = devm_kzalloc(dev, sizeof(*dev_info), GFP_KERNEL);

-	if (!emif || !temp || !dev_info) {
-		dev_err(dev, "%s:%d: allocation error\n", __func__, __LINE__);
+	if (!emif || !temp || !dev_info)
 		goto error;
-	}

 	memcpy(temp, pd, sizeof(*pd));
 	pd = temp;
···
 		temp = devm_kzalloc(dev, sizeof(*cust_cfgs), GFP_KERNEL);
 		if (temp)
 			memcpy(temp, cust_cfgs, sizeof(*cust_cfgs));
-		else
-			dev_warn(dev, "%s:%d: allocation error\n", __func__,
-				 __LINE__);
 		pd->custom_configs = temp;
 	}
···
 			memcpy(temp, pd->timings, size);
 			pd->timings = temp;
 		} else {
-			dev_warn(dev, "%s:%d: allocation error\n", __func__,
-				 __LINE__);
 			get_default_timings(emif);
 		}
 	} else {
···
 			memcpy(temp, pd->min_tck, sizeof(*pd->min_tck));
 			pd->min_tck = temp;
 		} else {
-			dev_warn(dev, "%s:%d: allocation error\n", __func__,
-				 __LINE__);
 			pd->min_tck = &lpddr2_jedec_min_tck;
 		}
 	} else {
···
 static int __init_or_module emif_probe(struct platform_device *pdev)
 {
 	struct emif_data *emif;
-	struct resource *res;
 	int irq, ret;

 	if (pdev->dev.of_node)
···
 	emif->dev = &pdev->dev;
 	platform_set_drvdata(pdev, emif);

-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	emif->base = devm_ioremap_resource(emif->dev, res);
+	emif->base = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(emif->base))
 		goto error;
+1 -8
drivers/memory/fsl-corenet-cf.c
···
 static int ccf_probe(struct platform_device *pdev)
 {
 	struct ccf_private *ccf;
-	struct resource *r;
 	const struct of_device_id *match;
 	u32 errinten;
 	int ret, irq;
···
 	if (!ccf)
 		return -ENOMEM;

-	r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	if (!r) {
-		dev_err(&pdev->dev, "%s: no mem resource\n", __func__);
-		return -ENXIO;
-	}
-
-	ccf->regs = devm_ioremap_resource(&pdev->dev, r);
+	ccf->regs = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(ccf->regs))
 		return PTR_ERR(ccf->regs);
+23 -20
drivers/memory/omap-gpmc.c
···
 #include <linux/cpu_pm.h>
 #include <linux/irq.h>
 #include <linux/kernel.h>
+#include <linux/module.h>
 #include <linux/init.h>
 #include <linux/err.h>
 #include <linux/clk.h>
···
 }

 #ifdef CONFIG_OF
-static const struct of_device_id gpmc_dt_ids[] = {
-	{ .compatible = "ti,omap2420-gpmc" },
-	{ .compatible = "ti,omap2430-gpmc" },
-	{ .compatible = "ti,omap3430-gpmc" },	/* omap3430 & omap3630 */
-	{ .compatible = "ti,omap4430-gpmc" },	/* omap4430 & omap4460 & omap543x */
-	{ .compatible = "ti,am3352-gpmc" },	/* am335x devices */
-	{ .compatible = "ti,am64-gpmc" },
-	{ }
-};
-
 static void gpmc_cs_set_name(int cs, const char *name)
 {
 	struct gpmc_cs_data *gpmc = &gpmc_cs[cs];
···
 	if (!of_platform_device_create(child, NULL, &pdev->dev))
 		goto err_child_fail;

-	/* is child a common bus? */
-	if (of_match_node(of_default_bus_match_table, child))
-		/* create children and other common bus children */
-		if (of_platform_default_populate(child, NULL, &pdev->dev))
-			goto err_child_fail;
+	/* create children and other common bus children */
+	if (of_platform_default_populate(child, NULL, &pdev->dev))
+		goto err_child_fail;

 	return 0;

···

 	return ret;
 }
+
+static const struct of_device_id gpmc_dt_ids[];

 static int gpmc_probe_dt(struct platform_device *pdev)
 {
···

 static SIMPLE_DEV_PM_OPS(gpmc_pm_ops, gpmc_suspend, gpmc_resume);

+#ifdef CONFIG_OF
+static const struct of_device_id gpmc_dt_ids[] = {
+	{ .compatible = "ti,omap2420-gpmc" },
+	{ .compatible = "ti,omap2430-gpmc" },
+	{ .compatible = "ti,omap3430-gpmc" },	/* omap3430 & omap3630 */
+	{ .compatible = "ti,omap4430-gpmc" },	/* omap4430 & omap4460 & omap543x */
+	{ .compatible = "ti,am3352-gpmc" },	/* am335x devices */
+	{ .compatible = "ti,am64-gpmc" },
+	{ }
+};
+MODULE_DEVICE_TABLE(of, gpmc_dt_ids);
+#endif
+
 static struct platform_driver gpmc_driver = {
 	.probe	= gpmc_probe,
 	.remove	= gpmc_remove,
···
 	},
 };

-static __init int gpmc_init(void)
-{
-	return platform_driver_register(&gpmc_driver);
-}
-postcore_initcall(gpmc_init);
+module_platform_driver(gpmc_driver);
+
+MODULE_DESCRIPTION("Texas Instruments GPMC driver");
+MODULE_LICENSE("GPL");
+9 -22
drivers/memory/renesas-rpc-if.c
···

 	rpc->dev = dev;

-	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "regs");
-	rpc->base = devm_ioremap_resource(&pdev->dev, res);
+	rpc->base = devm_platform_ioremap_resource_byname(pdev, "regs");
 	if (IS_ERR(rpc->base))
 		return PTR_ERR(rpc->base);
···
 	case RPCIF_DATA_OUT:
 		while (pos < rpc->xferlen) {
 			u32 bytes_left = rpc->xferlen - pos;
-			u32 nbytes, data[2];
+			u32 nbytes, data[2], *p = data;

 			smcr = rpc->smcr | RPCIF_SMCR_SPIE;
···
 			rpc->xfer_size = nbytes;

 			memcpy(data, rpc->buffer + pos, nbytes);
-			if (nbytes == 8) {
-				regmap_write(rpc->regmap, RPCIF_SMWDR1,
-					     data[0]);
-				regmap_write(rpc->regmap, RPCIF_SMWDR0,
-					     data[1]);
-			} else {
-				regmap_write(rpc->regmap, RPCIF_SMWDR0,
-					     data[0]);
-			}
+			if (nbytes == 8)
+				regmap_write(rpc->regmap, RPCIF_SMWDR1, *p++);
+			regmap_write(rpc->regmap, RPCIF_SMWDR0, *p);

 			regmap_write(rpc->regmap, RPCIF_SMCR, smcr);
 			ret = wait_msg_xfer_end(rpc);
···
 		}
 		while (pos < rpc->xferlen) {
 			u32 bytes_left = rpc->xferlen - pos;
-			u32 nbytes, data[2];
+			u32 nbytes, data[2], *p = data;

 			/* nbytes may only be 1, 2, 4, or 8 */
 			nbytes = bytes_left >= max ? max : (1 << ilog2(bytes_left));
···
 			if (ret)
 				goto err_out;

-			if (nbytes == 8) {
-				regmap_read(rpc->regmap, RPCIF_SMRDR1,
-					    &data[0]);
-				regmap_read(rpc->regmap, RPCIF_SMRDR0,
-					    &data[1]);
-			} else {
-				regmap_read(rpc->regmap, RPCIF_SMRDR0,
-					    &data[0]);
-			}
+			if (nbytes == 8)
+				regmap_read(rpc->regmap, RPCIF_SMRDR1, p++);
+			regmap_read(rpc->regmap, RPCIF_SMRDR0, p);
 			memcpy(rpc->buffer + pos, data, nbytes);

 			pos += nbytes;
+2 -3
drivers/memory/samsung/exynos5422-dmc.c
···
  */
 static int exynos5_performance_counters_init(struct exynos5_dmc *dmc)
 {
-	int counters_size;
 	int ret, i;

 	dmc->num_counters = devfreq_event_get_edev_count(dmc->dev,
···
 		return dmc->num_counters;
 	}

-	counters_size = sizeof(struct devfreq_event_dev) * dmc->num_counters;
-	dmc->counter = devm_kzalloc(dmc->dev, counters_size, GFP_KERNEL);
+	dmc->counter = devm_kcalloc(dmc->dev, dmc->num_counters,
+				    sizeof(*dmc->counter), GFP_KERNEL);
 	if (!dmc->counter)
 		return -ENOMEM;
+2
drivers/memory/tegra/Makefile
···
 tegra-mc-$(CONFIG_ARCH_TEGRA_210_SOC) += tegra210.o
 tegra-mc-$(CONFIG_ARCH_TEGRA_186_SOC) += tegra186.o
 tegra-mc-$(CONFIG_ARCH_TEGRA_194_SOC) += tegra186.o tegra194.o
+tegra-mc-$(CONFIG_ARCH_TEGRA_234_SOC) += tegra186.o tegra234.o

 obj-$(CONFIG_TEGRA_MC) += tegra-mc.o

···
 obj-$(CONFIG_TEGRA210_EMC) += tegra210-emc.o
 obj-$(CONFIG_ARCH_TEGRA_186_SOC) += tegra186-emc.o
 obj-$(CONFIG_ARCH_TEGRA_194_SOC) += tegra186-emc.o
+obj-$(CONFIG_ARCH_TEGRA_234_SOC) += tegra186-emc.o

 tegra210-emc-y := tegra210-emc-core.o tegra210-emc-cc-r21021.o
+122 -19
drivers/memory/tegra/mc.c
··· 45 45 #ifdef CONFIG_ARCH_TEGRA_194_SOC 46 46 { .compatible = "nvidia,tegra194-mc", .data = &tegra194_mc_soc }, 47 47 #endif 48 + #ifdef CONFIG_ARCH_TEGRA_234_SOC 49 + { .compatible = "nvidia,tegra234-mc", .data = &tegra234_mc_soc }, 50 + #endif 48 51 { /* sentinel */ } 49 52 }; 50 53 MODULE_DEVICE_TABLE(of, tegra_mc_of_match); ··· 508 505 return 0; 509 506 } 510 507 511 - static irqreturn_t tegra30_mc_handle_irq(int irq, void *data) 508 + const struct tegra_mc_ops tegra30_mc_ops = { 509 + .probe = tegra30_mc_probe, 510 + .handle_irq = tegra30_mc_handle_irq, 511 + }; 512 + #endif 513 + 514 + static int mc_global_intstatus_to_channel(const struct tegra_mc *mc, u32 status, 515 + unsigned int *mc_channel) 516 + { 517 + if ((status & mc->soc->ch_intmask) == 0) 518 + return -EINVAL; 519 + 520 + *mc_channel = __ffs((status & mc->soc->ch_intmask) >> 521 + mc->soc->global_intstatus_channel_shift); 522 + 523 + return 0; 524 + } 525 + 526 + static u32 mc_channel_to_global_intstatus(const struct tegra_mc *mc, 527 + unsigned int channel) 528 + { 529 + return BIT(channel) << mc->soc->global_intstatus_channel_shift; 530 + } 531 + 532 + irqreturn_t tegra30_mc_handle_irq(int irq, void *data) 512 533 { 513 534 struct tegra_mc *mc = data; 535 + unsigned int bit, channel; 514 536 unsigned long status; 515 - unsigned int bit; 516 537 517 - /* mask all interrupts to avoid flooding */ 518 - status = mc_readl(mc, MC_INTSTATUS) & mc->soc->intmask; 538 + if (mc->soc->num_channels) { 539 + u32 global_status; 540 + int err; 541 + 542 + global_status = mc_ch_readl(mc, MC_BROADCAST_CHANNEL, MC_GLOBAL_INTSTATUS); 543 + err = mc_global_intstatus_to_channel(mc, global_status, &channel); 544 + if (err < 0) { 545 + dev_err_ratelimited(mc->dev, "unknown interrupt channel 0x%08x\n", 546 + global_status); 547 + return IRQ_NONE; 548 + } 549 + 550 + /* mask all interrupts to avoid flooding */ 551 + status = mc_ch_readl(mc, channel, MC_INTSTATUS) & mc->soc->intmask; 552 + } else { 553 + status = 
mc_readl(mc, MC_INTSTATUS) & mc->soc->intmask; 554 + } 555 + 519 556 if (!status) 520 557 return IRQ_NONE; 521 558 ··· 563 520 const char *error = tegra_mc_status_names[bit] ?: "unknown"; 564 521 const char *client = "unknown", *desc; 565 522 const char *direction, *secure; 523 + u32 status_reg, addr_reg; 524 + u32 intmask = BIT(bit); 566 525 phys_addr_t addr = 0; 526 + #ifdef CONFIG_PHYS_ADDR_T_64BIT 527 + u32 addr_hi_reg = 0; 528 + #endif 567 529 unsigned int i; 568 530 char perm[7]; 569 531 u8 id, type; 570 532 u32 value; 571 533 572 - value = mc_readl(mc, MC_ERR_STATUS); 534 + switch (intmask) { 535 + case MC_INT_DECERR_VPR: 536 + status_reg = MC_ERR_VPR_STATUS; 537 + addr_reg = MC_ERR_VPR_ADR; 538 + break; 539 + 540 + case MC_INT_SECERR_SEC: 541 + status_reg = MC_ERR_SEC_STATUS; 542 + addr_reg = MC_ERR_SEC_ADR; 543 + break; 544 + 545 + case MC_INT_DECERR_MTS: 546 + status_reg = MC_ERR_MTS_STATUS; 547 + addr_reg = MC_ERR_MTS_ADR; 548 + break; 549 + 550 + case MC_INT_DECERR_GENERALIZED_CARVEOUT: 551 + status_reg = MC_ERR_GENERALIZED_CARVEOUT_STATUS; 552 + addr_reg = MC_ERR_GENERALIZED_CARVEOUT_ADR; 553 + break; 554 + 555 + case MC_INT_DECERR_ROUTE_SANITY: 556 + status_reg = MC_ERR_ROUTE_SANITY_STATUS; 557 + addr_reg = MC_ERR_ROUTE_SANITY_ADR; 558 + break; 559 + 560 + default: 561 + status_reg = MC_ERR_STATUS; 562 + addr_reg = MC_ERR_ADR; 563 + 564 + #ifdef CONFIG_PHYS_ADDR_T_64BIT 565 + if (mc->soc->has_addr_hi_reg) 566 + addr_hi_reg = MC_ERR_ADR_HI; 567 + #endif 568 + break; 569 + } 570 + 571 + if (mc->soc->num_channels) 572 + value = mc_ch_readl(mc, channel, status_reg); 573 + else 574 + value = mc_readl(mc, status_reg); 573 575 574 576 #ifdef CONFIG_PHYS_ADDR_T_64BIT 575 577 if (mc->soc->num_address_bits > 32) { 576 - addr = ((value >> MC_ERR_STATUS_ADR_HI_SHIFT) & 577 - MC_ERR_STATUS_ADR_HI_MASK); 578 + if (addr_hi_reg) { 579 + if (mc->soc->num_channels) 580 + addr = mc_ch_readl(mc, channel, addr_hi_reg); 581 + else 582 + addr = mc_readl(mc, addr_hi_reg); 
583 + } else { 584 + addr = ((value >> MC_ERR_STATUS_ADR_HI_SHIFT) & 585 + MC_ERR_STATUS_ADR_HI_MASK); 586 + } 578 587 addr <<= 32; 579 588 } 580 589 #endif ··· 683 588 break; 684 589 } 685 590 686 - value = mc_readl(mc, MC_ERR_ADR); 591 + if (mc->soc->num_channels) 592 + value = mc_ch_readl(mc, channel, addr_reg); 593 + else 594 + value = mc_readl(mc, addr_reg); 687 595 addr |= value; 688 596 689 597 dev_err_ratelimited(mc->dev, "%s: %s%s @%pa: %s (%s%s)\n", ··· 695 597 } 696 598 697 599 /* clear interrupts */ 698 - mc_writel(mc, status, MC_INTSTATUS); 600 + if (mc->soc->num_channels) { 601 + mc_ch_writel(mc, channel, status, MC_INTSTATUS); 602 + mc_ch_writel(mc, MC_BROADCAST_CHANNEL, 603 + mc_channel_to_global_intstatus(mc, channel), 604 + MC_GLOBAL_INTSTATUS); 605 + } else { 606 + mc_writel(mc, status, MC_INTSTATUS); 607 + } 699 608 700 609 return IRQ_HANDLED; 701 610 } 702 - 703 - const struct tegra_mc_ops tegra30_mc_ops = { 704 - .probe = tegra30_mc_probe, 705 - .handle_irq = tegra30_mc_handle_irq, 706 - }; 707 - #endif 708 611 709 612 const char *const tegra_mc_status_names[32] = { 710 613 [ 1] = "External interrupt", ··· 718 619 [12] = "VPR violation", 719 620 [13] = "Secure carveout violation", 720 621 [16] = "MTS carveout violation", 622 + [17] = "Generalized carveout violation", 623 + [20] = "Route Sanity error", 721 624 }; 722 625 723 626 const char *const tegra_mc_error_names[8] = { ··· 817 716 818 717 static int tegra_mc_probe(struct platform_device *pdev) 819 718 { 820 - struct resource *res; 821 719 struct tegra_mc *mc; 822 720 u64 mask; 823 721 int err; ··· 841 741 /* length of MC tick in nanoseconds */ 842 742 mc->tick = 30; 843 743 844 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 845 - mc->regs = devm_ioremap_resource(&pdev->dev, res); 744 + mc->regs = devm_platform_ioremap_resource(pdev, 0); 846 745 if (IS_ERR(mc->regs)) 847 746 return PTR_ERR(mc->regs); 848 747 ··· 860 761 861 762 WARN(!mc->soc->client_id_mask, "missing client ID mask 
for this SoC\n"); 862 763 863 - mc_writel(mc, mc->soc->intmask, MC_INTMASK); 764 + if (mc->soc->num_channels) 765 + mc_ch_writel(mc, MC_BROADCAST_CHANNEL, mc->soc->intmask, 766 + MC_INTMASK); 767 + else 768 + mc_writel(mc, mc->soc->intmask, MC_INTMASK); 864 769 865 770 err = devm_request_irq(&pdev->dev, mc->irq, mc->soc->ops->handle_irq, 0, 866 771 dev_name(&pdev->dev), mc);
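The interrupt handler above assembles the fault address in two halves: the high bits come from MC_ERR_STATUS (or MC_ERR_ADR_HI on newer SoCs) and are shifted up by 32 before the low word from MC_ERR_ADR is OR-ed in. A standalone sketch of that assembly (the mask value and helper name are illustrative, not the kernel's):

```c
#include <stdint.h>

/* Illustrative mask; the real width is SoC-specific (40-bit addressing
 * leaves 8 high bits). */
#define MC_ERR_STATUS_ADR_HI_MASK 0xff

static inline uint64_t mc_fault_address(uint32_t adr_hi, uint32_t adr_lo)
{
	uint64_t addr = adr_hi & MC_ERR_STATUS_ADR_HI_MASK;	/* bits above 31 */

	addr <<= 32;
	addr |= adr_lo;						/* bits 31:0 */
	return addr;
}
```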

drivers/memory/tegra/mc.h (+47 -1)
··· 43 43 #define MC_EMEM_ARB_OVERRIDE 0xe8 44 44 #define MC_TIMING_CONTROL_DBG 0xf8 45 45 #define MC_TIMING_CONTROL 0xfc 46 + #define MC_ERR_VPR_STATUS 0x654 47 + #define MC_ERR_VPR_ADR 0x658 48 + #define MC_ERR_SEC_STATUS 0x67c 49 + #define MC_ERR_SEC_ADR 0x680 50 + #define MC_ERR_MTS_STATUS 0x9b0 51 + #define MC_ERR_MTS_ADR 0x9b4 52 + #define MC_ERR_ROUTE_SANITY_STATUS 0x9c0 53 + #define MC_ERR_ROUTE_SANITY_ADR 0x9c4 54 + #define MC_ERR_GENERALIZED_CARVEOUT_STATUS 0xc00 55 + #define MC_ERR_GENERALIZED_CARVEOUT_ADR 0xc04 56 + #define MC_GLOBAL_INTSTATUS 0xf24 57 + #define MC_ERR_ADR_HI 0x11fc 46 58 59 + #define MC_INT_DECERR_ROUTE_SANITY BIT(20) 60 + #define MC_INT_DECERR_GENERALIZED_CARVEOUT BIT(17) 47 61 #define MC_INT_DECERR_MTS BIT(16) 48 62 #define MC_INT_SECERR_SEC BIT(13) 49 63 #define MC_INT_DECERR_VPR BIT(12) ··· 92 78 93 79 #define MC_TIMING_UPDATE BIT(0) 94 80 81 + #define MC_BROADCAST_CHANNEL ~0 82 + 95 83 static inline u32 tegra_mc_scale_percents(u64 val, unsigned int percents) 96 84 { 97 85 val = val * percents; ··· 106 90 icc_provider_to_tegra_mc(struct icc_provider *provider) 107 91 { 108 92 return container_of(provider, struct tegra_mc, provider); 93 + } 94 + 95 + static inline u32 mc_ch_readl(const struct tegra_mc *mc, int ch, 96 + unsigned long offset) 97 + { 98 + if (!mc->bcast_ch_regs) 99 + return 0; 100 + 101 + if (ch == MC_BROADCAST_CHANNEL) 102 + return readl_relaxed(mc->bcast_ch_regs + offset); 103 + 104 + return readl_relaxed(mc->ch_regs[ch] + offset); 105 + } 106 + 107 + static inline void mc_ch_writel(const struct tegra_mc *mc, int ch, 108 + u32 value, unsigned long offset) 109 + { 110 + if (!mc->bcast_ch_regs) 111 + return; 112 + 113 + if (ch == MC_BROADCAST_CHANNEL) 114 + writel_relaxed(value, mc->bcast_ch_regs + offset); 115 + else 116 + writel_relaxed(value, mc->ch_regs[ch] + offset); 109 117 } 110 118 111 119 static inline u32 mc_readl(const struct tegra_mc *mc, unsigned long offset) ··· 177 137 extern const struct tegra_mc_soc 
tegra194_mc_soc; 178 138 #endif 179 139 140 + #ifdef CONFIG_ARCH_TEGRA_234_SOC 141 + extern const struct tegra_mc_soc tegra234_mc_soc; 142 + #endif 143 + 180 144 #if defined(CONFIG_ARCH_TEGRA_3x_SOC) || \ 181 145 defined(CONFIG_ARCH_TEGRA_114_SOC) || \ 182 146 defined(CONFIG_ARCH_TEGRA_124_SOC) || \ ··· 191 147 #endif 192 148 193 149 #if defined(CONFIG_ARCH_TEGRA_186_SOC) || \ 194 - defined(CONFIG_ARCH_TEGRA_194_SOC) 150 + defined(CONFIG_ARCH_TEGRA_194_SOC) || \ 151 + defined(CONFIG_ARCH_TEGRA_234_SOC) 195 152 extern const struct tegra_mc_ops tegra186_mc_ops; 196 153 #endif 197 154 155 + irqreturn_t tegra30_mc_handle_irq(int irq, void *data); 198 156 extern const char * const tegra_mc_status_names[32]; 199 157 extern const char * const tegra_mc_error_names[8]; 200 158
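The mc_ch_readl()/mc_ch_writel() helpers above dispatch three ways: SoCs without channelized registers fall through to a no-op, MC_BROADCAST_CHANNEL (~0) selects the broadcast aperture, and any other value indexes the per-channel aperture array. The same dispatch shape, with plain arrays standing in for the MMIO apertures:

```c
#include <stddef.h>
#include <stdint.h>

#define BROADCAST_CHANNEL (-1)	/* stands in for MC_BROADCAST_CHANNEL (~0) */

/*
 * Sketch of mc_ch_readl()'s structure: no channelized registers -> 0,
 * broadcast channel -> broadcast aperture, otherwise the indexed channel.
 */
static uint32_t ch_read(const uint32_t *bcast, uint32_t *const *chans,
			int ch, size_t off)
{
	if (!bcast)
		return 0;
	if (ch == BROADCAST_CHANNEL)
		return bcast[off];
	return chans[ch][off];
}
```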
drivers/memory/tegra/tegra186-emc.c (+3)
··· 273 273 #if defined(CONFIG_ARCH_TEGRA_194_SOC) 274 274 { .compatible = "nvidia,tegra194-emc" }, 275 275 #endif 276 + #if defined(CONFIG_ARCH_TEGRA_234_SOC) 277 + { .compatible = "nvidia,tegra234-emc" }, 278 + #endif 276 279 { /* sentinel */ } 277 280 }; 278 281 MODULE_DEVICE_TABLE(of, tegra186_emc_of_match);
drivers/memory/tegra/tegra186.c (+39)
··· 16 16 #include <dt-bindings/memory/tegra186-mc.h> 17 17 #endif 18 18 19 + #include "mc.h" 20 + 19 21 #define MC_SID_STREAMID_OVERRIDE_MASK GENMASK(7, 0) 20 22 #define MC_SID_STREAMID_SECURITY_WRITE_ACCESS_DISABLED BIT(16) 21 23 #define MC_SID_STREAMID_SECURITY_OVERRIDE BIT(8) ··· 50 48 51 49 static int tegra186_mc_probe(struct tegra_mc *mc) 52 50 { 51 + struct platform_device *pdev = to_platform_device(mc->dev); 52 + unsigned int i; 53 + char name[8]; 53 54 int err; 54 55 56 + mc->bcast_ch_regs = devm_platform_ioremap_resource_byname(pdev, "broadcast"); 57 + if (IS_ERR(mc->bcast_ch_regs)) { 58 + if (PTR_ERR(mc->bcast_ch_regs) == -EINVAL) { 59 + dev_warn(&pdev->dev, 60 + "Broadcast channel is missing, please update your device-tree\n"); 61 + mc->bcast_ch_regs = NULL; 62 + goto populate; 63 + } 64 + 65 + return PTR_ERR(mc->bcast_ch_regs); 66 + } 67 + 68 + mc->ch_regs = devm_kcalloc(mc->dev, mc->soc->num_channels, sizeof(*mc->ch_regs), 69 + GFP_KERNEL); 70 + if (!mc->ch_regs) 71 + return -ENOMEM; 72 + 73 + for (i = 0; i < mc->soc->num_channels; i++) { 74 + snprintf(name, sizeof(name), "ch%u", i); 75 + 76 + mc->ch_regs[i] = devm_platform_ioremap_resource_byname(pdev, name); 77 + if (IS_ERR(mc->ch_regs[i])) 78 + return PTR_ERR(mc->ch_regs[i]); 79 + } 80 + 81 + populate: 55 82 err = of_platform_populate(mc->dev->of_node, NULL, NULL, mc->dev); 56 83 if (err < 0) 57 84 return err; ··· 175 144 .remove = tegra186_mc_remove, 176 145 .resume = tegra186_mc_resume, 177 146 .probe_device = tegra186_mc_probe_device, 147 + .handle_irq = tegra30_mc_handle_irq, 178 148 }; 179 149 180 150 #if defined(CONFIG_ARCH_TEGRA_186_SOC) ··· 907 875 .num_clients = ARRAY_SIZE(tegra186_mc_clients), 908 876 .clients = tegra186_mc_clients, 909 877 .num_address_bits = 40, 878 + .num_channels = 4, 879 + .client_id_mask = 0xff, 880 + .intmask = MC_INT_DECERR_GENERALIZED_CARVEOUT | MC_INT_DECERR_MTS | 881 + MC_INT_SECERR_SEC | MC_INT_DECERR_VPR | 882 + MC_INT_SECURITY_VIOLATION | MC_INT_DECERR_EMEM, 
910 883 .ops = &tegra186_mc_ops, 884 + .ch_intmask = 0x0000000f, 885 + .global_intstatus_channel_shift = 0, 911 886 }; 912 887 #endif
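Tegra186 above carves out bits 3:0 of MC_GLOBAL_INTSTATUS for its four channels (ch_intmask 0x0000000f, shift 0), while Tegra194/234 use bits 15:8 (shift 8). A sketch of the inverse of mc_channel_to_global_intstatus(), recovering the interrupting channel from the global status; the helper name and exact encoding are assumptions for illustration:

```c
#include <stdint.h>

/*
 * Mask the global interrupt status with the SoC's ch_intmask, shift the
 * field down, and return the index of the lowest pending channel bit.
 */
static int global_intstatus_to_channel(uint32_t status, uint32_t ch_intmask,
				       unsigned int shift)
{
	uint32_t bits = (status & ch_intmask) >> shift;
	int ch = 0;

	if (!bits)
		return -1;	/* no channel pending */
	while (!(bits & 1)) {
		bits >>= 1;
		ch++;
	}
	return ch;
}
```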
drivers/memory/tegra/tegra194.c (+9)
··· 1347 1347 .num_clients = ARRAY_SIZE(tegra194_mc_clients), 1348 1348 .clients = tegra194_mc_clients, 1349 1349 .num_address_bits = 40, 1350 + .num_channels = 16, 1351 + .client_id_mask = 0xff, 1352 + .intmask = MC_INT_DECERR_ROUTE_SANITY | 1353 + MC_INT_DECERR_GENERALIZED_CARVEOUT | MC_INT_DECERR_MTS | 1354 + MC_INT_SECERR_SEC | MC_INT_DECERR_VPR | 1355 + MC_INT_SECURITY_VIOLATION | MC_INT_DECERR_EMEM, 1356 + .has_addr_hi_reg = true, 1350 1357 .ops = &tegra186_mc_ops, 1358 + .ch_intmask = 0x00000f00, 1359 + .global_intstatus_channel_shift = 8, 1351 1360 };
drivers/memory/tegra/tegra234.c (+110)
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Copyright (C) 2021-2022, NVIDIA CORPORATION. All rights reserved. 4 + */ 5 + 6 + #include <soc/tegra/mc.h> 7 + 8 + #include <dt-bindings/memory/tegra234-mc.h> 9 + 10 + #include "mc.h" 11 + 12 + static const struct tegra_mc_client tegra234_mc_clients[] = { 13 + { 14 + .id = TEGRA234_MEMORY_CLIENT_SDMMCRAB, 15 + .name = "sdmmcrab", 16 + .sid = TEGRA234_SID_SDMMC4, 17 + .regs = { 18 + .sid = { 19 + .override = 0x318, 20 + .security = 0x31c, 21 + }, 22 + }, 23 + }, { 24 + .id = TEGRA234_MEMORY_CLIENT_SDMMCWAB, 25 + .name = "sdmmcwab", 26 + .sid = TEGRA234_SID_SDMMC4, 27 + .regs = { 28 + .sid = { 29 + .override = 0x338, 30 + .security = 0x33c, 31 + }, 32 + }, 33 + }, { 34 + .id = TEGRA234_MEMORY_CLIENT_BPMPR, 35 + .name = "bpmpr", 36 + .sid = TEGRA234_SID_BPMP, 37 + .regs = { 38 + .sid = { 39 + .override = 0x498, 40 + .security = 0x49c, 41 + }, 42 + }, 43 + }, { 44 + .id = TEGRA234_MEMORY_CLIENT_BPMPW, 45 + .name = "bpmpw", 46 + .sid = TEGRA234_SID_BPMP, 47 + .regs = { 48 + .sid = { 49 + .override = 0x4a0, 50 + .security = 0x4a4, 51 + }, 52 + }, 53 + }, { 54 + .id = TEGRA234_MEMORY_CLIENT_BPMPDMAR, 55 + .name = "bpmpdmar", 56 + .sid = TEGRA234_SID_BPMP, 57 + .regs = { 58 + .sid = { 59 + .override = 0x4a8, 60 + .security = 0x4ac, 61 + }, 62 + }, 63 + }, { 64 + .id = TEGRA234_MEMORY_CLIENT_BPMPDMAW, 65 + .name = "bpmpdmaw", 66 + .sid = TEGRA234_SID_BPMP, 67 + .regs = { 68 + .sid = { 69 + .override = 0x4b0, 70 + .security = 0x4b4, 71 + }, 72 + }, 73 + }, { 74 + .id = TEGRA234_MEMORY_CLIENT_APEDMAR, 75 + .name = "apedmar", 76 + .sid = TEGRA234_SID_APE, 77 + .regs = { 78 + .sid = { 79 + .override = 0x4f8, 80 + .security = 0x4fc, 81 + }, 82 + }, 83 + }, { 84 + .id = TEGRA234_MEMORY_CLIENT_APEDMAW, 85 + .name = "apedmaw", 86 + .sid = TEGRA234_SID_APE, 87 + .regs = { 88 + .sid = { 89 + .override = 0x500, 90 + .security = 0x504, 91 + }, 92 + }, 93 + }, 94 + }; 95 + 96 + const struct tegra_mc_soc tegra234_mc_soc = { 97 
+ .num_clients = ARRAY_SIZE(tegra234_mc_clients), 98 + .clients = tegra234_mc_clients, 99 + .num_address_bits = 40, 100 + .num_channels = 16, 101 + .client_id_mask = 0x1ff, 102 + .intmask = MC_INT_DECERR_ROUTE_SANITY | 103 + MC_INT_DECERR_GENERALIZED_CARVEOUT | MC_INT_DECERR_MTS | 104 + MC_INT_SECERR_SEC | MC_INT_DECERR_VPR | 105 + MC_INT_SECURITY_VIOLATION | MC_INT_DECERR_EMEM, 106 + .has_addr_hi_reg = true, 107 + .ops = &tegra186_mc_ops, 108 + .ch_intmask = 0x0000ff00, 109 + .global_intstatus_channel_shift = 8, 110 + };
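The Tegra234 intmask above enables the newly named Route Sanity and Generalized carveout error bits, which the IRQ handler maps to strings through the tegra_mc_status_names[] table extended earlier in the series. A standalone sketch of that lookup (table abbreviated to the entries visible in this patch):

```c
#include <string.h>

/* Abbreviated copy of tegra_mc_status_names[]; the lookup mirrors the
 * ratelimited error print in mc.c's interrupt handler. */
static const char *const status_names[32] = {
	[12] = "VPR violation",
	[13] = "Secure carveout violation",
	[16] = "MTS carveout violation",
	[17] = "Generalized carveout violation",
	[20] = "Route Sanity error",
};

static const char *mc_status_name(unsigned int bit)
{
	if (bit >= 32 || !status_names[bit])
		return "unknown interrupt";
	return status_names[bit];
}
```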
drivers/memory/ti-aemif.c (+1 -3)
··· 328 328 { 329 329 int i; 330 330 int ret = -ENODEV; 331 - struct resource *res; 332 331 struct device *dev = &pdev->dev; 333 332 struct device_node *np = dev->of_node; 334 333 struct device_node *child_np; ··· 361 362 else if (pdata) 362 363 aemif->cs_offset = pdata->cs_offset; 363 364 364 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 365 - aemif->base = devm_ioremap_resource(dev, res); 365 + aemif->base = devm_platform_ioremap_resource(pdev, 0); 366 366 if (IS_ERR(aemif->base)) { 367 367 ret = PTR_ERR(aemif->base); 368 368 goto error;
drivers/memory/ti-emif-pm.c (+3 -3)
··· 290 290 291 291 emif_data->pm_data.ti_emif_sram_config = (unsigned long)match->data; 292 292 293 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 294 - emif_data->pm_data.ti_emif_base_addr_virt = devm_ioremap_resource(dev, 295 - res); 293 + emif_data->pm_data.ti_emif_base_addr_virt = devm_platform_get_and_ioremap_resource(pdev, 294 + 0, 295 + &res); 296 296 if (IS_ERR(emif_data->pm_data.ti_emif_base_addr_virt)) { 297 297 ret = PTR_ERR(emif_data->pm_data.ti_emif_base_addr_virt); 298 298 return ret;
drivers/nvme/host/Kconfig (+13)
··· 91 91 from https://github.com/linux-nvme/nvme-cli. 92 92 93 93 If unsure, say N. 94 + 95 + config NVME_APPLE 96 + tristate "Apple ANS2 NVM Express host driver" 97 + depends on OF && BLOCK 98 + depends on APPLE_RTKIT && APPLE_SART 99 + depends on ARCH_APPLE || COMPILE_TEST 100 + select NVME_CORE 101 + help 102 + This provides support for the NVMe controller embedded in Apple SoCs 103 + such as the M1. 104 + 105 + To compile this driver as a module, choose M here: the 106 + module will be called nvme-apple.
drivers/nvme/host/Makefile (+3)
··· 8 8 obj-$(CONFIG_NVME_RDMA) += nvme-rdma.o 9 9 obj-$(CONFIG_NVME_FC) += nvme-fc.o 10 10 obj-$(CONFIG_NVME_TCP) += nvme-tcp.o 11 + obj-$(CONFIG_NVME_APPLE) += nvme-apple.o 11 12 12 13 nvme-core-y := core.o ioctl.o constants.o 13 14 nvme-core-$(CONFIG_TRACING) += trace.o ··· 26 25 nvme-fc-y += fc.o 27 26 28 27 nvme-tcp-y += tcp.o 28 + 29 + nvme-apple-y += apple.o
drivers/nvme/host/apple.c (+1593)
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Apple ANS NVM Express device driver 4 + * Copyright The Asahi Linux Contributors 5 + * 6 + * Based on the pci.c NVM Express device driver 7 + * Copyright (c) 2011-2014, Intel Corporation. 8 + * and on the rdma.c NVMe over Fabrics RDMA host code. 9 + * Copyright (c) 2015-2016 HGST, a Western Digital Company. 10 + */ 11 + 12 + #include <linux/async.h> 13 + #include <linux/blkdev.h> 14 + #include <linux/blk-mq.h> 15 + #include <linux/device.h> 16 + #include <linux/dma-mapping.h> 17 + #include <linux/dmapool.h> 18 + #include <linux/interrupt.h> 19 + #include <linux/io-64-nonatomic-lo-hi.h> 20 + #include <linux/io.h> 21 + #include <linux/iopoll.h> 22 + #include <linux/jiffies.h> 23 + #include <linux/mempool.h> 24 + #include <linux/module.h> 25 + #include <linux/of.h> 26 + #include <linux/of_platform.h> 27 + #include <linux/once.h> 28 + #include <linux/platform_device.h> 29 + #include <linux/pm_domain.h> 30 + #include <linux/soc/apple/rtkit.h> 31 + #include <linux/soc/apple/sart.h> 32 + #include <linux/reset.h> 33 + #include <linux/time64.h> 34 + 35 + #include "nvme.h" 36 + 37 + #define APPLE_ANS_BOOT_TIMEOUT USEC_PER_SEC 38 + #define APPLE_ANS_MAX_QUEUE_DEPTH 64 39 + 40 + #define APPLE_ANS_COPROC_CPU_CONTROL 0x44 41 + #define APPLE_ANS_COPROC_CPU_CONTROL_RUN BIT(4) 42 + 43 + #define APPLE_ANS_ACQ_DB 0x1004 44 + #define APPLE_ANS_IOCQ_DB 0x100c 45 + 46 + #define APPLE_ANS_MAX_PEND_CMDS_CTRL 0x1210 47 + 48 + #define APPLE_ANS_BOOT_STATUS 0x1300 49 + #define APPLE_ANS_BOOT_STATUS_OK 0xde71ce55 50 + 51 + #define APPLE_ANS_UNKNOWN_CTRL 0x24008 52 + #define APPLE_ANS_PRP_NULL_CHECK BIT(11) 53 + 54 + #define APPLE_ANS_LINEAR_SQ_CTRL 0x24908 55 + #define APPLE_ANS_LINEAR_SQ_EN BIT(0) 56 + 57 + #define APPLE_ANS_LINEAR_ASQ_DB 0x2490c 58 + #define APPLE_ANS_LINEAR_IOSQ_DB 0x24910 59 + 60 + #define APPLE_NVMMU_NUM_TCBS 0x28100 61 + #define APPLE_NVMMU_ASQ_TCB_BASE 0x28108 62 + #define APPLE_NVMMU_IOSQ_TCB_BASE 0x28110 63 + 
#define APPLE_NVMMU_TCB_INVAL 0x28118 64 + #define APPLE_NVMMU_TCB_STAT 0x28120 65 + 66 + /* 67 + * This controller is a bit weird in the way command tags work: Both the 68 + * admin and the IO queue share the same tag space. Additionally, tags 69 + * cannot be higher than 0x40 which effectively limits the combined 70 + * queue depth to 0x40. Instead of wasting half of that on the admin queue 71 + * which gets much less traffic we instead reduce its size here. 72 + * The controller also doesn't support async events, so no space must 73 + * be reserved for NVME_NR_AEN_COMMANDS. 74 + */ 75 + #define APPLE_NVME_AQ_DEPTH 2 76 + #define APPLE_NVME_AQ_MQ_TAG_DEPTH (APPLE_NVME_AQ_DEPTH - 1) 77 + 78 + /* 79 + * These can be higher, but we need to ensure that any command doesn't 80 + * require an sg allocation that needs more than a page of data. 81 + */ 82 + #define NVME_MAX_KB_SZ 4096 83 + #define NVME_MAX_SEGS 127 84 + 85 + /* 86 + * This controller comes with an embedded IOMMU known as NVMMU. 87 + * The NVMMU is pointed to an array of TCBs indexed by the command tag. 88 + * Each command must be configured inside this structure before it's allowed 89 + * to execute, including commands that don't require DMA transfers. 90 + * 91 + * Exceptions to this are Apple's vendor-specific commands (opcode 0xD8 on the 92 + * admin queue): Those commands must still be added to the NVMMU but the DMA 93 + * buffers cannot be represented as PRPs and must instead be allowed using SART. 94 + * 95 + * Programming the PRPs to the same values as those in the submission queue 96 + * looks rather silly at first. This hardware is however designed for a kernel 97 + * that runs the NVMMU code in a higher exception level than the NVMe driver. 98 + * In that setting the NVMe driver first programs the submission queue entry 99 + * and then executes a hypercall to the code that is allowed to program the 100 + * NVMMU.
The NVMMU driver then creates a shadow copy of the PRPs while 101 + * verifying that they don't point to kernel text, data, pagetables, or similar 102 + * protected areas before programming the TCB to point to this shadow copy. 103 + * Since Linux doesn't do any of that we may as well just point both the queue 104 + * and the TCB PRP pointer to the same memory. 105 + */ 106 + struct apple_nvmmu_tcb { 107 + u8 opcode; 108 + 109 + #define APPLE_ANS_TCB_DMA_FROM_DEVICE BIT(0) 110 + #define APPLE_ANS_TCB_DMA_TO_DEVICE BIT(1) 111 + u8 dma_flags; 112 + 113 + u8 command_id; 114 + u8 _unk0; 115 + __le16 length; 116 + u8 _unk1[18]; 117 + __le64 prp1; 118 + __le64 prp2; 119 + u8 _unk2[16]; 120 + u8 aes_iv[8]; 121 + u8 _aes_unk[64]; 122 + }; 123 + 124 + /* 125 + * The Apple NVMe controller only supports a single admin and a single IO queue 126 + * which are both limited to 64 entries and share a single interrupt. 127 + * 128 + * The completion queue works as usual. The submission "queue" instead is 129 + * an array indexed by the command tag on this hardware. Commands must also be 130 + * present in the NVMMU's tcb array. They are triggered by writing their tag to 131 + * a MMIO register. 132 + */ 133 + struct apple_nvme_queue { 134 + struct nvme_command *sqes; 135 + struct nvme_completion *cqes; 136 + struct apple_nvmmu_tcb *tcbs; 137 + 138 + dma_addr_t sq_dma_addr; 139 + dma_addr_t cq_dma_addr; 140 + dma_addr_t tcb_dma_addr; 141 + 142 + u32 __iomem *sq_db; 143 + u32 __iomem *cq_db; 144 + 145 + u16 cq_head; 146 + u8 cq_phase; 147 + 148 + bool is_adminq; 149 + bool enabled; 150 + }; 151 + 152 + /* 153 + * The apple_nvme_iod describes the data in an I/O. 154 + * 155 + * The sg pointer contains the list of PRP chunk allocations in addition 156 + * to the actual struct scatterlist. 157 + */ 158 + struct apple_nvme_iod { 159 + struct nvme_request req; 160 + struct nvme_command cmd; 161 + struct apple_nvme_queue *q; 162 + int npages; /* In the PRP list. 
0 means small pool in use */ 163 + int nents; /* Used in scatterlist */ 164 + dma_addr_t first_dma; 165 + unsigned int dma_len; /* length of single DMA segment mapping */ 166 + struct scatterlist *sg; 167 + }; 168 + 169 + struct apple_nvme { 170 + struct device *dev; 171 + 172 + void __iomem *mmio_coproc; 173 + void __iomem *mmio_nvme; 174 + 175 + struct device **pd_dev; 176 + struct device_link **pd_link; 177 + int pd_count; 178 + 179 + struct apple_sart *sart; 180 + struct apple_rtkit *rtk; 181 + struct reset_control *reset; 182 + 183 + struct dma_pool *prp_page_pool; 184 + struct dma_pool *prp_small_pool; 185 + mempool_t *iod_mempool; 186 + 187 + struct nvme_ctrl ctrl; 188 + struct work_struct remove_work; 189 + 190 + struct apple_nvme_queue adminq; 191 + struct apple_nvme_queue ioq; 192 + 193 + struct blk_mq_tag_set admin_tagset; 194 + struct blk_mq_tag_set tagset; 195 + 196 + int irq; 197 + spinlock_t lock; 198 + }; 199 + 200 + static_assert(sizeof(struct nvme_command) == 64); 201 + static_assert(sizeof(struct apple_nvmmu_tcb) == 128); 202 + 203 + static inline struct apple_nvme *ctrl_to_apple_nvme(struct nvme_ctrl *ctrl) 204 + { 205 + return container_of(ctrl, struct apple_nvme, ctrl); 206 + } 207 + 208 + static inline struct apple_nvme *queue_to_apple_nvme(struct apple_nvme_queue *q) 209 + { 210 + if (q->is_adminq) 211 + return container_of(q, struct apple_nvme, adminq); 212 + else 213 + return container_of(q, struct apple_nvme, ioq); 214 + } 215 + 216 + static unsigned int apple_nvme_queue_depth(struct apple_nvme_queue *q) 217 + { 218 + if (q->is_adminq) 219 + return APPLE_NVME_AQ_DEPTH; 220 + else 221 + return APPLE_ANS_MAX_QUEUE_DEPTH; 222 + } 223 + 224 + static void apple_nvme_rtkit_crashed(void *cookie) 225 + { 226 + struct apple_nvme *anv = cookie; 227 + 228 + dev_warn(anv->dev, "RTKit crashed; unable to recover without a reboot"); 229 + nvme_reset_ctrl(&anv->ctrl); 230 + } 231 + 232 + static int apple_nvme_sart_dma_setup(void *cookie, 233 + struct 
apple_rtkit_shmem *bfr) 234 + { 235 + struct apple_nvme *anv = cookie; 236 + int ret; 237 + 238 + if (bfr->iova) 239 + return -EINVAL; 240 + if (!bfr->size) 241 + return -EINVAL; 242 + 243 + bfr->buffer = 244 + dma_alloc_coherent(anv->dev, bfr->size, &bfr->iova, GFP_KERNEL); 245 + if (!bfr->buffer) 246 + return -ENOMEM; 247 + 248 + ret = apple_sart_add_allowed_region(anv->sart, bfr->iova, bfr->size); 249 + if (ret) { 250 + dma_free_coherent(anv->dev, bfr->size, bfr->buffer, bfr->iova); 251 + bfr->buffer = NULL; 252 + return -ENOMEM; 253 + } 254 + 255 + return 0; 256 + } 257 + 258 + static void apple_nvme_sart_dma_destroy(void *cookie, 259 + struct apple_rtkit_shmem *bfr) 260 + { 261 + struct apple_nvme *anv = cookie; 262 + 263 + apple_sart_remove_allowed_region(anv->sart, bfr->iova, bfr->size); 264 + dma_free_coherent(anv->dev, bfr->size, bfr->buffer, bfr->iova); 265 + } 266 + 267 + static const struct apple_rtkit_ops apple_nvme_rtkit_ops = { 268 + .crashed = apple_nvme_rtkit_crashed, 269 + .shmem_setup = apple_nvme_sart_dma_setup, 270 + .shmem_destroy = apple_nvme_sart_dma_destroy, 271 + }; 272 + 273 + static void apple_nvmmu_inval(struct apple_nvme_queue *q, unsigned int tag) 274 + { 275 + struct apple_nvme *anv = queue_to_apple_nvme(q); 276 + 277 + writel(tag, anv->mmio_nvme + APPLE_NVMMU_TCB_INVAL); 278 + if (readl(anv->mmio_nvme + APPLE_NVMMU_TCB_STAT)) 279 + dev_warn_ratelimited(anv->dev, 280 + "NVMMU TCB invalidation failed\n"); 281 + } 282 + 283 + static void apple_nvme_submit_cmd(struct apple_nvme_queue *q, 284 + struct nvme_command *cmd) 285 + { 286 + struct apple_nvme *anv = queue_to_apple_nvme(q); 287 + u32 tag = nvme_tag_from_cid(cmd->common.command_id); 288 + struct apple_nvmmu_tcb *tcb = &q->tcbs[tag]; 289 + 290 + tcb->opcode = cmd->common.opcode; 291 + tcb->prp1 = cmd->common.dptr.prp1; 292 + tcb->prp2 = cmd->common.dptr.prp2; 293 + tcb->length = cmd->rw.length; 294 + tcb->command_id = tag; 295 + 296 + if (nvme_is_write(cmd)) 297 + tcb->dma_flags = 
APPLE_ANS_TCB_DMA_TO_DEVICE; 298 + else 299 + tcb->dma_flags = APPLE_ANS_TCB_DMA_FROM_DEVICE; 300 + 301 + memcpy(&q->sqes[tag], cmd, sizeof(*cmd)); 302 + 303 + /* 304 + * This lock here doesn't make much sense at a first glance but 305 + * removing it will result in occasional missed completion 306 + * interrupts even though the commands still appear on the CQ. 307 + * It's unclear why this happens but our best guess is that 308 + * there is a bug in the firmware triggered when a new command 309 + * is issued while we're inside the irq handler between the 310 + * NVMMU invalidation (and making the tag available again) 311 + * and the final CQ update. 312 + */ 313 + spin_lock_irq(&anv->lock); 314 + writel(tag, q->sq_db); 315 + spin_unlock_irq(&anv->lock); 316 + } 317 + 318 + /* 319 + * From pci.c: 320 + * Will slightly overestimate the number of pages needed. This is OK 321 + * as it only leads to a small amount of wasted memory for the lifetime of 322 + * the I/O. 323 + */ 324 + static inline size_t apple_nvme_iod_alloc_size(void) 325 + { 326 + const unsigned int nprps = DIV_ROUND_UP( 327 + NVME_MAX_KB_SZ + NVME_CTRL_PAGE_SIZE, NVME_CTRL_PAGE_SIZE); 328 + const int npages = DIV_ROUND_UP(8 * nprps, PAGE_SIZE - 8); 329 + const size_t alloc_size = sizeof(__le64 *) * npages + 330 + sizeof(struct scatterlist) * NVME_MAX_SEGS; 331 + 332 + return alloc_size; 333 + } 334 + 335 + static void **apple_nvme_iod_list(struct request *req) 336 + { 337 + struct apple_nvme_iod *iod = blk_mq_rq_to_pdu(req); 338 + 339 + return (void **)(iod->sg + blk_rq_nr_phys_segments(req)); 340 + } 341 + 342 + static void apple_nvme_free_prps(struct apple_nvme *anv, struct request *req) 343 + { 344 + const int last_prp = NVME_CTRL_PAGE_SIZE / sizeof(__le64) - 1; 345 + struct apple_nvme_iod *iod = blk_mq_rq_to_pdu(req); 346 + dma_addr_t dma_addr = iod->first_dma; 347 + int i; 348 + 349 + for (i = 0; i < iod->npages; i++) { 350 + __le64 *prp_list = apple_nvme_iod_list(req)[i]; 351 + dma_addr_t
next_dma_addr = le64_to_cpu(prp_list[last_prp]); 352 + 353 + dma_pool_free(anv->prp_page_pool, prp_list, dma_addr); 354 + dma_addr = next_dma_addr; 355 + } 356 + } 357 + 358 + static void apple_nvme_unmap_data(struct apple_nvme *anv, struct request *req) 359 + { 360 + struct apple_nvme_iod *iod = blk_mq_rq_to_pdu(req); 361 + 362 + if (iod->dma_len) { 363 + dma_unmap_page(anv->dev, iod->first_dma, iod->dma_len, 364 + rq_dma_dir(req)); 365 + return; 366 + } 367 + 368 + WARN_ON_ONCE(!iod->nents); 369 + 370 + dma_unmap_sg(anv->dev, iod->sg, iod->nents, rq_dma_dir(req)); 371 + if (iod->npages == 0) 372 + dma_pool_free(anv->prp_small_pool, apple_nvme_iod_list(req)[0], 373 + iod->first_dma); 374 + else 375 + apple_nvme_free_prps(anv, req); 376 + mempool_free(iod->sg, anv->iod_mempool); 377 + } 378 + 379 + static void apple_nvme_print_sgl(struct scatterlist *sgl, int nents) 380 + { 381 + int i; 382 + struct scatterlist *sg; 383 + 384 + for_each_sg(sgl, sg, nents, i) { 385 + dma_addr_t phys = sg_phys(sg); 386 + 387 + pr_warn("sg[%d] phys_addr:%pad offset:%d length:%d dma_address:%pad dma_length:%d\n", 388 + i, &phys, sg->offset, sg->length, &sg_dma_address(sg), 389 + sg_dma_len(sg)); 390 + } 391 + } 392 + 393 + static blk_status_t apple_nvme_setup_prps(struct apple_nvme *anv, 394 + struct request *req, 395 + struct nvme_rw_command *cmnd) 396 + { 397 + struct apple_nvme_iod *iod = blk_mq_rq_to_pdu(req); 398 + struct dma_pool *pool; 399 + int length = blk_rq_payload_bytes(req); 400 + struct scatterlist *sg = iod->sg; 401 + int dma_len = sg_dma_len(sg); 402 + u64 dma_addr = sg_dma_address(sg); 403 + int offset = dma_addr & (NVME_CTRL_PAGE_SIZE - 1); 404 + __le64 *prp_list; 405 + void **list = apple_nvme_iod_list(req); 406 + dma_addr_t prp_dma; 407 + int nprps, i; 408 + 409 + length -= (NVME_CTRL_PAGE_SIZE - offset); 410 + if (length <= 0) { 411 + iod->first_dma = 0; 412 + goto done; 413 + } 414 + 415 + dma_len -= (NVME_CTRL_PAGE_SIZE - offset); 416 + if (dma_len) { 417 + 
dma_addr += (NVME_CTRL_PAGE_SIZE - offset); 418 + } else { 419 + sg = sg_next(sg); 420 + dma_addr = sg_dma_address(sg); 421 + dma_len = sg_dma_len(sg); 422 + } 423 + 424 + if (length <= NVME_CTRL_PAGE_SIZE) { 425 + iod->first_dma = dma_addr; 426 + goto done; 427 + } 428 + 429 + nprps = DIV_ROUND_UP(length, NVME_CTRL_PAGE_SIZE); 430 + if (nprps <= (256 / 8)) { 431 + pool = anv->prp_small_pool; 432 + iod->npages = 0; 433 + } else { 434 + pool = anv->prp_page_pool; 435 + iod->npages = 1; 436 + } 437 + 438 + prp_list = dma_pool_alloc(pool, GFP_ATOMIC, &prp_dma); 439 + if (!prp_list) { 440 + iod->first_dma = dma_addr; 441 + iod->npages = -1; 442 + return BLK_STS_RESOURCE; 443 + } 444 + list[0] = prp_list; 445 + iod->first_dma = prp_dma; 446 + i = 0; 447 + for (;;) { 448 + if (i == NVME_CTRL_PAGE_SIZE >> 3) { 449 + __le64 *old_prp_list = prp_list; 450 + 451 + prp_list = dma_pool_alloc(pool, GFP_ATOMIC, &prp_dma); 452 + if (!prp_list) 453 + goto free_prps; 454 + list[iod->npages++] = prp_list; 455 + prp_list[0] = old_prp_list[i - 1]; 456 + old_prp_list[i - 1] = cpu_to_le64(prp_dma); 457 + i = 1; 458 + } 459 + prp_list[i++] = cpu_to_le64(dma_addr); 460 + dma_len -= NVME_CTRL_PAGE_SIZE; 461 + dma_addr += NVME_CTRL_PAGE_SIZE; 462 + length -= NVME_CTRL_PAGE_SIZE; 463 + if (length <= 0) 464 + break; 465 + if (dma_len > 0) 466 + continue; 467 + if (unlikely(dma_len < 0)) 468 + goto bad_sgl; 469 + sg = sg_next(sg); 470 + dma_addr = sg_dma_address(sg); 471 + dma_len = sg_dma_len(sg); 472 + } 473 + done: 474 + cmnd->dptr.prp1 = cpu_to_le64(sg_dma_address(iod->sg)); 475 + cmnd->dptr.prp2 = cpu_to_le64(iod->first_dma); 476 + return BLK_STS_OK; 477 + free_prps: 478 + apple_nvme_free_prps(anv, req); 479 + return BLK_STS_RESOURCE; 480 + bad_sgl: 481 + WARN(DO_ONCE(apple_nvme_print_sgl, iod->sg, iod->nents), 482 + "Invalid SGL for payload:%d nents:%d\n", blk_rq_payload_bytes(req), 483 + iod->nents); 484 + return BLK_STS_IOERR; 485 + } 486 + 487 + static blk_status_t 
apple_nvme_setup_prp_simple(struct apple_nvme *anv, 488 + struct request *req, 489 + struct nvme_rw_command *cmnd, 490 + struct bio_vec *bv) 491 + { 492 + struct apple_nvme_iod *iod = blk_mq_rq_to_pdu(req); 493 + unsigned int offset = bv->bv_offset & (NVME_CTRL_PAGE_SIZE - 1); 494 + unsigned int first_prp_len = NVME_CTRL_PAGE_SIZE - offset; 495 + 496 + iod->first_dma = dma_map_bvec(anv->dev, bv, rq_dma_dir(req), 0); 497 + if (dma_mapping_error(anv->dev, iod->first_dma)) 498 + return BLK_STS_RESOURCE; 499 + iod->dma_len = bv->bv_len; 500 + 501 + cmnd->dptr.prp1 = cpu_to_le64(iod->first_dma); 502 + if (bv->bv_len > first_prp_len) 503 + cmnd->dptr.prp2 = cpu_to_le64(iod->first_dma + first_prp_len); 504 + return BLK_STS_OK; 505 + } 506 + 507 + static blk_status_t apple_nvme_map_data(struct apple_nvme *anv, 508 + struct request *req, 509 + struct nvme_command *cmnd) 510 + { 511 + struct apple_nvme_iod *iod = blk_mq_rq_to_pdu(req); 512 + blk_status_t ret = BLK_STS_RESOURCE; 513 + int nr_mapped; 514 + 515 + if (blk_rq_nr_phys_segments(req) == 1) { 516 + struct bio_vec bv = req_bvec(req); 517 + 518 + if (bv.bv_offset + bv.bv_len <= NVME_CTRL_PAGE_SIZE * 2) 519 + return apple_nvme_setup_prp_simple(anv, req, &cmnd->rw, 520 + &bv); 521 + } 522 + 523 + iod->dma_len = 0; 524 + iod->sg = mempool_alloc(anv->iod_mempool, GFP_ATOMIC); 525 + if (!iod->sg) 526 + return BLK_STS_RESOURCE; 527 + sg_init_table(iod->sg, blk_rq_nr_phys_segments(req)); 528 + iod->nents = blk_rq_map_sg(req->q, req, iod->sg); 529 + if (!iod->nents) 530 + goto out_free_sg; 531 + 532 + nr_mapped = dma_map_sg_attrs(anv->dev, iod->sg, iod->nents, 533 + rq_dma_dir(req), DMA_ATTR_NO_WARN); 534 + if (!nr_mapped) 535 + goto out_free_sg; 536 + 537 + ret = apple_nvme_setup_prps(anv, req, &cmnd->rw); 538 + if (ret != BLK_STS_OK) 539 + goto out_unmap_sg; 540 + return BLK_STS_OK; 541 + 542 + out_unmap_sg: 543 + dma_unmap_sg(anv->dev, iod->sg, iod->nents, rq_dma_dir(req)); 544 + out_free_sg: 545 + mempool_free(iod->sg, 
anv->iod_mempool); 546 + return ret; 547 + } 548 + 549 + static __always_inline void apple_nvme_unmap_rq(struct request *req) 550 + { 551 + struct apple_nvme_iod *iod = blk_mq_rq_to_pdu(req); 552 + struct apple_nvme *anv = queue_to_apple_nvme(iod->q); 553 + 554 + if (blk_rq_nr_phys_segments(req)) 555 + apple_nvme_unmap_data(anv, req); 556 + } 557 + 558 + static void apple_nvme_complete_rq(struct request *req) 559 + { 560 + apple_nvme_unmap_rq(req); 561 + nvme_complete_rq(req); 562 + } 563 + 564 + static void apple_nvme_complete_batch(struct io_comp_batch *iob) 565 + { 566 + nvme_complete_batch(iob, apple_nvme_unmap_rq); 567 + } 568 + 569 + static inline bool apple_nvme_cqe_pending(struct apple_nvme_queue *q) 570 + { 571 + struct nvme_completion *hcqe = &q->cqes[q->cq_head]; 572 + 573 + return (le16_to_cpu(READ_ONCE(hcqe->status)) & 1) == q->cq_phase; 574 + } 575 + 576 + static inline struct blk_mq_tags * 577 + apple_nvme_queue_tagset(struct apple_nvme *anv, struct apple_nvme_queue *q) 578 + { 579 + if (q->is_adminq) 580 + return anv->admin_tagset.tags[0]; 581 + else 582 + return anv->tagset.tags[0]; 583 + } 584 + 585 + static inline void apple_nvme_handle_cqe(struct apple_nvme_queue *q, 586 + struct io_comp_batch *iob, u16 idx) 587 + { 588 + struct apple_nvme *anv = queue_to_apple_nvme(q); 589 + struct nvme_completion *cqe = &q->cqes[idx]; 590 + __u16 command_id = READ_ONCE(cqe->command_id); 591 + struct request *req; 592 + 593 + apple_nvmmu_inval(q, command_id); 594 + 595 + req = nvme_find_rq(apple_nvme_queue_tagset(anv, q), command_id); 596 + if (unlikely(!req)) { 597 + dev_warn(anv->dev, "invalid id %d completed", command_id); 598 + return; 599 + } 600 + 601 + if (!nvme_try_complete_req(req, cqe->status, cqe->result) && 602 + !blk_mq_add_to_batch(req, iob, nvme_req(req)->status, 603 + apple_nvme_complete_batch)) 604 + apple_nvme_complete_rq(req); 605 + } 606 + 607 + static inline void apple_nvme_update_cq_head(struct apple_nvme_queue *q) 608 + { 609 + u32 tmp = 
q->cq_head + 1; 610 + 611 + if (tmp == apple_nvme_queue_depth(q)) { 612 + q->cq_head = 0; 613 + q->cq_phase ^= 1; 614 + } else { 615 + q->cq_head = tmp; 616 + } 617 + } 618 + 619 + static bool apple_nvme_poll_cq(struct apple_nvme_queue *q, 620 + struct io_comp_batch *iob) 621 + { 622 + bool found = false; 623 + 624 + while (apple_nvme_cqe_pending(q)) { 625 + found = true; 626 + 627 + /* 628 + * load-load control dependency between phase and the rest of 629 + * the cqe requires a full read memory barrier 630 + */ 631 + dma_rmb(); 632 + apple_nvme_handle_cqe(q, iob, q->cq_head); 633 + apple_nvme_update_cq_head(q); 634 + } 635 + 636 + if (found) 637 + writel(q->cq_head, q->cq_db); 638 + 639 + return found; 640 + } 641 + 642 + static bool apple_nvme_handle_cq(struct apple_nvme_queue *q, bool force) 643 + { 644 + bool found; 645 + DEFINE_IO_COMP_BATCH(iob); 646 + 647 + if (!READ_ONCE(q->enabled) && !force) 648 + return false; 649 + 650 + found = apple_nvme_poll_cq(q, &iob); 651 + 652 + if (!rq_list_empty(iob.req_list)) 653 + apple_nvme_complete_batch(&iob); 654 + 655 + return found; 656 + } 657 + 658 + static irqreturn_t apple_nvme_irq(int irq, void *data) 659 + { 660 + struct apple_nvme *anv = data; 661 + bool handled = false; 662 + unsigned long flags; 663 + 664 + spin_lock_irqsave(&anv->lock, flags); 665 + if (apple_nvme_handle_cq(&anv->ioq, false)) 666 + handled = true; 667 + if (apple_nvme_handle_cq(&anv->adminq, false)) 668 + handled = true; 669 + spin_unlock_irqrestore(&anv->lock, flags); 670 + 671 + if (handled) 672 + return IRQ_HANDLED; 673 + return IRQ_NONE; 674 + } 675 + 676 + static int apple_nvme_create_cq(struct apple_nvme *anv) 677 + { 678 + struct nvme_command c = {}; 679 + 680 + /* 681 + * Note: we (ab)use the fact that the prp fields survive if no data 682 + * is attached to the request. 
683 + */ 684 + c.create_cq.opcode = nvme_admin_create_cq; 685 + c.create_cq.prp1 = cpu_to_le64(anv->ioq.cq_dma_addr); 686 + c.create_cq.cqid = cpu_to_le16(1); 687 + c.create_cq.qsize = cpu_to_le16(APPLE_ANS_MAX_QUEUE_DEPTH - 1); 688 + c.create_cq.cq_flags = cpu_to_le16(NVME_QUEUE_PHYS_CONTIG | NVME_CQ_IRQ_ENABLED); 689 + c.create_cq.irq_vector = cpu_to_le16(0); 690 + 691 + return nvme_submit_sync_cmd(anv->ctrl.admin_q, &c, NULL, 0); 692 + } 693 + 694 + static int apple_nvme_remove_cq(struct apple_nvme *anv) 695 + { 696 + struct nvme_command c = {}; 697 + 698 + c.delete_queue.opcode = nvme_admin_delete_cq; 699 + c.delete_queue.qid = cpu_to_le16(1); 700 + 701 + return nvme_submit_sync_cmd(anv->ctrl.admin_q, &c, NULL, 0); 702 + } 703 + 704 + static int apple_nvme_create_sq(struct apple_nvme *anv) 705 + { 706 + struct nvme_command c = {}; 707 + 708 + /* 709 + * Note: we (ab)use the fact that the prp fields survive if no data 710 + * is attached to the request. 711 + */ 712 + c.create_sq.opcode = nvme_admin_create_sq; 713 + c.create_sq.prp1 = cpu_to_le64(anv->ioq.sq_dma_addr); 714 + c.create_sq.sqid = cpu_to_le16(1); 715 + c.create_sq.qsize = cpu_to_le16(APPLE_ANS_MAX_QUEUE_DEPTH - 1); 716 + c.create_sq.sq_flags = cpu_to_le16(NVME_QUEUE_PHYS_CONTIG); 717 + c.create_sq.cqid = cpu_to_le16(1); 718 + 719 + return nvme_submit_sync_cmd(anv->ctrl.admin_q, &c, NULL, 0); 720 + } 721 + 722 + static int apple_nvme_remove_sq(struct apple_nvme *anv) 723 + { 724 + struct nvme_command c = {}; 725 + 726 + c.delete_queue.opcode = nvme_admin_delete_sq; 727 + c.delete_queue.qid = cpu_to_le16(1); 728 + 729 + return nvme_submit_sync_cmd(anv->ctrl.admin_q, &c, NULL, 0); 730 + } 731 + 732 + static blk_status_t apple_nvme_queue_rq(struct blk_mq_hw_ctx *hctx, 733 + const struct blk_mq_queue_data *bd) 734 + { 735 + struct nvme_ns *ns = hctx->queue->queuedata; 736 + struct apple_nvme_queue *q = hctx->driver_data; 737 + struct apple_nvme *anv = queue_to_apple_nvme(q); 738 + struct request *req = 
bd->rq; 739 + struct apple_nvme_iod *iod = blk_mq_rq_to_pdu(req); 740 + struct nvme_command *cmnd = &iod->cmd; 741 + blk_status_t ret; 742 + 743 + iod->npages = -1; 744 + iod->nents = 0; 745 + 746 + /* 747 + * We should not need to do this, but we're still using this to 748 + * ensure we can drain requests on a dying queue. 749 + */ 750 + if (unlikely(!READ_ONCE(q->enabled))) 751 + return BLK_STS_IOERR; 752 + 753 + if (!nvme_check_ready(&anv->ctrl, req, true)) 754 + return nvme_fail_nonready_command(&anv->ctrl, req); 755 + 756 + ret = nvme_setup_cmd(ns, req); 757 + if (ret) 758 + return ret; 759 + 760 + if (blk_rq_nr_phys_segments(req)) { 761 + ret = apple_nvme_map_data(anv, req, cmnd); 762 + if (ret) 763 + goto out_free_cmd; 764 + } 765 + 766 + blk_mq_start_request(req); 767 + apple_nvme_submit_cmd(q, cmnd); 768 + return BLK_STS_OK; 769 + 770 + out_free_cmd: 771 + nvme_cleanup_cmd(req); 772 + return ret; 773 + } 774 + 775 + static int apple_nvme_init_hctx(struct blk_mq_hw_ctx *hctx, void *data, 776 + unsigned int hctx_idx) 777 + { 778 + hctx->driver_data = data; 779 + return 0; 780 + } 781 + 782 + static int apple_nvme_init_request(struct blk_mq_tag_set *set, 783 + struct request *req, unsigned int hctx_idx, 784 + unsigned int numa_node) 785 + { 786 + struct apple_nvme_queue *q = set->driver_data; 787 + struct apple_nvme *anv = queue_to_apple_nvme(q); 788 + struct apple_nvme_iod *iod = blk_mq_rq_to_pdu(req); 789 + struct nvme_request *nreq = nvme_req(req); 790 + 791 + iod->q = q; 792 + nreq->ctrl = &anv->ctrl; 793 + nreq->cmd = &iod->cmd; 794 + 795 + return 0; 796 + } 797 + 798 + static void apple_nvme_disable(struct apple_nvme *anv, bool shutdown) 799 + { 800 + u32 csts = readl(anv->mmio_nvme + NVME_REG_CSTS); 801 + bool dead = false, freeze = false; 802 + unsigned long flags; 803 + 804 + if (apple_rtkit_is_crashed(anv->rtk)) 805 + dead = true; 806 + if (!(csts & NVME_CSTS_RDY)) 807 + dead = true; 808 + if (csts & NVME_CSTS_CFS) 809 + dead = true; 810 + 811 + if 
(anv->ctrl.state == NVME_CTRL_LIVE || 812 + anv->ctrl.state == NVME_CTRL_RESETTING) { 813 + freeze = true; 814 + nvme_start_freeze(&anv->ctrl); 815 + } 816 + 817 + /* 818 + * Give the controller a chance to complete all entered requests if 819 + * doing a safe shutdown. 820 + */ 821 + if (!dead && shutdown && freeze) 822 + nvme_wait_freeze_timeout(&anv->ctrl, NVME_IO_TIMEOUT); 823 + 824 + nvme_stop_queues(&anv->ctrl); 825 + 826 + if (!dead) { 827 + if (READ_ONCE(anv->ioq.enabled)) { 828 + apple_nvme_remove_sq(anv); 829 + apple_nvme_remove_cq(anv); 830 + } 831 + 832 + if (shutdown) 833 + nvme_shutdown_ctrl(&anv->ctrl); 834 + nvme_disable_ctrl(&anv->ctrl); 835 + } 836 + 837 + WRITE_ONCE(anv->ioq.enabled, false); 838 + WRITE_ONCE(anv->adminq.enabled, false); 839 + mb(); /* ensure that nvme_queue_rq() sees that enabled is cleared */ 840 + nvme_stop_admin_queue(&anv->ctrl); 841 + 842 + /* last chance to complete any requests before nvme_cancel_request */ 843 + spin_lock_irqsave(&anv->lock, flags); 844 + apple_nvme_handle_cq(&anv->ioq, true); 845 + apple_nvme_handle_cq(&anv->adminq, true); 846 + spin_unlock_irqrestore(&anv->lock, flags); 847 + 848 + blk_mq_tagset_busy_iter(&anv->tagset, nvme_cancel_request, &anv->ctrl); 849 + blk_mq_tagset_busy_iter(&anv->admin_tagset, nvme_cancel_request, 850 + &anv->ctrl); 851 + blk_mq_tagset_wait_completed_request(&anv->tagset); 852 + blk_mq_tagset_wait_completed_request(&anv->admin_tagset); 853 + 854 + /* 855 + * The driver will not be starting up queues again if shutting down so 856 + * must flush all entered requests to their failed completion to avoid 857 + * deadlocking blk-mq hot-cpu notifier. 
858 + */ 859 + if (shutdown) { 860 + nvme_start_queues(&anv->ctrl); 861 + nvme_start_admin_queue(&anv->ctrl); 862 + } 863 + } 864 + 865 + static enum blk_eh_timer_return apple_nvme_timeout(struct request *req, 866 + bool reserved) 867 + { 868 + struct apple_nvme_iod *iod = blk_mq_rq_to_pdu(req); 869 + struct apple_nvme_queue *q = iod->q; 870 + struct apple_nvme *anv = queue_to_apple_nvme(q); 871 + unsigned long flags; 872 + u32 csts = readl(anv->mmio_nvme + NVME_REG_CSTS); 873 + 874 + if (anv->ctrl.state != NVME_CTRL_LIVE) { 875 + /* 876 + * From rdma.c: 877 + * If we are resetting, connecting or deleting we should 878 + * complete immediately because we may block controller 879 + * teardown or setup sequence 880 + * - ctrl disable/shutdown fabrics requests 881 + * - connect requests 882 + * - initialization admin requests 883 + * - I/O requests that entered after unquiescing and 884 + * the controller stopped responding 885 + * 886 + * All other requests should be cancelled by the error 887 + * recovery work, so it's fine that we fail it here. 
888 + */ 889 + dev_warn(anv->dev, 890 + "I/O %d(aq:%d) timeout while not in live state\n", 891 + req->tag, q->is_adminq); 892 + if (blk_mq_request_started(req) && 893 + !blk_mq_request_completed(req)) { 894 + nvme_req(req)->status = NVME_SC_HOST_ABORTED_CMD; 895 + nvme_req(req)->flags |= NVME_REQ_CANCELLED; 896 + blk_mq_complete_request(req); 897 + } 898 + return BLK_EH_DONE; 899 + } 900 + 901 + /* check if we just missed an interrupt if we're still alive */ 902 + if (!apple_rtkit_is_crashed(anv->rtk) && !(csts & NVME_CSTS_CFS)) { 903 + spin_lock_irqsave(&anv->lock, flags); 904 + apple_nvme_handle_cq(q, false); 905 + spin_unlock_irqrestore(&anv->lock, flags); 906 + if (blk_mq_request_completed(req)) { 907 + dev_warn(anv->dev, 908 + "I/O %d(aq:%d) timeout: completion polled\n", 909 + req->tag, q->is_adminq); 910 + return BLK_EH_DONE; 911 + } 912 + } 913 + 914 + /* 915 + * aborting commands isn't supported which leaves a full reset as our 916 + * only option here 917 + */ 918 + dev_warn(anv->dev, "I/O %d(aq:%d) timeout: resetting controller\n", 919 + req->tag, q->is_adminq); 920 + nvme_req(req)->flags |= NVME_REQ_CANCELLED; 921 + apple_nvme_disable(anv, false); 922 + nvme_reset_ctrl(&anv->ctrl); 923 + return BLK_EH_DONE; 924 + } 925 + 926 + static int apple_nvme_poll(struct blk_mq_hw_ctx *hctx, 927 + struct io_comp_batch *iob) 928 + { 929 + struct apple_nvme_queue *q = hctx->driver_data; 930 + struct apple_nvme *anv = queue_to_apple_nvme(q); 931 + bool found; 932 + unsigned long flags; 933 + 934 + spin_lock_irqsave(&anv->lock, flags); 935 + found = apple_nvme_poll_cq(q, iob); 936 + spin_unlock_irqrestore(&anv->lock, flags); 937 + 938 + return found; 939 + } 940 + 941 + static const struct blk_mq_ops apple_nvme_mq_admin_ops = { 942 + .queue_rq = apple_nvme_queue_rq, 943 + .complete = apple_nvme_complete_rq, 944 + .init_hctx = apple_nvme_init_hctx, 945 + .init_request = apple_nvme_init_request, 946 + .timeout = apple_nvme_timeout, 947 + }; 948 + 949 + static const 
struct blk_mq_ops apple_nvme_mq_ops = { 950 + .queue_rq = apple_nvme_queue_rq, 951 + .complete = apple_nvme_complete_rq, 952 + .init_hctx = apple_nvme_init_hctx, 953 + .init_request = apple_nvme_init_request, 954 + .timeout = apple_nvme_timeout, 955 + .poll = apple_nvme_poll, 956 + }; 957 + 958 + static void apple_nvme_init_queue(struct apple_nvme_queue *q) 959 + { 960 + unsigned int depth = apple_nvme_queue_depth(q); 961 + 962 + q->cq_head = 0; 963 + q->cq_phase = 1; 964 + memset(q->tcbs, 0, 965 + APPLE_ANS_MAX_QUEUE_DEPTH * sizeof(struct apple_nvmmu_tcb)); 966 + memset(q->cqes, 0, depth * sizeof(struct nvme_completion)); 967 + WRITE_ONCE(q->enabled, true); 968 + wmb(); /* ensure the first interrupt sees the initialization */ 969 + } 970 + 971 + static void apple_nvme_reset_work(struct work_struct *work) 972 + { 973 + unsigned int nr_io_queues = 1; 974 + int ret; 975 + u32 boot_status, aqa; 976 + struct apple_nvme *anv = 977 + container_of(work, struct apple_nvme, ctrl.reset_work); 978 + 979 + if (anv->ctrl.state != NVME_CTRL_RESETTING) { 980 + dev_warn(anv->dev, "ctrl state %d is not RESETTING\n", 981 + anv->ctrl.state); 982 + ret = -ENODEV; 983 + goto out; 984 + } 985 + 986 + /* there's unfortunately no known way to recover if RTKit crashed :( */ 987 + if (apple_rtkit_is_crashed(anv->rtk)) { 988 + dev_err(anv->dev, 989 + "RTKit has crashed without any way to recover."); 990 + ret = -EIO; 991 + goto out; 992 + } 993 + 994 + if (anv->ctrl.ctrl_config & NVME_CC_ENABLE) 995 + apple_nvme_disable(anv, false); 996 + 997 + /* RTKit must be shut down cleanly for the (soft)-reset to work */ 998 + if (apple_rtkit_is_running(anv->rtk)) { 999 + dev_dbg(anv->dev, "Trying to shut down RTKit before reset."); 1000 + ret = apple_rtkit_shutdown(anv->rtk); 1001 + if (ret) 1002 + goto out; 1003 + } 1004 + 1005 + writel(0, anv->mmio_coproc + APPLE_ANS_COPROC_CPU_CONTROL); 1006 + 1007 + ret = reset_control_assert(anv->reset); 1008 + if (ret) 1009 + goto out; 1010 + 1011 + ret = 
apple_rtkit_reinit(anv->rtk); 1012 + if (ret) 1013 + goto out; 1014 + 1015 + ret = reset_control_deassert(anv->reset); 1016 + if (ret) 1017 + goto out; 1018 + 1019 + writel(APPLE_ANS_COPROC_CPU_CONTROL_RUN, 1020 + anv->mmio_coproc + APPLE_ANS_COPROC_CPU_CONTROL); 1021 + ret = apple_rtkit_boot(anv->rtk); 1022 + if (ret) { 1023 + dev_err(anv->dev, "ANS did not boot"); 1024 + goto out; 1025 + } 1026 + 1027 + ret = readl_poll_timeout(anv->mmio_nvme + APPLE_ANS_BOOT_STATUS, 1028 + boot_status, 1029 + boot_status == APPLE_ANS_BOOT_STATUS_OK, 1030 + USEC_PER_MSEC, APPLE_ANS_BOOT_TIMEOUT); 1031 + if (ret) { 1032 + dev_err(anv->dev, "ANS did not initialize"); 1033 + goto out; 1034 + } 1035 + 1036 + dev_dbg(anv->dev, "ANS booted successfully."); 1037 + 1038 + /* 1039 + * Limit the max command size to prevent iod->sg allocations going 1040 + * over a single page. 1041 + */ 1042 + anv->ctrl.max_hw_sectors = min_t(u32, NVME_MAX_KB_SZ << 1, 1043 + dma_max_mapping_size(anv->dev) >> 9); 1044 + anv->ctrl.max_segments = NVME_MAX_SEGS; 1045 + 1046 + /* 1047 + * Enable NVMMU and linear submission queues. 1048 + * While we could keep those disabled and pretend this is slightly 1049 + * more common NVMe controller we'd still need some quirks (e.g. 1050 + * sq entries will be 128 bytes) and Apple might drop support for 1051 + * that mode in the future. 
1052 + */ 1053 + writel(APPLE_ANS_LINEAR_SQ_EN, 1054 + anv->mmio_nvme + APPLE_ANS_LINEAR_SQ_CTRL); 1055 + 1056 + /* Allow as many pending command as possible for both queues */ 1057 + writel(APPLE_ANS_MAX_QUEUE_DEPTH | (APPLE_ANS_MAX_QUEUE_DEPTH << 16), 1058 + anv->mmio_nvme + APPLE_ANS_MAX_PEND_CMDS_CTRL); 1059 + 1060 + /* Setup the NVMMU for the maximum admin and IO queue depth */ 1061 + writel(APPLE_ANS_MAX_QUEUE_DEPTH - 1, 1062 + anv->mmio_nvme + APPLE_NVMMU_NUM_TCBS); 1063 + 1064 + /* 1065 + * This is probably a chicken bit: without it all commands where any PRP 1066 + * is set to zero (including those that don't use that field) fail and 1067 + * the co-processor complains about "completed with err BAD_CMD-" or 1068 + * a "NULL_PRP_PTR_ERR" in the syslog 1069 + */ 1070 + writel(readl(anv->mmio_nvme + APPLE_ANS_UNKNOWN_CTRL) & 1071 + ~APPLE_ANS_PRP_NULL_CHECK, 1072 + anv->mmio_nvme + APPLE_ANS_UNKNOWN_CTRL); 1073 + 1074 + /* Setup the admin queue */ 1075 + aqa = APPLE_NVME_AQ_DEPTH - 1; 1076 + aqa |= aqa << 16; 1077 + writel(aqa, anv->mmio_nvme + NVME_REG_AQA); 1078 + writeq(anv->adminq.sq_dma_addr, anv->mmio_nvme + NVME_REG_ASQ); 1079 + writeq(anv->adminq.cq_dma_addr, anv->mmio_nvme + NVME_REG_ACQ); 1080 + 1081 + /* Setup NVMMU for both queues */ 1082 + writeq(anv->adminq.tcb_dma_addr, 1083 + anv->mmio_nvme + APPLE_NVMMU_ASQ_TCB_BASE); 1084 + writeq(anv->ioq.tcb_dma_addr, 1085 + anv->mmio_nvme + APPLE_NVMMU_IOSQ_TCB_BASE); 1086 + 1087 + anv->ctrl.sqsize = 1088 + APPLE_ANS_MAX_QUEUE_DEPTH - 1; /* 0's based queue depth */ 1089 + anv->ctrl.cap = readq(anv->mmio_nvme + NVME_REG_CAP); 1090 + 1091 + dev_dbg(anv->dev, "Enabling controller now"); 1092 + ret = nvme_enable_ctrl(&anv->ctrl); 1093 + if (ret) 1094 + goto out; 1095 + 1096 + dev_dbg(anv->dev, "Starting admin queue"); 1097 + apple_nvme_init_queue(&anv->adminq); 1098 + nvme_start_admin_queue(&anv->ctrl); 1099 + 1100 + if (!nvme_change_ctrl_state(&anv->ctrl, NVME_CTRL_CONNECTING)) { 1101 + 
dev_warn(anv->ctrl.device, 1102 + "failed to mark controller CONNECTING\n"); 1103 + ret = -ENODEV; 1104 + goto out; 1105 + } 1106 + 1107 + ret = nvme_init_ctrl_finish(&anv->ctrl); 1108 + if (ret) 1109 + goto out; 1110 + 1111 + dev_dbg(anv->dev, "Creating IOCQ"); 1112 + ret = apple_nvme_create_cq(anv); 1113 + if (ret) 1114 + goto out; 1115 + dev_dbg(anv->dev, "Creating IOSQ"); 1116 + ret = apple_nvme_create_sq(anv); 1117 + if (ret) 1118 + goto out_remove_cq; 1119 + 1120 + apple_nvme_init_queue(&anv->ioq); 1121 + nr_io_queues = 1; 1122 + ret = nvme_set_queue_count(&anv->ctrl, &nr_io_queues); 1123 + if (ret) 1124 + goto out_remove_sq; 1125 + if (nr_io_queues != 1) { 1126 + ret = -ENXIO; 1127 + goto out_remove_sq; 1128 + } 1129 + 1130 + anv->ctrl.queue_count = nr_io_queues + 1; 1131 + 1132 + nvme_start_queues(&anv->ctrl); 1133 + nvme_wait_freeze(&anv->ctrl); 1134 + blk_mq_update_nr_hw_queues(&anv->tagset, 1); 1135 + nvme_unfreeze(&anv->ctrl); 1136 + 1137 + if (!nvme_change_ctrl_state(&anv->ctrl, NVME_CTRL_LIVE)) { 1138 + dev_warn(anv->ctrl.device, 1139 + "failed to mark controller live state\n"); 1140 + ret = -ENODEV; 1141 + goto out_remove_sq; 1142 + } 1143 + 1144 + nvme_start_ctrl(&anv->ctrl); 1145 + 1146 + dev_dbg(anv->dev, "ANS boot and NVMe init completed."); 1147 + return; 1148 + 1149 + out_remove_sq: 1150 + apple_nvme_remove_sq(anv); 1151 + out_remove_cq: 1152 + apple_nvme_remove_cq(anv); 1153 + out: 1154 + dev_warn(anv->ctrl.device, "Reset failure status: %d\n", ret); 1155 + nvme_change_ctrl_state(&anv->ctrl, NVME_CTRL_DELETING); 1156 + nvme_get_ctrl(&anv->ctrl); 1157 + apple_nvme_disable(anv, false); 1158 + nvme_kill_queues(&anv->ctrl); 1159 + if (!queue_work(nvme_wq, &anv->remove_work)) 1160 + nvme_put_ctrl(&anv->ctrl); 1161 + } 1162 + 1163 + static void apple_nvme_remove_dead_ctrl_work(struct work_struct *work) 1164 + { 1165 + struct apple_nvme *anv = 1166 + container_of(work, struct apple_nvme, remove_work); 1167 + 1168 + nvme_put_ctrl(&anv->ctrl); 1169 + 
device_release_driver(anv->dev); 1170 + } 1171 + 1172 + static int apple_nvme_reg_read32(struct nvme_ctrl *ctrl, u32 off, u32 *val) 1173 + { 1174 + *val = readl(ctrl_to_apple_nvme(ctrl)->mmio_nvme + off); 1175 + return 0; 1176 + } 1177 + 1178 + static int apple_nvme_reg_write32(struct nvme_ctrl *ctrl, u32 off, u32 val) 1179 + { 1180 + writel(val, ctrl_to_apple_nvme(ctrl)->mmio_nvme + off); 1181 + return 0; 1182 + } 1183 + 1184 + static int apple_nvme_reg_read64(struct nvme_ctrl *ctrl, u32 off, u64 *val) 1185 + { 1186 + *val = readq(ctrl_to_apple_nvme(ctrl)->mmio_nvme + off); 1187 + return 0; 1188 + } 1189 + 1190 + static int apple_nvme_get_address(struct nvme_ctrl *ctrl, char *buf, int size) 1191 + { 1192 + struct device *dev = ctrl_to_apple_nvme(ctrl)->dev; 1193 + 1194 + return snprintf(buf, size, "%s\n", dev_name(dev)); 1195 + } 1196 + 1197 + static void apple_nvme_free_ctrl(struct nvme_ctrl *ctrl) 1198 + { 1199 + struct apple_nvme *anv = ctrl_to_apple_nvme(ctrl); 1200 + 1201 + if (anv->ctrl.admin_q) 1202 + blk_put_queue(anv->ctrl.admin_q); 1203 + put_device(anv->dev); 1204 + } 1205 + 1206 + static const struct nvme_ctrl_ops nvme_ctrl_ops = { 1207 + .name = "apple-nvme", 1208 + .module = THIS_MODULE, 1209 + .flags = 0, 1210 + .reg_read32 = apple_nvme_reg_read32, 1211 + .reg_write32 = apple_nvme_reg_write32, 1212 + .reg_read64 = apple_nvme_reg_read64, 1213 + .free_ctrl = apple_nvme_free_ctrl, 1214 + .get_address = apple_nvme_get_address, 1215 + }; 1216 + 1217 + static void apple_nvme_async_probe(void *data, async_cookie_t cookie) 1218 + { 1219 + struct apple_nvme *anv = data; 1220 + 1221 + flush_work(&anv->ctrl.reset_work); 1222 + flush_work(&anv->ctrl.scan_work); 1223 + nvme_put_ctrl(&anv->ctrl); 1224 + } 1225 + 1226 + static int apple_nvme_alloc_tagsets(struct apple_nvme *anv) 1227 + { 1228 + int ret; 1229 + 1230 + anv->admin_tagset.ops = &apple_nvme_mq_admin_ops; 1231 + anv->admin_tagset.nr_hw_queues = 1; 1232 + anv->admin_tagset.queue_depth = 
APPLE_NVME_AQ_MQ_TAG_DEPTH; 1233 + anv->admin_tagset.timeout = NVME_ADMIN_TIMEOUT; 1234 + anv->admin_tagset.numa_node = NUMA_NO_NODE; 1235 + anv->admin_tagset.cmd_size = sizeof(struct apple_nvme_iod); 1236 + anv->admin_tagset.flags = BLK_MQ_F_NO_SCHED; 1237 + anv->admin_tagset.driver_data = &anv->adminq; 1238 + 1239 + ret = blk_mq_alloc_tag_set(&anv->admin_tagset); 1240 + if (ret) 1241 + return ret; 1242 + ret = devm_add_action_or_reset(anv->dev, 1243 + (void (*)(void *))blk_mq_free_tag_set, 1244 + &anv->admin_tagset); 1245 + if (ret) 1246 + return ret; 1247 + 1248 + anv->tagset.ops = &apple_nvme_mq_ops; 1249 + anv->tagset.nr_hw_queues = 1; 1250 + anv->tagset.nr_maps = 1; 1251 + /* 1252 + * Tags are used as an index to the NVMMU and must be unique across 1253 + * both queues. The admin queue gets the first APPLE_NVME_AQ_DEPTH which 1254 + * must be marked as reserved in the IO queue. 1255 + */ 1256 + anv->tagset.reserved_tags = APPLE_NVME_AQ_DEPTH; 1257 + anv->tagset.queue_depth = APPLE_ANS_MAX_QUEUE_DEPTH - 1; 1258 + anv->tagset.timeout = NVME_IO_TIMEOUT; 1259 + anv->tagset.numa_node = NUMA_NO_NODE; 1260 + anv->tagset.cmd_size = sizeof(struct apple_nvme_iod); 1261 + anv->tagset.flags = BLK_MQ_F_SHOULD_MERGE; 1262 + anv->tagset.driver_data = &anv->ioq; 1263 + 1264 + ret = blk_mq_alloc_tag_set(&anv->tagset); 1265 + if (ret) 1266 + return ret; 1267 + ret = devm_add_action_or_reset( 1268 + anv->dev, (void (*)(void *))blk_mq_free_tag_set, &anv->tagset); 1269 + if (ret) 1270 + return ret; 1271 + 1272 + anv->ctrl.admin_tagset = &anv->admin_tagset; 1273 + anv->ctrl.tagset = &anv->tagset; 1274 + 1275 + return 0; 1276 + } 1277 + 1278 + static int apple_nvme_queue_alloc(struct apple_nvme *anv, 1279 + struct apple_nvme_queue *q) 1280 + { 1281 + unsigned int depth = apple_nvme_queue_depth(q); 1282 + 1283 + q->cqes = dmam_alloc_coherent(anv->dev, 1284 + depth * sizeof(struct nvme_completion), 1285 + &q->cq_dma_addr, GFP_KERNEL); 1286 + if (!q->cqes) 1287 + return -ENOMEM; 1288 
+ 1289 + q->sqes = dmam_alloc_coherent(anv->dev, 1290 + depth * sizeof(struct nvme_command), 1291 + &q->sq_dma_addr, GFP_KERNEL); 1292 + if (!q->sqes) 1293 + return -ENOMEM; 1294 + 1295 + /* 1296 + * We need the maximum queue depth here because the NVMMU only has a 1297 + * single depth configuration shared between both queues. 1298 + */ 1299 + q->tcbs = dmam_alloc_coherent(anv->dev, 1300 + APPLE_ANS_MAX_QUEUE_DEPTH * 1301 + sizeof(struct apple_nvmmu_tcb), 1302 + &q->tcb_dma_addr, GFP_KERNEL); 1303 + if (!q->tcbs) 1304 + return -ENOMEM; 1305 + 1306 + /* 1307 + * initialize phase to make sure the allocated and empty memory 1308 + * doesn't look like a full cq already. 1309 + */ 1310 + q->cq_phase = 1; 1311 + return 0; 1312 + } 1313 + 1314 + static void apple_nvme_detach_genpd(struct apple_nvme *anv) 1315 + { 1316 + int i; 1317 + 1318 + if (anv->pd_count <= 1) 1319 + return; 1320 + 1321 + for (i = anv->pd_count - 1; i >= 0; i--) { 1322 + if (anv->pd_link[i]) 1323 + device_link_del(anv->pd_link[i]); 1324 + if (!IS_ERR_OR_NULL(anv->pd_dev[i])) 1325 + dev_pm_domain_detach(anv->pd_dev[i], true); 1326 + } 1327 + } 1328 + 1329 + static int apple_nvme_attach_genpd(struct apple_nvme *anv) 1330 + { 1331 + struct device *dev = anv->dev; 1332 + int i; 1333 + 1334 + anv->pd_count = of_count_phandle_with_args( 1335 + dev->of_node, "power-domains", "#power-domain-cells"); 1336 + if (anv->pd_count <= 1) 1337 + return 0; 1338 + 1339 + anv->pd_dev = devm_kcalloc(dev, anv->pd_count, sizeof(*anv->pd_dev), 1340 + GFP_KERNEL); 1341 + if (!anv->pd_dev) 1342 + return -ENOMEM; 1343 + 1344 + anv->pd_link = devm_kcalloc(dev, anv->pd_count, sizeof(*anv->pd_link), 1345 + GFP_KERNEL); 1346 + if (!anv->pd_link) 1347 + return -ENOMEM; 1348 + 1349 + for (i = 0; i < anv->pd_count; i++) { 1350 + anv->pd_dev[i] = dev_pm_domain_attach_by_id(dev, i); 1351 + if (IS_ERR(anv->pd_dev[i])) { 1352 + apple_nvme_detach_genpd(anv); 1353 + return PTR_ERR(anv->pd_dev[i]); 1354 + } 1355 + 1356 + anv->pd_link[i] = 
device_link_add(dev, anv->pd_dev[i], 1357 + DL_FLAG_STATELESS | 1358 + DL_FLAG_PM_RUNTIME | 1359 + DL_FLAG_RPM_ACTIVE); 1360 + if (!anv->pd_link[i]) { 1361 + apple_nvme_detach_genpd(anv); 1362 + return -EINVAL; 1363 + } 1364 + } 1365 + 1366 + return 0; 1367 + } 1368 + 1369 + static int apple_nvme_probe(struct platform_device *pdev) 1370 + { 1371 + struct device *dev = &pdev->dev; 1372 + struct apple_nvme *anv; 1373 + int ret; 1374 + 1375 + anv = devm_kzalloc(dev, sizeof(*anv), GFP_KERNEL); 1376 + if (!anv) 1377 + return -ENOMEM; 1378 + 1379 + anv->dev = get_device(dev); 1380 + anv->adminq.is_adminq = true; 1381 + platform_set_drvdata(pdev, anv); 1382 + 1383 + ret = apple_nvme_attach_genpd(anv); 1384 + if (ret < 0) { 1385 + dev_err_probe(dev, ret, "Failed to attach power domains"); 1386 + goto put_dev; 1387 + } 1388 + if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) { 1389 + ret = -ENXIO; 1390 + goto put_dev; 1391 + } 1392 + 1393 + anv->irq = platform_get_irq(pdev, 0); 1394 + if (anv->irq < 0) { 1395 + ret = anv->irq; 1396 + goto put_dev; 1397 + } 1398 + if (!anv->irq) { 1399 + ret = -ENXIO; 1400 + goto put_dev; 1401 + } 1402 + 1403 + anv->mmio_coproc = devm_platform_ioremap_resource_byname(pdev, "ans"); 1404 + if (IS_ERR(anv->mmio_coproc)) { 1405 + ret = PTR_ERR(anv->mmio_coproc); 1406 + goto put_dev; 1407 + } 1408 + anv->mmio_nvme = devm_platform_ioremap_resource_byname(pdev, "nvme"); 1409 + if (IS_ERR(anv->mmio_nvme)) { 1410 + ret = PTR_ERR(anv->mmio_nvme); 1411 + goto put_dev; 1412 + } 1413 + 1414 + anv->adminq.sq_db = anv->mmio_nvme + APPLE_ANS_LINEAR_ASQ_DB; 1415 + anv->adminq.cq_db = anv->mmio_nvme + APPLE_ANS_ACQ_DB; 1416 + anv->ioq.sq_db = anv->mmio_nvme + APPLE_ANS_LINEAR_IOSQ_DB; 1417 + anv->ioq.cq_db = anv->mmio_nvme + APPLE_ANS_IOCQ_DB; 1418 + 1419 + anv->sart = devm_apple_sart_get(dev); 1420 + if (IS_ERR(anv->sart)) { 1421 + ret = dev_err_probe(dev, PTR_ERR(anv->sart), 1422 + "Failed to initialize SART"); 1423 + goto put_dev; 1424 + } 1425 + 1426 
+ anv->reset = devm_reset_control_array_get_exclusive(anv->dev); 1427 + if (IS_ERR(anv->reset)) { 1428 + ret = dev_err_probe(dev, PTR_ERR(anv->reset), 1429 + "Failed to get reset control"); 1430 + goto put_dev; 1431 + } 1432 + 1433 + INIT_WORK(&anv->ctrl.reset_work, apple_nvme_reset_work); 1434 + INIT_WORK(&anv->remove_work, apple_nvme_remove_dead_ctrl_work); 1435 + spin_lock_init(&anv->lock); 1436 + 1437 + ret = apple_nvme_queue_alloc(anv, &anv->adminq); 1438 + if (ret) 1439 + goto put_dev; 1440 + ret = apple_nvme_queue_alloc(anv, &anv->ioq); 1441 + if (ret) 1442 + goto put_dev; 1443 + 1444 + anv->prp_page_pool = dmam_pool_create("prp list page", anv->dev, 1445 + NVME_CTRL_PAGE_SIZE, 1446 + NVME_CTRL_PAGE_SIZE, 0); 1447 + if (!anv->prp_page_pool) { 1448 + ret = -ENOMEM; 1449 + goto put_dev; 1450 + } 1451 + 1452 + anv->prp_small_pool = 1453 + dmam_pool_create("prp list 256", anv->dev, 256, 256, 0); 1454 + if (!anv->prp_small_pool) { 1455 + ret = -ENOMEM; 1456 + goto put_dev; 1457 + } 1458 + 1459 + WARN_ON_ONCE(apple_nvme_iod_alloc_size() > PAGE_SIZE); 1460 + anv->iod_mempool = 1461 + mempool_create_kmalloc_pool(1, apple_nvme_iod_alloc_size()); 1462 + if (!anv->iod_mempool) { 1463 + ret = -ENOMEM; 1464 + goto put_dev; 1465 + } 1466 + ret = devm_add_action_or_reset( 1467 + anv->dev, (void (*)(void *))mempool_destroy, anv->iod_mempool); 1468 + if (ret) 1469 + goto put_dev; 1470 + 1471 + ret = apple_nvme_alloc_tagsets(anv); 1472 + if (ret) 1473 + goto put_dev; 1474 + 1475 + ret = devm_request_irq(anv->dev, anv->irq, apple_nvme_irq, 0, 1476 + "nvme-apple", anv); 1477 + if (ret) { 1478 + dev_err_probe(dev, ret, "Failed to request IRQ"); 1479 + goto put_dev; 1480 + } 1481 + 1482 + anv->rtk = 1483 + devm_apple_rtkit_init(dev, anv, NULL, 0, &apple_nvme_rtkit_ops); 1484 + if (IS_ERR(anv->rtk)) { 1485 + ret = dev_err_probe(dev, PTR_ERR(anv->rtk), 1486 + "Failed to initialize RTKit"); 1487 + goto put_dev; 1488 + } 1489 + 1490 + ret = nvme_init_ctrl(&anv->ctrl, anv->dev, 
&nvme_ctrl_ops, 1491 + NVME_QUIRK_SKIP_CID_GEN); 1492 + if (ret) { 1493 + dev_err_probe(dev, ret, "Failed to initialize nvme_ctrl"); 1494 + goto put_dev; 1495 + } 1496 + 1497 + anv->ctrl.admin_q = blk_mq_init_queue(&anv->admin_tagset); 1498 + if (IS_ERR(anv->ctrl.admin_q)) { 1499 + ret = -ENOMEM; 1500 + goto put_dev; 1501 + } 1502 + 1503 + if (!blk_get_queue(anv->ctrl.admin_q)) { 1504 + nvme_start_admin_queue(&anv->ctrl); 1505 + blk_cleanup_queue(anv->ctrl.admin_q); 1506 + anv->ctrl.admin_q = NULL; 1507 + ret = -ENODEV; 1508 + goto put_dev; 1509 + } 1510 + 1511 + nvme_reset_ctrl(&anv->ctrl); 1512 + async_schedule(apple_nvme_async_probe, anv); 1513 + 1514 + return 0; 1515 + 1516 + put_dev: 1517 + put_device(anv->dev); 1518 + return ret; 1519 + } 1520 + 1521 + static int apple_nvme_remove(struct platform_device *pdev) 1522 + { 1523 + struct apple_nvme *anv = platform_get_drvdata(pdev); 1524 + 1525 + nvme_change_ctrl_state(&anv->ctrl, NVME_CTRL_DELETING); 1526 + flush_work(&anv->ctrl.reset_work); 1527 + nvme_stop_ctrl(&anv->ctrl); 1528 + nvme_remove_namespaces(&anv->ctrl); 1529 + apple_nvme_disable(anv, true); 1530 + nvme_uninit_ctrl(&anv->ctrl); 1531 + 1532 + if (apple_rtkit_is_running(anv->rtk)) 1533 + apple_rtkit_shutdown(anv->rtk); 1534 + 1535 + apple_nvme_detach_genpd(anv); 1536 + 1537 + return 0; 1538 + } 1539 + 1540 + static void apple_nvme_shutdown(struct platform_device *pdev) 1541 + { 1542 + struct apple_nvme *anv = platform_get_drvdata(pdev); 1543 + 1544 + apple_nvme_disable(anv, true); 1545 + if (apple_rtkit_is_running(anv->rtk)) 1546 + apple_rtkit_shutdown(anv->rtk); 1547 + } 1548 + 1549 + static int apple_nvme_resume(struct device *dev) 1550 + { 1551 + struct apple_nvme *anv = dev_get_drvdata(dev); 1552 + 1553 + return nvme_reset_ctrl(&anv->ctrl); 1554 + } 1555 + 1556 + static int apple_nvme_suspend(struct device *dev) 1557 + { 1558 + struct apple_nvme *anv = dev_get_drvdata(dev); 1559 + int ret = 0; 1560 + 1561 + apple_nvme_disable(anv, true); 1562 + 
1563 + if (apple_rtkit_is_running(anv->rtk)) 1564 + ret = apple_rtkit_shutdown(anv->rtk); 1565 + 1566 + writel(0, anv->mmio_coproc + APPLE_ANS_COPROC_CPU_CONTROL); 1567 + 1568 + return ret; 1569 + } 1570 + 1571 + static DEFINE_SIMPLE_DEV_PM_OPS(apple_nvme_pm_ops, apple_nvme_suspend, 1572 + apple_nvme_resume); 1573 + 1574 + static const struct of_device_id apple_nvme_of_match[] = { 1575 + { .compatible = "apple,nvme-ans2" }, 1576 + {}, 1577 + }; 1578 + MODULE_DEVICE_TABLE(of, apple_nvme_of_match); 1579 + 1580 + static struct platform_driver apple_nvme_driver = { 1581 + .driver = { 1582 + .name = "nvme-apple", 1583 + .of_match_table = apple_nvme_of_match, 1584 + .pm = pm_sleep_ptr(&apple_nvme_pm_ops), 1585 + }, 1586 + .probe = apple_nvme_probe, 1587 + .remove = apple_nvme_remove, 1588 + .shutdown = apple_nvme_shutdown, 1589 + }; 1590 + module_platform_driver(apple_nvme_driver); 1591 + 1592 + MODULE_AUTHOR("Sven Peter <sven@svenpeter.dev>"); 1593 + MODULE_LICENSE("GPL");
+2 -2
drivers/reset/Kconfig
··· 183 183 184 184 config RESET_RZG2L_USBPHY_CTRL 185 185 tristate "Renesas RZ/G2L USBPHY control driver" 186 - depends on ARCH_R9A07G044 || COMPILE_TEST 186 + depends on ARCH_RZG2L || COMPILE_TEST 187 187 help 188 188 Support for USBPHY Control found on RZ/G2L family. It mainly 189 189 controls reset and power down of the USB/PHY. ··· 240 240 241 241 config RESET_TI_SCI 242 242 tristate "TI System Control Interface (TI-SCI) reset driver" 243 - depends on TI_SCI_PROTOCOL 243 + depends on TI_SCI_PROTOCOL || COMPILE_TEST 244 244 help 245 245 This enables the reset driver support over TI System Control Interface 246 246 available on some new TI's SoCs. If you wish to use reset resources
+14 -1
drivers/reset/core.c
··· 12 12 #include <linux/kref.h> 13 13 #include <linux/module.h> 14 14 #include <linux/of.h> 15 + #include <linux/acpi.h> 15 16 #include <linux/reset.h> 16 17 #include <linux/reset-controller.h> 17 18 #include <linux/slab.h> ··· 1101 1100 * 1102 1101 * Convenience wrapper for __reset_control_get() and reset_control_reset(). 1103 1102 * This is useful for the common case of devices with single, dedicated reset 1104 - * lines. 1103 + * lines. _RST firmware method will be called for devices with ACPI. 1105 1104 */ 1106 1105 int __device_reset(struct device *dev, bool optional) 1107 1106 { 1108 1107 struct reset_control *rstc; 1109 1108 int ret; 1109 + 1110 + #ifdef CONFIG_ACPI 1111 + acpi_handle handle = ACPI_HANDLE(dev); 1112 + 1113 + if (handle) { 1114 + if (!acpi_has_method(handle, "_RST")) 1115 + return optional ? 0 : -ENOENT; 1116 + if (ACPI_FAILURE(acpi_evaluate_object(handle, "_RST", NULL, 1117 + NULL))) 1118 + return -EIO; 1119 + } 1120 + #endif 1110 1121 1111 1122 rstc = __reset_control_get(dev, NULL, 0, 0, optional, true); 1112 1123 if (IS_ERR(rstc))
+6
drivers/reset/reset-meson.c
··· 98 98 .level_offset = 0x40, 99 99 }; 100 100 101 + static const struct meson_reset_param meson_s4_param = { 102 + .reg_count = 6, 103 + .level_offset = 0x40, 104 + }; 105 + 101 106 static const struct of_device_id meson_reset_dt_ids[] = { 102 107 { .compatible = "amlogic,meson8b-reset", .data = &meson8b_param}, 103 108 { .compatible = "amlogic,meson-gxbb-reset", .data = &meson8b_param}, 104 109 { .compatible = "amlogic,meson-axg-reset", .data = &meson8b_param}, 105 110 { .compatible = "amlogic,meson-a1-reset", .data = &meson_a1_param}, 111 + { .compatible = "amlogic,meson-s4-reset", .data = &meson_s4_param}, 106 112 { /* sentinel */ }, 107 113 }; 108 114 MODULE_DEVICE_TABLE(of, meson_reset_dt_ids);
+1
drivers/reset/reset-simple.c
··· 144 144 .data = &reset_simple_active_low }, 145 145 { .compatible = "aspeed,ast2400-lpc-reset" }, 146 146 { .compatible = "aspeed,ast2500-lpc-reset" }, 147 + { .compatible = "aspeed,ast2600-lpc-reset" }, 147 148 { .compatible = "bitmain,bm1880-reset", 148 149 .data = &reset_simple_active_low }, 149 150 { .compatible = "brcm,bcm4908-misc-pcie-reset",
+34 -41
drivers/reset/reset-uniphier-glue.c
···
 
 struct uniphier_glue_reset_priv {
     struct clk_bulk_data clk[MAX_CLKS];
-    struct reset_control *rst[MAX_RSTS];
+    struct reset_control_bulk_data rst[MAX_RSTS];
     struct reset_simple_data rdata;
     const struct uniphier_glue_reset_soc_data *data;
 };
+
+static void uniphier_clk_disable(void *_priv)
+{
+    struct uniphier_glue_reset_priv *priv = _priv;
+
+    clk_bulk_disable_unprepare(priv->data->nclks, priv->clk);
+}
+
+static void uniphier_rst_assert(void *_priv)
+{
+    struct uniphier_glue_reset_priv *priv = _priv;
+
+    reset_control_bulk_assert(priv->data->nrsts, priv->rst);
+}
 
 static int uniphier_glue_reset_probe(struct platform_device *pdev)
 {
···
     struct uniphier_glue_reset_priv *priv;
     struct resource *res;
     resource_size_t size;
-    const char *name;
-    int i, ret, nr;
+    int i, ret;
 
     priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
     if (!priv)
···
     if (ret)
         return ret;
 
-    for (i = 0; i < priv->data->nrsts; i++) {
-        name = priv->data->reset_names[i];
-        priv->rst[i] = devm_reset_control_get_shared(dev, name);
-        if (IS_ERR(priv->rst[i]))
-            return PTR_ERR(priv->rst[i]);
-    }
+    for (i = 0; i < priv->data->nrsts; i++)
+        priv->rst[i].id = priv->data->reset_names[i];
+    ret = devm_reset_control_bulk_get_shared(dev, priv->data->nrsts,
+                                             priv->rst);
+    if (ret)
+        return ret;
 
     ret = clk_bulk_prepare_enable(priv->data->nclks, priv->clk);
     if (ret)
         return ret;
 
-    for (nr = 0; nr < priv->data->nrsts; nr++) {
-        ret = reset_control_deassert(priv->rst[nr]);
-        if (ret)
-            goto out_rst_assert;
-    }
+    ret = devm_add_action_or_reset(dev, uniphier_clk_disable, priv);
+    if (ret)
+        return ret;
+
+    ret = reset_control_bulk_deassert(priv->data->nrsts, priv->rst);
+    if (ret)
+        return ret;
+
+    ret = devm_add_action_or_reset(dev, uniphier_rst_assert, priv);
+    if (ret)
+        return ret;
 
     spin_lock_init(&priv->rdata.lock);
     priv->rdata.rcdev.owner = THIS_MODULE;
···
 
     platform_set_drvdata(pdev, priv);
 
-    ret = devm_reset_controller_register(dev, &priv->rdata.rcdev);
-    if (ret)
-        goto out_rst_assert;
-
-    return 0;
-
-out_rst_assert:
-    while (nr--)
-        reset_control_assert(priv->rst[nr]);
-
-    clk_bulk_disable_unprepare(priv->data->nclks, priv->clk);
-
-    return ret;
-}
-
-static int uniphier_glue_reset_remove(struct platform_device *pdev)
-{
-    struct uniphier_glue_reset_priv *priv = platform_get_drvdata(pdev);
-    int i;
-
-    for (i = 0; i < priv->data->nrsts; i++)
-        reset_control_assert(priv->rst[i]);
-
-    clk_bulk_disable_unprepare(priv->data->nclks, priv->clk);
-
-    return 0;
+    return devm_reset_controller_register(dev, &priv->rdata.rcdev);
 }
 
 static const char * const uniphier_pro4_clock_reset_names[] = {
···
 
 static struct platform_driver uniphier_glue_reset_driver = {
     .probe = uniphier_glue_reset_probe,
-    .remove = uniphier_glue_reset_remove,
     .driver = {
         .name = "uniphier-glue-reset",
         .of_match_table = uniphier_glue_reset_match,
+2 -2
drivers/soc/Makefile
···
 #
 
 obj-$(CONFIG_ARCH_ACTIONS)    += actions/
-obj-$(CONFIG_ARCH_APPLE)      += apple/
+obj-y                         += apple/
 obj-y                         += aspeed/
 obj-$(CONFIG_ARCH_AT91)       += atmel/
 obj-y                         += bcm/
···
 obj-y                         += amlogic/
 obj-y                         += qcom/
 obj-y                         += renesas/
-obj-$(CONFIG_ARCH_ROCKCHIP)   += rockchip/
+obj-y                         += rockchip/
 obj-$(CONFIG_SOC_SAMSUNG)     += samsung/
 obj-$(CONFIG_SOC_SIFIVE)      += sifive/
 obj-y                         += sunxi/
+24
drivers/soc/apple/Kconfig
···
       controls for SoC devices. This driver manages them through the
       generic power domain framework, and also provides reset support.
 
+config APPLE_RTKIT
+    tristate "Apple RTKit co-processor IPC protocol"
+    depends on MAILBOX
+    depends on ARCH_APPLE || COMPILE_TEST
+    default ARCH_APPLE
+    help
+      Apple SoCs such as the M1 come with various co-processors running
+      their proprietary RTKit operating system. This option enables support
+      for the protocol library used to communicate with those. It is used
+      by various client drivers.
+
+      Say 'y' here if you have an Apple SoC.
+
+config APPLE_SART
+    tristate "Apple SART DMA address filter"
+    depends on ARCH_APPLE || COMPILE_TEST
+    default ARCH_APPLE
+    help
+      Apple SART is a simple DMA address filter used on Apple SoCs such
+      as the M1. It is usually required for the NVMe coprocessor which does
+      not use a proper IOMMU.
+
+      Say 'y' here if you have an Apple SoC.
+
 endmenu
 
 endif
+6
drivers/soc/apple/Makefile
···
 # SPDX-License-Identifier: GPL-2.0-only
 obj-$(CONFIG_APPLE_PMGR_PWRSTATE) += apple-pmgr-pwrstate.o
+
+obj-$(CONFIG_APPLE_RTKIT) += apple-rtkit.o
+apple-rtkit-y = rtkit.o rtkit-crashlog.o
+
+obj-$(CONFIG_APPLE_SART) += apple-sart.o
+apple-sart-y = sart.o
+154
drivers/soc/apple/rtkit-crashlog.c
···
+// SPDX-License-Identifier: GPL-2.0-only OR MIT
+/*
+ * Apple RTKit IPC library
+ * Copyright (C) The Asahi Linux Contributors
+ */
+#include "rtkit-internal.h"
+
+#define FOURCC(a, b, c, d) \
+    (((u32)(a) << 24) | ((u32)(b) << 16) | ((u32)(c) << 8) | ((u32)(d)))
+
+#define APPLE_RTKIT_CRASHLOG_HEADER FOURCC('C', 'L', 'H', 'E')
+#define APPLE_RTKIT_CRASHLOG_STR FOURCC('C', 's', 't', 'r')
+#define APPLE_RTKIT_CRASHLOG_VERSION FOURCC('C', 'v', 'e', 'r')
+#define APPLE_RTKIT_CRASHLOG_MBOX FOURCC('C', 'm', 'b', 'x')
+#define APPLE_RTKIT_CRASHLOG_TIME FOURCC('C', 't', 'i', 'm')
+
+struct apple_rtkit_crashlog_header {
+    u32 fourcc;
+    u32 version;
+    u32 size;
+    u32 flags;
+    u8 _unk[16];
+};
+static_assert(sizeof(struct apple_rtkit_crashlog_header) == 0x20);
+
+struct apple_rtkit_crashlog_mbox_entry {
+    u64 msg0;
+    u64 msg1;
+    u32 timestamp;
+    u8 _unk[4];
+};
+static_assert(sizeof(struct apple_rtkit_crashlog_mbox_entry) == 0x18);
+
+static void apple_rtkit_crashlog_dump_str(struct apple_rtkit *rtk, u8 *bfr,
+                                          size_t size)
+{
+    u32 idx;
+    u8 *ptr, *end;
+
+    memcpy(&idx, bfr, 4);
+
+    ptr = bfr + 4;
+    end = bfr + size;
+    while (ptr < end) {
+        u8 *newline = memchr(ptr, '\n', end - ptr);
+
+        if (newline) {
+            u8 tmp = *newline;
+            *newline = '\0';
+            dev_warn(rtk->dev, "RTKit: Message (id=%x): %s\n", idx,
+                     ptr);
+            *newline = tmp;
+            ptr = newline + 1;
+        } else {
+            dev_warn(rtk->dev, "RTKit: Message (id=%x): %s", idx,
+                     ptr);
+            break;
+        }
+    }
+}
+
+static void apple_rtkit_crashlog_dump_version(struct apple_rtkit *rtk, u8 *bfr,
+                                              size_t size)
+{
+    dev_warn(rtk->dev, "RTKit: Version: %s", bfr + 16);
+}
+
+static void apple_rtkit_crashlog_dump_time(struct apple_rtkit *rtk, u8 *bfr,
+                                           size_t size)
+{
+    u64 crash_time;
+
+    memcpy(&crash_time, bfr, 8);
+    dev_warn(rtk->dev, "RTKit: Crash time: %lld", crash_time);
+}
+
+static void apple_rtkit_crashlog_dump_mailbox(struct apple_rtkit *rtk, u8 *bfr,
+                                              size_t size)
+{
+    u32 type, index, i;
+    size_t n_messages;
+    struct apple_rtkit_crashlog_mbox_entry entry;
+
+    memcpy(&type, bfr + 16, 4);
+    memcpy(&index, bfr + 24, 4);
+    n_messages = (size - 28) / sizeof(entry);
+
+    dev_warn(rtk->dev, "RTKit: Mailbox history (type = %d, index = %d)",
+             type, index);
+    for (i = 0; i < n_messages; ++i) {
+        memcpy(&entry, bfr + 28 + i * sizeof(entry), sizeof(entry));
+        dev_warn(rtk->dev, "RTKit: #%03d@%08x: %016llx %016llx", i,
+                 entry.timestamp, entry.msg0, entry.msg1);
+    }
+}
+
+void apple_rtkit_crashlog_dump(struct apple_rtkit *rtk, u8 *bfr, size_t size)
+{
+    size_t offset;
+    u32 section_fourcc, section_size;
+    struct apple_rtkit_crashlog_header header;
+
+    memcpy(&header, bfr, sizeof(header));
+    if (header.fourcc != APPLE_RTKIT_CRASHLOG_HEADER) {
+        dev_warn(rtk->dev, "RTKit: Expected crashlog header but got %x",
+                 header.fourcc);
+        return;
+    }
+
+    if (header.size > size) {
+        dev_warn(rtk->dev, "RTKit: Crashlog size (%x) is too large",
+                 header.size);
+        return;
+    }
+
+    size = header.size;
+    offset = sizeof(header);
+
+    while (offset < size) {
+        memcpy(&section_fourcc, bfr + offset, 4);
+        memcpy(&section_size, bfr + offset + 12, 4);
+
+        switch (section_fourcc) {
+        case APPLE_RTKIT_CRASHLOG_HEADER:
+            dev_dbg(rtk->dev, "RTKit: End of crashlog reached");
+            return;
+        case APPLE_RTKIT_CRASHLOG_STR:
+            apple_rtkit_crashlog_dump_str(rtk, bfr + offset + 16,
+                                          section_size);
+            break;
+        case APPLE_RTKIT_CRASHLOG_VERSION:
+            apple_rtkit_crashlog_dump_version(
+                rtk, bfr + offset + 16, section_size);
+            break;
+        case APPLE_RTKIT_CRASHLOG_MBOX:
+            apple_rtkit_crashlog_dump_mailbox(
+                rtk, bfr + offset + 16, section_size);
+            break;
+        case APPLE_RTKIT_CRASHLOG_TIME:
+            apple_rtkit_crashlog_dump_time(rtk, bfr + offset + 16,
+                                           section_size);
+            break;
+        default:
+            dev_warn(rtk->dev,
+                     "RTKit: Unknown crashlog section: %x",
+                     section_fourcc);
+        }
+
+        offset += section_size;
+    }
+
+    dev_warn(rtk->dev,
+             "RTKit: End of crashlog reached but no footer present");
+}
+62
drivers/soc/apple/rtkit-internal.h
···
+/* SPDX-License-Identifier: GPL-2.0-only OR MIT */
+/*
+ * Apple RTKit IPC library
+ * Copyright (C) The Asahi Linux Contributors
+ */
+
+#ifndef _APPLE_RTKIT_INTERAL_H
+#define _APPLE_RTKIT_INTERAL_H
+
+#include <linux/apple-mailbox.h>
+#include <linux/bitfield.h>
+#include <linux/bitmap.h>
+#include <linux/completion.h>
+#include <linux/dma-mapping.h>
+#include <linux/io.h>
+#include <linux/kernel.h>
+#include <linux/mailbox_client.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/soc/apple/rtkit.h>
+#include <linux/workqueue.h>
+
+#define APPLE_RTKIT_APP_ENDPOINT_START 0x20
+#define APPLE_RTKIT_MAX_ENDPOINTS 0x100
+
+struct apple_rtkit {
+    void *cookie;
+    const struct apple_rtkit_ops *ops;
+    struct device *dev;
+
+    const char *mbox_name;
+    int mbox_idx;
+    struct mbox_client mbox_cl;
+    struct mbox_chan *mbox_chan;
+
+    struct completion epmap_completion;
+    struct completion iop_pwr_ack_completion;
+    struct completion ap_pwr_ack_completion;
+
+    int boot_result;
+    int version;
+
+    unsigned int iop_power_state;
+    unsigned int ap_power_state;
+    bool crashed;
+
+    DECLARE_BITMAP(endpoints, APPLE_RTKIT_MAX_ENDPOINTS);
+
+    struct apple_rtkit_shmem ioreport_buffer;
+    struct apple_rtkit_shmem crashlog_buffer;
+
+    struct apple_rtkit_shmem syslog_buffer;
+    char *syslog_msg_buffer;
+    size_t syslog_n_entries;
+    size_t syslog_msg_size;
+
+    struct workqueue_struct *wq;
+};
+
+void apple_rtkit_crashlog_dump(struct apple_rtkit *rtk, u8 *bfr, size_t size);
+
+#endif
+958
drivers/soc/apple/rtkit.c
···
+// SPDX-License-Identifier: GPL-2.0-only OR MIT
+/*
+ * Apple RTKit IPC library
+ * Copyright (C) The Asahi Linux Contributors
+ */
+
+#include "rtkit-internal.h"
+
+enum {
+    APPLE_RTKIT_PWR_STATE_OFF = 0x00, /* power off, cannot be restarted */
+    APPLE_RTKIT_PWR_STATE_SLEEP = 0x01, /* sleeping, can be restarted */
+    APPLE_RTKIT_PWR_STATE_QUIESCED = 0x10, /* running but no communication */
+    APPLE_RTKIT_PWR_STATE_ON = 0x20, /* normal operating state */
+};
+
+enum {
+    APPLE_RTKIT_EP_MGMT = 0,
+    APPLE_RTKIT_EP_CRASHLOG = 1,
+    APPLE_RTKIT_EP_SYSLOG = 2,
+    APPLE_RTKIT_EP_DEBUG = 3,
+    APPLE_RTKIT_EP_IOREPORT = 4,
+    APPLE_RTKIT_EP_OSLOG = 8,
+};
+
+#define APPLE_RTKIT_MGMT_TYPE GENMASK_ULL(59, 52)
+
+enum {
+    APPLE_RTKIT_MGMT_HELLO = 1,
+    APPLE_RTKIT_MGMT_HELLO_REPLY = 2,
+    APPLE_RTKIT_MGMT_STARTEP = 5,
+    APPLE_RTKIT_MGMT_SET_IOP_PWR_STATE = 6,
+    APPLE_RTKIT_MGMT_SET_IOP_PWR_STATE_ACK = 7,
+    APPLE_RTKIT_MGMT_EPMAP = 8,
+    APPLE_RTKIT_MGMT_EPMAP_REPLY = 8,
+    APPLE_RTKIT_MGMT_SET_AP_PWR_STATE = 0xb,
+    APPLE_RTKIT_MGMT_SET_AP_PWR_STATE_ACK = 0xb,
+};
+
+#define APPLE_RTKIT_MGMT_HELLO_MINVER GENMASK_ULL(15, 0)
+#define APPLE_RTKIT_MGMT_HELLO_MAXVER GENMASK_ULL(31, 16)
+
+#define APPLE_RTKIT_MGMT_EPMAP_LAST BIT_ULL(51)
+#define APPLE_RTKIT_MGMT_EPMAP_BASE GENMASK_ULL(34, 32)
+#define APPLE_RTKIT_MGMT_EPMAP_BITMAP GENMASK_ULL(31, 0)
+
+#define APPLE_RTKIT_MGMT_EPMAP_REPLY_MORE BIT_ULL(0)
+
+#define APPLE_RTKIT_MGMT_STARTEP_EP GENMASK_ULL(39, 32)
+#define APPLE_RTKIT_MGMT_STARTEP_FLAG BIT_ULL(1)
+
+#define APPLE_RTKIT_MGMT_PWR_STATE GENMASK_ULL(15, 0)
+
+#define APPLE_RTKIT_CRASHLOG_CRASH 1
+
+#define APPLE_RTKIT_BUFFER_REQUEST 1
+#define APPLE_RTKIT_BUFFER_REQUEST_SIZE GENMASK_ULL(51, 44)
+#define APPLE_RTKIT_BUFFER_REQUEST_IOVA GENMASK_ULL(41, 0)
+
+#define APPLE_RTKIT_SYSLOG_TYPE GENMASK_ULL(59, 52)
+
+#define APPLE_RTKIT_SYSLOG_LOG 5
+
+#define APPLE_RTKIT_SYSLOG_INIT 8
+#define APPLE_RTKIT_SYSLOG_N_ENTRIES GENMASK_ULL(7, 0)
+#define APPLE_RTKIT_SYSLOG_MSG_SIZE GENMASK_ULL(31, 24)
+
+#define APPLE_RTKIT_OSLOG_TYPE GENMASK_ULL(63, 56)
+#define APPLE_RTKIT_OSLOG_INIT 1
+#define APPLE_RTKIT_OSLOG_ACK 3
+
+#define APPLE_RTKIT_MIN_SUPPORTED_VERSION 11
+#define APPLE_RTKIT_MAX_SUPPORTED_VERSION 12
+
+struct apple_rtkit_msg {
+    struct completion *completion;
+    struct apple_mbox_msg mbox_msg;
+};
+
+struct apple_rtkit_rx_work {
+    struct apple_rtkit *rtk;
+    u8 ep;
+    u64 msg;
+    struct work_struct work;
+};
+
+bool apple_rtkit_is_running(struct apple_rtkit *rtk)
+{
+    if (rtk->crashed)
+        return false;
+    if ((rtk->iop_power_state & 0xff) != APPLE_RTKIT_PWR_STATE_ON)
+        return false;
+    if ((rtk->ap_power_state & 0xff) != APPLE_RTKIT_PWR_STATE_ON)
+        return false;
+    return true;
+}
+EXPORT_SYMBOL_GPL(apple_rtkit_is_running);
+
+bool apple_rtkit_is_crashed(struct apple_rtkit *rtk)
+{
+    return rtk->crashed;
+}
+EXPORT_SYMBOL_GPL(apple_rtkit_is_crashed);
+
+static void apple_rtkit_management_send(struct apple_rtkit *rtk, u8 type,
+                                        u64 msg)
+{
+    msg &= ~APPLE_RTKIT_MGMT_TYPE;
+    msg |= FIELD_PREP(APPLE_RTKIT_MGMT_TYPE, type);
+    apple_rtkit_send_message(rtk, APPLE_RTKIT_EP_MGMT, msg, NULL, false);
+}
+
+static void apple_rtkit_management_rx_hello(struct apple_rtkit *rtk, u64 msg)
+{
+    u64 reply;
+
+    int min_ver = FIELD_GET(APPLE_RTKIT_MGMT_HELLO_MINVER, msg);
+    int max_ver = FIELD_GET(APPLE_RTKIT_MGMT_HELLO_MAXVER, msg);
+    int want_ver = min(APPLE_RTKIT_MAX_SUPPORTED_VERSION, max_ver);
+
+    dev_dbg(rtk->dev, "RTKit: Min ver %d, max ver %d\n", min_ver, max_ver);
+
+    if (min_ver > APPLE_RTKIT_MAX_SUPPORTED_VERSION) {
+        dev_err(rtk->dev, "RTKit: Firmware min version %d is too new\n",
+                min_ver);
+        goto abort_boot;
+    }
+
+    if (max_ver < APPLE_RTKIT_MIN_SUPPORTED_VERSION) {
+        dev_err(rtk->dev, "RTKit: Firmware max version %d is too old\n",
+                max_ver);
+        goto abort_boot;
+    }
+
+    dev_info(rtk->dev, "RTKit: Initializing (protocol version %d)\n",
+             want_ver);
+    rtk->version = want_ver;
+
+    reply = FIELD_PREP(APPLE_RTKIT_MGMT_HELLO_MINVER, want_ver);
+    reply |= FIELD_PREP(APPLE_RTKIT_MGMT_HELLO_MAXVER, want_ver);
+    apple_rtkit_management_send(rtk, APPLE_RTKIT_MGMT_HELLO_REPLY, reply);
+
+    return;
+
+abort_boot:
+    rtk->boot_result = -EINVAL;
+    complete_all(&rtk->epmap_completion);
+}
+
+static void apple_rtkit_management_rx_epmap(struct apple_rtkit *rtk, u64 msg)
+{
+    int i, ep;
+    u64 reply;
+    unsigned long bitmap = FIELD_GET(APPLE_RTKIT_MGMT_EPMAP_BITMAP, msg);
+    u32 base = FIELD_GET(APPLE_RTKIT_MGMT_EPMAP_BASE, msg);
+
+    dev_dbg(rtk->dev,
+            "RTKit: received endpoint bitmap 0x%lx with base 0x%x\n",
+            bitmap, base);
+
+    for_each_set_bit(i, &bitmap, 32) {
+        ep = 32 * base + i;
+        dev_dbg(rtk->dev, "RTKit: Discovered endpoint 0x%02x\n", ep);
+        set_bit(ep, rtk->endpoints);
+    }
+
+    reply = FIELD_PREP(APPLE_RTKIT_MGMT_EPMAP_BASE, base);
+    if (msg & APPLE_RTKIT_MGMT_EPMAP_LAST)
+        reply |= APPLE_RTKIT_MGMT_EPMAP_LAST;
+    else
+        reply |= APPLE_RTKIT_MGMT_EPMAP_REPLY_MORE;
+
+    apple_rtkit_management_send(rtk, APPLE_RTKIT_MGMT_EPMAP_REPLY, reply);
+
+    if (!(msg & APPLE_RTKIT_MGMT_EPMAP_LAST))
+        return;
+
+    for_each_set_bit(ep, rtk->endpoints, APPLE_RTKIT_APP_ENDPOINT_START) {
+        switch (ep) {
+        /* the management endpoint is started by default */
+        case APPLE_RTKIT_EP_MGMT:
+            break;
+
+        /* without starting these RTKit refuses to boot */
+        case APPLE_RTKIT_EP_SYSLOG:
+        case APPLE_RTKIT_EP_CRASHLOG:
+        case APPLE_RTKIT_EP_DEBUG:
+        case APPLE_RTKIT_EP_IOREPORT:
+        case APPLE_RTKIT_EP_OSLOG:
+            dev_dbg(rtk->dev,
+                    "RTKit: Starting system endpoint 0x%02x\n", ep);
+            apple_rtkit_start_ep(rtk, ep);
+            break;
+
+        default:
+            dev_warn(rtk->dev,
+                     "RTKit: Unknown system endpoint: 0x%02x\n",
+                     ep);
+        }
+    }
+
+    rtk->boot_result = 0;
+    complete_all(&rtk->epmap_completion);
+}
+
+static void apple_rtkit_management_rx_iop_pwr_ack(struct apple_rtkit *rtk,
+                                                  u64 msg)
+{
+    unsigned int new_state = FIELD_GET(APPLE_RTKIT_MGMT_PWR_STATE, msg);
+
+    dev_dbg(rtk->dev, "RTKit: IOP power state transition: 0x%x -> 0x%x\n",
+            rtk->iop_power_state, new_state);
+    rtk->iop_power_state = new_state;
+
+    complete_all(&rtk->iop_pwr_ack_completion);
+}
+
+static void apple_rtkit_management_rx_ap_pwr_ack(struct apple_rtkit *rtk,
+                                                 u64 msg)
+{
+    unsigned int new_state = FIELD_GET(APPLE_RTKIT_MGMT_PWR_STATE, msg);
+
+    dev_dbg(rtk->dev, "RTKit: AP power state transition: 0x%x -> 0x%x\n",
+            rtk->ap_power_state, new_state);
+    rtk->ap_power_state = new_state;
+
+    complete_all(&rtk->ap_pwr_ack_completion);
+}
+
+static void apple_rtkit_management_rx(struct apple_rtkit *rtk, u64 msg)
+{
+    u8 type = FIELD_GET(APPLE_RTKIT_MGMT_TYPE, msg);
+
+    switch (type) {
+    case APPLE_RTKIT_MGMT_HELLO:
+        apple_rtkit_management_rx_hello(rtk, msg);
+        break;
+    case APPLE_RTKIT_MGMT_EPMAP:
+        apple_rtkit_management_rx_epmap(rtk, msg);
+        break;
+    case APPLE_RTKIT_MGMT_SET_IOP_PWR_STATE_ACK:
+        apple_rtkit_management_rx_iop_pwr_ack(rtk, msg);
+        break;
+    case APPLE_RTKIT_MGMT_SET_AP_PWR_STATE_ACK:
+        apple_rtkit_management_rx_ap_pwr_ack(rtk, msg);
+        break;
+    default:
+        dev_warn(
+            rtk->dev,
+            "RTKit: unknown management message: 0x%llx (type: 0x%02x)\n",
+            msg, type);
+    }
+}
+
+static int apple_rtkit_common_rx_get_buffer(struct apple_rtkit *rtk,
+                                            struct apple_rtkit_shmem *buffer,
+                                            u8 ep, u64 msg)
+{
+    size_t n_4kpages = FIELD_GET(APPLE_RTKIT_BUFFER_REQUEST_SIZE, msg);
+    u64 reply;
+    int err;
+
+    buffer->buffer = NULL;
+    buffer->iomem = NULL;
+    buffer->is_mapped = false;
+    buffer->iova = FIELD_GET(APPLE_RTKIT_BUFFER_REQUEST_IOVA, msg);
+    buffer->size = n_4kpages << 12;
+
+    dev_dbg(rtk->dev, "RTKit: buffer request for 0x%zx bytes at %pad\n",
+            buffer->size, &buffer->iova);
+
+    if (buffer->iova &&
+        (!rtk->ops->shmem_setup || !rtk->ops->shmem_destroy)) {
+        err = -EINVAL;
+        goto error;
+    }
+
+    if (rtk->ops->shmem_setup) {
+        err = rtk->ops->shmem_setup(rtk->cookie, buffer);
+        if (err)
+            goto error;
+    } else {
+        buffer->buffer = dma_alloc_coherent(rtk->dev, buffer->size,
+                                            &buffer->iova, GFP_KERNEL);
+        if (!buffer->buffer) {
+            err = -ENOMEM;
+            goto error;
+        }
+    }
+
+    if (!buffer->is_mapped) {
+        reply = FIELD_PREP(APPLE_RTKIT_SYSLOG_TYPE,
+                           APPLE_RTKIT_BUFFER_REQUEST);
+        reply |= FIELD_PREP(APPLE_RTKIT_BUFFER_REQUEST_SIZE, n_4kpages);
+        reply |= FIELD_PREP(APPLE_RTKIT_BUFFER_REQUEST_IOVA,
+                            buffer->iova);
+        apple_rtkit_send_message(rtk, ep, reply, NULL, false);
+    }
+
+    return 0;
+
+error:
+    buffer->buffer = NULL;
+    buffer->iomem = NULL;
+    buffer->iova = 0;
+    buffer->size = 0;
+    buffer->is_mapped = false;
+    return err;
+}
+
+static void apple_rtkit_free_buffer(struct apple_rtkit *rtk,
+                                    struct apple_rtkit_shmem *bfr)
+{
+    if (bfr->size == 0)
+        return;
+
+    if (rtk->ops->shmem_destroy)
+        rtk->ops->shmem_destroy(rtk->cookie, bfr);
+    else if (bfr->buffer)
+        dma_free_coherent(rtk->dev, bfr->size, bfr->buffer, bfr->iova);
+
+    bfr->buffer = NULL;
+    bfr->iomem = NULL;
+    bfr->iova = 0;
+    bfr->size = 0;
+    bfr->is_mapped = false;
+}
+
+static void apple_rtkit_memcpy(struct apple_rtkit *rtk, void *dst,
+                               struct apple_rtkit_shmem *bfr, size_t offset,
+                               size_t len)
+{
+    if (bfr->iomem)
+        memcpy_fromio(dst, bfr->iomem + offset, len);
+    else
+        memcpy(dst, bfr->buffer + offset, len);
+}
+
+static void apple_rtkit_crashlog_rx(struct apple_rtkit *rtk, u64 msg)
+{
+    u8 type = FIELD_GET(APPLE_RTKIT_SYSLOG_TYPE, msg);
+    u8 *bfr;
+
+    if (type != APPLE_RTKIT_CRASHLOG_CRASH) {
+        dev_warn(rtk->dev, "RTKit: Unknown crashlog message: %llx\n",
+                 msg);
+        return;
+    }
+
+    if (!rtk->crashlog_buffer.size) {
+        apple_rtkit_common_rx_get_buffer(rtk, &rtk->crashlog_buffer,
+                                         APPLE_RTKIT_EP_CRASHLOG, msg);
+        return;
+    }
+
+    dev_err(rtk->dev, "RTKit: co-processor has crashed\n");
+
+    /*
+     * create a shadow copy here to make sure the co-processor isn't able
+     * to change the log while we're dumping it. this also ensures
+     * the buffer is in normal memory and not iomem for e.g. the SMC
+     */
+    bfr = kzalloc(rtk->crashlog_buffer.size, GFP_KERNEL);
+    if (bfr) {
+        apple_rtkit_memcpy(rtk, bfr, &rtk->crashlog_buffer, 0,
+                           rtk->crashlog_buffer.size);
+        apple_rtkit_crashlog_dump(rtk, bfr, rtk->crashlog_buffer.size);
+        kfree(bfr);
+    } else {
+        dev_err(rtk->dev,
+                "RTKit: Couldn't allocate crashlog shadow buffer\n");
+    }
+
+    rtk->crashed = true;
+    if (rtk->ops->crashed)
+        rtk->ops->crashed(rtk->cookie);
+}
+
+static void apple_rtkit_ioreport_rx(struct apple_rtkit *rtk, u64 msg)
+{
+    u8 type = FIELD_GET(APPLE_RTKIT_SYSLOG_TYPE, msg);
+
+    switch (type) {
+    case APPLE_RTKIT_BUFFER_REQUEST:
+        apple_rtkit_common_rx_get_buffer(rtk, &rtk->ioreport_buffer,
+                                         APPLE_RTKIT_EP_IOREPORT, msg);
+        break;
+    /* unknown, must be ACKed or the co-processor will hang */
+    case 0x8:
+    case 0xc:
+        apple_rtkit_send_message(rtk, APPLE_RTKIT_EP_IOREPORT, msg,
+                                 NULL, false);
+        break;
+    default:
+        dev_warn(rtk->dev, "RTKit: Unknown ioreport message: %llx\n",
+                 msg);
+    }
+}
+
+static void apple_rtkit_syslog_rx_init(struct apple_rtkit *rtk, u64 msg)
+{
+    rtk->syslog_n_entries = FIELD_GET(APPLE_RTKIT_SYSLOG_N_ENTRIES, msg);
+    rtk->syslog_msg_size = FIELD_GET(APPLE_RTKIT_SYSLOG_MSG_SIZE, msg);
+
+    rtk->syslog_msg_buffer = kzalloc(rtk->syslog_msg_size, GFP_KERNEL);
+
+    dev_dbg(rtk->dev,
+            "RTKit: syslog initialized: entries: %zd, msg_size: %zd\n",
+            rtk->syslog_n_entries, rtk->syslog_msg_size);
+}
+
+static void apple_rtkit_syslog_rx_log(struct apple_rtkit *rtk, u64 msg)
+{
+    u8 idx = msg & 0xff;
+    char log_context[24];
+    size_t entry_size = 0x20 + rtk->syslog_msg_size;
+
+    if (!rtk->syslog_msg_buffer) {
+        dev_warn(
+            rtk->dev,
+            "RTKit: received syslog message but no syslog_msg_buffer\n");
+        goto done;
+    }
+    if (!rtk->syslog_buffer.size) {
+        dev_warn(
+            rtk->dev,
+            "RTKit: received syslog message but syslog_buffer.size is zero\n");
+        goto done;
+    }
+    if (!rtk->syslog_buffer.buffer && !rtk->syslog_buffer.iomem) {
+        dev_warn(
+            rtk->dev,
+            "RTKit: received syslog message but no syslog_buffer.buffer or syslog_buffer.iomem\n");
+        goto done;
+    }
+    if (idx > rtk->syslog_n_entries) {
+        dev_warn(rtk->dev, "RTKit: syslog index %d out of range\n",
+                 idx);
+        goto done;
+    }
+
+    apple_rtkit_memcpy(rtk, log_context, &rtk->syslog_buffer,
+                       idx * entry_size + 8, sizeof(log_context));
+    apple_rtkit_memcpy(rtk, rtk->syslog_msg_buffer, &rtk->syslog_buffer,
+                       idx * entry_size + 8 + sizeof(log_context),
+                       rtk->syslog_msg_size);
+
+    log_context[sizeof(log_context) - 1] = 0;
+    rtk->syslog_msg_buffer[rtk->syslog_msg_size - 1] = 0;
+    dev_info(rtk->dev, "RTKit: syslog message: %s: %s\n", log_context,
+             rtk->syslog_msg_buffer);
+
+done:
+    apple_rtkit_send_message(rtk, APPLE_RTKIT_EP_SYSLOG, msg, NULL, false);
+}
+
+static void apple_rtkit_syslog_rx(struct apple_rtkit *rtk, u64 msg)
+{
+    u8 type = FIELD_GET(APPLE_RTKIT_SYSLOG_TYPE, msg);
+
+    switch (type) {
+    case APPLE_RTKIT_BUFFER_REQUEST:
+        apple_rtkit_common_rx_get_buffer(rtk, &rtk->syslog_buffer,
+                                         APPLE_RTKIT_EP_SYSLOG, msg);
+        break;
+    case APPLE_RTKIT_SYSLOG_INIT:
+        apple_rtkit_syslog_rx_init(rtk, msg);
+        break;
+    case APPLE_RTKIT_SYSLOG_LOG:
+        apple_rtkit_syslog_rx_log(rtk, msg);
+        break;
+    default:
+        dev_warn(rtk->dev, "RTKit: Unknown syslog message: %llx\n",
+                 msg);
+    }
+}
+
+static void apple_rtkit_oslog_rx_init(struct apple_rtkit *rtk, u64 msg)
+{
+    u64 ack;
+
+    dev_dbg(rtk->dev, "RTKit: oslog init: msg: 0x%llx\n", msg);
+    ack = FIELD_PREP(APPLE_RTKIT_OSLOG_TYPE, APPLE_RTKIT_OSLOG_ACK);
+    apple_rtkit_send_message(rtk, APPLE_RTKIT_EP_OSLOG, ack, NULL, false);
+}
+
+static void apple_rtkit_oslog_rx(struct apple_rtkit *rtk, u64 msg)
+{
+    u8 type = FIELD_GET(APPLE_RTKIT_OSLOG_TYPE, msg);
+
+    switch (type) {
+    case APPLE_RTKIT_OSLOG_INIT:
+        apple_rtkit_oslog_rx_init(rtk, msg);
+        break;
+    default:
+        dev_warn(rtk->dev, "RTKit: Unknown oslog message: %llx\n", msg);
+    }
+}
+
+static void apple_rtkit_rx_work(struct work_struct *work)
+{
+    struct apple_rtkit_rx_work *rtk_work =
+        container_of(work, struct apple_rtkit_rx_work, work);
+    struct apple_rtkit *rtk = rtk_work->rtk;
+
+    switch (rtk_work->ep) {
+    case APPLE_RTKIT_EP_MGMT:
+        apple_rtkit_management_rx(rtk, rtk_work->msg);
+        break;
+    case APPLE_RTKIT_EP_CRASHLOG:
+        apple_rtkit_crashlog_rx(rtk, rtk_work->msg);
+        break;
+    case APPLE_RTKIT_EP_SYSLOG:
+        apple_rtkit_syslog_rx(rtk, rtk_work->msg);
+        break;
+    case APPLE_RTKIT_EP_IOREPORT:
+        apple_rtkit_ioreport_rx(rtk, rtk_work->msg);
+        break;
+    case APPLE_RTKIT_EP_OSLOG:
+        apple_rtkit_oslog_rx(rtk, rtk_work->msg);
+        break;
+    case APPLE_RTKIT_APP_ENDPOINT_START ... 0xff:
+        if (rtk->ops->recv_message)
+            rtk->ops->recv_message(rtk->cookie, rtk_work->ep,
+                                   rtk_work->msg);
+        else
+            dev_warn(
+                rtk->dev,
+                "Received unexpected message to EP%02d: %llx\n",
+                rtk_work->ep, rtk_work->msg);
+        break;
+    default:
+        dev_warn(rtk->dev,
+                 "RTKit: message to unknown endpoint %02x: %llx\n",
+                 rtk_work->ep, rtk_work->msg);
+    }
+
+    kfree(rtk_work);
+}
+
+static void apple_rtkit_rx(struct mbox_client *cl, void *mssg)
+{
+    struct apple_rtkit *rtk = container_of(cl, struct apple_rtkit, mbox_cl);
+    struct apple_mbox_msg *msg = mssg;
+    struct apple_rtkit_rx_work *work;
+    u8 ep = msg->msg1;
+
+    /*
+     * The message was read from a MMIO FIFO and we have to make
+     * sure all reads from buffers sent with that message happen
+     * afterwards.
+     */
+    dma_rmb();
+
+    if (!test_bit(ep, rtk->endpoints))
+        dev_warn(rtk->dev,
+                 "RTKit: Message to undiscovered endpoint 0x%02x\n",
+                 ep);
+
+    if (ep >= APPLE_RTKIT_APP_ENDPOINT_START &&
+        rtk->ops->recv_message_early &&
+        rtk->ops->recv_message_early(rtk->cookie, ep, msg->msg0))
+        return;
+
+    work = kzalloc(sizeof(*work), GFP_ATOMIC);
+    if (!work)
+        return;
+
+    work->rtk = rtk;
+    work->ep = ep;
+    work->msg = msg->msg0;
+    INIT_WORK(&work->work, apple_rtkit_rx_work);
+    queue_work(rtk->wq, &work->work);
+}
+
+static void apple_rtkit_tx_done(struct mbox_client *cl, void *mssg, int r)
+{
+    struct apple_rtkit_msg *msg =
+        container_of(mssg, struct apple_rtkit_msg, mbox_msg);
+
+    if (r == -ETIME)
+        return;
+
+    if (msg->completion)
+        complete(msg->completion);
+    kfree(msg);
+}
+
+int apple_rtkit_send_message(struct apple_rtkit *rtk, u8 ep, u64 message,
+                             struct completion *completion, bool atomic)
+{
+    struct apple_rtkit_msg *msg;
+    int ret;
+    gfp_t flags;
+
+    if (rtk->crashed)
+        return -EINVAL;
+    if (ep >= APPLE_RTKIT_APP_ENDPOINT_START &&
+        !apple_rtkit_is_running(rtk))
+        return -EINVAL;
+
+    if (atomic)
+        flags = GFP_ATOMIC;
+    else
+        flags = GFP_KERNEL;
+
+    msg = kzalloc(sizeof(*msg), flags);
+    if (!msg)
+        return -ENOMEM;
+
+    msg->mbox_msg.msg0 = message;
+    msg->mbox_msg.msg1 = ep;
+    msg->completion = completion;
+
+    /*
+     * The message will be sent with a MMIO write. We need the barrier
+     * here to ensure any previous writes to buffers are visible to the
+     * device before that MMIO write happens.
+     */
+    dma_wmb();
+
+    ret = mbox_send_message(rtk->mbox_chan, &msg->mbox_msg);
+    if (ret < 0) {
+        kfree(msg);
+        return ret;
+    }
+
+    return 0;
+}
+EXPORT_SYMBOL_GPL(apple_rtkit_send_message);
+
+int apple_rtkit_send_message_wait(struct apple_rtkit *rtk, u8 ep, u64 message,
+                                  unsigned long timeout, bool atomic)
+{
+    DECLARE_COMPLETION_ONSTACK(completion);
+    int ret;
+    long t;
+
+    ret = apple_rtkit_send_message(rtk, ep, message, &completion, atomic);
+    if (ret < 0)
+        return ret;
+
+    if (atomic) {
+        ret = mbox_flush(rtk->mbox_chan, timeout);
+        if (ret < 0)
+            return ret;
+
+        if (try_wait_for_completion(&completion))
+            return 0;
+
+        return -ETIME;
+    } else {
+        t = wait_for_completion_interruptible_timeout(
+            &completion, msecs_to_jiffies(timeout));
+        if (t < 0)
+            return t;
+        else if (t == 0)
+            return -ETIME;
+        return 0;
+    }
+}
+EXPORT_SYMBOL_GPL(apple_rtkit_send_message_wait);
+
+int apple_rtkit_start_ep(struct apple_rtkit *rtk, u8 endpoint)
+{
+    u64 msg;
+
+    if (!test_bit(endpoint, rtk->endpoints))
+        return -EINVAL;
+    if (endpoint >= APPLE_RTKIT_APP_ENDPOINT_START &&
+        !apple_rtkit_is_running(rtk))
+        return -EINVAL;
+
+    msg = FIELD_PREP(APPLE_RTKIT_MGMT_STARTEP_EP, endpoint);
+    msg |= APPLE_RTKIT_MGMT_STARTEP_FLAG;
+    apple_rtkit_management_send(rtk, APPLE_RTKIT_MGMT_STARTEP, msg);
+
+    return 0;
+}
+EXPORT_SYMBOL_GPL(apple_rtkit_start_ep);
+
+static int apple_rtkit_request_mbox_chan(struct apple_rtkit *rtk)
+{
+    if (rtk->mbox_name)
+        rtk->mbox_chan = mbox_request_channel_byname(&rtk->mbox_cl,
+                                                     rtk->mbox_name);
+    else
+        rtk->mbox_chan =
+            mbox_request_channel(&rtk->mbox_cl, rtk->mbox_idx);
+
+    if (IS_ERR(rtk->mbox_chan))
+        return PTR_ERR(rtk->mbox_chan);
+    return 0;
+}
+
+static struct apple_rtkit *apple_rtkit_init(struct device *dev, void *cookie,
+                                            const char *mbox_name, int mbox_idx,
+                                            const struct apple_rtkit_ops *ops)
+{
+    struct apple_rtkit *rtk;
+    int ret;
+
+    if (!ops)
+        return ERR_PTR(-EINVAL);
+
+    rtk = kzalloc(sizeof(*rtk), GFP_KERNEL);
+    if (!rtk)
+        return ERR_PTR(-ENOMEM);
+
+    rtk->dev = dev;
+    rtk->cookie = cookie;
+    rtk->ops = ops;
+
+    init_completion(&rtk->epmap_completion);
+    init_completion(&rtk->iop_pwr_ack_completion);
+    init_completion(&rtk->ap_pwr_ack_completion);
+
+    bitmap_zero(rtk->endpoints, APPLE_RTKIT_MAX_ENDPOINTS);
+    set_bit(APPLE_RTKIT_EP_MGMT, rtk->endpoints);
+
+    rtk->mbox_name = mbox_name;
+    rtk->mbox_idx = mbox_idx;
+    rtk->mbox_cl.dev = dev;
+    rtk->mbox_cl.tx_block = false;
+    rtk->mbox_cl.knows_txdone = false;
+    rtk->mbox_cl.rx_callback = &apple_rtkit_rx;
+    rtk->mbox_cl.tx_done = &apple_rtkit_tx_done;
+
+    rtk->wq = alloc_ordered_workqueue("rtkit-%s", WQ_MEM_RECLAIM,
+                                      dev_name(rtk->dev));
+    if (!rtk->wq) {
+        ret = -ENOMEM;
+        goto free_rtk;
+    }
+
+    ret = apple_rtkit_request_mbox_chan(rtk);
+    if (ret)
+        goto destroy_wq;
+
+    return rtk;
+
+destroy_wq:
+    destroy_workqueue(rtk->wq);
+free_rtk:
+    kfree(rtk);
+    return ERR_PTR(ret);
+}
+
+static int apple_rtkit_wait_for_completion(struct completion *c)
+{
+    long t;
+
+    t = wait_for_completion_interruptible_timeout(c,
+                                                  msecs_to_jiffies(1000));
+    if (t < 0)
+        return t;
+    else if (t == 0)
+        return -ETIME;
+    else
+        return 0;
+}
+
+int apple_rtkit_reinit(struct apple_rtkit *rtk)
+{
+    /* make sure we don't handle any messages while reinitializing */
+    mbox_free_channel(rtk->mbox_chan);
+    flush_workqueue(rtk->wq);
+
+    apple_rtkit_free_buffer(rtk, &rtk->ioreport_buffer);
+    apple_rtkit_free_buffer(rtk, &rtk->crashlog_buffer);
+    apple_rtkit_free_buffer(rtk, &rtk->syslog_buffer);
+
+    kfree(rtk->syslog_msg_buffer);
+
+    rtk->syslog_msg_buffer = NULL;
+    rtk->syslog_n_entries = 0;
+    rtk->syslog_msg_size = 0;
+
+    bitmap_zero(rtk->endpoints, APPLE_RTKIT_MAX_ENDPOINTS);
+    set_bit(APPLE_RTKIT_EP_MGMT, rtk->endpoints);
+
+    reinit_completion(&rtk->epmap_completion);
+    reinit_completion(&rtk->iop_pwr_ack_completion);
+    reinit_completion(&rtk->ap_pwr_ack_completion);
+
+    rtk->crashed = false;
+    rtk->iop_power_state = APPLE_RTKIT_PWR_STATE_OFF;
+    rtk->ap_power_state = APPLE_RTKIT_PWR_STATE_OFF;
+
+    return apple_rtkit_request_mbox_chan(rtk);
+}
+EXPORT_SYMBOL_GPL(apple_rtkit_reinit);
+
+static int apple_rtkit_set_ap_power_state(struct apple_rtkit *rtk,
+                                          unsigned int state)
+{
+    u64 msg;
+    int ret;
+
+    reinit_completion(&rtk->ap_pwr_ack_completion);
+
+    msg = FIELD_PREP(APPLE_RTKIT_MGMT_PWR_STATE, state);
+    apple_rtkit_management_send(rtk, APPLE_RTKIT_MGMT_SET_AP_PWR_STATE,
+                                msg);
+
+    ret = apple_rtkit_wait_for_completion(&rtk->ap_pwr_ack_completion);
+    if (ret)
+        return ret;
+
+    if (rtk->ap_power_state != state)
+        return -EINVAL;
+    return 0;
+}
+
+static int apple_rtkit_set_iop_power_state(struct apple_rtkit *rtk,
+                                           unsigned int state)
+{
+    u64 msg;
+    int ret;
+
+    reinit_completion(&rtk->iop_pwr_ack_completion);
+
+    msg = FIELD_PREP(APPLE_RTKIT_MGMT_PWR_STATE, state);
+    apple_rtkit_management_send(rtk, APPLE_RTKIT_MGMT_SET_IOP_PWR_STATE,
+                                msg);
+
+    ret = apple_rtkit_wait_for_completion(&rtk->iop_pwr_ack_completion);
+    if (ret)
+        return ret;
+
+    if (rtk->iop_power_state != state)
+        return -EINVAL;
+    return 0;
+}
+
+int apple_rtkit_boot(struct apple_rtkit *rtk)
+{
+    int ret;
+
+    if (apple_rtkit_is_running(rtk))
+        return 0;
+    if (rtk->crashed)
+        return -EINVAL;
+
+    dev_dbg(rtk->dev, "RTKit: waiting for boot to finish\n");
+    ret = apple_rtkit_wait_for_completion(&rtk->epmap_completion);
+    if (ret)
+        return ret;
+    if (rtk->boot_result)
+        return rtk->boot_result;
+
+    dev_dbg(rtk->dev, "RTKit: waiting for IOP power state ACK\n");
+    ret = apple_rtkit_wait_for_completion(&rtk->iop_pwr_ack_completion);
+    if (ret)
+        return ret;
+
+    return apple_rtkit_set_ap_power_state(rtk, APPLE_RTKIT_PWR_STATE_ON);
+}
+EXPORT_SYMBOL_GPL(apple_rtkit_boot);
+
+int apple_rtkit_shutdown(struct apple_rtkit *rtk)
+{
+    int ret;
+
+    /* if OFF is used here the co-processor will not wake up again */
+    ret = apple_rtkit_set_ap_power_state(rtk,
+                                         APPLE_RTKIT_PWR_STATE_QUIESCED);
+    if (ret)
+        return ret;
+
+    ret = apple_rtkit_set_iop_power_state(rtk, APPLE_RTKIT_PWR_STATE_SLEEP);
+    if (ret)
+        return ret;
+
+    return apple_rtkit_reinit(rtk);
+}
+EXPORT_SYMBOL_GPL(apple_rtkit_shutdown);
+
+int apple_rtkit_quiesce(struct apple_rtkit *rtk)
+{
+    int ret;
+
+    ret = apple_rtkit_set_ap_power_state(rtk,
+                                         APPLE_RTKIT_PWR_STATE_QUIESCED);
+    if (ret)
+        return ret;
+
+    ret = apple_rtkit_set_iop_power_state(rtk,
+                                          APPLE_RTKIT_PWR_STATE_QUIESCED);
+    if (ret)
+        return ret;
+
+    ret = apple_rtkit_reinit(rtk);
+    if (ret)
+        return ret;
+
+    rtk->iop_power_state = APPLE_RTKIT_PWR_STATE_QUIESCED;
+    rtk->ap_power_state = APPLE_RTKIT_PWR_STATE_QUIESCED;
+    return 0;
+}
+EXPORT_SYMBOL_GPL(apple_rtkit_quiesce);
+
+int apple_rtkit_wake(struct apple_rtkit *rtk)
+{
+    u64 msg;
+
+    if (apple_rtkit_is_running(rtk))
+        return -EINVAL;
+
+    reinit_completion(&rtk->iop_pwr_ack_completion);
+
+    /*
+     * Use open-coded apple_rtkit_set_iop_power_state since apple_rtkit_boot
+     * will wait for the completion anyway.
+     */
+    msg = FIELD_PREP(APPLE_RTKIT_MGMT_PWR_STATE, APPLE_RTKIT_PWR_STATE_ON);
+    apple_rtkit_management_send(rtk, APPLE_RTKIT_MGMT_SET_IOP_PWR_STATE,
+                                msg);
+
+    return apple_rtkit_boot(rtk);
+}
+EXPORT_SYMBOL_GPL(apple_rtkit_wake);
+
+static void apple_rtkit_free(struct apple_rtkit *rtk)
+{
+    mbox_free_channel(rtk->mbox_chan);
+    destroy_workqueue(rtk->wq);
+
+    apple_rtkit_free_buffer(rtk, &rtk->ioreport_buffer);
+    apple_rtkit_free_buffer(rtk, &rtk->crashlog_buffer);
+    apple_rtkit_free_buffer(rtk, &rtk->syslog_buffer);
+
+    kfree(rtk->syslog_msg_buffer);
+    kfree(rtk);
+}
+
+struct apple_rtkit *devm_apple_rtkit_init(struct device *dev, void *cookie,
+                                          const char *mbox_name, int mbox_idx,
+                                          const struct apple_rtkit_ops *ops)
+{
+    struct apple_rtkit *rtk;
+    int ret;
+
+    rtk = apple_rtkit_init(dev, cookie, mbox_name, mbox_idx, ops);
+    if (IS_ERR(rtk))
+        return rtk;
+
+    ret = devm_add_action_or_reset(dev, (void (*)(void *))apple_rtkit_free,
+                                   rtk);
+    if (ret)
+        return
ERR_PTR(ret); 951 + 952 + return rtk; 953 + } 954 + EXPORT_SYMBOL_GPL(devm_apple_rtkit_init); 955 + 956 + MODULE_LICENSE("Dual MIT/GPL"); 957 + MODULE_AUTHOR("Sven Peter <sven@svenpeter.dev>"); 958 + MODULE_DESCRIPTION("Apple RTKit driver");
+328
drivers/soc/apple/sart.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only OR MIT 2 + /* 3 + * Apple SART device driver 4 + * Copyright (C) The Asahi Linux Contributors 5 + * 6 + * Apple SART is a simple address filter for some DMA transactions. 7 + * Regions of physical memory must be added to the SART's allow 8 + * list before any DMA can target them. Unlike a proper 9 + * IOMMU, no remapping can be done, and special support in the 10 + * consumer driver is required since not all DMA transactions of 11 + * a single device are subject to SART filtering. 12 + */ 13 + 14 + #include <linux/soc/apple/sart.h> 15 + #include <linux/atomic.h> 16 + #include <linux/bits.h> 17 + #include <linux/bitfield.h> 18 + #include <linux/device.h> 19 + #include <linux/io.h> 20 + #include <linux/module.h> 21 + #include <linux/of.h> 22 + #include <linux/of_platform.h> 23 + #include <linux/platform_device.h> 24 + #include <linux/types.h> 25 + 26 + #define APPLE_SART_MAX_ENTRIES 16 27 + 28 + /* This is probably a bitfield but the exact meaning of each bit is unknown. 
*/ 29 + #define APPLE_SART_FLAGS_ALLOW 0xff 30 + 31 + /* SARTv2 registers */ 32 + #define APPLE_SART2_CONFIG(idx) (0x00 + 4 * (idx)) 33 + #define APPLE_SART2_CONFIG_FLAGS GENMASK(31, 24) 34 + #define APPLE_SART2_CONFIG_SIZE GENMASK(23, 0) 35 + #define APPLE_SART2_CONFIG_SIZE_SHIFT 12 36 + #define APPLE_SART2_CONFIG_SIZE_MAX GENMASK(23, 0) 37 + 38 + #define APPLE_SART2_PADDR(idx) (0x40 + 4 * (idx)) 39 + #define APPLE_SART2_PADDR_SHIFT 12 40 + 41 + /* SARTv3 registers */ 42 + #define APPLE_SART3_CONFIG(idx) (0x00 + 4 * (idx)) 43 + 44 + #define APPLE_SART3_PADDR(idx) (0x40 + 4 * (idx)) 45 + #define APPLE_SART3_PADDR_SHIFT 12 46 + 47 + #define APPLE_SART3_SIZE(idx) (0x80 + 4 * (idx)) 48 + #define APPLE_SART3_SIZE_SHIFT 12 49 + #define APPLE_SART3_SIZE_MAX GENMASK(29, 0) 50 + 51 + struct apple_sart_ops { 52 + void (*get_entry)(struct apple_sart *sart, int index, u8 *flags, 53 + phys_addr_t *paddr, size_t *size); 54 + void (*set_entry)(struct apple_sart *sart, int index, u8 flags, 55 + phys_addr_t paddr_shifted, size_t size_shifted); 56 + unsigned int size_shift; 57 + unsigned int paddr_shift; 58 + size_t size_max; 59 + }; 60 + 61 + struct apple_sart { 62 + struct device *dev; 63 + void __iomem *regs; 64 + 65 + const struct apple_sart_ops *ops; 66 + 67 + unsigned long protected_entries; 68 + unsigned long used_entries; 69 + }; 70 + 71 + static void sart2_get_entry(struct apple_sart *sart, int index, u8 *flags, 72 + phys_addr_t *paddr, size_t *size) 73 + { 74 + u32 cfg = readl(sart->regs + APPLE_SART2_CONFIG(index)); 75 + phys_addr_t paddr_ = readl(sart->regs + APPLE_SART2_PADDR(index)); 76 + size_t size_ = FIELD_GET(APPLE_SART2_CONFIG_SIZE, cfg); 77 + 78 + *flags = FIELD_GET(APPLE_SART2_CONFIG_FLAGS, cfg); 79 + *size = size_ << APPLE_SART2_CONFIG_SIZE_SHIFT; 80 + *paddr = paddr_ << APPLE_SART2_PADDR_SHIFT; 81 + } 82 + 83 + static void sart2_set_entry(struct apple_sart *sart, int index, u8 flags, 84 + phys_addr_t paddr_shifted, size_t size_shifted) 85 + { 86 + u32 cfg; 87 
+ 88 + cfg = FIELD_PREP(APPLE_SART2_CONFIG_FLAGS, flags); 89 + cfg |= FIELD_PREP(APPLE_SART2_CONFIG_SIZE, size_shifted); 90 + 91 + writel(paddr_shifted, sart->regs + APPLE_SART2_PADDR(index)); 92 + writel(cfg, sart->regs + APPLE_SART2_CONFIG(index)); 93 + } 94 + 95 + static struct apple_sart_ops sart_ops_v2 = { 96 + .get_entry = sart2_get_entry, 97 + .set_entry = sart2_set_entry, 98 + .size_shift = APPLE_SART2_CONFIG_SIZE_SHIFT, 99 + .paddr_shift = APPLE_SART2_PADDR_SHIFT, 100 + .size_max = APPLE_SART2_CONFIG_SIZE_MAX, 101 + }; 102 + 103 + static void sart3_get_entry(struct apple_sart *sart, int index, u8 *flags, 104 + phys_addr_t *paddr, size_t *size) 105 + { 106 + phys_addr_t paddr_ = readl(sart->regs + APPLE_SART3_PADDR(index)); 107 + size_t size_ = readl(sart->regs + APPLE_SART3_SIZE(index)); 108 + 109 + *flags = readl(sart->regs + APPLE_SART3_CONFIG(index)); 110 + *size = size_ << APPLE_SART3_SIZE_SHIFT; 111 + *paddr = paddr_ << APPLE_SART3_PADDR_SHIFT; 112 + } 113 + 114 + static void sart3_set_entry(struct apple_sart *sart, int index, u8 flags, 115 + phys_addr_t paddr_shifted, size_t size_shifted) 116 + { 117 + writel(paddr_shifted, sart->regs + APPLE_SART3_PADDR(index)); 118 + writel(size_shifted, sart->regs + APPLE_SART3_SIZE(index)); 119 + writel(flags, sart->regs + APPLE_SART3_CONFIG(index)); 120 + } 121 + 122 + static struct apple_sart_ops sart_ops_v3 = { 123 + .get_entry = sart3_get_entry, 124 + .set_entry = sart3_set_entry, 125 + .size_shift = APPLE_SART3_SIZE_SHIFT, 126 + .paddr_shift = APPLE_SART3_PADDR_SHIFT, 127 + .size_max = APPLE_SART3_SIZE_MAX, 128 + }; 129 + 130 + static int apple_sart_probe(struct platform_device *pdev) 131 + { 132 + int i; 133 + struct apple_sart *sart; 134 + struct device *dev = &pdev->dev; 135 + 136 + sart = devm_kzalloc(dev, sizeof(*sart), GFP_KERNEL); 137 + if (!sart) 138 + return -ENOMEM; 139 + 140 + sart->dev = dev; 141 + sart->ops = of_device_get_match_data(dev); 142 + 143 + sart->regs = 
devm_platform_ioremap_resource(pdev, 0); 144 + if (IS_ERR(sart->regs)) 145 + return PTR_ERR(sart->regs); 146 + 147 + for (i = 0; i < APPLE_SART_MAX_ENTRIES; ++i) { 148 + u8 flags; 149 + size_t size; 150 + phys_addr_t paddr; 151 + 152 + sart->ops->get_entry(sart, i, &flags, &paddr, &size); 153 + 154 + if (!flags) 155 + continue; 156 + 157 + dev_dbg(sart->dev, 158 + "SART bootloader entry: index %02d; flags: 0x%02x; paddr: %pa; size: 0x%zx\n", 159 + i, flags, &paddr, size); 160 + set_bit(i, &sart->protected_entries); 161 + } 162 + 163 + platform_set_drvdata(pdev, sart); 164 + return 0; 165 + } 166 + 167 + struct apple_sart *devm_apple_sart_get(struct device *dev) 168 + { 169 + struct device_node *sart_node; 170 + struct platform_device *sart_pdev; 171 + struct apple_sart *sart; 172 + int ret; 173 + 174 + sart_node = of_parse_phandle(dev->of_node, "apple,sart", 0); 175 + if (!sart_node) 176 + return ERR_PTR(-ENODEV); 177 + 178 + sart_pdev = of_find_device_by_node(sart_node); 179 + of_node_put(sart_node); 180 + 181 + if (!sart_pdev) 182 + return ERR_PTR(-ENODEV); 183 + 184 + sart = dev_get_drvdata(&sart_pdev->dev); 185 + if (!sart) { 186 + put_device(&sart_pdev->dev); 187 + return ERR_PTR(-EPROBE_DEFER); 188 + } 189 + 190 + ret = devm_add_action_or_reset(dev, (void (*)(void *))put_device, 191 + &sart_pdev->dev); 192 + if (ret) 193 + return ERR_PTR(ret); 194 + 195 + device_link_add(dev, &sart_pdev->dev, 196 + DL_FLAG_PM_RUNTIME | DL_FLAG_AUTOREMOVE_SUPPLIER); 197 + 198 + return sart; 199 + } 200 + EXPORT_SYMBOL_GPL(devm_apple_sart_get); 201 + 202 + static int sart_set_entry(struct apple_sart *sart, int index, u8 flags, 203 + phys_addr_t paddr, size_t size) 204 + { 205 + if (size & ((1 << sart->ops->size_shift) - 1)) 206 + return -EINVAL; 207 + if (paddr & ((1 << sart->ops->paddr_shift) - 1)) 208 + return -EINVAL; 209 + 210 + paddr >>= sart->ops->paddr_shift; 211 + size >>= sart->ops->size_shift; 212 + 213 + if (size > sart->ops->size_max) 214 + return -EINVAL; 215 + 216 
+ sart->ops->set_entry(sart, index, flags, paddr, size); 217 + return 0; 218 + } 219 + 220 + int apple_sart_add_allowed_region(struct apple_sart *sart, phys_addr_t paddr, 221 + size_t size) 222 + { 223 + int i, ret; 224 + 225 + for (i = 0; i < APPLE_SART_MAX_ENTRIES; ++i) { 226 + if (test_bit(i, &sart->protected_entries)) 227 + continue; 228 + if (test_and_set_bit(i, &sart->used_entries)) 229 + continue; 230 + 231 + ret = sart_set_entry(sart, i, APPLE_SART_FLAGS_ALLOW, paddr, 232 + size); 233 + if (ret) { 234 + dev_dbg(sart->dev, 235 + "unable to set entry %d to [%pa, 0x%zx]\n", 236 + i, &paddr, size); 237 + clear_bit(i, &sart->used_entries); 238 + return ret; 239 + } 240 + 241 + dev_dbg(sart->dev, "wrote [%pa, 0x%zx] to %d\n", &paddr, size, 242 + i); 243 + return 0; 244 + } 245 + 246 + dev_warn(sart->dev, 247 + "no free entries left to add [paddr: 0x%pa, size: 0x%zx]\n", 248 + &paddr, size); 249 + 250 + return -EBUSY; 251 + } 252 + EXPORT_SYMBOL_GPL(apple_sart_add_allowed_region); 253 + 254 + int apple_sart_remove_allowed_region(struct apple_sart *sart, phys_addr_t paddr, 255 + size_t size) 256 + { 257 + int i; 258 + 259 + dev_dbg(sart->dev, 260 + "will remove [paddr: %pa, size: 0x%zx] from allowed regions\n", 261 + &paddr, size); 262 + 263 + for (i = 0; i < APPLE_SART_MAX_ENTRIES; ++i) { 264 + u8 eflags; 265 + size_t esize; 266 + phys_addr_t epaddr; 267 + 268 + if (test_bit(i, &sart->protected_entries)) 269 + continue; 270 + 271 + sart->ops->get_entry(sart, i, &eflags, &epaddr, &esize); 272 + 273 + if (epaddr != paddr || esize != size) 274 + continue; 275 + 276 + sart->ops->set_entry(sart, i, 0, 0, 0); 277 + 278 + clear_bit(i, &sart->used_entries); 279 + dev_dbg(sart->dev, "cleared entry %d\n", i); 280 + return 0; 281 + } 282 + 283 + dev_warn(sart->dev, "entry [paddr: 0x%pa, size: 0x%zx] not found\n", 284 + &paddr, size); 285 + 286 + return -EINVAL; 287 + } 288 + EXPORT_SYMBOL_GPL(apple_sart_remove_allowed_region); 289 + 290 + static void 
apple_sart_shutdown(struct platform_device *pdev) 291 + { 292 + struct apple_sart *sart = dev_get_drvdata(&pdev->dev); 293 + int i; 294 + 295 + for (i = 0; i < APPLE_SART_MAX_ENTRIES; ++i) { 296 + if (test_bit(i, &sart->protected_entries)) 297 + continue; 298 + 299 + sart->ops->set_entry(sart, i, 0, 0, 0); 300 + } 301 + } 302 + 303 + static const struct of_device_id apple_sart_of_match[] = { 304 + { 305 + .compatible = "apple,t6000-sart", 306 + .data = &sart_ops_v3, 307 + }, 308 + { 309 + .compatible = "apple,t8103-sart", 310 + .data = &sart_ops_v2, 311 + }, 312 + {} 313 + }; 314 + MODULE_DEVICE_TABLE(of, apple_sart_of_match); 315 + 316 + static struct platform_driver apple_sart_driver = { 317 + .driver = { 318 + .name = "apple-sart", 319 + .of_match_table = apple_sart_of_match, 320 + }, 321 + .probe = apple_sart_probe, 322 + .shutdown = apple_sart_shutdown, 323 + }; 324 + module_platform_driver(apple_sart_driver); 325 + 326 + MODULE_LICENSE("Dual MIT/GPL"); 327 + MODULE_AUTHOR("Sven Peter <sven@svenpeter.dev>"); 328 + MODULE_DESCRIPTION("Apple SART driver");
+3
drivers/soc/bcm/bcm63xx/bcm-pmb.c
···
 312 312 	for (e = table; e->name; e++) {
 313 313 		struct bcm_pmb_pm_domain *pd = devm_kzalloc(dev, sizeof(*pd), GFP_KERNEL);
 314 314 
     315 + 		if (!pd)
     316 + 			return -ENOMEM;
     317 + 
 315 318 		pd->pmb = pmb;
 316 319 		pd->data = e;
 317 320 		pd->genpd.name = e->name;
+1
drivers/soc/imx/Makefile
···
 6 6 obj-$(CONFIG_IMX_GPCV2_PM_DOMAINS) += gpcv2.o
 7 7 obj-$(CONFIG_SOC_IMX8M) += soc-imx8m.o
 8 8 obj-$(CONFIG_SOC_IMX8M) += imx8m-blk-ctrl.o
   9 + obj-$(CONFIG_SOC_IMX8M) += imx8mp-blk-ctrl.o
+418 -12
drivers/soc/imx/gpcv2.c
··· 21 21 #include <dt-bindings/power/imx8mq-power.h> 22 22 #include <dt-bindings/power/imx8mm-power.h> 23 23 #include <dt-bindings/power/imx8mn-power.h> 24 + #include <dt-bindings/power/imx8mp-power.h> 24 25 25 26 #define GPC_LPCR_A_CORE_BSC 0x000 26 27 27 28 #define GPC_PGC_CPU_MAPPING 0x0ec 29 + #define IMX8MP_GPC_PGC_CPU_MAPPING 0x1cc 28 30 29 31 #define IMX7_USB_HSIC_PHY_A_CORE_DOMAIN BIT(6) 30 32 #define IMX7_USB_OTG2_PHY_A_CORE_DOMAIN BIT(5) ··· 66 64 #define IMX8MN_DDR1_A53_DOMAIN BIT(7) 67 65 #define IMX8MN_OTG1_A53_DOMAIN BIT(4) 68 66 #define IMX8MN_MIPI_A53_DOMAIN BIT(2) 67 + 68 + #define IMX8MP_MEDIA_ISPDWP_A53_DOMAIN BIT(20) 69 + #define IMX8MP_HSIOMIX_A53_DOMAIN BIT(19) 70 + #define IMX8MP_MIPI_PHY2_A53_DOMAIN BIT(18) 71 + #define IMX8MP_HDMI_PHY_A53_DOMAIN BIT(17) 72 + #define IMX8MP_HDMIMIX_A53_DOMAIN BIT(16) 73 + #define IMX8MP_VPU_VC8000E_A53_DOMAIN BIT(15) 74 + #define IMX8MP_VPU_G2_A53_DOMAIN BIT(14) 75 + #define IMX8MP_VPU_G1_A53_DOMAIN BIT(13) 76 + #define IMX8MP_MEDIAMIX_A53_DOMAIN BIT(12) 77 + #define IMX8MP_GPU3D_A53_DOMAIN BIT(11) 78 + #define IMX8MP_VPUMIX_A53_DOMAIN BIT(10) 79 + #define IMX8MP_GPUMIX_A53_DOMAIN BIT(9) 80 + #define IMX8MP_GPU2D_A53_DOMAIN BIT(8) 81 + #define IMX8MP_AUDIOMIX_A53_DOMAIN BIT(7) 82 + #define IMX8MP_MLMIX_A53_DOMAIN BIT(6) 83 + #define IMX8MP_USB2_PHY_A53_DOMAIN BIT(5) 84 + #define IMX8MP_USB1_PHY_A53_DOMAIN BIT(4) 85 + #define IMX8MP_PCIE_PHY_A53_DOMAIN BIT(3) 86 + #define IMX8MP_MIPI_PHY1_A53_DOMAIN BIT(2) 87 + 88 + #define IMX8MP_GPC_PU_PGC_SW_PUP_REQ 0x0d8 89 + #define IMX8MP_GPC_PU_PGC_SW_PDN_REQ 0x0e4 69 90 70 91 #define GPC_PU_PGC_SW_PUP_REQ 0x0f8 71 92 #define GPC_PU_PGC_SW_PDN_REQ 0x104 ··· 132 107 #define IMX8MN_OTG1_SW_Pxx_REQ BIT(2) 133 108 #define IMX8MN_MIPI_SW_Pxx_REQ BIT(0) 134 109 110 + #define IMX8MP_DDRMIX_Pxx_REQ BIT(19) 111 + #define IMX8MP_MEDIA_ISP_DWP_Pxx_REQ BIT(18) 112 + #define IMX8MP_HSIOMIX_Pxx_REQ BIT(17) 113 + #define IMX8MP_MIPI_PHY2_Pxx_REQ BIT(16) 114 + #define 
IMX8MP_HDMI_PHY_Pxx_REQ BIT(15) 115 + #define IMX8MP_HDMIMIX_Pxx_REQ BIT(14) 116 + #define IMX8MP_VPU_VC8K_Pxx_REQ BIT(13) 117 + #define IMX8MP_VPU_G2_Pxx_REQ BIT(12) 118 + #define IMX8MP_VPU_G1_Pxx_REQ BIT(11) 119 + #define IMX8MP_MEDIMIX_Pxx_REQ BIT(10) 120 + #define IMX8MP_GPU_3D_Pxx_REQ BIT(9) 121 + #define IMX8MP_VPU_MIX_SHARE_LOGIC_Pxx_REQ BIT(8) 122 + #define IMX8MP_GPU_SHARE_LOGIC_Pxx_REQ BIT(7) 123 + #define IMX8MP_GPU_2D_Pxx_REQ BIT(6) 124 + #define IMX8MP_AUDIOMIX_Pxx_REQ BIT(5) 125 + #define IMX8MP_MLMIX_Pxx_REQ BIT(4) 126 + #define IMX8MP_USB2_PHY_Pxx_REQ BIT(3) 127 + #define IMX8MP_USB1_PHY_Pxx_REQ BIT(2) 128 + #define IMX8MP_PCIE_PHY_SW_Pxx_REQ BIT(1) 129 + #define IMX8MP_MIPI_PHY1_SW_Pxx_REQ BIT(0) 130 + 135 131 #define GPC_M4_PU_PDN_FLG 0x1bc 136 132 133 + #define IMX8MP_GPC_PU_PWRHSK 0x190 137 134 #define GPC_PU_PWRHSK 0x1fc 138 135 139 136 #define IMX8M_GPU_HSK_PWRDNACKN BIT(26) ··· 164 117 #define IMX8M_GPU_HSK_PWRDNREQN BIT(6) 165 118 #define IMX8M_VPU_HSK_PWRDNREQN BIT(5) 166 119 #define IMX8M_DISP_HSK_PWRDNREQN BIT(4) 167 - 168 120 169 121 #define IMX8MM_GPUMIX_HSK_PWRDNACKN BIT(29) 170 122 #define IMX8MM_GPU_HSK_PWRDNACKN (BIT(27) | BIT(28)) ··· 182 136 #define IMX8MN_GPUMIX_HSK_PWRDNREQN (BIT(11) | BIT(9)) 183 137 #define IMX8MN_DISPMIX_HSK_PWRDNREQN BIT(7) 184 138 #define IMX8MN_HSIO_HSK_PWRDNREQN BIT(5) 139 + 140 + #define IMX8MP_MEDIAMIX_PWRDNACKN BIT(30) 141 + #define IMX8MP_HDMIMIX_PWRDNACKN BIT(29) 142 + #define IMX8MP_HSIOMIX_PWRDNACKN BIT(28) 143 + #define IMX8MP_VPUMIX_PWRDNACKN BIT(26) 144 + #define IMX8MP_GPUMIX_PWRDNACKN BIT(25) 145 + #define IMX8MP_MLMIX_PWRDNACKN (BIT(23) | BIT(24)) 146 + #define IMX8MP_AUDIOMIX_PWRDNACKN (BIT(20) | BIT(31)) 147 + #define IMX8MP_MEDIAMIX_PWRDNREQN BIT(14) 148 + #define IMX8MP_HDMIMIX_PWRDNREQN BIT(13) 149 + #define IMX8MP_HSIOMIX_PWRDNREQN BIT(12) 150 + #define IMX8MP_VPUMIX_PWRDNREQN BIT(10) 151 + #define IMX8MP_GPUMIX_PWRDNREQN BIT(9) 152 + #define IMX8MP_MLMIX_PWRDNREQN (BIT(7) | BIT(8)) 
153 + #define IMX8MP_AUDIOMIX_PWRDNREQN (BIT(4) | BIT(15)) 185 154 186 155 /* 187 156 * The PGC offset values in Reference Manual ··· 240 179 #define IMX8MN_PGC_GPUMIX 23 241 180 #define IMX8MN_PGC_DISPMIX 26 242 181 182 + #define IMX8MP_PGC_NOC 9 183 + #define IMX8MP_PGC_MIPI1 12 184 + #define IMX8MP_PGC_PCIE 13 185 + #define IMX8MP_PGC_USB1 14 186 + #define IMX8MP_PGC_USB2 15 187 + #define IMX8MP_PGC_MLMIX 16 188 + #define IMX8MP_PGC_AUDIOMIX 17 189 + #define IMX8MP_PGC_GPU2D 18 190 + #define IMX8MP_PGC_GPUMIX 19 191 + #define IMX8MP_PGC_VPUMIX 20 192 + #define IMX8MP_PGC_GPU3D 21 193 + #define IMX8MP_PGC_MEDIAMIX 22 194 + #define IMX8MP_PGC_VPU_G1 23 195 + #define IMX8MP_PGC_VPU_G2 24 196 + #define IMX8MP_PGC_VPU_VC8000E 25 197 + #define IMX8MP_PGC_HDMIMIX 26 198 + #define IMX8MP_PGC_HDMI 27 199 + #define IMX8MP_PGC_MIPI2 28 200 + #define IMX8MP_PGC_HSIOMIX 29 201 + #define IMX8MP_PGC_MEDIA_ISP_DWP 30 202 + #define IMX8MP_PGC_DDRMIX 31 203 + 243 204 #define GPC_PGC_CTRL(n) (0x800 + (n) * 0x40) 244 205 #define GPC_PGC_SR(n) (GPC_PGC_CTRL(n) + 0xc) 245 206 246 207 #define GPC_PGC_CTRL_PCR BIT(0) 247 208 209 + struct imx_pgc_regs { 210 + u16 map; 211 + u16 pup; 212 + u16 pdn; 213 + u16 hsk; 214 + }; 215 + 248 216 struct imx_pgc_domain { 249 217 struct generic_pm_domain genpd; 250 218 struct regmap *regmap; 219 + const struct imx_pgc_regs *regs; 251 220 struct regulator *regulator; 252 221 struct reset_control *reset; 253 222 struct clk_bulk_data *clks; ··· 295 204 const int voltage; 296 205 const bool keep_clocks; 297 206 struct device *dev; 207 + 208 + unsigned int pgc_sw_pup_reg; 209 + unsigned int pgc_sw_pdn_reg; 298 210 }; 299 211 300 212 struct imx_pgc_domain_data { 301 213 const struct imx_pgc_domain *domains; 302 214 size_t domains_num; 303 215 const struct regmap_access_table *reg_access_table; 216 + const struct imx_pgc_regs *pgc_regs; 304 217 }; 305 218 306 219 static inline struct imx_pgc_domain * ··· 344 249 345 250 if (domain->bits.pxx) { 346 251 /* 
request the domain to power up */ 347 - regmap_update_bits(domain->regmap, GPC_PU_PGC_SW_PUP_REQ, 252 + regmap_update_bits(domain->regmap, domain->regs->pup, 348 253 domain->bits.pxx, domain->bits.pxx); 349 254 /* 350 255 * As per "5.5.9.4 Example Code 4" in IMX7DRM.pdf wait 351 256 * for PUP_REQ/PDN_REQ bit to be cleared 352 257 */ 353 258 ret = regmap_read_poll_timeout(domain->regmap, 354 - GPC_PU_PGC_SW_PUP_REQ, reg_val, 259 + domain->regs->pup, reg_val, 355 260 !(reg_val & domain->bits.pxx), 356 261 0, USEC_PER_MSEC); 357 262 if (ret) { ··· 373 278 374 279 /* request the ADB400 to power up */ 375 280 if (domain->bits.hskreq) { 376 - regmap_update_bits(domain->regmap, GPC_PU_PWRHSK, 281 + regmap_update_bits(domain->regmap, domain->regs->hsk, 377 282 domain->bits.hskreq, domain->bits.hskreq); 378 283 379 284 /* 380 - * ret = regmap_read_poll_timeout(domain->regmap, GPC_PU_PWRHSK, reg_val, 285 + * ret = regmap_read_poll_timeout(domain->regmap, domain->regs->hsk, reg_val, 381 286 * (reg_val & domain->bits.hskack), 0, 382 287 * USEC_PER_MSEC); 383 288 * Technically we need the commented code to wait handshake. 
But that needs ··· 424 329 425 330 /* request the ADB400 to power down */ 426 331 if (domain->bits.hskreq) { 427 - regmap_clear_bits(domain->regmap, GPC_PU_PWRHSK, 332 + regmap_clear_bits(domain->regmap, domain->regs->hsk, 428 333 domain->bits.hskreq); 429 334 430 - ret = regmap_read_poll_timeout(domain->regmap, GPC_PU_PWRHSK, 335 + ret = regmap_read_poll_timeout(domain->regmap, domain->regs->hsk, 431 336 reg_val, 432 337 !(reg_val & domain->bits.hskack), 433 338 0, USEC_PER_MSEC); ··· 445 350 } 446 351 447 352 /* request the domain to power down */ 448 - regmap_update_bits(domain->regmap, GPC_PU_PGC_SW_PDN_REQ, 353 + regmap_update_bits(domain->regmap, domain->regs->pdn, 449 354 domain->bits.pxx, domain->bits.pxx); 450 355 /* 451 356 * As per "5.5.9.4 Example Code 4" in IMX7DRM.pdf wait 452 357 * for PUP_REQ/PDN_REQ bit to be cleared 453 358 */ 454 359 ret = regmap_read_poll_timeout(domain->regmap, 455 - GPC_PU_PGC_SW_PDN_REQ, reg_val, 360 + domain->regs->pdn, reg_val, 456 361 !(reg_val & domain->bits.pxx), 457 362 0, USEC_PER_MSEC); 458 363 if (ret) { ··· 537 442 .n_yes_ranges = ARRAY_SIZE(imx7_yes_ranges), 538 443 }; 539 444 445 + static const struct imx_pgc_regs imx7_pgc_regs = { 446 + .map = GPC_PGC_CPU_MAPPING, 447 + .pup = GPC_PU_PGC_SW_PUP_REQ, 448 + .pdn = GPC_PU_PGC_SW_PDN_REQ, 449 + .hsk = GPC_PU_PWRHSK, 450 + }; 451 + 540 452 static const struct imx_pgc_domain_data imx7_pgc_domain_data = { 541 453 .domains = imx7_pgc_domains, 542 454 .domains_num = ARRAY_SIZE(imx7_pgc_domains), 543 455 .reg_access_table = &imx7_access_table, 456 + .pgc_regs = &imx7_pgc_regs, 544 457 }; 545 458 546 459 static const struct imx_pgc_domain imx8m_pgc_domains[] = { ··· 717 614 .domains = imx8m_pgc_domains, 718 615 .domains_num = ARRAY_SIZE(imx8m_pgc_domains), 719 616 .reg_access_table = &imx8m_access_table, 617 + .pgc_regs = &imx7_pgc_regs, 720 618 }; 721 619 722 620 static const struct imx_pgc_domain imx8mm_pgc_domains[] = { ··· 908 804 .domains = imx8mm_pgc_domains, 909 805 
.domains_num = ARRAY_SIZE(imx8mm_pgc_domains), 910 806 .reg_access_table = &imx8mm_access_table, 807 + .pgc_regs = &imx7_pgc_regs, 808 + }; 809 + 810 + static const struct imx_pgc_domain imx8mp_pgc_domains[] = { 811 + [IMX8MP_POWER_DOMAIN_MIPI_PHY1] = { 812 + .genpd = { 813 + .name = "mipi-phy1", 814 + }, 815 + .bits = { 816 + .pxx = IMX8MP_MIPI_PHY1_SW_Pxx_REQ, 817 + .map = IMX8MP_MIPI_PHY1_A53_DOMAIN, 818 + }, 819 + .pgc = BIT(IMX8MP_PGC_MIPI1), 820 + }, 821 + 822 + [IMX8MP_POWER_DOMAIN_PCIE_PHY] = { 823 + .genpd = { 824 + .name = "pcie-phy1", 825 + }, 826 + .bits = { 827 + .pxx = IMX8MP_PCIE_PHY_SW_Pxx_REQ, 828 + .map = IMX8MP_PCIE_PHY_A53_DOMAIN, 829 + }, 830 + .pgc = BIT(IMX8MP_PGC_PCIE), 831 + }, 832 + 833 + [IMX8MP_POWER_DOMAIN_USB1_PHY] = { 834 + .genpd = { 835 + .name = "usb-otg1", 836 + }, 837 + .bits = { 838 + .pxx = IMX8MP_USB1_PHY_Pxx_REQ, 839 + .map = IMX8MP_USB1_PHY_A53_DOMAIN, 840 + }, 841 + .pgc = BIT(IMX8MP_PGC_USB1), 842 + }, 843 + 844 + [IMX8MP_POWER_DOMAIN_USB2_PHY] = { 845 + .genpd = { 846 + .name = "usb-otg2", 847 + }, 848 + .bits = { 849 + .pxx = IMX8MP_USB2_PHY_Pxx_REQ, 850 + .map = IMX8MP_USB2_PHY_A53_DOMAIN, 851 + }, 852 + .pgc = BIT(IMX8MP_PGC_USB2), 853 + }, 854 + 855 + [IMX8MP_POWER_DOMAIN_MLMIX] = { 856 + .genpd = { 857 + .name = "mlmix", 858 + }, 859 + .bits = { 860 + .pxx = IMX8MP_MLMIX_Pxx_REQ, 861 + .map = IMX8MP_MLMIX_A53_DOMAIN, 862 + .hskreq = IMX8MP_MLMIX_PWRDNREQN, 863 + .hskack = IMX8MP_MLMIX_PWRDNACKN, 864 + }, 865 + .pgc = BIT(IMX8MP_PGC_MLMIX), 866 + .keep_clocks = true, 867 + }, 868 + 869 + [IMX8MP_POWER_DOMAIN_AUDIOMIX] = { 870 + .genpd = { 871 + .name = "audiomix", 872 + }, 873 + .bits = { 874 + .pxx = IMX8MP_AUDIOMIX_Pxx_REQ, 875 + .map = IMX8MP_AUDIOMIX_A53_DOMAIN, 876 + .hskreq = IMX8MP_AUDIOMIX_PWRDNREQN, 877 + .hskack = IMX8MP_AUDIOMIX_PWRDNACKN, 878 + }, 879 + .pgc = BIT(IMX8MP_PGC_AUDIOMIX), 880 + .keep_clocks = true, 881 + }, 882 + 883 + [IMX8MP_POWER_DOMAIN_GPU2D] = { 884 + .genpd = { 885 + .name = "gpu2d", 
886 + }, 887 + .bits = { 888 + .pxx = IMX8MP_GPU_2D_Pxx_REQ, 889 + .map = IMX8MP_GPU2D_A53_DOMAIN, 890 + }, 891 + .pgc = BIT(IMX8MP_PGC_GPU2D), 892 + }, 893 + 894 + [IMX8MP_POWER_DOMAIN_GPUMIX] = { 895 + .genpd = { 896 + .name = "gpumix", 897 + }, 898 + .bits = { 899 + .pxx = IMX8MP_GPU_SHARE_LOGIC_Pxx_REQ, 900 + .map = IMX8MP_GPUMIX_A53_DOMAIN, 901 + .hskreq = IMX8MP_GPUMIX_PWRDNREQN, 902 + .hskack = IMX8MP_GPUMIX_PWRDNACKN, 903 + }, 904 + .pgc = BIT(IMX8MP_PGC_GPUMIX), 905 + .keep_clocks = true, 906 + }, 907 + 908 + [IMX8MP_POWER_DOMAIN_VPUMIX] = { 909 + .genpd = { 910 + .name = "vpumix", 911 + }, 912 + .bits = { 913 + .pxx = IMX8MP_VPU_MIX_SHARE_LOGIC_Pxx_REQ, 914 + .map = IMX8MP_VPUMIX_A53_DOMAIN, 915 + .hskreq = IMX8MP_VPUMIX_PWRDNREQN, 916 + .hskack = IMX8MP_VPUMIX_PWRDNACKN, 917 + }, 918 + .pgc = BIT(IMX8MP_PGC_VPUMIX), 919 + .keep_clocks = true, 920 + }, 921 + 922 + [IMX8MP_POWER_DOMAIN_GPU3D] = { 923 + .genpd = { 924 + .name = "gpu3d", 925 + }, 926 + .bits = { 927 + .pxx = IMX8MP_GPU_3D_Pxx_REQ, 928 + .map = IMX8MP_GPU3D_A53_DOMAIN, 929 + }, 930 + .pgc = BIT(IMX8MP_PGC_GPU3D), 931 + }, 932 + 933 + [IMX8MP_POWER_DOMAIN_MEDIAMIX] = { 934 + .genpd = { 935 + .name = "mediamix", 936 + }, 937 + .bits = { 938 + .pxx = IMX8MP_MEDIMIX_Pxx_REQ, 939 + .map = IMX8MP_MEDIAMIX_A53_DOMAIN, 940 + .hskreq = IMX8MP_MEDIAMIX_PWRDNREQN, 941 + .hskack = IMX8MP_MEDIAMIX_PWRDNACKN, 942 + }, 943 + .pgc = BIT(IMX8MP_PGC_MEDIAMIX), 944 + .keep_clocks = true, 945 + }, 946 + 947 + [IMX8MP_POWER_DOMAIN_VPU_G1] = { 948 + .genpd = { 949 + .name = "vpu-g1", 950 + }, 951 + .bits = { 952 + .pxx = IMX8MP_VPU_G1_Pxx_REQ, 953 + .map = IMX8MP_VPU_G1_A53_DOMAIN, 954 + }, 955 + .pgc = BIT(IMX8MP_PGC_VPU_G1), 956 + }, 957 + 958 + [IMX8MP_POWER_DOMAIN_VPU_G2] = { 959 + .genpd = { 960 + .name = "vpu-g2", 961 + }, 962 + .bits = { 963 + .pxx = IMX8MP_VPU_G2_Pxx_REQ, 964 + .map = IMX8MP_VPU_G2_A53_DOMAIN 965 + }, 966 + .pgc = BIT(IMX8MP_PGC_VPU_G2), 967 + }, 968 + 969 + 
[IMX8MP_POWER_DOMAIN_VPU_VC8000E] = { 970 + .genpd = { 971 + .name = "vpu-h1", 972 + }, 973 + .bits = { 974 + .pxx = IMX8MP_VPU_VC8K_Pxx_REQ, 975 + .map = IMX8MP_VPU_VC8000E_A53_DOMAIN, 976 + }, 977 + .pgc = BIT(IMX8MP_PGC_VPU_VC8000E), 978 + }, 979 + 980 + [IMX8MP_POWER_DOMAIN_HDMIMIX] = { 981 + .genpd = { 982 + .name = "hdmimix", 983 + }, 984 + .bits = { 985 + .pxx = IMX8MP_HDMIMIX_Pxx_REQ, 986 + .map = IMX8MP_HDMIMIX_A53_DOMAIN, 987 + .hskreq = IMX8MP_HDMIMIX_PWRDNREQN, 988 + .hskack = IMX8MP_HDMIMIX_PWRDNACKN, 989 + }, 990 + .pgc = BIT(IMX8MP_PGC_HDMIMIX), 991 + .keep_clocks = true, 992 + }, 993 + 994 + [IMX8MP_POWER_DOMAIN_HDMI_PHY] = { 995 + .genpd = { 996 + .name = "hdmi-phy", 997 + }, 998 + .bits = { 999 + .pxx = IMX8MP_HDMI_PHY_Pxx_REQ, 1000 + .map = IMX8MP_HDMI_PHY_A53_DOMAIN, 1001 + }, 1002 + .pgc = BIT(IMX8MP_PGC_HDMI), 1003 + }, 1004 + 1005 + [IMX8MP_POWER_DOMAIN_MIPI_PHY2] = { 1006 + .genpd = { 1007 + .name = "mipi-phy2", 1008 + }, 1009 + .bits = { 1010 + .pxx = IMX8MP_MIPI_PHY2_Pxx_REQ, 1011 + .map = IMX8MP_MIPI_PHY2_A53_DOMAIN, 1012 + }, 1013 + .pgc = BIT(IMX8MP_PGC_MIPI2), 1014 + }, 1015 + 1016 + [IMX8MP_POWER_DOMAIN_HSIOMIX] = { 1017 + .genpd = { 1018 + .name = "hsiomix", 1019 + }, 1020 + .bits = { 1021 + .pxx = IMX8MP_HSIOMIX_Pxx_REQ, 1022 + .map = IMX8MP_HSIOMIX_A53_DOMAIN, 1023 + .hskreq = IMX8MP_HSIOMIX_PWRDNREQN, 1024 + .hskack = IMX8MP_HSIOMIX_PWRDNACKN, 1025 + }, 1026 + .pgc = BIT(IMX8MP_PGC_HSIOMIX), 1027 + .keep_clocks = true, 1028 + }, 1029 + 1030 + [IMX8MP_POWER_DOMAIN_MEDIAMIX_ISPDWP] = { 1031 + .genpd = { 1032 + .name = "mediamix-isp-dwp", 1033 + }, 1034 + .bits = { 1035 + .pxx = IMX8MP_MEDIA_ISP_DWP_Pxx_REQ, 1036 + .map = IMX8MP_MEDIA_ISPDWP_A53_DOMAIN, 1037 + }, 1038 + .pgc = BIT(IMX8MP_PGC_MEDIA_ISP_DWP), 1039 + }, 1040 + }; 1041 + 1042 + static const struct regmap_range imx8mp_yes_ranges[] = { 1043 + regmap_reg_range(GPC_LPCR_A_CORE_BSC, 1044 + IMX8MP_GPC_PGC_CPU_MAPPING), 1045 + regmap_reg_range(GPC_PGC_CTRL(IMX8MP_PGC_NOC), 1046 
+ GPC_PGC_SR(IMX8MP_PGC_NOC)), 1047 + regmap_reg_range(GPC_PGC_CTRL(IMX8MP_PGC_MIPI1), 1048 + GPC_PGC_SR(IMX8MP_PGC_MIPI1)), 1049 + regmap_reg_range(GPC_PGC_CTRL(IMX8MP_PGC_PCIE), 1050 + GPC_PGC_SR(IMX8MP_PGC_PCIE)), 1051 + regmap_reg_range(GPC_PGC_CTRL(IMX8MP_PGC_USB1), 1052 + GPC_PGC_SR(IMX8MP_PGC_USB1)), 1053 + regmap_reg_range(GPC_PGC_CTRL(IMX8MP_PGC_USB2), 1054 + GPC_PGC_SR(IMX8MP_PGC_USB2)), 1055 + regmap_reg_range(GPC_PGC_CTRL(IMX8MP_PGC_MLMIX), 1056 + GPC_PGC_SR(IMX8MP_PGC_MLMIX)), 1057 + regmap_reg_range(GPC_PGC_CTRL(IMX8MP_PGC_AUDIOMIX), 1058 + GPC_PGC_SR(IMX8MP_PGC_AUDIOMIX)), 1059 + regmap_reg_range(GPC_PGC_CTRL(IMX8MP_PGC_GPU2D), 1060 + GPC_PGC_SR(IMX8MP_PGC_GPU2D)), 1061 + regmap_reg_range(GPC_PGC_CTRL(IMX8MP_PGC_GPUMIX), 1062 + GPC_PGC_SR(IMX8MP_PGC_GPUMIX)), 1063 + regmap_reg_range(GPC_PGC_CTRL(IMX8MP_PGC_VPUMIX), 1064 + GPC_PGC_SR(IMX8MP_PGC_VPUMIX)), 1065 + regmap_reg_range(GPC_PGC_CTRL(IMX8MP_PGC_GPU3D), 1066 + GPC_PGC_SR(IMX8MP_PGC_GPU3D)), 1067 + regmap_reg_range(GPC_PGC_CTRL(IMX8MP_PGC_MEDIAMIX), 1068 + GPC_PGC_SR(IMX8MP_PGC_MEDIAMIX)), 1069 + regmap_reg_range(GPC_PGC_CTRL(IMX8MP_PGC_VPU_G1), 1070 + GPC_PGC_SR(IMX8MP_PGC_VPU_G1)), 1071 + regmap_reg_range(GPC_PGC_CTRL(IMX8MP_PGC_VPU_G2), 1072 + GPC_PGC_SR(IMX8MP_PGC_VPU_G2)), 1073 + regmap_reg_range(GPC_PGC_CTRL(IMX8MP_PGC_VPU_VC8000E), 1074 + GPC_PGC_SR(IMX8MP_PGC_VPU_VC8000E)), 1075 + regmap_reg_range(GPC_PGC_CTRL(IMX8MP_PGC_HDMIMIX), 1076 + GPC_PGC_SR(IMX8MP_PGC_HDMIMIX)), 1077 + regmap_reg_range(GPC_PGC_CTRL(IMX8MP_PGC_HDMI), 1078 + GPC_PGC_SR(IMX8MP_PGC_HDMI)), 1079 + regmap_reg_range(GPC_PGC_CTRL(IMX8MP_PGC_MIPI2), 1080 + GPC_PGC_SR(IMX8MP_PGC_MIPI2)), 1081 + regmap_reg_range(GPC_PGC_CTRL(IMX8MP_PGC_HSIOMIX), 1082 + GPC_PGC_SR(IMX8MP_PGC_HSIOMIX)), 1083 + regmap_reg_range(GPC_PGC_CTRL(IMX8MP_PGC_MEDIA_ISP_DWP), 1084 + GPC_PGC_SR(IMX8MP_PGC_MEDIA_ISP_DWP)), 1085 + regmap_reg_range(GPC_PGC_CTRL(IMX8MP_PGC_DDRMIX), 1086 + GPC_PGC_SR(IMX8MP_PGC_DDRMIX)), 1087 + }; 1088 + 1089 + static const 
struct regmap_access_table imx8mp_access_table = { 1090 + .yes_ranges = imx8mp_yes_ranges, 1091 + .n_yes_ranges = ARRAY_SIZE(imx8mp_yes_ranges), 1092 + }; 1093 + 1094 + static const struct imx_pgc_regs imx8mp_pgc_regs = { 1095 + .map = IMX8MP_GPC_PGC_CPU_MAPPING, 1096 + .pup = IMX8MP_GPC_PU_PGC_SW_PUP_REQ, 1097 + .pdn = IMX8MP_GPC_PU_PGC_SW_PDN_REQ, 1098 + .hsk = IMX8MP_GPC_PU_PWRHSK, 1099 + }; 1100 + static const struct imx_pgc_domain_data imx8mp_pgc_domain_data = { 1101 + .domains = imx8mp_pgc_domains, 1102 + .domains_num = ARRAY_SIZE(imx8mp_pgc_domains), 1103 + .reg_access_table = &imx8mp_access_table, 1104 + .pgc_regs = &imx8mp_pgc_regs, 911 1105 }; 912 1106 913 1107 static const struct imx_pgc_domain imx8mn_pgc_domains[] = { ··· 1297 895 .domains = imx8mn_pgc_domains, 1298 896 .domains_num = ARRAY_SIZE(imx8mn_pgc_domains), 1299 897 .reg_access_table = &imx8mn_access_table, 898 + .pgc_regs = &imx7_pgc_regs, 1300 899 }; 1301 900 1302 901 static int imx_pgc_domain_probe(struct platform_device *pdev) ··· 1330 927 pm_runtime_enable(domain->dev); 1331 928 1332 929 if (domain->bits.map) 1333 - regmap_update_bits(domain->regmap, GPC_PGC_CPU_MAPPING, 930 + regmap_update_bits(domain->regmap, domain->regs->map, 1334 931 domain->bits.map, domain->bits.map); 1335 932 1336 933 ret = pm_genpd_init(&domain->genpd, NULL, true); ··· 1356 953 pm_genpd_remove(&domain->genpd); 1357 954 out_domain_unmap: 1358 955 if (domain->bits.map) 1359 - regmap_update_bits(domain->regmap, GPC_PGC_CPU_MAPPING, 956 + regmap_update_bits(domain->regmap, domain->regs->map, 1360 957 domain->bits.map, 0); 1361 958 pm_runtime_disable(domain->dev); 1362 959 ··· 1371 968 pm_genpd_remove(&domain->genpd); 1372 969 1373 970 if (domain->bits.map) 1374 - regmap_update_bits(domain->regmap, GPC_PGC_CPU_MAPPING, 971 + regmap_update_bits(domain->regmap, domain->regs->map, 1375 972 domain->bits.map, 0); 1376 973 1377 974 pm_runtime_disable(domain->dev); ··· 1502 1099 1503 1100 domain = pd_pdev->dev.platform_data; 
1504 1101 domain->regmap = regmap; 1102 + domain->regs = domain_data->pgc_regs; 1103 + 1505 1104 domain->genpd.power_on = imx_pgc_power_up; 1506 1105 domain->genpd.power_off = imx_pgc_power_down; 1507 1106 ··· 1525 1120 { .compatible = "fsl,imx7d-gpc", .data = &imx7_pgc_domain_data, }, 1526 1121 { .compatible = "fsl,imx8mm-gpc", .data = &imx8mm_pgc_domain_data, }, 1527 1122 { .compatible = "fsl,imx8mn-gpc", .data = &imx8mn_pgc_domain_data, }, 1123 + { .compatible = "fsl,imx8mp-gpc", .data = &imx8mp_pgc_domain_data, }, 1528 1124 { .compatible = "fsl,imx8mq-gpc", .data = &imx8m_pgc_domain_data, }, 1529 1125 { } 1530 1126 };
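The gpcv2.c hunks above replace the hardcoded GPC_PGC_CPU_MAPPING offset with a per-SoC register table (`imx_pgc_regs`) that common code dereferences as `domain->regs->map`. A minimal userspace sketch of that indirection, assuming illustrative names and placeholder offset values (not the kernel's):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Per-variant register layout, analogous to struct imx_pgc_regs.
 * The offset values below are placeholders, not real GPC offsets. */
struct pgc_regs {
	unsigned int map;	/* CPU mapping register */
	unsigned int pup;	/* power-up request register */
	unsigned int pdn;	/* power-down request register */
};

static const struct pgc_regs imx7_regs   = { .map = 0x01, .pup = 0x02, .pdn = 0x03 };
static const struct pgc_regs imx8mp_regs = { .map = 0x11, .pup = 0x12, .pdn = 0x13 };

/* Match table pairing a compatible string with its register layout,
 * like the of_device_id .data pointers in the diff. */
struct variant {
	const char *compatible;
	const struct pgc_regs *regs;
};

static const struct variant variants[] = {
	{ "fsl,imx7d-gpc",  &imx7_regs },
	{ "fsl,imx8mp-gpc", &imx8mp_regs },
};

/* Common code never hardcodes an offset; it goes through the pointer,
 * mirroring regmap_update_bits(domain->regmap, domain->regs->map, ...). */
static const struct pgc_regs *match_regs(const char *compatible)
{
	for (size_t i = 0; i < sizeof(variants) / sizeof(variants[0]); i++)
		if (strcmp(variants[i].compatible, compatible) == 0)
			return variants[i].regs;
	return NULL;
}
```

This is why the i.MX8MN data can simply point at `imx7_pgc_regs` while i.MX8MP supplies its own table: only the data differs, not the code paths.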
+122 -2
drivers/soc/imx/imx8m-blk-ctrl.c
··· 15 15 16 16 #include <dt-bindings/power/imx8mm-power.h> 17 17 #include <dt-bindings/power/imx8mn-power.h> 18 + #include <dt-bindings/power/imx8mp-power.h> 18 19 #include <dt-bindings/power/imx8mq-power.h> 19 20 20 21 #define BLK_SFT_RSTN 0x0 21 22 #define BLK_CLK_EN 0x4 22 - #define BLK_MIPI_RESET_DIV 0x8 /* Mini/Nano DISPLAY_BLK_CTRL only */ 23 + #define BLK_MIPI_RESET_DIV 0x8 /* Mini/Nano/Plus DISPLAY_BLK_CTRL only */ 23 24 24 25 struct imx8m_blk_ctrl_domain; 25 26 ··· 42 41 u32 clk_mask; 43 42 44 43 /* 45 - * i.MX8M Mini and Nano have a third DISPLAY_BLK_CTRL register 44 + * i.MX8M Mini, Nano and Plus have a third DISPLAY_BLK_CTRL register 46 45 * which is used to control the reset for the MIPI Phy. 47 46 * Since it's only present in certain circumstances, 48 47 * an if-statement should be used before setting and clearing this ··· 242 241 ret = PTR_ERR(domain->power_dev); 243 242 goto cleanup_pds; 244 243 } 244 + dev_set_name(domain->power_dev, "%s", data->name); 245 245 246 246 domain->genpd.name = data->name; 247 247 domain->genpd.power_on = imx8m_blk_ctrl_power_on; ··· 592 590 .num_domains = ARRAY_SIZE(imx8mn_disp_blk_ctl_domain_data), 593 591 }; 594 592 593 + static int imx8mp_media_power_notifier(struct notifier_block *nb, 594 + unsigned long action, void *data) 595 + { 596 + struct imx8m_blk_ctrl *bc = container_of(nb, struct imx8m_blk_ctrl, 597 + power_nb); 598 + 599 + if (action != GENPD_NOTIFY_ON && action != GENPD_NOTIFY_PRE_OFF) 600 + return NOTIFY_OK; 601 + 602 + /* Enable bus clock and deassert bus reset */ 603 + regmap_set_bits(bc->regmap, BLK_CLK_EN, BIT(8)); 604 + regmap_set_bits(bc->regmap, BLK_SFT_RSTN, BIT(8)); 605 + 606 + /* 607 + * On power up we have no software backchannel to the GPC to 608 + * wait for the ADB handshake to happen, so we just delay for a 609 + * bit. On power down the GPC driver waits for the handshake. 
610 + */ 611 + if (action == GENPD_NOTIFY_ON) 612 + udelay(5); 613 + 614 + return NOTIFY_OK; 615 + } 616 + 617 + /* 618 + * From i.MX 8M Plus Applications Processor Reference Manual, Rev. 1, 619 + * section 13.2.2, 13.2.3 620 + * isp-ahb and dwe are not in Figure 13-5. Media BLK_CTRL Clocks 621 + */ 622 + static const struct imx8m_blk_ctrl_domain_data imx8mp_media_blk_ctl_domain_data[] = { 623 + [IMX8MP_MEDIABLK_PD_MIPI_DSI_1] = { 624 + .name = "mediablk-mipi-dsi-1", 625 + .clk_names = (const char *[]){ "apb", "phy", }, 626 + .num_clks = 2, 627 + .gpc_name = "mipi-dsi1", 628 + .rst_mask = BIT(0) | BIT(1), 629 + .clk_mask = BIT(0) | BIT(1), 630 + .mipi_phy_rst_mask = BIT(17), 631 + }, 632 + [IMX8MP_MEDIABLK_PD_MIPI_CSI2_1] = { 633 + .name = "mediablk-mipi-csi2-1", 634 + .clk_names = (const char *[]){ "apb", "cam1" }, 635 + .num_clks = 2, 636 + .gpc_name = "mipi-csi1", 637 + .rst_mask = BIT(2) | BIT(3), 638 + .clk_mask = BIT(2) | BIT(3), 639 + .mipi_phy_rst_mask = BIT(16), 640 + }, 641 + [IMX8MP_MEDIABLK_PD_LCDIF_1] = { 642 + .name = "mediablk-lcdif-1", 643 + .clk_names = (const char *[]){ "disp1", "apb", "axi", }, 644 + .num_clks = 3, 645 + .gpc_name = "lcdif1", 646 + .rst_mask = BIT(4) | BIT(5) | BIT(23), 647 + .clk_mask = BIT(4) | BIT(5) | BIT(23), 648 + }, 649 + [IMX8MP_MEDIABLK_PD_ISI] = { 650 + .name = "mediablk-isi", 651 + .clk_names = (const char *[]){ "axi", "apb" }, 652 + .num_clks = 2, 653 + .gpc_name = "isi", 654 + .rst_mask = BIT(6) | BIT(7), 655 + .clk_mask = BIT(6) | BIT(7), 656 + }, 657 + [IMX8MP_MEDIABLK_PD_MIPI_CSI2_2] = { 658 + .name = "mediablk-mipi-csi2-2", 659 + .clk_names = (const char *[]){ "apb", "cam2" }, 660 + .num_clks = 2, 661 + .gpc_name = "mipi-csi2", 662 + .rst_mask = BIT(9) | BIT(10), 663 + .clk_mask = BIT(9) | BIT(10), 664 + .mipi_phy_rst_mask = BIT(30), 665 + }, 666 + [IMX8MP_MEDIABLK_PD_LCDIF_2] = { 667 + .name = "mediablk-lcdif-2", 668 + .clk_names = (const char *[]){ "disp1", "apb", "axi", }, 669 + .num_clks = 3, 670 + .gpc_name 
= "lcdif2", 671 + .rst_mask = BIT(11) | BIT(12) | BIT(24), 672 + .clk_mask = BIT(11) | BIT(12) | BIT(24), 673 + }, 674 + [IMX8MP_MEDIABLK_PD_ISP] = { 675 + .name = "mediablk-isp", 676 + .clk_names = (const char *[]){ "isp", "axi", "apb" }, 677 + .num_clks = 3, 678 + .gpc_name = "isp", 679 + .rst_mask = BIT(16) | BIT(17) | BIT(18), 680 + .clk_mask = BIT(16) | BIT(17) | BIT(18), 681 + }, 682 + [IMX8MP_MEDIABLK_PD_DWE] = { 683 + .name = "mediablk-dwe", 684 + .clk_names = (const char *[]){ "axi", "apb" }, 685 + .num_clks = 2, 686 + .gpc_name = "dwe", 687 + .rst_mask = BIT(19) | BIT(20) | BIT(21), 688 + .clk_mask = BIT(19) | BIT(20) | BIT(21), 689 + }, 690 + [IMX8MP_MEDIABLK_PD_MIPI_DSI_2] = { 691 + .name = "mediablk-mipi-dsi-2", 692 + .clk_names = (const char *[]){ "phy", }, 693 + .num_clks = 1, 694 + .gpc_name = "mipi-dsi2", 695 + .rst_mask = BIT(22), 696 + .clk_mask = BIT(22), 697 + .mipi_phy_rst_mask = BIT(29), 698 + }, 699 + }; 700 + 701 + static const struct imx8m_blk_ctrl_data imx8mp_media_blk_ctl_dev_data = { 702 + .max_reg = 0x138, 703 + .power_notifier_fn = imx8mp_media_power_notifier, 704 + .domains = imx8mp_media_blk_ctl_domain_data, 705 + .num_domains = ARRAY_SIZE(imx8mp_media_blk_ctl_domain_data), 706 + }; 707 + 595 708 static int imx8mq_vpu_power_notifier(struct notifier_block *nb, 596 709 unsigned long action, void *data) 597 710 { ··· 779 662 }, { 780 663 .compatible = "fsl,imx8mn-disp-blk-ctrl", 781 664 .data = &imx8mn_disp_blk_ctl_dev_data 665 + }, { 666 + .compatible = "fsl,imx8mp-media-blk-ctrl", 667 + .data = &imx8mp_media_blk_ctl_dev_data 782 668 }, { 783 669 .compatible = "fsl,imx8mq-vpu-blk-ctrl", 784 670 .data = &imx8mq_vpu_blk_ctl_dev_data
+696
drivers/soc/imx/imx8mp-blk-ctrl.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + 3 + /* 4 + * Copyright 2022 Pengutronix, Lucas Stach <kernel@pengutronix.de> 5 + */ 6 + 7 + #include <linux/clk.h> 8 + #include <linux/device.h> 9 + #include <linux/module.h> 10 + #include <linux/of_device.h> 11 + #include <linux/platform_device.h> 12 + #include <linux/pm_domain.h> 13 + #include <linux/pm_runtime.h> 14 + #include <linux/regmap.h> 15 + 16 + #include <dt-bindings/power/imx8mp-power.h> 17 + 18 + #define GPR_REG0 0x0 19 + #define PCIE_CLOCK_MODULE_EN BIT(0) 20 + #define USB_CLOCK_MODULE_EN BIT(1) 21 + 22 + struct imx8mp_blk_ctrl_domain; 23 + 24 + struct imx8mp_blk_ctrl { 25 + struct device *dev; 26 + struct notifier_block power_nb; 27 + struct device *bus_power_dev; 28 + struct regmap *regmap; 29 + struct imx8mp_blk_ctrl_domain *domains; 30 + struct genpd_onecell_data onecell_data; 31 + void (*power_off) (struct imx8mp_blk_ctrl *bc, struct imx8mp_blk_ctrl_domain *domain); 32 + void (*power_on) (struct imx8mp_blk_ctrl *bc, struct imx8mp_blk_ctrl_domain *domain); 33 + }; 34 + 35 + struct imx8mp_blk_ctrl_domain_data { 36 + const char *name; 37 + const char * const *clk_names; 38 + int num_clks; 39 + const char *gpc_name; 40 + }; 41 + 42 + #define DOMAIN_MAX_CLKS 2 43 + 44 + struct imx8mp_blk_ctrl_domain { 45 + struct generic_pm_domain genpd; 46 + const struct imx8mp_blk_ctrl_domain_data *data; 47 + struct clk_bulk_data clks[DOMAIN_MAX_CLKS]; 48 + struct device *power_dev; 49 + struct imx8mp_blk_ctrl *bc; 50 + int id; 51 + }; 52 + 53 + struct imx8mp_blk_ctrl_data { 54 + int max_reg; 55 + notifier_fn_t power_notifier_fn; 56 + void (*power_off) (struct imx8mp_blk_ctrl *bc, struct imx8mp_blk_ctrl_domain *domain); 57 + void (*power_on) (struct imx8mp_blk_ctrl *bc, struct imx8mp_blk_ctrl_domain *domain); 58 + const struct imx8mp_blk_ctrl_domain_data *domains; 59 + int num_domains; 60 + }; 61 + 62 + static inline struct imx8mp_blk_ctrl_domain * 63 + to_imx8mp_blk_ctrl_domain(struct generic_pm_domain *genpd) 64 + { 
65 + return container_of(genpd, struct imx8mp_blk_ctrl_domain, genpd); 66 + } 67 + 68 + static void imx8mp_hsio_blk_ctrl_power_on(struct imx8mp_blk_ctrl *bc, 69 + struct imx8mp_blk_ctrl_domain *domain) 70 + { 71 + switch (domain->id) { 72 + case IMX8MP_HSIOBLK_PD_USB: 73 + regmap_set_bits(bc->regmap, GPR_REG0, USB_CLOCK_MODULE_EN); 74 + break; 75 + case IMX8MP_HSIOBLK_PD_PCIE: 76 + regmap_set_bits(bc->regmap, GPR_REG0, PCIE_CLOCK_MODULE_EN); 77 + break; 78 + default: 79 + break; 80 + } 81 + } 82 + 83 + static void imx8mp_hsio_blk_ctrl_power_off(struct imx8mp_blk_ctrl *bc, 84 + struct imx8mp_blk_ctrl_domain *domain) 85 + { 86 + switch (domain->id) { 87 + case IMX8MP_HSIOBLK_PD_USB: 88 + regmap_clear_bits(bc->regmap, GPR_REG0, USB_CLOCK_MODULE_EN); 89 + break; 90 + case IMX8MP_HSIOBLK_PD_PCIE: 91 + regmap_clear_bits(bc->regmap, GPR_REG0, PCIE_CLOCK_MODULE_EN); 92 + break; 93 + default: 94 + break; 95 + } 96 + } 97 + 98 + static int imx8mp_hsio_power_notifier(struct notifier_block *nb, 99 + unsigned long action, void *data) 100 + { 101 + struct imx8mp_blk_ctrl *bc = container_of(nb, struct imx8mp_blk_ctrl, 102 + power_nb); 103 + struct clk_bulk_data *usb_clk = bc->domains[IMX8MP_HSIOBLK_PD_USB].clks; 104 + int num_clks = bc->domains[IMX8MP_HSIOBLK_PD_USB].data->num_clks; 105 + int ret; 106 + 107 + switch (action) { 108 + case GENPD_NOTIFY_ON: 109 + /* 110 + * enable USB clock for a moment for the power-on ADB handshake 111 + * to proceed 112 + */ 113 + ret = clk_bulk_prepare_enable(num_clks, usb_clk); 114 + if (ret) 115 + return NOTIFY_BAD; 116 + regmap_set_bits(bc->regmap, GPR_REG0, USB_CLOCK_MODULE_EN); 117 + 118 + udelay(5); 119 + 120 + regmap_clear_bits(bc->regmap, GPR_REG0, USB_CLOCK_MODULE_EN); 121 + clk_bulk_disable_unprepare(num_clks, usb_clk); 122 + break; 123 + case GENPD_NOTIFY_PRE_OFF: 124 + /* enable USB clock for the power-down ADB handshake to work */ 125 + ret = clk_bulk_prepare_enable(num_clks, usb_clk); 126 + if (ret) 127 + return NOTIFY_BAD; 128 + 
129 + regmap_set_bits(bc->regmap, GPR_REG0, USB_CLOCK_MODULE_EN); 130 + break; 131 + case GENPD_NOTIFY_OFF: 132 + clk_bulk_disable_unprepare(num_clks, usb_clk); 133 + break; 134 + default: 135 + break; 136 + } 137 + 138 + return NOTIFY_OK; 139 + } 140 + 141 + static const struct imx8mp_blk_ctrl_domain_data imx8mp_hsio_domain_data[] = { 142 + [IMX8MP_HSIOBLK_PD_USB] = { 143 + .name = "hsioblk-usb", 144 + .clk_names = (const char *[]){ "usb" }, 145 + .num_clks = 1, 146 + .gpc_name = "usb", 147 + }, 148 + [IMX8MP_HSIOBLK_PD_USB_PHY1] = { 149 + .name = "hsioblk-usb-phy1", 150 + .gpc_name = "usb-phy1", 151 + }, 152 + [IMX8MP_HSIOBLK_PD_USB_PHY2] = { 153 + .name = "hsioblk-usb-phy2", 154 + .gpc_name = "usb-phy2", 155 + }, 156 + [IMX8MP_HSIOBLK_PD_PCIE] = { 157 + .name = "hsioblk-pcie", 158 + .clk_names = (const char *[]){ "pcie" }, 159 + .num_clks = 1, 160 + .gpc_name = "pcie", 161 + }, 162 + [IMX8MP_HSIOBLK_PD_PCIE_PHY] = { 163 + .name = "hsioblk-pcie-phy", 164 + .gpc_name = "pcie-phy", 165 + }, 166 + }; 167 + 168 + static const struct imx8mp_blk_ctrl_data imx8mp_hsio_blk_ctl_dev_data = { 169 + .max_reg = 0x24, 170 + .power_on = imx8mp_hsio_blk_ctrl_power_on, 171 + .power_off = imx8mp_hsio_blk_ctrl_power_off, 172 + .power_notifier_fn = imx8mp_hsio_power_notifier, 173 + .domains = imx8mp_hsio_domain_data, 174 + .num_domains = ARRAY_SIZE(imx8mp_hsio_domain_data), 175 + }; 176 + 177 + #define HDMI_RTX_RESET_CTL0 0x20 178 + #define HDMI_RTX_CLK_CTL0 0x40 179 + #define HDMI_RTX_CLK_CTL1 0x50 180 + #define HDMI_RTX_CLK_CTL2 0x60 181 + #define HDMI_RTX_CLK_CTL3 0x70 182 + #define HDMI_RTX_CLK_CTL4 0x80 183 + #define HDMI_TX_CONTROL0 0x200 184 + 185 + static void imx8mp_hdmi_blk_ctrl_power_on(struct imx8mp_blk_ctrl *bc, 186 + struct imx8mp_blk_ctrl_domain *domain) 187 + { 188 + switch (domain->id) { 189 + case IMX8MP_HDMIBLK_PD_IRQSTEER: 190 + regmap_set_bits(bc->regmap, HDMI_RTX_CLK_CTL0, BIT(9)); 191 + regmap_set_bits(bc->regmap, HDMI_RTX_RESET_CTL0, BIT(16)); 192 + break; 
193 + case IMX8MP_HDMIBLK_PD_LCDIF: 194 + regmap_set_bits(bc->regmap, HDMI_RTX_CLK_CTL0, 195 + BIT(7) | BIT(16) | BIT(17) | BIT(18) | 196 + BIT(19) | BIT(20)); 197 + regmap_set_bits(bc->regmap, HDMI_RTX_CLK_CTL1, BIT(11)); 198 + regmap_set_bits(bc->regmap, HDMI_RTX_RESET_CTL0, 199 + BIT(4) | BIT(5) | BIT(6)); 200 + break; 201 + case IMX8MP_HDMIBLK_PD_PAI: 202 + regmap_set_bits(bc->regmap, HDMI_RTX_CLK_CTL1, BIT(17)); 203 + regmap_set_bits(bc->regmap, HDMI_RTX_RESET_CTL0, BIT(18)); 204 + break; 205 + case IMX8MP_HDMIBLK_PD_PVI: 206 + regmap_set_bits(bc->regmap, HDMI_RTX_CLK_CTL1, BIT(28)); 207 + regmap_set_bits(bc->regmap, HDMI_RTX_RESET_CTL0, BIT(22)); 208 + break; 209 + case IMX8MP_HDMIBLK_PD_TRNG: 210 + regmap_set_bits(bc->regmap, HDMI_RTX_CLK_CTL1, BIT(27) | BIT(30)); 211 + regmap_set_bits(bc->regmap, HDMI_RTX_RESET_CTL0, BIT(20)); 212 + break; 213 + case IMX8MP_HDMIBLK_PD_HDMI_TX: 214 + regmap_set_bits(bc->regmap, HDMI_RTX_CLK_CTL0, 215 + BIT(2) | BIT(4) | BIT(5)); 216 + regmap_set_bits(bc->regmap, HDMI_RTX_CLK_CTL1, 217 + BIT(12) | BIT(13) | BIT(14) | BIT(15) | BIT(16) | 218 + BIT(18) | BIT(19) | BIT(20) | BIT(21)); 219 + regmap_set_bits(bc->regmap, HDMI_RTX_RESET_CTL0, 220 + BIT(7) | BIT(10) | BIT(11)); 221 + regmap_set_bits(bc->regmap, HDMI_TX_CONTROL0, BIT(1)); 222 + break; 223 + case IMX8MP_HDMIBLK_PD_HDMI_TX_PHY: 224 + regmap_set_bits(bc->regmap, HDMI_RTX_CLK_CTL1, BIT(22) | BIT(24)); 225 + regmap_set_bits(bc->regmap, HDMI_RTX_RESET_CTL0, BIT(12)); 226 + regmap_clear_bits(bc->regmap, HDMI_TX_CONTROL0, BIT(3)); 227 + break; 228 + default: 229 + break; 230 + } 231 + } 232 + 233 + static void imx8mp_hdmi_blk_ctrl_power_off(struct imx8mp_blk_ctrl *bc, 234 + struct imx8mp_blk_ctrl_domain *domain) 235 + { 236 + switch (domain->id) { 237 + case IMX8MP_HDMIBLK_PD_IRQSTEER: 238 + regmap_clear_bits(bc->regmap, HDMI_RTX_CLK_CTL0, BIT(9)); 239 + regmap_clear_bits(bc->regmap, HDMI_RTX_RESET_CTL0, BIT(16)); 240 + break; 241 + case IMX8MP_HDMIBLK_PD_LCDIF: 242 + 
regmap_clear_bits(bc->regmap, HDMI_RTX_RESET_CTL0, 243 + BIT(4) | BIT(5) | BIT(6)); 244 + regmap_clear_bits(bc->regmap, HDMI_RTX_CLK_CTL1, BIT(11)); 245 + regmap_clear_bits(bc->regmap, HDMI_RTX_CLK_CTL0, 246 + BIT(7) | BIT(16) | BIT(17) | BIT(18) | 247 + BIT(19) | BIT(20)); 248 + break; 249 + case IMX8MP_HDMIBLK_PD_PAI: 250 + regmap_clear_bits(bc->regmap, HDMI_RTX_RESET_CTL0, BIT(18)); 251 + regmap_clear_bits(bc->regmap, HDMI_RTX_CLK_CTL1, BIT(17)); 252 + break; 253 + case IMX8MP_HDMIBLK_PD_PVI: 254 + regmap_clear_bits(bc->regmap, HDMI_RTX_RESET_CTL0, BIT(22)); 255 + regmap_clear_bits(bc->regmap, HDMI_RTX_CLK_CTL1, BIT(28)); 256 + break; 257 + case IMX8MP_HDMIBLK_PD_TRNG: 258 + regmap_clear_bits(bc->regmap, HDMI_RTX_RESET_CTL0, BIT(20)); 259 + regmap_clear_bits(bc->regmap, HDMI_RTX_CLK_CTL1, BIT(27) | BIT(30)); 260 + break; 261 + case IMX8MP_HDMIBLK_PD_HDMI_TX: 262 + regmap_clear_bits(bc->regmap, HDMI_TX_CONTROL0, BIT(1)); 263 + regmap_clear_bits(bc->regmap, HDMI_RTX_RESET_CTL0, 264 + BIT(7) | BIT(10) | BIT(11)); 265 + regmap_clear_bits(bc->regmap, HDMI_RTX_CLK_CTL1, 266 + BIT(12) | BIT(13) | BIT(14) | BIT(15) | BIT(16) | 267 + BIT(18) | BIT(19) | BIT(20) | BIT(21)); 268 + regmap_clear_bits(bc->regmap, HDMI_RTX_CLK_CTL0, 269 + BIT(2) | BIT(4) | BIT(5)); 270 + break; 271 + case IMX8MP_HDMIBLK_PD_HDMI_TX_PHY: 272 + regmap_set_bits(bc->regmap, HDMI_TX_CONTROL0, BIT(3)); 273 + regmap_clear_bits(bc->regmap, HDMI_RTX_RESET_CTL0, BIT(12)); 274 + regmap_clear_bits(bc->regmap, HDMI_RTX_CLK_CTL1, BIT(22) | BIT(24)); 275 + break; 276 + default: 277 + break; 278 + } 279 + } 280 + 281 + static int imx8mp_hdmi_power_notifier(struct notifier_block *nb, 282 + unsigned long action, void *data) 283 + { 284 + struct imx8mp_blk_ctrl *bc = container_of(nb, struct imx8mp_blk_ctrl, 285 + power_nb); 286 + 287 + if (action != GENPD_NOTIFY_ON) 288 + return NOTIFY_OK; 289 + 290 + /* 291 + * Contrary to other blk-ctrls the reset and clock don't clear when the 292 + * power domain is powered 
down. To ensure the proper reset pulsing, 293 + * first clear them all to asserted state, then enable the bus clocks 294 + * and then release the ADB reset. 295 + */ 296 + regmap_write(bc->regmap, HDMI_RTX_RESET_CTL0, 0x0); 297 + regmap_write(bc->regmap, HDMI_RTX_CLK_CTL0, 0x0); 298 + regmap_write(bc->regmap, HDMI_RTX_CLK_CTL1, 0x0); 299 + regmap_set_bits(bc->regmap, HDMI_RTX_CLK_CTL0, 300 + BIT(0) | BIT(1) | BIT(10)); 301 + regmap_set_bits(bc->regmap, HDMI_RTX_RESET_CTL0, BIT(0)); 302 + 303 + /* 304 + * On power up we have no software backchannel to the GPC to 305 + * wait for the ADB handshake to happen, so we just delay for a 306 + * bit. On power down the GPC driver waits for the handshake. 307 + */ 308 + udelay(5); 309 + 310 + return NOTIFY_OK; 311 + } 312 + 313 + static const struct imx8mp_blk_ctrl_domain_data imx8mp_hdmi_domain_data[] = { 314 + [IMX8MP_HDMIBLK_PD_IRQSTEER] = { 315 + .name = "hdmiblk-irqsteer", 316 + .clk_names = (const char *[]){ "apb" }, 317 + .num_clks = 1, 318 + .gpc_name = "irqsteer", 319 + }, 320 + [IMX8MP_HDMIBLK_PD_LCDIF] = { 321 + .name = "hdmiblk-lcdif", 322 + .clk_names = (const char *[]){ "axi", "apb" }, 323 + .num_clks = 2, 324 + .gpc_name = "lcdif", 325 + }, 326 + [IMX8MP_HDMIBLK_PD_PAI] = { 327 + .name = "hdmiblk-pai", 328 + .clk_names = (const char *[]){ "apb" }, 329 + .num_clks = 1, 330 + .gpc_name = "pai", 331 + }, 332 + [IMX8MP_HDMIBLK_PD_PVI] = { 333 + .name = "hdmiblk-pvi", 334 + .clk_names = (const char *[]){ "apb" }, 335 + .num_clks = 1, 336 + .gpc_name = "pvi", 337 + }, 338 + [IMX8MP_HDMIBLK_PD_TRNG] = { 339 + .name = "hdmiblk-trng", 340 + .clk_names = (const char *[]){ "apb" }, 341 + .num_clks = 1, 342 + .gpc_name = "trng", 343 + }, 344 + [IMX8MP_HDMIBLK_PD_HDMI_TX] = { 345 + .name = "hdmiblk-hdmi-tx", 346 + .clk_names = (const char *[]){ "apb", "ref_266m" }, 347 + .num_clks = 2, 348 + .gpc_name = "hdmi-tx", 349 + }, 350 + [IMX8MP_HDMIBLK_PD_HDMI_TX_PHY] = { 351 + .name = "hdmiblk-hdmi-tx-phy", 352 + .clk_names = 
(const char *[]){ "apb", "ref_24m" }, 353 + .num_clks = 2, 354 + .gpc_name = "hdmi-tx-phy", 355 + }, 356 + }; 357 + 358 + static const struct imx8mp_blk_ctrl_data imx8mp_hdmi_blk_ctl_dev_data = { 359 + .max_reg = 0x23c, 360 + .power_on = imx8mp_hdmi_blk_ctrl_power_on, 361 + .power_off = imx8mp_hdmi_blk_ctrl_power_off, 362 + .power_notifier_fn = imx8mp_hdmi_power_notifier, 363 + .domains = imx8mp_hdmi_domain_data, 364 + .num_domains = ARRAY_SIZE(imx8mp_hdmi_domain_data), 365 + }; 366 + 367 + static int imx8mp_blk_ctrl_power_on(struct generic_pm_domain *genpd) 368 + { 369 + struct imx8mp_blk_ctrl_domain *domain = to_imx8mp_blk_ctrl_domain(genpd); 370 + const struct imx8mp_blk_ctrl_domain_data *data = domain->data; 371 + struct imx8mp_blk_ctrl *bc = domain->bc; 372 + int ret; 373 + 374 + /* make sure bus domain is awake */ 375 + ret = pm_runtime_resume_and_get(bc->bus_power_dev); 376 + if (ret < 0) { 377 + dev_err(bc->dev, "failed to power up bus domain\n"); 378 + return ret; 379 + } 380 + 381 + /* enable upstream clocks */ 382 + ret = clk_bulk_prepare_enable(data->num_clks, domain->clks); 383 + if (ret) { 384 + dev_err(bc->dev, "failed to enable clocks\n"); 385 + goto bus_put; 386 + } 387 + 388 + /* domain specific blk-ctrl manipulation */ 389 + bc->power_on(bc, domain); 390 + 391 + /* power up upstream GPC domain */ 392 + ret = pm_runtime_resume_and_get(domain->power_dev); 393 + if (ret < 0) { 394 + dev_err(bc->dev, "failed to power up peripheral domain\n"); 395 + goto clk_disable; 396 + } 397 + 398 + clk_bulk_disable_unprepare(data->num_clks, domain->clks); 399 + 400 + return 0; 401 + 402 + clk_disable: 403 + clk_bulk_disable_unprepare(data->num_clks, domain->clks); 404 + bus_put: 405 + pm_runtime_put(bc->bus_power_dev); 406 + 407 + return ret; 408 + } 409 + 410 + static int imx8mp_blk_ctrl_power_off(struct generic_pm_domain *genpd) 411 + { 412 + struct imx8mp_blk_ctrl_domain *domain = to_imx8mp_blk_ctrl_domain(genpd); 413 + const struct imx8mp_blk_ctrl_domain_data 
*data = domain->data; 414 + struct imx8mp_blk_ctrl *bc = domain->bc; 415 + int ret; 416 + 417 + ret = clk_bulk_prepare_enable(data->num_clks, domain->clks); 418 + if (ret) { 419 + dev_err(bc->dev, "failed to enable clocks\n"); 420 + return ret; 421 + } 422 + 423 + /* domain specific blk-ctrl manipulation */ 424 + bc->power_off(bc, domain); 425 + 426 + clk_bulk_disable_unprepare(data->num_clks, domain->clks); 427 + 428 + /* power down upstream GPC domain */ 429 + pm_runtime_put(domain->power_dev); 430 + 431 + /* allow bus domain to suspend */ 432 + pm_runtime_put(bc->bus_power_dev); 433 + 434 + return 0; 435 + } 436 + 437 + static struct generic_pm_domain * 438 + imx8m_blk_ctrl_xlate(struct of_phandle_args *args, void *data) 439 + { 440 + struct genpd_onecell_data *onecell_data = data; 441 + unsigned int index = args->args[0]; 442 + 443 + if (args->args_count != 1 || 444 + index >= onecell_data->num_domains) 445 + return ERR_PTR(-EINVAL); 446 + 447 + return onecell_data->domains[index]; 448 + } 449 + 450 + static struct lock_class_key blk_ctrl_genpd_lock_class; 451 + 452 + static int imx8mp_blk_ctrl_probe(struct platform_device *pdev) 453 + { 454 + const struct imx8mp_blk_ctrl_data *bc_data; 455 + struct device *dev = &pdev->dev; 456 + struct imx8mp_blk_ctrl *bc; 457 + void __iomem *base; 458 + int num_domains, i, ret; 459 + 460 + struct regmap_config regmap_config = { 461 + .reg_bits = 32, 462 + .val_bits = 32, 463 + .reg_stride = 4, 464 + }; 465 + 466 + bc = devm_kzalloc(dev, sizeof(*bc), GFP_KERNEL); 467 + if (!bc) 468 + return -ENOMEM; 469 + 470 + bc->dev = dev; 471 + 472 + bc_data = of_device_get_match_data(dev); 473 + num_domains = bc_data->num_domains; 474 + 475 + base = devm_platform_ioremap_resource(pdev, 0); 476 + if (IS_ERR(base)) 477 + return PTR_ERR(base); 478 + 479 + regmap_config.max_register = bc_data->max_reg; 480 + bc->regmap = devm_regmap_init_mmio(dev, base, &regmap_config); 481 + if (IS_ERR(bc->regmap)) 482 + return dev_err_probe(dev, 
PTR_ERR(bc->regmap), 483 + "failed to init regmap\n"); 484 + 485 + bc->domains = devm_kcalloc(dev, num_domains, 486 + sizeof(struct imx8mp_blk_ctrl_domain), 487 + GFP_KERNEL); 488 + if (!bc->domains) 489 + return -ENOMEM; 490 + 491 + bc->onecell_data.num_domains = num_domains; 492 + bc->onecell_data.xlate = imx8m_blk_ctrl_xlate; 493 + bc->onecell_data.domains = 494 + devm_kcalloc(dev, num_domains, 495 + sizeof(struct generic_pm_domain *), GFP_KERNEL); 496 + if (!bc->onecell_data.domains) 497 + return -ENOMEM; 498 + 499 + bc->bus_power_dev = genpd_dev_pm_attach_by_name(dev, "bus"); 500 + if (IS_ERR(bc->bus_power_dev)) 501 + return dev_err_probe(dev, PTR_ERR(bc->bus_power_dev), 502 + "failed to attach bus power domain\n"); 503 + 504 + bc->power_off = bc_data->power_off; 505 + bc->power_on = bc_data->power_on; 506 + 507 + for (i = 0; i < num_domains; i++) { 508 + const struct imx8mp_blk_ctrl_domain_data *data = &bc_data->domains[i]; 509 + struct imx8mp_blk_ctrl_domain *domain = &bc->domains[i]; 510 + int j; 511 + 512 + domain->data = data; 513 + 514 + for (j = 0; j < data->num_clks; j++) 515 + domain->clks[j].id = data->clk_names[j]; 516 + 517 + ret = devm_clk_bulk_get(dev, data->num_clks, domain->clks); 518 + if (ret) { 519 + dev_err_probe(dev, ret, "failed to get clock\n"); 520 + goto cleanup_pds; 521 + } 522 + 523 + domain->power_dev = 524 + dev_pm_domain_attach_by_name(dev, data->gpc_name); 525 + if (IS_ERR(domain->power_dev)) { 526 + dev_err_probe(dev, PTR_ERR(domain->power_dev), 527 + "failed to attach power domain %s\n", 528 + data->gpc_name); 529 + ret = PTR_ERR(domain->power_dev); 530 + goto cleanup_pds; 531 + } 532 + dev_set_name(domain->power_dev, "%s", data->name); 533 + 534 + domain->genpd.name = data->name; 535 + domain->genpd.power_on = imx8mp_blk_ctrl_power_on; 536 + domain->genpd.power_off = imx8mp_blk_ctrl_power_off; 537 + domain->bc = bc; 538 + domain->id = i; 539 + 540 + ret = pm_genpd_init(&domain->genpd, NULL, true); 541 + if (ret) { 542 + 
dev_err_probe(dev, ret, "failed to init power domain\n"); 543 + dev_pm_domain_detach(domain->power_dev, true); 544 + goto cleanup_pds; 545 + } 546 + 547 + /* 548 + * We use runtime PM to trigger power on/off of the upstream GPC 549 + * domain, as a strict hierarchical parent/child power domain 550 + * setup doesn't allow us to meet the sequencing requirements. 551 + * This means we have nested locking of genpd locks, without the 552 + * nesting being visible at the genpd level, so we need a 553 + * separate lock class to make lockdep aware of the fact that 554 + * these are separate domain locks that can be nested without a 555 + * self-deadlock. 556 + */ 557 + lockdep_set_class(&domain->genpd.mlock, 558 + &blk_ctrl_genpd_lock_class); 559 + 560 + bc->onecell_data.domains[i] = &domain->genpd; 561 + } 562 + 563 + ret = of_genpd_add_provider_onecell(dev->of_node, &bc->onecell_data); 564 + if (ret) { 565 + dev_err_probe(dev, ret, "failed to add power domain provider\n"); 566 + goto cleanup_pds; 567 + } 568 + 569 + bc->power_nb.notifier_call = bc_data->power_notifier_fn; 570 + ret = dev_pm_genpd_add_notifier(bc->bus_power_dev, &bc->power_nb); 571 + if (ret) { 572 + dev_err_probe(dev, ret, "failed to add power notifier\n"); 573 + goto cleanup_provider; 574 + } 575 + 576 + dev_set_drvdata(dev, bc); 577 + 578 + return 0; 579 + 580 + cleanup_provider: 581 + of_genpd_del_provider(dev->of_node); 582 + cleanup_pds: 583 + for (i--; i >= 0; i--) { 584 + pm_genpd_remove(&bc->domains[i].genpd); 585 + dev_pm_domain_detach(bc->domains[i].power_dev, true); 586 + } 587 + 588 + dev_pm_domain_detach(bc->bus_power_dev, true); 589 + 590 + return ret; 591 + } 592 + 593 + static int imx8mp_blk_ctrl_remove(struct platform_device *pdev) 594 + { 595 + struct imx8mp_blk_ctrl *bc = dev_get_drvdata(&pdev->dev); 596 + int i; 597 + 598 + of_genpd_del_provider(pdev->dev.of_node); 599 + 600 + for (i = 0; i < bc->onecell_data.num_domains; i++) { 601 + struct imx8mp_blk_ctrl_domain *domain =
&bc->domains[i]; 602 + 603 + pm_genpd_remove(&domain->genpd); 604 + dev_pm_domain_detach(domain->power_dev, true); 605 + } 606 + 607 + dev_pm_genpd_remove_notifier(bc->bus_power_dev); 608 + 609 + dev_pm_domain_detach(bc->bus_power_dev, true); 610 + 611 + return 0; 612 + } 613 + 614 + #ifdef CONFIG_PM_SLEEP 615 + static int imx8mp_blk_ctrl_suspend(struct device *dev) 616 + { 617 + struct imx8mp_blk_ctrl *bc = dev_get_drvdata(dev); 618 + int ret, i; 619 + 620 + /* 621 + * This may look strange, but is done so the generic PM_SLEEP code 622 + * can power down our domains and more importantly power them up again 623 + * after resume, without tripping over our usage of runtime PM to 624 + * control the upstream GPC domains. Things happen in the right order 625 + * in the system suspend/resume paths due to the device parent/child 626 + * hierarchy. 627 + */ 628 + ret = pm_runtime_get_sync(bc->bus_power_dev); 629 + if (ret < 0) { 630 + pm_runtime_put_noidle(bc->bus_power_dev); 631 + return ret; 632 + } 633 + 634 + for (i = 0; i < bc->onecell_data.num_domains; i++) { 635 + struct imx8mp_blk_ctrl_domain *domain = &bc->domains[i]; 636 + 637 + ret = pm_runtime_get_sync(domain->power_dev); 638 + if (ret < 0) { 639 + pm_runtime_put_noidle(domain->power_dev); 640 + goto out_fail; 641 + } 642 + } 643 + 644 + return 0; 645 + 646 + out_fail: 647 + for (i--; i >= 0; i--) 648 + pm_runtime_put(bc->domains[i].power_dev); 649 + 650 + pm_runtime_put(bc->bus_power_dev); 651 + 652 + return ret; 653 + } 654 + 655 + static int imx8mp_blk_ctrl_resume(struct device *dev) 656 + { 657 + struct imx8mp_blk_ctrl *bc = dev_get_drvdata(dev); 658 + int i; 659 + 660 + for (i = 0; i < bc->onecell_data.num_domains; i++) 661 + pm_runtime_put(bc->domains[i].power_dev); 662 + 663 + pm_runtime_put(bc->bus_power_dev); 664 + 665 + return 0; 666 + } 667 + #endif 668 + 669 + static const struct dev_pm_ops imx8mp_blk_ctrl_pm_ops = { 670 + SET_SYSTEM_SLEEP_PM_OPS(imx8mp_blk_ctrl_suspend, 671 + 
imx8mp_blk_ctrl_resume) 672 + }; 673 + 674 + static const struct of_device_id imx8mp_blk_ctrl_of_match[] = { 675 + { 676 + .compatible = "fsl,imx8mp-hsio-blk-ctrl", 677 + .data = &imx8mp_hsio_blk_ctl_dev_data, 678 + }, { 679 + .compatible = "fsl,imx8mp-hdmi-blk-ctrl", 680 + .data = &imx8mp_hdmi_blk_ctl_dev_data, 681 + }, { 682 + /* Sentinel */ 683 + } 684 + }; 685 + MODULE_DEVICE_TABLE(of, imx8mp_blk_ctrl_of_match); 686 + 687 + static struct platform_driver imx8mp_blk_ctrl_driver = { 688 + .probe = imx8mp_blk_ctrl_probe, 689 + .remove = imx8mp_blk_ctrl_remove, 690 + .driver = { 691 + .name = "imx8mp-blk-ctrl", 692 + .pm = &imx8mp_blk_ctrl_pm_ops, 693 + .of_match_table = imx8mp_blk_ctrl_of_match, 694 + }, 695 + }; 696 + module_platform_driver(imx8mp_blk_ctrl_driver);
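The probe error path in imx8mp-blk-ctrl.c initializes domains in order and, on failure, unwinds only the entries that were already set up via `for (i--; i >= 0; i--)`. A stand-alone sketch of that unwind idiom, where `setup()`/`teardown()` are hypothetical stand-ins for `pm_genpd_init()`/`pm_genpd_remove()`:

```c
#include <assert.h>

#define N 4

/* Simulated per-domain init: fails at index fail_at, otherwise marks
 * the slot initialized. */
static int setup(int *state, int i, int fail_at)
{
	if (i == fail_at)
		return -1;
	state[i] = 1;
	return 0;
}

static void teardown(int *state, int i)
{
	state[i] = 0;
}

/* Mirror of the probe flow: on the first failure, jump to cleanup and
 * tear down entries 0..i-1 in reverse order. Entry i itself was never
 * initialized, so the loop starts at i - 1. */
static int probe_all(int *state, int fail_at)
{
	int i;

	for (i = 0; i < N; i++) {
		if (setup(state, i, fail_at) < 0)
			goto cleanup;
	}
	return 0;

cleanup:
	for (i--; i >= 0; i--)	/* unwind already-initialized entries */
		teardown(state, i);
	return -1;
}
```

The same shape explains why the driver detaches `domain->power_dev` inline when `pm_genpd_init()` fails: that one attachment happened before the failure but is not yet covered by the reverse loop.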
+61
drivers/soc/qcom/llcc-qcom.c
···
	{ LLCC_MODPE,    29,   64, 1, 1, 0x3f,  0x0, 0, 0, 0, 1, 0, 0},
 };

+static const struct llcc_slice_config sc8180x_data[] = {
+	{ LLCC_CPUSS,    1, 6144, 1, 1, 0xfff, 0x0,   0, 0, 0, 1, 1 },
+	{ LLCC_VIDSC0,   2,  512, 2, 1, 0xfff, 0x0,   0, 0, 0, 1, 0 },
+	{ LLCC_VIDSC1,   3,  512, 2, 1, 0xfff, 0x0,   0, 0, 0, 1, 0 },
+	{ LLCC_AUDIO,    6, 1024, 1, 1, 0xfff, 0x0,   0, 0, 0, 1, 0 },
+	{ LLCC_MDMHPGRW, 7, 3072, 1, 1, 0x3ff, 0xc00, 0, 0, 0, 1, 0 },
+	{ LLCC_MDM,      8, 3072, 1, 1, 0xfff, 0x0,   0, 0, 0, 1, 0 },
+	{ LLCC_MODHW,    9, 1024, 1, 1, 0xfff, 0x0,   0, 0, 0, 1, 0 },
+	{ LLCC_CMPT,    10, 6144, 1, 1, 0xfff, 0x0,   0, 0, 0, 1, 0 },
+	{ LLCC_GPUHTW,  11, 1024, 1, 1, 0xfff, 0x0,   0, 0, 0, 1, 0 },
+	{ LLCC_GPU,     12, 5120, 1, 1, 0xfff, 0x0,   0, 0, 0, 1, 0 },
+	{ LLCC_MMUHWT,  13, 1024, 1, 1, 0xfff, 0x0,   0, 0, 0, 0, 1 },
+	{ LLCC_CMPTDMA, 15, 6144, 1, 1, 0xfff, 0x0,   0, 0, 0, 1, 0 },
+	{ LLCC_DISP,    16, 6144, 1, 1, 0xfff, 0x0,   0, 0, 0, 1, 0 },
+	{ LLCC_VIDFW,   17, 1024, 1, 1, 0xfff, 0x0,   0, 0, 0, 1, 0 },
+	{ LLCC_MDMHPFX, 20, 1024, 2, 1, 0xfff, 0x0,   0, 0, 0, 1, 0 },
+	{ LLCC_MDMPNG,  21, 1024, 0, 1, 0xc,   0x0,   0, 0, 0, 1, 0 },
+	{ LLCC_AUDHW,   22, 1024, 1, 1, 0xfff, 0x0,   0, 0, 0, 1, 0 },
+	{ LLCC_NPU,     23, 6144, 1, 1, 0xfff, 0x0,   0, 0, 0, 1, 0 },
+	{ LLCC_WLHW,    24, 6144, 1, 1, 0xfff, 0x0,   0, 0, 0, 1, 0 },
+	{ LLCC_MODPE,   29,  512, 1, 1, 0xc,   0x0,   0, 0, 0, 1, 0 },
+	{ LLCC_APTCM,   30,  512, 3, 1, 0x0,   0x1,   1, 0, 0, 1, 0 },
+	{ LLCC_WRCACHE, 31,  128, 1, 1, 0xfff, 0x0,   0, 0, 0, 0, 0 },
+};
+
+static const struct llcc_slice_config sc8280xp_data[] = {
+	{ LLCC_CPUSS,    1, 6144, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 1, 0 },
+	{ LLCC_VIDSC0,   2,  512, 3, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+	{ LLCC_AUDIO,    6, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 0, 0 },
+	{ LLCC_CMPT,    10, 6144, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 0, 0 },
+	{ LLCC_GPUHTW,  11, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+	{ LLCC_GPU,     12, 4096, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 1 },
+	{ LLCC_MMUHWT,  13, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
+	{ LLCC_DISP,    16, 6144, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+	{ LLCC_AUDHW,   22, 2048, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+	{ LLCC_DRE,     26, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+	{ LLCC_CVP,     28,  512, 3, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+	{ LLCC_APTCM,   30, 1024, 3, 1, 0x0,   0x1, 1, 0, 0, 1, 0, 0 },
+	{ LLCC_WRCACHE, 31, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
+	{ LLCC_CVPFW,   32,  512, 1, 0, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+	{ LLCC_CPUSS1,  33, 2048, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+	{ LLCC_CPUHWT,  36,  512, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
+};
+
 static const struct llcc_slice_config sdm845_data[] = {
	{ LLCC_CPUSS,  1, 2816, 1, 0, 0xffc, 0x2,   0, 0, 1, 1, 1 },
	{ LLCC_VIDSC0, 2,  512, 2, 1, 0x0,   0x0f0, 0, 0, 1, 1, 0 },
···
 static const struct qcom_llcc_config sc7280_cfg = {
	.sct_data	= sc7280_data,
	.size		= ARRAY_SIZE(sc7280_data),
+	.need_llcc_cfg	= true,
+	.reg_offset	= llcc_v1_2_reg_offset,
+};
+
+static const struct qcom_llcc_config sc8180x_cfg = {
+	.sct_data	= sc8180x_data,
+	.size		= ARRAY_SIZE(sc8180x_data),
+	.need_llcc_cfg	= true,
+	.reg_offset	= llcc_v1_2_reg_offset,
+};
+
+static const struct qcom_llcc_config sc8280xp_cfg = {
+	.sct_data	= sc8280xp_data,
+	.size		= ARRAY_SIZE(sc8280xp_data),
	.need_llcc_cfg	= true,
	.reg_offset	= llcc_v1_2_reg_offset,
 };
···
 static const struct of_device_id qcom_llcc_of_match[] = {
	{ .compatible = "qcom,sc7180-llcc", .data = &sc7180_cfg },
	{ .compatible = "qcom,sc7280-llcc", .data = &sc7280_cfg },
+	{ .compatible = "qcom,sc8180x-llcc", .data = &sc8180x_cfg },
+	{ .compatible = "qcom,sc8280xp-llcc", .data = &sc8280xp_cfg },
	{ .compatible = "qcom,sdm845-llcc", .data = &sdm845_cfg },
	{ .compatible = "qcom,sm6350-llcc", .data = &sm6350_cfg },
	{ .compatible = "qcom,sm8150-llcc", .data = &sm8150_cfg },
···
	{ .compatible = "qcom,sm8450-llcc", .data = &sm8450_cfg },
	{ }
 };
+MODULE_DEVICE_TABLE(of, qcom_llcc_of_match);

 static struct platform_driver qcom_llcc_driver = {
	.driver = {
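The hunks above keep extending per-SoC `of_device_id` tables: each entry pairs a devicetree "compatible" string with that SoC's configuration, and an empty sentinel entry terminates the table. A minimal userspace sketch of that lookup pattern, with hypothetical names and mock data standing in for the kernel structures:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical mock of an of_device_id match table: a compatible string
 * maps to per-SoC data, terminated by an empty sentinel entry. */
struct mock_of_device_id {
	const char *compatible;
	const void *data;
};

static const int sc8180x_cfg_mock = 1, sc8280xp_cfg_mock = 2;

static const struct mock_of_device_id mock_llcc_of_match[] = {
	{ "qcom,sc8180x-llcc", &sc8180x_cfg_mock },
	{ "qcom,sc8280xp-llcc", &sc8280xp_cfg_mock },
	{ NULL, NULL }	/* sentinel, like the kernel's { } terminator */
};

/* Walk until the sentinel; return the matching entry's data or NULL. */
static const void *mock_of_match(const struct mock_of_device_id *tbl,
				 const char *compatible)
{
	for (; tbl->compatible; tbl++)
		if (!strcmp(tbl->compatible, compatible))
			return tbl->data;
	return NULL;
}
```

This mirrors only the table shape, not the kernel's actual matching logic, which also considers node names and device types.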
+5 -6
drivers/soc/qcom/pdr_interface.c
···
						  notifier_hdl);
	const struct servreg_state_updated_ind *ind_msg = data;
	struct pdr_list_node *ind;
-	struct pdr_service *pds;
-	bool found = false;
+	struct pdr_service *pds = NULL, *iter;

	if (!ind_msg || !ind_msg->service_path[0] ||
	    strlen(ind_msg->service_path) > SERVREG_NAME_LENGTH)
		return;

	mutex_lock(&pdr->list_lock);
-	list_for_each_entry(pds, &pdr->lookups, node) {
-		if (strcmp(pds->service_path, ind_msg->service_path))
+	list_for_each_entry(iter, &pdr->lookups, node) {
+		if (strcmp(iter->service_path, ind_msg->service_path))
			continue;

-		found = true;
+		pds = iter;
		break;
	}
	mutex_unlock(&pdr->list_lock);

-	if (!found)
+	if (!pds)
		return;

	pr_info("PDR: Indication received from %s, state: 0x%x, trans-id: %d\n",
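The pdr_interface.c hunk above replaces a separate `found` flag with a dedicated loop iterator: the result pointer starts as NULL and is only set on a match, so the `list_for_each_entry()` cursor is never used after the loop (where it no longer points at a valid element). A userspace sketch of the same idiom over a plain singly linked list, with invented names:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy list node standing in for struct pdr_service. */
struct service {
	const char *path;
	struct service *next;
};

/* Walk with a dedicated iterator; the result stays NULL on a miss, so no
 * separate bool flag is needed and the cursor is never used post-loop. */
static struct service *find_service(struct service *head, const char *path)
{
	struct service *found = NULL, *iter;

	for (iter = head; iter; iter = iter->next) {
		if (strcmp(iter->path, path))
			continue;
		found = iter;	/* remember the match... */
		break;		/* ...and stop, as the kernel loop does */
	}
	return found;		/* NULL means "not found" */
}
```

The kernel change is part of a tree-wide effort to avoid dereferencing a `list_for_each_entry()` iterator after loop exit; this sketch only illustrates the control-flow shape.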
+10 -10
drivers/soc/qcom/pdr_internal.h
···
	u32 instance;
 };

-struct qmi_elem_info servreg_location_entry_ei[] = {
+static struct qmi_elem_info servreg_location_entry_ei[] = {
	{
		.data_type = QMI_STRING,
		.elem_len = SERVREG_NAME_LENGTH + 1,
···
	u32 domain_offset;
 };

-struct qmi_elem_info servreg_get_domain_list_req_ei[] = {
+static struct qmi_elem_info servreg_get_domain_list_req_ei[] = {
	{
		.data_type = QMI_STRING,
		.elem_len = SERVREG_NAME_LENGTH + 1,
···
	struct servreg_location_entry domain_list[SERVREG_DOMAIN_LIST_LENGTH];
 };

-struct qmi_elem_info servreg_get_domain_list_resp_ei[] = {
+static struct qmi_elem_info servreg_get_domain_list_resp_ei[] = {
	{
		.data_type = QMI_STRUCT,
		.elem_len = 1,
···
	char service_path[SERVREG_NAME_LENGTH + 1];
 };

-struct qmi_elem_info servreg_register_listener_req_ei[] = {
+static struct qmi_elem_info servreg_register_listener_req_ei[] = {
	{
		.data_type = QMI_UNSIGNED_1_BYTE,
		.elem_len = 1,
···
	enum servreg_service_state curr_state;
 };

-struct qmi_elem_info servreg_register_listener_resp_ei[] = {
+static struct qmi_elem_info servreg_register_listener_resp_ei[] = {
	{
		.data_type = QMI_STRUCT,
		.elem_len = 1,
···
	char service_path[SERVREG_NAME_LENGTH + 1];
 };

-struct qmi_elem_info servreg_restart_pd_req_ei[] = {
+static struct qmi_elem_info servreg_restart_pd_req_ei[] = {
	{
		.data_type = QMI_STRING,
		.elem_len = SERVREG_NAME_LENGTH + 1,
···
	struct qmi_response_type_v01 resp;
 };

-struct qmi_elem_info servreg_restart_pd_resp_ei[] = {
+static struct qmi_elem_info servreg_restart_pd_resp_ei[] = {
	{
		.data_type = QMI_STRUCT,
		.elem_len = 1,
···
	u16 transaction_id;
 };

-struct qmi_elem_info servreg_state_updated_ind_ei[] = {
+static struct qmi_elem_info servreg_state_updated_ind_ei[] = {
	{
		.data_type = QMI_SIGNED_4_BYTE_ENUM,
		.elem_len = 1,
···
	u16 transaction_id;
 };

-struct qmi_elem_info servreg_set_ack_req_ei[] = {
+static struct qmi_elem_info servreg_set_ack_req_ei[] = {
	{
		.data_type = QMI_STRING,
		.elem_len = SERVREG_NAME_LENGTH + 1,
···
	struct qmi_response_type_v01 resp;
 };

-struct qmi_elem_info servreg_set_ack_resp_ei[] = {
+static struct qmi_elem_info servreg_set_ack_resp_ei[] = {
	{
		.data_type = QMI_STRUCT,
		.elem_len = 1,
+70 -3
drivers/soc/qcom/rpmhpd.c
···
	.res_name = "mxc.lvl",
 };

+static struct rpmhpd nsp = {
+	.pd = { .name = "nsp", },
+	.res_name = "nsp.lvl",
+};
+
+static struct rpmhpd qphy = {
+	.pd = { .name = "qphy", },
+	.res_name = "qphy.lvl",
+};
+
+/* SA8540P RPMH powerdomains */
+static struct rpmhpd *sa8540p_rpmhpds[] = {
+	[SC8280XP_CX] = &cx,
+	[SC8280XP_CX_AO] = &cx_ao,
+	[SC8280XP_EBI] = &ebi,
+	[SC8280XP_GFX] = &gfx,
+	[SC8280XP_LCX] = &lcx,
+	[SC8280XP_LMX] = &lmx,
+	[SC8280XP_MMCX] = &mmcx,
+	[SC8280XP_MMCX_AO] = &mmcx_ao,
+	[SC8280XP_MX] = &mx,
+	[SC8280XP_MX_AO] = &mx_ao,
+	[SC8280XP_NSP] = &nsp,
+};
+
+static const struct rpmhpd_desc sa8540p_desc = {
+	.rpmhpds = sa8540p_rpmhpds,
+	.num_pds = ARRAY_SIZE(sa8540p_rpmhpds),
+};
+
 /* SDM845 RPMH powerdomains */
 static struct rpmhpd *sdm845_rpmhpds[] = {
	[SDM845_CX] = &cx_w_mx_parent,
···
 static const struct rpmhpd_desc sdx55_desc = {
	.rpmhpds = sdx55_rpmhpds,
	.num_pds = ARRAY_SIZE(sdx55_rpmhpds),
+};
+
+/* SDX65 RPMH powerdomains */
+static struct rpmhpd *sdx65_rpmhpds[] = {
+	[SDX65_CX] = &cx_w_mx_parent,
+	[SDX65_CX_AO] = &cx_ao_w_mx_parent,
+	[SDX65_MSS] = &mss,
+	[SDX65_MX] = &mx,
+	[SDX65_MX_AO] = &mx_ao,
+	[SDX65_MXC] = &mxc,
+};
+
+static const struct rpmhpd_desc sdx65_desc = {
+	.rpmhpds = sdx65_rpmhpds,
+	.num_pds = ARRAY_SIZE(sdx65_rpmhpds),
 };

 /* SM6350 RPMH powerdomains */
···
	.num_pds = ARRAY_SIZE(sc8180x_rpmhpds),
 };

+/* SC8280xp RPMH powerdomains */
+static struct rpmhpd *sc8280xp_rpmhpds[] = {
+	[SC8280XP_CX] = &cx,
+	[SC8280XP_CX_AO] = &cx_ao,
+	[SC8280XP_EBI] = &ebi,
+	[SC8280XP_GFX] = &gfx,
+	[SC8280XP_LCX] = &lcx,
+	[SC8280XP_LMX] = &lmx,
+	[SC8280XP_MMCX] = &mmcx,
+	[SC8280XP_MMCX_AO] = &mmcx_ao,
+	[SC8280XP_MX] = &mx,
+	[SC8280XP_MX_AO] = &mx_ao,
+	[SC8280XP_NSP] = &nsp,
+	[SC8280XP_QPHY] = &qphy,
+};
+
+static const struct rpmhpd_desc sc8280xp_desc = {
+	.rpmhpds = sc8280xp_rpmhpds,
+	.num_pds = ARRAY_SIZE(sc8280xp_rpmhpds),
+};
+
 static const struct of_device_id rpmhpd_match_table[] = {
+	{ .compatible = "qcom,sa8540p-rpmhpd", .data = &sa8540p_desc },
	{ .compatible = "qcom,sc7180-rpmhpd", .data = &sc7180_desc },
	{ .compatible = "qcom,sc7280-rpmhpd", .data = &sc7280_desc },
	{ .compatible = "qcom,sc8180x-rpmhpd", .data = &sc8180x_desc },
+	{ .compatible = "qcom,sc8280xp-rpmhpd", .data = &sc8280xp_desc },
	{ .compatible = "qcom,sdm845-rpmhpd", .data = &sdm845_desc },
	{ .compatible = "qcom,sdx55-rpmhpd", .data = &sdx55_desc},
+	{ .compatible = "qcom,sdx65-rpmhpd", .data = &sdx65_desc},
	{ .compatible = "qcom,sm6350-rpmhpd", .data = &sm6350_desc },
	{ .compatible = "qcom,sm8150-rpmhpd", .data = &sm8150_desc },
	{ .compatible = "qcom,sm8250-rpmhpd", .data = &sm8250_desc },
···
	data->num_domains = num_pds;

	for (i = 0; i < num_pds; i++) {
-		if (!rpmhpds[i]) {
-			dev_warn(dev, "rpmhpds[%d] is empty\n", i);
+		if (!rpmhpds[i])
			continue;
-		}

		rpmhpds[i]->dev = dev;
		rpmhpds[i]->addr = cmd_db_read_addr(rpmhpds[i]->res_name);
+233 -72
drivers/soc/qcom/smem.c
···
	__le32 reserved[3];
 };

+/**
+ * struct smem_partition - describes smem partition
+ * @virt_base: starting virtual address of partition
+ * @phys_base: starting physical address of partition
+ * @cacheline: alignment for "cached" entries
+ * @size: size of partition
+ */
+struct smem_partition {
+	void __iomem *virt_base;
+	phys_addr_t phys_base;
+	size_t cacheline;
+	size_t size;
+};
+
 static const u8 SMEM_PART_MAGIC[] = { 0x24, 0x50, 0x52, 0x54 };

 /**
···
  * struct qcom_smem - device data for the smem device
  * @dev:	device pointer
  * @hwlock:	reference to a hwspinlock
- * @global_partition:	pointer to global partition when in use
- * @global_cacheline:	cacheline size for global partition
- * @partitions:	list of pointers to partitions affecting the current
- *		processor/host
- * @cacheline:	list of cacheline sizes for each host
+ * @ptable: virtual base of partition table
+ * @global_partition: describes for global partition when in use
+ * @partitions: list of partitions of current processor/host
  * @item_count: max accepted item number
  * @socinfo:	platform device pointer
  * @num_regions: number of @regions
···

	struct hwspinlock *hwlock;

-	struct smem_partition_header *global_partition;
-	size_t global_cacheline;
-	struct smem_partition_header *partitions[SMEM_HOST_COUNT];
-	size_t cacheline[SMEM_HOST_COUNT];
	u32 item_count;
	struct platform_device *socinfo;
+	struct smem_ptable *ptable;
+	struct smem_partition global_partition;
+	struct smem_partition partitions[SMEM_HOST_COUNT];

	unsigned num_regions;
	struct smem_region regions[];
···
 #define HWSPINLOCK_TIMEOUT 1000

 static int qcom_smem_alloc_private(struct qcom_smem *smem,
-				   struct smem_partition_header *phdr,
+				   struct smem_partition *part,
				   unsigned item,
				   size_t size)
 {
	struct smem_private_entry *hdr, *end;
+	struct smem_partition_header *phdr;
	size_t alloc_size;
	void *cached;
+	void *p_end;
+
+	phdr = (struct smem_partition_header __force *)part->virt_base;
+	p_end = (void *)phdr + part->size;

	hdr = phdr_to_first_uncached_entry(phdr);
	end = phdr_to_last_uncached_entry(phdr);
	cached = phdr_to_last_cached_entry(phdr);
+
+	if (WARN_ON((void *)end > p_end || cached > p_end))
+		return -EINVAL;

	while (hdr < end) {
		if (hdr->canary != SMEM_PRIVATE_CANARY)
···

		hdr = uncached_entry_next(hdr);
	}
+
+	if (WARN_ON((void *)hdr > p_end))
+		return -EINVAL;

	/* Check that we don't grow into the cached region */
	alloc_size = sizeof(*hdr) + ALIGN(size, 8);
···
  */
 int qcom_smem_alloc(unsigned host, unsigned item, size_t size)
 {
-	struct smem_partition_header *phdr;
+	struct smem_partition *part;
	unsigned long flags;
	int ret;
···
	if (ret)
		return ret;

-	if (host < SMEM_HOST_COUNT && __smem->partitions[host]) {
-		phdr = __smem->partitions[host];
-		ret = qcom_smem_alloc_private(__smem, phdr, item, size);
-	} else if (__smem->global_partition) {
-		phdr = __smem->global_partition;
-		ret = qcom_smem_alloc_private(__smem, phdr, item, size);
+	if (host < SMEM_HOST_COUNT && __smem->partitions[host].virt_base) {
+		part = &__smem->partitions[host];
+		ret = qcom_smem_alloc_private(__smem, part, item, size);
+	} else if (__smem->global_partition.virt_base) {
+		part = &__smem->global_partition;
+		ret = qcom_smem_alloc_private(__smem, part, item, size);
	} else {
		ret = qcom_smem_alloc_global(__smem, item, size);
	}
···
	struct smem_header *header;
	struct smem_region *region;
	struct smem_global_entry *entry;
+	u64 entry_offset;
+	u32 e_size;
	u32 aux_base;
	unsigned i;
···
		region = &smem->regions[i];

		if ((u32)region->aux_base == aux_base || !aux_base) {
+			e_size = le32_to_cpu(entry->size);
+			entry_offset = le32_to_cpu(entry->offset);
+
+			if (WARN_ON(e_size + entry_offset > region->size))
+				return ERR_PTR(-EINVAL);
+
			if (size != NULL)
-				*size = le32_to_cpu(entry->size);
-			return region->virt_base + le32_to_cpu(entry->offset);
+				*size = e_size;
+
+			return region->virt_base + entry_offset;
		}
	}
···
 }

 static void *qcom_smem_get_private(struct qcom_smem *smem,
-				   struct smem_partition_header *phdr,
-				   size_t cacheline,
+				   struct smem_partition *part,
				   unsigned item,
				   size_t *size)
 {
	struct smem_private_entry *e, *end;
+	struct smem_partition_header *phdr;
+	void *item_ptr, *p_end;
+	u32 padding_data;
+	u32 e_size;
+
+	phdr = (struct smem_partition_header __force *)part->virt_base;
+	p_end = (void *)phdr + part->size;

	e = phdr_to_first_uncached_entry(phdr);
	end = phdr_to_last_uncached_entry(phdr);
···
			goto invalid_canary;

		if (le16_to_cpu(e->item) == item) {
-			if (size != NULL)
-				*size = le32_to_cpu(e->size) -
-					le16_to_cpu(e->padding_data);
+			if (size != NULL) {
+				e_size = le32_to_cpu(e->size);
+				padding_data = le16_to_cpu(e->padding_data);

-			return uncached_entry_to_item(e);
+				if (WARN_ON(e_size > part->size || padding_data > e_size))
+					return ERR_PTR(-EINVAL);
+
+				*size = e_size - padding_data;
+			}
+
+			item_ptr = uncached_entry_to_item(e);
+			if (WARN_ON(item_ptr > p_end))
+				return ERR_PTR(-EINVAL);
+
+			return item_ptr;
		}

		e = uncached_entry_next(e);
	}

+	if (WARN_ON((void *)e > p_end))
+		return ERR_PTR(-EINVAL);
+
	/* Item was not found in the uncached list, search the cached list */

-	e = phdr_to_first_cached_entry(phdr, cacheline);
+	e = phdr_to_first_cached_entry(phdr, part->cacheline);
	end = phdr_to_last_cached_entry(phdr);
+
+	if (WARN_ON((void *)e < (void *)phdr || (void *)end > p_end))
+		return ERR_PTR(-EINVAL);

	while (e > end) {
		if (e->canary != SMEM_PRIVATE_CANARY)
			goto invalid_canary;

		if (le16_to_cpu(e->item) == item) {
-			if (size != NULL)
-				*size = le32_to_cpu(e->size) -
-					le16_to_cpu(e->padding_data);
+			if (size != NULL) {
+				e_size = le32_to_cpu(e->size);
+				padding_data = le16_to_cpu(e->padding_data);

-			return cached_entry_to_item(e);
+				if (WARN_ON(e_size > part->size || padding_data > e_size))
+					return ERR_PTR(-EINVAL);
+
+				*size = e_size - padding_data;
+			}
+
+			item_ptr = cached_entry_to_item(e);
+			if (WARN_ON(item_ptr < (void *)phdr))
+				return ERR_PTR(-EINVAL);
+
+			return item_ptr;
		}

-		e = cached_entry_next(e, cacheline);
+		e = cached_entry_next(e, part->cacheline);
	}
+
+	if (WARN_ON((void *)e < (void *)phdr))
+		return ERR_PTR(-EINVAL);

	return ERR_PTR(-ENOENT);

···
  */
 void *qcom_smem_get(unsigned host, unsigned item, size_t *size)
 {
-	struct smem_partition_header *phdr;
+	struct smem_partition *part;
	unsigned long flags;
-	size_t cacheln;
	int ret;
	void *ptr = ERR_PTR(-EPROBE_DEFER);
···
	if (ret)
		return ERR_PTR(ret);

-	if (host < SMEM_HOST_COUNT && __smem->partitions[host]) {
-		phdr = __smem->partitions[host];
-		cacheln = __smem->cacheline[host];
-		ptr = qcom_smem_get_private(__smem, phdr, cacheln, item, size);
-	} else if (__smem->global_partition) {
-		phdr = __smem->global_partition;
-		cacheln = __smem->global_cacheline;
-		ptr = qcom_smem_get_private(__smem, phdr, cacheln, item, size);
+	if (host < SMEM_HOST_COUNT && __smem->partitions[host].virt_base) {
+		part = &__smem->partitions[host];
+		ptr = qcom_smem_get_private(__smem, part, item, size);
+	} else if (__smem->global_partition.virt_base) {
+		part = &__smem->global_partition;
+		ptr = qcom_smem_get_private(__smem, part, item, size);
	} else {
		ptr = qcom_smem_get_global(__smem, item, size);
	}
···
  */
 int qcom_smem_get_free_space(unsigned host)
 {
+	struct smem_partition *part;
	struct smem_partition_header *phdr;
	struct smem_header *header;
	unsigned ret;
···
	if (!__smem)
		return -EPROBE_DEFER;

-	if (host < SMEM_HOST_COUNT && __smem->partitions[host]) {
-		phdr = __smem->partitions[host];
+	if (host < SMEM_HOST_COUNT && __smem->partitions[host].virt_base) {
+		part = &__smem->partitions[host];
+		phdr = part->virt_base;
		ret = le32_to_cpu(phdr->offset_free_cached) -
		      le32_to_cpu(phdr->offset_free_uncached);
-	} else if (__smem->global_partition) {
-		phdr = __smem->global_partition;
+
+		if (ret > le32_to_cpu(part->size))
+			return -EINVAL;
+	} else if (__smem->global_partition.virt_base) {
+		part = &__smem->global_partition;
+		phdr = part->virt_base;
		ret = le32_to_cpu(phdr->offset_free_cached) -
		      le32_to_cpu(phdr->offset_free_uncached);
+
+		if (ret > le32_to_cpu(part->size))
+			return -EINVAL;
	} else {
		header = __smem->regions[0].virt_base;
		ret = le32_to_cpu(header->available);
+
+		if (ret > __smem->regions[0].size)
+			return -EINVAL;
	}

	return ret;
 }
 EXPORT_SYMBOL(qcom_smem_get_free_space);
+
+static bool addr_in_range(void __iomem *base, size_t size, void *addr)
+{
+	return base && (addr >= base && addr < base + size);
+}

 /**
  * qcom_smem_virt_to_phys() - return the physical address associated
···
  */
 phys_addr_t qcom_smem_virt_to_phys(void *p)
 {
-	unsigned i;
+	struct smem_partition *part;
+	struct smem_region *area;
+	u64 offset;
+	u32 i;
+
+	for (i = 0; i < SMEM_HOST_COUNT; i++) {
+		part = &__smem->partitions[i];
+
+		if (addr_in_range(part->virt_base, part->size, p)) {
+			offset = p - part->virt_base;
+
+			return (phys_addr_t)part->phys_base + offset;
+		}
+	}
+
+	part = &__smem->global_partition;
+
+	if (addr_in_range(part->virt_base, part->size, p)) {
+		offset = p - part->virt_base;
+
+		return (phys_addr_t)part->phys_base + offset;
+	}

	for (i = 0; i < __smem->num_regions; i++) {
-		struct smem_region *region = &__smem->regions[i];
+		area = &__smem->regions[i];

-		if (p < region->virt_base)
-			continue;
-		if (p < region->virt_base + region->size) {
-			u64 offset = p - region->virt_base;
+		if (addr_in_range(area->virt_base, area->size, p)) {
+			offset = p - area->virt_base;

-			return region->aux_base + offset;
+			return (phys_addr_t)area->aux_base + offset;
		}
	}
···
	struct smem_ptable *ptable;
	u32 version;

-	ptable = smem->regions[0].virt_base + smem->regions[0].size - SZ_4K;
+	ptable = smem->ptable;
	if (memcmp(ptable->magic, SMEM_PTABLE_MAGIC, sizeof(ptable->magic)))
		return ERR_PTR(-ENOENT);
···
		struct smem_ptable_entry *entry, u16 host0, u16 host1)
 {
	struct smem_partition_header *header;
+	u32 phys_addr;
	u32 size;

-	header = smem->regions[0].virt_base + le32_to_cpu(entry->offset);
+	phys_addr = smem->regions[0].aux_base + le32_to_cpu(entry->offset);
+	header = devm_ioremap_wc(smem->dev, phys_addr, le32_to_cpu(entry->size));
+
+	if (!header)
+		return NULL;

	if (memcmp(header->magic, SMEM_PART_MAGIC, sizeof(header->magic))) {
		dev_err(smem->dev, "bad partition magic %4ph\n", header->magic);
···
	bool found = false;
	int i;

-	if (smem->global_partition) {
+	if (smem->global_partition.virt_base) {
		dev_err(smem->dev, "Already found the global partition\n");
		return -EINVAL;
	}
···
	if (!header)
		return -EINVAL;

-	smem->global_partition = header;
-	smem->global_cacheline = le32_to_cpu(entry->cacheline);
+	smem->global_partition.virt_base = (void __iomem *)header;
+	smem->global_partition.phys_base = smem->regions[0].aux_base +
+						le32_to_cpu(entry->offset);
+	smem->global_partition.size = le32_to_cpu(entry->size);
+	smem->global_partition.cacheline = le32_to_cpu(entry->cacheline);

	return 0;
 }
···
			return -EINVAL;
		}

-		if (smem->partitions[remote_host]) {
+		if (smem->partitions[remote_host].virt_base) {
			dev_err(smem->dev, "duplicate host %hu\n", remote_host);
			return -EINVAL;
		}
···
		if (!header)
			return -EINVAL;

-		smem->partitions[remote_host] = header;
-		smem->cacheline[remote_host] = le32_to_cpu(entry->cacheline);
+		smem->partitions[remote_host].virt_base = (void __iomem *)header;
+		smem->partitions[remote_host].phys_base = smem->regions[0].aux_base +
+							le32_to_cpu(entry->offset);
+		smem->partitions[remote_host].size = le32_to_cpu(entry->size);
+		smem->partitions[remote_host].cacheline = le32_to_cpu(entry->cacheline);
	}
+
+	return 0;
+}
+
+static int qcom_smem_map_toc(struct qcom_smem *smem, struct smem_region *region)
+{
+	u32 ptable_start;
+
+	/* map starting 4K for smem header */
+	region->virt_base = devm_ioremap_wc(smem->dev, region->aux_base, SZ_4K);
+	ptable_start = region->aux_base + region->size - SZ_4K;
+	/* map last 4k for toc */
+	smem->ptable = devm_ioremap_wc(smem->dev, ptable_start, SZ_4K);
+
+	if (!region->virt_base || !smem->ptable)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static int qcom_smem_map_global(struct qcom_smem *smem, u32 size)
+{
+	u32 phys_addr;
+
+	phys_addr = smem->regions[0].aux_base;
+
+	smem->regions[0].size = size;
+	smem->regions[0].virt_base = devm_ioremap_wc(smem->dev, phys_addr, size);
+
+	if (!smem->regions[0].virt_base)
+		return -ENOMEM;

	return 0;
 }
···
	struct smem_header *header;
	struct reserved_mem *rmem;
	struct qcom_smem *smem;
+	unsigned long flags;
	size_t array_size;
	int num_regions;
	int hwlock_id;
	u32 version;
+	u32 size;
	int ret;
	int i;
···
		return ret;
	}

-	for (i = 0; i < num_regions; i++) {
+
+	ret = qcom_smem_map_toc(smem, &smem->regions[0]);
+	if (ret)
+		return ret;
+
+	for (i = 1; i < num_regions; i++) {
		smem->regions[i].virt_base = devm_ioremap_wc(&pdev->dev,
							     smem->regions[i].aux_base,
							     smem->regions[i].size);
···
		return -EINVAL;
	}

+	hwlock_id = of_hwspin_lock_get_id(pdev->dev.of_node, 0);
+	if (hwlock_id < 0) {
+		if (hwlock_id != -EPROBE_DEFER)
+			dev_err(&pdev->dev, "failed to retrieve hwlock\n");
+		return hwlock_id;
+	}
+
+	smem->hwlock = hwspin_lock_request_specific(hwlock_id);
+	if (!smem->hwlock)
+		return -ENXIO;
+
+	ret = hwspin_lock_timeout_irqsave(smem->hwlock, HWSPINLOCK_TIMEOUT, &flags);
+	if (ret)
+		return ret;
+	size = readl_relaxed(&header->available) + readl_relaxed(&header->free_offset);
+	hwspin_unlock_irqrestore(smem->hwlock, &flags);
+
	version = qcom_smem_get_sbl_version(smem);
+	/*
+	 * smem header mapping is required only in heap version scheme, so unmap
+	 * it here. It will be remapped in qcom_smem_map_global() when whole
+	 * partition is mapped again.
+	 */
+	devm_iounmap(smem->dev, smem->regions[0].virt_base);
	switch (version >> 16) {
	case SMEM_GLOBAL_PART_VERSION:
		ret = qcom_smem_set_global_partition(smem);
···
		smem->item_count = qcom_smem_get_item_count(smem);
		break;
	case SMEM_GLOBAL_HEAP_VERSION:
+		qcom_smem_map_global(smem, size);
		smem->item_count = SMEM_ITEM_COUNT;
		break;
	default:
···
	ret = qcom_smem_enumerate_partitions(smem, SMEM_HOST_APPS);
	if (ret < 0 && ret != -ENOENT)
		return ret;
-
-	hwlock_id = of_hwspin_lock_get_id(pdev->dev.of_node, 0);
-	if (hwlock_id < 0) {
-		if (hwlock_id != -EPROBE_DEFER)
-			dev_err(&pdev->dev, "failed to retrieve hwlock\n");
-		return hwlock_id;
-	}
-
-	smem->hwlock = hwspin_lock_request_specific(hwlock_id);
-	if (!smem->hwlock)
-		return -ENXIO;

	__smem = smem;

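Several of the smem.c hunks above validate pointers against a partition's window before using them, via the new `addr_in_range()` helper. A host-side restatement of that check, using plain `void *` in place of the kernel's `void __iomem *`:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal restatement of the addr_in_range() helper the diff adds: an
 * address belongs to a window only if the window exists (base != NULL)
 * and the address falls inside the half-open range [base, base + size). */
static int addr_in_range(void *base, size_t size, void *addr)
{
	return base && addr >= base && addr < (char *)base + size;
}
```

Checking `base` first is what lets callers probe unused `smem_partition` slots (whose `virt_base` is NULL) without a separate existence test.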
+1
drivers/soc/qcom/smp2p.c
···
	}

	smp2p->ipc_regmap = syscon_node_to_regmap(syscon);
+	of_node_put(syscon);
	if (IS_ERR(smp2p->ipc_regmap))
		return PTR_ERR(smp2p->ipc_regmap);

+1
drivers/soc/qcom/smsm.c
···
		return 0;

	host->ipc_regmap = syscon_node_to_regmap(syscon);
+	of_node_put(syscon);
	if (IS_ERR(host->ipc_regmap))
		return PTR_ERR(host->ipc_regmap);

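The smp2p.c and smsm.c hunks both plug the same leak: the node handle obtained earlier holds a reference, `syscon_node_to_regmap()` takes its own reference internally, so the caller must drop the one it holds once the regmap is resolved. A toy refcount model of that ownership rule, with counters that are illustrative only and not the kernel's implementation:

```c
#include <assert.h>

/* Toy stand-in for a refcounted device-tree node. */
struct node { int refcount; };

static struct node *node_get(struct node *n) { n->refcount++; return n; }
static void node_put(struct node *n) { n->refcount--; }

/* A helper that keeps its own reference, as syscon_node_to_regmap() does. */
static void helper_takes_own_ref(struct node *n) { node_get(n); }

/* The fixed call sequence: acquire, hand off, then drop our reference. */
static int lookup_with_put(struct node *n)
{
	node_get(n);			/* caller's reference (the lookup) */
	helper_takes_own_ref(n);	/* helper's internal reference */
	node_put(n);			/* the added of_node_put(): drop ours */
	return n->refcount;		/* only the helper's reference remains */
}
```

Without the final `node_put()` the count would end one higher per call, which is exactly the slow node leak the two one-line fixes address.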
+14 -12
drivers/soc/qcom/socinfo.c
···
	{ 184, "APQ8074" },
	{ 185, "MSM8274" },
	{ 186, "MSM8674" },
-	{ 194, "MSM8974PRO" },
+	{ 194, "MSM8974PRO-AC" },
	{ 198, "MSM8126" },
	{ 199, "APQ8026" },
	{ 200, "MSM8926" },
	{ 205, "MSM8326" },
	{ 206, "MSM8916" },
	{ 207, "MSM8994" },
-	{ 208, "APQ8074-AA" },
-	{ 209, "APQ8074-AB" },
-	{ 210, "APQ8074PRO" },
-	{ 211, "MSM8274-AA" },
-	{ 212, "MSM8274-AB" },
-	{ 213, "MSM8274PRO" },
-	{ 214, "MSM8674-AA" },
-	{ 215, "MSM8674-AB" },
-	{ 216, "MSM8674PRO" },
-	{ 217, "MSM8974-AA" },
-	{ 218, "MSM8974-AB" },
+	{ 208, "APQ8074PRO-AA" },
+	{ 209, "APQ8074PRO-AB" },
+	{ 210, "APQ8074PRO-AC" },
+	{ 211, "MSM8274PRO-AA" },
+	{ 212, "MSM8274PRO-AB" },
+	{ 213, "MSM8274PRO-AC" },
+	{ 214, "MSM8674PRO-AA" },
+	{ 215, "MSM8674PRO-AB" },
+	{ 216, "MSM8674PRO-AC" },
+	{ 217, "MSM8974PRO-AA" },
+	{ 218, "MSM8974PRO-AB" },
	{ 219, "APQ8028" },
	{ 220, "MSM8128" },
	{ 221, "MSM8228" },
···
	{ 459, "SM7225" },
	{ 460, "SA8540P" },
	{ 480, "SM8450" },
+	{ 482, "SM8450" },
+	{ 487, "SC7280" },
 };

 static const char *socinfo_machine(struct device *dev, unsigned int id)
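The socinfo.c hunk above is a flat id-to-name table where distinct numeric SoC IDs may share one marketing name (as the two SM8450 entries, 480 and 482, do). A userspace sketch of the linear lookup such a table implies; the function name is invented and the table is truncated to the entries the diff adds:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Numeric SoC ID -> marketing name, as in the kernel's soc_id[] table. */
struct soc_id {
	unsigned int id;
	const char *name;
};

static const struct soc_id soc_id[] = {
	{ 480, "SM8450" },
	{ 482, "SM8450" },	/* two IDs, one name, is legal */
	{ 487, "SC7280" },
};

/* Hypothetical helper: first match wins, NULL for an unknown ID. */
static const char *socinfo_machine_name(unsigned int id)
{
	size_t i;

	for (i = 0; i < sizeof(soc_id) / sizeof(soc_id[0]); i++)
		if (soc_id[i].id == id)
			return soc_id[i].name;
	return NULL;
}
```

A linear scan is fine here because the real table has a few hundred entries and is consulted once at probe time.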
+26
drivers/soc/renesas/Kconfig
···

 config ARCH_RZN1
	bool
+	select PM
+	select PM_GENERIC_DOMAINS
	select ARM_AMBA

 if ARM && ARCH_RENESAS
···
	help
	  This enables support for the Renesas R-Car V3U SoC.

+config ARCH_R8A779G0
+	bool "ARM64 Platform support for R-Car V4H"
+	select ARCH_RCAR_GEN3
+	select SYSC_R8A779G0
+	help
+	  This enables support for the Renesas R-Car V4H SoC.
+
 config ARCH_R8A774C0
	bool "ARM64 Platform support for RZ/G2E"
	select ARCH_RCAR_GEN3
···
	help
	  This enables support for the Renesas RZ/G2N SoC.

+config ARCH_R9A07G043
+	bool "ARM64 Platform support for RZ/G2UL"
+	select ARCH_RZG2L
+	help
+	  This enables support for the Renesas RZ/G2UL SoC variants.
+
 config ARCH_R9A07G044
	bool "ARM64 Platform support for RZ/G2L"
	select ARCH_RZG2L
···
	select ARCH_RZG2L
	help
	  This enables support for the Renesas RZ/V2L SoC variants.
+
+config ARCH_R9A09G011
+	bool "ARM64 Platform support for RZ/V2M"
+	select PM
+	select PM_GENERIC_DOMAINS
+	help
+	  This enables support for the Renesas RZ/V2M SoC.

 endif # ARM64
···
 config SYSC_R8A779A0
	bool "System Controller support for R-Car V3U" if COMPILE_TEST
+	select SYSC_RCAR_GEN4
+
+config SYSC_R8A779G0
+	bool "System Controller support for R-Car V4H" if COMPILE_TEST
	select SYSC_RCAR_GEN4

 config SYSC_RMOBILE
+1
drivers/soc/renesas/Makefile
···
 obj-$(CONFIG_SYSC_R8A77995)	+= r8a77995-sysc.o
 obj-$(CONFIG_SYSC_R8A779A0)	+= r8a779a0-sysc.o
 obj-$(CONFIG_SYSC_R8A779F0)	+= r8a779f0-sysc.o
+obj-$(CONFIG_SYSC_R8A779G0)	+= r8a779g0-sysc.o
 ifdef CONFIG_SMP
 obj-$(CONFIG_ARCH_R9A06G032)	+= r9a06g032-smp.o
 endif
+62
drivers/soc/renesas/r8a779g0-sysc.c
···
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Renesas R-Car V4H System Controller
+ *
+ * Copyright (C) 2022 Renesas Electronics Corp.
+ */
+
+#include <linux/bits.h>
+#include <linux/clk/renesas.h>
+#include <linux/delay.h>
+#include <linux/err.h>
+#include <linux/io.h>
+#include <linux/iopoll.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/of_address.h>
+#include <linux/pm_domain.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/types.h>
+
+#include <dt-bindings/power/r8a779g0-sysc.h>
+
+#include "rcar-gen4-sysc.h"
+
+static struct rcar_gen4_sysc_area r8a779g0_areas[] __initdata = {
+	{ "always-on",	R8A779G0_PD_ALWAYS_ON, -1, PD_ALWAYS_ON },
+	{ "a3e0",	R8A779G0_PD_A3E0, R8A779G0_PD_ALWAYS_ON, PD_SCU },
+	{ "a2e0d0",	R8A779G0_PD_A2E0D0, R8A779G0_PD_A3E0, PD_SCU },
+	{ "a2e0d1",	R8A779G0_PD_A2E0D1, R8A779G0_PD_A3E0, PD_SCU },
+	{ "a1e0d0c0",	R8A779G0_PD_A1E0D0C0, R8A779G0_PD_A2E0D0, PD_CPU_NOCR },
+	{ "a1e0d0c1",	R8A779G0_PD_A1E0D0C1, R8A779G0_PD_A2E0D0, PD_CPU_NOCR },
+	{ "a1e0d1c0",	R8A779G0_PD_A1E0D1C0, R8A779G0_PD_A2E0D1, PD_CPU_NOCR },
+	{ "a1e0d1c1",	R8A779G0_PD_A1E0D1C1, R8A779G0_PD_A2E0D1, PD_CPU_NOCR },
+	{ "a33dga",	R8A779G0_PD_A33DGA, R8A779G0_PD_ALWAYS_ON },
+	{ "a23dgb",	R8A779G0_PD_A23DGB, R8A779G0_PD_A33DGA },
+	{ "a3vip0",	R8A779G0_PD_A3VIP0, R8A779G0_PD_ALWAYS_ON },
+	{ "a3vip1",	R8A779G0_PD_A3VIP1, R8A779G0_PD_ALWAYS_ON },
+	{ "a3vip2",	R8A779G0_PD_A3VIP2, R8A779G0_PD_ALWAYS_ON },
+	{ "a3isp0",	R8A779G0_PD_A3ISP0, R8A779G0_PD_ALWAYS_ON },
+	{ "a3isp1",	R8A779G0_PD_A3ISP1, R8A779G0_PD_ALWAYS_ON },
+	{ "a3ir",	R8A779G0_PD_A3IR, R8A779G0_PD_ALWAYS_ON },
+	{ "a2cn0",	R8A779G0_PD_A2CN0, R8A779G0_PD_A3IR },
+	{ "a1cnn0",	R8A779G0_PD_A1CNN0, R8A779G0_PD_A2CN0 },
+	{ "a1dsp0",	R8A779G0_PD_A1DSP0, R8A779G0_PD_A2CN0 },
+	{ "a1dsp1",	R8A779G0_PD_A1DSP1, R8A779G0_PD_A2CN0 },
+	{ "a1dsp2",	R8A779G0_PD_A1DSP2, R8A779G0_PD_A2CN0 },
+	{ "a1dsp3",	R8A779G0_PD_A1DSP3, R8A779G0_PD_A2CN0 },
+	{ "a2imp01",	R8A779G0_PD_A2IMP01, R8A779G0_PD_A3IR },
+	{ "a2imp23",	R8A779G0_PD_A2IMP23, R8A779G0_PD_A3IR },
+	{ "a2psc",	R8A779G0_PD_A2PSC, R8A779G0_PD_A3IR },
+	{ "a2dma",	R8A779G0_PD_A2DMA, R8A779G0_PD_A3IR },
+	{ "a2cv0",	R8A779G0_PD_A2CV0, R8A779G0_PD_A3IR },
+	{ "a2cv1",	R8A779G0_PD_A2CV1, R8A779G0_PD_A3IR },
+	{ "a2cv2",	R8A779G0_PD_A2CV2, R8A779G0_PD_A3IR },
+	{ "a2cv3",	R8A779G0_PD_A2CV3, R8A779G0_PD_A3IR },
+};
+
+const struct rcar_gen4_sysc_info r8a779g0_sysc_info __initconst = {
+	.areas = r8a779g0_areas,
+	.num_areas = ARRAY_SIZE(r8a779g0_areas),
+};
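The `r8a779g0_areas[]` table above encodes a power-domain tree: each area names its parent, with `-1` marking the always-on root. A small userspace sanity-check sketch of the invariant such a table relies on (every parent ID must itself be an entry); the IDs and names below are invented for illustration, not the real `R8A779G0_PD_*` values:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for struct rcar_gen4_sysc_area. */
struct area {
	const char *name;
	int id;
	int parent;	/* -1 for the always-on root */
};

static const struct area areas[] = {
	{ "always-on", 0, -1 },
	{ "a3e0",      1,  0 },
	{ "a2e0d0",    2,  1 },
};

/* Return 1 if every non-root parent ID resolves to a table entry. */
static int parents_resolve(const struct area *a, size_t n)
{
	size_t i, j;

	for (i = 0; i < n; i++) {
		if (a[i].parent == -1)
			continue;	/* root area has no parent */
		for (j = 0; j < n; j++)
			if (a[j].id == a[i].parent)
				break;
		if (j == n)
			return 0;	/* dangling parent reference */
	}
	return 1;
}
```

The kernel registers these areas as genpd parents and children at init time, so a dangling parent reference in the table would surface as a registration failure rather than pass silently.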
+3
drivers/soc/renesas/rcar-gen4-sysc.c
···
 #ifdef CONFIG_SYSC_R8A779F0
	{ .compatible = "renesas,r8a779f0-sysc", .data = &r8a779f0_sysc_info },
 #endif
+#ifdef CONFIG_SYSC_R8A779G0
+	{ .compatible = "renesas,r8a779g0-sysc", .data = &r8a779g0_sysc_info },
+#endif
	{ /* sentinel */ }
 };

+1
drivers/soc/renesas/rcar-gen4-sysc.h
···

 extern const struct rcar_gen4_sysc_info r8a779a0_sysc_info;
 extern const struct rcar_gen4_sysc_info r8a779f0_sysc_info;
+extern const struct rcar_gen4_sysc_info r8a779g0_sysc_info;

 #endif /* __SOC_RENESAS_RCAR_GEN4_SYSC_H__ */
+1
drivers/soc/renesas/rcar-rst.c
··· 103 103 /* R-Car Gen4 */ 104 104 { .compatible = "renesas,r8a779a0-rst", .data = &rcar_rst_gen4 }, 105 105 { .compatible = "renesas,r8a779f0-rst", .data = &rcar_rst_gen4 }, 106 + { .compatible = "renesas,r8a779g0-rst", .data = &rcar_rst_gen4 }, 106 107 { /* sentinel */ } 107 108 }; 108 109
+22 -1
drivers/soc/renesas/renesas-soc.c
··· 64 64 .name = "RZ/G2L", 65 65 }; 66 66 67 + static const struct renesas_family fam_rzg2ul __initconst __maybe_unused = { 68 + .name = "RZ/G2UL", 69 + }; 70 + 67 71 static const struct renesas_family fam_rzv2l __initconst __maybe_unused = { 68 72 .name = "RZ/V2L", 69 73 }; ··· 152 148 .id = 0x841c447, 153 149 }; 154 150 151 + static const struct renesas_soc soc_rz_g2ul __initconst __maybe_unused = { 152 + .family = &fam_rzg2ul, 153 + .id = 0x8450447, 154 + }; 155 + 155 156 static const struct renesas_soc soc_rz_v2l __initconst __maybe_unused = { 156 157 .family = &fam_rzv2l, 157 158 .id = 0x8447447, ··· 232 223 }; 233 224 234 225 static const struct renesas_soc soc_rcar_v3u __initconst __maybe_unused = { 235 - .family = &fam_rcar_gen3, 226 + .family = &fam_rcar_gen4, 236 227 .id = 0x59, 237 228 }; 238 229 239 230 static const struct renesas_soc soc_rcar_s4 __initconst __maybe_unused = { 240 231 .family = &fam_rcar_gen4, 241 232 .id = 0x5a, 233 + }; 234 + 235 + static const struct renesas_soc soc_rcar_v4h __initconst __maybe_unused = { 236 + .family = &fam_rcar_gen4, 237 + .id = 0x5c, 242 238 }; 243 239 244 240 static const struct renesas_soc soc_shmobile_ag5 __initconst __maybe_unused = { ··· 354 340 #ifdef CONFIG_ARCH_R8A779F0 355 341 { .compatible = "renesas,r8a779f0", .data = &soc_rcar_s4 }, 356 342 #endif 343 + #ifdef CONFIG_ARCH_R8A779G0 344 + { .compatible = "renesas,r8a779g0", .data = &soc_rcar_v4h }, 345 + #endif 346 + #if defined(CONFIG_ARCH_R9A07G043) 347 + { .compatible = "renesas,r9a07g043", .data = &soc_rz_g2ul }, 348 + #endif 357 349 #if defined(CONFIG_ARCH_R9A07G044) 358 350 { .compatible = "renesas,r9a07g044", .data = &soc_rz_g2l }, 359 351 #endif ··· 398 378 399 379 static const struct of_device_id renesas_ids[] __initconst = { 400 380 { .compatible = "renesas,bsid", .data = &id_bsid }, 381 + { .compatible = "renesas,r9a07g043-sysc", .data = &id_rzg2l }, 401 382 { .compatible = "renesas,r9a07g044-sysc", .data = &id_rzg2l }, 402 383 { .compatible 
= "renesas,r9a07g054-sysc", .data = &id_rzg2l }, 403 384 { .compatible = "renesas,prr", .data = &id_prr },
+12 -12
drivers/soc/rockchip/Kconfig
··· 23 23 voltage supplied by the regulators. 24 24 25 25 config ROCKCHIP_PM_DOMAINS 26 - bool "Rockchip generic power domain" 27 - depends on PM 28 - select PM_GENERIC_DOMAINS 29 - help 30 - Say y here to enable power domain support. 31 - In order to meet high performance and low power requirements, a power 32 - management unit is designed or saving power when RK3288 in low power 33 - mode. The RK3288 PMU is dedicated for managing the power of the whole chip. 26 + bool "Rockchip generic power domain" 27 + depends on PM 28 + select PM_GENERIC_DOMAINS 29 + help 30 + Say y here to enable power domain support. 31 + In order to meet high performance and low power requirements, a power 32 + management unit is designed or saving power when RK3288 in low power 33 + mode. The RK3288 PMU is dedicated for managing the power of the whole chip. 34 34 35 - If unsure, say N. 35 + If unsure, say N. 36 36 37 37 config ROCKCHIP_DTPM 38 38 tristate "Rockchip DTPM hierarchy" 39 39 depends on DTPM && m 40 40 help 41 - Describe the hierarchy for the Dynamic Thermal Power 42 - Management tree on this platform. That will create all the 43 - power capping capable devices. 41 + Describe the hierarchy for the Dynamic Thermal Power Management tree 42 + on this platform. That will create all the power capping capable 43 + devices. 44 44 45 45 endif
+17
drivers/soc/rockchip/grf.c
··· 108 108 .num_values = ARRAY_SIZE(rk3399_defaults), 109 109 }; 110 110 111 + #define RK3566_GRF_USB3OTG0_CON1 0x0104 112 + 113 + static const struct rockchip_grf_value rk3566_defaults[] __initconst = { 114 + { "usb3otg port switch", RK3566_GRF_USB3OTG0_CON1, HIWORD_UPDATE(0, 1, 12) }, 115 + { "usb3otg clock switch", RK3566_GRF_USB3OTG0_CON1, HIWORD_UPDATE(1, 1, 7) }, 116 + { "usb3otg disable usb3", RK3566_GRF_USB3OTG0_CON1, HIWORD_UPDATE(1, 1, 0) }, 117 + }; 118 + 119 + static const struct rockchip_grf_info rk3566_pipegrf __initconst = { 120 + .values = rk3566_defaults, 121 + .num_values = ARRAY_SIZE(rk3566_defaults), 122 + }; 123 + 124 + 111 125 static const struct of_device_id rockchip_grf_dt_match[] __initconst = { 112 126 { 113 127 .compatible = "rockchip,rk3036-grf", ··· 144 130 }, { 145 131 .compatible = "rockchip,rk3399-grf", 146 132 .data = (void *)&rk3399_grf, 133 + }, { 134 + .compatible = "rockchip,rk3566-pipe-grf", 135 + .data = (void *)&rk3566_pipegrf, 147 136 }, 148 137 { /* sentinel */ }, 149 138 };
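The rk3566 defaults above are built with the driver's HIWORD_UPDATE() macro, which exploits the Rockchip GRF convention that the upper 16 bits of a written word are a write-enable mask for the corresponding lower 16 bits. A minimal userspace sketch of that convention; `grf_write()` is a made-up model of what the hardware does with such a value, not a driver API:

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of the driver's HIWORD_UPDATE(val, mask, shift): place the
 * field value in the low halfword and the same mask, shifted up by 16,
 * in the high halfword as a write-enable. */
static inline uint32_t hiword_update(uint32_t val, uint32_t mask,
				     unsigned int shift)
{
	return (val << shift) | (mask << (shift + 16));
}

/* Hypothetical model of how a GRF register consumes a hiword write:
 * only bits whose write-enable is set in the high halfword change. */
static uint32_t grf_write(uint32_t reg, uint32_t hiword_val)
{
	uint32_t mask = hiword_val >> 16;

	return (reg & ~mask) | (hiword_val & mask & 0xffff);
}
```

The payoff is that a single store updates one field atomically, with no read-modify-write cycle and no locking against other users of the same register.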
+5 -5
drivers/soc/rockchip/pm_domains.c
··· 283 283 regmap_update_bits(pmu->regmap, pmu->info->req_offset, 284 284 pd_info->req_mask, idle ? -1U : 0); 285 285 286 - dsb(sy); 286 + wmb(); 287 287 288 288 /* Wait util idle_ack = 1 */ 289 289 target_ack = idle ? pd_info->ack_mask : 0; ··· 390 390 regmap_update_bits(pmu->regmap, pmu->info->pwr_offset, 391 391 pd->info->pwr_mask, on ? 0 : -1U); 392 392 393 - dsb(sy); 393 + wmb(); 394 394 395 395 if (readx_poll_timeout_atomic(rockchip_pmu_domain_is_on, pd, is_on, 396 396 is_on == on, 0, 10000)) { ··· 1186 1186 .name = "rockchip-pm-domain", 1187 1187 .of_match_table = rockchip_pm_domain_dt_match, 1188 1188 /* 1189 - * We can't forcibly eject devices form power domain, 1190 - * so we can't really remove power domains once they 1191 - * were added. 1189 + * We can't forcibly eject devices from the power 1190 + * domain, so we can't really remove power domains 1191 + * once they were added. 1192 1192 */ 1193 1193 .suppress_bind_attrs = true, 1194 1194 },
+1
drivers/soc/tegra/Kconfig
··· 146 146 select GENERIC_PINCONF 147 147 select PM_OPP 148 148 select PM_GENERIC_DOMAINS 149 + select REGMAP 149 150 150 151 config SOC_TEGRA_POWERGATE_BPMP 151 152 def_bool y
+4 -4
drivers/soc/tegra/fuse/fuse-tegra.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 2 /* 3 - * Copyright (c) 2013-2021, NVIDIA CORPORATION. All rights reserved. 3 + * Copyright (c) 2013-2022, NVIDIA CORPORATION. All rights reserved. 4 4 */ 5 5 6 6 #include <linux/clk.h> ··· 162 162 .bit_offset = 0, 163 163 .nbits = 32, 164 164 }, { 165 - .name = "gcplex-config-fuse", 165 + .name = "gpu-gcplex-config-fuse", 166 166 .offset = 0x1c8, 167 167 .bytes = 4, 168 168 .bit_offset = 0, ··· 186 186 .bit_offset = 0, 187 187 .nbits = 32, 188 188 }, { 189 - .name = "pdi0", 189 + .name = "gpu-pdi0", 190 190 .offset = 0x300, 191 191 .bytes = 4, 192 192 .bit_offset = 0, 193 193 .nbits = 32, 194 194 }, { 195 - .name = "pdi1", 195 + .name = "gpu-pdi1", 196 196 .offset = 0x304, 197 197 .bytes = 4, 198 198 .bit_offset = 0,
+16 -1
drivers/soc/tegra/fuse/fuse-tegra30.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 2 /* 3 - * Copyright (c) 2013-2014, NVIDIA CORPORATION. All rights reserved. 3 + * Copyright (c) 2013-2022, NVIDIA CORPORATION. All rights reserved. 4 4 */ 5 5 6 6 #include <linux/device.h> ··· 344 344 .cell_name = "xusb-pad-calibration-ext", 345 345 .dev_id = "3520000.padctl", 346 346 .con_id = "calibration-ext", 347 + }, { 348 + .nvmem_name = "fuse", 349 + .cell_name = "gpu-gcplex-config-fuse", 350 + .dev_id = "17000000.gpu", 351 + .con_id = "gcplex-config-fuse", 352 + }, { 353 + .nvmem_name = "fuse", 354 + .cell_name = "gpu-pdi0", 355 + .dev_id = "17000000.gpu", 356 + .con_id = "pdi0", 357 + }, { 358 + .nvmem_name = "fuse", 359 + .cell_name = "gpu-pdi1", 360 + .dev_id = "17000000.gpu", 361 + .con_id = "pdi1", 347 362 }, 348 363 }; 349 364
+27 -8
drivers/soc/tegra/pmc.c
··· 394 394 * @domain: IRQ domain provided by the PMC 395 395 * @irq: chip implementation for the IRQ domain 396 396 * @clk_nb: pclk clock changes handler 397 + * @core_domain_state_synced: flag marking the core domain's state as synced 398 + * @core_domain_registered: flag marking the core domain as registered 397 399 */ 398 400 struct tegra_pmc { 399 401 struct device *dev; ··· 3768 3766 }; 3769 3767 3770 3768 static const char * const tegra234_reset_sources[] = { 3771 - "SYS_RESET_N", 3769 + "SYS_RESET_N", /* 0x0 */ 3772 3770 "AOWDT", 3773 3771 "BCCPLEXWDT", 3774 3772 "BPMPWDT", ··· 3776 3774 "SPEWDT", 3777 3775 "APEWDT", 3778 3776 "LCCPLEXWDT", 3779 - "SENSOR", 3780 - "AOTAG", 3781 - "VFSENSOR", 3777 + "SENSOR", /* 0x8 */ 3778 + NULL, 3779 + NULL, 3782 3780 "MAINSWRST", 3783 3781 "SC7", 3784 3782 "HSM", 3785 - "CSITE", 3783 + NULL, 3786 3784 "RCEWDT", 3787 - "PVA0WDT", 3788 - "PVA1WDT", 3789 - "L1A_ASYNC", 3785 + NULL, /* 0x10 */ 3786 + NULL, 3787 + NULL, 3790 3788 "BPMPBOOT", 3791 3789 "FUSECRC", 3790 + "DCEWDT", 3791 + "PSCWDT", 3792 + "PSC", 3793 + "CSITE_SW", /* 0x18 */ 3794 + "POD", 3795 + "SCPM", 3796 + "VREFRO_POWERBAD", 3797 + "VMON", 3798 + "FMON", 3799 + "FSI_R5WDT", 3800 + "FSI_THERM", 3801 + "FSI_R52C0WDT", /* 0x20 */ 3802 + "FSI_R52C1WDT", 3803 + "FSI_R52C2WDT", 3804 + "FSI_R52C3WDT", 3805 + "FSI_FMON", 3806 + "FSI_VMON", /* 0x25 */ 3792 3807 }; 3793 3808 3794 3809 static const struct tegra_wake_event tegra234_wake_events[] = {
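The reworked tegra234_reset_sources[] table above is indexed directly by the hardware reset-source value, with NULL holes for entries that are reserved or unused on this chip (hence the `/* 0x8 */`-style index comments). A hedged sketch of how such a sparse table is typically consumed; the fallback string and helper name here are illustrative, not taken from pmc.c:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy sparse table: reserved indices hold NULL so the array positions
 * keep matching the raw register values. */
static const char *const reset_sources[] = {
	"SYS_RESET_N",	/* 0x0 */
	"AOWDT",	/* 0x1 */
	NULL,		/* 0x2: reserved on this (hypothetical) SoC */
	"MAINSWRST",	/* 0x3 */
};

/* Bounds-check the index and fall back instead of dereferencing a hole. */
static const char *reset_source_name(size_t index)
{
	if (index >= sizeof(reset_sources) / sizeof(reset_sources[0]) ||
	    !reset_sources[index])
		return "Unknown";
	return reset_sources[index];
}
```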
+13 -16
drivers/soc/ti/knav_dma.c
··· 415 415 void *knav_dma_open_channel(struct device *dev, const char *name, 416 416 struct knav_dma_cfg *config) 417 417 { 418 - struct knav_dma_chan *chan; 419 - struct knav_dma_device *dma; 420 - bool found = false; 418 + struct knav_dma_device *dma = NULL, *iter1; 419 + struct knav_dma_chan *chan = NULL, *iter2; 421 420 int chan_num = -1; 422 421 const char *instance; 423 422 ··· 443 444 } 444 445 445 446 /* Look for correct dma instance */ 446 - list_for_each_entry(dma, &kdev->list, list) { 447 - if (!strcmp(dma->name, instance)) { 448 - found = true; 447 + list_for_each_entry(iter1, &kdev->list, list) { 448 + if (!strcmp(iter1->name, instance)) { 449 + dma = iter1; 449 450 break; 450 451 } 451 452 } 452 - if (!found) { 453 + if (!dma) { 453 454 dev_err(kdev->dev, "No DMA instance with name %s\n", instance); 454 455 return (void *)-EINVAL; 455 456 } 456 457 457 458 /* Look for correct dma channel from dma instance */ 458 - found = false; 459 - list_for_each_entry(chan, &dma->chan_list, list) { 459 + list_for_each_entry(iter2, &dma->chan_list, list) { 460 460 if (config->direction == DMA_MEM_TO_DEV) { 461 - if (chan->channel == chan_num) { 462 - found = true; 461 + if (iter2->channel == chan_num) { 462 + chan = iter2; 463 463 break; 464 464 } 465 465 } else { 466 - if (chan->flow == chan_num) { 467 - found = true; 466 + if (iter2->flow == chan_num) { 467 + chan = iter2; 468 468 break; 469 469 } 470 470 } 471 471 } 472 - if (!found) { 472 + if (!chan) { 473 473 dev_err(kdev->dev, "channel %d is not in DMA %s\n", 474 474 chan_num, instance); 475 475 return (void *)-EINVAL; ··· 745 747 INIT_LIST_HEAD(&kdev->list); 746 748 747 749 pm_runtime_enable(kdev->dev); 748 - ret = pm_runtime_get_sync(kdev->dev); 750 + ret = pm_runtime_resume_and_get(kdev->dev); 749 751 if (ret < 0) { 750 - pm_runtime_put_noidle(kdev->dev); 751 752 dev_err(kdev->dev, "unable to enable pktdma, err %d\n", ret); 752 753 goto err_pm_disable; 753 754 }
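The knav_dma_open_channel() rework above follows the list-iterator pattern adopted across the kernel in this cycle: search with a dedicated cursor (`iter1`, `iter2`) and publish the match through a separate, initially-NULL result pointer, so the loop variable is never used after the loop and the `found` flag disappears. A self-contained sketch of the idiom on a plain singly linked list; the struct and helper are illustrative stand-ins for the kernel's `list_for_each_entry()` machinery:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Minimal singly linked list standing in for the kernel's list_head;
 * the point is the search idiom, not the list implementation. */
struct chan {
	const char *name;
	struct chan *next;
};

/* Search with a dedicated iterator: the result pointer stays NULL
 * unless a match was found, so the caller tests the result directly
 * and the loop cursor never escapes the loop. */
static struct chan *chan_find(struct chan *head, const char *name)
{
	struct chan *chan = NULL;

	for (struct chan *iter = head; iter; iter = iter->next) {
		if (!strcmp(iter->name, name)) {
			chan = iter;
			break;
		}
	}
	return chan;
}
```

With the kernel's circular lists the old pattern was worse than ugly: after a full traversal the cursor points at a bogus entry computed from the list head, so any post-loop dereference guarded only by a `found` flag was a latent type-confusion bug.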
+9 -12
drivers/soc/ti/knav_qmss_queue.c
··· 758 758 int num_desc, int region_id) 759 759 { 760 760 struct knav_region *reg_itr, *region = NULL; 761 - struct knav_pool *pool, *pi; 761 + struct knav_pool *pool, *pi = NULL, *iter; 762 762 struct list_head *node; 763 763 unsigned last_offset; 764 - bool slot_found; 765 764 int ret; 766 765 767 766 if (!kdev) ··· 789 790 } 790 791 791 792 pool->queue = knav_queue_open(name, KNAV_QUEUE_GP, 0); 792 - if (IS_ERR_OR_NULL(pool->queue)) { 793 + if (IS_ERR(pool->queue)) { 793 794 dev_err(kdev->dev, 794 795 "failed to open queue for pool(%s), error %ld\n", 795 796 name, PTR_ERR(pool->queue)); ··· 815 816 * the request 816 817 */ 817 818 last_offset = 0; 818 - slot_found = false; 819 819 node = &region->pools; 820 - list_for_each_entry(pi, &region->pools, region_inst) { 821 - if ((pi->region_offset - last_offset) >= num_desc) { 822 - slot_found = true; 820 + list_for_each_entry(iter, &region->pools, region_inst) { 821 + if ((iter->region_offset - last_offset) >= num_desc) { 822 + pi = iter; 823 823 break; 824 824 } 825 - last_offset = pi->region_offset + pi->num_desc; 825 + last_offset = iter->region_offset + iter->num_desc; 826 826 } 827 - node = &pi->region_inst; 828 827 829 - if (slot_found) { 828 + if (pi) { 829 + node = &pi->region_inst; 830 830 pool->region = region; 831 831 pool->num_desc = num_desc; 832 832 pool->region_offset = last_offset; ··· 1783 1785 INIT_LIST_HEAD(&kdev->pdsps); 1784 1786 1785 1787 pm_runtime_enable(&pdev->dev); 1786 - ret = pm_runtime_get_sync(&pdev->dev); 1788 + ret = pm_runtime_resume_and_get(&pdev->dev); 1787 1789 if (ret < 0) { 1788 - pm_runtime_put_noidle(&pdev->dev); 1789 1790 dev_err(dev, "Failed to enable QMSS\n"); 1790 1791 return ret; 1791 1792 }
+2 -5
drivers/soc/ti/omap_prm.c
··· 941 941 struct resource *res; 942 942 const struct omap_prm_data *data; 943 943 struct omap_prm *prm; 944 - const struct of_device_id *match; 945 944 int ret; 946 945 947 946 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 948 947 if (!res) 949 948 return -ENODEV; 950 949 951 - match = of_match_device(omap_prm_id_table, &pdev->dev); 952 - if (!match) 950 + data = of_device_get_match_data(&pdev->dev); 951 + if (!data) 953 952 return -ENOTSUPP; 954 953 955 954 prm = devm_kzalloc(&pdev->dev, sizeof(*prm), GFP_KERNEL); 956 955 if (!prm) 957 956 return -ENOMEM; 958 - 959 - data = match->data; 960 957 961 958 while (data->base != res->start) { 962 959 if (!data->base)
+2 -4
drivers/soc/ti/pm33xx.c
··· 555 555 #endif /* CONFIG_SUSPEND */ 556 556 557 557 pm_runtime_enable(dev); 558 - ret = pm_runtime_get_sync(dev); 559 - if (ret < 0) { 560 - pm_runtime_put_noidle(dev); 558 + ret = pm_runtime_resume_and_get(dev); 559 + if (ret < 0) 561 560 goto err_pm_runtime_disable; 562 - } 563 561 564 562 ret = pm_ops->init(am33xx_do_sram_idle); 565 563 if (ret) {
+1 -2
drivers/soc/ti/pruss.c
··· 279 279 platform_set_drvdata(pdev, pruss); 280 280 281 281 pm_runtime_enable(dev); 282 - ret = pm_runtime_get_sync(dev); 282 + ret = pm_runtime_resume_and_get(dev); 283 283 if (ret < 0) { 284 284 dev_err(dev, "couldn't enable module\n"); 285 - pm_runtime_put_noidle(dev); 286 285 goto rpm_disable; 287 286 } 288 287
+2
drivers/soc/ti/ti_sci_pm_domains.c
··· 183 183 devm_kcalloc(dev, max_id + 1, 184 184 sizeof(*pd_provider->data.domains), 185 185 GFP_KERNEL); 186 + if (!pd_provider->data.domains) 187 + return -ENOMEM; 186 188 187 189 pd_provider->data.num_domains = max_id + 1; 188 190 pd_provider->data.xlate = ti_sci_pd_xlate;
+204 -6
drivers/soc/ti/wkup_m3_ipc.c
··· 7 7 * Dave Gerlach <d-gerlach@ti.com> 8 8 */ 9 9 10 + #include <linux/debugfs.h> 10 11 #include <linux/err.h> 12 + #include <linux/firmware.h> 11 13 #include <linux/kernel.h> 12 14 #include <linux/kthread.h> 13 15 #include <linux/interrupt.h> ··· 42 40 #define M3_FW_VERSION_MASK 0xffff 43 41 #define M3_WAKE_SRC_MASK 0xff 44 42 43 + #define IPC_MEM_TYPE_SHIFT (0x0) 44 + #define IPC_MEM_TYPE_MASK (0x7 << 0) 45 + #define IPC_VTT_STAT_SHIFT (0x3) 46 + #define IPC_VTT_STAT_MASK (0x1 << 3) 47 + #define IPC_VTT_GPIO_PIN_SHIFT (0x4) 48 + #define IPC_VTT_GPIO_PIN_MASK (0x3f << 4) 49 + #define IPC_IO_ISOLATION_STAT_SHIFT (10) 50 + #define IPC_IO_ISOLATION_STAT_MASK (0x1 << 10) 51 + 52 + #define IPC_DBG_HALT_SHIFT (11) 53 + #define IPC_DBG_HALT_MASK (0x1 << 11) 54 + 45 55 #define M3_STATE_UNKNOWN 0 46 56 #define M3_STATE_RESET 1 47 57 #define M3_STATE_INITED 2 48 58 #define M3_STATE_MSG_FOR_LP 3 49 59 #define M3_STATE_MSG_FOR_RESET 4 60 + 61 + #define WKUP_M3_SD_FW_MAGIC 0x570C 62 + 63 + #define WKUP_M3_DMEM_START 0x80000 64 + #define WKUP_M3_AUXDATA_OFFSET 0x1000 65 + #define WKUP_M3_AUXDATA_SIZE 0xFF 50 66 51 67 static struct wkup_m3_ipc *m3_ipc_state; 52 68 ··· 85 65 {.irq_nr = 51, .src = "ADC_TSC"}, 86 66 {.irq_nr = 0, .src = "Unknown"}, 87 67 }; 68 + 69 + /** 70 + * wkup_m3_copy_aux_data - Copy auxiliary data to special region of m3 dmem 71 + * @data - pointer to data 72 + * @sz - size of data to copy (limit 256 bytes) 73 + * 74 + * Copies any additional blob of data to the wkup_m3 dmem to be used by the 75 + * firmware 76 + */ 77 + static unsigned long wkup_m3_copy_aux_data(struct wkup_m3_ipc *m3_ipc, 78 + const void *data, int sz) 79 + { 80 + unsigned long aux_data_dev_addr; 81 + void *aux_data_addr; 82 + 83 + aux_data_dev_addr = WKUP_M3_DMEM_START + WKUP_M3_AUXDATA_OFFSET; 84 + aux_data_addr = rproc_da_to_va(m3_ipc->rproc, 85 + aux_data_dev_addr, 86 + WKUP_M3_AUXDATA_SIZE, 87 + NULL); 88 + memcpy(aux_data_addr, data, sz); 89 + 90 + return WKUP_M3_AUXDATA_OFFSET; 
91 + } 92 + 93 + static void wkup_m3_scale_data_fw_cb(const struct firmware *fw, void *context) 94 + { 95 + unsigned long val, aux_base; 96 + struct wkup_m3_scale_data_header hdr; 97 + struct wkup_m3_ipc *m3_ipc = context; 98 + struct device *dev = m3_ipc->dev; 99 + 100 + if (!fw) { 101 + dev_err(dev, "Voltage scale fw name given but file missing.\n"); 102 + return; 103 + } 104 + 105 + memcpy(&hdr, fw->data, sizeof(hdr)); 106 + 107 + if (hdr.magic != WKUP_M3_SD_FW_MAGIC) { 108 + dev_err(dev, "PM: Voltage Scale Data binary does not appear valid.\n"); 109 + goto release_sd_fw; 110 + } 111 + 112 + aux_base = wkup_m3_copy_aux_data(m3_ipc, fw->data + sizeof(hdr), 113 + fw->size - sizeof(hdr)); 114 + 115 + val = (aux_base + hdr.sleep_offset); 116 + val |= ((aux_base + hdr.wake_offset) << 16); 117 + 118 + m3_ipc->volt_scale_offsets = val; 119 + 120 + release_sd_fw: 121 + release_firmware(fw); 122 + }; 123 + 124 + static int wkup_m3_init_scale_data(struct wkup_m3_ipc *m3_ipc, 125 + struct device *dev) 126 + { 127 + int ret = 0; 128 + 129 + /* 130 + * If no name is provided, user has already been warned, pm will 131 + * still work so return 0 132 + */ 133 + 134 + if (!m3_ipc->sd_fw_name) 135 + return ret; 136 + 137 + ret = request_firmware_nowait(THIS_MODULE, FW_ACTION_UEVENT, 138 + m3_ipc->sd_fw_name, dev, GFP_ATOMIC, 139 + m3_ipc, wkup_m3_scale_data_fw_cb); 140 + 141 + return ret; 142 + } 143 + 144 + #ifdef CONFIG_DEBUG_FS 145 + static void wkup_m3_set_halt_late(bool enabled) 146 + { 147 + if (enabled) 148 + m3_ipc_state->halt = (1 << IPC_DBG_HALT_SHIFT); 149 + else 150 + m3_ipc_state->halt = 0; 151 + } 152 + 153 + static int option_get(void *data, u64 *val) 154 + { 155 + u32 *option = data; 156 + 157 + *val = *option; 158 + 159 + return 0; 160 + } 161 + 162 + static int option_set(void *data, u64 val) 163 + { 164 + u32 *option = data; 165 + 166 + *option = val; 167 + 168 + if (option == &m3_ipc_state->halt) { 169 + if (val) 170 + wkup_m3_set_halt_late(true); 171 + else 
172 + wkup_m3_set_halt_late(false); 173 + } 174 + 175 + return 0; 176 + } 177 + 178 + DEFINE_SIMPLE_ATTRIBUTE(wkup_m3_ipc_option_fops, option_get, option_set, 179 + "%llu\n"); 180 + 181 + static int wkup_m3_ipc_dbg_init(struct wkup_m3_ipc *m3_ipc) 182 + { 183 + m3_ipc->dbg_path = debugfs_create_dir("wkup_m3_ipc", NULL); 184 + 185 + if (!m3_ipc->dbg_path) 186 + return -EINVAL; 187 + 188 + (void)debugfs_create_file("enable_late_halt", 0644, 189 + m3_ipc->dbg_path, 190 + &m3_ipc->halt, 191 + &wkup_m3_ipc_option_fops); 192 + 193 + return 0; 194 + } 195 + 196 + static inline void wkup_m3_ipc_dbg_destroy(struct wkup_m3_ipc *m3_ipc) 197 + { 198 + debugfs_remove_recursive(m3_ipc->dbg_path); 199 + } 200 + #else 201 + static inline int wkup_m3_ipc_dbg_init(struct wkup_m3_ipc *m3_ipc) 202 + { 203 + return 0; 204 + } 205 + 206 + static inline void wkup_m3_ipc_dbg_destroy(struct wkup_m3_ipc *m3_ipc) 207 + { 208 + } 209 + #endif /* CONFIG_DEBUG_FS */ 88 210 89 211 static void am33xx_txev_eoi(struct wkup_m3_ipc *m3_ipc) 90 212 { ··· 292 130 } 293 131 294 132 m3_ipc->state = M3_STATE_INITED; 133 + wkup_m3_init_scale_data(m3_ipc, dev); 295 134 complete(&m3_ipc->sync_complete); 296 135 break; 297 136 case M3_STATE_MSG_FOR_RESET: ··· 378 215 (m3_ipc->state != M3_STATE_UNKNOWN)); 379 216 } 380 217 218 + static void wkup_m3_set_vtt_gpio(struct wkup_m3_ipc *m3_ipc, int gpio) 219 + { 220 + m3_ipc->vtt_conf = (1 << IPC_VTT_STAT_SHIFT) | 221 + (gpio << IPC_VTT_GPIO_PIN_SHIFT); 222 + } 223 + 224 + static void wkup_m3_set_io_isolation(struct wkup_m3_ipc *m3_ipc) 225 + { 226 + m3_ipc->isolation_conf = (1 << IPC_IO_ISOLATION_STAT_SHIFT); 227 + } 228 + 381 229 /* Public functions */ 382 230 /** 383 231 * wkup_m3_set_mem_type - Pass wkup_m3 which type of memory is in use ··· 454 280 switch (state) { 455 281 case WKUP_M3_DEEPSLEEP: 456 282 m3_power_state = IPC_CMD_DS0; 283 + wkup_m3_ctrl_ipc_write(m3_ipc, m3_ipc->volt_scale_offsets, 5); 457 284 break; 458 285 case WKUP_M3_STANDBY: 459 286 
m3_power_state = IPC_CMD_STANDBY; 287 + wkup_m3_ctrl_ipc_write(m3_ipc, DS_IPC_DEFAULT, 5); 460 288 break; 461 289 case WKUP_M3_IDLE: 462 290 m3_power_state = IPC_CMD_IDLE; 291 + wkup_m3_ctrl_ipc_write(m3_ipc, DS_IPC_DEFAULT, 5); 463 292 break; 464 293 default: 465 294 return 1; ··· 471 294 /* Program each required IPC register then write defaults to others */ 472 295 wkup_m3_ctrl_ipc_write(m3_ipc, m3_ipc->resume_addr, 0); 473 296 wkup_m3_ctrl_ipc_write(m3_ipc, m3_power_state, 1); 474 - wkup_m3_ctrl_ipc_write(m3_ipc, m3_ipc->mem_type, 4); 297 + wkup_m3_ctrl_ipc_write(m3_ipc, m3_ipc->mem_type | 298 + m3_ipc->vtt_conf | 299 + m3_ipc->isolation_conf | 300 + m3_ipc->halt, 4); 475 301 476 302 wkup_m3_ctrl_ipc_write(m3_ipc, DS_IPC_DEFAULT, 2); 477 303 wkup_m3_ctrl_ipc_write(m3_ipc, DS_IPC_DEFAULT, 3); 478 - wkup_m3_ctrl_ipc_write(m3_ipc, DS_IPC_DEFAULT, 5); 479 304 wkup_m3_ctrl_ipc_write(m3_ipc, DS_IPC_DEFAULT, 6); 480 305 wkup_m3_ctrl_ipc_write(m3_ipc, DS_IPC_DEFAULT, 7); 481 306 ··· 612 433 static int wkup_m3_ipc_probe(struct platform_device *pdev) 613 434 { 614 435 struct device *dev = &pdev->dev; 615 - int irq, ret; 436 + int irq, ret, temp; 616 437 phandle rproc_phandle; 617 438 struct rproc *m3_rproc; 618 439 struct resource *res; 619 440 struct task_struct *task; 620 441 struct wkup_m3_ipc *m3_ipc; 442 + struct device_node *np = dev->of_node; 621 443 622 444 m3_ipc = devm_kzalloc(dev, sizeof(*m3_ipc), GFP_KERNEL); 623 445 if (!m3_ipc) ··· 630 450 return PTR_ERR(m3_ipc->ipc_mem_base); 631 451 632 452 irq = platform_get_irq(pdev, 0); 633 - if (irq < 0) { 634 - dev_err(&pdev->dev, "no irq resource\n"); 453 + if (irq < 0) 635 454 return irq; 636 - } 637 455 638 456 ret = devm_request_irq(dev, irq, wkup_m3_txev_handler, 639 457 0, "wkup_m3_txev", m3_ipc); ··· 674 496 675 497 m3_ipc->ops = &ipc_ops; 676 498 499 + if (!of_property_read_u32(np, "ti,vtt-gpio-pin", &temp)) { 500 + if (temp >= 0 && temp <= 31) 501 + wkup_m3_set_vtt_gpio(m3_ipc, temp); 502 + else 503 + 
dev_warn(dev, "Invalid VTT GPIO(%d) pin\n", temp); 504 + } 505 + 506 + if (of_find_property(np, "ti,set-io-isolation", NULL)) 507 + wkup_m3_set_io_isolation(m3_ipc); 508 + 509 + ret = of_property_read_string(np, "firmware-name", 510 + &m3_ipc->sd_fw_name); 511 + if (ret) { 512 + dev_dbg(dev, "Voltage scaling data blob not provided from DT.\n"); 513 + }; 514 + 677 515 /* 678 516 * Wait for firmware loading completion in a thread so we 679 517 * can boot the wkup_m3 as soon as it's ready without holding ··· 704 510 goto err_put_rproc; 705 511 } 706 512 513 + wkup_m3_ipc_dbg_init(m3_ipc); 514 + 707 515 return 0; 708 516 709 517 err_put_rproc: ··· 717 521 718 522 static int wkup_m3_ipc_remove(struct platform_device *pdev) 719 523 { 524 + wkup_m3_ipc_dbg_destroy(m3_ipc_state); 525 + 720 526 mbox_free_channel(m3_ipc_state->mbox); 721 527 722 528 rproc_shutdown(m3_ipc_state->rproc);
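Register 4 of the wkup_m3 IPC interface now carries several packed fields (memory type, VTT GPIO configuration, IO isolation, debug halt), OR'd together before the `wkup_m3_ctrl_ipc_write(..., 4)` call above. A rough userspace model of that packing, reusing the shift values from the diff; the helper itself is hypothetical:

```c
#include <assert.h>
#include <stdint.h>

/* Shift values copied from the IPC_* definitions in the diff. */
#define IPC_MEM_TYPE_SHIFT		0
#define IPC_VTT_STAT_SHIFT		3
#define IPC_VTT_GPIO_PIN_SHIFT		4
#define IPC_IO_ISOLATION_STAT_SHIFT	10
#define IPC_DBG_HALT_SHIFT		11

/* Illustrative composition of the word written to IPC register 4:
 * each option contributes an independent field, so the driver can
 * simply OR the precomputed per-feature values together. */
static uint32_t pack_ipc_reg4(uint32_t mem_type, int vtt_gpio,
			      int io_isolation, int halt)
{
	uint32_t val = mem_type << IPC_MEM_TYPE_SHIFT;

	if (vtt_gpio >= 0)	/* VTT toggle enabled on this GPIO pin */
		val |= (1u << IPC_VTT_STAT_SHIFT) |
		       ((uint32_t)vtt_gpio << IPC_VTT_GPIO_PIN_SHIFT);
	if (io_isolation)
		val |= 1u << IPC_IO_ISOLATION_STAT_SHIFT;
	if (halt)
		val |= 1u << IPC_DBG_HALT_SHIFT;
	return val;
}
```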
+1 -4
drivers/tee/Kconfig
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 # Generic Trusted Execution Environment Configuration 3 - config TEE 3 + menuconfig TEE 4 4 tristate "Trusted Execution Environment support" 5 5 depends on HAVE_ARM_SMCCC || COMPILE_TEST || CPU_SUP_AMD 6 6 select CRYPTO ··· 13 13 14 14 if TEE 15 15 16 - menu "TEE drivers" 17 - 18 16 source "drivers/tee/optee/Kconfig" 19 17 source "drivers/tee/amdtee/Kconfig" 20 - endmenu 21 18 22 19 endif
+197 -47
drivers/tee/optee/call.c
··· 11 11 #include <linux/types.h> 12 12 #include "optee_private.h" 13 13 14 + #define MAX_ARG_PARAM_COUNT 6 15 + 16 + /* 17 + * How much memory we allocate for each entry. This doesn't have to be a 18 + * single page, but it makes sense to keep at least keep it as multiples of 19 + * the page size. 20 + */ 21 + #define SHM_ENTRY_SIZE PAGE_SIZE 22 + 23 + /* 24 + * We need to have a compile time constant to be able to determine the 25 + * maximum needed size of the bit field. 26 + */ 27 + #define MIN_ARG_SIZE OPTEE_MSG_GET_ARG_SIZE(MAX_ARG_PARAM_COUNT) 28 + #define MAX_ARG_COUNT_PER_ENTRY (SHM_ENTRY_SIZE / MIN_ARG_SIZE) 29 + 30 + /* 31 + * Shared memory for argument structs are cached here. The number of 32 + * arguments structs that can fit is determined at runtime depending on the 33 + * needed RPC parameter count reported by secure world 34 + * (optee->rpc_param_count). 35 + */ 36 + struct optee_shm_arg_entry { 37 + struct list_head list_node; 38 + struct tee_shm *shm; 39 + DECLARE_BITMAP(map, MAX_ARG_COUNT_PER_ENTRY); 40 + }; 41 + 14 42 void optee_cq_wait_init(struct optee_call_queue *cq, 15 43 struct optee_call_waiter *w) 16 44 { ··· 132 104 return NULL; 133 105 } 134 106 135 - struct tee_shm *optee_get_msg_arg(struct tee_context *ctx, size_t num_params, 136 - struct optee_msg_arg **msg_arg) 107 + void optee_shm_arg_cache_init(struct optee *optee, u32 flags) 108 + { 109 + INIT_LIST_HEAD(&optee->shm_arg_cache.shm_args); 110 + mutex_init(&optee->shm_arg_cache.mutex); 111 + optee->shm_arg_cache.flags = flags; 112 + } 113 + 114 + void optee_shm_arg_cache_uninit(struct optee *optee) 115 + { 116 + struct list_head *head = &optee->shm_arg_cache.shm_args; 117 + struct optee_shm_arg_entry *entry; 118 + 119 + mutex_destroy(&optee->shm_arg_cache.mutex); 120 + while (!list_empty(head)) { 121 + entry = list_first_entry(head, struct optee_shm_arg_entry, 122 + list_node); 123 + list_del(&entry->list_node); 124 + if (find_first_bit(entry->map, MAX_ARG_COUNT_PER_ENTRY) != 125 + 
MAX_ARG_COUNT_PER_ENTRY) { 126 + pr_err("Freeing non-free entry\n"); 127 + } 128 + tee_shm_free(entry->shm); 129 + kfree(entry); 130 + } 131 + } 132 + 133 + size_t optee_msg_arg_size(size_t rpc_param_count) 134 + { 135 + size_t sz = OPTEE_MSG_GET_ARG_SIZE(MAX_ARG_PARAM_COUNT); 136 + 137 + if (rpc_param_count) 138 + sz += OPTEE_MSG_GET_ARG_SIZE(rpc_param_count); 139 + 140 + return sz; 141 + } 142 + 143 + /** 144 + * optee_get_msg_arg() - Provide shared memory for argument struct 145 + * @ctx: Caller TEE context 146 + * @num_params: Number of parameter to store 147 + * @entry_ret: Entry pointer, needed when freeing the buffer 148 + * @shm_ret: Shared memory buffer 149 + * @offs_ret: Offset of argument strut in shared memory buffer 150 + * 151 + * @returns a pointer to the argument struct in memory, else an ERR_PTR 152 + */ 153 + struct optee_msg_arg *optee_get_msg_arg(struct tee_context *ctx, 154 + size_t num_params, 155 + struct optee_shm_arg_entry **entry_ret, 156 + struct tee_shm **shm_ret, 157 + u_int *offs_ret) 137 158 { 138 159 struct optee *optee = tee_get_drvdata(ctx->teedev); 139 - size_t sz = OPTEE_MSG_GET_ARG_SIZE(num_params); 140 - struct tee_shm *shm; 160 + size_t sz = optee_msg_arg_size(optee->rpc_param_count); 161 + struct optee_shm_arg_entry *entry; 141 162 struct optee_msg_arg *ma; 163 + size_t args_per_entry; 164 + u_long bit; 165 + u_int offs; 166 + void *res; 142 167 143 - /* 144 - * rpc_arg_count is set to the number of allocated parameters in 145 - * the RPC argument struct if a second MSG arg struct is expected. 146 - * The second arg struct will then be used for RPC. 
147 - */ 148 - if (optee->rpc_arg_count) 149 - sz += OPTEE_MSG_GET_ARG_SIZE(optee->rpc_arg_count); 168 + if (num_params > MAX_ARG_PARAM_COUNT) 169 + return ERR_PTR(-EINVAL); 150 170 151 - shm = tee_shm_alloc_priv_buf(ctx, sz); 152 - if (IS_ERR(shm)) 153 - return shm; 171 + if (optee->shm_arg_cache.flags & OPTEE_SHM_ARG_SHARED) 172 + args_per_entry = SHM_ENTRY_SIZE / sz; 173 + else 174 + args_per_entry = 1; 154 175 155 - ma = tee_shm_get_va(shm, 0); 156 - if (IS_ERR(ma)) { 157 - tee_shm_free(shm); 158 - return (void *)ma; 176 + mutex_lock(&optee->shm_arg_cache.mutex); 177 + list_for_each_entry(entry, &optee->shm_arg_cache.shm_args, list_node) { 178 + bit = find_first_zero_bit(entry->map, MAX_ARG_COUNT_PER_ENTRY); 179 + if (bit < args_per_entry) 180 + goto have_entry; 159 181 } 160 182 161 - memset(ma, 0, OPTEE_MSG_GET_ARG_SIZE(num_params)); 162 - ma->num_params = num_params; 163 - *msg_arg = ma; 183 + /* 184 + * No entry was found, let's allocate a new. 185 + */ 186 + entry = kzalloc(sizeof(*entry), GFP_KERNEL); 187 + if (!entry) { 188 + res = ERR_PTR(-ENOMEM); 189 + goto out; 190 + } 164 191 165 - return shm; 192 + if (optee->shm_arg_cache.flags & OPTEE_SHM_ARG_ALLOC_PRIV) 193 + res = tee_shm_alloc_priv_buf(ctx, SHM_ENTRY_SIZE); 194 + else 195 + res = tee_shm_alloc_kernel_buf(ctx, SHM_ENTRY_SIZE); 196 + 197 + if (IS_ERR(res)) { 198 + kfree(entry); 199 + goto out; 200 + } 201 + entry->shm = res; 202 + list_add(&entry->list_node, &optee->shm_arg_cache.shm_args); 203 + bit = 0; 204 + 205 + have_entry: 206 + offs = bit * sz; 207 + res = tee_shm_get_va(entry->shm, offs); 208 + if (IS_ERR(res)) 209 + goto out; 210 + ma = res; 211 + set_bit(bit, entry->map); 212 + memset(ma, 0, sz); 213 + ma->num_params = num_params; 214 + *entry_ret = entry; 215 + *shm_ret = entry->shm; 216 + *offs_ret = offs; 217 + out: 218 + mutex_unlock(&optee->shm_arg_cache.mutex); 219 + return res; 220 + } 221 + 222 + /** 223 + * optee_free_msg_arg() - Free previsouly obtained shared memory 224 + * 
@ctx: Caller TEE context 225 + * @entry: Pointer returned when the shared memory was obtained 226 + * @offs: Offset of shared memory buffer to free 227 + * 228 + * This function frees the shared memory obtained with optee_get_msg_arg(). 229 + */ 230 + void optee_free_msg_arg(struct tee_context *ctx, 231 + struct optee_shm_arg_entry *entry, u_int offs) 232 + { 233 + struct optee *optee = tee_get_drvdata(ctx->teedev); 234 + size_t sz = optee_msg_arg_size(optee->rpc_param_count); 235 + u_long bit; 236 + 237 + if (offs > SHM_ENTRY_SIZE || offs % sz) { 238 + pr_err("Invalid offs %u\n", offs); 239 + return; 240 + } 241 + bit = offs / sz; 242 + 243 + mutex_lock(&optee->shm_arg_cache.mutex); 244 + 245 + if (!test_bit(bit, entry->map)) 246 + pr_err("Bit pos %lu is already free\n", bit); 247 + clear_bit(bit, entry->map); 248 + 249 + mutex_unlock(&optee->shm_arg_cache.mutex); 166 250 } 167 251 168 252 int optee_open_session(struct tee_context *ctx, ··· 283 143 { 284 144 struct optee *optee = tee_get_drvdata(ctx->teedev); 285 145 struct optee_context_data *ctxdata = ctx->data; 286 - int rc; 146 + struct optee_shm_arg_entry *entry; 287 147 struct tee_shm *shm; 288 148 struct optee_msg_arg *msg_arg; 289 149 struct optee_session *sess = NULL; 290 150 uuid_t client_uuid; 151 + u_int offs; 152 + int rc; 291 153 292 154 /* +2 for the meta parameters added below */ 293 - shm = optee_get_msg_arg(ctx, arg->num_params + 2, &msg_arg); 294 - if (IS_ERR(shm)) 295 - return PTR_ERR(shm); 155 + msg_arg = optee_get_msg_arg(ctx, arg->num_params + 2, 156 + &entry, &shm, &offs); 157 + if (IS_ERR(msg_arg)) 158 + return PTR_ERR(msg_arg); 296 159 297 160 msg_arg->cmd = OPTEE_MSG_CMD_OPEN_SESSION; 298 161 msg_arg->cancel_id = arg->cancel_id; ··· 328 185 goto out; 329 186 } 330 187 331 - if (optee->ops->do_call_with_arg(ctx, shm)) { 188 + if (optee->ops->do_call_with_arg(ctx, shm, offs)) { 332 189 msg_arg->ret = TEEC_ERROR_COMMUNICATION; 333 190 msg_arg->ret_origin = TEEC_ORIGIN_COMMS; 334 191 } ··· 
 	arg->ret_origin = msg_arg->ret_origin;
 }
 out:
-	tee_shm_free(shm);
+	optee_free_msg_arg(ctx, entry, offs);
 
 	return rc;
 }
 
 int optee_close_session_helper(struct tee_context *ctx, u32 session)
 {
-	struct tee_shm *shm;
 	struct optee *optee = tee_get_drvdata(ctx->teedev);
+	struct optee_shm_arg_entry *entry;
 	struct optee_msg_arg *msg_arg;
+	struct tee_shm *shm;
+	u_int offs;
 
-	shm = optee_get_msg_arg(ctx, 0, &msg_arg);
-	if (IS_ERR(shm))
-		return PTR_ERR(shm);
+	msg_arg = optee_get_msg_arg(ctx, 0, &entry, &shm, &offs);
+	if (IS_ERR(msg_arg))
+		return PTR_ERR(msg_arg);
 
 	msg_arg->cmd = OPTEE_MSG_CMD_CLOSE_SESSION;
 	msg_arg->session = session;
-	optee->ops->do_call_with_arg(ctx, shm);
+	optee->ops->do_call_with_arg(ctx, shm, offs);
 
-	tee_shm_free(shm);
+	optee_free_msg_arg(ctx, entry, offs);
 
 	return 0;
 }
···
 {
 	struct optee *optee = tee_get_drvdata(ctx->teedev);
 	struct optee_context_data *ctxdata = ctx->data;
-	struct tee_shm *shm;
+	struct optee_shm_arg_entry *entry;
 	struct optee_msg_arg *msg_arg;
 	struct optee_session *sess;
+	struct tee_shm *shm;
+	u_int offs;
 	int rc;
 
 	/* Check that the session is valid */
···
 	if (!sess)
 		return -EINVAL;
 
-	shm = optee_get_msg_arg(ctx, arg->num_params, &msg_arg);
-	if (IS_ERR(shm))
-		return PTR_ERR(shm);
+	msg_arg = optee_get_msg_arg(ctx, arg->num_params,
+				    &entry, &shm, &offs);
+	if (IS_ERR(msg_arg))
+		return PTR_ERR(msg_arg);
 	msg_arg->cmd = OPTEE_MSG_CMD_INVOKE_COMMAND;
 	msg_arg->func = arg->func;
 	msg_arg->session = arg->session;
···
 	if (rc)
 		goto out;
 
-	if (optee->ops->do_call_with_arg(ctx, shm)) {
+	if (optee->ops->do_call_with_arg(ctx, shm, offs)) {
 		msg_arg->ret = TEEC_ERROR_COMMUNICATION;
 		msg_arg->ret_origin = TEEC_ORIGIN_COMMS;
 	}
···
 	arg->ret = msg_arg->ret;
 	arg->ret_origin = msg_arg->ret_origin;
 out:
-	tee_shm_free(shm);
+	optee_free_msg_arg(ctx, entry, offs);
 	return rc;
 }
 
···
 {
 	struct optee *optee = tee_get_drvdata(ctx->teedev);
 	struct optee_context_data *ctxdata = ctx->data;
-	struct tee_shm *shm;
+	struct optee_shm_arg_entry *entry;
 	struct optee_msg_arg *msg_arg;
 	struct optee_session *sess;
+	struct tee_shm *shm;
+	u_int offs;
 
 	/* Check that the session is valid */
 	mutex_lock(&ctxdata->mutex);
···
 	if (!sess)
 		return -EINVAL;
 
-	shm = optee_get_msg_arg(ctx, 0, &msg_arg);
-	if (IS_ERR(shm))
-		return PTR_ERR(shm);
+	msg_arg = optee_get_msg_arg(ctx, 0, &entry, &shm, &offs);
+	if (IS_ERR(msg_arg))
+		return PTR_ERR(msg_arg);
 
 	msg_arg->cmd = OPTEE_MSG_CMD_CANCEL;
 	msg_arg->session = session;
 	msg_arg->cancel_id = cancel_id;
-	optee->ops->do_call_with_arg(ctx, shm);
+	optee->ops->do_call_with_arg(ctx, shm, offs);
 
-	tee_shm_free(shm);
+	optee_free_msg_arg(ctx, entry, offs);
 	return 0;
 }
···
 	 * Allow kernel address to register with OP-TEE as kernel
 	 * pages are configured as normal memory only.
 	 */
-	if (virt_addr_valid(start))
+	if (virt_addr_valid(start) || is_vmalloc_addr((void *)start))
 		return 0;
 
 	mmap_read_lock(mm);
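The hunks above replace the per-call allocate/free of a `tee_shm` argument buffer with an `(entry, shm, offs)` triple handed out by a cache, so several argument structs can share one registered page. A minimal userspace sketch of that slot-cache idea (all names, the 256-byte slot size, and the bitmap layout here are illustrative, not the kernel's actual `optee_shm_arg_entry`):

```c
#include <stddef.h>
#include <stdint.h>

#define SLOT_SZ   256u          /* assumed per-argument-struct size */
#define PAGE_SZ   4096u
#define NUM_SLOTS (PAGE_SZ / SLOT_SZ)

struct arg_entry {
	uint32_t map;           /* bit n set => slot n in use */
	uint8_t page[PAGE_SZ];  /* stands in for the shared page */
};

/* Hand out the lowest free offset within the page, or -1 when full. */
static long arg_alloc(struct arg_entry *e)
{
	for (unsigned int n = 0; n < NUM_SLOTS; n++) {
		if (!(e->map & (1u << n))) {
			e->map |= 1u << n;
			return (long)(n * SLOT_SZ);
		}
	}
	return -1;
}

/* Freeing only clears the bitmap bit; the page stays registered. */
static void arg_free(struct arg_entry *e, long offs)
{
	e->map &= ~(1u << ((unsigned long)offs / SLOT_SZ));
}
```

The design point the diff exploits: because freeing is just a bitmap clear, the shared page never has to be unregistered from secure world between calls.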
drivers/tee/optee/core.c  (+1)
···
 	optee_unregister_devices();
 
 	optee_notif_uninit(optee);
+	optee_shm_arg_cache_uninit(optee);
 	teedev_close_context(optee->ctx);
 	/*
 	 * The two devices have to be unregistered before we can free the
drivers/tee/optee/ffa_abi.c  (+28 -10)
···
  * optee_ffa_do_call_with_arg() - Do a FF-A call to enter OP-TEE in secure world
  * @ctx:	calling context
  * @shm:	shared memory holding the message to pass to secure world
+ * @offs:	offset of the message in @shm
  *
  * Does a FF-A call to OP-TEE in secure world and handles eventual resulting
  * Remote Procedure Calls (RPC) from OP-TEE.
···
  */
 
 static int optee_ffa_do_call_with_arg(struct tee_context *ctx,
-				      struct tee_shm *shm)
+				      struct tee_shm *shm, u_int offs)
 {
 	struct ffa_send_direct_data data = {
 		.data0 = OPTEE_FFA_YIELDING_CALL_WITH_ARG,
 		.data1 = (u32)shm->sec_world_id,
 		.data2 = (u32)(shm->sec_world_id >> 32),
-		.data3 = shm->offset,
+		.data3 = offs,
 	};
 	struct optee_msg_arg *arg;
 	unsigned int rpc_arg_offs;
 	struct optee_msg_arg *rpc_arg;
 
-	arg = tee_shm_get_va(shm, 0);
+	/*
+	 * The shared memory object has to start on a page when passed as
+	 * an argument struct. This is also what the shm pool allocator
+	 * returns, but check this before calling secure world to catch
+	 * eventual errors early in case something changes.
+	 */
+	if (shm->offset)
+		return -EINVAL;
+
+	arg = tee_shm_get_va(shm, offs);
 	if (IS_ERR(arg))
 		return PTR_ERR(arg);
 
 	rpc_arg_offs = OPTEE_MSG_GET_ARG_SIZE(arg->num_params);
-	rpc_arg = tee_shm_get_va(shm, rpc_arg_offs);
+	rpc_arg = tee_shm_get_va(shm, offs + rpc_arg_offs);
 	if (IS_ERR(rpc_arg))
 		return PTR_ERR(rpc_arg);
 
···
 
 static bool optee_ffa_exchange_caps(struct ffa_device *ffa_dev,
 				    const struct ffa_dev_ops *ops,
-				    unsigned int *rpc_arg_count)
+				    u32 *sec_caps,
+				    unsigned int *rpc_param_count)
 {
 	struct ffa_send_direct_data data = { OPTEE_FFA_EXCHANGE_CAPABILITIES };
 	int rc;
···
 		return false;
 	}
 
-	*rpc_arg_count = (u8)data.data1;
+	*rpc_param_count = (u8)data.data1;
+	*sec_caps = data.data2;
 
 	return true;
 }
···
 
 static void optee_ffa_remove(struct ffa_device *ffa_dev)
 {
-	struct optee *optee = ffa_dev->dev.driver_data;
+	struct optee *optee = ffa_dev_get_drvdata(ffa_dev);
 
 	optee_remove_common(optee);
 
···
 static int optee_ffa_probe(struct ffa_device *ffa_dev)
 {
 	const struct ffa_dev_ops *ffa_ops;
-	unsigned int rpc_arg_count;
+	unsigned int rpc_param_count;
 	struct tee_shm_pool *pool;
 	struct tee_device *teedev;
 	struct tee_context *ctx;
+	u32 arg_cache_flags = 0;
 	struct optee *optee;
+	u32 sec_caps;
 	int rc;
 
 	ffa_ops = ffa_dev_ops_get(ffa_dev);
···
 	if (!optee_ffa_api_is_compatbile(ffa_dev, ffa_ops))
 		return -EINVAL;
 
-	if (!optee_ffa_exchange_caps(ffa_dev, ffa_ops, &rpc_arg_count))
+	if (!optee_ffa_exchange_caps(ffa_dev, ffa_ops, &sec_caps,
+				     &rpc_param_count))
 		return -EINVAL;
+	if (sec_caps & OPTEE_FFA_SEC_CAP_ARG_OFFSET)
+		arg_cache_flags |= OPTEE_SHM_ARG_SHARED;
 
 	optee = kzalloc(sizeof(*optee), GFP_KERNEL);
 	if (!optee)
···
 	optee->ops = &optee_ffa_ops;
 	optee->ffa.ffa_dev = ffa_dev;
 	optee->ffa.ffa_ops = ffa_ops;
-	optee->rpc_arg_count = rpc_arg_count;
+	optee->rpc_param_count = rpc_param_count;
 
 	teedev = tee_device_alloc(&optee_ffa_clnt_desc, NULL, optee->pool,
 				  optee);
···
 	mutex_init(&optee->call_queue.mutex);
 	INIT_LIST_HEAD(&optee->call_queue.waiters);
 	optee_supp_init(&optee->supp);
+	optee_shm_arg_cache_init(optee, arg_cache_flags);
 	ffa_dev_set_drvdata(ffa_dev, optee);
 	ctx = teedev_open(optee->teedev);
 	if (IS_ERR(ctx)) {
drivers/tee/optee/optee_ffa.h  (+11 -1)
···
  *	      as the second MSG arg struct for
  *	      OPTEE_FFA_YIELDING_CALL_WITH_ARG.
  *	      Bit[31:8]: Reserved (MBZ)
- * w5-w7:  Note used (MBZ)
+ * w5:	   Bitfield of secure world capabilities OPTEE_FFA_SEC_CAP_* below,
+ *	   unused bits MBZ.
+ * w6-w7:  Not used (MBZ)
  */
+
+/*
+ * Secure world supports giving an offset into the argument shared memory
+ * object, see also OPTEE_FFA_YIELDING_CALL_WITH_ARG
+ */
+#define OPTEE_FFA_SEC_CAP_ARG_OFFSET	BIT(0)
+
 #define OPTEE_FFA_EXCHANGE_CAPABILITIES OPTEE_FFA_BLOCKING_CALL(2)
 
 /*
···
  *	  OPTEE_MSG_GET_ARG_SIZE(num_params) follows a struct optee_msg_arg
  *	  for RPC, this struct has reserved space for the number of RPC
  *	  parameters as returned by OPTEE_FFA_EXCHANGE_CAPABILITIES.
+ *	  MBZ unless the bit OPTEE_FFA_SEC_CAP_ARG_OFFSET is received with
+ *	  OPTEE_FFA_EXCHANGE_CAPABILITIES.
  * w7:	  Not used (MBZ)
  * Resume from RPC. Register usage:
  * w3:	  Service ID, OPTEE_FFA_YIELDING_CALL_RESUME
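Per the header comment above, the FF-A capability exchange returns the RPC parameter count in Bit[7:0] of w4 and the new capability bitfield in w5 (in `ffa_abi.c` these arrive as `data.data1` and `data.data2`). A small sketch of the decode, using only the bit layout documented here:

```c
#include <stdbool.h>
#include <stdint.h>

#define BIT(n) (1u << (n))
#define OPTEE_FFA_SEC_CAP_ARG_OFFSET BIT(0)

/* w5 of OPTEE_FFA_EXCHANGE_CAPABILITIES carries the capability bits. */
static bool ffa_arg_offset_supported(uint32_t w5)
{
	return w5 & OPTEE_FFA_SEC_CAP_ARG_OFFSET;
}

/* w4 Bit[7:0] is the RPC parameter count; Bit[31:8] is reserved (MBZ). */
static unsigned int ffa_rpc_param_count(uint32_t w4)
{
	return w4 & 0xff;
}
```

When the ARG_OFFSET bit is set, the driver may pass a non-zero offset in w6 of `OPTEE_FFA_YIELDING_CALL_WITH_ARG`, which is what makes the shared argument-page cache possible.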
drivers/tee/optee/optee_private.h  (+26 -5)
···
 	u_long *bitmap;
 };
 
+#define OPTEE_SHM_ARG_ALLOC_PRIV	BIT(0)
+#define OPTEE_SHM_ARG_SHARED		BIT(1)
+struct optee_shm_arg_entry;
+struct optee_shm_arg_cache {
+	u32 flags;
+	/* Serializes access to this struct */
+	struct mutex mutex;
+	struct list_head shm_args;
+};
+
 /**
  * struct optee_supp - supplicant synchronization struct
  * @ctx			the context of current connected supplicant.
···
  */
 struct optee_ops {
 	int (*do_call_with_arg)(struct tee_context *ctx,
-				struct tee_shm *shm_arg);
+				struct tee_shm *shm_arg, u_int offs);
 	int (*to_msg_param)(struct optee *optee,
 			    struct optee_msg_param *msg_params,
 			    size_t num_params, const struct tee_param *params);
···
  * @notif:		notification synchronization struct
  * @supp:		supplicant synchronization struct for RPC to supplicant
  * @pool:		shared memory pool
- * @rpc_arg_count:	If > 0 number of RPC parameters to make room for
+ * @rpc_param_count:	If > 0 number of RPC parameters to make room for
  * @scan_bus_done	flag if device registation was already done.
  * @scan_bus_wq		workqueue to scan optee bus and register optee drivers
  * @scan_bus_work	workq to scan optee bus and register optee drivers
···
 		struct optee_smc smc;
 		struct optee_ffa ffa;
 	};
+	struct optee_shm_arg_cache shm_arg_cache;
 	struct optee_call_queue call_queue;
 	struct optee_notif notif;
 	struct optee_supp supp;
 	struct tee_shm_pool *pool;
-	unsigned int rpc_arg_count;
+	unsigned int rpc_param_count;
 	bool scan_bus_done;
 	struct workqueue_struct *scan_bus_wq;
 	struct work_struct scan_bus_work;
···
 void optee_cq_wait_final(struct optee_call_queue *cq,
 			 struct optee_call_waiter *w);
 int optee_check_mem_type(unsigned long start, size_t num_pages);
 
+void optee_shm_arg_cache_init(struct optee *optee, u32 flags);
+void optee_shm_arg_cache_uninit(struct optee *optee);
-struct tee_shm *optee_get_msg_arg(struct tee_context *ctx, size_t num_params,
-				  struct optee_msg_arg **msg_arg);
+struct optee_msg_arg *optee_get_msg_arg(struct tee_context *ctx,
+					size_t num_params,
+					struct optee_shm_arg_entry **entry,
+					struct tee_shm **shm_ret,
+					u_int *offs);
+void optee_free_msg_arg(struct tee_context *ctx,
+			struct optee_shm_arg_entry *entry, u_int offs);
+size_t optee_msg_arg_size(size_t rpc_param_count);
+
 
 struct tee_shm *optee_rpc_cmd_alloc_suppl(struct tee_context *ctx, size_t sz);
 void optee_rpc_cmd_free_suppl(struct tee_context *ctx, struct tee_shm *shm);
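The two new cache flags combine in three ways in this series (the selection itself happens in the `smc_abi.c` probe hunks further down): dynamic shm plus the RPC-arg capability shares pre-registered pages, dynamic shm without it falls back to private per-call allocations, and the static pool shares pages but never pre-registers them. A condensed sketch of that mapping, with the flag values copied from the header above and the function name invented for illustration:

```c
#include <stdint.h>

#define BIT(n) (1u << (n))
#define OPTEE_SHM_ARG_ALLOC_PRIV	BIT(0)
#define OPTEE_SHM_ARG_SHARED		BIT(1)

/* Hypothetical helper condensing the probe-time flag selection. */
static uint32_t smc_arg_cache_flags(int dyn_shm, int rpc_arg_cap)
{
	if (dyn_shm)
		return rpc_arg_cap ? OPTEE_SHM_ARG_SHARED
				   : OPTEE_SHM_ARG_ALLOC_PRIV;
	/* static pool: shareable offsets, but no pre-registration */
	return OPTEE_SHM_ARG_SHARED | OPTEE_SHM_ARG_ALLOC_PRIV;
}
```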
drivers/tee/optee/optee_smc.h  (+42 -6)
···
 /*
  * Call with struct optee_msg_arg as argument
  *
- * When calling this function normal world has a few responsibilities:
+ * When called with OPTEE_SMC_CALL_WITH_RPC_ARG or
+ * OPTEE_SMC_CALL_WITH_REGD_ARG in a0 there is one RPC struct optee_msg_arg
+ * following after the first struct optee_msg_arg. The RPC struct
+ * optee_msg_arg has reserved space for the number of RPC parameters as
+ * returned by OPTEE_SMC_EXCHANGE_CAPABILITIES.
+ *
+ * When calling these functions, normal world has a few responsibilities:
  * 1. It must be able to handle eventual RPCs
  * 2. Non-secure interrupts should not be masked
  * 3. If asynchronous notifications has been negotiated successfully, then
- *    asynchronous notifications should be unmasked during this call.
+ *    the interrupt for asynchronous notifications should be unmasked
+ *    during this call.
  *
- * Call register usage:
- * a0	SMC Function ID, OPTEE_SMC*CALL_WITH_ARG
+ * Call register usage, OPTEE_SMC_CALL_WITH_ARG and
+ * OPTEE_SMC_CALL_WITH_RPC_ARG:
+ * a0	SMC Function ID, OPTEE_SMC_CALL_WITH_ARG or OPTEE_SMC_CALL_WITH_RPC_ARG
  * a1	Upper 32 bits of a 64-bit physical pointer to a struct optee_msg_arg
  * a2	Lower 32 bits of a 64-bit physical pointer to a struct optee_msg_arg
  * a3	Cache settings, not used if physical pointer is in a predefined shared
  *	memory area else per OPTEE_SMC_SHM_*
+ * a4-6	Not used
+ * a7	Hypervisor Client ID register
+ *
+ * Call register usage, OPTEE_SMC_CALL_WITH_REGD_ARG:
+ * a0	SMC Function ID, OPTEE_SMC_CALL_WITH_REGD_ARG
+ * a1	Upper 32 bits of a 64-bit shared memory cookie
+ * a2	Lower 32 bits of a 64-bit shared memory cookie
+ * a3	Offset of the struct optee_msg_arg in the shared memory with the
+ *	supplied cookie
  * a4-6	Not used
  * a7	Hypervisor Client ID register
  *
···
 #define OPTEE_SMC_FUNCID_CALL_WITH_ARG OPTEE_MSG_FUNCID_CALL_WITH_ARG
 #define OPTEE_SMC_CALL_WITH_ARG \
 	OPTEE_SMC_STD_CALL_VAL(OPTEE_SMC_FUNCID_CALL_WITH_ARG)
+#define OPTEE_SMC_CALL_WITH_RPC_ARG \
+	OPTEE_SMC_STD_CALL_VAL(OPTEE_SMC_FUNCID_CALL_WITH_RPC_ARG)
+#define OPTEE_SMC_CALL_WITH_REGD_ARG \
+	OPTEE_SMC_STD_CALL_VAL(OPTEE_SMC_FUNCID_CALL_WITH_REGD_ARG)
 
 /*
  * Get Shared Memory Config
···
  * a0	OPTEE_SMC_RETURN_OK
  * a1	bitfield of secure world capabilities OPTEE_SMC_SEC_CAP_*
  * a2	The maximum secure world notification number
- * a3-7	Preserved
+ * a3	Bit[7:0]: Number of parameters needed for RPC to be supplied
+ *		  as the second MSG arg struct for
+ *		  OPTEE_SMC_CALL_WITH_ARG
+ *	Bit[31:8]: Reserved (MBZ)
+ * a4-7	Preserved
  *
  * Error return register usage:
  * a0	OPTEE_SMC_RETURN_ENOTAVAIL, can't use the capabilities from normal world
···
 #define OPTEE_SMC_SEC_CAP_MEMREF_NULL		BIT(4)
 /* Secure world supports asynchronous notification of normal world */
 #define OPTEE_SMC_SEC_CAP_ASYNC_NOTIF		BIT(5)
+/* Secure world supports pre-allocating RPC arg struct */
+#define OPTEE_SMC_SEC_CAP_RPC_ARG		BIT(6)
 
 #define OPTEE_SMC_FUNCID_EXCHANGE_CAPABILITIES	9
 #define OPTEE_SMC_EXCHANGE_CAPABILITIES \
···
 	unsigned long status;
 	unsigned long capabilities;
 	unsigned long max_notif_value;
-	unsigned long reserved0;
+	unsigned long data;
 };
 
 /*
···
  * should be called until all pended values have been retrieved. When a
  * value is retrieved, it's cleared from the record in secure world.
  *
+ * It is expected that this function is called from an interrupt handler
+ * in normal world.
+ *
  * Call requests usage:
  * a0	SMC Function ID, OPTEE_SMC_GET_ASYNC_NOTIF_VALUE
  * a1-6	Not used
···
 #define OPTEE_SMC_FUNCID_GET_ASYNC_NOTIF_VALUE	17
 #define OPTEE_SMC_GET_ASYNC_NOTIF_VALUE \
 	OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_GET_ASYNC_NOTIF_VALUE)
+
+/* See OPTEE_SMC_CALL_WITH_RPC_ARG above */
+#define OPTEE_SMC_FUNCID_CALL_WITH_RPC_ARG	18
+
+/* See OPTEE_SMC_CALL_WITH_REGD_ARG above */
+#define OPTEE_SMC_FUNCID_CALL_WITH_REGD_ARG	19
 
 /*
  * Resume from RPC (for example after processing a foreign interrupt)
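The EXCHANGE_CAPABILITIES change above means a3's Bit[7:0] now carries the RPC parameter count, but only when `OPTEE_SMC_SEC_CAP_RPC_ARG` is also set in a1. A sketch of the decode as the driver performs it in `optee_msg_exchange_capabilities()` (register names abbreviated, values illustrative):

```c
#include <stdint.h>

#define BIT(n) (1u << (n))
#define OPTEE_SMC_SEC_CAP_RPC_ARG BIT(6)

/*
 * a3 Bit[7:0] is only meaningful when secure world advertised
 * OPTEE_SMC_SEC_CAP_RPC_ARG in a1 (sec_caps); otherwise the count is 0
 * and the driver keeps the legacy single-arg calling convention.
 */
static unsigned int smc_rpc_param_count(uint32_t sec_caps, uint32_t a3)
{
	if (sec_caps & OPTEE_SMC_SEC_CAP_RPC_ARG)
		return (uint8_t)a3;
	return 0;
}
```

A non-zero count is what later lets the driver skip the shm cache entirely, since the RPC argument struct travels in the same shared buffer as the normal one.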
drivers/tee/optee/smc_abi.c  (+160 -37)
···
 	struct optee_msg_arg *msg_arg;
 	struct tee_shm *shm_arg;
 	u64 *pages_list;
+	size_t sz;
 	int rc;
 
 	if (!num_pages)
···
 	if (!pages_list)
 		return -ENOMEM;
 
-	shm_arg = optee_get_msg_arg(ctx, 1, &msg_arg);
+	/*
+	 * We're about to register shared memory we can't register shared
+	 * memory for this request or there's a catch-22.
+	 *
+	 * So in this we'll have to do the good old temporary private
+	 * allocation instead of using optee_get_msg_arg().
+	 */
+	sz = optee_msg_arg_size(optee->rpc_param_count);
+	shm_arg = tee_shm_alloc_priv_buf(ctx, sz);
 	if (IS_ERR(shm_arg)) {
 		rc = PTR_ERR(shm_arg);
+		goto out;
+	}
+	msg_arg = tee_shm_get_va(shm_arg, 0);
+	if (IS_ERR(msg_arg)) {
+		rc = PTR_ERR(msg_arg);
 		goto out;
 	}
 
 	optee_fill_pages_list(pages_list, pages, num_pages,
 			      tee_shm_get_page_offset(shm));
 
+	memset(msg_arg, 0, OPTEE_MSG_GET_ARG_SIZE(1));
+	msg_arg->num_params = 1;
 	msg_arg->cmd = OPTEE_MSG_CMD_REGISTER_SHM;
 	msg_arg->params->attr = OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT |
 				OPTEE_MSG_ATTR_NONCONTIG;
···
 	msg_arg->params->u.tmem.buf_ptr = virt_to_phys(pages_list) |
 	  (tee_shm_get_page_offset(shm) & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1));
 
-	if (optee->ops->do_call_with_arg(ctx, shm_arg) ||
+	if (optee->ops->do_call_with_arg(ctx, shm_arg, 0) ||
 	    msg_arg->ret != TEEC_SUCCESS)
 		rc = -EINVAL;
 
···
 	struct optee_msg_arg *msg_arg;
 	struct tee_shm *shm_arg;
 	int rc = 0;
+	size_t sz;
 
-	shm_arg = optee_get_msg_arg(ctx, 1, &msg_arg);
+	/*
+	 * We're about to unregister shared memory and we may not be able
+	 * register shared memory for this request in case we're called
+	 * from optee_shm_arg_cache_uninit().
+	 *
+	 * So in order to keep things simple in this function just as in
+	 * optee_shm_register() we'll use temporary private allocation
+	 * instead of using optee_get_msg_arg().
+	 */
+	sz = optee_msg_arg_size(optee->rpc_param_count);
+	shm_arg = tee_shm_alloc_priv_buf(ctx, sz);
 	if (IS_ERR(shm_arg))
 		return PTR_ERR(shm_arg);
+	msg_arg = tee_shm_get_va(shm_arg, 0);
+	if (IS_ERR(msg_arg)) {
+		rc = PTR_ERR(msg_arg);
+		goto out;
+	}
 
+	memset(msg_arg, 0, sz);
+	msg_arg->num_params = 1;
 	msg_arg->cmd = OPTEE_MSG_CMD_UNREGISTER_SHM;
-
 	msg_arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_RMEM_INPUT;
 	msg_arg->params[0].u.rmem.shm_ref = (unsigned long)shm;
 
-	if (optee->ops->do_call_with_arg(ctx, shm_arg) ||
+	if (optee->ops->do_call_with_arg(ctx, shm_arg, 0) ||
 	    msg_arg->ret != TEEC_SUCCESS)
 		rc = -EINVAL;
+out:
 	tee_shm_free(shm_arg);
 	return rc;
 }
···
 }
 
 static void handle_rpc_func_cmd(struct tee_context *ctx, struct optee *optee,
-				struct tee_shm *shm,
+				struct optee_msg_arg *arg,
 				struct optee_call_ctx *call_ctx)
 {
-	struct optee_msg_arg *arg;
-
-	arg = tee_shm_get_va(shm, 0);
-	if (IS_ERR(arg)) {
-		pr_err("%s: tee_shm_get_va %p failed\n", __func__, shm);
-		return;
-	}
 
 	switch (arg->cmd) {
 	case OPTEE_RPC_CMD_SHM_ALLOC:
···
  * Result of RPC is written back into @param.
  */
 static void optee_handle_rpc(struct tee_context *ctx,
+			     struct optee_msg_arg *rpc_arg,
 			     struct optee_rpc_param *param,
 			     struct optee_call_ctx *call_ctx)
 {
 	struct tee_device *teedev = ctx->teedev;
 	struct optee *optee = tee_get_drvdata(teedev);
+	struct optee_msg_arg *arg;
 	struct tee_shm *shm;
 	phys_addr_t pa;
 
···
 		 */
 		break;
 	case OPTEE_SMC_RPC_FUNC_CMD:
-		shm = reg_pair_to_ptr(param->a1, param->a2);
-		handle_rpc_func_cmd(ctx, optee, shm, call_ctx);
+		if (rpc_arg) {
+			arg = rpc_arg;
+		} else {
+			shm = reg_pair_to_ptr(param->a1, param->a2);
+			arg = tee_shm_get_va(shm, 0);
+			if (IS_ERR(arg)) {
+				pr_err("%s: tee_shm_get_va %p failed\n",
+				       __func__, shm);
+				break;
+			}
+		}
+
+		handle_rpc_func_cmd(ctx, optee, arg, call_ctx);
 		break;
 	default:
 		pr_warn("Unknown RPC func 0x%x\n",
···
 /**
  * optee_smc_do_call_with_arg() - Do an SMC to OP-TEE in secure world
  * @ctx:	calling context
- * @arg:	shared memory holding the message to pass to secure world
+ * @shm:	shared memory holding the message to pass to secure world
+ * @offs:	offset of the message in @shm
  *
  * Does and SMC to OP-TEE in secure world and handles eventual resulting
  * Remote Procedure Calls (RPC) from OP-TEE.
···
  * Returns return code from secure world, 0 is OK
  */
 static int optee_smc_do_call_with_arg(struct tee_context *ctx,
-				      struct tee_shm *arg)
+				      struct tee_shm *shm, u_int offs)
 {
 	struct optee *optee = tee_get_drvdata(ctx->teedev);
 	struct optee_call_waiter w;
 	struct optee_rpc_param param = { };
 	struct optee_call_ctx call_ctx = { };
-	phys_addr_t parg;
+	struct optee_msg_arg *rpc_arg = NULL;
 	int rc;
 
-	rc = tee_shm_get_pa(arg, 0, &parg);
-	if (rc)
-		return rc;
+	if (optee->rpc_param_count) {
+		struct optee_msg_arg *arg;
+		unsigned int rpc_arg_offs;
 
-	param.a0 = OPTEE_SMC_CALL_WITH_ARG;
-	reg_pair_from_64(&param.a1, &param.a2, parg);
+		arg = tee_shm_get_va(shm, offs);
+		if (IS_ERR(arg))
+			return PTR_ERR(arg);
+
+		rpc_arg_offs = OPTEE_MSG_GET_ARG_SIZE(arg->num_params);
+		rpc_arg = tee_shm_get_va(shm, offs + rpc_arg_offs);
+		if (IS_ERR(arg))
+			return PTR_ERR(arg);
+	}
+
+	if (rpc_arg && tee_shm_is_dynamic(shm)) {
+		param.a0 = OPTEE_SMC_CALL_WITH_REGD_ARG;
+		reg_pair_from_64(&param.a1, &param.a2, (u_long)shm);
+		param.a3 = offs;
+	} else {
+		phys_addr_t parg;
+
+		rc = tee_shm_get_pa(shm, offs, &parg);
+		if (rc)
+			return rc;
+
+		if (rpc_arg)
+			param.a0 = OPTEE_SMC_CALL_WITH_RPC_ARG;
+		else
+			param.a0 = OPTEE_SMC_CALL_WITH_ARG;
+		reg_pair_from_64(&param.a1, &param.a2, parg);
+	}
 	/* Initialize waiter */
 	optee_cq_wait_init(&optee->call_queue, &w);
 	while (true) {
···
 			param.a1 = res.a1;
 			param.a2 = res.a2;
 			param.a3 = res.a3;
-			optee_handle_rpc(ctx, &param, &call_ctx);
+			optee_handle_rpc(ctx, rpc_arg, &param, &call_ctx);
 		} else {
 			rc = res.a0;
 			break;
···
 
 static int simple_call_with_arg(struct tee_context *ctx, u32 cmd)
 {
+	struct optee_shm_arg_entry *entry;
 	struct optee_msg_arg *msg_arg;
 	struct tee_shm *shm;
+	u_int offs;
 
-	shm = optee_get_msg_arg(ctx, 0, &msg_arg);
-	if (IS_ERR(shm))
-		return PTR_ERR(shm);
+	msg_arg = optee_get_msg_arg(ctx, 0, &entry, &shm, &offs);
+	if (IS_ERR(msg_arg))
+		return PTR_ERR(msg_arg);
 
 	msg_arg->cmd = cmd;
-	optee_smc_do_call_with_arg(ctx, shm);
+	optee_smc_do_call_with_arg(ctx, shm, offs);
 
-	tee_shm_free(shm);
+	optee_free_msg_arg(ctx, entry, offs);
 	return 0;
 }
···
 }
 
 static bool optee_msg_exchange_capabilities(optee_invoke_fn *invoke_fn,
-					    u32 *sec_caps, u32 *max_notif_value)
+					    u32 *sec_caps, u32 *max_notif_value,
+					    unsigned int *rpc_param_count)
 {
 	union {
 		struct arm_smccc_res smccc;
···
 		*max_notif_value = res.result.max_notif_value;
 	else
 		*max_notif_value = OPTEE_DEFAULT_MAX_NOTIF_VALUE;
+	if (*sec_caps & OPTEE_SMC_SEC_CAP_RPC_ARG)
+		*rpc_param_count = (u8)res.result.data;
+	else
+		*rpc_param_count = 0;
 
 	return true;
 }
···
 	 * reference counters and also avoid wild pointers in secure world
 	 * into the old shared memory range.
 	 */
-	optee_disable_shm_cache(optee);
+	if (!optee->rpc_param_count)
+		optee_disable_shm_cache(optee);
 
 	optee_smc_notif_uninit_irq(optee);
 
···
  */
 static void optee_shutdown(struct platform_device *pdev)
 {
-	optee_disable_shm_cache(platform_get_drvdata(pdev));
+	struct optee *optee = platform_get_drvdata(pdev);
+
+	if (!optee->rpc_param_count)
+		optee_disable_shm_cache(optee);
 }
 
 static int optee_probe(struct platform_device *pdev)
···
 	struct tee_shm_pool *pool = ERR_PTR(-EINVAL);
 	struct optee *optee = NULL;
 	void *memremaped_shm = NULL;
+	unsigned int rpc_param_count;
 	struct tee_device *teedev;
 	struct tee_context *ctx;
 	u32 max_notif_value;
+	u32 arg_cache_flags;
 	u32 sec_caps;
 	int rc;
 
···
 	}
 
 	if (!optee_msg_exchange_capabilities(invoke_fn, &sec_caps,
-					     &max_notif_value)) {
+					     &max_notif_value,
+					     &rpc_param_count)) {
 		pr_warn("capabilities mismatch\n");
 		return -EINVAL;
 	}
···
 	/*
 	 * Try to use dynamic shared memory if possible
 	 */
-	if (sec_caps & OPTEE_SMC_SEC_CAP_DYNAMIC_SHM)
+	if (sec_caps & OPTEE_SMC_SEC_CAP_DYNAMIC_SHM) {
+		/*
+		 * If we have OPTEE_SMC_SEC_CAP_RPC_ARG we can ask
+		 * optee_get_msg_arg() to pre-register (by having
+		 * OPTEE_SHM_ARG_ALLOC_PRIV cleared) the page used to pass
+		 * an argument struct.
+		 *
+		 * With the page is pre-registered we can use a non-zero
+		 * offset for argument struct, this is indicated with
+		 * OPTEE_SHM_ARG_SHARED.
+		 *
+		 * This means that optee_smc_do_call_with_arg() will use
+		 * OPTEE_SMC_CALL_WITH_REGD_ARG for pre-registered pages.
+		 */
+		if (sec_caps & OPTEE_SMC_SEC_CAP_RPC_ARG)
+			arg_cache_flags = OPTEE_SHM_ARG_SHARED;
+		else
+			arg_cache_flags = OPTEE_SHM_ARG_ALLOC_PRIV;
+
 		pool = optee_shm_pool_alloc_pages();
+	}
 
 	/*
 	 * If dynamic shared memory is not available or failed - try static one
 	 */
-	if (IS_ERR(pool) && (sec_caps & OPTEE_SMC_SEC_CAP_HAVE_RESERVED_SHM))
+	if (IS_ERR(pool) && (sec_caps & OPTEE_SMC_SEC_CAP_HAVE_RESERVED_SHM)) {
+		/*
+		 * The static memory pool can use non-zero page offsets so
+		 * let optee_get_msg_arg() know that with OPTEE_SHM_ARG_SHARED.
+		 *
+		 * optee_get_msg_arg() should not pre-register the
+		 * allocated page used to pass an argument struct, this is
+		 * indicated with OPTEE_SHM_ARG_ALLOC_PRIV.
+		 *
+		 * This means that optee_smc_do_call_with_arg() will use
+		 * OPTEE_SMC_CALL_WITH_ARG if rpc_param_count is 0, else
+		 * OPTEE_SMC_CALL_WITH_RPC_ARG.
+		 */
+		arg_cache_flags = OPTEE_SHM_ARG_SHARED |
+				  OPTEE_SHM_ARG_ALLOC_PRIV;
 		pool = optee_config_shm_memremap(invoke_fn, &memremaped_shm);
+	}
 
 	if (IS_ERR(pool))
 		return PTR_ERR(pool);
···
 	optee->ops = &optee_ops;
 	optee->smc.invoke_fn = invoke_fn;
 	optee->smc.sec_caps = sec_caps;
+	optee->rpc_param_count = rpc_param_count;
 
 	teedev = tee_device_alloc(&optee_clnt_desc, NULL, pool, optee);
 	if (IS_ERR(teedev)) {
···
 	optee_supp_init(&optee->supp);
 	optee->smc.memremaped_shm = memremaped_shm;
 	optee->pool = pool;
+	optee_shm_arg_cache_init(optee, arg_cache_flags);
 
 	platform_set_drvdata(pdev, optee);
 	ctx = teedev_open(optee->teedev);
···
 	 */
 	optee_disable_unmapped_shm_cache(optee);
 
-	optee_enable_shm_cache(optee);
+	/*
+	 * Only enable the shm cache in case we're not able to pass the RPC
+	 * arg struct right after the normal arg struct.
+	 */
+	if (!optee->rpc_param_count)
+		optee_enable_shm_cache(optee);
 
 	if (optee->smc.sec_caps & OPTEE_SMC_SEC_CAP_DYNAMIC_SHM)
 		pr_info("dynamic shared memory is enabled\n");
···
 	return 0;
 
 err_disable_shm_cache:
-	optee_disable_shm_cache(optee);
+	if (!optee->rpc_param_count)
+		optee_disable_shm_cache(optee);
 	optee_smc_notif_uninit_irq(optee);
 	optee_unregister_devices();
 err_notif_uninit:
···
 err_close_ctx:
 	teedev_close_context(ctx);
 err_supp_uninit:
+	optee_shm_arg_cache_uninit(optee);
 	optee_supp_uninit(&optee->supp);
 	mutex_destroy(&optee->call_queue.mutex);
 err_unreg_supp_teedev:
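The core of the reworked `optee_smc_do_call_with_arg()` is a three-way choice of SMC function ID: cookie+offset for a dynamic (registered) shm that carries a trailing RPC arg struct, physical pointer plus RPC arg otherwise, and the legacy single-arg call when no RPC params were negotiated. A condensed sketch of just that selection (enum values stand in for the real `OPTEE_SMC_CALL_WITH_*` constants):

```c
#include <stdbool.h>

enum call_id {
	CALL_WITH_ARG,       /* legacy: phys pointer, separate RPC shm */
	CALL_WITH_RPC_ARG,   /* phys pointer, RPC arg appended in same buf */
	CALL_WITH_REGD_ARG,  /* shm cookie + offset, RPC arg appended */
};

static enum call_id pick_call_id(bool have_rpc_arg, bool shm_is_dynamic)
{
	if (have_rpc_arg && shm_is_dynamic)
		return CALL_WITH_REGD_ARG;
	if (have_rpc_arg)
		return CALL_WITH_RPC_ARG;
	return CALL_WITH_ARG;
}
```

This is also why the probe/remove/shutdown hunks gate `optee_enable_shm_cache()`/`optee_disable_shm_cache()` on `!optee->rpc_param_count`: with the RPC arg appended, the separate RPC shm cache is never needed.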
drivers/tee/tee_core.c  (-2)
···
 		return PTR_ERR(shm);
 
 	data.id = shm->id;
-	data.flags = shm->flags;
 	data.size = shm->size;
 
 	if (copy_to_user(udata, &data, sizeof(data)))
···
 		return PTR_ERR(shm);
 
 	data.id = shm->id;
-	data.flags = shm->flags;
 	data.length = shm->size;
 
 	if (copy_to_user(udata, &data, sizeof(data)))
drivers/tee/tee_shm.c  (+25 -60)
···
 static int shm_get_kernel_pages(unsigned long start, size_t page_count,
 				struct page **pages)
 {
-	struct kvec *kiov;
 	size_t n;
 	int rc;
 
-	kiov = kcalloc(page_count, sizeof(*kiov), GFP_KERNEL);
-	if (!kiov)
-		return -ENOMEM;
+	if (is_vmalloc_addr((void *)start)) {
+		struct page *page;
 
-	for (n = 0; n < page_count; n++) {
-		kiov[n].iov_base = (void *)(start + n * PAGE_SIZE);
-		kiov[n].iov_len = PAGE_SIZE;
+		for (n = 0; n < page_count; n++) {
+			page = vmalloc_to_page((void *)(start + PAGE_SIZE * n));
+			if (!page)
+				return -ENOMEM;
+
+			get_page(page);
+			pages[n] = page;
+		}
+		rc = page_count;
+	} else {
+		struct kvec *kiov;
+
+		kiov = kcalloc(page_count, sizeof(*kiov), GFP_KERNEL);
+		if (!kiov)
+			return -ENOMEM;
+
+		for (n = 0; n < page_count; n++) {
+			kiov[n].iov_base = (void *)(start + n * PAGE_SIZE);
+			kiov[n].iov_len = PAGE_SIZE;
+		}
+
+		rc = get_kernel_pages(kiov, page_count, 0, pages);
+		kfree(kiov);
 	}
-
-	rc = get_kernel_pages(kiov, page_count, 0, pages);
-	kfree(kiov);
 
 	return rc;
 }
···
 	tee_shm_put(shm);
 }
 EXPORT_SYMBOL_GPL(tee_shm_free);
-
-/**
- * tee_shm_va2pa() - Get physical address of a virtual address
- * @shm:	Shared memory handle
- * @va:		Virtual address to tranlsate
- * @pa:		Returned physical address
- * @returns 0 on success and < 0 on failure
- */
-int tee_shm_va2pa(struct tee_shm *shm, void *va, phys_addr_t *pa)
-{
-	if (!shm->kaddr)
-		return -EINVAL;
-	/* Check that we're in the range of the shm */
-	if ((char *)va < (char *)shm->kaddr)
-		return -EINVAL;
-	if ((char *)va >= ((char *)shm->kaddr + shm->size))
-		return -EINVAL;
-
-	return tee_shm_get_pa(
-			shm, (unsigned long)va - (unsigned long)shm->kaddr, pa);
-}
-EXPORT_SYMBOL_GPL(tee_shm_va2pa);
-
-/**
- * tee_shm_pa2va() - Get virtual address of a physical address
- * @shm:	Shared memory handle
- * @pa:		Physical address to tranlsate
- * @va:		Returned virtual address
- * @returns 0 on success and < 0 on failure
- */
-int tee_shm_pa2va(struct tee_shm *shm, phys_addr_t pa, void **va)
-{
-	if (!shm->kaddr)
-		return -EINVAL;
-	/* Check that we're in the range of the shm */
-	if (pa < shm->paddr)
-		return -EINVAL;
-	if (pa >= (shm->paddr + shm->size))
-		return -EINVAL;
-
-	if (va) {
-		void *v = tee_shm_get_va(shm, pa - shm->paddr);
-
-		if (IS_ERR(v))
-			return PTR_ERR(v);
-		*va = v;
-	}
-	return 0;
-}
-EXPORT_SYMBOL_GPL(tee_shm_pa2va);
 
 /**
  * tee_shm_get_va() - Get virtual address of a shared memory plus an offset
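The new vmalloc branch above pins one `struct page` per `PAGE_SIZE` step from `start`, so the caller's `page_count` must cover every page the buffer touches, including a partial first and last page. A userspace sketch of that count, assuming a 4 KiB page size purely for illustration:

```c
#include <stddef.h>

#define PAGE_SZ  4096ul
#define PAGE_MSK (~(PAGE_SZ - 1))

/* Number of pages spanned by the byte range [start, start + len). */
static size_t pages_spanned(unsigned long start, size_t len)
{
	unsigned long first = start & PAGE_MSK;
	unsigned long last = (start + len - 1) & PAGE_MSK;

	return (last - first) / PAGE_SZ + 1;
}
```

For example, a 2-byte buffer starting at the last byte of a page spans two pages, which is why the loop cannot simply divide the length by the page size.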
include/dt-bindings/power/qcom-rpmpd.h  (+26)
···
 #define SDX55_MSS	0
 #define SDX55_MX	1
 #define SDX55_CX	2
 
+/* SDX65 Power Domain Indexes */
+#define SDX65_MSS	0
+#define SDX65_MX	1
+#define SDX65_MX_AO	2
+#define SDX65_CX	3
+#define SDX65_CX_AO	4
+#define SDX65_MXC	5
+
 /* SM6350 Power Domain Indexes */
 #define SM6350_CX	0
 #define SM6350_GFX	1
···
 #define SC8180X_MSS	8
 #define SC8180X_MX	9
 #define SC8180X_MX_AO	10
+
+/* SC8280XP Power Domain Indexes */
+#define SC8280XP_CX		0
+#define SC8280XP_CX_AO		1
+#define SC8280XP_DDR		2
+#define SC8280XP_EBI		3
+#define SC8280XP_GFX		4
+#define SC8280XP_LCX		5
+#define SC8280XP_LMX		6
+#define SC8280XP_MMCX		7
+#define SC8280XP_MMCX_AO	8
+#define SC8280XP_MSS		9
+#define SC8280XP_MX		10
+#define SC8280XP_MXC		12
+#define SC8280XP_MX_AO		11
+#define SC8280XP_NSP		13
+#define SC8280XP_QPHY		14
+#define SC8280XP_XO		15
 
 /* SDM845 Power Domain performance levels */
 #define RPMH_REGULATOR_LEVEL_RETENTION	16
include/dt-bindings/reset/amlogic,meson-s4-reset.h  (+125, new file)
···
+/* SPDX-License-Identifier: (GPL-2.0+ OR MIT) */
+/*
+ * Copyright (c) 2021 Amlogic, Inc. All rights reserved.
+ * Author: Zelong Dong <zelong.dong@amlogic.com>
+ *
+ */
+
+#ifndef _DT_BINDINGS_AMLOGIC_MESON_S4_RESET_H
+#define _DT_BINDINGS_AMLOGIC_MESON_S4_RESET_H
+
+/* RESET0 */
+#define RESET_USB_DDR0		0
+#define RESET_USB_DDR1		1
+#define RESET_USB_DDR2		2
+#define RESET_USB_DDR3		3
+#define RESET_USBCTRL		4
+/* 5-7 */
+#define RESET_USBPHY20		8
+#define RESET_USBPHY21		9
+/* 10-15 */
+#define RESET_HDMITX_APB	16
+#define RESET_BRG_VCBUS_DEC	17
+#define RESET_VCBUS		18
+#define RESET_VID_PLL_DIV	19
+#define RESET_VDI6		20
+#define RESET_GE2D		21
+#define RESET_HDMITXPHY		22
+#define RESET_VID_LOCK		23
+#define RESET_VENCL		24
+#define RESET_VDAC		25
+#define RESET_VENCP		26
+#define RESET_VENCI		27
+#define RESET_RDMA		28
+#define RESET_HDMI_TX		29
+#define RESET_VIU		30
+#define RESET_VENC		31
+
+/* RESET1 */
+#define RESET_AUDIO		32
+#define RESET_MALI_APB		33
+#define RESET_MALI		34
+#define RESET_DDR_APB		35
+#define RESET_DDR		36
+#define RESET_DOS_APB		37
+#define RESET_DOS		38
+/* 39-47 */
+#define RESET_ETH		48
+/* 49-51 */
+#define RESET_DEMOD		52
+/* 53-63 */
+
+/* RESET2 */
+#define RESET_ABUS_ARB		64
+#define RESET_IR_CTRL		65
+#define RESET_TEMPSENSOR_DDR	66
+#define RESET_TEMPSENSOR_PLL	67
+/* 68-71 */
+#define RESET_SMART_CARD	72
+#define RESET_SPICC0		73
+/* 74 */
+#define RESET_RSA		75
+/* 76-79 */
+#define RESET_MSR_CLK		80
+#define RESET_SPIFC		81
+#define RESET_SARADC		82
+/* 83-87 */
+#define RESET_ACODEC		88
+#define RESET_CEC		89
+#define RESET_AFIFO		90
+#define RESET_WATCHDOG		91
+/* 92-95 */
+
+/* RESET3 */
+/* 96-127 */
+
+/* RESET4 */
+/* 128-131 */
+#define RESET_PWM_AB		132
+#define RESET_PWM_CD		133
+#define RESET_PWM_EF		134
+#define RESET_PWM_GH		135
+#define RESET_PWM_IJ		136
+/* 137 */
+#define RESET_UART_A		138
+#define RESET_UART_B		139
+#define RESET_UART_C		140
+#define RESET_UART_D		141
+#define RESET_UART_E		142
+/* 143 */
+#define RESET_I2C_S_A		144
+#define RESET_I2C_M_A		145
+#define RESET_I2C_M_B		146
+#define RESET_I2C_M_C		147
+#define RESET_I2C_M_D		148
+#define RESET_I2C_M_E		149
+/* 150-151 */
+#define RESET_SD_EMMC_A		152
+#define RESET_SD_EMMC_B		153
+#define RESET_NAND_EMMC		154
+/* 155-159 */
+
+/* RESET5 */
+#define RESET_BRG_VDEC_PIPL0	160
+#define RESET_BRG_HEVCF_PIPL0	161
+/* 162 */
+#define RESET_BRG_HCODEC_PIPL0	163
+#define RESET_BRG_GE2D_PIPL0	164
+#define RESET_BRG_VPU_PIPL0	165
+#define RESET_BRG_CPU_PIPL0	166
+#define RESET_BRG_MALI_PIPL0	167
+/* 168 */
+#define RESET_BRG_MALI_PIPL1	169
+/* 170-171 */
+#define RESET_BRG_HEVCF_PIPL1	172
+#define RESET_BRG_HEVCB_PIPL1	173
+/* 174-183 */
+#define RESET_RAMA		184
+/* 185-186 */
+#define RESET_BRG_NIC_VAPB	187
+#define RESET_BRG_NIC_DSU	188
+#define RESET_BRG_NIC_SYSCLK	189
+#define RESET_BRG_NIC_MAIN	190
+#define RESET_BRG_NIC_ALL	191
+
+#endif
+6 -1
include/linux/arm_ffa.h
···
38 38
39 39 static inline void ffa_dev_set_drvdata(struct ffa_device *fdev, void *data)
40 40 {
41    -     fdev->dev.driver_data = data;
   41 +     dev_set_drvdata(&fdev->dev, data);
   42 + }
   43 +
   44 + static inline void *ffa_dev_get_drvdata(struct ffa_device *fdev)
   45 + {
   46 +     return dev_get_drvdata(&fdev->dev);
42 47 }
43 48
44 49 #if IS_REACHABLE(CONFIG_ARM_FFA_TRANSPORT)
+25 -6
include/linux/scmi_protocol.h
···
13 13 #include <linux/notifier.h>
14 14 #include <linux/types.h>
15 15
16    - #define SCMI_MAX_STR_SIZE 16
   16 + #define SCMI_MAX_STR_SIZE 64
17 17 #define SCMI_MAX_NUM_RATES 16
18 18
19 19 /**
···
44 44     char name[SCMI_MAX_STR_SIZE];
45 45     unsigned int enable_latency;
46 46     bool rate_discrete;
   47 +     bool rate_changed_notifications;
   48 +     bool rate_change_requested_notifications;
47 49     union {
48 50         struct {
49 51             int num_rates;
···
148 146  */
149 147 struct scmi_power_proto_ops {
150 148     int (*num_domains_get)(const struct scmi_protocol_handle *ph);
151     -     char *(*name_get)(const struct scmi_protocol_handle *ph, u32 domain);
    149 +     const char *(*name_get)(const struct scmi_protocol_handle *ph,
    150 +                             u32 domain);
152 151 #define SCMI_POWER_STATE_TYPE_SHIFT 30
153 152 #define SCMI_POWER_STATE_ID_MASK (BIT(28) - 1)
154 153 #define SCMI_POWER_STATE_PARAM(type, id) \
···
487 484  */
488 485 struct scmi_reset_proto_ops {
489 486     int (*num_domains_get)(const struct scmi_protocol_handle *ph);
490     -     char *(*name_get)(const struct scmi_protocol_handle *ph, u32 domain);
    487 +     const char *(*name_get)(const struct scmi_protocol_handle *ph,
    488 +                             u32 domain);
491 489     int (*latency_get)(const struct scmi_protocol_handle *ph, u32 domain);
492 490     int (*reset)(const struct scmi_protocol_handle *ph, u32 domain);
493 491     int (*assert)(const struct scmi_protocol_handle *ph, u32 domain);
494 492     int (*deassert)(const struct scmi_protocol_handle *ph, u32 domain);
    493 + };
    494 +
    495 + enum scmi_voltage_level_mode {
    496 +     SCMI_VOLTAGE_LEVEL_SET_AUTO,
    497 +     SCMI_VOLTAGE_LEVEL_SET_SYNC,
495 498 };
496 499
497 500 /**
···
512 503  *     supported voltage level
513 504  * @negative_volts_allowed: True if any of the entries of @levels_uv represent
514 505  *     a negative voltage.
515     -  * @attributes: represents Voltage Domain advertised attributes
    506 +  * @async_level_set: True when the voltage domain supports asynchronous level
    507 +  *     set commands.
516 508  * @name: name assigned to the Voltage Domain by platform
517 509  * @num_levels: number of total entries in @levels_uv.
518 510  * @levels_uv: array of entries describing the available voltage levels for
···
523 513     unsigned int id;
524 514     bool segmented;
525 515     bool negative_volts_allowed;
526     -     unsigned int attributes;
    516 +     bool async_level_set;
527 517     char name[SCMI_MAX_STR_SIZE];
528 518     unsigned int num_levels;
529 519 #define SCMI_VOLTAGE_SEGMENT_LOW 0
···
554 544     int (*config_get)(const struct scmi_protocol_handle *ph, u32 domain_id,
555 545                       u32 *config);
556 546     int (*level_set)(const struct scmi_protocol_handle *ph, u32 domain_id,
557     -                      u32 flags, s32 volt_uV);
    547 +                      enum scmi_voltage_level_mode mode, s32 volt_uV);
558 548     int (*level_get)(const struct scmi_protocol_handle *ph, u32 domain_id,
559 549                      s32 *volt_uV);
560 550 };
···
752 742 /* SCMI Notification API - Custom Event Reports */
753 743 enum scmi_notification_events {
754 744     SCMI_EVENT_POWER_STATE_CHANGED = 0x0,
    745 +     SCMI_EVENT_CLOCK_RATE_CHANGED = 0x0,
    746 +     SCMI_EVENT_CLOCK_RATE_CHANGE_REQUESTED = 0x1,
755 747     SCMI_EVENT_PERFORMANCE_LIMITS_CHANGED = 0x0,
756 748     SCMI_EVENT_PERFORMANCE_LEVEL_CHANGED = 0x1,
757 749     SCMI_EVENT_SENSOR_TRIP_POINT_EVENT = 0x0,
···
768 756     unsigned int agent_id;
769 757     unsigned int domain_id;
770 758     unsigned int power_state;
    759 + };
    760 +
    761 + struct scmi_clock_rate_notif_report {
    762 +     ktime_t timestamp;
    763 +     unsigned int agent_id;
    764 +     unsigned int clock_id;
    765 +     unsigned long long rate;
771 766 };
772 767
773 768 struct scmi_system_power_state_notifier_report {
+155
include/linux/soc/apple/rtkit.h
···
  1 + /* SPDX-License-Identifier: GPL-2.0-only OR MIT */
  2 + /*
  3 +  * Apple RTKit IPC Library
  4 +  * Copyright (C) The Asahi Linux Contributors
  5 +  *
  6 +  * Apple's SoCs come with various co-processors running their RTKit operating
  7 +  * system. This protocol library is used by client drivers to use the
  8 +  * features provided by them.
  9 +  */
 10 + #ifndef _LINUX_APPLE_RTKIT_H_
 11 + #define _LINUX_APPLE_RTKIT_H_
 12 +
 13 + #include <linux/device.h>
 14 + #include <linux/types.h>
 15 + #include <linux/mailbox_client.h>
 16 +
 17 + /*
 18 +  * Struct to represent a shared memory buffer shared with a co-processor.
 19 +  *
 20 +  * @buffer: Shared memory buffer allocated inside normal RAM.
 21 +  * @iomem: Shared memory buffer controlled by the co-processors.
 22 +  * @size: Size of the shared memory buffer.
 23 +  * @iova: Device VA of shared memory buffer.
 24 +  * @is_mapped: Shared memory buffer is managed by the co-processor.
 25 +  */
 26 +
 27 + struct apple_rtkit_shmem {
 28 +     void *buffer;
 29 +     void __iomem *iomem;
 30 +     size_t size;
 31 +     dma_addr_t iova;
 32 +     bool is_mapped;
 33 + };
 34 +
 35 + /*
 36 +  * Struct to represent implementation-specific RTKit operations.
 37 +  *
 38 +  * @crashed: Called when the co-processor has crashed. Runs in process
 39 +  *     context.
 40 +  * @recv_message: Function called when a message from RTKit is received
 41 +  *     on a non-system endpoint. Called from a worker thread.
 42 +  * @recv_message_early:
 43 +  *     Like recv_message, but called from atomic context. It
 44 +  *     should return true if it handled the message. If it
 45 +  *     returns false, the message will be passed on to the
 46 +  *     worker thread.
 47 +  * @shmem_setup: Setup shared memory buffer. If bfr.is_iomem is true the
 48 +  *     buffer is managed by the co-processor and needs to be mapped.
 49 +  *     Otherwise the buffer is managed by Linux and needs to be
 50 +  *     allocated. If not specified dma_alloc_coherent is used.
 51 +  *     Called in process context.
 52 +  * @shmem_destroy: Undo the shared memory buffer setup in shmem_setup. If not
 53 +  *     specified dma_free_coherent is used. Called in process
 54 +  *     context.
 55 +  */
 56 + struct apple_rtkit_ops {
 57 +     void (*crashed)(void *cookie);
 58 +     void (*recv_message)(void *cookie, u8 endpoint, u64 message);
 59 +     bool (*recv_message_early)(void *cookie, u8 endpoint, u64 message);
 60 +     int (*shmem_setup)(void *cookie, struct apple_rtkit_shmem *bfr);
 61 +     void (*shmem_destroy)(void *cookie, struct apple_rtkit_shmem *bfr);
 62 + };
 63 +
 64 + struct apple_rtkit;
 65 +
 66 + /*
 67 +  * Initializes the internal state required to handle RTKit. This
 68 +  * should usually be called within _probe.
 69 +  *
 70 +  * @dev: Pointer to the device node this co-processor is associated with
 71 +  * @cookie: opaque cookie passed to all functions defined in rtkit_ops
 72 +  * @mbox_name: mailbox name used to communicate with the co-processor
 73 +  * @mbox_idx: mailbox index to be used if mbox_name is NULL
 74 +  * @ops: pointer to rtkit_ops to be used for this co-processor
 75 +  */
 76 + struct apple_rtkit *devm_apple_rtkit_init(struct device *dev, void *cookie,
 77 +                                           const char *mbox_name, int mbox_idx,
 78 +                                           const struct apple_rtkit_ops *ops);
 79 +
 80 + /*
 81 +  * Reinitialize internal structures. Must only be called while the co-processor
 82 +  * is held in reset.
 83 +  */
 84 + int apple_rtkit_reinit(struct apple_rtkit *rtk);
 85 +
 86 + /*
 87 +  * Handle RTKit's boot process. Should be called after the CPU of the
 88 +  * co-processor has been started.
 89 +  */
 90 + int apple_rtkit_boot(struct apple_rtkit *rtk);
 91 +
 92 + /*
 93 +  * Quiesce the co-processor.
 94 +  */
 95 + int apple_rtkit_quiesce(struct apple_rtkit *rtk);
 96 +
 97 + /*
 98 +  * Wake the co-processor up from hibernation mode.
 99 +  */
100 + int apple_rtkit_wake(struct apple_rtkit *rtk);
101 +
102 + /*
103 +  * Shut down the co-processor.
104 +  */
105 + int apple_rtkit_shutdown(struct apple_rtkit *rtk);
106 +
107 + /*
108 +  * Checks if RTKit is running and ready to handle messages.
109 +  */
110 + bool apple_rtkit_is_running(struct apple_rtkit *rtk);
111 +
112 + /*
113 +  * Checks if RTKit has crashed.
114 +  */
115 + bool apple_rtkit_is_crashed(struct apple_rtkit *rtk);
116 +
117 + /*
118 +  * Starts an endpoint. Must be called after boot but before any messages can be
119 +  * sent or received from that endpoint.
120 +  */
121 + int apple_rtkit_start_ep(struct apple_rtkit *rtk, u8 endpoint);
122 +
123 + /*
124 +  * Send a message to the given endpoint.
125 +  *
126 +  * @rtk: RTKit reference
127 +  * @ep: target endpoint
128 +  * @message: message to be sent
129 +  * @completion: will be completed once the message has been submitted
130 +  *     to the hardware FIFO. Can be NULL.
131 +  * @atomic: if set to true this function can be called from atomic
132 +  *     context.
133 +  */
134 + int apple_rtkit_send_message(struct apple_rtkit *rtk, u8 ep, u64 message,
135 +                              struct completion *completion, bool atomic);
136 +
137 + /*
138 +  * Send a message to the given endpoint and wait until it has been submitted
139 +  * to the hardware FIFO.
140 +  * Will return zero on success and a negative error code on failure
141 +  * (e.g. -ETIME when the message couldn't be written within the given
142 +  * timeout)
143 +  *
144 +  * @rtk: RTKit reference
145 +  * @ep: target endpoint
146 +  * @message: message to be sent
147 +  * @timeout: timeout in milliseconds to allow the message transmission
148 +  *     to be completed
149 +  * @atomic: if set to true this function can be called from atomic
150 +  *     context.
151 +  */
152 + int apple_rtkit_send_message_wait(struct apple_rtkit *rtk, u8 ep, u64 message,
153 +                                   unsigned long timeout, bool atomic);
154 +
155 + #endif /* _LINUX_APPLE_RTKIT_H_ */
+53
include/linux/soc/apple/sart.h
···
 1 + /* SPDX-License-Identifier: GPL-2.0-only OR MIT */
 2 + /*
 3 +  * Apple SART device driver
 4 +  * Copyright (C) The Asahi Linux Contributors
 5 +  *
 6 +  * Apple SART is a simple address filter for DMA transactions.
 7 +  * Regions of physical memory must be added to the SART's allow
 8 +  * list before any DMA can target them. Unlike a proper
 9 +  * IOMMU, no remapping can be done.
10 +  */
11 +
12 + #ifndef _LINUX_SOC_APPLE_SART_H_
13 + #define _LINUX_SOC_APPLE_SART_H_
14 +
15 + #include <linux/device.h>
16 + #include <linux/err.h>
17 + #include <linux/types.h>
18 +
19 + struct apple_sart;
20 +
21 + /*
22 +  * Get a reference to the SART attached to dev.
23 +  *
24 +  * Looks for the phandle reference in apple,sart and returns a pointer
25 +  * to the corresponding apple_sart struct to be used with
26 +  * apple_sart_add_allowed_region and apple_sart_remove_allowed_region.
27 +  */
28 + struct apple_sart *devm_apple_sart_get(struct device *dev);
29 +
30 + /*
31 +  * Adds the region [paddr, paddr+size] to the DMA allow list.
32 +  *
33 +  * @sart: SART reference
34 +  * @paddr: Start address of the region to be used for DMA
35 +  * @size: Size of the region to be used for DMA.
36 +  */
37 + int apple_sart_add_allowed_region(struct apple_sart *sart, phys_addr_t paddr,
38 +                                   size_t size);
39 +
40 + /*
41 +  * Removes the region [paddr, paddr+size] from the DMA allow list.
42 +  *
43 +  * Note that the exact same paddr and size used for
44 +  * apple_sart_add_allowed_region have to be passed.
45 +  *
46 +  * @sart: SART reference
47 +  * @paddr: Start address of the region no longer used for DMA
48 +  * @size: Size of the region no longer used for DMA.
49 +  */
50 + int apple_sart_remove_allowed_region(struct apple_sart *sart, phys_addr_t paddr,
51 +                                      size_t size);
52 +
53 + #endif /* _LINUX_SOC_APPLE_SART_H_ */
+2
include/linux/soc/qcom/llcc-qcom.h
···
29 29 #define LLCC_AUDHW 22
30 30 #define LLCC_NPU 23
31 31 #define LLCC_WLHW 24
   32 + #define LLCC_PIMEM 25
   33 + #define LLCC_DRE 26
32 34 #define LLCC_CVP 28
33 35 #define LLCC_MODPE 29
34 36 #define LLCC_APTCM 30
-18
include/linux/tee_drv.h
···
299 299 void tee_shm_put(struct tee_shm *shm);
300 300
301 301 /**
302     -  * tee_shm_va2pa() - Get physical address of a virtual address
303     -  * @shm: Shared memory handle
304     -  * @va: Virtual address to tranlsate
305     -  * @pa: Returned physical address
306     -  * @returns 0 on success and < 0 on failure
307     -  */
308     - int tee_shm_va2pa(struct tee_shm *shm, void *va, phys_addr_t *pa);
309     -
310     - /**
311     -  * tee_shm_pa2va() - Get virtual address of a physical address
312     -  * @shm: Shared memory handle
313     -  * @pa: Physical address to tranlsate
314     -  * @va: Returned virtual address
315     -  * @returns 0 on success and < 0 on failure
316     -  */
317     - int tee_shm_pa2va(struct tee_shm *shm, phys_addr_t pa, void **va);
318     -
319     - /**
320 302  * tee_shm_get_va() - Get virtual address of a shared memory plus an offset
321 303  * @shm: Shared memory handle
322 304  * @offs: Offset from start of this shared memory
+13
include/linux/wkup_m3_ipc.h
···
33 33
34 34     int mem_type;
35 35     unsigned long resume_addr;
   36 +     int vtt_conf;
   37 +     int isolation_conf;
36 38     int state;
   39 +     u32 halt;
   40 +
   41 +     unsigned long volt_scale_offsets;
   42 +     const char *sd_fw_name;
37 43
38 44     struct completion sync_complete;
39 45     struct mbox_client mbox_client;
···
47 41
48 42     struct wkup_m3_ipc_ops *ops;
49 43     int is_rtc_only;
   44 +     struct dentry *dbg_path;
50 45 };
51 46
52 47 struct wkup_m3_wakeup_src {
53 48     int irq_nr;
54 49     char src[10];
55 50 };
   51 +
   52 + struct wkup_m3_scale_data_header {
   53 +     u16 magic;
   54 +     u8 sleep_offset;
   55 +     u8 wake_offset;
   56 + } __packed;
56 57
57 58 struct wkup_m3_ipc_ops {
58 59     void (*set_mem_type)(struct wkup_m3_ipc *m3_ipc, int mem_type);
+7 -1
include/soc/tegra/mc.h
···
193 193     unsigned int num_address_bits;
194 194     unsigned int atom_size;
195 195
196     -     u8 client_id_mask;
    196 +     u16 client_id_mask;
    197 +     u8 num_channels;
197 198
198 199     const struct tegra_smmu_soc *smmu;
199 200
200 201     u32 intmask;
    202 +     u32 ch_intmask;
    203 +     u32 global_intstatus_channel_shift;
    204 +     bool has_addr_hi_reg;
201 205
202 206     const struct tegra_mc_reset_ops *reset_ops;
203 207     const struct tegra_mc_reset *resets;
···
216 212     struct tegra_smmu *smmu;
217 213     struct gart_device *gart;
218 214     void __iomem *regs;
    215 +     void __iomem *bcast_ch_regs;
    216 +     void __iomem **ch_regs;
219 217     struct clk *clk;
220 218     int irq;
221 219
-4
include/uapi/linux/tee.h
···
42 42 #define TEE_IOC_MAGIC 0xa4
43 43 #define TEE_IOC_BASE 0
44 44
45     - /* Flags relating to shared memory */
46     - #define TEE_IOCTL_SHM_MAPPED 0x1 /* memory mapped in normal world */
47     - #define TEE_IOCTL_SHM_DMA_BUF 0x2 /* dma-buf handle on shared memory */
48     -
49 45 #define TEE_MAX_ARG_SIZE 1024
50 46
51 47 #define TEE_GEN_CAP_GP (1 << 0) /* GlobalPlatform compliant TEE */