Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'soc-drivers-6.5' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc

Pull ARM SoC driver updates from Arnd Bergmann:
"Nothing surprising in the SoC specific drivers, with the usual
updates:

- Added or improved SoC driver support for Tegra234, Exynos4212,
RK3588, as well as multiple Mediatek and Qualcomm chips

- SCMI firmware gains support for multiple SMC/HVC transports and
version 3.2 of the protocol

- Cleanups and minor changes for the reset controller, memory
controller, firmware and sram drivers

- Minor changes to amd/xilinx, samsung, tegra, nxp, ti, qualcomm,
amlogic and renesas SoC specific drivers"

* tag 'soc-drivers-6.5' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (118 commits)
dt-bindings: interrupt-controller: Convert Amlogic Meson GPIO interrupt controller binding
MAINTAINERS: add PHY-related files to Amlogic SoC file list
drivers: meson: secure-pwrc: always enable DMA domain
tee: optee: Use kmemdup() to replace kmalloc + memcpy
soc: qcom: geni-se: Do not bother about enable/disable of interrupts in secondary sequencer
dt-bindings: sram: qcom,imem: document qdu1000
soc: qcom: icc-bwmon: Fix MSM8998 count unit
dt-bindings: soc: qcom,rpmh-rsc: Require power-domains
soc: qcom: socinfo: Add Soc ID for IPQ5300
dt-bindings: arm: qcom,ids: add SoC ID for IPQ5300
soc: qcom: Fix a IS_ERR() vs NULL bug in probe
soc: qcom: socinfo: Add support for new fields in revision 19
soc: qcom: socinfo: Add support for new fields in revision 18
dt-bindings: firmware: scm: Add compatible for SDX75
soc: qcom: mdt_loader: Fix split image detection
dt-bindings: memory-controllers: drop unneeded quotes
soc: rockchip: dtpm: use C99 array init syntax
firmware: tegra: bpmp: Add support for DRAM MRQ GSCs
soc/tegra: pmc: Use devm_clk_notifier_register()
soc/tegra: pmc: Simplify debugfs initialization
...

+3086 -944
+7 -1
Documentation/devicetree/bindings/firmware/arm,scmi.yaml
···
      - description: SCMI compliant firmware with ARM SMC/HVC transport
        items:
          - const: arm,scmi-smc
+     - description: SCMI compliant firmware with ARM SMC/HVC transport
+         with shmem address(4KB-page, offset) as parameters
+       items:
+         - const: arm,scmi-smc-param
      - description: SCMI compliant firmware with SCMI Virtio transport.
          The virtio transport only supports a single device.
        items:
···
    properties:
      compatible:
        contains:
-         const: arm,scmi-smc
+         enum:
+           - arm,scmi-smc
+           - arm,scmi-smc-param
    then:
      required:
        - arm,smc-id
+1
Documentation/devicetree/bindings/firmware/qcom,scm.yaml
···
          - qcom,scm-sdm845
          - qcom,scm-sdx55
          - qcom,scm-sdx65
+         - qcom,scm-sdx75
          - qcom,scm-sm6115
          - qcom,scm-sm6125
          - qcom,scm-sm6350
-38
Documentation/devicetree/bindings/interrupt-controller/amlogic,meson-gpio-intc.txt
···
- Amlogic meson GPIO interrupt controller
-
- Meson SoCs contains an interrupt controller which is able to watch the SoC
- pads and generate an interrupt on edge or level. The controller is essentially
- a 256 pads to 8 GIC interrupt multiplexer, with a filter block to select edge
- or level and polarity. It does not expose all 256 mux inputs because the
- documentation shows that the upper part is not mapped to any pad. The actual
- number of interrupt exposed depends on the SoC.
-
- Required properties:
-
- - compatible : must have "amlogic,meson8-gpio-intc" and either
-     "amlogic,meson8-gpio-intc" for meson8 SoCs (S802) or
-     "amlogic,meson8b-gpio-intc" for meson8b SoCs (S805) or
-     "amlogic,meson-gxbb-gpio-intc" for GXBB SoCs (S905) or
-     "amlogic,meson-gxl-gpio-intc" for GXL SoCs (S905X, S912)
-     "amlogic,meson-axg-gpio-intc" for AXG SoCs (A113D, A113X)
-     "amlogic,meson-g12a-gpio-intc" for G12A SoCs (S905D2, S905X2, S905Y2)
-     "amlogic,meson-sm1-gpio-intc" for SM1 SoCs (S905D3, S905X3, S905Y3)
-     "amlogic,meson-a1-gpio-intc" for A1 SoCs (A113L)
-     "amlogic,meson-s4-gpio-intc" for S4 SoCs (S802X2, S905Y4, S805X2G, S905W2)
- - reg : Specifies base physical address and size of the registers.
- - interrupt-controller : Identifies the node as an interrupt controller.
- - #interrupt-cells : Specifies the number of cells needed to encode an
-     interrupt source. The value must be 2.
- - meson,channel-interrupts: Array with the 8 upstream hwirq numbers. These
-     are the hwirqs used on the parent interrupt controller.
-
- Example:
-
- gpio_interrupt: interrupt-controller@9880 {
- 	compatible = "amlogic,meson-gxbb-gpio-intc",
- 		     "amlogic,meson-gpio-intc";
- 	reg = <0x0 0x9880 0x0 0x10>;
- 	interrupt-controller;
- 	#interrupt-cells = <2>;
- 	meson,channel-interrupts = <64 65 66 67 68 69 70 71>;
- };
+72
Documentation/devicetree/bindings/interrupt-controller/amlogic,meson-gpio-intc.yaml
···
+ # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+ %YAML 1.2
+ ---
+ $id: http://devicetree.org/schemas/interrupt-controller/amlogic,meson-gpio-intc.yaml#
+ $schema: http://devicetree.org/meta-schemas/core.yaml#
+
+ title: Amlogic Meson GPIO interrupt controller
+
+ maintainers:
+   - Heiner Kallweit <hkallweit1@gmail.com>
+
+ description: |
+   Meson SoCs contains an interrupt controller which is able to watch the SoC
+   pads and generate an interrupt on edge or level. The controller is essentially
+   a 256 pads to 8 or 12 GIC interrupt multiplexer, with a filter block to select
+   edge or level and polarity. It does not expose all 256 mux inputs because the
+   documentation shows that the upper part is not mapped to any pad. The actual
+   number of interrupts exposed depends on the SoC.
+
+ allOf:
+   - $ref: /schemas/interrupt-controller.yaml#
+
+ properties:
+   compatible:
+     oneOf:
+       - const: amlogic,meson-gpio-intc
+       - items:
+           - enum:
+               - amlogic,meson8-gpio-intc
+               - amlogic,meson8b-gpio-intc
+               - amlogic,meson-gxbb-gpio-intc
+               - amlogic,meson-gxl-gpio-intc
+               - amlogic,meson-axg-gpio-intc
+               - amlogic,meson-g12a-gpio-intc
+               - amlogic,meson-sm1-gpio-intc
+               - amlogic,meson-a1-gpio-intc
+               - amlogic,meson-s4-gpio-intc
+           - const: amlogic,meson-gpio-intc
+
+   reg:
+     maxItems: 1
+
+   interrupt-controller: true
+
+   "#interrupt-cells":
+     const: 2
+
+   amlogic,channel-interrupts:
+     description: Array with the upstream hwirq numbers
+     minItems: 8
+     maxItems: 12
+     $ref: /schemas/types.yaml#/definitions/uint32-array
+
+ required:
+   - compatible
+   - reg
+   - interrupt-controller
+   - "#interrupt-cells"
+   - amlogic,channel-interrupts
+
+ additionalProperties: false
+
+ examples:
+   - |
+     interrupt-controller@9880 {
+         compatible = "amlogic,meson-gxbb-gpio-intc",
+                      "amlogic,meson-gpio-intc";
+         reg = <0x9880 0x10>;
+         interrupt-controller;
+         #interrupt-cells = <2>;
+         amlogic,channel-interrupts = <64 65 66 67 68 69 70 71>;
+     };
-78
Documentation/devicetree/bindings/media/s5p-mfc.txt
···
- * Samsung Multi Format Codec (MFC)
-
- Multi Format Codec (MFC) is the IP present in Samsung SoCs which
- supports high resolution decoding and encoding functionalities.
- The MFC device driver is a v4l2 driver which can encode/decode
- video raw/elementary streams and has support for all popular
- video codecs.
-
- Required properties:
-   - compatible : value should be either one among the following
- 	(a) "samsung,mfc-v5" for MFC v5 present in Exynos4 SoCs
- 	(b) "samsung,mfc-v6" for MFC v6 present in Exynos5 SoCs
- 	(c) "samsung,exynos3250-mfc", "samsung,mfc-v7" for MFC v7
- 		present in Exynos3250 SoC
- 	(d) "samsung,mfc-v7" for MFC v7 present in Exynos5420 SoC
- 	(e) "samsung,mfc-v8" for MFC v8 present in Exynos5800 SoC
- 	(f) "samsung,exynos5433-mfc" for MFC v8 present in Exynos5433 SoC
- 	(g) "samsung,mfc-v10" for MFC v10 present in Exynos7880 SoC
-
-   - reg : Physical base address of the IP registers and length of memory
- 	mapped region.
-
-   - interrupts : MFC interrupt number to the CPU.
-   - clocks : from common clock binding: handle to mfc clock.
-   - clock-names : from common clock binding: must contain "mfc",
- 	corresponding to entry in the clocks property.
-
- Optional properties:
-   - power-domains : power-domain property defined with a phandle
- 	to respective power domain.
-   - memory-region : from reserved memory binding: phandles to two reserved
- 	memory regions, first is for "left" mfc memory bus interfaces,
- 	second if for the "right" mfc memory bus, used when no SYSMMU
- 	support is available; used only by MFC v5 present in Exynos4 SoCs
-
- Obsolete properties:
-   - samsung,mfc-r, samsung,mfc-l : support removed, please use memory-region
- 	property instead
-
-
- Example:
- SoC specific DT entry:
-
- mfc: codec@13400000 {
- 	compatible = "samsung,mfc-v5";
- 	reg = <0x13400000 0x10000>;
- 	interrupts = <0 94 0>;
- 	power-domains = <&pd_mfc>;
- 	clocks = <&clock 273>;
- 	clock-names = "mfc";
- };
-
- Reserved memory specific DT entry for given board (see reserved memory binding
- for more information):
-
- reserved-memory {
- 	#address-cells = <1>;
- 	#size-cells = <1>;
- 	ranges;
-
- 	mfc_left: region@51000000 {
- 		compatible = "shared-dma-pool";
- 		no-map;
- 		reg = <0x51000000 0x800000>;
- 	};
-
- 	mfc_right: region@43000000 {
- 		compatible = "shared-dma-pool";
- 		no-map;
- 		reg = <0x43000000 0x800000>;
- 	};
- };
-
- Board specific DT entry:
-
- codec@13400000 {
- 	memory-region = <&mfc_left>, <&mfc_right>;
- };
+184
Documentation/devicetree/bindings/media/samsung,s5p-mfc.yaml
···
+ # SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
+ %YAML 1.2
+ ---
+ $id: http://devicetree.org/schemas/media/samsung,s5p-mfc.yaml#
+ $schema: http://devicetree.org/meta-schemas/core.yaml#
+
+ title: Samsung Exynos Multi Format Codec (MFC)
+
+ maintainers:
+   - Marek Szyprowski <m.szyprowski@samsung.com>
+   - Aakarsh Jain <aakarsh.jain@samsung.com>
+
+ description:
+   Multi Format Codec (MFC) is the IP present in Samsung SoCs which
+   supports high resolution decoding and encoding functionalities.
+
+ properties:
+   compatible:
+     oneOf:
+       - enum:
+           - samsung,exynos5433-mfc        # Exynos5433
+           - samsung,mfc-v5                # Exynos4
+           - samsung,mfc-v6                # Exynos5
+           - samsung,mfc-v7                # Exynos5420
+           - samsung,mfc-v8                # Exynos5800
+           - samsung,mfc-v10               # Exynos7880
+       - items:
+           - enum:
+               - samsung,exynos3250-mfc    # Exynos3250
+           - const: samsung,mfc-v7         # Fall back for Exynos3250
+
+   reg:
+     maxItems: 1
+
+   clocks:
+     minItems: 1
+     maxItems: 3
+
+   clock-names:
+     minItems: 1
+     maxItems: 3
+
+   interrupts:
+     maxItems: 1
+
+   iommus:
+     minItems: 1
+     maxItems: 2
+
+   iommu-names:
+     minItems: 1
+     maxItems: 2
+
+   power-domains:
+     maxItems: 1
+
+   memory-region:
+     minItems: 1
+     maxItems: 2
+
+ required:
+   - compatible
+   - reg
+   - clocks
+   - clock-names
+   - interrupts
+
+ additionalProperties: false
+
+ allOf:
+   - if:
+       properties:
+         compatible:
+           contains:
+             enum:
+               - samsung,exynos3250-mfc
+     then:
+       properties:
+         clocks:
+           maxItems: 2
+         clock-names:
+           items:
+             - const: mfc
+             - const: sclk_mfc
+         iommus:
+           maxItems: 1
+         iommus-names: false
+
+   - if:
+       properties:
+         compatible:
+           contains:
+             enum:
+               - samsung,exynos5433-mfc
+     then:
+       properties:
+         clocks:
+           maxItems: 3
+         clock-names:
+           items:
+             - const: pclk
+             - const: aclk
+             - const: aclk_xiu
+         iommus:
+           maxItems: 2
+         iommus-names:
+           items:
+             - const: left
+             - const: right
+
+   - if:
+       properties:
+         compatible:
+           contains:
+             enum:
+               - samsung,mfc-v5
+     then:
+       properties:
+         clocks:
+           maxItems: 2
+         clock-names:
+           items:
+             - const: mfc
+             - const: sclk_mfc
+         iommus:
+           maxItems: 2
+         iommus-names:
+           items:
+             - const: left
+             - const: right
+
+   - if:
+       properties:
+         compatible:
+           contains:
+             enum:
+               - samsung,mfc-v6
+               - samsung,mfc-v8
+     then:
+       properties:
+         clocks:
+           maxItems: 1
+         clock-names:
+           items:
+             - const: mfc
+         iommus:
+           maxItems: 2
+         iommus-names:
+           items:
+             - const: left
+             - const: right
+
+   - if:
+       properties:
+         compatible:
+           contains:
+             enum:
+               - samsung,mfc-v7
+     then:
+       properties:
+         clocks:
+           minItems: 1
+           maxItems: 2
+         iommus:
+           minItems: 1
+           maxItems: 2
+
+ examples:
+   - |
+     #include <dt-bindings/clock/exynos4.h>
+     #include <dt-bindings/clock/exynos-audss-clk.h>
+     #include <dt-bindings/interrupt-controller/arm-gic.h>
+     #include <dt-bindings/interrupt-controller/irq.h>
+
+     codec@13400000 {
+         compatible = "samsung,mfc-v5";
+         reg = <0x13400000 0x10000>;
+         interrupts = <GIC_SPI 94 IRQ_TYPE_LEVEL_HIGH>;
+         power-domains = <&pd_mfc>;
+         clocks = <&clock CLK_MFC>, <&clock CLK_SCLK_MFC>;
+         clock-names = "mfc", "sclk_mfc";
+         iommus = <&sysmmu_mfc_l>, <&sysmmu_mfc_r>;
+         iommu-names = "left", "right";
+     };
+1 -1
Documentation/devicetree/bindings/memory-controllers/nvidia,tegra20-emc.yaml
···
          const: 0

        lpddr2:
-         $ref: "ddr/jedec,lpddr2.yaml#"
+         $ref: ddr/jedec,lpddr2.yaml#
          type: object

      patternProperties:
+1 -1
Documentation/devicetree/bindings/memory-controllers/ti,gpmc.yaml
···
      The child device node represents the device connected to the GPMC
      bus. The device can be a NAND chip, SRAM device, NOR device
      or an ASIC.
-     $ref: "ti,gpmc-child.yaml"
+     $ref: ti,gpmc-child.yaml


  required:
+4
Documentation/devicetree/bindings/phy/mediatek,dsi-phy.yaml
···
      - const: mediatek,mt2701-mipi-tx
      - items:
          - enum:
+             - mediatek,mt6795-mipi-tx
+         - const: mediatek,mt8173-mipi-tx
+     - items:
+         - enum:
              - mediatek,mt8365-mipi-tx
          - const: mediatek,mt8183-mipi-tx
      - const: mediatek,mt2701-mipi-tx
+3 -1
Documentation/devicetree/bindings/pwm/mediatek,pwm-disp.yaml
···
          - mediatek,mt8173-disp-pwm
          - mediatek,mt8183-disp-pwm
      - items:
-         - const: mediatek,mt8167-disp-pwm
+         - enum:
+             - mediatek,mt6795-disp-pwm
+             - mediatek,mt8167-disp-pwm
          - const: mediatek,mt8173-disp-pwm
      - items:
          - enum:
-32
Documentation/devicetree/bindings/reset/oxnas,reset.txt
···
- Oxford Semiconductor OXNAS SoC Family RESET Controller
- ================================================
-
- Please also refer to reset.txt in this directory for common reset
- controller binding usage.
-
- Required properties:
- - compatible: For OX810SE, should be "oxsemi,ox810se-reset"
- 	      For OX820, should be "oxsemi,ox820-reset"
- - #reset-cells: 1, see below
-
- Parent node should have the following properties :
- - compatible: For OX810SE, should be :
- 	      "oxsemi,ox810se-sys-ctrl", "syscon", "simple-mfd"
- 	      For OX820, should be :
- 	      "oxsemi,ox820-sys-ctrl", "syscon", "simple-mfd"
-
- Reset indices are in dt-bindings include files :
- - For OX810SE: include/dt-bindings/reset/oxsemi,ox810se.h
- - For OX820: include/dt-bindings/reset/oxsemi,ox820.h
-
- example:
-
- sys: sys-ctrl@000000 {
- 	compatible = "oxsemi,ox810se-sys-ctrl", "syscon", "simple-mfd";
- 	reg = <0x000000 0x100000>;
-
- 	reset: reset-controller {
- 		compatible = "oxsemi,ox810se-reset";
- 		#reset-cells = <1>;
- 	};
- };
+1
Documentation/devicetree/bindings/soc/mediatek/mediatek,pwrap.yaml
···
          - mediatek,mt2701-pwrap
          - mediatek,mt6765-pwrap
          - mediatek,mt6779-pwrap
+         - mediatek,mt6795-pwrap
          - mediatek,mt6797-pwrap
          - mediatek,mt6873-pwrap
          - mediatek,mt7622-pwrap
+1
Documentation/devicetree/bindings/soc/qcom/qcom,aoss-qmp.yaml
···
      items:
        - enum:
            - qcom,qdu1000-aoss-qmp
+           - qcom,sa8775p-aoss-qmp
            - qcom,sc7180-aoss-qmp
            - qcom,sc7280-aoss-qmp
            - qcom,sc8180x-aoss-qmp
+3 -1
Documentation/devicetree/bindings/soc/qcom/qcom,eud.yaml
···
  examples:
    - |
      eud@88e0000 {
-         compatible = "qcom,sc7280-eud","qcom,eud";
+         compatible = "qcom,sc7280-eud", "qcom,eud";
          reg = <0x88e0000 0x2000>,
                <0x88e2000 0x1000>;
+
          ports {
              #address-cells = <1>;
              #size-cells = <0>;
···
                      remote-endpoint = <&usb2_role_switch>;
                  };
              };
+
              port@1 {
                  reg = <1>;
                  eud_con: endpoint {
+69
Documentation/devicetree/bindings/soc/qcom/qcom,rpm-master-stats.yaml
···
+ # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+ %YAML 1.2
+ ---
+ $id: http://devicetree.org/schemas/soc/qcom/qcom,rpm-master-stats.yaml#
+ $schema: http://devicetree.org/meta-schemas/core.yaml#
+
+ title: Qualcomm Technologies, Inc. (QTI) RPM Master Stats
+
+ maintainers:
+   - Konrad Dybcio <konrad.dybcio@linaro.org>
+
+ description: |
+   The Qualcomm RPM (Resource Power Manager) architecture includes a concept
+   of "RPM Masters". They can be thought of as "the local gang leaders", usually
+   spanning a single subsystem (e.g. APSS, ADSP, CDSP). All of the RPM decisions
+   (particularly around entering hardware-driven low power modes: XO shutdown
+   and total system-wide power collapse) are first made at Master-level, and
+   only then aggregated for the entire system.
+
+   The Master Stats provide a few useful bits that can be used to assess whether
+   our device has entered the desired low-power mode, how long it took to do so,
+   the duration of that residence, how long it took to come back online,
+   how many times a given sleep state was entered and which cores are actively
+   voting for staying awake.
+
+   This scheme has been used on various SoCs in the 2013-2023 era, with some
+   newer or higher-end designs providing this information through an SMEM query.
+
+ properties:
+   compatible:
+     const: qcom,rpm-master-stats
+
+   qcom,rpm-msg-ram:
+     $ref: /schemas/types.yaml#/definitions/phandle-array
+     description: Phandle to an RPM MSG RAM slice containing the master stats
+     minItems: 1
+     maxItems: 5
+
+   qcom,master-names:
+     $ref: /schemas/types.yaml#/definitions/string-array
+     description:
+       The name of the RPM Master which owns the MSG RAM slice where this
+       instance of Master Stats resides
+     minItems: 1
+     maxItems: 5
+
+ required:
+   - compatible
+   - qcom,rpm-msg-ram
+   - qcom,master-names
+
+ additionalProperties: false
+
+ examples:
+   - |
+     stats {
+         compatible = "qcom,rpm-master-stats";
+         qcom,rpm-msg-ram = <&apss_master_stats>,
+                            <&mpss_master_stats>,
+                            <&adsp_master_stats>,
+                            <&cdsp_master_stats>,
+                            <&tz_master_stats>;
+         qcom,master-names = "APSS",
+                             "MPSS",
+                             "ADSP",
+                             "CDSP",
+                             "TZ";
+     };
+ ...
+2
Documentation/devicetree/bindings/soc/qcom/qcom,rpmh-rsc.yaml
···
      - qcom,tcs-offset
      - reg
      - reg-names
+     - power-domains

  additionalProperties: false
···
                         <SLEEP_TCS   1>,
                         <WAKE_TCS    1>,
                         <CONTROL_TCS 0>;
+       power-domains = <&CLUSTER_PD>;
    };

  - |
+1
Documentation/devicetree/bindings/soc/qcom/qcom,smd-rpm.yaml
···
      contains:
        enum:
          - qcom,rpm-apq8084
+         - qcom,rpm-msm8226
          - qcom,rpm-msm8916
          - qcom,rpm-msm8936
          - qcom,rpm-msm8974
+3
Documentation/devicetree/bindings/soc/rockchip/grf.yaml
···
        - rockchip,rk3588-bigcore1-grf
        - rockchip,rk3588-ioc
        - rockchip,rk3588-php-grf
+       - rockchip,rk3588-pipe-phy-grf
        - rockchip,rk3588-sys-grf
        - rockchip,rk3588-pcie3-phy-grf
        - rockchip,rk3588-pcie3-pipe-grf
···
        - rockchip,rk3399-pmugrf
        - rockchip,rk3568-grf
        - rockchip,rk3568-pmugrf
+       - rockchip,rk3588-usb2phy-grf
        - rockchip,rv1108-grf
        - rockchip,rv1108-pmugrf
        - rockchip,rv1126-grf
···
            - rockchip,rk3308-usb2phy-grf
            - rockchip,rk3328-usb2phy-grf
            - rockchip,rk3399-grf
+           - rockchip,rk3588-usb2phy-grf
            - rockchip,rv1108-grf

      then:
+5
Documentation/devicetree/bindings/soc/samsung/exynos-pmu.yaml
···
      enum:
        - samsung,exynos3250-pmu
        - samsung,exynos4210-pmu
+       - samsung,exynos4212-pmu
        - samsung,exynos4412-pmu
        - samsung,exynos5250-pmu
        - samsung,exynos5260-pmu
···
        - enum:
            - samsung,exynos3250-pmu
            - samsung,exynos4210-pmu
+           - samsung,exynos4212-pmu
            - samsung,exynos4412-pmu
            - samsung,exynos5250-pmu
            - samsung,exynos5260-pmu
···
        - enum:
            - samsung,exynos3250-pmu
            - samsung,exynos4210-pmu
+           - samsung,exynos4212-pmu
            - samsung,exynos4412-pmu
            - samsung,exynos5250-pmu
            - samsung,exynos5420-pmu
···
          enum:
            - samsung,exynos3250-pmu
            - samsung,exynos4210-pmu
+           - samsung,exynos4212-pmu
            - samsung,exynos4412-pmu
            - samsung,exynos5250-pmu
            - samsung,exynos5410-pmu
···
          enum:
            - samsung,exynos3250-pmu
            - samsung,exynos4210-pmu
+           - samsung,exynos4212-pmu
            - samsung,exynos4412-pmu
            - samsung,exynos5250-pmu
            - samsung,exynos5420-pmu
+8 -3
Documentation/devicetree/bindings/spmi/mtk,spmi-mtk-pmif.yaml
···

  properties:
    compatible:
-     enum:
-       - mediatek,mt6873-spmi
-       - mediatek,mt8195-spmi
+     oneOf:
+       - enum:
+           - mediatek,mt6873-spmi
+           - mediatek,mt8195-spmi
+       - items:
+           - enum:
+               - mediatek,mt8186-spmi
+           - const: mediatek,mt8195-spmi

    reg:
      maxItems: 2
+2
Documentation/devicetree/bindings/sram/qcom,imem.yaml
···
      items:
        - enum:
            - qcom,apq8064-imem
+           - qcom,msm8226-imem
            - qcom,msm8974-imem
            - qcom,qcs404-imem
+           - qcom,qdu1000-imem
            - qcom,sc7180-imem
            - qcom,sc7280-imem
            - qcom,sdm630-imem
+1
Documentation/devicetree/bindings/sram/sram.yaml
···
          - samsung,exynos4210-sysram
          - samsung,exynos4210-sysram-ns
          - socionext,milbeaut-smp-sram
+         - stericsson,u8500-esram

    reg:
      description:
+9 -11
MAINTAINERS
···
  L:	linux-amlogic@lists.infradead.org
  S:	Maintained
  W:	http://linux-meson.com/
+ F:	Documentation/devicetree/bindings/phy/amlogic*
  F:	arch/arm/boot/dts/amlogic/
  F:	arch/arm/mach-meson/
  F:	arch/arm64/boot/dts/amlogic/
  F:	drivers/mmc/host/meson*
+ F:	drivers/phy/amlogic/
  F:	drivers/pinctrl/meson/
  F:	drivers/rtc/rtc-meson*
  F:	drivers/soc/amlogic/
···
  ARM/QUALCOMM SUPPORT
  M:	Andy Gross <agross@kernel.org>
  M:	Bjorn Andersson <andersson@kernel.org>
- R:	Konrad Dybcio <konrad.dybcio@linaro.org>
+ M:	Konrad Dybcio <konrad.dybcio@linaro.org>
  L:	linux-arm-msm@vger.kernel.org
  S:	Maintained
  T:	git git://git.kernel.org/pub/scm/linux/kernel/git/qcom/linux.git
···
  F:	drivers/gpu/drm/xen/

  DRM DRIVERS FOR XILINX
- M:	Hyun Kwon <hyun.kwon@xilinx.com>
  M:	Laurent Pinchart <laurent.pinchart@ideasonboard.com>
  L:	dri-devel@lists.freedesktop.org
  S:	Maintained
···
  F:	drivers/iio/adc/xilinx-ams.c

  XILINX AXI ETHERNET DRIVER
- M:	Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
+ M:	Radhey Shyam Pandey <radhey.shyam.pandey@amd.com>
  S:	Maintained
  F:	Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
  F:	drivers/net/ethernet/xilinx/xilinx_axienet*
···
  F:	include/linux/firmware/xlnx-event-manager.h

  XILINX GPIO DRIVER
- M:	Shubhrajyoti Datta <shubhrajyoti.datta@xilinx.com>
- R:	Srinivas Neeli <srinivas.neeli@xilinx.com>
+ M:	Shubhrajyoti Datta <shubhrajyoti.datta@amd.com>
+ R:	Srinivas Neeli <srinivas.neeli@amd.com>
  R:	Michal Simek <michal.simek@amd.com>
  S:	Maintained
  F:	Documentation/devicetree/bindings/gpio/gpio-zynq.yaml
···
  F:	include/clocksource/timer-xilinx.h

  XILINX SD-FEC IP CORES
- M:	Derek Kiernan <derek.kiernan@xilinx.com>
- M:	Dragan Cvetic <dragan.cvetic@xilinx.com>
+ M:	Derek Kiernan <derek.kiernan@amd.com>
+ M:	Dragan Cvetic <dragan.cvetic@amd.com>
  S:	Maintained
  F:	Documentation/devicetree/bindings/misc/xlnx,sd-fec.txt
  F:	Documentation/misc-devices/xilinx_sdfec.rst
···
  F:	drivers/tty/serial/uartlite.c

  XILINX VIDEO IP CORES
- M:	Hyun Kwon <hyun.kwon@xilinx.com>
  M:	Laurent Pinchart <laurent.pinchart@ideasonboard.com>
  L:	linux-media@vger.kernel.org
  S:	Supported
···
  F:	include/linux/platform_data/amd_xdma.h

  XILINX ZYNQMP DPDMA DRIVER
- M:	Hyun Kwon <hyun.kwon@xilinx.com>
  M:	Laurent Pinchart <laurent.pinchart@ideasonboard.com>
  L:	dmaengine@vger.kernel.org
  S:	Supported
···
  F:	drivers/edac/zynqmp_edac.c

  XILINX ZYNQMP PSGTR PHY DRIVER
- M:	Anurag Kumar Vulisha <anurag.kumar.vulisha@xilinx.com>
  M:	Laurent Pinchart <laurent.pinchart@ideasonboard.com>
  L:	linux-kernel@vger.kernel.org
  S:	Supported
···
  F:	drivers/phy/xilinx/phy-zynqmp.c

  XILINX ZYNQMP SHA3 DRIVER
- M:	Harsha <harsha.harsha@xilinx.com>
+ M:	Harsha <harsha.harsha@amd.com>
  S:	Maintained
  F:	drivers/crypto/xilinx/zynqmp-sha.c
-4
arch/arm/mach-at91/Kconfig
···
  	depends on ARCH_MULTI_V5
  	select ATMEL_AIC_IRQ
  	select ATMEL_PM if PM
- 	select ATMEL_SDRAMC
  	select CPU_ARM926T
  	select HAVE_AT91_SMD
  	select HAVE_AT91_USB_CLK
···
  	depends on ARCH_MULTI_V5
  	select ATMEL_AIC5_IRQ
  	select ATMEL_PM if PM
- 	select ATMEL_SDRAMC
  	select CPU_ARM926T
  	select HAVE_AT91_USB_CLK
  	select HAVE_AT91_GENERATED_CLK
···
  	bool
  	select ATMEL_AIC5_IRQ
  	select ATMEL_PM if PM
- 	select ATMEL_SDRAMC
  	select MEMORY
  	select SOC_SAM_V7
  	select SRAM if PM
···
  	bool
  	select ARM_GIC
  	select ATMEL_PM if PM
- 	select ATMEL_SDRAMC
  	select MEMORY
  	select SOC_SAM_V7
  	select SRAM if PM
+5 -7
drivers/bus/fsl-mc/dprc-driver.c
···
   * It tears down the interrupts that were configured for the DPRC device.
   * It destroys the interrupt pool associated with this MC bus.
   */
- static int dprc_remove(struct fsl_mc_device *mc_dev)
+ static void dprc_remove(struct fsl_mc_device *mc_dev)
  {
  	struct fsl_mc_bus *mc_bus = to_fsl_mc_bus(mc_dev);

- 	if (!is_fsl_mc_bus_dprc(mc_dev))
- 		return -EINVAL;
-
- 	if (!mc_bus->irq_resources)
- 		return -EINVAL;
+ 	if (!mc_bus->irq_resources) {
+ 		dev_err(&mc_dev->dev, "No irq resources, so unbinding the device failed\n");
+ 		return;
+ 	}

  	if (dev_get_msi_domain(&mc_dev->dev))
  		dprc_teardown_irq(mc_dev);
···
  	dprc_cleanup(mc_dev);

  	dev_info(&mc_dev->dev, "DPRC device unbound from driver");
- 	return 0;
  }

  static const struct fsl_mc_device_id match_id_table[] = {
+20 -21
drivers/bus/fsl-mc/fsl-mc-allocator.c
···
  	struct fsl_mc_resource *resource;
  	int error = -EINVAL;

- 	if (!fsl_mc_is_allocatable(mc_dev))
- 		goto out;
-
- 	resource = mc_dev->resource;
- 	if (!resource || resource->data != mc_dev)
- 		goto out;
-
  	mc_bus_dev = to_fsl_mc_device(mc_dev->dev.parent);
  	mc_bus = to_fsl_mc_bus(mc_bus_dev);
- 	res_pool = resource->parent_pool;
- 	if (res_pool != &mc_bus->resource_pools[resource->type])
+
+ 	resource = mc_dev->resource;
+ 	if (!resource || resource->data != mc_dev) {
+ 		dev_err(&mc_bus_dev->dev, "resource mismatch\n");
  		goto out;
+ 	}
+
+ 	res_pool = resource->parent_pool;
+ 	if (res_pool != &mc_bus->resource_pools[resource->type]) {
+ 		dev_err(&mc_bus_dev->dev, "pool mismatch\n");
+ 		goto out;
+ 	}

  	mutex_lock(&res_pool->mutex);

- 	if (res_pool->max_count <= 0)
+ 	if (res_pool->max_count <= 0) {
+ 		dev_err(&mc_bus_dev->dev, "max_count underflow\n");
  		goto out_unlock;
+ 	}
  	if (res_pool->free_count <= 0 ||
- 	    res_pool->free_count > res_pool->max_count)
+ 	    res_pool->free_count > res_pool->max_count) {
+ 		dev_err(&mc_bus_dev->dev, "free_count mismatch\n");
  		goto out_unlock;
+ 	}

  	/*
  	 * If the device is currently allocated, its resource is not
···
  	struct fsl_mc_bus *mc_bus = to_fsl_mc_bus(mc_bus_dev);
  	struct fsl_mc_resource_pool *res_pool =
  		&mc_bus->resource_pools[pool_type];
- 	int free_count = 0;

- 	list_for_each_entry_safe(resource, next, &res_pool->free_list, node) {
- 		free_count++;
+ 	list_for_each_entry_safe(resource, next, &res_pool->free_list, node)
  		devm_kfree(&mc_bus_dev->dev, resource);
- 	}
  }

  void fsl_mc_cleanup_all_resource_pools(struct fsl_mc_device *mc_bus_dev)
···
   * fsl_mc_allocator_remove - callback invoked when an allocatable device is
   * being removed from the system
   */
- static int fsl_mc_allocator_remove(struct fsl_mc_device *mc_dev)
+ static void fsl_mc_allocator_remove(struct fsl_mc_device *mc_dev)
  {
  	int error;
-
- 	if (!fsl_mc_is_allocatable(mc_dev))
- 		return -EINVAL;

  	if (mc_dev->resource) {
  		error = fsl_mc_resource_pool_remove_device(mc_dev);
  		if (error < 0)
- 			return error;
+ 			return;
  	}

  	dev_dbg(&mc_dev->dev,
  		"Allocatable fsl-mc device unbound from fsl_mc_allocator driver");
- 	return 0;
  }

  static const struct fsl_mc_device_id match_id_table[] = {
+1 -6
drivers/bus/fsl-mc/fsl-mc-bus.c
···
  {
  	struct fsl_mc_driver *mc_drv = to_fsl_mc_driver(dev->driver);
  	struct fsl_mc_device *mc_dev = to_fsl_mc_device(dev);
- 	int error;

- 	error = mc_drv->remove(mc_dev);
- 	if (error < 0) {
- 		dev_err(dev, "%s failed: %d\n", __func__, error);
- 		return error;
- 	}
+ 	mc_drv->remove(mc_dev);

  	return 0;
  }
+2 -2
drivers/bus/ti-sysc.c
···
  	if (!ddata->module_va)
  		return -EIO;

- 	/* DISP_CONTROL */
+ 	/* DISP_CONTROL, shut down lcd and digit on disable if enabled */
  	val = sysc_read(ddata, dispc_offset + 0x40);
  	lcd_en = val & lcd_en_mask;
  	digit_en = val & digit_en_mask;
···
  		else
  			irq_mask |= BIT(2) | BIT(3);	/* EVSYNC bits */
  	}
- 	if (disable & (lcd_en | digit_en))
+ 	if (disable && (lcd_en || digit_en))
  		sysc_write(ddata, dispc_offset + 0x40,
  			   val & ~(lcd_en_mask | digit_en_mask));
+11 -52
drivers/cpufreq/qcom-cpufreq-nvmem.c
··· 29 29 #include <linux/slab.h> 30 30 #include <linux/soc/qcom/smem.h> 31 31 32 - #define MSM_ID_SMEM 137 33 - 34 - enum _msm_id { 35 - MSM8996V3 = 0xF6ul, 36 - APQ8096V3 = 0x123ul, 37 - MSM8996SG = 0x131ul, 38 - APQ8096SG = 0x138ul, 39 - }; 40 - 41 - enum _msm8996_version { 42 - MSM8996_V3, 43 - MSM8996_SG, 44 - NUM_OF_MSM8996_VERSIONS, 45 - }; 32 + #include <dt-bindings/arm/qcom,ids.h> 46 33 47 34 struct qcom_cpufreq_drv; 48 35 ··· 127 140 dev_dbg(cpu_dev, "PVS version: %d\n", *pvs_ver); 128 141 } 129 142 130 - static enum _msm8996_version qcom_cpufreq_get_msm_id(void) 131 - { 132 - size_t len; 133 - u32 *msm_id; 134 - enum _msm8996_version version; 135 - 136 - msm_id = qcom_smem_get(QCOM_SMEM_HOST_ANY, MSM_ID_SMEM, &len); 137 - if (IS_ERR(msm_id)) 138 - return NUM_OF_MSM8996_VERSIONS; 139 - 140 - /* The first 4 bytes are format, next to them is the actual msm-id */ 141 - msm_id++; 142 - 143 - switch ((enum _msm_id)*msm_id) { 144 - case MSM8996V3: 145 - case APQ8096V3: 146 - version = MSM8996_V3; 147 - break; 148 - case MSM8996SG: 149 - case APQ8096SG: 150 - version = MSM8996_SG; 151 - break; 152 - default: 153 - version = NUM_OF_MSM8996_VERSIONS; 154 - } 155 - 156 - return version; 157 - } 158 - 159 143 static int qcom_cpufreq_kryo_name_version(struct device *cpu_dev, 160 144 struct nvmem_cell *speedbin_nvmem, 161 145 char **pvs_name, 162 146 struct qcom_cpufreq_drv *drv) 163 147 { 164 148 size_t len; 149 + u32 msm_id; 165 150 u8 *speedbin; 166 - enum _msm8996_version msm8996_version; 151 + int ret; 167 152 *pvs_name = NULL; 168 153 169 - msm8996_version = qcom_cpufreq_get_msm_id(); 170 - if (NUM_OF_MSM8996_VERSIONS == msm8996_version) { 171 - dev_err(cpu_dev, "Not Snapdragon 820/821!"); 172 - return -ENODEV; 173 - } 154 + ret = qcom_smem_get_soc_id(&msm_id); 155 + if (ret) 156 + return ret; 174 157 175 158 speedbin = nvmem_cell_read(speedbin_nvmem, &len); 176 159 if (IS_ERR(speedbin)) 177 160 return PTR_ERR(speedbin); 178 161 179 - switch (msm8996_version) { 
180 - case MSM8996_V3: 162 + switch (msm_id) { 163 + case QCOM_ID_MSM8996: 164 + case QCOM_ID_APQ8096: 181 165 drv->versions = 1 << (unsigned int)(*speedbin); 182 166 break; 183 - case MSM8996_SG: 167 + case QCOM_ID_MSM8996SG: 168 + case QCOM_ID_APQ8096SG: 184 169 drv->versions = 1 << ((unsigned int)(*speedbin) + 4); 185 170 break; 186 171 default:
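The cpufreq rework above drops the driver's private SMEM lookup in favour of `qcom_smem_get_soc_id()` and the shared SoC IDs from `dt-bindings/arm/qcom,ids.h`. The bin-to-bitmask mapping it keeps is easy to miss in the diff noise, so here is a minimal standalone sketch; the numeric IDs are the values the removed `_msm_id` enum already used (0xF6, 0x123, 0x131, 0x138), which line up with the `qcom,ids.h` constants, but treat them as illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-ins for the qcom,ids.h constants referenced in the diff;
 * the values are the ones the removed private enum carried. */
enum {
	QCOM_ID_MSM8996   = 0xF6,  /* 246 */
	QCOM_ID_APQ8096   = 0x123, /* 291 */
	QCOM_ID_MSM8996SG = 0x131, /* 305 */
	QCOM_ID_APQ8096SG = 0x138, /* 312 */
};

/* Mirror of the switch in qcom_cpufreq_kryo_name_version(): plain
 * MSM8996/APQ8096 parts use speed-bin bits 0..3 of the supported-hw
 * bitmask, while the SG parts are offset by 4 so the two families
 * never match the same OPP entries. */
static uint32_t speedbin_to_versions(uint32_t msm_id, uint8_t speedbin)
{
	switch (msm_id) {
	case QCOM_ID_MSM8996:
	case QCOM_ID_APQ8096:
		return 1u << speedbin;
	case QCOM_ID_MSM8996SG:
	case QCOM_ID_APQ8096SG:
		return 1u << (speedbin + 4);
	default:
		return 0; /* unknown SoC: no supported-hw version matches */
	}
}
```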
+1 -3
drivers/crypto/caam/caamalg_qi2.c
··· 5402 5402 return err; 5403 5403 } 5404 5404 5405 - static int __cold dpaa2_caam_remove(struct fsl_mc_device *ls_dev) 5405 + static void __cold dpaa2_caam_remove(struct fsl_mc_device *ls_dev) 5406 5406 { 5407 5407 struct device *dev; 5408 5408 struct dpaa2_caam_priv *priv; ··· 5443 5443 free_percpu(priv->ppriv); 5444 5444 fsl_mc_portal_free(priv->mc_io); 5445 5445 kmem_cache_destroy(qi_cache); 5446 - 5447 - return 0; 5448 5446 } 5449 5447 5450 5448 int dpaa2_caam_enqueue(struct device *dev, struct caam_request *req)
+1 -3
drivers/dma/fsl-dpaa2-qdma/dpaa2-qdma.c
··· 765 765 return err; 766 766 } 767 767 768 - static int dpaa2_qdma_remove(struct fsl_mc_device *ls_dev) 768 + static void dpaa2_qdma_remove(struct fsl_mc_device *ls_dev) 769 769 { 770 770 struct dpaa2_qdma_engine *dpaa2_qdma; 771 771 struct dpaa2_qdma_priv *priv; ··· 787 787 dma_async_device_unregister(&dpaa2_qdma->dma_dev); 788 788 kfree(priv); 789 789 kfree(dpaa2_qdma); 790 - 791 - return 0; 792 790 } 793 791 794 792 static void dpaa2_qdma_shutdown(struct fsl_mc_device *ls_dev)
+1
drivers/firmware/arm_scmi/driver.c
··· 2914 2914 #endif 2915 2915 #ifdef CONFIG_ARM_SCMI_TRANSPORT_SMC 2916 2916 { .compatible = "arm,scmi-smc", .data = &scmi_smc_desc}, 2917 + { .compatible = "arm,scmi-smc-param", .data = &scmi_smc_desc}, 2917 2918 #endif 2918 2919 #ifdef CONFIG_ARM_SCMI_TRANSPORT_VIRTIO 2919 2920 { .compatible = "arm,scmi-virtio", .data = &scmi_virtio_desc},
+148 -25
drivers/firmware/arm_scmi/powercap.c
··· 108 108 }; 109 109 110 110 struct scmi_powercap_state { 111 + bool enabled; 112 + u32 last_pcap; 111 113 bool meas_notif_enabled; 112 114 u64 thresholds; 113 115 #define THRESH_LOW(p, id) \ ··· 315 313 return ret; 316 314 } 317 315 318 - static int scmi_powercap_cap_get(const struct scmi_protocol_handle *ph, 319 - u32 domain_id, u32 *power_cap) 316 + static int __scmi_powercap_cap_get(const struct scmi_protocol_handle *ph, 317 + const struct scmi_powercap_info *dom, 318 + u32 *power_cap) 320 319 { 321 - struct scmi_powercap_info *dom; 322 - struct powercap_info *pi = ph->get_priv(ph); 323 - 324 - if (!power_cap || domain_id >= pi->num_domains) 325 - return -EINVAL; 326 - 327 - dom = pi->powercaps + domain_id; 328 320 if (dom->fc_info && dom->fc_info[POWERCAP_FC_CAP].get_addr) { 329 321 *power_cap = ioread32(dom->fc_info[POWERCAP_FC_CAP].get_addr); 330 322 trace_scmi_fc_call(SCMI_PROTOCOL_POWERCAP, POWERCAP_CAP_GET, 331 - domain_id, *power_cap, 0); 323 + dom->id, *power_cap, 0); 332 324 return 0; 333 325 } 334 326 335 - return scmi_powercap_xfer_cap_get(ph, domain_id, power_cap); 327 + return scmi_powercap_xfer_cap_get(ph, dom->id, power_cap); 328 + } 329 + 330 + static int scmi_powercap_cap_get(const struct scmi_protocol_handle *ph, 331 + u32 domain_id, u32 *power_cap) 332 + { 333 + const struct scmi_powercap_info *dom; 334 + 335 + if (!power_cap) 336 + return -EINVAL; 337 + 338 + dom = scmi_powercap_dom_info_get(ph, domain_id); 339 + if (!dom) 340 + return -EINVAL; 341 + 342 + return __scmi_powercap_cap_get(ph, dom, power_cap); 336 343 } 337 344 338 345 static int scmi_powercap_xfer_cap_set(const struct scmi_protocol_handle *ph, ··· 386 375 return ret; 387 376 } 388 377 389 - static int scmi_powercap_cap_set(const struct scmi_protocol_handle *ph, 390 - u32 domain_id, u32 power_cap, 391 - bool ignore_dresp) 378 + static int __scmi_powercap_cap_set(const struct scmi_protocol_handle *ph, 379 + struct powercap_info *pi, u32 domain_id, 380 + u32 power_cap, bool 
ignore_dresp) 392 381 { 382 + int ret = -EINVAL; 393 383 const struct scmi_powercap_info *pc; 394 384 395 385 pc = scmi_powercap_dom_info_get(ph, domain_id); 396 - if (!pc || !pc->powercap_cap_config || !power_cap || 397 - power_cap < pc->min_power_cap || 398 - power_cap > pc->max_power_cap) 399 - return -EINVAL; 386 + if (!pc || !pc->powercap_cap_config) 387 + return ret; 388 + 389 + if (power_cap && 390 + (power_cap < pc->min_power_cap || power_cap > pc->max_power_cap)) 391 + return ret; 400 392 401 393 if (pc->fc_info && pc->fc_info[POWERCAP_FC_CAP].set_addr) { 402 394 struct scmi_fc_info *fci = &pc->fc_info[POWERCAP_FC_CAP]; ··· 408 394 ph->hops->fastchannel_db_ring(fci->set_db); 409 395 trace_scmi_fc_call(SCMI_PROTOCOL_POWERCAP, POWERCAP_CAP_SET, 410 396 domain_id, power_cap, 0); 397 + ret = 0; 398 + } else { 399 + ret = scmi_powercap_xfer_cap_set(ph, pc, power_cap, 400 + ignore_dresp); 401 + } 402 + 403 + /* Save the last explicitly set non-zero powercap value */ 404 + if (PROTOCOL_REV_MAJOR(pi->version) >= 0x2 && !ret && power_cap) 405 + pi->states[domain_id].last_pcap = power_cap; 406 + 407 + return ret; 408 + } 409 + 410 + static int scmi_powercap_cap_set(const struct scmi_protocol_handle *ph, 411 + u32 domain_id, u32 power_cap, 412 + bool ignore_dresp) 413 + { 414 + struct powercap_info *pi = ph->get_priv(ph); 415 + 416 + /* 417 + * Disallow zero as a possible explicitly requested powercap: 418 + * there are enable/disable operations for this. 
419 + */ 420 + if (!power_cap) 421 + return -EINVAL; 422 + 423 + /* Just log the last set request if acting on a disabled domain */ 424 + if (PROTOCOL_REV_MAJOR(pi->version) >= 0x2 && 425 + !pi->states[domain_id].enabled) { 426 + pi->states[domain_id].last_pcap = power_cap; 411 427 return 0; 412 428 } 413 429 414 - return scmi_powercap_xfer_cap_set(ph, pc, power_cap, ignore_dresp); 430 + return __scmi_powercap_cap_set(ph, pi, domain_id, 431 + power_cap, ignore_dresp); 415 432 } 416 433 417 434 static int scmi_powercap_xfer_pai_get(const struct scmi_protocol_handle *ph, ··· 609 564 return ret; 610 565 } 611 566 567 + static int scmi_powercap_cap_enable_set(const struct scmi_protocol_handle *ph, 568 + u32 domain_id, bool enable) 569 + { 570 + int ret; 571 + u32 power_cap; 572 + struct powercap_info *pi = ph->get_priv(ph); 573 + 574 + if (PROTOCOL_REV_MAJOR(pi->version) < 0x2) 575 + return -EINVAL; 576 + 577 + if (enable == pi->states[domain_id].enabled) 578 + return 0; 579 + 580 + if (enable) { 581 + /* Cannot enable with a zero powercap. */ 582 + if (!pi->states[domain_id].last_pcap) 583 + return -EINVAL; 584 + 585 + ret = __scmi_powercap_cap_set(ph, pi, domain_id, 586 + pi->states[domain_id].last_pcap, 587 + true); 588 + } else { 589 + ret = __scmi_powercap_cap_set(ph, pi, domain_id, 0, true); 590 + } 591 + 592 + if (ret) 593 + return ret; 594 + 595 + /* 596 + * Update our internal state to reflect final platform state: the SCMI 597 + * server could have ignored a disable request and kept enforcing some 598 + * powercap limit requested by other agents. 
599 + */ 600 + ret = scmi_powercap_cap_get(ph, domain_id, &power_cap); 601 + if (!ret) 602 + pi->states[domain_id].enabled = !!power_cap; 603 + 604 + return ret; 605 + } 606 + 607 + static int scmi_powercap_cap_enable_get(const struct scmi_protocol_handle *ph, 608 + u32 domain_id, bool *enable) 609 + { 610 + int ret; 611 + u32 power_cap; 612 + struct powercap_info *pi = ph->get_priv(ph); 613 + 614 + *enable = true; 615 + if (PROTOCOL_REV_MAJOR(pi->version) < 0x2) 616 + return 0; 617 + 618 + /* 619 + * Report always real platform state; platform could have ignored 620 + * a previous disable request. Default true on any error. 621 + */ 622 + ret = scmi_powercap_cap_get(ph, domain_id, &power_cap); 623 + if (!ret) 624 + *enable = !!power_cap; 625 + 626 + /* Update internal state with current real platform state */ 627 + pi->states[domain_id].enabled = *enable; 628 + 629 + return 0; 630 + } 631 + 612 632 static const struct scmi_powercap_proto_ops powercap_proto_ops = { 613 633 .num_domains_get = scmi_powercap_num_domains_get, 614 634 .info_get = scmi_powercap_dom_info_get, 615 635 .cap_get = scmi_powercap_cap_get, 616 636 .cap_set = scmi_powercap_cap_set, 637 + .cap_enable_set = scmi_powercap_cap_enable_set, 638 + .cap_enable_get = scmi_powercap_cap_enable_get, 617 639 .pai_get = scmi_powercap_pai_get, 618 640 .pai_set = scmi_powercap_pai_set, 619 641 .measurements_get = scmi_powercap_measurements_get, ··· 941 829 if (!pinfo->powercaps) 942 830 return -ENOMEM; 943 831 832 + pinfo->states = devm_kcalloc(ph->dev, pinfo->num_domains, 833 + sizeof(*pinfo->states), GFP_KERNEL); 834 + if (!pinfo->states) 835 + return -ENOMEM; 836 + 944 837 /* 945 838 * Note that any failure in retrieving any domain attribute leads to 946 839 * the whole Powercap protocol initialization failure: this way the ··· 960 843 if (pinfo->powercaps[domain].fastchannels) 961 844 scmi_powercap_domain_init_fc(ph, domain, 962 845 &pinfo->powercaps[domain].fc_info); 846 + 847 + /* Grab initial state when 
disable is supported. */ 848 + if (PROTOCOL_REV_MAJOR(version) >= 0x2) { 849 + ret = __scmi_powercap_cap_get(ph, 850 + &pinfo->powercaps[domain], 851 + &pinfo->states[domain].last_pcap); 852 + if (ret) 853 + return ret; 854 + 855 + pinfo->states[domain].enabled = 856 + !!pinfo->states[domain].last_pcap; 857 + } 963 858 } 964 859 965 - pinfo->states = devm_kcalloc(ph->dev, pinfo->num_domains, 966 - sizeof(*pinfo->states), GFP_KERNEL); 967 - if (!pinfo->states) 968 - return -ENOMEM; 969 - 970 860 pinfo->version = version; 971 - 972 861 return ph->set_priv(ph, pinfo); 973 862 } 974 863
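The powercap changes above hinge on one convention from SCMI v3.2: a cap of zero means "no limit enforced" (disabled), so zero is rejected as an explicit cap request, the last non-zero cap is remembered, and re-enabling replays it. This toy model sketches that bookkeeping under those assumptions, without the fastchannel/transport details or the min/max bounds checks:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy model of the enable/disable state the diff adds per domain. */
struct pcap_state {
	bool enabled;
	uint32_t last_pcap; /* last explicitly requested non-zero cap */
	uint32_t platform;  /* what the (mock) platform enforces; 0 = off */
};

static int pcap_set(struct pcap_state *s, uint32_t cap)
{
	if (!cap)
		return -1;          /* zero is reserved for enable/disable */
	if (!s->enabled) {
		s->last_pcap = cap; /* just log the request on a disabled domain */
		return 0;
	}
	s->platform = cap;
	s->last_pcap = cap;
	return 0;
}

static int pcap_enable(struct pcap_state *s, bool enable)
{
	if (enable == s->enabled)
		return 0;
	if (enable) {
		if (!s->last_pcap)
			return -1;      /* cannot enable with no cap on record */
		s->platform = s->last_pcap;
	} else {
		s->platform = 0;
	}
	/* Re-read final state: a real platform may refuse the disable
	 * because other agents still request a cap. */
	s->enabled = s->platform != 0;
	return 0;
}

/* Walk the scenario the driver has to handle; returns 0 on success. */
static int pcap_demo(void)
{
	struct pcap_state s = { 0 };

	if (pcap_set(&s, 0) != -1)
		return 1;
	if (pcap_set(&s, 100) != 0 || s.platform != 0 || s.last_pcap != 100)
		return 2; /* set on disabled domain only logs */
	if (pcap_enable(&s, true) != 0 || !s.enabled || s.platform != 100)
		return 3; /* enable replays the logged cap */
	if (pcap_set(&s, 250) != 0 || s.platform != 250)
		return 4;
	if (pcap_enable(&s, false) != 0 || s.enabled || s.platform != 0)
		return 5;
	return 0;
}
```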
+29 -1
drivers/firmware/arm_scmi/smc.c
··· 20 20 21 21 #include "common.h" 22 22 23 + /* 24 + * The shmem address is split into 4K page and offset. 25 + * This is to make sure the parameters fit in 32bit arguments of the 26 + * smc/hvc call to keep it uniform across smc32/smc64 conventions. 27 + * This however limits the shmem address to 44 bit. 28 + * 29 + * These optional parameters can be used to distinguish among multiple 30 + * scmi instances that are using the same smc-id. 31 + * The page parameter is passed in r1/x1/w1 register and the offset parameter 32 + * is passed in r2/x2/w2 register. 33 + */ 34 + 35 + #define SHMEM_SIZE (SZ_4K) 36 + #define SHMEM_SHIFT 12 37 + #define SHMEM_PAGE(x) (_UL((x) >> SHMEM_SHIFT)) 38 + #define SHMEM_OFFSET(x) ((x) & (SHMEM_SIZE - 1)) 39 + 23 40 /** 24 41 * struct scmi_smc - Structure representing a SCMI smc transport 25 42 * ··· 47 30 * @inflight: Atomic flag to protect access to Tx/Rx shared memory area. 48 31 * Used when operating in atomic mode. 49 32 * @func_id: smc/hvc call function id 33 + * @param_page: 4K page number of the shmem channel 34 + * @param_offset: Offset within the 4K page of the shmem channel 50 35 */ 51 36 52 37 struct scmi_smc { ··· 59 40 #define INFLIGHT_NONE MSG_TOKEN_MAX 60 41 atomic_t inflight; 61 42 u32 func_id; 43 + u32 param_page; 44 + u32 param_offset; 62 45 }; 63 46 64 47 static irqreturn_t smc_msg_done_isr(int irq, void *data) ··· 158 137 if (ret < 0) 159 138 return ret; 160 139 140 + if (of_device_is_compatible(dev->of_node, "arm,scmi-smc-param")) { 141 + scmi_info->param_page = SHMEM_PAGE(res.start); 142 + scmi_info->param_offset = SHMEM_OFFSET(res.start); 143 + } 161 144 /* 162 145 * If there is an interrupt named "a2p", then the service and 163 146 * completion of a message is signaled by an interrupt rather than by ··· 204 179 { 205 180 struct scmi_smc *scmi_info = cinfo->transport_info; 206 181 struct arm_smccc_res res; 182 + unsigned long page = scmi_info->param_page; 183 + unsigned long offset = scmi_info->param_offset; 
207 184 208 185 /* 209 186 * Channel will be released only once response has been ··· 215 188 216 189 shmem_tx_prepare(scmi_info->shmem, xfer, cinfo); 217 190 218 - arm_smccc_1_1_invoke(scmi_info->func_id, 0, 0, 0, 0, 0, 0, 0, &res); 191 + arm_smccc_1_1_invoke(scmi_info->func_id, page, offset, 0, 0, 0, 0, 0, 192 + &res); 219 193 220 194 /* Only SMCCC_RET_NOT_SUPPORTED is valid error code */ 221 195 if (res.a0) {
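The new `arm,scmi-smc-param` transport passes the shmem channel address as two 32-bit SMC arguments so the call looks the same under smc32 and smc64 conventions, which is also where the 44-bit address limit in the comment comes from (32 page bits + 12 offset bits). A small sketch of the split and its inverse:

```c
#include <assert.h>
#include <stdint.h>

#define SHMEM_SIZE  4096u
#define SHMEM_SHIFT 12

/* Split a shmem physical address into the 4K page number (passed in
 * r1/x1/w1) and the in-page offset (passed in r2/x2/w2), as the
 * SHMEM_PAGE()/SHMEM_OFFSET() macros in the diff do. */
static uint32_t shmem_page(uint64_t addr)
{
	return (uint32_t)(addr >> SHMEM_SHIFT);
}

static uint32_t shmem_offset(uint64_t addr)
{
	return (uint32_t)(addr & (SHMEM_SIZE - 1));
}

/* Recombination on the receiving side; lossless for addresses
 * up to 44 bits, since the page number must fit in 32 bits. */
static uint64_t shmem_addr(uint32_t page, uint32_t offset)
{
	return ((uint64_t)page << SHMEM_SHIFT) | offset;
}
```

This is why multiple SCMI instances can now share one smc-id: each instance identifies itself by the page/offset pair of its own channel.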
+145 -59
drivers/firmware/tegra/bpmp-tegra186.c
··· 4 4 */ 5 5 6 6 #include <linux/genalloc.h> 7 + #include <linux/io.h> 7 8 #include <linux/mailbox_client.h> 9 + #include <linux/of_address.h> 8 10 #include <linux/platform_device.h> 9 11 10 12 #include <soc/tegra/bpmp.h> ··· 20 18 21 19 struct { 22 20 struct gen_pool *pool; 23 - void __iomem *virt; 21 + union { 22 + void __iomem *sram; 23 + void *dram; 24 + }; 24 25 dma_addr_t phys; 25 26 } tx, rx; 26 27 ··· 123 118 queue_size = tegra_ivc_total_queue_size(message_size); 124 119 offset = queue_size * index; 125 120 126 - iosys_map_set_vaddr_iomem(&rx, priv->rx.virt + offset); 127 - iosys_map_set_vaddr_iomem(&tx, priv->tx.virt + offset); 121 + if (priv->rx.pool) { 122 + iosys_map_set_vaddr_iomem(&rx, priv->rx.sram + offset); 123 + iosys_map_set_vaddr_iomem(&tx, priv->tx.sram + offset); 124 + } else { 125 + iosys_map_set_vaddr(&rx, priv->rx.dram + offset); 126 + iosys_map_set_vaddr(&tx, priv->tx.dram + offset); 127 + } 128 128 129 129 err = tegra_ivc_init(channel->ivc, NULL, &rx, priv->rx.phys + offset, &tx, 130 130 priv->tx.phys + offset, 1, message_size, tegra186_bpmp_ivc_notify, ··· 168 158 tegra_bpmp_handle_rx(bpmp); 169 159 } 170 160 171 - static int tegra186_bpmp_init(struct tegra_bpmp *bpmp) 161 + static void tegra186_bpmp_teardown_channels(struct tegra_bpmp *bpmp) 172 162 { 173 - struct tegra186_bpmp *priv; 163 + struct tegra186_bpmp *priv = bpmp->priv; 174 164 unsigned int i; 165 + 166 + for (i = 0; i < bpmp->threaded.count; i++) { 167 + if (!bpmp->threaded_channels[i].bpmp) 168 + continue; 169 + 170 + tegra186_bpmp_channel_cleanup(&bpmp->threaded_channels[i]); 171 + } 172 + 173 + tegra186_bpmp_channel_cleanup(bpmp->rx_channel); 174 + tegra186_bpmp_channel_cleanup(bpmp->tx_channel); 175 + 176 + if (priv->tx.pool) { 177 + gen_pool_free(priv->tx.pool, (unsigned long)priv->tx.sram, 4096); 178 + gen_pool_free(priv->rx.pool, (unsigned long)priv->rx.sram, 4096); 179 + } 180 + } 181 + 182 + static int tegra186_bpmp_dram_init(struct tegra_bpmp *bpmp) 183 + { 184 + 
struct tegra186_bpmp *priv = bpmp->priv; 185 + struct device_node *np; 186 + struct resource res; 187 + size_t size; 175 188 int err; 176 189 177 - priv = devm_kzalloc(bpmp->dev, sizeof(*priv), GFP_KERNEL); 178 - if (!priv) 179 - return -ENOMEM; 190 + np = of_parse_phandle(bpmp->dev->of_node, "memory-region", 0); 191 + if (!np) 192 + return -ENODEV; 180 193 181 - bpmp->priv = priv; 182 - priv->parent = bpmp; 194 + err = of_address_to_resource(np, 0, &res); 195 + if (err < 0) { 196 + dev_warn(bpmp->dev, "failed to parse memory region: %d\n", err); 197 + return err; 198 + } 199 + 200 + size = resource_size(&res); 201 + 202 + if (size < SZ_8K) { 203 + dev_warn(bpmp->dev, "DRAM region must be larger than 8 KiB\n"); 204 + return -EINVAL; 205 + } 206 + 207 + priv->tx.phys = res.start; 208 + priv->rx.phys = res.start + SZ_4K; 209 + 210 + priv->tx.dram = devm_memremap(bpmp->dev, priv->tx.phys, size, 211 + MEMREMAP_WC); 212 + if (IS_ERR(priv->tx.dram)) { 213 + err = PTR_ERR(priv->tx.dram); 214 + dev_warn(bpmp->dev, "failed to map DRAM region: %d\n", err); 215 + return err; 216 + } 217 + 218 + priv->rx.dram = priv->tx.dram + SZ_4K; 219 + 220 + return 0; 221 + } 222 + 223 + static int tegra186_bpmp_sram_init(struct tegra_bpmp *bpmp) 224 + { 225 + struct tegra186_bpmp *priv = bpmp->priv; 226 + int err; 183 227 184 228 priv->tx.pool = of_gen_pool_get(bpmp->dev->of_node, "shmem", 0); 185 229 if (!priv->tx.pool) { ··· 241 177 return -EPROBE_DEFER; 242 178 } 243 179 244 - priv->tx.virt = (void __iomem *)gen_pool_dma_alloc(priv->tx.pool, 4096, &priv->tx.phys); 245 - if (!priv->tx.virt) { 180 + priv->tx.sram = (void __iomem *)gen_pool_dma_alloc(priv->tx.pool, 4096, 181 + &priv->tx.phys); 182 + if (!priv->tx.sram) { 246 183 dev_err(bpmp->dev, "failed to allocate from TX pool\n"); 247 184 return -ENOMEM; 248 185 } ··· 255 190 goto free_tx; 256 191 } 257 192 258 - priv->rx.virt = (void __iomem *)gen_pool_dma_alloc(priv->rx.pool, 4096, &priv->rx.phys); 259 - if (!priv->rx.virt) { 193 + 
priv->rx.sram = (void __iomem *)gen_pool_dma_alloc(priv->rx.pool, 4096, 194 + &priv->rx.phys); 195 + if (!priv->rx.sram) { 260 196 dev_err(bpmp->dev, "failed to allocate from RX pool\n"); 261 197 err = -ENOMEM; 262 198 goto free_tx; 263 199 } 264 200 201 + return 0; 202 + 203 + free_tx: 204 + gen_pool_free(priv->tx.pool, (unsigned long)priv->tx.sram, 4096); 205 + 206 + return err; 207 + } 208 + 209 + static int tegra186_bpmp_setup_channels(struct tegra_bpmp *bpmp) 210 + { 211 + unsigned int i; 212 + int err; 213 + 214 + err = tegra186_bpmp_dram_init(bpmp); 215 + if (err == -ENODEV) { 216 + err = tegra186_bpmp_sram_init(bpmp); 217 + if (err < 0) 218 + return err; 219 + } 220 + 265 221 err = tegra186_bpmp_channel_init(bpmp->tx_channel, bpmp, 266 222 bpmp->soc->channels.cpu_tx.offset); 267 223 if (err < 0) 268 - goto free_rx; 224 + return err; 269 225 270 226 err = tegra186_bpmp_channel_init(bpmp->rx_channel, bpmp, 271 227 bpmp->soc->channels.cpu_rx.offset); 272 - if (err < 0) 273 - goto cleanup_tx_channel; 228 + if (err < 0) { 229 + tegra186_bpmp_channel_cleanup(bpmp->tx_channel); 230 + return err; 231 + } 274 232 275 233 for (i = 0; i < bpmp->threaded.count; i++) { 276 234 unsigned int index = bpmp->soc->channels.thread.offset + i; ··· 301 213 err = tegra186_bpmp_channel_init(&bpmp->threaded_channels[i], 302 214 bpmp, index); 303 215 if (err < 0) 304 - goto cleanup_channels; 216 + break; 305 217 } 218 + 219 + if (err < 0) 220 + tegra186_bpmp_teardown_channels(bpmp); 221 + 222 + return err; 223 + } 224 + 225 + static void tegra186_bpmp_reset_channels(struct tegra_bpmp *bpmp) 226 + { 227 + unsigned int i; 228 + 229 + /* reset message channels */ 230 + tegra186_bpmp_channel_reset(bpmp->tx_channel); 231 + tegra186_bpmp_channel_reset(bpmp->rx_channel); 232 + 233 + for (i = 0; i < bpmp->threaded.count; i++) 234 + tegra186_bpmp_channel_reset(&bpmp->threaded_channels[i]); 235 + } 236 + 237 + static int tegra186_bpmp_init(struct tegra_bpmp *bpmp) 238 + { 239 + struct 
tegra186_bpmp *priv; 240 + int err; 241 + 242 + priv = devm_kzalloc(bpmp->dev, sizeof(*priv), GFP_KERNEL); 243 + if (!priv) 244 + return -ENOMEM; 245 + 246 + priv->parent = bpmp; 247 + bpmp->priv = priv; 248 + 249 + err = tegra186_bpmp_setup_channels(bpmp); 250 + if (err < 0) 251 + return err; 306 252 307 253 /* mbox registration */ 308 254 priv->mbox.client.dev = bpmp->dev; ··· 348 226 if (IS_ERR(priv->mbox.channel)) { 349 227 err = PTR_ERR(priv->mbox.channel); 350 228 dev_err(bpmp->dev, "failed to get HSP mailbox: %d\n", err); 351 - goto cleanup_channels; 229 + tegra186_bpmp_teardown_channels(bpmp); 230 + return err; 352 231 } 353 232 354 - tegra186_bpmp_channel_reset(bpmp->tx_channel); 355 - tegra186_bpmp_channel_reset(bpmp->rx_channel); 356 - 357 - for (i = 0; i < bpmp->threaded.count; i++) 358 - tegra186_bpmp_channel_reset(&bpmp->threaded_channels[i]); 233 + tegra186_bpmp_reset_channels(bpmp); 359 234 360 235 return 0; 361 - 362 - cleanup_channels: 363 - for (i = 0; i < bpmp->threaded.count; i++) { 364 - if (!bpmp->threaded_channels[i].bpmp) 365 - continue; 366 - 367 - tegra186_bpmp_channel_cleanup(&bpmp->threaded_channels[i]); 368 - } 369 - 370 - tegra186_bpmp_channel_cleanup(bpmp->rx_channel); 371 - cleanup_tx_channel: 372 - tegra186_bpmp_channel_cleanup(bpmp->tx_channel); 373 - free_rx: 374 - gen_pool_free(priv->rx.pool, (unsigned long)priv->rx.virt, 4096); 375 - free_tx: 376 - gen_pool_free(priv->tx.pool, (unsigned long)priv->tx.virt, 4096); 377 - 378 - return err; 379 236 } 380 237 381 238 static void tegra186_bpmp_deinit(struct tegra_bpmp *bpmp) 382 239 { 383 240 struct tegra186_bpmp *priv = bpmp->priv; 384 - unsigned int i; 385 241 386 242 mbox_free_channel(priv->mbox.channel); 387 243 388 - for (i = 0; i < bpmp->threaded.count; i++) 389 - tegra186_bpmp_channel_cleanup(&bpmp->threaded_channels[i]); 390 - 391 - tegra186_bpmp_channel_cleanup(bpmp->rx_channel); 392 - tegra186_bpmp_channel_cleanup(bpmp->tx_channel); 393 - 394 - gen_pool_free(priv->rx.pool, 
(unsigned long)priv->rx.virt, 4096); 395 - gen_pool_free(priv->tx.pool, (unsigned long)priv->tx.virt, 4096); 244 + tegra186_bpmp_teardown_channels(bpmp); 396 245 } 397 246 398 247 static int tegra186_bpmp_resume(struct tegra_bpmp *bpmp) 399 248 { 400 - unsigned int i; 401 - 402 - /* reset message channels */ 403 - tegra186_bpmp_channel_reset(bpmp->tx_channel); 404 - tegra186_bpmp_channel_reset(bpmp->rx_channel); 405 - 406 - for (i = 0; i < bpmp->threaded.count; i++) 407 - tegra186_bpmp_channel_reset(&bpmp->threaded_channels[i]); 249 + tegra186_bpmp_reset_channels(bpmp); 408 250 409 251 return 0; 410 252 }
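The DRAM MRQ GSC support above keeps the SRAM path but adds a `memory-region` carveout alternative: the region must hold two 4 KiB halves, TX at the base and RX 4 KiB above it. A minimal sketch of just that layout check, with the mapping and devicetree parsing left out:

```c
#include <assert.h>
#include <stdint.h>

#define SZ_4K 0x1000u
#define SZ_8K 0x2000u

struct bpmp_shmem {
	uint64_t tx_phys;
	uint64_t rx_phys;
};

/* Mirrors the size/layout logic of tegra186_bpmp_dram_init(): reject
 * carveouts smaller than 8 KiB, then place TX and RX back to back. */
static int bpmp_dram_layout(uint64_t start, uint64_t size,
			    struct bpmp_shmem *out)
{
	if (size < SZ_8K)
		return -1; /* too small for TX + RX channels */
	out->tx_phys = start;
	out->rx_phys = start + SZ_4K;
	return 0;
}

/* Exercise both the happy path and the undersized-region rejection. */
static int bpmp_demo(void)
{
	struct bpmp_shmem s;

	if (bpmp_dram_layout(0x80000000ULL, SZ_8K, &s))
		return 1;
	if (s.tx_phys != 0x80000000ULL || s.rx_phys != 0x80001000ULL)
		return 2;
	if (bpmp_dram_layout(0x80000000ULL, SZ_4K, &s) != -1)
		return 3;
	return 0;
}
```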
+2 -2
drivers/firmware/tegra/bpmp.c
··· 735 735 if (!bpmp->threaded_channels) 736 736 return -ENOMEM; 737 737 738 + platform_set_drvdata(pdev, bpmp); 739 + 738 740 err = bpmp->soc->ops->init(bpmp); 739 741 if (err < 0) 740 742 return err; ··· 759 757 } 760 758 761 759 dev_info(&pdev->dev, "firmware: %.*s\n", (int)sizeof(tag), tag); 762 - 763 - platform_set_drvdata(pdev, bpmp); 764 760 765 761 err = of_platform_default_populate(pdev->dev.of_node, NULL, &pdev->dev); 766 762 if (err < 0)
+10 -2
drivers/firmware/xilinx/zynqmp.c
··· 942 942 */ 943 943 int zynqmp_pm_fpga_load(const u64 address, const u32 size, const u32 flags) 944 944 { 945 - return zynqmp_pm_invoke_fn(PM_FPGA_LOAD, lower_32_bits(address), 946 - upper_32_bits(address), size, flags, NULL); 945 + u32 ret_payload[PAYLOAD_ARG_CNT]; 946 + int ret; 947 + 948 + ret = zynqmp_pm_invoke_fn(PM_FPGA_LOAD, lower_32_bits(address), 949 + upper_32_bits(address), size, flags, 950 + ret_payload); 951 + if (ret_payload[0]) 952 + return -ret_payload[0]; 953 + 954 + return ret; 947 955 } 948 956 EXPORT_SYMBOL_GPL(zynqmp_pm_fpga_load); 949 957
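The `zynqmp_pm_fpga_load()` change distinguishes two failure sources: the EEMI invoke's transport return and the firmware's own status in the first response payload word. Per the diff, a non-zero payload word wins and is negated into an errno-style value; a hedged sketch of just that conversion (the payload layout is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Map (transport return, firmware payload) to a single return value,
 * in the order the diff checks them: a non-zero firmware status in
 * payload[0] is negated and reported first, otherwise the transport
 * result is passed through. */
static int fw_status_to_ret(int transport_ret, const uint32_t *payload)
{
	if (payload[0])
		return -(int)payload[0];
	return transport_ret;
}
```

Before this change a firmware-side bitstream failure was invisible to callers as long as the SMC transport itself succeeded.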
-11
drivers/memory/Kconfig
··· 30 30 If you have an embedded system with an AMBA bus and a PL172 31 31 controller, say Y or M here. 32 32 33 - config ATMEL_SDRAMC 34 - bool "Atmel (Multi-port DDR-)SDRAM Controller" 35 - default y if ARCH_AT91 36 - depends on ARCH_AT91 || COMPILE_TEST 37 - depends on OF 38 - help 39 - This driver is for Atmel SDRAM Controller or Atmel Multi-port 40 - DDR-SDRAM Controller available on Atmel AT91SAM9 and SAMA5 SoCs. 41 - Starting with the at91sam9g45, this controller supports SDR, DDR and 42 - LP-DDR memories. 43 - 44 33 config ATMEL_EBI 45 34 bool "Atmel EBI driver" 46 35 default y if ARCH_AT91
-1
drivers/memory/Makefile
··· 8 8 obj-$(CONFIG_OF) += of_memory.o 9 9 endif 10 10 obj-$(CONFIG_ARM_PL172_MPMC) += pl172.o 11 - obj-$(CONFIG_ATMEL_SDRAMC) += atmel-sdramc.o 12 11 obj-$(CONFIG_ATMEL_EBI) += atmel-ebi.o 13 12 obj-$(CONFIG_BRCMSTB_DPFE) += brcmstb_dpfe.o 14 13 obj-$(CONFIG_BRCMSTB_MEMC) += brcmstb_memc.o
-74
drivers/memory/atmel-sdramc.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-only 2 - /* 3 - * Atmel (Multi-port DDR-)SDRAM Controller driver 4 - * 5 - * Author: Alexandre Belloni <alexandre.belloni@free-electrons.com> 6 - * 7 - * Copyright (C) 2014 Atmel 8 - */ 9 - 10 - #include <linux/clk.h> 11 - #include <linux/err.h> 12 - #include <linux/kernel.h> 13 - #include <linux/init.h> 14 - #include <linux/of_platform.h> 15 - #include <linux/platform_device.h> 16 - 17 - struct at91_ramc_caps { 18 - bool has_ddrck; 19 - bool has_mpddr_clk; 20 - }; 21 - 22 - static const struct at91_ramc_caps at91rm9200_caps = { }; 23 - 24 - static const struct at91_ramc_caps at91sam9g45_caps = { 25 - .has_ddrck = 1, 26 - .has_mpddr_clk = 0, 27 - }; 28 - 29 - static const struct at91_ramc_caps sama5d3_caps = { 30 - .has_ddrck = 1, 31 - .has_mpddr_clk = 1, 32 - }; 33 - 34 - static const struct of_device_id atmel_ramc_of_match[] = { 35 - { .compatible = "atmel,at91rm9200-sdramc", .data = &at91rm9200_caps, }, 36 - { .compatible = "atmel,at91sam9260-sdramc", .data = &at91rm9200_caps, }, 37 - { .compatible = "atmel,at91sam9g45-ddramc", .data = &at91sam9g45_caps, }, 38 - { .compatible = "atmel,sama5d3-ddramc", .data = &sama5d3_caps, }, 39 - {}, 40 - }; 41 - 42 - static int atmel_ramc_probe(struct platform_device *pdev) 43 - { 44 - const struct at91_ramc_caps *caps; 45 - struct clk *clk; 46 - 47 - caps = of_device_get_match_data(&pdev->dev); 48 - 49 - if (caps->has_ddrck) { 50 - clk = devm_clk_get_enabled(&pdev->dev, "ddrck"); 51 - if (IS_ERR(clk)) 52 - return PTR_ERR(clk); 53 - } 54 - 55 - if (caps->has_mpddr_clk) { 56 - clk = devm_clk_get_enabled(&pdev->dev, "mpddr"); 57 - if (IS_ERR(clk)) { 58 - pr_err("AT91 RAMC: couldn't get mpddr clock\n"); 59 - return PTR_ERR(clk); 60 - } 61 - } 62 - 63 - return 0; 64 - } 65 - 66 - static struct platform_driver atmel_ramc_driver = { 67 - .probe = atmel_ramc_probe, 68 - .driver = { 69 - .name = "atmel-ramc", 70 - .of_match_table = atmel_ramc_of_match, 71 - }, 72 - }; 73 - 74 - 
builtin_platform_driver(atmel_ramc_driver);
+3 -1
drivers/memory/brcmstb_dpfe.c
··· 434 434 static int __send_command(struct brcmstb_dpfe_priv *priv, unsigned int cmd, 435 435 u32 result[]) 436 436 { 437 - const u32 *msg = priv->dpfe_api->command[cmd]; 438 437 void __iomem *regs = priv->regs; 439 438 unsigned int i, chksum, chksum_idx; 439 + const u32 *msg; 440 440 int ret = 0; 441 441 u32 resp; 442 442 443 443 if (cmd >= DPFE_CMD_MAX) 444 444 return -1; 445 + 446 + msg = priv->dpfe_api->command[cmd]; 445 447 446 448 mutex_lock(&priv->lock); 447 449
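The brcmstb_dpfe fix is a classic validate-before-index bug: the command table was dereferenced at initialization time, before the `cmd >= DPFE_CMD_MAX` range check could reject an out-of-range index. The corrected ordering, reduced to its essence (table contents are illustrative):

```c
#include <assert.h>

#define DPFE_CMD_MAX 3

static const int command_table[DPFE_CMD_MAX] = { 10, 20, 30 };

/* Validate the index first, and only then read the table — the
 * ordering the fix restores in __send_command(). */
static int lookup_command(unsigned int cmd, int *out)
{
	if (cmd >= DPFE_CMD_MAX)
		return -1;
	*out = command_table[cmd];
	return 0;
}

/* Convenience wrapper: returns the entry, or dflt when out of range. */
static int lookup_or(unsigned int cmd, int dflt)
{
	int v;

	return lookup_command(cmd, &v) ? dflt : v;
}
```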
+39 -14
drivers/memory/renesas-rpc-if.c
··· 7 7 * Copyright (C) 2019-2020 Cogent Embedded, Inc. 8 8 */ 9 9 10 + #include <linux/bitops.h> 10 11 #include <linux/clk.h> 11 12 #include <linux/io.h> 12 13 #include <linux/module.h> ··· 164 163 .n_yes_ranges = ARRAY_SIZE(rpcif_volatile_ranges), 165 164 }; 166 165 166 + struct rpcif_info { 167 + enum rpcif_type type; 168 + u8 strtim; 169 + }; 170 + 167 171 struct rpcif_priv { 168 172 struct device *dev; 169 173 void __iomem *base; ··· 177 171 struct reset_control *rstc; 178 172 struct platform_device *vdev; 179 173 size_t size; 180 - enum rpcif_type type; 174 + const struct rpcif_info *info; 181 175 enum rpcif_data_dir dir; 182 176 u8 bus_size; 183 177 u8 xfer_size; ··· 190 184 u32 enable; /* DRENR or SMENR */ 191 185 u32 dummy; /* DRDMCR or SMDMCR */ 192 186 u32 ddr; /* DRDRENR or SMDRENR */ 187 + }; 188 + 189 + static const struct rpcif_info rpcif_info_r8a7796 = { 190 + .type = RPCIF_RCAR_GEN3, 191 + .strtim = 6, 192 + }; 193 + 194 + static const struct rpcif_info rpcif_info_gen3 = { 195 + .type = RPCIF_RCAR_GEN3, 196 + .strtim = 7, 197 + }; 198 + 199 + static const struct rpcif_info rpcif_info_rz_g2l = { 200 + .type = RPCIF_RZ_G2L, 201 + .strtim = 7, 202 + }; 203 + 204 + static const struct rpcif_info rpcif_info_gen4 = { 205 + .type = RPCIF_RCAR_GEN4, 206 + .strtim = 15, 193 207 }; 194 208 195 209 /* ··· 336 310 if (ret) 337 311 return ret; 338 312 339 - if (rpc->type == RPCIF_RZ_G2L) { 313 + if (rpc->info->type == RPCIF_RZ_G2L) { 340 314 ret = reset_control_reset(rpc->rstc); 341 315 if (ret) 342 316 return ret; ··· 350 324 /* DMA Transfer is not supported */ 351 325 regmap_update_bits(rpc->regmap, RPCIF_PHYCNT, RPCIF_PHYCNT_HS, 0); 352 326 353 - if (rpc->type == RPCIF_RCAR_GEN3) 354 - regmap_update_bits(rpc->regmap, RPCIF_PHYCNT, 355 - RPCIF_PHYCNT_STRTIM(7), RPCIF_PHYCNT_STRTIM(7)); 356 - else if (rpc->type == RPCIF_RCAR_GEN4) 357 - regmap_update_bits(rpc->regmap, RPCIF_PHYCNT, 358 - RPCIF_PHYCNT_STRTIM(15), RPCIF_PHYCNT_STRTIM(15)); 327 + 
regmap_update_bits(rpc->regmap, RPCIF_PHYCNT, 328 + /* create mask with all affected bits set */ 329 + RPCIF_PHYCNT_STRTIM(BIT(fls(rpc->info->strtim)) - 1), 330 + RPCIF_PHYCNT_STRTIM(rpc->info->strtim)); 359 331 360 332 regmap_update_bits(rpc->regmap, RPCIF_PHYOFFSET1, RPCIF_PHYOFFSET1_DDRTMG(3), 361 333 RPCIF_PHYOFFSET1_DDRTMG(3)); ··· 364 340 regmap_update_bits(rpc->regmap, RPCIF_PHYINT, 365 341 RPCIF_PHYINT_WPVAL, 0); 366 342 367 - if (rpc->type == RPCIF_RZ_G2L) 343 + if (rpc->info->type == RPCIF_RZ_G2L) 368 344 regmap_update_bits(rpc->regmap, RPCIF_CMNCR, 369 345 RPCIF_CMNCR_MOIIO(3) | RPCIF_CMNCR_IOFV(3) | 370 346 RPCIF_CMNCR_BSZ(3), ··· 753 729 rpc->dirmap = devm_ioremap_resource(dev, res); 754 730 if (IS_ERR(rpc->dirmap)) 755 731 return PTR_ERR(rpc->dirmap); 756 - rpc->size = resource_size(res); 757 732 758 - rpc->type = (uintptr_t)of_device_get_match_data(dev); 733 + rpc->size = resource_size(res); 734 + rpc->info = of_device_get_match_data(dev); 759 735 rpc->rstc = devm_reset_control_get_exclusive(dev, NULL); 760 736 if (IS_ERR(rpc->rstc)) 761 737 return PTR_ERR(rpc->rstc); ··· 788 764 } 789 765 790 766 static const struct of_device_id rpcif_of_match[] = { 791 - { .compatible = "renesas,rcar-gen3-rpc-if", .data = (void *)RPCIF_RCAR_GEN3 }, 792 - { .compatible = "renesas,rcar-gen4-rpc-if", .data = (void *)RPCIF_RCAR_GEN4 }, 793 - { .compatible = "renesas,rzg2l-rpc-if", .data = (void *)RPCIF_RZ_G2L }, 767 + { .compatible = "renesas,r8a7796-rpc-if", .data = &rpcif_info_r8a7796 }, 768 + { .compatible = "renesas,rcar-gen3-rpc-if", .data = &rpcif_info_gen3 }, 769 + { .compatible = "renesas,rcar-gen4-rpc-if", .data = &rpcif_info_gen4 }, 770 + { .compatible = "renesas,rzg2l-rpc-if", .data = &rpcif_info_rz_g2l }, 794 771 {}, 795 772 }; 796 773 MODULE_DEVICE_TABLE(of, rpcif_of_match);
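The renesas-rpc-if rework collapses the per-SoC `STRTIM` branches into one `regmap_update_bits()` call by deriving the field mask from the value itself: `BIT(fls(v)) - 1` is the smallest all-ones mask covering `v`, so the per-SoC `strtim` values 6, 7, and 15 yield masks 0x7, 0x7, and 0xf. A self-contained sketch (with a portable stand-in for the kernel's `fls()`):

```c
#include <assert.h>
#include <stdint.h>

/* Portable equivalent of the kernel's fls(): index of the highest
 * set bit, 1-based; 0 for an all-zero input. */
static unsigned int fls32(uint32_t v)
{
	unsigned int r = 0;

	while (v) {
		v >>= 1;
		r++;
	}
	return r;
}

/* Smallest all-ones mask that covers every bit of strtim, matching
 * the "mask with all affected bits set" comment in the diff. */
static uint32_t strtim_mask(uint32_t strtim)
{
	return (1u << fls32(strtim)) - 1;
}
```

Note the mask covers the value actually written, not a fixed field width, which is exactly enough for `regmap_update_bits()` to clear the old setting before applying the new one.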
+24
drivers/memory/tegra/mc.c
··· 15 15 #include <linux/platform_device.h> 16 16 #include <linux/slab.h> 17 17 #include <linux/sort.h> 18 + #include <linux/tegra-icc.h> 18 19 19 20 #include <soc/tegra/fuse.h> 20 21 ··· 793 792 mc->provider.data = &mc->provider; 794 793 mc->provider.set = mc->soc->icc_ops->set; 795 794 mc->provider.aggregate = mc->soc->icc_ops->aggregate; 795 + mc->provider.get_bw = mc->soc->icc_ops->get_bw; 796 + mc->provider.xlate = mc->soc->icc_ops->xlate; 796 797 mc->provider.xlate_extended = mc->soc->icc_ops->xlate_extended; 797 798 798 799 icc_provider_init(&mc->provider); ··· 827 824 err = icc_link_create(node, TEGRA_ICC_MC); 828 825 if (err) 829 826 goto remove_nodes; 827 + 828 + node->data = (struct tegra_mc_client *)&(mc->soc->clients[i]); 830 829 } 831 830 832 831 err = icc_provider_register(&mc->provider); ··· 841 836 icc_nodes_remove(&mc->provider); 842 837 843 838 return err; 839 + } 840 + 841 + static void tegra_mc_num_channel_enabled(struct tegra_mc *mc) 842 + { 843 + unsigned int i; 844 + u32 value; 845 + 846 + value = mc_ch_readl(mc, 0, MC_EMEM_ADR_CFG_CHANNEL_ENABLE); 847 + if (value <= 0) { 848 + mc->num_channels = mc->soc->num_channels; 849 + return; 850 + } 851 + 852 + for (i = 0; i < 32; i++) { 853 + if (value & BIT(i)) 854 + mc->num_channels++; 855 + } 844 856 } 845 857 846 858 static int tegra_mc_probe(struct platform_device *pdev) ··· 897 875 if (err < 0) 898 876 return err; 899 877 } 878 + 879 + tegra_mc_num_channel_enabled(mc); 900 880 901 881 if (mc->soc->ops && mc->soc->ops->handle_irq) { 902 882 mc->irq = platform_get_irq(pdev, 0);
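The new `tegra_mc_num_channel_enabled()` derives the channel count by popcounting the `MC_EMEM_ADR_CFG_CHANNEL_ENABLE` register, with a zero value falling back to the SoC's static default. The same logic as a standalone function:

```c
#include <assert.h>
#include <stdint.h>

/* Each set bit in the channel-enable register corresponds to one
 * enabled memory channel; a zero register means the bitmap is not
 * populated, so fall back to the SoC-defined channel count. */
static unsigned int count_channels(uint32_t reg, unsigned int soc_default)
{
	unsigned int i, n = 0;

	if (reg == 0)
		return soc_default;

	for (i = 0; i < 32; i++)
		if (reg & (1u << i))
			n++;
	return n;
}
```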
+1
drivers/memory/tegra/mc.h
··· 53 53 #define MC_ERR_ROUTE_SANITY_ADR 0x9c4 54 54 #define MC_ERR_GENERALIZED_CARVEOUT_STATUS 0xc00 55 55 #define MC_ERR_GENERALIZED_CARVEOUT_ADR 0xc04 56 + #define MC_EMEM_ADR_CFG_CHANNEL_ENABLE 0xdf8 56 57 #define MC_GLOBAL_INTSTATUS 0xf24 57 58 #define MC_ERR_ADR_HI 0x11fc 58 59
+133
drivers/memory/tegra/tegra186-emc.c
··· 7 7 #include <linux/debugfs.h> 8 8 #include <linux/module.h> 9 9 #include <linux/mod_devicetable.h> 10 + #include <linux/of_platform.h> 10 11 #include <linux/platform_device.h> 11 12 12 13 #include <soc/tegra/bpmp.h> 14 + #include "mc.h" 13 15 14 16 struct tegra186_emc_dvfs { 15 17 unsigned long latency; ··· 31 29 unsigned long min_rate; 32 30 unsigned long max_rate; 33 31 } debugfs; 32 + 33 + struct icc_provider provider; 34 34 }; 35 + 36 + static inline struct tegra186_emc *to_tegra186_emc(struct icc_provider *provider) 37 + { 38 + return container_of(provider, struct tegra186_emc, provider); 39 + } 35 40 36 41 /* 37 42 * debugfs interface ··· 155 146 tegra186_emc_debug_max_rate_get, 156 147 tegra186_emc_debug_max_rate_set, "%llu\n"); 157 148 149 + /* 150 + * tegra_emc_icc_set_bw() - Set BW api for EMC provider 151 + * @src: ICC node for External Memory Controller (EMC) 152 + * @dst: ICC node for External Memory (DRAM) 153 + * 154 + * Do nothing here as info to BPMP-FW is now passed in the BW set function 155 + * of the MC driver. BPMP-FW sets the final Freq based on the passed values. 
156 + */ 157 + static int tegra_emc_icc_set_bw(struct icc_node *src, struct icc_node *dst) 158 + { 159 + return 0; 160 + } 161 + 162 + static struct icc_node * 163 + tegra_emc_of_icc_xlate(struct of_phandle_args *spec, void *data) 164 + { 165 + struct icc_provider *provider = data; 166 + struct icc_node *node; 167 + 168 + /* External Memory is the only possible ICC route */ 169 + list_for_each_entry(node, &provider->nodes, node_list) { 170 + if (node->id != TEGRA_ICC_EMEM) 171 + continue; 172 + 173 + return node; 174 + } 175 + 176 + return ERR_PTR(-EPROBE_DEFER); 177 + } 178 + 179 + static int tegra_emc_icc_get_init_bw(struct icc_node *node, u32 *avg, u32 *peak) 180 + { 181 + *avg = 0; 182 + *peak = 0; 183 + 184 + return 0; 185 + } 186 + 187 + static int tegra_emc_interconnect_init(struct tegra186_emc *emc) 188 + { 189 + struct tegra_mc *mc = dev_get_drvdata(emc->dev->parent); 190 + const struct tegra_mc_soc *soc = mc->soc; 191 + struct icc_node *node; 192 + int err; 193 + 194 + emc->provider.dev = emc->dev; 195 + emc->provider.set = tegra_emc_icc_set_bw; 196 + emc->provider.data = &emc->provider; 197 + emc->provider.aggregate = soc->icc_ops->aggregate; 198 + emc->provider.xlate = tegra_emc_of_icc_xlate; 199 + emc->provider.get_bw = tegra_emc_icc_get_init_bw; 200 + 201 + icc_provider_init(&emc->provider); 202 + 203 + /* create External Memory Controller node */ 204 + node = icc_node_create(TEGRA_ICC_EMC); 205 + if (IS_ERR(node)) { 206 + err = PTR_ERR(node); 207 + goto err_msg; 208 + } 209 + 210 + node->name = "External Memory Controller"; 211 + icc_node_add(node, &emc->provider); 212 + 213 + /* link External Memory Controller to External Memory (DRAM) */ 214 + err = icc_link_create(node, TEGRA_ICC_EMEM); 215 + if (err) 216 + goto remove_nodes; 217 + 218 + /* create External Memory node */ 219 + node = icc_node_create(TEGRA_ICC_EMEM); 220 + if (IS_ERR(node)) { 221 + err = PTR_ERR(node); 222 + goto remove_nodes; 223 + } 224 + 225 + node->name = "External Memory 
(DRAM)"; 226 + icc_node_add(node, &emc->provider); 227 + 228 + err = icc_provider_register(&emc->provider); 229 + if (err) 230 + goto remove_nodes; 231 + 232 + return 0; 233 + 234 + remove_nodes: 235 + icc_nodes_remove(&emc->provider); 236 + err_msg: 237 + dev_err(emc->dev, "failed to initialize ICC: %d\n", err); 238 + 239 + return err; 240 + } 241 + 158 242 static int tegra186_emc_probe(struct platform_device *pdev) 159 243 { 244 + struct tegra_mc *mc = dev_get_drvdata(pdev->dev.parent); 160 245 struct mrq_emc_dvfs_latency_response response; 161 246 struct tegra_bpmp_message msg; 162 247 struct tegra186_emc *emc; ··· 339 236 debugfs_create_file("max_rate", S_IRUGO | S_IWUSR, emc->debugfs.root, 340 237 emc, &tegra186_emc_debug_max_rate_fops); 341 238 239 + if (mc && mc->soc->icc_ops) { 240 + if (tegra_bpmp_mrq_is_supported(emc->bpmp, MRQ_BWMGR_INT)) { 241 + mc->bwmgr_mrq_supported = true; 242 + 243 + /* 244 + * MC driver probe can't get BPMP reference as it gets probed 245 + * earlier than BPMP. So, save the BPMP ref got from the EMC 246 + * DT node in the mc->bpmp and use it in MC's icc_set hook. 247 + */ 248 + mc->bpmp = emc->bpmp; 249 + barrier(); 250 + } 251 + 252 + /* 253 + * Initialize the ICC even if BPMP-FW doesn't support 'MRQ_BWMGR_INT'. 254 + * Use the flag 'mc->bwmgr_mrq_supported' within MC driver and return 255 + * EINVAL instead of passing the request to BPMP-FW later when the BW 256 + * request is made by client with 'icc_set_bw()' call. 
257 + */ 258 + err = tegra_emc_interconnect_init(emc); 259 + if (err) { 260 + mc->bpmp = NULL; 261 + goto put_bpmp; 262 + } 263 + } 264 + 342 265 return 0; 343 266 344 267 put_bpmp: ··· 374 245 375 246 static int tegra186_emc_remove(struct platform_device *pdev) 376 247 { 248 + struct tegra_mc *mc = dev_get_drvdata(pdev->dev.parent); 377 249 struct tegra186_emc *emc = platform_get_drvdata(pdev); 378 250 379 251 debugfs_remove_recursive(emc->debugfs.root); 252 + 253 + mc->bpmp = NULL; 380 254 tegra_bpmp_put(emc->bpmp); 381 255 382 256 return 0; ··· 404 272 .name = "tegra186-emc", 405 273 .of_match_table = tegra186_emc_of_match, 406 274 .suppress_bind_attrs = true, 275 + .sync_state = icc_sync_state, 407 276 }, 408 277 .probe = tegra186_emc_probe, 409 278 .remove = tegra186_emc_remove,
+594 -1
drivers/memory/tegra/tegra234.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 2 /* 3 - * Copyright (C) 2021-2022, NVIDIA CORPORATION. All rights reserved. 3 + * Copyright (C) 2022-2023, NVIDIA CORPORATION. All rights reserved. 4 4 */ 5 5 6 6 #include <soc/tegra/mc.h> 7 7 8 8 #include <dt-bindings/memory/tegra234-mc.h> 9 + #include <linux/interconnect.h> 10 + #include <linux/tegra-icc.h> 9 11 12 + #include <soc/tegra/bpmp.h> 10 13 #include "mc.h" 11 14 12 15 static const struct tegra_mc_client tegra234_mc_clients[] = { 13 16 { 17 + .id = TEGRA234_MEMORY_CLIENT_HDAR, 18 + .name = "hdar", 19 + .bpmp_id = TEGRA_ICC_BPMP_HDA, 20 + .type = TEGRA_ICC_ISO_AUDIO, 21 + .sid = TEGRA234_SID_HDA, 22 + .regs = { 23 + .sid = { 24 + .override = 0xa8, 25 + .security = 0xac, 26 + }, 27 + }, 28 + }, { 29 + .id = TEGRA234_MEMORY_CLIENT_HDAW, 30 + .name = "hdaw", 31 + .bpmp_id = TEGRA_ICC_BPMP_HDA, 32 + .type = TEGRA_ICC_ISO_AUDIO, 33 + .sid = TEGRA234_SID_HDA, 34 + .regs = { 35 + .sid = { 36 + .override = 0x1a8, 37 + .security = 0x1ac, 38 + }, 39 + }, 40 + }, { 14 41 .id = TEGRA234_MEMORY_CLIENT_MGBEARD, 15 42 .name = "mgbeard", 43 + .bpmp_id = TEGRA_ICC_BPMP_EQOS, 44 + .type = TEGRA_ICC_NISO, 16 45 .sid = TEGRA234_SID_MGBE, 17 46 .regs = { 18 47 .sid = { ··· 52 23 }, { 53 24 .id = TEGRA234_MEMORY_CLIENT_MGBEBRD, 54 25 .name = "mgbebrd", 26 + .bpmp_id = TEGRA_ICC_BPMP_EQOS, 27 + .type = TEGRA_ICC_NISO, 55 28 .sid = TEGRA234_SID_MGBE_VF1, 56 29 .regs = { 57 30 .sid = { ··· 64 33 }, { 65 34 .id = TEGRA234_MEMORY_CLIENT_MGBECRD, 66 35 .name = "mgbecrd", 36 + .bpmp_id = TEGRA_ICC_BPMP_EQOS, 37 + .type = TEGRA_ICC_NISO, 67 38 .sid = TEGRA234_SID_MGBE_VF2, 68 39 .regs = { 69 40 .sid = { ··· 76 43 }, { 77 44 .id = TEGRA234_MEMORY_CLIENT_MGBEDRD, 78 45 .name = "mgbedrd", 46 + .bpmp_id = TEGRA_ICC_BPMP_EQOS, 47 + .type = TEGRA_ICC_NISO, 79 48 .sid = TEGRA234_SID_MGBE_VF3, 80 49 .regs = { 81 50 .sid = { ··· 87 52 }, 88 53 }, { 89 54 .id = TEGRA234_MEMORY_CLIENT_MGBEAWR, 55 + .bpmp_id = TEGRA_ICC_BPMP_EQOS, 56 + .type = 
TEGRA_ICC_NISO, 90 57 .name = "mgbeawr", 91 58 .sid = TEGRA234_SID_MGBE, 92 59 .regs = { ··· 100 63 }, { 101 64 .id = TEGRA234_MEMORY_CLIENT_MGBEBWR, 102 65 .name = "mgbebwr", 66 + .bpmp_id = TEGRA_ICC_BPMP_EQOS, 67 + .type = TEGRA_ICC_NISO, 103 68 .sid = TEGRA234_SID_MGBE_VF1, 104 69 .regs = { 105 70 .sid = { ··· 112 73 }, { 113 74 .id = TEGRA234_MEMORY_CLIENT_MGBECWR, 114 75 .name = "mgbecwr", 76 + .bpmp_id = TEGRA_ICC_BPMP_EQOS, 77 + .type = TEGRA_ICC_NISO, 115 78 .sid = TEGRA234_SID_MGBE_VF2, 116 79 .regs = { 117 80 .sid = { ··· 124 83 }, { 125 84 .id = TEGRA234_MEMORY_CLIENT_SDMMCRAB, 126 85 .name = "sdmmcrab", 86 + .bpmp_id = TEGRA_ICC_BPMP_SDMMC_4, 87 + .type = TEGRA_ICC_NISO, 127 88 .sid = TEGRA234_SID_SDMMC4, 128 89 .regs = { 129 90 .sid = { ··· 136 93 }, { 137 94 .id = TEGRA234_MEMORY_CLIENT_MGBEDWR, 138 95 .name = "mgbedwr", 96 + .bpmp_id = TEGRA_ICC_BPMP_EQOS, 97 + .type = TEGRA_ICC_NISO, 139 98 .sid = TEGRA234_SID_MGBE_VF3, 140 99 .regs = { 141 100 .sid = { ··· 148 103 }, { 149 104 .id = TEGRA234_MEMORY_CLIENT_SDMMCWAB, 150 105 .name = "sdmmcwab", 106 + .bpmp_id = TEGRA_ICC_BPMP_SDMMC_4, 107 + .type = TEGRA_ICC_NISO, 151 108 .sid = TEGRA234_SID_SDMMC4, 152 109 .regs = { 153 110 .sid = { 154 111 .override = 0x338, 155 112 .security = 0x33c, 113 + }, 114 + }, 115 + }, { 116 + .id = TEGRA234_MEMORY_CLIENT_VI2W, 117 + .name = "vi2w", 118 + .bpmp_id = TEGRA_ICC_BPMP_VI2, 119 + .type = TEGRA_ICC_ISO_VI, 120 + .sid = TEGRA234_SID_ISO_VI2, 121 + .regs = { 122 + .sid = { 123 + .override = 0x380, 124 + .security = 0x384, 125 + }, 126 + }, 127 + }, { 128 + .id = TEGRA234_MEMORY_CLIENT_VI2FALR, 129 + .name = "vi2falr", 130 + .bpmp_id = TEGRA_ICC_BPMP_VI2FAL, 131 + .type = TEGRA_ICC_ISO_VIFAL, 132 + .sid = TEGRA234_SID_ISO_VI2FALC, 133 + .regs = { 134 + .sid = { 135 + .override = 0x388, 136 + .security = 0x38c, 137 + }, 138 + }, 139 + }, { 140 + .id = TEGRA234_MEMORY_CLIENT_VI2FALW, 141 + .name = "vi2falw", 142 + .bpmp_id = TEGRA_ICC_BPMP_VI2FAL, 143 + .type = 
TEGRA_ICC_ISO_VIFAL, 144 + .sid = TEGRA234_SID_ISO_VI2FALC, 145 + .regs = { 146 + .sid = { 147 + .override = 0x3e0, 148 + .security = 0x3e4, 149 + }, 150 + }, 151 + }, { 152 + .id = TEGRA234_MEMORY_CLIENT_APER, 153 + .name = "aper", 154 + .bpmp_id = TEGRA_ICC_BPMP_APE, 155 + .type = TEGRA_ICC_ISO_AUDIO, 156 + .sid = TEGRA234_SID_APE, 157 + .regs = { 158 + .sid = { 159 + .override = 0x3d0, 160 + .security = 0x3d4, 161 + }, 162 + }, 163 + }, { 164 + .id = TEGRA234_MEMORY_CLIENT_APEW, 165 + .name = "apew", 166 + .bpmp_id = TEGRA_ICC_BPMP_APE, 167 + .type = TEGRA_ICC_ISO_AUDIO, 168 + .sid = TEGRA234_SID_APE, 169 + .regs = { 170 + .sid = { 171 + .override = 0x3d8, 172 + .security = 0x3dc, 173 + }, 174 + }, 175 + }, { 176 + .id = TEGRA234_MEMORY_CLIENT_NVDISPLAYR, 177 + .name = "nvdisplayr", 178 + .bpmp_id = TEGRA_ICC_BPMP_DISPLAY, 179 + .type = TEGRA_ICC_ISO_DISPLAY, 180 + .sid = TEGRA234_SID_ISO_NVDISPLAY, 181 + .regs = { 182 + .sid = { 183 + .override = 0x490, 184 + .security = 0x494, 185 + }, 186 + }, 187 + }, { 188 + .id = TEGRA234_MEMORY_CLIENT_NVDISPLAYR1, 189 + .name = "nvdisplayr1", 190 + .bpmp_id = TEGRA_ICC_BPMP_DISPLAY, 191 + .type = TEGRA_ICC_ISO_DISPLAY, 192 + .sid = TEGRA234_SID_ISO_NVDISPLAY, 193 + .regs = { 194 + .sid = { 195 + .override = 0x508, 196 + .security = 0x50c, 156 197 }, 157 198 }, 158 199 }, { ··· 284 153 }, { 285 154 .id = TEGRA234_MEMORY_CLIENT_APEDMAR, 286 155 .name = "apedmar", 156 + .bpmp_id = TEGRA_ICC_BPMP_APEDMA, 157 + .type = TEGRA_ICC_ISO_AUDIO, 287 158 .sid = TEGRA234_SID_APE, 288 159 .regs = { 289 160 .sid = { ··· 296 163 }, { 297 164 .id = TEGRA234_MEMORY_CLIENT_APEDMAW, 298 165 .name = "apedmaw", 166 + .bpmp_id = TEGRA_ICC_BPMP_APEDMA, 167 + .type = TEGRA_ICC_ISO_AUDIO, 299 168 .sid = TEGRA234_SID_APE, 300 169 .regs = { 301 170 .sid = { ··· 465 330 .security = 0x37c, 466 331 }, 467 332 }, 333 + }, { 334 + .id = TEGRA234_MEMORY_CLIENT_PCIE0R, 335 + .name = "pcie0r", 336 + .bpmp_id = TEGRA_ICC_BPMP_PCIE_0, 337 + .type = 
TEGRA_ICC_NISO, 338 + .sid = TEGRA234_SID_PCIE0, 339 + .regs = { 340 + .sid = { 341 + .override = 0x6c0, 342 + .security = 0x6c4, 343 + }, 344 + }, 345 + }, { 346 + .id = TEGRA234_MEMORY_CLIENT_PCIE0W, 347 + .name = "pcie0w", 348 + .bpmp_id = TEGRA_ICC_BPMP_PCIE_0, 349 + .type = TEGRA_ICC_NISO, 350 + .sid = TEGRA234_SID_PCIE0, 351 + .regs = { 352 + .sid = { 353 + .override = 0x6c8, 354 + .security = 0x6cc, 355 + }, 356 + }, 357 + }, { 358 + .id = TEGRA234_MEMORY_CLIENT_PCIE1R, 359 + .name = "pcie1r", 360 + .bpmp_id = TEGRA_ICC_BPMP_PCIE_1, 361 + .type = TEGRA_ICC_NISO, 362 + .sid = TEGRA234_SID_PCIE1, 363 + .regs = { 364 + .sid = { 365 + .override = 0x6d0, 366 + .security = 0x6d4, 367 + }, 368 + }, 369 + }, { 370 + .id = TEGRA234_MEMORY_CLIENT_PCIE1W, 371 + .name = "pcie1w", 372 + .bpmp_id = TEGRA_ICC_BPMP_PCIE_1, 373 + .type = TEGRA_ICC_NISO, 374 + .sid = TEGRA234_SID_PCIE1, 375 + .regs = { 376 + .sid = { 377 + .override = 0x6d8, 378 + .security = 0x6dc, 379 + }, 380 + }, 381 + }, { 382 + .id = TEGRA234_MEMORY_CLIENT_PCIE2AR, 383 + .name = "pcie2ar", 384 + .bpmp_id = TEGRA_ICC_BPMP_PCIE_2, 385 + .type = TEGRA_ICC_NISO, 386 + .sid = TEGRA234_SID_PCIE2, 387 + .regs = { 388 + .sid = { 389 + .override = 0x6e0, 390 + .security = 0x6e4, 391 + }, 392 + }, 393 + }, { 394 + .id = TEGRA234_MEMORY_CLIENT_PCIE2AW, 395 + .name = "pcie2aw", 396 + .bpmp_id = TEGRA_ICC_BPMP_PCIE_2, 397 + .type = TEGRA_ICC_NISO, 398 + .sid = TEGRA234_SID_PCIE2, 399 + .regs = { 400 + .sid = { 401 + .override = 0x6e8, 402 + .security = 0x6ec, 403 + }, 404 + }, 405 + }, { 406 + .id = TEGRA234_MEMORY_CLIENT_PCIE3R, 407 + .name = "pcie3r", 408 + .bpmp_id = TEGRA_ICC_BPMP_PCIE_3, 409 + .type = TEGRA_ICC_NISO, 410 + .sid = TEGRA234_SID_PCIE3, 411 + .regs = { 412 + .sid = { 413 + .override = 0x6f0, 414 + .security = 0x6f4, 415 + }, 416 + }, 417 + }, { 418 + .id = TEGRA234_MEMORY_CLIENT_PCIE3W, 419 + .name = "pcie3w", 420 + .bpmp_id = TEGRA_ICC_BPMP_PCIE_3, 421 + .type = TEGRA_ICC_NISO, 422 + .sid = 
TEGRA234_SID_PCIE3, 423 + .regs = { 424 + .sid = { 425 + .override = 0x6f8, 426 + .security = 0x6fc, 427 + }, 428 + }, 429 + }, { 430 + .id = TEGRA234_MEMORY_CLIENT_PCIE4R, 431 + .name = "pcie4r", 432 + .bpmp_id = TEGRA_ICC_BPMP_PCIE_4, 433 + .type = TEGRA_ICC_NISO, 434 + .sid = TEGRA234_SID_PCIE4, 435 + .regs = { 436 + .sid = { 437 + .override = 0x700, 438 + .security = 0x704, 439 + }, 440 + }, 441 + }, { 442 + .id = TEGRA234_MEMORY_CLIENT_PCIE4W, 443 + .name = "pcie4w", 444 + .bpmp_id = TEGRA_ICC_BPMP_PCIE_4, 445 + .type = TEGRA_ICC_NISO, 446 + .sid = TEGRA234_SID_PCIE4, 447 + .regs = { 448 + .sid = { 449 + .override = 0x708, 450 + .security = 0x70c, 451 + }, 452 + }, 453 + }, { 454 + .id = TEGRA234_MEMORY_CLIENT_PCIE5R, 455 + .name = "pcie5r", 456 + .bpmp_id = TEGRA_ICC_BPMP_PCIE_5, 457 + .type = TEGRA_ICC_NISO, 458 + .sid = TEGRA234_SID_PCIE5, 459 + .regs = { 460 + .sid = { 461 + .override = 0x710, 462 + .security = 0x714, 463 + }, 464 + }, 465 + }, { 466 + .id = TEGRA234_MEMORY_CLIENT_PCIE5W, 467 + .name = "pcie5w", 468 + .bpmp_id = TEGRA_ICC_BPMP_PCIE_5, 469 + .type = TEGRA_ICC_NISO, 470 + .sid = TEGRA234_SID_PCIE5, 471 + .regs = { 472 + .sid = { 473 + .override = 0x718, 474 + .security = 0x71c, 475 + }, 476 + }, 477 + }, { 478 + .id = TEGRA234_MEMORY_CLIENT_PCIE5R1, 479 + .name = "pcie5r1", 480 + .bpmp_id = TEGRA_ICC_BPMP_PCIE_5, 481 + .type = TEGRA_ICC_NISO, 482 + .sid = TEGRA234_SID_PCIE5, 483 + .regs = { 484 + .sid = { 485 + .override = 0x778, 486 + .security = 0x77c, 487 + }, 488 + }, 489 + }, { 490 + .id = TEGRA234_MEMORY_CLIENT_PCIE6AR, 491 + .name = "pcie6ar", 492 + .bpmp_id = TEGRA_ICC_BPMP_PCIE_6, 493 + .type = TEGRA_ICC_NISO, 494 + .sid = TEGRA234_SID_PCIE6, 495 + .regs = { 496 + .sid = { 497 + .override = 0x140, 498 + .security = 0x144, 499 + }, 500 + }, 501 + }, { 502 + .id = TEGRA234_MEMORY_CLIENT_PCIE6AW, 503 + .name = "pcie6aw", 504 + .bpmp_id = TEGRA_ICC_BPMP_PCIE_6, 505 + .type = TEGRA_ICC_NISO, 506 + .sid = TEGRA234_SID_PCIE6, 507 + .regs = 
{ 508 + .sid = { 509 + .override = 0x148, 510 + .security = 0x14c, 511 + }, 512 + }, 513 + }, { 514 + .id = TEGRA234_MEMORY_CLIENT_PCIE6AR1, 515 + .name = "pcie6ar1", 516 + .bpmp_id = TEGRA_ICC_BPMP_PCIE_6, 517 + .type = TEGRA_ICC_NISO, 518 + .sid = TEGRA234_SID_PCIE6, 519 + .regs = { 520 + .sid = { 521 + .override = 0x1e8, 522 + .security = 0x1ec, 523 + }, 524 + }, 525 + }, { 526 + .id = TEGRA234_MEMORY_CLIENT_PCIE7AR, 527 + .name = "pcie7ar", 528 + .bpmp_id = TEGRA_ICC_BPMP_PCIE_7, 529 + .type = TEGRA_ICC_NISO, 530 + .sid = TEGRA234_SID_PCIE7, 531 + .regs = { 532 + .sid = { 533 + .override = 0x150, 534 + .security = 0x154, 535 + }, 536 + }, 537 + }, { 538 + .id = TEGRA234_MEMORY_CLIENT_PCIE7AW, 539 + .name = "pcie7aw", 540 + .bpmp_id = TEGRA_ICC_BPMP_PCIE_7, 541 + .type = TEGRA_ICC_NISO, 542 + .sid = TEGRA234_SID_PCIE7, 543 + .regs = { 544 + .sid = { 545 + .override = 0x180, 546 + .security = 0x184, 547 + }, 548 + }, 549 + }, { 550 + .id = TEGRA234_MEMORY_CLIENT_PCIE7AR1, 551 + .name = "pcie7ar1", 552 + .bpmp_id = TEGRA_ICC_BPMP_PCIE_7, 553 + .type = TEGRA_ICC_NISO, 554 + .sid = TEGRA234_SID_PCIE7, 555 + .regs = { 556 + .sid = { 557 + .override = 0x248, 558 + .security = 0x24c, 559 + }, 560 + }, 561 + }, { 562 + .id = TEGRA234_MEMORY_CLIENT_PCIE8AR, 563 + .name = "pcie8ar", 564 + .bpmp_id = TEGRA_ICC_BPMP_PCIE_8, 565 + .type = TEGRA_ICC_NISO, 566 + .sid = TEGRA234_SID_PCIE8, 567 + .regs = { 568 + .sid = { 569 + .override = 0x190, 570 + .security = 0x194, 571 + }, 572 + }, 573 + }, { 574 + .id = TEGRA234_MEMORY_CLIENT_PCIE8AW, 575 + .name = "pcie8aw", 576 + .bpmp_id = TEGRA_ICC_BPMP_PCIE_8, 577 + .type = TEGRA_ICC_NISO, 578 + .sid = TEGRA234_SID_PCIE8, 579 + .regs = { 580 + .sid = { 581 + .override = 0x1d8, 582 + .security = 0x1dc, 583 + }, 584 + }, 585 + }, { 586 + .id = TEGRA234_MEMORY_CLIENT_PCIE9AR, 587 + .name = "pcie9ar", 588 + .bpmp_id = TEGRA_ICC_BPMP_PCIE_9, 589 + .type = TEGRA_ICC_NISO, 590 + .sid = TEGRA234_SID_PCIE9, 591 + .regs = { 592 + .sid = { 593 
+ .override = 0x1e0, 594 + .security = 0x1e4, 595 + }, 596 + }, 597 + }, { 598 + .id = TEGRA234_MEMORY_CLIENT_PCIE9AW, 599 + .name = "pcie9aw", 600 + .bpmp_id = TEGRA_ICC_BPMP_PCIE_9, 601 + .type = TEGRA_ICC_NISO, 602 + .sid = TEGRA234_SID_PCIE9, 603 + .regs = { 604 + .sid = { 605 + .override = 0x1f0, 606 + .security = 0x1f4, 607 + }, 608 + }, 609 + }, { 610 + .id = TEGRA234_MEMORY_CLIENT_PCIE10AR, 611 + .name = "pcie10ar", 612 + .bpmp_id = TEGRA_ICC_BPMP_PCIE_10, 613 + .type = TEGRA_ICC_NISO, 614 + .sid = TEGRA234_SID_PCIE10, 615 + .regs = { 616 + .sid = { 617 + .override = 0x1f8, 618 + .security = 0x1fc, 619 + }, 620 + }, 621 + }, { 622 + .id = TEGRA234_MEMORY_CLIENT_PCIE10AW, 623 + .name = "pcie10aw", 624 + .bpmp_id = TEGRA_ICC_BPMP_PCIE_10, 625 + .type = TEGRA_ICC_NISO, 626 + .sid = TEGRA234_SID_PCIE10, 627 + .regs = { 628 + .sid = { 629 + .override = 0x200, 630 + .security = 0x204, 631 + }, 632 + }, 633 + }, { 634 + .id = TEGRA234_MEMORY_CLIENT_PCIE10AR1, 635 + .name = "pcie10ar1", 636 + .bpmp_id = TEGRA_ICC_BPMP_PCIE_10, 637 + .type = TEGRA_ICC_NISO, 638 + .sid = TEGRA234_SID_PCIE10, 639 + .regs = { 640 + .sid = { 641 + .override = 0x240, 642 + .security = 0x244, 643 + }, 644 + }, 645 + }, { 646 + .id = TEGRA_ICC_MC_CPU_CLUSTER0, 647 + .name = "sw_cluster0", 648 + .bpmp_id = TEGRA_ICC_BPMP_CPU_CLUSTER0, 649 + .type = TEGRA_ICC_NISO, 650 + }, { 651 + .id = TEGRA_ICC_MC_CPU_CLUSTER1, 652 + .name = "sw_cluster1", 653 + .bpmp_id = TEGRA_ICC_BPMP_CPU_CLUSTER1, 654 + .type = TEGRA_ICC_NISO, 655 + }, { 656 + .id = TEGRA_ICC_MC_CPU_CLUSTER2, 657 + .name = "sw_cluster2", 658 + .bpmp_id = TEGRA_ICC_BPMP_CPU_CLUSTER2, 659 + .type = TEGRA_ICC_NISO, 468 660 }, 661 + }; 662 + 663 + /* 664 + * tegra234_mc_icc_set() - Pass MC client info to the BPMP-FW 665 + * @src: ICC node for Memory Controller's (MC) Client 666 + * @dst: ICC node for Memory Controller (MC) 667 + * 668 + * Passing the current request info from the MC to the BPMP-FW where 669 + * LA and PTSA registers are 
accessed and the final EMC freq is set 670 + * based on client_id, type, latency and bandwidth. 671 + * icc_set_bw() makes set_bw calls for both MC and EMC providers in 672 + * sequence. Both the calls are protected by 'mutex_lock(&icc_lock)'. 673 + * So, the data passed won't be updated by concurrent set calls from 674 + * other clients. 675 + */ 676 + static int tegra234_mc_icc_set(struct icc_node *src, struct icc_node *dst) 677 + { 678 + struct tegra_mc *mc = icc_provider_to_tegra_mc(dst->provider); 679 + struct mrq_bwmgr_int_request bwmgr_req = { 0 }; 680 + struct mrq_bwmgr_int_response bwmgr_resp = { 0 }; 681 + const struct tegra_mc_client *pclient = src->data; 682 + struct tegra_bpmp_message msg; 683 + int ret; 684 + 685 + /* 686 + * Same Src and Dst node will happen during boot from icc_node_add(). 687 + * This can be used to pre-initialize and set bandwidth for all clients 688 + * before their drivers are loaded. We are skipping this case as for us, 689 + * the pre-initialization already happened in Bootloader(MB2) and BPMP-FW. 
690 + */ 691 + if (src->id == dst->id) 692 + return 0; 693 + 694 + if (!mc->bwmgr_mrq_supported) 695 + return -EINVAL; 696 + 697 + if (!mc->bpmp) { 698 + dev_err(mc->dev, "BPMP reference NULL\n"); 699 + return -ENOENT; 700 + } 701 + 702 + if (pclient->type == TEGRA_ICC_NISO) 703 + bwmgr_req.bwmgr_calc_set_req.niso_bw = src->avg_bw; 704 + else 705 + bwmgr_req.bwmgr_calc_set_req.iso_bw = src->avg_bw; 706 + 707 + bwmgr_req.bwmgr_calc_set_req.client_id = pclient->bpmp_id; 708 + 709 + bwmgr_req.cmd = CMD_BWMGR_INT_CALC_AND_SET; 710 + bwmgr_req.bwmgr_calc_set_req.mc_floor = src->peak_bw; 711 + bwmgr_req.bwmgr_calc_set_req.floor_unit = BWMGR_INT_UNIT_KBPS; 712 + 713 + memset(&msg, 0, sizeof(msg)); 714 + msg.mrq = MRQ_BWMGR_INT; 715 + msg.tx.data = &bwmgr_req; 716 + msg.tx.size = sizeof(bwmgr_req); 717 + msg.rx.data = &bwmgr_resp; 718 + msg.rx.size = sizeof(bwmgr_resp); 719 + 720 + ret = tegra_bpmp_transfer(mc->bpmp, &msg); 721 + if (ret < 0) { 722 + dev_err(mc->dev, "BPMP transfer failed: %d\n", ret); 723 + goto error; 724 + } 725 + if (msg.rx.ret < 0) { 726 + pr_err("failed to set bandwidth for %u: %d\n", 727 + bwmgr_req.bwmgr_calc_set_req.client_id, msg.rx.ret); 728 + ret = -EINVAL; 729 + } 730 + 731 + error: 732 + return ret; 733 + } 734 + 735 + static int tegra234_mc_icc_aggregate(struct icc_node *node, u32 tag, u32 avg_bw, 736 + u32 peak_bw, u32 *agg_avg, u32 *agg_peak) 737 + { 738 + struct icc_provider *p = node->provider; 739 + struct tegra_mc *mc = icc_provider_to_tegra_mc(p); 740 + 741 + if (!mc->bwmgr_mrq_supported) 742 + return -EINVAL; 743 + 744 + if (node->id == TEGRA_ICC_MC_CPU_CLUSTER0 || 745 + node->id == TEGRA_ICC_MC_CPU_CLUSTER1 || 746 + node->id == TEGRA_ICC_MC_CPU_CLUSTER2) { 747 + if (mc) 748 + peak_bw = peak_bw * mc->num_channels; 749 + } 750 + 751 + *agg_avg += avg_bw; 752 + *agg_peak = max(*agg_peak, peak_bw); 753 + 754 + return 0; 755 + } 756 + 757 + static struct icc_node* 758 + tegra234_mc_of_icc_xlate(struct of_phandle_args *spec, void *data) 
759 + { 760 + struct tegra_mc *mc = icc_provider_to_tegra_mc(data); 761 + unsigned int cl_id = spec->args[0]; 762 + struct icc_node *node; 763 + 764 + list_for_each_entry(node, &mc->provider.nodes, node_list) { 765 + if (node->id != cl_id) 766 + continue; 767 + 768 + return node; 769 + } 770 + 771 + /* 772 + * If a client driver calls devm_of_icc_get() before the MC driver 773 + * is probed, then return EPROBE_DEFER to the client driver. 774 + */ 775 + return ERR_PTR(-EPROBE_DEFER); 776 + } 777 + 778 + static int tegra234_mc_icc_get_init_bw(struct icc_node *node, u32 *avg, u32 *peak) 779 + { 780 + *avg = 0; 781 + *peak = 0; 782 + 783 + return 0; 784 + } 785 + 786 + static const struct tegra_mc_icc_ops tegra234_mc_icc_ops = { 787 + .xlate = tegra234_mc_of_icc_xlate, 788 + .aggregate = tegra234_mc_icc_aggregate, 789 + .get_bw = tegra234_mc_icc_get_init_bw, 790 + .set = tegra234_mc_icc_set, 469 791 }; 470 792 471 793 const struct tegra_mc_soc tegra234_mc_soc = { ··· 937 345 MC_INT_SECURITY_VIOLATION | MC_INT_DECERR_EMEM, 938 346 .has_addr_hi_reg = true, 939 347 .ops = &tegra186_mc_ops, 348 + .icc_ops = &tegra234_mc_icc_ops, 940 349 .ch_intmask = 0x0000ff00, 941 350 .global_intstatus_channel_shift = 8, 942 351 /*
+5 -4
drivers/misc/sram.c
··· 235 235 goto err_chunks; 236 236 } 237 237 if (!label) 238 - label = child->name; 239 - 240 - block->label = devm_kstrdup(sram->dev, 241 - label, GFP_KERNEL); 238 + block->label = devm_kasprintf(sram->dev, GFP_KERNEL, 239 + "%s", dev_name(sram->dev)); 240 + else 241 + block->label = devm_kstrdup(sram->dev, 242 + label, GFP_KERNEL); 242 243 if (!block->label) { 243 244 ret = -ENOMEM; 244 245 goto err_chunks;
+1 -3
drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
··· 5025 5025 return err; 5026 5026 } 5027 5027 5028 - static int dpaa2_eth_remove(struct fsl_mc_device *ls_dev) 5028 + static void dpaa2_eth_remove(struct fsl_mc_device *ls_dev) 5029 5029 { 5030 5030 struct device *dev; 5031 5031 struct net_device *net_dev; ··· 5073 5073 dev_dbg(net_dev->dev.parent, "Removed interface %s\n", net_dev->name); 5074 5074 5075 5075 free_netdev(net_dev); 5076 - 5077 - return 0; 5078 5076 } 5079 5077 5080 5078 static const struct fsl_mc_device_id dpaa2_eth_match_id_table[] = {
+1 -3
drivers/net/ethernet/freescale/dpaa2/dpaa2-ptp.c
··· 219 219 return err; 220 220 } 221 221 222 - static int dpaa2_ptp_remove(struct fsl_mc_device *mc_dev) 222 + static void dpaa2_ptp_remove(struct fsl_mc_device *mc_dev) 223 223 { 224 224 struct device *dev = &mc_dev->dev; 225 225 struct ptp_qoriq *ptp_qoriq; ··· 232 232 fsl_mc_free_irqs(mc_dev); 233 233 dprtc_close(mc_dev->mc_io, 0, mc_dev->mc_handle); 234 234 fsl_mc_portal_free(mc_dev->mc_io); 235 - 236 - return 0; 237 235 } 238 236 239 237 static const struct fsl_mc_device_id dpaa2_ptp_match_id_table[] = {
+1 -3
drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c
··· 3221 3221 dev_warn(dev, "dpsw_close err %d\n", err); 3222 3222 } 3223 3223 3224 - static int dpaa2_switch_remove(struct fsl_mc_device *sw_dev) 3224 + static void dpaa2_switch_remove(struct fsl_mc_device *sw_dev) 3225 3225 { 3226 3226 struct ethsw_port_priv *port_priv; 3227 3227 struct ethsw_core *ethsw; ··· 3252 3252 kfree(ethsw); 3253 3253 3254 3254 dev_set_drvdata(dev, NULL); 3255 - 3256 - return 0; 3257 3255 } 3258 3256 3259 3257 static int dpaa2_switch_probe_port(struct ethsw_core *ethsw,
+36 -8
drivers/pci/controller/dwc/pcie-tegra194.c
··· 14 14 #include <linux/delay.h> 15 15 #include <linux/gpio.h> 16 16 #include <linux/gpio/consumer.h> 17 + #include <linux/interconnect.h> 17 18 #include <linux/interrupt.h> 18 19 #include <linux/iopoll.h> 19 20 #include <linux/kernel.h> ··· 224 223 #define EP_STATE_ENABLED 1 225 224 226 225 static const unsigned int pcie_gen_freq[] = { 226 + GEN1_CORE_CLK_FREQ, /* PCI_EXP_LNKSTA_CLS == 0; undefined */ 227 227 GEN1_CORE_CLK_FREQ, 228 228 GEN2_CORE_CLK_FREQ, 229 229 GEN3_CORE_CLK_FREQ, ··· 289 287 unsigned int pex_rst_irq; 290 288 int ep_state; 291 289 long link_status; 290 + struct icc_path *icc_path; 292 291 }; 293 292 294 293 static inline struct tegra_pcie_dw *to_tegra_pcie(struct dw_pcie *pci) ··· 311 308 struct tegra_pcie_soc { 312 309 enum dw_pcie_device_mode mode; 313 310 }; 311 + 312 + static void tegra_pcie_icc_set(struct tegra_pcie_dw *pcie) 313 + { 314 + struct dw_pcie *pci = &pcie->pci; 315 + u32 val, speed, width; 316 + 317 + val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA); 318 + 319 + speed = FIELD_GET(PCI_EXP_LNKSTA_CLS, val); 320 + width = FIELD_GET(PCI_EXP_LNKSTA_NLW, val); 321 + 322 + val = width * (PCIE_SPEED2MBS_ENC(pcie_link_speed[speed]) / BITS_PER_BYTE); 323 + 324 + if (icc_set_bw(pcie->icc_path, MBps_to_icc(val), 0)) 325 + dev_err(pcie->dev, "can't set bw[%u]\n", val); 326 + 327 + if (speed >= ARRAY_SIZE(pcie_gen_freq)) 328 + speed = 0; 329 + 330 + clk_set_rate(pcie->core_clk, pcie_gen_freq[speed]); 331 + } 314 332 315 333 static void apply_bad_link_workaround(struct dw_pcie_rp *pp) 316 334 { ··· 476 452 struct tegra_pcie_dw *pcie = arg; 477 453 struct dw_pcie_ep *ep = &pcie->pci.ep; 478 454 struct dw_pcie *pci = &pcie->pci; 479 - u32 val, speed; 455 + u32 val; 480 456 481 457 if (test_and_clear_bit(0, &pcie->link_status)) 482 458 dw_pcie_ep_linkup(ep); 483 459 484 - speed = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA) & 485 - PCI_EXP_LNKSTA_CLS; 486 - clk_set_rate(pcie->core_clk, pcie_gen_freq[speed - 
1]); 460 + tegra_pcie_icc_set(pcie); 487 461 488 462 if (pcie->of_data->has_ltr_req_fix) 489 463 return IRQ_HANDLED; ··· 967 945 968 946 static int tegra_pcie_dw_start_link(struct dw_pcie *pci) 969 947 { 970 - u32 val, offset, speed, tmp; 971 948 struct tegra_pcie_dw *pcie = to_tegra_pcie(pci); 972 949 struct dw_pcie_rp *pp = &pci->pp; 950 + u32 val, offset, tmp; 973 951 bool retry = true; 974 952 975 953 if (pcie->of_data->mode == DW_PCIE_EP_TYPE) { ··· 1040 1018 goto retry_link; 1041 1019 } 1042 1020 1043 - speed = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA) & 1044 - PCI_EXP_LNKSTA_CLS; 1045 - clk_set_rate(pcie->core_clk, pcie_gen_freq[speed - 1]); 1021 + tegra_pcie_icc_set(pcie); 1046 1022 1047 1023 tegra_pcie_enable_interrupts(pp); 1048 1024 ··· 2243 2223 return PTR_ERR(pcie->bpmp); 2244 2224 2245 2225 platform_set_drvdata(pdev, pcie); 2226 + 2227 + pcie->icc_path = devm_of_icc_get(&pdev->dev, "write"); 2228 + ret = PTR_ERR_OR_ZERO(pcie->icc_path); 2229 + if (ret) { 2230 + tegra_bpmp_put(pcie->bpmp); 2231 + dev_err_probe(&pdev->dev, ret, "failed to get write interconnect\n"); 2232 + return ret; 2233 + } 2246 2234 2247 2235 switch (pcie->of_data->mode) { 2248 2236 case DW_PCIE_RC_TYPE:
+16
drivers/powercap/arm_scmi_powercap.c
··· 70 70 return 0; 71 71 } 72 72 73 + static int scmi_powercap_zone_enable_set(struct powercap_zone *pz, bool mode) 74 + { 75 + struct scmi_powercap_zone *spz = to_scmi_powercap_zone(pz); 76 + 77 + return powercap_ops->cap_enable_set(spz->ph, spz->info->id, mode); 78 + } 79 + 80 + static int scmi_powercap_zone_enable_get(struct powercap_zone *pz, bool *mode) 81 + { 82 + struct scmi_powercap_zone *spz = to_scmi_powercap_zone(pz); 83 + 84 + return powercap_ops->cap_enable_get(spz->ph, spz->info->id, mode); 85 + } 86 + 73 87 static const struct powercap_zone_ops zone_ops = { 74 88 .get_max_power_range_uw = scmi_powercap_get_max_power_range_uw, 75 89 .get_power_uw = scmi_powercap_get_power_uw, 76 90 .release = scmi_powercap_zone_release, 91 + .set_enable = scmi_powercap_zone_enable_set, 92 + .get_enable = scmi_powercap_zone_enable_get, 77 93 }; 78 94 79 95 static void scmi_powercap_normalize_cap(const struct scmi_powercap_zone *spz,
-15
drivers/remoteproc/pru_rproc.c
··· 82 82 }; 83 83 84 84 /** 85 - * enum pru_type - PRU core type identifier 86 - * 87 - * @PRU_TYPE_PRU: Programmable Real-time Unit 88 - * @PRU_TYPE_RTU: Auxiliary Programmable Real-Time Unit 89 - * @PRU_TYPE_TX_PRU: Transmit Programmable Real-Time Unit 90 - * @PRU_TYPE_MAX: just keep this one at the end 91 - */ 92 - enum pru_type { 93 - PRU_TYPE_PRU = 0, 94 - PRU_TYPE_RTU, 95 - PRU_TYPE_TX_PRU, 96 - PRU_TYPE_MAX, 97 - }; 98 - 99 - /** 100 85 * struct pru_private_data - device data for a PRU core 101 86 * @type: type of the PRU core (PRU, RTU, Tx_PRU) 102 87 * @is_k3: flag used to identify the need for special load handling
+2 -4
drivers/reset/Kconfig
··· 150 150 help 151 151 This enables the reset controller driver for Nuvoton MA35D1 SoC. 152 152 153 - config RESET_OXNAS 154 - bool 155 - 156 153 config RESET_PISTACHIO 157 154 bool "Pistachio Reset Driver" 158 155 depends on MIPS || COMPILE_TEST ··· 158 161 159 162 config RESET_POLARFIRE_SOC 160 163 bool "Microchip PolarFire SoC (MPFS) Reset Driver" 161 - depends on AUXILIARY_BUS && MCHP_CLK_MPFS 164 + depends on MCHP_CLK_MPFS 165 + select AUXILIARY_BUS 162 166 default MCHP_CLK_MPFS 163 167 help 164 168 This driver supports peripheral reset for the Microchip PolarFire SoC
-1
drivers/reset/Makefile
··· 22 22 obj-$(CONFIG_RESET_MESON_AUDIO_ARB) += reset-meson-audio-arb.o 23 23 obj-$(CONFIG_RESET_NPCM) += reset-npcm.o 24 24 obj-$(CONFIG_RESET_NUVOTON_MA35D1) += reset-ma35d1.o 25 - obj-$(CONFIG_RESET_OXNAS) += reset-oxnas.o 26 25 obj-$(CONFIG_RESET_PISTACHIO) += reset-pistachio.o 27 26 obj-$(CONFIG_RESET_POLARFIRE_SOC) += reset-mpfs.o 28 27 obj-$(CONFIG_RESET_QCOM_AOSS) += reset-qcom-aoss.o
+1 -3
drivers/reset/reset-ath79.c
··· 86 86 static int ath79_reset_probe(struct platform_device *pdev) 87 87 { 88 88 struct ath79_reset *ath79_reset; 89 - struct resource *res; 90 89 int err; 91 90 92 91 ath79_reset = devm_kzalloc(&pdev->dev, ··· 95 96 96 97 platform_set_drvdata(pdev, ath79_reset); 97 98 98 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 99 - ath79_reset->base = devm_ioremap_resource(&pdev->dev, res); 99 + ath79_reset->base = devm_platform_ioremap_resource(pdev, 0); 100 100 if (IS_ERR(ath79_reset->base)) 101 101 return PTR_ERR(ath79_reset->base); 102 102
+1 -3
drivers/reset/reset-axs10x.c
··· 44 44 static int axs10x_reset_probe(struct platform_device *pdev) 45 45 { 46 46 struct axs10x_rst *rst; 47 - struct resource *mem; 48 47 49 48 rst = devm_kzalloc(&pdev->dev, sizeof(*rst), GFP_KERNEL); 50 49 if (!rst) 51 50 return -ENOMEM; 52 51 53 - mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); 54 - rst->regs_rst = devm_ioremap_resource(&pdev->dev, mem); 52 + rst->regs_rst = devm_platform_ioremap_resource(pdev, 0); 55 53 if (IS_ERR(rst->regs_rst)) 56 54 return PTR_ERR(rst->regs_rst); 57 55
+1 -3
drivers/reset/reset-brcmstb-rescal.c
··· 66 66 static int brcm_rescal_reset_probe(struct platform_device *pdev) 67 67 { 68 68 struct brcm_rescal_reset *data; 69 - struct resource *res; 70 69 71 70 data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL); 72 71 if (!data) 73 72 return -ENOMEM; 74 73 75 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 76 - data->base = devm_ioremap_resource(&pdev->dev, res); 74 + data->base = devm_platform_ioremap_resource(pdev, 0); 77 75 if (IS_ERR(data->base)) 78 76 return PTR_ERR(data->base); 79 77
+2 -5
drivers/reset/reset-hsdk.c
··· 92 92 static int hsdk_reset_probe(struct platform_device *pdev) 93 93 { 94 94 struct hsdk_rst *rst; 95 - struct resource *mem; 96 95 97 96 rst = devm_kzalloc(&pdev->dev, sizeof(*rst), GFP_KERNEL); 98 97 if (!rst) 99 98 return -ENOMEM; 100 99 101 - mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); 102 - rst->regs_ctl = devm_ioremap_resource(&pdev->dev, mem); 100 + rst->regs_ctl = devm_platform_ioremap_resource(pdev, 0); 103 101 if (IS_ERR(rst->regs_ctl)) 104 102 return PTR_ERR(rst->regs_ctl); 105 103 106 - mem = platform_get_resource(pdev, IORESOURCE_MEM, 1); 107 - rst->regs_rst = devm_ioremap_resource(&pdev->dev, mem); 104 + rst->regs_rst = devm_platform_ioremap_resource(pdev, 1); 108 105 if (IS_ERR(rst->regs_rst)) 109 106 return PTR_ERR(rst->regs_rst); 110 107
+1 -3
drivers/reset/reset-lpc18xx.c
··· 139 139 static int lpc18xx_rgu_probe(struct platform_device *pdev) 140 140 { 141 141 struct lpc18xx_rgu_data *rc; 142 - struct resource *res; 143 142 u32 fcclk, firc; 144 143 int ret; 145 144 ··· 146 147 if (!rc) 147 148 return -ENOMEM; 148 149 149 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 150 - rc->base = devm_ioremap_resource(&pdev->dev, res); 150 + rc->base = devm_platform_ioremap_resource(pdev, 0); 151 151 if (IS_ERR(rc->base)) 152 152 return PTR_ERR(rc->base); 153 153
+2 -5
drivers/reset/reset-meson-audio-arb.c
··· 151 151 platform_set_drvdata(pdev, arb); 152 152 153 153 arb->clk = devm_clk_get(dev, NULL); 154 - if (IS_ERR(arb->clk)) { 155 - if (PTR_ERR(arb->clk) != -EPROBE_DEFER) 156 - dev_err(dev, "failed to get clock\n"); 157 - return PTR_ERR(arb->clk); 158 - } 154 + if (IS_ERR(arb->clk)) 155 + return dev_err_probe(dev, PTR_ERR(arb->clk), "failed to get clock\n"); 159 156 160 157 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 161 158 arb->regs = devm_ioremap_resource(dev, res);
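The hunk above collapses the open-coded "log unless the error is -EPROBE_DEFER, then return" idiom into a single `dev_err_probe()` call. A minimal userspace sketch of that contract, with simplified stand-in names (the real helper also records the deferral reason for debugfs, which is omitted here):

```c
/* Userspace model of the dev_err_probe() contract used in the hunk above:
 * return the error code unchanged, and only print the message when the
 * error is not -EPROBE_DEFER. All names here are simplified stand-ins,
 * not the kernel API. */
#include <stdio.h>

#define EPROBE_DEFER 517 /* kernel-internal errno value */

struct device { const char *name; };

static int dev_err_probe_model(struct device *dev, int err, const char *msg)
{
	if (err != -EPROBE_DEFER)
		fprintf(stderr, "%s: error %d: %s", dev->name, err, msg);
	/* on -EPROBE_DEFER the real helper records the reason instead */
	return err;
}
```

The win in the diff is that the three-branch error path becomes one line while keeping the "stay quiet on probe deferral" behavior.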
+1 -3
drivers/reset/reset-meson.c
··· 116 116 static int meson_reset_probe(struct platform_device *pdev) 117 117 { 118 118 struct meson_reset *data; 119 - struct resource *res; 120 119 121 120 data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL); 122 121 if (!data) 123 122 return -ENOMEM; 124 123 125 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 126 - data->reg_base = devm_ioremap_resource(&pdev->dev, res); 124 + data->reg_base = devm_platform_ioremap_resource(pdev, 0); 127 125 if (IS_ERR(data->reg_base)) 128 126 return PTR_ERR(data->reg_base); 129 127
-114
drivers/reset/reset-oxnas.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-only 2 - /* 3 - * Oxford Semiconductor Reset Controller driver 4 - * 5 - * Copyright (C) 2016 Neil Armstrong <narmstrong@baylibre.com> 6 - * Copyright (C) 2014 Ma Haijun <mahaijuns@gmail.com> 7 - * Copyright (C) 2009 Oxford Semiconductor Ltd 8 - */ 9 - #include <linux/err.h> 10 - #include <linux/init.h> 11 - #include <linux/of.h> 12 - #include <linux/platform_device.h> 13 - #include <linux/reset-controller.h> 14 - #include <linux/slab.h> 15 - #include <linux/delay.h> 16 - #include <linux/types.h> 17 - #include <linux/regmap.h> 18 - #include <linux/mfd/syscon.h> 19 - 20 - /* Regmap offsets */ 21 - #define RST_SET_REGOFFSET 0x34 22 - #define RST_CLR_REGOFFSET 0x38 23 - 24 - struct oxnas_reset { 25 - struct regmap *regmap; 26 - struct reset_controller_dev rcdev; 27 - }; 28 - 29 - static int oxnas_reset_reset(struct reset_controller_dev *rcdev, 30 - unsigned long id) 31 - { 32 - struct oxnas_reset *data = 33 - container_of(rcdev, struct oxnas_reset, rcdev); 34 - 35 - regmap_write(data->regmap, RST_SET_REGOFFSET, BIT(id)); 36 - msleep(50); 37 - regmap_write(data->regmap, RST_CLR_REGOFFSET, BIT(id)); 38 - 39 - return 0; 40 - } 41 - 42 - static int oxnas_reset_assert(struct reset_controller_dev *rcdev, 43 - unsigned long id) 44 - { 45 - struct oxnas_reset *data = 46 - container_of(rcdev, struct oxnas_reset, rcdev); 47 - 48 - regmap_write(data->regmap, RST_SET_REGOFFSET, BIT(id)); 49 - 50 - return 0; 51 - } 52 - 53 - static int oxnas_reset_deassert(struct reset_controller_dev *rcdev, 54 - unsigned long id) 55 - { 56 - struct oxnas_reset *data = 57 - container_of(rcdev, struct oxnas_reset, rcdev); 58 - 59 - regmap_write(data->regmap, RST_CLR_REGOFFSET, BIT(id)); 60 - 61 - return 0; 62 - } 63 - 64 - static const struct reset_control_ops oxnas_reset_ops = { 65 - .reset = oxnas_reset_reset, 66 - .assert = oxnas_reset_assert, 67 - .deassert = oxnas_reset_deassert, 68 - }; 69 - 70 - static const struct of_device_id oxnas_reset_dt_ids[] 
= { 71 - { .compatible = "oxsemi,ox810se-reset", }, 72 - { .compatible = "oxsemi,ox820-reset", }, 73 - { /* sentinel */ }, 74 - }; 75 - 76 - static int oxnas_reset_probe(struct platform_device *pdev) 77 - { 78 - struct oxnas_reset *data; 79 - struct device *parent; 80 - 81 - parent = pdev->dev.parent; 82 - if (!parent) { 83 - dev_err(&pdev->dev, "no parent\n"); 84 - return -ENODEV; 85 - } 86 - 87 - data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL); 88 - if (!data) 89 - return -ENOMEM; 90 - 91 - data->regmap = syscon_node_to_regmap(parent->of_node); 92 - if (IS_ERR(data->regmap)) { 93 - dev_err(&pdev->dev, "failed to get parent regmap\n"); 94 - return PTR_ERR(data->regmap); 95 - } 96 - 97 - platform_set_drvdata(pdev, data); 98 - 99 - data->rcdev.owner = THIS_MODULE; 100 - data->rcdev.nr_resets = 32; 101 - data->rcdev.ops = &oxnas_reset_ops; 102 - data->rcdev.of_node = pdev->dev.of_node; 103 - 104 - return devm_reset_controller_register(&pdev->dev, &data->rcdev); 105 - } 106 - 107 - static struct platform_driver oxnas_reset_driver = { 108 - .probe = oxnas_reset_probe, 109 - .driver = { 110 - .name = "oxnas-reset", 111 - .of_match_table = oxnas_reset_dt_ids, 112 - }, 113 - }; 114 - builtin_platform_driver(oxnas_reset_driver);
+2 -1
drivers/reset/starfive/Kconfig
··· 13 13 14 14 config RESET_STARFIVE_JH7110 15 15 bool "StarFive JH7110 Reset Driver" 16 - depends on AUXILIARY_BUS && CLK_STARFIVE_JH7110_SYS 16 + depends on CLK_STARFIVE_JH7110_SYS 17 + select AUXILIARY_BUS 17 18 select RESET_STARFIVE_JH71X0 18 19 default ARCH_STARFIVE 19 20 help
-4
drivers/reset/sti/Kconfig
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 if ARCH_STI 3 3 4 - config STI_RESET_SYSCFG 5 - bool 6 - 7 4 config STIH407_RESET 8 5 bool 9 - select STI_RESET_SYSCFG 10 6 11 7 endif
+1 -3
drivers/reset/sti/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 - obj-$(CONFIG_STI_RESET_SYSCFG) += reset-syscfg.o 3 - 4 - obj-$(CONFIG_STIH407_RESET) += reset-stih407.o 2 + obj-$(CONFIG_STIH407_RESET) += reset-stih407.o reset-syscfg.o
+4 -14
drivers/reset/sti/reset-syscfg.c
··· 64 64 return err; 65 65 66 66 if (ch->ack) { 67 - unsigned long timeout = jiffies + msecs_to_jiffies(1000); 68 67 u32 ack_val; 69 68 70 - while (true) { 71 - err = regmap_field_read(ch->ack, &ack_val); 72 - if (err) 73 - return err; 74 - 75 - if (ack_val == ctrl_val) 76 - break; 77 - 78 - if (time_after(jiffies, timeout)) 79 - return -ETIME; 80 - 81 - cpu_relax(); 82 - } 69 + err = regmap_field_read_poll_timeout(ch->ack, ack_val, (ack_val == ctrl_val), 70 + 100, USEC_PER_SEC); 71 + if (err) 72 + return err; 83 73 } 84 74 85 75 return 0;
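The reset-syscfg conversion above replaces a hand-rolled jiffies loop with `regmap_field_read_poll_timeout()`. One behavioral detail worth noting: the helper reports `-ETIMEDOUT` where the old loop returned `-ETIME`. A userspace model of the poll-until-match-or-timeout semantics, using a fake clock instead of sleeping (illustrative names, not the regmap API):

```c
/* Userspace model of the read-poll-timeout pattern adopted above: re-read
 * a value until it matches the expected one, giving up after timeout_us.
 * A fake monotonic counter stands in for the clock and for usleep_range(). */
#include <stdint.h>
#include <errno.h>

static uint64_t fake_now_us; /* advances instead of sleeping */

static int poll_read_timeout(int (*read)(uint32_t *val), uint32_t want,
			     uint64_t sleep_us, uint64_t timeout_us)
{
	uint64_t deadline = fake_now_us + timeout_us;
	uint32_t val;
	int err;

	for (;;) {
		err = read(&val);
		if (err)
			return err; /* propagate read errors, like the kernel macro */
		if (val == want)
			return 0;
		if (fake_now_us >= deadline)
			return -ETIMEDOUT;
		fake_now_us += sleep_us; /* stands in for usleep_range() */
	}
}
```

With the arguments in the hunk (100 us poll interval, `USEC_PER_SEC` timeout), the converted code keeps the original one-second budget while dropping the manual `time_after()`/`cpu_relax()` bookkeeping.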
+1 -1
drivers/soc/amlogic/meson-secure-pwrc.c
··· 105 105 SEC_PD(ACODEC, 0), 106 106 SEC_PD(AUDIO, 0), 107 107 SEC_PD(OTP, 0), 108 - SEC_PD(DMA, 0), 108 + SEC_PD(DMA, GENPD_FLAG_ALWAYS_ON | GENPD_FLAG_IRQ_SAFE), 109 109 SEC_PD(SD_EMMC, 0), 110 110 SEC_PD(RAMA, 0), 111 111 /* SRAMB is used as ATF runtime memory, and should be always on */
+1 -7
drivers/soc/fsl/dpio/dpio-driver.c
··· 270 270 fsl_mc_free_irqs(dpio_dev); 271 271 } 272 272 273 - static int dpaa2_dpio_remove(struct fsl_mc_device *dpio_dev) 273 + static void dpaa2_dpio_remove(struct fsl_mc_device *dpio_dev) 274 274 { 275 275 struct device *dev; 276 276 struct dpio_priv *priv; ··· 297 297 298 298 dpio_close(dpio_dev->mc_io, 0, dpio_dev->mc_handle); 299 299 300 - fsl_mc_portal_free(dpio_dev->mc_io); 301 - 302 - return 0; 303 - 304 300 err_open: 305 301 fsl_mc_portal_free(dpio_dev->mc_io); 306 - 307 - return err; 308 302 } 309 303 310 304 static const struct fsl_mc_device_id dpaa2_dpio_match_id_table[] = {
+1
drivers/soc/fsl/qe/Kconfig
··· 62 62 63 63 config QE_USB 64 64 bool 65 + depends on QUICC_ENGINE 65 66 default y if USB_FSL_QE 66 67 help 67 68 QE USB Controller support
-1
drivers/soc/mediatek/mtk-mutex.c
··· 1051 1051 .probe = mtk_mutex_probe, 1052 1052 .driver = { 1053 1053 .name = "mediatek-mutex", 1054 - .owner = THIS_MODULE, 1055 1054 .of_match_table = mutex_driver_dt_match, 1056 1055 }, 1057 1056 };
+266 -26
drivers/soc/mediatek/mtk-pmic-wrap.c
··· 47 47 48 48 /* macro for device wrapper default value */ 49 49 #define PWRAP_DEW_READ_TEST_VAL 0x5aa5 50 + #define PWRAP_DEW_COMP_READ_TEST_VAL 0xa55a 50 51 #define PWRAP_DEW_WRITE_TEST_VAL 0xa55a 51 52 52 53 /* macro for manual command */ ··· 168 167 [PWRAP_DEW_CIPHER_MODE] = 0x01a0, 169 168 [PWRAP_DEW_CIPHER_SWRST] = 0x01a2, 170 169 [PWRAP_DEW_RDDMY_NO] = 0x01a4, 170 + }; 171 + 172 + static const u32 mt6331_regs[] = { 173 + [PWRAP_DEW_DIO_EN] = 0x018c, 174 + [PWRAP_DEW_READ_TEST] = 0x018e, 175 + [PWRAP_DEW_WRITE_TEST] = 0x0190, 176 + [PWRAP_DEW_CRC_SWRST] = 0x0192, 177 + [PWRAP_DEW_CRC_EN] = 0x0194, 178 + [PWRAP_DEW_CRC_VAL] = 0x0196, 179 + [PWRAP_DEW_MON_GRP_SEL] = 0x0198, 180 + [PWRAP_DEW_CIPHER_KEY_SEL] = 0x019a, 181 + [PWRAP_DEW_CIPHER_IV_SEL] = 0x019c, 182 + [PWRAP_DEW_CIPHER_EN] = 0x019e, 183 + [PWRAP_DEW_CIPHER_RDY] = 0x01a0, 184 + [PWRAP_DEW_CIPHER_MODE] = 0x01a2, 185 + [PWRAP_DEW_CIPHER_SWRST] = 0x01a4, 186 + [PWRAP_DEW_RDDMY_NO] = 0x01a6, 187 + }; 188 + 189 + static const u32 mt6332_regs[] = { 190 + [PWRAP_DEW_DIO_EN] = 0x80f6, 191 + [PWRAP_DEW_READ_TEST] = 0x80f8, 192 + [PWRAP_DEW_WRITE_TEST] = 0x80fa, 193 + [PWRAP_DEW_CRC_SWRST] = 0x80fc, 194 + [PWRAP_DEW_CRC_EN] = 0x80fe, 195 + [PWRAP_DEW_CRC_VAL] = 0x8100, 196 + [PWRAP_DEW_MON_GRP_SEL] = 0x8102, 197 + [PWRAP_DEW_CIPHER_KEY_SEL] = 0x8104, 198 + [PWRAP_DEW_CIPHER_IV_SEL] = 0x8106, 199 + [PWRAP_DEW_CIPHER_EN] = 0x8108, 200 + [PWRAP_DEW_CIPHER_RDY] = 0x810a, 201 + [PWRAP_DEW_CIPHER_MODE] = 0x810c, 202 + [PWRAP_DEW_CIPHER_SWRST] = 0x810e, 203 + [PWRAP_DEW_RDDMY_NO] = 0x8110, 171 204 }; 172 205 173 206 static const u32 mt6351_regs[] = { ··· 637 602 [PWRAP_WACS2_CMD] = 0xC20, 638 603 [PWRAP_WACS2_RDATA] = 0xC24, 639 604 [PWRAP_WACS2_VLDCLR] = 0xC28, 605 + }; 606 + 607 + static int mt6795_regs[] = { 608 + [PWRAP_MUX_SEL] = 0x0, 609 + [PWRAP_WRAP_EN] = 0x4, 610 + [PWRAP_DIO_EN] = 0x8, 611 + [PWRAP_SIDLY] = 0xc, 612 + [PWRAP_RDDMY] = 0x10, 613 + [PWRAP_SI_CK_CON] = 0x14, 614 + [PWRAP_CSHEXT_WRITE] = 0x18, 
615 + [PWRAP_CSHEXT_READ] = 0x1c, 616 + [PWRAP_CSLEXT_START] = 0x20, 617 + [PWRAP_CSLEXT_END] = 0x24, 618 + [PWRAP_STAUPD_PRD] = 0x28, 619 + [PWRAP_STAUPD_GRPEN] = 0x2c, 620 + [PWRAP_EINT_STA0_ADR] = 0x30, 621 + [PWRAP_EINT_STA1_ADR] = 0x34, 622 + [PWRAP_STAUPD_MAN_TRIG] = 0x40, 623 + [PWRAP_STAUPD_STA] = 0x44, 624 + [PWRAP_WRAP_STA] = 0x48, 625 + [PWRAP_HARB_INIT] = 0x4c, 626 + [PWRAP_HARB_HPRIO] = 0x50, 627 + [PWRAP_HIPRIO_ARB_EN] = 0x54, 628 + [PWRAP_HARB_STA0] = 0x58, 629 + [PWRAP_HARB_STA1] = 0x5c, 630 + [PWRAP_MAN_EN] = 0x60, 631 + [PWRAP_MAN_CMD] = 0x64, 632 + [PWRAP_MAN_RDATA] = 0x68, 633 + [PWRAP_MAN_VLDCLR] = 0x6c, 634 + [PWRAP_WACS0_EN] = 0x70, 635 + [PWRAP_INIT_DONE0] = 0x74, 636 + [PWRAP_WACS0_CMD] = 0x78, 637 + [PWRAP_WACS0_RDATA] = 0x7c, 638 + [PWRAP_WACS0_VLDCLR] = 0x80, 639 + [PWRAP_WACS1_EN] = 0x84, 640 + [PWRAP_INIT_DONE1] = 0x88, 641 + [PWRAP_WACS1_CMD] = 0x8c, 642 + [PWRAP_WACS1_RDATA] = 0x90, 643 + [PWRAP_WACS1_VLDCLR] = 0x94, 644 + [PWRAP_WACS2_EN] = 0x98, 645 + [PWRAP_INIT_DONE2] = 0x9c, 646 + [PWRAP_WACS2_CMD] = 0xa0, 647 + [PWRAP_WACS2_RDATA] = 0xa4, 648 + [PWRAP_WACS2_VLDCLR] = 0xa8, 649 + [PWRAP_INT_EN] = 0xac, 650 + [PWRAP_INT_FLG_RAW] = 0xb0, 651 + [PWRAP_INT_FLG] = 0xb4, 652 + [PWRAP_INT_CLR] = 0xb8, 653 + [PWRAP_SIG_ADR] = 0xbc, 654 + [PWRAP_SIG_MODE] = 0xc0, 655 + [PWRAP_SIG_VALUE] = 0xc4, 656 + [PWRAP_SIG_ERRVAL] = 0xc8, 657 + [PWRAP_CRC_EN] = 0xcc, 658 + [PWRAP_TIMER_EN] = 0xd0, 659 + [PWRAP_TIMER_STA] = 0xd4, 660 + [PWRAP_WDT_UNIT] = 0xd8, 661 + [PWRAP_WDT_SRC_EN] = 0xdc, 662 + [PWRAP_WDT_FLG] = 0xe0, 663 + [PWRAP_DEBUG_INT_SEL] = 0xe4, 664 + [PWRAP_DVFS_ADR0] = 0xe8, 665 + [PWRAP_DVFS_WDATA0] = 0xec, 666 + [PWRAP_DVFS_ADR1] = 0xf0, 667 + [PWRAP_DVFS_WDATA1] = 0xf4, 668 + [PWRAP_DVFS_ADR2] = 0xf8, 669 + [PWRAP_DVFS_WDATA2] = 0xfc, 670 + [PWRAP_DVFS_ADR3] = 0x100, 671 + [PWRAP_DVFS_WDATA3] = 0x104, 672 + [PWRAP_DVFS_ADR4] = 0x108, 673 + [PWRAP_DVFS_WDATA4] = 0x10c, 674 + [PWRAP_DVFS_ADR5] = 0x110, 675 + [PWRAP_DVFS_WDATA5] = 
0x114, 676 + [PWRAP_DVFS_ADR6] = 0x118, 677 + [PWRAP_DVFS_WDATA6] = 0x11c, 678 + [PWRAP_DVFS_ADR7] = 0x120, 679 + [PWRAP_DVFS_WDATA7] = 0x124, 680 + [PWRAP_SPMINF_STA] = 0x128, 681 + [PWRAP_CIPHER_KEY_SEL] = 0x12c, 682 + [PWRAP_CIPHER_IV_SEL] = 0x130, 683 + [PWRAP_CIPHER_EN] = 0x134, 684 + [PWRAP_CIPHER_RDY] = 0x138, 685 + [PWRAP_CIPHER_MODE] = 0x13c, 686 + [PWRAP_CIPHER_SWRST] = 0x140, 687 + [PWRAP_DCM_EN] = 0x144, 688 + [PWRAP_DCM_DBC_PRD] = 0x148, 689 + [PWRAP_EXT_CK] = 0x14c, 640 690 }; 641 691 642 692 static int mt6797_regs[] = { ··· 1301 1181 1302 1182 enum pmic_type { 1303 1183 PMIC_MT6323, 1184 + PMIC_MT6331, 1185 + PMIC_MT6332, 1304 1186 PMIC_MT6351, 1305 1187 PMIC_MT6357, 1306 1188 PMIC_MT6358, ··· 1315 1193 PWRAP_MT2701, 1316 1194 PWRAP_MT6765, 1317 1195 PWRAP_MT6779, 1196 + PWRAP_MT6795, 1318 1197 PWRAP_MT6797, 1319 1198 PWRAP_MT6873, 1320 1199 PWRAP_MT7622, ··· 1341 1218 int (*pwrap_write)(struct pmic_wrapper *wrp, u32 adr, u32 wdata); 1342 1219 }; 1343 1220 1221 + /** 1222 + * struct pwrap_slv_type - PMIC device wrapper definitions 1223 + * @dew_regs: Device Wrapper (DeW) register offsets 1224 + * @type: PMIC Type (model) 1225 + * @comp_dew_regs: Device Wrapper (DeW) register offsets for companion device 1226 + * @comp_type: Companion PMIC Type (model) 1227 + * @regops: Register R/W ops 1228 + * @caps: Capability flags for the target device 1229 + */ 1344 1230 struct pwrap_slv_type { 1345 1231 const u32 *dew_regs; 1346 1232 enum pmic_type type; 1233 + const u32 *comp_dew_regs; 1234 + enum pmic_type comp_type; 1347 1235 const struct pwrap_slv_regops *regops; 1348 - /* Flags indicating the capability for the target slave */ 1349 1236 u32 caps; 1350 1237 }; 1351 1238 ··· 1588 1455 return pwrap_write(context, adr, wdata); 1589 1456 } 1590 1457 1458 + static bool pwrap_pmic_read_test(struct pmic_wrapper *wrp, const u32 *dew_regs, 1459 + u16 read_test_val) 1460 + { 1461 + bool is_success; 1462 + u32 rdata; 1463 + 1464 + pwrap_read(wrp, 
dew_regs[PWRAP_DEW_READ_TEST], &rdata); 1465 + is_success = ((rdata & U16_MAX) == read_test_val); 1466 + 1467 + return is_success; 1468 + } 1469 + 1591 1470 static int pwrap_reset_spislave(struct pmic_wrapper *wrp) 1592 1471 { 1593 1472 bool tmp; ··· 1643 1498 */ 1644 1499 static int pwrap_init_sidly(struct pmic_wrapper *wrp) 1645 1500 { 1646 - u32 rdata; 1647 1501 u32 i; 1648 1502 u32 pass = 0; 1503 + bool read_ok; 1649 1504 signed char dly[16] = { 1650 1505 -1, 0, 1, 0, 2, -1, 1, 1, 3, -1, -1, -1, 3, -1, 2, 1 1651 1506 }; 1652 1507 1653 1508 for (i = 0; i < 4; i++) { 1654 1509 pwrap_writel(wrp, i, PWRAP_SIDLY); 1655 - pwrap_read(wrp, wrp->slave->dew_regs[PWRAP_DEW_READ_TEST], 1656 - &rdata); 1657 - if (rdata == PWRAP_DEW_READ_TEST_VAL) { 1510 + read_ok = pwrap_pmic_read_test(wrp, wrp->slave->dew_regs, 1511 + PWRAP_DEW_READ_TEST_VAL); 1512 + if (read_ok) { 1658 1513 dev_dbg(wrp->dev, "[Read Test] pass, SIDLY=%x\n", i); 1659 1514 pass |= 1 << i; 1660 1515 } ··· 1674 1529 static int pwrap_init_dual_io(struct pmic_wrapper *wrp) 1675 1530 { 1676 1531 int ret; 1677 - bool tmp; 1678 - u32 rdata; 1532 + bool read_ok, tmp; 1533 + bool comp_read_ok = true; 1679 1534 1680 1535 /* Enable dual IO mode */ 1681 1536 pwrap_write(wrp, wrp->slave->dew_regs[PWRAP_DEW_DIO_EN], 1); 1537 + if (wrp->slave->comp_dew_regs) 1538 + pwrap_write(wrp, wrp->slave->comp_dew_regs[PWRAP_DEW_DIO_EN], 1); 1682 1539 1683 1540 /* Check IDLE & INIT_DONE in advance */ 1684 1541 ret = readx_poll_timeout(pwrap_is_fsm_idle_and_sync_idle, wrp, tmp, tmp, ··· 1693 1546 pwrap_writel(wrp, 1, PWRAP_DIO_EN); 1694 1547 1695 1548 /* Read Test */ 1696 - pwrap_read(wrp, 1697 - wrp->slave->dew_regs[PWRAP_DEW_READ_TEST], &rdata); 1698 - if (rdata != PWRAP_DEW_READ_TEST_VAL) { 1699 - dev_err(wrp->dev, 1700 - "Read failed on DIO mode: 0x%04x!=0x%04x\n", 1701 - PWRAP_DEW_READ_TEST_VAL, rdata); 1549 + read_ok = pwrap_pmic_read_test(wrp, wrp->slave->dew_regs, PWRAP_DEW_READ_TEST_VAL); 1550 + if (wrp->slave->comp_dew_regs) 
1551 + comp_read_ok = pwrap_pmic_read_test(wrp, wrp->slave->comp_dew_regs, 1552 + PWRAP_DEW_COMP_READ_TEST_VAL); 1553 + if (!read_ok || !comp_read_ok) { 1554 + dev_err(wrp->dev, "Read failed on DIO mode. Main PMIC %s%s\n", 1555 + !read_ok ? "fail" : "success", 1556 + wrp->slave->comp_dew_regs && !comp_read_ok ? 1557 + ", Companion PMIC fail" : ""); 1702 1558 return -EFAULT; 1703 1559 } 1704 1560 ··· 1736 1586 static int pwrap_common_init_reg_clock(struct pmic_wrapper *wrp) 1737 1587 { 1738 1588 switch (wrp->master->type) { 1589 + case PWRAP_MT6795: 1590 + if (wrp->slave->type == PMIC_MT6331) { 1591 + const u32 *dew_regs = wrp->slave->dew_regs; 1592 + 1593 + pwrap_write(wrp, dew_regs[PWRAP_DEW_RDDMY_NO], 0x8); 1594 + 1595 + if (wrp->slave->comp_type == PMIC_MT6332) { 1596 + dew_regs = wrp->slave->comp_dew_regs; 1597 + pwrap_write(wrp, dew_regs[PWRAP_DEW_RDDMY_NO], 0x8); 1598 + } 1599 + } 1600 + pwrap_writel(wrp, 0x88, PWRAP_RDDMY); 1601 + pwrap_init_chip_select_ext(wrp, 15, 15, 15, 15); 1602 + break; 1739 1603 case PWRAP_MT8173: 1740 1604 pwrap_init_chip_select_ext(wrp, 0, 4, 2, 2); 1741 1605 break; ··· 1790 1626 return pwrap_readl(wrp, PWRAP_CIPHER_RDY) & 1; 1791 1627 } 1792 1628 1793 - static bool pwrap_is_pmic_cipher_ready(struct pmic_wrapper *wrp) 1629 + static bool __pwrap_is_pmic_cipher_ready(struct pmic_wrapper *wrp, const u32 *dew_regs) 1794 1630 { 1795 1631 u32 rdata; 1796 1632 int ret; 1797 1633 1798 - ret = pwrap_read(wrp, wrp->slave->dew_regs[PWRAP_DEW_CIPHER_RDY], 1799 - &rdata); 1634 + ret = pwrap_read(wrp, dew_regs[PWRAP_DEW_CIPHER_RDY], &rdata); 1800 1635 if (ret) 1801 1636 return false; 1802 1637 1803 1638 return rdata == 1; 1639 + } 1640 + 1641 + 1642 + static bool pwrap_is_pmic_cipher_ready(struct pmic_wrapper *wrp) 1643 + { 1644 + bool ret = __pwrap_is_pmic_cipher_ready(wrp, wrp->slave->dew_regs); 1645 + 1646 + if (!ret) 1647 + return ret; 1648 + 1649 + /* If there's any companion, wait for it to be ready too */ 1650 + if 
(wrp->slave->comp_dew_regs) 1651 + ret = __pwrap_is_pmic_cipher_ready(wrp, wrp->slave->comp_dew_regs); 1652 + 1653 + return ret; 1654 + } 1655 + 1656 + static void pwrap_config_cipher(struct pmic_wrapper *wrp, const u32 *dew_regs) 1657 + { 1658 + pwrap_write(wrp, dew_regs[PWRAP_DEW_CIPHER_SWRST], 0x1); 1659 + pwrap_write(wrp, dew_regs[PWRAP_DEW_CIPHER_SWRST], 0x0); 1660 + pwrap_write(wrp, dew_regs[PWRAP_DEW_CIPHER_KEY_SEL], 0x1); 1661 + pwrap_write(wrp, dew_regs[PWRAP_DEW_CIPHER_IV_SEL], 0x2); 1804 1662 } 1805 1663 1806 1664 static int pwrap_init_cipher(struct pmic_wrapper *wrp) ··· 1844 1658 case PWRAP_MT2701: 1845 1659 case PWRAP_MT6765: 1846 1660 case PWRAP_MT6779: 1661 + case PWRAP_MT6795: 1847 1662 case PWRAP_MT6797: 1848 1663 case PWRAP_MT8173: 1849 1664 case PWRAP_MT8186: ··· 1862 1675 } 1863 1676 1864 1677 /* Config cipher mode @PMIC */ 1865 - pwrap_write(wrp, wrp->slave->dew_regs[PWRAP_DEW_CIPHER_SWRST], 0x1); 1866 - pwrap_write(wrp, wrp->slave->dew_regs[PWRAP_DEW_CIPHER_SWRST], 0x0); 1867 - pwrap_write(wrp, wrp->slave->dew_regs[PWRAP_DEW_CIPHER_KEY_SEL], 0x1); 1868 - pwrap_write(wrp, wrp->slave->dew_regs[PWRAP_DEW_CIPHER_IV_SEL], 0x2); 1678 + pwrap_config_cipher(wrp, wrp->slave->dew_regs); 1679 + 1680 + /* If there is any companion PMIC, configure cipher mode there too */ 1681 + if (wrp->slave->comp_type > 0) 1682 + pwrap_config_cipher(wrp, wrp->slave->comp_dew_regs); 1869 1683 1870 1684 switch (wrp->slave->type) { 1871 1685 case PMIC_MT6397: ··· 1928 1740 1929 1741 static int pwrap_init_security(struct pmic_wrapper *wrp) 1930 1742 { 1743 + u32 crc_val; 1931 1744 int ret; 1932 1745 1933 1746 /* Enable encryption */ ··· 1937 1748 return ret; 1938 1749 1939 1750 /* Signature checking - using CRC */ 1940 - if (pwrap_write(wrp, 1941 - wrp->slave->dew_regs[PWRAP_DEW_CRC_EN], 0x1)) 1942 - return -EFAULT; 1751 + ret = pwrap_write(wrp, wrp->slave->dew_regs[PWRAP_DEW_CRC_EN], 0x1); 1752 + if (ret == 0 && wrp->slave->comp_dew_regs) 1753 + ret = pwrap_write(wrp, 
wrp->slave->comp_dew_regs[PWRAP_DEW_CRC_EN], 0x1); 1943 1754 1944 1755 pwrap_writel(wrp, 0x1, PWRAP_CRC_EN); 1945 1756 pwrap_writel(wrp, 0x0, PWRAP_SIG_MODE); 1946 - pwrap_writel(wrp, wrp->slave->dew_regs[PWRAP_DEW_CRC_VAL], 1947 - PWRAP_SIG_ADR); 1757 + 1758 + /* CRC value */ 1759 + crc_val = wrp->slave->dew_regs[PWRAP_DEW_CRC_VAL]; 1760 + if (wrp->slave->comp_dew_regs) 1761 + crc_val |= wrp->slave->comp_dew_regs[PWRAP_DEW_CRC_VAL] << 16; 1762 + 1763 + pwrap_writel(wrp, crc_val, PWRAP_SIG_ADR); 1764 + 1765 + /* PMIC Wrapper Arbiter priority */ 1948 1766 pwrap_writel(wrp, 1949 1767 wrp->master->arb_en_all, PWRAP_HIPRIO_ARB_EN); 1950 1768 ··· 2015 1819 return 0; 2016 1820 } 2017 1821 1822 + static int pwrap_mt6795_init_soc_specific(struct pmic_wrapper *wrp) 1823 + { 1824 + pwrap_writel(wrp, 0xf, PWRAP_STAUPD_GRPEN); 1825 + 1826 + if (wrp->slave->type == PMIC_MT6331) 1827 + pwrap_writel(wrp, 0x1b4, PWRAP_EINT_STA0_ADR); 1828 + 1829 + if (wrp->slave->comp_type == PMIC_MT6332) 1830 + pwrap_writel(wrp, 0x8112, PWRAP_EINT_STA1_ADR); 1831 + 1832 + return 0; 1833 + } 1834 + 2018 1835 static int pwrap_mt7622_init_soc_specific(struct pmic_wrapper *wrp) 2019 1836 { 2020 1837 pwrap_writel(wrp, 0, PWRAP_STAUPD_PRD); ··· 2063 1854 if (wrp->rstc_bridge) 2064 1855 reset_control_reset(wrp->rstc_bridge); 2065 1856 2066 - if (wrp->master->type == PWRAP_MT8173) { 1857 + switch (wrp->master->type) { 1858 + case PWRAP_MT6795: 1859 + fallthrough; 1860 + case PWRAP_MT8173: 2067 1861 /* Enable DCM */ 2068 1862 pwrap_writel(wrp, 3, PWRAP_DCM_EN); 2069 1863 pwrap_writel(wrp, 0, PWRAP_DCM_DBC_PRD); 1864 + break; 1865 + default: 1866 + break; 2070 1867 } 2071 1868 2072 1869 if (HAS_CAP(wrp->slave->caps, PWRAP_SLV_CAP_SPI)) { ··· 2197 1982 PWRAP_SLV_CAP_SECURITY, 2198 1983 }; 2199 1984 1985 + static const struct pwrap_slv_type pmic_mt6331 = { 1986 + .dew_regs = mt6331_regs, 1987 + .type = PMIC_MT6331, 1988 + .comp_dew_regs = mt6332_regs, 1989 + .comp_type = PMIC_MT6332, 1990 + .regops = 
&pwrap_regops16, 1991 + .caps = PWRAP_SLV_CAP_SPI | PWRAP_SLV_CAP_DUALIO | 1992 + PWRAP_SLV_CAP_SECURITY, 1993 + }; 1994 + 2200 1995 static const struct pwrap_slv_type pmic_mt6351 = { 2201 1996 .dew_regs = mt6351_regs, 2202 1997 .type = PMIC_MT6351, ··· 2252 2027 2253 2028 static const struct of_device_id of_slave_match_tbl[] = { 2254 2029 { .compatible = "mediatek,mt6323", .data = &pmic_mt6323 }, 2030 + { .compatible = "mediatek,mt6331", .data = &pmic_mt6331 }, 2255 2031 { .compatible = "mediatek,mt6351", .data = &pmic_mt6351 }, 2256 2032 { .compatible = "mediatek,mt6357", .data = &pmic_mt6357 }, 2257 2033 { .compatible = "mediatek,mt6358", .data = &pmic_mt6358 }, ··· 2303 2077 .caps = 0, 2304 2078 .init_reg_clock = pwrap_common_init_reg_clock, 2305 2079 .init_soc_specific = NULL, 2080 + }; 2081 + 2082 + static const struct pmic_wrapper_type pwrap_mt6795 = { 2083 + .regs = mt6795_regs, 2084 + .type = PWRAP_MT6795, 2085 + .arb_en_all = 0x3f, 2086 + .int_en_all = ~(u32)(BIT(31) | BIT(2) | BIT(1)), 2087 + .int1_en_all = 0, 2088 + .spi_w = PWRAP_MAN_CMD_SPI_WRITE, 2089 + .wdt_src = PWRAP_WDT_SRC_MASK_NO_STAUPD, 2090 + .caps = PWRAP_CAP_RESET | PWRAP_CAP_DCM, 2091 + .init_reg_clock = pwrap_common_init_reg_clock, 2092 + .init_soc_specific = pwrap_mt6795_init_soc_specific, 2306 2093 }; 2307 2094 2308 2095 static const struct pmic_wrapper_type pwrap_mt6797 = { ··· 2451 2212 { .compatible = "mediatek,mt2701-pwrap", .data = &pwrap_mt2701 }, 2452 2213 { .compatible = "mediatek,mt6765-pwrap", .data = &pwrap_mt6765 }, 2453 2214 { .compatible = "mediatek,mt6779-pwrap", .data = &pwrap_mt6779 }, 2215 + { .compatible = "mediatek,mt6795-pwrap", .data = &pwrap_mt6795 }, 2454 2216 { .compatible = "mediatek,mt6797-pwrap", .data = &pwrap_mt6797 }, 2455 2217 { .compatible = "mediatek,mt6873-pwrap", .data = &pwrap_mt6873 }, 2456 2218 { .compatible = "mediatek,mt7622-pwrap", .data = &pwrap_mt7622 },
+2 -2
drivers/soc/mediatek/mtk-svs.c
··· 2061 2061 svsb = &svsp->banks[idx]; 2062 2062 2063 2063 if (svsb->type == SVSB_HIGH) 2064 - svsb->opp_dev = svs_add_device_link(svsp, "mali"); 2064 + svsb->opp_dev = svs_add_device_link(svsp, "gpu"); 2065 2065 else if (svsb->type == SVSB_LOW) 2066 - svsb->opp_dev = svs_get_subsys_device(svsp, "mali"); 2066 + svsb->opp_dev = svs_get_subsys_device(svsp, "gpu"); 2067 2067 2068 2068 if (IS_ERR(svsb->opp_dev)) 2069 2069 return dev_err_probe(svsp->dev, PTR_ERR(svsb->opp_dev),
+11
drivers/soc/qcom/Kconfig
··· 135 135 136 136 Say y here if you intend to boot the modem remoteproc. 137 137 138 + config QCOM_RPM_MASTER_STATS 139 + tristate "Qualcomm RPM Master stats" 140 + depends on ARCH_QCOM || COMPILE_TEST 141 + help 142 + The RPM Master sleep stats driver provides detailed per-subsystem 143 + sleep/wake data, read from the RPM message RAM. It can be used to 144 + assess whether all the low-power modes available are entered as 145 + expected or to check which part of the SoC prevents it from sleeping. 146 + 147 + Say y here if you intend to debug or monitor platform sleep. 148 + 138 149 config QCOM_RPMH 139 150 tristate "Qualcomm RPM-Hardened (RPMH) Communication" 140 151 depends on ARCH_QCOM || COMPILE_TEST
+1
drivers/soc/qcom/Makefile
··· 14 14 qmi_helpers-y += qmi_encdec.o qmi_interface.o 15 15 obj-$(CONFIG_QCOM_RAMP_CTRL) += ramp_controller.o 16 16 obj-$(CONFIG_QCOM_RMTFS_MEM) += rmtfs_mem.o 17 + obj-$(CONFIG_QCOM_RPM_MASTER_STATS) += rpm_master_stats.o 17 18 obj-$(CONFIG_QCOM_RPMH) += qcom_rpmh.o 18 19 qcom_rpmh-y += rpmh-rsc.o 19 20 qcom_rpmh-y += rpmh.o
+1 -1
drivers/soc/qcom/icc-bwmon.c
··· 806 806 807 807 static const struct icc_bwmon_data msm8998_bwmon_data = { 808 808 .sample_ms = 4, 809 - .count_unit_kb = 64, 809 + .count_unit_kb = 1024, 810 810 .default_highbw_kbps = 4800 * 1024, /* 4.8 GBps */ 811 811 .default_medbw_kbps = 512 * 1024, /* 512 MBps */ 812 812 .default_lowbw_kbps = 0,
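The MSM8998 fix above changes the hardware counter granularity from 64 kB to 1024 kB per tick, so any throughput derived from a sample scales by a factor of 16. A hedged illustration of how a raw count maps to a rate, assuming each tick represents `count_unit_kb` kilobytes accumulated over a `sample_ms` window (this mirrors the constants in the hunk but is a sketch, not the driver's exact math):

```c
/* Illustrative conversion from a raw bwmon sample count to kB/s, under the
 * assumption stated in the lead-in. The 16x factor between the old (64)
 * and new (1024) count unit is what the fix corrects. */
#include <stdint.h>

static uint64_t bwmon_count_to_kbps(uint64_t count, uint32_t count_unit_kb,
				    uint32_t sample_ms)
{
	return count * count_unit_kb * 1000 / sample_ms;
}
```

With the MSM8998 values (`sample_ms = 4`, `count_unit_kb = 1024`), a single tick per window already corresponds to 256000 kB/s; the old unit undercounted by 16x.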
+42 -7
drivers/soc/qcom/mdt_loader.c
··· 210 210 const struct elf32_hdr *ehdr; 211 211 phys_addr_t min_addr = PHYS_ADDR_MAX; 212 212 phys_addr_t max_addr = 0; 213 + bool relocate = false; 213 214 size_t metadata_len; 214 215 void *metadata; 215 216 int ret; ··· 224 223 225 224 if (!mdt_phdr_valid(phdr)) 226 225 continue; 226 + 227 + if (phdr->p_flags & QCOM_MDT_RELOCATABLE) 228 + relocate = true; 227 229 228 230 if (phdr->p_paddr < min_addr) 229 231 min_addr = phdr->p_paddr; ··· 250 246 goto out; 251 247 } 252 248 253 - ret = qcom_scm_pas_mem_setup(pas_id, mem_phys, max_addr - min_addr); 254 - if (ret) { 255 - /* Unable to set up relocation */ 256 - dev_err(dev, "error %d setting up firmware %s\n", ret, fw_name); 257 - goto out; 249 + if (relocate) { 250 + ret = qcom_scm_pas_mem_setup(pas_id, mem_phys, max_addr - min_addr); 251 + if (ret) { 252 + /* Unable to set up relocation */ 253 + dev_err(dev, "error %d setting up firmware %s\n", ret, fw_name); 254 + goto out; 255 + } 258 256 } 259 257 260 258 out: 261 259 return ret; 262 260 } 263 261 EXPORT_SYMBOL_GPL(qcom_mdt_pas_init); 262 + 263 + static bool qcom_mdt_bins_are_split(const struct firmware *fw, const char *fw_name) 264 + { 265 + const struct elf32_phdr *phdrs; 266 + const struct elf32_hdr *ehdr; 267 + uint64_t seg_start, seg_end; 268 + int i; 269 + 270 + ehdr = (struct elf32_hdr *)fw->data; 271 + phdrs = (struct elf32_phdr *)(ehdr + 1); 272 + 273 + for (i = 0; i < ehdr->e_phnum; i++) { 274 + /* 275 + * The size of the MDT file is not padded to include any 276 + * zero-sized segments at the end. Ignore these, as they should 277 + * not affect the decision about image being split or not. 
278 + */ 279 + if (!phdrs[i].p_filesz) 280 + continue; 281 + 282 + seg_start = phdrs[i].p_offset; 283 + seg_end = phdrs[i].p_offset + phdrs[i].p_filesz; 284 + if (seg_start > fw->size || seg_end > fw->size) 285 + return true; 286 + } 287 + 288 + return false; 289 + } 264 290 265 291 static int __qcom_mdt_load(struct device *dev, const struct firmware *fw, 266 292 const char *fw_name, int pas_id, void *mem_region, ··· 304 270 phys_addr_t min_addr = PHYS_ADDR_MAX; 305 271 ssize_t offset; 306 272 bool relocate = false; 273 + bool is_split; 307 274 void *ptr; 308 275 int ret = 0; 309 276 int i; ··· 312 277 if (!fw || !mem_region || !mem_phys || !mem_size) 313 278 return -EINVAL; 314 279 280 + is_split = qcom_mdt_bins_are_split(fw, fw_name); 315 281 ehdr = (struct elf32_hdr *)fw->data; 316 282 phdrs = (struct elf32_phdr *)(ehdr + 1); 317 283 ··· 366 330 367 331 ptr = mem_region + offset; 368 332 369 - if (phdr->p_filesz && phdr->p_offset < fw->size && 370 - phdr->p_offset + phdr->p_filesz <= fw->size) { 333 + if (phdr->p_filesz && !is_split) { 371 334 /* Firmware is large enough to be non-split */ 372 335 if (phdr->p_offset + phdr->p_filesz > fw->size) { 373 336 dev_err(dev, "file %s segment %d would be truncated\n",
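The new `qcom_mdt_bins_are_split()` above decides whether firmware segments live in separate `.bNN` files by checking whether any non-empty program header's file range points past the end of the loaded `.mdt`. The core check can be modeled in userspace with simplified types (the real code walks ELF32 program headers):

```c
/* Userspace model of the split-image check added above: an image is
 * "split" when any non-empty segment's file range extends beyond the
 * firmware blob that was loaded, meaning its payload must come from a
 * separate per-segment file. */
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

struct seg { uint64_t offset; uint64_t filesz; };

static bool bins_are_split(const struct seg *segs, size_t nsegs,
			   uint64_t fw_size)
{
	for (size_t i = 0; i < nsegs; i++) {
		/* zero-sized trailing segments don't affect the decision */
		if (!segs[i].filesz)
			continue;
		if (segs[i].offset > fw_size ||
		    segs[i].offset + segs[i].filesz > fw_size)
			return true;
	}
	return false;
}
```

Computing this once up front lets `__qcom_mdt_load()` replace its per-segment `p_offset < fw->size` heuristics with a single `!is_split` test, which is the detection fix the commit subject describes.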
+10
drivers/soc/qcom/ocmem.c
··· 76 76 #define OCMEM_REG_GFX_MPU_START 0x00001004 77 77 #define OCMEM_REG_GFX_MPU_END 0x00001008 78 78 79 + #define OCMEM_HW_VERSION_MAJOR(val) FIELD_GET(GENMASK(31, 28), val) 80 + #define OCMEM_HW_VERSION_MINOR(val) FIELD_GET(GENMASK(27, 16), val) 81 + #define OCMEM_HW_VERSION_STEP(val) FIELD_GET(GENMASK(15, 0), val) 82 + 79 83 #define OCMEM_HW_PROFILE_NUM_PORTS(val) FIELD_PREP(0x0000000f, (val)) 80 84 #define OCMEM_HW_PROFILE_NUM_MACROS(val) FIELD_PREP(0x00003f00, (val)) 81 85 ··· 358 354 goto err_clk_disable; 359 355 } 360 356 } 357 + 358 + reg = ocmem_read(ocmem, OCMEM_REG_HW_VERSION); 359 + dev_dbg(dev, "OCMEM hardware version: %lu.%lu.%lu\n", 360 + OCMEM_HW_VERSION_MAJOR(reg), 361 + OCMEM_HW_VERSION_MINOR(reg), 362 + OCMEM_HW_VERSION_STEP(reg)); 361 363 362 364 reg = ocmem_read(ocmem, OCMEM_REG_HW_PROFILE); 363 365 ocmem->num_ports = OCMEM_HW_PROFILE_NUM_PORTS(reg);
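The new `OCMEM_HW_VERSION_*` macros above carve major/minor/step fields out of the version register with `FIELD_GET()`/`GENMASK()`: major in bits [31:28], minor in [27:16], step in [15:0]. A userspace equivalent of that decode (local `GENMASK32`/`field_get` helpers mirroring the kernel semantics):

```c
/* Userspace equivalent of the GENMASK()/FIELD_GET() decode added above. */
#include <stdint.h>

#define GENMASK32(h, l) (((~0u) >> (31 - (h))) & ((~0u) << (l)))

static uint32_t field_get(uint32_t mask, uint32_t val)
{
	/* shift the masked field down to bit 0, like the kernel's FIELD_GET() */
	return (val & mask) / (mask & -mask);
}

static uint32_t ver_major(uint32_t v) { return field_get(GENMASK32(31, 28), v); }
static uint32_t ver_minor(uint32_t v) { return field_get(GENMASK32(27, 16), v); }
static uint32_t ver_step(uint32_t v)  { return field_get(GENMASK32(15, 0), v); }
```

For example, a register value of 0x12345678 decodes as version 1.0x234.0x5678, which is what the new `dev_dbg()` line in the probe path would print.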
+6 -2
drivers/soc/qcom/pmic_glink.c
··· 338 338 return 0; 339 339 } 340 340 341 - /* Do not handle altmode for now on those platforms */ 342 341 static const unsigned long pmic_glink_sm8450_client_mask = BIT(PMIC_GLINK_CLIENT_BATT) | 342 + BIT(PMIC_GLINK_CLIENT_ALTMODE) | 343 + BIT(PMIC_GLINK_CLIENT_UCSI); 344 + 345 + /* Do not handle altmode for now on those platforms */ 346 + static const unsigned long pmic_glink_sm8550_client_mask = BIT(PMIC_GLINK_CLIENT_BATT) | 343 347 BIT(PMIC_GLINK_CLIENT_UCSI); 344 348 345 349 static const struct of_device_id pmic_glink_of_match[] = { 346 350 { .compatible = "qcom,sm8450-pmic-glink", .data = &pmic_glink_sm8450_client_mask }, 347 - { .compatible = "qcom,sm8550-pmic-glink", .data = &pmic_glink_sm8450_client_mask }, 351 + { .compatible = "qcom,sm8550-pmic-glink", .data = &pmic_glink_sm8550_client_mask }, 348 352 { .compatible = "qcom,pmic-glink" }, 349 353 {} 350 354 };
+4 -24
drivers/soc/qcom/qcom-geni-se.c
··· 281 281 282 282 geni_se_irq_clear(se); 283 283 284 - /* 285 - * The RX path for the UART is asynchronous and so needs more 286 - * complex logic for enabling / disabling its interrupts. 287 - * 288 - * Specific notes: 289 - * - The done and TX-related interrupts are managed manually. 290 - * - We don't RX from the main sequencer (we use the secondary) so 291 - * we don't need the RX-related interrupts enabled in the main 292 - * sequencer for UART. 293 - */ 284 + /* UART driver manages enabling / disabling interrupts internally */ 294 285 if (proto != GENI_SE_UART) { 286 + /* Non-UART use only primary sequencer so dont bother about S_IRQ */ 295 287 val_old = val = readl_relaxed(se->base + SE_GENI_M_IRQ_EN); 296 288 val |= M_CMD_DONE_EN | M_TX_FIFO_WATERMARK_EN; 297 289 val |= M_RX_FIFO_WATERMARK_EN | M_RX_FIFO_LAST_EN; 298 290 if (val != val_old) 299 291 writel_relaxed(val, se->base + SE_GENI_M_IRQ_EN); 300 - 301 - val_old = val = readl_relaxed(se->base + SE_GENI_S_IRQ_EN); 302 - val |= S_CMD_DONE_EN; 303 - if (val != val_old) 304 - writel_relaxed(val, se->base + SE_GENI_S_IRQ_EN); 305 292 } 306 293 307 294 val_old = val = readl_relaxed(se->base + SE_GENI_DMA_MODE_EN); ··· 304 317 305 318 geni_se_irq_clear(se); 306 319 320 + /* UART driver manages enabling / disabling interrupts internally */ 307 321 if (proto != GENI_SE_UART) { 322 + /* Non-UART use only primary sequencer so dont bother about S_IRQ */ 308 323 val_old = val = readl_relaxed(se->base + SE_GENI_M_IRQ_EN); 309 324 val &= ~(M_CMD_DONE_EN | M_TX_FIFO_WATERMARK_EN); 310 325 val &= ~(M_RX_FIFO_WATERMARK_EN | M_RX_FIFO_LAST_EN); 311 326 if (val != val_old) 312 327 writel_relaxed(val, se->base + SE_GENI_M_IRQ_EN); 313 - 314 - val_old = val = readl_relaxed(se->base + SE_GENI_S_IRQ_EN); 315 - val &= ~S_CMD_DONE_EN; 316 - if (val != val_old) 317 - writel_relaxed(val, se->base + SE_GENI_S_IRQ_EN); 318 328 } 319 329 320 330 val_old = val = readl_relaxed(se->base + SE_GENI_DMA_MODE_EN); ··· 327 343 
geni_se_irq_clear(se); 328 344 329 345 writel(0, se->base + SE_IRQ_EN); 330 - 331 - val = readl(se->base + SE_GENI_S_IRQ_EN); 332 - val &= ~S_CMD_DONE_EN; 333 - writel(val, se->base + SE_GENI_S_IRQ_EN); 334 346 335 347 val = readl(se->base + SE_GENI_M_IRQ_EN); 336 348 val &= ~(M_CMD_DONE_EN | M_TX_FIFO_WATERMARK_EN |
+1 -1
drivers/soc/qcom/qmi_interface.c
··· 650 650 if (!qmi->recv_buf) 651 651 return -ENOMEM; 652 652 653 - qmi->wq = alloc_workqueue("qmi_msg_handler", WQ_UNBOUND, 1); 653 + qmi->wq = alloc_ordered_workqueue("qmi_msg_handler", 0); 654 654 if (!qmi->wq) { 655 655 ret = -ENOMEM; 656 656 goto err_free_recv_buf;
+7 -4
drivers/soc/qcom/ramp_controller.c
··· 308 308 return qcom_ramp_controller_start(qrc); 309 309 } 310 310 311 - static int qcom_ramp_controller_remove(struct platform_device *pdev) 311 + static void qcom_ramp_controller_remove(struct platform_device *pdev) 312 312 { 313 313 struct qcom_ramp_controller *qrc = platform_get_drvdata(pdev); 314 + int ret; 314 315 315 - return rc_write_cfg(qrc, qrc->desc->cfg_ramp_dis, 316 - RC_DCVS_CFG_SID, qrc->desc->num_ramp_dis); 316 + ret = rc_write_cfg(qrc, qrc->desc->cfg_ramp_dis, 317 + RC_DCVS_CFG_SID, qrc->desc->num_ramp_dis); 318 + if (ret) 319 + dev_err(&pdev->dev, "Failed to send disable sequence\n"); 317 320 } 318 321 319 322 static const struct of_device_id qcom_ramp_controller_match_table[] = { ··· 332 329 .suppress_bind_attrs = true, 333 330 }, 334 331 .probe = qcom_ramp_controller_probe, 335 - .remove = qcom_ramp_controller_remove, 332 + .remove_new = qcom_ramp_controller_remove, 336 333 }; 337 334 338 335 static int __init qcom_ramp_controller_init(void)
+163
drivers/soc/qcom/rpm_master_stats.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Copyright (c) 2012-2020, The Linux Foundation. All rights reserved. 4 + * Copyright (c) 2023, Linaro Limited 5 + * 6 + * This driver supports what is known as "Master Stats v2" in Qualcomm 7 + * downstream kernel terms, which seems to be the only version which has 8 + * ever shipped, all the way from 2013 to 2023. 9 + */ 10 + 11 + #include <linux/debugfs.h> 12 + #include <linux/io.h> 13 + #include <linux/module.h> 14 + #include <linux/of.h> 15 + #include <linux/of_address.h> 16 + #include <linux/platform_device.h> 17 + 18 + struct master_stats_data { 19 + void __iomem *base; 20 + const char *label; 21 + }; 22 + 23 + struct rpm_master_stats { 24 + u32 active_cores; 25 + u32 num_shutdowns; 26 + u64 shutdown_req; 27 + u64 wakeup_idx; 28 + u64 bringup_req; 29 + u64 bringup_ack; 30 + u32 wakeup_reason; /* 0 = "rude wakeup", 1 = scheduled wakeup */ 31 + u32 last_sleep_trans_dur; 32 + u32 last_wake_trans_dur; 33 + 34 + /* Per-subsystem (*not necessarily* SoC-wide) XO shutdown stats */ 35 + u32 xo_count; 36 + u64 xo_last_enter; 37 + u64 last_exit; 38 + u64 xo_total_dur; 39 + } __packed; 40 + 41 + static int master_stats_show(struct seq_file *s, void *unused) 42 + { 43 + struct master_stats_data *data = s->private; 44 + struct rpm_master_stats stat; 45 + 46 + memcpy_fromio(&stat, data->base, sizeof(stat)); 47 + 48 + seq_printf(s, "%s:\n", data->label); 49 + 50 + seq_printf(s, "\tLast shutdown @ %llu\n", stat.shutdown_req); 51 + seq_printf(s, "\tLast bringup req @ %llu\n", stat.bringup_req); 52 + seq_printf(s, "\tLast bringup ack @ %llu\n", stat.bringup_ack); 53 + seq_printf(s, "\tLast wakeup idx: %llu\n", stat.wakeup_idx); 54 + seq_printf(s, "\tLast XO shutdown enter @ %llu\n", stat.xo_last_enter); 55 + seq_printf(s, "\tLast XO shutdown exit @ %llu\n", stat.last_exit); 56 + seq_printf(s, "\tXO total duration: %llu\n", stat.xo_total_dur); 57 + seq_printf(s, "\tLast sleep transition duration: %u\n", 
stat.last_sleep_trans_dur); 58 + seq_printf(s, "\tLast wake transition duration: %u\n", stat.last_wake_trans_dur); 59 + seq_printf(s, "\tXO shutdown count: %u\n", stat.xo_count); 60 + seq_printf(s, "\tWakeup reason: 0x%x\n", stat.wakeup_reason); 61 + seq_printf(s, "\tShutdown count: %u\n", stat.num_shutdowns); 62 + seq_printf(s, "\tActive cores bitmask: 0x%x\n", stat.active_cores); 63 + 64 + return 0; 65 + } 66 + DEFINE_SHOW_ATTRIBUTE(master_stats); 67 + 68 + static int master_stats_probe(struct platform_device *pdev) 69 + { 70 + struct device *dev = &pdev->dev; 71 + struct master_stats_data *data; 72 + struct device_node *msgram_np; 73 + struct dentry *dent, *root; 74 + struct resource res; 75 + int count, i, ret; 76 + 77 + count = of_property_count_strings(dev->of_node, "qcom,master-names"); 78 + if (count < 0) 79 + return count; 80 + 81 + data = devm_kzalloc(dev, count * sizeof(*data), GFP_KERNEL); 82 + if (!data) 83 + return -ENOMEM; 84 + 85 + root = debugfs_create_dir("qcom_rpm_master_stats", NULL); 86 + platform_set_drvdata(pdev, root); 87 + 88 + for (i = 0; i < count; i++) { 89 + msgram_np = of_parse_phandle(dev->of_node, "qcom,rpm-msg-ram", i); 90 + if (!msgram_np) { 91 + debugfs_remove_recursive(root); 92 + return dev_err_probe(dev, -ENODEV, 93 + "Couldn't parse MSG RAM phandle idx %d", i); 94 + } 95 + 96 + /* 97 + * Purposefully skip devm_platform helpers as we're using a 98 + * shared resource. 
99 + */ 100 + ret = of_address_to_resource(msgram_np, 0, &res); 101 + of_node_put(msgram_np); 102 + if (ret < 0) { 103 + debugfs_remove_recursive(root); 104 + return ret; 105 + } 106 + 107 + data[i].base = devm_ioremap(dev, res.start, resource_size(&res)); 108 + if (!data[i].base) { 109 + debugfs_remove_recursive(root); 110 + return dev_err_probe(dev, -EINVAL, 111 + "Could not map the MSG RAM slice idx %d!\n", i); 112 + } 113 + 114 + ret = of_property_read_string_index(dev->of_node, "qcom,master-names", i, 115 + &data[i].label); 116 + if (ret < 0) { 117 + debugfs_remove_recursive(root); 118 + return dev_err_probe(dev, ret, 119 + "Could not read name idx %d!\n", i); 120 + } 121 + 122 + /* 123 + * Generally it's not advised to fail on debugfs errors, but this 124 + * driver's only job is exposing data therein. 125 + */ 126 + dent = debugfs_create_file(data[i].label, 0444, root, 127 + &data[i], &master_stats_fops); 128 + if (IS_ERR(dent)) { 129 + debugfs_remove_recursive(root); 130 + return dev_err_probe(dev, PTR_ERR(dent), 131 + "Failed to create debugfs file %s!\n", data[i].label); 132 + } 133 + } 134 + 135 + device_set_pm_not_required(dev); 136 + 137 + return 0; 138 + } 139 + 140 + static void master_stats_remove(struct platform_device *pdev) 141 + { 142 + struct dentry *root = platform_get_drvdata(pdev); 143 + 144 + debugfs_remove_recursive(root); 145 + } 146 + 147 + static const struct of_device_id rpm_master_table[] = { 148 + { .compatible = "qcom,rpm-master-stats" }, 149 + { }, 150 + }; 151 + 152 + static struct platform_driver master_stats_driver = { 153 + .probe = master_stats_probe, 154 + .remove_new = master_stats_remove, 155 + .driver = { 156 + .name = "qcom_rpm_master_stats", 157 + .of_match_table = rpm_master_table, 158 + }, 159 + }; 160 + module_platform_driver(master_stats_driver); 161 + 162 + MODULE_DESCRIPTION("Qualcomm RPM Master Statistics driver"); 163 + MODULE_LICENSE("GPL");
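The new rpm_master_stats driver snapshots an RPM MSG RAM slice into a `__packed` struct with `memcpy_fromio()` before formatting it for debugfs. A minimal user-space sketch of that pattern, using a hypothetical subset of the fields and the GCC/Clang packed attribute:

```c
#include <stdint.h>
#include <string.h>

/* Simplified, hypothetical mirror of struct rpm_master_stats: packed so
 * every field sits at a fixed byte offset with no compiler padding. */
struct stats {
	uint32_t active_cores;
	uint32_t num_shutdowns;
	uint64_t shutdown_req;
} __attribute__((packed));

/* Snapshot the shared region into a local copy before reading fields,
 * as the driver does with memcpy_fromio(). */
static struct stats snapshot(const void *shared)
{
	struct stats s;

	memcpy(&s, shared, sizeof(s));
	return s;
}
```

Copying the whole struct once, instead of issuing one MMIO read per field, keeps the debugfs output internally consistent even if the RPM updates the region concurrently.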
+2 -2
drivers/soc/qcom/rpmpd.c
··· 892 892 pd->corner = state; 893 893 894 894 /* Always send updates for vfc and vfl */ 895 - if (!pd->enabled && pd->key != KEY_FLOOR_CORNER && 896 - pd->key != KEY_FLOOR_LEVEL) 895 + if (!pd->enabled && pd->key != cpu_to_le32(KEY_FLOOR_CORNER) && 896 + pd->key != cpu_to_le32(KEY_FLOOR_LEVEL)) 897 897 goto out; 898 898 899 899 ret = rpmpd_aggregate_corner(pd);
+27 -4
drivers/soc/qcom/smem.c
··· 14 14 #include <linux/sizes.h> 15 15 #include <linux/slab.h> 16 16 #include <linux/soc/qcom/smem.h> 17 + #include <linux/soc/qcom/socinfo.h> 17 18 18 19 /* 19 20 * The Qualcomm shared memory system is a allocate only heap structure that ··· 501 500 502 501 return ret; 503 502 } 504 - EXPORT_SYMBOL(qcom_smem_alloc); 503 + EXPORT_SYMBOL_GPL(qcom_smem_alloc); 505 504 506 505 static void *qcom_smem_get_global(struct qcom_smem *smem, 507 506 unsigned item, ··· 675 674 return ptr; 676 675 677 676 } 678 - EXPORT_SYMBOL(qcom_smem_get); 677 + EXPORT_SYMBOL_GPL(qcom_smem_get); 679 678 680 679 /** 681 680 * qcom_smem_get_free_space() - retrieve amount of free space in a partition ··· 720 719 721 720 return ret; 722 721 } 723 - EXPORT_SYMBOL(qcom_smem_get_free_space); 722 + EXPORT_SYMBOL_GPL(qcom_smem_get_free_space); 724 723 725 724 static bool addr_in_range(void __iomem *base, size_t size, void *addr) 726 725 { ··· 771 770 772 771 return 0; 773 772 } 774 - EXPORT_SYMBOL(qcom_smem_virt_to_phys); 773 + EXPORT_SYMBOL_GPL(qcom_smem_virt_to_phys); 774 + 775 + /** 776 + * qcom_smem_get_soc_id() - return the SoC ID 777 + * @id: On success, we return the SoC ID here. 778 + * 779 + * Look up SoC ID from HW/SW build ID and return it. 780 + * 781 + * Return: 0 on success, negative errno on failure. 782 + */ 783 + int qcom_smem_get_soc_id(u32 *id) 784 + { 785 + struct socinfo *info; 786 + 787 + info = qcom_smem_get(QCOM_SMEM_HOST_ANY, SMEM_HW_SW_BUILD_ID, NULL); 788 + if (IS_ERR(info)) 789 + return PTR_ERR(info); 790 + 791 + *id = __le32_to_cpu(info->id); 792 + 793 + return 0; 794 + } 795 + EXPORT_SYMBOL_GPL(qcom_smem_get_soc_id); 775 796 776 797 static int qcom_smem_get_sbl_version(struct qcom_smem *smem) 777 798 {
+37 -74
drivers/soc/qcom/socinfo.c
··· 11 11 #include <linux/random.h> 12 12 #include <linux/slab.h> 13 13 #include <linux/soc/qcom/smem.h> 14 + #include <linux/soc/qcom/socinfo.h> 14 15 #include <linux/string.h> 15 16 #include <linux/stringify.h> 16 17 #include <linux/sys_soc.h> ··· 32 31 /* Helper macros to create soc_id table */ 33 32 #define qcom_board_id(id) QCOM_ID_ ## id, __stringify(id) 34 33 #define qcom_board_id_named(id, name) QCOM_ID_ ## id, (name) 35 - 36 - #define SMEM_SOCINFO_BUILD_ID_LENGTH 32 37 - #define SMEM_SOCINFO_CHIP_ID_LENGTH 32 38 - 39 - /* 40 - * SMEM item id, used to acquire handles to respective 41 - * SMEM region. 42 - */ 43 - #define SMEM_HW_SW_BUILD_ID 137 44 34 45 35 #ifdef CONFIG_DEBUG_FS 46 36 #define SMEM_IMAGE_VERSION_BLOCKS_COUNT 32 ··· 118 126 [58] = "PM8450", 119 127 [65] = "PM8010", 120 128 }; 121 - #endif /* CONFIG_DEBUG_FS */ 122 129 123 - /* Socinfo SMEM item structure */ 124 - struct socinfo { 125 - __le32 fmt; 126 - __le32 id; 127 - __le32 ver; 128 - char build_id[SMEM_SOCINFO_BUILD_ID_LENGTH]; 129 - /* Version 2 */ 130 - __le32 raw_id; 131 - __le32 raw_ver; 132 - /* Version 3 */ 133 - __le32 hw_plat; 134 - /* Version 4 */ 135 - __le32 plat_ver; 136 - /* Version 5 */ 137 - __le32 accessory_chip; 138 - /* Version 6 */ 139 - __le32 hw_plat_subtype; 140 - /* Version 7 */ 141 - __le32 pmic_model; 142 - __le32 pmic_die_rev; 143 - /* Version 8 */ 144 - __le32 pmic_model_1; 145 - __le32 pmic_die_rev_1; 146 - __le32 pmic_model_2; 147 - __le32 pmic_die_rev_2; 148 - /* Version 9 */ 149 - __le32 foundry_id; 150 - /* Version 10 */ 151 - __le32 serial_num; 152 - /* Version 11 */ 153 - __le32 num_pmics; 154 - __le32 pmic_array_offset; 155 - /* Version 12 */ 156 - __le32 chip_family; 157 - __le32 raw_device_family; 158 - __le32 raw_device_num; 159 - /* Version 13 */ 160 - __le32 nproduct_id; 161 - char chip_id[SMEM_SOCINFO_CHIP_ID_LENGTH]; 162 - /* Version 14 */ 163 - __le32 num_clusters; 164 - __le32 ncluster_array_offset; 165 - __le32 num_defective_parts; 166 - __le32 
ndefective_parts_array_offset; 167 - /* Version 15 */ 168 - __le32 nmodem_supported; 169 - /* Version 16 */ 170 - __le32 feature_code; 171 - __le32 pcode; 172 - __le32 npartnamemap_offset; 173 - __le32 nnum_partname_mapping; 174 - /* Version 17 */ 175 - __le32 oem_variant; 176 - }; 177 - 178 - #ifdef CONFIG_DEBUG_FS 179 130 struct socinfo_params { 180 131 u32 raw_device_family; 181 132 u32 hw_plat_subtype; ··· 133 198 u32 nproduct_id; 134 199 u32 num_clusters; 135 200 u32 ncluster_array_offset; 136 - u32 num_defective_parts; 137 - u32 ndefective_parts_array_offset; 201 + u32 num_subset_parts; 202 + u32 nsubset_parts_array_offset; 138 203 u32 nmodem_supported; 139 204 u32 feature_code; 140 205 u32 pcode; 141 206 u32 oem_variant; 207 + u32 num_func_clusters; 208 + u32 boot_cluster; 209 + u32 boot_core; 142 210 }; 143 211 144 212 struct smem_image_version { ··· 372 434 { qcom_board_id(SM8350) }, 373 435 { qcom_board_id(QCM2290) }, 374 436 { qcom_board_id(SM6115) }, 437 + { qcom_board_id(IPQ5010) }, 438 + { qcom_board_id(IPQ5018) }, 439 + { qcom_board_id(IPQ5028) }, 375 440 { qcom_board_id(SC8280XP) }, 376 441 { qcom_board_id(IPQ6005) }, 377 442 { qcom_board_id(QRB5165) }, ··· 388 447 { qcom_board_id_named(SM8450_3, "SM8450") }, 389 448 { qcom_board_id(SC7280) }, 390 449 { qcom_board_id(SC7180P) }, 450 + { qcom_board_id(IPQ5000) }, 451 + { qcom_board_id(IPQ0509) }, 452 + { qcom_board_id(IPQ0518) }, 391 453 { qcom_board_id(SM6375) }, 392 454 { qcom_board_id(IPQ9514) }, 393 455 { qcom_board_id(IPQ9550) }, ··· 398 454 { qcom_board_id(IPQ9570) }, 399 455 { qcom_board_id(IPQ9574) }, 400 456 { qcom_board_id(SM8550) }, 457 + { qcom_board_id(IPQ5016) }, 401 458 { qcom_board_id(IPQ9510) }, 402 459 { qcom_board_id(QRB4210) }, 403 460 { qcom_board_id(QRB2210) }, ··· 406 461 { qcom_board_id(QRU1000) }, 407 462 { qcom_board_id(QDU1000) }, 408 463 { qcom_board_id(QDU1010) }, 464 + { qcom_board_id(IPQ5019) }, 409 465 { qcom_board_id(QRU1032) }, 410 466 { qcom_board_id(QRU1052) }, 411 
467 { qcom_board_id(QRU1062) }, 412 468 { qcom_board_id(IPQ5332) }, 413 469 { qcom_board_id(IPQ5322) }, 470 + { qcom_board_id(IPQ5312) }, 471 + { qcom_board_id(IPQ5302) }, 472 + { qcom_board_id(IPQ5300) }, 414 473 }; 415 474 416 475 static const char *socinfo_machine(struct device *dev, unsigned int id) ··· 569 620 &qcom_socinfo->info.fmt); 570 621 571 622 switch (qcom_socinfo->info.fmt) { 623 + case SOCINFO_VERSION(0, 19): 624 + qcom_socinfo->info.num_func_clusters = __le32_to_cpu(info->num_func_clusters); 625 + qcom_socinfo->info.boot_cluster = __le32_to_cpu(info->boot_cluster); 626 + qcom_socinfo->info.boot_core = __le32_to_cpu(info->boot_core); 627 + 628 + debugfs_create_u32("num_func_clusters", 0444, qcom_socinfo->dbg_root, 629 + &qcom_socinfo->info.num_func_clusters); 630 + debugfs_create_u32("boot_cluster", 0444, qcom_socinfo->dbg_root, 631 + &qcom_socinfo->info.boot_cluster); 632 + debugfs_create_u32("boot_core", 0444, qcom_socinfo->dbg_root, 633 + &qcom_socinfo->info.boot_core); 634 + fallthrough; 635 + case SOCINFO_VERSION(0, 18): 572 636 case SOCINFO_VERSION(0, 17): 573 637 qcom_socinfo->info.oem_variant = __le32_to_cpu(info->oem_variant); 574 638 debugfs_create_u32("oem_variant", 0444, qcom_socinfo->dbg_root, ··· 605 643 case SOCINFO_VERSION(0, 14): 606 644 qcom_socinfo->info.num_clusters = __le32_to_cpu(info->num_clusters); 607 645 qcom_socinfo->info.ncluster_array_offset = __le32_to_cpu(info->ncluster_array_offset); 608 - qcom_socinfo->info.num_defective_parts = __le32_to_cpu(info->num_defective_parts); 609 - qcom_socinfo->info.ndefective_parts_array_offset = __le32_to_cpu(info->ndefective_parts_array_offset); 646 + qcom_socinfo->info.num_subset_parts = __le32_to_cpu(info->num_subset_parts); 647 + qcom_socinfo->info.nsubset_parts_array_offset = 648 + __le32_to_cpu(info->nsubset_parts_array_offset); 610 649 611 650 debugfs_create_u32("num_clusters", 0444, qcom_socinfo->dbg_root, 612 651 &qcom_socinfo->info.num_clusters); 613 652 
debugfs_create_u32("ncluster_array_offset", 0444, qcom_socinfo->dbg_root, 614 653 &qcom_socinfo->info.ncluster_array_offset); 615 - debugfs_create_u32("num_defective_parts", 0444, qcom_socinfo->dbg_root, 616 - &qcom_socinfo->info.num_defective_parts); 617 - debugfs_create_u32("ndefective_parts_array_offset", 0444, qcom_socinfo->dbg_root, 618 - &qcom_socinfo->info.ndefective_parts_array_offset); 654 + debugfs_create_u32("num_subset_parts", 0444, qcom_socinfo->dbg_root, 655 + &qcom_socinfo->info.num_subset_parts); 656 + debugfs_create_u32("nsubset_parts_array_offset", 0444, qcom_socinfo->dbg_root, 657 + &qcom_socinfo->info.nsubset_parts_array_offset); 619 658 fallthrough; 620 659 case SOCINFO_VERSION(0, 13): 621 660 qcom_socinfo->info.nproduct_id = __le32_to_cpu(info->nproduct_id);
+14 -1
drivers/soc/renesas/rcar-rst.c
··· 12 12 13 13 #define WDTRSTCR_RESET 0xA55A0002 14 14 #define WDTRSTCR 0x0054 15 + #define GEN4_WDTRSTCR 0x0010 15 16 16 17 #define CR7BAR 0x0070 17 18 #define CR7BAREN BIT(4) ··· 25 24 static int rcar_rst_enable_wdt_reset(void __iomem *base) 26 25 { 27 26 iowrite32(WDTRSTCR_RESET, base + WDTRSTCR); 27 + return 0; 28 + } 29 + 30 + static int rcar_rst_v3u_enable_wdt_reset(void __iomem *base) 31 + { 32 + iowrite32(WDTRSTCR_RESET, base + GEN4_WDTRSTCR); 28 33 return 0; 29 34 } 30 35 ··· 73 66 .set_rproc_boot_addr = rcar_rst_set_gen3_rproc_boot_addr, 74 67 }; 75 68 69 + /* V3U firmware doesn't enable WDT reset and there won't be updates anymore */ 70 + static const struct rst_config rcar_rst_v3u __initconst = { 71 + .modemr = 0x00, /* MODEMR0 and it has CPG related bits */ 72 + .configure = rcar_rst_v3u_enable_wdt_reset, 73 + }; 74 + 76 75 static const struct rst_config rcar_rst_gen4 __initconst = { 77 76 .modemr = 0x00, /* MODEMR0 and it has CPG related bits */ 78 77 }; ··· 114 101 { .compatible = "renesas,r8a77990-rst", .data = &rcar_rst_gen3 }, 115 102 { .compatible = "renesas,r8a77995-rst", .data = &rcar_rst_gen3 }, 116 103 /* R-Car Gen4 */ 117 - { .compatible = "renesas,r8a779a0-rst", .data = &rcar_rst_gen4 }, 104 + { .compatible = "renesas,r8a779a0-rst", .data = &rcar_rst_v3u }, 118 105 { .compatible = "renesas,r8a779f0-rst", .data = &rcar_rst_gen4 }, 119 106 { .compatible = "renesas,r8a779g0-rst", .data = &rcar_rst_gen4 }, 120 107 { /* sentinel */ }
+9 -20
drivers/soc/renesas/rmobile-sysc.c
··· 12 12 #include <linux/clk/renesas.h> 13 13 #include <linux/console.h> 14 14 #include <linux/delay.h> 15 + #include <linux/io.h> 16 + #include <linux/iopoll.h> 15 17 #include <linux/of.h> 16 18 #include <linux/of_address.h> 17 19 #include <linux/pm.h> 18 20 #include <linux/pm_clock.h> 19 21 #include <linux/pm_domain.h> 20 22 #include <linux/slab.h> 21 - 22 - #include <asm/io.h> 23 23 24 24 /* SYSC */ 25 25 #define SPDCR 0x08 /* SYS Power Down Control Register */ ··· 47 47 { 48 48 struct rmobile_pm_domain *rmobile_pd = to_rmobile_pd(genpd); 49 49 unsigned int mask = BIT(rmobile_pd->bit_shift); 50 + u32 val; 50 51 51 52 if (rmobile_pd->suspend) { 52 53 int ret = rmobile_pd->suspend(); ··· 57 56 } 58 57 59 58 if (readl(rmobile_pd->base + PSTR) & mask) { 60 - unsigned int retry_count; 61 59 writel(mask, rmobile_pd->base + SPDCR); 62 60 63 - for (retry_count = PSTR_RETRIES; retry_count; retry_count--) { 64 - if (!(readl(rmobile_pd->base + SPDCR) & mask)) 65 - break; 66 - cpu_relax(); 67 - } 61 + readl_poll_timeout_atomic(rmobile_pd->base + SPDCR, val, 62 + !(val & mask), 0, PSTR_RETRIES); 68 63 } 69 64 70 65 pr_debug("%s: Power off, 0x%08x -> PSTR = 0x%08x\n", genpd->name, mask, ··· 71 74 72 75 static int __rmobile_pd_power_up(struct rmobile_pm_domain *rmobile_pd) 73 76 { 74 - unsigned int mask = BIT(rmobile_pd->bit_shift); 75 - unsigned int retry_count; 77 + unsigned int val, mask = BIT(rmobile_pd->bit_shift); 76 78 int ret = 0; 77 79 78 80 if (readl(rmobile_pd->base + PSTR) & mask) ··· 79 83 80 84 writel(mask, rmobile_pd->base + SWUCR); 81 85 82 - for (retry_count = 2 * PSTR_RETRIES; retry_count; retry_count--) { 83 - if (!(readl(rmobile_pd->base + SWUCR) & mask)) 84 - break; 85 - if (retry_count > PSTR_RETRIES) 86 - udelay(PSTR_DELAY_US); 87 - else 88 - cpu_relax(); 89 - } 90 - if (!retry_count) 91 - ret = -EIO; 86 + ret = readl_poll_timeout_atomic(rmobile_pd->base + SWUCR, val, 87 + (val & mask), PSTR_DELAY_US, 88 + PSTR_RETRIES * PSTR_DELAY_US); 92 89 93 90 
pr_debug("%s: Power on, 0x%08x -> PSTR = 0x%08x\n", 94 91 rmobile_pd->genpd.name, mask,
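The rmobile-sysc.c cleanup replaces two open-coded retry loops with `readl_poll_timeout_atomic()`. The shape of that helper, sketched in user space against a plain variable (simplified: an iteration budget stands in for the delay/timeout arguments):

```c
#include <stdint.h>

/* Spin reading a "register" until the masked bits reach the wanted
 * state or the poll budget runs out. */
static int poll_mask(const volatile uint32_t *reg, uint32_t mask,
		     int want_set, unsigned int max_polls)
{
	while (max_polls--) {
		if (!!(*reg & mask) == !!want_set)
			return 0;	/* condition met */
	}
	return -1;			/* timed out (-ETIMEDOUT in-kernel) */
}
```

Using the common helper removes the subtle off-by-one and "who counts the retries" questions that hand-rolled loops like the old `PSTR_RETRIES` code tend to accumulate.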
+27 -27
drivers/soc/rockchip/dtpm.c
··· 12 12 #include <linux/platform_device.h> 13 13 14 14 static struct dtpm_node __initdata rk3399_hierarchy[] = { 15 - [0]{ .name = "rk3399", 16 - .type = DTPM_NODE_VIRTUAL }, 17 - [1]{ .name = "package", 18 - .type = DTPM_NODE_VIRTUAL, 19 - .parent = &rk3399_hierarchy[0] }, 20 - [2]{ .name = "/cpus/cpu@0", 21 - .type = DTPM_NODE_DT, 22 - .parent = &rk3399_hierarchy[1] }, 23 - [3]{ .name = "/cpus/cpu@1", 24 - .type = DTPM_NODE_DT, 25 - .parent = &rk3399_hierarchy[1] }, 26 - [4]{ .name = "/cpus/cpu@2", 27 - .type = DTPM_NODE_DT, 28 - .parent = &rk3399_hierarchy[1] }, 29 - [5]{ .name = "/cpus/cpu@3", 30 - .type = DTPM_NODE_DT, 31 - .parent = &rk3399_hierarchy[1] }, 32 - [6]{ .name = "/cpus/cpu@100", 33 - .type = DTPM_NODE_DT, 34 - .parent = &rk3399_hierarchy[1] }, 35 - [7]{ .name = "/cpus/cpu@101", 36 - .type = DTPM_NODE_DT, 37 - .parent = &rk3399_hierarchy[1] }, 38 - [8]{ .name = "/gpu@ff9a0000", 39 - .type = DTPM_NODE_DT, 40 - .parent = &rk3399_hierarchy[1] }, 41 - [9]{ /* sentinel */ } 15 + [0] = { .name = "rk3399", 16 + .type = DTPM_NODE_VIRTUAL }, 17 + [1] = { .name = "package", 18 + .type = DTPM_NODE_VIRTUAL, 19 + .parent = &rk3399_hierarchy[0] }, 20 + [2] = { .name = "/cpus/cpu@0", 21 + .type = DTPM_NODE_DT, 22 + .parent = &rk3399_hierarchy[1] }, 23 + [3] = { .name = "/cpus/cpu@1", 24 + .type = DTPM_NODE_DT, 25 + .parent = &rk3399_hierarchy[1] }, 26 + [4] = { .name = "/cpus/cpu@2", 27 + .type = DTPM_NODE_DT, 28 + .parent = &rk3399_hierarchy[1] }, 29 + [5] = { .name = "/cpus/cpu@3", 30 + .type = DTPM_NODE_DT, 31 + .parent = &rk3399_hierarchy[1] }, 32 + [6] = { .name = "/cpus/cpu@100", 33 + .type = DTPM_NODE_DT, 34 + .parent = &rk3399_hierarchy[1] }, 35 + [7] = { .name = "/cpus/cpu@101", 36 + .type = DTPM_NODE_DT, 37 + .parent = &rk3399_hierarchy[1] }, 38 + [8] = { .name = "/gpu@ff9a0000", 39 + .type = DTPM_NODE_DT, 40 + .parent = &rk3399_hierarchy[1] }, 41 + [9] = { /* sentinel */ } 42 42 }; 43 43 44 44 static struct of_device_id __initdata 
rockchip_dtpm_match_table[] = {
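The dtpm change is purely syntactic: the obsolete GNU `[i]{ ... }` array-initializer form becomes standard C99 `[i] = { ... }`. A self-contained sketch of the same construct, with a hypothetical three-level hierarchy mirroring the rk3399 table's shape:

```c
#include <string.h>

struct node {
	const char *name;
	const struct node *parent;
};

/* C99 designated initializers make each element's index explicit and
 * allow elements to point back at earlier entries of the same static
 * array (hypothetical names). */
static const struct node hierarchy[] = {
	[0] = { .name = "soc" },
	[1] = { .name = "package", .parent = &hierarchy[0] },
	[2] = { .name = "cpu0",    .parent = &hierarchy[1] },
};
```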
+125 -35
drivers/soc/rockchip/pm_domains.c
··· 43 43 bool active_wakeup; 44 44 int pwr_w_mask; 45 45 int req_w_mask; 46 + int mem_status_mask; 46 47 int repair_status_mask; 47 48 u32 pwr_offset; 49 + u32 mem_offset; 48 50 u32 req_offset; 49 51 }; 50 52 ··· 56 54 u32 req_offset; 57 55 u32 idle_offset; 58 56 u32 ack_offset; 57 + u32 mem_pwr_offset; 58 + u32 chain_status_offset; 59 + u32 mem_status_offset; 59 60 u32 repair_status_offset; 60 61 61 62 u32 core_pwrcnt_offset; ··· 124 119 .active_wakeup = wakeup, \ 125 120 } 126 121 127 - #define DOMAIN_M_O_R(_name, p_offset, pwr, status, r_status, r_offset, req, idle, ack, wakeup) \ 122 + #define DOMAIN_M_O_R(_name, p_offset, pwr, status, m_offset, m_status, r_status, r_offset, req, idle, ack, wakeup) \ 128 123 { \ 129 124 .name = _name, \ 130 125 .pwr_offset = p_offset, \ 131 126 .pwr_w_mask = (pwr) << 16, \ 132 127 .pwr_mask = (pwr), \ 133 128 .status_mask = (status), \ 129 + .mem_offset = m_offset, \ 130 + .mem_status_mask = (m_status), \ 134 131 .repair_status_mask = (r_status), \ 135 132 .req_offset = r_offset, \ 136 133 .req_w_mask = (req) << 16, \ ··· 276 269 } 277 270 EXPORT_SYMBOL_GPL(rockchip_pmu_unblock); 278 271 279 - #define DOMAIN_RK3588(name, p_offset, pwr, status, r_status, r_offset, req, idle, wakeup) \ 280 - DOMAIN_M_O_R(name, p_offset, pwr, status, r_status, r_offset, req, idle, idle, wakeup) 272 + #define DOMAIN_RK3588(name, p_offset, pwr, status, m_offset, m_status, r_status, r_offset, req, idle, wakeup) \ 273 + DOMAIN_M_O_R(name, p_offset, pwr, status, m_offset, m_status, r_status, r_offset, req, idle, idle, wakeup) 281 274 282 275 static bool rockchip_pmu_domain_is_idle(struct rockchip_pm_domain *pd) 283 276 { ··· 415 408 return !(val & pd->info->status_mask); 416 409 } 417 410 411 + static bool rockchip_pmu_domain_is_mem_on(struct rockchip_pm_domain *pd) 412 + { 413 + struct rockchip_pmu *pmu = pd->pmu; 414 + unsigned int val; 415 + 416 + regmap_read(pmu->regmap, 417 + pmu->info->mem_status_offset + pd->info->mem_offset, &val); 418 + 419 + 
/* 1'b0: power on, 1'b1: power off */ 420 + return !(val & pd->info->mem_status_mask); 421 + } 422 + 423 + static bool rockchip_pmu_domain_is_chain_on(struct rockchip_pm_domain *pd) 424 + { 425 + struct rockchip_pmu *pmu = pd->pmu; 426 + unsigned int val; 427 + 428 + regmap_read(pmu->regmap, 429 + pmu->info->chain_status_offset + pd->info->mem_offset, &val); 430 + 431 + /* 1'b1: power on, 1'b0: power off */ 432 + return val & pd->info->mem_status_mask; 433 + } 434 + 435 + static int rockchip_pmu_domain_mem_reset(struct rockchip_pm_domain *pd) 436 + { 437 + struct rockchip_pmu *pmu = pd->pmu; 438 + struct generic_pm_domain *genpd = &pd->genpd; 439 + bool is_on; 440 + int ret = 0; 441 + 442 + ret = readx_poll_timeout_atomic(rockchip_pmu_domain_is_chain_on, pd, is_on, 443 + is_on == true, 0, 10000); 444 + if (ret) { 445 + dev_err(pmu->dev, 446 + "failed to get chain status '%s', target_on=1, val=%d\n", 447 + genpd->name, is_on); 448 + goto error; 449 + } 450 + 451 + udelay(20); 452 + 453 + regmap_write(pmu->regmap, pmu->info->mem_pwr_offset + pd->info->pwr_offset, 454 + (pd->info->pwr_mask | pd->info->pwr_w_mask)); 455 + wmb(); 456 + 457 + ret = readx_poll_timeout_atomic(rockchip_pmu_domain_is_mem_on, pd, is_on, 458 + is_on == false, 0, 10000); 459 + if (ret) { 460 + dev_err(pmu->dev, 461 + "failed to get mem status '%s', target_on=0, val=%d\n", 462 + genpd->name, is_on); 463 + goto error; 464 + } 465 + 466 + regmap_write(pmu->regmap, pmu->info->mem_pwr_offset + pd->info->pwr_offset, 467 + pd->info->pwr_w_mask); 468 + wmb(); 469 + 470 + ret = readx_poll_timeout_atomic(rockchip_pmu_domain_is_mem_on, pd, is_on, 471 + is_on == true, 0, 10000); 472 + if (ret) { 473 + dev_err(pmu->dev, 474 + "failed to get mem status '%s', target_on=1, val=%d\n", 475 + genpd->name, is_on); 476 + } 477 + 478 + error: 479 + return ret; 480 + } 481 + 418 482 static void rockchip_do_pmu_set_power_domain(struct rockchip_pm_domain *pd, 419 483 bool on) 420 484 { 421 485 struct rockchip_pmu *pmu 
= pd->pmu; 422 486 struct generic_pm_domain *genpd = &pd->genpd; 423 487 u32 pd_pwr_offset = pd->info->pwr_offset; 424 - bool is_on; 488 + bool is_on, is_mem_on = false; 425 489 426 490 if (pd->info->pwr_mask == 0) 427 491 return; 428 - else if (pd->info->pwr_w_mask) 492 + 493 + if (on && pd->info->mem_status_mask) 494 + is_mem_on = rockchip_pmu_domain_is_mem_on(pd); 495 + 496 + if (pd->info->pwr_w_mask) 429 497 regmap_write(pmu->regmap, pmu->info->pwr_offset + pd_pwr_offset, 430 498 on ? pd->info->pwr_w_mask : 431 499 (pd->info->pwr_mask | pd->info->pwr_w_mask)); ··· 509 427 pd->info->pwr_mask, on ? 0 : -1U); 510 428 511 429 wmb(); 430 + 431 + if (is_mem_on && rockchip_pmu_domain_mem_reset(pd)) 432 + return; 512 433 513 434 if (readx_poll_timeout_atomic(rockchip_pmu_domain_is_on, pd, is_on, 514 435 is_on == on, 0, 10000)) { ··· 730 645 pd->genpd.flags = GENPD_FLAG_PM_CLK; 731 646 if (pd_info->active_wakeup) 732 647 pd->genpd.flags |= GENPD_FLAG_ACTIVE_WAKEUP; 733 - pm_genpd_init(&pd->genpd, NULL, !rockchip_pmu_domain_is_on(pd)); 648 + pm_genpd_init(&pd->genpd, NULL, 649 + !rockchip_pmu_domain_is_on(pd) || 650 + (pd->info->mem_status_mask && !rockchip_pmu_domain_is_mem_on(pd))); 734 651 735 652 pmu->genpd_data.domains[id] = &pd->genpd; 736 653 return 0; ··· 1111 1024 }; 1112 1025 1113 1026 static const struct rockchip_domain_info rk3588_pm_domains[] = { 1114 - [RK3588_PD_GPU] = DOMAIN_RK3588("gpu", 0x0, BIT(0), 0, BIT(1), 0x0, BIT(0), BIT(0), false), 1115 - [RK3588_PD_NPU] = DOMAIN_RK3588("npu", 0x0, BIT(1), BIT(1), 0, 0x0, 0, 0, false), 1116 - [RK3588_PD_VCODEC] = DOMAIN_RK3588("vcodec", 0x0, BIT(2), BIT(2), 0, 0x0, 0, 0, false), 1117 - [RK3588_PD_NPUTOP] = DOMAIN_RK3588("nputop", 0x0, BIT(3), 0, BIT(2), 0x0, BIT(1), BIT(1), false), 1118 - [RK3588_PD_NPU1] = DOMAIN_RK3588("npu1", 0x0, BIT(4), 0, BIT(3), 0x0, BIT(2), BIT(2), false), 1119 - [RK3588_PD_NPU2] = DOMAIN_RK3588("npu2", 0x0, BIT(5), 0, BIT(4), 0x0, BIT(3), BIT(3), false), 1120 - [RK3588_PD_VENC0] = 
DOMAIN_RK3588("venc0", 0x0, BIT(6), 0, BIT(5), 0x0, BIT(4), BIT(4), false), 1121 - [RK3588_PD_VENC1] = DOMAIN_RK3588("venc1", 0x0, BIT(7), 0, BIT(6), 0x0, BIT(5), BIT(5), false), 1122 - [RK3588_PD_RKVDEC0] = DOMAIN_RK3588("rkvdec0", 0x0, BIT(8), 0, BIT(7), 0x0, BIT(6), BIT(6), false), 1123 - [RK3588_PD_RKVDEC1] = DOMAIN_RK3588("rkvdec1", 0x0, BIT(9), 0, BIT(8), 0x0, BIT(7), BIT(7), false), 1124 - [RK3588_PD_VDPU] = DOMAIN_RK3588("vdpu", 0x0, BIT(10), 0, BIT(9), 0x0, BIT(8), BIT(8), false), 1125 - [RK3588_PD_RGA30] = DOMAIN_RK3588("rga30", 0x0, BIT(11), 0, BIT(10), 0x0, 0, 0, false), 1126 - [RK3588_PD_AV1] = DOMAIN_RK3588("av1", 0x0, BIT(12), 0, BIT(11), 0x0, BIT(9), BIT(9), false), 1127 - [RK3588_PD_VI] = DOMAIN_RK3588("vi", 0x0, BIT(13), 0, BIT(12), 0x0, BIT(10), BIT(10), false), 1128 - [RK3588_PD_FEC] = DOMAIN_RK3588("fec", 0x0, BIT(14), 0, BIT(13), 0x0, 0, 0, false), 1129 - [RK3588_PD_ISP1] = DOMAIN_RK3588("isp1", 0x0, BIT(15), 0, BIT(14), 0x0, BIT(11), BIT(11), false), 1130 - [RK3588_PD_RGA31] = DOMAIN_RK3588("rga31", 0x4, BIT(0), 0, BIT(15), 0x0, BIT(12), BIT(12), false), 1131 - [RK3588_PD_VOP] = DOMAIN_RK3588("vop", 0x4, BIT(1), 0, BIT(16), 0x0, BIT(13) | BIT(14), BIT(13) | BIT(14), false), 1132 - [RK3588_PD_VO0] = DOMAIN_RK3588("vo0", 0x4, BIT(2), 0, BIT(17), 0x0, BIT(15), BIT(15), false), 1133 - [RK3588_PD_VO1] = DOMAIN_RK3588("vo1", 0x4, BIT(3), 0, BIT(18), 0x4, BIT(0), BIT(16), false), 1134 - [RK3588_PD_AUDIO] = DOMAIN_RK3588("audio", 0x4, BIT(4), 0, BIT(19), 0x4, BIT(1), BIT(17), false), 1135 - [RK3588_PD_PHP] = DOMAIN_RK3588("php", 0x4, BIT(5), 0, BIT(20), 0x4, BIT(5), BIT(21), false), 1136 - [RK3588_PD_GMAC] = DOMAIN_RK3588("gmac", 0x4, BIT(6), 0, BIT(21), 0x0, 0, 0, false), 1137 - [RK3588_PD_PCIE] = DOMAIN_RK3588("pcie", 0x4, BIT(7), 0, BIT(22), 0x0, 0, 0, true), 1138 - [RK3588_PD_NVM] = DOMAIN_RK3588("nvm", 0x4, BIT(8), BIT(24), 0, 0x4, BIT(2), BIT(18), false), 1139 - [RK3588_PD_NVM0] = DOMAIN_RK3588("nvm0", 0x4, BIT(9), 0, BIT(23), 0x0, 0, 0, 
false), 1140 - [RK3588_PD_SDIO] = DOMAIN_RK3588("sdio", 0x4, BIT(10), 0, BIT(24), 0x4, BIT(3), BIT(19), false), 1141 - [RK3588_PD_USB] = DOMAIN_RK3588("usb", 0x4, BIT(11), 0, BIT(25), 0x4, BIT(4), BIT(20), true), 1142 - [RK3588_PD_SDMMC] = DOMAIN_RK3588("sdmmc", 0x4, BIT(13), 0, BIT(26), 0x0, 0, 0, false), 1027 + [RK3588_PD_GPU] = DOMAIN_RK3588("gpu", 0x0, BIT(0), 0, 0x0, 0, BIT(1), 0x0, BIT(0), BIT(0), false), 1028 + [RK3588_PD_NPU] = DOMAIN_RK3588("npu", 0x0, BIT(1), BIT(1), 0x0, 0, 0, 0x0, 0, 0, false), 1029 + [RK3588_PD_VCODEC] = DOMAIN_RK3588("vcodec", 0x0, BIT(2), BIT(2), 0x0, 0, 0, 0x0, 0, 0, false), 1030 + [RK3588_PD_NPUTOP] = DOMAIN_RK3588("nputop", 0x0, BIT(3), 0, 0x0, BIT(11), BIT(2), 0x0, BIT(1), BIT(1), false), 1031 + [RK3588_PD_NPU1] = DOMAIN_RK3588("npu1", 0x0, BIT(4), 0, 0x0, BIT(12), BIT(3), 0x0, BIT(2), BIT(2), false), 1032 + [RK3588_PD_NPU2] = DOMAIN_RK3588("npu2", 0x0, BIT(5), 0, 0x0, BIT(13), BIT(4), 0x0, BIT(3), BIT(3), false), 1033 + [RK3588_PD_VENC0] = DOMAIN_RK3588("venc0", 0x0, BIT(6), 0, 0x0, BIT(14), BIT(5), 0x0, BIT(4), BIT(4), false), 1034 + [RK3588_PD_VENC1] = DOMAIN_RK3588("venc1", 0x0, BIT(7), 0, 0x0, BIT(15), BIT(6), 0x0, BIT(5), BIT(5), false), 1035 + [RK3588_PD_RKVDEC0] = DOMAIN_RK3588("rkvdec0", 0x0, BIT(8), 0, 0x0, BIT(16), BIT(7), 0x0, BIT(6), BIT(6), false), 1036 + [RK3588_PD_RKVDEC1] = DOMAIN_RK3588("rkvdec1", 0x0, BIT(9), 0, 0x0, BIT(17), BIT(8), 0x0, BIT(7), BIT(7), false), 1037 + [RK3588_PD_VDPU] = DOMAIN_RK3588("vdpu", 0x0, BIT(10), 0, 0x0, BIT(18), BIT(9), 0x0, BIT(8), BIT(8), false), 1038 + [RK3588_PD_RGA30] = DOMAIN_RK3588("rga30", 0x0, BIT(11), 0, 0x0, BIT(19), BIT(10), 0x0, 0, 0, false), 1039 + [RK3588_PD_AV1] = DOMAIN_RK3588("av1", 0x0, BIT(12), 0, 0x0, BIT(20), BIT(11), 0x0, BIT(9), BIT(9), false), 1040 + [RK3588_PD_VI] = DOMAIN_RK3588("vi", 0x0, BIT(13), 0, 0x0, BIT(21), BIT(12), 0x0, BIT(10), BIT(10), false), 1041 + [RK3588_PD_FEC] = DOMAIN_RK3588("fec", 0x0, BIT(14), 0, 0x0, BIT(22), BIT(13), 0x0, 0, 0, false), 
1042 + [RK3588_PD_ISP1] = DOMAIN_RK3588("isp1", 0x0, BIT(15), 0, 0x0, BIT(23), BIT(14), 0x0, BIT(11), BIT(11), false), 1043 + [RK3588_PD_RGA31] = DOMAIN_RK3588("rga31", 0x4, BIT(0), 0, 0x0, BIT(24), BIT(15), 0x0, BIT(12), BIT(12), false), 1044 + [RK3588_PD_VOP] = DOMAIN_RK3588("vop", 0x4, BIT(1), 0, 0x0, BIT(25), BIT(16), 0x0, BIT(13) | BIT(14), BIT(13) | BIT(14), false), 1045 + [RK3588_PD_VO0] = DOMAIN_RK3588("vo0", 0x4, BIT(2), 0, 0x0, BIT(26), BIT(17), 0x0, BIT(15), BIT(15), false), 1046 + [RK3588_PD_VO1] = DOMAIN_RK3588("vo1", 0x4, BIT(3), 0, 0x0, BIT(27), BIT(18), 0x4, BIT(0), BIT(16), false), 1047 + [RK3588_PD_AUDIO] = DOMAIN_RK3588("audio", 0x4, BIT(4), 0, 0x0, BIT(28), BIT(19), 0x4, BIT(1), BIT(17), false), 1048 + [RK3588_PD_PHP] = DOMAIN_RK3588("php", 0x4, BIT(5), 0, 0x0, BIT(29), BIT(20), 0x4, BIT(5), BIT(21), false), 1049 + [RK3588_PD_GMAC] = DOMAIN_RK3588("gmac", 0x4, BIT(6), 0, 0x0, BIT(30), BIT(21), 0x0, 0, 0, false), 1050 + [RK3588_PD_PCIE] = DOMAIN_RK3588("pcie", 0x4, BIT(7), 0, 0x0, BIT(31), BIT(22), 0x0, 0, 0, true), 1051 + [RK3588_PD_NVM] = DOMAIN_RK3588("nvm", 0x4, BIT(8), BIT(24), 0x4, 0, 0, 0x4, BIT(2), BIT(18), false), 1052 + [RK3588_PD_NVM0] = DOMAIN_RK3588("nvm0", 0x4, BIT(9), 0, 0x4, BIT(1), BIT(23), 0x0, 0, 0, false), 1053 + [RK3588_PD_SDIO] = DOMAIN_RK3588("sdio", 0x4, BIT(10), 0, 0x4, BIT(2), BIT(24), 0x4, BIT(3), BIT(19), false), 1054 + [RK3588_PD_USB] = DOMAIN_RK3588("usb", 0x4, BIT(11), 0, 0x4, BIT(3), BIT(25), 0x4, BIT(4), BIT(20), true), 1055 + [RK3588_PD_SDMMC] = DOMAIN_RK3588("sdmmc", 0x4, BIT(13), 0, 0x4, BIT(5), BIT(26), 0x0, 0, 0, false), 1143 1056 }; 1144 1057 1145 1058 static const struct rockchip_pmu_info px30_pmu = { ··· 1294 1207 .req_offset = 0x10c, 1295 1208 .idle_offset = 0x120, 1296 1209 .ack_offset = 0x118, 1210 + .mem_pwr_offset = 0x1a0, 1211 + .chain_status_offset = 0x1f0, 1212 + .mem_status_offset = 0x1f8, 1297 1213 .repair_status_offset = 0x290, 1298 1214 1299 1215 .num_domains = ARRAY_SIZE(rk3588_pm_domains),
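The widened DOMAIN_RK3588() rows keep the `(pwr) << 16` convention from DOMAIN_M_O_R(): these Rockchip PMU registers carry a write-enable mask in the top 16 bits, so a single 32-bit write updates only the selected bits and no read-modify-write is needed. A sketch of that hardware behaviour (simulated register, hypothetical bit choice):

```c
#include <stdint.h>

/* Model of a write-enable-masked register: only bits whose companion
 * bit 16 positions up is set in the written value actually change. */
static uint32_t wmsk_write(uint32_t reg, uint32_t wval)
{
	uint32_t wmask = wval >> 16;		/* high half selects bits */

	return (reg & ~wmask) | (wval & wmask);	/* low half supplies values */
}

/* Assert bit 3 (e.g. a power-down request) while leaving others alone. */
static uint32_t set_bit3(uint32_t reg)
{
	uint32_t bit = 1u << 3;

	return wmsk_write(reg, (bit << 16) | bit);
}
```

This is why the domain tables store both `pwr_mask` and `pwr_w_mask = pwr << 16`: powering off writes both halves, powering on writes the enable half with the value bits clear.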
+9
drivers/soc/samsung/exynos-pmu.c
···
 
 	if (pmu_data->powerdown_conf_extra)
 		pmu_data->powerdown_conf_extra(mode);
+
+	if (pmu_data->pmu_config_extra) {
+		for (i = 0; pmu_data->pmu_config_extra[i].offset != PMU_TABLE_END; i++)
+			pmu_raw_writel(pmu_data->pmu_config_extra[i].val[mode],
+				       pmu_data->pmu_config_extra[i].offset);
+	}
 }
 
 /*
···
 	}, {
 		.compatible = "samsung,exynos4210-pmu",
 		.data = exynos_pmu_data_arm_ptr(exynos4210_pmu_data),
+	}, {
+		.compatible = "samsung,exynos4212-pmu",
+		.data = exynos_pmu_data_arm_ptr(exynos4212_pmu_data),
 	}, {
 		.compatible = "samsung,exynos4412-pmu",
 		.data = exynos_pmu_data_arm_ptr(exynos4412_pmu_data),
+2
drivers/soc/samsung/exynos-pmu.h
···
 
 struct exynos_pmu_data {
 	const struct exynos_pmu_conf *pmu_config;
+	const struct exynos_pmu_conf *pmu_config_extra;
 
 	void (*pmu_init)(void);
 	void (*powerdown_conf)(enum sys_powerdown);
···
 /* list of all exported SoC specific data */
 extern const struct exynos_pmu_data exynos3250_pmu_data;
 extern const struct exynos_pmu_data exynos4210_pmu_data;
+extern const struct exynos_pmu_data exynos4212_pmu_data;
 extern const struct exynos_pmu_data exynos4412_pmu_data;
 extern const struct exynos_pmu_data exynos5250_pmu_data;
 extern const struct exynos_pmu_data exynos5420_pmu_data;
+11 -2
drivers/soc/samsung/exynos4-pmu.c
···
 	{ PMU_TABLE_END,},
 };
 
-static const struct exynos_pmu_conf exynos4412_pmu_config[] = {
+static const struct exynos_pmu_conf exynos4x12_pmu_config[] = {
 	{ S5P_ARM_CORE0_LOWPWR, { 0x0, 0x0, 0x2 } },
 	{ S5P_DIS_IRQ_CORE0, { 0x0, 0x0, 0x0 } },
 	{ S5P_DIS_IRQ_CENTRAL0, { 0x0, 0x0, 0x0 } },
···
 	{ S5P_GPS_ALIVE_LOWPWR, { 0x7, 0x0, 0x0 } },
 	{ S5P_CMU_SYSCLK_ISP_LOWPWR, { 0x1, 0x0, 0x0 } },
 	{ S5P_CMU_SYSCLK_GPS_LOWPWR, { 0x1, 0x0, 0x0 } },
+	{ PMU_TABLE_END,},
+};
+
+static const struct exynos_pmu_conf exynos4412_pmu_config[] = {
 	{ S5P_ARM_CORE2_LOWPWR, { 0x0, 0x0, 0x2 } },
 	{ S5P_DIS_IRQ_CORE2, { 0x0, 0x0, 0x0 } },
 	{ S5P_DIS_IRQ_CENTRAL2, { 0x0, 0x0, 0x0 } },
···
 	.pmu_config = exynos4210_pmu_config,
 };
 
+const struct exynos_pmu_data exynos4212_pmu_data = {
+	.pmu_config = exynos4x12_pmu_config,
+};
+
 const struct exynos_pmu_data exynos4412_pmu_data = {
-	.pmu_config = exynos4412_pmu_config,
+	.pmu_config = exynos4x12_pmu_config,
+	.pmu_config_extra = exynos4412_pmu_config,
 };
+1 -1
drivers/soc/tegra/fuse/fuse-tegra30.c
···
 
 static const struct tegra_fuse_info tegra234_fuse_info = {
 	.read = tegra30_fuse_read,
-	.size = 0x98c,
+	.size = 0xf90,
 	.spare = 0x280,
 };
 
+2 -1
drivers/soc/tegra/fuse/tegra-apbmisc.c
···
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * Copyright (c) 2014, NVIDIA CORPORATION. All rights reserved.
+ * Copyright (c) 2014-2023, NVIDIA CORPORATION. All rights reserved.
  */
 
 #include <linux/export.h>
···
 	switch (tegra_get_chip_id()) {
 	case TEGRA194:
 	case TEGRA234:
+	case TEGRA264:
 		if (tegra_get_platform() == 0)
 			return true;
 
+7 -24
drivers/soc/tegra/pmc.c
···
  * @clk: pointer to pclk clock
  * @soc: pointer to SoC data structure
  * @tz_only: flag specifying if the PMC can only be accessed via TrustZone
- * @debugfs: pointer to debugfs entry
  * @rate: currently configured rate of pclk
  * @suspend_mode: lowest suspend mode available
  * @cpu_good_time: CPU power good time (in microseconds)
···
 	void __iomem *aotag;
 	void __iomem *scratch;
 	struct clk *clk;
-	struct dentry *debugfs;
 
 	const struct tegra_pmc_soc *soc;
 	bool tz_only;
···
 }
 
 DEFINE_SHOW_ATTRIBUTE(powergate);
-
-static int tegra_powergate_debugfs_init(void)
-{
-	pmc->debugfs = debugfs_create_file("powergate", S_IRUGO, NULL, NULL,
-					   &powergate_fops);
-	if (!pmc->debugfs)
-		return -ENOMEM;
-
-	return 0;
-}
 
 static int tegra_powergate_of_get_clks(struct tegra_powergate *pg,
 				       struct device_node *np)
···
 	 */
 	if (pmc->clk) {
 		pmc->clk_nb.notifier_call = tegra_pmc_clk_notify_cb;
-		err = clk_notifier_register(pmc->clk, &pmc->clk_nb);
+		err = devm_clk_notifier_register(&pdev->dev, pmc->clk,
+						 &pmc->clk_nb);
 		if (err) {
 			dev_err(&pdev->dev,
 				"failed to register clk notifier\n");
···
 
 	tegra_pmc_reset_sysfs_init(pmc);
 
-	if (IS_ENABLED(CONFIG_DEBUG_FS)) {
-		err = tegra_powergate_debugfs_init();
-		if (err < 0)
-			goto cleanup_sysfs;
-	}
-
 	err = tegra_pmc_pinctrl_init(pmc);
 	if (err)
-		goto cleanup_debugfs;
+		goto cleanup_sysfs;
 
 	err = tegra_pmc_regmap_init(pmc);
 	if (err < 0)
-		goto cleanup_debugfs;
+		goto cleanup_sysfs;
 
 	err = tegra_powergate_init(pmc, pdev->dev.of_node);
 	if (err < 0)
···
 	if (pmc->soc->set_wake_filters)
 		pmc->soc->set_wake_filters(pmc);
 
+	debugfs_create_file("powergate", 0444, NULL, NULL, &powergate_fops);
+
 	return 0;
 
 cleanup_powergates:
 	tegra_powergate_remove_all(pdev->dev.of_node);
-cleanup_debugfs:
-	debugfs_remove(pmc->debugfs);
 cleanup_sysfs:
 	device_remove_file(&pdev->dev, &dev_attr_reset_reason);
 	device_remove_file(&pdev->dev, &dev_attr_reset_level);
-	clk_notifier_unregister(pmc->clk, &pmc->clk_nb);
 
 	return err;
 }
···
 	TEGRA_WAKE_GPIO("power", 29, 1, TEGRA234_AON_GPIO(EE, 4)),
 	TEGRA_WAKE_GPIO("mgbe", 56, 0, TEGRA234_MAIN_GPIO(Y, 3)),
 	TEGRA_WAKE_IRQ("rtc", 73, 10),
+	TEGRA_WAKE_IRQ("sw-wake", SW_WAKE_ID, 179),
 };
 
 static const struct tegra_pmc_soc tegra234_pmc_soc = {
+1 -1
drivers/soc/ti/Kconfig
···
 
 config TI_PRUSS
 	tristate "TI PRU-ICSS Subsystem Platform drivers"
-	depends on SOC_AM33XX || SOC_AM43XX || SOC_DRA7XX || ARCH_KEYSTONE || ARCH_K3
+	depends on SOC_AM33XX || SOC_AM43XX || SOC_DRA7XX || ARCH_KEYSTONE || ARCH_K3 || COMPILE_TEST
 	select MFD_SYSCON
 	help
 	  TI PRU-ICSS Subsystem platform specific support.
+261 -2
drivers/soc/ti/pruss.c
···
  * Author(s):
  *	Suman Anna <s-anna@ti.com>
  *	Andrew F. Davis <afd@ti.com>
+ *	Tero Kristo <t-kristo@ti.com>
  */
 
 #include <linux/clk-provider.h>
···
 #include <linux/pm_runtime.h>
 #include <linux/pruss_driver.h>
 #include <linux/regmap.h>
+#include <linux/remoteproc.h>
 #include <linux/slab.h>
+#include "pruss.h"
 
 /**
  * struct pruss_private_data - PRUSS driver private data
···
 	bool has_core_mux_clock;
 };
 
+/**
+ * pruss_get() - get the pruss for a given PRU remoteproc
+ * @rproc: remoteproc handle of a PRU instance
+ *
+ * Finds the parent pruss device for a PRU given the @rproc handle of the
+ * PRU remote processor. This function increments the pruss device's refcount,
+ * so always use pruss_put() to decrement it back once pruss isn't needed
+ * anymore.
+ *
+ * This API doesn't check if @rproc is valid or not. It is expected the caller
+ * will have done a pru_rproc_get() on @rproc, before calling this API to make
+ * sure that @rproc is valid.
+ *
+ * Return: pruss handle on success, and an ERR_PTR on failure using one
+ * of the following error values
+ * -EINVAL if invalid parameter
+ * -ENODEV if PRU device or PRUSS device is not found
+ */
+struct pruss *pruss_get(struct rproc *rproc)
+{
+	struct pruss *pruss;
+	struct device *dev;
+	struct platform_device *ppdev;
+
+	if (IS_ERR_OR_NULL(rproc))
+		return ERR_PTR(-EINVAL);
+
+	dev = &rproc->dev;
+
+	/* make sure it is PRU rproc */
+	if (!dev->parent || !is_pru_rproc(dev->parent))
+		return ERR_PTR(-ENODEV);
+
+	ppdev = to_platform_device(dev->parent->parent);
+	pruss = platform_get_drvdata(ppdev);
+	if (!pruss)
+		return ERR_PTR(-ENODEV);
+
+	get_device(pruss->dev);
+
+	return pruss;
+}
+EXPORT_SYMBOL_GPL(pruss_get);
+
+/**
+ * pruss_put() - decrement pruss device's usecount
+ * @pruss: pruss handle
+ *
+ * Complementary function for pruss_get(). Needs to be called
+ * after the PRUSS is used, and only if the pruss_get() succeeds.
+ */
+void pruss_put(struct pruss *pruss)
+{
+	if (IS_ERR_OR_NULL(pruss))
+		return;
+
+	put_device(pruss->dev);
+}
+EXPORT_SYMBOL_GPL(pruss_put);
+
+/**
+ * pruss_request_mem_region() - request a memory resource
+ * @pruss: the pruss instance
+ * @mem_id: the memory resource id
+ * @region: pointer to memory region structure to be filled in
+ *
+ * This function allows a client driver to request a memory resource,
+ * and if successful, will let the client driver own the particular
+ * memory region until released using the pruss_release_mem_region()
+ * API.
+ *
+ * Return: 0 if requested memory region is available (in such case pointer to
+ * memory region is returned via @region), an error otherwise
+ */
+int pruss_request_mem_region(struct pruss *pruss, enum pruss_mem mem_id,
+			     struct pruss_mem_region *region)
+{
+	if (!pruss || !region || mem_id >= PRUSS_MEM_MAX)
+		return -EINVAL;
+
+	mutex_lock(&pruss->lock);
+
+	if (pruss->mem_in_use[mem_id]) {
+		mutex_unlock(&pruss->lock);
+		return -EBUSY;
+	}
+
+	*region = pruss->mem_regions[mem_id];
+	pruss->mem_in_use[mem_id] = region;
+
+	mutex_unlock(&pruss->lock);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(pruss_request_mem_region);
+
+/**
+ * pruss_release_mem_region() - release a memory resource
+ * @pruss: the pruss instance
+ * @region: the memory region to release
+ *
+ * This function is the complementary function to
+ * pruss_request_mem_region(), and allows the client drivers to
+ * release back a memory resource.
+ *
+ * Return: 0 on success, an error code otherwise
+ */
+int pruss_release_mem_region(struct pruss *pruss,
+			     struct pruss_mem_region *region)
+{
+	int id;
+
+	if (!pruss || !region)
+		return -EINVAL;
+
+	mutex_lock(&pruss->lock);
+
+	/* find out the memory region being released */
+	for (id = 0; id < PRUSS_MEM_MAX; id++) {
+		if (pruss->mem_in_use[id] == region)
+			break;
+	}
+
+	if (id == PRUSS_MEM_MAX) {
+		mutex_unlock(&pruss->lock);
+		return -EINVAL;
+	}
+
+	pruss->mem_in_use[id] = NULL;
+
+	mutex_unlock(&pruss->lock);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(pruss_release_mem_region);
+
+/**
+ * pruss_cfg_get_gpmux() - get the current GPMUX value for a PRU device
+ * @pruss: pruss instance
+ * @pru_id: PRU identifier (0-1)
+ * @mux: pointer to store the current mux value into
+ *
+ * Return: 0 on success, or an error code otherwise
+ */
+int pruss_cfg_get_gpmux(struct pruss *pruss, enum pruss_pru_id pru_id, u8 *mux)
+{
+	int ret;
+	u32 val;
+
+	if (pru_id >= PRUSS_NUM_PRUS || !mux)
+		return -EINVAL;
+
+	ret = pruss_cfg_read(pruss, PRUSS_CFG_GPCFG(pru_id), &val);
+	if (!ret)
+		*mux = (u8)((val & PRUSS_GPCFG_PRU_MUX_SEL_MASK) >>
+			    PRUSS_GPCFG_PRU_MUX_SEL_SHIFT);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(pruss_cfg_get_gpmux);
+
+/**
+ * pruss_cfg_set_gpmux() - set the GPMUX value for a PRU device
+ * @pruss: pruss instance
+ * @pru_id: PRU identifier (0-1)
+ * @mux: new mux value for PRU
+ *
+ * Return: 0 on success, or an error code otherwise
+ */
+int pruss_cfg_set_gpmux(struct pruss *pruss, enum pruss_pru_id pru_id, u8 mux)
+{
+	if (mux >= PRUSS_GP_MUX_SEL_MAX ||
+	    pru_id >= PRUSS_NUM_PRUS)
+		return -EINVAL;
+
+	return pruss_cfg_update(pruss, PRUSS_CFG_GPCFG(pru_id),
+				PRUSS_GPCFG_PRU_MUX_SEL_MASK,
+				(u32)mux << PRUSS_GPCFG_PRU_MUX_SEL_SHIFT);
+}
+EXPORT_SYMBOL_GPL(pruss_cfg_set_gpmux);
+
+/**
+ * pruss_cfg_gpimode() - set the GPI mode of the PRU
+ * @pruss: the pruss instance handle
+ * @pru_id: id of the PRU core within the PRUSS
+ * @mode: GPI mode to set
+ *
+ * Sets the GPI mode for a given PRU by programming the
+ * corresponding PRUSS_CFG_GPCFGx register
+ *
+ * Return: 0 on success, or an error code otherwise
+ */
+int pruss_cfg_gpimode(struct pruss *pruss, enum pruss_pru_id pru_id,
+		      enum pruss_gpi_mode mode)
+{
+	if (pru_id >= PRUSS_NUM_PRUS || mode >= PRUSS_GPI_MODE_MAX)
+		return -EINVAL;
+
+	return pruss_cfg_update(pruss, PRUSS_CFG_GPCFG(pru_id),
+				PRUSS_GPCFG_PRU_GPI_MODE_MASK,
+				mode << PRUSS_GPCFG_PRU_GPI_MODE_SHIFT);
+}
+EXPORT_SYMBOL_GPL(pruss_cfg_gpimode);
+
+/**
+ * pruss_cfg_miirt_enable() - Enable/disable MII RT Events
+ * @pruss: the pruss instance
+ * @enable: enable/disable
+ *
+ * Enable/disable the MII RT Events for the PRUSS.
+ *
+ * Return: 0 on success, or an error code otherwise
+ */
+int pruss_cfg_miirt_enable(struct pruss *pruss, bool enable)
+{
+	u32 set = enable ? PRUSS_MII_RT_EVENT_EN : 0;
+
+	return pruss_cfg_update(pruss, PRUSS_CFG_MII_RT,
+				PRUSS_MII_RT_EVENT_EN, set);
+}
+EXPORT_SYMBOL_GPL(pruss_cfg_miirt_enable);
+
+/**
+ * pruss_cfg_xfr_enable() - Enable/disable XIN XOUT shift functionality
+ * @pruss: the pruss instance
+ * @pru_type: PRU core type identifier
+ * @enable: enable/disable
+ *
+ * Return: 0 on success, or an error code otherwise
+ */
+int pruss_cfg_xfr_enable(struct pruss *pruss, enum pru_type pru_type,
+			 bool enable)
+{
+	u32 mask, set;
+
+	switch (pru_type) {
+	case PRU_TYPE_PRU:
+		mask = PRUSS_SPP_XFER_SHIFT_EN;
+		break;
+	case PRU_TYPE_RTU:
+		mask = PRUSS_SPP_RTU_XFR_SHIFT_EN;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	set = enable ? mask : 0;
+
+	return pruss_cfg_update(pruss, PRUSS_CFG_SPP, mask, set);
+}
+EXPORT_SYMBOL_GPL(pruss_cfg_xfr_enable);
+
 static void pruss_of_free_clk_provider(void *data)
 {
 	struct device_node *clk_mux_np = data;
 
 	of_clk_del_provider(clk_mux_np);
 	of_node_put(clk_mux_np);
+}
+
+static void pruss_clk_unregister_mux(void *data)
+{
+	clk_unregister_mux(data);
 }
 
 static int pruss_clk_mux_setup(struct pruss *pruss, struct clk *clk_mux,
···
 		goto put_clk_mux_np;
 	}
 
-	ret = devm_add_action_or_reset(dev, (void(*)(void *))clk_unregister_mux,
-				       clk_mux);
+	ret = devm_add_action_or_reset(dev, pruss_clk_unregister_mux, clk_mux);
 	if (ret) {
 		dev_err(dev, "failed to add clkmux unregister action %d", ret);
 		goto put_clk_mux_np;
···
 		return -ENOMEM;
 
 	pruss->dev = dev;
+	mutex_init(&pruss->lock);
 
 	child = of_get_child_by_name(np, "memories");
 	if (!child) {
+88
drivers/soc/ti/pruss.h
···
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * PRU-ICSS Subsystem user interfaces
+ *
+ * Copyright (C) 2015-2023 Texas Instruments Incorporated - http://www.ti.com
+ * MD Danish Anwar <danishanwar@ti.com>
+ */
+
+#ifndef _SOC_TI_PRUSS_H_
+#define _SOC_TI_PRUSS_H_
+
+#include <linux/bits.h>
+#include <linux/regmap.h>
+
+/*
+ * PRU_ICSS_CFG registers
+ * SYSCFG, ISRP, ISP, IESP, IECP, SCRP applicable on AMxxxx devices only
+ */
+#define PRUSS_CFG_REVID		0x00
+#define PRUSS_CFG_SYSCFG	0x04
+#define PRUSS_CFG_GPCFG(x)	(0x08 + (x) * 4)
+#define PRUSS_CFG_CGR		0x10
+#define PRUSS_CFG_ISRP		0x14
+#define PRUSS_CFG_ISP		0x18
+#define PRUSS_CFG_IESP		0x1C
+#define PRUSS_CFG_IECP		0x20
+#define PRUSS_CFG_SCRP		0x24
+#define PRUSS_CFG_PMAO		0x28
+#define PRUSS_CFG_MII_RT	0x2C
+#define PRUSS_CFG_IEPCLK	0x30
+#define PRUSS_CFG_SPP		0x34
+#define PRUSS_CFG_PIN_MX	0x40
+
+/* PRUSS_GPCFG register bits */
+#define PRUSS_GPCFG_PRU_GPI_MODE_MASK	GENMASK(1, 0)
+#define PRUSS_GPCFG_PRU_GPI_MODE_SHIFT	0
+
+#define PRUSS_GPCFG_PRU_MUX_SEL_SHIFT	26
+#define PRUSS_GPCFG_PRU_MUX_SEL_MASK	GENMASK(29, 26)
+
+/* PRUSS_MII_RT register bits */
+#define PRUSS_MII_RT_EVENT_EN		BIT(0)
+
+/* PRUSS_SPP register bits */
+#define PRUSS_SPP_XFER_SHIFT_EN		BIT(1)
+#define PRUSS_SPP_PRU1_PAD_HP_EN	BIT(0)
+#define PRUSS_SPP_RTU_XFR_SHIFT_EN	BIT(3)
+
+/**
+ * pruss_cfg_read() - read a PRUSS CFG sub-module register
+ * @pruss: the pruss instance handle
+ * @reg: register offset within the CFG sub-module
+ * @val: pointer to return the value in
+ *
+ * Reads a given register within the PRUSS CFG sub-module and
+ * returns it through the passed-in @val pointer
+ *
+ * Return: 0 on success, or an error code otherwise
+ */
+static inline int pruss_cfg_read(struct pruss *pruss, unsigned int reg,
+				 unsigned int *val)
+{
+	if (IS_ERR_OR_NULL(pruss))
+		return -EINVAL;
+
+	return regmap_read(pruss->cfg_regmap, reg, val);
+}
+
+/**
+ * pruss_cfg_update() - configure a PRUSS CFG sub-module register
+ * @pruss: the pruss instance handle
+ * @reg: register offset within the CFG sub-module
+ * @mask: bit mask to use for programming the @val
+ * @val: value to write
+ *
+ * Programs a given register within the PRUSS CFG sub-module
+ *
+ * Return: 0 on success, or an error code otherwise
+ */
+static inline int pruss_cfg_update(struct pruss *pruss, unsigned int reg,
+				   unsigned int mask, unsigned int val)
+{
+	if (IS_ERR_OR_NULL(pruss))
+		return -EINVAL;
+
+	return regmap_update_bits(pruss->cfg_regmap, reg, mask, val);
+}
+
+#endif /* _SOC_TI_PRUSS_H_ */
+1 -3
drivers/soc/ti/smartreflex.c
···
 {
 	struct omap_sr *sr_info;
 	struct omap_sr_data *pdata = pdev->dev.platform_data;
-	struct resource *mem;
 	struct dentry *nvalue_dir;
 	int i, ret = 0;
 
···
 		return -EINVAL;
 	}
 
-	mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	sr_info->base = devm_ioremap_resource(&pdev->dev, mem);
+	sr_info->base = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(sr_info->base))
 		return PTR_ERR(sr_info->base);
 
+1 -1
drivers/soc/ti/wkup_m3_ipc.c
···
 {
 	m3_ipc->dbg_path = debugfs_create_dir("wkup_m3_ipc", NULL);
 
-	if (!m3_ipc->dbg_path)
+	if (IS_ERR(m3_ipc->dbg_path))
 		return -EINVAL;
 
 	(void)debugfs_create_file("enable_late_halt", 0644,
+4 -2
drivers/soc/xilinx/xlnx_event_manager.c
···
 	struct registered_event_data *eve_data;
 	struct agent_cb *cb_pos;
 	struct agent_cb *cb_next;
+	struct hlist_node *tmp;
 
 	is_need_to_unregister = false;
 
 	/* Check for existing entry in hash table for given cb_type */
-	hash_for_each_possible(reg_driver_map, eve_data, hentry, PM_INIT_SUSPEND_CB) {
+	hash_for_each_possible_safe(reg_driver_map, eve_data, tmp, hentry, PM_INIT_SUSPEND_CB) {
 		if (eve_data->cb_type == PM_INIT_SUSPEND_CB) {
 			/* Delete the list of callback */
 			list_for_each_entry_safe(cb_pos, cb_next, &eve_data->cb_list_head, list) {
···
 	u64 key = ((u64)node_id << 32U) | (u64)event;
 	struct agent_cb *cb_pos;
 	struct agent_cb *cb_next;
+	struct hlist_node *tmp;
 
 	is_need_to_unregister = false;
 
 	/* Check for existing entry in hash table for given key id */
-	hash_for_each_possible(reg_driver_map, eve_data, hentry, key) {
+	hash_for_each_possible_safe(reg_driver_map, eve_data, tmp, hentry, key) {
 		if (eve_data->key == key) {
 			/* Delete the list of callback */
 			list_for_each_entry_safe(cb_pos, cb_next, &eve_data->cb_list_head, list) {
+2 -2
drivers/soc/xilinx/zynqmp_power.c
···
 	} else if (ret != -EACCES && ret != -ENODEV) {
 		dev_err(&pdev->dev, "Failed to Register with Xilinx Event manager %d\n", ret);
 		return ret;
-	} else if (of_find_property(pdev->dev.of_node, "mboxes", NULL)) {
+	} else if (of_property_present(pdev->dev.of_node, "mboxes")) {
 		zynqmp_pm_init_suspend_work =
 			devm_kzalloc(&pdev->dev,
 				     sizeof(struct zynqmp_pm_work_struct),
···
 			dev_err(&pdev->dev, "Failed to request rx channel\n");
 			return PTR_ERR(rx_chan);
 		}
-	} else if (of_find_property(pdev->dev.of_node, "interrupts", NULL)) {
+	} else if (of_property_present(pdev->dev.of_node, "interrupts")) {
 		irq = platform_get_irq(pdev, 0);
 		if (irq <= 0)
 			return -ENXIO;
+1 -2
drivers/tee/optee/smc_abi.c
···
 	 * This uses the GFP_DMA flag to ensure we are allocated memory in the
 	 * 32-bit space since TF-A cannot map memory beyond the 32-bit boundary.
 	 */
-	data_buf = kmalloc(fw->size, GFP_KERNEL | GFP_DMA);
+	data_buf = kmemdup(fw->data, fw->size, GFP_KERNEL | GFP_DMA);
 	if (!data_buf) {
 		rc = -ENOMEM;
 		goto fw_err;
 	}
-	memcpy(data_buf, fw->data, fw->size);
 	data_pa = virt_to_phys(data_buf);
 	reg_pair_from_64(&data_pa_high, &data_pa_low, data_pa);
 	reg_pair_from_64(&data_size_high, &data_size_low, data_size);
+1 -2
drivers/vfio/fsl-mc/vfio_fsl_mc.c
···
 	mutex_destroy(&vdev->igate);
 }
 
-static int vfio_fsl_mc_remove(struct fsl_mc_device *mc_dev)
+static void vfio_fsl_mc_remove(struct fsl_mc_device *mc_dev)
 {
 	struct device *dev = &mc_dev->dev;
 	struct vfio_fsl_mc_device *vdev = dev_get_drvdata(dev);
···
 	vfio_unregister_group_dev(&vdev->vdev);
 	dprc_remove_devices(mc_dev, NULL, 0);
 	vfio_put_device(&vdev->vdev);
-	return 0;
 }
 
 static const struct vfio_device_ops vfio_fsl_mc_ops = {
+11
include/dt-bindings/arm/qcom,ids.h
···
 #define QCOM_ID_SM8350		439
 #define QCOM_ID_QCM2290		441
 #define QCOM_ID_SM6115		444
+#define QCOM_ID_IPQ5010		446
+#define QCOM_ID_IPQ5018		447
+#define QCOM_ID_IPQ5028		448
 #define QCOM_ID_SC8280XP	449
 #define QCOM_ID_IPQ6005		453
 #define QCOM_ID_QRB5165		455
···
 #define QCOM_ID_SM8450_3	482
 #define QCOM_ID_SC7280		487
 #define QCOM_ID_SC7180P		495
+#define QCOM_ID_IPQ5000		503
+#define QCOM_ID_IPQ0509		504
+#define QCOM_ID_IPQ0518		505
 #define QCOM_ID_SM6375		507
 #define QCOM_ID_IPQ9514		510
 #define QCOM_ID_IPQ9550		511
···
 #define QCOM_ID_IPQ9570		513
 #define QCOM_ID_IPQ9574		514
 #define QCOM_ID_SM8550		519
+#define QCOM_ID_IPQ5016		520
 #define QCOM_ID_IPQ9510		521
 #define QCOM_ID_QRB4210		523
 #define QCOM_ID_QRB2210		524
···
 #define QCOM_ID_QRU1000		539
 #define QCOM_ID_QDU1000		545
 #define QCOM_ID_QDU1010		587
+#define QCOM_ID_IPQ5019		569
 #define QCOM_ID_QRU1032		588
 #define QCOM_ID_QRU1052		589
 #define QCOM_ID_QRU1062		590
 #define QCOM_ID_IPQ5332		592
 #define QCOM_ID_IPQ5322		593
+#define QCOM_ID_IPQ5312		594
+#define QCOM_ID_IPQ5302		595
+#define QCOM_ID_IPQ5300		624
 
 /*
  * The board type and revision information, used by Qualcomm bootloaders and
+2
include/linux/arm-cci.h
···
 }
 #endif
 
+void cci_enable_port_for_self(void);
+
 #define cci_disable_port_by_device(dev) \
 	__cci_control_port_by_device(dev, false)
 #define cci_enable_port_by_device(dev) \
+1 -1
include/linux/fsl/mc.h
···
 	struct device_driver driver;
 	const struct fsl_mc_device_id *match_id_table;
 	int (*probe)(struct fsl_mc_device *dev);
-	int (*remove)(struct fsl_mc_device *dev);
+	void (*remove)(struct fsl_mc_device *dev);
 	void (*shutdown)(struct fsl_mc_device *dev);
 	int (*suspend)(struct fsl_mc_device *dev, pm_message_t state);
 	int (*resume)(struct fsl_mc_device *dev);
+123
include/linux/pruss_driver.h
···
 #ifndef _PRUSS_DRIVER_H_
 #define _PRUSS_DRIVER_H_
 
+#include <linux/mutex.h>
+#include <linux/remoteproc/pruss.h>
 #include <linux/types.h>
+#include <linux/err.h>
+
+/*
+ * enum pruss_gp_mux_sel - PRUSS GPI/O Mux modes for the
+ * PRUSS_GPCFG0/1 registers
+ *
+ * NOTE: The below defines are the most common values, but there
+ * are some exceptions like on 66AK2G, where the RESERVED and MII2
+ * values are interchanged. Also, this bit-field does not exist on
+ * AM335x SoCs
+ */
+enum pruss_gp_mux_sel {
+	PRUSS_GP_MUX_SEL_GP,
+	PRUSS_GP_MUX_SEL_ENDAT,
+	PRUSS_GP_MUX_SEL_RESERVED,
+	PRUSS_GP_MUX_SEL_SD,
+	PRUSS_GP_MUX_SEL_MII2,
+	PRUSS_GP_MUX_SEL_MAX,
+};
+
+/*
+ * enum pruss_gpi_mode - PRUSS GPI configuration modes, used
+ * to program the PRUSS_GPCFG0/1 registers
+ */
+enum pruss_gpi_mode {
+	PRUSS_GPI_MODE_DIRECT,
+	PRUSS_GPI_MODE_PARALLEL,
+	PRUSS_GPI_MODE_28BIT_SHIFT,
+	PRUSS_GPI_MODE_MII,
+	PRUSS_GPI_MODE_MAX,
+};
+
+/**
+ * enum pru_type - PRU core type identifier
+ *
+ * @PRU_TYPE_PRU: Programmable Real-time Unit
+ * @PRU_TYPE_RTU: Auxiliary Programmable Real-Time Unit
+ * @PRU_TYPE_TX_PRU: Transmit Programmable Real-Time Unit
+ * @PRU_TYPE_MAX: just keep this one at the end
+ */
+enum pru_type {
+	PRU_TYPE_PRU,
+	PRU_TYPE_RTU,
+	PRU_TYPE_TX_PRU,
+	PRU_TYPE_MAX,
+};
 
 /*
  * enum pruss_mem - PRUSS memory range identifiers
···
  * @cfg_base: base iomap for CFG region
  * @cfg_regmap: regmap for config region
  * @mem_regions: data for each of the PRUSS memory regions
+ * @mem_in_use: to indicate if memory resource is in use
+ * @lock: mutex to serialize access to resources
  * @core_clk_mux: clk handle for PRUSS CORE_CLK_MUX
  * @iep_clk_mux: clk handle for PRUSS IEP_CLK_MUX
  */
···
 	void __iomem *cfg_base;
 	struct regmap *cfg_regmap;
 	struct pruss_mem_region mem_regions[PRUSS_MEM_MAX];
+	struct pruss_mem_region *mem_in_use[PRUSS_MEM_MAX];
+	struct mutex lock; /* PRU resource lock */
 	struct clk *core_clk_mux;
 	struct clk *iep_clk_mux;
 };
+
+#if IS_ENABLED(CONFIG_TI_PRUSS)
+
+struct pruss *pruss_get(struct rproc *rproc);
+void pruss_put(struct pruss *pruss);
+int pruss_request_mem_region(struct pruss *pruss, enum pruss_mem mem_id,
+			     struct pruss_mem_region *region);
+int pruss_release_mem_region(struct pruss *pruss,
+			     struct pruss_mem_region *region);
+int pruss_cfg_get_gpmux(struct pruss *pruss, enum pruss_pru_id pru_id, u8 *mux);
+int pruss_cfg_set_gpmux(struct pruss *pruss, enum pruss_pru_id pru_id, u8 mux);
+int pruss_cfg_gpimode(struct pruss *pruss, enum pruss_pru_id pru_id,
+		      enum pruss_gpi_mode mode);
+int pruss_cfg_miirt_enable(struct pruss *pruss, bool enable);
+int pruss_cfg_xfr_enable(struct pruss *pruss, enum pru_type pru_type,
+			 bool enable);
+
+#else
+
+static inline struct pruss *pruss_get(struct rproc *rproc)
+{
+	return ERR_PTR(-EOPNOTSUPP);
+}
+
+static inline void pruss_put(struct pruss *pruss) { }
+
+static inline int pruss_request_mem_region(struct pruss *pruss,
+					   enum pruss_mem mem_id,
+					   struct pruss_mem_region *region)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int pruss_release_mem_region(struct pruss *pruss,
+					   struct pruss_mem_region *region)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int pruss_cfg_get_gpmux(struct pruss *pruss,
+				      enum pruss_pru_id pru_id, u8 *mux)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int pruss_cfg_set_gpmux(struct pruss *pruss,
+				      enum pruss_pru_id pru_id, u8 mux)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int pruss_cfg_gpimode(struct pruss *pruss,
+				    enum pruss_pru_id pru_id,
+				    enum pruss_gpi_mode mode)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int pruss_cfg_miirt_enable(struct pruss *pruss, bool enable)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int pruss_cfg_xfr_enable(struct pruss *pruss,
+				       enum pru_type pru_type,
+				       bool enable)
+{
+	return -EOPNOTSUPP;
+}
+
+#endif /* CONFIG_TI_PRUSS */
 
 #endif /* _PRUSS_DRIVER_H_ */
+18
include/linux/scmi_protocol.h
···
  * @num_domains_get: get the count of powercap domains provided by SCMI.
  * @info_get: get the information for the specified domain.
  * @cap_get: get the current CAP value for the specified domain.
+ *	On SCMI platforms supporting powercap zone disabling, this could
+ *	report a zero value for a zone where powercapping is disabled.
  * @cap_set: set the CAP value for the specified domain to the provided value;
  *	if the domain supports setting the CAP with an asynchronous command
  *	this request will finally trigger an asynchronous transfer, but, if
  *	@ignore_dresp here is set to true, this call will anyway return
  *	immediately without waiting for the related delayed response.
+ *	Note that the powercap requested value must NOT be zero, even if
+ *	the platform supports disabling a powercap by setting its cap to
+ *	zero (since SCMI v3.2): there are dedicated operations that should
+ *	be used for that. (@cap_enable_set/get)
+ * @cap_enable_set: enable or disable the powercapping on the specified domain,
+ *	if supported by the SCMI platform implementation.
+ *	Note that, by the SCMI specification, the platform can
+ *	silently ignore our disable request and decide to enforce
+ *	anyway some other powercap value requested by another agent
+ *	on the system: for this reason @cap_get and @cap_enable_get
+ *	will always report the final platform view of the powercaps.
+ * @cap_enable_get: get the current CAP enable status for the specified domain.
  * @pai_get: get the current PAI value for the specified domain.
  * @pai_set: set the PAI value for the specified domain to the provided value.
  * @measurements_get: retrieve the current average power measurements for the
···
 			u32 *power_cap);
 	int (*cap_set)(const struct scmi_protocol_handle *ph, u32 domain_id,
 		       u32 power_cap, bool ignore_dresp);
+	int (*cap_enable_set)(const struct scmi_protocol_handle *ph,
+			      u32 domain_id, bool enable);
+	int (*cap_enable_get)(const struct scmi_protocol_handle *ph,
+			      u32 domain_id, bool *enable);
 	int (*pai_get)(const struct scmi_protocol_handle *ph, u32 domain_id,
 		       u32 *pai);
 	int (*pai_set)(const struct scmi_protocol_handle *ph, u32 domain_id,
+1 -2
include/linux/soc/mediatek/mtk-mmsys.h
···
 	DDP_COMPONENT_CCORR,
 	DDP_COMPONENT_COLOR0,
 	DDP_COMPONENT_COLOR1,
-	DDP_COMPONENT_DITHER,
-	DDP_COMPONENT_DITHER0 = DDP_COMPONENT_DITHER,
+	DDP_COMPONENT_DITHER0,
 	DDP_COMPONENT_DITHER1,
 	DDP_COMPONENT_DP_INTF0,
 	DDP_COMPONENT_DP_INTF1,
+2
include/linux/soc/qcom/smem.h
···
11 11	
12 12	phys_addr_t qcom_smem_virt_to_phys(void *p);
13 13	
14 +	int qcom_smem_get_soc_id(u32 *id);
15 +	
14 16	#endif
+77
include/linux/soc/qcom/socinfo.h
···
1 +	/* SPDX-License-Identifier: GPL-2.0 */
2 +	
3 +	#ifndef __QCOM_SOCINFO_H__
4 +	#define __QCOM_SOCINFO_H__
5 +	
6 +	/*
7 +	 * SMEM item id, used to acquire handles to respective
8 +	 * SMEM region.
9 +	 */
10 +	#define SMEM_HW_SW_BUILD_ID 137
11 +	
12 +	#define SMEM_SOCINFO_BUILD_ID_LENGTH 32
13 +	#define SMEM_SOCINFO_CHIP_ID_LENGTH 32
14 +	
15 +	/* Socinfo SMEM item structure */
16 +	struct socinfo {
17 +		__le32 fmt;
18 +		__le32 id;
19 +		__le32 ver;
20 +		char build_id[SMEM_SOCINFO_BUILD_ID_LENGTH];
21 +		/* Version 2 */
22 +		__le32 raw_id;
23 +		__le32 raw_ver;
24 +		/* Version 3 */
25 +		__le32 hw_plat;
26 +		/* Version 4 */
27 +		__le32 plat_ver;
28 +		/* Version 5 */
29 +		__le32 accessory_chip;
30 +		/* Version 6 */
31 +		__le32 hw_plat_subtype;
32 +		/* Version 7 */
33 +		__le32 pmic_model;
34 +		__le32 pmic_die_rev;
35 +		/* Version 8 */
36 +		__le32 pmic_model_1;
37 +		__le32 pmic_die_rev_1;
38 +		__le32 pmic_model_2;
39 +		__le32 pmic_die_rev_2;
40 +		/* Version 9 */
41 +		__le32 foundry_id;
42 +		/* Version 10 */
43 +		__le32 serial_num;
44 +		/* Version 11 */
45 +		__le32 num_pmics;
46 +		__le32 pmic_array_offset;
47 +		/* Version 12 */
48 +		__le32 chip_family;
49 +		__le32 raw_device_family;
50 +		__le32 raw_device_num;
51 +		/* Version 13 */
52 +		__le32 nproduct_id;
53 +		char chip_id[SMEM_SOCINFO_CHIP_ID_LENGTH];
54 +		/* Version 14 */
55 +		__le32 num_clusters;
56 +		__le32 ncluster_array_offset;
57 +		__le32 num_subset_parts;
58 +		__le32 nsubset_parts_array_offset;
59 +		/* Version 15 */
60 +		__le32 nmodem_supported;
61 +		/* Version 16 */
62 +		__le32 feature_code;
63 +		__le32 pcode;
64 +		__le32 npartnamemap_offset;
65 +		__le32 nnum_partname_mapping;
66 +		/* Version 17 */
67 +		__le32 oem_variant;
68 +		/* Version 18 */
69 +		__le32 num_kvps;
70 +		__le32 kvps_offset;
71 +		/* Version 19 */
72 +		__le32 num_func_clusters;
73 +		__le32 boot_cluster;
74 +		__le32 boot_core;
75 +	};
76 +	
77 +	#endif
+65
include/linux/tegra-icc.h
···
1 +	/* SPDX-License-Identifier: GPL-2.0-only */
2 +	/*
3 +	 * Copyright (C) 2022-2023 NVIDIA CORPORATION. All rights reserved.
4 +	 */
5 +	
6 +	#ifndef LINUX_TEGRA_ICC_H
7 +	#define LINUX_TEGRA_ICC_H
8 +	
9 +	enum tegra_icc_client_type {
10 +		TEGRA_ICC_NONE,
11 +		TEGRA_ICC_NISO,
12 +		TEGRA_ICC_ISO_DISPLAY,
13 +		TEGRA_ICC_ISO_VI,
14 +		TEGRA_ICC_ISO_AUDIO,
15 +		TEGRA_ICC_ISO_VIFAL,
16 +	};
17 +	
18 +	/* ICC ID's for MC client's used in BPMP */
19 +	#define TEGRA_ICC_BPMP_DEBUG 1
20 +	#define TEGRA_ICC_BPMP_CPU_CLUSTER0 2
21 +	#define TEGRA_ICC_BPMP_CPU_CLUSTER1 3
22 +	#define TEGRA_ICC_BPMP_CPU_CLUSTER2 4
23 +	#define TEGRA_ICC_BPMP_GPU 5
24 +	#define TEGRA_ICC_BPMP_CACTMON 6
25 +	#define TEGRA_ICC_BPMP_DISPLAY 7
26 +	#define TEGRA_ICC_BPMP_VI 8
27 +	#define TEGRA_ICC_BPMP_EQOS 9
28 +	#define TEGRA_ICC_BPMP_PCIE_0 10
29 +	#define TEGRA_ICC_BPMP_PCIE_1 11
30 +	#define TEGRA_ICC_BPMP_PCIE_2 12
31 +	#define TEGRA_ICC_BPMP_PCIE_3 13
32 +	#define TEGRA_ICC_BPMP_PCIE_4 14
33 +	#define TEGRA_ICC_BPMP_PCIE_5 15
34 +	#define TEGRA_ICC_BPMP_PCIE_6 16
35 +	#define TEGRA_ICC_BPMP_PCIE_7 17
36 +	#define TEGRA_ICC_BPMP_PCIE_8 18
37 +	#define TEGRA_ICC_BPMP_PCIE_9 19
38 +	#define TEGRA_ICC_BPMP_PCIE_10 20
39 +	#define TEGRA_ICC_BPMP_DLA_0 21
40 +	#define TEGRA_ICC_BPMP_DLA_1 22
41 +	#define TEGRA_ICC_BPMP_SDMMC_1 23
42 +	#define TEGRA_ICC_BPMP_SDMMC_2 24
43 +	#define TEGRA_ICC_BPMP_SDMMC_3 25
44 +	#define TEGRA_ICC_BPMP_SDMMC_4 26
45 +	#define TEGRA_ICC_BPMP_NVDEC 27
46 +	#define TEGRA_ICC_BPMP_NVENC 28
47 +	#define TEGRA_ICC_BPMP_NVJPG_0 29
48 +	#define TEGRA_ICC_BPMP_NVJPG_1 30
49 +	#define TEGRA_ICC_BPMP_OFAA 31
50 +	#define TEGRA_ICC_BPMP_XUSB_HOST 32
51 +	#define TEGRA_ICC_BPMP_XUSB_DEV 33
52 +	#define TEGRA_ICC_BPMP_TSEC 34
53 +	#define TEGRA_ICC_BPMP_VIC 35
54 +	#define TEGRA_ICC_BPMP_APE 36
55 +	#define TEGRA_ICC_BPMP_APEDMA 37
56 +	#define TEGRA_ICC_BPMP_SE 38
57 +	#define TEGRA_ICC_BPMP_ISP 39
58 +	#define TEGRA_ICC_BPMP_HDA 40
59 +	#define TEGRA_ICC_BPMP_VIFAL 41
60 +	#define TEGRA_ICC_BPMP_VI2FAL 42
61 +	#define TEGRA_ICC_BPMP_VI2 43
62 +	#define TEGRA_ICC_BPMP_RCE 44
63 +	#define TEGRA_ICC_BPMP_PVA 45
64 +	
65 +	#endif /* LINUX_TEGRA_ICC_H */
+2 -1
include/soc/tegra/fuse.h
···
1 1	/* SPDX-License-Identifier: GPL-2.0-only */
2 2	/*
3 -	 * Copyright (c) 2012, NVIDIA CORPORATION. All rights reserved.
3 +	 * Copyright (c) 2012-2023, NVIDIA CORPORATION. All rights reserved.
4 4	 */
5 5	
6 6	#ifndef __SOC_TEGRA_FUSE_H__
···
17 17	#define TEGRA186 0x18
18 18	#define TEGRA194 0x19
19 19	#define TEGRA234 0x23
20 +	#define TEGRA264 0x26
20 21	
21 22	#define TEGRA_FUSE_SKU_CALIB_0 0xf0
22 23	#define TEGRA30_FUSE_SATA_CALIB 0x124
+8
include/soc/tegra/mc.h
···
13 13	#include <linux/irq.h>
14 14	#include <linux/reset-controller.h>
15 15	#include <linux/types.h>
16 +	#include <linux/tegra-icc.h>
16 17	
17 18	struct clk;
18 19	struct device;
···
27 26	
28 27	struct tegra_mc_client {
29 28		unsigned int id;
29 +		unsigned int bpmp_id;
30 +		enum tegra_icc_client_type type;
30 31		const char *name;
31 32		/*
32 33		 * For Tegra210 and earlier, this is the SWGROUP ID used for IOVA translations in the
···
169 166		int (*set)(struct icc_node *src, struct icc_node *dst);
170 167		int (*aggregate)(struct icc_node *node, u32 tag, u32 avg_bw,
171 168			u32 peak_bw, u32 *agg_avg, u32 *agg_peak);
169 +		struct icc_node* (*xlate)(struct of_phandle_args *spec, void *data);
172 170		struct icc_node_data *(*xlate_extended)(struct of_phandle_args *spec,
173 171			void *data);
172 +		int (*get_bw)(struct icc_node *node, u32 *avg, u32 *peak);
174 173	};
175 174	
176 175	struct tegra_mc_ops {
···
219 214	};
220 215	
221 216	struct tegra_mc {
217 +		struct tegra_bpmp *bpmp;
222 218		struct device *dev;
223 219		struct tegra_smmu *smmu;
224 220		struct gart_device *gart;
···
234 228	
235 229		struct tegra_mc_timing *timings;
236 230		unsigned int num_timings;
231 +		unsigned int num_channels;
237 232	
233 +		bool bwmgr_mrq_supported;
238 234		struct reset_controller_dev reset;
239 235	
240 236		struct icc_provider provider;