Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'arm-drivers-6.0' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc

Pull ARM SoC drivers from Arnd Bergmann:
"The SoC driver updates contain changes to improve support for
additional SoC variants, as well as cleanups and minor bugfixes
in a number of existing drivers.

Notable updates this time include:

- Support for Qualcomm MSM8909 (Snapdragon 210) in various drivers

- Updates for interconnect drivers on Qualcomm Snapdragon

- A new driver for NMI interrupts on the Fujitsu A64FX

- A rework of Broadcom BCMBCA Kconfig dependencies

- Improved support for BCM2711 (Raspberry Pi 4) power management to
allow the use of the V3D GPU

- Cleanups to the NXP guts driver

- Arm SCMI firmware driver updates to add tracing support, and use
the firmware interfaces for system power control and for power
capping"

* tag 'arm-drivers-6.0' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (125 commits)
soc: a64fx-diag: disable modular build
dt-bindings: soc: qcom: qcom,smd-rpm: add power-controller
dt-bindings: soc: qcom: aoss: document qcom,sm8450-aoss-qmp
dt-bindings: soc: qcom,rpmh-rsc: simplify qcom,tcs-config
ARM: mach-qcom: Add support for MSM8909
dt-bindings: arm: cpus: Document "qcom,msm8909-smp" enable-method
soc: qcom: spm: Add CPU data for MSM8909
dt-bindings: soc: qcom: spm: Add MSM8909 CPU compatible
soc: qcom: rpmpd: Add compatible for MSM8909
dt-bindings: power: qcom-rpmpd: Add MSM8909 power domains
soc: qcom: smd-rpm: Add compatible for MSM8909
dt-bindings: soc: qcom: smd-rpm: Add MSM8909
soc: qcom: icc-bwmon: Remove unnecessary print function dev_err()
soc: fujitsu: Add A64FX diagnostic interrupt driver
soc: qcom: socinfo: Fix the id of SA8540P SoC
soc: qcom: Make QCOM_RPMPD depend on PM
tty: serial: bcm63xx: bcmbca: Replace ARCH_BCM_63XX with ARCH_BCMBCA
spi: bcm63xx-hsspi: bcmbca: Replace ARCH_BCM_63XX with ARCH_BCMBCA
clk: bcm: bcmbca: Replace ARCH_BCM_63XX with ARCH_BCMBCA
hwrng: bcm2835: bcmbca: Replace ARCH_BCM_63XX with ARCH_BCMBCA
...

+6418 -588
+1
Documentation/devicetree/bindings/arm/cpus.yaml
··· 221 221 - qcom,kpss-acc-v1 222 222 - qcom,kpss-acc-v2 223 223 - qcom,msm8226-smp 224 + - qcom,msm8909-smp 224 225 # Only valid on ARM 32-bit, see above for ARM v8 64-bit 225 226 - qcom,msm8916-smp 226 227 - renesas,apmu
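The new enable-method is then referenced from a board's CPU nodes; a minimal sketch, assuming illustrative node names and reg values rather than a real MSM8909 device tree:

```dts
cpus {
	#address-cells = <1>;
	#size-cells = <0>;

	cpu@0 {
		device_type = "cpu";
		compatible = "arm,cortex-a7";
		reg = <0x0>;
		/* Selects the Cortex-A7 SMP ops registered for MSM8909 */
		enable-method = "qcom,msm8909-smp";
	};
};
```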
+1 -1
Documentation/devicetree/bindings/arm/qcom.yaml
··· 7 7 title: QCOM device tree bindings 8 8 9 9 maintainers: 10 - - Stephen Boyd <sboyd@codeaurora.org> 10 + - Bjorn Andersson <bjorn.andersson@linaro.org> 11 11 12 12 description: | 13 13 Some qcom based bootloaders identify the dtb blob based on a set of
+40
Documentation/devicetree/bindings/arm/tegra/nvidia,tegra194-axi2apb.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: "http://devicetree.org/schemas/arm/tegra/nvidia,tegra194-axi2apb.yaml#" 5 + $schema: "http://devicetree.org/meta-schemas/core.yaml#" 6 + 7 + title: NVIDIA Tegra194 AXI2APB bridge 8 + 9 + maintainers: 10 + - Sumit Gupta <sumitg@nvidia.com> 11 + 12 + properties: 13 + $nodename: 14 + pattern: "^axi2apb@([0-9a-f]+)$" 15 + 16 + compatible: 17 + enum: 18 + - nvidia,tegra194-axi2apb 19 + 20 + reg: 21 + maxItems: 6 22 + description: Physical base address and length of registers for all bridges 23 + 24 + additionalProperties: false 25 + 26 + required: 27 + - compatible 28 + - reg 29 + 30 + examples: 31 + - | 32 + axi2apb: axi2apb@2390000 { 33 + compatible = "nvidia,tegra194-axi2apb"; 34 + reg = <0x02390000 0x1000>, 35 + <0x023a0000 0x1000>, 36 + <0x023b0000 0x1000>, 37 + <0x023c0000 0x1000>, 38 + <0x023d0000 0x1000>, 39 + <0x023e0000 0x1000>; 40 + };
+97
Documentation/devicetree/bindings/arm/tegra/nvidia,tegra194-cbb.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: "http://devicetree.org/schemas/arm/tegra/nvidia,tegra194-cbb.yaml#" 5 + $schema: "http://devicetree.org/meta-schemas/core.yaml#" 6 + 7 + title: NVIDIA Tegra194 CBB 1.0 bindings 8 + 9 + maintainers: 10 + - Sumit Gupta <sumitg@nvidia.com> 11 + 12 + description: |+ 13 + The Control Backbone (CBB) is comprised of the physical path from an 14 + initiator to a target's register configuration space. CBB 1.0 has 15 + multiple hierarchical sub-NOCs (Network-on-Chip) and connects various 16 + initiators and targets using different bridges like AXIP2P, AXI2APB. 17 + 18 + This driver handles errors due to illegal register accesses reported 19 + by the NOCs inside the CBB. NOCs reporting errors are cluster NOCs 20 + "AON-NOC, SCE-NOC, RCE-NOC, BPMP-NOC, CV-NOC" and "CBB Central NOC" 21 + which is the main NOC. 22 + 23 + By default, the access issuing initiator is informed about the error 24 + using SError or Data Abort exception unless the ERD (Error Response 25 + Disable) is enabled/set for that initiator. If the ERD is enabled, then 26 + SError or Data Abort is masked and the error is reported with interrupt. 27 + 28 + - For CCPLEX (CPU Complex) initiator, the driver sets ERD bit. So, the 29 + errors due to illegal accesses from CCPLEX are reported by interrupts. 30 + If ERD is not set, then error is reported by SError. 31 + - For other initiators, the ERD is disabled. So, the access issuing 32 + initiator is informed about the illegal access by Data Abort exception. 33 + In addition, an interrupt is also generated to CCPLEX. These initiators 34 + include all engines using Cortex-R5 (which is ARMv7 CPU cluster) and 35 + engines like TSEC (Security co-processor), NVDEC (NVIDIA Video Decoder 36 + engine) etc which can initiate transactions. 
37 + 38 + The driver prints relevant debug information like Error Code, Error 39 + Description, Master, Address, AXI ID, Cache, Protection, Security Group 40 + etc on receiving error notification. 41 + 42 + properties: 43 + $nodename: 44 + pattern: "^[a-z]+-noc@[0-9a-f]+$" 45 + 46 + compatible: 47 + enum: 48 + - nvidia,tegra194-cbb-noc 49 + - nvidia,tegra194-aon-noc 50 + - nvidia,tegra194-bpmp-noc 51 + - nvidia,tegra194-rce-noc 52 + - nvidia,tegra194-sce-noc 53 + 54 + reg: 55 + maxItems: 1 56 + 57 + interrupts: 58 + description: 59 + CCPLEX receives secure or nonsecure interrupt depending on error type. 60 + A secure interrupt is received for SEC(firewall) & SLV errors and a 61 + non-secure interrupt is received for TMO & DEC errors. 62 + items: 63 + - description: non-secure interrupt 64 + - description: secure interrupt 65 + 66 + nvidia,axi2apb: 67 + $ref: '/schemas/types.yaml#/definitions/phandle' 68 + description: 69 + Specifies the node having all axi2apb bridges which need to be checked 70 + for any error logged in their status register. 71 + 72 + nvidia,apbmisc: 73 + $ref: '/schemas/types.yaml#/definitions/phandle' 74 + description: 75 + Specifies the apbmisc node which need to be used for reading the ERD 76 + register. 77 + 78 + additionalProperties: false 79 + 80 + required: 81 + - compatible 82 + - reg 83 + - interrupts 84 + - nvidia,apbmisc 85 + 86 + examples: 87 + - | 88 + #include <dt-bindings/interrupt-controller/arm-gic.h> 89 + 90 + cbb-noc@2300000 { 91 + compatible = "nvidia,tegra194-cbb-noc"; 92 + reg = <0x02300000 0x1000>; 93 + interrupts = <GIC_SPI 230 IRQ_TYPE_LEVEL_HIGH>, 94 + <GIC_SPI 231 IRQ_TYPE_LEVEL_HIGH>; 95 + nvidia,axi2apb = <&axi2apb>; 96 + nvidia,apbmisc = <&apbmisc>; 97 + };
+74
Documentation/devicetree/bindings/arm/tegra/nvidia,tegra234-cbb.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: "http://devicetree.org/schemas/arm/tegra/nvidia,tegra234-cbb.yaml#" 5 + $schema: "http://devicetree.org/meta-schemas/core.yaml#" 6 + 7 + title: NVIDIA Tegra CBB 2.0 bindings 8 + 9 + maintainers: 10 + - Sumit Gupta <sumitg@nvidia.com> 11 + 12 + description: |+ 13 + The Control Backbone (CBB) is comprised of the physical path from an 14 + initiator to a target's register configuration space. CBB 2.0 consists 15 + of multiple sub-blocks connected to each other to create a topology. 16 + The Tegra234 SoC has different fabrics based on CBB 2.0 architecture 17 + which include cluster fabrics BPMP, AON, PSC, SCE, RCE, DCE, FSI and 18 + "CBB central fabric". 19 + 20 + In CBB 2.0, each initiator which can issue transactions connects to a 21 + Root Master Node (MN) before it connects to any other element of the 22 + fabric. Each Root MN contains a Error Monitor (EM) which detects and 23 + logs error. Interrupts from various EM blocks are collated by Error 24 + Notifier (EN) which is per fabric and presents a single interrupt from 25 + fabric to the SoC interrupt controller. 26 + 27 + The driver handles errors from CBB due to illegal register accesses 28 + and prints debug information about failed transaction on receiving 29 + the interrupt from EN. Debug information includes Error Code, Error 30 + Description, MasterID, Fabric, SlaveID, Address, Cache, Protection, 31 + Security Group etc on receiving error notification. 32 + 33 + If the Error Response Disable (ERD) is set/enabled for an initiator, 34 + then SError or Data abort exception error response is masked and an 35 + interrupt is used for reporting errors due to illegal accesses from 36 + that initiator. The value returned on read failures is '0xFFFFFFFF' 37 + for compatibility with PCIE. 
38 + 39 + properties: 40 + $nodename: 41 + pattern: "^[a-z]+-fabric@[0-9a-f]+$" 42 + 43 + compatible: 44 + enum: 45 + - nvidia,tegra234-aon-fabric 46 + - nvidia,tegra234-bpmp-fabric 47 + - nvidia,tegra234-cbb-fabric 48 + - nvidia,tegra234-dce-fabric 49 + - nvidia,tegra234-rce-fabric 50 + - nvidia,tegra234-sce-fabric 51 + 52 + reg: 53 + maxItems: 1 54 + 55 + interrupts: 56 + items: 57 + - description: secure interrupt from error notifier 58 + 59 + additionalProperties: false 60 + 61 + required: 62 + - compatible 63 + - reg 64 + - interrupts 65 + 66 + examples: 67 + - | 68 + #include <dt-bindings/interrupt-controller/arm-gic.h> 69 + 70 + cbb-fabric@1300000 { 71 + compatible = "nvidia,tegra234-cbb-fabric"; 72 + reg = <0x13a00000 0x400000>; 73 + interrupts = <GIC_SPI 231 IRQ_TYPE_LEVEL_HIGH>; 74 + };
+13 -1
Documentation/devicetree/bindings/display/mediatek/mediatek,mutex.yaml Documentation/devicetree/bindings/soc/mediatek/mediatek,mutex.yaml
··· 1 1 # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 2 %YAML 1.2 3 3 --- 4 - $id: http://devicetree.org/schemas/display/mediatek/mediatek,mutex.yaml# 4 + $id: http://devicetree.org/schemas/soc/mediatek/mediatek,mutex.yaml# 5 5 $schema: http://devicetree.org/meta-schemas/core.yaml# 6 6 7 7 title: Mediatek mutex ··· 54 54 to gce. The event id is defined in the gce header 55 55 include/dt-bindings/gce/<chip>-gce.h of each chips. 56 56 $ref: /schemas/types.yaml#/definitions/uint32-array 57 + 58 + mediatek,gce-client-reg: 59 + $ref: /schemas/types.yaml#/definitions/phandle-array 60 + items: 61 + items: 62 + - description: phandle of GCE 63 + - description: GCE subsys id 64 + - description: register offset 65 + - description: register size 66 + description: The register of client driver can be configured by gce with 67 + 4 arguments defined in this property. Each GCE subsys id is mapping to 68 + a client defined in the header include/dt-bindings/gce/<chip>-gce.h. 57 69 58 70 required: 59 71 - compatible
+10
Documentation/devicetree/bindings/firmware/arm,scmi.yaml
··· 183 183 required: 184 184 - reg 185 185 186 + protocol@18: 187 + type: object 188 + properties: 189 + reg: 190 + const: 0x18 191 + 186 192 additionalProperties: false 187 193 188 194 patternProperties: ··· 328 322 regulator-max-microvolt = <4200000>; 329 323 }; 330 324 }; 325 + }; 326 + 327 + scmi_powercap: protocol@18 { 328 + reg = <0x18>; 331 329 }; 332 330 }; 333 331 };
+4
Documentation/devicetree/bindings/firmware/qcom,scm.txt
··· 23 23 * "qcom,scm-msm8994" 24 24 * "qcom,scm-msm8996" 25 25 * "qcom,scm-msm8998" 26 + * "qcom,scm-qcs404" 26 27 * "qcom,scm-sc7180" 27 28 * "qcom,scm-sc7280" 29 + * "qcom,scm-sm6125" 28 30 * "qcom,scm-sdm845" 29 31 * "qcom,scm-sdx55" 32 + * "qcom,scm-sdx65" 30 33 * "qcom,scm-sm6350" 31 34 * "qcom,scm-sm8150" 32 35 * "qcom,scm-sm8250" ··· 46 43 clock and "bus" for the bus clock per the requirements of the compatible. 47 44 - qcom,dload-mode: phandle to the TCSR hardware block and offset of the 48 45 download mode control register (optional) 46 + - interconnects: Specifies the bandwidth requirements of the SCM interface (optional) 49 47 50 48 Example for MSM8916: 51 49
+86
Documentation/devicetree/bindings/interconnect/qcom,msm8998-bwmon.yaml
··· 1 + # SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/interconnect/qcom,msm8998-bwmon.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Qualcomm Interconnect Bandwidth Monitor 8 + 9 + maintainers: 10 + - Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> 11 + 12 + description: | 13 + Bandwidth Monitor measures current throughput on buses between various NoC 14 + fabrics and provides information when it crosses configured thresholds. 15 + 16 + Certain SoCs might have more than one Bandwidth Monitors, for example on SDM845:: 17 + - Measuring the bandwidth between CPUs and Last Level Cache Controller - 18 + called just BWMON, 19 + - Measuring the bandwidth between Last Level Cache Controller and memory 20 + (DDR) - called LLCC BWMON. 21 + 22 + properties: 23 + compatible: 24 + oneOf: 25 + - items: 26 + - enum: 27 + - qcom,sdm845-bwmon 28 + - const: qcom,msm8998-bwmon 29 + - const: qcom,msm8998-bwmon # BWMON v4 30 + 31 + interconnects: 32 + maxItems: 1 33 + 34 + interrupts: 35 + maxItems: 1 36 + 37 + operating-points-v2: true 38 + opp-table: true 39 + 40 + reg: 41 + # BWMON v4 (currently described) and BWMON v5 use one register address 42 + # space. BWMON v2 uses two register spaces - not yet described. 
43 + maxItems: 1 44 + 45 + required: 46 + - compatible 47 + - interconnects 48 + - interrupts 49 + - operating-points-v2 50 + - opp-table 51 + - reg 52 + 53 + additionalProperties: false 54 + 55 + examples: 56 + - | 57 + #include <dt-bindings/interconnect/qcom,sdm845.h> 58 + #include <dt-bindings/interrupt-controller/arm-gic.h> 59 + 60 + pmu@1436400 { 61 + compatible = "qcom,sdm845-bwmon", "qcom,msm8998-bwmon"; 62 + reg = <0x01436400 0x600>; 63 + interrupts = <GIC_SPI 581 IRQ_TYPE_LEVEL_HIGH>; 64 + interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_LLCC 3>; 65 + 66 + operating-points-v2 = <&cpu_bwmon_opp_table>; 67 + 68 + cpu_bwmon_opp_table: opp-table { 69 + compatible = "operating-points-v2"; 70 + opp-0 { 71 + opp-peak-kBps = <4800000>; 72 + }; 73 + opp-1 { 74 + opp-peak-kBps = <9216000>; 75 + }; 76 + opp-2 { 77 + opp-peak-kBps = <15052800>; 78 + }; 79 + opp-3 { 80 + opp-peak-kBps = <20889600>; 81 + }; 82 + opp-4 { 83 + opp-peak-kBps = <25497600>; 84 + }; 85 + }; 86 + };
+1
Documentation/devicetree/bindings/memory-controllers/mediatek,smi-common.yaml
··· 32 32 - mediatek,mt2701-smi-common 33 33 - mediatek,mt2712-smi-common 34 34 - mediatek,mt6779-smi-common 35 + - mediatek,mt6795-smi-common 35 36 - mediatek,mt8167-smi-common 36 37 - mediatek,mt8173-smi-common 37 38 - mediatek,mt8183-smi-common
+1
Documentation/devicetree/bindings/memory-controllers/mediatek,smi-larb.yaml
··· 20 20 - mediatek,mt2701-smi-larb 21 21 - mediatek,mt2712-smi-larb 22 22 - mediatek,mt6779-smi-larb 23 + - mediatek,mt6795-smi-larb 23 24 - mediatek,mt8167-smi-larb 24 25 - mediatek,mt8173-smi-larb 25 26 - mediatek,mt8183-smi-larb
+2
Documentation/devicetree/bindings/power/mediatek,power-controller.yaml
··· 23 23 24 24 compatible: 25 25 enum: 26 + - mediatek,mt6795-power-controller 26 27 - mediatek,mt8167-power-controller 27 28 - mediatek,mt8173-power-controller 28 29 - mediatek,mt8183-power-controller ··· 63 62 reg: 64 63 description: | 65 64 Power domain index. Valid values are defined in: 65 + "include/dt-bindings/power/mt6795-power.h" - for MT6795 type power domain. 66 66 "include/dt-bindings/power/mt8167-power.h" - for MT8167 type power domain. 67 67 "include/dt-bindings/power/mt8173-power.h" - for MT8173 type power domain. 68 68 "include/dt-bindings/power/mt8183-power.h" - for MT8183 type power domain.
+1
Documentation/devicetree/bindings/power/qcom,rpmpd.yaml
··· 18 18 enum: 19 19 - qcom,mdm9607-rpmpd 20 20 - qcom,msm8226-rpmpd 21 + - qcom,msm8909-rpmpd 21 22 - qcom,msm8916-rpmpd 22 23 - qcom,msm8939-rpmpd 23 24 - qcom,msm8953-rpmpd
+1
Documentation/devicetree/bindings/soc/mediatek/devapc.yaml
··· 20 20 compatible: 21 21 enum: 22 22 - mediatek,mt6779-devapc 23 + - mediatek,mt8186-devapc 23 24 24 25 reg: 25 26 description: The base address of devapc register bank
+91
Documentation/devicetree/bindings/soc/mediatek/mtk-svs.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/soc/mediatek/mtk-svs.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: MediaTek Smart Voltage Scaling (SVS) Device Tree Bindings 8 + 9 + maintainers: 10 + - Roger Lu <roger.lu@mediatek.com> 11 + - Matthias Brugger <matthias.bgg@gmail.com> 12 + - Kevin Hilman <khilman@kernel.org> 13 + 14 + description: |+ 15 + The SVS engine is a piece of hardware which has several 16 + controllers(banks) for calculating suitable voltage to 17 + different power domains(CPU/GPU/CCI) according to 18 + chip process corner, temperatures and other factors. Then DVFS 19 + driver could apply SVS bank voltage to PMIC/Buck. 20 + 21 + properties: 22 + compatible: 23 + enum: 24 + - mediatek,mt8183-svs 25 + - mediatek,mt8192-svs 26 + 27 + reg: 28 + maxItems: 1 29 + description: Address range of the MTK SVS controller. 30 + 31 + interrupts: 32 + maxItems: 1 33 + 34 + clocks: 35 + maxItems: 1 36 + description: Main clock for MTK SVS controller to work. 37 + 38 + clock-names: 39 + const: main 40 + 41 + nvmem-cells: 42 + minItems: 1 43 + description: 44 + Phandle to the calibration data provided by a nvmem device. 
45 + items: 46 + - description: SVS efuse for SVS controller 47 + - description: Thermal efuse for SVS controller 48 + 49 + nvmem-cell-names: 50 + items: 51 + - const: svs-calibration-data 52 + - const: t-calibration-data 53 + 54 + resets: 55 + maxItems: 1 56 + 57 + reset-names: 58 + items: 59 + - const: svs_rst 60 + 61 + required: 62 + - compatible 63 + - reg 64 + - interrupts 65 + - clocks 66 + - clock-names 67 + - nvmem-cells 68 + - nvmem-cell-names 69 + 70 + additionalProperties: false 71 + 72 + examples: 73 + - | 74 + #include <dt-bindings/clock/mt8183-clk.h> 75 + #include <dt-bindings/interrupt-controller/arm-gic.h> 76 + #include <dt-bindings/interrupt-controller/irq.h> 77 + 78 + soc { 79 + #address-cells = <2>; 80 + #size-cells = <2>; 81 + 82 + svs@1100b000 { 83 + compatible = "mediatek,mt8183-svs"; 84 + reg = <0 0x1100b000 0 0x1000>; 85 + interrupts = <GIC_SPI 127 IRQ_TYPE_LEVEL_LOW>; 86 + clocks = <&infracfg CLK_INFRA_THERM>; 87 + clock-names = "main"; 88 + nvmem-cells = <&svs_calibration>, <&thermal_calibration>; 89 + nvmem-cell-names = "svs-calibration-data", "t-calibration-data"; 90 + }; 91 + };
+1
Documentation/devicetree/bindings/soc/qcom/qcom,aoss-qmp.yaml
··· 33 33 - qcom,sm8150-aoss-qmp 34 34 - qcom,sm8250-aoss-qmp 35 35 - qcom,sm8350-aoss-qmp 36 + - qcom,sm8450-aoss-qmp 36 37 - const: qcom,aoss-qmp 37 38 38 39 reg:
+11 -22
Documentation/devicetree/bindings/soc/qcom/qcom,rpmh-rsc.yaml
··· 65 65 66 66 qcom,tcs-config: 67 67 $ref: /schemas/types.yaml#/definitions/uint32-matrix 68 + minItems: 4 69 + maxItems: 4 68 70 items: 69 - - items: 70 - - description: TCS type 71 - enum: [ 0, 1, 2, 3 ] 72 - - description: Number of TCS 73 - - items: 74 - - description: TCS type 75 - enum: [ 0, 1, 2, 3 ] 76 - - description: Number of TCS 77 - - items: 78 - - description: TCS type 79 - enum: [ 0, 1, 2, 3] 80 - - description: Numbe r of TCS 81 - - items: 82 - - description: TCS type 83 - enum: [ 0, 1, 2, 3 ] 84 - - description: Number of TCS 71 + items: 72 + - description: | 73 + TCS type:: 74 + - ACTIVE_TCS 75 + - SLEEP_TCS 76 + - WAKE_TCS 77 + - CONTROL_TCS 78 + enum: [ 0, 1, 2, 3 ] 79 + - description: Number of TCS 85 80 description: | 86 81 The tuple defining the configuration of TCS. Must have two cells which 87 82 describe each TCS type. The order of the TCS must match the hardware 88 83 configuration. 89 - Cell 1 (TCS Type):: TCS types to be specified:: 90 - - ACTIVE_TCS 91 - - SLEEP_TCS 92 - - WAKE_TCS 93 - - CONTROL_TCS 94 - Cell 2 (Number of TCS):: <u32> 95 84 96 85 qcom,tcs-offset: 97 86 $ref: /schemas/types.yaml#/definitions/uint32
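Under the simplified schema, qcom,tcs-config is still a four-row matrix of <type count> pairs in hardware order; a hedged sketch of a conforming node (the address, interrupt, offsets and TCS counts are illustrative):

```dts
#include <dt-bindings/interrupt-controller/arm-gic.h>

apps_rsc: rsc@179c0000 {
	compatible = "qcom,rpmh-rsc";
	reg = <0x179c0000 0x10000>;
	reg-names = "drv-0";
	interrupts = <GIC_SPI 3 IRQ_TYPE_LEVEL_HIGH>;
	qcom,drv-id = <0>;
	qcom,tcs-offset = <0xd00>;
	/* One <type count> pair per row: ACTIVE_TCS, SLEEP_TCS, WAKE_TCS, CONTROL_TCS */
	qcom,tcs-config = <0 2>, <1 3>, <2 3>, <3 1>;
};
```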
+4
Documentation/devicetree/bindings/soc/qcom/qcom,smd-rpm.yaml
··· 34 34 - qcom,rpm-apq8084 35 35 - qcom,rpm-ipq6018 36 36 - qcom,rpm-msm8226 37 + - qcom,rpm-msm8909 37 38 - qcom,rpm-msm8916 38 39 - qcom,rpm-msm8936 39 40 - qcom,rpm-msm8953 ··· 51 50 clock-controller: 52 51 $ref: /schemas/clock/qcom,rpmcc.yaml# 53 52 unevaluatedProperties: false 53 + 54 + power-controller: 55 + $ref: /schemas/power/qcom,rpmpd.yaml# 54 56 55 57 qcom,smd-channels: 56 58 $ref: /schemas/types.yaml#/definitions/string-array
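With the new property documented, a power-controller child can sit alongside the existing clock-controller under the RPM requests node; a sketch assuming MSM8909, where the OPP table phandle is hypothetical:

```dts
rpm-requests {
	compatible = "qcom,rpm-msm8909";
	qcom,smd-channels = "rpm_requests";

	power-controller {
		compatible = "qcom,msm8909-rpmpd";
		#power-domain-cells = <1>;
		/* OPP table defined elsewhere; this label is hypothetical */
		operating-points-v2 = <&rpmpd_opp_table>;
	};
};
```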
+1
Documentation/devicetree/bindings/soc/qcom/qcom,spm.yaml
··· 22 22 - qcom,sdm660-silver-saw2-v4.1-l2 23 23 - qcom,msm8998-gold-saw2-v4.1-l2 24 24 - qcom,msm8998-silver-saw2-v4.1-l2 25 + - qcom,msm8909-saw2-v3.0-cpu 25 26 - qcom,msm8916-saw2-v3.0-cpu 26 27 - qcom,msm8226-saw2-v2.1-cpu 27 28 - qcom,msm8974-saw2-v2.1-cpu
-1
Documentation/devicetree/bindings/soc/qcom/qcom,wcnss.yaml
··· 77 77 Should reference the tx-enable and tx-rings-empty SMEM states. 78 78 79 79 qcom,smem-state-names: 80 - $ref: /schemas/types.yaml#/definitions/string-array 81 80 items: 82 81 - const: tx-enable 83 82 - const: tx-rings-empty
+3 -2
Documentation/devicetree/bindings/soc/ti/ti,pruss.yaml
··· 65 65 - ti,am4376-pruss0 # for AM437x SoC family and PRUSS unit 0 66 66 - ti,am4376-pruss1 # for AM437x SoC family and PRUSS unit 1 67 67 - ti,am5728-pruss # for AM57xx SoC family 68 - - ti,k2g-pruss # for 66AK2G SoC family 68 + - ti,am625-pruss # for K3 AM62x SoC family 69 + - ti,am642-icssg # for K3 AM64x SoC family 69 70 - ti,am654-icssg # for K3 AM65x SoC family 70 71 - ti,j721e-icssg # for K3 J721E SoC family 71 - - ti,am642-icssg # for K3 AM64x SoC family 72 + - ti,k2g-pruss # for 66AK2G SoC family 72 73 73 74 reg: 74 75 maxItems: 1
+12
MAINTAINERS
··· 242 242 F: include/uapi/linux/virtio_9p.h 243 243 F: net/9p/ 244 244 245 + A64FX DIAG DRIVER 246 + M: Hitomi Hasegawa <hasegawa-hitomi@fujitsu.com> 247 + S: Supported 248 + F: drivers/soc/fujitsu/a64fx-diag.c 249 + 245 250 A8293 MEDIA DRIVER 246 251 M: Antti Palosaari <crope@iki.fi> 247 252 L: linux-media@vger.kernel.org ··· 16678 16673 S: Maintained 16679 16674 F: Documentation/devicetree/bindings/i2c/i2c-qcom-cci.txt 16680 16675 F: drivers/i2c/busses/i2c-qcom-cci.c 16676 + 16677 + QUALCOMM INTERCONNECT BWMON DRIVER 16678 + M: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> 16679 + L: linux-arm-msm@vger.kernel.org 16680 + S: Maintained 16681 + F: Documentation/devicetree/bindings/interconnect/qcom,msm8998-bwmon.yaml 16682 + F: drivers/soc/qcom/icc-bwmon.c 16681 16683 16682 16684 QUALCOMM IOMMU 16683 16685 M: Rob Clark <robdclark@gmail.com>
+4
arch/arm/mach-qcom/Kconfig
··· 20 20 bool "Enable support for MSM8X60" 21 21 select CLKSRC_QCOM 22 22 23 + config ARCH_MSM8909 24 + bool "Enable support for MSM8909" 25 + select HAVE_ARM_ARCH_TIMER 26 + 23 27 config ARCH_MSM8916 24 28 bool "Enable support for MSM8916" 25 29 select HAVE_ARM_ARCH_TIMER
+1
arch/arm/mach-qcom/platsmp.c
··· 384 384 #endif 385 385 }; 386 386 CPU_METHOD_OF_DECLARE(qcom_smp_msm8226, "qcom,msm8226-smp", &qcom_smp_cortex_a7_ops); 387 + CPU_METHOD_OF_DECLARE(qcom_smp_msm8909, "qcom,msm8909-smp", &qcom_smp_cortex_a7_ops); 387 388 CPU_METHOD_OF_DECLARE(qcom_smp_msm8916, "qcom,msm8916-smp", &qcom_smp_cortex_a7_ops); 388 389 389 390 static const struct smp_operations qcom_smp_kpssv1_ops __initconst = {
+1 -1
drivers/ata/Kconfig
··· 148 148 config AHCI_BRCM 149 149 tristate "Broadcom AHCI SATA support" 150 150 depends on ARCH_BRCMSTB || BMIPS_GENERIC || ARCH_BCM_NSP || \ 151 - ARCH_BCM_63XX || COMPILE_TEST 151 + ARCH_BCMBCA || COMPILE_TEST 152 152 select SATA_HOST 153 153 help 154 154 This option enables support for the AHCI SATA3 controller found on
+1 -1
drivers/char/hw_random/Kconfig
··· 87 87 config HW_RANDOM_BCM2835 88 88 tristate "Broadcom BCM2835/BCM63xx Random Number Generator support" 89 89 depends on ARCH_BCM2835 || ARCH_BCM_NSP || ARCH_BCM_5301X || \ 90 - ARCH_BCM_63XX || BCM63XX || BMIPS_GENERIC || COMPILE_TEST 90 + ARCH_BCMBCA || BCM63XX || BMIPS_GENERIC || COMPILE_TEST 91 91 default HW_RANDOM 92 92 help 93 93 This driver provides kernel-side support for the Random Number
+2 -2
drivers/clk/bcm/Kconfig
··· 22 22 23 23 config CLK_BCM_63XX 24 24 bool "Broadcom BCM63xx clock support" 25 - depends on ARCH_BCM_63XX || COMPILE_TEST 25 + depends on ARCH_BCMBCA || COMPILE_TEST 26 26 select COMMON_CLK_IPROC 27 - default ARCH_BCM_63XX 27 + default ARCH_BCMBCA 28 28 help 29 29 Enable common clock framework support for Broadcom BCM63xx DSL SoCs 30 30 based on the ARM architecture
+12
drivers/firmware/arm_scmi/Kconfig
··· 149 149 will be called scmi_pm_domain. Note this may needed early in boot 150 150 before rootfs may be available. 151 151 152 + config ARM_SCMI_POWER_CONTROL 153 + tristate "SCMI system power control driver" 154 + depends on ARM_SCMI_PROTOCOL || (COMPILE_TEST && OF) 155 + help 156 + This enables System Power control logic which binds system shutdown or 157 + reboot actions to SCMI System Power notifications generated by SCP 158 + firmware. 159 + 160 + This driver can also be built as a module. If so, the module will be 161 + called scmi_power_control. Note this may be needed early in boot to catch 162 + early shutdown/reboot SCMI requests. 163 + 152 164 endmenu
+2 -1
drivers/firmware/arm_scmi/Makefile
··· 7 7 scmi-transport-$(CONFIG_ARM_SCMI_HAVE_MSG) += msg.o 8 8 scmi-transport-$(CONFIG_ARM_SCMI_TRANSPORT_VIRTIO) += virtio.o 9 9 scmi-transport-$(CONFIG_ARM_SCMI_TRANSPORT_OPTEE) += optee.o 10 - scmi-protocols-y = base.o clock.o perf.o power.o reset.o sensors.o system.o voltage.o 10 + scmi-protocols-y = base.o clock.o perf.o power.o reset.o sensors.o system.o voltage.o powercap.o 11 11 scmi-module-objs := $(scmi-bus-y) $(scmi-driver-y) $(scmi-protocols-y) \ 12 12 $(scmi-transport-y) 13 13 obj-$(CONFIG_ARM_SCMI_PROTOCOL) += scmi-module.o 14 14 obj-$(CONFIG_ARM_SCMI_POWER_DOMAIN) += scmi_pm_domain.o 15 + obj-$(CONFIG_ARM_SCMI_POWER_CONTROL) += scmi_power_control.o 15 16 16 17 ifeq ($(CONFIG_THUMB2_KERNEL)$(CONFIG_CC_IS_CLANG),yy) 17 18 # The use of R7 in the SMCCC conflicts with the compiler's use of R7 as a frame
+266 -15
drivers/firmware/arm_scmi/driver.c
··· 19 19 #include <linux/export.h> 20 20 #include <linux/idr.h> 21 21 #include <linux/io.h> 22 + #include <linux/io-64-nonatomic-hi-lo.h> 22 23 #include <linux/kernel.h> 23 24 #include <linux/ktime.h> 24 25 #include <linux/hashtable.h> ··· 60 59 61 60 static DEFINE_IDR(scmi_requested_devices); 62 61 static DEFINE_MUTEX(scmi_requested_devices_mtx); 62 + 63 + /* Track globally the creation of SCMI SystemPower related devices */ 64 + static bool scmi_syspower_registered; 65 + /* Protect access to scmi_syspower_registered */ 66 + static DEFINE_MUTEX(scmi_syspower_mtx); 63 67 64 68 struct scmi_requested_dev { 65 69 const struct scmi_device_id *id_table; ··· 666 660 smp_store_mb(xfer->priv, priv); 667 661 info->desc->ops->fetch_notification(cinfo, info->desc->max_msg_size, 668 662 xfer); 663 + 664 + trace_scmi_msg_dump(xfer->hdr.protocol_id, xfer->hdr.id, "NOTI", 665 + xfer->hdr.seq, xfer->hdr.status, 666 + xfer->rx.buf, xfer->rx.len); 667 + 669 668 scmi_notify(cinfo->handle, xfer->hdr.protocol_id, 670 669 xfer->hdr.id, xfer->rx.buf, xfer->rx.len, ts); 671 670 ··· 704 693 /* Ensure order between xfer->priv store and following ops */ 705 694 smp_store_mb(xfer->priv, priv); 706 695 info->desc->ops->fetch_response(cinfo, xfer); 696 + 697 + trace_scmi_msg_dump(xfer->hdr.protocol_id, xfer->hdr.id, 698 + xfer->hdr.type == MSG_TYPE_DELAYED_RESP ? 699 + "DLYD" : "RESP", 700 + xfer->hdr.seq, xfer->hdr.status, 701 + xfer->rx.buf, xfer->rx.len); 707 702 708 703 trace_scmi_rx_done(xfer->transfer_id, xfer->hdr.id, 709 704 xfer->hdr.protocol_id, xfer->hdr.seq, ··· 844 827 xfer->state = SCMI_XFER_RESP_OK; 845 828 } 846 829 spin_unlock_irqrestore(&xfer->lock, flags); 830 + 831 + /* Trace polled replies. */ 832 + trace_scmi_msg_dump(xfer->hdr.protocol_id, xfer->hdr.id, 833 + "RESP", 834 + xfer->hdr.seq, xfer->hdr.status, 835 + xfer->rx.buf, xfer->rx.len); 847 836 } 848 837 } else { 849 838 /* And we wait for the response. 
*/ ··· 925 902 dev_dbg(dev, "Failed to send message %d\n", ret); 926 903 return ret; 927 904 } 905 + 906 + trace_scmi_msg_dump(xfer->hdr.protocol_id, xfer->hdr.id, "CMND", 907 + xfer->hdr.seq, xfer->hdr.status, 908 + xfer->tx.buf, xfer->tx.len); 928 909 929 910 ret = scmi_wait_for_message_response(cinfo, xfer); 930 911 if (!ret && xfer->hdr.status) ··· 1286 1259 return ret; 1287 1260 } 1288 1261 1262 + struct scmi_msg_get_fc_info { 1263 + __le32 domain; 1264 + __le32 message_id; 1265 + }; 1266 + 1267 + struct scmi_msg_resp_desc_fc { 1268 + __le32 attr; 1269 + #define SUPPORTS_DOORBELL(x) ((x) & BIT(0)) 1270 + #define DOORBELL_REG_WIDTH(x) FIELD_GET(GENMASK(2, 1), (x)) 1271 + __le32 rate_limit; 1272 + __le32 chan_addr_low; 1273 + __le32 chan_addr_high; 1274 + __le32 chan_size; 1275 + __le32 db_addr_low; 1276 + __le32 db_addr_high; 1277 + __le32 db_set_lmask; 1278 + __le32 db_set_hmask; 1279 + __le32 db_preserve_lmask; 1280 + __le32 db_preserve_hmask; 1281 + }; 1282 + 1283 + static void 1284 + scmi_common_fastchannel_init(const struct scmi_protocol_handle *ph, 1285 + u8 describe_id, u32 message_id, u32 valid_size, 1286 + u32 domain, void __iomem **p_addr, 1287 + struct scmi_fc_db_info **p_db) 1288 + { 1289 + int ret; 1290 + u32 flags; 1291 + u64 phys_addr; 1292 + u8 size; 1293 + void __iomem *addr; 1294 + struct scmi_xfer *t; 1295 + struct scmi_fc_db_info *db = NULL; 1296 + struct scmi_msg_get_fc_info *info; 1297 + struct scmi_msg_resp_desc_fc *resp; 1298 + const struct scmi_protocol_instance *pi = ph_to_pi(ph); 1299 + 1300 + if (!p_addr) { 1301 + ret = -EINVAL; 1302 + goto err_out; 1303 + } 1304 + 1305 + ret = ph->xops->xfer_get_init(ph, describe_id, 1306 + sizeof(*info), sizeof(*resp), &t); 1307 + if (ret) 1308 + goto err_out; 1309 + 1310 + info = t->tx.buf; 1311 + info->domain = cpu_to_le32(domain); 1312 + info->message_id = cpu_to_le32(message_id); 1313 + 1314 + /* 1315 + * Bail out on error leaving fc_info addresses zeroed; this includes 1316 + * the case in 
which the requested domain/message_id does NOT support 1317 + * fastchannels at all. 1318 + */ 1319 + ret = ph->xops->do_xfer(ph, t); 1320 + if (ret) 1321 + goto err_xfer; 1322 + 1323 + resp = t->rx.buf; 1324 + flags = le32_to_cpu(resp->attr); 1325 + size = le32_to_cpu(resp->chan_size); 1326 + if (size != valid_size) { 1327 + ret = -EINVAL; 1328 + goto err_xfer; 1329 + } 1330 + 1331 + phys_addr = le32_to_cpu(resp->chan_addr_low); 1332 + phys_addr |= (u64)le32_to_cpu(resp->chan_addr_high) << 32; 1333 + addr = devm_ioremap(ph->dev, phys_addr, size); 1334 + if (!addr) { 1335 + ret = -EADDRNOTAVAIL; 1336 + goto err_xfer; 1337 + } 1338 + 1339 + *p_addr = addr; 1340 + 1341 + if (p_db && SUPPORTS_DOORBELL(flags)) { 1342 + db = devm_kzalloc(ph->dev, sizeof(*db), GFP_KERNEL); 1343 + if (!db) { 1344 + ret = -ENOMEM; 1345 + goto err_db; 1346 + } 1347 + 1348 + size = 1 << DOORBELL_REG_WIDTH(flags); 1349 + phys_addr = le32_to_cpu(resp->db_addr_low); 1350 + phys_addr |= (u64)le32_to_cpu(resp->db_addr_high) << 32; 1351 + addr = devm_ioremap(ph->dev, phys_addr, size); 1352 + if (!addr) { 1353 + ret = -EADDRNOTAVAIL; 1354 + goto err_db_mem; 1355 + } 1356 + 1357 + db->addr = addr; 1358 + db->width = size; 1359 + db->set = le32_to_cpu(resp->db_set_lmask); 1360 + db->set |= (u64)le32_to_cpu(resp->db_set_hmask) << 32; 1361 + db->mask = le32_to_cpu(resp->db_preserve_lmask); 1362 + db->mask |= (u64)le32_to_cpu(resp->db_preserve_hmask) << 32; 1363 + 1364 + *p_db = db; 1365 + } 1366 + 1367 + ph->xops->xfer_put(ph, t); 1368 + 1369 + dev_dbg(ph->dev, 1370 + "Using valid FC for protocol %X [MSG_ID:%u / RES_ID:%u]\n", 1371 + pi->proto->id, message_id, domain); 1372 + 1373 + return; 1374 + 1375 + err_db_mem: 1376 + devm_kfree(ph->dev, db); 1377 + 1378 + err_db: 1379 + *p_addr = NULL; 1380 + 1381 + err_xfer: 1382 + ph->xops->xfer_put(ph, t); 1383 + 1384 + err_out: 1385 + dev_warn(ph->dev, 1386 + "Failed to get FC for protocol %X [MSG_ID:%u / RES_ID:%u] - ret:%d. 
Using regular messaging.\n", 1387 + pi->proto->id, message_id, domain, ret); 1388 + } 1389 + 1390 + #define SCMI_PROTO_FC_RING_DB(w) \ 1391 + do { \ 1392 + u##w val = 0; \ 1393 + \ 1394 + if (db->mask) \ 1395 + val = ioread##w(db->addr) & db->mask; \ 1396 + iowrite##w((u##w)db->set | val, db->addr); \ 1397 + } while (0) 1398 + 1399 + static void scmi_common_fastchannel_db_ring(struct scmi_fc_db_info *db) 1400 + { 1401 + if (!db || !db->addr) 1402 + return; 1403 + 1404 + if (db->width == 1) 1405 + SCMI_PROTO_FC_RING_DB(8); 1406 + else if (db->width == 2) 1407 + SCMI_PROTO_FC_RING_DB(16); 1408 + else if (db->width == 4) 1409 + SCMI_PROTO_FC_RING_DB(32); 1410 + else /* db->width == 8 */ 1411 + #ifdef CONFIG_64BIT 1412 + SCMI_PROTO_FC_RING_DB(64); 1413 + #else 1414 + { 1415 + u64 val = 0; 1416 + 1417 + if (db->mask) 1418 + val = ioread64_hi_lo(db->addr) & db->mask; 1419 + iowrite64_hi_lo(db->set | val, db->addr); 1420 + } 1421 + #endif 1422 + } 1423 + 1289 1424 static const struct scmi_proto_helpers_ops helpers_ops = { 1290 1425 .extended_name_get = scmi_common_extended_name_get, 1291 1426 .iter_response_init = scmi_iterator_init, 1292 1427 .iter_response_run = scmi_iterator_run, 1428 + .fastchannel_init = scmi_common_fastchannel_init, 1429 + .fastchannel_db_ring = scmi_common_fastchannel_db_ring, 1293 1430 }; 1294 1431 1295 1432 /** ··· 1688 1497 scmi_protocol_release(dres->handle, dres->protocol_id); 1689 1498 } 1690 1499 1500 + static struct scmi_protocol_instance __must_check * 1501 + scmi_devres_protocol_instance_get(struct scmi_device *sdev, u8 protocol_id) 1502 + { 1503 + struct scmi_protocol_instance *pi; 1504 + struct scmi_protocol_devres *dres; 1505 + 1506 + dres = devres_alloc(scmi_devm_release_protocol, 1507 + sizeof(*dres), GFP_KERNEL); 1508 + if (!dres) 1509 + return ERR_PTR(-ENOMEM); 1510 + 1511 + pi = scmi_get_protocol_instance(sdev->handle, protocol_id); 1512 + if (IS_ERR(pi)) { 1513 + devres_free(dres); 1514 + return pi; 1515 + } 1516 + 1517 + 
dres->handle = sdev->handle; 1518 + dres->protocol_id = protocol_id; 1519 + devres_add(&sdev->dev, dres); 1520 + 1521 + return pi; 1522 + } 1523 + 1691 1524 /** 1692 1525 * scmi_devm_protocol_get - Devres managed get protocol operations and handle 1693 1526 * @sdev: A reference to an scmi_device whose embedded struct device is to ··· 1735 1520 struct scmi_protocol_handle **ph) 1736 1521 { 1737 1522 struct scmi_protocol_instance *pi; 1738 - struct scmi_protocol_devres *dres; 1739 - struct scmi_handle *handle = sdev->handle; 1740 1523 1741 1524 if (!ph) 1742 1525 return ERR_PTR(-EINVAL); 1743 1526 1744 - dres = devres_alloc(scmi_devm_release_protocol, 1745 - sizeof(*dres), GFP_KERNEL); 1746 - if (!dres) 1747 - return ERR_PTR(-ENOMEM); 1748 - 1749 - pi = scmi_get_protocol_instance(handle, protocol_id); 1750 - if (IS_ERR(pi)) { 1751 - devres_free(dres); 1527 + pi = scmi_devres_protocol_instance_get(sdev, protocol_id); 1528 + if (IS_ERR(pi)) 1752 1529 return pi; 1753 - } 1754 - 1755 - dres->handle = handle; 1756 - dres->protocol_id = protocol_id; 1757 - devres_add(&sdev->dev, dres); 1758 1530 1759 1531 *ph = &pi->ph; 1760 1532 1761 1533 return pi->proto->ops; 1534 + } 1535 + 1536 + /** 1537 + * scmi_devm_protocol_acquire - Devres managed helper to get hold of a protocol 1538 + * @sdev: A reference to an scmi_device whose embedded struct device is to 1539 + * be used for devres accounting. 1540 + * @protocol_id: The protocol being requested. 1541 + * 1542 + * Get hold of a protocol accounting for its usage, possibly triggering its 1543 + * initialization but without getting access to its protocol specific operations 1544 + * and handle. 1545 + * 1546 + * Being a devres based managed method, protocol hold will be automatically 1547 + * released, and possibly de-initialized on last user, once the SCMI driver 1548 + * owning the scmi_device is unbound from it. 
1549 + * 1550 + * Return: 0 on SUCCESS 1551 + */ 1552 + static int __must_check scmi_devm_protocol_acquire(struct scmi_device *sdev, 1553 + u8 protocol_id) 1554 + { 1555 + struct scmi_protocol_instance *pi; 1556 + 1557 + pi = scmi_devres_protocol_instance_get(sdev, protocol_id); 1558 + if (IS_ERR(pi)) 1559 + return PTR_ERR(pi); 1560 + 1561 + return 0; 1762 1562 } 1763 1563 1764 1564 static int scmi_devm_protocol_match(struct device *dev, void *res, void *data) ··· 2079 1849 if (sdev) 2080 1850 return sdev; 2081 1851 1852 + mutex_lock(&scmi_syspower_mtx); 1853 + if (prot_id == SCMI_PROTOCOL_SYSTEM && scmi_syspower_registered) { 1854 + dev_warn(info->dev, 1855 + "SCMI SystemPower protocol device must be unique !\n"); 1856 + mutex_unlock(&scmi_syspower_mtx); 1857 + 1858 + return NULL; 1859 + } 1860 + 2082 1861 pr_debug("Creating SCMI device (%s) for protocol %x\n", name, prot_id); 2083 1862 2084 1863 sdev = scmi_device_create(np, info->dev, prot_id, name); 2085 1864 if (!sdev) { 2086 1865 dev_err(info->dev, "failed to create %d protocol device\n", 2087 1866 prot_id); 1867 + mutex_unlock(&scmi_syspower_mtx); 1868 + 2088 1869 return NULL; 2089 1870 } 2090 1871 2091 1872 if (scmi_txrx_setup(info, &sdev->dev, prot_id)) { 2092 1873 dev_err(&sdev->dev, "failed to setup transport\n"); 2093 1874 scmi_device_destroy(sdev); 1875 + mutex_unlock(&scmi_syspower_mtx); 1876 + 2094 1877 return NULL; 2095 1878 } 1879 + 1880 + if (prot_id == SCMI_PROTOCOL_SYSTEM) 1881 + scmi_syspower_registered = true; 1882 + 1883 + mutex_unlock(&scmi_syspower_mtx); 2096 1884 2097 1885 return sdev; 2098 1886 } ··· 2380 2132 handle = &info->handle; 2381 2133 handle->dev = info->dev; 2382 2134 handle->version = &info->version; 2135 + handle->devm_protocol_acquire = scmi_devm_protocol_acquire; 2383 2136 handle->devm_protocol_get = scmi_devm_protocol_get; 2384 2137 handle->devm_protocol_put = scmi_devm_protocol_put; 2385 2138 ··· 2650 2401 scmi_sensors_register(); 2651 2402 scmi_voltage_register(); 2652 
2403 scmi_system_register(); 2404 + scmi_powercap_register(); 2653 2405 2654 2406 return platform_driver_register(&scmi_driver); 2655 2407 } ··· 2667 2417 scmi_sensors_unregister(); 2668 2418 scmi_voltage_unregister(); 2669 2419 scmi_system_unregister(); 2420 + scmi_powercap_unregister(); 2670 2421 2671 2422 scmi_bus_exit(); 2672 2423
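The common fastchannel code added to driver.c above rings a doorbell with a read-modify-write that preserves the register bits selected by `db->mask` (see `SCMI_PROTO_FC_RING_DB`). A minimal userspace model of that logic, with a plain variable standing in for the `ioremap()`ed MMIO register (the `fc_db`/`ring_db32` names here are illustrative, not kernel API):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified model of SCMI_PROTO_FC_RING_DB for the 32-bit width:
 * write db->set while preserving the bits covered by db->mask. The
 * kernel does this with ioread32()/iowrite32() on an __iomem address. */
struct fc_db {
	uint32_t set;	/* bits to assert to ring the doorbell */
	uint32_t mask;	/* bits whose current value must survive */
};

static void ring_db32(const struct fc_db *db, volatile uint32_t *reg)
{
	uint32_t val = 0;

	if (db->mask)
		val = *reg & db->mask;	/* read back only when needed */
	*reg = db->set | val;
}
```

With a zero mask the read is skipped entirely and the doorbell becomes a plain write, which is why the kernel macro guards the `ioread##w` behind `if (db->mask)`.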
+54 -171
drivers/firmware/arm_scmi/perf.c
··· 10 10 #include <linux/bits.h> 11 11 #include <linux/of.h> 12 12 #include <linux/io.h> 13 - #include <linux/io-64-nonatomic-hi-lo.h> 14 13 #include <linux/module.h> 15 14 #include <linux/platform_device.h> 16 15 #include <linux/pm_opp.h> 17 16 #include <linux/scmi_protocol.h> 18 17 #include <linux/sort.h> 18 + 19 + #include <trace/events/scmi.h> 19 20 20 21 #include "protocols.h" 21 22 #include "notify.h" ··· 34 33 PERF_NOTIFY_LEVEL = 0xa, 35 34 PERF_DESCRIBE_FASTCHANNEL = 0xb, 36 35 PERF_DOMAIN_NAME_GET = 0xc, 36 + }; 37 + 38 + enum { 39 + PERF_FC_LEVEL, 40 + PERF_FC_LIMIT, 41 + PERF_FC_MAX, 37 42 }; 38 43 39 44 struct scmi_opp { ··· 120 113 __le16 transition_latency_us; 121 114 __le16 reserved; 122 115 } opp[]; 123 - }; 124 - 125 - struct scmi_perf_get_fc_info { 126 - __le32 domain; 127 - __le32 message_id; 128 - }; 129 - 130 - struct scmi_msg_resp_perf_desc_fc { 131 - __le32 attr; 132 - #define SUPPORTS_DOORBELL(x) ((x) & BIT(0)) 133 - #define DOORBELL_REG_WIDTH(x) FIELD_GET(GENMASK(2, 1), (x)) 134 - __le32 rate_limit; 135 - __le32 chan_addr_low; 136 - __le32 chan_addr_high; 137 - __le32 chan_size; 138 - __le32 db_addr_low; 139 - __le32 db_addr_high; 140 - __le32 db_set_lmask; 141 - __le32 db_set_hmask; 142 - __le32 db_preserve_lmask; 143 - __le32 db_preserve_hmask; 144 - }; 145 - 146 - struct scmi_fc_db_info { 147 - int width; 148 - u64 set; 149 - u64 mask; 150 - void __iomem *addr; 151 - }; 152 - 153 - struct scmi_fc_info { 154 - void __iomem *level_set_addr; 155 - void __iomem *limit_set_addr; 156 - void __iomem *level_get_addr; 157 - void __iomem *limit_get_addr; 158 - struct scmi_fc_db_info *level_set_db; 159 - struct scmi_fc_db_info *limit_set_db; 160 116 }; 161 117 162 118 struct perf_dom_info { ··· 330 360 return ret; 331 361 } 332 362 333 - #define SCMI_PERF_FC_RING_DB(w) \ 334 - do { \ 335 - u##w val = 0; \ 336 - \ 337 - if (db->mask) \ 338 - val = ioread##w(db->addr) & db->mask; \ 339 - iowrite##w((u##w)db->set | val, db->addr); \ 340 - } while (0) 
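The removed perf.c copies of `scmi_msg_resp_perf_desc_fc` decode the DESCRIBE_FASTCHANNEL response the same way the new common code does: the doorbell width comes out of bits [2:1] of `attr` via `FIELD_GET(GENMASK(2, 1), ...)`, and the channel address is assembled from two little-endian 32-bit halves. A sketch of that decoding with simplified stand-ins for the kernel's `GENMASK()`/`FIELD_GET()` helpers (these macros are reimplemented here for a 32-bit operand only, so the example runs outside the kernel):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified 32-bit-only stand-ins for the kernel helpers. */
#define GENMASK(h, l)	((~0u << (l)) & (~0u >> (31 - (h))))
#define FIELD_GET(m, v)	(((v) & (m)) / ((m) & ~((m) << 1)))

#define SUPPORTS_DOORBELL(x)	((x) & 1u)
#define DOORBELL_REG_WIDTH(x)	FIELD_GET(GENMASK(2, 1), (x))

/* chan_addr_low/chan_addr_high arrive as separate __le32 words. */
static uint64_t fc_addr(uint32_t low, uint32_t high)
{
	return (uint64_t)low | ((uint64_t)high << 32);
}
```

The width field encodes a power of two, so `1 << DOORBELL_REG_WIDTH(flags)` yields the doorbell register size in bytes (1, 2, 4 or 8), matching the widths `scmi_common_fastchannel_db_ring()` switches on.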
341 - 342 - static void scmi_perf_fc_ring_db(struct scmi_fc_db_info *db) 343 - { 344 - if (!db || !db->addr) 345 - return; 346 - 347 - if (db->width == 1) 348 - SCMI_PERF_FC_RING_DB(8); 349 - else if (db->width == 2) 350 - SCMI_PERF_FC_RING_DB(16); 351 - else if (db->width == 4) 352 - SCMI_PERF_FC_RING_DB(32); 353 - else /* db->width == 8 */ 354 - #ifdef CONFIG_64BIT 355 - SCMI_PERF_FC_RING_DB(64); 356 - #else 357 - { 358 - u64 val = 0; 359 - 360 - if (db->mask) 361 - val = ioread64_hi_lo(db->addr) & db->mask; 362 - iowrite64_hi_lo(db->set | val, db->addr); 363 - } 364 - #endif 365 - } 366 - 367 363 static int scmi_perf_mb_limits_set(const struct scmi_protocol_handle *ph, 368 364 u32 domain, u32 max_perf, u32 min_perf) 369 365 { ··· 362 426 if (PROTOCOL_REV_MAJOR(pi->version) >= 0x3 && !max_perf && !min_perf) 363 427 return -EINVAL; 364 428 365 - if (dom->fc_info && dom->fc_info->limit_set_addr) { 366 - iowrite32(max_perf, dom->fc_info->limit_set_addr); 367 - iowrite32(min_perf, dom->fc_info->limit_set_addr + 4); 368 - scmi_perf_fc_ring_db(dom->fc_info->limit_set_db); 429 + if (dom->fc_info && dom->fc_info[PERF_FC_LIMIT].set_addr) { 430 + struct scmi_fc_info *fci = &dom->fc_info[PERF_FC_LIMIT]; 431 + 432 + trace_scmi_fc_call(SCMI_PROTOCOL_PERF, PERF_LIMITS_SET, 433 + domain, min_perf, max_perf); 434 + iowrite32(max_perf, fci->set_addr); 435 + iowrite32(min_perf, fci->set_addr + 4); 436 + ph->hops->fastchannel_db_ring(fci->set_db); 369 437 return 0; 370 438 } 371 439 ··· 408 468 struct scmi_perf_info *pi = ph->get_priv(ph); 409 469 struct perf_dom_info *dom = pi->dom_info + domain; 410 470 411 - if (dom->fc_info && dom->fc_info->limit_get_addr) { 412 - *max_perf = ioread32(dom->fc_info->limit_get_addr); 413 - *min_perf = ioread32(dom->fc_info->limit_get_addr + 4); 471 + if (dom->fc_info && dom->fc_info[PERF_FC_LIMIT].get_addr) { 472 + struct scmi_fc_info *fci = &dom->fc_info[PERF_FC_LIMIT]; 473 + 474 + *max_perf = ioread32(fci->get_addr); 475 + *min_perf = 
ioread32(fci->get_addr + 4); 476 + trace_scmi_fc_call(SCMI_PROTOCOL_PERF, PERF_LIMITS_GET, 477 + domain, *min_perf, *max_perf); 414 478 return 0; 415 479 } 416 480 ··· 449 505 struct scmi_perf_info *pi = ph->get_priv(ph); 450 506 struct perf_dom_info *dom = pi->dom_info + domain; 451 507 452 - if (dom->fc_info && dom->fc_info->level_set_addr) { 453 - iowrite32(level, dom->fc_info->level_set_addr); 454 - scmi_perf_fc_ring_db(dom->fc_info->level_set_db); 508 + if (dom->fc_info && dom->fc_info[PERF_FC_LEVEL].set_addr) { 509 + struct scmi_fc_info *fci = &dom->fc_info[PERF_FC_LEVEL]; 510 + 511 + trace_scmi_fc_call(SCMI_PROTOCOL_PERF, PERF_LEVEL_SET, 512 + domain, level, 0); 513 + iowrite32(level, fci->set_addr); 514 + ph->hops->fastchannel_db_ring(fci->set_db); 455 515 return 0; 456 516 } 457 517 ··· 490 542 struct scmi_perf_info *pi = ph->get_priv(ph); 491 543 struct perf_dom_info *dom = pi->dom_info + domain; 492 544 493 - if (dom->fc_info && dom->fc_info->level_get_addr) { 494 - *level = ioread32(dom->fc_info->level_get_addr); 545 + if (dom->fc_info && dom->fc_info[PERF_FC_LEVEL].get_addr) { 546 + *level = ioread32(dom->fc_info[PERF_FC_LEVEL].get_addr); 547 + trace_scmi_fc_call(SCMI_PROTOCOL_PERF, PERF_LEVEL_GET, 548 + domain, *level, 0); 495 549 return 0; 496 550 } 497 551 ··· 522 572 return ret; 523 573 } 524 574 525 - static bool scmi_perf_fc_size_is_valid(u32 msg, u32 size) 526 - { 527 - if ((msg == PERF_LEVEL_GET || msg == PERF_LEVEL_SET) && size == 4) 528 - return true; 529 - if ((msg == PERF_LIMITS_GET || msg == PERF_LIMITS_SET) && size == 8) 530 - return true; 531 - return false; 532 - } 533 - 534 - static void 535 - scmi_perf_domain_desc_fc(const struct scmi_protocol_handle *ph, u32 domain, 536 - u32 message_id, void __iomem **p_addr, 537 - struct scmi_fc_db_info **p_db) 538 - { 539 - int ret; 540 - u32 flags; 541 - u64 phys_addr; 542 - u8 size; 543 - void __iomem *addr; 544 - struct scmi_xfer *t; 545 - struct scmi_fc_db_info *db; 546 - struct 
scmi_perf_get_fc_info *info; 547 - struct scmi_msg_resp_perf_desc_fc *resp; 548 - 549 - if (!p_addr) 550 - return; 551 - 552 - ret = ph->xops->xfer_get_init(ph, PERF_DESCRIBE_FASTCHANNEL, 553 - sizeof(*info), sizeof(*resp), &t); 554 - if (ret) 555 - return; 556 - 557 - info = t->tx.buf; 558 - info->domain = cpu_to_le32(domain); 559 - info->message_id = cpu_to_le32(message_id); 560 - 561 - ret = ph->xops->do_xfer(ph, t); 562 - if (ret) 563 - goto err_xfer; 564 - 565 - resp = t->rx.buf; 566 - flags = le32_to_cpu(resp->attr); 567 - size = le32_to_cpu(resp->chan_size); 568 - if (!scmi_perf_fc_size_is_valid(message_id, size)) 569 - goto err_xfer; 570 - 571 - phys_addr = le32_to_cpu(resp->chan_addr_low); 572 - phys_addr |= (u64)le32_to_cpu(resp->chan_addr_high) << 32; 573 - addr = devm_ioremap(ph->dev, phys_addr, size); 574 - if (!addr) 575 - goto err_xfer; 576 - *p_addr = addr; 577 - 578 - if (p_db && SUPPORTS_DOORBELL(flags)) { 579 - db = devm_kzalloc(ph->dev, sizeof(*db), GFP_KERNEL); 580 - if (!db) 581 - goto err_xfer; 582 - 583 - size = 1 << DOORBELL_REG_WIDTH(flags); 584 - phys_addr = le32_to_cpu(resp->db_addr_low); 585 - phys_addr |= (u64)le32_to_cpu(resp->db_addr_high) << 32; 586 - addr = devm_ioremap(ph->dev, phys_addr, size); 587 - if (!addr) 588 - goto err_xfer; 589 - 590 - db->addr = addr; 591 - db->width = size; 592 - db->set = le32_to_cpu(resp->db_set_lmask); 593 - db->set |= (u64)le32_to_cpu(resp->db_set_hmask) << 32; 594 - db->mask = le32_to_cpu(resp->db_preserve_lmask); 595 - db->mask |= (u64)le32_to_cpu(resp->db_preserve_hmask) << 32; 596 - *p_db = db; 597 - } 598 - err_xfer: 599 - ph->xops->xfer_put(ph, t); 600 - } 601 - 602 575 static void scmi_perf_domain_init_fc(const struct scmi_protocol_handle *ph, 603 576 u32 domain, struct scmi_fc_info **p_fc) 604 577 { 605 578 struct scmi_fc_info *fc; 606 579 607 - fc = devm_kzalloc(ph->dev, sizeof(*fc), GFP_KERNEL); 580 + fc = devm_kcalloc(ph->dev, PERF_FC_MAX, sizeof(*fc), GFP_KERNEL); 608 581 if (!fc) 609 
582 return; 610 583 611 - scmi_perf_domain_desc_fc(ph, domain, PERF_LEVEL_SET, 612 - &fc->level_set_addr, &fc->level_set_db); 613 - scmi_perf_domain_desc_fc(ph, domain, PERF_LEVEL_GET, 614 - &fc->level_get_addr, NULL); 615 - scmi_perf_domain_desc_fc(ph, domain, PERF_LIMITS_SET, 616 - &fc->limit_set_addr, &fc->limit_set_db); 617 - scmi_perf_domain_desc_fc(ph, domain, PERF_LIMITS_GET, 618 - &fc->limit_get_addr, NULL); 584 + ph->hops->fastchannel_init(ph, PERF_DESCRIBE_FASTCHANNEL, 585 + PERF_LEVEL_SET, 4, domain, 586 + &fc[PERF_FC_LEVEL].set_addr, 587 + &fc[PERF_FC_LEVEL].set_db); 588 + 589 + ph->hops->fastchannel_init(ph, PERF_DESCRIBE_FASTCHANNEL, 590 + PERF_LEVEL_GET, 4, domain, 591 + &fc[PERF_FC_LEVEL].get_addr, NULL); 592 + 593 + ph->hops->fastchannel_init(ph, PERF_DESCRIBE_FASTCHANNEL, 594 + PERF_LIMITS_SET, 8, domain, 595 + &fc[PERF_FC_LIMIT].set_addr, 596 + &fc[PERF_FC_LIMIT].set_db); 597 + 598 + ph->hops->fastchannel_init(ph, PERF_DESCRIBE_FASTCHANNEL, 599 + PERF_LIMITS_GET, 8, domain, 600 + &fc[PERF_FC_LIMIT].get_addr, NULL); 601 + 619 602 *p_fc = fc; 620 603 } 621 604 ··· 672 789 673 790 dom = pi->dom_info + scmi_dev_domain_id(dev); 674 791 675 - return dom->fc_info && dom->fc_info->level_set_addr; 792 + return dom->fc_info && dom->fc_info[PERF_FC_LEVEL].set_addr; 676 793 } 677 794 678 795 static bool scmi_power_scale_mw_get(const struct scmi_protocol_handle *ph)
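The perf.c refactor above collapses the four named fastchannel pointers (`level_set_addr`, `limit_set_addr`, ...) into an array indexed by `PERF_FC_LEVEL`/`PERF_FC_LIMIT`, and the LIMITS channel is an 8-byte region with max perf at offset 0 and min perf at offset 4 (hence `valid_size` of 8 vs 4 in the `fastchannel_init` calls). A sketch of the indexed lookup and the fallback check, with plain pointers simulating the `__iomem` side (the struct and function names here mirror the patch but are not the kernel definitions):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

enum { PERF_FC_LEVEL, PERF_FC_LIMIT, PERF_FC_MAX };

/* Simulated subset of scmi_fc_info: one get/set address pair per index. */
struct fc_info {
	volatile uint32_t *set_addr;
	volatile uint32_t *get_addr;
};

/* LIMITS fastchannel layout: 8 bytes, max at offset 0, min at offset 4. */
static int limits_set(struct fc_info *fc, uint32_t max_perf, uint32_t min_perf)
{
	struct fc_info *fci = &fc[PERF_FC_LIMIT];

	if (!fci->set_addr)
		return -1;	/* no fastchannel: fall back to messaging */
	fci->set_addr[0] = max_perf;
	fci->set_addr[1] = min_perf;
	return 0;
}
```

The `fc_info && fc_info[...].set_addr` guard matches the patch: `fastchannel_init()` leaves the address NULL when the platform does not advertise a fastchannel for that message, so callers silently drop back to the regular transport.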
+866
drivers/firmware/arm_scmi/powercap.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * System Control and Management Interface (SCMI) Powercap Protocol 4 + * 5 + * Copyright (C) 2022 ARM Ltd. 6 + */ 7 + 8 + #define pr_fmt(fmt) "SCMI Notifications POWERCAP - " fmt 9 + 10 + #include <linux/bitfield.h> 11 + #include <linux/io.h> 12 + #include <linux/module.h> 13 + #include <linux/scmi_protocol.h> 14 + 15 + #include <trace/events/scmi.h> 16 + 17 + #include "protocols.h" 18 + #include "notify.h" 19 + 20 + enum scmi_powercap_protocol_cmd { 21 + POWERCAP_DOMAIN_ATTRIBUTES = 0x3, 22 + POWERCAP_CAP_GET = 0x4, 23 + POWERCAP_CAP_SET = 0x5, 24 + POWERCAP_PAI_GET = 0x6, 25 + POWERCAP_PAI_SET = 0x7, 26 + POWERCAP_DOMAIN_NAME_GET = 0x8, 27 + POWERCAP_MEASUREMENTS_GET = 0x9, 28 + POWERCAP_CAP_NOTIFY = 0xa, 29 + POWERCAP_MEASUREMENTS_NOTIFY = 0xb, 30 + POWERCAP_DESCRIBE_FASTCHANNEL = 0xc, 31 + }; 32 + 33 + enum { 34 + POWERCAP_FC_CAP, 35 + POWERCAP_FC_PAI, 36 + POWERCAP_FC_MAX, 37 + }; 38 + 39 + struct scmi_msg_resp_powercap_domain_attributes { 40 + __le32 attributes; 41 + #define SUPPORTS_POWERCAP_CAP_CHANGE_NOTIFY(x) ((x) & BIT(31)) 42 + #define SUPPORTS_POWERCAP_MEASUREMENTS_CHANGE_NOTIFY(x) ((x) & BIT(30)) 43 + #define SUPPORTS_ASYNC_POWERCAP_CAP_SET(x) ((x) & BIT(29)) 44 + #define SUPPORTS_EXTENDED_NAMES(x) ((x) & BIT(28)) 45 + #define SUPPORTS_POWERCAP_CAP_CONFIGURATION(x) ((x) & BIT(27)) 46 + #define SUPPORTS_POWERCAP_MONITORING(x) ((x) & BIT(26)) 47 + #define SUPPORTS_POWERCAP_PAI_CONFIGURATION(x) ((x) & BIT(25)) 48 + #define SUPPORTS_POWERCAP_FASTCHANNELS(x) ((x) & BIT(22)) 49 + #define POWERCAP_POWER_UNIT(x) \ 50 + (FIELD_GET(GENMASK(24, 23), (x))) 51 + #define SUPPORTS_POWER_UNITS_MW(x) \ 52 + (POWERCAP_POWER_UNIT(x) == 0x2) 53 + #define SUPPORTS_POWER_UNITS_UW(x) \ 54 + (POWERCAP_POWER_UNIT(x) == 0x1) 55 + u8 name[SCMI_SHORT_NAME_MAX_SIZE]; 56 + __le32 min_pai; 57 + __le32 max_pai; 58 + __le32 pai_step; 59 + __le32 min_power_cap; 60 + __le32 max_power_cap; 61 + __le32 power_cap_step; 62 + __le32 
sustainable_power; 63 + __le32 accuracy; 64 + __le32 parent_id; 65 + }; 66 + 67 + struct scmi_msg_powercap_set_cap_or_pai { 68 + __le32 domain; 69 + __le32 flags; 70 + #define CAP_SET_ASYNC BIT(1) 71 + #define CAP_SET_IGNORE_DRESP BIT(0) 72 + __le32 value; 73 + }; 74 + 75 + struct scmi_msg_resp_powercap_cap_set_complete { 76 + __le32 domain; 77 + __le32 power_cap; 78 + }; 79 + 80 + struct scmi_msg_resp_powercap_meas_get { 81 + __le32 power; 82 + __le32 pai; 83 + }; 84 + 85 + struct scmi_msg_powercap_notify_cap { 86 + __le32 domain; 87 + __le32 notify_enable; 88 + }; 89 + 90 + struct scmi_msg_powercap_notify_thresh { 91 + __le32 domain; 92 + __le32 notify_enable; 93 + __le32 power_thresh_low; 94 + __le32 power_thresh_high; 95 + }; 96 + 97 + struct scmi_powercap_cap_changed_notify_payld { 98 + __le32 agent_id; 99 + __le32 domain_id; 100 + __le32 power_cap; 101 + __le32 pai; 102 + }; 103 + 104 + struct scmi_powercap_meas_changed_notify_payld { 105 + __le32 agent_id; 106 + __le32 domain_id; 107 + __le32 power; 108 + }; 109 + 110 + struct scmi_powercap_state { 111 + bool meas_notif_enabled; 112 + u64 thresholds; 113 + #define THRESH_LOW(p, id) \ 114 + (lower_32_bits((p)->states[(id)].thresholds)) 115 + #define THRESH_HIGH(p, id) \ 116 + (upper_32_bits((p)->states[(id)].thresholds)) 117 + }; 118 + 119 + struct powercap_info { 120 + u32 version; 121 + int num_domains; 122 + struct scmi_powercap_state *states; 123 + struct scmi_powercap_info *powercaps; 124 + }; 125 + 126 + static enum scmi_powercap_protocol_cmd evt_2_cmd[] = { 127 + POWERCAP_CAP_NOTIFY, 128 + POWERCAP_MEASUREMENTS_NOTIFY, 129 + }; 130 + 131 + static int scmi_powercap_notify(const struct scmi_protocol_handle *ph, 132 + u32 domain, int message_id, bool enable); 133 + 134 + static int 135 + scmi_powercap_attributes_get(const struct scmi_protocol_handle *ph, 136 + struct powercap_info *pi) 137 + { 138 + int ret; 139 + struct scmi_xfer *t; 140 + 141 + ret = ph->xops->xfer_get_init(ph, PROTOCOL_ATTRIBUTES, 0, 
142 + sizeof(u32), &t); 143 + if (ret) 144 + return ret; 145 + 146 + ret = ph->xops->do_xfer(ph, t); 147 + if (!ret) { 148 + u32 attributes; 149 + 150 + attributes = get_unaligned_le32(t->rx.buf); 151 + pi->num_domains = FIELD_GET(GENMASK(15, 0), attributes); 152 + } 153 + 154 + ph->xops->xfer_put(ph, t); 155 + return ret; 156 + } 157 + 158 + static inline int 159 + scmi_powercap_validate(unsigned int min_val, unsigned int max_val, 160 + unsigned int step_val, bool configurable) 161 + { 162 + if (!min_val || !max_val) 163 + return -EPROTO; 164 + 165 + if ((configurable && min_val == max_val) || 166 + (!configurable && min_val != max_val)) 167 + return -EPROTO; 168 + 169 + if (min_val != max_val && !step_val) 170 + return -EPROTO; 171 + 172 + return 0; 173 + } 174 + 175 + static int 176 + scmi_powercap_domain_attributes_get(const struct scmi_protocol_handle *ph, 177 + struct powercap_info *pinfo, u32 domain) 178 + { 179 + int ret; 180 + u32 flags; 181 + struct scmi_xfer *t; 182 + struct scmi_powercap_info *dom_info = pinfo->powercaps + domain; 183 + struct scmi_msg_resp_powercap_domain_attributes *resp; 184 + 185 + ret = ph->xops->xfer_get_init(ph, POWERCAP_DOMAIN_ATTRIBUTES, 186 + sizeof(domain), sizeof(*resp), &t); 187 + if (ret) 188 + return ret; 189 + 190 + put_unaligned_le32(domain, t->tx.buf); 191 + resp = t->rx.buf; 192 + 193 + ret = ph->xops->do_xfer(ph, t); 194 + if (!ret) { 195 + flags = le32_to_cpu(resp->attributes); 196 + 197 + dom_info->id = domain; 198 + dom_info->notify_powercap_cap_change = 199 + SUPPORTS_POWERCAP_CAP_CHANGE_NOTIFY(flags); 200 + dom_info->notify_powercap_measurement_change = 201 + SUPPORTS_POWERCAP_MEASUREMENTS_CHANGE_NOTIFY(flags); 202 + dom_info->async_powercap_cap_set = 203 + SUPPORTS_ASYNC_POWERCAP_CAP_SET(flags); 204 + dom_info->powercap_cap_config = 205 + SUPPORTS_POWERCAP_CAP_CONFIGURATION(flags); 206 + dom_info->powercap_monitoring = 207 + SUPPORTS_POWERCAP_MONITORING(flags); 208 + dom_info->powercap_pai_config = 209 + 
SUPPORTS_POWERCAP_PAI_CONFIGURATION(flags); 210 + dom_info->powercap_scale_mw = 211 + SUPPORTS_POWER_UNITS_MW(flags); 212 + dom_info->powercap_scale_uw = 213 + SUPPORTS_POWER_UNITS_UW(flags); 214 + dom_info->fastchannels = 215 + SUPPORTS_POWERCAP_FASTCHANNELS(flags); 216 + 217 + strscpy(dom_info->name, resp->name, SCMI_SHORT_NAME_MAX_SIZE); 218 + 219 + dom_info->min_pai = le32_to_cpu(resp->min_pai); 220 + dom_info->max_pai = le32_to_cpu(resp->max_pai); 221 + dom_info->pai_step = le32_to_cpu(resp->pai_step); 222 + ret = scmi_powercap_validate(dom_info->min_pai, 223 + dom_info->max_pai, 224 + dom_info->pai_step, 225 + dom_info->powercap_pai_config); 226 + if (ret) { 227 + dev_err(ph->dev, 228 + "Platform reported inconsistent PAI config for domain %d - %s\n", 229 + dom_info->id, dom_info->name); 230 + goto clean; 231 + } 232 + 233 + dom_info->min_power_cap = le32_to_cpu(resp->min_power_cap); 234 + dom_info->max_power_cap = le32_to_cpu(resp->max_power_cap); 235 + dom_info->power_cap_step = le32_to_cpu(resp->power_cap_step); 236 + ret = scmi_powercap_validate(dom_info->min_power_cap, 237 + dom_info->max_power_cap, 238 + dom_info->power_cap_step, 239 + dom_info->powercap_cap_config); 240 + if (ret) { 241 + dev_err(ph->dev, 242 + "Platform reported inconsistent CAP config for domain %d - %s\n", 243 + dom_info->id, dom_info->name); 244 + goto clean; 245 + } 246 + 247 + dom_info->sustainable_power = 248 + le32_to_cpu(resp->sustainable_power); 249 + dom_info->accuracy = le32_to_cpu(resp->accuracy); 250 + 251 + dom_info->parent_id = le32_to_cpu(resp->parent_id); 252 + if (dom_info->parent_id != SCMI_POWERCAP_ROOT_ZONE_ID && 253 + (dom_info->parent_id >= pinfo->num_domains || 254 + dom_info->parent_id == dom_info->id)) { 255 + dev_err(ph->dev, 256 + "Platform reported inconsistent parent ID for domain %d - %s\n", 257 + dom_info->id, dom_info->name); 258 + ret = -ENODEV; 259 + } 260 + } 261 + 262 + clean: 263 + ph->xops->xfer_put(ph, t); 264 + 265 + /* 266 + * If supported 
overwrite short name with the extended one; 267 + * on error just carry on and use already provided short name. 268 + */ 269 + if (!ret && SUPPORTS_EXTENDED_NAMES(flags)) 270 + ph->hops->extended_name_get(ph, POWERCAP_DOMAIN_NAME_GET, 271 + domain, dom_info->name, 272 + SCMI_MAX_STR_SIZE); 273 + 274 + return ret; 275 + } 276 + 277 + static int scmi_powercap_num_domains_get(const struct scmi_protocol_handle *ph) 278 + { 279 + struct powercap_info *pi = ph->get_priv(ph); 280 + 281 + return pi->num_domains; 282 + } 283 + 284 + static const struct scmi_powercap_info * 285 + scmi_powercap_dom_info_get(const struct scmi_protocol_handle *ph, u32 domain_id) 286 + { 287 + struct powercap_info *pi = ph->get_priv(ph); 288 + 289 + if (domain_id >= pi->num_domains) 290 + return NULL; 291 + 292 + return pi->powercaps + domain_id; 293 + } 294 + 295 + static int scmi_powercap_xfer_cap_get(const struct scmi_protocol_handle *ph, 296 + u32 domain_id, u32 *power_cap) 297 + { 298 + int ret; 299 + struct scmi_xfer *t; 300 + 301 + ret = ph->xops->xfer_get_init(ph, POWERCAP_CAP_GET, sizeof(u32), 302 + sizeof(u32), &t); 303 + if (ret) 304 + return ret; 305 + 306 + put_unaligned_le32(domain_id, t->tx.buf); 307 + ret = ph->xops->do_xfer(ph, t); 308 + if (!ret) 309 + *power_cap = get_unaligned_le32(t->rx.buf); 310 + 311 + ph->xops->xfer_put(ph, t); 312 + 313 + return ret; 314 + } 315 + 316 + static int scmi_powercap_cap_get(const struct scmi_protocol_handle *ph, 317 + u32 domain_id, u32 *power_cap) 318 + { 319 + struct scmi_powercap_info *dom; 320 + struct powercap_info *pi = ph->get_priv(ph); 321 + 322 + if (!power_cap || domain_id >= pi->num_domains) 323 + return -EINVAL; 324 + 325 + dom = pi->powercaps + domain_id; 326 + if (dom->fc_info && dom->fc_info[POWERCAP_FC_CAP].get_addr) { 327 + *power_cap = ioread32(dom->fc_info[POWERCAP_FC_CAP].get_addr); 328 + trace_scmi_fc_call(SCMI_PROTOCOL_POWERCAP, POWERCAP_CAP_GET, 329 + domain_id, *power_cap, 0); 330 + return 0; 331 + } 332 + 333 + return 
scmi_powercap_xfer_cap_get(ph, domain_id, power_cap); 334 + } 335 + 336 + static int scmi_powercap_xfer_cap_set(const struct scmi_protocol_handle *ph, 337 + const struct scmi_powercap_info *pc, 338 + u32 power_cap, bool ignore_dresp) 339 + { 340 + int ret; 341 + struct scmi_xfer *t; 342 + struct scmi_msg_powercap_set_cap_or_pai *msg; 343 + 344 + ret = ph->xops->xfer_get_init(ph, POWERCAP_CAP_SET, 345 + sizeof(*msg), 0, &t); 346 + if (ret) 347 + return ret; 348 + 349 + msg = t->tx.buf; 350 + msg->domain = cpu_to_le32(pc->id); 351 + msg->flags = 352 + cpu_to_le32(FIELD_PREP(CAP_SET_ASYNC, !!pc->async_powercap_cap_set) | 353 + FIELD_PREP(CAP_SET_IGNORE_DRESP, !!ignore_dresp)); 354 + msg->value = cpu_to_le32(power_cap); 355 + 356 + if (!pc->async_powercap_cap_set || ignore_dresp) { 357 + ret = ph->xops->do_xfer(ph, t); 358 + } else { 359 + ret = ph->xops->do_xfer_with_response(ph, t); 360 + if (!ret) { 361 + struct scmi_msg_resp_powercap_cap_set_complete *resp; 362 + 363 + resp = t->rx.buf; 364 + if (le32_to_cpu(resp->domain) == pc->id) 365 + dev_dbg(ph->dev, 366 + "Powercap ID %d CAP set async to %u\n", 367 + pc->id, 368 + get_unaligned_le32(&resp->power_cap)); 369 + else 370 + ret = -EPROTO; 371 + } 372 + } 373 + 374 + ph->xops->xfer_put(ph, t); 375 + return ret; 376 + } 377 + 378 + static int scmi_powercap_cap_set(const struct scmi_protocol_handle *ph, 379 + u32 domain_id, u32 power_cap, 380 + bool ignore_dresp) 381 + { 382 + const struct scmi_powercap_info *pc; 383 + 384 + pc = scmi_powercap_dom_info_get(ph, domain_id); 385 + if (!pc || !pc->powercap_cap_config || !power_cap || 386 + power_cap < pc->min_power_cap || 387 + power_cap > pc->max_power_cap) 388 + return -EINVAL; 389 + 390 + if (pc->fc_info && pc->fc_info[POWERCAP_FC_CAP].set_addr) { 391 + struct scmi_fc_info *fci = &pc->fc_info[POWERCAP_FC_CAP]; 392 + 393 + iowrite32(power_cap, fci->set_addr); 394 + ph->hops->fastchannel_db_ring(fci->set_db); 395 + trace_scmi_fc_call(SCMI_PROTOCOL_POWERCAP, 
POWERCAP_CAP_SET, 396 + domain_id, power_cap, 0); 397 + return 0; 398 + } 399 + 400 + return scmi_powercap_xfer_cap_set(ph, pc, power_cap, ignore_dresp); 401 + } 402 + 403 + static int scmi_powercap_xfer_pai_get(const struct scmi_protocol_handle *ph, 404 + u32 domain_id, u32 *pai) 405 + { 406 + int ret; 407 + struct scmi_xfer *t; 408 + 409 + ret = ph->xops->xfer_get_init(ph, POWERCAP_PAI_GET, sizeof(u32), 410 + sizeof(u32), &t); 411 + if (ret) 412 + return ret; 413 + 414 + put_unaligned_le32(domain_id, t->tx.buf); 415 + ret = ph->xops->do_xfer(ph, t); 416 + if (!ret) 417 + *pai = get_unaligned_le32(t->rx.buf); 418 + 419 + ph->xops->xfer_put(ph, t); 420 + 421 + return ret; 422 + } 423 + 424 + static int scmi_powercap_pai_get(const struct scmi_protocol_handle *ph, 425 + u32 domain_id, u32 *pai) 426 + { 427 + struct scmi_powercap_info *dom; 428 + struct powercap_info *pi = ph->get_priv(ph); 429 + 430 + if (!pai || domain_id >= pi->num_domains) 431 + return -EINVAL; 432 + 433 + dom = pi->powercaps + domain_id; 434 + if (dom->fc_info && dom->fc_info[POWERCAP_FC_PAI].get_addr) { 435 + *pai = ioread32(dom->fc_info[POWERCAP_FC_PAI].get_addr); 436 + trace_scmi_fc_call(SCMI_PROTOCOL_POWERCAP, POWERCAP_PAI_GET, 437 + domain_id, *pai, 0); 438 + return 0; 439 + } 440 + 441 + return scmi_powercap_xfer_pai_get(ph, domain_id, pai); 442 + } 443 + 444 + static int scmi_powercap_xfer_pai_set(const struct scmi_protocol_handle *ph, 445 + u32 domain_id, u32 pai) 446 + { 447 + int ret; 448 + struct scmi_xfer *t; 449 + struct scmi_msg_powercap_set_cap_or_pai *msg; 450 + 451 + ret = ph->xops->xfer_get_init(ph, POWERCAP_PAI_SET, 452 + sizeof(*msg), 0, &t); 453 + if (ret) 454 + return ret; 455 + 456 + msg = t->tx.buf; 457 + msg->domain = cpu_to_le32(domain_id); 458 + msg->flags = cpu_to_le32(0); 459 + msg->value = cpu_to_le32(pai); 460 + 461 + ret = ph->xops->do_xfer(ph, t); 462 + 463 + ph->xops->xfer_put(ph, t); 464 + return ret; 465 + } 466 + 467 + static int scmi_powercap_pai_set(const 
struct scmi_protocol_handle *ph, 468 + u32 domain_id, u32 pai) 469 + { 470 + const struct scmi_powercap_info *pc; 471 + 472 + pc = scmi_powercap_dom_info_get(ph, domain_id); 473 + if (!pc || !pc->powercap_pai_config || !pai || 474 + pai < pc->min_pai || pai > pc->max_pai) 475 + return -EINVAL; 476 + 477 + if (pc->fc_info && pc->fc_info[POWERCAP_FC_PAI].set_addr) { 478 + struct scmi_fc_info *fci = &pc->fc_info[POWERCAP_FC_PAI]; 479 + 480 + trace_scmi_fc_call(SCMI_PROTOCOL_POWERCAP, POWERCAP_PAI_SET, 481 + domain_id, pai, 0); 482 + iowrite32(pai, fci->set_addr); 483 + ph->hops->fastchannel_db_ring(fci->set_db); 484 + return 0; 485 + } 486 + 487 + return scmi_powercap_xfer_pai_set(ph, domain_id, pai); 488 + } 489 + 490 + static int scmi_powercap_measurements_get(const struct scmi_protocol_handle *ph, 491 + u32 domain_id, u32 *average_power, 492 + u32 *pai) 493 + { 494 + int ret; 495 + struct scmi_xfer *t; 496 + struct scmi_msg_resp_powercap_meas_get *resp; 497 + const struct scmi_powercap_info *pc; 498 + 499 + pc = scmi_powercap_dom_info_get(ph, domain_id); 500 + if (!pc || !pc->powercap_monitoring || !pai || !average_power) 501 + return -EINVAL; 502 + 503 + ret = ph->xops->xfer_get_init(ph, POWERCAP_MEASUREMENTS_GET, 504 + sizeof(u32), sizeof(*resp), &t); 505 + if (ret) 506 + return ret; 507 + 508 + resp = t->rx.buf; 509 + put_unaligned_le32(domain_id, t->tx.buf); 510 + ret = ph->xops->do_xfer(ph, t); 511 + if (!ret) { 512 + *average_power = le32_to_cpu(resp->power); 513 + *pai = le32_to_cpu(resp->pai); 514 + } 515 + 516 + ph->xops->xfer_put(ph, t); 517 + return ret; 518 + } 519 + 520 + static int 521 + scmi_powercap_measurements_threshold_get(const struct scmi_protocol_handle *ph, 522 + u32 domain_id, u32 *power_thresh_low, 523 + u32 *power_thresh_high) 524 + { 525 + struct powercap_info *pi = ph->get_priv(ph); 526 + 527 + if (!power_thresh_low || !power_thresh_high || 528 + domain_id >= pi->num_domains) 529 + return -EINVAL; 530 + 531 + *power_thresh_low = 
THRESH_LOW(pi, domain_id); 532 + *power_thresh_high = THRESH_HIGH(pi, domain_id); 533 + 534 + return 0; 535 + } 536 + 537 + static int 538 + scmi_powercap_measurements_threshold_set(const struct scmi_protocol_handle *ph, 539 + u32 domain_id, u32 power_thresh_low, 540 + u32 power_thresh_high) 541 + { 542 + int ret = 0; 543 + struct powercap_info *pi = ph->get_priv(ph); 544 + 545 + if (domain_id >= pi->num_domains || 546 + power_thresh_low > power_thresh_high) 547 + return -EINVAL; 548 + 549 + /* Anything to do ? */ 550 + if (THRESH_LOW(pi, domain_id) == power_thresh_low && 551 + THRESH_HIGH(pi, domain_id) == power_thresh_high) 552 + return ret; 553 + 554 + pi->states[domain_id].thresholds = 555 + (FIELD_PREP(GENMASK_ULL(31, 0), power_thresh_low) | 556 + FIELD_PREP(GENMASK_ULL(63, 32), power_thresh_high)); 557 + 558 + /* Update thresholds if notification already enabled */ 559 + if (pi->states[domain_id].meas_notif_enabled) 560 + ret = scmi_powercap_notify(ph, domain_id, 561 + POWERCAP_MEASUREMENTS_NOTIFY, 562 + true); 563 + 564 + return ret; 565 + } 566 + 567 + static const struct scmi_powercap_proto_ops powercap_proto_ops = { 568 + .num_domains_get = scmi_powercap_num_domains_get, 569 + .info_get = scmi_powercap_dom_info_get, 570 + .cap_get = scmi_powercap_cap_get, 571 + .cap_set = scmi_powercap_cap_set, 572 + .pai_get = scmi_powercap_pai_get, 573 + .pai_set = scmi_powercap_pai_set, 574 + .measurements_get = scmi_powercap_measurements_get, 575 + .measurements_threshold_set = scmi_powercap_measurements_threshold_set, 576 + .measurements_threshold_get = scmi_powercap_measurements_threshold_get, 577 + }; 578 + 579 + static void scmi_powercap_domain_init_fc(const struct scmi_protocol_handle *ph, 580 + u32 domain, struct scmi_fc_info **p_fc) 581 + { 582 + struct scmi_fc_info *fc; 583 + 584 + fc = devm_kcalloc(ph->dev, POWERCAP_FC_MAX, sizeof(*fc), GFP_KERNEL); 585 + if (!fc) 586 + return; 587 + 588 + ph->hops->fastchannel_init(ph, POWERCAP_DESCRIBE_FASTCHANNEL, 589 + 
POWERCAP_CAP_SET, 4, domain, 590 + &fc[POWERCAP_FC_CAP].set_addr, 591 + &fc[POWERCAP_FC_CAP].set_db); 592 + 593 + ph->hops->fastchannel_init(ph, POWERCAP_DESCRIBE_FASTCHANNEL, 594 + POWERCAP_CAP_GET, 4, domain, 595 + &fc[POWERCAP_FC_CAP].get_addr, NULL); 596 + 597 + ph->hops->fastchannel_init(ph, POWERCAP_DESCRIBE_FASTCHANNEL, 598 + POWERCAP_PAI_SET, 4, domain, 599 + &fc[POWERCAP_FC_PAI].set_addr, 600 + &fc[POWERCAP_FC_PAI].set_db); 601 + 602 + ph->hops->fastchannel_init(ph, POWERCAP_DESCRIBE_FASTCHANNEL, 603 + POWERCAP_PAI_GET, 4, domain, 604 + &fc[POWERCAP_FC_PAI].get_addr, NULL); 605 + 606 + *p_fc = fc; 607 + } 608 + 609 + static int scmi_powercap_notify(const struct scmi_protocol_handle *ph, 610 + u32 domain, int message_id, bool enable) 611 + { 612 + int ret; 613 + struct scmi_xfer *t; 614 + 615 + switch (message_id) { 616 + case POWERCAP_CAP_NOTIFY: 617 + { 618 + struct scmi_msg_powercap_notify_cap *notify; 619 + 620 + ret = ph->xops->xfer_get_init(ph, message_id, 621 + sizeof(*notify), 0, &t); 622 + if (ret) 623 + return ret; 624 + 625 + notify = t->tx.buf; 626 + notify->domain = cpu_to_le32(domain); 627 + notify->notify_enable = cpu_to_le32(enable ? BIT(0) : 0); 628 + break; 629 + } 630 + case POWERCAP_MEASUREMENTS_NOTIFY: 631 + { 632 + u32 low, high; 633 + struct scmi_msg_powercap_notify_thresh *notify; 634 + 635 + /* 636 + * Note that we have to pick the most recently configured 637 + * thresholds to build a proper POWERCAP_MEASUREMENTS_NOTIFY 638 + * enable request and we fail, complaining, if no thresholds 639 + * were ever set, since this is an indication the API has been 640 + * used wrongly. 
641 + */ 642 + ret = scmi_powercap_measurements_threshold_get(ph, domain, 643 + &low, &high); 644 + if (ret) 645 + return ret; 646 + 647 + if (enable && !low && !high) { 648 + dev_err(ph->dev, 649 + "Invalid Measurements Notify thresholds: %u/%u\n", 650 + low, high); 651 + return -EINVAL; 652 + } 653 + 654 + ret = ph->xops->xfer_get_init(ph, message_id, 655 + sizeof(*notify), 0, &t); 656 + if (ret) 657 + return ret; 658 + 659 + notify = t->tx.buf; 660 + notify->domain = cpu_to_le32(domain); 661 + notify->notify_enable = cpu_to_le32(enable ? BIT(0) : 0); 662 + notify->power_thresh_low = cpu_to_le32(low); 663 + notify->power_thresh_high = cpu_to_le32(high); 664 + break; 665 + } 666 + default: 667 + return -EINVAL; 668 + } 669 + 670 + ret = ph->xops->do_xfer(ph, t); 671 + 672 + ph->xops->xfer_put(ph, t); 673 + return ret; 674 + } 675 + 676 + static int 677 + scmi_powercap_set_notify_enabled(const struct scmi_protocol_handle *ph, 678 + u8 evt_id, u32 src_id, bool enable) 679 + { 680 + int ret, cmd_id; 681 + struct powercap_info *pi = ph->get_priv(ph); 682 + 683 + if (evt_id >= ARRAY_SIZE(evt_2_cmd) || src_id >= pi->num_domains) 684 + return -EINVAL; 685 + 686 + cmd_id = evt_2_cmd[evt_id]; 687 + ret = scmi_powercap_notify(ph, src_id, cmd_id, enable); 688 + if (ret) 689 + pr_debug("FAIL_ENABLED - evt[%X] dom[%d] - ret:%d\n", 690 + evt_id, src_id, ret); 691 + else if (cmd_id == POWERCAP_MEASUREMENTS_NOTIFY) 692 + /* 693 + * On success save the current notification enabled state, so 694 + * as to be able to properly update the notification thresholds 695 + * when they are modified on a domain for which measurement 696 + * notifications were currently enabled. 697 + * 698 + * This is needed because the SCMI Notification core machinery 699 + * and API does not support passing per-notification custom 700 + * arguments at callback registration time. 
701 + * 702 + * Note that this can be done here with a simple flag since the 703 + * SCMI core Notifications code takes care of keeping proper 704 + * per-domain enables refcounting, so that this helper function 705 + * will be called only once (for enables) when the first user 706 + * registers a callback on this domain and once more (disable) 707 + * when the last user de-registers its callback. 708 + */ 709 + pi->states[src_id].meas_notif_enabled = enable; 710 + 711 + return ret; 712 + } 713 + 714 + static void * 715 + scmi_powercap_fill_custom_report(const struct scmi_protocol_handle *ph, 716 + u8 evt_id, ktime_t timestamp, 717 + const void *payld, size_t payld_sz, 718 + void *report, u32 *src_id) 719 + { 720 + void *rep = NULL; 721 + 722 + switch (evt_id) { 723 + case SCMI_EVENT_POWERCAP_CAP_CHANGED: 724 + { 725 + const struct scmi_powercap_cap_changed_notify_payld *p = payld; 726 + struct scmi_powercap_cap_changed_report *r = report; 727 + 728 + if (sizeof(*p) != payld_sz) 729 + break; 730 + 731 + r->timestamp = timestamp; 732 + r->agent_id = le32_to_cpu(p->agent_id); 733 + r->domain_id = le32_to_cpu(p->domain_id); 734 + r->power_cap = le32_to_cpu(p->power_cap); 735 + r->pai = le32_to_cpu(p->pai); 736 + *src_id = r->domain_id; 737 + rep = r; 738 + break; 739 + } 740 + case SCMI_EVENT_POWERCAP_MEASUREMENTS_CHANGED: 741 + { 742 + const struct scmi_powercap_meas_changed_notify_payld *p = payld; 743 + struct scmi_powercap_meas_changed_report *r = report; 744 + 745 + if (sizeof(*p) != payld_sz) 746 + break; 747 + 748 + r->timestamp = timestamp; 749 + r->agent_id = le32_to_cpu(p->agent_id); 750 + r->domain_id = le32_to_cpu(p->domain_id); 751 + r->power = le32_to_cpu(p->power); 752 + *src_id = r->domain_id; 753 + rep = r; 754 + break; 755 + } 756 + default: 757 + break; 758 + } 759 + 760 + return rep; 761 + } 762 + 763 + static int 764 + scmi_powercap_get_num_sources(const struct scmi_protocol_handle *ph) 765 + { 766 + struct powercap_info *pi = ph->get_priv(ph); 
767 + 768 + if (!pi) 769 + return -EINVAL; 770 + 771 + return pi->num_domains; 772 + } 773 + 774 + static const struct scmi_event powercap_events[] = { 775 + { 776 + .id = SCMI_EVENT_POWERCAP_CAP_CHANGED, 777 + .max_payld_sz = 778 + sizeof(struct scmi_powercap_cap_changed_notify_payld), 779 + .max_report_sz = 780 + sizeof(struct scmi_powercap_cap_changed_report), 781 + }, 782 + { 783 + .id = SCMI_EVENT_POWERCAP_MEASUREMENTS_CHANGED, 784 + .max_payld_sz = 785 + sizeof(struct scmi_powercap_meas_changed_notify_payld), 786 + .max_report_sz = 787 + sizeof(struct scmi_powercap_meas_changed_report), 788 + }, 789 + }; 790 + 791 + static const struct scmi_event_ops powercap_event_ops = { 792 + .get_num_sources = scmi_powercap_get_num_sources, 793 + .set_notify_enabled = scmi_powercap_set_notify_enabled, 794 + .fill_custom_report = scmi_powercap_fill_custom_report, 795 + }; 796 + 797 + static const struct scmi_protocol_events powercap_protocol_events = { 798 + .queue_sz = SCMI_PROTO_QUEUE_SZ, 799 + .ops = &powercap_event_ops, 800 + .evts = powercap_events, 801 + .num_events = ARRAY_SIZE(powercap_events), 802 + }; 803 + 804 + static int 805 + scmi_powercap_protocol_init(const struct scmi_protocol_handle *ph) 806 + { 807 + int domain, ret; 808 + u32 version; 809 + struct powercap_info *pinfo; 810 + 811 + ret = ph->xops->version_get(ph, &version); 812 + if (ret) 813 + return ret; 814 + 815 + dev_dbg(ph->dev, "Powercap Version %d.%d\n", 816 + PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version)); 817 + 818 + pinfo = devm_kzalloc(ph->dev, sizeof(*pinfo), GFP_KERNEL); 819 + if (!pinfo) 820 + return -ENOMEM; 821 + 822 + ret = scmi_powercap_attributes_get(ph, pinfo); 823 + if (ret) 824 + return ret; 825 + 826 + pinfo->powercaps = devm_kcalloc(ph->dev, pinfo->num_domains, 827 + sizeof(*pinfo->powercaps), 828 + GFP_KERNEL); 829 + if (!pinfo->powercaps) 830 + return -ENOMEM; 831 + 832 + /* 833 + * Note that any failure in retrieving any domain attribute leads to 834 + * the whole 
Powercap protocol initialization failure: this way the 835 + * reported Powercap domains are all assured, when accessed, to be well 836 + * formed and correlated by sane parent-child relationship (if any). 837 + */ 838 + for (domain = 0; domain < pinfo->num_domains; domain++) { 839 + ret = scmi_powercap_domain_attributes_get(ph, pinfo, domain); 840 + if (ret) 841 + return ret; 842 + 843 + if (pinfo->powercaps[domain].fastchannels) 844 + scmi_powercap_domain_init_fc(ph, domain, 845 + &pinfo->powercaps[domain].fc_info); 846 + } 847 + 848 + pinfo->states = devm_kcalloc(ph->dev, pinfo->num_domains, 849 + sizeof(*pinfo->states), GFP_KERNEL); 850 + if (!pinfo->states) 851 + return -ENOMEM; 852 + 853 + pinfo->version = version; 854 + 855 + return ph->set_priv(ph, pinfo); 856 + } 857 + 858 + static const struct scmi_protocol scmi_powercap = { 859 + .id = SCMI_PROTOCOL_POWERCAP, 860 + .owner = THIS_MODULE, 861 + .instance_init = &scmi_powercap_protocol_init, 862 + .ops = &powercap_proto_ops, 863 + .events = &powercap_protocol_events, 864 + }; 865 + 866 + DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(powercap, scmi_powercap)
+23
drivers/firmware/arm_scmi/protocols.h
··· 215 215 struct scmi_iterator_state *st, void *priv); 216 216 }; 217 217 218 + struct scmi_fc_db_info { 219 + int width; 220 + u64 set; 221 + u64 mask; 222 + void __iomem *addr; 223 + }; 224 + 225 + struct scmi_fc_info { 226 + void __iomem *set_addr; 227 + void __iomem *get_addr; 228 + struct scmi_fc_db_info *set_db; 229 + }; 230 + 218 231 /** 219 232 * struct scmi_proto_helpers_ops - References to common protocol helpers 220 233 * @extended_name_get: A common helper function to retrieve extended naming ··· 243 230 * provided in @ops. 244 231 * @iter_response_run: A common helper to trigger the run of a previously 245 232 * initialized iterator. 233 + * @fastchannel_init: A common helper used to initialize FC descriptors by 234 + * gathering FC descriptions from the SCMI platform server. 235 + * @fastchannel_db_ring: A common helper to ring a FC doorbell. 246 236 */ 247 237 struct scmi_proto_helpers_ops { 248 238 int (*extended_name_get)(const struct scmi_protocol_handle *ph, ··· 255 239 unsigned int max_resources, u8 msg_id, 256 240 size_t tx_size, void *priv); 257 241 int (*iter_response_run)(void *iter); 242 + void (*fastchannel_init)(const struct scmi_protocol_handle *ph, 243 + u8 describe_id, u32 message_id, 244 + u32 valid_size, u32 domain, 245 + void __iomem **p_addr, 246 + struct scmi_fc_db_info **p_db); 247 + void (*fastchannel_db_ring)(struct scmi_fc_db_info *db); 258 248 }; 259 249 260 250 /** ··· 337 315 DECLARE_SCMI_REGISTER_UNREGISTER(sensors); 338 316 DECLARE_SCMI_REGISTER_UNREGISTER(voltage); 339 317 DECLARE_SCMI_REGISTER_UNREGISTER(system); 318 + DECLARE_SCMI_REGISTER_UNREGISTER(powercap); 340 319 341 320 #endif /* _SCMI_PROTOCOLS_H */
+362
drivers/firmware/arm_scmi/scmi_power_control.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * SCMI Generic SystemPower Control driver. 4 + * 5 + * Copyright (C) 2020-2022 ARM Ltd. 6 + */ 7 + /* 8 + * In order to handle platform originated SCMI SystemPower requests (like 9 + * shutdowns or cold/warm resets) we register an SCMI Notification notifier 10 + * block to react when such SCMI SystemPower events are emitted by the platform. 11 + * 12 + * Once such a notification is received we act accordingly to perform the 13 + * required system transition depending on the kind of request. 14 + * 15 + * Graceful requests are routed to userspace through the same API methods 16 + * (orderly_poweroff/reboot()) used by ACPI when handling ACPI Shutdown bus 17 + * events. 18 + * 19 + * Direct forceful requests are not supported since they are not meant to be sent 20 + * by the SCMI platform to an OSPM like Linux. 21 + * 22 + * Additionally, graceful request notifications can carry an optional timeout 23 + * field stating the maximum amount of time allowed by the platform for 24 + * completion, after which they are converted to forceful ones: the assumption 25 + * here is that even graceful requests can be upper-bound by a maximum final 26 + * timeout strictly enforced by the platform itself, which can ultimately cut 27 + * the power off at will anytime; in order to avoid such an extreme scenario, we 28 + * track progress of graceful requests by means of a reboot notifier 29 + * converting timed-out graceful requests to forceful ones, so at least we 30 + * try to perform a clean sync and shutdown/restart before the power is cut. 31 + * 32 + * Given the peculiar nature of the SCMI SystemPower protocol, that of being in 33 + * charge of triggering system wide shutdown/reboot events, there should be 34 + * only one SCMI platform actively emitting SystemPower events. 
35 + * For this reason the SCMI core takes care to enforce the creation of one 36 + * single unique device associated to the SCMI System Power protocol; no matter 37 + * how many SCMI platforms are defined on the system, only one can be designated 38 + * to support System Power: as a consequence this driver will never be probed 39 + * more than once. 40 + * 41 + * For similar reasons, as soon as the first valid SystemPower notification is received by 42 + * this driver and the shutdown/reboot is started, any further notification 43 + * possibly emitted by the platform will be ignored. 44 + */ 45 + 46 + #include <linux/math.h> 47 + #include <linux/module.h> 48 + #include <linux/mutex.h> 49 + #include <linux/printk.h> 50 + #include <linux/reboot.h> 51 + #include <linux/scmi_protocol.h> 52 + #include <linux/slab.h> 53 + #include <linux/time64.h> 54 + #include <linux/timer.h> 55 + #include <linux/types.h> 56 + #include <linux/workqueue.h> 57 + 58 + #ifndef MODULE 59 + #include <linux/fs.h> 60 + #endif 61 + 62 + enum scmi_syspower_state { 63 + SCMI_SYSPOWER_IDLE, 64 + SCMI_SYSPOWER_IN_PROGRESS, 65 + SCMI_SYSPOWER_REBOOTING 66 + }; 67 + 68 + /** 69 + * struct scmi_syspower_conf - Common configuration 70 + * 71 + * @dev: A reference device 72 + * @state: Current SystemPower state 73 + * @state_mtx: @state related mutex 74 + * @required_transition: The requested transition as described in the received 75 + * SCMI SystemPower notification 76 + * @userspace_nb: The notifier_block registered against the SCMI SystemPower 77 + * notification to start the needed userspace interactions. 78 + * @reboot_nb: A notifier_block optionally used to track reboot progress 79 + * @forceful_work: A worker used to trigger a forceful transition once a 80 + * graceful request has timed out. 
81 + */ 82 + struct scmi_syspower_conf { 83 + struct device *dev; 84 + enum scmi_syspower_state state; 85 + /* Protect access to state */ 86 + struct mutex state_mtx; 87 + enum scmi_system_events required_transition; 88 + 89 + struct notifier_block userspace_nb; 90 + struct notifier_block reboot_nb; 91 + 92 + struct delayed_work forceful_work; 93 + }; 94 + 95 + #define userspace_nb_to_sconf(x) \ 96 + container_of(x, struct scmi_syspower_conf, userspace_nb) 97 + 98 + #define reboot_nb_to_sconf(x) \ 99 + container_of(x, struct scmi_syspower_conf, reboot_nb) 100 + 101 + #define dwork_to_sconf(x) \ 102 + container_of(x, struct scmi_syspower_conf, forceful_work) 103 + 104 + /** 105 + * scmi_reboot_notifier - A reboot notifier to catch an ongoing successful 106 + * system transition 107 + * @nb: Reference to the related notifier block 108 + * @reason: The reason for the ongoing reboot 109 + * @__unused: The cmd being executed on a restart request (unused) 110 + * 111 + * When an ongoing system transition is detected, compatible with the one 112 + * requested by SCMI, cancel the delayed work. 
113 + * 114 + * Return: NOTIFY_OK in any case 115 + */ 116 + static int scmi_reboot_notifier(struct notifier_block *nb, 117 + unsigned long reason, void *__unused) 118 + { 119 + struct scmi_syspower_conf *sc = reboot_nb_to_sconf(nb); 120 + 121 + mutex_lock(&sc->state_mtx); 122 + switch (reason) { 123 + case SYS_HALT: 124 + case SYS_POWER_OFF: 125 + if (sc->required_transition == SCMI_SYSTEM_SHUTDOWN) 126 + sc->state = SCMI_SYSPOWER_REBOOTING; 127 + break; 128 + case SYS_RESTART: 129 + if (sc->required_transition == SCMI_SYSTEM_COLDRESET || 130 + sc->required_transition == SCMI_SYSTEM_WARMRESET) 131 + sc->state = SCMI_SYSPOWER_REBOOTING; 132 + break; 133 + default: 134 + break; 135 + } 136 + 137 + if (sc->state == SCMI_SYSPOWER_REBOOTING) { 138 + dev_dbg(sc->dev, "Reboot in progress...cancel delayed work.\n"); 139 + cancel_delayed_work_sync(&sc->forceful_work); 140 + } 141 + mutex_unlock(&sc->state_mtx); 142 + 143 + return NOTIFY_OK; 144 + } 145 + 146 + /** 147 + * scmi_request_forceful_transition - Request forceful SystemPower transition 148 + * @sc: A reference to the configuration data 149 + * 150 + * Initiates the required SystemPower transition without involving userspace: 151 + * just trigger the action at the kernel level after issuing an emergency 152 + * sync. 
(if possible at all) 153 + */ 154 + static inline void 155 + scmi_request_forceful_transition(struct scmi_syspower_conf *sc) 156 + { 157 + dev_dbg(sc->dev, "Serving forceful request:%d\n", 158 + sc->required_transition); 159 + 160 + #ifndef MODULE 161 + emergency_sync(); 162 + #endif 163 + switch (sc->required_transition) { 164 + case SCMI_SYSTEM_SHUTDOWN: 165 + kernel_power_off(); 166 + break; 167 + case SCMI_SYSTEM_COLDRESET: 168 + case SCMI_SYSTEM_WARMRESET: 169 + kernel_restart(NULL); 170 + break; 171 + default: 172 + break; 173 + } 174 + } 175 + 176 + static void scmi_forceful_work_func(struct work_struct *work) 177 + { 178 + struct scmi_syspower_conf *sc; 179 + struct delayed_work *dwork; 180 + 181 + if (system_state > SYSTEM_RUNNING) 182 + return; 183 + 184 + dwork = to_delayed_work(work); 185 + sc = dwork_to_sconf(dwork); 186 + 187 + dev_dbg(sc->dev, "Graceful request timed out...forcing !\n"); 188 + mutex_lock(&sc->state_mtx); 189 + /* avoid deadlock by unregistering reboot notifier first */ 190 + unregister_reboot_notifier(&sc->reboot_nb); 191 + if (sc->state == SCMI_SYSPOWER_IN_PROGRESS) 192 + scmi_request_forceful_transition(sc); 193 + mutex_unlock(&sc->state_mtx); 194 + } 195 + 196 + /** 197 + * scmi_request_graceful_transition - Request graceful SystemPower transition 198 + * @sc: A reference to the configuration data 199 + * @timeout_ms: The desired timeout to wait for the shutdown to complete before 200 + * system is forcibly shutdown. 201 + * 202 + * Initiates the required SystemPower transition, requesting userspace 203 + * co-operation: it uses the same orderly_ methods used by ACPI Shutdown event 204 + * processing. 205 + * 206 + * Takes care also to register a reboot notifier and to schedule a delayed work 207 + * in order to detect if userspace actions are taking too long and in such a 208 + * case to trigger a forceful transition. 
209 + */ 210 + static void scmi_request_graceful_transition(struct scmi_syspower_conf *sc, 211 + unsigned int timeout_ms) 212 + { 213 + unsigned int adj_timeout_ms = 0; 214 + 215 + if (timeout_ms) { 216 + int ret; 217 + 218 + sc->reboot_nb.notifier_call = &scmi_reboot_notifier; 219 + ret = register_reboot_notifier(&sc->reboot_nb); 220 + if (!ret) { 221 + /* Wait only up to 75% of the advertised timeout */ 222 + adj_timeout_ms = mult_frac(timeout_ms, 3, 4); 223 + INIT_DELAYED_WORK(&sc->forceful_work, 224 + scmi_forceful_work_func); 225 + schedule_delayed_work(&sc->forceful_work, 226 + msecs_to_jiffies(adj_timeout_ms)); 227 + } else { 228 + /* Carry on best effort even without a reboot notifier */ 229 + dev_warn(sc->dev, 230 + "Cannot register reboot notifier !\n"); 231 + } 232 + } 233 + 234 + dev_dbg(sc->dev, 235 + "Serving graceful req:%d (timeout_ms:%u adj_timeout_ms:%u)\n", 236 + sc->required_transition, timeout_ms, adj_timeout_ms); 237 + 238 + switch (sc->required_transition) { 239 + case SCMI_SYSTEM_SHUTDOWN: 240 + /* 241 + * When triggered early at boot-time the 'orderly' call will 242 + * partially fail due to the lack of userspace itself, but 243 + * the force=true argument will start anyway a successful 244 + * forced shutdown. 245 + */ 246 + orderly_poweroff(true); 247 + break; 248 + case SCMI_SYSTEM_COLDRESET: 249 + case SCMI_SYSTEM_WARMRESET: 250 + orderly_reboot(); 251 + break; 252 + default: 253 + break; 254 + } 255 + } 256 + 257 + /** 258 + * scmi_userspace_notifier - Notifier callback to act on SystemPower 259 + * Notifications 260 + * @nb: Reference to the related notifier block 261 + * @event: The SystemPower notification event id 262 + * @data: The SystemPower event report 263 + * 264 + * This callback is in charge of decoding the received SystemPower report 265 + * and act accordingly triggering a graceful or forceful system transition. 
266 + * 267 + * Note that once a valid SCMI SystemPower event starts being served, any 268 + * other following SystemPower notification received from the same SCMI 269 + * instance (handle) will be ignored. 270 + * 271 + * Return: NOTIFY_OK once a valid SystemPower event has been successfully 272 + * processed. 273 + */ 274 + static int scmi_userspace_notifier(struct notifier_block *nb, 275 + unsigned long event, void *data) 276 + { 277 + struct scmi_system_power_state_notifier_report *er = data; 278 + struct scmi_syspower_conf *sc = userspace_nb_to_sconf(nb); 279 + 280 + if (er->system_state >= SCMI_SYSTEM_POWERUP) { 281 + dev_err(sc->dev, "Ignoring unsupported system_state: 0x%X\n", 282 + er->system_state); 283 + return NOTIFY_DONE; 284 + } 285 + 286 + if (!SCMI_SYSPOWER_IS_REQUEST_GRACEFUL(er->flags)) { 287 + dev_err(sc->dev, "Ignoring forceful notification.\n"); 288 + return NOTIFY_DONE; 289 + } 290 + 291 + /* 292 + * Bail out if the system is already shutting down or an SCMI SystemPower 293 + * request is already being served. 294 + */ 295 + if (system_state > SYSTEM_RUNNING) 296 + return NOTIFY_DONE; 297 + mutex_lock(&sc->state_mtx); 298 + if (sc->state != SCMI_SYSPOWER_IDLE) { 299 + dev_dbg(sc->dev, 300 + "Transition already in progress...ignore.\n"); 301 + mutex_unlock(&sc->state_mtx); 302 + return NOTIFY_DONE; 303 + } 304 + sc->state = SCMI_SYSPOWER_IN_PROGRESS; 305 + mutex_unlock(&sc->state_mtx); 306 + 307 + sc->required_transition = er->system_state; 308 + 309 + /* Leaving a trace in logs of who triggered the shutdown/reboot. 
*/ 310 + dev_info(sc->dev, "Serving shutdown/reboot request: %d\n", 311 + sc->required_transition); 312 + 313 + scmi_request_graceful_transition(sc, er->timeout); 314 + 315 + return NOTIFY_OK; 316 + } 317 + 318 + static int scmi_syspower_probe(struct scmi_device *sdev) 319 + { 320 + int ret; 321 + struct scmi_syspower_conf *sc; 322 + struct scmi_handle *handle = sdev->handle; 323 + 324 + if (!handle) 325 + return -ENODEV; 326 + 327 + ret = handle->devm_protocol_acquire(sdev, SCMI_PROTOCOL_SYSTEM); 328 + if (ret) 329 + return ret; 330 + 331 + sc = devm_kzalloc(&sdev->dev, sizeof(*sc), GFP_KERNEL); 332 + if (!sc) 333 + return -ENOMEM; 334 + 335 + sc->state = SCMI_SYSPOWER_IDLE; 336 + mutex_init(&sc->state_mtx); 337 + sc->required_transition = SCMI_SYSTEM_MAX; 338 + sc->userspace_nb.notifier_call = &scmi_userspace_notifier; 339 + sc->dev = &sdev->dev; 340 + 341 + return handle->notify_ops->devm_event_notifier_register(sdev, 342 + SCMI_PROTOCOL_SYSTEM, 343 + SCMI_EVENT_SYSTEM_POWER_STATE_NOTIFIER, 344 + NULL, &sc->userspace_nb); 345 + } 346 + 347 + static const struct scmi_device_id scmi_id_table[] = { 348 + { SCMI_PROTOCOL_SYSTEM, "syspower" }, 349 + { }, 350 + }; 351 + MODULE_DEVICE_TABLE(scmi, scmi_id_table); 352 + 353 + static struct scmi_driver scmi_system_power_driver = { 354 + .name = "scmi-system-power", 355 + .probe = scmi_syspower_probe, 356 + .id_table = scmi_id_table, 357 + }; 358 + module_scmi_driver(scmi_system_power_driver); 359 + 360 + MODULE_AUTHOR("Cristian Marussi <cristian.marussi@arm.com>"); 361 + MODULE_DESCRIPTION("ARM SCMI SystemPower Control driver"); 362 + MODULE_LICENSE("GPL");
+16 -1
drivers/firmware/arm_scmi/system.c
··· 27 27 __le32 agent_id; 28 28 __le32 flags; 29 29 __le32 system_state; 30 + __le32 timeout; 30 31 }; 31 32 32 33 struct scmi_system_info { 33 34 u32 version; 35 + bool graceful_timeout_supported; 34 36 }; 35 37 36 38 static int scmi_system_request_notify(const struct scmi_protocol_handle *ph, ··· 74 72 const void *payld, size_t payld_sz, 75 73 void *report, u32 *src_id) 76 74 { 75 + size_t expected_sz; 77 76 const struct scmi_system_power_state_notifier_payld *p = payld; 78 77 struct scmi_system_power_state_notifier_report *r = report; 78 + struct scmi_system_info *pinfo = ph->get_priv(ph); 79 79 80 + expected_sz = pinfo->graceful_timeout_supported ? 81 + sizeof(*p) : sizeof(*p) - sizeof(__le32); 80 82 if (evt_id != SCMI_EVENT_SYSTEM_POWER_STATE_NOTIFIER || 81 - sizeof(*p) != payld_sz) 83 + payld_sz != expected_sz) 82 84 return NULL; 83 85 84 86 r->timestamp = timestamp; 85 87 r->agent_id = le32_to_cpu(p->agent_id); 86 88 r->flags = le32_to_cpu(p->flags); 87 89 r->system_state = le32_to_cpu(p->system_state); 90 + if (pinfo->graceful_timeout_supported && 91 + r->system_state == SCMI_SYSTEM_SHUTDOWN && 92 + SCMI_SYSPOWER_IS_REQUEST_GRACEFUL(r->flags)) 93 + r->timeout = le32_to_cpu(p->timeout); 94 + else 95 + r->timeout = 0x00; 88 96 *src_id = 0; 89 97 90 98 return r; ··· 141 129 return -ENOMEM; 142 130 143 131 pinfo->version = version; 132 + if (PROTOCOL_REV_MAJOR(pinfo->version) >= 0x2) 133 + pinfo->graceful_timeout_supported = true; 134 + 144 135 return ph->set_priv(ph, pinfo); 145 136 } 146 137
+35 -26
drivers/firmware/arm_scpi.c
··· 815 815 info->firmware_version = le32_to_cpu(caps.platform_version); 816 816 } 817 817 /* Ignore error if not implemented */ 818 - if (scpi_info->is_legacy && ret == -EOPNOTSUPP) 818 + if (info->is_legacy && ret == -EOPNOTSUPP) 819 819 return 0; 820 820 821 821 return ret; ··· 913 913 struct resource res; 914 914 struct device *dev = &pdev->dev; 915 915 struct device_node *np = dev->of_node; 916 + struct scpi_drvinfo *scpi_drvinfo; 916 917 917 - scpi_info = devm_kzalloc(dev, sizeof(*scpi_info), GFP_KERNEL); 918 - if (!scpi_info) 918 + scpi_drvinfo = devm_kzalloc(dev, sizeof(*scpi_drvinfo), GFP_KERNEL); 919 + if (!scpi_drvinfo) 919 920 return -ENOMEM; 920 921 921 922 if (of_match_device(legacy_scpi_of_match, &pdev->dev)) 922 - scpi_info->is_legacy = true; 923 + scpi_drvinfo->is_legacy = true; 923 924 924 925 count = of_count_phandle_with_args(np, "mboxes", "#mbox-cells"); 925 926 if (count < 0) { ··· 928 927 return -ENODEV; 929 928 } 930 929 931 - scpi_info->channels = devm_kcalloc(dev, count, sizeof(struct scpi_chan), 932 - GFP_KERNEL); 933 - if (!scpi_info->channels) 930 + scpi_drvinfo->channels = 931 + devm_kcalloc(dev, count, sizeof(struct scpi_chan), GFP_KERNEL); 932 + if (!scpi_drvinfo->channels) 934 933 return -ENOMEM; 935 934 936 - ret = devm_add_action(dev, scpi_free_channels, scpi_info); 935 + ret = devm_add_action(dev, scpi_free_channels, scpi_drvinfo); 937 936 if (ret) 938 937 return ret; 939 938 940 - for (; scpi_info->num_chans < count; scpi_info->num_chans++) { 939 + for (; scpi_drvinfo->num_chans < count; scpi_drvinfo->num_chans++) { 941 940 resource_size_t size; 942 - int idx = scpi_info->num_chans; 943 - struct scpi_chan *pchan = scpi_info->channels + idx; 941 + int idx = scpi_drvinfo->num_chans; 942 + struct scpi_chan *pchan = scpi_drvinfo->channels + idx; 944 943 struct mbox_client *cl = &pchan->cl; 945 944 struct device_node *shmem = of_parse_phandle(np, "shmem", idx); 946 945 ··· 987 986 return ret; 988 987 } 989 988 990 - 
scpi_info->commands = scpi_std_commands; 989 + scpi_drvinfo->commands = scpi_std_commands; 991 990 992 - platform_set_drvdata(pdev, scpi_info); 991 + platform_set_drvdata(pdev, scpi_drvinfo); 993 992 994 - if (scpi_info->is_legacy) { 993 + if (scpi_drvinfo->is_legacy) { 995 994 /* Replace with legacy variants */ 996 995 scpi_ops.clk_set_val = legacy_scpi_clk_set_val; 997 - scpi_info->commands = scpi_legacy_commands; 996 + scpi_drvinfo->commands = scpi_legacy_commands; 998 997 999 998 /* Fill priority bitmap */ 1000 999 for (idx = 0; idx < ARRAY_SIZE(legacy_hpriority_cmds); idx++) 1001 1000 set_bit(legacy_hpriority_cmds[idx], 1002 - scpi_info->cmd_priority); 1001 + scpi_drvinfo->cmd_priority); 1003 1002 } 1004 1003 1005 - ret = scpi_init_versions(scpi_info); 1004 + scpi_info = scpi_drvinfo; 1005 + 1006 + ret = scpi_init_versions(scpi_drvinfo); 1006 1007 if (ret) { 1007 1008 dev_err(dev, "incorrect or no SCP firmware found\n"); 1009 + scpi_info = NULL; 1008 1010 return ret; 1009 1011 } 1010 1012 1011 - if (scpi_info->is_legacy && !scpi_info->protocol_version && 1012 - !scpi_info->firmware_version) 1013 + if (scpi_drvinfo->is_legacy && !scpi_drvinfo->protocol_version && 1014 + !scpi_drvinfo->firmware_version) 1013 1015 dev_info(dev, "SCP Protocol legacy pre-1.0 firmware\n"); 1014 1016 else 1015 1017 dev_info(dev, "SCP Protocol %lu.%lu Firmware %lu.%lu.%lu version\n", 1016 1018 FIELD_GET(PROTO_REV_MAJOR_MASK, 1017 - scpi_info->protocol_version), 1019 + scpi_drvinfo->protocol_version), 1018 1020 FIELD_GET(PROTO_REV_MINOR_MASK, 1019 - scpi_info->protocol_version), 1021 + scpi_drvinfo->protocol_version), 1020 1022 FIELD_GET(FW_REV_MAJOR_MASK, 1021 - scpi_info->firmware_version), 1023 + scpi_drvinfo->firmware_version), 1022 1024 FIELD_GET(FW_REV_MINOR_MASK, 1023 - scpi_info->firmware_version), 1025 + scpi_drvinfo->firmware_version), 1024 1026 FIELD_GET(FW_REV_PATCH_MASK, 1025 - scpi_info->firmware_version)); 1026 - scpi_info->scpi_ops = &scpi_ops; 1027 + 
scpi_drvinfo->firmware_version)); 1027 1028 1028 - return devm_of_platform_populate(dev); 1029 + scpi_drvinfo->scpi_ops = &scpi_ops; 1030 + 1031 + ret = devm_of_platform_populate(dev); 1032 + if (ret) 1033 + scpi_info = NULL; 1034 + 1035 + return ret; 1029 1036 } 1030 1037 1031 1038 static const struct of_device_id scpi_of_match[] = {
+4
drivers/firmware/qcom_scm-legacy.c
··· 120 120 /** 121 121 * scm_legacy_call() - Sends a command to the SCM and waits for the command to 122 122 * finish processing. 123 + * @dev: device 124 + * @desc: descriptor structure containing arguments and return values 125 + * @res: results from SMC call 123 126 * 124 127 * A note on cache maintenance: 125 128 * Note that any buffers that are expected to be accessed by the secure world ··· 214 211 /** 215 212 * scm_legacy_call_atomic() - Send an atomic SCM command with up to 5 arguments 216 213 * and 3 return values 214 + * @unused: device, legacy argument, not used, can be NULL 217 215 * @desc: SCM call descriptor containing arguments 218 216 * @res: SCM call return values 219 217 *
+70 -1
drivers/firmware/qcom_scm.c
··· 7 7 #include <linux/cpumask.h> 8 8 #include <linux/export.h> 9 9 #include <linux/dma-mapping.h> 10 + #include <linux/interconnect.h> 10 11 #include <linux/module.h> 11 12 #include <linux/types.h> 12 13 #include <linux/qcom_scm.h> ··· 32 31 struct clk *core_clk; 33 32 struct clk *iface_clk; 34 33 struct clk *bus_clk; 34 + struct icc_path *path; 35 35 struct reset_controller_dev reset; 36 + 37 + /* control access to the interconnect path */ 38 + struct mutex scm_bw_lock; 39 + int scm_vote_count; 36 40 37 41 u64 dload_mode_addr; 38 42 }; ··· 103 97 clk_disable_unprepare(__scm->core_clk); 104 98 clk_disable_unprepare(__scm->iface_clk); 105 99 clk_disable_unprepare(__scm->bus_clk); 100 + } 101 + 102 + static int qcom_scm_bw_enable(void) 103 + { 104 + int ret = 0; 105 + 106 + if (!__scm->path) 107 + return 0; 108 + 109 + if (IS_ERR(__scm->path)) 110 + return -EINVAL; 111 + 112 + mutex_lock(&__scm->scm_bw_lock); 113 + if (!__scm->scm_vote_count) { 114 + ret = icc_set_bw(__scm->path, 0, UINT_MAX); 115 + if (ret < 0) { 116 + dev_err(__scm->dev, "failed to set bandwidth request\n"); 117 + goto err_bw; 118 + } 119 + } 120 + __scm->scm_vote_count++; 121 + err_bw: 122 + mutex_unlock(&__scm->scm_bw_lock); 123 + 124 + return ret; 125 + } 126 + 127 + static void qcom_scm_bw_disable(void) 128 + { 129 + if (IS_ERR_OR_NULL(__scm->path)) 130 + return; 131 + 132 + mutex_lock(&__scm->scm_bw_lock); 133 + if (__scm->scm_vote_count-- == 1) 134 + icc_set_bw(__scm->path, 0, 0); 135 + mutex_unlock(&__scm->scm_bw_lock); 106 136 } 107 137 108 138 enum qcom_scm_convention qcom_scm_convention = SMC_CONVENTION_UNKNOWN; ··· 486 444 if (ret) 487 445 goto out; 488 446 447 + ret = qcom_scm_bw_enable(); 448 + if (ret) 449 + return ret; 450 + 489 451 desc.args[1] = mdata_phys; 490 452 491 453 ret = qcom_scm_call(__scm->dev, &desc, &res); 492 454 455 + qcom_scm_bw_disable(); 493 456 qcom_scm_clk_disable(); 494 457 495 458 out: ··· 554 507 if (ret) 555 508 return ret; 556 509 510 + ret = 
qcom_scm_bw_enable(); 511 + if (ret) 512 + return ret; 513 + 557 514 ret = qcom_scm_call(__scm->dev, &desc, &res); 515 + qcom_scm_bw_disable(); 558 516 qcom_scm_clk_disable(); 559 517 560 518 return ret ? : res.result[0]; ··· 589 537 if (ret) 590 538 return ret; 591 539 540 + ret = qcom_scm_bw_enable(); 541 + if (ret) 542 + return ret; 543 + 592 544 ret = qcom_scm_call(__scm->dev, &desc, &res); 545 + qcom_scm_bw_disable(); 593 546 qcom_scm_clk_disable(); 594 547 595 548 return ret ? : res.result[0]; ··· 623 566 if (ret) 624 567 return ret; 625 568 569 + ret = qcom_scm_bw_enable(); 570 + if (ret) 571 + return ret; 572 + 626 573 ret = qcom_scm_call(__scm->dev, &desc, &res); 627 574 575 + qcom_scm_bw_disable(); 628 576 qcom_scm_clk_disable(); 629 577 630 578 return ret ? : res.result[0]; ··· 1339 1277 if (ret < 0) 1340 1278 return ret; 1341 1279 1280 + mutex_init(&scm->scm_bw_lock); 1281 + 1342 1282 clks = (unsigned long)of_device_get_match_data(&pdev->dev); 1283 + 1284 + scm->path = devm_of_icc_get(&pdev->dev, NULL); 1285 + if (IS_ERR(scm->path)) 1286 + return dev_err_probe(&pdev->dev, PTR_ERR(scm->path), 1287 + "failed to acquire interconnect path\n"); 1343 1288 1344 1289 scm->core_clk = devm_clk_get(&pdev->dev, "core"); 1345 1290 if (IS_ERR(scm->core_clk)) { ··· 1406 1337 1407 1338 /* 1408 1339 * If requested enable "download mode", from this point on warmboot 1409 - * will cause the the boot stages to enter download mode, unless 1340 + * will cause the boot stages to enter download mode, unless 1410 1341 * disabled below by a clean shutdown/reboot. 1411 1342 */ 1412 1343 if (download_mode)
+5 -5
drivers/firmware/tegra/bpmp-debugfs.c
··· 474 474 mode |= attrs & DEBUGFS_S_IWUSR ? 0200 : 0; 475 475 dentry = debugfs_create_file(name, mode, parent, bpmp, 476 476 &bpmp_debug_fops); 477 - if (!dentry) { 477 + if (IS_ERR(dentry)) { 478 478 err = -ENOMEM; 479 479 goto out; 480 480 } ··· 725 725 726 726 if (t & DEBUGFS_S_ISDIR) { 727 727 dentry = debugfs_create_dir(name, parent); 728 - if (!dentry) 728 + if (IS_ERR(dentry)) 729 729 return -ENOMEM; 730 730 err = bpmp_populate_dir(bpmp, seqbuf, dentry, depth+1); 731 731 if (err < 0) ··· 738 738 dentry = debugfs_create_file(name, mode, 739 739 parent, bpmp, 740 740 &debugfs_fops); 741 - if (!dentry) 741 + if (IS_ERR(dentry)) 742 742 return -ENOMEM; 743 743 } 744 744 } ··· 788 788 return 0; 789 789 790 790 root = debugfs_create_dir("bpmp", NULL); 791 - if (!root) 791 + if (IS_ERR(root)) 792 792 return -ENOMEM; 793 793 794 794 bpmp->debugfs_mirror = debugfs_create_dir("debug", root); 795 - if (!bpmp->debugfs_mirror) { 795 + if (IS_ERR(bpmp->debugfs_mirror)) { 796 796 err = -ENOMEM; 797 797 goto out; 798 798 }
+3 -3
drivers/firmware/tegra/bpmp.c
··· 201 201 int err; 202 202 203 203 if (data && size > 0) 204 - memcpy(data, channel->ib->data, size); 204 + memcpy_fromio(data, channel->ib->data, size); 205 205 206 206 err = tegra_bpmp_ack_response(channel); 207 207 if (err < 0) ··· 245 245 channel->ob->flags = flags; 246 246 247 247 if (data && size > 0) 248 - memcpy(channel->ob->data, data, size); 248 + memcpy_toio(channel->ob->data, data, size); 249 249 250 250 return tegra_bpmp_post_request(channel); 251 251 } ··· 420 420 channel->ob->code = code; 421 421 422 422 if (data && size > 0) 423 - memcpy(channel->ob->data, data, size); 423 + memcpy_toio(channel->ob->data, data, size); 424 424 425 425 err = tegra_bpmp_post_response(channel); 426 426 if (WARN_ON(err < 0))
+1 -1
drivers/i2c/busses/Kconfig
··· 486 486 487 487 config I2C_BRCMSTB 488 488 tristate "BRCM Settop/DSL I2C controller" 489 - depends on ARCH_BCM2835 || ARCH_BCM4908 || ARCH_BCM_63XX || \ 489 + depends on ARCH_BCM2835 || ARCH_BCM4908 || ARCH_BCMBCA || \ 490 490 ARCH_BRCMSTB || BMIPS_GENERIC || COMPILE_TEST 491 491 default y 492 492 help
+17
drivers/memory/mtk-smi.c
··· 21 21 /* SMI COMMON */ 22 22 #define SMI_L1LEN 0x100 23 23 24 + #define SMI_L1_ARB 0x200 24 25 #define SMI_BUS_SEL 0x220 25 26 #define SMI_BUS_LARB_SHIFT(larbid) ((larbid) << 1) 26 27 /* All are MMU0 defaultly. Only specialize mmu1 here. */ 27 28 #define F_MMU1_LARB(larbid) (0x1 << SMI_BUS_LARB_SHIFT(larbid)) 28 29 30 + #define SMI_READ_FIFO_TH 0x230 29 31 #define SMI_M4U_TH 0x234 30 32 #define SMI_FIFO_TH1 0x238 31 33 #define SMI_FIFO_TH2 0x23c ··· 362 360 {.compatible = "mediatek,mt2701-smi-larb", .data = &mtk_smi_larb_mt2701}, 363 361 {.compatible = "mediatek,mt2712-smi-larb", .data = &mtk_smi_larb_mt2712}, 364 362 {.compatible = "mediatek,mt6779-smi-larb", .data = &mtk_smi_larb_mt6779}, 363 + {.compatible = "mediatek,mt6795-smi-larb", .data = &mtk_smi_larb_mt8173}, 365 364 {.compatible = "mediatek,mt8167-smi-larb", .data = &mtk_smi_larb_mt8167}, 366 365 {.compatible = "mediatek,mt8173-smi-larb", .data = &mtk_smi_larb_mt8173}, 367 366 {.compatible = "mediatek,mt8183-smi-larb", .data = &mtk_smi_larb_mt8183}, ··· 547 544 } 548 545 }; 549 546 547 + static const struct mtk_smi_reg_pair mtk_smi_common_mt6795_init[SMI_COMMON_INIT_REGS_NR] = { 548 + {SMI_L1_ARB, 0x1b}, 549 + {SMI_M4U_TH, 0xce810c85}, 550 + {SMI_FIFO_TH1, 0x43214c8}, 551 + {SMI_READ_FIFO_TH, 0x191f}, 552 + }; 553 + 550 554 static const struct mtk_smi_reg_pair mtk_smi_common_mt8195_init[SMI_COMMON_INIT_REGS_NR] = { 551 555 {SMI_L1LEN, 0xb}, 552 556 {SMI_M4U_TH, 0xe100e10}, ··· 576 566 .has_gals = true, 577 567 .bus_sel = F_MMU1_LARB(1) | F_MMU1_LARB(2) | F_MMU1_LARB(4) | 578 568 F_MMU1_LARB(5) | F_MMU1_LARB(6) | F_MMU1_LARB(7), 569 + }; 570 + 571 + static const struct mtk_smi_common_plat mtk_smi_common_mt6795 = { 572 + .type = MTK_SMI_GEN2, 573 + .bus_sel = F_MMU1_LARB(0), 574 + .init = mtk_smi_common_mt6795_init, 579 575 }; 580 576 581 577 static const struct mtk_smi_common_plat mtk_smi_common_mt8183 = { ··· 628 612 {.compatible = "mediatek,mt2701-smi-common", .data = &mtk_smi_common_gen1}, 629 613 
{.compatible = "mediatek,mt2712-smi-common", .data = &mtk_smi_common_gen2}, 630 614 {.compatible = "mediatek,mt6779-smi-common", .data = &mtk_smi_common_mt6779}, 615 + {.compatible = "mediatek,mt6795-smi-common", .data = &mtk_smi_common_mt6795}, 631 616 {.compatible = "mediatek,mt8167-smi-common", .data = &mtk_smi_common_gen2}, 632 617 {.compatible = "mediatek,mt8173-smi-common", .data = &mtk_smi_common_gen2}, 633 618 {.compatible = "mediatek,mt8183-smi-common", .data = &mtk_smi_common_mt8183},
+80
drivers/memory/tegra/tegra234.c
··· 11 11 12 12 static const struct tegra_mc_client tegra234_mc_clients[] = { 13 13 { 14 + .id = TEGRA234_MEMORY_CLIENT_MGBEARD, 15 + .name = "mgbeard", 16 + .sid = TEGRA234_SID_MGBE, 17 + .regs = { 18 + .sid = { 19 + .override = 0x2c0, 20 + .security = 0x2c4, 21 + }, 22 + }, 23 + }, { 24 + .id = TEGRA234_MEMORY_CLIENT_MGBEBRD, 25 + .name = "mgbebrd", 26 + .sid = TEGRA234_SID_MGBE_VF1, 27 + .regs = { 28 + .sid = { 29 + .override = 0x2c8, 30 + .security = 0x2cc, 31 + }, 32 + }, 33 + }, { 34 + .id = TEGRA234_MEMORY_CLIENT_MGBECRD, 35 + .name = "mgbecrd", 36 + .sid = TEGRA234_SID_MGBE_VF2, 37 + .regs = { 38 + .sid = { 39 + .override = 0x2d0, 40 + .security = 0x2d4, 41 + }, 42 + }, 43 + }, { 44 + .id = TEGRA234_MEMORY_CLIENT_MGBEDRD, 45 + .name = "mgbedrd", 46 + .sid = TEGRA234_SID_MGBE_VF3, 47 + .regs = { 48 + .sid = { 49 + .override = 0x2d8, 50 + .security = 0x2dc, 51 + }, 52 + }, 53 + }, { 54 + .id = TEGRA234_MEMORY_CLIENT_MGBEAWR, 55 + .name = "mgbeawr", 56 + .sid = TEGRA234_SID_MGBE, 57 + .regs = { 58 + .sid = { 59 + .override = 0x2e0, 60 + .security = 0x2e4, 61 + }, 62 + }, 63 + }, { 64 + .id = TEGRA234_MEMORY_CLIENT_MGBEBWR, 65 + .name = "mgbebwr", 66 + .sid = TEGRA234_SID_MGBE_VF1, 67 + .regs = { 68 + .sid = { 69 + .override = 0x2f8, 70 + .security = 0x2fc, 71 + }, 72 + }, 73 + }, { 74 + .id = TEGRA234_MEMORY_CLIENT_MGBECWR, 75 + .name = "mgbecwr", 76 + .sid = TEGRA234_SID_MGBE_VF2, 77 + .regs = { 78 + .sid = { 79 + .override = 0x308, 80 + .security = 0x30c, 81 + }, 82 + }, 83 + }, { 14 84 .id = TEGRA234_MEMORY_CLIENT_SDMMCRAB, 15 85 .name = "sdmmcrab", 16 86 .sid = TEGRA234_SID_SDMMC4, ··· 88 18 .sid = { 89 19 .override = 0x318, 90 20 .security = 0x31c, 21 + }, 22 + }, 23 + }, { 24 + .id = TEGRA234_MEMORY_CLIENT_MGBEDWR, 25 + .name = "mgbedwr", 26 + .sid = TEGRA234_SID_MGBE_VF3, 27 + .regs = { 28 + .sid = { 29 + .override = 0x328, 30 + .security = 0x32c, 91 31 }, 92 32 }, 93 33 }, {
+54 -20
drivers/mfd/bcm2835-pm.c
··· 25 25 { .name = "bcm2835-power" }, 26 26 }; 27 27 28 + static int bcm2835_pm_get_pdata(struct platform_device *pdev, 29 + struct bcm2835_pm *pm) 30 + { 31 + if (of_find_property(pm->dev->of_node, "reg-names", NULL)) { 32 + struct resource *res; 33 + 34 + pm->base = devm_platform_ioremap_resource_byname(pdev, "pm"); 35 + if (IS_ERR(pm->base)) 36 + return PTR_ERR(pm->base); 37 + 38 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "asb"); 39 + if (res) { 40 + pm->asb = devm_ioremap_resource(&pdev->dev, res); 41 + if (IS_ERR(pm->asb)) 42 + pm->asb = NULL; 43 + } 44 + 45 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, 46 + "rpivid_asb"); 47 + if (res) { 48 + pm->rpivid_asb = devm_ioremap_resource(&pdev->dev, res); 49 + if (IS_ERR(pm->rpivid_asb)) 50 + pm->rpivid_asb = NULL; 51 + } 52 + 53 + return 0; 54 + } 55 + 56 + /* If no 'reg-names' property is found we can assume we're using old DTB. */ 57 + pm->base = devm_platform_ioremap_resource(pdev, 0); 58 + if (IS_ERR(pm->base)) 59 + return PTR_ERR(pm->base); 60 + 61 + pm->asb = devm_platform_ioremap_resource(pdev, 1); 62 + if (IS_ERR(pm->asb)) 63 + pm->asb = NULL; 64 + 65 + pm->rpivid_asb = devm_platform_ioremap_resource(pdev, 2); 66 + if (IS_ERR(pm->rpivid_asb)) 67 + pm->rpivid_asb = NULL; 68 + 69 + return 0; 70 + } 71 + 28 72 static int bcm2835_pm_probe(struct platform_device *pdev) 29 73 { 30 - struct resource *res; 31 74 struct device *dev = &pdev->dev; 32 75 struct bcm2835_pm *pm; 33 76 int ret; ··· 82 39 83 40 pm->dev = dev; 84 41 85 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 86 - pm->base = devm_ioremap_resource(dev, res); 87 - if (IS_ERR(pm->base)) 88 - return PTR_ERR(pm->base); 42 + ret = bcm2835_pm_get_pdata(pdev, pm); 43 + if (ret) 44 + return ret; 89 45 90 46 ret = devm_mfd_add_devices(dev, -1, 91 47 bcm2835_pm_devs, ARRAY_SIZE(bcm2835_pm_devs), ··· 92 50 if (ret) 93 51 return ret; 94 52 95 - /* We'll use the presence of the AXI ASB regs in the 53 + /* 54 + * We'll use the 
presence of the AXI ASB regs in the 96 55 * bcm2835-pm binding as the key for whether we can reference 97 56 * the full PM register range and support power domains. 98 57 */ 99 - res = platform_get_resource(pdev, IORESOURCE_MEM, 1); 100 - if (res) { 101 - pm->asb = devm_ioremap_resource(dev, res); 102 - if (IS_ERR(pm->asb)) 103 - return PTR_ERR(pm->asb); 104 - 105 - ret = devm_mfd_add_devices(dev, -1, 106 - bcm2835_power_devs, 107 - ARRAY_SIZE(bcm2835_power_devs), 108 - NULL, 0, NULL); 109 - if (ret) 110 - return ret; 111 - } 112 - 58 + if (pm->asb) 59 + return devm_mfd_add_devices(dev, -1, bcm2835_power_devs, 60 + ARRAY_SIZE(bcm2835_power_devs), 61 + NULL, 0, NULL); 113 62 return 0; 114 63 } 115 64 116 65 static const struct of_device_id bcm2835_pm_of_match[] = { 117 66 { .compatible = "brcm,bcm2835-pm-wdt", }, 118 67 { .compatible = "brcm,bcm2835-pm", }, 68 + { .compatible = "brcm,bcm2711-pm", }, 119 69 {}, 120 70 }; 121 71 MODULE_DEVICE_TABLE(of, bcm2835_pm_of_match);
+1 -1
drivers/phy/broadcom/Kconfig
··· 83 83 config PHY_BRCM_SATA 84 84 tristate "Broadcom SATA PHY driver" 85 85 depends on ARCH_BRCMSTB || ARCH_BCM_IPROC || BMIPS_GENERIC || \ 86 - ARCH_BCM_63XX || COMPILE_TEST 86 + ARCH_BCMBCA || COMPILE_TEST 87 87 depends on OF 88 88 select GENERIC_PHY 89 89 default ARCH_BCM_IPROC
+1
drivers/soc/Kconfig
··· 9 9 source "drivers/soc/bcm/Kconfig" 10 10 source "drivers/soc/canaan/Kconfig" 11 11 source "drivers/soc/fsl/Kconfig" 12 + source "drivers/soc/fujitsu/Kconfig" 12 13 source "drivers/soc/imx/Kconfig" 13 14 source "drivers/soc/ixp4xx/Kconfig" 14 15 source "drivers/soc/litex/Kconfig"
+1
drivers/soc/Makefile
··· 12 12 obj-$(CONFIG_ARCH_DOVE) += dove/ 13 13 obj-$(CONFIG_MACH_DOVE) += dove/ 14 14 obj-y += fsl/ 15 + obj-y += fujitsu/ 15 16 obj-$(CONFIG_ARCH_GEMINI) += gemini/ 16 17 obj-y += imx/ 17 18 obj-y += ixp4xx/
+1
drivers/soc/amlogic/meson-mx-socinfo.c
··· 126 126 np = of_find_matching_node(NULL, meson_mx_socinfo_analog_top_ids); 127 127 if (np) { 128 128 analog_top_regmap = syscon_node_to_regmap(np); 129 + of_node_put(np); 129 130 if (IS_ERR(analog_top_regmap)) 130 131 return PTR_ERR(analog_top_regmap); 131 132
+3 -1
drivers/soc/amlogic/meson-secure-pwrc.c
··· 152 152 } 153 153 154 154 pwrc = devm_kzalloc(&pdev->dev, sizeof(*pwrc), GFP_KERNEL); 155 - if (!pwrc) 155 + if (!pwrc) { 156 + of_node_put(sm_np); 156 157 return -ENOMEM; 158 + } 157 159 158 160 pwrc->fw = meson_sm_get(sm_np); 159 161 of_node_put(sm_np);
+48 -24
drivers/soc/bcm/bcm2835-power.c
··· 126 126 127 127 #define ASB_AXI_BRDG_ID 0x20 128 128 129 - #define ASB_READ(reg) readl(power->asb + (reg)) 130 - #define ASB_WRITE(reg, val) writel(PM_PASSWORD | (val), power->asb + (reg)) 129 + #define BCM2835_BRDG_ID 0x62726467 131 130 132 131 struct bcm2835_power_domain { 133 132 struct generic_pm_domain base; ··· 141 142 void __iomem *base; 142 143 /* AXI Async bridge registers. */ 143 144 void __iomem *asb; 145 + /* RPiVid bridge registers. */ 146 + void __iomem *rpivid_asb; 144 147 145 148 struct genpd_onecell_data pd_xlate; 146 149 struct bcm2835_power_domain domains[BCM2835_POWER_DOMAIN_COUNT]; 147 150 struct reset_controller_dev reset; 148 151 }; 149 152 150 - static int bcm2835_asb_enable(struct bcm2835_power *power, u32 reg) 153 + static int bcm2835_asb_control(struct bcm2835_power *power, u32 reg, bool enable) 151 154 { 155 + void __iomem *base = power->asb; 152 156 u64 start; 157 + u32 val; 153 158 154 - if (!reg) 159 + switch (reg) { 160 + case 0: 155 161 return 0; 162 + case ASB_V3D_S_CTRL: 163 + case ASB_V3D_M_CTRL: 164 + if (power->rpivid_asb) 165 + base = power->rpivid_asb; 166 + break; 167 + } 156 168 157 169 start = ktime_get_ns(); 158 170 159 171 /* Enable the module's async AXI bridges. 
*/ 160 - ASB_WRITE(reg, ASB_READ(reg) & ~ASB_REQ_STOP); 161 - while (ASB_READ(reg) & ASB_ACK) { 172 + if (enable) { 173 + val = readl(base + reg) & ~ASB_REQ_STOP; 174 + } else { 175 + val = readl(base + reg) | ASB_REQ_STOP; 176 + } 177 + writel(PM_PASSWORD | val, base + reg); 178 + 179 + while (readl(base + reg) & ASB_ACK) { 162 180 cpu_relax(); 163 181 if (ktime_get_ns() - start >= 1000) 164 182 return -ETIMEDOUT; ··· 184 168 return 0; 185 169 } 186 170 171 + static int bcm2835_asb_enable(struct bcm2835_power *power, u32 reg) 172 + { 173 + return bcm2835_asb_control(power, reg, true); 174 + } 175 + 187 176 static int bcm2835_asb_disable(struct bcm2835_power *power, u32 reg) 188 177 { 189 - u64 start; 190 - 191 - if (!reg) 192 - return 0; 193 - 194 - start = ktime_get_ns(); 195 - 196 - /* Enable the module's async AXI bridges. */ 197 - ASB_WRITE(reg, ASB_READ(reg) | ASB_REQ_STOP); 198 - while (!(ASB_READ(reg) & ASB_ACK)) { 199 - cpu_relax(); 200 - if (ktime_get_ns() - start >= 1000) 201 - return -ETIMEDOUT; 202 - } 203 - 204 - return 0; 178 + return bcm2835_asb_control(power, reg, false); 205 179 } 206 180 207 181 static int bcm2835_power_power_off(struct bcm2835_power_domain *pd, u32 pm_reg) 208 182 { 209 183 struct bcm2835_power *power = pd->power; 184 + 185 + /* We don't run this on BCM2711 */ 186 + if (power->rpivid_asb) 187 + return 0; 210 188 211 189 /* Enable functional isolation */ 212 190 PM_WRITE(pm_reg, PM_READ(pm_reg) & ~PM_ISFUNC); ··· 222 212 int ret; 223 213 int inrush; 224 214 bool powok; 215 + 216 + /* We don't run this on BCM2711 */ 217 + if (power->rpivid_asb) 218 + return 0; 225 219 226 220 /* If it was already powered on by the fw, leave it that way. 
*/ 227 221 if (PM_READ(pm_reg) & PM_POWUP) ··· 640 626 power->dev = dev; 641 627 power->base = pm->base; 642 628 power->asb = pm->asb; 629 + power->rpivid_asb = pm->rpivid_asb; 643 630 644 - id = ASB_READ(ASB_AXI_BRDG_ID); 645 - if (id != 0x62726467 /* "BRDG" */) { 631 + id = readl(power->asb + ASB_AXI_BRDG_ID); 632 + if (id != BCM2835_BRDG_ID /* "BRDG" */) { 646 633 dev_err(dev, "ASB register ID returned 0x%08x\n", id); 647 634 return -ENODEV; 635 + } 636 + 637 + if (power->rpivid_asb) { 638 + id = readl(power->rpivid_asb + ASB_AXI_BRDG_ID); 639 + if (id != BCM2835_BRDG_ID /* "BRDG" */) { 640 + dev_err(dev, "RPiVid ASB register ID returned 0x%08x\n", 641 + id); 642 + return -ENODEV; 643 + } 648 644 } 649 645 650 646 power->pd_xlate.domains = devm_kcalloc(dev,
+6 -3
drivers/soc/bcm/brcmstb/biuctrl.c
··· 340 340 341 341 ret = setup_hifcpubiuctrl_regs(np); 342 342 if (ret) 343 - return ret; 343 + goto out_put; 344 344 345 345 ret = mcp_write_pairing_set(); 346 346 if (ret) { 347 347 pr_err("MCP: Unable to disable write pairing!\n"); 348 - return ret; 348 + goto out_put; 349 349 } 350 350 351 351 a72_b53_rac_enable_all(np); ··· 353 353 #ifdef CONFIG_PM_SLEEP 354 354 register_syscore_ops(&brcmstb_cpu_credit_syscore_ops); 355 355 #endif 356 - return 0; 356 + ret = 0; 357 + out_put: 358 + of_node_put(np); 359 + return ret; 357 360 } 358 361 early_initcall(brcmstb_biuctrl_init);
+1 -1
drivers/soc/bcm/brcmstb/pm/pm-arm.c
··· 721 721 ctrl.phy_a_standby_ctrl_offs = ddr_phy_data->phy_a_standby_ctrl_offs; 722 722 ctrl.phy_b_standby_ctrl_offs = ddr_phy_data->phy_b_standby_ctrl_offs; 723 723 /* 724 - * Slightly grosss to use the phy ver to get a memc, 724 + * Slightly gross to use the phy ver to get a memc, 725 725 * offset but that is the only versioned things so far 726 726 * we can test for. 727 727 */
+121 -100
drivers/soc/fsl/guts.c
··· 14 14 #include <linux/platform_device.h> 15 15 #include <linux/fsl/guts.h> 16 16 17 - struct guts { 18 - struct ccsr_guts __iomem *regs; 19 - bool little_endian; 20 - }; 21 - 22 17 struct fsl_soc_die_attr { 23 18 char *die; 24 19 u32 svr; 25 20 u32 mask; 26 21 }; 27 22 28 - static struct guts *guts; 29 - static struct soc_device_attribute soc_dev_attr; 30 - static struct soc_device *soc_dev; 31 - 23 + struct fsl_soc_data { 24 + const char *sfp_compat; 25 + u32 uid_offset; 26 + }; 32 27 33 28 /* SoC die attribute definition for QorIQ platform */ 34 29 static const struct fsl_soc_die_attr fsl_soc_die[] = { ··· 115 120 return NULL; 116 121 } 117 122 118 - static u32 fsl_guts_get_svr(void) 123 + static u64 fsl_guts_get_soc_uid(const char *compat, unsigned int offset) 119 124 { 120 - u32 svr = 0; 125 + struct device_node *np; 126 + void __iomem *sfp_base; 127 + u64 uid; 121 128 122 - if (!guts || !guts->regs) 123 - return svr; 129 + np = of_find_compatible_node(NULL, NULL, compat); 130 + if (!np) 131 + return 0; 124 132 125 - if (guts->little_endian) 126 - svr = ioread32(&guts->regs->svr); 127 - else 128 - svr = ioread32be(&guts->regs->svr); 129 - 130 - return svr; 131 - } 132 - 133 - static int fsl_guts_probe(struct platform_device *pdev) 134 - { 135 - struct device_node *root, *np = pdev->dev.of_node; 136 - struct device *dev = &pdev->dev; 137 - const struct fsl_soc_die_attr *soc_die; 138 - const char *machine; 139 - u32 svr; 140 - 141 - /* Initialize guts */ 142 - guts = devm_kzalloc(dev, sizeof(*guts), GFP_KERNEL); 143 - if (!guts) 144 - return -ENOMEM; 145 - 146 - guts->little_endian = of_property_read_bool(np, "little-endian"); 147 - 148 - guts->regs = devm_platform_ioremap_resource(pdev, 0); 149 - if (IS_ERR(guts->regs)) 150 - return PTR_ERR(guts->regs); 151 - 152 - /* Register soc device */ 153 - root = of_find_node_by_path("/"); 154 - if (of_property_read_string(root, "model", &machine)) 155 - of_property_read_string_index(root, "compatible", 0, &machine); 
156 - if (machine) { 157 - soc_dev_attr.machine = devm_kstrdup(dev, machine, GFP_KERNEL); 158 - if (!soc_dev_attr.machine) { 159 - of_node_put(root); 160 - return -ENOMEM; 161 - } 133 + sfp_base = of_iomap(np, 0); 134 + if (!sfp_base) { 135 + of_node_put(np); 136 + return 0; 162 137 } 163 - of_node_put(root); 164 138 165 - svr = fsl_guts_get_svr(); 166 - soc_die = fsl_soc_die_match(svr, fsl_soc_die); 167 - if (soc_die) { 168 - soc_dev_attr.family = devm_kasprintf(dev, GFP_KERNEL, 169 - "QorIQ %s", soc_die->die); 170 - } else { 171 - soc_dev_attr.family = devm_kasprintf(dev, GFP_KERNEL, "QorIQ"); 172 - } 173 - if (!soc_dev_attr.family) 174 - return -ENOMEM; 175 - soc_dev_attr.soc_id = devm_kasprintf(dev, GFP_KERNEL, 176 - "svr:0x%08x", svr); 177 - if (!soc_dev_attr.soc_id) 178 - return -ENOMEM; 179 - soc_dev_attr.revision = devm_kasprintf(dev, GFP_KERNEL, "%d.%d", 180 - (svr >> 4) & 0xf, svr & 0xf); 181 - if (!soc_dev_attr.revision) 182 - return -ENOMEM; 139 + uid = ioread32(sfp_base + offset); 140 + uid <<= 32; 141 + uid |= ioread32(sfp_base + offset + 4); 183 142 184 - soc_dev = soc_device_register(&soc_dev_attr); 185 - if (IS_ERR(soc_dev)) 186 - return PTR_ERR(soc_dev); 143 + iounmap(sfp_base); 144 + of_node_put(np); 187 145 188 - pr_info("Machine: %s\n", soc_dev_attr.machine); 189 - pr_info("SoC family: %s\n", soc_dev_attr.family); 190 - pr_info("SoC ID: %s, Revision: %s\n", 191 - soc_dev_attr.soc_id, soc_dev_attr.revision); 192 - return 0; 146 + return uid; 193 147 } 194 148 195 - static int fsl_guts_remove(struct platform_device *dev) 196 - { 197 - soc_device_unregister(soc_dev); 198 - return 0; 199 - } 149 + static const struct fsl_soc_data ls1028a_data = { 150 + .sfp_compat = "fsl,ls1028a-sfp", 151 + .uid_offset = 0x21c, 152 + }; 200 153 201 154 /* 202 155 * Table for matching compatible strings, for device tree ··· 174 231 { .compatible = "fsl,ls1012a-dcfg", }, 175 232 { .compatible = "fsl,ls1046a-dcfg", }, 176 233 { .compatible = "fsl,lx2160a-dcfg", }, 177 
- { .compatible = "fsl,ls1028a-dcfg", }, 234 + { .compatible = "fsl,ls1028a-dcfg", .data = &ls1028a_data}, 178 235 {} 179 - }; 180 - MODULE_DEVICE_TABLE(of, fsl_guts_of_match); 181 - 182 - static struct platform_driver fsl_guts_driver = { 183 - .driver = { 184 - .name = "fsl-guts", 185 - .of_match_table = fsl_guts_of_match, 186 - }, 187 - .probe = fsl_guts_probe, 188 - .remove = fsl_guts_remove, 189 236 }; 190 237 191 238 static int __init fsl_guts_init(void) 192 239 { 193 - return platform_driver_register(&fsl_guts_driver); 240 + struct soc_device_attribute *soc_dev_attr; 241 + static struct soc_device *soc_dev; 242 + const struct fsl_soc_die_attr *soc_die; 243 + const struct fsl_soc_data *soc_data; 244 + const struct of_device_id *match; 245 + struct ccsr_guts __iomem *regs; 246 + const char *machine = NULL; 247 + struct device_node *np; 248 + bool little_endian; 249 + u64 soc_uid = 0; 250 + u32 svr; 251 + int ret; 252 + 253 + np = of_find_matching_node_and_match(NULL, fsl_guts_of_match, &match); 254 + if (!np) 255 + return 0; 256 + soc_data = match->data; 257 + 258 + regs = of_iomap(np, 0); 259 + if (!regs) { 260 + of_node_put(np); 261 + return -ENOMEM; 262 + } 263 + 264 + little_endian = of_property_read_bool(np, "little-endian"); 265 + if (little_endian) 266 + svr = ioread32(&regs->svr); 267 + else 268 + svr = ioread32be(&regs->svr); 269 + iounmap(regs); 270 + of_node_put(np); 271 + 272 + /* Register soc device */ 273 + soc_dev_attr = kzalloc(sizeof(*soc_dev_attr), GFP_KERNEL); 274 + if (!soc_dev_attr) 275 + return -ENOMEM; 276 + 277 + if (of_property_read_string(of_root, "model", &machine)) 278 + of_property_read_string_index(of_root, "compatible", 0, &machine); 279 + if (machine) { 280 + soc_dev_attr->machine = kstrdup(machine, GFP_KERNEL); 281 + if (!soc_dev_attr->machine) 282 + goto err_nomem; 283 + } 284 + 285 + soc_die = fsl_soc_die_match(svr, fsl_soc_die); 286 + if (soc_die) { 287 + soc_dev_attr->family = kasprintf(GFP_KERNEL, "QorIQ %s", 288 + 
soc_die->die); 289 + } else { 290 + soc_dev_attr->family = kasprintf(GFP_KERNEL, "QorIQ"); 291 + } 292 + if (!soc_dev_attr->family) 293 + goto err_nomem; 294 + 295 + soc_dev_attr->soc_id = kasprintf(GFP_KERNEL, "svr:0x%08x", svr); 296 + if (!soc_dev_attr->soc_id) 297 + goto err_nomem; 298 + 299 + soc_dev_attr->revision = kasprintf(GFP_KERNEL, "%d.%d", 300 + (svr >> 4) & 0xf, svr & 0xf); 301 + if (!soc_dev_attr->revision) 302 + goto err_nomem; 303 + 304 + if (soc_data) 305 + soc_uid = fsl_guts_get_soc_uid(soc_data->sfp_compat, 306 + soc_data->uid_offset); 307 + if (soc_uid) 308 + soc_dev_attr->serial_number = kasprintf(GFP_KERNEL, "%016llX", 309 + soc_uid); 310 + 311 + soc_dev = soc_device_register(soc_dev_attr); 312 + if (IS_ERR(soc_dev)) { 313 + ret = PTR_ERR(soc_dev); 314 + goto err; 315 + } 316 + 317 + pr_info("Machine: %s\n", soc_dev_attr->machine); 318 + pr_info("SoC family: %s\n", soc_dev_attr->family); 319 + pr_info("SoC ID: %s, Revision: %s\n", 320 + soc_dev_attr->soc_id, soc_dev_attr->revision); 321 + 322 + return 0; 323 + 324 + err_nomem: 325 + ret = -ENOMEM; 326 + err: 327 + kfree(soc_dev_attr->machine); 328 + kfree(soc_dev_attr->family); 329 + kfree(soc_dev_attr->soc_id); 330 + kfree(soc_dev_attr->revision); 331 + kfree(soc_dev_attr->serial_number); 332 + kfree(soc_dev_attr); 333 + 334 + return ret; 194 335 } 195 336 core_initcall(fsl_guts_init); 196 - 197 - static void __exit fsl_guts_exit(void) 198 - { 199 - platform_driver_unregister(&fsl_guts_driver); 200 - } 201 - module_exit(fsl_guts_exit);
+16
drivers/soc/fujitsu/Kconfig
··· 1 + # SPDX-License-Identifier: GPL-2.0-only 2 + menu "fujitsu SoC drivers" 3 + 4 + config A64FX_DIAG 5 + bool "A64FX diag driver" 6 + depends on ARM64 7 + depends on ACPI 8 + help 9 + Say Y here if you want to enable diag interrupt on Fujitsu A64FX. 10 + This driver enables BMC's diagnostic requests and enables 11 + A64FX-specific interrupts. This allows administrators to obtain 12 + kernel dumps via diagnostic requests using ipmitool, etc. 13 + 14 + If unsure, say N. 15 + 16 + endmenu
+3
drivers/soc/fujitsu/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + 3 + obj-$(CONFIG_A64FX_DIAG) += a64fx-diag.o
+154
drivers/soc/fujitsu/a64fx-diag.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * A64FX diag driver. 4 + * Copyright (c) 2022 Fujitsu Ltd. 5 + */ 6 + 7 + #include <linux/acpi.h> 8 + #include <linux/interrupt.h> 9 + #include <linux/irq.h> 10 + #include <linux/module.h> 11 + #include <linux/platform_device.h> 12 + 13 + #define A64FX_DIAG_IRQ 1 14 + #define BMC_DIAG_INTERRUPT_ENABLE 0x40 15 + #define BMC_DIAG_INTERRUPT_STATUS 0x44 16 + #define BMC_DIAG_INTERRUPT_MASK BIT(31) 17 + 18 + struct a64fx_diag_priv { 19 + void __iomem *mmsc_reg_base; 20 + int irq; 21 + bool has_nmi; 22 + }; 23 + 24 + static irqreturn_t a64fx_diag_handler_nmi(int irq, void *dev_id) 25 + { 26 + nmi_panic(NULL, "a64fx_diag: interrupt received\n"); 27 + 28 + return IRQ_HANDLED; 29 + } 30 + 31 + static irqreturn_t a64fx_diag_handler_irq(int irq, void *dev_id) 32 + { 33 + panic("a64fx_diag: interrupt received\n"); 34 + 35 + return IRQ_HANDLED; 36 + } 37 + 38 + static void a64fx_diag_interrupt_clear(struct a64fx_diag_priv *priv) 39 + { 40 + void __iomem *diag_status_reg_addr; 41 + u32 mmsc; 42 + 43 + diag_status_reg_addr = priv->mmsc_reg_base + BMC_DIAG_INTERRUPT_STATUS; 44 + mmsc = readl(diag_status_reg_addr); 45 + if (mmsc & BMC_DIAG_INTERRUPT_MASK) 46 + writel(BMC_DIAG_INTERRUPT_MASK, diag_status_reg_addr); 47 + } 48 + 49 + static void a64fx_diag_interrupt_enable(struct a64fx_diag_priv *priv) 50 + { 51 + void __iomem *diag_enable_reg_addr; 52 + u32 mmsc; 53 + 54 + diag_enable_reg_addr = priv->mmsc_reg_base + BMC_DIAG_INTERRUPT_ENABLE; 55 + mmsc = readl(diag_enable_reg_addr); 56 + if (!(mmsc & BMC_DIAG_INTERRUPT_MASK)) { 57 + mmsc |= BMC_DIAG_INTERRUPT_MASK; 58 + writel(mmsc, diag_enable_reg_addr); 59 + } 60 + } 61 + 62 + static void a64fx_diag_interrupt_disable(struct a64fx_diag_priv *priv) 63 + { 64 + void __iomem *diag_enable_reg_addr; 65 + u32 mmsc; 66 + 67 + diag_enable_reg_addr = priv->mmsc_reg_base + BMC_DIAG_INTERRUPT_ENABLE; 68 + mmsc = readl(diag_enable_reg_addr); 69 + if (mmsc & BMC_DIAG_INTERRUPT_MASK) { 
70 + mmsc &= ~BMC_DIAG_INTERRUPT_MASK; 71 + writel(mmsc, diag_enable_reg_addr); 72 + } 73 + } 74 + 75 + static int a64fx_diag_probe(struct platform_device *pdev) 76 + { 77 + struct device *dev = &pdev->dev; 78 + struct a64fx_diag_priv *priv; 79 + unsigned long irq_flags; 80 + int ret; 81 + 82 + priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); 83 + if (priv == NULL) 84 + return -ENOMEM; 85 + 86 + priv->mmsc_reg_base = devm_platform_ioremap_resource(pdev, 0); 87 + if (IS_ERR(priv->mmsc_reg_base)) 88 + return PTR_ERR(priv->mmsc_reg_base); 89 + 90 + priv->irq = platform_get_irq(pdev, A64FX_DIAG_IRQ); 91 + if (priv->irq < 0) 92 + return priv->irq; 93 + 94 + platform_set_drvdata(pdev, priv); 95 + 96 + irq_flags = IRQF_PERCPU | IRQF_NOBALANCING | IRQF_NO_AUTOEN | 97 + IRQF_NO_THREAD; 98 + ret = request_nmi(priv->irq, &a64fx_diag_handler_nmi, irq_flags, 99 + "a64fx_diag_nmi", NULL); 100 + if (ret) { 101 + ret = request_irq(priv->irq, &a64fx_diag_handler_irq, 102 + irq_flags, "a64fx_diag_irq", NULL); 103 + if (ret) { 104 + dev_err(dev, "cannot register IRQ %d\n", ret); 105 + return ret; 106 + } 107 + enable_irq(priv->irq); 108 + } else { 109 + enable_nmi(priv->irq); 110 + priv->has_nmi = true; 111 + } 112 + 113 + a64fx_diag_interrupt_clear(priv); 114 + a64fx_diag_interrupt_enable(priv); 115 + 116 + return 0; 117 + } 118 + 119 + static int a64fx_diag_remove(struct platform_device *pdev) 120 + { 121 + struct a64fx_diag_priv *priv = platform_get_drvdata(pdev); 122 + 123 + a64fx_diag_interrupt_disable(priv); 124 + a64fx_diag_interrupt_clear(priv); 125 + 126 + if (priv->has_nmi) 127 + free_nmi(priv->irq, NULL); 128 + else 129 + free_irq(priv->irq, NULL); 130 + 131 + return 0; 132 + } 133 + 134 + static const struct acpi_device_id a64fx_diag_acpi_match[] = { 135 + { "FUJI2007", 0 }, 136 + { }, 137 + }; 138 + MODULE_DEVICE_TABLE(acpi, a64fx_diag_acpi_match); 139 + 140 + 141 + static struct platform_driver a64fx_diag_driver = { 142 + .driver = { 143 + .name = 
"a64fx_diag_driver", 144 + .acpi_match_table = ACPI_PTR(a64fx_diag_acpi_match), 145 + }, 146 + .probe = a64fx_diag_probe, 147 + .remove = a64fx_diag_remove, 148 + }; 149 + 150 + module_platform_driver(a64fx_diag_driver); 151 + 152 + MODULE_LICENSE("GPL v2"); 153 + MODULE_AUTHOR("Hitomi Hasegawa <hasegawa-hitomi@fujitsu.com>"); 154 + MODULE_DESCRIPTION("A64FX diag driver");
+6 -2
drivers/soc/imx/gpcv2.c
··· 328 328 if (!IS_ERR(domain->regulator)) { 329 329 ret = regulator_enable(domain->regulator); 330 330 if (ret) { 331 - dev_err(domain->dev, "failed to enable regulator\n"); 331 + dev_err(domain->dev, 332 + "failed to enable regulator: %pe\n", 333 + ERR_PTR(ret)); 332 334 goto out_put_pm; 333 335 } 334 336 } ··· 469 467 if (!IS_ERR(domain->regulator)) { 470 468 ret = regulator_disable(domain->regulator); 471 469 if (ret) { 472 - dev_err(domain->dev, "failed to disable regulator\n"); 470 + dev_err(domain->dev, 471 + "failed to disable regulator: %pe\n", 472 + ERR_PTR(ret)); 473 473 return ret; 474 474 } 475 475 }
+6 -3
drivers/soc/imx/imx8m-blk-ctrl.c
··· 216 216 bc->bus_power_dev = genpd_dev_pm_attach_by_name(dev, "bus"); 217 217 if (IS_ERR(bc->bus_power_dev)) 218 218 return dev_err_probe(dev, PTR_ERR(bc->bus_power_dev), 219 - "failed to attach power domain\n"); 219 + "failed to attach power domain \"bus\"\n"); 220 220 221 221 for (i = 0; i < bc_data->num_domains; i++) { 222 222 const struct imx8m_blk_ctrl_domain_data *data = &bc_data->domains[i]; ··· 238 238 dev_pm_domain_attach_by_name(dev, data->gpc_name); 239 239 if (IS_ERR(domain->power_dev)) { 240 240 dev_err_probe(dev, PTR_ERR(domain->power_dev), 241 - "failed to attach power domain\n"); 241 + "failed to attach power domain \"%s\"\n", 242 + data->gpc_name); 242 243 ret = PTR_ERR(domain->power_dev); 243 244 goto cleanup_pds; 244 245 } ··· 252 251 253 252 ret = pm_genpd_init(&domain->genpd, NULL, true); 254 253 if (ret) { 255 - dev_err_probe(dev, ret, "failed to init power domain\n"); 254 + dev_err_probe(dev, ret, 255 + "failed to init power domain \"%s\"\n", 256 + data->gpc_name); 256 257 dev_pm_domain_detach(domain->power_dev, true); 257 258 goto cleanup_pds; 258 259 }
+10
drivers/soc/mediatek/Kconfig
···
 	  Say yes here to add support for the MediaTek Multimedia
 	  Subsystem (MMSYS).
 
+config MTK_SVS
+	tristate "MediaTek Smart Voltage Scaling(SVS)"
+	depends on MTK_EFUSE && NVMEM
+	help
+	  The Smart Voltage Scaling(SVS) engine is a piece of hardware
+	  which has several controllers(banks) for calculating suitable
+	  voltage to different power domains(CPU/GPU/CCI) according to
+	  chip process corner, temperatures and other factors. Then DVFS
+	  driver could apply SVS bank voltage to PMIC/Buck.
+
 endmenu
+1
drivers/soc/mediatek/Makefile
···
 obj-$(CONFIG_MTK_SCPSYS_PM_DOMAINS) += mtk-pm-domains.o
 obj-$(CONFIG_MTK_MMSYS) += mtk-mmsys.o
 obj-$(CONFIG_MTK_MMSYS) += mtk-mutex.o
+obj-$(CONFIG_MTK_SVS) += mtk-svs.o
+112
drivers/soc/mediatek/mt6795-pm-domains.h
···
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#ifndef __SOC_MEDIATEK_MT6795_PM_DOMAINS_H
+#define __SOC_MEDIATEK_MT6795_PM_DOMAINS_H
+
+#include "mtk-pm-domains.h"
+#include <dt-bindings/power/mt6795-power.h>
+
+/*
+ * MT6795 power domain support
+ */
+
+static const struct scpsys_domain_data scpsys_domain_data_mt6795[] = {
+	[MT6795_POWER_DOMAIN_VDEC] = {
+		.name = "vdec",
+		.sta_mask = PWR_STATUS_VDEC,
+		.ctl_offs = SPM_VDE_PWR_CON,
+		.pwr_sta_offs = SPM_PWR_STATUS,
+		.pwr_sta2nd_offs = SPM_PWR_STATUS_2ND,
+		.sram_pdn_bits = GENMASK(11, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+	},
+	[MT6795_POWER_DOMAIN_VENC] = {
+		.name = "venc",
+		.sta_mask = PWR_STATUS_VENC,
+		.ctl_offs = SPM_VEN_PWR_CON,
+		.pwr_sta_offs = SPM_PWR_STATUS,
+		.pwr_sta2nd_offs = SPM_PWR_STATUS_2ND,
+		.sram_pdn_bits = GENMASK(11, 8),
+		.sram_pdn_ack_bits = GENMASK(15, 12),
+	},
+	[MT6795_POWER_DOMAIN_ISP] = {
+		.name = "isp",
+		.sta_mask = PWR_STATUS_ISP,
+		.ctl_offs = SPM_ISP_PWR_CON,
+		.pwr_sta_offs = SPM_PWR_STATUS,
+		.pwr_sta2nd_offs = SPM_PWR_STATUS_2ND,
+		.sram_pdn_bits = GENMASK(11, 8),
+		.sram_pdn_ack_bits = GENMASK(13, 12),
+	},
+	[MT6795_POWER_DOMAIN_MM] = {
+		.name = "mm",
+		.sta_mask = PWR_STATUS_DISP,
+		.ctl_offs = SPM_DIS_PWR_CON,
+		.pwr_sta_offs = SPM_PWR_STATUS,
+		.pwr_sta2nd_offs = SPM_PWR_STATUS_2ND,
+		.sram_pdn_bits = GENMASK(11, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.bp_infracfg = {
+			BUS_PROT_UPDATE_TOPAXI(MT8173_TOP_AXI_PROT_EN_MM_M0 |
+					       MT8173_TOP_AXI_PROT_EN_MM_M1),
+		},
+	},
+	[MT6795_POWER_DOMAIN_MJC] = {
+		.name = "mjc",
+		.sta_mask = BIT(20),
+		.ctl_offs = 0x298,
+		.pwr_sta_offs = SPM_PWR_STATUS,
+		.pwr_sta2nd_offs = SPM_PWR_STATUS_2ND,
+		.sram_pdn_bits = GENMASK(11, 8),
+		.sram_pdn_ack_bits = GENMASK(15, 12),
+	},
+	[MT6795_POWER_DOMAIN_AUDIO] = {
+		.name = "audio",
+		.sta_mask = PWR_STATUS_AUDIO,
+		.ctl_offs = SPM_AUDIO_PWR_CON,
+		.pwr_sta_offs = SPM_PWR_STATUS,
+		.pwr_sta2nd_offs = SPM_PWR_STATUS_2ND,
+		.sram_pdn_bits = GENMASK(11, 8),
+		.sram_pdn_ack_bits = GENMASK(15, 12),
+	},
+	[MT6795_POWER_DOMAIN_MFG_ASYNC] = {
+		.name = "mfg_async",
+		.sta_mask = PWR_STATUS_MFG_ASYNC,
+		.ctl_offs = SPM_MFG_ASYNC_PWR_CON,
+		.pwr_sta_offs = SPM_PWR_STATUS,
+		.pwr_sta2nd_offs = SPM_PWR_STATUS_2ND,
+		.sram_pdn_bits = GENMASK(11, 8),
+		.sram_pdn_ack_bits = 0,
+	},
+	[MT6795_POWER_DOMAIN_MFG_2D] = {
+		.name = "mfg_2d",
+		.sta_mask = PWR_STATUS_MFG_2D,
+		.ctl_offs = SPM_MFG_2D_PWR_CON,
+		.pwr_sta_offs = SPM_PWR_STATUS,
+		.pwr_sta2nd_offs = SPM_PWR_STATUS_2ND,
+		.sram_pdn_bits = GENMASK(11, 8),
+		.sram_pdn_ack_bits = GENMASK(13, 12),
+	},
+	[MT6795_POWER_DOMAIN_MFG] = {
+		.name = "mfg",
+		.sta_mask = PWR_STATUS_MFG,
+		.ctl_offs = SPM_MFG_PWR_CON,
+		.pwr_sta_offs = SPM_PWR_STATUS,
+		.pwr_sta2nd_offs = SPM_PWR_STATUS_2ND,
+		.sram_pdn_bits = GENMASK(13, 8),
+		.sram_pdn_ack_bits = GENMASK(21, 16),
+		.bp_infracfg = {
+			BUS_PROT_UPDATE_TOPAXI(MT8173_TOP_AXI_PROT_EN_MFG_S |
+					       MT8173_TOP_AXI_PROT_EN_MFG_M0 |
+					       MT8173_TOP_AXI_PROT_EN_MFG_M1 |
+					       MT8173_TOP_AXI_PROT_EN_MFG_SNOOP_OUT),
+		},
+	},
+};
+
+static const struct scpsys_soc_data mt6795_scpsys_data = {
+	.domains_data = scpsys_domain_data_mt6795,
+	.num_domains = ARRAY_SIZE(scpsys_domain_data_mt6795),
+};
+
+#endif /* __SOC_MEDIATEK_MT6795_PM_DOMAINS_H */
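The new mt6795-pm-domains.h above follows a common kernel idiom: a designated-initializer array of per-domain data, paired with an ARRAY_SIZE()-derived count in the SoC data struct, so the table and its length can never drift apart. A minimal standalone sketch of that idiom, with invented domain names:

```c
#include <assert.h>
#include <stddef.h>

/* Same shape as scpsys_domain_data_mt6795 / mt6795_scpsys_data above,
 * reduced to two made-up domains for illustration. */
#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

enum { DOM_VDEC, DOM_VENC, DOM_MAX };

struct domain_data {
	const char *name;
};

static const struct domain_data domains[DOM_MAX] = {
	[DOM_VDEC] = { .name = "vdec" },
	[DOM_VENC] = { .name = "venc" },
};

struct soc_data {
	const struct domain_data *domains_data;
	size_t num_domains;
};

/* num_domains is derived from the array, not maintained by hand. */
static const struct soc_data soc = {
	.domains_data = domains,
	.num_domains = ARRAY_SIZE(domains),
};
```

Because the count comes from ARRAY_SIZE(), adding a new entry to the enum and the array automatically updates num_domains.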
+1
drivers/soc/mediatek/mt8183-pm-domains.h
···
 		.pwr_sta2nd_offs = 0x0184,
 		.sram_pdn_bits = 0,
 		.sram_pdn_ack_bits = 0,
+		.caps = MTK_SCPD_DOMAIN_SUPPLY,
 	},
 	[MT8183_POWER_DOMAIN_MFG] = {
 		.name = "mfg",
+1 -1
drivers/soc/mediatek/mt8186-pm-domains.h
···
 				    MT8186_TOP_AXI_PROT_EN_1_CLR,
 				    MT8186_TOP_AXI_PROT_EN_1_STA),
 		},
-		.caps = MTK_SCPD_KEEP_DEFAULT_OFF,
+		.caps = MTK_SCPD_KEEP_DEFAULT_OFF | MTK_SCPD_DOMAIN_SUPPLY,
 	},
 	[MT8186_POWER_DOMAIN_MFG2] = {
 		.name = "mfg2",
+2
drivers/soc/mediatek/mt8192-pm-domains.h
···
 		.pwr_sta2nd_offs = 0x0170,
 		.sram_pdn_bits = GENMASK(8, 8),
 		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.caps = MTK_SCPD_DOMAIN_SUPPLY,
 	},
 	[MT8192_POWER_DOMAIN_MFG1] = {
 		.name = "mfg1",
···
 				    MT8192_TOP_AXI_PROT_EN_2_CLR,
 				    MT8192_TOP_AXI_PROT_EN_2_STA1),
 		},
+		.caps = MTK_SCPD_DOMAIN_SUPPLY,
 	},
 	[MT8192_POWER_DOMAIN_MFG2] = {
 		.name = "mfg2",
+2 -2
drivers/soc/mediatek/mt8195-pm-domains.h
···
 		.ctl_offs = 0x334,
 		.pwr_sta_offs = 0x174,
 		.pwr_sta2nd_offs = 0x178,
-		.caps = MTK_SCPD_ACTIVE_WAKEUP,
+		.caps = MTK_SCPD_ACTIVE_WAKEUP | MTK_SCPD_ALWAYS_ON,
 	},
 	[MT8195_POWER_DOMAIN_CSI_RX_TOP] = {
 		.name = "csi_rx_top",
···
 				    MT8195_TOP_AXI_PROT_EN_SUB_INFRA_VDNR_CLR,
 				    MT8195_TOP_AXI_PROT_EN_SUB_INFRA_VDNR_STA1),
 		},
-		.caps = MTK_SCPD_KEEP_DEFAULT_OFF,
+		.caps = MTK_SCPD_KEEP_DEFAULT_OFF | MTK_SCPD_DOMAIN_SUPPLY,
 	},
 	[MT8195_POWER_DOMAIN_MFG2] = {
 		.name = "mfg2",
+22
drivers/soc/mediatek/mt8365-mmsys.h
···
 #define MT8365_DISP_REG_CONFIG_DISP_RDMA0_RSZ0_SEL_IN	0xf60
 #define MT8365_DISP_REG_CONFIG_DISP_COLOR0_SEL_IN	0xf64
 #define MT8365_DISP_REG_CONFIG_DISP_DSI0_SEL_IN		0xf68
+#define MT8365_DISP_REG_CONFIG_DISP_RDMA1_SOUT_SEL	0xfd0
+#define MT8365_DISP_REG_CONFIG_DISP_DPI0_SEL_IN	0xfd8
+#define MT8365_DISP_REG_CONFIG_DISP_LVDS_SYS_CFG_00	0xfdc
 
 #define MT8365_RDMA0_SOUT_COLOR0		0x1
 #define MT8365_DITHER_MOUT_EN_DSI0		0x1
···
 #define MT8365_RDMA0_RSZ0_SEL_IN_RDMA0		0x0
 #define MT8365_DISP_COLOR_SEL_IN_COLOR0		0x0
 #define MT8365_OVL0_MOUT_PATH0_SEL		BIT(0)
+#define MT8365_RDMA1_SOUT_DPI0			0x1
+#define MT8365_DPI0_SEL_IN_RDMA1		0x0
+#define MT8365_LVDS_SYS_CFG_00_SEL_LVDS_PXL_CLK	0x1
+#define MT8365_DPI0_SEL_IN_RDMA1		0x0
 
 static const struct mtk_mmsys_routes mt8365_mmsys_routing_table[] = {
 	{
···
 		DDP_COMPONENT_RDMA0, DDP_COMPONENT_COLOR0,
 		MT8365_DISP_REG_CONFIG_DISP_RDMA0_RSZ0_SEL_IN,
 		MT8365_RDMA0_RSZ0_SEL_IN_RDMA0, MT8365_RDMA0_RSZ0_SEL_IN_RDMA0
+	},
+	{
+		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI0,
+		MT8365_DISP_REG_CONFIG_DISP_LVDS_SYS_CFG_00,
+		MT8365_LVDS_SYS_CFG_00_SEL_LVDS_PXL_CLK, MT8365_LVDS_SYS_CFG_00_SEL_LVDS_PXL_CLK
+	},
+	{
+		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI0,
+		MT8365_DISP_REG_CONFIG_DISP_DPI0_SEL_IN,
+		MT8365_DPI0_SEL_IN_RDMA1, MT8365_DPI0_SEL_IN_RDMA1
+	},
+	{
+		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI0,
+		MT8365_DISP_REG_CONFIG_DISP_RDMA1_SOUT_SEL,
+		MT8365_RDMA1_SOUT_DPI0, MT8365_RDMA1_SOUT_DPI0
 	},
 };
 
+30 -15
drivers/soc/mediatek/mtk-devapc.c
··· 31 31 u32 vio_dbg1; 32 32 }; 33 33 34 - struct mtk_devapc_data { 35 - /* numbers of violation index */ 36 - u32 vio_idx_num; 37 - 34 + struct mtk_devapc_regs_ofs { 38 35 /* reg offset */ 39 36 u32 vio_mask_offset; 40 37 u32 vio_sta_offset; ··· 41 44 u32 vio_shift_sta_offset; 42 45 u32 vio_shift_sel_offset; 43 46 u32 vio_shift_con_offset; 47 + }; 48 + 49 + struct mtk_devapc_data { 50 + /* numbers of violation index */ 51 + u32 vio_idx_num; 52 + const struct mtk_devapc_regs_ofs *regs_ofs; 44 53 }; 45 54 46 55 struct mtk_devapc_context { ··· 61 58 void __iomem *reg; 62 59 int i; 63 60 64 - reg = ctx->infra_base + ctx->data->vio_sta_offset; 61 + reg = ctx->infra_base + ctx->data->regs_ofs->vio_sta_offset; 65 62 66 63 for (i = 0; i < VIO_MOD_TO_REG_IND(ctx->data->vio_idx_num) - 1; i++) 67 64 writel(GENMASK(31, 0), reg + 4 * i); ··· 76 73 u32 val; 77 74 int i; 78 75 79 - reg = ctx->infra_base + ctx->data->vio_mask_offset; 76 + reg = ctx->infra_base + ctx->data->regs_ofs->vio_mask_offset; 80 77 81 78 if (mask) 82 79 val = GENMASK(31, 0); ··· 119 116 u32 val; 120 117 121 118 pd_vio_shift_sta_reg = ctx->infra_base + 122 - ctx->data->vio_shift_sta_offset; 119 + ctx->data->regs_ofs->vio_shift_sta_offset; 123 120 pd_vio_shift_sel_reg = ctx->infra_base + 124 - ctx->data->vio_shift_sel_offset; 121 + ctx->data->regs_ofs->vio_shift_sel_offset; 125 122 pd_vio_shift_con_reg = ctx->infra_base + 126 - ctx->data->vio_shift_con_offset; 123 + ctx->data->regs_ofs->vio_shift_con_offset; 127 124 128 125 /* Find the minimum shift group which has violation */ 129 126 val = readl(pd_vio_shift_sta_reg); ··· 164 161 void __iomem *vio_dbg0_reg; 165 162 void __iomem *vio_dbg1_reg; 166 163 167 - vio_dbg0_reg = ctx->infra_base + ctx->data->vio_dbg0_offset; 168 - vio_dbg1_reg = ctx->infra_base + ctx->data->vio_dbg1_offset; 164 + vio_dbg0_reg = ctx->infra_base + ctx->data->regs_ofs->vio_dbg0_offset; 165 + vio_dbg1_reg = ctx->infra_base + ctx->data->regs_ofs->vio_dbg1_offset; 169 166 170 167 
vio_dbgs.vio_dbg0 = readl(vio_dbg0_reg); 171 168 vio_dbgs.vio_dbg1 = readl(vio_dbg1_reg); ··· 203 200 */ 204 201 static void start_devapc(struct mtk_devapc_context *ctx) 205 202 { 206 - writel(BIT(31), ctx->infra_base + ctx->data->apc_con_offset); 203 + writel(BIT(31), ctx->infra_base + ctx->data->regs_ofs->apc_con_offset); 207 204 208 205 mask_module_irq(ctx, false); 209 206 } ··· 215 212 { 216 213 mask_module_irq(ctx, true); 217 214 218 - writel(BIT(2), ctx->infra_base + ctx->data->apc_con_offset); 215 + writel(BIT(2), ctx->infra_base + ctx->data->regs_ofs->apc_con_offset); 219 216 } 220 217 221 - static const struct mtk_devapc_data devapc_mt6779 = { 222 - .vio_idx_num = 511, 218 + static const struct mtk_devapc_regs_ofs devapc_regs_ofs_mt6779 = { 223 219 .vio_mask_offset = 0x0, 224 220 .vio_sta_offset = 0x400, 225 221 .vio_dbg0_offset = 0x900, ··· 229 227 .vio_shift_con_offset = 0xF20, 230 228 }; 231 229 230 + static const struct mtk_devapc_data devapc_mt6779 = { 231 + .vio_idx_num = 511, 232 + .regs_ofs = &devapc_regs_ofs_mt6779, 233 + }; 234 + 235 + static const struct mtk_devapc_data devapc_mt8186 = { 236 + .vio_idx_num = 519, 237 + .regs_ofs = &devapc_regs_ofs_mt6779, 238 + }; 239 + 232 240 static const struct of_device_id mtk_devapc_dt_match[] = { 233 241 { 234 242 .compatible = "mediatek,mt6779-devapc", 235 243 .data = &devapc_mt6779, 244 + }, { 245 + .compatible = "mediatek,mt8186-devapc", 246 + .data = &devapc_mt8186, 236 247 }, { 237 248 }, 238 249 };
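The mtk-devapc.c refactor above moves the register offsets out of struct mtk_devapc_data into a shared const struct mtk_devapc_regs_ofs, so the new MT8186 entry can reuse the MT6779 offsets while keeping its own vio_idx_num. A standalone sketch of that pattern (all names and values below are illustrative, not from the driver):

```c
#include <assert.h>
#include <stdint.h>

/* One shared, read-only block of register offsets... */
struct regs_ofs {
	uint32_t vio_mask_offset;
	uint32_t vio_sta_offset;
};

/* ...referenced from several per-chip data structs that differ only
 * in their other fields (here, the violation-index count). */
struct chip_data {
	uint32_t vio_idx_num;
	const struct regs_ofs *regs_ofs;
};

static const struct regs_ofs common_regs_ofs = {
	.vio_mask_offset = 0x0,
	.vio_sta_offset = 0x400,
};

static const struct chip_data chip_a = {
	.vio_idx_num = 511,
	.regs_ofs = &common_regs_ofs,
};

static const struct chip_data chip_b = {
	.vio_idx_num = 519,
	.regs_ofs = &common_regs_ofs,
};

/* Register addresses are now computed via the indirection, mirroring
 * the driver's ctx->data->regs_ofs->... accesses. */
static uintptr_t vio_sta_reg(uintptr_t base, const struct chip_data *d)
{
	return base + d->regs_ofs->vio_sta_offset;
}
```

The indirection costs one extra pointer dereference but avoids duplicating the whole offset table for every compatible SoC.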
+153 -2
drivers/soc/mediatek/mtk-mutex.c
··· 7 7 #include <linux/iopoll.h> 8 8 #include <linux/module.h> 9 9 #include <linux/of_device.h> 10 + #include <linux/of_address.h> 10 11 #include <linux/platform_device.h> 11 12 #include <linux/regmap.h> 12 13 #include <linux/soc/mediatek/mtk-mmsys.h> 13 14 #include <linux/soc/mediatek/mtk-mutex.h> 15 + #include <linux/soc/mediatek/mtk-cmdq.h> 14 16 15 17 #define MT2701_MUTEX0_MOD0 0x2c 16 18 #define MT2701_MUTEX0_SOF0 0x30 ··· 82 80 #define MT8183_MUTEX_MOD_DISP_GAMMA0 16 83 81 #define MT8183_MUTEX_MOD_DISP_DITHER0 17 84 82 83 + #define MT8183_MUTEX_MOD_MDP_RDMA0 2 84 + #define MT8183_MUTEX_MOD_MDP_RSZ0 4 85 + #define MT8183_MUTEX_MOD_MDP_RSZ1 5 86 + #define MT8183_MUTEX_MOD_MDP_TDSHP0 6 87 + #define MT8183_MUTEX_MOD_MDP_WROT0 7 88 + #define MT8183_MUTEX_MOD_MDP_WDMA 8 89 + #define MT8183_MUTEX_MOD_MDP_AAL0 23 90 + #define MT8183_MUTEX_MOD_MDP_CCORR0 24 91 + 85 92 #define MT8173_MUTEX_MOD_DISP_OVL0 11 86 93 #define MT8173_MUTEX_MOD_DISP_OVL1 12 87 94 #define MT8173_MUTEX_MOD_DISP_RDMA0 13 ··· 120 109 #define MT8195_MUTEX_MOD_DISP_VPP_MERGE 20 121 110 #define MT8195_MUTEX_MOD_DISP_DP_INTF0 21 122 111 #define MT8195_MUTEX_MOD_DISP_PWM0 27 112 + 113 + #define MT8365_MUTEX_MOD_DISP_OVL0 7 114 + #define MT8365_MUTEX_MOD_DISP_OVL0_2L 8 115 + #define MT8365_MUTEX_MOD_DISP_RDMA0 9 116 + #define MT8365_MUTEX_MOD_DISP_RDMA1 10 117 + #define MT8365_MUTEX_MOD_DISP_WDMA0 11 118 + #define MT8365_MUTEX_MOD_DISP_COLOR0 12 119 + #define MT8365_MUTEX_MOD_DISP_CCORR 13 120 + #define MT8365_MUTEX_MOD_DISP_AAL 14 121 + #define MT8365_MUTEX_MOD_DISP_GAMMA 15 122 + #define MT8365_MUTEX_MOD_DISP_DITHER 16 123 + #define MT8365_MUTEX_MOD_DISP_DSI0 17 124 + #define MT8365_MUTEX_MOD_DISP_PWM0 20 125 + #define MT8365_MUTEX_MOD_DISP_DPI0 22 123 126 124 127 #define MT2712_MUTEX_MOD_DISP_PWM2 10 125 128 #define MT2712_MUTEX_MOD_DISP_OVL0 11 ··· 210 185 const unsigned int *mutex_sof; 211 186 const unsigned int mutex_mod_reg; 212 187 const unsigned int mutex_sof_reg; 188 + const unsigned int 
*mutex_table_mod; 213 189 const bool no_clk; 214 190 }; 215 191 ··· 220 194 void __iomem *regs; 221 195 struct mtk_mutex mutex[10]; 222 196 const struct mtk_mutex_data *data; 197 + phys_addr_t addr; 198 + struct cmdq_client_reg cmdq_reg; 223 199 }; 224 200 225 201 static const unsigned int mt2701_mutex_mod[DDP_COMPONENT_ID_MAX] = { ··· 300 272 [DDP_COMPONENT_WDMA0] = MT8183_MUTEX_MOD_DISP_WDMA0, 301 273 }; 302 274 275 + static const unsigned int mt8183_mutex_table_mod[MUTEX_MOD_IDX_MAX] = { 276 + [MUTEX_MOD_IDX_MDP_RDMA0] = MT8183_MUTEX_MOD_MDP_RDMA0, 277 + [MUTEX_MOD_IDX_MDP_RSZ0] = MT8183_MUTEX_MOD_MDP_RSZ0, 278 + [MUTEX_MOD_IDX_MDP_RSZ1] = MT8183_MUTEX_MOD_MDP_RSZ1, 279 + [MUTEX_MOD_IDX_MDP_TDSHP0] = MT8183_MUTEX_MOD_MDP_TDSHP0, 280 + [MUTEX_MOD_IDX_MDP_WROT0] = MT8183_MUTEX_MOD_MDP_WROT0, 281 + [MUTEX_MOD_IDX_MDP_WDMA] = MT8183_MUTEX_MOD_MDP_WDMA, 282 + [MUTEX_MOD_IDX_MDP_AAL0] = MT8183_MUTEX_MOD_MDP_AAL0, 283 + [MUTEX_MOD_IDX_MDP_CCORR0] = MT8183_MUTEX_MOD_MDP_CCORR0, 284 + }; 285 + 303 286 static const unsigned int mt8186_mutex_mod[DDP_COMPONENT_ID_MAX] = { 304 287 [DDP_COMPONENT_AAL0] = MT8186_MUTEX_MOD_DISP_AAL0, 305 288 [DDP_COMPONENT_CCORR] = MT8186_MUTEX_MOD_DISP_CCORR0, ··· 352 313 [DDP_COMPONENT_DSI0] = MT8195_MUTEX_MOD_DISP_DSI0, 353 314 [DDP_COMPONENT_PWM0] = MT8195_MUTEX_MOD_DISP_PWM0, 354 315 [DDP_COMPONENT_DP_INTF0] = MT8195_MUTEX_MOD_DISP_DP_INTF0, 316 + }; 317 + 318 + static const unsigned int mt8365_mutex_mod[DDP_COMPONENT_ID_MAX] = { 319 + [DDP_COMPONENT_AAL0] = MT8365_MUTEX_MOD_DISP_AAL, 320 + [DDP_COMPONENT_CCORR] = MT8365_MUTEX_MOD_DISP_CCORR, 321 + [DDP_COMPONENT_COLOR0] = MT8365_MUTEX_MOD_DISP_COLOR0, 322 + [DDP_COMPONENT_DITHER0] = MT8365_MUTEX_MOD_DISP_DITHER, 323 + [DDP_COMPONENT_DPI0] = MT8365_MUTEX_MOD_DISP_DPI0, 324 + [DDP_COMPONENT_DSI0] = MT8365_MUTEX_MOD_DISP_DSI0, 325 + [DDP_COMPONENT_GAMMA] = MT8365_MUTEX_MOD_DISP_GAMMA, 326 + [DDP_COMPONENT_OVL0] = MT8365_MUTEX_MOD_DISP_OVL0, 327 + [DDP_COMPONENT_OVL_2L0] = 
MT8365_MUTEX_MOD_DISP_OVL0_2L, 328 + [DDP_COMPONENT_PWM0] = MT8365_MUTEX_MOD_DISP_PWM0, 329 + [DDP_COMPONENT_RDMA0] = MT8365_MUTEX_MOD_DISP_RDMA0, 330 + [DDP_COMPONENT_RDMA1] = MT8365_MUTEX_MOD_DISP_RDMA1, 331 + [DDP_COMPONENT_WDMA0] = MT8365_MUTEX_MOD_DISP_WDMA0, 355 332 }; 356 333 357 334 static const unsigned int mt2712_mutex_sof[DDP_MUTEX_SOF_MAX] = { ··· 454 399 .mutex_sof = mt8183_mutex_sof, 455 400 .mutex_mod_reg = MT8183_MUTEX0_MOD0, 456 401 .mutex_sof_reg = MT8183_MUTEX0_SOF0, 402 + .mutex_table_mod = mt8183_mutex_table_mod, 457 403 .no_clk = true, 458 404 }; 459 405 ··· 477 421 .mutex_sof = mt8195_mutex_sof, 478 422 .mutex_mod_reg = MT8183_MUTEX0_MOD0, 479 423 .mutex_sof_reg = MT8183_MUTEX0_SOF0, 424 + }; 425 + 426 + static const struct mtk_mutex_data mt8365_mutex_driver_data = { 427 + .mutex_mod = mt8365_mutex_mod, 428 + .mutex_sof = mt8183_mutex_sof, 429 + .mutex_mod_reg = MT8183_MUTEX0_MOD0, 430 + .mutex_sof_reg = MT8183_MUTEX0_SOF0, 431 + .no_clk = true, 480 432 }; 481 433 482 434 struct mtk_mutex *mtk_mutex_get(struct device *dev) ··· 636 572 } 637 573 EXPORT_SYMBOL_GPL(mtk_mutex_enable); 638 574 575 + int mtk_mutex_enable_by_cmdq(struct mtk_mutex *mutex, void *pkt) 576 + { 577 + struct mtk_mutex_ctx *mtx = container_of(mutex, struct mtk_mutex_ctx, 578 + mutex[mutex->id]); 579 + #if IS_REACHABLE(CONFIG_MTK_CMDQ) 580 + struct cmdq_pkt *cmdq_pkt = (struct cmdq_pkt *)pkt; 581 + 582 + WARN_ON(&mtx->mutex[mutex->id] != mutex); 583 + 584 + if (!mtx->cmdq_reg.size) { 585 + dev_err(mtx->dev, "mediatek,gce-client-reg hasn't been set"); 586 + return -EINVAL; 587 + } 588 + 589 + cmdq_pkt_write(cmdq_pkt, mtx->cmdq_reg.subsys, 590 + mtx->addr + DISP_REG_MUTEX_EN(mutex->id), 1); 591 + return 0; 592 + #else 593 + dev_err(mtx->dev, "Not support for enable MUTEX by CMDQ"); 594 + return -ENODEV; 595 + #endif 596 + } 597 + EXPORT_SYMBOL_GPL(mtk_mutex_enable_by_cmdq); 598 + 639 599 void mtk_mutex_disable(struct mtk_mutex *mutex) 640 600 { 641 601 struct mtk_mutex_ctx 
*mtx = container_of(mutex, struct mtk_mutex_ctx, ··· 694 606 } 695 607 EXPORT_SYMBOL_GPL(mtk_mutex_release); 696 608 609 + int mtk_mutex_write_mod(struct mtk_mutex *mutex, 610 + enum mtk_mutex_mod_index idx, bool clear) 611 + { 612 + struct mtk_mutex_ctx *mtx = container_of(mutex, struct mtk_mutex_ctx, 613 + mutex[mutex->id]); 614 + unsigned int reg; 615 + unsigned int offset; 616 + 617 + WARN_ON(&mtx->mutex[mutex->id] != mutex); 618 + 619 + if (idx < MUTEX_MOD_IDX_MDP_RDMA0 || 620 + idx >= MUTEX_MOD_IDX_MAX) { 621 + dev_err(mtx->dev, "Not supported MOD table index : %d", idx); 622 + return -EINVAL; 623 + } 624 + 625 + offset = DISP_REG_MUTEX_MOD(mtx->data->mutex_mod_reg, 626 + mutex->id); 627 + reg = readl_relaxed(mtx->regs + offset); 628 + 629 + if (clear) 630 + reg &= ~BIT(mtx->data->mutex_table_mod[idx]); 631 + else 632 + reg |= BIT(mtx->data->mutex_table_mod[idx]); 633 + 634 + writel_relaxed(reg, mtx->regs + offset); 635 + 636 + return 0; 637 + } 638 + EXPORT_SYMBOL_GPL(mtk_mutex_write_mod); 639 + 640 + int mtk_mutex_write_sof(struct mtk_mutex *mutex, 641 + enum mtk_mutex_sof_index idx) 642 + { 643 + struct mtk_mutex_ctx *mtx = container_of(mutex, struct mtk_mutex_ctx, 644 + mutex[mutex->id]); 645 + 646 + WARN_ON(&mtx->mutex[mutex->id] != mutex); 647 + 648 + if (idx < MUTEX_SOF_IDX_SINGLE_MODE || 649 + idx >= MUTEX_SOF_IDX_MAX) { 650 + dev_err(mtx->dev, "Not supported SOF index : %d", idx); 651 + return -EINVAL; 652 + } 653 + 654 + writel_relaxed(idx, mtx->regs + 655 + DISP_REG_MUTEX_SOF(mtx->data->mutex_sof_reg, mutex->id)); 656 + 657 + return 0; 658 + } 659 + EXPORT_SYMBOL_GPL(mtk_mutex_write_sof); 660 + 697 661 static int mtk_mutex_probe(struct platform_device *pdev) 698 662 { 699 663 struct device *dev = &pdev->dev; 700 664 struct mtk_mutex_ctx *mtx; 701 665 struct resource *regs; 702 666 int i; 667 + #if IS_REACHABLE(CONFIG_MTK_CMDQ) 668 + int ret; 669 + #endif 703 670 704 671 mtx = devm_kzalloc(dev, sizeof(*mtx), GFP_KERNEL); 705 672 if (!mtx) ··· 774 
631 } 775 632 } 776 633 777 - regs = platform_get_resource(pdev, IORESOURCE_MEM, 0); 778 - mtx->regs = devm_ioremap_resource(dev, regs); 634 + mtx->regs = devm_platform_get_and_ioremap_resource(pdev, 0, &regs); 779 635 if (IS_ERR(mtx->regs)) { 780 636 dev_err(dev, "Failed to map mutex registers\n"); 781 637 return PTR_ERR(mtx->regs); 782 638 } 639 + mtx->addr = regs->start; 640 + 641 + #if IS_REACHABLE(CONFIG_MTK_CMDQ) 642 + ret = cmdq_dev_get_client_reg(dev, &mtx->cmdq_reg, 0); 643 + if (ret) 644 + dev_dbg(dev, "No mediatek,gce-client-reg!\n"); 645 + #endif 783 646 784 647 platform_set_drvdata(pdev, mtx); 785 648 ··· 814 665 .data = &mt8192_mutex_driver_data}, 815 666 { .compatible = "mediatek,mt8195-disp-mutex", 816 667 .data = &mt8195_mutex_driver_data}, 668 + { .compatible = "mediatek,mt8365-disp-mutex", 669 + .data = &mt8365_mutex_driver_data}, 817 670 {}, 818 671 }; 819 672 MODULE_DEVICE_TABLE(of, mutex_driver_dt_match);
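The new mtk_mutex_write_mod() above validates a table index and then sets or clears the corresponding MOD bit with a read-modify-write on the mutex register. The core pattern can be sketched in plain C against a shadow register (the table and indices below are invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Made-up MOD table mapping logical indices to bit positions, in the
 * style of mt8183_mutex_table_mod above. */
enum { MOD_IDX_RDMA0, MOD_IDX_RSZ0, MOD_IDX_MAX };

static const unsigned int mod_table[MOD_IDX_MAX] = {
	[MOD_IDX_RDMA0] = 2,
	[MOD_IDX_RSZ0]  = 4,
};

/* Reject out-of-range indices, then set or clear the mapped bit. */
static int write_mod(uint32_t *reg, int idx, int clear)
{
	if (idx < 0 || idx >= MOD_IDX_MAX)
		return -1;	/* unsupported MOD table index */

	if (clear)
		*reg &= ~(1u << mod_table[idx]);
	else
		*reg |= 1u << mod_table[idx];

	return 0;
}
```

The driver does the same thing with readl_relaxed()/writel_relaxed() on the real register; validating the index first keeps a bad caller from shifting by an undefined table entry.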
+8
drivers/soc/mediatek/mtk-pm-domains.c
···
 #include <linux/regulator/consumer.h>
 #include <linux/soc/mediatek/infracfg.h>
 
+#include "mt6795-pm-domains.h"
 #include "mt8167-pm-domains.h"
 #include "mt8173-pm-domains.h"
 #include "mt8183-pm-domains.h"
···
 			dev_err(scpsys->dev, "%pOF: failed to power on domain: %d\n", node, ret);
 			goto err_put_subsys_clocks;
 		}
+
+		if (MTK_SCPD_CAPS(pd, MTK_SCPD_ALWAYS_ON))
+			pd->genpd.flags |= GENPD_FLAG_ALWAYS_ON;
 	}
 
 	if (scpsys->domains[id]) {
···
 }
 
 static const struct of_device_id scpsys_of_match[] = {
+	{
+		.compatible = "mediatek,mt6795-power-controller",
+		.data = &mt6795_scpsys_data,
+	},
 	{
 		.compatible = "mediatek,mt8167-power-controller",
 		.data = &mt8167_scpsys_data,
+2
drivers/soc/mediatek/mtk-pm-domains.h
···
 #define MTK_SCPD_SRAM_ISO		BIT(2)
 #define MTK_SCPD_KEEP_DEFAULT_OFF	BIT(3)
 #define MTK_SCPD_DOMAIN_SUPPLY		BIT(4)
+/* can't set MTK_SCPD_KEEP_DEFAULT_OFF at the same time */
+#define MTK_SCPD_ALWAYS_ON		BIT(5)
 #define MTK_SCPD_CAPS(_scpd, _x)	((_scpd)->data->caps & (_x))
 
 #define SPM_VDE_PWR_CON			0x0210
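The header change above adds MTK_SCPD_ALWAYS_ON as BIT(5), with a comment warning that it must not be combined with MTK_SCPD_KEEP_DEFAULT_OFF. A small sketch of how such a mutual-exclusion rule on capability bits could be checked (the caps_valid() helper is hypothetical, not part of the driver; bit positions mirror the header):

```c
#include <assert.h>
#include <stdint.h>

/* Capability bits in the style of the MTK_SCPD_* flags above. */
#define SCPD_ACTIVE_WAKEUP	(1u << 0)
#define SCPD_KEEP_DEFAULT_OFF	(1u << 3)
#define SCPD_DOMAIN_SUPPLY	(1u << 4)
#define SCPD_ALWAYS_ON		(1u << 5)

/* A domain can't be both always-on and kept off by default; reject
 * that contradictory combination, accept everything else. */
static int caps_valid(uint32_t caps)
{
	if ((caps & SCPD_ALWAYS_ON) && (caps & SCPD_KEEP_DEFAULT_OFF))
		return 0;
	return 1;
}
```

Encoding constraints like this in a validation helper (rather than only in a comment) catches bad per-SoC tables at probe time instead of producing confusing power-state behavior.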
+97 -128
drivers/soc/mediatek/mtk-pmic-wrap.c
··· 13 13 #include <linux/regmap.h> 14 14 #include <linux/reset.h> 15 15 16 + #define PWRAP_POLL_DELAY_US 10 17 + #define PWRAP_POLL_TIMEOUT_US 10000 18 + 16 19 #define PWRAP_MT8135_BRIDGE_IORD_ARB_EN 0x4 17 20 #define PWRAP_MT8135_BRIDGE_WACS3_EN 0x10 18 21 #define PWRAP_MT8135_BRIDGE_INIT_DONE3 0x14 ··· 1143 1140 }; 1144 1141 1145 1142 struct pmic_wrapper; 1146 - struct pwrap_slv_type { 1147 - const u32 *dew_regs; 1148 - enum pmic_type type; 1143 + 1144 + struct pwrap_slv_regops { 1149 1145 const struct regmap_config *regmap; 1150 - /* Flags indicating the capability for the target slave */ 1151 - u32 caps; 1152 1146 /* 1153 1147 * pwrap operations are highly associated with the PMIC types, 1154 1148 * so the pointers added increases flexibility allowing determination ··· 1153 1153 */ 1154 1154 int (*pwrap_read)(struct pmic_wrapper *wrp, u32 adr, u32 *rdata); 1155 1155 int (*pwrap_write)(struct pmic_wrapper *wrp, u32 adr, u32 wdata); 1156 + }; 1157 + 1158 + struct pwrap_slv_type { 1159 + const u32 *dew_regs; 1160 + enum pmic_type type; 1161 + const struct pwrap_slv_regops *regops; 1162 + /* Flags indicating the capability for the target slave */ 1163 + u32 caps; 1156 1164 }; 1157 1165 1158 1166 struct pmic_wrapper { ··· 1249 1241 (val & PWRAP_STATE_SYNC_IDLE0); 1250 1242 } 1251 1243 1252 - static int pwrap_wait_for_state(struct pmic_wrapper *wrp, 1253 - bool (*fp)(struct pmic_wrapper *)) 1254 - { 1255 - unsigned long timeout; 1256 - 1257 - timeout = jiffies + usecs_to_jiffies(10000); 1258 - 1259 - do { 1260 - if (time_after(jiffies, timeout)) 1261 - return fp(wrp) ? 
0 : -ETIMEDOUT; 1262 - if (fp(wrp)) 1263 - return 0; 1264 - } while (1); 1265 - } 1266 - 1267 1244 static int pwrap_read16(struct pmic_wrapper *wrp, u32 adr, u32 *rdata) 1268 1245 { 1246 + bool tmp; 1269 1247 int ret; 1270 1248 u32 val; 1271 1249 1272 - ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_idle); 1250 + ret = readx_poll_timeout(pwrap_is_fsm_idle, wrp, tmp, tmp, 1251 + PWRAP_POLL_DELAY_US, PWRAP_POLL_TIMEOUT_US); 1273 1252 if (ret) { 1274 1253 pwrap_leave_fsm_vldclr(wrp); 1275 1254 return ret; ··· 1268 1273 val = (adr >> 1) << 16; 1269 1274 pwrap_writel(wrp, val, PWRAP_WACS2_CMD); 1270 1275 1271 - ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_vldclr); 1276 + ret = readx_poll_timeout(pwrap_is_fsm_vldclr, wrp, tmp, tmp, 1277 + PWRAP_POLL_DELAY_US, PWRAP_POLL_TIMEOUT_US); 1272 1278 if (ret) 1273 1279 return ret; 1274 1280 ··· 1286 1290 1287 1291 static int pwrap_read32(struct pmic_wrapper *wrp, u32 adr, u32 *rdata) 1288 1292 { 1293 + bool tmp; 1289 1294 int ret, msb; 1290 1295 1291 1296 *rdata = 0; 1292 1297 for (msb = 0; msb < 2; msb++) { 1293 - ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_idle); 1298 + ret = readx_poll_timeout(pwrap_is_fsm_idle, wrp, tmp, tmp, 1299 + PWRAP_POLL_DELAY_US, PWRAP_POLL_TIMEOUT_US); 1300 + 1294 1301 if (ret) { 1295 1302 pwrap_leave_fsm_vldclr(wrp); 1296 1303 return ret; ··· 1302 1303 pwrap_writel(wrp, ((msb << 30) | (adr << 16)), 1303 1304 PWRAP_WACS2_CMD); 1304 1305 1305 - ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_vldclr); 1306 + ret = readx_poll_timeout(pwrap_is_fsm_vldclr, wrp, tmp, tmp, 1307 + PWRAP_POLL_DELAY_US, PWRAP_POLL_TIMEOUT_US); 1306 1308 if (ret) 1307 1309 return ret; 1308 1310 ··· 1318 1318 1319 1319 static int pwrap_read(struct pmic_wrapper *wrp, u32 adr, u32 *rdata) 1320 1320 { 1321 - return wrp->slave->pwrap_read(wrp, adr, rdata); 1321 + return wrp->slave->regops->pwrap_read(wrp, adr, rdata); 1322 1322 } 1323 1323 1324 1324 static int pwrap_write16(struct pmic_wrapper *wrp, u32 adr, u32 wdata) 1325 1325 { 1326 
+ bool tmp; 1326 1327 int ret; 1327 1328 1328 - ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_idle); 1329 + ret = readx_poll_timeout(pwrap_is_fsm_idle, wrp, tmp, tmp, 1330 + PWRAP_POLL_DELAY_US, PWRAP_POLL_TIMEOUT_US); 1329 1331 if (ret) { 1330 1332 pwrap_leave_fsm_vldclr(wrp); 1331 1333 return ret; ··· 1346 1344 1347 1345 static int pwrap_write32(struct pmic_wrapper *wrp, u32 adr, u32 wdata) 1348 1346 { 1347 + bool tmp; 1349 1348 int ret, msb, rdata; 1350 1349 1351 1350 for (msb = 0; msb < 2; msb++) { 1352 - ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_idle); 1351 + ret = readx_poll_timeout(pwrap_is_fsm_idle, wrp, tmp, tmp, 1352 + PWRAP_POLL_DELAY_US, PWRAP_POLL_TIMEOUT_US); 1353 1353 if (ret) { 1354 1354 pwrap_leave_fsm_vldclr(wrp); 1355 1355 return ret; ··· 1377 1373 1378 1374 static int pwrap_write(struct pmic_wrapper *wrp, u32 adr, u32 wdata) 1379 1375 { 1380 - return wrp->slave->pwrap_write(wrp, adr, wdata); 1376 + return wrp->slave->regops->pwrap_write(wrp, adr, wdata); 1381 1377 } 1382 1378 1383 1379 static int pwrap_regmap_read(void *context, u32 adr, u32 *rdata) ··· 1392 1388 1393 1389 static int pwrap_reset_spislave(struct pmic_wrapper *wrp) 1394 1390 { 1391 + bool tmp; 1395 1392 int ret, i; 1396 1393 1397 1394 pwrap_writel(wrp, 0, PWRAP_HIPRIO_ARB_EN); ··· 1412 1407 pwrap_writel(wrp, wrp->master->spi_w | PWRAP_MAN_CMD_OP_OUTS, 1413 1408 PWRAP_MAN_CMD); 1414 1409 1415 - ret = pwrap_wait_for_state(wrp, pwrap_is_sync_idle); 1410 + ret = readx_poll_timeout(pwrap_is_sync_idle, wrp, tmp, tmp, 1411 + PWRAP_POLL_DELAY_US, PWRAP_POLL_TIMEOUT_US); 1416 1412 if (ret) { 1417 1413 dev_err(wrp->dev, "%s fail, ret=%d\n", __func__, ret); 1418 1414 return ret; ··· 1464 1458 static int pwrap_init_dual_io(struct pmic_wrapper *wrp) 1465 1459 { 1466 1460 int ret; 1461 + bool tmp; 1467 1462 u32 rdata; 1468 1463 1469 1464 /* Enable dual IO mode */ 1470 1465 pwrap_write(wrp, wrp->slave->dew_regs[PWRAP_DEW_DIO_EN], 1); 1471 1466 1472 1467 /* Check IDLE & INIT_DONE in advance */ 
1473 - ret = pwrap_wait_for_state(wrp, 1474 - pwrap_is_fsm_idle_and_sync_idle); 1468 + ret = readx_poll_timeout(pwrap_is_fsm_idle_and_sync_idle, wrp, tmp, tmp, 1469 + PWRAP_POLL_DELAY_US, PWRAP_POLL_TIMEOUT_US); 1475 1470 if (ret) { 1476 1471 dev_err(wrp->dev, "%s fail, ret=%d\n", __func__, ret); 1477 1472 return ret; ··· 1577 1570 static int pwrap_init_cipher(struct pmic_wrapper *wrp) 1578 1571 { 1579 1572 int ret; 1573 + bool tmp; 1580 1574 u32 rdata = 0; 1581 1575 1582 1576 pwrap_writel(wrp, 0x1, PWRAP_CIPHER_SWRST); ··· 1632 1624 } 1633 1625 1634 1626 /* wait for cipher data ready@AP */ 1635 - ret = pwrap_wait_for_state(wrp, pwrap_is_cipher_ready); 1627 + ret = readx_poll_timeout(pwrap_is_cipher_ready, wrp, tmp, tmp, 1628 + PWRAP_POLL_DELAY_US, PWRAP_POLL_TIMEOUT_US); 1636 1629 if (ret) { 1637 1630 dev_err(wrp->dev, "cipher data ready@AP fail, ret=%d\n", ret); 1638 1631 return ret; 1639 1632 } 1640 1633 1641 1634 /* wait for cipher data ready@PMIC */ 1642 - ret = pwrap_wait_for_state(wrp, pwrap_is_pmic_cipher_ready); 1635 + ret = readx_poll_timeout(pwrap_is_pmic_cipher_ready, wrp, tmp, tmp, 1636 + PWRAP_POLL_DELAY_US, PWRAP_POLL_TIMEOUT_US); 1643 1637 if (ret) { 1644 1638 dev_err(wrp->dev, 1645 1639 "timeout waiting for cipher data ready@PMIC\n"); ··· 1650 1640 1651 1641 /* wait for cipher mode idle */ 1652 1642 pwrap_write(wrp, wrp->slave->dew_regs[PWRAP_DEW_CIPHER_MODE], 0x1); 1653 - ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_idle_and_sync_idle); 1643 + ret = readx_poll_timeout(pwrap_is_fsm_idle_and_sync_idle, wrp, tmp, tmp, 1644 + PWRAP_POLL_DELAY_US, PWRAP_POLL_TIMEOUT_US); 1654 1645 if (ret) { 1655 1646 dev_err(wrp->dev, "cipher mode idle fail, ret=%d\n", ret); 1656 1647 return ret; ··· 1896 1885 .max_register = 0xffff, 1897 1886 }; 1898 1887 1888 + static const struct pwrap_slv_regops pwrap_regops16 = { 1889 + .pwrap_read = pwrap_read16, 1890 + .pwrap_write = pwrap_write16, 1891 + .regmap = &pwrap_regmap_config16, 1892 + }; 1893 + 1894 + static const 
+struct pwrap_slv_regops pwrap_regops32 = {
+	.pwrap_read = pwrap_read32,
+	.pwrap_write = pwrap_write32,
+	.regmap = &pwrap_regmap_config32,
+};
+
 static const struct pwrap_slv_type pmic_mt6323 = {
 	.dew_regs = mt6323_regs,
 	.type = PMIC_MT6323,
-	.regmap = &pwrap_regmap_config16,
+	.regops = &pwrap_regops16,
 	.caps = PWRAP_SLV_CAP_SPI | PWRAP_SLV_CAP_DUALIO |
 		PWRAP_SLV_CAP_SECURITY,
-	.pwrap_read = pwrap_read16,
-	.pwrap_write = pwrap_write16,
 };
 
 static const struct pwrap_slv_type pmic_mt6351 = {
 	.dew_regs = mt6351_regs,
 	.type = PMIC_MT6351,
-	.regmap = &pwrap_regmap_config16,
+	.regops = &pwrap_regops16,
 	.caps = 0,
-	.pwrap_read = pwrap_read16,
-	.pwrap_write = pwrap_write16,
 };
 
 static const struct pwrap_slv_type pmic_mt6357 = {
 	.dew_regs = mt6357_regs,
 	.type = PMIC_MT6357,
-	.regmap = &pwrap_regmap_config16,
+	.regops = &pwrap_regops16,
 	.caps = 0,
-	.pwrap_read = pwrap_read16,
-	.pwrap_write = pwrap_write16,
 };
 
 static const struct pwrap_slv_type pmic_mt6358 = {
 	.dew_regs = mt6358_regs,
 	.type = PMIC_MT6358,
-	.regmap = &pwrap_regmap_config16,
+	.regops = &pwrap_regops16,
 	.caps = PWRAP_SLV_CAP_SPI | PWRAP_SLV_CAP_DUALIO,
-	.pwrap_read = pwrap_read16,
-	.pwrap_write = pwrap_write16,
 };
 
 static const struct pwrap_slv_type pmic_mt6359 = {
 	.dew_regs = mt6359_regs,
 	.type = PMIC_MT6359,
-	.regmap = &pwrap_regmap_config16,
+	.regops = &pwrap_regops16,
 	.caps = PWRAP_SLV_CAP_DUALIO,
-	.pwrap_read = pwrap_read16,
-	.pwrap_write = pwrap_write16,
 };
 
 static const struct pwrap_slv_type pmic_mt6380 = {
 	.dew_regs = NULL,
 	.type = PMIC_MT6380,
-	.regmap = &pwrap_regmap_config32,
+	.regops = &pwrap_regops32,
 	.caps = 0,
-	.pwrap_read = pwrap_read32,
-	.pwrap_write = pwrap_write32,
 };
 
 static const struct pwrap_slv_type pmic_mt6397 = {
 	.dew_regs = mt6397_regs,
 	.type = PMIC_MT6397,
-	.regmap = &pwrap_regmap_config16,
+	.regops = &pwrap_regops16,
 	.caps = PWRAP_SLV_CAP_SPI | PWRAP_SLV_CAP_DUALIO |
 		PWRAP_SLV_CAP_SECURITY,
-	.pwrap_read = pwrap_read16,
-	.pwrap_write = pwrap_write16,
 };
 
 static const struct of_device_id of_slave_match_tbl[] = {
-	{
-		.compatible = "mediatek,mt6323",
-		.data = &pmic_mt6323,
-	}, {
-		.compatible = "mediatek,mt6351",
-		.data = &pmic_mt6351,
-	}, {
-		.compatible = "mediatek,mt6357",
-		.data = &pmic_mt6357,
-	}, {
-		.compatible = "mediatek,mt6358",
-		.data = &pmic_mt6358,
-	}, {
-		.compatible = "mediatek,mt6359",
-		.data = &pmic_mt6359,
-	}, {
-		/* The MT6380 PMIC only implements a regulator, so we bind it
-		 * directly instead of using a MFD.
-		 */
-		.compatible = "mediatek,mt6380-regulator",
-		.data = &pmic_mt6380,
-	}, {
-		.compatible = "mediatek,mt6397",
-		.data = &pmic_mt6397,
-	}, {
-		/* sentinel */
-	}
+	{ .compatible = "mediatek,mt6323", .data = &pmic_mt6323 },
+	{ .compatible = "mediatek,mt6351", .data = &pmic_mt6351 },
+	{ .compatible = "mediatek,mt6357", .data = &pmic_mt6357 },
+	{ .compatible = "mediatek,mt6358", .data = &pmic_mt6358 },
+	{ .compatible = "mediatek,mt6359", .data = &pmic_mt6359 },
+
+	/* The MT6380 PMIC only implements a regulator, so we bind it
+	 * directly instead of using a MFD.
+	 */
+	{ .compatible = "mediatek,mt6380-regulator", .data = &pmic_mt6380 },
+	{ .compatible = "mediatek,mt6397", .data = &pmic_mt6397 },
+	{ /* sentinel */ }
 };
 MODULE_DEVICE_TABLE(of, of_slave_match_tbl);
···
 };
 
 static const struct of_device_id of_pwrap_match_tbl[] = {
-	{
-		.compatible = "mediatek,mt2701-pwrap",
-		.data = &pwrap_mt2701,
-	}, {
-		.compatible = "mediatek,mt6765-pwrap",
-		.data = &pwrap_mt6765,
-	}, {
-		.compatible = "mediatek,mt6779-pwrap",
-		.data = &pwrap_mt6779,
-	}, {
-		.compatible = "mediatek,mt6797-pwrap",
-		.data = &pwrap_mt6797,
-	}, {
-		.compatible = "mediatek,mt6873-pwrap",
-		.data = &pwrap_mt6873,
-	}, {
-		.compatible = "mediatek,mt7622-pwrap",
-		.data = &pwrap_mt7622,
-	}, {
-		.compatible = "mediatek,mt8135-pwrap",
-		.data = &pwrap_mt8135,
-	}, {
-		.compatible = "mediatek,mt8173-pwrap",
-		.data = &pwrap_mt8173,
-	}, {
-		.compatible = "mediatek,mt8183-pwrap",
-		.data = &pwrap_mt8183,
-	}, {
-		.compatible = "mediatek,mt8186-pwrap",
-		.data = &pwrap_mt8186,
-	}, {
-		.compatible = "mediatek,mt8195-pwrap",
-		.data = &pwrap_mt8195,
-	}, {
-		.compatible = "mediatek,mt8516-pwrap",
-		.data = &pwrap_mt8516,
-	}, {
-		/* sentinel */
-	}
+	{ .compatible = "mediatek,mt2701-pwrap", .data = &pwrap_mt2701 },
+	{ .compatible = "mediatek,mt6765-pwrap", .data = &pwrap_mt6765 },
+	{ .compatible = "mediatek,mt6779-pwrap", .data = &pwrap_mt6779 },
+	{ .compatible = "mediatek,mt6797-pwrap", .data = &pwrap_mt6797 },
+	{ .compatible = "mediatek,mt6873-pwrap", .data = &pwrap_mt6873 },
+	{ .compatible = "mediatek,mt7622-pwrap", .data = &pwrap_mt7622 },
+	{ .compatible = "mediatek,mt8135-pwrap", .data = &pwrap_mt8135 },
+	{ .compatible = "mediatek,mt8173-pwrap", .data = &pwrap_mt8173 },
+	{ .compatible = "mediatek,mt8183-pwrap", .data = &pwrap_mt8183 },
+	{ .compatible = "mediatek,mt8186-pwrap", .data = &pwrap_mt8186 },
+	{ .compatible = "mediatek,mt8195-pwrap", .data = &pwrap_mt8195 },
+	{ .compatible = "mediatek,mt8516-pwrap", .data = &pwrap_mt8516 },
+	{ /* sentinel */ }
 };
 MODULE_DEVICE_TABLE(of, of_pwrap_match_tbl);
···
 	struct pmic_wrapper *wrp;
 	struct device_node *np = pdev->dev.of_node;
 	const struct of_device_id *of_slave_id = NULL;
-	struct resource *res;
 
 	if (np->child)
 		of_slave_id = of_match_node(of_slave_match_tbl, np->child);
···
 	wrp->slave = of_slave_id->data;
 	wrp->dev = &pdev->dev;
 
-	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "pwrap");
-	wrp->base = devm_ioremap_resource(wrp->dev, res);
+	wrp->base = devm_platform_ioremap_resource_byname(pdev, "pwrap");
 	if (IS_ERR(wrp->base))
 		return PTR_ERR(wrp->base);
···
 	}
 
 	if (HAS_CAP(wrp->master->caps, PWRAP_CAP_BRIDGE)) {
-		res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
-						   "pwrap-bridge");
-		wrp->bridge_base = devm_ioremap_resource(wrp->dev, res);
+		wrp->bridge_base = devm_platform_ioremap_resource_byname(pdev, "pwrap-bridge");
 		if (IS_ERR(wrp->bridge_base))
 			return PTR_ERR(wrp->bridge_base);
···
 	pwrap_writel(wrp, wrp->master->int1_en_all, PWRAP_INT1_EN);
 
 	irq = platform_get_irq(pdev, 0);
+	if (irq < 0) {
+		ret = irq;
+		goto err_out2;
+	}
+
 	ret = devm_request_irq(wrp->dev, irq, pwrap_interrupt,
 			       IRQF_TRIGGER_HIGH,
 			       "mt-pmic-pwrap", wrp);
 	if (ret)
 		goto err_out2;
 
-	wrp->regmap = devm_regmap_init(wrp->dev, NULL, wrp, wrp->slave->regmap);
+	wrp->regmap = devm_regmap_init(wrp->dev, NULL, wrp, wrp->slave->regops->regmap);
 	if (IS_ERR(wrp->regmap)) {
 		ret = PTR_ERR(wrp->regmap);
 		goto err_out2;
+2403
drivers/soc/mediatek/mtk-svs.c
···
// SPDX-License-Identifier: GPL-2.0-only
/*
 * Copyright (C) 2022 MediaTek Inc.
 */

#include <linux/bits.h>
#include <linux/clk.h>
#include <linux/completion.h>
#include <linux/cpuidle.h>
#include <linux/debugfs.h>
#include <linux/device.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/kthread.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/nvmem-consumer.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/pm_domain.h>
#include <linux/pm_opp.h>
#include <linux/pm_runtime.h>
#include <linux/regulator/consumer.h>
#include <linux/reset.h>
#include <linux/seq_file.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/thermal.h>

/* svs bank 1-line software id */
#define SVSB_CPU_LITTLE			BIT(0)
#define SVSB_CPU_BIG			BIT(1)
#define SVSB_CCI			BIT(2)
#define SVSB_GPU			BIT(3)

/* svs bank 2-line type */
#define SVSB_LOW			BIT(8)
#define SVSB_HIGH			BIT(9)

/* svs bank mode support */
#define SVSB_MODE_ALL_DISABLE		0
#define SVSB_MODE_INIT01		BIT(1)
#define SVSB_MODE_INIT02		BIT(2)
#define SVSB_MODE_MON			BIT(3)

/* svs bank volt flags */
#define SVSB_INIT01_PD_REQ		BIT(0)
#define SVSB_INIT01_VOLT_IGNORE		BIT(1)
#define SVSB_INIT01_VOLT_INC_ONLY	BIT(2)
#define SVSB_MON_VOLT_IGNORE		BIT(16)
#define SVSB_REMOVE_DVTFIXED_VOLT	BIT(24)

/* svs bank register common configuration */
#define SVSB_DET_MAX			0xffff
#define SVSB_DET_WINDOW			0xa28
#define SVSB_DTHI			0x1
#define SVSB_DTLO			0xfe
#define SVSB_EN_INIT01			0x1
#define SVSB_EN_INIT02			0x5
#define SVSB_EN_MON			0x2
#define SVSB_EN_OFF			0x0
#define SVSB_INTEN_INIT0x		0x00005f01
#define SVSB_INTEN_MONVOPEN		0x00ff0000
#define SVSB_INTSTS_CLEAN		0x00ffffff
#define SVSB_INTSTS_COMPLETE		0x1
#define SVSB_INTSTS_MONVOP		0x00ff0000
#define SVSB_RUNCONFIG_DEFAULT		0x80000000

/* svs bank related setting */
#define BITS8				8
#define MAX_OPP_ENTRIES			16
#define REG_BYTES			4
#define SVSB_DC_SIGNED_BIT		BIT(15)
#define SVSB_DET_CLK_EN			BIT(31)
#define SVSB_TEMP_LOWER_BOUND		0xb2
#define SVSB_TEMP_UPPER_BOUND		0x64

static DEFINE_SPINLOCK(svs_lock);

#define debug_fops_ro(name)						\
	static int svs_##name##_debug_open(struct inode *inode,		\
					   struct file *filp)		\
	{								\
		return single_open(filp, svs_##name##_debug_show,	\
				   inode->i_private);			\
	}								\
	static const struct file_operations svs_##name##_debug_fops = {	\
		.owner = THIS_MODULE,					\
		.open = svs_##name##_debug_open,			\
		.read = seq_read,					\
		.llseek = seq_lseek,					\
		.release = single_release,				\
	}

#define debug_fops_rw(name)						\
	static int svs_##name##_debug_open(struct inode *inode,		\
					   struct file *filp)		\
	{								\
		return single_open(filp, svs_##name##_debug_show,	\
				   inode->i_private);			\
	}								\
	static const struct file_operations svs_##name##_debug_fops = {	\
		.owner = THIS_MODULE,					\
		.open = svs_##name##_debug_open,			\
		.read = seq_read,					\
		.write = svs_##name##_debug_write,			\
		.llseek = seq_lseek,					\
		.release = single_release,				\
	}

#define svs_dentry_data(name)	{__stringify(name), &svs_##name##_debug_fops}

/**
 * enum svsb_phase - svs bank phase enumeration
 * @SVSB_PHASE_ERROR: svs bank encounters unexpected condition
 * @SVSB_PHASE_INIT01: svs bank basic init for data calibration
 * @SVSB_PHASE_INIT02: svs bank can provide voltages to opp table
 * @SVSB_PHASE_MON: svs bank can provide voltages with thermal effect
 * @SVSB_PHASE_MAX: total number of svs bank phase (debug purpose)
 *
 * Each svs bank has its own independent phase and we enable each svs bank by
 * running their phase orderly. However, when svs bank encounters unexpected
 * condition, it will fire an irq (PHASE_ERROR) to inform svs software.
 *
 * svs bank general phase-enabled order:
 * SVSB_PHASE_INIT01 -> SVSB_PHASE_INIT02 -> SVSB_PHASE_MON
 */
enum svsb_phase {
	SVSB_PHASE_ERROR = 0,
	SVSB_PHASE_INIT01,
	SVSB_PHASE_INIT02,
	SVSB_PHASE_MON,
	SVSB_PHASE_MAX,
};

enum svs_reg_index {
	DESCHAR = 0,
	TEMPCHAR,
	DETCHAR,
	AGECHAR,
	DCCONFIG,
	AGECONFIG,
	FREQPCT30,
	FREQPCT74,
	LIMITVALS,
	VBOOT,
	DETWINDOW,
	CONFIG,
	TSCALCS,
	RUNCONFIG,
	SVSEN,
	INIT2VALS,
	DCVALUES,
	AGEVALUES,
	VOP30,
	VOP74,
	TEMP,
	INTSTS,
	INTSTSRAW,
	INTEN,
	CHKINT,
	CHKSHIFT,
	STATUS,
	VDESIGN30,
	VDESIGN74,
	DVT30,
	DVT74,
	AGECOUNT,
	SMSTATE0,
	SMSTATE1,
	CTL0,
	DESDETSEC,
	TEMPAGESEC,
	CTRLSPARE0,
	CTRLSPARE1,
	CTRLSPARE2,
	CTRLSPARE3,
	CORESEL,
	THERMINTST,
	INTST,
	THSTAGE0ST,
	THSTAGE1ST,
	THSTAGE2ST,
	THAHBST0,
	THAHBST1,
	SPARE0,
	SPARE1,
	SPARE2,
	SPARE3,
	THSLPEVEB,
	SVS_REG_MAX,
};

static const u32 svs_regs_v2[] = {
	[DESCHAR]		= 0xc00,
	[TEMPCHAR]		= 0xc04,
	[DETCHAR]		= 0xc08,
	[AGECHAR]		= 0xc0c,
	[DCCONFIG]		= 0xc10,
	[AGECONFIG]		= 0xc14,
	[FREQPCT30]		= 0xc18,
	[FREQPCT74]		= 0xc1c,
	[LIMITVALS]		= 0xc20,
	[VBOOT]			= 0xc24,
	[DETWINDOW]		= 0xc28,
	[CONFIG]		= 0xc2c,
	[TSCALCS]		= 0xc30,
	[RUNCONFIG]		= 0xc34,
	[SVSEN]			= 0xc38,
	[INIT2VALS]		= 0xc3c,
	[DCVALUES]		= 0xc40,
	[AGEVALUES]		= 0xc44,
	[VOP30]			= 0xc48,
	[VOP74]			= 0xc4c,
	[TEMP]			= 0xc50,
	[INTSTS]		= 0xc54,
	[INTSTSRAW]		= 0xc58,
	[INTEN]			= 0xc5c,
	[CHKINT]		= 0xc60,
	[CHKSHIFT]		= 0xc64,
	[STATUS]		= 0xc68,
	[VDESIGN30]		= 0xc6c,
	[VDESIGN74]		= 0xc70,
	[DVT30]			= 0xc74,
	[DVT74]			= 0xc78,
	[AGECOUNT]		= 0xc7c,
	[SMSTATE0]		= 0xc80,
	[SMSTATE1]		= 0xc84,
	[CTL0]			= 0xc88,
	[DESDETSEC]		= 0xce0,
	[TEMPAGESEC]		= 0xce4,
	[CTRLSPARE0]		= 0xcf0,
	[CTRLSPARE1]		= 0xcf4,
	[CTRLSPARE2]		= 0xcf8,
	[CTRLSPARE3]		= 0xcfc,
	[CORESEL]		= 0xf00,
	[THERMINTST]		= 0xf04,
	[INTST]			= 0xf08,
	[THSTAGE0ST]		= 0xf0c,
	[THSTAGE1ST]		= 0xf10,
	[THSTAGE2ST]		= 0xf14,
	[THAHBST0]		= 0xf18,
	[THAHBST1]		= 0xf1c,
	[SPARE0]		= 0xf20,
	[SPARE1]		= 0xf24,
	[SPARE2]		= 0xf28,
	[SPARE3]		= 0xf2c,
	[THSLPEVEB]		= 0xf30,
};

/**
 * struct svs_platform - svs platform control
 * @name: svs platform name
 * @base: svs platform register base
 * @dev: svs platform device
 * @main_clk: main clock for svs bank
 * @pbank: svs bank pointer needing to be protected by spin_lock section
 * @banks: svs banks that svs platform supports
 * @rst: svs platform reset control
 * @efuse_parsing: svs platform efuse parsing function pointer
 * @probe: svs platform probe function pointer
 * @irqflags: svs platform irq settings flags
 * @efuse_max: total number of svs efuse
 * @tefuse_max: total number of thermal efuse
 * @regs: svs platform registers map
 * @bank_max: total number of svs banks
 * @efuse: svs efuse data received from NVMEM framework
 * @tefuse: thermal efuse data received from NVMEM framework
 */
struct svs_platform {
	char *name;
	void __iomem *base;
	struct device *dev;
	struct clk *main_clk;
	struct svs_bank *pbank;
	struct svs_bank *banks;
	struct reset_control *rst;
	bool (*efuse_parsing)(struct svs_platform *svsp);
	int (*probe)(struct svs_platform *svsp);
	unsigned long irqflags;
	size_t efuse_max;
	size_t tefuse_max;
	const u32 *regs;
	u32 bank_max;
	u32 *efuse;
	u32 *tefuse;
};

struct svs_platform_data {
	char *name;
	struct svs_bank *banks;
	bool (*efuse_parsing)(struct svs_platform *svsp);
	int (*probe)(struct svs_platform *svsp);
	unsigned long irqflags;
	const u32 *regs;
	u32 bank_max;
};

/**
 * struct svs_bank - svs bank representation
 * @dev: bank device
 * @opp_dev: device for opp table/buck control
 * @init_completion: the timeout completion for bank init
 * @buck: regulator used by opp_dev
 * @tzd: thermal zone device for getting temperature
 * @lock: mutex lock to protect voltage update process
 * @set_freq_pct: function pointer to set bank frequency percent table
 * @get_volts: function pointer to get bank voltages
 * @name: bank name
 * @buck_name: regulator name
 * @tzone_name: thermal zone name
 * @phase: bank current phase
 * @volt_od: bank voltage overdrive
 * @reg_data: bank register data in different phase for debug purpose
 * @pm_runtime_enabled_count: bank pm runtime enabled count
 * @mode_support: bank mode support.
 * @freq_base: reference frequency for bank init
 * @turn_freq_base: reference frequency for 2-line turn point
 * @vboot: voltage request for bank init01 only
 * @opp_dfreq: default opp frequency table
 * @opp_dvolt: default opp voltage table
 * @freq_pct: frequency percent table for bank init
 * @volt: bank voltage table
 * @volt_step: bank voltage step
 * @volt_base: bank voltage base
 * @volt_flags: bank voltage flags
 * @vmax: bank voltage maximum
 * @vmin: bank voltage minimum
 * @age_config: bank age configuration
 * @age_voffset_in: bank age voltage offset
 * @dc_config: bank dc configuration
 * @dc_voffset_in: bank dc voltage offset
 * @dvt_fixed: bank dvt fixed value
 * @vco: bank VCO value
 * @chk_shift: bank chicken shift
 * @core_sel: bank selection
 * @opp_count: bank opp count
 * @int_st: bank interrupt identification
 * @sw_id: bank software identification
 * @cpu_id: cpu core id for SVS CPU bank use only
 * @ctl0: TS-x selection
 * @temp: bank temperature
 * @tzone_htemp: thermal zone high temperature threshold
 * @tzone_htemp_voffset: thermal zone high temperature voltage offset
 * @tzone_ltemp: thermal zone low temperature threshold
 * @tzone_ltemp_voffset: thermal zone low temperature voltage offset
 * @bts: svs efuse data
 * @mts: svs efuse data
 * @bdes: svs efuse data
 * @mdes: svs efuse data
 * @mtdes: svs efuse data
 * @dcbdet: svs efuse data
 * @dcmdet: svs efuse data
 * @turn_pt: 2-line turn point tells which opp_volt calculated by high/low bank
 * @type: bank type to represent it is 2-line (high/low) bank or 1-line bank
 *
 * Svs bank will generate suitable voltages by the below general math equation
 * and provide these voltages to opp voltage table.
 *
 * opp_volt[i] = (volt[i] * volt_step) + volt_base;
 */
struct svs_bank {
	struct device *dev;
	struct device *opp_dev;
	struct completion init_completion;
	struct regulator *buck;
	struct thermal_zone_device *tzd;
	struct mutex lock;	/* lock to protect voltage update process */
	void (*set_freq_pct)(struct svs_platform *svsp);
	void (*get_volts)(struct svs_platform *svsp);
	char *name;
	char *buck_name;
	char *tzone_name;
	enum svsb_phase phase;
	s32 volt_od;
	u32 reg_data[SVSB_PHASE_MAX][SVS_REG_MAX];
	u32 pm_runtime_enabled_count;
	u32 mode_support;
	u32 freq_base;
	u32 turn_freq_base;
	u32 vboot;
	u32 opp_dfreq[MAX_OPP_ENTRIES];
	u32 opp_dvolt[MAX_OPP_ENTRIES];
	u32 freq_pct[MAX_OPP_ENTRIES];
	u32 volt[MAX_OPP_ENTRIES];
	u32 volt_step;
	u32 volt_base;
	u32 volt_flags;
	u32 vmax;
	u32 vmin;
	u32 age_config;
	u32 age_voffset_in;
	u32 dc_config;
	u32 dc_voffset_in;
	u32 dvt_fixed;
	u32 vco;
	u32 chk_shift;
	u32 core_sel;
	u32 opp_count;
	u32 int_st;
	u32 sw_id;
	u32 cpu_id;
	u32 ctl0;
	u32 temp;
	u32 tzone_htemp;
	u32 tzone_htemp_voffset;
	u32 tzone_ltemp;
	u32 tzone_ltemp_voffset;
	u32 bts;
	u32 mts;
	u32 bdes;
	u32 mdes;
	u32 mtdes;
	u32 dcbdet;
	u32 dcmdet;
	u32 turn_pt;
	u32 type;
};

static u32 percent(u32 numerator, u32 denominator)
{
	/* If not divide 1000, "numerator * 100" will have data overflow. */
	numerator /= 1000;
	denominator /= 1000;

	return DIV_ROUND_UP(numerator * 100, denominator);
}

static u32 svs_readl_relaxed(struct svs_platform *svsp, enum svs_reg_index rg_i)
{
	return readl_relaxed(svsp->base + svsp->regs[rg_i]);
}

static void svs_writel_relaxed(struct svs_platform *svsp, u32 val,
			       enum svs_reg_index rg_i)
{
	writel_relaxed(val, svsp->base + svsp->regs[rg_i]);
}

static void svs_switch_bank(struct svs_platform *svsp)
{
	struct svs_bank *svsb = svsp->pbank;

	svs_writel_relaxed(svsp, svsb->core_sel, CORESEL);
}

static u32 svs_bank_volt_to_opp_volt(u32 svsb_volt, u32 svsb_volt_step,
				     u32 svsb_volt_base)
{
	return (svsb_volt * svsb_volt_step) + svsb_volt_base;
}

static u32 svs_opp_volt_to_bank_volt(u32 opp_u_volt, u32 svsb_volt_step,
				     u32 svsb_volt_base)
{
	return (opp_u_volt - svsb_volt_base) / svsb_volt_step;
}

static int svs_sync_bank_volts_from_opp(struct svs_bank *svsb)
{
	struct dev_pm_opp *opp;
	u32 i, opp_u_volt;

	for (i = 0; i < svsb->opp_count; i++) {
		opp = dev_pm_opp_find_freq_exact(svsb->opp_dev,
						 svsb->opp_dfreq[i],
						 true);
		if (IS_ERR(opp)) {
			dev_err(svsb->dev, "cannot find freq = %u (%ld)\n",
				svsb->opp_dfreq[i], PTR_ERR(opp));
			return PTR_ERR(opp);
		}

		opp_u_volt = dev_pm_opp_get_voltage(opp);
		svsb->volt[i] = svs_opp_volt_to_bank_volt(opp_u_volt,
							  svsb->volt_step,
							  svsb->volt_base);
		dev_pm_opp_put(opp);
	}

	return 0;
}

static int svs_adjust_pm_opp_volts(struct svs_bank *svsb)
{
	int ret = -EPERM, tzone_temp = 0;
	u32 i, svsb_volt, opp_volt, temp_voffset = 0, opp_start, opp_stop;

	mutex_lock(&svsb->lock);

	/*
	 * 2-line bank updates its corresponding
	 * opp volts.
	 * 1-line bank updates all opp volts.
	 */
	if (svsb->type == SVSB_HIGH) {
		opp_start = 0;
		opp_stop = svsb->turn_pt;
	} else if (svsb->type == SVSB_LOW) {
		opp_start = svsb->turn_pt;
		opp_stop = svsb->opp_count;
	} else {
		opp_start = 0;
		opp_stop = svsb->opp_count;
	}

	/* Get thermal effect */
	if (svsb->phase == SVSB_PHASE_MON) {
		ret = thermal_zone_get_temp(svsb->tzd, &tzone_temp);
		if (ret || (svsb->temp > SVSB_TEMP_UPPER_BOUND &&
			    svsb->temp < SVSB_TEMP_LOWER_BOUND)) {
			dev_err(svsb->dev, "%s: %d (0x%x), run default volts\n",
				svsb->tzone_name, ret, svsb->temp);
			svsb->phase = SVSB_PHASE_ERROR;
		}

		if (tzone_temp >= svsb->tzone_htemp)
			temp_voffset += svsb->tzone_htemp_voffset;
		else if (tzone_temp <= svsb->tzone_ltemp)
			temp_voffset += svsb->tzone_ltemp_voffset;

		/* 2-line bank update all opp volts when running mon mode */
		if (svsb->type == SVSB_HIGH || svsb->type == SVSB_LOW) {
			opp_start = 0;
			opp_stop = svsb->opp_count;
		}
	}

	/* vmin <= svsb_volt (opp_volt) <= default opp voltage */
	for (i = opp_start; i < opp_stop; i++) {
		switch (svsb->phase) {
		case SVSB_PHASE_ERROR:
			opp_volt = svsb->opp_dvolt[i];
			break;
		case SVSB_PHASE_INIT01:
			/* do nothing */
			goto unlock_mutex;
		case SVSB_PHASE_INIT02:
			svsb_volt = max(svsb->volt[i], svsb->vmin);
			opp_volt = svs_bank_volt_to_opp_volt(svsb_volt,
							     svsb->volt_step,
							     svsb->volt_base);
			break;
		case SVSB_PHASE_MON:
			svsb_volt = max(svsb->volt[i] + temp_voffset, svsb->vmin);
			opp_volt = svs_bank_volt_to_opp_volt(svsb_volt,
							     svsb->volt_step,
							     svsb->volt_base);
			break;
		default:
			dev_err(svsb->dev, "unknown phase: %u\n", svsb->phase);
			ret = -EINVAL;
			goto unlock_mutex;
		}

		opp_volt = min(opp_volt, svsb->opp_dvolt[i]);
		ret = dev_pm_opp_adjust_voltage(svsb->opp_dev,
						svsb->opp_dfreq[i],
						opp_volt, opp_volt,
						svsb->opp_dvolt[i]);
		if (ret) {
			dev_err(svsb->dev, "set %uuV fail: %d\n",
				opp_volt, ret);
			goto unlock_mutex;
		}
	}

unlock_mutex:
	mutex_unlock(&svsb->lock);

	return ret;
}

static int svs_dump_debug_show(struct seq_file *m, void *p)
{
	struct svs_platform *svsp = (struct svs_platform *)m->private;
	struct svs_bank *svsb;
	unsigned long svs_reg_addr;
	u32 idx, i, j, bank_id;

	for (i = 0; i < svsp->efuse_max; i++)
		if (svsp->efuse && svsp->efuse[i])
			seq_printf(m, "M_HW_RES%d = 0x%08x\n",
				   i, svsp->efuse[i]);

	for (i = 0; i < svsp->tefuse_max; i++)
		if (svsp->tefuse)
			seq_printf(m, "THERMAL_EFUSE%d = 0x%08x\n",
				   i, svsp->tefuse[i]);

	for (bank_id = 0, idx = 0; idx < svsp->bank_max; idx++, bank_id++) {
		svsb = &svsp->banks[idx];

		for (i = SVSB_PHASE_INIT01; i <= SVSB_PHASE_MON; i++) {
			seq_printf(m, "Bank_number = %u\n", bank_id);

			if (i == SVSB_PHASE_INIT01 || i == SVSB_PHASE_INIT02)
				seq_printf(m, "mode = init%d\n", i);
			else if (i == SVSB_PHASE_MON)
				seq_puts(m, "mode = mon\n");
			else
				seq_puts(m, "mode = error\n");

			for (j = DESCHAR; j < SVS_REG_MAX; j++) {
				svs_reg_addr = (unsigned long)(svsp->base +
							       svsp->regs[j]);
				seq_printf(m, "0x%08lx = 0x%08x\n",
					   svs_reg_addr, svsb->reg_data[i][j]);
			}
		}
	}

	return 0;
}

debug_fops_ro(dump);

static int svs_enable_debug_show(struct seq_file *m, void *v)
{
	struct svs_bank *svsb = (struct svs_bank *)m->private;

	switch (svsb->phase) {
	case SVSB_PHASE_ERROR:
		seq_puts(m, "disabled\n");
		break;
	case SVSB_PHASE_INIT01:
		seq_puts(m, "init1\n");
		break;
	case SVSB_PHASE_INIT02:
		seq_puts(m, "init2\n");
		break;
	case SVSB_PHASE_MON:
		seq_puts(m, "mon mode\n");
		break;
	default:
		seq_puts(m, "unknown\n");
		break;
	}

	return 0;
}

static ssize_t svs_enable_debug_write(struct file *filp,
				      const char __user *buffer,
				      size_t count, loff_t *pos)
{
	struct svs_bank *svsb = file_inode(filp)->i_private;
	struct svs_platform *svsp = dev_get_drvdata(svsb->dev);
	unsigned long flags;
	int enabled, ret;
	char *buf = NULL;

	if (count >= PAGE_SIZE)
		return -EINVAL;

	buf = (char *)memdup_user_nul(buffer, count);
	if (IS_ERR(buf))
		return PTR_ERR(buf);

	ret = kstrtoint(buf, 10, &enabled);
	if (ret)
		return ret;

	if (!enabled) {
		spin_lock_irqsave(&svs_lock, flags);
		svsp->pbank = svsb;
		svsb->mode_support = SVSB_MODE_ALL_DISABLE;
		svs_switch_bank(svsp);
		svs_writel_relaxed(svsp, SVSB_EN_OFF, SVSEN);
		svs_writel_relaxed(svsp, SVSB_INTSTS_CLEAN, INTSTS);
		spin_unlock_irqrestore(&svs_lock, flags);

		svsb->phase = SVSB_PHASE_ERROR;
		svs_adjust_pm_opp_volts(svsb);
	}

	kfree(buf);

	return count;
}

debug_fops_rw(enable);

static int svs_status_debug_show(struct seq_file *m, void *v)
{
	struct svs_bank *svsb = (struct svs_bank *)m->private;
	struct dev_pm_opp *opp;
	int tzone_temp = 0, ret;
	u32 i;

	ret = thermal_zone_get_temp(svsb->tzd, &tzone_temp);
	if (ret)
		seq_printf(m, "%s: temperature ignore, turn_pt = %u\n",
			   svsb->name, svsb->turn_pt);
	else
		seq_printf(m, "%s: temperature = %d, turn_pt = %u\n",
			   svsb->name, tzone_temp, svsb->turn_pt);

	for (i = 0; i < svsb->opp_count; i++) {
		opp = dev_pm_opp_find_freq_exact(svsb->opp_dev,
						 svsb->opp_dfreq[i], true);
		if (IS_ERR(opp)) {
			seq_printf(m, "%s: cannot find freq = %u (%ld)\n",
				   svsb->name, svsb->opp_dfreq[i],
				   PTR_ERR(opp));
			return PTR_ERR(opp);
		}

		seq_printf(m, "opp_freq[%02u]: %u, opp_volt[%02u]: %lu, ",
			   i, svsb->opp_dfreq[i], i,
			   dev_pm_opp_get_voltage(opp));
		seq_printf(m, "svsb_volt[%02u]: 0x%x, freq_pct[%02u]: %u\n",
			   i, svsb->volt[i], i, svsb->freq_pct[i]);
		dev_pm_opp_put(opp);
	}

	return 0;
}

debug_fops_ro(status);

static int svs_create_debug_cmds(struct svs_platform *svsp)
{
	struct svs_bank *svsb;
	struct dentry *svs_dir, *svsb_dir, *file_entry;
	const char *d = "/sys/kernel/debug/svs";
	u32 i, idx;

	struct svs_dentry {
		const char *name;
		const struct file_operations *fops;
	};

	struct svs_dentry svs_entries[] = {
		svs_dentry_data(dump),
	};

	struct svs_dentry svsb_entries[] = {
		svs_dentry_data(enable),
		svs_dentry_data(status),
	};

	svs_dir = debugfs_create_dir("svs", NULL);
	if (IS_ERR(svs_dir)) {
		dev_err(svsp->dev, "cannot create %s: %ld\n",
			d, PTR_ERR(svs_dir));
		return PTR_ERR(svs_dir);
	}

	for (i = 0; i < ARRAY_SIZE(svs_entries); i++) {
		file_entry = debugfs_create_file(svs_entries[i].name, 0664,
						 svs_dir, svsp,
						 svs_entries[i].fops);
		if (IS_ERR(file_entry)) {
			dev_err(svsp->dev, "cannot create %s/%s: %ld\n",
				d, svs_entries[i].name, PTR_ERR(file_entry));
			return PTR_ERR(file_entry);
		}
	}

	for (idx = 0; idx < svsp->bank_max; idx++) {
		svsb = &svsp->banks[idx];

		if (svsb->mode_support == SVSB_MODE_ALL_DISABLE)
			continue;

		svsb_dir = debugfs_create_dir(svsb->name, svs_dir);
		if (IS_ERR(svsb_dir)) {
			dev_err(svsp->dev, "cannot create %s/%s: %ld\n",
				d,
				svsb->name, PTR_ERR(svsb_dir));
			return PTR_ERR(svsb_dir);
		}

		for (i = 0; i < ARRAY_SIZE(svsb_entries); i++) {
			file_entry = debugfs_create_file(svsb_entries[i].name,
							 0664, svsb_dir, svsb,
							 svsb_entries[i].fops);
			if (IS_ERR(file_entry)) {
				dev_err(svsp->dev, "no %s/%s/%s?: %ld\n",
					d, svsb->name, svsb_entries[i].name,
					PTR_ERR(file_entry));
				return PTR_ERR(file_entry);
			}
		}
	}

	return 0;
}

static u32 interpolate(u32 f0, u32 f1, u32 v0, u32 v1, u32 fx)
{
	u32 vx;

	if (v0 == v1 || f0 == f1)
		return v0;

	/* *100 to have decimal fraction factor */
	vx = (v0 * 100) - ((((v0 - v1) * 100) / (f0 - f1)) * (f0 - fx));

	return DIV_ROUND_UP(vx, 100);
}

static void svs_get_bank_volts_v3(struct svs_platform *svsp)
{
	struct svs_bank *svsb = svsp->pbank;
	u32 i, j, *vop, vop74, vop30, turn_pt = svsb->turn_pt;
	u32 b_sft, shift_byte = 0, opp_start = 0, opp_stop = 0;
	u32 middle_index = (svsb->opp_count / 2);

	if (svsb->phase == SVSB_PHASE_MON &&
	    svsb->volt_flags & SVSB_MON_VOLT_IGNORE)
		return;

	vop74 = svs_readl_relaxed(svsp, VOP74);
	vop30 = svs_readl_relaxed(svsp, VOP30);

	/* Target is to set svsb->volt[] by algorithm */
	if (turn_pt < middle_index) {
		if (svsb->type == SVSB_HIGH) {
			/* volt[0] ~ volt[turn_pt - 1] */
			for (i = 0; i < turn_pt; i++) {
				b_sft = BITS8 * (shift_byte % REG_BYTES);
				vop = (shift_byte < REG_BYTES) ? &vop30 :
								 &vop74;
				svsb->volt[i] = (*vop >> b_sft) & GENMASK(7, 0);
				shift_byte++;
			}
		} else if (svsb->type == SVSB_LOW) {
			/* volt[turn_pt] + volt[j] ~ volt[opp_count - 1] */
			j = svsb->opp_count - 7;
			svsb->volt[turn_pt] = vop30 & GENMASK(7, 0);
			shift_byte++;
			for (i = j; i < svsb->opp_count; i++) {
				b_sft = BITS8 * (shift_byte % REG_BYTES);
				vop = (shift_byte < REG_BYTES) ? &vop30 :
								 &vop74;
				svsb->volt[i] = (*vop >> b_sft) & GENMASK(7, 0);
				shift_byte++;
			}

			/* volt[turn_pt + 1] ~ volt[j - 1] by interpolate */
			for (i = turn_pt + 1; i < j; i++)
				svsb->volt[i] = interpolate(svsb->freq_pct[turn_pt],
							    svsb->freq_pct[j],
							    svsb->volt[turn_pt],
							    svsb->volt[j],
							    svsb->freq_pct[i]);
		}
	} else {
		if (svsb->type == SVSB_HIGH) {
			/* volt[0] + volt[j] ~ volt[turn_pt - 1] */
			j = turn_pt - 7;
			svsb->volt[0] = vop30 & GENMASK(7, 0);
			shift_byte++;
			for (i = j; i < turn_pt; i++) {
				b_sft = BITS8 * (shift_byte % REG_BYTES);
				vop = (shift_byte < REG_BYTES) ? &vop30 :
								 &vop74;
				svsb->volt[i] = (*vop >> b_sft) & GENMASK(7, 0);
				shift_byte++;
			}

			/* volt[1] ~ volt[j - 1] by interpolate */
			for (i = 1; i < j; i++)
				svsb->volt[i] = interpolate(svsb->freq_pct[0],
							    svsb->freq_pct[j],
							    svsb->volt[0],
							    svsb->volt[j],
							    svsb->freq_pct[i]);
		} else if (svsb->type == SVSB_LOW) {
			/* volt[turn_pt] ~ volt[opp_count - 1] */
			for (i = turn_pt; i < svsb->opp_count; i++) {
				b_sft = BITS8 * (shift_byte % REG_BYTES);
				vop = (shift_byte < REG_BYTES) ? &vop30 :
								 &vop74;
				svsb->volt[i] = (*vop >> b_sft) & GENMASK(7, 0);
				shift_byte++;
			}
		}
	}

	if (svsb->type == SVSB_HIGH) {
		opp_start = 0;
		opp_stop = svsb->turn_pt;
	} else if (svsb->type == SVSB_LOW) {
		opp_start = svsb->turn_pt;
		opp_stop = svsb->opp_count;
	}

	for (i = opp_start; i < opp_stop; i++)
		if (svsb->volt_flags & SVSB_REMOVE_DVTFIXED_VOLT)
			svsb->volt[i] -= svsb->dvt_fixed;
}

static void svs_set_bank_freq_pct_v3(struct svs_platform *svsp)
{
	struct svs_bank *svsb = svsp->pbank;
	u32 i, j, *freq_pct, freq_pct74 = 0, freq_pct30 = 0;
	u32 b_sft, shift_byte = 0, turn_pt;
	u32 middle_index = (svsb->opp_count / 2);

	for (i = 0; i < svsb->opp_count; i++) {
		if (svsb->opp_dfreq[i] <= svsb->turn_freq_base) {
			svsb->turn_pt = i;
			break;
		}
	}

	turn_pt = svsb->turn_pt;

	/* Target is to fill out freq_pct74 / freq_pct30 by algorithm */
	if (turn_pt < middle_index) {
		if (svsb->type == SVSB_HIGH) {
			/*
			 * If we don't handle this situation,
			 * SVSB_HIGH's FREQPCT74 / FREQPCT30 would keep "0"
			 * and this leads SVSB_LOW to work abnormally.
			 */
			if (turn_pt == 0)
				freq_pct30 = svsb->freq_pct[0];

			/* freq_pct[0] ~ freq_pct[turn_pt - 1] */
			for (i = 0; i < turn_pt; i++) {
				b_sft = BITS8 * (shift_byte % REG_BYTES);
				freq_pct = (shift_byte < REG_BYTES) ?
928 + &freq_pct30 : &freq_pct74; 929 + *freq_pct |= (svsb->freq_pct[i] << b_sft); 930 + shift_byte++; 931 + } 932 + } else if (svsb->type == SVSB_LOW) { 933 + /* 934 + * freq_pct[turn_pt] + 935 + * freq_pct[opp_count - 7] ~ freq_pct[opp_count -1] 936 + */ 937 + freq_pct30 = svsb->freq_pct[turn_pt]; 938 + shift_byte++; 939 + j = svsb->opp_count - 7; 940 + for (i = j; i < svsb->opp_count; i++) { 941 + b_sft = BITS8 * (shift_byte % REG_BYTES); 942 + freq_pct = (shift_byte < REG_BYTES) ? 943 + &freq_pct30 : &freq_pct74; 944 + *freq_pct |= (svsb->freq_pct[i] << b_sft); 945 + shift_byte++; 946 + } 947 + } 948 + } else { 949 + if (svsb->type == SVSB_HIGH) { 950 + /* 951 + * freq_pct[0] + 952 + * freq_pct[turn_pt - 7] ~ freq_pct[turn_pt - 1] 953 + */ 954 + freq_pct30 = svsb->freq_pct[0]; 955 + shift_byte++; 956 + j = turn_pt - 7; 957 + for (i = j; i < turn_pt; i++) { 958 + b_sft = BITS8 * (shift_byte % REG_BYTES); 959 + freq_pct = (shift_byte < REG_BYTES) ? 960 + &freq_pct30 : &freq_pct74; 961 + *freq_pct |= (svsb->freq_pct[i] << b_sft); 962 + shift_byte++; 963 + } 964 + } else if (svsb->type == SVSB_LOW) { 965 + /* freq_pct[turn_pt] ~ freq_pct[opp_count - 1] */ 966 + for (i = turn_pt; i < svsb->opp_count; i++) { 967 + b_sft = BITS8 * (shift_byte % REG_BYTES); 968 + freq_pct = (shift_byte < REG_BYTES) ? 
969 + &freq_pct30 : &freq_pct74; 970 + *freq_pct |= (svsb->freq_pct[i] << b_sft); 971 + shift_byte++; 972 + } 973 + } 974 + } 975 + 976 + svs_writel_relaxed(svsp, freq_pct74, FREQPCT74); 977 + svs_writel_relaxed(svsp, freq_pct30, FREQPCT30); 978 + } 979 + 980 + static void svs_get_bank_volts_v2(struct svs_platform *svsp) 981 + { 982 + struct svs_bank *svsb = svsp->pbank; 983 + u32 temp, i; 984 + 985 + temp = svs_readl_relaxed(svsp, VOP74); 986 + svsb->volt[14] = (temp >> 24) & GENMASK(7, 0); 987 + svsb->volt[12] = (temp >> 16) & GENMASK(7, 0); 988 + svsb->volt[10] = (temp >> 8) & GENMASK(7, 0); 989 + svsb->volt[8] = (temp & GENMASK(7, 0)); 990 + 991 + temp = svs_readl_relaxed(svsp, VOP30); 992 + svsb->volt[6] = (temp >> 24) & GENMASK(7, 0); 993 + svsb->volt[4] = (temp >> 16) & GENMASK(7, 0); 994 + svsb->volt[2] = (temp >> 8) & GENMASK(7, 0); 995 + svsb->volt[0] = (temp & GENMASK(7, 0)); 996 + 997 + for (i = 0; i <= 12; i += 2) 998 + svsb->volt[i + 1] = interpolate(svsb->freq_pct[i], 999 + svsb->freq_pct[i + 2], 1000 + svsb->volt[i], 1001 + svsb->volt[i + 2], 1002 + svsb->freq_pct[i + 1]); 1003 + 1004 + svsb->volt[15] = interpolate(svsb->freq_pct[12], 1005 + svsb->freq_pct[14], 1006 + svsb->volt[12], 1007 + svsb->volt[14], 1008 + svsb->freq_pct[15]); 1009 + 1010 + for (i = 0; i < svsb->opp_count; i++) 1011 + svsb->volt[i] += svsb->volt_od; 1012 + } 1013 + 1014 + static void svs_set_bank_freq_pct_v2(struct svs_platform *svsp) 1015 + { 1016 + struct svs_bank *svsb = svsp->pbank; 1017 + 1018 + svs_writel_relaxed(svsp, 1019 + (svsb->freq_pct[14] << 24) | 1020 + (svsb->freq_pct[12] << 16) | 1021 + (svsb->freq_pct[10] << 8) | 1022 + svsb->freq_pct[8], 1023 + FREQPCT74); 1024 + 1025 + svs_writel_relaxed(svsp, 1026 + (svsb->freq_pct[6] << 24) | 1027 + (svsb->freq_pct[4] << 16) | 1028 + (svsb->freq_pct[2] << 8) | 1029 + svsb->freq_pct[0], 1030 + FREQPCT30); 1031 + } 1032 + 1033 + static void svs_set_bank_phase(struct svs_platform *svsp, 1034 + enum svsb_phase target_phase) 
+{
+        struct svs_bank *svsb = svsp->pbank;
+        u32 des_char, temp_char, det_char, limit_vals, init2vals, ts_calcs;
+
+        svs_switch_bank(svsp);
+
+        des_char = (svsb->bdes << 8) | svsb->mdes;
+        svs_writel_relaxed(svsp, des_char, DESCHAR);
+
+        temp_char = (svsb->vco << 16) | (svsb->mtdes << 8) | svsb->dvt_fixed;
+        svs_writel_relaxed(svsp, temp_char, TEMPCHAR);
+
+        det_char = (svsb->dcbdet << 8) | svsb->dcmdet;
+        svs_writel_relaxed(svsp, det_char, DETCHAR);
+
+        svs_writel_relaxed(svsp, svsb->dc_config, DCCONFIG);
+        svs_writel_relaxed(svsp, svsb->age_config, AGECONFIG);
+        svs_writel_relaxed(svsp, SVSB_RUNCONFIG_DEFAULT, RUNCONFIG);
+
+        svsb->set_freq_pct(svsp);
+
+        limit_vals = (svsb->vmax << 24) | (svsb->vmin << 16) |
+                     (SVSB_DTHI << 8) | SVSB_DTLO;
+        svs_writel_relaxed(svsp, limit_vals, LIMITVALS);
+
+        svs_writel_relaxed(svsp, SVSB_DET_WINDOW, DETWINDOW);
+        svs_writel_relaxed(svsp, SVSB_DET_MAX, CONFIG);
+        svs_writel_relaxed(svsp, svsb->chk_shift, CHKSHIFT);
+        svs_writel_relaxed(svsp, svsb->ctl0, CTL0);
+        svs_writel_relaxed(svsp, SVSB_INTSTS_CLEAN, INTSTS);
+
+        switch (target_phase) {
+        case SVSB_PHASE_INIT01:
+                svs_writel_relaxed(svsp, svsb->vboot, VBOOT);
+                svs_writel_relaxed(svsp, SVSB_INTEN_INIT0x, INTEN);
+                svs_writel_relaxed(svsp, SVSB_EN_INIT01, SVSEN);
+                break;
+        case SVSB_PHASE_INIT02:
+                svs_writel_relaxed(svsp, SVSB_INTEN_INIT0x, INTEN);
+                init2vals = (svsb->age_voffset_in << 16) | svsb->dc_voffset_in;
+                svs_writel_relaxed(svsp, init2vals, INIT2VALS);
+                svs_writel_relaxed(svsp, SVSB_EN_INIT02, SVSEN);
+                break;
+        case SVSB_PHASE_MON:
+                ts_calcs = (svsb->bts << 12) | svsb->mts;
+                svs_writel_relaxed(svsp, ts_calcs, TSCALCS);
+                svs_writel_relaxed(svsp, SVSB_INTEN_MONVOPEN, INTEN);
+                svs_writel_relaxed(svsp, SVSB_EN_MON, SVSEN);
+                break;
+        default:
+                dev_err(svsb->dev, "requested unknown target phase: %u\n",
+                        target_phase);
+                break;
+        }
+}
+
+static inline void svs_save_bank_register_data(struct svs_platform *svsp,
+                                               enum svsb_phase phase)
+{
+        struct svs_bank *svsb = svsp->pbank;
+        enum svs_reg_index rg_i;
+
+        for (rg_i = DESCHAR; rg_i < SVS_REG_MAX; rg_i++)
+                svsb->reg_data[phase][rg_i] = svs_readl_relaxed(svsp, rg_i);
+}
+
+static inline void svs_error_isr_handler(struct svs_platform *svsp)
+{
+        struct svs_bank *svsb = svsp->pbank;
+
+        dev_err(svsb->dev, "%s: CORESEL = 0x%08x\n",
+                __func__, svs_readl_relaxed(svsp, CORESEL));
+        dev_err(svsb->dev, "SVSEN = 0x%08x, INTSTS = 0x%08x\n",
+                svs_readl_relaxed(svsp, SVSEN),
+                svs_readl_relaxed(svsp, INTSTS));
+        dev_err(svsb->dev, "SMSTATE0 = 0x%08x, SMSTATE1 = 0x%08x\n",
+                svs_readl_relaxed(svsp, SMSTATE0),
+                svs_readl_relaxed(svsp, SMSTATE1));
+        dev_err(svsb->dev, "TEMP = 0x%08x\n", svs_readl_relaxed(svsp, TEMP));
+
+        svs_save_bank_register_data(svsp, SVSB_PHASE_ERROR);
+
+        svsb->phase = SVSB_PHASE_ERROR;
+        svs_writel_relaxed(svsp, SVSB_EN_OFF, SVSEN);
+        svs_writel_relaxed(svsp, SVSB_INTSTS_CLEAN, INTSTS);
+}
+
+static inline void svs_init01_isr_handler(struct svs_platform *svsp)
+{
+        struct svs_bank *svsb = svsp->pbank;
+
+        dev_info(svsb->dev, "%s: VDN74~30:0x%08x~0x%08x, DC:0x%08x\n",
+                 __func__, svs_readl_relaxed(svsp, VDESIGN74),
+                 svs_readl_relaxed(svsp, VDESIGN30),
+                 svs_readl_relaxed(svsp, DCVALUES));
+
+        svs_save_bank_register_data(svsp, SVSB_PHASE_INIT01);
+
+        svsb->phase = SVSB_PHASE_INIT01;
+        svsb->dc_voffset_in = ~(svs_readl_relaxed(svsp, DCVALUES) &
+                                GENMASK(15, 0)) + 1;
+        if (svsb->volt_flags & SVSB_INIT01_VOLT_IGNORE ||
+            (svsb->dc_voffset_in & SVSB_DC_SIGNED_BIT &&
+             svsb->volt_flags & SVSB_INIT01_VOLT_INC_ONLY))
+                svsb->dc_voffset_in = 0;
+
+        svsb->age_voffset_in = svs_readl_relaxed(svsp, AGEVALUES) &
+                               GENMASK(15, 0);
+
+        svs_writel_relaxed(svsp, SVSB_EN_OFF, SVSEN);
+        svs_writel_relaxed(svsp, SVSB_INTSTS_COMPLETE, INTSTS);
+        svsb->core_sel &= ~SVSB_DET_CLK_EN;
+}
+
+static inline void svs_init02_isr_handler(struct svs_platform *svsp)
+{
+        struct svs_bank *svsb = svsp->pbank;
+
+        dev_info(svsb->dev, "%s: VOP74~30:0x%08x~0x%08x, DC:0x%08x\n",
+                 __func__, svs_readl_relaxed(svsp, VOP74),
+                 svs_readl_relaxed(svsp, VOP30),
+                 svs_readl_relaxed(svsp, DCVALUES));
+
+        svs_save_bank_register_data(svsp, SVSB_PHASE_INIT02);
+
+        svsb->phase = SVSB_PHASE_INIT02;
+        svsb->get_volts(svsp);
+
+        svs_writel_relaxed(svsp, SVSB_EN_OFF, SVSEN);
+        svs_writel_relaxed(svsp, SVSB_INTSTS_COMPLETE, INTSTS);
+}
+
+static inline void svs_mon_mode_isr_handler(struct svs_platform *svsp)
+{
+        struct svs_bank *svsb = svsp->pbank;
+
+        svs_save_bank_register_data(svsp, SVSB_PHASE_MON);
+
+        svsb->phase = SVSB_PHASE_MON;
+        svsb->get_volts(svsp);
+
+        svsb->temp = svs_readl_relaxed(svsp, TEMP) & GENMASK(7, 0);
+        svs_writel_relaxed(svsp, SVSB_INTSTS_MONVOP, INTSTS);
+}
+
+static irqreturn_t svs_isr(int irq, void *data)
+{
+        struct svs_platform *svsp = data;
+        struct svs_bank *svsb = NULL;
+        unsigned long flags;
+        u32 idx, int_sts, svs_en;
+
+        for (idx = 0; idx < svsp->bank_max; idx++) {
+                svsb = &svsp->banks[idx];
+                WARN(!svsb, "%s: svsb(%s) is null", __func__, svsb->name);
+
+                spin_lock_irqsave(&svs_lock, flags);
+                svsp->pbank = svsb;
+
+                /* Find out which svs bank fires interrupt */
+                if (svsb->int_st & svs_readl_relaxed(svsp, INTST)) {
+                        spin_unlock_irqrestore(&svs_lock, flags);
+                        continue;
+                }
+
+                svs_switch_bank(svsp);
+                int_sts = svs_readl_relaxed(svsp, INTSTS);
+                svs_en = svs_readl_relaxed(svsp, SVSEN);
+
+                if (int_sts == SVSB_INTSTS_COMPLETE &&
+                    svs_en == SVSB_EN_INIT01)
+                        svs_init01_isr_handler(svsp);
+                else if (int_sts == SVSB_INTSTS_COMPLETE &&
+                         svs_en == SVSB_EN_INIT02)
+                        svs_init02_isr_handler(svsp);
+                else if (int_sts & SVSB_INTSTS_MONVOP)
+                        svs_mon_mode_isr_handler(svsp);
+                else
+                        svs_error_isr_handler(svsp);
+
+                spin_unlock_irqrestore(&svs_lock, flags);
+                break;
+        }
+
+        svs_adjust_pm_opp_volts(svsb);
+
+        if (svsb->phase == SVSB_PHASE_INIT01 ||
+            svsb->phase == SVSB_PHASE_INIT02)
+                complete(&svsb->init_completion);
+
+        return IRQ_HANDLED;
+}
+
+static int svs_init01(struct svs_platform *svsp)
+{
+        struct svs_bank *svsb;
+        unsigned long flags, time_left;
+        bool search_done;
+        int ret = 0, r;
+        u32 opp_freq, opp_vboot, buck_volt, idx, i;
+
+        /* Keep CPUs' core power on for svs_init01 initialization */
+        cpuidle_pause_and_lock();
+
+        /* Svs bank init01 preparation - power enable */
+        for (idx = 0; idx < svsp->bank_max; idx++) {
+                svsb = &svsp->banks[idx];
+
+                if (!(svsb->mode_support & SVSB_MODE_INIT01))
+                        continue;
+
+                ret = regulator_enable(svsb->buck);
+                if (ret) {
+                        dev_err(svsb->dev, "%s enable fail: %d\n",
+                                svsb->buck_name, ret);
+                        goto svs_init01_resume_cpuidle;
+                }
+
+                /* Some buck doesn't support mode change. Show fail msg only */
+                ret = regulator_set_mode(svsb->buck, REGULATOR_MODE_FAST);
+                if (ret)
+                        dev_notice(svsb->dev, "set fast mode fail: %d\n", ret);
+
+                if (svsb->volt_flags & SVSB_INIT01_PD_REQ) {
+                        if (!pm_runtime_enabled(svsb->opp_dev)) {
+                                pm_runtime_enable(svsb->opp_dev);
+                                svsb->pm_runtime_enabled_count++;
+                        }
+
+                        ret = pm_runtime_get_sync(svsb->opp_dev);
+                        if (ret < 0) {
+                                dev_err(svsb->dev, "mtcmos on fail: %d\n", ret);
+                                goto svs_init01_resume_cpuidle;
+                        }
+                }
+        }
+
+        /*
+         * Svs bank init01 preparation - vboot voltage adjustment
+         * Sometimes two svs banks use the same buck. Therefore,
+         * we have to set each svs bank to target voltage(vboot) first.
+         */
+        for (idx = 0; idx < svsp->bank_max; idx++) {
+                svsb = &svsp->banks[idx];
+
+                if (!(svsb->mode_support & SVSB_MODE_INIT01))
+                        continue;
+
+                /*
+                 * Find the fastest freq that can be run at vboot and
+                 * fix to that freq until svs_init01 is done.
+                 */
+                search_done = false;
+                opp_vboot = svs_bank_volt_to_opp_volt(svsb->vboot,
+                                                      svsb->volt_step,
+                                                      svsb->volt_base);
+
+                for (i = 0; i < svsb->opp_count; i++) {
+                        opp_freq = svsb->opp_dfreq[i];
+                        if (!search_done && svsb->opp_dvolt[i] <= opp_vboot) {
+                                ret = dev_pm_opp_adjust_voltage(svsb->opp_dev,
+                                                                opp_freq,
+                                                                opp_vboot,
+                                                                opp_vboot,
+                                                                opp_vboot);
+                                if (ret) {
+                                        dev_err(svsb->dev,
+                                                "set opp %uuV vboot fail: %d\n",
+                                                opp_vboot, ret);
+                                        goto svs_init01_finish;
+                                }
+
+                                search_done = true;
+                        } else {
+                                ret = dev_pm_opp_disable(svsb->opp_dev,
+                                                         svsb->opp_dfreq[i]);
+                                if (ret) {
+                                        dev_err(svsb->dev,
+                                                "opp %uHz disable fail: %d\n",
+                                                svsb->opp_dfreq[i], ret);
+                                        goto svs_init01_finish;
+                                }
+                        }
+                }
+        }
+
+        /* Svs bank init01 begins */
+        for (idx = 0; idx < svsp->bank_max; idx++) {
+                svsb = &svsp->banks[idx];
+
+                if (!(svsb->mode_support & SVSB_MODE_INIT01))
+                        continue;
+
+                opp_vboot = svs_bank_volt_to_opp_volt(svsb->vboot,
+                                                      svsb->volt_step,
+                                                      svsb->volt_base);
+
+                buck_volt = regulator_get_voltage(svsb->buck);
+                if (buck_volt != opp_vboot) {
+                        dev_err(svsb->dev,
+                                "buck voltage: %uuV, expected vboot: %uuV\n",
+                                buck_volt, opp_vboot);
+                        ret = -EPERM;
+                        goto svs_init01_finish;
+                }
+
+                spin_lock_irqsave(&svs_lock, flags);
+                svsp->pbank = svsb;
+                svs_set_bank_phase(svsp, SVSB_PHASE_INIT01);
+                spin_unlock_irqrestore(&svs_lock, flags);
+
+                time_left = wait_for_completion_timeout(&svsb->init_completion,
+                                                        msecs_to_jiffies(5000));
+                if (!time_left) {
+                        dev_err(svsb->dev, "init01 completion timeout\n");
+                        ret = -EBUSY;
+                        goto svs_init01_finish;
+                }
+        }
+
+svs_init01_finish:
+        for (idx = 0; idx < svsp->bank_max; idx++) {
+                svsb = &svsp->banks[idx];
+
+                if (!(svsb->mode_support & SVSB_MODE_INIT01))
+                        continue;
+
+                for (i = 0; i < svsb->opp_count; i++) {
+                        r = dev_pm_opp_enable(svsb->opp_dev,
+                                              svsb->opp_dfreq[i]);
+                        if (r)
+                                dev_err(svsb->dev, "opp %uHz enable fail: %d\n",
+                                        svsb->opp_dfreq[i], r);
+                }
+
+                if (svsb->volt_flags & SVSB_INIT01_PD_REQ) {
+                        r = pm_runtime_put_sync(svsb->opp_dev);
+                        if (r)
+                                dev_err(svsb->dev, "mtcmos off fail: %d\n", r);
+
+                        if (svsb->pm_runtime_enabled_count > 0) {
+                                pm_runtime_disable(svsb->opp_dev);
+                                svsb->pm_runtime_enabled_count--;
+                        }
+                }
+
+                r = regulator_set_mode(svsb->buck, REGULATOR_MODE_NORMAL);
+                if (r)
+                        dev_notice(svsb->dev, "set normal mode fail: %d\n", r);
+
+                r = regulator_disable(svsb->buck);
+                if (r)
+                        dev_err(svsb->dev, "%s disable fail: %d\n",
+                                svsb->buck_name, r);
+        }
+
+svs_init01_resume_cpuidle:
+        cpuidle_resume_and_unlock();
+
+        return ret;
+}
+
+static int svs_init02(struct svs_platform *svsp)
+{
+        struct svs_bank *svsb;
+        unsigned long flags, time_left;
+        u32 idx;
+
+        for (idx = 0; idx < svsp->bank_max; idx++) {
+                svsb = &svsp->banks[idx];
+
+                if (!(svsb->mode_support & SVSB_MODE_INIT02))
+                        continue;
+
+                reinit_completion(&svsb->init_completion);
+                spin_lock_irqsave(&svs_lock, flags);
+                svsp->pbank = svsb;
+                svs_set_bank_phase(svsp, SVSB_PHASE_INIT02);
+                spin_unlock_irqrestore(&svs_lock, flags);
+
+                time_left = wait_for_completion_timeout(&svsb->init_completion,
+                                                        msecs_to_jiffies(5000));
+                if (!time_left) {
+                        dev_err(svsb->dev, "init02 completion timeout\n");
+                        return -EBUSY;
+                }
+        }
+
+        /*
+         * 2-line high/low bank update its corresponding opp voltages only.
+         * Therefore, we sync voltages from opp for high/low bank voltages
+         * consistency.
+         */
+        for (idx = 0; idx < svsp->bank_max; idx++) {
+                svsb = &svsp->banks[idx];
+
+                if (!(svsb->mode_support & SVSB_MODE_INIT02))
+                        continue;
+
+                if (svsb->type == SVSB_HIGH || svsb->type == SVSB_LOW) {
+                        if (svs_sync_bank_volts_from_opp(svsb)) {
+                                dev_err(svsb->dev, "sync volt fail\n");
+                                return -EPERM;
+                        }
+                }
+        }
+
+        return 0;
+}
+
+static void svs_mon_mode(struct svs_platform *svsp)
+{
+        struct svs_bank *svsb;
+        unsigned long flags;
+        u32 idx;
+
+        for (idx = 0; idx < svsp->bank_max; idx++) {
+                svsb = &svsp->banks[idx];
+
+                if (!(svsb->mode_support & SVSB_MODE_MON))
+                        continue;
+
+                spin_lock_irqsave(&svs_lock, flags);
+                svsp->pbank = svsb;
+                svs_set_bank_phase(svsp, SVSB_PHASE_MON);
+                spin_unlock_irqrestore(&svs_lock, flags);
+        }
+}
+
+static int svs_start(struct svs_platform *svsp)
+{
+        int ret;
+
+        ret = svs_init01(svsp);
+        if (ret)
+                return ret;
+
+        ret = svs_init02(svsp);
+        if (ret)
+                return ret;
+
+        svs_mon_mode(svsp);
+
+        return 0;
+}
+
+static int svs_suspend(struct device *dev)
+{
+        struct svs_platform *svsp = dev_get_drvdata(dev);
+        struct svs_bank *svsb;
+        unsigned long flags;
+        int ret;
+        u32 idx;
+
+        for (idx = 0; idx < svsp->bank_max; idx++) {
+                svsb = &svsp->banks[idx];
+
+                /* This might wait for svs_isr() process */
+                spin_lock_irqsave(&svs_lock, flags);
+                svsp->pbank = svsb;
+                svs_switch_bank(svsp);
+                svs_writel_relaxed(svsp, SVSB_EN_OFF, SVSEN);
+                svs_writel_relaxed(svsp, SVSB_INTSTS_CLEAN, INTSTS);
+                spin_unlock_irqrestore(&svs_lock, flags);
+
+                svsb->phase = SVSB_PHASE_ERROR;
+                svs_adjust_pm_opp_volts(svsb);
+        }
+
+        ret = reset_control_assert(svsp->rst);
+        if (ret) {
+                dev_err(svsp->dev, "cannot assert reset %d\n", ret);
+                return ret;
+        }
+
+        clk_disable_unprepare(svsp->main_clk);
+
+        return 0;
+}
+
+static int svs_resume(struct device *dev)
+{
+        struct svs_platform *svsp = dev_get_drvdata(dev);
+        int ret;
+
+        ret = clk_prepare_enable(svsp->main_clk);
+        if (ret) {
+                dev_err(svsp->dev, "cannot enable main_clk, disable svs\n");
+                return ret;
+        }
+
+        ret = reset_control_deassert(svsp->rst);
+        if (ret) {
+                dev_err(svsp->dev, "cannot deassert reset %d\n", ret);
+                goto out_of_resume;
+        }
+
+        ret = svs_init02(svsp);
+        if (ret)
+                goto out_of_resume;
+
+        svs_mon_mode(svsp);
+
+        return 0;
+
+out_of_resume:
+        clk_disable_unprepare(svsp->main_clk);
+        return ret;
+}
+
+static int svs_bank_resource_setup(struct svs_platform *svsp)
+{
+        struct svs_bank *svsb;
+        struct dev_pm_opp *opp;
+        unsigned long freq;
+        int count, ret;
+        u32 idx, i;
+
+        dev_set_drvdata(svsp->dev, svsp);
+
+        for (idx = 0; idx < svsp->bank_max; idx++) {
+                svsb = &svsp->banks[idx];
+
+                switch (svsb->sw_id) {
+                case SVSB_CPU_LITTLE:
+                        svsb->name = "SVSB_CPU_LITTLE";
+                        break;
+                case SVSB_CPU_BIG:
+                        svsb->name = "SVSB_CPU_BIG";
+                        break;
+                case SVSB_CCI:
+                        svsb->name = "SVSB_CCI";
+                        break;
+                case SVSB_GPU:
+                        if (svsb->type == SVSB_HIGH)
+                                svsb->name = "SVSB_GPU_HIGH";
+                        else if (svsb->type == SVSB_LOW)
+                                svsb->name = "SVSB_GPU_LOW";
+                        else
+                                svsb->name = "SVSB_GPU";
+                        break;
+                default:
+                        dev_err(svsb->dev, "unknown sw_id: %u\n", svsb->sw_id);
+                        return -EINVAL;
+                }
+
+                svsb->dev = devm_kzalloc(svsp->dev, sizeof(*svsb->dev),
+                                         GFP_KERNEL);
+                if (!svsb->dev)
+                        return -ENOMEM;
+
+                ret = dev_set_name(svsb->dev, "%s", svsb->name);
+                if (ret)
+                        return ret;
+
+                dev_set_drvdata(svsb->dev, svsp);
+
+                ret = dev_pm_opp_of_add_table(svsb->opp_dev);
+                if (ret) {
+                        dev_err(svsb->dev, "add opp table fail: %d\n", ret);
+                        return ret;
+                }
+
+                mutex_init(&svsb->lock);
+                init_completion(&svsb->init_completion);
+
+                if (svsb->mode_support & SVSB_MODE_INIT01) {
+                        svsb->buck = devm_regulator_get_optional(svsb->opp_dev,
+                                                                 svsb->buck_name);
+                        if (IS_ERR(svsb->buck)) {
+                                dev_err(svsb->dev, "cannot get \"%s-supply\"\n",
+                                        svsb->buck_name);
+                                return PTR_ERR(svsb->buck);
+                        }
+                }
+
+                if (svsb->mode_support & SVSB_MODE_MON) {
+                        svsb->tzd = thermal_zone_get_zone_by_name(svsb->tzone_name);
+                        if (IS_ERR(svsb->tzd)) {
+                                dev_err(svsb->dev, "cannot get \"%s\" thermal zone\n",
+                                        svsb->tzone_name);
+                                return PTR_ERR(svsb->tzd);
+                        }
+                }
+
+                count = dev_pm_opp_get_opp_count(svsb->opp_dev);
+                if (svsb->opp_count != count) {
+                        dev_err(svsb->dev,
+                                "opp_count not \"%u\" but get \"%d\"?\n",
+                                svsb->opp_count, count);
+                        return count;
+                }
+
+                for (i = 0, freq = U32_MAX; i < svsb->opp_count; i++, freq--) {
+                        opp = dev_pm_opp_find_freq_floor(svsb->opp_dev, &freq);
+                        if (IS_ERR(opp)) {
+                                dev_err(svsb->dev, "cannot find freq = %ld\n",
+                                        PTR_ERR(opp));
+                                return PTR_ERR(opp);
+                        }
+
+                        svsb->opp_dfreq[i] = freq;
+                        svsb->opp_dvolt[i] = dev_pm_opp_get_voltage(opp);
+                        svsb->freq_pct[i] = percent(svsb->opp_dfreq[i],
+                                                    svsb->freq_base);
+                        dev_pm_opp_put(opp);
+                }
+        }
+
+        return 0;
+}
+
+static bool svs_mt8192_efuse_parsing(struct svs_platform *svsp)
+{
+        struct svs_bank *svsb;
+        struct nvmem_cell *cell;
+        u32 idx, i, vmin, golden_temp;
+
+        for (i = 0; i < svsp->efuse_max; i++)
+                if (svsp->efuse[i])
+                        dev_info(svsp->dev, "M_HW_RES%d: 0x%08x\n",
+                                 i, svsp->efuse[i]);
+
+        if (!svsp->efuse[9]) {
+                dev_notice(svsp->dev, "svs_efuse[9] = 0x0?\n");
+                return false;
+        }
+
+        /* Svs efuse parsing */
+        vmin = (svsp->efuse[19] >> 4) & GENMASK(1, 0);
+
+        for (idx = 0; idx < svsp->bank_max; idx++) {
+                svsb = &svsp->banks[idx];
+
+                if (vmin == 0x1)
+                        svsb->vmin = 0x1e;
+
+                if (svsb->type == SVSB_LOW) {
+                        svsb->mtdes = svsp->efuse[10] & GENMASK(7, 0);
+                        svsb->bdes = (svsp->efuse[10] >> 16) & GENMASK(7, 0);
+                        svsb->mdes = (svsp->efuse[10] >> 24) & GENMASK(7, 0);
+                        svsb->dcbdet = (svsp->efuse[17]) & GENMASK(7, 0);
+                        svsb->dcmdet = (svsp->efuse[17] >> 8) & GENMASK(7, 0);
+                } else if (svsb->type == SVSB_HIGH) {
+                        svsb->mtdes = svsp->efuse[9] & GENMASK(7, 0);
+                        svsb->bdes = (svsp->efuse[9] >> 16) & GENMASK(7, 0);
+                        svsb->mdes = (svsp->efuse[9] >> 24) & GENMASK(7, 0);
+                        svsb->dcbdet = (svsp->efuse[17] >> 16) & GENMASK(7, 0);
+                        svsb->dcmdet = (svsp->efuse[17] >> 24) & GENMASK(7, 0);
+                }
+
+                svsb->vmax += svsb->dvt_fixed;
+        }
+
+        /* Thermal efuse parsing */
+        cell = nvmem_cell_get(svsp->dev, "t-calibration-data");
+        if (IS_ERR_OR_NULL(cell)) {
+                dev_err(svsp->dev, "no \"t-calibration-data\"? %ld\n",
+                        PTR_ERR(cell));
+                return false;
+        }
+
+        svsp->tefuse = nvmem_cell_read(cell, &svsp->tefuse_max);
+        if (IS_ERR(svsp->tefuse)) {
+                dev_err(svsp->dev, "cannot read thermal efuse: %ld\n",
+                        PTR_ERR(svsp->tefuse));
+                nvmem_cell_put(cell);
+                return false;
+        }
+
+        svsp->tefuse_max /= sizeof(u32);
+        nvmem_cell_put(cell);
+
+        for (i = 0; i < svsp->tefuse_max; i++)
+                if (svsp->tefuse[i] != 0)
+                        break;
+
+        if (i == svsp->tefuse_max)
+                golden_temp = 50; /* All thermal efuse data are 0 */
+        else
+                golden_temp = (svsp->tefuse[0] >> 24) & GENMASK(7, 0);
+
+        for (idx = 0; idx < svsp->bank_max; idx++) {
+                svsb = &svsp->banks[idx];
+                svsb->mts = 500;
+                svsb->bts = (((500 * golden_temp + 250460) / 1000) - 25) * 4;
+        }
+
+        return true;
+}
+
+static bool svs_mt8183_efuse_parsing(struct svs_platform *svsp)
+{
+        struct svs_bank *svsb;
+        struct nvmem_cell *cell;
+        int format[6], x_roomt[6], o_vtsmcu[5], o_vtsabb, tb_roomt = 0;
+        int adc_ge_t, adc_oe_t, ge, oe, gain, degc_cali, adc_cali_en_t;
+        int o_slope, o_slope_sign, ts_id;
+        u32 idx, i, ft_pgm, mts, temp0, temp1, temp2;
+
+        for (i = 0; i < svsp->efuse_max; i++)
+                if (svsp->efuse[i])
+                        dev_info(svsp->dev, "M_HW_RES%d: 0x%08x\n",
+                                 i, svsp->efuse[i]);
+
+        if (!svsp->efuse[2]) {
+                dev_notice(svsp->dev, "svs_efuse[2] = 0x0?\n");
+                return false;
+        }
+
+        /* Svs efuse parsing */
+        ft_pgm = (svsp->efuse[0] >> 4) & GENMASK(3, 0);
+
+        for (idx = 0; idx < svsp->bank_max; idx++) {
+                svsb = &svsp->banks[idx];
+
+                if (ft_pgm <= 1)
+                        svsb->volt_flags |= SVSB_INIT01_VOLT_IGNORE;
+
+                switch (svsb->sw_id) {
+                case SVSB_CPU_LITTLE:
+                        svsb->bdes = svsp->efuse[16] & GENMASK(7, 0);
+                        svsb->mdes = (svsp->efuse[16] >> 8) & GENMASK(7, 0);
+                        svsb->dcbdet = (svsp->efuse[16] >> 16) & GENMASK(7, 0);
+                        svsb->dcmdet = (svsp->efuse[16] >> 24) & GENMASK(7, 0);
+                        svsb->mtdes = (svsp->efuse[17] >> 16) & GENMASK(7, 0);
+
+                        if (ft_pgm <= 3)
+                                svsb->volt_od += 10;
+                        else
+                                svsb->volt_od += 2;
+                        break;
+                case SVSB_CPU_BIG:
+                        svsb->bdes = svsp->efuse[18] & GENMASK(7, 0);
+                        svsb->mdes = (svsp->efuse[18] >> 8) & GENMASK(7, 0);
+                        svsb->dcbdet = (svsp->efuse[18] >> 16) & GENMASK(7, 0);
+                        svsb->dcmdet = (svsp->efuse[18] >> 24) & GENMASK(7, 0);
+                        svsb->mtdes = svsp->efuse[17] & GENMASK(7, 0);
+
+                        if (ft_pgm <= 3)
+                                svsb->volt_od += 15;
+                        else
+                                svsb->volt_od += 12;
+                        break;
+                case SVSB_CCI:
+                        svsb->bdes = svsp->efuse[4] & GENMASK(7, 0);
+                        svsb->mdes = (svsp->efuse[4] >> 8) & GENMASK(7, 0);
+                        svsb->dcbdet = (svsp->efuse[4] >> 16) & GENMASK(7, 0);
+                        svsb->dcmdet = (svsp->efuse[4] >> 24) & GENMASK(7, 0);
+                        svsb->mtdes = (svsp->efuse[5] >> 16) & GENMASK(7, 0);
+
+                        if (ft_pgm <= 3)
+                                svsb->volt_od += 10;
+                        else
+                                svsb->volt_od += 2;
+                        break;
+                case SVSB_GPU:
+                        svsb->bdes = svsp->efuse[6] & GENMASK(7, 0);
+                        svsb->mdes = (svsp->efuse[6] >> 8) & GENMASK(7, 0);
+                        svsb->dcbdet = (svsp->efuse[6] >> 16) & GENMASK(7, 0);
+                        svsb->dcmdet = (svsp->efuse[6] >> 24) & GENMASK(7, 0);
+                        svsb->mtdes = svsp->efuse[5] & GENMASK(7, 0);
+
+                        if (ft_pgm >= 2) {
+                                svsb->freq_base = 800000000; /* 800MHz */
+                                svsb->dvt_fixed = 2;
+                        }
+                        break;
+                default:
+                        dev_err(svsb->dev, "unknown sw_id: %u\n", svsb->sw_id);
+                        return false;
+                }
+        }
+
+        /* Get thermal efuse by nvmem */
+        cell = nvmem_cell_get(svsp->dev, "t-calibration-data");
+        if (IS_ERR(cell)) {
+                dev_err(svsp->dev, "no \"t-calibration-data\"? %ld\n",
+                        PTR_ERR(cell));
+                goto remove_mt8183_svsb_mon_mode;
+        }
+
+        svsp->tefuse = nvmem_cell_read(cell, &svsp->tefuse_max);
+        if (IS_ERR(svsp->tefuse)) {
+                dev_err(svsp->dev, "cannot read thermal efuse: %ld\n",
+                        PTR_ERR(svsp->tefuse));
+                nvmem_cell_put(cell);
+                goto remove_mt8183_svsb_mon_mode;
+        }
+
+        svsp->tefuse_max /= sizeof(u32);
+        nvmem_cell_put(cell);
+
+        /* Thermal efuse parsing */
+        adc_ge_t = (svsp->tefuse[1] >> 22) & GENMASK(9, 0);
+        adc_oe_t = (svsp->tefuse[1] >> 12) & GENMASK(9, 0);
+
+        o_vtsmcu[0] = (svsp->tefuse[0] >> 17) & GENMASK(8, 0);
+        o_vtsmcu[1] = (svsp->tefuse[0] >> 8) & GENMASK(8, 0);
+        o_vtsmcu[2] = svsp->tefuse[1] & GENMASK(8, 0);
+        o_vtsmcu[3] = (svsp->tefuse[2] >> 23) & GENMASK(8, 0);
+        o_vtsmcu[4] = (svsp->tefuse[2] >> 5) & GENMASK(8, 0);
+        o_vtsabb = (svsp->tefuse[2] >> 14) & GENMASK(8, 0);
+
+        degc_cali = (svsp->tefuse[0] >> 1) & GENMASK(5, 0);
+        adc_cali_en_t = svsp->tefuse[0] & BIT(0);
+        o_slope_sign = (svsp->tefuse[0] >> 7) & BIT(0);
+
+        ts_id = (svsp->tefuse[1] >> 9) & BIT(0);
+        o_slope = (svsp->tefuse[0] >> 26) & GENMASK(5, 0);
+
+        if (adc_cali_en_t == 1) {
+                if (!ts_id)
+                        o_slope = 0;
+
+                if (adc_ge_t < 265 || adc_ge_t > 758 ||
+                    adc_oe_t < 265 || adc_oe_t > 758 ||
+                    o_vtsmcu[0] < -8 || o_vtsmcu[0] > 484 ||
+                    o_vtsmcu[1] < -8 || o_vtsmcu[1] > 484 ||
+                    o_vtsmcu[2] < -8 || o_vtsmcu[2] > 484 ||
+                    o_vtsmcu[3] < -8 || o_vtsmcu[3] > 484 ||
+                    o_vtsmcu[4] < -8 || o_vtsmcu[4] > 484 ||
+                    o_vtsabb < -8 || o_vtsabb > 484 ||
+                    degc_cali < 1 || degc_cali > 63) {
+                        dev_err(svsp->dev, "bad thermal efuse, no mon mode\n");
+                        goto remove_mt8183_svsb_mon_mode;
+                }
+        } else {
+                dev_err(svsp->dev, "no thermal efuse, no mon mode\n");
+                goto remove_mt8183_svsb_mon_mode;
+        }
+
+        ge = ((adc_ge_t - 512) * 10000) / 4096;
+        oe = (adc_oe_t - 512);
+        gain = (10000 + ge);
+
+        format[0] = (o_vtsmcu[0] + 3350 - oe);
+        format[1] = (o_vtsmcu[1] + 3350 - oe);
+        format[2] = (o_vtsmcu[2] + 3350 - oe);
+        format[3] = (o_vtsmcu[3] + 3350 - oe);
+        format[4] = (o_vtsmcu[4] + 3350 - oe);
+        format[5] = (o_vtsabb + 3350 - oe);
+
+        for (i = 0; i < 6; i++)
+                x_roomt[i] = (((format[i] * 10000) / 4096) * 10000) / gain;
+
+        temp0 = (10000 * 100000 / gain) * 15 / 18;
+
+        if (!o_slope_sign)
+                mts = (temp0 * 10) / (1534 + o_slope * 10);
+        else
+                mts = (temp0 * 10) / (1534 - o_slope * 10);
+
+        for (idx = 0; idx < svsp->bank_max; idx++) {
+                svsb = &svsp->banks[idx];
+                svsb->mts = mts;
+
+                switch (svsb->sw_id) {
+                case SVSB_CPU_LITTLE:
+                        tb_roomt = x_roomt[3];
+                        break;
+                case SVSB_CPU_BIG:
+                        tb_roomt = x_roomt[4];
+                        break;
+                case SVSB_CCI:
+                        tb_roomt = x_roomt[3];
+                        break;
+                case SVSB_GPU:
+                        tb_roomt = x_roomt[1];
+                        break;
+                default:
+                        dev_err(svsb->dev, "unknown sw_id: %u\n", svsb->sw_id);
+                        goto remove_mt8183_svsb_mon_mode;
+                }
+
+                temp0 = (degc_cali * 10 / 2);
+                temp1 = ((10000 * 100000 / 4096 / gain) *
+                         oe + tb_roomt * 10) * 15 / 18;
+
+                if (!o_slope_sign)
+                        temp2 = temp1 * 100 / (1534 + o_slope * 10);
+                else
+                        temp2 = temp1 * 100 / (1534 - o_slope * 10);
+
+                svsb->bts = (temp0 + temp2 - 250) * 4 / 10;
+        }
+
+        return true;
+
+remove_mt8183_svsb_mon_mode:
+        for (idx = 0; idx < svsp->bank_max; idx++) {
+                svsb = &svsp->banks[idx];
+                svsb->mode_support &= ~SVSB_MODE_MON;
+        }
+
+        return true;
+}
+
+static bool svs_is_efuse_data_correct(struct svs_platform *svsp)
+{
+        struct nvmem_cell *cell;
+
+        /* Get svs efuse by nvmem */
+        cell = nvmem_cell_get(svsp->dev, "svs-calibration-data");
+        if (IS_ERR(cell)) {
+                dev_err(svsp->dev, "no \"svs-calibration-data\"? %ld\n",
+                        PTR_ERR(cell));
+                return false;
+        }
+
+        svsp->efuse = nvmem_cell_read(cell, &svsp->efuse_max);
+        if (IS_ERR(svsp->efuse)) {
+                dev_err(svsp->dev, "cannot read svs efuse: %ld\n",
+                        PTR_ERR(svsp->efuse));
+                nvmem_cell_put(cell);
+                return false;
+        }
+
+        svsp->efuse_max /= sizeof(u32);
+        nvmem_cell_put(cell);
+
+        return svsp->efuse_parsing(svsp);
+}
+
+static struct device *svs_get_subsys_device(struct svs_platform *svsp,
+                                            const char *node_name)
+{
+        struct platform_device *pdev;
+        struct device_node *np;
+
+        np = of_find_node_by_name(NULL, node_name);
+        if (!np) {
+                dev_err(svsp->dev, "cannot find %s node\n", node_name);
+                return ERR_PTR(-ENODEV);
+        }
+
+        pdev = of_find_device_by_node(np);
+        if (!pdev) {
+                of_node_put(np);
+                dev_err(svsp->dev, "cannot find pdev by %s\n", node_name);
+                return ERR_PTR(-ENXIO);
+        }
+
+        of_node_put(np);
+
+        return &pdev->dev;
+}
+
+static struct device *svs_add_device_link(struct svs_platform *svsp,
+                                          const char *node_name)
+{
+        struct device *dev;
+        struct device_link *sup_link;
+
+        if (!node_name) {
+                dev_err(svsp->dev, "node name cannot be null\n");
+                return ERR_PTR(-EINVAL);
+        }
+
+        dev = svs_get_subsys_device(svsp, node_name);
+        if (IS_ERR(dev))
+                return dev;
+
+        sup_link = device_link_add(svsp->dev, dev,
+                                   DL_FLAG_AUTOREMOVE_CONSUMER);
+        if (!sup_link) {
+                dev_err(svsp->dev, "sup_link is NULL\n");
+                return ERR_PTR(-EINVAL);
+        }
+
+        if (sup_link->supplier->links.status != DL_DEV_DRIVER_BOUND)
+                return ERR_PTR(-EPROBE_DEFER);
+
+        return dev;
+}
+
+static int svs_mt8192_platform_probe(struct svs_platform *svsp)
+{
+        struct device *dev;
+        struct svs_bank *svsb;
+        u32 idx;
+
+        svsp->rst = devm_reset_control_get_optional(svsp->dev, "svs_rst");
+        if (IS_ERR(svsp->rst))
+                return dev_err_probe(svsp->dev, PTR_ERR(svsp->rst),
+                                     "cannot get svs reset control\n");
+
+        dev = svs_add_device_link(svsp, "lvts");
+        if (IS_ERR(dev))
+                return dev_err_probe(svsp->dev, PTR_ERR(dev),
+                                     "failed to get lvts device\n");
+
+        for (idx = 0; idx < svsp->bank_max; idx++) {
+                svsb = &svsp->banks[idx];
+
+                if (svsb->type == SVSB_HIGH)
+                        svsb->opp_dev = svs_add_device_link(svsp, "mali");
+                else if (svsb->type == SVSB_LOW)
+                        svsb->opp_dev = svs_get_subsys_device(svsp, "mali");
+
+                if (IS_ERR(svsb->opp_dev))
+                        return dev_err_probe(svsp->dev, PTR_ERR(svsb->opp_dev),
+                                             "failed to get OPP device for bank %d\n",
+                                             idx);
+        }
+
+        return 0;
+}
+
+static int svs_mt8183_platform_probe(struct svs_platform *svsp)
+{
+        struct device *dev;
+        struct svs_bank *svsb;
+        u32 idx;
+
+        dev = svs_add_device_link(svsp, "thermal");
+        if (IS_ERR(dev))
+                return dev_err_probe(svsp->dev, PTR_ERR(dev),
+                                     "failed to get thermal device\n");
+
+        for (idx = 0; idx < svsp->bank_max; idx++) {
+                svsb = &svsp->banks[idx];
+
+                switch (svsb->sw_id) {
+                case SVSB_CPU_LITTLE:
+                case SVSB_CPU_BIG:
+                        svsb->opp_dev = get_cpu_device(svsb->cpu_id);
+                        break;
+                case SVSB_CCI:
+                        svsb->opp_dev = svs_add_device_link(svsp, "cci");
+                        break;
+                case SVSB_GPU:
+                        svsb->opp_dev = svs_add_device_link(svsp, "gpu");
+                        break;
+                default:
+                        dev_err(svsb->dev, "unknown sw_id: %u\n", svsb->sw_id);
+                        return -EINVAL;
+                }
+
+                if (IS_ERR(svsb->opp_dev))
+                        return dev_err_probe(svsp->dev, PTR_ERR(svsb->opp_dev),
+                                             "failed to get OPP device for bank %d\n",
+                                             idx);
+        }
+
+        return 0;
+}
+
+static struct svs_bank svs_mt8192_banks[] = {
+        {
+                .sw_id = SVSB_GPU,
+                .type = SVSB_LOW,
+                .set_freq_pct = svs_set_bank_freq_pct_v3,
+                .get_volts = svs_get_bank_volts_v3,
+                .volt_flags = SVSB_REMOVE_DVTFIXED_VOLT,
+                .mode_support = SVSB_MODE_INIT02,
+                .opp_count = MAX_OPP_ENTRIES,
+                .freq_base = 688000000,
+                .turn_freq_base = 688000000,
+                .volt_step = 6250,
+                .volt_base = 400000,
+                .vmax = 0x60,
+                .vmin = 0x1a,
+                .age_config = 0x555555,
+                .dc_config = 0x1,
+                .dvt_fixed = 0x1,
+                .vco = 0x18,
+                .chk_shift = 0x87,
+                .core_sel = 0x0fff0100,
+                .int_st = BIT(0),
+                .ctl0 = 0x00540003,
+        },
+        {
+                .sw_id = SVSB_GPU,
+                .type = SVSB_HIGH,
+                .set_freq_pct = svs_set_bank_freq_pct_v3,
+                .get_volts = svs_get_bank_volts_v3,
+                .tzone_name = "gpu1",
+                .volt_flags = SVSB_REMOVE_DVTFIXED_VOLT |
+                              SVSB_MON_VOLT_IGNORE,
+                .mode_support = SVSB_MODE_INIT02 | SVSB_MODE_MON,
+                .opp_count = MAX_OPP_ENTRIES,
+                .freq_base = 902000000,
+                .turn_freq_base = 688000000,
+                .volt_step = 6250,
+                .volt_base = 400000,
+                .vmax = 0x60,
+                .vmin = 0x1a,
+                .age_config = 0x555555,
+                .dc_config = 0x1,
+                .dvt_fixed = 0x6,
+                .vco = 0x18,
+                .chk_shift = 0x87,
+                .core_sel = 0x0fff0101,
+                .int_st = BIT(1),
+                .ctl0 = 0x00540003,
+                .tzone_htemp = 85000,
+                .tzone_htemp_voffset = 0,
+                .tzone_ltemp = 25000,
+                .tzone_ltemp_voffset = 7,
+        },
+};
+
+static struct svs_bank svs_mt8183_banks[] = {
+        {
+                .sw_id = SVSB_CPU_LITTLE,
+                .set_freq_pct = svs_set_bank_freq_pct_v2,
+                .get_volts = svs_get_bank_volts_v2,
+                .cpu_id = 0,
+                .buck_name = "proc",
.volt_flags = SVSB_INIT01_VOLT_INC_ONLY, 2146 + .mode_support = SVSB_MODE_INIT01 | SVSB_MODE_INIT02, 2147 + .opp_count = MAX_OPP_ENTRIES, 2148 + .freq_base = 1989000000, 2149 + .vboot = 0x30, 2150 + .volt_step = 6250, 2151 + .volt_base = 500000, 2152 + .vmax = 0x64, 2153 + .vmin = 0x18, 2154 + .age_config = 0x555555, 2155 + .dc_config = 0x555555, 2156 + .dvt_fixed = 0x7, 2157 + .vco = 0x10, 2158 + .chk_shift = 0x77, 2159 + .core_sel = 0x8fff0000, 2160 + .int_st = BIT(0), 2161 + .ctl0 = 0x00010001, 2162 + }, 2163 + { 2164 + .sw_id = SVSB_CPU_BIG, 2165 + .set_freq_pct = svs_set_bank_freq_pct_v2, 2166 + .get_volts = svs_get_bank_volts_v2, 2167 + .cpu_id = 4, 2168 + .buck_name = "proc", 2169 + .volt_flags = SVSB_INIT01_VOLT_INC_ONLY, 2170 + .mode_support = SVSB_MODE_INIT01 | SVSB_MODE_INIT02, 2171 + .opp_count = MAX_OPP_ENTRIES, 2172 + .freq_base = 1989000000, 2173 + .vboot = 0x30, 2174 + .volt_step = 6250, 2175 + .volt_base = 500000, 2176 + .vmax = 0x58, 2177 + .vmin = 0x10, 2178 + .age_config = 0x555555, 2179 + .dc_config = 0x555555, 2180 + .dvt_fixed = 0x7, 2181 + .vco = 0x10, 2182 + .chk_shift = 0x77, 2183 + .core_sel = 0x8fff0001, 2184 + .int_st = BIT(1), 2185 + .ctl0 = 0x00000001, 2186 + }, 2187 + { 2188 + .sw_id = SVSB_CCI, 2189 + .set_freq_pct = svs_set_bank_freq_pct_v2, 2190 + .get_volts = svs_get_bank_volts_v2, 2191 + .buck_name = "proc", 2192 + .volt_flags = SVSB_INIT01_VOLT_INC_ONLY, 2193 + .mode_support = SVSB_MODE_INIT01 | SVSB_MODE_INIT02, 2194 + .opp_count = MAX_OPP_ENTRIES, 2195 + .freq_base = 1196000000, 2196 + .vboot = 0x30, 2197 + .volt_step = 6250, 2198 + .volt_base = 500000, 2199 + .vmax = 0x64, 2200 + .vmin = 0x18, 2201 + .age_config = 0x555555, 2202 + .dc_config = 0x555555, 2203 + .dvt_fixed = 0x7, 2204 + .vco = 0x10, 2205 + .chk_shift = 0x77, 2206 + .core_sel = 0x8fff0002, 2207 + .int_st = BIT(2), 2208 + .ctl0 = 0x00100003, 2209 + }, 2210 + { 2211 + .sw_id = SVSB_GPU, 2212 + .set_freq_pct = svs_set_bank_freq_pct_v2, 2213 + .get_volts = 
svs_get_bank_volts_v2, 2214 + .buck_name = "mali", 2215 + .tzone_name = "tzts2", 2216 + .volt_flags = SVSB_INIT01_PD_REQ | 2217 + SVSB_INIT01_VOLT_INC_ONLY, 2218 + .mode_support = SVSB_MODE_INIT01 | SVSB_MODE_INIT02 | 2219 + SVSB_MODE_MON, 2220 + .opp_count = MAX_OPP_ENTRIES, 2221 + .freq_base = 900000000, 2222 + .vboot = 0x30, 2223 + .volt_step = 6250, 2224 + .volt_base = 500000, 2225 + .vmax = 0x40, 2226 + .vmin = 0x14, 2227 + .age_config = 0x555555, 2228 + .dc_config = 0x555555, 2229 + .dvt_fixed = 0x3, 2230 + .vco = 0x10, 2231 + .chk_shift = 0x77, 2232 + .core_sel = 0x8fff0003, 2233 + .int_st = BIT(3), 2234 + .ctl0 = 0x00050001, 2235 + .tzone_htemp = 85000, 2236 + .tzone_htemp_voffset = 0, 2237 + .tzone_ltemp = 25000, 2238 + .tzone_ltemp_voffset = 3, 2239 + }, 2240 + }; 2241 + 2242 + static const struct svs_platform_data svs_mt8192_platform_data = { 2243 + .name = "mt8192-svs", 2244 + .banks = svs_mt8192_banks, 2245 + .efuse_parsing = svs_mt8192_efuse_parsing, 2246 + .probe = svs_mt8192_platform_probe, 2247 + .irqflags = IRQF_TRIGGER_HIGH, 2248 + .regs = svs_regs_v2, 2249 + .bank_max = ARRAY_SIZE(svs_mt8192_banks), 2250 + }; 2251 + 2252 + static const struct svs_platform_data svs_mt8183_platform_data = { 2253 + .name = "mt8183-svs", 2254 + .banks = svs_mt8183_banks, 2255 + .efuse_parsing = svs_mt8183_efuse_parsing, 2256 + .probe = svs_mt8183_platform_probe, 2257 + .irqflags = IRQF_TRIGGER_LOW, 2258 + .regs = svs_regs_v2, 2259 + .bank_max = ARRAY_SIZE(svs_mt8183_banks), 2260 + }; 2261 + 2262 + static const struct of_device_id svs_of_match[] = { 2263 + { 2264 + .compatible = "mediatek,mt8192-svs", 2265 + .data = &svs_mt8192_platform_data, 2266 + }, { 2267 + .compatible = "mediatek,mt8183-svs", 2268 + .data = &svs_mt8183_platform_data, 2269 + }, { 2270 + /* Sentinel */ 2271 + }, 2272 + }; 2273 + 2274 + static struct svs_platform *svs_platform_probe(struct platform_device *pdev) 2275 + { 2276 + struct svs_platform *svsp; 2277 + const struct svs_platform_data 
*svsp_data; 2278 + int ret; 2279 + 2280 + svsp_data = of_device_get_match_data(&pdev->dev); 2281 + if (!svsp_data) { 2282 + dev_err(&pdev->dev, "no svs platform data?\n"); 2283 + return ERR_PTR(-EPERM); 2284 + } 2285 + 2286 + svsp = devm_kzalloc(&pdev->dev, sizeof(*svsp), GFP_KERNEL); 2287 + if (!svsp) 2288 + return ERR_PTR(-ENOMEM); 2289 + 2290 + svsp->dev = &pdev->dev; 2291 + svsp->name = svsp_data->name; 2292 + svsp->banks = svsp_data->banks; 2293 + svsp->efuse_parsing = svsp_data->efuse_parsing; 2294 + svsp->probe = svsp_data->probe; 2295 + svsp->irqflags = svsp_data->irqflags; 2296 + svsp->regs = svsp_data->regs; 2297 + svsp->bank_max = svsp_data->bank_max; 2298 + 2299 + ret = svsp->probe(svsp); 2300 + if (ret) 2301 + return ERR_PTR(ret); 2302 + 2303 + return svsp; 2304 + } 2305 + 2306 + static int svs_probe(struct platform_device *pdev) 2307 + { 2308 + struct svs_platform *svsp; 2309 + unsigned int svsp_irq; 2310 + int ret; 2311 + 2312 + svsp = svs_platform_probe(pdev); 2313 + if (IS_ERR(svsp)) 2314 + return PTR_ERR(svsp); 2315 + 2316 + if (!svs_is_efuse_data_correct(svsp)) { 2317 + dev_notice(svsp->dev, "efuse data isn't correct\n"); 2318 + ret = -EPERM; 2319 + goto svs_probe_free_resource; 2320 + } 2321 + 2322 + ret = svs_bank_resource_setup(svsp); 2323 + if (ret) { 2324 + dev_err(svsp->dev, "svs bank resource setup fail: %d\n", ret); 2325 + goto svs_probe_free_resource; 2326 + } 2327 + 2328 + svsp_irq = irq_of_parse_and_map(svsp->dev->of_node, 0); 2329 + ret = devm_request_threaded_irq(svsp->dev, svsp_irq, NULL, svs_isr, 2330 + svsp->irqflags | IRQF_ONESHOT, 2331 + svsp->name, svsp); 2332 + if (ret) { 2333 + dev_err(svsp->dev, "register irq(%d) failed: %d\n", 2334 + svsp_irq, ret); 2335 + goto svs_probe_free_resource; 2336 + } 2337 + 2338 + svsp->main_clk = devm_clk_get(svsp->dev, "main"); 2339 + if (IS_ERR(svsp->main_clk)) { 2340 + dev_err(svsp->dev, "failed to get clock: %ld\n", 2341 + PTR_ERR(svsp->main_clk)); 2342 + ret = PTR_ERR(svsp->main_clk); 2343 
+ goto svs_probe_free_resource; 2344 + } 2345 + 2346 + ret = clk_prepare_enable(svsp->main_clk); 2347 + if (ret) { 2348 + dev_err(svsp->dev, "cannot enable main clk: %d\n", ret); 2349 + goto svs_probe_free_resource; 2350 + } 2351 + 2352 + svsp->base = of_iomap(svsp->dev->of_node, 0); 2353 + if (IS_ERR_OR_NULL(svsp->base)) { 2354 + dev_err(svsp->dev, "cannot find svs register base\n"); 2355 + ret = -EINVAL; 2356 + goto svs_probe_clk_disable; 2357 + } 2358 + 2359 + ret = svs_start(svsp); 2360 + if (ret) { 2361 + dev_err(svsp->dev, "svs start fail: %d\n", ret); 2362 + goto svs_probe_iounmap; 2363 + } 2364 + 2365 + ret = svs_create_debug_cmds(svsp); 2366 + if (ret) { 2367 + dev_err(svsp->dev, "svs create debug cmds fail: %d\n", ret); 2368 + goto svs_probe_iounmap; 2369 + } 2370 + 2371 + return 0; 2372 + 2373 + svs_probe_iounmap: 2374 + iounmap(svsp->base); 2375 + 2376 + svs_probe_clk_disable: 2377 + clk_disable_unprepare(svsp->main_clk); 2378 + 2379 + svs_probe_free_resource: 2380 + if (!IS_ERR_OR_NULL(svsp->efuse)) 2381 + kfree(svsp->efuse); 2382 + if (!IS_ERR_OR_NULL(svsp->tefuse)) 2383 + kfree(svsp->tefuse); 2384 + 2385 + return ret; 2386 + } 2387 + 2388 + static DEFINE_SIMPLE_DEV_PM_OPS(svs_pm_ops, svs_suspend, svs_resume); 2389 + 2390 + static struct platform_driver svs_driver = { 2391 + .probe = svs_probe, 2392 + .driver = { 2393 + .name = "mtk-svs", 2394 + .pm = &svs_pm_ops, 2395 + .of_match_table = of_match_ptr(svs_of_match), 2396 + }, 2397 + }; 2398 + 2399 + module_platform_driver(svs_driver); 2400 + 2401 + MODULE_AUTHOR("Roger Lu <roger.lu@mediatek.com>"); 2402 + MODULE_DESCRIPTION("MediaTek SVS driver"); 2403 + MODULE_LICENSE("GPL");
drivers/soc/qcom/Kconfig (+18)
···
 config QCOM_RPMPD
 	tristate "Qualcomm RPM Power domain driver"
+	depends on PM
 	depends on QCOM_SMD_RPM
+	select PM_GENERIC_DOMAINS
+	select PM_GENERIC_DOMAINS_OF
 	help
 	  QCOM RPM Power domain driver to support power-domains with
 	  performance states. The driver communicates a performance state
···
 	  application processor and QDSP6. APR is
 	  used by audio driver to configure QDSP6
 	  ASM, ADM and AFE modules.
+
+config QCOM_ICC_BWMON
+	tristate "QCOM Interconnect Bandwidth Monitor driver"
+	depends on ARCH_QCOM || COMPILE_TEST
+	select PM_OPP
+	help
+	  Sets up driver monitoring bandwidth on various interconnects and
+	  based on that voting for interconnect bandwidth, adjusting their
+	  speed to current demand.
+	  Current implementation brings support for BWMON v4, used for example
+	  on SDM845 to measure bandwidth between CPU (gladiator_noc) and Last
+	  Level Cache (memnoc). Usage of this BWMON allows to remove some of
+	  the fixed bandwidth votes from cpufreq (CPU nodes) thus achieve high
+	  memory throughput even with lower CPU frequencies.
+
 endmenu
drivers/soc/qcom/Makefile (+1)
···
 obj-$(CONFIG_QCOM_RPMHPD) += rpmhpd.o
 obj-$(CONFIG_QCOM_RPMPD) += rpmpd.o
 obj-$(CONFIG_QCOM_KRYO_L2_ACCESSORS) += kryo-l2-accessors.o
+obj-$(CONFIG_QCOM_ICC_BWMON) += icc-bwmon.o
drivers/soc/qcom/apr.c (+6 -9)
···
 static void apr_device_remove(struct device *dev)
 {
 	struct apr_device *adev = to_apr_device(dev);
-	struct apr_driver *adrv;
+	struct apr_driver *adrv = to_apr_driver(dev->driver);
 	struct packet_router *apr = dev_get_drvdata(adev->dev.parent);
 
-	if (dev->driver) {
-		adrv = to_apr_driver(dev->driver);
-		if (adrv->remove)
-			adrv->remove(adev);
-		spin_lock(&apr->svcs_lock);
-		idr_remove(&apr->svcs_idr, adev->svc.id);
-		spin_unlock(&apr->svcs_lock);
-	}
+	if (adrv->remove)
+		adrv->remove(adev);
+	spin_lock(&apr->svcs_lock);
+	idr_remove(&apr->svcs_idr, adev->svc.id);
+	spin_unlock(&apr->svcs_lock);
 }
 
 static int apr_uevent(struct device *dev, struct kobj_uevent_env *env)
drivers/soc/qcom/cmd-db.c (+6 -2)
···
 	const struct rsc_hdr *rsc_hdr;
 	const struct entry_header *ent;
 	int ret, i, j;
-	u8 query[8];
+	u8 query[sizeof(ent->id)] __nonstring;
 
 	ret = cmd_db_ready();
 	if (ret)
 		return ret;
 
-	/* Pad out query string to same length as in DB */
+	/*
+	 * Pad out query string to same length as in DB. NOTE: the output
+	 * query string is not necessarily '\0' terminated if it bumps up
+	 * against the max size. That's OK and expected.
+	 */
 	strncpy(query, id, sizeof(query));
 
 	for (i = 0; i < MAX_SLV_ID; i++) {
drivers/soc/qcom/icc-bwmon.c (+419)
···
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2014-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2021-2022 Linaro Ltd
+ * Author: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>, based on
+ *         previous work of Thara Gopinath and msm-4.9 downstream sources.
+ */
+#include <linux/interconnect.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+#include <linux/platform_device.h>
+#include <linux/pm_opp.h>
+#include <linux/sizes.h>
+
+/*
+ * The BWMON samples data throughput within 'sample_ms' time. With three
+ * configurable thresholds (Low, Medium and High) gives four windows (called
+ * zones) of current bandwidth:
+ *
+ * Zone 0: byte count < THRES_LO
+ * Zone 1: THRES_LO < byte count < THRES_MED
+ * Zone 2: THRES_MED < byte count < THRES_HIGH
+ * Zone 3: THRES_HIGH < byte count
+ *
+ * Zones 0 and 2 are not used by this driver.
+ */
+
+/* Internal sampling clock frequency */
+#define HW_TIMER_HZ				19200000
+
+#define BWMON_GLOBAL_IRQ_STATUS			0x0
+#define BWMON_GLOBAL_IRQ_CLEAR			0x8
+#define BWMON_GLOBAL_IRQ_ENABLE			0xc
+#define BWMON_GLOBAL_IRQ_ENABLE_ENABLE		BIT(0)
+
+#define BWMON_IRQ_STATUS			0x100
+#define BWMON_IRQ_STATUS_ZONE_SHIFT		4
+#define BWMON_IRQ_CLEAR				0x108
+#define BWMON_IRQ_ENABLE			0x10c
+#define BWMON_IRQ_ENABLE_ZONE1_SHIFT		5
+#define BWMON_IRQ_ENABLE_ZONE2_SHIFT		6
+#define BWMON_IRQ_ENABLE_ZONE3_SHIFT		7
+#define BWMON_IRQ_ENABLE_MASK			(BIT(BWMON_IRQ_ENABLE_ZONE1_SHIFT) | \
+						 BIT(BWMON_IRQ_ENABLE_ZONE3_SHIFT))
+
+#define BWMON_ENABLE				0x2a0
+#define BWMON_ENABLE_ENABLE			BIT(0)
+
+#define BWMON_CLEAR				0x2a4
+#define BWMON_CLEAR_CLEAR			BIT(0)
+
+#define BWMON_SAMPLE_WINDOW			0x2a8
+#define BWMON_THRESHOLD_HIGH			0x2ac
+#define BWMON_THRESHOLD_MED			0x2b0
+#define BWMON_THRESHOLD_LOW			0x2b4
+
+#define BWMON_ZONE_ACTIONS			0x2b8
+/*
+ * Actions to perform on some zone 'z' when current zone hits the threshold:
+ * Increment counter of zone 'z'
+ */
+#define BWMON_ZONE_ACTIONS_INCREMENT(z)		(0x2 << ((z) * 2))
+/* Clear counter of zone 'z' */
+#define BWMON_ZONE_ACTIONS_CLEAR(z)		(0x1 << ((z) * 2))
+
+/* Zone 0 threshold hit: Clear zone count */
+#define BWMON_ZONE_ACTIONS_ZONE0		(BWMON_ZONE_ACTIONS_CLEAR(0))
+
+/* Zone 1 threshold hit: Increment zone count & clear lower zones */
+#define BWMON_ZONE_ACTIONS_ZONE1		(BWMON_ZONE_ACTIONS_INCREMENT(1) | \
+						 BWMON_ZONE_ACTIONS_CLEAR(0))
+
+/* Zone 2 threshold hit: Increment zone count & clear lower zones */
+#define BWMON_ZONE_ACTIONS_ZONE2		(BWMON_ZONE_ACTIONS_INCREMENT(2) | \
+						 BWMON_ZONE_ACTIONS_CLEAR(1) | \
+						 BWMON_ZONE_ACTIONS_CLEAR(0))
+
+/* Zone 3 threshold hit: Increment zone count & clear lower zones */
+#define BWMON_ZONE_ACTIONS_ZONE3		(BWMON_ZONE_ACTIONS_INCREMENT(3) | \
+						 BWMON_ZONE_ACTIONS_CLEAR(2) | \
+						 BWMON_ZONE_ACTIONS_CLEAR(1) | \
+						 BWMON_ZONE_ACTIONS_CLEAR(0))
+/* Value for BWMON_ZONE_ACTIONS */
+#define BWMON_ZONE_ACTIONS_DEFAULT		(BWMON_ZONE_ACTIONS_ZONE0 | \
+						 BWMON_ZONE_ACTIONS_ZONE1 << 8 | \
+						 BWMON_ZONE_ACTIONS_ZONE2 << 16 | \
+						 BWMON_ZONE_ACTIONS_ZONE3 << 24)
+
+/*
+ * There is no clear documentation/explanation of BWMON_THRESHOLD_COUNT
+ * register. Based on observations, this is number of times one threshold has to
+ * be reached, to trigger interrupt in given zone.
+ *
+ * 0xff are maximum values meant to ignore the zones 0 and 2.
+ */
+#define BWMON_THRESHOLD_COUNT			0x2bc
+#define BWMON_THRESHOLD_COUNT_ZONE1_SHIFT	8
+#define BWMON_THRESHOLD_COUNT_ZONE2_SHIFT	16
+#define BWMON_THRESHOLD_COUNT_ZONE3_SHIFT	24
+#define BWMON_THRESHOLD_COUNT_ZONE0_DEFAULT	0xff
+#define BWMON_THRESHOLD_COUNT_ZONE2_DEFAULT	0xff
+
+/* BWMONv4 count registers use count unit of 64 kB */
+#define BWMON_COUNT_UNIT_KB			64
+#define BWMON_ZONE_COUNT			0x2d8
+#define BWMON_ZONE_MAX(zone)			(0x2e0 + 4 * (zone))
+
+struct icc_bwmon_data {
+	unsigned int sample_ms;
+	unsigned int default_highbw_kbps;
+	unsigned int default_medbw_kbps;
+	unsigned int default_lowbw_kbps;
+	u8 zone1_thres_count;
+	u8 zone3_thres_count;
+};
+
+struct icc_bwmon {
+	struct device *dev;
+	void __iomem *base;
+	int irq;
+
+	unsigned int default_lowbw_kbps;
+	unsigned int sample_ms;
+	unsigned int max_bw_kbps;
+	unsigned int min_bw_kbps;
+	unsigned int target_kbps;
+	unsigned int current_kbps;
+};
+
+static void bwmon_clear_counters(struct icc_bwmon *bwmon)
+{
+	/*
+	 * Clear counters. The order and barriers are
+	 * important. Quoting downstream Qualcomm msm-4.9 tree:
+	 *
+	 * The counter clear and IRQ clear bits are not in the same 4KB
+	 * region. So, we need to make sure the counter clear is completed
+	 * before we try to clear the IRQ or do any other counter operations.
+	 */
+	writel(BWMON_CLEAR_CLEAR, bwmon->base + BWMON_CLEAR);
+}
+
+static void bwmon_clear_irq(struct icc_bwmon *bwmon)
+{
+	/*
+	 * Clear zone and global interrupts. The order and barriers are
+	 * important. Quoting downstream Qualcomm msm-4.9 tree:
+	 *
+	 * Synchronize the local interrupt clear in mon_irq_clear()
+	 * with the global interrupt clear here. Otherwise, the CPU
+	 * may reorder the two writes and clear the global interrupt
+	 * before the local interrupt, causing the global interrupt
+	 * to be retriggered by the local interrupt still being high.
+	 *
+	 * Similarly, because the global registers are in a different
+	 * region than the local registers, we need to ensure any register
+	 * writes to enable the monitor after this call are ordered with the
+	 * clearing here so that local writes don't happen before the
+	 * interrupt is cleared.
+	 */
+	writel(BWMON_IRQ_ENABLE_MASK, bwmon->base + BWMON_IRQ_CLEAR);
+	writel(BIT(0), bwmon->base + BWMON_GLOBAL_IRQ_CLEAR);
+}
+
+static void bwmon_disable(struct icc_bwmon *bwmon)
+{
+	/* Disable interrupts. Strict ordering, see bwmon_clear_irq(). */
+	writel(0x0, bwmon->base + BWMON_GLOBAL_IRQ_ENABLE);
+	writel(0x0, bwmon->base + BWMON_IRQ_ENABLE);
+
+	/*
+	 * Disable bwmon. Must happen before bwmon_clear_irq() to avoid spurious
+	 * IRQ.
+	 */
+	writel(0x0, bwmon->base + BWMON_ENABLE);
+}
+
+static void bwmon_enable(struct icc_bwmon *bwmon, unsigned int irq_enable)
+{
+	/* Enable interrupts */
+	writel(BWMON_GLOBAL_IRQ_ENABLE_ENABLE,
+	       bwmon->base + BWMON_GLOBAL_IRQ_ENABLE);
+	writel(irq_enable, bwmon->base + BWMON_IRQ_ENABLE);
+
+	/* Enable bwmon */
+	writel(BWMON_ENABLE_ENABLE, bwmon->base + BWMON_ENABLE);
+}
+
+static unsigned int bwmon_kbps_to_count(unsigned int kbps)
+{
+	return kbps / BWMON_COUNT_UNIT_KB;
+}
+
+static void bwmon_set_threshold(struct icc_bwmon *bwmon, unsigned int reg,
+				unsigned int kbps)
+{
+	unsigned int thres;
+
+	thres = mult_frac(bwmon_kbps_to_count(kbps), bwmon->sample_ms,
+			  MSEC_PER_SEC);
+	writel_relaxed(thres, bwmon->base + reg);
+}
+
+static void bwmon_start(struct icc_bwmon *bwmon,
+			const struct icc_bwmon_data *data)
+{
+	unsigned int thres_count;
+	int window;
+
+	bwmon_clear_counters(bwmon);
+
+	window = mult_frac(bwmon->sample_ms, HW_TIMER_HZ, MSEC_PER_SEC);
+	/* Maximum sampling window: 0xfffff */
+	writel_relaxed(window, bwmon->base + BWMON_SAMPLE_WINDOW);
+
+	bwmon_set_threshold(bwmon, BWMON_THRESHOLD_HIGH,
+			    data->default_highbw_kbps);
+	bwmon_set_threshold(bwmon, BWMON_THRESHOLD_MED,
+			    data->default_medbw_kbps);
+	bwmon_set_threshold(bwmon, BWMON_THRESHOLD_LOW,
+			    data->default_lowbw_kbps);
+
+	thres_count = data->zone3_thres_count << BWMON_THRESHOLD_COUNT_ZONE3_SHIFT |
+		      BWMON_THRESHOLD_COUNT_ZONE2_DEFAULT << BWMON_THRESHOLD_COUNT_ZONE2_SHIFT |
+		      data->zone1_thres_count << BWMON_THRESHOLD_COUNT_ZONE1_SHIFT |
+		      BWMON_THRESHOLD_COUNT_ZONE0_DEFAULT;
+	writel_relaxed(thres_count, bwmon->base + BWMON_THRESHOLD_COUNT);
+	writel_relaxed(BWMON_ZONE_ACTIONS_DEFAULT,
+		       bwmon->base + BWMON_ZONE_ACTIONS);
+	/* Write barriers in bwmon_clear_irq() */
+
+	bwmon_clear_irq(bwmon);
+	bwmon_enable(bwmon, BWMON_IRQ_ENABLE_MASK);
+}
+
+static irqreturn_t bwmon_intr(int irq, void *dev_id)
+{
+	struct icc_bwmon *bwmon = dev_id;
+	unsigned int status, max;
+	int zone;
+
+	status = readl(bwmon->base + BWMON_IRQ_STATUS);
+	status &= BWMON_IRQ_ENABLE_MASK;
+	if (!status) {
+		/*
+		 * Only zone 1 and zone 3 interrupts are enabled but zone 2
+		 * threshold could be hit and trigger interrupt even if not
+		 * enabled.
+		 * Such spurious interrupt might come with valuable max count or
+		 * not, so solution would be to always check all
+		 * BWMON_ZONE_MAX() registers to find the highest value.
+		 * Such case is currently ignored.
+		 */
+		return IRQ_NONE;
+	}
+
+	bwmon_disable(bwmon);
+
+	zone = get_bitmask_order(status >> BWMON_IRQ_STATUS_ZONE_SHIFT) - 1;
+	/*
+	 * Zone max bytes count register returns count units within sampling
+	 * window. Downstream kernel for BWMONv4 (called BWMON type 2 in
+	 * downstream) always increments the max bytes count by one.
+	 */
+	max = readl(bwmon->base + BWMON_ZONE_MAX(zone)) + 1;
+	max *= BWMON_COUNT_UNIT_KB;
+	bwmon->target_kbps = mult_frac(max, MSEC_PER_SEC, bwmon->sample_ms);
+
+	return IRQ_WAKE_THREAD;
+}
+
+static irqreturn_t bwmon_intr_thread(int irq, void *dev_id)
+{
+	struct icc_bwmon *bwmon = dev_id;
+	unsigned int irq_enable = 0;
+	struct dev_pm_opp *opp, *target_opp;
+	unsigned int bw_kbps, up_kbps, down_kbps;
+
+	bw_kbps = bwmon->target_kbps;
+
+	target_opp = dev_pm_opp_find_bw_ceil(bwmon->dev, &bw_kbps, 0);
+	if (IS_ERR(target_opp) && PTR_ERR(target_opp) == -ERANGE)
+		target_opp = dev_pm_opp_find_bw_floor(bwmon->dev, &bw_kbps, 0);
+
+	bwmon->target_kbps = bw_kbps;
+
+	bw_kbps--;
+	opp = dev_pm_opp_find_bw_floor(bwmon->dev, &bw_kbps, 0);
+	if (IS_ERR(opp) && PTR_ERR(opp) == -ERANGE)
+		down_kbps = bwmon->target_kbps;
+	else
+		down_kbps = bw_kbps;
+
+	up_kbps = bwmon->target_kbps + 1;
+
+	if (bwmon->target_kbps >= bwmon->max_bw_kbps)
+		irq_enable = BIT(BWMON_IRQ_ENABLE_ZONE1_SHIFT);
+	else if (bwmon->target_kbps <= bwmon->min_bw_kbps)
+		irq_enable = BIT(BWMON_IRQ_ENABLE_ZONE3_SHIFT);
+	else
+		irq_enable = BWMON_IRQ_ENABLE_MASK;
+
+	bwmon_set_threshold(bwmon, BWMON_THRESHOLD_HIGH, up_kbps);
+	bwmon_set_threshold(bwmon, BWMON_THRESHOLD_MED, down_kbps);
+	/* Write barriers in bwmon_clear_counters() */
+	bwmon_clear_counters(bwmon);
+	bwmon_clear_irq(bwmon);
+	bwmon_enable(bwmon, irq_enable);
+
+	if (bwmon->target_kbps == bwmon->current_kbps)
+		goto out;
+
+	dev_pm_opp_set_opp(bwmon->dev, target_opp);
+	bwmon->current_kbps = bwmon->target_kbps;
+
+out:
+	dev_pm_opp_put(target_opp);
+	if (!IS_ERR(opp))
+		dev_pm_opp_put(opp);
+
+	return IRQ_HANDLED;
+}
+
+static int bwmon_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct dev_pm_opp *opp;
+	struct icc_bwmon *bwmon;
+	const struct icc_bwmon_data *data;
+	int ret;
+
+	bwmon = devm_kzalloc(dev, sizeof(*bwmon), GFP_KERNEL);
+	if (!bwmon)
+		return -ENOMEM;
+
+	data = of_device_get_match_data(dev);
+
+	bwmon->base = devm_platform_ioremap_resource(pdev, 0);
+	if (IS_ERR(bwmon->base)) {
+		dev_err(dev, "failed to map bwmon registers\n");
+		return PTR_ERR(bwmon->base);
+	}
+
+	bwmon->irq = platform_get_irq(pdev, 0);
+	if (bwmon->irq < 0)
+		return bwmon->irq;
+
+	ret = devm_pm_opp_of_add_table(dev);
+	if (ret)
+		return dev_err_probe(dev, ret, "failed to add OPP table\n");
+
+	bwmon->max_bw_kbps = UINT_MAX;
+	opp = dev_pm_opp_find_bw_floor(dev, &bwmon->max_bw_kbps, 0);
+	if (IS_ERR(opp))
+		return dev_err_probe(dev, PTR_ERR(opp), "failed to find max peak bandwidth\n");
+
+	bwmon->min_bw_kbps = 0;
+	opp = dev_pm_opp_find_bw_ceil(dev, &bwmon->min_bw_kbps, 0);
+	if (IS_ERR(opp))
+		return dev_err_probe(dev, PTR_ERR(opp), "failed to find min peak bandwidth\n");
+
+	bwmon->sample_ms = data->sample_ms;
+	bwmon->default_lowbw_kbps = data->default_lowbw_kbps;
+	bwmon->dev = dev;
+
+	bwmon_disable(bwmon);
+	ret = devm_request_threaded_irq(dev, bwmon->irq, bwmon_intr,
+					bwmon_intr_thread,
+					IRQF_ONESHOT, dev_name(dev), bwmon);
+	if (ret)
+		return dev_err_probe(dev, ret, "failed to request IRQ\n");
+
+	platform_set_drvdata(pdev, bwmon);
+	bwmon_start(bwmon, data);
+
+	return 0;
+}
+
+static int bwmon_remove(struct platform_device *pdev)
+{
+	struct icc_bwmon *bwmon = platform_get_drvdata(pdev);
+
+	bwmon_disable(bwmon);
+
+	return 0;
+}
+
+/* BWMON v4 */
+static const struct icc_bwmon_data msm8998_bwmon_data = {
+	.sample_ms = 4,
+	.default_highbw_kbps = 4800 * 1024, /* 4.8 GBps */
+	.default_medbw_kbps = 512 * 1024, /* 512 MBps */
+	.default_lowbw_kbps = 0,
+	.zone1_thres_count = 16,
+	.zone3_thres_count = 1,
+};
+
+static const struct of_device_id bwmon_of_match[] = {
+	{ .compatible = "qcom,msm8998-bwmon", .data = &msm8998_bwmon_data },
+	{}
+};
+MODULE_DEVICE_TABLE(of, bwmon_of_match);
+
+static struct platform_driver bwmon_driver = {
+	.probe = bwmon_probe,
+	.remove = bwmon_remove,
+	.driver = {
+		.name = "qcom-bwmon",
+		.of_match_table = bwmon_of_match,
+	},
+};
+module_platform_driver(bwmon_driver);
+
+MODULE_AUTHOR("Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>");
+MODULE_DESCRIPTION("QCOM BWMON driver");
+MODULE_LICENSE("GPL");
drivers/soc/qcom/llcc-qcom.c (+1 -1)
···
  * llcc_slice_getd - get llcc slice descriptor
  * @uid: usecase_id for the client
  *
- * A pointer to llcc slice descriptor will be returned on success and
+ * A pointer to llcc slice descriptor will be returned on success
  * and error pointer is returned on failure
  */
 struct llcc_slice_desc *llcc_slice_getd(u32 uid)
drivers/soc/qcom/mdt_loader.c (+3 -1)
···
  * qcom_mdt_read_metadata() - read header and metadata from mdt or mbn
  * @fw: firmware of mdt header or mbn
  * @data_len: length of the read metadata blob
+ * @fw_name: name of the firmware, for construction of segment file names
+ * @dev: device handle to associate resources with
  *
  * The mechanism that performs the authentication of the loading firmware
  * expects an ELF header directly followed by the segment of hashes, with no
···
  * qcom_mdt_pas_init() - initialize PAS region for firmware loading
  * @dev: device handle to associate resources with
  * @fw: firmware object for the mdt file
- * @firmware: name of the firmware, for construction of segment file names
+ * @fw_name: name of the firmware, for construction of segment file names
  * @pas_id: PAS identifier
  * @mem_phys: physical address of allocated memory region
  * @ctx: PAS metadata context, to be released by caller
drivers/soc/qcom/ocmem.c (+3)
···
 	devnode = of_parse_phandle(dev->of_node, "sram", 0);
 	if (!devnode || !devnode->parent) {
 		dev_err(dev, "Cannot look up sram phandle\n");
+		of_node_put(devnode);
 		return ERR_PTR(-ENODEV);
 	}
 
 	pdev = of_find_device_by_node(devnode->parent);
 	if (!pdev) {
 		dev_err(dev, "Cannot find device node %s\n", devnode->name);
+		of_node_put(devnode);
 		return ERR_PTR(-EPROBE_DEFER);
 	}
+	of_node_put(devnode);
 
 	ocmem = platform_get_drvdata(pdev);
 	if (!ocmem) {
drivers/soc/qcom/qcom_aoss.c (+3 -1)
···
 			continue;
 		ret = qmp_cooling_device_add(qmp, &qmp->cooling_devs[count++],
 					     child);
-		if (ret)
+		if (ret) {
+			of_node_put(child);
 			goto unroll;
+		}
 	}
 
 	if (!count)
drivers/soc/qcom/rpmhpd.c (+2 -2)
···
 /**
  * struct rpmhpd - top level RPMh power domain resource data structure
  * @dev:		rpmh power domain controller device
- * @pd:			generic_pm_domain corrresponding to the power domain
- * @parent:		generic_pm_domain corrresponding to the parent's power domain
+ * @pd:			generic_pm_domain corresponding to the power domain
+ * @parent:		generic_pm_domain corresponding to the parent's power domain
  * @peer:		A peer power domain in case Active only Voting is
  *			supported
  * @active_only:	True if it represents an Active only peer
+1
drivers/soc/qcom/rpmpd.c
···
 static const struct of_device_id rpmpd_match_table[] = {
 	{ .compatible = "qcom,mdm9607-rpmpd", .data = &mdm9607_desc },
 	{ .compatible = "qcom,msm8226-rpmpd", .data = &msm8226_desc },
+	{ .compatible = "qcom,msm8909-rpmpd", .data = &msm8916_desc },
 	{ .compatible = "qcom,msm8916-rpmpd", .data = &msm8916_desc },
 	{ .compatible = "qcom,msm8939-rpmpd", .data = &msm8939_desc },
 	{ .compatible = "qcom,msm8953-rpmpd", .data = &msm8953_desc },
+1
drivers/soc/qcom/smd-rpm.c
···
 	{ .compatible = "qcom,rpm-apq8084" },
 	{ .compatible = "qcom,rpm-ipq6018" },
 	{ .compatible = "qcom,rpm-msm8226" },
+	{ .compatible = "qcom,rpm-msm8909" },
 	{ .compatible = "qcom,rpm-msm8916" },
 	{ .compatible = "qcom,rpm-msm8936" },
 	{ .compatible = "qcom,rpm-msm8953" },
+3
drivers/soc/qcom/smp2p.c
···
 * @out: pointer to the outbound smem item
 * @smem_items: ids of the two smem items
 * @valid_entries: already scanned inbound entries
+ * @ssr_ack_enabled: SMP2P_FEATURE_SSR_ACK feature is supported and was enabled
+ * @ssr_ack: current cached state of the local ack bit
+ * @negotiation_done: whether negotiating finished
 * @local_pid: processor id of the inbound edge
 * @remote_pid: processor id of the outbound edge
 * @ipc_regmap: regmap for the outbound ipc
+3 -1
drivers/soc/qcom/socinfo.c
···
 	{ 455, "QRB5165" },
 	{ 457, "SM8450" },
 	{ 459, "SM7225" },
-	{ 460, "SA8540P" },
+	{ 460, "SA8295P" },
+	{ 461, "SA8540P" },
 	{ 480, "SM8450" },
 	{ 482, "SM8450" },
 	{ 487, "SC7280" },
+	{ 495, "SC7180P" },
 };

 static const char *socinfo_machine(struct device *dev, unsigned int id)
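The socinfo fix above matters because the table is searched by numeric id, so a wrong id silently mislabels the SoC. A self-contained sketch of the lookup over the corrected entries (`soc_name` is an illustrative helper, not the kernel's socinfo_machine(); only the ids visible in the hunk are reproduced):

```c
#include <stddef.h>
#include <string.h>

struct soc_id { unsigned int id; const char *name; };

/* Subset of the corrected table: 460 now maps to SA8295P, SA8540P moves
 * to its real id 461, and 495 (SC7180P) is newly added. */
static const struct soc_id soc_ids[] = {
	{ 460, "SA8295P" },
	{ 461, "SA8540P" },
	{ 495, "SC7180P" },
};

/* Illustrative linear-scan lookup, returning NULL for unknown ids. */
static const char *soc_name(unsigned int id)
{
	for (size_t i = 0; i < sizeof(soc_ids) / sizeof(soc_ids[0]); i++)
		if (soc_ids[i].id == id)
			return soc_ids[i].name;
	return NULL;
}
```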
+14
drivers/soc/qcom/spm.c
···
 	[SPM_REG_SEQ_ENTRY]	= 0x400,
 };

+/* SPM register data for 8909 */
+static const struct spm_reg_data spm_reg_8909_cpu = {
+	.reg_offset = spm_reg_offset_v3_0,
+	.spm_cfg = 0x1,
+	.spm_dly = 0x3C102800,
+	.seq = { 0x60, 0x03, 0x60, 0x0B, 0x0F, 0x20, 0x10, 0x80, 0x30, 0x90,
+		0x5B, 0x60, 0x03, 0x60, 0x76, 0x76, 0x0B, 0x94, 0x5B, 0x80,
+		0x10, 0x26, 0x30, 0x0F },
+	.start_index[PM_SLEEP_MODE_STBY] = 0,
+	.start_index[PM_SLEEP_MODE_SPC] = 5,
+};
+
 /* SPM register data for 8916 */
 static const struct spm_reg_data spm_reg_8916_cpu = {
 	.reg_offset = spm_reg_offset_v3_0,
···
 	  .data = &spm_reg_660_silver_l2 },
 	{ .compatible = "qcom,msm8226-saw2-v2.1-cpu",
 	  .data = &spm_reg_8226_cpu },
+	{ .compatible = "qcom,msm8909-saw2-v3.0-cpu",
+	  .data = &spm_reg_8909_cpu },
 	{ .compatible = "qcom,msm8916-saw2-v3.0-cpu",
 	  .data = &spm_reg_8916_cpu },
 	{ .compatible = "qcom,msm8974-saw2-v2.1-cpu",
+5 -5
drivers/soc/renesas/r8a779a0-sysc.c
···
 	{ "a2cv6",	R8A779A0_PD_A2CV6,	R8A779A0_PD_A3IR },
 	{ "a2cn2",	R8A779A0_PD_A2CN2,	R8A779A0_PD_A3IR },
 	{ "a2imp23",	R8A779A0_PD_A2IMP23,	R8A779A0_PD_A3IR },
-	{ "a2dp1",	R8A779A0_PD_A2DP0,	R8A779A0_PD_A3IR },
-	{ "a2cv2",	R8A779A0_PD_A2CV0,	R8A779A0_PD_A3IR },
-	{ "a2cv3",	R8A779A0_PD_A2CV1,	R8A779A0_PD_A3IR },
-	{ "a2cv5",	R8A779A0_PD_A2CV4,	R8A779A0_PD_A3IR },
-	{ "a2cv7",	R8A779A0_PD_A2CV6,	R8A779A0_PD_A3IR },
+	{ "a2dp1",	R8A779A0_PD_A2DP1,	R8A779A0_PD_A3IR },
+	{ "a2cv2",	R8A779A0_PD_A2CV2,	R8A779A0_PD_A3IR },
+	{ "a2cv3",	R8A779A0_PD_A2CV3,	R8A779A0_PD_A3IR },
+	{ "a2cv5",	R8A779A0_PD_A2CV5,	R8A779A0_PD_A3IR },
+	{ "a2cv7",	R8A779A0_PD_A2CV7,	R8A779A0_PD_A3IR },
 	{ "a2cn1",	R8A779A0_PD_A2CN1,	R8A779A0_PD_A3IR },
 	{ "a1cnn0",	R8A779A0_PD_A1CNN0,	R8A779A0_PD_A2CN0 },
 	{ "a1cnn2",	R8A779A0_PD_A1CNN2,	R8A779A0_PD_A2CN2 },
+2 -2
drivers/soc/renesas/rcar-gen4-sysc.h
···
 struct rcar_gen4_sysc_area {
 	const char *name;
 	u8 pdr;			/* PDRn */
-	int parent;		/* -1 if none */
-	unsigned int flags;	/* See PD_* */
+	s8 parent;		/* -1 if none */
+	u8 flags;		/* See PD_* */
 };

 /*
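This header change (and the matching one in rcar-sysc.h below) narrows `parent` and `flags` from full ints to `s8`/`u8`, which is safe because `parent` is a small index (-1 for none) and `flags` holds a small PD_* mask; it also shrinks every static area descriptor. A sketch of the packing effect under typical alignment rules (`area_old`/`area_new` are illustrative names, not the kernel structs):

```c
#include <stdint.h>

/* area_old mirrors the pre-patch field types, area_new the post-patch
 * s8/u8 ones; the pointer member forces the struct's overall alignment,
 * so the narrower trailing fields avoid padding after the int members. */
struct area_old {
	const char *name;
	uint8_t pdr;		/* PDRn */
	int parent;		/* -1 if none */
	unsigned int flags;	/* See PD_* */
};

struct area_new {
	const char *name;
	uint8_t pdr;		/* PDRn */
	int8_t parent;		/* -1 if none */
	uint8_t flags;		/* See PD_* */
};
```

On both ILP32 and LP64 the new layout is strictly smaller, and `int8_t` still represents the -1 "no parent" sentinel exactly.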
+2 -2
drivers/soc/renesas/rcar-sysc.h
···
 	u16 chan_offs;		/* Offset of PWRSR register for this area */
 	u8 chan_bit;		/* Bit in PWR* (except for PWRUP in PWRSR) */
 	u8 isr_bit;		/* Bit in SYSCI*R */
-	int parent;		/* -1 if none */
-	unsigned int flags;	/* See PD_* */
+	s8 parent;		/* -1 if none */
+	u8 flags;		/* See PD_* */
 };

+1
drivers/soc/sunxi/Kconfig
···
 config SUNXI_MBUS
 	bool
 	default ARCH_SUNXI
+	depends on ARM || ARM64
 	help
 	  Say y to enable the fixups needed to support the Allwinner
 	  MBUS DMA quirks.
+1
drivers/soc/ti/pruss.c
···
 	{ .compatible = "ti,am654-icssg", .data = &am65x_j721e_pruss_data, },
 	{ .compatible = "ti,j721e-icssg", .data = &am65x_j721e_pruss_data, },
 	{ .compatible = "ti,am642-icssg", .data = &am65x_j721e_pruss_data, },
+	{ .compatible = "ti,am625-pruss", .data = &am65x_j721e_pruss_data, },
 	{},
 };
 MODULE_DEVICE_TABLE(of, pruss_of_match);
+1 -1
drivers/soc/ti/wkup_m3_ipc.c
···
 				      &m3_ipc->sd_fw_name);
 	if (ret) {
 		dev_dbg(dev, "Voltage scaling data blob not provided from DT.\n");
-	};
+	}

 	/*
 	 * Wait for firmware loading completion in a thread so we
+1 -1
drivers/spi/Kconfig
···
 config SPI_BCM63XX_HSSPI
 	tristate "Broadcom BCM63XX HS SPI controller driver"
-	depends on BCM63XX || BMIPS_GENERIC || ARCH_BCM_63XX || COMPILE_TEST
+	depends on BCM63XX || BMIPS_GENERIC || ARCH_BCMBCA || COMPILE_TEST
 	help
 	  This enables support for the High Speed SPI controller present on
 	  newer Broadcom BCM63XX SoCs.
+2 -2
drivers/tty/serial/Kconfig
···
 config SERIAL_BCM63XX
 	tristate "Broadcom BCM63xx/BCM33xx UART support"
 	select SERIAL_CORE
-	depends on ARCH_BCM4908 || ARCH_BCM_63XX || BCM63XX || BMIPS_GENERIC || COMPILE_TEST
-	default ARCH_BCM4908 || ARCH_BCM_63XX || BCM63XX || BMIPS_GENERIC
+	depends on ARCH_BCM4908 || ARCH_BCMBCA || BCM63XX || BMIPS_GENERIC || COMPILE_TEST
+	default ARCH_BCM4908 || ARCH_BCMBCA || BCM63XX || BMIPS_GENERIC
 	help
 	  This enables the driver for the onchip UART core found on
 	  the following chipsets:
+101
include/dt-bindings/clock/tegra234-clock.h
···
 #define TEGRA234_CLK_PEX1_C5_CORE		225U
 /** @brief PLL controlled by CLK_RST_CONTROLLER_PLLC4_BASE */
 #define TEGRA234_CLK_PLLC4			237U
+/** @brief RX clock recovered from MGBE0 lane input */
+#define TEGRA234_CLK_MGBE0_RX_INPUT		248U
+/** @brief RX clock recovered from MGBE1 lane input */
+#define TEGRA234_CLK_MGBE1_RX_INPUT		249U
+/** @brief RX clock recovered from MGBE2 lane input */
+#define TEGRA234_CLK_MGBE2_RX_INPUT		250U
+/** @brief RX clock recovered from MGBE3 lane input */
+#define TEGRA234_CLK_MGBE3_RX_INPUT		251U
 /** @brief 32K input clock provided by PMIC */
 #define TEGRA234_CLK_CLK_32K			289U
+/** @brief Monitored branch of MBGE0 RX input clock */
+#define TEGRA234_CLK_MGBE0_RX_INPUT_M		357U
+/** @brief Monitored branch of MBGE1 RX input clock */
+#define TEGRA234_CLK_MGBE1_RX_INPUT_M		358U
+/** @brief Monitored branch of MBGE2 RX input clock */
+#define TEGRA234_CLK_MGBE2_RX_INPUT_M		359U
+/** @brief Monitored branch of MBGE3 RX input clock */
+#define TEGRA234_CLK_MGBE3_RX_INPUT_M		360U
+/** @brief Monitored branch of MGBE0 RX PCS mux output */
+#define TEGRA234_CLK_MGBE0_RX_PCS_M		361U
+/** @brief Monitored branch of MGBE1 RX PCS mux output */
+#define TEGRA234_CLK_MGBE1_RX_PCS_M		362U
+/** @brief Monitored branch of MGBE2 RX PCS mux output */
+#define TEGRA234_CLK_MGBE2_RX_PCS_M		363U
+/** @brief Monitored branch of MGBE3 RX PCS mux output */
+#define TEGRA234_CLK_MGBE3_RX_PCS_M		364U
+/** @brief RX PCS clock recovered from MGBE0 lane input */
+#define TEGRA234_CLK_MGBE0_RX_PCS_INPUT		369U
+/** @brief RX PCS clock recovered from MGBE1 lane input */
+#define TEGRA234_CLK_MGBE1_RX_PCS_INPUT		370U
+/** @brief RX PCS clock recovered from MGBE2 lane input */
+#define TEGRA234_CLK_MGBE2_RX_PCS_INPUT		371U
+/** @brief RX PCS clock recovered from MGBE3 lane input */
+#define TEGRA234_CLK_MGBE3_RX_PCS_INPUT		372U
+/** @brief output of mux controlled by GBE_UPHY_MGBE0_RX_PCS_CLK_SRC_SEL */
+#define TEGRA234_CLK_MGBE0_RX_PCS		373U
+/** @brief GBE_UPHY_MGBE0_TX_CLK divider gated output */
+#define TEGRA234_CLK_MGBE0_TX			374U
+/** @brief GBE_UPHY_MGBE0_TX_PCS_CLK divider gated output */
+#define TEGRA234_CLK_MGBE0_TX_PCS		375U
+/** @brief GBE_UPHY_MGBE0_MAC_CLK divider output */
+#define TEGRA234_CLK_MGBE0_MAC_DIVIDER		376U
+/** @brief GBE_UPHY_MGBE0_MAC_CLK gate output */
+#define TEGRA234_CLK_MGBE0_MAC			377U
+/** @brief GBE_UPHY_MGBE0_MACSEC_CLK gate output */
+#define TEGRA234_CLK_MGBE0_MACSEC		378U
+/** @brief GBE_UPHY_MGBE0_EEE_PCS_CLK gate output */
+#define TEGRA234_CLK_MGBE0_EEE_PCS		379U
+/** @brief GBE_UPHY_MGBE0_APP_CLK gate output */
+#define TEGRA234_CLK_MGBE0_APP			380U
+/** @brief GBE_UPHY_MGBE0_PTP_REF_CLK divider gated output */
+#define TEGRA234_CLK_MGBE0_PTP_REF		381U
+/** @brief output of mux controlled by GBE_UPHY_MGBE1_RX_PCS_CLK_SRC_SEL */
+#define TEGRA234_CLK_MGBE1_RX_PCS		382U
+/** @brief GBE_UPHY_MGBE1_TX_CLK divider gated output */
+#define TEGRA234_CLK_MGBE1_TX			383U
+/** @brief GBE_UPHY_MGBE1_TX_PCS_CLK divider gated output */
+#define TEGRA234_CLK_MGBE1_TX_PCS		384U
+/** @brief GBE_UPHY_MGBE1_MAC_CLK divider output */
+#define TEGRA234_CLK_MGBE1_MAC_DIVIDER		385U
+/** @brief GBE_UPHY_MGBE1_MAC_CLK gate output */
+#define TEGRA234_CLK_MGBE1_MAC			386U
+/** @brief GBE_UPHY_MGBE1_EEE_PCS_CLK gate output */
+#define TEGRA234_CLK_MGBE1_EEE_PCS		388U
+/** @brief GBE_UPHY_MGBE1_APP_CLK gate output */
+#define TEGRA234_CLK_MGBE1_APP			389U
+/** @brief GBE_UPHY_MGBE1_PTP_REF_CLK divider gated output */
+#define TEGRA234_CLK_MGBE1_PTP_REF		390U
+/** @brief output of mux controlled by GBE_UPHY_MGBE2_RX_PCS_CLK_SRC_SEL */
+#define TEGRA234_CLK_MGBE2_RX_PCS		391U
+/** @brief GBE_UPHY_MGBE2_TX_CLK divider gated output */
+#define TEGRA234_CLK_MGBE2_TX			392U
+/** @brief GBE_UPHY_MGBE2_TX_PCS_CLK divider gated output */
+#define TEGRA234_CLK_MGBE2_TX_PCS		393U
+/** @brief GBE_UPHY_MGBE2_MAC_CLK divider output */
+#define TEGRA234_CLK_MGBE2_MAC_DIVIDER		394U
+/** @brief GBE_UPHY_MGBE2_MAC_CLK gate output */
+#define TEGRA234_CLK_MGBE2_MAC			395U
+/** @brief GBE_UPHY_MGBE2_EEE_PCS_CLK gate output */
+#define TEGRA234_CLK_MGBE2_EEE_PCS		397U
+/** @brief GBE_UPHY_MGBE2_APP_CLK gate output */
+#define TEGRA234_CLK_MGBE2_APP			398U
+/** @brief GBE_UPHY_MGBE2_PTP_REF_CLK divider gated output */
+#define TEGRA234_CLK_MGBE2_PTP_REF		399U
+/** @brief output of mux controlled by GBE_UPHY_MGBE3_RX_PCS_CLK_SRC_SEL */
+#define TEGRA234_CLK_MGBE3_RX_PCS		400U
+/** @brief GBE_UPHY_MGBE3_TX_CLK divider gated output */
+#define TEGRA234_CLK_MGBE3_TX			401U
+/** @brief GBE_UPHY_MGBE3_TX_PCS_CLK divider gated output */
+#define TEGRA234_CLK_MGBE3_TX_PCS		402U
+/** @brief GBE_UPHY_MGBE3_MAC_CLK divider output */
+#define TEGRA234_CLK_MGBE3_MAC_DIVIDER		403U
+/** @brief GBE_UPHY_MGBE3_MAC_CLK gate output */
+#define TEGRA234_CLK_MGBE3_MAC			404U
+/** @brief GBE_UPHY_MGBE3_MACSEC_CLK gate output */
+#define TEGRA234_CLK_MGBE3_MACSEC		405U
+/** @brief GBE_UPHY_MGBE3_EEE_PCS_CLK gate output */
+#define TEGRA234_CLK_MGBE3_EEE_PCS		406U
+/** @brief GBE_UPHY_MGBE3_APP_CLK gate output */
+#define TEGRA234_CLK_MGBE3_APP			407U
+/** @brief GBE_UPHY_MGBE3_PTP_REF_CLK divider gated output */
+#define TEGRA234_CLK_MGBE3_PTP_REF		408U
 /** @brief CLK_RST_CONTROLLER_AZA2XBITCLK_OUT_SWITCH_DIVIDER switch divider output (aza_2xbitclk) */
 #define TEGRA234_CLK_AZA_2XBIT			457U
 /** @brief aza_2xbitclk / 2 (aza_bitclk) */
 #define TEGRA234_CLK_AZA_BIT			458U
+
 #endif
+21
include/dt-bindings/memory/tegra234-mc.h
···
 /* NISO0 stream IDs */
 #define TEGRA234_SID_APE	0x02
 #define TEGRA234_SID_HDA	0x03
+#define TEGRA234_SID_GPCDMA	0x04
+#define TEGRA234_SID_MGBE	0x06
 #define TEGRA234_SID_PCIE0	0x12
 #define TEGRA234_SID_PCIE4	0x13
 #define TEGRA234_SID_PCIE5	0x14
 #define TEGRA234_SID_PCIE6	0x15
 #define TEGRA234_SID_PCIE9	0x1f
+#define TEGRA234_SID_MGBE_VF1	0x49
+#define TEGRA234_SID_MGBE_VF2	0x4a
+#define TEGRA234_SID_MGBE_VF3	0x4b

 /* NISO1 stream IDs */
 #define TEGRA234_SID_SDMMC4	0x02
···
 #define TEGRA234_MEMORY_CLIENT_PCIE10AR1 0x48
 /* PCIE7r1 read clients */
 #define TEGRA234_MEMORY_CLIENT_PCIE7AR1 0x49
+/* MGBE0 read client */
+#define TEGRA234_MEMORY_CLIENT_MGBEARD 0x58
+/* MGBEB read client */
+#define TEGRA234_MEMORY_CLIENT_MGBEBRD 0x59
+/* MGBEC read client */
+#define TEGRA234_MEMORY_CLIENT_MGBECRD 0x5a
+/* MGBED read client */
+#define TEGRA234_MEMORY_CLIENT_MGBEDRD 0x5b
+/* MGBE0 write client */
+#define TEGRA234_MEMORY_CLIENT_MGBEAWR 0x5c
+/* MGBEB write client */
+#define TEGRA234_MEMORY_CLIENT_MGBEBWR 0x5f
+/* MGBEC write client */
+#define TEGRA234_MEMORY_CLIENT_MGBECWR 0x61
 /* sdmmcd memory read client */
 #define TEGRA234_MEMORY_CLIENT_SDMMCRAB 0x63
+/* MGBED write client */
+#define TEGRA234_MEMORY_CLIENT_MGBEDWR 0x65
 /* sdmmcd memory write client */
 #define TEGRA234_MEMORY_CLIENT_SDMMCWAB 0x67
 /* BPMP read client */
+16
include/dt-bindings/power/mt6795-power.h
···
+/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) */
+#ifndef _DT_BINDINGS_POWER_MT6795_POWER_H
+#define _DT_BINDINGS_POWER_MT6795_POWER_H
+
+#define MT6795_POWER_DOMAIN_MM		0
+#define MT6795_POWER_DOMAIN_VDEC	1
+#define MT6795_POWER_DOMAIN_VENC	2
+#define MT6795_POWER_DOMAIN_ISP		3
+#define MT6795_POWER_DOMAIN_MJC		4
+#define MT6795_POWER_DOMAIN_AUDIO	5
+#define MT6795_POWER_DOMAIN_MFG_ASYNC	6
+#define MT6795_POWER_DOMAIN_MFG_2D	7
+#define MT6795_POWER_DOMAIN_MFG		8
+#define MT6795_POWER_DOMAIN_MODEM	9
+
+#endif /* _DT_BINDINGS_POWER_MT6795_POWER_H */
+7
include/dt-bindings/power/qcom-rpmpd.h
···
 #define MSM8916_VDDMX		3
 #define MSM8916_VDDMX_AO	4

+/* MSM8909 Power Domain Indexes */
+#define MSM8909_VDDCX		MSM8916_VDDCX
+#define MSM8909_VDDCX_AO	MSM8916_VDDCX_AO
+#define MSM8909_VDDCX_VFC	MSM8916_VDDCX_VFC
+#define MSM8909_VDDMX		MSM8916_VDDMX
+#define MSM8909_VDDMX_AO	MSM8916_VDDMX_AO
+
 /* MSM8953 Power Domain Indexes */
 #define MSM8953_VDDMD		0
 #define MSM8953_VDDMD_AO	1
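The MSM8909 constants above are pure aliases of the MSM8916 indexes, so device trees can use SoC-specific names while the driver (which matches MSM8909 to the msm8916 descriptor) sees identical values. A standalone sketch of the aliasing, reproducing only the two index values visible in this hunk (treat the rest as assumptions):

```c
/* Illustrative copies of the two MSM8916 indexes shown in the hunk. */
#define MSM8916_VDDMX		3
#define MSM8916_VDDMX_AO	4

/* MSM8909 reuses the MSM8916 RPM power-domain indexes via aliases, so
 * both spellings resolve to the same numeric domain index. */
#define MSM8909_VDDMX		MSM8916_VDDMX
#define MSM8909_VDDMX_AO	MSM8916_VDDMX_AO
```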
+1
include/dt-bindings/power/tegra234-powergate.h
···
 #define TEGRA234_POWER_DOMAIN_MGBEA	17U
 #define TEGRA234_POWER_DOMAIN_MGBEB	18U
 #define TEGRA234_POWER_DOMAIN_MGBEC	19U
+#define TEGRA234_POWER_DOMAIN_MGBED	20U

 #endif
+9
include/dt-bindings/reset/tegra234-reset.h
···
 #define TEGRA234_RESET_PEX1_COMMON_APB	13U
 #define TEGRA234_RESET_PEX2_CORE_7	14U
 #define TEGRA234_RESET_PEX2_CORE_7_APB	15U
+#define TEGRA234_RESET_GPCDMA		18U
 #define TEGRA234_RESET_HDA		20U
 #define TEGRA234_RESET_HDACODEC		21U
 #define TEGRA234_RESET_I2C1		24U
···
 #define TEGRA234_RESET_I2C7		33U
 #define TEGRA234_RESET_I2C8		34U
 #define TEGRA234_RESET_I2C9		35U
+#define TEGRA234_RESET_MGBE0_PCS	45U
+#define TEGRA234_RESET_MGBE0_MAC	46U
+#define TEGRA234_RESET_MGBE1_PCS	49U
+#define TEGRA234_RESET_MGBE1_MAC	50U
+#define TEGRA234_RESET_MGBE2_PCS	53U
+#define TEGRA234_RESET_MGBE2_MAC	54U
 #define TEGRA234_RESET_PEX2_CORE_10	56U
 #define TEGRA234_RESET_PEX2_CORE_10_APB	57U
 #define TEGRA234_RESET_PEX2_COMMON_APB	58U
···
 #define TEGRA234_RESET_QSPI0		76U
 #define TEGRA234_RESET_QSPI1		77U
 #define TEGRA234_RESET_SDMMC4		85U
+#define TEGRA234_RESET_MGBE3_PCS	87U
+#define TEGRA234_RESET_MGBE3_MAC	88U
 #define TEGRA234_RESET_UARTA		100U
 #define TEGRA234_RESET_PEX0_CORE_0	116U
 #define TEGRA234_RESET_PEX0_CORE_1	117U
+1
include/linux/mfd/bcm2835-pm.h
···
 	struct device *dev;
 	void __iomem *base;
 	void __iomem *asb;
+	void __iomem *rpivid_asb;
 };

 #endif /* BCM2835_MFD_PM_H */
+134
include/linux/scmi_protocol.h
···
 };

 /**
+ * struct scmi_powercap_info - Describe one available Powercap domain
+ *
+ * @id: Domain ID as advertised by the platform.
+ * @notify_powercap_cap_change: CAP change notification support.
+ * @notify_powercap_measurement_change: MEASUREMENTS change notifications
+ *					support.
+ * @async_powercap_cap_set: Asynchronous CAP set support.
+ * @powercap_cap_config: CAP configuration support.
+ * @powercap_monitoring: Monitoring (measurements) support.
+ * @powercap_pai_config: PAI configuration support.
+ * @powercap_scale_mw: Domain reports power data in milliwatt units.
+ * @powercap_scale_uw: Domain reports power data in microwatt units.
+ *		       Note that, when both @powercap_scale_mw and
+ *		       @powercap_scale_uw are set to false, the domain
+ *		       reports power data on an abstract linear scale.
+ * @name: name assigned to the Powercap Domain by platform.
+ * @min_pai: Minimum configurable PAI.
+ * @max_pai: Maximum configurable PAI.
+ * @pai_step: Step size between two consecutive PAI values.
+ * @min_power_cap: Minimum configurable CAP.
+ * @max_power_cap: Maximum configurable CAP.
+ * @power_cap_step: Step size between two consecutive CAP values.
+ * @sustainable_power: Maximum sustainable power consumption for this domain
+ *		       under normal conditions.
+ * @accuracy: The accuracy with which the power is measured and reported in
+ *	      integral multiples of 0.001 percent.
+ * @parent_id: Identifier of the containing parent power capping domain, or the
+ *	       value 0xFFFFFFFF if this powercap domain is a root domain not
+ *	       contained in any other domain.
+ */
+struct scmi_powercap_info {
+	unsigned int id;
+	bool notify_powercap_cap_change;
+	bool notify_powercap_measurement_change;
+	bool async_powercap_cap_set;
+	bool powercap_cap_config;
+	bool powercap_monitoring;
+	bool powercap_pai_config;
+	bool powercap_scale_mw;
+	bool powercap_scale_uw;
+	bool fastchannels;
+	char name[SCMI_MAX_STR_SIZE];
+	unsigned int min_pai;
+	unsigned int max_pai;
+	unsigned int pai_step;
+	unsigned int min_power_cap;
+	unsigned int max_power_cap;
+	unsigned int power_cap_step;
+	unsigned int sustainable_power;
+	unsigned int accuracy;
+#define SCMI_POWERCAP_ROOT_ZONE_ID	0xFFFFFFFFUL
+	unsigned int parent_id;
+	struct scmi_fc_info *fc_info;
+};
+
+/**
+ * struct scmi_powercap_proto_ops - represents the various operations provided
+ * by SCMI Powercap Protocol
+ *
+ * @num_domains_get: get the count of powercap domains provided by SCMI.
+ * @info_get: get the information for the specified domain.
+ * @cap_get: get the current CAP value for the specified domain.
+ * @cap_set: set the CAP value for the specified domain to the provided value;
+ *	     if the domain supports setting the CAP with an asynchronous command
+ *	     this request will finally trigger an asynchronous transfer, but, if
+ *	     @ignore_dresp here is set to true, this call will anyway return
+ *	     immediately without waiting for the related delayed response.
+ * @pai_get: get the current PAI value for the specified domain.
+ * @pai_set: set the PAI value for the specified domain to the provided value.
+ * @measurements_get: retrieve the current average power measurements for the
+ *		      specified domain and the related PAI upon which is
+ *		      calculated.
+ * @measurements_threshold_set: set the desired low and high power thresholds
+ *				to be used when registering for notification
+ *				of type POWERCAP_MEASUREMENTS_NOTIFY with this
+ *				powercap domain.
+ *				Note that this must be called at least once
+ *				before registering any callback with the usual
+ *				@scmi_notify_ops; moreover, in case this method
+ *				is called with measurement notifications already
+ *				enabled it will also trigger, transparently, a
+ *				proper update of the power thresholds configured
+ *				in the SCMI backend server.
+ * @measurements_threshold_get: get the currently configured low and high power
+ *				thresholds used when registering callbacks for
+ *				notification POWERCAP_MEASUREMENTS_NOTIFY.
+ */
+struct scmi_powercap_proto_ops {
+	int (*num_domains_get)(const struct scmi_protocol_handle *ph);
+	const struct scmi_powercap_info __must_check *(*info_get)
+		(const struct scmi_protocol_handle *ph, u32 domain_id);
+	int (*cap_get)(const struct scmi_protocol_handle *ph, u32 domain_id,
+		       u32 *power_cap);
+	int (*cap_set)(const struct scmi_protocol_handle *ph, u32 domain_id,
+		       u32 power_cap, bool ignore_dresp);
+	int (*pai_get)(const struct scmi_protocol_handle *ph, u32 domain_id,
+		       u32 *pai);
+	int (*pai_set)(const struct scmi_protocol_handle *ph, u32 domain_id,
+		       u32 pai);
+	int (*measurements_get)(const struct scmi_protocol_handle *ph,
+				u32 domain_id, u32 *average_power, u32 *pai);
+	int (*measurements_threshold_set)(const struct scmi_protocol_handle *ph,
+					  u32 domain_id, u32 power_thresh_low,
+					  u32 power_thresh_high);
+	int (*measurements_threshold_get)(const struct scmi_protocol_handle *ph,
+					  u32 domain_id, u32 *power_thresh_low,
+					  u32 *power_thresh_high);
+};
+
+/**
  * struct scmi_notify_ops - represents notifications' operations provided by
  * SCMI core
  *
 * @devm_event_notifier_register: Managed registration of a notifier_block for
···
 *
 * @dev: pointer to the SCMI device
 * @version: pointer to the structure containing SCMI version information
+ * @devm_protocol_acquire: devres managed method to get hold of a protocol,
+ *			   causing its initialization and related resource
+ *			   accounting
 * @devm_protocol_get: devres managed method to acquire a protocol and get specific
 *		       operations and a dedicated protocol handler
 * @devm_protocol_put: devres managed method to release a protocol
···
 	struct device *dev;
 	struct scmi_revision_info *version;

+	int __must_check (*devm_protocol_acquire)(struct scmi_device *sdev,
+						  u8 proto);
 	const void __must_check *
 		(*devm_protocol_get)(struct scmi_device *sdev, u8 proto,
 				     struct scmi_protocol_handle **ph);
···
 	SCMI_PROTOCOL_SENSOR = 0x15,
 	SCMI_PROTOCOL_RESET = 0x16,
 	SCMI_PROTOCOL_VOLTAGE = 0x17,
+	SCMI_PROTOCOL_POWERCAP = 0x18,
 };

 enum scmi_system_events {
···
 	SCMI_EVENT_RESET_ISSUED = 0x0,
 	SCMI_EVENT_BASE_ERROR_EVENT = 0x0,
 	SCMI_EVENT_SYSTEM_POWER_STATE_NOTIFIER = 0x0,
+	SCMI_EVENT_POWERCAP_CAP_CHANGED = 0x0,
+	SCMI_EVENT_POWERCAP_MEASUREMENTS_CHANGED = 0x1,
 };

 struct scmi_power_state_changed_report {
···
 struct scmi_system_power_state_notifier_report {
 	ktime_t timestamp;
 	unsigned int agent_id;
+#define SCMI_SYSPOWER_IS_REQUEST_GRACEFUL(flags)	((flags) & BIT(0))
 	unsigned int flags;
 	unsigned int system_state;
+	unsigned int timeout;
 };

 struct scmi_perf_limits_report {
···
 	unsigned long long reports[];
 };

+struct scmi_powercap_cap_changed_report {
+	ktime_t timestamp;
+	unsigned int agent_id;
+	unsigned int domain_id;
+	unsigned int power_cap;
+	unsigned int pai;
+};
+
+struct scmi_powercap_meas_changed_report {
+	ktime_t timestamp;
+	unsigned int agent_id;
+	unsigned int domain_id;
+	unsigned int power;
+};
 #endif /* _LINUX_SCMI_PROTOCOL_H */
+27
include/linux/soc/mediatek/mtk-mutex.h
···
 struct device;
 struct mtk_mutex;

+enum mtk_mutex_mod_index {
+	/* MDP table index */
+	MUTEX_MOD_IDX_MDP_RDMA0,
+	MUTEX_MOD_IDX_MDP_RSZ0,
+	MUTEX_MOD_IDX_MDP_RSZ1,
+	MUTEX_MOD_IDX_MDP_TDSHP0,
+	MUTEX_MOD_IDX_MDP_WROT0,
+	MUTEX_MOD_IDX_MDP_WDMA,
+	MUTEX_MOD_IDX_MDP_AAL0,
+	MUTEX_MOD_IDX_MDP_CCORR0,
+
+	MUTEX_MOD_IDX_MAX	/* ALWAYS keep at the end */
+};
+
+enum mtk_mutex_sof_index {
+	MUTEX_SOF_IDX_SINGLE_MODE,
+
+	MUTEX_SOF_IDX_MAX	/* ALWAYS keep at the end */
+};
+
 struct mtk_mutex *mtk_mutex_get(struct device *dev);
 int mtk_mutex_prepare(struct mtk_mutex *mutex);
 void mtk_mutex_add_comp(struct mtk_mutex *mutex,
 			enum mtk_ddp_comp_id id);
 void mtk_mutex_enable(struct mtk_mutex *mutex);
+int mtk_mutex_enable_by_cmdq(struct mtk_mutex *mutex,
+			     void *pkt);
 void mtk_mutex_disable(struct mtk_mutex *mutex);
 void mtk_mutex_remove_comp(struct mtk_mutex *mutex,
 			   enum mtk_ddp_comp_id id);
···
 void mtk_mutex_put(struct mtk_mutex *mutex);
 void mtk_mutex_acquire(struct mtk_mutex *mutex);
 void mtk_mutex_release(struct mtk_mutex *mutex);
+int mtk_mutex_write_mod(struct mtk_mutex *mutex,
+			enum mtk_mutex_mod_index idx,
+			bool clear);
+int mtk_mutex_write_sof(struct mtk_mutex *mutex,
+			enum mtk_mutex_sof_index idx);

 #endif /* MTK_MUTEX_H */
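The new mtk_mutex enums end with a `*_MAX` sentinel, the common C idiom for sizing per-index arrays and bounds-checking an index without maintaining a separate count macro. An illustrative miniature of the pattern (names shortened; not the full MTK list):

```c
/* A trailing sentinel automatically tracks the number of real entries:
 * adding a value before IDX_MAX grows every array sized with it. */
enum mod_index {
	IDX_RDMA0,
	IDX_RSZ0,
	IDX_WROT0,

	IDX_MAX	/* ALWAYS keep at the end */
};

/* Bounds check an incoming index against the sentinel. */
static int mod_valid(int idx)
{
	return idx >= 0 && idx < IDX_MAX;
}
```

A driver can then declare `u32 regs[IDX_MAX];` and reject out-of-range indices with `mod_valid()`, which is presumably why the MTK header insists the sentinel stays last.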
+56
include/trace/events/scmi.h
···
 #include <linux/tracepoint.h>

+TRACE_EVENT(scmi_fc_call,
+	TP_PROTO(u8 protocol_id, u8 msg_id, u32 res_id, u32 val1, u32 val2),
+	TP_ARGS(protocol_id, msg_id, res_id, val1, val2),
+
+	TP_STRUCT__entry(
+		__field(u8, protocol_id)
+		__field(u8, msg_id)
+		__field(u32, res_id)
+		__field(u32, val1)
+		__field(u32, val2)
+	),
+
+	TP_fast_assign(
+		__entry->protocol_id = protocol_id;
+		__entry->msg_id = msg_id;
+		__entry->res_id = res_id;
+		__entry->val1 = val1;
+		__entry->val2 = val2;
+	),
+
+	TP_printk("[0x%02X]:[0x%02X]:[%08X]:%u:%u",
+		  __entry->protocol_id, __entry->msg_id,
+		  __entry->res_id, __entry->val1, __entry->val2)
+);
+
 TRACE_EVENT(scmi_xfer_begin,
 	TP_PROTO(int transfer_id, u8 msg_id, u8 protocol_id, u16 seq,
 		 bool poll),
···
 	TP_printk("transfer_id=%d msg_id=%u protocol_id=%u seq=%u msg_type=%u",
 		  __entry->transfer_id, __entry->msg_id, __entry->protocol_id,
 		  __entry->seq, __entry->msg_type)
 );
+
+TRACE_EVENT(scmi_msg_dump,
+	TP_PROTO(u8 protocol_id, u8 msg_id, unsigned char *tag, u16 seq,
+		 int status, void *buf, size_t len),
+	TP_ARGS(protocol_id, msg_id, tag, seq, status, buf, len),
+
+	TP_STRUCT__entry(
+		__field(u8, protocol_id)
+		__field(u8, msg_id)
+		__array(char, tag, 5)
+		__field(u16, seq)
+		__field(int, status)
+		__field(size_t, len)
+		__dynamic_array(unsigned char, cmd, len)
+	),
+
+	TP_fast_assign(
+		__entry->protocol_id = protocol_id;
+		__entry->msg_id = msg_id;
+		strscpy(__entry->tag, tag, 5);
+		__entry->seq = seq;
+		__entry->status = status;
+		__entry->len = len;
+		memcpy(__get_dynamic_array(cmd), buf, __entry->len);
+	),
+
+	TP_printk("pt=%02X t=%s msg_id=%02X seq=%04X s=%d pyld=%s",
+		  __entry->protocol_id, __entry->tag, __entry->msg_id,
+		  __entry->seq, __entry->status,
+		  __print_hex_str(__get_dynamic_array(cmd), __entry->len))
+);
 #endif /* _TRACE_SCMI_H */