Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'soc-drivers-6.13' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc

Pull SoC driver updates from Arnd Bergmann:
"Nothing particular important in the SoC driver updates, just the usual
improvements to for drivers/soc and a couple of subsystems that don't
fit anywhere else:

- The largest set of updates is for Qualcomm SoC drivers, extending
the set of supported features for additional SoCs in the QSEECOM,
LLCC and socinfo drivers.

- The ti_sci firmware driver gains support for power management

- The drivers/reset subsystem sees a rework of the microchip sparx5
and amlogic reset drivers to support additional chips, plus a few
minor updates on other platforms

- The SCMI firmware interface driver gains support for two protocol
extensions, allowing more flexible use of the shared memory area
and new DT binding properties for configurability.

- Mediatek SoC drivers gain support for power management on the MT8188
SoC and a new driver for DVFS.

- The AMD/Xilinx ZynqMP SoC drivers gain support for system reboot
and a few bugfixes

- The Hisilicon Kunpeng HCCS driver gains support for configuring
lanes through sysfs

Finally, there are cleanups and minor fixes for drivers/{soc, bus,
memory}, including changing back the .remove_new callback to .remove,
as well as a few other updates for freescale (powerpc) soc drivers,
NXP i.MX soc drivers, cznic turris platform driver, memory controller
drivers, TI OMAP SoC drivers, and Tegra firmware drivers"

* tag 'soc-drivers-6.13' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (116 commits)
soc: fsl: cpm1: qmc: Set the ret error code on platform_get_irq() failure
soc: fsl: rcpm: fix missing of_node_put() in copy_ippdexpcr1_setting()
soc: fsl: cpm1: tsa: switch to for_each_available_child_of_node_scoped()
platform: cznic: turris-omnia-mcu: Rename variable holding GPIO line names
platform: cznic: turris-omnia-mcu: Document the driver private data structure
firmware: turris-mox-rwtm: Document the driver private data structure
bus: Switch back to struct platform_driver::remove()
soc: qcom: ice: Remove the device_link field in qcom_ice
drm/msm/adreno: Setup SMMU aparture for per-process page table
firmware: qcom: scm: Introduce CP_SMMU_APERTURE_ID
firmware: arm_scpi: Check the DVFS OPP count returned by the firmware
soc: qcom: socinfo: add IPQ5424/IPQ5404 SoC ID
dt-bindings: arm: qcom,ids: add SoC ID for IPQ5424/IPQ5404
soc: qcom: llcc: Flip the manual slice configuration condition
dt-bindings: firmware: qcom,scm: Document sm8750 SCM
firmware: qcom: uefisecapp: Allow X1E Devkit devices
misc: lan966x_pci: Fix dtc warn 'Missing interrupt-parent'
misc: lan966x_pci: Fix dtc warns 'missing or empty reg/ranges property'
soc: qcom: llcc: Add LLCC configuration for the QCS8300 platform
dt-bindings: cache: qcom,llcc: Document the QCS8300 LLCC
...

+7199 -1060
+45
Documentation/ABI/testing/sysfs-devices-platform-kunpeng_hccs
··· 79 79 indicates a lane. 80 80 crc_err_cnt: (RO) CRC err count on this port. 81 81 ============= ==== ============================================= 82 + 83 + What: /sys/devices/platform/HISI04Bx:00/used_types 84 + Date: August 2024 85 + KernelVersion: 6.12 86 + Contact: Huisong Li <lihuisong@huawei.com> 87 + Description: 88 + This interface is used to show all HCCS types used on the 89 + platform, like, HCCS-v1, HCCS-v2 and so on. 90 + 91 + What: /sys/devices/platform/HISI04Bx:00/available_inc_dec_lane_types 92 + What: /sys/devices/platform/HISI04Bx:00/dec_lane_of_type 93 + What: /sys/devices/platform/HISI04Bx:00/inc_lane_of_type 94 + Date: August 2024 95 + KernelVersion: 6.12 96 + Contact: Huisong Li <lihuisong@huawei.com> 97 + Description: 98 + These interfaces under /sys/devices/platform/HISI04Bx/ are 99 + used to support the low power consumption feature of some 100 + HCCS types by changing the number of lanes used. The interfaces 101 + changing the number of lanes used are 'dec_lane_of_type' and 102 + 'inc_lane_of_type' which require root privileges. These 103 + interfaces aren't exposed if no HCCS type on platform support 104 + this feature. Please note that decreasing lane number is only 105 + allowed if all the specified HCCS ports are not busy. 106 + 107 + The low power consumption interfaces are as follows: 108 + 109 + ============================= ==== ================================ 110 + available_inc_dec_lane_types: (RO) available HCCS types (string) to 111 + increase and decrease the number 112 + of lane used, e.g. HCCS-v2. 113 + dec_lane_of_type: (WO) input HCCS type supported 114 + decreasing lane to decrease the 115 + used lane number of all specified 116 + HCCS type ports on platform to 117 + the minimum. 118 + You can query the 'cur_lane_num' 119 + to get the minimum lane number 120 + after executing successfully. 
121 + inc_lane_of_type: (WO) input HCCS type supported 122 + increasing lane to increase the 123 + used lane number of all specified 124 + HCCS type ports on platform to 125 + the full lane state. 126 + ============================= ==== ================================
+32
Documentation/devicetree/bindings/cache/qcom,llcc.yaml
··· 20 20 properties: 21 21 compatible: 22 22 enum: 23 + - qcom,qcs615-llcc 24 + - qcom,qcs8300-llcc 23 25 - qcom,qdu1000-llcc 24 26 - qcom,sa8775p-llcc 27 + - qcom,sar1130p-llcc 28 + - qcom,sar2130p-llcc 25 29 - qcom,sc7180-llcc 26 30 - qcom,sc7280-llcc 27 31 - qcom,sc8180x-llcc ··· 71 67 compatible: 72 68 contains: 73 69 enum: 70 + - qcom,sar1130p-llcc 71 + - qcom,sar2130p-llcc 72 + then: 73 + properties: 74 + reg: 75 + items: 76 + - description: LLCC0 base register region 77 + - description: LLCC1 base register region 78 + - description: LLCC broadcast OR register region 79 + - description: LLCC broadcast AND register region 80 + - description: LLCC scratchpad broadcast OR register region 81 + - description: LLCC scratchpad broadcast AND register region 82 + reg-names: 83 + items: 84 + - const: llcc0_base 85 + - const: llcc1_base 86 + - const: llcc_broadcast_base 87 + - const: llcc_broadcast_and_base 88 + - const: llcc_scratchpad_broadcast_base 89 + - const: llcc_scratchpad_broadcast_and_base 90 + 91 + - if: 92 + properties: 93 + compatible: 94 + contains: 95 + enum: 96 + - qcom,qcs615-llcc 74 97 - qcom,sc7180-llcc 75 98 - qcom,sm6350-llcc 76 99 then: ··· 228 197 compatible: 229 198 contains: 230 199 enum: 200 + - qcom,qcs8300-llcc 231 201 - qcom,sdm845-llcc 232 202 - qcom,sm8150-llcc 233 203 - qcom,sm8250-llcc
+15
Documentation/devicetree/bindings/firmware/arm,scmi.yaml
··· 131 131 be a non-zero value if set. 132 132 minimum: 1 133 133 134 + arm,max-msg-size: 135 + $ref: /schemas/types.yaml#/definitions/uint32 136 + description: 137 + An optional value, expressed in bytes, representing the maximum size 138 + allowed for the payload of messages transmitted on this transport. 139 + 140 + arm,max-msg: 141 + $ref: /schemas/types.yaml#/definitions/uint32 142 + description: 143 + An optional value representing the maximum number of concurrent in-flight 144 + messages allowed by this transport; this number represents the maximum 145 + number of concurrently outstanding messages that the server can handle on 146 + this platform. If set, the value should be non-zero. 147 + minimum: 1 148 + 134 149 arm,smc-id: 135 150 $ref: /schemas/types.yaml#/definitions/uint32 136 151 description:
+6
Documentation/devicetree/bindings/firmware/qcom,scm.yaml
··· 42 42 - qcom,scm-msm8996 43 43 - qcom,scm-msm8998 44 44 - qcom,scm-qcm2290 45 + - qcom,scm-qcs8300 45 46 - qcom,scm-qdu1000 47 + - qcom,scm-sa8255p 46 48 - qcom,scm-sa8775p 49 + - qcom,scm-sar2130p 47 50 - qcom,scm-sc7180 48 51 - qcom,scm-sc7280 49 52 - qcom,scm-sc8180x ··· 67 64 - qcom,scm-sm8450 68 65 - qcom,scm-sm8550 69 66 - qcom,scm-sm8650 67 + - qcom,scm-sm8750 70 68 - qcom,scm-qcs404 71 69 - qcom,scm-x1e80100 72 70 - const: qcom,scm ··· 199 195 - qcom,scm-sm8450 200 196 - qcom,scm-sm8550 201 197 - qcom,scm-sm8650 198 + - qcom,scm-sm8750 202 199 then: 203 200 properties: 204 201 interrupts: false ··· 209 204 compatible: 210 205 contains: 211 206 enum: 207 + - qcom,scm-sa8255p 212 208 - qcom,scm-sa8775p 213 209 then: 214 210 properties:
+27 -5
Documentation/devicetree/bindings/memory-controllers/fsl/fsl,ifc.yaml
··· 58 58 access window as configured. 59 59 60 60 patternProperties: 61 - "^.*@[a-f0-9]+(,[a-f0-9]+)+$": 61 + "^nand@[a-f0-9]+(,[a-f0-9]+)+$": 62 62 type: object 63 - description: | 64 - Child device nodes describe the devices connected to IFC such as NOR (e.g. 65 - cfi-flash) and NAND (fsl,ifc-nand). There might be board specific devices 66 - like FPGAs, CPLDs, etc. 63 + properties: 64 + compatible: 65 + const: fsl,ifc-nand 66 + 67 + reg: 68 + maxItems: 1 69 + 70 + "#address-cells": 71 + const: 1 72 + 73 + "#size-cells": 74 + const: 1 75 + 76 + patternProperties: 77 + "^partition@[0-9a-f]+": 78 + $ref: /schemas/mtd/partitions/partition.yaml# 79 + deprecated: true 67 80 68 81 required: 69 82 - compatible 70 83 - reg 84 + 85 + additionalProperties: false 86 + 87 + "(flash|fpga|board-control|cpld)@[a-f0-9]+(,[a-f0-9]+)+$": 88 + type: object 89 + oneOf: 90 + - $ref: /schemas/board/fsl,fpga-qixis.yaml# 91 + - $ref: /schemas/mtd/mtd-physmap.yaml# 92 + unevaluatedProperties: false 71 93 72 94 required: 73 95 - compatible
+83
Documentation/devicetree/bindings/soc/mediatek/mediatek,mt8183-dvfsrc.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/soc/mediatek/mediatek,mt8183-dvfsrc.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: MediaTek Dynamic Voltage and Frequency Scaling Resource Collector (DVFSRC) 8 + 9 + description: 10 + The Dynamic Voltage and Frequency Scaling Resource Collector (DVFSRC) is a 11 + Hardware module used to collect all the requests from both software and the 12 + various remote processors embedded into the SoC and decide about a minimum 13 + operating voltage and a minimum DRAM frequency to fulfill those requests in 14 + an effort to provide the best achievable performance per watt. 15 + This hardware IP is capable of transparently performing direct register R/W 16 + on all of the DVFSRC-controlled regulators and SoC bandwidth knobs. 17 + 18 + maintainers: 19 + - AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> 20 + - Henry Chen <henryc.chen@mediatek.com> 21 + 22 + properties: 23 + compatible: 24 + oneOf: 25 + - enum: 26 + - mediatek,mt8183-dvfsrc 27 + - mediatek,mt8195-dvfsrc 28 + - items: 29 + - const: mediatek,mt8192-dvfsrc 30 + - const: mediatek,mt8195-dvfsrc 31 + 32 + reg: 33 + maxItems: 1 34 + description: DVFSRC common register address and length. 
35 + 36 + regulators: 37 + type: object 38 + $ref: /schemas/regulator/mediatek,mt6873-dvfsrc-regulator.yaml# 39 + 40 + interconnect: 41 + type: object 42 + $ref: /schemas/interconnect/mediatek,mt8183-emi.yaml# 43 + 44 + required: 45 + - compatible 46 + - reg 47 + 48 + additionalProperties: false 49 + 50 + examples: 51 + - | 52 + soc { 53 + #address-cells = <2>; 54 + #size-cells = <2>; 55 + 56 + system-controller@10012000 { 57 + compatible = "mediatek,mt8195-dvfsrc"; 58 + reg = <0 0x10012000 0 0x1000>; 59 + 60 + regulators { 61 + compatible = "mediatek,mt8195-dvfsrc-regulator"; 62 + 63 + dvfsrc_vcore: dvfsrc-vcore { 64 + regulator-name = "dvfsrc-vcore"; 65 + regulator-min-microvolt = <550000>; 66 + regulator-max-microvolt = <750000>; 67 + regulator-always-on; 68 + }; 69 + 70 + dvfsrc_vscp: dvfsrc-vscp { 71 + regulator-name = "dvfsrc-vscp"; 72 + regulator-min-microvolt = <550000>; 73 + regulator-max-microvolt = <750000>; 74 + regulator-always-on; 75 + }; 76 + }; 77 + 78 + emi_icc: interconnect { 79 + compatible = "mediatek,mt8195-emi"; 80 + #interconnect-cells = <1>; 81 + }; 82 + }; 83 + };
+4
Documentation/devicetree/bindings/soc/qcom/qcom,aoss-qmp.yaml
··· 25 25 compatible: 26 26 items: 27 27 - enum: 28 + - qcom,qcs8300-aoss-qmp 28 29 - qcom,qdu1000-aoss-qmp 30 + - qcom,sa8255p-aoss-qmp 29 31 - qcom,sa8775p-aoss-qmp 32 + - qcom,sar2130p-aoss-qmp 30 33 - qcom,sc7180-aoss-qmp 31 34 - qcom,sc7280-aoss-qmp 32 35 - qcom,sc8180x-aoss-qmp ··· 43 40 - qcom,sm8450-aoss-qmp 44 41 - qcom,sm8550-aoss-qmp 45 42 - qcom,sm8650-aoss-qmp 43 + - qcom,sm8750-aoss-qmp 46 44 - qcom,x1e80100-aoss-qmp 47 45 - const: qcom,aoss-qmp 48 46
+1
Documentation/devicetree/bindings/sram/qcom,imem.yaml
··· 21 21 - qcom,msm8226-imem 22 22 - qcom,msm8974-imem 23 23 - qcom,qcs404-imem 24 + - qcom,qcs8300-imem 24 25 - qcom,qdu1000-imem 25 26 - qcom,sa8775p-imem 26 27 - qcom,sc7180-imem
+6
Documentation/devicetree/bindings/sram/sram.yaml
··· 101 101 IO mem address range, relative to the SRAM range. 102 102 maxItems: 1 103 103 104 + reg-io-width: 105 + description: 106 + The size (in bytes) of the IO accesses that should be performed on the 107 + SRAM. 108 + enum: [1, 2, 4, 8] 109 + 104 110 pool: 105 111 description: 106 112 Indicates that the particular reserved SRAM area is addressable
+9
MAINTAINERS
··· 2815 2815 2816 2816 ARM/QUALCOMM MAILING LIST 2817 2817 L: linux-arm-msm@vger.kernel.org 2818 + C: irc://irc.oftc.net/linux-msm 2818 2819 F: Documentation/devicetree/bindings/*/qcom* 2819 2820 F: Documentation/devicetree/bindings/soc/qcom/ 2820 2821 F: arch/arm/boot/dts/qcom/ ··· 2857 2856 M: Konrad Dybcio <konradybcio@kernel.org> 2858 2857 L: linux-arm-msm@vger.kernel.org 2859 2858 S: Maintained 2859 + C: irc://irc.oftc.net/linux-msm 2860 2860 T: git git://git.kernel.org/pub/scm/linux/kernel/git/qcom/linux.git 2861 2861 F: Documentation/devicetree/bindings/arm/qcom-soc.yaml 2862 2862 F: Documentation/devicetree/bindings/arm/qcom.yaml ··· 15178 15176 F: Documentation/devicetree/bindings/interrupt-controller/microchip,lan966x-oic.yaml 15179 15177 F: drivers/irqchip/irq-lan966x-oic.c 15180 15178 15179 + MICROCHIP LAN966X PCI DRIVER 15180 + M: Herve Codina <herve.codina@bootlin.com> 15181 + S: Maintained 15182 + F: drivers/misc/lan966x_pci.c 15183 + F: drivers/misc/lan966x_pci.dtso 15184 + 15181 15185 MICROCHIP LCDFB DRIVER 15182 15186 M: Nicolas Ferre <nicolas.ferre@microchip.com> 15183 15187 L: linux-fbdev@vger.kernel.org ··· 18295 18287 M: Bjorn Andersson <andersson@kernel.org> 18296 18288 L: linux-arm-msm@vger.kernel.org 18297 18289 S: Maintained 18290 + C: irc://irc.oftc.net/linux-msm 18298 18291 F: Documentation/devicetree/bindings/pinctrl/qcom,* 18299 18292 F: drivers/pinctrl/qcom/ 18300 18293
+1
arch/arm64/configs/defconfig
··· 1472 1472 CONFIG_EXTCON_PTN5150=m 1473 1473 CONFIG_EXTCON_USB_GPIO=y 1474 1474 CONFIG_EXTCON_USBC_CROS_EC=y 1475 + CONFIG_FSL_IFC=y 1475 1476 CONFIG_RENESAS_RPCIF=m 1476 1477 CONFIG_IIO=y 1477 1478 CONFIG_EXYNOS_ADC=y
+1
drivers/base/power/qos.c
··· 137 137 138 138 return ret; 139 139 } 140 + EXPORT_SYMBOL_GPL(dev_pm_qos_read_value); 140 141 141 142 /** 142 143 * apply_constraint - Add/modify/remove device PM QoS request.
+1 -1
drivers/bus/fsl-mc/fsl-mc-bus.c
··· 1210 1210 .acpi_match_table = fsl_mc_bus_acpi_match_table, 1211 1211 }, 1212 1212 .probe = fsl_mc_bus_probe, 1213 - .remove_new = fsl_mc_bus_remove, 1213 + .remove = fsl_mc_bus_remove, 1214 1214 .shutdown = fsl_mc_bus_remove, 1215 1215 }; 1216 1216
+1 -1
drivers/bus/hisi_lpc.c
··· 689 689 .acpi_match_table = hisi_lpc_acpi_match, 690 690 }, 691 691 .probe = hisi_lpc_probe, 692 - .remove_new = hisi_lpc_remove, 692 + .remove = hisi_lpc_remove, 693 693 }; 694 694 builtin_platform_driver(hisi_lpc_driver);
+1 -1
drivers/bus/omap-ocp2scp.c
··· 101 101 102 102 static struct platform_driver omap_ocp2scp_driver = { 103 103 .probe = omap_ocp2scp_probe, 104 - .remove_new = omap_ocp2scp_remove, 104 + .remove = omap_ocp2scp_remove, 105 105 .driver = { 106 106 .name = "omap-ocp2scp", 107 107 .of_match_table = of_match_ptr(omap_ocp2scp_id_table),
+1 -1
drivers/bus/omap_l3_smx.c
··· 273 273 274 274 static struct platform_driver omap3_l3_driver = { 275 275 .probe = omap3_l3_probe, 276 - .remove_new = omap3_l3_remove, 276 + .remove = omap3_l3_remove, 277 277 .driver = { 278 278 .name = "omap_l3_smx", 279 279 .of_match_table = of_match_ptr(omap3_l3_match),
+1 -1
drivers/bus/qcom-ssc-block-bus.c
··· 373 373 374 374 static struct platform_driver qcom_ssc_block_bus_driver = { 375 375 .probe = qcom_ssc_block_bus_probe, 376 - .remove_new = qcom_ssc_block_bus_remove, 376 + .remove = qcom_ssc_block_bus_remove, 377 377 .driver = { 378 378 .name = "qcom-ssc-block-bus", 379 379 .of_match_table = qcom_ssc_block_bus_of_match,
+1 -1
drivers/bus/simple-pm-bus.c
··· 128 128 129 129 static struct platform_driver simple_pm_bus_driver = { 130 130 .probe = simple_pm_bus_probe, 131 - .remove_new = simple_pm_bus_remove, 131 + .remove = simple_pm_bus_remove, 132 132 .driver = { 133 133 .name = "simple-pm-bus", 134 134 .of_match_table = simple_pm_bus_of_match,
+1 -1
drivers/bus/sun50i-de2.c
··· 36 36 37 37 static struct platform_driver sun50i_de2_bus_driver = { 38 38 .probe = sun50i_de2_bus_probe, 39 - .remove_new = sun50i_de2_bus_remove, 39 + .remove = sun50i_de2_bus_remove, 40 40 .driver = { 41 41 .name = "sun50i-de2-bus", 42 42 .of_match_table = sun50i_de2_bus_of_match,
+1 -1
drivers/bus/sunxi-rsb.c
··· 832 832 833 833 static struct platform_driver sunxi_rsb_driver = { 834 834 .probe = sunxi_rsb_probe, 835 - .remove_new = sunxi_rsb_remove, 835 + .remove = sunxi_rsb_remove, 836 836 .driver = { 837 837 .name = RSB_CTRL_NAME, 838 838 .of_match_table = sunxi_rsb_of_match_table,
+1 -1
drivers/bus/tegra-aconnect.c
··· 104 104 105 105 static struct platform_driver tegra_aconnect_driver = { 106 106 .probe = tegra_aconnect_probe, 107 - .remove_new = tegra_aconnect_remove, 107 + .remove = tegra_aconnect_remove, 108 108 .driver = { 109 109 .name = "tegra-aconnect", 110 110 .of_match_table = tegra_aconnect_of_match,
+1 -1
drivers/bus/tegra-gmi.c
··· 303 303 304 304 static struct platform_driver tegra_gmi_driver = { 305 305 .probe = tegra_gmi_probe, 306 - .remove_new = tegra_gmi_remove, 306 + .remove = tegra_gmi_remove, 307 307 .driver = { 308 308 .name = "tegra-gmi", 309 309 .of_match_table = tegra_gmi_id_table,
+1 -1
drivers/bus/ti-pwmss.c
··· 44 44 .of_match_table = pwmss_of_match, 45 45 }, 46 46 .probe = pwmss_probe, 47 - .remove_new = pwmss_remove, 47 + .remove = pwmss_remove, 48 48 }; 49 49 50 50 module_platform_driver(pwmss_driver);
+1 -1
drivers/bus/ti-sysc.c
··· 3345 3345 3346 3346 static struct platform_driver sysc_driver = { 3347 3347 .probe = sysc_probe, 3348 - .remove_new = sysc_remove, 3348 + .remove = sysc_remove, 3349 3349 .driver = { 3350 3350 .name = "ti-sysc", 3351 3351 .of_match_table = sysc_match,
+1 -1
drivers/bus/ts-nbus.c
··· 336 336 337 337 static struct platform_driver ts_nbus_driver = { 338 338 .probe = ts_nbus_probe, 339 - .remove_new = ts_nbus_remove, 339 + .remove = ts_nbus_remove, 340 340 .driver = { 341 341 .name = "ts_nbus", 342 342 .of_match_table = ts_nbus_of_match,
+40 -5
drivers/firmware/arm_scmi/common.h
··· 31 31 32 32 #define SCMI_MAX_RESPONSE_TIMEOUT (2 * MSEC_PER_SEC) 33 33 34 + #define SCMI_SHMEM_MAX_PAYLOAD_SIZE 104 35 + 34 36 enum scmi_error_codes { 35 37 SCMI_SUCCESS = 0, /* Success */ 36 38 SCMI_ERR_SUPPORT = -1, /* Not supported */ ··· 167 165 * channel 168 166 * @is_p2a: A flag to identify a channel as P2A (RX) 169 167 * @rx_timeout_ms: The configured RX timeout in milliseconds. 168 + * @max_msg_size: Maximum size of message payload. 170 169 * @handle: Pointer to SCMI entity handle 171 170 * @no_completion_irq: Flag to indicate that this channel has no completion 172 171 * interrupt mechanism for synchronous commands. ··· 180 177 struct device *dev; 181 178 bool is_p2a; 182 179 unsigned int rx_timeout_ms; 180 + unsigned int max_msg_size; 183 181 struct scmi_handle *handle; 184 182 bool no_completion_irq; 185 183 void *transport_info; ··· 228 224 * @max_msg: Maximum number of messages for a channel type (tx or rx) that can 229 225 * be pending simultaneously in the system. May be overridden by the 230 226 * get_max_msg op. 231 - * @max_msg_size: Maximum size of data per message that can be handled. 227 + * @max_msg_size: Maximum size of data payload per message that can be handled. 228 + * @atomic_threshold: Optional system wide DT-configured threshold, expressed 229 + * in microseconds, for atomic operations. 230 + * Only SCMI synchronous commands reported by the platform 231 + * to have an execution latency lesser-equal to the threshold 232 + * should be considered for atomic mode operation: such 233 + * decision is finally left up to the SCMI drivers. 232 234 * @force_polling: Flag to force this whole transport to use SCMI core polling 233 235 * mechanism instead of completion interrupts even if available. 
234 236 * @sync_cmds_completed_on_ret: Flag to indicate that the transport assures ··· 253 243 int max_rx_timeout_ms; 254 244 int max_msg; 255 245 int max_msg_size; 246 + unsigned int atomic_threshold; 256 247 const bool force_polling; 257 248 const bool sync_cmds_completed_on_ret; 258 249 const bool atomic_enabled; ··· 322 311 MSG_MBOX_SPURIOUS = -5, 323 312 }; 324 313 314 + /* Used for compactness and signature validation of the function pointers being 315 + * passed. 316 + */ 317 + typedef void (*shmem_copy_toio_t)(void __iomem *to, const void *from, 318 + size_t count); 319 + typedef void (*shmem_copy_fromio_t)(void *to, const void __iomem *from, 320 + size_t count); 321 + 322 + /** 323 + * struct scmi_shmem_io_ops - I/O operations to read from/write to 324 + * Shared Memory 325 + * 326 + * @toio: Copy data to the shared memory area 327 + * @fromio: Copy data from the shared memory area 328 + */ 329 + struct scmi_shmem_io_ops { 330 + shmem_copy_fromio_t fromio; 331 + shmem_copy_toio_t toio; 332 + }; 333 + 325 334 /* shmem related declarations */ 326 335 struct scmi_shared_mem; 327 336 ··· 362 331 struct scmi_shared_mem_operations { 363 332 void (*tx_prepare)(struct scmi_shared_mem __iomem *shmem, 364 333 struct scmi_xfer *xfer, 365 - struct scmi_chan_info *cinfo); 334 + struct scmi_chan_info *cinfo, 335 + shmem_copy_toio_t toio); 366 336 u32 (*read_header)(struct scmi_shared_mem __iomem *shmem); 367 337 368 338 void (*fetch_response)(struct scmi_shared_mem __iomem *shmem, 369 - struct scmi_xfer *xfer); 339 + struct scmi_xfer *xfer, 340 + shmem_copy_fromio_t fromio); 370 341 void (*fetch_notification)(struct scmi_shared_mem __iomem *shmem, 371 - size_t max_len, struct scmi_xfer *xfer); 342 + size_t max_len, struct scmi_xfer *xfer, 343 + shmem_copy_fromio_t fromio); 372 344 void (*clear_channel)(struct scmi_shared_mem __iomem *shmem); 373 345 bool (*poll_done)(struct scmi_shared_mem __iomem *shmem, 374 346 struct scmi_xfer *xfer); ··· 379 345 bool 
(*channel_intr_enabled)(struct scmi_shared_mem __iomem *shmem); 380 346 void __iomem *(*setup_iomap)(struct scmi_chan_info *cinfo, 381 347 struct device *dev, 382 - bool tx, struct resource *res); 348 + bool tx, struct resource *res, 349 + struct scmi_shmem_io_ops **ops); 383 350 }; 384 351 385 352 const struct scmi_shared_mem_operations *scmi_shared_mem_operations_get(void);
+24 -18
drivers/firmware/arm_scmi/driver.c
··· 149 149 * base protocol 150 150 * @active_protocols: IDR storing device_nodes for protocols actually defined 151 151 * in the DT and confirmed as implemented by fw. 152 - * @atomic_threshold: Optional system wide DT-configured threshold, expressed 153 - * in microseconds, for atomic operations. 154 - * Only SCMI synchronous commands reported by the platform 155 - * to have an execution latency lesser-equal to the threshold 156 - * should be considered for atomic mode operation: such 157 - * decision is finally left up to the SCMI drivers. 158 152 * @notify_priv: Pointer to private data structure specific to notifications. 159 153 * @node: List head 160 154 * @users: Number of users of this instance ··· 174 180 struct mutex protocols_mtx; 175 181 u8 *protocols_imp; 176 182 struct idr active_protocols; 177 - unsigned int atomic_threshold; 178 183 void *notify_priv; 179 184 struct list_head node; 180 185 int users; ··· 2438 2445 ret = info->desc->atomic_enabled && 2439 2446 is_transport_polling_capable(info->desc); 2440 2447 if (ret && atomic_threshold) 2441 - *atomic_threshold = info->atomic_threshold; 2448 + *atomic_threshold = info->desc->atomic_threshold; 2442 2449 2443 2450 return ret; 2444 2451 } ··· 2638 2645 2639 2646 cinfo->is_p2a = !tx; 2640 2647 cinfo->rx_timeout_ms = info->desc->max_rx_timeout_ms; 2648 + cinfo->max_msg_size = info->desc->max_msg_size; 2641 2649 2642 2650 /* Create a unique name for this transport device */ 2643 2651 snprintf(name, 32, "__scmi_transport_device_%s_%02X", ··· 2952 2958 (char **)&dbg->name); 2953 2959 2954 2960 debugfs_create_u32("atomic_threshold_us", 0400, top_dentry, 2955 - &info->atomic_threshold); 2961 + (u32 *)&info->desc->atomic_threshold); 2956 2962 2957 2963 debugfs_create_str("type", 0400, trans, (char **)&dbg->type); 2958 2964 ··· 3047 3053 if (ret && ret != -EINVAL) 3048 3054 dev_err(dev, "Malformed arm,max-rx-timeout-ms DT property.\n"); 3049 3055 3050 - dev_info(dev, "SCMI max-rx-timeout: %dms\n", 3051 - 
trans->desc->max_rx_timeout_ms); 3056 + ret = of_property_read_u32(dev->of_node, "arm,max-msg-size", 3057 + &trans->desc->max_msg_size); 3058 + if (ret && ret != -EINVAL) 3059 + dev_err(dev, "Malformed arm,max-msg-size DT property.\n"); 3060 + 3061 + ret = of_property_read_u32(dev->of_node, "arm,max-msg", 3062 + &trans->desc->max_msg); 3063 + if (ret && ret != -EINVAL) 3064 + dev_err(dev, "Malformed arm,max-msg DT property.\n"); 3065 + 3066 + dev_info(dev, 3067 + "SCMI max-rx-timeout: %dms / max-msg-size: %dbytes / max-msg: %d\n", 3068 + trans->desc->max_rx_timeout_ms, trans->desc->max_msg_size, 3069 + trans->desc->max_msg); 3070 + 3071 + /* System wide atomic threshold for atomic ops .. if any */ 3072 + if (!of_property_read_u32(dev->of_node, "atomic-threshold-us", 3073 + &trans->desc->atomic_threshold)) 3074 + dev_info(dev, 3075 + "SCMI System wide atomic threshold set to %u us\n", 3076 + trans->desc->atomic_threshold); 3052 3077 3053 3078 return trans->desc; 3054 3079 } ··· 3118 3105 handle->devm_protocol_acquire = scmi_devm_protocol_acquire; 3119 3106 handle->devm_protocol_get = scmi_devm_protocol_get; 3120 3107 handle->devm_protocol_put = scmi_devm_protocol_put; 3121 - 3122 - /* System wide atomic threshold for atomic ops .. if any */ 3123 - if (!of_property_read_u32(np, "atomic-threshold-us", 3124 - &info->atomic_threshold)) 3125 - dev_info(dev, 3126 - "SCMI System wide atomic threshold set to %d us\n", 3127 - info->atomic_threshold); 3128 3108 handle->is_transport_atomic = scmi_is_transport_atomic; 3129 3109 3130 3110 /* Setup all channels described in the DT at first */
+78 -7
drivers/firmware/arm_scmi/shmem.c
··· 16 16 17 17 #include "common.h" 18 18 19 + #define SCMI_SHMEM_LAYOUT_OVERHEAD 24 20 + 19 21 /* 20 22 * SCMI specification requires all parameters, message headers, return 21 23 * arguments or any protocol data to be expressed in little endian ··· 36 34 u8 msg_payload[]; 37 35 }; 38 36 37 + static inline void shmem_memcpy_fromio32(void *to, 38 + const void __iomem *from, 39 + size_t count) 40 + { 41 + WARN_ON(!IS_ALIGNED((unsigned long)from, 4) || 42 + !IS_ALIGNED((unsigned long)to, 4) || 43 + count % 4); 44 + 45 + __ioread32_copy(to, from, count / 4); 46 + } 47 + 48 + static inline void shmem_memcpy_toio32(void __iomem *to, 49 + const void *from, 50 + size_t count) 51 + { 52 + WARN_ON(!IS_ALIGNED((unsigned long)to, 4) || 53 + !IS_ALIGNED((unsigned long)from, 4) || 54 + count % 4); 55 + 56 + __iowrite32_copy(to, from, count / 4); 57 + } 58 + 59 + static struct scmi_shmem_io_ops shmem_io_ops32 = { 60 + .fromio = shmem_memcpy_fromio32, 61 + .toio = shmem_memcpy_toio32, 62 + }; 63 + 64 + /* Wrappers are needed for proper memcpy_{from,to}_io expansion by the 65 + * pre-processor. 
66 + */ 67 + static inline void shmem_memcpy_fromio(void *to, 68 + const void __iomem *from, 69 + size_t count) 70 + { 71 + memcpy_fromio(to, from, count); 72 + } 73 + 74 + static inline void shmem_memcpy_toio(void __iomem *to, 75 + const void *from, 76 + size_t count) 77 + { 78 + memcpy_toio(to, from, count); 79 + } 80 + 81 + static struct scmi_shmem_io_ops shmem_io_ops_default = { 82 + .fromio = shmem_memcpy_fromio, 83 + .toio = shmem_memcpy_toio, 84 + }; 85 + 39 86 static void shmem_tx_prepare(struct scmi_shared_mem __iomem *shmem, 40 87 struct scmi_xfer *xfer, 41 - struct scmi_chan_info *cinfo) 88 + struct scmi_chan_info *cinfo, 89 + shmem_copy_toio_t copy_toio) 42 90 { 43 91 ktime_t stop; 44 92 ··· 125 73 iowrite32(sizeof(shmem->msg_header) + xfer->tx.len, &shmem->length); 126 74 iowrite32(pack_scmi_header(&xfer->hdr), &shmem->msg_header); 127 75 if (xfer->tx.buf) 128 - memcpy_toio(shmem->msg_payload, xfer->tx.buf, xfer->tx.len); 76 + copy_toio(shmem->msg_payload, xfer->tx.buf, xfer->tx.len); 129 77 } 130 78 131 79 static u32 shmem_read_header(struct scmi_shared_mem __iomem *shmem) ··· 134 82 } 135 83 136 84 static void shmem_fetch_response(struct scmi_shared_mem __iomem *shmem, 137 - struct scmi_xfer *xfer) 85 + struct scmi_xfer *xfer, 86 + shmem_copy_fromio_t copy_fromio) 138 87 { 139 88 size_t len = ioread32(&shmem->length); 140 89 ··· 144 91 xfer->rx.len = min_t(size_t, xfer->rx.len, len > 8 ? len - 8 : 0); 145 92 146 93 /* Take a copy to the rx buffer.. */ 147 - memcpy_fromio(xfer->rx.buf, shmem->msg_payload + 4, xfer->rx.len); 94 + copy_fromio(xfer->rx.buf, shmem->msg_payload + 4, xfer->rx.len); 148 95 } 149 96 150 97 static void shmem_fetch_notification(struct scmi_shared_mem __iomem *shmem, 151 - size_t max_len, struct scmi_xfer *xfer) 98 + size_t max_len, struct scmi_xfer *xfer, 99 + shmem_copy_fromio_t copy_fromio) 152 100 { 153 101 size_t len = ioread32(&shmem->length); 154 102 ··· 157 103 xfer->rx.len = min_t(size_t, max_len, len > 4 ? 
len - 4 : 0); 158 104 159 105 /* Take a copy to the rx buffer.. */ 160 - memcpy_fromio(xfer->rx.buf, shmem->msg_payload, xfer->rx.len); 106 + copy_fromio(xfer->rx.buf, shmem->msg_payload, xfer->rx.len); 161 107 } 162 108 163 109 static void shmem_clear_channel(struct scmi_shared_mem __iomem *shmem) ··· 193 139 194 140 static void __iomem *shmem_setup_iomap(struct scmi_chan_info *cinfo, 195 141 struct device *dev, bool tx, 196 - struct resource *res) 142 + struct resource *res, 143 + struct scmi_shmem_io_ops **ops) 197 144 { 198 145 struct device_node *shmem __free(device_node); 199 146 const char *desc = tx ? "Tx" : "Rx"; ··· 203 148 struct resource lres = {}; 204 149 resource_size_t size; 205 150 void __iomem *addr; 151 + u32 reg_io_width; 206 152 207 153 shmem = of_parse_phandle(cdev->of_node, "shmem", idx); 208 154 if (!shmem) ··· 223 167 } 224 168 225 169 size = resource_size(res); 170 + if (cinfo->max_msg_size + SCMI_SHMEM_LAYOUT_OVERHEAD > size) { 171 + dev_err(dev, "misconfigured SCMI shared memory\n"); 172 + return IOMEM_ERR_PTR(-ENOSPC); 173 + } 174 + 226 175 addr = devm_ioremap(dev, res->start, size); 227 176 if (!addr) { 228 177 dev_err(dev, "failed to ioremap SCMI %s shared memory\n", desc); 229 178 return IOMEM_ERR_PTR(-EADDRNOTAVAIL); 179 + } 180 + 181 + of_property_read_u32(shmem, "reg-io-width", &reg_io_width); 182 + switch (reg_io_width) { 183 + case 4: 184 + *ops = &shmem_io_ops32; 185 + break; 186 + default: 187 + *ops = &shmem_io_ops_default; 188 + break; 230 189 } 231 190 232 191 return addr;
+10 -5
drivers/firmware/arm_scmi/transports/mailbox.c
··· 26 26 * @cinfo: SCMI channel info 27 27 * @shmem: Transmit/Receive shared memory area 28 28 * @chan_lock: Lock that prevents multiple xfers from being queued 29 + * @io_ops: Transport specific I/O operations 29 30 */ 30 31 struct scmi_mailbox { 31 32 struct mbox_client cl; ··· 36 35 struct scmi_chan_info *cinfo; 37 36 struct scmi_shared_mem __iomem *shmem; 38 37 struct mutex chan_lock; 38 + struct scmi_shmem_io_ops *io_ops; 39 39 }; 40 40 41 41 #define client_to_scmi_mailbox(c) container_of(c, struct scmi_mailbox, cl) ··· 47 45 { 48 46 struct scmi_mailbox *smbox = client_to_scmi_mailbox(cl); 49 47 50 - core->shmem->tx_prepare(smbox->shmem, m, smbox->cinfo); 48 + core->shmem->tx_prepare(smbox->shmem, m, smbox->cinfo, 49 + smbox->io_ops->toio); 51 50 } 52 51 53 52 static void rx_callback(struct mbox_client *cl, void *m) ··· 200 197 if (!smbox) 201 198 return -ENOMEM; 202 199 203 - smbox->shmem = core->shmem->setup_iomap(cinfo, dev, tx, NULL); 200 + smbox->shmem = core->shmem->setup_iomap(cinfo, dev, tx, NULL, 201 + &smbox->io_ops); 204 202 if (IS_ERR(smbox->shmem)) 205 203 return PTR_ERR(smbox->shmem); 206 204 ··· 309 305 { 310 306 struct scmi_mailbox *smbox = cinfo->transport_info; 311 307 312 - core->shmem->fetch_response(smbox->shmem, xfer); 308 + core->shmem->fetch_response(smbox->shmem, xfer, smbox->io_ops->fromio); 313 309 } 314 310 315 311 static void mailbox_fetch_notification(struct scmi_chan_info *cinfo, ··· 317 313 { 318 314 struct scmi_mailbox *smbox = cinfo->transport_info; 319 315 320 - core->shmem->fetch_notification(smbox->shmem, max_len, xfer); 316 + core->shmem->fetch_notification(smbox->shmem, max_len, xfer, 317 + smbox->io_ops->fromio); 321 318 } 322 319 323 320 static void mailbox_clear_channel(struct scmi_chan_info *cinfo) ··· 371 366 .ops = &scmi_mailbox_ops, 372 367 .max_rx_timeout_ms = 30, /* We may increase this if required */ 373 368 .max_msg = 20, /* Limited by MBOX_TX_QUEUE_LEN */ 374 - .max_msg_size = 128, 369 + .max_msg_size = 
SCMI_SHMEM_MAX_PAYLOAD_SIZE, 375 370 }; 376 371 377 372 static const struct of_device_id scmi_of_match[] = {
+11 -8
drivers/firmware/arm_scmi/transports/optee.c
··· 17 17 18 18 #include "../common.h" 19 19 20 - #define SCMI_OPTEE_MAX_MSG_SIZE 128 21 - 22 20 enum scmi_optee_pta_cmd { 23 21 /* 24 22 * PTA_SCMI_CMD_CAPABILITIES - Get channel capabilities ··· 112 114 * @req.shmem: Virtual base address of the shared memory 113 115 * @req.msg: Shared memory protocol handle for SCMI request and 114 116 * synchronous response 117 + * @io_ops: Transport specific I/O operations 115 118 * @tee_shm: TEE shared memory handle @req or NULL if using IOMEM shmem 116 119 * @link: Reference in agent's channel list 117 120 */ ··· 127 128 struct scmi_shared_mem __iomem *shmem; 128 129 struct scmi_msg_payld *msg; 129 130 } req; 131 + struct scmi_shmem_io_ops *io_ops; 130 132 struct tee_shm *tee_shm; 131 133 struct list_head link; 132 134 }; ··· 297 297 298 298 param[2].attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT; 299 299 param[2].u.memref.shm = channel->tee_shm; 300 - param[2].u.memref.size = SCMI_OPTEE_MAX_MSG_SIZE; 300 + param[2].u.memref.size = SCMI_SHMEM_MAX_PAYLOAD_SIZE; 301 301 302 302 ret = tee_client_invoke_func(scmi_optee_private->tee_ctx, &arg, param); 303 303 if (ret < 0 || arg.ret) { ··· 330 330 331 331 static int setup_dynamic_shmem(struct device *dev, struct scmi_optee_channel *channel) 332 332 { 333 - const size_t msg_size = SCMI_OPTEE_MAX_MSG_SIZE; 333 + const size_t msg_size = SCMI_SHMEM_MAX_PAYLOAD_SIZE; 334 334 void *shbuf; 335 335 336 336 channel->tee_shm = tee_shm_alloc_kernel_buf(scmi_optee_private->tee_ctx, msg_size); ··· 350 350 static int setup_static_shmem(struct device *dev, struct scmi_chan_info *cinfo, 351 351 struct scmi_optee_channel *channel) 352 352 { 353 - channel->req.shmem = core->shmem->setup_iomap(cinfo, dev, true, NULL); 353 + channel->req.shmem = core->shmem->setup_iomap(cinfo, dev, true, NULL, 354 + &channel->io_ops); 354 355 if (IS_ERR(channel->req.shmem)) 355 356 return PTR_ERR(channel->req.shmem); 356 357 ··· 466 465 ret = invoke_process_msg_channel(channel, 467 466 core->msg->command_size(xfer)); 
468 467 } else { 469 - core->shmem->tx_prepare(channel->req.shmem, xfer, cinfo); 468 + core->shmem->tx_prepare(channel->req.shmem, xfer, cinfo, 469 + channel->io_ops->toio); 470 470 ret = invoke_process_smt_channel(channel); 471 471 } 472 472 ··· 486 484 core->msg->fetch_response(channel->req.msg, 487 485 channel->rx_len, xfer); 488 486 else 489 - core->shmem->fetch_response(channel->req.shmem, xfer); 487 + core->shmem->fetch_response(channel->req.shmem, xfer, 488 + channel->io_ops->fromio); 490 489 } 491 490 492 491 static void scmi_optee_mark_txdone(struct scmi_chan_info *cinfo, int ret, ··· 517 514 .ops = &scmi_optee_ops, 518 515 .max_rx_timeout_ms = 30, 519 516 .max_msg = 20, 520 - .max_msg_size = SCMI_OPTEE_MAX_MSG_SIZE, 517 + .max_msg_size = SCMI_SHMEM_MAX_PAYLOAD_SIZE, 521 518 .sync_cmds_completed_on_ret = true, 522 519 }; 523 520
+9 -4
drivers/firmware/arm_scmi/transports/smc.c
··· 45 45 * @irq: An optional IRQ for completion 46 46 * @cinfo: SCMI channel info 47 47 * @shmem: Transmit/Receive shared memory area 48 + * @io_ops: Transport specific I/O operations 48 49 * @shmem_lock: Lock to protect access to Tx/Rx shared memory area. 49 50 * Used when NOT operating in atomic mode. 50 51 * @inflight: Atomic flag to protect access to Tx/Rx shared memory area. ··· 61 60 int irq; 62 61 struct scmi_chan_info *cinfo; 63 62 struct scmi_shared_mem __iomem *shmem; 63 + struct scmi_shmem_io_ops *io_ops; 64 64 /* Protect access to shmem area */ 65 65 struct mutex shmem_lock; 66 66 #define INFLIGHT_NONE MSG_TOKEN_MAX ··· 146 144 if (!scmi_info) 147 145 return -ENOMEM; 148 146 149 - scmi_info->shmem = core->shmem->setup_iomap(cinfo, dev, tx, &res); 147 + scmi_info->shmem = core->shmem->setup_iomap(cinfo, dev, tx, &res, 148 + &scmi_info->io_ops); 150 149 if (IS_ERR(scmi_info->shmem)) 151 150 return PTR_ERR(scmi_info->shmem); 152 151 ··· 232 229 */ 233 230 smc_channel_lock_acquire(scmi_info, xfer); 234 231 235 - core->shmem->tx_prepare(scmi_info->shmem, xfer, cinfo); 232 + core->shmem->tx_prepare(scmi_info->shmem, xfer, cinfo, 233 + scmi_info->io_ops->toio); 236 234 237 235 if (scmi_info->cap_id != ULONG_MAX) 238 236 arm_smccc_1_1_invoke(scmi_info->func_id, scmi_info->cap_id, 0, ··· 257 253 { 258 254 struct scmi_smc *scmi_info = cinfo->transport_info; 259 255 260 - core->shmem->fetch_response(scmi_info->shmem, xfer); 256 + core->shmem->fetch_response(scmi_info->shmem, xfer, 257 + scmi_info->io_ops->fromio); 261 258 } 262 259 263 260 static void smc_mark_txdone(struct scmi_chan_info *cinfo, int ret, ··· 282 277 .ops = &scmi_smc_ops, 283 278 .max_rx_timeout_ms = 30, 284 279 .max_msg = 20, 285 - .max_msg_size = 128, 280 + .max_msg_size = SCMI_SHMEM_MAX_PAYLOAD_SIZE, 286 281 /* 287 282 * Setting .sync_cmds_atomic_replies to true for SMC assumes that, 288 283 * once the SMC instruction has completed successfully, the issued
+8 -7
drivers/firmware/arm_scmi/transports/virtio.c
··· 32 32 33 33 #define VIRTIO_MAX_RX_TIMEOUT_MS 60000 34 34 #define VIRTIO_SCMI_MAX_MSG_SIZE 128 /* Value may be increased. */ 35 - #define VIRTIO_SCMI_MAX_PDU_SIZE \ 36 - (VIRTIO_SCMI_MAX_MSG_SIZE + SCMI_MSG_MAX_PROT_OVERHEAD) 35 + #define VIRTIO_SCMI_MAX_PDU_SIZE(ci) \ 36 + ((ci)->max_msg_size + SCMI_MSG_MAX_PROT_OVERHEAD) 37 37 #define DESCRIPTORS_PER_TX_MSG 2 38 38 39 39 /** ··· 90 90 * @input: SDU used for (delayed) responses and notifications 91 91 * @list: List which scmi_vio_msg may be part of 92 92 * @rx_len: Input SDU size in bytes, once input has been received 93 + * @max_len: Maximum allowed SDU size in bytes 93 94 * @poll_idx: Last used index registered for polling purposes if this message 94 95 * transaction reply was configured for polling. 95 96 * @poll_status: Polling state for this message. ··· 103 102 struct scmi_msg_payld *input; 104 103 struct list_head list; 105 104 unsigned int rx_len; 105 + unsigned int max_len; 106 106 unsigned int poll_idx; 107 107 enum poll_states poll_status; 108 108 /* Lock to protect access to poll_status */ ··· 236 234 unsigned long flags; 237 235 struct device *dev = &vioch->vqueue->vdev->dev; 238 236 239 - sg_init_one(&sg_in, msg->input, VIRTIO_SCMI_MAX_PDU_SIZE); 237 + sg_init_one(&sg_in, msg->input, msg->max_len); 240 238 241 239 spin_lock_irqsave(&vioch->lock, flags); 242 240 ··· 441 439 if (!msg) 442 440 return -ENOMEM; 443 441 442 + msg->max_len = VIRTIO_SCMI_MAX_PDU_SIZE(cinfo); 444 443 if (tx) { 445 - msg->request = devm_kzalloc(dev, 446 - VIRTIO_SCMI_MAX_PDU_SIZE, 444 + msg->request = devm_kzalloc(dev, msg->max_len, 447 445 GFP_KERNEL); 448 446 if (!msg->request) 449 447 return -ENOMEM; ··· 451 449 refcount_set(&msg->users, 1); 452 450 } 453 451 454 - msg->input = devm_kzalloc(dev, VIRTIO_SCMI_MAX_PDU_SIZE, 455 - GFP_KERNEL); 452 + msg->input = devm_kzalloc(dev, msg->max_len, GFP_KERNEL); 456 453 if (!msg->input) 457 454 return -ENOMEM; 458 455
+3
drivers/firmware/arm_scpi.c
··· 630 630 if (ret) 631 631 return ERR_PTR(ret); 632 632 633 + if (!buf.opp_count) 634 + return ERR_PTR(-ENOENT); 635 + 633 636 info = kmalloc(sizeof(*info), GFP_KERNEL); 634 637 if (!info) 635 638 return ERR_PTR(-ENOMEM);
+30
drivers/firmware/qcom/qcom_scm.c
··· 904 904 } 905 905 EXPORT_SYMBOL_GPL(qcom_scm_restore_sec_cfg); 906 906 907 + #define QCOM_SCM_CP_APERTURE_CONTEXT_MASK GENMASK(7, 0) 908 + 909 + bool qcom_scm_set_gpu_smmu_aperture_is_available(void) 910 + { 911 + return __qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_MP, 912 + QCOM_SCM_MP_CP_SMMU_APERTURE_ID); 913 + } 914 + EXPORT_SYMBOL_GPL(qcom_scm_set_gpu_smmu_aperture_is_available); 915 + 916 + int qcom_scm_set_gpu_smmu_aperture(unsigned int context_bank) 917 + { 918 + struct qcom_scm_desc desc = { 919 + .svc = QCOM_SCM_SVC_MP, 920 + .cmd = QCOM_SCM_MP_CP_SMMU_APERTURE_ID, 921 + .arginfo = QCOM_SCM_ARGS(4), 922 + .args[0] = 0xffff0000 | FIELD_PREP(QCOM_SCM_CP_APERTURE_CONTEXT_MASK, context_bank), 923 + .args[1] = 0xffffffff, 924 + .args[2] = 0xffffffff, 925 + .args[3] = 0xffffffff, 926 + .owner = ARM_SMCCC_OWNER_SIP 927 + }; 928 + 929 + return qcom_scm_call(__scm->dev, &desc, NULL); 930 + } 931 + EXPORT_SYMBOL_GPL(qcom_scm_set_gpu_smmu_aperture); 932 + 907 933 int qcom_scm_iommu_secure_ptbl_size(u32 spare, size_t *size) 908 934 { 909 935 struct qcom_scm_desc desc = { ··· 1768 1742 + any potential issues with this, only allow validated machines for now. 1769 1743 */ 1770 1744 static const struct of_device_id qcom_scm_qseecom_allowlist[] __maybe_unused = { 1745 + { .compatible = "dell,xps13-9345" }, 1771 1746 { .compatible = "lenovo,flex-5g" }, 1772 1747 { .compatible = "lenovo,thinkpad-t14s" }, 1773 1748 { .compatible = "lenovo,thinkpad-x13s", }, 1749 + { .compatible = "lenovo,yoga-slim7x" }, 1750 + { .compatible = "microsoft,arcata", }, 1774 1751 { .compatible = "microsoft,romulus13", }, 1775 1752 { .compatible = "microsoft,romulus15", }, 1776 1753 { .compatible = "qcom,sc8180x-primus" }, 1754 + { .compatible = "qcom,x1e001de-devkit" }, 1777 1755 { .compatible = "qcom,x1e80100-crd" }, 1778 1756 { .compatible = "qcom,x1e80100-qcp" }, 1779 1757 { }
+1
drivers/firmware/qcom/qcom_scm.h
··· 116 116 #define QCOM_SCM_MP_IOMMU_SET_CP_POOL_SIZE 0x05 117 117 #define QCOM_SCM_MP_VIDEO_VAR 0x08 118 118 #define QCOM_SCM_MP_ASSIGN 0x16 119 + #define QCOM_SCM_MP_CP_SMMU_APERTURE_ID 0x1b 119 120 #define QCOM_SCM_MP_SHM_BRIDGE_ENABLE 0x1c 120 121 #define QCOM_SCM_MP_SHM_BRIDGE_DELETE 0x1d 121 122 #define QCOM_SCM_MP_SHM_BRIDGE_CREATE 0x1e
+9 -5
drivers/firmware/tegra/bpmp.c
··· 3 3 * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. 4 4 */ 5 5 6 - #include <linux/cleanup.h> 7 6 #include <linux/clk/tegra.h> 8 7 #include <linux/genalloc.h> 9 8 #include <linux/mailbox_client.h> ··· 34 35 35 36 struct tegra_bpmp *tegra_bpmp_get(struct device *dev) 36 37 { 37 - struct device_node *np __free(device_node); 38 38 struct platform_device *pdev; 39 39 struct tegra_bpmp *bpmp; 40 + struct device_node *np; 40 41 41 42 np = of_parse_phandle(dev->of_node, "nvidia,bpmp", 0); 42 43 if (!np) 43 44 return ERR_PTR(-ENOENT); 44 45 45 46 pdev = of_find_device_by_node(np); 46 - if (!pdev) 47 - return ERR_PTR(-ENODEV); 47 + if (!pdev) { 48 + bpmp = ERR_PTR(-ENODEV); 49 + goto put; 50 + } 48 51 49 52 bpmp = platform_get_drvdata(pdev); 50 53 if (!bpmp) { 54 + bpmp = ERR_PTR(-EPROBE_DEFER); 51 55 put_device(&pdev->dev); 52 - return ERR_PTR(-EPROBE_DEFER); 56 + goto put; 53 57 } 54 58 59 + put: 60 + of_node_put(np); 55 61 return bpmp; 56 62 } 57 63 EXPORT_SYMBOL_GPL(tegra_bpmp_get);
+487 -2
drivers/firmware/ti_sci.c
··· 2 2 /* 3 3 * Texas Instruments System Control Interface Protocol Driver 4 4 * 5 - * Copyright (C) 2015-2022 Texas Instruments Incorporated - https://www.ti.com/ 5 + * Copyright (C) 2015-2024 Texas Instruments Incorporated - https://www.ti.com/ 6 6 * Nishanth Menon 7 7 */ 8 8 9 9 #define pr_fmt(fmt) "%s: " fmt, __func__ 10 10 11 11 #include <linux/bitmap.h> 12 + #include <linux/cpu.h> 12 13 #include <linux/debugfs.h> 13 14 #include <linux/export.h> 14 15 #include <linux/io.h> ··· 20 19 #include <linux/of.h> 21 20 #include <linux/of_platform.h> 22 21 #include <linux/platform_device.h> 22 + #include <linux/pm_qos.h> 23 23 #include <linux/property.h> 24 24 #include <linux/semaphore.h> 25 25 #include <linux/slab.h> 26 26 #include <linux/soc/ti/ti-msgmgr.h> 27 27 #include <linux/soc/ti/ti_sci_protocol.h> 28 + #include <linux/suspend.h> 29 + #include <linux/sys_soc.h> 28 30 #include <linux/reboot.h> 29 31 30 32 #include "ti_sci.h" ··· 102 98 * @minfo: Message info 103 99 * @node: list head 104 100 * @host_id: Host ID 101 + * @fw_caps: FW/SoC low power capabilities 105 102 * @users: Number of users of this instance 106 103 */ 107 104 struct ti_sci_info { ··· 119 114 struct ti_sci_xfers_info minfo; 120 115 struct list_head node; 121 116 u8 host_id; 117 + u64 fw_caps; 122 118 /* protected by ti_sci_list_mutex */ 123 119 int users; 124 120 }; ··· 1657 1651 return ret; 1658 1652 } 1659 1653 1654 + /** 1655 + * ti_sci_cmd_prepare_sleep() - Prepare system for system suspend 1656 + * @handle: pointer to TI SCI handle 1657 + * @mode: Low power mode to enter 1658 + * @ctx_lo: Low part of address for context save 1659 + * @ctx_hi: High part of address for context save 1660 + * @debug_flags: Debug flags to pass to firmware 1661 + * 1662 + * Return: 0 if all went well, else returns appropriate error value. 
1663 + */ 1664 + static int ti_sci_cmd_prepare_sleep(const struct ti_sci_handle *handle, u8 mode, 1665 + u32 ctx_lo, u32 ctx_hi, u32 debug_flags) 1666 + { 1667 + struct ti_sci_info *info; 1668 + struct ti_sci_msg_req_prepare_sleep *req; 1669 + struct ti_sci_msg_hdr *resp; 1670 + struct ti_sci_xfer *xfer; 1671 + struct device *dev; 1672 + int ret = 0; 1673 + 1674 + if (IS_ERR(handle)) 1675 + return PTR_ERR(handle); 1676 + if (!handle) 1677 + return -EINVAL; 1678 + 1679 + info = handle_to_ti_sci_info(handle); 1680 + dev = info->dev; 1681 + 1682 + xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_PREPARE_SLEEP, 1683 + TI_SCI_FLAG_REQ_ACK_ON_PROCESSED, 1684 + sizeof(*req), sizeof(*resp)); 1685 + if (IS_ERR(xfer)) { 1686 + ret = PTR_ERR(xfer); 1687 + dev_err(dev, "Message alloc failed(%d)\n", ret); 1688 + return ret; 1689 + } 1690 + 1691 + req = (struct ti_sci_msg_req_prepare_sleep *)xfer->xfer_buf; 1692 + req->mode = mode; 1693 + req->ctx_lo = ctx_lo; 1694 + req->ctx_hi = ctx_hi; 1695 + req->debug_flags = debug_flags; 1696 + 1697 + ret = ti_sci_do_xfer(info, xfer); 1698 + if (ret) { 1699 + dev_err(dev, "Mbox send fail %d\n", ret); 1700 + goto fail; 1701 + } 1702 + 1703 + resp = (struct ti_sci_msg_hdr *)xfer->xfer_buf; 1704 + 1705 + if (!ti_sci_is_response_ack(resp)) { 1706 + dev_err(dev, "Failed to prepare sleep\n"); 1707 + ret = -ENODEV; 1708 + } 1709 + 1710 + fail: 1711 + ti_sci_put_one_xfer(&info->minfo, xfer); 1712 + 1713 + return ret; 1714 + } 1715 + 1716 + /** 1717 + * ti_sci_msg_cmd_query_fw_caps() - Get the FW/SoC capabilities 1718 + * @handle: Pointer to TI SCI handle 1719 + * @fw_caps: Each bit in fw_caps indicating one FW/SOC capability 1720 + * 1721 + * Check if the firmware supports any optional low power modes. 1722 + * Old revisions of TIFS (< 08.04) will NACK the request which results in 1723 + * -ENODEV being returned. 1724 + * 1725 + * Return: 0 if all went well, else returns appropriate error value. 
1726 + */ 1727 + static int ti_sci_msg_cmd_query_fw_caps(const struct ti_sci_handle *handle, 1728 + u64 *fw_caps) 1729 + { 1730 + struct ti_sci_info *info; 1731 + struct ti_sci_xfer *xfer; 1732 + struct ti_sci_msg_resp_query_fw_caps *resp; 1733 + struct device *dev; 1734 + int ret = 0; 1735 + 1736 + if (IS_ERR(handle)) 1737 + return PTR_ERR(handle); 1738 + if (!handle) 1739 + return -EINVAL; 1740 + 1741 + info = handle_to_ti_sci_info(handle); 1742 + dev = info->dev; 1743 + 1744 + xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_QUERY_FW_CAPS, 1745 + TI_SCI_FLAG_REQ_ACK_ON_PROCESSED, 1746 + sizeof(struct ti_sci_msg_hdr), 1747 + sizeof(*resp)); 1748 + if (IS_ERR(xfer)) { 1749 + ret = PTR_ERR(xfer); 1750 + dev_err(dev, "Message alloc failed(%d)\n", ret); 1751 + return ret; 1752 + } 1753 + 1754 + ret = ti_sci_do_xfer(info, xfer); 1755 + if (ret) { 1756 + dev_err(dev, "Mbox send fail %d\n", ret); 1757 + goto fail; 1758 + } 1759 + 1760 + resp = (struct ti_sci_msg_resp_query_fw_caps *)xfer->xfer_buf; 1761 + 1762 + if (!ti_sci_is_response_ack(resp)) { 1763 + dev_err(dev, "Failed to get capabilities\n"); 1764 + ret = -ENODEV; 1765 + goto fail; 1766 + } 1767 + 1768 + if (fw_caps) 1769 + *fw_caps = resp->fw_caps; 1770 + 1771 + fail: 1772 + ti_sci_put_one_xfer(&info->minfo, xfer); 1773 + 1774 + return ret; 1775 + } 1776 + 1777 + /** 1778 + * ti_sci_cmd_set_io_isolation() - Enable IO isolation in LPM 1779 + * @handle: Pointer to TI SCI handle 1780 + * @state: The desired state of the IO isolation 1781 + * 1782 + * Return: 0 if all went well, else returns appropriate error value. 
1783 + */ 1784 + static int ti_sci_cmd_set_io_isolation(const struct ti_sci_handle *handle, 1785 + u8 state) 1786 + { 1787 + struct ti_sci_info *info; 1788 + struct ti_sci_msg_req_set_io_isolation *req; 1789 + struct ti_sci_msg_hdr *resp; 1790 + struct ti_sci_xfer *xfer; 1791 + struct device *dev; 1792 + int ret = 0; 1793 + 1794 + if (IS_ERR(handle)) 1795 + return PTR_ERR(handle); 1796 + if (!handle) 1797 + return -EINVAL; 1798 + 1799 + info = handle_to_ti_sci_info(handle); 1800 + dev = info->dev; 1801 + 1802 + xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_SET_IO_ISOLATION, 1803 + TI_SCI_FLAG_REQ_ACK_ON_PROCESSED, 1804 + sizeof(*req), sizeof(*resp)); 1805 + if (IS_ERR(xfer)) { 1806 + ret = PTR_ERR(xfer); 1807 + dev_err(dev, "Message alloc failed(%d)\n", ret); 1808 + return ret; 1809 + } 1810 + req = (struct ti_sci_msg_req_set_io_isolation *)xfer->xfer_buf; 1811 + req->state = state; 1812 + 1813 + ret = ti_sci_do_xfer(info, xfer); 1814 + if (ret) { 1815 + dev_err(dev, "Mbox send fail %d\n", ret); 1816 + goto fail; 1817 + } 1818 + 1819 + resp = (struct ti_sci_msg_hdr *)xfer->xfer_buf; 1820 + 1821 + if (!ti_sci_is_response_ack(resp)) { 1822 + dev_err(dev, "Failed to set IO isolation\n"); 1823 + ret = -ENODEV; 1824 + } 1825 + 1826 + fail: 1827 + ti_sci_put_one_xfer(&info->minfo, xfer); 1828 + 1829 + return ret; 1830 + } 1831 + 1832 + /** 1833 + * ti_sci_msg_cmd_lpm_wake_reason() - Get the wakeup source from LPM 1834 + * @handle: Pointer to TI SCI handle 1835 + * @source: The wakeup source that woke the SoC from LPM 1836 + * @timestamp: Timestamp of the wakeup event 1837 + * @pin: The pin that has triggered wake up 1838 + * @mode: The last entered low power mode 1839 + * 1840 + * Return: 0 if all went well, else returns appropriate error value. 
1841 + */ 1842 + static int ti_sci_msg_cmd_lpm_wake_reason(const struct ti_sci_handle *handle, 1843 + u32 *source, u64 *timestamp, u8 *pin, u8 *mode) 1844 + { 1845 + struct ti_sci_info *info; 1846 + struct ti_sci_xfer *xfer; 1847 + struct ti_sci_msg_resp_lpm_wake_reason *resp; 1848 + struct device *dev; 1849 + int ret = 0; 1850 + 1851 + if (IS_ERR(handle)) 1852 + return PTR_ERR(handle); 1853 + if (!handle) 1854 + return -EINVAL; 1855 + 1856 + info = handle_to_ti_sci_info(handle); 1857 + dev = info->dev; 1858 + 1859 + xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_LPM_WAKE_REASON, 1860 + TI_SCI_FLAG_REQ_ACK_ON_PROCESSED, 1861 + sizeof(struct ti_sci_msg_hdr), 1862 + sizeof(*resp)); 1863 + if (IS_ERR(xfer)) { 1864 + ret = PTR_ERR(xfer); 1865 + dev_err(dev, "Message alloc failed(%d)\n", ret); 1866 + return ret; 1867 + } 1868 + 1869 + ret = ti_sci_do_xfer(info, xfer); 1870 + if (ret) { 1871 + dev_err(dev, "Mbox send fail %d\n", ret); 1872 + goto fail; 1873 + } 1874 + 1875 + resp = (struct ti_sci_msg_resp_lpm_wake_reason *)xfer->xfer_buf; 1876 + 1877 + if (!ti_sci_is_response_ack(resp)) { 1878 + dev_err(dev, "Failed to get wake reason\n"); 1879 + ret = -ENODEV; 1880 + goto fail; 1881 + } 1882 + 1883 + if (source) 1884 + *source = resp->wake_source; 1885 + if (timestamp) 1886 + *timestamp = resp->wake_timestamp; 1887 + if (pin) 1888 + *pin = resp->wake_pin; 1889 + if (mode) 1890 + *mode = resp->mode; 1891 + 1892 + fail: 1893 + ti_sci_put_one_xfer(&info->minfo, xfer); 1894 + 1895 + return ret; 1896 + } 1897 + 1898 + /** 1899 + * ti_sci_cmd_set_device_constraint() - Set LPM constraint on behalf of a device 1900 + * @handle: pointer to TI SCI handle 1901 + * @id: Device identifier 1902 + * @state: The desired state of device constraint: set or clear 1903 + * 1904 + * Return: 0 if all went well, else returns appropriate error value. 
1905 + */ 1906 + static int ti_sci_cmd_set_device_constraint(const struct ti_sci_handle *handle, 1907 + u32 id, u8 state) 1908 + { 1909 + struct ti_sci_info *info; 1910 + struct ti_sci_msg_req_lpm_set_device_constraint *req; 1911 + struct ti_sci_msg_hdr *resp; 1912 + struct ti_sci_xfer *xfer; 1913 + struct device *dev; 1914 + int ret = 0; 1915 + 1916 + if (IS_ERR(handle)) 1917 + return PTR_ERR(handle); 1918 + if (!handle) 1919 + return -EINVAL; 1920 + 1921 + info = handle_to_ti_sci_info(handle); 1922 + dev = info->dev; 1923 + 1924 + xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_LPM_SET_DEVICE_CONSTRAINT, 1925 + TI_SCI_FLAG_REQ_ACK_ON_PROCESSED, 1926 + sizeof(*req), sizeof(*resp)); 1927 + if (IS_ERR(xfer)) { 1928 + ret = PTR_ERR(xfer); 1929 + dev_err(dev, "Message alloc failed(%d)\n", ret); 1930 + return ret; 1931 + } 1932 + req = (struct ti_sci_msg_req_lpm_set_device_constraint *)xfer->xfer_buf; 1933 + req->id = id; 1934 + req->state = state; 1935 + 1936 + ret = ti_sci_do_xfer(info, xfer); 1937 + if (ret) { 1938 + dev_err(dev, "Mbox send fail %d\n", ret); 1939 + goto fail; 1940 + } 1941 + 1942 + resp = (struct ti_sci_msg_hdr *)xfer->xfer_buf; 1943 + 1944 + if (!ti_sci_is_response_ack(resp)) { 1945 + dev_err(dev, "Failed to set device constraint\n"); 1946 + ret = -ENODEV; 1947 + } 1948 + 1949 + fail: 1950 + ti_sci_put_one_xfer(&info->minfo, xfer); 1951 + 1952 + return ret; 1953 + } 1954 + 1955 + /** 1956 + * ti_sci_cmd_set_latency_constraint() - Set LPM resume latency constraint 1957 + * @handle: pointer to TI SCI handle 1958 + * @latency: maximum acceptable latency (in ms) to wake up from LPM 1959 + * @state: The desired state of latency constraint: set or clear 1960 + * 1961 + * Return: 0 if all went well, else returns appropriate error value. 
1962 + */ 1963 + static int ti_sci_cmd_set_latency_constraint(const struct ti_sci_handle *handle, 1964 + u16 latency, u8 state) 1965 + { 1966 + struct ti_sci_info *info; 1967 + struct ti_sci_msg_req_lpm_set_latency_constraint *req; 1968 + struct ti_sci_msg_hdr *resp; 1969 + struct ti_sci_xfer *xfer; 1970 + struct device *dev; 1971 + int ret = 0; 1972 + 1973 + if (IS_ERR(handle)) 1974 + return PTR_ERR(handle); 1975 + if (!handle) 1976 + return -EINVAL; 1977 + 1978 + info = handle_to_ti_sci_info(handle); 1979 + dev = info->dev; 1980 + 1981 + xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_LPM_SET_LATENCY_CONSTRAINT, 1982 + TI_SCI_FLAG_REQ_ACK_ON_PROCESSED, 1983 + sizeof(*req), sizeof(*resp)); 1984 + if (IS_ERR(xfer)) { 1985 + ret = PTR_ERR(xfer); 1986 + dev_err(dev, "Message alloc failed(%d)\n", ret); 1987 + return ret; 1988 + } 1989 + req = (struct ti_sci_msg_req_lpm_set_latency_constraint *)xfer->xfer_buf; 1990 + req->latency = latency; 1991 + req->state = state; 1992 + 1993 + ret = ti_sci_do_xfer(info, xfer); 1994 + if (ret) { 1995 + dev_err(dev, "Mbox send fail %d\n", ret); 1996 + goto fail; 1997 + } 1998 + 1999 + resp = (struct ti_sci_msg_hdr *)xfer->xfer_buf; 2000 + 2001 + if (!ti_sci_is_response_ack(resp)) { 2002 + dev_err(dev, "Failed to set latency constraint\n"); 2003 + ret = -ENODEV; 2004 + } 2005 + 2006 + fail: 2007 + ti_sci_put_one_xfer(&info->minfo, xfer); 2008 + 2009 + return ret; 2010 + } 2011 + 1660 2012 static int ti_sci_cmd_core_reboot(const struct ti_sci_handle *handle) 1661 2013 { 1662 2014 struct ti_sci_info *info; ··· 3157 2793 struct ti_sci_core_ops *core_ops = &ops->core_ops; 3158 2794 struct ti_sci_dev_ops *dops = &ops->dev_ops; 3159 2795 struct ti_sci_clk_ops *cops = &ops->clk_ops; 2796 + struct ti_sci_pm_ops *pmops = &ops->pm_ops; 3160 2797 struct ti_sci_rm_core_ops *rm_core_ops = &ops->rm_core_ops; 3161 2798 struct ti_sci_rm_irq_ops *iops = &ops->rm_irq_ops; 3162 2799 struct ti_sci_rm_ringacc_ops *rops = &ops->rm_ring_ops; ··· 3196 2831 
cops->get_best_match_freq = ti_sci_cmd_clk_get_match_freq; 3197 2832 cops->set_freq = ti_sci_cmd_clk_set_freq; 3198 2833 cops->get_freq = ti_sci_cmd_clk_get_freq; 2834 + 2835 + if (info->fw_caps & MSG_FLAG_CAPS_LPM_DM_MANAGED) { 2836 + pr_debug("detected DM managed LPM in fw_caps\n"); 2837 + pmops->lpm_wake_reason = ti_sci_msg_cmd_lpm_wake_reason; 2838 + pmops->set_device_constraint = ti_sci_cmd_set_device_constraint; 2839 + pmops->set_latency_constraint = ti_sci_cmd_set_latency_constraint; 2840 + } 3199 2841 3200 2842 rm_core_ops->get_range = ti_sci_cmd_get_resource_range; 3201 2843 rm_core_ops->get_range_from_shost = ··· 3634 3262 return NOTIFY_BAD; 3635 3263 } 3636 3264 3265 + static int ti_sci_prepare_system_suspend(struct ti_sci_info *info) 3266 + { 3267 + /* 3268 + * Map and validate the target Linux suspend state to TISCI LPM. 3269 + * Default is to let Device Manager select the low power mode. 3270 + */ 3271 + switch (pm_suspend_target_state) { 3272 + case PM_SUSPEND_MEM: 3273 + if (info->fw_caps & MSG_FLAG_CAPS_LPM_DM_MANAGED) { 3274 + /* 3275 + * For the DM_MANAGED mode the context is reserved for 3276 + * internal use and can be 0 3277 + */ 3278 + return ti_sci_cmd_prepare_sleep(&info->handle, 3279 + TISCI_MSG_VALUE_SLEEP_MODE_DM_MANAGED, 3280 + 0, 0, 0); 3281 + } else { 3282 + /* DM Managed is not supported by the firmware. */ 3283 + dev_err(info->dev, "Suspend to memory is not supported by the firmware\n"); 3284 + return -EOPNOTSUPP; 3285 + } 3286 + break; 3287 + default: 3288 + /* 3289 + * Do not fail if we don't have action to take for a 3290 + * specific suspend mode. 
3291 + */ 3292 + return 0; 3293 + } 3294 + } 3295 + 3296 + static int __maybe_unused ti_sci_suspend(struct device *dev) 3297 + { 3298 + struct ti_sci_info *info = dev_get_drvdata(dev); 3299 + struct device *cpu_dev, *cpu_dev_max = NULL; 3300 + s32 val, cpu_lat = 0; 3301 + int i, ret; 3302 + 3303 + if (info->fw_caps & MSG_FLAG_CAPS_LPM_DM_MANAGED) { 3304 + for_each_possible_cpu(i) { 3305 + cpu_dev = get_cpu_device(i); 3306 + val = dev_pm_qos_read_value(cpu_dev, DEV_PM_QOS_RESUME_LATENCY); 3307 + if (val != PM_QOS_RESUME_LATENCY_NO_CONSTRAINT) { 3308 + cpu_lat = max(cpu_lat, val); 3309 + cpu_dev_max = cpu_dev; 3310 + } 3311 + } 3312 + if (cpu_dev_max) { 3313 + dev_dbg(cpu_dev_max, "%s: sending max CPU latency=%u\n", __func__, cpu_lat); 3314 + ret = ti_sci_cmd_set_latency_constraint(&info->handle, 3315 + cpu_lat, TISCI_MSG_CONSTRAINT_SET); 3316 + if (ret) 3317 + return ret; 3318 + } 3319 + } 3320 + 3321 + ret = ti_sci_prepare_system_suspend(info); 3322 + if (ret) 3323 + return ret; 3324 + 3325 + return 0; 3326 + } 3327 + 3328 + static int __maybe_unused ti_sci_suspend_noirq(struct device *dev) 3329 + { 3330 + struct ti_sci_info *info = dev_get_drvdata(dev); 3331 + int ret = 0; 3332 + 3333 + ret = ti_sci_cmd_set_io_isolation(&info->handle, TISCI_MSG_VALUE_IO_ENABLE); 3334 + if (ret) 3335 + return ret; 3336 + 3337 + return 0; 3338 + } 3339 + 3340 + static int __maybe_unused ti_sci_resume_noirq(struct device *dev) 3341 + { 3342 + struct ti_sci_info *info = dev_get_drvdata(dev); 3343 + int ret = 0; 3344 + u32 source; 3345 + u64 time; 3346 + u8 pin; 3347 + u8 mode; 3348 + 3349 + ret = ti_sci_cmd_set_io_isolation(&info->handle, TISCI_MSG_VALUE_IO_DISABLE); 3350 + if (ret) 3351 + return ret; 3352 + 3353 + ret = ti_sci_msg_cmd_lpm_wake_reason(&info->handle, &source, &time, &pin, &mode); 3354 + /* Do not fail to resume on error as the wake reason is not critical */ 3355 + if (!ret) 3356 + dev_info(dev, "ti_sci: wakeup source:0x%x, pin:0x%x, mode:0x%x\n", 3357 + source, pin, 
mode); 3358 + 3359 + return 0; 3360 + } 3361 + 3362 + static const struct dev_pm_ops ti_sci_pm_ops = { 3363 + #ifdef CONFIG_PM_SLEEP 3364 + .suspend = ti_sci_suspend, 3365 + .suspend_noirq = ti_sci_suspend_noirq, 3366 + .resume_noirq = ti_sci_resume_noirq, 3367 + #endif 3368 + }; 3369 + 3637 3370 /* Description for K2G */ 3638 3371 static const struct ti_sci_desc ti_sci_pmmc_k2g_desc = { 3639 3372 .default_host_id = 2, ··· 3867 3390 goto out; 3868 3391 } 3869 3392 3393 + ti_sci_msg_cmd_query_fw_caps(&info->handle, &info->fw_caps); 3394 + dev_dbg(dev, "Detected firmware capabilities: %s%s%s\n", 3395 + info->fw_caps & MSG_FLAG_CAPS_GENERIC ? "Generic" : "", 3396 + info->fw_caps & MSG_FLAG_CAPS_LPM_PARTIAL_IO ? " Partial-IO" : "", 3397 + info->fw_caps & MSG_FLAG_CAPS_LPM_DM_MANAGED ? " DM-Managed" : "" 3398 + ); 3399 + 3870 3400 ti_sci_setup_ops(info); 3871 3401 3872 3402 ret = devm_register_restart_handler(dev, tisci_reboot_handler, info); ··· 3905 3421 .probe = ti_sci_probe, 3906 3422 .driver = { 3907 3423 .name = "ti-sci", 3908 - .of_match_table = of_match_ptr(ti_sci_of_match), 3424 + .of_match_table = ti_sci_of_match, 3909 3425 .suppress_bind_attrs = true, 3426 + .pm = &ti_sci_pm_ops, 3910 3427 }, 3911 3428 }; 3912 3429 module_platform_driver(ti_sci_driver);
+142 -1
drivers/firmware/ti_sci.h
··· 6 6 * The system works in a message response protocol 7 7 * See: https://software-dl.ti.com/tisci/esd/latest/index.html for details 8 8 * 9 - * Copyright (C) 2015-2016 Texas Instruments Incorporated - https://www.ti.com/ 9 + * Copyright (C) 2015-2024 Texas Instruments Incorporated - https://www.ti.com/ 10 10 */ 11 11 12 12 #ifndef __TI_SCI_H ··· 19 19 #define TI_SCI_MSG_WAKE_REASON 0x0003 20 20 #define TI_SCI_MSG_GOODBYE 0x0004 21 21 #define TI_SCI_MSG_SYS_RESET 0x0005 22 + #define TI_SCI_MSG_QUERY_FW_CAPS 0x0022 22 23 23 24 /* Device requests */ 24 25 #define TI_SCI_MSG_SET_DEVICE_STATE 0x0200 ··· 35 34 #define TI_SCI_MSG_SET_CLOCK_FREQ 0x010c 36 35 #define TI_SCI_MSG_QUERY_CLOCK_FREQ 0x010d 37 36 #define TI_SCI_MSG_GET_CLOCK_FREQ 0x010e 37 + 38 + /* Low Power Mode Requests */ 39 + #define TI_SCI_MSG_PREPARE_SLEEP 0x0300 40 + #define TI_SCI_MSG_LPM_WAKE_REASON 0x0306 41 + #define TI_SCI_MSG_SET_IO_ISOLATION 0x0307 42 + #define TI_SCI_MSG_LPM_SET_DEVICE_CONSTRAINT 0x0309 43 + #define TI_SCI_MSG_LPM_SET_LATENCY_CONSTRAINT 0x030A 38 44 39 45 /* Resource Management Requests */ 40 46 #define TI_SCI_MSG_GET_RESOURCE_RANGE 0x1500 ··· 138 130 */ 139 131 struct ti_sci_msg_req_reboot { 140 132 struct ti_sci_msg_hdr hdr; 133 + } __packed; 134 + 135 + /** 136 + * struct ti_sci_msg_resp_query_fw_caps - Response for query firmware caps 137 + * @hdr: Generic header 138 + * @fw_caps: Each bit in fw_caps indicating one FW/SOC capability 139 + * MSG_FLAG_CAPS_GENERIC: Generic capability (LPM not supported) 140 + * MSG_FLAG_CAPS_LPM_PARTIAL_IO: Partial IO in LPM 141 + * MSG_FLAG_CAPS_LPM_DM_MANAGED: LPM can be managed by DM 142 + * 143 + * Response to a generic message with message type TI_SCI_MSG_QUERY_FW_CAPS 144 + * providing currently available SOC/firmware capabilities. SoCs that don't 145 + * support low power modes return only the MSG_FLAG_CAPS_GENERIC capability. 
146 + */ 147 + struct ti_sci_msg_resp_query_fw_caps { 148 + struct ti_sci_msg_hdr hdr; 149 + #define MSG_FLAG_CAPS_GENERIC TI_SCI_MSG_FLAG(0) 150 + #define MSG_FLAG_CAPS_LPM_PARTIAL_IO TI_SCI_MSG_FLAG(4) 151 + #define MSG_FLAG_CAPS_LPM_DM_MANAGED TI_SCI_MSG_FLAG(5) 152 + #define MSG_MASK_CAPS_LPM GENMASK_ULL(4, 1) 153 + u64 fw_caps; 141 154 } __packed; 142 155 143 156 /** ··· 572 543 struct ti_sci_msg_resp_get_clock_freq { 573 544 struct ti_sci_msg_hdr hdr; 574 545 u64 freq_hz; 546 + } __packed; 547 + 548 + /** 549 + * struct ti_sci_msg_req_prepare_sleep - Request for TI_SCI_MSG_PREPARE_SLEEP. 550 + * 551 + * @hdr: TISCI header to provide ACK/NAK flags to the host. 552 + * @mode: Low power mode to enter. 553 + * @ctx_lo: Low 32-bits of physical pointer to address to use for context save. 554 + * @ctx_hi: High 32-bits of physical pointer to address to use for context save. 555 + * @debug_flags: Flags that can be set to halt the sequence during suspend or 556 + * resume to allow JTAG connection and debug. 557 + * 558 + * This message is used as the first step of entering a low power mode. It 559 + * allows configurable information, including which state to enter, to be 560 + * easily shared from the application, as this is a non-secure message and 561 + * therefore can be sent by anyone. 562 + */ 563 + struct ti_sci_msg_req_prepare_sleep { 564 + struct ti_sci_msg_hdr hdr; 565 + 566 + #define TISCI_MSG_VALUE_SLEEP_MODE_DM_MANAGED 0xfd 567 + u8 mode; 568 + u32 ctx_lo; 569 + u32 ctx_hi; 570 + u32 debug_flags; 571 + } __packed; 572 + 573 + /** 574 + * struct ti_sci_msg_req_set_io_isolation - Request for TI_SCI_MSG_SET_IO_ISOLATION. 575 + * 576 + * @hdr: Generic header 577 + * @state: The desired state of the IO isolation. 578 + * 579 + * This message is used to enable/disable IO isolation for low power modes. 580 + * Response is generic ACK / NACK message. 
 581  + */
 582  + struct ti_sci_msg_req_set_io_isolation {
 583  + struct ti_sci_msg_hdr hdr;
 584  + u8 state;
 585  + } __packed;
 586  +
 587  + /**
 588  + * struct ti_sci_msg_resp_lpm_wake_reason - Response for TI_SCI_MSG_LPM_WAKE_REASON.
 589  + *
 590  + * @hdr: Generic header.
 591  + * @wake_source: The wake up source that woke the SoC from LPM.
 592  + * @wake_timestamp: Timestamp at which the SoC woke.
 593  + * @wake_pin: The pin that has triggered wake up.
 594  + * @mode: The last entered low power mode.
 595  + * @rsvd: Reserved for future use.
 596  + *
 597  + * Response to a generic message with message type TI_SCI_MSG_LPM_WAKE_REASON,
 598  + * used to query the wake up source, pin and entered low power mode.
 599  + */
 600  + struct ti_sci_msg_resp_lpm_wake_reason {
 601  + struct ti_sci_msg_hdr hdr;
 602  + u32 wake_source;
 603  + u64 wake_timestamp;
 604  + u8 wake_pin;
 605  + u8 mode;
 606  + u32 rsvd[2];
 607  + } __packed;
 608  +
 609  + /**
 610  + * struct ti_sci_msg_req_lpm_set_device_constraint - Request for
 611  + * TI_SCI_MSG_LPM_SET_DEVICE_CONSTRAINT.
 612  + *
 613  + * @hdr: TISCI header to provide ACK/NAK flags to the host.
 614  + * @id: Device ID of the device whose constraint has to be modified.
 615  + * @state: The desired state of the device constraint: set or clear.
 616  + * @rsvd: Reserved for future use.
 617  + *
 618  + * This message is used by the host to set a constraint on a device. It can be
 619  + * sent anytime after boot, before the prepare sleep message. Any device can
 620  + * set a constraint on the low power mode that the SoC can enter. It allows
 621  + * configurable information to be easily shared from the application, as this
 622  + * is a non-secure message and therefore can be sent by anyone. By setting a
 623  + * constraint, the device ensures that it will not be powered off or reset in
 624  + * the selected mode. Note: Access Restriction: the exclusivity flag of the
 625  + * device will be honored. If some other host already has a constraint on this
 626  + * device ID, NACK will be returned.
 627  + */
 628  + struct ti_sci_msg_req_lpm_set_device_constraint {
 629  + struct ti_sci_msg_hdr hdr;
 630  + u32 id;
 631  + u8 state;
 632  + u32 rsvd[2];
 633  + } __packed;
 634  +
 635  + /**
 636  + * struct ti_sci_msg_req_lpm_set_latency_constraint - Request for
 637  + * TI_SCI_MSG_LPM_SET_LATENCY_CONSTRAINT.
 638  + *
 639  + * @hdr: TISCI header to provide ACK/NAK flags to the host.
 640  + * @latency: The maximum acceptable latency to wake up from low power mode
 641  + * in milliseconds. The deeper the state, the higher the latency.
 642  + * @state: The desired state of the wakeup latency constraint: set or clear.
 643  + * @rsvd: Reserved for future use.
 644  + *
 645  + * This message is used by the host to set the wakeup latency from low power
 646  + * mode. It can be sent anytime after boot, before the prepare sleep message,
 647  + * and again after the current low power mode is exited. Any device can set a
 648  + * constraint on the low power mode that the SoC can enter. It allows
 649  + * configurable information to be easily shared from the application, as this
 650  + * is a non-secure message and therefore can be sent by anyone. By setting a
 651  + * wakeup latency constraint, the host ensures that the resume time from the
 652  + * selected low power mode will be less than the constraint value.
 653  + */
 654  + struct ti_sci_msg_req_lpm_set_latency_constraint {
 655  + struct ti_sci_msg_hdr hdr;
 656  + u16 latency;
 657  + u8 state;
 658  + u32 rsvd;
 575 658 } __packed;
 576 659
 577 660 #define TI_SCI_IRQ_SECONDARY_HOST_INVALID 0xff
+21 -2
drivers/firmware/turris-mox-rwtm.c
··· 61 61 MBOX_CMD_OTP_WRITE = 8, 62 62 }; 63 63 64 + /** 65 + * struct mox_rwtm - driver private data structure 66 + * @mbox_client: rWTM mailbox client 67 + * @mbox: rWTM mailbox channel 68 + * @hwrng: RNG driver structure 69 + * @reply: last mailbox reply, filled in receive callback 70 + * @buf: DMA buffer 71 + * @buf_phys: physical address of the DMA buffer 72 + * @busy: mutex to protect mailbox command execution 73 + * @cmd_done: command done completion 74 + * @has_board_info: whether board information is present 75 + * @serial_number: serial number of the device 76 + * @board_version: board version / revision of the device 77 + * @ram_size: RAM size of the device 78 + * @mac_address1: first MAC address of the device 79 + * @mac_address2: second MAC address of the device 80 + * @has_pubkey: whether board ECDSA public key is present 81 + * @pubkey: board ECDSA public key 82 + * @last_sig: last ECDSA signature generated with board ECDSA private key 83 + * @last_sig_done: whether the last ECDSA signing is complete 84 + */ 64 85 struct mox_rwtm { 65 86 struct mbox_client mbox_client; 66 87 struct mbox_chan *mbox; ··· 95 74 struct mutex busy; 96 75 struct completion cmd_done; 97 76 98 - /* board information */ 99 77 bool has_board_info; 100 78 u64 serial_number; 101 79 int board_version, ram_size; 102 80 u8 mac_address1[ETH_ALEN], mac_address2[ETH_ALEN]; 103 81 104 - /* public key burned in eFuse */ 105 82 bool has_pubkey; 106 83 u8 pubkey[135]; 107 84
+161 -1
drivers/firmware/xilinx/zynqmp-debug.c
··· 31 31 32 32 #define PM_API(id) {id, #id, strlen(#id)} 33 33 static struct pm_api_info pm_api_list[] = { 34 + PM_API(PM_FORCE_POWERDOWN), 35 + PM_API(PM_REQUEST_WAKEUP), 36 + PM_API(PM_SYSTEM_SHUTDOWN), 37 + PM_API(PM_REQUEST_NODE), 38 + PM_API(PM_RELEASE_NODE), 39 + PM_API(PM_SET_REQUIREMENT), 34 40 PM_API(PM_GET_API_VERSION), 41 + PM_API(PM_REGISTER_NOTIFIER), 42 + PM_API(PM_RESET_ASSERT), 43 + PM_API(PM_RESET_GET_STATUS), 44 + PM_API(PM_GET_CHIPID), 45 + PM_API(PM_PINCTRL_SET_FUNCTION), 46 + PM_API(PM_PINCTRL_CONFIG_PARAM_GET), 47 + PM_API(PM_PINCTRL_CONFIG_PARAM_SET), 48 + PM_API(PM_IOCTL), 49 + PM_API(PM_CLOCK_ENABLE), 50 + PM_API(PM_CLOCK_DISABLE), 51 + PM_API(PM_CLOCK_GETSTATE), 52 + PM_API(PM_CLOCK_SETDIVIDER), 53 + PM_API(PM_CLOCK_GETDIVIDER), 54 + PM_API(PM_CLOCK_SETPARENT), 55 + PM_API(PM_CLOCK_GETPARENT), 35 56 PM_API(PM_QUERY_DATA), 36 57 }; 37 58 38 59 static struct dentry *firmware_debugfs_root; 60 + 61 + /** 62 + * zynqmp_pm_ioctl - PM IOCTL for device control and configs 63 + * @node: Node ID of the device 64 + * @ioctl: ID of the requested IOCTL 65 + * @arg1: Argument 1 of requested IOCTL call 66 + * @arg2: Argument 2 of requested IOCTL call 67 + * @arg3: Argument 3 of requested IOCTL call 68 + * @out: Returned output value 69 + * 70 + * Return: Returns status, either success or error+reason 71 + */ 72 + static int zynqmp_pm_ioctl(const u32 node, const u32 ioctl, const u32 arg1, 73 + const u32 arg2, const u32 arg3, u32 *out) 74 + { 75 + return zynqmp_pm_invoke_fn(PM_IOCTL, out, 5, node, ioctl, arg1, arg2, arg3); 76 + } 39 77 40 78 /** 41 79 * zynqmp_pm_argument_value() - Extract argument value from a PM-API request ··· 133 95 sprintf(debugfs_buf, "PM-API Version = %d.%d\n", 134 96 pm_api_version >> 16, pm_api_version & 0xffff); 135 97 break; 98 + case PM_FORCE_POWERDOWN: 99 + ret = zynqmp_pm_force_pwrdwn(pm_api_arg[0], 100 + pm_api_arg[1] ? 
pm_api_arg[1] : 101 + ZYNQMP_PM_REQUEST_ACK_NO); 102 + break; 103 + case PM_REQUEST_WAKEUP: 104 + ret = zynqmp_pm_request_wake(pm_api_arg[0], 105 + pm_api_arg[1], pm_api_arg[2], 106 + pm_api_arg[3] ? pm_api_arg[3] : 107 + ZYNQMP_PM_REQUEST_ACK_NO); 108 + break; 109 + case PM_SYSTEM_SHUTDOWN: 110 + ret = zynqmp_pm_system_shutdown(pm_api_arg[0], pm_api_arg[1]); 111 + break; 112 + case PM_REQUEST_NODE: 113 + ret = zynqmp_pm_request_node(pm_api_arg[0], 114 + pm_api_arg[1] ? pm_api_arg[1] : 115 + ZYNQMP_PM_CAPABILITY_ACCESS, 116 + pm_api_arg[2] ? pm_api_arg[2] : 0, 117 + pm_api_arg[3] ? pm_api_arg[3] : 118 + ZYNQMP_PM_REQUEST_ACK_BLOCKING); 119 + break; 120 + case PM_RELEASE_NODE: 121 + ret = zynqmp_pm_release_node(pm_api_arg[0]); 122 + break; 123 + case PM_SET_REQUIREMENT: 124 + ret = zynqmp_pm_set_requirement(pm_api_arg[0], 125 + pm_api_arg[1] ? pm_api_arg[1] : 126 + ZYNQMP_PM_CAPABILITY_CONTEXT, 127 + pm_api_arg[2] ? 128 + pm_api_arg[2] : 0, 129 + pm_api_arg[3] ? pm_api_arg[3] : 130 + ZYNQMP_PM_REQUEST_ACK_BLOCKING); 131 + break; 132 + case PM_REGISTER_NOTIFIER: 133 + ret = zynqmp_pm_register_notifier(pm_api_arg[0], 134 + pm_api_arg[1] ? 135 + pm_api_arg[1] : 0, 136 + pm_api_arg[2] ? 137 + pm_api_arg[2] : 0, 138 + pm_api_arg[3] ? 
139 + pm_api_arg[3] : 0); 140 + break; 141 + case PM_RESET_ASSERT: 142 + ret = zynqmp_pm_reset_assert(pm_api_arg[0], pm_api_arg[1]); 143 + break; 144 + case PM_RESET_GET_STATUS: 145 + ret = zynqmp_pm_reset_get_status(pm_api_arg[0], &pm_api_ret[0]); 146 + if (!ret) 147 + sprintf(debugfs_buf, "Reset status: %u\n", 148 + pm_api_ret[0]); 149 + break; 150 + case PM_GET_CHIPID: 151 + ret = zynqmp_pm_get_chipid(&pm_api_ret[0], &pm_api_ret[1]); 152 + if (!ret) 153 + sprintf(debugfs_buf, "Idcode: %#x, Version:%#x\n", 154 + pm_api_ret[0], pm_api_ret[1]); 155 + break; 156 + case PM_PINCTRL_SET_FUNCTION: 157 + ret = zynqmp_pm_pinctrl_set_function(pm_api_arg[0], 158 + pm_api_arg[1]); 159 + break; 160 + case PM_PINCTRL_CONFIG_PARAM_GET: 161 + ret = zynqmp_pm_pinctrl_get_config(pm_api_arg[0], pm_api_arg[1], 162 + &pm_api_ret[0]); 163 + if (!ret) 164 + sprintf(debugfs_buf, 165 + "Pin: %llu, Param: %llu, Value: %u\n", 166 + pm_api_arg[0], pm_api_arg[1], 167 + pm_api_ret[0]); 168 + break; 169 + case PM_PINCTRL_CONFIG_PARAM_SET: 170 + ret = zynqmp_pm_pinctrl_set_config(pm_api_arg[0], 171 + pm_api_arg[1], 172 + pm_api_arg[2]); 173 + break; 174 + case PM_IOCTL: 175 + ret = zynqmp_pm_ioctl(pm_api_arg[0], pm_api_arg[1], 176 + pm_api_arg[2], pm_api_arg[3], 177 + pm_api_arg[4], &pm_api_ret[0]); 178 + if (!ret && (pm_api_arg[1] == IOCTL_GET_RPU_OPER_MODE || 179 + pm_api_arg[1] == IOCTL_GET_PLL_FRAC_MODE || 180 + pm_api_arg[1] == IOCTL_GET_PLL_FRAC_DATA || 181 + pm_api_arg[1] == IOCTL_READ_GGS || 182 + pm_api_arg[1] == IOCTL_READ_PGGS || 183 + pm_api_arg[1] == IOCTL_READ_REG)) 184 + sprintf(debugfs_buf, "IOCTL return value: %u\n", 185 + pm_api_ret[1]); 186 + if (!ret && pm_api_arg[1] == IOCTL_GET_QOS) 187 + sprintf(debugfs_buf, "Default QoS: %u\nCurrent QoS: %u\n", 188 + pm_api_ret[1], pm_api_ret[2]); 189 + break; 190 + case PM_CLOCK_ENABLE: 191 + ret = zynqmp_pm_clock_enable(pm_api_arg[0]); 192 + break; 193 + case PM_CLOCK_DISABLE: 194 + ret = zynqmp_pm_clock_disable(pm_api_arg[0]); 195 + 
break; 196 + case PM_CLOCK_GETSTATE: 197 + ret = zynqmp_pm_clock_getstate(pm_api_arg[0], &pm_api_ret[0]); 198 + if (!ret) 199 + sprintf(debugfs_buf, "Clock state: %u\n", 200 + pm_api_ret[0]); 201 + break; 202 + case PM_CLOCK_SETDIVIDER: 203 + ret = zynqmp_pm_clock_setdivider(pm_api_arg[0], pm_api_arg[1]); 204 + break; 205 + case PM_CLOCK_GETDIVIDER: 206 + ret = zynqmp_pm_clock_getdivider(pm_api_arg[0], &pm_api_ret[0]); 207 + if (!ret) 208 + sprintf(debugfs_buf, "Divider Value: %d\n", 209 + pm_api_ret[0]); 210 + break; 211 + case PM_CLOCK_SETPARENT: 212 + ret = zynqmp_pm_clock_setparent(pm_api_arg[0], pm_api_arg[1]); 213 + break; 214 + case PM_CLOCK_GETPARENT: 215 + ret = zynqmp_pm_clock_getparent(pm_api_arg[0], &pm_api_ret[0]); 216 + if (!ret) 217 + sprintf(debugfs_buf, 218 + "Clock parent Index: %u\n", pm_api_ret[0]); 219 + break; 136 220 case PM_QUERY_DATA: 137 221 qdata.qid = pm_api_arg[0]; 138 222 qdata.arg1 = pm_api_arg[1]; ··· 310 150 char *kern_buff, *tmp_buff; 311 151 char *pm_api_req; 312 152 u32 pm_id = 0; 313 - u64 pm_api_arg[4] = {0, 0, 0, 0}; 153 + u64 pm_api_arg[5] = {0, 0, 0, 0, 0}; 314 154 /* Return values from PM APIs calls */ 315 155 u32 pm_api_ret[4] = {0, 0, 0, 0}; 316 156
+144 -9
drivers/firmware/xilinx/zynqmp.c
··· 3 3 * Xilinx Zynq MPSoC Firmware layer 4 4 * 5 5 * Copyright (C) 2014-2022 Xilinx, Inc. 6 - * Copyright (C) 2022 - 2023, Advanced Micro Devices, Inc. 6 + * Copyright (C) 2022 - 2024, Advanced Micro Devices, Inc. 7 7 * 8 8 * Michal Simek <michal.simek@amd.com> 9 9 * Davorin Mista <davorin.mista@aggios.com> ··· 46 46 static u32 ioctl_features[FEATURE_PAYLOAD_SIZE]; 47 47 static u32 query_features[FEATURE_PAYLOAD_SIZE]; 48 48 49 + static u32 sip_svc_version; 49 50 static struct platform_device *em_dev; 50 51 51 52 /** ··· 152 151 ret_payload[1] = upper_32_bits(res.a0); 153 152 ret_payload[2] = lower_32_bits(res.a1); 154 153 ret_payload[3] = upper_32_bits(res.a1); 154 + ret_payload[4] = lower_32_bits(res.a2); 155 + ret_payload[5] = upper_32_bits(res.a2); 156 + ret_payload[6] = lower_32_bits(res.a3); 155 157 } 156 158 157 159 return zynqmp_pm_ret_code((enum pm_ret_status)res.a0); ··· 195 191 ret_payload[1] = upper_32_bits(res.a0); 196 192 ret_payload[2] = lower_32_bits(res.a1); 197 193 ret_payload[3] = upper_32_bits(res.a1); 194 + ret_payload[4] = lower_32_bits(res.a2); 195 + ret_payload[5] = upper_32_bits(res.a2); 196 + ret_payload[6] = lower_32_bits(res.a3); 198 197 } 199 198 200 199 return zynqmp_pm_ret_code((enum pm_ret_status)res.a0); ··· 225 218 * Feature check of TF-A APIs is done in the TF-A layer and it expects for 226 219 * MODULE_ID_MASK bits of SMC's arg[0] to be the same as PM_MODULE_ID. 
227 220 */ 228 - if (module_id == TF_A_MODULE_ID) 221 + if (module_id == TF_A_MODULE_ID) { 229 222 module_id = PM_MODULE_ID; 223 + smc_arg[1] = api_id; 224 + } else { 225 + smc_arg[1] = (api_id & API_ID_MASK); 226 + } 230 227 231 228 smc_arg[0] = PM_SIP_SVC | FIELD_PREP(MODULE_ID_MASK, module_id) | feature_check_api_id; 232 - smc_arg[1] = (api_id & API_ID_MASK); 233 229 234 230 ret = do_fw_call(ret_payload, 2, smc_arg[0], smc_arg[1]); 235 231 if (ret) ··· 340 330 return 0; 341 331 } 342 332 EXPORT_SYMBOL_GPL(zynqmp_pm_is_function_supported); 333 + 334 + /** 335 + * zynqmp_pm_invoke_fw_fn() - Invoke the system-level platform management layer 336 + * caller function depending on the configuration 337 + * @pm_api_id: Requested PM-API call 338 + * @ret_payload: Returned value array 339 + * @num_args: Number of arguments to requested PM-API call 340 + * 341 + * Invoke platform management function for SMC or HVC call, depending on 342 + * configuration. 343 + * Following SMC Calling Convention (SMCCC) for SMC64: 344 + * Pm Function Identifier, 345 + * PM_SIP_SVC + PASS_THROUGH_FW_CMD_ID = 346 + * ((SMC_TYPE_FAST << FUNCID_TYPE_SHIFT) 347 + * ((SMC_64) << FUNCID_CC_SHIFT) 348 + * ((SIP_START) << FUNCID_OEN_SHIFT) 349 + * (PASS_THROUGH_FW_CMD_ID)) 350 + * 351 + * PM_SIP_SVC - Registered ZynqMP SIP Service Call. 352 + * PASS_THROUGH_FW_CMD_ID - Fixed SiP SVC call ID for FW specific calls. 353 + * 354 + * Return: Returns status, either success or error+reason 355 + */ 356 + int zynqmp_pm_invoke_fw_fn(u32 pm_api_id, u32 *ret_payload, u32 num_args, ...) 
357 + { 358 + /* 359 + * Added SIP service call Function Identifier 360 + * Make sure to stay in x0 register 361 + */ 362 + u64 smc_arg[SMC_ARG_CNT_64]; 363 + int ret, i; 364 + va_list arg_list; 365 + u32 args[SMC_ARG_CNT_32] = {0}; 366 + u32 module_id; 367 + 368 + if (num_args > SMC_ARG_CNT_32) 369 + return -EINVAL; 370 + 371 + va_start(arg_list, num_args); 372 + 373 + /* Check if feature is supported or not */ 374 + ret = zynqmp_pm_feature(pm_api_id); 375 + if (ret < 0) 376 + return ret; 377 + 378 + for (i = 0; i < num_args; i++) 379 + args[i] = va_arg(arg_list, u32); 380 + 381 + va_end(arg_list); 382 + 383 + module_id = FIELD_GET(PLM_MODULE_ID_MASK, pm_api_id); 384 + 385 + if (module_id == 0) 386 + module_id = XPM_MODULE_ID; 387 + 388 + smc_arg[0] = PM_SIP_SVC | PASS_THROUGH_FW_CMD_ID; 389 + smc_arg[1] = ((u64)args[0] << 32U) | FIELD_PREP(PLM_MODULE_ID_MASK, module_id) | 390 + (pm_api_id & API_ID_MASK); 391 + for (i = 1; i < (SMC_ARG_CNT_64 - 1); i++) 392 + smc_arg[i + 1] = ((u64)args[(i * 2)] << 32U) | args[(i * 2) - 1]; 393 + 394 + return do_fw_call(ret_payload, 8, smc_arg[0], smc_arg[1], smc_arg[2], smc_arg[3], 395 + smc_arg[4], smc_arg[5], smc_arg[6], smc_arg[7]); 396 + } 343 397 344 398 /** 345 399 * zynqmp_pm_invoke_fn() - Invoke the system-level platform management layer ··· 563 489 EXPORT_SYMBOL_GPL(zynqmp_pm_get_family_info); 564 490 565 491 /** 492 + * zynqmp_pm_get_sip_svc_version() - Get SiP service call version 493 + * @version: Returned version value 494 + * 495 + * Return: Returns status, either success or error+reason 496 + */ 497 + static int zynqmp_pm_get_sip_svc_version(u32 *version) 498 + { 499 + struct arm_smccc_res res; 500 + u64 args[SMC_ARG_CNT_64] = {0}; 501 + 502 + if (!version) 503 + return -EINVAL; 504 + 505 + /* Check if SiP SVC version already verified */ 506 + if (sip_svc_version > 0) { 507 + *version = sip_svc_version; 508 + return 0; 509 + } 510 + 511 + args[0] = GET_SIP_SVC_VERSION; 512 + 513 + arm_smccc_smc(args[0], args[1], 
args[2], args[3], args[4], args[5], args[6], args[7], &res); 514 + 515 + *version = ((lower_32_bits(res.a0) << 16U) | lower_32_bits(res.a1)); 516 + 517 + return zynqmp_pm_ret_code(XST_PM_SUCCESS); 518 + } 519 + 520 + /** 566 521 * zynqmp_pm_get_trustzone_version() - Get secure trustzone firmware version 567 522 * @version: Returned version value 568 523 * ··· 655 552 */ 656 553 int zynqmp_pm_query_data(struct zynqmp_pm_query_data qdata, u32 *out) 657 554 { 658 - int ret; 555 + int ret, i = 0; 556 + u32 ret_payload[PAYLOAD_ARG_CNT] = {0}; 659 557 660 - ret = zynqmp_pm_invoke_fn(PM_QUERY_DATA, out, 4, qdata.qid, qdata.arg1, qdata.arg2, 661 - qdata.arg3); 558 + if (sip_svc_version >= SIP_SVC_PASSTHROUGH_VERSION) { 559 + ret = zynqmp_pm_invoke_fw_fn(PM_QUERY_DATA, ret_payload, 4, 560 + qdata.qid, qdata.arg1, 561 + qdata.arg2, qdata.arg3); 562 + /* To support backward compatibility */ 563 + if (!ret && !ret_payload[0]) { 564 + /* 565 + * TF-A passes return status on 0th index but 566 + * api to get clock name reads data from 0th 567 + * index so pass data at 0th index instead of 568 + * return status 569 + */ 570 + if (qdata.qid == PM_QID_CLOCK_GET_NAME || 571 + qdata.qid == PM_QID_PINCTRL_GET_FUNCTION_NAME) 572 + i = 1; 573 + 574 + for (; i < PAYLOAD_ARG_CNT; i++, out++) 575 + *out = ret_payload[i]; 576 + 577 + return ret; 578 + } 579 + } 580 + 581 + ret = zynqmp_pm_invoke_fn(PM_QUERY_DATA, out, 4, qdata.qid, 582 + qdata.arg1, qdata.arg2, qdata.arg3); 662 583 663 584 /* 664 585 * For clock name query, all bytes in SMC response are clock name ··· 1047 920 * 1048 921 * Return: Returns status, either success or error+reason 1049 922 */ 1050 - int zynqmp_pm_reset_assert(const enum zynqmp_pm_reset reset, 923 + int zynqmp_pm_reset_assert(const u32 reset, 1051 924 const enum zynqmp_pm_reset_action assert_flag) 1052 925 { 1053 926 return zynqmp_pm_invoke_fn(PM_RESET_ASSERT, NULL, 2, reset, assert_flag); ··· 1061 934 * 1062 935 * Return: Returns status, either success or 
error+reason 1063 936 */ 1064 - int zynqmp_pm_reset_get_status(const enum zynqmp_pm_reset reset, u32 *status) 937 + int zynqmp_pm_reset_get_status(const u32 reset, u32 *status) 1065 938 { 1066 939 u32 ret_payload[PAYLOAD_ARG_CNT]; 1067 940 int ret; ··· 1245 1118 if (pm_family_code == ZYNQMP_FAMILY_CODE && 1246 1119 param == PM_PINCTRL_CONFIG_TRI_STATE) { 1247 1120 ret = zynqmp_pm_feature(PM_PINCTRL_CONFIG_PARAM_SET); 1248 - if (ret < PM_PINCTRL_PARAM_SET_VERSION) 1121 + if (ret < PM_PINCTRL_PARAM_SET_VERSION) { 1122 + pr_warn("The requested pinctrl feature is not supported in the current firmware.\n" 1123 + "Expected firmware version is 2023.1 and above for this feature to work.\r\n"); 1249 1124 return -EOPNOTSUPP; 1125 + } 1250 1126 } 1251 1127 1252 1128 return zynqmp_pm_invoke_fn(PM_PINCTRL_CONFIG_PARAM_SET, NULL, 3, pin, param, value); ··· 2014 1884 int ret; 2015 1885 2016 1886 ret = get_set_conduit_method(dev->of_node); 1887 + if (ret) 1888 + return ret; 1889 + 1890 + /* Get SiP SVC version number */ 1891 + ret = zynqmp_pm_get_sip_svc_version(&sip_svc_version); 2017 1892 if (ret) 2018 1893 return ret; 2019 1894
+11
drivers/gpu/drm/msm/adreno/adreno_gpu.c
··· 572 572 573 573 int adreno_hw_init(struct msm_gpu *gpu) 574 574 { 575 + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); 576 + int ret; 577 + 575 578 VERB("%s", gpu->name); 579 + 580 + if (adreno_gpu->info->family >= ADRENO_6XX_GEN1 && 581 + qcom_scm_set_gpu_smmu_aperture_is_available()) { 582 + /* We currently always use context bank 0, so hard code this */ 583 + ret = qcom_scm_set_gpu_smmu_aperture(0); 584 + if (ret) 585 + DRM_DEV_ERROR(gpu->dev->dev, "unable to set SMMU aperture: %d\n", ret); 586 + } 576 587 577 588 for (int i = 0; i < gpu->nr_rings; i++) { 578 589 struct msm_ringbuffer *ring = gpu->rb[i];
+24
drivers/misc/Kconfig
···
 610 610 To compile this driver as a module, choose M here: the module
 611 611 will be called mrvl_cn10k_dpi.
 612 612
 613  + config MCHP_LAN966X_PCI
 614  + tristate "Microchip LAN966x PCIe Support"
 615  + depends on PCI
 616  + select OF
 617  + select OF_OVERLAY
 618  + select IRQ_DOMAIN
 619  + help
 620  + This enables support for the LAN966x PCIe device.
 621  +
 622  + This is used to drive the LAN966x PCIe device from the host system
 623  + to which it is connected. The driver uses a device tree overlay to
 624  + load the other drivers that support the LAN966x internal components.
 625  +
 626  + Even though this driver does not depend on those other drivers, the
 627  + following drivers are needed for a fully functional board:
 628  + - fixed-clock (COMMON_CLK)
 629  + - lan966x-oic (LAN966X_OIC)
 630  + - lan966x-cpu-syscon (MFD_SYSCON)
 631  + - lan966x-switch-reset (RESET_MCHP_SPARX5)
 632  + - lan966x-pinctrl (PINCTRL_OCELOT)
 633  + - lan966x-serdes (PHY_LAN966X_SERDES)
 634  + - lan966x-miim (MDIO_MSCC_MIIM)
 635  + - lan966x-switch (LAN966X_SWITCH)
 636  +
 613 637 source "drivers/misc/c2port/Kconfig"
 614 638 source "drivers/misc/eeprom/Kconfig"
 615 639 source "drivers/misc/cb710/Kconfig"
+3
drivers/misc/Makefile
··· 71 71 obj-$(CONFIG_TPS6594_PFSM) += tps6594-pfsm.o 72 72 obj-$(CONFIG_NSM) += nsm.o 73 73 obj-$(CONFIG_MARVELL_CN10K_DPI) += mrvl_cn10k_dpi.o 74 + lan966x-pci-objs := lan966x_pci.o 75 + lan966x-pci-objs += lan966x_pci.dtbo.o 76 + obj-$(CONFIG_MCHP_LAN966X_PCI) += lan966x-pci.o 74 77 obj-y += keba/
+215
drivers/misc/lan966x_pci.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Microchip LAN966x PCI driver 4 + * 5 + * Copyright (c) 2024 Microchip Technology Inc. and its subsidiaries. 6 + * 7 + * Authors: 8 + * Clément Léger <clement.leger@bootlin.com> 9 + * Hervé Codina <herve.codina@bootlin.com> 10 + */ 11 + 12 + #include <linux/device.h> 13 + #include <linux/irq.h> 14 + #include <linux/irqdomain.h> 15 + #include <linux/module.h> 16 + #include <linux/of_platform.h> 17 + #include <linux/pci.h> 18 + #include <linux/pci_ids.h> 19 + #include <linux/slab.h> 20 + 21 + /* Embedded dtbo symbols created by cmd_wrap_S_dtb in scripts/Makefile.lib */ 22 + extern char __dtbo_lan966x_pci_begin[]; 23 + extern char __dtbo_lan966x_pci_end[]; 24 + 25 + struct pci_dev_intr_ctrl { 26 + struct pci_dev *pci_dev; 27 + struct irq_domain *irq_domain; 28 + int irq; 29 + }; 30 + 31 + static int pci_dev_irq_domain_map(struct irq_domain *d, unsigned int virq, irq_hw_number_t hw) 32 + { 33 + irq_set_chip_and_handler(virq, &dummy_irq_chip, handle_simple_irq); 34 + return 0; 35 + } 36 + 37 + static const struct irq_domain_ops pci_dev_irq_domain_ops = { 38 + .map = pci_dev_irq_domain_map, 39 + .xlate = irq_domain_xlate_onecell, 40 + }; 41 + 42 + static irqreturn_t pci_dev_irq_handler(int irq, void *data) 43 + { 44 + struct pci_dev_intr_ctrl *intr_ctrl = data; 45 + int ret; 46 + 47 + ret = generic_handle_domain_irq(intr_ctrl->irq_domain, 0); 48 + return ret ? 
IRQ_NONE : IRQ_HANDLED; 49 + } 50 + 51 + static struct pci_dev_intr_ctrl *pci_dev_create_intr_ctrl(struct pci_dev *pdev) 52 + { 53 + struct pci_dev_intr_ctrl *intr_ctrl __free(kfree) = NULL; 54 + struct fwnode_handle *fwnode; 55 + int ret; 56 + 57 + fwnode = dev_fwnode(&pdev->dev); 58 + if (!fwnode) 59 + return ERR_PTR(-ENODEV); 60 + 61 + intr_ctrl = kmalloc(sizeof(*intr_ctrl), GFP_KERNEL); 62 + if (!intr_ctrl) 63 + return ERR_PTR(-ENOMEM); 64 + 65 + intr_ctrl->pci_dev = pdev; 66 + 67 + intr_ctrl->irq_domain = irq_domain_create_linear(fwnode, 1, &pci_dev_irq_domain_ops, 68 + intr_ctrl); 69 + if (!intr_ctrl->irq_domain) { 70 + pci_err(pdev, "Failed to create irqdomain\n"); 71 + return ERR_PTR(-ENOMEM); 72 + } 73 + 74 + ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_INTX); 75 + if (ret < 0) { 76 + pci_err(pdev, "Unable alloc irq vector (%d)\n", ret); 77 + goto err_remove_domain; 78 + } 79 + intr_ctrl->irq = pci_irq_vector(pdev, 0); 80 + ret = request_irq(intr_ctrl->irq, pci_dev_irq_handler, IRQF_SHARED, 81 + pci_name(pdev), intr_ctrl); 82 + if (ret) { 83 + pci_err(pdev, "Unable to request irq %d (%d)\n", intr_ctrl->irq, ret); 84 + goto err_free_irq_vector; 85 + } 86 + 87 + return_ptr(intr_ctrl); 88 + 89 + err_free_irq_vector: 90 + pci_free_irq_vectors(pdev); 91 + err_remove_domain: 92 + irq_domain_remove(intr_ctrl->irq_domain); 93 + return ERR_PTR(ret); 94 + } 95 + 96 + static void pci_dev_remove_intr_ctrl(struct pci_dev_intr_ctrl *intr_ctrl) 97 + { 98 + free_irq(intr_ctrl->irq, intr_ctrl); 99 + pci_free_irq_vectors(intr_ctrl->pci_dev); 100 + irq_dispose_mapping(irq_find_mapping(intr_ctrl->irq_domain, 0)); 101 + irq_domain_remove(intr_ctrl->irq_domain); 102 + kfree(intr_ctrl); 103 + } 104 + 105 + static void devm_pci_dev_remove_intr_ctrl(void *intr_ctrl) 106 + { 107 + pci_dev_remove_intr_ctrl(intr_ctrl); 108 + } 109 + 110 + static int devm_pci_dev_create_intr_ctrl(struct pci_dev *pdev) 111 + { 112 + struct pci_dev_intr_ctrl *intr_ctrl; 113 + 114 + intr_ctrl = 
pci_dev_create_intr_ctrl(pdev); 115 + if (IS_ERR(intr_ctrl)) 116 + return PTR_ERR(intr_ctrl); 117 + 118 + return devm_add_action_or_reset(&pdev->dev, devm_pci_dev_remove_intr_ctrl, intr_ctrl); 119 + } 120 + 121 + struct lan966x_pci { 122 + struct device *dev; 123 + int ovcs_id; 124 + }; 125 + 126 + static int lan966x_pci_load_overlay(struct lan966x_pci *data) 127 + { 128 + u32 dtbo_size = __dtbo_lan966x_pci_end - __dtbo_lan966x_pci_begin; 129 + void *dtbo_start = __dtbo_lan966x_pci_begin; 130 + 131 + return of_overlay_fdt_apply(dtbo_start, dtbo_size, &data->ovcs_id, dev_of_node(data->dev)); 132 + } 133 + 134 + static void lan966x_pci_unload_overlay(struct lan966x_pci *data) 135 + { 136 + of_overlay_remove(&data->ovcs_id); 137 + } 138 + 139 + static int lan966x_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) 140 + { 141 + struct device *dev = &pdev->dev; 142 + struct lan966x_pci *data; 143 + int ret; 144 + 145 + /* 146 + * On ACPI system, fwnode can point to the ACPI node. 147 + * This driver needs an of_node to be used as the device-tree overlay 148 + * target. This of_node should be set by the PCI core if it succeeds in 149 + * creating it (CONFIG_PCI_DYNAMIC_OF_NODES feature). 150 + * Check here for the validity of this of_node. 151 + */ 152 + if (!dev_of_node(dev)) 153 + return dev_err_probe(dev, -EINVAL, "Missing of_node for device\n"); 154 + 155 + /* Need to be done before devm_pci_dev_create_intr_ctrl. 156 + * It allocates an IRQ and so pdev->irq is updated. 
157 + */ 158 + ret = pcim_enable_device(pdev); 159 + if (ret) 160 + return ret; 161 + 162 + ret = devm_pci_dev_create_intr_ctrl(pdev); 163 + if (ret) 164 + return ret; 165 + 166 + data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL); 167 + if (!data) 168 + return -ENOMEM; 169 + 170 + pci_set_drvdata(pdev, data); 171 + data->dev = dev; 172 + 173 + ret = lan966x_pci_load_overlay(data); 174 + if (ret) 175 + return ret; 176 + 177 + pci_set_master(pdev); 178 + 179 + ret = of_platform_default_populate(dev_of_node(dev), NULL, dev); 180 + if (ret) 181 + goto err_unload_overlay; 182 + 183 + return 0; 184 + 185 + err_unload_overlay: 186 + lan966x_pci_unload_overlay(data); 187 + return ret; 188 + } 189 + 190 + static void lan966x_pci_remove(struct pci_dev *pdev) 191 + { 192 + struct lan966x_pci *data = pci_get_drvdata(pdev); 193 + 194 + of_platform_depopulate(data->dev); 195 + 196 + lan966x_pci_unload_overlay(data); 197 + } 198 + 199 + static struct pci_device_id lan966x_pci_ids[] = { 200 + { PCI_DEVICE(PCI_VENDOR_ID_EFAR, 0x9660) }, 201 + { } 202 + }; 203 + MODULE_DEVICE_TABLE(pci, lan966x_pci_ids); 204 + 205 + static struct pci_driver lan966x_pci_driver = { 206 + .name = "mchp_lan966x_pci", 207 + .id_table = lan966x_pci_ids, 208 + .probe = lan966x_pci_probe, 209 + .remove = lan966x_pci_remove, 210 + }; 211 + module_pci_driver(lan966x_pci_driver); 212 + 213 + MODULE_AUTHOR("Herve Codina <herve.codina@bootlin.com>"); 214 + MODULE_DESCRIPTION("Microchip LAN966x PCI driver"); 215 + MODULE_LICENSE("GPL");
+177
drivers/misc/lan966x_pci.dtso
···
 1  + // SPDX-License-Identifier: GPL-2.0
 2  + /*
 3  + * Copyright (C) 2022 Microchip UNG
 4  + */
 5  +
 6  + #include <dt-bindings/clock/microchip,lan966x.h>
 7  + #include <dt-bindings/gpio/gpio.h>
 8  + #include <dt-bindings/interrupt-controller/irq.h>
 9  + #include <dt-bindings/mfd/atmel-flexcom.h>
 10  + #include <dt-bindings/phy/phy-lan966x-serdes.h>
 11  +
 12  + /dts-v1/;
 13  + /plugin/;
 14  +
 15  + / {
 16  + fragment@0 {
 17  + target-path = "";
 18  +
 19  + /*
 20  + * These properties avoid dtc warnings.
 21  + * The real interrupt controller is the PCI device itself. It
 22  + * is the node on which the device tree overlay will be applied,
 23  + * and it has these properties.
 24  + */
 25  + #interrupt-cells = <1>;
 26  + interrupt-controller;
 27  +
 28  + __overlay__ {
 29  + #address-cells = <3>;
 30  + #size-cells = <2>;
 31  +
 32  + cpu_clk: clock-600000000 {
 33  + compatible = "fixed-clock";
 34  + #clock-cells = <0>;
 35  + clock-frequency = <600000000>; /* CPU clock = 600MHz */
 36  + };
 37  +
 38  + ddr_clk: clock-30000000 {
 39  + compatible = "fixed-clock";
 40  + #clock-cells = <0>;
 41  + clock-frequency = <30000000>; /* Fabric clock = 30MHz */
 42  + };
 43  +
 44  + sys_clk: clock-15625000 {
 45  + compatible = "fixed-clock";
 46  + #clock-cells = <0>;
 47  + clock-frequency = <15625000>; /* System clock = 15.625MHz */
 48  + };
 49  +
 50  + pci-ep-bus@0 {
 51  + compatible = "simple-bus";
 52  + #address-cells = <1>;
 53  + #size-cells = <1>;
 54  +
 55  + /*
 56  + * map @0xe2000000 (32MB) to BAR0 (CPU)
 57  + * map @0xe0000000 (16MB) to BAR1 (AMBA)
 58  + */
 59  + ranges = <0xe2000000 0x00 0x00 0x00 0x2000000
 60  + 0xe0000000 0x01 0x00 0x00 0x1000000>;
 61  +
 62  + oic: oic@e00c0120 {
 63  + compatible = "microchip,lan966x-oic";
 64  + #interrupt-cells = <2>;
 65  + interrupt-controller;
 66  + interrupts = <0>; /* PCI INTx assigned interrupt */
 67  + reg = <0xe00c0120 0x190>;
 68  + };
 69  +
 70  + cpu_ctrl: syscon@e00c0000 {
 71  + compatible = "microchip,lan966x-cpu-syscon", "syscon";
 72  + reg = <0xe00c0000 0xa8>;
 73  + };
 74  +
 75  +
reset: reset@e200400c { 76 + compatible = "microchip,lan966x-switch-reset"; 77 + reg = <0xe200400c 0x4>, <0xe00c0000 0xa8>; 78 + reg-names = "gcb","cpu"; 79 + #reset-cells = <1>; 80 + cpu-syscon = <&cpu_ctrl>; 81 + }; 82 + 83 + gpio: pinctrl@e2004064 { 84 + compatible = "microchip,lan966x-pinctrl"; 85 + reg = <0xe2004064 0xb4>, 86 + <0xe2010024 0x138>; 87 + resets = <&reset 0>; 88 + reset-names = "switch"; 89 + gpio-controller; 90 + #gpio-cells = <2>; 91 + gpio-ranges = <&gpio 0 0 78>; 92 + interrupt-parent = <&oic>; 93 + interrupt-controller; 94 + interrupts = <17 IRQ_TYPE_LEVEL_HIGH>; 95 + #interrupt-cells = <2>; 96 + 97 + tod_pins: tod_pins { 98 + pins = "GPIO_36"; 99 + function = "ptpsync_1"; 100 + }; 101 + 102 + fc0_a_pins: fcb4-i2c-pins { 103 + /* RXD, TXD */ 104 + pins = "GPIO_9", "GPIO_10"; 105 + function = "fc0_a"; 106 + }; 107 + 108 + }; 109 + 110 + serdes: serdes@e202c000 { 111 + compatible = "microchip,lan966x-serdes"; 112 + reg = <0xe202c000 0x9c>, 113 + <0xe2004010 0x4>; 114 + #phy-cells = <2>; 115 + }; 116 + 117 + mdio1: mdio@e200413c { 118 + #address-cells = <1>; 119 + #size-cells = <0>; 120 + compatible = "microchip,lan966x-miim"; 121 + reg = <0xe200413c 0x24>, 122 + <0xe2010020 0x4>; 123 + 124 + resets = <&reset 0>; 125 + reset-names = "switch"; 126 + 127 + lan966x_phy0: ethernet-lan966x_phy@1 { 128 + reg = <1>; 129 + }; 130 + 131 + lan966x_phy1: ethernet-lan966x_phy@2 { 132 + reg = <2>; 133 + }; 134 + }; 135 + 136 + switch: switch@e0000000 { 137 + compatible = "microchip,lan966x-switch"; 138 + reg = <0xe0000000 0x0100000>, 139 + <0xe2000000 0x0800000>; 140 + reg-names = "cpu", "gcb"; 141 + 142 + interrupt-parent = <&oic>; 143 + interrupts = <12 IRQ_TYPE_LEVEL_HIGH>, 144 + <9 IRQ_TYPE_LEVEL_HIGH>; 145 + interrupt-names = "xtr", "ana"; 146 + 147 + resets = <&reset 0>; 148 + reset-names = "switch"; 149 + 150 + pinctrl-names = "default"; 151 + pinctrl-0 = <&tod_pins>; 152 + 153 + ethernet-ports { 154 + #address-cells = <1>; 155 + #size-cells = <0>; 
156 + 157 + port0: port@0 { 158 + phy-handle = <&lan966x_phy0>; 159 + 160 + reg = <0>; 161 + phy-mode = "gmii"; 162 + phys = <&serdes 0 CU(0)>; 163 + }; 164 + 165 + port1: port@1 { 166 + phy-handle = <&lan966x_phy1>; 167 + 168 + reg = <1>; 169 + phy-mode = "gmii"; 170 + phys = <&serdes 1 CU(1)>; 171 + }; 172 + }; 173 + }; 174 + }; 175 + }; 176 + }; 177 + };
+1
drivers/pci/quirks.c
··· 6266 6266 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_XILINX, 0x5020, of_pci_make_dev_node); 6267 6267 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_XILINX, 0x5021, of_pci_make_dev_node); 6268 6268 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_REDHAT, 0x0005, of_pci_make_dev_node); 6269 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_EFAR, 0x9660, of_pci_make_dev_node); 6269 6270 6270 6271 /* 6271 6272 * Devices known to require a longer delay before first config space access
+2 -2
drivers/platform/cznic/turris-omnia-mcu-gpio.c
··· 28 28 #define OMNIA_CMD_INT_ARG_LEN 8 29 29 #define FRONT_BUTTON_RELEASE_DELAY_MS 50 30 30 31 - static const char * const omnia_mcu_gpio_templates[64] = { 31 + static const char * const omnia_mcu_gpio_names[64] = { 32 32 /* GPIOs with value read from the 16-bit wide status */ 33 33 [4] = "MiniPCIe0 Card Detect", 34 34 [5] = "MiniPCIe0 mSATA Indicator", ··· 1018 1018 mcu->gc.set_multiple = omnia_gpio_set_multiple; 1019 1019 mcu->gc.init_valid_mask = omnia_gpio_init_valid_mask; 1020 1020 mcu->gc.can_sleep = true; 1021 - mcu->gc.names = omnia_mcu_gpio_templates; 1021 + mcu->gc.names = omnia_mcu_gpio_names; 1022 1022 mcu->gc.base = -1; 1023 1023 mcu->gc.ngpio = ARRAY_SIZE(omnia_gpios); 1024 1024 mcu->gc.label = "Turris Omnia MCU GPIOs";
+36 -6
drivers/platform/cznic/turris-omnia-mcu.h
··· 23 23 struct i2c_client; 24 24 struct rtc_device; 25 25 26 + /** 27 + * struct omnia_mcu - driver private data structure 28 + * @client: I2C client 29 + * @type: MCU type (STM32, GD32, MKL, or unknown) 30 + * @features: bitmap of features supported by the MCU firmware 31 + * @board_serial_number: board serial number, if stored in MCU 32 + * @board_first_mac: board first MAC address, if stored in MCU 33 + * @board_revision: board revision, if stored in MCU 34 + * @gc: GPIO chip 35 + * @lock: mutex to protect internal GPIO chip state 36 + * @mask: bitmap of masked IRQs 37 + * @rising: bitmap of rising edge IRQs 38 + * @falling: bitmap of falling edge IRQs 39 + * @both: bitmap of both edges IRQs 40 + * @cached: bitmap of cached IRQ line values (when an IRQ line is configured for 41 + * both edges, we cache the corresponding GPIO values in the IRQ 42 + * handler) 43 + * @is_cached: bitmap of which IRQ line values are cached 44 + * @button_release_emul_work: front button release emulation work, used with old MCU firmware 45 + * versions which did not send button release events, only button press 46 + * events 47 + * @last_status: cached value of the status word, to be compared with new value to 48 + * determine which interrupt events occurred, used with old MCU 49 + * firmware versions which only informed that the status word changed, 50 + * but not which bits of the status word changed 51 + * @button_pressed_emul: the front button is still emulated to be pressed 52 + * @rtcdev: RTC device, does not actually count real-time, the device is only 53 + * used for the RTC alarm mechanism, so that the board can be 54 + * configured to wake up from poweroff state at a specific time 55 + * @rtc_alarm: RTC alarm that was set for the board to wake up on, in MCU time 56 + * (seconds since last MCU reset) 57 + * @front_button_poweron: the front button should power on the device after it is powered off 58 + * @wdt: watchdog driver structure 59 + * @trng: RNG driver structure 60 
+ * @trng_entropy_ready: RNG entropy ready completion 61 + */ 26 62 struct omnia_mcu { 27 63 struct i2c_client *client; 28 64 const char *type; 29 65 u32 features; 30 66 31 - /* board information */ 32 67 u64 board_serial_number; 33 68 u8 board_first_mac[ETH_ALEN]; 34 69 u8 board_revision; 35 70 36 71 #ifdef CONFIG_TURRIS_OMNIA_MCU_GPIO 37 - /* GPIO chip */ 38 72 struct gpio_chip gc; 39 73 struct mutex lock; 40 74 unsigned long mask, rising, falling, both, cached, is_cached; 41 - /* Old MCU firmware handling needs the following */ 42 75 struct delayed_work button_release_emul_work; 43 76 unsigned long last_status; 44 77 bool button_pressed_emul; 45 78 #endif 46 79 47 80 #ifdef CONFIG_TURRIS_OMNIA_MCU_SYSOFF_WAKEUP 48 - /* RTC device for configuring wake-up */ 49 81 struct rtc_device *rtcdev; 50 82 u32 rtc_alarm; 51 83 bool front_button_poweron; 52 84 #endif 53 85 54 86 #ifdef CONFIG_TURRIS_OMNIA_MCU_WATCHDOG 55 - /* MCU watchdog */ 56 87 struct watchdog_device wdt; 57 88 #endif 58 89 59 90 #ifdef CONFIG_TURRIS_OMNIA_MCU_TRNG 60 - /* true random number generator */ 61 91 struct hwrng trng; 62 92 struct completion trng_entropy_ready; 63 93 #endif
+3 -16
drivers/reset/Kconfig
··· 146 146 This enables the reset controller driver for NXP LPC18xx/43xx SoCs. 147 147 148 148 config RESET_MCHP_SPARX5 149 - bool "Microchip Sparx5 reset driver" 150 - depends on ARCH_SPARX5 || SOC_LAN966 || COMPILE_TEST 149 + tristate "Microchip Sparx5 reset driver" 150 + depends on ARCH_SPARX5 || SOC_LAN966 || MCHP_LAN966X_PCI || COMPILE_TEST 151 151 default y if SPARX5_SWITCH 152 152 select MFD_SYSCON 153 153 help 154 154 This driver supports switch core reset for the Microchip Sparx5 SoC. 155 - 156 - config RESET_MESON 157 - tristate "Meson Reset Driver" 158 - depends on ARCH_MESON || COMPILE_TEST 159 - default ARCH_MESON 160 - help 161 - This enables the reset driver for Amlogic Meson SoCs. 162 - 163 - config RESET_MESON_AUDIO_ARB 164 - tristate "Meson Audio Memory Arbiter Reset Driver" 165 - depends on ARCH_MESON || COMPILE_TEST 166 - help 167 - This enables the reset driver for Audio Memory Arbiter of 168 - Amlogic's A113 based SoCs 169 155 170 156 config RESET_NPCM 171 157 bool "NPCM BMC Reset Driver" if COMPILE_TEST ··· 342 356 help 343 357 This enables the reset controller driver for Xilinx ZynqMP SoCs. 344 358 359 + source "drivers/reset/amlogic/Kconfig" 345 360 source "drivers/reset/starfive/Kconfig" 346 361 source "drivers/reset/sti/Kconfig" 347 362 source "drivers/reset/hisilicon/Kconfig"
+1 -2
drivers/reset/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 obj-y += core.o 3 + obj-y += amlogic/ 3 4 obj-y += hisilicon/ 4 5 obj-y += starfive/ 5 6 obj-y += sti/ ··· 22 21 obj-$(CONFIG_RESET_LANTIQ) += reset-lantiq.o 23 22 obj-$(CONFIG_RESET_LPC18XX) += reset-lpc18xx.o 24 23 obj-$(CONFIG_RESET_MCHP_SPARX5) += reset-microchip-sparx5.o 25 - obj-$(CONFIG_RESET_MESON) += reset-meson.o 26 - obj-$(CONFIG_RESET_MESON_AUDIO_ARB) += reset-meson-audio-arb.o 27 24 obj-$(CONFIG_RESET_NPCM) += reset-npcm.o 28 25 obj-$(CONFIG_RESET_NUVOTON_MA35D1) += reset-ma35d1.o 29 26 obj-$(CONFIG_RESET_PISTACHIO) += reset-pistachio.o
+27
drivers/reset/amlogic/Kconfig
··· 1 + config RESET_MESON_COMMON 2 + tristate 3 + select REGMAP 4 + 5 + config RESET_MESON 6 + tristate "Meson Reset Driver" 7 + depends on ARCH_MESON || COMPILE_TEST 8 + default ARCH_MESON 9 + select REGMAP_MMIO 10 + select RESET_MESON_COMMON 11 + help 12 + This enables the reset driver for Amlogic SoCs. 13 + 14 + config RESET_MESON_AUX 15 + tristate "Meson Reset Auxiliary Driver" 16 + depends on ARCH_MESON || COMPILE_TEST 17 + select AUXILIARY_BUS 18 + select RESET_MESON_COMMON 19 + help 20 + This enables the reset auxiliary driver for Amlogic SoCs. 21 + 22 + config RESET_MESON_AUDIO_ARB 23 + tristate "Meson Audio Memory Arbiter Reset Driver" 24 + depends on ARCH_MESON || COMPILE_TEST 25 + help 26 + This enables the reset driver for Audio Memory Arbiter of 27 + Amlogic's A113 based SoCs
+4
drivers/reset/amlogic/Makefile
··· 1 + obj-$(CONFIG_RESET_MESON) += reset-meson.o 2 + obj-$(CONFIG_RESET_MESON_AUX) += reset-meson-aux.o 3 + obj-$(CONFIG_RESET_MESON_COMMON) += reset-meson-common.o 4 + obj-$(CONFIG_RESET_MESON_AUDIO_ARB) += reset-meson-audio-arb.o
+136
drivers/reset/amlogic/reset-meson-aux.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause 2 + /* 3 + * Amlogic Meson Reset Auxiliary driver 4 + * 5 + * Copyright (c) 2024 BayLibre, SAS. 6 + * Author: Jerome Brunet <jbrunet@baylibre.com> 7 + */ 8 + 9 + #include <linux/err.h> 10 + #include <linux/module.h> 11 + #include <linux/auxiliary_bus.h> 12 + #include <linux/regmap.h> 13 + #include <linux/reset-controller.h> 14 + #include <linux/slab.h> 15 + 16 + #include "reset-meson.h" 17 + #include <soc/amlogic/reset-meson-aux.h> 18 + 19 + static DEFINE_IDA(meson_rst_aux_ida); 20 + 21 + struct meson_reset_adev { 22 + struct auxiliary_device adev; 23 + struct regmap *map; 24 + }; 25 + 26 + #define to_meson_reset_adev(_adev) \ 27 + container_of((_adev), struct meson_reset_adev, adev) 28 + 29 + static const struct meson_reset_param meson_g12a_audio_param = { 30 + .reset_ops = &meson_reset_toggle_ops, 31 + .reset_num = 26, 32 + .level_offset = 0x24, 33 + }; 34 + 35 + static const struct meson_reset_param meson_sm1_audio_param = { 36 + .reset_ops = &meson_reset_toggle_ops, 37 + .reset_num = 39, 38 + .level_offset = 0x28, 39 + }; 40 + 41 + static const struct auxiliary_device_id meson_reset_aux_ids[] = { 42 + { 43 + .name = "axg-audio-clkc.rst-g12a", 44 + .driver_data = (kernel_ulong_t)&meson_g12a_audio_param, 45 + }, { 46 + .name = "axg-audio-clkc.rst-sm1", 47 + .driver_data = (kernel_ulong_t)&meson_sm1_audio_param, 48 + }, {} 49 + }; 50 + MODULE_DEVICE_TABLE(auxiliary, meson_reset_aux_ids); 51 + 52 + static int meson_reset_aux_probe(struct auxiliary_device *adev, 53 + const struct auxiliary_device_id *id) 54 + { 55 + const struct meson_reset_param *param = 56 + (const struct meson_reset_param *)(id->driver_data); 57 + struct meson_reset_adev *raux = 58 + to_meson_reset_adev(adev); 59 + 60 + return meson_reset_controller_register(&adev->dev, raux->map, param); 61 + } 62 + 63 + static struct auxiliary_driver meson_reset_aux_driver = { 64 + .probe = meson_reset_aux_probe, 65 + .id_table = meson_reset_aux_ids, 
66 + }; 67 + module_auxiliary_driver(meson_reset_aux_driver); 68 + 69 + static void meson_rst_aux_release(struct device *dev) 70 + { 71 + struct auxiliary_device *adev = to_auxiliary_dev(dev); 72 + struct meson_reset_adev *raux = 73 + to_meson_reset_adev(adev); 74 + 75 + ida_free(&meson_rst_aux_ida, adev->id); 76 + kfree(raux); 77 + } 78 + 79 + static void meson_rst_aux_unregister_adev(void *_adev) 80 + { 81 + struct auxiliary_device *adev = _adev; 82 + 83 + auxiliary_device_delete(adev); 84 + auxiliary_device_uninit(adev); 85 + } 86 + 87 + int devm_meson_rst_aux_register(struct device *dev, 88 + struct regmap *map, 89 + const char *adev_name) 90 + { 91 + struct meson_reset_adev *raux; 92 + struct auxiliary_device *adev; 93 + int ret; 94 + 95 + raux = kzalloc(sizeof(*raux), GFP_KERNEL); 96 + if (!raux) 97 + return -ENOMEM; 98 + 99 + ret = ida_alloc(&meson_rst_aux_ida, GFP_KERNEL); 100 + if (ret < 0) 101 + goto raux_free; 102 + 103 + raux->map = map; 104 + 105 + adev = &raux->adev; 106 + adev->id = ret; 107 + adev->name = adev_name; 108 + adev->dev.parent = dev; 109 + adev->dev.release = meson_rst_aux_release; 110 + device_set_of_node_from_dev(&adev->dev, dev); 111 + 112 + ret = auxiliary_device_init(adev); 113 + if (ret) 114 + goto ida_free; 115 + 116 + ret = __auxiliary_device_add(adev, dev->driver->name); 117 + if (ret) { 118 + auxiliary_device_uninit(adev); 119 + return ret; 120 + } 121 + 122 + return devm_add_action_or_reset(dev, meson_rst_aux_unregister_adev, 123 + adev); 124 + 125 + ida_free: 126 + ida_free(&meson_rst_aux_ida, adev->id); 127 + raux_free: 128 + kfree(raux); 129 + return ret; 130 + } 131 + EXPORT_SYMBOL_GPL(devm_meson_rst_aux_register); 132 + 133 + MODULE_DESCRIPTION("Amlogic Meson Reset Auxiliary driver"); 134 + MODULE_AUTHOR("Jerome Brunet <jbrunet@baylibre.com>"); 135 + MODULE_LICENSE("Dual BSD/GPL"); 136 + MODULE_IMPORT_NS(MESON_RESET);
+142
drivers/reset/amlogic/reset-meson-common.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause 2 + /* 3 + * Amlogic Meson Reset core functions 4 + * 5 + * Copyright (c) 2016-2024 BayLibre, SAS. 6 + * Authors: Neil Armstrong <narmstrong@baylibre.com> 7 + * Jerome Brunet <jbrunet@baylibre.com> 8 + */ 9 + 10 + #include <linux/device.h> 11 + #include <linux/module.h> 12 + #include <linux/regmap.h> 13 + #include <linux/reset-controller.h> 14 + 15 + #include "reset-meson.h" 16 + 17 + struct meson_reset { 18 + const struct meson_reset_param *param; 19 + struct reset_controller_dev rcdev; 20 + struct regmap *map; 21 + }; 22 + 23 + static void meson_reset_offset_and_bit(struct meson_reset *data, 24 + unsigned long id, 25 + unsigned int *offset, 26 + unsigned int *bit) 27 + { 28 + unsigned int stride = regmap_get_reg_stride(data->map); 29 + 30 + *offset = (id / (stride * BITS_PER_BYTE)) * stride; 31 + *bit = id % (stride * BITS_PER_BYTE); 32 + } 33 + 34 + static int meson_reset_reset(struct reset_controller_dev *rcdev, 35 + unsigned long id) 36 + { 37 + struct meson_reset *data = 38 + container_of(rcdev, struct meson_reset, rcdev); 39 + unsigned int offset, bit; 40 + 41 + meson_reset_offset_and_bit(data, id, &offset, &bit); 42 + offset += data->param->reset_offset; 43 + 44 + return regmap_write(data->map, offset, BIT(bit)); 45 + } 46 + 47 + static int meson_reset_level(struct reset_controller_dev *rcdev, 48 + unsigned long id, bool assert) 49 + { 50 + struct meson_reset *data = 51 + container_of(rcdev, struct meson_reset, rcdev); 52 + unsigned int offset, bit; 53 + 54 + meson_reset_offset_and_bit(data, id, &offset, &bit); 55 + offset += data->param->level_offset; 56 + assert ^= data->param->level_low_reset; 57 + 58 + return regmap_update_bits(data->map, offset, 59 + BIT(bit), assert ? 
BIT(bit) : 0); 60 + } 61 + 62 + static int meson_reset_status(struct reset_controller_dev *rcdev, 63 + unsigned long id) 64 + { 65 + struct meson_reset *data = 66 + container_of(rcdev, struct meson_reset, rcdev); 67 + unsigned int val, offset, bit; 68 + 69 + meson_reset_offset_and_bit(data, id, &offset, &bit); 70 + offset += data->param->level_offset; 71 + 72 + regmap_read(data->map, offset, &val); 73 + val = !!(BIT(bit) & val); 74 + 75 + return val ^ data->param->level_low_reset; 76 + } 77 + 78 + static int meson_reset_assert(struct reset_controller_dev *rcdev, 79 + unsigned long id) 80 + { 81 + return meson_reset_level(rcdev, id, true); 82 + } 83 + 84 + static int meson_reset_deassert(struct reset_controller_dev *rcdev, 85 + unsigned long id) 86 + { 87 + return meson_reset_level(rcdev, id, false); 88 + } 89 + 90 + static int meson_reset_level_toggle(struct reset_controller_dev *rcdev, 91 + unsigned long id) 92 + { 93 + int ret; 94 + 95 + ret = meson_reset_assert(rcdev, id); 96 + if (ret) 97 + return ret; 98 + 99 + return meson_reset_deassert(rcdev, id); 100 + } 101 + 102 + const struct reset_control_ops meson_reset_ops = { 103 + .reset = meson_reset_reset, 104 + .assert = meson_reset_assert, 105 + .deassert = meson_reset_deassert, 106 + .status = meson_reset_status, 107 + }; 108 + EXPORT_SYMBOL_NS_GPL(meson_reset_ops, MESON_RESET); 109 + 110 + const struct reset_control_ops meson_reset_toggle_ops = { 111 + .reset = meson_reset_level_toggle, 112 + .assert = meson_reset_assert, 113 + .deassert = meson_reset_deassert, 114 + .status = meson_reset_status, 115 + }; 116 + EXPORT_SYMBOL_NS_GPL(meson_reset_toggle_ops, MESON_RESET); 117 + 118 + int meson_reset_controller_register(struct device *dev, struct regmap *map, 119 + const struct meson_reset_param *param) 120 + { 121 + struct meson_reset *data; 122 + 123 + data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL); 124 + if (!data) 125 + return -ENOMEM; 126 + 127 + data->param = param; 128 + data->map = map; 129 + 
data->rcdev.owner = dev->driver->owner; 130 + data->rcdev.nr_resets = param->reset_num; 131 + data->rcdev.ops = data->param->reset_ops; 132 + data->rcdev.of_node = dev->of_node; 133 + 134 + return devm_reset_controller_register(dev, &data->rcdev); 135 + } 136 + EXPORT_SYMBOL_NS_GPL(meson_reset_controller_register, MESON_RESET); 137 + 138 + MODULE_DESCRIPTION("Amlogic Meson Reset Core function"); 139 + MODULE_AUTHOR("Neil Armstrong <narmstrong@baylibre.com>"); 140 + MODULE_AUTHOR("Jerome Brunet <jbrunet@baylibre.com>"); 141 + MODULE_LICENSE("Dual BSD/GPL"); 142 + MODULE_IMPORT_NS(MESON_RESET);
+105
drivers/reset/amlogic/reset-meson.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause 2 + /* 3 + * Amlogic Meson Reset Controller driver 4 + * 5 + * Copyright (c) 2016-2024 BayLibre, SAS. 6 + * Authors: Neil Armstrong <narmstrong@baylibre.com> 7 + * Jerome Brunet <jbrunet@baylibre.com> 8 + */ 9 + 10 + #include <linux/err.h> 11 + #include <linux/io.h> 12 + #include <linux/of.h> 13 + #include <linux/module.h> 14 + #include <linux/platform_device.h> 15 + #include <linux/regmap.h> 16 + #include <linux/reset-controller.h> 17 + 18 + #include "reset-meson.h" 19 + 20 + static const struct meson_reset_param meson8b_param = { 21 + .reset_ops = &meson_reset_ops, 22 + .reset_num = 256, 23 + .reset_offset = 0x0, 24 + .level_offset = 0x7c, 25 + .level_low_reset = true, 26 + }; 27 + 28 + static const struct meson_reset_param meson_a1_param = { 29 + .reset_ops = &meson_reset_ops, 30 + .reset_num = 96, 31 + .reset_offset = 0x0, 32 + .level_offset = 0x40, 33 + .level_low_reset = true, 34 + }; 35 + 36 + static const struct meson_reset_param meson_s4_param = { 37 + .reset_ops = &meson_reset_ops, 38 + .reset_num = 192, 39 + .reset_offset = 0x0, 40 + .level_offset = 0x40, 41 + .level_low_reset = true, 42 + }; 43 + 44 + static const struct meson_reset_param t7_param = { 45 + .reset_num = 224, 46 + .reset_offset = 0x0, 47 + .level_offset = 0x40, 48 + .level_low_reset = true, 49 + }; 50 + 51 + static const struct of_device_id meson_reset_dt_ids[] = { 52 + { .compatible = "amlogic,meson8b-reset", .data = &meson8b_param}, 53 + { .compatible = "amlogic,meson-gxbb-reset", .data = &meson8b_param}, 54 + { .compatible = "amlogic,meson-axg-reset", .data = &meson8b_param}, 55 + { .compatible = "amlogic,meson-a1-reset", .data = &meson_a1_param}, 56 + { .compatible = "amlogic,meson-s4-reset", .data = &meson_s4_param}, 57 + { .compatible = "amlogic,c3-reset", .data = &meson_s4_param}, 58 + { .compatible = "amlogic,t7-reset", .data = &t7_param}, 59 + { /* sentinel */ }, 60 + }; 61 + MODULE_DEVICE_TABLE(of, meson_reset_dt_ids); 
62 + 63 + static const struct regmap_config regmap_config = { 64 + .reg_bits = 32, 65 + .val_bits = 32, 66 + .reg_stride = 4, 67 + }; 68 + 69 + static int meson_reset_probe(struct platform_device *pdev) 70 + { 71 + const struct meson_reset_param *param; 72 + struct device *dev = &pdev->dev; 73 + struct regmap *map; 74 + void __iomem *base; 75 + 76 + base = devm_platform_ioremap_resource(pdev, 0); 77 + if (IS_ERR(base)) 78 + return PTR_ERR(base); 79 + 80 + param = device_get_match_data(dev); 81 + if (!param) 82 + return -ENODEV; 83 + 84 + map = devm_regmap_init_mmio(dev, base, &regmap_config); 85 + if (IS_ERR(map)) 86 + return dev_err_probe(dev, PTR_ERR(map), 87 + "can't init regmap mmio region\n"); 88 + 89 + return meson_reset_controller_register(dev, map, param); 90 + } 91 + 92 + static struct platform_driver meson_reset_driver = { 93 + .probe = meson_reset_probe, 94 + .driver = { 95 + .name = "meson_reset", 96 + .of_match_table = meson_reset_dt_ids, 97 + }, 98 + }; 99 + module_platform_driver(meson_reset_driver); 100 + 101 + MODULE_DESCRIPTION("Amlogic Meson Reset Controller driver"); 102 + MODULE_AUTHOR("Neil Armstrong <narmstrong@baylibre.com>"); 103 + MODULE_AUTHOR("Jerome Brunet <jbrunet@baylibre.com>"); 104 + MODULE_LICENSE("Dual BSD/GPL"); 105 + MODULE_IMPORT_NS(MESON_RESET);
+28
drivers/reset/amlogic/reset-meson.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */ 2 + /* 3 + * Copyright (c) 2024 BayLibre, SAS. 4 + * Author: Jerome Brunet <jbrunet@baylibre.com> 5 + */ 6 + 7 + #ifndef __MESON_RESET_H 8 + #define __MESON_RESET_H 9 + 10 + #include <linux/module.h> 11 + #include <linux/regmap.h> 12 + #include <linux/reset-controller.h> 13 + 14 + struct meson_reset_param { 15 + const struct reset_control_ops *reset_ops; 16 + unsigned int reset_num; 17 + unsigned int reset_offset; 18 + unsigned int level_offset; 19 + bool level_low_reset; 20 + }; 21 + 22 + int meson_reset_controller_register(struct device *dev, struct regmap *map, 23 + const struct meson_reset_param *param); 24 + 25 + extern const struct reset_control_ops meson_reset_ops; 26 + extern const struct reset_control_ops meson_reset_toggle_ops; 27 + 28 + #endif /* __MESON_RESET_H */
+86 -33
drivers/reset/core.c
··· 773 773 774 774 static struct reset_control * 775 775 __reset_control_get_internal(struct reset_controller_dev *rcdev, 776 - unsigned int index, bool shared, bool acquired) 776 + unsigned int index, enum reset_control_flags flags) 777 777 { 778 + bool shared = flags & RESET_CONTROL_FLAGS_BIT_SHARED; 779 + bool acquired = flags & RESET_CONTROL_FLAGS_BIT_ACQUIRED; 778 780 struct reset_control *rstc; 779 781 780 782 lockdep_assert_held(&reset_list_mutex); 783 + 784 + /* Expect callers to filter out OPTIONAL and DEASSERTED bits */ 785 + if (WARN_ON(flags & ~(RESET_CONTROL_FLAGS_BIT_SHARED | 786 + RESET_CONTROL_FLAGS_BIT_ACQUIRED))) 787 + return ERR_PTR(-EINVAL); 781 788 782 789 list_for_each_entry(rstc, &rcdev->reset_control_head, list) { 783 790 if (rstc->id == index) { ··· 1001 994 1002 995 struct reset_control * 1003 996 __of_reset_control_get(struct device_node *node, const char *id, int index, 1004 - bool shared, bool optional, bool acquired) 997 + enum reset_control_flags flags) 1005 998 { 999 + bool optional = flags & RESET_CONTROL_FLAGS_BIT_OPTIONAL; 1006 1000 bool gpio_fallback = false; 1007 1001 struct reset_control *rstc; 1008 1002 struct reset_controller_dev *rcdev; ··· 1067 1059 goto out_unlock; 1068 1060 } 1069 1061 1062 + flags &= ~RESET_CONTROL_FLAGS_BIT_OPTIONAL; 1063 + 1070 1064 /* reset_list_mutex also protects the rcdev's reset_control list */ 1071 - rstc = __reset_control_get_internal(rcdev, rstc_id, shared, acquired); 1065 + rstc = __reset_control_get_internal(rcdev, rstc_id, flags); 1072 1066 1073 1067 out_unlock: 1074 1068 mutex_unlock(&reset_list_mutex); ··· 1101 1091 1102 1092 static struct reset_control * 1103 1093 __reset_control_get_from_lookup(struct device *dev, const char *con_id, 1104 - bool shared, bool optional, bool acquired) 1094 + enum reset_control_flags flags) 1105 1095 { 1096 + bool optional = flags & RESET_CONTROL_FLAGS_BIT_OPTIONAL; 1106 1097 const struct reset_control_lookup *lookup; 1107 1098 struct reset_controller_dev 
*rcdev; 1108 1099 const char *dev_id = dev_name(dev); ··· 1127 1116 return ERR_PTR(-EPROBE_DEFER); 1128 1117 } 1129 1118 1119 + flags &= ~RESET_CONTROL_FLAGS_BIT_OPTIONAL; 1120 + 1130 1121 rstc = __reset_control_get_internal(rcdev, 1131 1122 lookup->index, 1132 - shared, acquired); 1123 + flags); 1133 1124 mutex_unlock(&reset_list_mutex); 1134 1125 break; 1135 1126 } ··· 1146 1133 } 1147 1134 1148 1135 struct reset_control *__reset_control_get(struct device *dev, const char *id, 1149 - int index, bool shared, bool optional, 1150 - bool acquired) 1136 + int index, enum reset_control_flags flags) 1151 1137 { 1138 + bool shared = flags & RESET_CONTROL_FLAGS_BIT_SHARED; 1139 + bool acquired = flags & RESET_CONTROL_FLAGS_BIT_ACQUIRED; 1140 + 1152 1141 if (WARN_ON(shared && acquired)) 1153 1142 return ERR_PTR(-EINVAL); 1154 1143 1155 1144 if (dev->of_node) 1156 - return __of_reset_control_get(dev->of_node, id, index, shared, 1157 - optional, acquired); 1145 + return __of_reset_control_get(dev->of_node, id, index, flags); 1158 1146 1159 - return __reset_control_get_from_lookup(dev, id, shared, optional, 1160 - acquired); 1147 + return __reset_control_get_from_lookup(dev, id, flags); 1161 1148 } 1162 1149 EXPORT_SYMBOL_GPL(__reset_control_get); 1163 1150 1164 1151 int __reset_control_bulk_get(struct device *dev, int num_rstcs, 1165 1152 struct reset_control_bulk_data *rstcs, 1166 - bool shared, bool optional, bool acquired) 1153 + enum reset_control_flags flags) 1167 1154 { 1168 1155 int ret, i; 1169 1156 1170 1157 for (i = 0; i < num_rstcs; i++) { 1171 - rstcs[i].rstc = __reset_control_get(dev, rstcs[i].id, 0, 1172 - shared, optional, acquired); 1158 + rstcs[i].rstc = __reset_control_get(dev, rstcs[i].id, 0, flags); 1173 1159 if (IS_ERR(rstcs[i].rstc)) { 1174 1160 ret = PTR_ERR(rstcs[i].rstc); 1175 1161 goto err; ··· 1236 1224 reset_control_put(*(struct reset_control **)res); 1237 1225 } 1238 1226 1227 + static void devm_reset_control_release_deasserted(struct device 
*dev, void *res) 1228 + { 1229 + struct reset_control *rstc = *(struct reset_control **)res; 1230 + 1231 + reset_control_assert(rstc); 1232 + reset_control_put(rstc); 1233 + } 1234 + 1239 1235 struct reset_control * 1240 1236 __devm_reset_control_get(struct device *dev, const char *id, int index, 1241 - bool shared, bool optional, bool acquired) 1237 + enum reset_control_flags flags) 1242 1238 { 1243 1239 struct reset_control **ptr, *rstc; 1240 + bool deasserted = flags & RESET_CONTROL_FLAGS_BIT_DEASSERTED; 1244 1241 1245 - ptr = devres_alloc(devm_reset_control_release, sizeof(*ptr), 1242 + ptr = devres_alloc(deasserted ? devm_reset_control_release_deasserted : 1243 + devm_reset_control_release, sizeof(*ptr), 1246 1244 GFP_KERNEL); 1247 1245 if (!ptr) 1248 1246 return ERR_PTR(-ENOMEM); 1249 1247 1250 - rstc = __reset_control_get(dev, id, index, shared, optional, acquired); 1248 + flags &= ~RESET_CONTROL_FLAGS_BIT_DEASSERTED; 1249 + 1250 + rstc = __reset_control_get(dev, id, index, flags); 1251 1251 if (IS_ERR_OR_NULL(rstc)) { 1252 1252 devres_free(ptr); 1253 1253 return rstc; 1254 + } 1255 + 1256 + if (deasserted) { 1257 + int ret; 1258 + 1259 + ret = reset_control_deassert(rstc); 1260 + if (ret) { 1261 + reset_control_put(rstc); 1262 + devres_free(ptr); 1263 + return ERR_PTR(ret); 1264 + } 1254 1265 } 1255 1266 1256 1267 *ptr = rstc; ··· 1295 1260 reset_control_bulk_put(devres->num_rstcs, devres->rstcs); 1296 1261 } 1297 1262 1263 + static void devm_reset_control_bulk_release_deasserted(struct device *dev, void *res) 1264 + { 1265 + struct reset_control_bulk_devres *devres = res; 1266 + 1267 + reset_control_bulk_assert(devres->num_rstcs, devres->rstcs); 1268 + reset_control_bulk_put(devres->num_rstcs, devres->rstcs); 1269 + } 1270 + 1298 1271 int __devm_reset_control_bulk_get(struct device *dev, int num_rstcs, 1299 1272 struct reset_control_bulk_data *rstcs, 1300 - bool shared, bool optional, bool acquired) 1273 + enum reset_control_flags flags) 1301 1274 { 1302 
1275 struct reset_control_bulk_devres *ptr; 1276 + bool deasserted = flags & RESET_CONTROL_FLAGS_BIT_DEASSERTED; 1303 1277 int ret; 1304 1278 1305 - ptr = devres_alloc(devm_reset_control_bulk_release, sizeof(*ptr), 1279 + ptr = devres_alloc(deasserted ? devm_reset_control_bulk_release_deasserted : 1280 + devm_reset_control_bulk_release, sizeof(*ptr), 1306 1281 GFP_KERNEL); 1307 1282 if (!ptr) 1308 1283 return -ENOMEM; 1309 1284 1310 - ret = __reset_control_bulk_get(dev, num_rstcs, rstcs, shared, optional, acquired); 1285 + flags &= ~RESET_CONTROL_FLAGS_BIT_DEASSERTED; 1286 + 1287 + ret = __reset_control_bulk_get(dev, num_rstcs, rstcs, flags); 1311 1288 if (ret < 0) { 1312 1289 devres_free(ptr); 1313 1290 return ret; 1291 + } 1292 + 1293 + if (deasserted) { 1294 + ret = reset_control_bulk_deassert(num_rstcs, rstcs); 1295 + if (ret) { 1296 + reset_control_bulk_put(num_rstcs, rstcs); 1297 + devres_free(ptr); 1298 + return ret; 1299 + } 1314 1300 } 1315 1301 1316 1302 ptr->num_rstcs = num_rstcs; ··· 1354 1298 */ 1355 1299 int __device_reset(struct device *dev, bool optional) 1356 1300 { 1301 + enum reset_control_flags flags; 1357 1302 struct reset_control *rstc; 1358 1303 int ret; 1359 1304 ··· 1370 1313 } 1371 1314 #endif 1372 1315 1373 - rstc = __reset_control_get(dev, NULL, 0, 0, optional, true); 1316 + flags = optional ? RESET_CONTROL_OPTIONAL_EXCLUSIVE : RESET_CONTROL_EXCLUSIVE; 1317 + rstc = __reset_control_get(dev, NULL, 0, flags); 1374 1318 if (IS_ERR(rstc)) 1375 1319 return PTR_ERR(rstc); 1376 1320 ··· 1414 1356 * device node. 
1415 1357 * 1416 1358 * @np: device node for the device that requests the reset controls array 1417 - * @shared: whether reset controls are shared or not 1418 - * @optional: whether it is optional to get the reset controls 1419 - * @acquired: only one reset control may be acquired for a given controller 1420 - * and ID 1359 + * @flags: whether reset controls are shared, optional, acquired 1421 1360 * 1422 1361 * Returns pointer to allocated reset_control on success or error on failure 1423 1362 */ 1424 1363 struct reset_control * 1425 - of_reset_control_array_get(struct device_node *np, bool shared, bool optional, 1426 - bool acquired) 1364 + of_reset_control_array_get(struct device_node *np, enum reset_control_flags flags) 1427 1365 { 1366 + bool optional = flags & RESET_CONTROL_FLAGS_BIT_OPTIONAL; 1428 1367 struct reset_control_array *resets; 1429 1368 struct reset_control *rstc; 1430 1369 int num, i; ··· 1436 1381 resets->num_rstcs = num; 1437 1382 1438 1383 for (i = 0; i < num; i++) { 1439 - rstc = __of_reset_control_get(np, NULL, i, shared, optional, 1440 - acquired); 1384 + rstc = __of_reset_control_get(np, NULL, i, flags); 1441 1385 if (IS_ERR(rstc)) 1442 1386 goto err_rst; 1443 1387 resets->rstc[i] = rstc; ··· 1461 1407 * devm_reset_control_array_get - Resource managed reset control array get 1462 1408 * 1463 1409 * @dev: device that requests the list of reset controls 1464 - * @shared: whether reset controls are shared or not 1465 - * @optional: whether it is optional to get the reset controls 1410 + * @flags: whether reset controls are shared, optional, acquired 1466 1411 * 1467 1412 * The reset control array APIs are intended for a list of resets 1468 1413 * that just have to be asserted or deasserted, without any ··· 1470 1417 * Returns pointer to allocated reset_control on success or error on failure 1471 1418 */ 1472 1419 struct reset_control * 1473 - devm_reset_control_array_get(struct device *dev, bool shared, bool optional) 1420 + 
devm_reset_control_array_get(struct device *dev, enum reset_control_flags flags) 1474 1421 { 1475 1422 struct reset_control **ptr, *rstc; 1476 1423 ··· 1479 1426 if (!ptr) 1480 1427 return ERR_PTR(-ENOMEM); 1481 1428 1482 - rstc = of_reset_control_array_get(dev->of_node, shared, optional, true); 1429 + rstc = of_reset_control_array_get(dev->of_node, flags); 1483 1430 if (IS_ERR_OR_NULL(rstc)) { 1484 1431 devres_free(ptr); 1485 1432 return rstc;
drivers/reset/reset-meson-audio-arb.c drivers/reset/amlogic/reset-meson-audio-arb.c
-159
drivers/reset/reset-meson.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause 2 - /* 3 - * Amlogic Meson Reset Controller driver 4 - * 5 - * Copyright (c) 2016 BayLibre, SAS. 6 - * Author: Neil Armstrong <narmstrong@baylibre.com> 7 - */ 8 - #include <linux/err.h> 9 - #include <linux/init.h> 10 - #include <linux/io.h> 11 - #include <linux/of.h> 12 - #include <linux/module.h> 13 - #include <linux/platform_device.h> 14 - #include <linux/reset-controller.h> 15 - #include <linux/slab.h> 16 - #include <linux/types.h> 17 - 18 - #define BITS_PER_REG 32 19 - 20 - struct meson_reset_param { 21 - int reg_count; 22 - int level_offset; 23 - }; 24 - 25 - struct meson_reset { 26 - void __iomem *reg_base; 27 - const struct meson_reset_param *param; 28 - struct reset_controller_dev rcdev; 29 - spinlock_t lock; 30 - }; 31 - 32 - static int meson_reset_reset(struct reset_controller_dev *rcdev, 33 - unsigned long id) 34 - { 35 - struct meson_reset *data = 36 - container_of(rcdev, struct meson_reset, rcdev); 37 - unsigned int bank = id / BITS_PER_REG; 38 - unsigned int offset = id % BITS_PER_REG; 39 - void __iomem *reg_addr = data->reg_base + (bank << 2); 40 - 41 - writel(BIT(offset), reg_addr); 42 - 43 - return 0; 44 - } 45 - 46 - static int meson_reset_level(struct reset_controller_dev *rcdev, 47 - unsigned long id, bool assert) 48 - { 49 - struct meson_reset *data = 50 - container_of(rcdev, struct meson_reset, rcdev); 51 - unsigned int bank = id / BITS_PER_REG; 52 - unsigned int offset = id % BITS_PER_REG; 53 - void __iomem *reg_addr; 54 - unsigned long flags; 55 - u32 reg; 56 - 57 - reg_addr = data->reg_base + data->param->level_offset + (bank << 2); 58 - 59 - spin_lock_irqsave(&data->lock, flags); 60 - 61 - reg = readl(reg_addr); 62 - if (assert) 63 - writel(reg & ~BIT(offset), reg_addr); 64 - else 65 - writel(reg | BIT(offset), reg_addr); 66 - 67 - spin_unlock_irqrestore(&data->lock, flags); 68 - 69 - return 0; 70 - } 71 - 72 - static int meson_reset_assert(struct reset_controller_dev *rcdev, 73 - 
unsigned long id) 74 - { 75 - return meson_reset_level(rcdev, id, true); 76 - } 77 - 78 - static int meson_reset_deassert(struct reset_controller_dev *rcdev, 79 - unsigned long id) 80 - { 81 - return meson_reset_level(rcdev, id, false); 82 - } 83 - 84 - static const struct reset_control_ops meson_reset_ops = { 85 - .reset = meson_reset_reset, 86 - .assert = meson_reset_assert, 87 - .deassert = meson_reset_deassert, 88 - }; 89 - 90 - static const struct meson_reset_param meson8b_param = { 91 - .reg_count = 8, 92 - .level_offset = 0x7c, 93 - }; 94 - 95 - static const struct meson_reset_param meson_a1_param = { 96 - .reg_count = 3, 97 - .level_offset = 0x40, 98 - }; 99 - 100 - static const struct meson_reset_param meson_s4_param = { 101 - .reg_count = 6, 102 - .level_offset = 0x40, 103 - }; 104 - 105 - static const struct meson_reset_param t7_param = { 106 - .reg_count = 7, 107 - .level_offset = 0x40, 108 - }; 109 - 110 - static const struct of_device_id meson_reset_dt_ids[] = { 111 - { .compatible = "amlogic,meson8b-reset", .data = &meson8b_param}, 112 - { .compatible = "amlogic,meson-gxbb-reset", .data = &meson8b_param}, 113 - { .compatible = "amlogic,meson-axg-reset", .data = &meson8b_param}, 114 - { .compatible = "amlogic,meson-a1-reset", .data = &meson_a1_param}, 115 - { .compatible = "amlogic,meson-s4-reset", .data = &meson_s4_param}, 116 - { .compatible = "amlogic,c3-reset", .data = &meson_s4_param}, 117 - { .compatible = "amlogic,t7-reset", .data = &t7_param}, 118 - { /* sentinel */ }, 119 - }; 120 - MODULE_DEVICE_TABLE(of, meson_reset_dt_ids); 121 - 122 - static int meson_reset_probe(struct platform_device *pdev) 123 - { 124 - struct meson_reset *data; 125 - 126 - data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL); 127 - if (!data) 128 - return -ENOMEM; 129 - 130 - data->reg_base = devm_platform_ioremap_resource(pdev, 0); 131 - if (IS_ERR(data->reg_base)) 132 - return PTR_ERR(data->reg_base); 133 - 134 - data->param = 
of_device_get_match_data(&pdev->dev); 135 - if (!data->param) 136 - return -ENODEV; 137 - 138 - spin_lock_init(&data->lock); 139 - 140 - data->rcdev.owner = THIS_MODULE; 141 - data->rcdev.nr_resets = data->param->reg_count * BITS_PER_REG; 142 - data->rcdev.ops = &meson_reset_ops; 143 - data->rcdev.of_node = pdev->dev.of_node; 144 - 145 - return devm_reset_controller_register(&pdev->dev, &data->rcdev); 146 - } 147 - 148 - static struct platform_driver meson_reset_driver = { 149 - .probe = meson_reset_probe, 150 - .driver = { 151 - .name = "meson_reset", 152 - .of_match_table = meson_reset_dt_ids, 153 - }, 154 - }; 155 - module_platform_driver(meson_reset_driver); 156 - 157 - MODULE_DESCRIPTION("Amlogic Meson Reset Controller driver"); 158 - MODULE_AUTHOR("Neil Armstrong <narmstrong@baylibre.com>"); 159 - MODULE_LICENSE("Dual BSD/GPL");
+37 -1
drivers/reset/reset-microchip-sparx5.c
··· 62 62 .reset = sparx5_reset_noop, 63 63 }; 64 64 65 + static const struct regmap_config mchp_lan966x_syscon_regmap_config = { 66 + .reg_bits = 32, 67 + .val_bits = 32, 68 + .reg_stride = 4, 69 + }; 70 + 71 + static struct regmap *mchp_lan966x_syscon_to_regmap(struct device *dev, 72 + struct device_node *syscon_np) 73 + { 74 + struct regmap_config regmap_config = mchp_lan966x_syscon_regmap_config; 75 + resource_size_t size; 76 + void __iomem *base; 77 + 78 + base = devm_of_iomap(dev, syscon_np, 0, &size); 79 + if (IS_ERR(base)) 80 + return ERR_CAST(base); 81 + 82 + regmap_config.max_register = size - 4; 83 + 84 + return devm_regmap_init_mmio(dev, base, &regmap_config); 85 + } 86 + 65 87 static int mchp_sparx5_map_syscon(struct platform_device *pdev, char *name, 66 88 struct regmap **target) 67 89 { ··· 94 72 syscon_np = of_parse_phandle(pdev->dev.of_node, name, 0); 95 73 if (!syscon_np) 96 74 return -ENODEV; 97 - regmap = syscon_node_to_regmap(syscon_np); 75 + 76 + /* 77 + * The syscon API doesn't support syscon device removal. 78 + * When used in LAN966x PCI device, the cpu-syscon device needs to be 79 + * removed when the PCI device is removed. 80 + * In case of LAN966x, map the syscon device locally to support the 81 + * device removal. 
82 + */ 83 + if (of_device_is_compatible(pdev->dev.of_node, "microchip,lan966x-switch-reset")) 84 + regmap = mchp_lan966x_syscon_to_regmap(&pdev->dev, syscon_np); 85 + else 86 + regmap = syscon_node_to_regmap(syscon_np); 98 87 of_node_put(syscon_np); 99 88 if (IS_ERR(regmap)) { 100 89 err = PTR_ERR(regmap); ··· 154 121 return err; 155 122 156 123 ctx->rcdev.owner = THIS_MODULE; 124 + ctx->rcdev.dev = &pdev->dev; 157 125 ctx->rcdev.nr_resets = 1; 158 126 ctx->rcdev.ops = &sparx5_reset_ops; 159 127 ctx->rcdev.of_node = dn; ··· 192 158 }, 193 159 { } 194 160 }; 161 + MODULE_DEVICE_TABLE(of, mchp_sparx5_reset_of_match); 195 162 196 163 static struct platform_driver mchp_sparx5_reset_driver = { 197 164 .probe = mchp_sparx5_reset_probe, ··· 215 180 216 181 MODULE_DESCRIPTION("Microchip Sparx5 switch reset driver"); 217 182 MODULE_AUTHOR("Steen Hegelund <steen.hegelund@microchip.com>"); 183 + MODULE_LICENSE("GPL");
+5 -19
drivers/reset/reset-uniphier-glue.c
··· 35 35 clk_bulk_disable_unprepare(priv->data->nclks, priv->clk); 36 36 } 37 37 38 - static void uniphier_rst_assert(void *_priv) 39 - { 40 - struct uniphier_glue_reset_priv *priv = _priv; 41 - 42 - reset_control_bulk_assert(priv->data->nrsts, priv->rst); 43 - } 44 - 45 38 static int uniphier_glue_reset_probe(struct platform_device *pdev) 46 39 { 47 40 struct device *dev = &pdev->dev; ··· 61 68 if (ret) 62 69 return ret; 63 70 64 - for (i = 0; i < priv->data->nrsts; i++) 65 - priv->rst[i].id = priv->data->reset_names[i]; 66 - ret = devm_reset_control_bulk_get_shared(dev, priv->data->nrsts, 67 - priv->rst); 68 - if (ret) 69 - return ret; 70 - 71 71 ret = clk_bulk_prepare_enable(priv->data->nclks, priv->clk); 72 72 if (ret) 73 73 return ret; ··· 69 83 if (ret) 70 84 return ret; 71 85 72 - ret = reset_control_bulk_deassert(priv->data->nrsts, priv->rst); 73 - if (ret) 74 - return ret; 75 - 76 - ret = devm_add_action_or_reset(dev, uniphier_rst_assert, priv); 86 + for (i = 0; i < priv->data->nrsts; i++) 87 + priv->rst[i].id = priv->data->reset_names[i]; 88 + ret = devm_reset_control_bulk_get_shared_deasserted(dev, 89 + priv->data->nrsts, 90 + priv->rst); 77 91 if (ret) 78 92 return ret; 79 93
+1 -1
drivers/soc/aspeed/aspeed-lpc-ctrl.c
··· 353 353 .of_match_table = aspeed_lpc_ctrl_match, 354 354 }, 355 355 .probe = aspeed_lpc_ctrl_probe, 356 - .remove_new = aspeed_lpc_ctrl_remove, 356 + .remove = aspeed_lpc_ctrl_remove, 357 357 }; 358 358 359 359 module_platform_driver(aspeed_lpc_ctrl_driver);
+1 -1
drivers/soc/aspeed/aspeed-lpc-snoop.c
··· 366 366 .of_match_table = aspeed_lpc_snoop_match, 367 367 }, 368 368 .probe = aspeed_lpc_snoop_probe, 369 - .remove_new = aspeed_lpc_snoop_remove, 369 + .remove = aspeed_lpc_snoop_remove, 370 370 }; 371 371 372 372 module_platform_driver(aspeed_lpc_snoop_driver);
+1 -1
drivers/soc/aspeed/aspeed-p2a-ctrl.c
··· 431 431 .of_match_table = aspeed_p2a_ctrl_match, 432 432 }, 433 433 .probe = aspeed_p2a_ctrl_probe, 434 - .remove_new = aspeed_p2a_ctrl_remove, 434 + .remove = aspeed_p2a_ctrl_remove, 435 435 }; 436 436 437 437 module_platform_driver(aspeed_p2a_ctrl_driver);
+1 -1
drivers/soc/aspeed/aspeed-uart-routing.c
··· 589 589 .of_match_table = aspeed_uart_routing_table, 590 590 }, 591 591 .probe = aspeed_uart_routing_probe, 592 - .remove_new = aspeed_uart_routing_remove, 592 + .remove = aspeed_uart_routing_remove, 593 593 }; 594 594 595 595 module_platform_driver(aspeed_uart_routing_driver);
+1 -1
drivers/soc/fsl/dpaa2-console.c
··· 320 320 .of_match_table = dpaa2_console_match_table, 321 321 }, 322 322 .probe = dpaa2_console_probe, 323 - .remove_new = dpaa2_console_remove, 323 + .remove = dpaa2_console_remove, 324 324 }; 325 325 module_platform_driver(dpaa2_console_driver); 326 326
+4 -2
drivers/soc/fsl/qe/qmc.c
··· 2004 2004 2005 2005 /* Set the irq handler */ 2006 2006 irq = platform_get_irq(pdev, 0); 2007 - if (irq < 0) 2007 + if (irq < 0) { 2008 + ret = irq; 2008 2009 goto err_exit_xcc; 2010 + } 2009 2011 ret = devm_request_irq(qmc->dev, irq, qmc_irq_handler, 0, "qmc", qmc); 2010 2012 if (ret < 0) 2011 2013 goto err_exit_xcc; ··· 2094 2092 .of_match_table = of_match_ptr(qmc_id_table), 2095 2093 }, 2096 2094 .probe = qmc_probe, 2097 - .remove_new = qmc_remove, 2095 + .remove = qmc_remove, 2098 2096 }; 2099 2097 module_platform_driver(qmc_driver); 2100 2098
+5 -25
drivers/soc/fsl/qe/tsa.c
··· 680 680 681 681 static int tsa_of_parse_tdms(struct tsa *tsa, struct device_node *np) 682 682 { 683 - struct device_node *tdm_np; 684 683 struct tsa_tdm *tdm; 685 684 struct clk *clk; 686 685 u32 tdm_id, val; ··· 690 691 for (i = 0; i < ARRAY_SIZE(tsa->tdm); i++) 691 692 tsa->tdm[i].is_enable = false; 692 693 693 - for_each_available_child_of_node(np, tdm_np) { 694 + for_each_available_child_of_node_scoped(np, tdm_np) { 694 695 ret = of_property_read_u32(tdm_np, "reg", &tdm_id); 695 696 if (ret) { 696 697 dev_err(tsa->dev, "%pOF: failed to read reg\n", tdm_np); 697 - of_node_put(tdm_np); 698 698 return ret; 699 699 } 700 700 switch (tdm_id) { ··· 717 719 invalid_tdm: 718 720 dev_err(tsa->dev, "%pOF: Invalid tdm_id (%u)\n", tdm_np, 719 721 tdm_id); 720 - of_node_put(tdm_np); 721 722 return -EINVAL; 722 723 } 723 724 } 724 725 725 - for_each_available_child_of_node(np, tdm_np) { 726 + for_each_available_child_of_node_scoped(np, tdm_np) { 726 727 ret = of_property_read_u32(tdm_np, "reg", &tdm_id); 727 728 if (ret) { 728 729 dev_err(tsa->dev, "%pOF: failed to read reg\n", tdm_np); 729 - of_node_put(tdm_np); 730 730 return ret; 731 731 } 732 732 ··· 738 742 dev_err(tsa->dev, 739 743 "%pOF: failed to read fsl,rx-frame-sync-delay-bits\n", 740 744 tdm_np); 741 - of_node_put(tdm_np); 742 745 return ret; 743 746 } 744 747 if (val > 3) { 745 748 dev_err(tsa->dev, 746 749 "%pOF: Invalid fsl,rx-frame-sync-delay-bits (%u)\n", 747 750 tdm_np, val); 748 - of_node_put(tdm_np); 749 751 return -EINVAL; 750 752 } 751 753 tdm->simode_tdm |= TSA_SIMODE_TDM_RFSD(val); ··· 755 761 dev_err(tsa->dev, 756 762 "%pOF: failed to read fsl,tx-frame-sync-delay-bits\n", 757 763 tdm_np); 758 - of_node_put(tdm_np); 759 764 return ret; 760 765 } 761 766 if (val > 3) { 762 767 dev_err(tsa->dev, 763 768 "%pOF: Invalid fsl,tx-frame-sync-delay-bits (%u)\n", 764 769 tdm_np, val); 765 - of_node_put(tdm_np); 766 770 return -EINVAL; 767 771 } 768 772 tdm->simode_tdm |= TSA_SIMODE_TDM_TFSD(val); ··· 784 
792 clk = of_clk_get_by_name(tdm_np, tsa_is_qe(tsa) ? "rsync" : "l1rsync"); 785 793 if (IS_ERR(clk)) { 786 794 ret = PTR_ERR(clk); 787 - of_node_put(tdm_np); 788 795 goto err; 789 796 } 790 797 ret = clk_prepare_enable(clk); 791 798 if (ret) { 792 799 clk_put(clk); 793 - of_node_put(tdm_np); 794 800 goto err; 795 801 } 796 802 tdm->l1rsync_clk = clk; ··· 796 806 clk = of_clk_get_by_name(tdm_np, tsa_is_qe(tsa) ? "rclk" : "l1rclk"); 797 807 if (IS_ERR(clk)) { 798 808 ret = PTR_ERR(clk); 799 - of_node_put(tdm_np); 800 809 goto err; 801 810 } 802 811 ret = clk_prepare_enable(clk); 803 812 if (ret) { 804 813 clk_put(clk); 805 - of_node_put(tdm_np); 806 814 goto err; 807 815 } 808 816 tdm->l1rclk_clk = clk; ··· 809 821 clk = of_clk_get_by_name(tdm_np, tsa_is_qe(tsa) ? "tsync" : "l1tsync"); 810 822 if (IS_ERR(clk)) { 811 823 ret = PTR_ERR(clk); 812 - of_node_put(tdm_np); 813 824 goto err; 814 825 } 815 826 ret = clk_prepare_enable(clk); 816 827 if (ret) { 817 828 clk_put(clk); 818 - of_node_put(tdm_np); 819 829 goto err; 820 830 } 821 831 tdm->l1tsync_clk = clk; ··· 821 835 clk = of_clk_get_by_name(tdm_np, tsa_is_qe(tsa) ? 
"tclk" : "l1tclk"); 822 836 if (IS_ERR(clk)) { 823 837 ret = PTR_ERR(clk); 824 - of_node_put(tdm_np); 825 838 goto err; 826 839 } 827 840 ret = clk_prepare_enable(clk); 828 841 if (ret) { 829 842 clk_put(clk); 830 - of_node_put(tdm_np); 831 843 goto err; 832 844 } 833 845 tdm->l1tclk_clk = clk; ··· 843 859 } 844 860 845 861 ret = tsa_of_parse_tdm_rx_route(tsa, tdm_np, tsa->tdms, tdm_id); 846 - if (ret) { 847 - of_node_put(tdm_np); 862 + if (ret) 848 863 goto err; 849 - } 850 864 851 865 ret = tsa_of_parse_tdm_tx_route(tsa, tdm_np, tsa->tdms, tdm_id); 852 - if (ret) { 853 - of_node_put(tdm_np); 866 + if (ret) 854 867 goto err; 855 - } 856 868 857 869 tdm->is_enable = true; 858 870 } ··· 1066 1086 .of_match_table = of_match_ptr(tsa_id_table), 1067 1087 }, 1068 1088 .probe = tsa_probe, 1069 - .remove_new = tsa_remove, 1089 + .remove = tsa_remove, 1070 1090 }; 1071 1091 module_platform_driver(tsa_driver); 1072 1092
+1
drivers/soc/fsl/rcpm.c
··· 36 36 return; 37 37 38 38 regs = of_iomap(np, 0); 39 + of_node_put(np); 39 40 if (!regs) 40 41 return; 41 42
+1 -1
drivers/soc/fujitsu/a64fx-diag.c
··· 142 142 .acpi_match_table = ACPI_PTR(a64fx_diag_acpi_match), 143 143 }, 144 144 .probe = a64fx_diag_probe, 145 - .remove_new = a64fx_diag_remove, 145 + .remove = a64fx_diag_remove, 146 146 }; 147 147 148 148 module_platform_driver(a64fx_diag_driver);
+5 -2
drivers/soc/hisilicon/Kconfig
··· 13 13 interconnection bus protocol. 14 14 The performance of applications may be affected if some HCCS 15 15 ports are not in full lane status, have a large number of CRC 16 - errors and so on. 16 + errors and so on. This may also support reducing system power 17 + consumption if some HCCS ports on the platform support the 18 + low power feature. 17 19 18 20 Say M here if you want to include support for querying the 19 - health status and port information of HCCS on Kunpeng SoC. 21 + health status and port information of HCCS, or reducing system 22 + power consumption on Kunpeng SoC. 20 23 21 24 endmenu
+500 -16
drivers/soc/hisilicon/kunpeng_hccs.c
··· 21 21 * - if all enabled ports are in linked 22 22 * - if all linked ports are in full lane 23 23 * - CRC error count sum 24 + * 25 + * - Retrieve all HCCS types used on the platform. 26 + * 27 + * - Support low power feature for all specified HCCS type ports, and 28 + * provide the following interface: 29 + * - query HCCS types supported increasing and decreasing lane number. 30 + * - decrease lane number of all specified HCCS type ports on idle state. 31 + * - increase lane number of all specified HCCS type ports. 24 32 */ 25 33 #include <linux/acpi.h> 34 + #include <linux/delay.h> 26 35 #include <linux/iopoll.h> 27 36 #include <linux/platform_device.h> 37 + #include <linux/stringify.h> 28 38 #include <linux/sysfs.h> 39 + #include <linux/types.h> 29 40 30 41 #include <acpi/pcc.h> 31 42 ··· 62 51 static struct hccs_chip_info *kobj_to_chip_info(struct kobject *k) 63 52 { 64 53 return container_of(k, struct hccs_chip_info, kobj); 54 + } 55 + 56 + static struct hccs_dev *device_kobj_to_hccs_dev(struct kobject *k) 57 + { 58 + struct device *dev = container_of(k, struct device, kobj); 59 + struct platform_device *pdev = 60 + container_of(dev, struct platform_device, dev); 61 + 62 + return platform_get_drvdata(pdev); 63 + } 64 + 65 + static char *hccs_port_type_to_name(struct hccs_dev *hdev, u8 type) 66 + { 67 + u16 i; 68 + 69 + for (i = 0; i < hdev->used_type_num; i++) { 70 + if (hdev->type_name_maps[i].type == type) 71 + return hdev->type_name_maps[i].name; 72 + } 73 + 74 + return NULL; 75 + } 76 + 77 + static int hccs_name_to_port_type(struct hccs_dev *hdev, 78 + const char *name, u8 *type) 79 + { 80 + u16 i; 81 + 82 + for (i = 0; i < hdev->used_type_num; i++) { 83 + if (strcmp(hdev->type_name_maps[i].name, name) == 0) { 84 + *type = hdev->type_name_maps[i].type; 85 + return 0; 86 + } 87 + } 88 + 89 + return -EINVAL; 65 90 } 66 91 67 92 struct hccs_register_ctx { ··· 191 144 192 145 pcc_chan = pcc_mbox_request_channel(cl, hdev->chan_id); 193 146 if 
(IS_ERR(pcc_chan)) { 194 - dev_err(dev, "PPC channel request failed.\n"); 147 + dev_err(dev, "PCC channel request failed.\n"); 195 148 rc = -ENODEV; 196 149 goto out; 197 150 } ··· 217 170 goto err_mbx_channel_free; 218 171 } 219 172 220 - if (pcc_chan->shmem_base_addr) { 221 - cl_info->pcc_comm_addr = ioremap(pcc_chan->shmem_base_addr, 222 - pcc_chan->shmem_size); 223 - if (!cl_info->pcc_comm_addr) { 224 - dev_err(dev, "Failed to ioremap PCC communication region for channel-%u.\n", 225 - hdev->chan_id); 226 - rc = -ENOMEM; 227 - goto err_mbx_channel_free; 228 - } 173 + if (!pcc_chan->shmem_base_addr || 174 + pcc_chan->shmem_size != HCCS_PCC_SHARE_MEM_BYTES) { 175 + dev_err(dev, "The base address or size (%llu) of PCC communication region is invalid.\n", 176 + pcc_chan->shmem_size); 177 + rc = -EINVAL; 178 + goto err_mbx_channel_free; 179 + } 180 + 181 + cl_info->pcc_comm_addr = ioremap(pcc_chan->shmem_base_addr, 182 + pcc_chan->shmem_size); 183 + if (!cl_info->pcc_comm_addr) { 184 + dev_err(dev, "Failed to ioremap PCC communication region for channel-%u.\n", 185 + hdev->chan_id); 186 + rc = -ENOMEM; 187 + goto err_mbx_channel_free; 229 188 } 230 189 231 190 return 0; ··· 504 451 struct device *dev = hdev->dev; 505 452 struct hccs_chip_info *chip; 506 453 struct hccs_die_info *die; 454 + bool has_die_info = false; 507 455 u8 i, j; 508 456 int ret; 509 457 ··· 513 459 if (!chip->die_num) 514 460 continue; 515 461 462 + has_die_info = true; 516 463 chip->dies = devm_kzalloc(hdev->dev, 517 464 chip->die_num * sizeof(struct hccs_die_info), 518 465 GFP_KERNEL); ··· 535 480 } 536 481 } 537 482 538 - return 0; 483 + return has_die_info ? 
0 : -EINVAL; 539 484 } 540 485 541 486 static int hccs_get_bd_info(struct hccs_dev *hdev, u8 opcode, ··· 641 586 port = &die->ports[i]; 642 587 port->port_id = attrs[i].port_id; 643 588 port->port_type = attrs[i].port_type; 644 - port->lane_mode = attrs[i].lane_mode; 589 + port->max_lane_num = attrs[i].max_lane_num; 645 590 port->enable = attrs[i].enable; 646 591 port->die = die; 647 592 } ··· 656 601 struct device *dev = hdev->dev; 657 602 struct hccs_chip_info *chip; 658 603 struct hccs_die_info *die; 604 + bool has_port_info = false; 659 605 u8 i, j; 660 606 int ret; 661 607 ··· 667 611 if (!die->port_num) 668 612 continue; 669 613 614 + has_port_info = true; 670 615 die->ports = devm_kzalloc(dev, 671 616 die->port_num * sizeof(struct hccs_port_info), 672 617 GFP_KERNEL); ··· 686 629 } 687 630 } 688 631 689 - return 0; 632 + return has_port_info ? 0 : -EINVAL; 690 633 } 691 634 692 635 static int hccs_get_hw_info(struct hccs_dev *hdev) ··· 712 655 dev_err(hdev->dev, "query all port info on platform failed, ret = %d.\n", 713 656 ret); 714 657 return ret; 658 + } 659 + 660 + return 0; 661 + } 662 + 663 + static u16 hccs_calc_used_type_num(struct hccs_dev *hdev, 664 + unsigned long *hccs_ver) 665 + { 666 + struct hccs_chip_info *chip; 667 + struct hccs_port_info *port; 668 + struct hccs_die_info *die; 669 + u16 used_type_num = 0; 670 + u16 i, j, k; 671 + 672 + for (i = 0; i < hdev->chip_num; i++) { 673 + chip = &hdev->chips[i]; 674 + for (j = 0; j < chip->die_num; j++) { 675 + die = &chip->dies[j]; 676 + for (k = 0; k < die->port_num; k++) { 677 + port = &die->ports[k]; 678 + set_bit(port->port_type, hccs_ver); 679 + } 680 + } 681 + } 682 + 683 + for_each_set_bit(i, hccs_ver, HCCS_IP_MAX + 1) 684 + used_type_num++; 685 + 686 + return used_type_num; 687 + } 688 + 689 + static int hccs_init_type_name_maps(struct hccs_dev *hdev) 690 + { 691 + DECLARE_BITMAP(hccs_ver, HCCS_IP_MAX + 1) = {}; 692 + unsigned int i; 693 + u16 idx = 0; 694 + 695 + hdev->used_type_num = 
hccs_calc_used_type_num(hdev, hccs_ver); 696 + hdev->type_name_maps = devm_kcalloc(hdev->dev, hdev->used_type_num, 697 + sizeof(struct hccs_type_name_map), 698 + GFP_KERNEL); 699 + if (!hdev->type_name_maps) 700 + return -ENOMEM; 701 + 702 + for_each_set_bit(i, hccs_ver, HCCS_IP_MAX + 1) { 703 + hdev->type_name_maps[idx].type = i; 704 + sprintf(hdev->type_name_maps[idx].name, 705 + "%s%u", HCCS_IP_PREFIX, i); 706 + idx++; 715 707 } 716 708 717 709 return 0; ··· 926 820 { 927 821 const struct hccs_port_info *port = kobj_to_port_info(kobj); 928 822 929 - return sysfs_emit(buf, "HCCS-v%u\n", port->port_type); 823 + return sysfs_emit(buf, "%s%u\n", HCCS_IP_PREFIX, port->port_type); 930 824 } 931 825 static struct kobj_attribute hccs_type_attr = __ATTR_RO(type); 932 826 ··· 935 829 { 936 830 const struct hccs_port_info *port = kobj_to_port_info(kobj); 937 831 938 - return sysfs_emit(buf, "x%u\n", port->lane_mode); 832 + return sysfs_emit(buf, "x%u\n", port->max_lane_num); 939 833 } 940 834 static struct kobj_attribute lane_mode_attr = __ATTR_RO(lane_mode); 941 835 ··· 1230 1124 .default_groups = hccs_chip_default_groups, 1231 1125 }; 1232 1126 1127 + static int hccs_parse_pm_port_type(struct hccs_dev *hdev, const char *buf, 1128 + u8 *port_type) 1129 + { 1130 + char hccs_name[HCCS_NAME_MAX_LEN + 1] = ""; 1131 + u8 type; 1132 + int ret; 1133 + 1134 + ret = sscanf(buf, "%" __stringify(HCCS_NAME_MAX_LEN) "s", hccs_name); 1135 + if (ret != 1) 1136 + return -EINVAL; 1137 + 1138 + ret = hccs_name_to_port_type(hdev, hccs_name, &type); 1139 + if (ret) { 1140 + dev_dbg(hdev->dev, "input invalid, please get the available types from 'used_types'.\n"); 1141 + return ret; 1142 + } 1143 + 1144 + if (type == HCCS_V2 && hdev->caps & HCCS_CAPS_HCCS_V2_PM) { 1145 + *port_type = type; 1146 + return 0; 1147 + } 1148 + 1149 + dev_dbg(hdev->dev, "%s doesn't support for increasing and decreasing lane.\n", 1150 + hccs_name); 1151 + 1152 + return -EOPNOTSUPP; 1153 + } 1154 + 1155 + static int 
hccs_query_port_idle_status(struct hccs_dev *hdev, 1156 + struct hccs_port_info *port, u8 *idle) 1157 + { 1158 + const struct hccs_die_info *die = port->die; 1159 + const struct hccs_chip_info *chip = die->chip; 1160 + struct hccs_port_comm_req_param *req_param; 1161 + struct hccs_desc desc; 1162 + int ret; 1163 + 1164 + hccs_init_req_desc(&desc); 1165 + req_param = (struct hccs_port_comm_req_param *)desc.req.data; 1166 + req_param->chip_id = chip->chip_id; 1167 + req_param->die_id = die->die_id; 1168 + req_param->port_id = port->port_id; 1169 + ret = hccs_pcc_cmd_send(hdev, HCCS_GET_PORT_IDLE_STATUS, &desc); 1170 + if (ret) { 1171 + dev_err(hdev->dev, 1172 + "get port idle status failed, ret = %d.\n", ret); 1173 + return ret; 1174 + } 1175 + 1176 + *idle = *((u8 *)desc.rsp.data); 1177 + return 0; 1178 + } 1179 + 1180 + static int hccs_get_all_spec_port_idle_sta(struct hccs_dev *hdev, u8 port_type, 1181 + bool *all_idle) 1182 + { 1183 + struct hccs_chip_info *chip; 1184 + struct hccs_port_info *port; 1185 + struct hccs_die_info *die; 1186 + int ret = 0; 1187 + u8 i, j, k; 1188 + u8 idle; 1189 + 1190 + *all_idle = false; 1191 + for (i = 0; i < hdev->chip_num; i++) { 1192 + chip = &hdev->chips[i]; 1193 + for (j = 0; j < chip->die_num; j++) { 1194 + die = &chip->dies[j]; 1195 + for (k = 0; k < die->port_num; k++) { 1196 + port = &die->ports[k]; 1197 + if (port->port_type != port_type) 1198 + continue; 1199 + ret = hccs_query_port_idle_status(hdev, port, 1200 + &idle); 1201 + if (ret) { 1202 + dev_err(hdev->dev, 1203 + "hccs%u on chip%u/die%u get idle status failed, ret = %d.\n", 1204 + k, i, j, ret); 1205 + return ret; 1206 + } else if (idle == 0) { 1207 + dev_info(hdev->dev, "hccs%u on chip%u/die%u is busy.\n", 1208 + k, i, j); 1209 + return 0; 1210 + } 1211 + } 1212 + } 1213 + } 1214 + *all_idle = true; 1215 + 1216 + return 0; 1217 + } 1218 + 1219 + static int hccs_get_all_spec_port_full_lane_sta(struct hccs_dev *hdev, 1220 + u8 port_type, bool *full_lane) 1221 + { 
1222 + struct hccs_link_status status = {0}; 1223 + struct hccs_chip_info *chip; 1224 + struct hccs_port_info *port; 1225 + struct hccs_die_info *die; 1226 + u8 i, j, k; 1227 + int ret; 1228 + 1229 + *full_lane = false; 1230 + for (i = 0; i < hdev->chip_num; i++) { 1231 + chip = &hdev->chips[i]; 1232 + for (j = 0; j < chip->die_num; j++) { 1233 + die = &chip->dies[j]; 1234 + for (k = 0; k < die->port_num; k++) { 1235 + port = &die->ports[k]; 1236 + if (port->port_type != port_type) 1237 + continue; 1238 + ret = hccs_query_port_link_status(hdev, port, 1239 + &status); 1240 + if (ret) 1241 + return ret; 1242 + if (status.lane_num != port->max_lane_num) 1243 + return 0; 1244 + } 1245 + } 1246 + } 1247 + *full_lane = true; 1248 + 1249 + return 0; 1250 + } 1251 + 1252 + static int hccs_prepare_inc_lane(struct hccs_dev *hdev, u8 type) 1253 + { 1254 + struct hccs_inc_lane_req_param *req_param; 1255 + struct hccs_desc desc; 1256 + int ret; 1257 + 1258 + hccs_init_req_desc(&desc); 1259 + req_param = (struct hccs_inc_lane_req_param *)desc.req.data; 1260 + req_param->port_type = type; 1261 + req_param->opt_type = HCCS_PREPARE_INC_LANE; 1262 + ret = hccs_pcc_cmd_send(hdev, HCCS_PM_INC_LANE, &desc); 1263 + if (ret) 1264 + dev_err(hdev->dev, "prepare for increasing lane failed, ret = %d.\n", 1265 + ret); 1266 + 1267 + return ret; 1268 + } 1269 + 1270 + static int hccs_wait_serdes_adapt_completed(struct hccs_dev *hdev, u8 type) 1271 + { 1272 + #define HCCS_MAX_WAIT_CNT_FOR_ADAPT 10 1273 + #define HCCS_QUERY_ADAPT_RES_DELAY_MS 100 1274 + #define HCCS_SERDES_ADAPT_OK 0 1275 + 1276 + struct hccs_inc_lane_req_param *req_param; 1277 + u8 wait_cnt = HCCS_MAX_WAIT_CNT_FOR_ADAPT; 1278 + struct hccs_desc desc; 1279 + u8 adapt_res; 1280 + int ret; 1281 + 1282 + do { 1283 + hccs_init_req_desc(&desc); 1284 + req_param = (struct hccs_inc_lane_req_param *)desc.req.data; 1285 + req_param->port_type = type; 1286 + req_param->opt_type = HCCS_GET_ADAPT_RES; 1287 + ret = hccs_pcc_cmd_send(hdev, 
HCCS_PM_INC_LANE, &desc); 1288 + if (ret) { 1289 + dev_err(hdev->dev, "query adapting result failed, ret = %d.\n", 1290 + ret); 1291 + return ret; 1292 + } 1293 + adapt_res = *((u8 *)&desc.rsp.data); 1294 + if (adapt_res == HCCS_SERDES_ADAPT_OK) 1295 + return 0; 1296 + 1297 + msleep(HCCS_QUERY_ADAPT_RES_DELAY_MS); 1298 + } while (--wait_cnt); 1299 + 1300 + dev_err(hdev->dev, "wait for adapting completed timeout.\n"); 1301 + 1302 + return -ETIMEDOUT; 1303 + } 1304 + 1305 + static int hccs_start_hpcs_retraining(struct hccs_dev *hdev, u8 type) 1306 + { 1307 + struct hccs_inc_lane_req_param *req_param; 1308 + struct hccs_desc desc; 1309 + int ret; 1310 + 1311 + hccs_init_req_desc(&desc); 1312 + req_param = (struct hccs_inc_lane_req_param *)desc.req.data; 1313 + req_param->port_type = type; 1314 + req_param->opt_type = HCCS_START_RETRAINING; 1315 + ret = hccs_pcc_cmd_send(hdev, HCCS_PM_INC_LANE, &desc); 1316 + if (ret) 1317 + dev_err(hdev->dev, "start hpcs retraining failed, ret = %d.\n", 1318 + ret); 1319 + 1320 + return ret; 1321 + } 1322 + 1323 + static int hccs_start_inc_lane(struct hccs_dev *hdev, u8 type) 1324 + { 1325 + int ret; 1326 + 1327 + ret = hccs_prepare_inc_lane(hdev, type); 1328 + if (ret) 1329 + return ret; 1330 + 1331 + ret = hccs_wait_serdes_adapt_completed(hdev, type); 1332 + if (ret) 1333 + return ret; 1334 + 1335 + return hccs_start_hpcs_retraining(hdev, type); 1336 + } 1337 + 1338 + static int hccs_start_dec_lane(struct hccs_dev *hdev, u8 type) 1339 + { 1340 + struct hccs_desc desc; 1341 + u8 *port_type; 1342 + int ret; 1343 + 1344 + hccs_init_req_desc(&desc); 1345 + port_type = (u8 *)desc.req.data; 1346 + *port_type = type; 1347 + ret = hccs_pcc_cmd_send(hdev, HCCS_PM_DEC_LANE, &desc); 1348 + if (ret) 1349 + dev_err(hdev->dev, "start to decrease lane failed, ret = %d.\n", 1350 + ret); 1351 + 1352 + return ret; 1353 + } 1354 + 1355 + static ssize_t dec_lane_of_type_store(struct kobject *kobj, struct kobj_attribute *attr, 1356 + const char *buf, 
size_t count) 1357 + { 1358 + struct hccs_dev *hdev = device_kobj_to_hccs_dev(kobj); 1359 + bool all_in_idle; 1360 + u8 port_type; 1361 + int ret; 1362 + 1363 + ret = hccs_parse_pm_port_type(hdev, buf, &port_type); 1364 + if (ret) 1365 + return ret; 1366 + 1367 + mutex_lock(&hdev->lock); 1368 + ret = hccs_get_all_spec_port_idle_sta(hdev, port_type, &all_in_idle); 1369 + if (ret) 1370 + goto out; 1371 + if (!all_in_idle) { 1372 + ret = -EBUSY; 1373 + dev_err(hdev->dev, "please don't decrease lanes on high load with %s, ret = %d.\n", 1374 + hccs_port_type_to_name(hdev, port_type), ret); 1375 + goto out; 1376 + } 1377 + 1378 + ret = hccs_start_dec_lane(hdev, port_type); 1379 + out: 1380 + mutex_unlock(&hdev->lock); 1381 + 1382 + return ret == 0 ? count : ret; 1383 + } 1384 + static struct kobj_attribute dec_lane_of_type_attr = 1385 + __ATTR(dec_lane_of_type, 0200, NULL, dec_lane_of_type_store); 1386 + 1387 + static ssize_t inc_lane_of_type_store(struct kobject *kobj, struct kobj_attribute *attr, 1388 + const char *buf, size_t count) 1389 + { 1390 + struct hccs_dev *hdev = device_kobj_to_hccs_dev(kobj); 1391 + bool full_lane; 1392 + u8 port_type; 1393 + int ret; 1394 + 1395 + ret = hccs_parse_pm_port_type(hdev, buf, &port_type); 1396 + if (ret) 1397 + return ret; 1398 + 1399 + mutex_lock(&hdev->lock); 1400 + ret = hccs_get_all_spec_port_full_lane_sta(hdev, port_type, &full_lane); 1401 + if (ret || full_lane) 1402 + goto out; 1403 + 1404 + ret = hccs_start_inc_lane(hdev, port_type); 1405 + out: 1406 + mutex_unlock(&hdev->lock); 1407 + return ret == 0 ?
count : ret; 1408 + } 1409 + static struct kobj_attribute inc_lane_of_type_attr = 1410 + __ATTR(inc_lane_of_type, 0200, NULL, inc_lane_of_type_store); 1411 + 1412 + static ssize_t available_inc_dec_lane_types_show(struct kobject *kobj, 1413 + struct kobj_attribute *attr, 1414 + char *buf) 1415 + { 1416 + struct hccs_dev *hdev = device_kobj_to_hccs_dev(kobj); 1417 + 1418 + if (hdev->caps & HCCS_CAPS_HCCS_V2_PM) 1419 + return sysfs_emit(buf, "%s\n", 1420 + hccs_port_type_to_name(hdev, HCCS_V2)); 1421 + 1422 + return -EINVAL; 1423 + } 1424 + static struct kobj_attribute available_inc_dec_lane_types_attr = 1425 + __ATTR(available_inc_dec_lane_types, 0444, 1426 + available_inc_dec_lane_types_show, NULL); 1427 + 1428 + static ssize_t used_types_show(struct kobject *kobj, 1429 + struct kobj_attribute *attr, char *buf) 1430 + { 1431 + struct hccs_dev *hdev = device_kobj_to_hccs_dev(kobj); 1432 + int len = 0; 1433 + u16 i; 1434 + 1435 + for (i = 0; i < hdev->used_type_num - 1; i++) 1436 + len += sysfs_emit(&buf[len], "%s ", hdev->type_name_maps[i].name); 1437 + len += sysfs_emit(&buf[len], "%s\n", hdev->type_name_maps[i].name); 1438 + 1439 + return len; 1440 + } 1441 + static struct kobj_attribute used_types_attr = 1442 + __ATTR(used_types, 0444, used_types_show, NULL); 1443 + 1444 + static void hccs_remove_misc_sysfs(struct hccs_dev *hdev) 1445 + { 1446 + sysfs_remove_file(&hdev->dev->kobj, &used_types_attr.attr); 1447 + 1448 + if (!(hdev->caps & HCCS_CAPS_HCCS_V2_PM)) 1449 + return; 1450 + 1451 + sysfs_remove_file(&hdev->dev->kobj, 1452 + &available_inc_dec_lane_types_attr.attr); 1453 + sysfs_remove_file(&hdev->dev->kobj, &dec_lane_of_type_attr.attr); 1454 + sysfs_remove_file(&hdev->dev->kobj, &inc_lane_of_type_attr.attr); 1455 + } 1456 + 1457 + static int hccs_add_misc_sysfs(struct hccs_dev *hdev) 1458 + { 1459 + int ret; 1460 + 1461 + ret = sysfs_create_file(&hdev->dev->kobj, &used_types_attr.attr); 1462 + if (ret) 1463 + return ret; 1464 + 1465 + if (!(hdev->caps & 
HCCS_CAPS_HCCS_V2_PM)) 1466 + return 0; 1467 + 1468 + ret = sysfs_create_file(&hdev->dev->kobj, 1469 + &available_inc_dec_lane_types_attr.attr); 1470 + if (ret) 1471 + goto used_types_remove; 1472 + 1473 + ret = sysfs_create_file(&hdev->dev->kobj, &dec_lane_of_type_attr.attr); 1474 + if (ret) 1475 + goto inc_dec_lane_types_remove; 1476 + 1477 + ret = sysfs_create_file(&hdev->dev->kobj, &inc_lane_of_type_attr.attr); 1478 + if (ret) 1479 + goto dec_lane_of_type_remove; 1480 + 1481 + return 0; 1482 + 1483 + dec_lane_of_type_remove: 1484 + sysfs_remove_file(&hdev->dev->kobj, &dec_lane_of_type_attr.attr); 1485 + inc_dec_lane_types_remove: 1486 + sysfs_remove_file(&hdev->dev->kobj, 1487 + &available_inc_dec_lane_types_attr.attr); 1488 + used_types_remove: 1489 + sysfs_remove_file(&hdev->dev->kobj, &used_types_attr.attr); 1490 + return ret; 1491 + } 1492 + 1233 1493 static void hccs_remove_die_dir(struct hccs_die_info *die) 1234 1494 { 1235 1495 struct hccs_port_info *port; ··· 1630 1158 1631 1159 for (i = 0; i < hdev->chip_num; i++) 1632 1160 hccs_remove_chip_dir(&hdev->chips[i]); 1161 + 1162 + hccs_remove_misc_sysfs(hdev); 1633 1163 } 1634 1164 1635 1165 static int hccs_create_hccs_dir(struct hccs_dev *hdev, ··· 1727 1253 } 1728 1254 } 1729 1255 1256 + ret = hccs_add_misc_sysfs(hdev); 1257 + if (ret) { 1258 + dev_err(hdev->dev, "create misc sysfs interface failed, ret = %d\n", ret); 1259 + goto err; 1260 + } 1261 + 1730 1262 return 0; 1731 1263 err: 1732 1264 for (k = 0; k < id; k++) ··· 1783 1303 if (rc) 1784 1304 goto unregister_pcc_chan; 1785 1305 1306 + rc = hccs_init_type_name_maps(hdev); 1307 + if (rc) 1308 + goto unregister_pcc_chan; 1309 + 1786 1310 rc = hccs_create_topo_dirs(hdev); 1787 1311 if (rc) 1788 1312 goto unregister_pcc_chan; ··· 1832 1348 1833 1349 static struct platform_driver hccs_driver = { 1834 1350 .probe = hccs_probe, 1835 - .remove_new = hccs_remove, 1351 + .remove = hccs_remove, 1836 1352 .driver = { 1837 1353 .name = "kunpeng_hccs", 1838 1354 
.acpi_match_table = hccs_acpi_match,
+31 -2
drivers/soc/hisilicon/kunpeng_hccs.h
··· 10 10 * | P0 | P1 | P2 | P3 | P0 | P1 | P2 | P3 | P0 | P1 | P2 | P3 |P0 | P1 | P2 | P3 | 11 11 */ 12 12 13 + enum hccs_port_type { 14 + HCCS_V1 = 1, 15 + HCCS_V2, 16 + }; 17 + 18 + #define HCCS_IP_PREFIX "HCCS-v" 19 + #define HCCS_IP_MAX 255 20 + #define HCCS_NAME_MAX_LEN 9 21 + struct hccs_type_name_map { 22 + u8 type; 23 + char name[HCCS_NAME_MAX_LEN + 1]; 24 + }; 25 + 13 26 /* 14 27 * This value cannot be 255, otherwise the loop of the multi-BD communication 15 28 * case cannot end. ··· 32 19 struct hccs_port_info { 33 20 u8 port_id; 34 21 u8 port_type; 35 - u8 lane_mode; 22 + u8 max_lane_num; 36 23 bool enable; /* if the port is enabled */ 37 24 struct kobject kobj; 38 25 bool dir_created; ··· 80 67 bool has_txdone_irq; 81 68 }; 82 69 70 + #define HCCS_CAPS_HCCS_V2_PM BIT_ULL(0) 71 + 83 72 struct hccs_dev { 84 73 struct device *dev; 85 74 struct acpi_device *acpi_dev; 86 75 const struct hccs_verspecific_data *verspec_data; 76 + /* device capabilities from firmware, like HCCS_CAPS_xxx. */ 87 77 u64 caps; 88 78 u8 chip_num; 89 79 struct hccs_chip_info *chips; 80 + u16 used_type_num; 81 + struct hccs_type_name_map *type_name_maps; 90 82 u8 chan_id; 91 83 struct mutex lock; 92 84 struct hccs_mbox_client_info cl_info; ··· 109 91 HCCS_GET_DIE_PORTS_LANE_STA, 110 92 HCCS_GET_DIE_PORTS_LINK_STA, 111 93 HCCS_GET_DIE_PORTS_CRC_ERR_CNT, 94 + HCCS_GET_PORT_IDLE_STATUS, 95 + HCCS_PM_DEC_LANE, 96 + HCCS_PM_INC_LANE, 112 97 HCCS_SUB_CMD_MAX = 255, 113 98 }; 114 99 ··· 134 113 struct hccs_port_attr { 135 114 u8 port_id; 136 115 u8 port_type; 137 - u8 lane_mode; 116 + u8 max_lane_num; 138 117 u8 enable : 1; /* if the port is enabled */ 139 118 u16 rsv[2]; 140 119 }; ··· 153 132 u8 chip_id; 154 133 u8 die_id; 155 134 u8 port_id; 135 + }; 136 + 137 + #define HCCS_PREPARE_INC_LANE 1 138 + #define HCCS_GET_ADAPT_RES 2 139 + #define HCCS_START_RETRAINING 3 140 + struct hccs_inc_lane_req_param { 141 + u8 port_type; 142 + u8 opt_type; 156 143 }; 157 144 158 145 #define HCCS_PORT_RESET 1
+102 -74
drivers/soc/imx/soc-imx8m.c
··· 30 30 31 31 struct imx8_soc_data { 32 32 char *name; 33 - u32 (*soc_revision)(void); 33 + int (*soc_revision)(u32 *socrev, u64 *socuid); 34 34 }; 35 - 36 - static u64 soc_uid; 37 35 38 36 #ifdef CONFIG_HAVE_ARM_SMCCC 39 37 static u32 imx8mq_soc_revision_from_atf(void) ··· 49 51 static inline u32 imx8mq_soc_revision_from_atf(void) { return 0; }; 50 52 #endif 51 53 52 - static u32 __init imx8mq_soc_revision(void) 54 + static int imx8mq_soc_revision(u32 *socrev, u64 *socuid) 53 55 { 54 - struct device_node *np; 56 + struct device_node *np __free(device_node) = 57 + of_find_compatible_node(NULL, NULL, "fsl,imx8mq-ocotp"); 55 58 void __iomem *ocotp_base; 56 59 u32 magic; 57 60 u32 rev; 58 61 struct clk *clk; 62 + int ret; 59 63 60 - np = of_find_compatible_node(NULL, NULL, "fsl,imx8mq-ocotp"); 61 64 if (!np) 62 - return 0; 65 + return -EINVAL; 63 66 64 67 ocotp_base = of_iomap(np, 0); 65 - WARN_ON(!ocotp_base); 68 + if (!ocotp_base) 69 + return -EINVAL; 70 + 66 71 clk = of_clk_get_by_name(np, NULL); 67 72 if (IS_ERR(clk)) { 68 - WARN_ON(IS_ERR(clk)); 69 - return 0; 73 + ret = PTR_ERR(clk); 74 + goto err_clk; 70 75 } 71 76 72 77 clk_prepare_enable(clk); ··· 85 84 rev = REV_B1; 86 85 } 87 86 88 - soc_uid = readl_relaxed(ocotp_base + OCOTP_UID_HIGH); 89 - soc_uid <<= 32; 90 - soc_uid |= readl_relaxed(ocotp_base + OCOTP_UID_LOW); 87 + *socuid = readl_relaxed(ocotp_base + OCOTP_UID_HIGH); 88 + *socuid <<= 32; 89 + *socuid |= readl_relaxed(ocotp_base + OCOTP_UID_LOW); 90 + 91 + *socrev = rev; 91 92 92 93 clk_disable_unprepare(clk); 93 94 clk_put(clk); 94 95 iounmap(ocotp_base); 95 - of_node_put(np); 96 96 97 - return rev; 97 + return 0; 98 + 99 + err_clk: 100 + iounmap(ocotp_base); 101 + return ret; 98 102 } 99 103 100 - static void __init imx8mm_soc_uid(void) 104 + static int imx8mm_soc_uid(u64 *socuid) 101 105 { 106 + struct device_node *np __free(device_node) = 107 + of_find_compatible_node(NULL, NULL, "fsl,imx8mm-ocotp"); 102 108 void __iomem *ocotp_base; 103 - struct 
device_node *np; 104 109 struct clk *clk; 110 + int ret = 0; 105 111 u32 offset = of_machine_is_compatible("fsl,imx8mp") ? 106 112 IMX8MP_OCOTP_UID_OFFSET : 0; 107 113 108 - np = of_find_compatible_node(NULL, NULL, "fsl,imx8mm-ocotp"); 109 114 if (!np) 110 - return; 115 + return -EINVAL; 111 116 112 117 ocotp_base = of_iomap(np, 0); 113 - WARN_ON(!ocotp_base); 118 + if (!ocotp_base) 119 + return -EINVAL; 120 + 114 121 clk = of_clk_get_by_name(np, NULL); 115 122 if (IS_ERR(clk)) { 116 - WARN_ON(IS_ERR(clk)); 117 - return; 123 + ret = PTR_ERR(clk); 124 + goto err_clk; 118 125 } 119 126 120 127 clk_prepare_enable(clk); 121 128 122 - soc_uid = readl_relaxed(ocotp_base + OCOTP_UID_HIGH + offset); 123 - soc_uid <<= 32; 124 - soc_uid |= readl_relaxed(ocotp_base + OCOTP_UID_LOW + offset); 129 + *socuid = readl_relaxed(ocotp_base + OCOTP_UID_HIGH + offset); 130 + *socuid <<= 32; 131 + *socuid |= readl_relaxed(ocotp_base + OCOTP_UID_LOW + offset); 125 132 126 133 clk_disable_unprepare(clk); 127 134 clk_put(clk); 135 + 136 + err_clk: 128 137 iounmap(ocotp_base); 129 - of_node_put(np); 138 + return ret; 130 139 } 131 140 132 - static u32 __init imx8mm_soc_revision(void) 141 + static int imx8mm_soc_revision(u32 *socrev, u64 *socuid) 133 142 { 134 - struct device_node *np; 143 + struct device_node *np __free(device_node) = 144 + of_find_compatible_node(NULL, NULL, "fsl,imx8mm-anatop"); 135 145 void __iomem *anatop_base; 136 - u32 rev; 137 146 138 - np = of_find_compatible_node(NULL, NULL, "fsl,imx8mm-anatop"); 139 147 if (!np) 140 - return 0; 148 + return -EINVAL; 141 149 142 150 anatop_base = of_iomap(np, 0); 143 - WARN_ON(!anatop_base); 151 + if (!anatop_base) 152 + return -EINVAL; 144 153 145 - rev = readl_relaxed(anatop_base + ANADIG_DIGPROG_IMX8MM); 154 + *socrev = readl_relaxed(anatop_base + ANADIG_DIGPROG_IMX8MM); 146 155 147 156 iounmap(anatop_base); 148 - of_node_put(np); 149 157 150 - imx8mm_soc_uid(); 151 - 152 - return rev; 158 + return imx8mm_soc_uid(socuid); 153 
159 } 154 160 155 161 static const struct imx8_soc_data imx8mq_soc_data = { ··· 187 179 { } 188 180 }; 189 181 190 - #define imx8_revision(soc_rev) \ 191 - soc_rev ? \ 192 - kasprintf(GFP_KERNEL, "%d.%d", (soc_rev >> 4) & 0xf, soc_rev & 0xf) : \ 182 + #define imx8_revision(dev, soc_rev) \ 183 + (soc_rev) ? \ 184 + devm_kasprintf((dev), GFP_KERNEL, "%d.%d", ((soc_rev) >> 4) & 0xf, (soc_rev) & 0xf) : \ 193 185 "unknown" 194 186 195 - static int __init imx8_soc_init(void) 187 + static int imx8m_soc_probe(struct platform_device *pdev) 196 188 { 197 189 struct soc_device_attribute *soc_dev_attr; 198 - struct soc_device *soc_dev; 199 - const struct of_device_id *id; 200 - u32 soc_rev = 0; 201 190 const struct imx8_soc_data *data; 191 + struct device *dev = &pdev->dev; 192 + const struct of_device_id *id; 193 + struct soc_device *soc_dev; 194 + u32 soc_rev = 0; 195 + u64 soc_uid = 0; 202 196 int ret; 203 197 204 - soc_dev_attr = kzalloc(sizeof(*soc_dev_attr), GFP_KERNEL); 198 + soc_dev_attr = devm_kzalloc(dev, sizeof(*soc_dev_attr), GFP_KERNEL); 205 199 if (!soc_dev_attr) 206 200 return -ENOMEM; 207 201 ··· 211 201 212 202 ret = of_property_read_string(of_root, "model", &soc_dev_attr->machine); 213 203 if (ret) 214 - goto free_soc; 204 + return ret; 215 205 216 206 id = of_match_node(imx8_soc_match, of_root); 217 - if (!id) { 218 - ret = -ENODEV; 219 - goto free_soc; 220 - } 207 + if (!id) 208 + return -ENODEV; 221 209 222 210 data = id->data; 223 211 if (data) { 224 212 soc_dev_attr->soc_id = data->name; 225 - if (data->soc_revision) 226 - soc_rev = data->soc_revision(); 213 + if (data->soc_revision) { 214 + ret = data->soc_revision(&soc_rev, &soc_uid); 215 + if (ret) 216 + return ret; 217 + } 227 218 } 228 219 229 - soc_dev_attr->revision = imx8_revision(soc_rev); 230 - if (!soc_dev_attr->revision) { 231 - ret = -ENOMEM; 232 - goto free_soc; 233 - } 220 + soc_dev_attr->revision = imx8_revision(dev, soc_rev); 221 + if (!soc_dev_attr->revision) 222 + return -ENOMEM; 234 
223 235 - soc_dev_attr->serial_number = kasprintf(GFP_KERNEL, "%016llX", soc_uid); 236 - if (!soc_dev_attr->serial_number) { 237 - ret = -ENOMEM; 238 - goto free_rev; 239 - } 224 + soc_dev_attr->serial_number = devm_kasprintf(dev, GFP_KERNEL, "%016llX", soc_uid); 225 + if (!soc_dev_attr->serial_number) 226 + return -ENOMEM; 240 227 241 228 soc_dev = soc_device_register(soc_dev_attr); 242 - if (IS_ERR(soc_dev)) { 243 - ret = PTR_ERR(soc_dev); 244 - goto free_serial_number; 245 - } 229 + if (IS_ERR(soc_dev)) 230 + return PTR_ERR(soc_dev); 246 231 247 232 pr_info("SoC: %s revision %s\n", soc_dev_attr->soc_id, 248 233 soc_dev_attr->revision); ··· 246 241 platform_device_register_simple("imx-cpufreq-dt", -1, NULL, 0); 247 242 248 243 return 0; 244 + } 249 245 250 - free_serial_number: 251 - kfree(soc_dev_attr->serial_number); 252 - free_rev: 253 - if (strcmp(soc_dev_attr->revision, "unknown")) 254 - kfree(soc_dev_attr->revision); 255 - free_soc: 256 - kfree(soc_dev_attr); 257 - return ret; 246 + static struct platform_driver imx8m_soc_driver = { 247 + .probe = imx8m_soc_probe, 248 + .driver = { 249 + .name = "imx8m-soc", 250 + }, 251 + }; 252 + 253 + static int __init imx8_soc_init(void) 254 + { 255 + struct platform_device *pdev; 256 + int ret; 257 + 258 + /* No match means this is non-i.MX8M hardware, do nothing. */ 259 + if (!of_match_node(imx8_soc_match, of_root)) 260 + return 0; 261 + 262 + ret = platform_driver_register(&imx8m_soc_driver); 263 + if (ret) { 264 + pr_err("Failed to register imx8m-soc platform driver: %d\n", ret); 265 + return ret; 266 + } 267 + 268 + pdev = platform_device_register_simple("imx8m-soc", -1, NULL, 0); 269 + if (IS_ERR(pdev)) { 270 + pr_err("Failed to register imx8m-soc platform device: %ld\n", PTR_ERR(pdev)); 271 + platform_driver_unregister(&imx8m_soc_driver); 272 + return PTR_ERR(pdev); 273 + } 274 + 275 + return 0; 258 276 } 259 277 device_initcall(imx8_soc_init); 260 278 MODULE_DESCRIPTION("NXP i.MX8M SoC driver");
+1 -1
drivers/soc/ixp4xx/ixp4xx-npe.c
··· 759 759 .of_match_table = ixp4xx_npe_of_match, 760 760 }, 761 761 .probe = ixp4xx_npe_probe, 762 - .remove_new = ixp4xx_npe_remove, 762 + .remove = ixp4xx_npe_remove, 763 763 }; 764 764 module_platform_driver(ixp4xx_npe_driver); 765 765
+1 -1
drivers/soc/ixp4xx/ixp4xx-qmgr.c
··· 461 461 .of_match_table = ixp4xx_qmgr_of_match, 462 462 }, 463 463 .probe = ixp4xx_qmgr_probe, 464 - .remove_new = ixp4xx_qmgr_remove, 464 + .remove = ixp4xx_qmgr_remove, 465 465 }; 466 466 module_platform_driver(ixp4xx_qmgr_driver); 467 467
+1 -1
drivers/soc/litex/litex_soc_ctrl.c
··· 131 131 .of_match_table = litex_soc_ctrl_of_match, 132 132 }, 133 133 .probe = litex_soc_ctrl_probe, 134 - .remove_new = litex_soc_ctrl_remove, 134 + .remove = litex_soc_ctrl_remove, 135 135 }; 136 136 137 137 module_platform_driver(litex_soc_ctrl_driver);
+1 -1
drivers/soc/loongson/loongson2_guts.c
··· 169 169 .of_match_table = loongson2_guts_of_match, 170 170 }, 171 171 .probe = loongson2_guts_probe, 172 - .remove_new = loongson2_guts_remove, 172 + .remove = loongson2_guts_remove, 173 173 }; 174 174 175 175 static int __init loongson2_guts_init(void)
+11
drivers/soc/mediatek/Kconfig
··· 26 26 The violation information is logged for further analysis or 27 27 countermeasures. 28 28 29 + config MTK_DVFSRC 30 + tristate "MediaTek DVFSRC Support" 31 + depends on ARCH_MEDIATEK 32 + help 33 + Say yes here to add support for the MediaTek Dynamic Voltage 34 + and Frequency Scaling Resource Collector (DVFSRC): a HW 35 + IP found on many MediaTek SoCs, which is responsible for 36 + collecting DVFS requests from various SoC IPs, other than 37 + software, and performing bandwidth scaling to provide the 38 + best achievable performance-per-watt. 39 + 29 40 config MTK_INFRACFG 30 41 bool "MediaTek INFRACFG Support" 31 42 select REGMAP
+1
drivers/soc/mediatek/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 obj-$(CONFIG_MTK_CMDQ) += mtk-cmdq-helper.o 3 3 obj-$(CONFIG_MTK_DEVAPC) += mtk-devapc.o 4 + obj-$(CONFIG_MTK_DVFSRC) += mtk-dvfsrc.o 4 5 obj-$(CONFIG_MTK_INFRACFG) += mtk-infracfg.o 5 6 obj-$(CONFIG_MTK_PMIC_WRAP) += mtk-pmic-wrap.o 6 7 obj-$(CONFIG_MTK_REGULATOR_COUPLER) += mtk-regulator-coupler.o
+106 -124
drivers/soc/mediatek/mtk-cmdq-helper.c
··· 180 180 return 0; 181 181 } 182 182 183 + static int cmdq_pkt_mask(struct cmdq_pkt *pkt, u32 mask) 184 + { 185 + struct cmdq_instruction inst = { 186 + .op = CMDQ_CODE_MASK, 187 + .mask = ~mask 188 + }; 189 + return cmdq_pkt_append_command(pkt, inst); 190 + } 191 + 183 192 int cmdq_pkt_write(struct cmdq_pkt *pkt, u8 subsys, u16 offset, u32 value) 184 193 { 185 - struct cmdq_instruction inst; 186 - 187 - inst.op = CMDQ_CODE_WRITE; 188 - inst.value = value; 189 - inst.offset = offset; 190 - inst.subsys = subsys; 191 - 194 + struct cmdq_instruction inst = { 195 + .op = CMDQ_CODE_WRITE, 196 + .value = value, 197 + .offset = offset, 198 + .subsys = subsys 199 + }; 192 200 return cmdq_pkt_append_command(pkt, inst); 193 201 } 194 202 EXPORT_SYMBOL(cmdq_pkt_write); ··· 204 196 int cmdq_pkt_write_mask(struct cmdq_pkt *pkt, u8 subsys, 205 197 u16 offset, u32 value, u32 mask) 206 198 { 207 - struct cmdq_instruction inst = { {0} }; 208 199 u16 offset_mask = offset; 209 200 int err; 210 201 211 - if (mask != 0xffffffff) { 212 - inst.op = CMDQ_CODE_MASK; 213 - inst.mask = ~mask; 214 - err = cmdq_pkt_append_command(pkt, inst); 202 + if (mask != GENMASK(31, 0)) { 203 + err = cmdq_pkt_mask(pkt, mask); 215 204 if (err < 0) 216 205 return err; 217 206 218 207 offset_mask |= CMDQ_WRITE_ENABLE_MASK; 219 208 } 220 - err = cmdq_pkt_write(pkt, subsys, offset_mask, value); 221 - 222 - return err; 209 + return cmdq_pkt_write(pkt, subsys, offset_mask, value); 223 210 } 224 211 EXPORT_SYMBOL(cmdq_pkt_write_mask); 225 212 226 213 int cmdq_pkt_read_s(struct cmdq_pkt *pkt, u16 high_addr_reg_idx, u16 addr_low, 227 214 u16 reg_idx) 228 215 { 229 - struct cmdq_instruction inst = {}; 230 - 231 - inst.op = CMDQ_CODE_READ_S; 232 - inst.dst_t = CMDQ_REG_TYPE; 233 - inst.sop = high_addr_reg_idx; 234 - inst.reg_dst = reg_idx; 235 - inst.src_reg = addr_low; 236 - 216 + struct cmdq_instruction inst = { 217 + .op = CMDQ_CODE_READ_S, 218 + .dst_t = CMDQ_REG_TYPE, 219 + .sop = high_addr_reg_idx, 220 + 
.reg_dst = reg_idx, 221 + .src_reg = addr_low 222 + }; 237 223 return cmdq_pkt_append_command(pkt, inst); 238 224 } 239 225 EXPORT_SYMBOL(cmdq_pkt_read_s); ··· 235 233 int cmdq_pkt_write_s(struct cmdq_pkt *pkt, u16 high_addr_reg_idx, 236 234 u16 addr_low, u16 src_reg_idx) 237 235 { 238 - struct cmdq_instruction inst = {}; 239 - 240 - inst.op = CMDQ_CODE_WRITE_S; 241 - inst.src_t = CMDQ_REG_TYPE; 242 - inst.sop = high_addr_reg_idx; 243 - inst.offset = addr_low; 244 - inst.src_reg = src_reg_idx; 245 - 236 + struct cmdq_instruction inst = { 237 + .op = CMDQ_CODE_WRITE_S, 238 + .src_t = CMDQ_REG_TYPE, 239 + .sop = high_addr_reg_idx, 240 + .offset = addr_low, 241 + .src_reg = src_reg_idx 242 + }; 246 243 return cmdq_pkt_append_command(pkt, inst); 247 244 } 248 245 EXPORT_SYMBOL(cmdq_pkt_write_s); ··· 249 248 int cmdq_pkt_write_s_mask(struct cmdq_pkt *pkt, u16 high_addr_reg_idx, 250 249 u16 addr_low, u16 src_reg_idx, u32 mask) 251 250 { 252 - struct cmdq_instruction inst = {}; 251 + struct cmdq_instruction inst = { 252 + .op = CMDQ_CODE_WRITE_S_MASK, 253 + .src_t = CMDQ_REG_TYPE, 254 + .sop = high_addr_reg_idx, 255 + .offset = addr_low, 256 + .src_reg = src_reg_idx, 257 + }; 253 258 int err; 254 259 255 - inst.op = CMDQ_CODE_MASK; 256 - inst.mask = ~mask; 257 - err = cmdq_pkt_append_command(pkt, inst); 260 + err = cmdq_pkt_mask(pkt, mask); 258 261 if (err < 0) 259 262 return err; 260 - 261 - inst.mask = 0; 262 - inst.op = CMDQ_CODE_WRITE_S_MASK; 263 - inst.src_t = CMDQ_REG_TYPE; 264 - inst.sop = high_addr_reg_idx; 265 - inst.offset = addr_low; 266 - inst.src_reg = src_reg_idx; 267 263 268 264 return cmdq_pkt_append_command(pkt, inst); 269 265 } ··· 269 271 int cmdq_pkt_write_s_value(struct cmdq_pkt *pkt, u8 high_addr_reg_idx, 270 272 u16 addr_low, u32 value) 271 273 { 272 - struct cmdq_instruction inst = {}; 273 - 274 - inst.op = CMDQ_CODE_WRITE_S; 275 - inst.sop = high_addr_reg_idx; 276 - inst.offset = addr_low; 277 - inst.value = value; 278 - 274 + struct 
cmdq_instruction inst = { 275 + .op = CMDQ_CODE_WRITE_S, 276 + .sop = high_addr_reg_idx, 277 + .offset = addr_low, 278 + .value = value 279 + }; 279 280 return cmdq_pkt_append_command(pkt, inst); 280 281 } 281 282 EXPORT_SYMBOL(cmdq_pkt_write_s_value); ··· 282 285 int cmdq_pkt_write_s_mask_value(struct cmdq_pkt *pkt, u8 high_addr_reg_idx, 283 286 u16 addr_low, u32 value, u32 mask) 284 287 { 285 - struct cmdq_instruction inst = {}; 288 + struct cmdq_instruction inst = { 289 + .op = CMDQ_CODE_WRITE_S_MASK, 290 + .sop = high_addr_reg_idx, 291 + .offset = addr_low, 292 + .value = value 293 + }; 286 294 int err; 287 295 288 - inst.op = CMDQ_CODE_MASK; 289 - inst.mask = ~mask; 290 - err = cmdq_pkt_append_command(pkt, inst); 296 + err = cmdq_pkt_mask(pkt, mask); 291 297 if (err < 0) 292 298 return err; 293 - 294 - inst.op = CMDQ_CODE_WRITE_S_MASK; 295 - inst.sop = high_addr_reg_idx; 296 - inst.offset = addr_low; 297 - inst.value = value; 298 299 299 300 return cmdq_pkt_append_command(pkt, inst); 300 301 } ··· 326 331 327 332 int cmdq_pkt_wfe(struct cmdq_pkt *pkt, u16 event, bool clear) 328 333 { 329 - struct cmdq_instruction inst = { {0} }; 330 334 u32 clear_option = clear ? 
CMDQ_WFE_UPDATE : 0; 335 + struct cmdq_instruction inst = { 336 + .op = CMDQ_CODE_WFE, 337 + .value = CMDQ_WFE_OPTION | clear_option, 338 + .event = event 339 + }; 331 340 332 341 if (event >= CMDQ_MAX_EVENT) 333 342 return -EINVAL; 334 - 335 - inst.op = CMDQ_CODE_WFE; 336 - inst.value = CMDQ_WFE_OPTION | clear_option; 337 - inst.event = event; 338 343 339 344 return cmdq_pkt_append_command(pkt, inst); 340 345 } ··· 342 347 343 348 int cmdq_pkt_acquire_event(struct cmdq_pkt *pkt, u16 event) 344 349 { 345 - struct cmdq_instruction inst = {}; 350 + struct cmdq_instruction inst = { 351 + .op = CMDQ_CODE_WFE, 352 + .value = CMDQ_WFE_UPDATE | CMDQ_WFE_UPDATE_VALUE | CMDQ_WFE_WAIT, 353 + .event = event 354 + }; 346 355 347 356 if (event >= CMDQ_MAX_EVENT) 348 357 return -EINVAL; 349 - 350 - inst.op = CMDQ_CODE_WFE; 351 - inst.value = CMDQ_WFE_UPDATE | CMDQ_WFE_UPDATE_VALUE | CMDQ_WFE_WAIT; 352 - inst.event = event; 353 358 354 359 return cmdq_pkt_append_command(pkt, inst); 355 360 } ··· 357 362 358 363 int cmdq_pkt_clear_event(struct cmdq_pkt *pkt, u16 event) 359 364 { 360 - struct cmdq_instruction inst = { {0} }; 365 + struct cmdq_instruction inst = { 366 + .op = CMDQ_CODE_WFE, 367 + .value = CMDQ_WFE_UPDATE, 368 + .event = event 369 + }; 361 370 362 371 if (event >= CMDQ_MAX_EVENT) 363 372 return -EINVAL; 364 - 365 - inst.op = CMDQ_CODE_WFE; 366 - inst.value = CMDQ_WFE_UPDATE; 367 - inst.event = event; 368 373 369 374 return cmdq_pkt_append_command(pkt, inst); 370 375 } ··· 372 377 373 378 int cmdq_pkt_set_event(struct cmdq_pkt *pkt, u16 event) 374 379 { 375 - struct cmdq_instruction inst = {}; 380 + struct cmdq_instruction inst = { 381 + .op = CMDQ_CODE_WFE, 382 + .value = CMDQ_WFE_UPDATE | CMDQ_WFE_UPDATE_VALUE, 383 + .event = event 384 + }; 376 385 377 386 if (event >= CMDQ_MAX_EVENT) 378 387 return -EINVAL; 379 - 380 - inst.op = CMDQ_CODE_WFE; 381 - inst.value = CMDQ_WFE_UPDATE | CMDQ_WFE_UPDATE_VALUE; 382 - inst.event = event; 383 388 384 389 return 
cmdq_pkt_append_command(pkt, inst); 385 390 } ··· 388 393 int cmdq_pkt_poll(struct cmdq_pkt *pkt, u8 subsys, 389 394 u16 offset, u32 value) 390 395 { 391 - struct cmdq_instruction inst = { {0} }; 392 - int err; 393 - 394 - inst.op = CMDQ_CODE_POLL; 395 - inst.value = value; 396 - inst.offset = offset; 397 - inst.subsys = subsys; 398 - err = cmdq_pkt_append_command(pkt, inst); 399 - 400 - return err; 396 + struct cmdq_instruction inst = { 397 + .op = CMDQ_CODE_POLL, 398 + .value = value, 399 + .offset = offset, 400 + .subsys = subsys 401 + }; 402 + return cmdq_pkt_append_command(pkt, inst); 401 403 } 402 404 EXPORT_SYMBOL(cmdq_pkt_poll); 403 405 404 406 int cmdq_pkt_poll_mask(struct cmdq_pkt *pkt, u8 subsys, 405 407 u16 offset, u32 value, u32 mask) 406 408 { 407 - struct cmdq_instruction inst = { {0} }; 408 409 int err; 409 410 410 - inst.op = CMDQ_CODE_MASK; 411 - inst.mask = ~mask; 412 - err = cmdq_pkt_append_command(pkt, inst); 411 + err = cmdq_pkt_mask(pkt, mask); 413 412 if (err < 0) 414 413 return err; 415 414 416 415 offset = offset | CMDQ_POLL_ENABLE_MASK; 417 - err = cmdq_pkt_poll(pkt, subsys, offset, value); 418 - 419 - return err; 416 + return cmdq_pkt_poll(pkt, subsys, offset, value); 420 417 } 421 418 EXPORT_SYMBOL(cmdq_pkt_poll_mask); 422 419 ··· 423 436 * which enables use_mask bit. 
424 437 */ 425 438 if (mask != GENMASK(31, 0)) { 426 - inst.op = CMDQ_CODE_MASK; 427 - inst.mask = ~mask; 428 - ret = cmdq_pkt_append_command(pkt, inst); 439 + ret = cmdq_pkt_mask(pkt, mask); 429 440 if (ret < 0) 430 441 return ret; 431 442 use_mask = CMDQ_POLL_ENABLE_MASK; ··· 462 477 enum cmdq_logic_op s_op, 463 478 struct cmdq_operand *right_operand) 464 479 { 465 - struct cmdq_instruction inst = { {0} }; 480 + struct cmdq_instruction inst; 466 481 467 482 if (!left_operand || !right_operand || s_op >= CMDQ_LOGIC_MAX) 468 483 return -EINVAL; 469 484 485 + inst.value = 0; 470 486 inst.op = CMDQ_CODE_LOGIC; 471 487 inst.dst_t = CMDQ_REG_TYPE; 472 488 inst.src_t = cmdq_operand_get_type(left_operand); ··· 483 497 484 498 int cmdq_pkt_assign(struct cmdq_pkt *pkt, u16 reg_idx, u32 value) 485 499 { 486 - struct cmdq_instruction inst = {}; 487 - 488 - inst.op = CMDQ_CODE_LOGIC; 489 - inst.dst_t = CMDQ_REG_TYPE; 490 - inst.reg_dst = reg_idx; 491 - inst.value = value; 500 + struct cmdq_instruction inst = { 501 + .op = CMDQ_CODE_LOGIC, 502 + .dst_t = CMDQ_REG_TYPE, 503 + .reg_dst = reg_idx, 504 + .value = value 505 + }; 492 506 return cmdq_pkt_append_command(pkt, inst); 493 507 } 494 508 EXPORT_SYMBOL(cmdq_pkt_assign); 495 509 496 510 int cmdq_pkt_jump_abs(struct cmdq_pkt *pkt, dma_addr_t addr, u8 shift_pa) 497 511 { 498 - struct cmdq_instruction inst = {}; 499 - 500 - inst.op = CMDQ_CODE_JUMP; 501 - inst.offset = CMDQ_JUMP_ABSOLUTE; 502 - inst.value = addr >> shift_pa; 512 + struct cmdq_instruction inst = { 513 + .op = CMDQ_CODE_JUMP, 514 + .offset = CMDQ_JUMP_ABSOLUTE, 515 + .value = addr >> shift_pa 516 + }; 503 517 return cmdq_pkt_append_command(pkt, inst); 504 518 } 505 519 EXPORT_SYMBOL(cmdq_pkt_jump_abs); 506 520 507 521 int cmdq_pkt_jump_rel(struct cmdq_pkt *pkt, s32 offset, u8 shift_pa) 508 522 { 509 - struct cmdq_instruction inst = { {0} }; 510 - 511 - inst.op = CMDQ_CODE_JUMP; 512 - inst.value = (u32)offset >> shift_pa; 523 + struct cmdq_instruction inst = { 524 
+ .op = CMDQ_CODE_JUMP, 525 + .value = (u32)offset >> shift_pa 526 + }; 513 527 return cmdq_pkt_append_command(pkt, inst); 514 528 } 515 529 EXPORT_SYMBOL(cmdq_pkt_jump_rel); 516 530 517 531 int cmdq_pkt_eoc(struct cmdq_pkt *pkt) 518 532 { 519 - struct cmdq_instruction inst = { {0} }; 520 - 521 - inst.op = CMDQ_CODE_EOC; 522 - inst.value = CMDQ_EOC_IRQ_EN; 533 + struct cmdq_instruction inst = { 534 + .op = CMDQ_CODE_EOC, 535 + .value = CMDQ_EOC_IRQ_EN 536 + }; 523 537 return cmdq_pkt_append_command(pkt, inst); 524 538 } 525 539 EXPORT_SYMBOL(cmdq_pkt_eoc); ··· 530 544 int err; 531 545 532 546 /* insert EOC and generate IRQ for each command iteration */ 533 - inst.op = CMDQ_CODE_EOC; 534 - inst.value = CMDQ_EOC_IRQ_EN; 535 - err = cmdq_pkt_append_command(pkt, inst); 547 + err = cmdq_pkt_eoc(pkt); 536 548 if (err < 0) 537 549 return err; 538 550 ··· 538 554 inst.op = CMDQ_CODE_JUMP; 539 555 inst.value = CMDQ_JUMP_PASS >> 540 556 cmdq_get_shift_pa(((struct cmdq_client *)pkt->cl)->chan); 541 - err = cmdq_pkt_append_command(pkt, inst); 542 - 543 - return err; 557 + return cmdq_pkt_append_command(pkt, inst); 544 558 } 545 559 EXPORT_SYMBOL(cmdq_pkt_finalize); 546 560
+1 -1
drivers/soc/mediatek/mtk-devapc.c
··· 301 301 302 302 static struct platform_driver mtk_devapc_driver = { 303 303 .probe = mtk_devapc_probe, 304 - .remove_new = mtk_devapc_remove, 304 + .remove = mtk_devapc_remove, 305 305 .driver = { 306 306 .name = "mtk-devapc", 307 307 .of_match_table = mtk_devapc_dt_match,
+545
drivers/soc/mediatek/mtk-dvfsrc.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (C) 2021 MediaTek Inc. 4 + * Copyright (c) 2024 Collabora Ltd. 5 + * AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> 6 + */ 7 + 8 + #include <linux/arm-smccc.h> 9 + #include <linux/bitfield.h> 10 + #include <linux/iopoll.h> 11 + #include <linux/module.h> 12 + #include <linux/of.h> 13 + #include <linux/of_platform.h> 14 + #include <linux/platform_device.h> 15 + #include <linux/soc/mediatek/dvfsrc.h> 16 + #include <linux/soc/mediatek/mtk_sip_svc.h> 17 + 18 + /* DVFSRC_LEVEL */ 19 + #define DVFSRC_V1_LEVEL_TARGET_LEVEL GENMASK(15, 0) 20 + #define DVFSRC_TGT_LEVEL_IDLE 0x00 21 + #define DVFSRC_V1_LEVEL_CURRENT_LEVEL GENMASK(31, 16) 22 + 23 + /* DVFSRC_SW_REQ, DVFSRC_SW_REQ2 */ 24 + #define DVFSRC_V1_SW_REQ2_DRAM_LEVEL GENMASK(1, 0) 25 + #define DVFSRC_V1_SW_REQ2_VCORE_LEVEL GENMASK(3, 2) 26 + 27 + #define DVFSRC_V2_SW_REQ_DRAM_LEVEL GENMASK(3, 0) 28 + #define DVFSRC_V2_SW_REQ_VCORE_LEVEL GENMASK(6, 4) 29 + 30 + /* DVFSRC_VCORE */ 31 + #define DVFSRC_V2_VCORE_REQ_VSCP_LEVEL GENMASK(14, 12) 32 + 33 + #define DVFSRC_POLL_TIMEOUT_US 1000 34 + #define STARTUP_TIME_US 1 35 + 36 + #define MTK_SIP_DVFSRC_INIT 0x0 37 + #define MTK_SIP_DVFSRC_START 0x1 38 + 39 + struct dvfsrc_bw_constraints { 40 + u16 max_dram_nom_bw; 41 + u16 max_dram_peak_bw; 42 + u16 max_dram_hrt_bw; 43 + }; 44 + 45 + struct dvfsrc_opp { 46 + u32 vcore_opp; 47 + u32 dram_opp; 48 + }; 49 + 50 + struct dvfsrc_opp_desc { 51 + const struct dvfsrc_opp *opps; 52 + u32 num_opp; 53 + }; 54 + 55 + struct dvfsrc_soc_data; 56 + struct mtk_dvfsrc { 57 + struct device *dev; 58 + struct platform_device *icc; 59 + struct platform_device *regulator; 60 + const struct dvfsrc_soc_data *dvd; 61 + const struct dvfsrc_opp_desc *curr_opps; 62 + void __iomem *regs; 63 + int dram_type; 64 + }; 65 + 66 + struct dvfsrc_soc_data { 67 + const int *regs; 68 + const struct dvfsrc_opp_desc *opps_desc; 69 + u32 (*get_target_level)(struct mtk_dvfsrc 
*dvfsrc); 70 + u32 (*get_current_level)(struct mtk_dvfsrc *dvfsrc); 71 + u32 (*get_vcore_level)(struct mtk_dvfsrc *dvfsrc); 72 + u32 (*get_vscp_level)(struct mtk_dvfsrc *dvfsrc); 73 + void (*set_dram_bw)(struct mtk_dvfsrc *dvfsrc, u64 bw); 74 + void (*set_dram_peak_bw)(struct mtk_dvfsrc *dvfsrc, u64 bw); 75 + void (*set_dram_hrt_bw)(struct mtk_dvfsrc *dvfsrc, u64 bw); 76 + void (*set_opp_level)(struct mtk_dvfsrc *dvfsrc, u32 level); 77 + void (*set_vcore_level)(struct mtk_dvfsrc *dvfsrc, u32 level); 78 + void (*set_vscp_level)(struct mtk_dvfsrc *dvfsrc, u32 level); 79 + int (*wait_for_opp_level)(struct mtk_dvfsrc *dvfsrc, u32 level); 80 + int (*wait_for_vcore_level)(struct mtk_dvfsrc *dvfsrc, u32 level); 81 + const struct dvfsrc_bw_constraints *bw_constraints; 82 + }; 83 + 84 + static u32 dvfsrc_readl(struct mtk_dvfsrc *dvfs, u32 offset) 85 + { 86 + return readl(dvfs->regs + dvfs->dvd->regs[offset]); 87 + } 88 + 89 + static void dvfsrc_writel(struct mtk_dvfsrc *dvfs, u32 offset, u32 val) 90 + { 91 + writel(val, dvfs->regs + dvfs->dvd->regs[offset]); 92 + } 93 + 94 + enum dvfsrc_regs { 95 + DVFSRC_SW_REQ, 96 + DVFSRC_SW_REQ2, 97 + DVFSRC_LEVEL, 98 + DVFSRC_TARGET_LEVEL, 99 + DVFSRC_SW_BW, 100 + DVFSRC_SW_PEAK_BW, 101 + DVFSRC_SW_HRT_BW, 102 + DVFSRC_VCORE, 103 + DVFSRC_REGS_MAX, 104 + }; 105 + 106 + static const int dvfsrc_mt8183_regs[] = { 107 + [DVFSRC_SW_REQ] = 0x4, 108 + [DVFSRC_SW_REQ2] = 0x8, 109 + [DVFSRC_LEVEL] = 0xDC, 110 + [DVFSRC_SW_BW] = 0x160, 111 + }; 112 + 113 + static const int dvfsrc_mt8195_regs[] = { 114 + [DVFSRC_SW_REQ] = 0xc, 115 + [DVFSRC_VCORE] = 0x6c, 116 + [DVFSRC_SW_PEAK_BW] = 0x278, 117 + [DVFSRC_SW_BW] = 0x26c, 118 + [DVFSRC_SW_HRT_BW] = 0x290, 119 + [DVFSRC_LEVEL] = 0xd44, 120 + [DVFSRC_TARGET_LEVEL] = 0xd48, 121 + }; 122 + 123 + static const struct dvfsrc_opp *dvfsrc_get_current_opp(struct mtk_dvfsrc *dvfsrc) 124 + { 125 + u32 level = dvfsrc->dvd->get_current_level(dvfsrc); 126 + 127 + return &dvfsrc->curr_opps->opps[level]; 128 + } 129 
+ 130 + static bool dvfsrc_is_idle(struct mtk_dvfsrc *dvfsrc) 131 + { 132 + if (!dvfsrc->dvd->get_target_level) 133 + return true; 134 + 135 + return dvfsrc->dvd->get_target_level(dvfsrc) == DVFSRC_TGT_LEVEL_IDLE; 136 + } 137 + 138 + static int dvfsrc_wait_for_vcore_level_v1(struct mtk_dvfsrc *dvfsrc, u32 level) 139 + { 140 + const struct dvfsrc_opp *curr; 141 + 142 + return readx_poll_timeout_atomic(dvfsrc_get_current_opp, dvfsrc, curr, 143 + curr->vcore_opp >= level, STARTUP_TIME_US, 144 + DVFSRC_POLL_TIMEOUT_US); 145 + } 146 + 147 + static int dvfsrc_wait_for_opp_level_v1(struct mtk_dvfsrc *dvfsrc, u32 level) 148 + { 149 + const struct dvfsrc_opp *target, *curr; 150 + int ret; 151 + 152 + target = &dvfsrc->curr_opps->opps[level]; 153 + ret = readx_poll_timeout_atomic(dvfsrc_get_current_opp, dvfsrc, curr, 154 + curr->dram_opp >= target->dram_opp && 155 + curr->vcore_opp >= target->vcore_opp, 156 + STARTUP_TIME_US, DVFSRC_POLL_TIMEOUT_US); 157 + if (ret < 0) { 158 + dev_warn(dvfsrc->dev, 159 + "timeout! target OPP: %u, dram: %d, vcore: %d\n", level, 160 + curr->dram_opp, curr->vcore_opp); 161 + return ret; 162 + } 163 + 164 + return 0; 165 + } 166 + 167 + static int dvfsrc_wait_for_opp_level_v2(struct mtk_dvfsrc *dvfsrc, u32 level) 168 + { 169 + const struct dvfsrc_opp *target, *curr; 170 + int ret; 171 + 172 + target = &dvfsrc->curr_opps->opps[level]; 173 + ret = readx_poll_timeout_atomic(dvfsrc_get_current_opp, dvfsrc, curr, 174 + curr->dram_opp >= target->dram_opp && 175 + curr->vcore_opp >= target->vcore_opp, 176 + STARTUP_TIME_US, DVFSRC_POLL_TIMEOUT_US); 177 + if (ret < 0) { 178 + dev_warn(dvfsrc->dev, 179 + "timeout! target OPP: %u, dram: %d\n", level, curr->dram_opp); 180 + return ret; 181 + } 182 + 183 + return 0; 184 + } 185 + 186 + static u32 dvfsrc_get_target_level_v1(struct mtk_dvfsrc *dvfsrc) 187 + { 188 + u32 val = dvfsrc_readl(dvfsrc, DVFSRC_LEVEL); 189 + 190 + return FIELD_GET(DVFSRC_V1_LEVEL_TARGET_LEVEL, val); 191 + } 192 + 193 + static u32 dvfsrc_get_current_level_v1(struct mtk_dvfsrc *dvfsrc) 194 + { 195 + u32 val = dvfsrc_readl(dvfsrc, DVFSRC_LEVEL); 196 + u32 current_level = FIELD_GET(DVFSRC_V1_LEVEL_CURRENT_LEVEL, val); 197 + 198 + return ffs(current_level) - 1; 199 + } 200 + 201 + static u32 dvfsrc_get_target_level_v2(struct mtk_dvfsrc *dvfsrc) 202 + { 203 + return dvfsrc_readl(dvfsrc, DVFSRC_TARGET_LEVEL); 204 + } 205 + 206 + static u32 dvfsrc_get_current_level_v2(struct mtk_dvfsrc *dvfsrc) 207 + { 208 + u32 val = dvfsrc_readl(dvfsrc, DVFSRC_LEVEL); 209 + u32 level = ffs(val); 210 + 211 + /* Valid levels */ 212 + if (level < dvfsrc->curr_opps->num_opp) 213 + return dvfsrc->curr_opps->num_opp - level; 214 + 215 + /* Zero for level 0 or invalid level */ 216 + return 0; 217 + } 218 + 219 + static u32 dvfsrc_get_vcore_level_v1(struct mtk_dvfsrc *dvfsrc) 220 + { 221 + u32 val = dvfsrc_readl(dvfsrc, DVFSRC_SW_REQ2); 222 + 223 + return FIELD_GET(DVFSRC_V1_SW_REQ2_VCORE_LEVEL, val); 224 + } 225 + 226 + static void dvfsrc_set_vcore_level_v1(struct mtk_dvfsrc *dvfsrc, u32 level) 227 + { 228 + u32 val = dvfsrc_readl(dvfsrc, DVFSRC_SW_REQ2); 229 + 230 + val &= ~DVFSRC_V1_SW_REQ2_VCORE_LEVEL; 231 + val |= FIELD_PREP(DVFSRC_V1_SW_REQ2_VCORE_LEVEL, level); 232 + 233 + dvfsrc_writel(dvfsrc, DVFSRC_SW_REQ2, val); 234 + } 235 + 236 + static u32 dvfsrc_get_vcore_level_v2(struct mtk_dvfsrc *dvfsrc) 237 + { 238 + u32 val = dvfsrc_readl(dvfsrc, DVFSRC_SW_REQ); 239 + 240 + return FIELD_GET(DVFSRC_V2_SW_REQ_VCORE_LEVEL, val); 241 + } 242 + 243 + static void dvfsrc_set_vcore_level_v2(struct mtk_dvfsrc *dvfsrc, u32 level) 244 + { 245 + u32 val = dvfsrc_readl(dvfsrc, DVFSRC_SW_REQ); 
246 + 247 + val &= ~DVFSRC_V2_SW_REQ_VCORE_LEVEL; 248 + val |= FIELD_PREP(DVFSRC_V2_SW_REQ_VCORE_LEVEL, level); 249 + 250 + dvfsrc_writel(dvfsrc, DVFSRC_SW_REQ, val); 251 + } 252 + 253 + static u32 dvfsrc_get_vscp_level_v2(struct mtk_dvfsrc *dvfsrc) 254 + { 255 + u32 val = dvfsrc_readl(dvfsrc, DVFSRC_VCORE); 256 + 257 + return FIELD_GET(DVFSRC_V2_VCORE_REQ_VSCP_LEVEL, val); 258 + } 259 + 260 + static void dvfsrc_set_vscp_level_v2(struct mtk_dvfsrc *dvfsrc, u32 level) 261 + { 262 + u32 val = dvfsrc_readl(dvfsrc, DVFSRC_VCORE); 263 + 264 + val &= ~DVFSRC_V2_VCORE_REQ_VSCP_LEVEL; 265 + val |= FIELD_PREP(DVFSRC_V2_VCORE_REQ_VSCP_LEVEL, level); 266 + 267 + dvfsrc_writel(dvfsrc, DVFSRC_VCORE, val); 268 + } 269 + 270 + static void __dvfsrc_set_dram_bw_v1(struct mtk_dvfsrc *dvfsrc, u32 reg, 271 + u16 max_bw, u16 min_bw, u64 bw) 272 + { 273 + u32 new_bw = (u32)div_u64(bw, 100 * 1000); 274 + 275 + /* If bw constraints (in mbps) are defined make sure to respect them */ 276 + if (max_bw) 277 + new_bw = min(new_bw, max_bw); 278 + if (min_bw && new_bw > 0) 279 + new_bw = max(new_bw, min_bw); 280 + 281 + dvfsrc_writel(dvfsrc, reg, new_bw); 282 + } 283 + 284 + static void dvfsrc_set_dram_bw_v1(struct mtk_dvfsrc *dvfsrc, u64 bw) 285 + { 286 + u64 max_bw = dvfsrc->dvd->bw_constraints->max_dram_nom_bw; 287 + 288 + __dvfsrc_set_dram_bw_v1(dvfsrc, DVFSRC_SW_BW, max_bw, 0, bw); 289 + }; 290 + 291 + static void dvfsrc_set_dram_peak_bw_v1(struct mtk_dvfsrc *dvfsrc, u64 bw) 292 + { 293 + u64 max_bw = dvfsrc->dvd->bw_constraints->max_dram_peak_bw; 294 + 295 + __dvfsrc_set_dram_bw_v1(dvfsrc, DVFSRC_SW_PEAK_BW, max_bw, 0, bw); 296 + } 297 + 298 + static void dvfsrc_set_dram_hrt_bw_v1(struct mtk_dvfsrc *dvfsrc, u64 bw) 299 + { 300 + u64 max_bw = dvfsrc->dvd->bw_constraints->max_dram_hrt_bw; 301 + 302 + __dvfsrc_set_dram_bw_v1(dvfsrc, DVFSRC_SW_HRT_BW, max_bw, 0, bw); 303 + } 304 + 305 + static void dvfsrc_set_opp_level_v1(struct mtk_dvfsrc *dvfsrc, u32 level) 306 + { 307 + const struct 
dvfsrc_opp *opp = &dvfsrc->curr_opps->opps[level]; 308 + u32 val; 309 + 310 + /* Translate Pstate to DVFSRC level and set it to DVFSRC HW */ 311 + val = FIELD_PREP(DVFSRC_V1_SW_REQ2_DRAM_LEVEL, opp->dram_opp); 312 + val |= FIELD_PREP(DVFSRC_V1_SW_REQ2_VCORE_LEVEL, opp->vcore_opp); 313 + 314 + dev_dbg(dvfsrc->dev, "vcore_opp: %d, dram_opp: %d\n", opp->vcore_opp, opp->dram_opp); 315 + dvfsrc_writel(dvfsrc, DVFSRC_SW_REQ, val); 316 + } 317 + 318 + int mtk_dvfsrc_send_request(const struct device *dev, u32 cmd, u64 data) 319 + { 320 + struct mtk_dvfsrc *dvfsrc = dev_get_drvdata(dev); 321 + bool state; 322 + int ret; 323 + 324 + dev_dbg(dvfsrc->dev, "cmd: %d, data: %llu\n", cmd, data); 325 + 326 + switch (cmd) { 327 + case MTK_DVFSRC_CMD_BW: 328 + dvfsrc->dvd->set_dram_bw(dvfsrc, data); 329 + return 0; 330 + case MTK_DVFSRC_CMD_HRT_BW: 331 + if (dvfsrc->dvd->set_dram_hrt_bw) 332 + dvfsrc->dvd->set_dram_hrt_bw(dvfsrc, data); 333 + return 0; 334 + case MTK_DVFSRC_CMD_PEAK_BW: 335 + if (dvfsrc->dvd->set_dram_peak_bw) 336 + dvfsrc->dvd->set_dram_peak_bw(dvfsrc, data); 337 + return 0; 338 + case MTK_DVFSRC_CMD_OPP: 339 + if (!dvfsrc->dvd->set_opp_level) 340 + return 0; 341 + 342 + dvfsrc->dvd->set_opp_level(dvfsrc, data); 343 + break; 344 + case MTK_DVFSRC_CMD_VCORE_LEVEL: 345 + dvfsrc->dvd->set_vcore_level(dvfsrc, data); 346 + break; 347 + case MTK_DVFSRC_CMD_VSCP_LEVEL: 348 + if (!dvfsrc->dvd->set_vscp_level) 349 + return 0; 350 + 351 + dvfsrc->dvd->set_vscp_level(dvfsrc, data); 352 + break; 353 + default: 354 + dev_err(dvfsrc->dev, "unknown command: %d\n", cmd); 355 + return -EOPNOTSUPP; 356 + } 357 + 358 + /* DVFSRC needs at least 2T(~196ns) to handle a request */ 359 + udelay(STARTUP_TIME_US); 360 + 361 + ret = readx_poll_timeout_atomic(dvfsrc_is_idle, dvfsrc, state, state, 362 + STARTUP_TIME_US, DVFSRC_POLL_TIMEOUT_US); 363 + if (ret < 0) { 364 + dev_warn(dvfsrc->dev, 365 + "%d: idle timeout, data: %llu, last: %d -> %d\n", cmd, data, 366 + 
dvfsrc->dvd->get_current_level(dvfsrc), 367 + dvfsrc->dvd->get_target_level(dvfsrc)); 368 + return ret; 369 + } 370 + 371 + if (cmd == MTK_DVFSRC_CMD_OPP) 372 + ret = dvfsrc->dvd->wait_for_opp_level(dvfsrc, data); 373 + else 374 + ret = dvfsrc->dvd->wait_for_vcore_level(dvfsrc, data); 375 + 376 + if (ret < 0) { 377 + dev_warn(dvfsrc->dev, 378 + "%d: wait timeout, data: %llu, last: %d -> %d\n", 379 + cmd, data, 380 + dvfsrc->dvd->get_current_level(dvfsrc), 381 + dvfsrc->dvd->get_target_level(dvfsrc)); 382 + return ret; 383 + } 384 + 385 + return 0; 386 + } 387 + EXPORT_SYMBOL(mtk_dvfsrc_send_request); 388 + 389 + int mtk_dvfsrc_query_info(const struct device *dev, u32 cmd, int *data) 390 + { 391 + struct mtk_dvfsrc *dvfsrc = dev_get_drvdata(dev); 392 + 393 + switch (cmd) { 394 + case MTK_DVFSRC_CMD_VCORE_LEVEL: 395 + *data = dvfsrc->dvd->get_vcore_level(dvfsrc); 396 + break; 397 + case MTK_DVFSRC_CMD_VSCP_LEVEL: 398 + *data = dvfsrc->dvd->get_vscp_level(dvfsrc); 399 + break; 400 + default: 401 + return -EOPNOTSUPP; 402 + } 403 + 404 + return 0; 405 + } 406 + EXPORT_SYMBOL(mtk_dvfsrc_query_info); 407 + 408 + static int mtk_dvfsrc_probe(struct platform_device *pdev) 409 + { 410 + struct arm_smccc_res ares; 411 + struct mtk_dvfsrc *dvfsrc; 412 + int ret; 413 + 414 + dvfsrc = devm_kzalloc(&pdev->dev, sizeof(*dvfsrc), GFP_KERNEL); 415 + if (!dvfsrc) 416 + return -ENOMEM; 417 + 418 + dvfsrc->dvd = of_device_get_match_data(&pdev->dev); 419 + dvfsrc->dev = &pdev->dev; 420 + 421 + dvfsrc->regs = devm_platform_get_and_ioremap_resource(pdev, 0, NULL); 422 + if (IS_ERR(dvfsrc->regs)) 423 + return PTR_ERR(dvfsrc->regs); 424 + 425 + arm_smccc_smc(MTK_SIP_DVFSRC_VCOREFS_CONTROL, MTK_SIP_DVFSRC_INIT, 426 + 0, 0, 0, 0, 0, 0, &ares); 427 + if (ares.a0) 428 + return dev_err_probe(&pdev->dev, -EINVAL, "DVFSRC init failed: %lu\n", ares.a0); 429 + 430 + dvfsrc->dram_type = ares.a1; 431 + dev_dbg(&pdev->dev, "DRAM Type: %d\n", dvfsrc->dram_type); 432 + 433 + dvfsrc->curr_opps = 
&dvfsrc->dvd->opps_desc[dvfsrc->dram_type]; 434 + platform_set_drvdata(pdev, dvfsrc); 435 + 436 + ret = devm_of_platform_populate(&pdev->dev); 437 + if (ret) 438 + return dev_err_probe(&pdev->dev, ret, "Failed to populate child devices\n"); 439 + 440 + /* Everything is set up - make it run! */ 441 + arm_smccc_smc(MTK_SIP_DVFSRC_VCOREFS_CONTROL, MTK_SIP_DVFSRC_START, 442 + 0, 0, 0, 0, 0, 0, &ares); 443 + if (ares.a0) 444 + return dev_err_probe(&pdev->dev, -EINVAL, "Cannot start DVFSRC: %lu\n", ares.a0); 445 + 446 + return 0; 447 + } 448 + 449 + static const struct dvfsrc_opp dvfsrc_opp_mt8183_lp4[] = { 450 + { 0, 0 }, { 0, 1 }, { 0, 2 }, { 1, 2 }, 451 + }; 452 + 453 + static const struct dvfsrc_opp dvfsrc_opp_mt8183_lp3[] = { 454 + { 0, 0 }, { 0, 1 }, { 1, 1 }, { 1, 2 }, 455 + }; 456 + 457 + static const struct dvfsrc_opp_desc dvfsrc_opp_mt8183_desc[] = { 458 + [0] = { 459 + .opps = dvfsrc_opp_mt8183_lp4, 460 + .num_opp = ARRAY_SIZE(dvfsrc_opp_mt8183_lp4), 461 + }, 462 + [1] = { 463 + .opps = dvfsrc_opp_mt8183_lp3, 464 + .num_opp = ARRAY_SIZE(dvfsrc_opp_mt8183_lp3), 465 + }, 466 + [2] = { 467 + .opps = dvfsrc_opp_mt8183_lp3, 468 + .num_opp = ARRAY_SIZE(dvfsrc_opp_mt8183_lp3), 469 + } 470 + }; 471 + 472 + static const struct dvfsrc_bw_constraints dvfsrc_bw_constr_mt8183 = { 0, 0, 0 }; 473 + 474 + static const struct dvfsrc_soc_data mt8183_data = { 475 + .opps_desc = dvfsrc_opp_mt8183_desc, 476 + .regs = dvfsrc_mt8183_regs, 477 + .get_target_level = dvfsrc_get_target_level_v1, 478 + .get_current_level = dvfsrc_get_current_level_v1, 479 + .get_vcore_level = dvfsrc_get_vcore_level_v1, 480 + .set_dram_bw = dvfsrc_set_dram_bw_v1, 481 + .set_opp_level = dvfsrc_set_opp_level_v1, 482 + .set_vcore_level = dvfsrc_set_vcore_level_v1, 483 + .wait_for_opp_level = dvfsrc_wait_for_opp_level_v1, 484 + .wait_for_vcore_level = dvfsrc_wait_for_vcore_level_v1, 485 + .bw_constraints = &dvfsrc_bw_constr_mt8183, 486 + }; 487 + 488 + static const struct dvfsrc_opp dvfsrc_opp_mt8195_lp4[] = 
{ 489 + { 0, 0 }, { 1, 0 }, { 2, 0 }, { 3, 0 }, 490 + { 0, 1 }, { 1, 1 }, { 2, 1 }, { 3, 1 }, 491 + { 0, 2 }, { 1, 2 }, { 2, 2 }, { 3, 2 }, 492 + { 1, 3 }, { 2, 3 }, { 3, 3 }, { 1, 4 }, 493 + { 2, 4 }, { 3, 4 }, { 2, 5 }, { 3, 5 }, 494 + { 3, 6 }, 495 + }; 496 + 497 + static const struct dvfsrc_opp_desc dvfsrc_opp_mt8195_desc[] = { 498 + [0] = { 499 + .opps = dvfsrc_opp_mt8195_lp4, 500 + .num_opp = ARRAY_SIZE(dvfsrc_opp_mt8195_lp4), 501 + } 502 + }; 503 + 504 + static const struct dvfsrc_bw_constraints dvfsrc_bw_constr_mt8195 = { 505 + .max_dram_nom_bw = 255, 506 + .max_dram_peak_bw = 255, 507 + .max_dram_hrt_bw = 1023, 508 + }; 509 + 510 + static const struct dvfsrc_soc_data mt8195_data = { 511 + .opps_desc = dvfsrc_opp_mt8195_desc, 512 + .regs = dvfsrc_mt8195_regs, 513 + .get_target_level = dvfsrc_get_target_level_v2, 514 + .get_current_level = dvfsrc_get_current_level_v2, 515 + .get_vcore_level = dvfsrc_get_vcore_level_v2, 516 + .get_vscp_level = dvfsrc_get_vscp_level_v2, 517 + .set_dram_bw = dvfsrc_set_dram_bw_v1, 518 + .set_dram_peak_bw = dvfsrc_set_dram_peak_bw_v1, 519 + .set_dram_hrt_bw = dvfsrc_set_dram_hrt_bw_v1, 520 + .set_vcore_level = dvfsrc_set_vcore_level_v2, 521 + .set_vscp_level = dvfsrc_set_vscp_level_v2, 522 + .wait_for_opp_level = dvfsrc_wait_for_opp_level_v2, 523 + .wait_for_vcore_level = dvfsrc_wait_for_vcore_level_v1, 524 + .bw_constraints = &dvfsrc_bw_constr_mt8195, 525 + }; 526 + 527 + static const struct of_device_id mtk_dvfsrc_of_match[] = { 528 + { .compatible = "mediatek,mt8183-dvfsrc", .data = &mt8183_data }, 529 + { .compatible = "mediatek,mt8195-dvfsrc", .data = &mt8195_data }, 530 + { /* sentinel */ } 531 + }; 532 + 533 + static struct platform_driver mtk_dvfsrc_driver = { 534 + .probe = mtk_dvfsrc_probe, 535 + .driver = { 536 + .name = "mtk-dvfsrc", 537 + .of_match_table = mtk_dvfsrc_of_match, 538 + }, 539 + }; 540 + module_platform_driver(mtk_dvfsrc_driver); 541 + 542 + MODULE_AUTHOR("AngeloGioacchino Del Regno 
<angelogioacchino.delregno@collabora.com>"); 543 + MODULE_AUTHOR("Dawei Chien <dawei.chien@mediatek.com>"); 544 + MODULE_LICENSE("GPL"); 545 + MODULE_DESCRIPTION("MediaTek DVFSRC driver");
+1 -1
drivers/soc/mediatek/mtk-mmsys.c
···
 		.of_match_table = of_match_mtk_mmsys,
 	},
 	.probe = mtk_mmsys_probe,
-	.remove_new = mtk_mmsys_remove,
+	.remove = mtk_mmsys_remove,
 };
 module_platform_driver(mtk_mmsys_drv);
+1
drivers/soc/mediatek/mtk-regulator-coupler.c
···
 {
 	if (!of_machine_is_compatible("mediatek,mt8183") &&
 	    !of_machine_is_compatible("mediatek,mt8186") &&
+	    !of_machine_is_compatible("mediatek,mt8188") &&
 	    !of_machine_is_compatible("mediatek,mt8192"))
 		return 0;
+1 -1
drivers/soc/mediatek/mtk-socinfo.c
···
 static struct platform_driver mtk_socinfo = {
 	.probe = mtk_socinfo_probe,
-	.remove_new = mtk_socinfo_remove,
+	.remove = mtk_socinfo_remove,
 	.driver = {
 		.name = "mtk-socinfo",
 	},
+1 -3
drivers/soc/mediatek/mtk-svs.c
···
 	}

 	pdev = of_find_device_by_node(np);
+	of_node_put(np);
 	if (!pdev) {
-		of_node_put(np);
 		dev_err(svsp->dev, "cannot find pdev by %s\n", node_name);
 		return ERR_PTR(-ENXIO);
 	}
-
-	of_node_put(np);

 	return &pdev->dev;
 }
+1 -1
drivers/soc/microchip/mpfs-sys-controller.c
···
 		.of_match_table = mpfs_sys_controller_of_match,
 	},
 	.probe = mpfs_sys_controller_probe,
-	.remove_new = mpfs_sys_controller_remove,
+	.remove = mpfs_sys_controller_remove,
 };
 module_platform_driver(mpfs_sys_controller_driver);
+1 -1
drivers/soc/pxa/ssp.c
···
 static struct platform_driver pxa_ssp_driver = {
 	.probe = pxa_ssp_probe,
-	.remove_new = pxa_ssp_remove,
+	.remove = pxa_ssp_remove,
 	.driver = {
 		.name = "pxa2xx-ssp",
 		.of_match_table = of_match_ptr(pxa_ssp_of_ids),
+1 -1
drivers/soc/qcom/icc-bwmon.c
···
 static struct platform_driver bwmon_driver = {
 	.probe = bwmon_probe,
-	.remove_new = bwmon_remove,
+	.remove = bwmon_remove,
 	.driver = {
 		.name = "qcom-bwmon",
 		.of_match_table = bwmon_of_match,
+3 -3
drivers/soc/qcom/ice.c
···
 struct qcom_ice {
 	struct device *dev;
 	void __iomem *base;
-	struct device_link *link;

 	struct clk *core_clk;
 };
···
 	struct qcom_ice *ice;
 	struct resource *res;
 	void __iomem *base;
+	struct device_link *link;

 	if (!dev || !dev->of_node)
 		return ERR_PTR(-ENODEV);
···
 		return ERR_PTR(-EPROBE_DEFER);
 	}

-	ice->link = device_link_add(dev, &pdev->dev, DL_FLAG_AUTOREMOVE_SUPPLIER);
-	if (!ice->link) {
+	link = device_link_add(dev, &pdev->dev, DL_FLAG_AUTOREMOVE_SUPPLIER);
+	if (!link) {
 		dev_err(&pdev->dev,
 			"Failed to create device link to consumer %s\n",
 			dev_name(dev));
+2958 -303
drivers/soc/qcom/llcc-qcom.c
···
 #define ACT_CTRL_OPCODE_ACTIVATE	BIT(0)
 #define ACT_CTRL_OPCODE_DEACTIVATE	BIT(1)
 #define ACT_CTRL_ACT_TRIG		BIT(0)
-#define ACT_CTRL_OPCODE_SHIFT		0x01
-#define ATTR1_PROBE_TARGET_WAYS_SHIFT	0x02
-#define ATTR1_FIXED_SIZE_SHIFT		0x03
-#define ATTR1_PRIORITY_SHIFT		0x04
-#define ATTR1_MAX_CAP_SHIFT		0x10
+#define ACT_CTRL_OPCODE_SHIFT		1
+#define ATTR1_PROBE_TARGET_WAYS_SHIFT	2
+#define ATTR1_FIXED_SIZE_SHIFT		3
+#define ATTR1_PRIORITY_SHIFT		4
+#define ATTR1_MAX_CAP_SHIFT		16
 #define ATTR0_RES_WAYS_MASK		GENMASK(15, 0)
 #define ATTR0_BONUS_WAYS_MASK		GENMASK(31, 16)
-#define ATTR0_BONUS_WAYS_SHIFT		0x10
+#define ATTR0_BONUS_WAYS_SHIFT		16
 #define LLCC_STATUS_READ_DELAY		100

 #define CACHE_LINE_SIZE_SHIFT		6
···
 	const struct llcc_slice_config *sct_data;
 	const u32 *reg_offset;
 	const struct llcc_edac_reg_offset *edac_reg_offset;
+	u32 max_cap_shift; /* instead of ATTR1_MAX_CAP_SHIFT */
+	u32 num_banks;
 	int size;
-	bool need_llcc_cfg;
+	bool skip_llcc_cfg;
 	bool no_edac;
 	bool irq_configured;
 };
···
 };

 static const struct llcc_slice_config sa8775p_data[] = {
-	{LLCC_CPUSS, 1, 2048, 1, 0, 0x00FF, 0x0, 0, 0, 0, 1, 1, 0, 0},
-	{LLCC_VIDSC0, 2, 512, 3, 1, 0x00FF, 0x0, 0, 0, 0, 1, 0, 0, 0},
-	{LLCC_CPUSS1, 3, 1024, 1, 1, 0x00FF, 0x0, 0, 0, 0, 1, 0, 0, 0},
-	{LLCC_CPUHWT, 5, 512, 1, 1, 0x00FF, 0x0, 0, 0, 0, 1, 0, 0, 0},
-	{LLCC_AUDIO, 6, 1024, 1, 1, 0x00FF, 0x0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_CMPT, 10, 4096, 1, 1, 0x00FF, 0x0, 0, 0, 0, 1, 0, 0, 0},
-	{LLCC_GPUHTW, 11, 1024, 1, 1, 0x00FF, 0x0, 0, 0, 0, 1, 0, 0, 0},
-	{LLCC_GPU, 12, 1024, 1, 1, 0x00FF, 0x0, 0, 0, 0, 1, 0, 1, 0},
-	{LLCC_MMUHWT, 13, 1024, 1, 1, 0x00FF, 0x0, 0, 0, 0, 0, 1, 0, 0},
-	{LLCC_CMPTDMA, 15, 1024, 1, 1, 0x00FF, 0x0, 0, 0, 0, 1, 0, 0, 0},
-	{LLCC_DISP, 16, 4096, 2, 1, 0x00FF, 0x0, 0, 0, 0, 1, 0, 0, 0},
-	{LLCC_VIDFW, 17, 3072, 1, 0, 0x00FF, 0x0, 0, 0, 0, 1, 0, 0, 0},
-	{LLCC_AUDHW, 22, 1024, 1, 1, 0x00FF, 0x0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_CVP, 28, 256, 3, 1, 0x00FF, 0x0, 0, 0, 0, 1, 0, 0, 0},
-	{LLCC_APTCM, 30, 1024, 3, 1, 0x0, 0xF0, 1, 0, 0, 1, 0, 0, 0},
-	{LLCC_WRCACHE, 31, 512, 1, 1, 0x00FF, 0x0, 0, 0, 0, 0, 1, 0, 0},
+	{
+		.usecase_id = LLCC_CPUSS,
+		.slice_id = 1,
+		.max_cap = 2048,
+		.priority = 1,
+		.bonus_ways = 0xff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_VIDSC0,
+		.slice_id = 2,
+		.max_cap = 512,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0xff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_CPUSS1,
+		.slice_id = 3,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_CPUHWT,
+		.slice_id = 5,
+		.max_cap = 512,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_AUDIO,
+		.slice_id = 6,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xff,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_CMPT,
+		.slice_id = 10,
+		.max_cap = 4096,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_GPUHTW,
+		.slice_id = 11,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_GPU,
+		.slice_id = 12,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+		.write_scid_en = true,
+	}, {
+		.usecase_id = LLCC_MMUHWT,
+		.slice_id = 13,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xff,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_CMPTDMA,
+		.slice_id = 15,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_DISP,
+		.slice_id = 16,
+		.max_cap = 4096,
+		.priority = 2,
+		.fixed_size = true,
+		.bonus_ways = 0xff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_VIDFW,
+		.slice_id = 17,
+		.max_cap = 3072,
+		.priority = 1,
+		.bonus_ways = 0xff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_AUDHW,
+		.slice_id = 22,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xff,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_CVP,
+		.slice_id = 28,
+		.max_cap = 256,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0xff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_APTCM,
+		.slice_id = 30,
+		.max_cap = 1024,
+		.priority = 3,
+		.fixed_size = true,
+		.res_ways = 0xf0,
+		.cache_mode = 1,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_WRCACHE,
+		.slice_id = 31,
+		.max_cap = 512,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xff,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	},
+};
+
+static const struct llcc_slice_config sar1130p_data[] = {
+	{
+		.usecase_id = LLCC_CPUSS,
+		.slice_id = 1,
+		.max_cap = 4096,
+		.priority = 1,
+		.bonus_ways = 0x1fff,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_VIDSC0,
+		.slice_id = 2,
+		.max_cap = 512,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0x1fff,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_AUDIO,
+		.slice_id = 6,
+		.max_cap = 1024,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0x1fff,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_CMPT,
+		.slice_id = 10,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0x1fff,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_GPUHTW,
+		.slice_id = 11,
+		.max_cap = 0,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0x1fff,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_GPU,
+		.slice_id = 12,
+		.max_cap = 3072,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0x1fff,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+		.write_scid_en = true,
+	}, {
+		.usecase_id = LLCC_MMUHWT,
+		.slice_id = 13,
+		.max_cap = 512,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0x1fff,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_DISP,
+		.slice_id = 16,
+		.max_cap = 12800,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0x1fff,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_CVP,
+		.slice_id = 28,
+		.max_cap = 256,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0x1fff,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_APTCM,
+		.slice_id = 26,
+		.max_cap = 2048,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0x0,
+		.res_ways = 0x3,
+		.cache_mode = true,
+		.dis_cap_alloc = true,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_WRCACHE,
+		.slice_id = 31,
+		.max_cap = 256,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0x1fff,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_AENPU,
+		.slice_id = 30,
+		.max_cap = 3072,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0x1fff,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_DISP_LEFT,
+		.slice_id = 17,
+		.max_cap = 0,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0x0,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_DISP_RIGHT,
+		.slice_id = 18,
+		.max_cap = 0,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0x0,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_EVCS_LEFT,
+		.slice_id = 22,
+		.max_cap = 0,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0x0,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_EVCS_RIGHT,
+		.slice_id = 23,
+		.max_cap = 0,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0x0,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	},
+};
+
+static const struct llcc_slice_config sar2130p_data[] = {
+	{
+		.usecase_id = LLCC_CPUSS,
+		.slice_id = 1,
+		.max_cap = 6144,
+		.priority = 1,
+		.fixed_size = 0,
+		.bonus_ways = 0x3fffffff,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_VIDSC0,
+		.slice_id = 2,
+		.max_cap = 128,
+		.priority = 2,
+		.fixed_size = true,
+		.bonus_ways = 0x3fffffff,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_AUDIO,
+		.slice_id = 6,
+		.max_cap = 1024,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0x3fffffff,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_CMPT,
+		.slice_id = 10,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0x3fffffff,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_GPUHTW,
+		.slice_id = 11,
+		.max_cap = 0,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0x3fffffff,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_GPU,
+		.slice_id = 12,
+		.max_cap = 1536,
+		.priority = 2,
+		.fixed_size = true,
+		.bonus_ways = 0x3fffffff,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+		.write_scid_en = true,
+	}, {
+		.usecase_id = LLCC_MMUHWT,
+		.slice_id = 13,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0x3fffffff,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_DISP,
+		.slice_id = 16,
+		.max_cap = 0,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0x3fffffff,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_APTCM,
+		.slice_id = 26,
+		.max_cap = 2048,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0x0,
+		.res_ways = 0x3,
+		.cache_mode = true,
+		.dis_cap_alloc = true,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_WRCACHE,
+		.slice_id = 31,
+		.max_cap = 256,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0x3fffffff,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_VIEYE,
+		.slice_id = 7,
+		.max_cap = 7168,
+		.priority = 4,
+		.fixed_size = true,
+		.bonus_ways = 0x3fffffff,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_VIDPTH,
+		.slice_id = 8,
+		.max_cap = 7168,
+		.priority = 4,
+		.fixed_size = true,
+		.bonus_ways = 0x3fffffff,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_GPUMV,
+		.slice_id = 9,
+		.max_cap = 2048,
+		.priority = 2,
+		.fixed_size = true,
+		.bonus_ways = 0x3fffffff,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_EVA_LEFT,
+		.slice_id = 20,
+		.max_cap = 7168,
+		.priority = 5,
+		.fixed_size = true,
+		.bonus_ways = 0x3ffffffc,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_EVA_RIGHT,
+		.slice_id = 21,
+		.max_cap = 7168,
+		.priority = 5,
+		.fixed_size = true,
+		.bonus_ways = 0x3ffffffc,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_EVAGAIN,
+		.slice_id = 25,
+		.max_cap = 1024,
+		.priority = 2,
+		.fixed_size = true,
+		.bonus_ways = 0x3fffffff,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_AENPU,
+		.slice_id = 30,
+		.max_cap = 3072,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0x3fffffff,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_VIPTH,
+		.slice_id = 29,
+		.max_cap = 1024,
+		.priority = 4,
+		.fixed_size = true,
+		.bonus_ways = 0x3fffffff,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_DISP_LEFT,
+		.slice_id = 17,
+		.max_cap = 0,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0x0,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_DISP_RIGHT,
+		.slice_id = 18,
+		.max_cap = 0,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0x0,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_EVCS_LEFT,
+		.slice_id = 22,
+		.max_cap = 0,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0x0,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_EVCS_RIGHT,
+		.slice_id = 23,
+		.max_cap = 0,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0x0,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_SPAD,
+		.slice_id = 24,
+		.max_cap = 7168,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0x0,
+		.res_ways = 0x0,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	},
 };

 static const struct llcc_slice_config sc7180_data[] = {
-	{ LLCC_CPUSS, 1, 256, 1, 0, 0xf, 0x0, 0, 0, 0, 1, 1 },
-	{ LLCC_MDM, 8, 128, 1, 0, 0xf, 0x0, 0, 0, 0, 1, 0 },
-	{ LLCC_GPUHTW, 11, 128, 1, 0, 0xf, 0x0, 0, 0, 0, 1, 0 },
-	{ LLCC_GPU, 12, 128, 1, 0, 0xf, 0x0, 0, 0, 0, 1, 0 },
+	{
+		.usecase_id = LLCC_CPUSS,
+		.slice_id = 1,
+		.max_cap = 256,
+		.priority = 1,
+		.bonus_ways = 0xf,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_MDM,
+		.slice_id = 8,
+		.max_cap = 128,
+		.priority = 1,
+		.bonus_ways = 0xf,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_GPUHTW,
+		.slice_id = 11,
+		.max_cap = 128,
+		.priority = 1,
+		.bonus_ways = 0xf,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_GPU,
+		.slice_id = 12,
+		.max_cap = 128,
+		.priority = 1,
+		.bonus_ways = 0xf,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	},
 };

 static const struct llcc_slice_config sc7280_data[] = {
-	{ LLCC_CPUSS, 1, 768, 1, 0, 0x3f, 0x0, 0, 0, 0, 1, 1, 0},
-	{ LLCC_MDMHPGRW, 7, 512, 2, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
-	{ LLCC_CMPT, 10, 768, 1, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
-	{ LLCC_GPUHTW, 11, 256, 1, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
-	{ LLCC_GPU, 12, 512, 1, 0, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
-	{ LLCC_MMUHWT, 13, 256, 1, 1, 0x3f, 0x0, 0, 0, 0, 0, 1, 0},
-	{ LLCC_MDMPNG, 21, 768, 0, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
-	{ LLCC_WLHW, 24, 256, 1, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
-	{ LLCC_MODPE, 29, 64, 1, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
+	{
+		.usecase_id = LLCC_CPUSS,
+		.slice_id = 1,
+		.max_cap = 768,
+		.priority = 1,
+		.bonus_ways = 0x3f,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_MDMHPGRW,
+		.slice_id = 7,
+		.max_cap = 512,
+		.priority = 2,
+		.fixed_size = true,
+		.bonus_ways = 0x3f,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_CMPT,
+		.slice_id = 10,
+		.max_cap = 768,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0x3f,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_GPUHTW,
+		.slice_id = 11,
+		.max_cap = 256,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0x3f,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_GPU,
+		.slice_id = 12,
+		.max_cap = 512,
+		.priority = 1,
+		.bonus_ways = 0x3f,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_MMUHWT,
+		.slice_id = 13,
+		.max_cap = 256,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0x3f,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_MDMPNG,
+		.slice_id = 21,
+		.max_cap = 768,
+		.priority = 0,
+		.fixed_size = true,
+		.bonus_ways = 0x3f,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_WLHW,
+		.slice_id = 24,
+		.max_cap = 256,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0x3f,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_MODPE,
+		.slice_id = 29,
+		.max_cap = 64,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0x3f,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	},
 };

 static const struct llcc_slice_config sc8180x_data[] = {
-	{ LLCC_CPUSS, 1, 6144, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 1 },
-	{ LLCC_VIDSC0, 2, 512, 2, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
-	{ LLCC_VIDSC1, 3, 512, 2, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
-	{ LLCC_AUDIO, 6, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
-	{ LLCC_MDMHPGRW, 7, 3072, 1, 1, 0x3ff, 0xc00, 0, 0, 0, 1, 0 },
-	{ LLCC_MDM, 8, 3072, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
-	{ LLCC_MODHW, 9, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
-	{ LLCC_CMPT, 10, 6144, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
-	{ LLCC_GPUHTW, 11, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
-	{ LLCC_GPU, 12, 5120, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
-	{ LLCC_MMUHWT, 13, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1 },
-	{ LLCC_CMPTDMA, 15, 6144, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
-	{ LLCC_DISP, 16, 6144, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
-	{ LLCC_VIDFW, 17, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
-	{ LLCC_MDMHPFX, 20, 1024, 2, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
-	{ LLCC_MDMPNG, 21, 1024, 0, 1, 0xc, 0x0, 0, 0, 0, 1, 0 },
-	{ LLCC_AUDHW, 22, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
-	{ LLCC_NPU, 23, 6144, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
-	{ LLCC_WLHW, 24, 6144, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
-	{ LLCC_MODPE, 29, 512, 1, 1, 0xc, 0x0, 0, 0, 0, 1, 0 },
-	{ LLCC_APTCM, 30, 512, 3, 1, 0x0, 0x1, 1, 0, 0, 1, 0 },
-	{ LLCC_WRCACHE, 31, 128, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 0 },
+	{
+		.usecase_id = LLCC_CPUSS,
+		.slice_id = 1,
+		.max_cap = 6144,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_VIDSC0,
+		.slice_id = 2,
+		.max_cap = 512,
+		.priority = 2,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_VIDSC1,
+		.slice_id = 3,
+		.max_cap = 512,
+		.priority = 2,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_AUDIO,
+		.slice_id = 6,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_MDMHPGRW,
+		.slice_id = 7,
+		.max_cap = 3072,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0x3ff,
+		.res_ways = 0xc00,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_MDM,
+		.slice_id = 8,
+		.max_cap = 3072,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_MODHW,
+		.slice_id = 9,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_CMPT,
+		.slice_id = 10,
+		.max_cap = 6144,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_GPUHTW,
+		.slice_id = 11,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_GPU,
+		.slice_id = 12,
+		.max_cap = 5120,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_MMUHWT,
+		.slice_id = 13,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_CMPTDMA,
+		.slice_id = 15,
+		.max_cap = 6144,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_DISP,
+		.slice_id = 16,
+		.max_cap = 6144,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_VIDFW,
+		.slice_id = 17,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_MDMHPFX,
+		.slice_id = 20,
+		.max_cap = 1024,
+		.priority = 2,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_MDMPNG,
+		.slice_id = 21,
+		.max_cap = 1024,
+		.priority = 0,
+		.fixed_size = true,
+		.bonus_ways = 0xc,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_AUDHW,
+
.slice_id = 22, 974 + .max_cap = 1024, 975 + .priority = 1, 976 + .fixed_size = true, 977 + .bonus_ways = 0xfff, 978 + .cache_mode = 0, 979 + .retain_on_pc = true, 980 + }, { 981 + .usecase_id = LLCC_NPU, 982 + .slice_id = 23, 983 + .max_cap = 6144, 984 + .priority = 1, 985 + .fixed_size = true, 986 + .bonus_ways = 0xfff, 987 + .cache_mode = 0, 988 + .retain_on_pc = true, 989 + }, { 990 + .usecase_id = LLCC_WLHW, 991 + .slice_id = 24, 992 + .max_cap = 6144, 993 + .priority = 1, 994 + .fixed_size = true, 995 + .bonus_ways = 0xfff, 996 + .cache_mode = 0, 997 + .retain_on_pc = true, 998 + }, { 999 + .usecase_id = LLCC_MODPE, 1000 + .slice_id = 29, 1001 + .max_cap = 512, 1002 + .priority = 1, 1003 + .fixed_size = true, 1004 + .bonus_ways = 0xc, 1005 + .cache_mode = 0, 1006 + .retain_on_pc = true, 1007 + }, { 1008 + .usecase_id = LLCC_APTCM, 1009 + .slice_id = 30, 1010 + .max_cap = 512, 1011 + .priority = 3, 1012 + .fixed_size = true, 1013 + .res_ways = 0x1, 1014 + .cache_mode = 1, 1015 + .retain_on_pc = true, 1016 + }, { 1017 + .usecase_id = LLCC_WRCACHE, 1018 + .slice_id = 31, 1019 + .max_cap = 128, 1020 + .priority = 1, 1021 + .fixed_size = true, 1022 + .bonus_ways = 0xfff, 1023 + .cache_mode = 0, 1024 + }, 218 1025 }; 219 1026 220 1027 static const struct llcc_slice_config sc8280xp_data[] = { 221 - { LLCC_CPUSS, 1, 6144, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 1, 0 }, 222 - { LLCC_VIDSC0, 2, 512, 3, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 }, 223 - { LLCC_AUDIO, 6, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 0, 0 }, 224 - { LLCC_CMPT, 10, 6144, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 0, 0 }, 225 - { LLCC_GPUHTW, 11, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 }, 226 - { LLCC_GPU, 12, 4096, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 1 }, 227 - { LLCC_MMUHWT, 13, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 }, 228 - { LLCC_DISP, 16, 6144, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 }, 229 - { LLCC_AUDHW, 22, 2048, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 }, 230 - { LLCC_ECC, 26, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 }, 231 - { 
LLCC_CVP, 28, 512, 3, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 }, 232 - { LLCC_APTCM, 30, 1024, 3, 1, 0x0, 0x1, 1, 0, 0, 1, 0, 0 }, 233 - { LLCC_WRCACHE, 31, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 }, 234 - { LLCC_CVPFW, 17, 512, 1, 0, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 }, 235 - { LLCC_CPUSS1, 3, 2048, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 }, 236 - { LLCC_CPUHWT, 5, 512, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 }, 1028 + { 1029 + .usecase_id = LLCC_CPUSS, 1030 + .slice_id = 1, 1031 + .max_cap = 6144, 1032 + .priority = 1, 1033 + .fixed_size = true, 1034 + .bonus_ways = 0xfff, 1035 + .cache_mode = 0, 1036 + .retain_on_pc = true, 1037 + .activate_on_init = true, 1038 + }, { 1039 + .usecase_id = LLCC_VIDSC0, 1040 + .slice_id = 2, 1041 + .max_cap = 512, 1042 + .priority = 3, 1043 + .fixed_size = true, 1044 + .bonus_ways = 0xfff, 1045 + .cache_mode = 0, 1046 + .retain_on_pc = true, 1047 + }, { 1048 + .usecase_id = LLCC_AUDIO, 1049 + .slice_id = 6, 1050 + .max_cap = 1024, 1051 + .priority = 1, 1052 + .fixed_size = true, 1053 + .bonus_ways = 0xfff, 1054 + .cache_mode = 0, 1055 + }, { 1056 + .usecase_id = LLCC_CMPT, 1057 + .slice_id = 10, 1058 + .max_cap = 6144, 1059 + .priority = 1, 1060 + .fixed_size = true, 1061 + .bonus_ways = 0xfff, 1062 + .cache_mode = 0, 1063 + }, { 1064 + .usecase_id = LLCC_GPUHTW, 1065 + .slice_id = 11, 1066 + .max_cap = 1024, 1067 + .priority = 1, 1068 + .fixed_size = true, 1069 + .bonus_ways = 0xfff, 1070 + .cache_mode = 0, 1071 + .retain_on_pc = true, 1072 + }, { 1073 + .usecase_id = LLCC_GPU, 1074 + .slice_id = 12, 1075 + .max_cap = 4096, 1076 + .priority = 1, 1077 + .fixed_size = true, 1078 + .bonus_ways = 0xfff, 1079 + .cache_mode = 0, 1080 + .retain_on_pc = true, 1081 + .write_scid_en = true, 1082 + }, { 1083 + .usecase_id = LLCC_MMUHWT, 1084 + .slice_id = 13, 1085 + .max_cap = 1024, 1086 + .priority = 1, 1087 + .fixed_size = true, 1088 + .bonus_ways = 0xfff, 1089 + .cache_mode = 0, 1090 + .activate_on_init = true, 1091 + }, { 1092 + .usecase_id = LLCC_DISP, 1093 
+ .slice_id = 16, 1094 + .max_cap = 6144, 1095 + .priority = 1, 1096 + .fixed_size = true, 1097 + .bonus_ways = 0xfff, 1098 + .cache_mode = 0, 1099 + .retain_on_pc = true, 1100 + }, { 1101 + .usecase_id = LLCC_AUDHW, 1102 + .slice_id = 22, 1103 + .max_cap = 2048, 1104 + .priority = 1, 1105 + .fixed_size = true, 1106 + .bonus_ways = 0xfff, 1107 + .cache_mode = 0, 1108 + .retain_on_pc = true, 1109 + }, { 1110 + .usecase_id = LLCC_ECC, 1111 + .slice_id = 26, 1112 + .max_cap = 1024, 1113 + .priority = 1, 1114 + .fixed_size = true, 1115 + .bonus_ways = 0xfff, 1116 + .cache_mode = 0, 1117 + .retain_on_pc = true, 1118 + }, { 1119 + .usecase_id = LLCC_CVP, 1120 + .slice_id = 28, 1121 + .max_cap = 512, 1122 + .priority = 3, 1123 + .fixed_size = true, 1124 + .bonus_ways = 0xfff, 1125 + .cache_mode = 0, 1126 + .retain_on_pc = true, 1127 + }, { 1128 + .usecase_id = LLCC_APTCM, 1129 + .slice_id = 30, 1130 + .max_cap = 1024, 1131 + .priority = 3, 1132 + .fixed_size = true, 1133 + .res_ways = 0x1, 1134 + .cache_mode = 1, 1135 + .retain_on_pc = true, 1136 + }, { 1137 + .usecase_id = LLCC_WRCACHE, 1138 + .slice_id = 31, 1139 + .max_cap = 1024, 1140 + .priority = 1, 1141 + .fixed_size = true, 1142 + .bonus_ways = 0xfff, 1143 + .cache_mode = 0, 1144 + .activate_on_init = true, 1145 + }, { 1146 + .usecase_id = LLCC_CVPFW, 1147 + .slice_id = 17, 1148 + .max_cap = 512, 1149 + .priority = 1, 1150 + .bonus_ways = 0xfff, 1151 + .cache_mode = 0, 1152 + .retain_on_pc = true, 1153 + }, { 1154 + .usecase_id = LLCC_CPUSS1, 1155 + .slice_id = 3, 1156 + .max_cap = 2048, 1157 + .priority = 1, 1158 + .fixed_size = true, 1159 + .bonus_ways = 0xfff, 1160 + .cache_mode = 0, 1161 + .retain_on_pc = true, 1162 + }, { 1163 + .usecase_id = LLCC_CPUHWT, 1164 + .slice_id = 5, 1165 + .max_cap = 512, 1166 + .priority = 1, 1167 + .fixed_size = true, 1168 + .bonus_ways = 0xfff, 1169 + .cache_mode = 0, 1170 + .activate_on_init = true, 1171 + }, 237 1172 }; 238 1173 239 - static const struct llcc_slice_config 
sdm845_data[] = { 240 - { LLCC_CPUSS, 1, 2816, 1, 0, 0xffc, 0x2, 0, 0, 1, 1, 1 }, 241 - { LLCC_VIDSC0, 2, 512, 2, 1, 0x0, 0x0f0, 0, 0, 1, 1, 0 }, 242 - { LLCC_VIDSC1, 3, 512, 2, 1, 0x0, 0x0f0, 0, 0, 1, 1, 0 }, 243 - { LLCC_ROTATOR, 4, 563, 2, 1, 0x0, 0x00e, 2, 0, 1, 1, 0 }, 244 - { LLCC_VOICE, 5, 2816, 1, 0, 0xffc, 0x2, 0, 0, 1, 1, 0 }, 245 - { LLCC_AUDIO, 6, 2816, 1, 0, 0xffc, 0x2, 0, 0, 1, 1, 0 }, 246 - { LLCC_MDMHPGRW, 7, 1024, 2, 0, 0xfc, 0xf00, 0, 0, 1, 1, 0 }, 247 - { LLCC_MDM, 8, 2816, 1, 0, 0xffc, 0x2, 0, 0, 1, 1, 0 }, 248 - { LLCC_CMPT, 10, 2816, 1, 0, 0xffc, 0x2, 0, 0, 1, 1, 0 }, 249 - { LLCC_GPUHTW, 11, 512, 1, 1, 0xc, 0x0, 0, 0, 1, 1, 0 }, 250 - { LLCC_GPU, 12, 2304, 1, 0, 0xff0, 0x2, 0, 0, 1, 1, 0 }, 251 - { LLCC_MMUHWT, 13, 256, 2, 0, 0x0, 0x1, 0, 0, 1, 0, 1 }, 252 - { LLCC_CMPTDMA, 15, 2816, 1, 0, 0xffc, 0x2, 0, 0, 1, 1, 0 }, 253 - { LLCC_DISP, 16, 2816, 1, 0, 0xffc, 0x2, 0, 0, 1, 1, 0 }, 254 - { LLCC_VIDFW, 17, 2816, 1, 0, 0xffc, 0x2, 0, 0, 1, 1, 0 }, 255 - { LLCC_MDMHPFX, 20, 1024, 2, 1, 0x0, 0xf00, 0, 0, 1, 1, 0 }, 256 - { LLCC_MDMPNG, 21, 1024, 0, 1, 0x1e, 0x0, 0, 0, 1, 1, 0 }, 257 - { LLCC_AUDHW, 22, 1024, 1, 1, 0xffc, 0x2, 0, 0, 1, 1, 0 }, 1174 + static const struct llcc_slice_config sdm845_data[] = {{ 1175 + .usecase_id = LLCC_CPUSS, 1176 + .slice_id = 1, 1177 + .max_cap = 2816, 1178 + .priority = 1, 1179 + .bonus_ways = 0xffc, 1180 + .res_ways = 0x2, 1181 + .cache_mode = 0, 1182 + .dis_cap_alloc = true, 1183 + .retain_on_pc = true, 1184 + .activate_on_init = true, 1185 + }, { 1186 + .usecase_id = LLCC_VIDSC0, 1187 + .slice_id = 2, 1188 + .max_cap = 512, 1189 + .priority = 2, 1190 + .fixed_size = true, 1191 + .res_ways = 0xf0, 1192 + .cache_mode = 0, 1193 + .dis_cap_alloc = true, 1194 + .retain_on_pc = true, 1195 + }, { 1196 + .usecase_id = LLCC_VIDSC1, 1197 + .slice_id = 3, 1198 + .max_cap = 512, 1199 + .priority = 2, 1200 + .fixed_size = true, 1201 + .res_ways = 0xf0, 1202 + .cache_mode = 0, 1203 + .dis_cap_alloc = true, 1204 + .retain_on_pc 
= true, 1205 + }, { 1206 + .usecase_id = LLCC_ROTATOR, 1207 + .slice_id = 4, 1208 + .max_cap = 563, 1209 + .priority = 2, 1210 + .fixed_size = true, 1211 + .res_ways = 0xe, 1212 + .cache_mode = 2, 1213 + .dis_cap_alloc = true, 1214 + .retain_on_pc = true, 1215 + }, { 1216 + .usecase_id = LLCC_VOICE, 1217 + .slice_id = 5, 1218 + .max_cap = 2816, 1219 + .priority = 1, 1220 + .bonus_ways = 0xffc, 1221 + .res_ways = 0x2, 1222 + .cache_mode = 0, 1223 + .dis_cap_alloc = true, 1224 + .retain_on_pc = true, 1225 + }, { 1226 + .usecase_id = LLCC_AUDIO, 1227 + .slice_id = 6, 1228 + .max_cap = 2816, 1229 + .priority = 1, 1230 + .bonus_ways = 0xffc, 1231 + .res_ways = 0x2, 1232 + .cache_mode = 0, 1233 + .dis_cap_alloc = true, 1234 + .retain_on_pc = true, 1235 + }, { 1236 + .usecase_id = LLCC_MDMHPGRW, 1237 + .slice_id = 7, 1238 + .max_cap = 1024, 1239 + .priority = 2, 1240 + .bonus_ways = 0xfc, 1241 + .res_ways = 0xf00, 1242 + .cache_mode = 0, 1243 + .dis_cap_alloc = true, 1244 + .retain_on_pc = true, 1245 + }, { 1246 + .usecase_id = LLCC_MDM, 1247 + .slice_id = 8, 1248 + .max_cap = 2816, 1249 + .priority = 1, 1250 + .bonus_ways = 0xffc, 1251 + .res_ways = 0x2, 1252 + .cache_mode = 0, 1253 + .dis_cap_alloc = true, 1254 + .retain_on_pc = true, 1255 + }, { 1256 + .usecase_id = LLCC_CMPT, 1257 + .slice_id = 10, 1258 + .max_cap = 2816, 1259 + .priority = 1, 1260 + .bonus_ways = 0xffc, 1261 + .res_ways = 0x2, 1262 + .cache_mode = 0, 1263 + .dis_cap_alloc = true, 1264 + .retain_on_pc = true, 1265 + }, { 1266 + .usecase_id = LLCC_GPUHTW, 1267 + .slice_id = 11, 1268 + .max_cap = 512, 1269 + .priority = 1, 1270 + .fixed_size = true, 1271 + .bonus_ways = 0xc, 1272 + .cache_mode = 0, 1273 + .dis_cap_alloc = true, 1274 + .retain_on_pc = true, 1275 + }, { 1276 + .usecase_id = LLCC_GPU, 1277 + .slice_id = 12, 1278 + .max_cap = 2304, 1279 + .priority = 1, 1280 + .bonus_ways = 0xff0, 1281 + .res_ways = 0x2, 1282 + .cache_mode = 0, 1283 + .dis_cap_alloc = true, 1284 + .retain_on_pc = true, 1285 
+ }, { 1286 + .usecase_id = LLCC_MMUHWT, 1287 + .slice_id = 13, 1288 + .max_cap = 256, 1289 + .priority = 2, 1290 + .res_ways = 0x1, 1291 + .cache_mode = 0, 1292 + .dis_cap_alloc = true, 1293 + .activate_on_init = true, 1294 + }, { 1295 + .usecase_id = LLCC_CMPTDMA, 1296 + .slice_id = 15, 1297 + .max_cap = 2816, 1298 + .priority = 1, 1299 + .bonus_ways = 0xffc, 1300 + .res_ways = 0x2, 1301 + .cache_mode = 0, 1302 + .dis_cap_alloc = true, 1303 + .retain_on_pc = true, 1304 + }, { 1305 + .usecase_id = LLCC_DISP, 1306 + .slice_id = 16, 1307 + .max_cap = 2816, 1308 + .priority = 1, 1309 + .bonus_ways = 0xffc, 1310 + .res_ways = 0x2, 1311 + .cache_mode = 0, 1312 + .dis_cap_alloc = true, 1313 + .retain_on_pc = true, 1314 + }, { 1315 + .usecase_id = LLCC_VIDFW, 1316 + .slice_id = 17, 1317 + .max_cap = 2816, 1318 + .priority = 1, 1319 + .bonus_ways = 0xffc, 1320 + .res_ways = 0x2, 1321 + .cache_mode = 0, 1322 + .dis_cap_alloc = true, 1323 + .retain_on_pc = true, 1324 + }, { 1325 + .usecase_id = LLCC_MDMHPFX, 1326 + .slice_id = 20, 1327 + .max_cap = 1024, 1328 + .priority = 2, 1329 + .fixed_size = true, 1330 + .res_ways = 0xf00, 1331 + .cache_mode = 0, 1332 + .dis_cap_alloc = true, 1333 + .retain_on_pc = true, 1334 + }, { 1335 + .usecase_id = LLCC_MDMPNG, 1336 + .slice_id = 21, 1337 + .max_cap = 1024, 1338 + .priority = 0, 1339 + .fixed_size = true, 1340 + .bonus_ways = 0x1e, 1341 + .cache_mode = 0, 1342 + .dis_cap_alloc = true, 1343 + .retain_on_pc = true, 1344 + }, { 1345 + .usecase_id = LLCC_AUDHW, 1346 + .slice_id = 22, 1347 + .max_cap = 1024, 1348 + .priority = 1, 1349 + .fixed_size = true, 1350 + .bonus_ways = 0xffc, 1351 + .res_ways = 0x2, 1352 + .cache_mode = 0, 1353 + .dis_cap_alloc = true, 1354 + .retain_on_pc = true, 1355 + }, 258 1356 }; 259 1357 260 1358 static const struct llcc_slice_config sm6350_data[] = { 261 - { LLCC_CPUSS, 1, 768, 1, 0, 0xFFF, 0x0, 0, 0, 0, 0, 1, 1 }, 262 - { LLCC_MDM, 8, 512, 2, 0, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0 }, 263 - { LLCC_GPUHTW, 11, 
256, 1, 0, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0 }, 264 - { LLCC_GPU, 12, 512, 1, 0, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0 }, 265 - { LLCC_MDMPNG, 21, 768, 0, 1, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0 }, 266 - { LLCC_NPU, 23, 768, 1, 0, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0 }, 267 - { LLCC_MODPE, 29, 64, 1, 1, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0 }, 1359 + { 1360 + .usecase_id = LLCC_CPUSS, 1361 + .slice_id = 1, 1362 + .max_cap = 768, 1363 + .priority = 1, 1364 + .bonus_ways = 0xfff, 1365 + .cache_mode = 0, 1366 + .activate_on_init = true, 1367 + .write_scid_en = true, 1368 + }, { 1369 + .usecase_id = LLCC_MDM, 1370 + .slice_id = 8, 1371 + .max_cap = 512, 1372 + .priority = 2, 1373 + .bonus_ways = 0xfff, 1374 + .cache_mode = 0, 1375 + .activate_on_init = true, 1376 + }, { 1377 + .usecase_id = LLCC_GPUHTW, 1378 + .slice_id = 11, 1379 + .max_cap = 256, 1380 + .priority = 1, 1381 + .bonus_ways = 0xfff, 1382 + .cache_mode = 0, 1383 + .activate_on_init = true, 1384 + }, { 1385 + .usecase_id = LLCC_GPU, 1386 + .slice_id = 12, 1387 + .max_cap = 512, 1388 + .priority = 1, 1389 + .bonus_ways = 0xfff, 1390 + .cache_mode = 0, 1391 + .activate_on_init = true, 1392 + }, { 1393 + .usecase_id = LLCC_MDMPNG, 1394 + .slice_id = 21, 1395 + .max_cap = 768, 1396 + .priority = 0, 1397 + .fixed_size = true, 1398 + .bonus_ways = 0xfff, 1399 + .cache_mode = 0, 1400 + .activate_on_init = true, 1401 + }, { 1402 + .usecase_id = LLCC_NPU, 1403 + .slice_id = 23, 1404 + .max_cap = 768, 1405 + .priority = 1, 1406 + .bonus_ways = 0xfff, 1407 + .cache_mode = 0, 1408 + .activate_on_init = true, 1409 + }, { 1410 + .usecase_id = LLCC_MODPE, 1411 + .slice_id = 29, 1412 + .max_cap = 64, 1413 + .priority = 1, 1414 + .fixed_size = true, 1415 + .bonus_ways = 0xfff, 1416 + .cache_mode = 0, 1417 + .activate_on_init = true, 1418 + }, 268 1419 }; 269 1420 270 1421 static const struct llcc_slice_config sm7150_data[] = { 271 - { LLCC_CPUSS, 1, 512, 1, 0, 0xF, 0x0, 0, 0, 0, 1, 1 }, 272 - { LLCC_MDM, 8, 128, 2, 0, 0xF, 0x0, 0, 0, 0, 1, 0 }, 273 - { 
LLCC_GPUHTW, 11, 256, 1, 1, 0xF, 0x0, 0, 0, 0, 1, 0 }, 274 - { LLCC_GPU, 12, 256, 1, 1, 0xF, 0x0, 0, 0, 0, 1, 0 }, 275 - { LLCC_NPU, 23, 512, 1, 0, 0xF, 0x0, 0, 0, 0, 1, 0 }, 1422 + { 1423 + .usecase_id = LLCC_CPUSS, 1424 + .slice_id = 1, 1425 + .max_cap = 512, 1426 + .priority = 1, 1427 + .bonus_ways = 0xf, 1428 + .cache_mode = 0, 1429 + .retain_on_pc = true, 1430 + .activate_on_init = true, 1431 + }, { 1432 + .usecase_id = LLCC_MDM, 1433 + .slice_id = 8, 1434 + .max_cap = 128, 1435 + .priority = 2, 1436 + .bonus_ways = 0xf, 1437 + .cache_mode = 0, 1438 + .retain_on_pc = true, 1439 + }, { 1440 + .usecase_id = LLCC_GPUHTW, 1441 + .slice_id = 11, 1442 + .max_cap = 256, 1443 + .priority = 1, 1444 + .fixed_size = true, 1445 + .bonus_ways = 0xf, 1446 + .cache_mode = 0, 1447 + .retain_on_pc = true, 1448 + }, { 1449 + .usecase_id = LLCC_GPU, 1450 + .slice_id = 12, 1451 + .max_cap = 256, 1452 + .priority = 1, 1453 + .fixed_size = true, 1454 + .bonus_ways = 0xf, 1455 + .cache_mode = 0, 1456 + .retain_on_pc = true, 1457 + }, { 1458 + .usecase_id = LLCC_NPU, 1459 + .slice_id = 23, 1460 + .max_cap = 512, 1461 + .priority = 1, 1462 + .bonus_ways = 0xf, 1463 + .cache_mode = 0, 1464 + .retain_on_pc = true, 1465 + }, 276 1466 }; 277 1467 278 1468 static const struct llcc_slice_config sm8150_data[] = { 279 - { LLCC_CPUSS, 1, 3072, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 1 }, 280 - { LLCC_VIDSC0, 2, 512, 2, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 }, 281 - { LLCC_VIDSC1, 3, 512, 2, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 }, 282 - { LLCC_AUDIO, 6, 1024, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 }, 283 - { LLCC_MDMHPGRW, 7, 3072, 1, 0, 0xFF, 0xF00, 0, 0, 0, 1, 0 }, 284 - { LLCC_MDM, 8, 3072, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 }, 285 - { LLCC_MODHW, 9, 1024, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 }, 286 - { LLCC_CMPT, 10, 3072, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 }, 287 - { LLCC_GPUHTW , 11, 512, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 }, 288 - { LLCC_GPU, 12, 2560, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 }, 289 - { LLCC_MMUHWT, 13, 1024, 1, 1, 0xFFF, 0x0, 
0, 0, 0, 0, 1 }, 290 - { LLCC_CMPTDMA, 15, 3072, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 }, 291 - { LLCC_DISP, 16, 3072, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 }, 292 - { LLCC_MDMHPFX, 20, 1024, 2, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 }, 293 - { LLCC_MDMHPFX, 21, 1024, 0, 1, 0xF, 0x0, 0, 0, 0, 1, 0 }, 294 - { LLCC_AUDHW, 22, 1024, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 }, 295 - { LLCC_NPU, 23, 3072, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 }, 296 - { LLCC_WLHW, 24, 3072, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 }, 297 - { LLCC_MODPE, 29, 256, 1, 1, 0xF, 0x0, 0, 0, 0, 1, 0 }, 298 - { LLCC_APTCM, 30, 256, 3, 1, 0x0, 0x1, 1, 0, 0, 1, 0 }, 299 - { LLCC_WRCACHE, 31, 128, 1, 1, 0xFFF, 0x0, 0, 0, 0, 0, 0 }, 1469 + { 1470 + .usecase_id = LLCC_CPUSS, 1471 + .slice_id = 1, 1472 + .max_cap = 3072, 1473 + .priority = 1, 1474 + .fixed_size = true, 1475 + .bonus_ways = 0xfff, 1476 + .cache_mode = 0, 1477 + .retain_on_pc = true, 1478 + .activate_on_init = true, 1479 + }, { 1480 + .usecase_id = LLCC_VIDSC0, 1481 + .slice_id = 2, 1482 + .max_cap = 512, 1483 + .priority = 2, 1484 + .fixed_size = true, 1485 + .bonus_ways = 0xfff, 1486 + .cache_mode = 0, 1487 + .retain_on_pc = true, 1488 + }, { 1489 + .usecase_id = LLCC_VIDSC1, 1490 + .slice_id = 3, 1491 + .max_cap = 512, 1492 + .priority = 2, 1493 + .fixed_size = true, 1494 + .bonus_ways = 0xfff, 1495 + .cache_mode = 0, 1496 + .retain_on_pc = true, 1497 + }, { 1498 + .usecase_id = LLCC_AUDIO, 1499 + .slice_id = 6, 1500 + .max_cap = 1024, 1501 + .priority = 1, 1502 + .fixed_size = true, 1503 + .bonus_ways = 0xfff, 1504 + .cache_mode = 0, 1505 + .retain_on_pc = true, 1506 + }, { 1507 + .usecase_id = LLCC_MDMHPGRW, 1508 + .slice_id = 7, 1509 + .max_cap = 3072, 1510 + .priority = 1, 1511 + .bonus_ways = 0xff, 1512 + .res_ways = 0xf00, 1513 + .cache_mode = 0, 1514 + .retain_on_pc = true, 1515 + }, { 1516 + .usecase_id = LLCC_MDM, 1517 + .slice_id = 8, 1518 + .max_cap = 3072, 1519 + .priority = 1, 1520 + .fixed_size = true, 1521 + .bonus_ways = 0xfff, 1522 + .cache_mode = 0, 1523 + 
.retain_on_pc = true, 1524 + }, { 1525 + .usecase_id = LLCC_MODHW, 1526 + .slice_id = 9, 1527 + .max_cap = 1024, 1528 + .priority = 1, 1529 + .fixed_size = true, 1530 + .bonus_ways = 0xfff, 1531 + .cache_mode = 0, 1532 + .retain_on_pc = true, 1533 + }, { 1534 + .usecase_id = LLCC_CMPT, 1535 + .slice_id = 10, 1536 + .max_cap = 3072, 1537 + .priority = 1, 1538 + .fixed_size = true, 1539 + .bonus_ways = 0xfff, 1540 + .cache_mode = 0, 1541 + .retain_on_pc = true, 1542 + }, { 1543 + .usecase_id = LLCC_GPUHTW, 1544 + .slice_id = 11, 1545 + .max_cap = 512, 1546 + .priority = 1, 1547 + .fixed_size = true, 1548 + .bonus_ways = 0xfff, 1549 + .cache_mode = 0, 1550 + .retain_on_pc = true, 1551 + }, { 1552 + .usecase_id = LLCC_GPU, 1553 + .slice_id = 12, 1554 + .max_cap = 2560, 1555 + .priority = 1, 1556 + .fixed_size = true, 1557 + .bonus_ways = 0xfff, 1558 + .cache_mode = 0, 1559 + .retain_on_pc = true, 1560 + }, { 1561 + .usecase_id = LLCC_MMUHWT, 1562 + .slice_id = 13, 1563 + .max_cap = 1024, 1564 + .priority = 1, 1565 + .fixed_size = true, 1566 + .bonus_ways = 0xfff, 1567 + .cache_mode = 0, 1568 + .activate_on_init = true, 1569 + }, { 1570 + .usecase_id = LLCC_CMPTDMA, 1571 + .slice_id = 15, 1572 + .max_cap = 3072, 1573 + .priority = 1, 1574 + .fixed_size = true, 1575 + .bonus_ways = 0xfff, 1576 + .cache_mode = 0, 1577 + .retain_on_pc = true, 1578 + }, { 1579 + .usecase_id = LLCC_DISP, 1580 + .slice_id = 16, 1581 + .max_cap = 3072, 1582 + .priority = 1, 1583 + .fixed_size = true, 1584 + .bonus_ways = 0xfff, 1585 + .cache_mode = 0, 1586 + .retain_on_pc = true, 1587 + }, { 1588 + .usecase_id = LLCC_MDMHPFX, 1589 + .slice_id = 20, 1590 + .max_cap = 1024, 1591 + .priority = 2, 1592 + .fixed_size = true, 1593 + .bonus_ways = 0xfff, 1594 + .cache_mode = 0, 1595 + .retain_on_pc = true, 1596 + }, { 1597 + .usecase_id = LLCC_MDMHPFX, 1598 + .slice_id = 21, 1599 + .max_cap = 1024, 1600 + .priority = 0, 1601 + .fixed_size = true, 1602 + .bonus_ways = 0xf, 1603 + .cache_mode = 0, 1604 
+ .retain_on_pc = true, 1605 + }, { 1606 + .usecase_id = LLCC_AUDHW, 1607 + .slice_id = 22, 1608 + .max_cap = 1024, 1609 + .priority = 1, 1610 + .fixed_size = true, 1611 + .bonus_ways = 0xfff, 1612 + .cache_mode = 0, 1613 + .retain_on_pc = true, 1614 + }, { 1615 + .usecase_id = LLCC_NPU, 1616 + .slice_id = 23, 1617 + .max_cap = 3072, 1618 + .priority = 1, 1619 + .fixed_size = true, 1620 + .bonus_ways = 0xfff, 1621 + .cache_mode = 0, 1622 + .retain_on_pc = true, 1623 + }, { 1624 + .usecase_id = LLCC_WLHW, 1625 + .slice_id = 24, 1626 + .max_cap = 3072, 1627 + .priority = 1, 1628 + .fixed_size = true, 1629 + .bonus_ways = 0xfff, 1630 + .cache_mode = 0, 1631 + .retain_on_pc = true, 1632 + }, { 1633 + .usecase_id = LLCC_MODPE, 1634 + .slice_id = 29, 1635 + .max_cap = 256, 1636 + .priority = 1, 1637 + .fixed_size = true, 1638 + .bonus_ways = 0xf, 1639 + .cache_mode = 0, 1640 + .retain_on_pc = true, 1641 + }, { 1642 + .usecase_id = LLCC_APTCM, 1643 + .slice_id = 30, 1644 + .max_cap = 256, 1645 + .priority = 3, 1646 + .fixed_size = true, 1647 + .res_ways = 0x1, 1648 + .cache_mode = 1, 1649 + .retain_on_pc = true, 1650 + }, { 1651 + .usecase_id = LLCC_WRCACHE, 1652 + .slice_id = 31, 1653 + .max_cap = 128, 1654 + .priority = 1, 1655 + .fixed_size = true, 1656 + .bonus_ways = 0xfff, 1657 + .cache_mode = 0, 1658 + }, 300 1659 }; 301 1660 302 1661 static const struct llcc_slice_config sm8250_data[] = { 303 - { LLCC_CPUSS, 1, 3072, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 1, 0 }, 304 - { LLCC_VIDSC0, 2, 512, 3, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 }, 305 - { LLCC_AUDIO, 6, 1024, 1, 0, 0xfff, 0x0, 0, 0, 0, 0, 0, 0 }, 306 - { LLCC_CMPT, 10, 1024, 1, 0, 0xfff, 0x0, 0, 0, 0, 0, 0, 0 }, 307 - { LLCC_GPUHTW, 11, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 }, 308 - { LLCC_GPU, 12, 1024, 1, 0, 0xfff, 0x0, 0, 0, 0, 1, 0, 1 }, 309 - { LLCC_MMUHWT, 13, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 }, 310 - { LLCC_CMPTDMA, 15, 1024, 1, 0, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 }, 311 - { LLCC_DISP, 16, 3072, 1, 1, 0xfff, 
0x0, 0, 0, 0, 1, 0, 0 }, 312 - { LLCC_VIDFW, 17, 512, 1, 0, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 }, 313 - { LLCC_AUDHW, 22, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 }, 314 - { LLCC_NPU, 23, 3072, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 }, 315 - { LLCC_WLHW, 24, 1024, 1, 0, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 }, 316 - { LLCC_CVP, 28, 256, 3, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 }, 317 - { LLCC_APTCM, 30, 128, 3, 0, 0x0, 0x3, 1, 0, 0, 1, 0, 0 }, 318 - { LLCC_WRCACHE, 31, 256, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 }, 1662 + { 1663 + .usecase_id = LLCC_CPUSS, 1664 + .slice_id = 1, 1665 + .max_cap = 3072, 1666 + .priority = 1, 1667 + .fixed_size = true, 1668 + .bonus_ways = 0xfff, 1669 + .cache_mode = 0, 1670 + .retain_on_pc = true, 1671 + .activate_on_init = true, 1672 + }, { 1673 + .usecase_id = LLCC_VIDSC0, 1674 + .slice_id = 2, 1675 + .max_cap = 512, 1676 + .priority = 3, 1677 + .fixed_size = true, 1678 + .bonus_ways = 0xfff, 1679 + .cache_mode = 0, 1680 + .retain_on_pc = true, 1681 + }, { 1682 + .usecase_id = LLCC_AUDIO, 1683 + .slice_id = 6, 1684 + .max_cap = 1024, 1685 + .priority = 1, 1686 + .bonus_ways = 0xfff, 1687 + .cache_mode = 0, 1688 + }, { 1689 + .usecase_id = LLCC_CMPT, 1690 + .slice_id = 10, 1691 + .max_cap = 1024, 1692 + .priority = 1, 1693 + .bonus_ways = 0xfff, 1694 + .cache_mode = 0, 1695 + }, { 1696 + .usecase_id = LLCC_GPUHTW, 1697 + .slice_id = 11, 1698 + .max_cap = 1024, 1699 + .priority = 1, 1700 + .fixed_size = true, 1701 + .bonus_ways = 0xfff, 1702 + .cache_mode = 0, 1703 + .retain_on_pc = true, 1704 + }, { 1705 + .usecase_id = LLCC_GPU, 1706 + .slice_id = 12, 1707 + .max_cap = 1024, 1708 + .priority = 1, 1709 + .bonus_ways = 0xfff, 1710 + .cache_mode = 0, 1711 + .retain_on_pc = true, 1712 + .write_scid_en = true, 1713 + }, { 1714 + .usecase_id = LLCC_MMUHWT, 1715 + .slice_id = 13, 1716 + .max_cap = 1024, 1717 + .priority = 1, 1718 + .fixed_size = true, 1719 + .bonus_ways = 0xfff, 1720 + .cache_mode = 0, 1721 + .activate_on_init = true, 1722 + }, { 1723 + .usecase_id = 
LLCC_CMPTDMA, 1724 + .slice_id = 15, 1725 + .max_cap = 1024, 1726 + .priority = 1, 1727 + .bonus_ways = 0xfff, 1728 + .cache_mode = 0, 1729 + .retain_on_pc = true, 1730 + }, { 1731 + .usecase_id = LLCC_DISP, 1732 + .slice_id = 16, 1733 + .max_cap = 3072, 1734 + .priority = 1, 1735 + .fixed_size = true, 1736 + .bonus_ways = 0xfff, 1737 + .cache_mode = 0, 1738 + .retain_on_pc = true, 1739 + }, { 1740 + .usecase_id = LLCC_VIDFW, 1741 + .slice_id = 17, 1742 + .max_cap = 512, 1743 + .priority = 1, 1744 + .bonus_ways = 0xfff, 1745 + .cache_mode = 0, 1746 + .retain_on_pc = true, 1747 + }, { 1748 + .usecase_id = LLCC_AUDHW, 1749 + .slice_id = 22, 1750 + .max_cap = 1024, 1751 + .priority = 1, 1752 + .fixed_size = true, 1753 + .bonus_ways = 0xfff, 1754 + .cache_mode = 0, 1755 + .retain_on_pc = true, 1756 + }, { 1757 + .usecase_id = LLCC_NPU, 1758 + .slice_id = 23, 1759 + .max_cap = 3072, 1760 + .priority = 1, 1761 + .fixed_size = true, 1762 + .bonus_ways = 0xfff, 1763 + .cache_mode = 0, 1764 + .retain_on_pc = true, 1765 + }, { 1766 + .usecase_id = LLCC_WLHW, 1767 + .slice_id = 24, 1768 + .max_cap = 1024, 1769 + .priority = 1, 1770 + .bonus_ways = 0xfff, 1771 + .cache_mode = 0, 1772 + .retain_on_pc = true, 1773 + }, { 1774 + .usecase_id = LLCC_CVP, 1775 + .slice_id = 28, 1776 + .max_cap = 256, 1777 + .priority = 3, 1778 + .fixed_size = true, 1779 + .bonus_ways = 0xfff, 1780 + .cache_mode = 0, 1781 + .retain_on_pc = true, 1782 + }, { 1783 + .usecase_id = LLCC_APTCM, 1784 + .slice_id = 30, 1785 + .max_cap = 128, 1786 + .priority = 3, 1787 + .res_ways = 0x3, 1788 + .cache_mode = 1, 1789 + .retain_on_pc = true, 1790 + }, { 1791 + .usecase_id = LLCC_WRCACHE, 1792 + .slice_id = 31, 1793 + .max_cap = 256, 1794 + .priority = 1, 1795 + .fixed_size = true, 1796 + .bonus_ways = 0xfff, 1797 + .cache_mode = 0, 1798 + .activate_on_init = true, 1799 + }, 319 1800 }; 320 1801 321 1802 static const struct llcc_slice_config sm8350_data[] = { 322 - { LLCC_CPUSS, 1, 3072, 1, 1, 0xfff, 0x0, 0, 0, 
0, 0, 1, 1 }, 323 - { LLCC_VIDSC0, 2, 512, 3, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 }, 324 - { LLCC_AUDIO, 6, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 0, 0 }, 325 - { LLCC_MDMHPGRW, 7, 1024, 3, 0, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 }, 326 - { LLCC_MODHW, 9, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 }, 327 - { LLCC_CMPT, 10, 3072, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 }, 328 - { LLCC_GPUHTW, 11, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 }, 329 - { LLCC_GPU, 12, 1024, 1, 0, 0xfff, 0x0, 0, 0, 0, 1, 1, 0 }, 330 - { LLCC_MMUHWT, 13, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 0, 1 }, 331 - { LLCC_DISP, 16, 3072, 2, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 }, 332 - { LLCC_MDMPNG, 21, 1024, 0, 1, 0xf, 0x0, 0, 0, 0, 0, 1, 0 }, 333 - { LLCC_AUDHW, 22, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 }, 334 - { LLCC_CVP, 28, 512, 3, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 }, 335 - { LLCC_MODPE, 29, 256, 1, 1, 0xf, 0x0, 0, 0, 0, 0, 1, 0 }, 336 - { LLCC_APTCM, 30, 1024, 3, 1, 0x0, 0x1, 1, 0, 0, 0, 1, 0 }, 337 - { LLCC_WRCACHE, 31, 512, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 0, 1 }, 338 - { LLCC_CVPFW, 17, 512, 1, 0, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 }, 339 - { LLCC_CPUSS1, 3, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 }, 340 - { LLCC_CPUHWT, 5, 512, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 0, 1 }, 1803 + { 1804 + .usecase_id = LLCC_CPUSS, 1805 + .slice_id = 1, 1806 + .max_cap = 3072, 1807 + .priority = 1, 1808 + .fixed_size = true, 1809 + .bonus_ways = 0xfff, 1810 + .cache_mode = 0, 1811 + .activate_on_init = true, 1812 + .write_scid_en = true, 1813 + }, { 1814 + .usecase_id = LLCC_VIDSC0, 1815 + .slice_id = 2, 1816 + .max_cap = 512, 1817 + .priority = 3, 1818 + .fixed_size = true, 1819 + .bonus_ways = 0xfff, 1820 + .cache_mode = 0, 1821 + .activate_on_init = true, 1822 + }, { 1823 + .usecase_id = LLCC_AUDIO, 1824 + .slice_id = 6, 1825 + .max_cap = 1024, 1826 + .priority = 1, 1827 + .fixed_size = true, 1828 + .bonus_ways = 0xfff, 1829 + .cache_mode = 0, 1830 + }, { 1831 + .usecase_id = LLCC_MDMHPGRW, 1832 + .slice_id = 7, 1833 + .max_cap = 1024, 1834 + 
+		.priority = 3,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_MODHW,
+		.slice_id = 9,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_CMPT,
+		.slice_id = 10,
+		.max_cap = 3072,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_GPUHTW,
+		.slice_id = 11,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_GPU,
+		.slice_id = 12,
+		.max_cap = 1024,
+		.priority = 1,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_MMUHWT,
+		.slice_id = 13,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.write_scid_en = true,
+	}, {
+		.usecase_id = LLCC_DISP,
+		.slice_id = 16,
+		.max_cap = 3072,
+		.priority = 2,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_MDMPNG,
+		.slice_id = 21,
+		.max_cap = 1024,
+		.priority = 0,
+		.fixed_size = true,
+		.bonus_ways = 0xf,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_AUDHW,
+		.slice_id = 22,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_CVP,
+		.slice_id = 28,
+		.max_cap = 512,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_MODPE,
+		.slice_id = 29,
+		.max_cap = 256,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xf,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_APTCM,
+		.slice_id = 30,
+		.max_cap = 1024,
+		.priority = 3,
+		.fixed_size = true,
+		.res_ways = 0x1,
+		.cache_mode = 1,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_WRCACHE,
+		.slice_id = 31,
+		.max_cap = 512,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.write_scid_en = true,
+	}, {
+		.usecase_id = LLCC_CVPFW,
+		.slice_id = 17,
+		.max_cap = 512,
+		.priority = 1,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_CPUSS1,
+		.slice_id = 3,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_CPUHWT,
+		.slice_id = 5,
+		.max_cap = 512,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.write_scid_en = true,
+	},
 };
 
 static const struct llcc_slice_config sm8450_data[] = {
-	{LLCC_CPUSS, 1, 3072, 1, 0, 0xFFFF, 0x0, 0, 0, 0, 1, 1, 0, 0 },
-	{LLCC_VIDSC0, 2, 512, 3, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 },
-	{LLCC_AUDIO, 6, 1024, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0 },
-	{LLCC_MDMHPGRW, 7, 1024, 3, 0, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 },
-	{LLCC_MODHW, 9, 1024, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 },
-	{LLCC_CMPT, 10, 4096, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 },
-	{LLCC_GPUHTW, 11, 512, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 },
-	{LLCC_GPU, 12, 2048, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 1, 0 },
-	{LLCC_MMUHWT, 13, 768, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 0, 1, 0, 0 },
-	{LLCC_DISP, 16, 4096, 2, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 },
-	{LLCC_MDMPNG, 21, 1024, 1, 1, 0xF000, 0x0, 0, 0, 0, 1, 0, 0, 0 },
-	{LLCC_AUDHW, 22, 1024, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0 },
-	{LLCC_CVP, 28, 256, 3, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 },
-	{LLCC_MODPE, 29, 64, 1, 1, 0xF000, 0x0, 0, 0, 0, 1, 0, 0, 0 },
-	{LLCC_APTCM, 30, 1024, 3, 1, 0x0, 0xF0, 1, 0, 0, 1, 0, 0, 0 },
-	{LLCC_WRCACHE, 31, 512, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 0, 1, 0, 0 },
-	{LLCC_CVPFW, 17, 512, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 },
-	{LLCC_CPUSS1, 3, 1024, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 },
-	{LLCC_CAMEXP0, 4, 256, 3, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 },
-	{LLCC_CPUMTE, 23, 256, 1, 1, 0x0FFF, 0x0, 0, 0, 0, 0, 1, 0, 0 },
-	{LLCC_CPUHWT, 5, 512, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 1, 0, 0 },
-	{LLCC_CAMEXP1, 27, 256, 3, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 },
-	{LLCC_AENPU, 8, 2048, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0 },
+	{
+		.usecase_id = LLCC_CPUSS,
+		.slice_id = 1,
+		.max_cap = 3072,
+		.priority = 1,
+		.bonus_ways = 0xffff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_VIDSC0,
+		.slice_id = 2,
+		.max_cap = 512,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0xffff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_AUDIO,
+		.slice_id = 6,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xffff,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_MDMHPGRW,
+		.slice_id = 7,
+		.max_cap = 1024,
+		.priority = 3,
+		.bonus_ways = 0xffff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_MODHW,
+		.slice_id = 9,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xffff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_CMPT,
+		.slice_id = 10,
+		.max_cap = 4096,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xffff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_GPUHTW,
+		.slice_id = 11,
+		.max_cap = 512,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xffff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_GPU,
+		.slice_id = 12,
+		.max_cap = 2048,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xffff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+		.write_scid_en = true,
+	}, {
+		.usecase_id = LLCC_MMUHWT,
+		.slice_id = 13,
+		.max_cap = 768,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xffff,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_DISP,
+		.slice_id = 16,
+		.max_cap = 4096,
+		.priority = 2,
+		.fixed_size = true,
+		.bonus_ways = 0xffff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_MDMPNG,
+		.slice_id = 21,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xf000,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_AUDHW,
+		.slice_id = 22,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xffff,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_CVP,
+		.slice_id = 28,
+		.max_cap = 256,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0xffff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_MODPE,
+		.slice_id = 29,
+		.max_cap = 64,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xf000,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_APTCM,
+		.slice_id = 30,
+		.max_cap = 1024,
+		.priority = 3,
+		.fixed_size = true,
+		.res_ways = 0xf0,
+		.cache_mode = 1,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_WRCACHE,
+		.slice_id = 31,
+		.max_cap = 512,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xffff,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_CVPFW,
+		.slice_id = 17,
+		.max_cap = 512,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xffff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_CPUSS1,
+		.slice_id = 3,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xffff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_CAMEXP0,
+		.slice_id = 4,
+		.max_cap = 256,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0xffff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_CPUMTE,
+		.slice_id = 23,
+		.max_cap = 256,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_CPUHWT,
+		.slice_id = 5,
+		.max_cap = 512,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xffff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_CAMEXP1,
+		.slice_id = 27,
+		.max_cap = 256,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0xffff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_AENPU,
+		.slice_id = 8,
+		.max_cap = 2048,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xffff,
+		.cache_mode = 0,
+	},
 };
 
 static const struct llcc_slice_config sm8550_data[] = {
-	{LLCC_CPUSS, 1, 5120, 1, 0, 0xFFFFFF, 0x0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
-	{LLCC_VIDSC0, 2, 512, 4, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
-	{LLCC_AUDIO, 6, 1024, 1, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
-	{LLCC_MDMHPGRW, 25, 1024, 4, 0, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
-	{LLCC_MODHW, 26, 1024, 1, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
-	{LLCC_CMPT, 10, 4096, 1, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
-	{LLCC_GPUHTW, 11, 512, 1, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
-	{LLCC_GPU, 9, 3096, 1, 0, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, },
-	{LLCC_MMUHWT, 18, 768, 1, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
-	{LLCC_DISP, 16, 6144, 1, 1, 0xFFFFFF, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
-	{LLCC_MDMPNG, 27, 1024, 0, 1, 0xF00000, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
-	{LLCC_AUDHW, 22, 1024, 1, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
-	{LLCC_CVP, 8, 256, 4, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
-	{LLCC_MODPE, 29, 64, 1, 1, 0xF00000, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, },
-	{LLCC_WRCACHE, 31, 512, 1, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
-	{LLCC_CAMEXP0, 4, 256, 4, 1, 0xF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
-	{LLCC_CPUHWT, 5, 512, 1, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
-	{LLCC_CAMEXP1, 7, 3200, 3, 1, 0xFFFFF0, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
-	{LLCC_CMPTHCP, 17, 256, 4, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
-	{LLCC_LCPDARE, 30, 128, 4, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, },
-	{LLCC_AENPU, 3, 3072, 1, 1, 0xFE01FF, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
-	{LLCC_ISLAND1, 12, 1792, 7, 1, 0xFE00, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
-	{LLCC_ISLAND4, 15, 256, 7, 1, 0x10000, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
-	{LLCC_CAMEXP2, 19, 3200, 3, 1, 0xFFFFF0, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
-	{LLCC_CAMEXP3, 20, 3200, 2, 1, 0xFFFFF0, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
-	{LLCC_CAMEXP4, 21, 3200, 2, 1, 0xFFFFF0, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
-	{LLCC_DISP_WB, 23, 1024, 4, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
-	{LLCC_DISP_1, 24, 6144, 1, 1, 0xFFFFFF, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
-	{LLCC_VIDVSP, 28, 256, 4, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+	{
+		.usecase_id = LLCC_CPUSS,
+		.slice_id = 1,
+		.max_cap = 5120,
+		.priority = 1,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+		.activate_on_init = true,
+		.write_scid_en = true,
+	}, {
+		.usecase_id = LLCC_VIDSC0,
+		.slice_id = 2,
+		.max_cap = 512,
+		.priority = 4,
+		.fixed_size = true,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_AUDIO,
+		.slice_id = 6,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_MDMHPGRW,
+		.slice_id = 25,
+		.max_cap = 1024,
+		.priority = 4,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_MODHW,
+		.slice_id = 26,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_CMPT,
+		.slice_id = 10,
+		.max_cap = 4096,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_GPUHTW,
+		.slice_id = 11,
+		.max_cap = 512,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_GPU,
+		.slice_id = 9,
+		.max_cap = 3096,
+		.priority = 1,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+		.write_scid_en = true,
+		.write_scid_cacheable_en = true,
+	}, {
+		.usecase_id = LLCC_MMUHWT,
+		.slice_id = 18,
+		.max_cap = 768,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_DISP,
+		.slice_id = 16,
+		.max_cap = 6144,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 2,
+	}, {
+		.usecase_id = LLCC_MDMPNG,
+		.slice_id = 27,
+		.max_cap = 1024,
+		.priority = 0,
+		.fixed_size = true,
+		.bonus_ways = 0xf00000,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_AUDHW,
+		.slice_id = 22,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_CVP,
+		.slice_id = 8,
+		.max_cap = 256,
+		.priority = 4,
+		.fixed_size = true,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_MODPE,
+		.slice_id = 29,
+		.max_cap = 64,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xf00000,
+		.cache_mode = 0,
+		.alloc_oneway_en = true,
+		.vict_prio = true,
+	}, {
+		.usecase_id = LLCC_WRCACHE,
+		.slice_id = 31,
+		.max_cap = 512,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_CAMEXP0,
+		.slice_id = 4,
+		.max_cap = 256,
+		.priority = 4,
+		.fixed_size = true,
+		.bonus_ways = 0xf,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_CPUHWT,
+		.slice_id = 5,
+		.max_cap = 512,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_CAMEXP1,
+		.slice_id = 7,
+		.max_cap = 3200,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0xfffff0,
+		.cache_mode = 2,
+	}, {
+		.usecase_id = LLCC_CMPTHCP,
+		.slice_id = 17,
+		.max_cap = 256,
+		.priority = 4,
+		.fixed_size = true,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_LCPDARE,
+		.slice_id = 30,
+		.max_cap = 128,
+		.priority = 4,
+		.fixed_size = true,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+		.activate_on_init = true,
+		.alloc_oneway_en = true,
+		.vict_prio = true,
+	}, {
+		.usecase_id = LLCC_AENPU,
+		.slice_id = 3,
+		.max_cap = 3072,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfe01ff,
+		.cache_mode = 2,
+	}, {
+		.usecase_id = LLCC_ISLAND1,
+		.slice_id = 12,
+		.max_cap = 1792,
+		.priority = 7,
+		.fixed_size = true,
+		.bonus_ways = 0xfe00,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_ISLAND4,
+		.slice_id = 15,
+		.max_cap = 256,
+		.priority = 7,
+		.fixed_size = true,
+		.bonus_ways = 0x10000,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_CAMEXP2,
+		.slice_id = 19,
+		.max_cap = 3200,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0xfffff0,
+		.cache_mode = 2,
+	}, {
+		.usecase_id = LLCC_CAMEXP3,
+		.slice_id = 20,
+		.max_cap = 3200,
+		.priority = 2,
+		.fixed_size = true,
+		.bonus_ways = 0xfffff0,
+		.cache_mode = 2,
+	}, {
+		.usecase_id = LLCC_CAMEXP4,
+		.slice_id = 21,
+		.max_cap = 3200,
+		.priority = 2,
+		.fixed_size = true,
+		.bonus_ways = 0xfffff0,
+		.cache_mode = 2,
+	}, {
+		.usecase_id = LLCC_DISP_WB,
+		.slice_id = 23,
+		.max_cap = 1024,
+		.priority = 4,
+		.fixed_size = true,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_DISP_1,
+		.slice_id = 24,
+		.max_cap = 6144,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 2,
+	}, {
+		.usecase_id = LLCC_VIDVSP,
+		.slice_id = 28,
+		.max_cap = 256,
+		.priority = 4,
+		.fixed_size = true,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+	},
 };
 
 static const struct llcc_slice_config sm8650_data[] = {
-	{LLCC_CPUSS, 1, 5120, 1, 0, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_VIDSC0, 2, 512, 3, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_AUDIO, 6, 512, 1, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_MDMHPGRW, 25, 1024, 3, 0, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_MODHW, 26, 1024, 1, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_CMPT, 10, 4096, 1, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_GPUHTW, 11, 512, 1, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_GPU, 9, 3096, 1, 0, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_MMUHWT, 18, 768, 1, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_DISP, 16, 6144, 1, 1, 0xFFFFFF, 0x0, 2, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_MDMHPFX, 24, 1024, 3, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_MDMPNG, 27, 1024, 0, 1, 0x000000, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_AUDHW, 22, 1024, 1, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_CVP, 8, 256, 3, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_MODPE, 29, 128, 1, 1, 0xF00000, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0},
-	{LLCC_WRCACHE, 31, 512, 1, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_CAMEXP0, 4, 256, 3, 1, 0xF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_CAMEXP1, 7, 3200, 3, 1, 0xFFFFF0, 0x0, 2, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_CMPTHCP, 17, 256, 3, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_LCPDARE, 30, 128, 3, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0},
-	{LLCC_AENPU, 3, 3072, 1, 1, 0xFFFFFF, 0x0, 2, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_ISLAND1, 12, 5888, 7, 1, 0x0, 0x7FFFFF, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_DISP_WB, 23, 1024, 3, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_VIDVSP, 28, 256, 3, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+	{
+		.usecase_id = LLCC_CPUSS,
+		.slice_id = 1,
+		.max_cap = 5120,
+		.priority = 1,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+		.activate_on_init = true,
+		.stale_en = true,
+	}, {
+		.usecase_id = LLCC_VIDSC0,
+		.slice_id = 2,
+		.max_cap = 512,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_AUDIO,
+		.slice_id = 6,
+		.max_cap = 512,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_MDMHPGRW,
+		.slice_id = 25,
+		.max_cap = 1024,
+		.priority = 3,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_MODHW,
+		.slice_id = 26,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_CMPT,
+		.slice_id = 10,
+		.max_cap = 4096,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_GPUHTW,
+		.slice_id = 11,
+		.max_cap = 512,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_GPU,
+		.slice_id = 9,
+		.max_cap = 3096,
+		.priority = 1,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+		.write_scid_en = true,
+		.write_scid_cacheable_en = true,
+	}, {
+		.usecase_id = LLCC_MMUHWT,
+		.slice_id = 18,
+		.max_cap = 768,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_DISP,
+		.slice_id = 16,
+		.max_cap = 6144,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 2,
+	}, {
+		.usecase_id = LLCC_MDMHPFX,
+		.slice_id = 24,
+		.max_cap = 1024,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_MDMPNG,
+		.slice_id = 27,
+		.max_cap = 1024,
+		.priority = 0,
+		.fixed_size = true,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_AUDHW,
+		.slice_id = 22,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_CVP,
+		.slice_id = 8,
+		.max_cap = 256,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_MODPE,
+		.slice_id = 29,
+		.max_cap = 128,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xf00000,
+		.cache_mode = 0,
+		.alloc_oneway_en = true,
+	}, {
+		.usecase_id = LLCC_WRCACHE,
+		.slice_id = 31,
+		.max_cap = 512,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_CAMEXP0,
+		.slice_id = 4,
+		.max_cap = 256,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0xf,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_CAMEXP1,
+		.slice_id = 7,
+		.max_cap = 3200,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0xfffff0,
+		.cache_mode = 2,
+	}, {
+		.usecase_id = LLCC_CMPTHCP,
+		.slice_id = 17,
+		.max_cap = 256,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_LCPDARE,
+		.slice_id = 30,
+		.max_cap = 128,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+		.activate_on_init = true,
+		.alloc_oneway_en = true,
+	}, {
+		.usecase_id = LLCC_AENPU,
+		.slice_id = 3,
+		.max_cap = 3072,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 2,
+	}, {
+		.usecase_id = LLCC_ISLAND1,
+		.slice_id = 12,
+		.max_cap = 5888,
+		.priority = 7,
+		.fixed_size = true,
+		.res_ways = 0x7fffff,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_DISP_WB,
+		.slice_id = 23,
+		.max_cap = 1024,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_VIDVSP,
+		.slice_id = 28,
+		.max_cap = 256,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0xffffff,
+		.cache_mode = 0,
+	},
+};
+
+static const struct llcc_slice_config qcs615_data[] = {
+	{
+		.usecase_id = LLCC_CPUSS,
+		.slice_id = 1,
+		.max_cap = 128,
+		.priority = 1,
+		.bonus_ways = 0xf,
+		.cache_mode = 0,
+		.activate_on_init = true,
+		.write_scid_en = true,
+	}, {
+		.usecase_id = LLCC_MDM,
+		.slice_id = 8,
+		.max_cap = 256,
+		.priority = 0,
+		.fixed_size = true,
+		.bonus_ways = 0xf,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_GPUHTW,
+		.slice_id = 11,
+		.max_cap = 128,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xf,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_GPU,
+		.slice_id = 12,
+		.max_cap = 128,
+		.priority = 1,
+		.bonus_ways = 0xf,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	},
+};
+
+static const struct llcc_slice_config qcs8300_data[] = {
+	{
+		.usecase_id = LLCC_GPUHTW,
+		.slice_id = 11,
+		.max_cap = 128,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xf,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_GPU,
+		.slice_id = 12,
+		.max_cap = 512,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xf,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+		.write_scid_en = true,
+	}, {
+		.usecase_id = LLCC_MMUHWT,
+		.slice_id = 13,
+		.max_cap = 128,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xf,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_ECC,
+		.slice_id = 26,
+		.max_cap = 256,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0xf,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_WRCACHE,
+		.slice_id = 31,
+		.max_cap = 128,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xf,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	},
 };
 
 static const struct llcc_slice_config qdu1000_data_2ch[] = {
-	{ LLCC_MDMHPGRW, 7, 512, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
-	{ LLCC_MODHW, 9, 256, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
-	{ LLCC_MDMPNG, 21, 256, 0, 1, 0x3, 0x0, 0, 0, 0, 1, 0, 0, 0 },
-	{ LLCC_ECC, 26, 512, 3, 1, 0xffc, 0x0, 0, 0, 0, 0, 1, 0, 0 },
-	{ LLCC_MODPE, 29, 256, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
-	{ LLCC_APTCM, 30, 256, 3, 1, 0x0, 0xc, 1, 0, 0, 1, 0, 0, 0 },
-	{ LLCC_WRCACHE, 31, 128, 1, 1, 0x3, 0x0, 0, 0, 0, 0, 1, 0, 0 },
+	{
+		.usecase_id = LLCC_MDMHPGRW,
+		.slice_id = 7,
+		.max_cap = 512,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_MODHW,
+		.slice_id = 9,
+		.max_cap = 256,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_MDMPNG,
+		.slice_id = 21,
+		.max_cap = 256,
+		.priority = 0,
+		.fixed_size = true,
+		.bonus_ways = 0x3,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_ECC,
+		.slice_id = 26,
+		.max_cap = 512,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0xffc,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_MODPE,
+		.slice_id = 29,
+		.max_cap = 256,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_APTCM,
+		.slice_id = 30,
+		.max_cap = 256,
+		.priority = 3,
+		.fixed_size = true,
+		.res_ways = 0xc,
+		.cache_mode = 1,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_WRCACHE,
+		.slice_id = 31,
+		.max_cap = 128,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0x3,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	},
 };
 
 static const struct llcc_slice_config qdu1000_data_4ch[] = {
-	{ LLCC_MDMHPGRW, 7, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
-	{ LLCC_MODHW, 9, 512, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
-	{ LLCC_MDMPNG, 21, 512, 0, 1, 0x3, 0x0, 0, 0, 0, 1, 0, 0, 0 },
-	{ LLCC_ECC, 26, 1024, 3, 1, 0xffc, 0x0, 0, 0, 0, 0, 1, 0, 0 },
-	{ LLCC_MODPE, 29, 512, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
-	{ LLCC_APTCM, 30, 512, 3, 1, 0x0, 0xc, 1, 0, 0, 1, 0, 0, 0 },
-	{ LLCC_WRCACHE, 31, 256, 1, 1, 0x3, 0x0, 0, 0, 0, 0, 1, 0, 0 },
+	{
+		.usecase_id = LLCC_MDMHPGRW,
+		.slice_id = 7,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_MODHW,
+		.slice_id = 9,
+		.max_cap = 512,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_MDMPNG,
+		.slice_id = 21,
+		.max_cap = 512,
+		.priority = 0,
+		.fixed_size = true,
+		.bonus_ways = 0x3,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_ECC,
+		.slice_id = 26,
+		.max_cap = 1024,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0xffc,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_MODPE,
+		.slice_id = 29,
+		.max_cap = 512,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_APTCM,
+		.slice_id = 30,
+		.max_cap = 512,
+		.priority = 3,
+		.fixed_size = true,
+		.res_ways = 0xc,
+		.cache_mode = 1,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_WRCACHE,
+		.slice_id = 31,
+		.max_cap = 256,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0x3,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	},
 };
 
 static const struct llcc_slice_config qdu1000_data_8ch[] = {
-	{ LLCC_MDMHPGRW, 7, 2048, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
-	{ LLCC_MODHW, 9, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
-	{ LLCC_MDMPNG, 21, 1024, 0, 1, 0x3, 0x0, 0, 0, 0, 1, 0, 0, 0 },
-	{ LLCC_ECC, 26, 2048, 3, 1, 0xffc, 0x0, 0, 0, 0, 0, 1, 0, 0 },
-	{ LLCC_MODPE, 29, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
-	{ LLCC_APTCM, 30, 1024, 3, 1, 0x0, 0xc, 1, 0, 0, 1, 0, 0, 0 },
-	{ LLCC_WRCACHE, 31, 512, 1, 1, 0x3, 0x0, 0, 0, 0, 0, 1, 0, 0 },
+	{
+		.usecase_id = LLCC_MDMHPGRW,
+		.slice_id = 7,
+		.max_cap = 2048,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_MODHW,
+		.slice_id = 9,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_MDMPNG,
+		.slice_id = 21,
+		.max_cap = 1024,
+		.priority = 0,
+		.fixed_size = true,
+		.bonus_ways = 0x3,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_ECC,
+		.slice_id = 26,
+		.max_cap = 2048,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0xffc,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_MODPE,
+		.slice_id = 29,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_APTCM,
+		.slice_id = 30,
+		.max_cap = 1024,
+		.priority = 3,
+		.fixed_size = true,
+		.res_ways = 0xc,
+		.cache_mode = 1,
+		.retain_on_pc = true,
+	}, {
+		.usecase_id = LLCC_WRCACHE,
+		.slice_id = 31,
+		.max_cap = 512,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0x3,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	},
 };
 
 static const struct llcc_slice_config x1e80100_data[] = {
-	{LLCC_CPUSS, 1, 6144, 1, 1, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_VIDSC0, 2, 512, 4, 1, 0xFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_AUDIO, 6, 1024, 1, 1, 0xFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_CMPT, 10, 6144, 1, 1, 0xFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_GPUHTW, 11, 512, 1, 1, 0xFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_GPU, 9, 4608, 1, 0, 0xFFF, 0x0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_MMUHWT, 18, 512, 1, 1, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_AUDHW, 22, 1024, 1, 1, 0xFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_CVP, 8, 512, 4, 1, 0xFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_WRCACHE, 31, 1024, 1, 1, 0xFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_CAMEXP0, 4, 256, 4, 1, 0x3, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_CAMEXP1, 7, 3072, 3, 1, 0xFFC, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_LCPDARE, 30, 512, 3, 1, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0},
-	{LLCC_AENPU, 3, 3072, 1, 1, 0xFFF, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_ISLAND1, 12, 2048, 7, 1, 0x0, 0xF, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_CAMEXP2, 19, 3072, 3, 1, 0xFFC, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_CAMEXP3, 20, 3072, 2, 1, 0xFFC, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
-	{LLCC_CAMEXP4, 21, 3072, 2, 1, 0xFFC, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+	{
+		.usecase_id = LLCC_CPUSS,
+		.slice_id = 1,
+		.max_cap = 6144,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_VIDSC0,
+		.slice_id = 2,
+		.max_cap = 512,
+		.priority = 4,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_AUDIO,
+		.slice_id = 6,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_CMPT,
+		.slice_id = 10,
+		.max_cap = 6144,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_GPUHTW,
+		.slice_id = 11,
+		.max_cap = 512,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_GPU,
+		.slice_id = 9,
+		.max_cap = 4608,
+		.priority = 1,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.write_scid_en = true,
+		.write_scid_cacheable_en = true,
+		.stale_en = true,
+	}, {
+		.usecase_id = LLCC_MMUHWT,
+		.slice_id = 18,
+		.max_cap = 512,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_AUDHW,
+		.slice_id = 22,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_CVP,
+		.slice_id = 8,
+		.max_cap = 512,
+		.priority = 4,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_WRCACHE,
+		.slice_id = 31,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_CAMEXP0,
+		.slice_id = 4,
+		.max_cap = 256,
+		.priority = 4,
+		.fixed_size = true,
+		.bonus_ways = 0x3,
+		.cache_mode = 0,
+	}, {
+		.usecase_id = LLCC_CAMEXP1,
+		.slice_id = 7,
+		.max_cap = 3072,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0xffc,
+		.cache_mode = 2,
+	}, {
+		.usecase_id = LLCC_LCPDARE,
+		.slice_id = 30,
+		.max_cap = 512,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 0,
+		.activate_on_init = true,
+		.alloc_oneway_en = true,
+	}, {
+		.usecase_id = LLCC_AENPU,
+		.slice_id = 3,
+		.max_cap = 3072,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xfff,
+		.cache_mode = 2,
+	}, {
+		.usecase_id = LLCC_ISLAND1,
+		.slice_id = 12,
+		.max_cap = 2048,
+		.priority = 7,
+		.fixed_size = true,
+		.res_ways = 0xf,
+		.cache_mode = 0,
+	}, {
.usecase_id = LLCC_CAMEXP2, 3049 + .slice_id = 19, 3050 + .max_cap = 3072, 3051 + .priority = 3, 3052 + .fixed_size = true, 3053 + .bonus_ways = 0xffc, 3054 + .cache_mode = 2, 3055 + }, { 3056 + .usecase_id = LLCC_CAMEXP3, 3057 + .slice_id = 20, 3058 + .max_cap = 3072, 3059 + .priority = 2, 3060 + .fixed_size = true, 3061 + .bonus_ways = 0xffc, 3062 + .cache_mode = 2, 3063 + }, { 3064 + .usecase_id = LLCC_CAMEXP4, 3065 + .slice_id = 21, 3066 + .max_cap = 3072, 3067 + .priority = 2, 3068 + .fixed_size = true, 3069 + .bonus_ways = 0xffc, 3070 + .cache_mode = 2, 3071 + }, 477 3072 }; 478 3073 479 3074 static const struct llcc_edac_reg_offset llcc_v1_edac_reg_offset = { ··· 3139 540 [LLCC_COMMON_STATUS0] = 0x0003400c, 3140 541 }; 3141 542 543 + static const struct qcom_llcc_config qcs615_cfg[] = { 544 + { 545 + .sct_data = qcs615_data, 546 + .size = ARRAY_SIZE(qcs615_data), 547 + .reg_offset = llcc_v1_reg_offset, 548 + .edac_reg_offset = &llcc_v1_edac_reg_offset, 549 + }, 550 + }; 551 + 552 + static const struct qcom_llcc_config qcs8300_cfg[] = { 553 + { 554 + .sct_data = qcs8300_data, 555 + .size = ARRAY_SIZE(qcs8300_data), 556 + .reg_offset = llcc_v2_1_reg_offset, 557 + .edac_reg_offset = &llcc_v2_1_edac_reg_offset, 558 + .num_banks = 4, 559 + }, 560 + }; 561 + 3142 562 static const struct qcom_llcc_config qdu1000_cfg[] = { 3143 563 { 3144 564 .sct_data = qdu1000_data_8ch, 3145 565 .size = ARRAY_SIZE(qdu1000_data_8ch), 3146 - .need_llcc_cfg = true, 3147 566 .reg_offset = llcc_v2_1_reg_offset, 3148 567 .edac_reg_offset = &llcc_v2_1_edac_reg_offset, 3149 568 }, 3150 569 { 3151 570 .sct_data = qdu1000_data_4ch, 3152 571 .size = ARRAY_SIZE(qdu1000_data_4ch), 3153 - .need_llcc_cfg = true, 3154 572 .reg_offset = llcc_v2_1_reg_offset, 3155 573 .edac_reg_offset = &llcc_v2_1_edac_reg_offset, 3156 574 }, 3157 575 { 3158 576 .sct_data = qdu1000_data_4ch, 3159 577 .size = ARRAY_SIZE(qdu1000_data_4ch), 3160 - .need_llcc_cfg = true, 3161 578 .reg_offset = llcc_v2_1_reg_offset, 
3162 579 .edac_reg_offset = &llcc_v2_1_edac_reg_offset, 3163 580 }, 3164 581 { 3165 582 .sct_data = qdu1000_data_2ch, 3166 583 .size = ARRAY_SIZE(qdu1000_data_2ch), 3167 - .need_llcc_cfg = true, 3168 584 .reg_offset = llcc_v2_1_reg_offset, 3169 585 .edac_reg_offset = &llcc_v2_1_edac_reg_offset, 3170 586 }, ··· 3189 575 { 3190 576 .sct_data = sa8775p_data, 3191 577 .size = ARRAY_SIZE(sa8775p_data), 3192 - .need_llcc_cfg = true, 3193 578 .reg_offset = llcc_v2_1_reg_offset, 3194 579 .edac_reg_offset = &llcc_v2_1_edac_reg_offset, 580 + }, 581 + }; 582 + 583 + static const struct qcom_llcc_config sar1130p_cfg[] = { 584 + { 585 + .sct_data = sar1130p_data, 586 + .size = ARRAY_SIZE(sar1130p_data), 587 + .reg_offset = llcc_v2_1_reg_offset, 588 + .edac_reg_offset = &llcc_v2_1_edac_reg_offset, 589 + .max_cap_shift = 14, 590 + .num_banks = 2, 591 + }, 592 + }; 593 + 594 + static const struct qcom_llcc_config sar2130p_cfg[] = { 595 + { 596 + .sct_data = sar2130p_data, 597 + .size = ARRAY_SIZE(sar2130p_data), 598 + .reg_offset = llcc_v2_1_reg_offset, 599 + .edac_reg_offset = &llcc_v2_1_edac_reg_offset, 600 + .max_cap_shift = 14, 601 + .num_banks = 2, 3195 602 }, 3196 603 }; 3197 604 ··· 3220 585 { 3221 586 .sct_data = sc7180_data, 3222 587 .size = ARRAY_SIZE(sc7180_data), 3223 - .need_llcc_cfg = true, 3224 588 .reg_offset = llcc_v1_reg_offset, 3225 589 .edac_reg_offset = &llcc_v1_edac_reg_offset, 3226 590 }, ··· 3229 595 { 3230 596 .sct_data = sc7280_data, 3231 597 .size = ARRAY_SIZE(sc7280_data), 3232 - .need_llcc_cfg = true, 3233 598 .reg_offset = llcc_v1_reg_offset, 3234 599 .edac_reg_offset = &llcc_v1_edac_reg_offset, 3235 600 }, ··· 3238 605 { 3239 606 .sct_data = sc8180x_data, 3240 607 .size = ARRAY_SIZE(sc8180x_data), 3241 - .need_llcc_cfg = true, 3242 608 .reg_offset = llcc_v1_reg_offset, 3243 609 .edac_reg_offset = &llcc_v1_edac_reg_offset, 3244 610 }, ··· 3247 615 { 3248 616 .sct_data = sc8280xp_data, 3249 617 .size = ARRAY_SIZE(sc8280xp_data), 3250 - .need_llcc_cfg = 
true, 3251 618 .reg_offset = llcc_v1_reg_offset, 3252 619 .edac_reg_offset = &llcc_v1_edac_reg_offset, 3253 620 }, ··· 3256 625 { 3257 626 .sct_data = sdm845_data, 3258 627 .size = ARRAY_SIZE(sdm845_data), 3259 - .need_llcc_cfg = false, 628 + .skip_llcc_cfg = true, 3260 629 .reg_offset = llcc_v1_reg_offset, 3261 630 .edac_reg_offset = &llcc_v1_edac_reg_offset, 3262 631 .no_edac = true, ··· 3267 636 { 3268 637 .sct_data = sm6350_data, 3269 638 .size = ARRAY_SIZE(sm6350_data), 3270 - .need_llcc_cfg = true, 3271 639 .reg_offset = llcc_v1_reg_offset, 3272 640 .edac_reg_offset = &llcc_v1_edac_reg_offset, 3273 641 }, ··· 3276 646 { 3277 647 .sct_data = sm7150_data, 3278 648 .size = ARRAY_SIZE(sm7150_data), 3279 - .need_llcc_cfg = true, 3280 649 .reg_offset = llcc_v1_reg_offset, 3281 650 .edac_reg_offset = &llcc_v1_edac_reg_offset, 3282 651 }, ··· 3285 656 { 3286 657 .sct_data = sm8150_data, 3287 658 .size = ARRAY_SIZE(sm8150_data), 3288 - .need_llcc_cfg = true, 3289 659 .reg_offset = llcc_v1_reg_offset, 3290 660 .edac_reg_offset = &llcc_v1_edac_reg_offset, 3291 661 }, ··· 3294 666 { 3295 667 .sct_data = sm8250_data, 3296 668 .size = ARRAY_SIZE(sm8250_data), 3297 - .need_llcc_cfg = true, 3298 669 .reg_offset = llcc_v1_reg_offset, 3299 670 .edac_reg_offset = &llcc_v1_edac_reg_offset, 3300 671 }, ··· 3303 676 { 3304 677 .sct_data = sm8350_data, 3305 678 .size = ARRAY_SIZE(sm8350_data), 3306 - .need_llcc_cfg = true, 3307 679 .reg_offset = llcc_v1_reg_offset, 3308 680 .edac_reg_offset = &llcc_v1_edac_reg_offset, 3309 681 }, ··· 3312 686 { 3313 687 .sct_data = sm8450_data, 3314 688 .size = ARRAY_SIZE(sm8450_data), 3315 - .need_llcc_cfg = true, 3316 689 .reg_offset = llcc_v2_1_reg_offset, 3317 690 .edac_reg_offset = &llcc_v2_1_edac_reg_offset, 3318 691 }, ··· 3321 696 { 3322 697 .sct_data = sm8550_data, 3323 698 .size = ARRAY_SIZE(sm8550_data), 3324 - .need_llcc_cfg = true, 3325 699 .reg_offset = llcc_v2_1_reg_offset, 3326 700 .edac_reg_offset = &llcc_v2_1_edac_reg_offset, 3327 
701 }, ··· 3330 706 { 3331 707 .sct_data = sm8650_data, 3332 708 .size = ARRAY_SIZE(sm8650_data), 3333 - .need_llcc_cfg = true, 3334 709 .reg_offset = llcc_v2_1_reg_offset, 3335 710 .edac_reg_offset = &llcc_v2_1_edac_reg_offset, 3336 711 }, ··· 3339 716 { 3340 717 .sct_data = x1e80100_data, 3341 718 .size = ARRAY_SIZE(x1e80100_data), 3342 - .need_llcc_cfg = true, 3343 719 .reg_offset = llcc_v2_1_reg_offset, 3344 720 .edac_reg_offset = &llcc_v2_1_edac_reg_offset, 3345 721 .irq_configured = true, 3346 722 }, 723 + }; 724 + 725 + static const struct qcom_sct_config qcs615_cfgs = { 726 + .llcc_config = qcs615_cfg, 727 + .num_config = ARRAY_SIZE(qcs615_cfg), 728 + }; 729 + 730 + static const struct qcom_sct_config qcs8300_cfgs = { 731 + .llcc_config = qcs8300_cfg, 732 + .num_config = ARRAY_SIZE(qcs8300_cfg), 3347 733 }; 3348 734 3349 735 static const struct qcom_sct_config qdu1000_cfgs = { ··· 3363 731 static const struct qcom_sct_config sa8775p_cfgs = { 3364 732 .llcc_config = sa8775p_cfg, 3365 733 .num_config = ARRAY_SIZE(sa8775p_cfg), 734 + }; 735 + 736 + static const struct qcom_sct_config sar1130p_cfgs = { 737 + .llcc_config = sar1130p_cfg, 738 + .num_config = ARRAY_SIZE(sar1130p_cfg), 739 + }; 740 + 741 + static const struct qcom_sct_config sar2130p_cfgs = { 742 + .llcc_config = sar2130p_cfg, 743 + .num_config = ARRAY_SIZE(sar2130p_cfg), 3366 744 }; 3367 745 3368 746 static const struct qcom_sct_config sc7180_cfgs = { ··· 3682 1040 */ 3683 1041 max_cap_cacheline = max_cap_cacheline / drv_data->num_banks; 3684 1042 max_cap_cacheline >>= CACHE_LINE_SIZE_SHIFT; 3685 - attr1_val |= max_cap_cacheline << ATTR1_MAX_CAP_SHIFT; 1043 + if (cfg->max_cap_shift) 1044 + attr1_val |= max_cap_cacheline << cfg->max_cap_shift; 1045 + else 1046 + attr1_val |= max_cap_cacheline << ATTR1_MAX_CAP_SHIFT; 3686 1047 3687 1048 attr1_cfg = LLCC_TRP_ATTR1_CFGn(config->slice_id); 3688 1049 ··· 3714 1069 return ret; 3715 1070 } 3716 1071 3717 - if (cfg->need_llcc_cfg) { 1072 + /* At least 
SDM845 disallows non-secure writes to these registers */ 1073 + if (!cfg->skip_llcc_cfg) { 3718 1074 u32 disable_cap_alloc, retain_pc; 3719 1075 3720 1076 disable_cap_alloc = config->dis_cap_alloc << config->slice_id; ··· 3923 1277 goto err; 3924 1278 cfg = &cfgs->llcc_config[cfg_index]; 3925 1279 3926 - ret = regmap_read(regmap, cfg->reg_offset[LLCC_COMMON_STATUS0], &num_banks); 3927 - if (ret) 3928 - goto err; 1280 + if (cfg->num_banks) { 1281 + num_banks = cfg->num_banks; 1282 + } else { 1283 + ret = regmap_read(regmap, cfg->reg_offset[LLCC_COMMON_STATUS0], &num_banks); 1284 + if (ret) 1285 + goto err; 3929 1286 3930 - num_banks &= LLCC_LB_CNT_MASK; 3931 - num_banks >>= LLCC_LB_CNT_SHIFT; 1287 + num_banks &= LLCC_LB_CNT_MASK; 1288 + num_banks >>= LLCC_LB_CNT_SHIFT; 1289 + } 1290 + 3932 1291 drv_data->num_banks = num_banks; 3933 1292 3934 1293 drv_data->regmaps = devm_kcalloc(dev, num_banks, sizeof(*drv_data->regmaps), GFP_KERNEL); ··· 4029 1378 } 4030 1379 4031 1380 static const struct of_device_id qcom_llcc_of_match[] = { 1381 + { .compatible = "qcom,qcs615-llcc", .data = &qcs615_cfgs}, 1382 + { .compatible = "qcom,qcs8300-llcc", .data = &qcs8300_cfgs}, 4032 1383 { .compatible = "qcom,qdu1000-llcc", .data = &qdu1000_cfgs}, 4033 1384 { .compatible = "qcom,sa8775p-llcc", .data = &sa8775p_cfgs }, 1385 + { .compatible = "qcom,sar1130p-llcc", .data = &sar1130p_cfgs }, 1386 + { .compatible = "qcom,sar2130p-llcc", .data = &sar2130p_cfgs }, 4034 1387 { .compatible = "qcom,sc7180-llcc", .data = &sc7180_cfgs }, 4035 1388 { .compatible = "qcom,sc7280-llcc", .data = &sc7280_cfgs }, 4036 1389 { .compatible = "qcom,sc8180x-llcc", .data = &sc8180x_cfgs }, ··· 4059 1404 .of_match_table = qcom_llcc_of_match, 4060 1405 }, 4061 1406 .probe = qcom_llcc_probe, 4062 - .remove_new = qcom_llcc_remove, 1407 + .remove = qcom_llcc_remove, 4063 1408 }; 4064 1409 module_platform_driver(qcom_llcc_driver); 4065 1410
+1 -1
drivers/soc/qcom/ocmem.c
··· 439 439 440 440 static struct platform_driver ocmem_driver = { 441 441 .probe = ocmem_dev_probe, 442 - .remove_new = ocmem_dev_remove, 442 + .remove = ocmem_dev_remove, 443 443 .driver = { 444 444 .name = "ocmem", 445 445 .of_match_table = ocmem_of_match,
+1 -1
drivers/soc/qcom/pmic_glink.c
··· 399 399 400 400 static struct platform_driver pmic_glink_driver = { 401 401 .probe = pmic_glink_probe, 402 - .remove_new = pmic_glink_remove, 402 + .remove = pmic_glink_remove, 403 403 .driver = { 404 404 .name = "qcom_pmic_glink", 405 405 .of_match_table = pmic_glink_of_match,
+2 -1
drivers/soc/qcom/qcom-geni-se.c
··· 585 585 586 586 for (i = 0; i < MAX_CLK_PERF_LEVEL; i++) { 587 587 freq = clk_round_rate(se->clk, freq + 1); 588 - if (freq <= 0 || freq == se->clk_perf_tbl[i - 1]) 588 + if (freq <= 0 || 589 + (i > 0 && freq == se->clk_perf_tbl[i - 1])) 589 590 break; 590 591 se->clk_perf_tbl[i] = freq; 591 592 }
+8 -14
drivers/soc/qcom/qcom-pbs.c
··· 84 84 if (IS_ERR_OR_NULL(pbs)) 85 85 return -EINVAL; 86 86 87 - mutex_lock(&pbs->lock); 87 + guard(mutex)(&pbs->lock); 88 88 ret = regmap_read(pbs->regmap, pbs->base + PBS_CLIENT_SCRATCH2, &val); 89 89 if (ret < 0) 90 - goto out; 90 + return ret; 91 91 92 92 if (val == PBS_CLIENT_SCRATCH2_ERROR) { 93 93 /* PBS error - clear SCRATCH2 register */ 94 94 ret = regmap_write(pbs->regmap, pbs->base + PBS_CLIENT_SCRATCH2, 0); 95 95 if (ret < 0) 96 - goto out; 96 + return ret; 97 97 } 98 98 99 99 for (bit_pos = 0; bit_pos < 8; bit_pos++) { ··· 104 104 ret = regmap_update_bits(pbs->regmap, pbs->base + PBS_CLIENT_SCRATCH2, 105 105 BIT(bit_pos), 0); 106 106 if (ret < 0) 107 - goto out_clear_scratch1; 107 + break; 108 108 109 109 /* Set the PBS sequence bit position */ 110 110 ret = regmap_update_bits(pbs->regmap, pbs->base + PBS_CLIENT_SCRATCH1, 111 111 BIT(bit_pos), BIT(bit_pos)); 112 112 if (ret < 0) 113 - goto out_clear_scratch1; 113 + break; 114 114 115 115 /* Initiate the SW trigger */ 116 116 ret = regmap_update_bits(pbs->regmap, pbs->base + PBS_CLIENT_TRIG_CTL, 117 117 PBS_CLIENT_SW_TRIG_BIT, PBS_CLIENT_SW_TRIG_BIT); 118 118 if (ret < 0) 119 - goto out_clear_scratch1; 119 + break; 120 120 121 121 ret = qcom_pbs_wait_for_ack(pbs, bit_pos); 122 122 if (ret < 0) 123 - goto out_clear_scratch1; 123 + break; 124 124 125 125 /* Clear the PBS sequence bit position */ 126 126 regmap_update_bits(pbs->regmap, pbs->base + PBS_CLIENT_SCRATCH1, BIT(bit_pos), 0); 127 127 regmap_update_bits(pbs->regmap, pbs->base + PBS_CLIENT_SCRATCH2, BIT(bit_pos), 0); 128 128 } 129 129 130 - out_clear_scratch1: 131 130 /* Clear all the requested bitmap */ 132 - ret = regmap_update_bits(pbs->regmap, pbs->base + PBS_CLIENT_SCRATCH1, bitmap, 0); 133 - 134 - out: 135 - mutex_unlock(&pbs->lock); 136 - 137 - return ret; 131 + return regmap_update_bits(pbs->regmap, pbs->base + PBS_CLIENT_SCRATCH1, bitmap, 0); 138 132 } 139 133 EXPORT_SYMBOL_GPL(qcom_pbs_trigger_event); 140 134
+1 -1
drivers/soc/qcom/qcom_aoss.c
··· 664 664 .suppress_bind_attrs = true, 665 665 }, 666 666 .probe = qmp_probe, 667 - .remove_new = qmp_remove, 667 + .remove = qmp_remove, 668 668 }; 669 669 module_platform_driver(qmp_driver); 670 670
+1 -1
drivers/soc/qcom/qcom_gsbi.c
··· 232 232 .of_match_table = gsbi_dt_match, 233 233 }, 234 234 .probe = gsbi_probe, 235 - .remove_new = gsbi_remove, 235 + .remove = gsbi_remove, 236 236 }; 237 237 238 238 module_platform_driver(gsbi_driver);
+1
drivers/soc/qcom/qcom_pd_mapper.c
··· 540 540 { .compatible = "qcom,msm8996", .data = msm8996_domains, }, 541 541 { .compatible = "qcom,msm8998", .data = msm8998_domains, }, 542 542 { .compatible = "qcom,qcm2290", .data = qcm2290_domains, }, 543 + { .compatible = "qcom,qcm6490", .data = sc7280_domains, }, 543 544 { .compatible = "qcom,qcs404", .data = qcs404_domains, }, 544 545 { .compatible = "qcom,sc7180", .data = sc7180_domains, }, 545 546 { .compatible = "qcom,sc7280", .data = sc7280_domains, },
+1 -1
drivers/soc/qcom/qcom_stats.c
··· 274 274 275 275 static struct platform_driver qcom_stats = { 276 276 .probe = qcom_stats_probe, 277 - .remove_new = qcom_stats_remove, 277 + .remove = qcom_stats_remove, 278 278 .driver = { 279 279 .name = "qcom_stats", 280 280 .of_match_table = qcom_stats_table,
+1 -1
drivers/soc/qcom/qmi_interface.c
··· 195 195 * qmi_add_lookup() - register a new lookup with the name service 196 196 * @qmi: qmi handle 197 197 * @service: service id of the request 198 - * @instance: instance id of the request 199 198 * @version: version number of the request 199 + * @instance: instance id of the request 200 200 * 201 201 * Registering a lookup query with the name server will cause the name server 202 202 * to send NEW_SERVER and DEL_SERVER control messages to this socket as
+2 -2
drivers/soc/qcom/ramp_controller.c
··· 331 331 .of_match_table = qcom_ramp_controller_match_table, 332 332 .suppress_bind_attrs = true, 333 333 }, 334 - .probe = qcom_ramp_controller_probe, 335 - .remove_new = qcom_ramp_controller_remove, 334 + .probe = qcom_ramp_controller_probe, 335 + .remove = qcom_ramp_controller_remove, 336 336 }; 337 337 338 338 static int __init qcom_ramp_controller_init(void)
+1 -1
drivers/soc/qcom/rmtfs_mem.c
··· 315 315 316 316 static struct platform_driver qcom_rmtfs_mem_driver = { 317 317 .probe = qcom_rmtfs_mem_probe, 318 - .remove_new = qcom_rmtfs_mem_remove, 318 + .remove = qcom_rmtfs_mem_remove, 319 319 .driver = { 320 320 .name = "qcom_rmtfs_mem", 321 321 .of_match_table = qcom_rmtfs_mem_of_match,
+1 -1
drivers/soc/qcom/rpm-proc.c
··· 53 53 54 54 static struct platform_driver rpm_proc_driver = { 55 55 .probe = rpm_proc_probe, 56 - .remove_new = rpm_proc_remove, 56 + .remove = rpm_proc_remove, 57 57 .driver = { 58 58 .name = "qcom-rpm-proc", 59 59 .of_match_table = rpm_proc_of_match,
+1 -1
drivers/soc/qcom/rpm_master_stats.c
··· 155 155 156 156 static struct platform_driver master_stats_driver = { 157 157 .probe = master_stats_probe, 158 - .remove_new = master_stats_remove, 158 + .remove = master_stats_remove, 159 159 .driver = { 160 160 .name = "qcom_rpm_master_stats", 161 161 .of_match_table = rpm_master_table,
+3 -6
drivers/soc/qcom/rpmh-rsc.c
··· 1045 1045 * do. To avoid adding this check to our children we'll do it now. 1046 1046 */ 1047 1047 ret = cmd_db_ready(); 1048 - if (ret) { 1049 - if (ret != -EPROBE_DEFER) 1050 - dev_err(&pdev->dev, "Command DB not available (%d)\n", 1051 - ret); 1052 - return ret; 1053 - } 1048 + if (ret) 1049 + return dev_err_probe(&pdev->dev, ret, 1050 + "Command DB not available\n"); 1054 1051 1055 1052 drv = devm_kzalloc(&pdev->dev, sizeof(*drv), GFP_KERNEL); 1056 1053 if (!drv)
+11 -7
drivers/soc/qcom/smem.c
··· 499 499 * 500 500 * Allocate space for a given smem item of size @size, given that the item is 501 501 * not yet allocated. 502 + * 503 + * Return: 0 on success, negative errno on failure. 502 504 */ 503 505 int qcom_smem_alloc(unsigned host, unsigned item, size_t size) 504 506 { ··· 679 677 * 680 678 * Looks up smem item and returns pointer to it. Size of smem 681 679 * item is returned in @size. 680 + * 681 + * Return: a pointer to an SMEM item on success, ERR_PTR() on failure. 682 682 */ 683 683 void *qcom_smem_get(unsigned host, unsigned item, size_t *size) 684 684 { ··· 713 709 * 714 710 * To be used by smem clients as a quick way to determine if any new 715 711 * allocations has been made. 712 + * 713 + * Return: number of available bytes on success, negative errno on failure. 716 714 */ 717 715 int qcom_smem_get_free_space(unsigned host) 718 716 { ··· 764 758 * with an smem item pointer (previously returned by qcom_smem_get() 765 759 * @p: the virtual address to convert 766 760 * 767 - * Returns 0 if the pointer provided is not within any smem region. 761 + * Return: physical address of the SMEM item (if found), 0 otherwise 768 762 */ 769 763 phys_addr_t qcom_smem_virt_to_phys(void *p) 770 764 { ··· 1186 1180 } 1187 1181 1188 1182 hwlock_id = of_hwspin_lock_get_id(pdev->dev.of_node, 0); 1189 - if (hwlock_id < 0) { 1190 - if (hwlock_id != -EPROBE_DEFER) 1191 - dev_err(&pdev->dev, "failed to retrieve hwlock\n"); 1192 - return hwlock_id; 1193 - } 1183 + if (hwlock_id < 0) 1184 + return dev_err_probe(&pdev->dev, hwlock_id, 1185 + "failed to retrieve hwlock\n"); 1194 1186 1195 1187 smem->hwlock = hwspin_lock_request_specific(hwlock_id); 1196 1188 if (!smem->hwlock) ··· 1255 1251 1256 1252 static struct platform_driver qcom_smem_driver = { 1257 1253 .probe = qcom_smem_probe, 1258 - .remove_new = qcom_smem_remove, 1254 + .remove = qcom_smem_remove, 1259 1255 .driver = { 1260 1256 .name = "qcom-smem", 1261 1257 .of_match_table = qcom_smem_of_match,
+4 -8
drivers/soc/qcom/smem_state.c
··· 3 3 * Copyright (c) 2015, Sony Mobile Communications Inc. 4 4 * Copyright (c) 2012-2013, The Linux Foundation. All rights reserved. 5 5 */ 6 + #include <linux/cleanup.h> 6 7 #include <linux/device.h> 7 8 #include <linux/list.h> 8 9 #include <linux/module.h> ··· 61 60 { 62 61 struct qcom_smem_state *state; 63 62 64 - mutex_lock(&list_lock); 63 + guard(mutex)(&list_lock); 65 64 66 65 list_for_each_entry(state, &smem_states, list) { 67 66 if (state->of_node == np) { 68 67 kref_get(&state->refcount); 69 - goto unlock; 68 + return state; 70 69 } 71 70 } 72 - state = ERR_PTR(-EPROBE_DEFER); 73 - 74 - unlock: 75 - mutex_unlock(&list_lock); 76 - 77 - return state; 71 + return ERR_PTR(-EPROBE_DEFER); 78 72 } 79 73 80 74 /**
+4 -7
drivers/soc/qcom/smp2p.c
··· 467 467 int ret; 468 468 469 469 ret = qcom_smem_alloc(pid, smem_id, sizeof(*out)); 470 - if (ret < 0 && ret != -EEXIST) { 471 - if (ret != -EPROBE_DEFER) 472 - dev_err(smp2p->dev, 473 - "unable to allocate local smp2p item\n"); 474 - return ret; 475 - } 470 + if (ret < 0 && ret != -EEXIST) 471 + return dev_err_probe(smp2p->dev, ret, 472 + "unable to allocate local smp2p item\n"); 476 473 477 474 out = qcom_smem_get(pid, smem_id, NULL); 478 475 if (IS_ERR(out)) { ··· 695 698 696 699 static struct platform_driver qcom_smp2p_driver = { 697 700 .probe = qcom_smp2p_probe, 698 - .remove_new = qcom_smp2p_remove, 701 + .remove = qcom_smp2p_remove, 699 702 .driver = { 700 703 .name = "qcom_smp2p", 701 704 .of_match_table = qcom_smp2p_of_match,
+3 -3
drivers/soc/qcom/smsm.c
··· 682 682 683 683 static struct platform_driver qcom_smsm_driver = { 684 684 .probe = qcom_smsm_probe, 685 - .remove_new = qcom_smsm_remove, 686 - .driver = { 687 - .name = "qcom-smsm", 685 + .remove = qcom_smsm_remove, 686 + .driver = { 687 + .name = "qcom-smsm", 688 688 .of_match_table = qcom_smsm_of_match, 689 689 }, 690 690 };
+8 -1
drivers/soc/qcom/socinfo.c
··· 422 422 { qcom_board_id(IPQ9510) }, 423 423 { qcom_board_id(QRB4210) }, 424 424 { qcom_board_id(QRB2210) }, 425 + { qcom_board_id(SAR2130P) }, 425 426 { qcom_board_id(SM8475) }, 426 427 { qcom_board_id(SM8475P) }, 428 + { qcom_board_id(SA8255P) }, 427 429 { qcom_board_id(SA8775P) }, 428 430 { qcom_board_id(QRU1000) }, 429 431 { qcom_board_id(SM8475_2) }, ··· 433 431 { qcom_board_id(X1E80100) }, 434 432 { qcom_board_id(SM8650) }, 435 433 { qcom_board_id(SM4450) }, 434 + { qcom_board_id(SAR1130P) }, 436 435 { qcom_board_id(QDU1010) }, 437 436 { qcom_board_id(QRU1032) }, 438 437 { qcom_board_id(QRU1052) }, ··· 446 443 { qcom_board_id(QCM8550) }, 447 444 { qcom_board_id(IPQ5300) }, 448 445 { qcom_board_id(IPQ5321) }, 446 + { qcom_board_id(IPQ5424) }, 447 + { qcom_board_id(IPQ5404) }, 448 + { qcom_board_id(QCS9100) }, 449 449 { qcom_board_id(QCS8300) }, 450 450 { qcom_board_id(QCS8275) }, 451 + { qcom_board_id(QCS615) }, 451 452 }; 452 453 453 454 static const char *socinfo_machine(struct device *dev, unsigned int id) ··· 829 822 830 823 static struct platform_driver qcom_socinfo_driver = { 831 824 .probe = qcom_socinfo_probe, 832 - .remove_new = qcom_socinfo_remove, 825 + .remove = qcom_socinfo_remove, 833 826 .driver = { 834 827 .name = "qcom-socinfo", 835 828 },
+4 -4
drivers/soc/rockchip/io-domain.c
··· 742 742 } 743 743 744 744 static struct platform_driver rockchip_iodomain_driver = { 745 - .probe = rockchip_iodomain_probe, 746 - .remove_new = rockchip_iodomain_remove, 747 - .driver = { 748 - .name = "rockchip-iodomain", 745 + .probe = rockchip_iodomain_probe, 746 + .remove = rockchip_iodomain_remove, 747 + .driver = { 748 + .name = "rockchip-iodomain", 749 749 .of_match_table = rockchip_iodomain_match, 750 750 }, 751 751 };
+2 -2
drivers/soc/samsung/exynos-chipid.c
··· 198 198 .name = "exynos-chipid", 199 199 .of_match_table = exynos_chipid_of_device_ids, 200 200 }, 201 - .probe = exynos_chipid_probe, 202 - .remove_new = exynos_chipid_remove, 201 + .probe = exynos_chipid_probe, 202 + .remove = exynos_chipid_remove, 203 203 }; 204 204 module_platform_driver(exynos_chipid_driver); 205 205
+1 -1
drivers/soc/tegra/cbb/tegra194-cbb.c
··· 2330 2330 2331 2331 static struct platform_driver tegra194_cbb_driver = { 2332 2332 .probe = tegra194_cbb_probe, 2333 - .remove_new = tegra194_cbb_remove, 2333 + .remove = tegra194_cbb_remove, 2334 2334 .driver = { 2335 2335 .name = "tegra194-cbb", 2336 2336 .of_match_table = of_match_ptr(tegra194_cbb_match),
+1 -1
drivers/soc/ti/k3-ringacc.c
··· 1562 1562 1563 1563 static struct platform_driver k3_ringacc_driver = { 1564 1564 .probe = k3_ringacc_probe, 1565 - .remove_new = k3_ringacc_remove, 1565 + .remove = k3_ringacc_remove, 1566 1566 .driver = { 1567 1567 .name = "k3-ringacc", 1568 1568 .of_match_table = k3_ringacc_of_match,
+2 -2
drivers/soc/ti/knav_dma.c
··· 783 783 784 784 static struct platform_driver knav_dma_driver = { 785 785 .probe = knav_dma_probe, 786 - .remove_new = knav_dma_remove, 787 - .driver = { 786 + .remove = knav_dma_remove, 787 + .driver = { 788 788 .name = "keystone-navigator-dma", 789 789 .of_match_table = of_match, 790 790 },
+3 -5
drivers/soc/ti/knav_qmss_queue.c
··· 119 119 120 120 if (range->flags & RANGE_HAS_IRQ) { 121 121 irq = range->irqs[queue].irq; 122 - ret = request_irq(irq, knav_queue_int_handler, 0, 123 - inst->irq_name, inst); 122 + ret = request_irq(irq, knav_queue_int_handler, IRQF_NO_AUTOEN, 123 + inst->irq_name, inst); 124 124 if (ret) 125 125 return ret; 126 - disable_irq(irq); 127 126 if (range->irqs[queue].cpu_mask) { 128 127 ret = irq_set_affinity_hint(irq, range->irqs[queue].cpu_mask); 129 128 if (ret) { ··· 722 723 if (!desc) { 723 724 dev_dbg(pool->kdev->dev, 724 725 "couldn't unmap desc, continuing\n"); 725 - continue; 726 726 } 727 727 } 728 728 WARN_ON(i != pool->num_desc); ··· 1892 1894 1893 1895 static struct platform_driver keystone_qmss_driver = { 1894 1896 .probe = knav_queue_probe, 1895 - .remove_new = knav_queue_remove, 1897 + .remove = knav_queue_remove, 1896 1898 .driver = { 1897 1899 .name = "keystone-navigator-qmss", 1898 1900 .of_match_table = keystone_qmss_of_match,
+1 -1
drivers/soc/ti/pm33xx.c
··· 591 591 .name = "pm33xx", 592 592 }, 593 593 .probe = am33xx_pm_probe, 594 - .remove_new = am33xx_pm_remove, 594 + .remove = am33xx_pm_remove, 595 595 }; 596 596 module_platform_driver(am33xx_pm_driver); 597 597
+2 -2
drivers/soc/ti/pruss.c
··· 593 593 .name = "pruss", 594 594 .of_match_table = pruss_of_match, 595 595 }, 596 - .probe = pruss_probe, 597 - .remove_new = pruss_remove, 596 + .probe = pruss_probe, 597 + .remove = pruss_remove, 598 598 }; 599 599 module_platform_driver(pruss_driver); 600 600
+3 -3
drivers/soc/ti/smartreflex.c
··· 202 202 203 203 if (sr_class->notify && sr_class->notify_flags && sr_info->irq) { 204 204 ret = devm_request_irq(&sr_info->pdev->dev, sr_info->irq, 205 - sr_interrupt, 0, sr_info->name, sr_info); 205 + sr_interrupt, IRQF_NO_AUTOEN, 206 + sr_info->name, sr_info); 206 207 if (ret) 207 208 goto error; 208 - disable_irq(sr_info->irq); 209 209 } 210 210 211 211 return ret; ··· 969 969 970 970 static struct platform_driver smartreflex_driver = { 971 971 .probe = omap_sr_probe, 972 - .remove_new = omap_sr_remove, 972 + .remove = omap_sr_remove, 973 973 .shutdown = omap_sr_shutdown, 974 974 .driver = { 975 975 .name = DRIVER_NAME,
+1 -1
drivers/soc/ti/wkup_m3_ipc.c
··· 755 755 756 756 static struct platform_driver wkup_m3_ipc_driver = { 757 757 .probe = wkup_m3_ipc_probe, 758 - .remove_new = wkup_m3_ipc_remove, 758 + .remove = wkup_m3_ipc_remove, 759 759 .driver = { 760 760 .name = "wkup_m3_ipc", 761 761 .of_match_table = wkup_m3_ipc_of_match,
+4 -2
drivers/soc/xilinx/xlnx_event_manager.c
··· 188 188 INIT_LIST_HEAD(&eve_data->cb_list_head); 189 189 190 190 cb_data = kmalloc(sizeof(*cb_data), GFP_KERNEL); 191 - if (!cb_data) 191 + if (!cb_data) { 192 + kfree(eve_data); 192 193 return -ENOMEM; 194 + } 193 195 cb_data->eve_cb = cb_fun; 194 196 cb_data->agent_data = data; 195 197 ··· 711 709 712 710 static struct platform_driver xlnx_event_manager_driver = { 713 711 .probe = xlnx_event_manager_probe, 714 - .remove_new = xlnx_event_manager_remove, 712 + .remove = xlnx_event_manager_remove, 715 713 .driver = { 716 714 .name = "xlnx_event_manager", 717 715 },
+1 -1
drivers/soc/xilinx/zynqmp_power.c
··· 408 408 409 409 static struct platform_driver zynqmp_pm_platform_driver = { 410 410 .probe = zynqmp_pm_probe, 411 - .remove_new = zynqmp_pm_remove, 411 + .remove = zynqmp_pm_remove, 412 412 .driver = { 413 413 .name = "zynqmp_power", 414 414 .of_match_table = pm_of_match,
+2 -2
drivers/thermal/ti-soc-thermal/dra752-bandgap.h
··· 74 74 /** 75 75 * Register bitfields for DRA752 76 76 * 77 - * All the macros bellow define the required bits for 77 + * All the macros below define the required bits for 78 78 * controlling temperature on DRA752. Bit defines are 79 79 * grouped by register. 80 80 */ ··· 125 125 /** 126 126 * Temperature limits and thresholds for DRA752 127 127 * 128 - * All the macros bellow are definitions for handling the 128 + * All the macros below are definitions for handling the 129 129 * ADC conversions and representation of temperature limits 130 130 * and thresholds for DRA752. Definitions are grouped 131 131 * by temperature domain.
+4 -4
drivers/thermal/ti-soc-thermal/omap4xxx-bandgap.h
··· 32 32 /** 33 33 * Register and bit definitions for OMAP4430 34 34 * 35 - * All the macros bellow define the required bits for 35 + * All the macros below define the required bits for 36 36 * controlling temperature on OMAP4430. Bit defines are 37 37 * grouped by register. 38 38 */ ··· 48 48 /** 49 49 * Temperature limits and thresholds for OMAP4430 50 50 * 51 - * All the macros bellow are definitions for handling the 51 + * All the macros below are definitions for handling the 52 52 * ADC conversions and representation of temperature limits 53 53 * and thresholds for OMAP4430. 54 54 */ ··· 102 102 /** 103 103 * Register bitfields for OMAP4460 104 104 * 105 - * All the macros bellow define the required bits for 105 + * All the macros below define the required bits for 106 106 * controlling temperature on OMAP4460. Bit defines are 107 107 * grouped by register. 108 108 */ ··· 135 135 /** 136 136 * Temperature limits and thresholds for OMAP4460 137 137 * 138 - * All the macros bellow are definitions for handling the 138 + * All the macros below are definitions for handling the 139 139 * ADC conversions and representation of temperature limits 140 140 * and thresholds for OMAP4460. 141 141 */
+2 -2
drivers/thermal/ti-soc-thermal/omap5xxx-bandgap.h
··· 56 56 /** 57 57 * Register bitfields for OMAP5430 58 58 * 59 - * All the macros bellow define the required bits for 59 + * All the macros below define the required bits for 60 60 * controlling temperature on OMAP5430. Bit defines are 61 61 * grouped by register. 62 62 */ ··· 101 101 /** 102 102 * Temperature limits and thresholds for OMAP5430 103 103 * 104 - * All the macros bellow are definitions for handling the 104 + * All the macros below are definitions for handling the 105 105 * ADC conversions and representation of temperature limits 106 106 * and thresholds for OMAP5430. Definitions are grouped 107 107 * by temperature domain.
+7
include/dt-bindings/arm/qcom,ids.h
··· 255 255 #define QCOM_ID_IPQ9510 521 256 256 #define QCOM_ID_QRB4210 523 257 257 #define QCOM_ID_QRB2210 524 258 + #define QCOM_ID_SAR2130P 525 258 259 #define QCOM_ID_SM8475 530 259 260 #define QCOM_ID_SM8475P 531 261 + #define QCOM_ID_SA8255P 532 260 262 #define QCOM_ID_SA8775P 534 261 263 #define QCOM_ID_QRU1000 539 262 264 #define QCOM_ID_SM8475_2 540 ··· 266 264 #define QCOM_ID_X1E80100 555 267 265 #define QCOM_ID_SM8650 557 268 266 #define QCOM_ID_SM4450 568 267 + #define QCOM_ID_SAR1130P 579 269 268 #define QCOM_ID_QDU1010 587 270 269 #define QCOM_ID_QRU1032 588 271 270 #define QCOM_ID_QRU1052 589 ··· 279 276 #define QCOM_ID_QCM8550 604 280 277 #define QCOM_ID_IPQ5300 624 281 278 #define QCOM_ID_IPQ5321 650 279 + #define QCOM_ID_IPQ5424 651 280 + #define QCOM_ID_IPQ5404 671 281 + #define QCOM_ID_QCS9100 667 282 282 #define QCOM_ID_QCS8300 674 283 283 #define QCOM_ID_QCS8275 675 284 + #define QCOM_ID_QCS615 680 284 285 285 286 /* 286 287 * The board type and revision information, used by Qualcomm bootloaders and
+2
include/linux/firmware/qcom/qcom_scm.h
··· 85 85 86 86 bool qcom_scm_restore_sec_cfg_available(void); 87 87 int qcom_scm_restore_sec_cfg(u32 device_id, u32 spare); 88 + int qcom_scm_set_gpu_smmu_aperture(unsigned int context_bank); 89 + bool qcom_scm_set_gpu_smmu_aperture_is_available(void); 88 90 int qcom_scm_iommu_secure_ptbl_size(u32 spare, size_t *size); 89 91 int qcom_scm_iommu_secure_ptbl_init(u64 addr, u32 size, u32 spare); 90 92 int qcom_scm_iommu_set_cp_pool_size(u32 spare, u32 size);
+32 -7
include/linux/firmware/xlnx-zynqmp.h
··· 3 3 * Xilinx Zynq MPSoC Firmware layer 4 4 * 5 5 * Copyright (C) 2014-2021 Xilinx 6 - * Copyright (C) 2022 - 2023, Advanced Micro Devices, Inc. 6 + * Copyright (C) 2022 - 2024, Advanced Micro Devices, Inc. 7 7 * 8 8 * Michal Simek <michal.simek@amd.com> 9 9 * Davorin Mista <davorin.mista@aggios.com> ··· 32 32 /* SMC SIP service Call Function Identifier Prefix */ 33 33 #define PM_SIP_SVC 0xC2000000 34 34 35 + /* SMC function ID to get SiP SVC version */ 36 + #define GET_SIP_SVC_VERSION (0x8200ff03U) 37 + 38 + /* SiP Service Calls version numbers */ 39 + #define SIP_SVC_VERSION_MAJOR (0U) 40 + #define SIP_SVC_VERSION_MINOR (2U) 41 + 42 + #define SIP_SVC_PASSTHROUGH_VERSION ((SIP_SVC_VERSION_MAJOR << 16) | \ 43 + SIP_SVC_VERSION_MINOR) 44 + 45 + /* Fixed ID for FW specific APIs */ 46 + #define PASS_THROUGH_FW_CMD_ID GENMASK(11, 0) 47 + 35 48 /* PM API versions */ 36 49 #define PM_API_VERSION_1 1 37 50 #define PM_API_VERSION_2 2 ··· 64 51 65 52 #define API_ID_MASK GENMASK(7, 0) 66 53 #define MODULE_ID_MASK GENMASK(11, 8) 54 + #define PLM_MODULE_ID_MASK GENMASK(15, 8) 67 55 68 56 /* Firmware feature check version mask */ 69 57 #define FIRMWARE_VERSION_MASK 0xFFFFU ··· 76 62 #define GET_CALLBACK_DATA 0xa01 77 63 78 64 /* Number of 32bits values in payload */ 79 - #define PAYLOAD_ARG_CNT 4U 65 + #define PAYLOAD_ARG_CNT 7U 66 + 67 + /* Number of 64bits arguments for SMC call */ 68 + #define SMC_ARG_CNT_64 8U 69 + 70 + /* Number of 32bits arguments for SMC call */ 71 + #define SMC_ARG_CNT_32 13U 80 72 81 73 /* Number of arguments for a callback */ 82 74 #define CB_ARG_CNT 4 ··· 150 130 151 131 enum pm_module_id { 152 132 PM_MODULE_ID = 0x0, 133 + XPM_MODULE_ID = 0x2, 153 134 XSEM_MODULE_ID = 0x3, 154 135 TF_A_MODULE_ID = 0xa, 155 136 }; ··· 239 218 /* Runtime feature configuration */ 240 219 IOCTL_SET_FEATURE_CONFIG = 26, 241 220 IOCTL_GET_FEATURE_CONFIG = 27, 221 + /* IOCTL for Secure Read/Write Interface */ 222 + IOCTL_READ_REG = 28, 242 223 /* Dynamic SD/GEM 
configuration */ 243 224 IOCTL_SET_SD_CONFIG = 30, 244 225 IOCTL_SET_GEM_CONFIG = 31, 226 + /* IOCTL to get default/current QoS */ 227 + IOCTL_GET_QOS = 34, 245 228 }; 246 229 247 230 enum pm_query_id { ··· 558 533 }; 559 534 560 535 int zynqmp_pm_invoke_fn(u32 pm_api_id, u32 *ret_payload, u32 num_args, ...); 536 + int zynqmp_pm_invoke_fw_fn(u32 pm_api_id, u32 *ret_payload, u32 num_args, ...); 561 537 562 538 #if IS_REACHABLE(CONFIG_ZYNQMP_FIRMWARE) 563 539 int zynqmp_pm_get_api_version(u32 *version); ··· 579 553 int zynqmp_pm_set_sd_tapdelay(u32 node_id, u32 type, u32 value); 580 554 int zynqmp_pm_sd_dll_reset(u32 node_id, u32 type); 581 555 int zynqmp_pm_ospi_mux_select(u32 dev_id, u32 select); 582 - int zynqmp_pm_reset_assert(const enum zynqmp_pm_reset reset, 556 + int zynqmp_pm_reset_assert(const u32 reset, 583 557 const enum zynqmp_pm_reset_action assert_flag); 584 - int zynqmp_pm_reset_get_status(const enum zynqmp_pm_reset reset, u32 *status); 558 + int zynqmp_pm_reset_get_status(const u32 reset, u32 *status); 585 559 unsigned int zynqmp_pm_bootmode_read(u32 *ps_mode); 586 560 int zynqmp_pm_bootmode_write(u32 ps_mode); 587 561 int zynqmp_pm_init_finalize(void); ··· 724 698 return -ENODEV; 725 699 } 726 700 727 - static inline int zynqmp_pm_reset_assert(const enum zynqmp_pm_reset reset, 701 + static inline int zynqmp_pm_reset_assert(const u32 reset, 728 702 const enum zynqmp_pm_reset_action assert_flag) 729 703 { 730 704 return -ENODEV; 731 705 } 732 706 733 - static inline int zynqmp_pm_reset_get_status(const enum zynqmp_pm_reset reset, 734 - u32 *status) 707 + static inline int zynqmp_pm_reset_get_status(const u32 reset, u32 *status) 735 708 { 736 709 return -ENODEV; 737 710 }
+212 -62
include/linux/reset.h
··· 25 25 struct reset_control *rstc; 26 26 }; 27 27 28 + #define RESET_CONTROL_FLAGS_BIT_SHARED BIT(0) /* not exclusive */ 29 + #define RESET_CONTROL_FLAGS_BIT_OPTIONAL BIT(1) 30 + #define RESET_CONTROL_FLAGS_BIT_ACQUIRED BIT(2) /* iff exclusive, not released */ 31 + #define RESET_CONTROL_FLAGS_BIT_DEASSERTED BIT(3) 32 + 33 + /** 34 + * enum reset_control_flags - Flags that can be passed to the reset_control_get functions 35 + * to determine the type of reset control. 36 + * These values cannot be OR'd. 37 + * 38 + * @RESET_CONTROL_EXCLUSIVE: exclusive, acquired, 39 + * @RESET_CONTROL_EXCLUSIVE_DEASSERTED: exclusive, acquired, deasserted 40 + * @RESET_CONTROL_EXCLUSIVE_RELEASED: exclusive, released, 41 + * @RESET_CONTROL_SHARED: shared 42 + * @RESET_CONTROL_SHARED_DEASSERTED: shared, deasserted 43 + * @RESET_CONTROL_OPTIONAL_EXCLUSIVE: optional, exclusive, acquired 44 + * @RESET_CONTROL_OPTIONAL_EXCLUSIVE_DEASSERTED: optional, exclusive, acquired, deasserted 45 + * @RESET_CONTROL_OPTIONAL_EXCLUSIVE_RELEASED: optional, exclusive, released 46 + * @RESET_CONTROL_OPTIONAL_SHARED: optional, shared 47 + * @RESET_CONTROL_OPTIONAL_SHARED_DEASSERTED: optional, shared, deasserted 48 + */ 49 + enum reset_control_flags { 50 + RESET_CONTROL_EXCLUSIVE = RESET_CONTROL_FLAGS_BIT_ACQUIRED, 51 + RESET_CONTROL_EXCLUSIVE_DEASSERTED = RESET_CONTROL_FLAGS_BIT_ACQUIRED | 52 + RESET_CONTROL_FLAGS_BIT_DEASSERTED, 53 + RESET_CONTROL_EXCLUSIVE_RELEASED = 0, 54 + RESET_CONTROL_SHARED = RESET_CONTROL_FLAGS_BIT_SHARED, 55 + RESET_CONTROL_SHARED_DEASSERTED = RESET_CONTROL_FLAGS_BIT_SHARED | 56 + RESET_CONTROL_FLAGS_BIT_DEASSERTED, 57 + RESET_CONTROL_OPTIONAL_EXCLUSIVE = RESET_CONTROL_FLAGS_BIT_OPTIONAL | 58 + RESET_CONTROL_FLAGS_BIT_ACQUIRED, 59 + RESET_CONTROL_OPTIONAL_EXCLUSIVE_DEASSERTED = RESET_CONTROL_FLAGS_BIT_OPTIONAL | 60 + RESET_CONTROL_FLAGS_BIT_ACQUIRED | 61 + RESET_CONTROL_FLAGS_BIT_DEASSERTED, 62 + RESET_CONTROL_OPTIONAL_EXCLUSIVE_RELEASED = RESET_CONTROL_FLAGS_BIT_OPTIONAL, 63 + 
RESET_CONTROL_OPTIONAL_SHARED = RESET_CONTROL_FLAGS_BIT_OPTIONAL | 64 + RESET_CONTROL_FLAGS_BIT_SHARED, 65 + RESET_CONTROL_OPTIONAL_SHARED_DEASSERTED = RESET_CONTROL_FLAGS_BIT_OPTIONAL | 66 + RESET_CONTROL_FLAGS_BIT_SHARED | 67 + RESET_CONTROL_FLAGS_BIT_DEASSERTED, 68 + }; 69 + 28 70 #ifdef CONFIG_RESET_CONTROLLER 29 71 30 72 int reset_control_reset(struct reset_control *rstc); ··· 84 42 void reset_control_bulk_release(int num_rstcs, struct reset_control_bulk_data *rstcs); 85 43 86 44 struct reset_control *__of_reset_control_get(struct device_node *node, 87 - const char *id, int index, bool shared, 88 - bool optional, bool acquired); 45 + const char *id, int index, enum reset_control_flags flags); 89 46 struct reset_control *__reset_control_get(struct device *dev, const char *id, 90 - int index, bool shared, 91 - bool optional, bool acquired); 47 + int index, enum reset_control_flags flags); 92 48 void reset_control_put(struct reset_control *rstc); 93 49 int __reset_control_bulk_get(struct device *dev, int num_rstcs, 94 50 struct reset_control_bulk_data *rstcs, 95 - bool shared, bool optional, bool acquired); 51 + enum reset_control_flags flags); 96 52 void reset_control_bulk_put(int num_rstcs, struct reset_control_bulk_data *rstcs); 97 53 98 54 int __device_reset(struct device *dev, bool optional); 99 55 struct reset_control *__devm_reset_control_get(struct device *dev, 100 - const char *id, int index, bool shared, 101 - bool optional, bool acquired); 56 + const char *id, int index, enum reset_control_flags flags); 102 57 int __devm_reset_control_bulk_get(struct device *dev, int num_rstcs, 103 58 struct reset_control_bulk_data *rstcs, 104 - bool shared, bool optional, bool acquired); 59 + enum reset_control_flags flags); 105 60 106 61 struct reset_control *devm_reset_control_array_get(struct device *dev, 107 - bool shared, bool optional); 108 - struct reset_control *of_reset_control_array_get(struct device_node *np, 109 - bool shared, bool optional, 110 - bool 
acquired); 62 + enum reset_control_flags flags); 63 + struct reset_control *of_reset_control_array_get(struct device_node *np, enum reset_control_flags); 111 64 112 65 int reset_control_get_count(struct device *dev); 113 66 ··· 153 116 154 117 static inline struct reset_control *__of_reset_control_get( 155 118 struct device_node *node, 156 - const char *id, int index, bool shared, 157 - bool optional, bool acquired) 119 + const char *id, int index, enum reset_control_flags flags) 158 120 { 121 + bool optional = flags & RESET_CONTROL_FLAGS_BIT_OPTIONAL; 122 + 159 123 return optional ? NULL : ERR_PTR(-ENOTSUPP); 160 124 } 161 125 162 126 static inline struct reset_control *__reset_control_get( 163 127 struct device *dev, const char *id, 164 - int index, bool shared, bool optional, 165 - bool acquired) 128 + int index, enum reset_control_flags flags) 166 129 { 130 + bool optional = flags & RESET_CONTROL_FLAGS_BIT_OPTIONAL; 131 + 167 132 return optional ? NULL : ERR_PTR(-ENOTSUPP); 168 133 } 169 134 ··· 201 162 static inline int 202 163 __reset_control_bulk_get(struct device *dev, int num_rstcs, 203 164 struct reset_control_bulk_data *rstcs, 204 - bool shared, bool optional, bool acquired) 165 + enum reset_control_flags flags) 205 166 { 167 + bool optional = flags & RESET_CONTROL_FLAGS_BIT_OPTIONAL; 168 + 206 169 return optional ? 0 : -EOPNOTSUPP; 207 170 } 208 171 ··· 215 174 216 175 static inline struct reset_control *__devm_reset_control_get( 217 176 struct device *dev, const char *id, 218 - int index, bool shared, bool optional, 219 - bool acquired) 177 + int index, enum reset_control_flags flags) 220 178 { 179 + bool optional = flags & RESET_CONTROL_FLAGS_BIT_OPTIONAL; 180 + 221 181 return optional ? 
NULL : ERR_PTR(-ENOTSUPP); 222 182 } 223 183 224 184 static inline int 225 185 __devm_reset_control_bulk_get(struct device *dev, int num_rstcs, 226 186 struct reset_control_bulk_data *rstcs, 227 - bool shared, bool optional, bool acquired) 187 + enum reset_control_flags flags) 228 188 { 189 + bool optional = flags & RESET_CONTROL_FLAGS_BIT_OPTIONAL; 190 + 229 191 return optional ? 0 : -EOPNOTSUPP; 230 192 } 231 193 232 194 static inline struct reset_control * 233 - devm_reset_control_array_get(struct device *dev, bool shared, bool optional) 195 + devm_reset_control_array_get(struct device *dev, enum reset_control_flags flags) 234 196 { 197 + bool optional = flags & RESET_CONTROL_FLAGS_BIT_OPTIONAL; 198 + 235 199 return optional ? NULL : ERR_PTR(-ENOTSUPP); 236 200 } 237 201 238 202 static inline struct reset_control * 239 - of_reset_control_array_get(struct device_node *np, bool shared, bool optional, 240 - bool acquired) 203 + of_reset_control_array_get(struct device_node *np, enum reset_control_flags flags) 241 204 { 205 + bool optional = flags & RESET_CONTROL_FLAGS_BIT_OPTIONAL; 206 + 242 207 return optional ? 
NULL : ERR_PTR(-ENOTSUPP); 243 208 } 244 209 ··· 283 236 static inline struct reset_control * 284 237 __must_check reset_control_get_exclusive(struct device *dev, const char *id) 285 238 { 286 - return __reset_control_get(dev, id, 0, false, false, true); 239 + return __reset_control_get(dev, id, 0, RESET_CONTROL_EXCLUSIVE); 287 240 } 288 241 289 242 /** ··· 300 253 reset_control_bulk_get_exclusive(struct device *dev, int num_rstcs, 301 254 struct reset_control_bulk_data *rstcs) 302 255 { 303 - return __reset_control_bulk_get(dev, num_rstcs, rstcs, false, false, true); 256 + return __reset_control_bulk_get(dev, num_rstcs, rstcs, RESET_CONTROL_EXCLUSIVE); 304 257 } 305 258 306 259 /** ··· 321 274 __must_check reset_control_get_exclusive_released(struct device *dev, 322 275 const char *id) 323 276 { 324 - return __reset_control_get(dev, id, 0, false, false, false); 277 + return __reset_control_get(dev, id, 0, RESET_CONTROL_EXCLUSIVE_RELEASED); 325 278 } 326 279 327 280 /** ··· 342 295 reset_control_bulk_get_exclusive_released(struct device *dev, int num_rstcs, 343 296 struct reset_control_bulk_data *rstcs) 344 297 { 345 - return __reset_control_bulk_get(dev, num_rstcs, rstcs, false, false, false); 298 + return __reset_control_bulk_get(dev, num_rstcs, rstcs, RESET_CONTROL_EXCLUSIVE_RELEASED); 346 299 } 347 300 348 301 /** ··· 363 316 reset_control_bulk_get_optional_exclusive_released(struct device *dev, int num_rstcs, 364 317 struct reset_control_bulk_data *rstcs) 365 318 { 366 - return __reset_control_bulk_get(dev, num_rstcs, rstcs, false, true, false); 319 + return __reset_control_bulk_get(dev, num_rstcs, rstcs, 320 + RESET_CONTROL_OPTIONAL_EXCLUSIVE_RELEASED); 367 321 } 368 322 369 323 /** ··· 392 344 static inline struct reset_control *reset_control_get_shared( 393 345 struct device *dev, const char *id) 394 346 { 395 - return __reset_control_get(dev, id, 0, true, false, false); 347 + return __reset_control_get(dev, id, 0, RESET_CONTROL_SHARED); 396 348 } 397 349 
398 350 /** ··· 409 361 reset_control_bulk_get_shared(struct device *dev, int num_rstcs, 410 362 struct reset_control_bulk_data *rstcs) 411 363 { 412 - return __reset_control_bulk_get(dev, num_rstcs, rstcs, true, false, false); 364 + return __reset_control_bulk_get(dev, num_rstcs, rstcs, RESET_CONTROL_SHARED); 413 365 } 414 366 415 367 /** ··· 426 378 static inline struct reset_control *reset_control_get_optional_exclusive( 427 379 struct device *dev, const char *id) 428 380 { 429 - return __reset_control_get(dev, id, 0, false, true, true); 381 + return __reset_control_get(dev, id, 0, RESET_CONTROL_OPTIONAL_EXCLUSIVE); 430 382 } 431 383 432 384 /** ··· 446 398 reset_control_bulk_get_optional_exclusive(struct device *dev, int num_rstcs, 447 399 struct reset_control_bulk_data *rstcs) 448 400 { 449 - return __reset_control_bulk_get(dev, num_rstcs, rstcs, false, true, true); 401 + return __reset_control_bulk_get(dev, num_rstcs, rstcs, RESET_CONTROL_OPTIONAL_EXCLUSIVE); 450 402 } 451 403 452 404 /** ··· 463 415 static inline struct reset_control *reset_control_get_optional_shared( 464 416 struct device *dev, const char *id) 465 417 { 466 - return __reset_control_get(dev, id, 0, true, true, false); 418 + return __reset_control_get(dev, id, 0, RESET_CONTROL_OPTIONAL_SHARED); 467 419 } 468 420 469 421 /** ··· 483 435 reset_control_bulk_get_optional_shared(struct device *dev, int num_rstcs, 484 436 struct reset_control_bulk_data *rstcs) 485 437 { 486 - return __reset_control_bulk_get(dev, num_rstcs, rstcs, true, true, false); 438 + return __reset_control_bulk_get(dev, num_rstcs, rstcs, RESET_CONTROL_OPTIONAL_SHARED); 487 439 } 488 440 489 441 /** ··· 499 451 static inline struct reset_control *of_reset_control_get_exclusive( 500 452 struct device_node *node, const char *id) 501 453 { 502 - return __of_reset_control_get(node, id, 0, false, false, true); 454 + return __of_reset_control_get(node, id, 0, RESET_CONTROL_EXCLUSIVE); 503 455 } 504 456 505 457 /** ··· 519 471 static 
inline struct reset_control *of_reset_control_get_optional_exclusive( 520 472 struct device_node *node, const char *id) 521 473 { 522 - return __of_reset_control_get(node, id, 0, false, true, true); 474 + return __of_reset_control_get(node, id, 0, RESET_CONTROL_OPTIONAL_EXCLUSIVE); 523 475 } 524 476 525 477 /** ··· 544 496 static inline struct reset_control *of_reset_control_get_shared( 545 497 struct device_node *node, const char *id) 546 498 { 547 - return __of_reset_control_get(node, id, 0, true, false, false); 499 + return __of_reset_control_get(node, id, 0, RESET_CONTROL_SHARED); 548 500 } 549 501 550 502 /** ··· 561 513 static inline struct reset_control *of_reset_control_get_exclusive_by_index( 562 514 struct device_node *node, int index) 563 515 { 564 - return __of_reset_control_get(node, NULL, index, false, false, true); 516 + return __of_reset_control_get(node, NULL, index, RESET_CONTROL_EXCLUSIVE); 565 517 } 566 518 567 519 /** ··· 589 541 static inline struct reset_control *of_reset_control_get_shared_by_index( 590 542 struct device_node *node, int index) 591 543 { 592 - return __of_reset_control_get(node, NULL, index, true, false, false); 544 + return __of_reset_control_get(node, NULL, index, RESET_CONTROL_SHARED); 593 545 } 594 546 595 547 /** ··· 608 560 __must_check devm_reset_control_get_exclusive(struct device *dev, 609 561 const char *id) 610 562 { 611 - return __devm_reset_control_get(dev, id, 0, false, false, true); 563 + return __devm_reset_control_get(dev, id, 0, RESET_CONTROL_EXCLUSIVE); 564 + } 565 + 566 + /** 567 + * devm_reset_control_get_exclusive_deasserted - resource managed 568 + * reset_control_get_exclusive() + 569 + * reset_control_deassert() 570 + * @dev: device to be reset by the controller 571 + * @id: reset line name 572 + * 573 + * Managed reset_control_get_exclusive() + reset_control_deassert(). 
For reset 574 + * controllers returned from this function, reset_control_assert() + 575 + * reset_control_put() is called automatically on driver detach. 576 + * 577 + * See reset_control_get_exclusive() for more information. 578 + */ 579 + static inline struct reset_control * __must_check 580 + devm_reset_control_get_exclusive_deasserted(struct device *dev, const char *id) 581 + { 582 + return __devm_reset_control_get(dev, id, 0, RESET_CONTROL_EXCLUSIVE_DEASSERTED); 612 583 } 613 584 614 585 /** ··· 647 580 devm_reset_control_bulk_get_exclusive(struct device *dev, int num_rstcs, 648 581 struct reset_control_bulk_data *rstcs) 649 582 { 650 - return __devm_reset_control_bulk_get(dev, num_rstcs, rstcs, false, false, true); 583 + return __devm_reset_control_bulk_get(dev, num_rstcs, rstcs, 584 + RESET_CONTROL_EXCLUSIVE); 651 585 } 652 586 653 587 /** ··· 667 599 __must_check devm_reset_control_get_exclusive_released(struct device *dev, 668 600 const char *id) 669 601 { 670 - return __devm_reset_control_get(dev, id, 0, false, false, false); 602 + return __devm_reset_control_get(dev, id, 0, RESET_CONTROL_EXCLUSIVE_RELEASED); 671 603 } 672 604 673 605 /** ··· 687 619 devm_reset_control_bulk_get_exclusive_released(struct device *dev, int num_rstcs, 688 620 struct reset_control_bulk_data *rstcs) 689 621 { 690 - return __devm_reset_control_bulk_get(dev, num_rstcs, rstcs, false, false, false); 622 + return __devm_reset_control_bulk_get(dev, num_rstcs, rstcs, 623 + RESET_CONTROL_EXCLUSIVE_RELEASED); 691 624 } 692 625 693 626 /** ··· 707 638 __must_check devm_reset_control_get_optional_exclusive_released(struct device *dev, 708 639 const char *id) 709 640 { 710 - return __devm_reset_control_get(dev, id, 0, false, true, false); 641 + return __devm_reset_control_get(dev, id, 0, RESET_CONTROL_OPTIONAL_EXCLUSIVE_RELEASED); 711 642 } 712 643 713 644 /** ··· 727 658 devm_reset_control_bulk_get_optional_exclusive_released(struct device *dev, int num_rstcs, 728 659 struct 
reset_control_bulk_data *rstcs) 729 660 { 730 - return __devm_reset_control_bulk_get(dev, num_rstcs, rstcs, false, true, false); 661 + return __devm_reset_control_bulk_get(dev, num_rstcs, rstcs, 662 + RESET_CONTROL_OPTIONAL_EXCLUSIVE_RELEASED); 731 663 } 732 664 733 665 /** ··· 743 673 static inline struct reset_control *devm_reset_control_get_shared( 744 674 struct device *dev, const char *id) 745 675 { 746 - return __devm_reset_control_get(dev, id, 0, true, false, false); 676 + return __devm_reset_control_get(dev, id, 0, RESET_CONTROL_SHARED); 677 + } 678 + 679 + /** 680 + * devm_reset_control_get_shared_deasserted - resource managed 681 + * reset_control_get_shared() + 682 + * reset_control_deassert() 683 + * @dev: device to be reset by the controller 684 + * @id: reset line name 685 + * 686 + * Managed reset_control_get_shared() + reset_control_deassert(). For reset 687 + * controllers returned from this function, reset_control_assert() + 688 + * reset_control_put() is called automatically on driver detach. 689 + * 690 + * See devm_reset_control_get_shared() for more information. 
691 + */ 692 + static inline struct reset_control * __must_check 693 + devm_reset_control_get_shared_deasserted(struct device *dev, const char *id) 694 + { 695 + return __devm_reset_control_get(dev, id, 0, RESET_CONTROL_SHARED_DEASSERTED); 747 696 } 748 697 749 698 /** ··· 782 693 devm_reset_control_bulk_get_shared(struct device *dev, int num_rstcs, 783 694 struct reset_control_bulk_data *rstcs) 784 695 { 785 - return __devm_reset_control_bulk_get(dev, num_rstcs, rstcs, true, false, false); 696 + return __devm_reset_control_bulk_get(dev, num_rstcs, rstcs, RESET_CONTROL_SHARED); 697 + } 698 + 699 + /** 700 + * devm_reset_control_bulk_get_shared_deasserted - resource managed 701 + * reset_control_bulk_get_shared() + 702 + * reset_control_bulk_deassert() 703 + * @dev: device to be reset by the controller 704 + * @num_rstcs: number of entries in rstcs array 705 + * @rstcs: array of struct reset_control_bulk_data with reset line names set 706 + * 707 + * Managed reset_control_bulk_get_shared() + reset_control_bulk_deassert(). For 708 + * reset controllers returned from this function, reset_control_bulk_assert() + 709 + * reset_control_bulk_put() are called automatically on driver detach. 710 + * 711 + * See devm_reset_control_bulk_get_shared() for more information. 
712 + */ 713 + static inline int __must_check 714 + devm_reset_control_bulk_get_shared_deasserted(struct device *dev, int num_rstcs, 715 + struct reset_control_bulk_data *rstcs) 716 + { 717 + return __devm_reset_control_bulk_get(dev, num_rstcs, rstcs, 718 + RESET_CONTROL_SHARED_DEASSERTED); 786 719 } 787 720 788 721 /** ··· 822 711 static inline struct reset_control *devm_reset_control_get_optional_exclusive( 823 712 struct device *dev, const char *id) 824 713 { 825 - return __devm_reset_control_get(dev, id, 0, false, true, true); 714 + return __devm_reset_control_get(dev, id, 0, RESET_CONTROL_OPTIONAL_EXCLUSIVE); 715 + } 716 + 717 + /** 718 + * devm_reset_control_get_optional_exclusive_deasserted - resource managed 719 + * reset_control_get_optional_exclusive() + 720 + * reset_control_deassert() 721 + * @dev: device to be reset by the controller 722 + * @id: reset line name 723 + * 724 + * Managed reset_control_get_optional_exclusive() + reset_control_deassert(). 725 + * For reset controllers returned from this function, reset_control_assert() + 726 + * reset_control_put() is called automatically on driver detach. 727 + * 728 + * See devm_reset_control_get_optional_exclusive() for more information. 
729 + */ 730 + static inline struct reset_control * 731 + devm_reset_control_get_optional_exclusive_deasserted(struct device *dev, const char *id) 732 + { 733 + return __devm_reset_control_get(dev, id, 0, RESET_CONTROL_OPTIONAL_EXCLUSIVE_DEASSERTED); 826 734 } 827 735 828 736 /** ··· 861 731 devm_reset_control_bulk_get_optional_exclusive(struct device *dev, int num_rstcs, 862 732 struct reset_control_bulk_data *rstcs) 863 733 { 864 - return __devm_reset_control_bulk_get(dev, num_rstcs, rstcs, false, true, true); 734 + return __devm_reset_control_bulk_get(dev, num_rstcs, rstcs, 735 + RESET_CONTROL_OPTIONAL_EXCLUSIVE); 865 736 } 866 737 867 738 /** ··· 880 749 static inline struct reset_control *devm_reset_control_get_optional_shared( 881 750 struct device *dev, const char *id) 882 751 { 883 - return __devm_reset_control_get(dev, id, 0, true, true, false); 752 + return __devm_reset_control_get(dev, id, 0, RESET_CONTROL_OPTIONAL_SHARED); 753 + } 754 + 755 + /** 756 + * devm_reset_control_get_optional_shared_deasserted - resource managed 757 + * reset_control_get_optional_shared() + 758 + * reset_control_deassert() 759 + * @dev: device to be reset by the controller 760 + * @id: reset line name 761 + * 762 + * Managed reset_control_get_optional_shared() + reset_control_deassert(). For 763 + * reset controllers returned from this function, reset_control_assert() + 764 + * reset_control_put() is called automatically on driver detach. 765 + * 766 + * See devm_reset_control_get_optional_shared() for more information. 
767 + */ 768 + static inline struct reset_control * 769 + devm_reset_control_get_optional_shared_deasserted(struct device *dev, const char *id) 770 + { 771 + return __devm_reset_control_get(dev, id, 0, RESET_CONTROL_OPTIONAL_SHARED_DEASSERTED); 884 772 } 885 773 886 774 /** ··· 919 769 devm_reset_control_bulk_get_optional_shared(struct device *dev, int num_rstcs, 920 770 struct reset_control_bulk_data *rstcs) 921 771 { 922 - return __devm_reset_control_bulk_get(dev, num_rstcs, rstcs, true, true, false); 772 + return __devm_reset_control_bulk_get(dev, num_rstcs, rstcs, RESET_CONTROL_OPTIONAL_SHARED); 923 773 } 924 774 925 775 /** ··· 937 787 static inline struct reset_control * 938 788 devm_reset_control_get_exclusive_by_index(struct device *dev, int index) 939 789 { 940 - return __devm_reset_control_get(dev, NULL, index, false, false, true); 790 + return __devm_reset_control_get(dev, NULL, index, RESET_CONTROL_EXCLUSIVE); 941 791 } 942 792 943 793 /** ··· 953 803 static inline struct reset_control * 954 804 devm_reset_control_get_shared_by_index(struct device *dev, int index) 955 805 { 956 - return __devm_reset_control_get(dev, NULL, index, true, false, false); 806 + return __devm_reset_control_get(dev, NULL, index, RESET_CONTROL_SHARED); 957 807 } 958 808 959 809 /* ··· 1001 851 static inline struct reset_control * 1002 852 devm_reset_control_array_get_exclusive(struct device *dev) 1003 853 { 1004 - return devm_reset_control_array_get(dev, false, false); 854 + return devm_reset_control_array_get(dev, RESET_CONTROL_EXCLUSIVE); 1005 855 } 1006 856 1007 857 static inline struct reset_control * 1008 858 devm_reset_control_array_get_shared(struct device *dev) 1009 859 { 1010 - return devm_reset_control_array_get(dev, true, false); 860 + return devm_reset_control_array_get(dev, RESET_CONTROL_SHARED); 1011 861 } 1012 862 1013 863 static inline struct reset_control * 1014 864 devm_reset_control_array_get_optional_exclusive(struct device *dev) 1015 865 { 1016 - return 
devm_reset_control_array_get(dev, false, true); 866 + return devm_reset_control_array_get(dev, RESET_CONTROL_OPTIONAL_EXCLUSIVE); 1017 867 } 1018 868 1019 869 static inline struct reset_control * 1020 870 devm_reset_control_array_get_optional_shared(struct device *dev) 1021 871 { 1022 - return devm_reset_control_array_get(dev, true, true); 872 + return devm_reset_control_array_get(dev, RESET_CONTROL_OPTIONAL_SHARED); 1023 873 } 1024 874 1025 875 static inline struct reset_control * 1026 876 of_reset_control_array_get_exclusive(struct device_node *node) 1027 877 { 1028 - return of_reset_control_array_get(node, false, false, true); 878 + return of_reset_control_array_get(node, RESET_CONTROL_EXCLUSIVE); 1029 879 } 1030 880 1031 881 static inline struct reset_control * 1032 882 of_reset_control_array_get_exclusive_released(struct device_node *node) 1033 883 { 1034 - return of_reset_control_array_get(node, false, false, false); 884 + return of_reset_control_array_get(node, RESET_CONTROL_EXCLUSIVE_RELEASED); 1035 885 } 1036 886 1037 887 static inline struct reset_control * 1038 888 of_reset_control_array_get_shared(struct device_node *node) 1039 889 { 1040 - return of_reset_control_array_get(node, true, false, true); 890 + return of_reset_control_array_get(node, RESET_CONTROL_SHARED); 1041 891 } 1042 892 1043 893 static inline struct reset_control * 1044 894 of_reset_control_array_get_optional_exclusive(struct device_node *node) 1045 895 { 1046 - return of_reset_control_array_get(node, false, true, true); 896 + return of_reset_control_array_get(node, RESET_CONTROL_OPTIONAL_EXCLUSIVE); 1047 897 } 1048 898 1049 899 static inline struct reset_control * 1050 900 of_reset_control_array_get_optional_shared(struct device_node *node) 1051 901 { 1052 - return of_reset_control_array_get(node, true, true, true); 902 + return of_reset_control_array_get(node, RESET_CONTROL_OPTIONAL_SHARED); 1053 903 } 1054 904 #endif
+36
include/linux/soc/mediatek/dvfsrc.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 3 + * Copyright (c) 2021 MediaTek Inc. 4 + * Copyright (c) 2024 Collabora Ltd. 5 + * AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> 6 + */ 7 + 8 + #ifndef __MEDIATEK_DVFSRC_H 9 + #define __MEDIATEK_DVFSRC_H 10 + 11 + enum mtk_dvfsrc_cmd { 12 + MTK_DVFSRC_CMD_BW, 13 + MTK_DVFSRC_CMD_HRT_BW, 14 + MTK_DVFSRC_CMD_PEAK_BW, 15 + MTK_DVFSRC_CMD_OPP, 16 + MTK_DVFSRC_CMD_VCORE_LEVEL, 17 + MTK_DVFSRC_CMD_VSCP_LEVEL, 18 + MTK_DVFSRC_CMD_MAX, 19 + }; 20 + 21 + #if IS_ENABLED(CONFIG_MTK_DVFSRC) 22 + 23 + int mtk_dvfsrc_send_request(const struct device *dev, u32 cmd, u64 data); 24 + int mtk_dvfsrc_query_info(const struct device *dev, u32 cmd, int *data); 25 + 26 + #else 27 + 28 + static inline int mtk_dvfsrc_send_request(const struct device *dev, u32 cmd, u64 data) 29 + { return -ENODEV; } 30 + 31 + static inline int mtk_dvfsrc_query_info(const struct device *dev, u32 cmd, int *data) 32 + { return -ENODEV; } 33 + 34 + #endif /* CONFIG_MTK_DVFSRC */ 35 + 36 + #endif
+3
include/linux/soc/mediatek/mtk_sip_svc.h
··· 22 22 ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, MTK_SIP_SMC_CONVENTION, \ 23 23 ARM_SMCCC_OWNER_SIP, fn_id) 24 24 25 + /* DVFSRC SMC calls */ 26 + #define MTK_SIP_DVFSRC_VCOREFS_CONTROL MTK_SIP_SMC_CMD(0x506) 27 + 25 28 /* IOMMU related SMC call */ 26 29 #define MTK_SIP_KERNEL_IOMMU_CONTROL MTK_SIP_SMC_CMD(0x514) 27 30
+12
include/linux/soc/qcom/llcc-qcom.h
··· 54 54 #define LLCC_CAMEXP4 52 55 55 #define LLCC_DISP_WB 53 56 56 #define LLCC_DISP_1 54 57 + #define LLCC_VIEYE 57 58 + #define LLCC_VIDPTH 58 59 + #define LLCC_GPUMV 59 60 + #define LLCC_EVA_LEFT 60 61 + #define LLCC_EVA_RIGHT 61 62 + #define LLCC_EVAGAIN 62 63 + #define LLCC_VIPTH 63 57 64 #define LLCC_VIDVSP 64 65 + #define LLCC_DISP_LEFT 65 66 + #define LLCC_DISP_RIGHT 66 67 + #define LLCC_EVCS_LEFT 67 68 + #define LLCC_EVCS_RIGHT 68 69 + #define LLCC_SPAD 69 58 70 59 71 /** 60 72 * struct llcc_slice_desc - Cache slice descriptor
+30
include/linux/soc/ti/ti_sci_protocol.h
··· 195 195 u64 *current_freq); 196 196 }; 197 197 198 + /* TISCI LPM IO isolation control values */ 199 + #define TISCI_MSG_VALUE_IO_ENABLE 1 200 + #define TISCI_MSG_VALUE_IO_DISABLE 0 201 + 202 + /* TISCI LPM constraint state values */ 203 + #define TISCI_MSG_CONSTRAINT_SET 1 204 + #define TISCI_MSG_CONSTRAINT_CLR 0 205 + 206 + /** 207 + * struct ti_sci_pm_ops - Low Power Mode (LPM) control operations 208 + * @lpm_wake_reason: Get the wake up source that woke the SoC from LPM 209 + * - source: The wake up source that woke soc from LPM. 210 + * - timestamp: Timestamp at which soc woke. 211 + * @set_device_constraint: Set LPM constraint on behalf of a device 212 + * - id: Device Identifier 213 + * - state: The desired state of device constraint: set or clear. 214 + * @set_latency_constraint: Set LPM resume latency constraint 215 + * - latency: maximum acceptable latency to wake up from low power mode 216 + * - state: The desired state of latency constraint: set or clear. 217 + */ 218 + struct ti_sci_pm_ops { 219 + int (*lpm_wake_reason)(const struct ti_sci_handle *handle, 220 + u32 *source, u64 *timestamp, u8 *pin, u8 *mode); 221 + int (*set_device_constraint)(const struct ti_sci_handle *handle, 222 + u32 id, u8 state); 223 + int (*set_latency_constraint)(const struct ti_sci_handle *handle, 224 + u16 latency, u8 state); 225 + }; 226 + 198 227 /** 199 228 * struct ti_sci_resource_desc - Description of TI SCI resource instance range. 200 229 * @start: Start index of the first resource range. ··· 568 539 struct ti_sci_core_ops core_ops; 569 540 struct ti_sci_dev_ops dev_ops; 570 541 struct ti_sci_clk_ops clk_ops; 542 + struct ti_sci_pm_ops pm_ops; 571 543 struct ti_sci_rm_core_ops rm_core_ops; 572 544 struct ti_sci_rm_irq_ops rm_irq_ops; 573 545 struct ti_sci_rm_ringacc_ops rm_ring_ops;
+23
include/soc/amlogic/reset-meson-aux.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef __SOC_RESET_MESON_AUX_H 3 + #define __SOC_RESET_MESON_AUX_H 4 + 5 + #include <linux/err.h> 6 + 7 + struct device; 8 + struct regmap; 9 + 10 + #if IS_ENABLED(CONFIG_RESET_MESON_AUX) 11 + int devm_meson_rst_aux_register(struct device *dev, 12 + struct regmap *map, 13 + const char *adev_name); 14 + #else 15 + static inline int devm_meson_rst_aux_register(struct device *dev, 16 + struct regmap *map, 17 + const char *adev_name) 18 + { 19 + return 0; 20 + } 21 + #endif 22 + 23 + #endif /* __SOC_RESET_MESON_AUX_H */