Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'soc-drivers-6.15-1' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc

Pull SoC driver updates from Arnd Bergmann:
"These are the updates for SoC specific drivers and related subsystems:

- Firmware driver updates for SCMI, FF-A and SMCCC firmware
interfaces, adding support for additional firmware features
including SoC identification and FF-A SRI callbacks as well as
various bugfixes

- Memory controller updates for Nvidia and Mediatek

- Reset controller support for microchip sam9x7 and imx8qxp/imx8qm

- New hardware support for multiple Mediatek, Renesas and Samsung
Exynos chips

- Minor updates on Zynq, Qualcomm, Amlogic, TI, Samsung, Nvidia and
Apple chips

There will be a follow up with a few more driver updates that are
still causing build regressions at the moment"

* tag 'soc-drivers-6.15-1' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (97 commits)
irqchip: Add support for Amlogic A4 and A5 SoCs
dt-bindings: interrupt-controller: Add support for Amlogic A4 and A5 SoCs
reset: imx: fix incorrect module device table
dt-bindings: power: qcom,kpss-acc-v2: add qcom,msm8916-acc compatible
bus: qcom-ssc-block-bus: Fix the error handling path of qcom_ssc_block_bus_probe()
bus: qcom-ssc-block-bus: Remove some duplicated iounmap() calls
soc: qcom: pd-mapper: Add support for SDM630/636
reset: imx: Add SCU reset driver for i.MX8QXP and i.MX8QM
dt-bindings: firmware: imx: add property reset-controller
dt-bindings: reset: atmel,at91sam9260-reset: add sam9x7
memory: mtk-smi: Add ostd setting for mt8192
dt-bindings: soc: samsung: exynos-usi: Drop unnecessary status from example
firmware: tegra: bpmp: Fix typo in bpmp-abi.h
soc/tegra: pmc: Use str_enable_disable-like helpers
soc: samsung: include linux/array_size.h where needed
firmware: arm_scmi: use ioread64() instead of ioread64_hi_lo()
soc: mediatek: mtk-socinfo: Add extra entry for MT8395AV/ZA Genio 1200
soc: mediatek: mt8188-mmsys: Add support for DSC on VDO0
soc: mediatek: mmsys: Migrate all tables to MMSYS_ROUTE() macro
soc: mediatek: mt8365-mmsys: Fix routing table masks and values
...

Overall diffstat: +3264 -1094
Documentation/devicetree/bindings/firmware/fsl,scu.yaml (+12)

```diff
       Keys provided by the SCU
     $ref: /schemas/input/fsl,scu-key.yaml
 
+  reset-controller:
+    type: object
+    properties:
+      compatible:
+        const: fsl,imx-scu-reset
+      '#reset-cells':
+        const: 1
+    required:
+      - compatible
+      - '#reset-cells'
+    additionalProperties: false
+
   mboxes:
     description:
       A list of phandles of TX MU channels followed by a list of phandles of
```
Documentation/devicetree/bindings/firmware/google,gs101-acpm-ipc.yaml (+50, new file)

```diff
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+# Copyright 2024 Linaro Ltd.
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/firmware/google,gs101-acpm-ipc.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Samsung Exynos ACPM mailbox protocol
+
+maintainers:
+  - Tudor Ambarus <tudor.ambarus@linaro.org>
+
+description: |
+  ACPM (Alive Clock and Power Manager) is a firmware that operates on the
+  APM (Active Power Management) module that handles overall power management
+  activities. ACPM and masters regard each other as independent hardware
+  component and communicate with each other using mailbox messages and
+  shared memory.
+
+  This binding is intended to define the interface the firmware implementing
+  ACPM provides for OSPM in the device tree.
+
+properties:
+  compatible:
+    const: google,gs101-acpm-ipc
+
+  mboxes:
+    maxItems: 1
+
+  shmem:
+    description:
+      List of phandle pointing to the shared memory (SHM) area. The memory
+      contains channels configuration data and the TX/RX ring buffers that
+      are used for passing messages to/from the ACPM firmware.
+    maxItems: 1
+
+required:
+  - compatible
+  - mboxes
+  - shmem
+
+additionalProperties: false
+
+examples:
+  - |
+    power-management {
+        compatible = "google,gs101-acpm-ipc";
+        mboxes = <&ap2apm_mailbox>;
+        shmem = <&apm_sram>;
+    };
```
Documentation/devicetree/bindings/hwinfo/samsung,exynos-chipid.yaml (+2)

```diff
     - items:
         - enum:
             - samsung,exynos5433-chipid
             - samsung,exynos7-chipid
+            - samsung,exynos7870-chipid
         - const: samsung,exynos4210-chipid
     - items:
         - enum:
+            - samsung,exynos2200-chipid
             - samsung,exynos7885-chipid
             - samsung,exynos8895-chipid
             - samsung,exynos9810-chipid
```
Documentation/devicetree/bindings/interrupt-controller/amlogic,meson-gpio-intc.yaml (+18 -1)

```diff
           - amlogic,meson-sm1-gpio-intc
           - amlogic,meson-a1-gpio-intc
           - amlogic,meson-s4-gpio-intc
+          - amlogic,a4-gpio-intc
+          - amlogic,a4-gpio-ao-intc
+          - amlogic,a5-gpio-intc
           - amlogic,c3-gpio-intc
           - amlogic,t7-gpio-intc
       - const: amlogic,meson-gpio-intc
@@
 
   amlogic,channel-interrupts:
     description: Array with the upstream hwirq numbers
-    minItems: 8
+    minItems: 2
     maxItems: 12
     $ref: /schemas/types.yaml#/definitions/uint32-array
@@
   - interrupt-controller
   - "#interrupt-cells"
   - amlogic,channel-interrupts
+
+if:
+  properties:
+    compatible:
+      contains:
+        const: amlogic,a4-gpio-ao-intc
+then:
+  properties:
+    amlogic,channel-interrupts:
+      maxItems: 2
+else:
+  properties:
+    amlogic,channel-interrupts:
+      minItems: 8
 
 additionalProperties: false
```
Documentation/devicetree/bindings/power/qcom,kpss-acc-v2.yaml (+3 -1)

```diff
 
 properties:
   compatible:
-    const: qcom,kpss-acc-v2
+    enum:
+      - qcom,kpss-acc-v2
+      - qcom,msm8916-acc
 
   reg:
     items:
```
Documentation/devicetree/bindings/reset/atmel,at91sam9260-reset.yaml (+4)

```diff
       - items:
           - const: atmel,sama5d3-rstc
           - const: atmel,at91sam9g45-rstc
+      - items:
+          - enum:
+              - microchip,sam9x7-rstc
+          - const: microchip,sam9x60-rstc
 
   reg:
     minItems: 1
```
Documentation/devicetree/bindings/soc/qcom/qcom,geni-se.yaml (+5)

```diff
 
   dma-coherent: true
 
+  firmware-name:
+    maxItems: 1
+    description: Specify the name of the QUP firmware to load.
+
 required:
   - compatible
   - reg
@@
         #address-cells = <2>;
         #size-cells = <2>;
         ranges;
+        firmware-name = "qcom/sa8775p/qupv3fw.elf";
 
         i2c0: i2c@a94000 {
             compatible = "qcom,geni-i2c";
```
Documentation/devicetree/bindings/soc/qcom/qcom,pmic-glink.yaml (+1)

```diff
       - items:
           - enum:
               - qcom,sm8650-pmic-glink
+              - qcom,sm8750-pmic-glink
               - qcom,x1e80100-pmic-glink
           - const: qcom,sm8550-pmic-glink
           - const: qcom,pmic-glink
```
Documentation/devicetree/bindings/soc/samsung/exynos-pmu.yaml (+2)

```diff
       - const: syscon
   - items:
       - enum:
+          - samsung,exynos2200-pmu
+          - samsung,exynos7870-pmu
           - samsung,exynos7885-pmu
           - samsung,exynos8895-pmu
           - samsung,exynos9810-pmu
```
Documentation/devicetree/bindings/soc/samsung/exynos-usi.yaml (-1)

```diff
         interrupts = <GIC_SPI 227 IRQ_TYPE_LEVEL_HIGH>;
         clocks = <&cmu_peri 32>, <&cmu_peri 31>;
         clock-names = "uart", "clk_uart_baud0";
-        status = "disabled";
     };
 
     hsi2c_0: i2c@13820000 {
```
Documentation/devicetree/bindings/soc/samsung/samsung,exynos-sysreg.yaml (+5)

```diff
           - google,gs101-hsi2-sysreg
           - google,gs101-peric0-sysreg
           - google,gs101-peric1-sysreg
+          - samsung,exynos2200-cmgp-sysreg
+          - samsung,exynos2200-peric0-sysreg
+          - samsung,exynos2200-peric1-sysreg
+          - samsung,exynos2200-peric2-sysreg
+          - samsung,exynos2200-ufs-sysreg
           - samsung,exynos3-sysreg
           - samsung,exynos4-sysreg
           - samsung,exynos5-sysreg
```
MAINTAINERS (+11)

```diff
 X: drivers/media/i2c/
 N: imx
 N: mxs
+N: \bmxc[^\d]
 
 ARM/FREESCALE LAYERSCAPE ARM ARCHITECTURE
 M: Shawn Guo <shawnguo@kernel.org>
@@
 F: drivers/*/*s3c64xx*
 F: drivers/*/*s5pv210*
 F: drivers/clocksource/samsung_pwm_timer.c
+F: drivers/firmware/samsung/
 F: drivers/mailbox/exynos-mailbox.c
 F: drivers/memory/samsung/
 F: drivers/pwm/pwm-samsung.c
@@
 F: arch/arm64/boot/dts/exynos/exynos850*
 F: drivers/clk/samsung/clk-exynos850.c
 F: include/dt-bindings/clock/exynos850.h
+
+SAMSUNG EXYNOS ACPM MAILBOX PROTOCOL
+M: Tudor Ambarus <tudor.ambarus@linaro.org>
+L: linux-kernel@vger.kernel.org
+L: linux-samsung-soc@vger.kernel.org
+S: Supported
+F: Documentation/devicetree/bindings/firmware/google,gs101-acpm-ipc.yaml
+F: drivers/firmware/samsung/exynos-acpm*
+F: include/linux/firmware/samsung/exynos-acpm-protocol.h
 
 SAMSUNG EXYNOS MAILBOX DRIVER
 M: Tudor Ambarus <tudor.ambarus@linaro.org>
```
arch/arm/mach-s3c/devs.c (-1)

```diff
 #include <linux/slab.h>
 #include <linux/string.h>
 #include <linux/dma-mapping.h>
-#include <linux/fb.h>
 #include <linux/gfp.h>
 #include <linux/mmc/host.h>
 #include <linux/ioport.h>
```
arch/arm/mach-s3c/setup-fb-24bpp-s3c64xx.c (-1)

```diff
 
 #include <linux/kernel.h>
 #include <linux/types.h>
-#include <linux/fb.h>
 #include <linux/gpio.h>
 
 #include "fb.h"
```
drivers/bus/qcom-ssc-block-bus.c (+19 -15)

```diff
 
         platform_set_drvdata(pdev, data);
 
-        data->pd_names = qcom_ssc_block_pd_names;
-        data->num_pds = ARRAY_SIZE(qcom_ssc_block_pd_names);
-
-        /* power domains */
-        ret = qcom_ssc_block_bus_pds_attach(&pdev->dev, data->pds, data->pd_names, data->num_pds);
-        if (ret < 0)
-                return dev_err_probe(&pdev->dev, ret, "error when attaching power domains\n");
-
-        ret = qcom_ssc_block_bus_pds_enable(data->pds, data->num_pds);
-        if (ret < 0)
-                return dev_err_probe(&pdev->dev, ret, "error when enabling power domains\n");
-
         /* low level overrides for when the HW logic doesn't "just work" */
         res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "mpm_sscaon_config0");
         data->reg_mpm_sscaon_config0 = devm_ioremap_resource(&pdev->dev, res);
@@
 
         data->ssc_axi_halt = halt_args.args[0];
 
+        /* power domains */
+        data->pd_names = qcom_ssc_block_pd_names;
+        data->num_pds = ARRAY_SIZE(qcom_ssc_block_pd_names);
+
+        ret = qcom_ssc_block_bus_pds_attach(&pdev->dev, data->pds, data->pd_names, data->num_pds);
+        if (ret < 0)
+                return dev_err_probe(&pdev->dev, ret, "error when attaching power domains\n");
+
+        ret = qcom_ssc_block_bus_pds_enable(data->pds, data->num_pds);
+        if (ret < 0) {
+                dev_err_probe(&pdev->dev, ret, "error when enabling power domains\n");
+                goto err_detach_pds_bus;
+        }
+
         qcom_ssc_block_bus_init(&pdev->dev);
 
         of_platform_populate(np, NULL, NULL, &pdev->dev);
 
         return 0;
+
+err_detach_pds_bus:
+        qcom_ssc_block_bus_pds_detach(&pdev->dev, data->pds, data->num_pds);
+
+        return ret;
 }
 
 static void qcom_ssc_block_bus_remove(struct platform_device *pdev)
@@
         struct qcom_ssc_block_bus_data *data = platform_get_drvdata(pdev);
 
         qcom_ssc_block_bus_deinit(&pdev->dev);
-
-        iounmap(data->reg_mpm_sscaon_config0);
-        iounmap(data->reg_mpm_sscaon_config1);
 
         qcom_ssc_block_bus_pds_disable(data->pds, data->num_pds);
         qcom_ssc_block_bus_pds_detach(&pdev->dev, data->pds, data->num_pds);
```
drivers/firmware/Kconfig (+2)

```diff
 config TI_SCI_PROTOCOL
         tristate "TI System Control Interface (TISCI) Message Protocol"
         depends on TI_MESSAGE_MANAGER
+        default ARCH_K3
         help
           TI System Control Interface (TISCI) Message Protocol is used to manage
           compute systems such as ARM, DSP etc with the system controller in
@@
 source "drivers/firmware/microchip/Kconfig"
 source "drivers/firmware/psci/Kconfig"
 source "drivers/firmware/qcom/Kconfig"
+source "drivers/firmware/samsung/Kconfig"
 source "drivers/firmware/smccc/Kconfig"
 source "drivers/firmware/tegra/Kconfig"
 source "drivers/firmware/xilinx/Kconfig"
```
drivers/firmware/Makefile (+1)

```diff
 obj-y += imx/
 obj-y += psci/
 obj-y += qcom/
+obj-y += samsung/
 obj-y += smccc/
 obj-y += tegra/
 obj-y += xilinx/
```
drivers/firmware/arm_ffa/bus.c (+7 -7)

```diff
 
 #include "common.h"
 
-#define SCMI_UEVENT_MODALIAS_FMT "arm_ffa:%04x:%pUb"
+#define FFA_UEVENT_MODALIAS_FMT "arm_ffa:%04x:%pUb"
 
 static DEFINE_IDA(ffa_bus_id);
 
@@
 {
         const struct ffa_device *ffa_dev = to_ffa_dev(dev);
 
-        return add_uevent_var(env, "MODALIAS=" SCMI_UEVENT_MODALIAS_FMT,
+        return add_uevent_var(env, "MODALIAS=" FFA_UEVENT_MODALIAS_FMT,
                               ffa_dev->vm_id, &ffa_dev->uuid);
 }
@@
 {
         struct ffa_device *ffa_dev = to_ffa_dev(dev);
 
-        return sysfs_emit(buf, SCMI_UEVENT_MODALIAS_FMT, ffa_dev->vm_id,
+        return sysfs_emit(buf, FFA_UEVENT_MODALIAS_FMT, ffa_dev->vm_id,
                           &ffa_dev->uuid);
 }
 static DEVICE_ATTR_RO(modalias);
@@
         return 0;
 }
 
-static void ffa_devices_unregister(void)
+void ffa_devices_unregister(void)
 {
         bus_for_each_dev(&ffa_bus_type, NULL, NULL,
                          __ffa_devices_unregister);
 }
+EXPORT_SYMBOL_GPL(ffa_devices_unregister);
 
 bool ffa_device_is_valid(struct ffa_device *ffa_dev)
 {
@@
                                        const struct ffa_ops *ops)
 {
         int id, ret;
-        uuid_t uuid;
         struct device *dev;
         struct ffa_device *ffa_dev;
 
@@
         dev = &ffa_dev->dev;
         dev->bus = &ffa_bus_type;
         dev->release = ffa_release_device;
+        dev->dma_mask = &dev->coherent_dma_mask;
         dev_set_name(&ffa_dev->dev, "arm-ffa-%d", id);
 
         ffa_dev->id = id;
         ffa_dev->vm_id = part_info->id;
         ffa_dev->properties = part_info->properties;
         ffa_dev->ops = ops;
-        import_uuid(&uuid, (u8 *)part_info->uuid);
-        uuid_copy(&ffa_dev->uuid, &uuid);
+        uuid_copy(&ffa_dev->uuid, &part_info->uuid);
 
         ret = device_register(&ffa_dev->dev);
         if (ret) {
```
drivers/firmware/arm_ffa/driver.c (+413 -155)

```diff
 
 #include "common.h"
 
-#define FFA_DRIVER_VERSION FFA_VERSION_1_1
+#define FFA_DRIVER_VERSION FFA_VERSION_1_2
 #define FFA_MIN_VERSION    FFA_VERSION_1_0
 
 #define SENDER_ID_MASK     GENMASK(31, 16)
@@
 };
 
 static struct ffa_drv_info *drv_info;
-static void ffa_partitions_cleanup(void);
 
 /*
  * The driver must be able to support all the versions from the earliest
@@
                       .a0 = FFA_VERSION, .a1 = FFA_DRIVER_VERSION,
                       }, &ver);
 
-        if (ver.a0 == FFA_RET_NOT_SUPPORTED) {
+        if ((s32)ver.a0 == FFA_RET_NOT_SUPPORTED) {
                 pr_info("FFA_VERSION returned not supported\n");
                 return -EOPNOTSUPP;
+        }
+
+        if (FFA_MAJOR_VERSION(ver.a0) > FFA_MAJOR_VERSION(FFA_DRIVER_VERSION)) {
+                pr_err("Incompatible v%d.%d! Latest supported v%d.%d\n",
+                       FFA_MAJOR_VERSION(ver.a0), FFA_MINOR_VERSION(ver.a0),
+                       FFA_MAJOR_VERSION(FFA_DRIVER_VERSION),
+                       FFA_MINOR_VERSION(FFA_DRIVER_VERSION));
+                return -EINVAL;
         }
 
         if (ver.a0 < FFA_MIN_VERSION) {
@@
         }
 
         if (buffer && count <= num_partitions)
-                for (idx = 0; idx < count; idx++)
-                        memcpy(buffer + idx, drv_info->rx_buffer + idx * sz,
-                               buf_sz);
+                for (idx = 0; idx < count; idx++) {
+                        struct ffa_partition_info_le {
+                                __le16 id;
+                                __le16 exec_ctxt;
+                                __le32 properties;
+                                uuid_t uuid;
+                        } *rx_buf = drv_info->rx_buffer + idx * sz;
+                        struct ffa_partition_info *buf = buffer + idx;
+
+                        buf->id = le16_to_cpu(rx_buf->id);
+                        buf->exec_ctxt = le16_to_cpu(rx_buf->exec_ctxt);
+                        buf->properties = le32_to_cpu(rx_buf->properties);
+                        if (buf_sz > 8)
+                                import_uuid(&buf->uuid, (u8 *)&rx_buf->uuid);
+                }
 
         ffa_rx_release();
@@
 #define CURRENT_INDEX(x)        ((u16)(FIELD_GET(CURRENT_INDEX_MASK, (x))))
 #define UUID_INFO_TAG(x)        ((u16)(FIELD_GET(UUID_INFO_TAG_MASK, (x))))
 #define PARTITION_INFO_SZ(x)    ((u16)(FIELD_GET(PARTITION_INFO_SZ_MASK, (x))))
+#define PART_INFO_ID_MASK       GENMASK(15, 0)
+#define PART_INFO_EXEC_CXT_MASK GENMASK(31, 16)
+#define PART_INFO_PROPS_MASK    GENMASK(63, 32)
+#define PART_INFO_ID(x)         ((u16)(FIELD_GET(PART_INFO_ID_MASK, (x))))
+#define PART_INFO_EXEC_CXT(x)   ((u16)(FIELD_GET(PART_INFO_EXEC_CXT_MASK, (x))))
+#define PART_INFO_PROPERTIES(x) ((u32)(FIELD_GET(PART_INFO_PROPS_MASK, (x))))
 static int
 __ffa_partition_info_get_regs(u32 uuid0, u32 uuid1, u32 uuid2, u32 uuid3,
                               struct ffa_partition_info *buffer, int num_parts)
 {
         u16 buf_sz, start_idx, cur_idx, count = 0, prev_idx = 0, tag = 0;
+        struct ffa_partition_info *buf = buffer;
         ffa_value_t partition_info;
 
         do {
+                __le64 *regs;
+                int idx;
+
                 start_idx = prev_idx ? prev_idx + 1 : 0;
 
                 invoke_ffa_fn((ffa_value_t){
@@
                 if (buf_sz > sizeof(*buffer))
                         buf_sz = sizeof(*buffer);
 
-                memcpy(buffer + prev_idx * buf_sz, &partition_info.a3,
-                       (cur_idx - start_idx + 1) * buf_sz);
+                regs = (void *)&partition_info.a3;
+                for (idx = 0; idx < cur_idx - start_idx + 1; idx++, buf++) {
+                        union {
+                                uuid_t uuid;
+                                u64 regs[2];
+                        } uuid_regs = {
+                                .regs = {
+                                        le64_to_cpu(*(regs + 1)),
+                                        le64_to_cpu(*(regs + 2)),
+                                }
+                        };
+                        u64 val = *(u64 *)regs;
+
+                        buf->id = PART_INFO_ID(val);
+                        buf->exec_ctxt = PART_INFO_EXEC_CXT(val);
+                        buf->properties = PART_INFO_PROPERTIES(val);
+                        uuid_copy(&buf->uuid, &uuid_regs.uuid);
+                        regs += 3;
+                }
                 prev_idx = cur_idx;
 
         } while (cur_idx < (count - 1));
@@
         return -EINVAL;
 }
 
-static int ffa_msg_send2(u16 src_id, u16 dst_id, void *buf, size_t sz)
+static int ffa_msg_send2(struct ffa_device *dev, u16 src_id, void *buf, size_t sz)
 {
-        u32 src_dst_ids = PACK_TARGET_INFO(src_id, dst_id);
+        u32 src_dst_ids = PACK_TARGET_INFO(src_id, dev->vm_id);
         struct ffa_indirect_msg_hdr *msg;
         ffa_value_t ret;
         int retval = 0;
@@
         msg->offset = sizeof(*msg);
         msg->send_recv_id = src_dst_ids;
         msg->size = sz;
+        uuid_copy(&msg->uuid, &dev->uuid);
         memcpy((u8 *)msg + msg->offset, buf, sz);
 
         /* flags = 0, sender VMID = 0 works for both physical/virtual NS */
@@
         return 0;
 }
 
+enum notify_type {
+        SECURE_PARTITION,
+        NON_SECURE_VM,
+        SPM_FRAMEWORK,
+        NS_HYP_FRAMEWORK,
+};
+
 #define NOTIFICATION_LOW_MASK           GENMASK(31, 0)
 #define NOTIFICATION_HIGH_MASK          GENMASK(63, 32)
 #define NOTIFICATION_BITMAP_HIGH(x)     \
@@
 #define MAX_IDS_32      10
 
 #define PER_VCPU_NOTIFICATION_FLAG      BIT(0)
-#define SECURE_PARTITION_BITMAP         BIT(0)
-#define NON_SECURE_VM_BITMAP            BIT(1)
-#define SPM_FRAMEWORK_BITMAP            BIT(2)
-#define NS_HYP_FRAMEWORK_BITMAP         BIT(3)
+#define SECURE_PARTITION_BITMAP_ENABLE  BIT(SECURE_PARTITION)
+#define NON_SECURE_VM_BITMAP_ENABLE     BIT(NON_SECURE_VM)
+#define SPM_FRAMEWORK_BITMAP_ENABLE     BIT(SPM_FRAMEWORK)
+#define NS_HYP_FRAMEWORK_BITMAP_ENABLE  BIT(NS_HYP_FRAMEWORK)
+#define FFA_BITMAP_SECURE_ENABLE_MASK   \
+        (SECURE_PARTITION_BITMAP_ENABLE | SPM_FRAMEWORK_BITMAP_ENABLE)
+#define FFA_BITMAP_NS_ENABLE_MASK       \
+        (NON_SECURE_VM_BITMAP_ENABLE | NS_HYP_FRAMEWORK_BITMAP_ENABLE)
+#define FFA_BITMAP_ALL_ENABLE_MASK      \
+        (FFA_BITMAP_SECURE_ENABLE_MASK | FFA_BITMAP_NS_ENABLE_MASK)
+
+#define FFA_SECURE_PARTITION_ID_FLAG    BIT(15)
+
+#define SPM_FRAMEWORK_BITMAP(x)         NOTIFICATION_BITMAP_LOW(x)
+#define NS_HYP_FRAMEWORK_BITMAP(x)      NOTIFICATION_BITMAP_HIGH(x)
+#define FRAMEWORK_NOTIFY_RX_BUFFER_FULL BIT(0)
 
 static int ffa_notification_bind_common(u16 dst_id, u64 bitmap,
                                         u32 flags, bool is_bind)
@@
         else if (ret.a0 != FFA_SUCCESS)
                 return -EINVAL; /* Something else went wrong. */
 
-        notify->sp_map = PACK_NOTIFICATION_BITMAP(ret.a2, ret.a3);
-        notify->vm_map = PACK_NOTIFICATION_BITMAP(ret.a4, ret.a5);
-        notify->arch_map = PACK_NOTIFICATION_BITMAP(ret.a6, ret.a7);
+        if (flags & SECURE_PARTITION_BITMAP_ENABLE)
+                notify->sp_map = PACK_NOTIFICATION_BITMAP(ret.a2, ret.a3);
+        if (flags & NON_SECURE_VM_BITMAP_ENABLE)
+                notify->vm_map = PACK_NOTIFICATION_BITMAP(ret.a4, ret.a5);
+        if (flags & SPM_FRAMEWORK_BITMAP_ENABLE)
+                notify->arch_map = SPM_FRAMEWORK_BITMAP(ret.a6);
+        if (flags & NS_HYP_FRAMEWORK_BITMAP_ENABLE)
+                notify->arch_map = PACK_NOTIFICATION_BITMAP(notify->arch_map,
+                                                            ret.a7);
 
         return 0;
 }
@@
         ffa_sched_recv_cb callback;
         void *cb_data;
         rwlock_t rw_lock;
+        struct ffa_device *dev;
+        struct list_head node;
 };
 
 static void __do_sched_recv_cb(u16 part_id, u16 vcpu, bool is_per_vcpu)
 {
-        struct ffa_dev_part_info *partition;
+        struct ffa_dev_part_info *partition = NULL, *tmp;
         ffa_sched_recv_cb callback;
+        struct list_head *phead;
         void *cb_data;
 
-        partition = xa_load(&drv_info->partition_info, part_id);
-        if (!partition) {
+        phead = xa_load(&drv_info->partition_info, part_id);
+        if (!phead) {
                 pr_err("%s: Invalid partition ID 0x%x\n", __func__, part_id);
                 return;
         }
 
-        read_lock(&partition->rw_lock);
-        callback = partition->callback;
-        cb_data = partition->cb_data;
-        read_unlock(&partition->rw_lock);
+        list_for_each_entry_safe(partition, tmp, phead, node) {
+                read_lock(&partition->rw_lock);
+                callback = partition->callback;
+                cb_data = partition->cb_data;
+                read_unlock(&partition->rw_lock);
 
-        if (callback)
-                callback(vcpu, is_per_vcpu, cb_data);
+                if (callback)
+                        callback(vcpu, is_per_vcpu, cb_data);
+        }
 }
 
 static void ffa_notification_info_get(void)
@@
                       }, &ret);
 
         if (ret.a0 != FFA_FN_NATIVE(SUCCESS) && ret.a0 != FFA_SUCCESS) {
-                if (ret.a2 != FFA_RET_NO_DATA)
+                if ((s32)ret.a2 != FFA_RET_NO_DATA)
                         pr_err("Notification Info fetch failed: 0x%lx (0x%lx)",
                                ret.a0, ret.a2);
                 return;
@@
                 }
 
                 /* Per vCPU Notification */
-                for (idx = 0; idx < ids_count[list]; idx++) {
+                for (idx = 1; idx < ids_count[list]; idx++) {
                         if (ids_processed >= max_ids - 1)
                                 break;
@@
 
 static int ffa_indirect_msg_send(struct ffa_device *dev, void *buf, size_t sz)
 {
-        return ffa_msg_send2(drv_info->vm_id, dev->vm_id, buf, sz);
+        return ffa_msg_send2(dev, drv_info->vm_id, buf, sz);
 }
 
-static int ffa_sync_send_receive2(struct ffa_device *dev, const uuid_t *uuid,
+static int ffa_sync_send_receive2(struct ffa_device *dev,
                                   struct ffa_send_direct_data2 *data)
 {
         if (!drv_info->msg_direct_req2_supp)
                 return -EOPNOTSUPP;
 
         return ffa_msg_send_direct_req2(drv_info->vm_id, dev->vm_id,
-                                        uuid, data);
+                                        &dev->uuid, data);
 }
 
 static int ffa_memory_share(struct ffa_mem_ops_args *args)
@@
         return ffa_memory_ops(FFA_MEM_LEND, args);
 }
 
-#define FFA_SECURE_PARTITION_ID_FLAG    BIT(15)
-
 #define ffa_notifications_disabled()    (!drv_info->notif_enabled)
-
-enum notify_type {
-        NON_SECURE_VM,
-        SECURE_PARTITION,
-        FRAMEWORK,
-};
 
 struct notifier_cb_info {
         struct hlist_node hnode;
+        struct ffa_device *dev;
+        ffa_fwk_notifier_cb fwk_cb;
         ffa_notifier_cb cb;
         void *cb_data;
-        enum notify_type type;
 };
 
-static int ffa_sched_recv_cb_update(u16 part_id, ffa_sched_recv_cb callback,
-                                    void *cb_data, bool is_registration)
+static int
+ffa_sched_recv_cb_update(struct ffa_device *dev, ffa_sched_recv_cb callback,
+                         void *cb_data, bool is_registration)
 {
-        struct ffa_dev_part_info *partition;
+        struct ffa_dev_part_info *partition = NULL, *tmp;
+        struct list_head *phead;
         bool cb_valid;
 
         if (ffa_notifications_disabled())
                 return -EOPNOTSUPP;
 
-        partition = xa_load(&drv_info->partition_info, part_id);
+        phead = xa_load(&drv_info->partition_info, dev->vm_id);
+        if (!phead) {
+                pr_err("%s: Invalid partition ID 0x%x\n", __func__, dev->vm_id);
+                return -EINVAL;
+        }
+
+        list_for_each_entry_safe(partition, tmp, phead, node)
+                if (partition->dev == dev)
+                        break;
+
         if (!partition) {
-                pr_err("%s: Invalid partition ID 0x%x\n", __func__, part_id);
+                pr_err("%s: No such partition ID 0x%x\n", __func__, dev->vm_id);
                 return -EINVAL;
         }
@@
 static int ffa_sched_recv_cb_register(struct ffa_device *dev,
                                       ffa_sched_recv_cb cb, void *cb_data)
 {
-        return ffa_sched_recv_cb_update(dev->vm_id, cb, cb_data, true);
+        return ffa_sched_recv_cb_update(dev, cb, cb_data, true);
 }
 
 static int ffa_sched_recv_cb_unregister(struct ffa_device *dev)
 {
-        return ffa_sched_recv_cb_update(dev->vm_id, NULL, NULL, false);
+        return ffa_sched_recv_cb_update(dev, NULL, NULL, false);
 }
 
 static int ffa_notification_bind(u16 dst_id, u64 bitmap, u32 flags)
@@
         return ffa_notification_bind_common(dst_id, bitmap, 0, false);
 }
 
-/* Should be called while the notify_lock is taken */
+static enum notify_type ffa_notify_type_get(u16 vm_id)
+{
+        if (vm_id & FFA_SECURE_PARTITION_ID_FLAG)
+                return SECURE_PARTITION;
+        else
+                return NON_SECURE_VM;
+}
+
+/* notifier_hnode_get* should be called with notify_lock held */
 static struct notifier_cb_info *
-notifier_hash_node_get(u16 notify_id, enum notify_type type)
+notifier_hnode_get_by_vmid(u16 notify_id, int vmid)
 {
         struct notifier_cb_info *node;
 
         hash_for_each_possible(drv_info->notifier_hash, node, hnode, notify_id)
-                if (type == node->type)
+                if (node->fwk_cb && vmid == node->dev->vm_id)
+                        return node;
+
+        return NULL;
+}
+
+static struct notifier_cb_info *
+notifier_hnode_get_by_vmid_uuid(u16 notify_id, int vmid, const uuid_t *uuid)
+{
+        struct notifier_cb_info *node;
+
+        if (uuid_is_null(uuid))
+                return notifier_hnode_get_by_vmid(notify_id, vmid);
+
+        hash_for_each_possible(drv_info->notifier_hash, node, hnode, notify_id)
+                if (node->fwk_cb && vmid == node->dev->vm_id &&
+                    uuid_equal(&node->dev->uuid, uuid))
+                        return node;
+
+        return NULL;
+}
+
+static struct notifier_cb_info *
+notifier_hnode_get_by_type(u16 notify_id, enum notify_type type)
+{
+        struct notifier_cb_info *node;
+
+        hash_for_each_possible(drv_info->notifier_hash, node, hnode, notify_id)
+                if (node->cb && type == ffa_notify_type_get(node->dev->vm_id))
                         return node;
 
         return NULL;
 }
 
 static int
-update_notifier_cb(int notify_id, enum notify_type type, ffa_notifier_cb cb,
-                   void *cb_data, bool is_registration)
+update_notifier_cb(struct ffa_device *dev, int notify_id, void *cb,
+                   void *cb_data, bool is_registration, bool is_framework)
 {
         struct notifier_cb_info *cb_info = NULL;
+        enum notify_type type = ffa_notify_type_get(dev->vm_id);
         bool cb_found;
 
-        cb_info = notifier_hash_node_get(notify_id, type);
+        if (is_framework)
+                cb_info = notifier_hnode_get_by_vmid_uuid(notify_id, dev->vm_id,
+                                                          &dev->uuid);
+        else
+                cb_info = notifier_hnode_get_by_type(notify_id, type);
+
         cb_found = !!cb_info;
 
         if (!(is_registration ^ cb_found))
@@
                 if (!cb_info)
                         return -ENOMEM;
 
-                cb_info->type = type;
-                cb_info->cb = cb;
+                cb_info->dev = dev;
                 cb_info->cb_data = cb_data;
+                if (is_framework)
+                        cb_info->fwk_cb = cb;
+                else
+                        cb_info->cb = cb;
 
                 hash_add(drv_info->notifier_hash, &cb_info->hnode, notify_id);
         } else {
@@
         return 0;
 }
 
-static enum notify_type ffa_notify_type_get(u16 vm_id)
-{
-        if (vm_id & FFA_SECURE_PARTITION_ID_FLAG)
-                return SECURE_PARTITION;
-        else
-                return NON_SECURE_VM;
-}
-
-static int ffa_notify_relinquish(struct ffa_device *dev, int notify_id)
+static int __ffa_notify_relinquish(struct ffa_device *dev, int notify_id,
+                                   bool is_framework)
 {
         int rc;
-        enum notify_type type = ffa_notify_type_get(dev->vm_id);
 
         if (ffa_notifications_disabled())
                 return -EOPNOTSUPP;
@@
 
         mutex_lock(&drv_info->notify_lock);
 
-        rc = update_notifier_cb(notify_id, type, NULL, NULL, false);
+        rc = update_notifier_cb(dev, notify_id, NULL, NULL, false,
+                                is_framework);
         if (rc) {
                 pr_err("Could not unregister notification callback\n");
                 mutex_unlock(&drv_info->notify_lock);
                 return rc;
         }
 
-        rc = ffa_notification_unbind(dev->vm_id, BIT(notify_id));
+        if (!is_framework)
+                rc = ffa_notification_unbind(dev->vm_id, BIT(notify_id));
 
+        mutex_unlock(&drv_info->notify_lock);
+
+        return rc;
+}
+
+static int ffa_notify_relinquish(struct ffa_device *dev, int notify_id)
+{
+        return __ffa_notify_relinquish(dev, notify_id, false);
+}
+
+static int ffa_fwk_notify_relinquish(struct ffa_device *dev, int notify_id)
+{
+        return __ffa_notify_relinquish(dev, notify_id, true);
+}
+
+static int __ffa_notify_request(struct ffa_device *dev, bool is_per_vcpu,
+                                void *cb, void *cb_data,
+                                int notify_id, bool is_framework)
+{
+        int rc;
+        u32 flags = 0;
+
+        if (ffa_notifications_disabled())
+                return -EOPNOTSUPP;
+
+        if (notify_id >= FFA_MAX_NOTIFICATIONS)
+                return -EINVAL;
+
+        mutex_lock(&drv_info->notify_lock);
+
+        if (!is_framework) {
+                if (is_per_vcpu)
+                        flags = PER_VCPU_NOTIFICATION_FLAG;
+
+                rc = ffa_notification_bind(dev->vm_id, BIT(notify_id), flags);
+                if (rc) {
+                        mutex_unlock(&drv_info->notify_lock);
+                        return rc;
+                }
+        }
+
+        rc = update_notifier_cb(dev, notify_id, cb, cb_data, true,
+                                is_framework);
+        if (rc) {
+                pr_err("Failed to register callback for %d - %d\n",
+                       notify_id, rc);
+                if (!is_framework)
+                        ffa_notification_unbind(dev->vm_id, BIT(notify_id));
+        }
         mutex_unlock(&drv_info->notify_lock);
 
         return rc;
@@
 static int ffa_notify_request(struct ffa_device *dev, bool is_per_vcpu,
                               ffa_notifier_cb cb, void *cb_data, int notify_id)
 {
-        int rc;
-        u32 flags = 0;
-        enum notify_type type = ffa_notify_type_get(dev->vm_id);
+        return __ffa_notify_request(dev, is_per_vcpu, cb, cb_data, notify_id,
+                                    false);
+}
 
-        if (ffa_notifications_disabled())
-                return -EOPNOTSUPP;
-
-        if (notify_id >= FFA_MAX_NOTIFICATIONS)
-                return -EINVAL;
-
-        mutex_lock(&drv_info->notify_lock);
-
-        if (is_per_vcpu)
-                flags = PER_VCPU_NOTIFICATION_FLAG;
-
-        rc = ffa_notification_bind(dev->vm_id, BIT(notify_id), flags);
-        if (rc) {
-                mutex_unlock(&drv_info->notify_lock);
-                return rc;
-        }
-
-        rc = update_notifier_cb(notify_id, type, cb, cb_data, true);
-        if (rc) {
-                pr_err("Failed to register callback for %d - %d\n",
-                       notify_id, rc);
-                ffa_notification_unbind(dev->vm_id, BIT(notify_id));
-        }
-        mutex_unlock(&drv_info->notify_lock);
-
-        return rc;
+static int
+ffa_fwk_notify_request(struct ffa_device *dev, ffa_fwk_notifier_cb cb,
+                       void *cb_data, int notify_id)
+{
+        return __ffa_notify_request(dev, false, cb, cb_data, notify_id, true);
 }
 
 static int ffa_notify_send(struct ffa_device *dev, int notify_id,
@@
                         continue;
 
                 mutex_lock(&drv_info->notify_lock);
-                cb_info = notifier_hash_node_get(notify_id, type);
+                cb_info = notifier_hnode_get_by_type(notify_id, type);
                 mutex_unlock(&drv_info->notify_lock);
 
                 if (cb_info && cb_info->cb)
@@
         }
 }
 
-static void notif_get_and_handle(void *unused)
+static void handle_fwk_notif_callbacks(u32 bitmap)
+{
+        void *buf;
+        uuid_t uuid;
+        int notify_id = 0, target;
+        struct ffa_indirect_msg_hdr *msg;
+        struct notifier_cb_info *cb_info = NULL;
+
+        /* Only one framework notification defined and supported for now */
+        if (!(bitmap & FRAMEWORK_NOTIFY_RX_BUFFER_FULL))
+                return;
+
+        mutex_lock(&drv_info->rx_lock);
+
+        msg = drv_info->rx_buffer;
+        buf = kmemdup((void *)msg + msg->offset, msg->size, GFP_KERNEL);
+        if (!buf) {
+                mutex_unlock(&drv_info->rx_lock);
+                return;
+        }
+
+        target = SENDER_ID(msg->send_recv_id);
+        if (msg->offset >= sizeof(*msg))
+                uuid_copy(&uuid, &msg->uuid);
+        else
+                uuid_copy(&uuid, &uuid_null);
+
+        mutex_unlock(&drv_info->rx_lock);
+
+        ffa_rx_release();
+
+        mutex_lock(&drv_info->notify_lock);
+        cb_info = notifier_hnode_get_by_vmid_uuid(notify_id, target, &uuid);
+        mutex_unlock(&drv_info->notify_lock);
+
+        if (cb_info && cb_info->fwk_cb)
+                cb_info->fwk_cb(notify_id, cb_info->cb_data, buf);
+        kfree(buf);
+}
+
+static void notif_get_and_handle(void *cb_data)
 {
         int rc;
-        struct ffa_notify_bitmaps bitmaps;
+        u32 flags;
+        struct ffa_drv_info *info = cb_data;
+        struct ffa_notify_bitmaps bitmaps = { 0 };
 
-        rc = ffa_notification_get(SECURE_PARTITION_BITMAP |
-                                  SPM_FRAMEWORK_BITMAP, &bitmaps);
+        if (info->vm_id == 0) /* Non secure physical instance */
+                flags = FFA_BITMAP_SECURE_ENABLE_MASK;
+        else
+                flags = FFA_BITMAP_ALL_ENABLE_MASK;
+
+        rc = ffa_notification_get(flags, &bitmaps);
         if (rc) {
                 pr_err("Failed to retrieve notifications with %d!\n", rc);
                 return;
         }
 
+        handle_fwk_notif_callbacks(SPM_FRAMEWORK_BITMAP(bitmaps.arch_map));
+        handle_fwk_notif_callbacks(NS_HYP_FRAMEWORK_BITMAP(bitmaps.arch_map));
         handle_notif_callbacks(bitmaps.vm_map, NON_SECURE_VM);
         handle_notif_callbacks(bitmaps.sp_map, SECURE_PARTITION);
-        handle_notif_callbacks(bitmaps.arch_map, FRAMEWORK);
 }
 
 static void
@@
         .sched_recv_cb_unregister = ffa_sched_recv_cb_unregister,
         .notify_request = ffa_notify_request,
         .notify_relinquish = ffa_notify_relinquish,
+        .fwk_notify_request = ffa_fwk_notify_request,
+        .fwk_notify_relinquish = ffa_fwk_notify_relinquish,
         .notify_send = ffa_notify_send,
 };
@@
         .notifier_call = ffa_bus_notifier,
 };
 
+static int ffa_xa_add_partition_info(struct ffa_device
```
*dev) 1388 + { 1389 + struct ffa_dev_part_info *info; 1390 + struct list_head *head, *phead; 1391 + int ret = -ENOMEM; 1392 + 1393 + phead = xa_load(&drv_info->partition_info, dev->vm_id); 1394 + if (phead) { 1395 + head = phead; 1396 + list_for_each_entry(info, head, node) { 1397 + if (info->dev == dev) { 1398 + pr_err("%s: duplicate dev %p part ID 0x%x\n", 1399 + __func__, dev, dev->vm_id); 1400 + return -EEXIST; 1401 + } 1402 + } 1403 + } 1404 + 1405 + info = kzalloc(sizeof(*info), GFP_KERNEL); 1406 + if (!info) 1407 + return ret; 1408 + 1409 + rwlock_init(&info->rw_lock); 1410 + info->dev = dev; 1411 + 1412 + if (!phead) { 1413 + phead = kzalloc(sizeof(*phead), GFP_KERNEL); 1414 + if (!phead) 1415 + goto free_out; 1416 + 1417 + INIT_LIST_HEAD(phead); 1418 + 1419 + ret = xa_insert(&drv_info->partition_info, dev->vm_id, phead, 1420 + GFP_KERNEL); 1421 + if (ret) { 1422 + pr_err("%s: failed to save part ID 0x%x Ret:%d\n", 1423 + __func__, dev->vm_id, ret); 1424 + goto free_out; 1425 + } 1426 + } 1427 + list_add(&info->node, phead); 1428 + return 0; 1429 + 1430 + free_out: 1431 + kfree(phead); 1432 + kfree(info); 1433 + return ret; 1434 + } 1435 + 1436 + static int ffa_setup_host_partition(int vm_id) 1437 + { 1438 + struct ffa_partition_info buf = { 0 }; 1439 + struct ffa_device *ffa_dev; 1440 + int ret; 1441 + 1442 + buf.id = vm_id; 1443 + ffa_dev = ffa_device_register(&buf, &ffa_drv_ops); 1444 + if (!ffa_dev) { 1445 + pr_err("%s: failed to register host partition ID 0x%x\n", 1446 + __func__, vm_id); 1447 + return -EINVAL; 1448 + } 1449 + 1450 + ret = ffa_xa_add_partition_info(ffa_dev); 1451 + if (ret) 1452 + return ret; 1453 + 1454 + if (ffa_notifications_disabled()) 1455 + return 0; 1456 + 1457 + ret = ffa_sched_recv_cb_update(ffa_dev, ffa_self_notif_handle, 1458 + drv_info, true); 1459 + if (ret) 1460 + pr_info("Failed to register driver sched callback %d\n", ret); 1461 + 1462 + return ret; 1463 + } 1464 + 1465 + static void ffa_partitions_cleanup(void) 1466 + 
{ 1467 + struct list_head *phead; 1468 + unsigned long idx; 1469 + 1470 + /* Clean up/free all registered devices */ 1471 + ffa_devices_unregister(); 1472 + 1473 + xa_for_each(&drv_info->partition_info, idx, phead) { 1474 + struct ffa_dev_part_info *info, *tmp; 1475 + 1476 + xa_erase(&drv_info->partition_info, idx); 1477 + list_for_each_entry_safe(info, tmp, phead, node) { 1478 + list_del(&info->node); 1479 + kfree(info); 1480 + } 1481 + kfree(phead); 1482 + } 1483 + 1484 + xa_destroy(&drv_info->partition_info); 1485 + } 1486 + 1584 1487 static int ffa_setup_partitions(void) 1585 1488 { 1586 1489 int count, idx, ret; 1587 1490 struct ffa_device *ffa_dev; 1588 - struct ffa_dev_part_info *info; 1589 1491 struct ffa_partition_info *pbuf, *tpbuf; 1590 1492 1591 1493 if (drv_info->version == FFA_VERSION_1_0) { ··· 1718 1422 !(tpbuf->properties & FFA_PARTITION_AARCH64_EXEC)) 1719 1423 ffa_mode_32bit_set(ffa_dev); 1720 1424 1721 - info = kzalloc(sizeof(*info), GFP_KERNEL); 1722 - if (!info) { 1425 + if (ffa_xa_add_partition_info(ffa_dev)) { 1723 1426 ffa_device_unregister(ffa_dev); 1724 1427 continue; 1725 - } 1726 - rwlock_init(&info->rw_lock); 1727 - ret = xa_insert(&drv_info->partition_info, tpbuf->id, 1728 - info, GFP_KERNEL); 1729 - if (ret) { 1730 - pr_err("%s: failed to save partition ID 0x%x - ret:%d\n", 1731 - __func__, tpbuf->id, ret); 1732 - ffa_device_unregister(ffa_dev); 1733 - kfree(info); 1734 1428 } 1735 1429 } 1736 1430 1737 1431 kfree(pbuf); 1738 1432 1739 - /* Allocate for the host */ 1740 - info = kzalloc(sizeof(*info), GFP_KERNEL); 1741 - if (!info) { 1742 - /* Already registered devices are freed on bus_exit */ 1743 - ffa_partitions_cleanup(); 1744 - return -ENOMEM; 1745 - } 1433 + /* 1434 + * Check if the host is already added as part of partition info 1435 + * No multiple UUID possible for the host, so just checking if 1436 + * there is an entry will suffice 1437 + */ 1438 + if (xa_load(&drv_info->partition_info, drv_info->vm_id)) 1439 + return 0; 
1746 1440 1747 - rwlock_init(&info->rw_lock); 1748 - ret = xa_insert(&drv_info->partition_info, drv_info->vm_id, 1749 - info, GFP_KERNEL); 1750 - if (ret) { 1751 - pr_err("%s: failed to save Host partition ID 0x%x - ret:%d. Abort.\n", 1752 - __func__, drv_info->vm_id, ret); 1753 - kfree(info); 1754 - /* Already registered devices are freed on bus_exit */ 1441 + /* Allocate for the host */ 1442 + ret = ffa_setup_host_partition(drv_info->vm_id); 1443 + if (ret) 1755 1444 ffa_partitions_cleanup(); 1756 - } 1757 1445 1758 1446 return ret; 1759 - } 1760 - 1761 - static void ffa_partitions_cleanup(void) 1762 - { 1763 - struct ffa_dev_part_info *info; 1764 - unsigned long idx; 1765 - 1766 - xa_for_each(&drv_info->partition_info, idx, info) { 1767 - xa_erase(&drv_info->partition_info, idx); 1768 - kfree(info); 1769 - } 1770 - 1771 - xa_destroy(&drv_info->partition_info); 1772 1447 } 1773 1448 1774 1449 /* FFA FEATURE IDs */ ··· 2044 1777 ffa_notifications_setup(); 2045 1778 2046 1779 ret = ffa_setup_partitions(); 2047 - if (ret) { 2048 - pr_err("failed to setup partitions\n"); 2049 - goto cleanup_notifs; 2050 - } 1780 + if (!ret) 1781 + return ret; 2051 1782 2052 - ret = ffa_sched_recv_cb_update(drv_info->vm_id, ffa_self_notif_handle, 2053 - drv_info, true); 2054 - if (ret) 2055 - pr_info("Failed to register driver sched callback %d\n", ret); 2056 - 2057 - return 0; 2058 - 2059 - cleanup_notifs: 1783 + pr_err("failed to setup partitions\n"); 2060 1784 ffa_notifications_cleanup(); 2061 1785 free_pages: 2062 1786 if (drv_info->tx_buffer)
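The notification plumbing above identifies each notification by a small integer ID that becomes one bit of a 64-bit bitmap passed to the bind/unbind ABI, with an optional per-vCPU delivery flag. A minimal userspace sketch of just that encoding step (the helper name `ffa_encode_bind` and the `FFA_BIT` macro are illustrative stand-ins for the kernel's `BIT()` usage, not real FF-A driver API):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-ins for the kernel's BIT() and the per-vCPU flag. */
#define FFA_BIT(n)                 (1ULL << (n))
#define PER_VCPU_NOTIFICATION_FLAG (1U << 0)
#define FFA_MAX_NOTIFICATIONS      64

/*
 * Encode the (bitmap, flags) pair a bind request carries: one bit per
 * notification ID, plus an optional per-vCPU delivery flag.
 */
static int ffa_encode_bind(int notify_id, bool is_per_vcpu,
			   uint64_t *bitmap, uint32_t *flags)
{
	if (notify_id < 0 || notify_id >= FFA_MAX_NOTIFICATIONS)
		return -1;

	*bitmap = FFA_BIT(notify_id);
	*flags = is_per_vcpu ? PER_VCPU_NOTIFICATION_FLAG : 0;
	return 0;
}
```

This mirrors why the driver range-checks `notify_id` against `FFA_MAX_NOTIFICATIONS` before calling `ffa_notification_bind()`: anything larger would not fit in the bitmap.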
+56 -13
drivers/firmware/arm_scmi/bus.c
···
 
 #include "common.h"
 
+#define SCMI_UEVENT_MODALIAS_FMT	"%s:%02x:%s"
+
 BLOCKING_NOTIFIER_HEAD(scmi_requested_devices_nh);
 EXPORT_SYMBOL_GPL(scmi_requested_devices_nh);
 
···
  * This helper let an SCMI driver request specific devices identified by the
  * @id_table to be created for each active SCMI instance.
  *
- * The requested device name MUST NOT be already existent for any protocol;
+ * The requested device name MUST NOT be already existent for this protocol;
  * at first the freshly requested @id_table is annotated in the IDR table
  * @scmi_requested_devices and then the requested device is advertised to any
  * registered party via the @scmi_requested_devices_nh notification chain.
···
 static int scmi_protocol_device_request(const struct scmi_device_id *id_table)
 {
 	int ret = 0;
-	unsigned int id = 0;
 	struct list_head *head, *phead = NULL;
 	struct scmi_requested_dev *rdev;
 
···
 	}
 
 	/*
-	 * Search for the matching protocol rdev list and then search
-	 * of any existent equally named device...fails if any duplicate found.
+	 * Find the matching protocol rdev list and then search of any
+	 * existent equally named device...fails if any duplicate found.
 	 */
 	mutex_lock(&scmi_requested_devices_mtx);
-	idr_for_each_entry(&scmi_requested_devices, head, id) {
-		if (!phead) {
-			/* A list found registered in the IDR is never empty */
-			rdev = list_first_entry(head, struct scmi_requested_dev,
-						node);
-			if (rdev->id_table->protocol_id ==
-			    id_table->protocol_id)
-				phead = head;
-		}
+	phead = idr_find(&scmi_requested_devices, id_table->protocol_id);
+	if (phead) {
+		head = phead;
 		list_for_each_entry(rdev, head, node) {
 			if (!strcmp(rdev->id_table->name, id_table->name)) {
 				pr_err("Ignoring duplicate request [%d] %s\n",
···
 		scmi_drv->remove(scmi_dev);
 }
 
+static int scmi_device_uevent(const struct device *dev, struct kobj_uevent_env *env)
+{
+	const struct scmi_device *scmi_dev = to_scmi_dev(dev);
+
+	return add_uevent_var(env, "MODALIAS=" SCMI_UEVENT_MODALIAS_FMT,
+			      dev_name(&scmi_dev->dev), scmi_dev->protocol_id,
+			      scmi_dev->name);
+}
+
+static ssize_t modalias_show(struct device *dev,
+			     struct device_attribute *attr, char *buf)
+{
+	struct scmi_device *scmi_dev = to_scmi_dev(dev);
+
+	return sysfs_emit(buf, SCMI_UEVENT_MODALIAS_FMT,
+			  dev_name(&scmi_dev->dev), scmi_dev->protocol_id,
+			  scmi_dev->name);
+}
+static DEVICE_ATTR_RO(modalias);
+
+static ssize_t protocol_id_show(struct device *dev,
+				struct device_attribute *attr, char *buf)
+{
+	struct scmi_device *scmi_dev = to_scmi_dev(dev);
+
+	return sprintf(buf, "0x%02x\n", scmi_dev->protocol_id);
+}
+static DEVICE_ATTR_RO(protocol_id);
+
+static ssize_t name_show(struct device *dev, struct device_attribute *attr,
+			 char *buf)
+{
+	struct scmi_device *scmi_dev = to_scmi_dev(dev);
+
+	return sprintf(buf, "%s\n", scmi_dev->name);
+}
+static DEVICE_ATTR_RO(name);
+
+static struct attribute *scmi_device_attributes_attrs[] = {
+	&dev_attr_protocol_id.attr,
+	&dev_attr_name.attr,
+	&dev_attr_modalias.attr,
+	NULL,
+};
+ATTRIBUTE_GROUPS(scmi_device_attributes);
+
 const struct bus_type scmi_bus_type = {
 	.name =	"scmi_protocol",
 	.match = scmi_dev_match,
 	.probe = scmi_dev_probe,
 	.remove = scmi_dev_remove,
+	.uevent = scmi_device_uevent,
+	.dev_groups = scmi_device_attributes_groups,
 };
 EXPORT_SYMBOL_GPL(scmi_bus_type);
 
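The new `SCMI_UEVENT_MODALIAS_FMT` is `"%s:%02x:%s"`, so both the uevent `MODALIAS` variable and the sysfs `modalias` attribute render as `<device-name>:<protocol-id-in-hex>:<protocol-name>`. A small sketch of that formatting in plain C (the helper `scmi_build_modalias` and the sample values are illustrative, not driver API):

```c
#include <stdio.h>

/* Same layout as the diff's SCMI_UEVENT_MODALIAS_FMT. */
#define SCMI_UEVENT_MODALIAS_FMT "%s:%02x:%s"

/* Build the modalias string from device name, protocol ID and proto name. */
static int scmi_build_modalias(char *buf, size_t len, const char *dev_name,
			       unsigned int protocol_id, const char *name)
{
	return snprintf(buf, len, SCMI_UEVENT_MODALIAS_FMT,
			dev_name, protocol_id, name);
}
```

For example, a hypothetical device named `scmi_dev.1` exposing protocol 0x15 as `perf` yields `scmi_dev.1:15:perf`, which is the string userspace module-matching keys on.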
-10
drivers/firmware/arm_scmi/driver.c
···
 	else if (db->width == 4)
 		SCMI_PROTO_FC_RING_DB(32);
 	else /* db->width == 8 */
-#ifdef CONFIG_64BIT
 		SCMI_PROTO_FC_RING_DB(64);
-#else
-	{
-		u64 val = 0;
-
-		if (db->mask)
-			val = ioread64_hi_lo(db->addr) & db->mask;
-		iowrite64_hi_lo(db->set | val, db->addr);
-	}
-#endif
 }
 
 /**
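The deleted `!CONFIG_64BIT` branch was a masked read-modify-write of a 64-bit doorbell: preserve the register bits selected by `db->mask`, OR in `db->set`, write back. A sketch of that update rule modeled on a plain variable instead of MMIO (the helper name `ring_db64` is illustrative; with `ioread64()` now used unconditionally, the dedicated fallback is no longer needed):

```c
#include <stdint.h>

/*
 * Compute the value the removed fallback would have written:
 * keep only the masked bits of the current value, then OR in the
 * doorbell's "set" bits. A zero mask means nothing is preserved.
 */
static uint64_t ring_db64(uint64_t cur, uint64_t mask, uint64_t set)
{
	uint64_t val = 0;

	if (mask)
		val = cur & mask;
	return set | val;
}
```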
+14
drivers/firmware/samsung/Kconfig
···
+# SPDX-License-Identifier: GPL-2.0-only
+
+config EXYNOS_ACPM_PROTOCOL
+	tristate "Exynos Alive Clock and Power Manager (ACPM) Message Protocol"
+	depends on ARCH_EXYNOS || COMPILE_TEST
+	depends on MAILBOX
+	help
+	  Alive Clock and Power Manager (ACPM) Message Protocol is defined for
+	  the purpose of communication between the ACPM firmware and masters
+	  (AP, AOC, ...). ACPM firmware operates on the Active Power Management
+	  (APM) module that handles overall power activities.
+
+	  This protocol driver provides interface for all the client drivers
+	  making use of the features offered by the APM.
+4
drivers/firmware/samsung/Makefile
···
+# SPDX-License-Identifier: GPL-2.0-only
+
+acpm-protocol-objs			:= exynos-acpm.o exynos-acpm-pmic.o
+obj-$(CONFIG_EXYNOS_ACPM_PROTOCOL)	+= acpm-protocol.o
+224
drivers/firmware/samsung/exynos-acpm-pmic.c
···
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright 2020 Samsung Electronics Co., Ltd.
+ * Copyright 2020 Google LLC.
+ * Copyright 2024 Linaro Ltd.
+ */
+#include <linux/bitfield.h>
+#include <linux/firmware/samsung/exynos-acpm-protocol.h>
+#include <linux/ktime.h>
+#include <linux/types.h>
+
+#include "exynos-acpm.h"
+#include "exynos-acpm-pmic.h"
+
+#define ACPM_PMIC_CHANNEL		GENMASK(15, 12)
+#define ACPM_PMIC_TYPE			GENMASK(11, 8)
+#define ACPM_PMIC_REG			GENMASK(7, 0)
+
+#define ACPM_PMIC_RETURN		GENMASK(31, 24)
+#define ACPM_PMIC_MASK			GENMASK(23, 16)
+#define ACPM_PMIC_VALUE			GENMASK(15, 8)
+#define ACPM_PMIC_FUNC			GENMASK(7, 0)
+
+#define ACPM_PMIC_BULK_SHIFT		8
+#define ACPM_PMIC_BULK_MASK		GENMASK(7, 0)
+#define ACPM_PMIC_BULK_MAX_COUNT	8
+
+enum exynos_acpm_pmic_func {
+	ACPM_PMIC_READ,
+	ACPM_PMIC_WRITE,
+	ACPM_PMIC_UPDATE,
+	ACPM_PMIC_BULK_READ,
+	ACPM_PMIC_BULK_WRITE,
+};
+
+static inline u32 acpm_pmic_set_bulk(u32 data, unsigned int i)
+{
+	return (data & ACPM_PMIC_BULK_MASK) << (ACPM_PMIC_BULK_SHIFT * i);
+}
+
+static inline u32 acpm_pmic_get_bulk(u32 data, unsigned int i)
+{
+	return (data >> (ACPM_PMIC_BULK_SHIFT * i)) & ACPM_PMIC_BULK_MASK;
+}
+
+static void acpm_pmic_set_xfer(struct acpm_xfer *xfer, u32 *cmd,
+			       unsigned int acpm_chan_id)
+{
+	xfer->txd = cmd;
+	xfer->rxd = cmd;
+	xfer->txlen = sizeof(cmd);
+	xfer->rxlen = sizeof(cmd);
+	xfer->acpm_chan_id = acpm_chan_id;
+}
+
+static void acpm_pmic_init_read_cmd(u32 cmd[4], u8 type, u8 reg, u8 chan)
+{
+	cmd[0] = FIELD_PREP(ACPM_PMIC_TYPE, type) |
+		 FIELD_PREP(ACPM_PMIC_REG, reg) |
+		 FIELD_PREP(ACPM_PMIC_CHANNEL, chan);
+	cmd[1] = FIELD_PREP(ACPM_PMIC_FUNC, ACPM_PMIC_READ);
+	cmd[3] = ktime_to_ms(ktime_get());
+}
+
+int acpm_pmic_read_reg(const struct acpm_handle *handle,
+		       unsigned int acpm_chan_id, u8 type, u8 reg, u8 chan,
+		       u8 *buf)
+{
+	struct acpm_xfer xfer;
+	u32 cmd[4] = {0};
+	int ret;
+
+	acpm_pmic_init_read_cmd(cmd, type, reg, chan);
+	acpm_pmic_set_xfer(&xfer, cmd, acpm_chan_id);
+
+	ret = acpm_do_xfer(handle, &xfer);
+	if (ret)
+		return ret;
+
+	*buf = FIELD_GET(ACPM_PMIC_VALUE, xfer.rxd[1]);
+
+	return FIELD_GET(ACPM_PMIC_RETURN, xfer.rxd[1]);
+}
+
+static void acpm_pmic_init_bulk_read_cmd(u32 cmd[4], u8 type, u8 reg, u8 chan,
+					 u8 count)
+{
+	cmd[0] = FIELD_PREP(ACPM_PMIC_TYPE, type) |
+		 FIELD_PREP(ACPM_PMIC_REG, reg) |
+		 FIELD_PREP(ACPM_PMIC_CHANNEL, chan);
+	cmd[1] = FIELD_PREP(ACPM_PMIC_FUNC, ACPM_PMIC_BULK_READ) |
+		 FIELD_PREP(ACPM_PMIC_VALUE, count);
+}
+
+int acpm_pmic_bulk_read(const struct acpm_handle *handle,
+			unsigned int acpm_chan_id, u8 type, u8 reg, u8 chan,
+			u8 count, u8 *buf)
+{
+	struct acpm_xfer xfer;
+	u32 cmd[4] = {0};
+	int i, ret;
+
+	if (count > ACPM_PMIC_BULK_MAX_COUNT)
+		return -EINVAL;
+
+	acpm_pmic_init_bulk_read_cmd(cmd, type, reg, chan, count);
+	acpm_pmic_set_xfer(&xfer, cmd, acpm_chan_id);
+
+	ret = acpm_do_xfer(handle, &xfer);
+	if (ret)
+		return ret;
+
+	ret = FIELD_GET(ACPM_PMIC_RETURN, xfer.rxd[1]);
+	if (ret)
+		return ret;
+
+	for (i = 0; i < count; i++) {
+		if (i < 4)
+			buf[i] = acpm_pmic_get_bulk(xfer.rxd[2], i);
+		else
+			buf[i] = acpm_pmic_get_bulk(xfer.rxd[3], i - 4);
+	}
+
+	return 0;
+}
+
+static void acpm_pmic_init_write_cmd(u32 cmd[4], u8 type, u8 reg, u8 chan,
+				     u8 value)
+{
+	cmd[0] = FIELD_PREP(ACPM_PMIC_TYPE, type) |
+		 FIELD_PREP(ACPM_PMIC_REG, reg) |
+		 FIELD_PREP(ACPM_PMIC_CHANNEL, chan);
+	cmd[1] = FIELD_PREP(ACPM_PMIC_FUNC, ACPM_PMIC_WRITE) |
+		 FIELD_PREP(ACPM_PMIC_VALUE, value);
+	cmd[3] = ktime_to_ms(ktime_get());
+}
+
+int acpm_pmic_write_reg(const struct acpm_handle *handle,
+			unsigned int acpm_chan_id, u8 type, u8 reg, u8 chan,
+			u8 value)
+{
+	struct acpm_xfer xfer;
+	u32 cmd[4] = {0};
+	int ret;
+
+	acpm_pmic_init_write_cmd(cmd, type, reg, chan, value);
+	acpm_pmic_set_xfer(&xfer, cmd, acpm_chan_id);
+
+	ret = acpm_do_xfer(handle, &xfer);
+	if (ret)
+		return ret;
+
+	return FIELD_GET(ACPM_PMIC_RETURN, xfer.rxd[1]);
+}
+
+static void acpm_pmic_init_bulk_write_cmd(u32 cmd[4], u8 type, u8 reg, u8 chan,
+					  u8 count, const u8 *buf)
+{
+	int i;
+
+	cmd[0] = FIELD_PREP(ACPM_PMIC_TYPE, type) |
+		 FIELD_PREP(ACPM_PMIC_REG, reg) |
+		 FIELD_PREP(ACPM_PMIC_CHANNEL, chan);
+	cmd[1] = FIELD_PREP(ACPM_PMIC_FUNC, ACPM_PMIC_BULK_WRITE) |
+		 FIELD_PREP(ACPM_PMIC_VALUE, count);
+
+	for (i = 0; i < count; i++) {
+		if (i < 4)
+			cmd[2] |= acpm_pmic_set_bulk(buf[i], i);
+		else
+			cmd[3] |= acpm_pmic_set_bulk(buf[i], i - 4);
+	}
+}
+
+int acpm_pmic_bulk_write(const struct acpm_handle *handle,
+			 unsigned int acpm_chan_id, u8 type, u8 reg, u8 chan,
+			 u8 count, const u8 *buf)
+{
+	struct acpm_xfer xfer;
+	u32 cmd[4] = {0};
+	int ret;
+
+	if (count > ACPM_PMIC_BULK_MAX_COUNT)
+		return -EINVAL;
+
+	acpm_pmic_init_bulk_write_cmd(cmd, type, reg, chan, count, buf);
+	acpm_pmic_set_xfer(&xfer, cmd, acpm_chan_id);
+
+	ret = acpm_do_xfer(handle, &xfer);
+	if (ret)
+		return ret;
+
+	return FIELD_GET(ACPM_PMIC_RETURN, xfer.rxd[1]);
+}
+
+static void acpm_pmic_init_update_cmd(u32 cmd[4], u8 type, u8 reg, u8 chan,
+				      u8 value, u8 mask)
+{
+	cmd[0] = FIELD_PREP(ACPM_PMIC_TYPE, type) |
+		 FIELD_PREP(ACPM_PMIC_REG, reg) |
+		 FIELD_PREP(ACPM_PMIC_CHANNEL, chan);
+	cmd[1] = FIELD_PREP(ACPM_PMIC_FUNC, ACPM_PMIC_UPDATE) |
+		 FIELD_PREP(ACPM_PMIC_VALUE, value) |
+		 FIELD_PREP(ACPM_PMIC_MASK, mask);
+	cmd[3] = ktime_to_ms(ktime_get());
+}
+
+int acpm_pmic_update_reg(const struct acpm_handle *handle,
+			 unsigned int acpm_chan_id, u8 type, u8 reg, u8 chan,
+			 u8 value, u8 mask)
+{
+	struct acpm_xfer xfer;
+	u32 cmd[4] = {0};
+	int ret;
+
+	acpm_pmic_init_update_cmd(cmd, type, reg, chan, value, mask);
+	acpm_pmic_set_xfer(&xfer, cmd, acpm_chan_id);
+
+	ret = acpm_do_xfer(handle, &xfer);
+	if (ret)
+		return ret;
+
+	return FIELD_GET(ACPM_PMIC_RETURN, xfer.rxd[1]);
+}
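The PMIC command layout above packs channel, type and register into the first 32-bit word via `FIELD_PREP` against `GENMASK(15, 12)`, `GENMASK(11, 8)` and `GENMASK(7, 0)`, and the bulk helpers pack one byte per 8-bit lane. A standalone sketch of the same packing with explicit shifts (`acpm_pmic_cmd0` and `set_bulk` are illustrative names, not the driver's API):

```c
#include <stdint.h>

/* Field positions of the first ACPM PMIC command word, per the diff. */
#define ACPM_PMIC_CHANNEL_SHIFT	12	/* GENMASK(15, 12) */
#define ACPM_PMIC_TYPE_SHIFT	8	/* GENMASK(11, 8) */
#define ACPM_PMIC_REG_SHIFT	0	/* GENMASK(7, 0) */

/* Pack (type, reg, chan) the way acpm_pmic_init_*_cmd() builds cmd[0]. */
static uint32_t acpm_pmic_cmd0(uint8_t type, uint8_t reg, uint8_t chan)
{
	return ((uint32_t)(chan & 0xf) << ACPM_PMIC_CHANNEL_SHIFT) |
	       ((uint32_t)(type & 0xf) << ACPM_PMIC_TYPE_SHIFT) |
	       ((uint32_t)reg << ACPM_PMIC_REG_SHIFT);
}

/* Bulk lane packing: one byte per 8-bit lane of a 32-bit word. */
static uint32_t set_bulk(uint32_t data, unsigned int i)
{
	return (data & 0xff) << (8 * i);
}
```

With up to eight bulk bytes, lanes 0-3 land in `cmd[2]` and lanes 4-7 in `cmd[3]`, which is exactly the `i < 4` split in `acpm_pmic_bulk_read()`/`acpm_pmic_init_bulk_write_cmd()`.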
+29
drivers/firmware/samsung/exynos-acpm-pmic.h
···
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright 2020 Samsung Electronics Co., Ltd.
+ * Copyright 2020 Google LLC.
+ * Copyright 2024 Linaro Ltd.
+ */
+#ifndef __EXYNOS_ACPM_PMIC_H__
+#define __EXYNOS_ACPM_PMIC_H__
+
+#include <linux/types.h>
+
+struct acpm_handle;
+
+int acpm_pmic_read_reg(const struct acpm_handle *handle,
+		       unsigned int acpm_chan_id, u8 type, u8 reg, u8 chan,
+		       u8 *buf);
+int acpm_pmic_bulk_read(const struct acpm_handle *handle,
+			unsigned int acpm_chan_id, u8 type, u8 reg, u8 chan,
+			u8 count, u8 *buf);
+int acpm_pmic_write_reg(const struct acpm_handle *handle,
+			unsigned int acpm_chan_id, u8 type, u8 reg, u8 chan,
+			u8 value);
+int acpm_pmic_bulk_write(const struct acpm_handle *handle,
+			 unsigned int acpm_chan_id, u8 type, u8 reg, u8 chan,
+			 u8 count, const u8 *buf);
+int acpm_pmic_update_reg(const struct acpm_handle *handle,
+			 unsigned int acpm_chan_id, u8 type, u8 reg, u8 chan,
+			 u8 value, u8 mask);
+#endif /* __EXYNOS_ACPM_PMIC_H__ */
+769
drivers/firmware/samsung/exynos-acpm.c
···
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright 2020 Samsung Electronics Co., Ltd.
+ * Copyright 2020 Google LLC.
+ * Copyright 2024 Linaro Ltd.
+ */
+
+#include <linux/bitfield.h>
+#include <linux/bitmap.h>
+#include <linux/bits.h>
+#include <linux/cleanup.h>
+#include <linux/container_of.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/firmware/samsung/exynos-acpm-protocol.h>
+#include <linux/io.h>
+#include <linux/iopoll.h>
+#include <linux/mailbox/exynos-message.h>
+#include <linux/mailbox_client.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/math.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/of_platform.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+
+#include "exynos-acpm.h"
+#include "exynos-acpm-pmic.h"
+
+#define ACPM_PROTOCOL_SEQNUM		GENMASK(21, 16)
+
+/* The unit of counter is 20 us. 5000 * 20 = 100 ms */
+#define ACPM_POLL_TIMEOUT		5000
+#define ACPM_TX_TIMEOUT_US		500000
+
+#define ACPM_GS101_INITDATA_BASE	0xa000
+
+/**
+ * struct acpm_shmem - shared memory configuration information.
+ * @reserved:	unused fields.
+ * @chans:	offset to array of struct acpm_chan_shmem.
+ * @reserved1:	unused fields.
+ * @num_chans:	number of channels.
+ */
+struct acpm_shmem {
+	u32 reserved[2];
+	u32 chans;
+	u32 reserved1[3];
+	u32 num_chans;
+};
+
+/**
+ * struct acpm_chan_shmem - descriptor of a shared memory channel.
+ *
+ * @id:			channel ID.
+ * @reserved:		unused fields.
+ * @rx_rear:		rear pointer of APM RX queue (TX for AP).
+ * @rx_front:		front pointer of APM RX queue (TX for AP).
+ * @rx_base:		base address of APM RX queue (TX for AP).
+ * @reserved1:		unused fields.
+ * @tx_rear:		rear pointer of APM TX queue (RX for AP).
+ * @tx_front:		front pointer of APM TX queue (RX for AP).
+ * @tx_base:		base address of APM TX queue (RX for AP).
+ * @qlen:		queue length. Applies to both TX/RX queues.
+ * @mlen:		message length. Applies to both TX/RX queues.
+ * @reserved2:		unused fields.
+ * @poll_completion:	true when the channel works on polling.
+ */
+struct acpm_chan_shmem {
+	u32 id;
+	u32 reserved[3];
+	u32 rx_rear;
+	u32 rx_front;
+	u32 rx_base;
+	u32 reserved1[3];
+	u32 tx_rear;
+	u32 tx_front;
+	u32 tx_base;
+	u32 qlen;
+	u32 mlen;
+	u32 reserved2[2];
+	u32 poll_completion;
+};
+
+/**
+ * struct acpm_queue - exynos acpm queue.
+ *
+ * @rear:	rear address of the queue.
+ * @front:	front address of the queue.
+ * @base:	base address of the queue.
+ */
+struct acpm_queue {
+	void __iomem *rear;
+	void __iomem *front;
+	void __iomem *base;
+};
+
+/**
+ * struct acpm_rx_data - RX queue data.
+ *
+ * @cmd:	pointer to where the data shall be saved.
+ * @n_cmd:	number of 32-bit commands.
+ * @response:	true if the client expects the RX data.
+ */
+struct acpm_rx_data {
+	u32 *cmd;
+	size_t n_cmd;
+	bool response;
+};
+
+#define ACPM_SEQNUM_MAX		64
+
+/**
+ * struct acpm_chan - driver internal representation of a channel.
+ * @cl:		mailbox client.
+ * @chan:	mailbox channel.
+ * @acpm:	pointer to driver private data.
+ * @tx:		TX queue. The enqueue is done by the host.
+ *		- front index is written by the host.
+ *		- rear index is written by the firmware.
+ *
+ * @rx:		RX queue. The enqueue is done by the firmware.
+ *		- front index is written by the firmware.
+ *		- rear index is written by the host.
+ * @tx_lock:	protects TX queue.
+ * @rx_lock:	protects RX queue.
+ * @qlen:	queue length. Applies to both TX/RX queues.
+ * @mlen:	message length. Applies to both TX/RX queues.
+ * @seqnum:	sequence number of the last message enqueued on TX queue.
+ * @id:		channel ID.
+ * @poll_completion: indicates if the transfer needs to be polled for
+ *		completion or interrupt mode is used.
+ * @bitmap_seqnum: bitmap that tracks the messages on the TX/RX queues.
+ * @rx_data:	internal buffer used to drain the RX queue.
+ */
+struct acpm_chan {
+	struct mbox_client cl;
+	struct mbox_chan *chan;
+	struct acpm_info *acpm;
+	struct acpm_queue tx;
+	struct acpm_queue rx;
+	struct mutex tx_lock;
+	struct mutex rx_lock;
+
+	unsigned int qlen;
+	unsigned int mlen;
+	u8 seqnum;
+	u8 id;
+	bool poll_completion;
+
+	DECLARE_BITMAP(bitmap_seqnum, ACPM_SEQNUM_MAX - 1);
+	struct acpm_rx_data rx_data[ACPM_SEQNUM_MAX];
+};
+
+/**
+ * struct acpm_info - driver's private data.
+ * @shmem:	pointer to the SRAM configuration data.
+ * @sram_base:	base address of SRAM.
+ * @chans:	pointer to the ACPM channel parameters retrieved from SRAM.
+ * @dev:	pointer to the exynos-acpm device.
+ * @handle:	instance of acpm_handle to send to clients.
+ * @num_chans:	number of channels available for this controller.
+ */
+struct acpm_info {
+	struct acpm_shmem __iomem *shmem;
+	void __iomem *sram_base;
+	struct acpm_chan *chans;
+	struct device *dev;
+	struct acpm_handle handle;
+	u32 num_chans;
+};
+
+/**
+ * struct acpm_match_data - of_device_id data.
+ * @initdata_base:	offset in SRAM where the channels configuration resides.
+ */
+struct acpm_match_data {
+	loff_t initdata_base;
+};
+
+#define client_to_acpm_chan(c)	container_of(c, struct acpm_chan, cl)
+#define handle_to_acpm_info(h)	container_of(h, struct acpm_info, handle)
+
+/**
+ * acpm_get_rx() - get response from RX queue.
+ * @achan:	ACPM channel info.
+ * @xfer:	reference to the transfer to get response for.
+ *
+ * Return: 0 on success, -errno otherwise.
+ */
+static int acpm_get_rx(struct acpm_chan *achan, const struct acpm_xfer *xfer)
+{
+	u32 rx_front, rx_seqnum, tx_seqnum, seqnum;
+	const void __iomem *base, *addr;
+	struct acpm_rx_data *rx_data;
+	u32 i, val, mlen;
+	bool rx_set = false;
+
+	guard(mutex)(&achan->rx_lock);
+
+	rx_front = readl(achan->rx.front);
+	i = readl(achan->rx.rear);
+
+	/* Bail out if RX is empty. */
+	if (i == rx_front)
+		return 0;
+
+	base = achan->rx.base;
+	mlen = achan->mlen;
+
+	tx_seqnum = FIELD_GET(ACPM_PROTOCOL_SEQNUM, xfer->txd[0]);
+
+	/* Drain RX queue. */
+	do {
+		/* Read RX seqnum. */
+		addr = base + mlen * i;
+		val = readl(addr);
+
+		rx_seqnum = FIELD_GET(ACPM_PROTOCOL_SEQNUM, val);
+		if (!rx_seqnum)
+			return -EIO;
+		/*
+		 * mssg seqnum starts with value 1, whereas the driver considers
+		 * the first mssg at index 0.
+		 */
+		seqnum = rx_seqnum - 1;
+		rx_data = &achan->rx_data[seqnum];
+
+		if (rx_data->response) {
+			if (rx_seqnum == tx_seqnum) {
+				__ioread32_copy(xfer->rxd, addr,
+						xfer->rxlen / 4);
+				rx_set = true;
+				clear_bit(seqnum, achan->bitmap_seqnum);
+			} else {
+				/*
+				 * The RX data corresponds to another request.
+				 * Save the data to drain the queue, but don't
+				 * clear yet the bitmap. It will be cleared
+				 * after the response is copied to the request.
+				 */
+				__ioread32_copy(rx_data->cmd, addr,
+						xfer->rxlen / 4);
+			}
+		} else {
+			clear_bit(seqnum, achan->bitmap_seqnum);
+		}
+
+		i = (i + 1) % achan->qlen;
+	} while (i != rx_front);
+
+	/* We saved all responses, mark RX empty. */
+	writel(rx_front, achan->rx.rear);
+
+	/*
+	 * If the response was not in this iteration of the queue, check if the
+	 * RX data was previously saved.
+	 */
+	rx_data = &achan->rx_data[tx_seqnum - 1];
+	if (!rx_set && rx_data->response) {
+		rx_seqnum = FIELD_GET(ACPM_PROTOCOL_SEQNUM,
+				      rx_data->cmd[0]);
+
+		if (rx_seqnum == tx_seqnum) {
+			memcpy(xfer->rxd, rx_data->cmd, xfer->rxlen);
+			clear_bit(rx_seqnum - 1, achan->bitmap_seqnum);
+		}
+	}
+
+	return 0;
+}
+
+/**
+ * acpm_dequeue_by_polling() - RX dequeue by polling.
+ * @achan:	ACPM channel info.
+ * @xfer:	reference to the transfer being waited for.
+ *
+ * Return: 0 on success, -errno otherwise.
+ */
+static int acpm_dequeue_by_polling(struct acpm_chan *achan,
+				   const struct acpm_xfer *xfer)
+{
+	struct device *dev = achan->acpm->dev;
+	unsigned int cnt_20us = 0;
+	u32 seqnum;
+	int ret;
+
+	seqnum = FIELD_GET(ACPM_PROTOCOL_SEQNUM, xfer->txd[0]);
+
+	do {
+		ret = acpm_get_rx(achan, xfer);
+		if (ret)
+			return ret;
+
+		if (!test_bit(seqnum - 1, achan->bitmap_seqnum))
+			return 0;
+
+		/* Determined experimentally. */
+		usleep_range(20, 30);
+		cnt_20us++;
+	} while (cnt_20us < ACPM_POLL_TIMEOUT);
+
+	dev_err(dev, "Timeout! ch:%u s:%u bitmap:%lx, cnt_20us = %d.\n",
+		achan->id, seqnum, achan->bitmap_seqnum[0], cnt_20us);
+
+	return -ETIME;
+}
+
+/**
+ * acpm_wait_for_queue_slots() - wait for queue slots.
+ *
+ * @achan:		ACPM channel info.
+ * @next_tx_front:	next front index of the TX queue.
+ *
+ * Return: 0 on success, -errno otherwise.
+ */
+static int acpm_wait_for_queue_slots(struct acpm_chan *achan, u32 next_tx_front)
+{
+	u32 val, ret;
+
+	/*
+	 * Wait for RX front to keep up with TX front. Make sure there's at
+	 * least one element between them.
+	 */
+	ret = readl_poll_timeout(achan->rx.front, val, next_tx_front != val, 0,
+				 ACPM_TX_TIMEOUT_US);
+	if (ret) {
+		dev_err(achan->acpm->dev, "RX front can not keep up with TX front.\n");
+		return ret;
+	}
+
+	ret = readl_poll_timeout(achan->tx.rear, val, next_tx_front != val, 0,
+				 ACPM_TX_TIMEOUT_US);
+	if (ret)
+		dev_err(achan->acpm->dev, "TX queue is full.\n");
+
+	return ret;
+}
+
+/**
+ * acpm_prepare_xfer() - prepare a transfer before writing the message to the
+ * TX queue.
+ * @achan:	ACPM channel info.
+ * @xfer:	reference to the transfer being prepared.
+ */
+static void acpm_prepare_xfer(struct acpm_chan *achan,
+			      const struct acpm_xfer *xfer)
+{
+	struct acpm_rx_data *rx_data;
+	u32 *txd = (u32 *)xfer->txd;
+
+	/* Prevent chan->seqnum from being re-used */
+	do {
+		if (++achan->seqnum == ACPM_SEQNUM_MAX)
+			achan->seqnum = 1;
+	} while (test_bit(achan->seqnum - 1, achan->bitmap_seqnum));
+
+	txd[0] |= FIELD_PREP(ACPM_PROTOCOL_SEQNUM, achan->seqnum);
+
+	/* Clear data for upcoming responses */
+	rx_data = &achan->rx_data[achan->seqnum - 1];
+	memset(rx_data->cmd, 0, sizeof(*rx_data->cmd) * rx_data->n_cmd);
+	if (xfer->rxd)
+		rx_data->response = true;
+
+	/* Flag the index based on seqnum. (seqnum: 1~63, bitmap: 0~62) */
+	set_bit(achan->seqnum - 1, achan->bitmap_seqnum);
+}
+
+/**
+ * acpm_wait_for_message_response - an helper to group all possible ways of
+ * waiting for a synchronous message response.
+ *
+ * @achan:	ACPM channel info.
+ * @xfer:	reference to the transfer being waited for.
+ *
+ * Return: 0 on success, -errno otherwise.
+ */
+static int acpm_wait_for_message_response(struct acpm_chan *achan,
+					  const struct acpm_xfer *xfer)
+{
+	/* Just polling mode supported for now. */
+	return acpm_dequeue_by_polling(achan, xfer);
+}
+
+/**
+ * acpm_do_xfer() - do one transfer.
+ * @handle:	pointer to the acpm handle.
+ * @xfer:	transfer to initiate and wait for response.
+ *
+ * Return: 0 on success, -errno otherwise.
+ */
+int acpm_do_xfer(const struct acpm_handle *handle, const struct acpm_xfer *xfer)
+{
+	struct acpm_info *acpm = handle_to_acpm_info(handle);
+	struct exynos_mbox_msg msg;
+	struct acpm_chan *achan;
+	u32 idx, tx_front;
+	int ret;
+
+	if (xfer->acpm_chan_id >= acpm->num_chans)
+		return -EINVAL;
+
+	achan = &acpm->chans[xfer->acpm_chan_id];
+
+	if (!xfer->txd || xfer->txlen > achan->mlen || xfer->rxlen > achan->mlen)
+		return -EINVAL;
+
+	if (!achan->poll_completion) {
+		dev_err(achan->acpm->dev, "Interrupt mode not supported\n");
+		return -EOPNOTSUPP;
+	}
+
+	scoped_guard(mutex, &achan->tx_lock) {
+		tx_front = readl(achan->tx.front);
+		idx = (tx_front + 1) % achan->qlen;
+
+		ret = acpm_wait_for_queue_slots(achan, idx);
+		if (ret)
+			return ret;
+
+		acpm_prepare_xfer(achan, xfer);
+
+		/* Write TX command. */
+		__iowrite32_copy(achan->tx.base + achan->mlen * tx_front,
+				 xfer->txd, xfer->txlen / 4);
+
+		/* Advance TX front.
*/ 432 + writel(idx, achan->tx.front); 433 + } 434 + 435 + msg.chan_id = xfer->acpm_chan_id; 436 + msg.chan_type = EXYNOS_MBOX_CHAN_TYPE_DOORBELL; 437 + ret = mbox_send_message(achan->chan, (void *)&msg); 438 + if (ret < 0) 439 + return ret; 440 + 441 + ret = acpm_wait_for_message_response(achan, xfer); 442 + 443 + /* 444 + * NOTE: we might prefer not to need the mailbox ticker to manage the 445 + * transfer queueing since the protocol layer queues things by itself. 446 + * Unfortunately, we have to kick the mailbox framework after we have 447 + * received our message. 448 + */ 449 + mbox_client_txdone(achan->chan, ret); 450 + 451 + return ret; 452 + } 453 + 454 + /** 455 + * acpm_chan_shmem_get_params() - get channel parameters and addresses of the 456 + * TX/RX queues. 457 + * @achan: ACPM channel info. 458 + * @chan_shmem: __iomem pointer to a channel described in shared memory. 459 + */ 460 + static void acpm_chan_shmem_get_params(struct acpm_chan *achan, 461 + struct acpm_chan_shmem __iomem *chan_shmem) 462 + { 463 + void __iomem *base = achan->acpm->sram_base; 464 + struct acpm_queue *rx = &achan->rx; 465 + struct acpm_queue *tx = &achan->tx; 466 + 467 + achan->mlen = readl(&chan_shmem->mlen); 468 + achan->poll_completion = readl(&chan_shmem->poll_completion); 469 + achan->id = readl(&chan_shmem->id); 470 + achan->qlen = readl(&chan_shmem->qlen); 471 + 472 + tx->base = base + readl(&chan_shmem->rx_base); 473 + tx->rear = base + readl(&chan_shmem->rx_rear); 474 + tx->front = base + readl(&chan_shmem->rx_front); 475 + 476 + rx->base = base + readl(&chan_shmem->tx_base); 477 + rx->rear = base + readl(&chan_shmem->tx_rear); 478 + rx->front = base + readl(&chan_shmem->tx_front); 479 + 480 + dev_vdbg(achan->acpm->dev, "ID = %d poll = %d, mlen = %d, qlen = %d\n", 481 + achan->id, achan->poll_completion, achan->mlen, achan->qlen); 482 + } 483 + 484 + /** 485 + * acpm_achan_alloc_cmds() - allocate buffers for retrieving data from the ACPM 486 + * firmware. 
487 + * @achan: ACPM channel info. 488 + * 489 + * Return: 0 on success, -errno otherwise. 490 + */ 491 + static int acpm_achan_alloc_cmds(struct acpm_chan *achan) 492 + { 493 + struct device *dev = achan->acpm->dev; 494 + struct acpm_rx_data *rx_data; 495 + size_t cmd_size, n_cmd; 496 + int i; 497 + 498 + if (achan->mlen == 0) 499 + return 0; 500 + 501 + cmd_size = sizeof(*(achan->rx_data[0].cmd)); 502 + n_cmd = DIV_ROUND_UP_ULL(achan->mlen, cmd_size); 503 + 504 + for (i = 0; i < ACPM_SEQNUM_MAX; i++) { 505 + rx_data = &achan->rx_data[i]; 506 + rx_data->n_cmd = n_cmd; 507 + rx_data->cmd = devm_kcalloc(dev, n_cmd, cmd_size, GFP_KERNEL); 508 + if (!rx_data->cmd) 509 + return -ENOMEM; 510 + } 511 + 512 + return 0; 513 + } 514 + 515 + /** 516 + * acpm_free_mbox_chans() - free mailbox channels. 517 + * @acpm: pointer to driver data. 518 + */ 519 + static void acpm_free_mbox_chans(struct acpm_info *acpm) 520 + { 521 + int i; 522 + 523 + for (i = 0; i < acpm->num_chans; i++) 524 + if (!IS_ERR_OR_NULL(acpm->chans[i].chan)) 525 + mbox_free_channel(acpm->chans[i].chan); 526 + } 527 + 528 + /** 529 + * acpm_channels_init() - initialize channels based on the configuration data in 530 + * the shared memory. 531 + * @acpm: pointer to driver data. 532 + * 533 + * Return: 0 on success, -errno otherwise. 
534 + */ 535 + static int acpm_channels_init(struct acpm_info *acpm) 536 + { 537 + struct acpm_shmem __iomem *shmem = acpm->shmem; 538 + struct acpm_chan_shmem __iomem *chans_shmem; 539 + struct device *dev = acpm->dev; 540 + int i, ret; 541 + 542 + acpm->num_chans = readl(&shmem->num_chans); 543 + acpm->chans = devm_kcalloc(dev, acpm->num_chans, sizeof(*acpm->chans), 544 + GFP_KERNEL); 545 + if (!acpm->chans) 546 + return -ENOMEM; 547 + 548 + chans_shmem = acpm->sram_base + readl(&shmem->chans); 549 + 550 + for (i = 0; i < acpm->num_chans; i++) { 551 + struct acpm_chan_shmem __iomem *chan_shmem = &chans_shmem[i]; 552 + struct acpm_chan *achan = &acpm->chans[i]; 553 + struct mbox_client *cl = &achan->cl; 554 + 555 + achan->acpm = acpm; 556 + 557 + acpm_chan_shmem_get_params(achan, chan_shmem); 558 + 559 + ret = acpm_achan_alloc_cmds(achan); 560 + if (ret) 561 + return ret; 562 + 563 + mutex_init(&achan->rx_lock); 564 + mutex_init(&achan->tx_lock); 565 + 566 + cl->dev = dev; 567 + 568 + achan->chan = mbox_request_channel(cl, 0); 569 + if (IS_ERR(achan->chan)) { 570 + acpm_free_mbox_chans(acpm); 571 + return PTR_ERR(achan->chan); 572 + } 573 + } 574 + 575 + return 0; 576 + } 577 + 578 + /** 579 + * acpm_setup_ops() - setup the operations structures. 580 + * @acpm: pointer to the driver data. 
581 + */ 582 + static void acpm_setup_ops(struct acpm_info *acpm) 583 + { 584 + struct acpm_pmic_ops *pmic_ops = &acpm->handle.ops.pmic_ops; 585 + 586 + pmic_ops->read_reg = acpm_pmic_read_reg; 587 + pmic_ops->bulk_read = acpm_pmic_bulk_read; 588 + pmic_ops->write_reg = acpm_pmic_write_reg; 589 + pmic_ops->bulk_write = acpm_pmic_bulk_write; 590 + pmic_ops->update_reg = acpm_pmic_update_reg; 591 + } 592 + 593 + static int acpm_probe(struct platform_device *pdev) 594 + { 595 + const struct acpm_match_data *match_data; 596 + struct device *dev = &pdev->dev; 597 + struct device_node *shmem; 598 + struct acpm_info *acpm; 599 + resource_size_t size; 600 + struct resource res; 601 + int ret; 602 + 603 + acpm = devm_kzalloc(dev, sizeof(*acpm), GFP_KERNEL); 604 + if (!acpm) 605 + return -ENOMEM; 606 + 607 + shmem = of_parse_phandle(dev->of_node, "shmem", 0); 608 + ret = of_address_to_resource(shmem, 0, &res); 609 + of_node_put(shmem); 610 + if (ret) 611 + return dev_err_probe(dev, ret, 612 + "Failed to get shared memory.\n"); 613 + 614 + size = resource_size(&res); 615 + acpm->sram_base = devm_ioremap(dev, res.start, size); 616 + if (!acpm->sram_base) 617 + return dev_err_probe(dev, -ENOMEM, 618 + "Failed to ioremap shared memory.\n"); 619 + 620 + match_data = of_device_get_match_data(dev); 621 + if (!match_data) 622 + return dev_err_probe(dev, -EINVAL, 623 + "Failed to get match data.\n"); 624 + 625 + acpm->shmem = acpm->sram_base + match_data->initdata_base; 626 + acpm->dev = dev; 627 + 628 + ret = acpm_channels_init(acpm); 629 + if (ret) 630 + return ret; 631 + 632 + acpm_setup_ops(acpm); 633 + 634 + platform_set_drvdata(pdev, acpm); 635 + 636 + return 0; 637 + } 638 + 639 + /** 640 + * acpm_handle_put() - release the handle acquired by acpm_get_by_phandle. 641 + * @handle: Handle acquired by acpm_get_by_phandle. 
642 + */ 643 + static void acpm_handle_put(const struct acpm_handle *handle) 644 + { 645 + struct acpm_info *acpm = handle_to_acpm_info(handle); 646 + struct device *dev = acpm->dev; 647 + 648 + module_put(dev->driver->owner); 649 + /* Drop reference taken with of_find_device_by_node(). */ 650 + put_device(dev); 651 + } 652 + 653 + /** 654 + * devm_acpm_release() - devres release method. 655 + * @dev: pointer to device. 656 + * @res: pointer to resource. 657 + */ 658 + static void devm_acpm_release(struct device *dev, void *res) 659 + { 660 + acpm_handle_put(*(struct acpm_handle **)res); 661 + } 662 + 663 + /** 664 + * acpm_get_by_phandle() - get the ACPM handle using DT phandle. 665 + * @dev: device pointer requesting ACPM handle. 666 + * @property: property name containing phandle on ACPM node. 667 + * 668 + * Return: pointer to handle on success, ERR_PTR(-errno) otherwise. 669 + */ 670 + static const struct acpm_handle *acpm_get_by_phandle(struct device *dev, 671 + const char *property) 672 + { 673 + struct platform_device *pdev; 674 + struct device_node *acpm_np; 675 + struct device_link *link; 676 + struct acpm_info *acpm; 677 + 678 + acpm_np = of_parse_phandle(dev->of_node, property, 0); 679 + if (!acpm_np) 680 + return ERR_PTR(-ENODEV); 681 + 682 + pdev = of_find_device_by_node(acpm_np); 683 + if (!pdev) { 684 + dev_err(dev, "Cannot find device node %s\n", acpm_np->name); 685 + of_node_put(acpm_np); 686 + return ERR_PTR(-EPROBE_DEFER); 687 + } 688 + 689 + of_node_put(acpm_np); 690 + 691 + acpm = platform_get_drvdata(pdev); 692 + if (!acpm) { 693 + dev_err(dev, "Cannot get drvdata from %s\n", 694 + dev_name(&pdev->dev)); 695 + platform_device_put(pdev); 696 + return ERR_PTR(-EPROBE_DEFER); 697 + } 698 + 699 + if (!try_module_get(pdev->dev.driver->owner)) { 700 + dev_err(dev, "Cannot get module reference.\n"); 701 + platform_device_put(pdev); 702 + return ERR_PTR(-EPROBE_DEFER); 703 + } 704 + 705 + link = device_link_add(dev, &pdev->dev, 
DL_FLAG_AUTOREMOVE_SUPPLIER); 706 + if (!link) { 707 + dev_err(&pdev->dev, 708 + "Failed to create device link to consumer %s.\n", 709 + dev_name(dev)); 710 + platform_device_put(pdev); 711 + module_put(pdev->dev.driver->owner); 712 + return ERR_PTR(-EINVAL); 713 + } 714 + 715 + return &acpm->handle; 716 + } 717 + 718 + /** 719 + * devm_acpm_get_by_phandle() - managed get handle using phandle. 720 + * @dev: device pointer requesting ACPM handle. 721 + * @property: property name containing phandle on ACPM node. 722 + * 723 + * Return: pointer to handle on success, ERR_PTR(-errno) otherwise. 724 + */ 725 + const struct acpm_handle *devm_acpm_get_by_phandle(struct device *dev, 726 + const char *property) 727 + { 728 + const struct acpm_handle **ptr, *handle; 729 + 730 + ptr = devres_alloc(devm_acpm_release, sizeof(*ptr), GFP_KERNEL); 731 + if (!ptr) 732 + return ERR_PTR(-ENOMEM); 733 + 734 + handle = acpm_get_by_phandle(dev, property); 735 + if (!IS_ERR(handle)) { 736 + *ptr = handle; 737 + devres_add(dev, ptr); 738 + } else { 739 + devres_free(ptr); 740 + } 741 + 742 + return handle; 743 + } 744 + 745 + static const struct acpm_match_data acpm_gs101 = { 746 + .initdata_base = ACPM_GS101_INITDATA_BASE, 747 + }; 748 + 749 + static const struct of_device_id acpm_match[] = { 750 + { 751 + .compatible = "google,gs101-acpm-ipc", 752 + .data = &acpm_gs101, 753 + }, 754 + {}, 755 + }; 756 + MODULE_DEVICE_TABLE(of, acpm_match); 757 + 758 + static struct platform_driver acpm_driver = { 759 + .probe = acpm_probe, 760 + .driver = { 761 + .name = "exynos-acpm-protocol", 762 + .of_match_table = acpm_match, 763 + }, 764 + }; 765 + module_platform_driver(acpm_driver); 766 + 767 + MODULE_AUTHOR("Tudor Ambarus <tudor.ambarus@linaro.org>"); 768 + MODULE_DESCRIPTION("Samsung Exynos ACPM mailbox protocol driver"); 769 + MODULE_LICENSE("GPL");
+23
drivers/firmware/samsung/exynos-acpm.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */
2 + /*
3 +  * Copyright 2020 Samsung Electronics Co., Ltd.
4 +  * Copyright 2020 Google LLC.
5 +  * Copyright 2024 Linaro Ltd.
6 +  */
7 + #ifndef __EXYNOS_ACPM_H__
8 + #define __EXYNOS_ACPM_H__
9 + 
10 + struct acpm_xfer {
11 + 	const u32 *txd;
12 + 	u32 *rxd;
13 + 	size_t txlen;
14 + 	size_t rxlen;
15 + 	unsigned int acpm_chan_id;
16 + };
17 + 
18 + struct acpm_handle;
19 + 
20 + int acpm_do_xfer(const struct acpm_handle *handle,
21 + 		 const struct acpm_xfer *xfer);
22 + 
23 + #endif /* __EXYNOS_ACPM_H__ */
+80
drivers/firmware/smccc/soc_id.c
··· 32 32 static struct soc_device *soc_dev; 33 33 static struct soc_device_attribute *soc_dev_attr; 34 34 35 + #ifdef CONFIG_ARM64 36 + 37 + static char __ro_after_init smccc_soc_id_name[136] = ""; 38 + 39 + static inline void str_fragment_from_reg(char *dst, unsigned long reg) 40 + { 41 + dst[0] = (reg >> 0) & 0xff; 42 + dst[1] = (reg >> 8) & 0xff; 43 + dst[2] = (reg >> 16) & 0xff; 44 + dst[3] = (reg >> 24) & 0xff; 45 + dst[4] = (reg >> 32) & 0xff; 46 + dst[5] = (reg >> 40) & 0xff; 47 + dst[6] = (reg >> 48) & 0xff; 48 + dst[7] = (reg >> 56) & 0xff; 49 + } 50 + 51 + static char __init *smccc_soc_name_init(void) 52 + { 53 + struct arm_smccc_1_2_regs args; 54 + struct arm_smccc_1_2_regs res; 55 + size_t len; 56 + 57 + /* 58 + * Issue Number 1.6 of the Arm SMC Calling Convention 59 + * specification introduces an optional "name" string 60 + * to the ARM_SMCCC_ARCH_SOC_ID function. Fetch it if 61 + * available. 62 + */ 63 + args.a0 = ARM_SMCCC_ARCH_SOC_ID; 64 + args.a1 = 2; /* SOC_ID name */ 65 + arm_smccc_1_2_invoke(&args, &res); 66 + 67 + if ((u32)res.a0 == 0) { 68 + /* 69 + * Copy res.a1..res.a17 to the smccc_soc_id_name string 70 + * 8 bytes at a time. As per Issue 1.6 of the Arm SMC 71 + * Calling Convention, the string will be NUL terminated 72 + * and padded, from the end of the string to the end of the 73 + * 136 byte buffer, with NULs. 
74 + */ 75 + str_fragment_from_reg(smccc_soc_id_name + 8 * 0, res.a1); 76 + str_fragment_from_reg(smccc_soc_id_name + 8 * 1, res.a2); 77 + str_fragment_from_reg(smccc_soc_id_name + 8 * 2, res.a3); 78 + str_fragment_from_reg(smccc_soc_id_name + 8 * 3, res.a4); 79 + str_fragment_from_reg(smccc_soc_id_name + 8 * 4, res.a5); 80 + str_fragment_from_reg(smccc_soc_id_name + 8 * 5, res.a6); 81 + str_fragment_from_reg(smccc_soc_id_name + 8 * 6, res.a7); 82 + str_fragment_from_reg(smccc_soc_id_name + 8 * 7, res.a8); 83 + str_fragment_from_reg(smccc_soc_id_name + 8 * 8, res.a9); 84 + str_fragment_from_reg(smccc_soc_id_name + 8 * 9, res.a10); 85 + str_fragment_from_reg(smccc_soc_id_name + 8 * 10, res.a11); 86 + str_fragment_from_reg(smccc_soc_id_name + 8 * 11, res.a12); 87 + str_fragment_from_reg(smccc_soc_id_name + 8 * 12, res.a13); 88 + str_fragment_from_reg(smccc_soc_id_name + 8 * 13, res.a14); 89 + str_fragment_from_reg(smccc_soc_id_name + 8 * 14, res.a15); 90 + str_fragment_from_reg(smccc_soc_id_name + 8 * 15, res.a16); 91 + str_fragment_from_reg(smccc_soc_id_name + 8 * 16, res.a17); 92 + 93 + len = strnlen(smccc_soc_id_name, sizeof(smccc_soc_id_name)); 94 + if (len) { 95 + if (len == sizeof(smccc_soc_id_name)) 96 + pr_warn(FW_BUG "Ignoring improperly formatted name\n"); 97 + else 98 + return smccc_soc_id_name; 99 + } 100 + } 101 + 102 + return NULL; 103 + } 104 + 105 + #else 106 + 107 + static char __init *smccc_soc_name_init(void) 108 + { 109 + return NULL; 110 + } 111 + 112 + #endif 113 + 35 114 static int __init smccc_soc_init(void) 36 115 { 37 116 int soc_id_rev, soc_id_version; ··· 151 72 soc_dev_attr->soc_id = soc_id_str; 152 73 soc_dev_attr->revision = soc_id_rev_str; 153 74 soc_dev_attr->family = soc_id_jep106_id_str; 75 + soc_dev_attr->machine = smccc_soc_name_init(); 154 76 155 77 soc_dev = soc_device_register(soc_dev_attr); 156 78 if (IS_ERR(soc_dev)) {
+1 -5
drivers/firmware/xilinx/zynqmp.c
··· 1139 1139 int zynqmp_pm_fpga_get_config_status(u32 *value)
1140 1140 {
1141 1141 	u32 ret_payload[PAYLOAD_ARG_CNT];
1142      - 	u32 buf, lower_addr, upper_addr;
1143 1142 	int ret;
1144 1143 
1145 1144 	if (!value)
1146 1145 		return -EINVAL;
1147 1146 
1148      - 	lower_addr = lower_32_bits((u64)&buf);
1149      - 	upper_addr = upper_32_bits((u64)&buf);
1150      - 
1151 1147 	ret = zynqmp_pm_invoke_fn(PM_FPGA_READ, ret_payload, 4,
1152      - 			XILINX_ZYNQMP_PM_FPGA_CONFIG_STAT_OFFSET, lower_addr, upper_addr,
     1148 + 			XILINX_ZYNQMP_PM_FPGA_CONFIG_STAT_OFFSET, 0, 0,
1153 1149 			XILINX_ZYNQMP_PM_FPGA_READ_CONFIG_REG);
1154 1150 
1155 1151 	*value = ret_payload[1];
+37 -11
drivers/irqchip/irq-meson-gpio.c
··· 26 26 27 27 /* use for A1 like chips */ 28 28 #define REG_PIN_A1_SEL 0x04 29 - /* Used for s4 chips */ 30 - #define REG_EDGE_POL_S4 0x1c 31 29 32 30 /* 33 31 * Note: The S905X3 datasheet reports that BOTH_EDGE is controlled by ··· 70 72 bool support_edge_both; 71 73 unsigned int edge_both_offset; 72 74 unsigned int edge_single_offset; 75 + unsigned int edge_pol_reg; 73 76 unsigned int pol_low_offset; 74 77 unsigned int pin_sel_mask; 75 78 struct irq_ctl_ops ops; ··· 104 105 .pin_sel_mask = 0x7f, \ 105 106 .nr_channels = 8, \ 106 107 108 + #define INIT_MESON_A4_AO_COMMON_DATA(irqs) \ 109 + INIT_MESON_COMMON(irqs, meson_a1_gpio_irq_init, \ 110 + meson_a1_gpio_irq_sel_pin, \ 111 + meson_s4_gpio_irq_set_type) \ 112 + .support_edge_both = true, \ 113 + .edge_both_offset = 0, \ 114 + .edge_single_offset = 12, \ 115 + .edge_pol_reg = 0x8, \ 116 + .pol_low_offset = 0, \ 117 + .pin_sel_mask = 0xff, \ 118 + .nr_channels = 2, \ 119 + 107 120 #define INIT_MESON_S4_COMMON_DATA(irqs) \ 108 121 INIT_MESON_COMMON(irqs, meson_a1_gpio_irq_init, \ 109 122 meson_a1_gpio_irq_sel_pin, \ ··· 123 112 .support_edge_both = true, \ 124 113 .edge_both_offset = 0, \ 125 114 .edge_single_offset = 12, \ 115 + .edge_pol_reg = 0x1c, \ 126 116 .pol_low_offset = 0, \ 127 117 .pin_sel_mask = 0xff, \ 128 118 .nr_channels = 12, \ ··· 158 146 INIT_MESON_A1_COMMON_DATA(62) 159 147 }; 160 148 149 + static const struct meson_gpio_irq_params a4_params = { 150 + INIT_MESON_S4_COMMON_DATA(81) 151 + }; 152 + 153 + static const struct meson_gpio_irq_params a4_ao_params = { 154 + INIT_MESON_A4_AO_COMMON_DATA(8) 155 + }; 156 + 157 + static const struct meson_gpio_irq_params a5_params = { 158 + INIT_MESON_S4_COMMON_DATA(99) 159 + }; 160 + 161 161 static const struct meson_gpio_irq_params s4_params = { 162 162 INIT_MESON_S4_COMMON_DATA(82) 163 163 }; ··· 192 168 { .compatible = "amlogic,meson-sm1-gpio-intc", .data = &sm1_params }, 193 169 { .compatible = "amlogic,meson-a1-gpio-intc", .data = &a1_params }, 194 
170 { .compatible = "amlogic,meson-s4-gpio-intc", .data = &s4_params }, 171 + { .compatible = "amlogic,a4-gpio-ao-intc", .data = &a4_ao_params }, 172 + { .compatible = "amlogic,a4-gpio-intc", .data = &a4_params }, 173 + { .compatible = "amlogic,a5-gpio-intc", .data = &a5_params }, 195 174 { .compatible = "amlogic,c3-gpio-intc", .data = &c3_params }, 196 175 { .compatible = "amlogic,t7-gpio-intc", .data = &t7_params }, 197 176 { } ··· 326 299 static int meson8_gpio_irq_set_type(struct meson_gpio_irq_controller *ctl, 327 300 unsigned int type, u32 *channel_hwirq) 328 301 { 329 - u32 val = 0; 302 + const struct meson_gpio_irq_params *params = ctl->params; 330 303 unsigned int idx; 331 - const struct meson_gpio_irq_params *params; 304 + u32 val = 0; 332 305 333 - params = ctl->params; 334 306 idx = meson_gpio_irq_get_channel_idx(ctl, channel_hwirq); 335 307 336 308 /* ··· 382 356 static int meson_s4_gpio_irq_set_type(struct meson_gpio_irq_controller *ctl, 383 357 unsigned int type, u32 *channel_hwirq) 384 358 { 385 - u32 val = 0; 359 + const struct meson_gpio_irq_params *params = ctl->params; 386 360 unsigned int idx; 361 + u32 val = 0; 387 362 388 363 idx = meson_gpio_irq_get_channel_idx(ctl, channel_hwirq); 389 364 390 365 type &= IRQ_TYPE_SENSE_MASK; 391 366 392 - meson_gpio_irq_update_bits(ctl, REG_EDGE_POL_S4, BIT(idx), 0); 367 + meson_gpio_irq_update_bits(ctl, params->edge_pol_reg, BIT(idx), 0); 393 368 394 369 if (type == IRQ_TYPE_EDGE_BOTH) { 395 - val |= BIT(ctl->params->edge_both_offset + idx); 396 - meson_gpio_irq_update_bits(ctl, REG_EDGE_POL_S4, 397 - BIT(ctl->params->edge_both_offset + idx), val); 370 + val = BIT(ctl->params->edge_both_offset + idx); 371 + meson_gpio_irq_update_bits(ctl, params->edge_pol_reg, val, val); 398 372 return 0; 399 373 } 400 374 ··· 404 378 if (type & (IRQ_TYPE_EDGE_RISING | IRQ_TYPE_EDGE_FALLING)) 405 379 val |= BIT(ctl->params->edge_single_offset + idx); 406 380 407 - meson_gpio_irq_update_bits(ctl, REG_EDGE_POL, 381 + 
meson_gpio_irq_update_bits(ctl, params->edge_pol_reg, 408 382 BIT(idx) | BIT(12 + idx), val); 409 383 return 0; 410 384 };
+33
drivers/memory/mtk-smi.c
··· 332 332 [25] = {0x01}, 333 333 }; 334 334 335 + static const u8 mtk_smi_larb_mt8192_ostd[][SMI_LARB_PORT_NR_MAX] = { 336 + [0] = {0x2, 0x2, 0x28, 0xa, 0xc, 0x28,}, 337 + [1] = {0x2, 0x2, 0x18, 0x18, 0x18, 0xa, 0xc, 0x28,}, 338 + [2] = {0x5, 0x5, 0x5, 0x5, 0x1,}, 339 + [3] = {}, 340 + [4] = {0x28, 0x19, 0xb, 0x1, 0x1, 0x1, 0x1, 0x1, 0x1, 0x4, 0x1,}, 341 + [5] = {0x1, 0x1, 0x4, 0x1, 0x1, 0x1, 0x1, 0x16,}, 342 + [6] = {}, 343 + [7] = {0x1, 0x3, 0x2, 0x1, 0x1, 0x5, 0x2, 0x12, 0x13, 0x4, 0x4, 0x1, 344 + 0x4, 0x2, 0x1,}, 345 + [8] = {}, 346 + [9] = {0xa, 0x7, 0xf, 0x8, 0x1, 0x8, 0x9, 0x3, 0x3, 0x6, 0x7, 0x4, 347 + 0xa, 0x3, 0x4, 0xe, 0x1, 0x7, 0x1, 0x1, 0x1, 0x1, 0x1, 0x1, 348 + 0x1, 0x1, 0x1, 0x1, 0x1,}, 349 + [10] = {}, 350 + [11] = {0x1, 0x1, 0x1, 0x1, 0x1, 0x1, 0x1, 0x1, 0x1, 0x1, 0x1, 0x1, 351 + 0x1, 0x1, 0x1, 0xe, 0x1, 0x7, 0x8, 0x7, 0x7, 0x1, 0x6, 0x2, 352 + 0xf, 0x8, 0x1, 0x1, 0x1,}, 353 + [12] = {}, 354 + [13] = {0x2, 0xc, 0xc, 0xe, 0x6, 0x6, 0x6, 0x6, 0x6, 0x12, 0x6, 0x28, 355 + 0x2, 0xc, 0xc, 0x28, 0x12, 0x6,}, 356 + [14] = {}, 357 + [15] = {0x28, 0x14, 0x2, 0xc, 0x18, 0x4, 0x28, 0x14, 0x4, 0x4, 0x4, 0x2, 358 + 0x4, 0x2, 0x8, 0x4, 0x4,}, 359 + [16] = {0x28, 0x14, 0x2, 0xc, 0x18, 0x4, 0x28, 0x14, 0x4, 0x4, 0x4, 0x2, 360 + 0x4, 0x2, 0x8, 0x4, 0x4,}, 361 + [17] = {0x28, 0x14, 0x2, 0xc, 0x18, 0x4, 0x28, 0x14, 0x4, 0x4, 0x4, 0x2, 362 + 0x4, 0x2, 0x8, 0x4, 0x4,}, 363 + [18] = {0x2, 0x2, 0x4, 0x2,}, 364 + [19] = {0x9, 0x9, 0x5, 0x5, 0x1, 0x1,}, 365 + }; 366 + 335 367 static const u8 mtk_smi_larb_mt8195_ostd[][SMI_LARB_PORT_NR_MAX] = { 336 368 [0] = {0x0a, 0xc, 0x22, 0x22, 0x01, 0x0a,}, /* larb0 */ 337 369 [1] = {0x0a, 0xc, 0x22, 0x22, 0x01, 0x0a,}, /* larb1 */ ··· 459 427 460 428 static const struct mtk_smi_larb_gen mtk_smi_larb_mt8192 = { 461 429 .config_port = mtk_smi_larb_config_port_gen2_general, 430 + .ostd = mtk_smi_larb_mt8192_ostd, 462 431 }; 463 432 464 433 static const struct mtk_smi_larb_gen mtk_smi_larb_mt8195 = {
+1 -3
drivers/memory/tegra/tegra20-emc.c
··· 1191 1191 int irq, err; 1192 1192 1193 1193 irq = platform_get_irq(pdev, 0); 1194 - if (irq < 0) { 1195 - dev_err(&pdev->dev, "please update your device tree\n"); 1194 + if (irq < 0) 1196 1195 return irq; 1197 - } 1198 1196 1199 1197 emc = devm_kzalloc(&pdev->dev, sizeof(*emc), GFP_KERNEL); 1200 1198 if (!emc)
+1 -1
drivers/mmc/host/sdhci-msm.c
··· 1873 1873 	if (!(cqhci_readl(cq_host, CQHCI_CAP) & CQHCI_CAP_CS))
1874 1874 		return 0;
1875 1875 
1876      - 	ice = of_qcom_ice_get(dev);
     1876 + 	ice = devm_of_qcom_ice_get(dev);
1877 1877 	if (ice == ERR_PTR(-EOPNOTSUPP)) {
1878 1878 		dev_warn(dev, "Disabling inline encryption support\n");
1879 1879 		ice = NULL;
+1 -1
drivers/nvme/host/apple.c
··· 221 221 	return APPLE_ANS_MAX_QUEUE_DEPTH;
222 222 }
223 223 
224      - static void apple_nvme_rtkit_crashed(void *cookie)
     224 + static void apple_nvme_rtkit_crashed(void *cookie, const void *crashlog, size_t crashlog_size)
225 225 {
226 226 	struct apple_nvme *anv = cookie;
227 227 
+7
drivers/reset/Kconfig
··· 96 96 	help
97 97 	  This enables the reset controller driver for HSDK board.
98 98 
    99 + config RESET_IMX_SCU
   100 + 	tristate "i.MX8Q Reset Driver"
   101 + 	depends on IMX_SCU && HAVE_ARM_SMCCC
   102 + 	depends on (ARM64 && ARCH_MXC) || COMPILE_TEST
   103 + 	help
   104 + 	  This enables the reset controller driver for i.MX8QM/i.MX8QXP
   105 + 
99 106 config RESET_IMX7
100 107 	tristate "i.MX7/8 Reset Driver"
101 108 	depends on HAS_IOMEM
+1
drivers/reset/Makefile
··· 15 15 obj-$(CONFIG_RESET_EYEQ) += reset-eyeq.o
16 16 obj-$(CONFIG_RESET_GPIO) += reset-gpio.o
17 17 obj-$(CONFIG_RESET_HSDK) += reset-hsdk.o
   18 + obj-$(CONFIG_RESET_IMX_SCU) += reset-imx-scu.o
18 19 obj-$(CONFIG_RESET_IMX7) += reset-imx7.o
19 20 obj-$(CONFIG_RESET_IMX8MP_AUDIOMIX) += reset-imx8mp-audiomix.o
20 21 obj-$(CONFIG_RESET_INTEL_GW) += reset-intel-gw.o
+101
drivers/reset/reset-imx-scu.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + /* 3 + * Copyright 2025 NXP 4 + * Frank Li <Frank.Li@nxp.com> 5 + */ 6 + #include <linux/firmware/imx/svc/misc.h> 7 + #include <linux/kernel.h> 8 + #include <linux/module.h> 9 + #include <linux/of.h> 10 + #include <linux/platform_device.h> 11 + #include <linux/reset-controller.h> 12 + 13 + #include <dt-bindings/firmware/imx/rsrc.h> 14 + 15 + struct imx_scu_reset { 16 + struct reset_controller_dev rc; 17 + struct imx_sc_ipc *ipc_handle; 18 + }; 19 + 20 + static struct imx_scu_reset *to_imx_scu(struct reset_controller_dev *rc) 21 + { 22 + return container_of(rc, struct imx_scu_reset, rc); 23 + } 24 + 25 + struct imx_scu_id_map { 26 + u32 resource_id; 27 + u32 command_id; 28 + }; 29 + 30 + static const struct imx_scu_id_map imx_scu_id_map[] = { 31 + { IMX_SC_R_CSI_0, IMX_SC_C_MIPI_RESET }, 32 + { IMX_SC_R_CSI_1, IMX_SC_C_MIPI_RESET }, 33 + }; 34 + 35 + static int imx_scu_reset_assert(struct reset_controller_dev *rc, unsigned long id) 36 + { 37 + struct imx_scu_reset *priv = to_imx_scu(rc); 38 + 39 + return imx_sc_misc_set_control(priv->ipc_handle, imx_scu_id_map[id].resource_id, 40 + imx_scu_id_map[id].command_id, true); 41 + } 42 + 43 + static const struct reset_control_ops imx_scu_reset_ops = { 44 + .assert = imx_scu_reset_assert, 45 + }; 46 + 47 + static int imx_scu_xlate(struct reset_controller_dev *rc, const struct of_phandle_args *reset_spec) 48 + { 49 + int i; 50 + 51 + for (i = 0; i < rc->nr_resets; i++) 52 + if (reset_spec->args[0] == imx_scu_id_map[i].resource_id) 53 + return i; 54 + 55 + return -EINVAL; 56 + } 57 + 58 + static int imx_scu_reset_probe(struct platform_device *pdev) 59 + { 60 + struct device *dev = &pdev->dev; 61 + struct imx_scu_reset *priv; 62 + int ret; 63 + 64 + priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); 65 + if (!priv) 66 + return -ENOMEM; 67 + 68 + platform_set_drvdata(pdev, &priv->rc); 69 + 70 + ret = imx_scu_get_handle(&priv->ipc_handle); 71 + if (ret) 72 + return 
dev_err_probe(dev, ret, "sc_misc_MIPI get ipc handle failed!\n"); 73 + 74 + priv->rc.ops = &imx_scu_reset_ops; 75 + priv->rc.owner = THIS_MODULE; 76 + priv->rc.of_node = dev->of_node; 77 + priv->rc.of_reset_n_cells = 1; 78 + priv->rc.of_xlate = imx_scu_xlate; 79 + priv->rc.nr_resets = ARRAY_SIZE(imx_scu_id_map); 80 + 81 + return devm_reset_controller_register(dev, &priv->rc); 82 + } 83 + 84 + static const struct of_device_id imx_scu_reset_ids[] = { 85 + { .compatible = "fsl,imx-scu-reset", }, 86 + {} 87 + }; 88 + MODULE_DEVICE_TABLE(of, imx_scu_reset_ids); 89 + 90 + static struct platform_driver imx_scu_reset_driver = { 91 + .probe = imx_scu_reset_probe, 92 + .driver = { 93 + .name = "scu-reset", 94 + .of_match_table = imx_scu_reset_ids, 95 + }, 96 + }; 97 + module_platform_driver(imx_scu_reset_driver); 98 + 99 + MODULE_AUTHOR("Frank Li <Frank.Li@nxp.com>"); 100 + MODULE_DESCRIPTION("i.MX scu reset driver"); 101 + MODULE_LICENSE("GPL");
+1
drivers/soc/apple/rtkit-internal.h
··· 44 44 
45 45 	struct apple_rtkit_shmem ioreport_buffer;
46 46 	struct apple_rtkit_shmem crashlog_buffer;
   47 + 	struct apple_rtkit_shmem oslog_buffer;
47 48 
48 49 	struct apple_rtkit_shmem syslog_buffer;
49 50 	char *syslog_msg_buffer;
+75 -37
drivers/soc/apple/rtkit.c
···
 	APPLE_RTKIT_PWR_STATE_IDLE = 0x201, /* sleeping, retain state */
 	APPLE_RTKIT_PWR_STATE_QUIESCED = 0x10, /* running but no communication */
 	APPLE_RTKIT_PWR_STATE_ON = 0x20, /* normal operating state */
+	APPLE_RTKIT_PWR_STATE_INIT = 0x220, /* init after starting the coproc */
 };
 
 enum {
···
 #define APPLE_RTKIT_SYSLOG_MSG_SIZE	GENMASK_ULL(31, 24)
 
 #define APPLE_RTKIT_OSLOG_TYPE		GENMASK_ULL(63, 56)
-#define APPLE_RTKIT_OSLOG_INIT		1
-#define APPLE_RTKIT_OSLOG_ACK		3
+#define APPLE_RTKIT_OSLOG_BUFFER_REQUEST	1
+#define APPLE_RTKIT_OSLOG_SIZE		GENMASK_ULL(55, 36)
+#define APPLE_RTKIT_OSLOG_IOVA		GENMASK_ULL(35, 0)
 
 #define APPLE_RTKIT_MIN_SUPPORTED_VERSION 11
 #define APPLE_RTKIT_MAX_SUPPORTED_VERSION 12
···
 }
 EXPORT_SYMBOL_GPL(apple_rtkit_is_crashed);
 
-static void apple_rtkit_management_send(struct apple_rtkit *rtk, u8 type,
+static int apple_rtkit_management_send(struct apple_rtkit *rtk, u8 type,
 					u64 msg)
 {
+	int ret;
+
 	msg &= ~APPLE_RTKIT_MGMT_TYPE;
 	msg |= FIELD_PREP(APPLE_RTKIT_MGMT_TYPE, type);
-	apple_rtkit_send_message(rtk, APPLE_RTKIT_EP_MGMT, msg, NULL, false);
+	ret = apple_rtkit_send_message(rtk, APPLE_RTKIT_EP_MGMT, msg, NULL, false);
+
+	if (ret)
+		dev_err(rtk->dev, "RTKit: Failed to send management message: %d\n", ret);
+
+	return ret;
 }
 
 static void apple_rtkit_management_rx_hello(struct apple_rtkit *rtk, u64 msg)
···
 					    struct apple_rtkit_shmem *buffer,
 					    u8 ep, u64 msg)
 {
-	size_t n_4kpages = FIELD_GET(APPLE_RTKIT_BUFFER_REQUEST_SIZE, msg);
 	u64 reply;
 	int err;
+
+	/* The different size vs. IOVA shifts look odd but are indeed correct this way */
+	if (ep == APPLE_RTKIT_EP_OSLOG) {
+		buffer->size = FIELD_GET(APPLE_RTKIT_OSLOG_SIZE, msg);
+		buffer->iova = FIELD_GET(APPLE_RTKIT_OSLOG_IOVA, msg) << 12;
+	} else {
+		buffer->size = FIELD_GET(APPLE_RTKIT_BUFFER_REQUEST_SIZE, msg) << 12;
+		buffer->iova = FIELD_GET(APPLE_RTKIT_BUFFER_REQUEST_IOVA, msg);
+	}
 
 	buffer->buffer = NULL;
 	buffer->iomem = NULL;
 	buffer->is_mapped = false;
-	buffer->iova = FIELD_GET(APPLE_RTKIT_BUFFER_REQUEST_IOVA, msg);
-	buffer->size = n_4kpages << 12;
 
 	dev_dbg(rtk->dev, "RTKit: buffer request for 0x%zx bytes at %pad\n",
 		buffer->size, &buffer->iova);
···
 	}
 
 	if (!buffer->is_mapped) {
-		reply = FIELD_PREP(APPLE_RTKIT_SYSLOG_TYPE,
-				   APPLE_RTKIT_BUFFER_REQUEST);
-		reply |= FIELD_PREP(APPLE_RTKIT_BUFFER_REQUEST_SIZE, n_4kpages);
-		reply |= FIELD_PREP(APPLE_RTKIT_BUFFER_REQUEST_IOVA,
-				    buffer->iova);
+		/* oslog uses different fields and needs a shifted IOVA instead of size */
+		if (ep == APPLE_RTKIT_EP_OSLOG) {
+			reply = FIELD_PREP(APPLE_RTKIT_OSLOG_TYPE,
+					   APPLE_RTKIT_OSLOG_BUFFER_REQUEST);
+			reply |= FIELD_PREP(APPLE_RTKIT_OSLOG_SIZE, buffer->size);
+			reply |= FIELD_PREP(APPLE_RTKIT_OSLOG_IOVA,
+					    buffer->iova >> 12);
+		} else {
+			reply = FIELD_PREP(APPLE_RTKIT_SYSLOG_TYPE,
+					   APPLE_RTKIT_BUFFER_REQUEST);
+			reply |= FIELD_PREP(APPLE_RTKIT_BUFFER_REQUEST_SIZE,
+					    buffer->size >> 12);
+			reply |= FIELD_PREP(APPLE_RTKIT_BUFFER_REQUEST_IOVA,
+					    buffer->iova);
+		}
 		apple_rtkit_send_message(rtk, ep, reply, NULL, false);
 	}
 
 	return 0;
 
 error:
+	dev_err(rtk->dev, "RTKit: failed buffer request for 0x%zx bytes (%d)\n",
+		buffer->size, err);
+
 	buffer->buffer = NULL;
 	buffer->iomem = NULL;
 	buffer->iova = 0;
···
 		apple_rtkit_memcpy(rtk, bfr, &rtk->crashlog_buffer, 0,
 				   rtk->crashlog_buffer.size);
 		apple_rtkit_crashlog_dump(rtk, bfr, rtk->crashlog_buffer.size);
-		kfree(bfr);
 	} else {
 		dev_err(rtk->dev,
 			"RTKit: Couldn't allocate crashlog shadow buffer\n");
···
 	rtk->crashed = true;
 	if (rtk->ops->crashed)
-		rtk->ops->crashed(rtk->cookie);
+		rtk->ops->crashed(rtk->cookie, bfr, rtk->crashlog_buffer.size);
+
+	kfree(bfr);
 }
 
 static void apple_rtkit_ioreport_rx(struct apple_rtkit *rtk, u64 msg)
···
 
 	log_context[sizeof(log_context) - 1] = 0;
 
-	msglen = rtk->syslog_msg_size - 1;
+	msglen = strnlen(rtk->syslog_msg_buffer, rtk->syslog_msg_size - 1);
 	while (msglen > 0 &&
 	       should_crop_syslog_char(rtk->syslog_msg_buffer[msglen - 1]))
 		msglen--;
···
 	}
 }
 
-static void apple_rtkit_oslog_rx_init(struct apple_rtkit *rtk, u64 msg)
-{
-	u64 ack;
-
-	dev_dbg(rtk->dev, "RTKit: oslog init: msg: 0x%llx\n", msg);
-	ack = FIELD_PREP(APPLE_RTKIT_OSLOG_TYPE, APPLE_RTKIT_OSLOG_ACK);
-	apple_rtkit_send_message(rtk, APPLE_RTKIT_EP_OSLOG, ack, NULL, false);
-}
-
 static void apple_rtkit_oslog_rx(struct apple_rtkit *rtk, u64 msg)
 {
 	u8 type = FIELD_GET(APPLE_RTKIT_OSLOG_TYPE, msg);
 
 	switch (type) {
-	case APPLE_RTKIT_OSLOG_INIT:
-		apple_rtkit_oslog_rx_init(rtk, msg);
+	case APPLE_RTKIT_OSLOG_BUFFER_REQUEST:
+		apple_rtkit_common_rx_get_buffer(rtk, &rtk->oslog_buffer,
+						 APPLE_RTKIT_EP_OSLOG, msg);
 		break;
 	default:
-		dev_warn(rtk->dev, "RTKit: Unknown oslog message: %llx\n", msg);
+		dev_warn(rtk->dev, "RTKit: Unknown oslog message: %llx\n",
+			 msg);
 	}
 }
···
 		.msg1 = ep,
 	};
 
-	if (rtk->crashed)
+	if (rtk->crashed) {
+		dev_warn(rtk->dev,
+			 "RTKit: Device is crashed, cannot send message\n");
 		return -EINVAL;
+	}
+
 	if (ep >= APPLE_RTKIT_APP_ENDPOINT_START &&
-	    !apple_rtkit_is_running(rtk))
+	    !apple_rtkit_is_running(rtk)) {
+		dev_warn(rtk->dev,
+			 "RTKit: Endpoint 0x%02x is not running, cannot send message\n", ep);
 		return -EINVAL;
+	}
 
 	/*
 	 * The message will be sent with a MMIO write. We need the barrier
···
 	rtk->mbox->rx = apple_rtkit_rx;
 	rtk->mbox->cookie = rtk;
 
-	rtk->wq = alloc_ordered_workqueue("rtkit-%s", WQ_MEM_RECLAIM,
+	rtk->wq = alloc_ordered_workqueue("rtkit-%s", WQ_HIGHPRI | WQ_MEM_RECLAIM,
 					  dev_name(rtk->dev));
 	if (!rtk->wq) {
 		ret = -ENOMEM;
···
 
 	apple_rtkit_free_buffer(rtk, &rtk->ioreport_buffer);
 	apple_rtkit_free_buffer(rtk, &rtk->crashlog_buffer);
+	apple_rtkit_free_buffer(rtk, &rtk->oslog_buffer);
 	apple_rtkit_free_buffer(rtk, &rtk->syslog_buffer);
 
 	kfree(rtk->syslog_msg_buffer);
···
 	reinit_completion(&rtk->ap_pwr_ack_completion);
 
 	msg = FIELD_PREP(APPLE_RTKIT_MGMT_PWR_STATE, state);
-	apple_rtkit_management_send(rtk, APPLE_RTKIT_MGMT_SET_AP_PWR_STATE,
-				    msg);
+	ret = apple_rtkit_management_send(rtk, APPLE_RTKIT_MGMT_SET_AP_PWR_STATE,
+					  msg);
+	if (ret)
+		return ret;
 
 	ret = apple_rtkit_wait_for_completion(&rtk->ap_pwr_ack_completion);
 	if (ret)
···
 	reinit_completion(&rtk->iop_pwr_ack_completion);
 
 	msg = FIELD_PREP(APPLE_RTKIT_MGMT_PWR_STATE, state);
-	apple_rtkit_management_send(rtk, APPLE_RTKIT_MGMT_SET_IOP_PWR_STATE,
-				    msg);
+	ret = apple_rtkit_management_send(rtk, APPLE_RTKIT_MGMT_SET_IOP_PWR_STATE,
+					  msg);
+	if (ret)
+		return ret;
 
 	ret = apple_rtkit_wait_for_completion(&rtk->iop_pwr_ack_completion);
 	if (ret)
···
 int apple_rtkit_wake(struct apple_rtkit *rtk)
 {
 	u64 msg;
+	int ret;
 
 	if (apple_rtkit_is_running(rtk))
 		return -EINVAL;
···
 	 * Use open-coded apple_rtkit_set_iop_power_state since apple_rtkit_boot
 	 * will wait for the completion anyway.
 	 */
-	msg = FIELD_PREP(APPLE_RTKIT_MGMT_PWR_STATE, APPLE_RTKIT_PWR_STATE_ON);
-	apple_rtkit_management_send(rtk, APPLE_RTKIT_MGMT_SET_IOP_PWR_STATE,
-				    msg);
+	msg = FIELD_PREP(APPLE_RTKIT_MGMT_PWR_STATE, APPLE_RTKIT_PWR_STATE_INIT);
+	ret = apple_rtkit_management_send(rtk, APPLE_RTKIT_MGMT_SET_IOP_PWR_STATE,
+					  msg);
+	if (ret)
+		return ret;
 
 	return apple_rtkit_boot(rtk);
 }
···
 
 	apple_rtkit_free_buffer(rtk, &rtk->ioreport_buffer);
 	apple_rtkit_free_buffer(rtk, &rtk->crashlog_buffer);
+	apple_rtkit_free_buffer(rtk, &rtk->oslog_buffer);
 	apple_rtkit_free_buffer(rtk, &rtk->syslog_buffer);
 
 	kfree(rtk->syslog_msg_buffer);
+15 -16
drivers/soc/mediatek/mt8167-mmsys.h
···
 #define MT8167_DSI0_SEL_IN_RDMA0		0x1
 
 static const struct mtk_mmsys_routes mt8167_mmsys_routing_table[] = {
-	{
-		DDP_COMPONENT_OVL0, DDP_COMPONENT_COLOR0,
-		MT8167_DISP_REG_CONFIG_DISP_OVL0_MOUT_EN, OVL0_MOUT_EN_COLOR0,
-	}, {
-		DDP_COMPONENT_DITHER0, DDP_COMPONENT_RDMA0,
-		MT8167_DISP_REG_CONFIG_DISP_DITHER_MOUT_EN, MT8167_DITHER_MOUT_EN_RDMA0
-	}, {
-		DDP_COMPONENT_OVL0, DDP_COMPONENT_COLOR0,
-		MT8167_DISP_REG_CONFIG_DISP_COLOR0_SEL_IN, COLOR0_SEL_IN_OVL0
-	}, {
-		DDP_COMPONENT_RDMA0, DDP_COMPONENT_DSI0,
-		MT8167_DISP_REG_CONFIG_DISP_DSI0_SEL_IN, MT8167_DSI0_SEL_IN_RDMA0
-	}, {
-		DDP_COMPONENT_RDMA0, DDP_COMPONENT_DSI0,
-		MT8167_DISP_REG_CONFIG_DISP_RDMA0_SOUT_SEL_IN, MT8167_RDMA0_SOUT_DSI0
-	},
+	MMSYS_ROUTE(OVL0, COLOR0,
+		    MT8167_DISP_REG_CONFIG_DISP_OVL0_MOUT_EN, OVL0_MOUT_EN_COLOR0,
+		    OVL0_MOUT_EN_COLOR0),
+	MMSYS_ROUTE(DITHER0, RDMA0,
+		    MT8167_DISP_REG_CONFIG_DISP_DITHER_MOUT_EN, MT8167_DITHER_MOUT_EN_RDMA0,
+		    MT8167_DITHER_MOUT_EN_RDMA0),
+	MMSYS_ROUTE(OVL0, COLOR0,
+		    MT8167_DISP_REG_CONFIG_DISP_COLOR0_SEL_IN, COLOR0_SEL_IN_OVL0,
+		    COLOR0_SEL_IN_OVL0),
+	MMSYS_ROUTE(RDMA0, DSI0,
+		    MT8167_DISP_REG_CONFIG_DISP_DSI0_SEL_IN, MT8167_DSI0_SEL_IN_RDMA0,
+		    MT8167_DSI0_SEL_IN_RDMA0),
+	MMSYS_ROUTE(RDMA0, DSI0,
+		    MT8167_DISP_REG_CONFIG_DISP_RDMA0_SOUT_SEL_IN, MT8167_RDMA0_SOUT_DSI0,
+		    MT8167_RDMA0_SOUT_DSI0),
 };
 
 #endif /* __SOC_MEDIATEK_MT8167_MMSYS_H */
+42 -57
drivers/soc/mediatek/mt8173-mmsys.h
···
 #define MT8173_RDMA0_SOUT_COLOR0		BIT(0)
 
 static const struct mtk_mmsys_routes mt8173_mmsys_routing_table[] = {
-	{
-		DDP_COMPONENT_OVL0, DDP_COMPONENT_COLOR0,
-		MT8173_DISP_REG_CONFIG_DISP_OVL0_MOUT_EN,
-		MT8173_OVL0_MOUT_EN_COLOR0, MT8173_OVL0_MOUT_EN_COLOR0
-	}, {
-		DDP_COMPONENT_OD0, DDP_COMPONENT_RDMA0,
-		MT8173_DISP_REG_CONFIG_DISP_OD_MOUT_EN,
-		MT8173_OD0_MOUT_EN_RDMA0, MT8173_OD0_MOUT_EN_RDMA0
-	}, {
-		DDP_COMPONENT_UFOE, DDP_COMPONENT_DSI0,
-		MT8173_DISP_REG_CONFIG_DISP_UFOE_MOUT_EN,
-		MT8173_UFOE_MOUT_EN_DSI0, MT8173_UFOE_MOUT_EN_DSI0
-	}, {
-		DDP_COMPONENT_COLOR0, DDP_COMPONENT_AAL0,
-		MT8173_DISP_REG_CONFIG_DISP_COLOR0_SOUT_SEL_IN,
-		MT8173_COLOR0_SOUT_MERGE, 0 /* SOUT to AAL */
-	}, {
-		DDP_COMPONENT_RDMA0, DDP_COMPONENT_UFOE,
-		MT8173_DISP_REG_CONFIG_DISP_RDMA0_SOUT_SEL_IN,
-		MT8173_RDMA0_SOUT_COLOR0, 0 /* SOUT to UFOE */
-	}, {
-		DDP_COMPONENT_OVL0, DDP_COMPONENT_COLOR0,
-		MT8173_DISP_REG_CONFIG_DISP_COLOR0_SEL_IN,
-		MT8173_COLOR0_SEL_IN_OVL0, MT8173_COLOR0_SEL_IN_OVL0
-	}, {
-		DDP_COMPONENT_AAL0, DDP_COMPONENT_COLOR0,
-		MT8173_DISP_REG_CONFIG_DISP_AAL_SEL_IN,
-		MT8173_AAL_SEL_IN_MERGE, 0 /* SEL_IN from COLOR0 */
-	}, {
-		DDP_COMPONENT_RDMA0, DDP_COMPONENT_UFOE,
-		MT8173_DISP_REG_CONFIG_DISP_UFOE_SEL_IN,
-		MT8173_UFOE_SEL_IN_RDMA0, 0 /* SEL_IN from RDMA0 */
-	}, {
-		DDP_COMPONENT_UFOE, DDP_COMPONENT_DSI0,
-		MT8173_DISP_REG_CONFIG_DSI0_SEL_IN,
-		MT8173_DSI0_SEL_IN_UFOE, 0, /* SEL_IN from UFOE */
-	}, {
-		DDP_COMPONENT_OVL1, DDP_COMPONENT_COLOR1,
-		MT8173_DISP_REG_CONFIG_DISP_OVL1_MOUT_EN,
-		MT8173_OVL1_MOUT_EN_COLOR1, MT8173_OVL1_MOUT_EN_COLOR1
-	}, {
-		DDP_COMPONENT_GAMMA, DDP_COMPONENT_RDMA1,
-		MT8173_DISP_REG_CONFIG_DISP_GAMMA_MOUT_EN,
-		MT8173_GAMMA_MOUT_EN_RDMA1, MT8173_GAMMA_MOUT_EN_RDMA1
-	}, {
-		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI0,
-		MT8173_DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN,
-		RDMA1_SOUT_MASK, RDMA1_SOUT_DPI0
-	}, {
-		DDP_COMPONENT_OVL1, DDP_COMPONENT_COLOR1,
-		MT8173_DISP_REG_CONFIG_DISP_COLOR1_SEL_IN,
-		COLOR1_SEL_IN_OVL1, COLOR1_SEL_IN_OVL1
-	}, {
-		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI0,
-		MT8173_DISP_REG_CONFIG_DPI_SEL_IN,
-		MT8173_DPI0_SEL_IN_MASK, MT8173_DPI0_SEL_IN_RDMA1
-	}
+	MMSYS_ROUTE(OVL0, COLOR0,
+		    MT8173_DISP_REG_CONFIG_DISP_OVL0_MOUT_EN, MT8173_OVL0_MOUT_EN_COLOR0,
+		    MT8173_OVL0_MOUT_EN_COLOR0),
+	MMSYS_ROUTE(OD0, RDMA0,
+		    MT8173_DISP_REG_CONFIG_DISP_OD_MOUT_EN, MT8173_OD0_MOUT_EN_RDMA0,
+		    MT8173_OD0_MOUT_EN_RDMA0),
+	MMSYS_ROUTE(UFOE, DSI0,
+		    MT8173_DISP_REG_CONFIG_DISP_UFOE_MOUT_EN, MT8173_UFOE_MOUT_EN_DSI0,
+		    MT8173_UFOE_MOUT_EN_DSI0),
+	MMSYS_ROUTE(COLOR0, AAL0,
+		    MT8173_DISP_REG_CONFIG_DISP_COLOR0_SOUT_SEL_IN, MT8173_COLOR0_SOUT_MERGE,
+		    0 /* SOUT to AAL */),
+	MMSYS_ROUTE(RDMA0, UFOE,
+		    MT8173_DISP_REG_CONFIG_DISP_RDMA0_SOUT_SEL_IN, MT8173_RDMA0_SOUT_COLOR0,
+		    0 /* SOUT to UFOE */),
+	MMSYS_ROUTE(OVL0, COLOR0,
+		    MT8173_DISP_REG_CONFIG_DISP_COLOR0_SEL_IN, MT8173_COLOR0_SEL_IN_OVL0,
+		    MT8173_COLOR0_SEL_IN_OVL0),
+	MMSYS_ROUTE(AAL0, COLOR0,
+		    MT8173_DISP_REG_CONFIG_DISP_AAL_SEL_IN, MT8173_AAL_SEL_IN_MERGE,
+		    0 /* SEL_IN from COLOR0 */),
+	MMSYS_ROUTE(RDMA0, UFOE,
+		    MT8173_DISP_REG_CONFIG_DISP_UFOE_SEL_IN, MT8173_UFOE_SEL_IN_RDMA0,
+		    0 /* SEL_IN from RDMA0 */),
+	MMSYS_ROUTE(UFOE, DSI0,
+		    MT8173_DISP_REG_CONFIG_DSI0_SEL_IN, MT8173_DSI0_SEL_IN_UFOE,
+		    0 /* SEL_IN from UFOE */),
+	MMSYS_ROUTE(OVL1, COLOR1,
+		    MT8173_DISP_REG_CONFIG_DISP_OVL1_MOUT_EN, MT8173_OVL1_MOUT_EN_COLOR1,
+		    MT8173_OVL1_MOUT_EN_COLOR1),
+	MMSYS_ROUTE(GAMMA, RDMA1,
+		    MT8173_DISP_REG_CONFIG_DISP_GAMMA_MOUT_EN, MT8173_GAMMA_MOUT_EN_RDMA1,
+		    MT8173_GAMMA_MOUT_EN_RDMA1),
+	MMSYS_ROUTE(RDMA1, DPI0,
+		    MT8173_DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_MASK,
+		    RDMA1_SOUT_DPI0),
+	MMSYS_ROUTE(OVL1, COLOR1,
+		    MT8173_DISP_REG_CONFIG_DISP_COLOR1_SEL_IN, COLOR1_SEL_IN_OVL1,
+		    COLOR1_SEL_IN_OVL1),
+	MMSYS_ROUTE(RDMA1, DPI0,
+		    MT8173_DISP_REG_CONFIG_DPI_SEL_IN, MT8173_DPI0_SEL_IN_MASK,
+		    MT8173_DPI0_SEL_IN_RDMA1),
 };
 
 #endif /* __SOC_MEDIATEK_MT8173_MMSYS_H */
+21 -29
drivers/soc/mediatek/mt8183-mmsys.h
···
 #define MT8183_MMSYS_SW0_RST_B		0x140
 
 static const struct mtk_mmsys_routes mmsys_mt8183_routing_table[] = {
-	{
-		DDP_COMPONENT_OVL0, DDP_COMPONENT_OVL_2L0,
-		MT8183_DISP_OVL0_MOUT_EN, MT8183_OVL0_MOUT_EN_OVL0_2L,
-		MT8183_OVL0_MOUT_EN_OVL0_2L
-	}, {
-		DDP_COMPONENT_OVL_2L0, DDP_COMPONENT_RDMA0,
-		MT8183_DISP_OVL0_2L_MOUT_EN, MT8183_OVL0_2L_MOUT_EN_DISP_PATH0,
-		MT8183_OVL0_2L_MOUT_EN_DISP_PATH0
-	}, {
-		DDP_COMPONENT_OVL_2L1, DDP_COMPONENT_RDMA1,
-		MT8183_DISP_OVL1_2L_MOUT_EN, MT8183_OVL1_2L_MOUT_EN_RDMA1,
-		MT8183_OVL1_2L_MOUT_EN_RDMA1
-	}, {
-		DDP_COMPONENT_DITHER0, DDP_COMPONENT_DSI0,
-		MT8183_DISP_DITHER0_MOUT_EN, MT8183_DITHER0_MOUT_IN_DSI0,
-		MT8183_DITHER0_MOUT_IN_DSI0
-	}, {
-		DDP_COMPONENT_OVL_2L0, DDP_COMPONENT_RDMA0,
-		MT8183_DISP_PATH0_SEL_IN, MT8183_DISP_PATH0_SEL_IN_OVL0_2L,
-		MT8183_DISP_PATH0_SEL_IN_OVL0_2L
-	}, {
-		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI0,
-		MT8183_DISP_DPI0_SEL_IN, MT8183_DPI0_SEL_IN_RDMA1,
-		MT8183_DPI0_SEL_IN_RDMA1
-	}, {
-		DDP_COMPONENT_RDMA0, DDP_COMPONENT_COLOR0,
-		MT8183_DISP_RDMA0_SOUT_SEL_IN, MT8183_RDMA0_SOUT_COLOR0,
-		MT8183_RDMA0_SOUT_COLOR0
-	}
+	MMSYS_ROUTE(OVL0, OVL_2L0,
+		    MT8183_DISP_OVL0_MOUT_EN, MT8183_OVL0_MOUT_EN_OVL0_2L,
+		    MT8183_OVL0_MOUT_EN_OVL0_2L),
+	MMSYS_ROUTE(OVL_2L0, RDMA0,
+		    MT8183_DISP_OVL0_2L_MOUT_EN, MT8183_OVL0_2L_MOUT_EN_DISP_PATH0,
+		    MT8183_OVL0_2L_MOUT_EN_DISP_PATH0),
+	MMSYS_ROUTE(OVL_2L1, RDMA1,
+		    MT8183_DISP_OVL1_2L_MOUT_EN, MT8183_OVL1_2L_MOUT_EN_RDMA1,
+		    MT8183_OVL1_2L_MOUT_EN_RDMA1),
+	MMSYS_ROUTE(DITHER0, DSI0,
+		    MT8183_DISP_DITHER0_MOUT_EN, MT8183_DITHER0_MOUT_IN_DSI0,
+		    MT8183_DITHER0_MOUT_IN_DSI0),
+	MMSYS_ROUTE(OVL_2L0, RDMA0,
+		    MT8183_DISP_PATH0_SEL_IN, MT8183_DISP_PATH0_SEL_IN_OVL0_2L,
+		    MT8183_DISP_PATH0_SEL_IN_OVL0_2L),
+	MMSYS_ROUTE(RDMA1, DPI0,
+		    MT8183_DISP_DPI0_SEL_IN, MT8183_DPI0_SEL_IN_RDMA1,
+		    MT8183_DPI0_SEL_IN_RDMA1),
+	MMSYS_ROUTE(RDMA0, COLOR0,
+		    MT8183_DISP_RDMA0_SOUT_SEL_IN, MT8183_RDMA0_SOUT_COLOR0,
+		    MT8183_RDMA0_SOUT_COLOR0),
 };
 
 #endif /* __SOC_MEDIATEK_MT8183_MMSYS_H */
+33 -55
drivers/soc/mediatek/mt8186-mmsys.h
···
 #define MT8186_MMSYS_SW0_RST_B		0x160
 
 static const struct mtk_mmsys_routes mmsys_mt8186_routing_table[] = {
-	{
-		DDP_COMPONENT_OVL0, DDP_COMPONENT_RDMA0,
-		MT8186_DISP_OVL0_MOUT_EN, MT8186_OVL0_MOUT_EN_MASK,
-		MT8186_OVL0_MOUT_TO_RDMA0
-	},
-	{
-		DDP_COMPONENT_OVL0, DDP_COMPONENT_RDMA0,
-		MT8186_DISP_RDMA0_SEL_IN, MT8186_RDMA0_SEL_IN_MASK,
-		MT8186_RDMA0_FROM_OVL0
-	},
-	{
-		DDP_COMPONENT_OVL0, DDP_COMPONENT_RDMA0,
-		MT8186_MMSYS_OVL_CON, MT8186_MMSYS_OVL0_CON_MASK,
-		MT8186_OVL0_GO_BLEND
-	},
-	{
-		DDP_COMPONENT_RDMA0, DDP_COMPONENT_COLOR0,
-		MT8186_DISP_RDMA0_SOUT_SEL, MT8186_RDMA0_SOUT_SEL_MASK,
-		MT8186_RDMA0_SOUT_TO_COLOR0
-	},
-	{
-		DDP_COMPONENT_DITHER0, DDP_COMPONENT_DSI0,
-		MT8186_DISP_DITHER0_MOUT_EN, MT8186_DITHER0_MOUT_EN_MASK,
-		MT8186_DITHER0_MOUT_TO_DSI0,
-	},
-	{
-		DDP_COMPONENT_DITHER0, DDP_COMPONENT_DSI0,
-		MT8186_DISP_DSI0_SEL_IN, MT8186_DSI0_SEL_IN_MASK,
-		MT8186_DSI0_FROM_DITHER0
-	},
-	{
-		DDP_COMPONENT_OVL_2L0, DDP_COMPONENT_RDMA1,
-		MT8186_DISP_OVL0_2L_MOUT_EN, MT8186_OVL0_2L_MOUT_EN_MASK,
-		MT8186_OVL0_2L_MOUT_TO_RDMA1
-	},
-	{
-		DDP_COMPONENT_OVL_2L0, DDP_COMPONENT_RDMA1,
-		MT8186_DISP_RDMA1_SEL_IN, MT8186_RDMA1_SEL_IN_MASK,
-		MT8186_RDMA1_FROM_OVL0_2L
-	},
-	{
-		DDP_COMPONENT_OVL_2L0, DDP_COMPONENT_RDMA1,
-		MT8186_MMSYS_OVL_CON, MT8186_MMSYS_OVL0_2L_CON_MASK,
-		MT8186_OVL0_2L_GO_BLEND
-	},
-	{
-		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI0,
-		MT8186_DISP_RDMA1_MOUT_EN, MT8186_RDMA1_MOUT_EN_MASK,
-		MT8186_RDMA1_MOUT_TO_DPI0_SEL
-	},
-	{
-		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI0,
-		MT8186_DISP_DPI0_SEL_IN, MT8186_DPI0_SEL_IN_MASK,
-		MT8186_DPI0_FROM_RDMA1
-	},
+	MMSYS_ROUTE(OVL0, RDMA0,
+		    MT8186_DISP_OVL0_MOUT_EN, MT8186_OVL0_MOUT_EN_MASK,
+		    MT8186_OVL0_MOUT_TO_RDMA0),
+	MMSYS_ROUTE(OVL0, RDMA0,
+		    MT8186_DISP_RDMA0_SEL_IN, MT8186_RDMA0_SEL_IN_MASK,
+		    MT8186_RDMA0_FROM_OVL0),
+	MMSYS_ROUTE(OVL0, RDMA0,
+		    MT8186_MMSYS_OVL_CON, MT8186_MMSYS_OVL0_CON_MASK,
+		    MT8186_OVL0_GO_BLEND),
+	MMSYS_ROUTE(RDMA0, COLOR0,
+		    MT8186_DISP_RDMA0_SOUT_SEL, MT8186_RDMA0_SOUT_SEL_MASK,
+		    MT8186_RDMA0_SOUT_TO_COLOR0),
+	MMSYS_ROUTE(DITHER0, DSI0,
+		    MT8186_DISP_DITHER0_MOUT_EN, MT8186_DITHER0_MOUT_EN_MASK,
+		    MT8186_DITHER0_MOUT_TO_DSI0),
+	MMSYS_ROUTE(DITHER0, DSI0,
+		    MT8186_DISP_DSI0_SEL_IN, MT8186_DSI0_SEL_IN_MASK,
+		    MT8186_DSI0_FROM_DITHER0),
+	MMSYS_ROUTE(OVL_2L0, RDMA1,
+		    MT8186_DISP_OVL0_2L_MOUT_EN, MT8186_OVL0_2L_MOUT_EN_MASK,
+		    MT8186_OVL0_2L_MOUT_TO_RDMA1),
+	MMSYS_ROUTE(OVL_2L0, RDMA1,
+		    MT8186_DISP_RDMA1_SEL_IN, MT8186_RDMA1_SEL_IN_MASK,
+		    MT8186_RDMA1_FROM_OVL0_2L),
+	MMSYS_ROUTE(OVL_2L0, RDMA1,
+		    MT8186_MMSYS_OVL_CON, MT8186_MMSYS_OVL0_2L_CON_MASK,
+		    MT8186_OVL0_2L_GO_BLEND),
+	MMSYS_ROUTE(RDMA1, DPI0,
+		    MT8186_DISP_RDMA1_MOUT_EN, MT8186_RDMA1_MOUT_EN_MASK,
+		    MT8186_RDMA1_MOUT_TO_DPI0_SEL),
+	MMSYS_ROUTE(RDMA1, DPI0,
+		    MT8186_DISP_DPI0_SEL_IN, MT8186_DPI0_SEL_IN_MASK,
+		    MT8186_DPI0_FROM_RDMA1),
 };
 
 #endif /* __SOC_MEDIATEK_MT8186_MMSYS_H */
+117 -149
drivers/soc/mediatek/mt8188-mmsys.h
···
 };
 
 static const struct mtk_mmsys_routes mmsys_mt8188_routing_table[] = {
-	{
-		DDP_COMPONENT_OVL0, DDP_COMPONENT_RDMA0,
-		MT8188_VDO0_OVL_MOUT_EN, MT8188_MOUT_DISP_OVL0_TO_DISP_RDMA0,
-		MT8188_MOUT_DISP_OVL0_TO_DISP_RDMA0
-	}, {
-		DDP_COMPONENT_OVL0, DDP_COMPONENT_WDMA0,
-		MT8188_VDO0_OVL_MOUT_EN, MT8188_MOUT_DISP_OVL0_TO_DISP_WDMA0,
-		MT8188_MOUT_DISP_OVL0_TO_DISP_WDMA0
-	}, {
-		DDP_COMPONENT_OVL0, DDP_COMPONENT_RDMA0,
-		MT8188_VDO0_DISP_RDMA_SEL, MT8188_SEL_IN_DISP_RDMA0_FROM_MASK,
-		MT8188_SEL_IN_DISP_RDMA0_FROM_DISP_OVL0
-	}, {
-		DDP_COMPONENT_DITHER0, DDP_COMPONENT_DSI0,
-		MT8188_VDO0_DSI0_SEL_IN, MT8188_SEL_IN_DSI0_FROM_MASK,
-		MT8188_SEL_IN_DSI0_FROM_DISP_DITHER0
-	}, {
-		DDP_COMPONENT_DITHER0, DDP_COMPONENT_MERGE0,
-		MT8188_VDO0_VPP_MERGE_SEL, MT8188_SEL_IN_VPP_MERGE_FROM_MASK,
-		MT8188_SEL_IN_VPP_MERGE_FROM_DITHER0_OUT
-	}, {
-		DDP_COMPONENT_DITHER0, DDP_COMPONENT_DSC0,
-		MT8188_VDO0_DSC_WARP_SEL,
-		MT8188_SEL_IN_DSC_WRAP0C0_IN_FROM_MASK,
-		MT8188_SEL_IN_DSC_WRAP0C0_IN_FROM_DISP_DITHER0
-	}, {
-		DDP_COMPONENT_DITHER0, DDP_COMPONENT_DP_INTF0,
-		MT8188_VDO0_DP_INTF0_SEL_IN, MT8188_SEL_IN_DP_INTF0_FROM_MASK,
-		MT8188_SEL_IN_DP_INTF0_FROM_DISP_DITHER0
-	}, {
-		DDP_COMPONENT_DSC0, DDP_COMPONENT_MERGE0,
-		MT8188_VDO0_VPP_MERGE_SEL, MT8188_SEL_IN_VPP_MERGE_FROM_MASK,
-		MT8188_SEL_IN_VPP_MERGE_FROM_DSC_WRAP0_OUT
-	}, {
-		DDP_COMPONENT_DSC0, DDP_COMPONENT_DSI0,
-		MT8188_VDO0_DSI0_SEL_IN, MT8188_SEL_IN_DSI0_FROM_MASK,
-		MT8188_SEL_IN_DSI0_FROM_DSC_WRAP0_OUT
-	}, {
-		DDP_COMPONENT_RDMA0, DDP_COMPONENT_COLOR0,
-		MT8188_VDO0_DISP_RDMA_SEL, MT8188_SOUT_DISP_RDMA0_TO_MASK,
-		MT8188_SOUT_DISP_RDMA0_TO_DISP_COLOR0
-	}, {
-		DDP_COMPONENT_DITHER0, DDP_COMPONENT_DSI0,
-		MT8188_VDO0_DISP_DITHER0_SEL_OUT,
-		MT8188_SOUT_DISP_DITHER0_TO_MASK,
-		MT8188_SOUT_DISP_DITHER0_TO_DSI0
-	}, {
-		DDP_COMPONENT_DITHER0, DDP_COMPONENT_DP_INTF0,
-		MT8188_VDO0_DISP_DITHER0_SEL_OUT,
-		MT8188_SOUT_DISP_DITHER0_TO_MASK,
-		MT8188_SOUT_DISP_DITHER0_TO_DP_INTF0
-	}, {
-		DDP_COMPONENT_MERGE0, DDP_COMPONENT_DP_INTF0,
-		MT8188_VDO0_VPP_MERGE_SEL, MT8188_SOUT_VPP_MERGE_TO_MASK,
-		MT8188_SOUT_VPP_MERGE_TO_DP_INTF0
-	}, {
-		DDP_COMPONENT_MERGE0, DDP_COMPONENT_DPI0,
-		MT8188_VDO0_VPP_MERGE_SEL, MT8188_SOUT_VPP_MERGE_TO_MASK,
-		MT8188_SOUT_VPP_MERGE_TO_SINA_VIRTUAL0
-	}, {
-		DDP_COMPONENT_MERGE0, DDP_COMPONENT_WDMA0,
-		MT8188_VDO0_VPP_MERGE_SEL, MT8188_SOUT_VPP_MERGE_TO_MASK,
-		MT8188_SOUT_VPP_MERGE_TO_DISP_WDMA0
-	}, {
-		DDP_COMPONENT_MERGE0, DDP_COMPONENT_DSC0,
-		MT8188_VDO0_VPP_MERGE_SEL, MT8188_SOUT_VPP_MERGE_TO_MASK,
-		MT8188_SOUT_VPP_MERGE_TO_DSC_WRAP0_IN
-	}, {
-		DDP_COMPONENT_DSC0, DDP_COMPONENT_DSI0,
-		MT8188_VDO0_DSC_WARP_SEL, MT8188_SOUT_DSC_WRAP0_OUT_TO_MASK,
-		MT8188_SOUT_DSC_WRAP0_OUT_TO_DSI0
-	}, {
-		DDP_COMPONENT_DSC0, DDP_COMPONENT_MERGE0,
-		MT8188_VDO0_DSC_WARP_SEL, MT8188_SOUT_DSC_WRAP0_OUT_TO_MASK,
-		MT8188_SOUT_DSC_WRAP0_OUT_TO_VPP_MERGE
-	},
+	MMSYS_ROUTE(OVL0, RDMA0,
+		    MT8188_VDO0_OVL_MOUT_EN, MT8188_MOUT_DISP_OVL0_TO_DISP_RDMA0,
+		    MT8188_MOUT_DISP_OVL0_TO_DISP_RDMA0),
+	MMSYS_ROUTE(OVL0, WDMA0,
+		    MT8188_VDO0_OVL_MOUT_EN, MT8188_MOUT_DISP_OVL0_TO_DISP_WDMA0,
+		    MT8188_MOUT_DISP_OVL0_TO_DISP_WDMA0),
+	MMSYS_ROUTE(OVL0, RDMA0,
+		    MT8188_VDO0_DISP_RDMA_SEL, MT8188_SEL_IN_DISP_RDMA0_FROM_MASK,
+		    MT8188_SEL_IN_DISP_RDMA0_FROM_DISP_OVL0),
+	MMSYS_ROUTE(DITHER0, DSI0,
+		    MT8188_VDO0_DSI0_SEL_IN, MT8188_SEL_IN_DSI0_FROM_MASK,
+		    MT8188_SEL_IN_DSI0_FROM_DISP_DITHER0),
+	MMSYS_ROUTE(DITHER0, MERGE0,
+		    MT8188_VDO0_VPP_MERGE_SEL, MT8188_SEL_IN_VPP_MERGE_FROM_MASK,
+		    MT8188_SEL_IN_DP_INTF0_FROM_DISP_DITHER0),
+	MMSYS_ROUTE(DITHER0, DSC0,
+		    MT8188_VDO0_DSC_WARP_SEL, MT8188_SEL_IN_DSC_WRAP0C0_IN_FROM_MASK,
+		    MT8188_SEL_IN_DSC_WRAP0C0_IN_FROM_DISP_DITHER0),
+	MMSYS_ROUTE(DITHER0, DP_INTF0,
+		    MT8188_VDO0_DP_INTF0_SEL_IN, MT8188_SEL_IN_DP_INTF0_FROM_MASK,
+		    MT8188_SEL_IN_DP_INTF0_FROM_DISP_DITHER0),
+	MMSYS_ROUTE(DSC0, MERGE0,
+		    MT8188_VDO0_VPP_MERGE_SEL, MT8188_SEL_IN_VPP_MERGE_FROM_MASK,
+		    MT8188_SEL_IN_VPP_MERGE_FROM_DSC_WRAP0_OUT),
+	MMSYS_ROUTE(MERGE0, DP_INTF0,
+		    MT8188_VDO0_DP_INTF0_SEL_IN, MT8188_SEL_IN_DP_INTF0_FROM_MASK,
+		    MT8188_SEL_IN_DP_INTF0_FROM_VPP_MERGE),
+	MMSYS_ROUTE(DSC0, DSI0,
+		    MT8188_VDO0_DSI0_SEL_IN, MT8188_SEL_IN_DSI0_FROM_MASK,
+		    MT8188_SEL_IN_DSI0_FROM_DSC_WRAP0_OUT),
+	MMSYS_ROUTE(RDMA0, COLOR0,
+		    MT8188_VDO0_DISP_RDMA_SEL, GENMASK(1, 0),
+		    MT8188_SOUT_DISP_RDMA0_TO_DISP_COLOR0),
+	MMSYS_ROUTE(DITHER0, DSC0,
+		    MT8188_VDO0_DISP_DITHER0_SEL_OUT, MT8188_SOUT_DISP_DITHER0_TO_MASK,
+		    MT8188_SOUT_DISP_DITHER0_TO_DSC_WRAP0_IN),
+	MMSYS_ROUTE(DITHER0, DSI0,
+		    MT8188_VDO0_DISP_DITHER0_SEL_OUT, MT8188_SOUT_DISP_DITHER0_TO_MASK,
+		    MT8188_SOUT_DISP_DITHER0_TO_DSI0),
+	MMSYS_ROUTE(DITHER0, MERGE0,
+		    MT8188_VDO0_DISP_DITHER0_SEL_OUT, MT8188_SOUT_DISP_DITHER0_TO_MASK,
+		    MT8188_SOUT_DISP_DITHER0_TO_VPP_MERGE0),
+	MMSYS_ROUTE(DITHER0, DP_INTF0,
+		    MT8188_VDO0_DISP_DITHER0_SEL_OUT, MT8188_SOUT_DISP_DITHER0_TO_MASK,
+		    MT8188_SOUT_DISP_DITHER0_TO_DP_INTF0),
+	MMSYS_ROUTE(MERGE0, DP_INTF0,
+		    MT8188_VDO0_VPP_MERGE_SEL, MT8188_SOUT_VPP_MERGE_TO_MASK,
+		    MT8188_SOUT_VPP_MERGE_TO_DP_INTF0),
+	MMSYS_ROUTE(MERGE0, DPI0,
+		    MT8188_VDO0_VPP_MERGE_SEL, MT8188_SOUT_VPP_MERGE_TO_MASK,
+		    MT8188_SOUT_VPP_MERGE_TO_SINA_VIRTUAL0),
+	MMSYS_ROUTE(MERGE0, WDMA0,
+		    MT8188_VDO0_VPP_MERGE_SEL, MT8188_SOUT_VPP_MERGE_TO_MASK,
+		    MT8188_SOUT_VPP_MERGE_TO_DISP_WDMA0),
+	MMSYS_ROUTE(MERGE0, DSC0,
+		    MT8188_VDO0_VPP_MERGE_SEL, MT8188_SOUT_VPP_MERGE_TO_MASK,
+		    MT8188_SOUT_VPP_MERGE_TO_DSC_WRAP0_IN),
+	MMSYS_ROUTE(DSC0, DSI0,
+		    MT8188_VDO0_DSC_WARP_SEL, MT8188_SOUT_DSC_WRAP0_OUT_TO_MASK,
+		    MT8188_SOUT_DSC_WRAP0_OUT_TO_DSI0),
+	MMSYS_ROUTE(DSC0, MERGE0,
+		    MT8188_VDO0_DSC_WARP_SEL, MT8188_SOUT_DSC_WRAP0_OUT_TO_MASK,
+		    MT8188_SOUT_DSC_WRAP0_OUT_TO_VPP_MERGE),
 };
 
 static const struct mtk_mmsys_routes mmsys_mt8188_vdo1_routing_table[] = {
-	{
-		DDP_COMPONENT_MDP_RDMA0, DDP_COMPONENT_MERGE1,
-		MT8188_VDO1_VPP_MERGE0_P0_SEL_IN, GENMASK(0, 0),
-		MT8188_VPP_MERGE0_P0_SEL_IN_FROM_MDP_RDMA0
-	}, {
-		DDP_COMPONENT_MDP_RDMA1, DDP_COMPONENT_MERGE1,
-		MT8188_VDO1_VPP_MERGE0_P1_SEL_IN, GENMASK(0, 0),
-		MT8188_VPP_MERGE0_P1_SEL_IN_FROM_MDP_RDMA1
-	}, {
-		DDP_COMPONENT_MDP_RDMA2, DDP_COMPONENT_MERGE2,
-		MT8188_VDO1_VPP_MERGE1_P0_SEL_IN, GENMASK(0, 0),
-		MT8188_VPP_MERGE1_P0_SEL_IN_FROM_MDP_RDMA2
-	}, {
-		DDP_COMPONENT_MERGE1, DDP_COMPONENT_ETHDR_MIXER,
-		MT8188_VDO1_MERGE0_ASYNC_SOUT_SEL, GENMASK(1, 0),
-		MT8188_SOUT_TO_MIXER_IN1_SEL
-	}, {
-		DDP_COMPONENT_MERGE2, DDP_COMPONENT_ETHDR_MIXER,
-		MT8188_VDO1_MERGE1_ASYNC_SOUT_SEL, GENMASK(1, 0),
-		MT8188_SOUT_TO_MIXER_IN2_SEL
-	}, {
-		DDP_COMPONENT_MERGE3, DDP_COMPONENT_ETHDR_MIXER,
-		MT8188_VDO1_MERGE2_ASYNC_SOUT_SEL, GENMASK(1, 0),
-		MT8188_SOUT_TO_MIXER_IN3_SEL
-	}, {
-		DDP_COMPONENT_MERGE4, DDP_COMPONENT_ETHDR_MIXER,
-		MT8188_VDO1_MERGE3_ASYNC_SOUT_SEL, GENMASK(1, 0),
-		MT8188_SOUT_TO_MIXER_IN4_SEL
-	}, {
-		DDP_COMPONENT_ETHDR_MIXER, DDP_COMPONENT_MERGE5,
-		MT8188_VDO1_MIXER_OUT_SOUT_SEL, GENMASK(0, 0),
-		MT8188_MIXER_SOUT_TO_MERGE4_ASYNC_SEL
-	}, {
-		DDP_COMPONENT_MERGE1, DDP_COMPONENT_ETHDR_MIXER,
-		MT8188_VDO1_MIXER_IN1_SEL_IN, GENMASK(0, 0),
-		MT8188_MIXER_IN1_SEL_IN_FROM_MERGE0_ASYNC_SOUT
-	}, {
-		DDP_COMPONENT_MERGE2, DDP_COMPONENT_ETHDR_MIXER,
-		MT8188_VDO1_MIXER_IN2_SEL_IN, GENMASK(0, 0),
-		MT8188_MIXER_IN2_SEL_IN_FROM_MERGE1_ASYNC_SOUT
-	}, {
-		DDP_COMPONENT_MERGE3, DDP_COMPONENT_ETHDR_MIXER,
-		MT8188_VDO1_MIXER_IN3_SEL_IN, GENMASK(0, 0),
-		MT8188_MIXER_IN3_SEL_IN_FROM_MERGE2_ASYNC_SOUT
-	}, {
-		DDP_COMPONENT_MERGE4, DDP_COMPONENT_ETHDR_MIXER,
-		MT8188_VDO1_MIXER_IN4_SEL_IN, GENMASK(0, 0),
-		MT8188_MIXER_IN4_SEL_IN_FROM_MERGE3_ASYNC_SOUT
-	}, {
-		DDP_COMPONENT_ETHDR_MIXER, DDP_COMPONENT_MERGE5,
-		MT8188_VDO1_MIXER_SOUT_SEL_IN, GENMASK(2, 0),
-		MT8188_MIXER_SOUT_SEL_IN_FROM_DISP_MIXER
-	}, {
-		DDP_COMPONENT_ETHDR_MIXER, DDP_COMPONENT_MERGE5,
-		MT8188_VDO1_MERGE4_ASYNC_SEL_IN, GENMASK(2, 0),
-		MT8188_MERGE4_ASYNC_SEL_IN_FROM_MIXER_OUT_SOUT
-	}, {
-		DDP_COMPONENT_MERGE5, DDP_COMPONENT_DPI1,
-		MT8188_VDO1_DISP_DPI1_SEL_IN, GENMASK(1, 0),
-		MT8188_DISP_DPI1_SEL_IN_FROM_VPP_MERGE4_MOUT
-	}, {
-		DDP_COMPONENT_MERGE5, DDP_COMPONENT_DPI1,
-		MT8188_VDO1_MERGE4_SOUT_SEL, GENMASK(1, 0),
-		MT8188_MERGE4_SOUT_TO_DPI1_SEL
-	}, {
-		DDP_COMPONENT_MERGE5, DDP_COMPONENT_DP_INTF1,
-		MT8188_VDO1_DISP_DP_INTF0_SEL_IN, GENMASK(1, 0),
-		MT8188_DISP_DP_INTF0_SEL_IN_FROM_VPP_MERGE4_MOUT
-	}, {
-		DDP_COMPONENT_MERGE5, DDP_COMPONENT_DP_INTF1,
-		MT8188_VDO1_MERGE4_SOUT_SEL, GENMASK(3, 0),
-		MT8188_MERGE4_SOUT_TO_DP_INTF0_SEL
-	}
+	MMSYS_ROUTE(MDP_RDMA0, MERGE1,
+		    MT8188_VDO1_VPP_MERGE0_P0_SEL_IN, GENMASK(0, 0),
+		    MT8188_VPP_MERGE0_P0_SEL_IN_FROM_MDP_RDMA0),
+	MMSYS_ROUTE(MDP_RDMA1, MERGE1,
+		    MT8188_VDO1_VPP_MERGE0_P1_SEL_IN, GENMASK(0, 0),
+		    MT8188_VPP_MERGE0_P1_SEL_IN_FROM_MDP_RDMA1),
+	MMSYS_ROUTE(MDP_RDMA2, MERGE2,
+		    MT8188_VDO1_VPP_MERGE1_P0_SEL_IN, GENMASK(0, 0),
+		    MT8188_VPP_MERGE1_P0_SEL_IN_FROM_MDP_RDMA2),
+	MMSYS_ROUTE(MERGE1, ETHDR_MIXER,
+		    MT8188_VDO1_MERGE0_ASYNC_SOUT_SEL, GENMASK(1, 0),
+		    MT8188_SOUT_TO_MIXER_IN1_SEL),
+	MMSYS_ROUTE(MERGE2, ETHDR_MIXER,
+		    MT8188_VDO1_MERGE1_ASYNC_SOUT_SEL, GENMASK(1, 0),
+		    MT8188_SOUT_TO_MIXER_IN2_SEL),
+	MMSYS_ROUTE(MERGE3, ETHDR_MIXER,
+		    MT8188_VDO1_MERGE2_ASYNC_SOUT_SEL, GENMASK(1, 0),
+		    MT8188_SOUT_TO_MIXER_IN3_SEL),
+	MMSYS_ROUTE(MERGE4, ETHDR_MIXER,
+		    MT8188_VDO1_MERGE3_ASYNC_SOUT_SEL, GENMASK(1, 0),
+		    MT8188_SOUT_TO_MIXER_IN4_SEL),
+	MMSYS_ROUTE(ETHDR_MIXER, MERGE5,
+		    MT8188_VDO1_MIXER_OUT_SOUT_SEL, GENMASK(0, 0),
+		    MT8188_MIXER_SOUT_TO_MERGE4_ASYNC_SEL),
+	MMSYS_ROUTE(MERGE1, ETHDR_MIXER,
+		    MT8188_VDO1_MIXER_IN1_SEL_IN, GENMASK(0, 0),
+		    MT8188_MIXER_IN1_SEL_IN_FROM_MERGE0_ASYNC_SOUT),
+	MMSYS_ROUTE(MERGE2, ETHDR_MIXER,
+		    MT8188_VDO1_MIXER_IN2_SEL_IN, GENMASK(0, 0),
+		    MT8188_MIXER_IN2_SEL_IN_FROM_MERGE1_ASYNC_SOUT),
+	MMSYS_ROUTE(MERGE3, ETHDR_MIXER,
+		    MT8188_VDO1_MIXER_IN3_SEL_IN, GENMASK(0, 0),
+		    MT8188_MIXER_IN3_SEL_IN_FROM_MERGE2_ASYNC_SOUT),
+	MMSYS_ROUTE(MERGE4, ETHDR_MIXER,
+		    MT8188_VDO1_MIXER_IN4_SEL_IN, GENMASK(0, 0),
+		    MT8188_MIXER_IN4_SEL_IN_FROM_MERGE3_ASYNC_SOUT),
+	MMSYS_ROUTE(ETHDR_MIXER, MERGE5,
+		    MT8188_VDO1_MIXER_SOUT_SEL_IN, GENMASK(2, 0),
+		    MT8188_MIXER_SOUT_SEL_IN_FROM_DISP_MIXER),
+	MMSYS_ROUTE(ETHDR_MIXER, MERGE5,
+		    MT8188_VDO1_MERGE4_ASYNC_SEL_IN, GENMASK(2, 0),
+		    MT8188_MERGE4_ASYNC_SEL_IN_FROM_MIXER_OUT_SOUT),
+	MMSYS_ROUTE(MERGE5, DPI1,
+		    MT8188_VDO1_DISP_DPI1_SEL_IN, GENMASK(1, 0),
+		    MT8188_DISP_DPI1_SEL_IN_FROM_VPP_MERGE4_MOUT),
+	MMSYS_ROUTE(MERGE5, DPI1,
+		    MT8188_VDO1_MERGE4_SOUT_SEL, GENMASK(3, 0),
+		    MT8188_MERGE4_SOUT_TO_DPI1_SEL),
+	MMSYS_ROUTE(MERGE5, DP_INTF1,
+		    MT8188_VDO1_DISP_DP_INTF0_SEL_IN, GENMASK(1, 0),
+		    MT8188_DISP_DP_INTF0_SEL_IN_FROM_VPP_MERGE4_MOUT),
+	MMSYS_ROUTE(MERGE5, DP_INTF1,
+		    MT8188_VDO1_MERGE4_SOUT_SEL, GENMASK(3, 0),
+		    MT8188_MERGE4_SOUT_TO_DP_INTF0_SEL),
 };
 
 #endif /* __SOC_MEDIATEK_MT8188_MMSYS_H */
+30 -41
drivers/soc/mediatek/mt8192-mmsys.h
···
 #define MT8192_DSI0_SEL_IN_DITHER0		0x1
 
 static const struct mtk_mmsys_routes mmsys_mt8192_routing_table[] = {
-	{
-		DDP_COMPONENT_OVL_2L0, DDP_COMPONENT_RDMA0,
-		MT8192_DISP_OVL0_2L_MOUT_EN, MT8192_OVL0_MOUT_EN_DISP_RDMA0,
-		MT8192_OVL0_MOUT_EN_DISP_RDMA0
-	}, {
-		DDP_COMPONENT_OVL_2L2, DDP_COMPONENT_RDMA4,
-		MT8192_DISP_OVL2_2L_MOUT_EN, MT8192_OVL2_2L_MOUT_EN_RDMA4,
-		MT8192_OVL2_2L_MOUT_EN_RDMA4
-	}, {
-		DDP_COMPONENT_DITHER0, DDP_COMPONENT_DSI0,
-		MT8192_DISP_DITHER0_MOUT_EN, MT8192_DITHER0_MOUT_IN_DSI0,
-		MT8192_DITHER0_MOUT_IN_DSI0
-	}, {
-		DDP_COMPONENT_OVL_2L0, DDP_COMPONENT_RDMA0,
-		MT8192_DISP_RDMA0_SEL_IN, MT8192_RDMA0_SEL_IN_OVL0_2L,
-		MT8192_RDMA0_SEL_IN_OVL0_2L
-	}, {
-		DDP_COMPONENT_CCORR, DDP_COMPONENT_AAL0,
-		MT8192_DISP_AAL0_SEL_IN, MT8192_AAL0_SEL_IN_CCORR0,
-		MT8192_AAL0_SEL_IN_CCORR0
-	}, {
-		DDP_COMPONENT_DITHER0, DDP_COMPONENT_DSI0,
-		MT8192_DISP_DSI0_SEL_IN, MT8192_DSI0_SEL_IN_DITHER0,
-		MT8192_DSI0_SEL_IN_DITHER0
-	}, {
-		DDP_COMPONENT_RDMA0, DDP_COMPONENT_COLOR0,
-		MT8192_DISP_RDMA0_SOUT_SEL, MT8192_RDMA0_SOUT_COLOR0,
-		MT8192_RDMA0_SOUT_COLOR0
-	}, {
-		DDP_COMPONENT_CCORR, DDP_COMPONENT_AAL0,
-		MT8192_DISP_CCORR0_SOUT_SEL, MT8192_CCORR0_SOUT_AAL0,
-		MT8192_CCORR0_SOUT_AAL0
-	}, {
-		DDP_COMPONENT_OVL0, DDP_COMPONENT_OVL_2L0,
-		MT8192_MMSYS_OVL_MOUT_EN, MT8192_DISP_OVL0_GO_BG,
-		MT8192_DISP_OVL0_GO_BG
-	}, {
-		DDP_COMPONENT_OVL_2L0, DDP_COMPONENT_RDMA0,
-		MT8192_MMSYS_OVL_MOUT_EN, MT8192_DISP_OVL0_2L_GO_BLEND,
-		MT8192_DISP_OVL0_2L_GO_BLEND
-	}
+	MMSYS_ROUTE(OVL_2L0, RDMA0,
+		    MT8192_DISP_OVL0_2L_MOUT_EN, MT8192_OVL0_MOUT_EN_DISP_RDMA0,
+		    MT8192_OVL0_MOUT_EN_DISP_RDMA0),
+	MMSYS_ROUTE(OVL_2L2, RDMA4,
+		    MT8192_DISP_OVL2_2L_MOUT_EN, MT8192_OVL2_2L_MOUT_EN_RDMA4,
+		    MT8192_OVL2_2L_MOUT_EN_RDMA4),
+	MMSYS_ROUTE(DITHER0, DSI0,
+		    MT8192_DISP_DITHER0_MOUT_EN, MT8192_DITHER0_MOUT_IN_DSI0,
+		    MT8192_DITHER0_MOUT_IN_DSI0),
+	MMSYS_ROUTE(OVL_2L0, RDMA0,
+		    MT8192_DISP_RDMA0_SEL_IN, MT8192_RDMA0_SEL_IN_OVL0_2L,
+		    MT8192_RDMA0_SEL_IN_OVL0_2L),
+	MMSYS_ROUTE(CCORR, AAL0,
+		    MT8192_DISP_AAL0_SEL_IN, MT8192_AAL0_SEL_IN_CCORR0,
+		    MT8192_AAL0_SEL_IN_CCORR0),
+	MMSYS_ROUTE(DITHER0, DSI0,
+		    MT8192_DISP_DSI0_SEL_IN, MT8192_DSI0_SEL_IN_DITHER0,
+		    MT8192_DSI0_SEL_IN_DITHER0),
+	MMSYS_ROUTE(RDMA0, COLOR0,
+		    MT8192_DISP_RDMA0_SOUT_SEL, MT8192_RDMA0_SOUT_COLOR0,
+		    MT8192_RDMA0_SOUT_COLOR0),
+	MMSYS_ROUTE(CCORR, AAL0,
+		    MT8192_DISP_CCORR0_SOUT_SEL, MT8192_CCORR0_SOUT_AAL0,
+		    MT8192_CCORR0_SOUT_AAL0),
+	MMSYS_ROUTE(OVL0, OVL_2L0,
+		    MT8192_MMSYS_OVL_MOUT_EN, MT8192_DISP_OVL0_GO_BG,
+		    MT8192_DISP_OVL0_GO_BG),
+	MMSYS_ROUTE(OVL_2L0, RDMA0,
+		    MT8192_MMSYS_OVL_MOUT_EN, MT8192_DISP_OVL0_2L_GO_BLEND,
+		    MT8192_DISP_OVL0_2L_GO_BLEND),
 };
 
 #endif /* __SOC_MEDIATEK_MT8192_MMSYS_H */
+270 -362
drivers/soc/mediatek/mt8195-mmsys.h
···
 #define MT8195_SVPP3_MDP_RSZ			BIT(5)
 
 static const struct mtk_mmsys_routes mmsys_mt8195_routing_table[] = {
-	{
-		DDP_COMPONENT_OVL0, DDP_COMPONENT_RDMA0,
-		MT8195_VDO0_OVL_MOUT_EN, MT8195_MOUT_DISP_OVL0_TO_DISP_RDMA0,
-		MT8195_MOUT_DISP_OVL0_TO_DISP_RDMA0
-	}, {
-		DDP_COMPONENT_OVL0, DDP_COMPONENT_WDMA0,
-		MT8195_VDO0_OVL_MOUT_EN, MT8195_MOUT_DISP_OVL0_TO_DISP_WDMA0,
-		MT8195_MOUT_DISP_OVL0_TO_DISP_WDMA0
-	}, {
-		DDP_COMPONENT_OVL0, DDP_COMPONENT_OVL1,
-		MT8195_VDO0_OVL_MOUT_EN, MT8195_MOUT_DISP_OVL0_TO_DISP_OVL1,
-		MT8195_MOUT_DISP_OVL0_TO_DISP_OVL1
-	}, {
-		DDP_COMPONENT_OVL1, DDP_COMPONENT_RDMA1,
-		MT8195_VDO0_OVL_MOUT_EN, MT8195_MOUT_DISP_OVL1_TO_DISP_RDMA1,
-		MT8195_MOUT_DISP_OVL1_TO_DISP_RDMA1
-	}, {
-		DDP_COMPONENT_OVL1, DDP_COMPONENT_WDMA1,
-		MT8195_VDO0_OVL_MOUT_EN, MT8195_MOUT_DISP_OVL1_TO_DISP_WDMA1,
-		MT8195_MOUT_DISP_OVL1_TO_DISP_WDMA1
-	}, {
-		DDP_COMPONENT_OVL1, DDP_COMPONENT_OVL0,
-		MT8195_VDO0_OVL_MOUT_EN, MT8195_MOUT_DISP_OVL1_TO_DISP_OVL0,
-		MT8195_MOUT_DISP_OVL1_TO_DISP_OVL0
-	}, {
-		DDP_COMPONENT_DSC0, DDP_COMPONENT_MERGE0,
-		MT8195_VDO0_SEL_IN, MT8195_SEL_IN_VPP_MERGE_FROM_MASK,
-		MT8195_SEL_IN_VPP_MERGE_FROM_DSC_WRAP0_OUT
-	}, {
-		DDP_COMPONENT_DITHER1, DDP_COMPONENT_MERGE0,
-		MT8195_VDO0_SEL_IN, MT8195_SEL_IN_VPP_MERGE_FROM_MASK,
-		MT8195_SEL_IN_VPP_MERGE_FROM_DISP_DITHER1
-	}, {
-		DDP_COMPONENT_MERGE5, DDP_COMPONENT_MERGE0,
-		MT8195_VDO0_SEL_IN, MT8195_SEL_IN_VPP_MERGE_FROM_MASK,
-		MT8195_SEL_IN_VPP_MERGE_FROM_VDO1_VIRTUAL0
-	}, {
-		DDP_COMPONENT_DITHER0, DDP_COMPONENT_DSC0,
-		MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSC_WRAP0_IN_FROM_MASK,
-		MT8195_SEL_IN_DSC_WRAP0_IN_FROM_DISP_DITHER0
-	}, {
-		DDP_COMPONENT_MERGE0, DDP_COMPONENT_DSC0,
-		MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSC_WRAP0_IN_FROM_MASK,
-		MT8195_SEL_IN_DSC_WRAP0_IN_FROM_VPP_MERGE
-	}, {
-		DDP_COMPONENT_DITHER1, DDP_COMPONENT_DSC1,
-		MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSC_WRAP1_IN_FROM_MASK,
-		MT8195_SEL_IN_DSC_WRAP1_IN_FROM_DISP_DITHER1
-	}, {
-		DDP_COMPONENT_MERGE0, DDP_COMPONENT_DSC1,
-		MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSC_WRAP1_IN_FROM_MASK,
-		MT8195_SEL_IN_DSC_WRAP1_IN_FROM_VPP_MERGE
-	}, {
-		DDP_COMPONENT_MERGE0, DDP_COMPONENT_DP_INTF1,
-		MT8195_VDO0_SEL_IN, MT8195_SEL_IN_SINA_VIRTUAL0_FROM_MASK,
-		MT8195_SEL_IN_SINA_VIRTUAL0_FROM_VPP_MERGE
-	}, {
-		DDP_COMPONENT_MERGE0, DDP_COMPONENT_DPI0,
-		MT8195_VDO0_SEL_IN, MT8195_SEL_IN_SINA_VIRTUAL0_FROM_MASK,
-		MT8195_SEL_IN_SINA_VIRTUAL0_FROM_VPP_MERGE
-	}, {
-		DDP_COMPONENT_MERGE0, DDP_COMPONENT_DPI1,
-		MT8195_VDO0_SEL_IN, MT8195_SEL_IN_SINA_VIRTUAL0_FROM_MASK,
-		MT8195_SEL_IN_SINA_VIRTUAL0_FROM_VPP_MERGE
-	}, {
-		DDP_COMPONENT_DSC1, DDP_COMPONENT_DP_INTF1,
-		MT8195_VDO0_SEL_IN, MT8195_SEL_IN_SINA_VIRTUAL0_FROM_MASK,
-		MT8195_SEL_IN_SINA_VIRTUAL0_FROM_DSC_WRAP1_OUT
-	}, {
-		DDP_COMPONENT_DSC1, DDP_COMPONENT_DPI0,
-		MT8195_VDO0_SEL_IN, MT8195_SEL_IN_SINA_VIRTUAL0_FROM_MASK,
-		MT8195_SEL_IN_SINA_VIRTUAL0_FROM_DSC_WRAP1_OUT
-	}, {
-		DDP_COMPONENT_DSC1, DDP_COMPONENT_DPI1,
-		MT8195_VDO0_SEL_IN, MT8195_SEL_IN_SINA_VIRTUAL0_FROM_MASK,
-		MT8195_SEL_IN_SINA_VIRTUAL0_FROM_DSC_WRAP1_OUT
-	}, {
-		DDP_COMPONENT_DSC0, DDP_COMPONENT_DP_INTF1,
-		MT8195_VDO0_SEL_IN, MT8195_SEL_IN_SINB_VIRTUAL0_FROM_MASK,
-		MT8195_SEL_IN_SINB_VIRTUAL0_FROM_DSC_WRAP0_OUT
-	}, {
-		DDP_COMPONENT_DSC0, DDP_COMPONENT_DPI0,
-		MT8195_VDO0_SEL_IN, MT8195_SEL_IN_SINB_VIRTUAL0_FROM_MASK,
-		MT8195_SEL_IN_SINB_VIRTUAL0_FROM_DSC_WRAP0_OUT
-	}, {
-		DDP_COMPONENT_DSC0, DDP_COMPONENT_DPI1,
-		MT8195_VDO0_SEL_IN, MT8195_SEL_IN_SINB_VIRTUAL0_FROM_MASK,
-		MT8195_SEL_IN_SINB_VIRTUAL0_FROM_DSC_WRAP0_OUT
-	}, {
-		DDP_COMPONENT_DSC1, DDP_COMPONENT_DP_INTF0,
-		MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DP_INTF0_FROM_MASK,
-		MT8195_SEL_IN_DP_INTF0_FROM_DSC_WRAP1_OUT
-	}, {
-		DDP_COMPONENT_MERGE0, DDP_COMPONENT_DP_INTF0,
-		MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DP_INTF0_FROM_MASK,
-		MT8195_SEL_IN_DP_INTF0_FROM_VPP_MERGE
-	}, {
-		DDP_COMPONENT_MERGE5, DDP_COMPONENT_DP_INTF0,
-		MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DP_INTF0_FROM_MASK,
-		MT8195_SEL_IN_DP_INTF0_FROM_VDO1_VIRTUAL0
-	}, {
-		DDP_COMPONENT_DSC0, DDP_COMPONENT_DSI0,
-		MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSI0_FROM_MASK,
-		MT8195_SEL_IN_DSI0_FROM_DSC_WRAP0_OUT
-	}, {
-		DDP_COMPONENT_DITHER0, DDP_COMPONENT_DSI0,
-		MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSI0_FROM_MASK,
-		MT8195_SEL_IN_DSI0_FROM_DISP_DITHER0
-	}, {
-		DDP_COMPONENT_DSC1, DDP_COMPONENT_DSI1,
-		MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSI1_FROM_MASK,
-		MT8195_SEL_IN_DSI1_FROM_DSC_WRAP1_OUT
-	}, {
-		DDP_COMPONENT_MERGE0, DDP_COMPONENT_DSI1,
-		MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSI1_FROM_MASK,
-		MT8195_SEL_IN_DSI1_FROM_VPP_MERGE
-	}, {
-		DDP_COMPONENT_OVL1, DDP_COMPONENT_WDMA1,
-		MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DISP_WDMA1_FROM_MASK,
-		MT8195_SEL_IN_DISP_WDMA1_FROM_DISP_OVL1
-	}, {
-		DDP_COMPONENT_MERGE0, DDP_COMPONENT_WDMA1,
-		MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DISP_WDMA1_FROM_MASK,
-		MT8195_SEL_IN_DISP_WDMA1_FROM_VPP_MERGE
-	}, {
-		DDP_COMPONENT_DSC1, DDP_COMPONENT_DSI1,
-		MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSC_WRAP1_FROM_MASK,
-		MT8195_SEL_IN_DSC_WRAP1_OUT_FROM_DSC_WRAP1_IN
-	}, {
-		DDP_COMPONENT_DSC1, DDP_COMPONENT_DP_INTF0,
-		MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSC_WRAP1_FROM_MASK,
-		MT8195_SEL_IN_DSC_WRAP1_OUT_FROM_DSC_WRAP1_IN
-	}, {
-		DDP_COMPONENT_DSC1, DDP_COMPONENT_DP_INTF1,
-		MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSC_WRAP1_FROM_MASK,
-		MT8195_SEL_IN_DSC_WRAP1_OUT_FROM_DSC_WRAP1_IN
-	}, {
-		DDP_COMPONENT_DSC1, DDP_COMPONENT_DPI0,
MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSC_WRAP1_FROM_MASK, 302 - MT8195_SEL_IN_DSC_WRAP1_OUT_FROM_DSC_WRAP1_IN 303 - }, { 304 - DDP_COMPONENT_DSC1, DDP_COMPONENT_DPI1, 305 - MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSC_WRAP1_FROM_MASK, 306 - MT8195_SEL_IN_DSC_WRAP1_OUT_FROM_DSC_WRAP1_IN 307 - }, { 308 - DDP_COMPONENT_DSC1, DDP_COMPONENT_MERGE0, 309 - MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSC_WRAP1_FROM_MASK, 310 - MT8195_SEL_IN_DSC_WRAP1_OUT_FROM_DSC_WRAP1_IN 311 - }, { 312 - DDP_COMPONENT_DITHER1, DDP_COMPONENT_DSI1, 313 - MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSC_WRAP1_FROM_MASK, 314 - MT8195_SEL_IN_DSC_WRAP1_OUT_FROM_DISP_DITHER1 315 - }, { 316 - DDP_COMPONENT_DITHER1, DDP_COMPONENT_DP_INTF0, 317 - MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSC_WRAP1_FROM_MASK, 318 - MT8195_SEL_IN_DSC_WRAP1_OUT_FROM_DISP_DITHER1 319 - }, { 320 - DDP_COMPONENT_DITHER1, DDP_COMPONENT_DPI0, 321 - MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSC_WRAP1_FROM_MASK, 322 - MT8195_SEL_IN_DSC_WRAP1_OUT_FROM_DISP_DITHER1 323 - }, { 324 - DDP_COMPONENT_DITHER1, DDP_COMPONENT_DPI1, 325 - MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSC_WRAP1_FROM_MASK, 326 - MT8195_SEL_IN_DSC_WRAP1_OUT_FROM_DISP_DITHER1 327 - }, { 328 - DDP_COMPONENT_OVL0, DDP_COMPONENT_WDMA0, 329 - MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DISP_WDMA0_FROM_MASK, 330 - MT8195_SEL_IN_DISP_WDMA0_FROM_DISP_OVL0 331 - }, { 332 - DDP_COMPONENT_DITHER0, DDP_COMPONENT_DSC0, 333 - MT8195_VDO0_SEL_OUT, MT8195_SOUT_DISP_DITHER0_TO_MASK, 334 - MT8195_SOUT_DISP_DITHER0_TO_DSC_WRAP0_IN 335 - }, { 336 - DDP_COMPONENT_DITHER0, DDP_COMPONENT_DSI0, 337 - MT8195_VDO0_SEL_OUT, MT8195_SOUT_DISP_DITHER0_TO_MASK, 338 - MT8195_SOUT_DISP_DITHER0_TO_DSI0 339 - }, { 340 - DDP_COMPONENT_DITHER1, DDP_COMPONENT_DSC1, 341 - MT8195_VDO0_SEL_OUT, MT8195_SOUT_DISP_DITHER1_TO_MASK, 342 - MT8195_SOUT_DISP_DITHER1_TO_DSC_WRAP1_IN 343 - }, { 344 - DDP_COMPONENT_DITHER1, DDP_COMPONENT_MERGE0, 345 - MT8195_VDO0_SEL_OUT, MT8195_SOUT_DISP_DITHER1_TO_MASK, 346 - MT8195_SOUT_DISP_DITHER1_TO_VPP_MERGE 347 - }, { 348 - 
DDP_COMPONENT_DITHER1, DDP_COMPONENT_DSI1, 349 - MT8195_VDO0_SEL_OUT, MT8195_SOUT_DISP_DITHER1_TO_MASK, 350 - MT8195_SOUT_DISP_DITHER1_TO_DSC_WRAP1_OUT 351 - }, { 352 - DDP_COMPONENT_DITHER1, DDP_COMPONENT_DP_INTF0, 353 - MT8195_VDO0_SEL_OUT, MT8195_SOUT_DISP_DITHER1_TO_MASK, 354 - MT8195_SOUT_DISP_DITHER1_TO_DSC_WRAP1_OUT 355 - }, { 356 - DDP_COMPONENT_DITHER1, DDP_COMPONENT_DP_INTF1, 357 - MT8195_VDO0_SEL_OUT, MT8195_SOUT_DISP_DITHER1_TO_MASK, 358 - MT8195_SOUT_DISP_DITHER1_TO_DSC_WRAP1_OUT 359 - }, { 360 - DDP_COMPONENT_DITHER1, DDP_COMPONENT_DPI0, 361 - MT8195_VDO0_SEL_OUT, MT8195_SOUT_DISP_DITHER1_TO_MASK, 362 - MT8195_SOUT_DISP_DITHER1_TO_DSC_WRAP1_OUT 363 - }, { 364 - DDP_COMPONENT_DITHER1, DDP_COMPONENT_DPI1, 365 - MT8195_VDO0_SEL_OUT, MT8195_SOUT_DISP_DITHER1_TO_MASK, 366 - MT8195_SOUT_DISP_DITHER1_TO_DSC_WRAP1_OUT 367 - }, { 368 - DDP_COMPONENT_MERGE5, DDP_COMPONENT_MERGE0, 369 - MT8195_VDO0_SEL_OUT, MT8195_SOUT_VDO1_VIRTUAL0_TO_MASK, 370 - MT8195_SOUT_VDO1_VIRTUAL0_TO_VPP_MERGE 371 - }, { 372 - DDP_COMPONENT_MERGE5, DDP_COMPONENT_DP_INTF0, 373 - MT8195_VDO0_SEL_OUT, MT8195_SOUT_VDO1_VIRTUAL0_TO_MASK, 374 - MT8195_SOUT_VDO1_VIRTUAL0_TO_DP_INTF0 375 - }, { 376 - DDP_COMPONENT_MERGE0, DDP_COMPONENT_DSI1, 377 - MT8195_VDO0_SEL_OUT, MT8195_SOUT_VPP_MERGE_TO_MASK, 378 - MT8195_SOUT_VPP_MERGE_TO_DSI1 379 - }, { 380 - DDP_COMPONENT_MERGE0, DDP_COMPONENT_DP_INTF0, 381 - MT8195_VDO0_SEL_OUT, MT8195_SOUT_VPP_MERGE_TO_MASK, 382 - MT8195_SOUT_VPP_MERGE_TO_DP_INTF0 383 - }, { 384 - DDP_COMPONENT_MERGE0, DDP_COMPONENT_DP_INTF1, 385 - MT8195_VDO0_SEL_OUT, MT8195_SOUT_VPP_MERGE_TO_MASK, 386 - MT8195_SOUT_VPP_MERGE_TO_SINA_VIRTUAL0 387 - }, { 388 - DDP_COMPONENT_MERGE0, DDP_COMPONENT_DPI0, 389 - MT8195_VDO0_SEL_OUT, MT8195_SOUT_VPP_MERGE_TO_MASK, 390 - MT8195_SOUT_VPP_MERGE_TO_SINA_VIRTUAL0 391 - }, { 392 - DDP_COMPONENT_MERGE0, DDP_COMPONENT_DPI1, 393 - MT8195_VDO0_SEL_OUT, MT8195_SOUT_VPP_MERGE_TO_MASK, 394 - MT8195_SOUT_VPP_MERGE_TO_SINA_VIRTUAL0 395 - }, { 396 - 
DDP_COMPONENT_MERGE0, DDP_COMPONENT_WDMA1, 397 - MT8195_VDO0_SEL_OUT, MT8195_SOUT_VPP_MERGE_TO_MASK, 398 - MT8195_SOUT_VPP_MERGE_TO_DISP_WDMA1 399 - }, { 400 - DDP_COMPONENT_MERGE0, DDP_COMPONENT_DSC0, 401 - MT8195_VDO0_SEL_OUT, MT8195_SOUT_VPP_MERGE_TO_MASK, 402 - MT8195_SOUT_VPP_MERGE_TO_DSC_WRAP0_IN 403 - }, { 404 - DDP_COMPONENT_MERGE0, DDP_COMPONENT_DSC1, 405 - MT8195_VDO0_SEL_OUT, MT8195_SOUT_VPP_MERGE_TO_DSC_WRAP1_IN_MASK, 406 - MT8195_SOUT_VPP_MERGE_TO_DSC_WRAP1_IN 407 - }, { 408 - DDP_COMPONENT_DSC0, DDP_COMPONENT_DSI0, 409 - MT8195_VDO0_SEL_OUT, MT8195_SOUT_DSC_WRAP0_OUT_TO_MASK, 410 - MT8195_SOUT_DSC_WRAP0_OUT_TO_DSI0 411 - }, { 412 - DDP_COMPONENT_DSC0, DDP_COMPONENT_DP_INTF1, 413 - MT8195_VDO0_SEL_OUT, MT8195_SOUT_DSC_WRAP0_OUT_TO_MASK, 414 - MT8195_SOUT_DSC_WRAP0_OUT_TO_SINB_VIRTUAL0 415 - }, { 416 - DDP_COMPONENT_DSC0, DDP_COMPONENT_DPI0, 417 - MT8195_VDO0_SEL_OUT, MT8195_SOUT_DSC_WRAP0_OUT_TO_MASK, 418 - MT8195_SOUT_DSC_WRAP0_OUT_TO_SINB_VIRTUAL0 419 - }, { 420 - DDP_COMPONENT_DSC0, DDP_COMPONENT_DPI1, 421 - MT8195_VDO0_SEL_OUT, MT8195_SOUT_DSC_WRAP0_OUT_TO_MASK, 422 - MT8195_SOUT_DSC_WRAP0_OUT_TO_SINB_VIRTUAL0 423 - }, { 424 - DDP_COMPONENT_DSC0, DDP_COMPONENT_MERGE0, 425 - MT8195_VDO0_SEL_OUT, MT8195_SOUT_DSC_WRAP0_OUT_TO_MASK, 426 - MT8195_SOUT_DSC_WRAP0_OUT_TO_VPP_MERGE 427 - }, { 428 - DDP_COMPONENT_DSC1, DDP_COMPONENT_DSI1, 429 - MT8195_VDO0_SEL_OUT, MT8195_SOUT_DSC_WRAP1_OUT_TO_MASK, 430 - MT8195_SOUT_DSC_WRAP1_OUT_TO_DSI1 431 - }, { 432 - DDP_COMPONENT_DSC1, DDP_COMPONENT_DP_INTF0, 433 - MT8195_VDO0_SEL_OUT, MT8195_SOUT_DSC_WRAP1_OUT_TO_MASK, 434 - MT8195_SOUT_DSC_WRAP1_OUT_TO_DP_INTF0 435 - }, { 436 - DDP_COMPONENT_DSC1, DDP_COMPONENT_DP_INTF1, 437 - MT8195_VDO0_SEL_OUT, MT8195_SOUT_DSC_WRAP1_OUT_TO_MASK, 438 - MT8195_SOUT_DSC_WRAP1_OUT_TO_SINA_VIRTUAL0 439 - }, { 440 - DDP_COMPONENT_DSC1, DDP_COMPONENT_DPI0, 441 - MT8195_VDO0_SEL_OUT, MT8195_SOUT_DSC_WRAP1_OUT_TO_MASK, 442 - MT8195_SOUT_DSC_WRAP1_OUT_TO_SINA_VIRTUAL0 443 - }, { 444 - 
DDP_COMPONENT_DSC1, DDP_COMPONENT_DPI1, 445 - MT8195_VDO0_SEL_OUT, MT8195_SOUT_DSC_WRAP1_OUT_TO_MASK, 446 - MT8195_SOUT_DSC_WRAP1_OUT_TO_SINA_VIRTUAL0 447 - }, { 448 - DDP_COMPONENT_DSC1, DDP_COMPONENT_MERGE0, 449 - MT8195_VDO0_SEL_OUT, MT8195_SOUT_DSC_WRAP1_OUT_TO_MASK, 450 - MT8195_SOUT_DSC_WRAP1_OUT_TO_VPP_MERGE 451 - } 163 + MMSYS_ROUTE(OVL0, RDMA0, 164 + MT8195_VDO0_OVL_MOUT_EN, MT8195_MOUT_DISP_OVL0_TO_DISP_RDMA0, 165 + MT8195_MOUT_DISP_OVL0_TO_DISP_RDMA0), 166 + MMSYS_ROUTE(OVL0, WDMA0, 167 + MT8195_VDO0_OVL_MOUT_EN, MT8195_MOUT_DISP_OVL0_TO_DISP_WDMA0, 168 + MT8195_MOUT_DISP_OVL0_TO_DISP_WDMA0), 169 + MMSYS_ROUTE(OVL0, OVL1, 170 + MT8195_VDO0_OVL_MOUT_EN, MT8195_MOUT_DISP_OVL0_TO_DISP_OVL1, 171 + MT8195_MOUT_DISP_OVL0_TO_DISP_OVL1), 172 + MMSYS_ROUTE(OVL1, RDMA1, 173 + MT8195_VDO0_OVL_MOUT_EN, MT8195_MOUT_DISP_OVL1_TO_DISP_RDMA1, 174 + MT8195_MOUT_DISP_OVL1_TO_DISP_RDMA1), 175 + MMSYS_ROUTE(OVL1, WDMA1, 176 + MT8195_VDO0_OVL_MOUT_EN, MT8195_MOUT_DISP_OVL1_TO_DISP_WDMA1, 177 + MT8195_MOUT_DISP_OVL1_TO_DISP_WDMA1), 178 + MMSYS_ROUTE(OVL1, OVL0, 179 + MT8195_VDO0_OVL_MOUT_EN, MT8195_MOUT_DISP_OVL1_TO_DISP_OVL0, 180 + MT8195_MOUT_DISP_OVL1_TO_DISP_OVL0), 181 + MMSYS_ROUTE(DSC0, MERGE0, 182 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_VPP_MERGE_FROM_MASK, 183 + MT8195_SEL_IN_VPP_MERGE_FROM_DSC_WRAP0_OUT), 184 + MMSYS_ROUTE(DITHER1, MERGE0, 185 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_VPP_MERGE_FROM_MASK, 186 + MT8195_SEL_IN_VPP_MERGE_FROM_DISP_DITHER1), 187 + MMSYS_ROUTE(MERGE5, MERGE0, 188 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_VPP_MERGE_FROM_MASK, 189 + MT8195_SEL_IN_VPP_MERGE_FROM_VDO1_VIRTUAL0), 190 + MMSYS_ROUTE(DITHER0, DSC0, 191 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSC_WRAP0_IN_FROM_MASK, 192 + MT8195_SEL_IN_DSC_WRAP0_IN_FROM_DISP_DITHER0), 193 + MMSYS_ROUTE(MERGE0, DSC0, 194 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSC_WRAP0_IN_FROM_MASK, 195 + MT8195_SEL_IN_DSC_WRAP0_IN_FROM_VPP_MERGE), 196 + MMSYS_ROUTE(DITHER1, DSC1, 197 + MT8195_VDO0_SEL_IN, 
MT8195_SEL_IN_DSC_WRAP1_IN_FROM_MASK, 198 + MT8195_SEL_IN_DSC_WRAP1_IN_FROM_DISP_DITHER1), 199 + MMSYS_ROUTE(MERGE0, DSC1, 200 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSC_WRAP1_IN_FROM_MASK, 201 + MT8195_SEL_IN_DSC_WRAP1_IN_FROM_VPP_MERGE), 202 + MMSYS_ROUTE(MERGE0, DP_INTF1, 203 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_SINA_VIRTUAL0_FROM_MASK, 204 + MT8195_SEL_IN_SINA_VIRTUAL0_FROM_VPP_MERGE), 205 + MMSYS_ROUTE(MERGE0, DPI0, 206 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_SINA_VIRTUAL0_FROM_MASK, 207 + MT8195_SEL_IN_SINA_VIRTUAL0_FROM_VPP_MERGE), 208 + MMSYS_ROUTE(MERGE0, DPI1, 209 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_SINA_VIRTUAL0_FROM_MASK, 210 + MT8195_SEL_IN_SINA_VIRTUAL0_FROM_VPP_MERGE), 211 + MMSYS_ROUTE(DSC1, DP_INTF1, 212 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_SINA_VIRTUAL0_FROM_MASK, 213 + MT8195_SEL_IN_SINA_VIRTUAL0_FROM_DSC_WRAP1_OUT), 214 + MMSYS_ROUTE(DSC1, DPI0, 215 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_SINA_VIRTUAL0_FROM_MASK, 216 + MT8195_SEL_IN_SINA_VIRTUAL0_FROM_DSC_WRAP1_OUT), 217 + MMSYS_ROUTE(DSC1, DPI1, 218 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_SINA_VIRTUAL0_FROM_MASK, 219 + MT8195_SEL_IN_SINA_VIRTUAL0_FROM_DSC_WRAP1_OUT), 220 + MMSYS_ROUTE(DSC0, DP_INTF1, 221 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_SINB_VIRTUAL0_FROM_MASK, 222 + MT8195_SEL_IN_SINB_VIRTUAL0_FROM_DSC_WRAP0_OUT), 223 + MMSYS_ROUTE(DSC0, DPI0, 224 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_SINB_VIRTUAL0_FROM_MASK, 225 + MT8195_SEL_IN_SINB_VIRTUAL0_FROM_DSC_WRAP0_OUT), 226 + MMSYS_ROUTE(DSC0, DPI1, 227 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_SINB_VIRTUAL0_FROM_MASK, 228 + MT8195_SEL_IN_SINB_VIRTUAL0_FROM_DSC_WRAP0_OUT), 229 + MMSYS_ROUTE(DSC1, DP_INTF0, 230 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DP_INTF0_FROM_MASK, 231 + MT8195_SEL_IN_DP_INTF0_FROM_DSC_WRAP1_OUT), 232 + MMSYS_ROUTE(MERGE0, DP_INTF0, 233 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DP_INTF0_FROM_MASK, 234 + MT8195_SEL_IN_DP_INTF0_FROM_VPP_MERGE), 235 + MMSYS_ROUTE(MERGE5, DP_INTF0, 236 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DP_INTF0_FROM_MASK, 237 + 
MT8195_SEL_IN_DP_INTF0_FROM_VDO1_VIRTUAL0), 238 + MMSYS_ROUTE(DSC0, DSI0, 239 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSI0_FROM_MASK, 240 + MT8195_SEL_IN_DSI0_FROM_DSC_WRAP0_OUT), 241 + MMSYS_ROUTE(DITHER0, DSI0, 242 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSI0_FROM_MASK, 243 + MT8195_SEL_IN_DSI0_FROM_DISP_DITHER0), 244 + MMSYS_ROUTE(DSC1, DSI1, 245 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSI1_FROM_MASK, 246 + MT8195_SEL_IN_DSI1_FROM_DSC_WRAP1_OUT), 247 + MMSYS_ROUTE(MERGE0, DSI1, 248 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSI1_FROM_MASK, 249 + MT8195_SEL_IN_DSI1_FROM_VPP_MERGE), 250 + MMSYS_ROUTE(OVL1, WDMA1, 251 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DISP_WDMA1_FROM_MASK, 252 + MT8195_SEL_IN_DISP_WDMA1_FROM_DISP_OVL1), 253 + MMSYS_ROUTE(MERGE0, WDMA1, 254 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DISP_WDMA1_FROM_MASK, 255 + MT8195_SEL_IN_DISP_WDMA1_FROM_VPP_MERGE), 256 + MMSYS_ROUTE(DSC1, DSI1, 257 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSC_WRAP1_FROM_MASK, 258 + MT8195_SEL_IN_DSC_WRAP1_OUT_FROM_DSC_WRAP1_IN), 259 + MMSYS_ROUTE(DSC1, DP_INTF0, 260 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSC_WRAP1_FROM_MASK, 261 + MT8195_SEL_IN_DSC_WRAP1_OUT_FROM_DSC_WRAP1_IN), 262 + MMSYS_ROUTE(DSC1, DP_INTF1, 263 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSC_WRAP1_FROM_MASK, 264 + MT8195_SEL_IN_DSC_WRAP1_OUT_FROM_DSC_WRAP1_IN), 265 + MMSYS_ROUTE(DSC1, DPI0, 266 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSC_WRAP1_FROM_MASK, 267 + MT8195_SEL_IN_DSC_WRAP1_OUT_FROM_DSC_WRAP1_IN), 268 + MMSYS_ROUTE(DSC1, DPI1, 269 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSC_WRAP1_FROM_MASK, 270 + MT8195_SEL_IN_DSC_WRAP1_OUT_FROM_DSC_WRAP1_IN), 271 + MMSYS_ROUTE(DSC1, MERGE0, 272 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSC_WRAP1_FROM_MASK, 273 + MT8195_SEL_IN_DSC_WRAP1_OUT_FROM_DSC_WRAP1_IN), 274 + MMSYS_ROUTE(DITHER1, DSI1, 275 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSC_WRAP1_FROM_MASK, 276 + MT8195_SEL_IN_DSC_WRAP1_OUT_FROM_DISP_DITHER1), 277 + MMSYS_ROUTE(DITHER1, DP_INTF0, 278 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSC_WRAP1_FROM_MASK, 279 + 
MT8195_SEL_IN_DSC_WRAP1_OUT_FROM_DISP_DITHER1), 280 + MMSYS_ROUTE(DITHER1, DPI0, 281 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSC_WRAP1_FROM_MASK, 282 + MT8195_SEL_IN_DSC_WRAP1_OUT_FROM_DISP_DITHER1), 283 + MMSYS_ROUTE(DITHER1, DPI1, 284 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DSC_WRAP1_FROM_MASK, 285 + MT8195_SEL_IN_DSC_WRAP1_OUT_FROM_DISP_DITHER1), 286 + MMSYS_ROUTE(OVL0, WDMA0, 287 + MT8195_VDO0_SEL_IN, MT8195_SEL_IN_DISP_WDMA0_FROM_MASK, 288 + MT8195_SEL_IN_DISP_WDMA0_FROM_DISP_OVL0), 289 + MMSYS_ROUTE(DITHER0, DSC0, 290 + MT8195_VDO0_SEL_OUT, MT8195_SOUT_DISP_DITHER0_TO_MASK, 291 + MT8195_SOUT_DISP_DITHER0_TO_DSC_WRAP0_IN), 292 + MMSYS_ROUTE(DITHER0, DSI0, 293 + MT8195_VDO0_SEL_OUT, MT8195_SOUT_DISP_DITHER0_TO_MASK, 294 + MT8195_SOUT_DISP_DITHER0_TO_DSI0), 295 + MMSYS_ROUTE(DITHER1, DSC1, 296 + MT8195_VDO0_SEL_OUT, MT8195_SOUT_DISP_DITHER1_TO_MASK, 297 + MT8195_SOUT_DISP_DITHER1_TO_DSC_WRAP1_IN), 298 + MMSYS_ROUTE(DITHER1, MERGE0, 299 + MT8195_VDO0_SEL_OUT, MT8195_SOUT_DISP_DITHER1_TO_MASK, 300 + MT8195_SOUT_DISP_DITHER1_TO_VPP_MERGE), 301 + MMSYS_ROUTE(DITHER1, DSI1, 302 + MT8195_VDO0_SEL_OUT, MT8195_SOUT_DISP_DITHER1_TO_MASK, 303 + MT8195_SOUT_DISP_DITHER1_TO_DSC_WRAP1_OUT), 304 + MMSYS_ROUTE(DITHER1, DP_INTF0, 305 + MT8195_VDO0_SEL_OUT, MT8195_SOUT_DISP_DITHER1_TO_MASK, 306 + MT8195_SOUT_DISP_DITHER1_TO_DSC_WRAP1_OUT), 307 + MMSYS_ROUTE(DITHER1, DP_INTF1, 308 + MT8195_VDO0_SEL_OUT, MT8195_SOUT_DISP_DITHER1_TO_MASK, 309 + MT8195_SOUT_DISP_DITHER1_TO_DSC_WRAP1_OUT), 310 + MMSYS_ROUTE(DITHER1, DPI0, 311 + MT8195_VDO0_SEL_OUT, MT8195_SOUT_DISP_DITHER1_TO_MASK, 312 + MT8195_SOUT_DISP_DITHER1_TO_DSC_WRAP1_OUT), 313 + MMSYS_ROUTE(DITHER1, DPI1, 314 + MT8195_VDO0_SEL_OUT, MT8195_SOUT_DISP_DITHER1_TO_MASK, 315 + MT8195_SOUT_DISP_DITHER1_TO_DSC_WRAP1_OUT), 316 + MMSYS_ROUTE(MERGE5, MERGE0, 317 + MT8195_VDO0_SEL_OUT, MT8195_SOUT_VDO1_VIRTUAL0_TO_MASK, 318 + MT8195_SOUT_VDO1_VIRTUAL0_TO_VPP_MERGE), 319 + MMSYS_ROUTE(MERGE5, DP_INTF0, 320 + MT8195_VDO0_SEL_OUT, 
MT8195_SOUT_VDO1_VIRTUAL0_TO_MASK, 321 + MT8195_SOUT_VDO1_VIRTUAL0_TO_DP_INTF0), 322 + MMSYS_ROUTE(MERGE0, DSI1, 323 + MT8195_VDO0_SEL_OUT, MT8195_SOUT_VPP_MERGE_TO_MASK, 324 + MT8195_SOUT_VPP_MERGE_TO_DSI1), 325 + MMSYS_ROUTE(MERGE0, DP_INTF0, 326 + MT8195_VDO0_SEL_OUT, MT8195_SOUT_VPP_MERGE_TO_MASK, 327 + MT8195_SOUT_VPP_MERGE_TO_DP_INTF0), 328 + MMSYS_ROUTE(MERGE0, DP_INTF1, 329 + MT8195_VDO0_SEL_OUT, MT8195_SOUT_VPP_MERGE_TO_MASK, 330 + MT8195_SOUT_VPP_MERGE_TO_SINA_VIRTUAL0), 331 + MMSYS_ROUTE(MERGE0, DPI0, 332 + MT8195_VDO0_SEL_OUT, MT8195_SOUT_VPP_MERGE_TO_MASK, 333 + MT8195_SOUT_VPP_MERGE_TO_SINA_VIRTUAL0), 334 + MMSYS_ROUTE(MERGE0, DPI1, 335 + MT8195_VDO0_SEL_OUT, MT8195_SOUT_VPP_MERGE_TO_MASK, 336 + MT8195_SOUT_VPP_MERGE_TO_SINA_VIRTUAL0), 337 + MMSYS_ROUTE(MERGE0, WDMA1, 338 + MT8195_VDO0_SEL_OUT, MT8195_SOUT_VPP_MERGE_TO_MASK, 339 + MT8195_SOUT_VPP_MERGE_TO_DISP_WDMA1), 340 + MMSYS_ROUTE(MERGE0, DSC0, 341 + MT8195_VDO0_SEL_OUT, MT8195_SOUT_VPP_MERGE_TO_MASK, 342 + MT8195_SOUT_VPP_MERGE_TO_DSC_WRAP0_IN), 343 + MMSYS_ROUTE(MERGE0, DSC1, 344 + MT8195_VDO0_SEL_OUT, MT8195_SOUT_VPP_MERGE_TO_DSC_WRAP1_IN_MASK, 345 + MT8195_SOUT_VPP_MERGE_TO_DSC_WRAP1_IN), 346 + MMSYS_ROUTE(DSC0, DSI0, 347 + MT8195_VDO0_SEL_OUT, MT8195_SOUT_DSC_WRAP0_OUT_TO_MASK, 348 + MT8195_SOUT_DSC_WRAP0_OUT_TO_DSI0), 349 + MMSYS_ROUTE(DSC0, DP_INTF1, 350 + MT8195_VDO0_SEL_OUT, MT8195_SOUT_DSC_WRAP0_OUT_TO_MASK, 351 + MT8195_SOUT_DSC_WRAP0_OUT_TO_SINB_VIRTUAL0), 352 + MMSYS_ROUTE(DSC0, DPI0, 353 + MT8195_VDO0_SEL_OUT, MT8195_SOUT_DSC_WRAP0_OUT_TO_MASK, 354 + MT8195_SOUT_DSC_WRAP0_OUT_TO_SINB_VIRTUAL0), 355 + MMSYS_ROUTE(DSC0, DPI1, 356 + MT8195_VDO0_SEL_OUT, MT8195_SOUT_DSC_WRAP0_OUT_TO_MASK, 357 + MT8195_SOUT_DSC_WRAP0_OUT_TO_SINB_VIRTUAL0), 358 + MMSYS_ROUTE(DSC0, MERGE0, 359 + MT8195_VDO0_SEL_OUT, MT8195_SOUT_DSC_WRAP0_OUT_TO_MASK, 360 + MT8195_SOUT_DSC_WRAP0_OUT_TO_VPP_MERGE), 361 + MMSYS_ROUTE(DSC1, DSI1, 362 + MT8195_VDO0_SEL_OUT, MT8195_SOUT_DSC_WRAP1_OUT_TO_MASK, 363 + 
MT8195_SOUT_DSC_WRAP1_OUT_TO_DSI1), 364 + MMSYS_ROUTE(DSC1, DP_INTF0, 365 + MT8195_VDO0_SEL_OUT, MT8195_SOUT_DSC_WRAP1_OUT_TO_MASK, 366 + MT8195_SOUT_DSC_WRAP1_OUT_TO_DP_INTF0), 367 + MMSYS_ROUTE(DSC1, DP_INTF1, 368 + MT8195_VDO0_SEL_OUT, MT8195_SOUT_DSC_WRAP1_OUT_TO_MASK, 369 + MT8195_SOUT_DSC_WRAP1_OUT_TO_SINA_VIRTUAL0), 370 + MMSYS_ROUTE(DSC1, DPI0, 371 + MT8195_VDO0_SEL_OUT, MT8195_SOUT_DSC_WRAP1_OUT_TO_MASK, 372 + MT8195_SOUT_DSC_WRAP1_OUT_TO_SINA_VIRTUAL0), 373 + MMSYS_ROUTE(DSC1, DPI1, 374 + MT8195_VDO0_SEL_OUT, MT8195_SOUT_DSC_WRAP1_OUT_TO_MASK, 375 + MT8195_SOUT_DSC_WRAP1_OUT_TO_SINA_VIRTUAL0), 376 + MMSYS_ROUTE(DSC1, MERGE0, 377 + MT8195_VDO0_SEL_OUT, MT8195_SOUT_DSC_WRAP1_OUT_TO_MASK, 378 + MT8195_SOUT_DSC_WRAP1_OUT_TO_VPP_MERGE), 452 379 }; 453 380 454 381 static const struct mtk_mmsys_routes mmsys_mt8195_vdo1_routing_table[] = { 455 - { 456 - DDP_COMPONENT_MDP_RDMA0, DDP_COMPONENT_MERGE1, 457 - MT8195_VDO1_VPP_MERGE0_P0_SEL_IN, GENMASK(0, 0), 458 - MT8195_VPP_MERGE0_P0_SEL_IN_FROM_MDP_RDMA0 459 - }, { 460 - DDP_COMPONENT_MDP_RDMA1, DDP_COMPONENT_MERGE1, 461 - MT8195_VDO1_VPP_MERGE0_P1_SEL_IN, GENMASK(0, 0), 462 - MT8195_VPP_MERGE0_P1_SEL_IN_FROM_MDP_RDMA1 463 - }, { 464 - DDP_COMPONENT_MDP_RDMA2, DDP_COMPONENT_MERGE2, 465 - MT8195_VDO1_VPP_MERGE1_P0_SEL_IN, GENMASK(0, 0), 466 - MT8195_VPP_MERGE1_P0_SEL_IN_FROM_MDP_RDMA2 467 - }, { 468 - DDP_COMPONENT_MERGE1, DDP_COMPONENT_ETHDR_MIXER, 469 - MT8195_VDO1_MERGE0_ASYNC_SOUT_SEL, GENMASK(1, 0), 470 - MT8195_SOUT_TO_MIXER_IN1_SEL 471 - }, { 472 - DDP_COMPONENT_MERGE2, DDP_COMPONENT_ETHDR_MIXER, 473 - MT8195_VDO1_MERGE1_ASYNC_SOUT_SEL, GENMASK(1, 0), 474 - MT8195_SOUT_TO_MIXER_IN2_SEL 475 - }, { 476 - DDP_COMPONENT_MERGE3, DDP_COMPONENT_ETHDR_MIXER, 477 - MT8195_VDO1_MERGE2_ASYNC_SOUT_SEL, GENMASK(1, 0), 478 - MT8195_SOUT_TO_MIXER_IN3_SEL 479 - }, { 480 - DDP_COMPONENT_MERGE4, DDP_COMPONENT_ETHDR_MIXER, 481 - MT8195_VDO1_MERGE3_ASYNC_SOUT_SEL, GENMASK(1, 0), 482 - MT8195_SOUT_TO_MIXER_IN4_SEL 483 - }, { 484 - 
DDP_COMPONENT_ETHDR_MIXER, DDP_COMPONENT_MERGE5, 485 - MT8195_VDO1_MIXER_OUT_SOUT_SEL, GENMASK(0, 0), 486 - MT8195_MIXER_SOUT_TO_MERGE4_ASYNC_SEL 487 - }, { 488 - DDP_COMPONENT_MERGE1, DDP_COMPONENT_ETHDR_MIXER, 489 - MT8195_VDO1_MIXER_IN1_SEL_IN, GENMASK(0, 0), 490 - MT8195_MIXER_IN1_SEL_IN_FROM_MERGE0_ASYNC_SOUT 491 - }, { 492 - DDP_COMPONENT_MERGE2, DDP_COMPONENT_ETHDR_MIXER, 493 - MT8195_VDO1_MIXER_IN2_SEL_IN, GENMASK(0, 0), 494 - MT8195_MIXER_IN2_SEL_IN_FROM_MERGE1_ASYNC_SOUT 495 - }, { 496 - DDP_COMPONENT_MERGE3, DDP_COMPONENT_ETHDR_MIXER, 497 - MT8195_VDO1_MIXER_IN3_SEL_IN, GENMASK(0, 0), 498 - MT8195_MIXER_IN3_SEL_IN_FROM_MERGE2_ASYNC_SOUT 499 - }, { 500 - DDP_COMPONENT_MERGE4, DDP_COMPONENT_ETHDR_MIXER, 501 - MT8195_VDO1_MIXER_IN4_SEL_IN, GENMASK(0, 0), 502 - MT8195_MIXER_IN4_SEL_IN_FROM_MERGE3_ASYNC_SOUT 503 - }, { 504 - DDP_COMPONENT_ETHDR_MIXER, DDP_COMPONENT_MERGE5, 505 - MT8195_VDO1_MIXER_SOUT_SEL_IN, GENMASK(2, 0), 506 - MT8195_MIXER_SOUT_SEL_IN_FROM_DISP_MIXER 507 - }, { 508 - DDP_COMPONENT_ETHDR_MIXER, DDP_COMPONENT_MERGE5, 509 - MT8195_VDO1_MERGE4_ASYNC_SEL_IN, GENMASK(2, 0), 510 - MT8195_MERGE4_ASYNC_SEL_IN_FROM_MIXER_OUT_SOUT 511 - }, { 512 - DDP_COMPONENT_MERGE5, DDP_COMPONENT_DPI1, 513 - MT8195_VDO1_DISP_DPI1_SEL_IN, GENMASK(1, 0), 514 - MT8195_DISP_DPI1_SEL_IN_FROM_VPP_MERGE4_MOUT 515 - }, { 516 - DDP_COMPONENT_MERGE5, DDP_COMPONENT_DPI1, 517 - MT8195_VDO1_MERGE4_SOUT_SEL, GENMASK(1, 0), 518 - MT8195_MERGE4_SOUT_TO_DPI1_SEL 519 - }, { 520 - DDP_COMPONENT_MERGE5, DDP_COMPONENT_DP_INTF1, 521 - MT8195_VDO1_DISP_DP_INTF0_SEL_IN, GENMASK(1, 0), 522 - MT8195_DISP_DP_INTF0_SEL_IN_FROM_VPP_MERGE4_MOUT 523 - }, { 524 - DDP_COMPONENT_MERGE5, DDP_COMPONENT_DP_INTF1, 525 - MT8195_VDO1_MERGE4_SOUT_SEL, GENMASK(1, 0), 526 - MT8195_MERGE4_SOUT_TO_DP_INTF0_SEL 527 - } 382 + MMSYS_ROUTE(MDP_RDMA0, MERGE1, 383 + MT8195_VDO1_VPP_MERGE0_P0_SEL_IN, GENMASK(0, 0), 384 + MT8195_VPP_MERGE0_P0_SEL_IN_FROM_MDP_RDMA0), 385 + MMSYS_ROUTE(MDP_RDMA1, MERGE1, 386 + 
MT8195_VDO1_VPP_MERGE0_P1_SEL_IN, GENMASK(0, 0), 387 + MT8195_VPP_MERGE0_P1_SEL_IN_FROM_MDP_RDMA1), 388 + MMSYS_ROUTE(MDP_RDMA2, MERGE2, 389 + MT8195_VDO1_VPP_MERGE1_P0_SEL_IN, GENMASK(0, 0), 390 + MT8195_VPP_MERGE1_P0_SEL_IN_FROM_MDP_RDMA2), 391 + MMSYS_ROUTE(MERGE1, ETHDR_MIXER, 392 + MT8195_VDO1_MERGE0_ASYNC_SOUT_SEL, GENMASK(1, 0), 393 + MT8195_SOUT_TO_MIXER_IN1_SEL), 394 + MMSYS_ROUTE(MERGE2, ETHDR_MIXER, 395 + MT8195_VDO1_MERGE1_ASYNC_SOUT_SEL, GENMASK(1, 0), 396 + MT8195_SOUT_TO_MIXER_IN2_SEL), 397 + MMSYS_ROUTE(MERGE3, ETHDR_MIXER, 398 + MT8195_VDO1_MERGE2_ASYNC_SOUT_SEL, GENMASK(1, 0), 399 + MT8195_SOUT_TO_MIXER_IN3_SEL), 400 + MMSYS_ROUTE(MERGE4, ETHDR_MIXER, 401 + MT8195_VDO1_MERGE3_ASYNC_SOUT_SEL, GENMASK(1, 0), 402 + MT8195_SOUT_TO_MIXER_IN4_SEL), 403 + MMSYS_ROUTE(ETHDR_MIXER, MERGE5, 404 + MT8195_VDO1_MIXER_OUT_SOUT_SEL, GENMASK(0, 0), 405 + MT8195_MIXER_SOUT_TO_MERGE4_ASYNC_SEL), 406 + MMSYS_ROUTE(MERGE1, ETHDR_MIXER, 407 + MT8195_VDO1_MIXER_IN1_SEL_IN, GENMASK(0, 0), 408 + MT8195_MIXER_IN1_SEL_IN_FROM_MERGE0_ASYNC_SOUT), 409 + MMSYS_ROUTE(MERGE2, ETHDR_MIXER, 410 + MT8195_VDO1_MIXER_IN2_SEL_IN, GENMASK(0, 0), 411 + MT8195_MIXER_IN2_SEL_IN_FROM_MERGE1_ASYNC_SOUT), 412 + MMSYS_ROUTE(MERGE3, ETHDR_MIXER, 413 + MT8195_VDO1_MIXER_IN3_SEL_IN, GENMASK(0, 0), 414 + MT8195_MIXER_IN3_SEL_IN_FROM_MERGE2_ASYNC_SOUT), 415 + MMSYS_ROUTE(MERGE4, ETHDR_MIXER, 416 + MT8195_VDO1_MIXER_IN4_SEL_IN, GENMASK(0, 0), 417 + MT8195_MIXER_IN4_SEL_IN_FROM_MERGE3_ASYNC_SOUT), 418 + MMSYS_ROUTE(ETHDR_MIXER, MERGE5, 419 + MT8195_VDO1_MIXER_SOUT_SEL_IN, GENMASK(2, 0), 420 + MT8195_MIXER_SOUT_SEL_IN_FROM_DISP_MIXER), 421 + MMSYS_ROUTE(ETHDR_MIXER, MERGE5, 422 + MT8195_VDO1_MERGE4_ASYNC_SEL_IN, GENMASK(2, 0), 423 + MT8195_MERGE4_ASYNC_SEL_IN_FROM_MIXER_OUT_SOUT), 424 + MMSYS_ROUTE(MERGE5, DPI1, 425 + MT8195_VDO1_DISP_DPI1_SEL_IN, GENMASK(1, 0), 426 + MT8195_DISP_DPI1_SEL_IN_FROM_VPP_MERGE4_MOUT), 427 + MMSYS_ROUTE(MERGE5, DPI1, 428 + MT8195_VDO1_MERGE4_SOUT_SEL, GENMASK(1, 0), 429 
+ MT8195_MERGE4_SOUT_TO_DPI1_SEL), 430 + MMSYS_ROUTE(MERGE5, DP_INTF1, 431 + MT8195_VDO1_DISP_DP_INTF0_SEL_IN, GENMASK(1, 0), 432 + MT8195_DISP_DP_INTF0_SEL_IN_FROM_VPP_MERGE4_MOUT), 433 + MMSYS_ROUTE(MERGE5, DP_INTF1, 434 + MT8195_VDO1_MERGE4_SOUT_SEL, GENMASK(1, 0), 435 + MT8195_MERGE4_SOUT_TO_DP_INTF0_SEL), 528 436 }; 529 437 #endif /* __SOC_MEDIATEK_MT8195_MMSYS_H */
drivers/soc/mediatek/mt8365-mmsys.h  (+33, -51)
···
 #define MT8365_DISP_REG_CONFIG_DISP_DPI0_SEL_IN		0xfd8
 #define MT8365_DISP_REG_CONFIG_DISP_LVDS_SYS_CFG_00	0xfdc

+#define MT8365_DISP_MS_IN_OUT_MASK	GENMASK(3, 0)
 #define MT8365_RDMA0_SOUT_COLOR0	0x1
-#define MT8365_DITHER_MOUT_EN_DSI0	0x1
+#define MT8365_DITHER_MOUT_EN_DSI0	BIT(0)
 #define MT8365_DSI0_SEL_IN_DITHER	0x1
 #define MT8365_RDMA0_SEL_IN_OVL0	0x0
 #define MT8365_RDMA0_RSZ0_SEL_IN_RDMA0	0x0
···
 #define MT8365_DPI0_SEL_IN_RDMA1	0x0

 static const struct mtk_mmsys_routes mt8365_mmsys_routing_table[] = {
-	{
-		DDP_COMPONENT_OVL0, DDP_COMPONENT_RDMA0,
-		MT8365_DISP_REG_CONFIG_DISP_OVL0_MOUT_EN,
-		MT8365_OVL0_MOUT_PATH0_SEL, MT8365_OVL0_MOUT_PATH0_SEL
-	},
-	{
-		DDP_COMPONENT_OVL0, DDP_COMPONENT_RDMA0,
-		MT8365_DISP_REG_CONFIG_DISP_RDMA0_SEL_IN,
-		MT8365_RDMA0_SEL_IN_OVL0, MT8365_RDMA0_SEL_IN_OVL0
-	},
-	{
-		DDP_COMPONENT_RDMA0, DDP_COMPONENT_COLOR0,
-		MT8365_DISP_REG_CONFIG_DISP_RDMA0_SOUT_SEL,
-		MT8365_RDMA0_SOUT_COLOR0, MT8365_RDMA0_SOUT_COLOR0
-	},
-	{
-		DDP_COMPONENT_COLOR0, DDP_COMPONENT_CCORR,
-		MT8365_DISP_REG_CONFIG_DISP_COLOR0_SEL_IN,
-		MT8365_DISP_COLOR_SEL_IN_COLOR0, MT8365_DISP_COLOR_SEL_IN_COLOR0
-	},
-	{
-		DDP_COMPONENT_DITHER0, DDP_COMPONENT_DSI0,
-		MT8365_DISP_REG_CONFIG_DISP_DITHER0_MOUT_EN,
-		MT8365_DITHER_MOUT_EN_DSI0, MT8365_DITHER_MOUT_EN_DSI0
-	},
-	{
-		DDP_COMPONENT_DITHER0, DDP_COMPONENT_DSI0,
-		MT8365_DISP_REG_CONFIG_DISP_DSI0_SEL_IN,
-		MT8365_DSI0_SEL_IN_DITHER, MT8365_DSI0_SEL_IN_DITHER
-	},
-	{
-		DDP_COMPONENT_RDMA0, DDP_COMPONENT_COLOR0,
-		MT8365_DISP_REG_CONFIG_DISP_RDMA0_RSZ0_SEL_IN,
-		MT8365_RDMA0_RSZ0_SEL_IN_RDMA0, MT8365_RDMA0_RSZ0_SEL_IN_RDMA0
-	},
-	{
-		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI0,
-		MT8365_DISP_REG_CONFIG_DISP_LVDS_SYS_CFG_00,
-		MT8365_LVDS_SYS_CFG_00_SEL_LVDS_PXL_CLK, MT8365_LVDS_SYS_CFG_00_SEL_LVDS_PXL_CLK
-	},
-	{
-		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI0,
-		MT8365_DISP_REG_CONFIG_DISP_DPI0_SEL_IN,
-		MT8365_DPI0_SEL_IN_RDMA1, MT8365_DPI0_SEL_IN_RDMA1
-	},
-	{
-		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI0,
-		MT8365_DISP_REG_CONFIG_DISP_RDMA1_SOUT_SEL,
-		MT8365_RDMA1_SOUT_DPI0, MT8365_RDMA1_SOUT_DPI0
-	},
+	MMSYS_ROUTE(OVL0, RDMA0,
+		    MT8365_DISP_REG_CONFIG_DISP_OVL0_MOUT_EN,
+		    MT8365_DISP_MS_IN_OUT_MASK, MT8365_OVL0_MOUT_PATH0_SEL),
+	MMSYS_ROUTE(OVL0, RDMA0,
+		    MT8365_DISP_REG_CONFIG_DISP_RDMA0_SEL_IN,
+		    MT8365_DISP_MS_IN_OUT_MASK, MT8365_RDMA0_SEL_IN_OVL0),
+	MMSYS_ROUTE(RDMA0, COLOR0,
+		    MT8365_DISP_REG_CONFIG_DISP_RDMA0_SOUT_SEL,
+		    MT8365_DISP_MS_IN_OUT_MASK, MT8365_RDMA0_SOUT_COLOR0),
+	MMSYS_ROUTE(COLOR0, CCORR,
+		    MT8365_DISP_REG_CONFIG_DISP_COLOR0_SEL_IN,
+		    MT8365_DISP_MS_IN_OUT_MASK, MT8365_DISP_COLOR_SEL_IN_COLOR0),
+	MMSYS_ROUTE(DITHER0, DSI0,
+		    MT8365_DISP_REG_CONFIG_DISP_DITHER0_MOUT_EN,
+		    MT8365_DISP_MS_IN_OUT_MASK, MT8365_DITHER_MOUT_EN_DSI0),
+	MMSYS_ROUTE(DITHER0, DSI0,
+		    MT8365_DISP_REG_CONFIG_DISP_DSI0_SEL_IN,
+		    MT8365_DISP_MS_IN_OUT_MASK, MT8365_DSI0_SEL_IN_DITHER),
+	MMSYS_ROUTE(RDMA0, COLOR0,
+		    MT8365_DISP_REG_CONFIG_DISP_RDMA0_RSZ0_SEL_IN,
+		    MT8365_DISP_MS_IN_OUT_MASK, MT8365_RDMA0_RSZ0_SEL_IN_RDMA0),
+	MMSYS_ROUTE(RDMA1, DPI0,
+		    MT8365_DISP_REG_CONFIG_DISP_LVDS_SYS_CFG_00,
+		    MT8365_LVDS_SYS_CFG_00_SEL_LVDS_PXL_CLK,
+		    MT8365_LVDS_SYS_CFG_00_SEL_LVDS_PXL_CLK),
+	MMSYS_ROUTE(RDMA1, DPI0,
+		    MT8365_DISP_REG_CONFIG_DISP_DPI0_SEL_IN,
+		    MT8365_DISP_MS_IN_OUT_MASK, MT8365_DPI0_SEL_IN_RDMA1),
+	MMSYS_ROUTE(RDMA1, DPI0,
+		    MT8365_DISP_REG_CONFIG_DISP_RDMA1_SOUT_SEL,
+		    MT8365_DISP_MS_IN_OUT_MASK, MT8365_RDMA1_SOUT_DPI0),
 };

 #endif /* __SOC_MEDIATEK_MT8365_MMSYS_H */
drivers/soc/mediatek/mtk-mmsys.h  (+14)
···
 #define MMSYS_RST_NR(bank, bit)		(((bank) * 32) + (bit))

+/*
+ * This macro adds a compile time check to make sure that the in/out
+ * selection bit(s) fit in the register mask, similar to bitfield
+ * macros, but this does not transform the value.
+ */
+#define MMSYS_ROUTE(from, to, reg_addr, reg_mask, selection)			\
+	{ DDP_COMPONENT_##from, DDP_COMPONENT_##to, reg_addr, reg_mask,		\
+	  (__BUILD_BUG_ON_ZERO_MSG((reg_mask) == 0, "Invalid mask") +		\
+	   __BUILD_BUG_ON_ZERO_MSG(~(reg_mask) & (selection),			\
+				   #selection " does not fit in "		\
+				   #reg_mask) +					\
+	   (selection))								\
+	}
+
 struct mtk_mmsys_routes {
 	u32 from_comp;
 	u32 to_comp;
drivers/soc/mediatek/mtk-mutex.c  (+6)
···
 #define MT8188_MUTEX_MOD_DISP1_VPP_MERGE3	23
 #define MT8188_MUTEX_MOD_DISP1_VPP_MERGE4	24
 #define MT8188_MUTEX_MOD_DISP1_DISP_MIXER	30
+#define MT8188_MUTEX_MOD_DISP1_DPI1		38
 #define MT8188_MUTEX_MOD_DISP1_DP_INTF1		39

 #define MT8195_MUTEX_MOD_DISP_OVL0		0
···
 #define MT8188_MUTEX_SOF_DSI0			1
 #define MT8188_MUTEX_SOF_DP_INTF0		3
 #define MT8188_MUTEX_SOF_DP_INTF1		4
+#define MT8188_MUTEX_SOF_DPI1			5
 #define MT8195_MUTEX_SOF_DSI0			1
 #define MT8195_MUTEX_SOF_DSI1			2
 #define MT8195_MUTEX_SOF_DP_INTF0		3
···
 #define MT8188_MUTEX_EOF_DSI0			(MT8188_MUTEX_SOF_DSI0 << 7)
 #define MT8188_MUTEX_EOF_DP_INTF0		(MT8188_MUTEX_SOF_DP_INTF0 << 7)
 #define MT8188_MUTEX_EOF_DP_INTF1		(MT8188_MUTEX_SOF_DP_INTF1 << 7)
+#define MT8188_MUTEX_EOF_DPI1			(MT8188_MUTEX_SOF_DPI1 << 7)
 #define MT8195_MUTEX_EOF_DSI0			(MT8195_MUTEX_SOF_DSI0 << 7)
 #define MT8195_MUTEX_EOF_DSI1			(MT8195_MUTEX_SOF_DSI1 << 7)
 #define MT8195_MUTEX_EOF_DP_INTF0		(MT8195_MUTEX_SOF_DP_INTF0 << 7)
···
 	[DDP_COMPONENT_PWM0] = MT8188_MUTEX_MOD2_DISP_PWM0,
 	[DDP_COMPONENT_DP_INTF0] = MT8188_MUTEX_MOD_DISP_DP_INTF0,
 	[DDP_COMPONENT_DP_INTF1] = MT8188_MUTEX_MOD_DISP1_DP_INTF1,
+	[DDP_COMPONENT_DPI1] = MT8188_MUTEX_MOD_DISP1_DPI1,
 	[DDP_COMPONENT_ETHDR_MIXER] = MT8188_MUTEX_MOD_DISP1_DISP_MIXER,
 	[DDP_COMPONENT_MDP_RDMA0] = MT8188_MUTEX_MOD_DISP1_MDP_RDMA0,
 	[DDP_COMPONENT_MDP_RDMA1] = MT8188_MUTEX_MOD_DISP1_MDP_RDMA1,
···
 	[MUTEX_SOF_SINGLE_MODE] = MUTEX_SOF_SINGLE_MODE,
 	[MUTEX_SOF_DSI0] =
 		MT8188_MUTEX_SOF_DSI0 | MT8188_MUTEX_EOF_DSI0,
+	[MUTEX_SOF_DPI1] =
+		MT8188_MUTEX_SOF_DPI1 | MT8188_MUTEX_EOF_DPI1,
 	[MUTEX_SOF_DP_INTF0] =
 		MT8188_MUTEX_SOF_DP_INTF0 | MT8188_MUTEX_EOF_DP_INTF0,
 	[MUTEX_SOF_DP_INTF1] =
drivers/soc/mediatek/mtk-socinfo.c (+16 -6)
···
56 56	MTK_SOCINFO_ENTRY("MT8195", "MT8195GV/EHZA", "Kompanio 1200", 0x81950304, CELL_NOT_USED),
57 57	MTK_SOCINFO_ENTRY("MT8195", "MT8195TV/EZA", "Kompanio 1380", 0x81950400, CELL_NOT_USED),
58 58	MTK_SOCINFO_ENTRY("MT8195", "MT8195TV/EHZA", "Kompanio 1380", 0x81950404, CELL_NOT_USED),
59 +	MTK_SOCINFO_ENTRY("MT8370", "MT8370AV/AZA", "Genio 510", 0x83700000, 0x00000081),
60 +	MTK_SOCINFO_ENTRY("MT8390", "MT8390AV/AZA", "Genio 700", 0x83900000, 0x00000080),
59 61	MTK_SOCINFO_ENTRY("MT8395", "MT8395AV/ZA", "Genio 1200", 0x83950100, CELL_NOT_USED),
62 +	MTK_SOCINFO_ENTRY("MT8395", "MT8395AV/ZA", "Genio 1200", 0x83950800, CELL_NOT_USED),
60 63	};
61 64
62 65	static int mtk_socinfo_create_socinfo_node(struct mtk_socinfo *mtk_socinfop)
63 66	{
64 67		struct soc_device_attribute *attrs;
65 -		static char machine[30] = {0};
68 +		struct socinfo_data *data = mtk_socinfop->socinfo_data;
66 69		static const char *soc_manufacturer = "MediaTek";
67 70
68 71		attrs = devm_kzalloc(mtk_socinfop->dev, sizeof(*attrs), GFP_KERNEL);
69 72		if (!attrs)
70 73			return -ENOMEM;
71 74
72 -		snprintf(machine, sizeof(machine), "%s (%s)", mtk_socinfop->socinfo_data->marketing_name,
73 -			 mtk_socinfop->socinfo_data->soc_name);
74 -		attrs->family = soc_manufacturer;
75 -		attrs->machine = machine;
75 +		if (data->marketing_name != NULL && data->marketing_name[0] != '\0')
76 +			attrs->family = devm_kasprintf(mtk_socinfop->dev, GFP_KERNEL, "MediaTek %s",
77 +						       data->marketing_name);
78 +		else
79 +			attrs->family = soc_manufacturer;
80 +
81 +		attrs->soc_id = data->soc_name;
82 +		/*
83 +		 * The "machine" field will be populated automatically with the model
84 +		 * name from board DTS (if available).
85 +		 */
76 86
77 87		mtk_socinfop->soc_dev = soc_device_register(attrs);
78 88		if (IS_ERR(mtk_socinfop->soc_dev))
79 89			return PTR_ERR(mtk_socinfop->soc_dev);
80 90
81 -		dev_info(mtk_socinfop->dev, "%s %s SoC detected.\n", soc_manufacturer, attrs->machine);
91 +		dev_info(mtk_socinfop->dev, "%s (%s) SoC detected.\n", attrs->family, attrs->soc_id);
82 92		return 0;
83 93	}
84 94
drivers/soc/qcom/ice.c (+49 -2)
···
11 11	#include <linux/cleanup.h>
12 12	#include <linux/clk.h>
13 13	#include <linux/delay.h>
14 +	#include <linux/device.h>
14 15	#include <linux/iopoll.h>
15 16	#include <linux/of.h>
16 17	#include <linux/of_platform.h>
···
262 261	 * Return: ICE pointer on success, NULL if there is no ICE data provided by the
263 262	 * consumer or ERR_PTR() on error.
264 263	 */
265 -	struct qcom_ice *of_qcom_ice_get(struct device *dev)
264 +	static struct qcom_ice *of_qcom_ice_get(struct device *dev)
266 265	{
267 266		struct platform_device *pdev = to_platform_device(dev);
268 267		struct qcom_ice *ice;
···
323 322
324 323		return ice;
325 324	}
326 -	EXPORT_SYMBOL_GPL(of_qcom_ice_get);
325 +
326 +	static void qcom_ice_put(const struct qcom_ice *ice)
327 +	{
328 +		struct platform_device *pdev = to_platform_device(ice->dev);
329 +
330 +		if (!platform_get_resource_byname(pdev, IORESOURCE_MEM, "ice"))
331 +			platform_device_put(pdev);
332 +	}
333 +
334 +	static void devm_of_qcom_ice_put(struct device *dev, void *res)
335 +	{
336 +		qcom_ice_put(*(struct qcom_ice **)res);
337 +	}
338 +
339 +	/**
340 +	 * devm_of_qcom_ice_get() - Devres managed helper to get an ICE instance from
341 +	 * a DT node.
342 +	 * @dev: device pointer for the consumer device.
343 +	 *
344 +	 * This function will provide an ICE instance either by creating one for the
345 +	 * consumer device if its DT node provides the 'ice' reg range and the 'ice'
346 +	 * clock (for legacy DT style). On the other hand, if consumer provides a
347 +	 * phandle via 'qcom,ice' property to an ICE DT, the ICE instance will already
348 +	 * be created and so this function will return that instead.
349 +	 *
350 +	 * Return: ICE pointer on success, NULL if there is no ICE data provided by the
351 +	 * consumer or ERR_PTR() on error.
352 +	 */
353 +	struct qcom_ice *devm_of_qcom_ice_get(struct device *dev)
354 +	{
355 +		struct qcom_ice *ice, **dr;
356 +
357 +		dr = devres_alloc(devm_of_qcom_ice_put, sizeof(*dr), GFP_KERNEL);
358 +		if (!dr)
359 +			return ERR_PTR(-ENOMEM);
360 +
361 +		ice = of_qcom_ice_get(dev);
362 +		if (!IS_ERR_OR_NULL(ice)) {
363 +			*dr = ice;
364 +			devres_add(dev, dr);
365 +		} else {
366 +			devres_free(dr);
367 +		}
368 +
369 +		return ice;
370 +	}
371 +	EXPORT_SYMBOL_GPL(devm_of_qcom_ice_get);
327 372
328 373	static int qcom_ice_probe(struct platform_device *pdev)
329 374	{
drivers/soc/qcom/pdr_internal.h (-1)
···
91 91		struct qmi_response_type_v01 rsp;
92 92	};
93 93
94 -	extern const struct qmi_elem_info servreg_location_entry_ei[];
95 94	extern const struct qmi_elem_info servreg_get_domain_list_req_ei[];
96 95	extern const struct qmi_elem_info servreg_get_domain_list_resp_ei[];
97 96	extern const struct qmi_elem_info servreg_register_listener_req_ei[];
drivers/soc/qcom/qcom_aoss.c (+2 -1)
···
12 12	#include <linux/platform_device.h>
13 13	#include <linux/thermal.h>
14 14	#include <linux/slab.h>
15 +	#include <linux/string_choices.h>
15 16	#include <linux/soc/qcom/qcom_aoss.h>
16 17
17 18	#define CREATE_TRACE_POINTS
···
359 358		return 0;
360 359
361 360		ret = qmp_send(qmp_cdev->qmp, "{class: volt_flr, event:zero_temp, res:%s, value:%s}",
362 -			       qmp_cdev->name, cdev_state ? "on" : "off");
361 +			       qmp_cdev->name, str_on_off(cdev_state));
363 362		if (!ret)
364 363			qmp_cdev->state = cdev_state;
365 364
drivers/soc/qcom/qcom_pd_mapper.c (+12)
···
429 429		NULL,
430 430	};
431 431
432 +	/* Unlike SDM660, SDM630/636 lack CDSP */
433 +	static const struct qcom_pdm_domain_data *sdm630_domains[] = {
434 +		&adsp_audio_pd,
435 +		&adsp_root_pd,
436 +		&adsp_sensor_pd,
437 +		&mpss_root_pd,
438 +		&mpss_wlan_pd,
439 +		NULL,
440 +	};
441 +
432 442	static const struct qcom_pdm_domain_data *sdm660_domains[] = {
433 443		&adsp_audio_pd,
434 444		&adsp_root_pd,
···
556 546	{ .compatible = "qcom,sc7280", .data = sc7280_domains, },
557 547	{ .compatible = "qcom,sc8180x", .data = sc8180x_domains, },
558 548	{ .compatible = "qcom,sc8280xp", .data = sc8280xp_domains, },
549 +	{ .compatible = "qcom,sdm630", .data = sdm630_domains, },
550 +	{ .compatible = "qcom,sdm636", .data = sdm630_domains, },
559 551	{ .compatible = "qcom,sda660", .data = sdm660_domains, },
560 552	{ .compatible = "qcom,sdm660", .data = sdm660_domains, },
561 553	{ .compatible = "qcom,sdm670", .data = sdm670_domains, },
drivers/soc/qcom/qcom_pdr_msg.c (+1 -2)
···
8 8
9 9	#include "pdr_internal.h"
10 10
11 -	const struct qmi_elem_info servreg_location_entry_ei[] = {
11 +	static const struct qmi_elem_info servreg_location_entry_ei[] = {
12 12	{
13 13		.data_type      = QMI_STRING,
14 14		.elem_len       = SERVREG_NAME_LENGTH + 1,
···
47 47	},
48 48	{}
49 49	};
50 -	EXPORT_SYMBOL_GPL(servreg_location_entry_ei);
51 50
52 51	const struct qmi_elem_info servreg_get_domain_list_req_ei[] = {
53 52	{
drivers/soc/renesas/Kconfig (+18)
···
334 334	config ARCH_R9A08G045
335 335		bool "ARM64 Platform support for RZ/G3S"
336 336		select ARCH_RZG2L
337 +		select SYSC_R9A08G045
337 338		help
338 339		  This enables support for the Renesas RZ/G3S SoC variants.
339 340
···
348 347
349 348	config ARCH_R9A09G047
350 349		bool "ARM64 Platform support for RZ/G3E"
350 +		select SYS_R9A09G047
351 351		help
352 352		  This enables support for the Renesas RZ/G3E SoC variants.
353 353
354 354	config ARCH_R9A09G057
355 355		bool "ARM64 Platform support for RZ/V2H(P)"
356 356		select RENESAS_RZV2H_ICU
357 +		select SYS_R9A09G057
357 358		help
358 359		  This enables support for the Renesas RZ/V2H(P) SoC variants.
359 360
···
385 382
386 383	config RST_RCAR
387 384		bool "Reset Controller support for R-Car" if COMPILE_TEST
385 +
386 +	config SYSC_RZ
387 +		bool "System controller for RZ SoCs" if COMPILE_TEST
388 +
389 +	config SYSC_R9A08G045
390 +		bool "Renesas RZ/G3S System controller support" if COMPILE_TEST
391 +		select SYSC_RZ
392 +
393 +	config SYS_R9A09G047
394 +		bool "Renesas RZ/G3E System controller support" if COMPILE_TEST
395 +		select SYSC_RZ
396 +
397 +	config SYS_R9A09G057
398 +		bool "Renesas RZ/V2H System controller support" if COMPILE_TEST
399 +		select SYSC_RZ
388 400
389 401	endif # SOC_RENESAS
drivers/soc/renesas/Makefile (+4)
···
6 6	ifdef CONFIG_SMP
7 7	obj-$(CONFIG_ARCH_R9A06G032)	+= r9a06g032-smp.o
8 8	endif
9 +	obj-$(CONFIG_SYSC_R9A08G045)	+= r9a08g045-sysc.o
10 +	obj-$(CONFIG_SYS_R9A09G047)	+= r9a09g047-sys.o
11 +	obj-$(CONFIG_SYS_R9A09G057)	+= r9a09g057-sys.o
9 12
10 13	# Family
11 14	obj-$(CONFIG_PWC_RZV2M)		+= pwc-rzv2m.o
12 15	obj-$(CONFIG_RST_RCAR)		+= rcar-rst.o
16 +	obj-$(CONFIG_SYSC_RZ)		+= rz-sysc.o
drivers/soc/renesas/r9a08g045-sysc.c (+23)
···
1 +	// SPDX-License-Identifier: GPL-2.0
2 +	/*
3 +	 * RZ/G3S System controller driver
4 +	 *
5 +	 * Copyright (C) 2024 Renesas Electronics Corp.
6 +	 */
7 +
8 +	#include <linux/bits.h>
9 +	#include <linux/init.h>
10 +
11 +	#include "rz-sysc.h"
12 +
13 +	static const struct rz_sysc_soc_id_init_data rzg3s_sysc_soc_id_init_data __initconst = {
14 +		.family = "RZ/G3S",
15 +		.id = 0x85e0447,
16 +		.devid_offset = 0xa04,
17 +		.revision_mask = GENMASK(31, 28),
18 +		.specific_id_mask = GENMASK(27, 0),
19 +	};
20 +
21 +	const struct rz_sysc_init_data rzg3s_sysc_init_data __initconst = {
22 +		.soc_id_init_data = &rzg3s_sysc_soc_id_init_data,
23 +	};
drivers/soc/renesas/r9a09g047-sys.c (+67)
···
1 +	// SPDX-License-Identifier: GPL-2.0
2 +	/*
3 +	 * RZ/G3E System controller (SYS) driver
4 +	 *
5 +	 * Copyright (C) 2025 Renesas Electronics Corp.
6 +	 */
7 +
8 +	#include <linux/bitfield.h>
9 +	#include <linux/bits.h>
10 +	#include <linux/device.h>
11 +	#include <linux/init.h>
12 +	#include <linux/io.h>
13 +
14 +	#include "rz-sysc.h"
15 +
16 +	/* Register Offsets */
17 +	#define SYS_LSI_MODE		0x300
18 +	/*
19 +	 * BOOTPLLCA[1:0]
20 +	 *	[0,0] => 1.1GHZ
21 +	 *	[0,1] => 1.5GHZ
22 +	 *	[1,0] => 1.6GHZ
23 +	 *	[1,1] => 1.7GHZ
24 +	 */
25 +	#define SYS_LSI_MODE_STAT_BOOTPLLCA55	GENMASK(12, 11)
26 +	#define SYS_LSI_MODE_CA55_1_7GHZ	0x3
27 +
28 +	#define SYS_LSI_PRR		0x308
29 +	#define SYS_LSI_PRR_CA55_DIS	BIT(8)
30 +	#define SYS_LSI_PRR_NPU_DIS	BIT(1)
31 +
32 +	static void rzg3e_sys_print_id(struct device *dev,
33 +				       void __iomem *sysc_base,
34 +				       struct soc_device_attribute *soc_dev_attr)
35 +	{
36 +		bool is_quad_core, npu_enabled;
37 +		u32 prr_val, mode_val;
38 +
39 +		prr_val = readl(sysc_base + SYS_LSI_PRR);
40 +		mode_val = readl(sysc_base + SYS_LSI_MODE);
41 +
42 +		/* Check CPU and NPU configuration */
43 +		is_quad_core = !(prr_val & SYS_LSI_PRR_CA55_DIS);
44 +		npu_enabled = !(prr_val & SYS_LSI_PRR_NPU_DIS);
45 +
46 +		dev_info(dev, "Detected Renesas %s Core %s %s Rev %s%s\n",
47 +			 is_quad_core ? "Quad" : "Dual", soc_dev_attr->family,
48 +			 soc_dev_attr->soc_id, soc_dev_attr->revision,
49 +			 npu_enabled ? " with Ethos-U55" : "");
50 +
51 +		/* Check CA55 PLL configuration */
52 +		if (FIELD_GET(SYS_LSI_MODE_STAT_BOOTPLLCA55, mode_val) != SYS_LSI_MODE_CA55_1_7GHZ)
53 +			dev_warn(dev, "CA55 PLL is not set to 1.7GHz\n");
54 +	}
55 +
56 +	static const struct rz_sysc_soc_id_init_data rzg3e_sys_soc_id_init_data __initconst = {
57 +		.family = "RZ/G3E",
58 +		.id = 0x8679447,
59 +		.devid_offset = 0x304,
60 +		.revision_mask = GENMASK(31, 28),
61 +		.specific_id_mask = GENMASK(27, 0),
62 +		.print_id = rzg3e_sys_print_id,
63 +	};
64 +
65 +	const struct rz_sysc_init_data rzg3e_sys_init_data = {
66 +		.soc_id_init_data = &rzg3e_sys_soc_id_init_data,
67 +	};
drivers/soc/renesas/r9a09g057-sys.c (+67)
···
1 +	// SPDX-License-Identifier: GPL-2.0
2 +	/*
3 +	 * RZ/V2H System controller (SYS) driver
4 +	 *
5 +	 * Copyright (C) 2025 Renesas Electronics Corp.
6 +	 */
7 +
8 +	#include <linux/bitfield.h>
9 +	#include <linux/bits.h>
10 +	#include <linux/device.h>
11 +	#include <linux/init.h>
12 +	#include <linux/io.h>
13 +
14 +	#include "rz-sysc.h"
15 +
16 +	/* Register Offsets */
17 +	#define SYS_LSI_MODE		0x300
18 +	/*
19 +	 * BOOTPLLCA[1:0]
20 +	 *	[0,0] => 1.1GHZ
21 +	 *	[0,1] => 1.5GHZ
22 +	 *	[1,0] => 1.6GHZ
23 +	 *	[1,1] => 1.7GHZ
24 +	 */
25 +	#define SYS_LSI_MODE_STAT_BOOTPLLCA55	GENMASK(12, 11)
26 +	#define SYS_LSI_MODE_CA55_1_7GHZ	0x3
27 +
28 +	#define SYS_LSI_PRR		0x308
29 +	#define SYS_LSI_PRR_GPU_DIS	BIT(0)
30 +	#define SYS_LSI_PRR_ISP_DIS	BIT(4)
31 +
32 +	static void rzv2h_sys_print_id(struct device *dev,
33 +				       void __iomem *sysc_base,
34 +				       struct soc_device_attribute *soc_dev_attr)
35 +	{
36 +		bool gpu_enabled, isp_enabled;
37 +		u32 prr_val, mode_val;
38 +
39 +		prr_val = readl(sysc_base + SYS_LSI_PRR);
40 +		mode_val = readl(sysc_base + SYS_LSI_MODE);
41 +
42 +		/* Check GPU and ISP configuration */
43 +		gpu_enabled = !(prr_val & SYS_LSI_PRR_GPU_DIS);
44 +		isp_enabled = !(prr_val & SYS_LSI_PRR_ISP_DIS);
45 +
46 +		dev_info(dev, "Detected Renesas %s %s Rev %s%s%s\n",
47 +			 soc_dev_attr->family, soc_dev_attr->soc_id, soc_dev_attr->revision,
48 +			 gpu_enabled ? " with GE3D (Mali-G31)" : "",
49 +			 isp_enabled ? " with ISP (Mali-C55)" : "");
50 +
51 +		/* Check CA55 PLL configuration */
52 +		if (FIELD_GET(SYS_LSI_MODE_STAT_BOOTPLLCA55, mode_val) != SYS_LSI_MODE_CA55_1_7GHZ)
53 +			dev_warn(dev, "CA55 PLL is not set to 1.7GHz\n");
54 +	}
55 +
56 +	static const struct rz_sysc_soc_id_init_data rzv2h_sys_soc_id_init_data __initconst = {
57 +		.family = "RZ/V2H",
58 +		.id = 0x847a447,
59 +		.devid_offset = 0x304,
60 +		.revision_mask = GENMASK(31, 28),
61 +		.specific_id_mask = GENMASK(27, 0),
62 +		.print_id = rzv2h_sys_print_id,
63 +	};
64 +
65 +	const struct rz_sysc_init_data rzv2h_sys_init_data = {
66 +		.soc_id_init_data = &rzv2h_sys_soc_id_init_data,
67 +	};
drivers/soc/renesas/renesas-soc.c (+1 -32)
···
71 71		.name	= "RZ/G2UL",
72 72	};
73 73
74 -	static const struct renesas_family fam_rzg3s __initconst __maybe_unused = {
75 -		.name	= "RZ/G3S",
76 -	};
77 -
78 -	static const struct renesas_family fam_rzv2h __initconst __maybe_unused = {
79 -		.name	= "RZ/V2H",
80 -	};
81 -
82 74	static const struct renesas_family fam_rzv2l __initconst __maybe_unused = {
83 75		.name	= "RZ/V2L",
84 76	};
···
166 174	static const struct renesas_soc soc_rz_g2ul __initconst __maybe_unused = {
167 175		.family	= &fam_rzg2ul,
168 176		.id	= 0x8450447,
169 -	};
170 -
171 -	static const struct renesas_soc soc_rz_g3s __initconst __maybe_unused = {
172 -		.family	= &fam_rzg3s,
173 -		.id	= 0x85e0447,
174 -	};
175 -
176 -	static const struct renesas_soc soc_rz_v2h __initconst __maybe_unused = {
177 -		.family	= &fam_rzv2h,
178 -		.id	= 0x847a447,
179 177	};
···
270 288		.family	= &fam_shmobile,
271 289		.id	= 0x37,
272 290	};
273 -
274 291
275 292	static const struct of_device_id renesas_socs[] __initconst __maybe_unused = {
276 293	#ifdef CONFIG_ARCH_R7S72100
···
391 410	#ifdef CONFIG_ARCH_R9A07G054
392 411		{ .compatible = "renesas,r9a07g054",	.data = &soc_rz_v2l },
393 412	#endif
394 -	#ifdef CONFIG_ARCH_R9A08G045
395 -		{ .compatible = "renesas,r9a08g045",	.data = &soc_rz_g3s },
396 -	#endif
397 413	#ifdef CONFIG_ARCH_R9A09G011
398 414		{ .compatible = "renesas,r9a09g011",	.data = &soc_rz_v2m },
399 -	#endif
400 -	#ifdef CONFIG_ARCH_R9A09G057
401 -		{ .compatible = "renesas,r9a09g057",	.data = &soc_rz_v2h },
402 415	#endif
403 416	#ifdef CONFIG_ARCH_SH73A0
404 417		{ .compatible = "renesas,sh73a0",	.data = &soc_shmobile_ag5 },
···
419 444		.mask = 0xfffffff,
420 445	};
421 446
422 -	static const struct renesas_id id_rzv2h __initconst = {
423 -		.offset = 0x304,
424 -		.mask = 0xfffffff,
425 -	};
426 -
427 447	static const struct renesas_id id_rzv2m __initconst = {
428 448		.offset = 0x104,
429 449		.mask = 0xff,
···
436 466		{ .compatible = "renesas,r9a07g054-sysc",	.data = &id_rzg2l },
437 467		{ .compatible = "renesas,r9a08g045-sysc",	.data = &id_rzg2l },
438 468		{ .compatible = "renesas,r9a09g011-sys",	.data = &id_rzv2m },
439 -		{ .compatible = "renesas,r9a09g057-sys",	.data = &id_rzv2h },
440 469		{ .compatible = "renesas,prr",			.data = &id_prr },
441 470		{ /* sentinel */ }
442 471	};
···
500 531		eslo = product & 0xf;
501 532		soc_dev_attr->revision = kasprintf(GFP_KERNEL, "ES%u.%u",
502 533						   eshi, eslo);
503 -	} else if (id == &id_rzg2l || id == &id_rzv2h) {
534 +	} else if (id == &id_rzg2l) {
504 535		eshi = ((product >> 28) & 0x0f);
505 536		soc_dev_attr->revision = kasprintf(GFP_KERNEL, "%u",
506 537						   eshi);
drivers/soc/renesas/rz-sysc.c (+137)
···
1 +	// SPDX-License-Identifier: GPL-2.0
2 +	/*
3 +	 * RZ System controller driver
4 +	 *
5 +	 * Copyright (C) 2024 Renesas Electronics Corp.
6 +	 */
7 +
8 +	#include <linux/io.h>
9 +	#include <linux/of.h>
10 +	#include <linux/platform_device.h>
11 +	#include <linux/sys_soc.h>
12 +
13 +	#include "rz-sysc.h"
14 +
15 +	#define field_get(_mask, _reg) (((_reg) & (_mask)) >> (ffs(_mask) - 1))
16 +
17 +	/**
18 +	 * struct rz_sysc - RZ SYSC private data structure
19 +	 * @base: SYSC base address
20 +	 * @dev: SYSC device pointer
21 +	 */
22 +	struct rz_sysc {
23 +		void __iomem *base;
24 +		struct device *dev;
25 +	};
26 +
27 +	static int rz_sysc_soc_init(struct rz_sysc *sysc, const struct of_device_id *match)
28 +	{
29 +		const struct rz_sysc_init_data *sysc_data = match->data;
30 +		const struct rz_sysc_soc_id_init_data *soc_data = sysc_data->soc_id_init_data;
31 +		struct soc_device_attribute *soc_dev_attr;
32 +		const char *soc_id_start, *soc_id_end;
33 +		u32 val, revision, specific_id;
34 +		struct soc_device *soc_dev;
35 +		char soc_id[32] = {0};
36 +		size_t size;
37 +
38 +		soc_id_start = strchr(match->compatible, ',') + 1;
39 +		soc_id_end = strchr(match->compatible, '-');
40 +		size = soc_id_end - soc_id_start + 1;
41 +		if (size > 32)
42 +			size = sizeof(soc_id);
43 +		strscpy(soc_id, soc_id_start, size);
44 +
45 +		soc_dev_attr = devm_kzalloc(sysc->dev, sizeof(*soc_dev_attr), GFP_KERNEL);
46 +		if (!soc_dev_attr)
47 +			return -ENOMEM;
48 +
49 +		soc_dev_attr->family = devm_kstrdup(sysc->dev, soc_data->family, GFP_KERNEL);
50 +		if (!soc_dev_attr->family)
51 +			return -ENOMEM;
52 +
53 +		soc_dev_attr->soc_id = devm_kstrdup(sysc->dev, soc_id, GFP_KERNEL);
54 +		if (!soc_dev_attr->soc_id)
55 +			return -ENOMEM;
56 +
57 +		val = readl(sysc->base + soc_data->devid_offset);
58 +		revision = field_get(soc_data->revision_mask, val);
59 +		specific_id = field_get(soc_data->specific_id_mask, val);
60 +		soc_dev_attr->revision = devm_kasprintf(sysc->dev, GFP_KERNEL, "%u", revision);
61 +		if (!soc_dev_attr->revision)
62 +			return -ENOMEM;
63 +
64 +		if (soc_data->id && specific_id != soc_data->id) {
65 +			dev_warn(sysc->dev, "SoC mismatch (product = 0x%x)\n", specific_id);
66 +			return -ENODEV;
67 +		}
68 +
69 +		/* Try to call SoC-specific device identification */
70 +		if (soc_data->print_id) {
71 +			soc_data->print_id(sysc->dev, sysc->base, soc_dev_attr);
72 +		} else {
73 +			dev_info(sysc->dev, "Detected Renesas %s %s Rev %s\n",
74 +				 soc_dev_attr->family, soc_dev_attr->soc_id, soc_dev_attr->revision);
75 +		}
76 +
77 +		soc_dev = soc_device_register(soc_dev_attr);
78 +		if (IS_ERR(soc_dev))
79 +			return PTR_ERR(soc_dev);
80 +
81 +		return 0;
82 +	}
83 +
84 +	static const struct of_device_id rz_sysc_match[] = {
85 +	#ifdef CONFIG_SYSC_R9A08G045
86 +		{ .compatible = "renesas,r9a08g045-sysc", .data = &rzg3s_sysc_init_data },
87 +	#endif
88 +	#ifdef CONFIG_SYS_R9A09G047
89 +		{ .compatible = "renesas,r9a09g047-sys", .data = &rzg3e_sys_init_data },
90 +	#endif
91 +	#ifdef CONFIG_SYS_R9A09G057
92 +		{ .compatible = "renesas,r9a09g057-sys", .data = &rzv2h_sys_init_data },
93 +	#endif
94 +		{ }
95 +	};
96 +	MODULE_DEVICE_TABLE(of, rz_sysc_match);
97 +
98 +	static int rz_sysc_probe(struct platform_device *pdev)
99 +	{
100 +		const struct of_device_id *match;
101 +		struct device *dev = &pdev->dev;
102 +		struct rz_sysc *sysc;
103 +
104 +		match = of_match_node(rz_sysc_match, dev->of_node);
105 +		if (!match)
106 +			return -ENODEV;
107 +
108 +		sysc = devm_kzalloc(dev, sizeof(*sysc), GFP_KERNEL);
109 +		if (!sysc)
110 +			return -ENOMEM;
111 +
112 +		sysc->base = devm_platform_ioremap_resource(pdev, 0);
113 +		if (IS_ERR(sysc->base))
114 +			return PTR_ERR(sysc->base);
115 +
116 +		sysc->dev = dev;
117 +		return rz_sysc_soc_init(sysc, match);
118 +	}
119 +
120 +	static struct platform_driver rz_sysc_driver = {
121 +		.driver = {
122 +			.name = "renesas-rz-sysc",
123 +			.suppress_bind_attrs = true,
124 +			.of_match_table = rz_sysc_match
125 +		},
126 +		.probe = rz_sysc_probe
127 +	};
128 +
129 +	static int __init rz_sysc_init(void)
130 +	{
131 +		return platform_driver_register(&rz_sysc_driver);
132 +	}
133 +	subsys_initcall(rz_sysc_init);
134 +
135 +	MODULE_DESCRIPTION("Renesas RZ System Controller Driver")
136 +	MODULE_AUTHOR("Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>");
137 +	MODULE_LICENSE("GPL");
drivers/soc/renesas/rz-sysc.h (+46)
···
1 +	/* SPDX-License-Identifier: GPL-2.0 */
2 +	/*
3 +	 * Renesas RZ System Controller
4 +	 *
5 +	 * Copyright (C) 2024 Renesas Electronics Corp.
6 +	 */
7 +
8 +	#ifndef __SOC_RENESAS_RZ_SYSC_H__
9 +	#define __SOC_RENESAS_RZ_SYSC_H__
10 +
11 +	#include <linux/device.h>
12 +	#include <linux/sys_soc.h>
13 +	#include <linux/types.h>
14 +
15 +	/**
16 +	 * struct rz_sysc_soc_id_init_data - RZ SYSC SoC identification initialization data
17 +	 * @family: RZ SoC family
18 +	 * @id: RZ SoC expected ID
19 +	 * @devid_offset: SYSC SoC ID register offset
20 +	 * @revision_mask: SYSC SoC ID revision mask
21 +	 * @specific_id_mask: SYSC SoC ID specific ID mask
22 +	 * @print_id: print SoC-specific extended device identification
23 +	 */
24 +	struct rz_sysc_soc_id_init_data {
25 +		const char * const family;
26 +		u32 id;
27 +		u32 devid_offset;
28 +		u32 revision_mask;
29 +		u32 specific_id_mask;
30 +		void (*print_id)(struct device *dev, void __iomem *sysc_base,
31 +				 struct soc_device_attribute *soc_dev_attr);
32 +	};
33 +
34 +	/**
35 +	 * struct rz_sysc_init_data - RZ SYSC initialization data
36 +	 * @soc_id_init_data: RZ SYSC SoC ID initialization data
37 +	 */
38 +	struct rz_sysc_init_data {
39 +		const struct rz_sysc_soc_id_init_data *soc_id_init_data;
40 +	};
41 +
42 +	extern const struct rz_sysc_init_data rzg3e_sys_init_data;
43 +	extern const struct rz_sysc_init_data rzg3s_sysc_init_data;
44 +	extern const struct rz_sysc_init_data rzv2h_sys_init_data;
45 +
46 +	#endif /* __SOC_RENESAS_RZ_SYSC_H__ */
drivers/soc/samsung/exynos-asv.c (+1)
···
9 9	 * Samsung Exynos SoC Adaptive Supply Voltage support
10 10	 */
11 11
12 +	#include <linux/array_size.h>
12 13	#include <linux/cpu.h>
13 14	#include <linux/device.h>
14 15	#include <linux/energy_model.h>
drivers/soc/samsung/exynos-chipid.c (+5)
···
12 12	 * Samsung Exynos SoC Adaptive Supply Voltage and Chip ID support
13 13	 */
14 14
15 +	#include <linux/array_size.h>
15 16	#include <linux/device.h>
16 17	#include <linux/errno.h>
17 18	#include <linux/mfd/syscon.h>
···
56 55		{ "EXYNOS5440", 0xE5440000 },
57 56		{ "EXYNOS5800", 0xE5422000 },
58 57		{ "EXYNOS7420", 0xE7420000 },
58 +		{ "EXYNOS7870", 0xE7870000 },
59 59		/* Compatible with: samsung,exynos850-chipid */
60 +		{ "EXYNOS2200", 0xE9925000 },
60 61		{ "EXYNOS7885", 0xE7885000 },
61 62		{ "EXYNOS850", 0xE3830000 },
62 63		{ "EXYNOS8895", 0xE8895000 },
···
137 134
138 135		soc_dev_attr->revision = devm_kasprintf(&pdev->dev, GFP_KERNEL,
139 136							"%x", soc_info.revision);
137 +		if (!soc_dev_attr->revision)
138 +			return -ENOMEM;
140 139		soc_dev_attr->soc_id = product_id_to_soc_id(soc_info.product_id);
141 140		if (!soc_dev_attr->soc_id) {
142 141			pr_err("Unknown SoC\n");
drivers/soc/samsung/exynos-pmu.c (+1)
···
5 5	//
6 6	// Exynos - CPU PMU(Power Management Unit) support
7 7
8 +	#include <linux/array_size.h>
8 9	#include <linux/arm-smccc.h>
9 10	#include <linux/of.h>
10 11	#include <linux/of_address.h>
drivers/soc/samsung/exynos-usi.c (+89 -19)
···
6 6	 * Samsung Exynos USI driver (Universal Serial Interface).
7 7	 */
8 8
9 +	#include <linux/array_size.h>
9 10	#include <linux/clk.h>
10 11	#include <linux/mfd/syscon.h>
11 12	#include <linux/module.h>
···
16 15	#include <linux/regmap.h>
17 16
18 17	#include <dt-bindings/soc/samsung,exynos-usi.h>
18 +
19 +	/* USIv1: System Register: SW_CONF register bits */
20 +	#define USI_V1_SW_CONF_NONE		0x0
21 +	#define USI_V1_SW_CONF_I2C0		0x1
22 +	#define USI_V1_SW_CONF_I2C1		0x2
23 +	#define USI_V1_SW_CONF_I2C0_1		0x3
24 +	#define USI_V1_SW_CONF_SPI		0x4
25 +	#define USI_V1_SW_CONF_UART		0x8
26 +	#define USI_V1_SW_CONF_UART_I2C1	0xa
27 +	#define USI_V1_SW_CONF_MASK	(USI_V1_SW_CONF_I2C0 | USI_V1_SW_CONF_I2C1 | \
28 +				 USI_V1_SW_CONF_I2C0_1 | USI_V1_SW_CONF_SPI | \
29 +				 USI_V1_SW_CONF_UART | USI_V1_SW_CONF_UART_I2C1)
19 30
20 31	/* USIv2: System Register: SW_CONF register bits */
21 32	#define USI_V2_SW_CONF_NONE		0x0
···
47 34	#define USI_OPTION_CLKSTOP_ON		BIT(2)
48 35
49 36	enum exynos_usi_ver {
50 -		USI_VER2 = 2,
37 +		USI_VER1 = 0,
38 +		USI_VER2,
51 39	};
52 40
53 41	struct exynos_usi_variant {
···
80 66		unsigned int val;	/* mode register value */
81 67	};
82 68
83 -	static const struct exynos_usi_mode exynos_usi_modes[] = {
84 -		[USI_V2_NONE] = { .name = "none", .val = USI_V2_SW_CONF_NONE },
85 -		[USI_V2_UART] = { .name = "uart", .val = USI_V2_SW_CONF_UART },
86 -		[USI_V2_SPI]  = { .name = "spi",  .val = USI_V2_SW_CONF_SPI },
87 -		[USI_V2_I2C]  = { .name = "i2c",  .val = USI_V2_SW_CONF_I2C },
69 +	#define USI_MODES_MAX	(USI_MODE_UART_I2C1 + 1)
70 +	static const struct exynos_usi_mode exynos_usi_modes[][USI_MODES_MAX] = {
71 +		[USI_VER1] = {
72 +			[USI_MODE_NONE] = { .name = "none", .val = USI_V1_SW_CONF_NONE },
73 +			[USI_MODE_UART] = { .name = "uart", .val = USI_V1_SW_CONF_UART },
74 +			[USI_MODE_SPI]  = { .name = "spi",  .val = USI_V1_SW_CONF_SPI },
75 +			[USI_MODE_I2C]  = { .name = "i2c",  .val = USI_V1_SW_CONF_I2C0 },
76 +			[USI_MODE_I2C1] = { .name = "i2c1", .val = USI_V1_SW_CONF_I2C1 },
77 +			[USI_MODE_I2C0_1] = { .name = "i2c0_1", .val = USI_V1_SW_CONF_I2C0_1 },
78 +			[USI_MODE_UART_I2C1] = { .name = "uart_i2c1", .val = USI_V1_SW_CONF_UART_I2C1 },
79 +		}, [USI_VER2] = {
80 +			[USI_MODE_NONE] = { .name = "none", .val = USI_V2_SW_CONF_NONE },
81 +			[USI_MODE_UART] = { .name = "uart", .val = USI_V2_SW_CONF_UART },
82 +			[USI_MODE_SPI]  = { .name = "spi",  .val = USI_V2_SW_CONF_SPI },
83 +			[USI_MODE_I2C]  = { .name = "i2c",  .val = USI_V2_SW_CONF_I2C },
84 +		},
88 85	};
89 86
90 87	static const char * const exynos850_usi_clk_names[] = { "pclk", "ipclk" };
91 88	static const struct exynos_usi_variant exynos850_usi_data = {
92 89		.ver		= USI_VER2,
93 90		.sw_conf_mask	= USI_V2_SW_CONF_MASK,
94 -		.min_mode	= USI_V2_NONE,
95 -		.max_mode	= USI_V2_I2C,
91 +		.min_mode	= USI_MODE_NONE,
92 +		.max_mode	= USI_MODE_I2C,
93 +		.num_clks	= ARRAY_SIZE(exynos850_usi_clk_names),
94 +		.clk_names	= exynos850_usi_clk_names,
95 +	};
96 +
97 +	static const struct exynos_usi_variant exynos8895_usi_data = {
98 +		.ver		= USI_VER1,
99 +		.sw_conf_mask	= USI_V1_SW_CONF_MASK,
100 +		.min_mode	= USI_MODE_NONE,
101 +		.max_mode	= USI_MODE_UART_I2C1,
96 102		.num_clks	= ARRAY_SIZE(exynos850_usi_clk_names),
97 103		.clk_names	= exynos850_usi_clk_names,
98 104	};
···
121 87	{
122 88		.compatible = "samsung,exynos850-usi",
123 89		.data = &exynos850_usi_data,
90 +	}, {
91 +		.compatible = "samsung,exynos8895-usi",
92 +		.data = &exynos8895_usi_data,
124 93	},
125 94	{ } /* sentinel */
126 95	};
···
146 109		if (mode < usi->data->min_mode || mode > usi->data->max_mode)
147 110			return -EINVAL;
148 111
149 -		val = exynos_usi_modes[mode].val;
112 +		val = exynos_usi_modes[usi->data->ver][mode].val;
150 113		ret = regmap_update_bits(usi->sysreg, usi->sw_conf,
151 114					 usi->data->sw_conf_mask, val);
152 115		if (ret)
153 116			return ret;
154 117
155 118		usi->mode = mode;
156 -		dev_dbg(usi->dev, "protocol: %s\n", exynos_usi_modes[usi->mode].name);
119 +		dev_dbg(usi->dev, "protocol: %s\n",
120 +			exynos_usi_modes[usi->data->ver][usi->mode].name);
157 121
158 122		return 0;
159 123	}
···
206 168		if (ret)
207 169			return ret;
208 170
209 -		if (usi->data->ver == USI_VER2)
210 -			return exynos_usi_enable(usi);
171 +		if (usi->data->ver == USI_VER1)
172 +			ret = clk_bulk_prepare_enable(usi->data->num_clks,
173 +						      usi->clks);
174 +		else if (usi->data->ver == USI_VER2)
175 +			ret = exynos_usi_enable(usi);
211 176
212 -		return 0;
177 +		return ret;
178 +	}
179 +
180 +	static void exynos_usi_unconfigure(void *data)
181 +	{
182 +		struct exynos_usi *usi = data;
183 +		u32 val;
184 +		int ret;
185 +
186 +		if (usi->data->ver == USI_VER1) {
187 +			clk_bulk_disable_unprepare(usi->data->num_clks, usi->clks);
188 +			return;
189 +		}
190 +
191 +		ret = clk_bulk_prepare_enable(usi->data->num_clks, usi->clks);
192 +		if (ret)
193 +			return;
194 +
195 +		/* Make sure that we've stopped providing the clock to USI IP */
196 +		val = readl(usi->regs + USI_OPTION);
197 +		val &= ~USI_OPTION_CLKREQ_ON;
198 +		val |= ~USI_OPTION_CLKSTOP_ON;
199 +		writel(val, usi->regs + USI_OPTION);
200 +
201 +		/* Set USI block state to reset */
202 +		val = readl(usi->regs + USI_CON);
203 +		val |= USI_CON_RESET;
204 +		writel(val, usi->regs + USI_CON);
205 +
206 +		clk_bulk_disable_unprepare(usi->data->num_clks, usi->clks);
213 207	}
214 208
215 209	static int exynos_usi_parse_dt(struct device_node *np, struct exynos_usi *usi)
···
256 186			return -EINVAL;
257 187		usi->mode = mode;
258 188
259 -		usi->sysreg = syscon_regmap_lookup_by_phandle(np, "samsung,sysreg");
189 +		usi->sysreg = syscon_regmap_lookup_by_phandle_args(np, "samsung,sysreg",
190 +								   1, &usi->sw_conf);
260 191		if (IS_ERR(usi->sysreg))
261 192			return PTR_ERR(usi->sysreg);
262 -
263 -		ret = of_property_read_u32_index(np, "samsung,sysreg", 1,
264 -						 &usi->sw_conf);
265 -		if (ret)
266 -			return ret;
267 193
268 194		usi->clkreq_on = of_property_read_bool(np, "samsung,clkreq-on");
269 195
···
318 252		}
319 253
320 254		ret = exynos_usi_configure(usi);
255 +		if (ret)
256 +			return ret;
257 +
258 +		ret = devm_add_action_or_reset(&pdev->dev, exynos_usi_unconfigure, usi);
321 259		if (ret)
322 260			return ret;
323 261
drivers/soc/samsung/exynos3250-pmu.c (+1)
···
5 5	//
6 6	// Exynos3250 - CPU PMU (Power Management Unit) support
7 7
8 +	#include <linux/array_size.h>
8 9	#include <linux/soc/samsung/exynos-regs-pmu.h>
9 10	#include <linux/soc/samsung/exynos-pmu.h>
10 11
drivers/soc/samsung/exynos5250-pmu.c (+1)
···
5 5	//
6 6	// Exynos5250 - CPU PMU (Power Management Unit) support
7 7
8 +	#include <linux/array_size.h>
8 9	#include <linux/soc/samsung/exynos-regs-pmu.h>
9 10	#include <linux/soc/samsung/exynos-pmu.h>
10 11
drivers/soc/samsung/exynos5420-pmu.c (+1)
···
5 5	//
6 6	// Exynos5420 - CPU PMU (Power Management Unit) support
7 7
8 +	#include <linux/array_size.h>
8 9	#include <linux/pm.h>
9 10	#include <linux/soc/samsung/exynos-regs-pmu.h>
10 11	#include <linux/soc/samsung/exynos-pmu.h>
drivers/soc/tegra/pmc.c (+2 -1)
···
47 47	#include <linux/seq_file.h>
48 48	#include <linux/slab.h>
49 49	#include <linux/spinlock.h>
50 +	#include <linux/string_choices.h>
50 51	#include <linux/syscore_ops.h>
51 52
52 53	#include <soc/tegra/common.h>
···
1182 1181			continue;
1183 1182
1184 1183		seq_printf(s, " %9s %7s\n", pmc->soc->powergates[i],
1185 -			   status ? "yes" : "no");
1184 +			   str_yes_no(status));
1186 1185
1188 1187		return 0;
drivers/soc/ti/k3-socinfo.c (+12 -1)
···
105 105		return -ENODEV;
106 106	}
107 107
108 +	static const struct regmap_config k3_chipinfo_regmap_cfg = {
109 +		.reg_bits = 32,
110 +		.val_bits = 32,
111 +		.reg_stride = 4,
112 +	};
113 +
108 114	static int k3_chipinfo_probe(struct platform_device *pdev)
109 115	{
110 116		struct device_node *node = pdev->dev.of_node;
···
118 112		struct device *dev = &pdev->dev;
119 113		struct soc_device *soc_dev;
120 114		struct regmap *regmap;
115 +		void __iomem *base;
121 116		u32 partno_id;
122 117		u32 variant;
123 118		u32 jtag_id;
124 119		u32 mfg;
125 120		int ret;
126 121
127 -		regmap = device_node_to_regmap(node);
122 +		base = devm_platform_ioremap_resource(pdev, 0);
123 +		if (IS_ERR(base))
124 +			return PTR_ERR(base);
125 +
126 +		regmap = regmap_init_mmio(dev, base, &k3_chipinfo_regmap_cfg);
128 127		if (IS_ERR(regmap))
129 128			return PTR_ERR(regmap);
drivers/ufs/host/ufs-qcom.c (+1 -1)
···
147 147		int err;
148 148		int i;
149 149
150 -		ice = of_qcom_ice_get(dev);
150 +		ice = devm_of_qcom_ice_get(dev);
151 151		if (ice == ERR_PTR(-EOPNOTSUPP)) {
152 152			dev_warn(dev, "Disabling inline encryption support\n");
153 153			ice = NULL;
+40
include/linux/arm-smccc.h
··· 654 654 method; \ 655 655 }) 656 656 657 + #ifdef CONFIG_ARM64 658 + 659 + #define __fail_smccc_1_2(___res) \ 660 + do { \ 661 + if (___res) \ 662 + ___res->a0 = SMCCC_RET_NOT_SUPPORTED; \ 663 + } while (0) 664 + 665 + /* 666 + * arm_smccc_1_2_invoke() - make an SMCCC v1.2 compliant call 667 + * 668 + * @args: SMC args are in the a0..a17 fields of the arm_smccc_1_2_regs structure 669 + * @res: result values from registers 0 to 17 670 + * 671 + * This macro will make either an HVC call or an SMC call depending on the 672 + * current SMCCC conduit. If no valid conduit is available then -1 673 + * (SMCCC_RET_NOT_SUPPORTED) is returned in @res.a0 (if supplied). 674 + * 675 + * The return value also provides the conduit that was used. 676 + */ 677 + #define arm_smccc_1_2_invoke(args, res) ({ \ 678 + struct arm_smccc_1_2_regs *__args = args; \ 679 + struct arm_smccc_1_2_regs *__res = res; \ 680 + int method = arm_smccc_1_1_get_conduit(); \ 681 + switch (method) { \ 682 + case SMCCC_CONDUIT_HVC: \ 683 + arm_smccc_1_2_hvc(__args, __res); \ 684 + break; \ 685 + case SMCCC_CONDUIT_SMC: \ 686 + arm_smccc_1_2_smc(__args, __res); \ 687 + break; \ 688 + default: \ 689 + __fail_smccc_1_2(__res); \ 690 + method = SMCCC_CONDUIT_NONE; \ 691 + break; \ 692 + } \ 693 + method; \ 694 + }) 695 + #endif /*CONFIG_ARM64*/ 696 + 657 697 #endif /*__ASSEMBLY__*/ 658 698 #endif /*__LINUX_ARM_SMCCC_H*/
+20 -2
include/linux/arm_ffa.h
··· 112 112 FIELD_PREP(FFA_MINOR_VERSION_MASK, (minor))) 113 113 #define FFA_VERSION_1_0 FFA_PACK_VERSION_INFO(1, 0) 114 114 #define FFA_VERSION_1_1 FFA_PACK_VERSION_INFO(1, 1) 115 + #define FFA_VERSION_1_2 FFA_PACK_VERSION_INFO(1, 2) 115 116 116 117 /** 117 118 * FF-A specification mentions explicitly about '4K pages'. This should ··· 177 176 int ffa_driver_register(struct ffa_driver *driver, struct module *owner, 178 177 const char *mod_name); 179 178 void ffa_driver_unregister(struct ffa_driver *driver); 179 + void ffa_devices_unregister(void); 180 180 bool ffa_device_is_valid(struct ffa_device *ffa_dev); 181 181 182 182 #else ··· 189 187 } 190 188 191 189 static inline void ffa_device_unregister(struct ffa_device *dev) {} 190 + 191 + static inline void ffa_devices_unregister(void) {} 192 192 193 193 static inline int 194 194 ffa_driver_register(struct ffa_driver *driver, struct module *owner, ··· 241 237 #define FFA_PARTITION_NOTIFICATION_RECV BIT(3) 242 238 /* partition runs in the AArch64 execution state. */ 243 239 #define FFA_PARTITION_AARCH64_EXEC BIT(8) 240 + /* partition supports receipt of direct request2 */ 241 + #define FFA_PARTITION_DIRECT_REQ2_RECV BIT(9) 242 + /* partition can send direct request2. */
243 + #define FFA_PARTITION_DIRECT_REQ2_SEND BIT(10) 244 244 u32 properties; 245 - u32 uuid[4]; 245 + uuid_t uuid; 246 246 }; 247 247 248 248 static inline ··· 264 256 #define ffa_partition_supports_direct_recv(dev) \ 265 257 ffa_partition_check_property(dev, FFA_PARTITION_DIRECT_RECV) 266 258 259 + #define ffa_partition_supports_direct_req2_recv(dev) \ 260 + (ffa_partition_check_property(dev, FFA_PARTITION_DIRECT_REQ2_RECV) && \ 261 + !dev->mode_32bit) 262 + 267 263 /* For use with FFA_MSG_SEND_DIRECT_{REQ,RESP} which pass data via registers */ 268 264 struct ffa_send_direct_data { 269 265 unsigned long data0; /* w3/x3 */ ··· 283 271 u32 offset; 284 272 u32 send_recv_id; 285 273 u32 size; 274 + uuid_t uuid; 286 275 }; 287 276 288 277 /* For use with FFA_MSG_SEND_DIRECT_{REQ,RESP}2 which pass data via registers */ ··· 452 439 int (*sync_send_receive)(struct ffa_device *dev, 453 440 struct ffa_send_direct_data *data); 454 441 int (*indirect_send)(struct ffa_device *dev, void *buf, size_t sz); 455 - int (*sync_send_receive2)(struct ffa_device *dev, const uuid_t *uuid, 442 + int (*sync_send_receive2)(struct ffa_device *dev, 456 443 struct ffa_send_direct_data2 *data); 457 444 }; 458 445 ··· 468 455 469 456 typedef void (*ffa_sched_recv_cb)(u16 vcpu, bool is_per_vcpu, void *cb_data); 470 457 typedef void (*ffa_notifier_cb)(int notify_id, void *cb_data); 458 + typedef void (*ffa_fwk_notifier_cb)(int notify_id, void *cb_data, void *buf); 471 459 472 460 struct ffa_notifier_ops { 473 461 int (*sched_recv_cb_register)(struct ffa_device *dev, ··· 477 463 int (*notify_request)(struct ffa_device *dev, bool per_vcpu, 478 464 ffa_notifier_cb cb, void *cb_data, int notify_id); 479 465 int (*notify_relinquish)(struct ffa_device *dev, int notify_id); 466 + int (*fwk_notify_request)(struct ffa_device *dev, 467 + ffa_fwk_notifier_cb cb, void *cb_data, 468 + int notify_id); 469 + int (*fwk_notify_relinquish)(struct ffa_device *dev, int notify_id); 480 470 int (*notify_send)(struct ffa_device *dev, int notify_id, bool per_vcpu, 481 471 u16 vcpu); 482 472 };
+49
include/linux/firmware/samsung/exynos-acpm-protocol.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + /* 3 + * Copyright 2020 Samsung Electronics Co., Ltd. 4 + * Copyright 2020 Google LLC. 5 + * Copyright 2024 Linaro Ltd. 6 + */ 7 + 8 + #ifndef __EXYNOS_ACPM_PROTOCOL_H 9 + #define __EXYNOS_ACPM_PROTOCOL_H 10 + 11 + #include <linux/types.h> 12 + 13 + struct acpm_handle; 14 + 15 + struct acpm_pmic_ops { 16 + int (*read_reg)(const struct acpm_handle *handle, 17 + unsigned int acpm_chan_id, u8 type, u8 reg, u8 chan, 18 + u8 *buf); 19 + int (*bulk_read)(const struct acpm_handle *handle, 20 + unsigned int acpm_chan_id, u8 type, u8 reg, u8 chan, 21 + u8 count, u8 *buf); 22 + int (*write_reg)(const struct acpm_handle *handle, 23 + unsigned int acpm_chan_id, u8 type, u8 reg, u8 chan, 24 + u8 value); 25 + int (*bulk_write)(const struct acpm_handle *handle, 26 + unsigned int acpm_chan_id, u8 type, u8 reg, u8 chan, 27 + u8 count, const u8 *buf); 28 + int (*update_reg)(const struct acpm_handle *handle, 29 + unsigned int acpm_chan_id, u8 type, u8 reg, u8 chan, 30 + u8 value, u8 mask); 31 + }; 32 + 33 + struct acpm_ops { 34 + struct acpm_pmic_ops pmic_ops; 35 + }; 36 + 37 + /** 38 + * struct acpm_handle - Reference to an initialized protocol instance 39 + * @ops: supported operations 40 + */ 41 + struct acpm_handle { 42 + struct acpm_ops ops; 43 + }; 44 + 45 + struct device; 46 + 47 + const struct acpm_handle *devm_acpm_get_by_phandle(struct device *dev, 48 + const char *property); 49 + #endif /* __EXYNOS_ACPM_PROTOCOL_H */
+1 -1
include/linux/soc/apple/rtkit.h
··· 56 56 * context. 57 57 */ 58 58 struct apple_rtkit_ops { 59 - void (*crashed)(void *cookie); 59 + void (*crashed)(void *cookie, const void *crashlog, size_t crashlog_size); 60 60 void (*recv_message)(void *cookie, u8 endpoint, u64 message); 61 61 bool (*recv_message_early)(void *cookie, u8 endpoint, u64 message); 62 62 int (*shmem_setup)(void *cookie, struct apple_rtkit_shmem *bfr);
+2 -1
include/soc/qcom/ice.h
··· 33 33 const u8 crypto_key[], u8 data_unit_size, 34 34 int slot); 35 35 int qcom_ice_evict_key(struct qcom_ice *ice, int slot); 36 - struct qcom_ice *of_qcom_ice_get(struct device *dev); 36 + struct qcom_ice *devm_of_qcom_ice_get(struct device *dev); 37 + 37 38 #endif /* __QCOM_ICE_H__ */
+1 -1
include/soc/tegra/bpmp-abi.h
··· 3755 3755 * @defgroup bpmp_pwr_limit_type PWR_LIMIT TYPEs 3756 3756 * @{ 3757 3757 */ 3758 - /** @brief Limit value specifies traget cap */ 3758 + /** @brief Limit value specifies target cap */ 3759 3759 #define PWR_LIMIT_TYPE_TARGET_CAP 0U 3760 3760 /** @brief Limit value specifies maximum possible target cap */ 3761 3761 #define PWR_LIMIT_TYPE_BOUND_MAX 1U