Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc

Pull ARM SoC-related driver updates from Olof Johansson:
"Various driver updates for platforms and a couple of the small driver
subsystems we merge through our tree:

- A driver for SCU (system control) on NXP i.MX8QXP

- Qualcomm Always-on Subsystem messaging driver (AOSS QMP)

- Qualcomm PM support for MSM8998

- Support for a newer version of DRAM PHY driver for Broadcom (DPFE)

- Reset controller support for Bitmain BM1880

- TI SCI (System Control Interface) support for CPU control on AM654
processors

- More TI sysc refactoring and rework"

* tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (84 commits)
reset: remove redundant null check on pointer dev
soc: rockchip: work around clang warning
dt-bindings: reset: imx7: Fix the spelling of 'indices'
soc: imx: Add i.MX8MN SoC driver support
soc: aspeed: lpc-ctrl: Fix probe error handling
soc: qcom: geni: Add support for ACPI
firmware: ti_sci: Fix gcc unused-but-set-variable warning
firmware: ti_sci: Use the correct style for SPDX License Identifier
soc: imx8: Use existing of_root directly
soc: imx8: Fix potential kernel dump in error path
firmware/psci: psci_checker: Park kthreads before stopping them
memory: move jedec_ddr.h from include/memory to drivers/memory/
memory: move jedec_ddr_data.c from lib/ to drivers/memory/
MAINTAINERS: Remove myself as qcom maintainer
soc: aspeed: lpc-ctrl: make parameter optional
soc: qcom: apr: Don't use reg for domain id
soc: qcom: fix QCOM_AOSS_QMP dependency and build errors
memory: tegra: Fix -Wunused-const-variable
firmware: tegra: Early resume BPMP
soc/tegra: Select pinctrl for Tegra194
...

+4622 -497
+1 -1
Documentation/devicetree/bindings/arm/arm,scmi.txt
··· 6 6 and performance functions. 7 7 8 8 This binding is intended to define the interface the firmware implementing 9 - the SCMI as described in ARM document number ARM DUI 0922B ("ARM System Control 9 + the SCMI as described in ARM document number ARM DEN 0056A ("ARM System Control 10 10 and Management Interface Platform Design Document")[0] provide for OSPM in 11 11 the device tree. 12 12
+11
Documentation/devicetree/bindings/misc/fsl,dpaa2-console.txt
··· 1 + DPAA2 console support 2 + 3 + Required properties: 4 + 5 + - compatible 6 + Value type: <string> 7 + Definition: Must be "fsl,dpaa2-console". 8 + - reg 9 + Value type: <prop-encoded-array> 10 + Definition: A standard property. Specifies the region where the MCFBA 11 + (MC firmware base address) register can be found.
+2
Documentation/devicetree/bindings/power/qcom,rpmpd.txt
··· 6 6 Required Properties: 7 7 - compatible: Should be one of the following 8 8 * qcom,msm8996-rpmpd: RPM Power domain for the msm8996 family of SoC 9 + * qcom,msm8998-rpmpd: RPM Power domain for the msm8998 family of SoC 10 + * qcom,qcs404-rpmpd: RPM Power domain for the qcs404 family of SoC 9 11 * qcom,sdm845-rpmhpd: RPMh Power domain for the sdm845 family of SoC 10 12 - #power-domain-cells: number of cells in Power domain specifier 11 13 must be 1.
+18
Documentation/devicetree/bindings/reset/bitmain,bm1880-reset.txt
··· 1 + Bitmain BM1880 SoC Reset Controller 2 + =================================== 3 + 4 + Please also refer to reset.txt in this directory for common reset 5 + controller binding usage. 6 + 7 + Required properties: 8 + - compatible: Should be "bitmain,bm1880-reset" 9 + - reg: Offset and length of reset controller space in SCTRL. 10 + - #reset-cells: Must be 1. 11 + 12 + Example: 13 + 14 + rst: reset-controller@c00 { 15 + compatible = "bitmain,bm1880-reset"; 16 + reg = <0xc00 0x8>; 17 + #reset-cells = <1>; 18 + };
+1 -1
Documentation/devicetree/bindings/reset/fsl,imx7-src.txt
··· 45 45 }; 46 46 47 47 48 - For list of all valid reset indicies see 48 + For list of all valid reset indices see 49 49 <dt-bindings/reset/imx7-reset.h> for i.MX7 and 50 50 <dt-bindings/reset/imx8mq-reset.h> for i.MX8MQ
+7 -3
Documentation/devicetree/bindings/soc/amlogic/amlogic,canvas.txt
··· 2 2 ================================ 3 3 4 4 A canvas is a collection of metadata that describes a pixel buffer. 5 - Those metadata include: width, height, phyaddr, wrapping, block mode 6 - and endianness. 5 + Those metadata include: width, height, phyaddr, wrapping and block mode. 6 + Starting with GXBB the endianness can also be described. 7 7 8 8 Many IPs within Amlogic SoCs rely on canvas indexes to read/write pixel data 9 9 rather than use the phy addresses directly. For instance, this is the case for ··· 18 18 -------------------------- 19 19 20 20 Required properties: 21 - - compatible: "amlogic,canvas" 21 + - compatible: has to be one of: 22 + - "amlogic,meson8-canvas", "amlogic,canvas" on Meson8 23 + - "amlogic,meson8b-canvas", "amlogic,canvas" on Meson8b 24 + - "amlogic,meson8m2-canvas", "amlogic,canvas" on Meson8m2 25 + - "amlogic,canvas" on GXBB and newer 22 26 - reg: Base physical address and size of the canvas registers. 23 27 24 28 Example:
+81
Documentation/devicetree/bindings/soc/qcom/qcom,aoss-qmp.txt
··· 1 + Qualcomm Always-On Subsystem side channel binding 2 + 3 + This binding describes the hardware component responsible for side channel 4 + requests to the always-on subsystem (AOSS), used for certain power management 5 + requests that are not handled by the standard RPMh interface. Each client in the 6 + SoC has its own block of message RAM and IRQ for communication with the AOSS. 7 + The protocol used to communicate in the message RAM is known as Qualcomm 8 + Messaging Protocol (QMP). 9 + 10 + The AOSS side channel exposes control over a set of resources, used to control 11 + a set of debug related clocks and to affect the low power state of resources 12 + related to the secondary subsystems. These resources are exposed as a set of 13 + power-domains. 14 + 15 + - compatible: 16 + Usage: required 17 + Value type: <string> 18 + Definition: must be "qcom,sdm845-aoss-qmp" 19 + 20 + - reg: 21 + Usage: required 22 + Value type: <prop-encoded-array> 23 + Definition: the base address and size of the message RAM for this 24 + client's communication with the AOSS 25 + 26 + - interrupts: 27 + Usage: required 28 + Value type: <prop-encoded-array> 29 + Definition: should specify the AOSS message IRQ for this client 30 + 31 + - mboxes: 32 + Usage: required 33 + Value type: <prop-encoded-array> 34 + Definition: reference to the mailbox representing the outgoing doorbell 35 + in APCS for this client, as described in mailbox/mailbox.txt 36 + 37 + - #clock-cells: 38 + Usage: optional 39 + Value type: <u32> 40 + Definition: must be 0 41 + The single clock represents the QDSS clock. 42 + 43 + - #power-domain-cells: 44 + Usage: optional 45 + Value type: <u32> 46 + Definition: must be 1 47 + The provided power-domains are: 48 + CDSP state (0), LPASS state (1), modem state (2), SLPI 49 + state (3), SPSS state (4) and Venus state (5). 
50 + 51 + = SUBNODES 52 + The AOSS side channel also provides the controls for three cooling devices; 53 + these are expressed as subnodes of the QMP node. The name of the node is used 54 + to identify the resource and must therefore be "cx", "mx" or "ebi". 55 + 56 + - #cooling-cells: 57 + Usage: optional 58 + Value type: <u32> 59 + Definition: must be 2 60 + 61 + = EXAMPLE 62 + 63 + The following example represents the AOSS side-channel message RAM and the 64 + mechanism exposing the power-domains, as found in SDM845. 65 + 66 + aoss_qmp: qmp@c300000 { 67 + compatible = "qcom,sdm845-aoss-qmp"; 68 + reg = <0x0c300000 0x100000>; 69 + interrupts = <GIC_SPI 389 IRQ_TYPE_EDGE_RISING>; 70 + mboxes = <&apss_shared 0>; 71 + 72 + #power-domain-cells = <1>; 73 + 74 + cx_cdev: cx { 75 + #cooling-cells = <2>; 76 + }; 77 + 78 + mx_cdev: mx { 79 + #cooling-cells = <2>; 80 + }; 81 + };
+3 -3
Documentation/devicetree/bindings/soc/qcom/qcom,apr.txt
··· 9 9 Value type: <stringlist> 10 10 Definition: must be "qcom,apr-v<VERSION-NUMBER>", example "qcom,apr-v2" 11 11 12 - - reg 12 + - qcom,apr-domain 13 13 Usage: required 14 14 Value type: <u32> 15 15 Definition: Destination processor ID. ··· 49 49 The following example represents a QDSP based sound card on a MSM8996 device 50 50 which uses apr as communication between Apps and QDSP. 51 51 52 - apr@4 { 52 + apr { 53 53 compatible = "qcom,apr-v2"; 54 - reg = <APR_DOMAIN_ADSP>; 54 + qcom,apr-domain = <APR_DOMAIN_ADSP>; 55 55 56 56 q6core@3 { 57 57 compatible = "qcom,q6core";
+6 -3
MAINTAINERS
··· 2091 2091 2092 2092 ARM/QUALCOMM SUPPORT 2093 2093 M: Andy Gross <agross@kernel.org> 2094 - M: David Brown <david.brown@linaro.org> 2095 2094 L: linux-arm-msm@vger.kernel.org 2096 2095 S: Maintained 2097 2096 F: Documentation/devicetree/bindings/soc/qcom/ ··· 2112 2113 F: drivers/i2c/busses/i2c-qcom-geni.c 2113 2114 F: drivers/mfd/ssbi.c 2114 2115 F: drivers/mmc/host/mmci_qcom* 2115 - F: drivers/mmc/host/sdhci_msm.c 2116 + F: drivers/mmc/host/sdhci-msm.c 2116 2117 F: drivers/pci/controller/dwc/pcie-qcom.c 2117 2118 F: drivers/phy/qualcomm/ 2118 2119 F: drivers/power/*/msm* ··· 6526 6527 L: linuxppc-dev@lists.ozlabs.org 6527 6528 L: linux-arm-kernel@lists.infradead.org 6528 6529 S: Maintained 6530 + F: Documentation/devicetree/bindings/misc/fsl,dpaa2-console.txt 6529 6531 F: Documentation/devicetree/bindings/soc/fsl/ 6530 6532 F: drivers/soc/fsl/ 6531 6533 F: include/linux/fsl/ ··· 11907 11907 11908 11908 OP-TEE DRIVER 11909 11909 M: Jens Wiklander <jens.wiklander@linaro.org> 11910 + L: tee-dev@lists.linaro.org 11910 11911 S: Maintained 11911 11912 F: drivers/tee/optee/ 11912 11913 11913 11914 OP-TEE RANDOM NUMBER GENERATOR (RNG) DRIVER 11914 11915 M: Sumit Garg <sumit.garg@linaro.org> 11916 + L: tee-dev@lists.linaro.org 11915 11917 S: Maintained 11916 11918 F: drivers/char/hw_random/optee-rng.c 11917 11919 ··· 13297 13295 L: netdev@vger.kernel.org 13298 13296 S: Maintained 13299 13297 F: drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c 13300 - F: Documentation/devicetree/bindings/net/qcom,dwmac.txt 13298 + F: Documentation/devicetree/bindings/net/qcom,ethqos.txt 13301 13299 13302 13300 QUALCOMM GENERIC INTERFACE I2C DRIVER 13303 13301 M: Alok Chauhan <alokc@codeaurora.org> ··· 15747 15745 15748 15746 TEE SUBSYSTEM 15749 15747 M: Jens Wiklander <jens.wiklander@linaro.org> 15748 + L: tee-dev@lists.linaro.org 15750 15749 S: Maintained 15751 15750 F: include/linux/tee_drv.h 15752 15751 F: include/uapi/linux/tee.h
+5 -34
arch/arm/mach-omap2/omap_hwmod.c
··· 3442 3442 * @dev: struct device 3443 3443 * @oh: module 3444 3444 * @sysc_fields: sysc register bits 3445 + * @clockdomain: clockdomain 3445 3446 * @rev_offs: revision register offset 3446 3447 * @sysc_offs: sysconfig register offset 3447 3448 * @syss_offs: sysstatus register offset ··· 3454 3453 static int omap_hwmod_allocate_module(struct device *dev, struct omap_hwmod *oh, 3455 3454 const struct ti_sysc_module_data *data, 3456 3455 struct sysc_regbits *sysc_fields, 3456 + struct clockdomain *clkdm, 3457 3457 s32 rev_offs, s32 sysc_offs, 3458 3458 s32 syss_offs, u32 sysc_flags, 3459 3459 u32 idlemodes) ··· 3462 3460 struct omap_hwmod_class_sysconfig *sysc; 3463 3461 struct omap_hwmod_class *class = NULL; 3464 3462 struct omap_hwmod_ocp_if *oi = NULL; 3465 - struct clockdomain *clkdm = NULL; 3466 - struct clk *clk = NULL; 3467 3463 void __iomem *regs = NULL; 3468 3464 unsigned long flags; 3469 3465 ··· 3506 3506 */ 3507 3507 oi->slave = oh; 3508 3508 oi->user = OCP_USER_MPU | OCP_USER_SDMA; 3509 - } 3510 - 3511 - if (!oh->_clk) { 3512 - struct clk_hw_omap *hwclk; 3513 - 3514 - clk = of_clk_get_by_name(dev->of_node, "fck"); 3515 - if (!IS_ERR(clk)) 3516 - clk_prepare(clk); 3517 - else 3518 - clk = NULL; 3519 - 3520 - /* 3521 - * Populate clockdomain based on dts clock. It is needed for 3522 - * clkdm_deny_idle() and clkdm_allow_idle() until we have have 3523 - * interconnect driver and reset driver capable of blocking 3524 - * clockdomain idle during reset, enable and idle. 3525 - */ 3526 - if (clk) { 3527 - hwclk = to_clk_hw_omap(__clk_get_hw(clk)); 3528 - if (hwclk && hwclk->clkdm_name) 3529 - clkdm = clkdm_lookup(hwclk->clkdm_name); 3530 - } 3531 - 3532 - /* 3533 - * Note that we assume interconnect driver manages the clocks 3534 - * and do not need to populate oh->_clk for dynamically 3535 - * allocated modules. 
3536 - */ 3537 - clk_unprepare(clk); 3538 - clk_put(clk); 3539 3509 } 3540 3510 3541 3511 spin_lock_irqsave(&oh->_lock, flags); ··· 3593 3623 u32 sysc_flags, idlemodes; 3594 3624 int error; 3595 3625 3596 - if (!dev || !data) 3626 + if (!dev || !data || !data->name || !cookie) 3597 3627 return -EINVAL; 3598 3628 3599 3629 oh = _lookup(data->name); ··· 3664 3694 return error; 3665 3695 3666 3696 return omap_hwmod_allocate_module(dev, oh, data, sysc_fields, 3667 - rev_offs, sysc_offs, syss_offs, 3697 + cookie->clkdm, rev_offs, 3698 + sysc_offs, syss_offs, 3668 3699 sysc_flags, idlemodes); 3669 3700 } 3670 3701
+60
arch/arm/mach-omap2/pdata-quirks.c
··· 26 26 #include <linux/platform_data/wkup_m3.h> 27 27 #include <linux/platform_data/asoc-ti-mcbsp.h> 28 28 29 + #include "clockdomain.h" 29 30 #include "common.h" 30 31 #include "common-board-devices.h" 31 32 #include "control.h" ··· 461 460 } 462 461 #endif 463 462 463 + static struct clockdomain *ti_sysc_find_one_clockdomain(struct clk *clk) 464 + { 465 + struct clockdomain *clkdm = NULL; 466 + struct clk_hw_omap *hwclk; 467 + 468 + hwclk = to_clk_hw_omap(__clk_get_hw(clk)); 469 + if (hwclk && hwclk->clkdm_name) 470 + clkdm = clkdm_lookup(hwclk->clkdm_name); 471 + 472 + return clkdm; 473 + } 474 + 475 + /** 476 + * ti_sysc_clkdm_init - find clockdomain based on clock 477 + * @fck: device functional clock 478 + * @ick: device interface clock 479 + * @dev: struct device 480 + * 481 + * Populate clockdomain based on clock. It is needed for 482 + * clkdm_deny_idle() and clkdm_allow_idle() for blocking clockdomain 483 + * clockdomain idle during reset, enable and idle. 484 + * 485 + * Note that we assume interconnect driver manages the clocks 486 + * and do not need to populate oh->_clk for dynamically 487 + * allocated modules. 
488 + */ 489 + static int ti_sysc_clkdm_init(struct device *dev, 490 + struct clk *fck, struct clk *ick, 491 + struct ti_sysc_cookie *cookie) 492 + { 493 + if (fck) 494 + cookie->clkdm = ti_sysc_find_one_clockdomain(fck); 495 + if (cookie->clkdm) 496 + return 0; 497 + if (ick) 498 + cookie->clkdm = ti_sysc_find_one_clockdomain(ick); 499 + if (cookie->clkdm) 500 + return 0; 501 + 502 + return -ENODEV; 503 + } 504 + 505 + static void ti_sysc_clkdm_deny_idle(struct device *dev, 506 + const struct ti_sysc_cookie *cookie) 507 + { 508 + if (cookie->clkdm) 509 + clkdm_deny_idle(cookie->clkdm); 510 + } 511 + 512 + static void ti_sysc_clkdm_allow_idle(struct device *dev, 513 + const struct ti_sysc_cookie *cookie) 514 + { 515 + if (cookie->clkdm) 516 + clkdm_allow_idle(cookie->clkdm); 517 + } 518 + 464 519 static int ti_sysc_enable_module(struct device *dev, 465 520 const struct ti_sysc_cookie *cookie) 466 521 { ··· 548 491 549 492 static struct ti_sysc_platform_data ti_sysc_pdata = { 550 493 .auxdata = omap_auxdata_lookup, 494 + .init_clockdomain = ti_sysc_clkdm_init, 495 + .clkdm_deny_idle = ti_sysc_clkdm_deny_idle, 496 + .clkdm_allow_idle = ti_sysc_clkdm_allow_idle, 551 497 .init_module = omap_hwmod_init_module, 552 498 .enable_module = ti_sysc_enable_module, 553 499 .idle_module = ti_sysc_idle_module,
+2 -2
drivers/bus/brcmstb_gisb.c
··· 399 399 &gisb_panic_notifier); 400 400 } 401 401 402 - dev_info(&pdev->dev, "registered mem: %p, irqs: %d, %d\n", 403 - gdev->base, timeout_irq, tea_irq); 402 + dev_info(&pdev->dev, "registered irqs: %d, %d\n", 403 + timeout_irq, tea_irq); 404 404 405 405 return 0; 406 406 }
+27 -3
drivers/bus/fsl-mc/dprc.c
··· 443 443 struct fsl_mc_command cmd = { 0 }; 444 444 struct dprc_cmd_get_obj_region *cmd_params; 445 445 struct dprc_rsp_get_obj_region *rsp_params; 446 + u16 major_ver, minor_ver; 446 447 int err; 447 448 448 449 /* prepare command */ 449 - cmd.header = mc_encode_cmd_header(DPRC_CMDID_GET_OBJ_REG, 450 - cmd_flags, token); 450 + err = dprc_get_api_version(mc_io, 0, 451 + &major_ver, 452 + &minor_ver); 453 + if (err) 454 + return err; 455 + 456 + /** 457 + * MC API version 6.3 introduced a new field to the region 458 + * descriptor: base_address. If the older API is in use then the base 459 + * address is set to zero to indicate it needs to be obtained elsewhere 460 + * (typically the device tree). 461 + */ 462 + if (major_ver > 6 || (major_ver == 6 && minor_ver >= 3)) 463 + cmd.header = 464 + mc_encode_cmd_header(DPRC_CMDID_GET_OBJ_REG_V2, 465 + cmd_flags, token); 466 + else 467 + cmd.header = 468 + mc_encode_cmd_header(DPRC_CMDID_GET_OBJ_REG, 469 + cmd_flags, token); 470 + 451 471 cmd_params = (struct dprc_cmd_get_obj_region *)cmd.params; 452 472 cmd_params->obj_id = cpu_to_le32(obj_id); 453 473 cmd_params->region_index = region_index; ··· 481 461 482 462 /* retrieve response parameters */ 483 463 rsp_params = (struct dprc_rsp_get_obj_region *)cmd.params; 484 - region_desc->base_offset = le64_to_cpu(rsp_params->base_addr); 464 + region_desc->base_offset = le64_to_cpu(rsp_params->base_offset); 485 465 region_desc->size = le32_to_cpu(rsp_params->size); 466 + if (major_ver > 6 || (major_ver == 6 && minor_ver >= 3)) 467 + region_desc->base_address = le64_to_cpu(rsp_params->base_addr); 468 + else 469 + region_desc->base_address = 0; 486 470 487 471 return 0; 488 472 }
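The gate in the hunk above ("MC API version 6.3 introduced a new field") is an ordinary major/minor comparison. A minimal stand-alone sketch of the predicate (the function name is ours, not the kernel's):

```c
#include <stdbool.h>
#include <stdint.h>

/* True when the reported firmware API version is at least
 * need_major.need_minor; the same test as the hunk's
 * `major_ver > 6 || (major_ver == 6 && minor_ver >= 3)`. */
static bool mc_api_at_least(uint16_t major, uint16_t minor,
			    uint16_t need_major, uint16_t need_minor)
{
	return major > need_major ||
	       (major == need_major && minor >= need_minor);
}
```

Note the minor comparison only applies when the majors are equal, which is what makes 7.0 pass and 6.2 fail against a 6.3 requirement.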
+13 -2
drivers/bus/fsl-mc/fsl-mc-bus.c
··· 487 487 "dprc_get_obj_region() failed: %d\n", error); 488 488 goto error_cleanup_regions; 489 489 } 490 - 491 - error = translate_mc_addr(mc_dev, mc_region_type, 490 + /* 491 + * Older MC only returned region offset and no base address 492 + * If base address is in the region_desc use it otherwise 493 + * revert to old mechanism 494 + */ 495 + if (region_desc.base_address) 496 + regions[i].start = region_desc.base_address + 497 + region_desc.base_offset; 498 + else 499 + error = translate_mc_addr(mc_dev, mc_region_type, 492 500 region_desc.base_offset, 493 501 &regions[i].start); 502 + 494 503 if (error < 0) { 495 504 dev_err(parent_dev, 496 505 "Invalid MC offset: %#x (for %s.%d\'s region %d)\n", ··· 513 504 regions[i].flags = IORESOURCE_IO; 514 505 if (region_desc.flags & DPRC_REGION_CACHEABLE) 515 506 regions[i].flags |= IORESOURCE_CACHEABLE; 507 + if (region_desc.flags & DPRC_REGION_SHAREABLE) 508 + regions[i].flags |= IORESOURCE_MEM; 516 509 } 517 510 518 511 mc_dev->regions = regions;
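The hunk above prefers the absolute base_address reported by newer MC firmware and only falls back to translating the bare offset otherwise. A simplified sketch of that selection, with translate_mc_addr() stubbed out (the struct and function names here are illustrative):

```c
#include <stdint.h>

/* Illustrative descriptor; field names follow the hunk above.
 * base_address is zero when the region came from pre-6.3 MC firmware. */
struct mc_region {
	uint64_t base_address;
	uint64_t base_offset;
};

/* Newer firmware: absolute base plus offset. Older firmware: only the
 * offset is known; the real driver translates it via translate_mc_addr(),
 * which this sketch stands in for by returning the raw offset. */
static uint64_t mc_region_start(const struct mc_region *d)
{
	if (d->base_address)
		return d->base_address + d->base_offset;
	return d->base_offset;
}
```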
+15 -2
drivers/bus/fsl-mc/fsl-mc-private.h
··· 79 79 80 80 /* DPRC command versioning */ 81 81 #define DPRC_CMD_BASE_VERSION 1 82 + #define DPRC_CMD_2ND_VERSION 2 82 83 #define DPRC_CMD_ID_OFFSET 4 83 84 84 85 #define DPRC_CMD(id) (((id) << DPRC_CMD_ID_OFFSET) | DPRC_CMD_BASE_VERSION) 86 + #define DPRC_CMD_V2(id) (((id) << DPRC_CMD_ID_OFFSET) | DPRC_CMD_2ND_VERSION) 85 87 86 88 /* DPRC command IDs */ 87 89 #define DPRC_CMDID_CLOSE DPRC_CMD(0x800) ··· 102 100 #define DPRC_CMDID_GET_OBJ_COUNT DPRC_CMD(0x159) 103 101 #define DPRC_CMDID_GET_OBJ DPRC_CMD(0x15A) 104 102 #define DPRC_CMDID_GET_OBJ_REG DPRC_CMD(0x15E) 103 + #define DPRC_CMDID_GET_OBJ_REG_V2 DPRC_CMD_V2(0x15E) 105 104 #define DPRC_CMDID_SET_OBJ_IRQ DPRC_CMD(0x15F) 106 105 107 106 struct dprc_cmd_open { ··· 202 199 /* response word 0 */ 203 200 __le64 pad; 204 201 /* response word 1 */ 205 - __le64 base_addr; 202 + __le64 base_offset; 206 203 /* response word 2 */ 207 204 __le32 size; 205 + __le32 pad2; 206 + /* response word 3 */ 207 + __le32 flags; 208 + __le32 pad3; 209 + /* response word 4 */ 210 + /* base_addr may be zero if older MC firmware is used */ 211 + __le64 base_addr; 208 212 }; 209 213 210 214 struct dprc_cmd_set_obj_irq { ··· 344 334 /* Region flags */ 345 335 /* Cacheable - Indicates that region should be mapped as cacheable */ 346 336 #define DPRC_REGION_CACHEABLE 0x00000001 337 + #define DPRC_REGION_SHAREABLE 0x00000002 347 338 348 339 /** 349 340 * enum dprc_region_type - Region type ··· 353 342 */ 354 343 enum dprc_region_type { 355 344 DPRC_REGION_TYPE_MC_PORTAL, 356 - DPRC_REGION_TYPE_QBMAN_PORTAL 345 + DPRC_REGION_TYPE_QBMAN_PORTAL, 346 + DPRC_REGION_TYPE_QBMAN_MEM_BACKED_PORTAL 357 347 }; 358 348 359 349 /** ··· 372 360 u32 size; 373 361 u32 flags; 374 362 enum dprc_region_type type; 363 + u64 base_address; 375 364 }; 376 365 377 366 int dprc_get_obj_region(struct fsl_mc_io *mc_io,
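The DPRC_CMD()/DPRC_CMD_V2() macros above pack the command ID and the command ABI version into a single command-header word. The same bit layout expressed as a small helper (a sketch for illustration; the kernel uses the macros directly):

```c
#include <stdint.h>

#define DPRC_CMD_BASE_VERSION 1
#define DPRC_CMD_2ND_VERSION  2
#define DPRC_CMD_ID_OFFSET    4

/* Command word = (id << 4) | version: the command ID sits in the upper
 * bits and the ABI version occupies the low nibble, matching the header. */
static uint16_t dprc_cmd(uint16_t id, uint16_t ver)
{
	return (uint16_t)((id << DPRC_CMD_ID_OFFSET) | ver);
}
```

This is why GET_OBJ_REG and GET_OBJ_REG_V2 can reuse the same ID (0x15E) while remaining distinct commands on the wire.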
+379 -75
drivers/bus/ti-sysc.c
··· 71 71 * @name: name if available 72 72 * @revision: interconnect target module revision 73 73 * @needs_resume: runtime resume needed on resume from suspend 74 + * @clk_enable_quirk: module specific clock enable quirk 75 + * @clk_disable_quirk: module specific clock disable quirk 76 + * @reset_done_quirk: module specific reset done quirk 74 77 */ 75 78 struct sysc { 76 79 struct device *dev; ··· 92 89 struct ti_sysc_cookie cookie; 93 90 const char *name; 94 91 u32 revision; 95 - bool enabled; 96 - bool needs_resume; 97 - bool child_needs_resume; 92 + unsigned int enabled:1; 93 + unsigned int needs_resume:1; 94 + unsigned int child_needs_resume:1; 95 + unsigned int disable_on_idle:1; 98 96 struct delayed_work idle_work; 97 + void (*clk_enable_quirk)(struct sysc *sysc); 98 + void (*clk_disable_quirk)(struct sysc *sysc); 99 + void (*reset_done_quirk)(struct sysc *sysc); 99 100 }; 100 101 101 102 static void sysc_parse_dts_quirks(struct sysc *ddata, struct device_node *np, ··· 107 100 108 101 static void sysc_write(struct sysc *ddata, int offset, u32 value) 109 102 { 103 + if (ddata->cfg.quirks & SYSC_QUIRK_16BIT) { 104 + writew_relaxed(value & 0xffff, ddata->module_va + offset); 105 + 106 + /* Only i2c revision has LO and HI register with stride of 4 */ 107 + if (ddata->offsets[SYSC_REVISION] >= 0 && 108 + offset == ddata->offsets[SYSC_REVISION]) { 109 + u16 hi = value >> 16; 110 + 111 + writew_relaxed(hi, ddata->module_va + offset + 4); 112 + } 113 + 114 + return; 115 + } 116 + 110 117 writel_relaxed(value, ddata->module_va + offset); 111 118 } 112 119 ··· 130 109 u32 val; 131 110 132 111 val = readw_relaxed(ddata->module_va + offset); 133 - val |= (readw_relaxed(ddata->module_va + offset + 4) << 16); 112 + 113 + /* Only i2c revision has LO and HI register with stride of 4 */ 114 + if (ddata->offsets[SYSC_REVISION] >= 0 && 115 + offset == ddata->offsets[SYSC_REVISION]) { 116 + u16 tmp = readw_relaxed(ddata->module_va + offset + 4); 117 + 118 + val |= tmp << 16; 
119 + } 134 120 135 121 return val; 136 122 } ··· 153 125 static u32 sysc_read_revision(struct sysc *ddata) 154 126 { 155 127 int offset = ddata->offsets[SYSC_REVISION]; 128 + 129 + if (offset < 0) 130 + return 0; 131 + 132 + return sysc_read(ddata, offset); 133 + } 134 + 135 + static u32 sysc_read_sysconfig(struct sysc *ddata) 136 + { 137 + int offset = ddata->offsets[SYSC_SYSCONFIG]; 138 + 139 + if (offset < 0) 140 + return 0; 141 + 142 + return sysc_read(ddata, offset); 143 + } 144 + 145 + static u32 sysc_read_sysstatus(struct sysc *ddata) 146 + { 147 + int offset = ddata->offsets[SYSC_SYSSTATUS]; 156 148 157 149 if (offset < 0) 158 150 return 0; ··· 470 422 } 471 423 } 472 424 425 + static void sysc_clkdm_deny_idle(struct sysc *ddata) 426 + { 427 + struct ti_sysc_platform_data *pdata; 428 + 429 + if (ddata->legacy_mode) 430 + return; 431 + 432 + pdata = dev_get_platdata(ddata->dev); 433 + if (pdata && pdata->clkdm_deny_idle) 434 + pdata->clkdm_deny_idle(ddata->dev, &ddata->cookie); 435 + } 436 + 437 + static void sysc_clkdm_allow_idle(struct sysc *ddata) 438 + { 439 + struct ti_sysc_platform_data *pdata; 440 + 441 + if (ddata->legacy_mode) 442 + return; 443 + 444 + pdata = dev_get_platdata(ddata->dev); 445 + if (pdata && pdata->clkdm_allow_idle) 446 + pdata->clkdm_allow_idle(ddata->dev, &ddata->cookie); 447 + } 448 + 473 449 /** 474 450 * sysc_init_resets - init rstctrl reset line if configured 475 451 * @ddata: device driver data ··· 503 431 static int sysc_init_resets(struct sysc *ddata) 504 432 { 505 433 ddata->rsts = 506 - devm_reset_control_array_get_optional_exclusive(ddata->dev); 434 + devm_reset_control_get_optional(ddata->dev, "rstctrl"); 507 435 if (IS_ERR(ddata->rsts)) 508 436 return PTR_ERR(ddata->rsts); 509 437 ··· 766 694 ddata->offsets[SYSC_SYSCONFIG], 767 695 ddata->offsets[SYSC_SYSSTATUS]); 768 696 697 + if (size < SZ_1K) 698 + size = SZ_1K; 699 + 769 700 if ((size + sizeof(u32)) > ddata->module_size) 770 - return -EINVAL; 701 + size = 
ddata->module_size; 771 702 } 772 703 773 704 ddata->module_va = devm_ioremap(ddata->dev, ··· 869 794 } 870 795 871 796 #define SYSC_IDLE_MASK (SYSC_NR_IDLEMODES - 1) 797 + #define SYSC_CLOCACT_ICK 2 872 798 799 + /* Caller needs to manage sysc_clkdm_deny_idle() and sysc_clkdm_allow_idle() */ 873 800 static int sysc_enable_module(struct device *dev) 874 801 { 875 802 struct sysc *ddata; ··· 882 805 if (ddata->offsets[SYSC_SYSCONFIG] == -ENODEV) 883 806 return 0; 884 807 885 - /* 886 - * TODO: Need to prevent clockdomain autoidle? 887 - * See clkdm_deny_idle() in arch/mach-omap2/omap_hwmod.c 888 - */ 889 - 890 808 regbits = ddata->cap->regbits; 891 809 reg = sysc_read(ddata, ddata->offsets[SYSC_SYSCONFIG]); 810 + 811 + /* Set CLOCKACTIVITY, we only use it for ick */ 812 + if (regbits->clkact_shift >= 0 && 813 + (ddata->cfg.quirks & SYSC_QUIRK_USE_CLOCKACT || 814 + ddata->cfg.sysc_val & BIT(regbits->clkact_shift))) 815 + reg |= SYSC_CLOCACT_ICK << regbits->clkact_shift; 892 816 893 817 /* Set SIDLE mode */ 894 818 idlemodes = ddata->cfg.sidlemodes; 895 819 if (!idlemodes || regbits->sidle_shift < 0) 896 820 goto set_midle; 897 821 898 - best_mode = fls(ddata->cfg.sidlemodes) - 1; 899 - if (best_mode > SYSC_IDLE_MASK) { 900 - dev_err(dev, "%s: invalid sidlemode\n", __func__); 901 - return -EINVAL; 822 + if (ddata->cfg.quirks & (SYSC_QUIRK_SWSUP_SIDLE | 823 + SYSC_QUIRK_SWSUP_SIDLE_ACT)) { 824 + best_mode = SYSC_IDLE_NO; 825 + } else { 826 + best_mode = fls(ddata->cfg.sidlemodes) - 1; 827 + if (best_mode > SYSC_IDLE_MASK) { 828 + dev_err(dev, "%s: invalid sidlemode\n", __func__); 829 + return -EINVAL; 830 + } 831 + 832 + /* Set WAKEUP */ 833 + if (regbits->enwkup_shift >= 0 && 834 + ddata->cfg.sysc_val & BIT(regbits->enwkup_shift)) 835 + reg |= BIT(regbits->enwkup_shift); 902 836 } 903 837 904 838 reg &= ~(SYSC_IDLE_MASK << regbits->sidle_shift); ··· 920 832 /* Set MIDLE mode */ 921 833 idlemodes = ddata->cfg.midlemodes; 922 834 if (!idlemodes || regbits->midle_shift < 
0) 923 - return 0; 835 + goto set_autoidle; 924 836 925 837 best_mode = fls(ddata->cfg.midlemodes) - 1; 926 838 if (best_mode > SYSC_IDLE_MASK) { ··· 931 843 reg &= ~(SYSC_IDLE_MASK << regbits->midle_shift); 932 844 reg |= best_mode << regbits->midle_shift; 933 845 sysc_write(ddata, ddata->offsets[SYSC_SYSCONFIG], reg); 846 + 847 + set_autoidle: 848 + /* Autoidle bit must enabled separately if available */ 849 + if (regbits->autoidle_shift >= 0 && 850 + ddata->cfg.sysc_val & BIT(regbits->autoidle_shift)) { 851 + reg |= 1 << regbits->autoidle_shift; 852 + sysc_write(ddata, ddata->offsets[SYSC_SYSCONFIG], reg); 853 + } 934 854 935 855 return 0; 936 856 } ··· 957 861 return 0; 958 862 } 959 863 864 + /* Caller needs to manage sysc_clkdm_deny_idle() and sysc_clkdm_allow_idle() */ 960 865 static int sysc_disable_module(struct device *dev) 961 866 { 962 867 struct sysc *ddata; ··· 968 871 ddata = dev_get_drvdata(dev); 969 872 if (ddata->offsets[SYSC_SYSCONFIG] == -ENODEV) 970 873 return 0; 971 - 972 - /* 973 - * TODO: Need to prevent clockdomain autoidle? 
974 - * See clkdm_deny_idle() in arch/mach-omap2/omap_hwmod.c 975 - */ 976 874 977 875 regbits = ddata->cap->regbits; 978 876 reg = sysc_read(ddata, ddata->offsets[SYSC_SYSCONFIG]); ··· 993 901 if (!idlemodes || regbits->sidle_shift < 0) 994 902 return 0; 995 903 996 - ret = sysc_best_idle_mode(idlemodes, &best_mode); 997 - if (ret) { 998 - dev_err(dev, "%s: invalid sidlemode\n", __func__); 999 - return ret; 904 + if (ddata->cfg.quirks & SYSC_QUIRK_SWSUP_SIDLE) { 905 + best_mode = SYSC_IDLE_FORCE; 906 + } else { 907 + ret = sysc_best_idle_mode(idlemodes, &best_mode); 908 + if (ret) { 909 + dev_err(dev, "%s: invalid sidlemode\n", __func__); 910 + return ret; 911 + } 1000 912 } 1001 913 1002 914 reg &= ~(SYSC_IDLE_MASK << regbits->sidle_shift); 1003 915 reg |= best_mode << regbits->sidle_shift; 916 + if (regbits->autoidle_shift >= 0 && 917 + ddata->cfg.sysc_val & BIT(regbits->autoidle_shift)) 918 + reg |= 1 << regbits->autoidle_shift; 1004 919 sysc_write(ddata, ddata->offsets[SYSC_SYSCONFIG], reg); 1005 920 1006 921 return 0; ··· 1031 932 dev_err(dev, "%s: could not idle: %i\n", 1032 933 __func__, error); 1033 934 935 + if (ddata->disable_on_idle) 936 + reset_control_assert(ddata->rsts); 937 + 1034 938 return 0; 1035 939 } 1036 940 ··· 1042 940 { 1043 941 struct ti_sysc_platform_data *pdata; 1044 942 int error; 943 + 944 + if (ddata->disable_on_idle) 945 + reset_control_deassert(ddata->rsts); 1045 946 1046 947 pdata = dev_get_platdata(ddata->dev); 1047 948 if (!pdata) ··· 1071 966 if (!ddata->enabled) 1072 967 return 0; 1073 968 969 + sysc_clkdm_deny_idle(ddata); 970 + 1074 971 if (ddata->legacy_mode) { 1075 972 error = sysc_runtime_suspend_legacy(dev, ddata); 1076 973 if (error) 1077 - return error; 974 + goto err_allow_idle; 1078 975 } else { 1079 976 error = sysc_disable_module(dev); 1080 977 if (error) 1081 - return error; 978 + goto err_allow_idle; 1082 979 } 1083 980 1084 981 sysc_disable_main_clocks(ddata); ··· 1089 982 sysc_disable_opt_clocks(ddata); 1090 983 
1091 984 ddata->enabled = false; 985 + 986 + err_allow_idle: 987 + sysc_clkdm_allow_idle(ddata); 988 + 989 + if (ddata->disable_on_idle) 990 + reset_control_assert(ddata->rsts); 1092 991 1093 992 return error; 1094 993 } ··· 1109 996 if (ddata->enabled) 1110 997 return 0; 1111 998 999 + if (ddata->disable_on_idle) 1000 + reset_control_deassert(ddata->rsts); 1001 + 1002 + sysc_clkdm_deny_idle(ddata); 1003 + 1112 1004 if (sysc_opt_clks_needed(ddata)) { 1113 1005 error = sysc_enable_opt_clocks(ddata); 1114 1006 if (error) 1115 - return error; 1007 + goto err_allow_idle; 1116 1008 } 1117 1009 1118 1010 error = sysc_enable_main_clocks(ddata); ··· 1136 1018 1137 1019 ddata->enabled = true; 1138 1020 1021 + sysc_clkdm_allow_idle(ddata); 1022 + 1139 1023 return 0; 1140 1024 1141 1025 err_main_clocks: ··· 1145 1025 err_opt_clocks: 1146 1026 if (sysc_opt_clks_needed(ddata)) 1147 1027 sysc_disable_opt_clocks(ddata); 1028 + err_allow_idle: 1029 + sysc_clkdm_allow_idle(ddata); 1148 1030 1149 1031 return error; 1150 1032 } ··· 1228 1106 0), 1229 1107 SYSC_QUIRK("timer", 0, 0, 0x10, -1, 0x4fff1301, 0xffff00ff, 1230 1108 0), 1109 + SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x00000046, 0xffffffff, 1110 + SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE), 1231 1111 SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x00000052, 0xffffffff, 1232 - SYSC_QUIRK_SWSUP_SIDLE_ACT | SYSC_QUIRK_LEGACY_IDLE), 1112 + SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE), 1233 1113 /* Uarts on omap4 and later */ 1234 1114 SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x50411e03, 0xffff00ff, 1235 1115 SYSC_QUIRK_SWSUP_SIDLE_ACT | SYSC_QUIRK_LEGACY_IDLE), ··· 1242 1118 SYSC_QUIRK("mcpdm", 0x40132000, 0, 0x10, -1, 0x50000800, 0xffffffff, 1243 1119 SYSC_QUIRK_EXT_OPT_CLOCK | SYSC_QUIRK_NO_RESET_ON_INIT | 1244 1120 SYSC_QUIRK_SWSUP_SIDLE), 1121 + 1122 + /* Quirks that need to be set based on detected module */ 1123 + SYSC_QUIRK("hdq1w", 0, 0, 0x14, 0x18, 0x00000006, 0xffffffff, 1124 + SYSC_MODULE_QUIRK_HDQ1W), 1125 + 
SYSC_QUIRK("hdq1w", 0, 0, 0x14, 0x18, 0x0000000a, 0xffffffff, 1126 + SYSC_MODULE_QUIRK_HDQ1W), 1127 + SYSC_QUIRK("i2c", 0, 0, 0x20, 0x10, 0x00000036, 0x000000ff, 1128 + SYSC_MODULE_QUIRK_I2C), 1129 + SYSC_QUIRK("i2c", 0, 0, 0x20, 0x10, 0x0000003c, 0x000000ff, 1130 + SYSC_MODULE_QUIRK_I2C), 1131 + SYSC_QUIRK("i2c", 0, 0, 0x20, 0x10, 0x00000040, 0x000000ff, 1132 + SYSC_MODULE_QUIRK_I2C), 1133 + SYSC_QUIRK("i2c", 0, 0, 0x10, 0x90, 0x5040000a, 0xfffff0f0, 1134 + SYSC_MODULE_QUIRK_I2C), 1135 + SYSC_QUIRK("wdt", 0, 0, 0x10, 0x14, 0x502a0500, 0xfffff0f0, 1136 + SYSC_MODULE_QUIRK_WDT), 1245 1137 1246 1138 #ifdef DEBUG 1247 1139 SYSC_QUIRK("adc", 0, 0, 0x10, -1, 0x47300001, 0xffffffff, 0), ··· 1272 1132 SYSC_QUIRK("dwc3", 0, 0, 0x10, -1, 0x500a0200, 0xffffffff, 0), 1273 1133 SYSC_QUIRK("epwmss", 0, 0, 0x4, -1, 0x47400001, 0xffffffff, 0), 1274 1134 SYSC_QUIRK("gpu", 0, 0x1fc00, 0x1fc10, -1, 0, 0, 0), 1275 - SYSC_QUIRK("hdq1w", 0, 0, 0x14, 0x18, 0x00000006, 0xffffffff, 0), 1276 - SYSC_QUIRK("hdq1w", 0, 0, 0x14, 0x18, 0x0000000a, 0xffffffff, 0), 1277 1135 SYSC_QUIRK("hsi", 0, 0, 0x10, 0x14, 0x50043101, 0xffffffff, 0), 1278 1136 SYSC_QUIRK("iss", 0, 0, 0x10, -1, 0x40000101, 0xffffffff, 0), 1279 - SYSC_QUIRK("i2c", 0, 0, 0x10, 0x90, 0x5040000a, 0xfffff0f0, 0), 1280 1137 SYSC_QUIRK("lcdc", 0, 0, 0x54, -1, 0x4f201000, 0xffffffff, 0), 1281 1138 SYSC_QUIRK("mcasp", 0, 0, 0x4, -1, 0x44306302, 0xffffffff, 0), 1282 1139 SYSC_QUIRK("mcasp", 0, 0, 0x4, -1, 0x44307b02, 0xffffffff, 0), ··· 1309 1172 SYSC_QUIRK("usb_host_hs", 0, 0, 0x10, -1, 0x50700101, 0xffffffff, 0), 1310 1173 SYSC_QUIRK("usb_otg_hs", 0, 0x400, 0x404, 0x408, 0x00000050, 1311 1174 0xffffffff, 0), 1312 - SYSC_QUIRK("wdt", 0, 0, 0x10, 0x14, 0x502a0500, 0xfffff0f0, 0), 1313 1175 SYSC_QUIRK("vfpe", 0, 0, 0x104, -1, 0x4d001200, 0xffffffff, 0), 1314 1176 #endif 1315 1177 }; ··· 1381 1245 } 1382 1246 } 1383 1247 1248 + /* 1-wire needs module's internal clocks enabled for reset */ 1249 + static void 
sysc_clk_enable_quirk_hdq1w(struct sysc *ddata) 1250 + { 1251 + int offset = 0x0c; /* HDQ_CTRL_STATUS */ 1252 + u16 val; 1253 + 1254 + val = sysc_read(ddata, offset); 1255 + val |= BIT(5); 1256 + sysc_write(ddata, offset, val); 1257 + } 1258 + 1259 + /* I2C needs extra enable bit toggling for reset */ 1260 + static void sysc_clk_quirk_i2c(struct sysc *ddata, bool enable) 1261 + { 1262 + int offset; 1263 + u16 val; 1264 + 1265 + /* I2C_CON, omap2/3 is different from omap4 and later */ 1266 + if ((ddata->revision & 0xffffff00) == 0x001f0000) 1267 + offset = 0x24; 1268 + else 1269 + offset = 0xa4; 1270 + 1271 + /* I2C_EN */ 1272 + val = sysc_read(ddata, offset); 1273 + if (enable) 1274 + val |= BIT(15); 1275 + else 1276 + val &= ~BIT(15); 1277 + sysc_write(ddata, offset, val); 1278 + } 1279 + 1280 + static void sysc_clk_enable_quirk_i2c(struct sysc *ddata) 1281 + { 1282 + sysc_clk_quirk_i2c(ddata, true); 1283 + } 1284 + 1285 + static void sysc_clk_disable_quirk_i2c(struct sysc *ddata) 1286 + { 1287 + sysc_clk_quirk_i2c(ddata, false); 1288 + } 1289 + 1290 + /* Watchdog timer needs a disable sequence after reset */ 1291 + static void sysc_reset_done_quirk_wdt(struct sysc *ddata) 1292 + { 1293 + int wps, spr, error; 1294 + u32 val; 1295 + 1296 + wps = 0x34; 1297 + spr = 0x48; 1298 + 1299 + sysc_write(ddata, spr, 0xaaaa); 1300 + error = readl_poll_timeout(ddata->module_va + wps, val, 1301 + !(val & 0x10), 100, 1302 + MAX_MODULE_SOFTRESET_WAIT); 1303 + if (error) 1304 + dev_warn(ddata->dev, "wdt disable spr failed\n"); 1305 + 1306 + sysc_write(ddata, wps, 0x5555); 1307 + error = readl_poll_timeout(ddata->module_va + wps, val, 1308 + !(val & 0x10), 100, 1309 + MAX_MODULE_SOFTRESET_WAIT); 1310 + if (error) 1311 + dev_warn(ddata->dev, "wdt disable wps failed\n"); 1312 + } 1313 + 1314 + static void sysc_init_module_quirks(struct sysc *ddata) 1315 + { 1316 + if (ddata->legacy_mode || !ddata->name) 1317 + return; 1318 + 1319 + if (ddata->cfg.quirks & SYSC_MODULE_QUIRK_HDQ1W) { 
1320 + ddata->clk_enable_quirk = sysc_clk_enable_quirk_hdq1w; 1321 + 1322 + return; 1323 + } 1324 + 1325 + if (ddata->cfg.quirks & SYSC_MODULE_QUIRK_I2C) { 1326 + ddata->clk_enable_quirk = sysc_clk_enable_quirk_i2c; 1327 + ddata->clk_disable_quirk = sysc_clk_disable_quirk_i2c; 1328 + 1329 + return; 1330 + } 1331 + 1332 + if (ddata->cfg.quirks & SYSC_MODULE_QUIRK_WDT) 1333 + ddata->reset_done_quirk = sysc_reset_done_quirk_wdt; 1334 + } 1335 + 1336 + static int sysc_clockdomain_init(struct sysc *ddata) 1337 + { 1338 + struct ti_sysc_platform_data *pdata = dev_get_platdata(ddata->dev); 1339 + struct clk *fck = NULL, *ick = NULL; 1340 + int error; 1341 + 1342 + if (!pdata || !pdata->init_clockdomain) 1343 + return 0; 1344 + 1345 + switch (ddata->nr_clocks) { 1346 + case 2: 1347 + ick = ddata->clocks[SYSC_ICK]; 1348 + /* fallthrough */ 1349 + case 1: 1350 + fck = ddata->clocks[SYSC_FCK]; 1351 + break; 1352 + case 0: 1353 + return 0; 1354 + } 1355 + 1356 + error = pdata->init_clockdomain(ddata->dev, fck, ick, &ddata->cookie); 1357 + if (!error || error == -ENODEV) 1358 + return 0; 1359 + 1360 + return error; 1361 + } 1362 + 1384 1363 /* 1385 1364 * Note that pdata->init_module() typically does a reset first. 
After 1386 1365 * pdata->init_module() is done, PM runtime can be used for the interconnect ··· 1506 1255 struct ti_sysc_platform_data *pdata = dev_get_platdata(ddata->dev); 1507 1256 int error; 1508 1257 1509 - if (!ddata->legacy_mode || !pdata || !pdata->init_module) 1258 + if (!pdata || !pdata->init_module) 1510 1259 return 0; 1511 1260 1512 1261 error = pdata->init_module(ddata->dev, ddata->mdata, &ddata->cookie); ··· 1531 1280 */ 1532 1281 static int sysc_rstctrl_reset_deassert(struct sysc *ddata, bool reset) 1533 1282 { 1534 - int error; 1283 + int error, val; 1535 1284 1536 1285 if (!ddata->rsts) 1537 1286 return 0; ··· 1542 1291 return error; 1543 1292 } 1544 1293 1545 - return reset_control_deassert(ddata->rsts); 1294 + error = reset_control_deassert(ddata->rsts); 1295 + if (error == -EEXIST) 1296 + return 0; 1297 + 1298 + error = readx_poll_timeout(reset_control_status, ddata->rsts, val, 1299 + val == 0, 100, MAX_MODULE_SOFTRESET_WAIT); 1300 + 1301 + return error; 1546 1302 } 1547 1303 1304 + /* 1305 + * Note that the caller must ensure the interconnect target module is enabled 1306 + * before calling reset. Otherwise reset will not complete. 1307 + */ 1548 1308 static int sysc_reset(struct sysc *ddata) 1549 1309 { 1550 - int offset = ddata->offsets[SYSC_SYSCONFIG]; 1551 - int val; 1310 + int sysc_offset, syss_offset, sysc_val, rstval, quirks, error = 0; 1311 + u32 sysc_mask, syss_done; 1552 1312 1553 - if (ddata->legacy_mode || offset < 0 || 1313 + sysc_offset = ddata->offsets[SYSC_SYSCONFIG]; 1314 + syss_offset = ddata->offsets[SYSC_SYSSTATUS]; 1315 + quirks = ddata->cfg.quirks; 1316 + 1317 + if (ddata->legacy_mode || sysc_offset < 0 || 1318 + ddata->cap->regbits->srst_shift < 0 || 1554 1319 ddata->cfg.quirks & SYSC_QUIRK_NO_RESET_ON_INIT) 1555 1320 return 0; 1556 1321 1557 - /* 1558 - * Currently only support reset status in sysstatus. 
1559 - * Warn and return error in all other cases 1560 - */ 1561 - if (!ddata->cfg.syss_mask) { 1562 - dev_err(ddata->dev, "No ti,syss-mask. Reset failed\n"); 1563 - return -EINVAL; 1564 - } 1322 + sysc_mask = BIT(ddata->cap->regbits->srst_shift); 1565 1323 1566 - val = sysc_read(ddata, offset); 1567 - val |= (0x1 << ddata->cap->regbits->srst_shift); 1568 - sysc_write(ddata, offset, val); 1324 + if (ddata->cfg.quirks & SYSS_QUIRK_RESETDONE_INVERTED) 1325 + syss_done = 0; 1326 + else 1327 + syss_done = ddata->cfg.syss_mask; 1328 + 1329 + if (ddata->clk_disable_quirk) 1330 + ddata->clk_disable_quirk(ddata); 1331 + 1332 + sysc_val = sysc_read_sysconfig(ddata); 1333 + sysc_val |= sysc_mask; 1334 + sysc_write(ddata, sysc_offset, sysc_val); 1335 + 1336 + if (ddata->clk_enable_quirk) 1337 + ddata->clk_enable_quirk(ddata); 1569 1338 1570 1339 /* Poll on reset status */ 1571 - offset = ddata->offsets[SYSC_SYSSTATUS]; 1340 + if (syss_offset >= 0) { 1341 + error = readx_poll_timeout(sysc_read_sysstatus, ddata, rstval, 1342 + (rstval & ddata->cfg.syss_mask) == 1343 + syss_done, 1344 + 100, MAX_MODULE_SOFTRESET_WAIT); 1572 1345 1573 - return readl_poll_timeout(ddata->module_va + offset, val, 1574 - (val & ddata->cfg.syss_mask) == 0x0, 1575 - 100, MAX_MODULE_SOFTRESET_WAIT); 1346 + } else if (ddata->cfg.quirks & SYSC_QUIRK_RESET_STATUS) { 1347 + error = readx_poll_timeout(sysc_read_sysconfig, ddata, rstval, 1348 + !(rstval & sysc_mask), 1349 + 100, MAX_MODULE_SOFTRESET_WAIT); 1350 + } 1351 + 1352 + if (ddata->reset_done_quirk) 1353 + ddata->reset_done_quirk(ddata); 1354 + 1355 + return error; 1576 1356 } 1577 1357 1578 1358 /* ··· 1616 1334 { 1617 1335 int error = 0; 1618 1336 bool manage_clocks = true; 1619 - bool reset = true; 1620 1337 1621 - if (ddata->cfg.quirks & SYSC_QUIRK_NO_RESET_ON_INIT) 1622 - reset = false; 1623 - 1624 - error = sysc_rstctrl_reset_deassert(ddata, reset); 1338 + error = sysc_rstctrl_reset_deassert(ddata, false); 1625 1339 if (error) 1626 1340 return 
error; 1627 1341 ··· 1625 1347 (SYSC_QUIRK_NO_IDLE | SYSC_QUIRK_NO_IDLE_ON_INIT)) 1626 1348 manage_clocks = false; 1627 1349 1350 + error = sysc_clockdomain_init(ddata); 1351 + if (error) 1352 + return error; 1353 + 1628 1354 if (manage_clocks) { 1355 + sysc_clkdm_deny_idle(ddata); 1356 + 1629 1357 error = sysc_enable_opt_clocks(ddata); 1630 1358 if (error) 1631 1359 return error; ··· 1641 1357 goto err_opt_clocks; 1642 1358 } 1643 1359 1360 + if (!(ddata->cfg.quirks & SYSC_QUIRK_NO_RESET_ON_INIT)) { 1361 + error = sysc_rstctrl_reset_deassert(ddata, true); 1362 + if (error) 1363 + goto err_main_clocks; 1364 + } 1365 + 1644 1366 ddata->revision = sysc_read_revision(ddata); 1645 1367 sysc_init_revision_quirks(ddata); 1368 + sysc_init_module_quirks(ddata); 1646 1369 1647 - error = sysc_legacy_init(ddata); 1648 - if (error) 1649 - goto err_main_clocks; 1370 + if (ddata->legacy_mode) { 1371 + error = sysc_legacy_init(ddata); 1372 + if (error) 1373 + goto err_main_clocks; 1374 + } 1375 + 1376 + if (!ddata->legacy_mode && manage_clocks) { 1377 + error = sysc_enable_module(ddata->dev); 1378 + if (error) 1379 + goto err_main_clocks; 1380 + } 1650 1381 1651 1382 error = sysc_reset(ddata); 1652 1383 if (error) 1653 1384 dev_err(ddata->dev, "Reset failed with %d\n", error); 1654 1385 1386 + if (!ddata->legacy_mode && manage_clocks) 1387 + sysc_disable_module(ddata->dev); 1388 + 1655 1389 err_main_clocks: 1656 1390 if (manage_clocks) 1657 1391 sysc_disable_main_clocks(ddata); 1658 1392 err_opt_clocks: 1659 - if (manage_clocks) 1393 + if (manage_clocks) { 1660 1394 sysc_disable_opt_clocks(ddata); 1395 + sysc_clkdm_allow_idle(ddata); 1396 + } 1661 1397 1662 1398 return error; 1663 1399 } ··· 1967 1663 */ 1968 1664 static void sysc_legacy_idle_quirk(struct sysc *ddata, struct device *child) 1969 1665 { 1970 - if (!ddata->legacy_mode) 1971 - return; 1972 - 1973 1666 if (ddata->cfg.quirks & SYSC_QUIRK_LEGACY_IDLE) 1974 1667 dev_pm_domain_set(child, &sysc_child_pm_domain); 1975 1668 
} ··· 2306 2005 .type = TI_SYSC_DRA7_MCAN, 2307 2006 .sysc_mask = SYSC_DRA7_MCAN_ENAWAKEUP | SYSC_OMAP4_SOFTRESET, 2308 2007 .regbits = &sysc_regbits_dra7_mcan, 2008 + .mod_quirks = SYSS_QUIRK_RESETDONE_INVERTED, 2309 2009 }; 2310 2010 2311 2011 static int sysc_init_pdata(struct sysc *ddata) ··· 2314 2012 struct ti_sysc_platform_data *pdata = dev_get_platdata(ddata->dev); 2315 2013 struct ti_sysc_module_data *mdata; 2316 2014 2317 - if (!pdata || !ddata->legacy_mode) 2015 + if (!pdata) 2318 2016 return 0; 2319 2017 2320 2018 mdata = devm_kzalloc(ddata->dev, sizeof(*mdata), GFP_KERNEL); 2321 2019 if (!mdata) 2322 2020 return -ENOMEM; 2323 2021 2324 - mdata->name = ddata->legacy_mode; 2325 - mdata->module_pa = ddata->module_pa; 2326 - mdata->module_size = ddata->module_size; 2327 - mdata->offsets = ddata->offsets; 2328 - mdata->nr_offsets = SYSC_MAX_REGS; 2329 - mdata->cap = ddata->cap; 2330 - mdata->cfg = &ddata->cfg; 2022 + if (ddata->legacy_mode) { 2023 + mdata->name = ddata->legacy_mode; 2024 + mdata->module_pa = ddata->module_pa; 2025 + mdata->module_size = ddata->module_size; 2026 + mdata->offsets = ddata->offsets; 2027 + mdata->nr_offsets = SYSC_MAX_REGS; 2028 + mdata->cap = ddata->cap; 2029 + mdata->cfg = &ddata->cfg; 2030 + } 2331 2031 2332 2032 ddata->mdata = mdata; 2333 2033 ··· 2449 2145 } 2450 2146 2451 2147 if (!of_get_available_child_count(ddata->dev->of_node)) 2452 - reset_control_assert(ddata->rsts); 2148 + ddata->disable_on_idle = true; 2453 2149 2454 2150 return 0; 2455 2151
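The `sysc_reset()` rework above polls either SYSSTATUS or SYSCONFIG for reset completion, and `SYSS_QUIRK_RESETDONE_INVERTED` flips what "done" means (for dra7 m_can, done is the mask bits reading back as 0). The polling shape is the kernel's `readx_poll_timeout()` pattern; here is a minimal userspace sketch of that pattern with a fake status accessor standing in for the hardware (all names here are illustrative, not the kernel API):

```c
#include <stdint.h>

/* Fake module: returns "not done" for a few reads to simulate settling. */
struct fake_sysc {
	int reads_until_done;
	uint32_t done_value;	/* what SYSSTATUS reads once reset completes */
};

static uint32_t read_sysstatus(struct fake_sysc *d)
{
	if (d->reads_until_done > 0) {
		d->reads_until_done--;
		return ~d->done_value;	/* still resetting */
	}
	return d->done_value;
}

/*
 * Re-read status until (status & mask) == done or the budget expires.
 * With the RESETDONE_INVERTED quirk, the caller passes done == 0;
 * otherwise done == syss_mask, matching sysc_reset() above.
 */
static int poll_reset_done(struct fake_sysc *d, uint32_t mask,
			   uint32_t done, int max_tries)
{
	while (max_tries--) {
		if ((read_sysstatus(d) & mask) == done)
			return 0;
	}
	return -1;	/* readx_poll_timeout() would return -ETIMEDOUT */
}
```

The kernel version also sleeps between reads and uses a time budget rather than an iteration count, but the done-condition handling, including the inverted case, is the same.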
+2
drivers/firmware/arm_scmi/clock.c
··· 185 185 if (rate_discrete) 186 186 clk->list.num_rates = tot_rate_cnt; 187 187 188 + clk->rate_discrete = rate_discrete; 189 + 188 190 err: 189 191 scmi_xfer_put(handle, t); 190 192 return ret;
+8 -2
drivers/firmware/arm_scmi/sensors.c
··· 30 30 __le32 id; 31 31 __le32 attributes_low; 32 32 #define SUPPORTS_ASYNC_READ(x) ((x) & BIT(31)) 33 - #define NUM_TRIP_POINTS(x) (((x) >> 4) & 0xff) 33 + #define NUM_TRIP_POINTS(x) ((x) & 0xff) 34 34 __le32 attributes_high; 35 35 #define SENSOR_TYPE(x) ((x) & 0xff) 36 - #define SENSOR_SCALE(x) (((x) >> 11) & 0x3f) 36 + #define SENSOR_SCALE(x) (((x) >> 11) & 0x1f) 37 + #define SENSOR_SCALE_SIGN BIT(4) 38 + #define SENSOR_SCALE_EXTEND GENMASK(7, 5) 37 39 #define SENSOR_UPDATE_SCALE(x) (((x) >> 22) & 0x1f) 38 40 #define SENSOR_UPDATE_BASE(x) (((x) >> 27) & 0x1f) 39 41 u8 name[SCMI_MAX_STR_SIZE]; ··· 142 140 s = &si->sensors[desc_index + cnt]; 143 141 s->id = le32_to_cpu(buf->desc[cnt].id); 144 142 s->type = SENSOR_TYPE(attrh); 143 + s->scale = SENSOR_SCALE(attrh); 144 + /* Sign extend to a full s8 */ 145 + if (s->scale & SENSOR_SCALE_SIGN) 146 + s->scale |= SENSOR_SCALE_EXTEND; 145 147 strlcpy(s->name, buf->desc[cnt].name, SCMI_MAX_STR_SIZE); 146 148 } 147 149
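The sensors.c fix narrows `SENSOR_SCALE` from 6 to 5 bits and then sign-extends the extracted value into a full `s8`, since the SCMI sensor scale is a 5-bit two's-complement exponent. The sign-extension trick is self-contained enough to show standalone (a sketch; the macros mirror the patch, the helper function name is invented):

```c
#include <stdint.h>

/* 5-bit two's-complement scale packed at bits [15:11] of attributes_high */
#define SENSOR_SCALE(x)		(((x) >> 11) & 0x1f)
#define SENSOR_SCALE_SIGN	(1u << 4)	/* sign bit of the 5-bit field */
#define SENSOR_SCALE_EXTEND	(0x7u << 5)	/* GENMASK(7, 5) */

/* Sign-extend the 5-bit scale field into a full signed 8-bit value. */
static int8_t sensor_scale_to_s8(uint32_t attrh)
{
	uint8_t scale = SENSOR_SCALE(attrh);

	if (scale & SENSOR_SCALE_SIGN)
		scale |= SENSOR_SCALE_EXTEND;	/* fill the upper bits */

	return (int8_t)scale;
}
```

For example, a raw field of `0b11110` (30) is -2 in 5-bit two's complement; OR-ing in bits [7:5] yields `0xfe`, which reads back as -2 as an `int8_t`. Without the extension step, negative scales would be misread as large positive exponents.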
+6 -4
drivers/firmware/psci/psci_checker.c
··· 359 359 for (;;) { 360 360 /* Needs to be set first to avoid missing a wakeup. */ 361 361 set_current_state(TASK_INTERRUPTIBLE); 362 - if (kthread_should_stop()) { 363 - __set_current_state(TASK_RUNNING); 362 + if (kthread_should_park()) 364 363 break; 365 - } 366 364 schedule(); 367 365 } 368 366 369 367 pr_info("CPU %d suspend test results: success %d, shallow states %d, errors %d\n", 370 368 cpu, nb_suspend, nb_shallow_sleep, nb_err); 369 + 370 + kthread_parkme(); 371 371 372 372 return nb_err; 373 373 } ··· 433 433 434 434 435 435 /* Stop and destroy all threads, get return status. */ 436 - for (i = 0; i < nb_threads; ++i) 436 + for (i = 0; i < nb_threads; ++i) { 437 + err += kthread_park(threads[i]); 437 438 err += kthread_stop(threads[i]); 439 + } 438 440 out: 439 441 cpuidle_resume_and_unlock(); 440 442 kfree(threads);
+3 -1
drivers/firmware/tegra/bpmp.c
··· 803 803 return 0; 804 804 } 805 805 806 - static SIMPLE_DEV_PM_OPS(tegra_bpmp_pm_ops, NULL, tegra_bpmp_resume); 806 + static const struct dev_pm_ops tegra_bpmp_pm_ops = { 807 + .resume_early = tegra_bpmp_resume, 808 + }; 807 809 808 810 #if IS_ENABLED(CONFIG_ARCH_TEGRA_186_SOC) || \ 809 811 IS_ENABLED(CONFIG_ARCH_TEGRA_194_SOC)
+852 -7
drivers/firmware/ti_sci.c
··· 466 466 struct ti_sci_xfer *xfer; 467 467 int ret; 468 468 469 - /* No need to setup flags since it is expected to respond */ 470 469 xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_VERSION, 471 - 0x0, sizeof(struct ti_sci_msg_hdr), 470 + TI_SCI_FLAG_REQ_ACK_ON_PROCESSED, 471 + sizeof(struct ti_sci_msg_hdr), 472 472 sizeof(*rev_info)); 473 473 if (IS_ERR(xfer)) { 474 474 ret = PTR_ERR(xfer); ··· 596 596 info = handle_to_ti_sci_info(handle); 597 597 dev = info->dev; 598 598 599 - /* Response is expected, so need of any flags */ 600 599 xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_GET_DEVICE_STATE, 601 - 0, sizeof(*req), sizeof(*resp)); 600 + TI_SCI_FLAG_REQ_ACK_ON_PROCESSED, 601 + sizeof(*req), sizeof(*resp)); 602 602 if (IS_ERR(xfer)) { 603 603 ret = PTR_ERR(xfer); 604 604 dev_err(dev, "Message alloc failed(%d)\n", ret); ··· 2057 2057 ia_id, vint, global_event, vint_status_bit, 0); 2058 2058 } 2059 2059 2060 + /** 2061 + * ti_sci_cmd_ring_config() - configure RA ring 2062 + * @handle: Pointer to TI SCI handle. 2063 + * @valid_params: Bitfield defining validity of ring configuration 2064 + * parameters 2065 + * @nav_id: Device ID of Navigator Subsystem from which the ring is 2066 + * allocated 2067 + * @index: Ring index 2068 + * @addr_lo: The ring base address lo 32 bits 2069 + * @addr_hi: The ring base address hi 32 bits 2070 + * @count: Number of ring elements 2071 + * @mode: The mode of the ring 2072 + * @size: The ring element size. 2073 + * @order_id: Specifies the ring's bus order ID 2074 + * 2075 + * Return: 0 if all went well, else returns appropriate error value. 2076 + * 2077 + * See @ti_sci_msg_rm_ring_cfg_req for more info. 
2078 + */ 2079 + static int ti_sci_cmd_ring_config(const struct ti_sci_handle *handle, 2080 + u32 valid_params, u16 nav_id, u16 index, 2081 + u32 addr_lo, u32 addr_hi, u32 count, 2082 + u8 mode, u8 size, u8 order_id) 2083 + { 2084 + struct ti_sci_msg_rm_ring_cfg_req *req; 2085 + struct ti_sci_msg_hdr *resp; 2086 + struct ti_sci_xfer *xfer; 2087 + struct ti_sci_info *info; 2088 + struct device *dev; 2089 + int ret = 0; 2090 + 2091 + if (IS_ERR_OR_NULL(handle)) 2092 + return -EINVAL; 2093 + 2094 + info = handle_to_ti_sci_info(handle); 2095 + dev = info->dev; 2096 + 2097 + xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_RM_RING_CFG, 2098 + TI_SCI_FLAG_REQ_ACK_ON_PROCESSED, 2099 + sizeof(*req), sizeof(*resp)); 2100 + if (IS_ERR(xfer)) { 2101 + ret = PTR_ERR(xfer); 2102 + dev_err(dev, "RM_RA:Message config failed(%d)\n", ret); 2103 + return ret; 2104 + } 2105 + req = (struct ti_sci_msg_rm_ring_cfg_req *)xfer->xfer_buf; 2106 + req->valid_params = valid_params; 2107 + req->nav_id = nav_id; 2108 + req->index = index; 2109 + req->addr_lo = addr_lo; 2110 + req->addr_hi = addr_hi; 2111 + req->count = count; 2112 + req->mode = mode; 2113 + req->size = size; 2114 + req->order_id = order_id; 2115 + 2116 + ret = ti_sci_do_xfer(info, xfer); 2117 + if (ret) { 2118 + dev_err(dev, "RM_RA:Mbox config send fail %d\n", ret); 2119 + goto fail; 2120 + } 2121 + 2122 + resp = (struct ti_sci_msg_hdr *)xfer->xfer_buf; 2123 + ret = ti_sci_is_response_ack(resp) ? 0 : -ENODEV; 2124 + 2125 + fail: 2126 + ti_sci_put_one_xfer(&info->minfo, xfer); 2127 + dev_dbg(dev, "RM_RA:config ring %u ret:%d\n", index, ret); 2128 + return ret; 2129 + } 2130 + 2131 + /** 2132 + * ti_sci_cmd_ring_get_config() - get RA ring configuration 2133 + * @handle: Pointer to TI SCI handle. 
2134 + * @nav_id: Device ID of Navigator Subsystem from which the ring is 2135 + * allocated 2136 + * @index: Ring index 2137 + * @addr_lo: Returns ring's base address lo 32 bits 2138 + * @addr_hi: Returns ring's base address hi 32 bits 2139 + * @count: Returns number of ring elements 2140 + * @mode: Returns mode of the ring 2141 + * @size: Returns ring element size 2142 + * @order_id: Returns ring's bus order ID 2143 + * 2144 + * Return: 0 if all went well, else returns appropriate error value. 2145 + * 2146 + * See @ti_sci_msg_rm_ring_get_cfg_req for more info. 2147 + */ 2148 + static int ti_sci_cmd_ring_get_config(const struct ti_sci_handle *handle, 2149 + u32 nav_id, u32 index, u8 *mode, 2150 + u32 *addr_lo, u32 *addr_hi, 2151 + u32 *count, u8 *size, u8 *order_id) 2152 + { 2153 + struct ti_sci_msg_rm_ring_get_cfg_resp *resp; 2154 + struct ti_sci_msg_rm_ring_get_cfg_req *req; 2155 + struct ti_sci_xfer *xfer; 2156 + struct ti_sci_info *info; 2157 + struct device *dev; 2158 + int ret = 0; 2159 + 2160 + if (IS_ERR_OR_NULL(handle)) 2161 + return -EINVAL; 2162 + 2163 + info = handle_to_ti_sci_info(handle); 2164 + dev = info->dev; 2165 + 2166 + xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_RM_RING_GET_CFG, 2167 + TI_SCI_FLAG_REQ_ACK_ON_PROCESSED, 2168 + sizeof(*req), sizeof(*resp)); 2169 + if (IS_ERR(xfer)) { 2170 + ret = PTR_ERR(xfer); 2171 + dev_err(dev, 2172 + "RM_RA:Message get config failed(%d)\n", ret); 2173 + return ret; 2174 + } 2175 + req = (struct ti_sci_msg_rm_ring_get_cfg_req *)xfer->xfer_buf; 2176 + req->nav_id = nav_id; 2177 + req->index = index; 2178 + 2179 + ret = ti_sci_do_xfer(info, xfer); 2180 + if (ret) { 2181 + dev_err(dev, "RM_RA:Mbox get config send fail %d\n", ret); 2182 + goto fail; 2183 + } 2184 + 2185 + resp = (struct ti_sci_msg_rm_ring_get_cfg_resp *)xfer->xfer_buf; 2186 + 2187 + if (!ti_sci_is_response_ack(resp)) { 2188 + ret = -ENODEV; 2189 + } else { 2190 + if (mode) 2191 + *mode = resp->mode; 2192 + if (addr_lo) 2193 + *addr_lo = 
resp->addr_lo; 2194 + if (addr_hi) 2195 + *addr_hi = resp->addr_hi; 2196 + if (count) 2197 + *count = resp->count; 2198 + if (size) 2199 + *size = resp->size; 2200 + if (order_id) 2201 + *order_id = resp->order_id; 2202 + }; 2203 + 2204 + fail: 2205 + ti_sci_put_one_xfer(&info->minfo, xfer); 2206 + dev_dbg(dev, "RM_RA:get config ring %u ret:%d\n", index, ret); 2207 + return ret; 2208 + } 2209 + 2210 + /** 2211 + * ti_sci_cmd_rm_psil_pair() - Pair PSI-L source to destination thread 2212 + * @handle: Pointer to TI SCI handle. 2213 + * @nav_id: Device ID of Navigator Subsystem which should be used for 2214 + * pairing 2215 + * @src_thread: Source PSI-L thread ID 2216 + * @dst_thread: Destination PSI-L thread ID 2217 + * 2218 + * Return: 0 if all went well, else returns appropriate error value. 2219 + */ 2220 + static int ti_sci_cmd_rm_psil_pair(const struct ti_sci_handle *handle, 2221 + u32 nav_id, u32 src_thread, u32 dst_thread) 2222 + { 2223 + struct ti_sci_msg_psil_pair *req; 2224 + struct ti_sci_msg_hdr *resp; 2225 + struct ti_sci_xfer *xfer; 2226 + struct ti_sci_info *info; 2227 + struct device *dev; 2228 + int ret = 0; 2229 + 2230 + if (IS_ERR(handle)) 2231 + return PTR_ERR(handle); 2232 + if (!handle) 2233 + return -EINVAL; 2234 + 2235 + info = handle_to_ti_sci_info(handle); 2236 + dev = info->dev; 2237 + 2238 + xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_RM_PSIL_PAIR, 2239 + TI_SCI_FLAG_REQ_ACK_ON_PROCESSED, 2240 + sizeof(*req), sizeof(*resp)); 2241 + if (IS_ERR(xfer)) { 2242 + ret = PTR_ERR(xfer); 2243 + dev_err(dev, "RM_PSIL:Message reconfig failed(%d)\n", ret); 2244 + return ret; 2245 + } 2246 + req = (struct ti_sci_msg_psil_pair *)xfer->xfer_buf; 2247 + req->nav_id = nav_id; 2248 + req->src_thread = src_thread; 2249 + req->dst_thread = dst_thread; 2250 + 2251 + ret = ti_sci_do_xfer(info, xfer); 2252 + if (ret) { 2253 + dev_err(dev, "RM_PSIL:Mbox send fail %d\n", ret); 2254 + goto fail; 2255 + } 2256 + 2257 + resp = (struct ti_sci_msg_hdr *)xfer->xfer_buf; 
2258 + ret = ti_sci_is_response_ack(resp) ? 0 : -EINVAL; 2259 + 2260 + fail: 2261 + ti_sci_put_one_xfer(&info->minfo, xfer); 2262 + 2263 + return ret; 2264 + } 2265 + 2266 + /** 2267 + * ti_sci_cmd_rm_psil_unpair() - Unpair PSI-L source from destination thread 2268 + * @handle: Pointer to TI SCI handle. 2269 + * @nav_id: Device ID of Navigator Subsystem which should be used for 2270 + * unpairing 2271 + * @src_thread: Source PSI-L thread ID 2272 + * @dst_thread: Destination PSI-L thread ID 2273 + * 2274 + * Return: 0 if all went well, else returns appropriate error value. 2275 + */ 2276 + static int ti_sci_cmd_rm_psil_unpair(const struct ti_sci_handle *handle, 2277 + u32 nav_id, u32 src_thread, u32 dst_thread) 2278 + { 2279 + struct ti_sci_msg_psil_unpair *req; 2280 + struct ti_sci_msg_hdr *resp; 2281 + struct ti_sci_xfer *xfer; 2282 + struct ti_sci_info *info; 2283 + struct device *dev; 2284 + int ret = 0; 2285 + 2286 + if (IS_ERR(handle)) 2287 + return PTR_ERR(handle); 2288 + if (!handle) 2289 + return -EINVAL; 2290 + 2291 + info = handle_to_ti_sci_info(handle); 2292 + dev = info->dev; 2293 + 2294 + xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_RM_PSIL_UNPAIR, 2295 + TI_SCI_FLAG_REQ_ACK_ON_PROCESSED, 2296 + sizeof(*req), sizeof(*resp)); 2297 + if (IS_ERR(xfer)) { 2298 + ret = PTR_ERR(xfer); 2299 + dev_err(dev, "RM_PSIL:Message reconfig failed(%d)\n", ret); 2300 + return ret; 2301 + } 2302 + req = (struct ti_sci_msg_psil_unpair *)xfer->xfer_buf; 2303 + req->nav_id = nav_id; 2304 + req->src_thread = src_thread; 2305 + req->dst_thread = dst_thread; 2306 + 2307 + ret = ti_sci_do_xfer(info, xfer); 2308 + if (ret) { 2309 + dev_err(dev, "RM_PSIL:Mbox send fail %d\n", ret); 2310 + goto fail; 2311 + } 2312 + 2313 + resp = (struct ti_sci_msg_hdr *)xfer->xfer_buf; 2314 + ret = ti_sci_is_response_ack(resp) ? 
0 : -EINVAL; 2315 + 2316 + fail: 2317 + ti_sci_put_one_xfer(&info->minfo, xfer); 2318 + 2319 + return ret; 2320 + } 2321 + 2322 + /** 2323 + * ti_sci_cmd_rm_udmap_tx_ch_cfg() - Configure a UDMAP TX channel 2324 + * @handle: Pointer to TI SCI handle. 2325 + * @params: Pointer to ti_sci_msg_rm_udmap_tx_ch_cfg TX channel config 2326 + * structure 2327 + * 2328 + * Return: 0 if all went well, else returns appropriate error value. 2329 + * 2330 + * See @ti_sci_msg_rm_udmap_tx_ch_cfg and @ti_sci_msg_rm_udmap_tx_ch_cfg_req for 2331 + * more info. 2332 + */ 2333 + static int ti_sci_cmd_rm_udmap_tx_ch_cfg(const struct ti_sci_handle *handle, 2334 + const struct ti_sci_msg_rm_udmap_tx_ch_cfg *params) 2335 + { 2336 + struct ti_sci_msg_rm_udmap_tx_ch_cfg_req *req; 2337 + struct ti_sci_msg_hdr *resp; 2338 + struct ti_sci_xfer *xfer; 2339 + struct ti_sci_info *info; 2340 + struct device *dev; 2341 + int ret = 0; 2342 + 2343 + if (IS_ERR_OR_NULL(handle)) 2344 + return -EINVAL; 2345 + 2346 + info = handle_to_ti_sci_info(handle); 2347 + dev = info->dev; 2348 + 2349 + xfer = ti_sci_get_one_xfer(info, TISCI_MSG_RM_UDMAP_TX_CH_CFG, 2350 + TI_SCI_FLAG_REQ_ACK_ON_PROCESSED, 2351 + sizeof(*req), sizeof(*resp)); 2352 + if (IS_ERR(xfer)) { 2353 + ret = PTR_ERR(xfer); 2354 + dev_err(dev, "Message TX_CH_CFG alloc failed(%d)\n", ret); 2355 + return ret; 2356 + } 2357 + req = (struct ti_sci_msg_rm_udmap_tx_ch_cfg_req *)xfer->xfer_buf; 2358 + req->valid_params = params->valid_params; 2359 + req->nav_id = params->nav_id; 2360 + req->index = params->index; 2361 + req->tx_pause_on_err = params->tx_pause_on_err; 2362 + req->tx_filt_einfo = params->tx_filt_einfo; 2363 + req->tx_filt_pswords = params->tx_filt_pswords; 2364 + req->tx_atype = params->tx_atype; 2365 + req->tx_chan_type = params->tx_chan_type; 2366 + req->tx_supr_tdpkt = params->tx_supr_tdpkt; 2367 + req->tx_fetch_size = params->tx_fetch_size; 2368 + req->tx_credit_count = params->tx_credit_count; 2369 + req->txcq_qnum = 
params->txcq_qnum; 2370 + req->tx_priority = params->tx_priority; 2371 + req->tx_qos = params->tx_qos; 2372 + req->tx_orderid = params->tx_orderid; 2373 + req->fdepth = params->fdepth; 2374 + req->tx_sched_priority = params->tx_sched_priority; 2375 + req->tx_burst_size = params->tx_burst_size; 2376 + 2377 + ret = ti_sci_do_xfer(info, xfer); 2378 + if (ret) { 2379 + dev_err(dev, "Mbox send TX_CH_CFG fail %d\n", ret); 2380 + goto fail; 2381 + } 2382 + 2383 + resp = (struct ti_sci_msg_hdr *)xfer->xfer_buf; 2384 + ret = ti_sci_is_response_ack(resp) ? 0 : -EINVAL; 2385 + 2386 + fail: 2387 + ti_sci_put_one_xfer(&info->minfo, xfer); 2388 + dev_dbg(dev, "TX_CH_CFG: chn %u ret:%u\n", params->index, ret); 2389 + return ret; 2390 + } 2391 + 2392 + /** 2393 + * ti_sci_cmd_rm_udmap_rx_ch_cfg() - Configure a UDMAP RX channel 2394 + * @handle: Pointer to TI SCI handle. 2395 + * @params: Pointer to ti_sci_msg_rm_udmap_rx_ch_cfg RX channel config 2396 + * structure 2397 + * 2398 + * Return: 0 if all went well, else returns appropriate error value. 2399 + * 2400 + * See @ti_sci_msg_rm_udmap_rx_ch_cfg and @ti_sci_msg_rm_udmap_rx_ch_cfg_req for 2401 + * more info. 
2402 + */ 2403 + static int ti_sci_cmd_rm_udmap_rx_ch_cfg(const struct ti_sci_handle *handle, 2404 + const struct ti_sci_msg_rm_udmap_rx_ch_cfg *params) 2405 + { 2406 + struct ti_sci_msg_rm_udmap_rx_ch_cfg_req *req; 2407 + struct ti_sci_msg_hdr *resp; 2408 + struct ti_sci_xfer *xfer; 2409 + struct ti_sci_info *info; 2410 + struct device *dev; 2411 + int ret = 0; 2412 + 2413 + if (IS_ERR_OR_NULL(handle)) 2414 + return -EINVAL; 2415 + 2416 + info = handle_to_ti_sci_info(handle); 2417 + dev = info->dev; 2418 + 2419 + xfer = ti_sci_get_one_xfer(info, TISCI_MSG_RM_UDMAP_RX_CH_CFG, 2420 + TI_SCI_FLAG_REQ_ACK_ON_PROCESSED, 2421 + sizeof(*req), sizeof(*resp)); 2422 + if (IS_ERR(xfer)) { 2423 + ret = PTR_ERR(xfer); 2424 + dev_err(dev, "Message RX_CH_CFG alloc failed(%d)\n", ret); 2425 + return ret; 2426 + } 2427 + req = (struct ti_sci_msg_rm_udmap_rx_ch_cfg_req *)xfer->xfer_buf; 2428 + req->valid_params = params->valid_params; 2429 + req->nav_id = params->nav_id; 2430 + req->index = params->index; 2431 + req->rx_fetch_size = params->rx_fetch_size; 2432 + req->rxcq_qnum = params->rxcq_qnum; 2433 + req->rx_priority = params->rx_priority; 2434 + req->rx_qos = params->rx_qos; 2435 + req->rx_orderid = params->rx_orderid; 2436 + req->rx_sched_priority = params->rx_sched_priority; 2437 + req->flowid_start = params->flowid_start; 2438 + req->flowid_cnt = params->flowid_cnt; 2439 + req->rx_pause_on_err = params->rx_pause_on_err; 2440 + req->rx_atype = params->rx_atype; 2441 + req->rx_chan_type = params->rx_chan_type; 2442 + req->rx_ignore_short = params->rx_ignore_short; 2443 + req->rx_ignore_long = params->rx_ignore_long; 2444 + req->rx_burst_size = params->rx_burst_size; 2445 + 2446 + ret = ti_sci_do_xfer(info, xfer); 2447 + if (ret) { 2448 + dev_err(dev, "Mbox send RX_CH_CFG fail %d\n", ret); 2449 + goto fail; 2450 + } 2451 + 2452 + resp = (struct ti_sci_msg_hdr *)xfer->xfer_buf; 2453 + ret = ti_sci_is_response_ack(resp) ? 
0 : -EINVAL; 2454 + 2455 + fail: 2456 + ti_sci_put_one_xfer(&info->minfo, xfer); 2457 + dev_dbg(dev, "RX_CH_CFG: chn %u ret:%d\n", params->index, ret); 2458 + return ret; 2459 + } 2460 + 2461 + /** 2462 + * ti_sci_cmd_rm_udmap_rx_flow_cfg() - Configure UDMAP RX FLOW 2463 + * @handle: Pointer to TI SCI handle. 2464 + * @params: Pointer to ti_sci_msg_rm_udmap_flow_cfg RX FLOW config 2465 + * structure 2466 + * 2467 + * Return: 0 if all went well, else returns appropriate error value. 2468 + * 2469 + * See @ti_sci_msg_rm_udmap_flow_cfg and @ti_sci_msg_rm_udmap_flow_cfg_req for 2470 + * more info. 2471 + */ 2472 + static int ti_sci_cmd_rm_udmap_rx_flow_cfg(const struct ti_sci_handle *handle, 2473 + const struct ti_sci_msg_rm_udmap_flow_cfg *params) 2474 + { 2475 + struct ti_sci_msg_rm_udmap_flow_cfg_req *req; 2476 + struct ti_sci_msg_hdr *resp; 2477 + struct ti_sci_xfer *xfer; 2478 + struct ti_sci_info *info; 2479 + struct device *dev; 2480 + int ret = 0; 2481 + 2482 + if (IS_ERR_OR_NULL(handle)) 2483 + return -EINVAL; 2484 + 2485 + info = handle_to_ti_sci_info(handle); 2486 + dev = info->dev; 2487 + 2488 + xfer = ti_sci_get_one_xfer(info, TISCI_MSG_RM_UDMAP_FLOW_CFG, 2489 + TI_SCI_FLAG_REQ_ACK_ON_PROCESSED, 2490 + sizeof(*req), sizeof(*resp)); 2491 + if (IS_ERR(xfer)) { 2492 + ret = PTR_ERR(xfer); 2493 + dev_err(dev, "RX_FL_CFG: Message alloc failed(%d)\n", ret); 2494 + return ret; 2495 + } 2496 + req = (struct ti_sci_msg_rm_udmap_flow_cfg_req *)xfer->xfer_buf; 2497 + req->valid_params = params->valid_params; 2498 + req->nav_id = params->nav_id; 2499 + req->flow_index = params->flow_index; 2500 + req->rx_einfo_present = params->rx_einfo_present; 2501 + req->rx_psinfo_present = params->rx_psinfo_present; 2502 + req->rx_error_handling = params->rx_error_handling; 2503 + req->rx_desc_type = params->rx_desc_type; 2504 + req->rx_sop_offset = params->rx_sop_offset; 2505 + req->rx_dest_qnum = params->rx_dest_qnum; 2506 + req->rx_src_tag_hi = params->rx_src_tag_hi; 2507 + 
req->rx_src_tag_lo = params->rx_src_tag_lo; 2508 + req->rx_dest_tag_hi = params->rx_dest_tag_hi; 2509 + req->rx_dest_tag_lo = params->rx_dest_tag_lo; 2510 + req->rx_src_tag_hi_sel = params->rx_src_tag_hi_sel; 2511 + req->rx_src_tag_lo_sel = params->rx_src_tag_lo_sel; 2512 + req->rx_dest_tag_hi_sel = params->rx_dest_tag_hi_sel; 2513 + req->rx_dest_tag_lo_sel = params->rx_dest_tag_lo_sel; 2514 + req->rx_fdq0_sz0_qnum = params->rx_fdq0_sz0_qnum; 2515 + req->rx_fdq1_qnum = params->rx_fdq1_qnum; 2516 + req->rx_fdq2_qnum = params->rx_fdq2_qnum; 2517 + req->rx_fdq3_qnum = params->rx_fdq3_qnum; 2518 + req->rx_ps_location = params->rx_ps_location; 2519 + 2520 + ret = ti_sci_do_xfer(info, xfer); 2521 + if (ret) { 2522 + dev_err(dev, "RX_FL_CFG: Mbox send fail %d\n", ret); 2523 + goto fail; 2524 + } 2525 + 2526 + resp = (struct ti_sci_msg_hdr *)xfer->xfer_buf; 2527 + ret = ti_sci_is_response_ack(resp) ? 0 : -EINVAL; 2528 + 2529 + fail: 2530 + ti_sci_put_one_xfer(&info->minfo, xfer); 2531 + dev_dbg(info->dev, "RX_FL_CFG: %u ret:%d\n", params->flow_index, ret); 2532 + return ret; 2533 + } 2534 + 2535 + /** 2536 + * ti_sci_cmd_proc_request() - Command to request a physical processor control 2537 + * @handle: Pointer to TI SCI handle 2538 + * @proc_id: Processor ID this request is for 2539 + * 2540 + * Return: 0 if all went well, else returns appropriate error value. 
2541 + */ 2542 + static int ti_sci_cmd_proc_request(const struct ti_sci_handle *handle, 2543 + u8 proc_id) 2544 + { 2545 + struct ti_sci_msg_req_proc_request *req; 2546 + struct ti_sci_msg_hdr *resp; 2547 + struct ti_sci_info *info; 2548 + struct ti_sci_xfer *xfer; 2549 + struct device *dev; 2550 + int ret = 0; 2551 + 2552 + if (!handle) 2553 + return -EINVAL; 2554 + if (IS_ERR(handle)) 2555 + return PTR_ERR(handle); 2556 + 2557 + info = handle_to_ti_sci_info(handle); 2558 + dev = info->dev; 2559 + 2560 + xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_PROC_REQUEST, 2561 + TI_SCI_FLAG_REQ_ACK_ON_PROCESSED, 2562 + sizeof(*req), sizeof(*resp)); 2563 + if (IS_ERR(xfer)) { 2564 + ret = PTR_ERR(xfer); 2565 + dev_err(dev, "Message alloc failed(%d)\n", ret); 2566 + return ret; 2567 + } 2568 + req = (struct ti_sci_msg_req_proc_request *)xfer->xfer_buf; 2569 + req->processor_id = proc_id; 2570 + 2571 + ret = ti_sci_do_xfer(info, xfer); 2572 + if (ret) { 2573 + dev_err(dev, "Mbox send fail %d\n", ret); 2574 + goto fail; 2575 + } 2576 + 2577 + resp = (struct ti_sci_msg_hdr *)xfer->tx_message.buf; 2578 + 2579 + ret = ti_sci_is_response_ack(resp) ? 0 : -ENODEV; 2580 + 2581 + fail: 2582 + ti_sci_put_one_xfer(&info->minfo, xfer); 2583 + 2584 + return ret; 2585 + } 2586 + 2587 + /** 2588 + * ti_sci_cmd_proc_release() - Command to release a physical processor control 2589 + * @handle: Pointer to TI SCI handle 2590 + * @proc_id: Processor ID this request is for 2591 + * 2592 + * Return: 0 if all went well, else returns appropriate error value. 
2593 + */ 2594 + static int ti_sci_cmd_proc_release(const struct ti_sci_handle *handle, 2595 + u8 proc_id) 2596 + { 2597 + struct ti_sci_msg_req_proc_release *req; 2598 + struct ti_sci_msg_hdr *resp; 2599 + struct ti_sci_info *info; 2600 + struct ti_sci_xfer *xfer; 2601 + struct device *dev; 2602 + int ret = 0; 2603 + 2604 + if (!handle) 2605 + return -EINVAL; 2606 + if (IS_ERR(handle)) 2607 + return PTR_ERR(handle); 2608 + 2609 + info = handle_to_ti_sci_info(handle); 2610 + dev = info->dev; 2611 + 2612 + xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_PROC_RELEASE, 2613 + TI_SCI_FLAG_REQ_ACK_ON_PROCESSED, 2614 + sizeof(*req), sizeof(*resp)); 2615 + if (IS_ERR(xfer)) { 2616 + ret = PTR_ERR(xfer); 2617 + dev_err(dev, "Message alloc failed(%d)\n", ret); 2618 + return ret; 2619 + } 2620 + req = (struct ti_sci_msg_req_proc_release *)xfer->xfer_buf; 2621 + req->processor_id = proc_id; 2622 + 2623 + ret = ti_sci_do_xfer(info, xfer); 2624 + if (ret) { 2625 + dev_err(dev, "Mbox send fail %d\n", ret); 2626 + goto fail; 2627 + } 2628 + 2629 + resp = (struct ti_sci_msg_hdr *)xfer->tx_message.buf; 2630 + 2631 + ret = ti_sci_is_response_ack(resp) ? 0 : -ENODEV; 2632 + 2633 + fail: 2634 + ti_sci_put_one_xfer(&info->minfo, xfer); 2635 + 2636 + return ret; 2637 + } 2638 + 2639 + /** 2640 + * ti_sci_cmd_proc_handover() - Command to handover a physical processor 2641 + * control to a host in the processor's access 2642 + * control list. 2643 + * @handle: Pointer to TI SCI handle 2644 + * @proc_id: Processor ID this request is for 2645 + * @host_id: Host ID to get the control of the processor 2646 + * 2647 + * Return: 0 if all went well, else returns appropriate error value. 
2648 + */ 2649 + static int ti_sci_cmd_proc_handover(const struct ti_sci_handle *handle, 2650 + u8 proc_id, u8 host_id) 2651 + { 2652 + struct ti_sci_msg_req_proc_handover *req; 2653 + struct ti_sci_msg_hdr *resp; 2654 + struct ti_sci_info *info; 2655 + struct ti_sci_xfer *xfer; 2656 + struct device *dev; 2657 + int ret = 0; 2658 + 2659 + if (!handle) 2660 + return -EINVAL; 2661 + if (IS_ERR(handle)) 2662 + return PTR_ERR(handle); 2663 + 2664 + info = handle_to_ti_sci_info(handle); 2665 + dev = info->dev; 2666 + 2667 + xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_PROC_HANDOVER, 2668 + TI_SCI_FLAG_REQ_ACK_ON_PROCESSED, 2669 + sizeof(*req), sizeof(*resp)); 2670 + if (IS_ERR(xfer)) { 2671 + ret = PTR_ERR(xfer); 2672 + dev_err(dev, "Message alloc failed(%d)\n", ret); 2673 + return ret; 2674 + } 2675 + req = (struct ti_sci_msg_req_proc_handover *)xfer->xfer_buf; 2676 + req->processor_id = proc_id; 2677 + req->host_id = host_id; 2678 + 2679 + ret = ti_sci_do_xfer(info, xfer); 2680 + if (ret) { 2681 + dev_err(dev, "Mbox send fail %d\n", ret); 2682 + goto fail; 2683 + } 2684 + 2685 + resp = (struct ti_sci_msg_hdr *)xfer->tx_message.buf; 2686 + 2687 + ret = ti_sci_is_response_ack(resp) ? 0 : -ENODEV; 2688 + 2689 + fail: 2690 + ti_sci_put_one_xfer(&info->minfo, xfer); 2691 + 2692 + return ret; 2693 + } 2694 + 2695 + /** 2696 + * ti_sci_cmd_proc_set_config() - Command to set the processor boot 2697 + * configuration flags 2698 + * @handle: Pointer to TI SCI handle 2699 + * @proc_id: Processor ID this request is for 2700 + * @config_flags_set: Configuration flags to be set 2701 + * @config_flags_clear: Configuration flags to be cleared. 2702 + * 2703 + * Return: 0 if all went well, else returns appropriate error value. 
2704 + */ 2705 + static int ti_sci_cmd_proc_set_config(const struct ti_sci_handle *handle, 2706 + u8 proc_id, u64 bootvector, 2707 + u32 config_flags_set, 2708 + u32 config_flags_clear) 2709 + { 2710 + struct ti_sci_msg_req_set_config *req; 2711 + struct ti_sci_msg_hdr *resp; 2712 + struct ti_sci_info *info; 2713 + struct ti_sci_xfer *xfer; 2714 + struct device *dev; 2715 + int ret = 0; 2716 + 2717 + if (!handle) 2718 + return -EINVAL; 2719 + if (IS_ERR(handle)) 2720 + return PTR_ERR(handle); 2721 + 2722 + info = handle_to_ti_sci_info(handle); 2723 + dev = info->dev; 2724 + 2725 + xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_SET_CONFIG, 2726 + TI_SCI_FLAG_REQ_ACK_ON_PROCESSED, 2727 + sizeof(*req), sizeof(*resp)); 2728 + if (IS_ERR(xfer)) { 2729 + ret = PTR_ERR(xfer); 2730 + dev_err(dev, "Message alloc failed(%d)\n", ret); 2731 + return ret; 2732 + } 2733 + req = (struct ti_sci_msg_req_set_config *)xfer->xfer_buf; 2734 + req->processor_id = proc_id; 2735 + req->bootvector_low = bootvector & TI_SCI_ADDR_LOW_MASK; 2736 + req->bootvector_high = (bootvector & TI_SCI_ADDR_HIGH_MASK) >> 2737 + TI_SCI_ADDR_HIGH_SHIFT; 2738 + req->config_flags_set = config_flags_set; 2739 + req->config_flags_clear = config_flags_clear; 2740 + 2741 + ret = ti_sci_do_xfer(info, xfer); 2742 + if (ret) { 2743 + dev_err(dev, "Mbox send fail %d\n", ret); 2744 + goto fail; 2745 + } 2746 + 2747 + resp = (struct ti_sci_msg_hdr *)xfer->tx_message.buf; 2748 + 2749 + ret = ti_sci_is_response_ack(resp) ? 
0 : -ENODEV; 2750 + 2751 + fail: 2752 + ti_sci_put_one_xfer(&info->minfo, xfer); 2753 + 2754 + return ret; 2755 + } 2756 + 2757 + /** 2758 + * ti_sci_cmd_proc_set_control() - Command to set the processor boot 2759 + * control flags 2760 + * @handle: Pointer to TI SCI handle 2761 + * @proc_id: Processor ID this request is for 2762 + * @control_flags_set: Control flags to be set 2763 + * @control_flags_clear: Control flags to be cleared 2764 + * 2765 + * Return: 0 if all went well, else returns appropriate error value. 2766 + */ 2767 + static int ti_sci_cmd_proc_set_control(const struct ti_sci_handle *handle, 2768 + u8 proc_id, u32 control_flags_set, 2769 + u32 control_flags_clear) 2770 + { 2771 + struct ti_sci_msg_req_set_ctrl *req; 2772 + struct ti_sci_msg_hdr *resp; 2773 + struct ti_sci_info *info; 2774 + struct ti_sci_xfer *xfer; 2775 + struct device *dev; 2776 + int ret = 0; 2777 + 2778 + if (!handle) 2779 + return -EINVAL; 2780 + if (IS_ERR(handle)) 2781 + return PTR_ERR(handle); 2782 + 2783 + info = handle_to_ti_sci_info(handle); 2784 + dev = info->dev; 2785 + 2786 + xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_SET_CTRL, 2787 + TI_SCI_FLAG_REQ_ACK_ON_PROCESSED, 2788 + sizeof(*req), sizeof(*resp)); 2789 + if (IS_ERR(xfer)) { 2790 + ret = PTR_ERR(xfer); 2791 + dev_err(dev, "Message alloc failed(%d)\n", ret); 2792 + return ret; 2793 + } 2794 + req = (struct ti_sci_msg_req_set_ctrl *)xfer->xfer_buf; 2795 + req->processor_id = proc_id; 2796 + req->control_flags_set = control_flags_set; 2797 + req->control_flags_clear = control_flags_clear; 2798 + 2799 + ret = ti_sci_do_xfer(info, xfer); 2800 + if (ret) { 2801 + dev_err(dev, "Mbox send fail %d\n", ret); 2802 + goto fail; 2803 + } 2804 + 2805 + resp = (struct ti_sci_msg_hdr *)xfer->tx_message.buf; 2806 + 2807 + ret = ti_sci_is_response_ack(resp) ? 
0 : -ENODEV; 2808 + 2809 + fail: 2810 + ti_sci_put_one_xfer(&info->minfo, xfer); 2811 + 2812 + return ret; 2813 + } 2814 + 2815 + /** 2816 + * ti_sci_cmd_proc_get_status() - Command to get the processor boot status 2817 + * @handle: Pointer to TI SCI handle 2818 + * @proc_id: Processor ID this request is for 2819 + * 2820 + * Return: 0 if all went well, else returns appropriate error value. 2821 + */ 2822 + static int ti_sci_cmd_proc_get_status(const struct ti_sci_handle *handle, 2823 + u8 proc_id, u64 *bv, u32 *cfg_flags, 2824 + u32 *ctrl_flags, u32 *sts_flags) 2825 + { 2826 + struct ti_sci_msg_resp_get_status *resp; 2827 + struct ti_sci_msg_req_get_status *req; 2828 + struct ti_sci_info *info; 2829 + struct ti_sci_xfer *xfer; 2830 + struct device *dev; 2831 + int ret = 0; 2832 + 2833 + if (!handle) 2834 + return -EINVAL; 2835 + if (IS_ERR(handle)) 2836 + return PTR_ERR(handle); 2837 + 2838 + info = handle_to_ti_sci_info(handle); 2839 + dev = info->dev; 2840 + 2841 + xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_GET_STATUS, 2842 + TI_SCI_FLAG_REQ_ACK_ON_PROCESSED, 2843 + sizeof(*req), sizeof(*resp)); 2844 + if (IS_ERR(xfer)) { 2845 + ret = PTR_ERR(xfer); 2846 + dev_err(dev, "Message alloc failed(%d)\n", ret); 2847 + return ret; 2848 + } 2849 + req = (struct ti_sci_msg_req_get_status *)xfer->xfer_buf; 2850 + req->processor_id = proc_id; 2851 + 2852 + ret = ti_sci_do_xfer(info, xfer); 2853 + if (ret) { 2854 + dev_err(dev, "Mbox send fail %d\n", ret); 2855 + goto fail; 2856 + } 2857 + 2858 + resp = (struct ti_sci_msg_resp_get_status *)xfer->tx_message.buf; 2859 + 2860 + if (!ti_sci_is_response_ack(resp)) { 2861 + ret = -ENODEV; 2862 + } else { 2863 + *bv = (resp->bootvector_low & TI_SCI_ADDR_LOW_MASK) | 2864 + (((u64)resp->bootvector_high << TI_SCI_ADDR_HIGH_SHIFT) & 2865 + TI_SCI_ADDR_HIGH_MASK); 2866 + *cfg_flags = resp->config_flags; 2867 + *ctrl_flags = resp->control_flags; 2868 + *sts_flags = resp->status_flags; 2869 + } 2870 + 2871 + fail: 2872 + 
ti_sci_put_one_xfer(&info->minfo, xfer); 2873 + 2874 + return ret; 2875 + } 2876 + 2060 2877 /* 2061 2878 * ti_sci_setup_ops() - Setup the operations structures 2062 2879 * @info: pointer to TISCI pointer ··· 2886 2069 struct ti_sci_clk_ops *cops = &ops->clk_ops; 2887 2070 struct ti_sci_rm_core_ops *rm_core_ops = &ops->rm_core_ops; 2888 2071 struct ti_sci_rm_irq_ops *iops = &ops->rm_irq_ops; 2072 + struct ti_sci_rm_ringacc_ops *rops = &ops->rm_ring_ops; 2073 + struct ti_sci_rm_psil_ops *psilops = &ops->rm_psil_ops; 2074 + struct ti_sci_rm_udmap_ops *udmap_ops = &ops->rm_udmap_ops; 2075 + struct ti_sci_proc_ops *pops = &ops->proc_ops; 2889 2076 2890 2077 core_ops->reboot_device = ti_sci_cmd_core_reboot; 2891 2078 ··· 2929 2108 iops->set_event_map = ti_sci_cmd_set_event_map; 2930 2109 iops->free_irq = ti_sci_cmd_free_irq; 2931 2110 iops->free_event_map = ti_sci_cmd_free_event_map; 2111 + 2112 + rops->config = ti_sci_cmd_ring_config; 2113 + rops->get_config = ti_sci_cmd_ring_get_config; 2114 + 2115 + psilops->pair = ti_sci_cmd_rm_psil_pair; 2116 + psilops->unpair = ti_sci_cmd_rm_psil_unpair; 2117 + 2118 + udmap_ops->tx_ch_cfg = ti_sci_cmd_rm_udmap_tx_ch_cfg; 2119 + udmap_ops->rx_ch_cfg = ti_sci_cmd_rm_udmap_rx_ch_cfg; 2120 + udmap_ops->rx_flow_cfg = ti_sci_cmd_rm_udmap_rx_flow_cfg; 2121 + 2122 + pops->request = ti_sci_cmd_proc_request; 2123 + pops->release = ti_sci_cmd_proc_release; 2124 + pops->handover = ti_sci_cmd_proc_handover; 2125 + pops->set_config = ti_sci_cmd_proc_set_config; 2126 + pops->set_control = ti_sci_cmd_proc_set_control; 2127 + pops->get_status = ti_sci_cmd_proc_get_status; 2932 2128 } 2933 2129 2934 2130 /** ··· 3233 2395 struct device *dev, u32 dev_id, char *of_prop) 3234 2396 { 3235 2397 struct ti_sci_resource *res; 2398 + bool valid_set = false; 3236 2399 u32 resource_subtype; 3237 2400 int i, ret; 3238 2401 ··· 3265 2426 &res->desc[i].start, 3266 2427 &res->desc[i].num); 3267 2428 if (ret) { 3268 - dev_err(dev, "dev = %d subtype %d not 
allocated for this host\n", 2429 + dev_dbg(dev, "dev = %d subtype %d not allocated for this host\n", 3269 2430 dev_id, resource_subtype); 3270 - return ERR_PTR(ret); 2431 + res->desc[i].start = 0; 2432 + res->desc[i].num = 0; 2433 + continue; 3271 2434 } 3272 2435 3273 2436 dev_dbg(dev, "dev = %d, subtype = %d, start = %d, num = %d\n", 3274 2437 dev_id, resource_subtype, res->desc[i].start, 3275 2438 res->desc[i].num); 3276 2439 2440 + valid_set = true; 3277 2441 res->desc[i].res_map = 3278 2442 devm_kzalloc(dev, BITS_TO_LONGS(res->desc[i].num) * 3279 2443 sizeof(*res->desc[i].res_map), GFP_KERNEL); ··· 3285 2443 } 3286 2444 raw_spin_lock_init(&res->lock); 3287 2445 3288 - return res; 2446 + if (valid_set) 2447 + return res; 2448 + 2449 + return ERR_PTR(-EINVAL); 3289 2450 } 3290 2451 3291 2452 static int tisci_reboot_handler(struct notifier_block *nb, unsigned long mode,
+810
drivers/firmware/ti_sci.h
··· 42 42 #define TI_SCI_MSG_SET_IRQ 0x1000 43 43 #define TI_SCI_MSG_FREE_IRQ 0x1001 44 44 45 + /* NAVSS resource management */ 46 + /* Ringacc requests */ 47 + #define TI_SCI_MSG_RM_RING_ALLOCATE 0x1100 48 + #define TI_SCI_MSG_RM_RING_FREE 0x1101 49 + #define TI_SCI_MSG_RM_RING_RECONFIG 0x1102 50 + #define TI_SCI_MSG_RM_RING_RESET 0x1103 51 + #define TI_SCI_MSG_RM_RING_CFG 0x1110 52 + #define TI_SCI_MSG_RM_RING_GET_CFG 0x1111 53 + 54 + /* PSI-L requests */ 55 + #define TI_SCI_MSG_RM_PSIL_PAIR 0x1280 56 + #define TI_SCI_MSG_RM_PSIL_UNPAIR 0x1281 57 + 58 + #define TI_SCI_MSG_RM_UDMAP_TX_ALLOC 0x1200 59 + #define TI_SCI_MSG_RM_UDMAP_TX_FREE 0x1201 60 + #define TI_SCI_MSG_RM_UDMAP_RX_ALLOC 0x1210 61 + #define TI_SCI_MSG_RM_UDMAP_RX_FREE 0x1211 62 + #define TI_SCI_MSG_RM_UDMAP_FLOW_CFG 0x1220 63 + #define TI_SCI_MSG_RM_UDMAP_OPT_FLOW_CFG 0x1221 64 + 65 + #define TISCI_MSG_RM_UDMAP_TX_CH_CFG 0x1205 66 + #define TISCI_MSG_RM_UDMAP_TX_CH_GET_CFG 0x1206 67 + #define TISCI_MSG_RM_UDMAP_RX_CH_CFG 0x1215 68 + #define TISCI_MSG_RM_UDMAP_RX_CH_GET_CFG 0x1216 69 + #define TISCI_MSG_RM_UDMAP_FLOW_CFG 0x1230 70 + #define TISCI_MSG_RM_UDMAP_FLOW_SIZE_THRESH_CFG 0x1231 71 + #define TISCI_MSG_RM_UDMAP_FLOW_GET_CFG 0x1232 72 + #define TISCI_MSG_RM_UDMAP_FLOW_SIZE_THRESH_GET_CFG 0x1233 73 + 74 + /* Processor Control requests */ 75 + #define TI_SCI_MSG_PROC_REQUEST 0xc000 76 + #define TI_SCI_MSG_PROC_RELEASE 0xc001 77 + #define TI_SCI_MSG_PROC_HANDOVER 0xc005 78 + #define TI_SCI_MSG_SET_CONFIG 0xc100 79 + #define TI_SCI_MSG_SET_CTRL 0xc101 80 + #define TI_SCI_MSG_GET_STATUS 0xc400 81 + 45 82 /** 46 83 * struct ti_sci_msg_hdr - Generic Message Header for All messages and responses 47 84 * @type: Type of messages: One of TI_SCI_MSG* values ··· 639 602 u16 global_event; 640 603 u8 vint_status_bit; 641 604 u8 secondary_host; 605 + } __packed; 606 + 607 + /** 608 + * struct ti_sci_msg_rm_ring_cfg_req - Configure a Navigator Subsystem ring 609 + * 610 + * Configures the non-real-time 
registers of a Navigator Subsystem ring. 611 + * @hdr: Generic Header 612 + * @valid_params: Bitfield defining validity of ring configuration parameters. 613 + * The ring configuration fields are not valid, and will not be used for 614 + * ring configuration, if their corresponding valid bit is zero. 615 + * Valid bit usage: 616 + * 0 - Valid bit for @tisci_msg_rm_ring_cfg_req addr_lo 617 + * 1 - Valid bit for @tisci_msg_rm_ring_cfg_req addr_hi 618 + * 2 - Valid bit for @tisci_msg_rm_ring_cfg_req count 619 + * 3 - Valid bit for @tisci_msg_rm_ring_cfg_req mode 620 + * 4 - Valid bit for @tisci_msg_rm_ring_cfg_req size 621 + * 5 - Valid bit for @tisci_msg_rm_ring_cfg_req order_id 622 + * @nav_id: Device ID of Navigator Subsystem from which the ring is allocated 623 + * @index: ring index to be configured. 624 + * @addr_lo: 32 LSBs of ring base address to be programmed into the ring's 625 + * RING_BA_LO register 626 + * @addr_hi: 16 MSBs of ring base address to be programmed into the ring's 627 + * RING_BA_HI register. 628 + * @count: Number of ring elements. Must be even if mode is CREDENTIALS or QM 629 + * modes. 630 + * @mode: Specifies the mode the ring is to be configured. 631 + * @size: Specifies encoded ring element size. To calculate the encoded size use 632 + * the formula (log2(size_bytes) - 2), where size_bytes cannot be 633 + * greater than 256. 634 + * @order_id: Specifies the ring's bus order ID. 635 + */ 636 + struct ti_sci_msg_rm_ring_cfg_req { 637 + struct ti_sci_msg_hdr hdr; 638 + u32 valid_params; 639 + u16 nav_id; 640 + u16 index; 641 + u32 addr_lo; 642 + u32 addr_hi; 643 + u32 count; 644 + u8 mode; 645 + u8 size; 646 + u8 order_id; 647 + } __packed; 648 + 649 + /** 650 + * struct ti_sci_msg_rm_ring_get_cfg_req - Get RA ring's configuration 651 + * 652 + * Gets the configuration of the non-real-time register fields of a ring. The 653 + * host, or a supervisor of the host, who owns the ring must be the requesting 654 + * host. 
The values of the non-real-time registers are returned in 655 + * @ti_sci_msg_rm_ring_get_cfg_resp. 656 + * 657 + * @hdr: Generic Header 658 + * @nav_id: Device ID of Navigator Subsystem from which the ring is allocated 659 + * @index: ring index. 660 + */ 661 + struct ti_sci_msg_rm_ring_get_cfg_req { 662 + struct ti_sci_msg_hdr hdr; 663 + u16 nav_id; 664 + u16 index; 665 + } __packed; 666 + 667 + /** 668 + * struct ti_sci_msg_rm_ring_get_cfg_resp - Ring get configuration response 669 + * 670 + * Response received by host processor after RM has handled 671 + * @ti_sci_msg_rm_ring_get_cfg_req. The response contains the ring's 672 + * non-real-time register values. 673 + * 674 + * @hdr: Generic Header 675 + * @addr_lo: Ring 32 LSBs of base address 676 + * @addr_hi: Ring 16 MSBs of base address. 677 + * @count: Ring number of elements. 678 + * @mode: Ring mode. 679 + * @size: encoded Ring element size 680 + * @order_id: Ring order ID. 681 + */ 682 + struct ti_sci_msg_rm_ring_get_cfg_resp { 683 + struct ti_sci_msg_hdr hdr; 684 + u32 addr_lo; 685 + u32 addr_hi; 686 + u32 count; 687 + u8 mode; 688 + u8 size; 689 + u8 order_id; 690 + } __packed; 691 + 692 + /** 693 + * struct ti_sci_msg_psil_pair - Pairs a PSI-L source thread to a destination 694 + * thread 695 + * @hdr: Generic Header 696 + * @nav_id: SoC Navigator Subsystem device ID whose PSI-L config proxy is 697 + * used to pair the source and destination threads. 698 + * @src_thread: PSI-L source thread ID within the PSI-L System thread map. 699 + * 700 + * UDMAP transmit channels mapped to source threads will have their 701 + * TCHAN_THRD_ID register programmed with the destination thread if the pairing 702 + * is successful. 703 + * 704 + * @dst_thread: PSI-L destination thread ID within the PSI-L System thread map. 705 + * PSI-L destination threads start at index 0x8000. The request is NACK'd if 706 + * the destination thread is not greater than or equal to 0x8000. 
707 + * 708 + * UDMAP receive channels mapped to destination threads will have their 709 + * RCHAN_THRD_ID register programmed with the source thread if the pairing 710 + * is successful. 711 + * 712 + * Request type is TI_SCI_MSG_RM_PSIL_PAIR, response is a generic ACK or NACK 713 + * message. 714 + */ 715 + struct ti_sci_msg_psil_pair { 716 + struct ti_sci_msg_hdr hdr; 717 + u32 nav_id; 718 + u32 src_thread; 719 + u32 dst_thread; 720 + } __packed; 721 + 722 + /** 723 + * struct ti_sci_msg_psil_unpair - Unpairs a PSI-L source thread from a 724 + * destination thread 725 + * @hdr: Generic Header 726 + * @nav_id: SoC Navigator Subsystem device ID whose PSI-L config proxy is 727 + * used to unpair the source and destination threads. 728 + * @src_thread: PSI-L source thread ID within the PSI-L System thread map. 729 + * 730 + * UDMAP transmit channels mapped to source threads will have their 731 + * TCHAN_THRD_ID register cleared if the unpairing is successful. 732 + * 733 + * @dst_thread: PSI-L destination thread ID within the PSI-L System thread map. 734 + * PSI-L destination threads start at index 0x8000. The request is NACK'd if 735 + * the destination thread is not greater than or equal to 0x8000. 736 + * 737 + * UDMAP receive channels mapped to destination threads will have their 738 + * RCHAN_THRD_ID register cleared if the unpairing is successful. 739 + * 740 + * Request type is TI_SCI_MSG_RM_PSIL_UNPAIR, response is a generic ACK or NACK 741 + * message. 742 + */ 743 + struct ti_sci_msg_psil_unpair { 744 + struct ti_sci_msg_hdr hdr; 745 + u32 nav_id; 746 + u32 src_thread; 747 + u32 dst_thread; 748 + } __packed; 749 + 750 + /** 751 + * struct ti_sci_msg_udmap_rx_flow_cfg - UDMAP receive flow configuration 752 + * message 753 + * @hdr: Generic Header 754 + * @nav_id: SoC Navigator Subsystem device ID from which the receive flow is 755 + * allocated 756 + * @flow_index: UDMAP receive flow index for non-optional configuration. 
757 + * @rx_ch_index: Specifies the index of the receive channel using the flow_index 758 + * @rx_einfo_present: UDMAP receive flow extended packet info present. 759 + * @rx_psinfo_present: UDMAP receive flow PS words present. 760 + * @rx_error_handling: UDMAP receive flow error handling configuration. Valid 761 + * values are TI_SCI_RM_UDMAP_RX_FLOW_ERR_DROP/RETRY. 762 + * @rx_desc_type: UDMAP receive flow descriptor type. It can be one of 763 + * TI_SCI_RM_UDMAP_RX_FLOW_DESC_HOST/MONO. 764 + * @rx_sop_offset: UDMAP receive flow start of packet offset. 765 + * @rx_dest_qnum: UDMAP receive flow destination queue number. 766 + * @rx_ps_location: UDMAP receive flow PS words location. 767 + * 0 - end of packet descriptor 768 + * 1 - Beginning of the data buffer 769 + * @rx_src_tag_hi: UDMAP receive flow source tag high byte constant 770 + * @rx_src_tag_lo: UDMAP receive flow source tag low byte constant 771 + * @rx_dest_tag_hi: UDMAP receive flow destination tag high byte constant 772 + * @rx_dest_tag_lo: UDMAP receive flow destination tag low byte constant 773 + * @rx_src_tag_hi_sel: UDMAP receive flow source tag high byte selector 774 + * @rx_src_tag_lo_sel: UDMAP receive flow source tag low byte selector 775 + * @rx_dest_tag_hi_sel: UDMAP receive flow destination tag high byte selector 776 + * @rx_dest_tag_lo_sel: UDMAP receive flow destination tag low byte selector 777 + * @rx_size_thresh_en: UDMAP receive flow packet size based free buffer queue 778 + * enable. If enabled, the ti_sci_rm_udmap_rx_flow_opt_cfg also need to be 779 + * configured and sent. 780 + * @rx_fdq0_sz0_qnum: UDMAP receive flow free descriptor queue 0. 781 + * @rx_fdq1_qnum: UDMAP receive flow free descriptor queue 1. 782 + * @rx_fdq2_qnum: UDMAP receive flow free descriptor queue 2. 783 + * @rx_fdq3_qnum: UDMAP receive flow free descriptor queue 3. 784 + * 785 + * For detailed information on the settings, see the UDMAP section of the TRM. 
786 + */ 787 + struct ti_sci_msg_udmap_rx_flow_cfg { 788 + struct ti_sci_msg_hdr hdr; 789 + u32 nav_id; 790 + u32 flow_index; 791 + u32 rx_ch_index; 792 + u8 rx_einfo_present; 793 + u8 rx_psinfo_present; 794 + u8 rx_error_handling; 795 + u8 rx_desc_type; 796 + u16 rx_sop_offset; 797 + u16 rx_dest_qnum; 798 + u8 rx_ps_location; 799 + u8 rx_src_tag_hi; 800 + u8 rx_src_tag_lo; 801 + u8 rx_dest_tag_hi; 802 + u8 rx_dest_tag_lo; 803 + u8 rx_src_tag_hi_sel; 804 + u8 rx_src_tag_lo_sel; 805 + u8 rx_dest_tag_hi_sel; 806 + u8 rx_dest_tag_lo_sel; 807 + u8 rx_size_thresh_en; 808 + u16 rx_fdq0_sz0_qnum; 809 + u16 rx_fdq1_qnum; 810 + u16 rx_fdq2_qnum; 811 + u16 rx_fdq3_qnum; 812 + } __packed; 813 + 814 + /** 815 + * struct rm_ti_sci_msg_udmap_rx_flow_opt_cfg - parameters for UDMAP receive 816 + * flow optional configuration 817 + * @hdr: Generic Header 818 + * @nav_id: SoC Navigator Subsystem device ID from which the receive flow is 819 + * allocated 820 + * @flow_index: UDMAP receive flow index for optional configuration. 821 + * @rx_ch_index: Specifies the index of the receive channel using the flow_index 822 + * @rx_size_thresh0: UDMAP receive flow packet size threshold 0. 823 + * @rx_size_thresh1: UDMAP receive flow packet size threshold 1. 824 + * @rx_size_thresh2: UDMAP receive flow packet size threshold 2. 825 + * @rx_fdq0_sz1_qnum: UDMAP receive flow free descriptor queue for size 826 + * threshold 1. 827 + * @rx_fdq0_sz2_qnum: UDMAP receive flow free descriptor queue for size 828 + * threshold 2. 829 + * @rx_fdq0_sz3_qnum: UDMAP receive flow free descriptor queue for size 830 + * threshold 3. 831 + * 832 + * For detailed information on the settings, see the UDMAP section of the TRM. 
833 + */ 834 + struct rm_ti_sci_msg_udmap_rx_flow_opt_cfg { 835 + struct ti_sci_msg_hdr hdr; 836 + u32 nav_id; 837 + u32 flow_index; 838 + u32 rx_ch_index; 839 + u16 rx_size_thresh0; 840 + u16 rx_size_thresh1; 841 + u16 rx_size_thresh2; 842 + u16 rx_fdq0_sz1_qnum; 843 + u16 rx_fdq0_sz2_qnum; 844 + u16 rx_fdq0_sz3_qnum; 845 + } __packed; 846 + 847 + /** 848 + * Configures a Navigator Subsystem UDMAP transmit channel 849 + * 850 + * Configures the non-real-time registers of a Navigator Subsystem UDMAP 851 + * transmit channel. The channel index must be assigned to the host defined 852 + * in the TISCI header via the RM board configuration resource assignment 853 + * range list. 854 + * 855 + * @hdr: Generic Header 856 + * 857 + * @valid_params: Bitfield defining validity of tx channel configuration 858 + * parameters. The tx channel configuration fields are not valid, and will not 859 + * be used for ch configuration, if their corresponding valid bit is zero. 860 + * Valid bit usage: 861 + * 0 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_pause_on_err 862 + * 1 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_atype 863 + * 2 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_chan_type 864 + * 3 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_fetch_size 865 + * 4 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::txcq_qnum 866 + * 5 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_priority 867 + * 6 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_qos 868 + * 7 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_orderid 869 + * 8 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_sched_priority 870 + * 9 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_filt_einfo 871 + * 10 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_filt_pswords 872 + * 11 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_supr_tdpkt 873 + * 12 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_credit_count 874 + * 13 - Valid bit 
for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::fdepth 875 + * 14 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_burst_size 876 + * 877 + * @nav_id: SoC device ID of Navigator Subsystem where tx channel is located 878 + * 879 + * @index: UDMAP transmit channel index. 880 + * 881 + * @tx_pause_on_err: UDMAP transmit channel pause on error configuration to 882 + * be programmed into the tx_pause_on_err field of the channel's TCHAN_TCFG 883 + * register. 884 + * 885 + * @tx_filt_einfo: UDMAP transmit channel extended packet information passing 886 + * configuration to be programmed into the tx_filt_einfo field of the 887 + * channel's TCHAN_TCFG register. 888 + * 889 + * @tx_filt_pswords: UDMAP transmit channel protocol specific word passing 890 + * configuration to be programmed into the tx_filt_pswords field of the 891 + * channel's TCHAN_TCFG register. 892 + * 893 + * @tx_atype: UDMAP transmit channel non Ring Accelerator access pointer 894 + * interpretation configuration to be programmed into the tx_atype field of 895 + * the channel's TCHAN_TCFG register. 896 + * 897 + * @tx_chan_type: UDMAP transmit channel functional channel type and work 898 + * passing mechanism configuration to be programmed into the tx_chan_type 899 + * field of the channel's TCHAN_TCFG register. 900 + * 901 + * @tx_supr_tdpkt: UDMAP transmit channel teardown packet generation suppression 902 + * configuration to be programmed into the tx_supr_tdpkt field of the channel's 903 + * TCHAN_TCFG register. 904 + * 905 + * @tx_fetch_size: UDMAP transmit channel number of 32-bit descriptor words to 906 + * fetch configuration to be programmed into the tx_fetch_size field of the 907 + * channel's TCHAN_TCFG register. The user must make sure to set the maximum 908 + * word count that can pass through the channel for any allowed descriptor type. 
909 + * 910 + * @tx_credit_count: UDMAP transmit channel transfer request credit count 911 + * configuration to be programmed into the count field of the TCHAN_TCREDIT 912 + * register. Specifies how many credits for complete TRs are available. 913 + * 914 + * @txcq_qnum: UDMAP transmit channel completion queue configuration to be 915 + * programmed into the txcq_qnum field of the TCHAN_TCQ register. The specified 916 + * completion queue must be assigned to the host, or a subordinate of the host, 917 + * requesting configuration of the transmit channel. 918 + * 919 + * @tx_priority: UDMAP transmit channel transmit priority value to be programmed 920 + * into the priority field of the channel's TCHAN_TPRI_CTRL register. 921 + * 922 + * @tx_qos: UDMAP transmit channel transmit qos value to be programmed into the 923 + * qos field of the channel's TCHAN_TPRI_CTRL register. 924 + * 925 + * @tx_orderid: UDMAP transmit channel bus order id value to be programmed into 926 + * the orderid field of the channel's TCHAN_TPRI_CTRL register. 927 + * 928 + * @fdepth: UDMAP transmit channel FIFO depth configuration to be programmed 929 + * into the fdepth field of the TCHAN_TFIFO_DEPTH register. Sets the number of 930 + * Tx FIFO bytes which are allowed to be stored for the channel. Check the UDMAP 931 + * section of the TRM for restrictions regarding this parameter. 932 + * 933 + * @tx_sched_priority: UDMAP transmit channel tx scheduling priority 934 + * configuration to be programmed into the priority field of the channel's 935 + * TCHAN_TST_SCHED register. 936 + * 937 + * @tx_burst_size: UDMAP transmit channel burst size configuration to be 938 + * programmed into the tx_burst_size field of the TCHAN_TCFG register. 
939 + */ 940 + struct ti_sci_msg_rm_udmap_tx_ch_cfg_req { 941 + struct ti_sci_msg_hdr hdr; 942 + u32 valid_params; 943 + u16 nav_id; 944 + u16 index; 945 + u8 tx_pause_on_err; 946 + u8 tx_filt_einfo; 947 + u8 tx_filt_pswords; 948 + u8 tx_atype; 949 + u8 tx_chan_type; 950 + u8 tx_supr_tdpkt; 951 + u16 tx_fetch_size; 952 + u8 tx_credit_count; 953 + u16 txcq_qnum; 954 + u8 tx_priority; 955 + u8 tx_qos; 956 + u8 tx_orderid; 957 + u16 fdepth; 958 + u8 tx_sched_priority; 959 + u8 tx_burst_size; 960 + } __packed; 961 + 962 + /** 963 + * Configures a Navigator Subsystem UDMAP receive channel 964 + * 965 + * Configures the non-real-time registers of a Navigator Subsystem UDMAP 966 + * receive channel. The channel index must be assigned to the host defined 967 + * in the TISCI header via the RM board configuration resource assignment 968 + * range list. 969 + * 970 + * @hdr: Generic Header 971 + * 972 + * @valid_params: Bitfield defining validity of rx channel configuration 973 + * parameters. 974 + * The rx channel configuration fields are not valid, and will not be used for 975 + * ch configuration, if their corresponding valid bit is zero. 
976 + * Valid bit usage: 977 + * 0 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rx_pause_on_err 978 + * 1 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rx_atype 979 + * 2 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rx_chan_type 980 + * 3 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rx_fetch_size 981 + * 4 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rxcq_qnum 982 + * 5 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rx_priority 983 + * 6 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rx_qos 984 + * 7 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rx_orderid 985 + * 8 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rx_sched_priority 986 + * 9 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::flowid_start 987 + * 10 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::flowid_cnt 988 + * 11 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rx_ignore_short 989 + * 12 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rx_ignore_long 990 + * 14 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rx_burst_size 991 + * 992 + * @nav_id: SoC device ID of Navigator Subsystem where rx channel is located 993 + * 994 + * @index: UDMAP receive channel index. 995 + * 996 + * @rx_fetch_size: UDMAP receive channel number of 32-bit descriptor words to 997 + * fetch configuration to be programmed into the rx_fetch_size field of the 998 + * channel's RCHAN_RCFG register. 999 + * 1000 + * @rxcq_qnum: UDMAP receive channel completion queue configuration to be 1001 + * programmed into the rxcq_qnum field of the RCHAN_RCQ register. 1002 + * The specified completion queue must be assigned to the host, or a subordinate 1003 + * of the host, requesting configuration of the receive channel. 1004 + * 1005 + * @rx_priority: UDMAP receive channel receive priority value to be programmed 1006 + * into the priority field of the channel's RCHAN_RPRI_CTRL register. 
1007 + * 1008 + * @rx_qos: UDMAP receive channel receive qos value to be programmed into the 1009 + * qos field of the channel's RCHAN_RPRI_CTRL register. 1010 + * 1011 + * @rx_orderid: UDMAP receive channel bus order id value to be programmed into 1012 + * the orderid field of the channel's RCHAN_RPRI_CTRL register. 1013 + * 1014 + * @rx_sched_priority: UDMAP receive channel rx scheduling priority 1015 + * configuration to be programmed into the priority field of the channel's 1016 + * RCHAN_RST_SCHED register. 1017 + * 1018 + * @flowid_start: UDMAP receive channel additional flows starting index 1019 + * configuration to program into the flow_start field of the RCHAN_RFLOW_RNG 1020 + * register. Specifies the starting index for flow IDs the receive channel is to 1021 + * make use of beyond the default flow. flowid_start and @ref flowid_cnt must be 1022 + * set as valid and configured together. The starting flow ID set by 1023 + * @ref flowid_cnt must be a flow index within the Navigator Subsystem's subset 1024 + * of flows beyond the default flows statically mapped to receive channels. 1025 + * The additional flows must be assigned to the host, or a subordinate of the 1026 + * host, requesting configuration of the receive channel. 1027 + * 1028 + * @flowid_cnt: UDMAP receive channel additional flows count configuration to 1029 + * program into the flowid_cnt field of the RCHAN_RFLOW_RNG register. 1030 + * This field specifies how many flow IDs are in the additional contiguous range 1031 + * of legal flow IDs for the channel. @ref flowid_start and flowid_cnt must be 1032 + * set as valid and configured together. Disabling the valid_params field bit 1033 + * for flowid_cnt indicates no flow IDs other than the default are to be 1034 + * allocated and used by the receive channel. @ref flowid_start plus flowid_cnt 1035 + * cannot be greater than the number of receive flows in the receive channel's 1036 + * Navigator Subsystem. 
The additional flows must be assigned to the host, or a 1037 + * subordinate of the host, requesting configuration of the receive channel. 1038 + * 1039 + * @rx_pause_on_err: UDMAP receive channel pause on error configuration to be 1040 + * programmed into the rx_pause_on_err field of the channel's RCHAN_RCFG 1041 + * register. 1042 + * 1043 + * @rx_atype: UDMAP receive channel non Ring Accelerator access pointer 1044 + * interpretation configuration to be programmed into the rx_atype field of the 1045 + * channel's RCHAN_RCFG register. 1046 + * 1047 + * @rx_chan_type: UDMAP receive channel functional channel type and work passing 1048 + * mechanism configuration to be programmed into the rx_chan_type field of the 1049 + * channel's RCHAN_RCFG register. 1050 + * 1051 + * @rx_ignore_short: UDMAP receive channel short packet treatment configuration 1052 + * to be programmed into the rx_ignore_short field of the RCHAN_RCFG register. 1053 + * 1054 + * @rx_ignore_long: UDMAP receive channel long packet treatment configuration to 1055 + * be programmed into the rx_ignore_long field of the RCHAN_RCFG register. 1056 + * 1057 + * @rx_burst_size: UDMAP receive channel burst size configuration to be 1058 + * programmed into the rx_burst_size field of the RCHAN_RCFG register. 1059 + */ 1060 + struct ti_sci_msg_rm_udmap_rx_ch_cfg_req { 1061 + struct ti_sci_msg_hdr hdr; 1062 + u32 valid_params; 1063 + u16 nav_id; 1064 + u16 index; 1065 + u16 rx_fetch_size; 1066 + u16 rxcq_qnum; 1067 + u8 rx_priority; 1068 + u8 rx_qos; 1069 + u8 rx_orderid; 1070 + u8 rx_sched_priority; 1071 + u16 flowid_start; 1072 + u16 flowid_cnt; 1073 + u8 rx_pause_on_err; 1074 + u8 rx_atype; 1075 + u8 rx_chan_type; 1076 + u8 rx_ignore_short; 1077 + u8 rx_ignore_long; 1078 + u8 rx_burst_size; 1079 + } __packed; 1080 + 1081 + /** 1082 + * Configures a Navigator Subsystem UDMAP receive flow 1083 + * 1084 + * Configures a Navigator Subsystem UDMAP receive flow's registers. 
1085 + * Configuration does not include the flow registers which handle size-based 1086 + * free descriptor queue routing. 1087 + * 1088 + * The flow index must be assigned to the host defined in the TISCI header via 1089 + * the RM board configuration resource assignment range list. 1090 + * 1091 + * @hdr: Standard TISCI header 1092 + * 1093 + * @valid_params 1094 + * Bitfield defining validity of rx flow configuration parameters. The 1095 + * rx flow configuration fields are not valid, and will not be used for flow 1096 + * configuration, if their corresponding valid bit is zero. Valid bit usage: 1097 + * 0 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_einfo_present 1098 + * 1 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_psinfo_present 1099 + * 2 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_error_handling 1100 + * 3 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_desc_type 1101 + * 4 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_sop_offset 1102 + * 5 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_dest_qnum 1103 + * 6 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_src_tag_hi 1104 + * 7 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_src_tag_lo 1105 + * 8 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_dest_tag_hi 1106 + * 9 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_dest_tag_lo 1107 + * 10 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_src_tag_hi_sel 1108 + * 11 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_src_tag_lo_sel 1109 + * 12 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_dest_tag_hi_sel 1110 + * 13 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_dest_tag_lo_sel 1111 + * 14 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_fdq0_sz0_qnum 1112 + * 15 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_fdq1_sz0_qnum 1113 + * 16 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_fdq2_sz0_qnum 1114 + * 17 - Valid bit for 
@tisci_msg_rm_udmap_flow_cfg_req::rx_fdq3_sz0_qnum 1115 + * 18 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_ps_location 1116 + * 1117 + * @nav_id: SoC device ID of Navigator Subsystem from which the receive flow is 1118 + * allocated 1119 + * 1120 + * @flow_index: UDMAP receive flow index for non-optional configuration. 1121 + * 1122 + * @rx_einfo_present: 1123 + * UDMAP receive flow extended packet info present configuration to be 1124 + * programmed into the rx_einfo_present field of the flow's RFLOW_RFA register. 1125 + * 1126 + * @rx_psinfo_present: 1127 + * UDMAP receive flow PS words present configuration to be programmed into the 1128 + * rx_psinfo_present field of the flow's RFLOW_RFA register. 1129 + * 1130 + * @rx_error_handling: 1131 + * UDMAP receive flow error handling configuration to be programmed into the 1132 + * rx_error_handling field of the flow's RFLOW_RFA register. 1133 + * 1134 + * @rx_desc_type: 1135 + * UDMAP receive flow descriptor type configuration to be programmed into the 1136 + * rx_desc_type field of the flow's RFLOW_RFA register. 1137 + * 1138 + * @rx_sop_offset: 1139 + * UDMAP receive flow start of packet offset configuration to be programmed 1140 + * into the rx_sop_offset field of the RFLOW_RFA register. See the UDMAP 1141 + * section of the TRM for more information on this setting. Valid values for 1142 + * this field are 0-255 bytes. 1143 + * 1144 + * @rx_dest_qnum: 1145 + * UDMAP receive flow destination queue configuration to be programmed into the 1146 + * rx_dest_qnum field of the flow's RFLOW_RFA register. The specified 1147 + * destination queue must be valid within the Navigator Subsystem and must be 1148 + * owned by the host, or a subordinate of the host, requesting allocation and 1149 + * configuration of the receive flow.
1150 + * 1151 + * @rx_src_tag_hi: 1152 + * UDMAP receive flow source tag high byte constant configuration to be 1153 + * programmed into the rx_src_tag_hi field of the flow's RFLOW_RFB register. 1154 + * See the UDMAP section of the TRM for more information on this setting. 1155 + * 1156 + * @rx_src_tag_lo: 1157 + * UDMAP receive flow source tag low byte constant configuration to be 1158 + * programmed into the rx_src_tag_lo field of the flow's RFLOW_RFB register. 1159 + * See the UDMAP section of the TRM for more information on this setting. 1160 + * 1161 + * @rx_dest_tag_hi: 1162 + * UDMAP receive flow destination tag high byte constant configuration to be 1163 + * programmed into the rx_dest_tag_hi field of the flow's RFLOW_RFB register. 1164 + * See the UDMAP section of the TRM for more information on this setting. 1165 + * 1166 + * @rx_dest_tag_lo: 1167 + * UDMAP receive flow destination tag low byte constant configuration to be 1168 + * programmed into the rx_dest_tag_lo field of the flow's RFLOW_RFB register. 1169 + * See the UDMAP section of the TRM for more information on this setting. 1170 + * 1171 + * @rx_src_tag_hi_sel: 1172 + * UDMAP receive flow source tag high byte selector configuration to be 1173 + * programmed into the rx_src_tag_hi_sel field of the RFLOW_RFC register. See 1174 + * the UDMAP section of the TRM for more information on this setting. 1175 + * 1176 + * @rx_src_tag_lo_sel: 1177 + * UDMAP receive flow source tag low byte selector configuration to be 1178 + * programmed into the rx_src_tag_lo_sel field of the RFLOW_RFC register. See 1179 + * the UDMAP section of the TRM for more information on this setting. 1180 + * 1181 + * @rx_dest_tag_hi_sel: 1182 + * UDMAP receive flow destination tag high byte selector configuration to be 1183 + * programmed into the rx_dest_tag_hi_sel field of the RFLOW_RFC register. See 1184 + * the UDMAP section of the TRM for more information on this setting. 
1185 + * 1186 + * @rx_dest_tag_lo_sel: 1187 + * UDMAP receive flow destination tag low byte selector configuration to be 1188 + * programmed into the rx_dest_tag_lo_sel field of the RFLOW_RFC register. See 1189 + * the UDMAP section of the TRM for more information on this setting. 1190 + * 1191 + * @rx_fdq0_sz0_qnum: 1192 + * UDMAP receive flow free descriptor queue 0 configuration to be programmed 1193 + * into the rx_fdq0_sz0_qnum field of the flow's RFLOW_RFD register. See the 1194 + * UDMAP section of the TRM for more information on this setting. The specified 1195 + * free queue must be valid within the Navigator Subsystem and must be owned 1196 + * by the host, or a subordinate of the host, requesting allocation and 1197 + * configuration of the receive flow. 1198 + * 1199 + * @rx_fdq1_qnum: 1200 + * UDMAP receive flow free descriptor queue 1 configuration to be programmed 1201 + * into the rx_fdq1_qnum field of the flow's RFLOW_RFD register. See the 1202 + * UDMAP section of the TRM for more information on this setting. The specified 1203 + * free queue must be valid within the Navigator Subsystem and must be owned 1204 + * by the host, or a subordinate of the host, requesting allocation and 1205 + * configuration of the receive flow. 1206 + * 1207 + * @rx_fdq2_qnum: 1208 + * UDMAP receive flow free descriptor queue 2 configuration to be programmed 1209 + * into the rx_fdq2_qnum field of the flow's RFLOW_RFE register. See the 1210 + * UDMAP section of the TRM for more information on this setting. The specified 1211 + * free queue must be valid within the Navigator Subsystem and must be owned 1212 + * by the host, or a subordinate of the host, requesting allocation and 1213 + * configuration of the receive flow. 1214 + * 1215 + * @rx_fdq3_qnum: 1216 + * UDMAP receive flow free descriptor queue 3 configuration to be programmed 1217 + * into the rx_fdq3_qnum field of the flow's RFLOW_RFE register. 
See the 1218 + * UDMAP section of the TRM for more information on this setting. The specified 1219 + * free queue must be valid within the Navigator Subsystem and must be owned 1220 + * by the host, or a subordinate of the host, requesting allocation and 1221 + * configuration of the receive flow. 1222 + * 1223 + * @rx_ps_location: 1224 + * UDMAP receive flow PS words location configuration to be programmed into the 1225 + * rx_ps_location field of the flow's RFLOW_RFA register. 1226 + */ 1227 + struct ti_sci_msg_rm_udmap_flow_cfg_req { 1228 + struct ti_sci_msg_hdr hdr; 1229 + u32 valid_params; 1230 + u16 nav_id; 1231 + u16 flow_index; 1232 + u8 rx_einfo_present; 1233 + u8 rx_psinfo_present; 1234 + u8 rx_error_handling; 1235 + u8 rx_desc_type; 1236 + u16 rx_sop_offset; 1237 + u16 rx_dest_qnum; 1238 + u8 rx_src_tag_hi; 1239 + u8 rx_src_tag_lo; 1240 + u8 rx_dest_tag_hi; 1241 + u8 rx_dest_tag_lo; 1242 + u8 rx_src_tag_hi_sel; 1243 + u8 rx_src_tag_lo_sel; 1244 + u8 rx_dest_tag_hi_sel; 1245 + u8 rx_dest_tag_lo_sel; 1246 + u16 rx_fdq0_sz0_qnum; 1247 + u16 rx_fdq1_qnum; 1248 + u16 rx_fdq2_qnum; 1249 + u16 rx_fdq3_qnum; 1250 + u8 rx_ps_location; 1251 + } __packed; 1252 + 1253 + /** 1254 + * struct ti_sci_msg_req_proc_request - Request a processor 1255 + * @hdr: Generic Header 1256 + * @processor_id: ID of processor being requested 1257 + * 1258 + * Request type is TI_SCI_MSG_PROC_REQUEST, response is a generic ACK/NACK 1259 + * message. 1260 + */ 1261 + struct ti_sci_msg_req_proc_request { 1262 + struct ti_sci_msg_hdr hdr; 1263 + u8 processor_id; 1264 + } __packed; 1265 + 1266 + /** 1267 + * struct ti_sci_msg_req_proc_release - Release a processor 1268 + * @hdr: Generic Header 1269 + * @processor_id: ID of processor being released 1270 + * 1271 + * Request type is TI_SCI_MSG_PROC_RELEASE, response is a generic ACK/NACK 1272 + * message. 
1273 + */ 1274 + struct ti_sci_msg_req_proc_release { 1275 + struct ti_sci_msg_hdr hdr; 1276 + u8 processor_id; 1277 + } __packed; 1278 + 1279 + /** 1280 + * struct ti_sci_msg_req_proc_handover - Handover a processor to a host 1281 + * @hdr: Generic Header 1282 + * @processor_id: ID of processor being handed over 1283 + * @host_id: Host ID the control needs to be transferred to 1284 + * 1285 + * Request type is TI_SCI_MSG_PROC_HANDOVER, response is a generic ACK/NACK 1286 + * message. 1287 + */ 1288 + struct ti_sci_msg_req_proc_handover { 1289 + struct ti_sci_msg_hdr hdr; 1290 + u8 processor_id; 1291 + u8 host_id; 1292 + } __packed; 1293 + 1294 + /* Boot Vector masks */ 1295 + #define TI_SCI_ADDR_LOW_MASK GENMASK_ULL(31, 0) 1296 + #define TI_SCI_ADDR_HIGH_MASK GENMASK_ULL(63, 32) 1297 + #define TI_SCI_ADDR_HIGH_SHIFT 32 1298 + 1299 + /** 1300 + * struct ti_sci_msg_req_set_config - Set Processor boot configuration 1301 + * @hdr: Generic Header 1302 + * @processor_id: ID of processor being configured 1303 + * @bootvector_low: Lower 32 bit address (Little Endian) of boot vector 1304 + * @bootvector_high: Higher 32 bit address (Little Endian) of boot vector 1305 + * @config_flags_set: Optional Processor specific Config Flags to set. 1306 + * Setting a bit here implies the corresponding mode 1307 + * will be set 1308 + * @config_flags_clear: Optional Processor specific Config Flags to clear. 1309 + * Setting a bit here implies the corresponding mode 1310 + * will be cleared 1311 + * 1312 + * Request type is TI_SCI_MSG_SET_CONFIG, response is a generic ACK/NACK 1313 + * message.
1314 + */ 1315 + struct ti_sci_msg_req_set_config { 1316 + struct ti_sci_msg_hdr hdr; 1317 + u8 processor_id; 1318 + u32 bootvector_low; 1319 + u32 bootvector_high; 1320 + u32 config_flags_set; 1321 + u32 config_flags_clear; 1322 + } __packed; 1323 + 1324 + /** 1325 + * struct ti_sci_msg_req_set_ctrl - Set Processor boot control flags 1326 + * @hdr: Generic Header 1327 + * @processor_id: ID of processor being configured 1328 + * @control_flags_set: Optional Processor specific Control Flags to set. 1329 + * Setting a bit here implies the corresponding mode 1330 + * will be set 1331 + * @control_flags_clear: Optional Processor specific Control Flags to clear. 1332 + * Setting a bit here implies the corresponding mode 1333 + * will be cleared 1334 + * 1335 + * Request type is TI_SCI_MSG_SET_CTRL, response is a generic ACK/NACK 1336 + * message.
1352 + */ 1353 + struct ti_sci_msg_req_get_status { 1354 + struct ti_sci_msg_hdr hdr; 1355 + u8 processor_id; 1356 + } __packed; 1357 + 1358 + /** 1359 + * struct ti_sci_msg_resp_get_status - Processor boot status response 1360 + * @hdr: Generic Header 1361 + * @processor_id: ID of processor whose status is returned 1362 + * @bootvector_low: Lower 32 bit address (Little Endian) of boot vector 1363 + * @bootvector_high: Higher 32 bit address (Little Endian) of boot vector 1364 + * @config_flags: Optional Processor specific Config Flags set currently 1365 + * @control_flags: Optional Processor specific Control Flags set currently 1366 + * @status_flags: Optional Processor specific Status Flags set currently 1367 + * 1368 + * Response structure to a TI_SCI_MSG_GET_STATUS request. 1369 + */ 1370 + struct ti_sci_msg_resp_get_status { 1371 + struct ti_sci_msg_hdr hdr; 1372 + u8 processor_id; 1373 + u32 bootvector_low; 1374 + u32 bootvector_high; 1375 + u32 config_flags; 1376 + u32 control_flags; 1377 + u32 status_flags; 642 1378 } __packed; 643 1379 644 1380 #endif /* __TI_SCI_H */
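The valid_params convention documented above (a field is only honored when its corresponding valid bit is set, and flowid_start/flowid_cnt must be validated together) can be sketched as follows. This is a minimal userspace illustration; the macro and function names are hypothetical, only the bit positions come from the ti_sci_msg_rm_udmap_rx_ch_cfg_req documentation.

```c
#include <stdint.h>

/* Illustrative names; bit positions match the valid_params doc above. */
#define RX_CH_CFG_VALID_PAUSE_ON_ERR	(1u << 0)
#define RX_CH_CFG_VALID_ATYPE		(1u << 1)
#define RX_CH_CFG_VALID_CHAN_TYPE	(1u << 2)
#define RX_CH_CFG_VALID_FETCH_SIZE	(1u << 3)
#define RX_CH_CFG_VALID_RXCQ_QNUM	(1u << 4)
#define RX_CH_CFG_VALID_FLOWID_START	(1u << 9)
#define RX_CH_CFG_VALID_FLOWID_CNT	(1u << 10)

/*
 * flowid_start and flowid_cnt must be set as valid and configured
 * together: either both valid bits are set, or neither is.
 */
static int rx_ch_cfg_flowid_valid(uint32_t valid_params)
{
	uint32_t both = RX_CH_CFG_VALID_FLOWID_START |
			RX_CH_CFG_VALID_FLOWID_CNT;
	uint32_t got = valid_params & both;

	return got == 0 || got == both;
}
```

A caller would OR together only the valid bits for the fields it actually filled in; everything else is ignored by the firmware.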
+48
drivers/hwmon/scmi-hwmon.c
··· 18 18 const struct scmi_sensor_info **info[hwmon_max]; 19 19 }; 20 20 21 + static inline u64 __pow10(u8 x) 22 + { 23 + u64 r = 1; 24 + 25 + while (x--) 26 + r *= 10; 27 + 28 + return r; 29 + } 30 + 31 + static int scmi_hwmon_scale(const struct scmi_sensor_info *sensor, u64 *value) 32 + { 33 + s8 scale = sensor->scale; 34 + u64 f; 35 + 36 + switch (sensor->type) { 37 + case TEMPERATURE_C: 38 + case VOLTAGE: 39 + case CURRENT: 40 + scale += 3; 41 + break; 42 + case POWER: 43 + case ENERGY: 44 + scale += 6; 45 + break; 46 + default: 47 + break; 48 + } 49 + 50 + if (scale == 0) 51 + return 0; 52 + 53 + if (abs(scale) > 19) 54 + return -E2BIG; 55 + 56 + f = __pow10(abs(scale)); 57 + if (scale > 0) 58 + *value *= f; 59 + else 60 + *value = div64_u64(*value, f); 61 + 62 + return 0; 63 + } 64 + 21 65 static int scmi_hwmon_read(struct device *dev, enum hwmon_sensor_types type, 22 66 u32 attr, int channel, long *val) 23 67 { ··· 73 29 74 30 sensor = *(scmi_sensors->info[type] + channel); 75 31 ret = h->sensor_ops->reading_get(h, sensor->id, false, &value); 32 + if (ret) 33 + return ret; 34 + 35 + ret = scmi_hwmon_scale(sensor, &value); 76 36 if (!ret) 77 37 *val = value; 78 38
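The scaling logic in scmi_hwmon_scale() above converts a raw SCMI reading (value × 10^scale in base units) into the units hwmon expects: milli-units for temperature, voltage and current (hence the +3) and micro-units for power and energy (hence the +6). A standalone sketch of the same arithmetic, with the extra exponent passed in explicitly (the function names here are illustrative, not part of the driver):

```c
#include <stdint.h>

/* Same role as __pow10() in the driver. */
static uint64_t pow10_u64(uint8_t x)
{
	uint64_t r = 1;

	while (x--)
		r *= 10;
	return r;
}

/*
 * value is the raw SCMI reading, scale its power-of-ten exponent, and
 * extra the unit adjustment (+3 for milli-units, +6 for micro-units).
 */
static uint64_t scmi_scale(uint64_t value, int8_t scale, int8_t extra)
{
	int8_t s = scale + extra;

	if (s == 0)
		return value;
	if (s > 0)
		return value * pow10_u64((uint8_t)s);
	return value / pow10_u64((uint8_t)-s);
}
```

For example, a temperature sensor with scale 0 reporting 42 °C yields 42000 millidegrees, while one with scale -3 already reports in millidegrees and passes through unchanged.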
+8
drivers/memory/Kconfig
··· 8 8 9 9 if MEMORY 10 10 11 + config DDR 12 + bool 13 + help 14 + Data from JEDEC specs for DDR SDRAM memories, 15 + particularly the AC timing parameters and addressing 16 + information. This data is useful for drivers handling 17 + DDR SDRAM controllers. 18 + 11 19 config ARM_PL172_MPMC 12 20 tristate "ARM PL172 MPMC driver" 13 21 depends on ARM_AMBA && OF
+1
drivers/memory/Makefile
··· 3 3 # Makefile for memory devices 4 4 # 5 5 6 + obj-$(CONFIG_DDR) += jedec_ddr_data.o 6 7 ifeq ($(CONFIG_DDR),y) 7 8 obj-$(CONFIG_OF) += of_memory.o 8 9 endif
+241 -82
drivers/memory/brcmstb_dpfe.c
··· 33 33 #include <linux/io.h> 34 34 #include <linux/module.h> 35 35 #include <linux/of_address.h> 36 + #include <linux/of_device.h> 36 37 #include <linux/platform_device.h> 37 38 38 39 #define DRVNAME "brcmstb-dpfe" 39 - #define FIRMWARE_NAME "dpfe.bin" 40 40 41 41 /* DCPU register offsets */ 42 42 #define REG_DCPU_RESET 0x0 ··· 59 59 #define DRAM_INFO_MR4 0x4 60 60 #define DRAM_INFO_ERROR 0x8 61 61 #define DRAM_INFO_MR4_MASK 0xff 62 + #define DRAM_INFO_MR4_SHIFT 24 /* We need to look at byte 3 */ 62 63 63 64 /* DRAM MR4 Offsets & Masks */ 64 65 #define DRAM_MR4_REFRESH 0x0 /* Refresh rate */ ··· 74 73 #define DRAM_MR4_TH_OFFS_MASK 0x3 75 74 #define DRAM_MR4_TUF_MASK 0x1 76 75 77 - /* DRAM Vendor Offsets & Masks */ 76 + /* DRAM Vendor Offsets & Masks (API v2) */ 78 77 #define DRAM_VENDOR_MR5 0x0 79 78 #define DRAM_VENDOR_MR6 0x4 80 79 #define DRAM_VENDOR_MR7 0x8 81 80 #define DRAM_VENDOR_MR8 0xc 82 81 #define DRAM_VENDOR_ERROR 0x10 83 82 #define DRAM_VENDOR_MASK 0xff 83 + #define DRAM_VENDOR_SHIFT 24 /* We need to look at byte 3 */ 84 + 85 + /* DRAM Information Offsets & Masks (API v3) */ 86 + #define DRAM_DDR_INFO_MR4 0x0 87 + #define DRAM_DDR_INFO_MR5 0x4 88 + #define DRAM_DDR_INFO_MR6 0x8 89 + #define DRAM_DDR_INFO_MR7 0xc 90 + #define DRAM_DDR_INFO_MR8 0x10 91 + #define DRAM_DDR_INFO_ERROR 0x14 92 + #define DRAM_DDR_INFO_MASK 0xff 84 93 85 94 /* Reset register bits & masks */ 86 95 #define DCPU_RESET_SHIFT 0x0 ··· 120 109 #define DPFE_MSG_TYPE_COMMAND 1 121 110 #define DPFE_MSG_TYPE_RESPONSE 2 122 111 123 - #define DELAY_LOOP_MAX 200000 112 + #define DELAY_LOOP_MAX 1000 124 113 125 114 enum dpfe_msg_fields { 126 115 MSG_HEADER, ··· 128 117 MSG_ARG_COUNT, 129 118 MSG_ARG0, 130 119 MSG_CHKSUM, 131 - MSG_FIELD_MAX /* Last entry */ 120 + MSG_FIELD_MAX = 16 /* Max number of arguments */ 132 121 }; 133 122 134 123 enum dpfe_commands { ··· 136 125 DPFE_CMD_GET_REFRESH, 137 126 DPFE_CMD_GET_VENDOR, 138 127 DPFE_CMD_MAX /* Last entry */ 139 - }; 140 - 141 - struct 
dpfe_msg { 142 - u32 header; 143 - u32 command; 144 - u32 arg_count; 145 - u32 arg0; 146 - u32 chksum; /* This is the sum of all other entries. */ 147 128 }; 148 129 149 130 /* ··· 171 168 bool is_big_endian; 172 169 }; 173 170 171 + /* API version and corresponding commands */ 172 + struct dpfe_api { 173 + int version; 174 + const char *fw_name; 175 + const struct attribute_group **sysfs_attrs; 176 + u32 command[DPFE_CMD_MAX][MSG_FIELD_MAX]; 177 + }; 178 + 174 179 /* Things we need for as long as we are active. */ 175 180 struct private_data { 176 181 void __iomem *regs; 177 182 void __iomem *dmem; 178 183 void __iomem *imem; 179 184 struct device *dev; 185 + const struct dpfe_api *dpfe_api; 180 186 struct mutex lock; 181 187 }; 182 188 ··· 194 182 "Incorrect checksum", "Malformed command", "Timed out", 195 183 }; 196 184 197 - /* List of supported firmware commands */ 198 - static const u32 dpfe_commands[DPFE_CMD_MAX][MSG_FIELD_MAX] = { 199 - [DPFE_CMD_GET_INFO] = { 200 - [MSG_HEADER] = DPFE_MSG_TYPE_COMMAND, 201 - [MSG_COMMAND] = 1, 202 - [MSG_ARG_COUNT] = 1, 203 - [MSG_ARG0] = 1, 204 - [MSG_CHKSUM] = 4, 205 - }, 206 - [DPFE_CMD_GET_REFRESH] = { 207 - [MSG_HEADER] = DPFE_MSG_TYPE_COMMAND, 208 - [MSG_COMMAND] = 2, 209 - [MSG_ARG_COUNT] = 1, 210 - [MSG_ARG0] = 1, 211 - [MSG_CHKSUM] = 5, 212 - }, 213 - [DPFE_CMD_GET_VENDOR] = { 214 - [MSG_HEADER] = DPFE_MSG_TYPE_COMMAND, 215 - [MSG_COMMAND] = 2, 216 - [MSG_ARG_COUNT] = 1, 217 - [MSG_ARG0] = 2, 218 - [MSG_CHKSUM] = 6, 185 + /* 186 + * Forward declaration of our sysfs attribute functions, so we can declare the 187 + * attribute data structures early. 
188 + */ 189 + static ssize_t show_info(struct device *, struct device_attribute *, char *); 190 + static ssize_t show_refresh(struct device *, struct device_attribute *, char *); 191 + static ssize_t store_refresh(struct device *, struct device_attribute *, 192 + const char *, size_t); 193 + static ssize_t show_vendor(struct device *, struct device_attribute *, char *); 194 + static ssize_t show_dram(struct device *, struct device_attribute *, char *); 195 + 196 + /* 197 + * Declare our attributes early, so they can be referenced in the API data 198 + * structure. We need to do this, because the attributes depend on the API 199 + * version. 200 + */ 201 + static DEVICE_ATTR(dpfe_info, 0444, show_info, NULL); 202 + static DEVICE_ATTR(dpfe_refresh, 0644, show_refresh, store_refresh); 203 + static DEVICE_ATTR(dpfe_vendor, 0444, show_vendor, NULL); 204 + static DEVICE_ATTR(dpfe_dram, 0444, show_dram, NULL); 205 + 206 + /* API v2 sysfs attributes */ 207 + static struct attribute *dpfe_v2_attrs[] = { 208 + &dev_attr_dpfe_info.attr, 209 + &dev_attr_dpfe_refresh.attr, 210 + &dev_attr_dpfe_vendor.attr, 211 + NULL 212 + }; 213 + ATTRIBUTE_GROUPS(dpfe_v2); 214 + 215 + /* API v3 sysfs attributes */ 216 + static struct attribute *dpfe_v3_attrs[] = { 217 + &dev_attr_dpfe_info.attr, 218 + &dev_attr_dpfe_dram.attr, 219 + NULL 220 + }; 221 + ATTRIBUTE_GROUPS(dpfe_v3); 222 + 223 + /* API v2 firmware commands */ 224 + static const struct dpfe_api dpfe_api_v2 = { 225 + .version = 2, 226 + .fw_name = "dpfe.bin", 227 + .sysfs_attrs = dpfe_v2_groups, 228 + .command = { 229 + [DPFE_CMD_GET_INFO] = { 230 + [MSG_HEADER] = DPFE_MSG_TYPE_COMMAND, 231 + [MSG_COMMAND] = 1, 232 + [MSG_ARG_COUNT] = 1, 233 + [MSG_ARG0] = 1, 234 + [MSG_CHKSUM] = 4, 235 + }, 236 + [DPFE_CMD_GET_REFRESH] = { 237 + [MSG_HEADER] = DPFE_MSG_TYPE_COMMAND, 238 + [MSG_COMMAND] = 2, 239 + [MSG_ARG_COUNT] = 1, 240 + [MSG_ARG0] = 1, 241 + [MSG_CHKSUM] = 5, 242 + }, 243 + [DPFE_CMD_GET_VENDOR] = { 244 + [MSG_HEADER] = 
DPFE_MSG_TYPE_COMMAND, 245 + [MSG_COMMAND] = 2, 246 + [MSG_ARG_COUNT] = 1, 247 + [MSG_ARG0] = 2, 248 + [MSG_CHKSUM] = 6, 249 + }, 250 + } 251 + }; 252 + 253 + /* API v3 firmware commands */ 254 + static const struct dpfe_api dpfe_api_v3 = { 255 + .version = 3, 256 + .fw_name = NULL, /* We expect the firmware to have been downloaded! */ 257 + .sysfs_attrs = dpfe_v3_groups, 258 + .command = { 259 + [DPFE_CMD_GET_INFO] = { 260 + [MSG_HEADER] = DPFE_MSG_TYPE_COMMAND, 261 + [MSG_COMMAND] = 0x0101, 262 + [MSG_ARG_COUNT] = 1, 263 + [MSG_ARG0] = 1, 264 + [MSG_CHKSUM] = 0x104, 265 + }, 266 + [DPFE_CMD_GET_REFRESH] = { 267 + [MSG_HEADER] = DPFE_MSG_TYPE_COMMAND, 268 + [MSG_COMMAND] = 0x0202, 269 + [MSG_ARG_COUNT] = 0, 270 + /* 271 + * This is a bit ugly. Without arguments, the checksum 272 + * follows right after the argument count and not at 273 + * offset MSG_CHKSUM. 274 + */ 275 + [MSG_ARG0] = 0x203, 276 + }, 277 + /* There's no GET_VENDOR command in API v3. */ 219 278 }, 220 279 }; 221 280 ··· 331 248 writel_relaxed(val, regs + REG_DCPU_RESET); 332 249 } 333 250 334 - static unsigned int get_msg_chksum(const u32 msg[]) 251 + static unsigned int get_msg_chksum(const u32 msg[], unsigned int max) 335 252 { 336 253 unsigned int sum = 0; 337 254 unsigned int i; 338 255 339 256 /* Don't include the last field in the checksum. */ 340 - for (i = 0; i < MSG_FIELD_MAX - 1; i++) 257 + for (i = 0; i < max; i++) 341 258 sum += msg[i]; 342 259 343 260 return sum; ··· 349 266 unsigned int msg_type; 350 267 unsigned int offset; 351 268 void __iomem *ptr = NULL; 269 + 270 + /* There is no need to use this function for API v3 or later. 
*/ 271 + if (unlikely(priv->dpfe_api->version >= 3)) { 272 + return NULL; 273 + } 352 274 353 275 msg_type = (response >> DRAM_MSG_TYPE_OFFSET) & DRAM_MSG_TYPE_MASK; 354 276 offset = (response >> DRAM_MSG_ADDR_OFFSET) & DRAM_MSG_ADDR_MASK; ··· 382 294 return ptr; 383 295 } 384 296 297 + static void __finalize_command(struct private_data *priv) 298 + { 299 + unsigned int release_mbox; 300 + 301 + /* 302 + * It depends on the API version which MBOX register we have to write to 303 + * to signal we are done. 304 + */ 305 + release_mbox = (priv->dpfe_api->version < 3) 306 + ? REG_TO_HOST_MBOX : REG_TO_DCPU_MBOX; 307 + writel_relaxed(0, priv->regs + release_mbox); 308 + } 309 + 385 310 static int __send_command(struct private_data *priv, unsigned int cmd, 386 311 u32 result[]) 387 312 { 388 - const u32 *msg = dpfe_commands[cmd]; 313 + const u32 *msg = priv->dpfe_api->command[cmd]; 389 314 void __iomem *regs = priv->regs; 390 - unsigned int i, chksum; 315 + unsigned int i, chksum, chksum_idx; 391 316 int ret = 0; 392 317 u32 resp; 393 318 ··· 408 307 return -1; 409 308 410 309 mutex_lock(&priv->lock); 310 + 311 + /* Wait for DCPU to become ready */ 312 + for (i = 0; i < DELAY_LOOP_MAX; i++) { 313 + resp = readl_relaxed(regs + REG_TO_HOST_MBOX); 314 + if (resp == 0) 315 + break; 316 + msleep(1); 317 + } 318 + if (resp != 0) { 319 + mutex_unlock(&priv->lock); 320 + return -ETIMEDOUT; 321 + } 411 322 412 323 /* Write command and arguments to message area */ 413 324 for (i = 0; i < MSG_FIELD_MAX; i++) ··· 434 321 resp = readl_relaxed(regs + REG_TO_HOST_MBOX); 435 322 if (resp > 0) 436 323 break; 437 - udelay(5); 324 + msleep(1); 438 325 } 439 326 440 327 if (i == DELAY_LOOP_MAX) { ··· 444 331 /* Read response data */ 445 332 for (i = 0; i < MSG_FIELD_MAX; i++) 446 333 result[i] = readl_relaxed(regs + DCPU_MSG_RAM(i)); 334 + chksum_idx = result[MSG_ARG_COUNT] + MSG_ARG_COUNT + 1; 447 335 } 448 336 449 337 /* Tell DCPU we are done */ 450 - writel_relaxed(0, regs + 
REG_TO_HOST_MBOX); 338 + __finalize_command(priv); 451 339 452 340 mutex_unlock(&priv->lock); 453 341 ··· 456 342 return ret; 457 343 458 344 /* Verify response */ 459 - chksum = get_msg_chksum(result); 460 - if (chksum != result[MSG_CHKSUM]) 345 + chksum = get_msg_chksum(result, chksum_idx); 346 + if (chksum != result[chksum_idx]) 461 347 resp = DCPU_RET_ERR_CHKSUM; 462 348 463 349 if (resp != DCPU_RET_SUCCESS) { ··· 598 484 return 0; 599 485 } 600 486 601 - ret = request_firmware(&fw, FIRMWARE_NAME, dev); 487 + /* 488 + * If the firmware filename is NULL it means the boot firmware has to 489 + * download the DCPU firmware for us. If that didn't work, we have to 490 + * bail, since downloading it ourselves wouldn't work either. 491 + */ 492 + if (!priv->dpfe_api->fw_name) 493 + return -ENODEV; 494 + 495 + ret = request_firmware(&fw, priv->dpfe_api->fw_name, dev); 602 496 /* request_firmware() prints its own error messages. */ 603 497 if (ret) 604 498 return ret; ··· 647 525 } 648 526 649 527 static ssize_t generic_show(unsigned int command, u32 response[], 650 - struct device *dev, char *buf) 528 + struct private_data *priv, char *buf) 651 529 { 652 - struct private_data *priv; 653 530 int ret; 654 531 655 - priv = dev_get_drvdata(dev); 656 532 if (!priv) 657 533 return sprintf(buf, "ERROR: driver private data not set\n"); 658 534 ··· 665 545 char *buf) 666 546 { 667 547 u32 response[MSG_FIELD_MAX]; 548 + struct private_data *priv; 668 549 unsigned int info; 669 550 ssize_t ret; 670 551 671 - ret = generic_show(DPFE_CMD_GET_INFO, response, dev, buf); 552 + priv = dev_get_drvdata(dev); 553 + ret = generic_show(DPFE_CMD_GET_INFO, response, priv, buf); 672 554 if (ret) 673 555 return ret; 674 556 ··· 693 571 u32 mr4; 694 572 ssize_t ret; 695 573 696 - ret = generic_show(DPFE_CMD_GET_REFRESH, response, dev, buf); 574 + priv = dev_get_drvdata(dev); 575 + ret = generic_show(DPFE_CMD_GET_REFRESH, response, priv, buf); 697 576 if (ret) 698 577 return ret; 699 - 700 - priv 
= dev_get_drvdata(dev); 701 578 702 579 info = get_msg_ptr(priv, response[MSG_ARG0], buf, &ret); 703 580 if (!info) 704 581 return ret; 705 582 706 - mr4 = readl_relaxed(info + DRAM_INFO_MR4) & DRAM_INFO_MR4_MASK; 583 + mr4 = (readl_relaxed(info + DRAM_INFO_MR4) >> DRAM_INFO_MR4_SHIFT) & 584 + DRAM_INFO_MR4_MASK; 707 585 708 586 refresh = (mr4 >> DRAM_MR4_REFRESH) & DRAM_MR4_REFRESH_MASK; 709 587 sr_abort = (mr4 >> DRAM_MR4_SR_ABORT) & DRAM_MR4_SR_ABORT_MASK; ··· 730 608 return -EINVAL; 731 609 732 610 priv = dev_get_drvdata(dev); 733 - 734 611 ret = __send_command(priv, DPFE_CMD_GET_REFRESH, response); 735 612 if (ret) 736 613 return ret; ··· 744 623 } 745 624 746 625 static ssize_t show_vendor(struct device *dev, struct device_attribute *devattr, 747 - char *buf) 626 + char *buf) 748 627 { 749 628 u32 response[MSG_FIELD_MAX]; 750 629 struct private_data *priv; 751 630 void __iomem *info; 752 631 ssize_t ret; 753 - 754 - ret = generic_show(DPFE_CMD_GET_VENDOR, response, dev, buf); 755 - if (ret) 756 - return ret; 632 + u32 mr5, mr6, mr7, mr8, err; 757 633 758 634 priv = dev_get_drvdata(dev); 635 + ret = generic_show(DPFE_CMD_GET_VENDOR, response, priv, buf); 636 + if (ret) 637 + return ret; 759 638 760 639 info = get_msg_ptr(priv, response[MSG_ARG0], buf, &ret); 761 640 if (!info) 762 641 return ret; 763 642 764 - return sprintf(buf, "%#x %#x %#x %#x %#x\n", 765 - readl_relaxed(info + DRAM_VENDOR_MR5) & DRAM_VENDOR_MASK, 766 - readl_relaxed(info + DRAM_VENDOR_MR6) & DRAM_VENDOR_MASK, 767 - readl_relaxed(info + DRAM_VENDOR_MR7) & DRAM_VENDOR_MASK, 768 - readl_relaxed(info + DRAM_VENDOR_MR8) & DRAM_VENDOR_MASK, 769 - readl_relaxed(info + DRAM_VENDOR_ERROR) & 770 - DRAM_VENDOR_MASK); 643 + mr5 = (readl_relaxed(info + DRAM_VENDOR_MR5) >> DRAM_VENDOR_SHIFT) & 644 + DRAM_VENDOR_MASK; 645 + mr6 = (readl_relaxed(info + DRAM_VENDOR_MR6) >> DRAM_VENDOR_SHIFT) & 646 + DRAM_VENDOR_MASK; 647 + mr7 = (readl_relaxed(info + DRAM_VENDOR_MR7) >> DRAM_VENDOR_SHIFT) & 648 + 
DRAM_VENDOR_MASK; 649 + mr8 = (readl_relaxed(info + DRAM_VENDOR_MR8) >> DRAM_VENDOR_SHIFT) & 650 + DRAM_VENDOR_MASK; 651 + err = readl_relaxed(info + DRAM_VENDOR_ERROR) & DRAM_VENDOR_MASK; 652 + 653 + return sprintf(buf, "%#x %#x %#x %#x %#x\n", mr5, mr6, mr7, mr8, err); 654 + } 655 + 656 + static ssize_t show_dram(struct device *dev, struct device_attribute *devattr, 657 + char *buf) 658 + { 659 + u32 response[MSG_FIELD_MAX]; 660 + struct private_data *priv; 661 + ssize_t ret; 662 + u32 mr4, mr5, mr6, mr7, mr8, err; 663 + 664 + priv = dev_get_drvdata(dev); 665 + ret = generic_show(DPFE_CMD_GET_REFRESH, response, priv, buf); 666 + if (ret) 667 + return ret; 668 + 669 + mr4 = response[MSG_ARG0 + 0] & DRAM_INFO_MR4_MASK; 670 + mr5 = response[MSG_ARG0 + 1] & DRAM_DDR_INFO_MASK; 671 + mr6 = response[MSG_ARG0 + 2] & DRAM_DDR_INFO_MASK; 672 + mr7 = response[MSG_ARG0 + 3] & DRAM_DDR_INFO_MASK; 673 + mr8 = response[MSG_ARG0 + 4] & DRAM_DDR_INFO_MASK; 674 + err = response[MSG_ARG0 + 5] & DRAM_DDR_INFO_MASK; 675 + 676 + return sprintf(buf, "%#x %#x %#x %#x %#x %#x\n", mr4, mr5, mr6, mr7, 677 + mr8, err); 771 678 } 772 679 773 680 static int brcmstb_dpfe_resume(struct platform_device *pdev) ··· 804 655 805 656 return brcmstb_dpfe_download_firmware(pdev, &init); 806 657 } 807 - 808 - static DEVICE_ATTR(dpfe_info, 0444, show_info, NULL); 809 - static DEVICE_ATTR(dpfe_refresh, 0644, show_refresh, store_refresh); 810 - static DEVICE_ATTR(dpfe_vendor, 0444, show_vendor, NULL); 811 - static struct attribute *dpfe_attrs[] = { 812 - &dev_attr_dpfe_info.attr, 813 - &dev_attr_dpfe_refresh.attr, 814 - &dev_attr_dpfe_vendor.attr, 815 - NULL 816 - }; 817 - ATTRIBUTE_GROUPS(dpfe); 818 658 819 659 static int brcmstb_dpfe_probe(struct platform_device *pdev) 820 660 { ··· 841 703 return -ENOENT; 842 704 } 843 705 844 - ret = brcmstb_dpfe_download_firmware(pdev, &init); 845 - if (ret) 846 - return ret; 706 + priv->dpfe_api = of_device_get_match_data(dev); 707 + if (unlikely(!priv->dpfe_api)) { 
708 + /* 709 + * It should be impossible to end up here, but to be safe we 710 + * check anyway. 711 + */ 712 + dev_err(dev, "Couldn't determine API\n"); 713 + return -ENOENT; 714 + } 847 715 848 - ret = sysfs_create_groups(&pdev->dev.kobj, dpfe_groups); 716 + ret = brcmstb_dpfe_download_firmware(pdev, &init); 717 + if (ret) { 718 + dev_err(dev, "Couldn't download firmware -- %d\n", ret); 719 + return ret; 720 + } 721 + 722 + ret = sysfs_create_groups(&pdev->dev.kobj, priv->dpfe_api->sysfs_attrs); 849 723 if (!ret) 850 - dev_info(dev, "registered.\n"); 724 + dev_info(dev, "registered with API v%d.\n", 725 + priv->dpfe_api->version); 851 726 852 727 return ret; 853 728 } 854 729 855 730 static int brcmstb_dpfe_remove(struct platform_device *pdev) 856 731 { 857 - sysfs_remove_groups(&pdev->dev.kobj, dpfe_groups); 732 + struct private_data *priv = dev_get_drvdata(&pdev->dev); 733 + 734 + sysfs_remove_groups(&pdev->dev.kobj, priv->dpfe_api->sysfs_attrs); 858 735 859 736 return 0; 860 737 } 861 738 862 739 static const struct of_device_id brcmstb_dpfe_of_match[] = { 863 - { .compatible = "brcm,dpfe-cpu", }, 740 + /* Use legacy API v2 for a select number of chips */ 741 + { .compatible = "brcm,bcm7268-dpfe-cpu", .data = &dpfe_api_v2 }, 742 + { .compatible = "brcm,bcm7271-dpfe-cpu", .data = &dpfe_api_v2 }, 743 + { .compatible = "brcm,bcm7278-dpfe-cpu", .data = &dpfe_api_v2 }, 744 + { .compatible = "brcm,bcm7211-dpfe-cpu", .data = &dpfe_api_v2 }, 745 + /* API v3 is the default going forward */ 746 + { .compatible = "brcm,dpfe-cpu", .data = &dpfe_api_v3 }, 864 747 {} 865 748 }; 866 749 MODULE_DEVICE_TABLE(of, brcmstb_dpfe_of_match);
+2 -1
drivers/memory/emif.c
··· 23 23 #include <linux/list.h> 24 24 #include <linux/spinlock.h> 25 25 #include <linux/pm.h> 26 - #include <memory/jedec_ddr.h> 26 + 27 27 #include "emif.h" 28 + #include "jedec_ddr.h" 28 29 #include "of_memory.h" 29 30 30 31 /**
+2 -1
drivers/memory/of_memory.c
··· 10 10 #include <linux/list.h> 11 11 #include <linux/of.h> 12 12 #include <linux/gfp.h> 13 - #include <memory/jedec_ddr.h> 14 13 #include <linux/export.h> 14 + 15 + #include "jedec_ddr.h" 15 16 #include "of_memory.h" 16 17 17 18 /**
+22 -22
drivers/memory/tegra/tegra124.c
··· 30 30 #define MC_EMEM_ARB_MISC1 0xdc 31 31 #define MC_EMEM_ARB_RING1_THROTTLE 0xe0 32 32 33 - static const unsigned long tegra124_mc_emem_regs[] = { 34 - MC_EMEM_ARB_CFG, 35 - MC_EMEM_ARB_OUTSTANDING_REQ, 36 - MC_EMEM_ARB_TIMING_RCD, 37 - MC_EMEM_ARB_TIMING_RP, 38 - MC_EMEM_ARB_TIMING_RC, 39 - MC_EMEM_ARB_TIMING_RAS, 40 - MC_EMEM_ARB_TIMING_FAW, 41 - MC_EMEM_ARB_TIMING_RRD, 42 - MC_EMEM_ARB_TIMING_RAP2PRE, 43 - MC_EMEM_ARB_TIMING_WAP2PRE, 44 - MC_EMEM_ARB_TIMING_R2R, 45 - MC_EMEM_ARB_TIMING_W2W, 46 - MC_EMEM_ARB_TIMING_R2W, 47 - MC_EMEM_ARB_TIMING_W2R, 48 - MC_EMEM_ARB_DA_TURNS, 49 - MC_EMEM_ARB_DA_COVERS, 50 - MC_EMEM_ARB_MISC0, 51 - MC_EMEM_ARB_MISC1, 52 - MC_EMEM_ARB_RING1_THROTTLE 53 - }; 54 - 55 33 static const struct tegra_mc_client tegra124_mc_clients[] = { 56 34 { 57 35 .id = 0x00, ··· 1024 1046 }; 1025 1047 1026 1048 #ifdef CONFIG_ARCH_TEGRA_124_SOC 1049 + static const unsigned long tegra124_mc_emem_regs[] = { 1050 + MC_EMEM_ARB_CFG, 1051 + MC_EMEM_ARB_OUTSTANDING_REQ, 1052 + MC_EMEM_ARB_TIMING_RCD, 1053 + MC_EMEM_ARB_TIMING_RP, 1054 + MC_EMEM_ARB_TIMING_RC, 1055 + MC_EMEM_ARB_TIMING_RAS, 1056 + MC_EMEM_ARB_TIMING_FAW, 1057 + MC_EMEM_ARB_TIMING_RRD, 1058 + MC_EMEM_ARB_TIMING_RAP2PRE, 1059 + MC_EMEM_ARB_TIMING_WAP2PRE, 1060 + MC_EMEM_ARB_TIMING_R2R, 1061 + MC_EMEM_ARB_TIMING_W2W, 1062 + MC_EMEM_ARB_TIMING_R2W, 1063 + MC_EMEM_ARB_TIMING_W2R, 1064 + MC_EMEM_ARB_DA_TURNS, 1065 + MC_EMEM_ARB_DA_COVERS, 1066 + MC_EMEM_ARB_MISC0, 1067 + MC_EMEM_ARB_MISC1, 1068 + MC_EMEM_ARB_RING1_THROTTLE 1069 + }; 1070 + 1027 1071 static const struct tegra_smmu_soc tegra124_smmu_soc = { 1028 1072 .clients = tegra124_mc_clients, 1029 1073 .num_clients = ARRAY_SIZE(tegra124_mc_clients),
+2 -1
drivers/reset/Kconfig
··· 118 118 119 119 config RESET_SIMPLE 120 120 bool "Simple Reset Controller Driver" if COMPILE_TEST 121 - default ARCH_STM32 || ARCH_STRATIX10 || ARCH_SUNXI || ARCH_ZX || ARCH_ASPEED 121 + default ARCH_STM32 || ARCH_STRATIX10 || ARCH_SUNXI || ARCH_ZX || ARCH_ASPEED || ARCH_BITMAIN 122 122 help 123 123 This enables a simple reset controller driver for reset lines that 124 124 that can be asserted and deasserted by toggling bits in a contiguous, ··· 130 130 - RCC reset controller in STM32 MCUs 131 131 - Allwinner SoCs 132 132 - ZTE's zx2967 family 133 + - Bitmain BM1880 SoC 133 134 134 135 config RESET_STM32MP157 135 136 bool "STM32MP157 Reset Driver" if COMPILE_TEST
-3
drivers/reset/core.c
··· 690 690 const char *dev_id = dev_name(dev); 691 691 struct reset_control *rstc = NULL; 692 692 693 - if (!dev) 694 - return ERR_PTR(-EINVAL); 695 - 696 693 mutex_lock(&reset_lookup_mutex); 697 694 698 695 list_for_each_entry(lookup, &reset_lookup_list, list) {
+2
drivers/reset/reset-simple.c
··· 125 125 .data = &reset_simple_active_low }, 126 126 { .compatible = "aspeed,ast2400-lpc-reset" }, 127 127 { .compatible = "aspeed,ast2500-lpc-reset" }, 128 + { .compatible = "bitmain,bm1880-reset", 129 + .data = &reset_simple_active_low }, 128 130 { /* sentinel */ }, 129 131 }; 130 132
+13 -1
drivers/soc/amlogic/meson-canvas.c
··· 35 35 void __iomem *reg_base; 36 36 spinlock_t lock; /* canvas device lock */ 37 37 u8 used[NUM_CANVAS]; 38 + bool supports_endianness; 38 39 }; 39 40 40 41 static void canvas_write(struct meson_canvas *canvas, u32 reg, u32 val) ··· 86 85 unsigned int endian) 87 86 { 88 87 unsigned long flags; 88 + 89 + if (endian && !canvas->supports_endianness) { 90 + dev_err(canvas->dev, 91 + "Endianness is not supported on this SoC\n"); 92 + return -EINVAL; 93 + } 89 94 90 95 spin_lock_irqsave(&canvas->lock, flags); 91 96 if (!canvas->used[canvas_index]) { ··· 179 172 if (IS_ERR(canvas->reg_base)) 180 173 return PTR_ERR(canvas->reg_base); 181 174 175 + canvas->supports_endianness = of_device_get_match_data(dev); 176 + 182 177 canvas->dev = dev; 183 178 spin_lock_init(&canvas->lock); 184 179 dev_set_drvdata(dev, canvas); ··· 189 180 } 190 181 191 182 static const struct of_device_id canvas_dt_match[] = { 192 - { .compatible = "amlogic,canvas" }, 183 + { .compatible = "amlogic,meson8-canvas", .data = (void *)false, }, 184 + { .compatible = "amlogic,meson8b-canvas", .data = (void *)false, }, 185 + { .compatible = "amlogic,meson8m2-canvas", .data = (void *)false, }, 186 + { .compatible = "amlogic,canvas", .data = (void *)true, }, 193 187 {} 194 188 }; 195 189 MODULE_DEVICE_TABLE(of, canvas_dt_match);
+39 -24
drivers/soc/aspeed/aspeed-lpc-ctrl.c
··· 64 64 unsigned long param) 65 65 { 66 66 struct aspeed_lpc_ctrl *lpc_ctrl = file_aspeed_lpc_ctrl(file); 67 + struct device *dev = file->private_data; 67 68 void __user *p = (void __user *)param; 68 69 struct aspeed_lpc_ctrl_mapping map; 69 70 u32 addr; ··· 86 85 /* Support more than one window id in the future */ 87 86 if (map.window_id != 0) 88 87 return -EINVAL; 88 + 89 + /* If memory-region is not described in device tree */ 90 + if (!lpc_ctrl->mem_size) { 91 + dev_dbg(dev, "Didn't find reserved memory\n"); 92 + return -ENXIO; 93 + } 89 94 90 95 map.size = lpc_ctrl->mem_size; 91 96 ··· 129 122 return -EINVAL; 130 123 131 124 if (map.window_type == ASPEED_LPC_CTRL_WINDOW_FLASH) { 125 + if (!lpc_ctrl->pnor_size) { 126 + dev_dbg(dev, "Didn't find host pnor flash\n"); 127 + return -ENXIO; 128 + } 132 129 addr = lpc_ctrl->pnor_base; 133 130 size = lpc_ctrl->pnor_size; 134 131 } else if (map.window_type == ASPEED_LPC_CTRL_WINDOW_MEMORY) { 132 + /* If memory-region is not described in device tree */ 133 + if (!lpc_ctrl->mem_size) { 134 + dev_dbg(dev, "Didn't find reserved memory\n"); 135 + return -ENXIO; 136 + } 135 137 addr = lpc_ctrl->mem_base; 136 138 size = lpc_ctrl->mem_size; 137 139 } else { ··· 208 192 if (!lpc_ctrl) 209 193 return -ENOMEM; 210 194 195 + /* If flash is described in device tree then store */ 211 196 node = of_parse_phandle(dev->of_node, "flash", 0); 212 197 if (!node) { 213 - dev_err(dev, "Didn't find host pnor flash node\n"); 214 - return -ENODEV; 198 + dev_dbg(dev, "Didn't find host pnor flash node\n"); 199 + } else { 200 + rc = of_address_to_resource(node, 1, &resm); 201 + of_node_put(node); 202 + if (rc) { 203 + dev_err(dev, "Couldn't address to resource for flash\n"); 204 + return rc; 205 + } 206 + 207 + lpc_ctrl->pnor_size = resource_size(&resm); 208 + lpc_ctrl->pnor_base = resm.start; 215 209 } 216 210 217 - rc = of_address_to_resource(node, 1, &resm); 218 - of_node_put(node); 219 - if (rc) { 220 - dev_err(dev, "Couldn't address to 
resource for flash\n"); 221 - return rc; 222 - } 223 - 224 - lpc_ctrl->pnor_size = resource_size(&resm); 225 - lpc_ctrl->pnor_base = resm.start; 226 211 227 212 dev_set_drvdata(&pdev->dev, lpc_ctrl); 228 213 214 + /* If memory-region is described in device tree then store */ 229 215 node = of_parse_phandle(dev->of_node, "memory-region", 0); 230 216 if (!node) { 231 - dev_err(dev, "Didn't find reserved memory\n"); 232 - return -EINVAL; 233 - } 217 + dev_dbg(dev, "Didn't find reserved memory\n"); 218 + } else { 219 + rc = of_address_to_resource(node, 0, &resm); 220 + of_node_put(node); 221 + if (rc) { 222 + dev_err(dev, "Couldn't address to resource for reserved memory\n"); 223 + return -ENXIO; 224 + } 234 225 235 - rc = of_address_to_resource(node, 0, &resm); 236 - of_node_put(node); 237 - if (rc) { 238 - dev_err(dev, "Couldn't address to resource for reserved memory\n"); 239 - return -ENOMEM; 226 + lpc_ctrl->mem_size = resource_size(&resm); 227 + lpc_ctrl->mem_base = resm.start; 240 228 } 241 - 242 - lpc_ctrl->mem_size = resource_size(&resm); 243 - lpc_ctrl->mem_base = resm.start; 244 229 245 230 lpc_ctrl->regmap = syscon_node_to_regmap( 246 231 pdev->dev.parent->of_node); ··· 270 253 dev_err(dev, "Unable to register device\n"); 271 254 goto err; 272 255 } 273 - 274 - dev_info(dev, "Loaded at %pr\n", &resm); 275 256 276 257 return 0; 277 258
+10
drivers/soc/fsl/Kconfig
··· 30 30 other DPAA2 objects. This driver does not expose the DPIO 31 31 objects individually, but groups them under a service layer 32 32 API. 33 + 34 + config DPAA2_CONSOLE 35 + tristate "QorIQ DPAA2 console driver" 36 + depends on OF && (ARCH_LAYERSCAPE || COMPILE_TEST) 37 + default y 38 + help 39 + Console driver for DPAA2 platforms. Exports 2 char devices, 40 + /dev/dpaa2_mc_console and /dev/dpaa2_aiop_console, 41 + which can be used to dump the Management Complex and AIOP 42 + firmware logs. 33 43 endmenu
+1
drivers/soc/fsl/Makefile
··· 8 8 obj-$(CONFIG_CPM) += qe/ 9 9 obj-$(CONFIG_FSL_GUTS) += guts.o 10 10 obj-$(CONFIG_FSL_MC_DPIO) += dpio/ 11 + obj-$(CONFIG_DPAA2_CONSOLE) += dpaa2-console.o
+329
drivers/soc/fsl/dpaa2-console.c
··· 1 + // SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause) 2 + /* 3 + * Freescale DPAA2 Platforms Console Driver 4 + * 5 + * Copyright 2015-2016 Freescale Semiconductor Inc. 6 + * Copyright 2018 NXP 7 + */ 8 + 9 + #define pr_fmt(fmt) "dpaa2-console: " fmt 10 + 11 + #include <linux/module.h> 12 + #include <linux/of_device.h> 13 + #include <linux/of_address.h> 14 + #include <linux/miscdevice.h> 15 + #include <linux/uaccess.h> 16 + #include <linux/slab.h> 17 + #include <linux/fs.h> 18 + #include <linux/io.h> 19 + 20 + /* MC firmware base low/high registers indexes */ 21 + #define MCFBALR_OFFSET 0 22 + #define MCFBAHR_OFFSET 1 23 + 24 + /* Bit masks used to get the most/least significant part of the MC base addr */ 25 + #define MC_FW_ADDR_MASK_HIGH 0x1FFFF 26 + #define MC_FW_ADDR_MASK_LOW 0xE0000000 27 + 28 + #define MC_BUFFER_OFFSET 0x01000000 29 + #define MC_BUFFER_SIZE (1024 * 1024 * 16) 30 + #define MC_OFFSET_DELTA MC_BUFFER_OFFSET 31 + 32 + #define AIOP_BUFFER_OFFSET 0x06000000 33 + #define AIOP_BUFFER_SIZE (1024 * 1024 * 16) 34 + #define AIOP_OFFSET_DELTA 0 35 + 36 + #define LOG_HEADER_FLAG_BUFFER_WRAPAROUND 0x80000000 37 + #define LAST_BYTE(a) ((a) & ~(LOG_HEADER_FLAG_BUFFER_WRAPAROUND)) 38 + 39 + /* MC and AIOP Magic words */ 40 + #define MAGIC_MC 0x4d430100 41 + #define MAGIC_AIOP 0x41494F50 42 + 43 + struct log_header { 44 + __le32 magic_word; 45 + char reserved[4]; 46 + __le32 buf_start; 47 + __le32 buf_length; 48 + __le32 last_byte; 49 + }; 50 + 51 + struct console_data { 52 + void __iomem *map_addr; 53 + struct log_header __iomem *hdr; 54 + void __iomem *start_addr; 55 + void __iomem *end_addr; 56 + void __iomem *end_of_data; 57 + void __iomem *cur_ptr; 58 + }; 59 + 60 + static struct resource mc_base_addr; 61 + 62 + static inline void adjust_end(struct console_data *cd) 63 + { 64 + u32 last_byte = readl(&cd->hdr->last_byte); 65 + 66 + cd->end_of_data = cd->start_addr + LAST_BYTE(last_byte); 67 + } 68 + 69 + static u64 get_mc_fw_base_address(void) 70 + 
{ 71 + u64 mcfwbase = 0ULL; 72 + u32 __iomem *mcfbaregs; 73 + 74 + mcfbaregs = ioremap(mc_base_addr.start, resource_size(&mc_base_addr)); 75 + if (!mcfbaregs) { 76 + pr_err("could not map MC Firmaware Base registers\n"); 77 + return 0; 78 + } 79 + 80 + mcfwbase = readl(mcfbaregs + MCFBAHR_OFFSET) & 81 + MC_FW_ADDR_MASK_HIGH; 82 + mcfwbase <<= 32; 83 + mcfwbase |= readl(mcfbaregs + MCFBALR_OFFSET) & MC_FW_ADDR_MASK_LOW; 84 + iounmap(mcfbaregs); 85 + 86 + pr_debug("MC base address at 0x%016llx\n", mcfwbase); 87 + return mcfwbase; 88 + } 89 + 90 + static ssize_t dpaa2_console_size(struct console_data *cd) 91 + { 92 + ssize_t size; 93 + 94 + if (cd->cur_ptr <= cd->end_of_data) 95 + size = cd->end_of_data - cd->cur_ptr; 96 + else 97 + size = (cd->end_addr - cd->cur_ptr) + 98 + (cd->end_of_data - cd->start_addr); 99 + 100 + return size; 101 + } 102 + 103 + static int dpaa2_generic_console_open(struct inode *node, struct file *fp, 104 + u64 offset, u64 size, 105 + u32 expected_magic, 106 + u32 offset_delta) 107 + { 108 + u32 read_magic, wrapped, last_byte, buf_start, buf_length; 109 + struct console_data *cd; 110 + u64 base_addr; 111 + int err; 112 + 113 + cd = kmalloc(sizeof(*cd), GFP_KERNEL); 114 + if (!cd) 115 + return -ENOMEM; 116 + 117 + base_addr = get_mc_fw_base_address(); 118 + if (!base_addr) { 119 + err = -EIO; 120 + goto err_fwba; 121 + } 122 + 123 + cd->map_addr = ioremap(base_addr + offset, size); 124 + if (!cd->map_addr) { 125 + pr_err("cannot map console log memory\n"); 126 + err = -EIO; 127 + goto err_ioremap; 128 + } 129 + 130 + cd->hdr = (struct log_header __iomem *)cd->map_addr; 131 + read_magic = readl(&cd->hdr->magic_word); 132 + last_byte = readl(&cd->hdr->last_byte); 133 + buf_start = readl(&cd->hdr->buf_start); 134 + buf_length = readl(&cd->hdr->buf_length); 135 + 136 + if (read_magic != expected_magic) { 137 + pr_warn("expected = %08x, read = %08x\n", 138 + expected_magic, read_magic); 139 + err = -EIO; 140 + goto err_magic; 141 + } 142 + 143 + 
cd->start_addr = cd->map_addr + buf_start - offset_delta; 144 + cd->end_addr = cd->start_addr + buf_length; 145 + 146 + wrapped = last_byte & LOG_HEADER_FLAG_BUFFER_WRAPAROUND; 147 + 148 + adjust_end(cd); 149 + if (wrapped && cd->end_of_data != cd->end_addr) 150 + cd->cur_ptr = cd->end_of_data + 1; 151 + else 152 + cd->cur_ptr = cd->start_addr; 153 + 154 + fp->private_data = cd; 155 + 156 + return 0; 157 + 158 + err_magic: 159 + iounmap(cd->map_addr); 160 + 161 + err_ioremap: 162 + err_fwba: 163 + kfree(cd); 164 + 165 + return err; 166 + } 167 + 168 + static int dpaa2_mc_console_open(struct inode *node, struct file *fp) 169 + { 170 + return dpaa2_generic_console_open(node, fp, 171 + MC_BUFFER_OFFSET, MC_BUFFER_SIZE, 172 + MAGIC_MC, MC_OFFSET_DELTA); 173 + } 174 + 175 + static int dpaa2_aiop_console_open(struct inode *node, struct file *fp) 176 + { 177 + return dpaa2_generic_console_open(node, fp, 178 + AIOP_BUFFER_OFFSET, AIOP_BUFFER_SIZE, 179 + MAGIC_AIOP, AIOP_OFFSET_DELTA); 180 + } 181 + 182 + static int dpaa2_console_close(struct inode *node, struct file *fp) 183 + { 184 + struct console_data *cd = fp->private_data; 185 + 186 + iounmap(cd->map_addr); 187 + kfree(cd); 188 + return 0; 189 + } 190 + 191 + static ssize_t dpaa2_console_read(struct file *fp, char __user *buf, 192 + size_t count, loff_t *f_pos) 193 + { 194 + struct console_data *cd = fp->private_data; 195 + size_t bytes = dpaa2_console_size(cd); 196 + size_t bytes_end = cd->end_addr - cd->cur_ptr; 197 + size_t written = 0; 198 + void *kbuf; 199 + int err; 200 + 201 + /* Check if we need to adjust the end of data addr */ 202 + adjust_end(cd); 203 + 204 + if (cd->end_of_data == cd->cur_ptr) 205 + return 0; 206 + 207 + if (count < bytes) 208 + bytes = count; 209 + 210 + kbuf = kmalloc(bytes, GFP_KERNEL); 211 + if (!kbuf) 212 + return -ENOMEM; 213 + 214 + if (bytes > bytes_end) { 215 + memcpy_fromio(kbuf, cd->cur_ptr, bytes_end); 216 + if (copy_to_user(buf, kbuf, bytes_end)) { 217 + err = -EFAULT; 218 + 
goto err_free_buf; 219 + } 220 + buf += bytes_end; 221 + cd->cur_ptr = cd->start_addr; 222 + bytes -= bytes_end; 223 + written += bytes_end; 224 + } 225 + 226 + memcpy_fromio(kbuf, cd->cur_ptr, bytes); 227 + if (copy_to_user(buf, kbuf, bytes)) { 228 + err = -EFAULT; 229 + goto err_free_buf; 230 + } 231 + cd->cur_ptr += bytes; 232 + written += bytes; 233 + 234 + return written; 235 + 236 + err_free_buf: 237 + kfree(kbuf); 238 + 239 + return err; 240 + } 241 + 242 + static const struct file_operations dpaa2_mc_console_fops = { 243 + .owner = THIS_MODULE, 244 + .open = dpaa2_mc_console_open, 245 + .release = dpaa2_console_close, 246 + .read = dpaa2_console_read, 247 + }; 248 + 249 + static struct miscdevice dpaa2_mc_console_dev = { 250 + .minor = MISC_DYNAMIC_MINOR, 251 + .name = "dpaa2_mc_console", 252 + .fops = &dpaa2_mc_console_fops 253 + }; 254 + 255 + static const struct file_operations dpaa2_aiop_console_fops = { 256 + .owner = THIS_MODULE, 257 + .open = dpaa2_aiop_console_open, 258 + .release = dpaa2_console_close, 259 + .read = dpaa2_console_read, 260 + }; 261 + 262 + static struct miscdevice dpaa2_aiop_console_dev = { 263 + .minor = MISC_DYNAMIC_MINOR, 264 + .name = "dpaa2_aiop_console", 265 + .fops = &dpaa2_aiop_console_fops 266 + }; 267 + 268 + static int dpaa2_console_probe(struct platform_device *pdev) 269 + { 270 + int error; 271 + 272 + error = of_address_to_resource(pdev->dev.of_node, 0, &mc_base_addr); 273 + if (error < 0) { 274 + pr_err("of_address_to_resource() failed for %pOF with %d\n", 275 + pdev->dev.of_node, error); 276 + return error; 277 + } 278 + 279 + error = misc_register(&dpaa2_mc_console_dev); 280 + if (error) { 281 + pr_err("cannot register device %s\n", 282 + dpaa2_mc_console_dev.name); 283 + goto err_register_mc; 284 + } 285 + 286 + error = misc_register(&dpaa2_aiop_console_dev); 287 + if (error) { 288 + pr_err("cannot register device %s\n", 289 + dpaa2_aiop_console_dev.name); 290 + goto err_register_aiop; 291 + } 292 + 293 + return 
0; 294 + 295 + err_register_aiop: 296 + misc_deregister(&dpaa2_mc_console_dev); 297 + err_register_mc: 298 + return error; 299 + } 300 + 301 + static int dpaa2_console_remove(struct platform_device *pdev) 302 + { 303 + misc_deregister(&dpaa2_mc_console_dev); 304 + misc_deregister(&dpaa2_aiop_console_dev); 305 + 306 + return 0; 307 + } 308 + 309 + static const struct of_device_id dpaa2_console_match_table[] = { 310 + { .compatible = "fsl,dpaa2-console",}, 311 + {}, 312 + }; 313 + 314 + MODULE_DEVICE_TABLE(of, dpaa2_console_match_table); 315 + 316 + static struct platform_driver dpaa2_console_driver = { 317 + .driver = { 318 + .name = "dpaa2-console", 319 + .pm = NULL, 320 + .of_match_table = dpaa2_console_match_table, 321 + }, 322 + .probe = dpaa2_console_probe, 323 + .remove = dpaa2_console_remove, 324 + }; 325 + module_platform_driver(dpaa2_console_driver); 326 + 327 + MODULE_LICENSE("Dual BSD/GPL"); 328 + MODULE_AUTHOR("Roy Pledge <roy.pledge@nxp.com>"); 329 + MODULE_DESCRIPTION("DPAA2 console driver");
+16 -7
drivers/soc/fsl/dpio/dpio-driver.c
··· 197 197 desc.cpu); 198 198 } 199 199 200 - /* 201 - * Set the CENA regs to be the cache inhibited area of the portal to 202 - * avoid coherency issues if a user migrates to another core. 203 - */ 204 - desc.regs_cena = devm_memremap(dev, dpio_dev->regions[1].start, 205 - resource_size(&dpio_dev->regions[1]), 206 - MEMREMAP_WC); 200 + if (dpio_dev->obj_desc.region_count < 3) { 201 + /* No support for DDR backed portals, use classic mapping */ 202 + /* 203 + * Set the CENA regs to be the cache inhibited area of the 204 + * portal to avoid coherency issues if a user migrates to 205 + * another core. 206 + */ 207 + desc.regs_cena = devm_memremap(dev, dpio_dev->regions[1].start, 208 + resource_size(&dpio_dev->regions[1]), 209 + MEMREMAP_WC); 210 + } else { 211 + desc.regs_cena = devm_memremap(dev, dpio_dev->regions[2].start, 212 + resource_size(&dpio_dev->regions[2]), 213 + MEMREMAP_WB); 214 + } 215 + 207 216 if (IS_ERR(desc.regs_cena)) { 208 217 dev_err(dev, "devm_memremap failed\n"); 209 218 err = PTR_ERR(desc.regs_cena);
+123 -25
drivers/soc/fsl/dpio/qbman-portal.c
··· 15 15 #define QMAN_REV_4000 0x04000000 16 16 #define QMAN_REV_4100 0x04010000 17 17 #define QMAN_REV_4101 0x04010001 18 + #define QMAN_REV_5000 0x05000000 19 + 18 20 #define QMAN_REV_MASK 0xffff0000 19 21 20 22 /* All QBMan command and result structures use this "valid bit" encoding */ ··· 27 25 #define QBMAN_WQCHAN_CONFIGURE 0x46 28 26 29 27 /* CINH register offsets */ 28 + #define QBMAN_CINH_SWP_EQCR_PI 0x800 30 29 #define QBMAN_CINH_SWP_EQAR 0x8c0 30 + #define QBMAN_CINH_SWP_CR_RT 0x900 31 + #define QBMAN_CINH_SWP_VDQCR_RT 0x940 32 + #define QBMAN_CINH_SWP_EQCR_AM_RT 0x980 33 + #define QBMAN_CINH_SWP_RCR_AM_RT 0x9c0 31 34 #define QBMAN_CINH_SWP_DQPI 0xa00 32 35 #define QBMAN_CINH_SWP_DCAP 0xac0 33 36 #define QBMAN_CINH_SWP_SDQCR 0xb00 37 + #define QBMAN_CINH_SWP_EQCR_AM_RT2 0xb40 38 + #define QBMAN_CINH_SWP_RCR_PI 0xc00 34 39 #define QBMAN_CINH_SWP_RAR 0xcc0 35 40 #define QBMAN_CINH_SWP_ISR 0xe00 36 41 #define QBMAN_CINH_SWP_IER 0xe40 ··· 51 42 #define QBMAN_CENA_SWP_CR 0x600 52 43 #define QBMAN_CENA_SWP_RR(vb) (0x700 + ((u32)(vb) >> 1)) 53 44 #define QBMAN_CENA_SWP_VDQCR 0x780 45 + 46 + /* CENA register offsets in memory-backed mode */ 47 + #define QBMAN_CENA_SWP_DQRR_MEM(n) (0x800 + ((u32)(n) << 6)) 48 + #define QBMAN_CENA_SWP_RCR_MEM(n) (0x1400 + ((u32)(n) << 6)) 49 + #define QBMAN_CENA_SWP_CR_MEM 0x1600 50 + #define QBMAN_CENA_SWP_RR_MEM 0x1680 51 + #define QBMAN_CENA_SWP_VDQCR_MEM 0x1780 54 52 55 53 /* Reverse mapping of QBMAN_CENA_SWP_DQRR() */ 56 54 #define QBMAN_IDX_FROM_DQRR(p) (((unsigned long)(p) & 0x1ff) >> 6) ··· 112 96 113 97 #define SWP_CFG_DQRR_MF_SHIFT 20 114 98 #define SWP_CFG_EST_SHIFT 16 99 + #define SWP_CFG_CPBS_SHIFT 15 115 100 #define SWP_CFG_WN_SHIFT 14 116 101 #define SWP_CFG_RPM_SHIFT 12 117 102 #define SWP_CFG_DCM_SHIFT 10 118 103 #define SWP_CFG_EPM_SHIFT 8 104 + #define SWP_CFG_VPM_SHIFT 7 105 + #define SWP_CFG_CPM_SHIFT 6 119 106 #define SWP_CFG_SD_SHIFT 5 120 107 #define SWP_CFG_SP_SHIFT 4 121 108 #define SWP_CFG_SE_SHIFT 3 ··· 
144 125 ep << SWP_CFG_EP_SHIFT); 145 126 } 146 127 128 + #define QMAN_RT_MODE 0x00000100 129 + 147 130 /** 148 131 * qbman_swp_init() - Create a functional object representing the given 149 132 * QBMan portal descriptor. ··· 167 146 p->sdq |= qbman_sdqcr_dct_prio_ics << QB_SDQCR_DCT_SHIFT; 168 147 p->sdq |= qbman_sdqcr_fc_up_to_3 << QB_SDQCR_FC_SHIFT; 169 148 p->sdq |= QMAN_SDQCR_TOKEN << QB_SDQCR_TOK_SHIFT; 149 + if ((p->desc->qman_version & QMAN_REV_MASK) >= QMAN_REV_5000) 150 + p->mr.valid_bit = QB_VALID_BIT; 170 151 171 152 atomic_set(&p->vdq.available, 1); 172 153 p->vdq.valid_bit = QB_VALID_BIT; ··· 186 163 p->addr_cena = d->cena_bar; 187 164 p->addr_cinh = d->cinh_bar; 188 165 166 + if ((p->desc->qman_version & QMAN_REV_MASK) >= QMAN_REV_5000) 167 + memset(p->addr_cena, 0, 64 * 1024); 168 + 189 169 reg = qbman_set_swp_cfg(p->dqrr.dqrr_size, 190 170 1, /* Writes Non-cacheable */ 191 171 0, /* EQCR_CI stashing threshold */ ··· 201 175 1, /* dequeue stashing priority == TRUE */ 202 176 0, /* dequeue stashing enable == FALSE */ 203 177 0); /* EQCR_CI stashing priority == FALSE */ 178 + if ((p->desc->qman_version & QMAN_REV_MASK) >= QMAN_REV_5000) 179 + reg |= 1 << SWP_CFG_CPBS_SHIFT | /* memory-backed mode */ 180 + 1 << SWP_CFG_VPM_SHIFT | /* VDQCR read triggered mode */ 181 + 1 << SWP_CFG_CPM_SHIFT; /* CR read triggered mode */ 204 182 205 183 qbman_write_register(p, QBMAN_CINH_SWP_CFG, reg); 206 184 reg = qbman_read_register(p, QBMAN_CINH_SWP_CFG); ··· 214 184 return NULL; 215 185 } 216 186 187 + if ((p->desc->qman_version & QMAN_REV_MASK) >= QMAN_REV_5000) { 188 + qbman_write_register(p, QBMAN_CINH_SWP_EQCR_PI, QMAN_RT_MODE); 189 + qbman_write_register(p, QBMAN_CINH_SWP_RCR_PI, QMAN_RT_MODE); 190 + } 217 191 /* 218 192 * SDQCR needs to be initialized to 0 when no channels are 219 193 * being dequeued from or else the QMan HW will indicate an ··· 312 278 */ 313 279 void *qbman_swp_mc_start(struct qbman_swp *p) 314 280 { 315 - return qbman_get_cmd(p, 
QBMAN_CENA_SWP_CR); 281 + if ((p->desc->qman_version & QMAN_REV_MASK) < QMAN_REV_5000) 282 + return qbman_get_cmd(p, QBMAN_CENA_SWP_CR); 283 + else 284 + return qbman_get_cmd(p, QBMAN_CENA_SWP_CR_MEM); 316 285 } 317 286 318 287 /* ··· 326 289 { 327 290 u8 *v = cmd; 328 291 329 - dma_wmb(); 330 - *v = cmd_verb | p->mc.valid_bit; 292 + if ((p->desc->qman_version & QMAN_REV_MASK) < QMAN_REV_5000) { 293 + dma_wmb(); 294 + *v = cmd_verb | p->mc.valid_bit; 295 + } else { 296 + *v = cmd_verb | p->mc.valid_bit; 297 + dma_wmb(); 298 + qbman_write_register(p, QBMAN_CINH_SWP_CR_RT, QMAN_RT_MODE); 299 + } 331 300 } 332 301 333 302 /* ··· 344 301 { 345 302 u32 *ret, verb; 346 303 347 - ret = qbman_get_cmd(p, QBMAN_CENA_SWP_RR(p->mc.valid_bit)); 304 + if ((p->desc->qman_version & QMAN_REV_MASK) < QMAN_REV_5000) { 305 + ret = qbman_get_cmd(p, QBMAN_CENA_SWP_RR(p->mc.valid_bit)); 306 + /* Remove the valid-bit - command completed if the rest 307 + * is non-zero. 308 + */ 309 + verb = ret[0] & ~QB_VALID_BIT; 310 + if (!verb) 311 + return NULL; 312 + p->mc.valid_bit ^= QB_VALID_BIT; 313 + } else { 314 + ret = qbman_get_cmd(p, QBMAN_CENA_SWP_RR_MEM); 315 + /* Command completed if the valid bit is toggled */ 316 + if (p->mr.valid_bit != (ret[0] & QB_VALID_BIT)) 317 + return NULL; 318 + /* Command completed if the rest is non-zero */ 319 + verb = ret[0] & ~QB_VALID_BIT; 320 + if (!verb) 321 + return NULL; 322 + p->mr.valid_bit ^= QB_VALID_BIT; 323 + } 348 324 349 - /* Remove the valid-bit - command completed if the rest is non-zero */ 350 - verb = ret[0] & ~QB_VALID_BIT; 351 - if (!verb) 352 - return NULL; 353 - p->mc.valid_bit ^= QB_VALID_BIT; 354 325 return ret; 355 326 } 356 327 ··· 441 384 #define EQAR_VB(eqar) ((eqar) & 0x80) 442 385 #define EQAR_SUCCESS(eqar) ((eqar) & 0x100) 443 386 387 + static inline void qbman_write_eqcr_am_rt_register(struct qbman_swp *p, 388 + u8 idx) 389 + { 390 + if (idx < 16) 391 + qbman_write_register(p, QBMAN_CINH_SWP_EQCR_AM_RT + idx * 4, 392 + 
QMAN_RT_MODE); 393 + else 394 + qbman_write_register(p, QBMAN_CINH_SWP_EQCR_AM_RT2 + 395 + (idx - 16) * 4, 396 + QMAN_RT_MODE); 397 + } 398 + 444 399 /** 445 400 * qbman_swp_enqueue() - Issue an enqueue command 446 401 * @s: the software portal used for enqueue ··· 477 408 memcpy(&p->dca, &d->dca, 31); 478 409 memcpy(&p->fd, fd, sizeof(*fd)); 479 410 480 - /* Set the verb byte, have to substitute in the valid-bit */ 481 - dma_wmb(); 482 - p->verb = d->verb | EQAR_VB(eqar); 411 + if ((s->desc->qman_version & QMAN_REV_MASK) < QMAN_REV_5000) { 412 + /* Set the verb byte, have to substitute in the valid-bit */ 413 + dma_wmb(); 414 + p->verb = d->verb | EQAR_VB(eqar); 415 + } else { 416 + p->verb = d->verb | EQAR_VB(eqar); 417 + dma_wmb(); 418 + qbman_write_eqcr_am_rt_register(s, EQAR_IDX(eqar)); 419 + } 483 420 484 421 return 0; 485 422 } ··· 662 587 return -EBUSY; 663 588 } 664 589 s->vdq.storage = (void *)(uintptr_t)d->rsp_addr_virt; 665 - p = qbman_get_cmd(s, QBMAN_CENA_SWP_VDQCR); 590 + if ((s->desc->qman_version & QMAN_REV_MASK) < QMAN_REV_5000) 591 + p = qbman_get_cmd(s, QBMAN_CENA_SWP_VDQCR); 592 + else 593 + p = qbman_get_cmd(s, QBMAN_CENA_SWP_VDQCR_MEM); 666 594 p->numf = d->numf; 667 595 p->tok = QMAN_DQ_TOKEN_VALID; 668 596 p->dq_src = d->dq_src; 669 597 p->rsp_addr = d->rsp_addr; 670 598 p->rsp_addr_virt = d->rsp_addr_virt; 671 - dma_wmb(); 672 599 673 - /* Set the verb byte, have to substitute in the valid-bit */ 674 - p->verb = d->verb | s->vdq.valid_bit; 675 - s->vdq.valid_bit ^= QB_VALID_BIT; 600 + if ((s->desc->qman_version & QMAN_REV_MASK) < QMAN_REV_5000) { 601 + dma_wmb(); 602 + /* Set the verb byte, have to substitute in the valid-bit */ 603 + p->verb = d->verb | s->vdq.valid_bit; 604 + s->vdq.valid_bit ^= QB_VALID_BIT; 605 + } else { 606 + p->verb = d->verb | s->vdq.valid_bit; 607 + s->vdq.valid_bit ^= QB_VALID_BIT; 608 + dma_wmb(); 609 + qbman_write_register(s, QBMAN_CINH_SWP_VDQCR_RT, QMAN_RT_MODE); 610 + } 676 611 677 612 return 0; 678 613 } 
··· 740 655 QBMAN_CENA_SWP_DQRR(s->dqrr.next_idx))); 741 656 } 742 657 743 - p = qbman_get_cmd(s, QBMAN_CENA_SWP_DQRR(s->dqrr.next_idx)); 658 + if ((s->desc->qman_version & QMAN_REV_MASK) < QMAN_REV_5000) 659 + p = qbman_get_cmd(s, QBMAN_CENA_SWP_DQRR(s->dqrr.next_idx)); 660 + else 661 + p = qbman_get_cmd(s, QBMAN_CENA_SWP_DQRR_MEM(s->dqrr.next_idx)); 744 662 verb = p->dq.verb; 745 663 746 664 /* ··· 895 807 return -EBUSY; 896 808 897 809 /* Start the release command */ 898 - p = qbman_get_cmd(s, QBMAN_CENA_SWP_RCR(RAR_IDX(rar))); 810 + if ((s->desc->qman_version & QMAN_REV_MASK) < QMAN_REV_5000) 811 + p = qbman_get_cmd(s, QBMAN_CENA_SWP_RCR(RAR_IDX(rar))); 812 + else 813 + p = qbman_get_cmd(s, QBMAN_CENA_SWP_RCR_MEM(RAR_IDX(rar))); 899 814 /* Copy the caller's buffer pointers to the command */ 900 815 for (i = 0; i < num_buffers; i++) 901 816 p->buf[i] = cpu_to_le64(buffers[i]); 902 817 p->bpid = d->bpid; 903 818 904 - /* 905 - * Set the verb byte, have to substitute in the valid-bit and the number 906 - * of buffers. 907 - */ 908 - dma_wmb(); 909 - p->verb = d->verb | RAR_VB(rar) | num_buffers; 819 + if ((s->desc->qman_version & QMAN_REV_MASK) < QMAN_REV_5000) { 820 + /* 821 + * Set the verb byte, have to substitute in the valid-bit 822 + * and the number of buffers. 823 + */ 824 + dma_wmb(); 825 + p->verb = d->verb | RAR_VB(rar) | num_buffers; 826 + } else { 827 + p->verb = d->verb | RAR_VB(rar) | num_buffers; 828 + dma_wmb(); 829 + qbman_write_register(s, QBMAN_CINH_SWP_RCR_AM_RT + 830 + RAR_IDX(rar) * 4, QMAN_RT_MODE); 831 + } 910 832 911 833 return 0; 912 834 }
+7 -2
drivers/soc/fsl/dpio/qbman-portal.h
··· 1 1 /* SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause) */ 2 2 /* 3 3 * Copyright (C) 2014-2016 Freescale Semiconductor, Inc. 4 - * Copyright 2016 NXP 4 + * Copyright 2016-2019 NXP 5 5 * 6 6 */ 7 7 #ifndef __FSL_QBMAN_PORTAL_H ··· 109 109 struct { 110 110 u32 valid_bit; /* 0x00 or 0x80 */ 111 111 } mc; 112 + 113 + /* Management response */ 114 + struct { 115 + u32 valid_bit; /* 0x00 or 0x80 */ 116 + } mr; 112 117 113 118 /* Push dequeues */ 114 119 u32 sdq; ··· 433 428 static inline void *qbman_swp_mc_complete(struct qbman_swp *swp, void *cmd, 434 429 u8 cmd_verb) 435 430 { 436 - int loopvar = 1000; 431 + int loopvar = 2000; 437 432 438 433 qbman_swp_mc_submit(swp, cmd, cmd_verb); 439 434
+6
drivers/soc/fsl/guts.c
··· 97 97 .svr = 0x87000000, 98 98 .mask = 0xfff70000, 99 99 }, 100 + /* Die: LX2160A, SoC: LX2160A/LX2120A/LX2080A */ 101 + { .die = "LX2160A", 102 + .svr = 0x87360000, 103 + .mask = 0xff3f0000, 104 + }, 100 105 { }, 101 106 }; 102 107 ··· 223 218 { .compatible = "fsl,ls1088a-dcfg", }, 224 219 { .compatible = "fsl,ls1012a-dcfg", }, 225 220 { .compatible = "fsl,ls1046a-dcfg", }, 221 + { .compatible = "fsl,lx2160a-dcfg", }, 226 222 {} 227 223 }; 228 224 MODULE_DEVICE_TABLE(of, fsl_guts_of_match);
+16 -4
drivers/soc/fsl/qbman/bman_portal.c
··· 32 32 33 33 static struct bman_portal *affine_bportals[NR_CPUS]; 34 34 static struct cpumask portal_cpus; 35 + static int __bman_portals_probed; 35 36 /* protect bman global registers and global data shared among portals */ 36 37 static DEFINE_SPINLOCK(bman_lock); 37 38 ··· 88 87 return 0; 89 88 } 90 89 90 + int bman_portals_probed(void) 91 + { 92 + return __bman_portals_probed; 93 + } 94 + EXPORT_SYMBOL_GPL(bman_portals_probed); 95 + 91 96 static int bman_portal_probe(struct platform_device *pdev) 92 97 { 93 98 struct device *dev = &pdev->dev; ··· 111 104 } 112 105 113 106 pcfg = devm_kmalloc(dev, sizeof(*pcfg), GFP_KERNEL); 114 - if (!pcfg) 107 + if (!pcfg) { 108 + __bman_portals_probed = -1; 115 109 return -ENOMEM; 110 + } 116 111 117 112 pcfg->dev = dev; 118 113 ··· 122 113 DPAA_PORTAL_CE); 123 114 if (!addr_phys[0]) { 124 115 dev_err(dev, "Can't get %pOF property 'reg::CE'\n", node); 125 - return -ENXIO; 116 + goto err_ioremap1; 126 117 } 127 118 128 119 addr_phys[1] = platform_get_resource(pdev, IORESOURCE_MEM, 129 120 DPAA_PORTAL_CI); 130 121 if (!addr_phys[1]) { 131 122 dev_err(dev, "Can't get %pOF property 'reg::CI'\n", node); 132 - return -ENXIO; 123 + goto err_ioremap1; 133 124 } 134 125 135 126 pcfg->cpu = -1; ··· 137 128 irq = platform_get_irq(pdev, 0); 138 129 if (irq <= 0) { 139 130 dev_err(dev, "Can't get %pOF IRQ'\n", node); 140 - return -ENXIO; 131 + goto err_ioremap1; 141 132 } 142 133 pcfg->irq = irq; 143 134 ··· 159 150 spin_lock(&bman_lock); 160 151 cpu = cpumask_next_zero(-1, &portal_cpus); 161 152 if (cpu >= nr_cpu_ids) { 153 + __bman_portals_probed = 1; 162 154 /* unassigned portal, skip init */ 163 155 spin_unlock(&bman_lock); 164 156 return 0; ··· 185 175 err_ioremap2: 186 176 memunmap(pcfg->addr_virt_ce); 187 177 err_ioremap1: 178 + __bman_portals_probed = -1; 179 + 188 180 return -ENXIO; 189 181 } 190 182
+1 -1
drivers/soc/fsl/qbman/qman_ccsr.c
··· 596 596 } 597 597 598 598 #define LIO_CFG_LIODN_MASK 0x0fff0000 599 - void qman_liodn_fixup(u16 channel) 599 + void __qman_liodn_fixup(u16 channel) 600 600 { 601 601 static int done; 602 602 static u32 liodn_offset;
+17 -4
drivers/soc/fsl/qbman/qman_portal.c
··· 38 38 #define CONFIG_FSL_DPA_PIRQ_FAST 1 39 39 40 40 static struct cpumask portal_cpus; 41 + static int __qman_portals_probed; 41 42 /* protect qman global registers and global data shared among portals */ 42 43 static DEFINE_SPINLOCK(qman_lock); 43 44 ··· 221 220 return 0; 222 221 } 223 222 223 + int qman_portals_probed(void) 224 + { 225 + return __qman_portals_probed; 226 + } 227 + EXPORT_SYMBOL_GPL(qman_portals_probed); 228 + 224 229 static int qman_portal_probe(struct platform_device *pdev) 225 230 { 226 231 struct device *dev = &pdev->dev; ··· 245 238 } 246 239 247 240 pcfg = devm_kmalloc(dev, sizeof(*pcfg), GFP_KERNEL); 248 - if (!pcfg) 241 + if (!pcfg) { 242 + __qman_portals_probed = -1; 249 243 return -ENOMEM; 244 + } 250 245 251 246 pcfg->dev = dev; 252 247 ··· 256 247 DPAA_PORTAL_CE); 257 248 if (!addr_phys[0]) { 258 249 dev_err(dev, "Can't get %pOF property 'reg::CE'\n", node); 259 - return -ENXIO; 250 + goto err_ioremap1; 260 251 } 261 252 262 253 addr_phys[1] = platform_get_resource(pdev, IORESOURCE_MEM, 263 254 DPAA_PORTAL_CI); 264 255 if (!addr_phys[1]) { 265 256 dev_err(dev, "Can't get %pOF property 'reg::CI'\n", node); 266 - return -ENXIO; 257 + goto err_ioremap1; 267 258 } 268 259 269 260 err = of_property_read_u32(node, "cell-index", &val); 270 261 if (err) { 271 262 dev_err(dev, "Can't get %pOF property 'cell-index'\n", node); 263 + __qman_portals_probed = -1; 272 264 return err; 273 265 } 274 266 pcfg->channel = val; ··· 277 267 irq = platform_get_irq(pdev, 0); 278 268 if (irq <= 0) { 279 269 dev_err(dev, "Can't get %pOF IRQ\n", node); 280 - return -ENXIO; 270 + goto err_ioremap1; 281 271 } 282 272 pcfg->irq = irq; 283 273 ··· 301 291 spin_lock(&qman_lock); 302 292 cpu = cpumask_next_zero(-1, &portal_cpus); 303 293 if (cpu >= nr_cpu_ids) { 294 + __qman_portals_probed = 1; 304 295 /* unassigned portal, skip init */ 305 296 spin_unlock(&qman_lock); 306 297 return 0; ··· 332 321 err_ioremap2: 333 322 memunmap(pcfg->addr_virt_ce); 334 323 
err_ioremap1: 324 + __qman_portals_probed = -1; 325 + 335 326 return -ENXIO; 336 327 } 337 328
+8 -1
drivers/soc/fsl/qbman/qman_priv.h
··· 193 193 u32 qm_get_pools_sdqcr(void); 194 194 195 195 int qman_wq_alloc(void); 196 - void qman_liodn_fixup(u16 channel); 196 + #ifdef CONFIG_FSL_PAMU 197 + #define qman_liodn_fixup __qman_liodn_fixup 198 + #else 199 + static inline void qman_liodn_fixup(u16 channel) 200 + { 201 + } 202 + #endif 203 + void __qman_liodn_fixup(u16 channel); 197 204 void qman_set_sdest(u16 channel, unsigned int cpu_idx); 198 205 199 206 struct qman_portal *qman_create_affine_portal(
+9
drivers/soc/imx/Kconfig
··· 8 8 select PM_GENERIC_DOMAINS 9 9 default y if SOC_IMX7D 10 10 11 + config IMX_SCU_SOC 12 + bool "i.MX System Controller Unit SoC info support" 13 + depends on IMX_SCU 14 + select SOC_BUS 15 + help 16 + If you say yes here you get support for the NXP i.MX System 17 + Controller Unit SoC info module, it will provide the SoC info 18 + like SoC family, ID and revision etc. 19 + 11 20 endmenu
+1
drivers/soc/imx/Makefile
··· 2 2 obj-$(CONFIG_HAVE_IMX_GPC) += gpc.o 3 3 obj-$(CONFIG_IMX_GPCV2_PM_DOMAINS) += gpcv2.o 4 4 obj-$(CONFIG_ARCH_MXC) += soc-imx8.o 5 + obj-$(CONFIG_IMX_SCU_SOC) += soc-imx-scu.o
+144
drivers/soc/imx/soc-imx-scu.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright 2019 NXP. 4 + */ 5 + 6 + #include <dt-bindings/firmware/imx/rsrc.h> 7 + #include <linux/firmware/imx/sci.h> 8 + #include <linux/slab.h> 9 + #include <linux/sys_soc.h> 10 + #include <linux/platform_device.h> 11 + #include <linux/of.h> 12 + 13 + #define IMX_SCU_SOC_DRIVER_NAME "imx-scu-soc" 14 + 15 + static struct imx_sc_ipc *soc_ipc_handle; 16 + 17 + struct imx_sc_msg_misc_get_soc_id { 18 + struct imx_sc_rpc_msg hdr; 19 + union { 20 + struct { 21 + u32 control; 22 + u16 resource; 23 + } __packed req; 24 + struct { 25 + u32 id; 26 + } resp; 27 + } data; 28 + } __packed; 29 + 30 + static int imx_scu_soc_id(void) 31 + { 32 + struct imx_sc_msg_misc_get_soc_id msg; 33 + struct imx_sc_rpc_msg *hdr = &msg.hdr; 34 + int ret; 35 + 36 + hdr->ver = IMX_SC_RPC_VERSION; 37 + hdr->svc = IMX_SC_RPC_SVC_MISC; 38 + hdr->func = IMX_SC_MISC_FUNC_GET_CONTROL; 39 + hdr->size = 3; 40 + 41 + msg.data.req.control = IMX_SC_C_ID; 42 + msg.data.req.resource = IMX_SC_R_SYSTEM; 43 + 44 + ret = imx_scu_call_rpc(soc_ipc_handle, &msg, true); 45 + if (ret) { 46 + pr_err("%s: get soc info failed, ret %d\n", __func__, ret); 47 + return ret; 48 + } 49 + 50 + return msg.data.resp.id; 51 + } 52 + 53 + static int imx_scu_soc_probe(struct platform_device *pdev) 54 + { 55 + struct soc_device_attribute *soc_dev_attr; 56 + struct soc_device *soc_dev; 57 + int id, ret; 58 + u32 val; 59 + 60 + ret = imx_scu_get_handle(&soc_ipc_handle); 61 + if (ret) 62 + return ret; 63 + 64 + soc_dev_attr = devm_kzalloc(&pdev->dev, sizeof(*soc_dev_attr), 65 + GFP_KERNEL); 66 + if (!soc_dev_attr) 67 + return -ENOMEM; 68 + 69 + soc_dev_attr->family = "Freescale i.MX"; 70 + 71 + ret = of_property_read_string(of_root, 72 + "model", 73 + &soc_dev_attr->machine); 74 + if (ret) 75 + return ret; 76 + 77 + id = imx_scu_soc_id(); 78 + if (id < 0) 79 + return -EINVAL; 80 + 81 + /* format soc_id value passed from SCU firmware */ 82 + val = id & 0x1f; 83 + 
soc_dev_attr->soc_id = kasprintf(GFP_KERNEL, "0x%x", val); 84 + if (!soc_dev_attr->soc_id) 85 + return -ENOMEM; 86 + 87 + /* format revision value passed from SCU firmware */ 88 + val = (id >> 5) & 0xf; 89 + val = (((val >> 2) + 1) << 4) | (val & 0x3); 90 + soc_dev_attr->revision = kasprintf(GFP_KERNEL, 91 + "%d.%d", 92 + (val >> 4) & 0xf, 93 + val & 0xf); 94 + if (!soc_dev_attr->revision) { 95 + ret = -ENOMEM; 96 + goto free_soc_id; 97 + } 98 + 99 + soc_dev = soc_device_register(soc_dev_attr); 100 + if (IS_ERR(soc_dev)) { 101 + ret = PTR_ERR(soc_dev); 102 + goto free_revision; 103 + } 104 + 105 + return 0; 106 + 107 + free_revision: 108 + kfree(soc_dev_attr->revision); 109 + free_soc_id: 110 + kfree(soc_dev_attr->soc_id); 111 + return ret; 112 + } 113 + 114 + static struct platform_driver imx_scu_soc_driver = { 115 + .driver = { 116 + .name = IMX_SCU_SOC_DRIVER_NAME, 117 + }, 118 + .probe = imx_scu_soc_probe, 119 + }; 120 + 121 + static int __init imx_scu_soc_init(void) 122 + { 123 + struct platform_device *pdev; 124 + struct device_node *np; 125 + int ret; 126 + 127 + np = of_find_compatible_node(NULL, NULL, "fsl,imx-scu"); 128 + if (!np) 129 + return -ENODEV; 130 + 131 + of_node_put(np); 132 + 133 + ret = platform_driver_register(&imx_scu_soc_driver); 134 + if (ret) 135 + return ret; 136 + 137 + pdev = platform_device_register_simple(IMX_SCU_SOC_DRIVER_NAME, 138 + -1, NULL, 0); 139 + if (IS_ERR(pdev)) 140 + platform_driver_unregister(&imx_scu_soc_driver); 141 + 142 + return PTR_ERR_OR_ZERO(pdev); 143 + } 144 + device_initcall(imx_scu_soc_init);
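The revision math in `imx_scu_soc_probe()` above extracts a 4-bit field from the SCU-supplied id (bits [8:5]) and turns it into a major.minor pair: the top two bits of the field plus one become the major number, the bottom two bits the minor. A standalone restatement of just that arithmetic, lifted from the hunk:

```c
#include <stdint.h>

/* Decode the 4-bit revision field (id bits [8:5]) the way
 * imx_scu_soc_probe() does: major = field[3:2] + 1, minor = field[1:0]. */
static void imx_scu_decode_rev(uint32_t id, unsigned int *major,
			       unsigned int *minor)
{
	uint32_t val = (id >> 5) & 0xf;

	val = (((val >> 2) + 1) << 4) | (val & 0x3);
	*major = (val >> 4) & 0xf;
	*minor = val & 0xf;
}
```

So a raw field of 0 decodes as 1.0, 1 as 1.1, and 4 as 2.0, which matches the `"%d.%d"` string the driver builds.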
+50 -13
drivers/soc/imx/soc-imx8.c
··· 16 16 #define IMX8MQ_SW_INFO_B1 0x40 17 17 #define IMX8MQ_SW_MAGIC_B1 0xff0055aa 18 18 19 + /* Same as ANADIG_DIGPROG_IMX7D */ 20 + #define ANADIG_DIGPROG_IMX8MM 0x800 21 + 19 22 struct imx8_soc_data { 20 23 char *name; 21 24 u32 (*soc_revision)(void); ··· 49 46 return rev; 50 47 } 51 48 49 + static u32 __init imx8mm_soc_revision(void) 50 + { 51 + struct device_node *np; 52 + void __iomem *anatop_base; 53 + u32 rev; 54 + 55 + np = of_find_compatible_node(NULL, NULL, "fsl,imx8mm-anatop"); 56 + if (!np) 57 + return 0; 58 + 59 + anatop_base = of_iomap(np, 0); 60 + WARN_ON(!anatop_base); 61 + 62 + rev = readl_relaxed(anatop_base + ANADIG_DIGPROG_IMX8MM); 63 + 64 + iounmap(anatop_base); 65 + of_node_put(np); 66 + return rev; 67 + } 68 + 52 69 static const struct imx8_soc_data imx8mq_soc_data = { 53 70 .name = "i.MX8MQ", 54 71 .soc_revision = imx8mq_soc_revision, 55 72 }; 56 73 74 + static const struct imx8_soc_data imx8mm_soc_data = { 75 + .name = "i.MX8MM", 76 + .soc_revision = imx8mm_soc_revision, 77 + }; 78 + 79 + static const struct imx8_soc_data imx8mn_soc_data = { 80 + .name = "i.MX8MN", 81 + .soc_revision = imx8mm_soc_revision, 82 + }; 83 + 57 84 static const struct of_device_id imx8_soc_match[] = { 58 85 { .compatible = "fsl,imx8mq", .data = &imx8mq_soc_data, }, 86 + { .compatible = "fsl,imx8mm", .data = &imx8mm_soc_data, }, 87 + { .compatible = "fsl,imx8mn", .data = &imx8mn_soc_data, }, 59 88 { } 60 89 }; 61 90 ··· 100 65 { 101 66 struct soc_device_attribute *soc_dev_attr; 102 67 struct soc_device *soc_dev; 103 - struct device_node *root; 104 68 const struct of_device_id *id; 105 69 u32 soc_rev = 0; 106 70 const struct imx8_soc_data *data; ··· 107 73 108 74 soc_dev_attr = kzalloc(sizeof(*soc_dev_attr), GFP_KERNEL); 109 75 if (!soc_dev_attr) 110 - return -ENODEV; 76 + return -ENOMEM; 111 77 112 78 soc_dev_attr->family = "Freescale i.MX"; 113 79 114 - root = of_find_node_by_path("/"); 115 - ret = of_property_read_string(root, "model", &soc_dev_attr->machine); 
80 + ret = of_property_read_string(of_root, "model", &soc_dev_attr->machine); 116 81 if (ret) 117 82 goto free_soc; 118 83 119 - id = of_match_node(imx8_soc_match, root); 120 - if (!id) 84 + id = of_match_node(imx8_soc_match, of_root); 85 + if (!id) { 86 + ret = -ENODEV; 121 87 goto free_soc; 122 - 123 - of_node_put(root); 88 + } 124 89 125 90 data = id->data; 126 91 if (data) { ··· 129 96 } 130 97 131 98 soc_dev_attr->revision = imx8_revision(soc_rev); 132 - if (!soc_dev_attr->revision) 99 + if (!soc_dev_attr->revision) { 100 + ret = -ENOMEM; 133 101 goto free_soc; 102 + } 134 103 135 104 soc_dev = soc_device_register(soc_dev_attr); 136 - if (IS_ERR(soc_dev)) 105 + if (IS_ERR(soc_dev)) { 106 + ret = PTR_ERR(soc_dev); 137 107 goto free_rev; 108 + } 138 109 139 110 if (IS_ENABLED(CONFIG_ARM_IMX_CPUFREQ_DT)) 140 111 platform_device_register_simple("imx-cpufreq-dt", -1, NULL, 0); ··· 146 109 return 0; 147 110 148 111 free_rev: 149 - kfree(soc_dev_attr->revision); 112 + if (strcmp(soc_dev_attr->revision, "unknown")) 113 + kfree(soc_dev_attr->revision); 150 114 free_soc: 151 115 kfree(soc_dev_attr); 152 - of_node_put(root); 153 - return -ENODEV; 116 + return ret; 154 117 } 155 118 device_initcall(imx8_soc_init);
+12
drivers/soc/qcom/Kconfig
··· 4 4 # 5 5 menu "Qualcomm SoC drivers" 6 6 7 + config QCOM_AOSS_QMP 8 + tristate "Qualcomm AOSS Driver" 9 + depends on ARCH_QCOM || COMPILE_TEST 10 + depends on MAILBOX 11 + depends on COMMON_CLK && PM 12 + select PM_GENERIC_DOMAINS 13 + help 14 + This driver provides the means of communicating with and controlling 15 + the low-power state for resources related to the remoteproc 16 + subsystems as well as controlling the debug clocks exposed by the Always On 17 + Subsystem (AOSS) using Qualcomm Messaging Protocol (QMP). 18 + 7 19 config QCOM_COMMAND_DB 8 20 bool "Qualcomm Command DB" 9 21 depends on ARCH_QCOM || COMPILE_TEST
+1
drivers/soc/qcom/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 CFLAGS_rpmh-rsc.o := -I$(src) 3 + obj-$(CONFIG_QCOM_AOSS_QMP) += qcom_aoss.o 3 4 obj-$(CONFIG_QCOM_GENI_SE) += qcom-geni-se.o 4 5 obj-$(CONFIG_QCOM_COMMAND_DB) += cmd-db.o 5 6 obj-$(CONFIG_QCOM_GLINK_SSR) += glink_ssr.o
+70 -6
drivers/soc/qcom/apr.c
··· 8 8 #include <linux/spinlock.h> 9 9 #include <linux/idr.h> 10 10 #include <linux/slab.h> 11 + #include <linux/workqueue.h> 11 12 #include <linux/of_device.h> 12 13 #include <linux/soc/qcom/apr.h> 13 14 #include <linux/rpmsg.h> ··· 18 17 struct rpmsg_endpoint *ch; 19 18 struct device *dev; 20 19 spinlock_t svcs_lock; 20 + spinlock_t rx_lock; 21 21 struct idr svcs_idr; 22 22 int dest_domain_id; 23 + struct workqueue_struct *rxwq; 24 + struct work_struct rx_work; 25 + struct list_head rx_list; 26 + }; 27 + 28 + struct apr_rx_buf { 29 + struct list_head node; 30 + int len; 31 + uint8_t buf[]; 23 32 }; 24 33 25 34 /** ··· 73 62 int len, void *priv, u32 addr) 74 63 { 75 64 struct apr *apr = dev_get_drvdata(&rpdev->dev); 76 - uint16_t hdr_size, msg_type, ver, svc_id; 77 - struct apr_device *svc = NULL; 78 - struct apr_driver *adrv = NULL; 79 - struct apr_resp_pkt resp; 80 - struct apr_hdr *hdr; 65 + struct apr_rx_buf *abuf; 81 66 unsigned long flags; 82 67 83 68 if (len <= APR_HDR_SIZE) { ··· 81 74 buf, len); 82 75 return -EINVAL; 83 76 } 77 + 78 + abuf = kzalloc(sizeof(*abuf) + len, GFP_ATOMIC); 79 + if (!abuf) 80 + return -ENOMEM; 81 + 82 + abuf->len = len; 83 + memcpy(abuf->buf, buf, len); 84 + 85 + spin_lock_irqsave(&apr->rx_lock, flags); 86 + list_add_tail(&abuf->node, &apr->rx_list); 87 + spin_unlock_irqrestore(&apr->rx_lock, flags); 88 + 89 + queue_work(apr->rxwq, &apr->rx_work); 90 + 91 + return 0; 92 + } 93 + 94 + 95 + static int apr_do_rx_callback(struct apr *apr, struct apr_rx_buf *abuf) 96 + { 97 + uint16_t hdr_size, msg_type, ver, svc_id; 98 + struct apr_device *svc = NULL; 99 + struct apr_driver *adrv = NULL; 100 + struct apr_resp_pkt resp; 101 + struct apr_hdr *hdr; 102 + unsigned long flags; 103 + void *buf = abuf->buf; 104 + int len = abuf->len; 84 105 85 106 hdr = buf; 86 107 ver = APR_HDR_FIELD_VER(hdr->hdr_field); ··· 165 130 adrv->callback(svc, &resp); 166 131 167 132 return 0; 133 + } 134 + 135 + static void apr_rxwq(struct work_struct *work) 136 
+ { 137 + struct apr *apr = container_of(work, struct apr, rx_work); 138 + struct apr_rx_buf *abuf, *b; 139 + unsigned long flags; 140 + 141 + if (!list_empty(&apr->rx_list)) { 142 + list_for_each_entry_safe(abuf, b, &apr->rx_list, node) { 143 + apr_do_rx_callback(apr, abuf); 144 + spin_lock_irqsave(&apr->rx_lock, flags); 145 + list_del(&abuf->node); 146 + spin_unlock_irqrestore(&apr->rx_lock, flags); 147 + kfree(abuf); 148 + } 149 + } 168 150 } 169 151 170 152 static int apr_device_match(struct device *dev, struct device_driver *drv) ··· 328 276 if (!apr) 329 277 return -ENOMEM; 330 278 331 - ret = of_property_read_u32(dev->of_node, "reg", &apr->dest_domain_id); 279 + ret = of_property_read_u32(dev->of_node, "qcom,apr-domain", &apr->dest_domain_id); 332 280 if (ret) { 333 281 dev_err(dev, "APR Domain ID not specified in DT\n"); 334 282 return ret; ··· 337 285 dev_set_drvdata(dev, apr); 338 286 apr->ch = rpdev->ept; 339 287 apr->dev = dev; 288 + apr->rxwq = create_singlethread_workqueue("qcom_apr_rx"); 289 + if (!apr->rxwq) { 290 + dev_err(apr->dev, "Failed to start Rx WQ\n"); 291 + return -ENOMEM; 292 + } 293 + INIT_WORK(&apr->rx_work, apr_rxwq); 294 + INIT_LIST_HEAD(&apr->rx_list); 295 + spin_lock_init(&apr->rx_lock); 340 296 spin_lock_init(&apr->svcs_lock); 341 297 idr_init(&apr->svcs_idr); 342 298 of_register_apr_devices(dev); ··· 363 303 364 304 static void apr_remove(struct rpmsg_device *rpdev) 365 305 { 306 + struct apr *apr = dev_get_drvdata(&rpdev->dev); 307 + 366 308 device_for_each_child(&rpdev->dev, NULL, apr_remove_device); 309 + flush_workqueue(apr->rxwq); 310 + destroy_workqueue(apr->rxwq); 367 311 } 368 312 369 313 /*
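The apr.c change above moves message handling out of the rpmsg callback, which may run in atomic context, into a workqueue: the callback only copies the payload into a heap-allocated `apr_rx_buf`, links it onto `rx_list` under `rx_lock`, and queues `rx_work`; `apr_rxwq()` later drains the list and runs the real callbacks. A minimal userspace sketch of that copy-then-defer pattern (plain C, not the kernel list or workqueue APIs, and without the locking):

```c
#include <stdlib.h>
#include <string.h>

struct rx_buf {
	struct rx_buf *next;
	size_t len;
	unsigned char data[];	/* flexible array, like apr_rx_buf::buf */
};

static struct rx_buf *rx_head;
static struct rx_buf **rx_tail = &rx_head;

/* "Callback side": copy the transient payload so it outlives the caller. */
static int rx_enqueue(const void *buf, size_t len)
{
	struct rx_buf *b = malloc(sizeof(*b) + len);

	if (!b)
		return -1;
	b->next = NULL;
	b->len = len;
	memcpy(b->data, buf, len);
	*rx_tail = b;
	rx_tail = &b->next;
	return 0;
}

/* "Workqueue side": drain every buffered message, freeing as we go.
 * Returns the total payload bytes processed. */
static size_t rx_drain(void)
{
	size_t bytes = 0;

	while (rx_head) {
		struct rx_buf *b = rx_head;

		rx_head = b->next;
		bytes += b->len;	/* real code would dispatch b->data here */
		free(b);
	}
	rx_tail = &rx_head;
	return bytes;
}
```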
+480
drivers/soc/qcom/qcom_aoss.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (c) 2019, Linaro Ltd 4 + */ 5 + #include <dt-bindings/power/qcom-aoss-qmp.h> 6 + #include <linux/clk-provider.h> 7 + #include <linux/interrupt.h> 8 + #include <linux/io.h> 9 + #include <linux/mailbox_client.h> 10 + #include <linux/module.h> 11 + #include <linux/platform_device.h> 12 + #include <linux/pm_domain.h> 13 + 14 + #define QMP_DESC_MAGIC 0x0 15 + #define QMP_DESC_VERSION 0x4 16 + #define QMP_DESC_FEATURES 0x8 17 + 18 + /* AOP-side offsets */ 19 + #define QMP_DESC_UCORE_LINK_STATE 0xc 20 + #define QMP_DESC_UCORE_LINK_STATE_ACK 0x10 21 + #define QMP_DESC_UCORE_CH_STATE 0x14 22 + #define QMP_DESC_UCORE_CH_STATE_ACK 0x18 23 + #define QMP_DESC_UCORE_MBOX_SIZE 0x1c 24 + #define QMP_DESC_UCORE_MBOX_OFFSET 0x20 25 + 26 + /* Linux-side offsets */ 27 + #define QMP_DESC_MCORE_LINK_STATE 0x24 28 + #define QMP_DESC_MCORE_LINK_STATE_ACK 0x28 29 + #define QMP_DESC_MCORE_CH_STATE 0x2c 30 + #define QMP_DESC_MCORE_CH_STATE_ACK 0x30 31 + #define QMP_DESC_MCORE_MBOX_SIZE 0x34 32 + #define QMP_DESC_MCORE_MBOX_OFFSET 0x38 33 + 34 + #define QMP_STATE_UP GENMASK(15, 0) 35 + #define QMP_STATE_DOWN GENMASK(31, 16) 36 + 37 + #define QMP_MAGIC 0x4d41494c /* mail */ 38 + #define QMP_VERSION 1 39 + 40 + /* 64 bytes is enough to store the requests and provides padding to 4 bytes */ 41 + #define QMP_MSG_LEN 64 42 + 43 + /** 44 + * struct qmp - driver state for QMP implementation 45 + * @msgram: iomem referencing the message RAM used for communication 46 + * @dev: reference to QMP device 47 + * @mbox_client: mailbox client used to ring the doorbell on transmit 48 + * @mbox_chan: mailbox channel used to ring the doorbell on transmit 49 + * @offset: offset within @msgram where messages should be written 50 + * @size: maximum size of the messages to be transmitted 51 + * @event: wait_queue for synchronization with the IRQ 52 + * @tx_lock: provides synchronization between multiple callers of qmp_send() 53 + * @qdss_clk: QDSS 
clock hw struct 54 + * @pd_data: genpd data 55 + */ 56 + struct qmp { 57 + void __iomem *msgram; 58 + struct device *dev; 59 + 60 + struct mbox_client mbox_client; 61 + struct mbox_chan *mbox_chan; 62 + 63 + size_t offset; 64 + size_t size; 65 + 66 + wait_queue_head_t event; 67 + 68 + struct mutex tx_lock; 69 + 70 + struct clk_hw qdss_clk; 71 + struct genpd_onecell_data pd_data; 72 + }; 73 + 74 + struct qmp_pd { 75 + struct qmp *qmp; 76 + struct generic_pm_domain pd; 77 + }; 78 + 79 + #define to_qmp_pd_resource(res) container_of(res, struct qmp_pd, pd) 80 + 81 + static void qmp_kick(struct qmp *qmp) 82 + { 83 + mbox_send_message(qmp->mbox_chan, NULL); 84 + mbox_client_txdone(qmp->mbox_chan, 0); 85 + } 86 + 87 + static bool qmp_magic_valid(struct qmp *qmp) 88 + { 89 + return readl(qmp->msgram + QMP_DESC_MAGIC) == QMP_MAGIC; 90 + } 91 + 92 + static bool qmp_link_acked(struct qmp *qmp) 93 + { 94 + return readl(qmp->msgram + QMP_DESC_MCORE_LINK_STATE_ACK) == QMP_STATE_UP; 95 + } 96 + 97 + static bool qmp_mcore_channel_acked(struct qmp *qmp) 98 + { 99 + return readl(qmp->msgram + QMP_DESC_MCORE_CH_STATE_ACK) == QMP_STATE_UP; 100 + } 101 + 102 + static bool qmp_ucore_channel_up(struct qmp *qmp) 103 + { 104 + return readl(qmp->msgram + QMP_DESC_UCORE_CH_STATE) == QMP_STATE_UP; 105 + } 106 + 107 + static int qmp_open(struct qmp *qmp) 108 + { 109 + int ret; 110 + u32 val; 111 + 112 + if (!qmp_magic_valid(qmp)) { 113 + dev_err(qmp->dev, "QMP magic doesn't match\n"); 114 + return -EINVAL; 115 + } 116 + 117 + val = readl(qmp->msgram + QMP_DESC_VERSION); 118 + if (val != QMP_VERSION) { 119 + dev_err(qmp->dev, "unsupported QMP version %d\n", val); 120 + return -EINVAL; 121 + } 122 + 123 + qmp->offset = readl(qmp->msgram + QMP_DESC_MCORE_MBOX_OFFSET); 124 + qmp->size = readl(qmp->msgram + QMP_DESC_MCORE_MBOX_SIZE); 125 + if (!qmp->size) { 126 + dev_err(qmp->dev, "invalid mailbox size\n"); 127 + return -EINVAL; 128 + } 129 + 130 + /* Ack remote core's link state */ 131 + val = 
readl(qmp->msgram + QMP_DESC_UCORE_LINK_STATE); 132 + writel(val, qmp->msgram + QMP_DESC_UCORE_LINK_STATE_ACK); 133 + 134 + /* Set local core's link state to up */ 135 + writel(QMP_STATE_UP, qmp->msgram + QMP_DESC_MCORE_LINK_STATE); 136 + 137 + qmp_kick(qmp); 138 + 139 + ret = wait_event_timeout(qmp->event, qmp_link_acked(qmp), HZ); 140 + if (!ret) { 141 + dev_err(qmp->dev, "ucore didn't ack link\n"); 142 + goto timeout_close_link; 143 + } 144 + 145 + writel(QMP_STATE_UP, qmp->msgram + QMP_DESC_MCORE_CH_STATE); 146 + 147 + qmp_kick(qmp); 148 + 149 + ret = wait_event_timeout(qmp->event, qmp_ucore_channel_up(qmp), HZ); 150 + if (!ret) { 151 + dev_err(qmp->dev, "ucore didn't open channel\n"); 152 + goto timeout_close_channel; 153 + } 154 + 155 + /* Ack remote core's channel state */ 156 + writel(QMP_STATE_UP, qmp->msgram + QMP_DESC_UCORE_CH_STATE_ACK); 157 + 158 + qmp_kick(qmp); 159 + 160 + ret = wait_event_timeout(qmp->event, qmp_mcore_channel_acked(qmp), HZ); 161 + if (!ret) { 162 + dev_err(qmp->dev, "ucore didn't ack channel\n"); 163 + goto timeout_close_channel; 164 + } 165 + 166 + return 0; 167 + 168 + timeout_close_channel: 169 + writel(QMP_STATE_DOWN, qmp->msgram + QMP_DESC_MCORE_CH_STATE); 170 + 171 + timeout_close_link: 172 + writel(QMP_STATE_DOWN, qmp->msgram + QMP_DESC_MCORE_LINK_STATE); 173 + qmp_kick(qmp); 174 + 175 + return -ETIMEDOUT; 176 + } 177 + 178 + static void qmp_close(struct qmp *qmp) 179 + { 180 + writel(QMP_STATE_DOWN, qmp->msgram + QMP_DESC_MCORE_CH_STATE); 181 + writel(QMP_STATE_DOWN, qmp->msgram + QMP_DESC_MCORE_LINK_STATE); 182 + qmp_kick(qmp); 183 + } 184 + 185 + static irqreturn_t qmp_intr(int irq, void *data) 186 + { 187 + struct qmp *qmp = data; 188 + 189 + wake_up_interruptible_all(&qmp->event); 190 + 191 + return IRQ_HANDLED; 192 + } 193 + 194 + static bool qmp_message_empty(struct qmp *qmp) 195 + { 196 + return readl(qmp->msgram + qmp->offset) == 0; 197 + } 198 + 199 + /** 200 + * qmp_send() - send a message to the AOSS 201 + * 
@qmp: qmp context 202 + * @data: message to be sent 203 + * @len: length of the message 204 + * 205 + * Transmit @data to AOSS and wait for the AOSS to acknowledge the message. 206 + * @len must be a multiple of 4 and not longer than the mailbox size. Access is 207 + * synchronized by this implementation. 208 + * 209 + * Return: 0 on success, negative errno on failure 210 + */ 211 + static int qmp_send(struct qmp *qmp, const void *data, size_t len) 212 + { 213 + long time_left; 214 + int ret; 215 + 216 + if (WARN_ON(len + sizeof(u32) > qmp->size)) 217 + return -EINVAL; 218 + 219 + if (WARN_ON(len % sizeof(u32))) 220 + return -EINVAL; 221 + 222 + mutex_lock(&qmp->tx_lock); 223 + 224 + /* The message RAM only implements 32-bit accesses */ 225 + __iowrite32_copy(qmp->msgram + qmp->offset + sizeof(u32), 226 + data, len / sizeof(u32)); 227 + writel(len, qmp->msgram + qmp->offset); 228 + qmp_kick(qmp); 229 + 230 + time_left = wait_event_interruptible_timeout(qmp->event, 231 + qmp_message_empty(qmp), HZ); 232 + if (!time_left) { 233 + dev_err(qmp->dev, "ucore did not ack channel\n"); 234 + ret = -ETIMEDOUT; 235 + 236 + /* Clear message from buffer */ 237 + writel(0, qmp->msgram + qmp->offset); 238 + } else { 239 + ret = 0; 240 + } 241 + 242 + mutex_unlock(&qmp->tx_lock); 243 + 244 + return ret; 245 + } 246 + 247 + static int qmp_qdss_clk_prepare(struct clk_hw *hw) 248 + { 249 + static const char buf[QMP_MSG_LEN] = "{class: clock, res: qdss, val: 1}"; 250 + struct qmp *qmp = container_of(hw, struct qmp, qdss_clk); 251 + 252 + return qmp_send(qmp, buf, sizeof(buf)); 253 + } 254 + 255 + static void qmp_qdss_clk_unprepare(struct clk_hw *hw) 256 + { 257 + static const char buf[QMP_MSG_LEN] = "{class: clock, res: qdss, val: 0}"; 258 + struct qmp *qmp = container_of(hw, struct qmp, qdss_clk); 259 + 260 + qmp_send(qmp, buf, sizeof(buf)); 261 + } 262 + 263 + static const struct clk_ops qmp_qdss_clk_ops = { 264 + .prepare = qmp_qdss_clk_prepare, 265 + .unprepare = 
qmp_qdss_clk_unprepare, 266 + }; 267 + 268 + static int qmp_qdss_clk_add(struct qmp *qmp) 269 + { 270 + static const struct clk_init_data qdss_init = { 271 + .ops = &qmp_qdss_clk_ops, 272 + .name = "qdss", 273 + }; 274 + int ret; 275 + 276 + qmp->qdss_clk.init = &qdss_init; 277 + ret = clk_hw_register(qmp->dev, &qmp->qdss_clk); 278 + if (ret < 0) { 279 + dev_err(qmp->dev, "failed to register qdss clock\n"); 280 + return ret; 281 + } 282 + 283 + ret = of_clk_add_hw_provider(qmp->dev->of_node, of_clk_hw_simple_get, 284 + &qmp->qdss_clk); 285 + if (ret < 0) { 286 + dev_err(qmp->dev, "unable to register of clk hw provider\n"); 287 + clk_hw_unregister(&qmp->qdss_clk); 288 + } 289 + 290 + return ret; 291 + } 292 + 293 + static void qmp_qdss_clk_remove(struct qmp *qmp) 294 + { 295 + of_clk_del_provider(qmp->dev->of_node); 296 + clk_hw_unregister(&qmp->qdss_clk); 297 + } 298 + 299 + static int qmp_pd_power_toggle(struct qmp_pd *res, bool enable) 300 + { 301 + char buf[QMP_MSG_LEN] = {}; 302 + 303 + snprintf(buf, sizeof(buf), 304 + "{class: image, res: load_state, name: %s, val: %s}", 305 + res->pd.name, enable ? 
"on" : "off"); 306 + return qmp_send(res->qmp, buf, sizeof(buf)); 307 + } 308 + 309 + static int qmp_pd_power_on(struct generic_pm_domain *domain) 310 + { 311 + return qmp_pd_power_toggle(to_qmp_pd_resource(domain), true); 312 + } 313 + 314 + static int qmp_pd_power_off(struct generic_pm_domain *domain) 315 + { 316 + return qmp_pd_power_toggle(to_qmp_pd_resource(domain), false); 317 + } 318 + 319 + static const char * const sdm845_resources[] = { 320 + [AOSS_QMP_LS_CDSP] = "cdsp", 321 + [AOSS_QMP_LS_LPASS] = "adsp", 322 + [AOSS_QMP_LS_MODEM] = "modem", 323 + [AOSS_QMP_LS_SLPI] = "slpi", 324 + [AOSS_QMP_LS_SPSS] = "spss", 325 + [AOSS_QMP_LS_VENUS] = "venus", 326 + }; 327 + 328 + static int qmp_pd_add(struct qmp *qmp) 329 + { 330 + struct genpd_onecell_data *data = &qmp->pd_data; 331 + struct device *dev = qmp->dev; 332 + struct qmp_pd *res; 333 + size_t num = ARRAY_SIZE(sdm845_resources); 334 + int ret; 335 + int i; 336 + 337 + res = devm_kcalloc(dev, num, sizeof(*res), GFP_KERNEL); 338 + if (!res) 339 + return -ENOMEM; 340 + 341 + data->domains = devm_kcalloc(dev, num, sizeof(*data->domains), 342 + GFP_KERNEL); 343 + if (!data->domains) 344 + return -ENOMEM; 345 + 346 + for (i = 0; i < num; i++) { 347 + res[i].qmp = qmp; 348 + res[i].pd.name = sdm845_resources[i]; 349 + res[i].pd.power_on = qmp_pd_power_on; 350 + res[i].pd.power_off = qmp_pd_power_off; 351 + 352 + ret = pm_genpd_init(&res[i].pd, NULL, true); 353 + if (ret < 0) { 354 + dev_err(dev, "failed to init genpd\n"); 355 + goto unroll_genpds; 356 + } 357 + 358 + data->domains[i] = &res[i].pd; 359 + } 360 + 361 + data->num_domains = i; 362 + 363 + ret = of_genpd_add_provider_onecell(dev->of_node, data); 364 + if (ret < 0) 365 + goto unroll_genpds; 366 + 367 + return 0; 368 + 369 + unroll_genpds: 370 + for (i--; i >= 0; i--) 371 + pm_genpd_remove(data->domains[i]); 372 + 373 + return ret; 374 + } 375 + 376 + static void qmp_pd_remove(struct qmp *qmp) 377 + { 378 + struct genpd_onecell_data *data = 
&qmp->pd_data; 379 + struct device *dev = qmp->dev; 380 + int i; 381 + 382 + of_genpd_del_provider(dev->of_node); 383 + 384 + for (i = 0; i < data->num_domains; i++) 385 + pm_genpd_remove(data->domains[i]); 386 + } 387 + 388 + static int qmp_probe(struct platform_device *pdev) 389 + { 390 + struct resource *res; 391 + struct qmp *qmp; 392 + int irq; 393 + int ret; 394 + 395 + qmp = devm_kzalloc(&pdev->dev, sizeof(*qmp), GFP_KERNEL); 396 + if (!qmp) 397 + return -ENOMEM; 398 + 399 + qmp->dev = &pdev->dev; 400 + init_waitqueue_head(&qmp->event); 401 + mutex_init(&qmp->tx_lock); 402 + 403 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 404 + qmp->msgram = devm_ioremap_resource(&pdev->dev, res); 405 + if (IS_ERR(qmp->msgram)) 406 + return PTR_ERR(qmp->msgram); 407 + 408 + qmp->mbox_client.dev = &pdev->dev; 409 + qmp->mbox_client.knows_txdone = true; 410 + qmp->mbox_chan = mbox_request_channel(&qmp->mbox_client, 0); 411 + if (IS_ERR(qmp->mbox_chan)) { 412 + dev_err(&pdev->dev, "failed to acquire ipc mailbox\n"); 413 + return PTR_ERR(qmp->mbox_chan); 414 + } 415 + 416 + irq = platform_get_irq(pdev, 0); 417 + ret = devm_request_irq(&pdev->dev, irq, qmp_intr, IRQF_ONESHOT, 418 + "aoss-qmp", qmp); 419 + if (ret < 0) { 420 + dev_err(&pdev->dev, "failed to request interrupt\n"); 421 + goto err_free_mbox; 422 + } 423 + 424 + ret = qmp_open(qmp); 425 + if (ret < 0) 426 + goto err_free_mbox; 427 + 428 + ret = qmp_qdss_clk_add(qmp); 429 + if (ret) 430 + goto err_close_qmp; 431 + 432 + ret = qmp_pd_add(qmp); 433 + if (ret) 434 + goto err_remove_qdss_clk; 435 + 436 + platform_set_drvdata(pdev, qmp); 437 + 438 + return 0; 439 + 440 + err_remove_qdss_clk: 441 + qmp_qdss_clk_remove(qmp); 442 + err_close_qmp: 443 + qmp_close(qmp); 444 + err_free_mbox: 445 + mbox_free_channel(qmp->mbox_chan); 446 + 447 + return ret; 448 + } 449 + 450 + static int qmp_remove(struct platform_device *pdev) 451 + { 452 + struct qmp *qmp = platform_get_drvdata(pdev); 453 + 454 + 
qmp_qdss_clk_remove(qmp); 455 + qmp_pd_remove(qmp); 456 + 457 + qmp_close(qmp); 458 + mbox_free_channel(qmp->mbox_chan); 459 + 460 + return 0; 461 + } 462 + 463 + static const struct of_device_id qmp_dt_match[] = { 464 + { .compatible = "qcom,sdm845-aoss-qmp", }, 465 + {} 466 + }; 467 + MODULE_DEVICE_TABLE(of, qmp_dt_match); 468 + 469 + static struct platform_driver qmp_driver = { 470 + .driver = { 471 + .name = "qcom_aoss_qmp", 472 + .of_match_table = qmp_dt_match, 473 + }, 474 + .probe = qmp_probe, 475 + .remove = qmp_remove, 476 + }; 477 + module_platform_driver(qmp_driver); 478 + 479 + MODULE_DESCRIPTION("Qualcomm AOSS QMP driver"); 480 + MODULE_LICENSE("GPL v2");
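In the qcom_aoss.c driver above, every request is a fixed `QMP_MSG_LEN` (64-byte) zero-padded string, and `qmp_send()` writes it to message RAM purely as 32-bit words: the payload via `__iowrite32_copy()` at `offset + 4`, then the length word at `offset`. A hedged sketch that models just that framing into an ordinary word array (the layout model is an assumption drawn from the hunk, not a driver API):

```c
#include <stdint.h>
#include <string.h>

#define QMP_MSG_LEN 64	/* fixed request size used by the driver */

/* Model the msgram writes qmp_send() performs: a 32-bit length word
 * followed by the zero-padded message, all as 32-bit stores.
 * Returns the number of 32-bit words written; 'words' must hold
 * 1 + QMP_MSG_LEN/4 entries. */
static size_t qmp_pack(uint32_t *words, const char *req)
{
	char buf[QMP_MSG_LEN] = { 0 };	/* zero padding, like the driver */

	strncpy(buf, req, QMP_MSG_LEN - 1);
	words[0] = QMP_MSG_LEN;		/* length word at qmp->offset */
	memcpy(&words[1], buf, QMP_MSG_LEN);
	return 1 + QMP_MSG_LEN / sizeof(uint32_t);
}
```

This also shows why `qmp_send()` can insist that `len` be a multiple of 4 and no larger than `qmp->size - sizeof(u32)`: the length word and payload must both fit in the mailbox window as whole words.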
+110 -24
drivers/soc/qcom/rpmpd.c
··· 16 16 17 17 #define domain_to_rpmpd(domain) container_of(domain, struct rpmpd, pd) 18 18 19 - /* Resource types */ 19 + /* Resource types: 20 + * RPMPD_X is X encoded as a little-endian, lower-case, ASCII string */ 20 21 #define RPMPD_SMPA 0x61706d73 21 22 #define RPMPD_LDOA 0x616f646c 23 + #define RPMPD_RWCX 0x78637772 24 + #define RPMPD_RWMX 0x786d7772 25 + #define RPMPD_RWLC 0x636c7772 26 + #define RPMPD_RWLM 0x6d6c7772 27 + #define RPMPD_RWSC 0x63737772 28 + #define RPMPD_RWSM 0x6d737772 22 29 23 30 /* Operation Keys */ 24 31 #define KEY_CORNER 0x6e726f63 /* corn */ 25 32 #define KEY_ENABLE 0x6e657773 /* swen */ 26 33 #define KEY_FLOOR_CORNER 0x636676 /* vfc */ 34 + #define KEY_FLOOR_LEVEL 0x6c6676 /* vfl */ 35 + #define KEY_LEVEL 0x6c766c76 /* vlvl */ 27 36 28 - #define MAX_RPMPD_STATE 6 37 + #define MAX_8996_RPMPD_STATE 6 29 38 30 - #define DEFINE_RPMPD_CORNER_SMPA(_platform, _name, _active, r_id) \ 39 + #define DEFINE_RPMPD_PAIR(_platform, _name, _active, r_type, r_key, \ 40 + r_id) \ 31 41 static struct rpmpd _platform##_##_active; \ 32 42 static struct rpmpd _platform##_##_name = { \ 33 43 .pd = { .name = #_name, }, \ 34 44 .peer = &_platform##_##_active, \ 35 - .res_type = RPMPD_SMPA, \ 45 + .res_type = RPMPD_##r_type, \ 36 46 .res_id = r_id, \ 37 - .key = KEY_CORNER, \ 47 + .key = KEY_##r_key, \ 38 48 }; \ 39 49 static struct rpmpd _platform##_##_active = { \ 40 50 .pd = { .name = #_active, }, \ 41 51 .peer = &_platform##_##_name, \ 42 52 .active_only = true, \ 43 - .res_type = RPMPD_SMPA, \ 53 + .res_type = RPMPD_##r_type, \ 54 + .res_id = r_id, \ 55 + .key = KEY_##r_key, \ 56 + } 57 + 58 + #define DEFINE_RPMPD_CORNER(_platform, _name, r_type, r_id) \ 59 + static struct rpmpd _platform##_##_name = { \ 60 + .pd = { .name = #_name, }, \ 61 + .res_type = RPMPD_##r_type, \ 44 62 .res_id = r_id, \ 45 63 .key = KEY_CORNER, \ 46 64 } 47 65 48 - #define DEFINE_RPMPD_CORNER_LDOA(_platform, _name, r_id) \ 66 + #define DEFINE_RPMPD_LEVEL(_platform, _name, 
r_type, r_id) \ 49 67 static struct rpmpd _platform##_##_name = { \ 50 68 .pd = { .name = #_name, }, \ 51 - .res_type = RPMPD_LDOA, \ 69 + .res_type = RPMPD_##r_type, \ 52 70 .res_id = r_id, \ 53 - .key = KEY_CORNER, \ 71 + .key = KEY_LEVEL, \ 54 72 } 55 73 56 - #define DEFINE_RPMPD_VFC(_platform, _name, r_id, r_type) \ 74 + #define DEFINE_RPMPD_VFC(_platform, _name, r_type, r_id) \ 57 75 static struct rpmpd _platform##_##_name = { \ 58 76 .pd = { .name = #_name, }, \ 59 - .res_type = r_type, \ 77 + .res_type = RPMPD_##r_type, \ 60 78 .res_id = r_id, \ 61 79 .key = KEY_FLOOR_CORNER, \ 62 80 } 63 81 64 - #define DEFINE_RPMPD_VFC_SMPA(_platform, _name, r_id) \ 65 - DEFINE_RPMPD_VFC(_platform, _name, r_id, RPMPD_SMPA) 66 - 67 - #define DEFINE_RPMPD_VFC_LDOA(_platform, _name, r_id) \ 68 - DEFINE_RPMPD_VFC(_platform, _name, r_id, RPMPD_LDOA) 82 + #define DEFINE_RPMPD_VFL(_platform, _name, r_type, r_id) \ 83 + static struct rpmpd _platform##_##_name = { \ 84 + .pd = { .name = #_name, }, \ 85 + .res_type = RPMPD_##r_type, \ 86 + .res_id = r_id, \ 87 + .key = KEY_FLOOR_LEVEL, \ 88 + } 69 89 70 90 struct rpmpd_req { 71 91 __le32 key; ··· 103 83 const int res_type; 104 84 const int res_id; 105 85 struct qcom_smd_rpm *rpm; 86 + unsigned int max_state; 106 87 __le32 key; 107 88 }; 108 89 109 90 struct rpmpd_desc { 110 91 struct rpmpd **rpmpds; 111 92 size_t num_pds; 93 + unsigned int max_state; 112 94 }; 113 95 114 96 static DEFINE_MUTEX(rpmpd_lock); 115 97 116 98 /* msm8996 RPM Power domains */ 117 - DEFINE_RPMPD_CORNER_SMPA(msm8996, vddcx, vddcx_ao, 1); 118 - DEFINE_RPMPD_CORNER_SMPA(msm8996, vddmx, vddmx_ao, 2); 119 - DEFINE_RPMPD_CORNER_LDOA(msm8996, vddsscx, 26); 99 + DEFINE_RPMPD_PAIR(msm8996, vddcx, vddcx_ao, SMPA, CORNER, 1); 100 + DEFINE_RPMPD_PAIR(msm8996, vddmx, vddmx_ao, SMPA, CORNER, 2); 101 + DEFINE_RPMPD_CORNER(msm8996, vddsscx, LDOA, 26); 120 102 121 - DEFINE_RPMPD_VFC_SMPA(msm8996, vddcx_vfc, 1); 122 - DEFINE_RPMPD_VFC_LDOA(msm8996, vddsscx_vfc, 26); 103 + 
DEFINE_RPMPD_VFC(msm8996, vddcx_vfc, SMPA, 1); 104 + DEFINE_RPMPD_VFC(msm8996, vddsscx_vfc, LDOA, 26); 123 105 124 106 static struct rpmpd *msm8996_rpmpds[] = { 125 107 [MSM8996_VDDCX] = &msm8996_vddcx, ··· 136 114 static const struct rpmpd_desc msm8996_desc = { 137 115 .rpmpds = msm8996_rpmpds, 138 116 .num_pds = ARRAY_SIZE(msm8996_rpmpds), 117 + .max_state = MAX_8996_RPMPD_STATE, 118 + }; 119 + 120 + /* msm8998 RPM Power domains */ 121 + DEFINE_RPMPD_PAIR(msm8998, vddcx, vddcx_ao, RWCX, LEVEL, 0); 122 + DEFINE_RPMPD_VFL(msm8998, vddcx_vfl, RWCX, 0); 123 + 124 + DEFINE_RPMPD_PAIR(msm8998, vddmx, vddmx_ao, RWMX, LEVEL, 0); 125 + DEFINE_RPMPD_VFL(msm8998, vddmx_vfl, RWMX, 0); 126 + 127 + DEFINE_RPMPD_LEVEL(msm8998, vdd_ssccx, RWSC, 0); 128 + DEFINE_RPMPD_VFL(msm8998, vdd_ssccx_vfl, RWSC, 0); 129 + 130 + DEFINE_RPMPD_LEVEL(msm8998, vdd_sscmx, RWSM, 0); 131 + DEFINE_RPMPD_VFL(msm8998, vdd_sscmx_vfl, RWSM, 0); 132 + 133 + static struct rpmpd *msm8998_rpmpds[] = { 134 + [MSM8998_VDDCX] = &msm8998_vddcx, 135 + [MSM8998_VDDCX_AO] = &msm8998_vddcx_ao, 136 + [MSM8998_VDDCX_VFL] = &msm8998_vddcx_vfl, 137 + [MSM8998_VDDMX] = &msm8998_vddmx, 138 + [MSM8998_VDDMX_AO] = &msm8998_vddmx_ao, 139 + [MSM8998_VDDMX_VFL] = &msm8998_vddmx_vfl, 140 + [MSM8998_SSCCX] = &msm8998_vdd_ssccx, 141 + [MSM8998_SSCCX_VFL] = &msm8998_vdd_ssccx_vfl, 142 + [MSM8998_SSCMX] = &msm8998_vdd_sscmx, 143 + [MSM8998_SSCMX_VFL] = &msm8998_vdd_sscmx_vfl, 144 + }; 145 + 146 + static const struct rpmpd_desc msm8998_desc = { 147 + .rpmpds = msm8998_rpmpds, 148 + .num_pds = ARRAY_SIZE(msm8998_rpmpds), 149 + .max_state = RPM_SMD_LEVEL_BINNING, 150 + }; 151 + 152 + /* qcs404 RPM Power domains */ 153 + DEFINE_RPMPD_PAIR(qcs404, vddmx, vddmx_ao, RWMX, LEVEL, 0); 154 + DEFINE_RPMPD_VFL(qcs404, vddmx_vfl, RWMX, 0); 155 + 156 + DEFINE_RPMPD_LEVEL(qcs404, vdd_lpicx, RWLC, 0); 157 + DEFINE_RPMPD_VFL(qcs404, vdd_lpicx_vfl, RWLC, 0); 158 + 159 + DEFINE_RPMPD_LEVEL(qcs404, vdd_lpimx, RWLM, 0); 160 + DEFINE_RPMPD_VFL(qcs404, 
vdd_lpimx_vfl, RWLM, 0); 161 + 162 + static struct rpmpd *qcs404_rpmpds[] = { 163 + [QCS404_VDDMX] = &qcs404_vddmx, 164 + [QCS404_VDDMX_AO] = &qcs404_vddmx_ao, 165 + [QCS404_VDDMX_VFL] = &qcs404_vddmx_vfl, 166 + [QCS404_LPICX] = &qcs404_vdd_lpicx, 167 + [QCS404_LPICX_VFL] = &qcs404_vdd_lpicx_vfl, 168 + [QCS404_LPIMX] = &qcs404_vdd_lpimx, 169 + [QCS404_LPIMX_VFL] = &qcs404_vdd_lpimx_vfl, 170 + }; 171 + 172 + static const struct rpmpd_desc qcs404_desc = { 173 + .rpmpds = qcs404_rpmpds, 174 + .num_pds = ARRAY_SIZE(qcs404_rpmpds), 175 + .max_state = RPM_SMD_LEVEL_BINNING, 139 176 }; 140 177 141 178 static const struct of_device_id rpmpd_match_table[] = { 142 179 { .compatible = "qcom,msm8996-rpmpd", .data = &msm8996_desc }, 180 + { .compatible = "qcom,msm8998-rpmpd", .data = &msm8998_desc }, 181 + { .compatible = "qcom,qcs404-rpmpd", .data = &qcs404_desc }, 143 182 { } 144 183 }; 145 184 ··· 308 225 int ret = 0; 309 226 struct rpmpd *pd = domain_to_rpmpd(domain); 310 227 311 - if (state > MAX_RPMPD_STATE) 312 - goto out; 228 + if (state > pd->max_state) 229 + state = pd->max_state; 313 230 314 231 mutex_lock(&rpmpd_lock); 315 232 316 233 pd->corner = state; 317 234 318 - if (!pd->enabled && pd->key != KEY_FLOOR_CORNER) 235 + /* Always send updates for vfc and vfl */ 236 + if (!pd->enabled && pd->key != KEY_FLOOR_CORNER && 237 + pd->key != KEY_FLOOR_LEVEL) 319 238 goto out; 320 239 321 240 ret = rpmpd_aggregate_corner(pd); ··· 372 287 } 373 288 374 289 rpmpds[i]->rpm = rpm; 290 + rpmpds[i]->max_state = desc->max_state; 375 291 rpmpds[i]->pd.power_off = rpmpd_power_off; 376 292 rpmpds[i]->pd.power_on = rpmpd_power_on; 377 293 rpmpds[i]->pd.set_performance_state = rpmpd_set_performance;
+115 -115
drivers/soc/rockchip/pm_domains.c
··· 86 86 #define to_rockchip_pd(gpd) container_of(gpd, struct rockchip_pm_domain, genpd) 87 87 88 88 #define DOMAIN(pwr, status, req, idle, ack, wakeup) \ 89 - { \ 90 - .pwr_mask = (pwr >= 0) ? BIT(pwr) : 0, \ 91 - .status_mask = (status >= 0) ? BIT(status) : 0, \ 92 - .req_mask = (req >= 0) ? BIT(req) : 0, \ 93 - .idle_mask = (idle >= 0) ? BIT(idle) : 0, \ 94 - .ack_mask = (ack >= 0) ? BIT(ack) : 0, \ 95 - .active_wakeup = wakeup, \ 89 + { \ 90 + .pwr_mask = (pwr), \ 91 + .status_mask = (status), \ 92 + .req_mask = (req), \ 93 + .idle_mask = (idle), \ 94 + .ack_mask = (ack), \ 95 + .active_wakeup = (wakeup), \ 96 96 } 97 97 98 98 #define DOMAIN_M(pwr, status, req, idle, ack, wakeup) \ 99 99 { \ 100 - .pwr_w_mask = (pwr >= 0) ? BIT(pwr + 16) : 0, \ 101 - .pwr_mask = (pwr >= 0) ? BIT(pwr) : 0, \ 102 - .status_mask = (status >= 0) ? BIT(status) : 0, \ 103 - .req_w_mask = (req >= 0) ? BIT(req + 16) : 0, \ 104 - .req_mask = (req >= 0) ? BIT(req) : 0, \ 105 - .idle_mask = (idle >= 0) ? BIT(idle) : 0, \ 106 - .ack_mask = (ack >= 0) ? BIT(ack) : 0, \ 100 + .pwr_w_mask = (pwr) << 16, \ 101 + .pwr_mask = (pwr), \ 102 + .status_mask = (status), \ 103 + .req_w_mask = (req) << 16, \ 104 + .req_mask = (req), \ 105 + .idle_mask = (idle), \ 106 + .ack_mask = (ack), \ 107 107 .active_wakeup = wakeup, \ 108 108 } 109 109 110 110 #define DOMAIN_RK3036(req, ack, idle, wakeup) \ 111 111 { \ 112 - .req_mask = (req >= 0) ? BIT(req) : 0, \ 113 - .req_w_mask = (req >= 0) ? BIT(req + 16) : 0, \ 114 - .ack_mask = (ack >= 0) ? BIT(ack) : 0, \ 115 - .idle_mask = (idle >= 0) ? 
BIT(idle) : 0, \ 112 + .req_mask = (req), \ 113 + .req_w_mask = (req) << 16, \ 114 + .ack_mask = (ack), \ 115 + .idle_mask = (idle), \ 116 116 .active_wakeup = wakeup, \ 117 117 } 118 118 119 119 #define DOMAIN_PX30(pwr, status, req, wakeup) \ 120 - DOMAIN_M(pwr, status, req, (req) + 16, req, wakeup) 120 + DOMAIN_M(pwr, status, req, (req) << 16, req, wakeup) 121 121 122 122 #define DOMAIN_RK3288(pwr, status, req, wakeup) \ 123 - DOMAIN(pwr, status, req, req, (req) + 16, wakeup) 123 + DOMAIN(pwr, status, req, req, (req) << 16, wakeup) 124 124 125 125 #define DOMAIN_RK3328(pwr, status, req, wakeup) \ 126 - DOMAIN_M(pwr, pwr, req, (req) + 10, req, wakeup) 126 + DOMAIN_M(pwr, pwr, req, (req) << 10, req, wakeup) 127 127 128 128 #define DOMAIN_RK3368(pwr, status, req, wakeup) \ 129 - DOMAIN(pwr, status, req, (req) + 16, req, wakeup) 129 + DOMAIN(pwr, status, req, (req) << 16, req, wakeup) 130 130 131 131 #define DOMAIN_RK3399(pwr, status, req, wakeup) \ 132 132 DOMAIN(pwr, status, req, req, req, wakeup) ··· 716 716 } 717 717 718 718 static const struct rockchip_domain_info px30_pm_domains[] = { 719 - [PX30_PD_USB] = DOMAIN_PX30(5, 5, 10, false), 720 - [PX30_PD_SDCARD] = DOMAIN_PX30(8, 8, 9, false), 721 - [PX30_PD_GMAC] = DOMAIN_PX30(10, 10, 6, false), 722 - [PX30_PD_MMC_NAND] = DOMAIN_PX30(11, 11, 5, false), 723 - [PX30_PD_VPU] = DOMAIN_PX30(12, 12, 14, false), 724 - [PX30_PD_VO] = DOMAIN_PX30(13, 13, 7, false), 725 - [PX30_PD_VI] = DOMAIN_PX30(14, 14, 8, false), 726 - [PX30_PD_GPU] = DOMAIN_PX30(15, 15, 2, false), 719 + [PX30_PD_USB] = DOMAIN_PX30(BIT(5), BIT(5), BIT(10), false), 720 + [PX30_PD_SDCARD] = DOMAIN_PX30(BIT(8), BIT(8), BIT(9), false), 721 + [PX30_PD_GMAC] = DOMAIN_PX30(BIT(10), BIT(10), BIT(6), false), 722 + [PX30_PD_MMC_NAND] = DOMAIN_PX30(BIT(11), BIT(11), BIT(5), false), 723 + [PX30_PD_VPU] = DOMAIN_PX30(BIT(12), BIT(12), BIT(14), false), 724 + [PX30_PD_VO] = DOMAIN_PX30(BIT(13), BIT(13), BIT(7), false), 725 + [PX30_PD_VI] = DOMAIN_PX30(BIT(14), BIT(14), 
BIT(8), false), 726 + [PX30_PD_GPU] = DOMAIN_PX30(BIT(15), BIT(15), BIT(2), false), 727 727 }; 728 728 729 729 static const struct rockchip_domain_info rk3036_pm_domains[] = { 730 - [RK3036_PD_MSCH] = DOMAIN_RK3036(14, 23, 30, true), 731 - [RK3036_PD_CORE] = DOMAIN_RK3036(13, 17, 24, false), 732 - [RK3036_PD_PERI] = DOMAIN_RK3036(12, 18, 25, false), 733 - [RK3036_PD_VIO] = DOMAIN_RK3036(11, 19, 26, false), 734 - [RK3036_PD_VPU] = DOMAIN_RK3036(10, 20, 27, false), 735 - [RK3036_PD_GPU] = DOMAIN_RK3036(9, 21, 28, false), 736 - [RK3036_PD_SYS] = DOMAIN_RK3036(8, 22, 29, false), 730 + [RK3036_PD_MSCH] = DOMAIN_RK3036(BIT(14), BIT(23), BIT(30), true), 731 + [RK3036_PD_CORE] = DOMAIN_RK3036(BIT(13), BIT(17), BIT(24), false), 732 + [RK3036_PD_PERI] = DOMAIN_RK3036(BIT(12), BIT(18), BIT(25), false), 733 + [RK3036_PD_VIO] = DOMAIN_RK3036(BIT(11), BIT(19), BIT(26), false), 734 + [RK3036_PD_VPU] = DOMAIN_RK3036(BIT(10), BIT(20), BIT(27), false), 735 + [RK3036_PD_GPU] = DOMAIN_RK3036(BIT(9), BIT(21), BIT(28), false), 736 + [RK3036_PD_SYS] = DOMAIN_RK3036(BIT(8), BIT(22), BIT(29), false), 737 737 }; 738 738 739 739 static const struct rockchip_domain_info rk3066_pm_domains[] = { 740 - [RK3066_PD_GPU] = DOMAIN(9, 9, 3, 24, 29, false), 741 - [RK3066_PD_VIDEO] = DOMAIN(8, 8, 4, 23, 28, false), 742 - [RK3066_PD_VIO] = DOMAIN(7, 7, 5, 22, 27, false), 743 - [RK3066_PD_PERI] = DOMAIN(6, 6, 2, 25, 30, false), 744 - [RK3066_PD_CPU] = DOMAIN(-1, 5, 1, 26, 31, false), 740 + [RK3066_PD_GPU] = DOMAIN(BIT(9), BIT(9), BIT(3), BIT(24), BIT(29), false), 741 + [RK3066_PD_VIDEO] = DOMAIN(BIT(8), BIT(8), BIT(4), BIT(23), BIT(28), false), 742 + [RK3066_PD_VIO] = DOMAIN(BIT(7), BIT(7), BIT(5), BIT(22), BIT(27), false), 743 + [RK3066_PD_PERI] = DOMAIN(BIT(6), BIT(6), BIT(2), BIT(25), BIT(30), false), 744 + [RK3066_PD_CPU] = DOMAIN(0, BIT(5), BIT(1), BIT(26), BIT(31), false), 745 745 }; 746 746 747 747 static const struct rockchip_domain_info rk3128_pm_domains[] = { 748 - [RK3128_PD_CORE] = 
DOMAIN_RK3288(0, 0, 4, false), 749 - [RK3128_PD_MSCH] = DOMAIN_RK3288(-1, -1, 6, true), 750 - [RK3128_PD_VIO] = DOMAIN_RK3288(3, 3, 2, false), 751 - [RK3128_PD_VIDEO] = DOMAIN_RK3288(2, 2, 1, false), 752 - [RK3128_PD_GPU] = DOMAIN_RK3288(1, 1, 3, false), 748 + [RK3128_PD_CORE] = DOMAIN_RK3288(BIT(0), BIT(0), BIT(4), false), 749 + [RK3128_PD_MSCH] = DOMAIN_RK3288(0, 0, BIT(6), true), 750 + [RK3128_PD_VIO] = DOMAIN_RK3288(BIT(3), BIT(3), BIT(2), false), 751 + [RK3128_PD_VIDEO] = DOMAIN_RK3288(BIT(2), BIT(2), BIT(1), false), 752 + [RK3128_PD_GPU] = DOMAIN_RK3288(BIT(1), BIT(1), BIT(3), false), 753 753 }; 754 754 755 755 static const struct rockchip_domain_info rk3188_pm_domains[] = { 756 - [RK3188_PD_GPU] = DOMAIN(9, 9, 3, 24, 29, false), 757 - [RK3188_PD_VIDEO] = DOMAIN(8, 8, 4, 23, 28, false), 758 - [RK3188_PD_VIO] = DOMAIN(7, 7, 5, 22, 27, false), 759 - [RK3188_PD_PERI] = DOMAIN(6, 6, 2, 25, 30, false), 760 - [RK3188_PD_CPU] = DOMAIN(5, 5, 1, 26, 31, false), 756 + [RK3188_PD_GPU] = DOMAIN(BIT(9), BIT(9), BIT(3), BIT(24), BIT(29), false), 757 + [RK3188_PD_VIDEO] = DOMAIN(BIT(8), BIT(8), BIT(4), BIT(23), BIT(28), false), 758 + [RK3188_PD_VIO] = DOMAIN(BIT(7), BIT(7), BIT(5), BIT(22), BIT(27), false), 759 + [RK3188_PD_PERI] = DOMAIN(BIT(6), BIT(6), BIT(2), BIT(25), BIT(30), false), 760 + [RK3188_PD_CPU] = DOMAIN(BIT(5), BIT(5), BIT(1), BIT(26), BIT(31), false), 761 761 }; 762 762 763 763 static const struct rockchip_domain_info rk3228_pm_domains[] = { 764 - [RK3228_PD_CORE] = DOMAIN_RK3036(0, 0, 16, true), 765 - [RK3228_PD_MSCH] = DOMAIN_RK3036(1, 1, 17, true), 766 - [RK3228_PD_BUS] = DOMAIN_RK3036(2, 2, 18, true), 767 - [RK3228_PD_SYS] = DOMAIN_RK3036(3, 3, 19, true), 768 - [RK3228_PD_VIO] = DOMAIN_RK3036(4, 4, 20, false), 769 - [RK3228_PD_VOP] = DOMAIN_RK3036(5, 5, 21, false), 770 - [RK3228_PD_VPU] = DOMAIN_RK3036(6, 6, 22, false), 771 - [RK3228_PD_RKVDEC] = DOMAIN_RK3036(7, 7, 23, false), 772 - [RK3228_PD_GPU] = DOMAIN_RK3036(8, 8, 24, false), 773 - 
[RK3228_PD_PERI] = DOMAIN_RK3036(9, 9, 25, true), 774 - [RK3228_PD_GMAC] = DOMAIN_RK3036(10, 10, 26, false), 764 + [RK3228_PD_CORE] = DOMAIN_RK3036(BIT(0), BIT(0), BIT(16), true), 765 + [RK3228_PD_MSCH] = DOMAIN_RK3036(BIT(1), BIT(1), BIT(17), true), 766 + [RK3228_PD_BUS] = DOMAIN_RK3036(BIT(2), BIT(2), BIT(18), true), 767 + [RK3228_PD_SYS] = DOMAIN_RK3036(BIT(3), BIT(3), BIT(19), true), 768 + [RK3228_PD_VIO] = DOMAIN_RK3036(BIT(4), BIT(4), BIT(20), false), 769 + [RK3228_PD_VOP] = DOMAIN_RK3036(BIT(5), BIT(5), BIT(21), false), 770 + [RK3228_PD_VPU] = DOMAIN_RK3036(BIT(6), BIT(6), BIT(22), false), 771 + [RK3228_PD_RKVDEC] = DOMAIN_RK3036(BIT(7), BIT(7), BIT(23), false), 772 + [RK3228_PD_GPU] = DOMAIN_RK3036(BIT(8), BIT(8), BIT(24), false), 773 + [RK3228_PD_PERI] = DOMAIN_RK3036(BIT(9), BIT(9), BIT(25), true), 774 + [RK3228_PD_GMAC] = DOMAIN_RK3036(BIT(10), BIT(10), BIT(26), false), 775 775 }; 776 776 777 777 static const struct rockchip_domain_info rk3288_pm_domains[] = { 778 - [RK3288_PD_VIO] = DOMAIN_RK3288(7, 7, 4, false), 779 - [RK3288_PD_HEVC] = DOMAIN_RK3288(14, 10, 9, false), 780 - [RK3288_PD_VIDEO] = DOMAIN_RK3288(8, 8, 3, false), 781 - [RK3288_PD_GPU] = DOMAIN_RK3288(9, 9, 2, false), 778 + [RK3288_PD_VIO] = DOMAIN_RK3288(BIT(7), BIT(7), BIT(4), false), 779 + [RK3288_PD_HEVC] = DOMAIN_RK3288(BIT(14), BIT(10), BIT(9), false), 780 + [RK3288_PD_VIDEO] = DOMAIN_RK3288(BIT(8), BIT(8), BIT(3), false), 781 + [RK3288_PD_GPU] = DOMAIN_RK3288(BIT(9), BIT(9), BIT(2), false), 782 782 }; 783 783 784 784 static const struct rockchip_domain_info rk3328_pm_domains[] = { 785 - [RK3328_PD_CORE] = DOMAIN_RK3328(-1, 0, 0, false), 786 - [RK3328_PD_GPU] = DOMAIN_RK3328(-1, 1, 1, false), 787 - [RK3328_PD_BUS] = DOMAIN_RK3328(-1, 2, 2, true), 788 - [RK3328_PD_MSCH] = DOMAIN_RK3328(-1, 3, 3, true), 789 - [RK3328_PD_PERI] = DOMAIN_RK3328(-1, 4, 4, true), 790 - [RK3328_PD_VIDEO] = DOMAIN_RK3328(-1, 5, 5, false), 791 - [RK3328_PD_HEVC] = DOMAIN_RK3328(-1, 6, 6, false), 792 - 
[RK3328_PD_VIO] = DOMAIN_RK3328(-1, 8, 8, false), 793 - [RK3328_PD_VPU] = DOMAIN_RK3328(-1, 9, 9, false), 785 + [RK3328_PD_CORE] = DOMAIN_RK3328(0, BIT(0), BIT(0), false), 786 + [RK3328_PD_GPU] = DOMAIN_RK3328(0, BIT(1), BIT(1), false), 787 + [RK3328_PD_BUS] = DOMAIN_RK3328(0, BIT(2), BIT(2), true), 788 + [RK3328_PD_MSCH] = DOMAIN_RK3328(0, BIT(3), BIT(3), true), 789 + [RK3328_PD_PERI] = DOMAIN_RK3328(0, BIT(4), BIT(4), true), 790 + [RK3328_PD_VIDEO] = DOMAIN_RK3328(0, BIT(5), BIT(5), false), 791 + [RK3328_PD_HEVC] = DOMAIN_RK3328(0, BIT(6), BIT(6), false), 792 + [RK3328_PD_VIO] = DOMAIN_RK3328(0, BIT(8), BIT(8), false), 793 + [RK3328_PD_VPU] = DOMAIN_RK3328(0, BIT(9), BIT(9), false), 794 794 }; 795 795 796 796 static const struct rockchip_domain_info rk3366_pm_domains[] = { 797 - [RK3366_PD_PERI] = DOMAIN_RK3368(10, 10, 6, true), 798 - [RK3366_PD_VIO] = DOMAIN_RK3368(14, 14, 8, false), 799 - [RK3366_PD_VIDEO] = DOMAIN_RK3368(13, 13, 7, false), 800 - [RK3366_PD_RKVDEC] = DOMAIN_RK3368(11, 11, 7, false), 801 - [RK3366_PD_WIFIBT] = DOMAIN_RK3368(8, 8, 9, false), 802 - [RK3366_PD_VPU] = DOMAIN_RK3368(12, 12, 7, false), 803 - [RK3366_PD_GPU] = DOMAIN_RK3368(15, 15, 2, false), 797 + [RK3366_PD_PERI] = DOMAIN_RK3368(BIT(10), BIT(10), BIT(6), true), 798 + [RK3366_PD_VIO] = DOMAIN_RK3368(BIT(14), BIT(14), BIT(8), false), 799 + [RK3366_PD_VIDEO] = DOMAIN_RK3368(BIT(13), BIT(13), BIT(7), false), 800 + [RK3366_PD_RKVDEC] = DOMAIN_RK3368(BIT(11), BIT(11), BIT(7), false), 801 + [RK3366_PD_WIFIBT] = DOMAIN_RK3368(BIT(8), BIT(8), BIT(9), false), 802 + [RK3366_PD_VPU] = DOMAIN_RK3368(BIT(12), BIT(12), BIT(7), false), 803 + [RK3366_PD_GPU] = DOMAIN_RK3368(BIT(15), BIT(15), BIT(2), false), 804 804 }; 805 805 806 806 static const struct rockchip_domain_info rk3368_pm_domains[] = { 807 - [RK3368_PD_PERI] = DOMAIN_RK3368(13, 12, 6, true), 808 - [RK3368_PD_VIO] = DOMAIN_RK3368(15, 14, 8, false), 809 - [RK3368_PD_VIDEO] = DOMAIN_RK3368(14, 13, 7, false), 810 - [RK3368_PD_GPU_0] = 
DOMAIN_RK3368(16, 15, 2, false), 811 - [RK3368_PD_GPU_1] = DOMAIN_RK3368(17, 16, 2, false), 807 + [RK3368_PD_PERI] = DOMAIN_RK3368(BIT(13), BIT(12), BIT(6), true), 808 + [RK3368_PD_VIO] = DOMAIN_RK3368(BIT(15), BIT(14), BIT(8), false), 809 + [RK3368_PD_VIDEO] = DOMAIN_RK3368(BIT(14), BIT(13), BIT(7), false), 810 + [RK3368_PD_GPU_0] = DOMAIN_RK3368(BIT(16), BIT(15), BIT(2), false), 811 + [RK3368_PD_GPU_1] = DOMAIN_RK3368(BIT(17), BIT(16), BIT(2), false), 812 812 }; 813 813 814 814 static const struct rockchip_domain_info rk3399_pm_domains[] = { 815 - [RK3399_PD_TCPD0] = DOMAIN_RK3399(8, 8, -1, false), 816 - [RK3399_PD_TCPD1] = DOMAIN_RK3399(9, 9, -1, false), 817 - [RK3399_PD_CCI] = DOMAIN_RK3399(10, 10, -1, true), 818 - [RK3399_PD_CCI0] = DOMAIN_RK3399(-1, -1, 15, true), 819 - [RK3399_PD_CCI1] = DOMAIN_RK3399(-1, -1, 16, true), 820 - [RK3399_PD_PERILP] = DOMAIN_RK3399(11, 11, 1, true), 821 - [RK3399_PD_PERIHP] = DOMAIN_RK3399(12, 12, 2, true), 822 - [RK3399_PD_CENTER] = DOMAIN_RK3399(13, 13, 14, true), 823 - [RK3399_PD_VIO] = DOMAIN_RK3399(14, 14, 17, false), 824 - [RK3399_PD_GPU] = DOMAIN_RK3399(15, 15, 0, false), 825 - [RK3399_PD_VCODEC] = DOMAIN_RK3399(16, 16, 3, false), 826 - [RK3399_PD_VDU] = DOMAIN_RK3399(17, 17, 4, false), 827 - [RK3399_PD_RGA] = DOMAIN_RK3399(18, 18, 5, false), 828 - [RK3399_PD_IEP] = DOMAIN_RK3399(19, 19, 6, false), 829 - [RK3399_PD_VO] = DOMAIN_RK3399(20, 20, -1, false), 830 - [RK3399_PD_VOPB] = DOMAIN_RK3399(-1, -1, 7, false), 831 - [RK3399_PD_VOPL] = DOMAIN_RK3399(-1, -1, 8, false), 832 - [RK3399_PD_ISP0] = DOMAIN_RK3399(22, 22, 9, false), 833 - [RK3399_PD_ISP1] = DOMAIN_RK3399(23, 23, 10, false), 834 - [RK3399_PD_HDCP] = DOMAIN_RK3399(24, 24, 11, false), 835 - [RK3399_PD_GMAC] = DOMAIN_RK3399(25, 25, 23, true), 836 - [RK3399_PD_EMMC] = DOMAIN_RK3399(26, 26, 24, true), 837 - [RK3399_PD_USB3] = DOMAIN_RK3399(27, 27, 12, true), 838 - [RK3399_PD_EDP] = DOMAIN_RK3399(28, 28, 22, false), 839 - [RK3399_PD_GIC] = DOMAIN_RK3399(29, 29, 27, 
true), 840 - [RK3399_PD_SD] = DOMAIN_RK3399(30, 30, 28, true), 841 - [RK3399_PD_SDIOAUDIO] = DOMAIN_RK3399(31, 31, 29, true), 815 + [RK3399_PD_TCPD0] = DOMAIN_RK3399(BIT(8), BIT(8), 0, false), 816 + [RK3399_PD_TCPD1] = DOMAIN_RK3399(BIT(9), BIT(9), 0, false), 817 + [RK3399_PD_CCI] = DOMAIN_RK3399(BIT(10), BIT(10), 0, true), 818 + [RK3399_PD_CCI0] = DOMAIN_RK3399(0, 0, BIT(15), true), 819 + [RK3399_PD_CCI1] = DOMAIN_RK3399(0, 0, BIT(16), true), 820 + [RK3399_PD_PERILP] = DOMAIN_RK3399(BIT(11), BIT(11), BIT(1), true), 821 + [RK3399_PD_PERIHP] = DOMAIN_RK3399(BIT(12), BIT(12), BIT(2), true), 822 + [RK3399_PD_CENTER] = DOMAIN_RK3399(BIT(13), BIT(13), BIT(14), true), 823 + [RK3399_PD_VIO] = DOMAIN_RK3399(BIT(14), BIT(14), BIT(17), false), 824 + [RK3399_PD_GPU] = DOMAIN_RK3399(BIT(15), BIT(15), BIT(0), false), 825 + [RK3399_PD_VCODEC] = DOMAIN_RK3399(BIT(16), BIT(16), BIT(3), false), 826 + [RK3399_PD_VDU] = DOMAIN_RK3399(BIT(17), BIT(17), BIT(4), false), 827 + [RK3399_PD_RGA] = DOMAIN_RK3399(BIT(18), BIT(18), BIT(5), false), 828 + [RK3399_PD_IEP] = DOMAIN_RK3399(BIT(19), BIT(19), BIT(6), false), 829 + [RK3399_PD_VO] = DOMAIN_RK3399(BIT(20), BIT(20), 0, false), 830 + [RK3399_PD_VOPB] = DOMAIN_RK3399(0, 0, BIT(7), false), 831 + [RK3399_PD_VOPL] = DOMAIN_RK3399(0, 0, BIT(8), false), 832 + [RK3399_PD_ISP0] = DOMAIN_RK3399(BIT(22), BIT(22), BIT(9), false), 833 + [RK3399_PD_ISP1] = DOMAIN_RK3399(BIT(23), BIT(23), BIT(10), false), 834 + [RK3399_PD_HDCP] = DOMAIN_RK3399(BIT(24), BIT(24), BIT(11), false), 835 + [RK3399_PD_GMAC] = DOMAIN_RK3399(BIT(25), BIT(25), BIT(23), true), 836 + [RK3399_PD_EMMC] = DOMAIN_RK3399(BIT(26), BIT(26), BIT(24), true), 837 + [RK3399_PD_USB3] = DOMAIN_RK3399(BIT(27), BIT(27), BIT(12), true), 838 + [RK3399_PD_EDP] = DOMAIN_RK3399(BIT(28), BIT(28), BIT(22), false), 839 + [RK3399_PD_GIC] = DOMAIN_RK3399(BIT(29), BIT(29), BIT(27), true), 840 + [RK3399_PD_SD] = DOMAIN_RK3399(BIT(30), BIT(30), BIT(28), true), 841 + [RK3399_PD_SDIOAUDIO] = 
DOMAIN_RK3399(BIT(31), BIT(31), BIT(29), true), 842 842 }; 843 843 844 844 static const struct rockchip_pmu_info px30_pmu = {
+1
drivers/soc/tegra/Kconfig
··· 109 109 config ARCH_TEGRA_194_SOC 110 110 bool "NVIDIA Tegra194 SoC" 111 111 select MAILBOX 112 + select PINCTRL_TEGRA194 112 113 select TEGRA_BPMP 113 114 select TEGRA_HSP_MBOX 114 115 select TEGRA_IVC
+4 -2
drivers/soc/tegra/fuse/fuse-tegra.c
··· 133 133 134 134 fuse->clk = devm_clk_get(&pdev->dev, "fuse"); 135 135 if (IS_ERR(fuse->clk)) { 136 - dev_err(&pdev->dev, "failed to get FUSE clock: %ld", 137 - PTR_ERR(fuse->clk)); 136 + if (PTR_ERR(fuse->clk) != -EPROBE_DEFER) 137 + dev_err(&pdev->dev, "failed to get FUSE clock: %ld", 138 + PTR_ERR(fuse->clk)); 139 + 138 140 fuse->base = base; 139 141 return PTR_ERR(fuse->clk); 140 142 }
+18
drivers/soc/tegra/pmc.c
··· 232 232 const char * const *reset_levels;
 233 233 unsigned int num_reset_levels;
 234 234
 235 + /*
 236 + * These describe events that can wake the system from sleep (i.e.
 237 + * LP0 or SC7). Wakeup from other sleep states (such as LP1 or LP2)
 238 + * is dealt with in the LIC.
 239 + */
 235 240 const struct tegra_wake_event *wake_events;
 236 241 unsigned int num_wake_events;
 237 242 };
··· 1860 1855 unsigned int i;
 1861 1856 int err = 0;
 1862 1857
 1858 + if (WARN_ON(num_irqs > 1))
 1859 + return -EINVAL;
 1860 +
 1863 1861 for (i = 0; i < soc->num_wake_events; i++) {
 1864 1862 const struct tegra_wake_event *event = &soc->wake_events[i];
 1865 1863
··· 1903 1895 }
 1904 1896 }
 1905 1897
 1898 + /*
 1899 + * For interrupts that don't have associated wake events, assign a
 1900 + * dummy hardware IRQ number. This is used in the ->irq_set_type()
 1901 + * and ->irq_set_wake() callbacks to return early for these IRQs.
 1902 + */
 1906 1903 if (i == soc->num_wake_events)
 1907 1904 err = irq_domain_set_hwirq_and_chip(domain, virq, ULONG_MAX,
 1908 1905 &pmc->irq, pmc);
··· 1925 1912 struct tegra_pmc *pmc = irq_data_get_irq_chip_data(data);
 1926 1913 unsigned int offset, bit;
 1927 1914 u32 value;
 1915 +
 1916 + /* nothing to do if there's no associated wake event */
 1917 + if (WARN_ON(data->hwirq == ULONG_MAX))
 1918 + return 0;
 1928 1919
 1929 1920 offset = data->hwirq / 32;
 1930 1921 bit = data->hwirq % 32;
··· 1957 1940 struct tegra_pmc *pmc = irq_data_get_irq_chip_data(data);
 1958 1941 u32 value;
 1959 1942
 1943 + /* nothing to do if there's no associated wake event */
 1960 1944 if (data->hwirq == ULONG_MAX)
 1961 1945 return 0;
 1962 1946
+14
include/dt-bindings/power/qcom-aoss-qmp.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* Copyright (c) 2018, Linaro Ltd. */ 3 + 4 + #ifndef __DT_BINDINGS_POWER_QCOM_AOSS_QMP_H 5 + #define __DT_BINDINGS_POWER_QCOM_AOSS_QMP_H 6 + 7 + #define AOSS_QMP_LS_CDSP 0 8 + #define AOSS_QMP_LS_LPASS 1 9 + #define AOSS_QMP_LS_MODEM 2 10 + #define AOSS_QMP_LS_SLPI 3 11 + #define AOSS_QMP_LS_SPSS 4 12 + #define AOSS_QMP_LS_VENUS 5 13 + 14 + #endif
+34
include/dt-bindings/power/qcom-rpmpd.h
··· 36 36 #define MSM8996_VDDSSCX 5 37 37 #define MSM8996_VDDSSCX_VFC 6 38 38 39 + /* MSM8998 Power Domain Indexes */ 40 + #define MSM8998_VDDCX 0 41 + #define MSM8998_VDDCX_AO 1 42 + #define MSM8998_VDDCX_VFL 2 43 + #define MSM8998_VDDMX 3 44 + #define MSM8998_VDDMX_AO 4 45 + #define MSM8998_VDDMX_VFL 5 46 + #define MSM8998_SSCCX 6 47 + #define MSM8998_SSCCX_VFL 7 48 + #define MSM8998_SSCMX 8 49 + #define MSM8998_SSCMX_VFL 9 50 + 51 + /* QCS404 Power Domains */ 52 + #define QCS404_VDDMX 0 53 + #define QCS404_VDDMX_AO 1 54 + #define QCS404_VDDMX_VFL 2 55 + #define QCS404_LPICX 3 56 + #define QCS404_LPICX_VFL 4 57 + #define QCS404_LPIMX 5 58 + #define QCS404_LPIMX_VFL 6 59 + 60 + /* RPM SMD Power Domain performance levels */ 61 + #define RPM_SMD_LEVEL_RETENTION 16 62 + #define RPM_SMD_LEVEL_RETENTION_PLUS 32 63 + #define RPM_SMD_LEVEL_MIN_SVS 48 64 + #define RPM_SMD_LEVEL_LOW_SVS 64 65 + #define RPM_SMD_LEVEL_SVS 128 66 + #define RPM_SMD_LEVEL_SVS_PLUS 192 67 + #define RPM_SMD_LEVEL_NOM 256 68 + #define RPM_SMD_LEVEL_NOM_PLUS 320 69 + #define RPM_SMD_LEVEL_TURBO 384 70 + #define RPM_SMD_LEVEL_TURBO_NO_CPR 416 71 + #define RPM_SMD_LEVEL_BINNING 512 72 + 39 73 #endif
+51
include/dt-bindings/reset/bitmain,bm1880-reset.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0+ */ 2 + /* 3 + * Copyright (c) 2018 Bitmain Ltd. 4 + * Copyright (c) 2019 Linaro Ltd. 5 + */ 6 + 7 + #ifndef _DT_BINDINGS_BM1880_RESET_H 8 + #define _DT_BINDINGS_BM1880_RESET_H 9 + 10 + #define BM1880_RST_MAIN_AP 0 11 + #define BM1880_RST_SECOND_AP 1 12 + #define BM1880_RST_DDR 2 13 + #define BM1880_RST_VIDEO 3 14 + #define BM1880_RST_JPEG 4 15 + #define BM1880_RST_VPP 5 16 + #define BM1880_RST_GDMA 6 17 + #define BM1880_RST_AXI_SRAM 7 18 + #define BM1880_RST_TPU 8 19 + #define BM1880_RST_USB 9 20 + #define BM1880_RST_ETH0 10 21 + #define BM1880_RST_ETH1 11 22 + #define BM1880_RST_NAND 12 23 + #define BM1880_RST_EMMC 13 24 + #define BM1880_RST_SD 14 25 + #define BM1880_RST_SDMA 15 26 + #define BM1880_RST_I2S0 16 27 + #define BM1880_RST_I2S1 17 28 + #define BM1880_RST_UART0_1_CLK 18 29 + #define BM1880_RST_UART0_1_ACLK 19 30 + #define BM1880_RST_UART2_3_CLK 20 31 + #define BM1880_RST_UART2_3_ACLK 21 32 + #define BM1880_RST_MINER 22 33 + #define BM1880_RST_I2C0 23 34 + #define BM1880_RST_I2C1 24 35 + #define BM1880_RST_I2C2 25 36 + #define BM1880_RST_I2C3 26 37 + #define BM1880_RST_I2C4 27 38 + #define BM1880_RST_PWM0 28 39 + #define BM1880_RST_PWM1 29 40 + #define BM1880_RST_PWM2 30 41 + #define BM1880_RST_PWM3 31 42 + #define BM1880_RST_SPI 32 43 + #define BM1880_RST_GPIO0 33 44 + #define BM1880_RST_GPIO1 34 45 + #define BM1880_RST_GPIO2 35 46 + #define BM1880_RST_EFUSE 36 47 + #define BM1880_RST_WDT 37 48 + #define BM1880_RST_AHB_ROM 38 49 + #define BM1880_RST_SPIC 39 50 + 51 + #endif /* _DT_BINDINGS_BM1880_RESET_H */
+12
include/linux/platform_data/ti-sysc.h
··· 19 19 20 20 struct ti_sysc_cookie { 21 21 void *data; 22 + void *clkdm; 22 23 }; 23 24 24 25 /** ··· 47 46 s8 emufree_shift; 48 47 }; 49 48 49 + #define SYSC_MODULE_QUIRK_HDQ1W BIT(17) 50 + #define SYSC_MODULE_QUIRK_I2C BIT(16) 51 + #define SYSC_MODULE_QUIRK_WDT BIT(15) 52 + #define SYSS_QUIRK_RESETDONE_INVERTED BIT(14) 50 53 #define SYSC_QUIRK_SWSUP_MSTANDBY BIT(13) 51 54 #define SYSC_QUIRK_SWSUP_SIDLE_ACT BIT(12) 52 55 #define SYSC_QUIRK_SWSUP_SIDLE BIT(11) ··· 130 125 }; 131 126 132 127 struct device; 128 + struct clk; 133 129 134 130 struct ti_sysc_platform_data { 135 131 struct of_dev_auxdata *auxdata; 132 + int (*init_clockdomain)(struct device *dev, struct clk *fck, 133 + struct clk *ick, struct ti_sysc_cookie *cookie); 134 + void (*clkdm_deny_idle)(struct device *dev, 135 + const struct ti_sysc_cookie *cookie); 136 + void (*clkdm_allow_idle)(struct device *dev, 137 + const struct ti_sysc_cookie *cookie); 136 138 int (*init_module)(struct device *dev, 137 139 const struct ti_sysc_module_data *data, 138 140 struct ti_sysc_cookie *cookie);
+1
include/linux/scmi_protocol.h
··· 144 144 struct scmi_sensor_info { 145 145 u32 id; 146 146 u8 type; 147 + s8 scale; 147 148 char name[SCMI_MAX_STR_SIZE]; 148 149 }; 149 150
+246
include/linux/soc/ti/ti_sci_protocol.h
··· 241 241 u16 global_event, u8 vint_status_bit); 242 242 }; 243 243 244 + /* RA config.addr_lo parameter is valid for RM ring configure TI_SCI message */ 245 + #define TI_SCI_MSG_VALUE_RM_RING_ADDR_LO_VALID BIT(0) 246 + /* RA config.addr_hi parameter is valid for RM ring configure TI_SCI message */ 247 + #define TI_SCI_MSG_VALUE_RM_RING_ADDR_HI_VALID BIT(1) 248 + /* RA config.count parameter is valid for RM ring configure TI_SCI message */ 249 + #define TI_SCI_MSG_VALUE_RM_RING_COUNT_VALID BIT(2) 250 + /* RA config.mode parameter is valid for RM ring configure TI_SCI message */ 251 + #define TI_SCI_MSG_VALUE_RM_RING_MODE_VALID BIT(3) 252 + /* RA config.size parameter is valid for RM ring configure TI_SCI message */ 253 + #define TI_SCI_MSG_VALUE_RM_RING_SIZE_VALID BIT(4) 254 + /* RA config.order_id parameter is valid for RM ring configure TISCI message */ 255 + #define TI_SCI_MSG_VALUE_RM_RING_ORDER_ID_VALID BIT(5) 256 + 257 + #define TI_SCI_MSG_VALUE_RM_ALL_NO_ORDER \ 258 + (TI_SCI_MSG_VALUE_RM_RING_ADDR_LO_VALID | \ 259 + TI_SCI_MSG_VALUE_RM_RING_ADDR_HI_VALID | \ 260 + TI_SCI_MSG_VALUE_RM_RING_COUNT_VALID | \ 261 + TI_SCI_MSG_VALUE_RM_RING_MODE_VALID | \ 262 + TI_SCI_MSG_VALUE_RM_RING_SIZE_VALID) 263 + 264 + /** 265 + * struct ti_sci_rm_ringacc_ops - Ring Accelerator Management operations 266 + * @config: configure the SoC Navigator Subsystem Ring Accelerator ring 267 + * @get_config: get the SoC Navigator Subsystem Ring Accelerator ring 268 + * configuration 269 + */ 270 + struct ti_sci_rm_ringacc_ops { 271 + int (*config)(const struct ti_sci_handle *handle, 272 + u32 valid_params, u16 nav_id, u16 index, 273 + u32 addr_lo, u32 addr_hi, u32 count, u8 mode, 274 + u8 size, u8 order_id 275 + ); 276 + int (*get_config)(const struct ti_sci_handle *handle, 277 + u32 nav_id, u32 index, u8 *mode, 278 + u32 *addr_lo, u32 *addr_hi, u32 *count, 279 + u8 *size, u8 *order_id); 280 + }; 281 + 282 + /** 283 + * struct ti_sci_rm_psil_ops - PSI-L thread operations 284 + * 
@pair: pair PSI-L source thread to a destination thread. 285 + * If the src_thread is mapped to UDMA tchan, the corresponding channel's 286 + * TCHAN_THRD_ID register is updated. 287 + * If the dst_thread is mapped to UDMA rchan, the corresponding channel's 288 + * RCHAN_THRD_ID register is updated. 289 + * @unpair: unpair PSI-L source thread from a destination thread. 290 + * If the src_thread is mapped to UDMA tchan, the corresponding channel's 291 + * TCHAN_THRD_ID register is cleared. 292 + * If the dst_thread is mapped to UDMA rchan, the corresponding channel's 293 + * RCHAN_THRD_ID register is cleared. 294 + */ 295 + struct ti_sci_rm_psil_ops { 296 + int (*pair)(const struct ti_sci_handle *handle, u32 nav_id, 297 + u32 src_thread, u32 dst_thread); 298 + int (*unpair)(const struct ti_sci_handle *handle, u32 nav_id, 299 + u32 src_thread, u32 dst_thread); 300 + }; 301 + 302 + /* UDMAP channel types */ 303 + #define TI_SCI_RM_UDMAP_CHAN_TYPE_PKT_PBRR 2 304 + #define TI_SCI_RM_UDMAP_CHAN_TYPE_PKT_PBRR_SB 3 /* RX only */ 305 + #define TI_SCI_RM_UDMAP_CHAN_TYPE_3RDP_PBRR 10 306 + #define TI_SCI_RM_UDMAP_CHAN_TYPE_3RDP_PBVR 11 307 + #define TI_SCI_RM_UDMAP_CHAN_TYPE_3RDP_BCOPY_PBRR 12 308 + #define TI_SCI_RM_UDMAP_CHAN_TYPE_3RDP_BCOPY_PBVR 13 309 + 310 + #define TI_SCI_RM_UDMAP_RX_FLOW_DESC_HOST 0 311 + #define TI_SCI_RM_UDMAP_RX_FLOW_DESC_MONO 2 312 + 313 + #define TI_SCI_RM_UDMAP_CHAN_BURST_SIZE_64_BYTES 1 314 + #define TI_SCI_RM_UDMAP_CHAN_BURST_SIZE_128_BYTES 2 315 + #define TI_SCI_RM_UDMAP_CHAN_BURST_SIZE_256_BYTES 3 316 + 317 + /* UDMAP TX/RX channel valid_params common declarations */ 318 + #define TI_SCI_MSG_VALUE_RM_UDMAP_CH_PAUSE_ON_ERR_VALID BIT(0) 319 + #define TI_SCI_MSG_VALUE_RM_UDMAP_CH_ATYPE_VALID BIT(1) 320 + #define TI_SCI_MSG_VALUE_RM_UDMAP_CH_CHAN_TYPE_VALID BIT(2) 321 + #define TI_SCI_MSG_VALUE_RM_UDMAP_CH_FETCH_SIZE_VALID BIT(3) 322 + #define TI_SCI_MSG_VALUE_RM_UDMAP_CH_CQ_QNUM_VALID BIT(4) 323 + #define 
TI_SCI_MSG_VALUE_RM_UDMAP_CH_PRIORITY_VALID BIT(5) 324 + #define TI_SCI_MSG_VALUE_RM_UDMAP_CH_QOS_VALID BIT(6) 325 + #define TI_SCI_MSG_VALUE_RM_UDMAP_CH_ORDER_ID_VALID BIT(7) 326 + #define TI_SCI_MSG_VALUE_RM_UDMAP_CH_SCHED_PRIORITY_VALID BIT(8) 327 + #define TI_SCI_MSG_VALUE_RM_UDMAP_CH_BURST_SIZE_VALID BIT(14) 328 + 329 + /** 330 + * Configures a Navigator Subsystem UDMAP transmit channel 331 + * 332 + * Configures a Navigator Subsystem UDMAP transmit channel registers. 333 + * See @ti_sci_msg_rm_udmap_tx_ch_cfg_req 334 + */ 335 + struct ti_sci_msg_rm_udmap_tx_ch_cfg { 336 + u32 valid_params; 337 + #define TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_FILT_EINFO_VALID BIT(9) 338 + #define TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_FILT_PSWORDS_VALID BIT(10) 339 + #define TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_SUPR_TDPKT_VALID BIT(11) 340 + #define TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_CREDIT_COUNT_VALID BIT(12) 341 + #define TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_FDEPTH_VALID BIT(13) 342 + u16 nav_id; 343 + u16 index; 344 + u8 tx_pause_on_err; 345 + u8 tx_filt_einfo; 346 + u8 tx_filt_pswords; 347 + u8 tx_atype; 348 + u8 tx_chan_type; 349 + u8 tx_supr_tdpkt; 350 + u16 tx_fetch_size; 351 + u8 tx_credit_count; 352 + u16 txcq_qnum; 353 + u8 tx_priority; 354 + u8 tx_qos; 355 + u8 tx_orderid; 356 + u16 fdepth; 357 + u8 tx_sched_priority; 358 + u8 tx_burst_size; 359 + }; 360 + 361 + /** 362 + * Configures a Navigator Subsystem UDMAP receive channel 363 + * 364 + * Configures a Navigator Subsystem UDMAP receive channel registers. 
365 + * See @ti_sci_msg_rm_udmap_rx_ch_cfg_req
 366 + */
 367 + struct ti_sci_msg_rm_udmap_rx_ch_cfg {
 368 + u32 valid_params;
 369 + #define TI_SCI_MSG_VALUE_RM_UDMAP_CH_RX_FLOWID_START_VALID BIT(9)
 370 + #define TI_SCI_MSG_VALUE_RM_UDMAP_CH_RX_FLOWID_CNT_VALID BIT(10)
 371 + #define TI_SCI_MSG_VALUE_RM_UDMAP_CH_RX_IGNORE_SHORT_VALID BIT(11)
 372 + #define TI_SCI_MSG_VALUE_RM_UDMAP_CH_RX_IGNORE_LONG_VALID BIT(12)
 373 + u16 nav_id;
 374 + u16 index;
 375 + u16 rx_fetch_size;
 376 + u16 rxcq_qnum;
 377 + u8 rx_priority;
 378 + u8 rx_qos;
 379 + u8 rx_orderid;
 380 + u8 rx_sched_priority;
 381 + u16 flowid_start;
 382 + u16 flowid_cnt;
 383 + u8 rx_pause_on_err;
 384 + u8 rx_atype;
 385 + u8 rx_chan_type;
 386 + u8 rx_ignore_short;
 387 + u8 rx_ignore_long;
 388 + u8 rx_burst_size;
 389 + };
 390 +
 391 + /**
 392 + * Configures a Navigator Subsystem UDMAP receive flow
 393 + *
 394 + * Configures a Navigator Subsystem UDMAP receive flow's registers.
 395 + * See @ti_sci_msg_rm_udmap_flow_cfg_req
 396 + */
 397 + struct ti_sci_msg_rm_udmap_flow_cfg {
 398 + u32 valid_params;
 399 + #define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_EINFO_PRESENT_VALID BIT(0)
 400 + #define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_PSINFO_PRESENT_VALID BIT(1)
 401 + #define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_ERROR_HANDLING_VALID BIT(2)
 402 + #define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_DESC_TYPE_VALID BIT(3)
 403 + #define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_SOP_OFFSET_VALID BIT(4)
 404 + #define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_DEST_QNUM_VALID BIT(5)
 405 + #define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_SRC_TAG_HI_VALID BIT(6)
 406 + #define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_SRC_TAG_LO_VALID BIT(7)
 407 + #define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_DEST_TAG_HI_VALID BIT(8)
 408 + #define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_DEST_TAG_LO_VALID BIT(9)
 409 + #define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_SRC_TAG_HI_SEL_VALID BIT(10)
 410 + #define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_SRC_TAG_LO_SEL_VALID BIT(11)
 411 + #define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_DEST_TAG_HI_SEL_VALID BIT(12)
 412 + #define
TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_DEST_TAG_LO_SEL_VALID BIT(13) 413 + #define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_FDQ0_SZ0_QNUM_VALID BIT(14) 414 + #define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_FDQ1_QNUM_VALID BIT(15) 415 + #define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_FDQ2_QNUM_VALID BIT(16) 416 + #define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_FDQ3_QNUM_VALID BIT(17) 417 + #define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_PS_LOCATION_VALID BIT(18) 418 + u16 nav_id; 419 + u16 flow_index; 420 + u8 rx_einfo_present; 421 + u8 rx_psinfo_present; 422 + u8 rx_error_handling; 423 + u8 rx_desc_type; 424 + u16 rx_sop_offset; 425 + u16 rx_dest_qnum; 426 + u8 rx_src_tag_hi; 427 + u8 rx_src_tag_lo; 428 + u8 rx_dest_tag_hi; 429 + u8 rx_dest_tag_lo; 430 + u8 rx_src_tag_hi_sel; 431 + u8 rx_src_tag_lo_sel; 432 + u8 rx_dest_tag_hi_sel; 433 + u8 rx_dest_tag_lo_sel; 434 + u16 rx_fdq0_sz0_qnum; 435 + u16 rx_fdq1_qnum; 436 + u16 rx_fdq2_qnum; 437 + u16 rx_fdq3_qnum; 438 + u8 rx_ps_location; 439 + }; 440 + 441 + /** 442 + * struct ti_sci_rm_udmap_ops - UDMA Management operations 443 + * @tx_ch_cfg: configure SoC Navigator Subsystem UDMA transmit channel. 444 + * @rx_ch_cfg: configure SoC Navigator Subsystem UDMA receive channel. 445 + * @rx_flow_cfg1: configure SoC Navigator Subsystem UDMA receive flow. 446 + */ 447 + struct ti_sci_rm_udmap_ops { 448 + int (*tx_ch_cfg)(const struct ti_sci_handle *handle, 449 + const struct ti_sci_msg_rm_udmap_tx_ch_cfg *params); 450 + int (*rx_ch_cfg)(const struct ti_sci_handle *handle, 451 + const struct ti_sci_msg_rm_udmap_rx_ch_cfg *params); 452 + int (*rx_flow_cfg)(const struct ti_sci_handle *handle, 453 + const struct ti_sci_msg_rm_udmap_flow_cfg *params); 454 + }; 455 + 456 + /** 457 + * struct ti_sci_proc_ops - Processor Control operations 458 + * @request: Request to control a physical processor. 
The requesting host 459 + * should be in the processor access list 460 + * @release: Relinquish a physical processor control 461 + * @handover: Handover a physical processor control to another host 462 + * in the permitted list 463 + * @set_config: Set base configuration of a processor 464 + * @set_control: Setup limited control flags in specific cases 465 + * @get_status: Get the state of physical processor 466 + * 467 + * NOTE: The following paramteres are generic in nature for all these ops, 468 + * -handle: Pointer to TI SCI handle as retrieved by *ti_sci_get_handle 469 + * -pid: Processor ID 470 + * -hid: Host ID 471 + */ 472 + struct ti_sci_proc_ops { 473 + int (*request)(const struct ti_sci_handle *handle, u8 pid); 474 + int (*release)(const struct ti_sci_handle *handle, u8 pid); 475 + int (*handover)(const struct ti_sci_handle *handle, u8 pid, u8 hid); 476 + int (*set_config)(const struct ti_sci_handle *handle, u8 pid, 477 + u64 boot_vector, u32 cfg_set, u32 cfg_clr); 478 + int (*set_control)(const struct ti_sci_handle *handle, u8 pid, 479 + u32 ctrl_set, u32 ctrl_clr); 480 + int (*get_status)(const struct ti_sci_handle *handle, u8 pid, 481 + u64 *boot_vector, u32 *cfg_flags, u32 *ctrl_flags, 482 + u32 *status_flags); 483 + }; 484 + 244 485 /** 245 486 * struct ti_sci_ops - Function support for TI SCI 246 487 * @dev_ops: Device specific operations 247 488 * @clk_ops: Clock specific operations 248 489 * @rm_core_ops: Resource management core operations. 
249 490 * @rm_irq_ops: IRQ management specific operations 491 + * @proc_ops: Processor Control specific operations 250 492 */ 251 493 struct ti_sci_ops { 252 494 struct ti_sci_core_ops core_ops; ··· 496 254 struct ti_sci_clk_ops clk_ops; 497 255 struct ti_sci_rm_core_ops rm_core_ops; 498 256 struct ti_sci_rm_irq_ops rm_irq_ops; 257 + struct ti_sci_rm_ringacc_ops rm_ring_ops; 258 + struct ti_sci_rm_psil_ops rm_psil_ops; 259 + struct ti_sci_rm_udmap_ops rm_udmap_ops; 260 + struct ti_sci_proc_ops proc_ops; 499 261 }; 500 262 501 263 /**
+3 -3
include/memory/jedec_ddr.h drivers/memory/jedec_ddr.h
··· 6 6 *
7 7 * Aneesh V <aneesh@ti.com>
8 8 */
9 - #ifndef __LINUX_JEDEC_DDR_H
10 - #define __LINUX_JEDEC_DDR_H
9 + #ifndef __JEDEC_DDR_H
10 + #define __JEDEC_DDR_H
11 11
12 12 #include <linux/types.h>
13 13
··· 169 169 lpddr2_jedec_timings[NUM_DDR_TIMING_TABLE_ENTRIES];
170 170 extern const struct lpddr2_min_tck lpddr2_jedec_min_tck;
171 171
172 - #endif /* __LINUX_JEDEC_DDR_H */
172 + #endif /* __JEDEC_DDR_H */
+8
include/soc/fsl/bman.h
··· 133 133 * failed to probe or 0 if the bman driver has not probed yet.
134 134 */
135 135 int bman_is_probed(void);
136 + /**
137 + * bman_portals_probed - Check if all cpu bound bman portals are probed
138 + *
139 + * Returns 1 if all the required cpu bound bman portals successfully probed,
140 + * -1 if probe errors appeared or 0 if the bman portals have not yet finished
141 + * probing.
142 + */
143 + int bman_portals_probed(void);
136 144
137 145 #endif /* __FSL_BMAN_H */
+9
include/soc/fsl/qman.h
··· 1195 1195 int qman_is_probed(void);
1196 1196
1197 1197 /**
1198 + * qman_portals_probed - Check if all cpu bound qman portals are probed
1199 + *
1200 + * Returns 1 if all the required cpu bound qman portals successfully probed,
1201 + * -1 if probe errors appeared or 0 if the qman portals have not yet finished
1202 + * probing.
1203 + */
1204 + int qman_portals_probed(void);
1205 +
1198 1206 /**
1199 1207 * qman_dqrr_get_ithresh - Get coalesce interrupt threshold
1200 1208 * @portal: portal to get the value for
1201 1209 * @ithresh: threshold pointer
-8
lib/Kconfig
··· 531 531 config CLZ_TAB
532 532 bool
533 533
534 - config DDR
535 - bool "JEDEC DDR data"
536 - help
537 - Data from JEDEC specs for DDR SDRAM memories,
538 - particularly the AC timing parameters and addressing
539 - information. This data is useful for drivers handling
540 - DDR SDRAM controllers.
541 -
542 534 config IRQ_POLL
543 535 bool "IRQ polling library"
544 536 help
-2
lib/Makefile
··· 209 209
210 210 lib-$(CONFIG_CLZ_TAB) += clz_tab.o
211 211
212 - obj-$(CONFIG_DDR) += jedec_ddr_data.o
213 -
214 212 obj-$(CONFIG_GENERIC_STRNCPY_FROM_USER) += strncpy_from_user.o
215 213 obj-$(CONFIG_GENERIC_STRNLEN_USER) += strnlen_user.o
216 214
+3 -2
lib/jedec_ddr_data.c drivers/memory/jedec_ddr_data.c
··· 7 7 * Aneesh V <aneesh@ti.com>
8 8 */
9 9
10 - #include <memory/jedec_ddr.h>
11 - #include <linux/module.h>
10 + #include <linux/export.h>
11 +
12 + #include "jedec_ddr.h"
12 13
13 14 /* LPDDR2 addressing details from JESD209-2 section 2.4 */
14 15 const struct lpddr2_addressing