
Merge tag 'rproc-v6.10' of git://git.kernel.org/pub/scm/linux/kernel/git/remoteproc/linux

Pull remoteproc updates from Bjorn Andersson:
"This makes the remoteproc core rproc_class const.

DeviceTree bindings for a few different Qualcomm remoteprocs are
updated to remove a range of validation warnings/errors. The Qualcomm
SMD binding marks qcom,ipc as deprecated, in favor of the mailbox
interface.

The TI K3 R5 remoteproc driver is updated to ensure that cores are
powered up in the appropriate order. The driver also sees a couple of
fixes related to cleanups in error paths during probe.
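The core-ordering rule the K3 R5 changes enforce (core 1 must not start before core 0, and core 0 must not stop before core 1) can be modeled as a small userspace C sketch. The types and helpers below are illustrative stand-ins, not the driver's API; they only mirror the -EPERM checks added to the start/stop paths.

```c
#include <assert.h>

/* Hypothetical model of the split-mode boot-order gate: core 1 may only
 * start once core 0 is running, and core 0 may only stop after core 1
 * has stopped. Names are illustrative, not driver API. */
enum core_state { CORE_OFFLINE, CORE_RUNNING };

struct model_core { enum core_state state; };

/* returns 0 on success, -1 (standing in for -EPERM) on ordering violation */
static int model_core_start(struct model_core *core0, struct model_core *target)
{
    if (target != core0 && core0->state == CORE_OFFLINE)
        return -1;  /* can not start core 1 before core 0 */
    target->state = CORE_RUNNING;
    return 0;
}

static int model_core_stop(struct model_core *core1, struct model_core *target)
{
    if (target != core1 && core1->state != CORE_OFFLINE)
        return -1;  /* can not stop core 0 before core 1 */
    target->state = CORE_OFFLINE;
    return 0;
}
```

In the driver itself the same invariant is enforced against `rproc->state`, plus a wait queue so sequential auto-boot of core 1 blocks until core 0 is released from reset.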

The Mediatek remoteproc driver is extended to support the MT8188 SCP
core 1. Support for varying DRAM and IPI shared buffer sizes is
introduced, together with a couple of bug fixes and improvements to
the driver.
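The varying-sizes change replaces the driver's single hard-coded buffer constants with a per-SoC size table. The sketch below models that table with the values from the series (288-byte/0x500000 defaults, 600-byte/0xA00000 for MT8188 SCP core 1); the `scp_sizes_for()` lookup helper is hypothetical — the real driver reaches the table through its `of_device_id` match data.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Per-SoC size table, as introduced by the series. */
struct mtk_scp_sizes_data {
    size_t max_dram_size;
    size_t ipi_share_buffer_size;
};

static const struct mtk_scp_sizes_data default_scp_sizes = {
    .max_dram_size = 0x500000,
    .ipi_share_buffer_size = 288,
};

static const struct mtk_scp_sizes_data mt8188_scp_c1_sizes = {
    .max_dram_size = 0xA00000,
    .ipi_share_buffer_size = 600,
};

/* Hypothetical lookup by compatible string, for illustration only. */
static const struct mtk_scp_sizes_data *scp_sizes_for(const char *compatible)
{
    if (strcmp(compatible, "mediatek,mt8188-scp-dual") == 0)
        return &mt8188_scp_c1_sizes;  /* sizes used for SCP core 1 */
    return &default_scp_sizes;
}
```

The IPI share buffer is then allocated at probe time from `ipi_share_buffer_size` instead of living as a fixed-size array inside `struct mtk_share_obj`.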

Support for the AMD-Xilinx Versal and Versal-NET platforms is added.
Coredump support and support for parsing TCM information from
DeviceTree are added to the Xilinx R5F remoteproc driver"
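The TCM layout described by the updated binding varies per configuration: split mode exposes `atcm0`/`btcm0` per core, ZynqMP/Versal lockstep mode additionally claims the second core's banks as `atcm1`/`btcm1`, and Versal-NET R52 cores add a `ctcm0` bank. A minimal sketch of those expectations (helper name and return convention are illustrative, not driver code):

```c
#include <string.h>

/* TCM bank names each configuration is expected to provide in reg-names,
 * per the new xlnx,zynqmp-r5fss binding. Illustrative model only. */
enum tcm_cfg { TCM_SPLIT, TCM_LOCKSTEP, TCM_R52 };

/* Fills names[] and returns the number of expected TCM banks. */
static int expected_tcm_banks(enum tcm_cfg cfg, const char *names[4])
{
    names[0] = "atcm0";
    names[1] = "btcm0";
    switch (cfg) {
    case TCM_SPLIT:
        return 2;
    case TCM_LOCKSTEP:
        names[2] = "atcm1";  /* second core's banks, claimed in lockstep */
        names[3] = "btcm1";
        return 4;
    case TCM_R52:
        names[2] = "ctcm0";  /* extra bank on Versal-NET R52 */
        return 3;
    }
    return 0;
}
```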

* tag 'rproc-v6.10' of git://git.kernel.org/pub/scm/linux/kernel/git/remoteproc/linux: (22 commits)
dt-bindings: remoteproc: qcom,sdm845-adsp-pil: Fix qcom,halt-regs definition
dt-bindings: remoteproc: qcom,sc7280-wpss-pil: Fix qcom,halt-regs definition
dt-bindings: remoteproc: qcom,qcs404-cdsp-pil: Fix qcom,halt-regs definition
dt-bindings: remoteproc: qcom,msm8996-mss-pil: allow glink-edge on msm8996
dt-bindings: remoteproc: qcom,smd-edge: Mark qcom,ipc as deprecated
remoteproc: k3-r5: Jump to error handling labels in start/stop errors
remoteproc: mediatek: Fix error code in scp_rproc_init()
remoteproc: k3-r5: Do not allow core1 to power up before core0 via sysfs
remoteproc: k3-r5: Wait for core0 power-up before powering up core1
remoteproc: mediatek: Add IMGSYS IPI command
remoteproc: mediatek: Support setting DRAM and IPI shared buffer sizes
remoteproc: mediatek: Support MT8188 SCP core 1
dt-bindings: remoteproc: mediatek: Support MT8188 dual-core SCP
drivers: remoteproc: xlnx: Fix uninitialized tcm mode
drivers: remoteproc: xlnx: Fix uninitialized variable use
drivers: remoteproc: xlnx: Add Versal and Versal-NET support
remoteproc: zynqmp: parse TCM from device tree
dt-bindings: remoteproc: Add Tightly Coupled Memory (TCM) bindings
remoteproc: zynqmp: fix lockstep mode memory region
remoteproc: zynqmp: Add coredump support
...

+720 -232
+2
Documentation/devicetree/bindings/remoteproc/mtk,scp.yaml
···
       - mediatek,mt8183-scp
       - mediatek,mt8186-scp
       - mediatek,mt8188-scp
+      - mediatek,mt8188-scp-dual
       - mediatek,mt8192-scp
       - mediatek,mt8195-scp
       - mediatek,mt8195-scp-dual
···
     properties:
       compatible:
         enum:
+          - mediatek,mt8188-scp-dual
           - mediatek,mt8195-scp-dual
     then:
       properties:
-1
Documentation/devicetree/bindings/remoteproc/qcom,msm8996-mss-pil.yaml
···
           - const: snoc_axi
           - const: mnoc_axi
           - const: qdss
-      glink-edge: false
     required:
       - pll-supply
       - smd-edge
+5 -1
Documentation/devicetree/bindings/remoteproc/qcom,qcs404-cdsp-pil.yaml
···
     $ref: /schemas/types.yaml#/definitions/phandle-array
     description:
       Phandle reference to a syscon representing TCSR followed by the
-      three offsets within syscon for q6, modem and nc halt registers.
+      offset within syscon for q6 halt register.
+    items:
+      - items:
+          - description: phandle to TCSR syscon region
+          - description: offset to the Q6 halt register
 
   qcom,smem-states:
     $ref: /schemas/types.yaml#/definitions/phandle-array
+5 -1
Documentation/devicetree/bindings/remoteproc/qcom,sc7280-wpss-pil.yaml
···
     $ref: /schemas/types.yaml#/definitions/phandle-array
     description:
       Phandle reference to a syscon representing TCSR followed by the
-      three offsets within syscon for q6, modem and nc halt registers.
+      offset within syscon for q6 halt register.
+    items:
+      - items:
+          - description: phandle to TCSR syscon region
+          - description: offset to the Q6 halt register
 
   qcom,qmp:
     $ref: /schemas/types.yaml#/definitions/phandle
+5 -1
Documentation/devicetree/bindings/remoteproc/qcom,sdm845-adsp-pil.yaml
···
     $ref: /schemas/types.yaml#/definitions/phandle-array
     description:
       Phandle reference to a syscon representing TCSR followed by the
-      three offsets within syscon for q6, modem and nc halt registers.
+      offset within syscon for q6 halt register.
+    items:
+      - items:
+          - description: phandle to TCSR syscon region
+          - description: offset to the Q6 halt register
 
   qcom,smem-states:
     $ref: /schemas/types.yaml#/definitions/phandle-array
+2 -1
Documentation/devicetree/bindings/remoteproc/qcom,smd-edge.yaml
···
     description:
       Three entries specifying the outgoing ipc bit used for signaling the
       remote processor.
+    deprecated: true
 
   qcom,smd-edge:
     $ref: /schemas/types.yaml#/definitions/uint32
···
     smd-edge {
         interrupts = <GIC_SPI 156 IRQ_TYPE_EDGE_RISING>;
 
-        qcom,ipc = <&apcs 8 8>;
+        mboxes = <&apcs 8>;
         qcom,smd-edge = <1>;
     };
 };
+256 -21
Documentation/devicetree/bindings/remoteproc/xlnx,zynqmp-r5fss.yaml
···
 
 properties:
   compatible:
-    const: xlnx,zynqmp-r5fss
+    enum:
+      - xlnx,zynqmp-r5fss
+      - xlnx,versal-r5fss
+      - xlnx,versal-net-r52fss
+
+  "#address-cells":
+    const: 2
+
+  "#size-cells":
+    const: 2
+
+  ranges:
+    description: |
+      Standard ranges definition providing address translations for
+      local R5F TCM address spaces to bus addresses.
 
   xlnx,cluster-mode:
     $ref: /schemas/types.yaml#/definitions/uint32
     enum: [0, 1, 2]
+    default: 1
     description: |
       The RPU MPCore can operate in split mode (Dual-processor performance), Safety
       lock-step mode(Both RPU cores execute the same code in lock-step,
···
       1: lockstep mode (default)
       2: single cpu mode
 
+  xlnx,tcm-mode:
+    $ref: /schemas/types.yaml#/definitions/uint32
+    enum: [0, 1]
+    description: |
+      Configure RPU TCM
+      0: split mode
+      1: lockstep mode
+
 patternProperties:
-  "^r5f-[a-f0-9]+$":
+  "^r(.*)@[0-9a-f]+$":
     type: object
     description: |
       The RPU is located in the Low Power Domain of the Processor Subsystem.
···
 
     properties:
       compatible:
-        const: xlnx,zynqmp-r5f
+        enum:
+          - xlnx,zynqmp-r5f
+          - xlnx,versal-r5f
+          - xlnx,versal-net-r52f
+
+      reg:
+        minItems: 1
+        maxItems: 4
+
+      reg-names:
+        minItems: 1
+        maxItems: 4
 
       power-domains:
-        maxItems: 1
+        minItems: 2
+        maxItems: 5
 
       mboxes:
         minItems: 1
···
 
     required:
       - compatible
+      - reg
+      - reg-names
       - power-domains
-
-    unevaluatedProperties: false
 
 required:
   - compatible
+  - "#address-cells"
+  - "#size-cells"
+  - ranges
+
+allOf:
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
+              - xlnx,versal-net-r52fss
+    then:
+      properties:
+        xlnx,tcm-mode: false
+
+      patternProperties:
+        "^r52f@[0-9a-f]+$":
+          type: object
+
+          properties:
+            reg:
+              minItems: 1
+              items:
+                - description: ATCM internal memory
+                - description: BTCM internal memory
+                - description: CTCM internal memory
+
+            reg-names:
+              minItems: 1
+              items:
+                - const: atcm0
+                - const: btcm0
+                - const: ctcm0
+
+            power-domains:
+              minItems: 2
+              items:
+                - description: RPU core power domain
+                - description: ATCM power domain
+                - description: BTCM power domain
+                - description: CTCM power domain
+
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
+              - xlnx,zynqmp-r5fss
+              - xlnx,versal-r5fss
+    then:
+      if:
+        properties:
+          xlnx,cluster-mode:
+            enum: [1, 2]
+      then:
+        properties:
+          xlnx,tcm-mode:
+            enum: [1]
+
+        patternProperties:
+          "^r5f@[0-9a-f]+$":
+            type: object
+
+            properties:
+              reg:
+                minItems: 1
+                items:
+                  - description: ATCM internal memory
+                  - description: BTCM internal memory
+                  - description: extra ATCM memory in lockstep mode
+                  - description: extra BTCM memory in lockstep mode
+
+              reg-names:
+                minItems: 1
+                items:
+                  - const: atcm0
+                  - const: btcm0
+                  - const: atcm1
+                  - const: btcm1
+
+              power-domains:
+                minItems: 2
+                items:
+                  - description: RPU core power domain
+                  - description: ATCM power domain
+                  - description: BTCM power domain
+                  - description: second ATCM power domain
+                  - description: second BTCM power domain
+
+        required:
+          - xlnx,tcm-mode
+
+      else:
+        properties:
+          xlnx,tcm-mode:
+            enum: [0]
+
+        patternProperties:
+          "^r5f@[0-9a-f]+$":
+            type: object
+
+            properties:
+              reg:
+                minItems: 1
+                items:
+                  - description: ATCM internal memory
+                  - description: BTCM internal memory
+
+              reg-names:
+                minItems: 1
+                items:
+                  - const: atcm0
+                  - const: btcm0
+
+              power-domains:
+                minItems: 2
+                items:
+                  - description: RPU core power domain
+                  - description: ATCM power domain
+                  - description: BTCM power domain
+
+        required:
+          - xlnx,tcm-mode
 
 additionalProperties: false
 
 examples:
   - |
-    remoteproc {
-      compatible = "xlnx,zynqmp-r5fss";
-      xlnx,cluster-mode = <1>;
+    #include <dt-bindings/power/xlnx-zynqmp-power.h>
 
-      r5f-0 {
-        compatible = "xlnx,zynqmp-r5f";
-        power-domains = <&zynqmp_firmware 0x7>;
-        memory-region = <&rproc_0_fw_image>, <&rpu0vdev0buffer>, <&rpu0vdev0vring0>, <&rpu0vdev0vring1>;
-        mboxes = <&ipi_mailbox_rpu0 0>, <&ipi_mailbox_rpu0 1>;
-        mbox-names = "tx", "rx";
+    // Split mode configuration
+    soc {
+        #address-cells = <2>;
+        #size-cells = <2>;
+
+        remoteproc@ffe00000 {
+            compatible = "xlnx,zynqmp-r5fss";
+            xlnx,cluster-mode = <0>;
+            xlnx,tcm-mode = <0>;
+
+            #address-cells = <2>;
+            #size-cells = <2>;
+            ranges = <0x0 0x0 0x0 0xffe00000 0x0 0x10000>,
+                     <0x0 0x20000 0x0 0xffe20000 0x0 0x10000>,
+                     <0x1 0x0 0x0 0xffe90000 0x0 0x10000>,
+                     <0x1 0x20000 0x0 0xffeb0000 0x0 0x10000>;
+
+            r5f@0 {
+                compatible = "xlnx,zynqmp-r5f";
+                reg = <0x0 0x0 0x0 0x10000>, <0x0 0x20000 0x0 0x10000>;
+                reg-names = "atcm0", "btcm0";
+                power-domains = <&zynqmp_firmware PD_RPU_0>,
+                                <&zynqmp_firmware PD_R5_0_ATCM>,
+                                <&zynqmp_firmware PD_R5_0_BTCM>;
+                memory-region = <&rproc_0_fw_image>, <&rpu0vdev0buffer>,
+                                <&rpu0vdev0vring0>, <&rpu0vdev0vring1>;
+                mboxes = <&ipi_mailbox_rpu0 0>, <&ipi_mailbox_rpu0 1>;
+                mbox-names = "tx", "rx";
+            };
+
+            r5f@1 {
+                compatible = "xlnx,zynqmp-r5f";
+                reg = <0x1 0x0 0x0 0x10000>, <0x1 0x20000 0x0 0x10000>;
+                reg-names = "atcm0", "btcm0";
+                power-domains = <&zynqmp_firmware PD_RPU_1>,
+                                <&zynqmp_firmware PD_R5_1_ATCM>,
+                                <&zynqmp_firmware PD_R5_1_BTCM>;
+                memory-region = <&rproc_1_fw_image>, <&rpu1vdev0buffer>,
+                                <&rpu1vdev0vring0>, <&rpu1vdev0vring1>;
+                mboxes = <&ipi_mailbox_rpu1 0>, <&ipi_mailbox_rpu1 1>;
+                mbox-names = "tx", "rx";
+            };
         };
+    };
 
-      r5f-1 {
-        compatible = "xlnx,zynqmp-r5f";
-        power-domains = <&zynqmp_firmware 0x8>;
-        memory-region = <&rproc_1_fw_image>, <&rpu1vdev0buffer>, <&rpu1vdev0vring0>, <&rpu1vdev0vring1>;
-        mboxes = <&ipi_mailbox_rpu1 0>, <&ipi_mailbox_rpu1 1>;
-        mbox-names = "tx", "rx";
+  - |
+    //Lockstep configuration
+    soc {
+        #address-cells = <2>;
+        #size-cells = <2>;
+
+        remoteproc@ffe00000 {
+            compatible = "xlnx,zynqmp-r5fss";
+            xlnx,cluster-mode = <1>;
+            xlnx,tcm-mode = <1>;
+
+            #address-cells = <2>;
+            #size-cells = <2>;
+            ranges = <0x0 0x0 0x0 0xffe00000 0x0 0x10000>,
+                     <0x0 0x20000 0x0 0xffe20000 0x0 0x10000>,
+                     <0x0 0x10000 0x0 0xffe10000 0x0 0x10000>,
+                     <0x0 0x30000 0x0 0xffe30000 0x0 0x10000>;
+
+            r5f@0 {
+                compatible = "xlnx,zynqmp-r5f";
+                reg = <0x0 0x0 0x0 0x10000>,
+                      <0x0 0x20000 0x0 0x10000>,
+                      <0x0 0x10000 0x0 0x10000>,
+                      <0x0 0x30000 0x0 0x10000>;
+                reg-names = "atcm0", "btcm0", "atcm1", "btcm1";
+                power-domains = <&zynqmp_firmware PD_RPU_0>,
+                                <&zynqmp_firmware PD_R5_0_ATCM>,
+                                <&zynqmp_firmware PD_R5_0_BTCM>,
+                                <&zynqmp_firmware PD_R5_1_ATCM>,
+                                <&zynqmp_firmware PD_R5_1_BTCM>;
+                memory-region = <&rproc_0_fw_image>, <&rpu0vdev0buffer>,
+                                <&rpu0vdev0vring0>, <&rpu0vdev0vring1>;
+                mboxes = <&ipi_mailbox_rpu0 0>, <&ipi_mailbox_rpu0 1>;
+                mbox-names = "tx", "rx";
+            };
+
+            r5f@1 {
+                compatible = "xlnx,zynqmp-r5f";
+                reg = <0x1 0x0 0x0 0x10000>, <0x1 0x20000 0x0 0x10000>;
+                reg-names = "atcm0", "btcm0";
+                power-domains = <&zynqmp_firmware PD_RPU_1>,
+                                <&zynqmp_firmware PD_R5_1_ATCM>,
+                                <&zynqmp_firmware PD_R5_1_BTCM>;
+                memory-region = <&rproc_1_fw_image>, <&rpu1vdev0buffer>,
+                                <&rpu1vdev0vring0>, <&rpu1vdev0vring1>;
+                mboxes = <&ipi_mailbox_rpu1 0>, <&ipi_mailbox_rpu1 1>;
+                mbox-names = "tx", "rx";
+            };
         };
     };
 ...
+8 -3
drivers/remoteproc/mtk_common.h
···
 #define MT8195_L2TCM_OFFSET 0x850d0
 
 #define SCP_FW_VER_LEN 32
-#define SCP_SHARE_BUFFER_SIZE 288
 
 struct scp_run {
     u32 signaled;
···
 
 struct mtk_scp;
 
+struct mtk_scp_sizes_data {
+    size_t max_dram_size;
+    size_t ipi_share_buffer_size;
+};
+
 struct mtk_scp_of_data {
     int (*scp_clk_get)(struct mtk_scp *scp);
     int (*scp_before_load)(struct mtk_scp *scp);
···
     u32 host_to_scp_int_bit;
 
     size_t ipi_buf_offset;
+    const struct mtk_scp_sizes_data *scp_sizes;
 };
 
 struct mtk_scp_of_cluster {
···
     struct scp_ipi_desc ipi_desc[SCP_IPI_MAX];
     bool ipi_id_ack[SCP_IPI_MAX];
     wait_queue_head_t ack_wq;
+    u8 *share_buf;
 
     void *cpu_addr;
     dma_addr_t dma_addr;
-    size_t dram_size;
 
     struct rproc_subdev *rpmsg_subdev;
···
 struct mtk_share_obj {
     u32 id;
     u32 len;
-    u8 share_buf[SCP_SHARE_BUFFER_SIZE];
+    u8 *share_buf;
 };
 
 void scp_memcpy_aligned(void __iomem *dst, const void *src, unsigned int len);
+219 -22
drivers/remoteproc/mtk_scp.c
···
 #include "mtk_common.h"
 #include "remoteproc_internal.h"
 
-#define MAX_CODE_SIZE 0x500000
 #define SECTION_NAME_IPI_BUFFER ".ipi_buffer"
 
 /**
···
 {
     struct mtk_share_obj __iomem *rcv_obj = scp->recv_buf;
     struct scp_ipi_desc *ipi_desc = scp->ipi_desc;
-    u8 tmp_data[SCP_SHARE_BUFFER_SIZE];
     scp_ipi_handler_t handler;
     u32 id = readl(&rcv_obj->id);
     u32 len = readl(&rcv_obj->len);
+    const struct mtk_scp_sizes_data *scp_sizes;
 
-    if (len > SCP_SHARE_BUFFER_SIZE) {
-        dev_err(scp->dev, "ipi message too long (len %d, max %d)", len,
-            SCP_SHARE_BUFFER_SIZE);
+    scp_sizes = scp->data->scp_sizes;
+    if (len > scp_sizes->ipi_share_buffer_size) {
+        dev_err(scp->dev, "ipi message too long (len %d, max %zd)", len,
+            scp_sizes->ipi_share_buffer_size);
         return;
     }
     if (id >= SCP_IPI_MAX) {
···
         return;
     }
 
-    memcpy_fromio(tmp_data, &rcv_obj->share_buf, len);
-    handler(tmp_data, len, ipi_desc[id].priv);
+    memset(scp->share_buf, 0, scp_sizes->ipi_share_buffer_size);
+    memcpy_fromio(scp->share_buf, &rcv_obj->share_buf, len);
+    handler(scp->share_buf, len, ipi_desc[id].priv);
     scp_ipi_unlock(scp, id);
 
     scp->ipi_id_ack[id] = true;
···
 static int scp_ipi_init(struct mtk_scp *scp, const struct firmware *fw)
 {
     int ret;
-    size_t offset;
+    size_t buf_sz, offset;
+    size_t share_buf_offset;
+    const struct mtk_scp_sizes_data *scp_sizes;
 
     /* read the ipi buf addr from FW itself first */
     ret = scp_elf_read_ipi_buf_addr(scp, fw, &offset);
···
     }
     dev_info(scp->dev, "IPI buf addr %#010zx\n", offset);
 
+    /* Make sure IPI buffer fits in the L2TCM range assigned to this core */
+    buf_sz = sizeof(*scp->recv_buf) + sizeof(*scp->send_buf);
+
+    if (scp->sram_size < buf_sz + offset) {
+        dev_err(scp->dev, "IPI buffer does not fit in SRAM.\n");
+        return -EOVERFLOW;
+    }
+
+    scp_sizes = scp->data->scp_sizes;
     scp->recv_buf = (struct mtk_share_obj __iomem *)
             (scp->sram_base + offset);
+    share_buf_offset = sizeof(scp->recv_buf->id)
+        + sizeof(scp->recv_buf->len) + scp_sizes->ipi_share_buffer_size;
     scp->send_buf = (struct mtk_share_obj __iomem *)
-            (scp->sram_base + offset + sizeof(*scp->recv_buf));
-    memset_io(scp->recv_buf, 0, sizeof(*scp->recv_buf));
-    memset_io(scp->send_buf, 0, sizeof(*scp->send_buf));
+            (scp->sram_base + offset + share_buf_offset);
+    memset_io(scp->recv_buf, 0, share_buf_offset);
+    memset_io(scp->send_buf, 0, share_buf_offset);
 
     return 0;
 }
···
     return 0;
 }
 
+static int mt8188_scp_l2tcm_on(struct mtk_scp *scp)
+{
+    struct mtk_scp_of_cluster *scp_cluster = scp->cluster;
+
+    mutex_lock(&scp_cluster->cluster_lock);
+
+    if (scp_cluster->l2tcm_refcnt == 0) {
+        /* clear SPM interrupt, SCP2SPM_IPC_CLR */
+        writel(0xff, scp->cluster->reg_base + MT8192_SCP2SPM_IPC_CLR);
+
+        /* Power on L2TCM */
+        scp_sram_power_on(scp->cluster->reg_base + MT8192_L2TCM_SRAM_PD_0, 0);
+        scp_sram_power_on(scp->cluster->reg_base + MT8192_L2TCM_SRAM_PD_1, 0);
+        scp_sram_power_on(scp->cluster->reg_base + MT8192_L2TCM_SRAM_PD_2, 0);
+        scp_sram_power_on(scp->cluster->reg_base + MT8192_L1TCM_SRAM_PDN, 0);
+    }
+
+    scp_cluster->l2tcm_refcnt += 1;
+
+    mutex_unlock(&scp_cluster->cluster_lock);
+
+    return 0;
+}
+
+static int mt8188_scp_before_load(struct mtk_scp *scp)
+{
+    writel(1, scp->cluster->reg_base + MT8192_CORE0_SW_RSTN_SET);
+
+    mt8188_scp_l2tcm_on(scp);
+
+    scp_sram_power_on(scp->cluster->reg_base + MT8192_CPU0_SRAM_PD, 0);
+
+    /* enable MPU for all memory regions */
+    writel(0xff, scp->cluster->reg_base + MT8192_CORE0_MEM_ATT_PREDEF);
+
+    return 0;
+}
+
+static int mt8188_scp_c1_before_load(struct mtk_scp *scp)
+{
+    u32 sec_ctrl;
+    struct mtk_scp *scp_c0;
+    struct mtk_scp_of_cluster *scp_cluster = scp->cluster;
+
+    scp->data->scp_reset_assert(scp);
+
+    mt8188_scp_l2tcm_on(scp);
+
+    scp_sram_power_on(scp->cluster->reg_base + MT8195_CPU1_SRAM_PD, 0);
+
+    /* enable MPU for all memory regions */
+    writel(0xff, scp->cluster->reg_base + MT8195_CORE1_MEM_ATT_PREDEF);
+
+    /*
+     * The L2TCM_OFFSET_RANGE and L2TCM_OFFSET shift the destination address
+     * on SRAM when SCP core 1 accesses SRAM.
+     *
+     * This configuration solves booting the SCP core 0 and core 1 from
+     * different SRAM address because core 0 and core 1 both boot from
+     * the head of SRAM by default. this must be configured before boot SCP core 1.
+     *
+     * The value of L2TCM_OFFSET_RANGE is from the viewpoint of SCP core 1.
+     * When SCP core 1 issues address within the range (L2TCM_OFFSET_RANGE),
+     * the address will be added with a fixed offset (L2TCM_OFFSET) on the bus.
+     * The shift action is tranparent to software.
+     */
+    writel(0, scp->cluster->reg_base + MT8195_L2TCM_OFFSET_RANGE_0_LOW);
+    writel(scp->sram_size, scp->cluster->reg_base + MT8195_L2TCM_OFFSET_RANGE_0_HIGH);
+
+    scp_c0 = list_first_entry(&scp_cluster->mtk_scp_list, struct mtk_scp, elem);
+    writel(scp->sram_phys - scp_c0->sram_phys, scp->cluster->reg_base + MT8195_L2TCM_OFFSET);
+
+    /* enable SRAM offset when fetching instruction and data */
+    sec_ctrl = readl(scp->cluster->reg_base + MT8195_SEC_CTRL);
+    sec_ctrl |= MT8195_CORE_OFFSET_ENABLE_I | MT8195_CORE_OFFSET_ENABLE_D;
+    writel(sec_ctrl, scp->cluster->reg_base + MT8195_SEC_CTRL);
+
+    return 0;
+}
+
 static int mt8192_scp_before_load(struct mtk_scp *scp)
 {
     /* clear SPM interrupt, SCP2SPM_IPC_CLR */
···
 static void *mt8183_scp_da_to_va(struct mtk_scp *scp, u64 da, size_t len)
 {
     int offset;
+    const struct mtk_scp_sizes_data *scp_sizes;
 
+    scp_sizes = scp->data->scp_sizes;
     if (da < scp->sram_size) {
         offset = da;
         if (offset >= 0 && (offset + len) <= scp->sram_size)
             return (void __force *)scp->sram_base + offset;
-    } else if (scp->dram_size) {
+    } else if (scp_sizes->max_dram_size) {
         offset = da - scp->dma_addr;
-        if (offset >= 0 && (offset + len) <= scp->dram_size)
+        if (offset >= 0 && (offset + len) <= scp_sizes->max_dram_size)
             return scp->cpu_addr + offset;
     }
 
···
 static void *mt8192_scp_da_to_va(struct mtk_scp *scp, u64 da, size_t len)
 {
     int offset;
+    const struct mtk_scp_sizes_data *scp_sizes;
 
+    scp_sizes = scp->data->scp_sizes;
     if (da >= scp->sram_phys &&
         (da + len) <= scp->sram_phys + scp->sram_size) {
         offset = da - scp->sram_phys;
···
     }
 
     /* optional memory region */
-    if (scp->dram_size &&
+    if (scp_sizes->max_dram_size &&
         da >= scp->dma_addr &&
-        (da + len) <= scp->dma_addr + scp->dram_size) {
+        (da + len) <= scp->dma_addr + scp_sizes->max_dram_size) {
         offset = da - scp->dma_addr;
         return scp->cpu_addr + offset;
     }
···
 {
     /* Disable SCP watchdog */
     writel(0, scp->cluster->reg_base + MT8183_WDT_CFG);
+}
+
+static void mt8188_scp_l2tcm_off(struct mtk_scp *scp)
+{
+    struct mtk_scp_of_cluster *scp_cluster = scp->cluster;
+
+    mutex_lock(&scp_cluster->cluster_lock);
+
+    if (scp_cluster->l2tcm_refcnt > 0)
+        scp_cluster->l2tcm_refcnt -= 1;
+
+    if (scp_cluster->l2tcm_refcnt == 0) {
+        /* Power off L2TCM */
+        scp_sram_power_off(scp->cluster->reg_base + MT8192_L2TCM_SRAM_PD_0, 0);
+        scp_sram_power_off(scp->cluster->reg_base + MT8192_L2TCM_SRAM_PD_1, 0);
+        scp_sram_power_off(scp->cluster->reg_base + MT8192_L2TCM_SRAM_PD_2, 0);
+        scp_sram_power_off(scp->cluster->reg_base + MT8192_L1TCM_SRAM_PDN, 0);
+    }
+
+    mutex_unlock(&scp_cluster->cluster_lock);
+}
+
+static void mt8188_scp_stop(struct mtk_scp *scp)
+{
+    mt8188_scp_l2tcm_off(scp);
+
+    scp_sram_power_off(scp->cluster->reg_base + MT8192_CPU0_SRAM_PD, 0);
+
+    /* Disable SCP watchdog */
+    writel(0, scp->cluster->reg_base + MT8192_CORE0_WDT_CFG);
+}
+
+static void mt8188_scp_c1_stop(struct mtk_scp *scp)
+{
+    mt8188_scp_l2tcm_off(scp);
+
+    /* Power off CPU SRAM */
+    scp_sram_power_off(scp->cluster->reg_base + MT8195_CPU1_SRAM_PD, 0);
+
+    /* Disable SCP watchdog */
+    writel(0, scp->cluster->reg_base + MT8195_CORE1_WDT_CFG);
 }
 
 static void mt8192_scp_stop(struct mtk_scp *scp)
···
 static int scp_map_memory_region(struct mtk_scp *scp)
 {
     int ret;
+    const struct mtk_scp_sizes_data *scp_sizes;
 
     ret = of_reserved_mem_device_init(scp->dev);
···
     }
 
     /* Reserved SCP code size */
-    scp->dram_size = MAX_CODE_SIZE;
-    scp->cpu_addr = dma_alloc_coherent(scp->dev, scp->dram_size,
+    scp_sizes = scp->data->scp_sizes;
+    scp->cpu_addr = dma_alloc_coherent(scp->dev, scp_sizes->max_dram_size,
                        &scp->dma_addr, GFP_KERNEL);
     if (!scp->cpu_addr)
         return -ENOMEM;
···
 
 static void scp_unmap_memory_region(struct mtk_scp *scp)
 {
-    if (scp->dram_size == 0)
+    const struct mtk_scp_sizes_data *scp_sizes;
+
+    scp_sizes = scp->data->scp_sizes;
+    if (scp_sizes->max_dram_size == 0)
         return;
 
-    dma_free_coherent(scp->dev, scp->dram_size, scp->cpu_addr,
+    dma_free_coherent(scp->dev, scp_sizes->max_dram_size, scp->cpu_addr,
               scp->dma_addr);
     of_reserved_mem_device_release(scp->dev);
 }
···
     struct resource *res;
     const char *fw_name = "scp.img";
     int ret, i;
+    const struct mtk_scp_sizes_data *scp_sizes;
 
     ret = rproc_of_parse_firmware(dev, 0, &fw_name);
     if (ret < 0 && ret != -EINVAL)
···
         goto release_dev_mem;
     }
 
+    scp_sizes = scp->data->scp_sizes;
+    scp->share_buf = kzalloc(scp_sizes->ipi_share_buffer_size, GFP_KERNEL);
+    if (!scp->share_buf) {
+        dev_err(dev, "Failed to allocate IPI share buffer\n");
+        ret = -ENOMEM;
+        goto release_dev_mem;
+    }
+
     init_waitqueue_head(&scp->run.wq);
     init_waitqueue_head(&scp->ack_wq);
···
 remove_subdev:
     scp_remove_rpmsg_subdev(scp);
     scp_ipi_unregister(scp, SCP_IPI_INIT);
+    kfree(scp->share_buf);
+    scp->share_buf = NULL;
 release_dev_mem:
     scp_unmap_memory_region(scp);
     for (i = 0; i < SCP_IPI_MAX; i++)
···
 
     scp_remove_rpmsg_subdev(scp);
     scp_ipi_unregister(scp, SCP_IPI_INIT);
+    kfree(scp->share_buf);
+    scp->share_buf = NULL;
     scp_unmap_memory_region(scp);
     for (i = 0; i < SCP_IPI_MAX; i++)
         mutex_destroy(&scp->ipi_desc[i].lock);
···
     mutex_destroy(&scp_cluster->cluster_lock);
 }
 
+static const struct mtk_scp_sizes_data default_scp_sizes = {
+    .max_dram_size = 0x500000,
+    .ipi_share_buffer_size = 288,
+};
+
+static const struct mtk_scp_sizes_data mt8188_scp_sizes = {
+    .max_dram_size = 0x500000,
+    .ipi_share_buffer_size = 600,
+};
+
+static const struct mtk_scp_sizes_data mt8188_scp_c1_sizes = {
+    .max_dram_size = 0xA00000,
+    .ipi_share_buffer_size = 600,
+};
+
 static const struct mtk_scp_of_data mt8183_of_data = {
     .scp_clk_get = mt8183_scp_clk_get,
     .scp_before_load = mt8183_scp_before_load,
···
     .host_to_scp_reg = MT8183_HOST_TO_SCP,
     .host_to_scp_int_bit = MT8183_HOST_IPC_INT_BIT,
     .ipi_buf_offset = 0x7bdb0,
+    .scp_sizes = &default_scp_sizes,
 };
 
 static const struct mtk_scp_of_data mt8186_of_data = {
···
     .host_to_scp_reg = MT8183_HOST_TO_SCP,
     .host_to_scp_int_bit = MT8183_HOST_IPC_INT_BIT,
     .ipi_buf_offset = 0x3bdb0,
+    .scp_sizes = &default_scp_sizes,
 };
 
 static const struct mtk_scp_of_data mt8188_of_data = {
     .scp_clk_get = mt8195_scp_clk_get,
-    .scp_before_load = mt8192_scp_before_load,
-    .scp_irq_handler = mt8192_scp_irq_handler,
+    .scp_before_load = mt8188_scp_before_load,
+    .scp_irq_handler = mt8195_scp_irq_handler,
     .scp_reset_assert = mt8192_scp_reset_assert,
     .scp_reset_deassert = mt8192_scp_reset_deassert,
-    .scp_stop = mt8192_scp_stop,
+    .scp_stop = mt8188_scp_stop,
     .scp_da_to_va = mt8192_scp_da_to_va,
     .host_to_scp_reg = MT8192_GIPC_IN_SET,
     .host_to_scp_int_bit = MT8192_HOST_IPC_INT_BIT,
+    .scp_sizes = &mt8188_scp_sizes,
+};
+
+static const struct mtk_scp_of_data mt8188_of_data_c1 = {
+    .scp_clk_get = mt8195_scp_clk_get,
+    .scp_before_load = mt8188_scp_c1_before_load,
+    .scp_irq_handler = mt8195_scp_c1_irq_handler,
+    .scp_reset_assert = mt8195_scp_c1_reset_assert,
+    .scp_reset_deassert = mt8195_scp_c1_reset_deassert,
+    .scp_stop = mt8188_scp_c1_stop,
+    .scp_da_to_va = mt8192_scp_da_to_va,
+    .host_to_scp_reg = MT8192_GIPC_IN_SET,
+    .host_to_scp_int_bit = MT8195_CORE1_HOST_IPC_INT_BIT,
+    .scp_sizes = &mt8188_scp_c1_sizes,
 };
 
 static const struct mtk_scp_of_data mt8192_of_data = {
···
     .scp_da_to_va = mt8192_scp_da_to_va,
     .host_to_scp_reg = MT8192_GIPC_IN_SET,
     .host_to_scp_int_bit = MT8192_HOST_IPC_INT_BIT,
+    .scp_sizes = &default_scp_sizes,
 };
 
 static const struct mtk_scp_of_data mt8195_of_data = {
···
     .scp_da_to_va = mt8192_scp_da_to_va,
     .host_to_scp_reg = MT8192_GIPC_IN_SET,
     .host_to_scp_int_bit = MT8192_HOST_IPC_INT_BIT,
+    .scp_sizes = &default_scp_sizes,
 };
 
 static const struct mtk_scp_of_data mt8195_of_data_c1 = {
···
     .scp_da_to_va = mt8192_scp_da_to_va,
     .host_to_scp_reg = MT8192_GIPC_IN_SET,
     .host_to_scp_int_bit = MT8195_CORE1_HOST_IPC_INT_BIT,
+    .scp_sizes = &default_scp_sizes,
+};
+
+static const struct mtk_scp_of_data *mt8188_of_data_cores[] = {
+    &mt8188_of_data,
+    &mt8188_of_data_c1,
+    NULL
 };
 
 static const struct mtk_scp_of_data *mt8195_of_data_cores[] = {
···
     { .compatible = "mediatek,mt8183-scp", .data = &mt8183_of_data },
     { .compatible = "mediatek,mt8186-scp", .data = &mt8186_of_data },
     { .compatible = "mediatek,mt8188-scp", .data = &mt8188_of_data },
+    { .compatible = "mediatek,mt8188-scp-dual", .data = &mt8188_of_data_cores },
     { .compatible = "mediatek,mt8192-scp", .data = &mt8192_of_data },
     { .compatible = "mediatek,mt8195-scp", .data = &mt8195_of_data },
     { .compatible = "mediatek,mt8195-scp-dual", .data = &mt8195_of_data_cores },
+5 -2
drivers/remoteproc/mtk_scp_ipi.c
···
     struct mtk_share_obj __iomem *send_obj = scp->send_buf;
     u32 val;
     int ret;
+    const struct mtk_scp_sizes_data *scp_sizes;
+
+    scp_sizes = scp->data->scp_sizes;
 
     if (WARN_ON(id <= SCP_IPI_INIT) || WARN_ON(id >= SCP_IPI_MAX) ||
         WARN_ON(id == SCP_IPI_NS_SERVICE) ||
-        WARN_ON(len > sizeof(send_obj->share_buf)) || WARN_ON(!buf))
+        WARN_ON(len > scp_sizes->ipi_share_buffer_size) || WARN_ON(!buf))
         return -EINVAL;
 
     ret = clk_prepare_enable(scp->clk);
···
         goto unlock_mutex;
     }
 
-    scp_memcpy_aligned(send_obj->share_buf, buf, len);
+    scp_memcpy_aligned(&send_obj->share_buf, buf, len);
 
     writel(len, &send_obj->len);
     writel(id, &send_obj->id);
+1 -1
drivers/remoteproc/remoteproc_internal.h
···
 void rproc_exit_debugfs(void);
 
 /* from remoteproc_sysfs.c */
-extern struct class rproc_class;
+extern const struct class rproc_class;
 int rproc_init_sysfs(void);
 void rproc_exit_sysfs(void);
 
+1 -1
drivers/remoteproc/remoteproc_sysfs.c
···
     NULL
 };
 
-struct class rproc_class = {
+const struct class rproc_class = {
     .name       = "remoteproc",
     .dev_groups = rproc_devgroups,
 };
+56 -2
drivers/remoteproc/ti_k3_r5_remoteproc.c
···
  * @dev: cached device pointer
  * @mode: Mode to configure the Cluster - Split or LockStep
  * @cores: list of R5 cores within the cluster
+ * @core_transition: wait queue to sync core state changes
  * @soc_data: SoC-specific feature data for a R5FSS
  */
 struct k3_r5_cluster {
 	struct device *dev;
 	enum cluster_mode mode;
 	struct list_head cores;
+	wait_queue_head_t core_transition;
 	const struct k3_r5_soc_data *soc_data;
 };
···
  * @atcm_enable: flag to control ATCM enablement
  * @btcm_enable: flag to control BTCM enablement
  * @loczrama: flag to dictate which TCM is at device address 0x0
+ * @released_from_reset: flag to signal when core is out of reset
  */
 struct k3_r5_core {
 	struct list_head elem;
···
 	u32 atcm_enable;
 	u32 btcm_enable;
 	u32 loczrama;
+	bool released_from_reset;
 };
···
 			ret);
 		return ret;
 	}
+	core->released_from_reset = true;
+	wake_up_interruptible(&cluster->core_transition);
 
 	/*
 	 * Newer IP revisions like on J7200 SoCs support h/w auto-initialization
···
 	struct k3_r5_rproc *kproc = rproc->priv;
 	struct k3_r5_cluster *cluster = kproc->cluster;
 	struct device *dev = kproc->dev;
-	struct k3_r5_core *core;
+	struct k3_r5_core *core0, *core;
 	u32 boot_addr;
 	int ret;
···
 			goto unroll_core_run;
 		}
 	} else {
+		/* do not allow core 1 to start before core 0 */
+		core0 = list_first_entry(&cluster->cores, struct k3_r5_core,
+					 elem);
+		if (core != core0 && core0->rproc->state == RPROC_OFFLINE) {
+			dev_err(dev, "%s: can not start core 1 before core 0\n",
+				__func__);
+			ret = -EPERM;
+			goto put_mbox;
+		}
+
 		ret = k3_r5_core_run(core);
 		if (ret)
 			goto put_mbox;
···
 {
 	struct k3_r5_rproc *kproc = rproc->priv;
 	struct k3_r5_cluster *cluster = kproc->cluster;
-	struct k3_r5_core *core = kproc->core;
+	struct device *dev = kproc->dev;
+	struct k3_r5_core *core1, *core = kproc->core;
 	int ret;
 
 	/* halt all applicable cores */
···
 			}
 		}
 	} else {
+		/* do not allow core 0 to stop before core 1 */
+		core1 = list_last_entry(&cluster->cores, struct k3_r5_core,
+					elem);
+		if (core != core1 && core1->rproc->state != RPROC_OFFLINE) {
+			dev_err(dev, "%s: can not stop core 0 before core 1\n",
+				__func__);
+			ret = -EPERM;
+			goto out;
+		}
+
 		ret = k3_r5_core_halt(core);
 		if (ret)
 			goto out;
···
 		return ret;
 	}
 
+	/*
+	 * Skip the waiting mechanism for sequential power-on of cores if the
+	 * core has already been booted by another entity.
+	 */
+	core->released_from_reset = c_state;
+
 	ret = ti_sci_proc_get_status(core->tsp, &boot_vec, &cfg, &ctrl,
 				     &stat);
 	if (ret < 0) {
···
 		    cluster->mode == CLUSTER_MODE_SINGLECPU ||
 		    cluster->mode == CLUSTER_MODE_SINGLECORE)
 			break;
+
+		/*
+		 * R5 cores require to be powered on sequentially, core0
+		 * should be in higher power state than core1 in a cluster.
+		 * So, wait for current core to power up before proceeding to
+		 * next core and put timeout of 2sec for each core.
+		 *
+		 * This waiting mechanism is necessary because
+		 * rproc_auto_boot_callback() for core1 can be called before
+		 * core0 due to thread execution order.
+		 */
+		ret = wait_event_interruptible_timeout(cluster->core_transition,
+						       core->released_from_reset,
+						       msecs_to_jiffies(2000));
+		if (ret <= 0) {
+			dev_err(dev,
+				"Timed out waiting for %s core to power up!\n",
+				rproc->name);
+			return ret;
+		}
 	}
 
 	return 0;
···
 	cluster->dev = dev;
 	cluster->soc_data = data;
 	INIT_LIST_HEAD(&cluster->cores);
+	init_waitqueue_head(&cluster->core_transition);
 
 	ret = of_property_read_u32(np, "ti,cluster-mode", &cluster->mode);
 	if (ret < 0 && ret != -EINVAL) {
+154 -175
drivers/remoteproc/xlnx_r5_remoteproc.c
···
 };
 
 /*
- * Hardcoded TCM bank values. This will be removed once TCM bindings are
- * accepted for system-dt specifications and upstreamed in linux kernel
+ * Hardcoded TCM bank values. This will stay in driver to maintain backward
+ * compatibility with device-tree that does not have TCM information.
  */
 static const struct mem_bank_data zynqmp_tcm_banks_split[] = {
 	{0xffe00000UL, 0x0, 0x10000UL, PD_R5_0_ATCM, "atcm0"}, /* TCM 64KB each */
···
 	{0xffeb0000UL, 0x20000, 0x10000UL, PD_R5_1_BTCM, "btcm1"},
 };
 
-/* In lockstep mode cluster combines each 64KB TCM and makes 128KB TCM */
+/* In lockstep mode cluster uses each 64KB TCM from second core as well */
 static const struct mem_bank_data zynqmp_tcm_banks_lockstep[] = {
-	{0xffe00000UL, 0x0, 0x20000UL, PD_R5_0_ATCM, "atcm0"}, /* TCM 128KB each */
-	{0xffe20000UL, 0x20000, 0x20000UL, PD_R5_0_BTCM, "btcm0"},
-	{0, 0, 0, PD_R5_1_ATCM, ""},
-	{0, 0, 0, PD_R5_1_BTCM, ""},
+	{0xffe00000UL, 0x0, 0x10000UL, PD_R5_0_ATCM, "atcm0"}, /* TCM 64KB each */
+	{0xffe20000UL, 0x20000, 0x10000UL, PD_R5_0_BTCM, "btcm0"},
+	{0xffe10000UL, 0x10000, 0x10000UL, PD_R5_1_ATCM, "atcm1"},
+	{0xffe30000UL, 0x30000, 0x10000UL, PD_R5_1_BTCM, "btcm1"},
 };
 
 /**
···
 }
 
 /*
- * zynqmp_r5_set_mode()
- *
- * set RPU cluster and TCM operation mode
- *
- * @r5_core: pointer to zynqmp_r5_core type object
- * @fw_reg_val: value expected by firmware to configure RPU cluster mode
- * @tcm_mode: value expected by fw to configure TCM mode (lockstep or split)
- *
- * Return: 0 for success and < 0 for failure
- */
-static int zynqmp_r5_set_mode(struct zynqmp_r5_core *r5_core,
-			      enum rpu_oper_mode fw_reg_val,
-			      enum rpu_tcm_comb tcm_mode)
-{
-	int ret;
-
-	ret = zynqmp_pm_set_rpu_mode(r5_core->pm_domain_id, fw_reg_val);
-	if (ret < 0) {
-		dev_err(r5_core->dev, "failed to set RPU mode\n");
-		return ret;
-	}
-
-	ret = zynqmp_pm_set_tcm_config(r5_core->pm_domain_id, tcm_mode);
-	if (ret < 0)
-		dev_err(r5_core->dev, "failed to configure TCM\n");
-
-	return ret;
-}
-
-/*
  * zynqmp_r5_rproc_start()
  * @rproc: single R5 core's corresponding rproc instance
  *
···
 	}
 
 	rproc_add_carveout(rproc, rproc_mem);
+	rproc_coredump_add_segment(rproc, rmem->base, rmem->size);
 
 	dev_dbg(&rproc->dev, "reserved mem carveout %s addr=%llx, size=0x%llx",
 		it.node->name, rmem->base, rmem->size);
···
 }
 
 /*
- * add_tcm_carveout_split_mode()
+ * add_tcm_banks()
  * @rproc: single R5 core's corresponding rproc instance
  *
- * allocate and add remoteproc carveout for TCM memory in split mode
+ * allocate and add remoteproc carveout for TCM memory
  *
  * return 0 on success, otherwise non-zero value on failure
  */
-static int add_tcm_carveout_split_mode(struct rproc *rproc)
+static int add_tcm_banks(struct rproc *rproc)
 {
 	struct rproc_mem_entry *rproc_mem;
 	struct zynqmp_r5_core *r5_core;
···
 					     ZYNQMP_PM_REQUEST_ACK_BLOCKING);
 		if (ret < 0) {
 			dev_err(dev, "failed to turn on TCM 0x%x", pm_domain_id);
-			goto release_tcm_split;
+			goto release_tcm;
 		}
 
-		dev_dbg(dev, "TCM carveout split mode %s addr=%llx, da=0x%x, size=0x%lx",
+		dev_dbg(dev, "TCM carveout %s addr=%llx, da=0x%x, size=0x%lx",
 			bank_name, bank_addr, da, bank_size);
 
 		rproc_mem = rproc_mem_entry_init(dev, NULL, bank_addr,
···
 		if (!rproc_mem) {
 			ret = -ENOMEM;
 			zynqmp_pm_release_node(pm_domain_id);
-			goto release_tcm_split;
+			goto release_tcm;
 		}
 
 		rproc_add_carveout(rproc, rproc_mem);
+		rproc_coredump_add_segment(rproc, da, bank_size);
 	}
 
 	return 0;
 
-release_tcm_split:
+release_tcm:
 	/* If failed, Turn off all TCM banks turned on before */
 	for (i--; i >= 0; i--) {
 		pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
 		zynqmp_pm_release_node(pm_domain_id);
 	}
 	return ret;
-}
-
-/*
- * add_tcm_carveout_lockstep_mode()
- * @rproc: single R5 core's corresponding rproc instance
- *
- * allocate and add remoteproc carveout for TCM memory in lockstep mode
- *
- * return 0 on success, otherwise non-zero value on failure
- */
-static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
-{
-	struct rproc_mem_entry *rproc_mem;
-	struct zynqmp_r5_core *r5_core;
-	int i, num_banks, ret;
-	phys_addr_t bank_addr;
-	size_t bank_size = 0;
-	struct device *dev;
-	u32 pm_domain_id;
-	char *bank_name;
-	u32 da;
-
-	r5_core = rproc->priv;
-	dev = r5_core->dev;
-
-	/* Go through zynqmp banks for r5 node */
-	num_banks = r5_core->tcm_bank_count;
-
-	/*
-	 * In lockstep mode, TCM is contiguous memory block
-	 * However, each TCM block still needs to be enabled individually.
-	 * So, Enable each TCM block individually.
-	 * Although ATCM and BTCM is contiguous memory block, add two separate
-	 * carveouts for both.
-	 */
-	for (i = 0; i < num_banks; i++) {
-		pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
-
-		/* Turn on each TCM bank individually */
-		ret = zynqmp_pm_request_node(pm_domain_id,
-					     ZYNQMP_PM_CAPABILITY_ACCESS, 0,
-					     ZYNQMP_PM_REQUEST_ACK_BLOCKING);
-		if (ret < 0) {
-			dev_err(dev, "failed to turn on TCM 0x%x", pm_domain_id);
-			goto release_tcm_lockstep;
-		}
-
-		bank_size = r5_core->tcm_banks[i]->size;
-		if (bank_size == 0)
-			continue;
-
-		bank_addr = r5_core->tcm_banks[i]->addr;
-		da = r5_core->tcm_banks[i]->da;
-		bank_name = r5_core->tcm_banks[i]->bank_name;
-
-		/* Register TCM address range, TCM map and unmap functions */
-		rproc_mem = rproc_mem_entry_init(dev, NULL, bank_addr,
-						 bank_size, da,
-						 tcm_mem_map, tcm_mem_unmap,
-						 bank_name);
-		if (!rproc_mem) {
-			ret = -ENOMEM;
-			zynqmp_pm_release_node(pm_domain_id);
-			goto release_tcm_lockstep;
-		}
-
-		/* If registration is success, add carveouts */
-		rproc_add_carveout(rproc, rproc_mem);
-
-		dev_dbg(dev, "TCM carveout lockstep mode %s addr=0x%llx, da=0x%x, size=0x%lx",
-			bank_name, bank_addr, da, bank_size);
-	}
-
-	return 0;
-
-release_tcm_lockstep:
-	/* If failed, Turn off all TCM banks turned on before */
-	for (i--; i >= 0; i--) {
-		pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
-		zynqmp_pm_release_node(pm_domain_id);
-	}
-	return ret;
-}
-
-/*
- * add_tcm_banks()
- * @rproc: single R5 core's corresponding rproc instance
- *
- * allocate and add remoteproc carveouts for TCM memory based on cluster mode
- *
- * return 0 on success, otherwise non-zero value on failure
- */
-static int add_tcm_banks(struct rproc *rproc)
-{
-	struct zynqmp_r5_cluster *cluster;
-	struct zynqmp_r5_core *r5_core;
-	struct device *dev;
-
-	r5_core = rproc->priv;
-	if (!r5_core)
-		return -EINVAL;
-
-	dev = r5_core->dev;
-
-	cluster = dev_get_drvdata(dev->parent);
-	if (!cluster) {
-		dev_err(dev->parent, "Invalid driver data\n");
-		return -EINVAL;
-	}
-
-	/*
-	 * In lockstep mode TCM banks are one contiguous memory region of 256Kb
-	 * In split mode, each TCM bank is 64Kb and not contiguous.
-	 * We add memory carveouts accordingly.
-	 */
-	if (cluster->mode == SPLIT_MODE)
-		return add_tcm_carveout_split_mode(rproc);
-	else if (cluster->mode == LOCKSTEP_MODE)
-		return add_tcm_carveout_lockstep_mode(rproc);
-
-	return -EINVAL;
 }
···
 		return ERR_PTR(-ENOMEM);
 	}
 
+	rproc_coredump_set_elf_info(r5_rproc, ELFCLASS32, EM_ARM);
+
 	r5_rproc->auto_boot = false;
 	r5_core = r5_rproc->priv;
 	r5_core->dev = cdev;
···
 free_rproc:
 	rproc_free(r5_rproc);
 	return ERR_PTR(ret);
+}
+
+static int zynqmp_r5_get_tcm_node_from_dt(struct zynqmp_r5_cluster *cluster)
+{
+	int i, j, tcm_bank_count, ret, tcm_pd_idx, pd_count;
+	struct of_phandle_args out_args;
+	struct zynqmp_r5_core *r5_core;
+	struct platform_device *cpdev;
+	struct mem_bank_data *tcm;
+	struct device_node *np;
+	struct resource *res;
+	u64 abs_addr, size;
+	struct device *dev;
+
+	for (i = 0; i < cluster->core_count; i++) {
+		r5_core = cluster->r5_cores[i];
+		dev = r5_core->dev;
+		np = r5_core->np;
+
+		pd_count = of_count_phandle_with_args(np, "power-domains",
+						      "#power-domain-cells");
+
+		if (pd_count <= 0) {
+			dev_err(dev, "invalid power-domains property, %d\n", pd_count);
+			return -EINVAL;
+		}
+
+		/* First entry in power-domains list is for r5 core, rest for TCM. */
+		tcm_bank_count = pd_count - 1;
+
+		if (tcm_bank_count <= 0) {
+			dev_err(dev, "invalid TCM count %d\n", tcm_bank_count);
+			return -EINVAL;
+		}
+
+		r5_core->tcm_banks = devm_kcalloc(dev, tcm_bank_count,
+						  sizeof(struct mem_bank_data *),
+						  GFP_KERNEL);
+		if (!r5_core->tcm_banks)
+			return -ENOMEM;
+
+		r5_core->tcm_bank_count = tcm_bank_count;
+		for (j = 0, tcm_pd_idx = 1; j < tcm_bank_count; j++, tcm_pd_idx++) {
+			tcm = devm_kzalloc(dev, sizeof(struct mem_bank_data),
+					   GFP_KERNEL);
+			if (!tcm)
+				return -ENOMEM;
+
+			r5_core->tcm_banks[j] = tcm;
+
+			/* Get power-domains id of TCM. */
+			ret = of_parse_phandle_with_args(np, "power-domains",
+							 "#power-domain-cells",
+							 tcm_pd_idx, &out_args);
+			if (ret) {
+				dev_err(r5_core->dev,
+					"failed to get tcm %d pm domain, ret %d\n",
+					tcm_pd_idx, ret);
+				return ret;
+			}
+			tcm->pm_domain_id = out_args.args[0];
+			of_node_put(out_args.np);
+
+			/* Get TCM address without translation. */
+			ret = of_property_read_reg(np, j, &abs_addr, &size);
+			if (ret) {
+				dev_err(dev, "failed to get reg property\n");
+				return ret;
+			}
+
+			/*
+			 * Remote processor can address only 32 bits
+			 * so convert 64-bits into 32-bits. This will discard
+			 * any unwanted upper 32-bits.
+			 */
+			tcm->da = (u32)abs_addr;
+			tcm->size = (u32)size;
+
+			cpdev = to_platform_device(dev);
+			res = platform_get_resource(cpdev, IORESOURCE_MEM, j);
+			if (!res) {
+				dev_err(dev, "failed to get tcm resource\n");
+				return -EINVAL;
+			}
+
+			tcm->addr = (u32)res->start;
+			tcm->bank_name = (char *)res->name;
+			res = devm_request_mem_region(dev, tcm->addr, tcm->size,
+						      tcm->bank_name);
+			if (!res) {
+				dev_err(dev, "failed to request tcm resource\n");
+				return -EINVAL;
+			}
+		}
+	}
+
+	return 0;
 }
···
 {
 	struct device *dev = cluster->dev;
 	struct zynqmp_r5_core *r5_core;
-	int ret, i;
+	int ret = -EINVAL, i;
 
-	ret = zynqmp_r5_get_tcm_node(cluster);
-	if (ret < 0) {
-		dev_err(dev, "can't get tcm node, err %d\n", ret);
+	r5_core = cluster->r5_cores[0];
+
+	/* Maintain backward compatibility for zynqmp by using hardcode TCM address. */
+	if (of_find_property(r5_core->np, "reg", NULL))
+		ret = zynqmp_r5_get_tcm_node_from_dt(cluster);
+	else if (device_is_compatible(dev, "xlnx,zynqmp-r5fss"))
+		ret = zynqmp_r5_get_tcm_node(cluster);
+
+	if (ret) {
+		dev_err(dev, "can't get tcm, err %d\n", ret);
 		return ret;
 	}
···
 			return ret;
 		}
 
-		ret = zynqmp_r5_set_mode(r5_core, fw_reg_val, tcm_mode);
-		if (ret) {
-			dev_err(dev, "failed to set r5 cluster mode %d, err %d\n",
-				cluster->mode, ret);
+		ret = zynqmp_pm_set_rpu_mode(r5_core->pm_domain_id, fw_reg_val);
+		if (ret < 0) {
+			dev_err(r5_core->dev, "failed to set RPU mode\n");
 			return ret;
+		}
+
+		if (of_find_property(dev_of_node(dev), "xlnx,tcm-mode", NULL) ||
+		    device_is_compatible(dev, "xlnx,zynqmp-r5fss")) {
+			ret = zynqmp_pm_set_tcm_config(r5_core->pm_domain_id,
+						       tcm_mode);
+			if (ret < 0) {
+				dev_err(r5_core->dev, "failed to configure TCM\n");
+				return ret;
+			}
 		}
 	}
···
 	 * fail driver probe if either of that is not set in dts.
 	 */
 	if (cluster_mode == LOCKSTEP_MODE) {
-		tcm_mode = PM_RPU_TCM_COMB;
 		fw_reg_val = PM_RPU_MODE_LOCKSTEP;
 	} else if (cluster_mode == SPLIT_MODE) {
-		tcm_mode = PM_RPU_TCM_SPLIT;
 		fw_reg_val = PM_RPU_MODE_SPLIT;
 	} else {
 		dev_err(dev, "driver does not support cluster mode %d\n", cluster_mode);
 		return -EINVAL;
+	}
+
+	if (of_find_property(dev_node, "xlnx,tcm-mode", NULL)) {
+		ret = of_property_read_u32(dev_node, "xlnx,tcm-mode", (u32 *)&tcm_mode);
+		if (ret)
+			return ret;
+	} else if (device_is_compatible(dev, "xlnx,zynqmp-r5fss")) {
+		if (cluster_mode == LOCKSTEP_MODE)
+			tcm_mode = PM_RPU_TCM_COMB;
+		else
+			tcm_mode = PM_RPU_TCM_SPLIT;
+	} else {
+		tcm_mode = PM_RPU_TCM_COMB;
 	}
···
 /* Match table for OF platform binding */
 static const struct of_device_id zynqmp_r5_remoteproc_match[] = {
+	{ .compatible = "xlnx,versal-net-r52fss", },
+	{ .compatible = "xlnx,versal-r5fss", },
 	{ .compatible = "xlnx,zynqmp-r5fss", },
 	{ /* end of list */ },
 };
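The new zynqmp_r5_get_tcm_node_from_dt() path above reads the TCM layout from the core's own node: the first power-domains entry is the core itself, and each subsequent entry pairs with one reg entry describing a TCM bank. A hypothetical devicetree fragment matching that shape (the node name, addresses, compatible string, and power-domain macros here are illustrative assumptions, not taken from the actual bindings):

```dts
/* Hypothetical R5F core node: power-domains lists the core first,
 * then one entry per TCM bank; each reg entry describes one bank. */
r5f@ffe00000 {
	compatible = "xlnx,versal-r5f";
	reg = <0x0 0xffe00000 0x0 0x10000>,
	      <0x0 0xffe20000 0x0 0x10000>;
	reg-names = "atcm0", "btcm0";
	power-domains = <&versal_firmware PM_DEV_RPU0_0>,
			<&versal_firmware PM_DEV_TCM_0_A>,
			<&versal_firmware PM_DEV_TCM_0_B>;
};
```

With this shape, pd_count is 3, so tcm_bank_count becomes 2, and the parser pairs power-domains entry 1 with reg entry 0, entry 2 with reg entry 1, and so on; a node without a reg property instead falls back to the hardcoded zynqmp bank tables.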
+1
include/linux/remoteproc/mtk_scp.h
···
 	SCP_IPI_CROS_HOST_CMD,
 	SCP_IPI_VDEC_LAT,
 	SCP_IPI_VDEC_CORE,
+	SCP_IPI_IMGSYS_CMD,
 	SCP_IPI_NS_SERVICE = 0xFF,
 	SCP_IPI_MAX = 0x100,
 };