
Merge tag 'rproc-v5.13' of git://git.kernel.org/pub/scm/linux/kernel/git/andersson/remoteproc

Pull remoteproc updates from Bjorn Andersson:
"This adds support to the remoteproc core for detaching Linux from a
running remoteproc, e.g. to reboot Linux while leaving the remoteproc
running, and it enables this support in the stm32 remoteproc driver.

It also introduces a property for memory carveouts to track whether
they are iomem or system RAM, to enable proper handling of the
differences.

The imx_rproc driver received a number of fixes and improvements, in
particular support for attaching to already-running remote processors
and support for i.MX8MQ and i.MX8MM.

The Qualcomm wcss driver gained support for starting and stopping the
wireless subsystem on QCS404, when not using the TrustZone-based
validator/loader.

Finally it brings a few fixes to the TI PRU and to the firmware loader
for the Qualcomm modem subsystem drivers"

* tag 'rproc-v5.13' of git://git.kernel.org/pub/scm/linux/kernel/git/andersson/remoteproc: (53 commits)
remoteproc: stm32: add capability to detach
dt-bindings: remoteproc: stm32-rproc: add new mailbox channel for detach
remoteproc: imx_rproc: support remote cores booted before Linux Kernel
remoteproc: imx_rproc: move memory parsing to rproc_ops
remoteproc: imx_rproc: enlarge IMX7D_RPROC_MEM_MAX
remoteproc: imx_rproc: add missing of_node_put
remoteproc: imx_rproc: fix build error without CONFIG_MAILBOX
remoteproc: qcom: wcss: Remove unnecessary PTR_ERR()
remoteproc: qcom: wcss: Fix wrong pointer passed to PTR_ERR()
remoteproc: qcom: pas: Add modem support for SDX55
dt-bindings: remoteproc: qcom: pas: Add binding for SDX55
remoteproc: qcom: wcss: Fix return value check in q6v5_wcss_init_mmio()
remoteproc: pru: Fix and cleanup firmware interrupt mapping logic
remoteproc: pru: Fix wrong success return value for fw events
remoteproc: pru: Fixup interrupt-parent logic for fw events
remoteproc: qcom: wcnss: Allow specifying firmware-name
remoteproc: qcom: wcss: explicitly request exclusive reset control
remoteproc: qcom: wcss: Add non pas wcss Q6 support for QCS404
dt-bindings: remoteproc: qcom: Add Q6V5 Modem PIL binding for QCS404
remoteproc: qcom: wcss: populate hardcoded param using driver data
...

+1570 -295
+90
Documentation/devicetree/bindings/remoteproc/fsl,imx-rproc.yaml
···
+ # SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+ %YAML 1.2
+ ---
+ $id: "http://devicetree.org/schemas/remoteproc/fsl,imx-rproc.yaml#"
+ $schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+ title: NXP i.MX Co-Processor Bindings
+
+ description:
+   This binding provides support for ARM Cortex M4 Co-processor found on some NXP iMX SoCs.
+
+ maintainers:
+   - Peng Fan <peng.fan@nxp.com>
+
+ properties:
+   compatible:
+     enum:
+       - fsl,imx8mq-cm4
+       - fsl,imx8mm-cm4
+       - fsl,imx7d-cm4
+       - fsl,imx6sx-cm4
+
+   clocks:
+     maxItems: 1
+
+   syscon:
+     $ref: /schemas/types.yaml#/definitions/phandle
+     description:
+       Phandle to syscon block which provide access to System Reset Controller
+
+   mbox-names:
+     items:
+       - const: tx
+       - const: rx
+       - const: rxdb
+
+   mboxes:
+     description:
+       This property is required only if the rpmsg/virtio functionality is used.
+       List of <&phandle type channel> - 1 channel for TX, 1 channel for RX, 1 channel for RXDB.
+       (see mailbox/fsl,mu.yaml)
+     minItems: 1
+     maxItems: 3
+
+   memory-region:
+     description:
+       If present, a phandle for a reserved memory area that used for vdev buffer,
+       resource table, vring region and others used by remote processor.
+     minItems: 1
+     maxItems: 32
+
+ required:
+   - compatible
+   - clocks
+   - syscon
+
+ additionalProperties: false
+
+ examples:
+   - |
+     #include <dt-bindings/clock/imx7d-clock.h>
+     m4_reserved_sysmem1: cm4@80000000 {
+       reg = <0x80000000 0x80000>;
+     };
+
+     m4_reserved_sysmem2: cm4@81000000 {
+       reg = <0x81000000 0x80000>;
+     };
+
+     imx7d-cm4 {
+       compatible = "fsl,imx7d-cm4";
+       memory-region = <&m4_reserved_sysmem1>, <&m4_reserved_sysmem2>;
+       syscon = <&src>;
+       clocks = <&clks IMX7D_ARM_M4_ROOT_CLK>;
+     };
+
+   - |
+     #include <dt-bindings/clock/imx8mm-clock.h>
+
+     imx8mm-cm4 {
+       compatible = "fsl,imx8mm-cm4";
+       clocks = <&clk IMX8MM_CLK_M4_DIV>;
+       mbox-names = "tx", "rx", "rxdb";
+       mboxes = <&mu 0 1
+                 &mu 1 1
+                 &mu 3 1>;
+       memory-region = <&vdev0buffer>, <&vdev0vring0>, <&vdev0vring1>, <&rsc_table>;
+       syscon = <&src>;
+     };
+ ...
-33
Documentation/devicetree/bindings/remoteproc/imx-rproc.txt
···
- NXP iMX6SX/iMX7D Co-Processor Bindings
- ----------------------------------------
-
- This binding provides support for ARM Cortex M4 Co-processor found on some
- NXP iMX SoCs.
-
- Required properties:
- - compatible		Should be one of:
-				"fsl,imx7d-cm4"
-				"fsl,imx6sx-cm4"
- - clocks		Clock for co-processor (See: ../clock/clock-bindings.txt)
- - syscon		Phandle to syscon block which provide access to
-			System Reset Controller
-
- Optional properties:
- - memory-region	list of phandels to the reserved memory regions.
-			(See: ../reserved-memory/reserved-memory.txt)
-
- Example:
- m4_reserved_sysmem1: cm4@80000000 {
- 	reg = <0x80000000 0x80000>;
- };
-
- m4_reserved_sysmem2: cm4@81000000 {
- 	reg = <0x81000000 0x80000>;
- };
-
- imx7d-cm4 {
- 	compatible	= "fsl,imx7d-cm4";
- 	memory-region	= <&m4_reserved_sysmem1>, <&m4_reserved_sysmem2>;
- 	syscon		= <&src>;
- 	clocks		= <&clks IMX7D_ARM_M4_ROOT_CLK>;
- };
+4
Documentation/devicetree/bindings/remoteproc/qcom,adsp.txt
···
  	"qcom,sc7180-mpss-pas"
  	"qcom,sdm845-adsp-pas"
  	"qcom,sdm845-cdsp-pas"
+ 	"qcom,sdx55-mpss-pas"
  	"qcom,sm8150-adsp-pas"
  	"qcom,sm8150-cdsp-pas"
  	"qcom,sm8150-mpss-pas"
···
  		    must be "wdog", "fatal", "ready", "handover", "stop-ack"
  	qcom,qcs404-wcss-pas:
  	qcom,sc7180-mpss-pas:
+ 	qcom,sdx55-mpss-pas:
  	qcom,sm8150-mpss-pas:
  	qcom,sm8350-mpss-pas:
  		    must be "wdog", "fatal", "ready", "handover", "stop-ack",
···
  	qcom,sm8150-mpss-pas:
  	qcom,sm8350-mpss-pas:
  		    must be "cx", "load_state", "mss"
+ 	qcom,sdx55-mpss-pas:
+ 		    must be "cx", "mss"
  	qcom,sm8250-adsp-pas:
  	qcom,sm8350-adsp-pas:
  	qcom,sm8150-slpi-pas:
+15
Documentation/devicetree/bindings/remoteproc/qcom,q6v5.txt
···
  	Definition: must be one of:
  		    "qcom,q6v5-pil",
  		    "qcom,ipq8074-wcss-pil"
+ 		    "qcom,qcs404-wcss-pil"
  		    "qcom,msm8916-mss-pil",
  		    "qcom,msm8974-mss-pil"
  		    "qcom,msm8996-mss-pil"
···
  		    string:
  	qcom,q6v5-pil:
  	qcom,ipq8074-wcss-pil:
+ 	qcom,qcs404-wcss-pil:
  	qcom,msm8916-mss-pil:
  	qcom,msm8974-mss-pil:
  		    must be "wdog", "fatal", "ready", "handover", "stop-ack"
···
  	Definition: The clocks needed depend on the compatible string:
  	qcom,ipq8074-wcss-pil:
  		    no clock names required
+ 	qcom,qcs404-wcss-pil:
+ 		    must be "xo", "gcc_abhs_cbcr", "gcc_abhs_cbcr",
+ 		    "gcc_axim_cbcr", "lcc_ahbfabric_cbc", "tcsr_lcc_cbc",
+ 		    "lcc_abhs_cbc", "lcc_tcm_slave_cbc", "lcc_abhm_cbc",
+ 		    "lcc_axim_cbc", "lcc_bcr_sleep"
  	qcom,q6v5-pil:
  	qcom,msm8916-mss-pil:
  	qcom,msm8974-mss-pil:
···
  - mss-supply:
  - mx-supply: (deprecated, use power domain instead)
  - pll-supply:
+ 	Usage: required
+ 	Value type: <phandle>
+ 	Definition: reference to the regulators to be held on behalf of the
+ 		    booting of the Hexagon core
+
+ For the compatible string below the following supplies are required:
+   "qcom,qcs404-wcss-pil"
+ - cx-supply:
  	Usage: required
  	Value type: <phandle>
  	Definition: reference to the regulators to be held on behalf of the
+6
Documentation/devicetree/bindings/remoteproc/qcom,wcnss-pil.txt
···
  	Definition: should be "wdog", "fatal", optionally followed by "ready",
  		    "handover", "stop-ack"
 
+ - firmware-name:
+ 	Usage: optional
+ 	Value type: <string>
+ 	Definition: must list the relative firmware image path for the
+ 		    WCNSS core. Defaults to "wcnss.mdt".
+
  - vddmx-supply: (deprecated for qcom,pronto-v1/2-pil)
  - vddcx-supply: (deprecated for qcom,pronto-v1/2-pil)
  - vddpx-supply:
+9 -2
Documentation/devicetree/bindings/remoteproc/st,stm32-rproc.yaml
···
            Unidirectional channel:
              - from local to remote, where ACK from the remote means that it is
                ready for shutdown
+       - description: |
+           A channel (d) used by the local proc to notify the remote proc that it
+           has to stop interprocessor communnication.
+           Unidirectional channel:
+             - from local to remote, where ACK from the remote means that communnication
+               as been stopped on the remote side.
      minItems: 1
-     maxItems: 3
+     maxItems: 4
 
    mbox-names:
      items:
        - const: vq0
        - const: vq1
        - const: shutdown
+       - const: detach
      minItems: 1
-     maxItems: 3
+     maxItems: 4
 
    memory-region:
      description:
+4 -3
drivers/remoteproc/Kconfig
···
  	  It's safe to say N if you don't want to use this interface.
 
  config IMX_REMOTEPROC
- 	tristate "IMX6/7 remoteproc support"
+ 	tristate "i.MX remoteproc support"
  	depends on ARCH_MXC
+ 	select MAILBOX
  	help
- 	  Say y here to support iMX's remote processors (Cortex M4
- 	  on iMX7D) via the remote processor framework.
+ 	  Say y here to support iMX's remote processors via the remote
+ 	  processor framework.
 
  	  It's safe to say N here.
+306 -16
drivers/remoteproc/imx_rproc.c
···
  #include <linux/err.h>
  #include <linux/interrupt.h>
  #include <linux/kernel.h>
+ #include <linux/mailbox_client.h>
  #include <linux/mfd/syscon.h>
  #include <linux/module.h>
  #include <linux/of_address.h>
+ #include <linux/of_reserved_mem.h>
  #include <linux/of_device.h>
  #include <linux/platform_device.h>
  #include <linux/regmap.h>
  #include <linux/remoteproc.h>
+ #include <linux/workqueue.h>
+
+ #include "remoteproc_internal.h"
 
  #define IMX7D_SRC_SCR			0x0C
  #define IMX7D_ENABLE_M4			BIT(3)
···
  					| IMX6SX_SW_M4C_NON_SCLR_RST \
  					| IMX6SX_SW_M4C_RST)
 
- #define IMX7D_RPROC_MEM_MAX	8
+ #define IMX_RPROC_MEM_MAX	32
 
  /**
   * struct imx_rproc_mem - slim internal memory structure
···
  	struct regmap			*regmap;
  	struct rproc			*rproc;
  	const struct imx_rproc_dcfg	*dcfg;
- 	struct imx_rproc_mem		mem[IMX7D_RPROC_MEM_MAX];
+ 	struct imx_rproc_mem		mem[IMX_RPROC_MEM_MAX];
  	struct clk			*clk;
+ 	struct mbox_client		cl;
+ 	struct mbox_chan		*tx_ch;
+ 	struct mbox_chan		*rx_ch;
+ 	struct work_struct		rproc_work;
+ 	struct workqueue_struct		*workqueue;
+ 	void __iomem			*rsc_table;
+ };
+
+ static const struct imx_rproc_att imx_rproc_att_imx8mq[] = {
+ 	/* dev addr , sys addr  , size	    , flags */
+ 	/* TCML - alias */
+ 	{ 0x00000000, 0x007e0000, 0x00020000, 0 },
+ 	/* OCRAM_S */
+ 	{ 0x00180000, 0x00180000, 0x00008000, 0 },
+ 	/* OCRAM */
+ 	{ 0x00900000, 0x00900000, 0x00020000, 0 },
+ 	/* OCRAM */
+ 	{ 0x00920000, 0x00920000, 0x00020000, 0 },
+ 	/* QSPI Code - alias */
+ 	{ 0x08000000, 0x08000000, 0x08000000, 0 },
+ 	/* DDR (Code) - alias */
+ 	{ 0x10000000, 0x80000000, 0x0FFE0000, 0 },
+ 	/* TCML */
+ 	{ 0x1FFE0000, 0x007E0000, 0x00020000, ATT_OWN },
+ 	/* TCMU */
+ 	{ 0x20000000, 0x00800000, 0x00020000, ATT_OWN },
+ 	/* OCRAM_S */
+ 	{ 0x20180000, 0x00180000, 0x00008000, ATT_OWN },
+ 	/* OCRAM */
+ 	{ 0x20200000, 0x00900000, 0x00020000, ATT_OWN },
+ 	/* OCRAM */
+ 	{ 0x20220000, 0x00920000, 0x00020000, ATT_OWN },
+ 	/* DDR (Data) */
+ 	{ 0x40000000, 0x40000000, 0x80000000, 0 },
  };
 
  static const struct imx_rproc_att imx_rproc_att_imx7d[] = {
···
  	{ 0x208F8000, 0x008F8000, 0x00004000, 0 },
  	/* DDR (Data) */
  	{ 0x80000000, 0x80000000, 0x60000000, 0 },
+ };
+
+ static const struct imx_rproc_dcfg imx_rproc_cfg_imx8mq = {
+ 	.src_reg	= IMX7D_SRC_SCR,
+ 	.src_mask	= IMX7D_M4_RST_MASK,
+ 	.src_start	= IMX7D_M4_START,
+ 	.src_stop	= IMX7D_M4_STOP,
+ 	.att		= imx_rproc_att_imx8mq,
+ 	.att_size	= ARRAY_SIZE(imx_rproc_att_imx8mq),
  };
 
  static const struct imx_rproc_dcfg imx_rproc_cfg_imx7d = {
···
  	return -ENOENT;
  }
 
- static void *imx_rproc_da_to_va(struct rproc *rproc, u64 da, size_t len)
+ static void *imx_rproc_da_to_va(struct rproc *rproc, u64 da, size_t len, bool *is_iomem)
  {
  	struct imx_rproc *priv = rproc->priv;
  	void *va = NULL;
···
  	if (imx_rproc_da_to_sys(priv, da, len, &sys))
  		return NULL;
 
- 	for (i = 0; i < IMX7D_RPROC_MEM_MAX; i++) {
+ 	for (i = 0; i < IMX_RPROC_MEM_MAX; i++) {
  		if (sys >= priv->mem[i].sys_addr && sys + len <
  		    priv->mem[i].sys_addr + priv->mem[i].size) {
  			unsigned int offset = sys - priv->mem[i].sys_addr;
···
  	return va;
  }
 
+ static int imx_rproc_mem_alloc(struct rproc *rproc,
+ 			       struct rproc_mem_entry *mem)
+ {
+ 	struct device *dev = rproc->dev.parent;
+ 	void *va;
+
+ 	dev_dbg(dev, "map memory: %p+%zx\n", &mem->dma, mem->len);
+ 	va = ioremap_wc(mem->dma, mem->len);
+ 	if (IS_ERR_OR_NULL(va)) {
+ 		dev_err(dev, "Unable to map memory region: %p+%zx\n",
+ 			&mem->dma, mem->len);
+ 		return -ENOMEM;
+ 	}
+
+ 	/* Update memory entry va */
+ 	mem->va = va;
+
+ 	return 0;
+ }
+
+ static int imx_rproc_mem_release(struct rproc *rproc,
+ 				 struct rproc_mem_entry *mem)
+ {
+ 	dev_dbg(rproc->dev.parent, "unmap memory: %pa\n", &mem->dma);
+ 	iounmap(mem->va);
+
+ 	return 0;
+ }
+
+ static int imx_rproc_prepare(struct rproc *rproc)
+ {
+ 	struct imx_rproc *priv = rproc->priv;
+ 	struct device_node *np = priv->dev->of_node;
+ 	struct of_phandle_iterator it;
+ 	struct rproc_mem_entry *mem;
+ 	struct reserved_mem *rmem;
+ 	u32 da;
+
+ 	/* Register associated reserved memory regions */
+ 	of_phandle_iterator_init(&it, np, "memory-region", NULL, 0);
+ 	while (of_phandle_iterator_next(&it) == 0) {
+ 		/*
+ 		 * Ignore the first memory region which will be used vdev buffer.
+ 		 * No need to do extra handlings, rproc_add_virtio_dev will handle it.
+ 		 */
+ 		if (!strcmp(it.node->name, "vdev0buffer"))
+ 			continue;
+
+ 		rmem = of_reserved_mem_lookup(it.node);
+ 		if (!rmem) {
+ 			dev_err(priv->dev, "unable to acquire memory-region\n");
+ 			return -EINVAL;
+ 		}
+
+ 		/* No need to translate pa to da, i.MX use same map */
+ 		da = rmem->base;
+
+ 		/* Register memory region */
+ 		mem = rproc_mem_entry_init(priv->dev, NULL, (dma_addr_t)rmem->base, rmem->size, da,
+ 					   imx_rproc_mem_alloc, imx_rproc_mem_release,
+ 					   it.node->name);
+
+ 		if (mem)
+ 			rproc_coredump_add_segment(rproc, da, rmem->size);
+ 		else
+ 			return -ENOMEM;
+
+ 		rproc_add_carveout(rproc, mem);
+ 	}
+
+ 	return 0;
+ }
+
+ static int imx_rproc_parse_fw(struct rproc *rproc, const struct firmware *fw)
+ {
+ 	int ret;
+
+ 	ret = rproc_elf_load_rsc_table(rproc, fw);
+ 	if (ret)
+ 		dev_info(&rproc->dev, "No resource table in elf\n");
+
+ 	return 0;
+ }
+
+ static void imx_rproc_kick(struct rproc *rproc, int vqid)
+ {
+ 	struct imx_rproc *priv = rproc->priv;
+ 	int err;
+ 	__u32 mmsg;
+
+ 	if (!priv->tx_ch) {
+ 		dev_err(priv->dev, "No initialized mbox tx channel\n");
+ 		return;
+ 	}
+
+ 	/*
+ 	 * Send the index of the triggered virtqueue as the mu payload.
+ 	 * Let remote processor know which virtqueue is used.
+ 	 */
+ 	mmsg = vqid << 16;
+
+ 	err = mbox_send_message(priv->tx_ch, (void *)&mmsg);
+ 	if (err < 0)
+ 		dev_err(priv->dev, "%s: failed (%d, err:%d)\n",
+ 			__func__, vqid, err);
+ }
+
+ static int imx_rproc_attach(struct rproc *rproc)
+ {
+ 	return 0;
+ }
+
+ static struct resource_table *imx_rproc_get_loaded_rsc_table(struct rproc *rproc, size_t *table_sz)
+ {
+ 	struct imx_rproc *priv = rproc->priv;
+
+ 	/* The resource table has already been mapped in imx_rproc_addr_init */
+ 	if (!priv->rsc_table)
+ 		return NULL;
+
+ 	*table_sz = SZ_1K;
+ 	return (struct resource_table *)priv->rsc_table;
+ }
+
  static const struct rproc_ops imx_rproc_ops = {
+ 	.prepare	= imx_rproc_prepare,
+ 	.attach		= imx_rproc_attach,
  	.start		= imx_rproc_start,
  	.stop		= imx_rproc_stop,
+ 	.kick		= imx_rproc_kick,
  	.da_to_va       = imx_rproc_da_to_va,
+ 	.load		= rproc_elf_load_segments,
+ 	.parse_fw	= imx_rproc_parse_fw,
+ 	.find_loaded_rsc_table = rproc_elf_find_loaded_rsc_table,
+ 	.get_loaded_rsc_table = imx_rproc_get_loaded_rsc_table,
+ 	.sanity_check	= rproc_elf_sanity_check,
+ 	.get_boot_addr	= rproc_elf_get_boot_addr,
  };
 
  static int imx_rproc_addr_init(struct imx_rproc *priv,
···
  		if (!(att->flags & ATT_OWN))
  			continue;
 
- 		if (b >= IMX7D_RPROC_MEM_MAX)
+ 		if (b >= IMX_RPROC_MEM_MAX)
  			break;
 
  		priv->mem[b].cpu_addr = devm_ioremap(&pdev->dev,
  						     att->sa, att->size);
  		if (!priv->mem[b].cpu_addr) {
- 			dev_err(dev, "devm_ioremap_resource failed\n");
+ 			dev_err(dev, "failed to remap %#x bytes from %#x\n", att->size, att->sa);
  			return -ENOMEM;
  		}
  		priv->mem[b].sys_addr = att->sa;
···
  		struct resource res;
 
  		node = of_parse_phandle(np, "memory-region", a);
+ 		/* Not map vdev region */
+ 		if (!strcmp(node->name, "vdev"))
+ 			continue;
  		err = of_address_to_resource(node, 0, &res);
  		if (err) {
  			dev_err(dev, "unable to resolve memory region\n");
  			return err;
  		}
 
- 		if (b >= IMX7D_RPROC_MEM_MAX)
+ 		of_node_put(node);
+
+ 		if (b >= IMX_RPROC_MEM_MAX)
  			break;
 
- 		priv->mem[b].cpu_addr = devm_ioremap_resource(&pdev->dev, &res);
- 		if (IS_ERR(priv->mem[b].cpu_addr)) {
- 			dev_err(dev, "devm_ioremap_resource failed\n");
- 			err = PTR_ERR(priv->mem[b].cpu_addr);
- 			return err;
+ 		/* Not use resource version, because we might share region */
+ 		priv->mem[b].cpu_addr = devm_ioremap(&pdev->dev, res.start, resource_size(&res));
+ 		if (!priv->mem[b].cpu_addr) {
+ 			dev_err(dev, "failed to remap %pr\n", &res);
+ 			return -ENOMEM;
  		}
  		priv->mem[b].sys_addr = res.start;
  		priv->mem[b].size = resource_size(&res);
+ 		if (!strcmp(node->name, "rsc_table"))
+ 			priv->rsc_table = priv->mem[b].cpu_addr;
  		b++;
  	}
+
+ 	return 0;
+ }
+
+ static void imx_rproc_vq_work(struct work_struct *work)
+ {
+ 	struct imx_rproc *priv = container_of(work, struct imx_rproc,
+ 					      rproc_work);
+
+ 	rproc_vq_interrupt(priv->rproc, 0);
+ 	rproc_vq_interrupt(priv->rproc, 1);
+ }
+
+ static void imx_rproc_rx_callback(struct mbox_client *cl, void *msg)
+ {
+ 	struct rproc *rproc = dev_get_drvdata(cl->dev);
+ 	struct imx_rproc *priv = rproc->priv;
+
+ 	queue_work(priv->workqueue, &priv->rproc_work);
+ }
+
+ static int imx_rproc_xtr_mbox_init(struct rproc *rproc)
+ {
+ 	struct imx_rproc *priv = rproc->priv;
+ 	struct device *dev = priv->dev;
+ 	struct mbox_client *cl;
+ 	int ret;
+
+ 	if (!of_get_property(dev->of_node, "mbox-names", NULL))
+ 		return 0;
+
+ 	cl = &priv->cl;
+ 	cl->dev = dev;
+ 	cl->tx_block = true;
+ 	cl->tx_tout = 100;
+ 	cl->knows_txdone = false;
+ 	cl->rx_callback = imx_rproc_rx_callback;
+
+ 	priv->tx_ch = mbox_request_channel_byname(cl, "tx");
+ 	if (IS_ERR(priv->tx_ch)) {
+ 		ret = PTR_ERR(priv->tx_ch);
+ 		return dev_err_probe(cl->dev, ret,
+ 				     "failed to request tx mailbox channel: %d\n", ret);
+ 	}
+
+ 	priv->rx_ch = mbox_request_channel_byname(cl, "rx");
+ 	if (IS_ERR(priv->rx_ch)) {
+ 		mbox_free_channel(priv->tx_ch);
+ 		ret = PTR_ERR(priv->rx_ch);
+ 		return dev_err_probe(cl->dev, ret,
+ 				     "failed to request rx mailbox channel: %d\n", ret);
+ 	}
+
+ 	return 0;
+ }
+
+ static void imx_rproc_free_mbox(struct rproc *rproc)
+ {
+ 	struct imx_rproc *priv = rproc->priv;
+
+ 	mbox_free_channel(priv->tx_ch);
+ 	mbox_free_channel(priv->rx_ch);
+ }
+
+ static int imx_rproc_detect_mode(struct imx_rproc *priv)
+ {
+ 	const struct imx_rproc_dcfg *dcfg = priv->dcfg;
+ 	struct device *dev = priv->dev;
+ 	int ret;
+ 	u32 val;
+
+ 	ret = regmap_read(priv->regmap, dcfg->src_reg, &val);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to read src\n");
+ 		return ret;
+ 	}
+
+ 	if (!(val & dcfg->src_stop))
+ 		priv->rproc->state = RPROC_DETACHED;
 
  	return 0;
  }
···
  	priv->dev = dev;
 
  	dev_set_drvdata(dev, rproc);
+ 	priv->workqueue = create_workqueue(dev_name(dev));
+ 	if (!priv->workqueue) {
+ 		dev_err(dev, "cannot create workqueue\n");
+ 		ret = -ENOMEM;
+ 		goto err_put_rproc;
+ 	}
+
+ 	ret = imx_rproc_xtr_mbox_init(rproc);
+ 	if (ret)
+ 		goto err_put_wkq;
 
  	ret = imx_rproc_addr_init(priv, pdev);
  	if (ret) {
  		dev_err(dev, "failed on imx_rproc_addr_init\n");
- 		goto err_put_rproc;
+ 		goto err_put_mbox;
  	}
+
+ 	ret = imx_rproc_detect_mode(priv);
+ 	if (ret)
+ 		goto err_put_mbox;
 
  	priv->clk = devm_clk_get(dev, NULL);
  	if (IS_ERR(priv->clk)) {
  		dev_err(dev, "Failed to get clock\n");
  		ret = PTR_ERR(priv->clk);
- 		goto err_put_rproc;
+ 		goto err_put_mbox;
  	}
 
  	/*
···
  	ret = clk_prepare_enable(priv->clk);
  	if (ret) {
  		dev_err(&rproc->dev, "Failed to enable clock\n");
- 		goto err_put_rproc;
+ 		goto err_put_mbox;
  	}
+
+ 	INIT_WORK(&priv->rproc_work, imx_rproc_vq_work);
 
  	ret = rproc_add(rproc);
  	if (ret) {
···
  err_put_clk:
  	clk_disable_unprepare(priv->clk);
+ err_put_mbox:
+ 	imx_rproc_free_mbox(rproc);
+ err_put_wkq:
+ 	destroy_workqueue(priv->workqueue);
  err_put_rproc:
  	rproc_free(rproc);
 
···
  	clk_disable_unprepare(priv->clk);
  	rproc_del(rproc);
+ 	imx_rproc_free_mbox(rproc);
  	rproc_free(rproc);
 
  	return 0;
···
  static const struct of_device_id imx_rproc_of_match[] = {
  	{ .compatible = "fsl,imx7d-cm4", .data = &imx_rproc_cfg_imx7d },
  	{ .compatible = "fsl,imx6sx-cm4", .data = &imx_rproc_cfg_imx6sx },
+ 	{ .compatible = "fsl,imx8mq-cm4", .data = &imx_rproc_cfg_imx8mq },
+ 	{ .compatible = "fsl,imx8mm-cm4", .data = &imx_rproc_cfg_imx8mq },
  	{},
  };
  MODULE_DEVICE_TABLE(of, imx_rproc_of_match);
···
  module_platform_driver(imx_rproc_driver);
 
  MODULE_LICENSE("GPL v2");
- MODULE_DESCRIPTION("IMX6SX/7D remote processor control driver");
+ MODULE_DESCRIPTION("i.MX remote processor control driver");
  MODULE_AUTHOR("Oleksij Rempel <o.rempel@pengutronix.de>");
+1 -1
drivers/remoteproc/ingenic_rproc.c
···
  	writel(vqid, vpu->aux_base + REG_CORE_MSG);
  }
 
- static void *ingenic_rproc_da_to_va(struct rproc *rproc, u64 da, size_t len)
+ static void *ingenic_rproc_da_to_va(struct rproc *rproc, u64 da, size_t len, bool *is_iomem)
  {
  	struct vpu *vpu = rproc->priv;
  	void __iomem *va = NULL;
+1 -1
drivers/remoteproc/keystone_remoteproc.c
···
   * can be used either by the remoteproc core for loading (when using kernel
   * remoteproc loader), or by any rpmsg bus drivers.
   */
- static void *keystone_rproc_da_to_va(struct rproc *rproc, u64 da, size_t len)
+ static void *keystone_rproc_da_to_va(struct rproc *rproc, u64 da, size_t len, bool *is_iomem)
  {
  	struct keystone_rproc *ksproc = rproc->priv;
  	void __iomem *va = NULL;
+3 -3
drivers/remoteproc/mtk_scp.c
···
  	}
 
  	/* grab the kernel address for this device address */
- 	ptr = (void __iomem *)rproc_da_to_va(rproc, da, memsz);
+ 	ptr = (void __iomem *)rproc_da_to_va(rproc, da, memsz, NULL);
  	if (!ptr) {
  		dev_err(dev, "bad phdr da 0x%x mem 0x%x\n", da, memsz);
  		ret = -EINVAL;
···
  	return NULL;
  }
 
- static void *scp_da_to_va(struct rproc *rproc, u64 da, size_t len)
+ static void *scp_da_to_va(struct rproc *rproc, u64 da, size_t len, bool *is_iomem)
  {
  	struct mtk_scp *scp = (struct mtk_scp *)rproc->priv;
 
···
  {
  	void *ptr;
 
- 	ptr = scp_da_to_va(scp->rproc, mem_addr, 0);
+ 	ptr = scp_da_to_va(scp->rproc, mem_addr, 0, NULL);
  	if (!ptr)
  		return ERR_PTR(-EINVAL);
 
+1 -1
drivers/remoteproc/omap_remoteproc.c
···
   * Return: translated virtual address in kernel memory space on success,
   *         or NULL on failure.
   */
- static void *omap_rproc_da_to_va(struct rproc *rproc, u64 da, size_t len)
+ static void *omap_rproc_da_to_va(struct rproc *rproc, u64 da, size_t len, bool *is_iomem)
  {
  	struct omap_rproc *oproc = rproc->priv;
  	int i;
+35 -12
drivers/remoteproc/pru_rproc.c
···
 
  	return 0;
  }
- DEFINE_SIMPLE_ATTRIBUTE(pru_rproc_debug_ss_fops, pru_rproc_debug_ss_get,
- 			pru_rproc_debug_ss_set, "%llu\n");
+ DEFINE_DEBUGFS_ATTRIBUTE(pru_rproc_debug_ss_fops, pru_rproc_debug_ss_get,
+ 			 pru_rproc_debug_ss_set, "%llu\n");
 
  /*
   * Create PRU-specific debugfs entries
···
  static void pru_dispose_irq_mapping(struct pru_rproc *pru)
  {
- 	while (pru->evt_count--) {
+ 	if (!pru->mapped_irq)
+ 		return;
+
+ 	while (pru->evt_count) {
+ 		pru->evt_count--;
  		if (pru->mapped_irq[pru->evt_count] > 0)
  			irq_dispose_mapping(pru->mapped_irq[pru->evt_count]);
  	}
 
  	kfree(pru->mapped_irq);
+ 	pru->mapped_irq = NULL;
  }
 
  /*
···
  	struct pru_rproc *pru = rproc->priv;
  	struct pru_irq_rsc *rsc = pru->pru_interrupt_map;
  	struct irq_fwspec fwspec;
- 	struct device_node *irq_parent;
+ 	struct device_node *parent, *irq_parent;
  	int i, ret = 0;
 
  	/* not having pru_interrupt_map is not an error */
···
  	pru->evt_count = rsc->num_evts;
  	pru->mapped_irq = kcalloc(pru->evt_count, sizeof(unsigned int),
  				  GFP_KERNEL);
- 	if (!pru->mapped_irq)
+ 	if (!pru->mapped_irq) {
+ 		pru->evt_count = 0;
  		return -ENOMEM;
+ 	}
 
  	/*
  	 * parse and fill in system event to interrupt channel and
- 	 * channel-to-host mapping
+ 	 * channel-to-host mapping. The interrupt controller to be used
+ 	 * for these mappings for a given PRU remoteproc is always its
+ 	 * corresponding sibling PRUSS INTC node.
  	 */
- 	irq_parent = of_irq_find_parent(pru->dev->of_node);
+ 	parent = of_get_parent(dev_of_node(pru->dev));
+ 	if (!parent) {
+ 		kfree(pru->mapped_irq);
+ 		pru->mapped_irq = NULL;
+ 		pru->evt_count = 0;
+ 		return -ENODEV;
+ 	}
+
+ 	irq_parent = of_get_child_by_name(parent, "interrupt-controller");
+ 	of_node_put(parent);
  	if (!irq_parent) {
  		kfree(pru->mapped_irq);
+ 		pru->mapped_irq = NULL;
+ 		pru->evt_count = 0;
  		return -ENODEV;
  	}
 
···
 
  		pru->mapped_irq[i] = irq_create_fwspec_mapping(&fwspec);
  		if (!pru->mapped_irq[i]) {
- 			dev_err(dev, "failed to get virq\n");
- 			ret = pru->mapped_irq[i];
+ 			dev_err(dev, "failed to get virq for fw mapping %d: event %d chnl %d host %d\n",
+ 				i, fwspec.param[0], fwspec.param[1],
+ 				fwspec.param[2]);
+ 			ret = -EINVAL;
  			goto map_fail;
  		}
  	}
+ 	of_node_put(irq_parent);
 
  	return ret;
 
  map_fail:
  	pru_dispose_irq_mapping(pru);
+ 	of_node_put(irq_parent);
 
  	return ret;
  }
···
  	pru_control_write_reg(pru, PRU_CTRL_CTRL, val);
 
  	/* dispose irq mapping - new firmware can provide new mapping */
- 	if (pru->mapped_irq)
- 		pru_dispose_irq_mapping(pru);
+ 	pru_dispose_irq_mapping(pru);
 
  	return 0;
  }
···
   * core for any PRU client drivers. The PRU Instruction RAM access is restricted
   * only to the PRU loader code.
   */
- static void *pru_rproc_da_to_va(struct rproc *rproc, u64 da, size_t len)
+ static void *pru_rproc_da_to_va(struct rproc *rproc, u64 da, size_t len, bool *is_iomem)
  {
  	struct pru_rproc *pru = rproc->priv;
 
+1 -1
drivers/remoteproc/qcom_q6v5_adsp.c
···
  	return ret;
  }
 
- static void *adsp_da_to_va(struct rproc *rproc, u64 da, size_t len)
+ static void *adsp_da_to_va(struct rproc *rproc, u64 da, size_t len, bool *is_iomem)
  {
  	struct qcom_adsp *adsp = (struct qcom_adsp *)rproc->priv;
  	int offset;
+24 -2
drivers/remoteproc/qcom_q6v5_mss.c
···
  			goto release_firmware;
  		}
 
+ 		if (phdr->p_filesz > phdr->p_memsz) {
+ 			dev_err(qproc->dev,
+ 				"refusing to load segment %d with p_filesz > p_memsz\n",
+ 				i);
+ 			ret = -EINVAL;
+ 			goto release_firmware;
+ 		}
+
  		ptr = memremap(qproc->mpss_phys + offset, phdr->p_memsz, MEMREMAP_WC);
  		if (!ptr) {
  			dev_err(qproc->dev,
···
  						ptr, phdr->p_filesz);
  			if (ret) {
  				dev_err(qproc->dev, "failed to load %s\n", fw_name);
+ 				memunmap(ptr);
+ 				goto release_firmware;
+ 			}
+
+ 			if (seg_fw->size != phdr->p_filesz) {
+ 				dev_err(qproc->dev,
+ 					"failed to load segment %d from truncated file %s\n",
+ 					i, fw_name);
+ 				ret = -EINVAL;
+ 				release_firmware(seg_fw);
  				memunmap(ptr);
  				goto release_firmware;
  			}
···
  	mba_image = desc->hexagon_mba_image;
  	ret = of_property_read_string_index(pdev->dev.of_node, "firmware-name",
  					    0, &mba_image);
- 	if (ret < 0 && ret != -EINVAL)
+ 	if (ret < 0 && ret != -EINVAL) {
+ 		dev_err(&pdev->dev, "unable to read mba firmware-name\n");
  		return ret;
+ 	}
 
  	rproc = rproc_alloc(&pdev->dev, pdev->name, &q6v5_ops,
  			    mba_image, sizeof(*qproc));
···
  	qproc->hexagon_mdt_image = "modem.mdt";
  	ret = of_property_read_string_index(pdev->dev.of_node, "firmware-name",
  					    1, &qproc->hexagon_mdt_image);
- 	if (ret < 0 && ret != -EINVAL)
+ 	if (ret < 0 && ret != -EINVAL) {
+ 		dev_err(&pdev->dev, "unable to read mpss firmware-name\n");
  		goto free_rproc;
+ 	}
 
  	platform_set_drvdata(pdev, qproc);
 
+18 -1
drivers/remoteproc/qcom_q6v5_pas.c
···
  	return ret;
  }
 
- static void *adsp_da_to_va(struct rproc *rproc, u64 da, size_t len)
+ static void *adsp_da_to_va(struct rproc *rproc, u64 da, size_t len, bool *is_iomem)
  {
  	struct qcom_adsp *adsp = (struct qcom_adsp *)rproc->priv;
  	int offset;
···
  	.ssctl_id = 0x12,
  };
 
+ static const struct adsp_data sdx55_mpss_resource = {
+ 	.crash_reason_smem = 421,
+ 	.firmware_name = "modem.mdt",
+ 	.pas_id = 4,
+ 	.has_aggre2_clk = false,
+ 	.auto_boot = true,
+ 	.proxy_pd_names = (char*[]){
+ 		"cx",
+ 		"mss",
+ 		NULL
+ 	},
+ 	.ssr_name = "mpss",
+ 	.sysmon_name = "modem",
+ 	.ssctl_id = 0x22,
+ };
+
  static const struct of_device_id adsp_of_match[] = {
  	{ .compatible = "qcom,msm8974-adsp-pil", .data = &adsp_resource_init},
  	{ .compatible = "qcom,msm8996-adsp-pil", .data = &adsp_resource_init},
···
  	{ .compatible = "qcom,sc7180-mpss-pas", .data = &mpss_resource_init},
  	{ .compatible = "qcom,sdm845-adsp-pas", .data = &adsp_resource_init},
  	{ .compatible = "qcom,sdm845-cdsp-pas", .data = &cdsp_resource_init},
+ 	{ .compatible = "qcom,sdx55-mpss-pas", .data = &sdx55_mpss_resource},
  	{ .compatible = "qcom,sm8150-adsp-pas", .data = &sm8150_adsp_resource},
  	{ .compatible = "qcom,sm8150-cdsp-pas", .data = &sm8150_cdsp_resource},
  	{ .compatible = "qcom,sm8150-mpss-pas", .data = &mpss_resource_init},
+555 -46
drivers/remoteproc/qcom_q6v5_wcss.c
··· 4 4 * Copyright (C) 2014 Sony Mobile Communications AB 5 5 * Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. 6 6 */ 7 + #include <linux/clk.h> 8 + #include <linux/delay.h> 9 + #include <linux/io.h> 7 10 #include <linux/iopoll.h> 8 11 #include <linux/kernel.h> 9 12 #include <linux/mfd/syscon.h> 10 13 #include <linux/module.h> 14 + #include <linux/of_address.h> 11 15 #include <linux/of_reserved_mem.h> 12 16 #include <linux/platform_device.h> 13 17 #include <linux/regmap.h> 18 + #include <linux/regulator/consumer.h> 14 19 #include <linux/reset.h> 15 20 #include <linux/soc/qcom/mdt_loader.h> 16 21 #include "qcom_common.h" ··· 29 24 #define Q6SS_GFMUX_CTL_REG 0x020 30 25 #define Q6SS_PWR_CTL_REG 0x030 31 26 #define Q6SS_MEM_PWR_CTL 0x0B0 27 + #define Q6SS_STRAP_ACC 0x110 28 + #define Q6SS_CGC_OVERRIDE 0x034 29 + #define Q6SS_BCR_REG 0x6000 32 30 33 31 /* AXI Halt Register Offsets */ 34 32 #define AXI_HALTREQ_REG 0x0 ··· 45 37 #define Q6SS_CORE_ARES BIT(1) 46 38 #define Q6SS_BUS_ARES_ENABLE BIT(2) 47 39 40 + /* Q6SS_BRC_RESET */ 41 + #define Q6SS_BRC_BLK_ARES BIT(0) 42 + 48 43 /* Q6SS_GFMUX_CTL */ 49 44 #define Q6SS_CLK_ENABLE BIT(1) 45 + #define Q6SS_SWITCH_CLK_SRC BIT(8) 50 46 51 47 /* Q6SS_PWR_CTL */ 52 48 #define Q6SS_L2DATA_STBY_N BIT(18) 53 49 #define Q6SS_SLP_RET_N BIT(19) 54 50 #define Q6SS_CLAMP_IO BIT(20) 55 51 #define QDSS_BHS_ON BIT(21) 52 + #define QDSS_Q6_MEMORIES GENMASK(15, 0) 56 53 57 54 /* Q6SS parameters */ 58 55 #define Q6SS_LDO_BYP BIT(25) ··· 66 53 #define Q6SS_CLAMP_QMC_MEM BIT(22) 67 54 #define HALT_CHECK_MAX_LOOPS 200 68 55 #define Q6SS_XO_CBCR GENMASK(5, 3) 56 + #define Q6SS_SLEEP_CBCR GENMASK(5, 2) 69 57 70 58 /* Q6SS config/status registers */ 71 59 #define TCSR_GLOBAL_CFG0 0x0 ··· 85 71 #define TCSR_WCSS_CLK_MASK 0x1F 86 72 #define TCSR_WCSS_CLK_ENABLE 0x14 87 73 74 + #define MAX_HALT_REG 3 75 + enum { 76 + WCSS_IPQ8074, 77 + WCSS_QCS404, 78 + }; 79 + 80 + struct wcss_data { 81 + const char *firmware_name; 82 + 
unsigned int crash_reason_smem; 83 + u32 version; 84 + bool aon_reset_required; 85 + bool wcss_q6_reset_required; 86 + const char *ssr_name; 87 + const char *sysmon_name; 88 + int ssctl_id; 89 + const struct rproc_ops *ops; 90 + bool requires_force_stop; 91 + }; 92 + 88 93 struct q6v5_wcss { 89 94 struct device *dev; 90 95 ··· 115 82 u32 halt_wcss; 116 83 u32 halt_nc; 117 84 85 + struct clk *xo; 86 + struct clk *ahbfabric_cbcr_clk; 87 + struct clk *gcc_abhs_cbcr; 88 + struct clk *gcc_axim_cbcr; 89 + struct clk *lcc_csr_cbcr; 90 + struct clk *ahbs_cbcr; 91 + struct clk *tcm_slave_cbcr; 92 + struct clk *qdsp6ss_abhm_cbcr; 93 + struct clk *qdsp6ss_sleep_cbcr; 94 + struct clk *qdsp6ss_axim_cbcr; 95 + struct clk *qdsp6ss_xo_cbcr; 96 + struct clk *qdsp6ss_core_gfmux; 97 + struct clk *lcc_bcr_sleep; 98 + struct regulator *cx_supply; 99 + struct qcom_sysmon *sysmon; 100 + 118 101 struct reset_control *wcss_aon_reset; 119 102 struct reset_control *wcss_reset; 120 103 struct reset_control *wcss_q6_reset; 104 + struct reset_control *wcss_q6_bcr_reset; 121 105 122 106 struct qcom_q6v5 q6v5; 123 107 ··· 142 92 phys_addr_t mem_reloc; 143 93 void *mem_region; 144 94 size_t mem_size; 95 + 96 + unsigned int crash_reason_smem; 97 + u32 version; 98 + bool requires_force_stop; 145 99 146 100 struct qcom_rproc_glink glink_subdev; 147 101 struct qcom_rproc_ssr ssr_subdev; ··· 291 237 return ret; 292 238 } 293 239 240 + static int q6v5_wcss_qcs404_power_on(struct q6v5_wcss *wcss) 241 + { 242 + unsigned long val; 243 + int ret, idx; 244 + 245 + /* Toggle the restart */ 246 + reset_control_assert(wcss->wcss_reset); 247 + usleep_range(200, 300); 248 + reset_control_deassert(wcss->wcss_reset); 249 + usleep_range(200, 300); 250 + 251 + /* Enable GCC_WDSP_Q6SS_AHBS_CBCR clock */ 252 + ret = clk_prepare_enable(wcss->gcc_abhs_cbcr); 253 + if (ret) 254 + return ret; 255 + 256 + /* Remove reset to the WCNSS QDSP6SS */ 257 + reset_control_deassert(wcss->wcss_q6_bcr_reset); 258 + 259 + /* Enable 
Q6SSTOP_AHBFABRIC_CBCR clock */ 260 + ret = clk_prepare_enable(wcss->ahbfabric_cbcr_clk); 261 + if (ret) 262 + goto disable_gcc_abhs_cbcr_clk; 263 + 264 + /* Enable the LCCCSR CBC clock, Q6SSTOP_Q6SSTOP_LCC_CSR_CBCR clock */ 265 + ret = clk_prepare_enable(wcss->lcc_csr_cbcr); 266 + if (ret) 267 + goto disable_ahbfabric_cbcr_clk; 268 + 269 + /* Enable the Q6AHBS CBC, Q6SSTOP_Q6SS_AHBS_CBCR clock */ 270 + ret = clk_prepare_enable(wcss->ahbs_cbcr); 271 + if (ret) 272 + goto disable_csr_cbcr_clk; 273 + 274 + /* Enable the TCM slave CBC, Q6SSTOP_Q6SS_TCM_SLAVE_CBCR clock */ 275 + ret = clk_prepare_enable(wcss->tcm_slave_cbcr); 276 + if (ret) 277 + goto disable_ahbs_cbcr_clk; 278 + 279 + /* Enable the Q6SS AHB master CBC, Q6SSTOP_Q6SS_AHBM_CBCR clock */ 280 + ret = clk_prepare_enable(wcss->qdsp6ss_abhm_cbcr); 281 + if (ret) 282 + goto disable_tcm_slave_cbcr_clk; 283 + 284 + /* Enable the Q6SS AXI master CBC, Q6SSTOP_Q6SS_AXIM_CBCR clock */ 285 + ret = clk_prepare_enable(wcss->qdsp6ss_axim_cbcr); 286 + if (ret) 287 + goto disable_abhm_cbcr_clk; 288 + 289 + /* Enable the Q6SS XO CBC */ 290 + val = readl(wcss->reg_base + Q6SS_XO_CBCR); 291 + val |= BIT(0); 292 + writel(val, wcss->reg_base + Q6SS_XO_CBCR); 293 + /* Read CLKOFF bit to go low indicating CLK is enabled */ 294 + ret = readl_poll_timeout(wcss->reg_base + Q6SS_XO_CBCR, 295 + val, !(val & BIT(31)), 1, 296 + HALT_CHECK_MAX_LOOPS); 297 + if (ret) { 298 + dev_err(wcss->dev, 299 + "xo cbcr enabling timed out (rc:%d)\n", ret); 300 + return ret; 301 + } 302 + 303 + writel(0, wcss->reg_base + Q6SS_CGC_OVERRIDE); 304 + 305 + /* Enable QDSP6 sleep clock */ 306 + val = readl(wcss->reg_base + Q6SS_SLEEP_CBCR); 307 + val |= BIT(0); 308 + writel(val, wcss->reg_base + Q6SS_SLEEP_CBCR); 309 + 310 + /* Enable the Q6 AXI clock, GCC_WDSP_Q6SS_AXIM_CBCR */ 311 + ret = clk_prepare_enable(wcss->gcc_axim_cbcr); 312 + if (ret) 313 + goto disable_sleep_cbcr_clk; 314 + 315 + /* Assert resets, stop core */ 316 + val = 
readl(wcss->reg_base + Q6SS_RESET_REG); 317 + val |= Q6SS_CORE_ARES | Q6SS_BUS_ARES_ENABLE | Q6SS_STOP_CORE; 318 + writel(val, wcss->reg_base + Q6SS_RESET_REG); 319 + 320 + /* Program the QDSP6SS PWR_CTL register */ 321 + writel(0x01700000, wcss->reg_base + Q6SS_PWR_CTL_REG); 322 + 323 + writel(0x03700000, wcss->reg_base + Q6SS_PWR_CTL_REG); 324 + 325 + writel(0x03300000, wcss->reg_base + Q6SS_PWR_CTL_REG); 326 + 327 + writel(0x033C0000, wcss->reg_base + Q6SS_PWR_CTL_REG); 328 + 329 + /* 330 + * Enable memories by turning on the QDSP6 memory foot/head switch, one 331 + * bank at a time to avoid in-rush current 332 + */ 333 + for (idx = 28; idx >= 0; idx--) { 334 + writel((readl(wcss->reg_base + Q6SS_MEM_PWR_CTL) | 335 + (1 << idx)), wcss->reg_base + Q6SS_MEM_PWR_CTL); 336 + } 337 + 338 + writel(0x031C0000, wcss->reg_base + Q6SS_PWR_CTL_REG); 339 + writel(0x030C0000, wcss->reg_base + Q6SS_PWR_CTL_REG); 340 + 341 + val = readl(wcss->reg_base + Q6SS_RESET_REG); 342 + val &= ~Q6SS_CORE_ARES; 343 + writel(val, wcss->reg_base + Q6SS_RESET_REG); 344 + 345 + /* Enable the Q6 core clock at the GFM, Q6SSTOP_QDSP6SS_GFMUX_CTL */ 346 + val = readl(wcss->reg_base + Q6SS_GFMUX_CTL_REG); 347 + val |= Q6SS_CLK_ENABLE | Q6SS_SWITCH_CLK_SRC; 348 + writel(val, wcss->reg_base + Q6SS_GFMUX_CTL_REG); 349 + 350 + /* Enable sleep clock branch needed for BCR circuit */ 351 + ret = clk_prepare_enable(wcss->lcc_bcr_sleep); 352 + if (ret) 353 + goto disable_core_gfmux_clk; 354 + 355 + return 0; 356 + 357 + disable_core_gfmux_clk: 358 + val = readl(wcss->reg_base + Q6SS_GFMUX_CTL_REG); 359 + val &= ~(Q6SS_CLK_ENABLE | Q6SS_SWITCH_CLK_SRC); 360 + writel(val, wcss->reg_base + Q6SS_GFMUX_CTL_REG); 361 + clk_disable_unprepare(wcss->gcc_axim_cbcr); 362 + disable_sleep_cbcr_clk: 363 + val = readl(wcss->reg_base + Q6SS_SLEEP_CBCR); 364 + val &= ~Q6SS_CLK_ENABLE; 365 + writel(val, wcss->reg_base + Q6SS_SLEEP_CBCR); 366 + val = readl(wcss->reg_base + Q6SS_XO_CBCR); 367 + val &= ~Q6SS_CLK_ENABLE; 368 + 
writel(val, wcss->reg_base + Q6SS_XO_CBCR); 369 + clk_disable_unprepare(wcss->qdsp6ss_axim_cbcr); 370 + disable_abhm_cbcr_clk: 371 + clk_disable_unprepare(wcss->qdsp6ss_abhm_cbcr); 372 + disable_tcm_slave_cbcr_clk: 373 + clk_disable_unprepare(wcss->tcm_slave_cbcr); 374 + disable_ahbs_cbcr_clk: 375 + clk_disable_unprepare(wcss->ahbs_cbcr); 376 + disable_csr_cbcr_clk: 377 + clk_disable_unprepare(wcss->lcc_csr_cbcr); 378 + disable_ahbfabric_cbcr_clk: 379 + clk_disable_unprepare(wcss->ahbfabric_cbcr_clk); 380 + disable_gcc_abhs_cbcr_clk: 381 + clk_disable_unprepare(wcss->gcc_abhs_cbcr); 382 + 383 + return ret; 384 + } 385 + 386 + static inline int q6v5_wcss_qcs404_reset(struct q6v5_wcss *wcss) 387 + { 388 + unsigned long val; 389 + 390 + writel(0x80800000, wcss->reg_base + Q6SS_STRAP_ACC); 391 + 392 + /* Start core execution */ 393 + val = readl(wcss->reg_base + Q6SS_RESET_REG); 394 + val &= ~Q6SS_STOP_CORE; 395 + writel(val, wcss->reg_base + Q6SS_RESET_REG); 396 + 397 + return 0; 398 + } 399 + 400 + static int q6v5_qcs404_wcss_start(struct rproc *rproc) 401 + { 402 + struct q6v5_wcss *wcss = rproc->priv; 403 + int ret; 404 + 405 + ret = clk_prepare_enable(wcss->xo); 406 + if (ret) 407 + return ret; 408 + 409 + ret = regulator_enable(wcss->cx_supply); 410 + if (ret) 411 + goto disable_xo_clk; 412 + 413 + qcom_q6v5_prepare(&wcss->q6v5); 414 + 415 + ret = q6v5_wcss_qcs404_power_on(wcss); 416 + if (ret) { 417 + dev_err(wcss->dev, "wcss clk_enable failed\n"); 418 + goto disable_cx_supply; 419 + } 420 + 421 + writel(rproc->bootaddr >> 4, wcss->reg_base + Q6SS_RST_EVB); 422 + 423 + q6v5_wcss_qcs404_reset(wcss); 424 + 425 + ret = qcom_q6v5_wait_for_start(&wcss->q6v5, 5 * HZ); 426 + if (ret == -ETIMEDOUT) { 427 + dev_err(wcss->dev, "start timed out\n"); 428 + goto disable_cx_supply; 429 + } 430 + 431 + return 0; 432 + 433 + disable_cx_supply: 434 + regulator_disable(wcss->cx_supply); 435 + disable_xo_clk: 436 + clk_disable_unprepare(wcss->xo); 437 + 438 + return ret; 439 + } 
440 + 294 441 static void q6v5_wcss_halt_axi_port(struct q6v5_wcss *wcss, 295 442 struct regmap *halt_map, 296 443 u32 offset) ··· 524 269 525 270 /* Clear halt request (port will remain halted until reset) */ 526 271 regmap_write(halt_map, offset + AXI_HALTREQ_REG, 0); 272 + } 273 + 274 + static int q6v5_qcs404_wcss_shutdown(struct q6v5_wcss *wcss) 275 + { 276 + unsigned long val; 277 + int ret; 278 + 279 + q6v5_wcss_halt_axi_port(wcss, wcss->halt_map, wcss->halt_wcss); 280 + 281 + /* assert clamps to avoid MX current inrush */ 282 + val = readl(wcss->reg_base + Q6SS_PWR_CTL_REG); 283 + val |= (Q6SS_CLAMP_IO | Q6SS_CLAMP_WL | Q6SS_CLAMP_QMC_MEM); 284 + writel(val, wcss->reg_base + Q6SS_PWR_CTL_REG); 285 + 286 + /* Disable memories by turning off memory foot/headswitch */ 287 + writel((readl(wcss->reg_base + Q6SS_MEM_PWR_CTL) & 288 + ~QDSS_Q6_MEMORIES), 289 + wcss->reg_base + Q6SS_MEM_PWR_CTL); 290 + 291 + /* Clear the BHS_ON bit */ 292 + val = readl(wcss->reg_base + Q6SS_PWR_CTL_REG); 293 + val &= ~Q6SS_BHS_ON; 294 + writel(val, wcss->reg_base + Q6SS_PWR_CTL_REG); 295 + 296 + clk_disable_unprepare(wcss->ahbfabric_cbcr_clk); 297 + clk_disable_unprepare(wcss->lcc_csr_cbcr); 298 + clk_disable_unprepare(wcss->tcm_slave_cbcr); 299 + clk_disable_unprepare(wcss->qdsp6ss_abhm_cbcr); 300 + clk_disable_unprepare(wcss->qdsp6ss_axim_cbcr); 301 + 302 + val = readl(wcss->reg_base + Q6SS_SLEEP_CBCR); 303 + val &= ~BIT(0); 304 + writel(val, wcss->reg_base + Q6SS_SLEEP_CBCR); 305 + 306 + val = readl(wcss->reg_base + Q6SS_XO_CBCR); 307 + val &= ~BIT(0); 308 + writel(val, wcss->reg_base + Q6SS_XO_CBCR); 309 + 310 + clk_disable_unprepare(wcss->ahbs_cbcr); 311 + clk_disable_unprepare(wcss->lcc_bcr_sleep); 312 + 313 + val = readl(wcss->reg_base + Q6SS_GFMUX_CTL_REG); 314 + val &= ~(Q6SS_CLK_ENABLE | Q6SS_SWITCH_CLK_SRC); 315 + writel(val, wcss->reg_base + Q6SS_GFMUX_CTL_REG); 316 + 317 + clk_disable_unprepare(wcss->gcc_abhs_cbcr); 318 + 319 + ret = 
reset_control_assert(wcss->wcss_reset); 320 + if (ret) { 321 + dev_err(wcss->dev, "wcss_reset failed\n"); 322 + return ret; 323 + } 324 + usleep_range(200, 300); 325 + 326 + ret = reset_control_deassert(wcss->wcss_reset); 327 + if (ret) { 328 + dev_err(wcss->dev, "wcss_reset failed\n"); 329 + return ret; 330 + } 331 + usleep_range(200, 300); 332 + 333 + clk_disable_unprepare(wcss->gcc_axim_cbcr); 334 + 335 + return 0; 527 336 } 528 337 529 338 static int q6v5_wcss_powerdown(struct q6v5_wcss *wcss) ··· 709 390 int ret; 710 391 711 392 /* WCSS powerdown */ 712 - ret = qcom_q6v5_request_stop(&wcss->q6v5, NULL); 713 - if (ret == -ETIMEDOUT) { 714 - dev_err(wcss->dev, "timed out on wait\n"); 715 - return ret; 393 + if (wcss->requires_force_stop) { 394 + ret = qcom_q6v5_request_stop(&wcss->q6v5, NULL); 395 + if (ret == -ETIMEDOUT) { 396 + dev_err(wcss->dev, "timed out on wait\n"); 397 + return ret; 398 + } 716 399 } 717 400 718 - ret = q6v5_wcss_powerdown(wcss); 719 - if (ret) 720 - return ret; 401 + if (wcss->version == WCSS_QCS404) { 402 + ret = q6v5_qcs404_wcss_shutdown(wcss); 403 + if (ret) 404 + return ret; 405 + } else { 406 + ret = q6v5_wcss_powerdown(wcss); 407 + if (ret) 408 + return ret; 721 409 722 - /* Q6 Power down */ 723 - ret = q6v5_q6_powerdown(wcss); 724 - if (ret) 725 - return ret; 410 + /* Q6 Power down */ 411 + ret = q6v5_q6_powerdown(wcss); 412 + if (ret) 413 + return ret; 414 + } 726 415 727 416 qcom_q6v5_unprepare(&wcss->q6v5); 728 417 729 418 return 0; 730 419 } 731 420 732 - static void *q6v5_wcss_da_to_va(struct rproc *rproc, u64 da, size_t len) 421 + static void *q6v5_wcss_da_to_va(struct rproc *rproc, u64 da, size_t len, bool *is_iomem) 733 422 { 734 423 struct q6v5_wcss *wcss = rproc->priv; 735 424 int offset; ··· 765 438 return ret; 766 439 } 767 440 768 - static const struct rproc_ops q6v5_wcss_ops = { 441 + static const struct rproc_ops q6v5_wcss_ipq8074_ops = { 769 442 .start = q6v5_wcss_start, 770 443 .stop = q6v5_wcss_stop, 771 444 
.da_to_va = q6v5_wcss_da_to_va, ··· 773 446 .get_boot_addr = rproc_elf_get_boot_addr, 774 447 }; 775 448 776 - static int q6v5_wcss_init_reset(struct q6v5_wcss *wcss) 449 + static const struct rproc_ops q6v5_wcss_qcs404_ops = { 450 + .start = q6v5_qcs404_wcss_start, 451 + .stop = q6v5_wcss_stop, 452 + .da_to_va = q6v5_wcss_da_to_va, 453 + .load = q6v5_wcss_load, 454 + .get_boot_addr = rproc_elf_get_boot_addr, 455 + .parse_fw = qcom_register_dump_segments, 456 + }; 457 + 458 + static int q6v5_wcss_init_reset(struct q6v5_wcss *wcss, 459 + const struct wcss_data *desc) 777 460 { 778 461 struct device *dev = wcss->dev; 779 462 780 - wcss->wcss_aon_reset = devm_reset_control_get(dev, "wcss_aon_reset"); 781 - if (IS_ERR(wcss->wcss_aon_reset)) { 782 - dev_err(wcss->dev, "unable to acquire wcss_aon_reset\n"); 783 - return PTR_ERR(wcss->wcss_aon_reset); 463 + if (desc->aon_reset_required) { 464 + wcss->wcss_aon_reset = devm_reset_control_get_exclusive(dev, "wcss_aon_reset"); 465 + if (IS_ERR(wcss->wcss_aon_reset)) { 466 + dev_err(wcss->dev, "fail to acquire wcss_aon_reset\n"); 467 + return PTR_ERR(wcss->wcss_aon_reset); 468 + } 784 469 } 785 470 786 - wcss->wcss_reset = devm_reset_control_get(dev, "wcss_reset"); 471 + wcss->wcss_reset = devm_reset_control_get_exclusive(dev, "wcss_reset"); 787 472 if (IS_ERR(wcss->wcss_reset)) { 788 473 dev_err(wcss->dev, "unable to acquire wcss_reset\n"); 789 474 return PTR_ERR(wcss->wcss_reset); 790 475 } 791 476 792 - wcss->wcss_q6_reset = devm_reset_control_get(dev, "wcss_q6_reset"); 793 - if (IS_ERR(wcss->wcss_q6_reset)) { 794 - dev_err(wcss->dev, "unable to acquire wcss_q6_reset\n"); 795 - return PTR_ERR(wcss->wcss_q6_reset); 477 + if (desc->wcss_q6_reset_required) { 478 + wcss->wcss_q6_reset = devm_reset_control_get_exclusive(dev, "wcss_q6_reset"); 479 + if (IS_ERR(wcss->wcss_q6_reset)) { 480 + dev_err(wcss->dev, "unable to acquire wcss_q6_reset\n"); 481 + return PTR_ERR(wcss->wcss_q6_reset); 482 + } 483 + } 484 + 485 + 
wcss->wcss_q6_bcr_reset = devm_reset_control_get_exclusive(dev, "wcss_q6_bcr_reset"); 486 + if (IS_ERR(wcss->wcss_q6_bcr_reset)) { 487 + dev_err(wcss->dev, "unable to acquire wcss_q6_bcr_reset\n"); 488 + return PTR_ERR(wcss->wcss_q6_bcr_reset); 796 489 } 797 490 798 491 return 0; ··· 821 474 static int q6v5_wcss_init_mmio(struct q6v5_wcss *wcss, 822 475 struct platform_device *pdev) 823 476 { 824 - struct of_phandle_args args; 477 + unsigned int halt_reg[MAX_HALT_REG] = {0}; 478 + struct device_node *syscon; 825 479 struct resource *res; 826 480 int ret; 827 481 828 482 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "qdsp6"); 829 - wcss->reg_base = devm_ioremap_resource(&pdev->dev, res); 830 - if (IS_ERR(wcss->reg_base)) 831 - return PTR_ERR(wcss->reg_base); 483 + wcss->reg_base = devm_ioremap(&pdev->dev, res->start, 484 + resource_size(res)); 485 + if (!wcss->reg_base) 486 + return -ENOMEM; 832 487 833 - res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "rmb"); 834 - wcss->rmb_base = devm_ioremap_resource(&pdev->dev, res); 835 - if (IS_ERR(wcss->rmb_base)) 836 - return PTR_ERR(wcss->rmb_base); 488 + if (wcss->version == WCSS_IPQ8074) { 489 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "rmb"); 490 + wcss->rmb_base = devm_ioremap_resource(&pdev->dev, res); 491 + if (IS_ERR(wcss->rmb_base)) 492 + return PTR_ERR(wcss->rmb_base); 493 + } 837 494 838 - ret = of_parse_phandle_with_fixed_args(pdev->dev.of_node, 839 - "qcom,halt-regs", 3, 0, &args); 495 + syscon = of_parse_phandle(pdev->dev.of_node, 496 + "qcom,halt-regs", 0); 497 + if (!syscon) { 498 + dev_err(&pdev->dev, "failed to parse qcom,halt-regs\n"); 499 + return -EINVAL; 500 + } 501 + 502 + wcss->halt_map = syscon_node_to_regmap(syscon); 503 + of_node_put(syscon); 504 + if (IS_ERR(wcss->halt_map)) 505 + return PTR_ERR(wcss->halt_map); 506 + 507 + ret = of_property_read_variable_u32_array(pdev->dev.of_node, 508 + "qcom,halt-regs", 509 + halt_reg, 0, 510 + MAX_HALT_REG); 840 511 if 
(ret < 0) { 841 512 dev_err(&pdev->dev, "failed to parse qcom,halt-regs\n"); 842 513 return -EINVAL; 843 514 } 844 515 845 - wcss->halt_map = syscon_node_to_regmap(args.np); 846 - of_node_put(args.np); 847 - if (IS_ERR(wcss->halt_map)) 848 - return PTR_ERR(wcss->halt_map); 849 - 850 - wcss->halt_q6 = args.args[0]; 851 - wcss->halt_wcss = args.args[1]; 852 - wcss->halt_nc = args.args[2]; 516 + wcss->halt_q6 = halt_reg[0]; 517 + wcss->halt_wcss = halt_reg[1]; 518 + wcss->halt_nc = halt_reg[2]; 853 519 854 520 return 0; 855 521 } ··· 896 536 return 0; 897 537 } 898 538 539 + static int q6v5_wcss_init_clock(struct q6v5_wcss *wcss) 540 + { 541 + int ret; 542 + 543 + wcss->xo = devm_clk_get(wcss->dev, "xo"); 544 + if (IS_ERR(wcss->xo)) { 545 + ret = PTR_ERR(wcss->xo); 546 + if (ret != -EPROBE_DEFER) 547 + dev_err(wcss->dev, "failed to get xo clock"); 548 + return ret; 549 + } 550 + 551 + wcss->gcc_abhs_cbcr = devm_clk_get(wcss->dev, "gcc_abhs_cbcr"); 552 + if (IS_ERR(wcss->gcc_abhs_cbcr)) { 553 + ret = PTR_ERR(wcss->gcc_abhs_cbcr); 554 + if (ret != -EPROBE_DEFER) 555 + dev_err(wcss->dev, "failed to get gcc abhs clock"); 556 + return ret; 557 + } 558 + 559 + wcss->gcc_axim_cbcr = devm_clk_get(wcss->dev, "gcc_axim_cbcr"); 560 + if (IS_ERR(wcss->gcc_axim_cbcr)) { 561 + ret = PTR_ERR(wcss->gcc_axim_cbcr); 562 + if (ret != -EPROBE_DEFER) 563 + dev_err(wcss->dev, "failed to get gcc axim clock\n"); 564 + return ret; 565 + } 566 + 567 + wcss->ahbfabric_cbcr_clk = devm_clk_get(wcss->dev, 568 + "lcc_ahbfabric_cbc"); 569 + if (IS_ERR(wcss->ahbfabric_cbcr_clk)) { 570 + ret = PTR_ERR(wcss->ahbfabric_cbcr_clk); 571 + if (ret != -EPROBE_DEFER) 572 + dev_err(wcss->dev, "failed to get ahbfabric clock\n"); 573 + return ret; 574 + } 575 + 576 + wcss->lcc_csr_cbcr = devm_clk_get(wcss->dev, "tcsr_lcc_cbc"); 577 + if (IS_ERR(wcss->lcc_csr_cbcr)) { 578 + ret = PTR_ERR(wcss->lcc_csr_cbcr); 579 + if (ret != -EPROBE_DEFER) 580 + dev_err(wcss->dev, "failed to get csr cbcr clk\n"); 581 + return 
ret; 582 + } 583 + 584 + wcss->ahbs_cbcr = devm_clk_get(wcss->dev, 585 + "lcc_abhs_cbc"); 586 + if (IS_ERR(wcss->ahbs_cbcr)) { 587 + ret = PTR_ERR(wcss->ahbs_cbcr); 588 + if (ret != -EPROBE_DEFER) 589 + dev_err(wcss->dev, "failed to get ahbs_cbcr clk\n"); 590 + return ret; 591 + } 592 + 593 + wcss->tcm_slave_cbcr = devm_clk_get(wcss->dev, 594 + "lcc_tcm_slave_cbc"); 595 + if (IS_ERR(wcss->tcm_slave_cbcr)) { 596 + ret = PTR_ERR(wcss->tcm_slave_cbcr); 597 + if (ret != -EPROBE_DEFER) 598 + dev_err(wcss->dev, "failed to get tcm cbcr clk\n"); 599 + return ret; 600 + } 601 + 602 + wcss->qdsp6ss_abhm_cbcr = devm_clk_get(wcss->dev, "lcc_abhm_cbc"); 603 + if (IS_ERR(wcss->qdsp6ss_abhm_cbcr)) { 604 + ret = PTR_ERR(wcss->qdsp6ss_abhm_cbcr); 605 + if (ret != -EPROBE_DEFER) 606 + dev_err(wcss->dev, "failed to get abhm cbcr clk\n"); 607 + return ret; 608 + } 609 + 610 + wcss->qdsp6ss_axim_cbcr = devm_clk_get(wcss->dev, "lcc_axim_cbc"); 611 + if (IS_ERR(wcss->qdsp6ss_axim_cbcr)) { 612 + ret = PTR_ERR(wcss->qdsp6ss_axim_cbcr); 613 + if (ret != -EPROBE_DEFER) 614 + dev_err(wcss->dev, "failed to get axim cbcr clk\n"); 615 + return ret; 616 + } 617 + 618 + wcss->lcc_bcr_sleep = devm_clk_get(wcss->dev, "lcc_bcr_sleep"); 619 + if (IS_ERR(wcss->lcc_bcr_sleep)) { 620 + ret = PTR_ERR(wcss->lcc_bcr_sleep); 621 + if (ret != -EPROBE_DEFER) 622 + dev_err(wcss->dev, "failed to get bcr cbcr clk\n"); 623 + return ret; 624 + } 625 + 626 + return 0; 627 + } 628 + 629 + static int q6v5_wcss_init_regulator(struct q6v5_wcss *wcss) 630 + { 631 + wcss->cx_supply = devm_regulator_get(wcss->dev, "cx"); 632 + if (IS_ERR(wcss->cx_supply)) 633 + return PTR_ERR(wcss->cx_supply); 634 + 635 + regulator_set_load(wcss->cx_supply, 100000); 636 + 637 + return 0; 638 + } 639 + 899 640 static int q6v5_wcss_probe(struct platform_device *pdev) 900 641 { 642 + const struct wcss_data *desc; 901 643 struct q6v5_wcss *wcss; 902 644 struct rproc *rproc; 903 645 int ret; 904 646 905 - rproc = rproc_alloc(&pdev->dev, 
pdev->name, &q6v5_wcss_ops, 906 - "IPQ8074/q6_fw.mdt", sizeof(*wcss)); 647 + desc = device_get_match_data(&pdev->dev); 648 + if (!desc) 649 + return -EINVAL; 650 + 651 + rproc = rproc_alloc(&pdev->dev, pdev->name, desc->ops, 652 + desc->firmware_name, sizeof(*wcss)); 907 653 if (!rproc) { 908 654 dev_err(&pdev->dev, "failed to allocate rproc\n"); 909 655 return -ENOMEM; ··· 1017 551 1018 552 wcss = rproc->priv; 1019 553 wcss->dev = &pdev->dev; 554 + wcss->version = desc->version; 555 + 556 + wcss->requires_force_stop = desc->requires_force_stop; 1020 558 1021 559 ret = q6v5_wcss_init_mmio(wcss, pdev); 1022 560 if (ret) ··· 1030 560 if (ret) 1031 561 goto free_rproc; 1032 562 1033 - ret = q6v5_wcss_init_reset(wcss); 563 + if (wcss->version == WCSS_QCS404) { 564 + ret = q6v5_wcss_init_clock(wcss); 565 + if (ret) 566 + goto free_rproc; 567 + 568 + ret = q6v5_wcss_init_regulator(wcss); 569 + if (ret) 570 + goto free_rproc; 571 + } 572 + 573 + ret = q6v5_wcss_init_reset(wcss, desc); 1034 574 if (ret) 1035 575 goto free_rproc; 1036 576 1037 - ret = qcom_q6v5_init(&wcss->q6v5, pdev, rproc, WCSS_CRASH_REASON, NULL); 577 + ret = qcom_q6v5_init(&wcss->q6v5, pdev, rproc, desc->crash_reason_smem, 578 + NULL); 1038 579 if (ret) 1039 580 goto free_rproc; 1040 581 1041 582 qcom_add_glink_subdev(rproc, &wcss->glink_subdev, "q6wcss"); 1042 583 qcom_add_ssr_subdev(rproc, &wcss->ssr_subdev, "q6wcss"); 584 + 585 + if (desc->ssctl_id) 586 + wcss->sysmon = qcom_add_sysmon_subdev(rproc, 587 + desc->sysmon_name, 588 + desc->ssctl_id); 1043 589 1044 590 ret = rproc_add(rproc); 1045 591 if (ret) ··· 1081 595 return 0; 1082 596 } 1083 597 598 + static const struct wcss_data wcss_ipq8074_res_init = { 599 + .firmware_name = "IPQ8074/q6_fw.mdt", 600 + .crash_reason_smem = WCSS_CRASH_REASON, 601 + .aon_reset_required = true, 602 + .wcss_q6_reset_required = true, 603 + .ops = &q6v5_wcss_ipq8074_ops, 604 + .requires_force_stop = true, 605 + }; 606 + 607 + 
static const struct wcss_data wcss_qcs404_res_init = { 608 + .crash_reason_smem = WCSS_CRASH_REASON, 609 + .firmware_name = "wcnss.mdt", 610 + .version = WCSS_QCS404, 611 + .aon_reset_required = false, 612 + .wcss_q6_reset_required = false, 613 + .ssr_name = "mpss", 614 + .sysmon_name = "wcnss", 615 + .ssctl_id = 0x12, 616 + .ops = &q6v5_wcss_qcs404_ops, 617 + .requires_force_stop = false, 618 + }; 619 + 1084 620 static const struct of_device_id q6v5_wcss_of_match[] = { 1085 - { .compatible = "qcom,ipq8074-wcss-pil" }, 621 + { .compatible = "qcom,ipq8074-wcss-pil", .data = &wcss_ipq8074_res_init }, 622 + { .compatible = "qcom,qcs404-wcss-pil", .data = &wcss_qcs404_res_init }, 1086 623 { }, 1087 624 }; 1088 625 MODULE_DEVICE_TABLE(of, q6v5_wcss_of_match);
+8 -2
drivers/remoteproc/qcom_wcnss.c
··· 320 320 return ret; 321 321 } 322 322 323 - static void *wcnss_da_to_va(struct rproc *rproc, u64 da, size_t len) 323 + static void *wcnss_da_to_va(struct rproc *rproc, u64 da, size_t len, bool *is_iomem) 324 324 { 325 325 struct qcom_wcnss *wcnss = (struct qcom_wcnss *)rproc->priv; 326 326 int offset; ··· 530 530 531 531 static int wcnss_probe(struct platform_device *pdev) 532 532 { 533 + const char *fw_name = WCNSS_FIRMWARE_NAME; 533 534 const struct wcnss_data *data; 534 535 struct qcom_wcnss *wcnss; 535 536 struct resource *res; ··· 548 547 return -ENXIO; 549 548 } 550 549 550 + ret = of_property_read_string(pdev->dev.of_node, "firmware-name", 551 + &fw_name); 552 + if (ret < 0 && ret != -EINVAL) 553 + return ret; 554 + 551 555 rproc = rproc_alloc(&pdev->dev, pdev->name, &wcnss_ops, 552 - WCNSS_FIRMWARE_NAME, sizeof(*wcnss)); 556 + fw_name, sizeof(*wcnss)); 553 557 if (!rproc) { 554 558 dev_err(&pdev->dev, "unable to allocate remoteproc\n"); 555 559 return -ENOMEM;
+17 -4
drivers/remoteproc/remoteproc_cdev.c
··· 32 32 return -EFAULT; 33 33 34 34 if (!strncmp(cmd, "start", len)) { 35 - if (rproc->state == RPROC_RUNNING) 35 + if (rproc->state == RPROC_RUNNING || 36 + rproc->state == RPROC_ATTACHED) 36 37 return -EBUSY; 37 38 38 39 ret = rproc_boot(rproc); 39 40 } else if (!strncmp(cmd, "stop", len)) { 40 - if (rproc->state != RPROC_RUNNING) 41 + if (rproc->state != RPROC_RUNNING && 42 + rproc->state != RPROC_ATTACHED) 41 43 return -EINVAL; 42 44 43 45 rproc_shutdown(rproc); 46 + } else if (!strncmp(cmd, "detach", len)) { 47 + if (rproc->state != RPROC_ATTACHED) 48 + return -EINVAL; 49 + 50 + ret = rproc_detach(rproc); 44 51 } else { 45 52 dev_err(&rproc->dev, "Unrecognized option\n"); 46 53 ret = -EINVAL; ··· 86 79 static int rproc_cdev_release(struct inode *inode, struct file *filp) 87 80 { 88 81 struct rproc *rproc = container_of(inode->i_cdev, struct rproc, cdev); 82 + int ret = 0; 89 83 90 - if (rproc->cdev_put_on_release && rproc->state == RPROC_RUNNING) 84 + if (!rproc->cdev_put_on_release) 85 + return 0; 86 + 87 + if (rproc->state == RPROC_RUNNING) 91 88 rproc_shutdown(rproc); 89 + else if (rproc->state == RPROC_ATTACHED) 90 + ret = rproc_detach(rproc); 92 91 93 - return 0; 92 + return ret; 94 93 } 95 94 96 95 static const struct file_operations rproc_fops = {
+293 -44
drivers/remoteproc/remoteproc_core.c
··· 189 189 * here the output of the DMA API for the carveouts, which should be more 190 190 * correct. 191 191 */ 192 - void *rproc_da_to_va(struct rproc *rproc, u64 da, size_t len) 192 + void *rproc_da_to_va(struct rproc *rproc, u64 da, size_t len, bool *is_iomem) 193 193 { 194 194 struct rproc_mem_entry *carveout; 195 195 void *ptr = NULL; 196 196 197 197 if (rproc->ops->da_to_va) { 198 - ptr = rproc->ops->da_to_va(rproc, da, len); 198 + ptr = rproc->ops->da_to_va(rproc, da, len, is_iomem); 199 199 if (ptr) 200 200 goto out; 201 201 } ··· 216 216 continue; 217 217 218 218 ptr = carveout->va + offset; 219 + 220 + if (is_iomem) 221 + *is_iomem = carveout->is_iomem; 219 222 220 223 break; 221 224 } ··· 485 482 /** 486 483 * rproc_handle_vdev() - handle a vdev fw resource 487 484 * @rproc: the remote processor 488 - * @rsc: the vring resource descriptor 485 + * @ptr: the vring resource descriptor 489 486 * @offset: offset of the resource entry 490 487 * @avail: size of available data (for sanity checking the image) 491 488 * ··· 510 507 * 511 508 * Returns 0 on success, or an appropriate error code otherwise 512 509 */ 513 - static int rproc_handle_vdev(struct rproc *rproc, struct fw_rsc_vdev *rsc, 510 + static int rproc_handle_vdev(struct rproc *rproc, void *ptr, 514 511 int offset, int avail) 515 512 { 513 + struct fw_rsc_vdev *rsc = ptr; 516 514 struct device *dev = &rproc->dev; 517 515 struct rproc_vdev *rvdev; 518 516 int i, ret; ··· 631 627 /** 632 628 * rproc_handle_trace() - handle a shared trace buffer resource 633 629 * @rproc: the remote processor 634 - * @rsc: the trace resource descriptor 630 + * @ptr: the trace resource descriptor 635 631 * @offset: offset of the resource entry 636 632 * @avail: size of available data (for sanity checking the image) 637 633 * ··· 645 641 * 646 642 * Returns 0 on success, or an appropriate error code otherwise 647 643 */ 648 - static int rproc_handle_trace(struct rproc *rproc, struct fw_rsc_trace *rsc, 644 + static int 
rproc_handle_trace(struct rproc *rproc, void *ptr, 649 645 int offset, int avail) 650 646 { 647 + struct fw_rsc_trace *rsc = ptr; 651 648 struct rproc_debug_trace *trace; 652 649 struct device *dev = &rproc->dev; 653 650 char name[15]; ··· 698 693 /** 699 694 * rproc_handle_devmem() - handle devmem resource entry 700 695 * @rproc: remote processor handle 701 - * @rsc: the devmem resource entry 696 + * @ptr: the devmem resource entry 702 697 * @offset: offset of the resource entry 703 698 * @avail: size of available data (for sanity checking the image) 704 699 * ··· 721 716 * and not allow firmwares to request access to physical addresses that 722 717 * are outside those ranges. 723 718 */ 724 - static int rproc_handle_devmem(struct rproc *rproc, struct fw_rsc_devmem *rsc, 719 + static int rproc_handle_devmem(struct rproc *rproc, void *ptr, 725 720 int offset, int avail) 726 721 { 722 + struct fw_rsc_devmem *rsc = ptr; 727 723 struct rproc_mem_entry *mapping; 728 724 struct device *dev = &rproc->dev; 729 725 int ret; ··· 902 896 /** 903 897 * rproc_handle_carveout() - handle phys contig memory allocation requests 904 898 * @rproc: rproc handle 905 - * @rsc: the resource entry 899 + * @ptr: the resource entry 906 900 * @offset: offset of the resource entry 907 901 * @avail: size of available data (for image validation) 908 902 * ··· 919 913 * pressure is important; it may have a substantial impact on performance. 920 914 */ 921 915 static int rproc_handle_carveout(struct rproc *rproc, 922 - struct fw_rsc_carveout *rsc, 923 - int offset, int avail) 916 + void *ptr, int offset, int avail) 924 917 { 918 + struct fw_rsc_carveout *rsc = ptr; 925 919 struct rproc_mem_entry *carveout; 926 920 struct device *dev = &rproc->dev; 927 921 ··· 1103 1097 * enum fw_resource_type. 
1104 1098 */ 1105 1099 static rproc_handle_resource_t rproc_loading_handlers[RSC_LAST] = { 1106 - [RSC_CARVEOUT] = (rproc_handle_resource_t)rproc_handle_carveout, 1107 - [RSC_DEVMEM] = (rproc_handle_resource_t)rproc_handle_devmem, 1108 - [RSC_TRACE] = (rproc_handle_resource_t)rproc_handle_trace, 1109 - [RSC_VDEV] = (rproc_handle_resource_t)rproc_handle_vdev, 1100 + [RSC_CARVEOUT] = rproc_handle_carveout, 1101 + [RSC_DEVMEM] = rproc_handle_devmem, 1102 + [RSC_TRACE] = rproc_handle_trace, 1103 + [RSC_VDEV] = rproc_handle_vdev, 1110 1104 }; 1111 1105 1112 1106 /* handle firmware resource entries before booting the remote processor */ ··· 1422 1416 return ret; 1423 1417 } 1424 1418 1425 - static int rproc_attach(struct rproc *rproc) 1419 + static int __rproc_attach(struct rproc *rproc) 1426 1420 { 1427 1421 struct device *dev = &rproc->dev; 1428 1422 int ret; ··· 1450 1444 goto stop_rproc; 1451 1445 } 1452 1446 1453 - rproc->state = RPROC_RUNNING; 1447 + rproc->state = RPROC_ATTACHED; 1454 1448 1455 1449 dev_info(dev, "remote processor %s is now attached\n", rproc->name); 1456 1450 ··· 1543 1537 return ret; 1544 1538 } 1545 1539 1540 + static int rproc_set_rsc_table(struct rproc *rproc) 1541 + { 1542 + struct resource_table *table_ptr; 1543 + struct device *dev = &rproc->dev; 1544 + size_t table_sz; 1545 + int ret; 1546 + 1547 + table_ptr = rproc_get_loaded_rsc_table(rproc, &table_sz); 1548 + if (!table_ptr) { 1549 + /* Not having a resource table is acceptable */ 1550 + return 0; 1551 + } 1552 + 1553 + if (IS_ERR(table_ptr)) { 1554 + ret = PTR_ERR(table_ptr); 1555 + dev_err(dev, "can't load resource table: %d\n", ret); 1556 + return ret; 1557 + } 1558 + 1559 + /* 1560 + * If it is possible to detach the remote processor, keep an untouched 1561 + * copy of the resource table. 
That way we can start fresh again when 1562 + * the remote processor is re-attached, that is: 1563 + * 1564 + * DETACHED -> ATTACHED -> DETACHED -> ATTACHED 1565 + * 1566 + * Free'd in rproc_reset_rsc_table_on_detach() and 1567 + * rproc_reset_rsc_table_on_stop(). 1568 + */ 1569 + if (rproc->ops->detach) { 1570 + rproc->clean_table = kmemdup(table_ptr, table_sz, GFP_KERNEL); 1571 + if (!rproc->clean_table) 1572 + return -ENOMEM; 1573 + } else { 1574 + rproc->clean_table = NULL; 1575 + } 1576 + 1577 + rproc->cached_table = NULL; 1578 + rproc->table_ptr = table_ptr; 1579 + rproc->table_sz = table_sz; 1580 + 1581 + return 0; 1582 + } 1583 + 1584 + static int rproc_reset_rsc_table_on_detach(struct rproc *rproc) 1585 + { 1586 + struct resource_table *table_ptr; 1587 + 1588 + /* A resource table was never retrieved, nothing to do here */ 1589 + if (!rproc->table_ptr) 1590 + return 0; 1591 + 1592 + /* 1593 + * If we made it to this point a clean_table _must_ have been 1594 + * allocated in rproc_set_rsc_table(). If one isn't present 1595 + * something went really wrong and we must complain. 1596 + */ 1597 + if (WARN_ON(!rproc->clean_table)) 1598 + return -EINVAL; 1599 + 1600 + /* Remember where the external entity installed the resource table */ 1601 + table_ptr = rproc->table_ptr; 1602 + 1603 + /* 1604 + * If we made it here the remote processor was started by another 1605 + * entity and a cache table doesn't exist. As such make a copy of 1606 + * the resource table currently used by the remote processor and 1607 + * use that for the rest of the shutdown process. The memory 1608 + * allocated here is free'd in rproc_detach(). 1609 + */ 1610 + rproc->cached_table = kmemdup(rproc->table_ptr, 1611 + rproc->table_sz, GFP_KERNEL); 1612 + if (!rproc->cached_table) 1613 + return -ENOMEM; 1614 + 1615 + /* 1616 + * Use a copy of the resource table for the remainder of the 1617 + * shutdown process. 
1618 + */ 1619 + rproc->table_ptr = rproc->cached_table; 1620 + 1621 + /* 1622 + * Reset the memory area where the firmware loaded the resource table 1623 + * to its original value. That way when we re-attach the remote 1624 + * processor the resource table is clean and ready to be used again. 1625 + */ 1626 + memcpy(table_ptr, rproc->clean_table, rproc->table_sz); 1627 + 1628 + /* 1629 + * The clean resource table is no longer needed. Allocated in 1630 + * rproc_set_rsc_table(). 1631 + */ 1632 + kfree(rproc->clean_table); 1633 + 1634 + return 0; 1635 + } 1636 + 1637 + static int rproc_reset_rsc_table_on_stop(struct rproc *rproc) 1638 + { 1639 + /* A resource table was never retrieved, nothing to do here */ 1640 + if (!rproc->table_ptr) 1641 + return 0; 1642 + 1643 + /* 1644 + * If a cache table exists the remote processor was started by 1645 + * the remoteproc core. That cache table should be used for 1646 + * the rest of the shutdown process. 1647 + */ 1648 + if (rproc->cached_table) 1649 + goto out; 1650 + 1651 + /* 1652 + * If we made it here the remote processor was started by another 1653 + * entity and a cache table doesn't exist. As such make a copy of 1654 + * the resource table currently used by the remote processor and 1655 + * use that for the rest of the shutdown process. The memory 1656 + * allocated here is free'd in rproc_shutdown(). 1657 + */ 1658 + rproc->cached_table = kmemdup(rproc->table_ptr, 1659 + rproc->table_sz, GFP_KERNEL); 1660 + if (!rproc->cached_table) 1661 + return -ENOMEM; 1662 + 1663 + /* 1664 + * Since the remote processor is being switched off the clean table 1665 + * won't be needed. Allocated in rproc_set_rsc_table(). 1666 + */ 1667 + kfree(rproc->clean_table); 1668 + 1669 + out: 1670 + /* 1671 + * Use a copy of the resource table for the remainder of the 1672 + * shutdown process. 
1673 + */ 1674 + rproc->table_ptr = rproc->cached_table; 1675 + return 0; 1676 + } 1677 + 1546 1678 /* 1547 1679 * Attach to remote processor - similar to rproc_fw_boot() but without 1548 1680 * the steps that deal with the firmware image. 1549 1681 */ 1550 - static int rproc_actuate(struct rproc *rproc) 1682 + static int rproc_attach(struct rproc *rproc) 1551 1683 { 1552 1684 struct device *dev = &rproc->dev; 1553 1685 int ret; ··· 1698 1554 if (ret) { 1699 1555 dev_err(dev, "can't enable iommu: %d\n", ret); 1700 1556 return ret; 1557 + } 1558 + 1559 + /* Do anything that is needed to boot the remote processor */ 1560 + ret = rproc_prepare_device(rproc); 1561 + if (ret) { 1562 + dev_err(dev, "can't prepare rproc %s: %d\n", rproc->name, ret); 1563 + goto disable_iommu; 1564 + } 1565 + 1566 + ret = rproc_set_rsc_table(rproc); 1567 + if (ret) { 1568 + dev_err(dev, "can't load resource table: %d\n", ret); 1569 + goto unprepare_device; 1701 1570 } 1702 1571 1703 1572 /* reset max_notifyid */ ··· 1727 1570 ret = rproc_handle_resources(rproc, rproc_loading_handlers); 1728 1571 if (ret) { 1729 1572 dev_err(dev, "Failed to process resources: %d\n", ret); 1730 - goto disable_iommu; 1573 + goto unprepare_device; 1731 1574 } 1732 1575 1733 1576 /* Allocate carveout resources associated to rproc */ ··· 1738 1581 goto clean_up_resources; 1739 1582 } 1740 1583 1741 - ret = rproc_attach(rproc); 1584 + ret = __rproc_attach(rproc); 1742 1585 if (ret) 1743 1586 goto clean_up_resources; 1744 1587 ··· 1746 1589 1747 1590 clean_up_resources: 1748 1591 rproc_resource_cleanup(rproc); 1592 + unprepare_device: 1593 + /* release HW resources if needed */ 1594 + rproc_unprepare_device(rproc); 1749 1595 disable_iommu: 1750 1596 rproc_disable_iommu(rproc); 1751 1597 return ret; ··· 1802 1642 struct device *dev = &rproc->dev; 1803 1643 int ret; 1804 1644 1645 + /* No need to continue if a stop() operation has not been provided */ 1646 + if (!rproc->ops->stop) 1647 + return -EINVAL; 1648 + 1805 
1649 /* Stop any subdevices for the remote processor */ 1806 1650 rproc_stop_subdevices(rproc, crashed); 1807 1651 1808 1652 /* the installed resource table is no longer accessible */ 1809 - rproc->table_ptr = rproc->cached_table; 1653 + ret = rproc_reset_rsc_table_on_stop(rproc); 1654 + if (ret) { 1655 + dev_err(dev, "can't reset resource table: %d\n", ret); 1656 + return ret; 1657 + } 1658 + 1810 1659 1811 1660 /* power off the remote processor */ 1812 1661 ret = rproc->ops->stop(rproc); ··· 1828 1659 1829 1660 rproc->state = RPROC_OFFLINE; 1830 1661 1831 - /* 1832 - * The remote processor has been stopped and is now offline, which means 1833 - * that the next time it is brought back online the remoteproc core will 1834 - * be responsible to load its firmware. As such it is no longer 1835 - * autonomous. 1836 - */ 1837 - rproc->autonomous = false; 1838 - 1839 1662 dev_info(dev, "stopped remote processor %s\n", rproc->name); 1840 1663 1841 1664 return 0; 1842 1665 } 1843 1666 1667 + /* 1668 + * __rproc_detach(): Does the opposite of __rproc_attach() 1669 + */ 1670 + static int __rproc_detach(struct rproc *rproc) 1671 + { 1672 + struct device *dev = &rproc->dev; 1673 + int ret; 1674 + 1675 + /* No need to continue if a detach() operation has not been provided */ 1676 + if (!rproc->ops->detach) 1677 + return -EINVAL; 1678 + 1679 + /* Stop any subdevices for the remote processor */ 1680 + rproc_stop_subdevices(rproc, false); 1681 + 1682 + /* the installed resource table is no longer accessible */ 1683 + ret = rproc_reset_rsc_table_on_detach(rproc); 1684 + if (ret) { 1685 + dev_err(dev, "can't reset resource table: %d\n", ret); 1686 + return ret; 1687 + } 1688 + 1689 + /* Tell the remote processor the core isn't available anymore */ 1690 + ret = rproc->ops->detach(rproc); 1691 + if (ret) { 1692 + dev_err(dev, "can't detach from rproc: %d\n", ret); 1693 + return ret; 1694 + } 1695 + 1696 + rproc_unprepare_subdevices(rproc); 1697 + 1698 + rproc->state = RPROC_DETACHED; 
1699 + 1700 + dev_info(dev, "detached remote processor %s\n", rproc->name); 1701 + 1702 + return 0; 1703 + } 1844 1704 1845 1705 /** 1846 1706 * rproc_trigger_recovery() - recover a remoteproc ··· 2000 1802 if (rproc->state == RPROC_DETACHED) { 2001 1803 dev_info(dev, "attaching to %s\n", rproc->name); 2002 1804 2003 - ret = rproc_actuate(rproc); 1805 + ret = rproc_attach(rproc); 2004 1806 } else { 2005 1807 dev_info(dev, "powering up %s\n", rproc->name); 2006 1808 ··· 2081 1883 mutex_unlock(&rproc->lock); 2082 1884 } 2083 1885 EXPORT_SYMBOL(rproc_shutdown); 1886 + 1887 + /** 1888 + * rproc_detach() - Detach the remote processor from the 1889 + * remoteproc core 1890 + * 1891 + * @rproc: the remote processor 1892 + * 1893 + * Detach a remote processor (previously attached to with rproc_attach()). 1894 + * 1895 + * In case @rproc is still being used by an additional user(s), then 1896 + * this function will just decrement the power refcount and exit, 1897 + * without disconnecting the device. 1898 + * 1899 + * Function rproc_detach() calls __rproc_detach() in order to let a remote 1900 + * processor know that services provided by the application processor are 1901 + * no longer available. From there it should be possible to remove the 1902 + * platform driver and even power cycle the application processor (if the HW 1903 + * supports it) without needing to switch off the remote processor. 
1904 + */ 1905 + int rproc_detach(struct rproc *rproc) 1906 + { 1907 + struct device *dev = &rproc->dev; 1908 + int ret; 1909 + 1910 + ret = mutex_lock_interruptible(&rproc->lock); 1911 + if (ret) { 1912 + dev_err(dev, "can't lock rproc %s: %d\n", rproc->name, ret); 1913 + return ret; 1914 + } 1915 + 1916 + /* if the remote proc is still needed, bail out */ 1917 + if (!atomic_dec_and_test(&rproc->power)) { 1918 + ret = 0; 1919 + goto out; 1920 + } 1921 + 1922 + ret = __rproc_detach(rproc); 1923 + if (ret) { 1924 + atomic_inc(&rproc->power); 1925 + goto out; 1926 + } 1927 + 1928 + /* clean up all acquired resources */ 1929 + rproc_resource_cleanup(rproc); 1930 + 1931 + /* release HW resources if needed */ 1932 + rproc_unprepare_device(rproc); 1933 + 1934 + rproc_disable_iommu(rproc); 1935 + 1936 + /* Free the copy of the resource table */ 1937 + kfree(rproc->cached_table); 1938 + rproc->cached_table = NULL; 1939 + rproc->table_ptr = NULL; 1940 + out: 1941 + mutex_unlock(&rproc->lock); 1942 + return ret; 1943 + } 1944 + EXPORT_SYMBOL(rproc_detach); 2084 1945 2085 1946 /** 2086 1947 * rproc_get_by_phandle() - find a remote processor by phandle ··· 2333 2076 ret = rproc_char_device_add(rproc); 2334 2077 if (ret < 0) 2335 2078 return ret; 2336 - 2337 - /* 2338 - * Remind ourselves the remote processor has been attached to rather 2339 - * than booted by the remoteproc core. This is important because the 2340 - * RPROC_DETACHED state will be lost as soon as the remote processor 2341 - * has been attached to. Used in firmware_show() and reset in 2342 - * rproc_stop(). 
2343 - */ 2344 - if (rproc->state == RPROC_DETACHED) 2345 - rproc->autonomous = true; 2346 2079 2347 2080 /* if rproc is marked always-on, request it to boot */ 2348 2081 if (rproc->auto_boot) { ··· 2594 2347 if (!rproc) 2595 2348 return -EINVAL; 2596 2349 2597 - /* if rproc is marked always-on, rproc_add() booted it */ 2598 2350 /* TODO: make sure this works with rproc->power > 1 */ 2599 - if (rproc->auto_boot) 2600 - rproc_shutdown(rproc); 2351 + rproc_shutdown(rproc); 2601 2352 2602 2353 mutex_lock(&rproc->lock); 2603 2354 rproc->state = RPROC_DELETED; ··· 2737 2492 2738 2493 rcu_read_lock(); 2739 2494 list_for_each_entry_rcu(rproc, &rproc_list, node) { 2740 - if (!rproc->ops->panic || rproc->state != RPROC_RUNNING) 2495 + if (!rproc->ops->panic) 2496 + continue; 2497 + 2498 + if (rproc->state != RPROC_RUNNING && 2499 + rproc->state != RPROC_ATTACHED) 2741 2500 continue; 2742 2501 2743 2502 d = rproc->ops->panic(rproc);
+6 -2
drivers/remoteproc/remoteproc_coredump.c
··· 153 153 size_t offset, size_t size) 154 154 { 155 155 void *ptr; 156 + bool is_iomem; 156 157 157 158 if (segment->dump) { 158 159 segment->dump(rproc, segment, dest, offset, size); 159 160 } else { 160 - ptr = rproc_da_to_va(rproc, segment->da + offset, size); 161 + ptr = rproc_da_to_va(rproc, segment->da + offset, size, &is_iomem); 161 162 if (!ptr) { 162 163 dev_err(&rproc->dev, 163 164 "invalid copy request for segment %pad with offset %zu and size %zu)\n", 164 165 &segment->da, offset, size); 165 166 memset(dest, 0xff, size); 166 167 } else { 167 - memcpy(dest, ptr, size); 168 + if (is_iomem) 169 + memcpy_fromio(dest, ptr, size); 170 + else 171 + memcpy(dest, ptr, size); 168 172 } 169 173 } 170 174 }
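The coredump change uses the new is_iomem out-parameter of rproc_da_to_va() to pick memcpy_fromio() for iomem carveouts, and keeps the existing behaviour of poisoning the output with 0xff when a device address cannot be translated. A userspace sketch of that copy path (illustrative names; plain memcpy stands in for both accessors, since the _fromio variant is kernel-only):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

static char carveout[16] = "segmentdata"; /* stands in for a carveout */

/* Model of rproc_da_to_va() with the new is_iomem out-parameter. */
static void *model_da_to_va(unsigned long da, size_t len, bool *is_iomem)
{
	if (da + len > sizeof(carveout))
		return NULL;
	*is_iomem = false; /* a real driver reports iomem carveouts here */
	return &carveout[da];
}

/* Mirrors rproc_copy_segment(): poison the dump on a bad translation
 * rather than failing the whole coredump. */
static void model_copy_segment(unsigned long da, void *dest,
			       size_t offset, size_t size)
{
	bool is_iomem;
	void *ptr = model_da_to_va(da + offset, size, &is_iomem);

	if (!ptr) {
		/* invalid copy request: fill with 0xff instead of aborting */
		memset(dest, 0xff, size);
		return;
	}
	/* kernel: memcpy_fromio(dest, ptr, size) when is_iomem is set */
	memcpy(dest, ptr, size);
}
```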
+1 -1
drivers/remoteproc/remoteproc_debugfs.c
··· 132 132 char buf[100]; 133 133 int len; 134 134 135 - va = rproc_da_to_va(data->rproc, trace->da, trace->len); 135 + va = rproc_da_to_va(data->rproc, trace->da, trace->len, NULL); 136 136 137 137 if (!va) { 138 138 len = scnprintf(buf, sizeof(buf), "Trace %s not available\n",
+15 -6
drivers/remoteproc/remoteproc_elf_loader.c
··· 175 175 u64 offset = elf_phdr_get_p_offset(class, phdr); 176 176 u32 type = elf_phdr_get_p_type(class, phdr); 177 177 void *ptr; 178 + bool is_iomem; 178 179 179 180 if (type != PT_LOAD) 180 181 continue; ··· 205 204 } 206 205 207 206 /* grab the kernel address for this device address */ 208 - ptr = rproc_da_to_va(rproc, da, memsz); 207 + ptr = rproc_da_to_va(rproc, da, memsz, &is_iomem); 209 208 if (!ptr) { 210 209 dev_err(dev, "bad phdr da 0x%llx mem 0x%llx\n", da, 211 210 memsz); ··· 214 213 } 215 214 216 215 /* put the segment where the remote processor expects it */ 217 - if (filesz) 218 - memcpy(ptr, elf_data + offset, filesz); 216 + if (filesz) { 217 + if (is_iomem) 218 + memcpy_fromio(ptr, (void __iomem *)(elf_data + offset), filesz); 219 + else 220 + memcpy(ptr, elf_data + offset, filesz); 221 + } 219 222 220 223 /* 221 224 * Zero out remaining memory for this segment. ··· 228 223 * did this for us. albeit harmless, we may consider removing 229 224 * this. 230 225 */ 231 - if (memsz > filesz) 232 - memset(ptr + filesz, 0, memsz - filesz); 226 + if (memsz > filesz) { 227 + if (is_iomem) 228 + memset_io((void __iomem *)(ptr + filesz), 0, memsz - filesz); 229 + else 230 + memset(ptr + filesz, 0, memsz - filesz); 231 + } 233 232 } 234 233 235 234 return ret; ··· 386 377 return NULL; 387 378 } 388 379 389 - return rproc_da_to_va(rproc, sh_addr, sh_size); 380 + return rproc_da_to_va(rproc, sh_addr, sh_size, NULL); 390 381 } 391 382 EXPORT_SYMBOL(rproc_elf_find_loaded_rsc_table);
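The ELF loader now branches on the same is_iomem flag when placing each PT_LOAD segment: copy filesz bytes, then zero the bss tail with the matching accessor. (As committed, the iomem branch copies into device memory with memcpy_fromio(); memcpy_toio() is arguably the matching accessor for that direction, and upstream later reworked this path.) A userspace sketch of the dispatch, with a made-up two-region layout and plain memcpy/memset standing in for the io variants:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

static char sram[32]; /* stands in for a system-ram carveout */
static char mmio[32]; /* stands in for an iomem carveout at da 0x100 */

/* Model of rproc_da_to_va() covering both kinds of carveout. */
static void *model_da_to_va(unsigned long da, size_t len, bool *is_iomem)
{
	if (da + len <= sizeof(sram)) {
		if (is_iomem)
			*is_iomem = false;
		return &sram[da];
	}
	if (da >= 0x100 && da - 0x100 + len <= sizeof(mmio)) {
		if (is_iomem)
			*is_iomem = true;
		return &mmio[da - 0x100];
	}
	return NULL;
}

/* Copy filesz bytes and zero the remainder, as the segment loader does. */
static int model_load_segment(unsigned long da, const char *data,
			      size_t filesz, size_t memsz)
{
	bool is_iomem;
	void *ptr = model_da_to_va(da, memsz, &is_iomem);

	if (!ptr)
		return -1;
	/* kernel: io accessor vs memcpy() depending on is_iomem */
	memcpy(ptr, data, filesz);
	/* kernel: memset_io() vs memset() depending on is_iomem */
	memset((char *)ptr + filesz, 0, memsz - filesz);
	return 0;
}
```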
+11 -1
drivers/remoteproc/remoteproc_internal.h
··· 84 84 void rproc_free_vring(struct rproc_vring *rvring); 85 85 int rproc_alloc_vring(struct rproc_vdev *rvdev, int i); 86 86 87 - void *rproc_da_to_va(struct rproc *rproc, u64 da, size_t len); 87 + void *rproc_da_to_va(struct rproc *rproc, u64 da, size_t len, bool *is_iomem); 88 88 phys_addr_t rproc_va_to_pa(void *cpu_addr); 89 89 int rproc_trigger_recovery(struct rproc *rproc); 90 90 ··· 173 173 { 174 174 if (rproc->ops->find_loaded_rsc_table) 175 175 return rproc->ops->find_loaded_rsc_table(rproc, fw); 176 + 177 + return NULL; 178 + } 179 + 180 + static inline 181 + struct resource_table *rproc_get_loaded_rsc_table(struct rproc *rproc, 182 + size_t *size) 183 + { 184 + if (rproc->ops->get_loaded_rsc_table) 185 + return rproc->ops->get_loaded_rsc_table(rproc, size); 176 186 177 187 return NULL; 178 188 }
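The new rproc_get_loaded_rsc_table() inline follows the same optional-op pattern as find_loaded_rsc_table(): if the platform driver does not provide the hook, the core treats it as "no resource table" rather than an error. A userspace sketch of that pattern (struct and function names are simplified for the example):

```c
#include <assert.h>
#include <stddef.h>

struct model_table { int dummy; };

/* Model of the optional get_loaded_rsc_table hook in rproc_ops. */
struct model_ops {
	struct model_table *(*get_loaded_rsc_table)(void *priv, size_t *size);
};

static struct model_table the_table;

/* Stand-in for a driver-side implementation such as the stm32 one,
 * which reports a fixed 1 kB window reserved by the firmware. */
static struct model_table *driver_get_table(void *priv, size_t *size)
{
	(void)priv;
	*size = 1024;
	return &the_table;
}

/* Mirrors the inline wrapper: a missing hook means no table, not -EINVAL. */
static struct model_table *
model_get_loaded_rsc_table(const struct model_ops *ops, void *priv, size_t *size)
{
	if (ops && ops->get_loaded_rsc_table)
		return ops->get_loaded_rsc_table(priv, size);
	return NULL;
}
```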
+13 -8
drivers/remoteproc/remoteproc_sysfs.c
··· 15 15 { 16 16 struct rproc *rproc = to_rproc(dev); 17 17 18 - return sprintf(buf, "%s", rproc->recovery_disabled ? "disabled\n" : "enabled\n"); 18 + return sysfs_emit(buf, "%s", rproc->recovery_disabled ? "disabled\n" : "enabled\n"); 19 19 } 20 20 21 21 /* ··· 82 82 { 83 83 struct rproc *rproc = to_rproc(dev); 84 84 85 - return sprintf(buf, "%s\n", rproc_coredump_str[rproc->dump_conf]); 85 + return sysfs_emit(buf, "%s\n", rproc_coredump_str[rproc->dump_conf]); 86 86 } 87 87 88 88 /* ··· 138 138 * If the remote processor has been started by an external 139 139 * entity we have no idea of what image it is running. As such 140 140 * simply display a generic string rather then rproc->firmware. 141 - * 142 - * Here we rely on the autonomous flag because a remote processor 143 - * may have been attached to and currently in a running state. 144 141 */ 145 - if (rproc->autonomous) 142 + if (rproc->state == RPROC_ATTACHED) 146 143 firmware = "unknown"; 147 144 148 145 return sprintf(buf, "%s\n", firmware); ··· 169 172 [RPROC_RUNNING] = "running", 170 173 [RPROC_CRASHED] = "crashed", 171 174 [RPROC_DELETED] = "deleted", 175 + [RPROC_ATTACHED] = "attached", 172 176 [RPROC_DETACHED] = "detached", 173 177 [RPROC_LAST] = "invalid", 174 178 }; ··· 194 196 int ret = 0; 195 197 196 198 if (sysfs_streq(buf, "start")) { 197 - if (rproc->state == RPROC_RUNNING) 199 + if (rproc->state == RPROC_RUNNING || 200 + rproc->state == RPROC_ATTACHED) 198 201 return -EBUSY; 199 202 200 203 ret = rproc_boot(rproc); 201 204 if (ret) 202 205 dev_err(&rproc->dev, "Boot failed: %d\n", ret); 203 206 } else if (sysfs_streq(buf, "stop")) { 204 - if (rproc->state != RPROC_RUNNING) 207 + if (rproc->state != RPROC_RUNNING && 208 + rproc->state != RPROC_ATTACHED) 205 209 return -EINVAL; 206 210 207 211 rproc_shutdown(rproc); 212 + } else if (sysfs_streq(buf, "detach")) { 213 + if (rproc->state != RPROC_ATTACHED) 214 + return -EINVAL; 215 + 216 + ret = rproc_detach(rproc); 208 217 } else { 209 218 
dev_err(&rproc->dev, "Unrecognised option: %s\n", buf); 210 219 ret = -EINVAL;
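The sysfs change accepts a third command, "detach", and tightens the start/stop checks to account for RPROC_ATTACHED (so `echo detach > /sys/class/remoteproc/remoteprocN/state` works only on an attached processor). A userspace sketch of the resulting dispatch logic, with a minimal stand-in for sysfs_streq() that tolerates the trailing newline `echo` writes (errno values are written out literally for the example):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

enum state { OFFLINE, RUNNING, ATTACHED, DETACHED };

/* Minimal stand-in for the kernel's sysfs_streq(): equal, modulo one
 * trailing newline on the sysfs buffer. */
static int streq(const char *buf, const char *cmd)
{
	size_t n = strlen(cmd);

	return strncmp(buf, cmd, n) == 0 &&
	       (buf[n] == '\0' || (buf[n] == '\n' && buf[n + 1] == '\0'));
}

/* Mirrors state_store(): 0 on an accepted command, negative otherwise. */
static int model_state_store(enum state st, const char *buf)
{
	if (streq(buf, "start"))
		return (st == RUNNING || st == ATTACHED) ? -16 /* -EBUSY */ : 0;
	if (streq(buf, "stop"))
		return (st != RUNNING && st != ATTACHED) ? -22 /* -EINVAL */ : 0;
	if (streq(buf, "detach"))
		return (st != ATTACHED) ? -22 /* -EINVAL */ : 0;
	return -22; /* unrecognised option */
}
```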
+1 -1
drivers/remoteproc/st_slim_rproc.c
··· 174 174 return 0; 175 175 } 176 176 177 - static void *slim_rproc_da_to_va(struct rproc *rproc, u64 da, size_t len) 177 + static void *slim_rproc_da_to_va(struct rproc *rproc, u64 da, size_t len, bool *is_iomem) 178 178 { 179 179 struct st_slim_rproc *slim_rproc = rproc->priv; 180 180 void *va = NULL;
+110 -95
drivers/remoteproc/stm32_rproc.c
··· 28 28 #define RELEASE_BOOT 1 29 29 30 30 #define MBOX_NB_VQ 2 31 - #define MBOX_NB_MBX 3 31 + #define MBOX_NB_MBX 4 32 32 33 33 #define STM32_SMC_RCC 0x82001000 34 34 #define STM32_SMC_REG_WRITE 0x1 ··· 38 38 #define STM32_MBX_VQ1 "vq1" 39 39 #define STM32_MBX_VQ1_ID 1 40 40 #define STM32_MBX_SHUTDOWN "shutdown" 41 + #define STM32_MBX_DETACH "detach" 41 42 42 43 #define RSC_TBL_SIZE 1024 43 44 ··· 208 207 return -EINVAL; 209 208 } 210 209 211 - static int stm32_rproc_elf_load_rsc_table(struct rproc *rproc, 212 - const struct firmware *fw) 213 - { 214 - if (rproc_elf_load_rsc_table(rproc, fw)) 215 - dev_warn(&rproc->dev, "no resource table found for this firmware\n"); 216 - 217 - return 0; 218 - } 219 - 220 - static int stm32_rproc_parse_memory_regions(struct rproc *rproc) 210 + static int stm32_rproc_prepare(struct rproc *rproc) 221 211 { 222 212 struct device *dev = rproc->dev.parent; 223 213 struct device_node *np = dev->of_node; ··· 266 274 267 275 static int stm32_rproc_parse_fw(struct rproc *rproc, const struct firmware *fw) 268 276 { 269 - int ret = stm32_rproc_parse_memory_regions(rproc); 277 + if (rproc_elf_load_rsc_table(rproc, fw)) 278 + dev_warn(&rproc->dev, "no resource table found for this firmware\n"); 270 279 271 - if (ret) 272 - return ret; 273 - 274 - return stm32_rproc_elf_load_rsc_table(rproc, fw); 280 + return 0; 275 281 } 276 282 277 283 static irqreturn_t stm32_rproc_wdg(int irq, void *data) ··· 336 346 .tx_block = true, 337 347 .tx_done = NULL, 338 348 .tx_tout = 500, /* 500 ms time out */ 349 + }, 350 + }, 351 + { 352 + .name = STM32_MBX_DETACH, 353 + .vq_id = -1, 354 + .client = { 355 + .tx_block = true, 356 + .tx_done = NULL, 357 + .tx_tout = 200, /* 200 ms time out to detach should be fair enough */ 339 358 }, 340 359 } 341 360 }; ··· 471 472 return stm32_rproc_set_hold_boot(rproc, true); 472 473 } 473 474 475 + static int stm32_rproc_detach(struct rproc *rproc) 476 + { 477 + struct stm32_rproc *ddata = rproc->priv; 478 + int err, 
dummy_data, idx; 479 + 480 + /* Inform the remote processor of the detach */ 481 + idx = stm32_rproc_mbox_idx(rproc, STM32_MBX_DETACH); 482 + if (idx >= 0 && ddata->mb[idx].chan) { 483 + /* A dummy data is sent to allow to block on transmit */ 484 + err = mbox_send_message(ddata->mb[idx].chan, 485 + &dummy_data); 486 + if (err < 0) 487 + dev_warn(&rproc->dev, "warning: remote FW detach without ack\n"); 488 + } 489 + 490 + /* Allow remote processor to auto-reboot */ 491 + return stm32_rproc_set_hold_boot(rproc, false); 492 + } 493 + 474 494 static int stm32_rproc_stop(struct rproc *rproc) 475 495 { 476 496 struct stm32_rproc *ddata = rproc->priv; ··· 564 546 } 565 547 } 566 548 549 + static int stm32_rproc_da_to_pa(struct rproc *rproc, 550 + u64 da, phys_addr_t *pa) 551 + { 552 + struct stm32_rproc *ddata = rproc->priv; 553 + struct device *dev = rproc->dev.parent; 554 + struct stm32_rproc_mem *p_mem; 555 + unsigned int i; 556 + 557 + for (i = 0; i < ddata->nb_rmems; i++) { 558 + p_mem = &ddata->rmems[i]; 559 + 560 + if (da < p_mem->dev_addr || 561 + da >= p_mem->dev_addr + p_mem->size) 562 + continue; 563 + 564 + *pa = da - p_mem->dev_addr + p_mem->bus_addr; 565 + dev_dbg(dev, "da %llx to pa %#x\n", da, *pa); 566 + 567 + return 0; 568 + } 569 + 570 + dev_err(dev, "can't translate da %llx\n", da); 571 + 572 + return -EINVAL; 573 + } 574 + 575 + static struct resource_table * 576 + stm32_rproc_get_loaded_rsc_table(struct rproc *rproc, size_t *table_sz) 577 + { 578 + struct stm32_rproc *ddata = rproc->priv; 579 + struct device *dev = rproc->dev.parent; 580 + phys_addr_t rsc_pa; 581 + u32 rsc_da; 582 + int err; 583 + 584 + /* The resource table has already been mapped, nothing to do */ 585 + if (ddata->rsc_va) 586 + goto done; 587 + 588 + err = regmap_read(ddata->rsctbl.map, ddata->rsctbl.reg, &rsc_da); 589 + if (err) { 590 + dev_err(dev, "failed to read rsc tbl addr\n"); 591 + return ERR_PTR(-EINVAL); 592 + } 593 + 594 + if (!rsc_da) 595 + /* no rsc table */ 596 + 
return ERR_PTR(-ENOENT); 597 + 598 + err = stm32_rproc_da_to_pa(rproc, rsc_da, &rsc_pa); 599 + if (err) 600 + return ERR_PTR(err); 601 + 602 + ddata->rsc_va = devm_ioremap_wc(dev, rsc_pa, RSC_TBL_SIZE); 603 + if (IS_ERR_OR_NULL(ddata->rsc_va)) { 604 + dev_err(dev, "Unable to map memory region: %pa+%zx\n", 605 + &rsc_pa, RSC_TBL_SIZE); 606 + ddata->rsc_va = NULL; 607 + return ERR_PTR(-ENOMEM); 608 + } 609 + 610 + done: 611 + /* 612 + * Assuming the resource table fits in 1kB is fair. 613 + * Notice for the detach, that this 1 kB memory area has to be reserved in the coprocessor 614 + * firmware for the resource table. On detach, the remoteproc core re-initializes this 615 + * entire area by overwriting it with the initial values stored in rproc->clean_table. 616 + */ 617 + *table_sz = RSC_TBL_SIZE; 618 + return (struct resource_table *)ddata->rsc_va; 619 + } 620 + 567 621 static const struct rproc_ops st_rproc_ops = { 622 + .prepare = stm32_rproc_prepare, 568 623 .start = stm32_rproc_start, 569 624 .stop = stm32_rproc_stop, 570 625 .attach = stm32_rproc_attach, 626 + .detach = stm32_rproc_detach, 571 627 .kick = stm32_rproc_kick, 572 628 .load = rproc_elf_load_segments, 573 629 .parse_fw = stm32_rproc_parse_fw, 574 630 .find_loaded_rsc_table = rproc_elf_find_loaded_rsc_table, 631 + .get_loaded_rsc_table = stm32_rproc_get_loaded_rsc_table, 575 632 .sanity_check = rproc_elf_sanity_check, 576 633 .get_boot_addr = rproc_elf_get_boot_addr, 577 634 }; ··· 788 695 return regmap_read(ddata->m4_state.map, ddata->m4_state.reg, state); 789 696 } 790 697 791 - static int stm32_rproc_da_to_pa(struct platform_device *pdev, 792 - struct stm32_rproc *ddata, 793 - u64 da, phys_addr_t *pa) 794 - { 795 - struct device *dev = &pdev->dev; 796 - struct stm32_rproc_mem *p_mem; 797 - unsigned int i; 798 - 799 - for (i = 0; i < ddata->nb_rmems; i++) { 800 - p_mem = &ddata->rmems[i]; 801 - 802 - if (da < p_mem->dev_addr || 803 - da >= p_mem->dev_addr + p_mem->size) 804 - continue; 805 - 806 
- *pa = da - p_mem->dev_addr + p_mem->bus_addr; 807 - dev_dbg(dev, "da %llx to pa %#x\n", da, *pa); 808 - 809 - return 0; 810 - } 811 - 812 - dev_err(dev, "can't translate da %llx\n", da); 813 - 814 - return -EINVAL; 815 - } 816 - 817 - static int stm32_rproc_get_loaded_rsc_table(struct platform_device *pdev, 818 - struct rproc *rproc, 819 - struct stm32_rproc *ddata) 820 - { 821 - struct device *dev = &pdev->dev; 822 - phys_addr_t rsc_pa; 823 - u32 rsc_da; 824 - int err; 825 - 826 - err = regmap_read(ddata->rsctbl.map, ddata->rsctbl.reg, &rsc_da); 827 - if (err) { 828 - dev_err(dev, "failed to read rsc tbl addr\n"); 829 - return err; 830 - } 831 - 832 - if (!rsc_da) 833 - /* no rsc table */ 834 - return 0; 835 - 836 - err = stm32_rproc_da_to_pa(pdev, ddata, rsc_da, &rsc_pa); 837 - if (err) 838 - return err; 839 - 840 - ddata->rsc_va = devm_ioremap_wc(dev, rsc_pa, RSC_TBL_SIZE); 841 - if (IS_ERR_OR_NULL(ddata->rsc_va)) { 842 - dev_err(dev, "Unable to map memory region: %pa+%zx\n", 843 - &rsc_pa, RSC_TBL_SIZE); 844 - ddata->rsc_va = NULL; 845 - return -ENOMEM; 846 - } 847 - 848 - /* 849 - * The resource table is already loaded in device memory, no need 850 - * to work with a cached table. 
851 - */ 852 - rproc->cached_table = NULL; 853 - /* Assuming the resource table fits in 1kB is fair */ 854 - rproc->table_sz = RSC_TBL_SIZE; 855 - rproc->table_ptr = (struct resource_table *)ddata->rsc_va; 856 - 857 - return 0; 858 - } 859 - 860 698 static int stm32_rproc_probe(struct platform_device *pdev) 861 699 { 862 700 struct device *dev = &pdev->dev; ··· 821 797 if (ret) 822 798 goto free_rproc; 823 799 824 - if (state == M4_STATE_CRUN) { 800 + if (state == M4_STATE_CRUN) 825 801 rproc->state = RPROC_DETACHED; 826 - 827 - ret = stm32_rproc_parse_memory_regions(rproc); 828 - if (ret) 829 - goto free_resources; 830 - 831 - ret = stm32_rproc_get_loaded_rsc_table(pdev, rproc, ddata); 832 - if (ret) 833 - goto free_resources; 834 - } 835 802 836 803 rproc->has_iommu = false; 837 804 ddata->workqueue = create_workqueue(dev_name(dev));
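The stm32 driver's relocated stm32_rproc_da_to_pa() is a plain range walk: find the carveout whose device-address window contains the da, then apply that region's bus offset. A userspace sketch of the translation (the region layout below is invented; the driver fills these from its reserved-memory nodes):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Model of one entry in ddata->rmems[]. */
struct mem_region {
	uint64_t dev_addr; /* address as seen by the Cortex-M4 */
	uint64_t bus_addr; /* address as seen on the system bus */
	size_t size;
};

/* Mirrors stm32_rproc_da_to_pa(): walk the carveouts and translate. */
static int model_da_to_pa(const struct mem_region *rmems, unsigned int nb,
			  uint64_t da, uint64_t *pa)
{
	for (unsigned int i = 0; i < nb; i++) {
		const struct mem_region *m = &rmems[i];

		if (da < m->dev_addr || da >= m->dev_addr + m->size)
			continue;
		*pa = da - m->dev_addr + m->bus_addr;
		return 0;
	}
	return -22; /* -EINVAL: no region covers this device address */
}
```

stm32_rproc_get_loaded_rsc_table() uses exactly this translation on the da it reads back from the backup register before ioremapping the 1 kB table window.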
+1 -1
drivers/remoteproc/ti_k3_dsp_remoteproc.c
··· 354 354 * can be used either by the remoteproc core for loading (when using kernel 355 355 * remoteproc loader), or by any rpmsg bus drivers. 356 356 */ 357 - static void *k3_dsp_rproc_da_to_va(struct rproc *rproc, u64 da, size_t len) 357 + static void *k3_dsp_rproc_da_to_va(struct rproc *rproc, u64 da, size_t len, bool *is_iomem) 358 358 { 359 359 struct k3_dsp_rproc *kproc = rproc->priv; 360 360 void __iomem *va = NULL;
+1 -1
drivers/remoteproc/ti_k3_r5_remoteproc.c
··· 590 590 * present in a DSP or IPU device). The translated addresses can be used 591 591 * either by the remoteproc core for loading, or by any rpmsg bus drivers. 592 592 */ 593 - static void *k3_r5_rproc_da_to_va(struct rproc *rproc, u64 da, size_t len) 593 + static void *k3_r5_rproc_da_to_va(struct rproc *rproc, u64 da, size_t len, bool *is_iomem) 594 594 { 595 595 struct k3_r5_rproc *kproc = rproc->priv; 596 596 struct k3_r5_core *core = kproc->core;
+1 -1
drivers/remoteproc/wkup_m3_rproc.c
··· 89 89 return error; 90 90 } 91 91 92 - static void *wkup_m3_rproc_da_to_va(struct rproc *rproc, u64 da, size_t len) 92 + static void *wkup_m3_rproc_da_to_va(struct rproc *rproc, u64 da, size_t len, bool *is_iomem) 93 93 { 94 94 struct wkup_m3_rproc *wkupm3 = rproc->priv; 95 95 void *va = NULL;
+19 -6
include/linux/remoteproc.h
··· 315 315 /** 316 316 * struct rproc_mem_entry - memory entry descriptor 317 317 * @va: virtual address 318 + * @is_iomem: io memory 318 319 * @dma: dma address 319 320 * @len: length, in bytes 320 321 * @da: device address ··· 330 329 */ 331 330 struct rproc_mem_entry { 332 331 void *va; 332 + bool is_iomem; 333 333 dma_addr_t dma; 334 334 size_t len; 335 335 u32 da; ··· 363 361 * @start: power on the device and boot it 364 362 * @stop: power off the device 365 363 * @attach: attach to a device that his already powered up 364 + * @detach: detach from a device, leaving it powered up 366 365 * @kick: kick a virtqueue (virtqueue id given as a parameter) 367 366 * @da_to_va: optional platform hook to perform address translations 368 367 * @parse_fw: parse firmware to extract information (e.g. resource table) ··· 371 368 * RSC_HANDLED if resource was handled, RSC_IGNORED if not handled and a 372 369 * negative value on error 373 370 * @load_rsc_table: load resource table from firmware image 374 - * @find_loaded_rsc_table: find the loaded resouce table 371 + * @find_loaded_rsc_table: find the loaded resource table from firmware image 372 + * @get_loaded_rsc_table: get resource table installed in memory 373 + * by external entity 375 374 * @load: load firmware to memory, where the remote processor 376 375 * expects to find it 377 376 * @sanity_check: sanity check the fw image ··· 388 383 int (*start)(struct rproc *rproc); 389 384 int (*stop)(struct rproc *rproc); 390 385 int (*attach)(struct rproc *rproc); 386 + int (*detach)(struct rproc *rproc); 391 387 void (*kick)(struct rproc *rproc, int vqid); 392 - void * (*da_to_va)(struct rproc *rproc, u64 da, size_t len); 388 + void * (*da_to_va)(struct rproc *rproc, u64 da, size_t len, bool *is_iomem); 393 389 int (*parse_fw)(struct rproc *rproc, const struct firmware *fw); 394 390 int (*handle_rsc)(struct rproc *rproc, u32 rsc_type, void *rsc, 395 391 int offset, int avail); 396 392 struct resource_table 
*(*find_loaded_rsc_table)( 397 393 struct rproc *rproc, const struct firmware *fw); 394 + struct resource_table *(*get_loaded_rsc_table)( 395 + struct rproc *rproc, size_t *size); 398 396 int (*load)(struct rproc *rproc, const struct firmware *fw); 399 397 int (*sanity_check)(struct rproc *rproc, const struct firmware *fw); 400 398 u64 (*get_boot_addr)(struct rproc *rproc, const struct firmware *fw); ··· 413 405 * @RPROC_RUNNING: device is up and running 414 406 * @RPROC_CRASHED: device has crashed; need to start recovery 415 407 * @RPROC_DELETED: device is deleted 408 + * @RPROC_ATTACHED: device has been booted by another entity and the core 409 + * has attached to it 416 410 * @RPROC_DETACHED: device has been booted by another entity and waiting 417 411 * for the core to attach to it 418 412 * @RPROC_LAST: just keep this one at the end ··· 431 421 RPROC_RUNNING = 2, 432 422 RPROC_CRASHED = 3, 433 423 RPROC_DELETED = 4, 434 - RPROC_DETACHED = 5, 435 - RPROC_LAST = 6, 424 + RPROC_ATTACHED = 5, 425 + RPROC_DETACHED = 6, 426 + RPROC_LAST = 7, 436 427 }; 437 428 438 429 /** ··· 516 505 * @recovery_disabled: flag that state if recovery was disabled 517 506 * @max_notifyid: largest allocated notify id. 518 507 * @table_ptr: pointer to the resource table in effect 508 + * @clean_table: copy of the resource table without modifications. 
Used 509 + * when a remote processor is attached or detached from the core 519 510 * @cached_table: copy of the resource table 520 511 * @table_sz: size of @cached_table 521 512 * @has_iommu: flag to indicate if remote processor is behind an MMU 522 513 * @auto_boot: flag to indicate if remote processor should be auto-started 523 - * @autonomous: true if an external entity has booted the remote processor 524 514 * @dump_segments: list of segments in the firmware 525 515 * @nb_vdev: number of vdev currently handled by rproc 526 516 * @char_dev: character device of the rproc ··· 554 542 bool recovery_disabled; 555 543 int max_notifyid; 556 544 struct resource_table *table_ptr; 545 + struct resource_table *clean_table; 557 546 struct resource_table *cached_table; 558 547 size_t table_sz; 559 548 bool has_iommu; 560 549 bool auto_boot; 561 - bool autonomous; 562 550 struct list_head dump_segments; 563 551 int nb_vdev; 564 552 u8 elf_class; ··· 667 655 668 656 int rproc_boot(struct rproc *rproc); 669 657 void rproc_shutdown(struct rproc *rproc); 658 + int rproc_detach(struct rproc *rproc); 670 659 int rproc_set_firmware(struct rproc *rproc, const char *fw_name); 671 660 void rproc_report_crash(struct rproc *rproc, enum rproc_crash_type type); 672 661 void rproc_coredump_using_sections(struct rproc *rproc);
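One side effect of the header change worth noting: inserting RPROC_ATTACHED at value 5 shifts RPROC_DETACHED from 5 to 6 (these values are internal to the kernel, not a userspace ABI, so the renumbering is safe). A small model of the updated enum and the sysfs name table it must stay in sync with (MODEL_ prefixes added to keep the sketch self-contained):

```c
#include <assert.h>
#include <string.h>

enum model_rproc_state {
	MODEL_RPROC_OFFLINE   = 0,
	MODEL_RPROC_SUSPENDED = 1,
	MODEL_RPROC_RUNNING   = 2,
	MODEL_RPROC_CRASHED   = 3,
	MODEL_RPROC_DELETED   = 4,
	MODEL_RPROC_ATTACHED  = 5, /* new in this series */
	MODEL_RPROC_DETACHED  = 6, /* shifted from 5 */
	MODEL_RPROC_LAST      = 7,
};

/* Mirrors the rproc_state_string[] table in remoteproc_sysfs.c. */
static const char *model_state_name(enum model_rproc_state st)
{
	static const char * const names[] = {
		[MODEL_RPROC_OFFLINE]   = "offline",
		[MODEL_RPROC_SUSPENDED] = "suspended",
		[MODEL_RPROC_RUNNING]   = "running",
		[MODEL_RPROC_CRASHED]   = "crashed",
		[MODEL_RPROC_DELETED]   = "deleted",
		[MODEL_RPROC_ATTACHED]  = "attached",
		[MODEL_RPROC_DETACHED]  = "detached",
		[MODEL_RPROC_LAST]      = "invalid",
	};

	if ((int)st < 0 || st > MODEL_RPROC_LAST)
		return "invalid";
	return names[st];
}
```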