Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'mailbox-v5.18' of git://git.linaro.org/landing-teams/working/fujitsu/integration

Pull mailbox updates from Jassi Brar:
"qcom:
- add support for MSM8976

mtk:
- enable mt8186
- add ADSP controller driver

ti:
- use poll mode during suspend

tegra:
- fix tx channel flush

imx:
- add i.MX8 SECO MU support
- prepare for, and add i.MX93 support"

* tag 'mailbox-v5.18' of git://git.linaro.org/landing-teams/working/fujitsu/integration:
dt-bindings: mailbox: add definition for mt8186
mailbox: ti-msgmgr: Operate mailbox in polled mode during system suspend
mailbox: ti-msgmgr: Refactor message read during interrupt handler
mailbox: imx: support i.MX93 S401 MU
mailbox: imx: support dual interrupts
mailbox: imx: extend irq to an array
dt-bindings: mailbox: imx-mu: add i.MX93 S4 MU support
dt-bindings: mailbox: imx-mu: add i.MX93 MU
mailbox: imx: add i.MX8 SECO MU support
mailbox: imx: introduce rxdb callback
dt-bindings: mailbox: imx-mu: add i.MX8 SECO MU support
mailbox: imx: enlarge timeout while reading/writing messages to SCFW
mailbox: imx: fix crash in resume on i.mx8ulp
mailbox: imx: fix wakeup failure from freeze mode
mailbox: mediatek: add support for adsp mailbox controller
dt-bindings: mailbox: mtk,adsp-mbox: add mtk adsp-mbox document
mailbox: qcom-apcs-ipc: Add compatible for MSM8976 SoC
dt-bindings: mailbox: Add compatible for the MSM8976
mailbox: tegra-hsp: Flush whole channel

+719 -73
+34 -1
Documentation/devicetree/bindings/mailbox/fsl,mu.yaml
··· 28 28 - const: fsl,imx7ulp-mu 29 29 - const: fsl,imx8ulp-mu 30 30 - const: fsl,imx8-mu-scu 31 + - const: fsl,imx8-mu-seco 32 + - const: fsl,imx93-mu-s4 31 33 - const: fsl,imx8ulp-mu-s4 34 + - items: 35 + - const: fsl,imx93-mu 36 + - const: fsl,imx8ulp-mu 32 37 - items: 33 38 - enum: 34 39 - fsl,imx7s-mu ··· 56 51 maxItems: 1 57 52 58 53 interrupts: 59 - maxItems: 1 54 + minItems: 1 55 + maxItems: 2 56 + 57 + interrupt-names: 58 + minItems: 1 59 + items: 60 + - const: tx 61 + - const: rx 60 62 61 63 "#mbox-cells": 62 64 description: | ··· 97 85 - reg 98 86 - interrupts 99 87 - "#mbox-cells" 88 + 89 + allOf: 90 + - if: 91 + properties: 92 + compatible: 93 + enum: 94 + - fsl,imx93-mu-s4 95 + then: 96 + properties: 97 + interrupt-names: 98 + minItems: 2 99 + interrupts: 100 + minItems: 2 101 + 102 + else: 103 + properties: 104 + interrupts: 105 + maxItems: 1 106 + not: 107 + required: 108 + - interrupt-names 100 109 101 110 additionalProperties: false 102 111
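The fsl,mu.yaml change above makes "fsl,imx93-mu-s4" require two named interrupts ("tx" and "rx") while every other variant keeps a single unnamed one. A minimal sketch of a node that satisfies the new conditional; the unit address and SPI numbers are placeholders, not taken from any real i.MX93 board:

```dts
/* Hypothetical i.MX93 S4 MU node -- address and interrupt
 * numbers are illustrative only. */
mailbox@47520000 {
	compatible = "fsl,imx93-mu-s4";
	reg = <0x47520000 0x10000>;
	interrupts = <GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>,
		     <GIC_SPI 30 IRQ_TYPE_LEVEL_HIGH>;
	interrupt-names = "tx", "rx";
	#mbox-cells = <2>;
};
```

Any other fsl,mu variant must instead supply exactly one interrupt and omit interrupt-names, which the `else` branch of the new `allOf` enforces.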
+50
Documentation/devicetree/bindings/mailbox/mtk,adsp-mbox.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/mailbox/mtk,adsp-mbox.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Mediatek ADSP mailbox 8 + 9 + maintainers: 10 + - Allen-KH Cheng <Allen-KH.Cheng@mediatek.com> 11 + 12 + description: | 13 + The MTK ADSP mailbox Inter-Processor Communication (IPC) enables the SoC 14 + to communicate with ADSP by passing messages through two mailbox channels. 15 + The MTK ADSP mailbox IPC also provides the ability for one processor to 16 + signal the other processor using interrupts. 17 + 18 + properties: 19 + compatible: 20 + items: 21 + - const: mediatek,mt8195-adsp-mbox 22 + 23 + "#mbox-cells": 24 + const: 0 25 + 26 + reg: 27 + maxItems: 1 28 + 29 + interrupts: 30 + maxItems: 1 31 + 32 + required: 33 + - compatible 34 + - "#mbox-cells" 35 + - reg 36 + - interrupts 37 + 38 + additionalProperties: false 39 + 40 + examples: 41 + - | 42 + #include <dt-bindings/interrupt-controller/arm-gic.h> 43 + #include <dt-bindings/interrupt-controller/irq.h> 44 + 45 + adsp_mailbox0: mailbox@10816000 { 46 + compatible = "mediatek,mt8195-adsp-mbox"; 47 + #mbox-cells = <0>; 48 + reg = <0x10816000 0x1000>; 49 + interrupts = <GIC_SPI 702 IRQ_TYPE_LEVEL_HIGH 0>; 50 + };
+5 -3
Documentation/devicetree/bindings/mailbox/mtk-gce.txt
··· 10 10 11 11 Required properties: 12 12 - compatible: can be "mediatek,mt8173-gce", "mediatek,mt8183-gce", 13 - "mediatek,mt8192-gce", "mediatek,mt8195-gce" or "mediatek,mt6779-gce". 13 + "mediatek,mt8186-gce", "mediatek,mt8192-gce", "mediatek,mt8195-gce" or 14 + "mediatek,mt6779-gce". 14 15 - reg: Address range of the GCE unit 15 16 - interrupts: The interrupt signal from the GCE block 16 17 - clock: Clocks according to the common clock binding ··· 41 40 defined in 'dt-bindings/gce/<chip>-gce.h'. 42 41 43 42 Some values of properties are defined in 'dt-bindings/gce/mt8173-gce.h', 44 - 'dt-bindings/gce/mt8183-gce.h', 'dt-bindings/gce/mt8192-gce.h', 45 - 'dt-bindings/gce/mt8195-gce.h' or 'dt-bindings/gce/mt6779-gce.h'. 43 + 'dt-bindings/gce/mt8183-gce.h', 'dt-bindings/gce/mt8186-gce.h', 44 + 'dt-bindings/gce/mt8192-gce.h', 'dt-bindings/gce/mt8195-gce.h' or 45 + 'dt-bindings/gce/mt6779-gce.h'. 46 46 Such as sub-system ids, thread priority, event ids. 47 47 48 48 Example:
+1
Documentation/devicetree/bindings/mailbox/qcom,apcs-kpss-global.yaml
··· 21 21 - qcom,msm8916-apcs-kpss-global 22 22 - qcom,msm8939-apcs-kpss-global 23 23 - qcom,msm8953-apcs-kpss-global 24 + - qcom,msm8976-apcs-kpss-global 24 25 - qcom,msm8994-apcs-kpss-global 25 26 - qcom,msm8996-apcs-hmss-global 26 27 - qcom,msm8998-apcs-hmss-global
+9
drivers/mailbox/Kconfig
··· 238 238 with hardware for Inter-Processor Communication Controller (IPCC) 239 239 between processors. Say Y here if you want to have this support. 240 240 241 + config MTK_ADSP_MBOX 242 + tristate "MediaTek ADSP Mailbox Controller" 243 + depends on ARCH_MEDIATEK || COMPILE_TEST 244 + help 245 + Say yes here to add support for the MediaTek ADSP Mailbox Controller. 246 + This mailbox driver is used to send notifications or short messages 247 + between processors and the ADSP. It places the message in a shared 248 + buffer and accesses the IPC control registers. 249 + 241 250 config MTK_CMDQ_MBOX 242 251 tristate "MediaTek CMDQ Mailbox Support" 243 252 depends on ARCH_MEDIATEK || COMPILE_TEST
+2
drivers/mailbox/Makefile
··· 49 49 50 50 obj-$(CONFIG_STM32_IPCC) += stm32-ipcc.o 51 51 52 + obj-$(CONFIG_MTK_ADSP_MBOX) += mtk-adsp-mailbox.o 53 + 52 54 obj-$(CONFIG_MTK_CMDQ_MBOX) += mtk-cmdq-mailbox.o 53 55 54 56 obj-$(CONFIG_ZYNQMP_IPI_MBOX) += zynqmp-ipi-mailbox.o
+285 -23
drivers/mailbox/imx-mailbox.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 3 * Copyright (c) 2018 Pengutronix, Oleksij Rempel <o.rempel@pengutronix.de> 4 + * Copyright 2022 NXP, Peng Fan <peng.fan@nxp.com> 4 5 */ 5 6 6 7 #include <linux/clk.h> ··· 10 9 #include <linux/interrupt.h> 11 10 #include <linux/io.h> 12 11 #include <linux/iopoll.h> 12 + #include <linux/jiffies.h> 13 13 #include <linux/kernel.h> 14 14 #include <linux/mailbox_controller.h> 15 15 #include <linux/module.h> 16 16 #include <linux/of_device.h> 17 17 #include <linux/pm_runtime.h> 18 + #include <linux/suspend.h> 18 19 #include <linux/slab.h> 19 20 20 21 #define IMX_MU_CHANS 16 ··· 26 23 #define IMX_MU_S4_CHANS 2 27 24 #define IMX_MU_CHAN_NAME_SIZE 20 28 25 26 + #define IMX_MU_SECO_TX_TOUT (msecs_to_jiffies(3000)) 27 + #define IMX_MU_SECO_RX_TOUT (msecs_to_jiffies(3000)) 28 + 29 + /* Please not change TX & RX */ 29 30 enum imx_mu_chan_type { 30 - IMX_MU_TYPE_TX, /* Tx */ 31 - IMX_MU_TYPE_RX, /* Rx */ 32 - IMX_MU_TYPE_TXDB, /* Tx doorbell */ 33 - IMX_MU_TYPE_RXDB, /* Rx doorbell */ 31 + IMX_MU_TYPE_TX = 0, /* Tx */ 32 + IMX_MU_TYPE_RX = 1, /* Rx */ 33 + IMX_MU_TYPE_TXDB = 2, /* Tx doorbell */ 34 + IMX_MU_TYPE_RXDB = 3, /* Rx doorbell */ 34 35 }; 35 36 36 37 enum imx_mu_xcr { ··· 54 47 55 48 struct imx_sc_rpc_msg_max { 56 49 struct imx_sc_rpc_msg hdr; 57 - u32 data[7]; 50 + u32 data[30]; 58 51 }; 59 52 60 53 struct imx_s4_rpc_msg_max { ··· 82 75 struct imx_mu_con_priv con_priv[IMX_MU_CHANS]; 83 76 const struct imx_mu_dcfg *dcfg; 84 77 struct clk *clk; 85 - int irq; 78 + int irq[IMX_MU_CHANS]; 79 + bool suspend; 86 80 87 81 u32 xcr[4]; 88 82 ··· 94 86 IMX_MU_V1, 95 87 IMX_MU_V2 = BIT(1), 96 88 IMX_MU_V2_S4 = BIT(15), 89 + IMX_MU_V2_IRQ = BIT(16), 97 90 }; 98 91 99 92 struct imx_mu_dcfg { 100 93 int (*tx)(struct imx_mu_priv *priv, struct imx_mu_con_priv *cp, void *data); 101 94 int (*rx)(struct imx_mu_priv *priv, struct imx_mu_con_priv *cp); 95 + int (*rxdb)(struct imx_mu_priv *priv, struct imx_mu_con_priv *cp); 102 96 void 
(*init)(struct imx_mu_priv *priv); 103 97 enum imx_mu_type type; 104 98 u32 xTR; /* Transmit Register0 */ ··· 136 126 static u32 imx_mu_read(struct imx_mu_priv *priv, u32 offs) 137 127 { 138 128 return ioread32(priv->base + offs); 129 + } 130 + 131 + static int imx_mu_tx_waiting_write(struct imx_mu_priv *priv, u32 val, u32 idx) 132 + { 133 + u64 timeout_time = get_jiffies_64() + IMX_MU_SECO_TX_TOUT; 134 + u32 status; 135 + u32 can_write; 136 + 137 + dev_dbg(priv->dev, "Trying to write %.8x to idx %d\n", val, idx); 138 + 139 + do { 140 + status = imx_mu_read(priv, priv->dcfg->xSR[IMX_MU_TSR]); 141 + can_write = status & IMX_MU_xSR_TEn(priv->dcfg->type, idx % 4); 142 + } while (!can_write && time_is_after_jiffies64(timeout_time)); 143 + 144 + if (!can_write) { 145 + dev_err(priv->dev, "timeout trying to write %.8x at %d(%.8x)\n", 146 + val, idx, status); 147 + return -ETIME; 148 + } 149 + 150 + imx_mu_write(priv, val, priv->dcfg->xTR + (idx % 4) * 4); 151 + 152 + return 0; 153 + } 154 + 155 + static int imx_mu_rx_waiting_read(struct imx_mu_priv *priv, u32 *val, u32 idx) 156 + { 157 + u64 timeout_time = get_jiffies_64() + IMX_MU_SECO_RX_TOUT; 158 + u32 status; 159 + u32 can_read; 160 + 161 + dev_dbg(priv->dev, "Trying to read from idx %d\n", idx); 162 + 163 + do { 164 + status = imx_mu_read(priv, priv->dcfg->xSR[IMX_MU_RSR]); 165 + can_read = status & IMX_MU_xSR_RFn(priv->dcfg->type, idx % 4); 166 + } while (!can_read && time_is_after_jiffies64(timeout_time)); 167 + 168 + if (!can_read) { 169 + dev_err(priv->dev, "timeout trying to read idx %d (%.8x)\n", 170 + idx, status); 171 + return -ETIME; 172 + } 173 + 174 + *val = imx_mu_read(priv, priv->dcfg->xRR + (idx % 4) * 4); 175 + dev_dbg(priv->dev, "Read %.8x\n", *val); 176 + 177 + return 0; 139 178 } 140 179 141 180 static u32 imx_mu_xcr_rmw(struct imx_mu_priv *priv, enum imx_mu_xcr type, u32 set, u32 clr) ··· 236 177 return 0; 237 178 } 238 179 180 + static int imx_mu_generic_rxdb(struct imx_mu_priv *priv, 181 + 
struct imx_mu_con_priv *cp) 182 + { 183 + imx_mu_write(priv, IMX_MU_xSR_GIPn(priv->dcfg->type, cp->idx), 184 + priv->dcfg->xSR[IMX_MU_GSR]); 185 + mbox_chan_received_data(cp->chan, NULL); 186 + 187 + return 0; 188 + } 189 + 239 190 static int imx_mu_specific_tx(struct imx_mu_priv *priv, struct imx_mu_con_priv *cp, void *data) 240 191 { 241 192 u32 *arg = data; ··· 285 216 ret = readl_poll_timeout(priv->base + priv->dcfg->xSR[IMX_MU_TSR], 286 217 xsr, 287 218 xsr & IMX_MU_xSR_TEn(priv->dcfg->type, i % num_tr), 288 - 0, 100); 219 + 0, 5 * USEC_PER_SEC); 289 220 if (ret) { 290 221 dev_err(priv->dev, "Send data index: %d timeout\n", i); 291 222 return ret; ··· 330 261 331 262 for (i = 1; i < size; i++) { 332 263 ret = readl_poll_timeout(priv->base + priv->dcfg->xSR[IMX_MU_RSR], xsr, 333 - xsr & IMX_MU_xSR_RFn(priv->dcfg->type, i % 4), 0, 100); 264 + xsr & IMX_MU_xSR_RFn(priv->dcfg->type, i % 4), 0, 265 + 5 * USEC_PER_SEC); 334 266 if (ret) { 335 267 dev_err(priv->dev, "timeout read idx %d\n", i); 336 268 return ret; ··· 343 273 mbox_chan_received_data(cp->chan, (void *)priv->msg); 344 274 345 275 return 0; 276 + } 277 + 278 + static int imx_mu_seco_tx(struct imx_mu_priv *priv, struct imx_mu_con_priv *cp, 279 + void *data) 280 + { 281 + struct imx_sc_rpc_msg_max *msg = data; 282 + u32 *arg = data; 283 + u32 byte_size; 284 + int err; 285 + int i; 286 + 287 + dev_dbg(priv->dev, "Sending message\n"); 288 + 289 + switch (cp->type) { 290 + case IMX_MU_TYPE_TXDB: 291 + byte_size = msg->hdr.size * sizeof(u32); 292 + if (byte_size > sizeof(*msg)) { 293 + /* 294 + * The real message size can be different to 295 + * struct imx_sc_rpc_msg_max size 296 + */ 297 + dev_err(priv->dev, 298 + "Exceed max msg size (%zu) on TX, got: %i\n", 299 + sizeof(*msg), byte_size); 300 + return -EINVAL; 301 + } 302 + 303 + print_hex_dump_debug("from client ", DUMP_PREFIX_OFFSET, 4, 4, 304 + data, byte_size, false); 305 + 306 + /* Send first word */ 307 + dev_dbg(priv->dev, "Sending header\n"); 308 + 
imx_mu_write(priv, *arg++, priv->dcfg->xTR); 309 + 310 + /* Send signaling */ 311 + dev_dbg(priv->dev, "Sending signaling\n"); 312 + imx_mu_xcr_rmw(priv, IMX_MU_GCR, 313 + IMX_MU_xCR_GIRn(priv->dcfg->type, cp->idx), 0); 314 + 315 + /* Send words to fill the mailbox */ 316 + for (i = 1; i < 4 && i < msg->hdr.size; i++) { 317 + dev_dbg(priv->dev, "Sending word %d\n", i); 318 + imx_mu_write(priv, *arg++, 319 + priv->dcfg->xTR + (i % 4) * 4); 320 + } 321 + 322 + /* Send rest of message waiting for remote read */ 323 + for (; i < msg->hdr.size; i++) { 324 + dev_dbg(priv->dev, "Sending word %d\n", i); 325 + err = imx_mu_tx_waiting_write(priv, *arg++, i); 326 + if (err) { 327 + dev_err(priv->dev, "Timeout tx %d\n", i); 328 + return err; 329 + } 330 + } 331 + 332 + /* Simulate hack for mbox framework */ 333 + tasklet_schedule(&cp->txdb_tasklet); 334 + 335 + break; 336 + default: 337 + dev_warn_ratelimited(priv->dev, 338 + "Send data on wrong channel type: %d\n", 339 + cp->type); 340 + return -EINVAL; 341 + } 342 + 343 + return 0; 344 + } 345 + 346 + static int imx_mu_seco_rxdb(struct imx_mu_priv *priv, struct imx_mu_con_priv *cp) 347 + { 348 + struct imx_sc_rpc_msg_max msg; 349 + u32 *data = (u32 *)&msg; 350 + u32 byte_size; 351 + int err = 0; 352 + int i; 353 + 354 + dev_dbg(priv->dev, "Receiving message\n"); 355 + 356 + /* Read header */ 357 + dev_dbg(priv->dev, "Receiving header\n"); 358 + *data++ = imx_mu_read(priv, priv->dcfg->xRR); 359 + byte_size = msg.hdr.size * sizeof(u32); 360 + if (byte_size > sizeof(msg)) { 361 + dev_err(priv->dev, "Exceed max msg size (%zu) on RX, got: %i\n", 362 + sizeof(msg), byte_size); 363 + err = -EINVAL; 364 + goto error; 365 + } 366 + 367 + /* Read message waiting they are written */ 368 + for (i = 1; i < msg.hdr.size; i++) { 369 + dev_dbg(priv->dev, "Receiving word %d\n", i); 370 + err = imx_mu_rx_waiting_read(priv, data++, i); 371 + if (err) { 372 + dev_err(priv->dev, "Timeout rx %d\n", i); 373 + goto error; 374 + } 375 + } 376 + 377 
+ /* Clear GIP */ 378 + imx_mu_write(priv, IMX_MU_xSR_GIPn(priv->dcfg->type, cp->idx), 379 + priv->dcfg->xSR[IMX_MU_GSR]); 380 + 381 + print_hex_dump_debug("to client ", DUMP_PREFIX_OFFSET, 4, 4, 382 + &msg, byte_size, false); 383 + 384 + /* send data to client */ 385 + dev_dbg(priv->dev, "Sending message to client\n"); 386 + mbox_chan_received_data(cp->chan, (void *)&msg); 387 + 388 + goto exit; 389 + 390 + error: 391 + mbox_chan_received_data(cp->chan, ERR_PTR(err)); 392 + 393 + exit: 394 + return err; 346 395 } 347 396 348 397 static void imx_mu_txdb_tasklet(unsigned long data) ··· 515 326 priv->dcfg->rx(priv, cp); 516 327 } else if ((val == IMX_MU_xSR_GIPn(priv->dcfg->type, cp->idx)) && 517 328 (cp->type == IMX_MU_TYPE_RXDB)) { 518 - imx_mu_write(priv, IMX_MU_xSR_GIPn(priv->dcfg->type, cp->idx), 519 - priv->dcfg->xSR[IMX_MU_GSR]); 520 - mbox_chan_received_data(chan, NULL); 329 + priv->dcfg->rxdb(priv, cp); 521 330 } else { 522 331 dev_warn_ratelimited(priv->dev, "Not handled interrupt\n"); 523 332 return IRQ_NONE; 524 333 } 334 + 335 + if (priv->suspend) 336 + pm_system_wakeup(); 525 337 526 338 return IRQ_HANDLED; 527 339 } ··· 539 349 { 540 350 struct imx_mu_priv *priv = to_imx_mu_priv(chan->mbox); 541 351 struct imx_mu_con_priv *cp = chan->con_priv; 542 - unsigned long irq_flag = IRQF_SHARED; 352 + unsigned long irq_flag = 0; 543 353 int ret; 544 354 545 355 pm_runtime_get_sync(priv->dev); ··· 554 364 if (!priv->dev->pm_domain) 555 365 irq_flag |= IRQF_NO_SUSPEND; 556 366 557 - ret = request_irq(priv->irq, imx_mu_isr, irq_flag, 558 - cp->irq_desc, chan); 367 + if (!(priv->dcfg->type & IMX_MU_V2_IRQ)) 368 + irq_flag |= IRQF_SHARED; 369 + 370 + ret = request_irq(priv->irq[cp->type], imx_mu_isr, irq_flag, cp->irq_desc, chan); 559 371 if (ret) { 560 - dev_err(priv->dev, 561 - "Unable to acquire IRQ %d\n", priv->irq); 372 + dev_err(priv->dev, "Unable to acquire IRQ %d\n", priv->irq[cp->type]); 562 373 return ret; 563 374 } 564 375 ··· 602 411 break; 603 412 } 604 
413 605 - free_irq(priv->irq, chan); 414 + free_irq(priv->irq[cp->type], chan); 606 415 pm_runtime_put_sync(priv->dev); 607 416 } 608 417 ··· 670 479 return &mbox->chans[chan]; 671 480 } 672 481 482 + static struct mbox_chan *imx_mu_seco_xlate(struct mbox_controller *mbox, 483 + const struct of_phandle_args *sp) 484 + { 485 + u32 type; 486 + 487 + if (sp->args_count < 1) { 488 + dev_err(mbox->dev, "Invalid argument count %d\n", sp->args_count); 489 + return ERR_PTR(-EINVAL); 490 + } 491 + 492 + type = sp->args[0]; /* channel type */ 493 + 494 + /* Only supports TXDB and RXDB */ 495 + if (type == IMX_MU_TYPE_TX || type == IMX_MU_TYPE_RX) { 496 + dev_err(mbox->dev, "Invalid type: %d\n", type); 497 + return ERR_PTR(-EINVAL); 498 + } 499 + 500 + return imx_mu_xlate(mbox, sp); 501 + } 502 + 673 503 static void imx_mu_init_generic(struct imx_mu_priv *priv) 674 504 { 675 505 unsigned int i; ··· 741 529 imx_mu_write(priv, 0, priv->dcfg->xCR[i]); 742 530 } 743 531 532 + static void imx_mu_init_seco(struct imx_mu_priv *priv) 533 + { 534 + imx_mu_init_generic(priv); 535 + priv->mbox.of_xlate = imx_mu_seco_xlate; 536 + } 537 + 744 538 static int imx_mu_probe(struct platform_device *pdev) 745 539 { 746 540 struct device *dev = &pdev->dev; 747 541 struct device_node *np = dev->of_node; 748 542 struct imx_mu_priv *priv; 749 543 const struct imx_mu_dcfg *dcfg; 750 - int ret; 544 + int i, ret; 751 545 u32 size; 752 546 753 547 priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); ··· 766 548 if (IS_ERR(priv->base)) 767 549 return PTR_ERR(priv->base); 768 550 769 - priv->irq = platform_get_irq(pdev, 0); 770 - if (priv->irq < 0) 771 - return priv->irq; 772 - 773 551 dcfg = of_device_get_match_data(dev); 774 552 if (!dcfg) 775 553 return -EINVAL; 776 554 priv->dcfg = dcfg; 555 + if (priv->dcfg->type & IMX_MU_V2_IRQ) { 556 + priv->irq[IMX_MU_TYPE_TX] = platform_get_irq_byname(pdev, "tx"); 557 + if (priv->irq[IMX_MU_TYPE_TX] < 0) 558 + return priv->irq[IMX_MU_TYPE_TX]; 559 + 
priv->irq[IMX_MU_TYPE_RX] = platform_get_irq_byname(pdev, "rx"); 560 + if (priv->irq[IMX_MU_TYPE_RX] < 0) 561 + return priv->irq[IMX_MU_TYPE_RX]; 562 + } else { 563 + ret = platform_get_irq(pdev, 0); 564 + if (ret < 0) 565 + return ret; 566 + 567 + for (i = 0; i < IMX_MU_CHANS; i++) 568 + priv->irq[i] = ret; 569 + } 777 570 778 571 if (priv->dcfg->type & IMX_MU_V2_S4) 779 572 size = sizeof(struct imx_s4_rpc_msg_max); ··· 862 633 static const struct imx_mu_dcfg imx_mu_cfg_imx6sx = { 863 634 .tx = imx_mu_generic_tx, 864 635 .rx = imx_mu_generic_rx, 636 + .rxdb = imx_mu_generic_rxdb, 865 637 .init = imx_mu_init_generic, 866 638 .xTR = 0x0, 867 639 .xRR = 0x10, ··· 873 643 static const struct imx_mu_dcfg imx_mu_cfg_imx7ulp = { 874 644 .tx = imx_mu_generic_tx, 875 645 .rx = imx_mu_generic_rx, 646 + .rxdb = imx_mu_generic_rxdb, 876 647 .init = imx_mu_init_generic, 877 648 .xTR = 0x20, 878 649 .xRR = 0x40, ··· 884 653 static const struct imx_mu_dcfg imx_mu_cfg_imx8ulp = { 885 654 .tx = imx_mu_generic_tx, 886 655 .rx = imx_mu_generic_rx, 656 + .rxdb = imx_mu_generic_rxdb, 887 657 .init = imx_mu_init_generic, 658 + .rxdb = imx_mu_generic_rxdb, 888 659 .type = IMX_MU_V2, 889 660 .xTR = 0x200, 890 661 .xRR = 0x280, ··· 905 672 .xCR = {0x110, 0x114, 0x120, 0x128}, 906 673 }; 907 674 675 + static const struct imx_mu_dcfg imx_mu_cfg_imx93_s4 = { 676 + .tx = imx_mu_specific_tx, 677 + .rx = imx_mu_specific_rx, 678 + .init = imx_mu_init_specific, 679 + .type = IMX_MU_V2 | IMX_MU_V2_S4 | IMX_MU_V2_IRQ, 680 + .xTR = 0x200, 681 + .xRR = 0x280, 682 + .xSR = {0xC, 0x118, 0x124, 0x12C}, 683 + .xCR = {0x110, 0x114, 0x120, 0x128}, 684 + }; 685 + 908 686 static const struct imx_mu_dcfg imx_mu_cfg_imx8_scu = { 909 687 .tx = imx_mu_specific_tx, 910 688 .rx = imx_mu_specific_rx, 911 689 .init = imx_mu_init_specific, 690 + .rxdb = imx_mu_generic_rxdb, 691 + .xTR = 0x0, 692 + .xRR = 0x10, 693 + .xSR = {0x20, 0x20, 0x20, 0x20}, 694 + .xCR = {0x24, 0x24, 0x24, 0x24}, 695 + }; 696 + 697 + static 
const struct imx_mu_dcfg imx_mu_cfg_imx8_seco = { 698 + .tx = imx_mu_seco_tx, 699 + .rx = imx_mu_generic_rx, 700 + .rxdb = imx_mu_seco_rxdb, 701 + .init = imx_mu_init_seco, 912 702 .xTR = 0x0, 913 703 .xRR = 0x10, 914 704 .xSR = {0x20, 0x20, 0x20, 0x20}, ··· 943 687 { .compatible = "fsl,imx6sx-mu", .data = &imx_mu_cfg_imx6sx }, 944 688 { .compatible = "fsl,imx8ulp-mu", .data = &imx_mu_cfg_imx8ulp }, 945 689 { .compatible = "fsl,imx8ulp-mu-s4", .data = &imx_mu_cfg_imx8ulp_s4 }, 690 + { .compatible = "fsl,imx93-mu-s4", .data = &imx_mu_cfg_imx93_s4 }, 946 691 { .compatible = "fsl,imx8-mu-scu", .data = &imx_mu_cfg_imx8_scu }, 692 + { .compatible = "fsl,imx8-mu-seco", .data = &imx_mu_cfg_imx8_seco }, 947 693 { }, 948 694 }; 949 695 MODULE_DEVICE_TABLE(of, imx_mu_dt_ids); ··· 959 701 for (i = 0; i < IMX_MU_xCR_MAX; i++) 960 702 priv->xcr[i] = imx_mu_read(priv, priv->dcfg->xCR[i]); 961 703 } 704 + 705 + priv->suspend = true; 962 706 963 707 return 0; 964 708 } ··· 978 718 * send failed, may lead to system freeze. This issue 979 719 * is observed by testing freeze mode suspend. 980 720 */ 981 - if (!imx_mu_read(priv, priv->dcfg->xCR[0]) && !priv->clk) { 721 + if (!priv->clk && !imx_mu_read(priv, priv->dcfg->xCR[0])) { 982 722 for (i = 0; i < IMX_MU_xCR_MAX; i++) 983 723 imx_mu_write(priv, priv->xcr[i], priv->dcfg->xCR[i]); 984 724 } 725 + 726 + priv->suspend = false; 985 727 986 728 return 0; 987 729 }
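The new SECO helpers imx_mu_tx_waiting_write()/imx_mu_rx_waiting_read() above busy-wait on a status bit with a 3-second jiffies deadline before touching the data register. A minimal user-space sketch of the same tx pattern against a fake register, with a simple poll budget standing in for the jiffies deadline; the fake_mu struct, TE bit positions, and FAKE_MU_MAX_POLLS are all illustrative, not the real MU layout:

```c
#include <errno.h>
#include <stdint.h>

#define FAKE_MU_MAX_POLLS 1000	/* stands in for the jiffies deadline */

struct fake_mu {
	int polls_until_ready;	/* TE bit reads as set once this reaches 0 */
	uint32_t tr[4];		/* transmit data registers */
};

/* Stand-in for imx_mu_read(priv, priv->dcfg->xSR[IMX_MU_TSR]) */
static uint32_t fake_mu_read_tsr(struct fake_mu *mu, unsigned int idx)
{
	if (mu->polls_until_ready > 0) {
		mu->polls_until_ready--;
		return 0;
	}
	return 1u << (idx % 4);	/* TE bit for this slot; position illustrative */
}

/* Mirrors imx_mu_tx_waiting_write(): poll TE for slot (idx % 4),
 * then write the data register, or give up with -ETIME */
static int fake_mu_tx_waiting_write(struct fake_mu *mu, uint32_t val,
				    unsigned int idx)
{
	uint32_t te = 1u << (idx % 4);
	int polls;

	for (polls = 0; polls < FAKE_MU_MAX_POLLS; polls++) {
		if (fake_mu_read_tsr(mu, idx) & te) {
			mu->tr[idx % 4] = val;
			return 0;
		}
	}
	return -ETIME;	/* same error code the driver returns on timeout */
}
```

The rx side is symmetric: poll the RF bit with the same deadline, then read xRR for the slot.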
+176
drivers/mailbox/mtk-adsp-mailbox.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (c) 2022 MediaTek Corporation. All rights reserved. 4 + * Author: Allen-KH Cheng <allen-kh.cheng@mediatek.com> 5 + */ 6 + 7 + #include <linux/interrupt.h> 8 + #include <linux/io.h> 9 + #include <linux/iopoll.h> 10 + #include <linux/kernel.h> 11 + #include <linux/mailbox_controller.h> 12 + #include <linux/module.h> 13 + #include <linux/of_device.h> 14 + #include <linux/slab.h> 15 + 16 + struct mtk_adsp_mbox_priv { 17 + struct device *dev; 18 + struct mbox_controller mbox; 19 + void __iomem *va_mboxreg; 20 + const struct mtk_adsp_mbox_cfg *cfg; 21 + }; 22 + 23 + struct mtk_adsp_mbox_cfg { 24 + u32 set_in; 25 + u32 set_out; 26 + u32 clr_in; 27 + u32 clr_out; 28 + }; 29 + 30 + static inline struct mtk_adsp_mbox_priv *get_mtk_adsp_mbox_priv(struct mbox_controller *mbox) 31 + { 32 + return container_of(mbox, struct mtk_adsp_mbox_priv, mbox); 33 + } 34 + 35 + static irqreturn_t mtk_adsp_mbox_irq(int irq, void *data) 36 + { 37 + struct mbox_chan *chan = data; 38 + struct mtk_adsp_mbox_priv *priv = get_mtk_adsp_mbox_priv(chan->mbox); 39 + u32 op = readl(priv->va_mboxreg + priv->cfg->set_out); 40 + 41 + writel(op, priv->va_mboxreg + priv->cfg->clr_out); 42 + 43 + return IRQ_WAKE_THREAD; 44 + } 45 + 46 + static irqreturn_t mtk_adsp_mbox_isr(int irq, void *data) 47 + { 48 + struct mbox_chan *chan = data; 49 + 50 + mbox_chan_received_data(chan, NULL); 51 + 52 + return IRQ_HANDLED; 53 + } 54 + 55 + static struct mbox_chan *mtk_adsp_mbox_xlate(struct mbox_controller *mbox, 56 + const struct of_phandle_args *sp) 57 + { 58 + return mbox->chans; 59 + } 60 + 61 + static int mtk_adsp_mbox_startup(struct mbox_chan *chan) 62 + { 63 + struct mtk_adsp_mbox_priv *priv = get_mtk_adsp_mbox_priv(chan->mbox); 64 + 65 + /* Clear ADSP mbox command */ 66 + writel(0xFFFFFFFF, priv->va_mboxreg + priv->cfg->clr_in); 67 + writel(0xFFFFFFFF, priv->va_mboxreg + priv->cfg->clr_out); 68 + 69 + return 0; 70 + } 71 + 72 + static void 
mtk_adsp_mbox_shutdown(struct mbox_chan *chan) 73 + { 74 + struct mtk_adsp_mbox_priv *priv = get_mtk_adsp_mbox_priv(chan->mbox); 75 + 76 + /* Clear ADSP mbox command */ 77 + writel(0xFFFFFFFF, priv->va_mboxreg + priv->cfg->clr_in); 78 + writel(0xFFFFFFFF, priv->va_mboxreg + priv->cfg->clr_out); 79 + } 80 + 81 + static int mtk_adsp_mbox_send_data(struct mbox_chan *chan, void *data) 82 + { 83 + struct mtk_adsp_mbox_priv *priv = get_mtk_adsp_mbox_priv(chan->mbox); 84 + u32 *msg = data; 85 + 86 + writel(*msg, priv->va_mboxreg + priv->cfg->set_in); 87 + 88 + return 0; 89 + } 90 + 91 + static bool mtk_adsp_mbox_last_tx_done(struct mbox_chan *chan) 92 + { 93 + struct mtk_adsp_mbox_priv *priv = get_mtk_adsp_mbox_priv(chan->mbox); 94 + 95 + return readl(priv->va_mboxreg + priv->cfg->set_in) == 0; 96 + } 97 + 98 + static const struct mbox_chan_ops mtk_adsp_mbox_chan_ops = { 99 + .send_data = mtk_adsp_mbox_send_data, 100 + .startup = mtk_adsp_mbox_startup, 101 + .shutdown = mtk_adsp_mbox_shutdown, 102 + .last_tx_done = mtk_adsp_mbox_last_tx_done, 103 + }; 104 + 105 + static int mtk_adsp_mbox_probe(struct platform_device *pdev) 106 + { 107 + struct device *dev = &pdev->dev; 108 + struct mtk_adsp_mbox_priv *priv; 109 + const struct mtk_adsp_mbox_cfg *cfg; 110 + struct mbox_controller *mbox; 111 + int ret, irq; 112 + 113 + priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); 114 + if (!priv) 115 + return -ENOMEM; 116 + 117 + mbox = &priv->mbox; 118 + mbox->dev = dev; 119 + mbox->ops = &mtk_adsp_mbox_chan_ops; 120 + mbox->txdone_irq = false; 121 + mbox->txdone_poll = true; 122 + mbox->of_xlate = mtk_adsp_mbox_xlate; 123 + mbox->num_chans = 1; 124 + mbox->chans = devm_kzalloc(dev, sizeof(*mbox->chans), GFP_KERNEL); 125 + if (!mbox->chans) 126 + return -ENOMEM; 127 + 128 + priv->va_mboxreg = devm_platform_ioremap_resource(pdev, 0); 129 + if (IS_ERR(priv->va_mboxreg)) 130 + return PTR_ERR(priv->va_mboxreg); 131 + 132 + cfg = of_device_get_match_data(dev); 133 + if (!cfg) 134 + 
return -EINVAL; 135 + priv->cfg = cfg; 136 + 137 + irq = platform_get_irq(pdev, 0); 138 + if (irq < 0) 139 + return irq; 140 + 141 + ret = devm_request_threaded_irq(dev, irq, mtk_adsp_mbox_irq, 142 + mtk_adsp_mbox_isr, IRQF_TRIGGER_NONE, 143 + dev_name(dev), mbox->chans); 144 + if (ret < 0) 145 + return ret; 146 + 147 + platform_set_drvdata(pdev, priv); 148 + 149 + return devm_mbox_controller_register(dev, &priv->mbox); 150 + } 151 + 152 + static const struct mtk_adsp_mbox_cfg mt8195_adsp_mbox_cfg = { 153 + .set_in = 0x00, 154 + .set_out = 0x1c, 155 + .clr_in = 0x04, 156 + .clr_out = 0x20, 157 + }; 158 + 159 + static const struct of_device_id mtk_adsp_mbox_of_match[] = { 160 + { .compatible = "mediatek,mt8195-adsp-mbox", .data = &mt8195_adsp_mbox_cfg }, 161 + {}, 162 + }; 163 + MODULE_DEVICE_TABLE(of, mtk_adsp_mbox_of_match); 164 + 165 + static struct platform_driver mtk_adsp_mbox_driver = { 166 + .probe = mtk_adsp_mbox_probe, 167 + .driver = { 168 + .name = "mtk_adsp_mbox", 169 + .of_match_table = mtk_adsp_mbox_of_match, 170 + }, 171 + }; 172 + module_platform_driver(mtk_adsp_mbox_driver); 173 + 174 + MODULE_AUTHOR("Allen-KH Cheng <Allen-KH.Cheng@mediatek.com>"); 175 + MODULE_DESCRIPTION("MTK ADSP Mailbox Controller"); 176 + MODULE_LICENSE("GPL v2");
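mtk-adsp-mailbox.c above signals the DSP by writing the message word to a "set" register and, since the controller uses txdone_poll, treats the transfer as done once last_tx_done() sees the remote clear it through the paired "clr" register. A user-space model of that set/clr doorbell protocol; the struct is illustrative and collapses the MMIO side effect of a clr write into plain C:

```c
#include <stdbool.h>
#include <stdint.h>

struct fake_adsp_mbox {
	uint32_t set_in;	/* host -> ADSP doorbell (0x00 on mt8195) */
	uint32_t clr_in;	/* remote acks by writing the same bits here */
};

/* Mirrors mtk_adsp_mbox_send_data(): writel(*msg, reg + cfg->set_in) */
static void fake_send_data(struct fake_adsp_mbox *mb, uint32_t msg)
{
	mb->set_in |= msg;
}

/* Mirrors mtk_adsp_mbox_last_tx_done(): done once set_in reads back 0 */
static bool fake_last_tx_done(struct fake_adsp_mbox *mb)
{
	return mb->set_in == 0;
}

/* Remote side: a write to clr_in clears the matching bits in set_in
 * (modelling the hardware side effect of the clr register) */
static void fake_remote_ack(struct fake_adsp_mbox *mb, uint32_t mask)
{
	mb->clr_in = mask;
	mb->set_in &= ~mask;
}
```

The rx direction works the same way in reverse, which is why the irq handler copies set_out into clr_out before waking the threaded handler.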
+1
drivers/mailbox/qcom-apcs-ipc-mailbox.c
··· 146 146 { .compatible = "qcom,msm8916-apcs-kpss-global", .data = &msm8916_apcs_data }, 147 147 { .compatible = "qcom,msm8939-apcs-kpss-global", .data = &msm8916_apcs_data }, 148 148 { .compatible = "qcom,msm8953-apcs-kpss-global", .data = &msm8994_apcs_data }, 149 + { .compatible = "qcom,msm8976-apcs-kpss-global", .data = &msm8994_apcs_data }, 149 150 { .compatible = "qcom,msm8994-apcs-kpss-global", .data = &msm8994_apcs_data }, 150 151 { .compatible = "qcom,msm8996-apcs-hmss-global", .data = &msm8996_apcs_data }, 151 152 { .compatible = "qcom,msm8998-apcs-hmss-global", .data = &msm8994_apcs_data },
+5
drivers/mailbox/tegra-hsp.c
··· 412 412 value = tegra_hsp_channel_readl(ch, HSP_SM_SHRD_MBOX); 413 413 if ((value & HSP_SM_SHRD_MBOX_FULL) == 0) { 414 414 mbox_chan_txdone(chan, 0); 415 + 416 + /* Wait until channel is empty */ 417 + if (chan->active_req != NULL) 418 + continue; 419 + 415 420 return 0; 416 421 } 417 422
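The tegra-hsp fix above keeps flushing while chan->active_req is still set, because mbox_chan_txdone() may immediately cause the core to submit the next queued message into the now-empty mailbox. A toy user-space model of that drain loop, assuming a boolean "mailbox full" bit and a pending-message counter in place of the real HSP registers:

```c
#include <stdbool.h>

struct fake_hsp_chan {
	int pending;	/* messages the core will submit on each txdone */
	bool mbox_full;	/* models the HSP_SM_SHRD_MBOX_FULL bit */
};

static int fake_hsp_flush(struct fake_hsp_chan *ch, int max_loops)
{
	while (max_loops-- > 0) {
		if (!ch->mbox_full) {
			/* mbox_chan_txdone(): core may queue the next msg */
			if (ch->pending > 0) {
				ch->pending--;
				ch->mbox_full = true;
				continue;	/* active_req != NULL: keep draining */
			}
			return 0;		/* whole channel flushed */
		}
		ch->mbox_full = false;		/* remote consumed the word */
	}
	return -1;	/* the driver returns -ETIMEDOUT here */
}
```

Without the `continue` on a still-active request, the old code returned after the first empty mailbox and left later messages queued.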
+144 -45
drivers/mailbox/ti-msgmgr.c
··· 2 2 /* 3 3 * Texas Instruments' Message Manager Driver 4 4 * 5 - * Copyright (C) 2015-2017 Texas Instruments Incorporated - https://www.ti.com/ 5 + * Copyright (C) 2015-2022 Texas Instruments Incorporated - https://www.ti.com/ 6 6 * Nishanth Menon 7 7 */ 8 8 ··· 11 11 #include <linux/device.h> 12 12 #include <linux/interrupt.h> 13 13 #include <linux/io.h> 14 + #include <linux/iopoll.h> 14 15 #include <linux/kernel.h> 15 16 #include <linux/mailbox_controller.h> 16 17 #include <linux/module.h> ··· 101 100 * @queue_ctrl: Queue Control register 102 101 * @chan: Mailbox channel 103 102 * @rx_buff: Receive buffer pointer allocated at probe, max_message_size 103 + * @polled_rx_mode: Use polling for rx instead of interrupts 104 104 */ 105 105 struct ti_queue_inst { 106 106 char name[30]; ··· 115 113 void __iomem *queue_ctrl; 116 114 struct mbox_chan *chan; 117 115 u32 *rx_buff; 116 + bool polled_rx_mode; 118 117 }; 119 118 120 119 /** ··· 193 190 return val ? true : false; 194 191 } 195 192 196 - /** 197 - * ti_msgmgr_queue_rx_interrupt() - Interrupt handler for receive Queue 198 - * @irq: Interrupt number 199 - * @p: Channel Pointer 200 - * 201 - * Return: -EINVAL if there is no instance 202 - * IRQ_NONE if the interrupt is not ours. 203 - * IRQ_HANDLED if the rx interrupt was successfully handled. 
204 - */ 205 - static irqreturn_t ti_msgmgr_queue_rx_interrupt(int irq, void *p) 193 + static int ti_msgmgr_queue_rx_data(struct mbox_chan *chan, struct ti_queue_inst *qinst, 194 + const struct ti_msgmgr_desc *desc) 206 195 { 207 - struct mbox_chan *chan = p; 208 - struct device *dev = chan->mbox->dev; 209 - struct ti_msgmgr_inst *inst = dev_get_drvdata(dev); 210 - struct ti_queue_inst *qinst = chan->con_priv; 211 - const struct ti_msgmgr_desc *desc; 212 - int msg_count, num_words; 196 + int num_words; 213 197 struct ti_msgmgr_message message; 214 198 void __iomem *data_reg; 215 199 u32 *word_data; 216 200 217 - if (WARN_ON(!inst)) { 218 - dev_err(dev, "no platform drv data??\n"); 219 - return -EINVAL; 220 - } 221 - 222 - /* Do I have an invalid interrupt source? */ 223 - if (qinst->is_tx) { 224 - dev_err(dev, "Cannot handle rx interrupt on tx channel %s\n", 225 - qinst->name); 226 - return IRQ_NONE; 227 - } 228 - 229 - desc = inst->desc; 230 - if (ti_msgmgr_queue_is_error(desc, qinst)) { 231 - dev_err(dev, "Error on Rx channel %s\n", qinst->name); 232 - return IRQ_NONE; 233 - } 234 - 235 - /* Do I actually have messages to read? */ 236 - msg_count = ti_msgmgr_queue_get_num_messages(desc, qinst); 237 - if (!msg_count) { 238 - /* Shared IRQ? */ 239 - dev_dbg(dev, "Spurious event - 0 pending data!\n"); 240 - return IRQ_NONE; 241 - } 242 - 243 201 /* 244 202 * I have no idea about the protocol being used to communicate with the 245 - * remote producer - 0 could be valid data, so I won't make a judgement 203 + * remote producer - 0 could be valid data, so I wont make a judgement 246 204 * of how many bytes I should be reading. Let the client figure this 247 205 * out.. I just read the full message and pass it on.. 248 206 */ ··· 236 272 * we invoke the handler in IRQ context. 
237 273 */ 238 274 mbox_chan_received_data(chan, (void *)&message); 275 + 276 + return 0; 277 + } 278 + 279 + static int ti_msgmgr_queue_rx_poll_timeout(struct mbox_chan *chan, int timeout_us) 280 + { 281 + struct device *dev = chan->mbox->dev; 282 + struct ti_msgmgr_inst *inst = dev_get_drvdata(dev); 283 + struct ti_queue_inst *qinst = chan->con_priv; 284 + const struct ti_msgmgr_desc *desc = inst->desc; 285 + int msg_count; 286 + int ret; 287 + 288 + ret = readl_poll_timeout_atomic(qinst->queue_state, msg_count, 289 + (msg_count & desc->status_cnt_mask), 290 + 10, timeout_us); 291 + if (ret != 0) 292 + return ret; 293 + 294 + ti_msgmgr_queue_rx_data(chan, qinst, desc); 295 + 296 + return 0; 297 + } 298 + 299 + /** 300 + * ti_msgmgr_queue_rx_interrupt() - Interrupt handler for receive Queue 301 + * @irq: Interrupt number 302 + * @p: Channel Pointer 303 + * 304 + * Return: -EINVAL if there is no instance 305 + * IRQ_NONE if the interrupt is not ours. 306 + * IRQ_HANDLED if the rx interrupt was successfully handled. 307 + */ 308 + static irqreturn_t ti_msgmgr_queue_rx_interrupt(int irq, void *p) 309 + { 310 + struct mbox_chan *chan = p; 311 + struct device *dev = chan->mbox->dev; 312 + struct ti_msgmgr_inst *inst = dev_get_drvdata(dev); 313 + struct ti_queue_inst *qinst = chan->con_priv; 314 + const struct ti_msgmgr_desc *desc; 315 + int msg_count; 316 + 317 + if (WARN_ON(!inst)) { 318 + dev_err(dev, "no platform drv data??\n"); 319 + return -EINVAL; 320 + } 321 + 322 + /* Do I have an invalid interrupt source? */ 323 + if (qinst->is_tx) { 324 + dev_err(dev, "Cannot handle rx interrupt on tx channel %s\n", 325 + qinst->name); 326 + return IRQ_NONE; 327 + } 328 + 329 + desc = inst->desc; 330 + if (ti_msgmgr_queue_is_error(desc, qinst)) { 331 + dev_err(dev, "Error on Rx channel %s\n", qinst->name); 332 + return IRQ_NONE; 333 + } 334 + 335 + /* Do I actually have messages to read? 
*/ 336 + msg_count = ti_msgmgr_queue_get_num_messages(desc, qinst); 337 + if (!msg_count) { 338 + /* Shared IRQ? */ 339 + dev_dbg(dev, "Spurious event - 0 pending data!\n"); 340 + return IRQ_NONE; 341 + } 342 + 343 + ti_msgmgr_queue_rx_data(chan, qinst, desc); 239 344 240 345 return IRQ_HANDLED; 241 346 } ··· 369 336 return msg_count ? false : true; 370 337 } 371 338 339 + static bool ti_msgmgr_chan_has_polled_queue_rx(struct mbox_chan *chan) 340 + { 341 + struct ti_queue_inst *qinst; 342 + 343 + if (!chan) 344 + return false; 345 + 346 + qinst = chan->con_priv; 347 + return qinst->polled_rx_mode; 348 + } 349 + 372 350 /** 373 351 * ti_msgmgr_send_data() - Send data 374 352 * @chan: Channel Pointer ··· 397 353 struct ti_msgmgr_message *message = data; 398 354 void __iomem *data_reg; 399 355 u32 *word_data; 356 + int ret = 0; 400 357 401 358 if (WARN_ON(!inst)) { 402 359 dev_err(dev, "no platform drv data??\n"); ··· 439 394 if (data_reg <= qinst->queue_buff_end) 440 395 writel(0, qinst->queue_buff_end); 441 396 442 - return 0; 397 + /* If we are in polled mode, wait for a response before proceeding */ 398 + if (ti_msgmgr_chan_has_polled_queue_rx(message->chan_rx)) 399 + ret = ti_msgmgr_queue_rx_poll_timeout(message->chan_rx, 400 + message->timeout_rx_ms * 1000); 401 + 402 + return ret; 443 403 } 444 404 445 405 /** ··· 692 642 return 0; 693 643 } 694 644 645 + static int ti_msgmgr_queue_rx_set_polled_mode(struct ti_queue_inst *qinst, bool enable) 646 + { 647 + if (enable) { 648 + disable_irq(qinst->irq); 649 + qinst->polled_rx_mode = true; 650 + } else { 651 + enable_irq(qinst->irq); 652 + qinst->polled_rx_mode = false; 653 + } 654 + 655 + return 0; 656 + } 657 + 658 + static int ti_msgmgr_suspend(struct device *dev) 659 + { 660 + struct ti_msgmgr_inst *inst = dev_get_drvdata(dev); 661 + struct ti_queue_inst *qinst; 662 + int i; 663 + 664 + /* 665 + * We must switch operation to polled mode now as drivers and the genpd 666 + * layer may make late TI SCI calls to 
change clock and device states 667 + * from the noirq phase of suspend. 668 + */ 669 + for (qinst = inst->qinsts, i = 0; i < inst->num_valid_queues; qinst++, i++) { 670 + if (!qinst->is_tx) 671 + ti_msgmgr_queue_rx_set_polled_mode(qinst, true); 672 + } 673 + 674 + return 0; 675 + } 676 + 677 + static int ti_msgmgr_resume(struct device *dev) 678 + { 679 + struct ti_msgmgr_inst *inst = dev_get_drvdata(dev); 680 + struct ti_queue_inst *qinst; 681 + int i; 682 + 683 + for (qinst = inst->qinsts, i = 0; i < inst->num_valid_queues; qinst++, i++) { 684 + if (!qinst->is_tx) 685 + ti_msgmgr_queue_rx_set_polled_mode(qinst, false); 686 + } 687 + 688 + return 0; 689 + } 690 + 691 + static DEFINE_SIMPLE_DEV_PM_OPS(ti_msgmgr_pm_ops, ti_msgmgr_suspend, ti_msgmgr_resume); 692 + 695 693 /* Queue operations */ 696 694 static const struct mbox_chan_ops ti_msgmgr_chan_ops = { 697 695 .startup = ti_msgmgr_queue_startup, ··· 927 829 .driver = { 928 830 .name = "ti-msgmgr", 929 831 .of_match_table = of_match_ptr(ti_msgmgr_of_match), 832 + .pm = &ti_msgmgr_pm_ops, 930 833 }, 931 834 }; 932 835 module_platform_driver(ti_msgmgr_driver);
+7 -1
include/linux/soc/ti/ti-msgmgr.h
··· 1 1 /* 2 2 * Texas Instruments' Message Manager 3 3 * 4 - * Copyright (C) 2015-2016 Texas Instruments Incorporated - https://www.ti.com/ 4 + * Copyright (C) 2015-2022 Texas Instruments Incorporated - https://www.ti.com/ 5 5 * Nishanth Menon 6 6 * 7 7 * This program is free software; you can redistribute it and/or modify ··· 17 17 #ifndef TI_MSGMGR_H 18 18 #define TI_MSGMGR_H 19 19 20 + struct mbox_chan; 21 + 20 22 /** 21 23 * struct ti_msgmgr_message - Message Manager structure 22 24 * @len: Length of data in the Buffer 23 25 * @buf: Buffer pointer 26 + * @chan_rx: Expected channel for response, must be provided to use polled rx 27 + * @timeout_rx_ms: Timeout value to use if polling for response 24 28 * 25 29 * This is the structure for data used in mbox_send_message 26 30 * the length of data buffer used depends on the SoC integration ··· 34 30 struct ti_msgmgr_message { 35 31 size_t len; 36 32 u8 *buf; 33 + struct mbox_chan *chan_rx; 34 + int timeout_rx_ms; 37 35 }; 38 36 39 37 #endif /* TI_MSGMGR_H */