
Merge tag 'mailbox-v6.10' of git://git.kernel.org/pub/scm/linux/kernel/git/jassibrar/mailbox

Pull mailbox updates from Jassi Brar:

- redo the omap driver from legacy to mailbox api

- enable bufferless IPI for zynqmp

- add mhu-v3 driver

- convert from tasklet to BH workqueue

- add qcom MSM8974 APCS compatible IDs

* tag 'mailbox-v6.10' of git://git.kernel.org/pub/scm/linux/kernel/git/jassibrar/mailbox: (24 commits)
dt-bindings: mailbox: qcom-ipcc: Document the SDX75 IPCC
dt-bindings: mailbox: qcom: Add MSM8974 APCS compatible
mailbox: Convert from tasklet to BH workqueue
mailbox: mtk-cmdq: Fix pm_runtime_get_sync() warning in mbox shutdown
mailbox: mtk-cmdq-mailbox: fix module autoloading
mailbox: zynqmp: handle SGI for shared IPI
mailbox: arm_mhuv3: Add driver
dt-bindings: mailbox: arm,mhuv3: Add bindings
mailbox: omap: Remove kernel FIFO message queuing
mailbox: omap: Reverse FIFO busy check logic
mailbox: omap: Remove mbox_chan_to_omap_mbox()
mailbox: omap: Use mbox_controller channel list directly
mailbox: omap: Use function local struct mbox_controller
mailbox: omap: Merge mailbox child node setup loops
mailbox: omap: Use devm_pm_runtime_enable() helper
mailbox: omap: Remove device class
mailbox: omap: Remove unneeded header omap-mailbox.h
mailbox: omap: Move fifo size check to point of use
mailbox: omap: Move omap_mbox_irq_t into driver
mailbox: omap: Remove unused omap_mbox_request_channel() function
...

+1855 -501
+224
Documentation/devicetree/bindings/mailbox/arm,mhuv3.yaml
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/mailbox/arm,mhuv3.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: ARM MHUv3 Mailbox Controller

maintainers:
  - Sudeep Holla <sudeep.holla@arm.com>
  - Cristian Marussi <cristian.marussi@arm.com>

description: |
  The Arm Message Handling Unit (MHU) Version 3 is a mailbox controller that
  enables unidirectional communications with remote processors through various
  possible transport protocols.
  The controller can optionally support a varying number of extensions that, in
  turn, enable different kinds of transport to be used for communication.
  Number, type and characteristics of each supported extension can be discovered
  dynamically at runtime.

  Given the unidirectional nature of the controller, an MHUv3 mailbox controller
  is composed of an MHU Sender (MHUS) containing a PostBox (PBX) block and an
  MHU Receiver (MHUR) containing a MailBox (MBX) block, where

  PBX is used to
    - Configure the MHU
    - Send Transfers to the Receiver
    - Optionally receive acknowledgment of a Transfer from the Receiver

  MBX is used to
    - Configure the MHU
    - Receive Transfers from the Sender
    - Optionally acknowledge Transfers sent by the Sender

  Both PBX and MBX need to be present and defined in the DT description if
  bidirectional communication is required, since two distinct unidirectional
  channels, one for each block, have to be acquired.

  As a consequence, both blocks need to be represented separately and specified
  as distinct DT nodes in order to properly describe their resources.

  Note, though, that thanks to runtime discoverability there is no need to
  identify the two kinds of block with distinct compatibles.

  The following are the possible MHUv3 extensions.

  - Doorbell Extension (DBE): DBE defines a type of channel called a Doorbell
    Channel (DBCH). A DBCH enables a single-bit Transfer to be sent from the
    Sender to the Receiver. The Transfer indicates that an event has occurred.
    When DBE is implemented, the number of DBCHs that an implementation of the
    MHU can support is between 1 and 128, numbered starting from 0 in ascending
    order and discoverable at run-time.
    Each DBCH contains 32 individual fields, referred to as flags, each of which
    can be used independently. It is possible for the Sender to send multiple
    Transfers at once using a single DBCH, as long as each Transfer uses
    a different flag in the DBCH.
    Optionally, data may be transmitted through an out-of-band shared memory
    region, wherein the MHU Doorbell is used strictly as an interrupt-generation
    mechanism, but this is out of the scope of these bindings.

  - FastChannel Extension (FCE): FCE defines a type of channel called a Fast
    Channel (FCH). An FCH is intended for lower-overhead communication between
    Sender and Receiver at the expense of determinism. An FCH allows the Sender
    to update the channel value at any time, regardless of whether the previous
    value has been seen by the Receiver. When the Receiver reads the channel's
    content it gets the last value written to the channel.
    An FCH is lossy in nature: the Sender has no way of knowing if, or when,
    the Receiver will act on the Transfer.
    FCHs are expected to behave as RAM which generates interrupts when writes
    occur to locations within the RAM.
    When FCE is implemented, the number of FCHs that an implementation of the
    MHU can support is between 1 and 1024 if the FastChannel word-size is
    32 bits, or between 1 and 512 when the FastChannel word-size is 64 bits.
    FCHs are numbered from 0 in ascending order.
    Note that the number of FCHs and the word-size are implementation defined,
    not configurable but discoverable at run-time.
    Optionally, data may be transmitted through an out-of-band shared memory
    region, wherein the MHU FastChannel is used as an interrupt-generation
    mechanism which also carries a pointer to such out-of-band data, but this
    is out of the scope of these bindings.

  - FIFO Extension (FE): FE defines a channel type called a FIFO Channel
    (FFCH). An FFCH allows a Sender to send
      - Multiple Transfers to the Receiver without having to wait for the
        previous Transfer to be acknowledged by the Receiver, as long as the
        FIFO has room for the Transfer.
      - Transfers which require the Receiver to provide acknowledgment.
      - Transfers which have in-band payload.
    In all cases, the data is guaranteed to be observed by the Receiver in the
    same order in which the Sender sent it.
    When FE is implemented, the number of FFCHs that an implementation of the
    MHU can support is between 1 and 64, numbered starting from 0 in ascending
    order. The number of FFCHs, their depth (same for all implemented FFCHs)
    and the access granularity are implementation defined, not configurable
    but discoverable at run-time.
    Optionally, additional data may be transmitted through an out-of-band
    shared memory region, wherein the MHU FIFO is used to transmit, in order,
    a small part of the payload (like a header) and a reference to the shared
    memory area holding the remaining, bigger, chunk of the payload, but this
    is out of the scope of these bindings.

properties:
  compatible:
    const: arm,mhuv3

  reg:
    maxItems: 1

  interrupts:
    minItems: 1
    maxItems: 74

  interrupt-names:
    description: |
      The MHUv3 controller generates a number of events, some of which are used
      to generate interrupts; as a consequence it can expose a varying number
      of optional PBX/MBX interrupts, representing the events generated during
      the operation of the various transport protocols associated with
      different extensions. All interrupts of the MHU are level-sensitive.
      Some of these optional interrupts are defined per-channel, where the
      number of channels effectively available is implementation defined and
      run-time discoverable.
      In the following, names are enumerated using patterns, with per-channel
      interrupts implicitly capped at the maximum number of channels allowed
      by the specification for each extension type.
      For the sake of simplicity, maxItems is capped at a plausible number,
      assuming far fewer channels would be implemented than are actually
      possible.

      The only mandatory interrupts on the MHU are:
        - combined
        - mbx-fch-xfer-<N>, but only if mbx-fchgrp-xfer-<N> is not implemented.

    minItems: 1
    maxItems: 74
    items:
      oneOf:
        - const: combined
          description: PBX/MBX Combined interrupt
        - const: combined-ffch
          description: PBX/MBX FIFO Combined interrupt
        - pattern: '^ffch-low-tide-[0-9]+$'
          description: PBX/MBX FIFO Channel <N> Low Tide interrupt
        - pattern: '^ffch-high-tide-[0-9]+$'
          description: PBX/MBX FIFO Channel <N> High Tide interrupt
        - pattern: '^ffch-flush-[0-9]+$'
          description: PBX/MBX FIFO Channel <N> Flush interrupt
        - pattern: '^mbx-dbch-xfer-[0-9]+$'
          description: MBX Doorbell Channel <N> Transfer interrupt
        - pattern: '^mbx-fch-xfer-[0-9]+$'
          description: MBX FastChannel <N> Transfer interrupt
        - pattern: '^mbx-fchgrp-xfer-[0-9]+$'
          description: MBX FastChannel <N> Group Transfer interrupt
        - pattern: '^mbx-ffch-xfer-[0-9]+$'
          description: MBX FIFO Channel <N> Transfer interrupt
        - pattern: '^pbx-dbch-xfer-ack-[0-9]+$'
          description: PBX Doorbell Channel <N> Transfer Ack interrupt
        - pattern: '^pbx-ffch-xfer-ack-[0-9]+$'
          description: PBX FIFO Channel <N> Transfer Ack interrupt

  '#mbox-cells':
    description: |
      The first argument in the consumer's 'mboxes' property represents the
      extension type, the second is the channel number, while the third
      depends on the extension type.

      Extension type constants are defined in <dt-bindings/arm/mhuv3-dt.h>.

      The extension type for DBE is DBE_EXT and the third parameter represents
      the doorbell flag number to use.
      The extension type for FCE is FCE_EXT; the third parameter is unused.
      The extension type for FE is FE_EXT; the third parameter is unused.

      mboxes = <&mhu DBE_EXT 0 5>; // DBE, Doorbell Channel Window 0, doorbell 5.
      mboxes = <&mhu DBE_EXT 1 7>; // DBE, Doorbell Channel Window 1, doorbell 7.
      mboxes = <&mhu FCE_EXT 0 0>; // FCE, FastChannel Window 0.
      mboxes = <&mhu FCE_EXT 3 0>; // FCE, FastChannel Window 3.
      mboxes = <&mhu FE_EXT 1 0>;  // FE, FIFO Channel Window 1.
      mboxes = <&mhu FE_EXT 7 0>;  // FE, FIFO Channel Window 7.
    const: 3

  clocks:
    maxItems: 1

required:
  - compatible
  - reg
  - interrupts
  - interrupt-names
  - '#mbox-cells'

additionalProperties: false

examples:
  - |
    #include <dt-bindings/interrupt-controller/arm-gic.h>

    soc {
        #address-cells = <2>;
        #size-cells = <2>;

        mailbox@2aaa0000 {
            compatible = "arm,mhuv3";
            #mbox-cells = <3>;
            reg = <0 0x2aaa0000 0 0x10000>;
            clocks = <&clock 0>;
            interrupt-names = "combined", "pbx-dbch-xfer-ack-1",
                              "ffch-high-tide-0";
            interrupts = <GIC_SPI 36 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 37 IRQ_TYPE_LEVEL_HIGH>;
        };

        mailbox@2ab00000 {
            compatible = "arm,mhuv3";
            #mbox-cells = <3>;
            reg = <0 0x2ab00000 0 0x10000>;
            clocks = <&clock 0>;
            interrupt-names = "combined", "mbx-dbch-xfer-1", "ffch-low-tide-0";
            interrupts = <GIC_SPI 35 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 38 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 39 IRQ_TYPE_LEVEL_HIGH>;
        };
    };
+1
Documentation/devicetree/bindings/mailbox/qcom,apcs-kpss-global.yaml
···
           - const: syscon
       - items:
           - enum:
+              - qcom,msm8974-apcs-kpss-global
               - qcom,msm8976-apcs-kpss-global
           - const: qcom,msm8994-apcs-kpss-global
           - const: syscon
+1
Documentation/devicetree/bindings/mailbox/qcom-ipcc.yaml
···
           - qcom,sa8775p-ipcc
           - qcom,sc7280-ipcc
           - qcom,sc8280xp-ipcc
+          - qcom,sdx75-ipcc
           - qcom,sm6350-ipcc
           - qcom,sm6375-ipcc
           - qcom,sm8250-ipcc
+9
MAINTAINERS
···
 F:	drivers/mailbox/arm_mhuv2.c
 F:	include/linux/mailbox/arm_mhuv2_message.h

+MAILBOX ARM MHUv3
+M:	Sudeep Holla <sudeep.holla@arm.com>
+M:	Cristian Marussi <cristian.marussi@arm.com>
+L:	linux-kernel@vger.kernel.org
+L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+S:	Maintained
+F:	Documentation/devicetree/bindings/mailbox/arm,mhuv3.yaml
+F:	drivers/mailbox/arm_mhuv3.c
+
 MAN-PAGES: MANUAL PAGES FOR LINUX -- Sections 2, 3, 4, 5, and 7
 M:	Alejandro Colomar <alx@kernel.org>
 L:	linux-man@vger.kernel.org
+12 -9
drivers/mailbox/Kconfig
···
 	  Say Y here if you want to build the ARM MHUv2 controller driver,
 	  which provides unidirectional mailboxes between processing elements.

+config ARM_MHU_V3
+	tristate "ARM MHUv3 Mailbox"
+	depends on HAS_IOMEM || COMPILE_TEST
+	depends on OF
+	help
+	  Say Y here if you want to build the ARM MHUv3 controller driver,
+	  which provides unidirectional mailboxes between processing elements.
+
+	  ARM MHUv3 controllers can implement a varying number of extensions
+	  that provide different means of transport: supported extensions
+	  will be discovered and possibly managed at probe-time.
+
 config IMX_MBOX
 	tristate "i.MX Mailbox"
 	depends on ARCH_MXC || COMPILE_TEST
···
 	  interprocessor communication involving DSP, IVA1.0 and IVA2 in
 	  OMAP2/3; or IPU, IVA HD and DSP in OMAP4/5. Say Y here if you
 	  want to use OMAP2+ Mailbox framework support.
-
-config OMAP_MBOX_KFIFO_SIZE
-	int "Mailbox kfifo default buffer size (bytes)"
-	depends on OMAP2PLUS_MBOX
-	default 256
-	help
-	  Specify the default size of mailbox's kfifo buffers (bytes).
-	  This can also be changed at runtime (via the mbox_kfifo_size
-	  module parameter).

 config ROCKCHIP_MBOX
 	bool "Rockchip Soc Integrated Mailbox Support"
+2
drivers/mailbox/Makefile
···
 obj-$(CONFIG_ARM_MHU_V2)	+= arm_mhuv2.o

+obj-$(CONFIG_ARM_MHU_V3)	+= arm_mhuv3.o
+
 obj-$(CONFIG_IMX_MBOX)	+= imx-mailbox.o

 obj-$(CONFIG_ARMADA_37XX_RWTM_MBOX)	+= armada-37xx-rwtm-mailbox.o
+1103
drivers/mailbox/arm_mhuv3.c
// SPDX-License-Identifier: GPL-2.0
/*
 * ARM Message Handling Unit Version 3 (MHUv3) driver.
 *
 * Copyright (C) 2024 ARM Ltd.
 *
 * Based on ARM MHUv2 driver.
 */

#include <linux/bitfield.h>
#include <linux/bitops.h>
#include <linux/bits.h>
#include <linux/cleanup.h>
#include <linux/device.h>
#include <linux/interrupt.h>
#include <linux/mailbox_controller.h>
#include <linux/module.h>
#include <linux/of_address.h>
#include <linux/platform_device.h>
#include <linux/spinlock.h>
#include <linux/sizes.h>
#include <linux/slab.h>
#include <linux/types.h>

/* ====== MHUv3 Registers ====== */

/* Maximum number of Doorbell channel windows */
#define MHUV3_DBCW_MAX			128
/* Number of DBCH combined interrupt status registers */
#define MHUV3_DBCH_CMB_INT_ST_REG_CNT	4

/* Number of FFCH combined interrupt status registers */
#define MHUV3_FFCH_CMB_INT_ST_REG_CNT	2

#define MHUV3_FLAG_BITS			32

/* Not a typo ... */
#define MHUV3_MAJOR_VERSION		2

enum {
	MHUV3_MBOX_CELL_TYPE,
	MHUV3_MBOX_CELL_CHWN,
	MHUV3_MBOX_CELL_PARAM,
	MHUV3_MBOX_CELLS
};

/* Padding bitfields/fields represent holes in the regs MMIO */

/* CTRL_Page */
struct blk_id {
#define id	GENMASK(3, 0)
	u32 val;
} __packed;

struct feat_spt0 {
#define	dbe_spt		GENMASK(3, 0)
#define	fe_spt		GENMASK(7, 4)
#define	fce_spt		GENMASK(11, 8)
	u32 val;
} __packed;

struct feat_spt1 {
#define	auto_op_spt	GENMASK(3, 0)
	u32 val;
} __packed;

struct dbch_cfg0 {
#define num_dbch	GENMASK(7, 0)
	u32 val;
} __packed;

struct ffch_cfg0 {
#define num_ffch	GENMASK(7, 0)
#define x8ba_spt	BIT(8)
#define x16ba_spt	BIT(9)
#define x32ba_spt	BIT(10)
#define x64ba_spt	BIT(11)
#define ffch_depth	GENMASK(25, 16)
	u32 val;
} __packed;

struct fch_cfg0 {
#define num_fch		GENMASK(9, 0)
#define fcgi_spt	BIT(10)		// MBX-only
#define num_fcg		GENMASK(15, 11)
#define num_fch_per_grp	GENMASK(20, 16)
#define fch_ws		GENMASK(28, 21)
	u32 val;
} __packed;

struct ctrl {
#define op_req		BIT(0)
#define	ch_op_mask	BIT(1)
	u32 val;
} __packed;

struct fch_ctrl {
#define _int_en		BIT(2)
	u32 val;
} __packed;

struct iidr {
#define implementer	GENMASK(11, 0)
#define revision	GENMASK(15, 12)
#define variant		GENMASK(19, 16)
#define product_id	GENMASK(31, 20)
	u32 val;
} __packed;

struct aidr {
#define arch_minor_rev	GENMASK(3, 0)
#define arch_major_rev	GENMASK(7, 4)
	u32 val;
} __packed;

struct ctrl_page {
	struct blk_id blk_id;
	u8 pad[12];
	struct feat_spt0 feat_spt0;
	struct feat_spt1 feat_spt1;
	u8 pad1[8];
	struct dbch_cfg0 dbch_cfg0;
	u8 pad2[12];
	struct ffch_cfg0 ffch_cfg0;
	u8 pad3[12];
	struct fch_cfg0 fch_cfg0;
	u8 pad4[188];
	struct ctrl x_ctrl;
	/*-- MBX-only registers --*/
	u8 pad5[60];
	struct fch_ctrl fch_ctrl;
	u32 fcg_int_en;
	u8 pad6[696];
	/*-- End of MBX-only ---- */
	u32 dbch_int_st[MHUV3_DBCH_CMB_INT_ST_REG_CNT];
	u32 ffch_int_st[MHUV3_FFCH_CMB_INT_ST_REG_CNT];
	/*-- MBX-only registers --*/
	u8 pad7[88];
	u32 fcg_int_st;
	u8 pad8[12];
	u32 fcg_grp_int_st[32];
	u8 pad9[2760];
	/*-- End of MBX-only ---- */
	struct iidr iidr;
	struct aidr aidr;
	u32 imp_def_id[12];
} __packed;

/* DBCW_Page */

struct xbcw_ctrl {
#define comb_en		BIT(0)
	u32 val;
} __packed;

struct pdbcw_int {
#define tfr_ack		BIT(0)
	u32 val;
} __packed;

struct pdbcw_page {
	u32 st;
	u8 pad[8];
	u32 set;
	struct pdbcw_int int_st;
	struct pdbcw_int int_clr;
	struct pdbcw_int int_en;
	struct xbcw_ctrl ctrl;
} __packed;

struct mdbcw_page {
	u32 st;
	u32 st_msk;
	u32 clr;
	u8 pad[4];
	u32 msk_st;
	u32 msk_set;
	u32 msk_clr;
	struct xbcw_ctrl ctrl;
} __packed;

struct dummy_page {
	u8 pad[SZ_4K];
} __packed;

struct mhu3_pbx_frame_reg {
	struct ctrl_page ctrl;
	struct pdbcw_page dbcw[MHUV3_DBCW_MAX];
	struct dummy_page ffcw;
	struct dummy_page fcw;
	u8 pad[SZ_4K * 11];
	struct dummy_page impdef;
} __packed;

struct mhu3_mbx_frame_reg {
	struct ctrl_page ctrl;
	struct mdbcw_page dbcw[MHUV3_DBCW_MAX];
	struct dummy_page ffcw;
	struct dummy_page fcw;
	u8 pad[SZ_4K * 11];
	struct dummy_page impdef;
} __packed;

/* Macro for reading a bitmask within a physically mapped packed struct */
#define readl_relaxed_bitmask(_regptr, _bitmask)		\
	({							\
		unsigned long _rval;				\
		_rval = readl_relaxed(_regptr);			\
		FIELD_GET(_bitmask, _rval);			\
	})

/* Macro for writing a bitmask within a physically mapped packed struct */
#define writel_relaxed_bitmask(_value, _regptr, _bitmask)	\
	({							\
		unsigned long _rval;				\
		typeof(_regptr) _rptr = _regptr;		\
		typeof(_bitmask) _bmask = _bitmask;		\
		_rval = readl_relaxed(_rptr);			\
		_rval &= ~(_bmask);				\
		_rval |= FIELD_PREP((unsigned long long)_bmask, _value);\
		writel_relaxed(_rval, _rptr);			\
	})

/* ====== MHUv3 data structures ====== */

enum mhuv3_frame {
	PBX_FRAME,
	MBX_FRAME,
};

static char *mhuv3_str[] = {
	"PBX",
	"MBX"
};

enum mhuv3_extension_type {
	DBE_EXT,
	FCE_EXT,
	FE_EXT,
	NUM_EXT
};

static char *mhuv3_ext_str[] = {
	"DBE",
	"FCE",
	"FE"
};

struct mhuv3;

/**
 * struct mhuv3_protocol_ops - MHUv3 operations
 *
 * @rx_startup: Receiver startup callback.
 * @rx_shutdown: Receiver shutdown callback.
 * @read_data: Read available Sender in-band LE data (if any).
 * @rx_complete: Acknowledge data reception to the Sender. Any out-of-band data
 *		 has to have been already retrieved before calling this.
 * @tx_startup: Sender startup callback.
 * @tx_shutdown: Sender shutdown callback.
 * @last_tx_done: Report back to the Sender if the last transfer has completed.
 * @send_data: Send data to the receiver.
 *
 * Each supported transport protocol provides its own implementation of
 * these operations.
 */
struct mhuv3_protocol_ops {
	int (*rx_startup)(struct mhuv3 *mhu, struct mbox_chan *chan);
	void (*rx_shutdown)(struct mhuv3 *mhu, struct mbox_chan *chan);
	void *(*read_data)(struct mhuv3 *mhu, struct mbox_chan *chan);
	void (*rx_complete)(struct mhuv3 *mhu, struct mbox_chan *chan);
	void (*tx_startup)(struct mhuv3 *mhu, struct mbox_chan *chan);
	void (*tx_shutdown)(struct mhuv3 *mhu, struct mbox_chan *chan);
	int (*last_tx_done)(struct mhuv3 *mhu, struct mbox_chan *chan);
	int (*send_data)(struct mhuv3 *mhu, struct mbox_chan *chan, void *arg);
};

/**
 * struct mhuv3_mbox_chan_priv - MHUv3 channel private information
 *
 * @ch_idx: Channel window index associated to this mailbox channel.
 * @doorbell: Doorbell bit number within the @ch_idx window.
 *	      Only relevant to Doorbell transport.
 * @ops: Transport protocol specific operations for this channel.
 *
 * Transport specific data attached to mailbox channel priv data.
 */
struct mhuv3_mbox_chan_priv {
	u32 ch_idx;
	u32 doorbell;
	const struct mhuv3_protocol_ops *ops;
};

/**
 * struct mhuv3_extension - MHUv3 extension descriptor
 *
 * @type: Type of extension.
 * @num_chans: Max number of channels found for this extension.
 * @base_ch_idx: First channel number assigned to this extension, picked from
 *		 the set of all mailbox channels descriptors created.
 * @mbox_of_xlate: Extension specific helper to parse DT and lookup associated
 *		   channel from the related 'mboxes' property.
 * @combined_irq_setup: Extension specific helper to setup the combined irq.
 * @channels_init: Extension specific helper to initialize channels.
 * @chan_from_comb_irq_get: Extension specific helper to lookup which channel
 *			    triggered the combined irq.
 * @pending_db: Array of per-channel pending doorbells.
 * @pending_lock: Protect access to pending_db.
 */
struct mhuv3_extension {
	enum mhuv3_extension_type type;
	unsigned int num_chans;
	unsigned int base_ch_idx;
	struct mbox_chan *(*mbox_of_xlate)(struct mhuv3 *mhu,
					   unsigned int channel,
					   unsigned int param);
	void (*combined_irq_setup)(struct mhuv3 *mhu);
	int (*channels_init)(struct mhuv3 *mhu);
	struct mbox_chan *(*chan_from_comb_irq_get)(struct mhuv3 *mhu);
	u32 pending_db[MHUV3_DBCW_MAX];
	/* Protect access to pending_db */
	spinlock_t pending_lock;
};

/**
 * struct mhuv3 - MHUv3 mailbox controller data
 *
 * @frame: Frame type: MBX_FRAME or PBX_FRAME.
 * @auto_op_full: Flag to indicate if the MHU supports AutoOp full mode.
 * @major: MHUv3 controller architectural major version.
 * @minor: MHUv3 controller architectural minor version.
 * @implem: MHUv3 controller IIDR implementer.
 * @rev: MHUv3 controller IIDR revision.
 * @var: MHUv3 controller IIDR variant.
 * @prod_id: MHUv3 controller IIDR product_id.
 * @num_chans: The total number of channels discovered across all extensions.
 * @cmb_irq: Combined IRQ number if any found defined.
 * @ctrl: A reference to the MHUv3 control page for this block.
 * @pbx: Base address of the PBX register mapping region.
 * @mbx: Base address of the MBX register mapping region.
 * @ext: Array holding descriptors for any found implemented extension.
 * @mbox: Mailbox controller belonging to the MHU frame.
 */
struct mhuv3 {
	enum mhuv3_frame frame;
	bool auto_op_full;
	unsigned int major;
	unsigned int minor;
	unsigned int implem;
	unsigned int rev;
	unsigned int var;
	unsigned int prod_id;
	unsigned int num_chans;
	int cmb_irq;
	struct ctrl_page __iomem *ctrl;
	union {
		struct mhu3_pbx_frame_reg __iomem *pbx;
		struct mhu3_mbx_frame_reg __iomem *mbx;
	};
	struct mhuv3_extension *ext[NUM_EXT];
	struct mbox_controller mbox;
};

#define mhu_from_mbox(_mbox) container_of(_mbox, struct mhuv3, mbox)

typedef int (*mhuv3_extension_initializer)(struct mhuv3 *mhu);

/* =================== Doorbell transport protocol operations =============== */

static void mhuv3_doorbell_tx_startup(struct mhuv3 *mhu, struct mbox_chan *chan)
{
	struct mhuv3_mbox_chan_priv *priv = chan->con_priv;

	/* Enable Transfer Acknowledgment events */
	writel_relaxed_bitmask(0x1, &mhu->pbx->dbcw[priv->ch_idx].int_en, tfr_ack);
}

static void mhuv3_doorbell_tx_shutdown(struct mhuv3 *mhu, struct mbox_chan *chan)
{
	struct mhuv3_mbox_chan_priv *priv = chan->con_priv;
	struct mhuv3_extension *e = mhu->ext[DBE_EXT];
	unsigned long flags;

	/* Disable Channel Transfer Ack events */
	writel_relaxed_bitmask(0x0, &mhu->pbx->dbcw[priv->ch_idx].int_en, tfr_ack);

	/* Clear Channel Transfer Ack and pending doorbells */
	writel_relaxed_bitmask(0x1, &mhu->pbx->dbcw[priv->ch_idx].int_clr, tfr_ack);
	spin_lock_irqsave(&e->pending_lock, flags);
	e->pending_db[priv->ch_idx] = 0;
	spin_unlock_irqrestore(&e->pending_lock, flags);
}

static int mhuv3_doorbell_rx_startup(struct mhuv3 *mhu, struct mbox_chan *chan)
{
	struct mhuv3_mbox_chan_priv *priv = chan->con_priv;

	/* Unmask Channel Transfer events */
	writel_relaxed(BIT(priv->doorbell), &mhu->mbx->dbcw[priv->ch_idx].msk_clr);

	return 0;
}

static void mhuv3_doorbell_rx_shutdown(struct mhuv3 *mhu,
				       struct mbox_chan *chan)
{
	struct mhuv3_mbox_chan_priv *priv = chan->con_priv;

	/* Mask Channel Transfer events */
	writel_relaxed(BIT(priv->doorbell), &mhu->mbx->dbcw[priv->ch_idx].msk_set);
}

static void mhuv3_doorbell_rx_complete(struct mhuv3 *mhu, struct mbox_chan *chan)
{
	struct mhuv3_mbox_chan_priv *priv = chan->con_priv;

	/* Clearing the pending transfer generates the Channel Transfer Ack */
	writel_relaxed(BIT(priv->doorbell), &mhu->mbx->dbcw[priv->ch_idx].clr);
}

static int mhuv3_doorbell_last_tx_done(struct mhuv3 *mhu,
				       struct mbox_chan *chan)
{
	struct mhuv3_mbox_chan_priv *priv = chan->con_priv;
	int done;

	done = !(readl_relaxed(&mhu->pbx->dbcw[priv->ch_idx].st) &
		 BIT(priv->doorbell));
	if (done) {
		struct mhuv3_extension *e = mhu->ext[DBE_EXT];
		unsigned long flags;

		/* Take care to clear the pending doorbell also when polling */
		spin_lock_irqsave(&e->pending_lock, flags);
		e->pending_db[priv->ch_idx] &= ~BIT(priv->doorbell);
		spin_unlock_irqrestore(&e->pending_lock, flags);
	}

	return done;
}

static int mhuv3_doorbell_send_data(struct mhuv3 *mhu, struct mbox_chan *chan,
				    void *arg)
{
	struct mhuv3_mbox_chan_priv *priv = chan->con_priv;
	struct mhuv3_extension *e = mhu->ext[DBE_EXT];

	scoped_guard(spinlock_irqsave, &e->pending_lock) {
		/* Only one in-flight Transfer is allowed per-doorbell */
		if (e->pending_db[priv->ch_idx] & BIT(priv->doorbell))
			return -EBUSY;

		e->pending_db[priv->ch_idx] |= BIT(priv->doorbell);
	}

	writel_relaxed(BIT(priv->doorbell), &mhu->pbx->dbcw[priv->ch_idx].set);

	return 0;
}

static const struct mhuv3_protocol_ops mhuv3_doorbell_ops = {
	.tx_startup = mhuv3_doorbell_tx_startup,
	.tx_shutdown = mhuv3_doorbell_tx_shutdown,
	.rx_startup = mhuv3_doorbell_rx_startup,
	.rx_shutdown = mhuv3_doorbell_rx_shutdown,
	.rx_complete = mhuv3_doorbell_rx_complete,
	.last_tx_done = mhuv3_doorbell_last_tx_done,
	.send_data = mhuv3_doorbell_send_data,
};

/* Sender and receiver mailbox ops */
static bool mhuv3_sender_last_tx_done(struct mbox_chan *chan)
{
	struct mhuv3_mbox_chan_priv *priv = chan->con_priv;
	struct mhuv3 *mhu = mhu_from_mbox(chan->mbox);

	return priv->ops->last_tx_done(mhu, chan);
}

static int mhuv3_sender_send_data(struct mbox_chan *chan, void *data)
{
	struct mhuv3_mbox_chan_priv *priv = chan->con_priv;
	struct mhuv3 *mhu = mhu_from_mbox(chan->mbox);

	if (!priv->ops->last_tx_done(mhu, chan))
		return -EBUSY;

	return priv->ops->send_data(mhu, chan, data);
}

static int mhuv3_sender_startup(struct mbox_chan *chan)
{
	struct mhuv3_mbox_chan_priv *priv = chan->con_priv;
	struct mhuv3 *mhu = mhu_from_mbox(chan->mbox);

	if (priv->ops->tx_startup)
		priv->ops->tx_startup(mhu, chan);

	return 0;
}

static void mhuv3_sender_shutdown(struct mbox_chan *chan)
{
	struct mhuv3_mbox_chan_priv *priv = chan->con_priv;
	struct mhuv3 *mhu = mhu_from_mbox(chan->mbox);

	if (priv->ops->tx_shutdown)
		priv->ops->tx_shutdown(mhu, chan);
}

static const struct mbox_chan_ops mhuv3_sender_ops = {
	.send_data = mhuv3_sender_send_data,
	.startup = mhuv3_sender_startup,
	.shutdown = mhuv3_sender_shutdown,
	.last_tx_done = mhuv3_sender_last_tx_done,
};

static int mhuv3_receiver_startup(struct mbox_chan *chan)
{
	struct mhuv3_mbox_chan_priv *priv = chan->con_priv;
	struct mhuv3 *mhu = mhu_from_mbox(chan->mbox);

	return priv->ops->rx_startup(mhu, chan);
}

static void mhuv3_receiver_shutdown(struct mbox_chan *chan)
{
	struct mhuv3_mbox_chan_priv *priv = chan->con_priv;
	struct mhuv3 *mhu = mhu_from_mbox(chan->mbox);

	priv->ops->rx_shutdown(mhu, chan);
}

static int mhuv3_receiver_send_data(struct mbox_chan *chan, void *data)
{
	dev_err(chan->mbox->dev,
		"Trying to transmit on a MBX MHUv3 frame\n");
	return -EIO;
}

static bool mhuv3_receiver_last_tx_done(struct mbox_chan *chan)
{
	dev_err(chan->mbox->dev, "Trying to Tx poll on a MBX MHUv3 frame\n");
	return true;
}

static const struct mbox_chan_ops mhuv3_receiver_ops = {
	.send_data = mhuv3_receiver_send_data,
	.startup = mhuv3_receiver_startup,
	.shutdown = mhuv3_receiver_shutdown,
	.last_tx_done = mhuv3_receiver_last_tx_done,
};

static struct mbox_chan *mhuv3_dbe_mbox_of_xlate(struct mhuv3 *mhu,
						 unsigned int channel,
						 unsigned int doorbell)
{
	struct mhuv3_extension *e = mhu->ext[DBE_EXT];
	struct mbox_controller *mbox = &mhu->mbox;
	struct mbox_chan *chans = mbox->chans;

	if (channel >= e->num_chans || doorbell >= MHUV3_FLAG_BITS) {
		dev_err(mbox->dev, "Couldn't xlate to a valid channel (%d: %d)\n",
			channel, doorbell);
		return ERR_PTR(-ENODEV);
	}

	return &chans[e->base_ch_idx + channel * MHUV3_FLAG_BITS + doorbell];
}

static void mhuv3_dbe_combined_irq_setup(struct mhuv3 *mhu)
{
	struct mhuv3_extension *e = mhu->ext[DBE_EXT];
	int i;

	if (mhu->frame == PBX_FRAME) {
		struct pdbcw_page __iomem *dbcw = mhu->pbx->dbcw;

		for (i = 0; i < e->num_chans; i++) {
			writel_relaxed_bitmask(0x1, &dbcw[i].int_clr, tfr_ack);
			writel_relaxed_bitmask(0x0, &dbcw[i].int_en, tfr_ack);
			writel_relaxed_bitmask(0x1, &dbcw[i].ctrl, comb_en);
		}
	} else {
		struct mdbcw_page __iomem *dbcw = mhu->mbx->dbcw;

		for (i = 0; i < e->num_chans; i++) {
			writel_relaxed(0xFFFFFFFF, &dbcw[i].clr);
			writel_relaxed(0xFFFFFFFF, &dbcw[i].msk_set);
			writel_relaxed_bitmask(0x1, &dbcw[i].ctrl, comb_en);
		}
	}
}

static int mhuv3_dbe_channels_init(struct mhuv3 *mhu)
{
	struct mhuv3_extension *e = mhu->ext[DBE_EXT];
	struct mbox_controller *mbox = &mhu->mbox;
	struct mbox_chan *chans;
	int i;

	chans = mbox->chans + mbox->num_chans;
	e->base_ch_idx = mbox->num_chans;
	for (i = 0; i < e->num_chans; i++) {
		struct mhuv3_mbox_chan_priv *priv;
		int k;

		for (k = 0; k < MHUV3_FLAG_BITS; k++) {
			priv = devm_kmalloc(mbox->dev, sizeof(*priv), GFP_KERNEL);
			if (!priv)
				return -ENOMEM;

			priv->ch_idx = i;
			priv->ops = &mhuv3_doorbell_ops;
			priv->doorbell = k;
			chans++->con_priv = priv;
			mbox->num_chans++;
		}
	}

	spin_lock_init(&e->pending_lock);

	return 0;
}

static bool mhuv3_dbe_doorbell_lookup(struct mhuv3 *mhu, unsigned int channel,
				      unsigned int *db)
{
	struct mhuv3_extension *e = mhu->ext[DBE_EXT];
	struct device *dev = mhu->mbox.dev;
	u32 st;

	if (mhu->frame == PBX_FRAME) {
		u32 active_dbs, fired_dbs;

		st = readl_relaxed_bitmask(&mhu->pbx->dbcw[channel].int_st,
					   tfr_ack);
		if (!st)
			goto err_spurious;

		active_dbs = readl_relaxed(&mhu->pbx->dbcw[channel].st);
		scoped_guard(spinlock_irqsave, &e->pending_lock) {
			fired_dbs = e->pending_db[channel] & ~active_dbs;
			if (!fired_dbs)
				goto err_spurious;

			*db = __ffs(fired_dbs);
			e->pending_db[channel] &= ~BIT(*db);
} 650 + fired_dbs &= ~BIT(*db); 651 + /* Clear TFR Ack if no more doorbells pending */ 652 + if (!fired_dbs) 653 + writel_relaxed_bitmask(0x1, 654 + &mhu->pbx->dbcw[channel].int_clr, 655 + tfr_ack); 656 + } else { 657 + st = readl_relaxed(&mhu->mbx->dbcw[channel].st_msk); 658 + if (!st) 659 + goto err_spurious; 660 + 661 + *db = __ffs(st); 662 + } 663 + 664 + return true; 665 + 666 + err_spurious: 667 + dev_warn(dev, "Spurious IRQ on %s channel:%d\n", 668 + mhuv3_str[mhu->frame], channel); 669 + 670 + return false; 671 + } 672 + 673 + static struct mbox_chan *mhuv3_dbe_chan_from_comb_irq_get(struct mhuv3 *mhu) 674 + { 675 + struct mhuv3_extension *e = mhu->ext[DBE_EXT]; 676 + struct device *dev = mhu->mbox.dev; 677 + int i; 678 + 679 + for (i = 0; i < MHUV3_DBCH_CMB_INT_ST_REG_CNT; i++) { 680 + unsigned int channel, db; 681 + u32 cmb_st; 682 + 683 + cmb_st = readl_relaxed(&mhu->ctrl->dbch_int_st[i]); 684 + if (!cmb_st) 685 + continue; 686 + 687 + channel = i * MHUV3_FLAG_BITS + __ffs(cmb_st); 688 + if (channel >= e->num_chans) { 689 + dev_err(dev, "Invalid %s channel:%d\n", 690 + mhuv3_str[mhu->frame], channel); 691 + return ERR_PTR(-EIO); 692 + } 693 + 694 + if (!mhuv3_dbe_doorbell_lookup(mhu, channel, &db)) 695 + continue; 696 + 697 + dev_dbg(dev, "Found %s ch[%d]/db[%d]\n", 698 + mhuv3_str[mhu->frame], channel, db); 699 + 700 + return &mhu->mbox.chans[channel * MHUV3_FLAG_BITS + db]; 701 + } 702 + 703 + return ERR_PTR(-EIO); 704 + } 705 + 706 + static int mhuv3_dbe_init(struct mhuv3 *mhu) 707 + { 708 + struct device *dev = mhu->mbox.dev; 709 + struct mhuv3_extension *e; 710 + 711 + if (!readl_relaxed_bitmask(&mhu->ctrl->feat_spt0, dbe_spt)) 712 + return 0; 713 + 714 + dev_dbg(dev, "%s: Initializing DBE Extension.\n", mhuv3_str[mhu->frame]); 715 + 716 + e = devm_kzalloc(dev, sizeof(*e), GFP_KERNEL); 717 + if (!e) 718 + return -ENOMEM; 719 + 720 + e->type = DBE_EXT; 721 + /* Note that, by the spec, the number of channels is (num_dbch + 1) */ 722 + e->num_chans = 
723 + readl_relaxed_bitmask(&mhu->ctrl->dbch_cfg0, num_dbch) + 1; 724 + e->mbox_of_xlate = mhuv3_dbe_mbox_of_xlate; 725 + e->combined_irq_setup = mhuv3_dbe_combined_irq_setup; 726 + e->channels_init = mhuv3_dbe_channels_init; 727 + e->chan_from_comb_irq_get = mhuv3_dbe_chan_from_comb_irq_get; 728 + 729 + mhu->num_chans += e->num_chans * MHUV3_FLAG_BITS; 730 + mhu->ext[DBE_EXT] = e; 731 + 732 + dev_dbg(dev, "%s: found %d DBE channels.\n", 733 + mhuv3_str[mhu->frame], e->num_chans); 734 + 735 + return 0; 736 + } 737 + 738 + static int mhuv3_fce_init(struct mhuv3 *mhu) 739 + { 740 + struct device *dev = mhu->mbox.dev; 741 + 742 + if (!readl_relaxed_bitmask(&mhu->ctrl->feat_spt0, fce_spt)) 743 + return 0; 744 + 745 + dev_dbg(dev, "%s: FCE Extension not supported by driver.\n", 746 + mhuv3_str[mhu->frame]); 747 + 748 + return 0; 749 + } 750 + 751 + static int mhuv3_fe_init(struct mhuv3 *mhu) 752 + { 753 + struct device *dev = mhu->mbox.dev; 754 + 755 + if (!readl_relaxed_bitmask(&mhu->ctrl->feat_spt0, fe_spt)) 756 + return 0; 757 + 758 + dev_dbg(dev, "%s: FE Extension not supported by driver.\n", 759 + mhuv3_str[mhu->frame]); 760 + 761 + return 0; 762 + } 763 + 764 + static mhuv3_extension_initializer mhuv3_extension_init[NUM_EXT] = { 765 + mhuv3_dbe_init, 766 + mhuv3_fce_init, 767 + mhuv3_fe_init, 768 + }; 769 + 770 + static int mhuv3_initialize_channels(struct device *dev, struct mhuv3 *mhu) 771 + { 772 + struct mbox_controller *mbox = &mhu->mbox; 773 + int i, ret = 0; 774 + 775 + mbox->chans = devm_kcalloc(dev, mhu->num_chans, 776 + sizeof(*mbox->chans), GFP_KERNEL); 777 + if (!mbox->chans) 778 + return dev_err_probe(dev, -ENOMEM, 779 + "Failed to initialize channels\n"); 780 + 781 + for (i = 0; i < NUM_EXT && !ret; i++) 782 + if (mhu->ext[i]) 783 + ret = mhu->ext[i]->channels_init(mhu); 784 + 785 + return ret; 786 + } 787 + 788 + static struct mbox_chan *mhuv3_mbox_of_xlate(struct mbox_controller *mbox, 789 + const struct of_phandle_args *pa) 790 + { 791 + struct 
mhuv3 *mhu = mhu_from_mbox(mbox); 792 + unsigned int type, channel, param; 793 + 794 + if (pa->args_count != MHUV3_MBOX_CELLS) 795 + return ERR_PTR(-EINVAL); 796 + 797 + type = pa->args[MHUV3_MBOX_CELL_TYPE]; 798 + if (type >= NUM_EXT) 799 + return ERR_PTR(-EINVAL); 800 + 801 + channel = pa->args[MHUV3_MBOX_CELL_CHWN]; 802 + param = pa->args[MHUV3_MBOX_CELL_PARAM]; 803 + 804 + return mhu->ext[type]->mbox_of_xlate(mhu, channel, param); 805 + } 806 + 807 + static void mhu_frame_cleanup_actions(void *data) 808 + { 809 + struct mhuv3 *mhu = data; 810 + 811 + writel_relaxed_bitmask(0x0, &mhu->ctrl->x_ctrl, op_req); 812 + } 813 + 814 + static int mhuv3_frame_init(struct mhuv3 *mhu, void __iomem *regs) 815 + { 816 + struct device *dev = mhu->mbox.dev; 817 + int i; 818 + 819 + mhu->ctrl = regs; 820 + mhu->frame = readl_relaxed_bitmask(&mhu->ctrl->blk_id, id); 821 + if (mhu->frame > MBX_FRAME) 822 + return dev_err_probe(dev, -EINVAL, 823 + "Invalid Frame type- %d\n", mhu->frame); 824 + 825 + mhu->major = readl_relaxed_bitmask(&mhu->ctrl->aidr, arch_major_rev); 826 + mhu->minor = readl_relaxed_bitmask(&mhu->ctrl->aidr, arch_minor_rev); 827 + mhu->implem = readl_relaxed_bitmask(&mhu->ctrl->iidr, implementer); 828 + mhu->rev = readl_relaxed_bitmask(&mhu->ctrl->iidr, revision); 829 + mhu->var = readl_relaxed_bitmask(&mhu->ctrl->iidr, variant); 830 + mhu->prod_id = readl_relaxed_bitmask(&mhu->ctrl->iidr, product_id); 831 + if (mhu->major != MHUV3_MAJOR_VERSION) 832 + return dev_err_probe(dev, -EINVAL, 833 + "Unsupported MHU %s block - major:%d minor:%d\n", 834 + mhuv3_str[mhu->frame], mhu->major, 835 + mhu->minor); 836 + 837 + mhu->auto_op_full = 838 + !!readl_relaxed_bitmask(&mhu->ctrl->feat_spt1, auto_op_spt); 839 + /* Request the PBX/MBX to remain operational */ 840 + if (mhu->auto_op_full) { 841 + writel_relaxed_bitmask(0x1, &mhu->ctrl->x_ctrl, op_req); 842 + devm_add_action_or_reset(dev, mhu_frame_cleanup_actions, mhu); 843 + } 844 + 845 + dev_dbg(dev, 846 + "Found MHU %s 
block - major:%d minor:%d\n implem:0x%X rev:0x%X var:0x%X prod_id:0x%X", 847 + mhuv3_str[mhu->frame], mhu->major, mhu->minor, 848 + mhu->implem, mhu->rev, mhu->var, mhu->prod_id); 849 + 850 + if (mhu->frame == PBX_FRAME) 851 + mhu->pbx = regs; 852 + else 853 + mhu->mbx = regs; 854 + 855 + for (i = 0; i < NUM_EXT; i++) { 856 + int ret; 857 + 858 + /* 859 + * Note that extensions initialization fails only when such 860 + * extension initialization routine fails and the extensions 861 + * was found to be supported in hardware and in software. 862 + */ 863 + ret = mhuv3_extension_init[i](mhu); 864 + if (ret) 865 + return dev_err_probe(dev, ret, 866 + "Failed to initialize %s %s\n", 867 + mhuv3_str[mhu->frame], 868 + mhuv3_ext_str[i]); 869 + } 870 + 871 + return 0; 872 + } 873 + 874 + static irqreturn_t mhuv3_pbx_comb_interrupt(int irq, void *arg) 875 + { 876 + unsigned int i, found = 0; 877 + struct mhuv3 *mhu = arg; 878 + struct mbox_chan *chan; 879 + struct device *dev; 880 + int ret = IRQ_NONE; 881 + 882 + dev = mhu->mbox.dev; 883 + for (i = 0; i < NUM_EXT; i++) { 884 + struct mhuv3_mbox_chan_priv *priv; 885 + 886 + /* FCE does not participate to the PBX combined */ 887 + if (i == FCE_EXT || !mhu->ext[i]) 888 + continue; 889 + 890 + chan = mhu->ext[i]->chan_from_comb_irq_get(mhu); 891 + if (IS_ERR(chan)) 892 + continue; 893 + 894 + found++; 895 + priv = chan->con_priv; 896 + if (!chan->cl) { 897 + dev_warn(dev, "TX Ack on UNBOUND channel (%u)\n", 898 + priv->ch_idx); 899 + continue; 900 + } 901 + 902 + mbox_chan_txdone(chan, 0); 903 + ret = IRQ_HANDLED; 904 + } 905 + 906 + if (found == 0) 907 + dev_warn_once(dev, "Failed to find channel for the TX interrupt\n"); 908 + 909 + return ret; 910 + } 911 + 912 + static irqreturn_t mhuv3_mbx_comb_interrupt(int irq, void *arg) 913 + { 914 + unsigned int i, found = 0; 915 + struct mhuv3 *mhu = arg; 916 + struct mbox_chan *chan; 917 + struct device *dev; 918 + int ret = IRQ_NONE; 919 + 920 + dev = mhu->mbox.dev; 921 + for (i = 
0; i < NUM_EXT; i++) { 922 + struct mhuv3_mbox_chan_priv *priv; 923 + void *data __free(kfree) = NULL; 924 + 925 + if (!mhu->ext[i]) 926 + continue; 927 + 928 + /* Process any extension which could be source of the IRQ */ 929 + chan = mhu->ext[i]->chan_from_comb_irq_get(mhu); 930 + if (IS_ERR(chan)) 931 + continue; 932 + 933 + found++; 934 + /* From here on we need to call rx_complete even on error */ 935 + priv = chan->con_priv; 936 + if (!chan->cl) { 937 + dev_warn(dev, "RX Data on UNBOUND channel (%u)\n", 938 + priv->ch_idx); 939 + goto rx_ack; 940 + } 941 + 942 + /* Read optional in-band LE data first. */ 943 + if (priv->ops->read_data) { 944 + data = priv->ops->read_data(mhu, chan); 945 + if (IS_ERR(data)) { 946 + dev_err(dev, 947 + "Failed to read in-band data. err:%ld\n", 948 + PTR_ERR(no_free_ptr(data))); 949 + goto rx_ack; 950 + } 951 + } 952 + 953 + mbox_chan_received_data(chan, data); 954 + ret = IRQ_HANDLED; 955 + 956 + /* 957 + * Acknowledge transfer after any possible optional 958 + * out-of-band data has also been retrieved via 959 + * mbox_chan_received_data(). 
960 + */ 961 + rx_ack: 962 + if (priv->ops->rx_complete) 963 + priv->ops->rx_complete(mhu, chan); 964 + } 965 + 966 + if (found == 0) 967 + dev_warn_once(dev, "Failed to find channel for the RX interrupt\n"); 968 + 969 + return ret; 970 + } 971 + 972 + static int mhuv3_setup_pbx(struct mhuv3 *mhu) 973 + { 974 + struct device *dev = mhu->mbox.dev; 975 + 976 + mhu->mbox.ops = &mhuv3_sender_ops; 977 + 978 + if (mhu->cmb_irq > 0) { 979 + int ret, i; 980 + 981 + ret = devm_request_threaded_irq(dev, mhu->cmb_irq, NULL, 982 + mhuv3_pbx_comb_interrupt, 983 + IRQF_ONESHOT, "mhuv3-pbx", mhu); 984 + if (ret) 985 + return dev_err_probe(dev, ret, 986 + "Failed to request PBX IRQ\n"); 987 + 988 + mhu->mbox.txdone_irq = true; 989 + mhu->mbox.txdone_poll = false; 990 + 991 + for (i = 0; i < NUM_EXT; i++) 992 + if (mhu->ext[i]) 993 + mhu->ext[i]->combined_irq_setup(mhu); 994 + 995 + dev_dbg(dev, "MHUv3 PBX IRQs initialized.\n"); 996 + 997 + return 0; 998 + } 999 + 1000 + dev_info(dev, "Using PBX in Tx polling mode.\n"); 1001 + mhu->mbox.txdone_irq = false; 1002 + mhu->mbox.txdone_poll = true; 1003 + mhu->mbox.txpoll_period = 1; 1004 + 1005 + return 0; 1006 + } 1007 + 1008 + static int mhuv3_setup_mbx(struct mhuv3 *mhu) 1009 + { 1010 + struct device *dev = mhu->mbox.dev; 1011 + int ret, i; 1012 + 1013 + mhu->mbox.ops = &mhuv3_receiver_ops; 1014 + 1015 + if (mhu->cmb_irq <= 0) 1016 + return dev_err_probe(dev, -EINVAL, 1017 + "MBX combined IRQ is missing !\n"); 1018 + 1019 + ret = devm_request_threaded_irq(dev, mhu->cmb_irq, NULL, 1020 + mhuv3_mbx_comb_interrupt, IRQF_ONESHOT, 1021 + "mhuv3-mbx", mhu); 1022 + if (ret) 1023 + return dev_err_probe(dev, ret, "Failed to request MBX IRQ\n"); 1024 + 1025 + for (i = 0; i < NUM_EXT; i++) 1026 + if (mhu->ext[i]) 1027 + mhu->ext[i]->combined_irq_setup(mhu); 1028 + 1029 + dev_dbg(dev, "MHUv3 MBX IRQs initialized.\n"); 1030 + 1031 + return ret; 1032 + } 1033 + 1034 + static int mhuv3_irqs_init(struct mhuv3 *mhu, struct platform_device *pdev) 1035 
+ { 1036 + dev_dbg(mhu->mbox.dev, "Initializing %s block.\n", 1037 + mhuv3_str[mhu->frame]); 1038 + 1039 + if (mhu->frame == PBX_FRAME) { 1040 + mhu->cmb_irq = 1041 + platform_get_irq_byname_optional(pdev, "combined"); 1042 + return mhuv3_setup_pbx(mhu); 1043 + } 1044 + 1045 + mhu->cmb_irq = platform_get_irq_byname(pdev, "combined"); 1046 + return mhuv3_setup_mbx(mhu); 1047 + } 1048 + 1049 + static int mhuv3_probe(struct platform_device *pdev) 1050 + { 1051 + struct device *dev = &pdev->dev; 1052 + void __iomem *regs; 1053 + struct mhuv3 *mhu; 1054 + int ret; 1055 + 1056 + mhu = devm_kzalloc(dev, sizeof(*mhu), GFP_KERNEL); 1057 + if (!mhu) 1058 + return -ENOMEM; 1059 + 1060 + regs = devm_platform_ioremap_resource(pdev, 0); 1061 + if (IS_ERR(regs)) 1062 + return PTR_ERR(regs); 1063 + 1064 + mhu->mbox.dev = dev; 1065 + ret = mhuv3_frame_init(mhu, regs); 1066 + if (ret) 1067 + return ret; 1068 + 1069 + ret = mhuv3_irqs_init(mhu, pdev); 1070 + if (ret) 1071 + return ret; 1072 + 1073 + mhu->mbox.of_xlate = mhuv3_mbox_of_xlate; 1074 + ret = mhuv3_initialize_channels(dev, mhu); 1075 + if (ret) 1076 + return ret; 1077 + 1078 + ret = devm_mbox_controller_register(dev, &mhu->mbox); 1079 + if (ret) 1080 + return dev_err_probe(dev, ret, 1081 + "Failed to register ARM MHUv3 driver\n"); 1082 + 1083 + return ret; 1084 + } 1085 + 1086 + static const struct of_device_id mhuv3_of_match[] = { 1087 + { .compatible = "arm,mhuv3", .data = NULL }, 1088 + {} 1089 + }; 1090 + MODULE_DEVICE_TABLE(of, mhuv3_of_match); 1091 + 1092 + static struct platform_driver mhuv3_driver = { 1093 + .driver = { 1094 + .name = "arm-mhuv3-mailbox", 1095 + .of_match_table = mhuv3_of_match, 1096 + }, 1097 + .probe = mhuv3_probe, 1098 + }; 1099 + module_platform_driver(mhuv3_driver); 1100 + 1101 + MODULE_LICENSE("GPL"); 1102 + MODULE_DESCRIPTION("ARM MHUv3 Driver"); 1103 + MODULE_AUTHOR("Cristian Marussi <cristian.marussi@arm.com>");
+11 -10
drivers/mailbox/bcm-pdc-mailbox.c
··· 43 43 #include <linux/dma-direction.h> 44 44 #include <linux/dma-mapping.h> 45 45 #include <linux/dmapool.h> 46 + #include <linux/workqueue.h> 46 47 47 48 #define PDC_SUCCESS 0 48 49 ··· 294 293 295 294 unsigned int pdc_irq; 296 295 297 - /* tasklet for deferred processing after DMA rx interrupt */ 298 - struct tasklet_struct rx_tasklet; 296 + /* work for deferred processing after DMA rx interrupt */ 297 + struct work_struct rx_work; 299 298 300 299 /* Number of bytes of receive status prior to each rx frame */ 301 300 u32 rx_status_len; ··· 953 952 iowrite32(intstatus, pdcs->pdc_reg_vbase + PDC_INTSTATUS_OFFSET); 954 953 955 954 /* Wakeup IRQ thread */ 956 - tasklet_schedule(&pdcs->rx_tasklet); 955 + queue_work(system_bh_wq, &pdcs->rx_work); 957 956 return IRQ_HANDLED; 958 957 } 959 958 960 959 /** 961 - * pdc_tasklet_cb() - Tasklet callback that runs the deferred processing after 960 + * pdc_work_cb() - Work callback that runs the deferred processing after 962 961 * a DMA receive interrupt. Reenables the receive interrupt. 
963 962 * @t: Pointer to the work_struct embedded in the pdc_state structure 964 963 */ 965 - static void pdc_tasklet_cb(struct tasklet_struct *t) 964 + static void pdc_work_cb(struct work_struct *t) 966 965 { 967 - struct pdc_state *pdcs = from_tasklet(pdcs, t, rx_tasklet); 966 + struct pdc_state *pdcs = from_work(pdcs, t, rx_work); 968 967 969 968 pdc_receive(pdcs); 970 969 ··· 1578 1577 1579 1578 pdc_hw_init(pdcs); 1580 1579 1581 - /* Init tasklet for deferred DMA rx processing */ 1582 - tasklet_setup(&pdcs->rx_tasklet, pdc_tasklet_cb); 1580 + /* Init work for deferred DMA rx processing */ 1581 + INIT_WORK(&pdcs->rx_work, pdc_work_cb); 1583 1582 1584 1583 err = pdc_interrupts_init(pdcs); 1585 1584 if (err) ··· 1596 1595 return PDC_SUCCESS; 1597 1596 1598 1597 cleanup_buf_pool: 1599 - tasklet_kill(&pdcs->rx_tasklet); 1598 + cancel_work_sync(&pdcs->rx_work); 1600 1599 dma_pool_destroy(pdcs->rx_buf_pool); 1601 1600 1602 1601 cleanup_ring_pool: ··· 1612 1611 1613 1612 pdc_free_debugfs(); 1614 1613 1615 - tasklet_kill(&pdcs->rx_tasklet); 1614 + cancel_work_sync(&pdcs->rx_work); 1616 1615 1617 1616 pdc_hw_disable(pdcs); 1618 1617
+8 -8
drivers/mailbox/imx-mailbox.c
··· 21 21 #include <linux/pm_runtime.h> 22 22 #include <linux/suspend.h> 23 23 #include <linux/slab.h> 24 + #include <linux/workqueue.h> 24 25 25 26 #include "mailbox.h" 26 27 ··· 81 80 char irq_desc[IMX_MU_CHAN_NAME_SIZE]; 82 81 enum imx_mu_chan_type type; 83 82 struct mbox_chan *chan; 84 - struct tasklet_struct txdb_tasklet; 83 + struct work_struct txdb_work; 85 84 }; 86 85 87 86 struct imx_mu_priv { ··· 233 232 break; 234 233 case IMX_MU_TYPE_TXDB: 235 234 imx_mu_xcr_rmw(priv, IMX_MU_GCR, IMX_MU_xCR_GIRn(priv->dcfg->type, cp->idx), 0); 236 - tasklet_schedule(&cp->txdb_tasklet); 235 + queue_work(system_bh_wq, &cp->txdb_work); 237 236 break; 238 237 case IMX_MU_TYPE_TXDB_V2: 239 238 imx_mu_xcr_rmw(priv, IMX_MU_GCR, IMX_MU_xCR_GIRn(priv->dcfg->type, cp->idx), 0); ··· 421 420 } 422 421 423 422 /* Simulate hack for mbox framework */ 424 - tasklet_schedule(&cp->txdb_tasklet); 423 + queue_work(system_bh_wq, &cp->txdb_work); 425 424 426 425 break; 427 426 default: ··· 485 484 return err; 486 485 } 487 486 488 - static void imx_mu_txdb_tasklet(unsigned long data) 487 + static void imx_mu_txdb_work(struct work_struct *t) 489 488 { 490 - struct imx_mu_con_priv *cp = (struct imx_mu_con_priv *)data; 489 + struct imx_mu_con_priv *cp = from_work(cp, t, txdb_work); 491 490 492 491 mbox_chan_txdone(cp->chan, 0); 493 492 } ··· 571 570 572 571 if (cp->type == IMX_MU_TYPE_TXDB) { 573 572 /* Tx doorbell don't have ACK support */ 574 - tasklet_init(&cp->txdb_tasklet, imx_mu_txdb_tasklet, 575 - (unsigned long)cp); 573 + INIT_WORK(&cp->txdb_work, imx_mu_txdb_work); 576 574 return 0; 577 575 } 578 576 ··· 615 615 } 616 616 617 617 if (cp->type == IMX_MU_TYPE_TXDB) { 618 - tasklet_kill(&cp->txdb_tasklet); 618 + cancel_work_sync(&cp->txdb_work); 619 619 pm_runtime_put_sync(priv->dev); 620 620 return; 621 621 }
+2 -1
drivers/mailbox/mtk-cmdq-mailbox.c
··· 465 465 struct cmdq_task *task, *tmp; 466 466 unsigned long flags; 467 467 468 - WARN_ON(pm_runtime_get_sync(cmdq->mbox.dev)); 468 + WARN_ON(pm_runtime_get_sync(cmdq->mbox.dev) < 0); 469 469 470 470 spin_lock_irqsave(&thread->chan->lock, flags); 471 471 if (list_empty(&thread->task_busy_list)) ··· 765 765 {.compatible = "mediatek,mt8195-gce", .data = (void *)&gce_plat_mt8195}, 766 766 {} 767 767 }; 768 + MODULE_DEVICE_TABLE(of, cmdq_of_ids); 768 769 769 770 static struct platform_driver cmdq_drv = { 770 771 .probe = cmdq_probe,
+107 -410
drivers/mailbox/omap-mailbox.c
··· 19 19 #include <linux/of.h> 20 20 #include <linux/platform_device.h> 21 21 #include <linux/pm_runtime.h> 22 - #include <linux/omap-mailbox.h> 23 22 #include <linux/mailbox_controller.h> 24 23 #include <linux/mailbox_client.h> 25 24 ··· 50 51 #define MBOX_INTR_CFG_TYPE1 0 51 52 #define MBOX_INTR_CFG_TYPE2 1 52 53 54 + typedef enum { 55 + IRQ_TX = 1, 56 + IRQ_RX = 2, 57 + } omap_mbox_irq_t; 58 + 53 59 struct omap_mbox_fifo { 54 60 unsigned long msg; 55 61 unsigned long fifo_stat; ··· 63 59 unsigned long irqstatus; 64 60 unsigned long irqdisable; 65 61 u32 intr_bit; 66 - }; 67 - 68 - struct omap_mbox_queue { 69 - spinlock_t lock; 70 - struct kfifo fifo; 71 - struct work_struct work; 72 - struct omap_mbox *mbox; 73 - bool full; 74 62 }; 75 63 76 64 struct omap_mbox_match_data { ··· 77 81 u32 num_users; 78 82 u32 num_fifos; 79 83 u32 intr_type; 80 - struct omap_mbox **mboxes; 81 - struct mbox_controller controller; 82 - struct list_head elem; 83 - }; 84 - 85 - struct omap_mbox_fifo_info { 86 - int tx_id; 87 - int tx_usr; 88 - int tx_irq; 89 - 90 - int rx_id; 91 - int rx_usr; 92 - int rx_irq; 93 - 94 - const char *name; 95 - bool send_no_irq; 96 84 }; 97 85 98 86 struct omap_mbox { 99 87 const char *name; 100 88 int irq; 101 - struct omap_mbox_queue *rxq; 102 - struct device *dev; 103 89 struct omap_mbox_device *parent; 104 90 struct omap_mbox_fifo tx_fifo; 105 91 struct omap_mbox_fifo rx_fifo; ··· 89 111 struct mbox_chan *chan; 90 112 bool send_no_irq; 91 113 }; 92 - 93 - /* global variables for the mailbox devices */ 94 - static DEFINE_MUTEX(omap_mbox_devices_lock); 95 - static LIST_HEAD(omap_mbox_devices); 96 - 97 - static unsigned int mbox_kfifo_size = CONFIG_OMAP_MBOX_KFIFO_SIZE; 98 - module_param(mbox_kfifo_size, uint, S_IRUGO); 99 - MODULE_PARM_DESC(mbox_kfifo_size, "Size of omap's mailbox kfifo (bytes)"); 100 - 101 - static struct omap_mbox *mbox_chan_to_omap_mbox(struct mbox_chan *chan) 102 - { 103 - if (!chan || !chan->con_priv) 104 - return NULL; 105 - 106 
- return (struct omap_mbox *)chan->con_priv; 107 - } 108 114 109 115 static inline 110 116 unsigned int mbox_read_reg(struct omap_mbox_device *mdev, size_t ofs) ··· 159 197 return (int)(enable & status & bit); 160 198 } 161 199 162 - static void _omap_mbox_enable_irq(struct omap_mbox *mbox, omap_mbox_irq_t irq) 200 + static void omap_mbox_enable_irq(struct omap_mbox *mbox, omap_mbox_irq_t irq) 163 201 { 164 202 u32 l; 165 203 struct omap_mbox_fifo *fifo = (irq == IRQ_TX) ? ··· 172 210 mbox_write_reg(mbox->parent, l, irqenable); 173 211 } 174 212 175 - static void _omap_mbox_disable_irq(struct omap_mbox *mbox, omap_mbox_irq_t irq) 213 + static void omap_mbox_disable_irq(struct omap_mbox *mbox, omap_mbox_irq_t irq) 176 214 { 177 215 struct omap_mbox_fifo *fifo = (irq == IRQ_TX) ? 178 216 &mbox->tx_fifo : &mbox->rx_fifo; ··· 189 227 mbox_write_reg(mbox->parent, bit, irqdisable); 190 228 } 191 229 192 - void omap_mbox_enable_irq(struct mbox_chan *chan, omap_mbox_irq_t irq) 193 - { 194 - struct omap_mbox *mbox = mbox_chan_to_omap_mbox(chan); 195 - 196 - if (WARN_ON(!mbox)) 197 - return; 198 - 199 - _omap_mbox_enable_irq(mbox, irq); 200 - } 201 - EXPORT_SYMBOL(omap_mbox_enable_irq); 202 - 203 - void omap_mbox_disable_irq(struct mbox_chan *chan, omap_mbox_irq_t irq) 204 - { 205 - struct omap_mbox *mbox = mbox_chan_to_omap_mbox(chan); 206 - 207 - if (WARN_ON(!mbox)) 208 - return; 209 - 210 - _omap_mbox_disable_irq(mbox, irq); 211 - } 212 - EXPORT_SYMBOL(omap_mbox_disable_irq); 213 - 214 - /* 215 - * Message receiver(workqueue) 216 - */ 217 - static void mbox_rx_work(struct work_struct *work) 218 - { 219 - struct omap_mbox_queue *mq = 220 - container_of(work, struct omap_mbox_queue, work); 221 - mbox_msg_t data; 222 - u32 msg; 223 - int len; 224 - 225 - while (kfifo_len(&mq->fifo) >= sizeof(msg)) { 226 - len = kfifo_out(&mq->fifo, (unsigned char *)&msg, sizeof(msg)); 227 - WARN_ON(len != sizeof(msg)); 228 - data = msg; 229 - 230 - mbox_chan_received_data(mq->mbox->chan, 
(void *)data); 231 - spin_lock_irq(&mq->lock); 232 - if (mq->full) { 233 - mq->full = false; 234 - _omap_mbox_enable_irq(mq->mbox, IRQ_RX); 235 - } 236 - spin_unlock_irq(&mq->lock); 237 - } 238 - } 239 - 240 230 /* 241 231 * Mailbox interrupt handler 242 232 */ 243 233 static void __mbox_tx_interrupt(struct omap_mbox *mbox) 244 234 { 245 - _omap_mbox_disable_irq(mbox, IRQ_TX); 235 + omap_mbox_disable_irq(mbox, IRQ_TX); 246 236 ack_mbox_irq(mbox, IRQ_TX); 247 237 mbox_chan_txdone(mbox->chan, 0); 248 238 } 249 239 250 240 static void __mbox_rx_interrupt(struct omap_mbox *mbox) 251 241 { 252 - struct omap_mbox_queue *mq = mbox->rxq; 253 242 u32 msg; 254 - int len; 255 243 256 244 while (!mbox_fifo_empty(mbox)) { 257 - if (unlikely(kfifo_avail(&mq->fifo) < sizeof(msg))) { 258 - _omap_mbox_disable_irq(mbox, IRQ_RX); 259 - mq->full = true; 260 - goto nomem; 261 - } 262 - 263 245 msg = mbox_fifo_read(mbox); 264 - 265 - len = kfifo_in(&mq->fifo, (unsigned char *)&msg, sizeof(msg)); 266 - WARN_ON(len != sizeof(msg)); 246 + mbox_chan_received_data(mbox->chan, (void *)(uintptr_t)msg); 267 247 } 268 248 269 - /* no more messages in the fifo. clear IRQ source. */ 249 + /* clear IRQ source. 
*/ 270 250 ack_mbox_irq(mbox, IRQ_RX); 271 - nomem: 272 - schedule_work(&mbox->rxq->work); 273 251 } 274 252 275 253 static irqreturn_t mbox_interrupt(int irq, void *p) ··· 225 323 return IRQ_HANDLED; 226 324 } 227 325 228 - static struct omap_mbox_queue *mbox_queue_alloc(struct omap_mbox *mbox, 229 - void (*work)(struct work_struct *)) 230 - { 231 - struct omap_mbox_queue *mq; 232 - 233 - if (!work) 234 - return NULL; 235 - 236 - mq = kzalloc(sizeof(*mq), GFP_KERNEL); 237 - if (!mq) 238 - return NULL; 239 - 240 - spin_lock_init(&mq->lock); 241 - 242 - if (kfifo_alloc(&mq->fifo, mbox_kfifo_size, GFP_KERNEL)) 243 - goto error; 244 - 245 - INIT_WORK(&mq->work, work); 246 - return mq; 247 - 248 - error: 249 - kfree(mq); 250 - return NULL; 251 - } 252 - 253 - static void mbox_queue_free(struct omap_mbox_queue *q) 254 - { 255 - kfifo_free(&q->fifo); 256 - kfree(q); 257 - } 258 - 259 326 static int omap_mbox_startup(struct omap_mbox *mbox) 260 327 { 261 328 int ret = 0; 262 - struct omap_mbox_queue *mq; 263 329 264 - mq = mbox_queue_alloc(mbox, mbox_rx_work); 265 - if (!mq) 266 - return -ENOMEM; 267 - mbox->rxq = mq; 268 - mq->mbox = mbox; 269 - 270 - ret = request_irq(mbox->irq, mbox_interrupt, IRQF_SHARED, 271 - mbox->name, mbox); 330 + ret = request_threaded_irq(mbox->irq, NULL, mbox_interrupt, 331 + IRQF_ONESHOT, mbox->name, mbox); 272 332 if (unlikely(ret)) { 273 333 pr_err("failed to register mailbox interrupt:%d\n", ret); 274 - goto fail_request_irq; 334 + return ret; 275 335 } 276 336 277 337 if (mbox->send_no_irq) 278 338 mbox->chan->txdone_method = TXDONE_BY_ACK; 279 339 280 - _omap_mbox_enable_irq(mbox, IRQ_RX); 340 + omap_mbox_enable_irq(mbox, IRQ_RX); 281 341 282 342 return 0; 283 - 284 - fail_request_irq: 285 - mbox_queue_free(mbox->rxq); 286 - return ret; 287 343 } 288 344 289 345 static void omap_mbox_fini(struct omap_mbox *mbox) 290 346 { 291 - _omap_mbox_disable_irq(mbox, IRQ_RX); 347 + omap_mbox_disable_irq(mbox, IRQ_RX); 292 348 free_irq(mbox->irq, 
mbox); 293 - flush_work(&mbox->rxq->work); 294 - mbox_queue_free(mbox->rxq); 295 - } 296 - 297 - static struct omap_mbox *omap_mbox_device_find(struct omap_mbox_device *mdev, 298 - const char *mbox_name) 299 - { 300 - struct omap_mbox *_mbox, *mbox = NULL; 301 - struct omap_mbox **mboxes = mdev->mboxes; 302 - int i; 303 - 304 - if (!mboxes) 305 - return NULL; 306 - 307 - for (i = 0; (_mbox = mboxes[i]); i++) { 308 - if (!strcmp(_mbox->name, mbox_name)) { 309 - mbox = _mbox; 310 - break; 311 - } 312 - } 313 - return mbox; 314 - } 315 - 316 - struct mbox_chan *omap_mbox_request_channel(struct mbox_client *cl, 317 - const char *chan_name) 318 - { 319 - struct device *dev = cl->dev; 320 - struct omap_mbox *mbox = NULL; 321 - struct omap_mbox_device *mdev; 322 - int ret; 323 - 324 - if (!dev) 325 - return ERR_PTR(-ENODEV); 326 - 327 - if (dev->of_node) { 328 - pr_err("%s: please use mbox_request_channel(), this API is supported only for OMAP non-DT usage\n", 329 - __func__); 330 - return ERR_PTR(-ENODEV); 331 - } 332 - 333 - mutex_lock(&omap_mbox_devices_lock); 334 - list_for_each_entry(mdev, &omap_mbox_devices, elem) { 335 - mbox = omap_mbox_device_find(mdev, chan_name); 336 - if (mbox) 337 - break; 338 - } 339 - mutex_unlock(&omap_mbox_devices_lock); 340 - 341 - if (!mbox || !mbox->chan) 342 - return ERR_PTR(-ENOENT); 343 - 344 - ret = mbox_bind_client(mbox->chan, cl); 345 - if (ret) 346 - return ERR_PTR(ret); 347 - 348 - return mbox->chan; 349 - } 350 - EXPORT_SYMBOL(omap_mbox_request_channel); 351 - 352 - static struct class omap_mbox_class = { .name = "mbox", }; 353 - 354 - static int omap_mbox_register(struct omap_mbox_device *mdev) 355 - { 356 - int ret; 357 - int i; 358 - struct omap_mbox **mboxes; 359 - 360 - if (!mdev || !mdev->mboxes) 361 - return -EINVAL; 362 - 363 - mboxes = mdev->mboxes; 364 - for (i = 0; mboxes[i]; i++) { 365 - struct omap_mbox *mbox = mboxes[i]; 366 - 367 - mbox->dev = device_create(&omap_mbox_class, mdev->dev, 368 - 0, mbox, "%s", 
mbox->name); 369 - if (IS_ERR(mbox->dev)) { 370 - ret = PTR_ERR(mbox->dev); 371 - goto err_out; 372 - } 373 - } 374 - 375 - mutex_lock(&omap_mbox_devices_lock); 376 - list_add(&mdev->elem, &omap_mbox_devices); 377 - mutex_unlock(&omap_mbox_devices_lock); 378 - 379 - ret = devm_mbox_controller_register(mdev->dev, &mdev->controller); 380 - 381 - err_out: 382 - if (ret) { 383 - while (i--) 384 - device_unregister(mboxes[i]->dev); 385 - } 386 - return ret; 387 - } 388 - 389 - static int omap_mbox_unregister(struct omap_mbox_device *mdev) 390 - { 391 - int i; 392 - struct omap_mbox **mboxes; 393 - 394 - if (!mdev || !mdev->mboxes) 395 - return -EINVAL; 396 - 397 - mutex_lock(&omap_mbox_devices_lock); 398 - list_del(&mdev->elem); 399 - mutex_unlock(&omap_mbox_devices_lock); 400 - 401 - mboxes = mdev->mboxes; 402 - for (i = 0; mboxes[i]; i++) 403 - device_unregister(mboxes[i]->dev); 404 - return 0; 405 349 } 406 350 407 351 static int omap_mbox_chan_startup(struct mbox_chan *chan) 408 352 { 409 - struct omap_mbox *mbox = mbox_chan_to_omap_mbox(chan); 353 + struct omap_mbox *mbox = chan->con_priv; 410 354 struct omap_mbox_device *mdev = mbox->parent; 411 355 int ret = 0; 412 356 ··· 267 519 268 520 static void omap_mbox_chan_shutdown(struct mbox_chan *chan) 269 521 { 270 - struct omap_mbox *mbox = mbox_chan_to_omap_mbox(chan); 522 + struct omap_mbox *mbox = chan->con_priv; 271 523 struct omap_mbox_device *mdev = mbox->parent; 272 524 273 525 mutex_lock(&mdev->cfg_lock); ··· 278 530 279 531 static int omap_mbox_chan_send_noirq(struct omap_mbox *mbox, u32 msg) 280 532 { 281 - int ret = -EBUSY; 533 + if (mbox_fifo_full(mbox)) 534 + return -EBUSY; 282 535 283 - if (!mbox_fifo_full(mbox)) { 284 - _omap_mbox_enable_irq(mbox, IRQ_RX); 285 - mbox_fifo_write(mbox, msg); 286 - ret = 0; 287 - _omap_mbox_disable_irq(mbox, IRQ_RX); 536 + omap_mbox_enable_irq(mbox, IRQ_RX); 537 + mbox_fifo_write(mbox, msg); 538 + omap_mbox_disable_irq(mbox, IRQ_RX); 288 539 289 - /* we must read and ack 
the interrupt directly from here */ 290 - mbox_fifo_read(mbox); 291 - ack_mbox_irq(mbox, IRQ_RX); 292 - } 540 + /* we must read and ack the interrupt directly from here */ 541 + mbox_fifo_read(mbox); 542 + ack_mbox_irq(mbox, IRQ_RX); 293 543 294 - return ret; 544 + return 0; 295 545 } 296 546 297 547 static int omap_mbox_chan_send(struct omap_mbox *mbox, u32 msg) 298 548 { 299 - int ret = -EBUSY; 300 - 301 - if (!mbox_fifo_full(mbox)) { 302 - mbox_fifo_write(mbox, msg); 303 - ret = 0; 549 + if (mbox_fifo_full(mbox)) { 550 + /* always enable the interrupt */ 551 + omap_mbox_enable_irq(mbox, IRQ_TX); 552 + return -EBUSY; 304 553 } 305 554 555 + mbox_fifo_write(mbox, msg); 556 + 306 557 /* always enable the interrupt */ 307 - _omap_mbox_enable_irq(mbox, IRQ_TX); 308 - return ret; 558 + omap_mbox_enable_irq(mbox, IRQ_TX); 559 + return 0; 309 560 } 310 561 311 562 static int omap_mbox_chan_send_data(struct mbox_chan *chan, void *data) 312 563 { 313 - struct omap_mbox *mbox = mbox_chan_to_omap_mbox(chan); 564 + struct omap_mbox *mbox = chan->con_priv; 314 565 int ret; 315 - u32 msg = omap_mbox_message(data); 566 + u32 msg = (u32)(uintptr_t)(data); 316 567 317 568 if (!mbox) 318 569 return -EINVAL; ··· 413 666 struct device_node *node; 414 667 struct omap_mbox_device *mdev; 415 668 struct omap_mbox *mbox; 669 + int i; 416 670 417 - mdev = container_of(controller, struct omap_mbox_device, controller); 671 + mdev = dev_get_drvdata(controller->dev); 418 672 if (WARN_ON(!mdev)) 419 673 return ERR_PTR(-EINVAL); 420 674 ··· 426 678 return ERR_PTR(-ENODEV); 427 679 } 428 680 429 - mbox = omap_mbox_device_find(mdev, node->name); 681 + for (i = 0; i < controller->num_chans; i++) { 682 + mbox = controller->chans[i].con_priv; 683 + if (!strcmp(mbox->name, node->name)) { 684 + of_node_put(node); 685 + return &controller->chans[i]; 686 + } 687 + } 688 + 430 689 of_node_put(node); 431 - return mbox ? 
mbox->chan : ERR_PTR(-ENOENT); 690 + return ERR_PTR(-ENOENT); 432 691 } 433 692 434 693 static int omap_mbox_probe(struct platform_device *pdev) 435 694 { 436 695 int ret; 437 696 struct mbox_chan *chnls; 438 - struct omap_mbox **list, *mbox, *mboxblk; 439 - struct omap_mbox_fifo_info *finfo, *finfoblk; 697 + struct omap_mbox *mbox; 440 698 struct omap_mbox_device *mdev; 441 699 struct omap_mbox_fifo *fifo; 442 700 struct device_node *node = pdev->dev.of_node; 443 701 struct device_node *child; 444 702 const struct omap_mbox_match_data *match_data; 703 + struct mbox_controller *controller; 445 704 u32 intr_type, info_count; 446 705 u32 num_users, num_fifos; 447 706 u32 tmp[3]; ··· 477 722 return -ENODEV; 478 723 } 479 724 480 - finfoblk = devm_kcalloc(&pdev->dev, info_count, sizeof(*finfoblk), 481 - GFP_KERNEL); 482 - if (!finfoblk) 483 - return -ENOMEM; 484 - 485 - finfo = finfoblk; 486 - child = NULL; 487 - for (i = 0; i < info_count; i++, finfo++) { 488 - child = of_get_next_available_child(node, child); 489 - ret = of_property_read_u32_array(child, "ti,mbox-tx", tmp, 490 - ARRAY_SIZE(tmp)); 491 - if (ret) 492 - return ret; 493 - finfo->tx_id = tmp[0]; 494 - finfo->tx_irq = tmp[1]; 495 - finfo->tx_usr = tmp[2]; 496 - 497 - ret = of_property_read_u32_array(child, "ti,mbox-rx", tmp, 498 - ARRAY_SIZE(tmp)); 499 - if (ret) 500 - return ret; 501 - finfo->rx_id = tmp[0]; 502 - finfo->rx_irq = tmp[1]; 503 - finfo->rx_usr = tmp[2]; 504 - 505 - finfo->name = child->name; 506 - 507 - finfo->send_no_irq = of_property_read_bool(child, "ti,mbox-send-noirq"); 508 - 509 - if (finfo->tx_id >= num_fifos || finfo->rx_id >= num_fifos || 510 - finfo->tx_usr >= num_users || finfo->rx_usr >= num_users) 511 - return -EINVAL; 512 - } 513 - 514 725 mdev = devm_kzalloc(&pdev->dev, sizeof(*mdev), GFP_KERNEL); 515 726 if (!mdev) 516 727 return -ENOMEM; ··· 490 769 if (!mdev->irq_ctx) 491 770 return -ENOMEM; 492 771 493 - /* allocate one extra for marking end of list */ 494 - list = 
devm_kcalloc(&pdev->dev, info_count + 1, sizeof(*list), 495 - GFP_KERNEL); 496 - if (!list) 497 - return -ENOMEM; 498 - 499 772 chnls = devm_kcalloc(&pdev->dev, info_count + 1, sizeof(*chnls), 500 773 GFP_KERNEL); 501 774 if (!chnls) 502 775 return -ENOMEM; 503 776 504 - mboxblk = devm_kcalloc(&pdev->dev, info_count, sizeof(*mbox), 505 - GFP_KERNEL); 506 - if (!mboxblk) 507 - return -ENOMEM; 777 + child = NULL; 778 + for (i = 0; i < info_count; i++) { 779 + int tx_id, tx_irq, tx_usr; 780 + int rx_id, rx_usr; 508 781 509 - mbox = mboxblk; 510 - finfo = finfoblk; 511 - for (i = 0; i < info_count; i++, finfo++) { 782 + mbox = devm_kzalloc(&pdev->dev, sizeof(*mbox), GFP_KERNEL); 783 + if (!mbox) 784 + return -ENOMEM; 785 + 786 + child = of_get_next_available_child(node, child); 787 + ret = of_property_read_u32_array(child, "ti,mbox-tx", tmp, 788 + ARRAY_SIZE(tmp)); 789 + if (ret) 790 + return ret; 791 + tx_id = tmp[0]; 792 + tx_irq = tmp[1]; 793 + tx_usr = tmp[2]; 794 + 795 + ret = of_property_read_u32_array(child, "ti,mbox-rx", tmp, 796 + ARRAY_SIZE(tmp)); 797 + if (ret) 798 + return ret; 799 + rx_id = tmp[0]; 800 + /* rx_irq = tmp[1]; */ 801 + rx_usr = tmp[2]; 802 + 803 + if (tx_id >= num_fifos || rx_id >= num_fifos || 804 + tx_usr >= num_users || rx_usr >= num_users) 805 + return -EINVAL; 806 + 512 807 fifo = &mbox->tx_fifo; 513 - fifo->msg = MAILBOX_MESSAGE(finfo->tx_id); 514 - fifo->fifo_stat = MAILBOX_FIFOSTATUS(finfo->tx_id); 515 - fifo->intr_bit = MAILBOX_IRQ_NOTFULL(finfo->tx_id); 516 - fifo->irqenable = MAILBOX_IRQENABLE(intr_type, finfo->tx_usr); 517 - fifo->irqstatus = MAILBOX_IRQSTATUS(intr_type, finfo->tx_usr); 518 - fifo->irqdisable = MAILBOX_IRQDISABLE(intr_type, finfo->tx_usr); 808 + fifo->msg = MAILBOX_MESSAGE(tx_id); 809 + fifo->fifo_stat = MAILBOX_FIFOSTATUS(tx_id); 810 + fifo->intr_bit = MAILBOX_IRQ_NOTFULL(tx_id); 811 + fifo->irqenable = MAILBOX_IRQENABLE(intr_type, tx_usr); 812 + fifo->irqstatus = MAILBOX_IRQSTATUS(intr_type, tx_usr); 813 + 
fifo->irqdisable = MAILBOX_IRQDISABLE(intr_type, tx_usr); 519 814 520 815 fifo = &mbox->rx_fifo; 521 - fifo->msg = MAILBOX_MESSAGE(finfo->rx_id); 522 - fifo->msg_stat = MAILBOX_MSGSTATUS(finfo->rx_id); 523 - fifo->intr_bit = MAILBOX_IRQ_NEWMSG(finfo->rx_id); 524 - fifo->irqenable = MAILBOX_IRQENABLE(intr_type, finfo->rx_usr); 525 - fifo->irqstatus = MAILBOX_IRQSTATUS(intr_type, finfo->rx_usr); 526 - fifo->irqdisable = MAILBOX_IRQDISABLE(intr_type, finfo->rx_usr); 816 + fifo->msg = MAILBOX_MESSAGE(rx_id); 817 + fifo->msg_stat = MAILBOX_MSGSTATUS(rx_id); 818 + fifo->intr_bit = MAILBOX_IRQ_NEWMSG(rx_id); 819 + fifo->irqenable = MAILBOX_IRQENABLE(intr_type, rx_usr); 820 + fifo->irqstatus = MAILBOX_IRQSTATUS(intr_type, rx_usr); 821 + fifo->irqdisable = MAILBOX_IRQDISABLE(intr_type, rx_usr); 527 822 528 - mbox->send_no_irq = finfo->send_no_irq; 823 + mbox->send_no_irq = of_property_read_bool(child, "ti,mbox-send-noirq"); 529 824 mbox->intr_type = intr_type; 530 825 531 826 mbox->parent = mdev; 532 - mbox->name = finfo->name; 533 - mbox->irq = platform_get_irq(pdev, finfo->tx_irq); 827 + mbox->name = child->name; 828 + mbox->irq = platform_get_irq(pdev, tx_irq); 534 829 if (mbox->irq < 0) 535 830 return mbox->irq; 536 831 mbox->chan = &chnls[i]; 537 832 chnls[i].con_priv = mbox; 538 - list[i] = mbox++; 539 833 } 540 834 541 835 mutex_init(&mdev->cfg_lock); ··· 558 822 mdev->num_users = num_users; 559 823 mdev->num_fifos = num_fifos; 560 824 mdev->intr_type = intr_type; 561 - mdev->mboxes = list; 562 825 826 + controller = devm_kzalloc(&pdev->dev, sizeof(*controller), GFP_KERNEL); 827 + if (!controller) 828 + return -ENOMEM; 563 829 /* 564 830 * OMAP/K3 Mailbox IP does not have a Tx-Done IRQ, but rather a Tx-Ready 565 831 * IRQ and is needed to run the Tx state machine 566 832 */ 567 - mdev->controller.txdone_irq = true; 568 - mdev->controller.dev = mdev->dev; 569 - mdev->controller.ops = &omap_mbox_chan_ops; 570 - mdev->controller.chans = chnls; 571 - 
mdev->controller.num_chans = info_count; 572 - mdev->controller.of_xlate = omap_mbox_of_xlate; 573 - ret = omap_mbox_register(mdev); 833 + controller->txdone_irq = true; 834 + controller->dev = mdev->dev; 835 + controller->ops = &omap_mbox_chan_ops; 836 + controller->chans = chnls; 837 + controller->num_chans = info_count; 838 + controller->of_xlate = omap_mbox_of_xlate; 839 + ret = devm_mbox_controller_register(mdev->dev, controller); 574 840 if (ret) 575 841 return ret; 576 842 577 843 platform_set_drvdata(pdev, mdev); 578 - pm_runtime_enable(mdev->dev); 844 + devm_pm_runtime_enable(mdev->dev); 579 845 580 846 ret = pm_runtime_resume_and_get(mdev->dev); 581 847 if (ret < 0) 582 - goto unregister; 848 + return ret; 583 849 584 850 /* 585 851 * just print the raw revision register, the format is not ··· 592 854 593 855 ret = pm_runtime_put_sync(mdev->dev); 594 856 if (ret < 0 && ret != -ENOSYS) 595 - goto unregister; 857 + return ret; 596 858 597 - devm_kfree(&pdev->dev, finfoblk); 598 859 return 0; 599 - 600 - unregister: 601 - pm_runtime_disable(mdev->dev); 602 - omap_mbox_unregister(mdev); 603 - return ret; 604 - } 605 - 606 - static void omap_mbox_remove(struct platform_device *pdev) 607 - { 608 - struct omap_mbox_device *mdev = platform_get_drvdata(pdev); 609 - 610 - pm_runtime_disable(mdev->dev); 611 - omap_mbox_unregister(mdev); 612 860 } 613 861 614 862 static struct platform_driver omap_mbox_driver = { 615 863 .probe = omap_mbox_probe, 616 - .remove_new = omap_mbox_remove, 617 864 .driver = { 618 865 .name = "omap-mailbox", 619 866 .pm = &omap_mbox_pm_ops, 620 867 .of_match_table = of_match_ptr(omap_mailbox_of_match), 621 868 }, 622 869 }; 623 - 624 - static int __init omap_mbox_init(void) 625 - { 626 - int err; 627 - 628 - err = class_register(&omap_mbox_class); 629 - if (err) 630 - return err; 631 - 632 - /* kfifo size sanity check: alignment and minimal size */ 633 - mbox_kfifo_size = ALIGN(mbox_kfifo_size, sizeof(u32)); 634 - mbox_kfifo_size = 
max_t(unsigned int, mbox_kfifo_size, sizeof(u32)); 635 - 636 - err = platform_driver_register(&omap_mbox_driver); 637 - if (err) 638 - class_unregister(&omap_mbox_class); 639 - 640 - return err; 641 - } 642 - subsys_initcall(omap_mbox_init); 643 - 644 - static void __exit omap_mbox_exit(void) 645 - { 646 - platform_driver_unregister(&omap_mbox_driver); 647 - class_unregister(&omap_mbox_class); 648 - } 649 - module_exit(omap_mbox_exit); 870 + module_platform_driver(omap_mbox_driver); 650 871 651 872 MODULE_LICENSE("GPL v2"); 652 873 MODULE_DESCRIPTION("omap mailbox: interrupt driven messaging");
+362 -50
drivers/mailbox/zynqmp-ipi-mailbox.c
··· 6 6 */ 7 7 8 8 #include <linux/arm-smccc.h> 9 + #include <linux/cpuhotplug.h> 9 10 #include <linux/delay.h> 10 11 #include <linux/device.h> 11 12 #include <linux/interrupt.h> 13 + #include <linux/irqdomain.h> 12 14 #include <linux/io.h> 13 15 #include <linux/kernel.h> 14 16 #include <linux/mailbox_controller.h> ··· 18 16 #include <linux/module.h> 19 17 #include <linux/of.h> 20 18 #include <linux/of_address.h> 19 + #include <linux/of_irq.h> 21 20 #include <linux/platform_device.h> 22 21 23 22 /* IPI agent ID any */ ··· 55 52 #define IPI_MB_CHNL_TX 0 /* IPI mailbox TX channel */ 56 53 #define IPI_MB_CHNL_RX 1 /* IPI mailbox RX channel */ 57 54 55 + /* IPI Message Buffer Information */ 56 + #define RESP_OFFSET 0x20U 57 + #define DEST_OFFSET 0x40U 58 + #define IPI_BUF_SIZE 0x20U 59 + #define DST_BIT_POS 9U 60 + #define SRC_BITMASK GENMASK(11, 8) 61 + 62 + #define MAX_SGI 16 63 + 58 64 /** 59 65 * struct zynqmp_ipi_mchan - Description of a Xilinx ZynqMP IPI mailbox channel 60 66 * @is_opened: indicate if the IPI channel is opened ··· 84 72 unsigned int chan_type; 85 73 }; 86 74 75 + struct zynqmp_ipi_mbox; 76 + 77 + typedef int (*setup_ipi_fn)(struct zynqmp_ipi_mbox *ipi_mbox, struct device_node *node); 78 + 87 79 /** 88 80 * struct zynqmp_ipi_mbox - Description of a ZynqMP IPI mailbox 89 81 * platform data. ··· 97 81 * @remote_id: remote IPI agent ID 98 82 * @mbox: mailbox Controller 99 83 * @mchans: array for channels, tx channel and rx channel. 
84 + * @setup_ipi_fn: Function Pointer to set up IPI Channels 100 85 */ 101 86 struct zynqmp_ipi_mbox { 102 87 struct zynqmp_ipi_pdata *pdata; ··· 105 88 u32 remote_id; 106 89 struct mbox_controller mbox; 107 90 struct zynqmp_ipi_mchan mchans[2]; 91 + setup_ipi_fn setup_ipi_fn; 108 92 }; 109 93 110 94 /** ··· 116 98 * @irq: IPI agent interrupt ID 117 99 * @method: IPI SMC or HVC is going to be used 118 100 * @local_id: local IPI agent ID 101 + * @virq_sgi: IRQ number mapped to SGI 119 102 * @num_mboxes: number of mailboxes of this IPI agent 120 103 * @ipi_mboxes: IPI mailboxes of this IPI agent 121 104 */ ··· 125 106 int irq; 126 107 unsigned int method; 127 108 u32 local_id; 109 + int virq_sgi; 128 110 int num_mboxes; 129 111 struct zynqmp_ipi_mbox ipi_mboxes[] __counted_by(num_mboxes); 130 112 }; 113 + 114 + static DEFINE_PER_CPU(struct zynqmp_ipi_pdata *, per_cpu_pdata); 131 115 132 116 static struct device_driver zynqmp_ipi_mbox_driver = { 133 117 .owner = THIS_MODULE, ··· 185 163 if (ret > 0 && ret & IPI_MB_STATUS_RECV_PENDING) { 186 164 if (mchan->is_opened) { 187 165 msg = mchan->rx_buf; 188 - msg->len = mchan->req_buf_size; 189 - memcpy_fromio(msg->data, mchan->req_buf, 190 - msg->len); 166 + if (msg) { 167 + msg->len = mchan->req_buf_size; 168 + memcpy_fromio(msg->data, mchan->req_buf, 169 + msg->len); 170 + } 191 171 mbox_chan_received_data(chan, (void *)msg); 192 172 status = IRQ_HANDLED; 193 173 } 194 174 } 195 175 } 196 176 return status; 177 + } 178 + 179 + static irqreturn_t zynqmp_sgi_interrupt(int irq, void *data) 180 + { 181 + struct zynqmp_ipi_pdata **pdata_ptr = data; 182 + struct zynqmp_ipi_pdata *pdata = *pdata_ptr; 183 + 184 + return zynqmp_ipi_interrupt(irq, pdata); 197 185 } 198 186 199 187 /** ··· 307 275 308 276 if (mchan->chan_type == IPI_MB_CHNL_TX) { 309 277 /* Send request message */ 310 - if (msg && msg->len > mchan->req_buf_size) { 278 + if (msg && msg->len > mchan->req_buf_size && mchan->req_buf) { 311 279 dev_err(dev, "channel %d 
message length %u > max %lu\n", 312 280 mchan->chan_type, (unsigned int)msg->len, 313 281 mchan->req_buf_size); 314 282 return -EINVAL; 315 283 } 316 - if (msg && msg->len) 284 + if (msg && msg->len && mchan->req_buf) 317 285 memcpy_toio(mchan->req_buf, msg->data, msg->len); 318 286 /* Kick IPI mailbox to send message */ 319 287 arg0 = SMC_IPI_MAILBOX_NOTIFY; 320 288 zynqmp_ipi_fw_call(ipi_mbox, arg0, 0, &res); 321 289 } else { 322 290 /* Send response message */ 323 - if (msg && msg->len > mchan->resp_buf_size) { 291 + if (msg && msg->len > mchan->resp_buf_size && mchan->resp_buf) { 324 292 dev_err(dev, "channel %d message length %u > max %lu\n", 325 293 mchan->chan_type, (unsigned int)msg->len, 326 294 mchan->resp_buf_size); 327 295 return -EINVAL; 328 296 } 329 - if (msg && msg->len) 297 + if (msg && msg->len && mchan->resp_buf) 330 298 memcpy_toio(mchan->resp_buf, msg->data, msg->len); 331 299 arg0 = SMC_IPI_MAILBOX_ACK; 332 300 zynqmp_ipi_fw_call(ipi_mbox, arg0, IPI_SMC_ACK_EIRQ_MASK, ··· 447 415 return chan; 448 416 } 449 417 450 - static const struct of_device_id zynqmp_ipi_of_match[] = { 451 - { .compatible = "xlnx,zynqmp-ipi-mailbox" }, 452 - {}, 453 - }; 454 - MODULE_DEVICE_TABLE(of, zynqmp_ipi_of_match); 455 - 456 418 /** 457 419 * zynqmp_ipi_mbox_get_buf_res - Get buffer resource from the IPI dev node 458 420 * ··· 496 470 static int zynqmp_ipi_mbox_probe(struct zynqmp_ipi_mbox *ipi_mbox, 497 471 struct device_node *node) 498 472 { 499 - struct zynqmp_ipi_mchan *mchan; 500 473 struct mbox_chan *chans; 501 474 struct mbox_controller *mbox; 502 - struct resource res; 503 475 struct device *dev, *mdev; 504 - const char *name; 505 476 int ret; 506 477 507 478 dev = ipi_mbox->pdata->dev; ··· 516 493 put_device(&ipi_mbox->dev); 517 494 return ret; 518 495 } 496 + mdev = &ipi_mbox->dev; 497 + 498 + /* Get the IPI remote agent ID */ 499 + ret = of_property_read_u32(node, "xlnx,ipi-id", &ipi_mbox->remote_id); 500 + if (ret < 0) { 501 + dev_err(dev, "No IPI 
remote ID is specified.\n"); 502 + return ret; 503 + } 504 + 505 + ret = ipi_mbox->setup_ipi_fn(ipi_mbox, node); 506 + if (ret) { 507 + dev_err(dev, "Failed to set up IPI Buffers.\n"); 508 + return ret; 509 + } 510 + 511 + mbox = &ipi_mbox->mbox; 512 + mbox->dev = mdev; 513 + mbox->ops = &zynqmp_ipi_chan_ops; 514 + mbox->num_chans = 2; 515 + mbox->txdone_irq = false; 516 + mbox->txdone_poll = true; 517 + mbox->txpoll_period = 5; 518 + mbox->of_xlate = zynqmp_ipi_of_xlate; 519 + chans = devm_kzalloc(mdev, 2 * sizeof(*chans), GFP_KERNEL); 520 + if (!chans) 521 + return -ENOMEM; 522 + mbox->chans = chans; 523 + chans[IPI_MB_CHNL_TX].con_priv = &ipi_mbox->mchans[IPI_MB_CHNL_TX]; 524 + chans[IPI_MB_CHNL_RX].con_priv = &ipi_mbox->mchans[IPI_MB_CHNL_RX]; 525 + ipi_mbox->mchans[IPI_MB_CHNL_TX].chan_type = IPI_MB_CHNL_TX; 526 + ipi_mbox->mchans[IPI_MB_CHNL_RX].chan_type = IPI_MB_CHNL_RX; 527 + ret = devm_mbox_controller_register(mdev, mbox); 528 + if (ret) 529 + dev_err(mdev, 530 + "Failed to register mbox_controller(%d)\n", ret); 531 + else 532 + dev_info(mdev, 533 + "Registered ZynqMP IPI mbox with TX/RX channels.\n"); 534 + return ret; 535 + } 536 + 537 + /** 538 + * zynqmp_ipi_setup - set up IPI Buffers for classic flow 539 + * 540 + * @ipi_mbox: pointer to IPI mailbox private data structure 541 + * @node: IPI mailbox device node 542 + * 543 + * This will be used to set up IPI Buffers for ZynqMP SOC if user 544 + * wishes to use classic driver usage model on new SOC's with only 545 + * buffered IPIs. 546 + * 547 + * Note that bufferless IPIs and mixed usage of buffered and bufferless 548 + * IPIs are not supported with this flow. 549 + * 550 + * This will be invoked with compatible string "xlnx,zynqmp-ipi-mailbox". 
551 + * 552 + * Return: 0 for success, negative value for failure 553 + */ 554 + static int zynqmp_ipi_setup(struct zynqmp_ipi_mbox *ipi_mbox, 555 + struct device_node *node) 556 + { 557 + struct zynqmp_ipi_mchan *mchan; 558 + struct device *mdev; 559 + struct resource res; 560 + const char *name; 561 + int ret; 562 + 519 563 mdev = &ipi_mbox->dev; 520 564 521 565 mchan = &ipi_mbox->mchans[IPI_MB_CHNL_TX]; ··· 659 569 if (!mchan->rx_buf) 660 570 return -ENOMEM; 661 571 662 - /* Get the IPI remote agent ID */ 663 - ret = of_property_read_u32(node, "xlnx,ipi-id", &ipi_mbox->remote_id); 664 - if (ret < 0) { 665 - dev_err(dev, "No IPI remote ID is specified.\n"); 572 + return 0; 573 + } 574 + 575 + /** 576 + * versal_ipi_setup - Set up IPIs to support mixed usage of 577 + * Buffered and Bufferless IPIs. 578 + * 579 + * @ipi_mbox: pointer to IPI mailbox private data structure 580 + * @node: IPI mailbox device node 581 + * 582 + * Return: 0 for success, negative value for failure 583 + */ 584 + static int versal_ipi_setup(struct zynqmp_ipi_mbox *ipi_mbox, 585 + struct device_node *node) 586 + { 587 + struct zynqmp_ipi_mchan *tx_mchan, *rx_mchan; 588 + struct resource host_res, remote_res; 589 + struct device_node *parent_node; 590 + int host_idx, remote_idx; 591 + struct device *mdev; 592 + 593 + tx_mchan = &ipi_mbox->mchans[IPI_MB_CHNL_TX]; 594 + rx_mchan = &ipi_mbox->mchans[IPI_MB_CHNL_RX]; 595 + parent_node = of_get_parent(node); 596 + mdev = &ipi_mbox->dev; 597 + 598 + host_idx = zynqmp_ipi_mbox_get_buf_res(parent_node, "msg", &host_res); 599 + remote_idx = zynqmp_ipi_mbox_get_buf_res(node, "msg", &remote_res); 600 + 601 + /* 602 + * Only set up buffers if both sides claim to have msg buffers. 603 + * This is because each buffered IPI's corresponding msg buffers 604 + * are reserved for use by other buffered IPI's. 
605 + */ 606 + if (!host_idx && !remote_idx) { 607 + u32 host_src, host_dst, remote_src, remote_dst; 608 + u32 buff_sz; 609 + 610 + buff_sz = resource_size(&host_res); 611 + 612 + host_src = host_res.start & SRC_BITMASK; 613 + remote_src = remote_res.start & SRC_BITMASK; 614 + 615 + host_dst = (host_src >> DST_BIT_POS) * DEST_OFFSET; 616 + remote_dst = (remote_src >> DST_BIT_POS) * DEST_OFFSET; 617 + 618 + /* Validate that IPI IDs is within IPI Message buffer space. */ 619 + if (host_dst >= buff_sz || remote_dst >= buff_sz) { 620 + dev_err(mdev, 621 + "Invalid IPI Message buffer values: %x %x\n", 622 + host_dst, remote_dst); 623 + return -EINVAL; 624 + } 625 + 626 + tx_mchan->req_buf = devm_ioremap(mdev, 627 + host_res.start | remote_dst, 628 + IPI_BUF_SIZE); 629 + if (!tx_mchan->req_buf) { 630 + dev_err(mdev, "Unable to map IPI buffer I/O memory\n"); 631 + return -ENOMEM; 632 + } 633 + 634 + tx_mchan->resp_buf = devm_ioremap(mdev, 635 + (remote_res.start | host_dst) + 636 + RESP_OFFSET, IPI_BUF_SIZE); 637 + if (!tx_mchan->resp_buf) { 638 + dev_err(mdev, "Unable to map IPI buffer I/O memory\n"); 639 + return -ENOMEM; 640 + } 641 + 642 + rx_mchan->req_buf = devm_ioremap(mdev, 643 + remote_res.start | host_dst, 644 + IPI_BUF_SIZE); 645 + if (!rx_mchan->req_buf) { 646 + dev_err(mdev, "Unable to map IPI buffer I/O memory\n"); 647 + return -ENOMEM; 648 + } 649 + 650 + rx_mchan->resp_buf = devm_ioremap(mdev, 651 + (host_res.start | remote_dst) + 652 + RESP_OFFSET, IPI_BUF_SIZE); 653 + if (!rx_mchan->resp_buf) { 654 + dev_err(mdev, "Unable to map IPI buffer I/O memory\n"); 655 + return -ENOMEM; 656 + } 657 + 658 + tx_mchan->resp_buf_size = IPI_BUF_SIZE; 659 + tx_mchan->req_buf_size = IPI_BUF_SIZE; 660 + tx_mchan->rx_buf = devm_kzalloc(mdev, IPI_BUF_SIZE + 661 + sizeof(struct zynqmp_ipi_message), 662 + GFP_KERNEL); 663 + if (!tx_mchan->rx_buf) 664 + return -ENOMEM; 665 + 666 + rx_mchan->resp_buf_size = IPI_BUF_SIZE; 667 + rx_mchan->req_buf_size = IPI_BUF_SIZE; 668 + 
rx_mchan->rx_buf = devm_kzalloc(mdev, IPI_BUF_SIZE + 669 + sizeof(struct zynqmp_ipi_message), 670 + GFP_KERNEL); 671 + if (!rx_mchan->rx_buf) 672 + return -ENOMEM; 673 + } 674 + 675 + return 0; 676 + } 677 + 678 + static int xlnx_mbox_cpuhp_start(unsigned int cpu) 679 + { 680 + struct zynqmp_ipi_pdata *pdata; 681 + 682 + pdata = get_cpu_var(per_cpu_pdata); 683 + put_cpu_var(per_cpu_pdata); 684 + enable_percpu_irq(pdata->virq_sgi, IRQ_TYPE_NONE); 685 + 686 + return 0; 687 + } 688 + 689 + static int xlnx_mbox_cpuhp_down(unsigned int cpu) 690 + { 691 + struct zynqmp_ipi_pdata *pdata; 692 + 693 + pdata = get_cpu_var(per_cpu_pdata); 694 + put_cpu_var(per_cpu_pdata); 695 + disable_percpu_irq(pdata->virq_sgi); 696 + 697 + return 0; 698 + } 699 + 700 + static void xlnx_disable_percpu_irq(void *data) 701 + { 702 + struct zynqmp_ipi_pdata *pdata; 703 + 704 + pdata = *this_cpu_ptr(&per_cpu_pdata); 705 + 706 + disable_percpu_irq(pdata->virq_sgi); 707 + } 708 + 709 + static int xlnx_mbox_init_sgi(struct platform_device *pdev, 710 + int sgi_num, 711 + struct zynqmp_ipi_pdata *pdata) 712 + { 713 + int ret = 0; 714 + int cpu; 715 + 716 + /* 717 + * IRQ related structures are used for the following: 718 + * for each SGI interrupt ensure its mapped by GIC IRQ domain 719 + * and that each corresponding linux IRQ for the HW IRQ has 720 + * a handler for when receiving an interrupt from the remote 721 + * processor. 722 + */ 723 + struct irq_domain *domain; 724 + struct irq_fwspec sgi_fwspec; 725 + struct device_node *interrupt_parent = NULL; 726 + struct device *dev = &pdev->dev; 727 + 728 + /* Find GIC controller to map SGIs. */ 729 + interrupt_parent = of_irq_find_parent(dev->of_node); 730 + if (!interrupt_parent) { 731 + dev_err(&pdev->dev, "Failed to find property for Interrupt parent\n"); 732 + return -EINVAL; 733 + } 734 + 735 + /* Each SGI needs to be associated with GIC's IRQ domain. 
*/ 736 + domain = irq_find_host(interrupt_parent); 737 + of_node_put(interrupt_parent); 738 + 739 + /* Each mapping needs GIC domain when finding IRQ mapping. */ 740 + sgi_fwspec.fwnode = domain->fwnode; 741 + 742 + /* 743 + * When irq domain looks at mapping each arg is as follows: 744 + * 3 args for: interrupt type (SGI), interrupt # (set later), type 745 + */ 746 + sgi_fwspec.param_count = 1; 747 + 748 + /* Set SGI's hwirq */ 749 + sgi_fwspec.param[0] = sgi_num; 750 + pdata->virq_sgi = irq_create_fwspec_mapping(&sgi_fwspec); 751 + 752 + for_each_possible_cpu(cpu) 753 + per_cpu(per_cpu_pdata, cpu) = pdata; 754 + 755 + ret = request_percpu_irq(pdata->virq_sgi, zynqmp_sgi_interrupt, pdev->name, 756 + &per_cpu_pdata); 757 + WARN_ON(ret); 758 + if (ret) { 759 + irq_dispose_mapping(pdata->virq_sgi); 666 760 return ret; 667 761 } 668 762 669 - mbox = &ipi_mbox->mbox; 670 - mbox->dev = mdev; 671 - mbox->ops = &zynqmp_ipi_chan_ops; 672 - mbox->num_chans = 2; 673 - mbox->txdone_irq = false; 674 - mbox->txdone_poll = true; 675 - mbox->txpoll_period = 5; 676 - mbox->of_xlate = zynqmp_ipi_of_xlate; 677 - chans = devm_kzalloc(mdev, 2 * sizeof(*chans), GFP_KERNEL); 678 - if (!chans) 679 - return -ENOMEM; 680 - mbox->chans = chans; 681 - chans[IPI_MB_CHNL_TX].con_priv = &ipi_mbox->mchans[IPI_MB_CHNL_TX]; 682 - chans[IPI_MB_CHNL_RX].con_priv = &ipi_mbox->mchans[IPI_MB_CHNL_RX]; 683 - ipi_mbox->mchans[IPI_MB_CHNL_TX].chan_type = IPI_MB_CHNL_TX; 684 - ipi_mbox->mchans[IPI_MB_CHNL_RX].chan_type = IPI_MB_CHNL_RX; 685 - ret = devm_mbox_controller_register(mdev, mbox); 686 - if (ret) 687 - dev_err(mdev, 688 - "Failed to register mbox_controller(%d)\n", ret); 689 - else 690 - dev_info(mdev, 691 - "Registered ZynqMP IPI mbox with TX/RX channels.\n"); 763 + irq_to_desc(pdata->virq_sgi); 764 + irq_set_status_flags(pdata->virq_sgi, IRQ_PER_CPU); 765 + 766 + /* Setup function for the CPU hot-plug cases */ 767 + cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "mailbox/sgi:starting", 768 + 
xlnx_mbox_cpuhp_start, xlnx_mbox_cpuhp_down); 769 + 692 770 return ret; 771 + } 772 + 773 + static void xlnx_mbox_cleanup_sgi(struct zynqmp_ipi_pdata *pdata) 774 + { 775 + cpuhp_remove_state(CPUHP_AP_ONLINE_DYN); 776 + 777 + on_each_cpu(xlnx_disable_percpu_irq, NULL, 1); 778 + 779 + irq_clear_status_flags(pdata->virq_sgi, IRQ_PER_CPU); 780 + free_percpu_irq(pdata->virq_sgi, &per_cpu_pdata); 781 + irq_dispose_mapping(pdata->virq_sgi); 693 782 } 694 783 695 784 /** ··· 880 611 { 881 612 struct zynqmp_ipi_mbox *ipi_mbox; 882 613 int i; 614 + 615 + if (pdata->irq < MAX_SGI) 616 + xlnx_mbox_cleanup_sgi(pdata); 883 617 884 618 i = pdata->num_mboxes; 885 619 for (; i >= 0; i--) { ··· 899 627 { 900 628 struct device *dev = &pdev->dev; 901 629 struct device_node *nc, *np = pdev->dev.of_node; 902 - struct zynqmp_ipi_pdata *pdata; 630 + struct zynqmp_ipi_pdata __percpu *pdata; 631 + struct of_phandle_args out_irq; 903 632 struct zynqmp_ipi_mbox *mbox; 904 633 int num_mboxes, ret = -EINVAL; 634 + setup_ipi_fn ipi_fn; 905 635 906 636 num_mboxes = of_get_available_child_count(np); 907 637 if (num_mboxes == 0) { ··· 924 650 return ret; 925 651 } 926 652 653 + ipi_fn = (setup_ipi_fn)device_get_match_data(&pdev->dev); 654 + if (!ipi_fn) { 655 + dev_err(dev, 656 + "Mbox Compatible String is missing IPI Setup fn.\n"); 657 + return -ENODEV; 658 + } 659 + 927 660 pdata->num_mboxes = num_mboxes; 928 661 929 662 mbox = pdata->ipi_mboxes; 663 + mbox->setup_ipi_fn = ipi_fn; 664 + 930 665 for_each_available_child_of_node(np, nc) { 931 666 mbox->pdata = pdata; 932 667 ret = zynqmp_ipi_mbox_probe(mbox, nc); ··· 948 665 mbox++; 949 666 } 950 667 951 - /* IPI IRQ */ 952 - ret = platform_get_irq(pdev, 0); 953 - if (ret < 0) 668 + ret = of_irq_parse_one(dev_of_node(dev), 0, &out_irq); 669 + if (ret < 0) { 670 + dev_err(dev, "failed to parse interrupts\n"); 954 671 goto free_mbox_dev; 672 + } 673 + ret = out_irq.args[1]; 955 674 956 - pdata->irq = ret; 957 - ret = devm_request_irq(dev, pdata->irq, 
zynqmp_ipi_interrupt, 958 - IRQF_SHARED, dev_name(dev), pdata); 675 + /* 676 + * If Interrupt number is in SGI range, then request SGI else request 677 + * IPI system IRQ. 678 + */ 679 + if (ret < MAX_SGI) { 680 + pdata->irq = ret; 681 + ret = xlnx_mbox_init_sgi(pdev, pdata->irq, pdata); 682 + if (ret) 683 + goto free_mbox_dev; 684 + } else { 685 + ret = platform_get_irq(pdev, 0); 686 + if (ret < 0) 687 + goto free_mbox_dev; 688 + 689 + pdata->irq = ret; 690 + ret = devm_request_irq(dev, pdata->irq, zynqmp_ipi_interrupt, 691 + IRQF_SHARED, dev_name(dev), pdata); 692 + } 693 + 959 694 if (ret) { 960 695 dev_err(dev, "IRQ %d is not requested successfully.\n", 961 696 pdata->irq); ··· 995 694 pdata = platform_get_drvdata(pdev); 996 695 zynqmp_ipi_free_mboxes(pdata); 997 696 } 697 + 698 + static const struct of_device_id zynqmp_ipi_of_match[] = { 699 + { .compatible = "xlnx,zynqmp-ipi-mailbox", 700 + .data = &zynqmp_ipi_setup, 701 + }, 702 + { .compatible = "xlnx,versal-ipi-mailbox", 703 + .data = &versal_ipi_setup, 704 + }, 705 + {}, 706 + }; 707 + MODULE_DEVICE_TABLE(of, zynqmp_ipi_of_match); 998 708 999 709 static struct platform_driver zynqmp_ipi_driver = { 1000 710 .probe = zynqmp_ipi_probe,
+13
include/dt-bindings/arm/mhuv3-dt.h
···
 1	+ /* SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) */
 2	+ /*
 3	+  * This header provides constants for the defined MHUv3 types.
 4	+  */
 5	+
 6	+ #ifndef _DT_BINDINGS_ARM_MHUV3_DT_H
 7	+ #define _DT_BINDINGS_ARM_MHUV3_DT_H
 8	+
 9	+ #define DBE_EXT 0
10	+ #define FCE_EXT 1
11	+ #define FE_EXT 2
12	+
13	+ #endif /* _DT_BINDINGS_ARM_MHUV3_DT_H */
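These constants select which MHUv3 extension (doorbell, fast channel, or FIFO) a client channel rides on. A hypothetical consumer node might reference them like the fragment below; the node names are illustrative, and the exact specifier cell layout is defined by the arm,mhuv3.yaml binding, not by this header:

```dts
/* Illustrative only: an SCMI-style client picking doorbell-extension
 * channels on separate TX and RX MHUv3 blocks. */
#include <dt-bindings/arm/mhuv3-dt.h>

firmware {
	scmi {
		compatible = "arm,scmi";
		mboxes = <&mhu_tx DBE_EXT 0 0>, <&mhu_rx DBE_EXT 0 0>;
		mbox-names = "tx", "rx";
	};
};
```

Because MHUv3 blocks are unidirectional (PBX vs MBX), a bidirectional client always names two distinct controllers, as the binding description in this series explains.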
-13
include/linux/omap-mailbox.h
···
10 10
11 11	#define omap_mbox_message(data) (u32)(mbox_msg_t)(data)
12 12
13	- typedef int __bitwise omap_mbox_irq_t;
14	- #define IRQ_TX ((__force omap_mbox_irq_t) 1)
15	- #define IRQ_RX ((__force omap_mbox_irq_t) 2)
16	-
17	- struct mbox_chan;
18	- struct mbox_client;
19	-
20	- struct mbox_chan *omap_mbox_request_channel(struct mbox_client *cl,
21	-						const char *chan_name);
22	-
23	- void omap_mbox_enable_irq(struct mbox_chan *chan, omap_mbox_irq_t irq);
24	- void omap_mbox_disable_irq(struct mbox_chan *chan, omap_mbox_irq_t irq);
25	-
26 13	#endif /* OMAP_MAILBOX_H */