Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'riscv-for-linus-6.18-mw2' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux

Pull more RISC-V updates from Paul Walmsley:

- Support for the RISC-V-standardized RPMI interface.

RPMI is a platform management communication mechanism between OSes
running on application processors and a remote platform management
processor, similar to ARM SCMI, TI SCI, etc. This includes irqchip,
mailbox, and clk changes.

- Support for the RISC-V-standardized MPXY SBI extension.

MPXY is a RISC-V-specific standard implementing a shared memory
mailbox between S-mode operating systems (e.g., Linux) and M-mode
firmware (e.g., OpenSBI). It is part of this PR since one of its use
cases is to enable M-mode firmware to act as a single RPMI client for
all RPMI activity on a core (including S-mode RPMI activity).
Includes a mailbox driver.

- Some ACPI-related updates to enable the use of RPMI and MPXY.

- The addition of Linux-wide memcpy_{from,to}_le32() static inline
functions, for RPMI use.

- An ACPI Kconfig change to enable boot logos on any ACPI-using
architecture (including RISC-V).

- A RISC-V defconfig change to add GPIO keyboard and event device
support, for front panel shutdown or reboot buttons.
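Of the items above, the new memcpy_{from,to}_le32() helpers are the easiest to picture. Below is a hedged userspace sketch of what word-wise little-endian copy helpers like these might look like — illustrative only, not the in-kernel implementation (which uses the kernel's own byte-order helpers); the byte-access helpers get_le32()/put_le32() are local to this sketch:

```c
#include <stddef.h>
#include <stdint.h>

/* Read/write a 32-bit value in little-endian byte order,
 * independent of the host's native endianness. */
static inline uint32_t get_le32(const uint8_t *p)
{
	return (uint32_t)p[0] | (uint32_t)p[1] << 8 |
	       (uint32_t)p[2] << 16 | (uint32_t)p[3] << 24;
}

static inline void put_le32(uint8_t *p, uint32_t v)
{
	p[0] = v;
	p[1] = v >> 8;
	p[2] = v >> 16;
	p[3] = v >> 24;
}

/* Copy 'count' 32-bit words from a little-endian buffer to
 * native byte order. */
static inline void memcpy_from_le32(uint32_t *dst, const uint32_t *src,
				    size_t count)
{
	for (size_t i = 0; i < count; i++)
		dst[i] = get_le32((const uint8_t *)&src[i]);
}

/* Copy 'count' 32-bit words from native byte order to a
 * little-endian buffer. */
static inline void memcpy_to_le32(uint32_t *dst, const uint32_t *src,
				  size_t count)
{
	for (size_t i = 0; i < count; i++)
		put_le32((uint8_t *)&dst[i], src[i]);
}
```

On a little-endian host both functions degenerate to a plain copy; the point of having them kernel-wide is that transport code like RPMI can stay endian-clean on big-endian configurations too.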

* tag 'riscv-for-linus-6.18-mw2' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux: (26 commits)
clk: COMMON_CLK_RPMI should depend on RISCV
ACPI: support BGRT table on RISC-V
MAINTAINERS: Add entry for RISC-V RPMI and MPXY drivers
RISC-V: Enable GPIO keyboard and event device in RV64 defconfig
irqchip/riscv-rpmi-sysmsi: Add ACPI support
mailbox/riscv-sbi-mpxy: Add ACPI support
irqchip/irq-riscv-imsic-early: Export imsic_acpi_get_fwnode()
ACPI: RISC-V: Add RPMI System MSI to GSI mapping
ACPI: RISC-V: Add support to update gsi range
ACPI: RISC-V: Create interrupt controller list in sorted order
ACPI: scan: Update honor list for RPMI System MSI
ACPI: Add support for nargs_prop in acpi_fwnode_get_reference_args()
ACPI: property: Refactor acpi_fwnode_get_reference_args() to support nargs_prop
irqchip: Add driver for the RPMI system MSI service group
dt-bindings: Add RPMI system MSI interrupt controller bindings
dt-bindings: Add RPMI system MSI message proxy bindings
clk: Add clock driver for the RISC-V RPMI clock service group
dt-bindings: clock: Add RPMI clock service controller bindings
dt-bindings: clock: Add RPMI clock service message proxy bindings
mailbox: Add RISC-V SBI message proxy (MPXY) based mailbox driver
...

+2981 -84
+64
Documentation/devicetree/bindings/clock/riscv,rpmi-clock.yaml
+ # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+ %YAML 1.2
+ ---
+ $id: http://devicetree.org/schemas/clock/riscv,rpmi-clock.yaml#
+ $schema: http://devicetree.org/meta-schemas/core.yaml#
+
+ title: RISC-V RPMI clock service group based clock controller
+
+ maintainers:
+   - Anup Patel <anup@brainfault.org>
+
+ description: |
+   The RISC-V Platform Management Interface (RPMI) [1] defines a
+   messaging protocol which is modular and extensible. The supervisor
+   software can send/receive RPMI messages via SBI MPXY extension [2]
+   or some dedicated supervisor-mode RPMI transport.
+
+   The RPMI specification [1] defines clock service group for accessing
+   system clocks managed by a platform microcontroller. The supervisor
+   software can access RPMI clock service group via SBI MPXY channel or
+   some dedicated supervisor-mode RPMI transport.
+
+   ===========================================
+   References
+   ===========================================
+
+   [1] RISC-V Platform Management Interface (RPMI) v1.0 (or higher)
+       https://github.com/riscv-non-isa/riscv-rpmi/releases
+
+   [2] RISC-V Supervisor Binary Interface (SBI) v3.0 (or higher)
+       https://github.com/riscv-non-isa/riscv-sbi-doc/releases
+
+ properties:
+   compatible:
+     description:
+       Intended for use by the supervisor software.
+     const: riscv,rpmi-clock
+
+   mboxes:
+     maxItems: 1
+     description:
+       Mailbox channel of the underlying RPMI transport or SBI message proxy channel.
+
+   "#clock-cells":
+     const: 1
+     description:
+       Platform specific CLOCK_ID as defined by the RISC-V Platform Management
+       Interface (RPMI) specification.
+
+ required:
+   - compatible
+   - mboxes
+   - "#clock-cells"
+
+ additionalProperties: false
+
+ examples:
+   - |
+     clock-controller {
+         compatible = "riscv,rpmi-clock";
+         mboxes = <&mpxy_mbox 0x1000 0x0>;
+         #clock-cells = <1>;
+     };
+ ...
+64
Documentation/devicetree/bindings/clock/riscv,rpmi-mpxy-clock.yaml
+ # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+ %YAML 1.2
+ ---
+ $id: http://devicetree.org/schemas/clock/riscv,rpmi-mpxy-clock.yaml#
+ $schema: http://devicetree.org/meta-schemas/core.yaml#
+
+ title: RISC-V RPMI clock service group based message proxy
+
+ maintainers:
+   - Anup Patel <anup@brainfault.org>
+
+ description: |
+   The RISC-V Platform Management Interface (RPMI) [1] defines a
+   messaging protocol which is modular and extensible. The supervisor
+   software can send/receive RPMI messages via SBI MPXY extension [2]
+   or some dedicated supervisor-mode RPMI transport.
+
+   The RPMI specification [1] defines clock service group for accessing
+   system clocks managed by a platform microcontroller. The SBI implementation
+   (machine mode firmware or hypervisor) can implement an SBI MPXY channel
+   to allow RPMI clock service group access to the supervisor software.
+
+   ===========================================
+   References
+   ===========================================
+
+   [1] RISC-V Platform Management Interface (RPMI) v1.0 (or higher)
+       https://github.com/riscv-non-isa/riscv-rpmi/releases
+
+   [2] RISC-V Supervisor Binary Interface (SBI) v3.0 (or higher)
+       https://github.com/riscv-non-isa/riscv-sbi-doc/releases
+
+ properties:
+   compatible:
+     description:
+       Intended for use by the SBI implementation.
+     const: riscv,rpmi-mpxy-clock
+
+   mboxes:
+     maxItems: 1
+     description:
+       Mailbox channel of the underlying RPMI transport.
+
+   riscv,sbi-mpxy-channel-id:
+     $ref: /schemas/types.yaml#/definitions/uint32
+     description:
+       The SBI MPXY channel id to be used for providing RPMI access to
+       the supervisor software.
+
+ required:
+   - compatible
+   - mboxes
+   - riscv,sbi-mpxy-channel-id
+
+ additionalProperties: false
+
+ examples:
+   - |
+     clock-service {
+         compatible = "riscv,rpmi-mpxy-clock";
+         mboxes = <&rpmi_shmem_mbox 0x8>;
+         riscv,sbi-mpxy-channel-id = <0x1000>;
+     };
+ ...
+67
Documentation/devicetree/bindings/interrupt-controller/riscv,rpmi-mpxy-system-msi.yaml
+ # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+ %YAML 1.2
+ ---
+ $id: http://devicetree.org/schemas/interrupt-controller/riscv,rpmi-mpxy-system-msi.yaml#
+ $schema: http://devicetree.org/meta-schemas/core.yaml#
+
+ title: RISC-V RPMI system MSI service group based message proxy
+
+ maintainers:
+   - Anup Patel <anup@brainfault.org>
+
+ description: |
+   The RISC-V Platform Management Interface (RPMI) [1] defines a
+   messaging protocol which is modular and extensible. The supervisor
+   software can send/receive RPMI messages via SBI MPXY extension [2]
+   or some dedicated supervisor-mode RPMI transport.
+
+   The RPMI specification [1] defines system MSI service group which
+   allow application processors to receive MSIs upon system events
+   such as P2A doorbell, graceful shutdown/reboot request, CPU hotplug
+   event, memory hotplug event, etc from the platform microcontroller.
+   The SBI implementation (machine mode firmware or hypervisor) can
+   implement an SBI MPXY channel to allow RPMI system MSI service
+   group access to the supervisor software.
+
+   ===========================================
+   References
+   ===========================================
+
+   [1] RISC-V Platform Management Interface (RPMI) v1.0 (or higher)
+       https://github.com/riscv-non-isa/riscv-rpmi/releases
+
+   [2] RISC-V Supervisor Binary Interface (SBI) v3.0 (or higher)
+       https://github.com/riscv-non-isa/riscv-sbi-doc/releases
+
+ properties:
+   compatible:
+     description:
+       Intended for use by the SBI implementation.
+     const: riscv,rpmi-mpxy-system-msi
+
+   mboxes:
+     maxItems: 1
+     description:
+       Mailbox channel of the underlying RPMI transport.
+
+   riscv,sbi-mpxy-channel-id:
+     $ref: /schemas/types.yaml#/definitions/uint32
+     description:
+       The SBI MPXY channel id to be used for providing RPMI access to
+       the supervisor software.
+
+ required:
+   - compatible
+   - mboxes
+   - riscv,sbi-mpxy-channel-id
+
+ additionalProperties: false
+
+ examples:
+   - |
+     interrupt-controller {
+         compatible = "riscv,rpmi-mpxy-system-msi";
+         mboxes = <&rpmi_shmem_mbox 0x2>;
+         riscv,sbi-mpxy-channel-id = <0x2000>;
+     };
+ ...
+74
Documentation/devicetree/bindings/interrupt-controller/riscv,rpmi-system-msi.yaml
+ # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+ %YAML 1.2
+ ---
+ $id: http://devicetree.org/schemas/interrupt-controller/riscv,rpmi-system-msi.yaml#
+ $schema: http://devicetree.org/meta-schemas/core.yaml#
+
+ title: RISC-V RPMI system MSI service group based interrupt controller
+
+ maintainers:
+   - Anup Patel <anup@brainfault.org>
+
+ description: |
+   The RISC-V Platform Management Interface (RPMI) [1] defines a
+   messaging protocol which is modular and extensible. The supervisor
+   software can send/receive RPMI messages via SBI MPXY extension [2]
+   or some dedicated supervisor-mode RPMI transport.
+
+   The RPMI specification [1] defines system MSI service group which
+   allow application processors to receive MSIs upon system events
+   such as P2A doorbell, graceful shutdown/reboot request, CPU hotplug
+   event, memory hotplug event, etc from the platform microcontroller.
+   The supervisor software can access RPMI system MSI service group via
+   SBI MPXY channel or some dedicated supervisor-mode RPMI transport.
+
+   ===========================================
+   References
+   ===========================================
+
+   [1] RISC-V Platform Management Interface (RPMI) v1.0 (or higher)
+       https://github.com/riscv-non-isa/riscv-rpmi/releases
+
+   [2] RISC-V Supervisor Binary Interface (SBI) v3.0 (or higher)
+       https://github.com/riscv-non-isa/riscv-sbi-doc/releases
+
+ allOf:
+   - $ref: /schemas/interrupt-controller.yaml#
+
+ properties:
+   compatible:
+     description:
+       Intended for use by the supervisor software.
+     const: riscv,rpmi-system-msi
+
+   mboxes:
+     maxItems: 1
+     description:
+       Mailbox channel of the underlying RPMI transport or SBI message proxy channel.
+
+   msi-parent: true
+
+   interrupt-controller: true
+
+   "#interrupt-cells":
+     const: 1
+
+ required:
+   - compatible
+   - mboxes
+   - msi-parent
+   - interrupt-controller
+   - "#interrupt-cells"
+
+ additionalProperties: false
+
+ examples:
+   - |
+     interrupt-controller {
+         compatible = "riscv,rpmi-system-msi";
+         mboxes = <&mpxy_mbox 0x2000 0x0>;
+         msi-parent = <&imsic_slevel>;
+         interrupt-controller;
+         #interrupt-cells = <1>;
+     };
+ ...
+124
Documentation/devicetree/bindings/mailbox/riscv,rpmi-shmem-mbox.yaml
+ # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+ %YAML 1.2
+ ---
+ $id: http://devicetree.org/schemas/mailbox/riscv,rpmi-shmem-mbox.yaml#
+ $schema: http://devicetree.org/meta-schemas/core.yaml#
+
+ title: RISC-V Platform Management Interface (RPMI) shared memory mailbox
+
+ maintainers:
+   - Anup Patel <anup@brainfault.org>
+
+ description: |
+   The RISC-V Platform Management Interface (RPMI) [1] defines a common shared
+   memory based RPMI transport. This RPMI shared memory transport integrates as
+   mailbox controller in the SBI implementation or supervisor software whereas
+   each RPMI service group is mailbox client in the SBI implementation and
+   supervisor software.
+
+   ===========================================
+   References
+   ===========================================
+
+   [1] RISC-V Platform Management Interface (RPMI) v1.0 (or higher)
+       https://github.com/riscv-non-isa/riscv-rpmi/releases
+
+ properties:
+   compatible:
+     const: riscv,rpmi-shmem-mbox
+
+   reg:
+     minItems: 2
+     items:
+       - description: A2P request queue base address
+       - description: P2A acknowledgment queue base address
+       - description: P2A request queue base address
+       - description: A2P acknowledgment queue base address
+       - description: A2P doorbell address
+
+   reg-names:
+     minItems: 2
+     items:
+       - const: a2p-req
+       - const: p2a-ack
+       - enum: [ p2a-req, a2p-doorbell ]
+       - const: a2p-ack
+       - const: a2p-doorbell
+
+   interrupts:
+     maxItems: 1
+     description:
+       The RPMI shared memory transport supports P2A doorbell as a wired
+       interrupt and this property specifies the interrupt source.
+
+   msi-parent:
+     description:
+       The RPMI shared memory transport supports P2A doorbell as a system MSI
+       and this property specifies the target MSI controller.
+
+   riscv,slot-size:
+     $ref: /schemas/types.yaml#/definitions/uint32
+     minimum: 64
+     description:
+       Power-of-2 RPMI slot size of the RPMI shared memory transport.
+
+   riscv,a2p-doorbell-value:
+     $ref: /schemas/types.yaml#/definitions/uint32
+     default: 0x1
+     description:
+       Value written to the 32-bit A2P doorbell register.
+
+   riscv,p2a-doorbell-sysmsi-index:
+     $ref: /schemas/types.yaml#/definitions/uint32
+     description:
+       The RPMI shared memory transport supports P2A doorbell as a system MSI
+       and this property specifies system MSI index to be used for configuring
+       the P2A doorbell MSI.
+
+   "#mbox-cells":
+     const: 1
+     description:
+       The first cell specifies RPMI service group ID.
+
+ required:
+   - compatible
+   - reg
+   - reg-names
+   - riscv,slot-size
+   - "#mbox-cells"
+
+ anyOf:
+   - required:
+       - interrupts
+   - required:
+       - msi-parent
+
+ additionalProperties: false
+
+ examples:
+   - |
+     // Example 1 (RPMI shared memory with only 2 queues):
+     mailbox@10080000 {
+         compatible = "riscv,rpmi-shmem-mbox";
+         reg = <0x10080000 0x10000>,
+               <0x10090000 0x10000>;
+         reg-names = "a2p-req", "p2a-ack";
+         msi-parent = <&imsic_mlevel>;
+         riscv,slot-size = <64>;
+         #mbox-cells = <1>;
+     };
+   - |
+     // Example 2 (RPMI shared memory with only 4 queues):
+     mailbox@10001000 {
+         compatible = "riscv,rpmi-shmem-mbox";
+         reg = <0x10001000 0x800>,
+               <0x10001800 0x800>,
+               <0x10002000 0x800>,
+               <0x10002800 0x800>,
+               <0x10003000 0x4>;
+         reg-names = "a2p-req", "p2a-ack", "p2a-req", "a2p-ack", "a2p-doorbell";
+         msi-parent = <&imsic_mlevel>;
+         riscv,slot-size = <64>;
+         riscv,a2p-doorbell-value = <0x00008000>;
+         #mbox-cells = <1>;
+     };
+51
Documentation/devicetree/bindings/mailbox/riscv,sbi-mpxy-mbox.yaml
+ # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+ %YAML 1.2
+ ---
+ $id: http://devicetree.org/schemas/mailbox/riscv,sbi-mpxy-mbox.yaml#
+ $schema: http://devicetree.org/meta-schemas/core.yaml#
+
+ title: RISC-V SBI Message Proxy (MPXY) extension based mailbox
+
+ maintainers:
+   - Anup Patel <anup@brainfault.org>
+
+ description: |
+   The RISC-V SBI Message Proxy (MPXY) extension [1] allows supervisor
+   software to send messages through the SBI implementation (M-mode
+   firmware or HS-mode hypervisor). The underlying message protocol
+   and message format used by the supervisor software could be some
+   other standard protocol compatible with the SBI MPXY extension
+   (such as RISC-V Platform Management Interface (RPMI) [2]).
+
+   ===========================================
+   References
+   ===========================================
+
+   [1] RISC-V Supervisor Binary Interface (SBI) v3.0 (or higher)
+       https://github.com/riscv-non-isa/riscv-sbi-doc/releases
+
+   [2] RISC-V Platform Management Interface (RPMI) v1.0 (or higher)
+       https://github.com/riscv-non-isa/riscv-rpmi/releases
+
+ properties:
+   compatible:
+     const: riscv,sbi-mpxy-mbox
+
+   "#mbox-cells":
+     const: 2
+     description:
+       The first cell specifies channel_id of the SBI MPXY channel,
+       the second cell specifies MSG_PROT_ID of the SBI MPXY channel
+
+ required:
+   - compatible
+   - "#mbox-cells"
+
+ additionalProperties: false
+
+ examples:
+   - |
+     mailbox {
+         compatible = "riscv,sbi-mpxy-mbox";
+         #mbox-cells = <2>;
+     };
+15
MAINTAINERS
···
  F:	drivers/perf/riscv_pmu_legacy.c
  F:	drivers/perf/riscv_pmu_sbi.c

+ RISC-V RPMI AND MPXY DRIVERS
+ M:	Rahul Pathak <rahul@summations.net>
+ M:	Anup Patel <anup@brainfault.org>
+ L:	linux-riscv@lists.infradead.org
+ F:	Documentation/devicetree/bindings/clock/riscv,rpmi-clock.yaml
+ F:	Documentation/devicetree/bindings/clock/riscv,rpmi-mpxy-clock.yaml
+ F:	Documentation/devicetree/bindings/interrupt-controller/riscv,rpmi-mpxy-system-msi.yaml
+ F:	Documentation/devicetree/bindings/interrupt-controller/riscv,rpmi-system-msi.yaml
+ F:	Documentation/devicetree/bindings/mailbox/riscv,rpmi-shmem-mbox.yaml
+ F:	Documentation/devicetree/bindings/mailbox/riscv,sbi-mpxy-mbox.yaml
+ F:	drivers/clk/clk-rpmi.c
+ F:	drivers/irqchip/irq-riscv-rpmi-sysmsi.c
+ F:	drivers/mailbox/riscv-sbi-mpxy-mbox.c
+ F:	include/linux/mailbox/riscv-rpmi-message.h
+
  RISC-V SPACEMIT SoC Support
  M:	Yixun Lan <dlan@gentoo.org>
  L:	linux-riscv@lists.infradead.org
+2
arch/riscv/configs/defconfig
···
  CONFIG_MICROSEMI_PHY=y
  CONFIG_MOTORCOMM_PHY=y
  CONFIG_INPUT_MOUSEDEV=y
+ CONFIG_INPUT_EVDEV=y
+ CONFIG_KEYBOARD_GPIO=y
  CONFIG_KEYBOARD_SUN4I_LRADC=m
  CONFIG_SERIAL_8250=y
  CONFIG_SERIAL_8250_CONSOLE=y
+6
arch/riscv/include/asm/irq.h
···
  	ACPI_RISCV_IRQCHIP_IMSIC = 0x01,
  	ACPI_RISCV_IRQCHIP_PLIC = 0x02,
  	ACPI_RISCV_IRQCHIP_APLIC = 0x03,
+ 	ACPI_RISCV_IRQCHIP_SMSI = 0x04,
  };

  int riscv_acpi_get_gsi_info(struct fwnode_handle *fwnode, u32 *gsi_base,
···
  unsigned int acpi_rintc_get_plic_nr_contexts(unsigned int plic_id);
  unsigned int acpi_rintc_get_plic_context(unsigned int plic_id, unsigned int ctxt_idx);
  int __init acpi_rintc_get_imsic_mmio_info(u32 index, struct resource *res);
+ int riscv_acpi_update_gsi_range(u32 gsi_base, u32 nr_irqs);

  #else
  static inline int riscv_acpi_get_gsi_info(struct fwnode_handle *fwnode, u32 *gsi_base,
···
  	return 0;
  }

+ static inline int riscv_acpi_update_gsi_range(u32 gsi_base, u32 nr_irqs)
+ {
+ 	return -ENODEV;
+ }
  #endif /* CONFIG_ACPI */

  #endif /* _ASM_RISCV_IRQ_H */
+62
arch/riscv/include/asm/sbi.h
···
  	SBI_EXT_STA = 0x535441,
  	SBI_EXT_NACL = 0x4E41434C,
  	SBI_EXT_FWFT = 0x46574654,
+ 	SBI_EXT_MPXY = 0x4D505859,

  	/* Experimentals extensions must lie within this range */
  	SBI_EXT_EXPERIMENTAL_START = 0x08000000,
···
  #define SBI_FWFT_GLOBAL_FEATURE_BIT	BIT(31)

  #define SBI_FWFT_SET_FLAG_LOCK		BIT(0)
+
+ enum sbi_ext_mpxy_fid {
+ 	SBI_EXT_MPXY_GET_SHMEM_SIZE,
+ 	SBI_EXT_MPXY_SET_SHMEM,
+ 	SBI_EXT_MPXY_GET_CHANNEL_IDS,
+ 	SBI_EXT_MPXY_READ_ATTRS,
+ 	SBI_EXT_MPXY_WRITE_ATTRS,
+ 	SBI_EXT_MPXY_SEND_MSG_WITH_RESP,
+ 	SBI_EXT_MPXY_SEND_MSG_WITHOUT_RESP,
+ 	SBI_EXT_MPXY_GET_NOTIFICATION_EVENTS,
+ };
+
+ enum sbi_mpxy_attribute_id {
+ 	/* Standard channel attributes managed by MPXY framework */
+ 	SBI_MPXY_ATTR_MSG_PROT_ID		= 0x00000000,
+ 	SBI_MPXY_ATTR_MSG_PROT_VER		= 0x00000001,
+ 	SBI_MPXY_ATTR_MSG_MAX_LEN		= 0x00000002,
+ 	SBI_MPXY_ATTR_MSG_SEND_TIMEOUT		= 0x00000003,
+ 	SBI_MPXY_ATTR_MSG_COMPLETION_TIMEOUT	= 0x00000004,
+ 	SBI_MPXY_ATTR_CHANNEL_CAPABILITY	= 0x00000005,
+ 	SBI_MPXY_ATTR_SSE_EVENT_ID		= 0x00000006,
+ 	SBI_MPXY_ATTR_MSI_CONTROL		= 0x00000007,
+ 	SBI_MPXY_ATTR_MSI_ADDR_LO		= 0x00000008,
+ 	SBI_MPXY_ATTR_MSI_ADDR_HI		= 0x00000009,
+ 	SBI_MPXY_ATTR_MSI_DATA			= 0x0000000A,
+ 	SBI_MPXY_ATTR_EVENTS_STATE_CONTROL	= 0x0000000B,
+ 	SBI_MPXY_ATTR_STD_ATTR_MAX_IDX,
+ 	/*
+ 	 * Message protocol specific attributes, managed by
+ 	 * the message protocol specification.
+ 	 */
+ 	SBI_MPXY_ATTR_MSGPROTO_ATTR_START	= 0x80000000,
+ 	SBI_MPXY_ATTR_MSGPROTO_ATTR_END		= 0xffffffff
+ };
+
+ /* Possible values of MSG_PROT_ID attribute as-per SBI v3.0 (or higher) */
+ enum sbi_mpxy_msgproto_id {
+ 	SBI_MPXY_MSGPROTO_RPMI_ID = 0x0,
+ };
+
+ /* RPMI message protocol specific MPXY attributes */
+ enum sbi_mpxy_rpmi_attribute_id {
+ 	SBI_MPXY_RPMI_ATTR_SERVICEGROUP_ID = SBI_MPXY_ATTR_MSGPROTO_ATTR_START,
+ 	SBI_MPXY_RPMI_ATTR_SERVICEGROUP_VERSION,
+ 	SBI_MPXY_RPMI_ATTR_IMPL_ID,
+ 	SBI_MPXY_RPMI_ATTR_IMPL_VERSION,
+ 	SBI_MPXY_RPMI_ATTR_MAX_ID
+ };
+
+ /* Encoding of MSG_PROT_VER attribute */
+ #define SBI_MPXY_MSG_PROT_VER_MAJOR(__ver)	upper_16_bits(__ver)
+ #define SBI_MPXY_MSG_PROT_VER_MINOR(__ver)	lower_16_bits(__ver)
+ #define SBI_MPXY_MSG_PROT_MKVER(__maj, __min)	(((u32)(__maj) << 16) | (u16)(__min))
+
+ /* Capabilities available through CHANNEL_CAPABILITY attribute */
+ #define SBI_MPXY_CHAN_CAP_MSI			BIT(0)
+ #define SBI_MPXY_CHAN_CAP_SSE			BIT(1)
+ #define SBI_MPXY_CHAN_CAP_EVENTS_STATE		BIT(2)
+ #define SBI_MPXY_CHAN_CAP_SEND_WITH_RESP	BIT(3)
+ #define SBI_MPXY_CHAN_CAP_SEND_WITHOUT_RESP	BIT(4)
+ #define SBI_MPXY_CHAN_CAP_GET_NOTIFICATIONS	BIT(5)

  /* SBI spec version fields */
  #define SBI_SPEC_VERSION_DEFAULT	0x1
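The MSG_PROT_VER encoding added in sbi.h packs the major version into the upper 16 bits and the minor version into the lower 16. A standalone restatement of those macros, using plain C shifts in place of the kernel's upper_16_bits()/lower_16_bits() helpers (the mpxy_* names are local to this sketch):

```c
#include <stdint.h>

/* Major version in bits 31:16, minor version in bits 15:0. */
static inline uint16_t mpxy_ver_major(uint32_t ver)
{
	return ver >> 16;
}

static inline uint16_t mpxy_ver_minor(uint32_t ver)
{
	return ver & 0xffff;
}

static inline uint32_t mpxy_mkver(uint16_t maj, uint16_t min)
{
	return ((uint32_t)maj << 16) | min;
}
```

So an SBI v3.0-style protocol version reads back as major 3, minor 0 from the single 32-bit attribute value.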
+1 -1
drivers/acpi/Kconfig
···
  config ACPI_BGRT
  	bool "Boottime Graphics Resource Table support"
- 	depends on EFI && (X86 || ARM64 || LOONGARCH)
+ 	depends on EFI
  	help
  	  This driver adds support for exposing the ACPI Boottime Graphics
  	  Resource Table, which allows the operating system to obtain
+74 -54
drivers/acpi/property.c
···
  	return NULL;
  }

+ static unsigned int acpi_fwnode_get_args_count(struct fwnode_handle *fwnode,
+ 					       const char *nargs_prop)
+ {
+ 	const struct acpi_device_data *data;
+ 	const union acpi_object *obj;
+ 	int ret;
+
+ 	data = acpi_device_data_of_node(fwnode);
+ 	if (!data)
+ 		return 0;
+
+ 	ret = acpi_data_get_property(data, nargs_prop, ACPI_TYPE_INTEGER, &obj);
+ 	if (ret)
+ 		return 0;
+
+ 	return obj->integer.value;
+ }
+
  static int acpi_get_ref_args(struct fwnode_reference_args *args,
  			     struct fwnode_handle *ref_fwnode,
+ 			     const char *nargs_prop,
  			     const union acpi_object **element,
  			     const union acpi_object *end, size_t num_args)
  {
  	u32 nargs = 0, i;
+
+ 	if (nargs_prop)
+ 		num_args = acpi_fwnode_get_args_count(ref_fwnode, nargs_prop);

  	/*
  	 * Assume the following integer elements are all args. Stop counting on
···
  	return &dn->fwnode;
  }

- /**
-  * __acpi_node_get_property_reference - returns handle to the referenced object
-  * @fwnode: Firmware node to get the property from
-  * @propname: Name of the property
-  * @index: Index of the reference to return
-  * @num_args: Maximum number of arguments after each reference
-  * @args: Location to store the returned reference with optional arguments
-  *	  (may be NULL)
-  *
-  * Find property with @name, verifify that it is a package containing at least
-  * one object reference and if so, store the ACPI device object pointer to the
-  * target object in @args->adev. If the reference includes arguments, store
-  * them in the @args->args[] array.
-  *
-  * If there's more than one reference in the property value package, @index is
-  * used to select the one to return.
-  *
-  * It is possible to leave holes in the property value set like in the
-  * example below:
-  *
-  * Package () {
-  *     "cs-gpios",
-  *     Package () {
-  *            ^GPIO, 19, 0, 0,
-  *            ^GPIO, 20, 0, 0,
-  *            0,
-  *            ^GPIO, 21, 0, 0,
-  *     }
-  * }
-  *
-  * Calling this function with index %2 or index %3 return %-ENOENT. If the
-  * property does not contain any more values %-ENOENT is returned. The NULL
-  * entry must be single integer and preferably contain value %0.
-  *
-  * Return: %0 on success, negative error code on failure.
-  */
- int __acpi_node_get_property_reference(const struct fwnode_handle *fwnode,
- 	const char *propname, size_t index, size_t num_args,
- 	struct fwnode_reference_args *args)
+ static int acpi_fwnode_get_reference_args(const struct fwnode_handle *fwnode,
+ 					  const char *propname, const char *nargs_prop,
+ 					  unsigned int args_count, unsigned int index,
+ 					  struct fwnode_reference_args *args)
  {
  	const union acpi_object *element, *end;
  	const union acpi_object *obj;
···
  		return -EINVAL;

  	element++;
-
  	ret = acpi_get_ref_args(idx == index ? args : NULL,
  				acpi_fwnode_handle(device),
- 				&element, end, num_args);
+ 				nargs_prop, &element, end,
+ 				args_count);
  	if (ret < 0)
  		return ret;
···
  		return -EINVAL;

  	element++;
-
  	ret = acpi_get_ref_args(idx == index ? args : NULL,
- 				ref_fwnode, &element, end,
- 				num_args);
+ 				ref_fwnode, nargs_prop, &element, end,
+ 				args_count);
  	if (ret < 0)
  		return ret;
···
  	}

  	return -ENOENT;
+ }
+
+ /**
+  * __acpi_node_get_property_reference - returns handle to the referenced object
+  * @fwnode: Firmware node to get the property from
+  * @propname: Name of the property
+  * @index: Index of the reference to return
+  * @num_args: Maximum number of arguments after each reference
+  * @args: Location to store the returned reference with optional arguments
+  *	  (may be NULL)
+  *
+  * Find property with @name, verifify that it is a package containing at least
+  * one object reference and if so, store the ACPI device object pointer to the
+  * target object in @args->adev. If the reference includes arguments, store
+  * them in the @args->args[] array.
+  *
+  * If there's more than one reference in the property value package, @index is
+  * used to select the one to return.
+  *
+  * It is possible to leave holes in the property value set like in the
+  * example below:
+  *
+  * Package () {
+  *     "cs-gpios",
+  *     Package () {
+  *            ^GPIO, 19, 0, 0,
+  *            ^GPIO, 20, 0, 0,
+  *            0,
+  *            ^GPIO, 21, 0, 0,
+  *     }
+  * }
+  *
+  * Calling this function with index %2 or index %3 return %-ENOENT. If the
+  * property does not contain any more values %-ENOENT is returned. The NULL
+  * entry must be single integer and preferably contain value %0.
+  *
+  * Return: %0 on success, negative error code on failure.
+  */
+ int __acpi_node_get_property_reference(const struct fwnode_handle *fwnode,
+ 				       const char *propname, size_t index,
+ 				       size_t num_args,
+ 				       struct fwnode_reference_args *args)
+ {
+ 	return acpi_fwnode_get_reference_args(fwnode, propname, NULL, index, num_args, args);
  }
  EXPORT_SYMBOL_GPL(__acpi_node_get_property_reference);
···
  {
  	return acpi_node_prop_read(fwnode, propname, DEV_PROP_STRING,
  				   val, nval);
- }
-
- static int
- acpi_fwnode_get_reference_args(const struct fwnode_handle *fwnode,
- 			       const char *prop, const char *nargs_prop,
- 			       unsigned int args_count, unsigned int index,
- 			       struct fwnode_reference_args *args)
- {
- 	return __acpi_node_get_property_reference(fwnode, prop, index,
- 						  args_count, args);
  }

  static const char *acpi_fwnode_get_name(const struct fwnode_handle *fwnode)
+72 -3
drivers/acpi/riscv/irq.c
···
  #include "init.h"

+ #define RISCV_ACPI_INTC_FLAG_PENDING	BIT(0)
+
  struct riscv_ext_intc_list {
  	acpi_handle handle;
  	u32 gsi_base;
···
  	u32 nr_idcs;
  	u32 id;
  	u32 type;
+ 	u32 flag;
  	struct list_head list;
  };
···
  	return AE_NOT_FOUND;
  }

+ int riscv_acpi_update_gsi_range(u32 gsi_base, u32 nr_irqs)
+ {
+ 	struct riscv_ext_intc_list *ext_intc_element;
+
+ 	list_for_each_entry(ext_intc_element, &ext_intc_list, list) {
+ 		if (gsi_base == ext_intc_element->gsi_base &&
+ 		    (ext_intc_element->flag & RISCV_ACPI_INTC_FLAG_PENDING)) {
+ 			ext_intc_element->nr_irqs = nr_irqs;
+ 			ext_intc_element->flag &= ~RISCV_ACPI_INTC_FLAG_PENDING;
+ 			return 0;
+ 		}
+ 	}
+
+ 	return -ENODEV;
+ }
+
  int riscv_acpi_get_gsi_info(struct fwnode_handle *fwnode, u32 *gsi_base,
  			    u32 *id, u32 *nr_irqs, u32 *nr_idcs)
  {
···
  static int __init riscv_acpi_register_ext_intc(u32 gsi_base, u32 nr_irqs, u32 nr_idcs,
  					       u32 id, u32 type)
  {
- 	struct riscv_ext_intc_list *ext_intc_element;
+ 	struct riscv_ext_intc_list *ext_intc_element, *node, *prev;

  	ext_intc_element = kzalloc(sizeof(*ext_intc_element), GFP_KERNEL);
  	if (!ext_intc_element)
  		return -ENOMEM;

  	ext_intc_element->gsi_base = gsi_base;
- 	ext_intc_element->nr_irqs = nr_irqs;
+
+ 	/* If nr_irqs is zero, indicate it in flag and set to max range possible */
+ 	if (nr_irqs) {
+ 		ext_intc_element->nr_irqs = nr_irqs;
+ 	} else {
+ 		ext_intc_element->flag |= RISCV_ACPI_INTC_FLAG_PENDING;
+ 		ext_intc_element->nr_irqs = U32_MAX - ext_intc_element->gsi_base;
+ 	}
+
  	ext_intc_element->nr_idcs = nr_idcs;
  	ext_intc_element->id = id;
- 	list_add_tail(&ext_intc_element->list, &ext_intc_list);
+ 	list_for_each_entry(node, &ext_intc_list, list) {
+ 		if (node->gsi_base < ext_intc_element->gsi_base)
+ 			break;
+ 	}
+
+ 	/* Adjust the previous node's GSI range if that has pending registration */
+ 	prev = list_prev_entry(node, list);
+ 	if (!list_entry_is_head(prev, &ext_intc_list, list)) {
+ 		if (prev->flag & RISCV_ACPI_INTC_FLAG_PENDING)
+ 			prev->nr_irqs = ext_intc_element->gsi_base - prev->gsi_base;
+ 	}
+
+ 	list_add_tail(&ext_intc_element->list, &node->list);
  	return 0;
+ }
+
+ static acpi_status __init riscv_acpi_create_gsi_map_smsi(acpi_handle handle, u32 level,
+ 							 void *context, void **return_value)
+ {
+ 	acpi_status status;
+ 	u64 gbase;
+
+ 	if (!acpi_has_method(handle, "_GSB")) {
+ 		acpi_handle_err(handle, "_GSB method not found\n");
+ 		return AE_ERROR;
+ 	}
+
+ 	status = acpi_evaluate_integer(handle, "_GSB", NULL, &gbase);
+ 	if (ACPI_FAILURE(status)) {
+ 		acpi_handle_err(handle, "failed to evaluate _GSB method\n");
+ 		return status;
+ 	}
+
+ 	riscv_acpi_register_ext_intc(gbase, 0, 0, 0, ACPI_RISCV_IRQCHIP_SMSI);
+ 	status = riscv_acpi_update_gsi_handle((u32)gbase, handle);
+ 	if (ACPI_FAILURE(status)) {
+ 		acpi_handle_err(handle, "failed to find the GSI mapping entry\n");
+ 		return status;
+ 	}
+
+ 	return AE_OK;
  }

  static acpi_status __init riscv_acpi_create_gsi_map(acpi_handle handle, u32 level,
···
  	if (acpi_table_parse_madt(ACPI_MADT_TYPE_APLIC, riscv_acpi_aplic_parse_madt, 0) > 0)
  		acpi_get_devices("RSCV0002", riscv_acpi_create_gsi_map, NULL, NULL);
+
+ 	/* Unlike PLIC/APLIC, SYSMSI doesn't have MADT */
+ 	acpi_get_devices("RSCV0006", riscv_acpi_create_gsi_map_smsi, NULL, NULL);
  }

  static acpi_handle riscv_acpi_get_gsi_handle(u32 gsi)
+2
drivers/acpi/scan.c
··· 861 861 "INTC10CF", /* IVSC (MTL) driver must be loaded to allow i2c access to camera sensors */ 862 862 "RSCV0001", /* RISC-V PLIC */ 863 863 "RSCV0002", /* RISC-V APLIC */ 864 + "RSCV0005", /* RISC-V SBI MPXY MBOX */ 865 + "RSCV0006", /* RISC-V RPMI SYSMSI */ 864 866 "PNP0C0F", /* PCI Link Device */ 865 867 NULL 866 868 };
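The scan.c hunk adds the MPXY mailbox and RPMI System MSI `_HID`s to the list of devices whose `_DEP` dependencies are honored, so consumers defer probing until the supplier calls `acpi_dev_clear_dependencies()`. Conceptually the check is a plain HID lookup; a minimal model (hypothetical function name, subset of the table above):

```c
#include <string.h>
#include <stddef.h>

/* Subset of the honor list from the hunk above. */
static const char *const honor_dep_ids[] = {
    "RSCV0001",  /* RISC-V PLIC */
    "RSCV0002",  /* RISC-V APLIC */
    "RSCV0005",  /* RISC-V SBI MPXY MBOX */
    "RSCV0006",  /* RISC-V RPMI SYSMSI */
    NULL
};

/* Return 1 if _DEP entries pointing at a device with this _HID
 * should keep its consumers deferred until it has probed. */
static int honors_dep(const char *hid)
{
    for (size_t i = 0; honor_dep_ids[i]; i++)
        if (!strcmp(hid, honor_dep_ids[i]))
            return 1;
    return 0;
}
```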
+1 -1
drivers/base/property.c
··· 578 578 * @prop: The name of the property 579 579 * @nargs_prop: The name of the property telling the number of 580 580 * arguments in the referred node. NULL if @nargs is known, 581 - * otherwise @nargs is ignored. Only relevant on OF. 581 + * otherwise @nargs is ignored. 582 582 * @nargs: Number of arguments. Ignored if @nargs_prop is non-NULL. 583 583 * @index: Index of the reference, from zero onwards. 584 584 * @args: Result structure with reference and integer arguments.
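The doc-comment fix above reflects that `@nargs_prop` now works beyond OF: when set, the argument count for a reference is read from a property of the *referred* node (e.g. `#mbox-cells`) rather than being fixed by the caller. A toy model of that lookup rule, with invented names and none of the real fwnode machinery:

```c
#include <stddef.h>
#include <stdint.h>

/* Toy node: carries its own "#cells"-style count. */
struct toy_node {
    uint32_t cells;  /* e.g. the node's #mbox-cells value */
};

struct toy_ref_args {
    const struct toy_node *node;
    uint32_t nargs;
};

/* Resolve a reference: when use_nargs_prop is set, the argument count
 * comes from the referred node itself; otherwise the fixed count wins. */
static int toy_get_reference_args(const struct toy_node *referred,
                                  int use_nargs_prop, uint32_t fixed_nargs,
                                  struct toy_ref_args *out)
{
    if (!referred || !out)
        return -1;
    out->node = referred;
    out->nargs = use_nargs_prop ? referred->cells : fixed_nargs;
    return 0;
}
```

This is what lets the reworked `mbox_request_channel()` further down parse `mboxes` references identically from DT and ACPI firmware nodes.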
+9
drivers/clk/Kconfig
··· 501 501 Not all features of the PLL are currently supported 502 502 by the driver. 503 503 504 + config COMMON_CLK_RPMI 505 + tristate "Clock driver based on RISC-V RPMI" 506 + depends on RISCV || COMPILE_TEST 507 + depends on MAILBOX 508 + default RISCV 509 + help 510 + Support for clocks based on the clock service group defined by 511 + the RISC-V platform management interface (RPMI) specification. 512 + 504 513 source "drivers/clk/actions/Kconfig" 505 514 source "drivers/clk/analogbits/Kconfig" 506 515 source "drivers/clk/baikal-t1/Kconfig"
+1
drivers/clk/Makefile
··· 86 86 obj-$(CONFIG_CLK_QORIQ) += clk-qoriq.o 87 87 obj-$(CONFIG_COMMON_CLK_RK808) += clk-rk808.o 88 88 obj-$(CONFIG_COMMON_CLK_RP1) += clk-rp1.o 89 + obj-$(CONFIG_COMMON_CLK_RPMI) += clk-rpmi.o 89 90 obj-$(CONFIG_COMMON_CLK_HI655X) += clk-hi655x.o 90 91 obj-$(CONFIG_COMMON_CLK_S2MPS11) += clk-s2mps11.o 91 92 obj-$(CONFIG_COMMON_CLK_SCMI) += clk-scmi.o
+620
drivers/clk/clk-rpmi.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * RISC-V MPXY Based Clock Driver 4 + * 5 + * Copyright (C) 2025 Ventana Micro Systems Ltd. 6 + */ 7 + 8 + #include <linux/clk-provider.h> 9 + #include <linux/err.h> 10 + #include <linux/mailbox_client.h> 11 + #include <linux/mailbox/riscv-rpmi-message.h> 12 + #include <linux/module.h> 13 + #include <linux/platform_device.h> 14 + #include <linux/types.h> 15 + #include <linux/slab.h> 16 + #include <linux/wordpart.h> 17 + 18 + #define RPMI_CLK_DISCRETE_MAX_NUM_RATES 16 19 + #define RPMI_CLK_NAME_LEN 16 20 + 21 + #define to_rpmi_clk(clk) container_of(clk, struct rpmi_clk, hw) 22 + 23 + enum rpmi_clk_config { 24 + RPMI_CLK_DISABLE = 0, 25 + RPMI_CLK_ENABLE = 1, 26 + RPMI_CLK_CONFIG_MAX_IDX 27 + }; 28 + 29 + #define RPMI_CLK_TYPE_MASK GENMASK(1, 0) 30 + enum rpmi_clk_type { 31 + RPMI_CLK_DISCRETE = 0, 32 + RPMI_CLK_LINEAR = 1, 33 + RPMI_CLK_TYPE_MAX_IDX 34 + }; 35 + 36 + struct rpmi_clk_context { 37 + struct device *dev; 38 + struct mbox_chan *chan; 39 + struct mbox_client client; 40 + u32 max_msg_data_size; 41 + }; 42 + 43 + /* 44 + * rpmi_clk_rates represents the rates format 45 + * as specified by the RPMI specification. 46 + * No other data format (e.g., struct linear_range) 47 + * is required to avoid to and from conversion. 
48 + */ 49 + union rpmi_clk_rates { 50 + u64 discrete[RPMI_CLK_DISCRETE_MAX_NUM_RATES]; 51 + struct { 52 + u64 min; 53 + u64 max; 54 + u64 step; 55 + } linear; 56 + }; 57 + 58 + struct rpmi_clk { 59 + struct rpmi_clk_context *context; 60 + u32 id; 61 + u32 num_rates; 62 + u32 transition_latency; 63 + enum rpmi_clk_type type; 64 + union rpmi_clk_rates *rates; 65 + char name[RPMI_CLK_NAME_LEN]; 66 + struct clk_hw hw; 67 + }; 68 + 69 + struct rpmi_clk_rate_discrete { 70 + __le32 lo; 71 + __le32 hi; 72 + }; 73 + 74 + struct rpmi_clk_rate_linear { 75 + __le32 min_lo; 76 + __le32 min_hi; 77 + __le32 max_lo; 78 + __le32 max_hi; 79 + __le32 step_lo; 80 + __le32 step_hi; 81 + }; 82 + 83 + struct rpmi_get_num_clocks_rx { 84 + __le32 status; 85 + __le32 num_clocks; 86 + }; 87 + 88 + struct rpmi_get_attrs_tx { 89 + __le32 clkid; 90 + }; 91 + 92 + struct rpmi_get_attrs_rx { 93 + __le32 status; 94 + __le32 flags; 95 + __le32 num_rates; 96 + __le32 transition_latency; 97 + char name[RPMI_CLK_NAME_LEN]; 98 + }; 99 + 100 + struct rpmi_get_supp_rates_tx { 101 + __le32 clkid; 102 + __le32 clk_rate_idx; 103 + }; 104 + 105 + struct rpmi_get_supp_rates_rx { 106 + __le32 status; 107 + __le32 flags; 108 + __le32 remaining; 109 + __le32 returned; 110 + __le32 rates[]; 111 + }; 112 + 113 + struct rpmi_get_rate_tx { 114 + __le32 clkid; 115 + }; 116 + 117 + struct rpmi_get_rate_rx { 118 + __le32 status; 119 + __le32 lo; 120 + __le32 hi; 121 + }; 122 + 123 + struct rpmi_set_rate_tx { 124 + __le32 clkid; 125 + __le32 flags; 126 + __le32 lo; 127 + __le32 hi; 128 + }; 129 + 130 + struct rpmi_set_rate_rx { 131 + __le32 status; 132 + }; 133 + 134 + struct rpmi_set_config_tx { 135 + __le32 clkid; 136 + __le32 config; 137 + }; 138 + 139 + struct rpmi_set_config_rx { 140 + __le32 status; 141 + }; 142 + 143 + static inline u64 rpmi_clkrate_u64(u32 __hi, u32 __lo) 144 + { 145 + return (((u64)(__hi) << 32) | (u32)(__lo)); 146 + } 147 + 148 + static u32 rpmi_clk_get_num_clocks(struct rpmi_clk_context 
*context) 149 + { 150 + struct rpmi_get_num_clocks_rx rx, *resp; 151 + struct rpmi_mbox_message msg; 152 + int ret; 153 + 154 + rpmi_mbox_init_send_with_response(&msg, RPMI_CLK_SRV_GET_NUM_CLOCKS, 155 + NULL, 0, &rx, sizeof(rx)); 156 + 157 + ret = rpmi_mbox_send_message(context->chan, &msg); 158 + if (ret) 159 + return 0; 160 + 161 + resp = rpmi_mbox_get_msg_response(&msg); 162 + if (!resp || resp->status) 163 + return 0; 164 + 165 + return le32_to_cpu(resp->num_clocks); 166 + } 167 + 168 + static int rpmi_clk_get_attrs(u32 clkid, struct rpmi_clk *rpmi_clk) 169 + { 170 + struct rpmi_clk_context *context = rpmi_clk->context; 171 + struct rpmi_mbox_message msg; 172 + struct rpmi_get_attrs_tx tx; 173 + struct rpmi_get_attrs_rx rx, *resp; 174 + u8 format; 175 + int ret; 176 + 177 + tx.clkid = cpu_to_le32(clkid); 178 + rpmi_mbox_init_send_with_response(&msg, RPMI_CLK_SRV_GET_ATTRIBUTES, 179 + &tx, sizeof(tx), &rx, sizeof(rx)); 180 + 181 + ret = rpmi_mbox_send_message(context->chan, &msg); 182 + if (ret) 183 + return ret; 184 + 185 + resp = rpmi_mbox_get_msg_response(&msg); 186 + if (!resp) 187 + return -EINVAL; 188 + if (resp->status) 189 + return rpmi_to_linux_error(le32_to_cpu(resp->status)); 190 + 191 + rpmi_clk->id = clkid; 192 + rpmi_clk->num_rates = le32_to_cpu(resp->num_rates); 193 + rpmi_clk->transition_latency = le32_to_cpu(resp->transition_latency); 194 + strscpy(rpmi_clk->name, resp->name, RPMI_CLK_NAME_LEN); 195 + 196 + format = le32_to_cpu(resp->flags) & RPMI_CLK_TYPE_MASK; 197 + if (format >= RPMI_CLK_TYPE_MAX_IDX) 198 + return -EINVAL; 199 + 200 + rpmi_clk->type = format; 201 + 202 + return 0; 203 + } 204 + 205 + static int rpmi_clk_get_supported_rates(u32 clkid, struct rpmi_clk *rpmi_clk) 206 + { 207 + struct rpmi_clk_context *context = rpmi_clk->context; 208 + struct rpmi_clk_rate_discrete *rate_discrete; 209 + struct rpmi_clk_rate_linear *rate_linear; 210 + struct rpmi_get_supp_rates_tx tx; 211 + struct rpmi_get_supp_rates_rx *resp; 212 + struct 
rpmi_mbox_message msg; 213 + size_t clk_rate_idx; 214 + int ret, rateidx, j; 215 + 216 + tx.clkid = cpu_to_le32(clkid); 217 + tx.clk_rate_idx = 0; 218 + 219 + /* 220 + * Make sure we allocate rx buffer sufficient to be accommodate all 221 + * the rates sent in one RPMI message. 222 + */ 223 + struct rpmi_get_supp_rates_rx *rx __free(kfree) = 224 + kzalloc(context->max_msg_data_size, GFP_KERNEL); 225 + if (!rx) 226 + return -ENOMEM; 227 + 228 + rpmi_mbox_init_send_with_response(&msg, RPMI_CLK_SRV_GET_SUPPORTED_RATES, 229 + &tx, sizeof(tx), rx, context->max_msg_data_size); 230 + 231 + ret = rpmi_mbox_send_message(context->chan, &msg); 232 + if (ret) 233 + return ret; 234 + 235 + resp = rpmi_mbox_get_msg_response(&msg); 236 + if (!resp) 237 + return -EINVAL; 238 + if (resp->status) 239 + return rpmi_to_linux_error(le32_to_cpu(resp->status)); 240 + if (!le32_to_cpu(resp->returned)) 241 + return -EINVAL; 242 + 243 + if (rpmi_clk->type == RPMI_CLK_DISCRETE) { 244 + rate_discrete = (struct rpmi_clk_rate_discrete *)resp->rates; 245 + 246 + for (rateidx = 0; rateidx < le32_to_cpu(resp->returned); rateidx++) { 247 + rpmi_clk->rates->discrete[rateidx] = 248 + rpmi_clkrate_u64(le32_to_cpu(rate_discrete[rateidx].hi), 249 + le32_to_cpu(rate_discrete[rateidx].lo)); 250 + } 251 + 252 + /* 253 + * Keep sending the request message until all 254 + * the rates are received. 
255 + */ 256 + clk_rate_idx = 0; 257 + while (le32_to_cpu(resp->remaining)) { 258 + clk_rate_idx += le32_to_cpu(resp->returned); 259 + tx.clk_rate_idx = cpu_to_le32(clk_rate_idx); 260 + 261 + rpmi_mbox_init_send_with_response(&msg, 262 + RPMI_CLK_SRV_GET_SUPPORTED_RATES, 263 + &tx, sizeof(tx), 264 + rx, context->max_msg_data_size); 265 + 266 + ret = rpmi_mbox_send_message(context->chan, &msg); 267 + if (ret) 268 + return ret; 269 + 270 + resp = rpmi_mbox_get_msg_response(&msg); 271 + if (!resp) 272 + return -EINVAL; 273 + if (resp->status) 274 + return rpmi_to_linux_error(le32_to_cpu(resp->status)); 275 + if (!le32_to_cpu(resp->returned)) 276 + return -EINVAL; 277 + 278 + for (j = 0; j < le32_to_cpu(resp->returned); j++) { 279 + if (rateidx >= clk_rate_idx + le32_to_cpu(resp->returned)) 280 + break; 281 + rpmi_clk->rates->discrete[rateidx++] = 282 + rpmi_clkrate_u64(le32_to_cpu(rate_discrete[j].hi), 283 + le32_to_cpu(rate_discrete[j].lo)); 284 + } 285 + } 286 + } else if (rpmi_clk->type == RPMI_CLK_LINEAR) { 287 + rate_linear = (struct rpmi_clk_rate_linear *)resp->rates; 288 + 289 + rpmi_clk->rates->linear.min = rpmi_clkrate_u64(le32_to_cpu(rate_linear->min_hi), 290 + le32_to_cpu(rate_linear->min_lo)); 291 + rpmi_clk->rates->linear.max = rpmi_clkrate_u64(le32_to_cpu(rate_linear->max_hi), 292 + le32_to_cpu(rate_linear->max_lo)); 293 + rpmi_clk->rates->linear.step = rpmi_clkrate_u64(le32_to_cpu(rate_linear->step_hi), 294 + le32_to_cpu(rate_linear->step_lo)); 295 + } 296 + 297 + return 0; 298 + } 299 + 300 + static unsigned long rpmi_clk_recalc_rate(struct clk_hw *hw, 301 + unsigned long parent_rate) 302 + { 303 + struct rpmi_clk *rpmi_clk = to_rpmi_clk(hw); 304 + struct rpmi_clk_context *context = rpmi_clk->context; 305 + struct rpmi_mbox_message msg; 306 + struct rpmi_get_rate_tx tx; 307 + struct rpmi_get_rate_rx rx, *resp; 308 + int ret; 309 + 310 + tx.clkid = cpu_to_le32(rpmi_clk->id); 311 + 312 + rpmi_mbox_init_send_with_response(&msg, RPMI_CLK_SRV_GET_RATE, 313 
+ &tx, sizeof(tx), &rx, sizeof(rx)); 314 + 315 + ret = rpmi_mbox_send_message(context->chan, &msg); 316 + if (ret) 317 + return ret; 318 + 319 + resp = rpmi_mbox_get_msg_response(&msg); 320 + if (!resp) 321 + return -EINVAL; 322 + if (resp->status) 323 + return rpmi_to_linux_error(le32_to_cpu(resp->status)); 324 + 325 + return rpmi_clkrate_u64(le32_to_cpu(resp->hi), le32_to_cpu(resp->lo)); 326 + } 327 + 328 + static int rpmi_clk_determine_rate(struct clk_hw *hw, 329 + struct clk_rate_request *req) 330 + { 331 + struct rpmi_clk *rpmi_clk = to_rpmi_clk(hw); 332 + u64 fmin, fmax, ftmp; 333 + 334 + /* 335 + * Keep the requested rate if the clock format 336 + * is of discrete type. Let the platform which 337 + * is actually controlling the clock handle that. 338 + */ 339 + if (rpmi_clk->type == RPMI_CLK_DISCRETE) 340 + return 0; 341 + 342 + fmin = rpmi_clk->rates->linear.min; 343 + fmax = rpmi_clk->rates->linear.max; 344 + 345 + if (req->rate <= fmin) { 346 + req->rate = fmin; 347 + return 0; 348 + } else if (req->rate >= fmax) { 349 + req->rate = fmax; 350 + return 0; 351 + } 352 + 353 + ftmp = req->rate - fmin; 354 + ftmp += rpmi_clk->rates->linear.step - 1; 355 + do_div(ftmp, rpmi_clk->rates->linear.step); 356 + 357 + req->rate = ftmp * rpmi_clk->rates->linear.step + fmin; 358 + 359 + return 0; 360 + } 361 + 362 + static int rpmi_clk_set_rate(struct clk_hw *hw, unsigned long rate, 363 + unsigned long parent_rate) 364 + { 365 + struct rpmi_clk *rpmi_clk = to_rpmi_clk(hw); 366 + struct rpmi_clk_context *context = rpmi_clk->context; 367 + struct rpmi_mbox_message msg; 368 + struct rpmi_set_rate_tx tx; 369 + struct rpmi_set_rate_rx rx, *resp; 370 + int ret; 371 + 372 + tx.clkid = cpu_to_le32(rpmi_clk->id); 373 + tx.lo = cpu_to_le32(lower_32_bits(rate)); 374 + tx.hi = cpu_to_le32(upper_32_bits(rate)); 375 + 376 + rpmi_mbox_init_send_with_response(&msg, RPMI_CLK_SRV_SET_RATE, 377 + &tx, sizeof(tx), &rx, sizeof(rx)); 378 + 379 + ret = rpmi_mbox_send_message(context->chan, 
&msg); 380 + if (ret) 381 + return ret; 382 + 383 + resp = rpmi_mbox_get_msg_response(&msg); 384 + if (!resp) 385 + return -EINVAL; 386 + if (resp->status) 387 + return rpmi_to_linux_error(le32_to_cpu(resp->status)); 388 + 389 + return 0; 390 + } 391 + 392 + static int rpmi_clk_enable(struct clk_hw *hw) 393 + { 394 + struct rpmi_clk *rpmi_clk = to_rpmi_clk(hw); 395 + struct rpmi_clk_context *context = rpmi_clk->context; 396 + struct rpmi_mbox_message msg; 397 + struct rpmi_set_config_tx tx; 398 + struct rpmi_set_config_rx rx, *resp; 399 + int ret; 400 + 401 + tx.config = cpu_to_le32(RPMI_CLK_ENABLE); 402 + tx.clkid = cpu_to_le32(rpmi_clk->id); 403 + 404 + rpmi_mbox_init_send_with_response(&msg, RPMI_CLK_SRV_SET_CONFIG, 405 + &tx, sizeof(tx), &rx, sizeof(rx)); 406 + 407 + ret = rpmi_mbox_send_message(context->chan, &msg); 408 + if (ret) 409 + return ret; 410 + 411 + resp = rpmi_mbox_get_msg_response(&msg); 412 + if (!resp) 413 + return -EINVAL; 414 + if (resp->status) 415 + return rpmi_to_linux_error(le32_to_cpu(resp->status)); 416 + 417 + return 0; 418 + } 419 + 420 + static void rpmi_clk_disable(struct clk_hw *hw) 421 + { 422 + struct rpmi_clk *rpmi_clk = to_rpmi_clk(hw); 423 + struct rpmi_clk_context *context = rpmi_clk->context; 424 + struct rpmi_mbox_message msg; 425 + struct rpmi_set_config_tx tx; 426 + struct rpmi_set_config_rx rx; 427 + 428 + tx.config = cpu_to_le32(RPMI_CLK_DISABLE); 429 + tx.clkid = cpu_to_le32(rpmi_clk->id); 430 + 431 + rpmi_mbox_init_send_with_response(&msg, RPMI_CLK_SRV_SET_CONFIG, 432 + &tx, sizeof(tx), &rx, sizeof(rx)); 433 + 434 + rpmi_mbox_send_message(context->chan, &msg); 435 + } 436 + 437 + static const struct clk_ops rpmi_clk_ops = { 438 + .recalc_rate = rpmi_clk_recalc_rate, 439 + .determine_rate = rpmi_clk_determine_rate, 440 + .set_rate = rpmi_clk_set_rate, 441 + .prepare = rpmi_clk_enable, 442 + .unprepare = rpmi_clk_disable, 443 + }; 444 + 445 + static struct clk_hw *rpmi_clk_enumerate(struct rpmi_clk_context *context, u32 
clkid) 446 + { 447 + struct device *dev = context->dev; 448 + unsigned long min_rate, max_rate; 449 + union rpmi_clk_rates *rates; 450 + struct rpmi_clk *rpmi_clk; 451 + struct clk_init_data init = {}; 452 + struct clk_hw *clk_hw; 453 + int ret; 454 + 455 + rates = devm_kzalloc(dev, sizeof(*rates), GFP_KERNEL); 456 + if (!rates) 457 + return ERR_PTR(-ENOMEM); 458 + 459 + rpmi_clk = devm_kzalloc(dev, sizeof(*rpmi_clk), GFP_KERNEL); 460 + if (!rpmi_clk) 461 + return ERR_PTR(-ENOMEM); 462 + 463 + rpmi_clk->context = context; 464 + rpmi_clk->rates = rates; 465 + 466 + ret = rpmi_clk_get_attrs(clkid, rpmi_clk); 467 + if (ret) 468 + return dev_err_ptr_probe(dev, ret, 469 + "Failed to get clk-%u attributes\n", 470 + clkid); 471 + 472 + ret = rpmi_clk_get_supported_rates(clkid, rpmi_clk); 473 + if (ret) 474 + return dev_err_ptr_probe(dev, ret, 475 + "Get supported rates failed for clk-%u\n", 476 + clkid); 477 + 478 + init.flags = CLK_GET_RATE_NOCACHE; 479 + init.num_parents = 0; 480 + init.ops = &rpmi_clk_ops; 481 + init.name = rpmi_clk->name; 482 + clk_hw = &rpmi_clk->hw; 483 + clk_hw->init = &init; 484 + 485 + ret = devm_clk_hw_register(dev, clk_hw); 486 + if (ret) 487 + return dev_err_ptr_probe(dev, ret, 488 + "Unable to register clk-%u\n", 489 + clkid); 490 + 491 + if (rpmi_clk->type == RPMI_CLK_DISCRETE) { 492 + min_rate = rpmi_clk->rates->discrete[0]; 493 + max_rate = rpmi_clk->rates->discrete[rpmi_clk->num_rates - 1]; 494 + } else { 495 + min_rate = rpmi_clk->rates->linear.min; 496 + max_rate = rpmi_clk->rates->linear.max; 497 + } 498 + 499 + clk_hw_set_rate_range(clk_hw, min_rate, max_rate); 500 + 501 + return clk_hw; 502 + } 503 + 504 + static void rpmi_clk_mbox_chan_release(void *data) 505 + { 506 + struct mbox_chan *chan = data; 507 + 508 + mbox_free_channel(chan); 509 + } 510 + 511 + static int rpmi_clk_probe(struct platform_device *pdev) 512 + { 513 + int ret; 514 + unsigned int num_clocks, i; 515 + struct clk_hw_onecell_data *clk_data; 516 + struct 
rpmi_clk_context *context; 517 + struct rpmi_mbox_message msg; 518 + struct clk_hw *hw_ptr; 519 + struct device *dev = &pdev->dev; 520 + 521 + context = devm_kzalloc(dev, sizeof(*context), GFP_KERNEL); 522 + if (!context) 523 + return -ENOMEM; 524 + context->dev = dev; 525 + platform_set_drvdata(pdev, context); 526 + 527 + context->client.dev = context->dev; 528 + context->client.rx_callback = NULL; 529 + context->client.tx_block = false; 530 + context->client.knows_txdone = true; 531 + context->client.tx_tout = 0; 532 + 533 + context->chan = mbox_request_channel(&context->client, 0); 534 + if (IS_ERR(context->chan)) 535 + return PTR_ERR(context->chan); 536 + 537 + ret = devm_add_action_or_reset(dev, rpmi_clk_mbox_chan_release, context->chan); 538 + if (ret) 539 + return dev_err_probe(dev, ret, "Failed to add rpmi mbox channel cleanup\n"); 540 + 541 + rpmi_mbox_init_get_attribute(&msg, RPMI_MBOX_ATTR_SPEC_VERSION); 542 + ret = rpmi_mbox_send_message(context->chan, &msg); 543 + if (ret) 544 + return dev_err_probe(dev, ret, "Failed to get spec version\n"); 545 + if (msg.attr.value < RPMI_MKVER(1, 0)) { 546 + return dev_err_probe(dev, -EINVAL, 547 + "msg protocol version mismatch, expected 0x%x, found 0x%x\n", 548 + RPMI_MKVER(1, 0), msg.attr.value); 549 + } 550 + 551 + rpmi_mbox_init_get_attribute(&msg, RPMI_MBOX_ATTR_SERVICEGROUP_ID); 552 + ret = rpmi_mbox_send_message(context->chan, &msg); 553 + if (ret) 554 + return dev_err_probe(dev, ret, "Failed to get service group ID\n"); 555 + if (msg.attr.value != RPMI_SRVGRP_CLOCK) { 556 + return dev_err_probe(dev, -EINVAL, 557 + "service group match failed, expected 0x%x, found 0x%x\n", 558 + RPMI_SRVGRP_CLOCK, msg.attr.value); 559 + } 560 + 561 + rpmi_mbox_init_get_attribute(&msg, RPMI_MBOX_ATTR_SERVICEGROUP_VERSION); 562 + ret = rpmi_mbox_send_message(context->chan, &msg); 563 + if (ret) 564 + return dev_err_probe(dev, ret, "Failed to get service group version\n"); 565 + if (msg.attr.value < RPMI_MKVER(1, 0)) { 566 + 
return dev_err_probe(dev, -EINVAL, 567 + "service group version failed, expected 0x%x, found 0x%x\n", 568 + RPMI_MKVER(1, 0), msg.attr.value); 569 + } 570 + 571 + rpmi_mbox_init_get_attribute(&msg, RPMI_MBOX_ATTR_MAX_MSG_DATA_SIZE); 572 + ret = rpmi_mbox_send_message(context->chan, &msg); 573 + if (ret) 574 + return dev_err_probe(dev, ret, "Failed to get max message data size\n"); 575 + 576 + context->max_msg_data_size = msg.attr.value; 577 + num_clocks = rpmi_clk_get_num_clocks(context); 578 + if (!num_clocks) 579 + return dev_err_probe(dev, -ENODEV, "No clocks found\n"); 580 + 581 + clk_data = devm_kzalloc(dev, struct_size(clk_data, hws, num_clocks), 582 + GFP_KERNEL); 583 + if (!clk_data) 584 + return dev_err_probe(dev, -ENOMEM, "No memory for clock data\n"); 585 + clk_data->num = num_clocks; 586 + 587 + for (i = 0; i < clk_data->num; i++) { 588 + hw_ptr = rpmi_clk_enumerate(context, i); 589 + if (IS_ERR(hw_ptr)) { 590 + return dev_err_probe(dev, PTR_ERR(hw_ptr), 591 + "Failed to register clk-%d\n", i); 592 + } 593 + clk_data->hws[i] = hw_ptr; 594 + } 595 + 596 + ret = devm_of_clk_add_hw_provider(dev, of_clk_hw_onecell_get, clk_data); 597 + if (ret) 598 + return dev_err_probe(dev, ret, "Failed to register clock HW provider\n"); 599 + 600 + return 0; 601 + } 602 + 603 + static const struct of_device_id rpmi_clk_of_match[] = { 604 + { .compatible = "riscv,rpmi-clock" }, 605 + { } 606 + }; 607 + MODULE_DEVICE_TABLE(of, rpmi_clk_of_match); 608 + 609 + static struct platform_driver rpmi_clk_driver = { 610 + .driver = { 611 + .name = "riscv-rpmi-clock", 612 + .of_match_table = rpmi_clk_of_match, 613 + }, 614 + .probe = rpmi_clk_probe, 615 + }; 616 + module_platform_driver(rpmi_clk_driver); 617 + 618 + MODULE_AUTHOR("Rahul Pathak <rpathak@ventanamicro.com>"); 619 + MODULE_DESCRIPTION("Clock Driver based on RPMI message protocol"); 620 + MODULE_LICENSE("GPL");
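Two pieces of arithmetic carry most of the clk-rpmi.c logic above: 64-bit rates travel on the wire as `{lo, hi}` 32-bit halves, and for linear clocks `rpmi_clk_determine_rate()` clamps the request into `[min, max]` and rounds it up to the next step boundary. A standalone sketch of both (userspace model, plain integer division in place of `do_div()`):

```c
#include <stdint.h>

/* Combine the {hi, lo} 32-bit halves used by the RPMI wire format,
 * as rpmi_clkrate_u64() does after le32_to_cpu() conversion. */
static uint64_t clkrate_u64(uint32_t hi, uint32_t lo)
{
    return ((uint64_t)hi << 32) | lo;
}

/* Clamp a requested rate into [min, max] and round it UP to the next
 * step boundary above min, mirroring the linear-clock path of
 * rpmi_clk_determine_rate(). */
static uint64_t linear_round_rate(uint64_t rate, uint64_t min,
                                  uint64_t max, uint64_t step)
{
    uint64_t n;

    if (rate <= min)
        return min;
    if (rate >= max)
        return max;

    n = (rate - min + step - 1) / step;  /* ceil((rate - min) / step) */
    return min + n * step;
}
```

Discrete clocks skip this entirely: the driver passes the requested rate through and lets the platform firmware pick the actual rate.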
+7
drivers/irqchip/Kconfig
··· 634 634 select GENERIC_MSI_IRQ 635 635 select IRQ_MSI_LIB 636 636 637 + config RISCV_RPMI_SYSMSI 638 + bool 639 + depends on RISCV && MAILBOX 640 + select IRQ_DOMAIN_HIERARCHY 641 + select GENERIC_MSI_IRQ 642 + default RISCV 643 + 637 644 config SIFIVE_PLIC 638 645 bool 639 646 depends on RISCV
+1
drivers/irqchip/Makefile
··· 106 106 obj-$(CONFIG_RISCV_APLIC) += irq-riscv-aplic-main.o irq-riscv-aplic-direct.o 107 107 obj-$(CONFIG_RISCV_APLIC_MSI) += irq-riscv-aplic-msi.o 108 108 obj-$(CONFIG_RISCV_IMSIC) += irq-riscv-imsic-state.o irq-riscv-imsic-early.o irq-riscv-imsic-platform.o 109 + obj-$(CONFIG_RISCV_RPMI_SYSMSI) += irq-riscv-rpmi-sysmsi.o 109 110 obj-$(CONFIG_SIFIVE_PLIC) += irq-sifive-plic.o 110 111 obj-$(CONFIG_STARFIVE_JH8100_INTC) += irq-starfive-jh8100-intc.o 111 112 obj-$(CONFIG_ACLINT_SSWI) += irq-aclint-sswi.o
+2
drivers/irqchip/irq-riscv-imsic-early.c
··· 7 7 #define pr_fmt(fmt) "riscv-imsic: " fmt 8 8 #include <linux/acpi.h> 9 9 #include <linux/cpu.h> 10 + #include <linux/export.h> 10 11 #include <linux/interrupt.h> 11 12 #include <linux/init.h> 12 13 #include <linux/io.h> ··· 234 233 { 235 234 return imsic_acpi_fwnode; 236 235 } 236 + EXPORT_SYMBOL_GPL(imsic_acpi_get_fwnode); 237 237 238 238 static int __init imsic_early_acpi_init(union acpi_subtable_headers *header, 239 239 const unsigned long end)
+328
drivers/irqchip/irq-riscv-rpmi-sysmsi.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* Copyright (C) 2025 Ventana Micro Systems Inc. */ 3 + 4 + #include <linux/acpi.h> 5 + #include <linux/bits.h> 6 + #include <linux/bug.h> 7 + #include <linux/device.h> 8 + #include <linux/device/devres.h> 9 + #include <linux/dev_printk.h> 10 + #include <linux/errno.h> 11 + #include <linux/irq.h> 12 + #include <linux/irqdomain.h> 13 + #include <linux/irqchip/riscv-imsic.h> 14 + #include <linux/mailbox_client.h> 15 + #include <linux/mailbox/riscv-rpmi-message.h> 16 + #include <linux/module.h> 17 + #include <linux/msi.h> 18 + #include <linux/of_irq.h> 19 + #include <linux/platform_device.h> 20 + #include <linux/types.h> 21 + 22 + struct rpmi_sysmsi_get_attrs_rx { 23 + __le32 status; 24 + __le32 sys_num_msi; 25 + __le32 flag0; 26 + __le32 flag1; 27 + }; 28 + 29 + #define RPMI_SYSMSI_MSI_ATTRIBUTES_FLAG0_PREF_PRIV BIT(0) 30 + 31 + struct rpmi_sysmsi_set_msi_state_tx { 32 + __le32 sys_msi_index; 33 + __le32 sys_msi_state; 34 + }; 35 + 36 + struct rpmi_sysmsi_set_msi_state_rx { 37 + __le32 status; 38 + }; 39 + 40 + #define RPMI_SYSMSI_MSI_STATE_ENABLE BIT(0) 41 + #define RPMI_SYSMSI_MSI_STATE_PENDING BIT(1) 42 + 43 + struct rpmi_sysmsi_set_msi_target_tx { 44 + __le32 sys_msi_index; 45 + __le32 sys_msi_address_low; 46 + __le32 sys_msi_address_high; 47 + __le32 sys_msi_data; 48 + }; 49 + 50 + struct rpmi_sysmsi_set_msi_target_rx { 51 + __le32 status; 52 + }; 53 + 54 + struct rpmi_sysmsi_priv { 55 + struct device *dev; 56 + struct mbox_client client; 57 + struct mbox_chan *chan; 58 + u32 nr_irqs; 59 + u32 gsi_base; 60 + }; 61 + 62 + static int rpmi_sysmsi_get_num_msi(struct rpmi_sysmsi_priv *priv) 63 + { 64 + struct rpmi_sysmsi_get_attrs_rx rx; 65 + struct rpmi_mbox_message msg; 66 + int ret; 67 + 68 + rpmi_mbox_init_send_with_response(&msg, RPMI_SYSMSI_SRV_GET_ATTRIBUTES, 69 + NULL, 0, &rx, sizeof(rx)); 70 + ret = rpmi_mbox_send_message(priv->chan, &msg); 71 + if (ret) 72 + return ret; 73 + if (rx.status) 74 + return 
rpmi_to_linux_error(le32_to_cpu(rx.status)); 75 + 76 + return le32_to_cpu(rx.sys_num_msi); 77 + } 78 + 79 + static int rpmi_sysmsi_set_msi_state(struct rpmi_sysmsi_priv *priv, 80 + u32 sys_msi_index, u32 sys_msi_state) 81 + { 82 + struct rpmi_sysmsi_set_msi_state_tx tx; 83 + struct rpmi_sysmsi_set_msi_state_rx rx; 84 + struct rpmi_mbox_message msg; 85 + int ret; 86 + 87 + tx.sys_msi_index = cpu_to_le32(sys_msi_index); 88 + tx.sys_msi_state = cpu_to_le32(sys_msi_state); 89 + rpmi_mbox_init_send_with_response(&msg, RPMI_SYSMSI_SRV_SET_MSI_STATE, 90 + &tx, sizeof(tx), &rx, sizeof(rx)); 91 + ret = rpmi_mbox_send_message(priv->chan, &msg); 92 + if (ret) 93 + return ret; 94 + if (rx.status) 95 + return rpmi_to_linux_error(le32_to_cpu(rx.status)); 96 + 97 + return 0; 98 + } 99 + 100 + static int rpmi_sysmsi_set_msi_target(struct rpmi_sysmsi_priv *priv, 101 + u32 sys_msi_index, struct msi_msg *m) 102 + { 103 + struct rpmi_sysmsi_set_msi_target_tx tx; 104 + struct rpmi_sysmsi_set_msi_target_rx rx; 105 + struct rpmi_mbox_message msg; 106 + int ret; 107 + 108 + tx.sys_msi_index = cpu_to_le32(sys_msi_index); 109 + tx.sys_msi_address_low = cpu_to_le32(m->address_lo); 110 + tx.sys_msi_address_high = cpu_to_le32(m->address_hi); 111 + tx.sys_msi_data = cpu_to_le32(m->data); 112 + rpmi_mbox_init_send_with_response(&msg, RPMI_SYSMSI_SRV_SET_MSI_TARGET, 113 + &tx, sizeof(tx), &rx, sizeof(rx)); 114 + ret = rpmi_mbox_send_message(priv->chan, &msg); 115 + if (ret) 116 + return ret; 117 + if (rx.status) 118 + return rpmi_to_linux_error(le32_to_cpu(rx.status)); 119 + 120 + return 0; 121 + } 122 + 123 + static void rpmi_sysmsi_irq_mask(struct irq_data *d) 124 + { 125 + struct rpmi_sysmsi_priv *priv = irq_data_get_irq_chip_data(d); 126 + irq_hw_number_t hwirq = irqd_to_hwirq(d); 127 + int ret; 128 + 129 + ret = rpmi_sysmsi_set_msi_state(priv, hwirq, 0); 130 + if (ret) 131 + dev_warn(priv->dev, "Failed to mask hwirq %lu (error %d)\n", hwirq, ret); 132 + irq_chip_mask_parent(d); 133 + } 134 + 
135 + static void rpmi_sysmsi_irq_unmask(struct irq_data *d) 136 + { 137 + struct rpmi_sysmsi_priv *priv = irq_data_get_irq_chip_data(d); 138 + irq_hw_number_t hwirq = irqd_to_hwirq(d); 139 + int ret; 140 + 141 + irq_chip_unmask_parent(d); 142 + ret = rpmi_sysmsi_set_msi_state(priv, hwirq, RPMI_SYSMSI_MSI_STATE_ENABLE); 143 + if (ret) 144 + dev_warn(priv->dev, "Failed to unmask hwirq %lu (error %d)\n", hwirq, ret); 145 + } 146 + 147 + static void rpmi_sysmsi_write_msg(struct irq_data *d, struct msi_msg *msg) 148 + { 149 + struct rpmi_sysmsi_priv *priv = irq_data_get_irq_chip_data(d); 150 + irq_hw_number_t hwirq = irqd_to_hwirq(d); 151 + int ret; 152 + 153 + /* For zeroed MSI, do nothing as of now */ 154 + if (!msg->address_hi && !msg->address_lo && !msg->data) 155 + return; 156 + 157 + ret = rpmi_sysmsi_set_msi_target(priv, hwirq, msg); 158 + if (ret) 159 + dev_warn(priv->dev, "Failed to set target for hwirq %lu (error %d)\n", hwirq, ret); 160 + } 161 + 162 + static void rpmi_sysmsi_set_desc(msi_alloc_info_t *arg, struct msi_desc *desc) 163 + { 164 + arg->desc = desc; 165 + arg->hwirq = desc->data.icookie.value; 166 + } 167 + 168 + static int rpmi_sysmsi_translate(struct irq_domain *d, struct irq_fwspec *fwspec, 169 + unsigned long *hwirq, unsigned int *type) 170 + { 171 + struct msi_domain_info *info = d->host_data; 172 + struct rpmi_sysmsi_priv *priv = info->data; 173 + 174 + if (WARN_ON(fwspec->param_count < 1)) 175 + return -EINVAL; 176 + 177 + /* For DT, gsi_base is always zero. 
*/ 178 + *hwirq = fwspec->param[0] - priv->gsi_base; 179 + *type = IRQ_TYPE_NONE; 180 + return 0; 181 + } 182 + 183 + static const struct msi_domain_template rpmi_sysmsi_template = { 184 + .chip = { 185 + .name = "RPMI-SYSMSI", 186 + .irq_mask = rpmi_sysmsi_irq_mask, 187 + .irq_unmask = rpmi_sysmsi_irq_unmask, 188 + #ifdef CONFIG_SMP 189 + .irq_set_affinity = irq_chip_set_affinity_parent, 190 + #endif 191 + .irq_write_msi_msg = rpmi_sysmsi_write_msg, 192 + .flags = IRQCHIP_SET_TYPE_MASKED | 193 + IRQCHIP_SKIP_SET_WAKE | 194 + IRQCHIP_MASK_ON_SUSPEND, 195 + }, 196 + 197 + .ops = { 198 + .set_desc = rpmi_sysmsi_set_desc, 199 + .msi_translate = rpmi_sysmsi_translate, 200 + }, 201 + 202 + .info = { 203 + .bus_token = DOMAIN_BUS_WIRED_TO_MSI, 204 + .flags = MSI_FLAG_USE_DEV_FWNODE, 205 + .handler = handle_simple_irq, 206 + .handler_name = "simple", 207 + }, 208 + }; 209 + 210 + static int rpmi_sysmsi_probe(struct platform_device *pdev) 211 + { 212 + struct device *dev = &pdev->dev; 213 + struct rpmi_sysmsi_priv *priv; 214 + struct fwnode_handle *fwnode; 215 + u32 id; 216 + int rc; 217 + 218 + priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); 219 + if (!priv) 220 + return -ENOMEM; 221 + priv->dev = dev; 222 + 223 + /* Setup mailbox client */ 224 + priv->client.dev = priv->dev; 225 + priv->client.rx_callback = NULL; 226 + priv->client.tx_block = false; 227 + priv->client.knows_txdone = true; 228 + priv->client.tx_tout = 0; 229 + 230 + /* Request mailbox channel */ 231 + priv->chan = mbox_request_channel(&priv->client, 0); 232 + if (IS_ERR(priv->chan)) 233 + return PTR_ERR(priv->chan); 234 + 235 + /* Get number of system MSIs */ 236 + rc = rpmi_sysmsi_get_num_msi(priv); 237 + if (rc < 1) { 238 + mbox_free_channel(priv->chan); 239 + if (rc) 240 + return dev_err_probe(dev, rc, "Failed to get number of system MSIs\n"); 241 + else 242 + return dev_err_probe(dev, -ENODEV, "No system MSIs found\n"); 243 + } 244 + priv->nr_irqs = rc; 245 + 246 + fwnode = dev_fwnode(dev); 247 + 
if (is_acpi_node(fwnode)) { 248 + u32 nr_irqs; 249 + 250 + rc = riscv_acpi_get_gsi_info(fwnode, &priv->gsi_base, &id, 251 + &nr_irqs, NULL); 252 + if (rc) { 253 + dev_err(dev, "failed to find GSI mapping\n"); 254 + return rc; 255 + } 256 + 257 + /* Update with actual GSI range */ 258 + if (nr_irqs != priv->nr_irqs) 259 + riscv_acpi_update_gsi_range(priv->gsi_base, priv->nr_irqs); 260 + } 261 + 262 + /* 263 + * The device MSI domain for platform devices on RISC-V architecture 264 + * is only available after the MSI controller driver is probed so, 265 + * explicitly configure here. 266 + */ 267 + if (!dev_get_msi_domain(dev)) { 268 + /* 269 + * The device MSI domain for OF devices is only set at the 270 + * time of populating/creating OF device. If the device MSI 271 + * domain is discovered later after the OF device is created 272 + * then we need to set it explicitly before using any platform 273 + * MSI functions. 274 + */ 275 + if (is_of_node(fwnode)) { 276 + of_msi_configure(dev, dev_of_node(dev)); 277 + } else if (is_acpi_device_node(fwnode)) { 278 + struct irq_domain *msi_domain; 279 + 280 + msi_domain = irq_find_matching_fwnode(imsic_acpi_get_fwnode(dev), 281 + DOMAIN_BUS_PLATFORM_MSI); 282 + dev_set_msi_domain(dev, msi_domain); 283 + } 284 + 285 + if (!dev_get_msi_domain(dev)) { 286 + mbox_free_channel(priv->chan); 287 + return -EPROBE_DEFER; 288 + } 289 + } 290 + 291 + if (!msi_create_device_irq_domain(dev, MSI_DEFAULT_DOMAIN, 292 + &rpmi_sysmsi_template, 293 + priv->nr_irqs, priv, priv)) { 294 + mbox_free_channel(priv->chan); 295 + return dev_err_probe(dev, -ENOMEM, "failed to create MSI irq domain\n"); 296 + } 297 + 298 + #ifdef CONFIG_ACPI 299 + struct acpi_device *adev = ACPI_COMPANION(dev); 300 + 301 + if (adev) 302 + acpi_dev_clear_dependencies(adev); 303 + #endif 304 + 305 + dev_info(dev, "%u system MSIs registered\n", priv->nr_irqs); 306 + return 0; 307 + } 308 + 309 + static const struct of_device_id rpmi_sysmsi_match[] = { 310 + { .compatible = 
"riscv,rpmi-system-msi" }, 311 + {} 312 + }; 313 + 314 + static const struct acpi_device_id acpi_rpmi_sysmsi_match[] = { 315 + { "RSCV0006" }, 316 + {} 317 + }; 318 + MODULE_DEVICE_TABLE(acpi, acpi_rpmi_sysmsi_match); 319 + 320 + static struct platform_driver rpmi_sysmsi_driver = { 321 + .driver = { 322 + .name = "rpmi-sysmsi", 323 + .of_match_table = rpmi_sysmsi_match, 324 + .acpi_match_table = acpi_rpmi_sysmsi_match, 325 + }, 326 + .probe = rpmi_sysmsi_probe, 327 + }; 328 + builtin_platform_driver(rpmi_sysmsi_driver);
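The core of `rpmi_sysmsi_translate()` above is a single subtraction: a firmware GSI number becomes a domain-local hwirq by subtracting `gsi_base`, which is zero on DT and comes from the `_GSB` method on ACPI. A small standalone model (the explicit bounds check is an addition for the sketch; the kernel translate relies on the domain size instead):

```c
#include <stdint.h>

/* Map a firmware GSI number to a domain-local hwirq.
 * Returns -1 when the GSI falls outside the controller's range. */
static int64_t gsi_to_hwirq(uint32_t gsi, uint32_t gsi_base, uint32_t nr_irqs)
{
    if (gsi < gsi_base || gsi - gsi_base >= nr_irqs)
        return -1;
    return gsi - gsi_base;
}
```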
+11
drivers/mailbox/Kconfig
··· 369 369 processor and coprocessor that handles various power management task 370 370 and more. 371 371 372 + config RISCV_SBI_MPXY_MBOX 373 + tristate "RISC-V SBI Message Proxy (MPXY) Mailbox" 374 + depends on RISCV_SBI 375 + default RISCV 376 + help 377 + Mailbox driver implementation for RISC-V SBI Message Proxy (MPXY) 378 + extension. This mailbox driver is used to send messages to the 379 + remote processor through the SBI implementation (M-mode firmware 380 + or HS-mode hypervisor). Say Y here if you want to have this support. 381 + If unsure say N. 382 + 372 383 endif
+2
drivers/mailbox/Makefile
···
 obj-$(CONFIG_CIX_MBOX)		+= cix-mailbox.o

 obj-$(CONFIG_BCM74110_MAILBOX)	+= bcm74110-mailbox.o
+
+obj-$(CONFIG_RISCV_SBI_MPXY_MBOX) += riscv-sbi-mpxy-mbox.o
+40 -25
drivers/mailbox/mailbox.c
···
 #include <linux/module.h>
 #include <linux/mutex.h>
 #include <linux/of.h>
+#include <linux/property.h>
 #include <linux/spinlock.h>

 #include "mailbox.h"
···
  */
 struct mbox_chan *mbox_request_channel(struct mbox_client *cl, int index)
 {
-	struct device *dev = cl->dev;
+	struct fwnode_reference_args fwspec;
+	struct fwnode_handle *fwnode;
 	struct mbox_controller *mbox;
 	struct of_phandle_args spec;
 	struct mbox_chan *chan;
+	struct device *dev;
+	unsigned int i;
 	int ret;

-	if (!dev || !dev->of_node) {
-		pr_debug("%s: No owner device node\n", __func__);
+	dev = cl->dev;
+	if (!dev) {
+		pr_debug("No owner device\n");
 		return ERR_PTR(-ENODEV);
 	}

-	ret = of_parse_phandle_with_args(dev->of_node, "mboxes", "#mbox-cells",
-					 index, &spec);
+	fwnode = dev_fwnode(dev);
+	if (!fwnode) {
+		dev_dbg(dev, "No owner fwnode\n");
+		return ERR_PTR(-ENODEV);
+	}
+
+	ret = fwnode_property_get_reference_args(fwnode, "mboxes", "#mbox-cells",
+						 0, index, &fwspec);
 	if (ret) {
-		dev_err(dev, "%s: can't parse \"mboxes\" property\n", __func__);
+		dev_err(dev, "%s: can't parse \"%s\" property\n", __func__, "mboxes");
 		return ERR_PTR(ret);
 	}

+	spec.np = to_of_node(fwspec.fwnode);
+	spec.args_count = fwspec.nargs;
+	for (i = 0; i < spec.args_count; i++)
+		spec.args[i] = fwspec.args[i];
+
 	scoped_guard(mutex, &con_mutex) {
 		chan = ERR_PTR(-EPROBE_DEFER);
-		list_for_each_entry(mbox, &mbox_cons, node)
-			if (mbox->dev->of_node == spec.np) {
-				chan = mbox->of_xlate(mbox, &spec);
-				if (!IS_ERR(chan))
-					break;
+		list_for_each_entry(mbox, &mbox_cons, node) {
+			if (device_match_fwnode(mbox->dev, fwspec.fwnode)) {
+				if (mbox->fw_xlate) {
+					chan = mbox->fw_xlate(mbox, &fwspec);
+					if (!IS_ERR(chan))
+						break;
+				} else if (mbox->of_xlate) {
+					chan = mbox->of_xlate(mbox, &spec);
+					if (!IS_ERR(chan))
+						break;
+				}
 			}
+		}

-		of_node_put(spec.np);
+		fwnode_handle_put(fwspec.fwnode);

 		if (IS_ERR(chan))
 			return chan;
···
 struct mbox_chan *mbox_request_channel_byname(struct mbox_client *cl,
					       const char *name)
 {
-	struct device_node *np = cl->dev->of_node;
-	int index;
+	int index = device_property_match_string(cl->dev, "mbox-names", name);

-	if (!np) {
-		dev_err(cl->dev, "%s() currently only supports DT\n", __func__);
-		return ERR_PTR(-EINVAL);
-	}
-
-	index = of_property_match_string(np, "mbox-names", name);
 	if (index < 0) {
 		dev_err(cl->dev, "%s() could not locate channel named \"%s\"\n",
			__func__, name);
···
 }
 EXPORT_SYMBOL_GPL(mbox_free_channel);

-static struct mbox_chan *
-of_mbox_index_xlate(struct mbox_controller *mbox,
-		    const struct of_phandle_args *sp)
+static struct mbox_chan *fw_mbox_index_xlate(struct mbox_controller *mbox,
+					     const struct fwnode_reference_args *sp)
 {
 	int ind = sp->args[0];
···
 		spin_lock_init(&chan->lock);
 	}

-	if (!mbox->of_xlate)
-		mbox->of_xlate = of_mbox_index_xlate;
+	if (!mbox->fw_xlate && !mbox->of_xlate)
+		mbox->fw_xlate = fw_mbox_index_xlate;

 	scoped_guard(mutex, &con_mutex)
 		list_add_tail(&mbox->node, &mbox_cons);
+1019
drivers/mailbox/riscv-sbi-mpxy-mbox.c
···
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * RISC-V SBI Message Proxy (MPXY) mailbox controller driver
+ *
+ * Copyright (C) 2025 Ventana Micro Systems Inc.
+ */
+
+#include <linux/acpi.h>
+#include <linux/cpu.h>
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/irqchip/riscv-imsic.h>
+#include <linux/mailbox_controller.h>
+#include <linux/mailbox/riscv-rpmi-message.h>
+#include <linux/minmax.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/msi.h>
+#include <linux/of_irq.h>
+#include <linux/percpu.h>
+#include <linux/platform_device.h>
+#include <linux/smp.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <asm/byteorder.h>
+#include <asm/sbi.h>
+
+/* ====== SBI MPXY extension data structures ====== */
+
+/* SBI MPXY MSI related channel attributes */
+struct sbi_mpxy_msi_info {
+	/* Lower 32-bits of the MSI target address */
+	u32 msi_addr_lo;
+	/* Upper 32-bits of the MSI target address */
+	u32 msi_addr_hi;
+	/* MSI data value */
+	u32 msi_data;
+};
+
+/*
+ * SBI MPXY standard channel attributes.
+ *
+ * NOTE: The sequence of attribute fields are as-per the
+ * defined sequence in the attribute table in spec (or
+ * as-per the enum sbi_mpxy_attribute_id).
+ */
+struct sbi_mpxy_channel_attrs {
+	/* Message protocol ID */
+	u32 msg_proto_id;
+	/* Message protocol version */
+	u32 msg_proto_version;
+	/* Message protocol maximum message length */
+	u32 msg_max_len;
+	/* Message protocol message send timeout in microseconds */
+	u32 msg_send_timeout;
+	/* Message protocol message completion timeout in microseconds */
+	u32 msg_completion_timeout;
+	/* Bit array for channel capabilities */
+	u32 capability;
+	/* SSE event ID */
+	u32 sse_event_id;
+	/* MSI enable/disable control knob */
+	u32 msi_control;
+	/* Channel MSI info */
+	struct sbi_mpxy_msi_info msi_info;
+	/* Events state control */
+	u32 events_state_ctrl;
+};
+
+/*
+ * RPMI specific SBI MPXY channel attributes.
+ *
+ * NOTE: The sequence of attribute fields are as-per the
+ * defined sequence in the attribute table in spec (or
+ * as-per the enum sbi_mpxy_rpmi_attribute_id).
+ */
+struct sbi_mpxy_rpmi_channel_attrs {
+	/* RPMI service group ID */
+	u32 servicegroup_id;
+	/* RPMI service group version */
+	u32 servicegroup_version;
+	/* RPMI implementation ID */
+	u32 impl_id;
+	/* RPMI implementation version */
+	u32 impl_version;
+};
+
+/* SBI MPXY channel IDs data in shared memory */
+struct sbi_mpxy_channel_ids_data {
+	/* Remaining number of channel ids */
+	__le32 remaining;
+	/* Returned channel ids in current function call */
+	__le32 returned;
+	/* Returned channel id array */
+	__le32 channel_array[];
+};
+
+/* SBI MPXY notification data in shared memory */
+struct sbi_mpxy_notification_data {
+	/* Remaining number of notification events */
+	__le32 remaining;
+	/* Number of notification events returned */
+	__le32 returned;
+	/* Number of notification events lost */
+	__le32 lost;
+	/* Reserved for future use */
+	__le32 reserved;
+	/* Returned notification events data */
+	u8 events_data[];
+};
+
+/* ====== MPXY data structures & helper routines ====== */
+
+/* MPXY Per-CPU or local context */
+struct mpxy_local {
+	/* Shared memory base address */
+	void *shmem;
+	/* Shared memory physical address */
+	phys_addr_t shmem_phys_addr;
+	/* Flag representing whether shared memory is active or not */
+	bool shmem_active;
+};
+
+static DEFINE_PER_CPU(struct mpxy_local, mpxy_local);
+static unsigned long mpxy_shmem_size;
+static bool mpxy_shmem_init_done;
+
+static int mpxy_get_channel_count(u32 *channel_count)
+{
+	struct mpxy_local *mpxy = this_cpu_ptr(&mpxy_local);
+	struct sbi_mpxy_channel_ids_data *sdata = mpxy->shmem;
+	u32 remaining, returned;
+	struct sbiret sret;
+
+	if (!mpxy->shmem_active)
+		return -ENODEV;
+	if (!channel_count)
+		return -EINVAL;
+
+	get_cpu();
+
+	/* Get the remaining and returned fields to calculate total */
+	sret = sbi_ecall(SBI_EXT_MPXY, SBI_EXT_MPXY_GET_CHANNEL_IDS,
+			 0, 0, 0, 0, 0, 0);
+	if (sret.error)
+		goto err_put_cpu;
+
+	remaining = le32_to_cpu(sdata->remaining);
+	returned = le32_to_cpu(sdata->returned);
+	*channel_count = remaining + returned;
+
+err_put_cpu:
+	put_cpu();
+	return sbi_err_map_linux_errno(sret.error);
+}
+
+static int mpxy_get_channel_ids(u32 channel_count, u32 *channel_ids)
+{
+	struct mpxy_local *mpxy = this_cpu_ptr(&mpxy_local);
+	struct sbi_mpxy_channel_ids_data *sdata = mpxy->shmem;
+	u32 remaining, returned, count, start_index = 0;
+	struct sbiret sret;
+
+	if (!mpxy->shmem_active)
+		return -ENODEV;
+	if (!channel_count || !channel_ids)
+		return -EINVAL;
+
+	get_cpu();
+
+	do {
+		sret = sbi_ecall(SBI_EXT_MPXY, SBI_EXT_MPXY_GET_CHANNEL_IDS,
+				 start_index, 0, 0, 0, 0, 0);
+		if (sret.error)
+			goto err_put_cpu;
+
+		remaining = le32_to_cpu(sdata->remaining);
+		returned = le32_to_cpu(sdata->returned);
+
+		count = returned < (channel_count - start_index) ?
+			returned : (channel_count - start_index);
+		memcpy_from_le32(&channel_ids[start_index], sdata->channel_array, count);
+		start_index += count;
+	} while (remaining && start_index < channel_count);
+
+err_put_cpu:
+	put_cpu();
+	return sbi_err_map_linux_errno(sret.error);
+}
+
+static int mpxy_read_attrs(u32 channel_id, u32 base_attrid, u32 attr_count,
+			   u32 *attrs_buf)
+{
+	struct mpxy_local *mpxy = this_cpu_ptr(&mpxy_local);
+	struct sbiret sret;
+
+	if (!mpxy->shmem_active)
+		return -ENODEV;
+	if (!attr_count || !attrs_buf)
+		return -EINVAL;
+
+	get_cpu();
+
+	sret = sbi_ecall(SBI_EXT_MPXY, SBI_EXT_MPXY_READ_ATTRS,
+			 channel_id, base_attrid, attr_count, 0, 0, 0);
+	if (sret.error)
+		goto err_put_cpu;
+
+	memcpy_from_le32(attrs_buf, (__le32 *)mpxy->shmem, attr_count);
+
+err_put_cpu:
+	put_cpu();
+	return sbi_err_map_linux_errno(sret.error);
+}
+
+static int mpxy_write_attrs(u32 channel_id, u32 base_attrid, u32 attr_count,
+			    u32 *attrs_buf)
+{
+	struct mpxy_local *mpxy = this_cpu_ptr(&mpxy_local);
+	struct sbiret sret;
+
+	if (!mpxy->shmem_active)
+		return -ENODEV;
+	if (!attr_count || !attrs_buf)
+		return -EINVAL;
+
+	get_cpu();
+
+	memcpy_to_le32((__le32 *)mpxy->shmem, attrs_buf, attr_count);
+	sret = sbi_ecall(SBI_EXT_MPXY, SBI_EXT_MPXY_WRITE_ATTRS,
+			 channel_id, base_attrid, attr_count, 0, 0, 0);
+
+	put_cpu();
+	return sbi_err_map_linux_errno(sret.error);
+}
+
+static int mpxy_send_message_with_resp(u32 channel_id, u32 msg_id,
+				       void *tx, unsigned long tx_len,
+				       void *rx, unsigned long max_rx_len,
+				       unsigned long *rx_len)
+{
+	struct mpxy_local *mpxy = this_cpu_ptr(&mpxy_local);
+	unsigned long rx_bytes;
+	struct sbiret sret;
+
+	if (!mpxy->shmem_active)
+		return -ENODEV;
+	if (!tx && tx_len)
+		return -EINVAL;
+
+	get_cpu();
+
+	/* Message protocols allowed to have no data in messages */
+	if (tx_len)
+		memcpy(mpxy->shmem, tx, tx_len);
+
+	sret = sbi_ecall(SBI_EXT_MPXY, SBI_EXT_MPXY_SEND_MSG_WITH_RESP,
+			 channel_id, msg_id, tx_len, 0, 0, 0);
+	if (rx && !sret.error) {
+		rx_bytes = sret.value;
+		if (rx_bytes > max_rx_len) {
+			put_cpu();
+			return -ENOSPC;
+		}
+
+		memcpy(rx, mpxy->shmem, rx_bytes);
+		if (rx_len)
+			*rx_len = rx_bytes;
+	}
+
+	put_cpu();
+	return sbi_err_map_linux_errno(sret.error);
+}
+
+static int mpxy_send_message_without_resp(u32 channel_id, u32 msg_id,
+					  void *tx, unsigned long tx_len)
+{
+	struct mpxy_local *mpxy = this_cpu_ptr(&mpxy_local);
+	struct sbiret sret;
+
+	if (!mpxy->shmem_active)
+		return -ENODEV;
+	if (!tx && tx_len)
+		return -EINVAL;
+
+	get_cpu();
+
+	/* Message protocols allowed to have no data in messages */
+	if (tx_len)
+		memcpy(mpxy->shmem, tx, tx_len);
+
+	sret = sbi_ecall(SBI_EXT_MPXY, SBI_EXT_MPXY_SEND_MSG_WITHOUT_RESP,
+			 channel_id, msg_id, tx_len, 0, 0, 0);
+
+	put_cpu();
+	return sbi_err_map_linux_errno(sret.error);
+}
+
+static int mpxy_get_notifications(u32 channel_id,
+				  struct sbi_mpxy_notification_data *notif_data,
+				  unsigned long *events_data_len)
+{
+	struct mpxy_local *mpxy = this_cpu_ptr(&mpxy_local);
+	struct sbiret sret;
+
+	if (!mpxy->shmem_active)
+		return -ENODEV;
+	if (!notif_data || !events_data_len)
+		return -EINVAL;
+
+	get_cpu();
+
+	sret = sbi_ecall(SBI_EXT_MPXY, SBI_EXT_MPXY_GET_NOTIFICATION_EVENTS,
+			 channel_id, 0, 0, 0, 0, 0);
+	if (sret.error)
+		goto err_put_cpu;
+
+	memcpy(notif_data, mpxy->shmem, sret.value + 16);
+	*events_data_len = sret.value;
+
+err_put_cpu:
+	put_cpu();
+	return sbi_err_map_linux_errno(sret.error);
+}
+
+static int mpxy_get_shmem_size(unsigned long *shmem_size)
+{
+	struct sbiret sret;
+
+	sret = sbi_ecall(SBI_EXT_MPXY, SBI_EXT_MPXY_GET_SHMEM_SIZE,
+			 0, 0, 0, 0, 0, 0);
+	if (sret.error)
+		return sbi_err_map_linux_errno(sret.error);
+	if (shmem_size)
+		*shmem_size = sret.value;
+	return 0;
+}
+
+static int mpxy_setup_shmem(unsigned int cpu)
+{
+	struct page *shmem_page;
+	struct mpxy_local *mpxy;
+	struct sbiret sret;
+
+	mpxy = per_cpu_ptr(&mpxy_local, cpu);
+	if (mpxy->shmem_active)
+		return 0;
+
+	shmem_page = alloc_pages(GFP_KERNEL | __GFP_ZERO, get_order(mpxy_shmem_size));
+	if (!shmem_page)
+		return -ENOMEM;
+
+	/*
+	 * Linux setup of shmem is done in mpxy OVERWRITE mode.
+	 * flags[1:0] = 00b
+	 */
+	sret = sbi_ecall(SBI_EXT_MPXY, SBI_EXT_MPXY_SET_SHMEM,
+			 page_to_phys(shmem_page), 0, 0, 0, 0, 0);
+	if (sret.error) {
+		free_pages((unsigned long)page_to_virt(shmem_page),
+			   get_order(mpxy_shmem_size));
+		return sbi_err_map_linux_errno(sret.error);
+	}
+
+	mpxy->shmem = page_to_virt(shmem_page);
+	mpxy->shmem_phys_addr = page_to_phys(shmem_page);
+	mpxy->shmem_active = true;
+
+	return 0;
+}
+
+/* ====== MPXY mailbox data structures ====== */
+
+/* MPXY mailbox channel */
+struct mpxy_mbox_channel {
+	struct mpxy_mbox *mbox;
+	u32 channel_id;
+	struct sbi_mpxy_channel_attrs attrs;
+	struct sbi_mpxy_rpmi_channel_attrs rpmi_attrs;
+	struct sbi_mpxy_notification_data *notif;
+	u32 max_xfer_len;
+	bool have_events_state;
+	u32 msi_index;
+	u32 msi_irq;
+	bool started;
+};
+
+/* MPXY mailbox */
+struct mpxy_mbox {
+	struct device *dev;
+	u32 channel_count;
+	struct mpxy_mbox_channel *channels;
+	u32 msi_count;
+	struct mpxy_mbox_channel **msi_index_to_channel;
+	struct mbox_controller controller;
+};
+
+/* ====== MPXY RPMI processing ====== */
+
+static void mpxy_mbox_send_rpmi_data(struct mpxy_mbox_channel *mchan,
+				     struct rpmi_mbox_message *msg)
+{
+	msg->error = 0;
+	switch (msg->type) {
+	case RPMI_MBOX_MSG_TYPE_GET_ATTRIBUTE:
+		switch (msg->attr.id) {
+		case RPMI_MBOX_ATTR_SPEC_VERSION:
+			msg->attr.value = mchan->attrs.msg_proto_version;
+			break;
+		case RPMI_MBOX_ATTR_MAX_MSG_DATA_SIZE:
+			msg->attr.value = mchan->max_xfer_len;
+			break;
+		case RPMI_MBOX_ATTR_SERVICEGROUP_ID:
+			msg->attr.value = mchan->rpmi_attrs.servicegroup_id;
+			break;
+		case RPMI_MBOX_ATTR_SERVICEGROUP_VERSION:
+			msg->attr.value = mchan->rpmi_attrs.servicegroup_version;
+			break;
+		case RPMI_MBOX_ATTR_IMPL_ID:
+			msg->attr.value = mchan->rpmi_attrs.impl_id;
+			break;
+		case RPMI_MBOX_ATTR_IMPL_VERSION:
+			msg->attr.value = mchan->rpmi_attrs.impl_version;
+			break;
+		default:
+			msg->error = -EOPNOTSUPP;
+			break;
+		}
+		break;
+	case RPMI_MBOX_MSG_TYPE_SET_ATTRIBUTE:
+		/* None of the RPMI linux mailbox attributes are writeable */
+		msg->error = -EOPNOTSUPP;
+		break;
+	case RPMI_MBOX_MSG_TYPE_SEND_WITH_RESPONSE:
+		if ((!msg->data.request && msg->data.request_len) ||
+		    (msg->data.request && msg->data.request_len > mchan->max_xfer_len) ||
+		    (!msg->data.response && msg->data.max_response_len)) {
+			msg->error = -EINVAL;
+			break;
+		}
+		if (!(mchan->attrs.capability & SBI_MPXY_CHAN_CAP_SEND_WITH_RESP)) {
+			msg->error = -EIO;
+			break;
+		}
+		msg->error = mpxy_send_message_with_resp(mchan->channel_id,
+							 msg->data.service_id,
+							 msg->data.request,
+							 msg->data.request_len,
+							 msg->data.response,
+							 msg->data.max_response_len,
+							 &msg->data.out_response_len);
+		break;
+	case RPMI_MBOX_MSG_TYPE_SEND_WITHOUT_RESPONSE:
+		if ((!msg->data.request && msg->data.request_len) ||
+		    (msg->data.request && msg->data.request_len > mchan->max_xfer_len)) {
+			msg->error = -EINVAL;
+			break;
+		}
+		if (!(mchan->attrs.capability & SBI_MPXY_CHAN_CAP_SEND_WITHOUT_RESP)) {
+			msg->error = -EIO;
+			break;
+		}
+		msg->error = mpxy_send_message_without_resp(mchan->channel_id,
+							    msg->data.service_id,
+							    msg->data.request,
+							    msg->data.request_len);
+		break;
+	default:
+		msg->error = -EOPNOTSUPP;
+		break;
+	}
+}
+
+static void mpxy_mbox_peek_rpmi_data(struct mbox_chan *chan,
+				     struct mpxy_mbox_channel *mchan,
+				     struct sbi_mpxy_notification_data *notif,
+				     unsigned long events_data_len)
+{
+	struct rpmi_notification_event *event;
+	struct rpmi_mbox_message msg;
+	unsigned long pos = 0;
+
+	while (pos < events_data_len && (events_data_len - pos) >= sizeof(*event)) {
+		event = (struct rpmi_notification_event *)(notif->events_data + pos);
+
+		msg.type = RPMI_MBOX_MSG_TYPE_NOTIFICATION_EVENT;
+		msg.notif.event_datalen = le16_to_cpu(event->event_datalen);
+		msg.notif.event_id = event->event_id;
+		msg.notif.event_data = event->event_data;
+		msg.error = 0;
+
+		mbox_chan_received_data(chan, &msg);
+		pos += sizeof(*event) + msg.notif.event_datalen;
+	}
+}
+
+static int mpxy_mbox_read_rpmi_attrs(struct mpxy_mbox_channel *mchan)
+{
+	return mpxy_read_attrs(mchan->channel_id,
+			       SBI_MPXY_ATTR_MSGPROTO_ATTR_START,
+			       sizeof(mchan->rpmi_attrs) / sizeof(u32),
+			       (u32 *)&mchan->rpmi_attrs);
+}
+
+/* ====== MPXY mailbox callbacks ====== */
+
+static int mpxy_mbox_send_data(struct mbox_chan *chan, void *data)
+{
+	struct mpxy_mbox_channel *mchan = chan->con_priv;
+
+	if (mchan->attrs.msg_proto_id == SBI_MPXY_MSGPROTO_RPMI_ID) {
+		mpxy_mbox_send_rpmi_data(mchan, data);
+		return 0;
+	}
+
+	return -EOPNOTSUPP;
+}
+
+static bool mpxy_mbox_peek_data(struct mbox_chan *chan)
+{
+	struct mpxy_mbox_channel *mchan = chan->con_priv;
+	struct sbi_mpxy_notification_data *notif = mchan->notif;
+	bool have_notifications = false;
+	unsigned long data_len;
+	int rc;
+
+	if (!(mchan->attrs.capability & SBI_MPXY_CHAN_CAP_GET_NOTIFICATIONS))
+		return false;
+
+	do {
+		rc = mpxy_get_notifications(mchan->channel_id, notif, &data_len);
+		if (rc || !data_len)
+			break;
+
+		if (mchan->attrs.msg_proto_id == SBI_MPXY_MSGPROTO_RPMI_ID)
+			mpxy_mbox_peek_rpmi_data(chan, mchan, notif, data_len);
+
+		have_notifications = true;
+	} while (1);
+
+	return have_notifications;
+}
+
+static irqreturn_t mpxy_mbox_irq_thread(int irq, void *dev_id)
+{
+	mpxy_mbox_peek_data(dev_id);
+	return IRQ_HANDLED;
+}
+
+static int mpxy_mbox_setup_msi(struct mbox_chan *chan,
+			       struct mpxy_mbox_channel *mchan)
+{
+	struct device *dev = mchan->mbox->dev;
+	int rc;
+
+	/* Do nothing if MSI not supported */
+	if (mchan->msi_irq == U32_MAX)
+		return 0;
+
+	/* Fail if MSI already enabled */
+	if (mchan->attrs.msi_control)
+		return -EALREADY;
+
+	/* Request channel MSI handler */
+	rc = request_threaded_irq(mchan->msi_irq, NULL, mpxy_mbox_irq_thread,
+				  0, dev_name(dev), chan);
+	if (rc) {
+		dev_err(dev, "failed to request MPXY channel 0x%x IRQ\n",
+			mchan->channel_id);
+		return rc;
+	}
+
+	/* Enable channel MSI control */
+	mchan->attrs.msi_control = 1;
+	rc = mpxy_write_attrs(mchan->channel_id, SBI_MPXY_ATTR_MSI_CONTROL,
+			      1, &mchan->attrs.msi_control);
+	if (rc) {
+		dev_err(dev, "enable MSI control failed for MPXY channel 0x%x\n",
+			mchan->channel_id);
+		mchan->attrs.msi_control = 0;
+		free_irq(mchan->msi_irq, chan);
+		return rc;
+	}
+
+	return 0;
+}
+
+static void mpxy_mbox_cleanup_msi(struct mbox_chan *chan,
+				  struct mpxy_mbox_channel *mchan)
+{
+	struct device *dev = mchan->mbox->dev;
+	int rc;
+
+	/* Do nothing if MSI not supported */
+	if (mchan->msi_irq == U32_MAX)
+		return;
+
+	/* Do nothing if MSI already disabled */
+	if (!mchan->attrs.msi_control)
+		return;
+
+	/* Disable channel MSI control */
+	mchan->attrs.msi_control = 0;
+	rc = mpxy_write_attrs(mchan->channel_id, SBI_MPXY_ATTR_MSI_CONTROL,
+			      1, &mchan->attrs.msi_control);
+	if (rc) {
+		dev_err(dev, "disable MSI control failed for MPXY channel 0x%x\n",
+			mchan->channel_id);
+	}
+
+	/* Free channel MSI handler */
+	free_irq(mchan->msi_irq, chan);
+}
+
+static int mpxy_mbox_setup_events(struct mpxy_mbox_channel *mchan)
+{
+	struct device *dev = mchan->mbox->dev;
+	int rc;
+
+	/* Do nothing if events state not supported */
+	if (!mchan->have_events_state)
+		return 0;
+
+	/* Fail if events state already enabled */
+	if (mchan->attrs.events_state_ctrl)
+		return -EALREADY;
+
+	/* Enable channel events state */
+	mchan->attrs.events_state_ctrl = 1;
+	rc = mpxy_write_attrs(mchan->channel_id, SBI_MPXY_ATTR_EVENTS_STATE_CONTROL,
+			      1, &mchan->attrs.events_state_ctrl);
+	if (rc) {
+		dev_err(dev, "enable events state failed for MPXY channel 0x%x\n",
+			mchan->channel_id);
+		mchan->attrs.events_state_ctrl = 0;
+		return rc;
+	}
+
+	return 0;
+}
+
+static void mpxy_mbox_cleanup_events(struct mpxy_mbox_channel *mchan)
+{
+	struct device *dev = mchan->mbox->dev;
+	int rc;
+
+	/* Do nothing if events state not supported */
+	if (!mchan->have_events_state)
+		return;
+
+	/* Do nothing if events state already disabled */
+	if (!mchan->attrs.events_state_ctrl)
+		return;
+
+	/* Disable channel events state */
+	mchan->attrs.events_state_ctrl = 0;
+	rc = mpxy_write_attrs(mchan->channel_id, SBI_MPXY_ATTR_EVENTS_STATE_CONTROL,
+			      1, &mchan->attrs.events_state_ctrl);
+	if (rc)
+		dev_err(dev, "disable events state failed for MPXY channel 0x%x\n",
+			mchan->channel_id);
+}
+
+static int mpxy_mbox_startup(struct mbox_chan *chan)
+{
+	struct mpxy_mbox_channel *mchan = chan->con_priv;
+	int rc;
+
+	if (mchan->started)
+		return -EALREADY;
+
+	/* Setup channel MSI */
+	rc = mpxy_mbox_setup_msi(chan, mchan);
+	if (rc)
+		return rc;
+
+	/* Setup channel notification events */
+	rc = mpxy_mbox_setup_events(mchan);
+	if (rc) {
+		mpxy_mbox_cleanup_msi(chan, mchan);
+		return rc;
+	}
+
+	/* Mark the channel as started */
+	mchan->started = true;
+
+	return 0;
+}
+
+static void mpxy_mbox_shutdown(struct mbox_chan *chan)
+{
+	struct mpxy_mbox_channel *mchan = chan->con_priv;
+
+	if (!mchan->started)
+		return;
+
+	/* Mark the channel as stopped */
+	mchan->started = false;
+
+	/* Cleanup channel notification events */
+	mpxy_mbox_cleanup_events(mchan);
+
+	/* Cleanup channel MSI */
+	mpxy_mbox_cleanup_msi(chan, mchan);
+}
+
+static const struct mbox_chan_ops mpxy_mbox_ops = {
+	.send_data = mpxy_mbox_send_data,
+	.peek_data = mpxy_mbox_peek_data,
+	.startup = mpxy_mbox_startup,
+	.shutdown = mpxy_mbox_shutdown,
+};
+
+/* ====== MPXY platform driver ===== */
+
+static void mpxy_mbox_msi_write(struct msi_desc *desc, struct msi_msg *msg)
+{
+	struct device *dev = msi_desc_to_dev(desc);
+	struct mpxy_mbox *mbox = dev_get_drvdata(dev);
+	struct mpxy_mbox_channel *mchan;
+	struct sbi_mpxy_msi_info *minfo;
+	int rc;
+
+	mchan = mbox->msi_index_to_channel[desc->msi_index];
+	if (!mchan) {
+		dev_warn(dev, "MPXY channel not available for MSI index %d\n",
+			 desc->msi_index);
+		return;
+	}
+
+	minfo = &mchan->attrs.msi_info;
+	minfo->msi_addr_lo = msg->address_lo;
+	minfo->msi_addr_hi = msg->address_hi;
+	minfo->msi_data = msg->data;
+
+	rc = mpxy_write_attrs(mchan->channel_id, SBI_MPXY_ATTR_MSI_ADDR_LO,
+			      sizeof(*minfo) / sizeof(u32), (u32 *)minfo);
+	if (rc) {
+		dev_warn(dev, "failed to write MSI info for MPXY channel 0x%x\n",
+			 mchan->channel_id);
+	}
+}
+
+static struct mbox_chan *mpxy_mbox_fw_xlate(struct mbox_controller *ctlr,
+					    const struct fwnode_reference_args *pa)
+{
+	struct mpxy_mbox *mbox = container_of(ctlr, struct mpxy_mbox, controller);
+	struct mpxy_mbox_channel *mchan;
+	u32 i;
+
+	if (pa->nargs != 2)
+		return ERR_PTR(-EINVAL);
+
+	for (i = 0; i < mbox->channel_count; i++) {
+		mchan = &mbox->channels[i];
+		if (mchan->channel_id == pa->args[0] &&
+		    mchan->attrs.msg_proto_id == pa->args[1])
+			return &mbox->controller.chans[i];
+	}
+
+	return ERR_PTR(-ENOENT);
+}
+
+static int mpxy_mbox_populate_channels(struct mpxy_mbox *mbox)
+{
+	u32 i, *channel_ids __free(kfree) = NULL;
+	struct mpxy_mbox_channel *mchan;
+	int rc;
+
+	/* Find-out of number of channels */
+	rc = mpxy_get_channel_count(&mbox->channel_count);
+	if (rc)
+		return dev_err_probe(mbox->dev, rc, "failed to get number of MPXY channels\n");
+	if (!mbox->channel_count)
+		return dev_err_probe(mbox->dev, -ENODEV, "no MPXY channels available\n");
+
+	/* Allocate and fetch all channel IDs */
+	channel_ids = kcalloc(mbox->channel_count, sizeof(*channel_ids), GFP_KERNEL);
+	if (!channel_ids)
+		return -ENOMEM;
+	rc = mpxy_get_channel_ids(mbox->channel_count, channel_ids);
+	if (rc)
+		return dev_err_probe(mbox->dev, rc, "failed to get MPXY channel IDs\n");
+
+	/* Populate all channels */
+	mbox->channels = devm_kcalloc(mbox->dev, mbox->channel_count,
+				      sizeof(*mbox->channels), GFP_KERNEL);
+	if (!mbox->channels)
+		return -ENOMEM;
+	for (i = 0; i < mbox->channel_count; i++) {
+		mchan = &mbox->channels[i];
+		mchan->mbox = mbox;
+		mchan->channel_id = channel_ids[i];
+
+		rc = mpxy_read_attrs(mchan->channel_id, SBI_MPXY_ATTR_MSG_PROT_ID,
+				     sizeof(mchan->attrs) / sizeof(u32),
+				     (u32 *)&mchan->attrs);
+		if (rc) {
+			return dev_err_probe(mbox->dev, rc,
+					     "MPXY channel 0x%x read attrs failed\n",
+					     mchan->channel_id);
+		}
+
+		if (mchan->attrs.msg_proto_id == SBI_MPXY_MSGPROTO_RPMI_ID) {
+			rc = mpxy_mbox_read_rpmi_attrs(mchan);
+			if (rc) {
+				return dev_err_probe(mbox->dev, rc,
+						     "MPXY channel 0x%x read RPMI attrs failed\n",
+						     mchan->channel_id);
+			}
+		}
+
+		mchan->notif = devm_kzalloc(mbox->dev, mpxy_shmem_size, GFP_KERNEL);
+		if (!mchan->notif)
+			return -ENOMEM;
+
+		mchan->max_xfer_len = min(mpxy_shmem_size, mchan->attrs.msg_max_len);
+
+		if ((mchan->attrs.capability & SBI_MPXY_CHAN_CAP_GET_NOTIFICATIONS) &&
+		    (mchan->attrs.capability & SBI_MPXY_CHAN_CAP_EVENTS_STATE))
+			mchan->have_events_state = true;
+
+		if ((mchan->attrs.capability & SBI_MPXY_CHAN_CAP_GET_NOTIFICATIONS) &&
+		    (mchan->attrs.capability & SBI_MPXY_CHAN_CAP_MSI))
+			mchan->msi_index = mbox->msi_count++;
+		else
+			mchan->msi_index = U32_MAX;
+		mchan->msi_irq = U32_MAX;
+	}
+
+	return 0;
+}
+
+static int mpxy_mbox_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct mpxy_mbox_channel *mchan;
+	struct mpxy_mbox *mbox;
+	int msi_idx, rc;
+	u32 i;
+
+	/*
+	 * Initialize MPXY shared memory only once. This also ensures
+	 * that SBI MPXY mailbox is probed only once.
+	 */
+	if (mpxy_shmem_init_done) {
+		dev_err(dev, "SBI MPXY mailbox already initialized\n");
+		return -EALREADY;
+	}
+
+	/* Probe for SBI MPXY extension */
+	if (sbi_spec_version < sbi_mk_version(1, 0) ||
+	    sbi_probe_extension(SBI_EXT_MPXY) <= 0) {
+		dev_info(dev, "SBI MPXY extension not available\n");
+		return -ENODEV;
+	}
+
+	/* Find-out shared memory size */
+	rc = mpxy_get_shmem_size(&mpxy_shmem_size);
+	if (rc)
+		return dev_err_probe(dev, rc, "failed to get MPXY shared memory size\n");
+
+	/*
+	 * Setup MPXY shared memory on each CPU
+	 *
+	 * Note: Don't cleanup MPXY shared memory upon CPU power-down
+	 * because the RPMI System MSI irqchip driver needs it to be
+	 * available when migrating IRQs in CPU power-down path.
+	 */
+	cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "riscv/sbi-mpxy-shmem",
+			  mpxy_setup_shmem, NULL);
+
+	/* Mark as MPXY shared memory initialization done */
+	mpxy_shmem_init_done = true;
+
+	/* Allocate mailbox instance */
+	mbox = devm_kzalloc(dev, sizeof(*mbox), GFP_KERNEL);
+	if (!mbox)
+		return -ENOMEM;
+	mbox->dev = dev;
+	platform_set_drvdata(pdev, mbox);
+
+	/* Populate mailbox channels */
+	rc = mpxy_mbox_populate_channels(mbox);
+	if (rc)
+		return rc;
+
+	/* Initialize mailbox controller */
+	mbox->controller.txdone_irq = false;
+	mbox->controller.txdone_poll = false;
+	mbox->controller.ops = &mpxy_mbox_ops;
+	mbox->controller.dev = dev;
+	mbox->controller.num_chans = mbox->channel_count;
+	mbox->controller.fw_xlate = mpxy_mbox_fw_xlate;
+	mbox->controller.chans = devm_kcalloc(dev, mbox->channel_count,
+					      sizeof(*mbox->controller.chans),
+					      GFP_KERNEL);
+	if (!mbox->controller.chans)
+		return -ENOMEM;
+	for (i = 0; i < mbox->channel_count; i++)
+		mbox->controller.chans[i].con_priv = &mbox->channels[i];
+
+	/* Setup MSIs for mailbox (if required) */
+	if (mbox->msi_count) {
+		/*
+		 * The device MSI domain for platform devices on RISC-V architecture
+		 * is only available after the MSI controller driver is probed so,
+		 * explicitly configure here.
+		 */
+		if (!dev_get_msi_domain(dev)) {
+			struct fwnode_handle *fwnode = dev_fwnode(dev);
+
+			/*
+			 * The device MSI domain for OF devices is only set at the
+			 * time of populating/creating OF device. If the device MSI
+			 * domain is discovered later after the OF device is created
+			 * then we need to set it explicitly before using any platform
+			 * MSI functions.
+			 */
+			if (is_of_node(fwnode)) {
+				of_msi_configure(dev, dev_of_node(dev));
+			} else if (is_acpi_device_node(fwnode)) {
+				struct irq_domain *msi_domain;
+
+				msi_domain = irq_find_matching_fwnode(imsic_acpi_get_fwnode(dev),
+								      DOMAIN_BUS_PLATFORM_MSI);
+				dev_set_msi_domain(dev, msi_domain);
+			}
+
+			if (!dev_get_msi_domain(dev))
+				return -EPROBE_DEFER;
+		}
+
+		mbox->msi_index_to_channel = devm_kcalloc(dev, mbox->msi_count,
+							  sizeof(*mbox->msi_index_to_channel),
+							  GFP_KERNEL);
+		if (!mbox->msi_index_to_channel)
+			return -ENOMEM;
+
+		for (msi_idx = 0; msi_idx < mbox->msi_count; msi_idx++) {
+			for (i = 0; i < mbox->channel_count; i++) {
+				mchan = &mbox->channels[i];
+				if (mchan->msi_index == msi_idx) {
+					mbox->msi_index_to_channel[msi_idx] = mchan;
+					break;
+				}
+			}
+		}
+
+		rc = platform_device_msi_init_and_alloc_irqs(dev, mbox->msi_count,
+							     mpxy_mbox_msi_write);
+		if (rc) {
+			return dev_err_probe(dev, rc, "Failed to allocate %d MSIs\n",
+					     mbox->msi_count);
+		}
+
+		for (i = 0; i < mbox->channel_count; i++) {
+			mchan = &mbox->channels[i];
+			if (mchan->msi_index == U32_MAX)
+				continue;
+			mchan->msi_irq = msi_get_virq(dev, mchan->msi_index);
+		}
+	}
+
+	/* Register mailbox controller */
+	rc = devm_mbox_controller_register(dev, &mbox->controller);
+	if (rc) {
+		dev_err_probe(dev, rc, "Registering SBI MPXY mailbox failed\n");
+		if (mbox->msi_count)
+			platform_device_msi_free_irqs_all(dev);
+		return rc;
+	}
+
+#ifdef CONFIG_ACPI
+	struct acpi_device *adev = ACPI_COMPANION(dev);
+
+	if (adev)
+		acpi_dev_clear_dependencies(adev);
+#endif
+
+	dev_info(dev, "mailbox registered with %d channels\n",
+		 mbox->channel_count);
+	return 0;
+}
+
+static void mpxy_mbox_remove(struct platform_device *pdev)
+{
+	struct mpxy_mbox *mbox = platform_get_drvdata(pdev);
+
+	if (mbox->msi_count)
+		platform_device_msi_free_irqs_all(mbox->dev);
+}
+
+static const struct of_device_id mpxy_mbox_of_match[] = {
+	{ .compatible = "riscv,sbi-mpxy-mbox" },
+	{}
+};
+MODULE_DEVICE_TABLE(of, mpxy_mbox_of_match);
+
+static const struct acpi_device_id mpxy_mbox_acpi_match[] = {
+	{ "RSCV0005" },
+	{}
+};
+MODULE_DEVICE_TABLE(acpi, mpxy_mbox_acpi_match);
+
+static struct platform_driver mpxy_mbox_driver = {
+	.driver = {
+		.name = "riscv-sbi-mpxy-mbox",
+		.of_match_table = mpxy_mbox_of_match,
+		.acpi_match_table = mpxy_mbox_acpi_match,
+	},
+	.probe = mpxy_mbox_probe,
+	.remove = mpxy_mbox_remove,
+};
+module_platform_driver(mpxy_mbox_driver);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Anup Patel <apatel@ventanamicro.com>");
+MODULE_DESCRIPTION("RISC-V SBI MPXY mailbox controller driver");
+16
include/linux/byteorder/generic.h
···
 	}
 }
 
+static inline void memcpy_from_le32(u32 *dst, const __le32 *src, size_t words)
+{
+	size_t i;
+
+	for (i = 0; i < words; i++)
+		dst[i] = le32_to_cpu(src[i]);
+}
+
+static inline void memcpy_to_le32(__le32 *dst, const u32 *src, size_t words)
+{
+	size_t i;
+
+	for (i = 0; i < words; i++)
+		dst[i] = cpu_to_le32(src[i]);
+}
+
 static inline void be16_add_cpu(__be16 *var, u16 val)
 {
 	*var = cpu_to_be16(be16_to_cpu(*var) + val);
+243
include/linux/mailbox/riscv-rpmi-message.h
···
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2025 Ventana Micro Systems Inc. */
+
+#ifndef _LINUX_RISCV_RPMI_MESSAGE_H_
+#define _LINUX_RISCV_RPMI_MESSAGE_H_
+
+#include <linux/errno.h>
+#include <linux/mailbox_client.h>
+#include <linux/types.h>
+#include <linux/wordpart.h>
+
+/* RPMI version encode/decode macros */
+#define RPMI_VER_MAJOR(__ver)		upper_16_bits(__ver)
+#define RPMI_VER_MINOR(__ver)		lower_16_bits(__ver)
+#define RPMI_MKVER(__maj, __min)	(((u32)(__maj) << 16) | (u16)(__min))
+
+/* RPMI message header */
+struct rpmi_message_header {
+	__le16 servicegroup_id;
+	u8 service_id;
+	u8 flags;
+	__le16 datalen;
+	__le16 token;
+};
+
+/* RPMI message */
+struct rpmi_message {
+	struct rpmi_message_header header;
+	u8 data[];
+};
+
+/* RPMI notification event */
+struct rpmi_notification_event {
+	__le16 event_datalen;
+	u8 event_id;
+	u8 reserved;
+	u8 event_data[];
+};
+
+/* RPMI error codes */
+enum rpmi_error_codes {
+	RPMI_SUCCESS			= 0,
+	RPMI_ERR_FAILED			= -1,
+	RPMI_ERR_NOTSUPP		= -2,
+	RPMI_ERR_INVALID_PARAM		= -3,
+	RPMI_ERR_DENIED			= -4,
+	RPMI_ERR_INVALID_ADDR		= -5,
+	RPMI_ERR_ALREADY		= -6,
+	RPMI_ERR_EXTENSION		= -7,
+	RPMI_ERR_HW_FAULT		= -8,
+	RPMI_ERR_BUSY			= -9,
+	RPMI_ERR_INVALID_STATE		= -10,
+	RPMI_ERR_BAD_RANGE		= -11,
+	RPMI_ERR_TIMEOUT		= -12,
+	RPMI_ERR_IO			= -13,
+	RPMI_ERR_NO_DATA		= -14,
+	RPMI_ERR_RESERVED_START		= -15,
+	RPMI_ERR_RESERVED_END		= -127,
+	RPMI_ERR_VENDOR_START		= -128,
+};
+
+static inline int rpmi_to_linux_error(int rpmi_error)
+{
+	switch (rpmi_error) {
+	case RPMI_SUCCESS:
+		return 0;
+	case RPMI_ERR_INVALID_PARAM:
+	case RPMI_ERR_BAD_RANGE:
+	case RPMI_ERR_INVALID_STATE:
+		return -EINVAL;
+	case RPMI_ERR_DENIED:
+		return -EPERM;
+	case RPMI_ERR_INVALID_ADDR:
+	case RPMI_ERR_HW_FAULT:
+		return -EFAULT;
+	case RPMI_ERR_ALREADY:
+		return -EALREADY;
+	case RPMI_ERR_BUSY:
+		return -EBUSY;
+	case RPMI_ERR_TIMEOUT:
+		return -ETIMEDOUT;
+	case RPMI_ERR_IO:
+		return -ECOMM;
+	case RPMI_ERR_FAILED:
+	case RPMI_ERR_NOTSUPP:
+	case RPMI_ERR_NO_DATA:
+	case RPMI_ERR_EXTENSION:
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+/* RPMI service group IDs */
+#define RPMI_SRVGRP_SYSTEM_MSI	0x00002
+#define RPMI_SRVGRP_CLOCK	0x00008
+
+/* RPMI clock service IDs */
+enum rpmi_clock_service_id {
+	RPMI_CLK_SRV_ENABLE_NOTIFICATION = 0x01,
+	RPMI_CLK_SRV_GET_NUM_CLOCKS = 0x02,
+	RPMI_CLK_SRV_GET_ATTRIBUTES = 0x03,
+	RPMI_CLK_SRV_GET_SUPPORTED_RATES = 0x04,
+	RPMI_CLK_SRV_SET_CONFIG = 0x05,
+	RPMI_CLK_SRV_GET_CONFIG = 0x06,
+	RPMI_CLK_SRV_SET_RATE = 0x07,
+	RPMI_CLK_SRV_GET_RATE = 0x08,
+	RPMI_CLK_SRV_ID_MAX_COUNT
+};
+
+/* RPMI system MSI service IDs */
+enum rpmi_sysmsi_service_id {
+	RPMI_SYSMSI_SRV_ENABLE_NOTIFICATION = 0x01,
+	RPMI_SYSMSI_SRV_GET_ATTRIBUTES = 0x02,
+	RPMI_SYSMSI_SRV_GET_MSI_ATTRIBUTES = 0x03,
+	RPMI_SYSMSI_SRV_SET_MSI_STATE = 0x04,
+	RPMI_SYSMSI_SRV_GET_MSI_STATE = 0x05,
+	RPMI_SYSMSI_SRV_SET_MSI_TARGET = 0x06,
+	RPMI_SYSMSI_SRV_GET_MSI_TARGET = 0x07,
+	RPMI_SYSMSI_SRV_ID_MAX_COUNT
+};
+
+/* RPMI Linux mailbox attribute IDs */
+enum rpmi_mbox_attribute_id {
+	RPMI_MBOX_ATTR_SPEC_VERSION,
+	RPMI_MBOX_ATTR_MAX_MSG_DATA_SIZE,
+	RPMI_MBOX_ATTR_SERVICEGROUP_ID,
+	RPMI_MBOX_ATTR_SERVICEGROUP_VERSION,
+	RPMI_MBOX_ATTR_IMPL_ID,
+	RPMI_MBOX_ATTR_IMPL_VERSION,
+	RPMI_MBOX_ATTR_MAX_ID
+};
+
+/* RPMI Linux mailbox message types */
+enum rpmi_mbox_message_type {
+	RPMI_MBOX_MSG_TYPE_GET_ATTRIBUTE,
+	RPMI_MBOX_MSG_TYPE_SET_ATTRIBUTE,
+	RPMI_MBOX_MSG_TYPE_SEND_WITH_RESPONSE,
+	RPMI_MBOX_MSG_TYPE_SEND_WITHOUT_RESPONSE,
+	RPMI_MBOX_MSG_TYPE_NOTIFICATION_EVENT,
+	RPMI_MBOX_MSG_MAX_TYPE
+};
+
+/* RPMI Linux mailbox message instance */
+struct rpmi_mbox_message {
+	enum rpmi_mbox_message_type type;
+	union {
+		struct {
+			enum rpmi_mbox_attribute_id id;
+			u32 value;
+		} attr;
+
+		struct {
+			u32 service_id;
+			void *request;
+			unsigned long request_len;
+			void *response;
+			unsigned long max_response_len;
+			unsigned long out_response_len;
+		} data;
+
+		struct {
+			u16 event_datalen;
+			u8 event_id;
+			u8 *event_data;
+		} notif;
+	};
+	int error;
+};
+
+/* RPMI Linux mailbox message helper routines */
+static inline void rpmi_mbox_init_get_attribute(struct rpmi_mbox_message *msg,
+						enum rpmi_mbox_attribute_id id)
+{
+	msg->type = RPMI_MBOX_MSG_TYPE_GET_ATTRIBUTE;
+	msg->attr.id = id;
+	msg->attr.value = 0;
+	msg->error = 0;
+}
+
+static inline void rpmi_mbox_init_set_attribute(struct rpmi_mbox_message *msg,
+						enum rpmi_mbox_attribute_id id,
+						u32 value)
+{
+	msg->type = RPMI_MBOX_MSG_TYPE_SET_ATTRIBUTE;
+	msg->attr.id = id;
+	msg->attr.value = value;
+	msg->error = 0;
+}
+
+static inline void rpmi_mbox_init_send_with_response(struct rpmi_mbox_message *msg,
+						     u32 service_id,
+						     void *request,
+						     unsigned long request_len,
+						     void *response,
+						     unsigned long max_response_len)
+{
+	msg->type = RPMI_MBOX_MSG_TYPE_SEND_WITH_RESPONSE;
+	msg->data.service_id = service_id;
+	msg->data.request = request;
+	msg->data.request_len = request_len;
+	msg->data.response = response;
+	msg->data.max_response_len = max_response_len;
+	msg->data.out_response_len = 0;
+	msg->error = 0;
+}
+
+static inline void rpmi_mbox_init_send_without_response(struct rpmi_mbox_message *msg,
+							u32 service_id,
+							void *request,
+							unsigned long request_len)
+{
+	msg->type = RPMI_MBOX_MSG_TYPE_SEND_WITHOUT_RESPONSE;
+	msg->data.service_id = service_id;
+	msg->data.request = request;
+	msg->data.request_len = request_len;
+	msg->data.response = NULL;
+	msg->data.max_response_len = 0;
+	msg->data.out_response_len = 0;
+	msg->error = 0;
+}
+
+static inline void *rpmi_mbox_get_msg_response(struct rpmi_mbox_message *msg)
+{
+	return msg ? msg->data.response : NULL;
+}
+
+static inline int rpmi_mbox_send_message(struct mbox_chan *chan,
+					 struct rpmi_mbox_message *msg)
+{
+	int ret;
+
+	/* Send message on the underlying mailbox channel */
+	ret = mbox_send_message(chan, msg);
+	if (ret < 0)
+		return ret;
+
+	/* Explicitly signal txdone for the mailbox channel */
+	ret = msg->error;
+	mbox_client_txdone(chan, ret);
+	return ret;
+}
+
+#endif /* _LINUX_RISCV_RPMI_MESSAGE_H_ */
+3
include/linux/mailbox_controller.h
···
 *		no interrupt rises. Ignored if 'txdone_irq' is set.
 * @txpoll_period:	If 'txdone_poll' is in effect, the API polls for
 *			last TX's status after these many millisecs
+ * @fw_xlate:		Controller driver specific mapping of channel via fwnode
 * @of_xlate:		Controller driver specific mapping of channel via DT
 * @poll_hrt:		API private. hrtimer used to poll for TXDONE on all
 *			channels.
···
	bool txdone_irq;
	bool txdone_poll;
	unsigned txpoll_period;
+	struct mbox_chan *(*fw_xlate)(struct mbox_controller *mbox,
+				      const struct fwnode_reference_args *sp);
	struct mbox_chan *(*of_xlate)(struct mbox_controller *mbox,
				      const struct of_phandle_args *sp);
	/* Internal to API */