
Merge tag 'scmi-updates-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/sudeep.holla/linux into soc/drivers

Arm SCMI updates for v6.3

The main addition is a unified userspace interface for SCMI, irrespective
of the underlying transport, along with some changes to refactor the
SCMI stack probing sequence.

1. SCMI unified userspace interface

This is to have a unified way of testing an SCMI platform firmware
implementation for compliance, fuzzing etc., from the perspective of
the non-secure OSPM irrespective of the underlying transport supporting
SCMI. It is just for testing/development and not a feature intended for
use in production.

Currently an SCMI Compliance Suite[1] can only work by injecting SCMI
messages using the mailbox test driver, which makes it transport
specific and unusable with any other transport such as virtio,
smc/hvc, or optee. The shared memory can also be transport specific,
and it is better to abstract/hide those details while providing the
userspace access. So, in order to scale with any transport, we need a
unified interface.

In order to achieve that, SCMI "raw mode support" is added through
debugfs, which is also more configurable. A userspace application can
inject bare SCMI binary messages into the SCMI core stack; such
messages will be routed by the regular SCMI kernel stack to the
backend platform firmware using the configured transport,
transparently. This eliminates the need to know about the specific
underlying transport internals, which are taken care of by the SCMI
core stack itself. Furthermore, unlike with the mailbox-test driver,
no additional device tree changes are needed.

[1] https://gitlab.arm.com/tests/scmi-tests
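For illustration, injecting a raw synchronous command could look roughly like
the sketch below. The debugfs path and instance number ("/sys/kernel/debug/scmi/0")
are assumptions that depend on the system and debugfs mount point, and the
header field layout is taken from the SCMI specification (bits [7:0] message
id, [9:8] message type, [17:10] protocol id, [27:18] token):

```python
import struct

def scmi_msg_header(msg_id, msg_type, protocol_id, token):
    # Pack a 32-bit SCMI message header per the SCMI spec field layout:
    # bits [7:0] message id, [9:8] type, [17:10] protocol id, [27:18] token.
    return ((msg_id & 0xff) |
            ((msg_type & 0x3) << 8) |
            ((protocol_id & 0xff) << 10) |
            ((token & 0x3ff) << 18))

# PROTOCOL_VERSION (msg_id 0x0, type 0 = command) on the Base protocol (0x10);
# this command carries no payload, so the message is the header alone.
hdr = scmi_msg_header(0x0, 0, 0x10, 0)
msg = struct.pack('<I', hdr)  # little-endian, as the raw ABI expects

# Hypothetical path: depends on the debugfs mount and the SCMI instance number
RAW = '/sys/kernel/debug/scmi/0/raw/message'

def send_raw(path, payload):
    # One write injects one command; one read returns one reply
    # (an EOF is received at each message boundary).
    with open(path, 'wb', buffering=0) as f:
        f.write(payload)
    with open(path, 'rb') as f:
        return f.read()  # response header + status + return values
```

Note this is only a sketch of the ABI usage described above, not part of the
series itself.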

2. Refactoring of the SCMI stack probing sequence

On some platforms, the SCMI transport can be provided by OPTEE/TEE,
which introduces a dependency in the probe ordering. In order to
address this, the SCMI bus is split into its own module, which
continues to be initialized at subsys_initcall, while the SCMI core
stack, including its various transport backends (like optee, mailbox,
virtio, smc), is now moved into a separate module at module_init level.

This allows other, possibly dependent, subsystems to register and/or
access the SCMI bus well before the core SCMI stack and its dependent
transport backends.

* tag 'scmi-updates-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/sudeep.holla/linux: (31 commits)
firmware: arm_scmi: Clarify raw per-channel ABI documentation
firmware: arm_scmi: Add per-channel raw injection support
firmware: arm_scmi: Add the raw mode co-existence support
firmware: arm_scmi: Call raw mode hooks from the core stack
firmware: arm_scmi: Reject SCMI drivers when configured in raw mode
firmware: arm_scmi: Add debugfs ABI documentation for raw mode
firmware: arm_scmi: Add core raw transmission support
firmware: arm_scmi: Add debugfs ABI documentation for common entries
firmware: arm_scmi: Populate a common SCMI debugfs root
debugfs: Export debugfs_create_str symbol
include: trace: Add platform and channel instance references
firmware: arm_scmi: Add internal platform/channel identifiers
firmware: arm_scmi: Move errors defs and code to common.h
firmware: arm_scmi: Add xfer helpers to provide raw access
firmware: arm_scmi: Add flags field to xfer
firmware: arm_scmi: Refactor scmi_wait_for_message_response
firmware: arm_scmi: Refactor polling helpers
firmware: arm_scmi: Refactor xfer in-flight registration routines
firmware: arm_scmi: Split bus and driver into distinct modules
firmware: arm_scmi: Introduce a new lifecycle for protocol devices
...

Link: https://lore.kernel.org/r/20230120162152.1438456-1-sudeep.holla@arm.com
Signed-off-by: Arnd Bergmann <arnd@arndb.de>

+2957 -614
+70
Documentation/ABI/testing/debugfs-scmi
What:           /sys/kernel/debug/scmi/<n>/instance_name
Date:           March 2023
KernelVersion:  6.3
Contact:        cristian.marussi@arm.com
Description:    The name of the underlying SCMI instance <n> described by
                all the debugfs accessors rooted at /sys/kernel/debug/scmi/<n>,
                expressed as the full name of the top DT SCMI node under which
                this SCMI instance is rooted.
Users:          Debugging, any userspace test suite

What:           /sys/kernel/debug/scmi/<n>/atomic_threshold_us
Date:           March 2023
KernelVersion:  6.3
Contact:        cristian.marussi@arm.com
Description:    An optional time value, expressed in microseconds, representing,
                on this SCMI instance <n>, the threshold above which any SCMI
                command, advertised to have a higher-than-threshold execution
                latency, should not be considered for atomic mode of operation,
                even if requested.
Users:          Debugging, any userspace test suite

What:           /sys/kernel/debug/scmi/<n>/transport/type
Date:           March 2023
KernelVersion:  6.3
Contact:        cristian.marussi@arm.com
Description:    A string representing the type of transport configured for this
                SCMI instance <n>.
Users:          Debugging, any userspace test suite

What:           /sys/kernel/debug/scmi/<n>/transport/is_atomic
Date:           March 2023
KernelVersion:  6.3
Contact:        cristian.marussi@arm.com
Description:    A boolean stating if the transport configured on the underlying
                SCMI instance <n> is capable of atomic mode of operation.
Users:          Debugging, any userspace test suite

What:           /sys/kernel/debug/scmi/<n>/transport/max_rx_timeout_ms
Date:           March 2023
KernelVersion:  6.3
Contact:        cristian.marussi@arm.com
Description:    Timeout in milliseconds allowed for SCMI synchronous replies
                for the currently configured SCMI transport for instance <n>.
Users:          Debugging, any userspace test suite

What:           /sys/kernel/debug/scmi/<n>/transport/max_msg_size
Date:           March 2023
KernelVersion:  6.3
Contact:        cristian.marussi@arm.com
Description:    Max message size of allowed SCMI messages for the currently
                configured SCMI transport for instance <n>.
Users:          Debugging, any userspace test suite

What:           /sys/kernel/debug/scmi/<n>/transport/tx_max_msg
Date:           March 2023
KernelVersion:  6.3
Contact:        cristian.marussi@arm.com
Description:    Max number of concurrently allowed in-flight SCMI messages for
                the currently configured SCMI transport for instance <n> on the
                TX channels.
Users:          Debugging, any userspace test suite

What:           /sys/kernel/debug/scmi/<n>/transport/rx_max_msg
Date:           March 2023
KernelVersion:  6.3
Contact:        cristian.marussi@arm.com
Description:    Max number of concurrently allowed in-flight SCMI messages for
                the currently configured SCMI transport for instance <n> on the
                RX channels.
Users:          Debugging, any userspace test suite
+117
Documentation/ABI/testing/debugfs-scmi-raw
What:           /sys/kernel/debug/scmi/<n>/raw/message
Date:           March 2023
KernelVersion:  6.3
Contact:        cristian.marussi@arm.com
Description:    SCMI Raw synchronous message injection/snooping facility; write
                a complete SCMI synchronous command message (header included)
                in little-endian binary format to have it sent to the configured
                backend SCMI server for instance <n>.
                Any subsequently received response can be read from this same
                entry if it arrived within the configured timeout.
                Each write to the entry causes one command request to be built
                and sent while the replies are read back one message at a time
                (receiving an EOF at each message boundary).
Users:          Debugging, any userspace test suite

What:           /sys/kernel/debug/scmi/<n>/raw/message_async
Date:           March 2023
KernelVersion:  6.3
Contact:        cristian.marussi@arm.com
Description:    SCMI Raw asynchronous message injection/snooping facility; write
                a complete SCMI asynchronous command message (header included)
                in little-endian binary format to have it sent to the configured
                backend SCMI server for instance <n>.
                Any subsequently received response can be read from this same
                entry if it arrived within the configured timeout.
                Any additional delayed response received afterwards can be read
                from this same entry too if it arrived within the configured
                timeout.
                Each write to the entry causes one command request to be built
                and sent while the replies are read back one message at a time
                (receiving an EOF at each message boundary).
Users:          Debugging, any userspace test suite

What:           /sys/kernel/debug/scmi/<n>/raw/errors
Date:           March 2023
KernelVersion:  6.3
Contact:        cristian.marussi@arm.com
Description:    SCMI Raw message errors facility; any kind of timed-out or
                generally unexpectedly received SCMI message, for instance <n>,
                can be read from this entry.
                Each read gives back one message at a time (receiving an EOF at
                each message boundary).
Users:          Debugging, any userspace test suite

What:           /sys/kernel/debug/scmi/<n>/raw/notification
Date:           March 2023
KernelVersion:  6.3
Contact:        cristian.marussi@arm.com
Description:    SCMI Raw notification snooping facility; any notification
                emitted by the backend SCMI server, for instance <n>, can be
                read from this entry.
                Each read gives back one message at a time (receiving an EOF at
                each message boundary).
Users:          Debugging, any userspace test suite

What:           /sys/kernel/debug/scmi/<n>/raw/reset
Date:           March 2023
KernelVersion:  6.3
Contact:        cristian.marussi@arm.com
Description:    SCMI Raw stack reset facility; writing a value to this entry
                causes the internal queues of any kind of received message,
                still pending to be read out for instance <n>, to be
                immediately flushed.
                Can be used to reset and clean the SCMI Raw stack between two
                different test-runs.
Users:          Debugging, any userspace test suite

What:           /sys/kernel/debug/scmi/<n>/raw/channels/<m>/message
Date:           March 2023
KernelVersion:  6.3
Contact:        cristian.marussi@arm.com
Description:    SCMI Raw synchronous message injection/snooping facility; write
                a complete SCMI synchronous command message (header included)
                in little-endian binary format to have it sent to the configured
                backend SCMI server for instance <n> through the <m> transport
                channel.
                Any subsequently received response can be read from this same
                entry if it arrived on channel <m> within the configured
                timeout.
                Each write to the entry causes one command request to be built
                and sent while the replies are read back one message at a time
                (receiving an EOF at each message boundary).
                Channel identifier <m> matches the SCMI protocol number which
                has been associated with this transport channel in the DT
                description, with base protocol number 0x10 being the default
                channel for this instance.
                Note that these per-channel entries rooted at <..>/channels
                exist only if the transport is configured to have more than
                one default channel.
Users:          Debugging, any userspace test suite

What:           /sys/kernel/debug/scmi/<n>/raw/channels/<m>/message_async
Date:           March 2023
KernelVersion:  6.3
Contact:        cristian.marussi@arm.com
Description:    SCMI Raw asynchronous message injection/snooping facility; write
                a complete SCMI asynchronous command message (header included)
                in little-endian binary format to have it sent to the configured
                backend SCMI server for instance <n> through the <m> transport
                channel.
                Any subsequently received response can be read from this same
                entry if it arrived on channel <m> within the configured
                timeout.
                Any additional delayed response received afterwards can be read
                from this same entry too if it arrived within the configured
                timeout.
                Each write to the entry causes one command request to be built
                and sent while the replies are read back one message at a time
                (receiving an EOF at each message boundary).
                Channel identifier <m> matches the SCMI protocol number which
                has been associated with this transport channel in the DT
                description, with base protocol number 0x10 being the default
                channel for this instance.
                Note that these per-channel entries rooted at <..>/channels
                exist only if the transport is configured to have more than
                one default channel.
Users:          Debugging, any userspace test suite
+32
drivers/firmware/arm_scmi/Kconfig
···
 if ARM_SCMI_PROTOCOL
 
+config ARM_SCMI_NEED_DEBUGFS
+	bool
+	help
+	  This declares whether at least one SCMI facility is configured
+	  which needs debugfs support. When selected, it causes the creation
+	  of a common SCMI debugfs root directory.
+
+config ARM_SCMI_RAW_MODE_SUPPORT
+	bool "Enable support for SCMI Raw transmission mode"
+	depends on DEBUG_FS
+	select ARM_SCMI_NEED_DEBUGFS
+	help
+	  Enable support for SCMI Raw transmission mode.
+
+	  If enabled, this allows the direct injection and snooping of SCMI
+	  bare messages through a dedicated debugfs interface.
+	  It is meant to be used by SCMI compliance/testing suites.
+
+	  When enabled, regular SCMI driver interactions are inhibited in
+	  order to avoid unexpected interactions with the SCMI Raw message
+	  flow. If unsure, say N.
+
+config ARM_SCMI_RAW_MODE_SUPPORT_COEX
+	bool "Allow SCMI Raw mode coexistence with normal SCMI stack"
+	depends on ARM_SCMI_RAW_MODE_SUPPORT
+	help
+	  Allow SCMI Raw transmission mode to coexist with the normal SCMI
+	  stack.
+
+	  This will allow regular SCMI drivers to register with the core and
+	  operate normally, which could make an SCMI test suite using the
+	  SCMI Raw mode support unreliable. If unsure, say N.
+
 config ARM_SCMI_HAVE_TRANSPORT
 	bool
 	help
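For reference, a kernel configuration fragment enabling the raw mode support
(without the coexistence option) would look like this; the option names come
from the Kconfig hunk above, and CONFIG_DEBUG_FS is shown because the raw mode
option depends on it:

CONFIG_DEBUG_FS=y
CONFIG_ARM_SCMI_PROTOCOL=y
CONFIG_ARM_SCMI_RAW_MODE_SUPPORT=y
# CONFIG_ARM_SCMI_RAW_MODE_SUPPORT_COEX is not set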
+7 -2
drivers/firmware/arm_scmi/Makefile
···
 # SPDX-License-Identifier: GPL-2.0-only
 scmi-bus-y = bus.o
+scmi-core-objs := $(scmi-bus-y)
+
 scmi-driver-y = driver.o notify.o
+scmi-driver-$(CONFIG_ARM_SCMI_RAW_MODE_SUPPORT) += raw_mode.o
 scmi-transport-$(CONFIG_ARM_SCMI_HAVE_SHMEM) = shmem.o
 scmi-transport-$(CONFIG_ARM_SCMI_TRANSPORT_MAILBOX) += mailbox.o
 scmi-transport-$(CONFIG_ARM_SCMI_TRANSPORT_SMC) += smc.o
···
 scmi-transport-$(CONFIG_ARM_SCMI_TRANSPORT_VIRTIO) += virtio.o
 scmi-transport-$(CONFIG_ARM_SCMI_TRANSPORT_OPTEE) += optee.o
 scmi-protocols-y = base.o clock.o perf.o power.o reset.o sensors.o system.o voltage.o powercap.o
-scmi-module-objs := $(scmi-bus-y) $(scmi-driver-y) $(scmi-protocols-y) \
-		    $(scmi-transport-y)
+scmi-module-objs := $(scmi-driver-y) $(scmi-protocols-y) $(scmi-transport-y)
+
+obj-$(CONFIG_ARM_SCMI_PROTOCOL) += scmi-core.o
 obj-$(CONFIG_ARM_SCMI_PROTOCOL) += scmi-module.o
+
 obj-$(CONFIG_ARM_SCMI_POWER_DOMAIN) += scmi_pm_domain.o
 obj-$(CONFIG_ARM_SCMI_POWER_CONTROL) += scmi_power_control.o
+300 -91
drivers/firmware/arm_scmi/bus.c
···
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
+#include <linux/atomic.h>
 #include <linux/types.h>
 #include <linux/module.h>
+#include <linux/of.h>
 #include <linux/kernel.h>
 #include <linux/slab.h>
 #include <linux/device.h>
 
 #include "common.h"
 
+BLOCKING_NOTIFIER_HEAD(scmi_requested_devices_nh);
+EXPORT_SYMBOL_GPL(scmi_requested_devices_nh);
+
 static DEFINE_IDA(scmi_bus_id);
-static DEFINE_IDR(scmi_protocols);
-static DEFINE_SPINLOCK(protocol_lock);
+
+static DEFINE_IDR(scmi_requested_devices);
+/* Protect access to scmi_requested_devices */
+static DEFINE_MUTEX(scmi_requested_devices_mtx);
+
+struct scmi_requested_dev {
+	const struct scmi_device_id *id_table;
+	struct list_head node;
+};
+
+/* Track globally the creation of SCMI SystemPower related devices */
+static atomic_t scmi_syspower_registered = ATOMIC_INIT(0);
+
+/**
+ * scmi_protocol_device_request - Helper to request a device
+ *
+ * @id_table: A protocol/name pair descriptor for the device to be created.
+ *
+ * This helper let an SCMI driver request specific devices identified by the
+ * @id_table to be created for each active SCMI instance.
+ *
+ * The requested device name MUST NOT be already existent for any protocol;
+ * at first the freshly requested @id_table is annotated in the IDR table
+ * @scmi_requested_devices and then the requested device is advertised to any
+ * registered party via the @scmi_requested_devices_nh notification chain.
+ *
+ * Return: 0 on Success
+ */
+static int scmi_protocol_device_request(const struct scmi_device_id *id_table)
+{
+	int ret = 0;
+	unsigned int id = 0;
+	struct list_head *head, *phead = NULL;
+	struct scmi_requested_dev *rdev;
+
+	pr_debug("Requesting SCMI device (%s) for protocol %x\n",
+		 id_table->name, id_table->protocol_id);
+
+	if (IS_ENABLED(CONFIG_ARM_SCMI_RAW_MODE_SUPPORT) &&
+	    !IS_ENABLED(CONFIG_ARM_SCMI_RAW_MODE_SUPPORT_COEX)) {
+		pr_warn("SCMI Raw mode active. Rejecting '%s'/0x%02X\n",
+			id_table->name, id_table->protocol_id);
+		return -EINVAL;
+	}
+
+	/*
+	 * Search for the matching protocol rdev list and then search
+	 * of any existent equally named device...fails if any duplicate found.
+	 */
+	mutex_lock(&scmi_requested_devices_mtx);
+	idr_for_each_entry(&scmi_requested_devices, head, id) {
+		if (!phead) {
+			/* A list found registered in the IDR is never empty */
+			rdev = list_first_entry(head, struct scmi_requested_dev,
+						node);
+			if (rdev->id_table->protocol_id ==
+			    id_table->protocol_id)
+				phead = head;
+		}
+		list_for_each_entry(rdev, head, node) {
+			if (!strcmp(rdev->id_table->name, id_table->name)) {
+				pr_err("Ignoring duplicate request [%d] %s\n",
+				       rdev->id_table->protocol_id,
+				       rdev->id_table->name);
+				ret = -EINVAL;
+				goto out;
+			}
+		}
+	}
+
+	/*
+	 * No duplicate found for requested id_table, so let's create a new
+	 * requested device entry for this new valid request.
+	 */
+	rdev = kzalloc(sizeof(*rdev), GFP_KERNEL);
+	if (!rdev) {
+		ret = -ENOMEM;
+		goto out;
+	}
+	rdev->id_table = id_table;
+
+	/*
+	 * Append the new requested device table descriptor to the head of the
+	 * related protocol list, eventually creating such head if not already
+	 * there.
+	 */
+	if (!phead) {
+		phead = kzalloc(sizeof(*phead), GFP_KERNEL);
+		if (!phead) {
+			kfree(rdev);
+			ret = -ENOMEM;
+			goto out;
+		}
+		INIT_LIST_HEAD(phead);
+
+		ret = idr_alloc(&scmi_requested_devices, (void *)phead,
+				id_table->protocol_id,
+				id_table->protocol_id + 1, GFP_KERNEL);
+		if (ret != id_table->protocol_id) {
+			pr_err("Failed to save SCMI device - ret:%d\n", ret);
+			kfree(rdev);
+			kfree(phead);
+			ret = -EINVAL;
+			goto out;
+		}
+		ret = 0;
+	}
+	list_add(&rdev->node, phead);
+
+out:
+	mutex_unlock(&scmi_requested_devices_mtx);
+
+	if (!ret)
+		blocking_notifier_call_chain(&scmi_requested_devices_nh,
+					     SCMI_BUS_NOTIFY_DEVICE_REQUEST,
+					     (void *)rdev->id_table);
+
+	return ret;
+}
+
+/**
+ * scmi_protocol_device_unrequest - Helper to unrequest a device
+ *
+ * @id_table: A protocol/name pair descriptor for the device to be unrequested.
+ *
+ * The unrequested device, described by the provided id_table, is at first
+ * removed from the IDR @scmi_requested_devices and then the removal is
+ * advertised to any registered party via the @scmi_requested_devices_nh
+ * notification chain.
+ */
+static void scmi_protocol_device_unrequest(const struct scmi_device_id *id_table)
+{
+	struct list_head *phead;
+
+	pr_debug("Unrequesting SCMI device (%s) for protocol %x\n",
+		 id_table->name, id_table->protocol_id);
+
+	mutex_lock(&scmi_requested_devices_mtx);
+	phead = idr_find(&scmi_requested_devices, id_table->protocol_id);
+	if (phead) {
+		struct scmi_requested_dev *victim, *tmp;
+
+		list_for_each_entry_safe(victim, tmp, phead, node) {
+			if (!strcmp(victim->id_table->name, id_table->name)) {
+				list_del(&victim->node);
+
+				mutex_unlock(&scmi_requested_devices_mtx);
+				blocking_notifier_call_chain(&scmi_requested_devices_nh,
+							     SCMI_BUS_NOTIFY_DEVICE_UNREQUEST,
+							     (void *)victim->id_table);
+				kfree(victim);
+				mutex_lock(&scmi_requested_devices_mtx);
+				break;
+			}
+		}
+
+		if (list_empty(phead)) {
+			idr_remove(&scmi_requested_devices,
+				   id_table->protocol_id);
+			kfree(phead);
+		}
+	}
+	mutex_unlock(&scmi_requested_devices_mtx);
+}
 
 static const struct scmi_device_id *
 scmi_dev_match_id(struct scmi_device *scmi_dev, struct scmi_driver *scmi_drv)
···
 	struct scmi_device_id *id_table = data;
 
 	return sdev->protocol_id == id_table->protocol_id &&
-	       !strcmp(sdev->name, id_table->name);
+	       (id_table->name && !strcmp(sdev->name, id_table->name));
 }
 
-struct scmi_device *scmi_child_dev_find(struct device *parent,
-					int prot_id, const char *name)
+static struct scmi_device *scmi_child_dev_find(struct device *parent,
+					       int prot_id, const char *name)
 {
 	struct scmi_device_id id_table;
 	struct device *dev;
···
 		return NULL;
 
 	return to_scmi_dev(dev);
-}
-
-const struct scmi_protocol *scmi_protocol_get(int protocol_id)
-{
-	const struct scmi_protocol *proto;
-
-	proto = idr_find(&scmi_protocols, protocol_id);
-	if (!proto || !try_module_get(proto->owner)) {
-		pr_warn("SCMI Protocol 0x%x not found!\n", protocol_id);
-		return NULL;
-	}
-
-	pr_debug("Found SCMI Protocol 0x%x\n", protocol_id);
-
-	return proto;
-}
-
-void scmi_protocol_put(int protocol_id)
-{
-	const struct scmi_protocol *proto;
-
-	proto = idr_find(&scmi_protocols, protocol_id);
-	if (proto)
-		module_put(proto->owner);
 }
 
 static int scmi_dev_probe(struct device *dev)
···
 	scmi_drv->remove(scmi_dev);
 }
 
-static struct bus_type scmi_bus_type = {
+struct bus_type scmi_bus_type = {
 	.name =	"scmi_protocol",
 	.match = scmi_dev_match,
 	.probe = scmi_dev_probe,
 	.remove = scmi_dev_remove,
 };
+EXPORT_SYMBOL_GPL(scmi_bus_type);
 
 int scmi_driver_register(struct scmi_driver *driver, struct module *owner,
 			 const char *mod_name)
···
 
 	retval = driver_register(&driver->driver);
 	if (!retval)
-		pr_debug("registered new scmi driver %s\n", driver->name);
+		pr_debug("Registered new scmi driver %s\n", driver->name);
 
 	return retval;
 }
···
 	kfree(to_scmi_dev(dev));
 }
 
-struct scmi_device *
-scmi_device_create(struct device_node *np, struct device *parent, int protocol,
-		   const char *name)
+static void __scmi_device_destroy(struct scmi_device *scmi_dev)
+{
+	pr_debug("(%s) Destroying SCMI device '%s' for protocol 0x%x (%s)\n",
+		 of_node_full_name(scmi_dev->dev.parent->of_node),
+		 dev_name(&scmi_dev->dev), scmi_dev->protocol_id,
+		 scmi_dev->name);
+
+	if (scmi_dev->protocol_id == SCMI_PROTOCOL_SYSTEM)
+		atomic_set(&scmi_syspower_registered, 0);
+
+	kfree_const(scmi_dev->name);
+	ida_free(&scmi_bus_id, scmi_dev->id);
+	device_unregister(&scmi_dev->dev);
+}
+
+static struct scmi_device *
+__scmi_device_create(struct device_node *np, struct device *parent,
+		     int protocol, const char *name)
 {
 	int id, retval;
 	struct scmi_device *scmi_dev;
+
+	/*
+	 * If the same protocol/name device already exist under the same parent
+	 * (i.e. SCMI instance) just return the existent device.
+	 * This avoids any race between the SCMI driver, creating devices for
+	 * each DT defined protocol at probe time, and the concurrent
+	 * registration of SCMI drivers.
+	 */
+	scmi_dev = scmi_child_dev_find(parent, protocol, name);
+	if (scmi_dev)
+		return scmi_dev;
+
+	/*
+	 * Ignore any possible subsequent failures while creating the device
+	 * since we are doomed anyway at that point; not using a mutex which
+	 * spans across this whole function to keep things simple and to avoid
+	 * to serialize all the __scmi_device_create calls across possibly
+	 * different SCMI server instances (parent)
+	 */
+	if (protocol == SCMI_PROTOCOL_SYSTEM &&
+	    atomic_cmpxchg(&scmi_syspower_registered, 0, 1)) {
+		dev_warn(parent,
+			 "SCMI SystemPower protocol device must be unique !\n");
+		return NULL;
+	}
 
 	scmi_dev = kzalloc(sizeof(*scmi_dev), GFP_KERNEL);
 	if (!scmi_dev)
···
 	if (retval)
 		goto put_dev;
 
+	pr_debug("(%s) Created SCMI device '%s' for protocol 0x%x (%s)\n",
+		 of_node_full_name(parent->of_node),
+		 dev_name(&scmi_dev->dev), protocol, name);
+
 	return scmi_dev;
 put_dev:
 	kfree_const(scmi_dev->name);
···
 	return NULL;
 }
 
-void scmi_device_destroy(struct scmi_device *scmi_dev)
+/**
+ * scmi_device_create - A method to create one or more SCMI devices
+ *
+ * @np: A reference to the device node to use for the new device(s)
+ * @parent: The parent device to use identifying a specific SCMI instance
+ * @protocol: The SCMI protocol to be associated with this device
+ * @name: The requested-name of the device to be created; this is optional
+ *	  and if no @name is provided, all the devices currently known to
+ *	  be requested on the SCMI bus for @protocol will be created.
+ *
+ * This method can be invoked to create a single well-defined device (like
+ * a transport device or a device requested by an SCMI driver loaded after
+ * the core SCMI stack has been probed), or to create all the devices currently
+ * known to have been requested by the loaded SCMI drivers for a specific
+ * protocol (typically during SCMI core protocol enumeration at probe time).
+ *
+ * Return: The created device (or one of them if @name was NOT provided and
+ *	   multiple devices were created) or NULL if no device was created;
+ *	   note that NULL indicates an error ONLY in case a specific @name
+ *	   was provided: when @name param was not provided, a number of devices
+ *	   could have been potentially created for a whole protocol, unless no
+ *	   device was found to have been requested for that specific protocol.
+ */
+struct scmi_device *scmi_device_create(struct device_node *np,
+				       struct device *parent, int protocol,
+				       const char *name)
 {
-	kfree_const(scmi_dev->name);
-	scmi_handle_put(scmi_dev->handle);
-	ida_free(&scmi_bus_id, scmi_dev->id);
-	device_unregister(&scmi_dev->dev);
-}
+	struct list_head *phead;
+	struct scmi_requested_dev *rdev;
+	struct scmi_device *scmi_dev = NULL;
 
-void scmi_device_link_add(struct device *consumer, struct device *supplier)
-{
-	struct device_link *link;
+	if (name)
+		return __scmi_device_create(np, parent, protocol, name);
 
-	link = device_link_add(consumer, supplier, DL_FLAG_AUTOREMOVE_CONSUMER);
-
-	WARN_ON(!link);
-}
-
-void scmi_set_handle(struct scmi_device *scmi_dev)
-{
-	scmi_dev->handle = scmi_handle_get(&scmi_dev->dev);
-	if (scmi_dev->handle)
-		scmi_device_link_add(&scmi_dev->dev, scmi_dev->handle->dev);
-}
-
-int scmi_protocol_register(const struct scmi_protocol *proto)
-{
-	int ret;
-
-	if (!proto) {
-		pr_err("invalid protocol\n");
-		return -EINVAL;
+	mutex_lock(&scmi_requested_devices_mtx);
+	phead = idr_find(&scmi_requested_devices, protocol);
+	/* Nothing to do. */
+	if (!phead) {
+		mutex_unlock(&scmi_requested_devices_mtx);
+		return scmi_dev;
 	}
 
-	if (!proto->instance_init) {
-		pr_err("missing init for protocol 0x%x\n", proto->id);
-		return -EINVAL;
+	/* Walk the list of requested devices for protocol and create them */
+	list_for_each_entry(rdev, phead, node) {
+		struct scmi_device *sdev;
+
+		sdev = __scmi_device_create(np, parent,
+					    rdev->id_table->protocol_id,
+					    rdev->id_table->name);
+		/* Report errors and carry on... */
+		if (sdev)
+			scmi_dev = sdev;
+		else
+			pr_err("(%s) Failed to create device for protocol 0x%x (%s)\n",
+			       of_node_full_name(parent->of_node),
+			       rdev->id_table->protocol_id,
+			       rdev->id_table->name);
 	}
+	mutex_unlock(&scmi_requested_devices_mtx);
 
-	spin_lock(&protocol_lock);
-	ret = idr_alloc(&scmi_protocols, (void *)proto,
-			proto->id, proto->id + 1, GFP_ATOMIC);
-	spin_unlock(&protocol_lock);
-	if (ret != proto->id) {
-		pr_err("unable to allocate SCMI idr slot for 0x%x - err %d\n",
-		       proto->id, ret);
-		return ret;
-	}
-
-	pr_debug("Registered SCMI Protocol 0x%x\n", proto->id);
-
-	return 0;
+	return scmi_dev;
 }
-EXPORT_SYMBOL_GPL(scmi_protocol_register);
+EXPORT_SYMBOL_GPL(scmi_device_create);
 
-void scmi_protocol_unregister(const struct scmi_protocol *proto)
+void scmi_device_destroy(struct device *parent, int protocol, const char *name)
 {
-	spin_lock(&protocol_lock);
-	idr_remove(&scmi_protocols, proto->id);
-	spin_unlock(&protocol_lock);
+	struct scmi_device *scmi_dev;
 
-	pr_debug("Unregistered SCMI Protocol 0x%x\n", proto->id);
-
-	return;
+	scmi_dev = scmi_child_dev_find(parent, protocol, name);
+	if (scmi_dev)
+		__scmi_device_destroy(scmi_dev);
 }
-EXPORT_SYMBOL_GPL(scmi_protocol_unregister);
+EXPORT_SYMBOL_GPL(scmi_device_destroy);
 
 static int __scmi_devices_unregister(struct device *dev, void *data)
 {
 	struct scmi_device *scmi_dev = to_scmi_dev(dev);
 
-	scmi_device_destroy(scmi_dev);
+	__scmi_device_destroy(scmi_dev);
 	return 0;
 }
···
 	bus_for_each_dev(&scmi_bus_type, NULL, NULL, __scmi_devices_unregister);
 }
 
-int __init scmi_bus_init(void)
+static int __init scmi_bus_init(void)
 {
 	int retval;
 
 	retval = bus_register(&scmi_bus_type);
 	if (retval)
-		pr_err("scmi protocol bus register failed (%d)\n", retval);
+		pr_err("SCMI protocol bus register failed (%d)\n", retval);
+
+	pr_info("SCMI protocol bus registered\n");
 
 	return retval;
 }
+subsys_initcall(scmi_bus_init);
 
-void __exit scmi_bus_exit(void)
+static void __exit scmi_bus_exit(void)
 {
+	/*
+	 * Destroy all remaining devices: just in case the drivers were
+	 * manually unbound and at first and then the modules unloaded.
+	 */
 	scmi_devices_unregister();
 	bus_unregister(&scmi_bus_type);
 	ida_destroy(&scmi_bus_id);
 }
+module_exit(scmi_bus_exit);
+
+MODULE_ALIAS("scmi-core");
+MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>");
+MODULE_DESCRIPTION("ARM SCMI protocol bus");
+MODULE_LICENSE("GPL");
+85 -15
drivers/firmware/arm_scmi/common.h
···
 #include "protocols.h"
 #include "notify.h"

+#define SCMI_MAX_CHANNELS	256
+
+#define SCMI_MAX_RESPONSE_TIMEOUT	(2 * MSEC_PER_SEC)
+
+enum scmi_error_codes {
+	SCMI_SUCCESS = 0,	/* Success */
+	SCMI_ERR_SUPPORT = -1,	/* Not supported */
+	SCMI_ERR_PARAMS = -2,	/* Invalid Parameters */
+	SCMI_ERR_ACCESS = -3,	/* Invalid access/permission denied */
+	SCMI_ERR_ENTRY = -4,	/* Not found */
+	SCMI_ERR_RANGE = -5,	/* Value out of range */
+	SCMI_ERR_BUSY = -6,	/* Device busy */
+	SCMI_ERR_COMMS = -7,	/* Communication Error */
+	SCMI_ERR_GENERIC = -8,	/* Generic Error */
+	SCMI_ERR_HARDWARE = -9,	/* Hardware Error */
+	SCMI_ERR_PROTOCOL = -10,/* Protocol Error */
+};
+
+static const int scmi_linux_errmap[] = {
+	/* better than switch case as long as return value is continuous */
+	0,			/* SCMI_SUCCESS */
+	-EOPNOTSUPP,		/* SCMI_ERR_SUPPORT */
+	-EINVAL,		/* SCMI_ERR_PARAM */
+	-EACCES,		/* SCMI_ERR_ACCESS */
+	-ENOENT,		/* SCMI_ERR_ENTRY */
+	-ERANGE,		/* SCMI_ERR_RANGE */
+	-EBUSY,			/* SCMI_ERR_BUSY */
+	-ECOMM,			/* SCMI_ERR_COMMS */
+	-EIO,			/* SCMI_ERR_GENERIC */
+	-EREMOTEIO,		/* SCMI_ERR_HARDWARE */
+	-EPROTO,		/* SCMI_ERR_PROTOCOL */
+};
+
+static inline int scmi_to_linux_errno(int errno)
+{
+	int err_idx = -errno;
+
+	if (err_idx >= SCMI_SUCCESS && err_idx < ARRAY_SIZE(scmi_linux_errmap))
+		return scmi_linux_errmap[err_idx];
+	return -EIO;
+}
+
 #define MSG_ID_MASK		GENMASK(7, 0)
 #define MSG_XTRACT_ID(hdr)	FIELD_GET(MSG_ID_MASK, (hdr))
 #define MSG_TYPE_MASK		GENMASK(9, 8)
···
 struct scmi_revision_info *
 scmi_revision_area_get(const struct scmi_protocol_handle *ph);
-int scmi_handle_put(const struct scmi_handle *handle);
-void scmi_device_link_add(struct device *consumer, struct device *supplier);
-struct scmi_handle *scmi_handle_get(struct device *dev);
-void scmi_set_handle(struct scmi_device *scmi_dev);
 void scmi_setup_protocol_implemented(const struct scmi_protocol_handle *ph,
				      u8 *prot_imp);

-int __init scmi_bus_init(void);
-void __exit scmi_bus_exit(void);
+extern struct bus_type scmi_bus_type;

-const struct scmi_protocol *scmi_protocol_get(int protocol_id);
-void scmi_protocol_put(int protocol_id);
+#define SCMI_BUS_NOTIFY_DEVICE_REQUEST		0
+#define SCMI_BUS_NOTIFY_DEVICE_UNREQUEST	1
+extern struct blocking_notifier_head scmi_requested_devices_nh;
+
+struct scmi_device *scmi_device_create(struct device_node *np,
+				       struct device *parent, int protocol,
+				       const char *name);
+void scmi_device_destroy(struct device *parent, int protocol,
+			 const char *name);

 int scmi_protocol_acquire(const struct scmi_handle *handle, u8 protocol_id);
 void scmi_protocol_release(const struct scmi_handle *handle, u8 protocol_id);
···
 /**
  * struct scmi_chan_info - Structure representing a SCMI channel information
  *
+ * @id: An identifier for this channel: this matches the protocol number
+ *      used to initialize this channel
  * @dev: Reference to device in the SCMI hierarchy corresponding to this
  *	 channel
  * @rx_timeout_ms: The configured RX timeout in milliseconds.
···
  * @transport_info: Transport layer related information
  */
 struct scmi_chan_info {
+	int id;
	struct device *dev;
	unsigned int rx_timeout_ms;
	struct scmi_handle *handle;
···
  */
 struct scmi_transport_ops {
	int (*link_supplier)(struct device *dev);
-	bool (*chan_available)(struct device *dev, int idx);
+	bool (*chan_available)(struct device_node *of_node, int idx);
	int (*chan_setup)(struct scmi_chan_info *cinfo, struct device *dev,
			  bool tx);
	int (*chan_free)(int id, void *p, void *data);
···
	void (*clear_channel)(struct scmi_chan_info *cinfo);
	bool (*poll_done)(struct scmi_chan_info *cinfo, struct scmi_xfer *xfer);
 };
-
-int scmi_protocol_device_request(const struct scmi_device_id *id_table);
-void scmi_protocol_device_unrequest(const struct scmi_device_id *id_table);
-struct scmi_device *scmi_child_dev_find(struct device *parent,
-					int prot_id, const char *name);

 /**
  * struct scmi_desc - Description of SoC integration
···
	const bool atomic_enabled;
 };

+static inline bool is_polling_required(struct scmi_chan_info *cinfo,
+				       const struct scmi_desc *desc)
+{
+	return cinfo->no_completion_irq || desc->force_polling;
+}
+
+static inline bool is_transport_polling_capable(const struct scmi_desc *desc)
+{
+	return desc->ops->poll_done || desc->sync_cmds_completed_on_ret;
+}
+
+static inline bool is_polling_enabled(struct scmi_chan_info *cinfo,
+				      const struct scmi_desc *desc)
+{
+	return is_polling_required(cinfo, desc) &&
+	       is_transport_polling_capable(desc);
+}
+
+void scmi_xfer_raw_put(const struct scmi_handle *handle,
+		       struct scmi_xfer *xfer);
+struct scmi_xfer *scmi_xfer_raw_get(const struct scmi_handle *handle);
+struct scmi_chan_info *
+scmi_xfer_raw_channel_get(const struct scmi_handle *handle, u8 protocol_id);
+
+int scmi_xfer_raw_inflight_register(const struct scmi_handle *handle,
+				    struct scmi_xfer *xfer);
+
+int scmi_xfer_raw_wait_for_message_response(struct scmi_chan_info *cinfo,
+					    struct scmi_xfer *xfer,
+					    unsigned int timeout_ms);
 #ifdef CONFIG_ARM_SCMI_TRANSPORT_MAILBOX
 extern const struct scmi_desc scmi_mailbox_desc;
 #endif
···
 #endif

 void scmi_rx_callback(struct scmi_chan_info *cinfo, u32 msg_hdr, void *priv);
-void scmi_free_channel(struct scmi_chan_info *cinfo, struct idr *idr, int id);

 /* shmem related declarations */
 struct scmi_shared_mem;
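The `scmi_to_linux_errno()` helper moved into common.h above simply negates the SCMI status code and uses it to index `scmi_linux_errmap`, falling back to `-EIO` for anything out of range. A minimal userspace sketch of the same lookup (with `ARRAY_SIZE` stubbed locally, since this runs outside the kernel):

```c
#include <assert.h>
#include <errno.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

/* Same table as the kernel's scmi_linux_errmap */
static const int scmi_linux_errmap[] = {
	0,		/* SCMI_SUCCESS */
	-EOPNOTSUPP,	/* SCMI_ERR_SUPPORT */
	-EINVAL,	/* SCMI_ERR_PARAMS */
	-EACCES,	/* SCMI_ERR_ACCESS */
	-ENOENT,	/* SCMI_ERR_ENTRY */
	-ERANGE,	/* SCMI_ERR_RANGE */
	-EBUSY,		/* SCMI_ERR_BUSY */
	-ECOMM,		/* SCMI_ERR_COMMS */
	-EIO,		/* SCMI_ERR_GENERIC */
	-EREMOTEIO,	/* SCMI_ERR_HARDWARE */
	-EPROTO,	/* SCMI_ERR_PROTOCOL */
};

/* Negated status indexes the table; anything out of range becomes -EIO */
static inline int scmi_to_linux_errno(int status)
{
	int err_idx = -status;

	if (err_idx >= 0 && err_idx < (int)ARRAY_SIZE(scmi_linux_errmap))
		return scmi_linux_errmap[err_idx];
	return -EIO;
}
```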
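The three polling helpers now exposed from common.h encode a simple rule: polling is *required* when a channel has no completion IRQ or the descriptor forces it, but it is only *enabled* when the transport is actually capable of polling. A standalone sketch with hypothetical stand-in structs (the `desc`/`chan` fields are simplified from the kernel originals):

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-ins for struct scmi_desc / struct scmi_chan_info */
struct desc {
	bool poll_done;			/* transport provides .poll_done() */
	bool sync_cmds_completed_on_ret;
	bool force_polling;
};

struct chan {
	bool no_completion_irq;
};

/* Polling is required when no completion IRQ exists or it is forced */
static bool is_polling_required(const struct chan *c, const struct desc *d)
{
	return c->no_completion_irq || d->force_polling;
}

/* ...capable when the transport can poll or completes commands on return */
static bool is_transport_polling_capable(const struct desc *d)
{
	return d->poll_done || d->sync_cmds_completed_on_ret;
}

/* ...and only enabled when both conditions hold */
static bool is_polling_enabled(const struct chan *c, const struct desc *d)
{
	return is_polling_required(c, d) && is_transport_polling_capable(d);
}
```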
+833 -476
drivers/firmware/arm_scmi/driver.c
···
  * Copyright (C) 2018-2021 ARM Ltd.
  */

+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
 #include <linux/bitmap.h>
+#include <linux/debugfs.h>
 #include <linux/device.h>
 #include <linux/export.h>
 #include <linux/idr.h>
···
 #include "common.h"
 #include "notify.h"

+#include "raw_mode.h"
+
 #define CREATE_TRACE_POINTS
 #include <trace/events/scmi.h>

-enum scmi_error_codes {
-	SCMI_SUCCESS = 0,	/* Success */
-	SCMI_ERR_SUPPORT = -1,	/* Not supported */
-	SCMI_ERR_PARAMS = -2,	/* Invalid Parameters */
-	SCMI_ERR_ACCESS = -3,	/* Invalid access/permission denied */
-	SCMI_ERR_ENTRY = -4,	/* Not found */
-	SCMI_ERR_RANGE = -5,	/* Value out of range */
-	SCMI_ERR_BUSY = -6,	/* Device busy */
-	SCMI_ERR_COMMS = -7,	/* Communication Error */
-	SCMI_ERR_GENERIC = -8,	/* Generic Error */
-	SCMI_ERR_HARDWARE = -9,	/* Hardware Error */
-	SCMI_ERR_PROTOCOL = -10,/* Protocol Error */
-};
+static DEFINE_IDA(scmi_id);
+
+static DEFINE_IDR(scmi_protocols);
+static DEFINE_SPINLOCK(protocol_lock);

 /* List of all SCMI devices active in system */
 static LIST_HEAD(scmi_list);
···
 /* Track the unique id for the transfers for debug & profiling purpose */
 static atomic_t transfer_last_id;

-static DEFINE_IDR(scmi_requested_devices);
-static DEFINE_MUTEX(scmi_requested_devices_mtx);
-
-/* Track globally the creation of SCMI SystemPower related devices */
-static bool scmi_syspower_registered;
-/* Protect access to scmi_syspower_registered */
-static DEFINE_MUTEX(scmi_syspower_mtx);
-
-struct scmi_requested_dev {
-	const struct scmi_device_id *id_table;
-	struct list_head node;
-};
+static struct dentry *scmi_top_dentry;

 /**
  * struct scmi_xfers_info - Structure to manage transfer information
···
 #define ph_to_pi(h)	container_of(h, struct scmi_protocol_instance, ph)

 /**
+ * struct scmi_debug_info - Debug common info
+ * @top_dentry: A reference to the top debugfs dentry
+ * @name: Name of this SCMI instance
+ * @type: Type of this SCMI instance
+ * @is_atomic: Flag to state if the transport of this instance is atomic
+ */
+struct scmi_debug_info {
+	struct dentry *top_dentry;
+	const char *name;
+	const char *type;
+	bool is_atomic;
+};
+
+/**
  * struct scmi_info - Structure representing a SCMI instance
  *
+ * @id: A sequence number starting from zero identifying this instance
  * @dev: Device pointer
  * @desc: SoC description for this instance
  * @version: SCMI revision information containing protocol version,
···
  * @notify_priv: Pointer to private data structure specific to notifications.
  * @node: List head
  * @users: Number of users of this instance
+ * @bus_nb: A notifier to listen for device bind/unbind on the scmi bus
+ * @dev_req_nb: A notifier to listen for device request/unrequest on the scmi
+ *		bus
+ * @devreq_mtx: A mutex to serialize device creation for this SCMI instance
+ * @dbg: A pointer to debugfs related data (if any)
+ * @raw: An opaque reference handle used by SCMI Raw mode.
  */
 struct scmi_info {
+	int id;
	struct device *dev;
	const struct scmi_desc *desc;
	struct scmi_revision_info version;
···
	void *notify_priv;
	struct list_head node;
	int users;
+	struct notifier_block bus_nb;
+	struct notifier_block dev_req_nb;
+	/* Serialize device creation process for this instance */
+	struct mutex devreq_mtx;
+	struct scmi_debug_info *dbg;
+	void *raw;
 };

 #define handle_to_scmi_info(h)	container_of(h, struct scmi_info, handle)
+#define bus_nb_to_scmi_info(nb)	container_of(nb, struct scmi_info, bus_nb)
+#define req_nb_to_scmi_info(nb)	container_of(nb, struct scmi_info, dev_req_nb)

-static const int scmi_linux_errmap[] = {
-	/* better than switch case as long as return value is continuous */
-	0,			/* SCMI_SUCCESS */
-	-EOPNOTSUPP,		/* SCMI_ERR_SUPPORT */
-	-EINVAL,		/* SCMI_ERR_PARAM */
-	-EACCES,		/* SCMI_ERR_ACCESS */
-	-ENOENT,		/* SCMI_ERR_ENTRY */
-	-ERANGE,		/* SCMI_ERR_RANGE */
-	-EBUSY,			/* SCMI_ERR_BUSY */
-	-ECOMM,			/* SCMI_ERR_COMMS */
-	-EIO,			/* SCMI_ERR_GENERIC */
-	-EREMOTEIO,		/* SCMI_ERR_HARDWARE */
-	-EPROTO,		/* SCMI_ERR_PROTOCOL */
-};
-
-static inline int scmi_to_linux_errno(int errno)
+static const struct scmi_protocol *scmi_protocol_get(int protocol_id)
 {
-	int err_idx = -errno;
+	const struct scmi_protocol *proto;

-	if (err_idx >= SCMI_SUCCESS && err_idx < ARRAY_SIZE(scmi_linux_errmap))
-		return scmi_linux_errmap[err_idx];
-	return -EIO;
+	proto = idr_find(&scmi_protocols, protocol_id);
+	if (!proto || !try_module_get(proto->owner)) {
+		pr_warn("SCMI Protocol 0x%x not found!\n", protocol_id);
+		return NULL;
+	}
+
+	pr_debug("Found SCMI Protocol 0x%x\n", protocol_id);
+
+	return proto;
+}
+
+static void scmi_protocol_put(int protocol_id)
+{
+	const struct scmi_protocol *proto;
+
+	proto = idr_find(&scmi_protocols, protocol_id);
+	if (proto)
+		module_put(proto->owner);
+}
+
+int scmi_protocol_register(const struct scmi_protocol *proto)
+{
+	int ret;
+
+	if (!proto) {
+		pr_err("invalid protocol\n");
+		return -EINVAL;
+	}
+
+	if (!proto->instance_init) {
+		pr_err("missing init for protocol 0x%x\n", proto->id);
+		return -EINVAL;
+	}
+
+	spin_lock(&protocol_lock);
+	ret = idr_alloc(&scmi_protocols, (void *)proto,
+			proto->id, proto->id + 1, GFP_ATOMIC);
+	spin_unlock(&protocol_lock);
+	if (ret != proto->id) {
+		pr_err("unable to allocate SCMI idr slot for 0x%x - err %d\n",
+		       proto->id, ret);
+		return ret;
+	}
+
+	pr_debug("Registered SCMI Protocol 0x%x\n", proto->id);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(scmi_protocol_register);
+
+void scmi_protocol_unregister(const struct scmi_protocol *proto)
+{
+	spin_lock(&protocol_lock);
+	idr_remove(&scmi_protocols, proto->id);
+	spin_unlock(&protocol_lock);
+
+	pr_debug("Unregistered SCMI Protocol 0x%x\n", proto->id);
+}
+EXPORT_SYMBOL_GPL(scmi_protocol_unregister);
+
+/**
+ * scmi_create_protocol_devices - Create devices for all pending requests for
+ * this SCMI instance.
+ *
+ * @np: The device node describing the protocol
+ * @info: The SCMI instance descriptor
+ * @prot_id: The protocol ID
+ * @name: The optional name of the device to be created: if not provided this
+ *	  call will lead to the creation of all the devices currently requested
+ *	  for the specified protocol.
+ */
+static void scmi_create_protocol_devices(struct device_node *np,
+					 struct scmi_info *info,
+					 int prot_id, const char *name)
+{
+	struct scmi_device *sdev;
+
+	mutex_lock(&info->devreq_mtx);
+	sdev = scmi_device_create(np, info->dev, prot_id, name);
+	if (name && !sdev)
+		dev_err(info->dev,
+			"failed to create device for protocol 0x%X (%s)\n",
+			prot_id, name);
+	mutex_unlock(&info->devreq_mtx);
+}
+
+static void scmi_destroy_protocol_devices(struct scmi_info *info,
+					  int prot_id, const char *name)
+{
+	mutex_lock(&info->devreq_mtx);
+	scmi_device_destroy(info->dev, prot_id, name);
+	mutex_unlock(&info->devreq_mtx);
 }

 void scmi_notification_instance_data_set(const struct scmi_handle *handle,
···
	if (xfer_id != next_token)
		atomic_add((int)(xfer_id - next_token), &transfer_last_id);

-	/* Set in-flight */
-	set_bit(xfer_id, minfo->xfer_alloc_table);
	xfer->hdr.seq = (u16)xfer_id;

	return 0;
···
 }

 /**
+ * scmi_xfer_inflight_register_unlocked - Register the xfer as in-flight
+ *
+ * @xfer: The xfer to register
+ * @minfo: Pointer to Tx/Rx Message management info based on channel type
+ *
+ * Note that this helper assumes that the xfer to be registered as in-flight
+ * had been built using an xfer sequence number which still corresponds to a
+ * free slot in the xfer_alloc_table.
+ *
+ * Context: Assumes to be called with @xfer_lock already acquired.
+ */
+static inline void
+scmi_xfer_inflight_register_unlocked(struct scmi_xfer *xfer,
+				     struct scmi_xfers_info *minfo)
+{
+	/* Set in-flight */
+	set_bit(xfer->hdr.seq, minfo->xfer_alloc_table);
+	hash_add(minfo->pending_xfers, &xfer->node, xfer->hdr.seq);
+	xfer->pending = true;
+}
+
+/**
+ * scmi_xfer_inflight_register - Try to register an xfer as in-flight
+ *
+ * @xfer: The xfer to register
+ * @minfo: Pointer to Tx/Rx Message management info based on channel type
+ *
+ * Note that this helper does NOT assume anything about the sequence number
+ * that was baked into the provided xfer, so it checks at first if it can
+ * be mapped to a free slot and fails with an error if another xfer with the
+ * same sequence number is currently still registered as in-flight.
+ *
+ * Return: 0 on Success or -EBUSY if the sequence number embedded in the xfer
+ * could not be mapped to a free slot in the xfer_alloc_table.
+ */
+static int scmi_xfer_inflight_register(struct scmi_xfer *xfer,
+				       struct scmi_xfers_info *minfo)
+{
+	int ret = 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&minfo->xfer_lock, flags);
+	if (!test_bit(xfer->hdr.seq, minfo->xfer_alloc_table))
+		scmi_xfer_inflight_register_unlocked(xfer, minfo);
+	else
+		ret = -EBUSY;
+	spin_unlock_irqrestore(&minfo->xfer_lock, flags);
+
+	return ret;
+}
+
+/**
+ * scmi_xfer_raw_inflight_register - A helper to register the given xfer as
+ * in-flight on the TX channel, if possible.
+ *
+ * @handle: Pointer to SCMI entity handle
+ * @xfer: The xfer to register
+ *
+ * Return: 0 on Success, error otherwise
+ */
+int scmi_xfer_raw_inflight_register(const struct scmi_handle *handle,
+				    struct scmi_xfer *xfer)
+{
+	struct scmi_info *info = handle_to_scmi_info(handle);
+
+	return scmi_xfer_inflight_register(xfer, &info->tx_minfo);
+}
+
+/**
+ * scmi_xfer_pending_set - Pick a proper sequence number and mark the xfer
+ * as pending in-flight
+ *
+ * @xfer: The xfer to act upon
+ * @minfo: Pointer to Tx/Rx Message management info based on channel type
+ *
+ * Return: 0 on Success or error otherwise
+ */
+static inline int scmi_xfer_pending_set(struct scmi_xfer *xfer,
+					struct scmi_xfers_info *minfo)
+{
+	int ret;
+	unsigned long flags;
+
+	spin_lock_irqsave(&minfo->xfer_lock, flags);
+	/* Set a new monotonic token as the xfer sequence number */
+	ret = scmi_xfer_token_set(minfo, xfer);
+	if (!ret)
+		scmi_xfer_inflight_register_unlocked(xfer, minfo);
+	spin_unlock_irqrestore(&minfo->xfer_lock, flags);
+
+	return ret;
+}
+
+/**
  * scmi_xfer_get() - Allocate one message
  *
  * @handle: Pointer to SCMI entity handle
  * @minfo: Pointer to Tx/Rx Message management info based on channel type
- * @set_pending: If true a monotonic token is picked and the xfer is added to
- *		 the pending hash table.
  *
  * Helper function which is used by various message functions that are
  * exposed to clients of this driver for allocating a message traffic event.
  *
- * Picks an xfer from the free list @free_xfers (if any available) and, if
- * required, sets a monotonically increasing token and stores the inflight xfer
- * into the @pending_xfers hashtable for later retrieval.
+ * Picks an xfer from the free list @free_xfers (if any available) and performs
+ * a basic initialization.
+ *
+ * Note that, at this point, still no sequence number is assigned to the
+ * allocated xfer, nor is it registered as a pending transaction.
  *
  * The successfully initialized xfer is refcounted.
  *
- * Context: Holds @xfer_lock while manipulating @xfer_alloc_table and
- *	    @free_xfers.
+ * Context: Holds @xfer_lock while manipulating @free_xfers.
  *
- * Return: 0 if all went fine, else corresponding error.
+ * Return: An initialized xfer if all went fine, else an error pointer.
  */
 static struct scmi_xfer *scmi_xfer_get(const struct scmi_handle *handle,
-				       struct scmi_xfers_info *minfo,
-				       bool set_pending)
+				       struct scmi_xfers_info *minfo)
 {
-	int ret;
	unsigned long flags;
	struct scmi_xfer *xfer;
···
	 */
	xfer->transfer_id = atomic_inc_return(&transfer_last_id);

-	if (set_pending) {
-		/* Pick and set monotonic token */
-		ret = scmi_xfer_token_set(minfo, xfer);
-		if (!ret) {
-			hash_add(minfo->pending_xfers, &xfer->node,
-				 xfer->hdr.seq);
-			xfer->pending = true;
-		} else {
-			dev_err(handle->dev,
-				"Failed to get monotonic token %d\n", ret);
-			hlist_add_head(&xfer->node, &minfo->free_xfers);
-			xfer = ERR_PTR(ret);
-		}
-	}
-
-	if (!IS_ERR(xfer)) {
-		refcount_set(&xfer->users, 1);
-		atomic_set(&xfer->busy, SCMI_XFER_FREE);
-	}
+	refcount_set(&xfer->users, 1);
+	atomic_set(&xfer->busy, SCMI_XFER_FREE);
	spin_unlock_irqrestore(&minfo->xfer_lock, flags);

	return xfer;
 }

+/**
+ * scmi_xfer_raw_get - Helper to get a bare free xfer from the TX channel
+ *
+ * @handle: Pointer to SCMI entity handle
+ *
+ * Note that the xfer is taken from the TX channel structures.
+ *
+ * Return: A valid xfer on Success, or an error pointer otherwise
+ */
+struct scmi_xfer *scmi_xfer_raw_get(const struct scmi_handle *handle)
+{
+	struct scmi_xfer *xfer;
+	struct scmi_info *info = handle_to_scmi_info(handle);
+
+	xfer = scmi_xfer_get(handle, &info->tx_minfo);
+	if (!IS_ERR(xfer))
+		xfer->flags |= SCMI_XFER_FLAG_IS_RAW;
+
+	return xfer;
+}
+
+/**
+ * scmi_xfer_raw_channel_get - Helper to get a reference to the proper channel
+ * to use for a specific protocol_id Raw transaction.
+ *
+ * @handle: Pointer to SCMI entity handle
+ * @protocol_id: Identifier of the protocol
+ *
+ * Note that in a regular SCMI stack, usually, a protocol has to be defined in
+ * the DT to have an associated channel and be usable; but in Raw mode any
+ * protocol in range is allowed, re-using the Base channel, so as to enable
+ * fuzzing on any protocol without the need of a fully compiled DT.
+ *
+ * Return: A reference to the channel to use, or an ERR_PTR
+ */
+struct scmi_chan_info *
+scmi_xfer_raw_channel_get(const struct scmi_handle *handle, u8 protocol_id)
+{
+	struct scmi_chan_info *cinfo;
+	struct scmi_info *info = handle_to_scmi_info(handle);
+
+	cinfo = idr_find(&info->tx_idr, protocol_id);
+	if (!cinfo) {
+		if (protocol_id == SCMI_PROTOCOL_BASE)
+			return ERR_PTR(-EINVAL);
+		/* Use Base channel for protocols not defined for DT */
+		cinfo = idr_find(&info->tx_idr, SCMI_PROTOCOL_BASE);
+		if (!cinfo)
+			return ERR_PTR(-EINVAL);
+		dev_warn_once(handle->dev,
+			      "Using Base channel for protocol 0x%X\n",
+			      protocol_id);
+	}
+
+	return cinfo;
 }

 /**
···
		hlist_add_head(&xfer->node, &minfo->free_xfers);
	}
	spin_unlock_irqrestore(&minfo->xfer_lock, flags);
+}
+
+/**
+ * scmi_xfer_raw_put - Release an xfer that was taken by @scmi_xfer_raw_get
+ *
+ * @handle: Pointer to SCMI entity handle
+ * @xfer: A reference to the xfer to put
+ *
+ * Note that as with other xfer_put() handlers the xfer is really effectively
+ * released only if there are no more users on the system.
+ */
+void scmi_xfer_raw_put(const struct scmi_handle *handle, struct scmi_xfer *xfer)
+{
+	struct scmi_info *info = handle_to_scmi_info(handle);
+
+	xfer->flags &= ~SCMI_XFER_FLAG_IS_RAW;
+	xfer->flags &= ~SCMI_XFER_FLAG_CHAN_SET;
+	return __scmi_xfer_put(&info->tx_minfo, xfer);
 }

 /**
···
	info->desc->ops->clear_channel(cinfo);
 }

-static inline bool is_polling_required(struct scmi_chan_info *cinfo,
-				       struct scmi_info *info)
-{
-	return cinfo->no_completion_irq || info->desc->force_polling;
-}
-
-static inline bool is_transport_polling_capable(struct scmi_info *info)
-{
-	return info->desc->ops->poll_done ||
-	       info->desc->sync_cmds_completed_on_ret;
-}
-
-static inline bool is_polling_enabled(struct scmi_chan_info *cinfo,
-				      struct scmi_info *info)
-{
-	return is_polling_required(cinfo, info) &&
-	       is_transport_polling_capable(info);
-}
-
 static void scmi_handle_notification(struct scmi_chan_info *cinfo,
				      u32 msg_hdr, void *priv)
 {
···
	ktime_t ts;

	ts = ktime_get_boottime();
-	xfer = scmi_xfer_get(cinfo->handle, minfo, false);
+	xfer = scmi_xfer_get(cinfo->handle, minfo);
	if (IS_ERR(xfer)) {
		dev_err(dev, "failed to get free message slot (%ld)\n",
			PTR_ERR(xfer));
···
	info->desc->ops->fetch_notification(cinfo, info->desc->max_msg_size,
					    xfer);

-	trace_scmi_msg_dump(xfer->hdr.protocol_id, xfer->hdr.id, "NOTI",
-			    xfer->hdr.seq, xfer->hdr.status,
-			    xfer->rx.buf, xfer->rx.len);
+	trace_scmi_msg_dump(info->id, cinfo->id, xfer->hdr.protocol_id,
+			    xfer->hdr.id, "NOTI", xfer->hdr.seq,
+			    xfer->hdr.status, xfer->rx.buf, xfer->rx.len);

	scmi_notify(cinfo->handle, xfer->hdr.protocol_id,
		    xfer->hdr.id, xfer->rx.buf, xfer->rx.len, ts);
···
	trace_scmi_rx_done(xfer->transfer_id, xfer->hdr.id,
			   xfer->hdr.protocol_id, xfer->hdr.seq,
			   MSG_TYPE_NOTIFICATION);
+
+	if (IS_ENABLED(CONFIG_ARM_SCMI_RAW_MODE_SUPPORT)) {
+		xfer->hdr.seq = MSG_XTRACT_TOKEN(msg_hdr);
+		scmi_raw_message_report(info->raw, xfer, SCMI_RAW_NOTIF_QUEUE,
+					cinfo->id);
+	}

	__scmi_xfer_put(minfo, xfer);
···
	xfer = scmi_xfer_command_acquire(cinfo, msg_hdr);
	if (IS_ERR(xfer)) {
+		if (IS_ENABLED(CONFIG_ARM_SCMI_RAW_MODE_SUPPORT))
+			scmi_raw_error_report(info->raw, cinfo, msg_hdr, priv);
+
		if (MSG_XTRACT_TYPE(msg_hdr) == MSG_TYPE_DELAYED_RESP)
			scmi_clear_channel(info, cinfo);
		return;
···
	smp_store_mb(xfer->priv, priv);
	info->desc->ops->fetch_response(cinfo, xfer);

-	trace_scmi_msg_dump(xfer->hdr.protocol_id, xfer->hdr.id,
+	trace_scmi_msg_dump(info->id, cinfo->id, xfer->hdr.protocol_id,
+			    xfer->hdr.id,
			    xfer->hdr.type == MSG_TYPE_DELAYED_RESP ?
-			    "DLYD" : "RESP",
+			    (!SCMI_XFER_IS_RAW(xfer) ? "DLYD" : "dlyd") :
+			    (!SCMI_XFER_IS_RAW(xfer) ? "RESP" : "resp"),
			    xfer->hdr.seq, xfer->hdr.status,
			    xfer->rx.buf, xfer->rx.len);
···
		complete(xfer->async_done);
	} else {
		complete(&xfer->done);
+	}
+
+	if (IS_ENABLED(CONFIG_ARM_SCMI_RAW_MODE_SUPPORT)) {
+		/*
+		 * When in polling mode avoid to queue the Raw xfer on the IRQ
+		 * RX path since it will be already queued at the end of the TX
+		 * poll loop.
+		 */
+		if (!xfer->hdr.poll_completion)
+			scmi_raw_message_report(info->raw, xfer,
+						SCMI_RAW_REPLY_QUEUE,
+						cinfo->id);
	}

	scmi_xfer_command_release(info, xfer);
···
		ktime_after(ktime_get(), stop);
 }

-/**
- * scmi_wait_for_message_response - An helper to group all the possible ways of
- * waiting for a synchronous message response.
- *
- * @cinfo: SCMI channel info
- * @xfer: Reference to the transfer being waited for.
- *
- * Chooses waiting strategy (sleep-waiting vs busy-waiting) depending on
- * configuration flags like xfer->hdr.poll_completion.
- *
- * Return: 0 on Success, error otherwise.
- */
-static int scmi_wait_for_message_response(struct scmi_chan_info *cinfo,
-					  struct scmi_xfer *xfer)
+static int scmi_wait_for_reply(struct device *dev, const struct scmi_desc *desc,
+			       struct scmi_chan_info *cinfo,
+			       struct scmi_xfer *xfer, unsigned int timeout_ms)
 {
-	struct scmi_info *info = handle_to_scmi_info(cinfo->handle);
-	struct device *dev = info->dev;
-	int ret = 0, timeout_ms = info->desc->max_rx_timeout_ms;
-
-	trace_scmi_xfer_response_wait(xfer->transfer_id, xfer->hdr.id,
-				      xfer->hdr.protocol_id, xfer->hdr.seq,
-				      timeout_ms,
-				      xfer->hdr.poll_completion);
+	int ret = 0;

	if (xfer->hdr.poll_completion) {
		/*
		 * Real polling is needed only if transport has NOT declared
		 * itself to support synchronous commands replies.
		 */
-		if (!info->desc->sync_cmds_completed_on_ret) {
+		if (!desc->sync_cmds_completed_on_ret) {
			/*
			 * Poll on xfer using transport provided .poll_done();
			 * assumes no completion interrupt was available.
···
		if (!ret) {
			unsigned long flags;
+			struct scmi_info *info =
+				handle_to_scmi_info(cinfo->handle);

			/*
			 * Do not fetch_response if an out-of-order delayed
···
			 */
			spin_lock_irqsave(&xfer->lock, flags);
			if (xfer->state == SCMI_XFER_SENT_OK) {
-				info->desc->ops->fetch_response(cinfo, xfer);
+				desc->ops->fetch_response(cinfo, xfer);
				xfer->state = SCMI_XFER_RESP_OK;
			}
			spin_unlock_irqrestore(&xfer->lock, flags);

			/* Trace polled replies. */
-			trace_scmi_msg_dump(xfer->hdr.protocol_id, xfer->hdr.id,
-					    "RESP",
+			trace_scmi_msg_dump(info->id, cinfo->id,
+					    xfer->hdr.protocol_id, xfer->hdr.id,
+					    !SCMI_XFER_IS_RAW(xfer) ?
+					    "RESP" : "resp",
					    xfer->hdr.seq, xfer->hdr.status,
					    xfer->rx.buf, xfer->rx.len);
+
+			if (IS_ENABLED(CONFIG_ARM_SCMI_RAW_MODE_SUPPORT)) {
+				struct scmi_info *info =
+					handle_to_scmi_info(cinfo->handle);
+
+				scmi_raw_message_report(info->raw, xfer,
+							SCMI_RAW_REPLY_QUEUE,
+							cinfo->id);
+			}
		}
	} else {
		/* And we wait for the response. */
···
			ret = -ETIMEDOUT;
		}
	}
+
+	return ret;
+}
+
+/**
+ * scmi_wait_for_message_response - A helper to group all the possible ways of
+ * waiting for a synchronous message response.
+ *
+ * @cinfo: SCMI channel info
+ * @xfer: Reference to the transfer being waited for.
+ *
+ * Chooses waiting strategy (sleep-waiting vs busy-waiting) depending on
+ * configuration flags like xfer->hdr.poll_completion.
+ *
+ * Return: 0 on Success, error otherwise.
+ */
+static int scmi_wait_for_message_response(struct scmi_chan_info *cinfo,
+					  struct scmi_xfer *xfer)
+{
+	struct scmi_info *info = handle_to_scmi_info(cinfo->handle);
+	struct device *dev = info->dev;
+
+	trace_scmi_xfer_response_wait(xfer->transfer_id, xfer->hdr.id,
+				      xfer->hdr.protocol_id, xfer->hdr.seq,
+				      info->desc->max_rx_timeout_ms,
+				      xfer->hdr.poll_completion);
+
+	return scmi_wait_for_reply(dev, info->desc, cinfo, xfer,
+				   info->desc->max_rx_timeout_ms);
+}
+
+/**
+ * scmi_xfer_raw_wait_for_message_response - A helper to wait for a message
+ * reply to an xfer raw request on a specific channel for the required timeout.
+ *
+ * @cinfo: SCMI channel info
+ * @xfer: Reference to the transfer being waited for.
+ * @timeout_ms: The maximum timeout in milliseconds
+ *
+ * Return: 0 on Success, error otherwise.
+ */
+int scmi_xfer_raw_wait_for_message_response(struct scmi_chan_info *cinfo,
+					    struct scmi_xfer *xfer,
+					    unsigned int timeout_ms)
+{
+	int ret;
+	struct scmi_info *info = handle_to_scmi_info(cinfo->handle);
+	struct device *dev = info->dev;
+
+	ret = scmi_wait_for_reply(dev, info->desc, cinfo, xfer, timeout_ms);
+	if (ret)
+		dev_dbg(dev, "timed out in RAW response - HDR:%08X\n",
+			pack_scmi_header(&xfer->hdr));

	return ret;
 }
···
	struct scmi_chan_info *cinfo;

	/* Check for polling request on custom command xfers at first */
-	if (xfer->hdr.poll_completion && !is_transport_polling_capable(info)) {
+	if (xfer->hdr.poll_completion &&
+	    !is_transport_polling_capable(info->desc)) {
		dev_warn_once(dev,
			      "Polling mode is not supported by transport.\n");
		return -EINVAL;
···
		return -EINVAL;

	/* True ONLY if also supported by transport. */
-	if (is_polling_enabled(cinfo, info))
+	if (is_polling_enabled(cinfo, info->desc))
		xfer->hdr.poll_completion = true;

	/*
···
			  xfer->hdr.protocol_id, xfer->hdr.seq,
			  xfer->hdr.poll_completion);

+	/* Clear any stale status */
+	xfer->hdr.status = SCMI_SUCCESS;
	xfer->state = SCMI_XFER_SENT_OK;
	/*
	 * Even though spinlocking is not needed here since no race is possible
···
		return ret;
	}

-	trace_scmi_msg_dump(xfer->hdr.protocol_id, xfer->hdr.id, "CMND",
-			    xfer->hdr.seq, xfer->hdr.status,
-			    xfer->tx.buf, xfer->tx.len);
+	trace_scmi_msg_dump(info->id, cinfo->id, xfer->hdr.protocol_id,
+			    xfer->hdr.id, "CMND", xfer->hdr.seq,
+			    xfer->hdr.status, xfer->tx.buf, xfer->tx.len);

	ret = scmi_wait_for_message_response(cinfo, xfer);
	if (!ret && xfer->hdr.status)
···
	xfer->rx.len = info->desc->max_msg_size;
 }
-
-#define SCMI_MAX_RESPONSE_TIMEOUT	(2 * MSEC_PER_SEC)

 /**
  * do_xfer_with_response() - Do one transfer and wait until the delayed
···
	    tx_size > info->desc->max_msg_size)
		return -ERANGE;

-	xfer = scmi_xfer_get(pi->handle, minfo, true);
+	xfer = scmi_xfer_get(pi->handle, minfo);
	if (IS_ERR(xfer)) {
		ret = PTR_ERR(xfer);
		dev_err(dev, "failed to get free message slot(%d)\n", ret);
		return ret;
	}
+
+	/* Pick a sequence number and register this xfer as in-flight */
+	ret = scmi_xfer_pending_set(xfer, minfo);
+	if (ret) {
+		dev_err(pi->handle->dev,
+			"Failed to get monotonic token %d\n", ret);
+		__scmi_xfer_put(minfo, xfer);
+		return ret;
+	}
···
	bool ret;
	struct scmi_info *info = handle_to_scmi_info(handle);

-	ret = info->desc->atomic_enabled && is_transport_polling_capable(info);
+	ret = info->desc->atomic_enabled &&
+	      is_transport_polling_capable(info->desc);
	if (ret && atomic_threshold)
		*atomic_threshold = info->atomic_threshold;

	return ret;
-}
-
-static inline
-struct scmi_handle *scmi_handle_get_from_info_unlocked(struct scmi_info *info)
-{
-	info->users++;
-	return &info->handle;
 }

 /**
···
 *
 * Return: pointer to handle if successful, NULL on error
 */
-struct scmi_handle *scmi_handle_get(struct device *dev)
+static struct scmi_handle *scmi_handle_get(struct device *dev)
 {
	struct list_head *p;
	struct scmi_info *info;
···
	list_for_each(p, &scmi_list) {
		info = list_entry(p, struct scmi_info, node);
		if (dev->parent == info->dev) {
-			handle = scmi_handle_get_from_info_unlocked(info);
+			info->users++;
+			handle = &info->handle;
			break;
		}
	}
···
 * Return: 0 is successfully released
 * if null was passed, it returns -EINVAL;
 */
-int scmi_handle_put(const struct scmi_handle *handle)
+static int scmi_handle_put(const struct scmi_handle *handle)
 {
	struct scmi_info *info;
···
	mutex_unlock(&scmi_list_mutex);

	return 0;
 }

+static void scmi_device_link_add(struct device *consumer,
+				 struct device *supplier)
+{
+	struct device_link *link;
+
+	link = device_link_add(consumer, supplier, DL_FLAG_AUTOREMOVE_CONSUMER);
+
+	WARN_ON(!link);
+}
+
+static void scmi_set_handle(struct scmi_device *scmi_dev)
+{
+	scmi_dev->handle = scmi_handle_get(&scmi_dev->dev);
+	if (scmi_dev->handle)
+		scmi_device_link_add(&scmi_dev->dev, scmi_dev->handle->dev);
+}

 static int __scmi_xfer_info_init(struct scmi_info *sinfo,
···
	return ret;
 }

-static int scmi_chan_setup(struct scmi_info *info, struct device *dev,
+static int scmi_chan_setup(struct scmi_info *info, struct device_node *of_node,
			   int prot_id, bool tx)
 {
	int ret, idx;
+	char name[32];
	struct scmi_chan_info *cinfo;
	struct idr *idr;
+	struct scmi_device *tdev = NULL;

	/* Transmit channel is first entry i.e. index 0 */
	idx = tx ? 0 : 1;
	idr = tx ? &info->tx_idr : &info->rx_idr;

-	/* check if already allocated, used for multiple device per protocol */
-	cinfo = idr_find(idr, prot_id);
-	if (cinfo)
-		return 0;
-
-	if (!info->desc->ops->chan_available(dev, idx)) {
+	if (!info->desc->ops->chan_available(of_node, idx)) {
		cinfo = idr_find(idr, SCMI_PROTOCOL_BASE);
		if (unlikely(!cinfo)) /* Possible only if platform has no Rx */
			return -EINVAL;
···
	if (!cinfo)
		return -ENOMEM;

-	cinfo->dev = dev;
	cinfo->rx_timeout_ms = info->desc->max_rx_timeout_ms;

-	ret = info->desc->ops->chan_setup(cinfo, info->dev, tx);
-	if (ret)
-		return ret;
+	/* Create a unique name for this transport device */
+	snprintf(name, 32, "__scmi_transport_device_%s_%02X",
+		 idx ? "rx" : "tx", prot_id);
+	/* Create a uniquely named, dedicated transport device for this chan */
+	tdev = scmi_device_create(of_node, info->dev, prot_id, name);
+	if (!tdev) {
+		dev_err(info->dev,
+			"failed to create transport device (%s)\n", name);
+		devm_kfree(info->dev, cinfo);
+		return -EINVAL;
+	}
+	of_node_get(of_node);

+	cinfo->id = prot_id;
+	cinfo->dev = &tdev->dev;
+	ret = info->desc->ops->chan_setup(cinfo, info->dev, tx);
+	if (ret) {
+		of_node_put(of_node);
+		scmi_device_destroy(info->dev, prot_id, name);
+		devm_kfree(info->dev, cinfo);
+		return ret;
+	}
+
-	if (tx && is_polling_required(cinfo, info)) {
-		if (is_transport_polling_capable(info))
-			dev_info(dev,
+	if (tx && is_polling_required(cinfo, info->desc)) {
+		if (is_transport_polling_capable(info->desc))
+			dev_info(&tdev->dev,
				 "Enabled polling mode TX channel - prot_id:%d\n",
				 prot_id);
		else
-			dev_warn(dev,
+			dev_warn(&tdev->dev,
				 "Polling mode NOT supported by transport.\n");
	}

 idr_alloc:
	ret = idr_alloc(idr, cinfo, prot_id, prot_id + 1, GFP_KERNEL);
	if (ret != prot_id) {
-		dev_err(dev, "unable to allocate SCMI idr slot err %d\n", ret);
+		dev_err(info->dev,
+			"unable to allocate SCMI idr slot err %d\n", ret);
+		/* Destroy channel and device only if created by this call.
*/ 2056 + if (tdev) { 2057 + of_node_put(of_node); 2058 + scmi_device_destroy(info->dev, prot_id, name); 2059 + devm_kfree(info->dev, cinfo); 2060 + } 2343 2061 return ret; 2344 2062 } 2345 2063 ··· 2373 2041 } 2374 2042 2375 2043 static inline int 2376 - scmi_txrx_setup(struct scmi_info *info, struct device *dev, int prot_id) 2044 + scmi_txrx_setup(struct scmi_info *info, struct device_node *of_node, 2045 + int prot_id) 2377 2046 { 2378 - int ret = scmi_chan_setup(info, dev, prot_id, true); 2047 + int ret = scmi_chan_setup(info, of_node, prot_id, true); 2379 2048 2380 2049 if (!ret) { 2381 2050 /* Rx is optional, report only memory errors */ 2382 - ret = scmi_chan_setup(info, dev, prot_id, false); 2051 + ret = scmi_chan_setup(info, of_node, prot_id, false); 2383 2052 if (ret && ret != -ENOMEM) 2384 2053 ret = 0; 2385 2054 } ··· 2389 2056 } 2390 2057 2391 2058 /** 2392 - * scmi_get_protocol_device - Helper to get/create an SCMI device. 2059 + * scmi_channels_setup - Helper to initialize all required channels 2393 2060 * 2394 - * @np: A device node representing a valid active protocols for the referred 2395 - * SCMI instance. 2396 - * @info: The referred SCMI instance for which we are getting/creating this 2397 - * device. 2398 - * @prot_id: The protocol ID. 2399 - * @name: The device name. 2061 + * @info: The SCMI instance descriptor. 2400 2062 * 2401 - * Referring to the specific SCMI instance identified by @info, this helper 2402 - * takes care to return a properly initialized device matching the requested 2403 - * @proto_id and @name: if device was still not existent it is created as a 2404 - * child of the specified SCMI instance @info and its transport properly 2405 - * initialized as usual. 
2063 + Initialize all the channels described in the DT against the underlying 2064 + configured transport using custom defined dedicated devices instead of 2065 + borrowing devices from the SCMI drivers; this way channels are initialized 2066 + upfront during core SCMI stack probing and are no longer coupled with SCMI 2067 + devices used by SCMI drivers. 2406 2068 * 2407 - * Return: A properly initialized scmi device, NULL otherwise. 2408 - */ 2409 - static inline struct scmi_device * 2410 - scmi_get_protocol_device(struct device_node *np, struct scmi_info *info, 2411 - int prot_id, const char *name) 2412 - { 2413 - struct scmi_device *sdev; 2414 - 2415 - /* Already created for this parent SCMI instance ? */ 2416 - sdev = scmi_child_dev_find(info->dev, prot_id, name); 2417 - if (sdev) 2418 - return sdev; 2419 - 2420 - mutex_lock(&scmi_syspower_mtx); 2421 - if (prot_id == SCMI_PROTOCOL_SYSTEM && scmi_syspower_registered) { 2422 - dev_warn(info->dev, 2423 - "SCMI SystemPower protocol device must be unique !\n"); 2424 - mutex_unlock(&scmi_syspower_mtx); 2425 - 2426 - return NULL; 2427 - } 2428 - 2429 - pr_debug("Creating SCMI device (%s) for protocol %x\n", name, prot_id); 2430 - 2431 - sdev = scmi_device_create(np, info->dev, prot_id, name); 2432 - if (!sdev) { 2433 - dev_err(info->dev, "failed to create %d protocol device\n", 2434 - prot_id); 2435 - mutex_unlock(&scmi_syspower_mtx); 2436 - 2437 - return NULL; 2438 - } 2439 - 2440 - if (scmi_txrx_setup(info, &sdev->dev, prot_id)) { 2441 - dev_err(&sdev->dev, "failed to setup transport\n"); 2442 - scmi_device_destroy(sdev); 2443 - mutex_unlock(&scmi_syspower_mtx); 2444 - 2445 - return NULL; 2446 - } 2447 - 2448 - if (prot_id == SCMI_PROTOCOL_SYSTEM) 2449 - scmi_syspower_registered = true; 2450 - 2451 - mutex_unlock(&scmi_syspower_mtx); 2452 - 2453 - return sdev; 2454 - } 2455 - 2456 - static inline void 2457 - scmi_create_protocol_device(struct device_node *np, struct scmi_info *info, 2458 - int prot_id, 
const char *name) 2459 - { 2460 - struct scmi_device *sdev; 2461 - 2462 - sdev = scmi_get_protocol_device(np, info, prot_id, name); 2463 - if (!sdev) 2464 - return; 2465 - 2466 - /* setup handle now as the transport is ready */ 2467 - scmi_set_handle(sdev); 2468 - } 2469 - 2470 - /** 2471 - * scmi_create_protocol_devices - Create devices for all pending requests for 2472 - * this SCMI instance. 2473 - * 2474 - * @np: The device node describing the protocol 2475 - * @info: The SCMI instance descriptor 2476 - * @prot_id: The protocol ID 2477 - * 2478 - * All devices previously requested for this instance (if any) are found and 2479 - * created by scanning the proper @&scmi_requested_devices entry. 2480 - */ 2481 - static void scmi_create_protocol_devices(struct device_node *np, 2482 - struct scmi_info *info, int prot_id) 2483 - { 2484 - struct list_head *phead; 2485 - 2486 - mutex_lock(&scmi_requested_devices_mtx); 2487 - phead = idr_find(&scmi_requested_devices, prot_id); 2488 - if (phead) { 2489 - struct scmi_requested_dev *rdev; 2490 - 2491 - list_for_each_entry(rdev, phead, node) 2492 - scmi_create_protocol_device(np, info, prot_id, 2493 - rdev->id_table->name); 2494 - } 2495 - mutex_unlock(&scmi_requested_devices_mtx); 2496 - } 2497 - 2498 - /** 2499 - * scmi_protocol_device_request - Helper to request a device 2500 - * 2501 - * @id_table: A protocol/name pair descriptor for the device to be created. 2502 - * 2503 - * This helper let an SCMI driver request specific devices identified by the 2504 - * @id_table to be created for each active SCMI instance. 2505 - * 2506 - * The requested device name MUST NOT be already existent for any protocol; 2507 - * at first the freshly requested @id_table is annotated in the IDR table 2508 - * @scmi_requested_devices, then a matching device is created for each already 2509 - * active SCMI instance. 
(if any) 2510 - * 2511 - * This way the requested device is created straight-away for all the already 2512 - * initialized(probed) SCMI instances (handles) and it remains also annotated 2513 - * as pending creation if the requesting SCMI driver was loaded before some 2514 - * SCMI instance and related transports were available: when such late instance 2515 - * is probed, its probe will take care to scan the list of pending requested 2516 - * devices and create those on its own (see @scmi_create_protocol_devices and 2517 - * its enclosing loop) 2069 + * Note that, even though a pair of TX/RX channels is associated to each 2070 + * protocol defined in the DT, a distinct freshly initialized channel is 2071 + * created only if the DT node for the protocol at hand describes a dedicated 2072 + * channel: in all the other cases the common BASE protocol channel is reused. 2518 2073 * 2519 2074 * Return: 0 on Success 2520 2075 */ 2521 - int scmi_protocol_device_request(const struct scmi_device_id *id_table) 2522 - { 2523 - int ret = 0; 2524 - unsigned int id = 0; 2525 - struct list_head *head, *phead = NULL; 2526 - struct scmi_requested_dev *rdev; 2527 - struct scmi_info *info; 2528 - 2529 - pr_debug("Requesting SCMI device (%s) for protocol %x\n", 2530 - id_table->name, id_table->protocol_id); 2531 - 2532 - /* 2533 - * Search for the matching protocol rdev list and then search 2534 - * of any existent equally named device...fails if any duplicate found. 
2535 - */ 2536 - mutex_lock(&scmi_requested_devices_mtx); 2537 - idr_for_each_entry(&scmi_requested_devices, head, id) { 2538 - if (!phead) { 2539 - /* A list found registered in the IDR is never empty */ 2540 - rdev = list_first_entry(head, struct scmi_requested_dev, 2541 - node); 2542 - if (rdev->id_table->protocol_id == 2543 - id_table->protocol_id) 2544 - phead = head; 2545 - } 2546 - list_for_each_entry(rdev, head, node) { 2547 - if (!strcmp(rdev->id_table->name, id_table->name)) { 2548 - pr_err("Ignoring duplicate request [%d] %s\n", 2549 - rdev->id_table->protocol_id, 2550 - rdev->id_table->name); 2551 - ret = -EINVAL; 2552 - goto out; 2553 - } 2554 - } 2555 - } 2556 - 2557 - /* 2558 - * No duplicate found for requested id_table, so let's create a new 2559 - * requested device entry for this new valid request. 2560 - */ 2561 - rdev = kzalloc(sizeof(*rdev), GFP_KERNEL); 2562 - if (!rdev) { 2563 - ret = -ENOMEM; 2564 - goto out; 2565 - } 2566 - rdev->id_table = id_table; 2567 - 2568 - /* 2569 - * Append the new requested device table descriptor to the head of the 2570 - * related protocol list, eventually creating such head if not already 2571 - * there. 
2572 - */ 2573 - if (!phead) { 2574 - phead = kzalloc(sizeof(*phead), GFP_KERNEL); 2575 - if (!phead) { 2576 - kfree(rdev); 2577 - ret = -ENOMEM; 2578 - goto out; 2579 - } 2580 - INIT_LIST_HEAD(phead); 2581 - 2582 - ret = idr_alloc(&scmi_requested_devices, (void *)phead, 2583 - id_table->protocol_id, 2584 - id_table->protocol_id + 1, GFP_KERNEL); 2585 - if (ret != id_table->protocol_id) { 2586 - pr_err("Failed to save SCMI device - ret:%d\n", ret); 2587 - kfree(rdev); 2588 - kfree(phead); 2589 - ret = -EINVAL; 2590 - goto out; 2591 - } 2592 - ret = 0; 2593 - } 2594 - list_add(&rdev->node, phead); 2595 - 2596 - /* 2597 - * Now effectively create and initialize the requested device for every 2598 - * already initialized SCMI instance which has registered the requested 2599 - * protocol as a valid active one: i.e. defined in DT and supported by 2600 - * current platform FW. 2601 - */ 2602 - mutex_lock(&scmi_list_mutex); 2603 - list_for_each_entry(info, &scmi_list, node) { 2604 - struct device_node *child; 2605 - 2606 - child = idr_find(&info->active_protocols, 2607 - id_table->protocol_id); 2608 - if (child) { 2609 - struct scmi_device *sdev; 2610 - 2611 - sdev = scmi_get_protocol_device(child, info, 2612 - id_table->protocol_id, 2613 - id_table->name); 2614 - if (sdev) { 2615 - /* Set handle if not already set: device existed */ 2616 - if (!sdev->handle) 2617 - sdev->handle = 2618 - scmi_handle_get_from_info_unlocked(info); 2619 - /* Relink consumer and suppliers */ 2620 - if (sdev->handle) 2621 - scmi_device_link_add(&sdev->dev, 2622 - sdev->handle->dev); 2623 - } 2624 - } else { 2625 - dev_err(info->dev, 2626 - "Failed. 
SCMI protocol %d not active.\n", 2627 - id_table->protocol_id); 2628 - } 2629 - } 2630 - mutex_unlock(&scmi_list_mutex); 2631 - 2632 - out: 2633 - mutex_unlock(&scmi_requested_devices_mtx); 2634 - 2635 - return ret; 2636 - } 2637 - 2638 - /** 2639 - * scmi_protocol_device_unrequest - Helper to unrequest a device 2640 - * 2641 - * @id_table: A protocol/name pair descriptor for the device to be unrequested. 2642 - * 2643 - * An helper to let an SCMI driver release its request about devices; note that 2644 - * devices are created and initialized once the first SCMI driver request them 2645 - * but they destroyed only on SCMI core unloading/unbinding. 2646 - * 2647 - * The current SCMI transport layer uses such devices as internal references and 2648 - * as such they could be shared as same transport between multiple drivers so 2649 - * that cannot be safely destroyed till the whole SCMI stack is removed. 2650 - * (unless adding further burden of refcounting.) 2651 - */ 2652 - void scmi_protocol_device_unrequest(const struct scmi_device_id *id_table) 2653 - { 2654 - struct list_head *phead; 2655 - 2656 - pr_debug("Unrequesting SCMI device (%s) for protocol %x\n", 2657 - id_table->name, id_table->protocol_id); 2658 - 2659 - mutex_lock(&scmi_requested_devices_mtx); 2660 - phead = idr_find(&scmi_requested_devices, id_table->protocol_id); 2661 - if (phead) { 2662 - struct scmi_requested_dev *victim, *tmp; 2663 - 2664 - list_for_each_entry_safe(victim, tmp, phead, node) { 2665 - if (!strcmp(victim->id_table->name, id_table->name)) { 2666 - list_del(&victim->node); 2667 - kfree(victim); 2668 - break; 2669 - } 2670 - } 2671 - 2672 - if (list_empty(phead)) { 2673 - idr_remove(&scmi_requested_devices, 2674 - id_table->protocol_id); 2675 - kfree(phead); 2676 - } 2677 - } 2678 - mutex_unlock(&scmi_requested_devices_mtx); 2679 - } 2680 - 2681 - static int scmi_cleanup_txrx_channels(struct scmi_info *info) 2076 + static int scmi_channels_setup(struct scmi_info *info) 2682 2077 { 
2683 2078 int ret; 2684 - struct idr *idr = &info->tx_idr; 2079 + struct device_node *child, *top_np = info->dev->of_node; 2685 2080 2686 - ret = idr_for_each(idr, info->desc->ops->chan_free, idr); 2687 - idr_destroy(&info->tx_idr); 2081 + /* Initialize a common generic channel at first */ 2082 + ret = scmi_txrx_setup(info, top_np, SCMI_PROTOCOL_BASE); 2083 + if (ret) 2084 + return ret; 2688 2085 2689 - idr = &info->rx_idr; 2690 - ret = idr_for_each(idr, info->desc->ops->chan_free, idr); 2691 - idr_destroy(&info->rx_idr); 2086 + for_each_available_child_of_node(top_np, child) { 2087 + u32 prot_id; 2088 + 2089 + if (of_property_read_u32(child, "reg", &prot_id)) 2090 + continue; 2091 + 2092 + if (!FIELD_FIT(MSG_PROTOCOL_ID_MASK, prot_id)) 2093 + dev_err(info->dev, 2094 + "Out of range protocol %d\n", prot_id); 2095 + 2096 + ret = scmi_txrx_setup(info, child, prot_id); 2097 + if (ret) { 2098 + of_node_put(child); 2099 + return ret; 2100 + } 2101 + } 2102 + 2103 + return 0; 2104 + } 2105 + 2106 + static int scmi_chan_destroy(int id, void *p, void *idr) 2107 + { 2108 + struct scmi_chan_info *cinfo = p; 2109 + 2110 + if (cinfo->dev) { 2111 + struct scmi_info *info = handle_to_scmi_info(cinfo->handle); 2112 + struct scmi_device *sdev = to_scmi_dev(cinfo->dev); 2113 + 2114 + of_node_put(cinfo->dev->of_node); 2115 + scmi_device_destroy(info->dev, id, sdev->name); 2116 + cinfo->dev = NULL; 2117 + } 2118 + 2119 + idr_remove(idr, id); 2120 + 2121 + return 0; 2122 + } 2123 + 2124 + static void scmi_cleanup_channels(struct scmi_info *info, struct idr *idr) 2125 + { 2126 + /* At first free all channels at the transport layer ... 
*/ 2127 + idr_for_each(idr, info->desc->ops->chan_free, idr); 2128 + 2129 + /* ...then destroy all underlying devices */ 2130 + idr_for_each(idr, scmi_chan_destroy, idr); 2131 + 2132 + idr_destroy(idr); 2133 + } 2134 + 2135 + static void scmi_cleanup_txrx_channels(struct scmi_info *info) 2136 + { 2137 + scmi_cleanup_channels(info, &info->tx_idr); 2138 + 2139 + scmi_cleanup_channels(info, &info->rx_idr); 2140 + } 2141 + 2142 + static int scmi_bus_notifier(struct notifier_block *nb, 2143 + unsigned long action, void *data) 2144 + { 2145 + struct scmi_info *info = bus_nb_to_scmi_info(nb); 2146 + struct scmi_device *sdev = to_scmi_dev(data); 2147 + 2148 + /* Skip transport devices and devices of different SCMI instances */ 2149 + if (!strncmp(sdev->name, "__scmi_transport_device", 23) || 2150 + sdev->dev.parent != info->dev) 2151 + return NOTIFY_DONE; 2152 + 2153 + switch (action) { 2154 + case BUS_NOTIFY_BIND_DRIVER: 2155 + /* setup handle now as the transport is ready */ 2156 + scmi_set_handle(sdev); 2157 + break; 2158 + case BUS_NOTIFY_UNBOUND_DRIVER: 2159 + scmi_handle_put(sdev->handle); 2160 + sdev->handle = NULL; 2161 + break; 2162 + default: 2163 + return NOTIFY_DONE; 2164 + } 2165 + 2166 + dev_dbg(info->dev, "Device %s (%s) is now %s\n", dev_name(&sdev->dev), 2167 + sdev->name, action == BUS_NOTIFY_BIND_DRIVER ? 2168 + "about to be BOUND." : "UNBOUND."); 2169 + 2170 + return NOTIFY_OK; 2171 + } 2172 + 2173 + static int scmi_device_request_notifier(struct notifier_block *nb, 2174 + unsigned long action, void *data) 2175 + { 2176 + struct device_node *np; 2177 + struct scmi_device_id *id_table = data; 2178 + struct scmi_info *info = req_nb_to_scmi_info(nb); 2179 + 2180 + np = idr_find(&info->active_protocols, id_table->protocol_id); 2181 + if (!np) 2182 + return NOTIFY_DONE; 2183 + 2184 + dev_dbg(info->dev, "%sRequested device (%s) for protocol 0x%x\n", 2185 + action == SCMI_BUS_NOTIFY_DEVICE_REQUEST ? 
"" : "UN-", 2186 + id_table->name, id_table->protocol_id); 2187 + 2188 + switch (action) { 2189 + case SCMI_BUS_NOTIFY_DEVICE_REQUEST: 2190 + scmi_create_protocol_devices(np, info, id_table->protocol_id, 2191 + id_table->name); 2192 + break; 2193 + case SCMI_BUS_NOTIFY_DEVICE_UNREQUEST: 2194 + scmi_destroy_protocol_devices(info, id_table->protocol_id, 2195 + id_table->name); 2196 + break; 2197 + default: 2198 + return NOTIFY_DONE; 2199 + } 2200 + 2201 + return NOTIFY_OK; 2202 + } 2203 + 2204 + static void scmi_debugfs_common_cleanup(void *d) 2205 + { 2206 + struct scmi_debug_info *dbg = d; 2207 + 2208 + if (!dbg) 2209 + return; 2210 + 2211 + debugfs_remove_recursive(dbg->top_dentry); 2212 + kfree(dbg->name); 2213 + kfree(dbg->type); 2214 + } 2215 + 2216 + static struct scmi_debug_info *scmi_debugfs_common_setup(struct scmi_info *info) 2217 + { 2218 + char top_dir[16]; 2219 + struct dentry *trans, *top_dentry; 2220 + struct scmi_debug_info *dbg; 2221 + const char *c_ptr = NULL; 2222 + 2223 + dbg = devm_kzalloc(info->dev, sizeof(*dbg), GFP_KERNEL); 2224 + if (!dbg) 2225 + return NULL; 2226 + 2227 + dbg->name = kstrdup(of_node_full_name(info->dev->of_node), GFP_KERNEL); 2228 + if (!dbg->name) { 2229 + devm_kfree(info->dev, dbg); 2230 + return NULL; 2231 + } 2232 + 2233 + of_property_read_string(info->dev->of_node, "compatible", &c_ptr); 2234 + dbg->type = kstrdup(c_ptr, GFP_KERNEL); 2235 + if (!dbg->type) { 2236 + kfree(dbg->name); 2237 + devm_kfree(info->dev, dbg); 2238 + return NULL; 2239 + } 2240 + 2241 + snprintf(top_dir, 16, "%d", info->id); 2242 + top_dentry = debugfs_create_dir(top_dir, scmi_top_dentry); 2243 + trans = debugfs_create_dir("transport", top_dentry); 2244 + 2245 + dbg->is_atomic = info->desc->atomic_enabled && 2246 + is_transport_polling_capable(info->desc); 2247 + 2248 + debugfs_create_str("instance_name", 0400, top_dentry, 2249 + (char **)&dbg->name); 2250 + 2251 + debugfs_create_u32("atomic_threshold_us", 0400, top_dentry, 2252 + 
&info->atomic_threshold); 2253 + 2254 + debugfs_create_str("type", 0400, trans, (char **)&dbg->type); 2255 + 2256 + debugfs_create_bool("is_atomic", 0400, trans, &dbg->is_atomic); 2257 + 2258 + debugfs_create_u32("max_rx_timeout_ms", 0400, trans, 2259 + (u32 *)&info->desc->max_rx_timeout_ms); 2260 + 2261 + debugfs_create_u32("max_msg_size", 0400, trans, 2262 + (u32 *)&info->desc->max_msg_size); 2263 + 2264 + debugfs_create_u32("tx_max_msg", 0400, trans, 2265 + (u32 *)&info->tx_minfo.max_msg); 2266 + 2267 + debugfs_create_u32("rx_max_msg", 0400, trans, 2268 + (u32 *)&info->rx_minfo.max_msg); 2269 + 2270 + dbg->top_dentry = top_dentry; 2271 + 2272 + if (devm_add_action_or_reset(info->dev, 2273 + scmi_debugfs_common_cleanup, dbg)) { 2274 + scmi_debugfs_common_cleanup(dbg); 2275 + return NULL; 2276 + } 2277 + 2278 + return dbg; 2279 + } 2280 + 2281 + static int scmi_debugfs_raw_mode_setup(struct scmi_info *info) 2282 + { 2283 + int id, num_chans = 0, ret = 0; 2284 + struct scmi_chan_info *cinfo; 2285 + u8 channels[SCMI_MAX_CHANNELS] = {}; 2286 + DECLARE_BITMAP(protos, SCMI_MAX_CHANNELS) = {}; 2287 + 2288 + if (!info->dbg) 2289 + return -EINVAL; 2290 + 2291 + /* Enumerate all channels to collect their ids */ 2292 + idr_for_each_entry(&info->tx_idr, cinfo, id) { 2293 + /* 2294 + * Cannot happen, but be defensive. 2295 + * Zero as num_chans is ok, warn and carry on. 
2296 + */ 2297 + if (num_chans >= SCMI_MAX_CHANNELS || !cinfo) { 2298 + dev_warn(info->dev, 2299 + "SCMI RAW - Error enumerating channels\n"); 2300 + break; 2301 + } 2302 + 2303 + if (!test_bit(cinfo->id, protos)) { 2304 + channels[num_chans++] = cinfo->id; 2305 + set_bit(cinfo->id, protos); 2306 + } 2307 + } 2308 + 2309 + info->raw = scmi_raw_mode_init(&info->handle, info->dbg->top_dentry, 2310 + info->id, channels, num_chans, 2311 + info->desc, info->tx_minfo.max_msg); 2312 + if (IS_ERR(info->raw)) { 2313 + dev_err(info->dev, "Failed to initialize SCMI RAW Mode !\n"); 2314 + ret = PTR_ERR(info->raw); 2315 + info->raw = NULL; 2316 + } 2692 2317 2693 2318 return ret; 2694 2319 } ··· 2668 2377 if (!info) 2669 2378 return -ENOMEM; 2670 2379 2380 + info->id = ida_alloc_min(&scmi_id, 0, GFP_KERNEL); 2381 + if (info->id < 0) 2382 + return info->id; 2383 + 2671 2384 info->dev = dev; 2672 2385 info->desc = desc; 2386 + info->bus_nb.notifier_call = scmi_bus_notifier; 2387 + info->dev_req_nb.notifier_call = scmi_device_request_notifier; 2673 2388 INIT_LIST_HEAD(&info->node); 2674 2389 idr_init(&info->protocols); 2675 2390 mutex_init(&info->protocols_mtx); 2676 2391 idr_init(&info->active_protocols); 2392 + mutex_init(&info->devreq_mtx); 2677 2393 2678 2394 platform_set_drvdata(pdev, info); 2679 2395 idr_init(&info->tx_idr); ··· 2704 2406 if (desc->ops->link_supplier) { 2705 2407 ret = desc->ops->link_supplier(dev); 2706 2408 if (ret) 2707 - return ret; 2409 + goto clear_ida; 2708 2410 } 2709 2411 2710 - ret = scmi_txrx_setup(info, dev, SCMI_PROTOCOL_BASE); 2412 + /* Setup all channels described in the DT at first */ 2413 + ret = scmi_channels_setup(info); 2711 2414 if (ret) 2712 - return ret; 2415 + goto clear_ida; 2416 + 2417 + ret = bus_register_notifier(&scmi_bus_type, &info->bus_nb); 2418 + if (ret) 2419 + goto clear_txrx_setup; 2420 + 2421 + ret = blocking_notifier_chain_register(&scmi_requested_devices_nh, 2422 + &info->dev_req_nb); 2423 + if (ret) 2424 + goto 
clear_bus_notifier; 2713 2425 2714 2426 ret = scmi_xfer_info_init(info); 2715 2427 if (ret) 2716 - goto clear_txrx_setup; 2428 + goto clear_dev_req_notifier; 2429 + 2430 + if (scmi_top_dentry) { 2431 + info->dbg = scmi_debugfs_common_setup(info); 2432 + if (!info->dbg) 2433 + dev_warn(dev, "Failed to setup SCMI debugfs.\n"); 2434 + 2435 + if (IS_ENABLED(CONFIG_ARM_SCMI_RAW_MODE_SUPPORT)) { 2436 + bool coex = 2437 + IS_ENABLED(CONFIG_ARM_SCMI_RAW_MODE_SUPPORT_COEX); 2438 + 2439 + ret = scmi_debugfs_raw_mode_setup(info); 2440 + if (!coex) { 2441 + if (ret) 2442 + goto clear_dev_req_notifier; 2443 + 2444 + /* Bail out anyway when coex is disabled */ 2445 + return ret; 2446 + } 2447 + 2448 + /* Coex enabled, carry on in any case. */ 2449 + dev_info(dev, "SCMI RAW Mode COEX enabled !\n"); 2450 + } 2451 + } 2717 2452 2718 2453 if (scmi_notification_init(handle)) 2719 2454 dev_err(dev, "SCMI Notifications NOT available.\n"); 2720 2455 2721 - if (info->desc->atomic_enabled && !is_transport_polling_capable(info)) 2456 + if (info->desc->atomic_enabled && 2457 + !is_transport_polling_capable(info->desc)) 2722 2458 dev_err(dev, 2723 2459 "Transport is not polling capable. 
Atomic mode not supported.\n"); 2724 2460 ··· 2799 2467 } 2800 2468 2801 2469 of_node_get(child); 2802 - scmi_create_protocol_devices(child, info, prot_id); 2470 + scmi_create_protocol_devices(child, info, prot_id, NULL); 2803 2471 } 2804 2472 2805 2473 return 0; 2806 2474 2807 2475 notification_exit: 2476 + if (IS_ENABLED(CONFIG_ARM_SCMI_RAW_MODE_SUPPORT)) 2477 + scmi_raw_mode_cleanup(info->raw); 2808 2478 scmi_notification_exit(&info->handle); 2479 + clear_dev_req_notifier: 2480 + blocking_notifier_chain_unregister(&scmi_requested_devices_nh, 2481 + &info->dev_req_nb); 2482 + clear_bus_notifier: 2483 + bus_unregister_notifier(&scmi_bus_type, &info->bus_nb); 2809 2484 clear_txrx_setup: 2810 2485 scmi_cleanup_txrx_channels(info); 2486 + clear_ida: 2487 + ida_free(&scmi_id, info->id); 2811 2488 return ret; 2812 - } 2813 - 2814 - void scmi_free_channel(struct scmi_chan_info *cinfo, struct idr *idr, int id) 2815 - { 2816 - idr_remove(idr, id); 2817 2489 } 2818 2490 2819 2491 static int scmi_remove(struct platform_device *pdev) 2820 2492 { 2821 - int ret, id; 2493 + int id; 2822 2494 struct scmi_info *info = platform_get_drvdata(pdev); 2823 2495 struct device_node *child; 2496 + 2497 + if (IS_ENABLED(CONFIG_ARM_SCMI_RAW_MODE_SUPPORT)) 2498 + scmi_raw_mode_cleanup(info->raw); 2824 2499 2825 2500 mutex_lock(&scmi_list_mutex); 2826 2501 if (info->users) ··· 2846 2507 of_node_put(child); 2847 2508 idr_destroy(&info->active_protocols); 2848 2509 2510 + blocking_notifier_chain_unregister(&scmi_requested_devices_nh, 2511 + &info->dev_req_nb); 2512 + bus_unregister_notifier(&scmi_bus_type, &info->bus_nb); 2513 + 2849 2514 /* Safe to free channels since no more users */ 2850 - ret = scmi_cleanup_txrx_channels(info); 2851 - if (ret) 2852 - dev_warn(&pdev->dev, "Failed to cleanup SCMI channels.\n"); 2515 + scmi_cleanup_txrx_channels(info); 2516 + 2517 + ida_free(&scmi_id, info->id); 2853 2518 2854 2519 return 0; 2855 2520 } ··· 2982 2639 __scmi_transports_setup(false); 2983 2640 
} 2984 2641 2642 + static struct dentry *scmi_debugfs_init(void) 2643 + { 2644 + struct dentry *d; 2645 + 2646 + d = debugfs_create_dir("scmi", NULL); 2647 + if (IS_ERR(d)) { 2648 + pr_err("Could NOT create SCMI top dentry.\n"); 2649 + return NULL; 2650 + } 2651 + 2652 + return d; 2653 + } 2654 + 2985 2655 static int __init scmi_driver_init(void) 2986 2656 { 2987 2657 int ret; ··· 3003 2647 if (WARN_ON(!IS_ENABLED(CONFIG_ARM_SCMI_HAVE_TRANSPORT))) 3004 2648 return -EINVAL; 3005 2649 3006 - scmi_bus_init(); 3007 - 3008 2650 /* Initialize any compiled-in transport which provided an init/exit */ 3009 2651 ret = scmi_transports_init(); 3010 2652 if (ret) 3011 2653 return ret; 2654 + 2655 + if (IS_ENABLED(CONFIG_ARM_SCMI_NEED_DEBUGFS)) 2656 + scmi_top_dentry = scmi_debugfs_init(); 3012 2657 3013 2658 scmi_base_register(); 3014 2659 ··· 3024 2667 3025 2668 return platform_driver_register(&scmi_driver); 3026 2669 } 3027 - subsys_initcall(scmi_driver_init); 2670 + module_init(scmi_driver_init); 3028 2671 3029 2672 static void __exit scmi_driver_exit(void) 3030 2673 { ··· 3039 2682 scmi_system_unregister(); 3040 2683 scmi_powercap_unregister(); 3041 2684 3042 - scmi_bus_exit(); 3043 - 3044 2685 scmi_transports_exit(); 3045 2686 3046 2687 platform_driver_unregister(&scmi_driver); 2688 + 2689 + debugfs_remove_recursive(scmi_top_dentry); 3047 2690 } 3048 2691 module_exit(scmi_driver_exit); 3049 2692
+2 -4
drivers/firmware/arm_scmi/mailbox.c
··· 46 46 scmi_rx_callback(smbox->cinfo, shmem_read_header(smbox->shmem), NULL); 47 47 } 48 48 49 - static bool mailbox_chan_available(struct device *dev, int idx) 49 + static bool mailbox_chan_available(struct device_node *of_node, int idx) 50 50 { 51 - return !of_parse_phandle_with_args(dev->of_node, "mboxes", 51 + return !of_parse_phandle_with_args(of_node, "mboxes", 52 52 "#mbox-cells", idx, NULL); 53 53 } 54 54 ··· 119 119 smbox->chan = NULL; 120 120 smbox->cinfo = NULL; 121 121 } 122 - 123 - scmi_free_channel(cinfo, data, id); 124 122 125 123 return 0; 126 124 }
+2 -4
drivers/firmware/arm_scmi/optee.c
··· 328 328 return 0; 329 329 } 330 330 331 - static bool scmi_optee_chan_available(struct device *dev, int idx) 331 + static bool scmi_optee_chan_available(struct device_node *of_node, int idx) 332 332 { 333 333 u32 channel_id; 334 334 335 - return !of_property_read_u32_index(dev->of_node, "linaro,optee-channel-id", 335 + return !of_property_read_u32_index(of_node, "linaro,optee-channel-id", 336 336 idx, &channel_id); 337 337 } 338 338 ··· 480 480 481 481 cinfo->transport_info = NULL; 482 482 channel->cinfo = NULL; 483 - 484 - scmi_free_channel(cinfo, data, id); 485 483 486 484 return 0; 487 485 }
+7
drivers/firmware/arm_scmi/protocols.h
··· 115 115 * - SCMI_XFER_SENT_OK -> SCMI_XFER_RESP_OK [ -> SCMI_XFER_DRESP_OK ] 116 116 * - SCMI_XFER_SENT_OK -> SCMI_XFER_DRESP_OK 117 117 * (Missing synchronous response is assumed OK and ignored) 118 + * @flags: Optional flags associated with this xfer. 118 119 * @lock: A spinlock to protect state and busy fields. 119 120 * @priv: A pointer for transport private usage. 120 121 */ ··· 136 135 #define SCMI_XFER_RESP_OK 1 137 136 #define SCMI_XFER_DRESP_OK 2 138 137 int state; 138 + #define SCMI_XFER_FLAG_IS_RAW BIT(0) 139 + #define SCMI_XFER_IS_RAW(x) ((x)->flags & SCMI_XFER_FLAG_IS_RAW) 140 + #define SCMI_XFER_FLAG_CHAN_SET BIT(1) 141 + #define SCMI_XFER_IS_CHAN_SET(x) \ 142 + ((x)->flags & SCMI_XFER_FLAG_CHAN_SET) 143 + int flags; 139 144 /* A lock to protect state and busy fields */ 140 145 spinlock_t lock; 141 146 void *priv;
+1443
drivers/firmware/arm_scmi/raw_mode.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * System Control and Management Interface (SCMI) Raw mode support 4 + * 5 + * Copyright (C) 2022 ARM Ltd. 6 + */ 7 + /** 8 + * DOC: Theory of operation 9 + * 10 + * When enabled the SCMI Raw mode support exposes a userspace API which allows 11 + * to send and receive SCMI commands, replies and notifications from a user 12 + * application through injection and snooping of bare SCMI messages in binary 13 + * little-endian format. 14 + * 15 + * Such injected SCMI transactions will then be routed through the SCMI core 16 + * stack towards the SCMI backend server using whatever SCMI transport is 17 + * currently configured on the system under test. 18 + * 19 + * It is meant to help in running any sort of SCMI backend server testing, no 20 + * matter where the server is placed, as long as it is normally reachable via 21 + * the transport configured on the system. 22 + * 23 + * It is activated by a Kernel configuration option since it is NOT meant to 24 + * be used in production but only during development and in CI deployments. 25 + * 26 + * In order to avoid possible interferences between the SCMI Raw transactions 27 + * originated from a test-suite and the normal operations of the SCMI drivers, 28 + * when Raw mode is enabled, by default, all the regular SCMI drivers are 29 + * inhibited, unless CONFIG_ARM_SCMI_RAW_MODE_SUPPORT_COEX is enabled: in this 30 + * latter case the regular SCMI stack drivers will be loaded as usual and it is 31 + * up to the user of this interface to take care of manually inhibiting the 32 + * regular SCMI drivers in order to avoid interferences during the test runs. 33 + * 34 + * The exposed API is as follows. 35 + * 36 + * All SCMI Raw entries are rooted under a common top /raw debugfs top directory 37 + * which in turn is rooted under the corresponding underlying SCMI instance. 
38 + * 39 + * /sys/kernel/debug/scmi/ 40 + * `-- 0 41 + * |-- atomic_threshold_us 42 + * |-- instance_name 43 + * |-- raw 44 + * | |-- channels 45 + * | | |-- 0x10 46 + * | | | |-- message 47 + * | | | `-- message_async 48 + * | | `-- 0x13 49 + * | | |-- message 50 + * | | `-- message_async 51 + * | |-- errors 52 + * | |-- message 53 + * | |-- message_async 54 + * | |-- notification 55 + * | `-- reset 56 + * `-- transport 57 + * |-- is_atomic 58 + * |-- max_msg_size 59 + * |-- max_rx_timeout_ms 60 + * |-- rx_max_msg 61 + * |-- tx_max_msg 62 + * `-- type 63 + * 64 + * where: 65 + * 66 + * - errors: used to read back timed-out and unexpected replies 67 + * - message*: used to send sync/async commands and read back immediate and 68 + * delayed responses (if any) 69 + * - notification: used to read any notification being emitted by the system 70 + * (if previously enabled by the user app) 71 + * - reset: used to flush the queues of messages (of any kind) still pending 72 + * to be read; this is useful at test-suite start/stop to get 73 + * rid of any unread messages from the previous run. 74 + * 75 + * with the per-channel entries rooted at /channels being present only on a 76 + * system where multiple transport channels have been configured. 77 + * 78 + * Such per-channel entries can be used to explicitly choose a specific channel 79 + * for SCMI bare message injection, in contrast with the general entries above 80 + * where, instead, the selection of the proper channel to use is automatically 81 + * performed based on the protocol embedded in the injected message and on how 82 + * the transport is configured on the system. 83 + * 84 + * Note that other common general entries are available under transport/ to let 85 + * the user applications properly set their expectations in terms of 86 + * timeouts and message characteristics.
87 + * 88 + * Each write to the message* entries causes one command request to be built 89 + * and sent while the replies or delayed responses are read back from those same 90 + * entries one message at a time (receiving an EOF at each message boundary). 91 + * 92 + * The user application running the test is in charge of handling timeouts 93 + * on replies and properly choosing SCMI sequence numbers for the outgoing 94 + * requests (using the same sequence number is supported but discouraged). 95 + * 96 + * Injection of multiple in-flight requests is supported as long as the user 97 + * application uses properly distinct sequence numbers for concurrent requests 98 + * and takes care to properly manage all the related issues about concurrency 99 + * and command/reply pairing. Keep in mind that, in any case, the real level of 100 + * parallelism attainable in such a scenario depends on the characteristics 101 + * of the underlying transport being used. 102 + * 103 + * Since the SCMI core regular stack is partially used to deliver and collect 104 + * the messages, late replies arriving after timeouts and any other sort of 105 + * unexpected message can be identified by the SCMI core as usual and they will 106 + * be reported as messages under "errors" for later analysis.
107 + */ 108 + 109 + #include <linux/bitmap.h> 110 + #include <linux/debugfs.h> 111 + #include <linux/delay.h> 112 + #include <linux/device.h> 113 + #include <linux/export.h> 114 + #include <linux/io.h> 115 + #include <linux/kernel.h> 116 + #include <linux/fs.h> 117 + #include <linux/list.h> 118 + #include <linux/module.h> 119 + #include <linux/poll.h> 120 + #include <linux/of.h> 121 + #include <linux/slab.h> 122 + #include <linux/xarray.h> 123 + 124 + #include "common.h" 125 + 126 + #include "raw_mode.h" 127 + 128 + #include <trace/events/scmi.h> 129 + 130 + #define SCMI_XFER_RAW_MAX_RETRIES 10 131 + 132 + /** 133 + * struct scmi_raw_queue - Generic Raw queue descriptor 134 + * 135 + * @free_bufs: A freelist listhead used to keep unused raw buffers 136 + * @free_bufs_lock: Spinlock used to protect access to @free_bufs 137 + * @msg_q: A listhead to a queue of snooped messages waiting to be read out 138 + * @msg_q_lock: Spinlock used to protect access to @msg_q 139 + * @wq: A waitqueue used to wait and poll on the related @msg_q 140 + */ 141 + struct scmi_raw_queue { 142 + struct list_head free_bufs; 143 + /* Protect free_bufs[] lists */ 144 + spinlock_t free_bufs_lock; 145 + struct list_head msg_q; 146 + /* Protect msg_q[] lists */ 147 + spinlock_t msg_q_lock; 148 + wait_queue_head_t wq; 149 + }; 150 + 151 + /** 152 + * struct scmi_raw_mode_info - Structure holding SCMI Raw instance data 153 + * 154 + * @id: Sequential Raw instance ID.
155 + * @handle: Pointer to SCMI entity handle to use 156 + * @desc: Pointer to the transport descriptor to use 157 + * @tx_max_msg: Maximum number of concurrent TX in-flight messages 158 + * @q: An array of Raw queue descriptors 159 + * @chans_q: An XArray mapping optional additional per-channel queues 160 + * @free_waiters: Head of freelist for unused waiters 161 + * @free_mtx: A mutex to protect the waiters freelist 162 + * @active_waiters: Head of list for currently active and used waiters 163 + * @active_mtx: A mutex to protect the active waiters list 164 + * @waiters_work: A work descriptor to be used with the workqueue machinery 165 + * @wait_wq: A workqueue reference to the created workqueue 166 + * @dentry: Top debugfs root dentry for SCMI Raw 167 + * @gid: A group ID used for devres accounting 168 + * 169 + * Note that this descriptor is passed back to the core after SCMI Raw is 170 + * initialized as an opaque handle to be used by subsequent SCMI Raw call hooks. 171 + * 172 + */ 173 + struct scmi_raw_mode_info { 174 + unsigned int id; 175 + const struct scmi_handle *handle; 176 + const struct scmi_desc *desc; 177 + int tx_max_msg; 178 + struct scmi_raw_queue *q[SCMI_RAW_MAX_QUEUE]; 179 + struct xarray chans_q; 180 + struct list_head free_waiters; 181 + /* Protect free_waiters list */ 182 + struct mutex free_mtx; 183 + struct list_head active_waiters; 184 + /* Protect active_waiters list */ 185 + struct mutex active_mtx; 186 + struct work_struct waiters_work; 187 + struct workqueue_struct *wait_wq; 188 + struct dentry *dentry; 189 + void *gid; 190 + }; 191 + 192 + /** 193 + * struct scmi_xfer_raw_waiter - Structure to describe an xfer to be waited for 194 + * 195 + * @start_jiffies: The timestamp in jiffies of when this structure was queued.
196 + * @cinfo: A reference to the channel to use for this transaction 197 + * @xfer: A reference to the xfer to be waited for 198 + * @async_response: A completion to be, optionally, used for async waits: it 199 + * will be setup by @scmi_do_xfer_raw_start, if needed, to be 200 + * pointed at by xfer->async_done. 201 + * @node: A list node. 202 + */ 203 + struct scmi_xfer_raw_waiter { 204 + unsigned long start_jiffies; 205 + struct scmi_chan_info *cinfo; 206 + struct scmi_xfer *xfer; 207 + struct completion async_response; 208 + struct list_head node; 209 + }; 210 + 211 + /** 212 + * struct scmi_raw_buffer - Structure to hold a full SCMI message 213 + * 214 + * @max_len: The maximum allowed message size (header included) that can be 215 + * stored into @msg 216 + * @msg: A message buffer used to collect a full message grabbed from an xfer. 217 + * @node: A list node. 218 + */ 219 + struct scmi_raw_buffer { 220 + size_t max_len; 221 + struct scmi_msg msg; 222 + struct list_head node; 223 + }; 224 + 225 + /** 226 + * struct scmi_dbg_raw_data - Structure holding data needed by the debugfs 227 + * layer 228 + * 229 + * @chan_id: The preferred channel to use: if zero the channel is automatically 230 + * selected based on protocol. 231 + * @raw: A reference to the Raw instance. 232 + * @tx: A message buffer used to collect TX message on write. 233 + * @tx_size: The effective size of the TX message. 234 + * @tx_req_size: The final expected size of the complete TX message. 235 + * @rx: A message buffer to collect RX message on read. 236 + * @rx_size: The effective size of the RX message. 
237 + */ 238 + struct scmi_dbg_raw_data { 239 + u8 chan_id; 240 + struct scmi_raw_mode_info *raw; 241 + struct scmi_msg tx; 242 + size_t tx_size; 243 + size_t tx_req_size; 244 + struct scmi_msg rx; 245 + size_t rx_size; 246 + }; 247 + 248 + static struct scmi_raw_queue * 249 + scmi_raw_queue_select(struct scmi_raw_mode_info *raw, unsigned int idx, 250 + unsigned int chan_id) 251 + { 252 + if (!chan_id) 253 + return raw->q[idx]; 254 + 255 + return xa_load(&raw->chans_q, chan_id); 256 + } 257 + 258 + static struct scmi_raw_buffer *scmi_raw_buffer_get(struct scmi_raw_queue *q) 259 + { 260 + unsigned long flags; 261 + struct scmi_raw_buffer *rb = NULL; 262 + struct list_head *head = &q->free_bufs; 263 + 264 + spin_lock_irqsave(&q->free_bufs_lock, flags); 265 + if (!list_empty(head)) { 266 + rb = list_first_entry(head, struct scmi_raw_buffer, node); 267 + list_del_init(&rb->node); 268 + } 269 + spin_unlock_irqrestore(&q->free_bufs_lock, flags); 270 + 271 + return rb; 272 + } 273 + 274 + static void scmi_raw_buffer_put(struct scmi_raw_queue *q, 275 + struct scmi_raw_buffer *rb) 276 + { 277 + unsigned long flags; 278 + 279 + /* Reset to full buffer length */ 280 + rb->msg.len = rb->max_len; 281 + 282 + spin_lock_irqsave(&q->free_bufs_lock, flags); 283 + list_add_tail(&rb->node, &q->free_bufs); 284 + spin_unlock_irqrestore(&q->free_bufs_lock, flags); 285 + } 286 + 287 + static void scmi_raw_buffer_enqueue(struct scmi_raw_queue *q, 288 + struct scmi_raw_buffer *rb) 289 + { 290 + unsigned long flags; 291 + 292 + spin_lock_irqsave(&q->msg_q_lock, flags); 293 + list_add_tail(&rb->node, &q->msg_q); 294 + spin_unlock_irqrestore(&q->msg_q_lock, flags); 295 + 296 + wake_up_interruptible(&q->wq); 297 + } 298 + 299 + static struct scmi_raw_buffer* 300 + scmi_raw_buffer_dequeue_unlocked(struct scmi_raw_queue *q) 301 + { 302 + struct scmi_raw_buffer *rb = NULL; 303 + 304 + if (!list_empty(&q->msg_q)) { 305 + rb = list_first_entry(&q->msg_q, struct scmi_raw_buffer, node); 306 + 
list_del_init(&rb->node); 307 + } 308 + 309 + return rb; 310 + } 311 + 312 + static struct scmi_raw_buffer *scmi_raw_buffer_dequeue(struct scmi_raw_queue *q) 313 + { 314 + unsigned long flags; 315 + struct scmi_raw_buffer *rb; 316 + 317 + spin_lock_irqsave(&q->msg_q_lock, flags); 318 + rb = scmi_raw_buffer_dequeue_unlocked(q); 319 + spin_unlock_irqrestore(&q->msg_q_lock, flags); 320 + 321 + return rb; 322 + } 323 + 324 + static void scmi_raw_buffer_queue_flush(struct scmi_raw_queue *q) 325 + { 326 + struct scmi_raw_buffer *rb; 327 + 328 + do { 329 + rb = scmi_raw_buffer_dequeue(q); 330 + if (rb) 331 + scmi_raw_buffer_put(q, rb); 332 + } while (rb); 333 + } 334 + 335 + static struct scmi_xfer_raw_waiter * 336 + scmi_xfer_raw_waiter_get(struct scmi_raw_mode_info *raw, struct scmi_xfer *xfer, 337 + struct scmi_chan_info *cinfo, bool async) 338 + { 339 + struct scmi_xfer_raw_waiter *rw = NULL; 340 + 341 + mutex_lock(&raw->free_mtx); 342 + if (!list_empty(&raw->free_waiters)) { 343 + rw = list_first_entry(&raw->free_waiters, 344 + struct scmi_xfer_raw_waiter, node); 345 + list_del_init(&rw->node); 346 + 347 + if (async) { 348 + reinit_completion(&rw->async_response); 349 + xfer->async_done = &rw->async_response; 350 + } 351 + 352 + rw->cinfo = cinfo; 353 + rw->xfer = xfer; 354 + } 355 + mutex_unlock(&raw->free_mtx); 356 + 357 + return rw; 358 + } 359 + 360 + static void scmi_xfer_raw_waiter_put(struct scmi_raw_mode_info *raw, 361 + struct scmi_xfer_raw_waiter *rw) 362 + { 363 + if (rw->xfer) { 364 + rw->xfer->async_done = NULL; 365 + rw->xfer = NULL; 366 + } 367 + 368 + mutex_lock(&raw->free_mtx); 369 + list_add_tail(&rw->node, &raw->free_waiters); 370 + mutex_unlock(&raw->free_mtx); 371 + } 372 + 373 + static void scmi_xfer_raw_waiter_enqueue(struct scmi_raw_mode_info *raw, 374 + struct scmi_xfer_raw_waiter *rw) 375 + { 376 + /* A timestamp for the deferred worker to know how much this has aged */ 377 + rw->start_jiffies = jiffies; 378 + 379 + 
trace_scmi_xfer_response_wait(rw->xfer->transfer_id, rw->xfer->hdr.id, 380 + rw->xfer->hdr.protocol_id, 381 + rw->xfer->hdr.seq, 382 + raw->desc->max_rx_timeout_ms, 383 + rw->xfer->hdr.poll_completion); 384 + 385 + mutex_lock(&raw->active_mtx); 386 + list_add_tail(&rw->node, &raw->active_waiters); 387 + mutex_unlock(&raw->active_mtx); 388 + 389 + /* kick waiter work */ 390 + queue_work(raw->wait_wq, &raw->waiters_work); 391 + } 392 + 393 + static struct scmi_xfer_raw_waiter * 394 + scmi_xfer_raw_waiter_dequeue(struct scmi_raw_mode_info *raw) 395 + { 396 + struct scmi_xfer_raw_waiter *rw = NULL; 397 + 398 + mutex_lock(&raw->active_mtx); 399 + if (!list_empty(&raw->active_waiters)) { 400 + rw = list_first_entry(&raw->active_waiters, 401 + struct scmi_xfer_raw_waiter, node); 402 + list_del_init(&rw->node); 403 + } 404 + mutex_unlock(&raw->active_mtx); 405 + 406 + return rw; 407 + } 408 + 409 + /** 410 + * scmi_xfer_raw_worker - Work function to wait for Raw xfers completions 411 + * 412 + * @work: A reference to the work. 413 + * 414 + * In SCMI Raw mode, once a user-provided injected SCMI message is sent, we 415 + * cannot wait to receive its response (if any) in the context of the injection 416 + * routines so as not to leave the userspace write syscall, which delivered the 417 + * SCMI message to send, pending till eventually a reply is received. 418 + * Userspace should and will poll/wait instead on the read syscalls which will 419 + * be in charge of reading a received reply (if any). 420 + * 421 + * Even though reply messages are collected and reported into the SCMI Raw layer 422 + * on the RX path, nonetheless we have to properly wait for their completion as 423 + * usual (and async_completion too if needed) in order to properly release the 424 + * xfer structure at the end: to do this out of the context of the write/send 425 + * these waiting jobs are delegated to this deferred worker. 
426 + * 427 + * Any sent xfer, to be waited for, is timestamped and queued for later 428 + * consumption by this worker: queue aging is accounted for while choosing a 429 + * timeout for the completion, BUT we do not really care here if we end up 430 + * accidentally waiting for a bit too long. 431 + */ 432 + static void scmi_xfer_raw_worker(struct work_struct *work) 433 + { 434 + struct scmi_raw_mode_info *raw; 435 + struct device *dev; 436 + unsigned long max_tmo; 437 + 438 + raw = container_of(work, struct scmi_raw_mode_info, waiters_work); 439 + dev = raw->handle->dev; 440 + max_tmo = msecs_to_jiffies(raw->desc->max_rx_timeout_ms); 441 + 442 + do { 443 + int ret = 0; 444 + unsigned int timeout_ms; 445 + unsigned long aging; 446 + struct scmi_xfer *xfer; 447 + struct scmi_xfer_raw_waiter *rw; 448 + struct scmi_chan_info *cinfo; 449 + 450 + rw = scmi_xfer_raw_waiter_dequeue(raw); 451 + if (!rw) 452 + return; 453 + 454 + cinfo = rw->cinfo; 455 + xfer = rw->xfer; 456 + /* 457 + * Waiters are queued by wait-deadline at the end, so some of 458 + * them could have been already expired when processed, BUT we 459 + * have to check the completion status anyway just in case a 460 + * virtually expired (aged) transaction was indeed completed 461 + * fine and we'll have to wait for the asynchronous part (if 462 + * any): for this reason a 1 ms timeout is used for already 463 + * expired/aged xfers. 464 + */ 465 + aging = jiffies - rw->start_jiffies; 466 + timeout_ms = max_tmo > aging ? 
467 + jiffies_to_msecs(max_tmo - aging) : 1; 468 + 469 + ret = scmi_xfer_raw_wait_for_message_response(cinfo, xfer, 470 + timeout_ms); 471 + if (!ret && xfer->hdr.status) 472 + ret = scmi_to_linux_errno(xfer->hdr.status); 473 + 474 + if (raw->desc->ops->mark_txdone) 475 + raw->desc->ops->mark_txdone(rw->cinfo, ret, xfer); 476 + 477 + trace_scmi_xfer_end(xfer->transfer_id, xfer->hdr.id, 478 + xfer->hdr.protocol_id, xfer->hdr.seq, ret); 479 + 480 + /* Wait also for an async delayed response if needed */ 481 + if (!ret && xfer->async_done) { 482 + unsigned long tmo = msecs_to_jiffies(SCMI_MAX_RESPONSE_TIMEOUT); 483 + 484 + if (!wait_for_completion_timeout(xfer->async_done, tmo)) 485 + dev_err(dev, 486 + "timed out in RAW delayed resp - HDR:%08X\n", 487 + pack_scmi_header(&xfer->hdr)); 488 + } 489 + 490 + /* Release waiter and xfer */ 491 + scmi_xfer_raw_put(raw->handle, xfer); 492 + scmi_xfer_raw_waiter_put(raw, rw); 493 + } while (1); 494 + } 495 + 496 + static void scmi_xfer_raw_reset(struct scmi_raw_mode_info *raw) 497 + { 498 + int i; 499 + 500 + dev_info(raw->handle->dev, "Resetting SCMI Raw stack.\n"); 501 + 502 + for (i = 0; i < SCMI_RAW_MAX_QUEUE; i++) 503 + scmi_raw_buffer_queue_flush(raw->q[i]); 504 + } 505 + 506 + /** 507 + * scmi_xfer_raw_get_init - A helper to build a valid xfer from the provided 508 + * bare SCMI message. 509 + * 510 + * @raw: A reference to the Raw instance. 511 + * @buf: A buffer containing the whole SCMI message to send (including the 512 + * header) in little-endian binary format. 513 + * @len: Length of the message in @buf. 514 + * @p: A pointer to return the initialized Raw xfer. 515 + * 516 + * After an xfer is picked from the TX pool and filled in with the message 517 + * content, the xfer is registered as pending with the core in the usual way 518 + * using the original sequence number provided by the user with the message.
519 + * 520 + * Note that, in case the testing user application is NOT using distinct 521 + * sequence-numbers between successive SCMI messages, such registration could 522 + * fail temporarily if the previous message, using the same sequence number, 523 + * had still not been released; in such a case we just wait and retry. 524 + * 525 + * Return: 0 on Success 526 + */ 527 + static int scmi_xfer_raw_get_init(struct scmi_raw_mode_info *raw, void *buf, 528 + size_t len, struct scmi_xfer **p) 529 + { 530 + u32 msg_hdr; 531 + size_t tx_size; 532 + struct scmi_xfer *xfer; 533 + int ret, retry = SCMI_XFER_RAW_MAX_RETRIES; 534 + struct device *dev = raw->handle->dev; 535 + 536 + if (!buf || len < sizeof(u32)) 537 + return -EINVAL; 538 + 539 + tx_size = len - sizeof(u32); 540 + /* Ensure we have sane transfer sizes */ 541 + if (tx_size > raw->desc->max_msg_size) 542 + return -ERANGE; 543 + 544 + xfer = scmi_xfer_raw_get(raw->handle); 545 + if (IS_ERR(xfer)) { 546 + dev_warn(dev, "RAW - Cannot get a free RAW xfer !\n"); 547 + return PTR_ERR(xfer); 548 + } 549 + 550 + /* Build xfer from the provided SCMI bare LE message */ 551 + msg_hdr = le32_to_cpu(*((__le32 *)buf)); 552 + unpack_scmi_header(msg_hdr, &xfer->hdr); 553 + xfer->hdr.seq = (u16)MSG_XTRACT_TOKEN(msg_hdr); 554 + /* Polling not supported */ 555 + xfer->hdr.poll_completion = false; 556 + xfer->hdr.status = SCMI_SUCCESS; 557 + xfer->tx.len = tx_size; 558 + xfer->rx.len = raw->desc->max_msg_size; 559 + /* Clear the whole TX buffer */ 560 + memset(xfer->tx.buf, 0x00, raw->desc->max_msg_size); 561 + if (xfer->tx.len) 562 + memcpy(xfer->tx.buf, (u8 *)buf + sizeof(msg_hdr), xfer->tx.len); 563 + *p = xfer; 564 + 565 + /* 566 + * In flight registration can temporarily fail in case of Raw messages 567 + * if the user injects messages without using monotonically increasing 568 + * sequence numbers since, in Raw mode, the xfer (and the token) is 569 + * finally released later by a deferred worker. Just retry for a while.
570 + */ 571 + do { 572 + ret = scmi_xfer_raw_inflight_register(raw->handle, xfer); 573 + if (ret) { 574 + dev_dbg(dev, 575 + "...retrying[%d] inflight registration\n", 576 + retry); 577 + msleep(raw->desc->max_rx_timeout_ms / 578 + SCMI_XFER_RAW_MAX_RETRIES); 579 + } 580 + } while (ret && --retry); 581 + 582 + if (ret) { 583 + dev_warn(dev, 584 + "RAW - Could NOT register xfer %d in-flight HDR:0x%08X\n", 585 + xfer->hdr.seq, msg_hdr); 586 + scmi_xfer_raw_put(raw->handle, xfer); 587 + } 588 + 589 + return ret; 590 + } 591 + 592 + /** 593 + * scmi_do_xfer_raw_start - A helper to send a valid raw xfer 594 + * 595 + * @raw: A reference to the Raw instance. 596 + * @xfer: The xfer to send 597 + * @chan_id: The channel ID to use; if zero the channel is automatically 598 + * selected based on the protocol used. 599 + * @async: A flag stating if an asynchronous command is required. 600 + * 601 + * This function sends a previously built raw xfer using an appropriate channel 602 + * and queues the related waiting work. 603 + * 604 + * Note that we need to know explicitly if the required command is meant to be 605 + * asynchronous in kind since we have to properly setup the waiter.
606 + * (and deducing this from the payload is weak and does not scale given there is 607 + * NOT a common header-flag stating if the command is asynchronous or not) 608 + * 609 + * Return: 0 on Success 610 + */ 611 + static int scmi_do_xfer_raw_start(struct scmi_raw_mode_info *raw, 612 + struct scmi_xfer *xfer, u8 chan_id, 613 + bool async) 614 + { 615 + int ret; 616 + struct scmi_chan_info *cinfo; 617 + struct scmi_xfer_raw_waiter *rw; 618 + struct device *dev = raw->handle->dev; 619 + 620 + if (!chan_id) 621 + chan_id = xfer->hdr.protocol_id; 622 + else 623 + xfer->flags |= SCMI_XFER_FLAG_CHAN_SET; 624 + 625 + cinfo = scmi_xfer_raw_channel_get(raw->handle, chan_id); 626 + if (IS_ERR(cinfo)) 627 + return PTR_ERR(cinfo); 628 + 629 + rw = scmi_xfer_raw_waiter_get(raw, xfer, cinfo, async); 630 + if (!rw) { 631 + dev_warn(dev, "RAW - Cannot get a free waiter !\n"); 632 + return -ENOMEM; 633 + } 634 + 635 + /* True ONLY if also supported by transport. */ 636 + if (is_polling_enabled(cinfo, raw->desc)) 637 + xfer->hdr.poll_completion = true; 638 + 639 + reinit_completion(&xfer->done); 640 + /* Make sure xfer state update is visible before sending */ 641 + smp_store_mb(xfer->state, SCMI_XFER_SENT_OK); 642 + 643 + trace_scmi_xfer_begin(xfer->transfer_id, xfer->hdr.id, 644 + xfer->hdr.protocol_id, xfer->hdr.seq, 645 + xfer->hdr.poll_completion); 646 + 647 + ret = raw->desc->ops->send_message(rw->cinfo, xfer); 648 + if (ret) { 649 + dev_err(dev, "Failed to send RAW message %d\n", ret); 650 + scmi_xfer_raw_waiter_put(raw, rw); 651 + return ret; 652 + } 653 + 654 + trace_scmi_msg_dump(raw->id, cinfo->id, xfer->hdr.protocol_id, 655 + xfer->hdr.id, "cmnd", xfer->hdr.seq, 656 + xfer->hdr.status, 657 + xfer->tx.buf, xfer->tx.len); 658 + 659 + scmi_xfer_raw_waiter_enqueue(raw, rw); 660 + 661 + return ret; 662 + } 663 + 664 + /** 665 + * scmi_raw_message_send - A helper to build and send an SCMI command using 666 + * the provided SCMI bare message buffer 667 + * 668 + * @raw: A
reference to the Raw instance. 669 + * @buf: A buffer containing the whole SCMI message to send (including the 670 + * header) in little-endian binary format. 671 + * @len: Length of the message in @buf. 672 + * @chan_id: The channel ID to use. 673 + * @async: A flag stating if an asynchronous command is required. 674 + * 675 + * Return: 0 on Success 676 + */ 677 + static int scmi_raw_message_send(struct scmi_raw_mode_info *raw, 678 + void *buf, size_t len, u8 chan_id, bool async) 679 + { 680 + int ret; 681 + struct scmi_xfer *xfer; 682 + 683 + ret = scmi_xfer_raw_get_init(raw, buf, len, &xfer); 684 + if (ret) 685 + return ret; 686 + 687 + ret = scmi_do_xfer_raw_start(raw, xfer, chan_id, async); 688 + if (ret) 689 + scmi_xfer_raw_put(raw->handle, xfer); 690 + 691 + return ret; 692 + } 693 + 694 + static struct scmi_raw_buffer * 695 + scmi_raw_message_dequeue(struct scmi_raw_queue *q, bool o_nonblock) 696 + { 697 + unsigned long flags; 698 + struct scmi_raw_buffer *rb; 699 + 700 + spin_lock_irqsave(&q->msg_q_lock, flags); 701 + while (list_empty(&q->msg_q)) { 702 + spin_unlock_irqrestore(&q->msg_q_lock, flags); 703 + 704 + if (o_nonblock) 705 + return ERR_PTR(-EAGAIN); 706 + 707 + if (wait_event_interruptible(q->wq, !list_empty(&q->msg_q))) 708 + return ERR_PTR(-ERESTARTSYS); 709 + 710 + spin_lock_irqsave(&q->msg_q_lock, flags); 711 + } 712 + 713 + rb = scmi_raw_buffer_dequeue_unlocked(q); 714 + 715 + spin_unlock_irqrestore(&q->msg_q_lock, flags); 716 + 717 + return rb; 718 + } 719 + 720 + /** 721 + * scmi_raw_message_receive - A helper to dequeue and report the next 722 + * available enqueued raw message payload that has been collected. 723 + * 724 + * @raw: A reference to the Raw instance. 725 + * @buf: A buffer to get hold of the whole SCMI message received and represented 726 + * in little-endian binary format. 727 + * @len: Length of @buf.
728 + * @size: The effective size of the message copied into @buf 729 + * @idx: The index of the queue to pick the next queued message from. 730 + * @chan_id: The channel ID to use. 731 + * @o_nonblock: A flag to request a non-blocking message dequeue. 732 + * 733 + * Return: 0 on Success 734 + */ 735 + static int scmi_raw_message_receive(struct scmi_raw_mode_info *raw, 736 + void *buf, size_t len, size_t *size, 737 + unsigned int idx, unsigned int chan_id, 738 + bool o_nonblock) 739 + { 740 + int ret = 0; 741 + struct scmi_raw_buffer *rb; 742 + struct scmi_raw_queue *q; 743 + 744 + q = scmi_raw_queue_select(raw, idx, chan_id); 745 + if (!q) 746 + return -ENODEV; 747 + 748 + rb = scmi_raw_message_dequeue(q, o_nonblock); 749 + if (IS_ERR(rb)) { 750 + dev_dbg(raw->handle->dev, "RAW - No message available!\n"); 751 + return PTR_ERR(rb); 752 + } 753 + 754 + if (rb->msg.len <= len) { 755 + memcpy(buf, rb->msg.buf, rb->msg.len); 756 + *size = rb->msg.len; 757 + } else { 758 + ret = -ENOSPC; 759 + } 760 + 761 + scmi_raw_buffer_put(q, rb); 762 + 763 + return ret; 764 + } 765 + 766 + /* SCMI Raw debugfs helpers */ 767 + 768 + static ssize_t scmi_dbg_raw_mode_common_read(struct file *filp, 769 + char __user *buf, 770 + size_t count, loff_t *ppos, 771 + unsigned int idx) 772 + { 773 + ssize_t cnt; 774 + struct scmi_dbg_raw_data *rd = filp->private_data; 775 + 776 + if (!rd->rx_size) { 777 + int ret; 778 + 779 + ret = scmi_raw_message_receive(rd->raw, rd->rx.buf, rd->rx.len, 780 + &rd->rx_size, idx, rd->chan_id, 781 + filp->f_flags & O_NONBLOCK); 782 + if (ret) { 783 + rd->rx_size = 0; 784 + return ret; 785 + } 786 + 787 + /* Reset any previous filepos change, including writes */ 788 + *ppos = 0; 789 + } else if (*ppos == rd->rx_size) { 790 + /* Return EOF once all the message has been read-out */ 791 + rd->rx_size = 0; 792 + return 0; 793 + } 794 + 795 + cnt = simple_read_from_buffer(buf, count, ppos, 796 + rd->rx.buf, rd->rx_size); 797 + 798 + return cnt; 799 + } 800 + 801 + 
static ssize_t scmi_dbg_raw_mode_common_write(struct file *filp, 802 + const char __user *buf, 803 + size_t count, loff_t *ppos, 804 + bool async) 805 + { 806 + int ret; 807 + struct scmi_dbg_raw_data *rd = filp->private_data; 808 + 809 + if (count > rd->tx.len - rd->tx_size) 810 + return -ENOSPC; 811 + 812 + /* On first write attempt @count carries the total full message size. */ 813 + if (!rd->tx_size) 814 + rd->tx_req_size = count; 815 + 816 + /* 817 + * Gather a full message, possibly across multiple interrupted writes, 818 + * before sending it with a single RAW xfer. 819 + */ 820 + if (rd->tx_size < rd->tx_req_size) { 821 + size_t cnt; 822 + 823 + cnt = simple_write_to_buffer(rd->tx.buf, rd->tx.len, ppos, 824 + buf, count); 825 + rd->tx_size += cnt; 826 + if (cnt < count) 827 + return cnt; 828 + } 829 + 830 + ret = scmi_raw_message_send(rd->raw, rd->tx.buf, rd->tx_size, 831 + rd->chan_id, async); 832 + 833 + /* Reset ppos for next message ... */ 834 + rd->tx_size = 0; 835 + *ppos = 0; 836 + 837 + return ret ?: count; 838 + } 839 + 840 + static __poll_t scmi_test_dbg_raw_common_poll(struct file *filp, 841 + struct poll_table_struct *wait, 842 + unsigned int idx) 843 + { 844 + unsigned long flags; 845 + struct scmi_dbg_raw_data *rd = filp->private_data; 846 + struct scmi_raw_queue *q; 847 + __poll_t mask = 0; 848 + 849 + q = scmi_raw_queue_select(rd->raw, idx, rd->chan_id); 850 + if (!q) 851 + return mask; 852 + 853 + poll_wait(filp, &q->wq, wait); 854 + 855 + spin_lock_irqsave(&q->msg_q_lock, flags); 856 + if (!list_empty(&q->msg_q)) 857 + mask = EPOLLIN | EPOLLRDNORM; 858 + spin_unlock_irqrestore(&q->msg_q_lock, flags); 859 + 860 + return mask; 861 + } 862 + 863 + static ssize_t scmi_dbg_raw_mode_message_read(struct file *filp, 864 + char __user *buf, 865 + size_t count, loff_t *ppos) 866 + { 867 + return scmi_dbg_raw_mode_common_read(filp, buf, count, ppos, 868 + SCMI_RAW_REPLY_QUEUE); 869 + } 870 + 871 + static ssize_t scmi_dbg_raw_mode_message_write(struct
file *filp, 872 + const char __user *buf, 873 + size_t count, loff_t *ppos) 874 + { 875 + return scmi_dbg_raw_mode_common_write(filp, buf, count, ppos, false); 876 + } 877 + 878 + static __poll_t scmi_dbg_raw_mode_message_poll(struct file *filp, 879 + struct poll_table_struct *wait) 880 + { 881 + return scmi_test_dbg_raw_common_poll(filp, wait, SCMI_RAW_REPLY_QUEUE); 882 + } 883 + 884 + static int scmi_dbg_raw_mode_open(struct inode *inode, struct file *filp) 885 + { 886 + u8 id; 887 + struct scmi_raw_mode_info *raw; 888 + struct scmi_dbg_raw_data *rd; 889 + const char *id_str = filp->f_path.dentry->d_parent->d_name.name; 890 + 891 + if (!inode->i_private) 892 + return -ENODEV; 893 + 894 + raw = inode->i_private; 895 + rd = kzalloc(sizeof(*rd), GFP_KERNEL); 896 + if (!rd) 897 + return -ENOMEM; 898 + 899 + rd->rx.len = raw->desc->max_msg_size + sizeof(u32); 900 + rd->rx.buf = kzalloc(rd->rx.len, GFP_KERNEL); 901 + if (!rd->rx.buf) { 902 + kfree(rd); 903 + return -ENOMEM; 904 + } 905 + 906 + rd->tx.len = raw->desc->max_msg_size + sizeof(u32); 907 + rd->tx.buf = kzalloc(rd->tx.len, GFP_KERNEL); 908 + if (!rd->tx.buf) { 909 + kfree(rd->rx.buf); 910 + kfree(rd); 911 + return -ENOMEM; 912 + } 913 + 914 + /* Grab channel ID from debugfs entry naming if any */ 915 + if (!kstrtou8(id_str, 16, &id)) 916 + rd->chan_id = id; 917 + 918 + rd->raw = raw; 919 + filp->private_data = rd; 920 + 921 + return 0; 922 + } 923 + 924 + static int scmi_dbg_raw_mode_release(struct inode *inode, struct file *filp) 925 + { 926 + struct scmi_dbg_raw_data *rd = filp->private_data; 927 + 928 + kfree(rd->rx.buf); 929 + kfree(rd->tx.buf); 930 + kfree(rd); 931 + 932 + return 0; 933 + } 934 + 935 + static ssize_t scmi_dbg_raw_mode_reset_write(struct file *filp, 936 + const char __user *buf, 937 + size_t count, loff_t *ppos) 938 + { 939 + struct scmi_dbg_raw_data *rd = filp->private_data; 940 + 941 + scmi_xfer_raw_reset(rd->raw); 942 + 943 + return count; 944 + } 945 + 946 + static const struct 
file_operations scmi_dbg_raw_mode_reset_fops = {
	.open = scmi_dbg_raw_mode_open,
	.release = scmi_dbg_raw_mode_release,
	.write = scmi_dbg_raw_mode_reset_write,
	.owner = THIS_MODULE,
};

static const struct file_operations scmi_dbg_raw_mode_message_fops = {
	.open = scmi_dbg_raw_mode_open,
	.release = scmi_dbg_raw_mode_release,
	.read = scmi_dbg_raw_mode_message_read,
	.write = scmi_dbg_raw_mode_message_write,
	.poll = scmi_dbg_raw_mode_message_poll,
	.owner = THIS_MODULE,
};

static ssize_t scmi_dbg_raw_mode_message_async_write(struct file *filp,
						     const char __user *buf,
						     size_t count, loff_t *ppos)
{
	return scmi_dbg_raw_mode_common_write(filp, buf, count, ppos, true);
}

static const struct file_operations scmi_dbg_raw_mode_message_async_fops = {
	.open = scmi_dbg_raw_mode_open,
	.release = scmi_dbg_raw_mode_release,
	.read = scmi_dbg_raw_mode_message_read,
	.write = scmi_dbg_raw_mode_message_async_write,
	.poll = scmi_dbg_raw_mode_message_poll,
	.owner = THIS_MODULE,
};

static ssize_t scmi_test_dbg_raw_mode_notif_read(struct file *filp,
						 char __user *buf,
						 size_t count, loff_t *ppos)
{
	return scmi_dbg_raw_mode_common_read(filp, buf, count, ppos,
					     SCMI_RAW_NOTIF_QUEUE);
}

static __poll_t
scmi_test_dbg_raw_mode_notif_poll(struct file *filp,
				  struct poll_table_struct *wait)
{
	return scmi_test_dbg_raw_common_poll(filp, wait, SCMI_RAW_NOTIF_QUEUE);
}

static const struct file_operations scmi_dbg_raw_mode_notification_fops = {
	.open = scmi_dbg_raw_mode_open,
	.release = scmi_dbg_raw_mode_release,
	.read = scmi_test_dbg_raw_mode_notif_read,
	.poll = scmi_test_dbg_raw_mode_notif_poll,
	.owner = THIS_MODULE,
};

static ssize_t scmi_test_dbg_raw_mode_errors_read(struct file *filp,
						  char __user *buf,
						  size_t count, loff_t *ppos)
{
	return scmi_dbg_raw_mode_common_read(filp, buf, count, ppos,
					     SCMI_RAW_ERRS_QUEUE);
}

static __poll_t
scmi_test_dbg_raw_mode_errors_poll(struct file *filp,
				   struct poll_table_struct *wait)
{
	return scmi_test_dbg_raw_common_poll(filp, wait, SCMI_RAW_ERRS_QUEUE);
}

static const struct file_operations scmi_dbg_raw_mode_errors_fops = {
	.open = scmi_dbg_raw_mode_open,
	.release = scmi_dbg_raw_mode_release,
	.read = scmi_test_dbg_raw_mode_errors_read,
	.poll = scmi_test_dbg_raw_mode_errors_poll,
	.owner = THIS_MODULE,
};

static struct scmi_raw_queue *
scmi_raw_queue_init(struct scmi_raw_mode_info *raw)
{
	int i;
	struct scmi_raw_buffer *rb;
	struct device *dev = raw->handle->dev;
	struct scmi_raw_queue *q;

	q = devm_kzalloc(dev, sizeof(*q), GFP_KERNEL);
	if (!q)
		return ERR_PTR(-ENOMEM);

	rb = devm_kcalloc(dev, raw->tx_max_msg, sizeof(*rb), GFP_KERNEL);
	if (!rb)
		return ERR_PTR(-ENOMEM);

	spin_lock_init(&q->free_bufs_lock);
	INIT_LIST_HEAD(&q->free_bufs);
	for (i = 0; i < raw->tx_max_msg; i++, rb++) {
		rb->max_len = raw->desc->max_msg_size + sizeof(u32);
		rb->msg.buf = devm_kzalloc(dev, rb->max_len, GFP_KERNEL);
		if (!rb->msg.buf)
			return ERR_PTR(-ENOMEM);
		scmi_raw_buffer_put(q, rb);
	}

	spin_lock_init(&q->msg_q_lock);
	INIT_LIST_HEAD(&q->msg_q);
	init_waitqueue_head(&q->wq);

	return q;
}

static int scmi_xfer_raw_worker_init(struct scmi_raw_mode_info *raw)
{
	int i;
	struct scmi_xfer_raw_waiter *rw;
	struct device *dev = raw->handle->dev;

	rw = devm_kcalloc(dev, raw->tx_max_msg, sizeof(*rw), GFP_KERNEL);
	if (!rw)
		return -ENOMEM;

	raw->wait_wq = alloc_workqueue("scmi-raw-wait-wq-%d",
				       WQ_UNBOUND | WQ_FREEZABLE |
				       WQ_HIGHPRI | WQ_SYSFS, 0, raw->id);
	if (!raw->wait_wq)
		return -ENOMEM;

	mutex_init(&raw->free_mtx);
	INIT_LIST_HEAD(&raw->free_waiters);
	mutex_init(&raw->active_mtx);
	INIT_LIST_HEAD(&raw->active_waiters);

	for (i = 0; i < raw->tx_max_msg; i++, rw++) {
		init_completion(&rw->async_response);
		scmi_xfer_raw_waiter_put(raw, rw);
	}
	INIT_WORK(&raw->waiters_work, scmi_xfer_raw_worker);

	return 0;
}

static int scmi_raw_mode_setup(struct scmi_raw_mode_info *raw,
			       u8 *channels, int num_chans)
{
	int ret, idx;
	void *gid;
	struct device *dev = raw->handle->dev;

	gid = devres_open_group(dev, NULL, GFP_KERNEL);
	if (!gid)
		return -ENOMEM;

	for (idx = 0; idx < SCMI_RAW_MAX_QUEUE; idx++) {
		raw->q[idx] = scmi_raw_queue_init(raw);
		if (IS_ERR(raw->q[idx])) {
			ret = PTR_ERR(raw->q[idx]);
			goto err;
		}
	}

	xa_init(&raw->chans_q);
	if (num_chans > 1) {
		int i;

		for (i = 0; i < num_chans; i++) {
			void *xret;
			struct scmi_raw_queue *q;

			q = scmi_raw_queue_init(raw);
			if (IS_ERR(q)) {
				ret = PTR_ERR(q);
				goto err_xa;
			}

			xret = xa_store(&raw->chans_q, channels[i], q,
					GFP_KERNEL);
			if (xa_err(xret)) {
				dev_err(dev,
					"Fail to allocate Raw queue 0x%02X\n",
					channels[i]);
				ret = xa_err(xret);
				goto err_xa;
			}
		}
	}

	ret = scmi_xfer_raw_worker_init(raw);
	if (ret)
		goto err_xa;

	devres_close_group(dev, gid);
	raw->gid = gid;

	return 0;

err_xa:
	xa_destroy(&raw->chans_q);
err:
	devres_release_group(dev, gid);
	return ret;
}

/**
 * scmi_raw_mode_init - Function to initialize the SCMI Raw stack
 *
 * @handle: Pointer to SCMI entity handle
 * @top_dentry: A reference to the top Raw debugfs dentry
 * @instance_id: The ID of the underlying SCMI platform instance represented by
 *		 this Raw instance
 * @channels: The list of the existing channels
 * @num_chans: The number of entries in @channels
 * @desc: Reference to the transport operations
 * @tx_max_msg: Max number of in-flight messages allowed by the transport
 *
 * This function prepares the SCMI Raw stack and creates the debugfs API.
 *
 * Return: An opaque handle to the Raw instance on Success, an ERR_PTR otherwise
 */
void *scmi_raw_mode_init(const struct scmi_handle *handle,
			 struct dentry *top_dentry, int instance_id,
			 u8 *channels, int num_chans,
			 const struct scmi_desc *desc, int tx_max_msg)
{
	int ret;
	struct scmi_raw_mode_info *raw;
	struct device *dev;

	if (!handle || !desc)
		return ERR_PTR(-EINVAL);

	dev = handle->dev;
	raw = devm_kzalloc(dev, sizeof(*raw), GFP_KERNEL);
	if (!raw)
		return ERR_PTR(-ENOMEM);

	raw->handle = handle;
	raw->desc = desc;
	raw->tx_max_msg = tx_max_msg;
	raw->id = instance_id;

	ret = scmi_raw_mode_setup(raw, channels, num_chans);
	if (ret) {
		devm_kfree(dev, raw);
		return ERR_PTR(ret);
	}

	raw->dentry = debugfs_create_dir("raw", top_dentry);

	debugfs_create_file("reset", 0200, raw->dentry, raw,
			    &scmi_dbg_raw_mode_reset_fops);

	debugfs_create_file("message", 0600, raw->dentry, raw,
			    &scmi_dbg_raw_mode_message_fops);

	debugfs_create_file("message_async", 0600, raw->dentry, raw,
			    &scmi_dbg_raw_mode_message_async_fops);

	debugfs_create_file("notification", 0400, raw->dentry, raw,
			    &scmi_dbg_raw_mode_notification_fops);

	debugfs_create_file("errors", 0400, raw->dentry, raw,
			    &scmi_dbg_raw_mode_errors_fops);

	/*
	 * Expose per-channel entries if multiple channels are available.
	 * Just ignore errors while setting up these interfaces since we
	 * have anyway already a working core Raw support.
	 */
	if (num_chans > 1) {
		int i;
		struct dentry *top_chans;

		top_chans = debugfs_create_dir("channels", raw->dentry);

		for (i = 0; i < num_chans; i++) {
			char cdir[8];
			struct dentry *chd;

			snprintf(cdir, 8, "0x%02X", channels[i]);
			chd = debugfs_create_dir(cdir, top_chans);

			debugfs_create_file("message", 0600, chd, raw,
					    &scmi_dbg_raw_mode_message_fops);

			debugfs_create_file("message_async", 0600, chd, raw,
					    &scmi_dbg_raw_mode_message_async_fops);
		}
	}

	dev_info(dev, "SCMI RAW Mode initialized for instance %d\n", raw->id);

	return raw;
}

/**
 * scmi_raw_mode_cleanup - Function to cleanup the SCMI Raw stack
 *
 * @r: An opaque handle to an initialized SCMI Raw instance
 */
void scmi_raw_mode_cleanup(void *r)
{
	struct scmi_raw_mode_info *raw = r;

	if (!raw)
		return;

	debugfs_remove_recursive(raw->dentry);

	cancel_work_sync(&raw->waiters_work);
	destroy_workqueue(raw->wait_wq);
	xa_destroy(&raw->chans_q);
}

static int scmi_xfer_raw_collect(void *msg, size_t *msg_len,
				 struct scmi_xfer *xfer)
{
	__le32 *m;
	size_t msg_size;

	if (!xfer || !msg || !msg_len)
		return -EINVAL;

	/* Account for hdr ... */
	msg_size = xfer->rx.len + sizeof(u32);
	/* ... and status if needed */
	if (xfer->hdr.type != MSG_TYPE_NOTIFICATION)
		msg_size += sizeof(u32);

	if (msg_size > *msg_len)
		return -ENOSPC;

	m = msg;
	*m = cpu_to_le32(pack_scmi_header(&xfer->hdr));
	if (xfer->hdr.type != MSG_TYPE_NOTIFICATION)
		*++m = cpu_to_le32(xfer->hdr.status);

	memcpy(++m, xfer->rx.buf, xfer->rx.len);

	*msg_len = msg_size;

	return 0;
}

/**
 * scmi_raw_message_report - Helper to report back valid responses/notifications
 * to raw message requests.
 *
 * @r: An opaque reference to the raw instance configuration
 * @xfer: The xfer containing the message to be reported
 * @idx: The index of the queue.
 * @chan_id: The channel ID to use.
 *
 * If Raw mode is enabled, this is called from the SCMI core on the regular RX
 * path to save and enqueue the response/notification payload carried by this
 * xfer into a dedicated scmi_raw_buffer for later consumption by the user.
 *
 * This way the caller can free the related xfer immediately afterwards and the
 * user can read back the raw message payload at its own pace (if ever) without
 * holding an xfer for too long.
 */
void scmi_raw_message_report(void *r, struct scmi_xfer *xfer,
			     unsigned int idx, unsigned int chan_id)
{
	int ret;
	unsigned long flags;
	struct scmi_raw_buffer *rb;
	struct device *dev;
	struct scmi_raw_queue *q;
	struct scmi_raw_mode_info *raw = r;

	if (!raw || (idx == SCMI_RAW_REPLY_QUEUE && !SCMI_XFER_IS_RAW(xfer)))
		return;

	dev = raw->handle->dev;
	q = scmi_raw_queue_select(raw, idx,
				  SCMI_XFER_IS_CHAN_SET(xfer) ? chan_id : 0);

	/*
	 * Grab the msg_q_lock upfront to avoid a possible race between
	 * realizing the free list was empty and effectively picking the next
	 * buffer to use from the oldest one enqueued and still unread on this
	 * msg_q.
	 *
	 * Note that nowhere else these locks are taken together, so no risk of
	 * deadlocks due to inversion.
	 */
	spin_lock_irqsave(&q->msg_q_lock, flags);
	rb = scmi_raw_buffer_get(q);
	if (!rb) {
		/*
		 * Immediate and delayed replies to previously injected Raw
		 * commands MUST be read back from userspace to free the
		 * buffers: if this is not happening something is seriously
		 * broken and must be fixed at the application level: complain
		 * loudly.
		 */
		if (idx == SCMI_RAW_REPLY_QUEUE) {
			spin_unlock_irqrestore(&q->msg_q_lock, flags);
			dev_warn(dev,
				 "RAW[%d] - Buffers exhausted. Dropping report.\n",
				 idx);
			return;
		}

		/*
		 * Notifications and errors queues are instead handled in a
		 * circular manner: unread old buffers are just overwritten by
		 * newer ones.
		 *
		 * The main reason for this is that notifications originated
		 * by Raw requests cannot be distinguished from normal ones, so
		 * the Raw buffers queues risk being flooded and depleted by
		 * notifications if mistakenly left enabled or when in
		 * coexistence mode.
		 */
		rb = scmi_raw_buffer_dequeue_unlocked(q);
		if (WARN_ON(!rb)) {
			spin_unlock_irqrestore(&q->msg_q_lock, flags);
			return;
		}

		/* Reset to full buffer length */
		rb->msg.len = rb->max_len;

		dev_warn_once(dev,
			      "RAW[%d] - Buffers exhausted. Re-using oldest.\n",
			      idx);
	}
	spin_unlock_irqrestore(&q->msg_q_lock, flags);

	ret = scmi_xfer_raw_collect(rb->msg.buf, &rb->msg.len, xfer);
	if (ret) {
		dev_warn(dev, "RAW - Cannot collect xfer into buffer !\n");
		scmi_raw_buffer_put(q, rb);
		return;
	}

	scmi_raw_buffer_enqueue(q, rb);
}

static void scmi_xfer_raw_fill(struct scmi_raw_mode_info *raw,
			       struct scmi_chan_info *cinfo,
			       struct scmi_xfer *xfer, u32 msg_hdr)
{
	/* Unpack received HDR as it is */
	unpack_scmi_header(msg_hdr, &xfer->hdr);
	xfer->hdr.seq = MSG_XTRACT_TOKEN(msg_hdr);

	memset(xfer->rx.buf, 0x00, xfer->rx.len);

	raw->desc->ops->fetch_response(cinfo, xfer);
}

/**
 * scmi_raw_error_report - Helper to report back timed-out or generally
 * unexpected replies.
 *
 * @r: An opaque reference to the raw instance configuration
 * @cinfo: A reference to the channel to use to retrieve the broken xfer
 * @msg_hdr: The SCMI message header of the message to fetch and report
 * @priv: Any private data related to the xfer.
 *
 * If Raw mode is enabled, this is called from the SCMI core on the RX path in
 * case of errors to save and enqueue the bad message payload carried by the
 * message that has just been received.
 *
 * Note that we have to manually fetch any available payload into a temporary
 * xfer to be able to save and enqueue the message, since the regular RX error
 * path which had called this would have not fetched the message payload having
 * classified it as an error.
 */
void scmi_raw_error_report(void *r, struct scmi_chan_info *cinfo,
			   u32 msg_hdr, void *priv)
{
	struct scmi_xfer xfer;
	struct scmi_raw_mode_info *raw = r;

	if (!raw)
		return;

	xfer.rx.len = raw->desc->max_msg_size;
	xfer.rx.buf = kzalloc(xfer.rx.len, GFP_ATOMIC);
	if (!xfer.rx.buf) {
		dev_info(raw->handle->dev,
			 "Cannot report Raw error for HDR:0x%X - ENOMEM\n",
			 msg_hdr);
		return;
	}

	/* Any transport-provided priv must be passed back down to transport */
	if (priv)
		/* Ensure priv is visible */
		smp_store_mb(xfer.priv, priv);

	scmi_xfer_raw_fill(raw, cinfo, &xfer, msg_hdr);
	scmi_raw_message_report(raw, &xfer, SCMI_RAW_ERRS_QUEUE, 0);

	kfree(xfer.rx.buf);
}
drivers/firmware/arm_scmi/raw_mode.h (+31)
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * System Control and Management Interface (SCMI) Message Protocol
 * Raw mode support header.
 *
 * Copyright (C) 2022 ARM Ltd.
 */
#ifndef _SCMI_RAW_MODE_H
#define _SCMI_RAW_MODE_H

#include "common.h"

enum {
	SCMI_RAW_REPLY_QUEUE,
	SCMI_RAW_NOTIF_QUEUE,
	SCMI_RAW_ERRS_QUEUE,
	SCMI_RAW_MAX_QUEUE
};

void *scmi_raw_mode_init(const struct scmi_handle *handle,
			 struct dentry *top_dentry, int instance_id,
			 u8 *channels, int num_chans,
			 const struct scmi_desc *desc, int tx_max_msg);
void scmi_raw_mode_cleanup(void *raw);

void scmi_raw_message_report(void *raw, struct scmi_xfer *xfer,
			     unsigned int idx, unsigned int chan_id);
void scmi_raw_error_report(void *raw, struct scmi_chan_info *cinfo,
			   u32 msg_hdr, void *priv);

#endif /* _SCMI_RAW_MODE_H */
drivers/firmware/arm_scmi/shmem.c (+6 -3)
 void shmem_fetch_response(struct scmi_shared_mem __iomem *shmem,
			   struct scmi_xfer *xfer)
 {
+	size_t len = ioread32(&shmem->length);
+
 	xfer->hdr.status = ioread32(shmem->msg_payload);
 	/* Skip the length of header and status in shmem area i.e 8 bytes */
-	xfer->rx.len = min_t(size_t, xfer->rx.len,
-			     ioread32(&shmem->length) - 8);
+	xfer->rx.len = min_t(size_t, xfer->rx.len, len > 8 ? len - 8 : 0);
 
 	/* Take a copy to the rx buffer.. */
 	memcpy_fromio(xfer->rx.buf, shmem->msg_payload + 4, xfer->rx.len);
···
 void shmem_fetch_notification(struct scmi_shared_mem __iomem *shmem,
			       size_t max_len, struct scmi_xfer *xfer)
 {
+	size_t len = ioread32(&shmem->length);
+
 	/* Skip only the length of header in shmem area i.e 4 bytes */
-	xfer->rx.len = min_t(size_t, max_len, ioread32(&shmem->length) - 4);
+	xfer->rx.len = min_t(size_t, max_len, len > 4 ? len - 4 : 0);
 
 	/* Take a copy to the rx buffer.. */
 	memcpy_fromio(xfer->rx.buf, shmem->msg_payload, xfer->rx.len);
drivers/firmware/arm_scmi/smc.c (+2 -4)
 	return IRQ_HANDLED;
 }
 
-static bool smc_chan_available(struct device *dev, int idx)
+static bool smc_chan_available(struct device_node *of_node, int idx)
 {
-	struct device_node *np = of_parse_phandle(dev->of_node, "shmem", 0);
+	struct device_node *np = of_parse_phandle(of_node, "shmem", 0);
 	if (!np)
 		return false;
···
 	cinfo->transport_info = NULL;
 	scmi_info->cinfo = NULL;
-
-	scmi_free_channel(cinfo, data, id);
 
 	return 0;
 }
drivers/firmware/arm_scmi/virtio.c (+7 -4)
 }
 
 	vioch->shutdown_done = &vioch_shutdown_done;
-	virtio_break_device(vioch->vqueue->vdev);
 	if (!vioch->is_rx && vioch->deferred_tx_wq)
 		/* Cannot be kicked anymore after this...*/
 		vioch->deferred_tx_wq = NULL;
···
 	return 0;
 }
 
-static bool virtio_chan_available(struct device *dev, int idx)
+static bool virtio_chan_available(struct device_node *of_node, int idx)
 {
 	struct scmi_vio_channel *channels, *vioch = NULL;
···
 	struct scmi_chan_info *cinfo = p;
 	struct scmi_vio_channel *vioch = cinfo->transport_info;
 
+	/*
+	 * Break device to inhibit further traffic flowing while shutting down
+	 * the channels: doing it later holding vioch->lock creates unsafe
+	 * locking dependency chains as reported by LOCKDEP.
+	 */
+	virtio_break_device(vioch->vqueue->vdev);
 	scmi_vio_channel_cleanup_sync(vioch);
-
-	scmi_free_channel(cinfo, data, id);
 
 	return 0;
 }
fs/debugfs/file.c (+1)
 
 	return ret;
 }
+EXPORT_SYMBOL_GPL(debugfs_create_str);
 
 static ssize_t debugfs_write_file_str(struct file *file, const char __user *user_buf,
				       size_t count, loff_t *ppos)
include/linux/scmi_protocol.h (-5)
 
 #define to_scmi_dev(d) container_of(d, struct scmi_device, dev)
 
-struct scmi_device *
-scmi_device_create(struct device_node *np, struct device *parent, int protocol,
-		   const char *name);
-void scmi_device_destroy(struct scmi_device *scmi_dev);
-
 struct scmi_device_id {
 	u8 protocol_id;
 	const char *name;
include/trace/events/scmi.h (+12 -6)
 );
 
 TRACE_EVENT(scmi_msg_dump,
-	TP_PROTO(u8 protocol_id, u8 msg_id, unsigned char *tag, u16 seq,
-		 int status, void *buf, size_t len),
-	TP_ARGS(protocol_id, msg_id, tag, seq, status, buf, len),
+	TP_PROTO(int id, u8 channel_id, u8 protocol_id, u8 msg_id,
+		 unsigned char *tag, u16 seq, int status,
+		 void *buf, size_t len),
+	TP_ARGS(id, channel_id, protocol_id, msg_id, tag, seq, status,
+		buf, len),
 
 	TP_STRUCT__entry(
+		__field(int, id)
+		__field(u8, channel_id)
 		__field(u8, protocol_id)
 		__field(u8, msg_id)
 		__array(char, tag, 5)
···
 	),
 
 	TP_fast_assign(
+		__entry->id = id;
+		__entry->channel_id = channel_id;
 		__entry->protocol_id = protocol_id;
 		__entry->msg_id = msg_id;
 		strscpy(__entry->tag, tag, 5);
···
 		memcpy(__get_dynamic_array(cmd), buf, __entry->len);
 	),
 
-	TP_printk("pt=%02X t=%s msg_id=%02X seq=%04X s=%d pyld=%s",
-		  __entry->protocol_id, __entry->tag, __entry->msg_id,
-		  __entry->seq, __entry->status,
+	TP_printk("id=%d ch=%02X pt=%02X t=%s msg_id=%02X seq=%04X s=%d pyld=%s",
+		  __entry->id, __entry->channel_id, __entry->protocol_id,
+		  __entry->tag, __entry->msg_id, __entry->seq, __entry->status,
 		  __print_hex_str(__get_dynamic_array(cmd), __entry->len))
 );
 #endif /* _TRACE_SCMI_H */