+Qualcomm Resource Power Manager (RPM) over SMD
+
+This driver is used to interface with the Resource Power Manager (RPM) found in
+various Qualcomm platforms. The RPM allows each component in the system to vote
+for the state of system resources, such as clocks, regulators and bus
+frequencies.
+
+- compatible:
+	Usage: required
+	Value type: <string>
+	Definition: must be one of:
+		    "qcom,rpm-msm8974"
+
+- qcom,smd-channels:
+	Usage: required
+	Value type: <stringlist>
+	Definition: Shared Memory channel used for communication with the RPM
+
+= SUBDEVICES
+
+The RPM exposes resources to its subnodes. The bindings below specify the set
+of valid subnodes that can operate on these resources.
+
+== Regulators
+
+Regulator nodes are identified by their compatible:
+
+- compatible:
+	Usage: required
+	Value type: <string>
+	Definition: must be one of:
+		    "qcom,rpm-pm8841-regulators"
+		    "qcom,rpm-pm8941-regulators"
+
+- vdd_s1-supply:
+- vdd_s2-supply:
+- vdd_s3-supply:
+- vdd_s4-supply:
+- vdd_s5-supply:
+- vdd_s6-supply:
+- vdd_s7-supply:
+- vdd_s8-supply:
+	Usage: optional (pm8841 only)
+	Value type: <phandle>
+	Definition: reference to regulator supplying the input pin, as
+		    described in the data sheet
+
+- vdd_s1-supply:
+- vdd_s2-supply:
+- vdd_s3-supply:
+- vdd_l1_l3-supply:
+- vdd_l2_lvs1_2_3-supply:
+- vdd_l4_l11-supply:
+- vdd_l5_l7-supply:
+- vdd_l6_l12_l14_l15-supply:
+- vdd_l8_l16_l18_l19-supply:
+- vdd_l9_l10_l17_l22-supply:
+- vdd_l13_l20_l23_l24-supply:
+- vdd_l21-supply:
+- vin_5vs-supply:
+	Usage: optional (pm8941 only)
+	Value type: <phandle>
+	Definition: reference to regulator supplying the input pin, as
+		    described in the data sheet
+
+The regulator node houses sub-nodes for each regulator within the device. Each
+sub-node is identified using the node's name, with valid values listed for each
+of the PMICs below.
+
+pm8841:
+	s1, s2, s3, s4, s5, s6, s7, s8
+
+pm8941:
+	s1, s2, s3, s4, l1, l2, l3, l4, l5, l6, l7, l8, l9, l10, l11, l12, l13,
+	l14, l15, l16, l17, l18, l19, l20, l21, l22, l23, l24, lvs1, lvs2,
+	lvs3, 5vs1, 5vs2
+
+The content of each sub-node is defined by the standard binding for regulators -
+see regulator.txt.
+
+= EXAMPLE
+
+	smd {
+		compatible = "qcom,smd";
+
+		rpm {
+			interrupts = <0 168 1>;
+			qcom,ipc = <&apcs 8 0>;
+			qcom,smd-edge = <15>;
+
+			rpm_requests {
+				compatible = "qcom,rpm-msm8974";
+				qcom,smd-channels = "rpm_requests";
+
+				pm8941-regulators {
+					compatible = "qcom,rpm-pm8941-regulators";
+					vdd_l13_l20_l23_l24-supply = <&pm8941_boost>;
+
+					pm8941_s3: s3 {
+						regulator-min-microvolt = <1800000>;
+						regulator-max-microvolt = <1800000>;
+					};
+
+					pm8941_boost: s4 {
+						regulator-min-microvolt = <5000000>;
+						regulator-max-microvolt = <5000000>;
+					};
+
+					pm8941_l20: l20 {
+						regulator-min-microvolt = <2950000>;
+						regulator-max-microvolt = <2950000>;
+					};
+				};
+			};
+		};
+	};
+
+Qualcomm Shared Memory Driver (SMD) binding
+
+This binding describes the Qualcomm Shared Memory Driver, a FIFO-based
+communication channel for sending data between the various subsystems in
+Qualcomm platforms.
+
+- compatible:
+	Usage: required
+	Value type: <stringlist>
+	Definition: must be "qcom,smd"
+
+= EDGES
+
+Each subnode of the SMD node represents a remote subsystem or a remote
+processor of some sort - or in SMD language an "edge". The names of the edges
+are not important.
+The edge is described by the following properties:
+
+- interrupts:
+	Usage: required
+	Value type: <prop-encoded-array>
+	Definition: should specify the IRQ used by the remote processor to
+		    signal this processor about communication related updates
+
+- qcom,ipc:
+	Usage: required
+	Value type: <prop-encoded-array>
+	Definition: three entries specifying the outgoing ipc bit used for
+		    signaling the remote processor:
+		    - phandle to a syscon node representing the apcs registers
+		    - u32 representing offset to the register within the syscon
+		    - u32 representing the ipc bit within the register
+
+- qcom,smd-edge:
+	Usage: required
+	Value type: <u32>
+	Definition: the identifier of the remote processor in the smd channel
+		    allocation table
+
+= SMD DEVICES
+
+In turn, subnodes of the "edges" represent devices tied to SMD channels on that
+"edge". The names of the devices are not important. The properties of these
+nodes are defined by the individual bindings for the SMD devices - but must
+contain the following property:
+
+- qcom,smd-channels:
+	Usage: required
+	Value type: <stringlist>
+	Definition: a list of channels tied to this device, used for matching
+		    the device to channels
+
+= EXAMPLE
+
+The following example represents an smd node, with one edge representing the
+"rpm" subsystem. For the "rpm" subsystem we have a device tied to the
+"rpm_requests" channel.
+
+	apcs: syscon@f9011000 {
+		compatible = "syscon";
+		reg = <0xf9011000 0x1000>;
+	};
+
+	smd {
+		compatible = "qcom,smd";
+
+		rpm {
+			interrupts = <0 168 1>;
+			qcom,ipc = <&apcs 8 0>;
+			qcom,smd-edge = <15>;
+
+			rpm_requests {
+				compatible = "qcom,rpm-msm8974";
+				qcom,smd-channels = "rpm_requests";
+
+				...
+			};
+		};
+	};
drivers/soc/qcom/Kconfig
 config QCOM_PM
 	bool "Qualcomm Power Management"
 	depends on ARCH_QCOM && !ARM64
+	select QCOM_SCM
 	help
 	  QCOM Platform specific power driver to manage cores and L2 low power
 	  modes. It interfaces with various system drivers to put the cores in
 	  low power modes.
+
+config QCOM_SMD
+	tristate "Qualcomm Shared Memory Driver (SMD)"
+	depends on QCOM_SMEM
+	help
+	  Say y here to enable support for the Qualcomm Shared Memory Driver
+	  providing communication channels to remote processors in Qualcomm
+	  platforms.
+
+config QCOM_SMD_RPM
+	tristate "Qualcomm Resource Power Manager (RPM) over SMD"
+	depends on QCOM_SMD && OF
+	help
+	  If you say yes to this option, support will be included for the
+	  Resource Power Manager system found in the Qualcomm 8974 based
+	  devices.
+
+	  This is required to access many regulators, clocks and bus
+	  frequencies controlled by the RPM on these devices.
+
+	  Say M here if you want to include support for the Qualcomm RPM as a
+	  module. This will build a module called "qcom-smd-rpm".
+
+config QCOM_SMEM
+	tristate "Qualcomm Shared Memory Manager (SMEM)"
+	depends on ARCH_QCOM
+	help
+	  Say y here to enable support for the Qualcomm Shared Memory Manager.
+	  The driver provides an interface to items in a heap shared among all
+	  processors in a Qualcomm platform.
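Putting the dependency chain above together, a kernel configuration fragment that enables the stack (a sketch, assuming an ARCH_QCOM kernel) would look like this; building QCOM_SMD_RPM as a module produces qcom-smd-rpm.ko as noted in the help text:

```
# SMEM is the shared heap both SMD and the allocation table live in,
# and QCOM_SMD depends on it; QCOM_SMD_RPM sits on top of QCOM_SMD.
CONFIG_QCOM_SMEM=y
CONFIG_QCOM_SMD=y
CONFIG_QCOM_SMD_RPM=m
```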
+/*
+ * Copyright (c) 2015, Sony Mobile Communications AB.
+ * Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/of_platform.h>
+#include <linux/io.h>
+#include <linux/interrupt.h>
+
+#include <linux/soc/qcom/smd.h>
+#include <linux/soc/qcom/smd-rpm.h>
+
+#define RPM_REQUEST_TIMEOUT	(5 * HZ)
+
+/**
+ * struct qcom_smd_rpm - state of the rpm device driver
+ * @rpm_channel: reference to the smd channel
+ * @ack: completion for acks
+ * @lock: mutual exclusion around the send/complete pair
+ * @ack_status: result of the rpm request
+ */
+struct qcom_smd_rpm {
+	struct qcom_smd_channel *rpm_channel;
+
+	struct completion ack;
+	struct mutex lock;
+	int ack_status;
+};
+
+/**
+ * struct qcom_rpm_header - header for all rpm requests and responses
+ * @service_type: identifier of the service
+ * @length: length of the payload
+ */
+struct qcom_rpm_header {
+	u32 service_type;
+	u32 length;
+};
+
+/**
+ * struct qcom_rpm_request - request message to the rpm
+ * @msg_id: identifier of the outgoing message
+ * @flags: active/sleep state flags
+ * @type: resource type
+ * @id: resource id
+ * @data_len: length of the payload following this header
+ */
+struct qcom_rpm_request {
+	u32 msg_id;
+	u32 flags;
+	u32 type;
+	u32 id;
+	u32 data_len;
+};
+
+/**
+ * struct qcom_rpm_message - response message from the rpm
+ * @msg_type: indicator of the type of message
+ * @length: the size of this message, including the message header
+ * @msg_id: message id
+ * @message: textual message from the rpm
+ *
+ * Multiple of these messages can be stacked in an rpm message.
+ */
+struct qcom_rpm_message {
+	u32 msg_type;
+	u32 length;
+	union {
+		u32 msg_id;
+		u8 message[0];
+	};
+};
+
+#define RPM_SERVICE_TYPE_REQUEST	0x00716572 /* "req\0" */
+
+#define RPM_MSG_TYPE_ERR		0x00727265 /* "err\0" */
+#define RPM_MSG_TYPE_MSG_ID		0x2367736d /* "msg#" */
+
+/**
+ * qcom_rpm_smd_write - write @buf to @type:@id
+ * @rpm: rpm handle
+ * @state: active/sleep state of the request
+ * @type: resource type
+ * @id: resource identifier
+ * @buf: the data to be written
+ * @count: number of bytes in @buf
+ */
+int qcom_rpm_smd_write(struct qcom_smd_rpm *rpm,
+		       int state,
+		       u32 type, u32 id,
+		       void *buf,
+		       size_t count)
+{
+	static unsigned msg_id = 1;
+	int left;
+	int ret;
+
+	struct {
+		struct qcom_rpm_header hdr;
+		struct qcom_rpm_request req;
+		u8 payload[count];
+	} pkt;
+
+	/* SMD packets to the RPM may not exceed 256 bytes */
+	if (WARN_ON(sizeof(pkt) >= 256))
+		return -EINVAL;
+
+	mutex_lock(&rpm->lock);
+
+	pkt.hdr.service_type = RPM_SERVICE_TYPE_REQUEST;
+	pkt.hdr.length = sizeof(struct qcom_rpm_request) + count;
+
+	pkt.req.msg_id = msg_id++;
+	pkt.req.flags = BIT(state);
+	pkt.req.type = type;
+	pkt.req.id = id;
+	pkt.req.data_len = count;
+	memcpy(pkt.payload, buf, count);
+
+	ret = qcom_smd_send(rpm->rpm_channel, &pkt, sizeof(pkt));
+	if (ret)
+		goto out;
+
+	left = wait_for_completion_timeout(&rpm->ack, RPM_REQUEST_TIMEOUT);
+	if (!left)
+		ret = -ETIMEDOUT;
+	else
+		ret = rpm->ack_status;
+
+out:
+	mutex_unlock(&rpm->lock);
+	return ret;
+}
+EXPORT_SYMBOL(qcom_rpm_smd_write);
+
+static int qcom_smd_rpm_callback(struct qcom_smd_device *qsdev,
+				 const void *data,
+				 size_t count)
+{
+	const struct qcom_rpm_header *hdr = data;
+	const struct qcom_rpm_message *msg;
+	struct qcom_smd_rpm *rpm = dev_get_drvdata(&qsdev->dev);
+	const u8 *buf = data + sizeof(struct qcom_rpm_header);
+	const u8 *end = buf + hdr->length;
+	char msgbuf[32];
+	int status = 0;
+	u32 len;
+
+	if (hdr->service_type != RPM_SERVICE_TYPE_REQUEST ||
+	    hdr->length < sizeof(struct qcom_rpm_message)) {
+		dev_err(&qsdev->dev, "invalid request\n");
+		return 0;
+	}
+
+	while (buf < end) {
+		msg = (struct qcom_rpm_message *)buf;
+		switch (msg->msg_type) {
+		case RPM_MSG_TYPE_MSG_ID:
+			break;
+		case RPM_MSG_TYPE_ERR:
+			len = min_t(u32, ALIGN(msg->length, 4), sizeof(msgbuf));
+			memcpy_fromio(msgbuf, msg->message, len);
+			msgbuf[len - 1] = 0;
+
+			if (!strcmp(msgbuf, "resource does not exist"))
+				status = -ENXIO;
+			else
+				status = -EINVAL;
+			break;
+		}
+
+		buf = PTR_ALIGN(buf + 2 * sizeof(u32) + msg->length, 4);
+	}
+
+	rpm->ack_status = status;
+	complete(&rpm->ack);
+	return 0;
+}
+
+static int qcom_smd_rpm_probe(struct qcom_smd_device *sdev)
+{
+	struct qcom_smd_rpm *rpm;
+
+	rpm = devm_kzalloc(&sdev->dev, sizeof(*rpm), GFP_KERNEL);
+	if (!rpm)
+		return -ENOMEM;
+
+	mutex_init(&rpm->lock);
+	init_completion(&rpm->ack);
+
+	rpm->rpm_channel = sdev->channel;
+
+	dev_set_drvdata(&sdev->dev, rpm);
+
+	return of_platform_populate(sdev->dev.of_node, NULL, NULL, &sdev->dev);
+}
+
+static void qcom_smd_rpm_remove(struct qcom_smd_device *sdev)
+{
+	of_platform_depopulate(&sdev->dev);
+}
+
+static const struct of_device_id qcom_smd_rpm_of_match[] = {
+	{ .compatible = "qcom,rpm-msm8974" },
+	{}
+};
+MODULE_DEVICE_TABLE(of, qcom_smd_rpm_of_match);
+
+static struct qcom_smd_driver qcom_smd_rpm_driver = {
+	.probe = qcom_smd_rpm_probe,
+	.remove = qcom_smd_rpm_remove,
+	.callback = qcom_smd_rpm_callback,
+	.driver  = {
+		.name  = "qcom_smd_rpm",
+		.owner = THIS_MODULE,
+		.of_match_table = qcom_smd_rpm_of_match,
+	},
+};
+
+static int __init qcom_smd_rpm_init(void)
+{
+	return qcom_smd_driver_register(&qcom_smd_rpm_driver);
+}
+arch_initcall(qcom_smd_rpm_init);
+
+static void __exit qcom_smd_rpm_exit(void)
+{
+	qcom_smd_driver_unregister(&qcom_smd_rpm_driver);
+}
+module_exit(qcom_smd_rpm_exit);
+
+MODULE_AUTHOR("Bjorn Andersson <bjorn.andersson@sonymobile.com>");
+MODULE_DESCRIPTION("Qualcomm SMD backed RPM driver");
+MODULE_LICENSE("GPL v2");
drivers/soc/qcom/smd.c
+/*
+ * Copyright (c) 2015, Sony Mobile Communications AB.
+ * Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/mfd/syscon.h>
+#include <linux/module.h>
+#include <linux/of_irq.h>
+#include <linux/of_platform.h>
+#include <linux/platform_device.h>
+#include <linux/regmap.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/soc/qcom/smd.h>
+#include <linux/soc/qcom/smem.h>
+#include <linux/wait.h>
+
+/*
+ * The Qualcomm Shared Memory communication solution provides point-to-point
+ * channels for clients to send and receive streaming or packet based data.
+ *
+ * Each channel consists of a control item (channel info) and a ring buffer
+ * pair. The channel info carries information related to channel state, flow
+ * control and the offsets within the ring buffer.
+ *
+ * All allocated channels are listed in an allocation table, identifying the
+ * pair of items by name, type and remote processor.
+ *
+ * Upon creating a new channel the remote processor allocates channel info and
+ * ring buffer items from the smem heap and populates the allocation table. An
+ * interrupt is sent to the other end of the channel and a scan for new
+ * channels should be done. A channel never goes away, it will only change
+ * state.
+ *
+ * The remote processor signals its intent to bring up the communication
+ * channel by setting the state of its end of the channel to "opening" and
+ * sends out an interrupt. We detect this change and register a smd device to
+ * consume the channel. Upon finding a consumer we finish the handshake and the
+ * channel is up.
+ *
+ * Upon closing a channel, the remote processor will update the state of its
+ * end of the channel and signal us, we will then unregister any attached
+ * device and close our end of the channel.
+ *
+ * Devices attached to a channel can use the qcom_smd_send function to push
+ * data to the channel, this is done by copying the data into the tx ring
+ * buffer, updating the pointers in the channel info and signaling the remote
+ * processor.
+ *
+ * The remote processor does the equivalent when it transfers data and upon
+ * receiving the interrupt we check the channel info for new data and deliver
+ * it to the attached device. If the device is not ready to receive the data
+ * we leave it in the ring buffer for now.
+ */
+
+struct smd_channel_info;
+struct smd_channel_info_word;
+
+#define SMD_ALLOC_TBL_COUNT	2
+#define SMD_ALLOC_TBL_SIZE	64
+
+/*
+ * This lists the various smem heap items relevant for the allocation table and
+ * smd channel entries.
+ */
+static const struct {
+	unsigned alloc_tbl_id;
+	unsigned info_base_id;
+	unsigned fifo_base_id;
+} smem_items[SMD_ALLOC_TBL_COUNT] = {
+	{
+		.alloc_tbl_id = 13,
+		.info_base_id = 14,
+		.fifo_base_id = 338
+	},
+	{
+		.alloc_tbl_id = 14,
+		.info_base_id = 266,
+		.fifo_base_id = 202,
+	},
+};
+
+/**
+ * struct qcom_smd_edge - representing a remote processor
+ * @smd: handle to qcom_smd
+ * @of_node: of_node handle for information related to this edge
+ * @edge_id: identifier of this edge
+ * @irq: interrupt for signals on this edge
+ * @ipc_regmap: regmap handle holding the outgoing ipc register
+ * @ipc_offset: offset within @ipc_regmap of the register for ipc
+ * @ipc_bit: bit in the register at @ipc_offset of @ipc_regmap
+ * @channels: list of all channels detected on this edge
+ * @channels_lock: guard for modifications of @channels
+ * @allocated: array of bitmaps representing already allocated channels
+ * @need_rescan: flag that the @work needs to scan smem for new channels
+ * @smem_available: last available amount of smem triggering a channel scan
+ * @work: work item for edge housekeeping
+ */
+struct qcom_smd_edge {
+	struct qcom_smd *smd;
+	struct device_node *of_node;
+	unsigned edge_id;
+
+	int irq;
+
+	struct regmap *ipc_regmap;
+	int ipc_offset;
+	int ipc_bit;
+
+	struct list_head channels;
+	spinlock_t channels_lock;
+
+	DECLARE_BITMAP(allocated[SMD_ALLOC_TBL_COUNT], SMD_ALLOC_TBL_SIZE);
+
+	bool need_rescan;
+	unsigned smem_available;
+
+	struct work_struct work;
+};
+
+/*
+ * SMD channel states.
+ */
+enum smd_channel_state {
+	SMD_CHANNEL_CLOSED,
+	SMD_CHANNEL_OPENING,
+	SMD_CHANNEL_OPENED,
+	SMD_CHANNEL_FLUSHING,
+	SMD_CHANNEL_CLOSING,
+	SMD_CHANNEL_RESET,
+	SMD_CHANNEL_RESET_OPENING
+};
+
+/**
+ * struct qcom_smd_channel - smd channel struct
+ * @edge: qcom_smd_edge this channel is living on
+ * @qsdev: reference to an associated smd client device
+ * @name: name of the channel
+ * @state: local state of the channel
+ * @remote_state: remote state of the channel
+ * @tx_info: byte aligned outgoing channel info
+ * @rx_info: byte aligned incoming channel info
+ * @tx_info_word: word aligned outgoing channel info
+ * @rx_info_word: word aligned incoming channel info
+ * @tx_lock: lock to make writes to the channel mutually exclusive
+ * @fblockread_event: wakeup event tied to tx fBLOCKREADINTR
+ * @tx_fifo: pointer to the outgoing ring buffer
+ * @rx_fifo: pointer to the incoming ring buffer
+ * @fifo_size: size of each ring buffer
+ * @bounce_buffer: bounce buffer for reading wrapped packets
+ * @cb: callback function registered for this channel
+ * @recv_lock: guard for rx info modifications and cb pointer
+ * @pkt_size: size of the currently handled packet
+ * @list: list entry for @channels in qcom_smd_edge
+ */
+struct qcom_smd_channel {
+	struct qcom_smd_edge *edge;
+
+	struct qcom_smd_device *qsdev;
+
+	char *name;
+	enum smd_channel_state state;
+	enum smd_channel_state remote_state;
+
+	struct smd_channel_info *tx_info;
+	struct smd_channel_info *rx_info;
+
+	struct smd_channel_info_word *tx_info_word;
+	struct smd_channel_info_word *rx_info_word;
+
+	struct mutex tx_lock;
+	wait_queue_head_t fblockread_event;
+
+	void *tx_fifo;
+	void *rx_fifo;
+	int fifo_size;
+
+	void *bounce_buffer;
+	int (*cb)(struct qcom_smd_device *, const void *, size_t);
+
+	spinlock_t recv_lock;
+
+	int pkt_size;
+
+	struct list_head list;
+};
+
+/**
+ * struct qcom_smd - smd struct
+ * @dev: device struct
+ * @num_edges: number of entries in @edges
+ * @edges: array of edges to be handled
+ */
+struct qcom_smd {
+	struct device *dev;
+
+	unsigned num_edges;
+	struct qcom_smd_edge edges[0];
+};
+
+/*
+ * Format of the smd_info smem items, for byte aligned channels.
+ */
+struct smd_channel_info {
+	u32 state;
+	u8  fDSR;
+	u8  fCTS;
+	u8  fCD;
+	u8  fRI;
+	u8  fHEAD;
+	u8  fTAIL;
+	u8  fSTATE;
+	u8  fBLOCKREADINTR;
+	u32 tail;
+	u32 head;
+};
+
+/*
+ * Format of the smd_info smem items, for word aligned channels.
+ */
+struct smd_channel_info_word {
+	u32 state;
+	u32 fDSR;
+	u32 fCTS;
+	u32 fCD;
+	u32 fRI;
+	u32 fHEAD;
+	u32 fTAIL;
+	u32 fSTATE;
+	u32 fBLOCKREADINTR;
+	u32 tail;
+	u32 head;
+};
+
+#define GET_RX_CHANNEL_INFO(channel, param) \
+	(channel->rx_info_word ? \
+		channel->rx_info_word->param : \
+		channel->rx_info->param)
+
+#define SET_RX_CHANNEL_INFO(channel, param, value) \
+	(channel->rx_info_word ? \
+		(channel->rx_info_word->param = value) : \
+		(channel->rx_info->param = value))
+
+#define GET_TX_CHANNEL_INFO(channel, param) \
+	(channel->tx_info_word ? \
+		channel->tx_info_word->param : \
+		channel->tx_info->param)
+
+#define SET_TX_CHANNEL_INFO(channel, param, value) \
+	(channel->tx_info_word ? \
+		(channel->tx_info_word->param = value) : \
+		(channel->tx_info->param = value))
+
+/**
+ * struct qcom_smd_alloc_entry - channel allocation entry
+ * @name: channel name
+ * @cid: channel index
+ * @flags: channel flags and edge id
+ * @ref_count: reference count of the channel
+ */
+struct qcom_smd_alloc_entry {
+	u8 name[20];
+	u32 cid;
+	u32 flags;
+	u32 ref_count;
+} __packed;
+
+#define SMD_CHANNEL_FLAGS_EDGE_MASK	0xff
+#define SMD_CHANNEL_FLAGS_STREAM	BIT(8)
+#define SMD_CHANNEL_FLAGS_PACKET	BIT(9)
+
+/*
+ * Each smd packet contains a 20 byte header, with the first 4 being the length
+ * of the packet.
+ */
+#define SMD_PACKET_HEADER_LEN	20
+
+/*
+ * Signal the remote processor associated with 'channel'.
+ */
+static void qcom_smd_signal_channel(struct qcom_smd_channel *channel)
+{
+	struct qcom_smd_edge *edge = channel->edge;
+
+	regmap_write(edge->ipc_regmap, edge->ipc_offset, BIT(edge->ipc_bit));
+}
+
+/*
+ * Initialize the tx channel info
+ */
+static void qcom_smd_channel_reset(struct qcom_smd_channel *channel)
+{
+	SET_TX_CHANNEL_INFO(channel, state, SMD_CHANNEL_CLOSED);
+	SET_TX_CHANNEL_INFO(channel, fDSR, 0);
+	SET_TX_CHANNEL_INFO(channel, fCTS, 0);
+	SET_TX_CHANNEL_INFO(channel, fCD, 0);
+	SET_TX_CHANNEL_INFO(channel, fRI, 0);
+	SET_TX_CHANNEL_INFO(channel, fHEAD, 0);
+	SET_TX_CHANNEL_INFO(channel, fTAIL, 0);
+	SET_TX_CHANNEL_INFO(channel, fSTATE, 1);
+	SET_TX_CHANNEL_INFO(channel, fBLOCKREADINTR, 0);
+	SET_TX_CHANNEL_INFO(channel, head, 0);
+	SET_TX_CHANNEL_INFO(channel, tail, 0);
+
+	qcom_smd_signal_channel(channel);
+
+	channel->state = SMD_CHANNEL_CLOSED;
+	channel->pkt_size = 0;
+}
+
+/*
+ * Calculate the amount of data available in the rx fifo
+ */
+static size_t qcom_smd_channel_get_rx_avail(struct qcom_smd_channel *channel)
+{
+	unsigned head;
+	unsigned tail;
+
+	head = GET_RX_CHANNEL_INFO(channel, head);
+	tail = GET_RX_CHANNEL_INFO(channel, tail);
+
+	return (head - tail) & (channel->fifo_size - 1);
+}
+
+/*
+ * Set tx channel state and inform the remote processor
+ */
+static void qcom_smd_channel_set_state(struct qcom_smd_channel *channel,
+				       int state)
+{
+	struct qcom_smd_edge *edge = channel->edge;
+	bool is_open = state == SMD_CHANNEL_OPENED;
+
+	if (channel->state == state)
+		return;
+
+	dev_dbg(edge->smd->dev, "set_state(%s, %d)\n", channel->name, state);
+
+	SET_TX_CHANNEL_INFO(channel, fDSR, is_open);
+	SET_TX_CHANNEL_INFO(channel, fCTS, is_open);
+	SET_TX_CHANNEL_INFO(channel, fCD, is_open);
+
+	SET_TX_CHANNEL_INFO(channel, state, state);
+	SET_TX_CHANNEL_INFO(channel, fSTATE, 1);
+
+	channel->state = state;
+	qcom_smd_signal_channel(channel);
+}
+
+/*
+ * Copy count bytes of data using 32bit accesses, if that's required.
+ */
+static void smd_copy_to_fifo(void __iomem *_dst,
+			     const void *_src,
+			     size_t count,
+			     bool word_aligned)
+{
+	u32 *dst = (u32 *)_dst;
+	u32 *src = (u32 *)_src;
+
+	if (word_aligned) {
+		count /= sizeof(u32);
+		while (count--)
+			writel_relaxed(*src++, dst++);
+	} else {
+		memcpy_toio(_dst, _src, count);
+	}
+}
+
+/*
+ * Copy count bytes of data using 32bit accesses, if that is required.
+ */
+static void smd_copy_from_fifo(void *_dst,
+			       const void __iomem *_src,
+			       size_t count,
+			       bool word_aligned)
+{
+	u32 *dst = (u32 *)_dst;
+	u32 *src = (u32 *)_src;
+
+	if (word_aligned) {
+		count /= sizeof(u32);
+		while (count--)
+			*dst++ = readl_relaxed(src++);
+	} else {
+		memcpy_fromio(_dst, _src, count);
+	}
+}
+
+/*
+ * Read count bytes of data from the rx fifo into buf, but don't advance the
+ * tail.
+ */
+static size_t qcom_smd_channel_peek(struct qcom_smd_channel *channel,
+				    void *buf, size_t count)
+{
+	bool word_aligned;
+	unsigned tail;
+	size_t len;
+
+	word_aligned = channel->rx_info_word != NULL;
+	tail = GET_RX_CHANNEL_INFO(channel, tail);
+
+	len = min_t(size_t, count, channel->fifo_size - tail);
+	if (len) {
+		smd_copy_from_fifo(buf,
+				   channel->rx_fifo + tail,
+				   len,
+				   word_aligned);
+	}
+
+	if (len != count) {
+		smd_copy_from_fifo(buf + len,
+				   channel->rx_fifo,
+				   count - len,
+				   word_aligned);
+	}
+
+	return count;
+}
+
+/*
+ * Advance the rx tail by count bytes.
+ */
+static void qcom_smd_channel_advance(struct qcom_smd_channel *channel,
+				     size_t count)
+{
+	unsigned tail;
+
+	tail = GET_RX_CHANNEL_INFO(channel, tail);
+	tail += count;
+	tail &= (channel->fifo_size - 1);
+	SET_RX_CHANNEL_INFO(channel, tail, tail);
+}
+
+/*
+ * Read out a single packet from the rx fifo and deliver it to the device
+ */
+static int qcom_smd_channel_recv_single(struct qcom_smd_channel *channel)
+{
+	struct qcom_smd_device *qsdev = channel->qsdev;
+	unsigned tail;
+	size_t len;
+	void *ptr;
+	int ret;
+
+	if (!channel->cb)
+		return 0;
+
+	tail = GET_RX_CHANNEL_INFO(channel, tail);
+
+	/* Use bounce buffer if the data wraps */
+	if (tail + channel->pkt_size >= channel->fifo_size) {
+		ptr = channel->bounce_buffer;
+		len = qcom_smd_channel_peek(channel, ptr, channel->pkt_size);
+	} else {
+		ptr = channel->rx_fifo + tail;
+		len = channel->pkt_size;
+	}
+
+	ret = channel->cb(qsdev, ptr, len);
+	if (ret < 0)
+		return ret;
+
+	/* Only forward the tail if the client consumed the data */
+	qcom_smd_channel_advance(channel, len);
+
+	channel->pkt_size = 0;
+
+	return 0;
+}
+
+/*
+ * Per channel interrupt handling
+ */
+static bool qcom_smd_channel_intr(struct qcom_smd_channel *channel)
+{
+	bool need_state_scan = false;
+	int remote_state;
+	u32 pktlen;
+	int avail;
+	int ret;
+
+	/* Handle state changes */
+	remote_state = GET_RX_CHANNEL_INFO(channel, state);
+	if (remote_state != channel->remote_state) {
+		channel->remote_state = remote_state;
+		need_state_scan = true;
+	}
+	/* Indicate that we have seen any state change */
+	SET_RX_CHANNEL_INFO(channel, fSTATE, 0);
+
+	/* Signal waiting qcom_smd_send() about the interrupt */
+	if (!GET_TX_CHANNEL_INFO(channel, fBLOCKREADINTR))
+		wake_up_interruptible(&channel->fblockread_event);
+
+	/* Don't consume any data until we've opened the channel */
+	if (channel->state != SMD_CHANNEL_OPENED)
+		goto out;
+
+	/* Indicate that we've seen the new data */
+	SET_RX_CHANNEL_INFO(channel, fHEAD, 0);
+
+	/* Consume data */
+	for (;;) {
+		avail = qcom_smd_channel_get_rx_avail(channel);
+
+		if (!channel->pkt_size && avail >= SMD_PACKET_HEADER_LEN) {
+			qcom_smd_channel_peek(channel, &pktlen, sizeof(pktlen));
+			qcom_smd_channel_advance(channel, SMD_PACKET_HEADER_LEN);
+			channel->pkt_size = pktlen;
+		} else if (channel->pkt_size && avail >= channel->pkt_size) {
+			ret = qcom_smd_channel_recv_single(channel);
+			if (ret)
+				break;
+		} else {
+			break;
+		}
+	}
+
+	/* Indicate that we have seen and updated tail */
+	SET_RX_CHANNEL_INFO(channel, fTAIL, 1);
+
+	/* Signal the remote that we've consumed the data (if requested) */
+	if (!GET_RX_CHANNEL_INFO(channel, fBLOCKREADINTR)) {
+		/* Ensure ordering of channel info updates */
+		wmb();
+
+		qcom_smd_signal_channel(channel);
+	}
+
+out:
+	return need_state_scan;
+}
+
+/*
+ * The edge interrupts are triggered by the remote processor on state changes,
+ * channel info updates or when new channels are created.
+ */
+static irqreturn_t qcom_smd_edge_intr(int irq, void *data)
+{
+	struct qcom_smd_edge *edge = data;
+	struct qcom_smd_channel *channel;
+	unsigned available;
+	bool kick_worker = false;
+
+	/*
+	 * Handle state changes or data on each of the channels on this edge
+	 */
+	spin_lock(&edge->channels_lock);
+	list_for_each_entry(channel, &edge->channels, list) {
+		spin_lock(&channel->recv_lock);
+		kick_worker |= qcom_smd_channel_intr(channel);
+		spin_unlock(&channel->recv_lock);
+	}
+	spin_unlock(&edge->channels_lock);
+
+	/*
+	 * Creating a new channel requires allocating an smem entry, so we only
+	 * have to scan if the amount of available space in smem has changed
+	 * since the last scan.
+	 */
+	available = qcom_smem_get_free_space(edge->edge_id);
+	if (available != edge->smem_available) {
+		edge->smem_available = available;
+		edge->need_rescan = true;
+		kick_worker = true;
+	}
+
+	if (kick_worker)
+		schedule_work(&edge->work);
+
+	return IRQ_HANDLED;
+}
+
+/*
+ * Delivers any outstanding packets in the rx fifo, can be used after probe of
+ * the clients to deliver any packets that weren't delivered before the client
+ * was set up.
+ */
+static void qcom_smd_channel_resume(struct qcom_smd_channel *channel)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&channel->recv_lock, flags);
+	qcom_smd_channel_intr(channel);
+	spin_unlock_irqrestore(&channel->recv_lock, flags);
+}
+
+/*
+ * Calculate how much space is available in the tx fifo.
+ */
+static size_t qcom_smd_get_tx_avail(struct qcom_smd_channel *channel)
+{
+	unsigned head;
+	unsigned tail;
+	unsigned mask = channel->fifo_size - 1;
+
+	head = GET_TX_CHANNEL_INFO(channel, head);
+	tail = GET_TX_CHANNEL_INFO(channel, tail);
+
+	return mask - ((head - tail) & mask);
+}
+
+/*
+ * Write count bytes of data into channel, possibly wrapping in the ring buffer
+ */
+static int qcom_smd_write_fifo(struct qcom_smd_channel *channel,
+			       const void *data,
+			       size_t count)
+{
+	bool word_aligned;
+	unsigned head;
+	size_t len;
+
+	word_aligned = channel->tx_info_word != NULL;
+	head = GET_TX_CHANNEL_INFO(channel, head);
+
+	len = min_t(size_t, count, channel->fifo_size - head);
+	if (len) {
+		smd_copy_to_fifo(channel->tx_fifo + head,
+				 data,
+				 len,
+				 word_aligned);
+	}
+
+	if (len != count) {
+		smd_copy_to_fifo(channel->tx_fifo,
+				 data + len,
+				 count - len,
+				 word_aligned);
+	}
+
+	head += count;
+	head &= (channel->fifo_size - 1);
+	SET_TX_CHANNEL_INFO(channel, head, head);
+
+	return count;
+}
+
+/**
+ * qcom_smd_send - write data to smd channel
+ * @channel: channel handle
+ * @data: buffer of data to write
+ * @len: number of bytes to write
+ *
+ * This is a blocking write of len bytes into the channel's tx ring buffer,
+ * which then signals the remote end. It will sleep until there is enough space
+ * available in the tx buffer, utilizing the fBLOCKREADINTR signaling mechanism
+ * to avoid polling.
+ */
+int qcom_smd_send(struct qcom_smd_channel *channel, const void *data, int len)
+{
+	u32 hdr[5] = {len,};
+	int tlen = sizeof(hdr) + len;
+	int ret;
+
+	/* Word aligned channels only accept word size aligned data */
+	if (channel->rx_info_word != NULL && len % 4)
+		return -EINVAL;
+
+	ret = mutex_lock_interruptible(&channel->tx_lock);
+	if (ret)
+		return ret;
+
+	while (qcom_smd_get_tx_avail(channel) < tlen) {
+		if (channel->state != SMD_CHANNEL_OPENED) {
+			ret = -EPIPE;
+			goto out;
+		}
+
+		SET_TX_CHANNEL_INFO(channel, fBLOCKREADINTR, 1);
+
+		ret = wait_event_interruptible(channel->fblockread_event,
+				       qcom_smd_get_tx_avail(channel) >= tlen ||
+				       channel->state != SMD_CHANNEL_OPENED);
+		if (ret)
+			goto out;
+
+		SET_TX_CHANNEL_INFO(channel, fBLOCKREADINTR, 0);
+	}
+
+	SET_TX_CHANNEL_INFO(channel, fTAIL, 0);
+
+	qcom_smd_write_fifo(channel, hdr, sizeof(hdr));
+	qcom_smd_write_fifo(channel, data, len);
+
+	SET_TX_CHANNEL_INFO(channel, fHEAD, 1);
+
+	/* Ensure ordering of channel info updates */
+	wmb();
+
+	qcom_smd_signal_channel(channel);
+
+out:
+	mutex_unlock(&channel->tx_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL(qcom_smd_send);
+
+static struct qcom_smd_device *to_smd_device(struct device *dev)
+{
+	return container_of(dev, struct qcom_smd_device, dev);
+}
+
+static struct qcom_smd_driver *to_smd_driver(struct device *dev)
+{
+	struct qcom_smd_device *qsdev = to_smd_device(dev);
+
+	return container_of(qsdev->dev.driver, struct qcom_smd_driver, driver);
+}
+
+static int qcom_smd_dev_match(struct
device *dev, struct device_driver *drv)727727+{728728+ return of_driver_match_device(dev, drv);729729+}730730+731731+/*732732+ * Probe the smd client.733733+ *734734+ * The remote side have indicated that it want the channel to be opened, so735735+ * complete the state handshake and probe our client driver.736736+ */737737+static int qcom_smd_dev_probe(struct device *dev)738738+{739739+ struct qcom_smd_device *qsdev = to_smd_device(dev);740740+ struct qcom_smd_driver *qsdrv = to_smd_driver(dev);741741+ struct qcom_smd_channel *channel = qsdev->channel;742742+ size_t bb_size;743743+ int ret;744744+745745+ /*746746+ * Packets are maximum 4k, but reduce if the fifo is smaller747747+ */748748+ bb_size = min(channel->fifo_size, SZ_4K);749749+ channel->bounce_buffer = kmalloc(bb_size, GFP_KERNEL);750750+ if (!channel->bounce_buffer)751751+ return -ENOMEM;752752+753753+ channel->cb = qsdrv->callback;754754+755755+ qcom_smd_channel_set_state(channel, SMD_CHANNEL_OPENING);756756+757757+ qcom_smd_channel_set_state(channel, SMD_CHANNEL_OPENED);758758+759759+ ret = qsdrv->probe(qsdev);760760+ if (ret)761761+ goto err;762762+763763+ qcom_smd_channel_resume(channel);764764+765765+ return 0;766766+767767+err:768768+ dev_err(&qsdev->dev, "probe failed\n");769769+770770+ channel->cb = NULL;771771+ kfree(channel->bounce_buffer);772772+ channel->bounce_buffer = NULL;773773+774774+ qcom_smd_channel_set_state(channel, SMD_CHANNEL_CLOSED);775775+ return ret;776776+}777777+778778+/*779779+ * Remove the smd client.780780+ *781781+ * The channel is going away, for some reason, so remove the smd client and782782+ * reset the channel state.783783+ */784784+static int qcom_smd_dev_remove(struct device *dev)785785+{786786+ struct qcom_smd_device *qsdev = to_smd_device(dev);787787+ struct qcom_smd_driver *qsdrv = to_smd_driver(dev);788788+ struct qcom_smd_channel *channel = qsdev->channel;789789+ unsigned long flags;790790+791791+ qcom_smd_channel_set_state(channel, 
SMD_CHANNEL_CLOSING);792792+793793+ /*794794+ * Make sure we don't race with the code receiving data.795795+ */796796+ spin_lock_irqsave(&channel->recv_lock, flags);797797+ channel->cb = NULL;798798+ spin_unlock_irqrestore(&channel->recv_lock, flags);799799+800800+ /* Wake up any sleepers in qcom_smd_send() */801801+ wake_up_interruptible(&channel->fblockread_event);802802+803803+ /*804804+ * We expect that the client might block in remove() waiting for any805805+ * outstanding calls to qcom_smd_send() to wake up and finish.806806+ */807807+ if (qsdrv->remove)808808+ qsdrv->remove(qsdev);809809+810810+ /*811811+ * The client is now gone, cleanup and reset the channel state.812812+ */813813+ channel->qsdev = NULL;814814+ kfree(channel->bounce_buffer);815815+ channel->bounce_buffer = NULL;816816+817817+ qcom_smd_channel_set_state(channel, SMD_CHANNEL_CLOSED);818818+819819+ qcom_smd_channel_reset(channel);820820+821821+ return 0;822822+}823823+824824+static struct bus_type qcom_smd_bus = {825825+ .name = "qcom_smd",826826+ .match = qcom_smd_dev_match,827827+ .probe = qcom_smd_dev_probe,828828+ .remove = qcom_smd_dev_remove,829829+};830830+831831+/*832832+ * Release function for the qcom_smd_device object.833833+ */834834+static void qcom_smd_release_device(struct device *dev)835835+{836836+ struct qcom_smd_device *qsdev = to_smd_device(dev);837837+838838+ kfree(qsdev);839839+}840840+841841+/*842842+ * Finds the device_node for the smd child interested in this channel.843843+ */844844+static struct device_node *qcom_smd_match_channel(struct device_node *edge_node,845845+ const char *channel)846846+{847847+ struct device_node *child;848848+ const char *name;849849+ const char *key;850850+ int ret;851851+852852+ for_each_available_child_of_node(edge_node, child) {853853+ key = "qcom,smd-channels";854854+ ret = of_property_read_string(child, key, &name);855855+ if (ret) {856856+ of_node_put(child);857857+ continue;858858+ }859859+860860+ if (strcmp(name, channel) == 
0)861861+ return child;862862+ }863863+864864+ return NULL;865865+}866866+867867+/*868868+ * Create a smd client device for channel that is being opened.869869+ */870870+static int qcom_smd_create_device(struct qcom_smd_channel *channel)871871+{872872+ struct qcom_smd_device *qsdev;873873+ struct qcom_smd_edge *edge = channel->edge;874874+ struct device_node *node;875875+ struct qcom_smd *smd = edge->smd;876876+ int ret;877877+878878+ if (channel->qsdev)879879+ return -EEXIST;880880+881881+ node = qcom_smd_match_channel(edge->of_node, channel->name);882882+ if (!node) {883883+ dev_dbg(smd->dev, "no match for '%s'\n", channel->name);884884+ return -ENXIO;885885+ }886886+887887+ dev_dbg(smd->dev, "registering '%s'\n", channel->name);888888+889889+ qsdev = kzalloc(sizeof(*qsdev), GFP_KERNEL);890890+ if (!qsdev)891891+ return -ENOMEM;892892+893893+ dev_set_name(&qsdev->dev, "%s.%s", edge->of_node->name, node->name);894894+ qsdev->dev.parent = smd->dev;895895+ qsdev->dev.bus = &qcom_smd_bus;896896+ qsdev->dev.release = qcom_smd_release_device;897897+ qsdev->dev.of_node = node;898898+899899+ qsdev->channel = channel;900900+901901+ channel->qsdev = qsdev;902902+903903+ ret = device_register(&qsdev->dev);904904+ if (ret) {905905+ dev_err(smd->dev, "device_register failed: %d\n", ret);906906+ put_device(&qsdev->dev);907907+ }908908+909909+ return ret;910910+}911911+912912+/*913913+ * Destroy a smd client device for a channel that's going away.914914+ */915915+static void qcom_smd_destroy_device(struct qcom_smd_channel *channel)916916+{917917+ struct device *dev;918918+919919+ BUG_ON(!channel->qsdev);920920+921921+ dev = &channel->qsdev->dev;922922+923923+ device_unregister(dev);924924+ of_node_put(dev->of_node);925925+ put_device(dev);926926+}927927+928928+/**929929+ * qcom_smd_driver_register - register a smd driver930930+ * @qsdrv: qcom_smd_driver struct931931+ */932932+int qcom_smd_driver_register(struct qcom_smd_driver *qsdrv)933933+{934934+ qsdrv->driver.bus = 
&qcom_smd_bus;935935+ return driver_register(&qsdrv->driver);936936+}937937+EXPORT_SYMBOL(qcom_smd_driver_register);938938+939939+/**940940+ * qcom_smd_driver_unregister - unregister a smd driver941941+ * @qsdrv: qcom_smd_driver struct942942+ */943943+void qcom_smd_driver_unregister(struct qcom_smd_driver *qsdrv)944944+{945945+ driver_unregister(&qsdrv->driver);946946+}947947+EXPORT_SYMBOL(qcom_smd_driver_unregister);948948+949949+/*950950+ * Allocate the qcom_smd_channel object for a newly found smd channel,951951+ * retrieving and validating the smem items involved.952952+ */953953+static struct qcom_smd_channel *qcom_smd_create_channel(struct qcom_smd_edge *edge,954954+ unsigned smem_info_item,955955+ unsigned smem_fifo_item,956956+ char *name)957957+{958958+ struct qcom_smd_channel *channel;959959+ struct qcom_smd *smd = edge->smd;960960+ size_t fifo_size;961961+ size_t info_size;962962+ void *fifo_base;963963+ void *info;964964+ int ret;965965+966966+ channel = devm_kzalloc(smd->dev, sizeof(*channel), GFP_KERNEL);967967+ if (!channel)968968+ return ERR_PTR(-ENOMEM);969969+970970+ channel->edge = edge;971971+ channel->name = devm_kstrdup(smd->dev, name, GFP_KERNEL);972972+ if (!channel->name)973973+ return ERR_PTR(-ENOMEM);974974+975975+ mutex_init(&channel->tx_lock);976976+ spin_lock_init(&channel->recv_lock);977977+ init_waitqueue_head(&channel->fblockread_event);978978+979979+ ret = qcom_smem_get(edge->edge_id, smem_info_item, (void **)&info, &info_size);980980+ if (ret)981981+ goto free_name_and_channel;982982+983983+ /*984984+ * Use the size of the item to figure out which channel info struct to985985+ * use.986986+ */987987+ if (info_size == 2 * sizeof(struct smd_channel_info_word)) {988988+ channel->tx_info_word = info;989989+ channel->rx_info_word = info + sizeof(struct smd_channel_info_word);990990+ } else if (info_size == 2 * sizeof(struct smd_channel_info)) {991991+ channel->tx_info = info;992992+ channel->rx_info = info + sizeof(struct 
smd_channel_info);993993+ } else {994994+ dev_err(smd->dev,995995+ "channel info of size %zu not supported\n", info_size);996996+ ret = -EINVAL;997997+ goto free_name_and_channel;998998+ }999999+10001000+ ret = qcom_smem_get(edge->edge_id, smem_fifo_item, &fifo_base, &fifo_size);10011001+ if (ret)10021002+ goto free_name_and_channel;10031003+10041004+ /* The channel consist of a rx and tx fifo of equal size */10051005+ fifo_size /= 2;10061006+10071007+ dev_dbg(smd->dev, "new channel '%s' info-size: %zu fifo-size: %zu\n",10081008+ name, info_size, fifo_size);10091009+10101010+ channel->tx_fifo = fifo_base;10111011+ channel->rx_fifo = fifo_base + fifo_size;10121012+ channel->fifo_size = fifo_size;10131013+10141014+ qcom_smd_channel_reset(channel);10151015+10161016+ return channel;10171017+10181018+free_name_and_channel:10191019+ devm_kfree(smd->dev, channel->name);10201020+ devm_kfree(smd->dev, channel);10211021+10221022+ return ERR_PTR(ret);10231023+}10241024+10251025+/*10261026+ * Scans the allocation table for any newly allocated channels, calls10271027+ * qcom_smd_create_channel() to create representations of these and add10281028+ * them to the edge's list of channels.10291029+ */10301030+static void qcom_discover_channels(struct qcom_smd_edge *edge)10311031+{10321032+ struct qcom_smd_alloc_entry *alloc_tbl;10331033+ struct qcom_smd_alloc_entry *entry;10341034+ struct qcom_smd_channel *channel;10351035+ struct qcom_smd *smd = edge->smd;10361036+ unsigned long flags;10371037+ unsigned fifo_id;10381038+ unsigned info_id;10391039+ int ret;10401040+ int tbl;10411041+ int i;10421042+10431043+ for (tbl = 0; tbl < SMD_ALLOC_TBL_COUNT; tbl++) {10441044+ ret = qcom_smem_get(edge->edge_id,10451045+ smem_items[tbl].alloc_tbl_id,10461046+ (void **)&alloc_tbl,10471047+ NULL);10481048+ if (ret < 0)10491049+ continue;10501050+10511051+ for (i = 0; i < SMD_ALLOC_TBL_SIZE; i++) {10521052+ entry = &alloc_tbl[i];10531053+ if (test_bit(i, edge->allocated[tbl]))10541054+ 
continue;10551055+10561056+ if (entry->ref_count == 0)10571057+ continue;10581058+10591059+ if (!entry->name[0])10601060+ continue;10611061+10621062+ if (!(entry->flags & SMD_CHANNEL_FLAGS_PACKET))10631063+ continue;10641064+10651065+ if ((entry->flags & SMD_CHANNEL_FLAGS_EDGE_MASK) != edge->edge_id)10661066+ continue;10671067+10681068+ info_id = smem_items[tbl].info_base_id + entry->cid;10691069+ fifo_id = smem_items[tbl].fifo_base_id + entry->cid;10701070+10711071+ channel = qcom_smd_create_channel(edge, info_id, fifo_id, entry->name);10721072+ if (IS_ERR(channel))10731073+ continue;10741074+10751075+ spin_lock_irqsave(&edge->channels_lock, flags);10761076+ list_add(&channel->list, &edge->channels);10771077+ spin_unlock_irqrestore(&edge->channels_lock, flags);10781078+10791079+ dev_dbg(smd->dev, "new channel found: '%s'\n", channel->name);10801080+ set_bit(i, edge->allocated[tbl]);10811081+ }10821082+ }10831083+10841084+ schedule_work(&edge->work);10851085+}10861086+10871087+/*10881088+ * This per edge worker scans smem for any new channels and register these. 
It10891089+ * then scans all registered channels for state changes that should be handled10901090+ * by creating or destroying smd client devices for the registered channels.10911091+ *10921092+ * LOCKING: edge->channels_lock is not needed to be held during the traversal10931093+ * of the channels list as it's done synchronously with the only writer.10941094+ */10951095+static void qcom_channel_state_worker(struct work_struct *work)10961096+{10971097+ struct qcom_smd_channel *channel;10981098+ struct qcom_smd_edge *edge = container_of(work,10991099+ struct qcom_smd_edge,11001100+ work);11011101+ unsigned remote_state;11021102+11031103+ /*11041104+ * Rescan smem if we have reason to belive that there are new channels.11051105+ */11061106+ if (edge->need_rescan) {11071107+ edge->need_rescan = false;11081108+ qcom_discover_channels(edge);11091109+ }11101110+11111111+ /*11121112+ * Register a device for any closed channel where the remote processor11131113+ * is showing interest in opening the channel.11141114+ */11151115+ list_for_each_entry(channel, &edge->channels, list) {11161116+ if (channel->state != SMD_CHANNEL_CLOSED)11171117+ continue;11181118+11191119+ remote_state = GET_RX_CHANNEL_INFO(channel, state);11201120+ if (remote_state != SMD_CHANNEL_OPENING &&11211121+ remote_state != SMD_CHANNEL_OPENED)11221122+ continue;11231123+11241124+ qcom_smd_create_device(channel);11251125+ }11261126+11271127+ /*11281128+ * Unregister the device for any channel that is opened where the11291129+ * remote processor is closing the channel.11301130+ */11311131+ list_for_each_entry(channel, &edge->channels, list) {11321132+ if (channel->state != SMD_CHANNEL_OPENING &&11331133+ channel->state != SMD_CHANNEL_OPENED)11341134+ continue;11351135+11361136+ remote_state = GET_RX_CHANNEL_INFO(channel, state);11371137+ if (remote_state == SMD_CHANNEL_OPENING ||11381138+ remote_state == SMD_CHANNEL_OPENED)11391139+ continue;11401140+11411141+ qcom_smd_destroy_device(channel);11421142+ 
}11431143+}11441144+11451145+/*11461146+ * Parses an of_node describing an edge.11471147+ */11481148+static int qcom_smd_parse_edge(struct device *dev,11491149+ struct device_node *node,11501150+ struct qcom_smd_edge *edge)11511151+{11521152+ struct device_node *syscon_np;11531153+ const char *key;11541154+ int irq;11551155+ int ret;11561156+11571157+ INIT_LIST_HEAD(&edge->channels);11581158+ spin_lock_init(&edge->channels_lock);11591159+11601160+ INIT_WORK(&edge->work, qcom_channel_state_worker);11611161+11621162+ edge->of_node = of_node_get(node);11631163+11641164+ irq = irq_of_parse_and_map(node, 0);11651165+ if (irq < 0) {11661166+ dev_err(dev, "required smd interrupt missing\n");11671167+ return -EINVAL;11681168+ }11691169+11701170+ ret = devm_request_irq(dev, irq,11711171+ qcom_smd_edge_intr, IRQF_TRIGGER_RISING,11721172+ node->name, edge);11731173+ if (ret) {11741174+ dev_err(dev, "failed to request smd irq\n");11751175+ return ret;11761176+ }11771177+11781178+ edge->irq = irq;11791179+11801180+ key = "qcom,smd-edge";11811181+ ret = of_property_read_u32(node, key, &edge->edge_id);11821182+ if (ret) {11831183+ dev_err(dev, "edge missing %s property\n", key);11841184+ return -EINVAL;11851185+ }11861186+11871187+ syscon_np = of_parse_phandle(node, "qcom,ipc", 0);11881188+ if (!syscon_np) {11891189+ dev_err(dev, "no qcom,ipc node\n");11901190+ return -ENODEV;11911191+ }11921192+11931193+ edge->ipc_regmap = syscon_node_to_regmap(syscon_np);11941194+ if (IS_ERR(edge->ipc_regmap))11951195+ return PTR_ERR(edge->ipc_regmap);11961196+11971197+ key = "qcom,ipc";11981198+ ret = of_property_read_u32_index(node, key, 1, &edge->ipc_offset);11991199+ if (ret < 0) {12001200+ dev_err(dev, "no offset in %s\n", key);12011201+ return -EINVAL;12021202+ }12031203+12041204+ ret = of_property_read_u32_index(node, key, 2, &edge->ipc_bit);12051205+ if (ret < 0) {12061206+ dev_err(dev, "no bit in %s\n", key);12071207+ return -EINVAL;12081208+ }12091209+12101210+ return 
0;12111211+}12121212+12131213+static int qcom_smd_probe(struct platform_device *pdev)12141214+{12151215+ struct qcom_smd_edge *edge;12161216+ struct device_node *node;12171217+ struct qcom_smd *smd;12181218+ size_t array_size;12191219+ int num_edges;12201220+ int ret;12211221+ int i = 0;12221222+12231223+ /* Wait for smem */12241224+ ret = qcom_smem_get(QCOM_SMEM_HOST_ANY, smem_items[0].alloc_tbl_id, NULL, NULL);12251225+ if (ret == -EPROBE_DEFER)12261226+ return ret;12271227+12281228+ num_edges = of_get_available_child_count(pdev->dev.of_node);12291229+ array_size = sizeof(*smd) + num_edges * sizeof(struct qcom_smd_edge);12301230+ smd = devm_kzalloc(&pdev->dev, array_size, GFP_KERNEL);12311231+ if (!smd)12321232+ return -ENOMEM;12331233+ smd->dev = &pdev->dev;12341234+12351235+ smd->num_edges = num_edges;12361236+ for_each_available_child_of_node(pdev->dev.of_node, node) {12371237+ edge = &smd->edges[i++];12381238+ edge->smd = smd;12391239+12401240+ ret = qcom_smd_parse_edge(&pdev->dev, node, edge);12411241+ if (ret)12421242+ continue;12431243+12441244+ edge->need_rescan = true;12451245+ schedule_work(&edge->work);12461246+ }12471247+12481248+ platform_set_drvdata(pdev, smd);12491249+12501250+ return 0;12511251+}12521252+12531253+/*12541254+ * Shut down all smd clients by making sure that each edge stops processing12551255+ * events and scanning for new channels, then call destroy on the devices.12561256+ */12571257+static int qcom_smd_remove(struct platform_device *pdev)12581258+{12591259+ struct qcom_smd_channel *channel;12601260+ struct qcom_smd_edge *edge;12611261+ struct qcom_smd *smd = platform_get_drvdata(pdev);12621262+ int i;12631263+12641264+ for (i = 0; i < smd->num_edges; i++) {12651265+ edge = &smd->edges[i];12661266+12671267+ disable_irq(edge->irq);12681268+ cancel_work_sync(&edge->work);12691269+12701270+ list_for_each_entry(channel, &edge->channels, list) {12711271+ if (!channel->qsdev)12721272+ continue;12731273+12741274+ 
qcom_smd_destroy_device(channel);12751275+ }12761276+ }12771277+12781278+ return 0;12791279+}12801280+12811281+static const struct of_device_id qcom_smd_of_match[] = {12821282+ { .compatible = "qcom,smd" },12831283+ {}12841284+};12851285+MODULE_DEVICE_TABLE(of, qcom_smd_of_match);12861286+12871287+static struct platform_driver qcom_smd_driver = {12881288+ .probe = qcom_smd_probe,12891289+ .remove = qcom_smd_remove,12901290+ .driver = {12911291+ .name = "qcom-smd",12921292+ .of_match_table = qcom_smd_of_match,12931293+ },12941294+};12951295+12961296+static int __init qcom_smd_init(void)12971297+{12981298+ int ret;12991299+13001300+ ret = bus_register(&qcom_smd_bus);13011301+ if (ret) {13021302+ pr_err("failed to register smd bus: %d\n", ret);13031303+ return ret;13041304+ }13051305+13061306+ return platform_driver_register(&qcom_smd_driver);13071307+}13081308+postcore_initcall(qcom_smd_init);13091309+13101310+static void __exit qcom_smd_exit(void)13111311+{13121312+ platform_driver_unregister(&qcom_smd_driver);13131313+ bus_unregister(&qcom_smd_bus);13141314+}13151315+module_exit(qcom_smd_exit);13161316+13171317+MODULE_AUTHOR("Bjorn Andersson <bjorn.andersson@sonymobile.com>");13181318+MODULE_DESCRIPTION("Qualcomm Shared Memory Driver");13191319+MODULE_LICENSE("GPL v2");
drivers/soc/qcom/smem.c
/*
 * Copyright (c) 2015, Sony Mobile Communications AB.
 * Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 and
 * only version 2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 */

#include <linux/hwspinlock.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/soc/qcom/smem.h>

/*
 * The Qualcomm shared memory system is an allocate-only heap structure that
 * consists of one or more memory areas that can be accessed by the processors
 * in the SoC.
 *
 * All systems contain a global heap, accessible by all processors in the SoC,
 * with a table of contents data structure (@smem_header) at the beginning of
 * the main shared memory block.
 *
 * The global header contains metadata for allocations as well as a fixed list
 * of 512 entries (@smem_global_entry) that can be initialized to reference
 * parts of the shared memory space.
 *
 *
 * In addition to this global heap, a set of "private" heaps can be set up at
 * boot time with access restrictions so that only certain processor pairs can
 * access the data.
 *
 * These partitions are referenced from an optional partition table
 * (@smem_ptable), that is found 4kB from the end of the main smem region. The
 * partition table entries (@smem_ptable_entry) list the involved processors
 * (or hosts) and their location in the main shared memory region.
 *
 * Each partition starts with a header (@smem_partition_header) that identifies
 * the partition and holds properties for the two internal memory regions. The
 * two regions are cached and non-cached memory respectively. Each region
 * contains a linked list of allocation headers (@smem_private_entry) followed
 * by their data.
 *
 * Items in the non-cached region are allocated from the start of the partition
 * while items in the cached region are allocated from the end. The free area
 * is hence the region between the cached and non-cached offsets.
 *
 *
 * To synchronize allocations in the shared memory heaps a remote spinlock must
 * be held - currently lock number 3 of the sfpb or tcsr is used for this on
 * all platforms.
 */

/*
 * Item 3 of the global heap contains an array of versions for the various
 * software components in the SoC. We verify that the boot loader version is
 * the expected version (SMEM_EXPECTED_VERSION) as a sanity check.
 */
#define SMEM_ITEM_VERSION		3
#define SMEM_MASTER_SBL_VERSION_INDEX	7
#define SMEM_EXPECTED_VERSION		11

/*
 * The first 8 items are only to be allocated by the boot loader while
 * initializing the heap.
 */
#define SMEM_ITEM_LAST_FIXED	8

/* Highest accepted item number, for both global and private heaps */
#define SMEM_ITEM_COUNT		512

/* Processor/host identifier for the application processor */
#define SMEM_HOST_APPS		0

/* Max number of processors/hosts in a system */
#define SMEM_HOST_COUNT		9

/**
 * struct smem_proc_comm - proc_comm communication struct (legacy)
 * @command:	current command to be executed
 * @status:	status of the currently requested command
 * @params:	parameters to the command
 */
struct smem_proc_comm {
	u32 command;
	u32 status;
	u32 params[2];
};

/**
 * struct smem_global_entry - entry to reference smem items on the heap
 * @allocated:	boolean to indicate if this entry is used
 * @offset:	offset to the allocated space
 * @size:	size of the allocated space, 8 byte aligned
 * @aux_base:	base address for the memory region used by this unit, or 0 for
 *		the default region. bits 0,1 are reserved
 */
struct smem_global_entry {
	u32 allocated;
	u32 offset;
	u32 size;
	u32 aux_base; /* bits 1:0 reserved */
};
#define AUX_BASE_MASK		0xfffffffc

/**
 * struct smem_header - header found in beginning of primary smem region
 * @proc_comm:		proc_comm communication interface (legacy)
 * @version:		array of versions for the various subsystems
 * @initialized:	boolean to indicate that smem is initialized
 * @free_offset:	index of the first unallocated byte in smem
 * @available:		number of bytes available for allocation
 * @reserved:		reserved field, must be 0
 * @toc:		array of references to items
 */
struct smem_header {
	struct smem_proc_comm proc_comm[4];
	u32 version[32];
	u32 initialized;
	u32 free_offset;
	u32 available;
	u32 reserved;
	struct smem_global_entry toc[SMEM_ITEM_COUNT];
};

/**
 * struct smem_ptable_entry - one entry in the @smem_ptable list
 * @offset:	offset, within the main shared memory region, of the partition
 * @size:	size of the partition
 * @flags:	flags for the partition (currently unused)
 * @host0:	first processor/host with access to this partition
 * @host1:	second processor/host with access to this partition
 * @reserved:	reserved entries for later use
 */
struct smem_ptable_entry {
	u32 offset;
	u32 size;
	u32 flags;
	u16 host0;
	u16 host1;
	u32 reserved[8];
};

/**
 * struct smem_ptable - partition table for the private partitions
 * @magic:	magic number, must be SMEM_PTABLE_MAGIC
 * @version:	version of the partition table
 * @num_entries: number of partitions in the table
 * @reserved:	for now reserved entries
 * @entry:	list of @smem_ptable_entry for the @num_entries partitions
 */
struct smem_ptable {
	u32 magic;
	u32 version;
	u32 num_entries;
	u32 reserved[5];
	struct smem_ptable_entry entry[];
};
#define SMEM_PTABLE_MAGIC	0x434f5424 /* "$TOC" */

/**
 * struct smem_partition_header - header of the partitions
 * @magic:	magic number, must be SMEM_PART_MAGIC
 * @host0:	first processor/host with access to this partition
 * @host1:	second processor/host with access to this partition
 * @size:	size of the partition
 * @offset_free_uncached: offset to the first free byte of uncached memory in
 *		this partition
 * @offset_free_cached: offset to the first free byte of cached memory in this
 *		partition
 * @reserved:	for now reserved entries
 */
struct smem_partition_header {
	u32 magic;
	u16 host0;
	u16 host1;
	u32 size;
	u32 offset_free_uncached;
	u32 offset_free_cached;
	u32 reserved[3];
};
#define SMEM_PART_MAGIC		0x54525024 /* "$PRT" */

/**
 * struct smem_private_entry - header of each item in the private partition
 * @canary:	magic number, must be SMEM_PRIVATE_CANARY
 * @item:	identifying number of the smem item
 * @size:	size of the data, including padding bytes
 * @padding_data: number of bytes of padding of data
 * @padding_hdr: number of bytes of padding between the header and the data
 * @reserved:	for now reserved entry
 */
struct smem_private_entry {
	u16 canary;
	u16 item;
	u32 size; /* includes padding bytes */
	u16 padding_data;
	u16 padding_hdr;
	u32 reserved;
};
#define SMEM_PRIVATE_CANARY	0xa5a5

/**
 * struct smem_region - representation of a chunk of memory used for smem
 * @aux_base:	identifier of aux_mem base
 * @virt_base:	virtual base address of memory with this aux_mem identifier
 * @size:	size of the memory region
 */
struct smem_region {
	u32 aux_base;
	void __iomem *virt_base;
	size_t size;
};

/**
 * struct qcom_smem - device data for the smem device
 * @dev:	device pointer
 * @hwlock:	reference to a hwspinlock
 * @partitions:	list of pointers to partitions affecting the current
 *		processor/host
 * @num_regions: number of @regions
 * @regions:	list of the memory regions defining the shared memory
 */
struct qcom_smem {
	struct device *dev;

	struct hwspinlock *hwlock;

	struct smem_partition_header *partitions[SMEM_HOST_COUNT];

	unsigned num_regions;
	struct smem_region regions[0];
};

/* Pointer to the one and only smem handle */
static struct qcom_smem *__smem;

/* Timeout (ms) for the trylock of remote spinlocks */
#define HWSPINLOCK_TIMEOUT	1000

static int qcom_smem_alloc_private(struct qcom_smem *smem,
				   unsigned host,
				   unsigned item,
				   size_t size)
{
	struct smem_partition_header *phdr;
	struct smem_private_entry *hdr;
	size_t alloc_size;
	void *p;

	/* We're not going to find it if there's no matching partition */
	if (host >= SMEM_HOST_COUNT || !smem->partitions[host])
		return -ENOENT;

	phdr = smem->partitions[host];

	p = (void *)phdr + sizeof(*phdr);
	while (p < (void *)phdr + phdr->offset_free_uncached) {
		hdr = p;

		if (hdr->canary != SMEM_PRIVATE_CANARY) {
			dev_err(smem->dev,
				"Found invalid canary in host %d partition\n",
				host);
			return -EINVAL;
		}

		if (hdr->item == item)
			return -EEXIST;

		p += sizeof(*hdr) + hdr->padding_hdr + hdr->size;
	}

	/* Check that we don't grow into the cached region */
	alloc_size = sizeof(*hdr) + ALIGN(size, 8);
	if (p + alloc_size >= (void *)phdr + phdr->offset_free_cached) {
		dev_err(smem->dev, "Out of memory\n");
		return -ENOSPC;
	}

	hdr = p;
	hdr->canary = SMEM_PRIVATE_CANARY;
	hdr->item = item;
	hdr->size = ALIGN(size, 8);
	hdr->padding_data = hdr->size - size;
	hdr->padding_hdr = 0;

	/*
	 * Ensure the header is written before we advance the free offset, so
	 * that remote processors that do not take the remote spinlock still
	 * get a consistent view of the linked list.
	 */
	wmb();
	phdr->offset_free_uncached += alloc_size;

	return 0;
}

static int qcom_smem_alloc_global(struct qcom_smem *smem,
				  unsigned item,
				  size_t size)
{
	struct smem_header *header;
	struct smem_global_entry *entry;

	if (WARN_ON(item >= SMEM_ITEM_COUNT))
		return -EINVAL;

	header = smem->regions[0].virt_base;
	entry = &header->toc[item];
	if (entry->allocated)
		return -EEXIST;

	size = ALIGN(size, 8);
	if (WARN_ON(size > header->available))
		return -ENOMEM;

	entry->offset = header->free_offset;
	entry->size = size;

	/*
	 * Ensure the header is consistent before we mark the item allocated,
	 * so that remote processors will get a consistent view of the item
	 * even though they do not take the spinlock on read.
	 */
	wmb();
	entry->allocated = 1;

	header->free_offset += size;
	header->available -= size;

	return 0;
}

/**
 * qcom_smem_alloc() - allocate space for a smem item
 * @host:	remote processor id, or -1
 * @item:	smem item handle
 * @size:	number of bytes to be allocated
 *
 * Allocate space for a given smem item of size @size, given that the item is
 * not yet allocated.
 */
int qcom_smem_alloc(unsigned host, unsigned item, size_t size)
{
	unsigned long flags;
	int ret;

	if (!__smem)
		return -EPROBE_DEFER;

	if (item < SMEM_ITEM_LAST_FIXED) {
		dev_err(__smem->dev,
			"Rejecting allocation of static entry %d\n", item);
		return -EINVAL;
	}

	ret = hwspin_lock_timeout_irqsave(__smem->hwlock,
					  HWSPINLOCK_TIMEOUT,
					  &flags);
	if (ret)
		return ret;

	ret = qcom_smem_alloc_private(__smem, host, item, size);
	if (ret == -ENOENT)
		ret = qcom_smem_alloc_global(__smem, item, size);

	hwspin_unlock_irqrestore(__smem->hwlock, &flags);

	return ret;
}
EXPORT_SYMBOL(qcom_smem_alloc);

static int qcom_smem_get_global(struct qcom_smem *smem,
				unsigned item,
				void **ptr,
				size_t *size)
{
	struct smem_header *header;
	struct smem_region *area;
	struct smem_global_entry *entry;
	u32 aux_base;
	unsigned i;

	if (WARN_ON(item >= SMEM_ITEM_COUNT))
		return -EINVAL;

	header = smem->regions[0].virt_base;
	entry = &header->toc[item];
	if (!entry->allocated)
		return -ENXIO;

	if (ptr != NULL) {
		aux_base = entry->aux_base & AUX_BASE_MASK;

		for (i = 0; i < smem->num_regions; i++) {
			area = &smem->regions[i];

			if (area->aux_base == aux_base || !aux_base) {
				*ptr = area->virt_base + entry->offset;
				break;
			}
		}
	}
	if (size != NULL)
		*size = entry->size;

	return 0;
}

static int qcom_smem_get_private(struct qcom_smem *smem,
				 unsigned host,
				 unsigned item,
				 void **ptr,
				 size_t *size)
{
	struct smem_partition_header *phdr;
	struct smem_private_entry *hdr;
	void *p;

	/* We're not going
to find it if there's no matching partition */432432+ if (host >= SMEM_HOST_COUNT || !smem->partitions[host])433433+ return -ENOENT;434434+435435+ phdr = smem->partitions[host];436436+437437+ p = (void *)phdr + sizeof(*phdr);438438+ while (p < (void *)phdr + phdr->offset_free_uncached) {439439+ hdr = p;440440+441441+ if (hdr->canary != SMEM_PRIVATE_CANARY) {442442+ dev_err(smem->dev,443443+ "Found invalid canary in host %d partition\n",444444+ host);445445+ return -EINVAL;446446+ }447447+448448+ if (hdr->item == item) {449449+ if (ptr != NULL)450450+ *ptr = p + sizeof(*hdr) + hdr->padding_hdr;451451+452452+ if (size != NULL)453453+ *size = hdr->size - hdr->padding_data;454454+455455+ return 0;456456+ }457457+458458+ p += sizeof(*hdr) + hdr->padding_hdr + hdr->size;459459+ }460460+461461+ return -ENOENT;462462+}463463+464464+/**465465+ * qcom_smem_get() - resolve ptr of size of a smem item466466+ * @host: the remote processor, or -1467467+ * @item: smem item handle468468+ * @ptr: pointer to be filled out with address of the item469469+ * @size: pointer to be filled out with size of the item470470+ *471471+ * Looks up pointer and size of a smem item.472472+ */473473+int qcom_smem_get(unsigned host, unsigned item, void **ptr, size_t *size)474474+{475475+ unsigned long flags;476476+ int ret;477477+478478+ if (!__smem)479479+ return -EPROBE_DEFER;480480+481481+ ret = hwspin_lock_timeout_irqsave(__smem->hwlock,482482+ HWSPINLOCK_TIMEOUT,483483+ &flags);484484+ if (ret)485485+ return ret;486486+487487+ ret = qcom_smem_get_private(__smem, host, item, ptr, size);488488+ if (ret == -ENOENT)489489+ ret = qcom_smem_get_global(__smem, item, ptr, size);490490+491491+ hwspin_unlock_irqrestore(__smem->hwlock, &flags);492492+ return ret;493493+494494+}495495+EXPORT_SYMBOL(qcom_smem_get);496496+497497+/**498498+ * qcom_smem_get_free_space() - retrieve amount of free space in a partition499499+ * @host: the remote processor identifying a partition, or -1500500+ *501501+ * To be used 
by smem clients as a quick way to determine if any new502502+ * allocations has been made.503503+ */504504+int qcom_smem_get_free_space(unsigned host)505505+{506506+ struct smem_partition_header *phdr;507507+ struct smem_header *header;508508+ unsigned ret;509509+510510+ if (!__smem)511511+ return -EPROBE_DEFER;512512+513513+ if (host < SMEM_HOST_COUNT && __smem->partitions[host]) {514514+ phdr = __smem->partitions[host];515515+ ret = phdr->offset_free_cached - phdr->offset_free_uncached;516516+ } else {517517+ header = __smem->regions[0].virt_base;518518+ ret = header->available;519519+ }520520+521521+ return ret;522522+}523523+EXPORT_SYMBOL(qcom_smem_get_free_space);524524+525525+static int qcom_smem_get_sbl_version(struct qcom_smem *smem)526526+{527527+ unsigned *versions;528528+ size_t size;529529+ int ret;530530+531531+ ret = qcom_smem_get_global(smem, SMEM_ITEM_VERSION,532532+ (void **)&versions, &size);533533+ if (ret < 0) {534534+ dev_err(smem->dev, "Unable to read the version item\n");535535+ return -ENOENT;536536+ }537537+538538+ if (size < sizeof(unsigned) * SMEM_MASTER_SBL_VERSION_INDEX) {539539+ dev_err(smem->dev, "Version item is too small\n");540540+ return -EINVAL;541541+ }542542+543543+ return versions[SMEM_MASTER_SBL_VERSION_INDEX];544544+}545545+546546+static int qcom_smem_enumerate_partitions(struct qcom_smem *smem,547547+ unsigned local_host)548548+{549549+ struct smem_partition_header *header;550550+ struct smem_ptable_entry *entry;551551+ struct smem_ptable *ptable;552552+ unsigned remote_host;553553+ int i;554554+555555+ ptable = smem->regions[0].virt_base + smem->regions[0].size - SZ_4K;556556+ if (ptable->magic != SMEM_PTABLE_MAGIC)557557+ return 0;558558+559559+ if (ptable->version != 1) {560560+ dev_err(smem->dev,561561+ "Unsupported partition header version %d\n",562562+ ptable->version);563563+ return -EINVAL;564564+ }565565+566566+ for (i = 0; i < ptable->num_entries; i++) {567567+ entry = &ptable->entry[i];568568+569569+ if 
(entry->host0 != local_host && entry->host1 != local_host)570570+ continue;571571+572572+ if (!entry->offset)573573+ continue;574574+575575+ if (!entry->size)576576+ continue;577577+578578+ if (entry->host0 == local_host)579579+ remote_host = entry->host1;580580+ else581581+ remote_host = entry->host0;582582+583583+ if (remote_host >= SMEM_HOST_COUNT) {584584+ dev_err(smem->dev,585585+ "Invalid remote host %d\n",586586+ remote_host);587587+ return -EINVAL;588588+ }589589+590590+ if (smem->partitions[remote_host]) {591591+ dev_err(smem->dev,592592+ "Already found a partition for host %d\n",593593+ remote_host);594594+ return -EINVAL;595595+ }596596+597597+ header = smem->regions[0].virt_base + entry->offset;598598+599599+ if (header->magic != SMEM_PART_MAGIC) {600600+ dev_err(smem->dev,601601+ "Partition %d has invalid magic\n", i);602602+ return -EINVAL;603603+ }604604+605605+ if (header->host0 != local_host && header->host1 != local_host) {606606+ dev_err(smem->dev,607607+ "Partition %d hosts are invalid\n", i);608608+ return -EINVAL;609609+ }610610+611611+ if (header->host0 != remote_host && header->host1 != remote_host) {612612+ dev_err(smem->dev,613613+ "Partition %d hosts are invalid\n", i);614614+ return -EINVAL;615615+ }616616+617617+ if (header->size != entry->size) {618618+ dev_err(smem->dev,619619+ "Partition %d has invalid size\n", i);620620+ return -EINVAL;621621+ }622622+623623+ if (header->offset_free_uncached > header->size) {624624+ dev_err(smem->dev,625625+ "Partition %d has invalid free pointer\n", i);626626+ return -EINVAL;627627+ }628628+629629+ smem->partitions[remote_host] = header;630630+ }631631+632632+ return 0;633633+}634634+635635+static int qcom_smem_count_mem_regions(struct platform_device *pdev)636636+{637637+ struct resource *res;638638+ int num_regions = 0;639639+ int i;640640+641641+ for (i = 0; i < pdev->num_resources; i++) {642642+ res = &pdev->resource[i];643643+644644+ if (resource_type(res) == IORESOURCE_MEM)645645+ 
num_regions++;646646+ }647647+648648+ return num_regions;649649+}650650+651651+static int qcom_smem_probe(struct platform_device *pdev)652652+{653653+ struct smem_header *header;654654+ struct device_node *np;655655+ struct qcom_smem *smem;656656+ struct resource *res;657657+ struct resource r;658658+ size_t array_size;659659+ int num_regions = 0;660660+ int hwlock_id;661661+ u32 version;662662+ int ret;663663+ int i;664664+665665+ num_regions = qcom_smem_count_mem_regions(pdev) + 1;666666+667667+ array_size = num_regions * sizeof(struct smem_region);668668+ smem = devm_kzalloc(&pdev->dev, sizeof(*smem) + array_size, GFP_KERNEL);669669+ if (!smem)670670+ return -ENOMEM;671671+672672+ smem->dev = &pdev->dev;673673+ smem->num_regions = num_regions;674674+675675+ np = of_parse_phandle(pdev->dev.of_node, "memory-region", 0);676676+ if (!np) {677677+ dev_err(&pdev->dev, "No memory-region specified\n");678678+ return -EINVAL;679679+ }680680+681681+ ret = of_address_to_resource(np, 0, &r);682682+ of_node_put(np);683683+ if (ret)684684+ return ret;685685+686686+ smem->regions[0].aux_base = (u32)r.start;687687+ smem->regions[0].size = resource_size(&r);688688+ smem->regions[0].virt_base = devm_ioremap_nocache(&pdev->dev,689689+ r.start,690690+ resource_size(&r));691691+ if (!smem->regions[0].virt_base)692692+ return -ENOMEM;693693+694694+ for (i = 1; i < num_regions; i++) {695695+ res = platform_get_resource(pdev, IORESOURCE_MEM, i - 1);696696+697697+ smem->regions[i].aux_base = (u32)res->start;698698+ smem->regions[i].size = resource_size(res);699699+ smem->regions[i].virt_base = devm_ioremap_nocache(&pdev->dev,700700+ res->start,701701+ resource_size(res));702702+ if (!smem->regions[i].virt_base)703703+ return -ENOMEM;704704+ }705705+706706+ header = smem->regions[0].virt_base;707707+ if (header->initialized != 1 || header->reserved) {708708+ dev_err(&pdev->dev, "SMEM is not initialized by SBL\n");709709+ return -EINVAL;710710+ }711711+712712+ version = 
qcom_smem_get_sbl_version(smem);713713+ if (version >> 16 != SMEM_EXPECTED_VERSION) {714714+ dev_err(&pdev->dev, "Unsupported SMEM version 0x%x\n", version);715715+ return -EINVAL;716716+ }717717+718718+ ret = qcom_smem_enumerate_partitions(smem, SMEM_HOST_APPS);719719+ if (ret < 0)720720+ return ret;721721+722722+ hwlock_id = of_hwspin_lock_get_id(pdev->dev.of_node, 0);723723+ if (hwlock_id < 0) {724724+ dev_err(&pdev->dev, "failed to retrieve hwlock\n");725725+ return hwlock_id;726726+ }727727+728728+ smem->hwlock = hwspin_lock_request_specific(hwlock_id);729729+ if (!smem->hwlock)730730+ return -ENXIO;731731+732732+ __smem = smem;733733+734734+ return 0;735735+}736736+737737+static int qcom_smem_remove(struct platform_device *pdev)738738+{739739+ __smem = NULL;740740+ hwspin_lock_free(__smem->hwlock);741741+742742+ return 0;743743+}744744+745745+static const struct of_device_id qcom_smem_of_match[] = {746746+ { .compatible = "qcom,smem" },747747+ {}748748+};749749+MODULE_DEVICE_TABLE(of, qcom_smem_of_match);750750+751751+static struct platform_driver qcom_smem_driver = {752752+ .probe = qcom_smem_probe,753753+ .remove = qcom_smem_remove,754754+ .driver = {755755+ .name = "qcom-smem",756756+ .of_match_table = qcom_smem_of_match,757757+ .suppress_bind_attrs = true,758758+ },759759+};760760+761761+static int __init qcom_smem_init(void)762762+{763763+ return platform_driver_register(&qcom_smem_driver);764764+}765765+arch_initcall(qcom_smem_init);766766+767767+static void __exit qcom_smem_exit(void)768768+{769769+ platform_driver_unregister(&qcom_smem_driver);770770+}771771+module_exit(qcom_smem_exit)772772+773773+MODULE_AUTHOR("Bjorn Andersson <bjorn.andersson@sonymobile.com>");774774+MODULE_DESCRIPTION("Qualcomm Shared Memory Manager");775775+MODULE_LICENSE("GPL v2");