Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

spi: axi-spi-engine: add offload support

Merge series from David Lechner <dlechner@baylibre.com>:

As a recap, here is the background and end goal of this series:

The AXI SPI Engine is a SPI controller that has the ability to record a
series of SPI transactions and then play them back using a hardware
trigger. This allows a sequence of operations to be repeated many times
without any CPU intervention. This is needed to achieve high data
rates (millions of samples per second) from ADCs and DACs that are
connected via a SPI bus.

The offload hardware interface consists of a trigger input and a data
output for the RX data. These are connected to other hardware external
to the SPI controller.

To record one or more transactions, commands and TX data are written
to memories in the controller (RX buffer is not used since RX data gets
streamed to an external sink). This sequence of transactions can then be
played back when the trigger input is asserted.
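
From a peripheral driver's point of view, the record step maps onto the
offload API added by this series: the driver gets an offload instance and a
trigger, attaches the offload to a spi_message, and optimizes the message
(which writes the controller's command and TX data memories). A rough,
non-compilable sketch, with error handling elided and variable names
invented for illustration:

```c
/* Pseudocode sketch only - error handling elided, names illustrative */
struct spi_offload_config config = {
        .capability_flags = SPI_OFFLOAD_CAP_TRIGGER |
                            SPI_OFFLOAD_CAP_RX_STREAM_DMA,
};

offload = devm_spi_offload_get(dev, spi, &config);
trigger = devm_spi_offload_trigger_get(dev, offload,
                                       SPI_OFFLOAD_TRIGGER_PERIODIC);

/* record: optimizing an offload-assigned message writes the
 * controller's command and TX data memories */
msg->offload = offload;
spi_optimize_message(spi, msg);

/* play back the recorded transactions on each trigger event */
spi_offload_trigger_validate(trigger, &trigger_config);
spi_offload_trigger_enable(offload, trigger, &trigger_config);
```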

This series includes the core SPI offload support along with the first
SPI controller (AXI SPI Engine) and SPI peripheral (AD7944 ADC) drivers
that use it. This enables capturing analog data at 2 million samples per
second.

The hardware setup looks like this:

+-------------------------------+     +------------------+
|                               |     |                  |
| SOC/FPGA                      |     |    AD7944 ADC    |
|  +---------------------+      |     |                  |
|  |   AXI SPI Engine    |      |     |                  |
|  |             SPI Bus ==================== SPI Bus    |
|  |                     |      |     |                  |
|  |  +---------------+  |      |     |                  |
|  |  |   Offload 0   |  |      |     +------------------+
|  |  | RX DATA OUT > >  > > >  |
|  |  | TRIGGER IN  < <  < < v  |
|  |  +---------------+  | ^ v  |
|  +---------------------+ ^ v  |
|  |       AXI PWM       | ^ v  |
|  |                CH0  > ^ v  |
|  +---------------------+   v  |
|  |       AXI DMA       |   v  |
|  |                CH0  < < <  |
|  +---------------------+      |
|                               |
+-------------------------------+

+1243 -7
+24
Documentation/devicetree/bindings/spi/adi,axi-spi-engine.yaml
···
       - const: s_axi_aclk
       - const: spi_clk

+  trigger-sources:
+    description:
+      An array of trigger source phandles for offload instances. The index in
+      the array corresponds to the offload instance number.
+    minItems: 1
+    maxItems: 32
+
+  dmas:
+    description:
+      DMA channels connected to the input or output stream interface of an
+      offload instance.
+    minItems: 1
+    maxItems: 32
+
+  dma-names:
+    items:
+      pattern: "^offload(?:[12]?[0-9]|3[01])-[tr]x$"
+    minItems: 1
+    maxItems: 32
+
 required:
   - compatible
   - reg
···
         interrupts = <0 56 4>;
         clocks = <&clkc 15>, <&clkc 15>;
         clock-names = "s_axi_aclk", "spi_clk";
+
+        trigger-sources = <&trigger_clock>;
+        dmas = <&dma 0>;
+        dma-names = "offload0-rx";

         #address-cells = <1>;
         #size-cells = <0>;
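
For reference, the dma-names pattern above accepts exactly the names
offload0-rx/tx through offload31-rx/tx, with no leading zeros. A small
user-space C checker that mirrors the regex logic (a hypothetical helper,
not part of the patch):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Mirrors "^offload(?:[12]?[0-9]|3[01])-[tr]x$": an offload instance
 * number 0-31 with no leading zero, followed by "-rx" or "-tx". */
bool dma_name_valid(const char *name)
{
        const char *p = name;
        int num = 0, digits = 0;

        if (strncmp(p, "offload", 7) != 0)
                return false;
        p += 7;

        while (*p >= '0' && *p <= '9') {
                if (++digits > 2)
                        return false;
                num = num * 10 + (*p - '0');
                p++;
        }

        /* one digit (0-9) or two digits (10-31, no leading zero) */
        if (digits < 1 || num > 31)
                return false;
        if (digits == 2 && name[7] == '0')
                return false;

        return strcmp(p, "-rx") == 0 || strcmp(p, "-tx") == 0;
}
```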
+37
Documentation/devicetree/bindings/trigger-source/pwm-trigger.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/trigger-source/pwm-trigger.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Generic trigger source using PWM
+
+description: Remaps a PWM channel as a trigger source.
+
+maintainers:
+  - David Lechner <dlechner@baylibre.com>
+
+properties:
+  compatible:
+    const: pwm-trigger
+
+  '#trigger-source-cells':
+    const: 0
+
+  pwms:
+    maxItems: 1
+
+required:
+  - compatible
+  - '#trigger-source-cells'
+  - pwms
+
+additionalProperties: false
+
+examples:
+  - |
+    trigger {
+        compatible = "pwm-trigger";
+        #trigger-source-cells = <0>;
+        pwms = <&pwm 0 1000000 0>;
+    };
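
Tying the two bindings together, an offload-capable setup wires a
pwm-trigger node into the SPI Engine's trigger-sources. A hypothetical
combined snippet (register address and labels invented; the property
values follow the binding examples above):

```
pwm_trigger0: trigger {
    compatible = "pwm-trigger";
    #trigger-source-cells = <0>;
    pwms = <&pwm 0 1000000 0>;
};

spi@44a00000 {
    compatible = "adi,axi-spi-engine-1.00.a";
    reg = <0x44a00000 0x1000>;
    interrupts = <0 56 4>;
    clocks = <&clkc 15>, <&clkc 15>;
    clock-names = "s_axi_aclk", "spi_clk";

    trigger-sources = <&pwm_trigger0>;
    dmas = <&dma 0>;
    dma-names = "offload0-rx";

    #address-cells = <1>;
    #size-cells = <0>;
};
```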
+12
MAINTAINERS
···
 F: drivers/mtd/spi-nor/
 F: include/linux/mtd/spi-nor.h

+SPI OFFLOAD
+R: David Lechner <dlechner@baylibre.com>
+F: drivers/spi/spi-offload-trigger-pwm.c
+F: drivers/spi/spi-offload.c
+F: include/linux/spi/spi-offload.h
+K: spi_offload
+
 SPI SUBSYSTEM
 M: Mark Brown <broonie@kernel.org>
 L: linux-spi@vger.kernel.org
···
 W: https://github.com/srcres258/linux-doc
 T: git git://github.com/srcres258/linux-doc.git doc-zh-tw
 F: Documentation/translations/zh_TW/
+
+TRIGGER SOURCE - PWM
+M: David Lechner <dlechner@baylibre.com>
+S: Maintained
+F: Documentation/devicetree/bindings/trigger-source/pwm-trigger.yaml

 TRUSTED SECURITY MODULE (TSM) ATTESTATION REPORTS
 M: Dan Williams <dan.j.williams@intel.com>
+16
drivers/spi/Kconfig
···
           This extension is meant to simplify interaction with SPI memories
           by providing a high-level interface to send memory-like commands.

+config SPI_OFFLOAD
+        bool
+
 comment "SPI Master Controller Drivers"

 config SPI_AIROHA_SNFI
···
 config SPI_AXI_SPI_ENGINE
         tristate "Analog Devices AXI SPI Engine controller"
         depends on HAS_IOMEM
+        select SPI_OFFLOAD
         help
           This enables support for the Analog Devices AXI SPI Engine SPI controller.
           It is part of the SPI Engine framework that is used in some Analog Devices
···
 config SPI_DYNAMIC
         def_bool ACPI || OF_DYNAMIC || SPI_SLAVE
+
+if SPI_OFFLOAD
+
+comment "SPI Offload triggers"
+
+config SPI_OFFLOAD_TRIGGER_PWM
+        tristate "SPI offload trigger using PWM"
+        depends on PWM
+        help
+          Generic SPI offload trigger implemented using PWM output.
+
+endif # SPI_OFFLOAD

 endif # SPI
+4
drivers/spi/Makefile
···
 obj-$(CONFIG_SPI_MASTER)                += spi.o
 obj-$(CONFIG_SPI_MEM)                   += spi-mem.o
 obj-$(CONFIG_SPI_MUX)                   += spi-mux.o
+obj-$(CONFIG_SPI_OFFLOAD)               += spi-offload.o
 obj-$(CONFIG_SPI_SPIDEV)                += spidev.o
 obj-$(CONFIG_SPI_LOOPBACK_TEST)         += spi-loopback-test.o
···
 # SPI slave protocol handlers
 obj-$(CONFIG_SPI_SLAVE_TIME)            += spi-slave-time.o
 obj-$(CONFIG_SPI_SLAVE_SYSTEM_CONTROL)  += spi-slave-system-control.o
+
+# SPI offload triggers
+obj-$(CONFIG_SPI_OFFLOAD_TRIGGER_PWM)   += spi-offload-trigger-pwm.o
+308 -7
drivers/spi/spi-axi-spi-engine.c
···
 /*
  * SPI-Engine SPI controller driver
  * Copyright 2015 Analog Devices Inc.
+ * Copyright 2024 BayLibre, SAS
  * Author: Lars-Peter Clausen <lars@metafoo.de>
  */

+#include <linux/bitfield.h>
+#include <linux/bitops.h>
 #include <linux/clk.h>
 #include <linux/completion.h>
+#include <linux/dmaengine.h>
 #include <linux/fpga/adi-axi-common.h>
 #include <linux/interrupt.h>
 #include <linux/io.h>
···
 #include <linux/module.h>
 #include <linux/overflow.h>
 #include <linux/platform_device.h>
+#include <linux/spi/offload/provider.h>
 #include <linux/spi/spi.h>
 #include <trace/events/spi.h>

+#define SPI_ENGINE_REG_OFFLOAD_MEM_ADDR_WIDTH   0x10
 #define SPI_ENGINE_REG_RESET                    0x40

 #define SPI_ENGINE_REG_INT_ENABLE               0x80
···
 #define SPI_ENGINE_REG_INT_SOURCE               0x88

 #define SPI_ENGINE_REG_SYNC_ID                  0xc0
+#define SPI_ENGINE_REG_OFFLOAD_SYNC_ID          0xc4

 #define SPI_ENGINE_REG_CMD_FIFO_ROOM            0xd0
 #define SPI_ENGINE_REG_SDO_FIFO_ROOM            0xd4
···
 #define SPI_ENGINE_REG_SDI_DATA_FIFO            0xe8
 #define SPI_ENGINE_REG_SDI_DATA_FIFO_PEEK       0xec

+#define SPI_ENGINE_MAX_NUM_OFFLOADS             32
+
+#define SPI_ENGINE_REG_OFFLOAD_CTRL(x)          (0x100 + SPI_ENGINE_MAX_NUM_OFFLOADS * (x))
+#define SPI_ENGINE_REG_OFFLOAD_STATUS(x)        (0x104 + SPI_ENGINE_MAX_NUM_OFFLOADS * (x))
+#define SPI_ENGINE_REG_OFFLOAD_RESET(x)         (0x108 + SPI_ENGINE_MAX_NUM_OFFLOADS * (x))
+#define SPI_ENGINE_REG_OFFLOAD_CMD_FIFO(x)      (0x110 + SPI_ENGINE_MAX_NUM_OFFLOADS * (x))
+#define SPI_ENGINE_REG_OFFLOAD_SDO_FIFO(x)      (0x114 + SPI_ENGINE_MAX_NUM_OFFLOADS * (x))
+
+#define SPI_ENGINE_SPI_OFFLOAD_MEM_WIDTH_SDO    GENMASK(15, 8)
+#define SPI_ENGINE_SPI_OFFLOAD_MEM_WIDTH_CMD    GENMASK(7, 0)
+
 #define SPI_ENGINE_INT_CMD_ALMOST_EMPTY         BIT(0)
 #define SPI_ENGINE_INT_SDO_ALMOST_EMPTY         BIT(1)
 #define SPI_ENGINE_INT_SDI_ALMOST_FULL          BIT(2)
 #define SPI_ENGINE_INT_SYNC                     BIT(3)
+#define SPI_ENGINE_INT_OFFLOAD_SYNC             BIT(4)
+
+#define SPI_ENGINE_OFFLOAD_CTRL_ENABLE          BIT(0)

 #define SPI_ENGINE_CONFIG_CPHA                  BIT(0)
 #define SPI_ENGINE_CONFIG_CPOL                  BIT(1)
···
 #define SPI_ENGINE_CMD_CS_INV(flags) \
        SPI_ENGINE_CMD(SPI_ENGINE_INST_CS_INV, 0, (flags))

+/* default sizes - can be changed when SPI Engine firmware is compiled */
+#define SPI_ENGINE_OFFLOAD_CMD_FIFO_SIZE        16
+#define SPI_ENGINE_OFFLOAD_SDO_FIFO_SIZE        16
+
 struct spi_engine_program {
        unsigned int length;
        uint16_t instructions[] __counted_by(length);
···
        uint8_t *rx_buf;
 };

+enum {
+        SPI_ENGINE_OFFLOAD_FLAG_ASSIGNED,
+        SPI_ENGINE_OFFLOAD_FLAG_PREPARED,
+};
+
+struct spi_engine_offload {
+        struct spi_engine *spi_engine;
+        unsigned long flags;
+        unsigned int offload_num;
+};
+
 struct spi_engine {
        struct clk *clk;
        struct clk *ref_clk;
···
        unsigned int int_enable;
        /* shadows hardware CS inversion flag state */
        u8 cs_inv;
+
+        unsigned int offload_ctrl_mem_size;
+        unsigned int offload_sdo_mem_size;
+        struct spi_offload *offload;
+        u32 offload_caps;
 };

 static void spi_engine_program_add_cmd(struct spi_engine_program *p,
···
        unsigned int n = min(len, 256U);
        unsigned int flags = 0;

-        if (xfer->tx_buf)
+        if (xfer->tx_buf || (xfer->offload_flags & SPI_OFFLOAD_XFER_TX_STREAM))
                flags |= SPI_ENGINE_TRANSFER_WRITE;
-        if (xfer->rx_buf)
+        if (xfer->rx_buf || (xfer->offload_flags & SPI_OFFLOAD_XFER_RX_STREAM))
                flags |= SPI_ENGINE_TRANSFER_READ;

        spi_engine_program_add_cmd(p, dry,
···
  *
  * NB: This is separate from spi_engine_compile_message() because the latter
  * is called twice and would otherwise result in double-evaluation.
+ *
+ * Returns 0 on success, -EINVAL on failure.
  */
-static void spi_engine_precompile_message(struct spi_message *msg)
+static int spi_engine_precompile_message(struct spi_message *msg)
 {
        unsigned int clk_div, max_hz = msg->spi->controller->max_speed_hz;
        struct spi_transfer *xfer;

        list_for_each_entry(xfer, &msg->transfers, transfer_list) {
+                /* If we have an offload transfer, we can't rx to buffer */
+                if (msg->offload && xfer->rx_buf)
+                        return -EINVAL;
+
                clk_div = DIV_ROUND_UP(max_hz, xfer->speed_hz);
                xfer->effective_speed_hz = max_hz / min(clk_div, 256U);
        }
+
+        return 0;
 }

 static void spi_engine_compile_message(struct spi_message *msg, bool dry,
···
        return IRQ_HANDLED;
 }

+static int spi_engine_offload_prepare(struct spi_message *msg)
+{
+        struct spi_controller *host = msg->spi->controller;
+        struct spi_engine *spi_engine = spi_controller_get_devdata(host);
+        struct spi_engine_program *p = msg->opt_state;
+        struct spi_engine_offload *priv = msg->offload->priv;
+        struct spi_transfer *xfer;
+        void __iomem *cmd_addr;
+        void __iomem *sdo_addr;
+        size_t tx_word_count = 0;
+        unsigned int i;
+
+        if (p->length > spi_engine->offload_ctrl_mem_size)
+                return -EINVAL;
+
+        /* count total number of tx words in message */
+        list_for_each_entry(xfer, &msg->transfers, transfer_list) {
+                /* no support for reading to rx_buf */
+                if (xfer->rx_buf)
+                        return -EINVAL;
+
+                if (!xfer->tx_buf)
+                        continue;
+
+                if (xfer->bits_per_word <= 8)
+                        tx_word_count += xfer->len;
+                else if (xfer->bits_per_word <= 16)
+                        tx_word_count += xfer->len / 2;
+                else
+                        tx_word_count += xfer->len / 4;
+        }
+
+        if (tx_word_count && !(spi_engine->offload_caps & SPI_OFFLOAD_CAP_TX_STATIC_DATA))
+                return -EINVAL;
+
+        if (tx_word_count > spi_engine->offload_sdo_mem_size)
+                return -EINVAL;
+
+        /*
+         * This protects against calling spi_optimize_message() with an offload
+         * that has already been prepared with a different message.
+         */
+        if (test_and_set_bit_lock(SPI_ENGINE_OFFLOAD_FLAG_PREPARED, &priv->flags))
+                return -EBUSY;
+
+        cmd_addr = spi_engine->base +
+                   SPI_ENGINE_REG_OFFLOAD_CMD_FIFO(priv->offload_num);
+        sdo_addr = spi_engine->base +
+                   SPI_ENGINE_REG_OFFLOAD_SDO_FIFO(priv->offload_num);
+
+        list_for_each_entry(xfer, &msg->transfers, transfer_list) {
+                if (!xfer->tx_buf)
+                        continue;
+
+                if (xfer->bits_per_word <= 8) {
+                        const u8 *buf = xfer->tx_buf;
+
+                        for (i = 0; i < xfer->len; i++)
+                                writel_relaxed(buf[i], sdo_addr);
+                } else if (xfer->bits_per_word <= 16) {
+                        const u16 *buf = xfer->tx_buf;
+
+                        for (i = 0; i < xfer->len / 2; i++)
+                                writel_relaxed(buf[i], sdo_addr);
+                } else {
+                        const u32 *buf = xfer->tx_buf;
+
+                        for (i = 0; i < xfer->len / 4; i++)
+                                writel_relaxed(buf[i], sdo_addr);
+                }
+        }
+
+        for (i = 0; i < p->length; i++)
+                writel_relaxed(p->instructions[i], cmd_addr);
+
+        return 0;
+}
+
+static void spi_engine_offload_unprepare(struct spi_offload *offload)
+{
+        struct spi_engine_offload *priv = offload->priv;
+        struct spi_engine *spi_engine = priv->spi_engine;
+
+        writel_relaxed(1, spi_engine->base +
+                          SPI_ENGINE_REG_OFFLOAD_RESET(priv->offload_num));
+        writel_relaxed(0, spi_engine->base +
+                          SPI_ENGINE_REG_OFFLOAD_RESET(priv->offload_num));
+
+        clear_bit_unlock(SPI_ENGINE_OFFLOAD_FLAG_PREPARED, &priv->flags);
+}
+
 static int spi_engine_optimize_message(struct spi_message *msg)
 {
        struct spi_engine_program p_dry, *p;
+        int ret;

-        spi_engine_precompile_message(msg);
+        ret = spi_engine_precompile_message(msg);
+        if (ret)
+                return ret;

        p_dry.length = 0;
        spi_engine_compile_message(msg, true, &p_dry);
···
        spi_engine_compile_message(msg, false, p);

        spi_engine_program_add_cmd(p, false, SPI_ENGINE_CMD_SYNC(
-                AXI_SPI_ENGINE_CUR_MSG_SYNC_ID));
+                msg->offload ? 0 : AXI_SPI_ENGINE_CUR_MSG_SYNC_ID));

        msg->opt_state = p;
+
+        if (msg->offload) {
+                ret = spi_engine_offload_prepare(msg);
+                if (ret) {
+                        msg->opt_state = NULL;
+                        kfree(p);
+                        return ret;
+                }
+        }

        return 0;
 }

 static int spi_engine_unoptimize_message(struct spi_message *msg)
 {
+        if (msg->offload)
+                spi_engine_offload_unprepare(msg->offload);
+
        kfree(msg->opt_state);

        return 0;
+}
+
+static struct spi_offload
+*spi_engine_get_offload(struct spi_device *spi,
+                        const struct spi_offload_config *config)
+{
+        struct spi_controller *host = spi->controller;
+        struct spi_engine *spi_engine = spi_controller_get_devdata(host);
+        struct spi_engine_offload *priv;
+
+        if (!spi_engine->offload)
+                return ERR_PTR(-ENODEV);
+
+        if (config->capability_flags & ~spi_engine->offload_caps)
+                return ERR_PTR(-EINVAL);
+
+        priv = spi_engine->offload->priv;
+
+        if (test_and_set_bit_lock(SPI_ENGINE_OFFLOAD_FLAG_ASSIGNED, &priv->flags))
+                return ERR_PTR(-EBUSY);
+
+        return spi_engine->offload;
+}
+
+static void spi_engine_put_offload(struct spi_offload *offload)
+{
+        struct spi_engine_offload *priv = offload->priv;
+
+        clear_bit_unlock(SPI_ENGINE_OFFLOAD_FLAG_ASSIGNED, &priv->flags);
 }

 static int spi_engine_setup(struct spi_device *device)
···
        struct spi_engine_program *p = msg->opt_state;
        unsigned int int_enable = 0;
        unsigned long flags;
+
+        if (msg->offload) {
+                dev_err(&host->dev, "Single transfer offload not supported\n");
+                msg->status = -EOPNOTSUPP;
+                goto out;
+        }

        /* reinitialize message state for this transfer */
        memset(st, 0, sizeof(*st));
···
                trace_spi_transfer_stop(msg, xfer);
        }

+out:
        spi_finalize_current_message(host);

        return msg->status;
 }
+
+static int spi_engine_trigger_enable(struct spi_offload *offload)
+{
+        struct spi_engine_offload *priv = offload->priv;
+        struct spi_engine *spi_engine = priv->spi_engine;
+        unsigned int reg;
+
+        reg = readl_relaxed(spi_engine->base +
+                            SPI_ENGINE_REG_OFFLOAD_CTRL(priv->offload_num));
+        reg |= SPI_ENGINE_OFFLOAD_CTRL_ENABLE;
+        writel_relaxed(reg, spi_engine->base +
+                            SPI_ENGINE_REG_OFFLOAD_CTRL(priv->offload_num));
+        return 0;
+}
+
+static void spi_engine_trigger_disable(struct spi_offload *offload)
+{
+        struct spi_engine_offload *priv = offload->priv;
+        struct spi_engine *spi_engine = priv->spi_engine;
+        unsigned int reg;
+
+        reg = readl_relaxed(spi_engine->base +
+                            SPI_ENGINE_REG_OFFLOAD_CTRL(priv->offload_num));
+        reg &= ~SPI_ENGINE_OFFLOAD_CTRL_ENABLE;
+        writel_relaxed(reg, spi_engine->base +
+                            SPI_ENGINE_REG_OFFLOAD_CTRL(priv->offload_num));
+}
+
+static struct dma_chan
+*spi_engine_tx_stream_request_dma_chan(struct spi_offload *offload)
+{
+        struct spi_engine_offload *priv = offload->priv;
+        char name[16];
+
+        snprintf(name, sizeof(name), "offload%u-tx", priv->offload_num);
+
+        return dma_request_chan(offload->provider_dev, name);
+}
+
+static struct dma_chan
+*spi_engine_rx_stream_request_dma_chan(struct spi_offload *offload)
+{
+        struct spi_engine_offload *priv = offload->priv;
+        char name[16];
+
+        snprintf(name, sizeof(name), "offload%u-rx", priv->offload_num);
+
+        return dma_request_chan(offload->provider_dev, name);
+}
+
+static const struct spi_offload_ops spi_engine_offload_ops = {
+        .trigger_enable = spi_engine_trigger_enable,
+        .trigger_disable = spi_engine_trigger_disable,
+        .tx_stream_request_dma_chan = spi_engine_tx_stream_request_dma_chan,
+        .rx_stream_request_dma_chan = spi_engine_rx_stream_request_dma_chan,
+};

 static void spi_engine_release_hw(void *p)
 {
···
        struct spi_engine *spi_engine;
        struct spi_controller *host;
        unsigned int version;
-        int irq;
-        int ret;
+        int irq, ret;

        irq = platform_get_irq(pdev, 0);
        if (irq < 0)
···
        spin_lock_init(&spi_engine->lock);
        init_completion(&spi_engine->msg_complete);
+
+        /*
+         * REVISIT: for now, all SPI Engines only have one offload. In the
+         * future, this should be read from a memory mapped register to
+         * determine the number of offloads enabled at HDL compile time. For
+         * now, we can tell if an offload is present if there is a trigger
+         * source wired up to it.
+         */
+        if (device_property_present(&pdev->dev, "trigger-sources")) {
+                struct spi_engine_offload *priv;
+
+                spi_engine->offload =
+                        devm_spi_offload_alloc(&pdev->dev,
+                                               sizeof(struct spi_engine_offload));
+                if (IS_ERR(spi_engine->offload))
+                        return PTR_ERR(spi_engine->offload);
+
+                priv = spi_engine->offload->priv;
+                priv->spi_engine = spi_engine;
+                priv->offload_num = 0;
+
+                spi_engine->offload->ops = &spi_engine_offload_ops;
+                spi_engine->offload_caps = SPI_OFFLOAD_CAP_TRIGGER;
+
+                if (device_property_match_string(&pdev->dev, "dma-names", "offload0-rx") >= 0) {
+                        spi_engine->offload_caps |= SPI_OFFLOAD_CAP_RX_STREAM_DMA;
+                        spi_engine->offload->xfer_flags |= SPI_OFFLOAD_XFER_RX_STREAM;
+                }
+
+                if (device_property_match_string(&pdev->dev, "dma-names", "offload0-tx") >= 0) {
+                        spi_engine->offload_caps |= SPI_OFFLOAD_CAP_TX_STREAM_DMA;
+                        spi_engine->offload->xfer_flags |= SPI_OFFLOAD_XFER_TX_STREAM;
+                } else {
+                        /*
+                         * HDL compile option to enable TX DMA stream also disables
+                         * the SDO memory, so can't do both at the same time.
+                         */
+                        spi_engine->offload_caps |= SPI_OFFLOAD_CAP_TX_STATIC_DATA;
+                }
+        }

        spi_engine->clk = devm_clk_get_enabled(&pdev->dev, "s_axi_aclk");
        if (IS_ERR(spi_engine->clk))
···
                        ADI_AXI_PCORE_VER_MINOR(version),
                        ADI_AXI_PCORE_VER_PATCH(version));
                return -ENODEV;
+        }
+
+        if (ADI_AXI_PCORE_VER_MINOR(version) >= 1) {
+                unsigned int sizes = readl(spi_engine->base +
+                                SPI_ENGINE_REG_OFFLOAD_MEM_ADDR_WIDTH);
+
+                spi_engine->offload_ctrl_mem_size = 1 <<
+                        FIELD_GET(SPI_ENGINE_SPI_OFFLOAD_MEM_WIDTH_CMD, sizes);
+                spi_engine->offload_sdo_mem_size = 1 <<
+                        FIELD_GET(SPI_ENGINE_SPI_OFFLOAD_MEM_WIDTH_SDO, sizes);
+        } else {
+                spi_engine->offload_ctrl_mem_size = SPI_ENGINE_OFFLOAD_CMD_FIFO_SIZE;
+                spi_engine->offload_sdo_mem_size = SPI_ENGINE_OFFLOAD_SDO_FIFO_SIZE;
        }

        writel_relaxed(0x00, spi_engine->base + SPI_ENGINE_REG_RESET);
···
        host->transfer_one_message = spi_engine_transfer_one_message;
        host->optimize_message = spi_engine_optimize_message;
        host->unoptimize_message = spi_engine_unoptimize_message;
+        host->get_offload = spi_engine_get_offload;
+        host->put_offload = spi_engine_put_offload;
        host->num_chipselect = 8;

        /* Some features depend of the IP core version. */
+162
drivers/spi/spi-offload-trigger-pwm.c
···
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2024 Analog Devices Inc.
+ * Copyright (C) 2024 BayLibre, SAS
+ *
+ * Generic PWM trigger for SPI offload.
+ */
+
+#include <linux/platform_device.h>
+#include <linux/pwm.h>
+#include <linux/mod_devicetable.h>
+#include <linux/spi/offload/provider.h>
+#include <linux/types.h>
+
+struct spi_offload_trigger_pwm_state {
+        struct device *dev;
+        struct pwm_device *pwm;
+};
+
+static bool spi_offload_trigger_pwm_match(struct spi_offload_trigger *trigger,
+                                          enum spi_offload_trigger_type type,
+                                          u64 *args, u32 nargs)
+{
+        if (nargs)
+                return false;
+
+        return type == SPI_OFFLOAD_TRIGGER_PERIODIC;
+}
+
+static int spi_offload_trigger_pwm_validate(struct spi_offload_trigger *trigger,
+                                            struct spi_offload_trigger_config *config)
+{
+        struct spi_offload_trigger_pwm_state *st = spi_offload_trigger_get_priv(trigger);
+        struct spi_offload_trigger_periodic *periodic = &config->periodic;
+        struct pwm_waveform wf = { };
+        int ret;
+
+        if (config->type != SPI_OFFLOAD_TRIGGER_PERIODIC)
+                return -EINVAL;
+
+        if (!periodic->frequency_hz)
+                return -EINVAL;
+
+        wf.period_length_ns = DIV_ROUND_UP_ULL(NSEC_PER_SEC, periodic->frequency_hz);
+        /* REVISIT: 50% duty-cycle for now - may add config parameter later */
+        wf.duty_length_ns = wf.period_length_ns / 2;
+
+        ret = pwm_round_waveform_might_sleep(st->pwm, &wf);
+        if (ret < 0)
+                return ret;
+
+        periodic->frequency_hz = DIV_ROUND_UP_ULL(NSEC_PER_SEC, wf.period_length_ns);
+
+        return 0;
+}
+
+static int spi_offload_trigger_pwm_enable(struct spi_offload_trigger *trigger,
+                                          struct spi_offload_trigger_config *config)
+{
+        struct spi_offload_trigger_pwm_state *st = spi_offload_trigger_get_priv(trigger);
+        struct spi_offload_trigger_periodic *periodic = &config->periodic;
+        struct pwm_waveform wf = { };
+
+        if (config->type != SPI_OFFLOAD_TRIGGER_PERIODIC)
+                return -EINVAL;
+
+        if (!periodic->frequency_hz)
+                return -EINVAL;
+
+        wf.period_length_ns = DIV_ROUND_UP_ULL(NSEC_PER_SEC, periodic->frequency_hz);
+        /* REVISIT: 50% duty-cycle for now - may add config parameter later */
+        wf.duty_length_ns = wf.period_length_ns / 2;
+
+        return pwm_set_waveform_might_sleep(st->pwm, &wf, false);
+}
+
+static void spi_offload_trigger_pwm_disable(struct spi_offload_trigger *trigger)
+{
+        struct spi_offload_trigger_pwm_state *st = spi_offload_trigger_get_priv(trigger);
+        struct pwm_waveform wf;
+        int ret;
+
+        ret = pwm_get_waveform_might_sleep(st->pwm, &wf);
+        if (ret < 0) {
+                dev_err(st->dev, "failed to get waveform: %d\n", ret);
+                return;
+        }
+
+        wf.duty_length_ns = 0;
+
+        ret = pwm_set_waveform_might_sleep(st->pwm, &wf, false);
+        if (ret < 0)
+                dev_err(st->dev, "failed to disable PWM: %d\n", ret);
+}
+
+static const struct spi_offload_trigger_ops spi_offload_trigger_pwm_ops = {
+        .match = spi_offload_trigger_pwm_match,
+        .validate = spi_offload_trigger_pwm_validate,
+        .enable = spi_offload_trigger_pwm_enable,
+        .disable = spi_offload_trigger_pwm_disable,
+};
+
+static void spi_offload_trigger_pwm_release(void *data)
+{
+        pwm_disable(data);
+}
+
+static int spi_offload_trigger_pwm_probe(struct platform_device *pdev)
+{
+        struct device *dev = &pdev->dev;
+        struct spi_offload_trigger_info info = {
+                .fwnode = dev_fwnode(dev),
+                .ops = &spi_offload_trigger_pwm_ops,
+        };
+        struct spi_offload_trigger_pwm_state *st;
+        struct pwm_state state;
+        int ret;
+
+        st = devm_kzalloc(dev, sizeof(*st), GFP_KERNEL);
+        if (!st)
+                return -ENOMEM;
+
+        info.priv = st;
+        st->dev = dev;
+
+        st->pwm = devm_pwm_get(dev, NULL);
+        if (IS_ERR(st->pwm))
+                return dev_err_probe(dev, PTR_ERR(st->pwm), "failed to get PWM\n");
+
+        /* init with duty_cycle = 0, output enabled to ensure trigger off */
+        pwm_init_state(st->pwm, &state);
+        state.enabled = true;
+
+        ret = pwm_apply_might_sleep(st->pwm, &state);
+        if (ret < 0)
+                return dev_err_probe(dev, ret, "failed to apply PWM state\n");
+
+        ret = devm_add_action_or_reset(dev, spi_offload_trigger_pwm_release, st->pwm);
+        if (ret)
+                return ret;
+
+        return devm_spi_offload_trigger_register(dev, &info);
+}
+
+static const struct of_device_id spi_offload_trigger_pwm_of_match_table[] = {
+        { .compatible = "pwm-trigger" },
+        { }
+};
+MODULE_DEVICE_TABLE(of, spi_offload_trigger_pwm_of_match_table);
+
+static struct platform_driver spi_offload_trigger_pwm_driver = {
+        .driver = {
+                .name = "pwm-trigger",
+                .of_match_table = spi_offload_trigger_pwm_of_match_table,
+        },
+        .probe = spi_offload_trigger_pwm_probe,
+};
+module_platform_driver(spi_offload_trigger_pwm_driver);
+
+MODULE_AUTHOR("David Lechner <dlechner@baylibre.com>");
+MODULE_DESCRIPTION("Generic PWM trigger");
+MODULE_LICENSE("GPL");
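
Both the validate and enable callbacks above turn the requested frequency
into a PWM period with DIV_ROUND_UP_ULL(), and validate then reports back
the frequency implied by the rounded period. That arithmetic, minus the
hardware rounding done by pwm_round_waveform_might_sleep(), can be shown
in stand-alone C (example values are illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define NSEC_PER_SEC 1000000000ULL

/* DIV_ROUND_UP_ULL equivalent used by the driver */
uint64_t div_round_up(uint64_t n, uint64_t d)
{
        return (n + d - 1) / d;
}

/* Frequency -> period in ns, as in spi_offload_trigger_pwm_validate() */
uint64_t freq_to_period_ns(uint64_t hz)
{
        return div_round_up(NSEC_PER_SEC, hz);
}

/* Period back to the effective frequency reported to the caller */
uint64_t period_ns_to_freq(uint64_t ns)
{
        return div_round_up(NSEC_PER_SEC, ns);
}
```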
+465
drivers/spi/spi-offload.c
···
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2024 Analog Devices Inc.
+ * Copyright (C) 2024 BayLibre, SAS
+ */
+
+/*
+ * SPI Offloading support.
+ *
+ * Some SPI controllers support offloading of SPI transfers. Essentially, this
+ * is the ability for a SPI controller to perform SPI transfers with minimal
+ * or even no CPU intervention, e.g. via a specialized SPI controller with a
+ * hardware trigger or via a conventional SPI controller using a non-Linux MCU
+ * processor core to offload the work.
+ */
+
+#define DEFAULT_SYMBOL_NAMESPACE "SPI_OFFLOAD"
+
+#include <linux/cleanup.h>
+#include <linux/device.h>
+#include <linux/dmaengine.h>
+#include <linux/export.h>
+#include <linux/kref.h>
+#include <linux/list.h>
+#include <linux/mutex.h>
+#include <linux/of.h>
+#include <linux/property.h>
+#include <linux/spi/offload/consumer.h>
+#include <linux/spi/offload/provider.h>
+#include <linux/spi/offload/types.h>
+#include <linux/spi/spi.h>
+#include <linux/types.h>
+
+struct spi_controller_and_offload {
+        struct spi_controller *controller;
+        struct spi_offload *offload;
+};
+
+struct spi_offload_trigger {
+        struct list_head list;
+        struct kref ref;
+        struct fwnode_handle *fwnode;
+        /* synchronizes calling ops and driver registration */
+        struct mutex lock;
+        /*
+         * If the provider goes away while the consumer still has a reference,
+         * ops and priv will be set to NULL and all calls will fail with -ENODEV.
+         */
+        const struct spi_offload_trigger_ops *ops;
+        void *priv;
+};
+
+static LIST_HEAD(spi_offload_triggers);
+static DEFINE_MUTEX(spi_offload_triggers_lock);
+
+/**
+ * devm_spi_offload_alloc() - Allocate offload instance
+ * @dev: Device for devm purposes and assigned to &struct spi_offload.provider_dev
+ * @priv_size: Size of private data to allocate
+ *
+ * Offload providers should use this to allocate offload instances.
+ *
+ * Return: Pointer to new offload instance or error on failure.
+ */
+struct spi_offload *devm_spi_offload_alloc(struct device *dev,
+                                           size_t priv_size)
+{
+        struct spi_offload *offload;
+        void *priv;
+
+        offload = devm_kzalloc(dev, sizeof(*offload), GFP_KERNEL);
+        if (!offload)
+                return ERR_PTR(-ENOMEM);
+
+        priv = devm_kzalloc(dev, priv_size, GFP_KERNEL);
+        if (!priv)
+                return ERR_PTR(-ENOMEM);
+
+        offload->provider_dev = dev;
+        offload->priv = priv;
+
+        return offload;
+}
+EXPORT_SYMBOL_GPL(devm_spi_offload_alloc);
+
+static void spi_offload_put(void *data)
+{
+        struct spi_controller_and_offload *resource = data;
+
+        resource->controller->put_offload(resource->offload);
+        kfree(resource);
+}
+
+/**
+ * devm_spi_offload_get() - Get an offload instance
+ * @dev: Device for devm purposes
+ * @spi: SPI device to use for the transfers
+ * @config: Offload configuration
+ *
+ * Peripheral drivers call this function to get an offload instance that meets
+ * the requirements specified in @config. If no suitable offload instance is
+ * available, -ENODEV is returned.
+ *
+ * Return: Offload instance or error on failure.
+ */
+struct spi_offload *devm_spi_offload_get(struct device *dev,
+                                         struct spi_device *spi,
+                                         const struct spi_offload_config *config)
+{
+        struct spi_controller_and_offload *resource;
+        int ret;
+
+        if (!spi || !config)
+                return ERR_PTR(-EINVAL);
+
+        if (!spi->controller->get_offload)
+                return ERR_PTR(-ENODEV);
+
+        resource = kzalloc(sizeof(*resource), GFP_KERNEL);
+        if (!resource)
+                return ERR_PTR(-ENOMEM);
+
+        resource->controller = spi->controller;
+        resource->offload = spi->controller->get_offload(spi, config);
+        if (IS_ERR(resource->offload)) {
+                kfree(resource);
+                return resource->offload;
+        }
+
+        ret = devm_add_action_or_reset(dev, spi_offload_put, resource);
+        if (ret)
+                return ERR_PTR(ret);
+
+        return resource->offload;
+}
+EXPORT_SYMBOL_GPL(devm_spi_offload_get);
+
+static void spi_offload_trigger_free(struct kref *ref)
+{
+        struct spi_offload_trigger *trigger =
+                container_of(ref, struct spi_offload_trigger, ref);
+
+        mutex_destroy(&trigger->lock);
+        fwnode_handle_put(trigger->fwnode);
+        kfree(trigger);
+}
+
+static void spi_offload_trigger_put(void *data)
+{
+        struct spi_offload_trigger *trigger = data;
+
+        scoped_guard(mutex, &trigger->lock)
+                if (trigger->ops && trigger->ops->release)
+                        trigger->ops->release(trigger);
+
+        kref_put(&trigger->ref, spi_offload_trigger_free);
+}
+
+static struct spi_offload_trigger
+*spi_offload_trigger_get(enum spi_offload_trigger_type type,
+                         struct fwnode_reference_args *args)
+{
+        struct spi_offload_trigger *trigger;
+        bool match = false;
+        int ret;
+
+        guard(mutex)(&spi_offload_triggers_lock);
+
+        list_for_each_entry(trigger, &spi_offload_triggers, list) {
+                if (trigger->fwnode != args->fwnode)
+                        continue;
+
+                match = trigger->ops->match(trigger, type, args->args, args->nargs);
+                if (match)
+                        break;
+        }
+
+        if (!match)
+                return ERR_PTR(-EPROBE_DEFER);
+
+        guard(mutex)(&trigger->lock);
+
+        if (!trigger->ops)
+                return ERR_PTR(-ENODEV);
+
+        if (trigger->ops->request) {
+                ret = trigger->ops->request(trigger, type, args->args, args->nargs);
+                if (ret)
+                        return ERR_PTR(ret);
+        }
+
+        kref_get(&trigger->ref);
+
+        return trigger;
+}
+
+/**
+ * devm_spi_offload_trigger_get() - Get an offload trigger instance
+ * @dev: Device for devm purposes.
+ * @offload: Offload instance connected to a trigger.
+ * @type: Trigger type to get.
+ *
+ * Return: Offload trigger instance or error on failure.
+ */
+struct spi_offload_trigger
+*devm_spi_offload_trigger_get(struct device *dev,
+                              struct spi_offload *offload,
+                              enum spi_offload_trigger_type type)
+{
+        struct spi_offload_trigger *trigger;
+        struct fwnode_reference_args args;
+        int ret;
+
+        ret = fwnode_property_get_reference_args(dev_fwnode(offload->provider_dev),
+                                                 "trigger-sources",
+                                                 "#trigger-source-cells", 0, 0,
+                                                 &args);
+        if (ret)
+                return ERR_PTR(ret);
+
+        trigger = spi_offload_trigger_get(type, &args);
+        fwnode_handle_put(args.fwnode);
+        if (IS_ERR(trigger))
+                return trigger;
+
+        ret = devm_add_action_or_reset(dev, spi_offload_trigger_put, trigger);
+        if (ret)
+                return ERR_PTR(ret);
+
+        return trigger;
+}
+EXPORT_SYMBOL_GPL(devm_spi_offload_trigger_get);
+
+/**
+ * spi_offload_trigger_validate - Validate the requested trigger
+ * @trigger: Offload trigger instance
+ * @config: Trigger config to validate
+ *
+ * On success, @config may be modified to reflect what the hardware can do.
+ * For example, the frequency of a periodic trigger may be adjusted to the
+ * nearest supported value.
+ *
+ * Callers will likely need to do additional validation of the modified trigger
+ * parameters.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int spi_offload_trigger_validate(struct spi_offload_trigger *trigger,
+                                 struct spi_offload_trigger_config *config)
+{
+        guard(mutex)(&trigger->lock);
+
+        if (!trigger->ops)
+                return -ENODEV;
+
+        if (!trigger->ops->validate)
+                return -EOPNOTSUPP;
+
+        return trigger->ops->validate(trigger, config);
+}
+EXPORT_SYMBOL_GPL(spi_offload_trigger_validate);
+
+/**
+ * spi_offload_trigger_enable - enables trigger for offload
+ * @offload: Offload instance
+ * @trigger: Offload trigger instance
+ * @config: Trigger config to validate
+ *
+ * There must be a prepared offload instance with the specified ID (i.e.
+ * spi_optimize_message() was called with the same offload assigned to the
+ * message). This will also reserve the bus for exclusive use by the offload
+ * instance until the trigger is disabled. Any other attempts to send a
+ * transfer or lock the bus will fail with -EBUSY during this time.
+ *
+ * Calls must be balanced with spi_offload_trigger_disable().
+ *
+ * Context: can sleep
+ * Return: 0 on success, else a negative error code.
279 + */ 280 + int spi_offload_trigger_enable(struct spi_offload *offload, 281 + struct spi_offload_trigger *trigger, 282 + struct spi_offload_trigger_config *config) 283 + { 284 + int ret; 285 + 286 + guard(mutex)(&trigger->lock); 287 + 288 + if (!trigger->ops) 289 + return -ENODEV; 290 + 291 + if (offload->ops && offload->ops->trigger_enable) { 292 + ret = offload->ops->trigger_enable(offload); 293 + if (ret) 294 + return ret; 295 + } 296 + 297 + if (trigger->ops->enable) { 298 + ret = trigger->ops->enable(trigger, config); 299 + if (ret) { 300 + if (offload->ops->trigger_disable) 301 + offload->ops->trigger_disable(offload); 302 + return ret; 303 + } 304 + } 305 + 306 + return 0; 307 + } 308 + EXPORT_SYMBOL_GPL(spi_offload_trigger_enable); 309 + 310 + /** 311 + * spi_offload_trigger_disable - disables hardware trigger for offload 312 + * @offload: Offload instance 313 + * @trigger: Offload trigger instance 314 + * 315 + * Disables the hardware trigger for the offload instance with the specified ID 316 + * and releases the bus for use by other clients. 317 + * 318 + * Context: can sleep 319 + */ 320 + void spi_offload_trigger_disable(struct spi_offload *offload, 321 + struct spi_offload_trigger *trigger) 322 + { 323 + if (offload->ops && offload->ops->trigger_disable) 324 + offload->ops->trigger_disable(offload); 325 + 326 + guard(mutex)(&trigger->lock); 327 + 328 + if (!trigger->ops) 329 + return; 330 + 331 + if (trigger->ops->disable) 332 + trigger->ops->disable(trigger); 333 + } 334 + EXPORT_SYMBOL_GPL(spi_offload_trigger_disable); 335 + 336 + static void spi_offload_release_dma_chan(void *chan) 337 + { 338 + dma_release_channel(chan); 339 + } 340 + 341 + /** 342 + * devm_spi_offload_tx_stream_request_dma_chan - Get the DMA channel info for the TX stream 343 + * @dev: Device for devm purposes. 
344 + * @offload: Offload instance 345 + * 346 + * This is the DMA channel that will provide data to transfers that use the 347 + * %SPI_OFFLOAD_XFER_TX_STREAM offload flag. 348 + * 349 + * Return: Pointer to DMA channel info, or negative error code 350 + */ 351 + struct dma_chan 352 + *devm_spi_offload_tx_stream_request_dma_chan(struct device *dev, 353 + struct spi_offload *offload) 354 + { 355 + struct dma_chan *chan; 356 + int ret; 357 + 358 + if (!offload->ops || !offload->ops->tx_stream_request_dma_chan) 359 + return ERR_PTR(-EOPNOTSUPP); 360 + 361 + chan = offload->ops->tx_stream_request_dma_chan(offload); 362 + if (IS_ERR(chan)) 363 + return chan; 364 + 365 + ret = devm_add_action_or_reset(dev, spi_offload_release_dma_chan, chan); 366 + if (ret) 367 + return ERR_PTR(ret); 368 + 369 + return chan; 370 + } 371 + EXPORT_SYMBOL_GPL(devm_spi_offload_tx_stream_request_dma_chan); 372 + 373 + /** 374 + * devm_spi_offload_rx_stream_request_dma_chan - Get the DMA channel info for the RX stream 375 + * @dev: Device for devm purposes. 376 + * @offload: Offload instance 377 + * 378 + * This is the DMA channel that will receive data from transfers that use the 379 + * %SPI_OFFLOAD_XFER_RX_STREAM offload flag. 
380 + * 381 + * Return: Pointer to DMA channel info, or negative error code 382 + */ 383 + struct dma_chan 384 + *devm_spi_offload_rx_stream_request_dma_chan(struct device *dev, 385 + struct spi_offload *offload) 386 + { 387 + struct dma_chan *chan; 388 + int ret; 389 + 390 + if (!offload->ops || !offload->ops->rx_stream_request_dma_chan) 391 + return ERR_PTR(-EOPNOTSUPP); 392 + 393 + chan = offload->ops->rx_stream_request_dma_chan(offload); 394 + if (IS_ERR(chan)) 395 + return chan; 396 + 397 + ret = devm_add_action_or_reset(dev, spi_offload_release_dma_chan, chan); 398 + if (ret) 399 + return ERR_PTR(ret); 400 + 401 + return chan; 402 + } 403 + EXPORT_SYMBOL_GPL(devm_spi_offload_rx_stream_request_dma_chan); 404 + 405 + /* Triggers providers */ 406 + 407 + static void spi_offload_trigger_unregister(void *data) 408 + { 409 + struct spi_offload_trigger *trigger = data; 410 + 411 + scoped_guard(mutex, &spi_offload_triggers_lock) 412 + list_del(&trigger->list); 413 + 414 + scoped_guard(mutex, &trigger->lock) { 415 + trigger->priv = NULL; 416 + trigger->ops = NULL; 417 + } 418 + 419 + kref_put(&trigger->ref, spi_offload_trigger_free); 420 + } 421 + 422 + /** 423 + * devm_spi_offload_trigger_register() - Allocate and register an offload trigger 424 + * @dev: Device for devm purposes. 425 + * @info: Provider-specific trigger info. 426 + * 427 + * Return: 0 on success, else a negative error code. 
428 + */ 429 + int devm_spi_offload_trigger_register(struct device *dev, 430 + struct spi_offload_trigger_info *info) 431 + { 432 + struct spi_offload_trigger *trigger; 433 + 434 + if (!info->fwnode || !info->ops) 435 + return -EINVAL; 436 + 437 + trigger = kzalloc(sizeof(*trigger), GFP_KERNEL); 438 + if (!trigger) 439 + return -ENOMEM; 440 + 441 + kref_init(&trigger->ref); 442 + mutex_init(&trigger->lock); 443 + trigger->fwnode = fwnode_handle_get(info->fwnode); 444 + trigger->ops = info->ops; 445 + trigger->priv = info->priv; 446 + 447 + scoped_guard(mutex, &spi_offload_triggers_lock) 448 + list_add_tail(&trigger->list, &spi_offload_triggers); 449 + 450 + return devm_add_action_or_reset(dev, spi_offload_trigger_unregister, trigger); 451 + } 452 + EXPORT_SYMBOL_GPL(devm_spi_offload_trigger_register); 453 + 454 + /** 455 + * spi_offload_trigger_get_priv() - Get the private data for the trigger 456 + * 457 + * @trigger: Offload trigger instance. 458 + * 459 + * Return: Private data for the trigger. 460 + */ 461 + void *spi_offload_trigger_get_priv(struct spi_offload_trigger *trigger) 462 + { 463 + return trigger->priv; 464 + } 465 + EXPORT_SYMBOL_GPL(spi_offload_trigger_get_priv);
+10
drivers/spi/spi.c
··· 31 31 #include <linux/ptp_clock_kernel.h> 32 32 #include <linux/sched/rt.h> 33 33 #include <linux/slab.h> 34 + #include <linux/spi/offload/types.h> 34 35 #include <linux/spi/spi.h> 35 36 #include <linux/spi/spi-mem.h> 36 37 #include <uapi/linux/sched/types.h> ··· 4156 4155 4157 4156 if (_spi_xfer_word_delay_update(xfer, spi)) 4158 4157 return -EINVAL; 4158 + 4159 + /* Make sure controller supports required offload features. */ 4160 + if (xfer->offload_flags) { 4161 + if (!message->offload) 4162 + return -EINVAL; 4163 + 4164 + if (xfer->offload_flags & ~message->offload->xfer_flags) 4165 + return -EINVAL; 4166 + } 4159 4167 } 4160 4168 4161 4169 message->status = -EINPROGRESS;
+39
include/linux/spi/offload/consumer.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + /* 3 + * Copyright (C) 2024 Analog Devices Inc. 4 + * Copyright (C) 2024 BayLibre, SAS 5 + */ 6 + 7 + #ifndef __LINUX_SPI_OFFLOAD_CONSUMER_H 8 + #define __LINUX_SPI_OFFLOAD_CONSUMER_H 9 + 10 + #include <linux/module.h> 11 + #include <linux/spi/offload/types.h> 12 + #include <linux/types.h> 13 + 14 + MODULE_IMPORT_NS("SPI_OFFLOAD"); 15 + 16 + struct device; 17 + struct spi_device; 18 + 19 + struct spi_offload *devm_spi_offload_get(struct device *dev, struct spi_device *spi, 20 + const struct spi_offload_config *config); 21 + 22 + struct spi_offload_trigger 23 + *devm_spi_offload_trigger_get(struct device *dev, 24 + struct spi_offload *offload, 25 + enum spi_offload_trigger_type type); 26 + int spi_offload_trigger_validate(struct spi_offload_trigger *trigger, 27 + struct spi_offload_trigger_config *config); 28 + int spi_offload_trigger_enable(struct spi_offload *offload, 29 + struct spi_offload_trigger *trigger, 30 + struct spi_offload_trigger_config *config); 31 + void spi_offload_trigger_disable(struct spi_offload *offload, 32 + struct spi_offload_trigger *trigger); 33 + 34 + struct dma_chan *devm_spi_offload_tx_stream_request_dma_chan(struct device *dev, 35 + struct spi_offload *offload); 36 + struct dma_chan *devm_spi_offload_rx_stream_request_dma_chan(struct device *dev, 37 + struct spi_offload *offload); 38 + 39 + #endif /* __LINUX_SPI_OFFLOAD_CONSUMER_H */
+47
include/linux/spi/offload/provider.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + /* 3 + * Copyright (C) 2024 Analog Devices Inc. 4 + * Copyright (C) 2024 BayLibre, SAS 5 + */ 6 + 7 + #ifndef __LINUX_SPI_OFFLOAD_PROVIDER_H 8 + #define __LINUX_SPI_OFFLOAD_PROVIDER_H 9 + 10 + #include <linux/module.h> 11 + #include <linux/spi/offload/types.h> 12 + #include <linux/types.h> 13 + 14 + MODULE_IMPORT_NS("SPI_OFFLOAD"); 15 + 16 + struct device; 17 + struct spi_offload_trigger; 18 + 19 + struct spi_offload *devm_spi_offload_alloc(struct device *dev, size_t priv_size); 20 + 21 + struct spi_offload_trigger_ops { 22 + bool (*match)(struct spi_offload_trigger *trigger, 23 + enum spi_offload_trigger_type type, u64 *args, u32 nargs); 24 + int (*request)(struct spi_offload_trigger *trigger, 25 + enum spi_offload_trigger_type type, u64 *args, u32 nargs); 26 + void (*release)(struct spi_offload_trigger *trigger); 27 + int (*validate)(struct spi_offload_trigger *trigger, 28 + struct spi_offload_trigger_config *config); 29 + int (*enable)(struct spi_offload_trigger *trigger, 30 + struct spi_offload_trigger_config *config); 31 + void (*disable)(struct spi_offload_trigger *trigger); 32 + }; 33 + 34 + struct spi_offload_trigger_info { 35 + /** @fwnode: Provider fwnode, used to match to consumer. */ 36 + struct fwnode_handle *fwnode; 37 + /** @ops: Provider-specific callbacks. */ 38 + const struct spi_offload_trigger_ops *ops; 39 + /** Provider-specific state to be used in callbacks. */ 40 + void *priv; 41 + }; 42 + 43 + int devm_spi_offload_trigger_register(struct device *dev, 44 + struct spi_offload_trigger_info *info); 45 + void *spi_offload_trigger_get_priv(struct spi_offload_trigger *trigger); 46 + 47 + #endif /* __LINUX_SPI_OFFLOAD_PROVIDER_H */
+99
include/linux/spi/offload/types.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + /* 3 + * Copyright (C) 2024 Analog Devices Inc. 4 + * Copyright (C) 2024 BayLibre, SAS 5 + */ 6 + 7 + #ifndef __LINUX_SPI_OFFLOAD_TYPES_H 8 + #define __LINUX_SPI_OFFLOAD_TYPES_H 9 + 10 + #include <linux/types.h> 11 + 12 + struct device; 13 + 14 + /* This is write xfer but TX uses external data stream rather than tx_buf. */ 15 + #define SPI_OFFLOAD_XFER_TX_STREAM BIT(0) 16 + /* This is read xfer but RX uses external data stream rather than rx_buf. */ 17 + #define SPI_OFFLOAD_XFER_RX_STREAM BIT(1) 18 + 19 + /* Offload can be triggered by external hardware event. */ 20 + #define SPI_OFFLOAD_CAP_TRIGGER BIT(0) 21 + /* Offload can record and then play back TX data when triggered. */ 22 + #define SPI_OFFLOAD_CAP_TX_STATIC_DATA BIT(1) 23 + /* Offload can get TX data from an external stream source. */ 24 + #define SPI_OFFLOAD_CAP_TX_STREAM_DMA BIT(2) 25 + /* Offload can send RX data to an external stream sink. */ 26 + #define SPI_OFFLOAD_CAP_RX_STREAM_DMA BIT(3) 27 + 28 + /** 29 + * struct spi_offload_config - offload configuration 30 + * 31 + * This is used to request an offload with specific configuration. 32 + */ 33 + struct spi_offload_config { 34 + /** @capability_flags: required capabilities. See %SPI_OFFLOAD_CAP_* */ 35 + u32 capability_flags; 36 + }; 37 + 38 + /** 39 + * struct spi_offload - offload instance 40 + */ 41 + struct spi_offload { 42 + /** @provider_dev: for get/put reference counting */ 43 + struct device *provider_dev; 44 + /** @priv: provider driver private data */ 45 + void *priv; 46 + /** @ops: callbacks for offload support */ 47 + const struct spi_offload_ops *ops; 48 + /** @xfer_flags: %SPI_OFFLOAD_XFER_* flags supported by provider */ 49 + u32 xfer_flags; 50 + }; 51 + 52 + enum spi_offload_trigger_type { 53 + /* Indication from SPI peripheral that data is read to read. */ 54 + SPI_OFFLOAD_TRIGGER_DATA_READY, 55 + /* Trigger comes from a periodic source such as a clock. 
*/ 56 + SPI_OFFLOAD_TRIGGER_PERIODIC, 57 + }; 58 + 59 + struct spi_offload_trigger_periodic { 60 + u64 frequency_hz; 61 + }; 62 + 63 + struct spi_offload_trigger_config { 64 + /** @type: type discriminator for union */ 65 + enum spi_offload_trigger_type type; 66 + union { 67 + struct spi_offload_trigger_periodic periodic; 68 + }; 69 + }; 70 + 71 + /** 72 + * struct spi_offload_ops - callbacks implemented by offload providers 73 + */ 74 + struct spi_offload_ops { 75 + /** 76 + * @trigger_enable: Optional callback to enable the trigger for the 77 + * given offload instance. 78 + */ 79 + int (*trigger_enable)(struct spi_offload *offload); 80 + /** 81 + * @trigger_disable: Optional callback to disable the trigger for the 82 + * given offload instance. 83 + */ 84 + void (*trigger_disable)(struct spi_offload *offload); 85 + /** 86 + * @tx_stream_request_dma_chan: Optional callback for controllers that 87 + * have an offload where the TX data stream is connected directly to a 88 + * DMA channel. 89 + */ 90 + struct dma_chan *(*tx_stream_request_dma_chan)(struct spi_offload *offload); 91 + /** 92 + * @rx_stream_request_dma_chan: Optional callback for controllers that 93 + * have an offload where the RX data stream is connected directly to a 94 + * DMA channel. 95 + */ 96 + struct dma_chan *(*rx_stream_request_dma_chan)(struct spi_offload *offload); 97 + }; 98 + 99 + #endif /* __LINUX_SPI_OFFLOAD_TYPES_H */
+20
include/linux/spi/spi.h
··· 31 31 struct spi_controller_mem_ops; 32 32 struct spi_controller_mem_caps; 33 33 struct spi_message; 34 + struct spi_offload; 35 + struct spi_offload_config; 34 36 35 37 /* 36 38 * INTERFACES between SPI master-side drivers and SPI slave protocol handlers, ··· 498 496 * @mem_ops: optimized/dedicated operations for interactions with SPI memory. 499 497 * This field is optional and should only be implemented if the 500 498 * controller has native support for memory like operations. 499 + * @get_offload: callback for controllers with offload support to get matching 500 + * offload instance. Implementations should return -ENODEV if no match is 501 + * found. 502 + * @put_offload: release the offload instance acquired by @get_offload. 501 503 * @mem_caps: controller capabilities for the handling of memory operations. 502 504 * @unprepare_message: undo any work done by prepare_message(). 503 505 * @target_abort: abort the ongoing transfer request on an SPI target controller ··· 745 739 /* Optimized handlers for SPI memory-like operations. 
*/ 746 740 const struct spi_controller_mem_ops *mem_ops; 747 741 const struct spi_controller_mem_caps *mem_caps; 742 + 743 + struct spi_offload *(*get_offload)(struct spi_device *spi, 744 + const struct spi_offload_config *config); 745 + void (*put_offload)(struct spi_offload *offload); 748 746 749 747 /* GPIO chip select */ 750 748 struct gpio_desc **cs_gpiods; ··· 1093 1083 1094 1084 u32 effective_speed_hz; 1095 1085 1086 + /* Use %SPI_OFFLOAD_XFER_* from spi-offload.h */ 1087 + unsigned int offload_flags; 1088 + 1096 1089 unsigned int ptp_sts_word_pre; 1097 1090 unsigned int ptp_sts_word_post; 1098 1091 ··· 1121 1108 * @state: for use by whichever driver currently owns the message 1122 1109 * @opt_state: for use by whichever driver currently owns the message 1123 1110 * @resources: for resource management when the SPI message is processed 1111 + * @offload: (optional) offload instance used by this message 1124 1112 * 1125 1113 * A @spi_message is used to execute an atomic sequence of data transfers, 1126 1114 * each represented by a struct spi_transfer. The sequence is "atomic" ··· 1181 1167 * __spi_optimize_message() and __spi_unoptimize_message(). 1182 1168 */ 1183 1169 void *opt_state; 1170 + 1171 + /* 1172 + * Optional offload instance used by this message. This must be set 1173 + * by the peripheral driver before calling spi_optimize_message(). 1174 + */ 1175 + struct spi_offload *offload; 1184 1176 1185 1177 /* List of spi_res resources when the SPI message is processed */ 1186 1178 struct list_head resources;