Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'dmaengine-4.14-rc1' of git://git.infradead.org/users/vkoul/slave-dma

Pull dmaengine updates from Vinod Koul:
"This one features the usual updates to the drivers and one good part
of removing DMA_SG from the core as it has no users.

Summary:

- Remove DMA_SG support as we have no users for this feature
- New driver for Altera / Intel mSGDMA IP core
- Support for memset in dmatest and qcom_hidma driver
- Updates for non-cyclic mode in k3dma, and a bunch of updates in
bam_dma and bcm sba-raid
- Constify device ids across drivers"

* tag 'dmaengine-4.14-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (52 commits)
dmaengine: sun6i: support V3s SoC variant
dmaengine: sun6i: make gate bit in sun8i's DMA engines a common quirk
dmaengine: rcar-dmac: document R8A77970 bindings
dmaengine: xilinx_dma: Fix error code format specifier
dmaengine: altera: Use macros instead of structs to describe the registers
dmaengine: ti-dma-crossbar: Fix dra7 reserve function
dmaengine: pl330: constify amba_id
dmaengine: pl08x: constify amba_id
dmaengine: bcm-sba-raid: Remove redundant SBA_REQUEST_STATE_COMPLETED
dmaengine: bcm-sba-raid: Explicitly ACK mailbox message after sending
dmaengine: bcm-sba-raid: Add debugfs support
dmaengine: bcm-sba-raid: Remove redundant SBA_REQUEST_STATE_RECEIVED
dmaengine: bcm-sba-raid: Re-factor sba_process_deferred_requests()
dmaengine: bcm-sba-raid: Pre-ack async tx descriptor
dmaengine: bcm-sba-raid: Peek mbox when we have no free requests
dmaengine: bcm-sba-raid: Alloc resources before registering DMA device
dmaengine: bcm-sba-raid: Improve sba_issue_pending() run duration
dmaengine: bcm-sba-raid: Increase number of free sba_request
dmaengine: bcm-sba-raid: Allow arbitrary number free sba_request
dmaengine: bcm-sba-raid: Remove reqs_free_count from sba_device
...

+1742 -1220
+30
Documentation/ABI/stable/sysfs-driver-dma-ioatdma
What:		sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/cap
Date:		December 3, 2009
KernelVersion:	2.6.32
Contact:	dmaengine@vger.kernel.org
Description:	Capabilities the DMA supports. Currently these are DMA_PQ,
		DMA_PQ_VAL, DMA_XOR, DMA_XOR_VAL and DMA_INTERRUPT.

What:		sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/ring_active
Date:		December 3, 2009
KernelVersion:	2.6.32
Contact:	dmaengine@vger.kernel.org
Description:	The number of descriptors active in the ring.

What:		sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/ring_size
Date:		December 3, 2009
KernelVersion:	2.6.32
Contact:	dmaengine@vger.kernel.org
Description:	Descriptor ring size, total number of descriptors available.

What:		sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/version
Date:		December 3, 2009
KernelVersion:	2.6.32
Contact:	dmaengine@vger.kernel.org
Description:	Version of ioatdma device.

What:		sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/intr_coalesce
Date:		August 8, 2017
KernelVersion:	4.14
Contact:	dmaengine@vger.kernel.org
Description:	Tunable interrupt delay value, on a per-channel basis.
+1
Documentation/devicetree/bindings/dma/renesas,rcar-dmac.txt
 		      - "renesas,dmac-r8a7794" (R-Car E2)
 		      - "renesas,dmac-r8a7795" (R-Car H3)
 		      - "renesas,dmac-r8a7796" (R-Car M3-W)
+		      - "renesas,dmac-r8a77970" (R-Car V3M)

 - reg: base address and length of the registers block for the DMAC
+1
Documentation/devicetree/bindings/dma/renesas,usb-dmac.txt
 		      - "renesas,r8a7793-usb-dmac" (R-Car M2-N)
 		      - "renesas,r8a7794-usb-dmac" (R-Car E2)
 		      - "renesas,r8a7795-usb-dmac" (R-Car H3)
+		      - "renesas,r8a7796-usb-dmac" (R-Car M3-W)
 - reg: base address and length of the registers block for the DMAC
 - interrupts: interrupt specifiers for the DMAC, one for each entry in
   interrupt-names.
+1
Documentation/devicetree/bindings/dma/sun6i-dma.txt
 		  "allwinner,sun8i-a23-dma"
 		  "allwinner,sun8i-a83t-dma"
 		  "allwinner,sun8i-h3-dma"
+		  "allwinner,sun8i-v3s-dma"
 - reg: Should contain the registers base address and length
 - interrupts: Should contain a reference to the interrupt used by this device
 - clocks: Should contain a reference to the parent AHB clock
+7 -7
Documentation/dmaengine/provider.txt
     - Used by the client drivers to register a callback that will be
       called on a regular basis through the DMA controller interrupt

-  * DMA_SG
-    - The device supports memory to memory scatter-gather
-      transfers.
-    - Even though a plain memcpy can look like a particular case of a
-      scatter-gather transfer, with a single chunk to transfer, it's a
-      distinct transaction type in the mem2mem transfers case
-
   * DMA_PRIVATE
     - The devices only supports slave transfers, and as such isn't
       available for async transfers.
···
       when DMA_CTRL_REUSE is already set
     - Terminating the channel

+  * DMA_PREP_CMD
+    - If set, the client driver tells the DMA controller that the data
+      passed through the DMA API is command data.
+    - Interpretation of command data is DMA controller specific. It can be
+      used for issuing commands to other peripherals, or for register reads
+      or register writes for which the descriptor should be in a different
+      format from normal data descriptors.

   General Design Notes
   --------------------
-23
drivers/crypto/ccp/ccp-dmaengine.c
 	return &desc->tx_desc;
 }

-static struct dma_async_tx_descriptor *ccp_prep_dma_sg(
-	struct dma_chan *dma_chan, struct scatterlist *dst_sg,
-	unsigned int dst_nents, struct scatterlist *src_sg,
-	unsigned int src_nents, unsigned long flags)
-{
-	struct ccp_dma_chan *chan = container_of(dma_chan, struct ccp_dma_chan,
-						 dma_chan);
-	struct ccp_dma_desc *desc;
-
-	dev_dbg(chan->ccp->dev,
-		"%s - src=%p, src_nents=%u dst=%p, dst_nents=%u, flags=%#lx\n",
-		__func__, src_sg, src_nents, dst_sg, dst_nents, flags);
-
-	desc = ccp_create_desc(dma_chan, dst_sg, dst_nents, src_sg, src_nents,
-			       flags);
-	if (!desc)
-		return NULL;
-
-	return &desc->tx_desc;
-}
-
 static struct dma_async_tx_descriptor *ccp_prep_dma_interrupt(
 	struct dma_chan *dma_chan, unsigned long flags)
 {
···
 	dma_dev->directions = DMA_MEM_TO_MEM;
 	dma_dev->residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
 	dma_cap_set(DMA_MEMCPY, dma_dev->cap_mask);
-	dma_cap_set(DMA_SG, dma_dev->cap_mask);
 	dma_cap_set(DMA_INTERRUPT, dma_dev->cap_mask);

 	/* The DMA channels for this device can be set to public or private,
···
 	dma_dev->device_free_chan_resources = ccp_free_chan_resources;
 	dma_dev->device_prep_dma_memcpy = ccp_prep_dma_memcpy;
-	dma_dev->device_prep_dma_sg = ccp_prep_dma_sg;
 	dma_dev->device_prep_dma_interrupt = ccp_prep_dma_interrupt;
 	dma_dev->device_issue_pending = ccp_issue_pending;
 	dma_dev->device_tx_status = ccp_tx_status;
+6
drivers/dma/Kconfig
 	select DMA_ENGINE

 #devices
+config ALTERA_MSGDMA
+	tristate "Altera / Intel mSGDMA Engine"
+	select DMA_ENGINE
+	help
+	  Enable support for Altera / Intel mSGDMA controller.
+
 config AMBA_PL08X
 	bool "ARM PrimeCell PL080 or PL081 support"
 	depends on ARM_AMBA
+1
drivers/dma/Makefile
 obj-$(CONFIG_DMATEST) += dmatest.o

 #devices
+obj-$(CONFIG_ALTERA_MSGDMA) += altera-msgdma.o
 obj-$(CONFIG_AMBA_PL08X) += amba-pl08x.o
 obj-$(CONFIG_AMCC_PPC440SPE_ADMA) += ppc4xx/
 obj-$(CONFIG_AT_HDMAC) += at_hdmac.o
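With the Kconfig and Makefile hooks above, the new driver is enabled like any other dmaengine driver; a hypothetical kernel config fragment, assuming the symbol names from this series, would be:

CONFIG_DMADEVICES=y
CONFIG_ALTERA_MSGDMA=m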
+927
drivers/dma/altera-msgdma.c
/*
 * DMA driver for Altera mSGDMA IP core
 *
 * Copyright (C) 2017 Stefan Roese <sr@denx.de>
 *
 * Based on drivers/dma/xilinx/zynqmp_dma.c, which is:
 * Copyright (C) 2016 Xilinx, Inc. All rights reserved.
 *
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 2 of the License, or
 * (at your option) any later version.
 */

#include <linux/bitops.h>
#include <linux/delay.h>
#include <linux/dma-mapping.h>
#include <linux/dmapool.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/iopoll.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/slab.h>

#include "dmaengine.h"

#define MSGDMA_MAX_TRANS_LEN		U32_MAX
#define MSGDMA_DESC_NUM			1024

/**
 * struct msgdma_extended_desc - implements an extended descriptor
 * @read_addr_lo: data buffer source address low bits
 * @write_addr_lo: data buffer destination address low bits
 * @len: the number of bytes to transfer per descriptor
 * @burst_seq_num: bit 31:24 write burst
 *		   bit 23:16 read burst
 *		   bit 15:00 sequence number
 * @stride: bit 31:16 write stride
 *	    bit 15:00 read stride
 * @read_addr_hi: data buffer source address high bits
 * @write_addr_hi: data buffer destination address high bits
 * @control: characteristics of the transfer
 */
struct msgdma_extended_desc {
	u32 read_addr_lo;
	u32 write_addr_lo;
	u32 len;
	u32 burst_seq_num;
	u32 stride;
	u32 read_addr_hi;
	u32 write_addr_hi;
	u32 control;
};

/* mSGDMA descriptor control field bit definitions */
#define MSGDMA_DESC_CTL_SET_CH(x)	((x) & 0xff)
#define MSGDMA_DESC_CTL_GEN_SOP		BIT(8)
#define MSGDMA_DESC_CTL_GEN_EOP		BIT(9)
#define MSGDMA_DESC_CTL_PARK_READS	BIT(10)
#define MSGDMA_DESC_CTL_PARK_WRITES	BIT(11)
#define MSGDMA_DESC_CTL_END_ON_EOP	BIT(12)
#define MSGDMA_DESC_CTL_END_ON_LEN	BIT(13)
#define MSGDMA_DESC_CTL_TR_COMP_IRQ	BIT(14)
#define MSGDMA_DESC_CTL_EARLY_IRQ	BIT(15)
#define MSGDMA_DESC_CTL_TR_ERR_IRQ	GENMASK(23, 16)
#define MSGDMA_DESC_CTL_EARLY_DONE	BIT(24)

/*
 * Writing "1" to the "go" bit commits the entire descriptor into the
 * descriptor FIFO(s)
 */
#define MSGDMA_DESC_CTL_GO		BIT(31)

/* Tx buffer control flags */
#define MSGDMA_DESC_CTL_TX_FIRST	(MSGDMA_DESC_CTL_GEN_SOP |	\
					 MSGDMA_DESC_CTL_TR_ERR_IRQ |	\
					 MSGDMA_DESC_CTL_GO)

#define MSGDMA_DESC_CTL_TX_MIDDLE	(MSGDMA_DESC_CTL_TR_ERR_IRQ |	\
					 MSGDMA_DESC_CTL_GO)

#define MSGDMA_DESC_CTL_TX_LAST		(MSGDMA_DESC_CTL_GEN_EOP |	\
					 MSGDMA_DESC_CTL_TR_COMP_IRQ |	\
					 MSGDMA_DESC_CTL_TR_ERR_IRQ |	\
					 MSGDMA_DESC_CTL_GO)

#define MSGDMA_DESC_CTL_TX_SINGLE	(MSGDMA_DESC_CTL_GEN_SOP |	\
					 MSGDMA_DESC_CTL_GEN_EOP |	\
					 MSGDMA_DESC_CTL_TR_COMP_IRQ |	\
					 MSGDMA_DESC_CTL_TR_ERR_IRQ |	\
					 MSGDMA_DESC_CTL_GO)

#define MSGDMA_DESC_CTL_RX_SINGLE	(MSGDMA_DESC_CTL_END_ON_EOP |	\
					 MSGDMA_DESC_CTL_END_ON_LEN |	\
					 MSGDMA_DESC_CTL_TR_COMP_IRQ |	\
					 MSGDMA_DESC_CTL_EARLY_IRQ |	\
					 MSGDMA_DESC_CTL_TR_ERR_IRQ |	\
					 MSGDMA_DESC_CTL_GO)

/* mSGDMA extended descriptor stride definitions */
#define MSGDMA_DESC_STRIDE_RD		0x00000001
#define MSGDMA_DESC_STRIDE_WR		0x00010000
#define MSGDMA_DESC_STRIDE_RW		0x00010001

/* mSGDMA dispatcher control and status register map */
#define MSGDMA_CSR_STATUS		0x00	/* Read / Clear */
#define MSGDMA_CSR_CONTROL		0x04	/* Read / Write */
#define MSGDMA_CSR_RW_FILL_LEVEL	0x08	/* 31:16 - write fill level */
						/* 15:00 - read fill level */
#define MSGDMA_CSR_RESP_FILL_LEVEL	0x0c	/* response FIFO fill level */
#define MSGDMA_CSR_RW_SEQ_NUM		0x10	/* 31:16 - write seq number */
						/* 15:00 - read seq number */

/* mSGDMA CSR status register bit definitions */
#define MSGDMA_CSR_STAT_BUSY			BIT(0)
#define MSGDMA_CSR_STAT_DESC_BUF_EMPTY		BIT(1)
#define MSGDMA_CSR_STAT_DESC_BUF_FULL		BIT(2)
#define MSGDMA_CSR_STAT_RESP_BUF_EMPTY		BIT(3)
#define MSGDMA_CSR_STAT_RESP_BUF_FULL		BIT(4)
#define MSGDMA_CSR_STAT_STOPPED			BIT(5)
#define MSGDMA_CSR_STAT_RESETTING		BIT(6)
#define MSGDMA_CSR_STAT_STOPPED_ON_ERR		BIT(7)
#define MSGDMA_CSR_STAT_STOPPED_ON_EARLY	BIT(8)
#define MSGDMA_CSR_STAT_IRQ			BIT(9)
#define MSGDMA_CSR_STAT_MASK			GENMASK(9, 0)
#define MSGDMA_CSR_STAT_MASK_WITHOUT_IRQ	GENMASK(8, 0)

#define DESC_EMPTY	(MSGDMA_CSR_STAT_DESC_BUF_EMPTY | \
			 MSGDMA_CSR_STAT_RESP_BUF_EMPTY)

/* mSGDMA CSR control register bit definitions */
#define MSGDMA_CSR_CTL_STOP		BIT(0)
#define MSGDMA_CSR_CTL_RESET		BIT(1)
#define MSGDMA_CSR_CTL_STOP_ON_ERR	BIT(2)
#define MSGDMA_CSR_CTL_STOP_ON_EARLY	BIT(3)
#define MSGDMA_CSR_CTL_GLOBAL_INTR	BIT(4)
#define MSGDMA_CSR_CTL_STOP_DESCS	BIT(5)

/* mSGDMA CSR fill level bits */
#define MSGDMA_CSR_WR_FILL_LEVEL_GET(v)		(((v) & 0xffff0000) >> 16)
#define MSGDMA_CSR_RD_FILL_LEVEL_GET(v)		((v) & 0x0000ffff)
#define MSGDMA_CSR_RESP_FILL_LEVEL_GET(v)	((v) & 0x0000ffff)

#define MSGDMA_CSR_SEQ_NUM_GET(v)		(((v) & 0xffff0000) >> 16)

/* mSGDMA response register map */
#define MSGDMA_RESP_BYTES_TRANSFERRED	0x00
#define MSGDMA_RESP_STATUS		0x04

/* mSGDMA response register bit definitions */
#define MSGDMA_RESP_EARLY_TERM	BIT(8)
#define MSGDMA_RESP_ERR_MASK	0xff

/**
 * struct msgdma_sw_desc - implements a sw descriptor
 * @async_tx: support for the async_tx api
 * @hw_desc: associated HW descriptor
 * @node: node within the free or pending descriptors list
 * @tx_list: list of chained descriptors belonging to this transaction
 */
struct msgdma_sw_desc {
	struct dma_async_tx_descriptor async_tx;
	struct msgdma_extended_desc hw_desc;
	struct list_head node;
	struct list_head tx_list;
};

/*
 * struct msgdma_device - DMA device structure
 */
struct msgdma_device {
	spinlock_t lock;
	struct device *dev;
	struct tasklet_struct irq_tasklet;
	struct list_head pending_list;
	struct list_head free_list;
	struct list_head active_list;
	struct list_head done_list;
	u32 desc_free_cnt;
	bool idle;

	struct dma_device dmadev;
	struct dma_chan dmachan;
	dma_addr_t hw_desq;
	struct msgdma_sw_desc *sw_desq;
	unsigned int npendings;

	struct dma_slave_config slave_cfg;

	int irq;

	/* mSGDMA controller */
	void __iomem *csr;

	/* mSGDMA descriptors */
	void __iomem *desc;

	/* mSGDMA response */
	void __iomem *resp;
};

#define to_mdev(chan)	container_of(chan, struct msgdma_device, dmachan)
#define tx_to_desc(tx)	container_of(tx, struct msgdma_sw_desc, async_tx)

/**
 * msgdma_get_descriptor - Get the sw descriptor from the pool
 * @mdev: Pointer to the Altera mSGDMA device structure
 *
 * Return: The sw descriptor
 */
static struct msgdma_sw_desc *msgdma_get_descriptor(struct msgdma_device *mdev)
{
	struct msgdma_sw_desc *desc;

	spin_lock_bh(&mdev->lock);
	desc = list_first_entry(&mdev->free_list, struct msgdma_sw_desc, node);
	list_del(&desc->node);
	spin_unlock_bh(&mdev->lock);

	INIT_LIST_HEAD(&desc->tx_list);

	return desc;
}

/**
 * msgdma_free_descriptor - Free a descriptor back to the pool
 * @mdev: Pointer to the Altera mSGDMA device structure
 * @desc: Transaction descriptor pointer
 */
static void msgdma_free_descriptor(struct msgdma_device *mdev,
				   struct msgdma_sw_desc *desc)
{
	struct msgdma_sw_desc *child, *next;

	mdev->desc_free_cnt++;
	list_add_tail(&desc->node, &mdev->free_list);
	list_for_each_entry_safe(child, next, &desc->tx_list, node) {
		mdev->desc_free_cnt++;
		list_move_tail(&child->node, &mdev->free_list);
	}
}

/**
 * msgdma_free_desc_list - Free descriptors list
 * @mdev: Pointer to the Altera mSGDMA device structure
 * @list: List to parse and delete the descriptor
 */
static void msgdma_free_desc_list(struct msgdma_device *mdev,
				  struct list_head *list)
{
	struct msgdma_sw_desc *desc, *next;

	list_for_each_entry_safe(desc, next, list, node)
		msgdma_free_descriptor(mdev, desc);
}

/**
 * msgdma_desc_config - Configure the descriptor
 * @desc: Hw descriptor pointer
 * @dst: Destination buffer address
 * @src: Source buffer address
 * @len: Transfer length
 * @stride: Read/write stride value to set
 */
static void msgdma_desc_config(struct msgdma_extended_desc *desc,
			       dma_addr_t dst, dma_addr_t src, size_t len,
			       u32 stride)
{
	/* Set lower 32bits of src & dst addresses in the descriptor */
	desc->read_addr_lo = lower_32_bits(src);
	desc->write_addr_lo = lower_32_bits(dst);

	/* Set upper 32bits of src & dst addresses in the descriptor */
	desc->read_addr_hi = upper_32_bits(src);
	desc->write_addr_hi = upper_32_bits(dst);

	desc->len = len;
	desc->stride = stride;
	desc->burst_seq_num = 0;	/* 0 will result in max burst length */

	/*
	 * Don't set interrupt on xfer end yet, this will be done later
	 * for the "last" descriptor
	 */
	desc->control = MSGDMA_DESC_CTL_TR_ERR_IRQ | MSGDMA_DESC_CTL_GO |
		MSGDMA_DESC_CTL_END_ON_LEN;
}

/**
 * msgdma_desc_config_eod - Mark the descriptor as end descriptor
 * @desc: Hw descriptor pointer
 */
static void msgdma_desc_config_eod(struct msgdma_extended_desc *desc)
{
	desc->control |= MSGDMA_DESC_CTL_TR_COMP_IRQ;
}

/**
 * msgdma_tx_submit - Submit DMA transaction
 * @tx: Async transaction descriptor pointer
 *
 * Return: cookie value
 */
static dma_cookie_t msgdma_tx_submit(struct dma_async_tx_descriptor *tx)
{
	struct msgdma_device *mdev = to_mdev(tx->chan);
	struct msgdma_sw_desc *new;
	dma_cookie_t cookie;

	new = tx_to_desc(tx);
	spin_lock_bh(&mdev->lock);
	cookie = dma_cookie_assign(tx);

	list_add_tail(&new->node, &mdev->pending_list);
	spin_unlock_bh(&mdev->lock);

	return cookie;
}

/**
 * msgdma_prep_memcpy - prepare descriptors for memcpy transaction
 * @dchan: DMA channel
 * @dma_dst: Destination buffer address
 * @dma_src: Source buffer address
 * @len: Transfer length
 * @flags: transfer ack flags
 *
 * Return: Async transaction descriptor on success and NULL on failure
 */
static struct dma_async_tx_descriptor *
msgdma_prep_memcpy(struct dma_chan *dchan, dma_addr_t dma_dst,
		   dma_addr_t dma_src, size_t len, ulong flags)
{
	struct msgdma_device *mdev = to_mdev(dchan);
	struct msgdma_sw_desc *new, *first = NULL;
	struct msgdma_extended_desc *desc;
	size_t copy;
	u32 desc_cnt;

	desc_cnt = DIV_ROUND_UP(len, MSGDMA_MAX_TRANS_LEN);

	spin_lock_bh(&mdev->lock);
	if (desc_cnt > mdev->desc_free_cnt) {
		spin_unlock_bh(&mdev->lock);
		dev_dbg(mdev->dev, "mdev %p descs are not available\n", mdev);
		return NULL;
	}
	mdev->desc_free_cnt -= desc_cnt;
	spin_unlock_bh(&mdev->lock);

	do {
		/* Allocate and populate the descriptor */
		new = msgdma_get_descriptor(mdev);

		copy = min_t(size_t, len, MSGDMA_MAX_TRANS_LEN);
		desc = &new->hw_desc;
		msgdma_desc_config(desc, dma_dst, dma_src, copy,
				   MSGDMA_DESC_STRIDE_RW);
		len -= copy;
		dma_src += copy;
		dma_dst += copy;
		if (!first)
			first = new;
		else
			list_add_tail(&new->node, &first->tx_list);
	} while (len);

	msgdma_desc_config_eod(desc);
	async_tx_ack(&first->async_tx);
	first->async_tx.flags = flags;

	return &first->async_tx;
}

/**
 * msgdma_prep_slave_sg - prepare descriptors for a slave sg transaction
 * @dchan: DMA channel
 * @sgl: Destination scatter list
 * @sg_len: Number of entries in destination scatter list
 * @dir: DMA transfer direction
 * @flags: transfer ack flags
 * @context: transfer context (unused)
 */
static struct dma_async_tx_descriptor *
msgdma_prep_slave_sg(struct dma_chan *dchan, struct scatterlist *sgl,
		     unsigned int sg_len, enum dma_transfer_direction dir,
		     unsigned long flags, void *context)
{
	struct msgdma_device *mdev = to_mdev(dchan);
	struct dma_slave_config *cfg = &mdev->slave_cfg;
	struct msgdma_sw_desc *new, *first = NULL;
	void *desc = NULL;
	size_t len, avail;
	dma_addr_t dma_dst, dma_src;
	u32 desc_cnt = 0, i;
	struct scatterlist *sg;
	u32 stride;

	for_each_sg(sgl, sg, sg_len, i)
		desc_cnt += DIV_ROUND_UP(sg_dma_len(sg), MSGDMA_MAX_TRANS_LEN);

	spin_lock_bh(&mdev->lock);
	if (desc_cnt > mdev->desc_free_cnt) {
		spin_unlock_bh(&mdev->lock);
		dev_dbg(mdev->dev, "mdev %p descs are not available\n", mdev);
		return NULL;
	}
	mdev->desc_free_cnt -= desc_cnt;
	spin_unlock_bh(&mdev->lock);

	avail = sg_dma_len(sgl);

	/* Run until we are out of scatterlist entries */
	while (true) {
		/* Allocate and populate the descriptor */
		new = msgdma_get_descriptor(mdev);

		desc = &new->hw_desc;
		len = min_t(size_t, avail, MSGDMA_MAX_TRANS_LEN);

		if (dir == DMA_MEM_TO_DEV) {
			dma_src = sg_dma_address(sgl) + sg_dma_len(sgl) - avail;
			dma_dst = cfg->dst_addr;
			stride = MSGDMA_DESC_STRIDE_RD;
		} else {
			dma_src = cfg->src_addr;
			dma_dst = sg_dma_address(sgl) + sg_dma_len(sgl) - avail;
			stride = MSGDMA_DESC_STRIDE_WR;
		}
		msgdma_desc_config(desc, dma_dst, dma_src, len, stride);
		avail -= len;

		if (!first)
			first = new;
		else
			list_add_tail(&new->node, &first->tx_list);

		/* Fetch the next scatterlist entry */
		if (avail == 0) {
			if (sg_len == 0)
				break;
			sgl = sg_next(sgl);
			if (sgl == NULL)
				break;
			sg_len--;
			avail = sg_dma_len(sgl);
		}
	}

	msgdma_desc_config_eod(desc);
	first->async_tx.flags = flags;

	return &first->async_tx;
}

static int msgdma_dma_config(struct dma_chan *dchan,
			     struct dma_slave_config *config)
{
	struct msgdma_device *mdev = to_mdev(dchan);

	memcpy(&mdev->slave_cfg, config, sizeof(*config));

	return 0;
}

static void msgdma_reset(struct msgdma_device *mdev)
{
	u32 val;
	int ret;

	/* Reset mSGDMA */
	iowrite32(MSGDMA_CSR_STAT_MASK, mdev->csr + MSGDMA_CSR_STATUS);
	iowrite32(MSGDMA_CSR_CTL_RESET, mdev->csr + MSGDMA_CSR_CONTROL);

	ret = readl_poll_timeout(mdev->csr + MSGDMA_CSR_STATUS, val,
				 (val & MSGDMA_CSR_STAT_RESETTING) == 0,
				 1, 10000);
	if (ret)
		dev_err(mdev->dev, "DMA channel did not reset\n");

	/* Clear all status bits */
	iowrite32(MSGDMA_CSR_STAT_MASK, mdev->csr + MSGDMA_CSR_STATUS);

	/* Enable the DMA controller including interrupts */
	iowrite32(MSGDMA_CSR_CTL_STOP_ON_ERR | MSGDMA_CSR_CTL_STOP_ON_EARLY |
		  MSGDMA_CSR_CTL_GLOBAL_INTR, mdev->csr + MSGDMA_CSR_CONTROL);

	mdev->idle = true;
}

static void msgdma_copy_one(struct msgdma_device *mdev,
			    struct msgdma_sw_desc *desc)
{
	void __iomem *hw_desc = mdev->desc;

	/*
	 * Check if the DESC FIFO is not full. If it's full, we need to wait
	 * for at least one entry to become free again
	 */
	while (ioread32(mdev->csr + MSGDMA_CSR_STATUS) &
	       MSGDMA_CSR_STAT_DESC_BUF_FULL)
		mdelay(1);

	/*
	 * The descriptor needs to get copied into the descriptor FIFO
	 * of the DMA controller. The descriptor will get flushed to the
	 * FIFO once the last word (control word) is written. Since we
	 * are not 100% sure that memcpy() writes all words in the "correct"
	 * order (address from low to high) on all architectures, we make
	 * sure this control word is written last by writing it separately
	 * and adding some write-barriers here.
	 */
	memcpy((void __force *)hw_desc, &desc->hw_desc,
	       sizeof(desc->hw_desc) - sizeof(u32));

	/* Write control word last to flush this descriptor into the FIFO */
	mdev->idle = false;
	wmb();
	iowrite32(desc->hw_desc.control, hw_desc +
		  offsetof(struct msgdma_extended_desc, control));
	wmb();
}

/**
 * msgdma_copy_desc_to_fifo - copy descriptor(s) into controller FIFO
 * @mdev: Pointer to the Altera mSGDMA device structure
 * @desc: Transaction descriptor pointer
 */
static void msgdma_copy_desc_to_fifo(struct msgdma_device *mdev,
				     struct msgdma_sw_desc *desc)
{
	struct msgdma_sw_desc *sdesc, *next;

	msgdma_copy_one(mdev, desc);

	list_for_each_entry_safe(sdesc, next, &desc->tx_list, node)
		msgdma_copy_one(mdev, sdesc);
}

/**
 * msgdma_start_transfer - Initiate the new transfer
 * @mdev: Pointer to the Altera mSGDMA device structure
 */
static void msgdma_start_transfer(struct msgdma_device *mdev)
{
	struct msgdma_sw_desc *desc;

	if (!mdev->idle)
		return;

	desc = list_first_entry_or_null(&mdev->pending_list,
					struct msgdma_sw_desc, node);
	if (!desc)
		return;

	list_splice_tail_init(&mdev->pending_list, &mdev->active_list);
	msgdma_copy_desc_to_fifo(mdev, desc);
}

/**
 * msgdma_issue_pending - Issue pending transactions
 * @chan: DMA channel pointer
 */
static void msgdma_issue_pending(struct dma_chan *chan)
{
	struct msgdma_device *mdev = to_mdev(chan);

	spin_lock_bh(&mdev->lock);
	msgdma_start_transfer(mdev);
	spin_unlock_bh(&mdev->lock);
}

/**
 * msgdma_chan_desc_cleanup - Cleanup the completed descriptors
 * @mdev: Pointer to the Altera mSGDMA device structure
 */
static void msgdma_chan_desc_cleanup(struct msgdma_device *mdev)
{
	struct msgdma_sw_desc *desc, *next;

	list_for_each_entry_safe(desc, next, &mdev->done_list, node) {
		dma_async_tx_callback callback;
		void *callback_param;

		list_del(&desc->node);

		callback = desc->async_tx.callback;
		callback_param = desc->async_tx.callback_param;
		if (callback) {
			spin_unlock(&mdev->lock);
			callback(callback_param);
			spin_lock(&mdev->lock);
		}

		/* Run any dependencies, then free the descriptor */
		msgdma_free_descriptor(mdev, desc);
	}
}

/**
 * msgdma_complete_descriptor - Mark the active descriptor as complete
 * @mdev: Pointer to the Altera mSGDMA device structure
 */
static void msgdma_complete_descriptor(struct msgdma_device *mdev)
{
	struct msgdma_sw_desc *desc;

	desc = list_first_entry_or_null(&mdev->active_list,
					struct msgdma_sw_desc, node);
	if (!desc)
		return;
	list_del(&desc->node);
	dma_cookie_complete(&desc->async_tx);
	list_add_tail(&desc->node, &mdev->done_list);
}

/**
 * msgdma_free_descriptors - Free channel descriptors
 * @mdev: Pointer to the Altera mSGDMA device structure
 */
static void msgdma_free_descriptors(struct msgdma_device *mdev)
{
	msgdma_free_desc_list(mdev, &mdev->active_list);
	msgdma_free_desc_list(mdev, &mdev->pending_list);
	msgdma_free_desc_list(mdev, &mdev->done_list);
}

/**
 * msgdma_free_chan_resources - Free channel resources
 * @dchan: DMA channel pointer
 */
static void msgdma_free_chan_resources(struct dma_chan *dchan)
{
	struct msgdma_device *mdev = to_mdev(dchan);

	spin_lock_bh(&mdev->lock);
	msgdma_free_descriptors(mdev);
	spin_unlock_bh(&mdev->lock);
	kfree(mdev->sw_desq);
}

/**
 * msgdma_alloc_chan_resources - Allocate channel resources
 * @dchan: DMA channel
 *
 * Return: Number of descriptors on success and failure value on error
 */
static int msgdma_alloc_chan_resources(struct dma_chan *dchan)
{
	struct msgdma_device *mdev = to_mdev(dchan);
	struct msgdma_sw_desc *desc;
	int i;

	mdev->sw_desq = kcalloc(MSGDMA_DESC_NUM, sizeof(*desc), GFP_NOWAIT);
	if (!mdev->sw_desq)
		return -ENOMEM;

	mdev->idle = true;
	mdev->desc_free_cnt = MSGDMA_DESC_NUM;

	INIT_LIST_HEAD(&mdev->free_list);

	for (i = 0; i < MSGDMA_DESC_NUM; i++) {
		desc = mdev->sw_desq + i;
		dma_async_tx_descriptor_init(&desc->async_tx, &mdev->dmachan);
		desc->async_tx.tx_submit = msgdma_tx_submit;
		list_add_tail(&desc->node, &mdev->free_list);
	}

	return MSGDMA_DESC_NUM;
}

/**
 * msgdma_tasklet - Schedule completion tasklet
 * @data: Pointer to the Altera mSGDMA device structure
 */
static void msgdma_tasklet(unsigned long data)
{
	struct msgdma_device *mdev = (struct msgdma_device *)data;
	u32 count;
	u32 __maybe_unused size;
	u32 __maybe_unused status;

	spin_lock(&mdev->lock);

	/* Read number of responses that are available */
	count = ioread32(mdev->csr + MSGDMA_CSR_RESP_FILL_LEVEL);
	dev_dbg(mdev->dev, "%s (%d): response count=%d\n",
		__func__, __LINE__, count);

	while (count--) {
		/*
		 * Read both longwords to purge this response from the FIFO.
		 * On Avalon-MM implementations, size and status do not
		 * have any real values, like transferred bytes or error
		 * bits. So we need to just drop these values.
		 */
		size = ioread32(mdev->resp + MSGDMA_RESP_BYTES_TRANSFERRED);
		status = ioread32(mdev->resp + MSGDMA_RESP_STATUS);

		msgdma_complete_descriptor(mdev);
		msgdma_chan_desc_cleanup(mdev);
	}

	spin_unlock(&mdev->lock);
}

/**
 * msgdma_irq_handler - Altera mSGDMA Interrupt handler
 * @irq: IRQ number
 * @data: Pointer to the Altera mSGDMA device structure
 *
 * Return: IRQ_HANDLED/IRQ_NONE
 */
static irqreturn_t msgdma_irq_handler(int irq, void *data)
{
	struct msgdma_device *mdev = data;
	u32 status;

	status = ioread32(mdev->csr + MSGDMA_CSR_STATUS);
	if ((status & MSGDMA_CSR_STAT_BUSY) == 0) {
		/* Start next transfer if the DMA controller is idle */
		spin_lock(&mdev->lock);
		mdev->idle = true;
		msgdma_start_transfer(mdev);
		spin_unlock(&mdev->lock);
	}

	tasklet_schedule(&mdev->irq_tasklet);

	/* Clear interrupt in mSGDMA controller */
	iowrite32(MSGDMA_CSR_STAT_IRQ, mdev->csr + MSGDMA_CSR_STATUS);

	return IRQ_HANDLED;
}

/**
 * msgdma_dev_remove - Device remove function
 * @mdev: Pointer to the Altera mSGDMA device structure
 */
static void msgdma_dev_remove(struct msgdma_device *mdev)
{
	if (!mdev)
		return;

	devm_free_irq(mdev->dev, mdev->irq, mdev);
	tasklet_kill(&mdev->irq_tasklet);
	list_del(&mdev->dmachan.device_node);
}

static int request_and_map(struct platform_device *pdev, const char *name,
			   struct resource **res, void __iomem **ptr)
{
	struct resource *region;
	struct device *device = &pdev->dev;

	*res = platform_get_resource_byname(pdev, IORESOURCE_MEM, name);
	if (*res == NULL) {
		dev_err(device, "resource %s not defined\n", name);
		return -ENODEV;
	}

	region = devm_request_mem_region(device, (*res)->start,
					 resource_size(*res), dev_name(device));
	if (region == NULL) {
		dev_err(device, "unable to request %s\n", name);
		return -EBUSY;
	}

	*ptr = devm_ioremap_nocache(device, region->start,
				    resource_size(region));
	if (*ptr == NULL) {
		dev_err(device, "ioremap_nocache of %s failed!", name);
		return -ENOMEM;
	}

	return 0;
}

/**
 * msgdma_probe - Driver probe function
 * @pdev: Pointer to the platform_device structure
 *
 * Return: '0' on success and failure value on error
 */
static int msgdma_probe(struct platform_device *pdev)
{
	struct msgdma_device *mdev;
	struct dma_device *dma_dev;
	struct resource *dma_res;
	int ret;

	mdev = devm_kzalloc(&pdev->dev, sizeof(*mdev), GFP_NOWAIT);
	if (!mdev)
		return -ENOMEM;

	mdev->dev = &pdev->dev;

	/* Map CSR space */
	ret = request_and_map(pdev, "csr", &dma_res, &mdev->csr);
	if (ret)
		return ret;

	/* Map (extended) descriptor space */
	ret = request_and_map(pdev, "desc", &dma_res, &mdev->desc);
	if (ret)
		return ret;

	/* Map response space */
	ret = request_and_map(pdev, "resp", &dma_res, &mdev->resp);
	if (ret)
		return ret;

	platform_set_drvdata(pdev, mdev);

	/* Get interrupt nr from platform data */
	mdev->irq = platform_get_irq(pdev, 0);
	if (mdev->irq < 0)
		return -ENXIO;

	ret = devm_request_irq(&pdev->dev, mdev->irq, msgdma_irq_handler,
			       0, dev_name(&pdev->dev), mdev);
	if (ret)
		return ret;

	tasklet_init(&mdev->irq_tasklet, msgdma_tasklet, (unsigned long)mdev);

	dma_cookie_init(&mdev->dmachan);

	spin_lock_init(&mdev->lock);

	INIT_LIST_HEAD(&mdev->active_list);
	INIT_LIST_HEAD(&mdev->pending_list);
	INIT_LIST_HEAD(&mdev->done_list);
	INIT_LIST_HEAD(&mdev->free_list);

	dma_dev = &mdev->dmadev;

	/* Set DMA capabilities */
	dma_cap_zero(dma_dev->cap_mask);
	dma_cap_set(DMA_MEMCPY, dma_dev->cap_mask);
	dma_cap_set(DMA_SLAVE, dma_dev->cap_mask);

	dma_dev->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
	dma_dev->dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
	dma_dev->directions = BIT(DMA_MEM_TO_DEV) | BIT(DMA_DEV_TO_MEM) |
		BIT(DMA_MEM_TO_MEM);
	dma_dev->residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;

	/* Init DMA link list */
	INIT_LIST_HEAD(&dma_dev->channels);

	/* Set base routines */
	dma_dev->device_tx_status = dma_cookie_status;
	dma_dev->device_issue_pending = msgdma_issue_pending;
	dma_dev->dev = &pdev->dev;

	dma_dev->copy_align = DMAENGINE_ALIGN_4_BYTES;
	dma_dev->device_prep_dma_memcpy = msgdma_prep_memcpy;
	dma_dev->device_prep_slave_sg = msgdma_prep_slave_sg;
	dma_dev->device_config = msgdma_dma_config;

	dma_dev->device_alloc_chan_resources = msgdma_alloc_chan_resources;
	dma_dev->device_free_chan_resources = msgdma_free_chan_resources;

	mdev->dmachan.device = dma_dev;
	list_add_tail(&mdev->dmachan.device_node, &dma_dev->channels);

	/* Set DMA mask to 64 bits */
	ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
	if (ret) {
		dev_warn(&pdev->dev, "unable to set coherent mask to 64");
		ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
		if (ret)
			goto fail;
	}

	msgdma_reset(mdev);

	ret = dma_async_device_register(dma_dev);
	if (ret)
		goto fail;

	dev_notice(&pdev->dev, "Altera mSGDMA driver probe success\n");

	return 0;

fail:
	msgdma_dev_remove(mdev);

	return ret;
}

/**
 * msgdma_dma_remove - Driver remove function
 * @pdev:
Pointer to the platform_device structure 899 + * 900 + * Return: Always '0' 901 + */ 902 + static int msgdma_remove(struct platform_device *pdev) 903 + { 904 + struct msgdma_device *mdev = platform_get_drvdata(pdev); 905 + 906 + dma_async_device_unregister(&mdev->dmadev); 907 + msgdma_dev_remove(mdev); 908 + 909 + dev_notice(&pdev->dev, "Altera mSGDMA driver removed\n"); 910 + 911 + return 0; 912 + } 913 + 914 + static struct platform_driver msgdma_driver = { 915 + .driver = { 916 + .name = "altera-msgdma", 917 + }, 918 + .probe = msgdma_probe, 919 + .remove = msgdma_remove, 920 + }; 921 + 922 + module_platform_driver(msgdma_driver); 923 + 924 + MODULE_ALIAS("platform:altera-msgdma"); 925 + MODULE_DESCRIPTION("Altera mSGDMA driver"); 926 + MODULE_AUTHOR("Stefan Roese <sr@denx.de>"); 927 + MODULE_LICENSE("GPL");
+1 -1
drivers/dma/amba-pl08x.c
··· 3033 3033 .max_transfer_size = PL080_CONTROL_TRANSFER_SIZE_MASK, 3034 3034 }; 3035 3035 3036 - static struct amba_id pl08x_ids[] = { 3036 + static const struct amba_id pl08x_ids[] = { 3037 3037 /* Samsung PL080S variant */ 3038 3038 { 3039 3039 .id = 0x0a141080,
+1 -139
drivers/dma/at_hdmac.c
··· 1203 1203 } 1204 1204 1205 1205 /** 1206 - * atc_prep_dma_sg - prepare memory to memory scather-gather operation 1207 - * @chan: the channel to prepare operation on 1208 - * @dst_sg: destination scatterlist 1209 - * @dst_nents: number of destination scatterlist entries 1210 - * @src_sg: source scatterlist 1211 - * @src_nents: number of source scatterlist entries 1212 - * @flags: tx descriptor status flags 1213 - */ 1214 - static struct dma_async_tx_descriptor * 1215 - atc_prep_dma_sg(struct dma_chan *chan, 1216 - struct scatterlist *dst_sg, unsigned int dst_nents, 1217 - struct scatterlist *src_sg, unsigned int src_nents, 1218 - unsigned long flags) 1219 - { 1220 - struct at_dma_chan *atchan = to_at_dma_chan(chan); 1221 - struct at_desc *desc = NULL; 1222 - struct at_desc *first = NULL; 1223 - struct at_desc *prev = NULL; 1224 - unsigned int src_width; 1225 - unsigned int dst_width; 1226 - size_t xfer_count; 1227 - u32 ctrla; 1228 - u32 ctrlb; 1229 - size_t dst_len = 0, src_len = 0; 1230 - dma_addr_t dst = 0, src = 0; 1231 - size_t len = 0, total_len = 0; 1232 - 1233 - if (unlikely(dst_nents == 0 || src_nents == 0)) 1234 - return NULL; 1235 - 1236 - if (unlikely(dst_sg == NULL || src_sg == NULL)) 1237 - return NULL; 1238 - 1239 - ctrlb = ATC_DEFAULT_CTRLB | ATC_IEN 1240 - | ATC_SRC_ADDR_MODE_INCR 1241 - | ATC_DST_ADDR_MODE_INCR 1242 - | ATC_FC_MEM2MEM; 1243 - 1244 - /* 1245 - * loop until there is either no more source or no more destination 1246 - * scatterlist entry 1247 - */ 1248 - while (true) { 1249 - 1250 - /* prepare the next transfer */ 1251 - if (dst_len == 0) { 1252 - 1253 - /* no more destination scatterlist entries */ 1254 - if (!dst_sg || !dst_nents) 1255 - break; 1256 - 1257 - dst = sg_dma_address(dst_sg); 1258 - dst_len = sg_dma_len(dst_sg); 1259 - 1260 - dst_sg = sg_next(dst_sg); 1261 - dst_nents--; 1262 - } 1263 - 1264 - if (src_len == 0) { 1265 - 1266 - /* no more source scatterlist entries */ 1267 - if (!src_sg || !src_nents) 1268 - break; 
1269 - 1270 - src = sg_dma_address(src_sg); 1271 - src_len = sg_dma_len(src_sg); 1272 - 1273 - src_sg = sg_next(src_sg); 1274 - src_nents--; 1275 - } 1276 - 1277 - len = min_t(size_t, src_len, dst_len); 1278 - if (len == 0) 1279 - continue; 1280 - 1281 - /* take care for the alignment */ 1282 - src_width = dst_width = atc_get_xfer_width(src, dst, len); 1283 - 1284 - ctrla = ATC_SRC_WIDTH(src_width) | 1285 - ATC_DST_WIDTH(dst_width); 1286 - 1287 - /* 1288 - * The number of transfers to set up refer to the source width 1289 - * that depends on the alignment. 1290 - */ 1291 - xfer_count = len >> src_width; 1292 - if (xfer_count > ATC_BTSIZE_MAX) { 1293 - xfer_count = ATC_BTSIZE_MAX; 1294 - len = ATC_BTSIZE_MAX << src_width; 1295 - } 1296 - 1297 - /* create the transfer */ 1298 - desc = atc_desc_get(atchan); 1299 - if (!desc) 1300 - goto err_desc_get; 1301 - 1302 - desc->lli.saddr = src; 1303 - desc->lli.daddr = dst; 1304 - desc->lli.ctrla = ctrla | xfer_count; 1305 - desc->lli.ctrlb = ctrlb; 1306 - 1307 - desc->txd.cookie = 0; 1308 - desc->len = len; 1309 - 1310 - atc_desc_chain(&first, &prev, desc); 1311 - 1312 - /* update the lengths and addresses for the next loop cycle */ 1313 - dst_len -= len; 1314 - src_len -= len; 1315 - dst += len; 1316 - src += len; 1317 - 1318 - total_len += len; 1319 - } 1320 - 1321 - /* First descriptor of the chain embedds additional information */ 1322 - first->txd.cookie = -EBUSY; 1323 - first->total_len = total_len; 1324 - 1325 - /* set end-of-link to the last link descriptor of list*/ 1326 - set_desc_eol(desc); 1327 - 1328 - first->txd.flags = flags; /* client is in control of this ack */ 1329 - 1330 - return &first->txd; 1331 - 1332 - err_desc_get: 1333 - atc_desc_put(atchan, first); 1334 - return NULL; 1335 - } 1336 - 1337 - /** 1338 1206 * atc_dma_cyclic_check_values 1339 1207 * Check for too big/unaligned periods and unaligned DMA buffer 1340 1208 */ ··· 1801 1933 1802 1934 /* setup platform data for each SoC */ 1803 1935 
dma_cap_set(DMA_MEMCPY, at91sam9rl_config.cap_mask); 1804 - dma_cap_set(DMA_SG, at91sam9rl_config.cap_mask); 1805 1936 dma_cap_set(DMA_INTERLEAVE, at91sam9g45_config.cap_mask); 1806 1937 dma_cap_set(DMA_MEMCPY, at91sam9g45_config.cap_mask); 1807 1938 dma_cap_set(DMA_MEMSET, at91sam9g45_config.cap_mask); 1808 1939 dma_cap_set(DMA_MEMSET_SG, at91sam9g45_config.cap_mask); 1809 1940 dma_cap_set(DMA_PRIVATE, at91sam9g45_config.cap_mask); 1810 1941 dma_cap_set(DMA_SLAVE, at91sam9g45_config.cap_mask); 1811 - dma_cap_set(DMA_SG, at91sam9g45_config.cap_mask); 1812 1942 1813 1943 /* get DMA parameters from controller type */ 1814 1944 plat_dat = at_dma_get_driver_data(pdev); ··· 1944 2078 atdma->dma_common.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST; 1945 2079 } 1946 2080 1947 - if (dma_has_cap(DMA_SG, atdma->dma_common.cap_mask)) 1948 - atdma->dma_common.device_prep_dma_sg = atc_prep_dma_sg; 1949 - 1950 2081 dma_writel(atdma, EN, AT_DMA_ENABLE); 1951 2082 1952 - dev_info(&pdev->dev, "Atmel AHB DMA Controller ( %s%s%s%s), %d channels\n", 2083 + dev_info(&pdev->dev, "Atmel AHB DMA Controller ( %s%s%s), %d channels\n", 1953 2084 dma_has_cap(DMA_MEMCPY, atdma->dma_common.cap_mask) ? "cpy " : "", 1954 2085 dma_has_cap(DMA_MEMSET, atdma->dma_common.cap_mask) ? "set " : "", 1955 2086 dma_has_cap(DMA_SLAVE, atdma->dma_common.cap_mask) ? "slave " : "", 1956 - dma_has_cap(DMA_SG, atdma->dma_common.cap_mask) ? "sg-cpy " : "", 1957 2087 plat_dat->nr_channels); 1958 2088 1959 2089 dma_async_device_register(&atdma->dma_common);
+8 -5
drivers/dma/at_xdmac.c
··· 875 875 dwidth = at_xdmac_align_width(chan, src | dst | chunk->size); 876 876 if (chunk->size >= (AT_XDMAC_MBR_UBC_UBLEN_MAX << dwidth)) { 877 877 dev_dbg(chan2dev(chan), 878 - "%s: chunk too big (%d, max size %lu)...\n", 878 + "%s: chunk too big (%zu, max size %lu)...\n", 879 879 __func__, chunk->size, 880 880 AT_XDMAC_MBR_UBC_UBLEN_MAX << dwidth); 881 881 return NULL; ··· 956 956 if ((xt->numf > 1) && (xt->frame_size > 1)) 957 957 return NULL; 958 958 959 - dev_dbg(chan2dev(chan), "%s: src=%pad, dest=%pad, numf=%d, frame_size=%d, flags=0x%lx\n", 959 + dev_dbg(chan2dev(chan), "%s: src=%pad, dest=%pad, numf=%zu, frame_size=%zu, flags=0x%lx\n", 960 960 __func__, &xt->src_start, &xt->dst_start, xt->numf, 961 961 xt->frame_size, flags); 962 962 ··· 990 990 dst_skip = chunk->size + dst_icg; 991 991 992 992 dev_dbg(chan2dev(chan), 993 - "%s: chunk size=%d, src icg=%d, dst icg=%d\n", 993 + "%s: chunk size=%zu, src icg=%zu, dst icg=%zu\n", 994 994 __func__, chunk->size, src_icg, dst_icg); 995 995 996 996 desc = at_xdmac_interleaved_queue_desc(chan, atchan, ··· 1207 1207 struct at_xdmac_chan *atchan = to_at_xdmac_chan(chan); 1208 1208 struct at_xdmac_desc *desc; 1209 1209 1210 - dev_dbg(chan2dev(chan), "%s: dest=%pad, len=%d, pattern=0x%x, flags=0x%lx\n", 1210 + dev_dbg(chan2dev(chan), "%s: dest=%pad, len=%zu, pattern=0x%x, flags=0x%lx\n", 1211 1211 __func__, &dest, len, value, flags); 1212 1212 1213 1213 if (unlikely(!len)) ··· 1883 1883 struct at_xdmac_chan *atchan; 1884 1884 struct dma_chan *chan, *_chan; 1885 1885 int i; 1886 + int ret; 1886 1887 1887 - clk_prepare_enable(atxdmac->clk); 1888 + ret = clk_prepare_enable(atxdmac->clk); 1889 + if (ret) 1890 + return ret; 1888 1891 1889 1892 /* Clear pending interrupts. */ 1890 1893 for (i = 0; i < atxdmac->dma.chancnt; i++) {
+310 -264
drivers/dma/bcm-sba-raid.c
··· 36 36 */ 37 37 38 38 #include <linux/bitops.h> 39 + #include <linux/debugfs.h> 39 40 #include <linux/dma-mapping.h> 40 41 #include <linux/dmaengine.h> 41 42 #include <linux/list.h> ··· 49 48 50 49 #include "dmaengine.h" 51 50 52 - /* SBA command related defines */ 51 + /* ====== Driver macros and defines ===== */ 52 + 53 53 #define SBA_TYPE_SHIFT 48 54 54 #define SBA_TYPE_MASK GENMASK(1, 0) 55 55 #define SBA_TYPE_A 0x0 ··· 84 82 #define SBA_CMD_WRITE_BUFFER 0xc 85 83 #define SBA_CMD_GALOIS 0xe 86 84 85 + #define SBA_MAX_REQ_PER_MBOX_CHANNEL 8192 86 + 87 87 /* Driver helper macros */ 88 88 #define to_sba_request(tx) \ 89 89 container_of(tx, struct sba_request, tx) 90 90 #define to_sba_device(dchan) \ 91 91 container_of(dchan, struct sba_device, dma_chan) 92 92 93 - enum sba_request_state { 94 - SBA_REQUEST_STATE_FREE = 1, 95 - SBA_REQUEST_STATE_ALLOCED = 2, 96 - SBA_REQUEST_STATE_PENDING = 3, 97 - SBA_REQUEST_STATE_ACTIVE = 4, 98 - SBA_REQUEST_STATE_RECEIVED = 5, 99 - SBA_REQUEST_STATE_COMPLETED = 6, 100 - SBA_REQUEST_STATE_ABORTED = 7, 93 + /* ===== Driver data structures ===== */ 94 + 95 + enum sba_request_flags { 96 + SBA_REQUEST_STATE_FREE = 0x001, 97 + SBA_REQUEST_STATE_ALLOCED = 0x002, 98 + SBA_REQUEST_STATE_PENDING = 0x004, 99 + SBA_REQUEST_STATE_ACTIVE = 0x008, 100 + SBA_REQUEST_STATE_ABORTED = 0x010, 101 + SBA_REQUEST_STATE_MASK = 0x0ff, 102 + SBA_REQUEST_FENCE = 0x100, 101 103 }; 102 104 103 105 struct sba_request { 104 106 /* Global state */ 105 107 struct list_head node; 106 108 struct sba_device *sba; 107 - enum sba_request_state state; 108 - bool fence; 109 + u32 flags; 109 110 /* Chained requests management */ 110 111 struct sba_request *first; 111 112 struct list_head next; 112 - unsigned int next_count; 113 113 atomic_t next_pending_count; 114 114 /* BRCM message data */ 115 - void *resp; 116 - dma_addr_t resp_dma; 117 - struct brcm_sba_command *cmds; 118 115 struct brcm_message msg; 119 116 struct dma_async_tx_descriptor tx; 117 + /* SBA 
commands */ 118 + struct brcm_sba_command cmds[0]; 120 119 }; 121 120 122 121 enum sba_version { ··· 155 152 void *cmds_base; 156 153 dma_addr_t cmds_dma_base; 157 154 spinlock_t reqs_lock; 158 - struct sba_request *reqs; 159 155 bool reqs_fence; 160 156 struct list_head reqs_alloc_list; 161 157 struct list_head reqs_pending_list; 162 158 struct list_head reqs_active_list; 163 - struct list_head reqs_received_list; 164 - struct list_head reqs_completed_list; 165 159 struct list_head reqs_aborted_list; 166 160 struct list_head reqs_free_list; 167 - int reqs_free_count; 161 + /* DebugFS directory entries */ 162 + struct dentry *root; 163 + struct dentry *stats; 168 164 }; 169 165 170 - /* ====== SBA command helper routines ===== */ 166 + /* ====== Command helper routines ===== */ 171 167 172 168 static inline u64 __pure sba_cmd_enc(u64 cmd, u32 val, u32 shift, u32 mask) 173 169 { ··· 198 196 ((d & SBA_C_MDATA_DNUM_MASK) << SBA_C_MDATA_DNUM_SHIFT); 199 197 } 200 198 201 - /* ====== Channel resource management routines ===== */ 199 + /* ====== General helper routines ===== */ 200 + 201 + static void sba_peek_mchans(struct sba_device *sba) 202 + { 203 + int mchan_idx; 204 + 205 + for (mchan_idx = 0; mchan_idx < sba->mchans_count; mchan_idx++) 206 + mbox_client_peek_data(sba->mchans[mchan_idx]); 207 + } 202 208 203 209 static struct sba_request *sba_alloc_request(struct sba_device *sba) 204 210 { 211 + bool found = false; 205 212 unsigned long flags; 206 213 struct sba_request *req = NULL; 207 214 208 215 spin_lock_irqsave(&sba->reqs_lock, flags); 216 + list_for_each_entry(req, &sba->reqs_free_list, node) { 217 + if (async_tx_test_ack(&req->tx)) { 218 + list_move_tail(&req->node, &sba->reqs_alloc_list); 219 + found = true; 220 + break; 221 + } 222 + } 223 + spin_unlock_irqrestore(&sba->reqs_lock, flags); 209 224 210 - req = list_first_entry_or_null(&sba->reqs_free_list, 211 - struct sba_request, node); 212 - if (req) { 213 - list_move_tail(&req->node, 
&sba->reqs_alloc_list); 214 - req->state = SBA_REQUEST_STATE_ALLOCED; 215 - req->fence = false; 216 - req->first = req; 217 - INIT_LIST_HEAD(&req->next); 218 - req->next_count = 1; 219 - atomic_set(&req->next_pending_count, 1); 220 - 221 - sba->reqs_free_count--; 222 - 223 - dma_async_tx_descriptor_init(&req->tx, &sba->dma_chan); 225 + if (!found) { 226 + /* 227 + * We have no more free requests so, we peek 228 + * mailbox channels hoping few active requests 229 + * would have completed which will create more 230 + * room for new requests. 231 + */ 232 + sba_peek_mchans(sba); 233 + return NULL; 224 234 } 225 235 226 - spin_unlock_irqrestore(&sba->reqs_lock, flags); 236 + req->flags = SBA_REQUEST_STATE_ALLOCED; 237 + req->first = req; 238 + INIT_LIST_HEAD(&req->next); 239 + atomic_set(&req->next_pending_count, 1); 240 + 241 + dma_async_tx_descriptor_init(&req->tx, &sba->dma_chan); 242 + async_tx_ack(&req->tx); 227 243 228 244 return req; 229 245 } ··· 251 231 struct sba_request *req) 252 232 { 253 233 lockdep_assert_held(&sba->reqs_lock); 254 - req->state = SBA_REQUEST_STATE_PENDING; 234 + req->flags &= ~SBA_REQUEST_STATE_MASK; 235 + req->flags |= SBA_REQUEST_STATE_PENDING; 255 236 list_move_tail(&req->node, &sba->reqs_pending_list); 256 237 if (list_empty(&sba->reqs_active_list)) 257 238 sba->reqs_fence = false; ··· 267 246 sba->reqs_fence = false; 268 247 if (sba->reqs_fence) 269 248 return false; 270 - req->state = SBA_REQUEST_STATE_ACTIVE; 249 + req->flags &= ~SBA_REQUEST_STATE_MASK; 250 + req->flags |= SBA_REQUEST_STATE_ACTIVE; 271 251 list_move_tail(&req->node, &sba->reqs_active_list); 272 - if (req->fence) 252 + if (req->flags & SBA_REQUEST_FENCE) 273 253 sba->reqs_fence = true; 274 254 return true; 275 255 } ··· 280 258 struct sba_request *req) 281 259 { 282 260 lockdep_assert_held(&sba->reqs_lock); 283 - req->state = SBA_REQUEST_STATE_ABORTED; 261 + req->flags &= ~SBA_REQUEST_STATE_MASK; 262 + req->flags |= SBA_REQUEST_STATE_ABORTED; 284 263 
list_move_tail(&req->node, &sba->reqs_aborted_list); 285 264 if (list_empty(&sba->reqs_active_list)) 286 265 sba->reqs_fence = false; ··· 292 269 struct sba_request *req) 293 270 { 294 271 lockdep_assert_held(&sba->reqs_lock); 295 - req->state = SBA_REQUEST_STATE_FREE; 272 + req->flags &= ~SBA_REQUEST_STATE_MASK; 273 + req->flags |= SBA_REQUEST_STATE_FREE; 296 274 list_move_tail(&req->node, &sba->reqs_free_list); 297 275 if (list_empty(&sba->reqs_active_list)) 298 276 sba->reqs_fence = false; 299 - sba->reqs_free_count++; 300 - } 301 - 302 - static void sba_received_request(struct sba_request *req) 303 - { 304 - unsigned long flags; 305 - struct sba_device *sba = req->sba; 306 - 307 - spin_lock_irqsave(&sba->reqs_lock, flags); 308 - req->state = SBA_REQUEST_STATE_RECEIVED; 309 - list_move_tail(&req->node, &sba->reqs_received_list); 310 - spin_unlock_irqrestore(&sba->reqs_lock, flags); 311 - } 312 - 313 - static void sba_complete_chained_requests(struct sba_request *req) 314 - { 315 - unsigned long flags; 316 - struct sba_request *nreq; 317 - struct sba_device *sba = req->sba; 318 - 319 - spin_lock_irqsave(&sba->reqs_lock, flags); 320 - 321 - req->state = SBA_REQUEST_STATE_COMPLETED; 322 - list_move_tail(&req->node, &sba->reqs_completed_list); 323 - list_for_each_entry(nreq, &req->next, next) { 324 - nreq->state = SBA_REQUEST_STATE_COMPLETED; 325 - list_move_tail(&nreq->node, &sba->reqs_completed_list); 326 - } 327 - if (list_empty(&sba->reqs_active_list)) 328 - sba->reqs_fence = false; 329 - 330 - spin_unlock_irqrestore(&sba->reqs_lock, flags); 331 277 } 332 278 333 279 static void sba_free_chained_requests(struct sba_request *req) ··· 324 332 325 333 list_add_tail(&req->next, &first->next); 326 334 req->first = first; 327 - first->next_count++; 328 - atomic_set(&first->next_pending_count, first->next_count); 335 + atomic_inc(&first->next_pending_count); 329 336 330 337 spin_unlock_irqrestore(&sba->reqs_lock, flags); 331 338 } ··· 338 347 339 348 /* Freeup all 
alloced request */ 340 349 list_for_each_entry_safe(req, req1, &sba->reqs_alloc_list, node) 341 - _sba_free_request(sba, req); 342 - 343 - /* Freeup all received request */ 344 - list_for_each_entry_safe(req, req1, &sba->reqs_received_list, node) 345 - _sba_free_request(sba, req); 346 - 347 - /* Freeup all completed request */ 348 - list_for_each_entry_safe(req, req1, &sba->reqs_completed_list, node) 349 350 _sba_free_request(sba, req); 350 351 351 352 /* Set all active requests as aborted */ ··· 366 383 spin_unlock_irqrestore(&sba->reqs_lock, flags); 367 384 } 368 385 386 + static int sba_send_mbox_request(struct sba_device *sba, 387 + struct sba_request *req) 388 + { 389 + int mchans_idx, ret = 0; 390 + 391 + /* Select mailbox channel in round-robin fashion */ 392 + mchans_idx = atomic_inc_return(&sba->mchans_current); 393 + mchans_idx = mchans_idx % sba->mchans_count; 394 + 395 + /* Send message for the request */ 396 + req->msg.error = 0; 397 + ret = mbox_send_message(sba->mchans[mchans_idx], &req->msg); 398 + if (ret < 0) { 399 + dev_err(sba->dev, "send message failed with error %d", ret); 400 + return ret; 401 + } 402 + 403 + /* Check error returned by mailbox controller */ 404 + ret = req->msg.error; 405 + if (ret < 0) { 406 + dev_err(sba->dev, "message error %d", ret); 407 + } 408 + 409 + /* Signal txdone for mailbox channel */ 410 + mbox_client_txdone(sba->mchans[mchans_idx], ret); 411 + 412 + return ret; 413 + } 414 + 415 + /* Note: Must be called with sba->reqs_lock held */ 416 + static void _sba_process_pending_requests(struct sba_device *sba) 417 + { 418 + int ret; 419 + u32 count; 420 + struct sba_request *req; 421 + 422 + /* 423 + * Process few pending requests 424 + * 425 + * For now, we process (<number_of_mailbox_channels> * 8) 426 + * number of requests at a time. 
427 + */ 428 + count = sba->mchans_count * 8; 429 + while (!list_empty(&sba->reqs_pending_list) && count) { 430 + /* Get the first pending request */ 431 + req = list_first_entry(&sba->reqs_pending_list, 432 + struct sba_request, node); 433 + 434 + /* Try to make request active */ 435 + if (!_sba_active_request(sba, req)) 436 + break; 437 + 438 + /* Send request to mailbox channel */ 439 + ret = sba_send_mbox_request(sba, req); 440 + if (ret < 0) { 441 + _sba_pending_request(sba, req); 442 + break; 443 + } 444 + 445 + count--; 446 + } 447 + } 448 + 449 + static void sba_process_received_request(struct sba_device *sba, 450 + struct sba_request *req) 451 + { 452 + unsigned long flags; 453 + struct dma_async_tx_descriptor *tx; 454 + struct sba_request *nreq, *first = req->first; 455 + 456 + /* Process only after all chained requests are received */ 457 + if (!atomic_dec_return(&first->next_pending_count)) { 458 + tx = &first->tx; 459 + 460 + WARN_ON(tx->cookie < 0); 461 + if (tx->cookie > 0) { 462 + dma_cookie_complete(tx); 463 + dmaengine_desc_get_callback_invoke(tx, NULL); 464 + dma_descriptor_unmap(tx); 465 + tx->callback = NULL; 466 + tx->callback_result = NULL; 467 + } 468 + 469 + dma_run_dependencies(tx); 470 + 471 + spin_lock_irqsave(&sba->reqs_lock, flags); 472 + 473 + /* Free all requests chained to first request */ 474 + list_for_each_entry(nreq, &first->next, next) 475 + _sba_free_request(sba, nreq); 476 + INIT_LIST_HEAD(&first->next); 477 + 478 + /* Free the first request */ 479 + _sba_free_request(sba, first); 480 + 481 + /* Process pending requests */ 482 + _sba_process_pending_requests(sba); 483 + 484 + spin_unlock_irqrestore(&sba->reqs_lock, flags); 485 + } 486 + } 487 + 488 + static void sba_write_stats_in_seqfile(struct sba_device *sba, 489 + struct seq_file *file) 490 + { 491 + unsigned long flags; 492 + struct sba_request *req; 493 + u32 free_count = 0, alloced_count = 0; 494 + u32 pending_count = 0, active_count = 0, aborted_count = 0; 495 + 496 + 
spin_lock_irqsave(&sba->reqs_lock, flags); 497 + 498 + list_for_each_entry(req, &sba->reqs_free_list, node) 499 + if (async_tx_test_ack(&req->tx)) 500 + free_count++; 501 + 502 + list_for_each_entry(req, &sba->reqs_alloc_list, node) 503 + alloced_count++; 504 + 505 + list_for_each_entry(req, &sba->reqs_pending_list, node) 506 + pending_count++; 507 + 508 + list_for_each_entry(req, &sba->reqs_active_list, node) 509 + active_count++; 510 + 511 + list_for_each_entry(req, &sba->reqs_aborted_list, node) 512 + aborted_count++; 513 + 514 + spin_unlock_irqrestore(&sba->reqs_lock, flags); 515 + 516 + seq_printf(file, "maximum requests = %d\n", sba->max_req); 517 + seq_printf(file, "free requests = %d\n", free_count); 518 + seq_printf(file, "alloced requests = %d\n", alloced_count); 519 + seq_printf(file, "pending requests = %d\n", pending_count); 520 + seq_printf(file, "active requests = %d\n", active_count); 521 + seq_printf(file, "aborted requests = %d\n", aborted_count); 522 + } 523 + 369 524 /* ====== DMAENGINE callbacks ===== */ 370 525 371 526 static void sba_free_chan_resources(struct dma_chan *dchan) ··· 524 403 return 0; 525 404 } 526 405 527 - static int sba_send_mbox_request(struct sba_device *sba, 528 - struct sba_request *req) 529 - { 530 - int mchans_idx, ret = 0; 531 - 532 - /* Select mailbox channel in round-robin fashion */ 533 - mchans_idx = atomic_inc_return(&sba->mchans_current); 534 - mchans_idx = mchans_idx % sba->mchans_count; 535 - 536 - /* Send message for the request */ 537 - req->msg.error = 0; 538 - ret = mbox_send_message(sba->mchans[mchans_idx], &req->msg); 539 - if (ret < 0) { 540 - dev_err(sba->dev, "send message failed with error %d", ret); 541 - return ret; 542 - } 543 - ret = req->msg.error; 544 - if (ret < 0) { 545 - dev_err(sba->dev, "message error %d", ret); 546 - return ret; 547 - } 548 - 549 - return 0; 550 - } 551 - 552 406 static void sba_issue_pending(struct dma_chan *dchan) 553 407 { 554 - int ret; 555 408 unsigned long flags; 556 
- struct sba_request *req, *req1; 557 409 struct sba_device *sba = to_sba_device(dchan); 558 410 411 + /* Process pending requests */ 559 412 spin_lock_irqsave(&sba->reqs_lock, flags); 560 - 561 - /* Process all pending request */ 562 - list_for_each_entry_safe(req, req1, &sba->reqs_pending_list, node) { 563 - /* Try to make request active */ 564 - if (!_sba_active_request(sba, req)) 565 - break; 566 - 567 - /* Send request to mailbox channel */ 568 - spin_unlock_irqrestore(&sba->reqs_lock, flags); 569 - ret = sba_send_mbox_request(sba, req); 570 - spin_lock_irqsave(&sba->reqs_lock, flags); 571 - 572 - /* If something went wrong then keep request pending */ 573 - if (ret < 0) { 574 - _sba_pending_request(sba, req); 575 - break; 576 - } 577 - } 578 - 413 + _sba_process_pending_requests(sba); 579 414 spin_unlock_irqrestore(&sba->reqs_lock, flags); 580 415 } 581 416 ··· 563 486 dma_cookie_t cookie, 564 487 struct dma_tx_state *txstate) 565 488 { 566 - int mchan_idx; 567 489 enum dma_status ret; 568 490 struct sba_device *sba = to_sba_device(dchan); 569 - 570 - for (mchan_idx = 0; mchan_idx < sba->mchans_count; mchan_idx++) 571 - mbox_client_peek_data(sba->mchans[mchan_idx]); 572 491 573 492 ret = dma_cookie_status(dchan, cookie, txstate); 574 493 if (ret == DMA_COMPLETE) 575 494 return ret; 495 + 496 + sba_peek_mchans(sba); 576 497 577 498 return dma_cookie_status(dchan, cookie, txstate); 578 499 } ··· 581 506 { 582 507 u64 cmd; 583 508 u32 c_mdata; 509 + dma_addr_t resp_dma = req->tx.phys; 584 510 struct brcm_sba_command *cmdsp = cmds; 585 511 586 512 /* Type-B command to load dummy data into buf0 */ ··· 597 521 cmdsp->cmd = cmd; 598 522 *cmdsp->cmd_dma = cpu_to_le64(cmd); 599 523 cmdsp->flags = BRCM_SBA_CMD_TYPE_B; 600 - cmdsp->data = req->resp_dma; 524 + cmdsp->data = resp_dma; 601 525 cmdsp->data_len = req->sba->hw_resp_size; 602 526 cmdsp++; 603 527 ··· 618 542 cmdsp->flags = BRCM_SBA_CMD_TYPE_A; 619 543 if (req->sba->hw_resp_size) { 620 544 cmdsp->flags |= 
BRCM_SBA_CMD_HAS_RESP; 621 - cmdsp->resp = req->resp_dma; 545 + cmdsp->resp = resp_dma; 622 546 cmdsp->resp_len = req->sba->hw_resp_size; 623 547 } 624 548 cmdsp->flags |= BRCM_SBA_CMD_HAS_OUTPUT; 625 - cmdsp->data = req->resp_dma; 549 + cmdsp->data = resp_dma; 626 550 cmdsp->data_len = req->sba->hw_resp_size; 627 551 cmdsp++; 628 552 ··· 649 573 * Force fence so that no requests are submitted 650 574 * until DMA callback for this request is invoked. 651 575 */ 652 - req->fence = true; 576 + req->flags |= SBA_REQUEST_FENCE; 653 577 654 578 /* Fillup request message */ 655 579 sba_fillup_interrupt_msg(req, req->cmds, &req->msg); ··· 669 593 { 670 594 u64 cmd; 671 595 u32 c_mdata; 596 + dma_addr_t resp_dma = req->tx.phys; 672 597 struct brcm_sba_command *cmdsp = cmds; 673 598 674 599 /* Type-B command to load data into buf0 */ ··· 706 629 cmdsp->flags = BRCM_SBA_CMD_TYPE_A; 707 630 if (req->sba->hw_resp_size) { 708 631 cmdsp->flags |= BRCM_SBA_CMD_HAS_RESP; 709 - cmdsp->resp = req->resp_dma; 632 + cmdsp->resp = resp_dma; 710 633 cmdsp->resp_len = req->sba->hw_resp_size; 711 634 } 712 635 cmdsp->flags |= BRCM_SBA_CMD_HAS_OUTPUT; ··· 733 656 req = sba_alloc_request(sba); 734 657 if (!req) 735 658 return NULL; 736 - req->fence = (flags & DMA_PREP_FENCE) ? 
true : false; 659 + if (flags & DMA_PREP_FENCE) 660 + req->flags |= SBA_REQUEST_FENCE; 737 661 738 662 /* Fillup request message */ 739 663 sba_fillup_memcpy_msg(req, req->cmds, &req->msg, ··· 789 711 u64 cmd; 790 712 u32 c_mdata; 791 713 unsigned int i; 714 + dma_addr_t resp_dma = req->tx.phys; 792 715 struct brcm_sba_command *cmdsp = cmds; 793 716 794 717 /* Type-B command to load data into buf0 */ ··· 845 766 cmdsp->flags = BRCM_SBA_CMD_TYPE_A; 846 767 if (req->sba->hw_resp_size) { 847 768 cmdsp->flags |= BRCM_SBA_CMD_HAS_RESP; 848 - cmdsp->resp = req->resp_dma; 769 + cmdsp->resp = resp_dma; 849 770 cmdsp->resp_len = req->sba->hw_resp_size; 850 771 } 851 772 cmdsp->flags |= BRCM_SBA_CMD_HAS_OUTPUT; ··· 861 782 msg->error = 0; 862 783 } 863 784 864 - struct sba_request * 785 + static struct sba_request * 865 786 sba_prep_dma_xor_req(struct sba_device *sba, 866 787 dma_addr_t off, dma_addr_t dst, dma_addr_t *src, 867 788 u32 src_cnt, size_t len, unsigned long flags) ··· 872 793 req = sba_alloc_request(sba); 873 794 if (!req) 874 795 return NULL; 875 - req->fence = (flags & DMA_PREP_FENCE) ? 
true : false; 796 + if (flags & DMA_PREP_FENCE) 797 + req->flags |= SBA_REQUEST_FENCE; 876 798 877 799 /* Fillup request message */ 878 800 sba_fillup_xor_msg(req, req->cmds, &req->msg, ··· 934 854 u64 cmd; 935 855 u32 c_mdata; 936 856 unsigned int i; 857 + dma_addr_t resp_dma = req->tx.phys; 937 858 struct brcm_sba_command *cmdsp = cmds; 938 859 939 860 if (pq_continue) { ··· 1028 947 cmdsp->flags = BRCM_SBA_CMD_TYPE_A; 1029 948 if (req->sba->hw_resp_size) { 1030 949 cmdsp->flags |= BRCM_SBA_CMD_HAS_RESP; 1031 - cmdsp->resp = req->resp_dma; 950 + cmdsp->resp = resp_dma; 1032 951 cmdsp->resp_len = req->sba->hw_resp_size; 1033 952 } 1034 953 cmdsp->flags |= BRCM_SBA_CMD_HAS_OUTPUT; ··· 1055 974 cmdsp->flags = BRCM_SBA_CMD_TYPE_A; 1056 975 if (req->sba->hw_resp_size) { 1057 976 cmdsp->flags |= BRCM_SBA_CMD_HAS_RESP; 1058 - cmdsp->resp = req->resp_dma; 977 + cmdsp->resp = resp_dma; 1059 978 cmdsp->resp_len = req->sba->hw_resp_size; 1060 979 } 1061 980 cmdsp->flags |= BRCM_SBA_CMD_HAS_OUTPUT; ··· 1072 991 msg->error = 0; 1073 992 } 1074 993 1075 - struct sba_request * 994 + static struct sba_request * 1076 995 sba_prep_dma_pq_req(struct sba_device *sba, dma_addr_t off, 1077 996 dma_addr_t *dst_p, dma_addr_t *dst_q, dma_addr_t *src, 1078 997 u32 src_cnt, const u8 *scf, size_t len, unsigned long flags) ··· 1083 1002 req = sba_alloc_request(sba); 1084 1003 if (!req) 1085 1004 return NULL; 1086 - req->fence = (flags & DMA_PREP_FENCE) ? 
true : false; 1005 + if (flags & DMA_PREP_FENCE) 1006 + req->flags |= SBA_REQUEST_FENCE; 1087 1007 1088 1008 /* Fillup request messages */ 1089 1009 sba_fillup_pq_msg(req, dmaf_continue(flags), ··· 1109 1027 u64 cmd; 1110 1028 u32 c_mdata; 1111 1029 u8 pos, dpos = raid6_gflog[scf]; 1030 + dma_addr_t resp_dma = req->tx.phys; 1112 1031 struct brcm_sba_command *cmdsp = cmds; 1113 1032 1114 1033 if (!dst_p) ··· 1188 1105 cmdsp->flags = BRCM_SBA_CMD_TYPE_A; 1189 1106 if (req->sba->hw_resp_size) { 1190 1107 cmdsp->flags |= BRCM_SBA_CMD_HAS_RESP; 1191 - cmdsp->resp = req->resp_dma; 1108 + cmdsp->resp = resp_dma; 1192 1109 cmdsp->resp_len = req->sba->hw_resp_size; 1193 1110 } 1194 1111 cmdsp->flags |= BRCM_SBA_CMD_HAS_OUTPUT; ··· 1309 1226 cmdsp->flags = BRCM_SBA_CMD_TYPE_A; 1310 1227 if (req->sba->hw_resp_size) { 1311 1228 cmdsp->flags |= BRCM_SBA_CMD_HAS_RESP; 1312 - cmdsp->resp = req->resp_dma; 1229 + cmdsp->resp = resp_dma; 1313 1230 cmdsp->resp_len = req->sba->hw_resp_size; 1314 1231 } 1315 1232 cmdsp->flags |= BRCM_SBA_CMD_HAS_OUTPUT; ··· 1326 1243 msg->error = 0; 1327 1244 } 1328 1245 1329 - struct sba_request * 1246 + static struct sba_request * 1330 1247 sba_prep_dma_pq_single_req(struct sba_device *sba, dma_addr_t off, 1331 1248 dma_addr_t *dst_p, dma_addr_t *dst_q, 1332 1249 dma_addr_t src, u8 scf, size_t len, ··· 1338 1255 req = sba_alloc_request(sba); 1339 1256 if (!req) 1340 1257 return NULL; 1341 - req->fence = (flags & DMA_PREP_FENCE) ? 
true : false; 1258 + if (flags & DMA_PREP_FENCE) 1259 + req->flags |= SBA_REQUEST_FENCE; 1342 1260 1343 1261 /* Fillup request messages */ 1344 1262 sba_fillup_pq_single_msg(req, dmaf_continue(flags), ··· 1454 1370 1455 1371 /* ====== Mailbox callbacks ===== */ 1456 1372 1457 - static void sba_dma_tx_actions(struct sba_request *req) 1458 - { 1459 - struct dma_async_tx_descriptor *tx = &req->tx; 1460 - 1461 - WARN_ON(tx->cookie < 0); 1462 - 1463 - if (tx->cookie > 0) { 1464 - dma_cookie_complete(tx); 1465 - 1466 - /* 1467 - * Call the callback (must not sleep or submit new 1468 - * operations to this channel) 1469 - */ 1470 - if (tx->callback) 1471 - tx->callback(tx->callback_param); 1472 - 1473 - dma_descriptor_unmap(tx); 1474 - } 1475 - 1476 - /* Run dependent operations */ 1477 - dma_run_dependencies(tx); 1478 - 1479 - /* If waiting for 'ack' then move to completed list */ 1480 - if (!async_tx_test_ack(&req->tx)) 1481 - sba_complete_chained_requests(req); 1482 - else 1483 - sba_free_chained_requests(req); 1484 - } 1485 - 1486 1373 static void sba_receive_message(struct mbox_client *cl, void *msg) 1487 1374 { 1488 - unsigned long flags; 1489 1375 struct brcm_message *m = msg; 1490 - struct sba_request *req = m->ctx, *req1; 1376 + struct sba_request *req = m->ctx; 1491 1377 struct sba_device *sba = req->sba; 1492 1378 1493 1379 /* Error count if message has error */ ··· 1465 1411 dev_err(sba->dev, "%s got message with error %d", 1466 1412 dma_chan_name(&sba->dma_chan), m->error); 1467 1413 1468 - /* Mark request as received */ 1469 - sba_received_request(req); 1414 + /* Process received request */ 1415 + sba_process_received_request(sba, req); 1416 + } 1470 1417 1471 - /* Wait for all chained requests to be completed */ 1472 - if (atomic_dec_return(&req->first->next_pending_count)) 1473 - goto done; 1418 + /* ====== Debugfs callbacks ====== */ 1474 1419 1475 - /* Point to first request */ 1476 - req = req->first; 1420 + static int sba_debugfs_stats_show(struct 
seq_file *file, void *offset) 1421 + { 1422 + struct platform_device *pdev = to_platform_device(file->private); 1423 + struct sba_device *sba = platform_get_drvdata(pdev); 1477 1424 1478 - /* Update request */ 1479 - if (req->state == SBA_REQUEST_STATE_RECEIVED) 1480 - sba_dma_tx_actions(req); 1481 - else 1482 - sba_free_chained_requests(req); 1425 + /* Write stats in file */ 1426 + sba_write_stats_in_seqfile(sba, file); 1483 1427 1484 - spin_lock_irqsave(&sba->reqs_lock, flags); 1485 - 1486 - /* Re-check all completed request waiting for 'ack' */ 1487 - list_for_each_entry_safe(req, req1, &sba->reqs_completed_list, node) { 1488 - spin_unlock_irqrestore(&sba->reqs_lock, flags); 1489 - sba_dma_tx_actions(req); 1490 - spin_lock_irqsave(&sba->reqs_lock, flags); 1491 - } 1492 - 1493 - spin_unlock_irqrestore(&sba->reqs_lock, flags); 1494 - 1495 - done: 1496 - /* Try to submit pending request */ 1497 - sba_issue_pending(&sba->dma_chan); 1428 + return 0; 1498 1429 } 1499 1430 1500 1431 /* ====== Platform driver routines ===== */ 1501 1432 1502 1433 static int sba_prealloc_channel_resources(struct sba_device *sba) 1503 1434 { 1504 - int i, j, p, ret = 0; 1435 + int i, j, ret = 0; 1505 1436 struct sba_request *req = NULL; 1506 1437 1507 - sba->resp_base = dma_alloc_coherent(sba->dma_dev.dev, 1438 + sba->resp_base = dma_alloc_coherent(sba->mbox_dev, 1508 1439 sba->max_resp_pool_size, 1509 1440 &sba->resp_dma_base, GFP_KERNEL); 1510 1441 if (!sba->resp_base) 1511 1442 return -ENOMEM; 1512 1443 1513 - sba->cmds_base = dma_alloc_coherent(sba->dma_dev.dev, 1444 + sba->cmds_base = dma_alloc_coherent(sba->mbox_dev, 1514 1445 sba->max_cmds_pool_size, 1515 1446 &sba->cmds_dma_base, GFP_KERNEL); 1516 1447 if (!sba->cmds_base) { ··· 1508 1469 INIT_LIST_HEAD(&sba->reqs_alloc_list); 1509 1470 INIT_LIST_HEAD(&sba->reqs_pending_list); 1510 1471 INIT_LIST_HEAD(&sba->reqs_active_list); 1511 - INIT_LIST_HEAD(&sba->reqs_received_list); 1512 - INIT_LIST_HEAD(&sba->reqs_completed_list); 1513 
1472 INIT_LIST_HEAD(&sba->reqs_aborted_list); 1514 1473 INIT_LIST_HEAD(&sba->reqs_free_list); 1515 1474 1516 - sba->reqs = devm_kcalloc(sba->dev, sba->max_req, 1517 - sizeof(*req), GFP_KERNEL); 1518 - if (!sba->reqs) { 1519 - ret = -ENOMEM; 1520 - goto fail_free_cmds_pool; 1521 - } 1522 - 1523 - for (i = 0, p = 0; i < sba->max_req; i++) { 1524 - req = &sba->reqs[i]; 1525 - INIT_LIST_HEAD(&req->node); 1526 - req->sba = sba; 1527 - req->state = SBA_REQUEST_STATE_FREE; 1528 - INIT_LIST_HEAD(&req->next); 1529 - req->next_count = 1; 1530 - atomic_set(&req->next_pending_count, 0); 1531 - req->fence = false; 1532 - req->resp = sba->resp_base + p; 1533 - req->resp_dma = sba->resp_dma_base + p; 1534 - p += sba->hw_resp_size; 1535 - req->cmds = devm_kcalloc(sba->dev, sba->max_cmd_per_req, 1536 - sizeof(*req->cmds), GFP_KERNEL); 1537 - if (!req->cmds) { 1475 + for (i = 0; i < sba->max_req; i++) { 1476 + req = devm_kzalloc(sba->dev, 1477 + sizeof(*req) + 1478 + sba->max_cmd_per_req * sizeof(req->cmds[0]), 1479 + GFP_KERNEL); 1480 + if (!req) { 1538 1481 ret = -ENOMEM; 1539 1482 goto fail_free_cmds_pool; 1540 1483 } 1484 + INIT_LIST_HEAD(&req->node); 1485 + req->sba = sba; 1486 + req->flags = SBA_REQUEST_STATE_FREE; 1487 + INIT_LIST_HEAD(&req->next); 1488 + atomic_set(&req->next_pending_count, 0); 1541 1489 for (j = 0; j < sba->max_cmd_per_req; j++) { 1542 1490 req->cmds[j].cmd = 0; 1543 1491 req->cmds[j].cmd_dma = sba->cmds_base + ··· 1535 1509 } 1536 1510 memset(&req->msg, 0, sizeof(req->msg)); 1537 1511 dma_async_tx_descriptor_init(&req->tx, &sba->dma_chan); 1512 + async_tx_ack(&req->tx); 1538 1513 req->tx.tx_submit = sba_tx_submit; 1539 - req->tx.phys = req->resp_dma; 1514 + req->tx.phys = sba->resp_dma_base + i * sba->hw_resp_size; 1540 1515 list_add_tail(&req->node, &sba->reqs_free_list); 1541 1516 } 1542 - 1543 - sba->reqs_free_count = sba->max_req; 1544 1517 1545 1518 return 0; 1546 1519 1547 1520 fail_free_cmds_pool: 1548 - dma_free_coherent(sba->dma_dev.dev, 1521 + 
dma_free_coherent(sba->mbox_dev, 1549 1522 sba->max_cmds_pool_size, 1550 1523 sba->cmds_base, sba->cmds_dma_base); 1551 1524 fail_free_resp_pool: 1552 - dma_free_coherent(sba->dma_dev.dev, 1525 + dma_free_coherent(sba->mbox_dev, 1553 1526 sba->max_resp_pool_size, 1554 1527 sba->resp_base, sba->resp_dma_base); 1555 1528 return ret; ··· 1557 1532 static void sba_freeup_channel_resources(struct sba_device *sba) 1558 1533 { 1559 1534 dmaengine_terminate_all(&sba->dma_chan); 1560 - dma_free_coherent(sba->dma_dev.dev, sba->max_cmds_pool_size, 1535 + dma_free_coherent(sba->mbox_dev, sba->max_cmds_pool_size, 1561 1536 sba->cmds_base, sba->cmds_dma_base); 1562 - dma_free_coherent(sba->dma_dev.dev, sba->max_resp_pool_size, 1537 + dma_free_coherent(sba->mbox_dev, sba->max_resp_pool_size, 1563 1538 sba->resp_base, sba->resp_dma_base); 1564 1539 sba->resp_base = NULL; 1565 1540 sba->resp_dma_base = 0; ··· 1650 1625 sba->dev = &pdev->dev; 1651 1626 platform_set_drvdata(pdev, sba); 1652 1627 1628 + /* Number of channels equals number of mailbox channels */ 1629 + ret = of_count_phandle_with_args(pdev->dev.of_node, 1630 + "mboxes", "#mbox-cells"); 1631 + if (ret <= 0) 1632 + return -ENODEV; 1633 + mchans_count = ret; 1634 + 1653 1635 /* Determine SBA version from DT compatible string */ 1654 1636 if (of_device_is_compatible(sba->dev->of_node, "brcm,iproc-sba")) 1655 1637 sba->ver = SBA_VER_1; ··· 1669 1637 /* Derived Configuration parameters */ 1670 1638 switch (sba->ver) { 1671 1639 case SBA_VER_1: 1672 - sba->max_req = 1024; 1673 1640 sba->hw_buf_size = 4096; 1674 1641 sba->hw_resp_size = 8; 1675 1642 sba->max_pq_coefs = 6; 1676 1643 sba->max_pq_srcs = 6; 1677 1644 break; 1678 1645 case SBA_VER_2: 1679 - sba->max_req = 1024; 1680 1646 sba->hw_buf_size = 4096; 1681 1647 sba->hw_resp_size = 8; 1682 1648 sba->max_pq_coefs = 30; ··· 1688 1658 default: 1689 1659 return -EINVAL; 1690 1660 } 1661 + sba->max_req = SBA_MAX_REQ_PER_MBOX_CHANNEL * mchans_count; 1691 1662 
sba->max_cmd_per_req = sba->max_pq_srcs + 3; 1692 1663 sba->max_xor_srcs = sba->max_cmd_per_req - 1; 1693 1664 sba->max_resp_pool_size = sba->max_req * sba->hw_resp_size; ··· 1699 1668 sba->client.dev = &pdev->dev; 1700 1669 sba->client.rx_callback = sba_receive_message; 1701 1670 sba->client.tx_block = false; 1702 - sba->client.knows_txdone = false; 1671 + sba->client.knows_txdone = true; 1703 1672 sba->client.tx_tout = 0; 1704 1673 1705 - /* Number of channels equals number of mailbox channels */ 1706 - ret = of_count_phandle_with_args(pdev->dev.of_node, 1707 - "mboxes", "#mbox-cells"); 1708 - if (ret <= 0) 1709 - return -ENODEV; 1710 - mchans_count = ret; 1711 - sba->mchans_count = 0; 1712 - atomic_set(&sba->mchans_current, 0); 1713 - 1714 1674 /* Allocate mailbox channel array */ 1715 - sba->mchans = devm_kcalloc(&pdev->dev, sba->mchans_count, 1675 + sba->mchans = devm_kcalloc(&pdev->dev, mchans_count, 1716 1676 sizeof(*sba->mchans), GFP_KERNEL); 1717 1677 if (!sba->mchans) 1718 1678 return -ENOMEM; 1719 1679 1720 1680 /* Request mailbox channels */ 1681 + sba->mchans_count = 0; 1721 1682 for (i = 0; i < mchans_count; i++) { 1722 1683 sba->mchans[i] = mbox_request_channel(&sba->client, i); 1723 1684 if (IS_ERR(sba->mchans[i])) { ··· 1718 1695 } 1719 1696 sba->mchans_count++; 1720 1697 } 1698 + atomic_set(&sba->mchans_current, 0); 1721 1699 1722 1700 /* Find-out underlying mailbox device */ 1723 1701 ret = of_parse_phandle_with_args(pdev->dev.of_node, ··· 1747 1723 } 1748 1724 } 1749 1725 1750 - /* Register DMA device with linux async framework */ 1751 - ret = sba_async_register(sba); 1752 - if (ret) 1753 - goto fail_free_mchans; 1754 - 1755 1726 /* Prealloc channel resource */ 1756 1727 ret = sba_prealloc_channel_resources(sba); 1757 1728 if (ret) 1758 - goto fail_async_dev_unreg; 1729 + goto fail_free_mchans; 1730 + 1731 + /* Check availability of debugfs */ 1732 + if (!debugfs_initialized()) 1733 + goto skip_debugfs; 1734 + 1735 + /* Create debugfs root entry 
*/ 1736 + sba->root = debugfs_create_dir(dev_name(sba->dev), NULL); 1737 + if (IS_ERR_OR_NULL(sba->root)) { 1738 + dev_err(sba->dev, "failed to create debugfs root entry\n"); 1739 + sba->root = NULL; 1740 + goto skip_debugfs; 1741 + } 1742 + 1743 + /* Create debugfs stats entry */ 1744 + sba->stats = debugfs_create_devm_seqfile(sba->dev, "stats", sba->root, 1745 + sba_debugfs_stats_show); 1746 + if (IS_ERR_OR_NULL(sba->stats)) 1747 + dev_err(sba->dev, "failed to create debugfs stats file\n"); 1748 + skip_debugfs: 1749 + 1750 + /* Register DMA device with Linux async framework */ 1751 + ret = sba_async_register(sba); 1752 + if (ret) 1753 + goto fail_free_resources; 1759 1754 1760 1755 /* Print device info */ 1761 1756 dev_info(sba->dev, "%s using SBAv%d and %d mailbox channels", ··· 1783 1740 1784 1741 return 0; 1785 1742 1786 - fail_async_dev_unreg: 1787 - dma_async_device_unregister(&sba->dma_dev); 1743 + fail_free_resources: 1744 + debugfs_remove_recursive(sba->root); 1745 + sba_freeup_channel_resources(sba); 1788 1746 fail_free_mchans: 1789 1747 for (i = 0; i < sba->mchans_count; i++) 1790 1748 mbox_free_channel(sba->mchans[i]); ··· 1797 1753 int i; 1798 1754 struct sba_device *sba = platform_get_drvdata(pdev); 1799 1755 1800 - sba_freeup_channel_resources(sba); 1801 - 1802 1756 dma_async_device_unregister(&sba->dma_dev); 1757 + 1758 + debugfs_remove_recursive(sba->root); 1759 + 1760 + sba_freeup_channel_resources(sba); 1803 1761 1804 1762 for (i = 0; i < sba->mchans_count; i++) 1805 1763 mbox_free_channel(sba->mchans[i]);
+78 -23
drivers/dma/dmaengine.c
··· 923 923 return -ENODEV; 924 924 925 925 /* validate device routines */ 926 - BUG_ON(dma_has_cap(DMA_MEMCPY, device->cap_mask) && 927 - !device->device_prep_dma_memcpy); 928 - BUG_ON(dma_has_cap(DMA_XOR, device->cap_mask) && 929 - !device->device_prep_dma_xor); 930 - BUG_ON(dma_has_cap(DMA_XOR_VAL, device->cap_mask) && 931 - !device->device_prep_dma_xor_val); 932 - BUG_ON(dma_has_cap(DMA_PQ, device->cap_mask) && 933 - !device->device_prep_dma_pq); 934 - BUG_ON(dma_has_cap(DMA_PQ_VAL, device->cap_mask) && 935 - !device->device_prep_dma_pq_val); 936 - BUG_ON(dma_has_cap(DMA_MEMSET, device->cap_mask) && 937 - !device->device_prep_dma_memset); 938 - BUG_ON(dma_has_cap(DMA_INTERRUPT, device->cap_mask) && 939 - !device->device_prep_dma_interrupt); 940 - BUG_ON(dma_has_cap(DMA_SG, device->cap_mask) && 941 - !device->device_prep_dma_sg); 942 - BUG_ON(dma_has_cap(DMA_CYCLIC, device->cap_mask) && 943 - !device->device_prep_dma_cyclic); 944 - BUG_ON(dma_has_cap(DMA_INTERLEAVE, device->cap_mask) && 945 - !device->device_prep_interleaved_dma); 926 + if (!device->dev) { 927 + pr_err("DMAdevice must have dev\n"); 928 + return -EIO; 929 + } 946 930 947 - BUG_ON(!device->device_tx_status); 948 - BUG_ON(!device->device_issue_pending); 949 - BUG_ON(!device->dev); 931 + if (dma_has_cap(DMA_MEMCPY, device->cap_mask) && !device->device_prep_dma_memcpy) { 932 + dev_err(device->dev, 933 + "Device claims capability %s, but op is not defined\n", 934 + "DMA_MEMCPY"); 935 + return -EIO; 936 + } 937 + 938 + if (dma_has_cap(DMA_XOR, device->cap_mask) && !device->device_prep_dma_xor) { 939 + dev_err(device->dev, 940 + "Device claims capability %s, but op is not defined\n", 941 + "DMA_XOR"); 942 + return -EIO; 943 + } 944 + 945 + if (dma_has_cap(DMA_XOR_VAL, device->cap_mask) && !device->device_prep_dma_xor_val) { 946 + dev_err(device->dev, 947 + "Device claims capability %s, but op is not defined\n", 948 + "DMA_XOR_VAL"); 949 + return -EIO; 950 + } 951 + 952 + if (dma_has_cap(DMA_PQ, 
device->cap_mask) && !device->device_prep_dma_pq) { 953 + dev_err(device->dev, 954 + "Device claims capability %s, but op is not defined\n", 955 + "DMA_PQ"); 956 + return -EIO; 957 + } 958 + 959 + if (dma_has_cap(DMA_PQ_VAL, device->cap_mask) && !device->device_prep_dma_pq_val) { 960 + dev_err(device->dev, 961 + "Device claims capability %s, but op is not defined\n", 962 + "DMA_PQ_VAL"); 963 + return -EIO; 964 + } 965 + 966 + if (dma_has_cap(DMA_MEMSET, device->cap_mask) && !device->device_prep_dma_memset) { 967 + dev_err(device->dev, 968 + "Device claims capability %s, but op is not defined\n", 969 + "DMA_MEMSET"); 970 + return -EIO; 971 + } 972 + 973 + if (dma_has_cap(DMA_INTERRUPT, device->cap_mask) && !device->device_prep_dma_interrupt) { 974 + dev_err(device->dev, 975 + "Device claims capability %s, but op is not defined\n", 976 + "DMA_INTERRUPT"); 977 + return -EIO; 978 + } 979 + 980 + if (dma_has_cap(DMA_CYCLIC, device->cap_mask) && !device->device_prep_dma_cyclic) { 981 + dev_err(device->dev, 982 + "Device claims capability %s, but op is not defined\n", 983 + "DMA_CYCLIC"); 984 + return -EIO; 985 + } 986 + 987 + if (dma_has_cap(DMA_INTERLEAVE, device->cap_mask) && !device->device_prep_interleaved_dma) { 988 + dev_err(device->dev, 989 + "Device claims capability %s, but op is not defined\n", 990 + "DMA_INTERLEAVE"); 991 + return -EIO; 992 + } 993 + 994 + 995 + if (!device->device_tx_status) { 996 + dev_err(device->dev, "Device tx_status is not defined\n"); 997 + return -EIO; 998 + } 999 + 1000 + 1001 + if (!device->device_issue_pending) { 1002 + dev_err(device->dev, "Device issue_pending is not defined\n"); 1003 + return -EIO; 1004 + } 950 1005 951 1006 /* note: this only matters in the 952 1007 * CONFIG_ASYNC_TX_ENABLE_CHANNEL_SWITCH=n case
+59 -51
drivers/dma/dmatest.c
··· 52 52 MODULE_PARM_DESC(iterations, 53 53 "Iterations before stopping test (default: infinite)"); 54 54 55 - static unsigned int sg_buffers = 1; 56 - module_param(sg_buffers, uint, S_IRUGO | S_IWUSR); 57 - MODULE_PARM_DESC(sg_buffers, 58 - "Number of scatter gather buffers (default: 1)"); 59 - 60 55 static unsigned int dmatest; 61 56 module_param(dmatest, uint, S_IRUGO | S_IWUSR); 62 57 MODULE_PARM_DESC(dmatest, 63 - "dmatest 0-memcpy 1-slave_sg (default: 0)"); 58 + "dmatest 0-memcpy 1-memset (default: 0)"); 64 59 65 60 static unsigned int xor_sources = 3; 66 61 module_param(xor_sources, uint, S_IRUGO | S_IWUSR); ··· 153 158 #define PATTERN_COPY 0x40 154 159 #define PATTERN_OVERWRITE 0x20 155 160 #define PATTERN_COUNT_MASK 0x1f 161 + #define PATTERN_MEMSET_IDX 0x01 156 162 157 163 struct dmatest_thread { 158 164 struct list_head node; ··· 235 239 return buf; 236 240 } 237 241 242 + static inline u8 gen_inv_idx(u8 index, bool is_memset) 243 + { 244 + u8 val = is_memset ? PATTERN_MEMSET_IDX : index; 245 + 246 + return ~val & PATTERN_COUNT_MASK; 247 + } 248 + 249 + static inline u8 gen_src_value(u8 index, bool is_memset) 250 + { 251 + return PATTERN_SRC | gen_inv_idx(index, is_memset); 252 + } 253 + 254 + static inline u8 gen_dst_value(u8 index, bool is_memset) 255 + { 256 + return PATTERN_DST | gen_inv_idx(index, is_memset); 257 + } 258 + 238 259 static void dmatest_init_srcs(u8 **bufs, unsigned int start, unsigned int len, 239 - unsigned int buf_size) 260 + unsigned int buf_size, bool is_memset) 240 261 { 241 262 unsigned int i; 242 263 u8 *buf; 243 264 244 265 for (; (buf = *bufs); bufs++) { 245 266 for (i = 0; i < start; i++) 246 - buf[i] = PATTERN_SRC | (~i & PATTERN_COUNT_MASK); 267 + buf[i] = gen_src_value(i, is_memset); 247 268 for ( ; i < start + len; i++) 248 - buf[i] = PATTERN_SRC | PATTERN_COPY 249 - | (~i & PATTERN_COUNT_MASK); 269 + buf[i] = gen_src_value(i, is_memset) | PATTERN_COPY; 250 270 for ( ; i < buf_size; i++) 251 - buf[i] = PATTERN_SRC | (~i 
& PATTERN_COUNT_MASK); 271 + buf[i] = gen_src_value(i, is_memset); 252 272 buf++; 253 273 } 254 274 } 255 275 256 276 static void dmatest_init_dsts(u8 **bufs, unsigned int start, unsigned int len, 257 - unsigned int buf_size) 277 + unsigned int buf_size, bool is_memset) 258 278 { 259 279 unsigned int i; 260 280 u8 *buf; 261 281 262 282 for (; (buf = *bufs); bufs++) { 263 283 for (i = 0; i < start; i++) 264 - buf[i] = PATTERN_DST | (~i & PATTERN_COUNT_MASK); 284 + buf[i] = gen_dst_value(i, is_memset); 265 285 for ( ; i < start + len; i++) 266 - buf[i] = PATTERN_DST | PATTERN_OVERWRITE 267 - | (~i & PATTERN_COUNT_MASK); 286 + buf[i] = gen_dst_value(i, is_memset) | 287 + PATTERN_OVERWRITE; 268 288 for ( ; i < buf_size; i++) 269 - buf[i] = PATTERN_DST | (~i & PATTERN_COUNT_MASK); 289 + buf[i] = gen_dst_value(i, is_memset); 270 290 } 271 291 } 272 292 273 293 static void dmatest_mismatch(u8 actual, u8 pattern, unsigned int index, 274 - unsigned int counter, bool is_srcbuf) 294 + unsigned int counter, bool is_srcbuf, bool is_memset) 275 295 { 276 296 u8 diff = actual ^ pattern; 277 - u8 expected = pattern | (~counter & PATTERN_COUNT_MASK); 297 + u8 expected = pattern | gen_inv_idx(counter, is_memset); 278 298 const char *thread_name = current->comm; 279 299 280 300 if (is_srcbuf) ··· 310 298 311 299 static unsigned int dmatest_verify(u8 **bufs, unsigned int start, 312 300 unsigned int end, unsigned int counter, u8 pattern, 313 - bool is_srcbuf) 301 + bool is_srcbuf, bool is_memset) 314 302 { 315 303 unsigned int i; 316 304 unsigned int error_count = 0; ··· 323 311 counter = counter_orig; 324 312 for (i = start; i < end; i++) { 325 313 actual = buf[i]; 326 - expected = pattern | (~counter & PATTERN_COUNT_MASK); 314 + expected = pattern | gen_inv_idx(counter, is_memset); 327 315 if (actual != expected) { 328 316 if (error_count < MAX_ERROR_COUNT) 329 317 dmatest_mismatch(actual, pattern, i, 330 - counter, is_srcbuf); 318 + counter, is_srcbuf, 319 + is_memset); 331 320 
error_count++; 332 321 } 333 322 counter++; ··· 448 435 s64 runtime = 0; 449 436 unsigned long long total_len = 0; 450 437 u8 align = 0; 438 + bool is_memset = false; 451 439 452 440 set_freezable(); 453 441 ··· 462 448 if (thread->type == DMA_MEMCPY) { 463 449 align = dev->copy_align; 464 450 src_cnt = dst_cnt = 1; 465 - } else if (thread->type == DMA_SG) { 466 - align = dev->copy_align; 467 - src_cnt = dst_cnt = sg_buffers; 451 + } else if (thread->type == DMA_MEMSET) { 452 + align = dev->fill_align; 453 + src_cnt = dst_cnt = 1; 454 + is_memset = true; 468 455 } else if (thread->type == DMA_XOR) { 469 456 /* force odd to ensure dst = src */ 470 457 src_cnt = min_odd(params->xor_sources | 1, dev->max_xor); ··· 545 530 dma_addr_t srcs[src_cnt]; 546 531 dma_addr_t *dsts; 547 532 unsigned int src_off, dst_off, len; 548 - struct scatterlist tx_sg[src_cnt]; 549 - struct scatterlist rx_sg[src_cnt]; 550 533 551 534 total_tests++; 552 535 ··· 584 571 dst_off = (dst_off >> align) << align; 585 572 586 573 dmatest_init_srcs(thread->srcs, src_off, len, 587 - params->buf_size); 574 + params->buf_size, is_memset); 588 575 dmatest_init_dsts(thread->dsts, dst_off, len, 589 - params->buf_size); 576 + params->buf_size, is_memset); 590 577 591 578 diff = ktime_sub(ktime_get(), start); 592 579 filltime = ktime_add(filltime, diff); ··· 640 627 um->bidi_cnt++; 641 628 } 642 629 643 - sg_init_table(tx_sg, src_cnt); 644 - sg_init_table(rx_sg, src_cnt); 645 - for (i = 0; i < src_cnt; i++) { 646 - sg_dma_address(&rx_sg[i]) = srcs[i]; 647 - sg_dma_address(&tx_sg[i]) = dsts[i] + dst_off; 648 - sg_dma_len(&tx_sg[i]) = len; 649 - sg_dma_len(&rx_sg[i]) = len; 650 - } 651 - 652 630 if (thread->type == DMA_MEMCPY) 653 631 tx = dev->device_prep_dma_memcpy(chan, 654 632 dsts[0] + dst_off, 655 633 srcs[0], len, flags); 656 - else if (thread->type == DMA_SG) 657 - tx = dev->device_prep_dma_sg(chan, tx_sg, src_cnt, 658 - rx_sg, src_cnt, flags); 634 + else if (thread->type == DMA_MEMSET) 635 + tx = 
dev->device_prep_dma_memset(chan, 636 + dsts[0] + dst_off, 637 + *(thread->srcs[0] + src_off), 638 + len, flags); 659 639 else if (thread->type == DMA_XOR) 660 640 tx = dev->device_prep_dma_xor(chan, 661 641 dsts[0] + dst_off, ··· 728 722 start = ktime_get(); 729 723 pr_debug("%s: verifying source buffer...\n", current->comm); 730 724 error_count = dmatest_verify(thread->srcs, 0, src_off, 731 - 0, PATTERN_SRC, true); 725 + 0, PATTERN_SRC, true, is_memset); 732 726 error_count += dmatest_verify(thread->srcs, src_off, 733 727 src_off + len, src_off, 734 - PATTERN_SRC | PATTERN_COPY, true); 728 + PATTERN_SRC | PATTERN_COPY, true, is_memset); 735 729 error_count += dmatest_verify(thread->srcs, src_off + len, 736 730 params->buf_size, src_off + len, 737 - PATTERN_SRC, true); 731 + PATTERN_SRC, true, is_memset); 738 732 739 733 pr_debug("%s: verifying dest buffer...\n", current->comm); 740 734 error_count += dmatest_verify(thread->dsts, 0, dst_off, 741 - 0, PATTERN_DST, false); 735 + 0, PATTERN_DST, false, is_memset); 736 + 742 737 error_count += dmatest_verify(thread->dsts, dst_off, 743 738 dst_off + len, src_off, 744 - PATTERN_SRC | PATTERN_COPY, false); 739 + PATTERN_SRC | PATTERN_COPY, false, is_memset); 740 + 745 741 error_count += dmatest_verify(thread->dsts, dst_off + len, 746 742 params->buf_size, dst_off + len, 747 - PATTERN_DST, false); 743 + PATTERN_DST, false, is_memset); 748 744 749 745 diff = ktime_sub(ktime_get(), start); 750 746 comparetime = ktime_add(comparetime, diff); ··· 829 821 830 822 if (type == DMA_MEMCPY) 831 823 op = "copy"; 832 - else if (type == DMA_SG) 833 - op = "sg"; 824 + else if (type == DMA_MEMSET) 825 + op = "set"; 834 826 else if (type == DMA_XOR) 835 827 op = "xor"; 836 828 else if (type == DMA_PQ) ··· 891 883 } 892 884 } 893 885 894 - if (dma_has_cap(DMA_SG, dma_dev->cap_mask)) { 886 + if (dma_has_cap(DMA_MEMSET, dma_dev->cap_mask)) { 895 887 if (dmatest == 1) { 896 - cnt = dmatest_add_threads(info, dtc, DMA_SG); 888 + cnt = 
dmatest_add_threads(info, dtc, DMA_MEMSET); 897 889 thread_count += cnt > 0 ? cnt : 0; 898 890 } 899 891 } ··· 969 961 params->noverify = noverify; 970 962 971 963 request_channels(info, DMA_MEMCPY); 964 + request_channels(info, DMA_MEMSET); 972 965 request_channels(info, DMA_XOR); 973 - request_channels(info, DMA_SG); 974 966 request_channels(info, DMA_PQ); 975 967 } 976 968
-118
drivers/dma/fsldma.c
··· 825 825 return NULL; 826 826 } 827 827 828 - static struct dma_async_tx_descriptor *fsl_dma_prep_sg(struct dma_chan *dchan, 829 - struct scatterlist *dst_sg, unsigned int dst_nents, 830 - struct scatterlist *src_sg, unsigned int src_nents, 831 - unsigned long flags) 832 - { 833 - struct fsl_desc_sw *first = NULL, *prev = NULL, *new = NULL; 834 - struct fsldma_chan *chan = to_fsl_chan(dchan); 835 - size_t dst_avail, src_avail; 836 - dma_addr_t dst, src; 837 - size_t len; 838 - 839 - /* basic sanity checks */ 840 - if (dst_nents == 0 || src_nents == 0) 841 - return NULL; 842 - 843 - if (dst_sg == NULL || src_sg == NULL) 844 - return NULL; 845 - 846 - /* 847 - * TODO: should we check that both scatterlists have the same 848 - * TODO: number of bytes in total? Is that really an error? 849 - */ 850 - 851 - /* get prepared for the loop */ 852 - dst_avail = sg_dma_len(dst_sg); 853 - src_avail = sg_dma_len(src_sg); 854 - 855 - /* run until we are out of scatterlist entries */ 856 - while (true) { 857 - 858 - /* create the largest transaction possible */ 859 - len = min_t(size_t, src_avail, dst_avail); 860 - len = min_t(size_t, len, FSL_DMA_BCR_MAX_CNT); 861 - if (len == 0) 862 - goto fetch; 863 - 864 - dst = sg_dma_address(dst_sg) + sg_dma_len(dst_sg) - dst_avail; 865 - src = sg_dma_address(src_sg) + sg_dma_len(src_sg) - src_avail; 866 - 867 - /* allocate and populate the descriptor */ 868 - new = fsl_dma_alloc_descriptor(chan); 869 - if (!new) { 870 - chan_err(chan, "%s\n", msg_ld_oom); 871 - goto fail; 872 - } 873 - 874 - set_desc_cnt(chan, &new->hw, len); 875 - set_desc_src(chan, &new->hw, src); 876 - set_desc_dst(chan, &new->hw, dst); 877 - 878 - if (!first) 879 - first = new; 880 - else 881 - set_desc_next(chan, &prev->hw, new->async_tx.phys); 882 - 883 - new->async_tx.cookie = 0; 884 - async_tx_ack(&new->async_tx); 885 - prev = new; 886 - 887 - /* Insert the link descriptor to the LD ring */ 888 - list_add_tail(&new->node, &first->tx_list); 889 - 890 - /* update 
metadata */ 891 - dst_avail -= len; 892 - src_avail -= len; 893 - 894 - fetch: 895 - /* fetch the next dst scatterlist entry */ 896 - if (dst_avail == 0) { 897 - 898 - /* no more entries: we're done */ 899 - if (dst_nents == 0) 900 - break; 901 - 902 - /* fetch the next entry: if there are no more: done */ 903 - dst_sg = sg_next(dst_sg); 904 - if (dst_sg == NULL) 905 - break; 906 - 907 - dst_nents--; 908 - dst_avail = sg_dma_len(dst_sg); 909 - } 910 - 911 - /* fetch the next src scatterlist entry */ 912 - if (src_avail == 0) { 913 - 914 - /* no more entries: we're done */ 915 - if (src_nents == 0) 916 - break; 917 - 918 - /* fetch the next entry: if there are no more: done */ 919 - src_sg = sg_next(src_sg); 920 - if (src_sg == NULL) 921 - break; 922 - 923 - src_nents--; 924 - src_avail = sg_dma_len(src_sg); 925 - } 926 - } 927 - 928 - new->async_tx.flags = flags; /* client is in control of this ack */ 929 - new->async_tx.cookie = -EBUSY; 930 - 931 - /* Set End-of-link to the last link descriptor of new list */ 932 - set_ld_eol(chan, new); 933 - 934 - return &first->async_tx; 935 - 936 - fail: 937 - if (!first) 938 - return NULL; 939 - 940 - fsldma_free_desc_list_reverse(chan, &first->tx_list); 941 - return NULL; 942 - } 943 - 944 828 static int fsl_dma_device_terminate_all(struct dma_chan *dchan) 945 829 { 946 830 struct fsldma_chan *chan; ··· 1241 1357 fdev->irq = irq_of_parse_and_map(op->dev.of_node, 0); 1242 1358 1243 1359 dma_cap_set(DMA_MEMCPY, fdev->common.cap_mask); 1244 - dma_cap_set(DMA_SG, fdev->common.cap_mask); 1245 1360 dma_cap_set(DMA_SLAVE, fdev->common.cap_mask); 1246 1361 fdev->common.device_alloc_chan_resources = fsl_dma_alloc_chan_resources; 1247 1362 fdev->common.device_free_chan_resources = fsl_dma_free_chan_resources; 1248 1363 fdev->common.device_prep_dma_memcpy = fsl_dma_prep_memcpy; 1249 - fdev->common.device_prep_dma_sg = fsl_dma_prep_sg; 1250 1364 fdev->common.device_tx_status = fsl_tx_status; 1251 1365 fdev->common.device_issue_pending = 
fsl_dma_memcpy_issue_pending; 1252 1366 fdev->common.device_config = fsl_dma_device_config;
+7 -3
drivers/dma/ioat/dma.c
··· 644 644 mod_timer(&ioat_chan->timer, jiffies + IDLE_TIMEOUT); 645 645 } 646 646 647 - /* 5 microsecond delay per pending descriptor */ 648 - writew(min((5 * (active - i)), IOAT_INTRDELAY_MASK), 649 - ioat_chan->ioat_dma->reg_base + IOAT_INTRDELAY_OFFSET); 647 + /* microsecond delay by sysfs variable per pending descriptor */ 648 + if (ioat_chan->intr_coalesce != ioat_chan->prev_intr_coalesce) { 649 + writew(min((ioat_chan->intr_coalesce * (active - i)), 650 + IOAT_INTRDELAY_MASK), 651 + ioat_chan->ioat_dma->reg_base + IOAT_INTRDELAY_OFFSET); 652 + ioat_chan->prev_intr_coalesce = ioat_chan->intr_coalesce; 653 + } 650 654 } 651 655 652 656 static void ioat_cleanup(struct ioatdma_chan *ioat_chan)
+3
drivers/dma/ioat/dma.h
··· 142 142 spinlock_t prep_lock; 143 143 struct ioat_descs descs[2]; 144 144 int desc_chunks; 145 + int intr_coalesce; 146 + int prev_intr_coalesce; 145 147 }; 146 148 147 149 struct ioat_sysfs_entry { 148 150 struct attribute attr; 149 151 ssize_t (*show)(struct dma_chan *, char *); 152 + ssize_t (*store)(struct dma_chan *, const char *, size_t); 150 153 }; 151 154 152 155 /**
+1 -1
drivers/dma/ioat/init.c
··· 39 39 MODULE_LICENSE("Dual BSD/GPL"); 40 40 MODULE_AUTHOR("Intel Corporation"); 41 41 42 - static struct pci_device_id ioat_pci_tbl[] = { 42 + static const struct pci_device_id ioat_pci_tbl[] = { 43 43 /* I/OAT v3 platforms */ 44 44 { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_TBG0) }, 45 45 { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_TBG1) },
+42
drivers/dma/ioat/sysfs.c
··· 64 64 return entry->show(&ioat_chan->dma_chan, page); 65 65 } 66 66 67 + static ssize_t 68 + ioat_attr_store(struct kobject *kobj, struct attribute *attr, 69 + const char *page, size_t count) 70 + { 71 + struct ioat_sysfs_entry *entry; 72 + struct ioatdma_chan *ioat_chan; 73 + 74 + entry = container_of(attr, struct ioat_sysfs_entry, attr); 75 + ioat_chan = container_of(kobj, struct ioatdma_chan, kobj); 76 + 77 + if (!entry->store) 78 + return -EIO; 79 + return entry->store(&ioat_chan->dma_chan, page, count); 80 + } 81 + 67 82 const struct sysfs_ops ioat_sysfs_ops = { 68 83 .show = ioat_attr_show, 84 + .store = ioat_attr_store, 69 85 }; 70 86 71 87 void ioat_kobject_add(struct ioatdma_device *ioat_dma, struct kobj_type *type) ··· 137 121 } 138 122 static struct ioat_sysfs_entry ring_active_attr = __ATTR_RO(ring_active); 139 123 124 + static ssize_t intr_coalesce_show(struct dma_chan *c, char *page) 125 + { 126 + struct ioatdma_chan *ioat_chan = to_ioat_chan(c); 127 + 128 + return sprintf(page, "%d\n", ioat_chan->intr_coalesce); 129 + } 130 + 131 + static ssize_t intr_coalesce_store(struct dma_chan *c, const char *page, 132 + size_t count) 133 + { 134 + int intr_coalesce = 0; 135 + struct ioatdma_chan *ioat_chan = to_ioat_chan(c); 136 + 137 + if (sscanf(page, "%du", &intr_coalesce) != -1) { 138 + if ((intr_coalesce < 0) || 139 + (intr_coalesce > IOAT_INTRDELAY_MASK)) 140 + return -EINVAL; 141 + ioat_chan->intr_coalesce = intr_coalesce; 142 + } 143 + 144 + return count; 145 + } 146 + 147 + static struct ioat_sysfs_entry intr_coalesce_attr = __ATTR_RW(intr_coalesce); 148 + 140 149 static struct attribute *ioat_attrs[] = { 141 150 &ring_size_attr.attr, 142 151 &ring_active_attr.attr, 143 152 &ioat_cap_attr.attr, 144 153 &ioat_version_attr.attr, 154 + &intr_coalesce_attr.attr, 145 155 NULL, 146 156 }; 147 157
+4 -8
drivers/dma/k3dma.c
··· 223 223 if (c && (tc1 & BIT(i))) { 224 224 spin_lock_irqsave(&c->vc.lock, flags); 225 225 vchan_cookie_complete(&p->ds_run->vd); 226 - WARN_ON_ONCE(p->ds_done); 227 226 p->ds_done = p->ds_run; 228 227 p->ds_run = NULL; 229 228 spin_unlock_irqrestore(&c->vc.lock, flags); ··· 273 274 */ 274 275 list_del(&ds->vd.node); 275 276 276 - WARN_ON_ONCE(c->phy->ds_run); 277 - WARN_ON_ONCE(c->phy->ds_done); 278 277 c->phy->ds_run = ds; 278 + c->phy->ds_done = NULL; 279 279 /* start dma */ 280 280 k3_dma_set_desc(c->phy, &ds->desc_hw[0]); 281 281 return 0; 282 282 } 283 + c->phy->ds_run = NULL; 284 + c->phy->ds_done = NULL; 283 285 return -EAGAIN; 284 286 } 285 287 ··· 722 722 k3_dma_free_desc(&p->ds_run->vd); 723 723 p->ds_run = NULL; 724 724 } 725 - if (p->ds_done) { 726 - k3_dma_free_desc(&p->ds_done->vd); 727 - p->ds_done = NULL; 728 - } 729 - 725 + p->ds_done = NULL; 730 726 } 731 727 spin_unlock_irqrestore(&c->vc.lock, flags); 732 728 vchan_dma_desc_free_list(&c->vc, &head);
+1 -161
drivers/dma/mv_xor.c
··· 68 68 hw_desc->byte_count = byte_count; 69 69 } 70 70 71 - /* Populate the descriptor */ 72 - static void mv_xor_config_sg_ll_desc(struct mv_xor_desc_slot *desc, 73 - dma_addr_t dma_src, dma_addr_t dma_dst, 74 - u32 len, struct mv_xor_desc_slot *prev) 75 - { 76 - struct mv_xor_desc *hw_desc = desc->hw_desc; 77 - 78 - hw_desc->status = XOR_DESC_DMA_OWNED; 79 - hw_desc->phy_next_desc = 0; 80 - /* Configure for XOR with only one src address -> MEMCPY */ 81 - hw_desc->desc_command = XOR_DESC_OPERATION_XOR | (0x1 << 0); 82 - hw_desc->phy_dest_addr = dma_dst; 83 - hw_desc->phy_src_addr[0] = dma_src; 84 - hw_desc->byte_count = len; 85 - 86 - if (prev) { 87 - struct mv_xor_desc *hw_prev = prev->hw_desc; 88 - 89 - hw_prev->phy_next_desc = desc->async_tx.phys; 90 - } 91 - } 92 - 93 - static void mv_xor_desc_config_eod(struct mv_xor_desc_slot *desc) 94 - { 95 - struct mv_xor_desc *hw_desc = desc->hw_desc; 96 - 97 - /* Enable end-of-descriptor interrupt */ 98 - hw_desc->desc_command |= XOR_DESC_EOD_INT_EN; 99 - } 100 - 101 71 static void mv_desc_set_mode(struct mv_xor_desc_slot *desc) 102 72 { 103 73 struct mv_xor_desc *hw_desc = desc->hw_desc; ··· 632 662 return mv_xor_prep_dma_xor(chan, dest, &src, 1, len, flags); 633 663 } 634 664 635 - /** 636 - * mv_xor_prep_dma_sg - prepare descriptors for a memory sg transaction 637 - * @chan: DMA channel 638 - * @dst_sg: Destination scatter list 639 - * @dst_sg_len: Number of entries in destination scatter list 640 - * @src_sg: Source scatter list 641 - * @src_sg_len: Number of entries in source scatter list 642 - * @flags: transfer ack flags 643 - * 644 - * Return: Async transaction descriptor on success and NULL on failure 645 - */ 646 - static struct dma_async_tx_descriptor * 647 - mv_xor_prep_dma_sg(struct dma_chan *chan, struct scatterlist *dst_sg, 648 - unsigned int dst_sg_len, struct scatterlist *src_sg, 649 - unsigned int src_sg_len, unsigned long flags) 650 - { 651 - struct mv_xor_chan *mv_chan = to_mv_xor_chan(chan); 652 - struct mv_xor_desc_slot *new; 653 - struct mv_xor_desc_slot *first = NULL; 654 - struct mv_xor_desc_slot *prev = NULL; 655 - size_t len, dst_avail, src_avail; 656 - dma_addr_t dma_dst, dma_src; 657 - int desc_cnt = 0; 658 - int ret; 659 - 660 - dev_dbg(mv_chan_to_devp(mv_chan), 661 - "%s dst_sg_len: %d src_sg_len: %d flags: %ld\n", 662 - __func__, dst_sg_len, src_sg_len, flags); 663 - 664 - dst_avail = sg_dma_len(dst_sg); 665 - src_avail = sg_dma_len(src_sg); 666 - 667 - /* Run until we are out of scatterlist entries */ 668 - while (true) { 669 - /* Allocate and populate the descriptor */ 670 - desc_cnt++; 671 - new = mv_chan_alloc_slot(mv_chan); 672 - if (!new) { 673 - dev_err(mv_chan_to_devp(mv_chan), 674 - "Out of descriptors (desc_cnt=%d)!\n", 675 - desc_cnt); 676 - goto err; 677 - } 678 - 679 - len = min_t(size_t, src_avail, dst_avail); 680 - len = min_t(size_t, len, MV_XOR_MAX_BYTE_COUNT); 681 - if (len == 0) 682 - goto fetch; 683 - 684 - if (len < MV_XOR_MIN_BYTE_COUNT) { 685 - dev_err(mv_chan_to_devp(mv_chan), 686 - "Transfer size of %zu too small!\n", len); 687 - goto err; 688 - } 689 - 690 - dma_dst = sg_dma_address(dst_sg) + sg_dma_len(dst_sg) - 691 - dst_avail; 692 - dma_src = sg_dma_address(src_sg) + sg_dma_len(src_sg) - 693 - src_avail; 694 - 695 - /* Check if a new window needs to get added for 'dst' */ 696 - ret = mv_xor_add_io_win(mv_chan, dma_dst); 697 - if (ret) 698 - goto err; 699 - 700 - /* Check if a new window needs to get added for 'src' */ 701 - ret = mv_xor_add_io_win(mv_chan, dma_src); 702 - if (ret) 703 - goto err; 704 - 705 - /* Populate the descriptor */ 706 - mv_xor_config_sg_ll_desc(new, dma_src, dma_dst, len, prev); 707 - prev = new; 708 - dst_avail -= len; 709 - src_avail -= len; 710 - 711 - if (!first) 712 - first = new; 713 - else 714 - list_move_tail(&new->node, &first->sg_tx_list); 715 - 716 - fetch: 717 - /* Fetch the next dst scatterlist entry */ 718 - if (dst_avail == 0) { 719 - if (dst_sg_len == 0) 720 - break; 721 - 722 - /* Fetch the next entry: if there are no more: done */ 723 - dst_sg = sg_next(dst_sg); 724 - if (dst_sg == NULL) 725 - break; 726 - 727 - dst_sg_len--; 728 - dst_avail = sg_dma_len(dst_sg); 729 - } 730 - 731 - /* Fetch the next src scatterlist entry */ 732 - if (src_avail == 0) { 733 - if (src_sg_len == 0) 734 - break; 735 - 736 - /* Fetch the next entry: if there are no more: done */ 737 - src_sg = sg_next(src_sg); 738 - if (src_sg == NULL) 739 - break; 740 - 741 - src_sg_len--; 742 - src_avail = sg_dma_len(src_sg); 743 - } 744 - } 745 - 746 - /* Set the EOD flag in the last descriptor */ 747 - mv_xor_desc_config_eod(new); 748 - first->async_tx.flags = flags; 749 - 750 - return &first->async_tx; 751 - 752 - err: 753 - /* Cleanup: Move all descriptors back into the free list */ 754 - spin_lock_bh(&mv_chan->lock); 755 - mv_desc_clean_slot(first, mv_chan); 756 - spin_unlock_bh(&mv_chan->lock); 757 - 758 - return NULL; 759 - } 760 - 761 665 static void mv_xor_free_chan_resources(struct dma_chan *chan) 762 666 { 763 667 struct mv_xor_chan *mv_chan = to_mv_xor_chan(chan); ··· 1098 1254 dma_dev->device_prep_dma_interrupt = mv_xor_prep_dma_interrupt; 1099 1255 if (dma_has_cap(DMA_MEMCPY, dma_dev->cap_mask)) 1100 1256 dma_dev->device_prep_dma_memcpy = mv_xor_prep_dma_memcpy; 1101 - if (dma_has_cap(DMA_SG, dma_dev->cap_mask)) 1102 - dma_dev->device_prep_dma_sg = mv_xor_prep_dma_sg; 1103 1257 if (dma_has_cap(DMA_XOR, dma_dev->cap_mask)) { 1104 1258 dma_dev->max_xor = 8; 1105 1259 dma_dev->device_prep_dma_xor = mv_xor_prep_dma_xor; ··· 1147 1305 goto err_free_irq; 1148 1306 } 1149 1307 1150 - dev_info(&pdev->dev, "Marvell XOR (%s): ( %s%s%s%s)\n", 1308 + dev_info(&pdev->dev, "Marvell XOR (%s): ( %s%s%s)\n", 1151 1309 mv_chan->op_in_desc ? "Descriptor Mode" : "Registers Mode", 1152 1310 dma_has_cap(DMA_XOR, dma_dev->cap_mask) ? "xor " : "", 1153 1311 dma_has_cap(DMA_MEMCPY, dma_dev->cap_mask) ? "cpy " : "", 1154 - dma_has_cap(DMA_SG, dma_dev->cap_mask) ? "sg " : "", 1155 1312 dma_has_cap(DMA_INTERRUPT, dma_dev->cap_mask) ? "intr " : ""); 1156 1313 1157 1314 dma_async_device_register(dma_dev); ··· 1393 1552 1394 1553 dma_cap_zero(cap_mask); 1395 1554 dma_cap_set(DMA_MEMCPY, cap_mask); 1396 - dma_cap_set(DMA_SG, cap_mask); 1397 1555 dma_cap_set(DMA_XOR, cap_mask); 1398 1556 dma_cap_set(DMA_INTERRUPT, cap_mask); 1399 1557
-17
drivers/dma/nbpfaxi.c
··· 1005 1005 DMA_MEM_TO_MEM, flags); 1006 1006 } 1007 1007 1008 - static struct dma_async_tx_descriptor *nbpf_prep_memcpy_sg( 1009 - struct dma_chan *dchan, 1010 - struct scatterlist *dst_sg, unsigned int dst_nents, 1011 - struct scatterlist *src_sg, unsigned int src_nents, 1012 - unsigned long flags) 1013 - { 1014 - struct nbpf_channel *chan = nbpf_to_chan(dchan); 1015 - 1016 - if (dst_nents != src_nents) 1017 - return NULL; 1018 - 1019 - return nbpf_prep_sg(chan, src_sg, dst_sg, src_nents, 1020 - DMA_MEM_TO_MEM, flags); 1021 - } 1022 - 1023 1008 static struct dma_async_tx_descriptor *nbpf_prep_slave_sg( 1024 1009 struct dma_chan *dchan, struct scatterlist *sgl, unsigned int sg_len, 1025 1010 enum dma_transfer_direction direction, unsigned long flags, void *context) ··· 1402 1417 dma_cap_set(DMA_MEMCPY, dma_dev->cap_mask); 1403 1418 dma_cap_set(DMA_SLAVE, dma_dev->cap_mask); 1404 1419 dma_cap_set(DMA_PRIVATE, dma_dev->cap_mask); 1405 - dma_cap_set(DMA_SG, dma_dev->cap_mask); 1406 1420 1407 1421 /* Common and MEMCPY operations */ 1408 1422 dma_dev->device_alloc_chan_resources 1409 1423 = nbpf_alloc_chan_resources; 1410 1424 dma_dev->device_free_chan_resources = nbpf_free_chan_resources; 1411 - dma_dev->device_prep_dma_sg = nbpf_prep_memcpy_sg; 1412 1425 dma_dev->device_prep_dma_memcpy = nbpf_prep_memcpy; 1413 1426 dma_dev->device_tx_status = nbpf_tx_status; 1414 1427 dma_dev->device_issue_pending = nbpf_issue_pending;
+4 -4
drivers/dma/of-dma.c
··· 38 38 if (ofdma->of_node == dma_spec->np) 39 39 return ofdma; 40 40 41 - pr_debug("%s: can't find DMA controller %s\n", __func__, 42 - dma_spec->np->full_name); 41 + pr_debug("%s: can't find DMA controller %pOF\n", __func__, 42 + dma_spec->np); 43 43 44 44 return NULL; 45 45 } ··· 255 255 256 256 count = of_property_count_strings(np, "dma-names"); 257 257 if (count < 0) { 258 - pr_err("%s: dma-names property of node '%s' missing or empty\n", 259 - __func__, np->full_name); 258 + pr_err("%s: dma-names property of node '%pOF' missing or empty\n", 259 + __func__, np); 260 260 return ERR_PTR(-ENODEV); 261 261 } 262 262
+1 -1
drivers/dma/pl330.c
··· 3023 3023 return 0; 3024 3024 } 3025 3025 3026 - static struct amba_id pl330_ids[] = { 3026 + static const struct amba_id pl330_ids[] = { 3027 3027 { 3028 3028 .id = 0x00041330, 3029 3029 .mask = 0x000fffff,
+16 -21
drivers/dma/ppc4xx/adma.c
··· 4040 4040 /* it is DMA0 or DMA1 */ 4041 4041 idx = of_get_property(np, "cell-index", &len); 4042 4042 if (!idx || (len != sizeof(u32))) { 4043 - dev_err(&ofdev->dev, "Device node %s has missing " 4043 + dev_err(&ofdev->dev, "Device node %pOF has missing " 4044 4044 "or invalid cell-index property\n", 4045 - np->full_name); 4045 + np); 4046 4046 return -EINVAL; 4047 4047 } 4048 4048 id = *idx; ··· 4307 4307 * "poly" allows setting/checking used polynomial (for PPC440SPe only). 4308 4308 */ 4309 4309 4310 - static ssize_t show_ppc440spe_devices(struct device_driver *dev, char *buf) 4310 + static ssize_t devices_show(struct device_driver *dev, char *buf) 4311 4311 { 4312 4312 ssize_t size = 0; 4313 4313 int i; ··· 4321 4321 } 4322 4322 return size; 4323 4323 } 4324 + static DRIVER_ATTR_RO(devices); 4324 4325 4325 - static ssize_t show_ppc440spe_r6enable(struct device_driver *dev, char *buf) 4326 + static ssize_t enable_show(struct device_driver *dev, char *buf) 4326 4327 { 4327 4328 return snprintf(buf, PAGE_SIZE, 4328 4329 "PPC440SP(e) RAID-6 capabilities are %sABLED.\n", 4329 4330 ppc440spe_r6_enabled ? "EN" : "DIS"); 4330 4331 } 4331 4332 4332 - static ssize_t store_ppc440spe_r6enable(struct device_driver *dev, 4333 - const char *buf, size_t count) 4333 + static ssize_t enable_store(struct device_driver *dev, const char *buf, 4334 + size_t count) 4334 4335 { 4335 4336 unsigned long val; 4336 4337 ··· 4358 4357 } 4359 4358 return count; 4360 4359 } 4360 + static DRIVER_ATTR_RW(enable); 4361 4361 4362 - static ssize_t show_ppc440spe_r6poly(struct device_driver *dev, char *buf) 4362 + static ssize_t poly_store(struct device_driver *dev, char *buf) 4363 4363 { 4364 4364 ssize_t size = 0; 4365 4365 u32 reg; ··· 4379 4377 return size; 4380 4378 } 4381 4379 4382 - static ssize_t store_ppc440spe_r6poly(struct device_driver *dev, 4383 - const char *buf, size_t count) 4380 + static ssize_t poly_store(struct device_driver *dev, const char *buf, 4381 + size_t count) 4384 4382 { 4385 4383 unsigned long reg, val; 4386 4384 ··· 4406 4404 4407 4405 return count; 4408 4406 } 4409 - 4410 - static DRIVER_ATTR(devices, S_IRUGO, show_ppc440spe_devices, NULL); 4411 - static DRIVER_ATTR(enable, S_IRUGO | S_IWUSR, show_ppc440spe_r6enable, 4412 - store_ppc440spe_r6enable); 4413 - static DRIVER_ATTR(poly, S_IRUGO | S_IWUSR, show_ppc440spe_r6poly, 4414 - store_ppc440spe_r6poly); 4407 + static DRIVER_ATTR_RW(poly); 4415 4408 4416 4409 /* 4417 4410 * Common initialisation for RAID engines; allocate memory for ··· 4445 4448 dcr_base = dcr_resource_start(np, 0); 4446 4449 dcr_len = dcr_resource_len(np, 0); 4447 4450 if (!dcr_base && !dcr_len) { 4448 - pr_err("%s: can't get DCR registers base/len!\n", 4449 - np->full_name); 4451 + pr_err("%pOF: can't get DCR registers base/len!\n", np); 4450 4452 of_node_put(np); 4451 4453 iounmap(i2o_reg); 4452 4454 return -ENODEV; ··· 4453 4457 4454 4458 i2o_dcr_host = dcr_map(np, dcr_base, dcr_len); 4455 4459 if (!DCR_MAP_OK(i2o_dcr_host)) { 4456 - pr_err("%s: failed to map DCRs!\n", np->full_name); 4460 + pr_err("%pOF: failed to map DCRs!\n", np); 4457 4461 of_node_put(np); 4458 4462 iounmap(i2o_reg); 4459 4463 return -ENODEV; ··· 4514 4518 dcr_base = dcr_resource_start(np, 0); 4515 4519 dcr_len = dcr_resource_len(np, 0); 4516 4520 if (!dcr_base && !dcr_len) { 4517 - pr_err("%s: can't get DCR registers base/len!\n", 4518 - np->full_name); 4521 + pr_err("%pOF: can't get DCR registers base/len!\n", np); 4519 4522 ret = -ENODEV; 4520 4523 goto out_mq; 4521 4524 } 4522 4525 4523 4526 ppc440spe_mq_dcr_host = dcr_map(np, dcr_base, dcr_len); 4524 4527 if (!DCR_MAP_OK(ppc440spe_mq_dcr_host)) { 4525 - pr_err("%s: failed to map DCRs!\n", np->full_name); 4528 + pr_err("%pOF: failed to map DCRs!\n", np); 4526 4529 ret = -ENODEV; 4527 4530 goto out_mq; 4528 4531 }
+5 -1
drivers/dma/qcom/bam_dma.c
··· 65 65 #define DESC_FLAG_EOT BIT(14) 66 66 #define DESC_FLAG_EOB BIT(13) 67 67 #define DESC_FLAG_NWD BIT(12) 68 + #define DESC_FLAG_CMD BIT(11) 68 69 69 70 struct bam_async_desc { 70 71 struct virt_dma_desc vd; ··· 646 645 unsigned int curr_offset = 0; 647 646 648 647 do { 648 + if (flags & DMA_PREP_CMD) 649 + desc->flags |= cpu_to_le16(DESC_FLAG_CMD); 650 + 649 651 desc->addr = cpu_to_le32(sg_dma_address(sg) + 650 652 curr_offset); 651 653 ··· 964 960 965 961 /* set any special flags on the last descriptor */ 966 962 if (async_desc->num_desc == async_desc->xfer_len) 967 - desc[async_desc->xfer_len - 1].flags = 963 + desc[async_desc->xfer_len - 1].flags |= 968 964 cpu_to_le16(async_desc->flags); 969 965 else 970 966 desc[async_desc->xfer_len - 1].flags |=
+36 -1
drivers/dma/qcom/hidma.c
··· 411 411 return NULL; 412 412 413 413 hidma_ll_set_transfer_params(mdma->lldev, mdesc->tre_ch, 414 - src, dest, len, flags); 414 + src, dest, len, flags, 415 + HIDMA_TRE_MEMCPY); 416 + 417 + /* Place descriptor in prepared list */ 418 + spin_lock_irqsave(&mchan->lock, irqflags); 419 + list_add_tail(&mdesc->node, &mchan->prepared); 420 + spin_unlock_irqrestore(&mchan->lock, irqflags); 421 + 422 + return &mdesc->desc; 423 + } 424 + 425 + static struct dma_async_tx_descriptor * 426 + hidma_prep_dma_memset(struct dma_chan *dmach, dma_addr_t dest, int value, 427 + size_t len, unsigned long flags) 428 + { 429 + struct hidma_chan *mchan = to_hidma_chan(dmach); 430 + struct hidma_desc *mdesc = NULL; 431 + struct hidma_dev *mdma = mchan->dmadev; 432 + unsigned long irqflags; 433 + 434 + /* Get free descriptor */ 435 + spin_lock_irqsave(&mchan->lock, irqflags); 436 + if (!list_empty(&mchan->free)) { 437 + mdesc = list_first_entry(&mchan->free, struct hidma_desc, node); 438 + list_del(&mdesc->node); 439 + } 440 + spin_unlock_irqrestore(&mchan->lock, irqflags); 441 + 442 + if (!mdesc) 443 + return NULL; 444 + 445 + hidma_ll_set_transfer_params(mdma->lldev, mdesc->tre_ch, 446 + value, dest, len, flags, 447 + HIDMA_TRE_MEMSET); 415 448 416 449 /* Place descriptor in prepared list */ 417 450 spin_lock_irqsave(&mchan->lock, irqflags); ··· 809 776 pm_runtime_get_sync(dmadev->ddev.dev); 810 777 811 778 dma_cap_set(DMA_MEMCPY, dmadev->ddev.cap_mask); 779 + dma_cap_set(DMA_MEMSET, dmadev->ddev.cap_mask); 812 780 if (WARN_ON(!pdev->dev.dma_mask)) { 813 781 rc = -ENXIO; 814 782 goto dmafree; ··· 820 786 dmadev->dev_trca = trca; 821 787 dmadev->trca_resource = trca_resource; 822 788 dmadev->ddev.device_prep_dma_memcpy = hidma_prep_dma_memcpy; 789 + dmadev->ddev.device_prep_dma_memset = hidma_prep_dma_memset; 823 790 dmadev->ddev.device_alloc_chan_resources = hidma_alloc_chan_resources; 824 791 dmadev->ddev.device_free_chan_resources = hidma_free_chan_resources; 825 792 dmadev->ddev.device_tx_status = hidma_tx_status;
+6 -1
drivers/dma/qcom/hidma.h
··· 28 28 #define HIDMA_TRE_DEST_LOW_IDX 4 29 29 #define HIDMA_TRE_DEST_HI_IDX 5 30 30 31 + enum tre_type { 32 + HIDMA_TRE_MEMCPY = 3, 33 + HIDMA_TRE_MEMSET = 4, 34 + }; 35 + 31 36 struct hidma_tre { 32 37 atomic_t allocated; /* if this channel is allocated */ 33 38 bool queued; /* flag whether this is pending */ ··· 155 150 int hidma_ll_disable(struct hidma_lldev *lldev); 156 151 int hidma_ll_enable(struct hidma_lldev *llhndl); 157 152 void hidma_ll_set_transfer_params(struct hidma_lldev *llhndl, u32 tre_ch, 158 - dma_addr_t src, dma_addr_t dest, u32 len, u32 flags); 153 + dma_addr_t src, dma_addr_t dest, u32 len, u32 flags, u32 txntype); 159 154 void hidma_ll_setup_irq(struct hidma_lldev *lldev, bool msi); 160 155 int hidma_ll_setup(struct hidma_lldev *lldev); 161 156 struct hidma_lldev *hidma_ll_init(struct device *dev, u32 max_channels,
+4 -7
drivers/dma/qcom/hidma_ll.c
··· 105 105 HIDMA_CH_STOPPED = 4, 106 106 }; 107 107 108 - enum tre_type { 109 - HIDMA_TRE_MEMCPY = 3, 110 - }; 111 - 112 108 enum err_code { 113 109 HIDMA_EVRE_STATUS_COMPLETE = 1, 114 110 HIDMA_EVRE_STATUS_ERROR = 4, ··· 170 174 tre->err_info = 0; 171 175 tre->lldev = lldev; 172 176 tre_local = &tre->tre_local[0]; 173 - tre_local[HIDMA_TRE_CFG_IDX] = HIDMA_TRE_MEMCPY; 174 - tre_local[HIDMA_TRE_CFG_IDX] |= (lldev->chidx & 0xFF) << 8; 177 + tre_local[HIDMA_TRE_CFG_IDX] = (lldev->chidx & 0xFF) << 8; 175 178 tre_local[HIDMA_TRE_CFG_IDX] |= BIT(16); /* set IEOB */ 176 179 *tre_ch = i; 177 180 if (callback) ··· 602 607 603 608 void hidma_ll_set_transfer_params(struct hidma_lldev *lldev, u32 tre_ch, 604 609 dma_addr_t src, dma_addr_t dest, u32 len, 605 - u32 flags) 610 + u32 flags, u32 txntype) 606 611 { 607 612 struct hidma_tre *tre; 608 613 u32 *tre_local; ··· 621 626 } 622 627 623 628 tre_local = &tre->tre_local[0]; 629 + tre_local[HIDMA_TRE_CFG_IDX] &= ~GENMASK(7, 0); 630 + tre_local[HIDMA_TRE_CFG_IDX] |= txntype; 624 631 tre_local[HIDMA_TRE_LEN_IDX] = len; 625 632 tre_local[HIDMA_TRE_SRC_LOW_IDX] = lower_32_bits(src); 626 633 tre_local[HIDMA_TRE_SRC_HI_IDX] = upper_32_bits(src);
+10 -6
drivers/dma/qcom/hidma_mgmt.c
··· 28 28 29 29 #include "hidma_mgmt.h" 30 30 31 - #define HIDMA_QOS_N_OFFSET 0x300 31 + #define HIDMA_QOS_N_OFFSET 0x700 32 32 #define HIDMA_CFG_OFFSET 0x400 33 33 #define HIDMA_MAX_BUS_REQ_LEN_OFFSET 0x41C 34 34 #define HIDMA_MAX_XACTIONS_OFFSET 0x420 ··· 227 227 goto out; 228 228 } 229 229 230 - if (max_write_request) { 230 + if (max_write_request && 231 + (max_write_request != mgmtdev->max_write_request)) { 231 232 dev_info(&pdev->dev, "overriding max-write-burst-bytes: %d\n", 232 233 max_write_request); 233 234 mgmtdev->max_write_request = max_write_request; ··· 241 240 dev_err(&pdev->dev, "max-read-burst-bytes missing\n"); 242 241 goto out; 243 242 } 244 - if (max_read_request) { 243 + if (max_read_request && 244 + (max_read_request != mgmtdev->max_read_request)) { 245 245 dev_info(&pdev->dev, "overriding max-read-burst-bytes: %d\n", 246 246 max_read_request); 247 247 mgmtdev->max_read_request = max_read_request; ··· 255 253 dev_err(&pdev->dev, "max-write-transactions missing\n"); 256 254 goto out; 257 255 } 258 - if (max_wr_xactions) { 256 + if (max_wr_xactions && 257 + (max_wr_xactions != mgmtdev->max_wr_xactions)) { 259 258 dev_info(&pdev->dev, "overriding max-write-transactions: %d\n", 260 259 max_wr_xactions); 261 260 mgmtdev->max_wr_xactions = max_wr_xactions; ··· 269 266 dev_err(&pdev->dev, "max-read-transactions missing\n"); 270 267 goto out; 271 268 } 272 - if (max_rd_xactions) { 269 + if (max_rd_xactions && 270 + (max_rd_xactions != mgmtdev->max_rd_xactions)) { 273 271 dev_info(&pdev->dev, "overriding max-read-transactions: %d\n", 274 272 max_rd_xactions); 275 273 mgmtdev->max_rd_xactions = max_rd_xactions; ··· 358 354 struct platform_device_info pdevinfo; 359 355 struct of_phandle_args out_irq; 360 356 struct device_node *child; 361 - struct resource *res; 357 + struct resource *res = NULL; 362 358 const __be32 *cell; 363 359 int ret = 0, size, i, num; 364 360 u64 addr, addr_size;
+43 -42
drivers/dma/sh/rcar-dmac.c
··· 1690 1690 if (!irqname) 1691 1691 return -ENOMEM; 1692 1692 1693 + /* 1694 + * Initialize the DMA engine channel and add it to the DMA engine 1695 + * channels list. 1696 + */ 1697 + chan->device = &dmac->engine; 1698 + dma_cookie_init(chan); 1699 + 1700 + list_add_tail(&chan->device_node, &dmac->engine.channels); 1701 + 1693 1702 ret = devm_request_threaded_irq(dmac->dev, rchan->irq, 1694 1703 rcar_dmac_isr_channel, 1695 1704 rcar_dmac_isr_channel_thread, 0, ··· 1708 1699 rchan->irq, ret); 1709 1700 return ret; 1710 1701 } 1711 - 1712 - /* 1713 - * Initialize the DMA engine channel and add it to the DMA engine 1714 - * channels list. 1715 - */ 1716 - chan->device = &dmac->engine; 1717 - dma_cookie_init(chan); 1718 - 1719 - list_add_tail(&chan->device_node, &dmac->engine.channels); 1720 1702 1721 1703 return 0; 1722 1704 } ··· 1794 1794 if (!irqname) 1795 1795 return -ENOMEM; 1796 1796 1797 - ret = devm_request_irq(&pdev->dev, irq, rcar_dmac_isr_error, 0, 1798 - irqname, dmac); 1799 - if (ret) { 1800 - dev_err(&pdev->dev, "failed to request IRQ %u (%d)\n", 1801 - irq, ret); 1802 - return ret; 1803 - } 1804 - 1805 1797 /* Enable runtime PM and initialize the device. */ 1806 1798 pm_runtime_enable(&pdev->dev); 1807 1799 ret = pm_runtime_get_sync(&pdev->dev); ··· 1810 1818 goto error; 1811 1819 } 1812 1820 1813 - /* Initialize the channels. */ 1814 - INIT_LIST_HEAD(&dmac->engine.channels); 1821 + /* Initialize engine */ 1822 + engine = &dmac->engine; 1823 + 1824 + dma_cap_set(DMA_MEMCPY, engine->cap_mask); 1825 + dma_cap_set(DMA_SLAVE, engine->cap_mask); 1826 + 1827 + engine->dev = &pdev->dev; 1828 + engine->copy_align = ilog2(RCAR_DMAC_MEMCPY_XFER_SIZE); 1829 + 1830 + engine->src_addr_widths = widths; 1831 + engine->dst_addr_widths = widths; 1832 + engine->directions = BIT(DMA_MEM_TO_DEV) | BIT(DMA_DEV_TO_MEM); 1833 + engine->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST; 1834 + 1835 + engine->device_alloc_chan_resources = rcar_dmac_alloc_chan_resources; 1836 + engine->device_free_chan_resources = rcar_dmac_free_chan_resources; 1837 + engine->device_prep_dma_memcpy = rcar_dmac_prep_dma_memcpy; 1838 + engine->device_prep_slave_sg = rcar_dmac_prep_slave_sg; 1839 + engine->device_prep_dma_cyclic = rcar_dmac_prep_dma_cyclic; 1840 + engine->device_config = rcar_dmac_device_config; 1841 + engine->device_terminate_all = rcar_dmac_chan_terminate_all; 1842 + engine->device_tx_status = rcar_dmac_tx_status; 1843 + engine->device_issue_pending = rcar_dmac_issue_pending; 1844 + engine->device_synchronize = rcar_dmac_device_synchronize; 1845 + 1846 + INIT_LIST_HEAD(&engine->channels); 1815 1847 1816 1848 for (i = 0; i < dmac->n_channels; ++i) { 1817 1849 ret = rcar_dmac_chan_probe(dmac, &dmac->channels[i], 1818 1850 i + channels_offset); 1819 1851 if (ret < 0) 1820 1852 goto error; 1853 + } 1854 + 1855 + ret = devm_request_irq(&pdev->dev, irq, rcar_dmac_isr_error, 0, 1856 + irqname, dmac); 1857 + if (ret) { 1858 + dev_err(&pdev->dev, "failed to request IRQ %u (%d)\n", 1859 + irq, ret); 1860 + return ret; 1861 + } 1821 1862 } 1822 1862 1823 1863 /* Register the DMAC as a DMA provider for DT. */ ··· 1863 1839 * 1864 1840 * Default transfer size of 32 bytes requires 32-byte alignment. */ 1865 1841 1866 - engine = &dmac->engine; 1867 - dma_cap_set(DMA_MEMCPY, engine->cap_mask); 1868 - dma_cap_set(DMA_SLAVE, engine->cap_mask); 1869 - 1870 - engine->dev = &pdev->dev; 1871 - engine->copy_align = ilog2(RCAR_DMAC_MEMCPY_XFER_SIZE); 1872 - 1873 - engine->src_addr_widths = widths; 1874 - engine->dst_addr_widths = widths; 1875 - engine->directions = BIT(DMA_MEM_TO_DEV) | BIT(DMA_DEV_TO_MEM); 1876 - engine->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST; 1877 - 1878 - engine->device_alloc_chan_resources = rcar_dmac_alloc_chan_resources; 1879 - engine->device_free_chan_resources = rcar_dmac_free_chan_resources; 1880 - engine->device_prep_dma_memcpy = rcar_dmac_prep_dma_memcpy; 1881 - engine->device_prep_slave_sg = rcar_dmac_prep_slave_sg; 1882 - engine->device_prep_dma_cyclic = rcar_dmac_prep_dma_cyclic; 1883 - engine->device_config = rcar_dmac_device_config; 1884 - engine->device_terminate_all = rcar_dmac_chan_terminate_all; 1885 - engine->device_tx_status = rcar_dmac_tx_status; 1886 - engine->device_issue_pending = rcar_dmac_issue_pending; 1887 - engine->device_synchronize = rcar_dmac_device_synchronize; 1888 - 1889 1842 ret = dma_async_device_register(engine); 1890 1843 if (ret < 0) 1891 1844 goto error;
+2 -20
drivers/dma/ste_dma40.c
··· 79 79 }; 80 80 81 81 /* Default configuration for physcial memcpy */ 82 - static struct stedma40_chan_cfg dma40_memcpy_conf_phy = { 82 + static const struct stedma40_chan_cfg dma40_memcpy_conf_phy = { 83 83 .mode = STEDMA40_MODE_PHYSICAL, 84 84 .dir = DMA_MEM_TO_MEM, 85 85 ··· 93 93 }; 94 94 95 95 /* Default configuration for logical memcpy */ 96 - static struct stedma40_chan_cfg dma40_memcpy_conf_log = { 96 + static const struct stedma40_chan_cfg dma40_memcpy_conf_log = { 97 97 .mode = STEDMA40_MODE_LOGICAL, 98 98 .dir = DMA_MEM_TO_MEM, 99 99 ··· 2485 2485 } 2486 2486 2487 2487 static struct dma_async_tx_descriptor * 2488 - d40_prep_memcpy_sg(struct dma_chan *chan, 2489 - struct scatterlist *dst_sg, unsigned int dst_nents, 2490 - struct scatterlist *src_sg, unsigned int src_nents, 2491 - unsigned long dma_flags) 2492 - { 2493 - if (dst_nents != src_nents) 2494 - return NULL; 2495 - 2496 - return d40_prep_sg(chan, src_sg, dst_sg, src_nents, 2497 - DMA_MEM_TO_MEM, dma_flags); 2498 - } 2499 - 2500 - static struct dma_async_tx_descriptor * 2501 2488 d40_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl, 2502 2489 unsigned int sg_len, enum dma_transfer_direction direction, 2503 2490 unsigned long dma_flags, void *context) ··· 2808 2821 dev->copy_align = DMAENGINE_ALIGN_4_BYTES; 2809 2822 } 2810 2823 2811 - if (dma_has_cap(DMA_SG, dev->cap_mask)) 2812 - dev->device_prep_dma_sg = d40_prep_memcpy_sg; 2813 - 2814 2824 if (dma_has_cap(DMA_CYCLIC, dev->cap_mask)) 2815 2825 dev->device_prep_dma_cyclic = dma40_prep_dma_cyclic; 2816 2826 ··· 2849 2865 2850 2866 dma_cap_zero(base->dma_memcpy.cap_mask); 2851 2867 dma_cap_set(DMA_MEMCPY, base->dma_memcpy.cap_mask); 2852 - dma_cap_set(DMA_SG, base->dma_memcpy.cap_mask); 2853 2868 2854 2869 d40_ops_init(base, &base->dma_memcpy); 2855 2870 ··· 2866 2883 dma_cap_zero(base->dma_both.cap_mask); 2867 2884 dma_cap_set(DMA_SLAVE, base->dma_both.cap_mask); 2868 2885 dma_cap_set(DMA_MEMCPY, base->dma_both.cap_mask); 2869 - dma_cap_set(DMA_SG, base->dma_both.cap_mask); 2870 2886 dma_cap_set(DMA_CYCLIC, base->dma_slave.cap_mask); 2871 2887 2872 2888 d40_ops_init(base, &base->dma_both);
+26 -7
drivers/dma/sun6i-dma.c
··· 101 101 u32 nr_max_channels; 102 102 u32 nr_max_requests; 103 103 u32 nr_max_vchans; 104 + /* 105 + * In the datasheets/user manuals of newer Allwinner SoCs, a special 106 + * bit (bit 2 at register 0x20) is present. 107 + * It's named "DMA MCLK interface circuit auto gating bit" in the 108 + * documents, and the footnote of this register says that this bit 109 + * should be set up when initializing the DMA controller. 110 + * Allwinner A23/A33 user manuals do not have this bit documented, 111 + * however these SoCs really have and need this bit, as seen in the 112 + * BSP kernel source code. 113 + */ 114 + bool gate_needed; 104 115 }; 105 116 106 117 /* ··· 1020 1009 .nr_max_channels = 8, 1021 1010 .nr_max_requests = 24, 1022 1011 .nr_max_vchans = 37, 1012 + .gate_needed = true, 1023 1013 }; 1024 1014 1025 1015 static struct sun6i_dma_config sun8i_a83t_dma_cfg = { ··· 1040 1028 .nr_max_vchans = 34, 1041 1029 }; 1042 1030 1031 + /* 1032 + * The V3s have only 8 physical channels, a maximum DRQ port id of 23, 1033 + * and a total of 24 usable source and destination endpoints. 1034 + */ 1035 + 1036 + static struct sun6i_dma_config sun8i_v3s_dma_cfg = { 1037 + .nr_max_channels = 8, 1038 + .nr_max_requests = 23, 1039 + .nr_max_vchans = 24, 1040 + .gate_needed = true, 1041 + }; 1042 + 1043 1043 static const struct of_device_id sun6i_dma_match[] = { 1044 1044 { .compatible = "allwinner,sun6i-a31-dma", .data = &sun6i_a31_dma_cfg }, 1045 1045 { .compatible = "allwinner,sun8i-a23-dma", .data = &sun8i_a23_dma_cfg }, 1046 1046 { .compatible = "allwinner,sun8i-a83t-dma", .data = &sun8i_a83t_dma_cfg }, 1047 1047 { .compatible = "allwinner,sun8i-h3-dma", .data = &sun8i_h3_dma_cfg }, 1048 + { .compatible = "allwinner,sun8i-v3s-dma", .data = &sun8i_v3s_dma_cfg }, 1048 1049 { /* sentinel */ } 1049 1050 }; 1050 1051 MODULE_DEVICE_TABLE(of, sun6i_dma_match); ··· 1199 1174 goto err_dma_unregister; 1200 1175 } 1201 1176 1202 - /* 1203 - * sun8i variant requires us to toggle a dma gating register, 1204 - * as seen in Allwinner's SDK. This register is not documented 1205 - * in the A23 user manual. 1206 - */ 1207 - if (of_device_is_compatible(pdev->dev.of_node, 1208 - "allwinner,sun8i-a23-dma")) 1177 + if (sdc->cfg->gate_needed) 1209 1178 writel(SUN8I_DMA_GATE_ENABLE, sdc->base + SUN8I_DMA_GATE); 1210 1179 1211 1180 return 0;
+1 -1
drivers/dma/ti-dma-crossbar.c
··· 308 308 static inline void ti_dra7_xbar_reserve(int offset, int len, unsigned long *p) 309 309 { 310 310 for (; len > 0; len--) 311 - clear_bit(offset + (len - 1), p); 311 + set_bit(offset + (len - 1), p); 312 312 } 313 313 314 314 static int ti_dra7_xbar_probe(struct platform_device *pdev)
+1 -159
drivers/dma/xgene-dma.c
··· 391 391 *paddr += nbytes; 392 392 } 393 393 394 - static void xgene_dma_invalidate_buffer(__le64 *ext8) 395 - { 396 - *ext8 |= cpu_to_le64(XGENE_DMA_INVALID_LEN_CODE); 397 - } 398 - 399 394 static __le64 *xgene_dma_lookup_ext8(struct xgene_dma_desc_hw *desc, int idx) 400 395 { 401 396 switch (idx) { ··· 418 423 desc->m1 |= cpu_to_le64(XGENE_DMA_DESC_C_BIT); 419 424 desc->m3 |= cpu_to_le64((u64)dst_ring_num << 420 425 XGENE_DMA_DESC_HOENQ_NUM_POS); 421 - } 422 - 423 - static void xgene_dma_prep_cpy_desc(struct xgene_dma_chan *chan, 424 - struct xgene_dma_desc_sw *desc_sw, 425 - dma_addr_t dst, dma_addr_t src, 426 - size_t len) 427 - { 428 - struct xgene_dma_desc_hw *desc1, *desc2; 429 - int i; 430 - 431 - /* Get 1st descriptor */ 432 - desc1 = &desc_sw->desc1; 433 - xgene_dma_init_desc(desc1, chan->tx_ring.dst_ring_num); 434 - 435 - /* Set destination address */ 436 - desc1->m2 |= cpu_to_le64(XGENE_DMA_DESC_DR_BIT); 437 - desc1->m3 |= cpu_to_le64(dst); 438 - 439 - /* Set 1st source address */ 440 - xgene_dma_set_src_buffer(&desc1->m1, &len, &src); 441 - 442 - if (!len) 443 - return; 444 - 445 - /* 446 - * We need to split this source buffer, 447 - * and need to use 2nd descriptor 448 - */ 449 - desc2 = &desc_sw->desc2; 450 - desc1->m0 |= cpu_to_le64(XGENE_DMA_DESC_NV_BIT); 451 - 452 - /* Set 2nd to 5th source address */ 453 - for (i = 0; i < 4 && len; i++) 454 - xgene_dma_set_src_buffer(xgene_dma_lookup_ext8(desc2, i), 455 - &len, &src); 456 - 457 - /* Invalidate unused source address field */ 458 - for (; i < 4; i++) 459 - xgene_dma_invalidate_buffer(xgene_dma_lookup_ext8(desc2, i)); 460 - 461 - /* Updated flag that we have prepared 64B descriptor */ 462 - desc_sw->flags |= XGENE_DMA_FLAG_64B_DESC; 463 426 } 464 427 465 428 static void xgene_dma_prep_xor_desc(struct xgene_dma_chan *chan, ··· 842 889 /* Delete this channel DMA pool */ 843 890 dma_pool_destroy(chan->desc_pool); 844 891 chan->desc_pool = NULL; 845 - } 846 - 847 - static struct 
dma_async_tx_descriptor *xgene_dma_prep_sg( 848 - struct dma_chan *dchan, struct scatterlist *dst_sg, 849 - u32 dst_nents, struct scatterlist *src_sg, 850 - u32 src_nents, unsigned long flags) 851 - { 852 - struct xgene_dma_desc_sw *first = NULL, *new = NULL; 853 - struct xgene_dma_chan *chan; 854 - size_t dst_avail, src_avail; 855 - dma_addr_t dst, src; 856 - size_t len; 857 - 858 - if (unlikely(!dchan)) 859 - return NULL; 860 - 861 - if (unlikely(!dst_nents || !src_nents)) 862 - return NULL; 863 - 864 - if (unlikely(!dst_sg || !src_sg)) 865 - return NULL; 866 - 867 - chan = to_dma_chan(dchan); 868 - 869 - /* Get prepared for the loop */ 870 - dst_avail = sg_dma_len(dst_sg); 871 - src_avail = sg_dma_len(src_sg); 872 - dst_nents--; 873 - src_nents--; 874 - 875 - /* Run until we are out of scatterlist entries */ 876 - while (true) { 877 - /* Create the largest transaction possible */ 878 - len = min_t(size_t, src_avail, dst_avail); 879 - len = min_t(size_t, len, XGENE_DMA_MAX_64B_DESC_BYTE_CNT); 880 - if (len == 0) 881 - goto fetch; 882 - 883 - dst = sg_dma_address(dst_sg) + sg_dma_len(dst_sg) - dst_avail; 884 - src = sg_dma_address(src_sg) + sg_dma_len(src_sg) - src_avail; 885 - 886 - /* Allocate the link descriptor from DMA pool */ 887 - new = xgene_dma_alloc_descriptor(chan); 888 - if (!new) 889 - goto fail; 890 - 891 - /* Prepare DMA descriptor */ 892 - xgene_dma_prep_cpy_desc(chan, new, dst, src, len); 893 - 894 - if (!first) 895 - first = new; 896 - 897 - new->tx.cookie = 0; 898 - async_tx_ack(&new->tx); 899 - 900 - /* update metadata */ 901 - dst_avail -= len; 902 - src_avail -= len; 903 - 904 - /* Insert the link descriptor to the LD ring */ 905 - list_add_tail(&new->node, &first->tx_list); 906 - 907 - fetch: 908 - /* fetch the next dst scatterlist entry */ 909 - if (dst_avail == 0) { 910 - /* no more entries: we're done */ 911 - if (dst_nents == 0) 912 - break; 913 - 914 - /* fetch the next entry: if there are no more: done */ 915 - dst_sg = 
sg_next(dst_sg); 916 - if (!dst_sg) 917 - break; 918 - 919 - dst_nents--; 920 - dst_avail = sg_dma_len(dst_sg); 921 - } 922 - 923 - /* fetch the next src scatterlist entry */ 924 - if (src_avail == 0) { 925 - /* no more entries: we're done */ 926 - if (src_nents == 0) 927 - break; 928 - 929 - /* fetch the next entry: if there are no more: done */ 930 - src_sg = sg_next(src_sg); 931 - if (!src_sg) 932 - break; 933 - 934 - src_nents--; 935 - src_avail = sg_dma_len(src_sg); 936 - } 937 - } 938 - 939 - if (!new) 940 - return NULL; 941 - 942 - new->tx.flags = flags; /* client is in control of this ack */ 943 - new->tx.cookie = -EBUSY; 944 - list_splice(&first->tx_list, &new->tx_list); 945 - 946 - return &new->tx; 947 - fail: 948 - if (!first) 949 - return NULL; 950 - 951 - xgene_dma_free_desc_list(chan, &first->tx_list); 952 - return NULL; 953 892 } 954 893 955 894 static struct dma_async_tx_descriptor *xgene_dma_prep_xor( ··· 1498 1653 dma_cap_zero(dma_dev->cap_mask); 1499 1654 1500 1655 /* Set DMA device capability */ 1501 - dma_cap_set(DMA_SG, dma_dev->cap_mask); 1502 1656 1503 1657 /* Basically here, the X-Gene SoC DMA engine channel 0 supports XOR 1504 1658 * and channel 1 supports XOR, PQ both. First thing here is we have ··· 1523 1679 dma_dev->device_free_chan_resources = xgene_dma_free_chan_resources; 1524 1680 dma_dev->device_issue_pending = xgene_dma_issue_pending; 1525 1681 dma_dev->device_tx_status = xgene_dma_tx_status; 1526 - dma_dev->device_prep_dma_sg = xgene_dma_prep_sg; 1527 1682 1528 1683 if (dma_has_cap(DMA_XOR, dma_dev->cap_mask)) { 1529 1684 dma_dev->device_prep_dma_xor = xgene_dma_prep_xor; ··· 1574 1731 1575 1732 /* DMA capability info */ 1576 1733 dev_info(pdma->dev, 1577 - "%s: CAPABILITY ( %s%s%s)\n", dma_chan_name(&chan->dma_chan), 1578 - dma_has_cap(DMA_SG, dma_dev->cap_mask) ? "SGCPY " : "", 1734 + "%s: CAPABILITY ( %s%s)\n", dma_chan_name(&chan->dma_chan), 1579 1735 dma_has_cap(DMA_XOR, dma_dev->cap_mask) ? 
"XOR " : "", 1580 1736 dma_has_cap(DMA_PQ, dma_dev->cap_mask) ? "PQ " : ""); 1581 1737
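The xgene-dma hunks above drop the driver's DMA_SG path. The removed xgene_dma_prep_sg walked the destination and source scatterlists in lockstep, emitting one descriptor per chunk of size min(dst_avail, src_avail, hardware limit) and refilling whichever side ran dry. A minimal userspace C sketch of that pairing loop, with plain length arrays standing in for scatterlists and a made-up MAX_CHUNK standing in for XGENE_DMA_MAX_64B_DESC_BYTE_CNT:

```c
#include <stddef.h>

/* Hypothetical per-descriptor byte limit, standing in for the driver's
 * XGENE_DMA_MAX_64B_DESC_BYTE_CNT. */
#define MAX_CHUNK 0x4000

static size_t min_sz(size_t a, size_t b) { return a < b ? a : b; }

/* Count the descriptors the removed prep_sg-style pairing loop would
 * emit for the given dst/src segment length arrays. */
static unsigned int count_sg_chunks(const size_t *dst_len, unsigned int dst_n,
				    const size_t *src_len, unsigned int src_n)
{
	size_t dst_avail = dst_len[0], src_avail = src_len[0];
	unsigned int di = 0, si = 0, chunks = 0;

	for (;;) {
		/* Largest transaction both current entries can cover */
		size_t len = min_sz(min_sz(dst_avail, src_avail), MAX_CHUNK);

		if (len) {
			dst_avail -= len;
			src_avail -= len;
			chunks++;
		}
		/* Fetch the next dst entry; stop when the list is exhausted */
		if (!dst_avail) {
			if (++di == dst_n)
				break;
			dst_avail = dst_len[di];
		}
		/* Likewise for the src list */
		if (!src_avail) {
			if (++si == src_n)
				break;
			src_avail = src_len[si];
		}
	}
	return chunks;
}
```

One 32 KiB destination paired with two 16 KiB sources splits into two chunks; the same logic also handles asymmetric segment boundaries on either side.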
+15 -15
drivers/dma/xilinx/xilinx_dma.c
··· 2124 2124 *axi_clk = devm_clk_get(&pdev->dev, "s_axi_lite_aclk"); 2125 2125 if (IS_ERR(*axi_clk)) { 2126 2126 err = PTR_ERR(*axi_clk); 2127 - dev_err(&pdev->dev, "failed to get axi_aclk (%u)\n", err); 2127 + dev_err(&pdev->dev, "failed to get axi_aclk (%d)\n", err); 2128 2128 return err; 2129 2129 } 2130 2130 ··· 2142 2142 2143 2143 err = clk_prepare_enable(*axi_clk); 2144 2144 if (err) { 2145 - dev_err(&pdev->dev, "failed to enable axi_clk (%u)\n", err); 2145 + dev_err(&pdev->dev, "failed to enable axi_clk (%d)\n", err); 2146 2146 return err; 2147 2147 } 2148 2148 2149 2149 err = clk_prepare_enable(*tx_clk); 2150 2150 if (err) { 2151 - dev_err(&pdev->dev, "failed to enable tx_clk (%u)\n", err); 2151 + dev_err(&pdev->dev, "failed to enable tx_clk (%d)\n", err); 2152 2152 goto err_disable_axiclk; 2153 2153 } 2154 2154 2155 2155 err = clk_prepare_enable(*rx_clk); 2156 2156 if (err) { 2157 - dev_err(&pdev->dev, "failed to enable rx_clk (%u)\n", err); 2157 + dev_err(&pdev->dev, "failed to enable rx_clk (%d)\n", err); 2158 2158 goto err_disable_txclk; 2159 2159 } 2160 2160 2161 2161 err = clk_prepare_enable(*sg_clk); 2162 2162 if (err) { 2163 - dev_err(&pdev->dev, "failed to enable sg_clk (%u)\n", err); 2163 + dev_err(&pdev->dev, "failed to enable sg_clk (%d)\n", err); 2164 2164 goto err_disable_rxclk; 2165 2165 } 2166 2166 ··· 2189 2189 *axi_clk = devm_clk_get(&pdev->dev, "s_axi_lite_aclk"); 2190 2190 if (IS_ERR(*axi_clk)) { 2191 2191 err = PTR_ERR(*axi_clk); 2192 - dev_err(&pdev->dev, "failed to get axi_clk (%u)\n", err); 2192 + dev_err(&pdev->dev, "failed to get axi_clk (%d)\n", err); 2193 2193 return err; 2194 2194 } 2195 2195 2196 2196 *dev_clk = devm_clk_get(&pdev->dev, "m_axi_aclk"); 2197 2197 if (IS_ERR(*dev_clk)) { 2198 2198 err = PTR_ERR(*dev_clk); 2199 - dev_err(&pdev->dev, "failed to get dev_clk (%u)\n", err); 2199 + dev_err(&pdev->dev, "failed to get dev_clk (%d)\n", err); 2200 2200 return err; 2201 2201 } 2202 2202 2203 2203 err = 
clk_prepare_enable(*axi_clk); 2204 2204 if (err) { 2205 - dev_err(&pdev->dev, "failed to enable axi_clk (%u)\n", err); 2205 + dev_err(&pdev->dev, "failed to enable axi_clk (%d)\n", err); 2206 2206 return err; 2207 2207 } 2208 2208 2209 2209 err = clk_prepare_enable(*dev_clk); 2210 2210 if (err) { 2211 - dev_err(&pdev->dev, "failed to enable dev_clk (%u)\n", err); 2211 + dev_err(&pdev->dev, "failed to enable dev_clk (%d)\n", err); 2212 2212 goto err_disable_axiclk; 2213 2213 } 2214 2214 ··· 2229 2229 *axi_clk = devm_clk_get(&pdev->dev, "s_axi_lite_aclk"); 2230 2230 if (IS_ERR(*axi_clk)) { 2231 2231 err = PTR_ERR(*axi_clk); 2232 - dev_err(&pdev->dev, "failed to get axi_aclk (%u)\n", err); 2232 + dev_err(&pdev->dev, "failed to get axi_aclk (%d)\n", err); 2233 2233 return err; 2234 2234 } 2235 2235 ··· 2251 2251 2252 2252 err = clk_prepare_enable(*axi_clk); 2253 2253 if (err) { 2254 - dev_err(&pdev->dev, "failed to enable axi_clk (%u)\n", err); 2254 + dev_err(&pdev->dev, "failed to enable axi_clk (%d)\n", err); 2255 2255 return err; 2256 2256 } 2257 2257 2258 2258 err = clk_prepare_enable(*tx_clk); 2259 2259 if (err) { 2260 - dev_err(&pdev->dev, "failed to enable tx_clk (%u)\n", err); 2260 + dev_err(&pdev->dev, "failed to enable tx_clk (%d)\n", err); 2261 2261 goto err_disable_axiclk; 2262 2262 } 2263 2263 2264 2264 err = clk_prepare_enable(*txs_clk); 2265 2265 if (err) { 2266 - dev_err(&pdev->dev, "failed to enable txs_clk (%u)\n", err); 2266 + dev_err(&pdev->dev, "failed to enable txs_clk (%d)\n", err); 2267 2267 goto err_disable_txclk; 2268 2268 } 2269 2269 2270 2270 err = clk_prepare_enable(*rx_clk); 2271 2271 if (err) { 2272 - dev_err(&pdev->dev, "failed to enable rx_clk (%u)\n", err); 2272 + dev_err(&pdev->dev, "failed to enable rx_clk (%d)\n", err); 2273 2273 goto err_disable_txsclk; 2274 2274 } 2275 2275 2276 2276 err = clk_prepare_enable(*rxs_clk); 2277 2277 if (err) { 2278 - dev_err(&pdev->dev, "failed to enable rxs_clk (%u)\n", err); 2278 + 
dev_err(&pdev->dev, "failed to enable rxs_clk (%d)\n", err); 2279 2279 goto err_disable_rxclk; 2280 2280 } 2281 2281
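The xilinx_dma hunks are a one-character fix with a real payoff: clk_prepare_enable() and PTR_ERR() return negative errno values, and printing those with %u renders the two's-complement bit pattern (e.g. 4294966779 for -517) instead of a readable negative number. A tiny host-side check of the difference; prints_same is a hypothetical helper, not kernel code:

```c
#include <stdio.h>
#include <string.h>

/* Returns 1 when formatting err with %u produces the same digits as %d.
 * True for non-negative values, false for negative errno codes, which is
 * why the hunks above switch the dev_err() format to %d. */
static int prints_same(int err)
{
	char as_d[32], as_u[32];

	snprintf(as_d, sizeof(as_d), "%d", err);
	snprintf(as_u, sizeof(as_u), "%u", (unsigned int)err);
	return strcmp(as_d, as_u) == 0;
}
```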
-94
drivers/dma/xilinx/zynqmp_dma.c
··· 830 830 } 831 831 832 832 /** 833 - * zynqmp_dma_prep_slave_sg - prepare descriptors for a memory sg transaction 834 - * @dchan: DMA channel 835 - * @dst_sg: Destination scatter list 836 - * @dst_sg_len: Number of entries in destination scatter list 837 - * @src_sg: Source scatter list 838 - * @src_sg_len: Number of entries in source scatter list 839 - * @flags: transfer ack flags 840 - * 841 - * Return: Async transaction descriptor on success and NULL on failure 842 - */ 843 - static struct dma_async_tx_descriptor *zynqmp_dma_prep_sg( 844 - struct dma_chan *dchan, struct scatterlist *dst_sg, 845 - unsigned int dst_sg_len, struct scatterlist *src_sg, 846 - unsigned int src_sg_len, unsigned long flags) 847 - { 848 - struct zynqmp_dma_desc_sw *new, *first = NULL; 849 - struct zynqmp_dma_chan *chan = to_chan(dchan); 850 - void *desc = NULL, *prev = NULL; 851 - size_t len, dst_avail, src_avail; 852 - dma_addr_t dma_dst, dma_src; 853 - u32 desc_cnt = 0, i; 854 - struct scatterlist *sg; 855 - 856 - for_each_sg(src_sg, sg, src_sg_len, i) 857 - desc_cnt += DIV_ROUND_UP(sg_dma_len(sg), 858 - ZYNQMP_DMA_MAX_TRANS_LEN); 859 - 860 - spin_lock_bh(&chan->lock); 861 - if (desc_cnt > chan->desc_free_cnt) { 862 - spin_unlock_bh(&chan->lock); 863 - dev_dbg(chan->dev, "chan %p descs are not available\n", chan); 864 - return NULL; 865 - } 866 - chan->desc_free_cnt = chan->desc_free_cnt - desc_cnt; 867 - spin_unlock_bh(&chan->lock); 868 - 869 - dst_avail = sg_dma_len(dst_sg); 870 - src_avail = sg_dma_len(src_sg); 871 - 872 - /* Run until we are out of scatterlist entries */ 873 - while (true) { 874 - /* Allocate and populate the descriptor */ 875 - new = zynqmp_dma_get_descriptor(chan); 876 - desc = (struct zynqmp_dma_desc_ll *)new->src_v; 877 - len = min_t(size_t, src_avail, dst_avail); 878 - len = min_t(size_t, len, ZYNQMP_DMA_MAX_TRANS_LEN); 879 - if (len == 0) 880 - goto fetch; 881 - dma_dst = sg_dma_address(dst_sg) + sg_dma_len(dst_sg) - 882 - dst_avail; 883 - dma_src = 
sg_dma_address(src_sg) + sg_dma_len(src_sg) - 884 - src_avail; 885 - 886 - zynqmp_dma_config_sg_ll_desc(chan, desc, dma_src, dma_dst, 887 - len, prev); 888 - prev = desc; 889 - dst_avail -= len; 890 - src_avail -= len; 891 - 892 - if (!first) 893 - first = new; 894 - else 895 - list_add_tail(&new->node, &first->tx_list); 896 - fetch: 897 - /* Fetch the next dst scatterlist entry */ 898 - if (dst_avail == 0) { 899 - if (dst_sg_len == 0) 900 - break; 901 - dst_sg = sg_next(dst_sg); 902 - if (dst_sg == NULL) 903 - break; 904 - dst_sg_len--; 905 - dst_avail = sg_dma_len(dst_sg); 906 - } 907 - /* Fetch the next src scatterlist entry */ 908 - if (src_avail == 0) { 909 - if (src_sg_len == 0) 910 - break; 911 - src_sg = sg_next(src_sg); 912 - if (src_sg == NULL) 913 - break; 914 - src_sg_len--; 915 - src_avail = sg_dma_len(src_sg); 916 - } 917 - } 918 - 919 - zynqmp_dma_desc_config_eod(chan, desc); 920 - first->async_tx.flags = flags; 921 - return &first->async_tx; 922 - } 923 - 924 - /** 925 833 * zynqmp_dma_chan_remove - Channel remove function 926 834 * @chan: ZynqMP DMA channel pointer 927 835 */ ··· 972 1064 INIT_LIST_HEAD(&zdev->common.channels); 973 1065 974 1066 dma_set_mask(&pdev->dev, DMA_BIT_MASK(44)); 975 - dma_cap_set(DMA_SG, zdev->common.cap_mask); 976 1067 dma_cap_set(DMA_MEMCPY, zdev->common.cap_mask); 977 1068 978 1069 p = &zdev->common; 979 - p->device_prep_dma_sg = zynqmp_dma_prep_sg; 980 1070 p->device_prep_dma_memcpy = zynqmp_dma_prep_memcpy; 981 1071 p->device_terminate_all = zynqmp_dma_device_terminate_all; 982 1072 p->device_issue_pending = zynqmp_dma_issue_pending;
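The zynqmp_dma hunk removes the sibling prep_sg implementation for the same reason: DMA_SG has no in-tree users. Before walking the lists, that function reserved descriptors up front by rounding each source segment up to the per-descriptor limit. A sketch of that reservation arithmetic; MAX_TRANS_LEN here is an illustrative value, not the driver's actual ZYNQMP_DMA_MAX_TRANS_LEN:

```c
/* Illustrative per-descriptor transfer limit (1 MiB). */
#define MAX_TRANS_LEN (1U << 20)

/* Same rounding helper the kernel provides in kernel.h */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Descriptors needed for a set of source segment lengths, mirroring the
 * for_each_sg() reservation pass in the removed zynqmp_dma_prep_sg. */
static unsigned int desc_count(const unsigned int *len, unsigned int n)
{
	unsigned int i, cnt = 0;

	for (i = 0; i < n; i++)
		cnt += DIV_ROUND_UP(len[i], MAX_TRANS_LEN);
	return cnt;
}
```

If the reserved count exceeded chan->desc_free_cnt, the driver bailed out before building any descriptors, avoiding a partial chain.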
+79
include/linux/dma/qcom_bam_dma.h
··· 1 + /* 2 + * Copyright (c) 2016-2017, The Linux Foundation. All rights reserved. 3 + * 4 + * This software is licensed under the terms of the GNU General Public 5 + * License version 2, as published by the Free Software Foundation, and 6 + * may be copied, distributed, and modified under those terms. 7 + * 8 + * This program is distributed in the hope that it will be useful, 9 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 10 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 11 + * GNU General Public License for more details. 12 + */ 13 + 14 + #ifndef _QCOM_BAM_DMA_H 15 + #define _QCOM_BAM_DMA_H 16 + 17 + #include <asm/byteorder.h> 18 + 19 + /* 20 + * This data type corresponds to the native Command Element 21 + * supported by BAM DMA Engine. 22 + * 23 + * @cmd_and_addr - upper 8 bits command and lower 24 bits register address. 24 + * @data - for write command: content to be written into peripheral register. 25 + * for read command: dest addr to write peripheral register value. 26 + * @mask - register mask. 27 + * @reserved - for future usage. 28 + * 29 + */ 30 + struct bam_cmd_element { 31 + __le32 cmd_and_addr; 32 + __le32 data; 33 + __le32 mask; 34 + __le32 reserved; 35 + }; 36 + 37 + /* 38 + * This enum indicates the command type in a command element 39 + */ 40 + enum bam_command_type { 41 + BAM_WRITE_COMMAND = 0, 42 + BAM_READ_COMMAND, 43 + }; 44 + 45 + /* 46 + * bam_prep_ce_le32 - Wrapper function to prepare a single BAM command 47 + * element with the data already in le32 format. 
48 + * 49 + * @bam_ce: bam command element 50 + * @addr: target address 51 + * @cmd: BAM command 52 + * @data: actual data for write and dest addr for read in le32 53 + */ 54 + static inline void 55 + bam_prep_ce_le32(struct bam_cmd_element *bam_ce, u32 addr, 56 + enum bam_command_type cmd, __le32 data) 57 + { 58 + bam_ce->cmd_and_addr = 59 + cpu_to_le32((addr & 0xffffff) | ((cmd & 0xff) << 24)); 60 + bam_ce->data = data; 61 + bam_ce->mask = cpu_to_le32(0xffffffff); 62 + } 63 + 64 + /* 65 + * bam_prep_ce - Wrapper function to prepare a single BAM command element 66 + * with the data. 67 + * 68 + * @bam_ce: BAM command element 69 + * @addr: target address 70 + * @cmd: BAM command 71 + * @data: actual data for write and dest addr for read 72 + */ 73 + static inline void 74 + bam_prep_ce(struct bam_cmd_element *bam_ce, u32 addr, 75 + enum bam_command_type cmd, u32 data) 76 + { 77 + bam_prep_ce_le32(bam_ce, addr, cmd, cpu_to_le32(data)); 78 + } 79 + #endif
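The new qcom_bam_dma.h header packs each command element's opcode into the top byte of cmd_and_addr and keeps the low 24 bits for the register address. A host-side sketch of just that packing step; the cpu_to_le32 byte-order conversion is omitted, so this assumes a little-endian host:

```c
#include <stdint.h>

/* Command values from enum bam_command_type above */
#define BAM_WRITE_COMMAND 0
#define BAM_READ_COMMAND  1

/* Mirror of bam_prep_ce_le32's cmd_and_addr packing: command in bits
 * 31:24, register address in bits 23:0. */
static uint32_t bam_pack_cmd_addr(uint32_t addr, uint32_t cmd)
{
	return (addr & 0xffffff) | ((cmd & 0xff) << 24);
}
```

Address bits above 23 are masked off, so an out-of-range address silently truncates rather than corrupting the command byte.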
+4 -19
include/linux/dmaengine.h
··· 68 68 DMA_MEMSET, 69 69 DMA_MEMSET_SG, 70 70 DMA_INTERRUPT, 71 - DMA_SG, 72 71 DMA_PRIVATE, 73 72 DMA_ASYNC_TX, 74 73 DMA_SLAVE, ··· 185 186 * on the result of this operation 186 187 * @DMA_CTRL_REUSE: client can reuse the descriptor and submit again till 187 188 * cleared or freed 189 + * @DMA_PREP_CMD: tell the driver that the data passed to DMA API is command 190 + * data and the descriptor should be in different format from normal 191 + * data descriptors. 188 192 */ 189 193 enum dma_ctrl_flags { 190 194 DMA_PREP_INTERRUPT = (1 << 0), ··· 197 195 DMA_PREP_CONTINUE = (1 << 4), 198 196 DMA_PREP_FENCE = (1 << 5), 199 197 DMA_CTRL_REUSE = (1 << 6), 198 + DMA_PREP_CMD = (1 << 7), 200 199 }; 201 200 202 201 /** ··· 774 771 unsigned int nents, int value, unsigned long flags); 775 772 struct dma_async_tx_descriptor *(*device_prep_dma_interrupt)( 776 773 struct dma_chan *chan, unsigned long flags); 777 - struct dma_async_tx_descriptor *(*device_prep_dma_sg)( 778 - struct dma_chan *chan, 779 - struct scatterlist *dst_sg, unsigned int dst_nents, 780 - struct scatterlist *src_sg, unsigned int src_nents, 781 - unsigned long flags); 782 774 783 775 struct dma_async_tx_descriptor *(*device_prep_slave_sg)( 784 776 struct dma_chan *chan, struct scatterlist *sgl, ··· 901 903 902 904 return chan->device->device_prep_dma_memcpy(chan, dest, src, 903 905 len, flags); 904 - } 905 - 906 - static inline struct dma_async_tx_descriptor *dmaengine_prep_dma_sg( 907 - struct dma_chan *chan, 908 - struct scatterlist *dst_sg, unsigned int dst_nents, 909 - struct scatterlist *src_sg, unsigned int src_nents, 910 - unsigned long flags) 911 - { 912 - if (!chan || !chan->device || !chan->device->device_prep_dma_sg) 913 - return NULL; 914 - 915 - return chan->device->device_prep_dma_sg(chan, dst_sg, dst_nents, 916 - src_sg, src_nents, flags); 917 906 } 918 907 919 908 /**
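The dmaengine.h hunk also adds DMA_PREP_CMD as the next free bit in dma_ctrl_flags, letting a client (the qcom_hidma/bam use case) mark a prep call's payload as command data so the driver builds a different descriptor format. A minimal illustration of testing that bit; only the flags visible in the hunk are reproduced here:

```c
/* Subset of enum dma_ctrl_flags, as shown in the hunk above */
enum dma_ctrl_flags {
	DMA_PREP_INTERRUPT = (1 << 0),
	DMA_PREP_FENCE     = (1 << 5),
	DMA_CTRL_REUSE     = (1 << 6),
	DMA_PREP_CMD       = (1 << 7),
};

/* True when the client flagged the transfer as command data */
static int is_cmd_descriptor(unsigned long flags)
{
	return (flags & DMA_PREP_CMD) != 0;
}
```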