Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'dmaengine-5.1-rc1' of git://git.infradead.org/users/vkoul/slave-dma

Pull dmaengine updates from Vinod Koul:

- dmatest updates for modularizing common struct and code

- remove SG support for VDMA xilinx IP and updates to driver

- Update to dw driver to support Intel iDMA controllers multi-block
support

- tegra updates for proper reporting of residue

- Add Snow Ridge ioatdma device id and support for IOATDMA v3.4

- struct_size() usage and useless LIST_HEAD cleanups in subsystem.

- qDMA controller driver for Layerscape SoCs

- stm32-dma PM Runtime support

- And usual updates to imx-sdma, sprd, Documentation, fsl-edma,
bcm2835, qcom_hidma etc

* tag 'dmaengine-5.1-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (81 commits)
dmaengine: imx-sdma: fix consistent dma test failures
dmaengine: imx-sdma: add a test for imx8mq multi sdma devices
dmaengine: imx-sdma: add clock ratio 1:1 check
dmaengine: dmatest: move test data alloc & free into functions
dmaengine: dmatest: add short-hand `buf_size` var in dmatest_func()
dmaengine: dmatest: wrap src & dst data into a struct
dmaengine: ioatdma: support latency tolerance report (LTR) for v3.4
dmaengine: ioatdma: add descriptor pre-fetch support for v3.4
dmaengine: ioatdma: disable DCA enabling on IOATDMA v3.4
dmaengine: ioatdma: Add Snow Ridge ioatdma device id
dmaengine: sprd: Change channel id to slave id for DMA cell specifier
dt-bindings: dmaengine: sprd: Change channel id to slave id for DMA cell specifier
dmaengine: mv_xor: Use correct device for DMA API
Documentation :dmaengine: clarify DMA desc. pointer after submission
Documentation: dmaengine: fix dmatest.rst warning
dmaengine: k3dma: Add support for dma-channel-mask
dmaengine: k3dma: Delete axi_config
dmaengine: k3dma: Upgrade k3dma driver to support hisi_asp_dma hardware
Documentation: bindings: dma: Add binding for dma-channel-mask
Documentation: bindings: k3dma: Extend the k3dma driver binding to support hisi-asp
...

+2597 -659
+4
Documentation/devicetree/bindings/dma/dma.txt
···
  - dma-channels: Number of DMA channels supported by the controller.
  - dma-requests: Number of DMA request signals supported by the
		  controller.
+ - dma-channel-mask: Bitmask of available DMA channels in ascending order
+		      that are not reserved by firmware and are available to
+		      the kernel. i.e. first channel corresponds to LSB.

  Example:
···
		#dma-cells = <1>;
		dma-channels = <32>;
		dma-requests = <127>;
+		dma-channel-mask = <0xfffe>
	};

  * DMA router
+57
Documentation/devicetree/bindings/dma/fsl-qdma.txt
···
+ NXP Layerscape SoC qDMA Controller
+ ==================================
+
+ This device follows the generic DMA bindings defined in dma/dma.txt.
+
+ Required properties:
+
+ - compatible:		Must be one of
+			 "fsl,ls1021a-qdma": for LS1021A Board
+			 "fsl,ls1043a-qdma": for ls1043A Board
+			 "fsl,ls1046a-qdma": for ls1046A Board
+ - reg:		Should contain the register's base address and length.
+ - interrupts:		Should contain a reference to the interrupt used by this
+			device.
+ - interrupt-names:	Should contain interrupt names:
+			 "qdma-queue0": the block0 interrupt
+			 "qdma-queue1": the block1 interrupt
+			 "qdma-queue2": the block2 interrupt
+			 "qdma-queue3": the block3 interrupt
+			 "qdma-error":  the error interrupt
+ - fsl,dma-queues:	Should contain number of queues supported.
+ - dma-channels:	Number of DMA channels supported
+ - block-number:	the virtual block number
+ - block-offset:	the offset of different virtual block
+ - status-sizes:	status queue size of per virtual block
+ - queue-sizes:	command queue size of per virtual block, the size number
+			based on queues
+
+ Optional properties:
+
+ - dma-channels:	Number of DMA channels supported by the controller.
+ - big-endian:		If present registers and hardware scatter/gather descriptors
+			of the qDMA are implemented in big endian mode, otherwise in little
+			mode.
+
+ Examples:
+
+	qdma: dma-controller@8390000 {
+		compatible = "fsl,ls1021a-qdma";
+		reg = <0x0 0x8388000 0x0 0x1000>, /* Controller regs */
+		      <0x0 0x8389000 0x0 0x1000>, /* Status regs */
+		      <0x0 0x838a000 0x0 0x2000>; /* Block regs */
+		interrupts = <GIC_SPI 185 IRQ_TYPE_LEVEL_HIGH>,
+			     <GIC_SPI 76 IRQ_TYPE_LEVEL_HIGH>,
+			     <GIC_SPI 77 IRQ_TYPE_LEVEL_HIGH>;
+		interrupt-names = "qdma-error",
+			"qdma-queue0", "qdma-queue1";
+		dma-channels = <8>;
+		block-number = <2>;
+		block-offset = <0x1000>;
+		fsl,dma-queues = <2>;
+		status-sizes = <64>;
+		queue-sizes = <64 64>;
+		big-endian;
+	};
+
+ DMA clients must use the format described in dma/dma.txt file.
+3 -1
Documentation/devicetree/bindings/dma/k3dma.txt
···
  See dma.txt first

  Required properties:
- - compatible: Should be "hisilicon,k3-dma-1.0"
+ - compatible: Must be one of
+     -  "hisilicon,k3-dma-1.0"
+     -  "hisilicon,hisi-pcm-asp-dma-1.0"
  - reg: Should contain DMA registers location and length.
  - interrupts: Should contain one interrupt shared by all channel
  - #dma-cells: see dma.txt, should be 1, para number
-2
Documentation/devicetree/bindings/dma/snps-dma.txt
···

  Optional properties:
- - is_private: The device channels should be marked as private and not for by the
-   general purpose DMA channel allocator. False if not passed.
  - multi-block: Multi block transfers supported by hardware. Array property with
    one cell per channel. 0: not supported, 1 (default): supported.
  - snps,dma-protection-control: AHB HPROT[3:1] protection setting.
+1 -1
Documentation/devicetree/bindings/dma/sprd-dma.txt
···
  described in the dma.txt file, using a two-cell specifier for each channel.
  The two cells in order are:
  1. A phandle pointing to the DMA controller.
- 2. The channel id.
+ 2. The slave id.

  spi0: spi@70a00000{
	...
+4 -3
Documentation/devicetree/bindings/dma/xilinx/xilinx_dma.txt
···
  Required properties for VDMA:
  - xlnx,num-fstores: Should be the number of framebuffers as configured in h/w.

- Optional properties:
- - xlnx,include-sg: Tells configured for Scatter-mode in
-   the hardware.
  Optional properties for AXI DMA:
+ - xlnx,sg-length-width: Should be set to the width in bits of the length
+   register as configured in h/w. Takes values {8...26}. If the property
+   is missing or invalid then the default value 23 is used. This is the
+   maximum value that is supported by all IP versions.
  - xlnx,mcdma: Tells whether configured for multi-channel mode in the hardware.
  Optional properties for VDMA:
  - xlnx,flush-fsync: Tells which channel to Flush on Frame sync.
+1 -1
Documentation/driver-api/dmaengine/client.rst
···

  After calling ``dmaengine_submit()`` the submitted transfer descriptor
  (``struct dma_async_tx_descriptor``) belongs to the DMA engine.
- Consequentially, the client must consider invalid the pointer to that
+ Consequently, the client must consider invalid the pointer to that
  descriptor.

  5. Issue pending DMA requests and wait for callback notification
+1
Documentation/driver-api/dmaengine/dmatest.rst
···
  is created with the existing parameters. This thread is set as pending
  and will be executed once run is set to 1. Any parameters set after the thread
  is created are not applied.
+
  .. hint::
    available channel list could be extracted by running the following command::

+14
drivers/dma/Kconfig
···
	  multiplexing capability for DMA request sources(slot).
	  This module can be found on Freescale Vybrid and LS-1 SoCs.

+config FSL_QDMA
+	tristate "NXP Layerscape qDMA engine support"
+	depends on ARM || ARM64
+	select DMA_ENGINE
+	select DMA_VIRTUAL_CHANNELS
+	select DMA_ENGINE_RAID
+	select ASYNC_TX_ENABLE_CHANNEL_SWITCH
+	help
+	  Support the NXP Layerscape qDMA engine with command queue and legacy mode.
+	  Channel virtualization is supported through enqueuing of DMA jobs to,
+	  or dequeuing DMA jobs from, different work queues.
+	  This module can be found on NXP Layerscape SoCs.
+	  The qdma driver only work on SoCs with a DPAA hardware block.
+
 config FSL_RAID
	tristate "Freescale RAID engine Support"
	depends on FSL_SOC && !ASYNC_TX_ENABLE_CHANNEL_SWITCH
+1
drivers/dma/Makefile
···
 obj-$(CONFIG_FSL_DMA) += fsldma.o
 obj-$(CONFIG_FSL_EDMA) += fsl-edma.o fsl-edma-common.o
 obj-$(CONFIG_MCF_EDMA) += mcf-edma.o fsl-edma-common.o
+obj-$(CONFIG_FSL_QDMA) += fsl-qdma.o
 obj-$(CONFIG_FSL_RAID) += fsl_raid.o
 obj-$(CONFIG_HSU_DMA) += hsu/
 obj-$(CONFIG_IMG_MDC_DMA) += img-mdc-dma.o
-5
drivers/dma/at_hdmac.c
···
	struct at_desc *ret = NULL;
	unsigned long flags;
	unsigned int i = 0;
-	LIST_HEAD(tmp_list);

	spin_lock_irqsave(&atchan->lock, flags);
	list_for_each_entry_safe(desc, _desc, &atchan->free_list, desc_node) {
···
	int chan_id = atchan->chan_common.chan_id;
	unsigned long flags;

-	LIST_HEAD(list);
-
	dev_vdbg(chan2dev(chan), "%s\n", __func__);

	spin_lock_irqsave(&atchan->lock, flags);
···
	struct at_dma *atdma = to_at_dma(chan->device);
	int chan_id = atchan->chan_common.chan_id;
	unsigned long flags;
-
-	LIST_HEAD(list);

	dev_vdbg(chan2dev(chan), "%s\n", __func__);

+8 -19
drivers/dma/bcm2835-dma.c
···
 /*
  * BCM2835 DMA engine support
  *
- * This driver only supports cyclic DMA transfers
- * as needed for the I2S module.
- *
  * Author: Florian Meier <florian.meier@koalo.de>
  *	Copyright 2013
  *
···
 struct bcm2835_dmadev {
	struct dma_device ddev;
-	spinlock_t lock;
	void __iomem *base;
	struct device_dma_parameters dma_parms;
 };
···
 struct bcm2835_chan {
	struct virt_dma_chan vc;
-	struct list_head node;

	struct dma_slave_config cfg;
	unsigned int dreq;
···
		return NULL;

	/* allocate and setup the descriptor. */
-	d = kzalloc(sizeof(*d) + frames * sizeof(struct bcm2835_cb_entry),
-		    gfp);
+	d = kzalloc(struct_size(d, cb_list, frames), gfp);
	if (!d)
		return NULL;
···
	}
 }

-static int bcm2835_dma_abort(struct bcm2835_chan *c)
+static void bcm2835_dma_abort(struct bcm2835_chan *c)
 {
	void __iomem *chan_base = c->chan_base;
	long int timeout = 10000;
···
	 * (The ACTIVE flag in the CS register is not a reliable indicator.)
	 */
	if (!readl(chan_base + BCM2835_DMA_ADDR))
-		return 0;
+		return;

	/* Write 0 to the active bit - Pause the DMA */
	writel(0, chan_base + BCM2835_DMA_CS);
···
			"failed to complete outstanding writes\n");

	writel(BCM2835_DMA_RESET, chan_base + BCM2835_DMA_CS);
-	return 0;
 }

 static void bcm2835_dma_start_desc(struct bcm2835_chan *c)
···
	dev_dbg(dev, "Allocating DMA channel %d\n", c->ch);

+	/*
+	 * Control blocks are 256 bit in length and must start at a 256 bit
+	 * (32 byte) aligned address (BCM2835 ARM Peripherals, sec. 4.2.1.1).
+	 */
	c->cb_pool = dma_pool_create(dev_name(dev), dev,
-				     sizeof(struct bcm2835_dma_cb), 0, 0);
+				     sizeof(struct bcm2835_dma_cb), 32, 0);
	if (!c->cb_pool) {
		dev_err(dev, "unable to allocate descriptor pool\n");
		return -ENOMEM;
···
 static int bcm2835_dma_terminate_all(struct dma_chan *chan)
 {
	struct bcm2835_chan *c = to_bcm2835_dma_chan(chan);
-	struct bcm2835_dmadev *d = to_bcm2835_dma_dev(c->vc.chan.device);
	unsigned long flags;
	LIST_HEAD(head);

	spin_lock_irqsave(&c->vc.lock, flags);
-
-	/* Prevent this channel being scheduled */
-	spin_lock(&d->lock);
-	list_del_init(&c->node);
-	spin_unlock(&d->lock);

	/* stop DMA activity */
	if (c->desc) {
···
	c->vc.desc_free = bcm2835_dma_desc_free;
	vchan_init(&c->vc, &d->ddev);
-	INIT_LIST_HEAD(&c->node);

	c->chan_base = BCM2835_DMA_CHANIO(d->base, chan_id);
	c->ch = chan_id;
···
	od->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
	od->ddev.dev = &pdev->dev;
	INIT_LIST_HEAD(&od->ddev.channels);
-	spin_lock_init(&od->lock);

	platform_set_drvdata(pdev, od);

+1 -2
drivers/dma/dma-axi-dmac.c
···
	struct axi_dmac_desc *desc;
	unsigned int i;

-	desc = kzalloc(sizeof(struct axi_dmac_desc) +
-		sizeof(struct axi_dmac_sg) * num_sgs, GFP_NOWAIT);
+	desc = kzalloc(struct_size(desc, sg, num_sgs), GFP_NOWAIT);
	if (!desc)
		return NULL;

+2 -3
drivers/dma/dma-jz4780.c
···
	if (!soc_data)
		return -EINVAL;

-	jzdma = devm_kzalloc(dev, sizeof(*jzdma)
-				+ sizeof(*jzdma->chan) * soc_data->nb_channels,
-				GFP_KERNEL);
+	jzdma = devm_kzalloc(dev, struct_size(jzdma, chan,
+			     soc_data->nb_channels), GFP_KERNEL);
	if (!jzdma)
		return -ENOMEM;

+139 -130
drivers/dma/dmatest.c
···
	wait_queue_head_t *wait;
 };

+struct dmatest_data {
+	u8		**raw;
+	u8		**aligned;
+	unsigned int	cnt;
+	unsigned int	off;
+};
+
 struct dmatest_thread {
	struct list_head	node;
	struct dmatest_info	*info;
	struct task_struct	*task;
	struct dma_chan		*chan;
-	u8			**srcs;
-	u8			**usrcs;
-	u8			**dsts;
-	u8			**udsts;
+	struct dmatest_data	src;
+	struct dmatest_data	dst;
	enum dma_transaction_type type;
	wait_queue_head_t done_wait;
	struct dmatest_done test_done;
···
	return FIXPT_TO_INT(dmatest_persec(runtime, len >> 10));
 }

+static void __dmatest_free_test_data(struct dmatest_data *d, unsigned int cnt)
+{
+	unsigned int i;
+
+	for (i = 0; i < cnt; i++)
+		kfree(d->raw[i]);
+
+	kfree(d->aligned);
+	kfree(d->raw);
+}
+
+static void dmatest_free_test_data(struct dmatest_data *d)
+{
+	__dmatest_free_test_data(d, d->cnt);
+}
+
+static int dmatest_alloc_test_data(struct dmatest_data *d,
+		unsigned int buf_size, u8 align)
+{
+	unsigned int i = 0;
+
+	d->raw = kcalloc(d->cnt + 1, sizeof(u8 *), GFP_KERNEL);
+	if (!d->raw)
+		return -ENOMEM;
+
+	d->aligned = kcalloc(d->cnt + 1, sizeof(u8 *), GFP_KERNEL);
+	if (!d->aligned)
+		goto err;
+
+	for (i = 0; i < d->cnt; i++) {
+		d->raw[i] = kmalloc(buf_size + align, GFP_KERNEL);
+		if (!d->raw[i])
+			goto err;
+
+		/* align to alignment restriction */
+		if (align)
+			d->aligned[i] = PTR_ALIGN(d->raw[i], align);
+		else
+			d->aligned[i] = d->raw[i];
+	}
+
+	return 0;
+err:
+	__dmatest_free_test_data(d, i);
+	return -ENOMEM;
+}
+
 /*
  * This function repeatedly tests DMA transfers of various lengths and
  * offsets for a given operation type until it is told to exit by
···
	enum dma_ctrl_flags flags;
	u8 *pq_coefs = NULL;
	int ret;
-	int src_cnt;
-	int dst_cnt;
+	unsigned int buf_size;
+	struct dmatest_data *src;
+	struct dmatest_data *dst;
	int i;
	ktime_t ktime, start, diff;
	ktime_t filltime = 0;
···
	params = &info->params;
	chan = thread->chan;
	dev = chan->device;
+	src = &thread->src;
+	dst = &thread->dst;
	if (thread->type == DMA_MEMCPY) {
		align = params->alignment < 0 ? dev->copy_align :
				params->alignment;
-		src_cnt = dst_cnt = 1;
+		src->cnt = dst->cnt = 1;
	} else if (thread->type == DMA_MEMSET) {
		align = params->alignment < 0 ? dev->fill_align :
				params->alignment;
-		src_cnt = dst_cnt = 1;
+		src->cnt = dst->cnt = 1;
		is_memset = true;
	} else if (thread->type == DMA_XOR) {
		/* force odd to ensure dst = src */
-		src_cnt = min_odd(params->xor_sources | 1, dev->max_xor);
-		dst_cnt = 1;
+		src->cnt = min_odd(params->xor_sources | 1, dev->max_xor);
+		dst->cnt = 1;
		align = params->alignment < 0 ? dev->xor_align :
				params->alignment;
	} else if (thread->type == DMA_PQ) {
		/* force odd to ensure dst = src */
-		src_cnt = min_odd(params->pq_sources | 1, dma_maxpq(dev, 0));
-		dst_cnt = 2;
+		src->cnt = min_odd(params->pq_sources | 1, dma_maxpq(dev, 0));
+		dst->cnt = 2;
		align = params->alignment < 0 ? dev->pq_align :
				params->alignment;
···
		if (!pq_coefs)
			goto err_thread_type;

-		for (i = 0; i < src_cnt; i++)
+		for (i = 0; i < src->cnt; i++)
			pq_coefs[i] = 1;
	} else
		goto err_thread_type;

	/* Check if buffer count fits into map count variable (u8) */
-	if ((src_cnt + dst_cnt) >= 255) {
+	if ((src->cnt + dst->cnt) >= 255) {
		pr_err("too many buffers (%d of 255 supported)\n",
-		       src_cnt + dst_cnt);
+		       src->cnt + dst->cnt);
		goto err_free_coefs;
	}

-	if (1 << align > params->buf_size) {
+	buf_size = params->buf_size;
+	if (1 << align > buf_size) {
		pr_err("%u-byte buffer too small for %d-byte alignment\n",
-		       params->buf_size, 1 << align);
+		       buf_size, 1 << align);
		goto err_free_coefs;
	}

-	thread->srcs = kcalloc(src_cnt + 1, sizeof(u8 *), GFP_KERNEL);
-	if (!thread->srcs)
+	if (dmatest_alloc_test_data(src, buf_size, align) < 0)
		goto err_free_coefs;

-	thread->usrcs = kcalloc(src_cnt + 1, sizeof(u8 *), GFP_KERNEL);
-	if (!thread->usrcs)
-		goto err_usrcs;
-
-	for (i = 0; i < src_cnt; i++) {
-		thread->usrcs[i] = kmalloc(params->buf_size + align,
-					   GFP_KERNEL);
-		if (!thread->usrcs[i])
-			goto err_srcbuf;
-
-		/* align srcs to alignment restriction */
-		if (align)
-			thread->srcs[i] = PTR_ALIGN(thread->usrcs[i], align);
-		else
-			thread->srcs[i] = thread->usrcs[i];
-	}
-	thread->srcs[i] = NULL;
-
-	thread->dsts = kcalloc(dst_cnt + 1, sizeof(u8 *), GFP_KERNEL);
-	if (!thread->dsts)
-		goto err_dsts;
-
-	thread->udsts = kcalloc(dst_cnt + 1, sizeof(u8 *), GFP_KERNEL);
-	if (!thread->udsts)
-		goto err_udsts;
-
-	for (i = 0; i < dst_cnt; i++) {
-		thread->udsts[i] = kmalloc(params->buf_size + align,
-					   GFP_KERNEL);
-		if (!thread->udsts[i])
-			goto err_dstbuf;
-
-		/* align dsts to alignment restriction */
-		if (align)
-			thread->dsts[i] = PTR_ALIGN(thread->udsts[i], align);
-		else
-			thread->dsts[i] = thread->udsts[i];
-	}
-	thread->dsts[i] = NULL;
+	if (dmatest_alloc_test_data(dst, buf_size, align) < 0)
+		goto err_src;

	set_user_nice(current, 10);

-	srcs = kcalloc(src_cnt, sizeof(dma_addr_t), GFP_KERNEL);
+	srcs = kcalloc(src->cnt, sizeof(dma_addr_t), GFP_KERNEL);
	if (!srcs)
-		goto err_dstbuf;
+		goto err_dst;

-	dma_pq = kcalloc(dst_cnt, sizeof(dma_addr_t), GFP_KERNEL);
+	dma_pq = kcalloc(dst->cnt, sizeof(dma_addr_t), GFP_KERNEL);
	if (!dma_pq)
		goto err_srcs_array;
···
		struct dma_async_tx_descriptor *tx = NULL;
		struct dmaengine_unmap_data *um;
		dma_addr_t *dsts;
-		unsigned int src_off, dst_off, len;
+		unsigned int len;

		total_tests++;

		if (params->transfer_size) {
-			if (params->transfer_size >= params->buf_size) {
+			if (params->transfer_size >= buf_size) {
				pr_err("%u-byte transfer size must be lower than %u-buffer size\n",
-				       params->transfer_size, params->buf_size);
+				       params->transfer_size, buf_size);
				break;
			}
			len = params->transfer_size;
		} else if (params->norandom) {
-			len = params->buf_size;
+			len = buf_size;
		} else {
-			len = dmatest_random() % params->buf_size + 1;
+			len = dmatest_random() % buf_size + 1;
		}

		/* Do not alter transfer size explicitly defined by user */
···
		total_len += len;

		if (params->norandom) {
-			src_off = 0;
-			dst_off = 0;
+			src->off = 0;
+			dst->off = 0;
		} else {
-			src_off = dmatest_random() % (params->buf_size - len + 1);
-			dst_off = dmatest_random() % (params->buf_size - len + 1);
+			src->off = dmatest_random() % (buf_size - len + 1);
+			dst->off = dmatest_random() % (buf_size - len + 1);

-			src_off = (src_off >> align) << align;
-			dst_off = (dst_off >> align) << align;
+			src->off = (src->off >> align) << align;
+			dst->off = (dst->off >> align) << align;
		}

		if (!params->noverify) {
			start = ktime_get();
-			dmatest_init_srcs(thread->srcs, src_off, len,
-					  params->buf_size, is_memset);
-			dmatest_init_dsts(thread->dsts, dst_off, len,
-					  params->buf_size, is_memset);
+			dmatest_init_srcs(src->aligned, src->off, len,
+					  buf_size, is_memset);
+			dmatest_init_dsts(dst->aligned, dst->off, len,
+					  buf_size, is_memset);

			diff = ktime_sub(ktime_get(), start);
			filltime = ktime_add(filltime, diff);
		}

-		um = dmaengine_get_unmap_data(dev->dev, src_cnt + dst_cnt,
+		um = dmaengine_get_unmap_data(dev->dev, src->cnt + dst->cnt,
					      GFP_KERNEL);
		if (!um) {
			failed_tests++;
			result("unmap data NULL", total_tests,
-			       src_off, dst_off, len, ret);
+			       src->off, dst->off, len, ret);
			continue;
		}

-		um->len = params->buf_size;
-		for (i = 0; i < src_cnt; i++) {
-			void *buf = thread->srcs[i];
+		um->len = buf_size;
+		for (i = 0; i < src->cnt; i++) {
+			void *buf = src->aligned[i];
			struct page *pg = virt_to_page(buf);
			unsigned long pg_off = offset_in_page(buf);

			um->addr[i] = dma_map_page(dev->dev, pg, pg_off,
						   um->len, DMA_TO_DEVICE);
-			srcs[i] = um->addr[i] + src_off;
+			srcs[i] = um->addr[i] + src->off;
			ret = dma_mapping_error(dev->dev, um->addr[i]);
			if (ret) {
				result("src mapping error", total_tests,
-				       src_off, dst_off, len, ret);
+				       src->off, dst->off, len, ret);
				goto error_unmap_continue;
			}
			um->to_cnt++;
		}
		/* map with DMA_BIDIRECTIONAL to force writeback/invalidate */
-		dsts = &um->addr[src_cnt];
-		for (i = 0; i < dst_cnt; i++) {
-			void *buf = thread->dsts[i];
+		dsts = &um->addr[src->cnt];
+		for (i = 0; i < dst->cnt; i++) {
+			void *buf = dst->aligned[i];
			struct page *pg = virt_to_page(buf);
			unsigned long pg_off = offset_in_page(buf);

···
			ret = dma_mapping_error(dev->dev, dsts[i]);
			if (ret) {
				result("dst mapping error", total_tests,
-				       src_off, dst_off, len, ret);
+				       src->off, dst->off, len, ret);
				goto error_unmap_continue;
			}
			um->bidi_cnt++;
···
		if (thread->type == DMA_MEMCPY)
			tx = dev->device_prep_dma_memcpy(chan,
-							 dsts[0] + dst_off,
+							 dsts[0] + dst->off,
							 srcs[0], len, flags);
		else if (thread->type == DMA_MEMSET)
			tx = dev->device_prep_dma_memset(chan,
-						dsts[0] + dst_off,
-						*(thread->srcs[0] + src_off),
+						dsts[0] + dst->off,
+						*(src->aligned[0] + src->off),
						len, flags);
		else if (thread->type == DMA_XOR)
			tx = dev->device_prep_dma_xor(chan,
-						      dsts[0] + dst_off,
-						      srcs, src_cnt,
+						      dsts[0] + dst->off,
+						      srcs, src->cnt,
						      len, flags);
		else if (thread->type == DMA_PQ) {
-			for (i = 0; i < dst_cnt; i++)
-				dma_pq[i] = dsts[i] + dst_off;
+			for (i = 0; i < dst->cnt; i++)
+				dma_pq[i] = dsts[i] + dst->off;
			tx = dev->device_prep_dma_pq(chan, dma_pq, srcs,
-						     src_cnt, pq_coefs,
+						     src->cnt, pq_coefs,
						     len, flags);
		}

		if (!tx) {
-			result("prep error", total_tests, src_off,
-			       dst_off, len, ret);
+			result("prep error", total_tests, src->off,
+			       dst->off, len, ret);
			msleep(100);
			goto error_unmap_continue;
		}
···
		cookie = tx->tx_submit(tx);

		if (dma_submit_error(cookie)) {
-			result("submit error", total_tests, src_off,
-			       dst_off, len, ret);
+			result("submit error", total_tests, src->off,
+			       dst->off, len, ret);
			msleep(100);
			goto error_unmap_continue;
		}
···
		status = dma_async_is_tx_complete(chan, cookie, NULL, NULL);

		if (!done->done) {
-			result("test timed out", total_tests, src_off, dst_off,
+			result("test timed out", total_tests, src->off, dst->off,
			       len, 0);
			goto error_unmap_continue;
		} else if (status != DMA_COMPLETE) {
			result(status == DMA_ERROR ?
			       "completion error status" :
-			       "completion busy status", total_tests, src_off,
-			       dst_off, len, ret);
+			       "completion busy status", total_tests, src->off,
+			       dst->off, len, ret);
			goto error_unmap_continue;
		}

		dmaengine_unmap_put(um);

		if (params->noverify) {
-			verbose_result("test passed", total_tests, src_off,
-				       dst_off, len, 0);
+			verbose_result("test passed", total_tests, src->off,
+				       dst->off, len, 0);
			continue;
		}

		start = ktime_get();
		pr_debug("%s: verifying source buffer...\n", current->comm);
-		error_count = dmatest_verify(thread->srcs, 0, src_off,
+		error_count = dmatest_verify(src->aligned, 0, src->off,
				0, PATTERN_SRC, true, is_memset);
-		error_count += dmatest_verify(thread->srcs, src_off,
-				src_off + len, src_off,
+		error_count += dmatest_verify(src->aligned, src->off,
+				src->off + len, src->off,
				PATTERN_SRC | PATTERN_COPY, true, is_memset);
-		error_count += dmatest_verify(thread->srcs, src_off + len,
-				params->buf_size, src_off + len,
+		error_count += dmatest_verify(src->aligned, src->off + len,
+				buf_size, src->off + len,
				PATTERN_SRC, true, is_memset);

		pr_debug("%s: verifying dest buffer...\n", current->comm);
-		error_count += dmatest_verify(thread->dsts, 0, dst_off,
+		error_count += dmatest_verify(dst->aligned, 0, dst->off,
				0, PATTERN_DST, false, is_memset);

-		error_count += dmatest_verify(thread->dsts, dst_off,
-				dst_off + len, src_off,
+		error_count += dmatest_verify(dst->aligned, dst->off,
+				dst->off + len, src->off,
				PATTERN_SRC | PATTERN_COPY, false, is_memset);

-		error_count += dmatest_verify(thread->dsts, dst_off + len,
-				params->buf_size, dst_off + len,
+		error_count += dmatest_verify(dst->aligned, dst->off + len,
+				buf_size, dst->off + len,
				PATTERN_DST, false, is_memset);

		diff = ktime_sub(ktime_get(), start);
		comparetime = ktime_add(comparetime, diff);

		if (error_count) {
-			result("data error", total_tests, src_off, dst_off,
+			result("data error", total_tests, src->off, dst->off,
			       len, error_count);
			failed_tests++;
		} else {
-			verbose_result("test passed", total_tests, src_off,
-				       dst_off, len, 0);
+			verbose_result("test passed", total_tests, src->off,
+				       dst->off, len, 0);
		}

		continue;
···
	kfree(dma_pq);
 err_srcs_array:
	kfree(srcs);
-err_dstbuf:
-	for (i = 0; thread->udsts[i]; i++)
-		kfree(thread->udsts[i]);
-	kfree(thread->udsts);
-err_udsts:
-	kfree(thread->dsts);
-err_dsts:
-err_srcbuf:
-	for (i = 0; thread->usrcs[i]; i++)
-		kfree(thread->usrcs[i]);
-	kfree(thread->usrcs);
-err_usrcs:
-	kfree(thread->srcs);
+err_dst:
+	dmatest_free_test_data(dst);
+err_src:
+	dmatest_free_test_data(src);
 err_free_coefs:
	kfree(pq_coefs);
 err_thread_type:
+1 -1
drivers/dma/dw-axi-dmac/dw-axi-dmac.h
···
	__le32		sstat;
	__le32		dstat;
	__le32		status_lo;
-	__le32		ststus_hi;
+	__le32		status_hi;
	__le32		reserved_lo;
	__le32		reserved_hi;
 };
+2
drivers/dma/dw/Kconfig
···
+# SPDX-License-Identifier: GPL-2.0
+
 #
 # DMA engine configuration for dw
 #
+1 -1
drivers/dma/dw/Makefile
···
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_DW_DMAC_CORE)	+= dw_dmac_core.o
-dw_dmac_core-objs	:= core.o
+dw_dmac_core-objs	:= core.o dw.o idma32.o

 obj-$(CONFIG_DW_DMAC)		+= dw_dmac.o
 dw_dmac-objs		:= platform.o
+47 -198
drivers/dma/dw/core.c
···
+// SPDX-License-Identifier: GPL-2.0
 /*
  * Core driver for the Synopsys DesignWare DMA Controller
  *
  * Copyright (C) 2007-2008 Atmel Corporation
  * Copyright (C) 2010-2011 ST Microelectronics
  * Copyright (C) 2013 Intel Corporation
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
  */

 #include <linux/bitops.h>
···
  * The driver has been tested with the Atmel AT32AP7000, which does not
  * support descriptor writeback.
  */
-
-#define DWC_DEFAULT_CTLLO(_chan) ({				\
-	struct dw_dma_chan *_dwc = to_dw_dma_chan(_chan);	\
-	struct dma_slave_config	*_sconfig = &_dwc->dma_sconfig;	\
-	bool _is_slave = is_slave_direction(_dwc->direction);	\
-	u8 _smsize = _is_slave ? _sconfig->src_maxburst :	\
-			DW_DMA_MSIZE_16;			\
-	u8 _dmsize = _is_slave ? _sconfig->dst_maxburst :	\
-			DW_DMA_MSIZE_16;			\
-	u8 _dms = (_dwc->direction == DMA_MEM_TO_DEV) ?		\
-			_dwc->dws.p_master : _dwc->dws.m_master;	\
-	u8 _sms = (_dwc->direction == DMA_DEV_TO_MEM) ?		\
-			_dwc->dws.p_master : _dwc->dws.m_master;	\
-								\
-	(DWC_CTLL_DST_MSIZE(_dmsize)				\
-	 | DWC_CTLL_SRC_MSIZE(_smsize)				\
-	 | DWC_CTLL_LLP_D_EN					\
-	 | DWC_CTLL_LLP_S_EN					\
-	 | DWC_CTLL_DMS(_dms)					\
-	 | DWC_CTLL_SMS(_sms));					\
-})

 /* The set of bus widths supported by the DMA controller */
 #define DW_DMA_BUSWIDTHS	\
···
	dwc->descs_allocated--;
 }

-static void dwc_initialize_chan_idma32(struct dw_dma_chan *dwc)
-{
-	u32 cfghi = 0;
-	u32 cfglo = 0;
-
-	/* Set default burst alignment */
-	cfglo |= IDMA32C_CFGL_DST_BURST_ALIGN | IDMA32C_CFGL_SRC_BURST_ALIGN;
-
-	/* Low 4 bits of the request lines */
-	cfghi |= IDMA32C_CFGH_DST_PER(dwc->dws.dst_id & 0xf);
-	cfghi |= IDMA32C_CFGH_SRC_PER(dwc->dws.src_id & 0xf);
-
-	/* Request line extension (2 bits) */
-	cfghi |= IDMA32C_CFGH_DST_PER_EXT(dwc->dws.dst_id >> 4 & 0x3);
-	cfghi |= IDMA32C_CFGH_SRC_PER_EXT(dwc->dws.src_id >> 4 & 0x3);
-
-	channel_writel(dwc, CFG_LO, cfglo);
-	channel_writel(dwc, CFG_HI, cfghi);
-}
-
-static void dwc_initialize_chan_dw(struct dw_dma_chan *dwc)
-{
-	struct dw_dma *dw = to_dw_dma(dwc->chan.device);
-	u32 cfghi = DWC_CFGH_FIFO_MODE;
-	u32 cfglo = DWC_CFGL_CH_PRIOR(dwc->priority);
-	bool hs_polarity = dwc->dws.hs_polarity;
-
-	cfghi |= DWC_CFGH_DST_PER(dwc->dws.dst_id);
-	cfghi |= DWC_CFGH_SRC_PER(dwc->dws.src_id);
-	cfghi |= DWC_CFGH_PROTCTL(dw->pdata->protctl);
-
-	/* Set polarity of handshake interface */
-	cfglo |= hs_polarity ? DWC_CFGL_HS_DST_POL | DWC_CFGL_HS_SRC_POL : 0;
-
-	channel_writel(dwc, CFG_LO, cfglo);
-	channel_writel(dwc, CFG_HI, cfghi);
-}
-
 static void dwc_initialize(struct dw_dma_chan *dwc)
 {
	struct dw_dma *dw = to_dw_dma(dwc->chan.device);
···
	if (test_bit(DW_DMA_IS_INITIALIZED, &dwc->flags))
		return;

-	if (dw->pdata->is_idma32)
-		dwc_initialize_chan_idma32(dwc);
-	else
-		dwc_initialize_chan_dw(dwc);
+	dw->initialize_chan(dwc);

	/* Enable interrupts */
	channel_set_bit(dw, MASK.XFER, dwc->mask);
···
	channel_clear_bit(dw, CH_EN, dwc->mask);
	while (dma_readl(dw, CH_EN) & dwc->mask)
		cpu_relax();
-}
-
-static u32 bytes2block(struct dw_dma_chan *dwc, size_t bytes,
-		       unsigned int width, size_t *len)
-{
-	struct dw_dma *dw = to_dw_dma(dwc->chan.device);
-	u32 block;
-
-	/* Always in bytes for iDMA 32-bit */
-	if (dw->pdata->is_idma32)
-		width = 0;
-
-	if ((bytes >> width) > dwc->block_size) {
-		block = dwc->block_size;
-		*len = block << width;
-	} else {
-		block = bytes >> width;
-		*len = bytes;
-	}
-
-	return block;
-}
-
-static size_t block2bytes(struct dw_dma_chan *dwc, u32 block, u32 width)
-{
-	struct dw_dma *dw = to_dw_dma(dwc->chan.device);
-
-	if (dw->pdata->is_idma32)
-		return IDMA32C_CTLH_BLOCK_TS(block);
-
-	return DWC_CTLH_BLOCK_TS(block) << width;
 }

 /*----------------------------------------------------------------------*/
···
 /* Returns how many bytes were already received from source */
 static inline u32 dwc_get_sent(struct dw_dma_chan *dwc)
 {
+	struct dw_dma *dw = to_dw_dma(dwc->chan.device);
	u32 ctlhi = channel_readl(dwc, CTL_HI);
	u32 ctllo = channel_readl(dwc, CTL_LO);

-	return block2bytes(dwc, ctlhi, ctllo >> 4 & 7);
+
return dw->block2bytes(dwc, ctlhi, ctllo >> 4 & 7); 302 399 } 303 400 304 401 static void dwc_scan_descriptors(struct dw_dma *dw, struct dw_dma_chan *dwc) ··· 556 651 unsigned int src_width; 557 652 unsigned int dst_width; 558 653 unsigned int data_width = dw->pdata->data_width[m_master]; 559 - u32 ctllo; 654 + u32 ctllo, ctlhi; 560 655 u8 lms = DWC_LLP_LMS(m_master); 561 656 562 657 dev_vdbg(chan2dev(chan), ··· 572 667 573 668 src_width = dst_width = __ffs(data_width | src | dest | len); 574 669 575 - ctllo = DWC_DEFAULT_CTLLO(chan) 670 + ctllo = dw->prepare_ctllo(dwc) 576 671 | DWC_CTLL_DST_WIDTH(dst_width) 577 672 | DWC_CTLL_SRC_WIDTH(src_width) 578 673 | DWC_CTLL_DST_INC ··· 585 680 if (!desc) 586 681 goto err_desc_get; 587 682 683 + ctlhi = dw->bytes2block(dwc, len - offset, src_width, &xfer_count); 684 + 588 685 lli_write(desc, sar, src + offset); 589 686 lli_write(desc, dar, dest + offset); 590 687 lli_write(desc, ctllo, ctllo); 591 - lli_write(desc, ctlhi, bytes2block(dwc, len - offset, src_width, &xfer_count)); 688 + lli_write(desc, ctlhi, ctlhi); 592 689 desc->len = xfer_count; 593 690 594 691 if (!first) { ··· 628 721 struct dma_slave_config *sconfig = &dwc->dma_sconfig; 629 722 struct dw_desc *prev; 630 723 struct dw_desc *first; 631 - u32 ctllo; 724 + u32 ctllo, ctlhi; 632 725 u8 m_master = dwc->dws.m_master; 633 726 u8 lms = DWC_LLP_LMS(m_master); 634 727 dma_addr_t reg; ··· 652 745 case DMA_MEM_TO_DEV: 653 746 reg_width = __ffs(sconfig->dst_addr_width); 654 747 reg = sconfig->dst_addr; 655 - ctllo = (DWC_DEFAULT_CTLLO(chan) 748 + ctllo = dw->prepare_ctllo(dwc) 656 749 | DWC_CTLL_DST_WIDTH(reg_width) 657 750 | DWC_CTLL_DST_FIX 658 - | DWC_CTLL_SRC_INC); 751 + | DWC_CTLL_SRC_INC; 659 752 660 753 ctllo |= sconfig->device_fc ? 
DWC_CTLL_FC(DW_DMA_FC_P_M2P) : 661 754 DWC_CTLL_FC(DW_DMA_FC_D_M2P); ··· 675 768 if (!desc) 676 769 goto err_desc_get; 677 770 771 + ctlhi = dw->bytes2block(dwc, len, mem_width, &dlen); 772 + 678 773 lli_write(desc, sar, mem); 679 774 lli_write(desc, dar, reg); 680 - lli_write(desc, ctlhi, bytes2block(dwc, len, mem_width, &dlen)); 775 + lli_write(desc, ctlhi, ctlhi); 681 776 lli_write(desc, ctllo, ctllo | DWC_CTLL_SRC_WIDTH(mem_width)); 682 777 desc->len = dlen; 683 778 ··· 702 793 case DMA_DEV_TO_MEM: 703 794 reg_width = __ffs(sconfig->src_addr_width); 704 795 reg = sconfig->src_addr; 705 - ctllo = (DWC_DEFAULT_CTLLO(chan) 796 + ctllo = dw->prepare_ctllo(dwc) 706 797 | DWC_CTLL_SRC_WIDTH(reg_width) 707 798 | DWC_CTLL_DST_INC 708 - | DWC_CTLL_SRC_FIX); 799 + | DWC_CTLL_SRC_FIX; 709 800 710 801 ctllo |= sconfig->device_fc ? DWC_CTLL_FC(DW_DMA_FC_P_P2M) : 711 802 DWC_CTLL_FC(DW_DMA_FC_D_P2M); ··· 723 814 if (!desc) 724 815 goto err_desc_get; 725 816 817 + ctlhi = dw->bytes2block(dwc, len, reg_width, &dlen); 818 + 726 819 lli_write(desc, sar, reg); 727 820 lli_write(desc, dar, mem); 728 - lli_write(desc, ctlhi, bytes2block(dwc, len, reg_width, &dlen)); 821 + lli_write(desc, ctlhi, ctlhi); 729 822 mem_width = __ffs(data_width | mem | dlen); 730 823 lli_write(desc, ctllo, ctllo | DWC_CTLL_DST_WIDTH(mem_width)); 731 824 desc->len = dlen; ··· 787 876 static int dwc_config(struct dma_chan *chan, struct dma_slave_config *sconfig) 788 877 { 789 878 struct dw_dma_chan *dwc = to_dw_dma_chan(chan); 790 - struct dma_slave_config *sc = &dwc->dma_sconfig; 791 879 struct dw_dma *dw = to_dw_dma(chan->device); 792 - /* 793 - * Fix sconfig's burst size according to dw_dmac. We need to convert 794 - * them as: 795 - * 1 -> 0, 4 -> 1, 8 -> 2, 16 -> 3. 796 - * 797 - * NOTE: burst size 2 is not supported by DesignWare controller. 798 - * iDMA 32-bit supports it. 799 - */ 800 - u32 s = dw->pdata->is_idma32 ? 
1 : 2; 801 880 802 881 memcpy(&dwc->dma_sconfig, sconfig, sizeof(*sconfig)); 803 882 804 - sc->src_maxburst = sc->src_maxburst > 1 ? fls(sc->src_maxburst) - s : 0; 805 - sc->dst_maxburst = sc->dst_maxburst > 1 ? fls(sc->dst_maxburst) - s : 0; 883 + dw->encode_maxburst(dwc, &dwc->dma_sconfig.src_maxburst); 884 + dw->encode_maxburst(dwc, &dwc->dma_sconfig.dst_maxburst); 806 885 807 886 return 0; 808 887 } ··· 801 900 { 802 901 struct dw_dma *dw = to_dw_dma(dwc->chan.device); 803 902 unsigned int count = 20; /* timeout iterations */ 804 - u32 cfglo; 805 903 806 - cfglo = channel_readl(dwc, CFG_LO); 807 - if (dw->pdata->is_idma32) { 808 - if (drain) 809 - cfglo |= IDMA32C_CFGL_CH_DRAIN; 810 - else 811 - cfglo &= ~IDMA32C_CFGL_CH_DRAIN; 812 - } 813 - channel_writel(dwc, CFG_LO, cfglo | DWC_CFGL_CH_SUSP); 904 + dw->suspend_chan(dwc, drain); 905 + 814 906 while (!(channel_readl(dwc, CFG_LO) & DWC_CFGL_FIFO_EMPTY) && count--) 815 907 udelay(2); 816 908 ··· 822 928 return 0; 823 929 } 824 930 825 - static inline void dwc_chan_resume(struct dw_dma_chan *dwc) 931 + static inline void dwc_chan_resume(struct dw_dma_chan *dwc, bool drain) 826 932 { 827 - u32 cfglo = channel_readl(dwc, CFG_LO); 933 + struct dw_dma *dw = to_dw_dma(dwc->chan.device); 828 934 829 - channel_writel(dwc, CFG_LO, cfglo & ~DWC_CFGL_CH_SUSP); 935 + dw->resume_chan(dwc, drain); 830 936 831 937 clear_bit(DW_DMA_IS_PAUSED, &dwc->flags); 832 938 } ··· 839 945 spin_lock_irqsave(&dwc->lock, flags); 840 946 841 947 if (test_bit(DW_DMA_IS_PAUSED, &dwc->flags)) 842 - dwc_chan_resume(dwc); 948 + dwc_chan_resume(dwc, false); 843 949 844 950 spin_unlock_irqrestore(&dwc->lock, flags); 845 951 ··· 862 968 863 969 dwc_chan_disable(dw, dwc); 864 970 865 - dwc_chan_resume(dwc); 971 + dwc_chan_resume(dwc, true); 866 972 867 973 /* active_list entries will end up before queued entries */ 868 974 list_splice_init(&dwc->queue, &list); ··· 952 1058 953 1059 
/*----------------------------------------------------------------------*/ 954 1060 955 - /* 956 - * Program FIFO size of channels. 957 - * 958 - * By default full FIFO (512 bytes) is assigned to channel 0. Here we 959 - * slice FIFO on equal parts between channels. 960 - */ 961 - static void idma32_fifo_partition(struct dw_dma *dw) 962 - { 963 - u64 value = IDMA32C_FP_PSIZE_CH0(64) | IDMA32C_FP_PSIZE_CH1(64) | 964 - IDMA32C_FP_UPDATE; 965 - u64 fifo_partition = 0; 966 - 967 - if (!dw->pdata->is_idma32) 968 - return; 969 - 970 - /* Fill FIFO_PARTITION low bits (Channels 0..1, 4..5) */ 971 - fifo_partition |= value << 0; 972 - 973 - /* Fill FIFO_PARTITION high bits (Channels 2..3, 6..7) */ 974 - fifo_partition |= value << 32; 975 - 976 - /* Program FIFO Partition registers - 64 bytes per channel */ 977 - idma32_writeq(dw, FIFO_PARTITION1, fifo_partition); 978 - idma32_writeq(dw, FIFO_PARTITION0, fifo_partition); 979 - } 980 - 981 - static void dw_dma_off(struct dw_dma *dw) 1061 + void do_dw_dma_off(struct dw_dma *dw) 982 1062 { 983 1063 unsigned int i; 984 1064 ··· 971 1103 clear_bit(DW_DMA_IS_INITIALIZED, &dw->chan[i].flags); 972 1104 } 973 1105 974 - static void dw_dma_on(struct dw_dma *dw) 1106 + void do_dw_dma_on(struct dw_dma *dw) 975 1107 { 976 1108 dma_writel(dw, CFG, DW_CFG_DMA_EN); 977 1109 } ··· 1007 1139 1008 1140 /* Enable controller here if needed */ 1009 1141 if (!dw->in_use) 1010 - dw_dma_on(dw); 1142 + do_dw_dma_on(dw); 1011 1143 dw->in_use |= dwc->mask; 1012 1144 1013 1145 return 0; ··· 1018 1150 struct dw_dma_chan *dwc = to_dw_dma_chan(chan); 1019 1151 struct dw_dma *dw = to_dw_dma(chan->device); 1020 1152 unsigned long flags; 1021 - LIST_HEAD(list); 1022 1153 1023 1154 dev_dbg(chan2dev(chan), "%s: descs allocated=%u\n", __func__, 1024 1155 dwc->descs_allocated); ··· 1044 1177 /* Disable controller in case it was a last user */ 1045 1178 dw->in_use &= ~dwc->mask; 1046 1179 if (!dw->in_use) 1047 - dw_dma_off(dw); 1180 + do_dw_dma_off(dw); 1048 1181 
1049 1182 dev_vdbg(chan2dev(chan), "%s: done\n", __func__); 1050 1183 } 1051 1184 1052 - int dw_dma_probe(struct dw_dma_chip *chip) 1185 + int do_dma_probe(struct dw_dma_chip *chip) 1053 1186 { 1187 + struct dw_dma *dw = chip->dw; 1054 1188 struct dw_dma_platform_data *pdata; 1055 - struct dw_dma *dw; 1056 1189 bool autocfg = false; 1057 1190 unsigned int dw_params; 1058 1191 unsigned int i; 1059 1192 int err; 1060 - 1061 - dw = devm_kzalloc(chip->dev, sizeof(*dw), GFP_KERNEL); 1062 - if (!dw) 1063 - return -ENOMEM; 1064 1193 1065 1194 dw->pdata = devm_kzalloc(chip->dev, sizeof(*dw->pdata), GFP_KERNEL); 1066 1195 if (!dw->pdata) 1067 1196 return -ENOMEM; 1068 1197 1069 1198 dw->regs = chip->regs; 1070 - chip->dw = dw; 1071 1199 1072 1200 pm_runtime_get_sync(chip->dev); 1073 1201 ··· 1089 1227 pdata->block_size = dma_readl(dw, MAX_BLK_SIZE); 1090 1228 1091 1229 /* Fill platform data with the default values */ 1092 - pdata->is_private = true; 1093 - pdata->is_memcpy = true; 1094 1230 pdata->chan_allocation_order = CHAN_ALLOCATION_ASCENDING; 1095 1231 pdata->chan_priority = CHAN_PRIORITY_ASCENDING; 1096 1232 } else if (chip->pdata->nr_channels > DW_DMA_MAX_NR_CHANNELS) { ··· 1112 1252 dw->all_chan_mask = (1 << pdata->nr_channels) - 1; 1113 1253 1114 1254 /* Force dma off, just in case */ 1115 - dw_dma_off(dw); 1116 - 1117 - idma32_fifo_partition(dw); 1255 + dw->disable(dw); 1118 1256 1119 1257 /* Device and instance ID for IRQ and DMA pool */ 1120 - if (pdata->is_idma32) 1121 - snprintf(dw->name, sizeof(dw->name), "idma32:dmac%d", chip->id); 1122 - else 1123 - snprintf(dw->name, sizeof(dw->name), "dw:dmac%d", chip->id); 1258 + dw->set_device_name(dw, chip->id); 1124 1259 1125 1260 /* Create a pool of consistent memory blocks for hardware descriptors */ 1126 1261 dw->desc_pool = dmam_pool_create(dw->name, chip->dev, ··· 1195 1340 1196 1341 /* Set capabilities */ 1197 1342 dma_cap_set(DMA_SLAVE, dw->dma.cap_mask); 1198 - if (pdata->is_private) 1199 - 
dma_cap_set(DMA_PRIVATE, dw->dma.cap_mask); 1200 - if (pdata->is_memcpy) 1201 - dma_cap_set(DMA_MEMCPY, dw->dma.cap_mask); 1343 + dma_cap_set(DMA_PRIVATE, dw->dma.cap_mask); 1344 + dma_cap_set(DMA_MEMCPY, dw->dma.cap_mask); 1202 1345 1203 1346 dw->dma.dev = chip->dev; 1204 1347 dw->dma.device_alloc_chan_resources = dwc_alloc_chan_resources; ··· 1237 1384 pm_runtime_put_sync_suspend(chip->dev); 1238 1385 return err; 1239 1386 } 1240 - EXPORT_SYMBOL_GPL(dw_dma_probe); 1241 1387 1242 - int dw_dma_remove(struct dw_dma_chip *chip) 1388 + int do_dma_remove(struct dw_dma_chip *chip) 1243 1389 { 1244 1390 struct dw_dma *dw = chip->dw; 1245 1391 struct dw_dma_chan *dwc, *_dwc; 1246 1392 1247 1393 pm_runtime_get_sync(chip->dev); 1248 1394 1249 - dw_dma_off(dw); 1395 + do_dw_dma_off(dw); 1250 1396 dma_async_device_unregister(&dw->dma); 1251 1397 1252 1398 free_irq(chip->irq, dw); ··· 1260 1408 pm_runtime_put_sync_suspend(chip->dev); 1261 1409 return 0; 1262 1410 } 1263 - EXPORT_SYMBOL_GPL(dw_dma_remove); 1264 1411 1265 - int dw_dma_disable(struct dw_dma_chip *chip) 1412 + int do_dw_dma_disable(struct dw_dma_chip *chip) 1266 1413 { 1267 1414 struct dw_dma *dw = chip->dw; 1268 1415 1269 - dw_dma_off(dw); 1416 + dw->disable(dw); 1270 1417 return 0; 1271 1418 } 1272 - EXPORT_SYMBOL_GPL(dw_dma_disable); 1419 + EXPORT_SYMBOL_GPL(do_dw_dma_disable); 1273 1420 1274 - int dw_dma_enable(struct dw_dma_chip *chip) 1421 + int do_dw_dma_enable(struct dw_dma_chip *chip) 1275 1422 { 1276 1423 struct dw_dma *dw = chip->dw; 1277 1424 1278 - idma32_fifo_partition(dw); 1279 - 1280 - dw_dma_on(dw); 1425 + dw->enable(dw); 1281 1426 return 0; 1282 1427 } 1283 - EXPORT_SYMBOL_GPL(dw_dma_enable); 1428 + EXPORT_SYMBOL_GPL(do_dw_dma_enable); 1284 1429 1285 1430 MODULE_LICENSE("GPL v2"); 1286 1431 MODULE_DESCRIPTION("Synopsys DesignWare DMA Controller core driver");
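The thrust of the core.c hunks above is mechanical: every `if (dw->pdata->is_idma32)` branch (channel init, suspend/resume, block accounting, device naming, controller on/off) becomes an indirect call through hooks hung off `struct dw_dma` that each probe path fills in once. A compilable miniature of that dispatch pattern, using the `block2bytes` hook as the worked case (names abbreviated and hypothetical, not the kernel API):

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned int u32;

struct dw_dma_mini {
	/* Per-controller hook replacing the old is_idma32 branch. */
	size_t (*block2bytes)(u32 block, u32 width);
};

/* DesignWare counts blocks in units of the transfer width. */
static size_t dw_block2bytes(u32 block, u32 width)
{
	return (size_t)block << width;
}

/* iDMA 32-bit always counts in bytes, so the width is ignored. */
static size_t idma32_block2bytes(u32 block, u32 width)
{
	(void)width;
	return block;
}

/* Core code no longer branches on the controller type. */
static size_t get_sent(const struct dw_dma_mini *dw, u32 block, u32 width)
{
	return dw->block2bytes(block, width);
}
```

The same shape carries the other nine hooks in the real diff; probe wires them up, core calls them blindly.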
+138
drivers/dma/dw/dw.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + // Copyright (C) 2007-2008 Atmel Corporation 3 + // Copyright (C) 2010-2011 ST Microelectronics 4 + // Copyright (C) 2013,2018 Intel Corporation 5 + 6 + #include <linux/bitops.h> 7 + #include <linux/dmaengine.h> 8 + #include <linux/errno.h> 9 + #include <linux/slab.h> 10 + #include <linux/types.h> 11 + 12 + #include "internal.h" 13 + 14 + static void dw_dma_initialize_chan(struct dw_dma_chan *dwc) 15 + { 16 + struct dw_dma *dw = to_dw_dma(dwc->chan.device); 17 + u32 cfghi = DWC_CFGH_FIFO_MODE; 18 + u32 cfglo = DWC_CFGL_CH_PRIOR(dwc->priority); 19 + bool hs_polarity = dwc->dws.hs_polarity; 20 + 21 + cfghi |= DWC_CFGH_DST_PER(dwc->dws.dst_id); 22 + cfghi |= DWC_CFGH_SRC_PER(dwc->dws.src_id); 23 + cfghi |= DWC_CFGH_PROTCTL(dw->pdata->protctl); 24 + 25 + /* Set polarity of handshake interface */ 26 + cfglo |= hs_polarity ? DWC_CFGL_HS_DST_POL | DWC_CFGL_HS_SRC_POL : 0; 27 + 28 + channel_writel(dwc, CFG_LO, cfglo); 29 + channel_writel(dwc, CFG_HI, cfghi); 30 + } 31 + 32 + static void dw_dma_suspend_chan(struct dw_dma_chan *dwc, bool drain) 33 + { 34 + u32 cfglo = channel_readl(dwc, CFG_LO); 35 + 36 + channel_writel(dwc, CFG_LO, cfglo | DWC_CFGL_CH_SUSP); 37 + } 38 + 39 + static void dw_dma_resume_chan(struct dw_dma_chan *dwc, bool drain) 40 + { 41 + u32 cfglo = channel_readl(dwc, CFG_LO); 42 + 43 + channel_writel(dwc, CFG_LO, cfglo & ~DWC_CFGL_CH_SUSP); 44 + } 45 + 46 + static u32 dw_dma_bytes2block(struct dw_dma_chan *dwc, 47 + size_t bytes, unsigned int width, size_t *len) 48 + { 49 + u32 block; 50 + 51 + if ((bytes >> width) > dwc->block_size) { 52 + block = dwc->block_size; 53 + *len = dwc->block_size << width; 54 + } else { 55 + block = bytes >> width; 56 + *len = bytes; 57 + } 58 + 59 + return block; 60 + } 61 + 62 + static size_t dw_dma_block2bytes(struct dw_dma_chan *dwc, u32 block, u32 width) 63 + { 64 + return DWC_CTLH_BLOCK_TS(block) << width; 65 + } 66 + 67 + static u32 dw_dma_prepare_ctllo(struct dw_dma_chan 
*dwc) 68 + { 69 + struct dma_slave_config *sconfig = &dwc->dma_sconfig; 70 + bool is_slave = is_slave_direction(dwc->direction); 71 + u8 smsize = is_slave ? sconfig->src_maxburst : DW_DMA_MSIZE_16; 72 + u8 dmsize = is_slave ? sconfig->dst_maxburst : DW_DMA_MSIZE_16; 73 + u8 p_master = dwc->dws.p_master; 74 + u8 m_master = dwc->dws.m_master; 75 + u8 dms = (dwc->direction == DMA_MEM_TO_DEV) ? p_master : m_master; 76 + u8 sms = (dwc->direction == DMA_DEV_TO_MEM) ? p_master : m_master; 77 + 78 + return DWC_CTLL_LLP_D_EN | DWC_CTLL_LLP_S_EN | 79 + DWC_CTLL_DST_MSIZE(dmsize) | DWC_CTLL_SRC_MSIZE(smsize) | 80 + DWC_CTLL_DMS(dms) | DWC_CTLL_SMS(sms); 81 + } 82 + 83 + static void dw_dma_encode_maxburst(struct dw_dma_chan *dwc, u32 *maxburst) 84 + { 85 + /* 86 + * Fix burst size according to dw_dmac. We need to convert them as: 87 + * 1 -> 0, 4 -> 1, 8 -> 2, 16 -> 3. 88 + */ 89 + *maxburst = *maxburst > 1 ? fls(*maxburst) - 2 : 0; 90 + } 91 + 92 + static void dw_dma_set_device_name(struct dw_dma *dw, int id) 93 + { 94 + snprintf(dw->name, sizeof(dw->name), "dw:dmac%d", id); 95 + } 96 + 97 + static void dw_dma_disable(struct dw_dma *dw) 98 + { 99 + do_dw_dma_off(dw); 100 + } 101 + 102 + static void dw_dma_enable(struct dw_dma *dw) 103 + { 104 + do_dw_dma_on(dw); 105 + } 106 + 107 + int dw_dma_probe(struct dw_dma_chip *chip) 108 + { 109 + struct dw_dma *dw; 110 + 111 + dw = devm_kzalloc(chip->dev, sizeof(*dw), GFP_KERNEL); 112 + if (!dw) 113 + return -ENOMEM; 114 + 115 + /* Channel operations */ 116 + dw->initialize_chan = dw_dma_initialize_chan; 117 + dw->suspend_chan = dw_dma_suspend_chan; 118 + dw->resume_chan = dw_dma_resume_chan; 119 + dw->prepare_ctllo = dw_dma_prepare_ctllo; 120 + dw->encode_maxburst = dw_dma_encode_maxburst; 121 + dw->bytes2block = dw_dma_bytes2block; 122 + dw->block2bytes = dw_dma_block2bytes; 123 + 124 + /* Device operations */ 125 + dw->set_device_name = dw_dma_set_device_name; 126 + dw->disable = dw_dma_disable; 127 + dw->enable = dw_dma_enable; 
128 + 129 + chip->dw = dw; 130 + return do_dma_probe(chip); 131 + } 132 + EXPORT_SYMBOL_GPL(dw_dma_probe); 133 + 134 + int dw_dma_remove(struct dw_dma_chip *chip) 135 + { 136 + return do_dma_remove(chip); 137 + } 138 + EXPORT_SYMBOL_GPL(dw_dma_remove);
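`dw_dma_encode_maxburst` in the new dw.c compresses the slave-config burst length into the controller's msize encoding. A standalone sketch of that mapping, with a local find-last-set helper standing in for the kernel's `fls()`:

```c
#include <assert.h>

/* Stand-in for the kernel's fls(): index of the highest set bit, 1-based. */
static int fls_local(unsigned int v)
{
	int n = 0;

	while (v) {
		n++;
		v >>= 1;
	}
	return n;
}

/* DesignWare msize encoding: 1 -> 0, 4 -> 1, 8 -> 2, 16 -> 3.
 * A burst of 2 is not supported by this controller, hence fls() - 2
 * (the iDMA 32-bit variant does support it and uses fls() - 1). */
static unsigned int dw_encode_maxburst(unsigned int maxburst)
{
	return maxburst > 1 ? fls_local(maxburst) - 2 : 0;
}
```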
+160
drivers/dma/dw/idma32.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + // Copyright (C) 2013,2018 Intel Corporation 3 + 4 + #include <linux/bitops.h> 5 + #include <linux/dmaengine.h> 6 + #include <linux/errno.h> 7 + #include <linux/slab.h> 8 + #include <linux/types.h> 9 + 10 + #include "internal.h" 11 + 12 + static void idma32_initialize_chan(struct dw_dma_chan *dwc) 13 + { 14 + u32 cfghi = 0; 15 + u32 cfglo = 0; 16 + 17 + /* Set default burst alignment */ 18 + cfglo |= IDMA32C_CFGL_DST_BURST_ALIGN | IDMA32C_CFGL_SRC_BURST_ALIGN; 19 + 20 + /* Low 4 bits of the request lines */ 21 + cfghi |= IDMA32C_CFGH_DST_PER(dwc->dws.dst_id & 0xf); 22 + cfghi |= IDMA32C_CFGH_SRC_PER(dwc->dws.src_id & 0xf); 23 + 24 + /* Request line extension (2 bits) */ 25 + cfghi |= IDMA32C_CFGH_DST_PER_EXT(dwc->dws.dst_id >> 4 & 0x3); 26 + cfghi |= IDMA32C_CFGH_SRC_PER_EXT(dwc->dws.src_id >> 4 & 0x3); 27 + 28 + channel_writel(dwc, CFG_LO, cfglo); 29 + channel_writel(dwc, CFG_HI, cfghi); 30 + } 31 + 32 + static void idma32_suspend_chan(struct dw_dma_chan *dwc, bool drain) 33 + { 34 + u32 cfglo = channel_readl(dwc, CFG_LO); 35 + 36 + if (drain) 37 + cfglo |= IDMA32C_CFGL_CH_DRAIN; 38 + 39 + channel_writel(dwc, CFG_LO, cfglo | DWC_CFGL_CH_SUSP); 40 + } 41 + 42 + static void idma32_resume_chan(struct dw_dma_chan *dwc, bool drain) 43 + { 44 + u32 cfglo = channel_readl(dwc, CFG_LO); 45 + 46 + if (drain) 47 + cfglo &= ~IDMA32C_CFGL_CH_DRAIN; 48 + 49 + channel_writel(dwc, CFG_LO, cfglo & ~DWC_CFGL_CH_SUSP); 50 + } 51 + 52 + static u32 idma32_bytes2block(struct dw_dma_chan *dwc, 53 + size_t bytes, unsigned int width, size_t *len) 54 + { 55 + u32 block; 56 + 57 + if (bytes > dwc->block_size) { 58 + block = dwc->block_size; 59 + *len = dwc->block_size; 60 + } else { 61 + block = bytes; 62 + *len = bytes; 63 + } 64 + 65 + return block; 66 + } 67 + 68 + static size_t idma32_block2bytes(struct dw_dma_chan *dwc, u32 block, u32 width) 69 + { 70 + return IDMA32C_CTLH_BLOCK_TS(block); 71 + } 72 + 73 + static u32 
idma32_prepare_ctllo(struct dw_dma_chan *dwc) 74 + { 75 + struct dma_slave_config *sconfig = &dwc->dma_sconfig; 76 + bool is_slave = is_slave_direction(dwc->direction); 77 + u8 smsize = is_slave ? sconfig->src_maxburst : IDMA32_MSIZE_8; 78 + u8 dmsize = is_slave ? sconfig->dst_maxburst : IDMA32_MSIZE_8; 79 + 80 + return DWC_CTLL_LLP_D_EN | DWC_CTLL_LLP_S_EN | 81 + DWC_CTLL_DST_MSIZE(dmsize) | DWC_CTLL_SRC_MSIZE(smsize); 82 + } 83 + 84 + static void idma32_encode_maxburst(struct dw_dma_chan *dwc, u32 *maxburst) 85 + { 86 + *maxburst = *maxburst > 1 ? fls(*maxburst) - 1 : 0; 87 + } 88 + 89 + static void idma32_set_device_name(struct dw_dma *dw, int id) 90 + { 91 + snprintf(dw->name, sizeof(dw->name), "idma32:dmac%d", id); 92 + } 93 + 94 + /* 95 + * Program FIFO size of channels. 96 + * 97 + * By default full FIFO (512 bytes) is assigned to channel 0. Here we 98 + * slice FIFO on equal parts between channels. 99 + */ 100 + static void idma32_fifo_partition(struct dw_dma *dw) 101 + { 102 + u64 value = IDMA32C_FP_PSIZE_CH0(64) | IDMA32C_FP_PSIZE_CH1(64) | 103 + IDMA32C_FP_UPDATE; 104 + u64 fifo_partition = 0; 105 + 106 + /* Fill FIFO_PARTITION low bits (Channels 0..1, 4..5) */ 107 + fifo_partition |= value << 0; 108 + 109 + /* Fill FIFO_PARTITION high bits (Channels 2..3, 6..7) */ 110 + fifo_partition |= value << 32; 111 + 112 + /* Program FIFO Partition registers - 64 bytes per channel */ 113 + idma32_writeq(dw, FIFO_PARTITION1, fifo_partition); 114 + idma32_writeq(dw, FIFO_PARTITION0, fifo_partition); 115 + } 116 + 117 + static void idma32_disable(struct dw_dma *dw) 118 + { 119 + do_dw_dma_off(dw); 120 + idma32_fifo_partition(dw); 121 + } 122 + 123 + static void idma32_enable(struct dw_dma *dw) 124 + { 125 + idma32_fifo_partition(dw); 126 + do_dw_dma_on(dw); 127 + } 128 + 129 + int idma32_dma_probe(struct dw_dma_chip *chip) 130 + { 131 + struct dw_dma *dw; 132 + 133 + dw = devm_kzalloc(chip->dev, sizeof(*dw), GFP_KERNEL); 134 + if (!dw) 135 + return -ENOMEM; 136 + 137 
+ /* Channel operations */ 138 + dw->initialize_chan = idma32_initialize_chan; 139 + dw->suspend_chan = idma32_suspend_chan; 140 + dw->resume_chan = idma32_resume_chan; 141 + dw->prepare_ctllo = idma32_prepare_ctllo; 142 + dw->encode_maxburst = idma32_encode_maxburst; 143 + dw->bytes2block = idma32_bytes2block; 144 + dw->block2bytes = idma32_block2bytes; 145 + 146 + /* Device operations */ 147 + dw->set_device_name = idma32_set_device_name; 148 + dw->disable = idma32_disable; 149 + dw->enable = idma32_enable; 150 + 151 + chip->dw = dw; 152 + return do_dma_probe(chip); 153 + } 154 + EXPORT_SYMBOL_GPL(idma32_dma_probe); 155 + 156 + int idma32_dma_remove(struct dw_dma_chip *chip) 157 + { 158 + return do_dma_remove(chip); 159 + } 160 + EXPORT_SYMBOL_GPL(idma32_dma_remove);
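`idma32_bytes2block` above differs from the DW variant in that the iDMA 32-bit CTL_HI block register counts raw bytes, so the transfer-width shift disappears and only the per-channel block-size cap remains. A standalone sketch of that accounting (parameters flattened for illustration):

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned int u32;

/* iDMA 32-bit block accounting: the block register holds a byte count,
 * capped at the channel's maximum block size; *len reports how many
 * bytes this descriptor actually covers. */
static u32 idma32_bytes2block(size_t block_size, size_t bytes, size_t *len)
{
	u32 block;

	if (bytes > block_size) {
		block = block_size;
		*len = block_size;
	} else {
		block = bytes;
		*len = bytes;
	}
	return block;
}
```

A transfer larger than the cap is split: the caller loops, advancing by `*len` until the requested length is consumed.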
+9 -6
drivers/dma/dw/internal.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 /* 2 3 * Driver for the Synopsys DesignWare DMA Controller 3 4 * 4 5 * Copyright (C) 2013 Intel Corporation 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License version 2 as 8 - * published by the Free Software Foundation. 9 6 */ 10 7 11 8 #ifndef _DMA_DW_INTERNAL_H ··· 12 15 13 16 #include "regs.h" 14 17 15 - int dw_dma_disable(struct dw_dma_chip *chip); 16 - int dw_dma_enable(struct dw_dma_chip *chip); 18 + int do_dma_probe(struct dw_dma_chip *chip); 19 + int do_dma_remove(struct dw_dma_chip *chip); 20 + 21 + void do_dw_dma_on(struct dw_dma *dw); 22 + void do_dw_dma_off(struct dw_dma *dw); 23 + 24 + int do_dw_dma_disable(struct dw_dma_chip *chip); 25 + int do_dw_dma_enable(struct dw_dma_chip *chip); 17 26 18 27 extern bool dw_dma_filter(struct dma_chan *chan, void *param); 19 28
+31 -22
drivers/dma/dw/pci.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * PCI driver for the Synopsys DesignWare DMA Controller 3 4 * 4 5 * Copyright (C) 2013 Intel Corporation 5 6 * Author: Andy Shevchenko <andriy.shevchenko@linux.intel.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 7 */ 11 8 12 9 #include <linux/module.h> ··· 12 15 13 16 #include "internal.h" 14 17 15 - static struct dw_dma_platform_data mrfld_pdata = { 18 + struct dw_dma_pci_data { 19 + const struct dw_dma_platform_data *pdata; 20 + int (*probe)(struct dw_dma_chip *chip); 21 + }; 22 + 23 + static const struct dw_dma_pci_data dw_pci_data = { 24 + .probe = dw_dma_probe, 25 + }; 26 + 27 + static const struct dw_dma_platform_data idma32_pdata = { 16 28 .nr_channels = 8, 17 - .is_private = true, 18 - .is_memcpy = true, 19 - .is_idma32 = true, 20 29 .chan_allocation_order = CHAN_ALLOCATION_ASCENDING, 21 30 .chan_priority = CHAN_PRIORITY_ASCENDING, 22 31 .block_size = 131071, 23 32 .nr_masters = 1, 24 33 .data_width = {4}, 34 + .multi_block = {1, 1, 1, 1, 1, 1, 1, 1}, 35 + }; 36 + 37 + static const struct dw_dma_pci_data idma32_pci_data = { 38 + .pdata = &idma32_pdata, 39 + .probe = idma32_dma_probe, 25 40 }; 26 41 27 42 static int dw_pci_probe(struct pci_dev *pdev, const struct pci_device_id *pid) 28 43 { 29 - const struct dw_dma_platform_data *pdata = (void *)pid->driver_data; 44 + const struct dw_dma_pci_data *data = (void *)pid->driver_data; 30 45 struct dw_dma_chip *chip; 31 46 int ret; 32 47 ··· 71 62 chip->id = pdev->devfn; 72 63 chip->regs = pcim_iomap_table(pdev)[0]; 73 64 chip->irq = pdev->irq; 74 - chip->pdata = pdata; 65 + chip->pdata = data->pdata; 75 66 76 - ret = dw_dma_probe(chip); 67 + ret = data->probe(chip); 77 68 if (ret) 78 69 return ret; 79 70 ··· 99 90 struct pci_dev *pci = to_pci_dev(dev); 100 91 struct dw_dma_chip *chip = 
pci_get_drvdata(pci); 101 92 102 - return dw_dma_disable(chip); 93 + return do_dw_dma_disable(chip); 103 94 }; 104 95 105 96 static int dw_pci_resume_early(struct device *dev) ··· 107 98 struct pci_dev *pci = to_pci_dev(dev); 108 99 struct dw_dma_chip *chip = pci_get_drvdata(pci); 109 100 110 - return dw_dma_enable(chip); 101 + return do_dw_dma_enable(chip); 111 102 }; 112 103 113 104 #endif /* CONFIG_PM_SLEEP */ ··· 118 109 119 110 static const struct pci_device_id dw_pci_id_table[] = { 120 111 /* Medfield (GPDMA) */ 121 - { PCI_VDEVICE(INTEL, 0x0827) }, 112 + { PCI_VDEVICE(INTEL, 0x0827), (kernel_ulong_t)&dw_pci_data }, 122 113 123 114 /* BayTrail */ 124 - { PCI_VDEVICE(INTEL, 0x0f06) }, 125 - { PCI_VDEVICE(INTEL, 0x0f40) }, 115 + { PCI_VDEVICE(INTEL, 0x0f06), (kernel_ulong_t)&dw_pci_data }, 116 + { PCI_VDEVICE(INTEL, 0x0f40), (kernel_ulong_t)&dw_pci_data }, 126 117 127 - /* Merrifield iDMA 32-bit (GPDMA) */ 128 - { PCI_VDEVICE(INTEL, 0x11a2), (kernel_ulong_t)&mrfld_pdata }, 118 + /* Merrifield */ 119 + { PCI_VDEVICE(INTEL, 0x11a2), (kernel_ulong_t)&idma32_pci_data }, 129 120 130 121 /* Braswell */ 131 - { PCI_VDEVICE(INTEL, 0x2286) }, 132 - { PCI_VDEVICE(INTEL, 0x22c0) }, 122 + { PCI_VDEVICE(INTEL, 0x2286), (kernel_ulong_t)&dw_pci_data }, 123 + { PCI_VDEVICE(INTEL, 0x22c0), (kernel_ulong_t)&dw_pci_data }, 133 124 134 125 /* Haswell */ 135 - { PCI_VDEVICE(INTEL, 0x9c60) }, 126 + { PCI_VDEVICE(INTEL, 0x9c60), (kernel_ulong_t)&dw_pci_data }, 136 127 137 128 /* Broadwell */ 138 - { PCI_VDEVICE(INTEL, 0x9ce0) }, 129 + { PCI_VDEVICE(INTEL, 0x9ce0), (kernel_ulong_t)&dw_pci_data }, 139 130 140 131 { } 141 132 };
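The pci.c hunks above switch `driver_data` from carrying a bare platform-data pointer to a small per-variant structure that also selects which probe routine to run. A userspace miniature of that stash-and-cast pattern (types and fields are illustrative stand-ins, not the PCI core API):

```c
#include <assert.h>

typedef unsigned long kernel_ulong_t;

/* Stand-in for the match-table entry: driver_data is an opaque slot. */
struct pci_device_id_mini {
	unsigned int vendor, device;
	kernel_ulong_t driver_data;
};

/* Per-variant data selected by the matched device ID. */
struct dma_pci_data_mini {
	int (*probe)(void);	/* variant-specific probe hook */
};

static int dw_probe(void)     { return 1; }
static int idma32_probe(void) { return 2; }

static const struct dma_pci_data_mini dw_data     = { .probe = dw_probe };
static const struct dma_pci_data_mini idma32_data = { .probe = idma32_probe };

static const struct pci_device_id_mini id_table[] = {
	{ 0x8086, 0x0827, (kernel_ulong_t)&dw_data },	/* plain DW */
	{ 0x8086, 0x11a2, (kernel_ulong_t)&idma32_data },	/* iDMA 32-bit */
	{ 0 }
};

/* At probe time the matched entry's driver_data is cast back. */
static int run_probe(const struct pci_device_id_mini *id)
{
	const struct dma_pci_data_mini *data = (const void *)id->driver_data;

	return data->probe();
}
```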
+5 -17
drivers/dma/dw/platform.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Platform driver for the Synopsys DesignWare DMA Controller 3 4 * ··· 7 6 * Copyright (C) 2013 Intel Corporation 8 7 * 9 8 * Some parts of this driver are derived from the original dw_dmac. 10 - * 11 - * This program is free software; you can redistribute it and/or modify 12 - * it under the terms of the GNU General Public License version 2 as 13 - * published by the Free Software Foundation. 14 9 */ 15 10 16 11 #include <linux/module.h> ··· 124 127 125 128 pdata->nr_masters = nr_masters; 126 129 pdata->nr_channels = nr_channels; 127 - 128 - if (of_property_read_bool(np, "is_private")) 129 - pdata->is_private = true; 130 - 131 - /* 132 - * All known devices, which use DT for configuration, support 133 - * memory-to-memory transfers. So enable it by default. 134 - */ 135 - pdata->is_memcpy = true; 136 130 137 131 if (!of_property_read_u32(np, "chan_allocation_order", &tmp)) 138 132 pdata->chan_allocation_order = (unsigned char)tmp; ··· 252 264 struct dw_dma_chip *chip = platform_get_drvdata(pdev); 253 265 254 266 /* 255 - * We have to call dw_dma_disable() to stop any ongoing transfer. On 267 + * We have to call do_dw_dma_disable() to stop any ongoing transfer. On 256 268 * some platforms we can't do that since DMA device is powered off. 257 269 * Moreover we have no possibility to check if the platform is affected 258 270 * or not. That's why we call pm_runtime_get_sync() / pm_runtime_put() ··· 261 273 * used by the driver. 
262 274 */ 263 275 pm_runtime_get_sync(chip->dev); 264 - dw_dma_disable(chip); 276 + do_dw_dma_disable(chip); 265 277 pm_runtime_put_sync_suspend(chip->dev); 266 278 267 279 clk_disable_unprepare(chip->clk); ··· 291 303 { 292 304 struct dw_dma_chip *chip = dev_get_drvdata(dev); 293 305 294 - dw_dma_disable(chip); 306 + do_dw_dma_disable(chip); 295 307 clk_disable_unprepare(chip->clk); 296 308 297 309 return 0; ··· 306 318 if (ret) 307 319 return ret; 308 320 309 - return dw_dma_enable(chip); 321 + return do_dw_dma_enable(chip); 310 322 } 311 323 312 324 #endif /* CONFIG_PM_SLEEP */
+26 -4
drivers/dma/dw/regs.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 /* 2 3 * Driver for the Synopsys DesignWare AHB DMA Controller 3 4 * 4 5 * Copyright (C) 2005-2007 Atmel Corporation 5 6 * Copyright (C) 2010-2011 ST Microelectronics 6 7 * Copyright (C) 2016 Intel Corporation 7 - * 8 - * This program is free software; you can redistribute it and/or modify 9 - * it under the terms of the GNU General Public License version 2 as 10 - * published by the Free Software Foundation. 11 8 */ 12 9 13 10 #include <linux/bitops.h> ··· 219 222 220 223 /* iDMA 32-bit support */ 221 224 225 + /* bursts size */ 226 + enum idma32_msize { 227 + IDMA32_MSIZE_1, 228 + IDMA32_MSIZE_2, 229 + IDMA32_MSIZE_4, 230 + IDMA32_MSIZE_8, 231 + IDMA32_MSIZE_16, 232 + IDMA32_MSIZE_32, 233 + }; 234 + 222 235 /* Bitfields in CTL_HI */ 223 236 #define IDMA32C_CTLH_BLOCK_TS_MASK GENMASK(16, 0) 224 237 #define IDMA32C_CTLH_BLOCK_TS(x) ((x) & IDMA32C_CTLH_BLOCK_TS_MASK) ··· 318 311 struct dw_dma_chan *chan; 319 312 u8 all_chan_mask; 320 313 u8 in_use; 314 + 315 + /* Channel operations */ 316 + void (*initialize_chan)(struct dw_dma_chan *dwc); 317 + void (*suspend_chan)(struct dw_dma_chan *dwc, bool drain); 318 + void (*resume_chan)(struct dw_dma_chan *dwc, bool drain); 319 + u32 (*prepare_ctllo)(struct dw_dma_chan *dwc); 320 + void (*encode_maxburst)(struct dw_dma_chan *dwc, u32 *maxburst); 321 + u32 (*bytes2block)(struct dw_dma_chan *dwc, size_t bytes, 322 + unsigned int width, size_t *len); 323 + size_t (*block2bytes)(struct dw_dma_chan *dwc, u32 block, u32 width); 324 + 325 + /* Device operations */ 326 + void (*set_device_name)(struct dw_dma *dw, int id); 327 + void (*disable)(struct dw_dma *dw); 328 + void (*enable)(struct dw_dma *dw); 321 329 322 330 /* platform data */ 323 331 struct dw_dma_platform_data *pdata;
+63 -7
drivers/dma/fsl-edma-common.c
··· 6 6 #include <linux/dmapool.h> 7 7 #include <linux/module.h> 8 8 #include <linux/slab.h> 9 + #include <linux/dma-mapping.h> 9 10 10 11 #include "fsl-edma-common.h" 11 12 ··· 174 173 } 175 174 EXPORT_SYMBOL_GPL(fsl_edma_resume); 176 175 176 + static void fsl_edma_unprep_slave_dma(struct fsl_edma_chan *fsl_chan) 177 + { 178 + if (fsl_chan->dma_dir != DMA_NONE) 179 + dma_unmap_resource(fsl_chan->vchan.chan.device->dev, 180 + fsl_chan->dma_dev_addr, 181 + fsl_chan->dma_dev_size, 182 + fsl_chan->dma_dir, 0); 183 + fsl_chan->dma_dir = DMA_NONE; 184 + } 185 + 186 + static bool fsl_edma_prep_slave_dma(struct fsl_edma_chan *fsl_chan, 187 + enum dma_transfer_direction dir) 188 + { 189 + struct device *dev = fsl_chan->vchan.chan.device->dev; 190 + enum dma_data_direction dma_dir; 191 + phys_addr_t addr = 0; 192 + u32 size = 0; 193 + 194 + switch (dir) { 195 + case DMA_MEM_TO_DEV: 196 + dma_dir = DMA_FROM_DEVICE; 197 + addr = fsl_chan->cfg.dst_addr; 198 + size = fsl_chan->cfg.dst_maxburst; 199 + break; 200 + case DMA_DEV_TO_MEM: 201 + dma_dir = DMA_TO_DEVICE; 202 + addr = fsl_chan->cfg.src_addr; 203 + size = fsl_chan->cfg.src_maxburst; 204 + break; 205 + default: 206 + dma_dir = DMA_NONE; 207 + break; 208 + } 209 + 210 + /* Already mapped for this config? 
*/ 211 + if (fsl_chan->dma_dir == dma_dir) 212 + return true; 213 + 214 + fsl_edma_unprep_slave_dma(fsl_chan); 215 + 216 + fsl_chan->dma_dev_addr = dma_map_resource(dev, addr, size, dma_dir, 0); 217 + if (dma_mapping_error(dev, fsl_chan->dma_dev_addr)) 218 + return false; 219 + fsl_chan->dma_dev_size = size; 220 + fsl_chan->dma_dir = dma_dir; 221 + 222 + return true; 223 + } 224 + 177 225 int fsl_edma_slave_config(struct dma_chan *chan, 178 226 struct dma_slave_config *cfg) 179 227 { 180 228 struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan); 181 229 182 230 memcpy(&fsl_chan->cfg, cfg, sizeof(*cfg)); 231 + fsl_edma_unprep_slave_dma(fsl_chan); 183 232 184 233 return 0; 185 234 } ··· 390 339 struct fsl_edma_desc *fsl_desc; 391 340 int i; 392 341 393 - fsl_desc = kzalloc(sizeof(*fsl_desc) + 394 - sizeof(struct fsl_edma_sw_tcd) * 395 - sg_len, GFP_NOWAIT); 342 + fsl_desc = kzalloc(struct_size(fsl_desc, tcd, sg_len), GFP_NOWAIT); 396 343 if (!fsl_desc) 397 344 return NULL; 398 345 ··· 427 378 if (!is_slave_direction(direction)) 428 379 return NULL; 429 380 381 + if (!fsl_edma_prep_slave_dma(fsl_chan, direction)) 382 + return NULL; 383 + 430 384 sg_len = buf_len / period_len; 431 385 fsl_desc = fsl_edma_alloc_desc(fsl_chan, sg_len); 432 386 if (!fsl_desc) ··· 461 409 462 410 if (direction == DMA_MEM_TO_DEV) { 463 411 src_addr = dma_buf_next; 464 - dst_addr = fsl_chan->cfg.dst_addr; 412 + dst_addr = fsl_chan->dma_dev_addr; 465 413 soff = fsl_chan->cfg.dst_addr_width; 466 414 doff = 0; 467 415 } else { 468 - src_addr = fsl_chan->cfg.src_addr; 416 + src_addr = fsl_chan->dma_dev_addr; 469 417 dst_addr = dma_buf_next; 470 418 soff = 0; 471 419 doff = fsl_chan->cfg.src_addr_width; ··· 496 444 if (!is_slave_direction(direction)) 497 445 return NULL; 498 446 447 + if (!fsl_edma_prep_slave_dma(fsl_chan, direction)) 448 + return NULL; 449 + 499 450 fsl_desc = fsl_edma_alloc_desc(fsl_chan, sg_len); 500 451 if (!fsl_desc) 501 452 return NULL; ··· 523 468 524 469 if (direction 
== DMA_MEM_TO_DEV) { 525 470 src_addr = sg_dma_address(sg); 526 - dst_addr = fsl_chan->cfg.dst_addr; 471 + dst_addr = fsl_chan->dma_dev_addr; 527 472 soff = fsl_chan->cfg.dst_addr_width; 528 473 doff = 0; 529 474 } else { 530 - src_addr = fsl_chan->cfg.src_addr; 475 + src_addr = fsl_chan->dma_dev_addr; 531 476 dst_addr = sg_dma_address(sg); 532 477 soff = 0; 533 478 doff = fsl_chan->cfg.src_addr_width; ··· 610 555 fsl_edma_chan_mux(fsl_chan, 0, false); 611 556 fsl_chan->edesc = NULL; 612 557 vchan_get_all_descriptors(&fsl_chan->vchan, &head); 558 + fsl_edma_unprep_slave_dma(fsl_chan); 613 559 spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags); 614 560 615 561 vchan_dma_desc_free_list(&fsl_chan->vchan, &head);
+4
drivers/dma/fsl-edma-common.h
··· 6 6 #ifndef _FSL_EDMA_COMMON_H_ 7 7 #define _FSL_EDMA_COMMON_H_ 8 8 9 + #include <linux/dma-direction.h> 9 10 #include "virt-dma.h" 10 11 11 12 #define EDMA_CR_EDBG BIT(1) ··· 121 120 struct dma_slave_config cfg; 122 121 u32 attr; 123 122 struct dma_pool *tcd_pool; 123 + dma_addr_t dma_dev_addr; 124 + u32 dma_dev_size; 125 + enum dma_data_direction dma_dir; 124 126 }; 125 127 126 128 struct fsl_edma_desc {
+1
drivers/dma/fsl-edma.c
··· 254 254 fsl_chan->pm_state = RUNNING; 255 255 fsl_chan->slave_id = 0; 256 256 fsl_chan->idle = true; 257 + fsl_chan->dma_dir = DMA_NONE; 257 258 fsl_chan->vchan.desc_free = fsl_edma_free_desc; 258 259 vchan_init(&fsl_chan->vchan, &fsl_edma->dma_dev); 259 260
+1259
drivers/dma/fsl-qdma.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + // Copyright 2014-2015 Freescale 3 + // Copyright 2018 NXP 4 + 5 + /* 6 + * Driver for NXP Layerscape Queue Direct Memory Access Controller 7 + * 8 + * Author: 9 + * Wen He <wen.he_1@nxp.com> 10 + * Jiaheng Fan <jiaheng.fan@nxp.com> 11 + * 12 + */ 13 + 14 + #include <linux/module.h> 15 + #include <linux/delay.h> 16 + #include <linux/of_irq.h> 17 + #include <linux/of_platform.h> 18 + #include <linux/of_dma.h> 19 + #include <linux/dma-mapping.h> 20 + 21 + #include "virt-dma.h" 22 + #include "fsldma.h" 23 + 24 + /* Register related definition */ 25 + #define FSL_QDMA_DMR 0x0 26 + #define FSL_QDMA_DSR 0x4 27 + #define FSL_QDMA_DEIER 0xe00 28 + #define FSL_QDMA_DEDR 0xe04 29 + #define FSL_QDMA_DECFDW0R 0xe10 30 + #define FSL_QDMA_DECFDW1R 0xe14 31 + #define FSL_QDMA_DECFDW2R 0xe18 32 + #define FSL_QDMA_DECFDW3R 0xe1c 33 + #define FSL_QDMA_DECFQIDR 0xe30 34 + #define FSL_QDMA_DECBR 0xe34 35 + 36 + #define FSL_QDMA_BCQMR(x) (0xc0 + 0x100 * (x)) 37 + #define FSL_QDMA_BCQSR(x) (0xc4 + 0x100 * (x)) 38 + #define FSL_QDMA_BCQEDPA_SADDR(x) (0xc8 + 0x100 * (x)) 39 + #define FSL_QDMA_BCQDPA_SADDR(x) (0xcc + 0x100 * (x)) 40 + #define FSL_QDMA_BCQEEPA_SADDR(x) (0xd0 + 0x100 * (x)) 41 + #define FSL_QDMA_BCQEPA_SADDR(x) (0xd4 + 0x100 * (x)) 42 + #define FSL_QDMA_BCQIER(x) (0xe0 + 0x100 * (x)) 43 + #define FSL_QDMA_BCQIDR(x) (0xe4 + 0x100 * (x)) 44 + 45 + #define FSL_QDMA_SQDPAR 0x80c 46 + #define FSL_QDMA_SQEPAR 0x814 47 + #define FSL_QDMA_BSQMR 0x800 48 + #define FSL_QDMA_BSQSR 0x804 49 + #define FSL_QDMA_BSQICR 0x828 50 + #define FSL_QDMA_CQMR 0xa00 51 + #define FSL_QDMA_CQDSCR1 0xa08 52 + #define FSL_QDMA_CQDSCR2 0xa0c 53 + #define FSL_QDMA_CQIER 0xa10 54 + #define FSL_QDMA_CQEDR 0xa14 55 + #define FSL_QDMA_SQCCMR 0xa20 56 + 57 + /* Registers for bit and genmask */ 58 + #define FSL_QDMA_CQIDR_SQT BIT(15) 59 + #define QDMA_CCDF_FOTMAT BIT(29) 60 + #define QDMA_CCDF_SER BIT(30) 61 + #define QDMA_SG_FIN BIT(30) 62 + #define 
QDMA_SG_LEN_MASK GENMASK(29, 0) 63 + #define QDMA_CCDF_MASK GENMASK(28, 20) 64 + 65 + #define FSL_QDMA_DEDR_CLEAR GENMASK(31, 0) 66 + #define FSL_QDMA_BCQIDR_CLEAR GENMASK(31, 0) 67 + #define FSL_QDMA_DEIER_CLEAR GENMASK(31, 0) 68 + 69 + #define FSL_QDMA_BCQIER_CQTIE BIT(15) 70 + #define FSL_QDMA_BCQIER_CQPEIE BIT(23) 71 + #define FSL_QDMA_BSQICR_ICEN BIT(31) 72 + 73 + #define FSL_QDMA_BSQICR_ICST(x) ((x) << 16) 74 + #define FSL_QDMA_CQIER_MEIE BIT(31) 75 + #define FSL_QDMA_CQIER_TEIE BIT(0) 76 + #define FSL_QDMA_SQCCMR_ENTER_WM BIT(21) 77 + 78 + #define FSL_QDMA_BCQMR_EN BIT(31) 79 + #define FSL_QDMA_BCQMR_EI BIT(30) 80 + #define FSL_QDMA_BCQMR_CD_THLD(x) ((x) << 20) 81 + #define FSL_QDMA_BCQMR_CQ_SIZE(x) ((x) << 16) 82 + 83 + #define FSL_QDMA_BCQSR_QF BIT(16) 84 + #define FSL_QDMA_BCQSR_XOFF BIT(0) 85 + 86 + #define FSL_QDMA_BSQMR_EN BIT(31) 87 + #define FSL_QDMA_BSQMR_DI BIT(30) 88 + #define FSL_QDMA_BSQMR_CQ_SIZE(x) ((x) << 16) 89 + 90 + #define FSL_QDMA_BSQSR_QE BIT(17) 91 + 92 + #define FSL_QDMA_DMR_DQD BIT(30) 93 + #define FSL_QDMA_DSR_DB BIT(31) 94 + 95 + /* Size related definition */ 96 + #define FSL_QDMA_QUEUE_MAX 8 97 + #define FSL_QDMA_COMMAND_BUFFER_SIZE 64 98 + #define FSL_QDMA_DESCRIPTOR_BUFFER_SIZE 32 99 + #define FSL_QDMA_CIRCULAR_DESC_SIZE_MIN 64 100 + #define FSL_QDMA_CIRCULAR_DESC_SIZE_MAX 16384 101 + #define FSL_QDMA_QUEUE_NUM_MAX 8 102 + 103 + /* Field definition for CMD */ 104 + #define FSL_QDMA_CMD_RWTTYPE 0x4 105 + #define FSL_QDMA_CMD_LWC 0x2 106 + #define FSL_QDMA_CMD_RWTTYPE_OFFSET 28 107 + #define FSL_QDMA_CMD_NS_OFFSET 27 108 + #define FSL_QDMA_CMD_DQOS_OFFSET 24 109 + #define FSL_QDMA_CMD_WTHROTL_OFFSET 20 110 + #define FSL_QDMA_CMD_DSEN_OFFSET 19 111 + #define FSL_QDMA_CMD_LWC_OFFSET 16 112 + 113 + /* Field definition for Descriptor offset */ 114 + #define QDMA_CCDF_STATUS 20 115 + #define QDMA_CCDF_OFFSET 20 116 + 117 + /* Field definition for safe loop count*/ 118 + #define FSL_QDMA_HALT_COUNT 1500 119 + #define FSL_QDMA_MAX_SIZE 
16385 120 + #define FSL_QDMA_COMP_TIMEOUT 1000 121 + #define FSL_COMMAND_QUEUE_OVERFLLOW 10 122 + 123 + #define FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma_engine, x) \ 124 + (((fsl_qdma_engine)->block_offset) * (x)) 125 + 126 + /** 127 + * struct fsl_qdma_format - This is the struct describing the compound 128 + * descriptor format used by qDMA. 129 + * @status: Command status and enqueue status notification. 130 + * @cfg: Frame offset and frame format. 131 + * @addr_lo: Lower 32 bits of the compound descriptor's 132 + * 40-bit address in memory. 133 + * @addr_hi: Upper 8 bits of the compound descriptor's 134 + * 40-bit address in memory. 135 + * @__reserved1: Reserved field. 136 + * @cfg8b_w1: Compound descriptor command queue origin produced 137 + * by qDMA and dynamic debug field. 138 + * @data: The same 40-bit address in memory, accessed as 139 + * a single 64-bit word overlaying the fields above. 140 + */ 141 + struct fsl_qdma_format { 142 + __le32 status; 143 + __le32 cfg; 144 + union { 145 + struct { 146 + __le32 addr_lo; 147 + u8 addr_hi; 148 + u8 __reserved1[2]; 149 + u8 cfg8b_w1; 150 + } __packed; 151 + __le64 data; 152 + }; 153 + } __packed; 154 + 155 + /* qDMA status notification pre information */ 156 + struct fsl_pre_status { 157 + u64 addr; 158 + u8 queue; 159 + }; 160 + 161 + static DEFINE_PER_CPU(struct fsl_pre_status, pre); 162 + 163 + struct fsl_qdma_chan { 164 + struct virt_dma_chan vchan; 165 + struct virt_dma_desc vdesc; 166 + enum dma_status status; 167 + struct fsl_qdma_engine *qdma; 168 + struct fsl_qdma_queue *queue; 169 + }; 170 + 171 + struct fsl_qdma_queue { 172 + struct fsl_qdma_format *virt_head; 173 + struct fsl_qdma_format *virt_tail; 174 + struct list_head comp_used; 175 + struct list_head comp_free; 176 + struct dma_pool *comp_pool; 177 + struct dma_pool *desc_pool; 178 + spinlock_t queue_lock; 179 + dma_addr_t bus_addr; 180 + u32 n_cq; 181 + u32 id; 182 + struct fsl_qdma_format *cq; 183 + void __iomem
*block_base; 184 + }; 185 + 186 + struct fsl_qdma_comp { 187 + dma_addr_t bus_addr; 188 + dma_addr_t desc_bus_addr; 189 + struct fsl_qdma_format *virt_addr; 190 + struct fsl_qdma_format *desc_virt_addr; 191 + struct fsl_qdma_chan *qchan; 192 + struct virt_dma_desc vdesc; 193 + struct list_head list; 194 + }; 195 + 196 + struct fsl_qdma_engine { 197 + struct dma_device dma_dev; 198 + void __iomem *ctrl_base; 199 + void __iomem *status_base; 200 + void __iomem *block_base; 201 + u32 n_chans; 202 + u32 n_queues; 203 + struct mutex fsl_qdma_mutex; 204 + int error_irq; 205 + int *queue_irq; 206 + u32 feature; 207 + struct fsl_qdma_queue *queue; 208 + struct fsl_qdma_queue **status; 209 + struct fsl_qdma_chan *chans; 210 + int block_number; 211 + int block_offset; 212 + int irq_base; 213 + int desc_allocated; 214 + 215 + }; 216 + 217 + static inline u64 218 + qdma_ccdf_addr_get64(const struct fsl_qdma_format *ccdf) 219 + { 220 + return le64_to_cpu(ccdf->data) & (U64_MAX >> 24); 221 + } 222 + 223 + static inline void 224 + qdma_desc_addr_set64(struct fsl_qdma_format *ccdf, u64 addr) 225 + { 226 + ccdf->addr_hi = upper_32_bits(addr); 227 + ccdf->addr_lo = cpu_to_le32(lower_32_bits(addr)); 228 + } 229 + 230 + static inline u8 231 + qdma_ccdf_get_queue(const struct fsl_qdma_format *ccdf) 232 + { 233 + return ccdf->cfg8b_w1 & U8_MAX; 234 + } 235 + 236 + static inline int 237 + qdma_ccdf_get_offset(const struct fsl_qdma_format *ccdf) 238 + { 239 + return (le32_to_cpu(ccdf->cfg) & QDMA_CCDF_MASK) >> QDMA_CCDF_OFFSET; 240 + } 241 + 242 + static inline void 243 + qdma_ccdf_set_format(struct fsl_qdma_format *ccdf, int offset) 244 + { 245 + ccdf->cfg = cpu_to_le32(QDMA_CCDF_FOTMAT | offset); 246 + } 247 + 248 + static inline int 249 + qdma_ccdf_get_status(const struct fsl_qdma_format *ccdf) 250 + { 251 + return (le32_to_cpu(ccdf->status) & QDMA_CCDF_MASK) >> QDMA_CCDF_STATUS; 252 + } 253 + 254 + static inline void 255 + qdma_ccdf_set_ser(struct fsl_qdma_format *ccdf, int status) 
256 + { 257 + ccdf->status = cpu_to_le32(QDMA_CCDF_SER | status); 258 + } 259 + 260 + static inline void qdma_csgf_set_len(struct fsl_qdma_format *csgf, int len) 261 + { 262 + csgf->cfg = cpu_to_le32(len & QDMA_SG_LEN_MASK); 263 + } 264 + 265 + static inline void qdma_csgf_set_f(struct fsl_qdma_format *csgf, int len) 266 + { 267 + csgf->cfg = cpu_to_le32(QDMA_SG_FIN | (len & QDMA_SG_LEN_MASK)); 268 + } 269 + 270 + static u32 qdma_readl(struct fsl_qdma_engine *qdma, void __iomem *addr) 271 + { 272 + return FSL_DMA_IN(qdma, addr, 32); 273 + } 274 + 275 + static void qdma_writel(struct fsl_qdma_engine *qdma, u32 val, 276 + void __iomem *addr) 277 + { 278 + FSL_DMA_OUT(qdma, addr, val, 32); 279 + } 280 + 281 + static struct fsl_qdma_chan *to_fsl_qdma_chan(struct dma_chan *chan) 282 + { 283 + return container_of(chan, struct fsl_qdma_chan, vchan.chan); 284 + } 285 + 286 + static struct fsl_qdma_comp *to_fsl_qdma_comp(struct virt_dma_desc *vd) 287 + { 288 + return container_of(vd, struct fsl_qdma_comp, vdesc); 289 + } 290 + 291 + static void fsl_qdma_free_chan_resources(struct dma_chan *chan) 292 + { 293 + struct fsl_qdma_chan *fsl_chan = to_fsl_qdma_chan(chan); 294 + struct fsl_qdma_queue *fsl_queue = fsl_chan->queue; 295 + struct fsl_qdma_engine *fsl_qdma = fsl_chan->qdma; 296 + struct fsl_qdma_comp *comp_temp, *_comp_temp; 297 + unsigned long flags; 298 + LIST_HEAD(head); 299 + 300 + spin_lock_irqsave(&fsl_chan->vchan.lock, flags); 301 + vchan_get_all_descriptors(&fsl_chan->vchan, &head); 302 + spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags); 303 + 304 + vchan_dma_desc_free_list(&fsl_chan->vchan, &head); 305 + 306 + if (!fsl_queue->comp_pool && !fsl_queue->desc_pool) 307 + return; 308 + 309 + list_for_each_entry_safe(comp_temp, _comp_temp, 310 + &fsl_queue->comp_used, list) { 311 + dma_pool_free(fsl_queue->comp_pool, 312 + comp_temp->virt_addr, 313 + comp_temp->bus_addr); 314 + dma_pool_free(fsl_queue->desc_pool, 315 + comp_temp->desc_virt_addr, 316 +
comp_temp->desc_bus_addr); 317 + list_del(&comp_temp->list); 318 + kfree(comp_temp); 319 + } 320 + 321 + list_for_each_entry_safe(comp_temp, _comp_temp, 322 + &fsl_queue->comp_free, list) { 323 + dma_pool_free(fsl_queue->comp_pool, 324 + comp_temp->virt_addr, 325 + comp_temp->bus_addr); 326 + dma_pool_free(fsl_queue->desc_pool, 327 + comp_temp->desc_virt_addr, 328 + comp_temp->desc_bus_addr); 329 + list_del(&comp_temp->list); 330 + kfree(comp_temp); 331 + } 332 + 333 + dma_pool_destroy(fsl_queue->comp_pool); 334 + dma_pool_destroy(fsl_queue->desc_pool); 335 + 336 + fsl_qdma->desc_allocated--; 337 + fsl_queue->comp_pool = NULL; 338 + fsl_queue->desc_pool = NULL; 339 + } 340 + 341 + static void fsl_qdma_comp_fill_memcpy(struct fsl_qdma_comp *fsl_comp, 342 + dma_addr_t dst, dma_addr_t src, u32 len) 343 + { 344 + struct fsl_qdma_format *sdf, *ddf; 345 + struct fsl_qdma_format *ccdf, *csgf_desc, *csgf_src, *csgf_dest; 346 + 347 + ccdf = fsl_comp->virt_addr; 348 + csgf_desc = fsl_comp->virt_addr + 1; 349 + csgf_src = fsl_comp->virt_addr + 2; 350 + csgf_dest = fsl_comp->virt_addr + 3; 351 + sdf = fsl_comp->desc_virt_addr; 352 + ddf = fsl_comp->desc_virt_addr + 1; 353 + 354 + memset(fsl_comp->virt_addr, 0, FSL_QDMA_COMMAND_BUFFER_SIZE); 355 + memset(fsl_comp->desc_virt_addr, 0, FSL_QDMA_DESCRIPTOR_BUFFER_SIZE); 356 + /* Head Command Descriptor(Frame Descriptor) */ 357 + qdma_desc_addr_set64(ccdf, fsl_comp->bus_addr + 16); 358 + qdma_ccdf_set_format(ccdf, qdma_ccdf_get_offset(ccdf)); 359 + qdma_ccdf_set_ser(ccdf, qdma_ccdf_get_status(ccdf)); 360 + /* Status notification is enqueued to status queue. 
*/ 361 + /* Compound Command Descriptor(Frame List Table) */ 362 + qdma_desc_addr_set64(csgf_desc, fsl_comp->desc_bus_addr); 363 + /* It must be 32 as Compound S/G Descriptor */ 364 + qdma_csgf_set_len(csgf_desc, 32); 365 + qdma_desc_addr_set64(csgf_src, src); 366 + qdma_csgf_set_len(csgf_src, len); 367 + qdma_desc_addr_set64(csgf_dest, dst); 368 + qdma_csgf_set_len(csgf_dest, len); 369 + /* This entry is the last entry. */ 370 + qdma_csgf_set_f(csgf_dest, len); 371 + /* Descriptor Buffer */ 372 + sdf->data = 373 + cpu_to_le64(FSL_QDMA_CMD_RWTTYPE << 374 + FSL_QDMA_CMD_RWTTYPE_OFFSET); 375 + ddf->data = 376 + cpu_to_le64(FSL_QDMA_CMD_RWTTYPE << 377 + FSL_QDMA_CMD_RWTTYPE_OFFSET); 378 + ddf->data |= 379 + cpu_to_le64(FSL_QDMA_CMD_LWC << FSL_QDMA_CMD_LWC_OFFSET); 380 + } 381 + 382 + /* 383 + * Pre-request full command descriptor for enqueue. 384 + */ 385 + static int fsl_qdma_pre_request_enqueue_desc(struct fsl_qdma_queue *queue) 386 + { 387 + int i; 388 + struct fsl_qdma_comp *comp_temp, *_comp_temp; 389 + 390 + for (i = 0; i < queue->n_cq + FSL_COMMAND_QUEUE_OVERFLLOW; i++) { 391 + comp_temp = kzalloc(sizeof(*comp_temp), GFP_KERNEL); 392 + if (!comp_temp) 393 + goto err_alloc; 394 + comp_temp->virt_addr = 395 + dma_pool_alloc(queue->comp_pool, GFP_KERNEL, 396 + &comp_temp->bus_addr); 397 + if (!comp_temp->virt_addr) 398 + goto err_dma_alloc; 399 + 400 + comp_temp->desc_virt_addr = 401 + dma_pool_alloc(queue->desc_pool, GFP_KERNEL, 402 + &comp_temp->desc_bus_addr); 403 + if (!comp_temp->desc_virt_addr) 404 + goto err_desc_dma_alloc; 405 + 406 + list_add_tail(&comp_temp->list, &queue->comp_free); 407 + } 408 + 409 + return 0; 410 + 411 + err_desc_dma_alloc: 412 + dma_pool_free(queue->comp_pool, comp_temp->virt_addr, 413 + comp_temp->bus_addr); 414 + 415 + err_dma_alloc: 416 + kfree(comp_temp); 417 + 418 + err_alloc: 419 + list_for_each_entry_safe(comp_temp, _comp_temp, 420 + &queue->comp_free, list) { 421 + if (comp_temp->virt_addr) 422 + 
dma_pool_free(queue->comp_pool, 423 + comp_temp->virt_addr, 424 + comp_temp->bus_addr); 425 + if (comp_temp->desc_virt_addr) 426 + dma_pool_free(queue->desc_pool, 427 + comp_temp->desc_virt_addr, 428 + comp_temp->desc_bus_addr); 429 + 430 + list_del(&comp_temp->list); 431 + kfree(comp_temp); 432 + } 433 + 434 + return -ENOMEM; 435 + } 436 + 437 + /* 438 + * Request a command descriptor for enqueue. 439 + */ 440 + static struct fsl_qdma_comp 441 + *fsl_qdma_request_enqueue_desc(struct fsl_qdma_chan *fsl_chan) 442 + { 443 + unsigned long flags; 444 + struct fsl_qdma_comp *comp_temp; 445 + int timeout = FSL_QDMA_COMP_TIMEOUT; 446 + struct fsl_qdma_queue *queue = fsl_chan->queue; 447 + 448 + while (timeout--) { 449 + spin_lock_irqsave(&queue->queue_lock, flags); 450 + if (!list_empty(&queue->comp_free)) { 451 + comp_temp = list_first_entry(&queue->comp_free, 452 + struct fsl_qdma_comp, 453 + list); 454 + list_del(&comp_temp->list); 455 + 456 + spin_unlock_irqrestore(&queue->queue_lock, flags); 457 + comp_temp->qchan = fsl_chan; 458 + return comp_temp; 459 + } 460 + spin_unlock_irqrestore(&queue->queue_lock, flags); 461 + udelay(1); 462 + } 463 + 464 + return NULL; 465 + } 466 + 467 + static struct fsl_qdma_queue 468 + *fsl_qdma_alloc_queue_resources(struct platform_device *pdev, 469 + struct fsl_qdma_engine *fsl_qdma) 470 + { 471 + int ret, len, i, j; 472 + int queue_num, block_number; 473 + unsigned int queue_size[FSL_QDMA_QUEUE_MAX]; 474 + struct fsl_qdma_queue *queue_head, *queue_temp; 475 + 476 + queue_num = fsl_qdma->n_queues; 477 + block_number = fsl_qdma->block_number; 478 + 479 + if (queue_num > FSL_QDMA_QUEUE_MAX) 480 + queue_num = FSL_QDMA_QUEUE_MAX; 481 + len = sizeof(*queue_head) * queue_num * block_number; 482 + queue_head = devm_kzalloc(&pdev->dev, len, GFP_KERNEL); 483 + if (!queue_head) 484 + return NULL; 485 + 486 + ret = device_property_read_u32_array(&pdev->dev, "queue-sizes", 487 + queue_size, queue_num); 488 + if (ret) { 489 + dev_err(&pdev->dev, 
"Can't get queue-sizes.\n"); 490 + return NULL; 491 + } 492 + for (j = 0; j < block_number; j++) { 493 + for (i = 0; i < queue_num; i++) { 494 + if (queue_size[i] > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX || 495 + queue_size[i] < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) { 496 + dev_err(&pdev->dev, 497 + "Get wrong queue-sizes.\n"); 498 + return NULL; 499 + } 500 + queue_temp = queue_head + i + (j * queue_num); 501 + 502 + queue_temp->cq = 503 + dma_alloc_coherent(&pdev->dev, 504 + sizeof(struct fsl_qdma_format) * 505 + queue_size[i], 506 + &queue_temp->bus_addr, 507 + GFP_KERNEL); 508 + if (!queue_temp->cq) 509 + return NULL; 510 + queue_temp->block_base = fsl_qdma->block_base + 511 + FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j); 512 + queue_temp->n_cq = queue_size[i]; 513 + queue_temp->id = i; 514 + queue_temp->virt_head = queue_temp->cq; 515 + queue_temp->virt_tail = queue_temp->cq; 516 + /* 517 + * List for queue command buffer 518 + */ 519 + INIT_LIST_HEAD(&queue_temp->comp_used); 520 + spin_lock_init(&queue_temp->queue_lock); 521 + } 522 + } 523 + return queue_head; 524 + } 525 + 526 + static struct fsl_qdma_queue 527 + *fsl_qdma_prep_status_queue(struct platform_device *pdev) 528 + { 529 + int ret; 530 + unsigned int status_size; 531 + struct fsl_qdma_queue *status_head; 532 + struct device_node *np = pdev->dev.of_node; 533 + 534 + ret = of_property_read_u32(np, "status-sizes", &status_size); 535 + if (ret) { 536 + dev_err(&pdev->dev, "Can't get status-sizes.\n"); 537 + return NULL; 538 + } 539 + if (status_size > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX || 540 + status_size < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) { 541 + dev_err(&pdev->dev, "Get wrong status_size.\n"); 542 + return NULL; 543 + } 544 + status_head = devm_kzalloc(&pdev->dev, 545 + sizeof(*status_head), GFP_KERNEL); 546 + if (!status_head) 547 + return NULL; 548 + 549 + /* 550 + * Buffer for queue command 551 + */ 552 + status_head->cq = dma_alloc_coherent(&pdev->dev, 553 + sizeof(struct fsl_qdma_format) * 554 + status_size, 555 
+ &status_head->bus_addr, 556 + GFP_KERNEL); 557 + if (!status_head->cq) { 558 + devm_kfree(&pdev->dev, status_head); 559 + return NULL; 560 + } 561 + status_head->n_cq = status_size; 562 + status_head->virt_head = status_head->cq; 563 + status_head->virt_tail = status_head->cq; 564 + status_head->comp_pool = NULL; 565 + 566 + return status_head; 567 + } 568 + 569 + static int fsl_qdma_halt(struct fsl_qdma_engine *fsl_qdma) 570 + { 571 + u32 reg; 572 + int i, j, count = FSL_QDMA_HALT_COUNT; 573 + void __iomem *block, *ctrl = fsl_qdma->ctrl_base; 574 + 575 + /* Disable the command queue and wait for idle state. */ 576 + reg = qdma_readl(fsl_qdma, ctrl + FSL_QDMA_DMR); 577 + reg |= FSL_QDMA_DMR_DQD; 578 + qdma_writel(fsl_qdma, reg, ctrl + FSL_QDMA_DMR); 579 + for (j = 0; j < fsl_qdma->block_number; j++) { 580 + block = fsl_qdma->block_base + 581 + FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j); 582 + for (i = 0; i < FSL_QDMA_QUEUE_NUM_MAX; i++) 583 + qdma_writel(fsl_qdma, 0, block + FSL_QDMA_BCQMR(i)); 584 + } 585 + while (1) { 586 + reg = qdma_readl(fsl_qdma, ctrl + FSL_QDMA_DSR); 587 + if (!(reg & FSL_QDMA_DSR_DB)) 588 + break; 589 + if (count-- < 0) 590 + return -EBUSY; 591 + udelay(100); 592 + } 593 + 594 + for (j = 0; j < fsl_qdma->block_number; j++) { 595 + block = fsl_qdma->block_base + 596 + FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j); 597 + 598 + /* Disable status queue. */ 599 + qdma_writel(fsl_qdma, 0, block + FSL_QDMA_BSQMR); 600 + 601 + /* 602 + * clear the command queue interrupt detect register for 603 + * all queues. 
604 + */ 605 + qdma_writel(fsl_qdma, FSL_QDMA_BCQIDR_CLEAR, 606 + block + FSL_QDMA_BCQIDR(0)); 607 + } 608 + 609 + return 0; 610 + } 611 + 612 + static int 613 + fsl_qdma_queue_transfer_complete(struct fsl_qdma_engine *fsl_qdma, 614 + void *block, 615 + int id) 616 + { 617 + bool duplicate; 618 + u32 reg, i, count; 619 + struct fsl_qdma_queue *temp_queue; 620 + struct fsl_qdma_format *status_addr; 621 + struct fsl_qdma_comp *fsl_comp = NULL; 622 + struct fsl_qdma_queue *fsl_queue = fsl_qdma->queue; 623 + struct fsl_qdma_queue *fsl_status = fsl_qdma->status[id]; 624 + 625 + count = FSL_QDMA_MAX_SIZE; 626 + 627 + while (count--) { 628 + duplicate = 0; 629 + reg = qdma_readl(fsl_qdma, block + FSL_QDMA_BSQSR); 630 + if (reg & FSL_QDMA_BSQSR_QE) 631 + return 0; 632 + 633 + status_addr = fsl_status->virt_head; 634 + 635 + if (qdma_ccdf_get_queue(status_addr) == 636 + __this_cpu_read(pre.queue) && 637 + qdma_ccdf_addr_get64(status_addr) == 638 + __this_cpu_read(pre.addr)) 639 + duplicate = 1; 640 + i = qdma_ccdf_get_queue(status_addr) + 641 + id * fsl_qdma->n_queues; 642 + __this_cpu_write(pre.addr, qdma_ccdf_addr_get64(status_addr)); 643 + __this_cpu_write(pre.queue, qdma_ccdf_get_queue(status_addr)); 644 + temp_queue = fsl_queue + i; 645 + 646 + spin_lock(&temp_queue->queue_lock); 647 + if (list_empty(&temp_queue->comp_used)) { 648 + if (!duplicate) { 649 + spin_unlock(&temp_queue->queue_lock); 650 + return -EAGAIN; 651 + } 652 + } else { 653 + fsl_comp = list_first_entry(&temp_queue->comp_used, 654 + struct fsl_qdma_comp, list); 655 + if (fsl_comp->bus_addr + 16 != 656 + __this_cpu_read(pre.addr)) { 657 + if (!duplicate) { 658 + spin_unlock(&temp_queue->queue_lock); 659 + return -EAGAIN; 660 + } 661 + } 662 + } 663 + 664 + if (duplicate) { 665 + reg = qdma_readl(fsl_qdma, block + FSL_QDMA_BSQMR); 666 + reg |= FSL_QDMA_BSQMR_DI; 667 + qdma_desc_addr_set64(status_addr, 0x0); 668 + fsl_status->virt_head++; 669 + if (fsl_status->virt_head == fsl_status->cq 670 + + 
fsl_status->n_cq) 671 + fsl_status->virt_head = fsl_status->cq; 672 + qdma_writel(fsl_qdma, reg, block + FSL_QDMA_BSQMR); 673 + spin_unlock(&temp_queue->queue_lock); 674 + continue; 675 + } 676 + list_del(&fsl_comp->list); 677 + 678 + reg = qdma_readl(fsl_qdma, block + FSL_QDMA_BSQMR); 679 + reg |= FSL_QDMA_BSQMR_DI; 680 + qdma_desc_addr_set64(status_addr, 0x0); 681 + fsl_status->virt_head++; 682 + if (fsl_status->virt_head == fsl_status->cq + fsl_status->n_cq) 683 + fsl_status->virt_head = fsl_status->cq; 684 + qdma_writel(fsl_qdma, reg, block + FSL_QDMA_BSQMR); 685 + spin_unlock(&temp_queue->queue_lock); 686 + 687 + spin_lock(&fsl_comp->qchan->vchan.lock); 688 + vchan_cookie_complete(&fsl_comp->vdesc); 689 + fsl_comp->qchan->status = DMA_COMPLETE; 690 + spin_unlock(&fsl_comp->qchan->vchan.lock); 691 + } 692 + 693 + return 0; 694 + } 695 + 696 + static irqreturn_t fsl_qdma_error_handler(int irq, void *dev_id) 697 + { 698 + unsigned int intr; 699 + struct fsl_qdma_engine *fsl_qdma = dev_id; 700 + void __iomem *status = fsl_qdma->status_base; 701 + 702 + intr = qdma_readl(fsl_qdma, status + FSL_QDMA_DEDR); 703 + 704 + if (intr) 705 + dev_err(fsl_qdma->dma_dev.dev, "DMA transaction error!\n"); 706 + 707 + /* Always clear the detected error events so they do not re-fire. */ 708 + qdma_writel(fsl_qdma, FSL_QDMA_DEDR_CLEAR, status + FSL_QDMA_DEDR); 709 + 710 + return IRQ_HANDLED; 711 + } 712 + 713 + static irqreturn_t fsl_qdma_queue_handler(int irq, void *dev_id) 714 + { 715 + int id; 716 + unsigned int intr, reg; 717 + struct fsl_qdma_engine *fsl_qdma = dev_id; 718 + void __iomem *block, *ctrl = fsl_qdma->ctrl_base; 719 + 720 + id = irq - fsl_qdma->irq_base; 721 + if (id < 0 || id >= fsl_qdma->block_number) { 722 + dev_err(fsl_qdma->dma_dev.dev, 723 + "irq %d is wrong, irq_base is %d\n", 724 + irq, fsl_qdma->irq_base); 725 + } 726 + 727 + block = fsl_qdma->block_base + 728 + FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, id); 729 + 730 + intr = qdma_readl(fsl_qdma, block + FSL_QDMA_BCQIDR(0)); 731 + 732 + if ((intr &
FSL_QDMA_CQIDR_SQT) != 0) 733 + intr = fsl_qdma_queue_transfer_complete(fsl_qdma, block, id); 734 + 735 + if (intr != 0) { 736 + reg = qdma_readl(fsl_qdma, ctrl + FSL_QDMA_DMR); 737 + reg |= FSL_QDMA_DMR_DQD; 738 + qdma_writel(fsl_qdma, reg, ctrl + FSL_QDMA_DMR); 739 + qdma_writel(fsl_qdma, 0, block + FSL_QDMA_BCQIER(0)); 740 + dev_err(fsl_qdma->dma_dev.dev, "QDMA: status err!\n"); 741 + } 742 + 743 + /* Clear all detected events and interrupts. */ 744 + qdma_writel(fsl_qdma, FSL_QDMA_BCQIDR_CLEAR, 745 + block + FSL_QDMA_BCQIDR(0)); 746 + 747 + return IRQ_HANDLED; 748 + } 749 + 750 + static int 751 + fsl_qdma_irq_init(struct platform_device *pdev, 752 + struct fsl_qdma_engine *fsl_qdma) 753 + { 754 + int i; 755 + int cpu; 756 + int ret; 757 + char irq_name[20]; 758 + 759 + fsl_qdma->error_irq = 760 + platform_get_irq_byname(pdev, "qdma-error"); 761 + if (fsl_qdma->error_irq < 0) { 762 + dev_err(&pdev->dev, "Can't get qdma controller irq.\n"); 763 + return fsl_qdma->error_irq; 764 + } 765 + 766 + ret = devm_request_irq(&pdev->dev, fsl_qdma->error_irq, 767 + fsl_qdma_error_handler, 0, 768 + "qDMA error", fsl_qdma); 769 + if (ret) { 770 + dev_err(&pdev->dev, "Can't register qDMA controller IRQ.\n"); 771 + return ret; 772 + } 773 + 774 + for (i = 0; i < fsl_qdma->block_number; i++) { 775 + sprintf(irq_name, "qdma-queue%d", i); 776 + fsl_qdma->queue_irq[i] = 777 + platform_get_irq_byname(pdev, irq_name); 778 + 779 + if (fsl_qdma->queue_irq[i] < 0) { 780 + dev_err(&pdev->dev, 781 + "Can't get qdma queue %d irq.\n", i); 782 + return fsl_qdma->queue_irq[i]; 783 + } 784 + 785 + ret = devm_request_irq(&pdev->dev, 786 + fsl_qdma->queue_irq[i], 787 + fsl_qdma_queue_handler, 788 + 0, 789 + "qDMA queue", 790 + fsl_qdma); 791 + if (ret) { 792 + dev_err(&pdev->dev, 793 + "Can't register qDMA queue IRQ.\n"); 794 + return ret; 795 + } 796 + 797 + cpu = i % num_online_cpus(); 798 + ret = irq_set_affinity_hint(fsl_qdma->queue_irq[i], 799 + get_cpu_mask(cpu)); 800 + if (ret) { 801 + 
+			dev_err(&pdev->dev,
+				"Can't set cpu %d affinity to IRQ %d.\n",
+				cpu,
+				fsl_qdma->queue_irq[i]);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static void fsl_qdma_irq_exit(struct platform_device *pdev,
+			      struct fsl_qdma_engine *fsl_qdma)
+{
+	int i;
+
+	devm_free_irq(&pdev->dev, fsl_qdma->error_irq, fsl_qdma);
+	for (i = 0; i < fsl_qdma->block_number; i++)
+		devm_free_irq(&pdev->dev, fsl_qdma->queue_irq[i], fsl_qdma);
+}
+
+static int fsl_qdma_reg_init(struct fsl_qdma_engine *fsl_qdma)
+{
+	u32 reg;
+	int i, j, ret;
+	struct fsl_qdma_queue *temp;
+	void __iomem *status = fsl_qdma->status_base;
+	void __iomem *block, *ctrl = fsl_qdma->ctrl_base;
+	struct fsl_qdma_queue *fsl_queue = fsl_qdma->queue;
+
+	/* Try to halt the qDMA engine first. */
+	ret = fsl_qdma_halt(fsl_qdma);
+	if (ret) {
+		dev_err(fsl_qdma->dma_dev.dev, "DMA halt failed!");
+		return ret;
+	}
+
+	for (i = 0; i < fsl_qdma->block_number; i++) {
+		/*
+		 * Clear the command queue interrupt detect register for
+		 * all queues.
+		 */
+		block = fsl_qdma->block_base +
+			FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, i);
+		qdma_writel(fsl_qdma, FSL_QDMA_BCQIDR_CLEAR,
+			    block + FSL_QDMA_BCQIDR(0));
+	}
+
+	for (j = 0; j < fsl_qdma->block_number; j++) {
+		block = fsl_qdma->block_base +
+			FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
+		for (i = 0; i < fsl_qdma->n_queues; i++) {
+			temp = fsl_queue + i + (j * fsl_qdma->n_queues);
+			/*
+			 * Initialize Command Queue registers to
+			 * point to the first
+			 * command descriptor in memory.
+			 * Dequeue Pointer Address Registers
+			 * Enqueue Pointer Address Registers
+			 */
+			qdma_writel(fsl_qdma, temp->bus_addr,
+				    block + FSL_QDMA_BCQDPA_SADDR(i));
+			qdma_writel(fsl_qdma, temp->bus_addr,
+				    block + FSL_QDMA_BCQEPA_SADDR(i));
+
+			/* Initialize the queue mode. */
+			reg = FSL_QDMA_BCQMR_EN;
+			reg |= FSL_QDMA_BCQMR_CD_THLD(ilog2(temp->n_cq) - 4);
+			reg |= FSL_QDMA_BCQMR_CQ_SIZE(ilog2(temp->n_cq) - 6);
+			qdma_writel(fsl_qdma, reg, block + FSL_QDMA_BCQMR(i));
+		}
+
+		/*
+		 * Workaround for erratum: ERR010812.
+		 * We must enable XOFF to avoid the enqueue rejection occurs.
+		 * Setting SQCCMR ENTER_WM to 0x20.
+		 */
+		qdma_writel(fsl_qdma, FSL_QDMA_SQCCMR_ENTER_WM,
+			    block + FSL_QDMA_SQCCMR);
+
+		/*
+		 * Initialize status queue registers to point to the first
+		 * command descriptor in memory.
+		 * Dequeue Pointer Address Registers
+		 * Enqueue Pointer Address Registers
+		 */
+		qdma_writel(fsl_qdma, fsl_qdma->status[j]->bus_addr,
+			    block + FSL_QDMA_SQEPAR);
+		qdma_writel(fsl_qdma, fsl_qdma->status[j]->bus_addr,
+			    block + FSL_QDMA_SQDPAR);
+		/* Initialize status queue interrupt. */
+		qdma_writel(fsl_qdma, FSL_QDMA_BCQIER_CQTIE,
+			    block + FSL_QDMA_BCQIER(0));
+		qdma_writel(fsl_qdma, FSL_QDMA_BSQICR_ICEN |
+			    FSL_QDMA_BSQICR_ICST(5) | 0x8000,
+			    block + FSL_QDMA_BSQICR);
+		qdma_writel(fsl_qdma, FSL_QDMA_CQIER_MEIE |
+			    FSL_QDMA_CQIER_TEIE,
+			    block + FSL_QDMA_CQIER);
+
+		/* Initialize the status queue mode. */
+		reg = FSL_QDMA_BSQMR_EN;
+		reg |= FSL_QDMA_BSQMR_CQ_SIZE(ilog2
+					      (fsl_qdma->status[j]->n_cq) - 6);
+
+		qdma_writel(fsl_qdma, reg, block + FSL_QDMA_BSQMR);
+		reg = qdma_readl(fsl_qdma, block + FSL_QDMA_BSQMR);
+	}
+
+	/* Initialize controller interrupt register. */
+	qdma_writel(fsl_qdma, FSL_QDMA_DEDR_CLEAR, status + FSL_QDMA_DEDR);
+	qdma_writel(fsl_qdma, FSL_QDMA_DEIER_CLEAR, status + FSL_QDMA_DEIER);
+
+	reg = qdma_readl(fsl_qdma, ctrl + FSL_QDMA_DMR);
+	reg &= ~FSL_QDMA_DMR_DQD;
+	qdma_writel(fsl_qdma, reg, ctrl + FSL_QDMA_DMR);
+
+	return 0;
+}
+
+static struct dma_async_tx_descriptor *
+fsl_qdma_prep_memcpy(struct dma_chan *chan, dma_addr_t dst,
+		     dma_addr_t src, size_t len, unsigned long flags)
+{
+	struct fsl_qdma_comp *fsl_comp;
+	struct fsl_qdma_chan *fsl_chan = to_fsl_qdma_chan(chan);
+
+	fsl_comp = fsl_qdma_request_enqueue_desc(fsl_chan);
+
+	if (!fsl_comp)
+		return NULL;
+
+	fsl_qdma_comp_fill_memcpy(fsl_comp, dst, src, len);
+
+	return vchan_tx_prep(&fsl_chan->vchan, &fsl_comp->vdesc, flags);
+}
+
+static void fsl_qdma_enqueue_desc(struct fsl_qdma_chan *fsl_chan)
+{
+	u32 reg;
+	struct virt_dma_desc *vdesc;
+	struct fsl_qdma_comp *fsl_comp;
+	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
+	void __iomem *block = fsl_queue->block_base;
+
+	reg = qdma_readl(fsl_chan->qdma, block + FSL_QDMA_BCQSR(fsl_queue->id));
+	if (reg & (FSL_QDMA_BCQSR_QF | FSL_QDMA_BCQSR_XOFF))
+		return;
+	vdesc = vchan_next_desc(&fsl_chan->vchan);
+	if (!vdesc)
+		return;
+	list_del(&vdesc->node);
+	fsl_comp = to_fsl_qdma_comp(vdesc);
+
+	memcpy(fsl_queue->virt_head++,
+	       fsl_comp->virt_addr, sizeof(struct fsl_qdma_format));
+	if (fsl_queue->virt_head == fsl_queue->cq + fsl_queue->n_cq)
+		fsl_queue->virt_head = fsl_queue->cq;
+
+	list_add_tail(&fsl_comp->list, &fsl_queue->comp_used);
+	barrier();
+	reg = qdma_readl(fsl_chan->qdma, block + FSL_QDMA_BCQMR(fsl_queue->id));
+	reg |= FSL_QDMA_BCQMR_EI;
+	qdma_writel(fsl_chan->qdma, reg, block + FSL_QDMA_BCQMR(fsl_queue->id));
+	fsl_chan->status = DMA_IN_PROGRESS;
+}
+
+static void fsl_qdma_free_desc(struct virt_dma_desc *vdesc)
+{
+	unsigned long flags;
+	struct fsl_qdma_comp *fsl_comp;
+	struct fsl_qdma_queue *fsl_queue;
+
+	fsl_comp = to_fsl_qdma_comp(vdesc);
+	fsl_queue = fsl_comp->qchan->queue;
+
+	spin_lock_irqsave(&fsl_queue->queue_lock, flags);
+	list_add_tail(&fsl_comp->list, &fsl_queue->comp_free);
+	spin_unlock_irqrestore(&fsl_queue->queue_lock, flags);
+}
+
+static void fsl_qdma_issue_pending(struct dma_chan *chan)
+{
+	unsigned long flags;
+	struct fsl_qdma_chan *fsl_chan = to_fsl_qdma_chan(chan);
+	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
+
+	spin_lock_irqsave(&fsl_queue->queue_lock, flags);
+	spin_lock(&fsl_chan->vchan.lock);
+	if (vchan_issue_pending(&fsl_chan->vchan))
+		fsl_qdma_enqueue_desc(fsl_chan);
+	spin_unlock(&fsl_chan->vchan.lock);
+	spin_unlock_irqrestore(&fsl_queue->queue_lock, flags);
+}
+
+static void fsl_qdma_synchronize(struct dma_chan *chan)
+{
+	struct fsl_qdma_chan *fsl_chan = to_fsl_qdma_chan(chan);
+
+	vchan_synchronize(&fsl_chan->vchan);
+}
+
+static int fsl_qdma_terminate_all(struct dma_chan *chan)
+{
+	LIST_HEAD(head);
+	unsigned long flags;
+	struct fsl_qdma_chan *fsl_chan = to_fsl_qdma_chan(chan);
+
+	spin_lock_irqsave(&fsl_chan->vchan.lock, flags);
+	vchan_get_all_descriptors(&fsl_chan->vchan, &head);
+	spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags);
+	vchan_dma_desc_free_list(&fsl_chan->vchan, &head);
+	return 0;
+}
+
+static int fsl_qdma_alloc_chan_resources(struct dma_chan *chan)
+{
+	int ret;
+	struct fsl_qdma_chan *fsl_chan = to_fsl_qdma_chan(chan);
+	struct fsl_qdma_engine *fsl_qdma = fsl_chan->qdma;
+	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
+
+	if (fsl_queue->comp_pool && fsl_queue->desc_pool)
+		return fsl_qdma->desc_allocated;
+
+	INIT_LIST_HEAD(&fsl_queue->comp_free);
+
+	/*
+	 * The dma pool for queue command buffer
+	 */
+	fsl_queue->comp_pool =
+	dma_pool_create("comp_pool",
+			chan->device->dev,
+			FSL_QDMA_COMMAND_BUFFER_SIZE,
+			64, 0);
+	if (!fsl_queue->comp_pool)
+		return -ENOMEM;
+
+	/*
+	 * The dma pool for Descriptor(SD/DD) buffer
+	 */
+	fsl_queue->desc_pool =
+	dma_pool_create("desc_pool",
+			chan->device->dev,
+			FSL_QDMA_DESCRIPTOR_BUFFER_SIZE,
+			32, 0);
+	if (!fsl_queue->desc_pool)
+		goto err_desc_pool;
+
+	ret = fsl_qdma_pre_request_enqueue_desc(fsl_queue);
+	if (ret) {
+		dev_err(chan->device->dev,
+			"failed to alloc dma buffer for S/G descriptor\n");
+		goto err_mem;
+	}
+
+	fsl_qdma->desc_allocated++;
+	return fsl_qdma->desc_allocated;
+
+err_mem:
+	dma_pool_destroy(fsl_queue->desc_pool);
+err_desc_pool:
+	dma_pool_destroy(fsl_queue->comp_pool);
+	return -ENOMEM;
+}
+
+static int fsl_qdma_probe(struct platform_device *pdev)
+{
+	int ret, i;
+	int blk_num, blk_off;
+	u32 len, chans, queues;
+	struct resource *res;
+	struct fsl_qdma_chan *fsl_chan;
+	struct fsl_qdma_engine *fsl_qdma;
+	struct device_node *np = pdev->dev.of_node;
+
+	ret = of_property_read_u32(np, "dma-channels", &chans);
+	if (ret) {
+		dev_err(&pdev->dev, "Can't get dma-channels.\n");
+		return ret;
+	}
+
+	ret = of_property_read_u32(np, "block-offset", &blk_off);
+	if (ret) {
+		dev_err(&pdev->dev, "Can't get block-offset.\n");
+		return ret;
+	}
+
+	ret = of_property_read_u32(np, "block-number", &blk_num);
+	if (ret) {
+		dev_err(&pdev->dev, "Can't get block-number.\n");
+		return ret;
+	}
+
+	blk_num = min_t(int, blk_num, num_online_cpus());
+
+	len = sizeof(*fsl_qdma);
+	fsl_qdma = devm_kzalloc(&pdev->dev, len, GFP_KERNEL);
+	if (!fsl_qdma)
+		return -ENOMEM;
+
+	len = sizeof(*fsl_chan) * chans;
+	fsl_qdma->chans = devm_kzalloc(&pdev->dev, len, GFP_KERNEL);
+	if (!fsl_qdma->chans)
+		return -ENOMEM;
+
+	len = sizeof(struct fsl_qdma_queue *) * blk_num;
+	fsl_qdma->status = devm_kzalloc(&pdev->dev, len, GFP_KERNEL);
+	if (!fsl_qdma->status)
+		return -ENOMEM;
+
+	len = sizeof(int) * blk_num;
+	fsl_qdma->queue_irq = devm_kzalloc(&pdev->dev, len, GFP_KERNEL);
+	if (!fsl_qdma->queue_irq)
+		return -ENOMEM;
+
+	ret = of_property_read_u32(np, "fsl,dma-queues", &queues);
+	if (ret) {
+		dev_err(&pdev->dev, "Can't get queues.\n");
+		return ret;
+	}
+
+	fsl_qdma->desc_allocated = 0;
+	fsl_qdma->n_chans = chans;
+	fsl_qdma->n_queues = queues;
+	fsl_qdma->block_number = blk_num;
+	fsl_qdma->block_offset = blk_off;
+
+	mutex_init(&fsl_qdma->fsl_qdma_mutex);
+
+	for (i = 0; i < fsl_qdma->block_number; i++) {
+		fsl_qdma->status[i] = fsl_qdma_prep_status_queue(pdev);
+		if (!fsl_qdma->status[i])
+			return -ENOMEM;
+	}
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	fsl_qdma->ctrl_base = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(fsl_qdma->ctrl_base))
+		return PTR_ERR(fsl_qdma->ctrl_base);
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+	fsl_qdma->status_base = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(fsl_qdma->status_base))
+		return PTR_ERR(fsl_qdma->status_base);
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 2);
+	fsl_qdma->block_base = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(fsl_qdma->block_base))
+		return PTR_ERR(fsl_qdma->block_base);
+
+	fsl_qdma->queue = fsl_qdma_alloc_queue_resources(pdev, fsl_qdma);
+	if (!fsl_qdma->queue)
+		return -ENOMEM;
+
+	ret = fsl_qdma_irq_init(pdev, fsl_qdma);
+	if (ret)
+		return ret;
+
+	fsl_qdma->irq_base = platform_get_irq_byname(pdev, "qdma-queue0");
+	fsl_qdma->feature = of_property_read_bool(np, "big-endian");
+	INIT_LIST_HEAD(&fsl_qdma->dma_dev.channels);
+
+	for (i = 0; i < fsl_qdma->n_chans; i++) {
+		struct fsl_qdma_chan *fsl_chan = &fsl_qdma->chans[i];
+
+		fsl_chan->qdma = fsl_qdma;
+		fsl_chan->queue = fsl_qdma->queue + i % (fsl_qdma->n_queues *
+							fsl_qdma->block_number);
+		fsl_chan->vchan.desc_free = fsl_qdma_free_desc;
+		vchan_init(&fsl_chan->vchan, &fsl_qdma->dma_dev);
+	}
+
+	dma_cap_set(DMA_MEMCPY, fsl_qdma->dma_dev.cap_mask);
+
+	fsl_qdma->dma_dev.dev = &pdev->dev;
+	fsl_qdma->dma_dev.device_free_chan_resources =
+		fsl_qdma_free_chan_resources;
+	fsl_qdma->dma_dev.device_alloc_chan_resources =
+		fsl_qdma_alloc_chan_resources;
+	fsl_qdma->dma_dev.device_tx_status = dma_cookie_status;
+	fsl_qdma->dma_dev.device_prep_dma_memcpy = fsl_qdma_prep_memcpy;
+	fsl_qdma->dma_dev.device_issue_pending = fsl_qdma_issue_pending;
+	fsl_qdma->dma_dev.device_synchronize = fsl_qdma_synchronize;
+	fsl_qdma->dma_dev.device_terminate_all = fsl_qdma_terminate_all;
+
+	dma_set_mask(&pdev->dev, DMA_BIT_MASK(40));
+
+	platform_set_drvdata(pdev, fsl_qdma);
+
+	ret = dma_async_device_register(&fsl_qdma->dma_dev);
+	if (ret) {
+		dev_err(&pdev->dev,
+			"Can't register NXP Layerscape qDMA engine.\n");
+		return ret;
+	}
+
+	ret = fsl_qdma_reg_init(fsl_qdma);
+	if (ret) {
+		dev_err(&pdev->dev, "Can't Initialize the qDMA engine.\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+static void fsl_qdma_cleanup_vchan(struct dma_device *dmadev)
+{
+	struct fsl_qdma_chan *chan, *_chan;
+
+	list_for_each_entry_safe(chan, _chan,
+				 &dmadev->channels, vchan.chan.device_node) {
+		list_del(&chan->vchan.chan.device_node);
+		tasklet_kill(&chan->vchan.task);
+	}
+}
+
+static int fsl_qdma_remove(struct platform_device *pdev)
+{
+	int i;
+	struct fsl_qdma_queue *status;
+	struct device_node *np = pdev->dev.of_node;
+	struct fsl_qdma_engine *fsl_qdma = platform_get_drvdata(pdev);
+
+	fsl_qdma_irq_exit(pdev, fsl_qdma);
+	fsl_qdma_cleanup_vchan(&fsl_qdma->dma_dev);
+	of_dma_controller_free(np);
+	dma_async_device_unregister(&fsl_qdma->dma_dev);
+
+	for (i = 0; i < fsl_qdma->block_number; i++) {
+		status = fsl_qdma->status[i];
+		dma_free_coherent(&pdev->dev, sizeof(struct fsl_qdma_format) *
+				  status->n_cq, status->cq, status->bus_addr);
+	}
+	return 0;
+}
+
+static const struct of_device_id fsl_qdma_dt_ids[] = {
+	{ .compatible = "fsl,ls1021a-qdma", },
+	{ /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(of, fsl_qdma_dt_ids);
+
+static struct platform_driver fsl_qdma_driver = {
+	.driver		= {
+		.name	= "fsl-qdma",
+		.of_match_table = fsl_qdma_dt_ids,
+	},
+	.probe          = fsl_qdma_probe,
+	.remove		= fsl_qdma_remove,
+};
+
+module_platform_driver(fsl_qdma_driver);
+
+MODULE_ALIAS("platform:fsl-qdma");
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("NXP Layerscape qDMA engine driver");
+8 -8
drivers/dma/fsldma.c
···
 static void set_sr(struct fsldma_chan *chan, u32 val)
 {
-	DMA_OUT(chan, &chan->regs->sr, val, 32);
+	FSL_DMA_OUT(chan, &chan->regs->sr, val, 32);
 }

 static u32 get_sr(struct fsldma_chan *chan)
 {
-	return DMA_IN(chan, &chan->regs->sr, 32);
+	return FSL_DMA_IN(chan, &chan->regs->sr, 32);
 }

 static void set_mr(struct fsldma_chan *chan, u32 val)
 {
-	DMA_OUT(chan, &chan->regs->mr, val, 32);
+	FSL_DMA_OUT(chan, &chan->regs->mr, val, 32);
 }

 static u32 get_mr(struct fsldma_chan *chan)
 {
-	return DMA_IN(chan, &chan->regs->mr, 32);
+	return FSL_DMA_IN(chan, &chan->regs->mr, 32);
 }

 static void set_cdar(struct fsldma_chan *chan, dma_addr_t addr)
 {
-	DMA_OUT(chan, &chan->regs->cdar, addr | FSL_DMA_SNEN, 64);
+	FSL_DMA_OUT(chan, &chan->regs->cdar, addr | FSL_DMA_SNEN, 64);
 }

 static dma_addr_t get_cdar(struct fsldma_chan *chan)
 {
-	return DMA_IN(chan, &chan->regs->cdar, 64) & ~FSL_DMA_SNEN;
+	return FSL_DMA_IN(chan, &chan->regs->cdar, 64) & ~FSL_DMA_SNEN;
 }

 static void set_bcr(struct fsldma_chan *chan, u32 val)
 {
-	DMA_OUT(chan, &chan->regs->bcr, val, 32);
+	FSL_DMA_OUT(chan, &chan->regs->bcr, val, 32);
 }

 static u32 get_bcr(struct fsldma_chan *chan)
 {
-	return DMA_IN(chan, &chan->regs->bcr, 32);
+	return FSL_DMA_IN(chan, &chan->regs->bcr, 32);
 }

 /*
+52 -24
drivers/dma/fsldma.h
···
 #define to_fsl_desc(lh) container_of(lh, struct fsl_desc_sw, node)
 #define tx_to_fsl_desc(tx) container_of(tx, struct fsl_desc_sw, async_tx)

-#ifndef __powerpc64__
-static u64 in_be64(const u64 __iomem *addr)
+#ifdef CONFIG_PPC
+#define fsl_ioread32(p)		in_le32(p)
+#define fsl_ioread32be(p)	in_be32(p)
+#define fsl_iowrite32(v, p)	out_le32(p, v)
+#define fsl_iowrite32be(v, p)	out_be32(p, v)
+
+#ifdef __powerpc64__
+#define fsl_ioread64(p)		in_le64(p)
+#define fsl_ioread64be(p)	in_be64(p)
+#define fsl_iowrite64(v, p)	out_le64(p, v)
+#define fsl_iowrite64be(v, p)	out_be64(p, v)
+#else
+static u64 fsl_ioread64(const u64 __iomem *addr)
 {
-	return ((u64)in_be32((u32 __iomem *)addr) << 32) |
-		(in_be32((u32 __iomem *)addr + 1));
+	u32 fsl_addr = lower_32_bits(addr);
+	u64 fsl_addr_hi = (u64)in_le32((u32 *)(fsl_addr + 1)) << 32;
+
+	return fsl_addr_hi | in_le32((u32 *)fsl_addr);
 }

-static void out_be64(u64 __iomem *addr, u64 val)
-{
-	out_be32((u32 __iomem *)addr, val >> 32);
-	out_be32((u32 __iomem *)addr + 1, (u32)val);
-}
-
-/* There is no asm instructions for 64 bits reverse loads and stores */
-static u64 in_le64(const u64 __iomem *addr)
-{
-	return ((u64)in_le32((u32 __iomem *)addr + 1) << 32) |
-		(in_le32((u32 __iomem *)addr));
-}
-
-static void out_le64(u64 __iomem *addr, u64 val)
+static void fsl_iowrite64(u64 val, u64 __iomem *addr)
 {
 	out_le32((u32 __iomem *)addr + 1, val >> 32);
 	out_le32((u32 __iomem *)addr, (u32)val);
 }
+
+static u64 fsl_ioread64be(const u64 __iomem *addr)
+{
+	u32 fsl_addr = lower_32_bits(addr);
+	u64 fsl_addr_hi = (u64)in_be32((u32 *)fsl_addr) << 32;
+
+	return fsl_addr_hi | in_be32((u32 *)(fsl_addr + 1));
+}
+
+static void fsl_iowrite64be(u64 val, u64 __iomem *addr)
+{
+	out_be32((u32 __iomem *)addr, val >> 32);
+	out_be32((u32 __iomem *)addr + 1, (u32)val);
+}
+#endif
 #endif

-#define DMA_IN(fsl_chan, addr, width)					\
-		(((fsl_chan)->feature & FSL_DMA_BIG_ENDIAN) ?		\
-			in_be##width(addr) : in_le##width(addr))
-#define DMA_OUT(fsl_chan, addr, val, width)				\
-		(((fsl_chan)->feature & FSL_DMA_BIG_ENDIAN) ?		\
-			out_be##width(addr, val) : out_le##width(addr, val))
+#if defined(CONFIG_ARM64) || defined(CONFIG_ARM)
+#define fsl_ioread32(p)		ioread32(p)
+#define fsl_ioread32be(p)	ioread32be(p)
+#define fsl_iowrite32(v, p)	iowrite32(v, p)
+#define fsl_iowrite32be(v, p)	iowrite32be(v, p)
+#define fsl_ioread64(p)		ioread64(p)
+#define fsl_ioread64be(p)	ioread64be(p)
+#define fsl_iowrite64(v, p)	iowrite64(v, p)
+#define fsl_iowrite64be(v, p)	iowrite64be(v, p)
+#endif
+
+#define FSL_DMA_IN(fsl_dma, addr, width)			\
+		(((fsl_dma)->feature & FSL_DMA_BIG_ENDIAN) ?	\
+			fsl_ioread##width##be(addr) : fsl_ioread##width(addr))
+
+#define FSL_DMA_OUT(fsl_dma, addr, val, width)			\
+		(((fsl_dma)->feature & FSL_DMA_BIG_ENDIAN) ?	\
+			fsl_iowrite##width##be(val, addr) : fsl_iowrite	\
+			##width(val, addr))

 #define DMA_TO_CPU(fsl_chan, d, width)					\
 	(((fsl_chan)->feature & FSL_DMA_BIG_ENDIAN) ?			\
+3 -5
drivers/dma/imx-dma.c
···
 /*
  * imxdma_sg_next - prepare next chunk for scatter-gather DMA emulation
  */
-static inline int imxdma_sg_next(struct imxdma_desc *d)
+static inline void imxdma_sg_next(struct imxdma_desc *d)
 {
 	struct imxdma_channel *imxdmac = to_imxdma_chan(d->desc.chan);
 	struct imxdma_engine *imxdma = imxdmac->imxdma;
 	struct scatterlist *sg = d->sg;
-	unsigned long now;
+	size_t now;

-	now = min(d->len, sg_dma_len(sg));
+	now = min_t(size_t, d->len, sg_dma_len(sg));
 	if (d->len != IMX_DMA_LENGTH_LOOP)
 		d->len -= now;
···
 		imx_dmav1_readl(imxdma, DMA_DAR(imxdmac->channel)),
 		imx_dmav1_readl(imxdma, DMA_SAR(imxdmac->channel)),
 		imx_dmav1_readl(imxdma, DMA_CNTR(imxdmac->channel)));
-
-	return now;
 }

 static void imxdma_enable_hw(struct imxdma_desc *d)
+37 -12
drivers/dma/imx-sdma.c
···
 	unsigned long			watermark_level;
 	u32				shp_addr, per_addr;
 	enum dma_status			status;
+	bool				context_loaded;
 	struct imx_dma_data		data;
 	struct work_struct		terminate_worker;
 };
···
 	unsigned int			irq;
 	dma_addr_t			bd0_phys;
 	struct sdma_buffer_descriptor	*bd0;
+	/* clock ratio for AHB:SDMA core. 1:1 is 1, 2:1 is 0*/
+	bool				clk_ratio;
 };

 static int sdma_config_write(struct dma_chan *chan,
···
 		dev_err(sdma->dev, "Timeout waiting for CH0 ready\n");

 	/* Set bits of CONFIG register with dynamic context switching */
-	if (readl(sdma->regs + SDMA_H_CONFIG) == 0)
-		writel_relaxed(SDMA_H_CONFIG_CSM, sdma->regs + SDMA_H_CONFIG);
+	reg = readl(sdma->regs + SDMA_H_CONFIG);
+	if ((reg & SDMA_H_CONFIG_CSM) == 0) {
+		reg |= SDMA_H_CONFIG_CSM;
+		writel_relaxed(reg, sdma->regs + SDMA_H_CONFIG);
+	}

 	return ret;
 }
···
 	int ret;
 	unsigned long flags;

-	buf_virt = dma_alloc_coherent(NULL, size, &buf_phys, GFP_KERNEL);
+	buf_virt = dma_alloc_coherent(sdma->dev, size, &buf_phys, GFP_KERNEL);
 	if (!buf_virt) {
 		return -ENOMEM;
 	}
···
 	spin_unlock_irqrestore(&sdma->channel_0_lock, flags);

-	dma_free_coherent(NULL, size, buf_virt, buf_phys);
+	dma_free_coherent(sdma->dev, size, buf_virt, buf_phys);

 	return ret;
 }
···
 	int ret;
 	unsigned long flags;

+	if (sdmac->context_loaded)
+		return 0;
+
 	if (sdmac->direction == DMA_DEV_TO_MEM)
 		load_address = sdmac->pc_from_device;
 	else if (sdmac->direction == DMA_DEV_TO_DEV)
···
 	spin_unlock_irqrestore(&sdma->channel_0_lock, flags);

+	sdmac->context_loaded = true;
+
 	return ret;
 }
···
 	sdmac->desc = NULL;
 	spin_unlock_irqrestore(&sdmac->vc.lock, flags);
 	vchan_dma_desc_free_list(&sdmac->vc, &head);
+	sdmac->context_loaded = false;
 }

 static int sdma_disable_channel_async(struct dma_chan *chan)
···
 {
 	int ret = -EBUSY;

-	sdma->bd0 = dma_alloc_coherent(NULL, PAGE_SIZE, &sdma->bd0_phys,
-					GFP_NOWAIT);
+	sdma->bd0 = dma_alloc_coherent(sdma->dev, PAGE_SIZE, &sdma->bd0_phys,
+				       GFP_NOWAIT);
 	if (!sdma->bd0) {
 		ret = -ENOMEM;
 		goto out;
···
 	u32 bd_size = desc->num_bd * sizeof(struct sdma_buffer_descriptor);
 	int ret = 0;

-	desc->bd = dma_alloc_coherent(NULL, bd_size, &desc->bd_phys,
-					GFP_NOWAIT);
+	desc->bd = dma_alloc_coherent(desc->sdmac->sdma->dev, bd_size,
+				      &desc->bd_phys, GFP_NOWAIT);
 	if (!desc->bd) {
 		ret = -ENOMEM;
 		goto out;
···
 {
 	u32 bd_size = desc->num_bd * sizeof(struct sdma_buffer_descriptor);

-	dma_free_coherent(NULL, bd_size, desc->bd, desc->bd_phys);
+	dma_free_coherent(desc->sdmac->sdma->dev, bd_size, desc->bd,
+			  desc->bd_phys);
 }

 static void sdma_desc_free(struct virt_dma_desc *vd)
···
 	if (ret)
 		goto disable_clk_ipg;

+	if (clk_get_rate(sdma->clk_ahb) == clk_get_rate(sdma->clk_ipg))
+		sdma->clk_ratio = 1;
+
 	/* Be sure SDMA has not started yet */
 	writel_relaxed(0, sdma->regs + SDMA_H_C0PTR);

-	sdma->channel_control = dma_alloc_coherent(NULL,
+	sdma->channel_control = dma_alloc_coherent(sdma->dev,
 			MAX_DMA_CHANNELS * sizeof (struct sdma_channel_control) +
 			sizeof(struct sdma_context_data),
 			&ccb_phys, GFP_KERNEL);
···
 	writel_relaxed(0x4050, sdma->regs + SDMA_CHN0ADDR);

 	/* Set bits of CONFIG register but with static context switching */
-	/* FIXME: Check whether to set ACR bit depending on clock ratios */
-	writel_relaxed(0, sdma->regs + SDMA_H_CONFIG);
+	if (sdma->clk_ratio)
+		writel_relaxed(SDMA_H_CONFIG_ACR, sdma->regs + SDMA_H_CONFIG);
+	else
+		writel_relaxed(0, sdma->regs + SDMA_H_CONFIG);

 	writel_relaxed(ccb_phys, sdma->regs + SDMA_H_C0PTR);
···
 static bool sdma_filter_fn(struct dma_chan *chan, void *fn_param)
 {
 	struct sdma_channel *sdmac = to_sdma_chan(chan);
+	struct sdma_engine *sdma = sdmac->sdma;
 	struct imx_dma_data *data = fn_param;

 	if (!imx_dma_is_general_purpose(chan))
+		return false;
+
+	/* return false if it's not the right device */
+	if (sdma->dev->of_node != data->of_node)
 		return false;

 	sdmac->data = *data;
···
 	 * be set to sdmac->event_id1.
 	 */
 	data.dma_request2 = 0;
+	data.of_node = ofdma->of_node;

 	return dma_request_channel(mask, sdma_filter_fn, &data);
 }
···
 	sdma->dma_device.device_prep_dma_memcpy = sdma_prep_memcpy;
 	sdma->dma_device.device_issue_pending = sdma_issue_pending;
 	sdma->dma_device.dev->dma_parms = &sdma->dma_parms;
+	sdma->dma_device.copy_align = 2;
 	dma_set_max_seg_size(sdma->dma_device.dev, SDMA_BD_MAX_CNT);

 	platform_set_drvdata(pdev, sdma);
+12
drivers/dma/ioat/dma.c
···
 ioat_alloc_ring(struct dma_chan *c, int order, gfp_t flags)
 {
 	struct ioatdma_chan *ioat_chan = to_ioat_chan(c);
+	struct ioatdma_device *ioat_dma = ioat_chan->ioat_dma;
 	struct ioat_ring_ent **ring;
 	int total_descs = 1 << order;
 	int i, chunks;
···
 		hw->next = next->txd.phys;
 	}
 	ring[i]->hw->next = ring[0]->txd.phys;
+
+	/* setup descriptor pre-fetching for v3.4 */
+	if (ioat_dma->cap & IOAT_CAP_DPS) {
+		u16 drsctl = IOAT_CHAN_DRSZ_2MB | IOAT_CHAN_DRS_EN;
+
+		if (chunks == 1)
+			drsctl |= IOAT_CHAN_DRS_AUTOWRAP;
+
+		writew(drsctl, ioat_chan->reg_base + IOAT_CHAN_DRSCTL_OFFSET);
+	}

 	return ring;
 }
+1 -1
drivers/dma/ioat/dma.h
···
 #include "registers.h"
 #include "hw.h"

-#define IOAT_DMA_VERSION  "4.00"
+#define IOAT_DMA_VERSION  "5.00"

 #define IOAT_DMA_DCA_ANY_CPU		~0
+3
drivers/dma/ioat/hw.h
···
 #define PCI_DEVICE_ID_INTEL_IOAT_SKX	0x2021

+#define PCI_DEVICE_ID_INTEL_IOAT_ICX	0x0b00
+
 #define IOAT_VER_1_2            0x12    /* Version 1.2 */
 #define IOAT_VER_2_0            0x20    /* Version 2.0 */
 #define IOAT_VER_3_0            0x30    /* Version 3.0 */
 #define IOAT_VER_3_2            0x32    /* Version 3.2 */
 #define IOAT_VER_3_3            0x33    /* Version 3.3 */
+#define IOAT_VER_3_4            0x34    /* Version 3.4 */


 int system_has_dca_enabled(struct pci_dev *pdev);
+38 -2
drivers/dma/ioat/init.c
···
 	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_BDXDE2) },
 	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_BDXDE3) },

+	/* I/OAT v3.4 platforms */
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_ICX) },
+
 	{ 0, }
 };
 MODULE_DEVICE_TABLE(pci, ioat_pci_tbl);
···
 static int ioat_dca_enabled = 1;
 module_param(ioat_dca_enabled, int, 0644);
 MODULE_PARM_DESC(ioat_dca_enabled, "control support of dca service (default: 1)");
-int ioat_pending_level = 4;
+int ioat_pending_level = 7;
 module_param(ioat_pending_level, int, 0644);
 MODULE_PARM_DESC(ioat_pending_level,
-		 "high-water mark for pushing ioat descriptors (default: 4)");
+		 "high-water mark for pushing ioat descriptors (default: 7)");
 static char ioat_interrupt_style[32] = "msix";
 module_param_string(ioat_interrupt_style, ioat_interrupt_style,
 		    sizeof(ioat_interrupt_style), 0644);
···
 	ioat_stop(ioat_chan);
 	ioat_reset_hw(ioat_chan);

+	/* Put LTR to idle */
+	if (ioat_dma->version >= IOAT_VER_3_4)
+		writeb(IOAT_CHAN_LTR_SWSEL_IDLE,
+		       ioat_chan->reg_base + IOAT_CHAN_LTR_SWSEL_OFFSET);
+
 	spin_lock_bh(&ioat_chan->cleanup_lock);
 	spin_lock_bh(&ioat_chan->prep_lock);
 	descs = ioat_ring_space(ioat_chan);
···
 	set_bit(IOAT_RUN, &ioat_chan->state);
 	spin_unlock_bh(&ioat_chan->prep_lock);
 	spin_unlock_bh(&ioat_chan->cleanup_lock);
+
+	/* Setting up LTR values for 3.4 or later */
+	if (ioat_chan->ioat_dma->version >= IOAT_VER_3_4) {
+		u32 lat_val;
+
+		lat_val = IOAT_CHAN_LTR_ACTIVE_SNVAL |
+			IOAT_CHAN_LTR_ACTIVE_SNLATSCALE |
+			IOAT_CHAN_LTR_ACTIVE_SNREQMNT;
+		writel(lat_val, ioat_chan->reg_base +
+				IOAT_CHAN_LTR_ACTIVE_OFFSET);
+
+		lat_val = IOAT_CHAN_LTR_IDLE_SNVAL |
+			  IOAT_CHAN_LTR_IDLE_SNLATSCALE |
+			  IOAT_CHAN_LTR_IDLE_SNREQMNT;
+		writel(lat_val, ioat_chan->reg_base +
+				IOAT_CHAN_LTR_IDLE_OFFSET);
+
+		/* Select to active */
+		writeb(IOAT_CHAN_LTR_SWSEL_ACTIVE,
+		       ioat_chan->reg_base +
+		       IOAT_CHAN_LTR_SWSEL_OFFSET);
+	}

 	ioat_start_null_desc(ioat_chan);
···
 	if (err)
 		return err;

+	if (ioat_dma->cap & IOAT_CAP_DPS)
+		writeb(ioat_pending_level + 1,
+		       ioat_dma->reg_base + IOAT_PREFETCH_LIMIT_OFFSET);
+
 	return 0;
 }
···
 	pci_set_drvdata(pdev, device);

 	device->version = readb(device->reg_base + IOAT_VER_OFFSET);
+	if (device->version >= IOAT_VER_3_4)
+		ioat_dca_enabled = 0;
 	if (device->version >= IOAT_VER_3_0) {
 		if (is_skx_ioat(pdev))
 			device->version = IOAT_VER_3_2;
+24
drivers/dma/ioat/registers.h
···
 #define IOAT_CAP_PQ				0x00000200
 #define IOAT_CAP_DWBES				0x00002000
 #define IOAT_CAP_RAID16SS			0x00020000
+#define IOAT_CAP_DPS				0x00800000
+
+#define IOAT_PREFETCH_LIMIT_OFFSET		0x4C	/* CHWPREFLMT */

 #define IOAT_CHANNEL_MMIO_SIZE			0x80	/* Each Channel MMIO space is this size */
···
 	IOAT_CHANERR_WRITE_DATA_ERR)

 #define IOAT_CHANERR_MASK_OFFSET		0x2C	/* 32-bit Channel Error Register */
+
+#define IOAT_CHAN_DRSCTL_OFFSET			0xB6
+#define IOAT_CHAN_DRSZ_4KB			0x0000
+#define IOAT_CHAN_DRSZ_8KB			0x0001
+#define IOAT_CHAN_DRSZ_2MB			0x0009
+#define IOAT_CHAN_DRS_EN			0x0100
+#define IOAT_CHAN_DRS_AUTOWRAP			0x0200
+
+#define IOAT_CHAN_LTR_SWSEL_OFFSET		0xBC
+#define IOAT_CHAN_LTR_SWSEL_ACTIVE		0x0
+#define IOAT_CHAN_LTR_SWSEL_IDLE		0x1
+
+#define IOAT_CHAN_LTR_ACTIVE_OFFSET		0xC0
+#define IOAT_CHAN_LTR_ACTIVE_SNVAL		0x0000	/* 0 us */
+#define IOAT_CHAN_LTR_ACTIVE_SNLATSCALE		0x0800	/* 1us scale */
+#define IOAT_CHAN_LTR_ACTIVE_SNREQMNT		0x8000	/* snoop req enable */
+
+#define IOAT_CHAN_LTR_IDLE_OFFSET		0xC4
+#define IOAT_CHAN_LTR_IDLE_SNVAL		0x0258	/* 600 us */
+#define IOAT_CHAN_LTR_IDLE_SNLATSCALE		0x0800	/* 1us scale */
+#define IOAT_CHAN_LTR_IDLE_SNREQMNT		0x8000	/* snoop req enable */

 #endif /* _IOAT_REGISTERS_H_ */
+52 -9
drivers/dma/k3dma.c
···
 #define CX_SRC			0x814
 #define CX_DST			0x818
 #define CX_CFG			0x81c
-#define AXI_CFG			0x820
-#define AXI_CFG_DEFAULT		0x201201

 #define CX_LLI_CHAIN_EN		0x2
 #define CX_CFG_EN		0x1
···
 	struct dma_pool		*pool;
 	u32			dma_channels;
 	u32			dma_requests;
+	u32			dma_channel_mask;
 	unsigned int		irq;
 };

+
+#define K3_FLAG_NOCLK	BIT(1)
+
+struct k3dma_soc_data {
+	unsigned long flags;
+};
+

 #define to_k3_dma(dmadev) container_of(dmadev, struct k3_dma_dev, slave)
···
 	writel_relaxed(hw->count, phy->base + CX_CNT0);
 	writel_relaxed(hw->saddr, phy->base + CX_SRC);
 	writel_relaxed(hw->daddr, phy->base + CX_DST);
-	writel_relaxed(AXI_CFG_DEFAULT, phy->base + AXI_CFG);
 	writel_relaxed(hw->config, phy->base + CX_CFG);
 }
···
 	/* check new channel request in d->chan_pending */
 	spin_lock_irq(&d->lock);
 	for (pch = 0; pch < d->dma_channels; pch++) {
+		if (!(d->dma_channel_mask & (1 << pch)))
+			continue;
+
 		p = &d->phy[pch];

 		if (p->vchan == NULL && !list_empty(&d->chan_pending)) {
···
 	spin_unlock_irq(&d->lock);

 	for (pch = 0; pch < d->dma_channels; pch++) {
+		if (!(d->dma_channel_mask & (1 << pch)))
+			continue;
+
 		if (pch_alloc & (1 << pch)) {
 			p = &d->phy[pch];
 			c = p->vchan;
···
 	return 0;
 }

+static const struct k3dma_soc_data k3_v1_dma_data = {
+	.flags = 0,
+};
+
+static const struct k3dma_soc_data asp_v1_dma_data = {
+	.flags = K3_FLAG_NOCLK,
+};
+
 static const struct of_device_id k3_pdma_dt_ids[] = {
-	{ .compatible = "hisilicon,k3-dma-1.0", },
+	{ .compatible = "hisilicon,k3-dma-1.0",
+	  .data = &k3_v1_dma_data
+	},
+	{ .compatible = "hisilicon,hisi-pcm-asp-dma-1.0",
+	  .data = &asp_v1_dma_data
+	},
 	{}
 };
 MODULE_DEVICE_TABLE(of, k3_pdma_dt_ids);
···
 static int k3_dma_probe(struct platform_device *op)
 {
+	const struct k3dma_soc_data *soc_data;
 	struct k3_dma_dev *d;
 	const struct of_device_id *of_id;
 	struct resource *iores;
···
 	if (!d)
 		return -ENOMEM;

+	soc_data = device_get_match_data(&op->dev);
+	if (!soc_data)
+		return -EINVAL;
+
 	d->base = devm_ioremap_resource(&op->dev, iores);
 	if (IS_ERR(d->base))
 		return PTR_ERR(d->base);
···
 				"dma-channels", &d->dma_channels);
 		of_property_read_u32((&op->dev)->of_node,
 				"dma-requests", &d->dma_requests);
+		ret = of_property_read_u32((&op->dev)->of_node,
+				"dma-channel-mask", &d->dma_channel_mask);
+		if (ret) {
+			dev_warn(&op->dev,
+				 "dma-channel-mask doesn't exist, considering all as available.\n");
+			d->dma_channel_mask = (u32)~0UL;
+		}
 	}

-	d->clk = devm_clk_get(&op->dev, NULL);
-	if (IS_ERR(d->clk)) {
-		dev_err(&op->dev, "no dma clk\n");
-		return PTR_ERR(d->clk);
+	if (!(soc_data->flags & K3_FLAG_NOCLK)) {
+		d->clk = devm_clk_get(&op->dev, NULL);
+		if (IS_ERR(d->clk)) {
+			dev_err(&op->dev, "no dma clk\n");
+			return PTR_ERR(d->clk);
+		}
 	}

 	irq = platform_get_irq(op, 0);
···
 		return -ENOMEM;

 	for (i = 0; i < d->dma_channels; i++) {
-		struct k3_dma_phy *p = &d->phy[i];
+		struct k3_dma_phy *p;

+		if (!(d->dma_channel_mask & BIT(i)))
+			continue;
+
+		p = &d->phy[i];
 		p->idx = i;
 		p->base = d->base + i * 0x40;
 	}
+1
drivers/dma/mcf-edma.c
···
 		mcf_chan->edma = mcf_edma;
 		mcf_chan->slave_id = i;
 		mcf_chan->idle = true;
+		mcf_chan->dma_dir = DMA_NONE;
 		mcf_chan->vchan.desc_free = fsl_edma_free_desc;
 		vchan_init(&mcf_chan->vchan, &mcf_edma->dma_dev);
 		iowrite32(0x0, &regs->tcd[i].csr);
+5 -2
drivers/dma/mv_xor.c
···
 	mv_chan->op_in_desc = XOR_MODE_IN_DESC;
 
 	dma_dev = &mv_chan->dmadev;
+	dma_dev->dev = &pdev->dev;
 	mv_chan->xordev = xordev;
 
 	/*
···
 	dma_dev->device_free_chan_resources = mv_xor_free_chan_resources;
 	dma_dev->device_tx_status = mv_xor_status;
 	dma_dev->device_issue_pending = mv_xor_issue_pending;
-	dma_dev->dev = &pdev->dev;
 
 	/* set prep routines based on capability */
 	if (dma_has_cap(DMA_INTERRUPT, dma_dev->cap_mask))
···
 		dma_has_cap(DMA_MEMCPY, dma_dev->cap_mask) ? "cpy " : "",
 		dma_has_cap(DMA_INTERRUPT, dma_dev->cap_mask) ? "intr " : "");
 
-	dma_async_device_register(dma_dev);
+	ret = dma_async_device_register(dma_dev);
+	if (ret)
+		goto err_free_irq;
+
 	return mv_chan;
 
 err_free_irq:
-1
drivers/dma/pl330.c
···
 	struct dma_pl330_desc *desc;
 	unsigned long flags;
 	struct pl330_dmac *pl330 = pch->dmac;
-	LIST_HEAD(list);
 	bool power_down = false;
 
 	pm_runtime_get_sync(pl330->ddma.dev);
+2 -2
drivers/dma/qcom/bam_dma.c
···
 		num_alloc += DIV_ROUND_UP(sg_dma_len(sg), BAM_FIFO_SIZE);
 
 	/* allocate enough room to accomodate the number of entries */
-	async_desc = kzalloc(sizeof(*async_desc) +
-			(num_alloc * sizeof(struct bam_desc_hw)), GFP_NOWAIT);
+	async_desc = kzalloc(struct_size(async_desc, desc, num_alloc),
+			     GFP_NOWAIT);
 
 	if (!async_desc)
 		goto err_out;
+11 -8
drivers/dma/qcom/hidma.c
···
 	desc = &mdesc->desc;
 	last_cookie = desc->cookie;
 
+	llstat = hidma_ll_status(mdma->lldev, mdesc->tre_ch);
+
 	spin_lock_irqsave(&mchan->lock, irqflags);
+	if (llstat == DMA_COMPLETE) {
+		mchan->last_success = last_cookie;
+		result.result = DMA_TRANS_NOERROR;
+	} else {
+		result.result = DMA_TRANS_ABORTED;
+	}
+
 	dma_cookie_complete(desc);
 	spin_unlock_irqrestore(&mchan->lock, irqflags);
 
-	llstat = hidma_ll_status(mdma->lldev, mdesc->tre_ch);
 	dmaengine_desc_get_callback(desc, &cb);
 
 	dma_run_dependencies(desc);
 
 	spin_lock_irqsave(&mchan->lock, irqflags);
 	list_move(&mdesc->node, &mchan->free);
-
-	if (llstat == DMA_COMPLETE) {
-		mchan->last_success = last_cookie;
-		result.result = DMA_TRANS_NOERROR;
-	} else
-		result.result = DMA_TRANS_ABORTED;
-
 	spin_unlock_irqrestore(&mchan->lock, irqflags);
 
 	dmaengine_desc_callback_invoke(&cb, &result);
···
 	if (!mdesc)
 		return NULL;
 
+	mdesc->desc.flags = flags;
 	hidma_ll_set_transfer_params(mdma->lldev, mdesc->tre_ch,
 				     src, dest, len, flags,
 				     HIDMA_TRE_MEMCPY);
···
 	if (!mdesc)
 		return NULL;
 
+	mdesc->desc.flags = flags;
 	hidma_ll_set_transfer_params(mdma->lldev, mdesc->tre_ch,
 				     value, dest, len, flags,
 				     HIDMA_TRE_MEMSET);
+1 -2
drivers/dma/qcom/hidma_mgmt.c
···
 		hidma_mgmt_of_populate_channels(child);
 	}
 #endif
-	platform_driver_register(&hidma_mgmt_driver);
+	return platform_driver_register(&hidma_mgmt_driver);
 
-	return 0;
 }
 module_init(hidma_mgmt_init);
 MODULE_LICENSE("GPL v2");
-2
drivers/dma/sa11x0-dma.c
···
 	struct sa11x0_dma_chan *c = to_sa11x0_dma_chan(chan);
 	struct sa11x0_dma_dev *d = to_sa11x0_dma(chan->device);
 	struct sa11x0_dma_phy *p;
-	LIST_HEAD(head);
 	unsigned long flags;
 
 	dev_dbg(d->slave.dev, "vchan %p: pause\n", &c->vc);
···
 	struct sa11x0_dma_chan *c = to_sa11x0_dma_chan(chan);
 	struct sa11x0_dma_dev *d = to_sa11x0_dma(chan->device);
 	struct sa11x0_dma_phy *p;
-	LIST_HEAD(head);
 	unsigned long flags;
 
 	dev_dbg(d->slave.dev, "vchan %p: resume\n", &c->vc);
+2
drivers/dma/sh/usb-dmac.c
···
 #endif /* CONFIG_PM */
 
 static const struct dev_pm_ops usb_dmac_pm = {
+	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
+				      pm_runtime_force_resume)
 	SET_RUNTIME_PM_OPS(usb_dmac_runtime_suspend, usb_dmac_runtime_resume,
 			   NULL)
 };
+4 -15
drivers/dma/sprd-dma.c
···
 
 static int sprd_dma_alloc_chan_resources(struct dma_chan *chan)
 {
-	struct sprd_dma_chn *schan = to_sprd_dma_chan(chan);
-	int ret;
-
-	ret = pm_runtime_get_sync(chan->device->dev);
-	if (ret < 0)
-		return ret;
-
-	schan->dev_id = SPRD_DMA_SOFTWARE_UID;
-	return 0;
+	return pm_runtime_get_sync(chan->device->dev);
 }
 
 static void sprd_dma_free_chan_resources(struct dma_chan *chan)
···
 static bool sprd_dma_filter_fn(struct dma_chan *chan, void *param)
 {
 	struct sprd_dma_chn *schan = to_sprd_dma_chan(chan);
-	struct sprd_dma_dev *sdev = to_sprd_dma_dev(&schan->vc.chan);
-	u32 req = *(u32 *)param;
+	u32 slave_id = *(u32 *)param;
 
-	if (req < sdev->total_chns)
-		return req == schan->chn_num + 1;
-	else
-		return false;
+	schan->dev_id = slave_id;
+	return true;
 }
 
 static int sprd_dma_probe(struct platform_device *pdev)
+1 -5
drivers/dma/st_fdma.c
···
 	struct st_fdma_desc *fdesc;
 	int i;
 
-	fdesc = kzalloc(sizeof(*fdesc) +
-			sizeof(struct st_fdma_sw_node) * sg_len, GFP_NOWAIT);
+	fdesc = kzalloc(struct_size(fdesc, node, sg_len), GFP_NOWAIT);
 	if (!fdesc)
 		return NULL;
···
 	struct st_fdma_chan *fchan = to_st_fdma_chan(chan);
 	struct rproc *rproc = fchan->fdev->slim_rproc->rproc;
 	unsigned long flags;
-
-	LIST_HEAD(head);
 
 	dev_dbg(fchan->fdev->dev, "%s: freeing chan:%d\n",
 		__func__, fchan->vchan.chan.chan_id);
···
 static int st_fdma_pause(struct dma_chan *chan)
 {
 	unsigned long flags;
-	LIST_HEAD(head);
 	struct st_fdma_chan *fchan = to_st_fdma_chan(chan);
 	int ch_id = fchan->vchan.chan.chan_id;
 	unsigned long cmd = FDMA_CMD_PAUSE(ch_id);
+59 -12
drivers/dma/stm32-dma.c
···
 #include <linux/of_device.h>
 #include <linux/of_dma.h>
 #include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
 #include <linux/reset.h>
 #include <linux/sched.h>
 #include <linux/slab.h>
···
 {
 	struct stm32_dma_chan *chan = devid;
 	struct stm32_dma_device *dmadev = stm32_dma_get_dev(chan);
-	u32 status, scr;
+	u32 status, scr, sfcr;
 
 	spin_lock(&chan->vchan.lock);
 
 	status = stm32_dma_irq_status(chan);
 	scr = stm32_dma_read(dmadev, STM32_DMA_SCR(chan->id));
+	sfcr = stm32_dma_read(dmadev, STM32_DMA_SFCR(chan->id));
 
 	if (status & STM32_DMA_TCI) {
 		stm32_dma_irq_clear(chan, STM32_DMA_TCI);
···
 	if (status & STM32_DMA_FEI) {
 		stm32_dma_irq_clear(chan, STM32_DMA_FEI);
 		status &= ~STM32_DMA_FEI;
-		if (!(scr & STM32_DMA_SCR_EN))
-			dev_err(chan2dev(chan), "FIFO Error\n");
-		else
-			dev_dbg(chan2dev(chan), "FIFO over/underrun\n");
+		if (sfcr & STM32_DMA_SFCR_FEIE) {
+			if (!(scr & STM32_DMA_SCR_EN))
+				dev_err(chan2dev(chan), "FIFO Error\n");
+			else
+				dev_dbg(chan2dev(chan), "FIFO over/underrun\n");
+		}
 	}
 	if (status) {
 		stm32_dma_irq_clear(chan, status);
···
 	int ret;
 
 	chan->config_init = false;
-	ret = clk_prepare_enable(dmadev->clk);
-	if (ret < 0) {
-		dev_err(chan2dev(chan), "clk_prepare_enable failed: %d\n", ret);
+
+	ret = pm_runtime_get_sync(dmadev->ddev.dev);
+	if (ret < 0)
 		return ret;
-	}
 
 	ret = stm32_dma_disable_chan(chan);
 	if (ret < 0)
-		clk_disable_unprepare(dmadev->clk);
+		pm_runtime_put(dmadev->ddev.dev);
 
 	return ret;
 }
···
 		spin_unlock_irqrestore(&chan->vchan.lock, flags);
 	}
 
-	clk_disable_unprepare(dmadev->clk);
+	pm_runtime_put(dmadev->ddev.dev);
 
 	vchan_free_chan_resources(to_virt_chan(c));
 }
···
 		return PTR_ERR(dmadev->clk);
 	}
 
+	ret = clk_prepare_enable(dmadev->clk);
+	if (ret < 0) {
+		dev_err(&pdev->dev, "clk_prep_enable error: %d\n", ret);
+		return ret;
+	}
+
 	dmadev->mem2mem = of_property_read_bool(pdev->dev.of_node,
 						"st,mem2mem");
···
 	ret = dma_async_device_register(dd);
 	if (ret)
-		return ret;
+		goto clk_free;
 
 	for (i = 0; i < STM32_DMA_MAX_CHANNELS; i++) {
 		chan = &dmadev->chan[i];
···
 	platform_set_drvdata(pdev, dmadev);
 
+	pm_runtime_set_active(&pdev->dev);
+	pm_runtime_enable(&pdev->dev);
+	pm_runtime_get_noresume(&pdev->dev);
+	pm_runtime_put(&pdev->dev);
+
 	dev_info(&pdev->dev, "STM32 DMA driver registered\n");
 
 	return 0;
 
 err_unregister:
 	dma_async_device_unregister(dd);
+clk_free:
+	clk_disable_unprepare(dmadev->clk);
 
 	return ret;
 }
+
+#ifdef CONFIG_PM
+static int stm32_dma_runtime_suspend(struct device *dev)
+{
+	struct stm32_dma_device *dmadev = dev_get_drvdata(dev);
+
+	clk_disable_unprepare(dmadev->clk);
+
+	return 0;
+}
+
+static int stm32_dma_runtime_resume(struct device *dev)
+{
+	struct stm32_dma_device *dmadev = dev_get_drvdata(dev);
+	int ret;
+
+	ret = clk_prepare_enable(dmadev->clk);
+	if (ret) {
+		dev_err(dev, "failed to prepare_enable clock\n");
+		return ret;
+	}
+
+	return 0;
+}
+#endif
+
+static const struct dev_pm_ops stm32_dma_pm_ops = {
+	SET_RUNTIME_PM_OPS(stm32_dma_runtime_suspend,
+			   stm32_dma_runtime_resume, NULL)
+};
 
 static struct platform_driver stm32_dma_driver = {
 	.driver = {
 		.name = "stm32-dma",
 		.of_match_table = stm32_dma_of_match,
+		.pm = &stm32_dma_pm_ops,
 	},
 };
+47 -11
drivers/dma/stm32-dmamux.c
···
 #include <linux/module.h>
 #include <linux/of_device.h>
 #include <linux/of_dma.h>
+#include <linux/pm_runtime.h>
 #include <linux/reset.h>
 #include <linux/slab.h>
 #include <linux/spinlock.h>
···
 	stm32_dmamux_write(dmamux->iomem, STM32_DMAMUX_CCR(mux->chan_id), 0);
 	clear_bit(mux->chan_id, dmamux->dma_inuse);
 
-	if (!IS_ERR(dmamux->clk))
-		clk_disable(dmamux->clk);
+	pm_runtime_put_sync(dev);
 
 	spin_unlock_irqrestore(&dmamux->lock, flags);
···
 	/* Set dma request */
 	spin_lock_irqsave(&dmamux->lock, flags);
-	if (!IS_ERR(dmamux->clk)) {
-		ret = clk_enable(dmamux->clk);
-		if (ret < 0) {
-			spin_unlock_irqrestore(&dmamux->lock, flags);
-			dev_err(&pdev->dev, "clk_prep_enable issue: %d\n", ret);
-			goto error;
-		}
+	ret = pm_runtime_get_sync(&pdev->dev);
+	if (ret < 0) {
+		spin_unlock_irqrestore(&dmamux->lock, flags);
+		goto error;
 	}
 	spin_unlock_irqrestore(&dmamux->lock, flags);
···
 		dev_warn(&pdev->dev, "DMAMUX defaulting on %u requests\n",
 			 stm32_dmamux->dmamux_requests);
 	}
+	pm_runtime_get_noresume(&pdev->dev);
 
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 	iomem = devm_ioremap_resource(&pdev->dev, res);
···
 	stm32_dmamux->dmarouter.route_free = stm32_dmamux_free;
 
 	platform_set_drvdata(pdev, stm32_dmamux);
+	pm_runtime_set_active(&pdev->dev);
+	pm_runtime_enable(&pdev->dev);
 
 	if (!IS_ERR(stm32_dmamux->clk)) {
 		ret = clk_prepare_enable(stm32_dmamux->clk);
···
 		}
 	}
 
+	pm_runtime_get_noresume(&pdev->dev);
+
 	/* Reset the dmamux */
 	for (i = 0; i < stm32_dmamux->dma_requests; i++)
 		stm32_dmamux_write(stm32_dmamux->iomem, STM32_DMAMUX_CCR(i), 0);
 
-	if (!IS_ERR(stm32_dmamux->clk))
-		clk_disable(stm32_dmamux->clk);
+	pm_runtime_put(&pdev->dev);
 
 	return of_dma_router_register(node, stm32_dmamux_route_allocate,
 				      &stm32_dmamux->dmarouter);
 }
+
+#ifdef CONFIG_PM
+static int stm32_dmamux_runtime_suspend(struct device *dev)
+{
+	struct platform_device *pdev =
+		container_of(dev, struct platform_device, dev);
+	struct stm32_dmamux_data *stm32_dmamux = platform_get_drvdata(pdev);
+
+	clk_disable_unprepare(stm32_dmamux->clk);
+
+	return 0;
+}
+
+static int stm32_dmamux_runtime_resume(struct device *dev)
+{
+	struct platform_device *pdev =
+		container_of(dev, struct platform_device, dev);
+	struct stm32_dmamux_data *stm32_dmamux = platform_get_drvdata(pdev);
+	int ret;
+
+	ret = clk_prepare_enable(stm32_dmamux->clk);
+	if (ret) {
+		dev_err(&pdev->dev, "failed to prepare_enable clock\n");
+		return ret;
+	}
+
+	return 0;
+}
+#endif
+
+static const struct dev_pm_ops stm32_dmamux_pm_ops = {
+	SET_RUNTIME_PM_OPS(stm32_dmamux_runtime_suspend,
+			   stm32_dmamux_runtime_resume, NULL)
+};
 
 static const struct of_device_id stm32_dmamux_match[] = {
 	{ .compatible = "st,stm32h7-dmamux" },
···
 	.driver = {
 		.name = "stm32-dmamux",
 		.of_match_table = stm32_dmamux_match,
+		.pm = &stm32_dmamux_pm_ops,
 	},
 };
+49 -7
drivers/dma/stm32-mdma.c
···
 #include <linux/of_device.h>
 #include <linux/of_dma.h>
 #include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
 #include <linux/reset.h>
 #include <linux/slab.h>
···
 		return -ENOMEM;
 	}
 
-	ret = clk_prepare_enable(dmadev->clk);
-	if (ret < 0) {
-		dev_err(chan2dev(chan), "clk_prepare_enable failed: %d\n", ret);
+	ret = pm_runtime_get_sync(dmadev->ddev.dev);
+	if (ret < 0)
 		return ret;
-	}
 
 	ret = stm32_mdma_disable_chan(chan);
 	if (ret < 0)
-		clk_disable_unprepare(dmadev->clk);
+		pm_runtime_put(dmadev->ddev.dev);
 
 	return ret;
 }
···
 		spin_unlock_irqrestore(&chan->vchan.lock, flags);
 	}
 
-	clk_disable_unprepare(dmadev->clk);
+	pm_runtime_put(dmadev->ddev.dev);
 	vchan_free_chan_resources(to_virt_chan(c));
 	dmam_pool_destroy(chan->desc_pool);
 	chan->desc_pool = NULL;
···
 	dmadev->nr_channels = nr_channels;
 	dmadev->nr_requests = nr_requests;
-	device_property_read_u32_array(&pdev->dev, "st,ahb-addr-masks",
+	ret = device_property_read_u32_array(&pdev->dev, "st,ahb-addr-masks",
 				       dmadev->ahb_addr_masks,
 				       count);
+	if (ret)
+		return ret;
 	dmadev->nr_ahb_addr_masks = count;
 
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
···
 		ret = PTR_ERR(dmadev->clk);
 		if (ret == -EPROBE_DEFER)
 			dev_info(&pdev->dev, "Missing controller clock\n");
+		return ret;
+	}
+
+	ret = clk_prepare_enable(dmadev->clk);
+	if (ret < 0) {
+		dev_err(&pdev->dev, "clk_prep_enable error: %d\n", ret);
 		return ret;
 	}
···
 	}
 
 	platform_set_drvdata(pdev, dmadev);
+	pm_runtime_set_active(&pdev->dev);
+	pm_runtime_enable(&pdev->dev);
+	pm_runtime_get_noresume(&pdev->dev);
+	pm_runtime_put(&pdev->dev);
 
 	dev_info(&pdev->dev, "STM32 MDMA driver registered\n");
···
 	return ret;
 }
 
+#ifdef CONFIG_PM
+static int stm32_mdma_runtime_suspend(struct device *dev)
+{
+	struct stm32_mdma_device *dmadev = dev_get_drvdata(dev);
+
+	clk_disable_unprepare(dmadev->clk);
+
+	return 0;
+}
+
+static int stm32_mdma_runtime_resume(struct device *dev)
+{
+	struct stm32_mdma_device *dmadev = dev_get_drvdata(dev);
+	int ret;
+
+	ret = clk_prepare_enable(dmadev->clk);
+	if (ret) {
+		dev_err(dev, "failed to prepare_enable clock\n");
+		return ret;
+	}
+
+	return 0;
+}
+#endif
+
+static const struct dev_pm_ops stm32_mdma_pm_ops = {
+	SET_RUNTIME_PM_OPS(stm32_mdma_runtime_suspend,
+			   stm32_mdma_runtime_resume, NULL)
+};
+
 static struct platform_driver stm32_mdma_driver = {
 	.probe = stm32_mdma_probe,
 	.driver = {
 		.name = "stm32-mdma",
 		.of_match_table = stm32_mdma_of_match,
+		.pm = &stm32_mdma_pm_ops,
 	},
 };
+28 -17
drivers/dma/tegra20-apb-dma.c
···
 
 #include "dmaengine.h"
 
+#define CREATE_TRACE_POINTS
+#include <trace/events/tegra_apb_dma.h>
+
 #define TEGRA_APBDMA_GENERAL			0x0
 #define TEGRA_APBDMA_GENERAL_ENABLE		BIT(31)
···
 };
 
 /*
- * tegra_dma_sg_req: Dma request details to configure hardware. This
+ * tegra_dma_sg_req: DMA request details to configure hardware. This
  * contains the details for one transfer to configure DMA hw.
  * The client's request for data transfer can be broken into multiple
  * sub-transfer as per requester details and hw support.
···
 struct tegra_dma_sg_req {
 	struct tegra_dma_channel_regs	ch_regs;
-	int				req_len;
+	unsigned int			req_len;
 	bool				configured;
 	bool				last_sg;
 	struct list_head		node;
···
 struct tegra_dma_desc {
 	struct dma_async_tx_descriptor	txd;
-	int				bytes_requested;
-	int				bytes_transferred;
+	unsigned int			bytes_requested;
+	unsigned int			bytes_transferred;
 	enum dma_status			dma_status;
 	struct list_head		node;
 	struct list_head		tx_list;
···
 /* tegra_dma_channel: Channel specific information */
 struct tegra_dma_channel {
 	struct dma_chan		dma_chan;
-	char			name[30];
+	char			name[12];
 	bool			config_init;
 	int			id;
 	int			irq;
···
 	struct tegra_dma_sg_req *hsgreq = NULL;
 
 	if (list_empty(&tdc->pending_sg_req)) {
-		dev_err(tdc2dev(tdc), "Dma is running without req\n");
+		dev_err(tdc2dev(tdc), "DMA is running without req\n");
 		tegra_dma_stop(tdc);
 		return false;
 	}
···
 	hsgreq = list_first_entry(&tdc->pending_sg_req, typeof(*hsgreq), node);
 	if (!hsgreq->configured) {
 		tegra_dma_stop(tdc);
-		dev_err(tdc2dev(tdc), "Error in dma transfer, aborting dma\n");
+		dev_err(tdc2dev(tdc), "Error in DMA transfer, aborting DMA\n");
 		tegra_dma_abort_all(tdc);
 		return false;
 	}
···
 	sgreq = list_first_entry(&tdc->pending_sg_req, typeof(*sgreq), node);
 	dma_desc = sgreq->dma_desc;
-	dma_desc->bytes_transferred += sgreq->req_len;
+	/* if we dma for long enough the transfer count will wrap */
+	dma_desc->bytes_transferred =
+		(dma_desc->bytes_transferred + sgreq->req_len) %
+		dma_desc->bytes_requested;
 
 	/* Callback need to be call */
 	if (!dma_desc->cb_count)
···
 		dmaengine_desc_get_callback(&dma_desc->txd, &cb);
 		cb_count = dma_desc->cb_count;
 		dma_desc->cb_count = 0;
+		trace_tegra_dma_complete_cb(&tdc->dma_chan, cb_count,
+					    cb.callback);
 		spin_unlock_irqrestore(&tdc->lock, flags);
 		while (cb_count--)
 			dmaengine_desc_callback_invoke(&cb, NULL);
···
 	spin_lock_irqsave(&tdc->lock, flags);
 
+	trace_tegra_dma_isr(&tdc->dma_chan, irq);
 	status = tdc_read(tdc, TEGRA_APBDMA_CHAN_STATUS);
 	if (status & TEGRA_APBDMA_STATUS_ISE_EOC) {
 		tdc_write(tdc, TEGRA_APBDMA_CHAN_STATUS, status);
···
 		dma_set_residue(txstate, residual);
 	}
 
+	trace_tegra_dma_tx_status(&tdc->dma_chan, cookie, txstate);
 	spin_unlock_irqrestore(&tdc->lock, flags);
 	return ret;
 }
···
 		return 0;
 
 	default:
-		dev_err(tdc2dev(tdc), "Dma direction is not supported\n");
+		dev_err(tdc2dev(tdc), "DMA direction is not supported\n");
 		return -EINVAL;
 	}
 	return -EINVAL;
···
 	enum dma_slave_buswidth slave_bw;
 
 	if (!tdc->config_init) {
-		dev_err(tdc2dev(tdc), "dma channel is not configured\n");
+		dev_err(tdc2dev(tdc), "DMA channel is not configured\n");
 		return NULL;
 	}
 	if (sg_len < 1) {
···
 	dma_desc = tegra_dma_desc_get(tdc);
 	if (!dma_desc) {
-		dev_err(tdc2dev(tdc), "Dma descriptors not available\n");
+		dev_err(tdc2dev(tdc), "DMA descriptors not available\n");
 		return NULL;
 	}
 	INIT_LIST_HEAD(&dma_desc->tx_list);
···
 		if ((len & 3) || (mem & 3) ||
 		    (len > tdc->tdma->chip_data->max_dma_count)) {
 			dev_err(tdc2dev(tdc),
-				"Dma length/memory address is not supported\n");
+				"DMA length/memory address is not supported\n");
 			tegra_dma_desc_put(tdc, dma_desc);
 			return NULL;
 		}
 
 		sg_req = tegra_dma_sg_req_get(tdc);
 		if (!sg_req) {
-			dev_err(tdc2dev(tdc), "Dma sg-req not available\n");
+			dev_err(tdc2dev(tdc), "DMA sg-req not available\n");
 			tegra_dma_desc_put(tdc, dma_desc);
 			return NULL;
 		}
···
 	 * terminating the DMA.
 	 */
 	if (tdc->busy) {
-		dev_err(tdc2dev(tdc), "Request not allowed when dma running\n");
+		dev_err(tdc2dev(tdc), "Request not allowed when DMA running\n");
 		return NULL;
 	}
···
 	while (remain_len) {
 		sg_req = tegra_dma_sg_req_get(tdc);
 		if (!sg_req) {
-			dev_err(tdc2dev(tdc), "Dma sg-req not available\n");
+			dev_err(tdc2dev(tdc), "DMA sg-req not available\n");
 			tegra_dma_desc_put(tdc, dma_desc);
 			return NULL;
 		}
···
 		return -ENODEV;
 	}
 
-	tdma = devm_kzalloc(&pdev->dev, sizeof(*tdma) + cdata->nr_channels *
-			sizeof(struct tegra_dma_channel), GFP_KERNEL);
+	tdma = devm_kzalloc(&pdev->dev,
+			    struct_size(tdma, channels, cdata->nr_channels),
+			    GFP_KERNEL);
 	if (!tdma)
 		return -ENOMEM;
+3 -2
drivers/dma/tegra210-adma.c
···
 		return -ENODEV;
 	}
 
-	tdma = devm_kzalloc(&pdev->dev, sizeof(*tdma) + cdata->nr_channels *
-			sizeof(struct tegra_adma_chan), GFP_KERNEL);
+	tdma = devm_kzalloc(&pdev->dev,
+			    struct_size(tdma, channels, cdata->nr_channels),
+			    GFP_KERNEL);
 	if (!tdma)
 		return -ENOMEM;
+2 -2
drivers/dma/timb_dma.c
···
 			DRIVER_NAME))
 		return -EBUSY;
 
-	td = kzalloc(sizeof(struct timb_dma) +
-		sizeof(struct timb_dma_chan) * pdata->nr_channels, GFP_KERNEL);
+	td = kzalloc(struct_size(td, channels, pdata->nr_channels),
+		     GFP_KERNEL);
 	if (!td) {
 		err = -ENOMEM;
 		goto err_release_region;
+101 -71
drivers/dma/xilinx/xilinx_dma.c
···
 #define XILINX_DMA_DMASR_DMA_DEC_ERR		BIT(6)
 #define XILINX_DMA_DMASR_DMA_SLAVE_ERR		BIT(5)
 #define XILINX_DMA_DMASR_DMA_INT_ERR		BIT(4)
+#define XILINX_DMA_DMASR_SG_MASK		BIT(3)
 #define XILINX_DMA_DMASR_IDLE			BIT(1)
 #define XILINX_DMA_DMASR_HALTED		BIT(0)
 #define XILINX_DMA_DMASR_DELAY_MASK		GENMASK(31, 24)
···
 #define XILINX_DMA_REG_BTT		0x28
 
 /* AXI DMA Specific Masks/Bit fields */
-#define XILINX_DMA_MAX_TRANS_LEN	GENMASK(22, 0)
+#define XILINX_DMA_MAX_TRANS_LEN_MIN	8
+#define XILINX_DMA_MAX_TRANS_LEN_MAX	23
+#define XILINX_DMA_V2_MAX_TRANS_LEN_MAX	26
 #define XILINX_DMA_CR_COALESCE_MAX	GENMASK(23, 16)
 #define XILINX_DMA_CR_CYCLIC_BD_EN_MASK	BIT(4)
 #define XILINX_DMA_CR_COALESCE_SHIFT	16
···
  * @dev: Device Structure
  * @common: DMA device structure
  * @chan: Driver specific DMA channel
- * @has_sg: Specifies whether Scatter-Gather is present or not
  * @mcdma: Specifies whether Multi-Channel is present or not
  * @flush_on_fsync: Flush on frame sync
  * @ext_addr: Indicates 64 bit addressing is supported by dma device
···
  * @rxs_clk: DMA s2mm stream clock
  * @nr_channels: Number of channels DMA device supports
  * @chan_id: DMA channel identifier
+ * @max_buffer_len: Max buffer length
  */
 struct xilinx_dma_device {
 	void __iomem *regs;
 	struct device *dev;
 	struct dma_device common;
 	struct xilinx_dma_chan *chan[XILINX_DMA_MAX_CHANS_PER_DEVICE];
-	bool has_sg;
 	bool mcdma;
 	u32 flush_on_fsync;
 	bool ext_addr;
···
 	struct clk *rxs_clk;
 	u32 nr_channels;
 	u32 chan_id;
+	u32 max_buffer_len;
 };
 
 /* Macros */
···
 }
 
 /**
+ * xilinx_dma_calc_copysize - Calculate the amount of data to copy
+ * @chan: Driver specific DMA channel
+ * @size: Total data that needs to be copied
+ * @done: Amount of data that has been already copied
+ *
+ * Return: Amount of data that has to be copied
+ */
+static int xilinx_dma_calc_copysize(struct xilinx_dma_chan *chan,
+				    int size, int done)
+{
+	size_t copy;
+
+	copy = min_t(size_t, size - done,
+		     chan->xdev->max_buffer_len);
+
+	if ((copy + done < size) &&
+	    chan->xdev->common.copy_align) {
+		/*
+		 * If this is not the last descriptor, make sure
+		 * the next one will be properly aligned
+		 */
+		copy = rounddown(copy,
+				 (1 << chan->xdev->common.copy_align));
+	}
+	return copy;
+}
+
+/**
  * xilinx_dma_tx_status - Get DMA transaction status
  * @dchan: DMA channel
  * @cookie: Transaction identifier
···
 		list_for_each_entry(segment, &desc->segments, node) {
 			hw = &segment->hw;
 			residue += (hw->control - hw->status) &
-				   XILINX_DMA_MAX_TRANS_LEN;
+				   chan->xdev->max_buffer_len;
 		}
 	}
 	spin_unlock_irqrestore(&chan->lock, flags);
···
 	struct xilinx_vdma_config *config = &chan->config;
 	struct xilinx_dma_tx_descriptor *desc, *tail_desc;
 	u32 reg, j;
-	struct xilinx_vdma_tx_segment *tail_segment;
+	struct xilinx_vdma_tx_segment *segment, *last = NULL;
+	int i = 0;
 
 	/* This function was invoked with lock held */
 	if (chan->err)
···
 				  struct xilinx_dma_tx_descriptor, node);
 	tail_desc = list_last_entry(&chan->pending_list,
 				    struct xilinx_dma_tx_descriptor, node);
-
-	tail_segment = list_last_entry(&tail_desc->segments,
-				       struct xilinx_vdma_tx_segment, node);
-
-	/*
-	 * If hardware is idle, then all descriptors on the running lists are
-	 * done, start new transfers
-	 */
-	if (chan->has_sg)
-		dma_ctrl_write(chan, XILINX_DMA_REG_CURDESC,
-				desc->async_tx.phys);
 
 	/* Configure the hardware using info in the config structure */
 	if (chan->has_vflip) {
···
 	else
 		reg &= ~XILINX_DMA_DMACR_FRAMECNT_EN;
 
-	/*
-	 * With SG, start with circular mode, so that BDs can be fetched.
-	 * In direct register mode, if not parking, enable circular mode
-	 */
-	if (chan->has_sg || !config->park)
-		reg |= XILINX_DMA_DMACR_CIRC_EN;
-
+	/* If not parking, enable circular mode */
 	if (config->park)
 		reg &= ~XILINX_DMA_DMACR_CIRC_EN;
+	else
+		reg |= XILINX_DMA_DMACR_CIRC_EN;
 
 	dma_ctrl_write(chan, XILINX_DMA_REG_DMACR, reg);
···
 		return;
 
 	/* Start the transfer */
-	if (chan->has_sg) {
-		dma_ctrl_write(chan, XILINX_DMA_REG_TAILDESC,
-				tail_segment->phys);
-		list_splice_tail_init(&chan->pending_list, &chan->active_list);
-		chan->desc_pendingcount = 0;
-	} else {
-		struct xilinx_vdma_tx_segment *segment, *last = NULL;
-		int i = 0;
+	if (chan->desc_submitcount < chan->num_frms)
+		i = chan->desc_submitcount;
 
-		if (chan->desc_submitcount < chan->num_frms)
-			i = chan->desc_submitcount;
-
-		list_for_each_entry(segment, &desc->segments, node) {
-			if (chan->ext_addr)
-				vdma_desc_write_64(chan,
-					XILINX_VDMA_REG_START_ADDRESS_64(i++),
-					segment->hw.buf_addr,
-					segment->hw.buf_addr_msb);
-			else
-				vdma_desc_write(chan,
+	list_for_each_entry(segment, &desc->segments, node) {
+		if (chan->ext_addr)
+			vdma_desc_write_64(chan,
+				XILINX_VDMA_REG_START_ADDRESS_64(i++),
+				segment->hw.buf_addr,
+				segment->hw.buf_addr_msb);
+		else
+			vdma_desc_write(chan,
 					XILINX_VDMA_REG_START_ADDRESS(i++),
 					segment->hw.buf_addr);
 
-			last = segment;
-		}
-
-		if (!last)
-			return;
-
-		/* HW expects these parameters to be same for one transaction */
-		vdma_desc_write(chan, XILINX_DMA_REG_HSIZE, last->hw.hsize);
-		vdma_desc_write(chan, XILINX_DMA_REG_FRMDLY_STRIDE,
-				last->hw.stride);
-		vdma_desc_write(chan, XILINX_DMA_REG_VSIZE, last->hw.vsize);
-
-		chan->desc_submitcount++;
-		chan->desc_pendingcount--;
-		list_del(&desc->node);
-		list_add_tail(&desc->node, &chan->active_list);
-		if (chan->desc_submitcount == chan->num_frms)
-			chan->desc_submitcount = 0;
+		last = segment;
 	}
+
+	if (!last)
+		return;
+
+	/* HW expects these parameters to be same for one transaction */
+	vdma_desc_write(chan, XILINX_DMA_REG_HSIZE, last->hw.hsize);
+	vdma_desc_write(chan, XILINX_DMA_REG_FRMDLY_STRIDE,
+			last->hw.stride);
+	vdma_desc_write(chan, XILINX_DMA_REG_VSIZE, last->hw.vsize);
+
+	chan->desc_submitcount++;
+	chan->desc_pendingcount--;
+	list_del(&desc->node);
+	list_add_tail(&desc->node, &chan->active_list);
+	if (chan->desc_submitcount == chan->num_frms)
+		chan->desc_submitcount = 0;
 
 	chan->idle = false;
 }
···
 
 		/* Start the transfer */
 		dma_ctrl_write(chan, XILINX_DMA_REG_BTT,
-				hw->control & XILINX_DMA_MAX_TRANS_LEN);
+				hw->control & chan->xdev->max_buffer_len);
 	}
 
 	list_splice_tail_init(&chan->pending_list, &chan->active_list);
···
 
 		/* Start the transfer */
 		dma_ctrl_write(chan, XILINX_DMA_REG_BTT,
-				hw->control & XILINX_DMA_MAX_TRANS_LEN);
+				hw->control & chan->xdev->max_buffer_len);
 	}
 
 	list_splice_tail_init(&chan->pending_list, &chan->active_list);
···
 	struct xilinx_cdma_tx_segment *segment;
 	struct xilinx_cdma_desc_hw *hw;
 
-	if (!len || len > XILINX_DMA_MAX_TRANS_LEN)
+	if (!len || len > chan->xdev->max_buffer_len)
 		return NULL;
 
 	desc = xilinx_dma_alloc_tx_descriptor(chan);
···
 			 * Calculate the maximum number of bytes to transfer,
 			 * making sure it is less than the hw limit
 			 */
-			copy = min_t(size_t, sg_dma_len(sg) - sg_used,
-				     XILINX_DMA_MAX_TRANS_LEN);
+			copy = xilinx_dma_calc_copysize(chan, sg_dma_len(sg),
+							sg_used);
 			hw = &segment->hw;
 
 			/* Fill in the descriptor */
···
 			 * Calculate the maximum number of bytes to transfer,
 			 * making sure it is less than the hw limit
 			 */
-			copy = min_t(size_t, period_len - sg_used,
-				     XILINX_DMA_MAX_TRANS_LEN);
+			copy = xilinx_dma_calc_copysize(chan, period_len,
+							sg_used);
 			hw = &segment->hw;
 			xilinx_axidma_buf(chan, hw, buf_addr, sg_used,
 					  period_len * i);
···
 
 	chan->dev = xdev->dev;
 	chan->xdev = xdev;
-	chan->has_sg = xdev->has_sg;
 	chan->desc_pendingcount = 0x0;
 	chan->ext_addr = xdev->ext_addr;
 	/* This variable ensures that descriptors are not
···
 	} else {
 		chan->start_transfer = xilinx_vdma_start_transfer;
 		chan->stop_transfer = xilinx_dma_stop_transfer;
+	}
+
+	/* check if SG is enabled (only for AXIDMA and CDMA) */
+	if (xdev->dma_config->dmatype != XDMA_TYPE_VDMA) {
+		if (dma_ctrl_read(chan, XILINX_DMA_REG_DMASR) &
+		    XILINX_DMA_DMASR_SG_MASK)
+			chan->has_sg = true;
+		dev_dbg(chan->dev, "ch %d: SG %s\n", chan->id,
+			chan->has_sg ? "enabled" : "disabled");
 	}
 
 	/* Initialize the tasklet */
···
 	struct xilinx_dma_device *xdev;
 	struct device_node *child, *np = pdev->dev.of_node;
 	struct resource *io;
-	u32 num_frames, addr_width;
+	u32 num_frames, addr_width, len_width;
 	int i, err;
 
 	/* Allocate and initialize the DMA engine structure */
···
 		return PTR_ERR(xdev->regs);
 
 	/* Retrieve the DMA engine properties from the device tree */
-	xdev->has_sg = of_property_read_bool(node, "xlnx,include-sg");
-	if (xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA)
+	xdev->max_buffer_len = GENMASK(XILINX_DMA_MAX_TRANS_LEN_MAX - 1, 0);
+
+	if (xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA) {
 		xdev->mcdma = of_property_read_bool(node, "xlnx,mcdma");
+		if (!of_property_read_u32(node, "xlnx,sg-length-width",
+					  &len_width)) {
+			if (len_width < XILINX_DMA_MAX_TRANS_LEN_MIN ||
+			    len_width > XILINX_DMA_V2_MAX_TRANS_LEN_MAX) {
+				dev_warn(xdev->dev,
+					 "invalid xlnx,sg-length-width property value. Using default width\n");
+			} else {
+				if (len_width > XILINX_DMA_MAX_TRANS_LEN_MAX)
+					dev_warn(xdev->dev, "Please ensure that IP supports buffer length > 23 bits\n");
+				xdev->max_buffer_len =
+					GENMASK(len_width - 1, 0);
+			}
+		}
+	}
 
 	if (xdev->dma_config->dmatype == XDMA_TYPE_VDMA) {
 		err = of_property_read_u32(node, "xlnx,num-fstores",
-1
drivers/tty/serial/8250/8250_lpss.c
···
 #ifdef CONFIG_SERIAL_8250_DMA
 static const struct dw_dma_platform_data qrk_serial_dma_pdata = {
 	.nr_channels = 2,
-	.is_private = true,
 	.chan_allocation_order = CHAN_ALLOCATION_ASCENDING,
 	.chan_priority = CHAN_PRIORITY_ASCENDING,
 	.block_size = 4095,
+5 -4
include/linux/dma/dw.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
 /*
  * Driver for the Synopsys DesignWare DMA Controller
  *
  * Copyright (C) 2007 Atmel Corporation
  * Copyright (C) 2010-2011 ST Microelectronics
  * Copyright (C) 2014 Intel Corporation
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
  */
 #ifndef _DMA_DW_H
 #define _DMA_DW_H
···
 #if IS_ENABLED(CONFIG_DW_DMAC_CORE)
 int dw_dma_probe(struct dw_dma_chip *chip);
 int dw_dma_remove(struct dw_dma_chip *chip);
+int idma32_dma_probe(struct dw_dma_chip *chip);
+int idma32_dma_remove(struct dw_dma_chip *chip);
 #else
 static inline int dw_dma_probe(struct dw_dma_chip *chip) { return -ENODEV; }
 static inline int dw_dma_remove(struct dw_dma_chip *chip) { return 0; }
+static inline int idma32_dma_probe(struct dw_dma_chip *chip) { return -ENODEV; }
+static inline int idma32_dma_remove(struct dw_dma_chip *chip) { return 0; }
 #endif /* CONFIG_DW_DMAC_CORE */
 
 #endif /* _DMA_DW_H */
+1 -11
include/linux/platform_data/dma-dw.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
 /*
  * Driver for the Synopsys DesignWare DMA Controller
  *
  * Copyright (C) 2007 Atmel Corporation
  * Copyright (C) 2010-2011 ST Microelectronics
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
  */
 #ifndef _PLATFORM_DATA_DMA_DW_H
 #define _PLATFORM_DATA_DMA_DW_H
···
 /**
  * struct dw_dma_platform_data - Controller configuration parameters
  * @nr_channels: Number of channels supported by hardware (max 8)
- * @is_private: The device channels should be marked as private and not for
- *	by the general purpose DMA channel allocator.
- * @is_memcpy: The device channels do support memory-to-memory transfers.
- * @is_idma32: The type of the DMA controller is iDMA32
  * @chan_allocation_order: Allocate channels starting from 0 or 7
  * @chan_priority: Set channel priority increasing from 0 to 7 or 7 to 0.
  * @block_size: Maximum block size supported by the controller
···
 struct dw_dma_platform_data {
 	unsigned int nr_channels;
-	bool is_private;
-	bool is_memcpy;
-	bool is_idma32;
 #define CHAN_ALLOCATION_ASCENDING	0	/* zero to seven */
 #define CHAN_ALLOCATION_DESCENDING	1	/* seven to zero */
 	unsigned char chan_allocation_order;
+1
include/linux/platform_data/dma-imx.h
···
 	int dma_request2; /* secondary DMA request line */
 	enum sdma_peripheral_type peripheral_type;
 	int priority;
+	struct device_node *of_node;
 };
 
 static inline int imx_dma_is_ipu(struct dma_chan *chan)
+61
include/trace/events/tegra_apb_dma.h
···
+#if !defined(_TRACE_TEGRA_APB_DMA_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_TEGRA_APM_DMA_H
+
+#include <linux/tracepoint.h>
+#include <linux/dmaengine.h>
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM tegra_apb_dma
+
+TRACE_EVENT(tegra_dma_tx_status,
+	TP_PROTO(struct dma_chan *dc, dma_cookie_t cookie, struct dma_tx_state *state),
+	TP_ARGS(dc, cookie, state),
+	TP_STRUCT__entry(
+		__string(chan, dev_name(&dc->dev->device))
+		__field(dma_cookie_t, cookie)
+		__field(__u32, residue)
+	),
+	TP_fast_assign(
+		__assign_str(chan, dev_name(&dc->dev->device));
+		__entry->cookie = cookie;
+		__entry->residue = state ? state->residue : (u32)-1;
+	),
+	TP_printk("channel %s: dma cookie %d, residue %u",
+		  __get_str(chan), __entry->cookie, __entry->residue)
+);
+
+TRACE_EVENT(tegra_dma_complete_cb,
+	TP_PROTO(struct dma_chan *dc, int count, void *ptr),
+	TP_ARGS(dc, count, ptr),
+	TP_STRUCT__entry(
+		__string(chan, dev_name(&dc->dev->device))
+		__field(int, count)
+		__field(void *, ptr)
+	),
+	TP_fast_assign(
+		__assign_str(chan, dev_name(&dc->dev->device));
+		__entry->count = count;
+		__entry->ptr = ptr;
+	),
+	TP_printk("channel %s: done %d, ptr %p",
+		  __get_str(chan), __entry->count, __entry->ptr)
+);
+
+TRACE_EVENT(tegra_dma_isr,
+	TP_PROTO(struct dma_chan *dc, int irq),
+	TP_ARGS(dc, irq),
+	TP_STRUCT__entry(
+		__string(chan, dev_name(&dc->dev->device))
+		__field(int, irq)
+	),
+	TP_fast_assign(
+		__assign_str(chan, dev_name(&dc->dev->device));
+		__entry->irq = irq;
+	),
+	TP_printk("%s: irq %d\n",  __get_str(chan), __entry->irq)
+);
+
+#endif /* _TRACE_TEGRADMA_H */
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>