Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
kernel os linux

Merge tag 'dmaengine-4.9-rc1' of git://git.infradead.org/users/vkoul/slave-dma

Pull dmaengine updates from Vinod Koul:
"This is bit large pile of code which bring in some nice additions:

- Error reporting: we have added a new mechanism for users of
dmaengine to register a callback_result which tells them the
result of the DMA transaction. Right now only one user (ntb) is
using it; a minimal client-side sketch follows this list.

- As discussed on the KS mailing list, NO_IRQ has no place in the
kernel, so this also removes NO_IRQ from the dmaengine subsystem
(both arm and ppc users)

- Support for IOMMU slave transfers and its implementation for arm.

- To get better build coverage, enable COMPILE_TEST for a bunch of
drivers, and fix the warnings and sparse complaints these turned up.

- Apart from the above, the usual updates spread across drivers"
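As context for the error-reporting item, here is a minimal, hypothetical client-side sketch of the new callback_result hook. dmaengine_prep_slave_single(), dmaengine_submit(), dma_async_issue_pending() and struct dmaengine_result are existing dmaengine API; the my_dev context, the my_dma_done() callback and the field names are illustrative assumptions, not code from this pull.

    #include <linux/dmaengine.h>
    #include <linux/completion.h>
    #include <linux/device.h>

    /* Hypothetical client context, for illustration only. */
    struct my_dev {
            struct device *dev;
            struct dma_chan *chan;
            struct completion dma_done;
    };

    /* New-style completion callback: receives a transfer result. */
    static void my_dma_done(void *param, const struct dmaengine_result *result)
    {
            struct my_dev *md = param;

            if (result->result != DMA_TRANS_NOERROR)
                    dev_err(md->dev, "DMA failed, residue %u\n", result->residue);
            complete(&md->dma_done);
    }

    static int my_start_dma(struct my_dev *md, dma_addr_t buf, size_t len)
    {
            struct dma_async_tx_descriptor *tx;

            tx = dmaengine_prep_slave_single(md->chan, buf, len,
                                             DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT);
            if (!tx)
                    return -ENOMEM;

            /* callback_result supersedes the status-less callback. */
            tx->callback_result = my_dma_done;
            tx->callback_param = md;

            dmaengine_submit(tx);
            dma_async_issue_pending(md->chan);
            return 0;
    }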

* tag 'dmaengine-4.9-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (169 commits)
async_pq_val: fix DMA memory leak
dmaengine: virt-dma: move function declarations
dmaengine: omap-dma: Enable burst and data pack for SG
DT: dmaengine: rcar-dmac: document R8A7743/5 support
dmaengine: fsldma: Unmap region obtained by of_iomap
dmaengine: jz4780: fix resource leaks on error exit return
dma-debug: fix ia64 build, use PHYS_PFN
dmaengine: coh901318: fix integer overflow when shifting more than 32 places
dmaengine: edma: avoid uninitialized variable use
dma-mapping: fix m32r build warning
dma-mapping: fix ia64 build, use PHYS_PFN
dmaengine: ti-dma-crossbar: enable COMPILE_TEST
dmaengine: omap-dma: enable COMPILE_TEST
dmaengine: edma: enable COMPILE_TEST
dmaengine: ti-dma-crossbar: Fix of_device_id data parameter usage
dmaengine: ti-dma-crossbar: Correct type for of_find_property() third parameter
dmaengine/ARM: omap-dma: Fix the DMAengine compile test on non OMAP configs
dmaengine: edma: Rename set_bits and remove unused clear_bits helper
dmaengine: edma: Use correct type for of_find_property() third parameter
dmaengine: edma: Fix of_device_id data parameter usage (legacy vs TPCC)
...

+1919 -744
+17 -5
Documentation/DMA-API.txt
··· 277 277 recommended that you never use these unless you really know what the 278 278 cache width is. 279 279 280 + dma_addr_t 281 + dma_map_resource(struct device *dev, phys_addr_t phys_addr, size_t size, 282 + enum dma_data_direction dir, unsigned long attrs) 283 + 284 + void 285 + dma_unmap_resource(struct device *dev, dma_addr_t addr, size_t size, 286 + enum dma_data_direction dir, unsigned long attrs) 287 + 288 + API for mapping and unmapping for MMIO resources. All the notes and 289 + warnings for the other mapping APIs apply here. The API should only be 290 + used to map device MMIO resources, mapping of RAM is not permitted. 291 + 280 292 int 281 293 dma_mapping_error(struct device *dev, dma_addr_t dma_addr) 282 294 283 - In some circumstances dma_map_single() and dma_map_page() will fail to create 284 - a mapping. A driver can check for these errors by testing the returned 285 - DMA address with dma_mapping_error(). A non-zero return value means the mapping 286 - could not be created and the driver should take appropriate action (e.g. 287 - reduce current DMA mapping usage or delay and try again later). 295 + In some circumstances dma_map_single(), dma_map_page() and dma_map_resource() 296 + will fail to create a mapping. A driver can check for these errors by testing 297 + the returned DMA address with dma_mapping_error(). A non-zero return value 298 + means the mapping could not be created and the driver should take appropriate 299 + action (e.g. reduce current DMA mapping usage or delay and try again later). 288 300 289 301 int 290 302 dma_map_sg(struct device *dev, struct scatterlist *sg,
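A hedged usage sketch for the dma_map_resource()/dma_unmap_resource() calls documented above: mapping a peripheral FIFO register (device MMIO, not RAM) for slave DMA. The dev and fifo_phys variables and the error handling are assumptions for illustration; the calls follow the signatures shown in the text.

    /* Sketch: map a device FIFO (MMIO, not RAM) for DMA. */
    dma_addr_t fifo_dma;

    fifo_dma = dma_map_resource(dev, fifo_phys, 4, DMA_FROM_DEVICE, 0);
    if (dma_mapping_error(dev, fifo_dma))
            return -ENOMEM;         /* or back off and retry, as suggested above */

    /* ... program the DMA engine with fifo_dma as its source address ... */

    dma_unmap_resource(dev, fifo_dma, 4, DMA_FROM_DEVICE, 0);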
+1
Documentation/devicetree/bindings/dma/fsl-imx-sdma.txt
··· 8 8 "fsl,imx51-sdma" 9 9 "fsl,imx53-sdma" 10 10 "fsl,imx6q-sdma" 11 + "fsl,imx7d-sdma" 11 12 The -to variants should be preferred since they allow to determine the 12 13 correct ROM script addresses needed for the driver to work without additional 13 14 firmware.
+3 -1
Documentation/devicetree/bindings/dma/renesas,rcar-dmac.txt
··· 1 - * Renesas R-Car DMA Controller Device Tree bindings 1 + * Renesas R-Car (RZ/G) DMA Controller Device Tree bindings 2 2 3 3 Renesas R-Car Generation 2 SoCs have multiple multi-channel DMA 4 4 controller instances named DMAC capable of serving multiple clients. Channels ··· 16 16 17 17 - compatible: "renesas,dmac-<soctype>", "renesas,rcar-dmac" as fallback. 18 18 Examples with soctypes are: 19 + - "renesas,dmac-r8a7743" (RZ/G1M) 20 + - "renesas,dmac-r8a7745" (RZ/G1E) 19 21 - "renesas,dmac-r8a7790" (R-Car H2) 20 22 - "renesas,dmac-r8a7791" (R-Car M2-W) 21 23 - "renesas,dmac-r8a7792" (R-Car V2H)
+1
Documentation/devicetree/bindings/dma/sun6i-dma.txt
··· 7 7 - compatible: Must be one of 8 8 "allwinner,sun6i-a31-dma" 9 9 "allwinner,sun8i-a23-dma" 10 + "allwinner,sun8i-a83t-dma" 10 11 "allwinner,sun8i-h3-dma" 11 12 - reg: Should contain the registers base address and length 12 13 - interrupts: Should contain a reference to the interrupt used by this device
+11
Documentation/dmaengine/provider.txt
··· 282 282 that is supposed to push the current 283 283 transaction descriptor to a pending queue, waiting 284 284 for issue_pending to be called. 285 + - In this structure the function pointer callback_result can be 286 + initialized in order for the submitter to be notified that a 287 + transaction has completed. In the earlier code the function pointer 288 + callback has been used. However it does not provide any status to the 289 + transaction and will be deprecated. The result structure defined as 290 + dmaengine_result that is passed in to callback_result has two fields: 291 + + result: This provides the transfer result defined by 292 + dmaengine_tx_result. Either success or some error 293 + condition. 294 + + residue: Provides the residue bytes of the transfer for those that 295 + support residue. 285 296 286 297 * device_issue_pending 287 298 - Takes the first transaction descriptor in the pending queue,
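On the provider side, the counterpart to the text above is filling a struct dmaengine_result and passing it to the completion helper, as the ioat changes later in this series do. A condensed sketch (desc and bytes_left are assumed driver-local names):

    /* Sketch: reporting a failed transfer back to the submitter. */
    struct dmaengine_result res;

    res.result = DMA_TRANS_READ_FAILED;   /* or DMA_TRANS_NOERROR on success */
    res.residue = bytes_left;             /* bytes left untransferred */

    dma_cookie_complete(&desc->txd);
    dma_descriptor_unmap(&desc->txd);
    dmaengine_desc_get_callback_invoke(&desc->txd, &res);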
+35
arch/arm/mach-s3c24xx/common.c
··· 33 33 #include <linux/delay.h> 34 34 #include <linux/io.h> 35 35 #include <linux/platform_data/dma-s3c24xx.h> 36 + #include <linux/dmaengine.h> 36 37 37 38 #include <mach/hardware.h> 38 39 #include <mach/regs-clock.h> ··· 440 439 [DMACH_USB_EP4] = { S3C24XX_DMA_APB, true, S3C24XX_DMA_CHANREQ(4, 3), }, 441 440 }; 442 441 442 + static const struct dma_slave_map s3c2440_dma_slave_map[] = { 443 + /* TODO: DMACH_XD0 */ 444 + /* TODO: DMACH_XD1 */ 445 + { "s3c2440-sdi", "rx-tx", (void *)DMACH_SDI }, 446 + { "s3c2410-spi.0", "rx", (void *)DMACH_SPI0 }, 447 + { "s3c2410-spi.0", "tx", (void *)DMACH_SPI0 }, 448 + { "s3c2410-spi.1", "rx", (void *)DMACH_SPI1 }, 449 + { "s3c2410-spi.1", "tx", (void *)DMACH_SPI1 }, 450 + { "s3c2440-uart.0", "rx", (void *)DMACH_UART0 }, 451 + { "s3c2440-uart.0", "tx", (void *)DMACH_UART0 }, 452 + { "s3c2440-uart.1", "rx", (void *)DMACH_UART1 }, 453 + { "s3c2440-uart.1", "tx", (void *)DMACH_UART1 }, 454 + { "s3c2440-uart.2", "rx", (void *)DMACH_UART2 }, 455 + { "s3c2440-uart.2", "tx", (void *)DMACH_UART2 }, 456 + { "s3c2440-uart.3", "rx", (void *)DMACH_UART3 }, 457 + { "s3c2440-uart.3", "tx", (void *)DMACH_UART3 }, 458 + /* TODO: DMACH_TIMER */ 459 + { "s3c24xx-iis", "rx", (void *)DMACH_I2S_IN }, 460 + { "s3c24xx-iis", "tx", (void *)DMACH_I2S_OUT }, 461 + { "samsung-ac97", "rx", (void *)DMACH_PCM_IN }, 462 + { "samsung-ac97", "tx", (void *)DMACH_PCM_OUT }, 463 + { "samsung-ac97", "rx", (void *)DMACH_MIC_IN }, 464 + { "s3c-hsudc", "rx0", (void *)DMACH_USB_EP1 }, 465 + { "s3c-hsudc", "rx1", (void *)DMACH_USB_EP2 }, 466 + { "s3c-hsudc", "rx2", (void *)DMACH_USB_EP3 }, 467 + { "s3c-hsudc", "rx3", (void *)DMACH_USB_EP4 }, 468 + { "s3c-hsudc", "tx0", (void *)DMACH_USB_EP1 }, 469 + { "s3c-hsudc", "tx1", (void *)DMACH_USB_EP2 }, 470 + { "s3c-hsudc", "tx2", (void *)DMACH_USB_EP3 }, 471 + { "s3c-hsudc", "tx3", (void *)DMACH_USB_EP4 } 472 + }; 473 + 443 474 static struct s3c24xx_dma_platdata s3c2440_dma_platdata = { 444 475 .num_phy_channels = 4, 445 476 .channels = s3c2440_dma_channels, 446 477 .num_channels = DMACH_MAX, 478 + .slave_map = s3c2440_dma_slave_map, 479 + .slavecnt = ARRAY_SIZE(s3c2440_dma_slave_map), 447 480 }; 448 481 449 482 struct platform_device s3c2440_device_dma = {
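The dma_slave_map table added above lets S3C24xx client drivers request their channels by name instead of carrying board-specific filter functions. A minimal sketch of the client side, assuming a platform driver's probe (names are illustrative):

    /* Sketch: requesting a channel that the slave map resolves. */
    struct dma_chan *rx_chan;

    rx_chan = dma_request_chan(&pdev->dev, "rx");
    if (IS_ERR(rx_chan))
            return PTR_ERR(rx_chan);

    /* ... dmaengine_slave_config(), prep, submit, issue_pending ... */

    dma_release_channel(rx_chan);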
+63
arch/arm/mm/dma-mapping.c
··· 2014 2014 __free_iova(mapping, iova, len); 2015 2015 } 2016 2016 2017 + /** 2018 + * arm_iommu_map_resource - map a device resource for DMA 2019 + * @dev: valid struct device pointer 2020 + * @phys_addr: physical address of resource 2021 + * @size: size of resource to map 2022 + * @dir: DMA transfer direction 2023 + */ 2024 + static dma_addr_t arm_iommu_map_resource(struct device *dev, 2025 + phys_addr_t phys_addr, size_t size, 2026 + enum dma_data_direction dir, unsigned long attrs) 2027 + { 2028 + struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev); 2029 + dma_addr_t dma_addr; 2030 + int ret, prot; 2031 + phys_addr_t addr = phys_addr & PAGE_MASK; 2032 + unsigned int offset = phys_addr & ~PAGE_MASK; 2033 + size_t len = PAGE_ALIGN(size + offset); 2034 + 2035 + dma_addr = __alloc_iova(mapping, len); 2036 + if (dma_addr == DMA_ERROR_CODE) 2037 + return dma_addr; 2038 + 2039 + prot = __dma_direction_to_prot(dir) | IOMMU_MMIO; 2040 + 2041 + ret = iommu_map(mapping->domain, dma_addr, addr, len, prot); 2042 + if (ret < 0) 2043 + goto fail; 2044 + 2045 + return dma_addr + offset; 2046 + fail: 2047 + __free_iova(mapping, dma_addr, len); 2048 + return DMA_ERROR_CODE; 2049 + } 2050 + 2051 + /** 2052 + * arm_iommu_unmap_resource - unmap a device DMA resource 2053 + * @dev: valid struct device pointer 2054 + * @dma_handle: DMA address to resource 2055 + * @size: size of resource to map 2056 + * @dir: DMA transfer direction 2057 + */ 2058 + static void arm_iommu_unmap_resource(struct device *dev, dma_addr_t dma_handle, 2059 + size_t size, enum dma_data_direction dir, 2060 + unsigned long attrs) 2061 + { 2062 + struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev); 2063 + dma_addr_t iova = dma_handle & PAGE_MASK; 2064 + unsigned int offset = dma_handle & ~PAGE_MASK; 2065 + size_t len = PAGE_ALIGN(size + offset); 2066 + 2067 + if (!iova) 2068 + return; 2069 + 2070 + iommu_unmap(mapping->domain, iova, len); 2071 + __free_iova(mapping, iova, len); 2072 + } 2073 + 2017 2074 static void arm_iommu_sync_single_for_cpu(struct device *dev, 2018 2075 dma_addr_t handle, size_t size, enum dma_data_direction dir) 2019 2076 { ··· 2114 2057 .unmap_sg = arm_iommu_unmap_sg, 2115 2058 .sync_sg_for_cpu = arm_iommu_sync_sg_for_cpu, 2116 2059 .sync_sg_for_device = arm_iommu_sync_sg_for_device, 2060 + 2061 + .map_resource = arm_iommu_map_resource, 2062 + .unmap_resource = arm_iommu_unmap_resource, 2117 2063 }; 2118 2064 2119 2065 struct dma_map_ops iommu_coherent_ops = { ··· 2130 2070 2131 2071 .map_sg = arm_coherent_iommu_map_sg, 2132 2072 .unmap_sg = arm_coherent_iommu_unmap_sg, 2073 + 2074 + .map_resource = arm_iommu_map_resource, 2075 + .unmap_resource = arm_iommu_unmap_resource, 2133 2076 }; 2134 2077 2135 2078 /**
+4 -4
crypto/async_tx/async_pq.c
··· 368 368 369 369 dma_set_unmap(tx, unmap); 370 370 async_tx_submit(chan, tx, submit); 371 - 372 - return tx; 373 371 } else { 374 372 struct page *p_src = P(blocks, disks); 375 373 struct page *q_src = Q(blocks, disks); ··· 422 424 submit->cb_param = cb_param_orig; 423 425 submit->flags = flags_orig; 424 426 async_tx_sync_epilog(submit); 425 - 426 - return NULL; 427 + tx = NULL; 427 428 } 429 + dmaengine_unmap_put(unmap); 430 + 431 + return tx; 428 432 } 429 433 EXPORT_SYMBOL_GPL(async_syndrome_val); 430 434
+18 -22
drivers/dma/Kconfig
··· 102 102 config COH901318 103 103 bool "ST-Ericsson COH901318 DMA support" 104 104 select DMA_ENGINE 105 - depends on ARCH_U300 105 + depends on ARCH_U300 || COMPILE_TEST 106 106 help 107 107 Enable support for ST-Ericsson COH 901 318 DMA. 108 108 ··· 114 114 115 115 config DMA_JZ4740 116 116 tristate "JZ4740 DMA support" 117 - depends on MACH_JZ4740 117 + depends on MACH_JZ4740 || COMPILE_TEST 118 118 select DMA_ENGINE 119 119 select DMA_VIRTUAL_CHANNELS 120 120 121 121 config DMA_JZ4780 122 122 tristate "JZ4780 DMA support" 123 - depends on MACH_JZ4780 123 + depends on MACH_JZ4780 || COMPILE_TEST 124 124 select DMA_ENGINE 125 125 select DMA_VIRTUAL_CHANNELS 126 126 help ··· 130 130 131 131 config DMA_OMAP 132 132 tristate "OMAP DMA support" 133 - depends on ARCH_OMAP 133 + depends on ARCH_OMAP || COMPILE_TEST 134 134 select DMA_ENGINE 135 135 select DMA_VIRTUAL_CHANNELS 136 - select TI_DMA_CROSSBAR if SOC_DRA7XX 136 + select TI_DMA_CROSSBAR if (SOC_DRA7XX || COMPILE_TEST) 137 137 138 138 config DMA_SA11X0 139 139 tristate "SA-11x0 DMA support" 140 - depends on ARCH_SA1100 140 + depends on ARCH_SA1100 || COMPILE_TEST 141 141 select DMA_ENGINE 142 142 select DMA_VIRTUAL_CHANNELS 143 143 help ··· 150 150 depends on MACH_SUN4I || MACH_SUN5I || MACH_SUN7I 151 151 default (MACH_SUN4I || MACH_SUN5I || MACH_SUN7I) 152 152 select DMA_ENGINE 153 - select DMA_OF 154 153 select DMA_VIRTUAL_CHANNELS 155 154 help 156 155 Enable support for the DMA controller present in the sun4i, ··· 166 167 167 168 config EP93XX_DMA 168 169 bool "Cirrus Logic EP93xx DMA support" 169 - depends on ARCH_EP93XX 170 + depends on ARCH_EP93XX || COMPILE_TEST 170 171 select DMA_ENGINE 171 172 help 172 173 Enable support for the Cirrus Logic EP93xx M2P/M2M DMA controller. ··· 278 279 279 280 config K3_DMA 280 281 tristate "Hisilicon K3 DMA support" 281 - depends on ARCH_HI3xxx 282 + depends on ARCH_HI3xxx || ARCH_HISI || COMPILE_TEST 282 283 select DMA_ENGINE 283 284 select DMA_VIRTUAL_CHANNELS 284 285 help ··· 296 297 297 298 config MMP_PDMA 298 299 bool "MMP PDMA support" 299 - depends on (ARCH_MMP || ARCH_PXA) 300 + depends on ARCH_MMP || ARCH_PXA || COMPILE_TEST 300 301 select DMA_ENGINE 301 302 help 302 303 Support the MMP PDMA engine for PXA and MMP platform. 303 304 304 305 config MMP_TDMA 305 306 bool "MMP Two-Channel DMA support" 306 - depends on ARCH_MMP 307 + depends on ARCH_MMP || COMPILE_TEST 307 308 select DMA_ENGINE 308 - select MMP_SRAM 309 + select MMP_SRAM if ARCH_MMP 309 310 help 310 311 Support the MMP Two-Channel DMA engine. 311 312 This engine used for MMP Audio DMA and pxa910 SQU. ··· 315 316 tristate "MOXART DMA support" 316 317 depends on ARCH_MOXART 317 318 select DMA_ENGINE 318 - select DMA_OF 319 319 select DMA_VIRTUAL_CHANNELS 320 320 help 321 321 Enable support for the MOXA ART SoC DMA controller. 
··· 437 439 438 440 config STM32_DMA 439 441 bool "STMicroelectronics STM32 DMA support" 440 - depends on ARCH_STM32 442 + depends on ARCH_STM32 || COMPILE_TEST 441 443 select DMA_ENGINE 442 - select DMA_OF 443 444 select DMA_VIRTUAL_CHANNELS 444 445 help 445 446 Enable support for the on-chip DMA controller on STMicroelectronics ··· 448 451 449 452 config S3C24XX_DMAC 450 453 bool "Samsung S3C24XX DMA support" 451 - depends on ARCH_S3C24XX 454 + depends on ARCH_S3C24XX || COMPILE_TEST 452 455 select DMA_ENGINE 453 456 select DMA_VIRTUAL_CHANNELS 454 457 help ··· 480 483 481 484 config TEGRA210_ADMA 482 485 bool "NVIDIA Tegra210 ADMA support" 483 - depends on ARCH_TEGRA_210_SOC 486 + depends on (ARCH_TEGRA_210_SOC || COMPILE_TEST) && PM_CLK 484 487 select DMA_ENGINE 485 488 select DMA_VIRTUAL_CHANNELS 486 - select PM_CLK 487 489 help 488 490 Support for the NVIDIA Tegra210 ADMA controller driver. The 489 491 DMA controller has multiple DMA channels and is used to service ··· 493 497 494 498 config TIMB_DMA 495 499 tristate "Timberdale FPGA DMA support" 496 - depends on MFD_TIMBERDALE 500 + depends on MFD_TIMBERDALE || COMPILE_TEST 497 501 select DMA_ENGINE 498 502 help 499 503 Enable support for the Timberdale FPGA DMA engine. ··· 511 515 512 516 config TI_EDMA 513 517 bool "TI EDMA support" 514 - depends on ARCH_DAVINCI || ARCH_OMAP || ARCH_KEYSTONE 518 + depends on ARCH_DAVINCI || ARCH_OMAP || ARCH_KEYSTONE || COMPILE_TEST 515 519 select DMA_ENGINE 516 520 select DMA_VIRTUAL_CHANNELS 517 - select TI_DMA_CROSSBAR if ARCH_OMAP 521 + select TI_DMA_CROSSBAR if (ARCH_OMAP || COMPILE_TEST) 518 522 default n 519 523 help 520 524 Enable support for the TI EDMA controller. This DMA ··· 557 561 558 562 config ZX_DMA 559 563 tristate "ZTE ZX296702 DMA support" 560 - depends on ARCH_ZX 564 + depends on ARCH_ZX || COMPILE_TEST 561 565 select DMA_ENGINE 562 566 select DMA_VIRTUAL_CHANNELS 563 567 help
+2 -9
drivers/dma/at_hdmac.c
··· 473 473 /* for cyclic transfers, 474 474 * no need to replay callback function while stopping */ 475 475 if (!atc_chan_is_cyclic(atchan)) { 476 - dma_async_tx_callback callback = txd->callback; 477 - void *param = txd->callback_param; 478 - 479 476 /* 480 477 * The API requires that no submissions are done from a 481 478 * callback, so we don't need to drop the lock here 482 479 */ 483 - if (callback) 484 - callback(param); 480 + dmaengine_desc_get_callback_invoke(txd, NULL); 485 481 } 486 482 487 483 dma_run_dependencies(txd); ··· 594 598 { 595 599 struct at_desc *first = atc_first_active(atchan); 596 600 struct dma_async_tx_descriptor *txd = &first->txd; 597 - dma_async_tx_callback callback = txd->callback; 598 - void *param = txd->callback_param; 599 601 600 602 dev_vdbg(chan2dev(&atchan->chan_common), 601 603 "new cyclic period llp 0x%08x\n", 602 604 channel_readl(atchan, DSCR)); 603 605 604 - if (callback) 605 - callback(param); 606 + dmaengine_desc_get_callback_invoke(txd, NULL); 606 607 } 607 608 608 609 /*-- IRQ & Tasklet ---------------------------------------------------*/
+4 -4
drivers/dma/at_xdmac.c
··· 1572 1572 desc = list_first_entry(&atchan->xfers_list, struct at_xdmac_desc, xfer_node); 1573 1573 txd = &desc->tx_dma_desc; 1574 1574 1575 - if (txd->callback && (txd->flags & DMA_PREP_INTERRUPT)) 1576 - txd->callback(txd->callback_param); 1575 + if (txd->flags & DMA_PREP_INTERRUPT) 1576 + dmaengine_desc_get_callback_invoke(txd, NULL); 1577 1577 } 1578 1578 1579 1579 static void at_xdmac_tasklet(unsigned long data) ··· 1616 1616 1617 1617 if (!at_xdmac_chan_is_cyclic(atchan)) { 1618 1618 dma_cookie_complete(txd); 1619 - if (txd->callback && (txd->flags & DMA_PREP_INTERRUPT)) 1620 - txd->callback(txd->callback_param); 1619 + if (txd->flags & DMA_PREP_INTERRUPT) 1620 + dmaengine_desc_get_callback_invoke(txd, NULL); 1621 1621 } 1622 1622 1623 1623 dma_run_dependencies(txd);
+2 -2
drivers/dma/bestcomm/bestcomm.c
··· 82 82 83 83 /* Get IRQ of that task */ 84 84 tsk->irq = irq_of_parse_and_map(bcom_eng->ofnode, tsk->tasknum); 85 - if (tsk->irq == NO_IRQ) 85 + if (!tsk->irq) 86 86 goto error; 87 87 88 88 /* Init the BDs, if needed */ ··· 104 104 105 105 error: 106 106 if (tsk) { 107 - if (tsk->irq != NO_IRQ) 107 + if (tsk->irq) 108 108 irq_dispose_mapping(tsk->irq); 109 109 bcom_sram_free(tsk->bd); 110 110 kfree(tsk->cookie);
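These NO_IRQ removals rely on irq_of_parse_and_map() returning 0 when no mapping exists, so a plain zero test is the portable check. The general idiom, as a short sketch with assumed handler and cookie names:

    /* Sketch: post-NO_IRQ interrupt lookup. */
    unsigned int irq = irq_of_parse_and_map(np, 0);

    if (!irq)                       /* 0 means "no interrupt", never NO_IRQ */
            return -ENODEV;

    ret = request_irq(irq, my_irq_handler, 0, "my-dma", priv);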
+20 -36
drivers/dma/coh901318.c
··· 1319 1319 int i = 0; 1320 1320 1321 1321 while (l) { 1322 - dev_vdbg(COHC_2_DEV(cohc), "i %d, lli %p, ctrl 0x%x, src 0x%x" 1323 - ", dst 0x%x, link 0x%x virt_link_addr 0x%p\n", 1324 - i, l, l->control, l->src_addr, l->dst_addr, 1325 - l->link_addr, l->virt_link_addr); 1322 + dev_vdbg(COHC_2_DEV(cohc), "i %d, lli %p, ctrl 0x%x, src 0x%pad" 1323 + ", dst 0x%pad, link 0x%pad virt_link_addr 0x%p\n", 1324 + i, l, l->control, &l->src_addr, &l->dst_addr, 1325 + &l->link_addr, l->virt_link_addr); 1326 1326 i++; 1327 1327 l = l->virt_link_addr; 1328 1328 } ··· 1335 1335 static struct coh901318_base *debugfs_dma_base; 1336 1336 static struct dentry *dma_dentry; 1337 1337 1338 - static int coh901318_debugfs_read(struct file *file, char __user *buf, 1338 + static ssize_t coh901318_debugfs_read(struct file *file, char __user *buf, 1339 1339 size_t count, loff_t *f_pos) 1340 1340 { 1341 1341 u64 started_channels = debugfs_dma_base->pm.started_channels; ··· 1352 1352 1353 1353 tmp += sprintf(tmp, "DMA -- enabled dma channels\n"); 1354 1354 1355 - for (i = 0; i < U300_DMA_CHANNELS; i++) 1356 - if (started_channels & (1 << i)) 1355 + for (i = 0; i < U300_DMA_CHANNELS; i++) { 1356 + if (started_channels & (1ULL << i)) 1357 1357 tmp += sprintf(tmp, "channel %d\n", i); 1358 + } 1358 1359 1359 1360 tmp += sprintf(tmp, "Pool alloc nbr %d\n", pool_count); 1360 1361 ··· 1554 1553 static struct coh901318_desc * 1555 1554 coh901318_first_active_get(struct coh901318_chan *cohc) 1556 1555 { 1557 - struct coh901318_desc *d; 1558 - 1559 - if (list_empty(&cohc->active)) 1560 - return NULL; 1561 - 1562 - d = list_first_entry(&cohc->active, 1563 - struct coh901318_desc, 1564 - node); 1565 - return d; 1556 + return list_first_entry_or_null(&cohc->active, struct coh901318_desc, 1557 + node); 1566 1558 } 1567 1559 1568 1560 static void ··· 1573 1579 static struct coh901318_desc * 1574 1580 coh901318_first_queued(struct coh901318_chan *cohc) 1575 1581 { 1576 - struct coh901318_desc *d; 1577 - 1578 - if (list_empty(&cohc->queue)) 1579 - return NULL; 1580 - 1581 - d = list_first_entry(&cohc->queue, 1582 - struct coh901318_desc, 1583 - node); 1584 - return d; 1582 + return list_first_entry_or_null(&cohc->queue, struct coh901318_desc, 1583 + node); 1585 1584 } 1586 1585 1587 1586 static inline u32 coh901318_get_bytes_in_lli(struct coh901318_lli *in_lli) ··· 1753 1766 1754 1767 bool coh901318_filter_id(struct dma_chan *chan, void *chan_id) 1755 1768 { 1756 - unsigned int ch_nr = (unsigned int) chan_id; 1769 + unsigned long ch_nr = (unsigned long) chan_id; 1757 1770 1758 1771 if (ch_nr == to_coh901318_chan(chan)->id) 1759 1772 return true; ··· 1875 1888 struct coh901318_chan *cohc = (struct coh901318_chan *) data; 1876 1889 struct coh901318_desc *cohd_fin; 1877 1890 unsigned long flags; 1878 - dma_async_tx_callback callback; 1879 - void *callback_param; 1891 + struct dmaengine_desc_callback cb; 1880 1892 1881 1893 dev_vdbg(COHC_2_DEV(cohc), "[%s] chan_id %d" 1882 1894 " nbr_active_done %ld\n", __func__, ··· 1890 1904 goto err; 1891 1905 1892 1906 /* locate callback to client */ 1893 - callback = cohd_fin->desc.callback; 1894 - callback_param = cohd_fin->desc.callback_param; 1907 + dmaengine_desc_get_callback(&cohd_fin->desc, &cb); 1895 1908 1896 1909 /* sign this job as completed on the channel */ 1897 1910 dma_cookie_complete(&cohd_fin->desc); ··· 1905 1920 spin_unlock_irqrestore(&cohc->lock, flags); 1906 1921 1907 1922 /* Call the callback when we're done */ 1908 - if (callback) 1909 - callback(callback_param); 1923 + 
dmaengine_desc_callback_invoke(&cb, NULL); 1910 1924 1911 1925 spin_lock_irqsave(&cohc->lock, flags); 1912 1926 ··· 2231 2247 spin_lock_irqsave(&cohc->lock, flg); 2232 2248 2233 2249 dev_vdbg(COHC_2_DEV(cohc), 2234 - "[%s] channel %d src 0x%x dest 0x%x size %d\n", 2235 - __func__, cohc->id, src, dest, size); 2250 + "[%s] channel %d src 0x%pad dest 0x%pad size %zu\n", 2251 + __func__, cohc->id, &src, &dest, size); 2236 2252 2237 2253 if (flags & DMA_PREP_INTERRUPT) 2238 2254 /* Trigger interrupt after last lli */ ··· 2728 2744 goto err_register_of_dma; 2729 2745 2730 2746 platform_set_drvdata(pdev, base); 2731 - dev_info(&pdev->dev, "Initialized COH901318 DMA on virtual base 0x%08x\n", 2732 - (u32) base->virtbase); 2747 + dev_info(&pdev->dev, "Initialized COH901318 DMA on virtual base 0x%p\n", 2748 + base->virtbase); 2733 2749 2734 2750 return err; 2735 2751
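Several of the coh901318 warnings fixed above come from printing dma_addr_t with %x and size_t with %d; the portable specifiers are %pad (which takes the address of the dma_addr_t) and %zu. A one-line sketch:

    /* src, dst: dma_addr_t; len: size_t (assumed driver locals) */
    dev_dbg(dev, "xfer src 0x%pad dst 0x%pad len %zu\n", &src, &dst, len);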
+2 -2
drivers/dma/coh901318_lli.c
··· 75 75 lli = head; 76 76 lli->phy_this = phy; 77 77 lli->link_addr = 0x00000000; 78 - lli->virt_link_addr = 0x00000000U; 78 + lli->virt_link_addr = NULL; 79 79 80 80 for (i = 1; i < len; i++) { 81 81 lli_prev = lli; ··· 88 88 DEBUGFS_POOL_COUNTER_ADD(pool, 1); 89 89 lli->phy_this = phy; 90 90 lli->link_addr = 0x00000000; 91 - lli->virt_link_addr = 0x00000000U; 91 + lli->virt_link_addr = NULL; 92 92 93 93 lli_prev->link_addr = phy; 94 94 lli_prev->virt_link_addr = lli;
+117 -25
drivers/dma/cppi41.c
··· 108 108 unsigned td_queued:1; 109 109 unsigned td_seen:1; 110 110 unsigned td_desc_seen:1; 111 + 112 + struct list_head node; /* Node for pending list */ 111 113 }; 112 114 113 115 struct cppi41_desc { ··· 147 145 const struct chan_queues *queues_rx; 148 146 const struct chan_queues *queues_tx; 149 147 struct chan_queues td_queue; 148 + 149 + struct list_head pending; /* Pending queued transfers */ 150 + spinlock_t lock; /* Lock for pending list */ 150 151 151 152 /* context for suspend/resume */ 152 153 unsigned int dma_tdfdq; ··· 336 331 337 332 c->residue = pd_trans_len(c->desc->pd6) - len; 338 333 dma_cookie_complete(&c->txd); 339 - c->txd.callback(c->txd.callback_param); 334 + dmaengine_desc_get_callback_invoke(&c->txd, NULL); 335 + 336 + /* Paired with cppi41_dma_issue_pending */ 337 + pm_runtime_mark_last_busy(cdd->ddev.dev); 338 + pm_runtime_put_autosuspend(cdd->ddev.dev); 340 339 } 341 340 } 342 341 return IRQ_HANDLED; ··· 358 349 static int cppi41_dma_alloc_chan_resources(struct dma_chan *chan) 359 350 { 360 351 struct cppi41_channel *c = to_cpp41_chan(chan); 352 + struct cppi41_dd *cdd = c->cdd; 353 + int error; 354 + 355 + error = pm_runtime_get_sync(cdd->ddev.dev); 356 + if (error < 0) 357 + return error; 361 358 362 359 dma_cookie_init(chan); 363 360 dma_async_tx_descriptor_init(&c->txd, chan); ··· 372 357 if (!c->is_tx) 373 358 cppi_writel(c->q_num, c->gcr_reg + RXHPCRA0); 374 359 360 + pm_runtime_mark_last_busy(cdd->ddev.dev); 361 + pm_runtime_put_autosuspend(cdd->ddev.dev); 362 + 375 363 return 0; 376 364 } 377 365 378 366 static void cppi41_dma_free_chan_resources(struct dma_chan *chan) 379 367 { 368 + struct cppi41_channel *c = to_cpp41_chan(chan); 369 + struct cppi41_dd *cdd = c->cdd; 370 + int error; 371 + 372 + error = pm_runtime_get_sync(cdd->ddev.dev); 373 + if (error < 0) 374 + return; 375 + 376 + WARN_ON(!list_empty(&cdd->pending)); 377 + 378 + pm_runtime_mark_last_busy(cdd->ddev.dev); 379 + pm_runtime_put_autosuspend(cdd->ddev.dev); 380 380 } 381 381 382 382 static enum dma_status cppi41_dma_tx_status(struct dma_chan *chan, ··· 416 386 u32 desc_phys; 417 387 u32 reg; 418 388 419 - desc_phys = lower_32_bits(c->desc_phys); 420 - desc_num = (desc_phys - cdd->descs_phys) / sizeof(struct cppi41_desc); 421 - WARN_ON(cdd->chan_busy[desc_num]); 422 - cdd->chan_busy[desc_num] = c; 423 - 424 - reg = (sizeof(struct cppi41_desc) - 24) / 4; 425 - reg |= desc_phys; 426 - cppi_writel(reg, cdd->qmgr_mem + QMGR_QUEUE_D(c->q_num)); 427 - } 428 - 429 - static void cppi41_dma_issue_pending(struct dma_chan *chan) 430 - { 431 - struct cppi41_channel *c = to_cpp41_chan(chan); 432 - u32 reg; 433 - 434 389 c->residue = 0; 435 390 436 391 reg = GCR_CHAN_ENABLE; ··· 433 418 * before starting the dma engine. 
434 419 */ 435 420 __iowmb(); 436 - push_desc_queue(c); 421 + 422 + desc_phys = lower_32_bits(c->desc_phys); 423 + desc_num = (desc_phys - cdd->descs_phys) / sizeof(struct cppi41_desc); 424 + WARN_ON(cdd->chan_busy[desc_num]); 425 + cdd->chan_busy[desc_num] = c; 426 + 427 + reg = (sizeof(struct cppi41_desc) - 24) / 4; 428 + reg |= desc_phys; 429 + cppi_writel(reg, cdd->qmgr_mem + QMGR_QUEUE_D(c->q_num)); 430 + } 431 + 432 + static void pending_desc(struct cppi41_channel *c) 433 + { 434 + struct cppi41_dd *cdd = c->cdd; 435 + unsigned long flags; 436 + 437 + spin_lock_irqsave(&cdd->lock, flags); 438 + list_add_tail(&c->node, &cdd->pending); 439 + spin_unlock_irqrestore(&cdd->lock, flags); 440 + } 441 + 442 + static void cppi41_dma_issue_pending(struct dma_chan *chan) 443 + { 444 + struct cppi41_channel *c = to_cpp41_chan(chan); 445 + struct cppi41_dd *cdd = c->cdd; 446 + int error; 447 + 448 + /* PM runtime paired with dmaengine_desc_get_callback_invoke */ 449 + error = pm_runtime_get(cdd->ddev.dev); 450 + if ((error != -EINPROGRESS) && error < 0) { 451 + dev_err(cdd->ddev.dev, "Failed to pm_runtime_get: %i\n", 452 + error); 453 + 454 + return; 455 + } 456 + 457 + if (likely(pm_runtime_active(cdd->ddev.dev))) 458 + push_desc_queue(c); 459 + else 460 + pending_desc(c); 437 461 } 438 462 439 463 static u32 get_host_pd0(u32 length) ··· 994 940 cdd->ctrl_mem = of_iomap(dev->of_node, 1); 995 941 cdd->sched_mem = of_iomap(dev->of_node, 2); 996 942 cdd->qmgr_mem = of_iomap(dev->of_node, 3); 943 + spin_lock_init(&cdd->lock); 944 + INIT_LIST_HEAD(&cdd->pending); 945 + 946 + platform_set_drvdata(pdev, cdd); 997 947 998 948 if (!cdd->usbss_mem || !cdd->ctrl_mem || !cdd->sched_mem || 999 949 !cdd->qmgr_mem) 1000 950 return -ENXIO; 1001 951 1002 952 pm_runtime_enable(dev); 953 + pm_runtime_set_autosuspend_delay(dev, 100); 954 + pm_runtime_use_autosuspend(dev); 1003 955 ret = pm_runtime_get_sync(dev); 1004 956 if (ret < 0) 1005 957 goto err_get_sync; ··· 1045 985 if (ret) 1046 986 goto err_of; 1047 987 1048 - platform_set_drvdata(pdev, cdd); 988 + pm_runtime_mark_last_busy(dev); 989 + pm_runtime_put_autosuspend(dev); 990 + 1049 991 return 0; 1050 992 err_of: 1051 993 dma_async_device_unregister(&cdd->ddev); ··· 1058 996 err_chans: 1059 997 deinit_cppi41(dev, cdd); 1060 998 err_init_cppi: 1061 - pm_runtime_put(dev); 999 + pm_runtime_dont_use_autosuspend(dev); 1000 + pm_runtime_put_sync(dev); 1062 1001 err_get_sync: 1063 1002 pm_runtime_disable(dev); 1064 1003 iounmap(cdd->usbss_mem); ··· 1084 1021 iounmap(cdd->ctrl_mem); 1085 1022 iounmap(cdd->sched_mem); 1086 1023 iounmap(cdd->qmgr_mem); 1087 - pm_runtime_put(&pdev->dev); 1024 + pm_runtime_dont_use_autosuspend(&pdev->dev); 1025 + pm_runtime_put_sync(&pdev->dev); 1088 1026 pm_runtime_disable(&pdev->dev); 1089 1027 return 0; 1090 1028 } 1091 1029 1092 - #ifdef CONFIG_PM_SLEEP 1093 - static int cppi41_suspend(struct device *dev) 1030 + static int __maybe_unused cppi41_suspend(struct device *dev) 1094 1031 { 1095 1032 struct cppi41_dd *cdd = dev_get_drvdata(dev); 1096 1033 ··· 1101 1038 return 0; 1102 1039 } 1103 1040 1104 - static int cppi41_resume(struct device *dev) 1041 + static int __maybe_unused cppi41_resume(struct device *dev) 1105 1042 { 1106 1043 struct cppi41_dd *cdd = dev_get_drvdata(dev); 1107 1044 struct cppi41_channel *c; ··· 1125 1062 1126 1063 return 0; 1127 1064 } 1128 - #endif 1129 1065 1130 - static SIMPLE_DEV_PM_OPS(cppi41_pm_ops, cppi41_suspend, cppi41_resume); 1066 + static int __maybe_unused cppi41_runtime_suspend(struct device *dev) 
1067 + { 1068 + struct cppi41_dd *cdd = dev_get_drvdata(dev); 1069 + 1070 + WARN_ON(!list_empty(&cdd->pending)); 1071 + 1072 + return 0; 1073 + } 1074 + 1075 + static int __maybe_unused cppi41_runtime_resume(struct device *dev) 1076 + { 1077 + struct cppi41_dd *cdd = dev_get_drvdata(dev); 1078 + struct cppi41_channel *c, *_c; 1079 + unsigned long flags; 1080 + 1081 + spin_lock_irqsave(&cdd->lock, flags); 1082 + list_for_each_entry_safe(c, _c, &cdd->pending, node) { 1083 + push_desc_queue(c); 1084 + list_del(&c->node); 1085 + } 1086 + spin_unlock_irqrestore(&cdd->lock, flags); 1087 + 1088 + return 0; 1089 + } 1090 + 1091 + static const struct dev_pm_ops cppi41_pm_ops = { 1092 + SET_LATE_SYSTEM_SLEEP_PM_OPS(cppi41_suspend, cppi41_resume) 1093 + SET_RUNTIME_PM_OPS(cppi41_runtime_suspend, 1094 + cppi41_runtime_resume, 1095 + NULL) 1096 + }; 1131 1097 1132 1098 static struct platform_driver cpp41_dma_driver = { 1133 1099 .probe = cppi41_dma_probe,
-2
drivers/dma/dma-jz4740.c
··· 21 21 #include <linux/irq.h> 22 22 #include <linux/clk.h> 23 23 24 - #include <asm/mach-jz4740/dma.h> 25 - 26 24 #include "virt-dma.h" 27 25 28 26 #define JZ_DMA_NR_CHANS 6
+7 -3
drivers/dma/dma-jz4780.c
··· 324 324 sg_dma_address(&sgl[i]), 325 325 sg_dma_len(&sgl[i]), 326 326 direction); 327 - if (err < 0) 327 + if (err < 0) { 328 + jz4780_dma_desc_free(&jzchan->desc->vdesc); 328 329 return NULL; 330 + } 329 331 330 332 desc->desc[i].dcm |= JZ_DMA_DCM_TIE; 331 333 ··· 370 368 for (i = 0; i < periods; i++) { 371 369 err = jz4780_dma_setup_hwdesc(jzchan, &desc->desc[i], buf_addr, 372 370 period_len, direction); 373 - if (err < 0) 371 + if (err < 0) { 372 + jz4780_dma_desc_free(&jzchan->desc->vdesc); 374 373 return NULL; 374 + } 375 375 376 376 buf_addr += period_len; 377 377 ··· 400 396 return vchan_tx_prep(&jzchan->vchan, &desc->vdesc, flags); 401 397 } 402 398 403 - struct dma_async_tx_descriptor *jz4780_dma_prep_dma_memcpy( 399 + static struct dma_async_tx_descriptor *jz4780_dma_prep_dma_memcpy( 404 400 struct dma_chan *chan, dma_addr_t dest, dma_addr_t src, 405 401 size_t len, unsigned long flags) 406 402 {
+7
drivers/dma/dmaengine.c
··· 997 997 } 998 998 chan->client_count = 0; 999 999 } 1000 + 1001 + if (!chancnt) { 1002 + dev_err(device->dev, "%s: device has no channels!\n", __func__); 1003 + rc = -ENODEV; 1004 + goto err_out; 1005 + } 1006 + 1000 1007 device->chancnt = chancnt; 1001 1008 1002 1009 mutex_lock(&dma_list_mutex);
+84
drivers/dma/dmaengine.h
··· 86 86 state->residue = residue; 87 87 } 88 88 89 + struct dmaengine_desc_callback { 90 + dma_async_tx_callback callback; 91 + dma_async_tx_callback_result callback_result; 92 + void *callback_param; 93 + }; 94 + 95 + /** 96 + * dmaengine_desc_get_callback - get the passed in callback function 97 + * @tx: tx descriptor 98 + * @cb: temp struct to hold the callback info 99 + * 100 + * Fill the passed in cb struct with what's available in the passed in 101 + * tx descriptor struct 102 + * No locking is required. 103 + */ 104 + static inline void 105 + dmaengine_desc_get_callback(struct dma_async_tx_descriptor *tx, 106 + struct dmaengine_desc_callback *cb) 107 + { 108 + cb->callback = tx->callback; 109 + cb->callback_result = tx->callback_result; 110 + cb->callback_param = tx->callback_param; 111 + } 112 + 113 + /** 114 + * dmaengine_desc_callback_invoke - call the callback function in cb struct 115 + * @cb: temp struct that is holding the callback info 116 + * @result: transaction result 117 + * 118 + * Call the callback function provided in the cb struct with the parameter 119 + * in the cb struct. 120 + * Locking is dependent on the driver. 121 + */ 122 + static inline void 123 + dmaengine_desc_callback_invoke(struct dmaengine_desc_callback *cb, 124 + const struct dmaengine_result *result) 125 + { 126 + struct dmaengine_result dummy_result = { 127 + .result = DMA_TRANS_NOERROR, 128 + .residue = 0 129 + }; 130 + 131 + if (cb->callback_result) { 132 + if (!result) 133 + result = &dummy_result; 134 + cb->callback_result(cb->callback_param, result); 135 + } else if (cb->callback) { 136 + cb->callback(cb->callback_param); 137 + } 138 + } 139 + 140 + /** 141 + * dmaengine_desc_get_callback_invoke - get the callback in tx descriptor and 142 + * then immediately call the callback. 143 + * @tx: dma async tx descriptor 144 + * @result: transaction result 145 + * 146 + * Call dmaengine_desc_get_callback() and dmaengine_desc_callback_invoke() 147 + * in a single function since no work is necessary in between for the driver. 148 + * Locking is dependent on the driver. 149 + */ 150 + static inline void 151 + dmaengine_desc_get_callback_invoke(struct dma_async_tx_descriptor *tx, 152 + const struct dmaengine_result *result) 153 + { 154 + struct dmaengine_desc_callback cb; 155 + 156 + dmaengine_desc_get_callback(tx, &cb); 157 + dmaengine_desc_callback_invoke(&cb, result); 158 + } 159 + 160 + /** 161 + * dmaengine_desc_callback_valid - verify the callback is valid in cb 162 + * @cb: callback info struct 163 + * 164 + * Return a bool that verifies whether callback in cb is valid or not. 165 + * No locking is required. 166 + */ 167 + static inline bool 168 + dmaengine_desc_callback_valid(struct dmaengine_desc_callback *cb) 169 + { 170 + return (cb->callback) ? true : false; 171 + } 172 + 89 173 #endif
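The helpers above encode the usual driver pattern: snapshot the callback while holding the channel lock, then invoke it after the lock is dropped. That is how the dw, ep93xx and other drivers are converted later in this diff; a condensed, hypothetical version (my_chan and my_desc are illustrative wrappers) looks like this:

    #include <linux/dmaengine.h>
    #include "dmaengine.h"          /* private header with the new helpers */

    struct my_desc {                /* illustrative descriptor wrapper */
            struct dma_async_tx_descriptor txd;
            struct list_head node;
    };

    struct my_chan {                /* illustrative channel context */
            spinlock_t lock;
            struct list_head active;
    };

    static void my_chan_complete(struct my_chan *c, struct my_desc *desc)
    {
            struct dmaengine_desc_callback cb;
            unsigned long flags;

            spin_lock_irqsave(&c->lock, flags);
            dma_cookie_complete(&desc->txd);
            /* Copy callback info while the descriptor is still ours. */
            dmaengine_desc_get_callback(&desc->txd, &cb);
            list_del(&desc->node);
            spin_unlock_irqrestore(&c->lock, flags);

            /* No locks held; handles callback_result and legacy callback. */
            dmaengine_desc_callback_invoke(&cb, NULL);
    }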
+18 -5
drivers/dma/dmatest.c
··· 56 56 MODULE_PARM_DESC(sg_buffers, 57 57 "Number of scatter gather buffers (default: 1)"); 58 58 59 - static unsigned int dmatest = 1; 59 + static unsigned int dmatest; 60 60 module_param(dmatest, uint, S_IRUGO | S_IWUSR); 61 61 MODULE_PARM_DESC(dmatest, 62 - "dmatest 0-memcpy 1-slave_sg (default: 1)"); 62 + "dmatest 0-memcpy 1-slave_sg (default: 0)"); 63 63 64 64 static unsigned int xor_sources = 3; 65 65 module_param(xor_sources, uint, S_IRUGO | S_IWUSR); ··· 426 426 int src_cnt; 427 427 int dst_cnt; 428 428 int i; 429 - ktime_t ktime; 429 + ktime_t ktime, start, diff; 430 + ktime_t filltime = ktime_set(0, 0); 431 + ktime_t comparetime = ktime_set(0, 0); 430 432 s64 runtime = 0; 431 433 unsigned long long total_len = 0; 432 434 ··· 505 503 total_tests++; 506 504 507 505 /* honor alignment restrictions */ 508 - if (thread->type == DMA_MEMCPY) 506 + if (thread->type == DMA_MEMCPY || thread->type == DMA_SG) 509 507 align = dev->copy_align; 510 508 else if (thread->type == DMA_XOR) 511 509 align = dev->xor_align; ··· 533 531 src_off = 0; 534 532 dst_off = 0; 535 533 } else { 534 + start = ktime_get(); 536 535 src_off = dmatest_random() % (params->buf_size - len + 1); 537 536 dst_off = dmatest_random() % (params->buf_size - len + 1); 538 537 ··· 544 541 params->buf_size); 545 542 dmatest_init_dsts(thread->dsts, dst_off, len, 546 543 params->buf_size); 544 + 545 + diff = ktime_sub(ktime_get(), start); 546 + filltime = ktime_add(filltime, diff); 547 547 } 548 548 549 549 um = dmaengine_get_unmap_data(dev->dev, src_cnt+dst_cnt, ··· 689 683 continue; 690 684 } 691 685 686 + start = ktime_get(); 692 687 pr_debug("%s: verifying source buffer...\n", current->comm); 693 688 error_count = dmatest_verify(thread->srcs, 0, src_off, 694 689 0, PATTERN_SRC, true); ··· 710 703 params->buf_size, dst_off + len, 711 704 PATTERN_DST, false); 712 705 706 + diff = ktime_sub(ktime_get(), start); 707 + comparetime = ktime_add(comparetime, diff); 708 + 713 709 if (error_count) { 714 710 result("data error", total_tests, src_off, dst_off, 715 711 len, error_count); ··· 722 712 dst_off, len, 0); 723 713 } 724 714 } 725 - runtime = ktime_us_delta(ktime_get(), ktime); 715 + ktime = ktime_sub(ktime_get(), ktime); 716 + ktime = ktime_sub(ktime, comparetime); 717 + ktime = ktime_sub(ktime, filltime); 718 + runtime = ktime_to_us(ktime); 726 719 727 720 ret = 0; 728 721 err_dstbuf:
+6 -8
drivers/dma/dw/core.c
··· 274 274 dwc_descriptor_complete(struct dw_dma_chan *dwc, struct dw_desc *desc, 275 275 bool callback_required) 276 276 { 277 - dma_async_tx_callback callback = NULL; 278 - void *param = NULL; 279 277 struct dma_async_tx_descriptor *txd = &desc->txd; 280 278 struct dw_desc *child; 281 279 unsigned long flags; 280 + struct dmaengine_desc_callback cb; 282 281 283 282 dev_vdbg(chan2dev(&dwc->chan), "descriptor %u complete\n", txd->cookie); 284 283 285 284 spin_lock_irqsave(&dwc->lock, flags); 286 285 dma_cookie_complete(txd); 287 - if (callback_required) { 288 - callback = txd->callback; 289 - param = txd->callback_param; 290 - } 286 + if (callback_required) 287 + dmaengine_desc_get_callback(txd, &cb); 288 + else 289 + memset(&cb, 0, sizeof(cb)); 291 290 292 291 /* async_tx_ack */ 293 292 list_for_each_entry(child, &desc->tx_list, desc_node) ··· 295 296 dwc_desc_put(dwc, desc); 296 297 spin_unlock_irqrestore(&dwc->lock, flags); 297 298 298 - if (callback) 299 - callback(param); 299 + dmaengine_desc_callback_invoke(&cb, NULL); 300 300 } 301 301 302 302 static void dwc_complete_all(struct dw_dma *dw, struct dw_dma_chan *dwc)
+21 -17
drivers/dma/edma.c
··· 263 263 264 264 #define EDMA_BINDING_LEGACY 0 265 265 #define EDMA_BINDING_TPCC 1 266 + static const u32 edma_binding_type[] = { 267 + [EDMA_BINDING_LEGACY] = EDMA_BINDING_LEGACY, 268 + [EDMA_BINDING_TPCC] = EDMA_BINDING_TPCC, 269 + }; 270 + 266 271 static const struct of_device_id edma_of_ids[] = { 267 272 { 268 273 .compatible = "ti,edma3", 269 - .data = (void *)EDMA_BINDING_LEGACY, 274 + .data = &edma_binding_type[EDMA_BINDING_LEGACY], 270 275 }, 271 276 { 272 277 .compatible = "ti,edma3-tpcc", 273 - .data = (void *)EDMA_BINDING_TPCC, 278 + .data = &edma_binding_type[EDMA_BINDING_TPCC], 274 279 }, 275 280 {} 276 281 }; 282 + MODULE_DEVICE_TABLE(of, edma_of_ids); 277 283 278 284 static const struct of_device_id edma_tptc_of_ids[] = { 279 285 { .compatible = "ti,edma3-tptc", }, 280 286 {} 281 287 }; 288 + MODULE_DEVICE_TABLE(of, edma_tptc_of_ids); 282 289 283 290 static inline unsigned int edma_read(struct edma_cc *ecc, int offset) 284 291 { ··· 412 405 edma_or(ecc, EDMA_PARM + offset + (param_no << 5), or); 413 406 } 414 407 415 - static inline void set_bits(int offset, int len, unsigned long *p) 408 + static inline void edma_set_bits(int offset, int len, unsigned long *p) 416 409 { 417 410 for (; len > 0; len--) 418 411 set_bit(offset + (len - 1), p); 419 - } 420 - 421 - static inline void clear_bits(int offset, int len, unsigned long *p) 422 - { 423 - for (; len > 0; len--) 424 - clear_bit(offset + (len - 1), p); 425 412 } 426 413 427 414 static void edma_assign_priority_to_queue(struct edma_cc *ecc, int queue_no, ··· 465 464 memcpy_toio(ecc->base + PARM_OFFSET(slot), param, PARM_SIZE); 466 465 } 467 466 468 - static void edma_read_slot(struct edma_cc *ecc, unsigned slot, 467 + static int edma_read_slot(struct edma_cc *ecc, unsigned slot, 469 468 struct edmacc_param *param) 470 469 { 471 470 slot = EDMA_CHAN_SLOT(slot); 472 471 if (slot >= ecc->num_slots) 473 - return; 472 + return -EINVAL; 474 473 memcpy_fromio(param, ecc->base + PARM_OFFSET(slot), PARM_SIZE); 474 + 475 + return 0; 475 476 } 476 477 477 478 /** ··· 1479 1476 struct edma_cc *ecc = echan->ecc; 1480 1477 struct device *dev = echan->vchan.chan.device->dev; 1481 1478 struct edmacc_param p; 1479 + int err; 1482 1480 1483 1481 if (!echan->edesc) 1484 1482 return; 1485 1483 1486 1484 spin_lock(&echan->vchan.lock); 1487 1485 1488 - edma_read_slot(ecc, echan->slot[0], &p); 1486 + err = edma_read_slot(ecc, echan->slot[0], &p); 1487 + 1489 1488 /* 1490 1489 * Issue later based on missed flag which will be sure 1491 1490 * to happen as: ··· 1500 1495 * lead to some nasty recursion when we are in a NULL 1501 1496 * slot. So we avoid doing so and set the missed flag. 
1502 1497 */ 1503 - if (p.a_b_cnt == 0 && p.ccnt == 0) { 1498 + if (err || (p.a_b_cnt == 0 && p.ccnt == 0)) { 1504 1499 dev_dbg(dev, "Error on null slot, setting miss\n"); 1505 1500 echan->missed = 1; 1506 1501 } else { ··· 2024 2019 { 2025 2020 struct edma_soc_info *info; 2026 2021 struct property *prop; 2027 - size_t sz; 2028 - int ret; 2022 + int sz, ret; 2029 2023 2030 2024 info = devm_kzalloc(dev, sizeof(struct edma_soc_info), GFP_KERNEL); 2031 2025 if (!info) ··· 2186 2182 const struct of_device_id *match; 2187 2183 2188 2184 match = of_match_node(edma_of_ids, node); 2189 - if (match && (u32)match->data == EDMA_BINDING_TPCC) 2185 + if (match && (*(u32 *)match->data) == EDMA_BINDING_TPCC) 2190 2186 legacy_mode = false; 2191 2187 2192 2188 info = edma_setup_info_from_dt(dev, legacy_mode); ··· 2264 2260 for (i = 0; rsv_slots[i][0] != -1; i++) { 2265 2261 off = rsv_slots[i][0]; 2266 2262 ln = rsv_slots[i][1]; 2267 - set_bits(off, ln, ecc->slot_inuse); 2263 + edma_set_bits(off, ln, ecc->slot_inuse); 2268 2264 } 2269 2265 } 2270 2266 }
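The of_device_id fix in the edma hunk above replaces an integer cast stored in .data (which warns on 64-bit COMPILE_TEST builds) with a pointer into static storage. The same pattern in generic form, with assumed names:

    /* Sketch: carrying a small enum through of_device_id.data safely. */
    enum { MY_BINDING_LEGACY, MY_BINDING_NEW };

    static const u32 my_binding_type[] = {
            [MY_BINDING_LEGACY] = MY_BINDING_LEGACY,
            [MY_BINDING_NEW]    = MY_BINDING_NEW,
    };

    static const struct of_device_id my_of_ids[] = {
            { .compatible = "vendor,legacy-ctrl",
              .data = &my_binding_type[MY_BINDING_LEGACY] },
            { .compatible = "vendor,new-ctrl",
              .data = &my_binding_type[MY_BINDING_NEW] },
            { }
    };

    /* In probe (node is the device's of_node): */
    const struct of_device_id *match = of_match_node(my_of_ids, node);
    bool new_mode = false;

    if (match && *(const u32 *)match->data == MY_BINDING_NEW)
            new_mode = true;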
+12 -16
drivers/dma/ep93xx_dma.c
··· 262 262 static struct ep93xx_dma_desc * 263 263 ep93xx_dma_get_active(struct ep93xx_dma_chan *edmac) 264 264 { 265 - if (list_empty(&edmac->active)) 266 - return NULL; 267 - 268 - return list_first_entry(&edmac->active, struct ep93xx_dma_desc, node); 265 + return list_first_entry_or_null(&edmac->active, 266 + struct ep93xx_dma_desc, node); 269 267 } 270 268 271 269 /** ··· 737 739 { 738 740 struct ep93xx_dma_chan *edmac = (struct ep93xx_dma_chan *)data; 739 741 struct ep93xx_dma_desc *desc, *d; 740 - dma_async_tx_callback callback = NULL; 741 - void *callback_param = NULL; 742 + struct dmaengine_desc_callback cb; 742 743 LIST_HEAD(list); 743 744 745 + memset(&cb, 0, sizeof(cb)); 744 746 spin_lock_irq(&edmac->lock); 745 747 /* 746 748 * If dma_terminate_all() was called before we get to run, the active ··· 755 757 dma_cookie_complete(&desc->txd); 756 758 list_splice_init(&edmac->active, &list); 757 759 } 758 - callback = desc->txd.callback; 759 - callback_param = desc->txd.callback_param; 760 + dmaengine_desc_get_callback(&desc->txd, &cb); 760 761 } 761 762 spin_unlock_irq(&edmac->lock); 762 763 ··· 768 771 ep93xx_dma_desc_put(edmac, desc); 769 772 } 770 773 771 - if (callback) 772 - callback(callback_param); 774 + dmaengine_desc_callback_invoke(&cb, NULL); 773 775 } 774 776 775 777 static irqreturn_t ep93xx_dma_interrupt(int irq, void *dev_id) ··· 1043 1047 1044 1048 first = NULL; 1045 1049 for_each_sg(sgl, sg, sg_len, i) { 1046 - size_t sg_len = sg_dma_len(sg); 1050 + size_t len = sg_dma_len(sg); 1047 1051 1048 - if (sg_len > DMA_MAX_CHAN_BYTES) { 1049 - dev_warn(chan2dev(edmac), "too big transfer size %d\n", 1050 - sg_len); 1052 + if (len > DMA_MAX_CHAN_BYTES) { 1053 + dev_warn(chan2dev(edmac), "too big transfer size %zu\n", 1054 + len); 1051 1055 goto fail; 1052 1056 } 1053 1057 ··· 1064 1068 desc->src_addr = edmac->runtime_addr; 1065 1069 desc->dst_addr = sg_dma_address(sg); 1066 1070 } 1067 - desc->size = sg_len; 1071 + desc->size = len; 1068 1072 1069 1073 if (!first) 1070 1074 first = desc; ··· 1121 1125 } 1122 1126 1123 1127 if (period_len > DMA_MAX_CHAN_BYTES) { 1124 - dev_warn(chan2dev(edmac), "too big period length %d\n", 1128 + dev_warn(chan2dev(edmac), "too big period length %zu\n", 1125 1129 period_len); 1126 1130 return NULL; 1127 1131 }
+2 -10
drivers/dma/fsl_raid.c
··· 134 134 135 135 static void fsl_re_desc_done(struct fsl_re_desc *desc) 136 136 { 137 - dma_async_tx_callback callback; 138 - void *callback_param; 139 - 140 137 dma_cookie_complete(&desc->async_tx); 141 - 142 - callback = desc->async_tx.callback; 143 - callback_param = desc->async_tx.callback_param; 144 - if (callback) 145 - callback(callback_param); 146 - 147 138 dma_descriptor_unmap(&desc->async_tx); 139 + dmaengine_desc_get_callback_invoke(&desc->async_tx, NULL); 148 140 } 149 141 150 142 static void fsl_re_cleanup_descs(struct fsl_re_chan *re_chan) ··· 662 670 663 671 /* read irq property from dts */ 664 672 chan->irq = irq_of_parse_and_map(np, 0); 665 - if (chan->irq == NO_IRQ) { 673 + if (!chan->irq) { 666 674 dev_err(dev, "No IRQ defined for JR %d\n", q); 667 675 ret = -ENODEV; 668 676 goto err_free;
+11 -13
drivers/dma/fsldma.c
··· 516 516 if (txd->cookie > 0) { 517 517 ret = txd->cookie; 518 518 519 - /* Run the link descriptor callback function */ 520 - if (txd->callback) { 521 - chan_dbg(chan, "LD %p callback\n", desc); 522 - txd->callback(txd->callback_param); 523 - } 524 - 525 519 dma_descriptor_unmap(txd); 520 + /* Run the link descriptor callback function */ 521 + dmaengine_desc_get_callback_invoke(txd, NULL); 526 522 } 527 523 528 524 /* Run any dependencies */ ··· 1149 1153 struct fsldma_chan *chan; 1150 1154 int i; 1151 1155 1152 - if (fdev->irq != NO_IRQ) { 1156 + if (fdev->irq) { 1153 1157 dev_dbg(fdev->dev, "free per-controller IRQ\n"); 1154 1158 free_irq(fdev->irq, fdev); 1155 1159 return; ··· 1157 1161 1158 1162 for (i = 0; i < FSL_DMA_MAX_CHANS_PER_DEVICE; i++) { 1159 1163 chan = fdev->chan[i]; 1160 - if (chan && chan->irq != NO_IRQ) { 1164 + if (chan && chan->irq) { 1161 1165 chan_dbg(chan, "free per-channel IRQ\n"); 1162 1166 free_irq(chan->irq, chan); 1163 1167 } ··· 1171 1175 int i; 1172 1176 1173 1177 /* if we have a per-controller IRQ, use that */ 1174 - if (fdev->irq != NO_IRQ) { 1178 + if (fdev->irq) { 1175 1179 dev_dbg(fdev->dev, "request per-controller IRQ\n"); 1176 1180 ret = request_irq(fdev->irq, fsldma_ctrl_irq, IRQF_SHARED, 1177 1181 "fsldma-controller", fdev); ··· 1184 1188 if (!chan) 1185 1189 continue; 1186 1190 1187 - if (chan->irq == NO_IRQ) { 1191 + if (!chan->irq) { 1188 1192 chan_err(chan, "interrupts property missing in device tree\n"); 1189 1193 ret = -ENODEV; 1190 1194 goto out_unwind; ··· 1207 1211 if (!chan) 1208 1212 continue; 1209 1213 1210 - if (chan->irq == NO_IRQ) 1214 + if (!chan->irq) 1211 1215 continue; 1212 1216 1213 1217 free_irq(chan->irq, chan); ··· 1307 1311 list_add_tail(&chan->common.device_node, &fdev->common.channels); 1308 1312 1309 1313 dev_info(fdev->dev, "#%d (%s), irq %d\n", chan->id, compatible, 1310 - chan->irq != NO_IRQ ? chan->irq : fdev->irq); 1314 + chan->irq ? chan->irq : fdev->irq); 1311 1315 1312 1316 return 0; 1313 1317 ··· 1347 1351 if (!fdev->regs) { 1348 1352 dev_err(&op->dev, "unable to ioremap registers\n"); 1349 1353 err = -ENOMEM; 1350 - goto out_free_fdev; 1354 + goto out_free; 1351 1355 } 1352 1356 1353 1357 /* map the channel IRQ if it exists, but don't hookup the handler yet */ ··· 1412 1416 1413 1417 out_free_fdev: 1414 1418 irq_dispose_mapping(fdev->irq); 1419 + iounmap(fdev->regs); 1420 + out_free: 1415 1421 kfree(fdev); 1416 1422 out_return: 1417 1423 return err;
+1 -3
drivers/dma/imx-dma.c
··· 663 663 out: 664 664 spin_unlock_irqrestore(&imxdma->lock, flags); 665 665 666 - if (desc->desc.callback) 667 - desc->desc.callback(desc->desc.callback_param); 668 - 666 + dmaengine_desc_get_callback_invoke(&desc->desc, NULL); 669 667 } 670 668 671 669 static int imxdma_terminate_all(struct dma_chan *chan)
+30 -5
drivers/dma/imx-sdma.c
··· 184 184 struct sdma_mode_count { 185 185 u32 count : 16; /* size of the buffer pointed by this BD */ 186 186 u32 status : 8; /* E,R,I,C,W,D status bits stored here */ 187 - u32 command : 8; /* command mostlky used for channel 0 */ 187 + u32 command : 8; /* command mostly used for channel 0 */ 188 188 }; 189 189 190 190 /* ··· 479 479 .script_addrs = &sdma_script_imx6q, 480 480 }; 481 481 482 + static struct sdma_script_start_addrs sdma_script_imx7d = { 483 + .ap_2_ap_addr = 644, 484 + .uart_2_mcu_addr = 819, 485 + .mcu_2_app_addr = 749, 486 + .uartsh_2_mcu_addr = 1034, 487 + .mcu_2_shp_addr = 962, 488 + .app_2_mcu_addr = 685, 489 + .shp_2_mcu_addr = 893, 490 + .spdif_2_mcu_addr = 1102, 491 + .mcu_2_spdif_addr = 1136, 492 + }; 493 + 494 + static struct sdma_driver_data sdma_imx7d = { 495 + .chnenbl0 = SDMA_CHNENBL0_IMX35, 496 + .num_events = 48, 497 + .script_addrs = &sdma_script_imx7d, 498 + }; 499 + 482 500 static const struct platform_device_id sdma_devtypes[] = { 483 501 { 484 502 .name = "imx25-sdma", ··· 517 499 .name = "imx6q-sdma", 518 500 .driver_data = (unsigned long)&sdma_imx6q, 519 501 }, { 502 + .name = "imx7d-sdma", 503 + .driver_data = (unsigned long)&sdma_imx7d, 504 + }, { 520 505 /* sentinel */ 521 506 } 522 507 }; ··· 532 511 { .compatible = "fsl,imx35-sdma", .data = &sdma_imx35, }, 533 512 { .compatible = "fsl,imx31-sdma", .data = &sdma_imx31, }, 534 513 { .compatible = "fsl,imx25-sdma", .data = &sdma_imx25, }, 514 + { .compatible = "fsl,imx7d-sdma", .data = &sdma_imx7d, }, 535 515 { /* sentinel */ } 536 516 }; 537 517 MODULE_DEVICE_TABLE(of, sdma_dt_ids); ··· 708 686 * executed. 709 687 */ 710 688 711 - if (sdmac->desc.callback) 712 - sdmac->desc.callback(sdmac->desc.callback_param); 689 + dmaengine_desc_get_callback_invoke(&sdmac->desc, NULL); 713 690 714 691 sdmac->buf_tail++; 715 692 sdmac->buf_tail %= sdmac->num_bd; ··· 743 722 sdmac->status = DMA_COMPLETE; 744 723 745 724 dma_cookie_complete(&sdmac->desc); 746 - if (sdmac->desc.callback) 747 - sdmac->desc.callback(sdmac->desc.callback_param); 725 + 726 + dmaengine_desc_get_callback_invoke(&sdmac->desc, NULL); 748 727 } 749 728 750 729 static irqreturn_t sdma_int_handler(int irq, void *dev_id) ··· 1408 1387 #define SDMA_SCRIPT_ADDRS_ARRAY_SIZE_V1 34 1409 1388 #define SDMA_SCRIPT_ADDRS_ARRAY_SIZE_V2 38 1410 1389 #define SDMA_SCRIPT_ADDRS_ARRAY_SIZE_V3 41 1390 + #define SDMA_SCRIPT_ADDRS_ARRAY_SIZE_V4 42 1411 1391 1412 1392 static void sdma_add_scripts(struct sdma_engine *sdma, 1413 1393 const struct sdma_script_start_addrs *addr) ··· 1457 1435 break; 1458 1436 case 3: 1459 1437 sdma->script_number = SDMA_SCRIPT_ADDRS_ARRAY_SIZE_V3; 1438 + break; 1439 + case 4: 1440 + sdma->script_number = SDMA_SCRIPT_ADDRS_ARRAY_SIZE_V4; 1460 1441 break; 1461 1442 default: 1462 1443 dev_err(sdma->dev, "unknown firmware version\n");
+187 -26
drivers/dma/ioat/dma.c
··· 38 38 39 39 #include "../dmaengine.h" 40 40 41 + static char *chanerr_str[] = { 42 + "DMA Transfer Destination Address Error", 43 + "Next Descriptor Address Error", 44 + "Descriptor Error", 45 + "Chan Address Value Error", 46 + "CHANCMD Error", 47 + "Chipset Uncorrectable Data Integrity Error", 48 + "DMA Uncorrectable Data Integrity Error", 49 + "Read Data Error", 50 + "Write Data Error", 51 + "Descriptor Control Error", 52 + "Descriptor Transfer Size Error", 53 + "Completion Address Error", 54 + "Interrupt Configuration Error", 55 + "Super extended descriptor Address Error", 56 + "Unaffiliated Error", 57 + "CRC or XOR P Error", 58 + "XOR Q Error", 59 + "Descriptor Count Error", 60 + "DIF All F detect Error", 61 + "Guard Tag verification Error", 62 + "Application Tag verification Error", 63 + "Reference Tag verification Error", 64 + "Bundle Bit Error", 65 + "Result DIF All F detect Error", 66 + "Result Guard Tag verification Error", 67 + "Result Application Tag verification Error", 68 + "Result Reference Tag verification Error", 69 + NULL 70 + }; 71 + 41 72 static void ioat_eh(struct ioatdma_chan *ioat_chan); 73 + 74 + static void ioat_print_chanerrs(struct ioatdma_chan *ioat_chan, u32 chanerr) 75 + { 76 + int i; 77 + 78 + for (i = 0; i < 32; i++) { 79 + if ((chanerr >> i) & 1) { 80 + if (chanerr_str[i]) { 81 + dev_err(to_dev(ioat_chan), "Err(%d): %s\n", 82 + i, chanerr_str[i]); 83 + } else 84 + break; 85 + } 86 + } 87 + } 42 88 43 89 /** 44 90 * ioat_dma_do_interrupt - handler used for single vector interrupt mode ··· 614 568 615 569 tx = &desc->txd; 616 570 if (tx->cookie) { 571 + struct dmaengine_result res; 572 + 617 573 dma_cookie_complete(tx); 618 574 dma_descriptor_unmap(tx); 619 - if (tx->callback) { 620 - tx->callback(tx->callback_param); 621 - tx->callback = NULL; 622 - } 575 + res.result = DMA_TRANS_NOERROR; 576 + dmaengine_desc_get_callback_invoke(tx, NULL); 577 + tx->callback = NULL; 578 + tx->callback_result = NULL; 623 579 } 624 580 625 581 if (tx->phys == phys_complete) ··· 670 622 if (is_ioat_halted(*ioat_chan->completion)) { 671 623 u32 chanerr = readl(ioat_chan->reg_base + IOAT_CHANERR_OFFSET); 672 624 673 - if (chanerr & IOAT_CHANERR_HANDLE_MASK) { 625 + if (chanerr & 626 + (IOAT_CHANERR_HANDLE_MASK | IOAT_CHANERR_RECOVER_MASK)) { 674 627 mod_timer(&ioat_chan->timer, jiffies + IDLE_TIMEOUT); 675 628 ioat_eh(ioat_chan); 676 629 } ··· 701 652 __ioat_restart_chan(ioat_chan); 702 653 } 703 654 655 + 656 + static void ioat_abort_descs(struct ioatdma_chan *ioat_chan) 657 + { 658 + struct ioatdma_device *ioat_dma = ioat_chan->ioat_dma; 659 + struct ioat_ring_ent *desc; 660 + u16 active; 661 + int idx = ioat_chan->tail, i; 662 + 663 + /* 664 + * We assume that the failed descriptor has been processed. 665 + * Now we are just returning all the remaining submitted 666 + * descriptors to abort. 
667 + */ 668 + active = ioat_ring_active(ioat_chan); 669 + 670 + /* we skip the failed descriptor that tail points to */ 671 + for (i = 1; i < active; i++) { 672 + struct dma_async_tx_descriptor *tx; 673 + 674 + smp_read_barrier_depends(); 675 + prefetch(ioat_get_ring_ent(ioat_chan, idx + i + 1)); 676 + desc = ioat_get_ring_ent(ioat_chan, idx + i); 677 + 678 + tx = &desc->txd; 679 + if (tx->cookie) { 680 + struct dmaengine_result res; 681 + 682 + dma_cookie_complete(tx); 683 + dma_descriptor_unmap(tx); 684 + res.result = DMA_TRANS_ABORTED; 685 + dmaengine_desc_get_callback_invoke(tx, &res); 686 + tx->callback = NULL; 687 + tx->callback_result = NULL; 688 + } 689 + 690 + /* skip extended descriptors */ 691 + if (desc_has_ext(desc)) { 692 + WARN_ON(i + 1 >= active); 693 + i++; 694 + } 695 + 696 + /* cleanup super extended descriptors */ 697 + if (desc->sed) { 698 + ioat_free_sed(ioat_dma, desc->sed); 699 + desc->sed = NULL; 700 + } 701 + } 702 + 703 + smp_mb(); /* finish all descriptor reads before incrementing tail */ 704 + ioat_chan->tail = idx + active; 705 + 706 + desc = ioat_get_ring_ent(ioat_chan, ioat_chan->tail); 707 + ioat_chan->last_completion = *ioat_chan->completion = desc->txd.phys; 708 + } 709 + 704 710 static void ioat_eh(struct ioatdma_chan *ioat_chan) 705 711 { 706 712 struct pci_dev *pdev = to_pdev(ioat_chan); ··· 766 662 u32 err_handled = 0; 767 663 u32 chanerr_int; 768 664 u32 chanerr; 665 + bool abort = false; 666 + struct dmaengine_result res; 769 667 770 668 /* cleanup so tail points to descriptor that caused the error */ 771 669 if (ioat_cleanup_preamble(ioat_chan, &phys_complete)) ··· 803 697 break; 804 698 } 805 699 700 + if (chanerr & IOAT_CHANERR_RECOVER_MASK) { 701 + if (chanerr & IOAT_CHANERR_READ_DATA_ERR) { 702 + res.result = DMA_TRANS_READ_FAILED; 703 + err_handled |= IOAT_CHANERR_READ_DATA_ERR; 704 + } else if (chanerr & IOAT_CHANERR_WRITE_DATA_ERR) { 705 + res.result = DMA_TRANS_WRITE_FAILED; 706 + err_handled |= IOAT_CHANERR_WRITE_DATA_ERR; 707 + } 708 + 709 + abort = true; 710 + } else 711 + res.result = DMA_TRANS_NOERROR; 712 + 806 713 /* fault on unhandled error or spurious halt */ 807 714 if (chanerr ^ err_handled || chanerr == 0) { 808 715 dev_err(to_dev(ioat_chan), "%s: fatal error (%x:%x)\n", 809 716 __func__, chanerr, err_handled); 717 + dev_err(to_dev(ioat_chan), "Errors handled:\n"); 718 + ioat_print_chanerrs(ioat_chan, err_handled); 719 + dev_err(to_dev(ioat_chan), "Errors not handled:\n"); 720 + ioat_print_chanerrs(ioat_chan, (chanerr & ~err_handled)); 721 + 810 722 BUG(); 811 - } else { /* cleanup the faulty descriptor */ 812 - tx = &desc->txd; 813 - if (tx->cookie) { 814 - dma_cookie_complete(tx); 815 - dma_descriptor_unmap(tx); 816 - if (tx->callback) { 817 - tx->callback(tx->callback_param); 818 - tx->callback = NULL; 819 - } 820 - } 821 723 } 822 724 823 - writel(chanerr, ioat_chan->reg_base + IOAT_CHANERR_OFFSET); 824 - pci_write_config_dword(pdev, IOAT_PCI_CHANERR_INT_OFFSET, chanerr_int); 725 + /* cleanup the faulty descriptor since we are continuing */ 726 + tx = &desc->txd; 727 + if (tx->cookie) { 728 + dma_cookie_complete(tx); 729 + dma_descriptor_unmap(tx); 730 + dmaengine_desc_get_callback_invoke(tx, &res); 731 + tx->callback = NULL; 732 + tx->callback_result = NULL; 733 + } 825 734 826 735 /* mark faulting descriptor as complete */ 827 736 *ioat_chan->completion = desc->txd.phys; 828 737 829 738 spin_lock_bh(&ioat_chan->prep_lock); 739 + /* we need abort all descriptors */ 740 + if (abort) { 741 + ioat_abort_descs(ioat_chan); 742 
+ /* clean up the channel, we could be in weird state */ 743 + ioat_reset_hw(ioat_chan); 744 + } 745 + 746 + writel(chanerr, ioat_chan->reg_base + IOAT_CHANERR_OFFSET); 747 + pci_write_config_dword(pdev, IOAT_PCI_CHANERR_INT_OFFSET, chanerr_int); 748 + 830 749 ioat_restart_channel(ioat_chan); 831 750 spin_unlock_bh(&ioat_chan->prep_lock); 832 751 } ··· 884 753 chanerr = readl(ioat_chan->reg_base + IOAT_CHANERR_OFFSET); 885 754 dev_err(to_dev(ioat_chan), "%s: Channel halted (%x)\n", 886 755 __func__, chanerr); 887 - if (test_bit(IOAT_RUN, &ioat_chan->state)) 888 - BUG_ON(is_ioat_bug(chanerr)); 889 - else /* we never got off the ground */ 890 - return; 756 + dev_err(to_dev(ioat_chan), "Errors:\n"); 757 + ioat_print_chanerrs(ioat_chan, chanerr); 758 + 759 + if (test_bit(IOAT_RUN, &ioat_chan->state)) { 760 + spin_lock_bh(&ioat_chan->cleanup_lock); 761 + spin_lock_bh(&ioat_chan->prep_lock); 762 + set_bit(IOAT_CHAN_DOWN, &ioat_chan->state); 763 + spin_unlock_bh(&ioat_chan->prep_lock); 764 + 765 + ioat_abort_descs(ioat_chan); 766 + dev_warn(to_dev(ioat_chan), "Reset channel...\n"); 767 + ioat_reset_hw(ioat_chan); 768 + dev_warn(to_dev(ioat_chan), "Restart channel...\n"); 769 + ioat_restart_channel(ioat_chan); 770 + 771 + spin_lock_bh(&ioat_chan->prep_lock); 772 + clear_bit(IOAT_CHAN_DOWN, &ioat_chan->state); 773 + spin_unlock_bh(&ioat_chan->prep_lock); 774 + spin_unlock_bh(&ioat_chan->cleanup_lock); 775 + } 776 + 777 + return; 891 778 } 892 779 893 780 spin_lock_bh(&ioat_chan->cleanup_lock); ··· 929 780 u32 chanerr; 930 781 931 782 chanerr = readl(ioat_chan->reg_base + IOAT_CHANERR_OFFSET); 932 - dev_warn(to_dev(ioat_chan), "Restarting channel...\n"); 933 - dev_warn(to_dev(ioat_chan), "CHANSTS: %#Lx CHANERR: %#x\n", 934 - status, chanerr); 935 - dev_warn(to_dev(ioat_chan), "Active descriptors: %d\n", 936 - ioat_ring_active(ioat_chan)); 783 + dev_err(to_dev(ioat_chan), "CHANSTS: %#Lx CHANERR: %#x\n", 784 + status, chanerr); 785 + dev_err(to_dev(ioat_chan), "Errors:\n"); 786 + ioat_print_chanerrs(ioat_chan, chanerr); 787 + 788 + dev_dbg(to_dev(ioat_chan), "Active descriptors: %d\n", 789 + ioat_ring_active(ioat_chan)); 937 790 938 791 spin_lock_bh(&ioat_chan->prep_lock); 792 + set_bit(IOAT_CHAN_DOWN, &ioat_chan->state); 793 + spin_unlock_bh(&ioat_chan->prep_lock); 794 + 795 + ioat_abort_descs(ioat_chan); 796 + dev_warn(to_dev(ioat_chan), "Resetting channel...\n"); 797 + ioat_reset_hw(ioat_chan); 798 + dev_warn(to_dev(ioat_chan), "Restarting channel...\n"); 939 799 ioat_restart_channel(ioat_chan); 800 + 801 + spin_lock_bh(&ioat_chan->prep_lock); 802 + clear_bit(IOAT_CHAN_DOWN, &ioat_chan->state); 940 803 spin_unlock_bh(&ioat_chan->prep_lock); 941 804 spin_unlock_bh(&ioat_chan->cleanup_lock); 942 805 return;
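The ioat rework above is a driver-side consumer of the new error-reporting path: instead of silently completing a faulted descriptor, it fills a struct dmaengine_result (DMA_TRANS_READ_FAILED, DMA_TRANS_WRITE_FAILED or DMA_TRANS_ABORTED) and passes it through the callback helpers. A minimal client-side sketch of opting into that information (the my_req structure and my_dma_done name are hypothetical):

        static void my_dma_done(void *param, const struct dmaengine_result *result)
        {
                struct my_req *req = param;             /* hypothetical client context */

                if (result && result->result != DMA_TRANS_NOERROR)
                        req->error = -EIO;              /* aborted or read/write fault */
                complete(&req->done);
        }

        ...
        tx->callback_result = my_dma_done;              /* preferred over plain tx->callback */
        tx->callback_param  = req;
        cookie = dmaengine_submit(tx);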
+1 -1
drivers/dma/ioat/init.c
··· 828 828 829 829 dest_dma = dma_map_page(dev, dest, 0, PAGE_SIZE, DMA_FROM_DEVICE); 830 830 if (dma_mapping_error(dev, dest_dma)) 831 - goto dma_unmap; 831 + goto free_resources; 832 832 833 833 for (i = 0; i < IOAT_NUM_SRC_TEST; i++) 834 834 dma_srcs[i] = DMA_ERROR_CODE;
+2
drivers/dma/ioat/registers.h
··· 240 240 #define IOAT_CHANERR_DESCRIPTOR_COUNT_ERR 0x40000 241 241 242 242 #define IOAT_CHANERR_HANDLE_MASK (IOAT_CHANERR_XOR_P_OR_CRC_ERR | IOAT_CHANERR_XOR_Q_ERR) 243 + #define IOAT_CHANERR_RECOVER_MASK (IOAT_CHANERR_READ_DATA_ERR | \ 244 + IOAT_CHANERR_WRITE_DATA_ERR) 243 245 244 246 #define IOAT_CHANERR_MASK_OFFSET 0x2C /* 32-bit Channel Error Register */ 245 247
+1 -2
drivers/dma/iop-adma.c
··· 71 71 /* call the callback (must not sleep or submit new 72 72 * operations to this channel) 73 73 */ 74 - if (tx->callback) 75 - tx->callback(tx->callback_param); 74 + dmaengine_desc_get_callback_invoke(tx, NULL); 76 75 77 76 dma_descriptor_unmap(tx); 78 77 if (desc->group_head)
+8 -10
drivers/dma/ipu/ipu_idmac.c
··· 1160 1160 struct scatterlist **sg, *sgnext, *sgnew = NULL; 1161 1161 /* Next transfer descriptor */ 1162 1162 struct idmac_tx_desc *desc, *descnew; 1163 - dma_async_tx_callback callback; 1164 - void *callback_param; 1165 1163 bool done = false; 1166 1164 u32 ready0, ready1, curbuf, err; 1167 1165 unsigned long flags; 1166 + struct dmaengine_desc_callback cb; 1168 1167 1169 1168 /* IDMAC has cleared the respective BUFx_RDY bit, we manage the buffer */ 1170 1169 ··· 1277 1278 1278 1279 if (likely(sgnew) && 1279 1280 ipu_submit_buffer(ichan, descnew, sgnew, ichan->active_buffer) < 0) { 1280 - callback = descnew->txd.callback; 1281 - callback_param = descnew->txd.callback_param; 1281 + dmaengine_desc_get_callback(&descnew->txd, &cb); 1282 + 1282 1283 list_del_init(&descnew->list); 1283 1284 spin_unlock(&ichan->lock); 1284 - if (callback) 1285 - callback(callback_param); 1285 + 1286 + dmaengine_desc_callback_invoke(&cb, NULL); 1286 1287 spin_lock(&ichan->lock); 1287 1288 } 1288 1289 ··· 1291 1292 if (done) 1292 1293 dma_cookie_complete(&desc->txd); 1293 1294 1294 - callback = desc->txd.callback; 1295 - callback_param = desc->txd.callback_param; 1295 + dmaengine_desc_get_callback(&desc->txd, &cb); 1296 1296 1297 1297 spin_unlock(&ichan->lock); 1298 1298 1299 - if (done && (desc->txd.flags & DMA_PREP_INTERRUPT) && callback) 1300 - callback(callback_param); 1299 + if (done && (desc->txd.flags & DMA_PREP_INTERRUPT)) 1300 + dmaengine_desc_callback_invoke(&cb, NULL); 1301 1301 1302 1302 return IRQ_HANDLED; 1303 1303 }
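Conversions like the ipu_idmac one above all follow the same idiom, built on the new helpers in drivers/dma/dmaengine.h: snapshot the callback while the channel lock is held, drop the lock, then invoke it so the callback is free to submit new work. A condensed sketch (lock and descriptor names are placeholders):

        struct dmaengine_desc_callback cb;

        spin_lock_irqsave(&chan->lock, flags);
        dmaengine_desc_get_callback(&desc->txd, &cb);   /* copy callback + param */
        list_del(&desc->node);
        spin_unlock_irqrestore(&chan->lock, flags);

        /* no lock held; the helper handles both callback and callback_result */
        dmaengine_desc_callback_invoke(&cb, NULL);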
+4 -5
drivers/dma/ipu/ipu_irq.c
··· 286 286 raw_spin_unlock(&bank_lock); 287 287 while ((line = ffs(status))) { 288 288 struct ipu_irq_map *map; 289 - unsigned int irq = NO_IRQ; 289 + unsigned int irq; 290 290 291 291 line--; 292 292 status &= ~(1UL << line); 293 293 294 294 raw_spin_lock(&bank_lock); 295 295 map = src2map(32 * i + line); 296 - if (map) 297 - irq = map->irq; 298 - raw_spin_unlock(&bank_lock); 299 - 300 296 if (!map) { 297 + raw_spin_unlock(&bank_lock); 301 298 pr_err("IPU: Interrupt on unmapped source %u bank %d\n", 302 299 line, i); 303 300 continue; 304 301 } 302 + irq = map->irq; 303 + raw_spin_unlock(&bank_lock); 305 304 generic_handle_irq(irq); 306 305 } 307 306 }
+178 -37
drivers/dma/k3dma.c
··· 1 1 /* 2 - * Copyright (c) 2013 Linaro Ltd. 2 + * Copyright (c) 2013 - 2015 Linaro Ltd. 3 3 * Copyright (c) 2013 Hisilicon Limited. 4 4 * 5 5 * This program is free software; you can redistribute it and/or modify ··· 8 8 */ 9 9 #include <linux/sched.h> 10 10 #include <linux/device.h> 11 + #include <linux/dma-mapping.h> 12 + #include <linux/dmapool.h> 11 13 #include <linux/dmaengine.h> 12 14 #include <linux/init.h> 13 15 #include <linux/interrupt.h> ··· 27 25 28 26 #define DRIVER_NAME "k3-dma" 29 27 #define DMA_MAX_SIZE 0x1ffc 28 + #define DMA_CYCLIC_MAX_PERIOD 0x1000 29 + #define LLI_BLOCK_SIZE (4 * PAGE_SIZE) 30 30 31 31 #define INT_STAT 0x00 32 32 #define INT_TC1 0x04 33 + #define INT_TC2 0x08 33 34 #define INT_ERR1 0x0c 34 35 #define INT_ERR2 0x10 35 36 #define INT_TC1_MASK 0x18 37 + #define INT_TC2_MASK 0x1c 36 38 #define INT_ERR1_MASK 0x20 37 39 #define INT_ERR2_MASK 0x24 38 40 #define INT_TC1_RAW 0x600 39 - #define INT_ERR1_RAW 0x608 40 - #define INT_ERR2_RAW 0x610 41 + #define INT_TC2_RAW 0x608 42 + #define INT_ERR1_RAW 0x610 43 + #define INT_ERR2_RAW 0x618 41 44 #define CH_PRI 0x688 42 45 #define CH_STAT 0x690 43 46 #define CX_CUR_CNT 0x704 44 47 #define CX_LLI 0x800 45 - #define CX_CNT 0x810 48 + #define CX_CNT1 0x80c 49 + #define CX_CNT0 0x810 46 50 #define CX_SRC 0x814 47 51 #define CX_DST 0x818 48 52 #define CX_CFG 0x81c ··· 57 49 58 50 #define CX_LLI_CHAIN_EN 0x2 59 51 #define CX_CFG_EN 0x1 52 + #define CX_CFG_NODEIRQ BIT(1) 60 53 #define CX_CFG_MEM2PER (0x1 << 2) 61 54 #define CX_CFG_PER2MEM (0x2 << 2) 62 55 #define CX_CFG_SRCINCR (0x1 << 31) ··· 77 68 dma_addr_t desc_hw_lli; 78 69 size_t desc_num; 79 70 size_t size; 80 - struct k3_desc_hw desc_hw[0]; 71 + struct k3_desc_hw *desc_hw; 81 72 }; 82 73 83 74 struct k3_dma_phy; ··· 90 81 enum dma_transfer_direction dir; 91 82 dma_addr_t dev_addr; 92 83 enum dma_status status; 84 + bool cyclic; 93 85 }; 94 86 95 87 struct k3_dma_phy { ··· 110 100 struct k3_dma_phy *phy; 111 101 struct k3_dma_chan *chans; 112 102 struct clk *clk; 103 + struct dma_pool *pool; 113 104 u32 dma_channels; 114 105 u32 dma_requests; 115 106 unsigned int irq; ··· 146 135 147 136 val = 0x1 << phy->idx; 148 137 writel_relaxed(val, d->base + INT_TC1_RAW); 138 + writel_relaxed(val, d->base + INT_TC2_RAW); 149 139 writel_relaxed(val, d->base + INT_ERR1_RAW); 150 140 writel_relaxed(val, d->base + INT_ERR2_RAW); 151 141 } ··· 154 142 static void k3_dma_set_desc(struct k3_dma_phy *phy, struct k3_desc_hw *hw) 155 143 { 156 144 writel_relaxed(hw->lli, phy->base + CX_LLI); 157 - writel_relaxed(hw->count, phy->base + CX_CNT); 145 + writel_relaxed(hw->count, phy->base + CX_CNT0); 158 146 writel_relaxed(hw->saddr, phy->base + CX_SRC); 159 147 writel_relaxed(hw->daddr, phy->base + CX_DST); 160 148 writel_relaxed(AXI_CFG_DEFAULT, phy->base + AXI_CFG); ··· 188 176 189 177 /* unmask irq */ 190 178 writel_relaxed(0xffff, d->base + INT_TC1_MASK); 179 + writel_relaxed(0xffff, d->base + INT_TC2_MASK); 191 180 writel_relaxed(0xffff, d->base + INT_ERR1_MASK); 192 181 writel_relaxed(0xffff, d->base + INT_ERR2_MASK); 193 182 } else { 194 183 /* mask irq */ 195 184 writel_relaxed(0x0, d->base + INT_TC1_MASK); 185 + writel_relaxed(0x0, d->base + INT_TC2_MASK); 196 186 writel_relaxed(0x0, d->base + INT_ERR1_MASK); 197 187 writel_relaxed(0x0, d->base + INT_ERR2_MASK); 198 188 } ··· 207 193 struct k3_dma_chan *c; 208 194 u32 stat = readl_relaxed(d->base + INT_STAT); 209 195 u32 tc1 = readl_relaxed(d->base + INT_TC1); 196 + u32 tc2 = readl_relaxed(d->base + INT_TC2); 210 197 u32 err1 
= readl_relaxed(d->base + INT_ERR1); 211 198 u32 err2 = readl_relaxed(d->base + INT_ERR2); 212 199 u32 i, irq_chan = 0; 213 200 214 201 while (stat) { 215 202 i = __ffs(stat); 216 - stat &= (stat - 1); 217 - if (likely(tc1 & BIT(i))) { 203 + stat &= ~BIT(i); 204 + if (likely(tc1 & BIT(i)) || (tc2 & BIT(i))) { 205 + unsigned long flags; 206 + 218 207 p = &d->phy[i]; 219 208 c = p->vchan; 220 - if (c) { 221 - unsigned long flags; 222 - 209 + if (c && (tc1 & BIT(i))) { 223 210 spin_lock_irqsave(&c->vc.lock, flags); 224 211 vchan_cookie_complete(&p->ds_run->vd); 212 + WARN_ON_ONCE(p->ds_done); 225 213 p->ds_done = p->ds_run; 214 + p->ds_run = NULL; 215 + spin_unlock_irqrestore(&c->vc.lock, flags); 216 + } 217 + if (c && (tc2 & BIT(i))) { 218 + spin_lock_irqsave(&c->vc.lock, flags); 219 + if (p->ds_run != NULL) 220 + vchan_cyclic_callback(&p->ds_run->vd); 226 221 spin_unlock_irqrestore(&c->vc.lock, flags); 227 222 } 228 223 irq_chan |= BIT(i); ··· 241 218 } 242 219 243 220 writel_relaxed(irq_chan, d->base + INT_TC1_RAW); 221 + writel_relaxed(irq_chan, d->base + INT_TC2_RAW); 244 222 writel_relaxed(err1, d->base + INT_ERR1_RAW); 245 223 writel_relaxed(err2, d->base + INT_ERR2_RAW); 246 224 247 - if (irq_chan) { 225 + if (irq_chan) 248 226 tasklet_schedule(&d->task); 227 + 228 + if (irq_chan || err1 || err2) 249 229 return IRQ_HANDLED; 250 - } else 251 - return IRQ_NONE; 230 + 231 + return IRQ_NONE; 252 232 } 253 233 254 234 static int k3_dma_start_txd(struct k3_dma_chan *c) ··· 273 247 * so vc->desc_issued only contains desc pending 274 248 */ 275 249 list_del(&ds->vd.node); 250 + 251 + WARN_ON_ONCE(c->phy->ds_run); 252 + WARN_ON_ONCE(c->phy->ds_done); 276 253 c->phy->ds_run = ds; 277 - c->phy->ds_done = NULL; 278 254 /* start dma */ 279 255 k3_dma_set_desc(c->phy, &ds->desc_hw[0]); 280 256 return 0; 281 257 } 282 - c->phy->ds_done = NULL; 283 - c->phy->ds_run = NULL; 284 258 return -EAGAIN; 285 259 } 286 260 ··· 377 351 * its total size. 
378 352 */ 379 353 vd = vchan_find_desc(&c->vc, cookie); 380 - if (vd) { 354 + if (vd && !c->cyclic) { 381 355 bytes = container_of(vd, struct k3_dma_desc_sw, vd)->size; 382 356 } else if ((!p) || (!p->ds_run)) { 383 357 bytes = 0; ··· 387 361 388 362 bytes = k3_dma_get_curr_cnt(d, p); 389 363 clli = k3_dma_get_curr_lli(p); 390 - index = (clli - ds->desc_hw_lli) / sizeof(struct k3_desc_hw); 364 + index = ((clli - ds->desc_hw_lli) / 365 + sizeof(struct k3_desc_hw)) + 1; 391 366 for (; index < ds->desc_num; index++) { 392 367 bytes += ds->desc_hw[index].count; 393 368 /* end of lli */ ··· 429 402 static void k3_dma_fill_desc(struct k3_dma_desc_sw *ds, dma_addr_t dst, 430 403 dma_addr_t src, size_t len, u32 num, u32 ccfg) 431 404 { 432 - if ((num + 1) < ds->desc_num) 405 + if (num != ds->desc_num - 1) 433 406 ds->desc_hw[num].lli = ds->desc_hw_lli + (num + 1) * 434 407 sizeof(struct k3_desc_hw); 408 + 435 409 ds->desc_hw[num].lli |= CX_LLI_CHAIN_EN; 436 410 ds->desc_hw[num].count = len; 437 411 ds->desc_hw[num].saddr = src; 438 412 ds->desc_hw[num].daddr = dst; 439 413 ds->desc_hw[num].config = ccfg; 414 + } 415 + 416 + static struct k3_dma_desc_sw *k3_dma_alloc_desc_resource(int num, 417 + struct dma_chan *chan) 418 + { 419 + struct k3_dma_chan *c = to_k3_chan(chan); 420 + struct k3_dma_desc_sw *ds; 421 + struct k3_dma_dev *d = to_k3_dma(chan->device); 422 + int lli_limit = LLI_BLOCK_SIZE / sizeof(struct k3_desc_hw); 423 + 424 + if (num > lli_limit) { 425 + dev_dbg(chan->device->dev, "vch %p: sg num %d exceed max %d\n", 426 + &c->vc, num, lli_limit); 427 + return NULL; 428 + } 429 + 430 + ds = kzalloc(sizeof(*ds), GFP_NOWAIT); 431 + if (!ds) 432 + return NULL; 433 + 434 + ds->desc_hw = dma_pool_alloc(d->pool, GFP_NOWAIT, &ds->desc_hw_lli); 435 + if (!ds->desc_hw) { 436 + dev_dbg(chan->device->dev, "vch %p: dma alloc fail\n", &c->vc); 437 + kfree(ds); 438 + return NULL; 439 + } 440 + memset(ds->desc_hw, 0, sizeof(struct k3_desc_hw) * num); 441 + ds->desc_num = num; 442 + return ds; 440 443 } 441 444 442 445 static struct dma_async_tx_descriptor *k3_dma_prep_memcpy( ··· 482 425 return NULL; 483 426 484 427 num = DIV_ROUND_UP(len, DMA_MAX_SIZE); 485 - ds = kzalloc(sizeof(*ds) + num * sizeof(ds->desc_hw[0]), GFP_ATOMIC); 428 + 429 + ds = k3_dma_alloc_desc_resource(num, chan); 486 430 if (!ds) 487 431 return NULL; 488 432 489 - ds->desc_hw_lli = __virt_to_phys((unsigned long)&ds->desc_hw[0]); 433 + c->cyclic = 0; 490 434 ds->size = len; 491 - ds->desc_num = num; 492 435 num = 0; 493 436 494 437 if (!c->ccfg) { ··· 531 474 if (sgl == NULL) 532 475 return NULL; 533 476 477 + c->cyclic = 0; 478 + 534 479 for_each_sg(sgl, sg, sglen, i) { 535 480 avail = sg_dma_len(sg); 536 481 if (avail > DMA_MAX_SIZE) 537 482 num += DIV_ROUND_UP(avail, DMA_MAX_SIZE) - 1; 538 483 } 539 484 540 - ds = kzalloc(sizeof(*ds) + num * sizeof(ds->desc_hw[0]), GFP_ATOMIC); 485 + ds = k3_dma_alloc_desc_resource(num, chan); 541 486 if (!ds) 542 487 return NULL; 543 - 544 - ds->desc_hw_lli = __virt_to_phys((unsigned long)&ds->desc_hw[0]); 545 - ds->desc_num = num; 546 488 num = 0; 547 489 548 490 for_each_sg(sgl, sg, sglen, i) { ··· 569 513 570 514 ds->desc_hw[num-1].lli = 0; /* end of link */ 571 515 ds->size = total; 516 + return vchan_tx_prep(&c->vc, &ds->vd, flags); 517 + } 518 + 519 + static struct dma_async_tx_descriptor * 520 + k3_dma_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr, 521 + size_t buf_len, size_t period_len, 522 + enum dma_transfer_direction dir, 523 + unsigned long flags) 524 + { 525 + struct 
k3_dma_chan *c = to_k3_chan(chan); 526 + struct k3_dma_desc_sw *ds; 527 + size_t len, avail, total = 0; 528 + dma_addr_t addr, src = 0, dst = 0; 529 + int num = 1, since = 0; 530 + size_t modulo = DMA_CYCLIC_MAX_PERIOD; 531 + u32 en_tc2 = 0; 532 + 533 + dev_dbg(chan->device->dev, "%s: buf %pad, dst %pad, buf len %zu, period_len = %zu, dir %d\n", 534 + __func__, &buf_addr, &to_k3_chan(chan)->dev_addr, 535 + buf_len, period_len, (int)dir); 536 + 537 + avail = buf_len; 538 + if (avail > modulo) 539 + num += DIV_ROUND_UP(avail, modulo) - 1; 540 + 541 + ds = k3_dma_alloc_desc_resource(num, chan); 542 + if (!ds) 543 + return NULL; 544 + 545 + c->cyclic = 1; 546 + addr = buf_addr; 547 + avail = buf_len; 548 + total = avail; 549 + num = 0; 550 + 551 + if (period_len < modulo) 552 + modulo = period_len; 553 + 554 + do { 555 + len = min_t(size_t, avail, modulo); 556 + 557 + if (dir == DMA_MEM_TO_DEV) { 558 + src = addr; 559 + dst = c->dev_addr; 560 + } else if (dir == DMA_DEV_TO_MEM) { 561 + src = c->dev_addr; 562 + dst = addr; 563 + } 564 + since += len; 565 + if (since >= period_len) { 566 + /* descriptor asks for TC2 interrupt on completion */ 567 + en_tc2 = CX_CFG_NODEIRQ; 568 + since -= period_len; 569 + } else 570 + en_tc2 = 0; 571 + 572 + k3_dma_fill_desc(ds, dst, src, len, num++, c->ccfg | en_tc2); 573 + 574 + addr += len; 575 + avail -= len; 576 + } while (avail); 577 + 578 + /* "Cyclic" == end of link points back to start of link */ 579 + ds->desc_hw[num - 1].lli |= ds->desc_hw_lli; 580 + 581 + ds->size = total; 582 + 572 583 return vchan_tx_prep(&c->vc, &ds->vd, flags); 573 584 } 574 585 ··· 674 551 c->ccfg |= (val << 12) | (val << 16); 675 552 676 553 if ((maxburst == 0) || (maxburst > 16)) 677 - val = 16; 554 + val = 15; 678 555 else 679 556 val = maxburst - 1; 680 557 c->ccfg |= (val << 20) | (val << 24); ··· 684 561 c->ccfg |= c->vc.chan.chan_id << 4; 685 562 686 563 return 0; 564 + } 565 + 566 + static void k3_dma_free_desc(struct virt_dma_desc *vd) 567 + { 568 + struct k3_dma_desc_sw *ds = 569 + container_of(vd, struct k3_dma_desc_sw, vd); 570 + struct k3_dma_dev *d = to_k3_dma(vd->tx.chan->device); 571 + 572 + dma_pool_free(d->pool, ds->desc_hw, ds->desc_hw_lli); 573 + kfree(ds); 687 574 } 688 575 689 576 static int k3_dma_terminate_all(struct dma_chan *chan) ··· 719 586 k3_dma_terminate_chan(p, d); 720 587 c->phy = NULL; 721 588 p->vchan = NULL; 722 - p->ds_run = p->ds_done = NULL; 589 + if (p->ds_run) { 590 + k3_dma_free_desc(&p->ds_run->vd); 591 + p->ds_run = NULL; 592 + } 593 + if (p->ds_done) { 594 + k3_dma_free_desc(&p->ds_done->vd); 595 + p->ds_done = NULL; 596 + } 597 + 723 598 } 724 599 spin_unlock_irqrestore(&c->vc.lock, flags); 725 600 vchan_dma_desc_free_list(&c->vc, &head); ··· 778 637 spin_unlock_irqrestore(&c->vc.lock, flags); 779 638 780 639 return 0; 781 - } 782 - 783 - static void k3_dma_free_desc(struct virt_dma_desc *vd) 784 - { 785 - struct k3_dma_desc_sw *ds = 786 - container_of(vd, struct k3_dma_desc_sw, vd); 787 - 788 - kfree(ds); 789 640 } 790 641 791 642 static const struct of_device_id k3_pdma_dt_ids[] = { ··· 839 706 840 707 d->irq = irq; 841 708 709 + /* A DMA memory pool for LLIs, align on 32-byte boundary */ 710 + d->pool = dmam_pool_create(DRIVER_NAME, &op->dev, 711 + LLI_BLOCK_SIZE, 32, 0); 712 + if (!d->pool) 713 + return -ENOMEM; 714 + 842 715 /* init phy channel */ 843 716 d->phy = devm_kzalloc(&op->dev, 844 717 d->dma_channels * sizeof(struct k3_dma_phy), GFP_KERNEL); ··· 861 722 INIT_LIST_HEAD(&d->slave.channels); 862 723 dma_cap_set(DMA_SLAVE, 
d->slave.cap_mask); 863 724 dma_cap_set(DMA_MEMCPY, d->slave.cap_mask); 725 + dma_cap_set(DMA_CYCLIC, d->slave.cap_mask); 864 726 d->slave.dev = &op->dev; 865 727 d->slave.device_free_chan_resources = k3_dma_free_chan_resources; 866 728 d->slave.device_tx_status = k3_dma_tx_status; 867 729 d->slave.device_prep_dma_memcpy = k3_dma_prep_memcpy; 868 730 d->slave.device_prep_slave_sg = k3_dma_prep_slave_sg; 731 + d->slave.device_prep_dma_cyclic = k3_dma_prep_dma_cyclic; 869 732 d->slave.device_issue_pending = k3_dma_issue_pending; 870 733 d->slave.device_config = k3_dma_config; 871 734 d->slave.device_pause = k3_dma_transfer_pause;
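The k3dma rework above also stops deriving LLI bus addresses with __virt_to_phys() on kzalloc'd memory and instead takes hardware descriptors from a managed DMA pool, which guarantees device-visible, 32-byte aligned blocks. The allocation scheme, condensed from the driver with error handling trimmed:

        /* probe: one pool for all link-list blocks, 32-byte aligned */
        d->pool = dmam_pool_create(DRIVER_NAME, &op->dev, LLI_BLOCK_SIZE, 32, 0);

        /* prep: every software descriptor gets a block plus its bus address */
        ds->desc_hw = dma_pool_alloc(d->pool, GFP_NOWAIT, &ds->desc_hw_lli);

        /* free path (k3_dma_free_desc) returns the block to the pool */
        dma_pool_free(d->pool, ds->desc_hw, ds->desc_hw_lli);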
+2 -4
drivers/dma/mic_x100_dma.c
··· 104 104 tx = &ch->tx_array[last_tail]; 105 105 if (tx->cookie) { 106 106 dma_cookie_complete(tx); 107 - if (tx->callback) { 108 - tx->callback(tx->callback_param); 109 - tx->callback = NULL; 110 - } 107 + dmaengine_desc_get_callback_invoke(tx, NULL); 108 + tx->callback = NULL; 111 109 } 112 110 last_tail = mic_dma_hw_ring_inc(last_tail); 113 111 }
+5 -9
drivers/dma/mmp_pdma.c
··· 864 864 struct mmp_pdma_desc_sw *desc, *_desc; 865 865 LIST_HEAD(chain_cleanup); 866 866 unsigned long flags; 867 + struct dmaengine_desc_callback cb; 867 868 868 869 if (chan->cyclic_first) { 869 - dma_async_tx_callback cb = NULL; 870 - void *cb_data = NULL; 871 - 872 870 spin_lock_irqsave(&chan->desc_lock, flags); 873 871 desc = chan->cyclic_first; 874 - cb = desc->async_tx.callback; 875 - cb_data = desc->async_tx.callback_param; 872 + dmaengine_desc_get_callback(&desc->async_tx, &cb); 876 873 spin_unlock_irqrestore(&chan->desc_lock, flags); 877 874 878 - if (cb) 879 - cb(cb_data); 875 + dmaengine_desc_callback_invoke(&cb, NULL); 880 876 881 877 return; 882 878 } ··· 917 921 /* Remove from the list of transactions */ 918 922 list_del(&desc->node); 919 923 /* Run the link descriptor callback function */ 920 - if (txd->callback) 921 - txd->callback(txd->callback_param); 924 + dmaengine_desc_get_callback(txd, &cb); 925 + dmaengine_desc_callback_invoke(&cb, NULL); 922 926 923 927 dma_pool_free(chan->desc_pool, desc, txd->phys); 924 928 }
+2 -4
drivers/dma/mmp_tdma.c
··· 349 349 { 350 350 struct mmp_tdma_chan *tdmac = (struct mmp_tdma_chan *)data; 351 351 352 - if (tdmac->desc.callback) 353 - tdmac->desc.callback(tdmac->desc.callback_param); 354 - 352 + dmaengine_desc_get_callback_invoke(&tdmac->desc, NULL); 355 353 } 356 354 357 355 static void mmp_tdma_free_descriptor(struct mmp_tdma_chan *tdmac) ··· 431 433 432 434 if (period_len > TDMA_MAX_XFER_BYTES) { 433 435 dev_err(tdmac->dev, 434 - "maximum period size exceeded: %d > %d\n", 436 + "maximum period size exceeded: %zu > %d\n", 435 437 period_len, TDMA_MAX_XFER_BYTES); 436 438 goto err_out; 437 439 }
+1 -1
drivers/dma/moxart-dma.c
··· 579 579 return -ENOMEM; 580 580 581 581 irq = irq_of_parse_and_map(node, 0); 582 - if (irq == NO_IRQ) { 582 + if (!irq) { 583 583 dev_err(dev, "no IRQ resource\n"); 584 584 return -EINVAL; 585 585 }
+3 -4
drivers/dma/mpc512x_dma.c
··· 411 411 list_for_each_entry(mdesc, &list, node) { 412 412 desc = &mdesc->desc; 413 413 414 - if (desc->callback) 415 - desc->callback(desc->callback_param); 414 + dmaengine_desc_get_callback_invoke(desc, NULL); 416 415 417 416 last_cookie = desc->cookie; 418 417 dma_run_dependencies(desc); ··· 925 926 } 926 927 927 928 mdma->irq = irq_of_parse_and_map(dn, 0); 928 - if (mdma->irq == NO_IRQ) { 929 + if (!mdma->irq) { 929 930 dev_err(dev, "Error mapping IRQ!\n"); 930 931 retval = -EINVAL; 931 932 goto err; ··· 934 935 if (of_device_is_compatible(dn, "fsl,mpc8308-dma")) { 935 936 mdma->is_mpc8308 = 1; 936 937 mdma->irq2 = irq_of_parse_and_map(dn, 1); 937 - if (mdma->irq2 == NO_IRQ) { 938 + if (!mdma->irq2) { 938 939 dev_err(dev, "Error mapping IRQ!\n"); 939 940 retval = -EINVAL; 940 941 goto err_dispose1;
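The NO_IRQ removals here, and in the moxart and ppc4xx hunks, work because irq_of_parse_and_map() returns 0 when no mapping exists and 0 is never a valid Linux IRQ number for a driver, whereas the historical NO_IRQ constant was architecture dependent (0 on powerpc, -1 on older ARM) and therefore not portable to compare against. The check reduces to:

        irq = irq_of_parse_and_map(np, 0);
        if (!irq)               /* 0 means "no mapping" on every architecture */
                return -EINVAL;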
+96 -6
drivers/dma/mv_xor.c
··· 206 206 if (desc->async_tx.cookie > 0) { 207 207 cookie = desc->async_tx.cookie; 208 208 209 + dma_descriptor_unmap(&desc->async_tx); 209 210 /* call the callback (must not sleep or submit new 210 211 * operations to this channel) 211 212 */ 212 - if (desc->async_tx.callback) 213 - desc->async_tx.callback( 214 - desc->async_tx.callback_param); 215 - 216 - dma_descriptor_unmap(&desc->async_tx); 213 + dmaengine_desc_get_callback_invoke(&desc->async_tx, NULL); 217 214 } 218 215 219 216 /* run dependent operations */ ··· 467 470 return mv_chan->slots_allocated ? : -ENOMEM; 468 471 } 469 472 473 + /* 474 + * Check if source or destination is an PCIe/IO address (non-SDRAM) and add 475 + * a new MBus window if necessary. Use a cache for these check so that 476 + * the MMIO mapped registers don't have to be accessed for this check 477 + * to speed up this process. 478 + */ 479 + static int mv_xor_add_io_win(struct mv_xor_chan *mv_chan, u32 addr) 480 + { 481 + struct mv_xor_device *xordev = mv_chan->xordev; 482 + void __iomem *base = mv_chan->mmr_high_base; 483 + u32 win_enable; 484 + u32 size; 485 + u8 target, attr; 486 + int ret; 487 + int i; 488 + 489 + /* Nothing needs to get done for the Armada 3700 */ 490 + if (xordev->xor_type == XOR_ARMADA_37XX) 491 + return 0; 492 + 493 + /* 494 + * Loop over the cached windows to check, if the requested area 495 + * is already mapped. If this the case, nothing needs to be done 496 + * and we can return. 497 + */ 498 + for (i = 0; i < WINDOW_COUNT; i++) { 499 + if (addr >= xordev->win_start[i] && 500 + addr <= xordev->win_end[i]) { 501 + /* Window is already mapped */ 502 + return 0; 503 + } 504 + } 505 + 506 + /* 507 + * The window is not mapped, so we need to create the new mapping 508 + */ 509 + 510 + /* If no IO window is found that addr has to be located in SDRAM */ 511 + ret = mvebu_mbus_get_io_win_info(addr, &size, &target, &attr); 512 + if (ret < 0) 513 + return 0; 514 + 515 + /* 516 + * Mask the base addr 'addr' according to 'size' read back from the 517 + * MBus window. Otherwise we might end up with an address located 518 + * somewhere in the middle of this area here. 
519 + */ 520 + size -= 1; 521 + addr &= ~size; 522 + 523 + /* 524 + * Reading one of both enabled register is enough, as they are always 525 + * programmed to the identical values 526 + */ 527 + win_enable = readl(base + WINDOW_BAR_ENABLE(0)); 528 + 529 + /* Set 'i' to the first free window to write the new values to */ 530 + i = ffs(~win_enable) - 1; 531 + if (i >= WINDOW_COUNT) 532 + return -ENOMEM; 533 + 534 + writel((addr & 0xffff0000) | (attr << 8) | target, 535 + base + WINDOW_BASE(i)); 536 + writel(size & 0xffff0000, base + WINDOW_SIZE(i)); 537 + 538 + /* Fill the caching variables for later use */ 539 + xordev->win_start[i] = addr; 540 + xordev->win_end[i] = addr + size; 541 + 542 + win_enable |= (1 << i); 543 + win_enable |= 3 << (16 + (2 * i)); 544 + writel(win_enable, base + WINDOW_BAR_ENABLE(0)); 545 + writel(win_enable, base + WINDOW_BAR_ENABLE(1)); 546 + 547 + return 0; 548 + } 549 + 470 550 static struct dma_async_tx_descriptor * 471 551 mv_xor_prep_dma_xor(struct dma_chan *chan, dma_addr_t dest, dma_addr_t *src, 472 552 unsigned int src_cnt, size_t len, unsigned long flags) 473 553 { 474 554 struct mv_xor_chan *mv_chan = to_mv_xor_chan(chan); 475 555 struct mv_xor_desc_slot *sw_desc; 556 + int ret; 476 557 477 558 if (unlikely(len < MV_XOR_MIN_BYTE_COUNT)) 478 559 return NULL; ··· 561 486 "%s src_cnt: %d len: %zu dest %pad flags: %ld\n", 562 487 __func__, src_cnt, len, &dest, flags); 563 488 489 + /* Check if a new window needs to get added for 'dest' */ 490 + ret = mv_xor_add_io_win(mv_chan, dest); 491 + if (ret) 492 + return NULL; 493 + 564 494 sw_desc = mv_chan_alloc_slot(mv_chan); 565 495 if (sw_desc) { 566 496 sw_desc->type = DMA_XOR; ··· 573 493 mv_desc_init(sw_desc, dest, len, flags); 574 494 if (mv_chan->op_in_desc == XOR_MODE_IN_DESC) 575 495 mv_desc_set_mode(sw_desc); 576 - while (src_cnt--) 496 + while (src_cnt--) { 497 + /* Check if a new window needs to get added for 'src' */ 498 + ret = mv_xor_add_io_win(mv_chan, src[src_cnt]); 499 + if (ret) 500 + return NULL; 577 501 mv_desc_set_src_addr(sw_desc, src_cnt, src[src_cnt]); 502 + } 578 503 } 579 504 580 505 dev_dbg(mv_chan_to_devp(mv_chan), ··· 1044 959 mv_chan->op_in_desc = XOR_MODE_IN_DESC; 1045 960 1046 961 dma_dev = &mv_chan->dmadev; 962 + mv_chan->xordev = xordev; 1047 963 1048 964 /* 1049 965 * These source and destination dummy buffers are used to implement ··· 1171 1085 (cs->mbus_attr << 8) | 1172 1086 dram->mbus_dram_target_id, base + WINDOW_BASE(i)); 1173 1087 writel((cs->size - 1) & 0xffff0000, base + WINDOW_SIZE(i)); 1088 + 1089 + /* Fill the caching variables for later use */ 1090 + xordev->win_start[i] = cs->base; 1091 + xordev->win_end[i] = cs->base + cs->size - 1; 1174 1092 1175 1093 win_enable |= (1 << i); 1176 1094 win_enable |= 3 << (16 + (2 * i));
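The masking step in mv_xor_add_io_win() is easiest to see with numbers: the window size reported by mvebu_mbus_get_io_win_info() is a power of two, so subtracting one produces the offset mask and clearing those bits rounds the faulting address down to the window start. Roughly:

        /* e.g. addr = 0xf10a4000, size = 0x100000 (1 MiB window) */
        size -= 1;              /* mask: 0x000fffff                */
        addr &= ~size;          /* 0xf1000000, start of the window */

Only that aligned base, not an address in the middle of the area, is then programmed into the spare window registers.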
+7
drivers/dma/mv_xor.h
··· 80 80 #define WINDOW_BAR_ENABLE(chan) (0x40 + ((chan) << 2)) 81 81 #define WINDOW_OVERRIDE_CTRL(chan) (0xA0 + ((chan) << 2)) 82 82 83 + #define WINDOW_COUNT 8 84 + 83 85 struct mv_xor_device { 84 86 void __iomem *xor_base; 85 87 void __iomem *xor_high_base; 86 88 struct clk *clk; 87 89 struct mv_xor_chan *channels[MV_XOR_MAX_CHANNELS]; 88 90 int xor_type; 91 + 92 + u32 win_start[WINDOW_COUNT]; 93 + u32 win_end[WINDOW_COUNT]; 89 94 }; 90 95 91 96 /** ··· 132 127 char dummy_dst[MV_XOR_MIN_BYTE_COUNT]; 133 128 dma_addr_t dummy_src_addr, dummy_dst_addr; 134 129 u32 saved_config_reg, saved_int_mask_reg; 130 + 131 + struct mv_xor_device *xordev; 135 132 }; 136 133 137 134 /**
+5 -8
drivers/dma/mxs-dma.c
··· 326 326 { 327 327 struct mxs_dma_chan *mxs_chan = (struct mxs_dma_chan *) data; 328 328 329 - if (mxs_chan->desc.callback) 330 - mxs_chan->desc.callback(mxs_chan->desc.callback_param); 329 + dmaengine_desc_get_callback_invoke(&mxs_chan->desc, NULL); 331 330 } 332 331 333 332 static int mxs_dma_irq_to_chan(struct mxs_dma_engine *mxs_dma, int irq) ··· 428 429 goto err_alloc; 429 430 } 430 431 431 - if (mxs_chan->chan_irq != NO_IRQ) { 432 - ret = request_irq(mxs_chan->chan_irq, mxs_dma_int_handler, 433 - 0, "mxs-dma", mxs_dma); 434 - if (ret) 435 - goto err_irq; 436 - } 432 + ret = request_irq(mxs_chan->chan_irq, mxs_dma_int_handler, 433 + 0, "mxs-dma", mxs_dma); 434 + if (ret) 435 + goto err_irq; 437 436 438 437 ret = clk_prepare_enable(mxs_dma->clk); 439 438 if (ret)
+3 -6
drivers/dma/nbpfaxi.c
··· 1102 1102 { 1103 1103 struct nbpf_channel *chan = (struct nbpf_channel *)data; 1104 1104 struct nbpf_desc *desc, *tmp; 1105 - dma_async_tx_callback callback; 1106 - void *param; 1105 + struct dmaengine_desc_callback cb; 1107 1106 1108 1107 while (!list_empty(&chan->done)) { 1109 1108 bool found = false, must_put, recycling = false; ··· 1150 1151 must_put = false; 1151 1152 } 1152 1153 1153 - callback = desc->async_tx.callback; 1154 - param = desc->async_tx.callback_param; 1154 + dmaengine_desc_get_callback(&desc->async_tx, &cb); 1155 1155 1156 1156 /* ack and callback completed descriptor */ 1157 1157 spin_unlock_irq(&chan->lock); 1158 1158 1159 - if (callback) 1160 - callback(param); 1159 + dmaengine_desc_callback_invoke(&cb, NULL); 1161 1160 1162 1161 if (must_put) 1163 1162 nbpf_desc_put(desc);
+213 -30
drivers/dma/omap-dma.c
··· 8 8 #include <linux/delay.h> 9 9 #include <linux/dmaengine.h> 10 10 #include <linux/dma-mapping.h> 11 + #include <linux/dmapool.h> 11 12 #include <linux/err.h> 12 13 #include <linux/init.h> 13 14 #include <linux/interrupt.h> ··· 33 32 const struct omap_dma_reg *reg_map; 34 33 struct omap_system_dma_plat_info *plat; 35 34 bool legacy; 35 + bool ll123_supported; 36 + struct dma_pool *desc_pool; 36 37 unsigned dma_requests; 37 38 spinlock_t irq_lock; 38 39 uint32_t irq_enable_mask; 39 - struct omap_chan *lch_map[OMAP_SDMA_CHANNELS]; 40 + struct omap_chan **lch_map; 40 41 }; 41 42 42 43 struct omap_chan { ··· 58 55 unsigned sgidx; 59 56 }; 60 57 58 + #define DESC_NXT_SV_REFRESH (0x1 << 24) 59 + #define DESC_NXT_SV_REUSE (0x2 << 24) 60 + #define DESC_NXT_DV_REFRESH (0x1 << 26) 61 + #define DESC_NXT_DV_REUSE (0x2 << 26) 62 + #define DESC_NTYPE_TYPE2 (0x2 << 29) 63 + 64 + /* Type 2 descriptor with Source or Destination address update */ 65 + struct omap_type2_desc { 66 + uint32_t next_desc; 67 + uint32_t en; 68 + uint32_t addr; /* src or dst */ 69 + uint16_t fn; 70 + uint16_t cicr; 71 + int16_t cdei; 72 + int16_t csei; 73 + int32_t cdfi; 74 + int32_t csfi; 75 + } __packed; 76 + 61 77 struct omap_sg { 62 78 dma_addr_t addr; 63 79 uint32_t en; /* number of elements (24-bit) */ 64 80 uint32_t fn; /* number of frames (16-bit) */ 65 81 int32_t fi; /* for double indexing */ 66 82 int16_t ei; /* for double indexing */ 83 + 84 + /* Linked list */ 85 + struct omap_type2_desc *t2_desc; 86 + dma_addr_t t2_desc_paddr; 67 87 }; 68 88 69 89 struct omap_desc { 70 90 struct virt_dma_desc vd; 91 + bool using_ll; 71 92 enum dma_transfer_direction dir; 72 93 dma_addr_t dev_addr; 73 94 ··· 108 81 }; 109 82 110 83 enum { 84 + CAPS_0_SUPPORT_LL123 = BIT(20), /* Linked List type1/2/3 */ 85 + CAPS_0_SUPPORT_LL4 = BIT(21), /* Linked List type4 */ 86 + 111 87 CCR_FS = BIT(5), 112 88 CCR_READ_PRIORITY = BIT(6), 113 89 CCR_ENABLE = BIT(7), ··· 181 151 CICR_SUPER_BLOCK_IE = BIT(14), /* OMAP2+ only */ 182 152 183 153 CLNK_CTRL_ENABLE_LNK = BIT(15), 154 + 155 + CDP_DST_VALID_INC = 0 << 0, 156 + CDP_DST_VALID_RELOAD = 1 << 0, 157 + CDP_DST_VALID_REUSE = 2 << 0, 158 + CDP_SRC_VALID_INC = 0 << 2, 159 + CDP_SRC_VALID_RELOAD = 1 << 2, 160 + CDP_SRC_VALID_REUSE = 2 << 2, 161 + CDP_NTYPE_TYPE1 = 1 << 4, 162 + CDP_NTYPE_TYPE2 = 2 << 4, 163 + CDP_NTYPE_TYPE3 = 3 << 4, 164 + CDP_TMODE_NORMAL = 0 << 8, 165 + CDP_TMODE_LLIST = 1 << 8, 166 + CDP_FAST = BIT(10), 184 167 }; 185 168 186 169 static const unsigned es_bytes[] = { ··· 223 180 224 181 static void omap_dma_desc_free(struct virt_dma_desc *vd) 225 182 { 226 - kfree(container_of(vd, struct omap_desc, vd)); 183 + struct omap_desc *d = to_omap_dma_desc(&vd->tx); 184 + 185 + if (d->using_ll) { 186 + struct omap_dmadev *od = to_omap_dma_dev(vd->tx.chan->device); 187 + int i; 188 + 189 + for (i = 0; i < d->sglen; i++) { 190 + if (d->sg[i].t2_desc) 191 + dma_pool_free(od->desc_pool, d->sg[i].t2_desc, 192 + d->sg[i].t2_desc_paddr); 193 + } 194 + } 195 + 196 + kfree(d); 197 + } 198 + 199 + static void omap_dma_fill_type2_desc(struct omap_desc *d, int idx, 200 + enum dma_transfer_direction dir, bool last) 201 + { 202 + struct omap_sg *sg = &d->sg[idx]; 203 + struct omap_type2_desc *t2_desc = sg->t2_desc; 204 + 205 + if (idx) 206 + d->sg[idx - 1].t2_desc->next_desc = sg->t2_desc_paddr; 207 + if (last) 208 + t2_desc->next_desc = 0xfffffffc; 209 + 210 + t2_desc->en = sg->en; 211 + t2_desc->addr = sg->addr; 212 + t2_desc->fn = sg->fn & 0xffff; 213 + t2_desc->cicr = d->cicr; 214 + if (!last) 
215 + t2_desc->cicr &= ~CICR_BLOCK_IE; 216 + 217 + switch (dir) { 218 + case DMA_DEV_TO_MEM: 219 + t2_desc->cdei = sg->ei; 220 + t2_desc->csei = d->ei; 221 + t2_desc->cdfi = sg->fi; 222 + t2_desc->csfi = d->fi; 223 + 224 + t2_desc->en |= DESC_NXT_DV_REFRESH; 225 + t2_desc->en |= DESC_NXT_SV_REUSE; 226 + break; 227 + case DMA_MEM_TO_DEV: 228 + t2_desc->cdei = d->ei; 229 + t2_desc->csei = sg->ei; 230 + t2_desc->cdfi = d->fi; 231 + t2_desc->csfi = sg->fi; 232 + 233 + t2_desc->en |= DESC_NXT_SV_REFRESH; 234 + t2_desc->en |= DESC_NXT_DV_REUSE; 235 + break; 236 + default: 237 + return; 238 + } 239 + 240 + t2_desc->en |= DESC_NTYPE_TYPE2; 227 241 } 228 242 229 243 static void omap_dma_write(uint32_t val, unsigned type, void __iomem *addr) ··· 385 285 static void omap_dma_start(struct omap_chan *c, struct omap_desc *d) 386 286 { 387 287 struct omap_dmadev *od = to_omap_dma_dev(c->vc.chan.device); 288 + uint16_t cicr = d->cicr; 388 289 389 290 if (__dma_omap15xx(od->plat->dma_attr)) 390 291 omap_dma_chan_write(c, CPC, 0); ··· 394 293 395 294 omap_dma_clear_csr(c); 396 295 296 + if (d->using_ll) { 297 + uint32_t cdp = CDP_TMODE_LLIST | CDP_NTYPE_TYPE2 | CDP_FAST; 298 + 299 + if (d->dir == DMA_DEV_TO_MEM) 300 + cdp |= (CDP_DST_VALID_RELOAD | CDP_SRC_VALID_REUSE); 301 + else 302 + cdp |= (CDP_DST_VALID_REUSE | CDP_SRC_VALID_RELOAD); 303 + omap_dma_chan_write(c, CDP, cdp); 304 + 305 + omap_dma_chan_write(c, CNDP, d->sg[0].t2_desc_paddr); 306 + omap_dma_chan_write(c, CCDN, 0); 307 + omap_dma_chan_write(c, CCFN, 0xffff); 308 + omap_dma_chan_write(c, CCEN, 0xffffff); 309 + 310 + cicr &= ~CICR_BLOCK_IE; 311 + } else if (od->ll123_supported) { 312 + omap_dma_chan_write(c, CDP, 0); 313 + } 314 + 397 315 /* Enable interrupts */ 398 - omap_dma_chan_write(c, CICR, d->cicr); 316 + omap_dma_chan_write(c, CICR, cicr); 399 317 400 318 /* Enable channel */ 401 319 omap_dma_chan_write(c, CCR, d->ccr | CCR_ENABLE); ··· 485 365 c->running = false; 486 366 } 487 367 488 - static void omap_dma_start_sg(struct omap_chan *c, struct omap_desc *d, 489 - unsigned idx) 368 + static void omap_dma_start_sg(struct omap_chan *c, struct omap_desc *d) 490 369 { 491 - struct omap_sg *sg = d->sg + idx; 370 + struct omap_sg *sg = d->sg + c->sgidx; 492 371 unsigned cxsa, cxei, cxfi; 493 372 494 373 if (d->dir == DMA_DEV_TO_MEM || d->dir == DMA_MEM_TO_MEM) { ··· 507 388 omap_dma_chan_write(c, CFN, sg->fn); 508 389 509 390 omap_dma_start(c, d); 391 + c->sgidx++; 510 392 } 511 393 512 394 static void omap_dma_start_desc(struct omap_chan *c) ··· 553 433 omap_dma_chan_write(c, CSDP, d->csdp); 554 434 omap_dma_chan_write(c, CLNK_CTRL, d->clnk_ctrl); 555 435 556 - omap_dma_start_sg(c, d, 0); 436 + omap_dma_start_sg(c, d); 557 437 } 558 438 559 439 static void omap_dma_callback(int ch, u16 status, void *data) ··· 565 445 spin_lock_irqsave(&c->vc.lock, flags); 566 446 d = c->desc; 567 447 if (d) { 568 - if (!c->cyclic) { 569 - if (++c->sgidx < d->sglen) { 570 - omap_dma_start_sg(c, d, c->sgidx); 571 - } else { 572 - omap_dma_start_desc(c); 573 - vchan_cookie_complete(&d->vd); 574 - } 575 - } else { 448 + if (c->cyclic) { 576 449 vchan_cyclic_callback(&d->vd); 450 + } else if (d->using_ll || c->sgidx == d->sglen) { 451 + omap_dma_start_desc(c); 452 + vchan_cookie_complete(&d->vd); 453 + } else { 454 + omap_dma_start_sg(c, d); 577 455 } 578 456 } 579 457 spin_unlock_irqrestore(&c->vc.lock, flags); ··· 621 503 { 622 504 struct omap_dmadev *od = to_omap_dma_dev(chan->device); 623 505 struct omap_chan *c = to_omap_dma_chan(chan); 506 + struct device 
*dev = od->ddev.dev; 624 507 int ret; 625 508 626 509 if (od->legacy) { ··· 632 513 &c->dma_ch); 633 514 } 634 515 635 - dev_dbg(od->ddev.dev, "allocating channel %u for %u\n", 636 - c->dma_ch, c->dma_sig); 516 + dev_dbg(dev, "allocating channel %u for %u\n", c->dma_ch, c->dma_sig); 637 517 638 518 if (ret >= 0) { 639 519 omap_dma_assign(od, c, c->dma_ch); ··· 688 570 vchan_free_chan_resources(&c->vc); 689 571 omap_free_dma(c->dma_ch); 690 572 691 - dev_dbg(od->ddev.dev, "freeing channel for %u\n", c->dma_sig); 573 + dev_dbg(od->ddev.dev, "freeing channel %u used for %u\n", c->dma_ch, 574 + c->dma_sig); 692 575 c->dma_sig = 0; 693 576 } 694 577 ··· 863 744 struct omap_desc *d; 864 745 dma_addr_t dev_addr; 865 746 unsigned i, es, en, frame_bytes; 747 + bool ll_failed = false; 866 748 u32 burst; 867 749 868 750 if (dir == DMA_DEV_TO_MEM) { ··· 904 784 d->es = es; 905 785 906 786 d->ccr = c->ccr | CCR_SYNC_FRAME; 907 - if (dir == DMA_DEV_TO_MEM) 787 + if (dir == DMA_DEV_TO_MEM) { 908 788 d->ccr |= CCR_DST_AMODE_POSTINC | CCR_SRC_AMODE_CONSTANT; 909 - else 789 + d->csdp = CSDP_DST_BURST_64 | CSDP_DST_PACKED; 790 + } else { 910 791 d->ccr |= CCR_DST_AMODE_CONSTANT | CCR_SRC_AMODE_POSTINC; 792 + d->csdp = CSDP_SRC_BURST_64 | CSDP_SRC_PACKED; 793 + } 911 794 912 795 d->cicr = CICR_DROP_IE | CICR_BLOCK_IE; 913 - d->csdp = es; 796 + d->csdp |= es; 914 797 915 798 if (dma_omap1()) { 916 799 d->cicr |= CICR_TOUT_IE; ··· 942 819 */ 943 820 en = burst; 944 821 frame_bytes = es_bytes[es] * en; 822 + 823 + if (sglen >= 2) 824 + d->using_ll = od->ll123_supported; 825 + 945 826 for_each_sg(sgl, sgent, sglen, i) { 946 - d->sg[i].addr = sg_dma_address(sgent); 947 - d->sg[i].en = en; 948 - d->sg[i].fn = sg_dma_len(sgent) / frame_bytes; 827 + struct omap_sg *osg = &d->sg[i]; 828 + 829 + osg->addr = sg_dma_address(sgent); 830 + osg->en = en; 831 + osg->fn = sg_dma_len(sgent) / frame_bytes; 832 + 833 + if (d->using_ll) { 834 + osg->t2_desc = dma_pool_alloc(od->desc_pool, GFP_ATOMIC, 835 + &osg->t2_desc_paddr); 836 + if (!osg->t2_desc) { 837 + dev_err(chan->device->dev, 838 + "t2_desc[%d] allocation failed\n", i); 839 + ll_failed = true; 840 + d->using_ll = false; 841 + continue; 842 + } 843 + 844 + omap_dma_fill_type2_desc(d, i, dir, (i == sglen - 1)); 845 + } 949 846 } 950 847 951 848 d->sglen = sglen; 849 + 850 + /* Release the dma_pool entries if one allocation failed */ 851 + if (ll_failed) { 852 + for (i = 0; i < d->sglen; i++) { 853 + struct omap_sg *osg = &d->sg[i]; 854 + 855 + if (osg->t2_desc) { 856 + dma_pool_free(od->desc_pool, osg->t2_desc, 857 + osg->t2_desc_paddr); 858 + osg->t2_desc = NULL; 859 + } 860 + } 861 + } 952 862 953 863 return vchan_tx_prep(&c->vc, &d->vd, tx_flags); 954 864 } ··· 1381 1225 spin_lock_init(&od->lock); 1382 1226 spin_lock_init(&od->irq_lock); 1383 1227 1384 - od->dma_requests = OMAP_SDMA_REQUESTS; 1385 - if (pdev->dev.of_node && of_property_read_u32(pdev->dev.of_node, 1386 - "dma-requests", 1387 - &od->dma_requests)) { 1228 + if (!pdev->dev.of_node) { 1229 + od->dma_requests = od->plat->dma_attr->lch_count; 1230 + if (unlikely(!od->dma_requests)) 1231 + od->dma_requests = OMAP_SDMA_REQUESTS; 1232 + } else if (of_property_read_u32(pdev->dev.of_node, "dma-requests", 1233 + &od->dma_requests)) { 1388 1234 dev_info(&pdev->dev, 1389 1235 "Missing dma-requests property, using %u.\n", 1390 1236 OMAP_SDMA_REQUESTS); 1237 + od->dma_requests = OMAP_SDMA_REQUESTS; 1391 1238 } 1392 1239 1393 - for (i = 0; i < OMAP_SDMA_CHANNELS; i++) { 1240 + od->lch_map = devm_kcalloc(&pdev->dev, 
od->dma_requests, 1241 + sizeof(*od->lch_map), GFP_KERNEL); 1242 + if (!od->lch_map) 1243 + return -ENOMEM; 1244 + 1245 + for (i = 0; i < od->dma_requests; i++) { 1394 1246 rc = omap_dma_chan_init(od); 1395 1247 if (rc) { 1396 1248 omap_dma_free(od); ··· 1421 1257 return rc; 1422 1258 } 1423 1259 1260 + if (omap_dma_glbl_read(od, CAPS_0) & CAPS_0_SUPPORT_LL123) 1261 + od->ll123_supported = true; 1262 + 1424 1263 od->ddev.filter.map = od->plat->slave_map; 1425 1264 od->ddev.filter.mapcnt = od->plat->slavecnt; 1426 1265 od->ddev.filter.fn = omap_dma_filter_fn; 1266 + 1267 + if (od->ll123_supported) { 1268 + od->desc_pool = dma_pool_create(dev_name(&pdev->dev), 1269 + &pdev->dev, 1270 + sizeof(struct omap_type2_desc), 1271 + 4, 0); 1272 + if (!od->desc_pool) { 1273 + dev_err(&pdev->dev, 1274 + "unable to allocate descriptor pool\n"); 1275 + od->ll123_supported = false; 1276 + } 1277 + } 1427 1278 1428 1279 rc = dma_async_device_register(&od->ddev); 1429 1280 if (rc) { ··· 1463 1284 } 1464 1285 } 1465 1286 1466 - dev_info(&pdev->dev, "OMAP DMA engine driver\n"); 1287 + dev_info(&pdev->dev, "OMAP DMA engine driver%s\n", 1288 + od->ll123_supported ? " (LinkedList1/2/3 supported)" : ""); 1467 1289 1468 1290 return rc; 1469 1291 } ··· 1486 1306 /* Disable all interrupts */ 1487 1307 omap_dma_glbl_write(od, IRQENABLE_L0, 0); 1488 1308 } 1309 + 1310 + if (od->ll123_supported) 1311 + dma_pool_destroy(od->desc_pool); 1489 1312 1490 1313 omap_dma_free(od); 1491 1314
+3 -4
drivers/dma/pch_dma.c
··· 357 357 struct pch_dma_desc *desc) 358 358 { 359 359 struct dma_async_tx_descriptor *txd = &desc->txd; 360 - dma_async_tx_callback callback = txd->callback; 361 - void *param = txd->callback_param; 360 + struct dmaengine_desc_callback cb; 362 361 362 + dmaengine_desc_get_callback(txd, &cb); 363 363 list_splice_init(&desc->tx_list, &pd_chan->free_list); 364 364 list_move(&desc->desc_node, &pd_chan->free_list); 365 365 366 - if (callback) 367 - callback(param); 366 + dmaengine_desc_callback_invoke(&cb, NULL); 368 367 } 369 368 370 369 static void pdc_complete_all(struct pch_dma_chan *pd_chan)
+18 -7
drivers/dma/pl330.c
··· 2039 2039 } 2040 2040 2041 2041 while (!list_empty(&pch->completed_list)) { 2042 - dma_async_tx_callback callback; 2043 - void *callback_param; 2042 + struct dmaengine_desc_callback cb; 2044 2043 2045 2044 desc = list_first_entry(&pch->completed_list, 2046 2045 struct dma_pl330_desc, node); 2047 2046 2048 - callback = desc->txd.callback; 2049 - callback_param = desc->txd.callback_param; 2047 + dmaengine_desc_get_callback(&desc->txd, &cb); 2050 2048 2051 2049 if (pch->cyclic) { 2052 2050 desc->status = PREP; ··· 2062 2064 2063 2065 dma_descriptor_unmap(&desc->txd); 2064 2066 2065 - if (callback) { 2067 + if (dmaengine_desc_callback_valid(&cb)) { 2066 2068 spin_unlock_irqrestore(&pch->lock, flags); 2067 - callback(callback_param); 2069 + dmaengine_desc_callback_invoke(&cb, NULL); 2068 2070 spin_lock_irqsave(&pch->lock, flags); 2069 2071 } 2070 2072 } ··· 2272 2274 { 2273 2275 enum dma_status ret; 2274 2276 unsigned long flags; 2275 - struct dma_pl330_desc *desc, *running = NULL; 2277 + struct dma_pl330_desc *desc, *running = NULL, *last_enq = NULL; 2276 2278 struct dma_pl330_chan *pch = to_pchan(chan); 2277 2279 unsigned int transferred, residual = 0; 2278 2280 ··· 2285 2287 goto out; 2286 2288 2287 2289 spin_lock_irqsave(&pch->lock, flags); 2290 + spin_lock(&pch->thread->dmac->lock); 2288 2291 2289 2292 if (pch->thread->req_running != -1) 2290 2293 running = pch->thread->req[pch->thread->req_running].desc; 2294 + 2295 + last_enq = pch->thread->req[pch->thread->lstenq].desc; 2291 2296 2292 2297 /* Check in pending list */ 2293 2298 list_for_each_entry(desc, &pch->work_list, node) { ··· 2299 2298 else if (running && desc == running) 2300 2299 transferred = 2301 2300 pl330_get_current_xferred_count(pch, desc); 2301 + else if (desc->status == BUSY) 2302 + /* 2303 + * Busy but not running means either just enqueued, 2304 + * or finished and not yet marked done 2305 + */ 2306 + if (desc == last_enq) 2307 + transferred = 0; 2308 + else 2309 + transferred = desc->bytes_requested; 2302 2310 else 2303 2311 transferred = 0; 2304 2312 residual += desc->bytes_requested - transferred; ··· 2328 2318 if (desc->last) 2329 2319 residual = 0; 2330 2320 } 2321 + spin_unlock(&pch->thread->dmac->lock); 2331 2322 spin_unlock_irqrestore(&pch->lock, flags); 2332 2323 2333 2324 out:
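The extra accounting above exists so residue stays correct for descriptors that are enqueued but not yet executing (BUSY but not running). Nothing changes for clients; the usual status query simply returns better numbers:

        struct dma_tx_state state;
        enum dma_status status;

        status = dmaengine_tx_status(chan, cookie, &state);
        if (status == DMA_IN_PROGRESS)
                dev_dbg(dev, "%u bytes still pending\n", state.residue);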
+4 -7
drivers/dma/ppc4xx/adma.c
··· 1482 1482 cookie = desc->async_tx.cookie; 1483 1483 desc->async_tx.cookie = 0; 1484 1484 1485 + dma_descriptor_unmap(&desc->async_tx); 1485 1486 /* call the callback (must not sleep or submit new 1486 1487 * operations to this channel) 1487 1488 */ 1488 - if (desc->async_tx.callback) 1489 - desc->async_tx.callback( 1490 - desc->async_tx.callback_param); 1491 - 1492 - dma_descriptor_unmap(&desc->async_tx); 1489 + dmaengine_desc_get_callback_invoke(&desc->async_tx, NULL); 1493 1490 } 1494 1491 1495 1492 /* run dependent operations */ ··· 3888 3891 np = ofdev->dev.of_node; 3889 3892 if (adev->id != PPC440SPE_XOR_ID) { 3890 3893 adev->err_irq = irq_of_parse_and_map(np, 1); 3891 - if (adev->err_irq == NO_IRQ) { 3894 + if (!adev->err_irq) { 3892 3895 dev_warn(adev->dev, "no err irq resource?\n"); 3893 3896 *initcode = PPC_ADMA_INIT_IRQ2; 3894 3897 adev->err_irq = -ENXIO; ··· 3899 3902 } 3900 3903 3901 3904 adev->irq = irq_of_parse_and_map(np, 0); 3902 - if (adev->irq == NO_IRQ) { 3905 + if (!adev->irq) { 3903 3906 dev_err(adev->dev, "no irq resource\n"); 3904 3907 *initcode = PPC_ADMA_INIT_IRQ1; 3905 3908 ret = -ENXIO;
+41 -18
drivers/dma/qcom/hidma.c
··· 111 111 struct dma_async_tx_descriptor *desc; 112 112 dma_cookie_t last_cookie; 113 113 struct hidma_desc *mdesc; 114 + struct hidma_desc *next; 114 115 unsigned long irqflags; 115 116 struct list_head list; 116 117 ··· 123 122 spin_unlock_irqrestore(&mchan->lock, irqflags); 124 123 125 124 /* Execute callbacks and run dependencies */ 126 - list_for_each_entry(mdesc, &list, node) { 125 + list_for_each_entry_safe(mdesc, next, &list, node) { 127 126 enum dma_status llstat; 127 + struct dmaengine_desc_callback cb; 128 + struct dmaengine_result result; 128 129 129 130 desc = &mdesc->desc; 131 + last_cookie = desc->cookie; 130 132 131 133 spin_lock_irqsave(&mchan->lock, irqflags); 132 134 dma_cookie_complete(desc); 133 135 spin_unlock_irqrestore(&mchan->lock, irqflags); 134 136 135 137 llstat = hidma_ll_status(mdma->lldev, mdesc->tre_ch); 136 - if (desc->callback && (llstat == DMA_COMPLETE)) 137 - desc->callback(desc->callback_param); 138 + dmaengine_desc_get_callback(desc, &cb); 138 139 139 - last_cookie = desc->cookie; 140 140 dma_run_dependencies(desc); 141 + 142 + spin_lock_irqsave(&mchan->lock, irqflags); 143 + list_move(&mdesc->node, &mchan->free); 144 + 145 + if (llstat == DMA_COMPLETE) { 146 + mchan->last_success = last_cookie; 147 + result.result = DMA_TRANS_NOERROR; 148 + } else 149 + result.result = DMA_TRANS_ABORTED; 150 + 151 + spin_unlock_irqrestore(&mchan->lock, irqflags); 152 + 153 + dmaengine_desc_callback_invoke(&cb, &result); 141 154 } 142 - 143 - /* Free descriptors */ 144 - spin_lock_irqsave(&mchan->lock, irqflags); 145 - list_splice_tail_init(&list, &mchan->free); 146 - spin_unlock_irqrestore(&mchan->lock, irqflags); 147 - 148 155 } 149 156 150 157 /* ··· 247 238 hidma_ll_start(dmadev->lldev); 248 239 } 249 240 241 + static inline bool hidma_txn_is_success(dma_cookie_t cookie, 242 + dma_cookie_t last_success, dma_cookie_t last_used) 243 + { 244 + if (last_success <= last_used) { 245 + if ((cookie <= last_success) || (cookie > last_used)) 246 + return true; 247 + } else { 248 + if ((cookie <= last_success) && (cookie > last_used)) 249 + return true; 250 + } 251 + return false; 252 + } 253 + 250 254 static enum dma_status hidma_tx_status(struct dma_chan *dmach, 251 255 dma_cookie_t cookie, 252 256 struct dma_tx_state *txstate) ··· 268 246 enum dma_status ret; 269 247 270 248 ret = dma_cookie_status(dmach, cookie, txstate); 271 - if (ret == DMA_COMPLETE) 272 - return ret; 249 + if (ret == DMA_COMPLETE) { 250 + bool is_success; 251 + 252 + is_success = hidma_txn_is_success(cookie, mchan->last_success, 253 + dmach->cookie); 254 + return is_success ? ret : DMA_ERROR; 255 + } 273 256 274 257 if (mchan->paused && (ret == DMA_IN_PROGRESS)) { 275 258 unsigned long flags; ··· 425 398 hidma_process_completed(mchan); 426 399 427 400 spin_lock_irqsave(&mchan->lock, irqflags); 401 + mchan->last_success = 0; 428 402 list_splice_init(&mchan->active, &list); 429 403 list_splice_init(&mchan->prepared, &list); 430 404 list_splice_init(&mchan->completed, &list); ··· 441 413 /* return all user requests */ 442 414 list_for_each_entry_safe(mdesc, tmp, &list, node) { 443 415 struct dma_async_tx_descriptor *txd = &mdesc->desc; 444 - dma_async_tx_callback callback = mdesc->desc.callback; 445 - void *param = mdesc->desc.callback_param; 446 416 447 417 dma_descriptor_unmap(txd); 448 - 449 - if (callback) 450 - callback(param); 451 - 418 + dmaengine_desc_get_callback_invoke(txd, NULL); 452 419 dma_run_dependencies(txd); 453 420 454 421 /* move myself to free_list */
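hidma_txn_is_success() is worth a worked example because the cookie space wraps: last_success is the newest cookie known to have completed without error, last_used the newest cookie handed out, and everything after last_success up to and including last_used was flushed by an abort, so hidma_tx_status() reports it as DMA_ERROR even though the cookie counts as completed. With illustrative values:

        /* no wrap:  last_success = 100, last_used = 105
         *   cookie  99        -> DMA_COMPLETE (finished before the fault)
         *   cookie 103        -> DMA_ERROR    (flushed by the abort)
         *
         * wrapped:   last_success = 0x7ffffffe, last_used = 3
         *   cookie 0x7ffffffd -> DMA_COMPLETE
         *   cookie 2          -> DMA_ERROR
         */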
+1 -1
drivers/dma/qcom/hidma.h
··· 72 72 73 73 u32 tre_write_offset; /* TRE write location */ 74 74 struct tasklet_struct task; /* task delivering notifications */ 75 - struct tasklet_struct rst_task; /* task to reset HW */ 76 75 DECLARE_KFIFO_PTR(handoff_fifo, 77 76 struct hidma_tre *); /* pending TREs FIFO */ 78 77 }; ··· 88 89 bool allocated; 89 90 char dbg_name[16]; 90 91 u32 dma_sig; 92 + dma_cookie_t last_success; 91 93 92 94 /* 93 95 * active descriptor on this channel
+7 -25
drivers/dma/qcom/hidma_ll.c
··· 381 381 } 382 382 383 383 /* 384 - * Abort all transactions and perform a reset. 385 - */ 386 - static void hidma_ll_abort(unsigned long arg) 387 - { 388 - struct hidma_lldev *lldev = (struct hidma_lldev *)arg; 389 - u8 err_code = HIDMA_EVRE_STATUS_ERROR; 390 - u8 err_info = 0xFF; 391 - int rc; 392 - 393 - hidma_cleanup_pending_tre(lldev, err_info, err_code); 394 - 395 - /* reset the channel for recovery */ 396 - rc = hidma_ll_setup(lldev); 397 - if (rc) { 398 - dev_err(lldev->dev, "channel reinitialize failed after error\n"); 399 - return; 400 - } 401 - writel(ENABLE_IRQS, lldev->evca + HIDMA_EVCA_IRQ_EN_REG); 402 - } 403 - 404 - /* 405 384 * The interrupt handler for HIDMA will try to consume as many pending 406 385 * EVRE from the event queue as possible. Each EVRE has an associated 407 386 * TRE that holds the user interface parameters. EVRE reports the ··· 433 454 434 455 while (cause) { 435 456 if (cause & HIDMA_ERR_INT_MASK) { 436 - dev_err(lldev->dev, "error 0x%x, resetting...\n", 457 + dev_err(lldev->dev, "error 0x%x, disabling...\n", 437 458 cause); 438 459 439 460 /* Clear out pending interrupts */ 440 461 writel(cause, lldev->evca + HIDMA_EVCA_IRQ_CLR_REG); 441 462 442 - tasklet_schedule(&lldev->rst_task); 463 + /* No further submissions. */ 464 + hidma_ll_disable(lldev); 465 + 466 + /* Driver completes the txn and intimates the client.*/ 467 + hidma_cleanup_pending_tre(lldev, 0xFF, 468 + HIDMA_EVRE_STATUS_ERROR); 443 469 goto out; 444 470 } 445 471 ··· 792 808 return NULL; 793 809 794 810 spin_lock_init(&lldev->lock); 795 - tasklet_init(&lldev->rst_task, hidma_ll_abort, (unsigned long)lldev); 796 811 tasklet_init(&lldev->task, hidma_ll_tre_complete, (unsigned long)lldev); 797 812 lldev->initialized = 1; 798 813 writel(ENABLE_IRQS, lldev->evca + HIDMA_EVCA_IRQ_EN_REG); ··· 814 831 815 832 required_bytes = sizeof(struct hidma_tre) * lldev->nr_tres; 816 833 tasklet_kill(&lldev->task); 817 - tasklet_kill(&lldev->rst_task); 818 834 memset(lldev->trepool, 0, required_bytes); 819 835 lldev->trepool = NULL; 820 836 lldev->pending_tre_count = 0;
+6 -3
drivers/dma/s3c24xx-dma.c
··· 823 823 struct s3c24xx_sg *dsg; 824 824 int src_mod, dest_mod; 825 825 826 - dev_dbg(&s3cdma->pdev->dev, "prepare memcpy of %d bytes from %s\n", 826 + dev_dbg(&s3cdma->pdev->dev, "prepare memcpy of %zu bytes from %s\n", 827 827 len, s3cchan->name); 828 828 829 829 if ((len & S3C24XX_DCON_TC_MASK) != len) { 830 - dev_err(&s3cdma->pdev->dev, "memcpy size %d to large\n", len); 830 + dev_err(&s3cdma->pdev->dev, "memcpy size %zu to large\n", len); 831 831 return NULL; 832 832 } 833 833 ··· 1301 1301 s3cdma->slave.device_prep_dma_cyclic = s3c24xx_dma_prep_dma_cyclic; 1302 1302 s3cdma->slave.device_config = s3c24xx_dma_set_runtime_config; 1303 1303 s3cdma->slave.device_terminate_all = s3c24xx_dma_terminate_all; 1304 + s3cdma->slave.filter.map = pdata->slave_map; 1305 + s3cdma->slave.filter.mapcnt = pdata->slavecnt; 1306 + s3cdma->slave.filter.fn = s3c24xx_dma_filter; 1304 1307 1305 1308 /* Register as many memcpy channels as there are physical channels */ 1306 1309 ret = s3c24xx_dma_init_virtual_channels(s3cdma, &s3cdma->memcpy, ··· 1421 1418 1422 1419 s3cchan = to_s3c24xx_dma_chan(chan); 1423 1420 1424 - return s3cchan->id == (int)param; 1421 + return s3cchan->id == (uintptr_t)param; 1425 1422 } 1426 1423 EXPORT_SYMBOL(s3c24xx_dma_filter); 1427 1424
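Adding filter.map/filter.mapcnt/filter.fn lets s3c24xx platform-data users resolve channels through the generic dma_request_chan() path instead of calling the filter by hand; the (uintptr_t) cast in the filter matches the channel id being carried through the map's void *param. A sketch of what such a board table could look like; the devname/slave strings and channel ids below are illustrative placeholders, only the shape of the table matters:

        static const struct dma_slave_map s3c24xx_dma_slave_map[] = {
                /* devname           slave     param = channel id           */
                { "s3c2440-sdi",     "rx-tx",  (void *)(uintptr_t)DMACH_SDI },
                { "s3c2440-uart.0",  "tx",     (void *)(uintptr_t)DMACH_UART0 },
        };

        /* a client driver then simply requests by name: */
        chan = dma_request_chan(&pdev->dev, "rx-tx");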
+7 -7
drivers/dma/sa11x0-dma.c
··· 463 463 dma_addr_t addr = sa11x0_dma_pos(p); 464 464 unsigned i; 465 465 466 - dev_vdbg(d->slave.dev, "tx_status: addr:%x\n", addr); 466 + dev_vdbg(d->slave.dev, "tx_status: addr:%pad\n", &addr); 467 467 468 468 for (i = 0; i < txd->sglen; i++) { 469 469 dev_vdbg(d->slave.dev, "tx_status: [%u] %x+%x\n", ··· 491 491 } 492 492 spin_unlock_irqrestore(&c->vc.lock, flags); 493 493 494 - dev_vdbg(d->slave.dev, "tx_status: bytes 0x%zx\n", state->residue); 494 + dev_vdbg(d->slave.dev, "tx_status: bytes 0x%x\n", state->residue); 495 495 496 496 return ret; 497 497 } ··· 551 551 if (len > DMA_MAX_SIZE) 552 552 j += DIV_ROUND_UP(len, DMA_MAX_SIZE & ~DMA_ALIGN) - 1; 553 553 if (addr & DMA_ALIGN) { 554 - dev_dbg(chan->device->dev, "vchan %p: bad buffer alignment: %08x\n", 555 - &c->vc, addr); 554 + dev_dbg(chan->device->dev, "vchan %p: bad buffer alignment: %pad\n", 555 + &c->vc, &addr); 556 556 return NULL; 557 557 } 558 558 } ··· 599 599 txd->size = size; 600 600 txd->sglen = j; 601 601 602 - dev_dbg(chan->device->dev, "vchan %p: txd %p: size %u nr %u\n", 602 + dev_dbg(chan->device->dev, "vchan %p: txd %p: size %zu nr %u\n", 603 603 &c->vc, &txd->vd, txd->size, txd->sglen); 604 604 605 605 return vchan_tx_prep(&c->vc, &txd->vd, flags); ··· 693 693 if (maxburst == 8) 694 694 ddar |= DDAR_BS; 695 695 696 - dev_dbg(c->vc.chan.device->dev, "vchan %p: dma_slave_config addr %x width %u burst %u\n", 697 - &c->vc, addr, width, maxburst); 696 + dev_dbg(c->vc.chan.device->dev, "vchan %p: dma_slave_config addr %pad width %u burst %u\n", 697 + &c->vc, &addr, width, maxburst); 698 698 699 699 c->ddar = ddar | (addr & 0xf0000000) | (addr & 0x003ffffc) << 6; 700 700
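The printk fixes above all follow the same rule: dma_addr_t (and phys_addr_t) may be wider than unsigned long, so they are printed by reference with %pad (or %pap) rather than with %x or a cast. For example:

        dma_addr_t addr = sg_dma_address(sg);

        dev_dbg(dev, "mapped to %pad\n", &addr);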
+101 -31
drivers/dma/sh/rcar-dmac.c
··· 118 118 sizeof(struct rcar_dmac_xfer_chunk)) 119 119 120 120 /* 121 + * struct rcar_dmac_chan_slave - Slave configuration 122 + * @slave_addr: slave memory address 123 + * @xfer_size: size (in bytes) of hardware transfers 124 + */ 125 + struct rcar_dmac_chan_slave { 126 + phys_addr_t slave_addr; 127 + unsigned int xfer_size; 128 + }; 129 + 130 + /* 131 + * struct rcar_dmac_chan_map - Map of slave device phys to dma address 132 + * @addr: slave dma address 133 + * @dir: direction of mapping 134 + * @slave: slave configuration that is mapped 135 + */ 136 + struct rcar_dmac_chan_map { 137 + dma_addr_t addr; 138 + enum dma_data_direction dir; 139 + struct rcar_dmac_chan_slave slave; 140 + }; 141 + 142 + /* 121 143 * struct rcar_dmac_chan - R-Car Gen2 DMA Controller Channel 122 144 * @chan: base DMA channel object 123 145 * @iomem: channel I/O memory base 124 146 * @index: index of this channel in the controller 125 - * @src_xfer_size: size (in bytes) of hardware transfers on the source side 126 - * @dst_xfer_size: size (in bytes) of hardware transfers on the destination side 127 - * @src_slave_addr: slave source memory address 128 - * @dst_slave_addr: slave destination memory address 147 + * @src: slave memory address and size on the source side 148 + * @dst: slave memory address and size on the destination side 129 149 * @mid_rid: hardware MID/RID for the DMA client using this channel 130 150 * @lock: protects the channel CHCR register and the desc members 131 151 * @desc.free: list of free descriptors ··· 162 142 void __iomem *iomem; 163 143 unsigned int index; 164 144 165 - unsigned int src_xfer_size; 166 - unsigned int dst_xfer_size; 167 - dma_addr_t src_slave_addr; 168 - dma_addr_t dst_slave_addr; 145 + struct rcar_dmac_chan_slave src; 146 + struct rcar_dmac_chan_slave dst; 147 + struct rcar_dmac_chan_map map; 169 148 int mid_rid; 170 149 171 150 spinlock_t lock; ··· 812 793 case DMA_DEV_TO_MEM: 813 794 chcr = RCAR_DMACHCR_DM_INC | RCAR_DMACHCR_SM_FIXED 814 795 | RCAR_DMACHCR_RS_DMARS; 815 - xfer_size = chan->src_xfer_size; 796 + xfer_size = chan->src.xfer_size; 816 797 break; 817 798 818 799 case DMA_MEM_TO_DEV: 819 800 chcr = RCAR_DMACHCR_DM_FIXED | RCAR_DMACHCR_SM_INC 820 801 | RCAR_DMACHCR_RS_DMARS; 821 - xfer_size = chan->dst_xfer_size; 802 + xfer_size = chan->dst.xfer_size; 822 803 break; 823 804 824 805 case DMA_MEM_TO_MEM: ··· 1042 1023 DMA_MEM_TO_MEM, flags, false); 1043 1024 } 1044 1025 1026 + static int rcar_dmac_map_slave_addr(struct dma_chan *chan, 1027 + enum dma_transfer_direction dir) 1028 + { 1029 + struct rcar_dmac_chan *rchan = to_rcar_dmac_chan(chan); 1030 + struct rcar_dmac_chan_map *map = &rchan->map; 1031 + phys_addr_t dev_addr; 1032 + size_t dev_size; 1033 + enum dma_data_direction dev_dir; 1034 + 1035 + if (dir == DMA_DEV_TO_MEM) { 1036 + dev_addr = rchan->src.slave_addr; 1037 + dev_size = rchan->src.xfer_size; 1038 + dev_dir = DMA_TO_DEVICE; 1039 + } else { 1040 + dev_addr = rchan->dst.slave_addr; 1041 + dev_size = rchan->dst.xfer_size; 1042 + dev_dir = DMA_FROM_DEVICE; 1043 + } 1044 + 1045 + /* Reuse current map if possible. */ 1046 + if (dev_addr == map->slave.slave_addr && 1047 + dev_size == map->slave.xfer_size && 1048 + dev_dir == map->dir) 1049 + return 0; 1050 + 1051 + /* Remove old mapping if present. */ 1052 + if (map->slave.xfer_size) 1053 + dma_unmap_resource(chan->device->dev, map->addr, 1054 + map->slave.xfer_size, map->dir, 0); 1055 + map->slave.xfer_size = 0; 1056 + 1057 + /* Create new slave address map. 
*/ 1058 + map->addr = dma_map_resource(chan->device->dev, dev_addr, dev_size, 1059 + dev_dir, 0); 1060 + 1061 + if (dma_mapping_error(chan->device->dev, map->addr)) { 1062 + dev_err(chan->device->dev, 1063 + "chan%u: failed to map %zx@%pap", rchan->index, 1064 + dev_size, &dev_addr); 1065 + return -EIO; 1066 + } 1067 + 1068 + dev_dbg(chan->device->dev, "chan%u: map %zx@%pap to %pad dir: %s\n", 1069 + rchan->index, dev_size, &dev_addr, &map->addr, 1070 + dev_dir == DMA_TO_DEVICE ? "DMA_TO_DEVICE" : "DMA_FROM_DEVICE"); 1071 + 1072 + map->slave.slave_addr = dev_addr; 1073 + map->slave.xfer_size = dev_size; 1074 + map->dir = dev_dir; 1075 + 1076 + return 0; 1077 + } 1078 + 1045 1079 static struct dma_async_tx_descriptor * 1046 1080 rcar_dmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl, 1047 1081 unsigned int sg_len, enum dma_transfer_direction dir, 1048 1082 unsigned long flags, void *context) 1049 1083 { 1050 1084 struct rcar_dmac_chan *rchan = to_rcar_dmac_chan(chan); 1051 - dma_addr_t dev_addr; 1052 1085 1053 1086 /* Someone calling slave DMA on a generic channel? */ 1054 1087 if (rchan->mid_rid < 0 || !sg_len) { ··· 1110 1039 return NULL; 1111 1040 } 1112 1041 1113 - dev_addr = dir == DMA_DEV_TO_MEM 1114 - ? rchan->src_slave_addr : rchan->dst_slave_addr; 1115 - return rcar_dmac_chan_prep_sg(rchan, sgl, sg_len, dev_addr, 1042 + if (rcar_dmac_map_slave_addr(chan, dir)) 1043 + return NULL; 1044 + 1045 + return rcar_dmac_chan_prep_sg(rchan, sgl, sg_len, rchan->map.addr, 1116 1046 dir, flags, false); 1117 1047 } 1118 1048 ··· 1127 1055 struct rcar_dmac_chan *rchan = to_rcar_dmac_chan(chan); 1128 1056 struct dma_async_tx_descriptor *desc; 1129 1057 struct scatterlist *sgl; 1130 - dma_addr_t dev_addr; 1131 1058 unsigned int sg_len; 1132 1059 unsigned int i; 1133 1060 ··· 1137 1066 __func__, buf_len, period_len, rchan->mid_rid); 1138 1067 return NULL; 1139 1068 } 1069 + 1070 + if (rcar_dmac_map_slave_addr(chan, dir)) 1071 + return NULL; 1140 1072 1141 1073 sg_len = buf_len / period_len; 1142 1074 if (sg_len > RCAR_DMAC_MAX_SG_LEN) { ··· 1168 1094 sg_dma_len(&sgl[i]) = period_len; 1169 1095 } 1170 1096 1171 - dev_addr = dir == DMA_DEV_TO_MEM 1172 - ? rchan->src_slave_addr : rchan->dst_slave_addr; 1173 - desc = rcar_dmac_chan_prep_sg(rchan, sgl, sg_len, dev_addr, 1097 + desc = rcar_dmac_chan_prep_sg(rchan, sgl, sg_len, rchan->map.addr, 1174 1098 dir, flags, true); 1175 1099 1176 1100 kfree(sgl); ··· 1184 1112 * We could lock this, but you shouldn't be configuring the 1185 1113 * channel, while using it... 1186 1114 */ 1187 - rchan->src_slave_addr = cfg->src_addr; 1188 - rchan->dst_slave_addr = cfg->dst_addr; 1189 - rchan->src_xfer_size = cfg->src_addr_width; 1190 - rchan->dst_xfer_size = cfg->dst_addr_width; 1115 + rchan->src.slave_addr = cfg->src_addr; 1116 + rchan->dst.slave_addr = cfg->dst_addr; 1117 + rchan->src.xfer_size = cfg->src_addr_width; 1118 + rchan->dst.xfer_size = cfg->dst_addr_width; 1191 1119 1192 1120 return 0; 1193 1121 } ··· 1461 1389 { 1462 1390 struct rcar_dmac_chan *chan = dev; 1463 1391 struct rcar_dmac_desc *desc; 1392 + struct dmaengine_desc_callback cb; 1464 1393 1465 1394 spin_lock_irq(&chan->lock); 1466 1395 1467 1396 /* For cyclic transfers notify the user after every chunk. 
*/ 1468 1397 if (chan->desc.running && chan->desc.running->cyclic) { 1469 - dma_async_tx_callback callback; 1470 - void *callback_param; 1471 - 1472 1398 desc = chan->desc.running; 1473 - callback = desc->async_tx.callback; 1474 - callback_param = desc->async_tx.callback_param; 1399 + dmaengine_desc_get_callback(&desc->async_tx, &cb); 1475 1400 1476 - if (callback) { 1401 + if (dmaengine_desc_callback_valid(&cb)) { 1477 1402 spin_unlock_irq(&chan->lock); 1478 - callback(callback_param); 1403 + dmaengine_desc_callback_invoke(&cb, NULL); 1479 1404 spin_lock_irq(&chan->lock); 1480 1405 } 1481 1406 } ··· 1487 1418 dma_cookie_complete(&desc->async_tx); 1488 1419 list_del(&desc->node); 1489 1420 1490 - if (desc->async_tx.callback) { 1421 + dmaengine_desc_get_callback(&desc->async_tx, &cb); 1422 + if (dmaengine_desc_callback_valid(&cb)) { 1491 1423 spin_unlock_irq(&chan->lock); 1492 1424 /* 1493 1425 * We own the only reference to this descriptor, we can 1494 1426 * safely dereference it without holding the channel 1495 1427 * lock. 1496 1428 */ 1497 - desc->async_tx.callback(desc->async_tx.callback_param); 1429 + dmaengine_desc_callback_invoke(&cb, NULL); 1498 1430 spin_lock_irq(&chan->lock); 1499 1431 } 1500 1432
+6 -6
drivers/dma/sh/shdma-base.c
··· 330 330 bool head_acked = false; 331 331 dma_cookie_t cookie = 0; 332 332 dma_async_tx_callback callback = NULL; 333 - void *param = NULL; 333 + struct dmaengine_desc_callback cb; 334 334 unsigned long flags; 335 335 LIST_HEAD(cyclic_list); 336 336 337 + memset(&cb, 0, sizeof(cb)); 337 338 spin_lock_irqsave(&schan->chan_lock, flags); 338 339 list_for_each_entry_safe(desc, _desc, &schan->ld_queue, node) { 339 340 struct dma_async_tx_descriptor *tx = &desc->async_tx; ··· 368 367 /* Call callback on the last chunk */ 369 368 if (desc->mark == DESC_COMPLETED && tx->callback) { 370 369 desc->mark = DESC_WAITING; 370 + dmaengine_desc_get_callback(tx, &cb); 371 371 callback = tx->callback; 372 - param = tx->callback_param; 373 372 dev_dbg(schan->dev, "descriptor #%d@%p on %d callback\n", 374 373 tx->cookie, tx, schan->id); 375 374 BUG_ON(desc->chunks != 1); ··· 431 430 432 431 spin_unlock_irqrestore(&schan->chan_lock, flags); 433 432 434 - if (callback) 435 - callback(param); 433 + dmaengine_desc_callback_invoke(&cb, NULL); 436 434 437 435 return callback; 438 436 } ··· 885 885 /* Complete all */ 886 886 list_for_each_entry(sdesc, &dl, node) { 887 887 struct dma_async_tx_descriptor *tx = &sdesc->async_tx; 888 + 888 889 sdesc->mark = DESC_IDLE; 889 - if (tx->callback) 890 - tx->callback(tx->callback_param); 890 + dmaengine_desc_get_callback_invoke(tx, NULL); 891 891 } 892 892 893 893 spin_lock(&schan->chan_lock);
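Several of the driver updates in this pull replace open-coded callback/callback_param handling with the dmaengine_desc_callback helpers from the private drivers/dma/dmaengine.h header, as the shdma-base hunk above does. A minimal sketch of that pattern follows; the example_chan/example_desc structures are hypothetical stand-ins for a driver's own channel and descriptor types.

    #include <linux/spinlock.h>
    #include <linux/dmaengine.h>
    #include "dmaengine.h"	/* private helpers under drivers/dma/ */

    struct example_desc {
    	struct dma_async_tx_descriptor txd;
    };

    struct example_chan {
    	spinlock_t lock;
    };

    static void example_complete(struct example_chan *chan,
    				 struct example_desc *desc)
    {
    	struct dmaengine_desc_callback cb;

    	spin_lock_irq(&chan->lock);
    	dma_cookie_complete(&desc->txd);
    	/* Snapshot callback and parameter while the lock is held... */
    	dmaengine_desc_get_callback(&desc->txd, &cb);
    	spin_unlock_irq(&chan->lock);

    	/* ...and invoke outside the lock; an unset callback is a no-op. */
    	dmaengine_desc_callback_invoke(&cb, NULL);
    }

When the lock should only be dropped if a callback is actually registered, dmaengine_desc_callback_valid() can gate the unlock/invoke/relock sequence, as the rcar-dmac and xilinx_dma hunks do.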
+3 -6
drivers/dma/sirf-dma.c
··· 360 360 list_for_each_entry(sdesc, &list, node) { 361 361 desc = &sdesc->desc; 362 362 363 - if (desc->callback) 364 - desc->callback(desc->callback_param); 365 - 363 + dmaengine_desc_get_callback_invoke(desc, NULL); 366 364 last_cookie = desc->cookie; 367 365 dma_run_dependencies(desc); 368 366 } ··· 386 388 387 389 desc = &sdesc->desc; 388 390 while (happened_cyclic != schan->completed_cyclic) { 389 - if (desc->callback) 390 - desc->callback(desc->callback_param); 391 + dmaengine_desc_get_callback_invoke(desc, NULL); 391 392 schan->completed_cyclic++; 392 393 } 393 394 } ··· 866 869 } 867 870 868 871 sdma->irq = irq_of_parse_and_map(dn, 0); 869 - if (sdma->irq == NO_IRQ) { 872 + if (!sdma->irq) { 870 873 dev_err(dev, "Error mapping IRQ!\n"); 871 874 return -EINVAL; 872 875 }
+131 -168
drivers/dma/ste_dma40.c
··· 874 874 } 875 875 876 876 if (curr_lcla < 0) 877 - goto out; 877 + goto set_current; 878 878 879 879 for (; lli_current < lli_len; lli_current++) { 880 880 unsigned int lcla_offset = chan->phy_chan->num * 1024 + ··· 925 925 break; 926 926 } 927 927 } 928 - 929 - out: 928 + set_current: 930 929 desc->lli_current = lli_current; 931 930 } 932 931 ··· 940 941 941 942 static struct d40_desc *d40_first_active_get(struct d40_chan *d40c) 942 943 { 943 - struct d40_desc *d; 944 - 945 - if (list_empty(&d40c->active)) 946 - return NULL; 947 - 948 - d = list_first_entry(&d40c->active, 949 - struct d40_desc, 950 - node); 951 - return d; 944 + return list_first_entry_or_null(&d40c->active, struct d40_desc, node); 952 945 } 953 946 954 947 /* remove desc from current queue and add it to the pending_queue */ ··· 953 962 954 963 static struct d40_desc *d40_first_pending(struct d40_chan *d40c) 955 964 { 956 - struct d40_desc *d; 957 - 958 - if (list_empty(&d40c->pending_queue)) 959 - return NULL; 960 - 961 - d = list_first_entry(&d40c->pending_queue, 962 - struct d40_desc, 963 - node); 964 - return d; 965 + return list_first_entry_or_null(&d40c->pending_queue, struct d40_desc, 966 + node); 965 967 } 966 968 967 969 static struct d40_desc *d40_first_queued(struct d40_chan *d40c) 968 970 { 969 - struct d40_desc *d; 970 - 971 - if (list_empty(&d40c->queue)) 972 - return NULL; 973 - 974 - d = list_first_entry(&d40c->queue, 975 - struct d40_desc, 976 - node); 977 - return d; 971 + return list_first_entry_or_null(&d40c->queue, struct d40_desc, node); 978 972 } 979 973 980 974 static struct d40_desc *d40_first_done(struct d40_chan *d40c) 981 975 { 982 - if (list_empty(&d40c->done)) 983 - return NULL; 984 - 985 - return list_first_entry(&d40c->done, struct d40_desc, node); 976 + return list_first_entry_or_null(&d40c->done, struct d40_desc, node); 986 977 } 987 978 988 979 static int d40_psize_2_burst_size(bool is_log, int psize) ··· 1056 1083 D40_CHAN_POS(d40c->phy_chan->num); 1057 1084 1058 1085 if (status == D40_DMA_SUSPENDED || status == D40_DMA_STOP) 1059 - goto done; 1086 + goto unlock; 1060 1087 } 1061 1088 1062 1089 wmask = 0xffffffff & ~(D40_CHAN_POS_MASK(d40c->phy_chan->num)); ··· 1092 1119 } 1093 1120 1094 1121 } 1095 - done: 1122 + unlock: 1096 1123 spin_unlock_irqrestore(&d40c->base->execmd_lock, flags); 1097 1124 return ret; 1098 1125 } ··· 1569 1596 struct d40_desc *d40d; 1570 1597 unsigned long flags; 1571 1598 bool callback_active; 1572 - dma_async_tx_callback callback; 1573 - void *callback_param; 1599 + struct dmaengine_desc_callback cb; 1574 1600 1575 1601 spin_lock_irqsave(&d40c->lock, flags); 1576 1602 ··· 1579 1607 /* Check if we have reached here for cyclic job */ 1580 1608 d40d = d40_first_active_get(d40c); 1581 1609 if (d40d == NULL || !d40d->cyclic) 1582 - goto err; 1610 + goto check_pending_tx; 1583 1611 } 1584 1612 1585 1613 if (!d40d->cyclic) ··· 1596 1624 1597 1625 /* Callback to client */ 1598 1626 callback_active = !!(d40d->txd.flags & DMA_PREP_INTERRUPT); 1599 - callback = d40d->txd.callback; 1600 - callback_param = d40d->txd.callback_param; 1627 + dmaengine_desc_get_callback(&d40d->txd, &cb); 1601 1628 1602 1629 if (!d40d->cyclic) { 1603 1630 if (async_tx_test_ack(&d40d->txd)) { ··· 1617 1646 1618 1647 spin_unlock_irqrestore(&d40c->lock, flags); 1619 1648 1620 - if (callback_active && callback) 1621 - callback(callback_param); 1649 + if (callback_active) 1650 + dmaengine_desc_callback_invoke(&cb, NULL); 1622 1651 1623 1652 return; 1624 - 1625 - err: 1653 + check_pending_tx: 
1626 1654 /* Rescue manouver if receiving double interrupts */ 1627 1655 if (d40c->pending_tx > 0) 1628 1656 d40c->pending_tx--; ··· 1750 1780 phy->allocated_dst == D40_ALLOC_FREE) { 1751 1781 phy->allocated_dst = D40_ALLOC_PHY; 1752 1782 phy->allocated_src = D40_ALLOC_PHY; 1753 - goto found; 1783 + goto found_unlock; 1754 1784 } else 1755 - goto not_found; 1785 + goto not_found_unlock; 1756 1786 } 1757 1787 1758 1788 /* Logical channel */ 1759 1789 if (is_src) { 1760 1790 if (phy->allocated_src == D40_ALLOC_PHY) 1761 - goto not_found; 1791 + goto not_found_unlock; 1762 1792 1763 1793 if (phy->allocated_src == D40_ALLOC_FREE) 1764 1794 phy->allocated_src = D40_ALLOC_LOG_FREE; 1765 1795 1766 1796 if (!(phy->allocated_src & BIT(log_event_line))) { 1767 1797 phy->allocated_src |= BIT(log_event_line); 1768 - goto found; 1798 + goto found_unlock; 1769 1799 } else 1770 - goto not_found; 1800 + goto not_found_unlock; 1771 1801 } else { 1772 1802 if (phy->allocated_dst == D40_ALLOC_PHY) 1773 - goto not_found; 1803 + goto not_found_unlock; 1774 1804 1775 1805 if (phy->allocated_dst == D40_ALLOC_FREE) 1776 1806 phy->allocated_dst = D40_ALLOC_LOG_FREE; 1777 1807 1778 1808 if (!(phy->allocated_dst & BIT(log_event_line))) { 1779 1809 phy->allocated_dst |= BIT(log_event_line); 1780 - goto found; 1781 - } else 1782 - goto not_found; 1810 + goto found_unlock; 1811 + } 1783 1812 } 1784 - 1785 - not_found: 1813 + not_found_unlock: 1786 1814 spin_unlock_irqrestore(&phy->lock, flags); 1787 1815 return false; 1788 - found: 1816 + found_unlock: 1789 1817 spin_unlock_irqrestore(&phy->lock, flags); 1790 1818 return true; 1791 1819 } ··· 1799 1831 phy->allocated_dst = D40_ALLOC_FREE; 1800 1832 phy->allocated_src = D40_ALLOC_FREE; 1801 1833 is_free = true; 1802 - goto out; 1834 + goto unlock; 1803 1835 } 1804 1836 1805 1837 /* Logical channel */ ··· 1815 1847 1816 1848 is_free = ((phy->allocated_src | phy->allocated_dst) == 1817 1849 D40_ALLOC_FREE); 1818 - 1819 - out: 1850 + unlock: 1820 1851 spin_unlock_irqrestore(&phy->lock, flags); 1821 1852 1822 1853 return is_free; ··· 2014 2047 res = d40_channel_execute_command(d40c, D40_DMA_STOP); 2015 2048 if (res) { 2016 2049 chan_err(d40c, "stop failed\n"); 2017 - goto out; 2050 + goto mark_last_busy; 2018 2051 } 2019 2052 2020 2053 d40_alloc_mask_free(phy, is_src, chan_is_logical(d40c) ? 
event : 0); ··· 2032 2065 d40c->busy = false; 2033 2066 d40c->phy_chan = NULL; 2034 2067 d40c->configured = false; 2035 - out: 2036 - 2068 + mark_last_busy: 2037 2069 pm_runtime_mark_last_busy(d40c->base->dev); 2038 2070 pm_runtime_put_autosuspend(d40c->base->dev); 2039 2071 return res; ··· 2060 2094 D40_CHAN_POS(d40c->phy_chan->num); 2061 2095 if (status == D40_DMA_SUSPENDED || status == D40_DMA_STOP) 2062 2096 is_paused = true; 2063 - 2064 - goto _exit; 2097 + goto unlock; 2065 2098 } 2066 2099 2067 2100 if (d40c->dma_cfg.dir == DMA_MEM_TO_DEV || ··· 2070 2105 status = readl(chanbase + D40_CHAN_REG_SSLNK); 2071 2106 } else { 2072 2107 chan_err(d40c, "Unknown direction\n"); 2073 - goto _exit; 2108 + goto unlock; 2074 2109 } 2075 2110 2076 2111 status = (status & D40_EVENTLINE_MASK(event)) >> ··· 2078 2113 2079 2114 if (status != D40_DMA_RUN) 2080 2115 is_paused = true; 2081 - _exit: 2116 + unlock: 2082 2117 spin_unlock_irqrestore(&d40c->lock, flags); 2083 2118 return is_paused; 2084 2119 ··· 2163 2198 d40_prep_desc(struct d40_chan *chan, struct scatterlist *sg, 2164 2199 unsigned int sg_len, unsigned long dma_flags) 2165 2200 { 2166 - struct stedma40_chan_cfg *cfg = &chan->dma_cfg; 2201 + struct stedma40_chan_cfg *cfg; 2167 2202 struct d40_desc *desc; 2168 2203 int ret; 2169 2204 ··· 2171 2206 if (!desc) 2172 2207 return NULL; 2173 2208 2209 + cfg = &chan->dma_cfg; 2174 2210 desc->lli_len = d40_sg_2_dmalen(sg, sg_len, cfg->src_info.data_width, 2175 2211 cfg->dst_info.data_width); 2176 2212 if (desc->lli_len < 0) { 2177 2213 chan_err(chan, "Unaligned size\n"); 2178 - goto err; 2214 + goto free_desc; 2179 2215 } 2180 2216 2181 2217 ret = d40_pool_lli_alloc(chan, desc, desc->lli_len); 2182 2218 if (ret < 0) { 2183 2219 chan_err(chan, "Could not allocate lli\n"); 2184 - goto err; 2220 + goto free_desc; 2185 2221 } 2186 2222 2187 2223 desc->lli_current = 0; ··· 2192 2226 dma_async_tx_descriptor_init(&desc->txd, &chan->chan); 2193 2227 2194 2228 return desc; 2195 - 2196 - err: 2229 + free_desc: 2197 2230 d40_desc_free(chan, desc); 2198 2231 return NULL; 2199 2232 } ··· 2203 2238 enum dma_transfer_direction direction, unsigned long dma_flags) 2204 2239 { 2205 2240 struct d40_chan *chan = container_of(dchan, struct d40_chan, chan); 2206 - dma_addr_t src_dev_addr = 0; 2207 - dma_addr_t dst_dev_addr = 0; 2241 + dma_addr_t src_dev_addr; 2242 + dma_addr_t dst_dev_addr; 2208 2243 struct d40_desc *desc; 2209 2244 unsigned long flags; 2210 2245 int ret; ··· 2218 2253 2219 2254 desc = d40_prep_desc(chan, sg_src, sg_len, dma_flags); 2220 2255 if (desc == NULL) 2221 - goto err; 2256 + goto unlock; 2222 2257 2223 2258 if (sg_next(&sg_src[sg_len - 1]) == sg_src) 2224 2259 desc->cyclic = true; 2225 2260 2261 + src_dev_addr = 0; 2262 + dst_dev_addr = 0; 2226 2263 if (direction == DMA_DEV_TO_MEM) 2227 2264 src_dev_addr = chan->runtime_addr; 2228 2265 else if (direction == DMA_MEM_TO_DEV) ··· 2240 2273 if (ret) { 2241 2274 chan_err(chan, "Failed to prepare %s sg job: %d\n", 2242 2275 chan_is_logical(chan) ? 
"log" : "phy", ret); 2243 - goto err; 2276 + goto free_desc; 2244 2277 } 2245 2278 2246 2279 /* ··· 2252 2285 spin_unlock_irqrestore(&chan->lock, flags); 2253 2286 2254 2287 return &desc->txd; 2255 - 2256 - err: 2257 - if (desc) 2258 - d40_desc_free(chan, desc); 2288 + free_desc: 2289 + d40_desc_free(chan, desc); 2290 + unlock: 2259 2291 spin_unlock_irqrestore(&chan->lock, flags); 2260 2292 return NULL; 2261 2293 } ··· 2392 2426 err = d40_config_memcpy(d40c); 2393 2427 if (err) { 2394 2428 chan_err(d40c, "Failed to configure memcpy channel\n"); 2395 - goto fail; 2429 + goto mark_last_busy; 2396 2430 } 2397 2431 } 2398 2432 ··· 2400 2434 if (err) { 2401 2435 chan_err(d40c, "Failed to allocate channel\n"); 2402 2436 d40c->configured = false; 2403 - goto fail; 2437 + goto mark_last_busy; 2404 2438 } 2405 2439 2406 2440 pm_runtime_get_sync(d40c->base->dev); ··· 2434 2468 */ 2435 2469 if (is_free_phy) 2436 2470 d40_config_write(d40c); 2437 - fail: 2471 + mark_last_busy: 2438 2472 pm_runtime_mark_last_busy(d40c->base->dev); 2439 2473 pm_runtime_put_autosuspend(d40c->base->dev); 2440 2474 spin_unlock_irqrestore(&d40c->lock, flags); ··· 2857 2891 2858 2892 if (err) { 2859 2893 d40_err(base->dev, "Failed to register slave channels\n"); 2860 - goto failure1; 2894 + goto exit; 2861 2895 } 2862 2896 2863 2897 d40_chan_init(base, &base->dma_memcpy, base->log_chans, ··· 2874 2908 if (err) { 2875 2909 d40_err(base->dev, 2876 2910 "Failed to register memcpy only channels\n"); 2877 - goto failure2; 2911 + goto unregister_slave; 2878 2912 } 2879 2913 2880 2914 d40_chan_init(base, &base->dma_both, base->phy_chans, ··· 2892 2926 if (err) { 2893 2927 d40_err(base->dev, 2894 2928 "Failed to register logical and physical capable channels\n"); 2895 - goto failure3; 2929 + goto unregister_memcpy; 2896 2930 } 2897 2931 return 0; 2898 - failure3: 2932 + unregister_memcpy: 2899 2933 dma_async_device_unregister(&base->dma_memcpy); 2900 - failure2: 2934 + unregister_slave: 2901 2935 dma_async_device_unregister(&base->dma_slave); 2902 - failure1: 2936 + exit: 2903 2937 return err; 2904 2938 } 2905 2939 ··· 3110 3144 static struct d40_base * __init d40_hw_detect_init(struct platform_device *pdev) 3111 3145 { 3112 3146 struct stedma40_platform_data *plat_data = dev_get_platdata(&pdev->dev); 3113 - struct clk *clk = NULL; 3114 - void __iomem *virtbase = NULL; 3115 - struct resource *res = NULL; 3116 - struct d40_base *base = NULL; 3117 - int num_log_chans = 0; 3147 + struct clk *clk; 3148 + void __iomem *virtbase; 3149 + struct resource *res; 3150 + struct d40_base *base; 3151 + int num_log_chans; 3118 3152 int num_phy_chans; 3119 3153 int num_memcpy_chans; 3120 3154 int clk_ret = -EINVAL; ··· 3126 3160 clk = clk_get(&pdev->dev, NULL); 3127 3161 if (IS_ERR(clk)) { 3128 3162 d40_err(&pdev->dev, "No matching clock found\n"); 3129 - goto failure; 3163 + goto check_prepare_enabled; 3130 3164 } 3131 3165 3132 3166 clk_ret = clk_prepare_enable(clk); 3133 3167 if (clk_ret) { 3134 3168 d40_err(&pdev->dev, "Failed to prepare/enable clock\n"); 3135 - goto failure; 3169 + goto disable_unprepare; 3136 3170 } 3137 3171 3138 3172 /* Get IO for DMAC base address */ 3139 3173 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "base"); 3140 3174 if (!res) 3141 - goto failure; 3175 + goto disable_unprepare; 3142 3176 3143 3177 if (request_mem_region(res->start, resource_size(res), 3144 3178 D40_NAME " I/O base") == NULL) 3145 - goto failure; 3179 + goto release_region; 3146 3180 3147 3181 virtbase = ioremap(res->start, 
resource_size(res)); 3148 3182 if (!virtbase) 3149 - goto failure; 3183 + goto release_region; 3150 3184 3151 3185 /* This is just a regular AMBA PrimeCell ID actually */ 3152 3186 for (pid = 0, i = 0; i < 4; i++) ··· 3158 3192 3159 3193 if (cid != AMBA_CID) { 3160 3194 d40_err(&pdev->dev, "Unknown hardware! No PrimeCell ID\n"); 3161 - goto failure; 3195 + goto unmap_io; 3162 3196 } 3163 3197 if (AMBA_MANF_BITS(pid) != AMBA_VENDOR_ST) { 3164 3198 d40_err(&pdev->dev, "Unknown designer! Got %x wanted %x\n", 3165 3199 AMBA_MANF_BITS(pid), 3166 3200 AMBA_VENDOR_ST); 3167 - goto failure; 3201 + goto unmap_io; 3168 3202 } 3169 3203 /* 3170 3204 * HW revision: ··· 3178 3212 rev = AMBA_REV_BITS(pid); 3179 3213 if (rev < 2) { 3180 3214 d40_err(&pdev->dev, "hardware revision: %d is not supported", rev); 3181 - goto failure; 3215 + goto unmap_io; 3182 3216 } 3183 3217 3184 3218 /* The number of physical channels on this HW */ ··· 3204 3238 sizeof(struct d40_chan), GFP_KERNEL); 3205 3239 3206 3240 if (base == NULL) 3207 - goto failure; 3241 + goto unmap_io; 3208 3242 3209 3243 base->rev = rev; 3210 3244 base->clk = clk; ··· 3249 3283 base->gen_dmac.init_reg_size = ARRAY_SIZE(dma_init_reg_v4a); 3250 3284 } 3251 3285 3252 - base->phy_res = kzalloc(num_phy_chans * sizeof(struct d40_phy_res), 3286 + base->phy_res = kcalloc(num_phy_chans, 3287 + sizeof(*base->phy_res), 3253 3288 GFP_KERNEL); 3254 3289 if (!base->phy_res) 3255 - goto failure; 3290 + goto free_base; 3256 3291 3257 - base->lookup_phy_chans = kzalloc(num_phy_chans * 3258 - sizeof(struct d40_chan *), 3292 + base->lookup_phy_chans = kcalloc(num_phy_chans, 3293 + sizeof(*base->lookup_phy_chans), 3259 3294 GFP_KERNEL); 3260 3295 if (!base->lookup_phy_chans) 3261 - goto failure; 3296 + goto free_phy_res; 3262 3297 3263 - base->lookup_log_chans = kzalloc(num_log_chans * 3264 - sizeof(struct d40_chan *), 3298 + base->lookup_log_chans = kcalloc(num_log_chans, 3299 + sizeof(*base->lookup_log_chans), 3265 3300 GFP_KERNEL); 3266 3301 if (!base->lookup_log_chans) 3267 - goto failure; 3302 + goto free_phy_chans; 3268 3303 3269 - base->reg_val_backup_chan = kmalloc(base->num_phy_chans * 3270 - sizeof(d40_backup_regs_chan), 3271 - GFP_KERNEL); 3304 + base->reg_val_backup_chan = kmalloc_array(base->num_phy_chans, 3305 + sizeof(d40_backup_regs_chan), 3306 + GFP_KERNEL); 3272 3307 if (!base->reg_val_backup_chan) 3273 - goto failure; 3308 + goto free_log_chans; 3274 3309 3275 - base->lcla_pool.alloc_map = 3276 - kzalloc(num_phy_chans * sizeof(struct d40_desc *) 3277 - * D40_LCLA_LINK_PER_EVENT_GRP, GFP_KERNEL); 3310 + base->lcla_pool.alloc_map = kcalloc(num_phy_chans 3311 + * D40_LCLA_LINK_PER_EVENT_GRP, 3312 + sizeof(*base->lcla_pool.alloc_map), 3313 + GFP_KERNEL); 3278 3314 if (!base->lcla_pool.alloc_map) 3279 - goto failure; 3315 + goto free_backup_chan; 3280 3316 3281 3317 base->desc_slab = kmem_cache_create(D40_NAME, sizeof(struct d40_desc), 3282 3318 0, SLAB_HWCACHE_ALIGN, 3283 3319 NULL); 3284 3320 if (base->desc_slab == NULL) 3285 - goto failure; 3321 + goto free_map; 3286 3322 3287 3323 return base; 3288 - 3289 - failure: 3324 + free_map: 3325 + kfree(base->lcla_pool.alloc_map); 3326 + free_backup_chan: 3327 + kfree(base->reg_val_backup_chan); 3328 + free_log_chans: 3329 + kfree(base->lookup_log_chans); 3330 + free_phy_chans: 3331 + kfree(base->lookup_phy_chans); 3332 + free_phy_res: 3333 + kfree(base->phy_res); 3334 + free_base: 3335 + kfree(base); 3336 + unmap_io: 3337 + iounmap(virtbase); 3338 + release_region: 3339 + release_mem_region(res->start, 
resource_size(res)); 3340 + check_prepare_enabled: 3290 3341 if (!clk_ret) 3342 + disable_unprepare: 3291 3343 clk_disable_unprepare(clk); 3292 3344 if (!IS_ERR(clk)) 3293 3345 clk_put(clk); 3294 - if (virtbase) 3295 - iounmap(virtbase); 3296 - if (res) 3297 - release_mem_region(res->start, 3298 - resource_size(res)); 3299 - if (virtbase) 3300 - iounmap(virtbase); 3301 - 3302 - if (base) { 3303 - kfree(base->lcla_pool.alloc_map); 3304 - kfree(base->reg_val_backup_chan); 3305 - kfree(base->lookup_log_chans); 3306 - kfree(base->lookup_phy_chans); 3307 - kfree(base->phy_res); 3308 - kfree(base); 3309 - } 3310 - 3311 3346 return NULL; 3312 3347 } 3313 3348 ··· 3371 3404 struct d40_lcla_pool *pool = &base->lcla_pool; 3372 3405 unsigned long *page_list; 3373 3406 int i, j; 3374 - int ret = 0; 3407 + int ret; 3375 3408 3376 3409 /* 3377 3410 * This is somewhat ugly. We need 8192 bytes that are 18 bit aligned, 3378 3411 * To full fill this hardware requirement without wasting 256 kb 3379 3412 * we allocate pages until we get an aligned one. 3380 3413 */ 3381 - page_list = kmalloc(sizeof(unsigned long) * MAX_LCLA_ALLOC_ATTEMPTS, 3382 - GFP_KERNEL); 3383 - 3384 - if (!page_list) { 3385 - ret = -ENOMEM; 3386 - goto failure; 3387 - } 3414 + page_list = kmalloc_array(MAX_LCLA_ALLOC_ATTEMPTS, 3415 + sizeof(*page_list), 3416 + GFP_KERNEL); 3417 + if (!page_list) 3418 + return -ENOMEM; 3388 3419 3389 3420 /* Calculating how many pages that are required */ 3390 3421 base->lcla_pool.pages = SZ_1K * base->num_phy_chans / PAGE_SIZE; ··· 3398 3433 3399 3434 for (j = 0; j < i; j++) 3400 3435 free_pages(page_list[j], base->lcla_pool.pages); 3401 - goto failure; 3436 + goto free_page_list; 3402 3437 } 3403 3438 3404 3439 if ((virt_to_phys((void *)page_list[i]) & ··· 3425 3460 GFP_KERNEL); 3426 3461 if (!base->lcla_pool.base_unaligned) { 3427 3462 ret = -ENOMEM; 3428 - goto failure; 3463 + goto free_page_list; 3429 3464 } 3430 3465 3431 3466 base->lcla_pool.base = PTR_ALIGN(base->lcla_pool.base_unaligned, ··· 3438 3473 if (dma_mapping_error(base->dev, pool->dma_addr)) { 3439 3474 pool->dma_addr = 0; 3440 3475 ret = -ENOMEM; 3441 - goto failure; 3476 + goto free_page_list; 3442 3477 } 3443 3478 3444 3479 writel(virt_to_phys(base->lcla_pool.base), 3445 3480 base->virtbase + D40_DREG_LCLA); 3446 - failure: 3481 + ret = 0; 3482 + free_page_list: 3447 3483 kfree(page_list); 3448 3484 return ret; 3449 3485 } ··· 3456 3490 int num_phy = 0, num_memcpy = 0, num_disabled = 0; 3457 3491 const __be32 *list; 3458 3492 3459 - pdata = devm_kzalloc(&pdev->dev, 3460 - sizeof(struct stedma40_platform_data), 3461 - GFP_KERNEL); 3493 + pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL); 3462 3494 if (!pdata) 3463 3495 return -ENOMEM; 3464 3496 ··· 3538 3574 if (!res) { 3539 3575 ret = -ENOENT; 3540 3576 d40_err(&pdev->dev, "No \"lcpa\" memory resource\n"); 3541 - goto failure; 3577 + goto destroy_cache; 3542 3578 } 3543 3579 base->lcpa_size = resource_size(res); 3544 3580 base->phy_lcpa = res->start; ··· 3547 3583 D40_NAME " I/O lcpa") == NULL) { 3548 3584 ret = -EBUSY; 3549 3585 d40_err(&pdev->dev, "Failed to request LCPA region %pR\n", res); 3550 - goto failure; 3586 + goto destroy_cache; 3551 3587 } 3552 3588 3553 3589 /* We make use of ESRAM memory for this. 
*/ ··· 3563 3599 if (!base->lcpa_base) { 3564 3600 ret = -ENOMEM; 3565 3601 d40_err(&pdev->dev, "Failed to ioremap LCPA region\n"); 3566 - goto failure; 3602 + goto destroy_cache; 3567 3603 } 3568 3604 /* If lcla has to be located in ESRAM we don't need to allocate */ 3569 3605 if (base->plat_data->use_esram_lcla) { ··· 3573 3609 ret = -ENOENT; 3574 3610 d40_err(&pdev->dev, 3575 3611 "No \"lcla_esram\" memory resource\n"); 3576 - goto failure; 3612 + goto destroy_cache; 3577 3613 } 3578 3614 base->lcla_pool.base = ioremap(res->start, 3579 3615 resource_size(res)); 3580 3616 if (!base->lcla_pool.base) { 3581 3617 ret = -ENOMEM; 3582 3618 d40_err(&pdev->dev, "Failed to ioremap LCLA region\n"); 3583 - goto failure; 3619 + goto destroy_cache; 3584 3620 } 3585 3621 writel(res->start, base->virtbase + D40_DREG_LCLA); 3586 3622 ··· 3588 3624 ret = d40_lcla_allocate(base); 3589 3625 if (ret) { 3590 3626 d40_err(&pdev->dev, "Failed to allocate LCLA area\n"); 3591 - goto failure; 3627 + goto destroy_cache; 3592 3628 } 3593 3629 } 3594 3630 ··· 3599 3635 ret = request_irq(base->irq, d40_handle_interrupt, 0, D40_NAME, base); 3600 3636 if (ret) { 3601 3637 d40_err(&pdev->dev, "No IRQ defined\n"); 3602 - goto failure; 3638 + goto destroy_cache; 3603 3639 } 3604 3640 3605 3641 if (base->plat_data->use_esram_lcla) { ··· 3609 3645 d40_err(&pdev->dev, "Failed to get lcpa_regulator\n"); 3610 3646 ret = PTR_ERR(base->lcpa_regulator); 3611 3647 base->lcpa_regulator = NULL; 3612 - goto failure; 3648 + goto destroy_cache; 3613 3649 } 3614 3650 3615 3651 ret = regulator_enable(base->lcpa_regulator); ··· 3618 3654 "Failed to enable lcpa_regulator\n"); 3619 3655 regulator_put(base->lcpa_regulator); 3620 3656 base->lcpa_regulator = NULL; 3621 - goto failure; 3657 + goto destroy_cache; 3622 3658 } 3623 3659 } 3624 3660 ··· 3633 3669 3634 3670 ret = d40_dmaengine_init(base, num_reserved_chans); 3635 3671 if (ret) 3636 - goto failure; 3672 + goto destroy_cache; 3637 3673 3638 3674 base->dev->dma_parms = &base->dma_parms; 3639 3675 ret = dma_set_max_seg_size(base->dev, STEDMA40_MAX_SEG_SIZE); 3640 3676 if (ret) { 3641 3677 d40_err(&pdev->dev, "Failed to set dma max seg size\n"); 3642 - goto failure; 3678 + goto destroy_cache; 3643 3679 } 3644 3680 3645 3681 d40_hw_init(base); ··· 3653 3689 3654 3690 dev_info(base->dev, "initialized\n"); 3655 3691 return 0; 3656 - 3657 - failure: 3692 + destroy_cache: 3658 3693 kmem_cache_destroy(base->desc_slab); 3659 3694 if (base->virtbase) 3660 3695 iounmap(base->virtbase); ··· 3695 3732 kfree(base->lookup_phy_chans); 3696 3733 kfree(base->phy_res); 3697 3734 kfree(base); 3698 - report_failure: 3735 + report_failure: 3699 3736 d40_err(&pdev->dev, "probe failed\n"); 3700 3737 return ret; 3701 3738 }
+1 -1
drivers/dma/stm32-dma.c
··· 954 954 kfree(container_of(vdesc, struct stm32_dma_desc, vdesc)); 955 955 } 956 956 957 - void stm32_dma_set_config(struct stm32_dma_chan *chan, 957 + static void stm32_dma_set_config(struct stm32_dma_chan *chan, 958 958 struct stm32_dma_cfg *cfg) 959 959 { 960 960 stm32_dma_clear_reg(&chan->chan_reg);
+7
drivers/dma/sun6i-dma.c
··· 1011 1011 .nr_max_vchans = 37, 1012 1012 }; 1013 1013 1014 + static struct sun6i_dma_config sun8i_a83t_dma_cfg = { 1015 + .nr_max_channels = 8, 1016 + .nr_max_requests = 28, 1017 + .nr_max_vchans = 39, 1018 + }; 1019 + 1014 1020 /* 1015 1021 * The H3 has 12 physical channels, a maximum DRQ port id of 27, 1016 1022 * and a total of 34 usable source and destination endpoints. ··· 1031 1025 static const struct of_device_id sun6i_dma_match[] = { 1032 1026 { .compatible = "allwinner,sun6i-a31-dma", .data = &sun6i_a31_dma_cfg }, 1033 1027 { .compatible = "allwinner,sun8i-a23-dma", .data = &sun8i_a23_dma_cfg }, 1028 + { .compatible = "allwinner,sun8i-a83t-dma", .data = &sun8i_a83t_dma_cfg }, 1034 1029 { .compatible = "allwinner,sun8i-h3-dma", .data = &sun8i_h3_dma_cfg }, 1035 1030 { /* sentinel */ } 1036 1031 };
+4 -6
drivers/dma/tegra20-apb-dma.c
··· 655 655 static void tegra_dma_tasklet(unsigned long data) 656 656 { 657 657 struct tegra_dma_channel *tdc = (struct tegra_dma_channel *)data; 658 - dma_async_tx_callback callback = NULL; 659 - void *callback_param = NULL; 658 + struct dmaengine_desc_callback cb; 660 659 struct tegra_dma_desc *dma_desc; 661 660 unsigned long flags; 662 661 int cb_count; ··· 665 666 dma_desc = list_first_entry(&tdc->cb_desc, 666 667 typeof(*dma_desc), cb_node); 667 668 list_del(&dma_desc->cb_node); 668 - callback = dma_desc->txd.callback; 669 - callback_param = dma_desc->txd.callback_param; 669 + dmaengine_desc_get_callback(&dma_desc->txd, &cb); 670 670 cb_count = dma_desc->cb_count; 671 671 dma_desc->cb_count = 0; 672 672 spin_unlock_irqrestore(&tdc->lock, flags); 673 - while (cb_count-- && callback) 674 - callback(callback_param); 673 + while (cb_count--) 674 + dmaengine_desc_callback_invoke(&cb, NULL); 675 675 spin_lock_irqsave(&tdc->lock, flags); 676 676 } 677 677 spin_unlock_irqrestore(&tdc->lock, flags);
+2 -12
drivers/dma/tegra210-adma.c
··· 670 670 const struct tegra_adma_chip_data *cdata; 671 671 struct tegra_adma *tdma; 672 672 struct resource *res; 673 - struct clk *clk; 674 673 int ret, i; 675 674 676 675 cdata = of_device_get_match_data(&pdev->dev); ··· 696 697 if (ret) 697 698 return ret; 698 699 699 - clk = clk_get(&pdev->dev, "d_audio"); 700 - if (IS_ERR(clk)) { 701 - dev_err(&pdev->dev, "ADMA clock not found\n"); 702 - ret = PTR_ERR(clk); 700 + ret = of_pm_clk_add_clk(&pdev->dev, "d_audio"); 701 + if (ret) 703 702 goto clk_destroy; 704 - } 705 - 706 - ret = pm_clk_add_clk(&pdev->dev, clk); 707 - if (ret) { 708 - clk_put(clk); 709 - goto clk_destroy; 710 - } 711 703 712 704 pm_runtime_enable(&pdev->dev); 713 705
+19 -11
drivers/dma/ti-dma-crossbar.c
··· 18 18 19 19 #define TI_XBAR_DRA7 0 20 20 #define TI_XBAR_AM335X 1 21 + static const u32 ti_xbar_type[] = { 22 + [TI_XBAR_DRA7] = TI_XBAR_DRA7, 23 + [TI_XBAR_AM335X] = TI_XBAR_AM335X, 24 + }; 21 25 22 26 static const struct of_device_id ti_dma_xbar_match[] = { 23 27 { 24 28 .compatible = "ti,dra7-dma-crossbar", 25 - .data = (void *)TI_XBAR_DRA7, 29 + .data = &ti_xbar_type[TI_XBAR_DRA7], 26 30 }, 27 31 { 28 32 .compatible = "ti,am335x-edma-crossbar", 29 - .data = (void *)TI_XBAR_AM335X, 33 + .data = &ti_xbar_type[TI_XBAR_AM335X], 30 34 }, 31 35 {}, 32 36 }; ··· 194 190 #define TI_DRA7_XBAR_OUTPUTS 127 195 191 #define TI_DRA7_XBAR_INPUTS 256 196 192 197 - #define TI_XBAR_EDMA_OFFSET 0 198 - #define TI_XBAR_SDMA_OFFSET 1 199 - 200 193 struct ti_dra7_xbar_data { 201 194 void __iomem *iomem; 202 195 ··· 281 280 return map; 282 281 } 283 282 283 + #define TI_XBAR_EDMA_OFFSET 0 284 + #define TI_XBAR_SDMA_OFFSET 1 285 + static const u32 ti_dma_offset[] = { 286 + [TI_XBAR_EDMA_OFFSET] = 0, 287 + [TI_XBAR_SDMA_OFFSET] = 1, 288 + }; 289 + 284 290 static const struct of_device_id ti_dra7_master_match[] = { 285 291 { 286 292 .compatible = "ti,omap4430-sdma", 287 - .data = (void *)TI_XBAR_SDMA_OFFSET, 293 + .data = &ti_dma_offset[TI_XBAR_SDMA_OFFSET], 288 294 }, 289 295 { 290 296 .compatible = "ti,edma3", 291 - .data = (void *)TI_XBAR_EDMA_OFFSET, 297 + .data = &ti_dma_offset[TI_XBAR_EDMA_OFFSET], 292 298 }, 293 299 { 294 300 .compatible = "ti,edma3-tpcc", 295 - .data = (void *)TI_XBAR_EDMA_OFFSET, 301 + .data = &ti_dma_offset[TI_XBAR_EDMA_OFFSET], 296 302 }, 297 303 {}, 298 304 }; ··· 319 311 struct property *prop; 320 312 struct resource *res; 321 313 u32 safe_val; 322 - size_t sz; 314 + int sz; 323 315 void __iomem *iomem; 324 316 int i, ret; 325 317 ··· 403 395 404 396 xbar->dmarouter.dev = &pdev->dev; 405 397 xbar->dmarouter.route_free = ti_dra7_xbar_free; 406 - xbar->dma_offset = (u32)match->data; 398 + xbar->dma_offset = *(u32 *)match->data; 407 399 408 400 mutex_init(&xbar->mutex); 409 401 platform_set_drvdata(pdev, xbar); ··· 436 428 if (unlikely(!match)) 437 429 return -EINVAL; 438 430 439 - switch ((u32)match->data) { 431 + switch (*(u32 *)match->data) { 440 432 case TI_XBAR_DRA7: 441 433 ret = ti_dra7_xbar_probe(pdev); 442 434 break;
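The crossbar change above stops storing bare integers in of_device_id .data and instead points .data at const u32 objects, which consumers then dereference. A hedged, self-contained sketch of that pattern (compatible strings and values are invented for illustration):

    #include <linux/of_device.h>
    #include <linux/platform_device.h>

    static const u32 example_offset[] = { 0, 1 };

    static const struct of_device_id example_match[] = {
    	{ .compatible = "vendor,example-a", .data = &example_offset[0] },
    	{ .compatible = "vendor,example-b", .data = &example_offset[1] },
    	{ /* sentinel */ },
    };

    static int example_probe(struct platform_device *pdev)
    {
    	const struct of_device_id *match;
    	u32 offset;

    	match = of_match_device(example_match, &pdev->dev);
    	if (!match)
    		return -EINVAL;

    	/* Dereference the pointer; don't cast it back to an integer. */
    	offset = *(const u32 *)match->data;
    	dev_dbg(&pdev->dev, "using offset %u\n", offset);
    	return 0;
    }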
+3 -6
drivers/dma/timb_dma.c
··· 226 226 227 227 static void __td_finish(struct timb_dma_chan *td_chan) 228 228 { 229 - dma_async_tx_callback callback; 230 - void *param; 229 + struct dmaengine_desc_callback cb; 231 230 struct dma_async_tx_descriptor *txd; 232 231 struct timb_dma_desc *td_desc; 233 232 ··· 251 252 dma_cookie_complete(txd); 252 253 td_chan->ongoing = false; 253 254 254 - callback = txd->callback; 255 - param = txd->callback_param; 255 + dmaengine_desc_get_callback(txd, &cb); 256 256 257 257 list_move(&td_desc->desc_node, &td_chan->free_list); 258 258 ··· 260 262 * The API requires that no submissions are done from a 261 263 * callback, so we don't need to drop the lock here 262 264 */ 263 - if (callback) 264 - callback(param); 265 + dmaengine_desc_callback_invoke(&cb, NULL); 265 266 } 266 267 267 268 static u32 __td_ier_mask(struct timb_dma *td)
+3 -6
drivers/dma/txx9dmac.c
··· 403 403 txx9dmac_descriptor_complete(struct txx9dmac_chan *dc, 404 404 struct txx9dmac_desc *desc) 405 405 { 406 - dma_async_tx_callback callback; 407 - void *param; 406 + struct dmaengine_desc_callback cb; 408 407 struct dma_async_tx_descriptor *txd = &desc->txd; 409 408 410 409 dev_vdbg(chan2dev(&dc->chan), "descriptor %u %p complete\n", 411 410 txd->cookie, desc); 412 411 413 412 dma_cookie_complete(txd); 414 - callback = txd->callback; 415 - param = txd->callback_param; 413 + dmaengine_desc_get_callback(txd, &cb); 416 414 417 415 txx9dmac_sync_desc_for_cpu(dc, desc); 418 416 list_splice_init(&desc->tx_list, &dc->free_list); ··· 421 423 * The API requires that no submissions are done from a 422 424 * callback, so we don't need to drop the lock here 423 425 */ 424 - if (callback) 425 - callback(param); 426 + dmaengine_desc_callback_invoke(&cb, NULL); 426 427 dma_run_dependencies(txd); 427 428 } 428 429
+7 -10
drivers/dma/virt-dma.c
··· 87 87 { 88 88 struct virt_dma_chan *vc = (struct virt_dma_chan *)arg; 89 89 struct virt_dma_desc *vd; 90 - dma_async_tx_callback cb = NULL; 91 - void *cb_data = NULL; 90 + struct dmaengine_desc_callback cb; 92 91 LIST_HEAD(head); 93 92 94 93 spin_lock_irq(&vc->lock); ··· 95 96 vd = vc->cyclic; 96 97 if (vd) { 97 98 vc->cyclic = NULL; 98 - cb = vd->tx.callback; 99 - cb_data = vd->tx.callback_param; 99 + dmaengine_desc_get_callback(&vd->tx, &cb); 100 + } else { 101 + memset(&cb, 0, sizeof(cb)); 100 102 } 101 103 spin_unlock_irq(&vc->lock); 102 104 103 - if (cb) 104 - cb(cb_data); 105 + dmaengine_desc_callback_invoke(&cb, NULL); 105 106 106 107 while (!list_empty(&head)) { 107 108 vd = list_first_entry(&head, struct virt_dma_desc, node); 108 - cb = vd->tx.callback; 109 - cb_data = vd->tx.callback_param; 109 + dmaengine_desc_get_callback(&vd->tx, &cb); 110 110 111 111 list_del(&vd->node); 112 112 if (dmaengine_desc_test_reuse(&vd->tx)) ··· 113 115 else 114 116 vc->desc_free(vd); 115 117 116 - if (cb) 117 - cb(cb_data); 118 + dmaengine_desc_callback_invoke(&cb, NULL); 118 119 } 119 120 } 120 121
+4 -6
drivers/dma/virt-dma.h
··· 45 45 void vchan_dma_desc_free_list(struct virt_dma_chan *vc, struct list_head *head); 46 46 void vchan_init(struct virt_dma_chan *vc, struct dma_device *dmadev); 47 47 struct virt_dma_desc *vchan_find_desc(struct virt_dma_chan *, dma_cookie_t); 48 + extern dma_cookie_t vchan_tx_submit(struct dma_async_tx_descriptor *); 49 + extern int vchan_tx_desc_free(struct dma_async_tx_descriptor *); 48 50 49 51 /** 50 52 * vchan_tx_prep - prepare a descriptor ··· 57 55 static inline struct dma_async_tx_descriptor *vchan_tx_prep(struct virt_dma_chan *vc, 58 56 struct virt_dma_desc *vd, unsigned long tx_flags) 59 57 { 60 - extern dma_cookie_t vchan_tx_submit(struct dma_async_tx_descriptor *); 61 - extern int vchan_tx_desc_free(struct dma_async_tx_descriptor *); 62 58 unsigned long flags; 63 59 64 60 dma_async_tx_descriptor_init(&vd->tx, &vc->chan); ··· 123 123 */ 124 124 static inline struct virt_dma_desc *vchan_next_desc(struct virt_dma_chan *vc) 125 125 { 126 - if (list_empty(&vc->desc_issued)) 127 - return NULL; 128 - 129 - return list_first_entry(&vc->desc_issued, struct virt_dma_desc, node); 126 + return list_first_entry_or_null(&vc->desc_issued, 127 + struct virt_dma_desc, node); 130 128 } 131 129 132 130 /**
+2 -4
drivers/dma/xgene-dma.c
··· 606 606 return; 607 607 608 608 dma_cookie_complete(tx); 609 + dma_descriptor_unmap(tx); 609 610 610 611 /* Run the link descriptor callback function */ 611 - if (tx->callback) 612 - tx->callback(tx->callback_param); 613 - 614 - dma_descriptor_unmap(tx); 612 + dmaengine_desc_get_callback_invoke(tx, NULL); 615 613 616 614 /* Run any dependencies */ 617 615 dma_run_dependencies(tx);
+4 -6
drivers/dma/xilinx/xilinx_dma.c
··· 755 755 spin_lock_irqsave(&chan->lock, flags); 756 756 757 757 list_for_each_entry_safe(desc, next, &chan->done_list, node) { 758 - dma_async_tx_callback callback; 759 - void *callback_param; 758 + struct dmaengine_desc_callback cb; 760 759 761 760 if (desc->cyclic) { 762 761 xilinx_dma_chan_handle_cyclic(chan, desc, &flags); ··· 766 767 list_del(&desc->node); 767 768 768 769 /* Run the link descriptor callback function */ 769 - callback = desc->async_tx.callback; 770 - callback_param = desc->async_tx.callback_param; 771 - if (callback) { 770 + dmaengine_desc_get_callback(&desc->async_tx, &cb); 771 + if (dmaengine_desc_callback_valid(&cb)) { 772 772 spin_unlock_irqrestore(&chan->lock, flags); 773 - callback(callback_param); 773 + dmaengine_desc_callback_invoke(&cb, NULL); 774 774 spin_lock_irqsave(&chan->lock, flags); 775 775 } 776 776
+150 -43
drivers/ntb/ntb_transport.c
··· 102 102 void *buf; 103 103 unsigned int len; 104 104 unsigned int flags; 105 + int retries; 106 + int errors; 107 + unsigned int tx_index; 108 + unsigned int rx_index; 105 109 106 110 struct ntb_transport_qp *qp; 107 111 union { 108 112 struct ntb_payload_header __iomem *tx_hdr; 109 113 struct ntb_payload_header *rx_hdr; 110 114 }; 111 - unsigned int index; 112 115 }; 113 116 114 117 struct ntb_rx_info { ··· 262 259 static void ntb_transport_rxc_db(unsigned long data); 263 260 static const struct ntb_ctx_ops ntb_transport_ops; 264 261 static struct ntb_client ntb_transport_client; 262 + static int ntb_async_tx_submit(struct ntb_transport_qp *qp, 263 + struct ntb_queue_entry *entry); 264 + static void ntb_memcpy_tx(struct ntb_queue_entry *entry, void __iomem *offset); 265 + static int ntb_async_rx_submit(struct ntb_queue_entry *entry, void *offset); 266 + static void ntb_memcpy_rx(struct ntb_queue_entry *entry, void *offset); 267 + 265 268 266 269 static int ntb_transport_bus_match(struct device *dev, 267 270 struct device_driver *drv) ··· 1238 1229 break; 1239 1230 1240 1231 entry->rx_hdr->flags = 0; 1241 - iowrite32(entry->index, &qp->rx_info->entry); 1232 + iowrite32(entry->rx_index, &qp->rx_info->entry); 1242 1233 1243 1234 cb_data = entry->cb_data; 1244 1235 len = entry->len; ··· 1256 1247 spin_unlock_irqrestore(&qp->ntb_rx_q_lock, irqflags); 1257 1248 } 1258 1249 1259 - static void ntb_rx_copy_callback(void *data) 1250 + static void ntb_rx_copy_callback(void *data, 1251 + const struct dmaengine_result *res) 1260 1252 { 1261 1253 struct ntb_queue_entry *entry = data; 1254 + 1255 + /* we need to check DMA results if we are using DMA */ 1256 + if (res) { 1257 + enum dmaengine_tx_result dma_err = res->result; 1258 + 1259 + switch (dma_err) { 1260 + case DMA_TRANS_READ_FAILED: 1261 + case DMA_TRANS_WRITE_FAILED: 1262 + entry->errors++; 1263 + case DMA_TRANS_ABORTED: 1264 + { 1265 + struct ntb_transport_qp *qp = entry->qp; 1266 + void *offset = qp->rx_buff + qp->rx_max_frame * 1267 + qp->rx_index; 1268 + 1269 + ntb_memcpy_rx(entry, offset); 1270 + qp->rx_memcpy++; 1271 + return; 1272 + } 1273 + 1274 + case DMA_TRANS_NOERROR: 1275 + default: 1276 + break; 1277 + } 1278 + } 1262 1279 1263 1280 entry->flags |= DESC_DONE_FLAG; 1264 1281 ··· 1301 1266 /* Ensure that the data is fully copied out before clearing the flag */ 1302 1267 wmb(); 1303 1268 1304 - ntb_rx_copy_callback(entry); 1269 + ntb_rx_copy_callback(entry, NULL); 1305 1270 } 1306 1271 1307 - static void ntb_async_rx(struct ntb_queue_entry *entry, void *offset) 1272 + static int ntb_async_rx_submit(struct ntb_queue_entry *entry, void *offset) 1308 1273 { 1309 1274 struct dma_async_tx_descriptor *txd; 1310 1275 struct ntb_transport_qp *qp = entry->qp; ··· 1317 1282 int retries = 0; 1318 1283 1319 1284 len = entry->len; 1320 - 1321 - if (!chan) 1322 - goto err; 1323 - 1324 - if (len < copy_bytes) 1325 - goto err; 1326 - 1327 1285 device = chan->device; 1328 1286 pay_off = (size_t)offset & ~PAGE_MASK; 1329 1287 buff_off = (size_t)buf & ~PAGE_MASK; ··· 1344 1316 unmap->from_cnt = 1; 1345 1317 1346 1318 for (retries = 0; retries < DMA_RETRIES; retries++) { 1347 - txd = device->device_prep_dma_memcpy(chan, unmap->addr[1], 1319 + txd = device->device_prep_dma_memcpy(chan, 1320 + unmap->addr[1], 1348 1321 unmap->addr[0], len, 1349 1322 DMA_PREP_INTERRUPT); 1350 1323 if (txd) ··· 1360 1331 goto err_get_unmap; 1361 1332 } 1362 1333 1363 - txd->callback = ntb_rx_copy_callback; 1334 + txd->callback_result = ntb_rx_copy_callback; 1364 1335 
txd->callback_param = entry; 1365 1336 dma_set_unmap(txd, unmap); 1366 1337 ··· 1374 1345 1375 1346 qp->rx_async++; 1376 1347 1377 - return; 1348 + return 0; 1378 1349 1379 1350 err_set_unmap: 1380 1351 dmaengine_unmap_put(unmap); 1381 1352 err_get_unmap: 1382 1353 dmaengine_unmap_put(unmap); 1354 + err: 1355 + return -ENXIO; 1356 + } 1357 + 1358 + static void ntb_async_rx(struct ntb_queue_entry *entry, void *offset) 1359 + { 1360 + struct ntb_transport_qp *qp = entry->qp; 1361 + struct dma_chan *chan = qp->rx_dma_chan; 1362 + int res; 1363 + 1364 + if (!chan) 1365 + goto err; 1366 + 1367 + if (entry->len < copy_bytes) 1368 + goto err; 1369 + 1370 + res = ntb_async_rx_submit(entry, offset); 1371 + if (res < 0) 1372 + goto err; 1373 + 1374 + if (!entry->retries) 1375 + qp->rx_async++; 1376 + 1377 + return; 1378 + 1383 1379 err: 1384 1380 ntb_memcpy_rx(entry, offset); 1385 1381 qp->rx_memcpy++; ··· 1451 1397 } 1452 1398 1453 1399 entry->rx_hdr = hdr; 1454 - entry->index = qp->rx_index; 1400 + entry->rx_index = qp->rx_index; 1455 1401 1456 1402 if (hdr->len > entry->len) { 1457 1403 dev_dbg(&qp->ndev->pdev->dev, ··· 1521 1467 } 1522 1468 } 1523 1469 1524 - static void ntb_tx_copy_callback(void *data) 1470 + static void ntb_tx_copy_callback(void *data, 1471 + const struct dmaengine_result *res) 1525 1472 { 1526 1473 struct ntb_queue_entry *entry = data; 1527 1474 struct ntb_transport_qp *qp = entry->qp; 1528 1475 struct ntb_payload_header __iomem *hdr = entry->tx_hdr; 1476 + 1477 + /* we need to check DMA results if we are using DMA */ 1478 + if (res) { 1479 + enum dmaengine_tx_result dma_err = res->result; 1480 + 1481 + switch (dma_err) { 1482 + case DMA_TRANS_READ_FAILED: 1483 + case DMA_TRANS_WRITE_FAILED: 1484 + entry->errors++; 1485 + case DMA_TRANS_ABORTED: 1486 + { 1487 + void __iomem *offset = 1488 + qp->tx_mw + qp->tx_max_frame * 1489 + entry->tx_index; 1490 + 1491 + /* resubmit via CPU */ 1492 + ntb_memcpy_tx(entry, offset); 1493 + qp->tx_memcpy++; 1494 + return; 1495 + } 1496 + 1497 + case DMA_TRANS_NOERROR: 1498 + default: 1499 + break; 1500 + } 1501 + } 1529 1502 1530 1503 iowrite32(entry->flags | DESC_DONE_FLAG, &hdr->flags); 1531 1504 ··· 1588 1507 /* Ensure that the data is fully copied out before setting the flags */ 1589 1508 wmb(); 1590 1509 1591 - ntb_tx_copy_callback(entry); 1510 + ntb_tx_copy_callback(entry, NULL); 1592 1511 } 1593 1512 1594 - static void ntb_async_tx(struct ntb_transport_qp *qp, 1595 - struct ntb_queue_entry *entry) 1513 + static int ntb_async_tx_submit(struct ntb_transport_qp *qp, 1514 + struct ntb_queue_entry *entry) 1596 1515 { 1597 - struct ntb_payload_header __iomem *hdr; 1598 1516 struct dma_async_tx_descriptor *txd; 1599 1517 struct dma_chan *chan = qp->tx_dma_chan; 1600 1518 struct dma_device *device; 1519 + size_t len = entry->len; 1520 + void *buf = entry->buf; 1601 1521 size_t dest_off, buff_off; 1602 1522 struct dmaengine_unmap_data *unmap; 1603 1523 dma_addr_t dest; 1604 1524 dma_cookie_t cookie; 1605 - void __iomem *offset; 1606 - size_t len = entry->len; 1607 - void *buf = entry->buf; 1608 1525 int retries = 0; 1609 1526 1610 - offset = qp->tx_mw + qp->tx_max_frame * qp->tx_index; 1611 - hdr = offset + qp->tx_max_frame - sizeof(struct ntb_payload_header); 1612 - entry->tx_hdr = hdr; 1613 - 1614 - iowrite32(entry->len, &hdr->len); 1615 - iowrite32((u32)qp->tx_pkts, &hdr->ver); 1616 - 1617 - if (!chan) 1618 - goto err; 1619 - 1620 - if (len < copy_bytes) 1621 - goto err; 1622 - 1623 1527 device = chan->device; 1624 - dest = qp->tx_mw_phys + 
qp->tx_max_frame * qp->tx_index; 1528 + dest = qp->tx_mw_phys + qp->tx_max_frame * entry->tx_index; 1625 1529 buff_off = (size_t)buf & ~PAGE_MASK; 1626 1530 dest_off = (size_t)dest & ~PAGE_MASK; 1627 1531 ··· 1626 1560 unmap->to_cnt = 1; 1627 1561 1628 1562 for (retries = 0; retries < DMA_RETRIES; retries++) { 1629 - txd = device->device_prep_dma_memcpy(chan, dest, unmap->addr[0], 1630 - len, DMA_PREP_INTERRUPT); 1563 + txd = device->device_prep_dma_memcpy(chan, dest, 1564 + unmap->addr[0], len, 1565 + DMA_PREP_INTERRUPT); 1631 1566 if (txd) 1632 1567 break; 1633 1568 ··· 1641 1574 goto err_get_unmap; 1642 1575 } 1643 1576 1644 - txd->callback = ntb_tx_copy_callback; 1577 + txd->callback_result = ntb_tx_copy_callback; 1645 1578 txd->callback_param = entry; 1646 1579 dma_set_unmap(txd, unmap); 1647 1580 ··· 1652 1585 dmaengine_unmap_put(unmap); 1653 1586 1654 1587 dma_async_issue_pending(chan); 1655 - qp->tx_async++; 1656 1588 1657 - return; 1589 + return 0; 1658 1590 err_set_unmap: 1659 1591 dmaengine_unmap_put(unmap); 1660 1592 err_get_unmap: 1661 1593 dmaengine_unmap_put(unmap); 1594 + err: 1595 + return -ENXIO; 1596 + } 1597 + 1598 + static void ntb_async_tx(struct ntb_transport_qp *qp, 1599 + struct ntb_queue_entry *entry) 1600 + { 1601 + struct ntb_payload_header __iomem *hdr; 1602 + struct dma_chan *chan = qp->tx_dma_chan; 1603 + void __iomem *offset; 1604 + int res; 1605 + 1606 + entry->tx_index = qp->tx_index; 1607 + offset = qp->tx_mw + qp->tx_max_frame * entry->tx_index; 1608 + hdr = offset + qp->tx_max_frame - sizeof(struct ntb_payload_header); 1609 + entry->tx_hdr = hdr; 1610 + 1611 + iowrite32(entry->len, &hdr->len); 1612 + iowrite32((u32)qp->tx_pkts, &hdr->ver); 1613 + 1614 + if (!chan) 1615 + goto err; 1616 + 1617 + if (entry->len < copy_bytes) 1618 + goto err; 1619 + 1620 + res = ntb_async_tx_submit(qp, entry); 1621 + if (res < 0) 1622 + goto err; 1623 + 1624 + if (!entry->retries) 1625 + qp->tx_async++; 1626 + 1627 + return; 1628 + 1662 1629 err: 1663 1630 ntb_memcpy_tx(entry, offset); 1664 1631 qp->tx_memcpy++; ··· 2029 1928 entry->buf = data; 2030 1929 entry->len = len; 2031 1930 entry->flags = 0; 1931 + entry->retries = 0; 1932 + entry->errors = 0; 1933 + entry->rx_index = 0; 2032 1934 2033 1935 ntb_list_add(&qp->ntb_rx_q_lock, &entry->entry, &qp->rx_pend_q); 2034 1936 ··· 2074 1970 entry->buf = data; 2075 1971 entry->len = len; 2076 1972 entry->flags = 0; 1973 + entry->errors = 0; 1974 + entry->retries = 0; 1975 + entry->tx_index = 0; 2077 1976 2078 1977 rc = ntb_process_tx(qp, entry); 2079 1978 if (rc)
+19
include/linux/dma-debug.h
··· 56 56 extern void debug_dma_free_coherent(struct device *dev, size_t size, 57 57 void *virt, dma_addr_t addr); 58 58 59 + extern void debug_dma_map_resource(struct device *dev, phys_addr_t addr, 60 + size_t size, int direction, 61 + dma_addr_t dma_addr); 62 + 63 + extern void debug_dma_unmap_resource(struct device *dev, dma_addr_t dma_addr, 64 + size_t size, int direction); 65 + 59 66 extern void debug_dma_sync_single_for_cpu(struct device *dev, 60 67 dma_addr_t dma_handle, size_t size, 61 68 int direction); ··· 145 138 146 139 static inline void debug_dma_free_coherent(struct device *dev, size_t size, 147 140 void *virt, dma_addr_t addr) 141 + { 142 + } 143 + 144 + static inline void debug_dma_map_resource(struct device *dev, phys_addr_t addr, 145 + size_t size, int direction, 146 + dma_addr_t dma_addr) 147 + { 148 + } 149 + 150 + static inline void debug_dma_unmap_resource(struct device *dev, 151 + dma_addr_t dma_addr, size_t size, 152 + int direction) 148 153 { 149 154 } 150 155
+41
include/linux/dma-mapping.h
··· 95 95 struct scatterlist *sg, int nents, 96 96 enum dma_data_direction dir, 97 97 unsigned long attrs); 98 + dma_addr_t (*map_resource)(struct device *dev, phys_addr_t phys_addr, 99 + size_t size, enum dma_data_direction dir, 100 + unsigned long attrs); 101 + void (*unmap_resource)(struct device *dev, dma_addr_t dma_handle, 102 + size_t size, enum dma_data_direction dir, 103 + unsigned long attrs); 98 104 void (*sync_single_for_cpu)(struct device *dev, 99 105 dma_addr_t dma_handle, size_t size, 100 106 enum dma_data_direction dir); ··· 262 256 if (ops->unmap_page) 263 257 ops->unmap_page(dev, addr, size, dir, 0); 264 258 debug_dma_unmap_page(dev, addr, size, dir, false); 259 + } 260 + 261 + static inline dma_addr_t dma_map_resource(struct device *dev, 262 + phys_addr_t phys_addr, 263 + size_t size, 264 + enum dma_data_direction dir, 265 + unsigned long attrs) 266 + { 267 + struct dma_map_ops *ops = get_dma_ops(dev); 268 + dma_addr_t addr; 269 + 270 + BUG_ON(!valid_dma_direction(dir)); 271 + 272 + /* Don't allow RAM to be mapped */ 273 + BUG_ON(pfn_valid(PHYS_PFN(phys_addr))); 274 + 275 + addr = phys_addr; 276 + if (ops->map_resource) 277 + addr = ops->map_resource(dev, phys_addr, size, dir, attrs); 278 + 279 + debug_dma_map_resource(dev, phys_addr, size, dir, addr); 280 + 281 + return addr; 282 + } 283 + 284 + static inline void dma_unmap_resource(struct device *dev, dma_addr_t addr, 285 + size_t size, enum dma_data_direction dir, 286 + unsigned long attrs) 287 + { 288 + struct dma_map_ops *ops = get_dma_ops(dev); 289 + 290 + BUG_ON(!valid_dma_direction(dir)); 291 + if (ops->unmap_resource) 292 + ops->unmap_resource(dev, addr, size, dir, attrs); 293 + debug_dma_unmap_resource(dev, addr, size, dir); 265 294 } 266 295 267 296 static inline void dma_sync_single_for_cpu(struct device *dev, dma_addr_t addr,
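A minimal usage sketch for the resource-mapping helpers added above, assuming a hypothetical driver that maps a device FIFO located at fifo_phys; only MMIO regions may be mapped this way, and the returned address must be checked with dma_mapping_error().

    #include <linux/device.h>
    #include <linux/dma-mapping.h>

    static int example_map_fifo(struct device *dev, phys_addr_t fifo_phys,
    			    size_t len, dma_addr_t *dma)
    {
    	*dma = dma_map_resource(dev, fifo_phys, len, DMA_BIDIRECTIONAL, 0);
    	if (dma_mapping_error(dev, *dma))
    		return -EIO;
    	return 0;
    }

    static void example_unmap_fifo(struct device *dev, dma_addr_t dma,
    			       size_t len)
    {
    	dma_unmap_resource(dev, dma, len, DMA_BIDIRECTIONAL, 0);
    }

The rcar-dmac hunk earlier in this series uses the same calls to map a slave device's register window once per configuration and cache the result in its rcar_dmac_chan_map.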
+16
include/linux/dmaengine.h
··· 441 441 442 442 typedef void (*dma_async_tx_callback)(void *dma_async_param); 443 443 444 + enum dmaengine_tx_result { 445 + DMA_TRANS_NOERROR = 0, /* SUCCESS */ 446 + DMA_TRANS_READ_FAILED, /* Source DMA read failed */ 447 + DMA_TRANS_WRITE_FAILED, /* Destination DMA write failed */ 448 + DMA_TRANS_ABORTED, /* Op never submitted / aborted */ 449 + }; 450 + 451 + struct dmaengine_result { 452 + enum dmaengine_tx_result result; 453 + u32 residue; 454 + }; 455 + 456 + typedef void (*dma_async_tx_callback_result)(void *dma_async_param, 457 + const struct dmaengine_result *result); 458 + 444 459 struct dmaengine_unmap_data { 445 460 u8 map_cnt; 446 461 u8 to_cnt; ··· 493 478 dma_cookie_t (*tx_submit)(struct dma_async_tx_descriptor *tx); 494 479 int (*desc_free)(struct dma_async_tx_descriptor *tx); 495 480 dma_async_tx_callback callback; 481 + dma_async_tx_callback_result callback_result; 496 482 void *callback_param; 497 483 struct dmaengine_unmap_data *unmap; 498 484 #ifdef CONFIG_ASYNC_TX_ENABLE_CHANNEL_SWITCH
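A sketch of how a dmaengine client might consume the new callback_result reporting added above; the completion-based plumbing and function names here are hypothetical, and a NULL result is treated as "no error information available".

    #include <linux/kernel.h>
    #include <linux/completion.h>
    #include <linux/dmaengine.h>

    static void example_dma_done(void *param, const struct dmaengine_result *res)
    {
    	struct completion *done = param;

    	/* Providers that support result reporting pass a dmaengine_result. */
    	if (res && res->result != DMA_TRANS_NOERROR)
    		pr_warn("dma transfer failed: result %d residue %u\n",
    			res->result, res->residue);

    	complete(done);
    }

    static int example_submit(struct dma_async_tx_descriptor *txd,
    			  struct completion *done)
    {
    	txd->callback_result = example_dma_done;
    	txd->callback_param = done;

    	return dma_submit_error(dmaengine_submit(txd));
    }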
+16 -2
include/linux/mbus.h
··· 11 11 #ifndef __LINUX_MBUS_H 12 12 #define __LINUX_MBUS_H 13 13 14 + #include <linux/errno.h> 15 + 14 16 struct resource; 15 17 16 18 struct mbus_dram_target_info ··· 57 55 #ifdef CONFIG_PLAT_ORION 58 56 extern const struct mbus_dram_target_info *mv_mbus_dram_info(void); 59 57 extern const struct mbus_dram_target_info *mv_mbus_dram_info_nooverlap(void); 58 + int mvebu_mbus_get_io_win_info(phys_addr_t phyaddr, u32 *size, u8 *target, 59 + u8 *attr); 60 60 #else 61 61 static inline const struct mbus_dram_target_info *mv_mbus_dram_info(void) 62 62 { ··· 68 64 { 69 65 return NULL; 70 66 } 67 + static inline int mvebu_mbus_get_io_win_info(phys_addr_t phyaddr, u32 *size, 68 + u8 *target, u8 *attr) 69 + { 70 + /* 71 + * On all ARM32 MVEBU platforms with MBus support, this stub 72 + * function will not get called. The real function from the 73 + * MBus driver is called instead. ARM64 MVEBU platforms like 74 + * the Armada 3700 could use the mv_xor device driver which calls 75 + * into this function 76 + */ 77 + return -EINVAL; 78 + } 71 79 #endif 72 80 73 81 int mvebu_mbus_save_cpu_target(u32 __iomem *store_addr); 74 82 void mvebu_mbus_get_pcie_mem_aperture(struct resource *res); 75 83 void mvebu_mbus_get_pcie_io_aperture(struct resource *res); 76 84 int mvebu_mbus_get_dram_win_info(phys_addr_t phyaddr, u8 *target, u8 *attr); 77 - int mvebu_mbus_get_io_win_info(phys_addr_t phyaddr, u32 *size, u8 *target, 78 - u8 *attr); 79 85 int mvebu_mbus_add_window_remap_by_id(unsigned int target, 80 86 unsigned int attribute, 81 87 phys_addr_t base, size_t size,
+19
include/linux/omap-dma.h
··· 297 297 #define dma_omap15xx() __dma_omap15xx(d) 298 298 #define dma_omap16xx() __dma_omap16xx(d) 299 299 300 + #if defined(CONFIG_ARCH_OMAP) 300 301 extern struct omap_system_dma_plat_info *omap_get_plat_info(void); 301 302 302 303 extern void omap_set_dma_priority(int lch, int dst_port, int priority); ··· 355 354 return 0; 356 355 } 357 356 #endif 357 + 358 + #else /* CONFIG_ARCH_OMAP */ 359 + 360 + static inline struct omap_system_dma_plat_info *omap_get_plat_info(void) 361 + { 362 + return NULL; 363 + } 364 + 365 + static inline int omap_request_dma(int dev_id, const char *dev_name, 366 + void (*callback)(int lch, u16 ch_status, void *data), 367 + void *data, int *dma_ch) 368 + { 369 + return -ENODEV; 370 + } 371 + 372 + static inline void omap_free_dma(int ch) { } 373 + 374 + #endif /* CONFIG_ARCH_OMAP */ 358 375 359 376 #endif /* __LINUX_OMAP_DMA_H */
+1 -1
include/linux/platform_data/dma-mmp_tdma.h
··· 28 28 int granularity; 29 29 }; 30 30 31 - #ifdef CONFIG_ARM 31 + #ifdef CONFIG_MMP_SRAM 32 32 extern struct gen_pool *sram_get_gpool(char *pool_name); 33 33 #else 34 34 static inline struct gen_pool *sram_get_gpool(char *pool_name)
+6
include/linux/platform_data/dma-s3c24xx.h
··· 30 30 u16 chansel; 31 31 }; 32 32 33 + struct dma_slave_map; 34 + 33 35 /** 34 36 * struct s3c24xx_dma_platdata - platform specific settings 35 37 * @num_phy_channels: number of physical channels 36 38 * @channels: array of virtual channel descriptions 37 39 * @num_channels: number of virtual channels 40 + * @slave_map: dma slave map matching table 41 + * @slavecnt: number of elements in slave_map 38 42 */ 39 43 struct s3c24xx_dma_platdata { 40 44 int num_phy_channels; 41 45 struct s3c24xx_dma_channel *channels; 42 46 int num_channels; 47 + const struct dma_slave_map *slave_map; 48 + int slavecnt; 43 49 }; 44 50 45 51 struct dma_chan;
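A hedged sketch of how s3c24xx board code could populate the new slave_map/slavecnt fields; the device names and the EXAMPLE_DMACH_* request numbers are invented for illustration and do not correspond to real platform definitions.

    #include <linux/kernel.h>
    #include <linux/dmaengine.h>
    #include <linux/platform_data/dma-s3c24xx.h>

    #define EXAMPLE_DMACH_UART0_TX	1	/* hypothetical request numbers */
    #define EXAMPLE_DMACH_UART0_RX	2

    static const struct dma_slave_map example_dma_slave_map[] = {
    	{ "example-uart.0", "tx", (void *)EXAMPLE_DMACH_UART0_TX },
    	{ "example-uart.0", "rx", (void *)EXAMPLE_DMACH_UART0_RX },
    };

    static struct s3c24xx_dma_platdata example_dma_platdata = {
    	/* .num_phy_channels, .channels, .num_channels as before */
    	.slave_map = example_dma_slave_map,
    	.slavecnt  = ARRAY_SIZE(example_dma_slave_map),
    };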
+50 -2
lib/dma-debug.c
··· 44 44 dma_debug_page, 45 45 dma_debug_sg, 46 46 dma_debug_coherent, 47 + dma_debug_resource, 47 48 }; 48 49 49 50 enum map_err_types { ··· 152 151 [MAP_ERR_CHECKED] = "dma map error checked", 153 152 }; 154 153 155 - static const char *type2name[4] = { "single", "page", 156 - "scather-gather", "coherent" }; 154 + static const char *type2name[5] = { "single", "page", 155 + "scather-gather", "coherent", 156 + "resource" }; 157 157 158 158 static const char *dir2name[4] = { "DMA_BIDIRECTIONAL", "DMA_TO_DEVICE", 159 159 "DMA_FROM_DEVICE", "DMA_NONE" }; ··· 402 400 403 401 static unsigned long long phys_addr(struct dma_debug_entry *entry) 404 402 { 403 + if (entry->type == dma_debug_resource) 404 + return __pfn_to_phys(entry->pfn) + entry->offset; 405 + 405 406 return page_to_phys(pfn_to_page(entry->pfn)) + entry->offset; 406 407 } 407 408 ··· 1523 1518 check_unmap(&ref); 1524 1519 } 1525 1520 EXPORT_SYMBOL(debug_dma_free_coherent); 1521 + 1522 + void debug_dma_map_resource(struct device *dev, phys_addr_t addr, size_t size, 1523 + int direction, dma_addr_t dma_addr) 1524 + { 1525 + struct dma_debug_entry *entry; 1526 + 1527 + if (unlikely(dma_debug_disabled())) 1528 + return; 1529 + 1530 + entry = dma_entry_alloc(); 1531 + if (!entry) 1532 + return; 1533 + 1534 + entry->type = dma_debug_resource; 1535 + entry->dev = dev; 1536 + entry->pfn = PHYS_PFN(addr); 1537 + entry->offset = offset_in_page(addr); 1538 + entry->size = size; 1539 + entry->dev_addr = dma_addr; 1540 + entry->direction = direction; 1541 + entry->map_err_type = MAP_ERR_NOT_CHECKED; 1542 + 1543 + add_dma_entry(entry); 1544 + } 1545 + EXPORT_SYMBOL(debug_dma_map_resource); 1546 + 1547 + void debug_dma_unmap_resource(struct device *dev, dma_addr_t dma_addr, 1548 + size_t size, int direction) 1549 + { 1550 + struct dma_debug_entry ref = { 1551 + .type = dma_debug_resource, 1552 + .dev = dev, 1553 + .dev_addr = dma_addr, 1554 + .size = size, 1555 + .direction = direction, 1556 + }; 1557 + 1558 + if (unlikely(dma_debug_disabled())) 1559 + return; 1560 + 1561 + check_unmap(&ref); 1562 + } 1563 + EXPORT_SYMBOL(debug_dma_unmap_resource); 1526 1564 1527 1565 void debug_dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, 1528 1566 size_t size, int direction)