Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'dmaengine-4.5-rc1' of git://git.infradead.org/users/vkoul/slave-dma

Pull dmaengine updates from Vinod Koul:
"This round we have a few new features, a new driver and updates to a
few drivers.

The new features to dmaengine core are:
- Synchronized transfer termination API to terminate dmaengine
transfers in either a synchronized or an asynchronous fashion, as
required by users. It now has users in the ALSA dmaengine lib and the
img, at_xdmac and axi_dmac drivers.
- Universal API for channel request and the start of consolidating
request flows. Its user is the omap-dma driver.
- Introduce reuse of descriptors, with its first user in the pxa_dma driver

Add/Remove:
- New STM32 DMA driver
- Removal of unused R-Car HPB-DMAC driver

Updates:
- ti-dma-crossbar updates for supporting eDMA
- tegra-apb pm updates
- idma64 updates
- mv_xor updates
- ste_dma updates"

* tag 'dmaengine-4.5-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (54 commits)
dmaengine: mv_xor: add suspend/resume support
dmaengine: mv_xor: de-duplicate mv_chan_set_mode*()
dmaengine: mv_xor: remove mv_xor_chan->current_type field
dmaengine: omap-dma: Add support for DMA filter mapping to slave devices
dmaengine: edma: Add support for DMA filter mapping to slave devices
dmaengine: core: Introduce new, universal API to request a channel
dmaengine: core: Move and merge the code paths using private_candidate
dmaengine: core: Skip mask matching when it is not provided to private_candidate
dmaengine: mdc: Correct terminate_all handling
dmaengine: edma: Add probe callback to edma_tptc_driver
dmaengine: dw: fix potential memory leak in dw_dma_parse_dt()
dmaengine: stm32-dma: Fix unchecked deference of chan->desc
dmaengine: sh: Remove unused R-Car HPB-DMAC driver
dmaengine: usb-dmac: Document SoC specific compatibility strings
ste_dma40: Delete an unnecessary variable initialisation in d40_probe()
ste_dma40: Delete another unnecessary check in d40_probe()
ste_dma40: Delete an unnecessary check before the function call "kmem_cache_destroy"
dmaengine: tegra-apb: Free interrupts before killing tasklets
dmaengine: tegra-apb: Update driver to use GFP_NOWAIT
dmaengine: tegra-apb: Only save channel state for those in use
...

+2090 -1183
+8 -2
Documentation/devicetree/bindings/dma/renesas,usb-dmac.txt
···
 * Renesas USB DMA Controller Device Tree bindings
 
 Required Properties:
-- compatible: must contain "renesas,usb-dmac"
+- compatible: "renesas,<soctype>-usb-dmac", "renesas,usb-dmac" as fallback.
+  Examples with soctypes are:
+    - "renesas,r8a7790-usb-dmac" (R-Car H2)
+    - "renesas,r8a7791-usb-dmac" (R-Car M2-W)
+    - "renesas,r8a7793-usb-dmac" (R-Car M2-N)
+    - "renesas,r8a7794-usb-dmac" (R-Car E2)
+    - "renesas,r8a7795-usb-dmac" (R-Car H3)
 - reg: base address and length of the registers block for the DMAC
 - interrupts: interrupt specifiers for the DMAC, one for each entry in
   interrupt-names.
···
 Example: R8A7790 (R-Car H2) USB-DMACs
 
 	usb_dmac0: dma-controller@e65a0000 {
-		compatible = "renesas,usb-dmac";
+		compatible = "renesas,r8a7790-usb-dmac", "renesas,usb-dmac";
 		reg = <0 0xe65a0000 0 0x100>;
 		interrupts = <0 109 IRQ_TYPE_LEVEL_HIGH
 			      0 109 IRQ_TYPE_LEVEL_HIGH>;
+82
Documentation/devicetree/bindings/dma/stm32-dma.txt
* STMicroelectronics STM32 DMA controller

The STM32 DMA is a general-purpose direct memory access controller capable of
supporting 8 independent DMA channels. Each channel can have up to 8 requests.

Required properties:
- compatible: Should be "st,stm32-dma"
- reg: Should contain DMA registers location and length. This should include
  all of the per-channel registers.
- interrupts: Should contain all of the per-channel DMA interrupts in
  ascending order with respect to the DMA channel index.
- clocks: Should contain the input clock of the DMA instance.
- #dma-cells : Must be <4>. See DMA client paragraph for more details.

Optional properties:
- resets: Reference to a reset controller asserting the DMA controller
- st,mem2mem: boolean; if defined, it indicates that the controller supports
  memory-to-memory transfer

Example:

	dma2: dma-controller@40026400 {
		compatible = "st,stm32-dma";
		reg = <0x40026400 0x400>;
		interrupts = <56>,
			     <57>,
			     <58>,
			     <59>,
			     <60>,
			     <68>,
			     <69>,
			     <70>;
		clocks = <&clk_hclk>;
		#dma-cells = <4>;
		st,mem2mem;
		resets = <&rcc 150>;
	};

* DMA client

DMA clients connected to the STM32 DMA controller must use the format
described in the dma.txt file, using a five-cell specifier for each
channel: a phandle plus four integer cells.
The four cells in order are:

1. The channel id
2. The request line number
3. A 32bit mask specifying the DMA channel configuration which are device
   dependent:
   -bit 9: Peripheral Increment Address
	0x0: no address increment between transfers
	0x1: increment address between transfers
   -bit 10: Memory Increment Address
	0x0: no address increment between transfers
	0x1: increment address between transfers
   -bit 15: Peripheral Increment Offset Size
	0x0: offset size is linked to the peripheral bus width
	0x1: offset size is fixed to 4 (32-bit alignment)
   -bit 16-17: Priority level
	0x0: low
	0x1: medium
	0x2: high
	0x3: very high
4. A 32bit mask specifying the DMA FIFO threshold configuration which are device
   dependent:
   -bit 0-1: Fifo threshold
	0x0: 1/4 full FIFO
	0x1: 1/2 full FIFO
	0x2: 3/4 full FIFO
	0x3: full FIFO

Example:

	usart1: serial@40011000 {
		compatible = "st,stm32-usart", "st,stm32-uart";
		reg = <0x40011000 0x400>;
		interrupts = <37>;
		clocks = <&clk_pclk2>;
		dmas = <&dma2 2 4 0x10400 0x3>,
		       <&dma2 7 5 0x10200 0x3>;
		dma-names = "rx", "tx";
	};
+6
Documentation/devicetree/bindings/dma/ti-dma-crossbar.txt
···
 
 Optional properties:
 - ti,dma-safe-map: Safe routing value for unused request lines
+- ti,reserved-dma-request-ranges: DMA request ranges which should not be used
+  when mapping xbar input to DMA request, they are either allocated to be used
+  by for example the DSP or they are used as memcpy channels in eDMA.
 
 Notes:
 When requesting channel via ti,dra7-dma-crossbar, the DMA clinet must request
···
 	#dma-cells = <1>;
 	dma-requests = <205>;
 	ti,dma-safe-map = <0>;
+	/* Protect the sDMA request ranges: 10-14 and 100-126 */
+	ti,reserved-dma-request-ranges = <10 5>, <100 27>;
 	dma-masters = <&sdma>;
 };
+41 -18
Documentation/dmaengine/client.txt
···
 Channel allocation is slightly different in the slave DMA context,
 client drivers typically need a channel from a particular DMA
 controller only and even in some cases a specific channel is desired.
-To request a channel dma_request_channel() API is used.
+To request a channel dma_request_chan() API is used.
 
 Interface:
-	struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
-			dma_filter_fn filter_fn,
-			void *filter_param);
-where dma_filter_fn is defined as:
-	typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);
+	struct dma_chan *dma_request_chan(struct device *dev, const char *name);
 
-The 'filter_fn' parameter is optional, but highly recommended for
-slave and cyclic channels as they typically need to obtain a specific
-DMA channel.
-
-When the optional 'filter_fn' parameter is NULL, dma_request_channel()
-simply returns the first channel that satisfies the capability mask.
-
-Otherwise, the 'filter_fn' routine will be called once for each free
-channel which has a capability in 'mask'. 'filter_fn' is expected to
-return 'true' when the desired DMA channel is found.
+Which will find and return the 'name' DMA channel associated with the 'dev'
+device. The association is done via DT, ACPI or board file based
+dma_slave_map matching table.
 
 A channel allocated via this interface is exclusive to the caller,
 until dma_release_channel() is called.
···
    transaction.
 
    For cyclic DMA, a callback function may wish to terminate the
-   DMA via dmaengine_terminate_all().
+   DMA via dmaengine_terminate_async().
 
    Therefore, it is important that DMA engine drivers drop any
    locks before calling the callback function which may cause a
···
 
 Further APIs:
 
-1. int dmaengine_terminate_all(struct dma_chan *chan)
+1. int dmaengine_terminate_sync(struct dma_chan *chan)
+   int dmaengine_terminate_async(struct dma_chan *chan)
+   int dmaengine_terminate_all(struct dma_chan *chan) /* DEPRECATED */
 
    This causes all activity for the DMA channel to be stopped, and may
    discard data in the DMA FIFO which hasn't been fully transferred.
    No callback functions will be called for any incomplete transfers.
+
+   Two variants of this function are available.
+
+   dmaengine_terminate_async() might not wait until the DMA has been fully
+   stopped or until any running complete callbacks have finished. But it is
+   possible to call dmaengine_terminate_async() from atomic context or from
+   within a complete callback. dmaengine_synchronize() must be called before it
+   is safe to free the memory accessed by the DMA transfer or free resources
+   accessed from within the complete callback.
+
+   dmaengine_terminate_sync() will wait for the transfer and any running
+   complete callbacks to finish before it returns. But the function must not be
+   called from atomic context or from within a complete callback.
+
+   dmaengine_terminate_all() is deprecated and should not be used in new code.
 
 2. int dmaengine_pause(struct dma_chan *chan)
 
···
    a running DMA channel. It is recommended that DMA engine users
    pause or stop (via dmaengine_terminate_all()) the channel before
    using this API.
+
+5. void dmaengine_synchronize(struct dma_chan *chan)
+
+   Synchronize the termination of the DMA channel to the current context.
+
+   This function should be used after dmaengine_terminate_async() to synchronize
+   the termination of the DMA channel to the current context. The function will
+   wait for the transfer and any running complete callbacks to finish before it
+   returns.
+
+   If dmaengine_terminate_async() is used to stop the DMA channel this function
+   must be called before it is safe to free memory accessed by previously
+   submitted descriptors or to free any resources accessed within the complete
+   callback of previously submitted descriptors.
+
+   The behavior of this function is undefined if dma_async_issue_pending() has
+   been called between dmaengine_terminate_async() and this function.
+18 -2
Documentation/dmaengine/provider.txt
···
 
   * device_terminate_all
     - Aborts all the pending and ongoing transfers on the channel
-    - This command should operate synchronously on the channel,
-      terminating right away all the channels
+    - For aborted transfers the complete callback should not be called
+    - Can be called from atomic context or from within a complete
+      callback of a descriptor. Must not sleep. Drivers must be able
+      to handle this correctly.
+    - Termination may be asynchronous. The driver does not have to
+      wait until the currently active transfer has completely stopped.
+      See device_synchronize.
+
+  * device_synchronize
+    - Must synchronize the termination of a channel to the current
+      context.
+    - Must make sure that memory for previously submitted
+      descriptors is no longer accessed by the DMA controller.
+    - Must make sure that all complete callbacks for previously
+      submitted descriptors have finished running and none are
+      scheduled to run.
+    - May sleep.
+
 
 Misc notes (stuff that should be documented, but don't really know
 where to put them)
+2
arch/arm/configs/stm32_defconfig
···
 CONFIG_LEDS_CLASS=y
 CONFIG_LEDS_TRIGGERS=y
 CONFIG_LEDS_TRIGGER_HEARTBEAT=y
+CONFIG_DMADEVICES=y
+CONFIG_STM32_DMA=y
 # CONFIG_FILE_LOCKING is not set
 # CONFIG_DNOTIFY is not set
 # CONFIG_INOTIFY_USER is not set
+2 -1
drivers/dca/dca-core.c
···
  * @ops - pointer to struct of dca operation function pointers
  * @priv_size - size of extra mem to be added for provider's needs
  */
-struct dca_provider *alloc_dca_provider(struct dca_ops *ops, int priv_size)
+struct dca_provider *alloc_dca_provider(const struct dca_ops *ops,
+					int priv_size)
 {
 	struct dca_provider *dca;
 	int alloc_size;
+12
drivers/dma/Kconfig
···
 	help
 	  Support for ST-Ericsson DMA40 controller
 
+config STM32_DMA
+	bool "STMicroelectronics STM32 DMA support"
+	depends on ARCH_STM32
+	select DMA_ENGINE
+	select DMA_OF
+	select DMA_VIRTUAL_CHANNELS
+	help
+	  Enable support for the on-chip DMA controller on STMicroelectronics
+	  STM32 MCUs.
+	  If you have a board based on such a MCU and wish to use DMA say Y or M
+	  here.
+
 config S3C24XX_DMAC
 	tristate "Samsung S3C24XX DMA support"
 	depends on ARCH_S3C24XX
+1
drivers/dma/Makefile
···
 obj-$(CONFIG_RENESAS_DMA) += sh/
 obj-$(CONFIG_SIRF_DMA) += sirf-dma.o
 obj-$(CONFIG_STE_DMA40) += ste_dma40.o ste_dma40_ll.o
+obj-$(CONFIG_STM32_DMA) += stm32-dma.o
 obj-$(CONFIG_S3C24XX_DMAC) += s3c24xx-dma.o
 obj-$(CONFIG_TXX9_DMAC) += txx9dmac.o
 obj-$(CONFIG_TEGRA20_APB_DMA) += tegra20-apb-dma.o
+4 -1
drivers/dma/acpi-dma.c
···
 #include <linux/device.h>
 #include <linux/err.h>
 #include <linux/module.h>
+#include <linux/kernel.h>
 #include <linux/list.h>
 #include <linux/mutex.h>
 #include <linux/slab.h>
···
 	si = (const struct acpi_csrt_shared_info *)&grp[1];
 
 	/* Match device by MMIO and IRQ */
-	if (si->mmio_base_low != mem || si->gsi_interrupt != irq)
+	if (si->mmio_base_low != lower_32_bits(mem) ||
+	    si->mmio_base_high != upper_32_bits(mem) ||
+	    si->gsi_interrupt != irq)
 		return 0;
 
 	dev_dbg(&adev->dev, "matches with %.4s%04X (rev %u)\n",
+15 -5
drivers/dma/at_xdmac.c
···
 	 * access. Hopefully we can access DDR through both ports (at least on
 	 * SAMA5D4x), so we can use the same interface for source and dest,
 	 * that solves the fact we don't know the direction.
+	 * ERRATA: Even if useless for memory transfers, the PERID has to not
+	 * match the one of another channel. If not, it could lead to spurious
+	 * flag status.
 	 */
-	u32 chan_cc = AT_XDMAC_CC_DIF(0)
+	u32 chan_cc = AT_XDMAC_CC_PERID(0x3f)
+		| AT_XDMAC_CC_DIF(0)
 		| AT_XDMAC_CC_SIF(0)
 		| AT_XDMAC_CC_MBSIZE_SIXTEEN
 		| AT_XDMAC_CC_TYPE_MEM_TRAN;
···
 	 * access DDR through both ports (at least on SAMA5D4x), so we can use
 	 * the same interface for source and dest, that solves the fact we
 	 * don't know the direction.
+	 * ERRATA: Even if useless for memory transfers, the PERID has to not
+	 * match the one of another channel. If not, it could lead to spurious
+	 * flag status.
 	 */
-	u32 chan_cc = AT_XDMAC_CC_DAM_INCREMENTED_AM
+	u32 chan_cc = AT_XDMAC_CC_PERID(0x3f)
+		| AT_XDMAC_CC_DAM_INCREMENTED_AM
 		| AT_XDMAC_CC_SAM_INCREMENTED_AM
 		| AT_XDMAC_CC_DIF(0)
 		| AT_XDMAC_CC_SIF(0)
···
 	 * access. Hopefully we can access DDR through both ports (at least on
 	 * SAMA5D4x), so we can use the same interface for source and dest,
 	 * that solves the fact we don't know the direction.
+	 * ERRATA: Even if useless for memory transfers, the PERID has to not
+	 * match the one of another channel. If not, it could lead to spurious
+	 * flag status.
 	 */
-	u32 chan_cc = AT_XDMAC_CC_DAM_UBS_AM
+	u32 chan_cc = AT_XDMAC_CC_PERID(0x3f)
+		| AT_XDMAC_CC_DAM_UBS_AM
 		| AT_XDMAC_CC_SAM_INCREMENTED_AM
 		| AT_XDMAC_CC_DIF(0)
 		| AT_XDMAC_CC_SIF(0)
···
 	of_dma_controller_free(pdev->dev.of_node);
 	dma_async_device_unregister(&atxdmac->dma);
 	clk_disable_unprepare(atxdmac->clk);
-
-	synchronize_irq(atxdmac->irq);
 
 	free_irq(atxdmac->irq, atxdmac->dma.dev);
 
+8
drivers/dma/dma-axi-dmac.c
···
 	return 0;
 }
 
+static void axi_dmac_synchronize(struct dma_chan *c)
+{
+	struct axi_dmac_chan *chan = to_axi_dmac_chan(c);
+
+	vchan_synchronize(&chan->vchan);
+}
+
 static void axi_dmac_issue_pending(struct dma_chan *c)
 {
 	struct axi_dmac_chan *chan = to_axi_dmac_chan(c);
···
 	dma_dev->device_prep_dma_cyclic = axi_dmac_prep_dma_cyclic;
 	dma_dev->device_prep_interleaved_dma = axi_dmac_prep_interleaved;
 	dma_dev->device_terminate_all = axi_dmac_terminate_all;
+	dma_dev->device_synchronize = axi_dmac_synchronize;
 	dma_dev->dev = &pdev->dev;
 	dma_dev->chancnt = 1;
 	dma_dev->src_addr_widths = BIT(dmac->chan.src_width);
+125 -53
drivers/dma/dmaengine.c
···
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
+#include <linux/platform_device.h>
 #include <linux/dma-mapping.h>
 #include <linux/init.h>
 #include <linux/module.h>
···
 	module_put(dma_chan_to_owner(chan));
 
 	/* This channel is not in use anymore, free it */
-	if (!chan->client_count && chan->device->device_free_chan_resources)
+	if (!chan->client_count && chan->device->device_free_chan_resources) {
+		/* Make sure all operations have completed */
+		dmaengine_synchronize(chan);
 		chan->device->device_free_chan_resources(chan);
+	}
 
 	/* If the channel is used via a DMA request router, free the mapping */
 	if (chan->router && chan->router->route_free) {
···
 	caps->dst_addr_widths = device->dst_addr_widths;
 	caps->directions = device->directions;
 	caps->residue_granularity = device->residue_granularity;
+	caps->descriptor_reuse = device->descriptor_reuse;
 
 	/*
 	 * Some devices implement only pause (e.g. to get residuum) but no
···
 {
 	struct dma_chan *chan;
 
-	if (!__dma_device_satisfies_mask(dev, mask)) {
+	if (mask && !__dma_device_satisfies_mask(dev, mask)) {
 		pr_debug("%s: wrong capabilities\n", __func__);
 		return NULL;
 	}
···
 	}
 
 	return NULL;
+}
+
+static struct dma_chan *find_candidate(struct dma_device *device,
+				       const dma_cap_mask_t *mask,
+				       dma_filter_fn fn, void *fn_param)
+{
+	struct dma_chan *chan = private_candidate(mask, device, fn, fn_param);
+	int err;
+
+	if (chan) {
+		/* Found a suitable channel, try to grab, prep, and return it.
+		 * We first set DMA_PRIVATE to disable balance_ref_count as this
+		 * channel will not be published in the general-purpose
+		 * allocator
+		 */
+		dma_cap_set(DMA_PRIVATE, device->cap_mask);
+		device->privatecnt++;
+		err = dma_chan_get(chan);
+
+		if (err) {
+			if (err == -ENODEV) {
+				pr_debug("%s: %s module removed\n", __func__,
+					 dma_chan_name(chan));
+				list_del_rcu(&device->global_node);
+			} else
+				pr_debug("%s: failed to get %s: (%d)\n",
+					 __func__, dma_chan_name(chan), err);
+
+			if (--device->privatecnt == 0)
+				dma_cap_clear(DMA_PRIVATE, device->cap_mask);
+
+			chan = ERR_PTR(err);
+		}
+	}
+
+	return chan ? chan : ERR_PTR(-EPROBE_DEFER);
 }
 
 /**
···
 {
 	dma_cap_mask_t mask;
 	struct dma_chan *chan;
-	int err;
 
 	dma_cap_zero(mask);
 	dma_cap_set(DMA_SLAVE, mask);
 
 	/* lock against __dma_request_channel */
 	mutex_lock(&dma_list_mutex);
 
-	chan = private_candidate(&mask, device, NULL, NULL);
-	if (chan) {
-		dma_cap_set(DMA_PRIVATE, device->cap_mask);
-		device->privatecnt++;
-		err = dma_chan_get(chan);
-		if (err) {
-			pr_debug("%s: failed to get %s: (%d)\n",
-				 __func__, dma_chan_name(chan), err);
-			chan = NULL;
-			if (--device->privatecnt == 0)
-				dma_cap_clear(DMA_PRIVATE, device->cap_mask);
-		}
-	}
+	chan = find_candidate(device, &mask, NULL, NULL);
 
 	mutex_unlock(&dma_list_mutex);
 
-	return chan;
+	return IS_ERR(chan) ? NULL : chan;
 }
 EXPORT_SYMBOL_GPL(dma_get_any_slave_channel);
···
 {
 	struct dma_device *device, *_d;
 	struct dma_chan *chan = NULL;
-	int err;
 
 	/* Find a channel */
 	mutex_lock(&dma_list_mutex);
 	list_for_each_entry_safe(device, _d, &dma_device_list, global_node) {
-		chan = private_candidate(mask, device, fn, fn_param);
-		if (chan) {
-			/* Found a suitable channel, try to grab, prep, and
-			 * return it. We first set DMA_PRIVATE to disable
-			 * balance_ref_count as this channel will not be
-			 * published in the general-purpose allocator
-			 */
-			dma_cap_set(DMA_PRIVATE, device->cap_mask);
-			device->privatecnt++;
-			err = dma_chan_get(chan);
+		chan = find_candidate(device, mask, fn, fn_param);
+		if (!IS_ERR(chan))
+			break;
 
-			if (err == -ENODEV) {
-				pr_debug("%s: %s module removed\n",
-					 __func__, dma_chan_name(chan));
-				list_del_rcu(&device->global_node);
-			} else if (err)
-				pr_debug("%s: failed to get %s: (%d)\n",
-					 __func__, dma_chan_name(chan), err);
-			else
-				break;
-			if (--device->privatecnt == 0)
-				dma_cap_clear(DMA_PRIVATE, device->cap_mask);
-			chan = NULL;
-		}
+		chan = NULL;
 	}
 	mutex_unlock(&dma_list_mutex);
···
 }
 EXPORT_SYMBOL_GPL(__dma_request_channel);
 
+static const struct dma_slave_map *dma_filter_match(struct dma_device *device,
+						    const char *name,
+						    struct device *dev)
+{
+	int i;
+
+	if (!device->filter.mapcnt)
+		return NULL;
+
+	for (i = 0; i < device->filter.mapcnt; i++) {
+		const struct dma_slave_map *map = &device->filter.map[i];
+
+		if (!strcmp(map->devname, dev_name(dev)) &&
+		    !strcmp(map->slave, name))
+			return map;
+	}
+
+	return NULL;
+}
+
 /**
- * dma_request_slave_channel_reason - try to allocate an exclusive slave channel
+ * dma_request_chan - try to allocate an exclusive slave channel
  * @dev:	pointer to client device structure
  * @name:	slave channel name
  *
  * Returns pointer to appropriate DMA channel on success or an error pointer.
  */
-struct dma_chan *dma_request_slave_channel_reason(struct device *dev,
-						  const char *name)
+struct dma_chan *dma_request_chan(struct device *dev, const char *name)
 {
+	struct dma_device *d, *_d;
+	struct dma_chan *chan = NULL;
+
 	/* If device-tree is present get slave info from here */
 	if (dev->of_node)
-		return of_dma_request_slave_channel(dev->of_node, name);
+		chan = of_dma_request_slave_channel(dev->of_node, name);
 
 	/* If device was enumerated by ACPI get slave info from here */
-	if (ACPI_HANDLE(dev))
-		return acpi_dma_request_slave_chan_by_name(dev, name);
+	if (has_acpi_companion(dev) && !chan)
+		chan = acpi_dma_request_slave_chan_by_name(dev, name);
 
-	return ERR_PTR(-ENODEV);
+	if (chan) {
+		/* Valid channel found or requester need to be deferred */
+		if (!IS_ERR(chan) || PTR_ERR(chan) == -EPROBE_DEFER)
+			return chan;
+	}
+
+	/* Try to find the channel via the DMA filter map(s) */
+	mutex_lock(&dma_list_mutex);
+	list_for_each_entry_safe(d, _d, &dma_device_list, global_node) {
+		dma_cap_mask_t mask;
+		const struct dma_slave_map *map = dma_filter_match(d, name, dev);
+
+		if (!map)
+			continue;
+
+		dma_cap_zero(mask);
+		dma_cap_set(DMA_SLAVE, mask);
+
+		chan = find_candidate(d, &mask, d->filter.fn, map->param);
+		if (!IS_ERR(chan))
+			break;
+	}
+	mutex_unlock(&dma_list_mutex);
+
+	return chan ? chan : ERR_PTR(-EPROBE_DEFER);
 }
-EXPORT_SYMBOL_GPL(dma_request_slave_channel_reason);
+EXPORT_SYMBOL_GPL(dma_request_chan);
 
 /**
  * dma_request_slave_channel - try to allocate an exclusive slave channel
···
 struct dma_chan *dma_request_slave_channel(struct device *dev,
 					   const char *name)
 {
-	struct dma_chan *ch = dma_request_slave_channel_reason(dev, name);
+	struct dma_chan *ch = dma_request_chan(dev, name);
 	if (IS_ERR(ch))
 		return NULL;
-
-	dma_cap_set(DMA_PRIVATE, ch->device->cap_mask);
-	ch->device->privatecnt++;
 
 	return ch;
 }
 EXPORT_SYMBOL_GPL(dma_request_slave_channel);
+
+/**
+ * dma_request_chan_by_mask - allocate a channel satisfying certain capabilities
+ * @mask: capabilities that the channel must satisfy
+ *
+ * Returns pointer to appropriate DMA channel on success or an error pointer.
+ */
+struct dma_chan *dma_request_chan_by_mask(const dma_cap_mask_t *mask)
+{
+	struct dma_chan *chan;
+
+	if (!mask)
+		return ERR_PTR(-ENODEV);
+
+	chan = __dma_request_channel(mask, NULL, NULL);
+	if (!chan)
+		chan = ERR_PTR(-ENODEV);
+
+	return chan;
+}
+EXPORT_SYMBOL_GPL(dma_request_chan_by_mask);
 
 void dma_release_channel(struct dma_chan *chan)
 {
+5 -2
drivers/dma/dw/platform.c
···
 	struct device_node *np = pdev->dev.of_node;
 	struct dw_dma_platform_data *pdata;
 	u32 tmp, arr[DW_DMA_MAX_NR_MASTERS];
+	u32 nr_channels;
 
 	if (!np) {
 		dev_err(&pdev->dev, "Missing DT data\n");
 		return NULL;
 	}
 
+	if (of_property_read_u32(np, "dma-channels", &nr_channels))
+		return NULL;
+
 	pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL);
 	if (!pdata)
 		return NULL;
 
-	if (of_property_read_u32(np, "dma-channels", &pdata->nr_channels))
-		return NULL;
+	pdata->nr_channels = nr_channels;
 
 	if (of_property_read_bool(np, "is_private"))
 		pdata->is_private = true;
+10
drivers/dma/edma.c
···
 		edma_set_chmap(&ecc->slave_chans[i], ecc->dummy_slot);
 	}
 
+	ecc->dma_slave.filter.map = info->slave_map;
+	ecc->dma_slave.filter.mapcnt = info->slavecnt;
+	ecc->dma_slave.filter.fn = edma_filter_fn;
+
 	ret = dma_async_device_register(&ecc->dma_slave);
 	if (ret) {
 		dev_err(dev, "slave ddev registration failed (%d)\n", ret);
···
 	},
 };
 
+static int edma_tptc_probe(struct platform_device *pdev)
+{
+	return 0;
+}
+
 static struct platform_driver edma_tptc_driver = {
+	.probe		= edma_tptc_probe,
 	.driver = {
 		.name	= "edma3-tptc",
 		.of_match_table	= edma_tptc_of_ids,
+82 -3
drivers/dma/fsl-edma.c
···
 			BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) | \
 			BIT(DMA_SLAVE_BUSWIDTH_4_BYTES) | \
 			BIT(DMA_SLAVE_BUSWIDTH_8_BYTES)
+enum fsl_edma_pm_state {
+	RUNNING = 0,
+	SUSPENDED,
+};
 
 struct fsl_edma_hw_tcd {
 	__le32	saddr;
···
 struct fsl_edma_chan {
 	struct virt_dma_chan		vchan;
 	enum dma_status			status;
+	enum fsl_edma_pm_state		pm_state;
+	bool				idle;
+	u32				slave_id;
 	struct fsl_edma_engine		*edma;
 	struct fsl_edma_desc		*edesc;
 	struct fsl_edma_slave_config	fsc;
···
 	spin_lock_irqsave(&fsl_chan->vchan.lock, flags);
 	fsl_edma_disable_request(fsl_chan);
 	fsl_chan->edesc = NULL;
+	fsl_chan->idle = true;
 	vchan_get_all_descriptors(&fsl_chan->vchan, &head);
 	spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags);
 	vchan_dma_desc_free_list(&fsl_chan->vchan, &head);
···
 	if (fsl_chan->edesc) {
 		fsl_edma_disable_request(fsl_chan);
 		fsl_chan->status = DMA_PAUSED;
+		fsl_chan->idle = true;
 	}
 	spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags);
 	return 0;
···
 	if (fsl_chan->edesc) {
 		fsl_edma_enable_request(fsl_chan);
 		fsl_chan->status = DMA_IN_PROGRESS;
+		fsl_chan->idle = false;
 	}
 	spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags);
 	return 0;
···
 	fsl_edma_set_tcd_regs(fsl_chan, fsl_chan->edesc->tcd[0].vtcd);
 	fsl_edma_enable_request(fsl_chan);
 	fsl_chan->status = DMA_IN_PROGRESS;
+	fsl_chan->idle = false;
 }
 
 static irqreturn_t fsl_edma_tx_handler(int irq, void *dev_id)
···
 			vchan_cookie_complete(&fsl_chan->edesc->vdesc);
 			fsl_chan->edesc = NULL;
 			fsl_chan->status = DMA_COMPLETE;
+			fsl_chan->idle = true;
 		} else {
 			vchan_cyclic_callback(&fsl_chan->edesc->vdesc);
 		}
···
 			edma_writeb(fsl_edma, EDMA_CERR_CERR(ch),
 				fsl_edma->membase + EDMA_CERR);
 			fsl_edma->chans[ch].status = DMA_ERROR;
+			fsl_edma->chans[ch].idle = true;
 		}
 	}
 	return IRQ_HANDLED;
···
 
 	spin_lock_irqsave(&fsl_chan->vchan.lock, flags);
 
+	if (unlikely(fsl_chan->pm_state != RUNNING)) {
+		spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags);
+		/* cannot submit due to suspend */
+		return;
+	}
+
 	if (vchan_issue_pending(&fsl_chan->vchan) && !fsl_chan->edesc)
 		fsl_edma_xfer_desc(fsl_chan);
···
 {
 	struct fsl_edma_engine *fsl_edma = ofdma->of_dma_data;
 	struct dma_chan *chan, *_chan;
+	struct fsl_edma_chan *fsl_chan;
 	unsigned long chans_per_mux = fsl_edma->n_chans / DMAMUX_NR;
 
 	if (dma_spec->args_count != 2)
···
 		chan = dma_get_slave_channel(chan);
 		if (chan) {
 			chan->device->privatecnt++;
-			fsl_edma_chan_mux(to_fsl_edma_chan(chan),
-				dma_spec->args[1], true);
+			fsl_chan = to_fsl_edma_chan(chan);
+			fsl_chan->slave_id = dma_spec->args[1];
+			fsl_edma_chan_mux(fsl_chan, fsl_chan->slave_id,
+					true);
 			mutex_unlock(&fsl_edma->fsl_edma_mutex);
 			return chan;
 		}
···
 		struct fsl_edma_chan *fsl_chan = &fsl_edma->chans[i];
 
 		fsl_chan->edma = fsl_edma;
-
+		fsl_chan->pm_state = RUNNING;
+		fsl_chan->slave_id = 0;
+		fsl_chan->idle = true;
 		fsl_chan->vchan.desc_free = fsl_edma_free_desc;
 		vchan_init(&fsl_chan->vchan, &fsl_edma->dma_dev);
···
 	return 0;
 }
 
+static int fsl_edma_suspend_late(struct device *dev)
+{
+	struct fsl_edma_engine *fsl_edma = dev_get_drvdata(dev);
+	struct fsl_edma_chan *fsl_chan;
+	unsigned long flags;
+	int i;
+
+	for (i = 0; i < fsl_edma->n_chans; i++) {
+		fsl_chan = &fsl_edma->chans[i];
+		spin_lock_irqsave(&fsl_chan->vchan.lock, flags);
+		/* Make sure chan is idle or will force disable. */
+		if (unlikely(!fsl_chan->idle)) {
+			dev_warn(dev, "WARN: There is non-idle channel.");
+			fsl_edma_disable_request(fsl_chan);
+			fsl_edma_chan_mux(fsl_chan, 0, false);
+		}
+
+		fsl_chan->pm_state = SUSPENDED;
+		spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags);
+	}
+
+	return 0;
+}
+
+static int fsl_edma_resume_early(struct device *dev)
+{
+	struct fsl_edma_engine *fsl_edma = dev_get_drvdata(dev);
+	struct fsl_edma_chan *fsl_chan;
+	int i;
+
+	for (i = 0; i < fsl_edma->n_chans; i++) {
+		fsl_chan = &fsl_edma->chans[i];
+		fsl_chan->pm_state = RUNNING;
+		edma_writew(fsl_edma, 0x0, fsl_edma->membase + EDMA_TCD_CSR(i));
+		if (fsl_chan->slave_id != 0)
+			fsl_edma_chan_mux(fsl_chan, fsl_chan->slave_id, true);
+	}
+
+	edma_writel(fsl_edma, EDMA_CR_ERGA | EDMA_CR_ERCA,
+			fsl_edma->membase + EDMA_CR);
+
+	return 0;
+}
+
+/*
+ * eDMA provides the service to others, so it should be suspend late
+ * and resume early. When eDMA suspend, all of the clients should stop
+ * the DMA data transmission and let the channel idle.
+ */
+static const struct dev_pm_ops fsl_edma_pm_ops = {
+	.suspend_late   = fsl_edma_suspend_late,
+	.resume_early   = fsl_edma_resume_early,
+};
+
 static const struct of_device_id fsl_edma_dt_ids[] = {
 	{ .compatible = "fsl,vf610-edma", },
 	{ /* sentinel */ }
···
 	.driver		= {
 		.name	= "fsl-edma",
 		.of_match_table = fsl_edma_dt_ids,
+		.pm     = &fsl_edma_pm_ops,
 	},
 	.probe          = fsl_edma_probe,
 	.remove		= fsl_edma_remove,
+4 -13
drivers/dma/hsu/hsu.c
···
	for_each_sg(sgl, sg, sg_len, i) {
		desc->sg[i].addr = sg_dma_address(sg);
		desc->sg[i].len = sg_dma_len(sg);
+
+		desc->length += sg_dma_len(sg);
	}

	desc->nents = sg_len;
···
	spin_unlock_irqrestore(&hsuc->vchan.lock, flags);
}

-static size_t hsu_dma_desc_size(struct hsu_dma_desc *desc)
-{
-	size_t bytes = 0;
-	unsigned int i;
-
-	for (i = desc->active; i < desc->nents; i++)
-		bytes += desc->sg[i].len;
-
-	return bytes;
-}
-
static size_t hsu_dma_active_desc_size(struct hsu_dma_chan *hsuc)
{
	struct hsu_dma_desc *desc = hsuc->desc;
-	size_t bytes = hsu_dma_desc_size(desc);
+	size_t bytes = desc->length;
	int i;

	i = desc->active % HSU_DMA_CHAN_NR_DESC;
···
		dma_set_residue(state, bytes);
		status = hsuc->desc->status;
	} else if (vdesc) {
-		bytes = hsu_dma_desc_size(to_hsu_dma_desc(vdesc));
+		bytes = to_hsu_dma_desc(vdesc)->length;
		dma_set_residue(state, bytes);
	}
	spin_unlock_irqrestore(&hsuc->vchan.lock, flags);
+1
drivers/dma/hsu/hsu.h
···
	enum dma_transfer_direction direction;
	struct hsu_dma_sg *sg;
	unsigned int nents;
+	size_t length;
	unsigned int active;
	enum dma_status status;
};
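The hsu change above trades a per-query walk of the sg table for a `length` field accumulated once at prep time, so residue queries become O(1). A sketch of the idea, with simplified stand-in types (not the driver's `hsu_dma_sg`/`hsu_dma_desc`):

```c
#include <assert.h>
#include <stddef.h>

struct sg_ent { size_t len; };

struct desc {
	struct sg_ent sg[8];
	unsigned int nents;
	size_t length;		/* cached total, filled in at prep time */
};

static void desc_prep(struct desc *d, const size_t *lens, unsigned int n)
{
	unsigned int i;

	d->length = 0;
	for (i = 0; i < n; i++) {
		d->sg[i].len = lens[i];
		d->length += lens[i];	/* one-time accumulation */
	}
	d->nents = n;
}

/* a residue query now just reads the cached total instead of re-summing */
static size_t desc_size(const struct desc *d)
{
	return d->length;
}
```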
+8 -14
drivers/dma/idma64.c
···
	if (!status)
		return IRQ_NONE;

-	/* Disable interrupts */
-	channel_clear_bit(idma64, MASK(XFER), idma64->all_chan_mask);
-	channel_clear_bit(idma64, MASK(ERROR), idma64->all_chan_mask);
-
	status_xfer = dma_readl(idma64, RAW(XFER));
	status_err = dma_readl(idma64, RAW(ERROR));

	for (i = 0; i < idma64->dma.chancnt; i++)
		idma64_chan_irq(idma64, i, status_err, status_xfer);
-
-	/* Re-enable interrupts */
-	channel_set_bit(idma64, MASK(XFER), idma64->all_chan_mask);
-	channel_set_bit(idma64, MASK(ERROR), idma64->all_chan_mask);

	return IRQ_HANDLED;
}
···
	idma64_desc_free(idma64c, to_idma64_desc(vdesc));
}

-static u64 idma64_hw_desc_fill(struct idma64_hw_desc *hw,
+static void idma64_hw_desc_fill(struct idma64_hw_desc *hw,
		struct dma_slave_config *config,
		enum dma_transfer_direction direction, u64 llp)
{
···
		IDMA64C_CTLL_SRC_WIDTH(src_width);

	lli->llp = llp;
-	return hw->llp;
}

static void idma64_desc_fill(struct idma64_chan *idma64c,
		struct idma64_desc *desc)
{
	struct dma_slave_config *config = &idma64c->config;
-	struct idma64_hw_desc *hw = &desc->hw[desc->ndesc - 1];
+	unsigned int i = desc->ndesc;
+	struct idma64_hw_desc *hw = &desc->hw[i - 1];
	struct idma64_lli *lli = hw->lli;
	u64 llp = 0;
-	unsigned int i = desc->ndesc;

	/* Fill the hardware descriptors and link them to a list */
	do {
		hw = &desc->hw[--i];
-		llp = idma64_hw_desc_fill(hw, config, desc->direction, llp);
+		idma64_hw_desc_fill(hw, config, desc->direction, llp);
+		llp = hw->llp;
		desc->length += hw->len;
	} while (i);

-	/* Trigger interrupt after last block */
+	/* Trigger an interrupt after the last block is transferred */
	lli->ctllo |= IDMA64C_CTLL_INT_EN;
}
···
	idma64->dma.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;

	idma64->dma.dev = chip->dev;
+
+	dma_set_max_seg_size(idma64->dma.dev, IDMA64C_CTLH_BLOCK_TS_MASK);

	ret = dma_async_device_register(&idma64->dma);
	if (ret)
+2 -1
drivers/dma/idma64.h
···
#define IDMA64C_CTLL_LLP_S_EN	(1 << 28)	/* src block chain */

/* Bitfields in CTL_HI */
-#define IDMA64C_CTLH_BLOCK_TS(x)	((x) & ((1 << 17) - 1))
+#define IDMA64C_CTLH_BLOCK_TS_MASK	((1 << 17) - 1)
+#define IDMA64C_CTLH_BLOCK_TS(x)	((x) & IDMA64C_CTLH_BLOCK_TS_MASK)
#define IDMA64C_CTLH_DONE		(1 << 17)

/* Bitfields in CFG_LO */
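The idma64.h change above factors the 17-bit block-size field into a named mask so that `dma_set_max_seg_size()` and the bitfield accessor share a single definition. The macros below mirror the ones in the diff; `block_ts()` is just an illustrative wrapper:

```c
#include <assert.h>
#include <stdint.h>

#define IDMA64C_CTLH_BLOCK_TS_MASK	((1 << 17) - 1)
#define IDMA64C_CTLH_BLOCK_TS(x)	((x) & IDMA64C_CTLH_BLOCK_TS_MASK)

/* extract the block transfer size field from a CTL_HI register value */
static uint32_t block_ts(uint32_t ctlhi)
{
	return IDMA64C_CTLH_BLOCK_TS(ctlhi);
}
```

Advertising the same mask as the maximum segment size guarantees no prepared segment can ever overflow the hardware field.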
+51 -27
drivers/dma/img-mdc-dma.c
···
	return ret;
}

+static unsigned int mdc_get_new_events(struct mdc_chan *mchan)
+{
+	u32 val, processed, done1, done2;
+	unsigned int ret;
+
+	val = mdc_chan_readl(mchan, MDC_CMDS_PROCESSED);
+	processed = (val >> MDC_CMDS_PROCESSED_CMDS_PROCESSED_SHIFT) &
+				MDC_CMDS_PROCESSED_CMDS_PROCESSED_MASK;
+	/*
+	 * CMDS_DONE may have incremented between reading CMDS_PROCESSED
+	 * and clearing INT_ACTIVE.  Re-read CMDS_PROCESSED to ensure we
+	 * didn't miss a command completion.
+	 */
+	do {
+		val = mdc_chan_readl(mchan, MDC_CMDS_PROCESSED);
+
+		done1 = (val >> MDC_CMDS_PROCESSED_CMDS_DONE_SHIFT) &
+			MDC_CMDS_PROCESSED_CMDS_DONE_MASK;
+
+		val &= ~((MDC_CMDS_PROCESSED_CMDS_PROCESSED_MASK <<
+			  MDC_CMDS_PROCESSED_CMDS_PROCESSED_SHIFT) |
+			 MDC_CMDS_PROCESSED_INT_ACTIVE);
+
+		val |= done1 << MDC_CMDS_PROCESSED_CMDS_PROCESSED_SHIFT;
+
+		mdc_chan_writel(mchan, val, MDC_CMDS_PROCESSED);
+
+		val = mdc_chan_readl(mchan, MDC_CMDS_PROCESSED);
+
+		done2 = (val >> MDC_CMDS_PROCESSED_CMDS_DONE_SHIFT) &
+			MDC_CMDS_PROCESSED_CMDS_DONE_MASK;
+	} while (done1 != done2);
+
+	if (done1 >= processed)
+		ret = done1 - processed;
+	else
+		ret = ((MDC_CMDS_PROCESSED_CMDS_PROCESSED_MASK + 1) -
+			processed) + done1;
+
+	return ret;
+}
+
static int mdc_terminate_all(struct dma_chan *chan)
{
	struct mdc_chan *mchan = to_mdc_chan(chan);
···
	mdesc = mchan->desc;
	mchan->desc = NULL;
	vchan_get_all_descriptors(&mchan->vc, &head);
+
+	mdc_get_new_events(mchan);

	spin_unlock_irqrestore(&mchan->vc.lock, flags);
···
{
	struct mdc_chan *mchan = (struct mdc_chan *)dev_id;
	struct mdc_tx_desc *mdesc;
-	u32 val, processed, done1, done2;
-	unsigned int i;
+	unsigned int i, new_events;

	spin_lock(&mchan->vc.lock);

-	val = mdc_chan_readl(mchan, MDC_CMDS_PROCESSED);
-	processed = (val >> MDC_CMDS_PROCESSED_CMDS_PROCESSED_SHIFT) &
-				MDC_CMDS_PROCESSED_CMDS_PROCESSED_MASK;
-	/*
-	 * CMDS_DONE may have incremented between reading CMDS_PROCESSED
-	 * and clearing INT_ACTIVE.  Re-read CMDS_PROCESSED to ensure we
-	 * didn't miss a command completion.
-	 */
-	do {
-		val = mdc_chan_readl(mchan, MDC_CMDS_PROCESSED);
-		done1 = (val >> MDC_CMDS_PROCESSED_CMDS_DONE_SHIFT) &
-			MDC_CMDS_PROCESSED_CMDS_DONE_MASK;
-		val &= ~((MDC_CMDS_PROCESSED_CMDS_PROCESSED_MASK <<
-			  MDC_CMDS_PROCESSED_CMDS_PROCESSED_SHIFT) |
-			 MDC_CMDS_PROCESSED_INT_ACTIVE);
-		val |= done1 << MDC_CMDS_PROCESSED_CMDS_PROCESSED_SHIFT;
-		mdc_chan_writel(mchan, val, MDC_CMDS_PROCESSED);
-		val = mdc_chan_readl(mchan, MDC_CMDS_PROCESSED);
-		done2 = (val >> MDC_CMDS_PROCESSED_CMDS_DONE_SHIFT) &
-			MDC_CMDS_PROCESSED_CMDS_DONE_MASK;
-	} while (done1 != done2);
-
	dev_dbg(mdma2dev(mchan->mdma), "IRQ on channel %d\n", mchan->chan_nr);
+
+	new_events = mdc_get_new_events(mchan);
+
+	if (!new_events)
+		goto out;

	mdesc = mchan->desc;
	if (!mdesc) {
···
		goto out;
	}

-	for (i = processed; i != done1;
-	     i = (i + 1) % (MDC_CMDS_PROCESSED_CMDS_PROCESSED_MASK + 1)) {
+	for (i = 0; i < new_events; i++) {
		/*
		 * The first interrupt in a transfer indicates that the
		 * command list has been loaded, not that a command has
···
			     vc.chan.device_node) {
		list_del(&mchan->vc.chan.device_node);

-		synchronize_irq(mchan->irq);
		devm_free_irq(&pdev->dev, mchan->irq, mchan);

		tasklet_kill(&mchan->vc.task);
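The final step of `mdc_get_new_events()` above is a wraparound-safe difference between two hardware counters that are only a few bits wide. That arithmetic can be sketched in isolation (the 6-bit mask below is illustrative; the real width comes from `MDC_CMDS_PROCESSED_CMDS_PROCESSED_MASK`):

```c
#include <assert.h>

/* hypothetical counter mask standing in for the hardware field width */
#define CMDS_MASK 0x3f

/* number of completions between a "processed" snapshot and the current
 * "done" counter, accounting for the done counter wrapping past the mask */
static unsigned int new_events(unsigned int processed, unsigned int done)
{
	if (done >= processed)
		return done - processed;
	return ((CMDS_MASK + 1) - processed) + done;
}
```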
+1 -1
drivers/dma/ioat/dca.c
···
	return tag;
}

-static struct dca_ops ioat_dca_ops = {
+static const struct dca_ops ioat_dca_ops = {
	.add_requester		= ioat_dca_add_requester,
	.remove_requester	= ioat_dca_remove_requester,
	.get_tag		= ioat_dca_get_tag,
+1 -33
drivers/dma/ioat/dma.h
···
	return ioat_dma->idx[index];
}

-static inline u64 ioat_chansts_32(struct ioatdma_chan *ioat_chan)
-{
-	u8 ver = ioat_chan->ioat_dma->version;
-	u64 status;
-	u32 status_lo;
-
-	/* We need to read the low address first as this causes the
-	 * chipset to latch the upper bits for the subsequent read
-	 */
-	status_lo = readl(ioat_chan->reg_base + IOAT_CHANSTS_OFFSET_LOW(ver));
-	status = readl(ioat_chan->reg_base + IOAT_CHANSTS_OFFSET_HIGH(ver));
-	status <<= 32;
-	status |= status_lo;
-
-	return status;
-}
-
-#if BITS_PER_LONG == 64
-
static inline u64 ioat_chansts(struct ioatdma_chan *ioat_chan)
{
-	u8 ver = ioat_chan->ioat_dma->version;
-	u64 status;
-
-	/* With IOAT v3.3 the status register is 64bit.  */
-	if (ver >= IOAT_VER_3_3)
-		status = readq(ioat_chan->reg_base + IOAT_CHANSTS_OFFSET(ver));
-	else
-		status = ioat_chansts_32(ioat_chan);
-
-	return status;
+	return readq(ioat_chan->reg_base + IOAT_CHANSTS_OFFSET);
}
-
-#else
-#define ioat_chansts ioat_chansts_32
-#endif

static inline u64 ioat_chansts_to_addr(u64 status)
{
+3 -13
drivers/dma/ioat/registers.h
···
#define IOAT_DMA_COMP_V1	0x0001	/* Compatibility with DMA version 1 */
#define IOAT_DMA_COMP_V2	0x0002	/* Compatibility with DMA version 2 */

-
-#define IOAT1_CHANSTS_OFFSET		0x04	/* 64-bit Channel Status Register */
-#define IOAT2_CHANSTS_OFFSET		0x08	/* 64-bit Channel Status Register */
-#define IOAT_CHANSTS_OFFSET(ver)	((ver) < IOAT_VER_2_0 \
-					? IOAT1_CHANSTS_OFFSET : IOAT2_CHANSTS_OFFSET)
-#define IOAT1_CHANSTS_OFFSET_LOW	0x04
-#define IOAT2_CHANSTS_OFFSET_LOW	0x08
-#define IOAT_CHANSTS_OFFSET_LOW(ver)	((ver) < IOAT_VER_2_0 \
-					? IOAT1_CHANSTS_OFFSET_LOW : IOAT2_CHANSTS_OFFSET_LOW)
-#define IOAT1_CHANSTS_OFFSET_HIGH	0x08
-#define IOAT2_CHANSTS_OFFSET_HIGH	0x0C
-#define IOAT_CHANSTS_OFFSET_HIGH(ver)	((ver) < IOAT_VER_2_0 \
-					? IOAT1_CHANSTS_OFFSET_HIGH : IOAT2_CHANSTS_OFFSET_HIGH)
+/* IOAT1 define left for i7300_idle driver to not fail compiling */
+#define IOAT1_CHANSTS_OFFSET		0x04
+#define IOAT_CHANSTS_OFFSET		0x08	/* 64-bit Channel Status Register */
#define IOAT_CHANSTS_COMPLETED_DESCRIPTOR_ADDR	(~0x3fULL)
#define IOAT_CHANSTS_SOFT_ERR			0x10ULL
#define IOAT_CHANSTS_UNAFFILIATED_ERR		0x8ULL
+56 -39
drivers/dma/mv_xor.c
···
}

static void mv_chan_set_mode(struct mv_xor_chan *chan,
-			     enum dma_transaction_type type)
+			     u32 op_mode)
{
-	u32 op_mode;
	u32 config = readl_relaxed(XOR_CONFIG(chan));
-
-	switch (type) {
-	case DMA_XOR:
-		op_mode = XOR_OPERATION_MODE_XOR;
-		break;
-	case DMA_MEMCPY:
-		op_mode = XOR_OPERATION_MODE_MEMCPY;
-		break;
-	default:
-		dev_err(mv_chan_to_devp(chan),
-			"error: unsupported operation %d\n",
-			type);
-		BUG();
-		return;
-	}
-
-	config &= ~0x7;
-	config |= op_mode;
-
-#if defined(__BIG_ENDIAN)
-	config |= XOR_DESCRIPTOR_SWAP;
-#else
-	config &= ~XOR_DESCRIPTOR_SWAP;
-#endif
-
-	writel_relaxed(config, XOR_CONFIG(chan));
-	chan->current_type = type;
-}
-
-static void mv_chan_set_mode_to_desc(struct mv_xor_chan *chan)
-{
-	u32 op_mode;
-	u32 config = readl_relaxed(XOR_CONFIG(chan));
-
-	op_mode = XOR_OPERATION_MODE_IN_DESC;

	config &= ~0x7;
	config |= op_mode;
···
	mv_chan_unmask_interrupts(mv_chan);

	if (mv_chan->op_in_desc == XOR_MODE_IN_DESC)
-		mv_chan_set_mode_to_desc(mv_chan);
+		mv_chan_set_mode(mv_chan, XOR_OPERATION_MODE_IN_DESC);
	else
-		mv_chan_set_mode(mv_chan, DMA_XOR);
+		mv_chan_set_mode(mv_chan, XOR_OPERATION_MODE_XOR);

	spin_lock_init(&mv_chan->lock);
	INIT_LIST_HEAD(&mv_chan->chain);
···
	writel(win_enable, base + WINDOW_BAR_ENABLE(1));
	writel(0, base + WINDOW_OVERRIDE_CTRL(0));
	writel(0, base + WINDOW_OVERRIDE_CTRL(1));
+}
+
+/*
+ * Since this XOR driver is basically used only for RAID5, we don't
+ * need to care about synchronizing ->suspend with DMA activity,
+ * because the DMA engine will naturally be quiet due to the block
+ * devices being suspended.
+ */
+static int mv_xor_suspend(struct platform_device *pdev, pm_message_t state)
+{
+	struct mv_xor_device *xordev = platform_get_drvdata(pdev);
+	int i;
+
+	for (i = 0; i < MV_XOR_MAX_CHANNELS; i++) {
+		struct mv_xor_chan *mv_chan = xordev->channels[i];
+
+		if (!mv_chan)
+			continue;
+
+		mv_chan->saved_config_reg =
+			readl_relaxed(XOR_CONFIG(mv_chan));
+		mv_chan->saved_int_mask_reg =
+			readl_relaxed(XOR_INTR_MASK(mv_chan));
+	}
+
+	return 0;
+}
+
+static int mv_xor_resume(struct platform_device *dev)
+{
+	struct mv_xor_device *xordev = platform_get_drvdata(dev);
+	const struct mbus_dram_target_info *dram;
+	int i;
+
+	for (i = 0; i < MV_XOR_MAX_CHANNELS; i++) {
+		struct mv_xor_chan *mv_chan = xordev->channels[i];
+
+		if (!mv_chan)
+			continue;
+
+		writel_relaxed(mv_chan->saved_config_reg,
+			       XOR_CONFIG(mv_chan));
+		writel_relaxed(mv_chan->saved_int_mask_reg,
+			       XOR_INTR_MASK(mv_chan));
+	}
+
+	dram = mv_mbus_dram_info();
+	if (dram)
+		mv_xor_conf_mbus_windows(xordev, dram);
+
+	return 0;
}

static const struct of_device_id mv_xor_dt_ids[] = {
···
static struct platform_driver mv_xor_driver = {
	.probe		= mv_xor_probe,
+	.suspend	= mv_xor_suspend,
+	.resume		= mv_xor_resume,
	.driver		= {
		.name		= MV_XOR_NAME,
		.of_match_table = of_match_ptr(mv_xor_dt_ids),
+1 -1
drivers/dma/mv_xor.h
···
	void __iomem		*mmr_high_base;
	unsigned int		idx;
	int			irq;
-	enum dma_transaction_type	current_type;
	struct list_head	chain;
	struct list_head	free_slots;
	struct list_head	allocated_slots;
···
	char			dummy_src[MV_XOR_MIN_BYTE_COUNT];
	char			dummy_dst[MV_XOR_MIN_BYTE_COUNT];
	dma_addr_t		dummy_src_addr, dummy_dst_addr;
+	u32			saved_config_reg, saved_int_mask_reg;
};

/**
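The mv_xor suspend/resume support above follows a common MMIO driver pattern: snapshot the per-channel config and interrupt-mask registers on suspend, write them back on resume. A user-space sketch of that flow (the fields below are plain variables standing in for `readl_relaxed`/`writel_relaxed` register accesses; the struct is illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

struct xor_chan {
	uint32_t config_reg, int_mask_reg;		/* live "hardware" state */
	uint32_t saved_config_reg, saved_int_mask_reg;	/* as in mv_xor_chan */
};

/* snapshot registers that the hardware loses across a power transition */
static void chan_suspend(struct xor_chan *c)
{
	c->saved_config_reg = c->config_reg;
	c->saved_int_mask_reg = c->int_mask_reg;
}

/* restore the snapshot; the real driver also reprograms mbus windows */
static void chan_resume(struct xor_chan *c)
{
	c->config_reg = c->saved_config_reg;
	c->int_mask_reg = c->saved_int_mask_reg;
}
```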
+18 -64
drivers/dma/omap-dma.c
···
struct omap_dmadev {
	struct dma_device ddev;
	spinlock_t lock;
-	struct tasklet_struct task;
-	struct list_head pending;
	void __iomem *base;
	const struct omap_dma_reg *reg_map;
	struct omap_system_dma_plat_info *plat;
···
struct omap_chan {
	struct virt_dma_chan vc;
-	struct list_head node;
	void __iomem *channel_base;
	const struct omap_dma_reg *reg_map;
	uint32_t ccr;
···
	spin_unlock_irqrestore(&c->vc.lock, flags);
}

-/*
- * This callback schedules all pending channels.  We could be more
- * clever here by postponing allocation of the real DMA channels to
- * this point, and freeing them when our virtual channel becomes idle.
- *
- * We would then need to deal with 'all channels in-use'
- */
-static void omap_dma_sched(unsigned long data)
-{
-	struct omap_dmadev *d = (struct omap_dmadev *)data;
-	LIST_HEAD(head);
-
-	spin_lock_irq(&d->lock);
-	list_splice_tail_init(&d->pending, &head);
-	spin_unlock_irq(&d->lock);
-
-	while (!list_empty(&head)) {
-		struct omap_chan *c = list_first_entry(&head,
-			struct omap_chan, node);
-
-		spin_lock_irq(&c->vc.lock);
-		list_del_init(&c->node);
-		omap_dma_start_desc(c);
-		spin_unlock_irq(&c->vc.lock);
-	}
-}
-
static irqreturn_t omap_dma_irq(int irq, void *devid)
{
	struct omap_dmadev *od = devid;
···
	struct omap_chan *c = to_omap_dma_chan(chan);
	struct virt_dma_desc *vd;
	enum dma_status ret;
+	uint32_t ccr;
	unsigned long flags;
+
+	ccr = omap_dma_chan_read(c, CCR);
+	/* The channel is no longer active, handle the completion right away */
+	if (!(ccr & CCR_ENABLE))
+		omap_dma_callback(c->dma_ch, 0, c);

	ret = dma_cookie_status(chan, cookie, txstate);
	if (ret == DMA_COMPLETE || !txstate)
···
	if (d->dir == DMA_MEM_TO_DEV)
		pos = omap_dma_get_src_pos(c);
-	else if (d->dir == DMA_DEV_TO_MEM)
+	else if (d->dir == DMA_DEV_TO_MEM || d->dir == DMA_MEM_TO_MEM)
		pos = omap_dma_get_dst_pos(c);
	else
		pos = 0;
···
	unsigned long flags;

	spin_lock_irqsave(&c->vc.lock, flags);
-	if (vchan_issue_pending(&c->vc) && !c->desc) {
-		/*
-		 * c->cyclic is used only by audio and in this case the DMA need
-		 * to be started without delay.
-		 */
-		if (!c->cyclic) {
-			struct omap_dmadev *d = to_omap_dma_dev(chan->device);
-			spin_lock(&d->lock);
-			if (list_empty(&c->node))
-				list_add_tail(&c->node, &d->pending);
-			spin_unlock(&d->lock);
-			tasklet_schedule(&d->task);
-		} else {
-			omap_dma_start_desc(c);
-		}
-	}
+	if (vchan_issue_pending(&c->vc) && !c->desc)
+		omap_dma_start_desc(c);
	spin_unlock_irqrestore(&c->vc.lock, flags);
}
···
	struct scatterlist *sgent;
	struct omap_desc *d;
	dma_addr_t dev_addr;
-	unsigned i, j = 0, es, en, frame_bytes;
+	unsigned i, es, en, frame_bytes;
	u32 burst;

	if (dir == DMA_DEV_TO_MEM) {
···
	en = burst;
	frame_bytes = es_bytes[es] * en;
	for_each_sg(sgl, sgent, sglen, i) {
-		d->sg[j].addr = sg_dma_address(sgent);
-		d->sg[j].en = en;
-		d->sg[j].fn = sg_dma_len(sgent) / frame_bytes;
-		j++;
+		d->sg[i].addr = sg_dma_address(sgent);
+		d->sg[i].en = en;
+		d->sg[i].fn = sg_dma_len(sgent) / frame_bytes;
	}

-	d->sglen = j;
+	d->sglen = sglen;

	return vchan_tx_prep(&c->vc, &d->vd, tx_flags);
}
···
static int omap_dma_terminate_all(struct dma_chan *chan)
{
	struct omap_chan *c = to_omap_dma_chan(chan);
-	struct omap_dmadev *d = to_omap_dma_dev(c->vc.chan.device);
	unsigned long flags;
	LIST_HEAD(head);

	spin_lock_irqsave(&c->vc.lock, flags);
-
-	/* Prevent this channel being scheduled */
-	spin_lock(&d->lock);
-	list_del_init(&c->node);
-	spin_unlock(&d->lock);

	/*
	 * Stop DMA activity: we assume the callback will not be called
···
	c->reg_map = od->reg_map;
	c->vc.desc_free = omap_dma_desc_free;
	vchan_init(&c->vc, &od->ddev);
-	INIT_LIST_HEAD(&c->node);

	return 0;
}

static void omap_dma_free(struct omap_dmadev *od)
{
-	tasklet_kill(&od->task);
	while (!list_empty(&od->ddev.channels)) {
		struct omap_chan *c = list_first_entry(&od->ddev.channels,
			struct omap_chan, vc.chan.device_node);
···
	od->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
	od->ddev.dev = &pdev->dev;
	INIT_LIST_HEAD(&od->ddev.channels);
-	INIT_LIST_HEAD(&od->pending);
	spin_lock_init(&od->lock);
	spin_lock_init(&od->irq_lock);
-
-	tasklet_init(&od->task, omap_dma_sched, (unsigned long)od);

	od->dma_requests = OMAP_SDMA_REQUESTS;
	if (pdev->dev.of_node && of_property_read_u32(pdev->dev.of_node,
···
		if (rc)
			return rc;
	}

+	od->ddev.filter.map = od->plat->slave_map;
+	od->ddev.filter.mapcnt = od->plat->slavecnt;
+	od->ddev.filter.fn = omap_dma_filter_fn;

	rc = dma_async_device_register(&od->ddev);
	if (rc) {
+1
drivers/dma/pxa_dma.c
···
	pdev->slave.dst_addr_widths = widths;
	pdev->slave.directions = BIT(DMA_MEM_TO_DEV) | BIT(DMA_DEV_TO_MEM);
	pdev->slave.residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
+	pdev->slave.descriptor_reuse = true;

	pdev->slave.dev = &op->dev;
	ret = pxad_init_dmadev(op, pdev, dma_channels);
-6
drivers/dma/sh/Kconfig
···
	  This driver supports the general purpose DMA controller found in the
	  Renesas R-Car second generation SoCs.

-config RCAR_HPB_DMAE
-	tristate "Renesas R-Car HPB DMAC support"
-	depends on SH_DMAE_BASE
-	help
-	  Enable support for the Renesas R-Car series DMA controllers.
-
config RENESAS_USB_DMAC
	tristate "Renesas USB-DMA Controller"
	depends on ARCH_SHMOBILE || COMPILE_TEST
-1
drivers/dma/sh/Makefile
···
obj-$(CONFIG_SH_DMAE) += shdma.o

obj-$(CONFIG_RCAR_DMAC) += rcar-dmac.o
-obj-$(CONFIG_RCAR_HPB_DMAE) += rcar-hpbdma.o
obj-$(CONFIG_RENESAS_USB_DMAC) += usb-dmac.o
obj-$(CONFIG_SUDMAC) += sudmac.o
-669
drivers/dma/sh/rcar-hpbdma.c
···
-/*
- * Copyright (C) 2011-2013 Renesas Electronics Corporation
- * Copyright (C) 2013 Cogent Embedded, Inc.
- *
- * This file is based on the drivers/dma/sh/shdma.c
- *
- * Renesas SuperH DMA Engine support
- *
- * This is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * - DMA of SuperH does not have Hardware DMA chain mode.
- * - max DMA size is 16MB.
- *
- */
···
[Whole-file deletion, 669 lines: the unused R-Car HPB-DMAC driver is removed
in its entirety — register and bitfield definitions, the hpb_dmae_chan /
hpb_dmae_device / hpb_dmae_regs / hpb_desc structures, the channel and
common-register accessors, async reset and mode setup, and the shdma_ops
callbacks (hpb_dmae_desc_completed, hpb_dmae_halt, hpb_dmae_channel_busy,
hpb_dmae_slave_addr, hpb_dmae_desc_setup, hpb_dmae_set_slave,
hpb_dmae_setup_xfer, hpb_dmae_start_xfer, hpb_dmae_embedded_desc,
hpb_dmae_chan_irq, hpb_dmae_get_partial). The extract is truncated here,
partway through hpb_dmae_chan_probe().]
= &hpbdev->shdma_dev; 506 - struct platform_device *pdev = 507 - to_platform_device(hpbdev->shdma_dev.dma_dev.dev); 508 - struct hpb_dmae_chan *new_hpb_chan; 509 - struct shdma_chan *schan; 510 - 511 - /* Alloc channel */ 512 - new_hpb_chan = devm_kzalloc(&pdev->dev, 513 - sizeof(struct hpb_dmae_chan), GFP_KERNEL); 514 - if (!new_hpb_chan) { 515 - dev_err(hpbdev->shdma_dev.dma_dev.dev, 516 - "No free memory for allocating DMA channels!\n"); 517 - return -ENOMEM; 518 - } 519 - 520 - schan = &new_hpb_chan->shdma_chan; 521 - schan->max_xfer_len = HPB_DMA_TCR_MAX; 522 - 523 - shdma_chan_probe(sdev, schan, id); 524 - 525 - if (pdev->id >= 0) 526 - snprintf(new_hpb_chan->dev_id, sizeof(new_hpb_chan->dev_id), 527 - "hpb-dmae%d.%d", pdev->id, id); 528 - else 529 - snprintf(new_hpb_chan->dev_id, sizeof(new_hpb_chan->dev_id), 530 - "hpb-dma.%d", id); 531 - 532 - return 0; 533 - } 534 - 535 - static int hpb_dmae_probe(struct platform_device *pdev) 536 - { 537 - const enum dma_slave_buswidth widths = DMA_SLAVE_BUSWIDTH_1_BYTE | 538 - DMA_SLAVE_BUSWIDTH_2_BYTES | DMA_SLAVE_BUSWIDTH_4_BYTES; 539 - struct hpb_dmae_pdata *pdata = pdev->dev.platform_data; 540 - struct hpb_dmae_device *hpbdev; 541 - struct dma_device *dma_dev; 542 - struct resource *chan, *comm, *rest, *mode, *irq_res; 543 - int err, i; 544 - 545 - /* Get platform data */ 546 - if (!pdata || !pdata->num_channels) 547 - return -ENODEV; 548 - 549 - chan = platform_get_resource(pdev, IORESOURCE_MEM, 0); 550 - comm = platform_get_resource(pdev, IORESOURCE_MEM, 1); 551 - rest = platform_get_resource(pdev, IORESOURCE_MEM, 2); 552 - mode = platform_get_resource(pdev, IORESOURCE_MEM, 3); 553 - 554 - irq_res = platform_get_resource(pdev, IORESOURCE_IRQ, 0); 555 - if (!irq_res) 556 - return -ENODEV; 557 - 558 - hpbdev = devm_kzalloc(&pdev->dev, sizeof(struct hpb_dmae_device), 559 - GFP_KERNEL); 560 - if (!hpbdev) { 561 - dev_err(&pdev->dev, "Not enough memory\n"); 562 - return -ENOMEM; 563 - } 564 - 565 - hpbdev->chan_reg = 
devm_ioremap_resource(&pdev->dev, chan); 566 - if (IS_ERR(hpbdev->chan_reg)) 567 - return PTR_ERR(hpbdev->chan_reg); 568 - 569 - hpbdev->comm_reg = devm_ioremap_resource(&pdev->dev, comm); 570 - if (IS_ERR(hpbdev->comm_reg)) 571 - return PTR_ERR(hpbdev->comm_reg); 572 - 573 - hpbdev->reset_reg = devm_ioremap_resource(&pdev->dev, rest); 574 - if (IS_ERR(hpbdev->reset_reg)) 575 - return PTR_ERR(hpbdev->reset_reg); 576 - 577 - hpbdev->mode_reg = devm_ioremap_resource(&pdev->dev, mode); 578 - if (IS_ERR(hpbdev->mode_reg)) 579 - return PTR_ERR(hpbdev->mode_reg); 580 - 581 - dma_dev = &hpbdev->shdma_dev.dma_dev; 582 - 583 - spin_lock_init(&hpbdev->reg_lock); 584 - 585 - /* Platform data */ 586 - hpbdev->pdata = pdata; 587 - 588 - pm_runtime_enable(&pdev->dev); 589 - err = pm_runtime_get_sync(&pdev->dev); 590 - if (err < 0) 591 - dev_err(&pdev->dev, "%s(): GET = %d\n", __func__, err); 592 - 593 - /* Reset DMA controller */ 594 - hpb_dmae_reset(hpbdev); 595 - 596 - pm_runtime_put(&pdev->dev); 597 - 598 - dma_cap_set(DMA_MEMCPY, dma_dev->cap_mask); 599 - dma_cap_set(DMA_SLAVE, dma_dev->cap_mask); 600 - dma_dev->src_addr_widths = widths; 601 - dma_dev->dst_addr_widths = widths; 602 - dma_dev->directions = BIT(DMA_MEM_TO_DEV) | BIT(DMA_DEV_TO_MEM); 603 - dma_dev->residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR; 604 - 605 - hpbdev->shdma_dev.ops = &hpb_dmae_ops; 606 - hpbdev->shdma_dev.desc_size = sizeof(struct hpb_desc); 607 - err = shdma_init(&pdev->dev, &hpbdev->shdma_dev, pdata->num_channels); 608 - if (err < 0) 609 - goto error; 610 - 611 - /* Create DMA channels */ 612 - for (i = 0; i < pdata->num_channels; i++) 613 - hpb_dmae_chan_probe(hpbdev, i); 614 - 615 - platform_set_drvdata(pdev, hpbdev); 616 - err = dma_async_device_register(dma_dev); 617 - if (!err) 618 - return 0; 619 - 620 - shdma_cleanup(&hpbdev->shdma_dev); 621 - error: 622 - pm_runtime_disable(&pdev->dev); 623 - return err; 624 - } 625 - 626 - static void hpb_dmae_chan_remove(struct 
hpb_dmae_device *hpbdev) 627 - { 628 - struct shdma_chan *schan; 629 - int i; 630 - 631 - shdma_for_each_chan(schan, &hpbdev->shdma_dev, i) { 632 - BUG_ON(!schan); 633 - 634 - shdma_chan_remove(schan); 635 - } 636 - } 637 - 638 - static int hpb_dmae_remove(struct platform_device *pdev) 639 - { 640 - struct hpb_dmae_device *hpbdev = platform_get_drvdata(pdev); 641 - 642 - dma_async_device_unregister(&hpbdev->shdma_dev.dma_dev); 643 - 644 - pm_runtime_disable(&pdev->dev); 645 - 646 - hpb_dmae_chan_remove(hpbdev); 647 - 648 - return 0; 649 - } 650 - 651 - static void hpb_dmae_shutdown(struct platform_device *pdev) 652 - { 653 - struct hpb_dmae_device *hpbdev = platform_get_drvdata(pdev); 654 - hpb_dmae_ctl_stop(hpbdev); 655 - } 656 - 657 - static struct platform_driver hpb_dmae_driver = { 658 - .probe = hpb_dmae_probe, 659 - .remove = hpb_dmae_remove, 660 - .shutdown = hpb_dmae_shutdown, 661 - .driver = { 662 - .name = "hpb-dma-engine", 663 - }, 664 - }; 665 - module_platform_driver(hpb_dmae_driver); 666 - 667 - MODULE_AUTHOR("Max Filippov <max.filippov@cogentembedded.com>"); 668 - MODULE_DESCRIPTION("Renesas HPB DMA Engine driver"); 669 - MODULE_LICENSE("GPL");
+2 -2
drivers/dma/sh/usb-dmac.c

···
 static int usb_dmac_chan_terminate_all(struct dma_chan *chan)
 {
 	struct usb_dmac_chan *uchan = to_usb_dmac_chan(chan);
-	struct usb_dmac_desc *desc;
+	struct usb_dmac_desc *desc, *_desc;
 	unsigned long flags;
 	LIST_HEAD(head);
 	LIST_HEAD(list);
···
 	if (uchan->desc)
 		uchan->desc = NULL;
 	list_splice_init(&uchan->desc_got, &list);
-	list_for_each_entry(desc, &list, node)
+	list_for_each_entry_safe(desc, _desc, &list, node)
 		list_move_tail(&desc->node, &uchan->desc_freed);
 	spin_unlock_irqrestore(&uchan->vc.lock, flags);
 	vchan_dma_desc_free_list(&uchan->vc, &head);
+44 -47
drivers/dma/ste_dma40.c
···
 	struct stedma40_platform_data *plat_data = dev_get_platdata(&pdev->dev);
 	struct device_node *np = pdev->dev.of_node;
 	int ret = -ENOENT;
-	struct d40_base *base = NULL;
-	struct resource *res = NULL;
+	struct d40_base *base;
+	struct resource *res;
 	int num_reserved_chans;
 	u32 val;
···
 		if (np) {
 			if (d40_of_probe(pdev, np)) {
 				ret = -ENOMEM;
-				goto failure;
+				goto report_failure;
 			}
 		} else {
 			d40_err(&pdev->dev, "No pdata or Device Tree provided\n");
-			goto failure;
+			goto report_failure;
 		}
 	}

 	base = d40_hw_detect_init(pdev);
 	if (!base)
-		goto failure;
+		goto report_failure;

 	num_reserved_chans = d40_phy_res_init(base);

···
 	return 0;

 failure:
-	if (base) {
-		if (base->desc_slab)
-			kmem_cache_destroy(base->desc_slab);
-		if (base->virtbase)
-			iounmap(base->virtbase);
+	kmem_cache_destroy(base->desc_slab);
+	if (base->virtbase)
+		iounmap(base->virtbase);

-		if (base->lcla_pool.base && base->plat_data->use_esram_lcla) {
-			iounmap(base->lcla_pool.base);
-			base->lcla_pool.base = NULL;
-		}
-
-		if (base->lcla_pool.dma_addr)
-			dma_unmap_single(base->dev, base->lcla_pool.dma_addr,
-					 SZ_1K * base->num_phy_chans,
-					 DMA_TO_DEVICE);
-
-		if (!base->lcla_pool.base_unaligned && base->lcla_pool.base)
-			free_pages((unsigned long)base->lcla_pool.base,
-				   base->lcla_pool.pages);
-
-		kfree(base->lcla_pool.base_unaligned);
-
-		if (base->phy_lcpa)
-			release_mem_region(base->phy_lcpa,
-					   base->lcpa_size);
-		if (base->phy_start)
-			release_mem_region(base->phy_start,
-					   base->phy_size);
-		if (base->clk) {
-			clk_disable_unprepare(base->clk);
-			clk_put(base->clk);
-		}
-
-		if (base->lcpa_regulator) {
-			regulator_disable(base->lcpa_regulator);
-			regulator_put(base->lcpa_regulator);
-		}
-
-		kfree(base->lcla_pool.alloc_map);
-		kfree(base->lookup_log_chans);
-		kfree(base->lookup_phy_chans);
-		kfree(base->phy_res);
-		kfree(base);
+	if (base->lcla_pool.base && base->plat_data->use_esram_lcla) {
+		iounmap(base->lcla_pool.base);
+		base->lcla_pool.base = NULL;
 	}

+	if (base->lcla_pool.dma_addr)
+		dma_unmap_single(base->dev, base->lcla_pool.dma_addr,
+				 SZ_1K * base->num_phy_chans,
+				 DMA_TO_DEVICE);
+
+	if (!base->lcla_pool.base_unaligned && base->lcla_pool.base)
+		free_pages((unsigned long)base->lcla_pool.base,
+			   base->lcla_pool.pages);
+
+	kfree(base->lcla_pool.base_unaligned);
+
+	if (base->phy_lcpa)
+		release_mem_region(base->phy_lcpa,
+				   base->lcpa_size);
+	if (base->phy_start)
+		release_mem_region(base->phy_start,
+				   base->phy_size);
+	if (base->clk) {
+		clk_disable_unprepare(base->clk);
+		clk_put(base->clk);
+	}
+
+	if (base->lcpa_regulator) {
+		regulator_disable(base->lcpa_regulator);
+		regulator_put(base->lcpa_regulator);
+	}
+
+	kfree(base->lcla_pool.alloc_map);
+	kfree(base->lookup_log_chans);
+	kfree(base->lookup_phy_chans);
+	kfree(base->phy_res);
+	kfree(base);
+report_failure:
 	d40_err(&pdev->dev, "probe failed\n");
 	return ret;
 }
+1141
drivers/dma/stm32-dma.c
··· 1 + /* 2 + * Driver for STM32 DMA controller 3 + * 4 + * Inspired by dma-jz4740.c and tegra20-apb-dma.c 5 + * 6 + * Copyright (C) M'boumba Cedric Madianga 2015 7 + * Author: M'boumba Cedric Madianga <cedric.madianga@gmail.com> 8 + * 9 + * License terms: GNU General Public License (GPL), version 2 10 + */ 11 + 12 + #include <linux/clk.h> 13 + #include <linux/delay.h> 14 + #include <linux/dmaengine.h> 15 + #include <linux/dma-mapping.h> 16 + #include <linux/err.h> 17 + #include <linux/init.h> 18 + #include <linux/jiffies.h> 19 + #include <linux/list.h> 20 + #include <linux/module.h> 21 + #include <linux/of.h> 22 + #include <linux/of_device.h> 23 + #include <linux/of_dma.h> 24 + #include <linux/platform_device.h> 25 + #include <linux/reset.h> 26 + #include <linux/sched.h> 27 + #include <linux/slab.h> 28 + 29 + #include "virt-dma.h" 30 + 31 + #define STM32_DMA_LISR 0x0000 /* DMA Low Int Status Reg */ 32 + #define STM32_DMA_HISR 0x0004 /* DMA High Int Status Reg */ 33 + #define STM32_DMA_LIFCR 0x0008 /* DMA Low Int Flag Clear Reg */ 34 + #define STM32_DMA_HIFCR 0x000c /* DMA High Int Flag Clear Reg */ 35 + #define STM32_DMA_TCI BIT(5) /* Transfer Complete Interrupt */ 36 + #define STM32_DMA_TEI BIT(3) /* Transfer Error Interrupt */ 37 + #define STM32_DMA_DMEI BIT(2) /* Direct Mode Error Interrupt */ 38 + #define STM32_DMA_FEI BIT(0) /* FIFO Error Interrupt */ 39 + 40 + /* DMA Stream x Configuration Register */ 41 + #define STM32_DMA_SCR(x) (0x0010 + 0x18 * (x)) /* x = 0..7 */ 42 + #define STM32_DMA_SCR_REQ(n) ((n & 0x7) << 25) 43 + #define STM32_DMA_SCR_MBURST_MASK GENMASK(24, 23) 44 + #define STM32_DMA_SCR_MBURST(n) ((n & 0x3) << 23) 45 + #define STM32_DMA_SCR_PBURST_MASK GENMASK(22, 21) 46 + #define STM32_DMA_SCR_PBURST(n) ((n & 0x3) << 21) 47 + #define STM32_DMA_SCR_PL_MASK GENMASK(17, 16) 48 + #define STM32_DMA_SCR_PL(n) ((n & 0x3) << 16) 49 + #define STM32_DMA_SCR_MSIZE_MASK GENMASK(14, 13) 50 + #define STM32_DMA_SCR_MSIZE(n) ((n & 0x3) << 13) 51 + #define 
STM32_DMA_SCR_PSIZE_MASK GENMASK(12, 11) 52 + #define STM32_DMA_SCR_PSIZE(n) ((n & 0x3) << 11) 53 + #define STM32_DMA_SCR_PSIZE_GET(n) ((n & STM32_DMA_SCR_PSIZE_MASK) >> 11) 54 + #define STM32_DMA_SCR_DIR_MASK GENMASK(7, 6) 55 + #define STM32_DMA_SCR_DIR(n) ((n & 0x3) << 6) 56 + #define STM32_DMA_SCR_CT BIT(19) /* Target in double buffer */ 57 + #define STM32_DMA_SCR_DBM BIT(18) /* Double Buffer Mode */ 58 + #define STM32_DMA_SCR_PINCOS BIT(15) /* Peripheral inc offset size */ 59 + #define STM32_DMA_SCR_MINC BIT(10) /* Memory increment mode */ 60 + #define STM32_DMA_SCR_PINC BIT(9) /* Peripheral increment mode */ 61 + #define STM32_DMA_SCR_CIRC BIT(8) /* Circular mode */ 62 + #define STM32_DMA_SCR_PFCTRL BIT(5) /* Peripheral Flow Controller */ 63 + #define STM32_DMA_SCR_TCIE BIT(4) /* Transfer Cplete Int Enable*/ 64 + #define STM32_DMA_SCR_TEIE BIT(2) /* Transfer Error Int Enable */ 65 + #define STM32_DMA_SCR_DMEIE BIT(1) /* Direct Mode Err Int Enable */ 66 + #define STM32_DMA_SCR_EN BIT(0) /* Stream Enable */ 67 + #define STM32_DMA_SCR_CFG_MASK (STM32_DMA_SCR_PINC \ 68 + | STM32_DMA_SCR_MINC \ 69 + | STM32_DMA_SCR_PINCOS \ 70 + | STM32_DMA_SCR_PL_MASK) 71 + #define STM32_DMA_SCR_IRQ_MASK (STM32_DMA_SCR_TCIE \ 72 + | STM32_DMA_SCR_TEIE \ 73 + | STM32_DMA_SCR_DMEIE) 74 + 75 + /* DMA Stream x number of data register */ 76 + #define STM32_DMA_SNDTR(x) (0x0014 + 0x18 * (x)) 77 + 78 + /* DMA stream peripheral address register */ 79 + #define STM32_DMA_SPAR(x) (0x0018 + 0x18 * (x)) 80 + 81 + /* DMA stream x memory 0 address register */ 82 + #define STM32_DMA_SM0AR(x) (0x001c + 0x18 * (x)) 83 + 84 + /* DMA stream x memory 1 address register */ 85 + #define STM32_DMA_SM1AR(x) (0x0020 + 0x18 * (x)) 86 + 87 + /* DMA stream x FIFO control register */ 88 + #define STM32_DMA_SFCR(x) (0x0024 + 0x18 * (x)) 89 + #define STM32_DMA_SFCR_FTH_MASK GENMASK(1, 0) 90 + #define STM32_DMA_SFCR_FTH(n) (n & STM32_DMA_SFCR_FTH_MASK) 91 + #define STM32_DMA_SFCR_FEIE BIT(7) /* FIFO error 
interrupt enable */ 92 + #define STM32_DMA_SFCR_DMDIS BIT(2) /* Direct mode disable */ 93 + #define STM32_DMA_SFCR_MASK (STM32_DMA_SFCR_FEIE \ 94 + | STM32_DMA_SFCR_DMDIS) 95 + 96 + /* DMA direction */ 97 + #define STM32_DMA_DEV_TO_MEM 0x00 98 + #define STM32_DMA_MEM_TO_DEV 0x01 99 + #define STM32_DMA_MEM_TO_MEM 0x02 100 + 101 + /* DMA priority level */ 102 + #define STM32_DMA_PRIORITY_LOW 0x00 103 + #define STM32_DMA_PRIORITY_MEDIUM 0x01 104 + #define STM32_DMA_PRIORITY_HIGH 0x02 105 + #define STM32_DMA_PRIORITY_VERY_HIGH 0x03 106 + 107 + /* DMA FIFO threshold selection */ 108 + #define STM32_DMA_FIFO_THRESHOLD_1QUARTERFULL 0x00 109 + #define STM32_DMA_FIFO_THRESHOLD_HALFFULL 0x01 110 + #define STM32_DMA_FIFO_THRESHOLD_3QUARTERSFULL 0x02 111 + #define STM32_DMA_FIFO_THRESHOLD_FULL 0x03 112 + 113 + #define STM32_DMA_MAX_DATA_ITEMS 0xffff 114 + #define STM32_DMA_MAX_CHANNELS 0x08 115 + #define STM32_DMA_MAX_REQUEST_ID 0x08 116 + #define STM32_DMA_MAX_DATA_PARAM 0x03 117 + 118 + enum stm32_dma_width { 119 + STM32_DMA_BYTE, 120 + STM32_DMA_HALF_WORD, 121 + STM32_DMA_WORD, 122 + }; 123 + 124 + enum stm32_dma_burst_size { 125 + STM32_DMA_BURST_SINGLE, 126 + STM32_DMA_BURST_INCR4, 127 + STM32_DMA_BURST_INCR8, 128 + STM32_DMA_BURST_INCR16, 129 + }; 130 + 131 + struct stm32_dma_cfg { 132 + u32 channel_id; 133 + u32 request_line; 134 + u32 stream_config; 135 + u32 threshold; 136 + }; 137 + 138 + struct stm32_dma_chan_reg { 139 + u32 dma_lisr; 140 + u32 dma_hisr; 141 + u32 dma_lifcr; 142 + u32 dma_hifcr; 143 + u32 dma_scr; 144 + u32 dma_sndtr; 145 + u32 dma_spar; 146 + u32 dma_sm0ar; 147 + u32 dma_sm1ar; 148 + u32 dma_sfcr; 149 + }; 150 + 151 + struct stm32_dma_sg_req { 152 + u32 len; 153 + struct stm32_dma_chan_reg chan_reg; 154 + }; 155 + 156 + struct stm32_dma_desc { 157 + struct virt_dma_desc vdesc; 158 + bool cyclic; 159 + u32 num_sgs; 160 + struct stm32_dma_sg_req sg_req[]; 161 + }; 162 + 163 + struct stm32_dma_chan { 164 + struct virt_dma_chan vchan; 165 + bool 
config_init; 166 + bool busy; 167 + u32 id; 168 + u32 irq; 169 + struct stm32_dma_desc *desc; 170 + u32 next_sg; 171 + struct dma_slave_config dma_sconfig; 172 + struct stm32_dma_chan_reg chan_reg; 173 + }; 174 + 175 + struct stm32_dma_device { 176 + struct dma_device ddev; 177 + void __iomem *base; 178 + struct clk *clk; 179 + struct reset_control *rst; 180 + bool mem2mem; 181 + struct stm32_dma_chan chan[STM32_DMA_MAX_CHANNELS]; 182 + }; 183 + 184 + static struct stm32_dma_device *stm32_dma_get_dev(struct stm32_dma_chan *chan) 185 + { 186 + return container_of(chan->vchan.chan.device, struct stm32_dma_device, 187 + ddev); 188 + } 189 + 190 + static struct stm32_dma_chan *to_stm32_dma_chan(struct dma_chan *c) 191 + { 192 + return container_of(c, struct stm32_dma_chan, vchan.chan); 193 + } 194 + 195 + static struct stm32_dma_desc *to_stm32_dma_desc(struct virt_dma_desc *vdesc) 196 + { 197 + return container_of(vdesc, struct stm32_dma_desc, vdesc); 198 + } 199 + 200 + static struct device *chan2dev(struct stm32_dma_chan *chan) 201 + { 202 + return &chan->vchan.chan.dev->device; 203 + } 204 + 205 + static u32 stm32_dma_read(struct stm32_dma_device *dmadev, u32 reg) 206 + { 207 + return readl_relaxed(dmadev->base + reg); 208 + } 209 + 210 + static void stm32_dma_write(struct stm32_dma_device *dmadev, u32 reg, u32 val) 211 + { 212 + writel_relaxed(val, dmadev->base + reg); 213 + } 214 + 215 + static struct stm32_dma_desc *stm32_dma_alloc_desc(u32 num_sgs) 216 + { 217 + return kzalloc(sizeof(struct stm32_dma_desc) + 218 + sizeof(struct stm32_dma_sg_req) * num_sgs, GFP_NOWAIT); 219 + } 220 + 221 + static int stm32_dma_get_width(struct stm32_dma_chan *chan, 222 + enum dma_slave_buswidth width) 223 + { 224 + switch (width) { 225 + case DMA_SLAVE_BUSWIDTH_1_BYTE: 226 + return STM32_DMA_BYTE; 227 + case DMA_SLAVE_BUSWIDTH_2_BYTES: 228 + return STM32_DMA_HALF_WORD; 229 + case DMA_SLAVE_BUSWIDTH_4_BYTES: 230 + return STM32_DMA_WORD; 231 + default: 232 + dev_err(chan2dev(chan), 
"Dma bus width not supported\n"); 233 + return -EINVAL; 234 + } 235 + } 236 + 237 + static int stm32_dma_get_burst(struct stm32_dma_chan *chan, u32 maxburst) 238 + { 239 + switch (maxburst) { 240 + case 0: 241 + case 1: 242 + return STM32_DMA_BURST_SINGLE; 243 + case 4: 244 + return STM32_DMA_BURST_INCR4; 245 + case 8: 246 + return STM32_DMA_BURST_INCR8; 247 + case 16: 248 + return STM32_DMA_BURST_INCR16; 249 + default: 250 + dev_err(chan2dev(chan), "Dma burst size not supported\n"); 251 + return -EINVAL; 252 + } 253 + } 254 + 255 + static void stm32_dma_set_fifo_config(struct stm32_dma_chan *chan, 256 + u32 src_maxburst, u32 dst_maxburst) 257 + { 258 + chan->chan_reg.dma_sfcr &= ~STM32_DMA_SFCR_MASK; 259 + chan->chan_reg.dma_scr &= ~STM32_DMA_SCR_DMEIE; 260 + 261 + if ((!src_maxburst) && (!dst_maxburst)) { 262 + /* Using direct mode */ 263 + chan->chan_reg.dma_scr |= STM32_DMA_SCR_DMEIE; 264 + } else { 265 + /* Using FIFO mode */ 266 + chan->chan_reg.dma_sfcr |= STM32_DMA_SFCR_MASK; 267 + } 268 + } 269 + 270 + static int stm32_dma_slave_config(struct dma_chan *c, 271 + struct dma_slave_config *config) 272 + { 273 + struct stm32_dma_chan *chan = to_stm32_dma_chan(c); 274 + 275 + memcpy(&chan->dma_sconfig, config, sizeof(*config)); 276 + 277 + chan->config_init = true; 278 + 279 + return 0; 280 + } 281 + 282 + static u32 stm32_dma_irq_status(struct stm32_dma_chan *chan) 283 + { 284 + struct stm32_dma_device *dmadev = stm32_dma_get_dev(chan); 285 + u32 flags, dma_isr; 286 + 287 + /* 288 + * Read "flags" from DMA_xISR register corresponding to the selected 289 + * DMA channel at the correct bit offset inside that register. 290 + * 291 + * If (ch % 4) is 2 or 3, left shift the mask by 16 bits. 292 + * If (ch % 4) is 1 or 3, additionally left shift the mask by 6 bits. 
293 + */ 294 + 295 + if (chan->id & 4) 296 + dma_isr = stm32_dma_read(dmadev, STM32_DMA_HISR); 297 + else 298 + dma_isr = stm32_dma_read(dmadev, STM32_DMA_LISR); 299 + 300 + flags = dma_isr >> (((chan->id & 2) << 3) | ((chan->id & 1) * 6)); 301 + 302 + return flags; 303 + } 304 + 305 + static void stm32_dma_irq_clear(struct stm32_dma_chan *chan, u32 flags) 306 + { 307 + struct stm32_dma_device *dmadev = stm32_dma_get_dev(chan); 308 + u32 dma_ifcr; 309 + 310 + /* 311 + * Write "flags" to the DMA_xIFCR register corresponding to the selected 312 + * DMA channel at the correct bit offset inside that register. 313 + * 314 + * If (ch % 4) is 2 or 3, left shift the mask by 16 bits. 315 + * If (ch % 4) is 1 or 3, additionally left shift the mask by 6 bits. 316 + */ 317 + dma_ifcr = flags << (((chan->id & 2) << 3) | ((chan->id & 1) * 6)); 318 + 319 + if (chan->id & 4) 320 + stm32_dma_write(dmadev, STM32_DMA_HIFCR, dma_ifcr); 321 + else 322 + stm32_dma_write(dmadev, STM32_DMA_LIFCR, dma_ifcr); 323 + } 324 + 325 + static int stm32_dma_disable_chan(struct stm32_dma_chan *chan) 326 + { 327 + struct stm32_dma_device *dmadev = stm32_dma_get_dev(chan); 328 + unsigned long timeout = jiffies + msecs_to_jiffies(5000); 329 + u32 dma_scr, id; 330 + 331 + id = chan->id; 332 + dma_scr = stm32_dma_read(dmadev, STM32_DMA_SCR(id)); 333 + 334 + if (dma_scr & STM32_DMA_SCR_EN) { 335 + dma_scr &= ~STM32_DMA_SCR_EN; 336 + stm32_dma_write(dmadev, STM32_DMA_SCR(id), dma_scr); 337 + 338 + do { 339 + dma_scr = stm32_dma_read(dmadev, STM32_DMA_SCR(id)); 340 + dma_scr &= STM32_DMA_SCR_EN; 341 + if (!dma_scr) 342 + break; 343 + 344 + if (time_after_eq(jiffies, timeout)) { 345 + dev_err(chan2dev(chan), "%s: timeout!\n", 346 + __func__); 347 + return -EBUSY; 348 + } 349 + cond_resched(); 350 + } while (1); 351 + } 352 + 353 + return 0; 354 + } 355 + 356 + static void stm32_dma_stop(struct stm32_dma_chan *chan) 357 + { 358 + struct stm32_dma_device *dmadev = stm32_dma_get_dev(chan); 359 + u32 dma_scr, 
dma_sfcr, status; 360 + int ret; 361 + 362 + /* Disable interrupts */ 363 + dma_scr = stm32_dma_read(dmadev, STM32_DMA_SCR(chan->id)); 364 + dma_scr &= ~STM32_DMA_SCR_IRQ_MASK; 365 + stm32_dma_write(dmadev, STM32_DMA_SCR(chan->id), dma_scr); 366 + dma_sfcr = stm32_dma_read(dmadev, STM32_DMA_SFCR(chan->id)); 367 + dma_sfcr &= ~STM32_DMA_SFCR_FEIE; 368 + stm32_dma_write(dmadev, STM32_DMA_SFCR(chan->id), dma_sfcr); 369 + 370 + /* Disable DMA */ 371 + ret = stm32_dma_disable_chan(chan); 372 + if (ret < 0) 373 + return; 374 + 375 + /* Clear interrupt status if it is there */ 376 + status = stm32_dma_irq_status(chan); 377 + if (status) { 378 + dev_dbg(chan2dev(chan), "%s(): clearing interrupt: 0x%08x\n", 379 + __func__, status); 380 + stm32_dma_irq_clear(chan, status); 381 + } 382 + 383 + chan->busy = false; 384 + } 385 + 386 + static int stm32_dma_terminate_all(struct dma_chan *c) 387 + { 388 + struct stm32_dma_chan *chan = to_stm32_dma_chan(c); 389 + unsigned long flags; 390 + LIST_HEAD(head); 391 + 392 + spin_lock_irqsave(&chan->vchan.lock, flags); 393 + 394 + if (chan->busy) { 395 + stm32_dma_stop(chan); 396 + chan->desc = NULL; 397 + } 398 + 399 + vchan_get_all_descriptors(&chan->vchan, &head); 400 + spin_unlock_irqrestore(&chan->vchan.lock, flags); 401 + vchan_dma_desc_free_list(&chan->vchan, &head); 402 + 403 + return 0; 404 + } 405 + 406 + static void stm32_dma_dump_reg(struct stm32_dma_chan *chan) 407 + { 408 + struct stm32_dma_device *dmadev = stm32_dma_get_dev(chan); 409 + u32 scr = stm32_dma_read(dmadev, STM32_DMA_SCR(chan->id)); 410 + u32 ndtr = stm32_dma_read(dmadev, STM32_DMA_SNDTR(chan->id)); 411 + u32 spar = stm32_dma_read(dmadev, STM32_DMA_SPAR(chan->id)); 412 + u32 sm0ar = stm32_dma_read(dmadev, STM32_DMA_SM0AR(chan->id)); 413 + u32 sm1ar = stm32_dma_read(dmadev, STM32_DMA_SM1AR(chan->id)); 414 + u32 sfcr = stm32_dma_read(dmadev, STM32_DMA_SFCR(chan->id)); 415 + 416 + dev_dbg(chan2dev(chan), "SCR: 0x%08x\n", scr); 417 + dev_dbg(chan2dev(chan), "NDTR: 
0x%08x\n", ndtr); 418 + dev_dbg(chan2dev(chan), "SPAR: 0x%08x\n", spar); 419 + dev_dbg(chan2dev(chan), "SM0AR: 0x%08x\n", sm0ar); 420 + dev_dbg(chan2dev(chan), "SM1AR: 0x%08x\n", sm1ar); 421 + dev_dbg(chan2dev(chan), "SFCR: 0x%08x\n", sfcr); 422 + } 423 + 424 + static int stm32_dma_start_transfer(struct stm32_dma_chan *chan) 425 + { 426 + struct stm32_dma_device *dmadev = stm32_dma_get_dev(chan); 427 + struct virt_dma_desc *vdesc; 428 + struct stm32_dma_sg_req *sg_req; 429 + struct stm32_dma_chan_reg *reg; 430 + u32 status; 431 + int ret; 432 + 433 + ret = stm32_dma_disable_chan(chan); 434 + if (ret < 0) 435 + return ret; 436 + 437 + if (!chan->desc) { 438 + vdesc = vchan_next_desc(&chan->vchan); 439 + if (!vdesc) 440 + return -EPERM; 441 + 442 + chan->desc = to_stm32_dma_desc(vdesc); 443 + chan->next_sg = 0; 444 + } 445 + 446 + if (chan->next_sg == chan->desc->num_sgs) 447 + chan->next_sg = 0; 448 + 449 + sg_req = &chan->desc->sg_req[chan->next_sg]; 450 + reg = &sg_req->chan_reg; 451 + 452 + stm32_dma_write(dmadev, STM32_DMA_SCR(chan->id), reg->dma_scr); 453 + stm32_dma_write(dmadev, STM32_DMA_SPAR(chan->id), reg->dma_spar); 454 + stm32_dma_write(dmadev, STM32_DMA_SM0AR(chan->id), reg->dma_sm0ar); 455 + stm32_dma_write(dmadev, STM32_DMA_SFCR(chan->id), reg->dma_sfcr); 456 + stm32_dma_write(dmadev, STM32_DMA_SM1AR(chan->id), reg->dma_sm1ar); 457 + stm32_dma_write(dmadev, STM32_DMA_SNDTR(chan->id), reg->dma_sndtr); 458 + 459 + chan->next_sg++; 460 + 461 + /* Clear interrupt status if it is there */ 462 + status = stm32_dma_irq_status(chan); 463 + if (status) 464 + stm32_dma_irq_clear(chan, status); 465 + 466 + stm32_dma_dump_reg(chan); 467 + 468 + /* Start DMA */ 469 + reg->dma_scr |= STM32_DMA_SCR_EN; 470 + stm32_dma_write(dmadev, STM32_DMA_SCR(chan->id), reg->dma_scr); 471 + 472 + chan->busy = true; 473 + 474 + return 0; 475 + } 476 + 477 + static void stm32_dma_configure_next_sg(struct stm32_dma_chan *chan) 478 + { 479 + struct stm32_dma_device *dmadev = 
stm32_dma_get_dev(chan); 480 + struct stm32_dma_sg_req *sg_req; 481 + u32 dma_scr, dma_sm0ar, dma_sm1ar, id; 482 + 483 + id = chan->id; 484 + dma_scr = stm32_dma_read(dmadev, STM32_DMA_SCR(id)); 485 + 486 + if (dma_scr & STM32_DMA_SCR_DBM) { 487 + if (chan->next_sg == chan->desc->num_sgs) 488 + chan->next_sg = 0; 489 + 490 + sg_req = &chan->desc->sg_req[chan->next_sg]; 491 + 492 + if (dma_scr & STM32_DMA_SCR_CT) { 493 + dma_sm0ar = sg_req->chan_reg.dma_sm0ar; 494 + stm32_dma_write(dmadev, STM32_DMA_SM0AR(id), dma_sm0ar); 495 + dev_dbg(chan2dev(chan), "CT=1 <=> SM0AR: 0x%08x\n", 496 + stm32_dma_read(dmadev, STM32_DMA_SM0AR(id))); 497 + } else { 498 + dma_sm1ar = sg_req->chan_reg.dma_sm1ar; 499 + stm32_dma_write(dmadev, STM32_DMA_SM1AR(id), dma_sm1ar); 500 + dev_dbg(chan2dev(chan), "CT=0 <=> SM1AR: 0x%08x\n", 501 + stm32_dma_read(dmadev, STM32_DMA_SM1AR(id))); 502 + } 503 + 504 + chan->next_sg++; 505 + } 506 + } 507 + 508 + static void stm32_dma_handle_chan_done(struct stm32_dma_chan *chan) 509 + { 510 + if (chan->desc) { 511 + if (chan->desc->cyclic) { 512 + vchan_cyclic_callback(&chan->desc->vdesc); 513 + stm32_dma_configure_next_sg(chan); 514 + } else { 515 + chan->busy = false; 516 + if (chan->next_sg == chan->desc->num_sgs) { 517 + list_del(&chan->desc->vdesc.node); 518 + vchan_cookie_complete(&chan->desc->vdesc); 519 + chan->desc = NULL; 520 + } 521 + stm32_dma_start_transfer(chan); 522 + } 523 + } 524 + } 525 + 526 + static irqreturn_t stm32_dma_chan_irq(int irq, void *devid) 527 + { 528 + struct stm32_dma_chan *chan = devid; 529 + struct stm32_dma_device *dmadev = stm32_dma_get_dev(chan); 530 + u32 status, scr, sfcr; 531 + 532 + spin_lock(&chan->vchan.lock); 533 + 534 + status = stm32_dma_irq_status(chan); 535 + scr = stm32_dma_read(dmadev, STM32_DMA_SCR(chan->id)); 536 + sfcr = stm32_dma_read(dmadev, STM32_DMA_SFCR(chan->id)); 537 + 538 + if ((status & STM32_DMA_TCI) && (scr & STM32_DMA_SCR_TCIE)) { 539 + stm32_dma_irq_clear(chan, STM32_DMA_TCI); 540 + 
	stm32_dma_handle_chan_done(chan);

	} else {
		stm32_dma_irq_clear(chan, status);
		dev_err(chan2dev(chan), "DMA error: status=0x%08x\n", status);
	}

	spin_unlock(&chan->vchan.lock);

	return IRQ_HANDLED;
}

static void stm32_dma_issue_pending(struct dma_chan *c)
{
	struct stm32_dma_chan *chan = to_stm32_dma_chan(c);
	unsigned long flags;
	int ret;

	spin_lock_irqsave(&chan->vchan.lock, flags);
	if (!chan->busy) {
		if (vchan_issue_pending(&chan->vchan) && !chan->desc) {
			ret = stm32_dma_start_transfer(chan);
			if ((!ret) && (chan->desc->cyclic))
				stm32_dma_configure_next_sg(chan);
		}
	}
	spin_unlock_irqrestore(&chan->vchan.lock, flags);
}

static int stm32_dma_set_xfer_param(struct stm32_dma_chan *chan,
				    enum dma_transfer_direction direction,
				    enum dma_slave_buswidth *buswidth)
{
	enum dma_slave_buswidth src_addr_width, dst_addr_width;
	int src_bus_width, dst_bus_width;
	int src_burst_size, dst_burst_size;
	u32 src_maxburst, dst_maxburst;
	dma_addr_t src_addr, dst_addr;
	u32 dma_scr = 0;

	src_addr_width = chan->dma_sconfig.src_addr_width;
	dst_addr_width = chan->dma_sconfig.dst_addr_width;
	src_maxburst = chan->dma_sconfig.src_maxburst;
	dst_maxburst = chan->dma_sconfig.dst_maxburst;
	src_addr = chan->dma_sconfig.src_addr;
	dst_addr = chan->dma_sconfig.dst_addr;

	switch (direction) {
	case DMA_MEM_TO_DEV:
		dst_bus_width = stm32_dma_get_width(chan, dst_addr_width);
		if (dst_bus_width < 0)
			return dst_bus_width;

		dst_burst_size = stm32_dma_get_burst(chan, dst_maxburst);
		if (dst_burst_size < 0)
			return dst_burst_size;

		if (!src_addr_width)
			src_addr_width = dst_addr_width;

		src_bus_width = stm32_dma_get_width(chan, src_addr_width);
		if (src_bus_width < 0)
			return src_bus_width;

		src_burst_size = stm32_dma_get_burst(chan, src_maxburst);
		if (src_burst_size < 0)
			return src_burst_size;

		dma_scr = STM32_DMA_SCR_DIR(STM32_DMA_MEM_TO_DEV) |
			STM32_DMA_SCR_PSIZE(dst_bus_width) |
			STM32_DMA_SCR_MSIZE(src_bus_width) |
			STM32_DMA_SCR_PBURST(dst_burst_size) |
			STM32_DMA_SCR_MBURST(src_burst_size);

		chan->chan_reg.dma_spar = chan->dma_sconfig.dst_addr;
		*buswidth = dst_addr_width;
		break;

	case DMA_DEV_TO_MEM:
		src_bus_width = stm32_dma_get_width(chan, src_addr_width);
		if (src_bus_width < 0)
			return src_bus_width;

		src_burst_size = stm32_dma_get_burst(chan, src_maxburst);
		if (src_burst_size < 0)
			return src_burst_size;

		if (!dst_addr_width)
			dst_addr_width = src_addr_width;

		dst_bus_width = stm32_dma_get_width(chan, dst_addr_width);
		if (dst_bus_width < 0)
			return dst_bus_width;

		dst_burst_size = stm32_dma_get_burst(chan, dst_maxburst);
		if (dst_burst_size < 0)
			return dst_burst_size;

		dma_scr = STM32_DMA_SCR_DIR(STM32_DMA_DEV_TO_MEM) |
			STM32_DMA_SCR_PSIZE(src_bus_width) |
			STM32_DMA_SCR_MSIZE(dst_bus_width) |
			STM32_DMA_SCR_PBURST(src_burst_size) |
			STM32_DMA_SCR_MBURST(dst_burst_size);

		chan->chan_reg.dma_spar = chan->dma_sconfig.src_addr;
		*buswidth = chan->dma_sconfig.src_addr_width;
		break;

	default:
		dev_err(chan2dev(chan), "Dma direction is not supported\n");
		return -EINVAL;
	}

	stm32_dma_set_fifo_config(chan, src_maxburst, dst_maxburst);

	chan->chan_reg.dma_scr &= ~(STM32_DMA_SCR_DIR_MASK |
		STM32_DMA_SCR_PSIZE_MASK | STM32_DMA_SCR_MSIZE_MASK |
		STM32_DMA_SCR_PBURST_MASK | STM32_DMA_SCR_MBURST_MASK);
	chan->chan_reg.dma_scr |= dma_scr;

	return 0;
}

static void stm32_dma_clear_reg(struct stm32_dma_chan_reg *regs)
{
	memset(regs, 0, sizeof(struct stm32_dma_chan_reg));
}

static struct dma_async_tx_descriptor *stm32_dma_prep_slave_sg(
	struct dma_chan *c, struct scatterlist *sgl,
	u32 sg_len, enum dma_transfer_direction direction,
	unsigned long flags, void *context)
{
	struct stm32_dma_chan *chan = to_stm32_dma_chan(c);
	struct stm32_dma_desc *desc;
	struct scatterlist *sg;
	enum dma_slave_buswidth buswidth;
	u32 nb_data_items;
	int i, ret;

	if (!chan->config_init) {
		dev_err(chan2dev(chan), "dma channel is not configured\n");
		return NULL;
	}

	if (sg_len < 1) {
		dev_err(chan2dev(chan), "Invalid segment length %d\n", sg_len);
		return NULL;
	}

	desc = stm32_dma_alloc_desc(sg_len);
	if (!desc)
		return NULL;

	ret = stm32_dma_set_xfer_param(chan, direction, &buswidth);
	if (ret < 0)
		goto err;

	/* Set peripheral flow controller */
	if (chan->dma_sconfig.device_fc)
		chan->chan_reg.dma_scr |= STM32_DMA_SCR_PFCTRL;
	else
		chan->chan_reg.dma_scr &= ~STM32_DMA_SCR_PFCTRL;

	for_each_sg(sgl, sg, sg_len, i) {
		desc->sg_req[i].len = sg_dma_len(sg);

		nb_data_items = desc->sg_req[i].len / buswidth;
		if (nb_data_items > STM32_DMA_MAX_DATA_ITEMS) {
			dev_err(chan2dev(chan), "nb items not supported\n");
			goto err;
		}

		stm32_dma_clear_reg(&desc->sg_req[i].chan_reg);
		desc->sg_req[i].chan_reg.dma_scr = chan->chan_reg.dma_scr;
		desc->sg_req[i].chan_reg.dma_sfcr = chan->chan_reg.dma_sfcr;
		desc->sg_req[i].chan_reg.dma_spar = chan->chan_reg.dma_spar;
		desc->sg_req[i].chan_reg.dma_sm0ar = sg_dma_address(sg);
		desc->sg_req[i].chan_reg.dma_sm1ar = sg_dma_address(sg);
		desc->sg_req[i].chan_reg.dma_sndtr = nb_data_items;
	}

	desc->num_sgs = sg_len;
	desc->cyclic = false;

	return vchan_tx_prep(&chan->vchan, &desc->vdesc, flags);

err:
	kfree(desc);
	return NULL;
}

static struct dma_async_tx_descriptor *stm32_dma_prep_dma_cyclic(
	struct dma_chan *c, dma_addr_t buf_addr, size_t buf_len,
	size_t period_len, enum dma_transfer_direction direction,
	unsigned long flags)
{
	struct stm32_dma_chan *chan = to_stm32_dma_chan(c);
	struct stm32_dma_desc *desc;
	enum dma_slave_buswidth buswidth;
	u32 num_periods, nb_data_items;
	int i, ret;

	if (!buf_len || !period_len) {
		dev_err(chan2dev(chan), "Invalid buffer/period len\n");
		return NULL;
	}

	if (!chan->config_init) {
		dev_err(chan2dev(chan), "dma channel is not configured\n");
		return NULL;
	}

	if (buf_len % period_len) {
		dev_err(chan2dev(chan), "buf_len not multiple of period_len\n");
		return NULL;
	}

	/*
	 * We allow to take more number of requests till DMA is
	 * not started. The driver will loop over all requests.
	 * Once DMA is started then new requests can be queued only after
	 * terminating the DMA.
	 */
	if (chan->busy) {
		dev_err(chan2dev(chan), "Request not allowed when dma busy\n");
		return NULL;
	}

	ret = stm32_dma_set_xfer_param(chan, direction, &buswidth);
	if (ret < 0)
		return NULL;

	nb_data_items = period_len / buswidth;
	if (nb_data_items > STM32_DMA_MAX_DATA_ITEMS) {
		dev_err(chan2dev(chan), "number of items not supported\n");
		return NULL;
	}

	/* Enable Circular mode or double buffer mode */
	if (buf_len == period_len)
		chan->chan_reg.dma_scr |= STM32_DMA_SCR_CIRC;
	else
		chan->chan_reg.dma_scr |= STM32_DMA_SCR_DBM;

	/* Clear periph ctrl if client set it */
	chan->chan_reg.dma_scr &= ~STM32_DMA_SCR_PFCTRL;

	num_periods = buf_len / period_len;

	desc = stm32_dma_alloc_desc(num_periods);
	if (!desc)
		return NULL;

	for (i = 0; i < num_periods; i++) {
		desc->sg_req[i].len = period_len;

		stm32_dma_clear_reg(&desc->sg_req[i].chan_reg);
		desc->sg_req[i].chan_reg.dma_scr = chan->chan_reg.dma_scr;
		desc->sg_req[i].chan_reg.dma_sfcr = chan->chan_reg.dma_sfcr;
		desc->sg_req[i].chan_reg.dma_spar = chan->chan_reg.dma_spar;
		desc->sg_req[i].chan_reg.dma_sm0ar = buf_addr;
		desc->sg_req[i].chan_reg.dma_sm1ar = buf_addr;
		desc->sg_req[i].chan_reg.dma_sndtr = nb_data_items;
		buf_addr += period_len;
	}

	desc->num_sgs = num_periods;
	desc->cyclic = true;

	return vchan_tx_prep(&chan->vchan, &desc->vdesc, flags);
}

static struct dma_async_tx_descriptor *stm32_dma_prep_dma_memcpy(
	struct dma_chan *c, dma_addr_t dest,
	dma_addr_t src, size_t len, unsigned long flags)
{
	struct stm32_dma_chan *chan = to_stm32_dma_chan(c);
	u32 num_sgs;
	struct stm32_dma_desc *desc;
	size_t xfer_count, offset;
	int i;

	num_sgs = DIV_ROUND_UP(len, STM32_DMA_MAX_DATA_ITEMS);
	desc = stm32_dma_alloc_desc(num_sgs);
	if (!desc)
		return NULL;

	for (offset = 0, i = 0; offset < len; offset += xfer_count, i++) {
		xfer_count = min_t(size_t, len - offset,
				   STM32_DMA_MAX_DATA_ITEMS);

		desc->sg_req[i].len = xfer_count;

		stm32_dma_clear_reg(&desc->sg_req[i].chan_reg);
		desc->sg_req[i].chan_reg.dma_scr =
			STM32_DMA_SCR_DIR(STM32_DMA_MEM_TO_MEM) |
			STM32_DMA_SCR_MINC |
			STM32_DMA_SCR_PINC |
			STM32_DMA_SCR_TCIE |
			STM32_DMA_SCR_TEIE;
		desc->sg_req[i].chan_reg.dma_sfcr = STM32_DMA_SFCR_DMDIS |
			STM32_DMA_SFCR_FTH(STM32_DMA_FIFO_THRESHOLD_FULL) |
			STM32_DMA_SFCR_FEIE;
		desc->sg_req[i].chan_reg.dma_spar = src + offset;
		desc->sg_req[i].chan_reg.dma_sm0ar = dest + offset;
		desc->sg_req[i].chan_reg.dma_sndtr = xfer_count;
	}

	desc->num_sgs = num_sgs;
	desc->cyclic = false;

	return vchan_tx_prep(&chan->vchan, &desc->vdesc, flags);
}

static size_t stm32_dma_desc_residue(struct stm32_dma_chan *chan,
				     struct stm32_dma_desc *desc,
				     u32 next_sg)
{
	struct stm32_dma_device *dmadev = stm32_dma_get_dev(chan);
	u32 dma_scr, width, residue, count;
	int i;

	residue = 0;

	for (i = next_sg; i < desc->num_sgs; i++)
		residue += desc->sg_req[i].len;

	if (next_sg != 0) {
		dma_scr = stm32_dma_read(dmadev, STM32_DMA_SCR(chan->id));
		width = STM32_DMA_SCR_PSIZE_GET(dma_scr);
		count = stm32_dma_read(dmadev, STM32_DMA_SNDTR(chan->id));

		residue += count << width;
	}

	return residue;
}

static enum dma_status stm32_dma_tx_status(struct dma_chan *c,
					   dma_cookie_t cookie,
					   struct dma_tx_state *state)
{
	struct stm32_dma_chan *chan = to_stm32_dma_chan(c);
	struct virt_dma_desc *vdesc;
	enum dma_status status;
	unsigned long flags;
	u32 residue;

	status = dma_cookie_status(c, cookie, state);
	if ((status == DMA_COMPLETE) || (!state))
		return status;

	spin_lock_irqsave(&chan->vchan.lock, flags);
	vdesc = vchan_find_desc(&chan->vchan, cookie);
	if (cookie == chan->desc->vdesc.tx.cookie) {
		residue = stm32_dma_desc_residue(chan, chan->desc,
						 chan->next_sg);
	} else if (vdesc) {
		residue = stm32_dma_desc_residue(chan,
						 to_stm32_dma_desc(vdesc), 0);
	} else {
		residue = 0;
	}

	dma_set_residue(state, residue);

	spin_unlock_irqrestore(&chan->vchan.lock, flags);

	return status;
}

static int stm32_dma_alloc_chan_resources(struct dma_chan *c)
{
	struct stm32_dma_chan *chan = to_stm32_dma_chan(c);
	struct stm32_dma_device *dmadev = stm32_dma_get_dev(chan);
	int ret;

	chan->config_init = false;
	ret = clk_prepare_enable(dmadev->clk);
	if (ret < 0) {
		dev_err(chan2dev(chan), "clk_prepare_enable failed: %d\n", ret);
		return ret;
	}

	ret = stm32_dma_disable_chan(chan);
	if (ret < 0)
		clk_disable_unprepare(dmadev->clk);

	return ret;
}

static void stm32_dma_free_chan_resources(struct dma_chan *c)
{
	struct stm32_dma_chan *chan = to_stm32_dma_chan(c);
	struct stm32_dma_device *dmadev = stm32_dma_get_dev(chan);
	unsigned long flags;

	dev_dbg(chan2dev(chan), "Freeing channel %d\n", chan->id);

	if (chan->busy) {
		spin_lock_irqsave(&chan->vchan.lock, flags);
		stm32_dma_stop(chan);
		chan->desc = NULL;
		spin_unlock_irqrestore(&chan->vchan.lock, flags);
	}

	clk_disable_unprepare(dmadev->clk);

	vchan_free_chan_resources(to_virt_chan(c));
}

static void stm32_dma_desc_free(struct virt_dma_desc *vdesc)
{
	kfree(container_of(vdesc, struct stm32_dma_desc, vdesc));
}

static void stm32_dma_set_config(struct stm32_dma_chan *chan,
				 struct stm32_dma_cfg *cfg)
{
	stm32_dma_clear_reg(&chan->chan_reg);

	chan->chan_reg.dma_scr = cfg->stream_config & STM32_DMA_SCR_CFG_MASK;
	chan->chan_reg.dma_scr |= STM32_DMA_SCR_REQ(cfg->request_line);

	/* Enable Interrupts */
	chan->chan_reg.dma_scr |= STM32_DMA_SCR_TEIE | STM32_DMA_SCR_TCIE;

	chan->chan_reg.dma_sfcr = cfg->threshold & STM32_DMA_SFCR_FTH_MASK;
}

static struct dma_chan *stm32_dma_of_xlate(struct of_phandle_args *dma_spec,
					   struct of_dma *ofdma)
{
	struct stm32_dma_device *dmadev = ofdma->of_dma_data;
	struct stm32_dma_cfg cfg;
	struct stm32_dma_chan *chan;
	struct dma_chan *c;

	if (dma_spec->args_count < 3)
		return NULL;

	cfg.channel_id = dma_spec->args[0];
	cfg.request_line = dma_spec->args[1];
	cfg.stream_config = dma_spec->args[2];
	cfg.threshold = 0;

	if ((cfg.channel_id >= STM32_DMA_MAX_CHANNELS) || (cfg.request_line >=
				STM32_DMA_MAX_REQUEST_ID))
		return NULL;

	if (dma_spec->args_count > 3)
		cfg.threshold = dma_spec->args[3];

	chan = &dmadev->chan[cfg.channel_id];

	c = dma_get_slave_channel(&chan->vchan.chan);
	if (c)
		stm32_dma_set_config(chan, &cfg);

	return c;
}

static const struct of_device_id stm32_dma_of_match[] = {
	{ .compatible = "st,stm32-dma", },
	{ /* sentinel */ },
};
MODULE_DEVICE_TABLE(of, stm32_dma_of_match);

static int stm32_dma_probe(struct platform_device *pdev)
{
	struct stm32_dma_chan *chan;
	struct stm32_dma_device *dmadev;
	struct dma_device *dd;
	const struct of_device_id *match;
	struct resource *res;
	int i, ret;

	match = of_match_device(stm32_dma_of_match, &pdev->dev);
	if (!match) {
		dev_err(&pdev->dev, "Error: No device match found\n");
		return -ENODEV;
	}

	dmadev = devm_kzalloc(&pdev->dev, sizeof(*dmadev), GFP_KERNEL);
	if (!dmadev)
		return -ENOMEM;

	dd = &dmadev->ddev;

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	dmadev->base = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(dmadev->base))
		return PTR_ERR(dmadev->base);

	dmadev->clk = devm_clk_get(&pdev->dev, NULL);
	if (IS_ERR(dmadev->clk)) {
		dev_err(&pdev->dev, "Error: Missing controller clock\n");
		return PTR_ERR(dmadev->clk);
	}

	dmadev->mem2mem = of_property_read_bool(pdev->dev.of_node,
						"st,mem2mem");

	dmadev->rst = devm_reset_control_get(&pdev->dev, NULL);
	if (!IS_ERR(dmadev->rst)) {
		reset_control_assert(dmadev->rst);
		udelay(2);
		reset_control_deassert(dmadev->rst);
	}

	dma_cap_set(DMA_SLAVE, dd->cap_mask);
	dma_cap_set(DMA_PRIVATE, dd->cap_mask);
	dma_cap_set(DMA_CYCLIC, dd->cap_mask);
	dd->device_alloc_chan_resources = stm32_dma_alloc_chan_resources;
	dd->device_free_chan_resources = stm32_dma_free_chan_resources;
	dd->device_tx_status = stm32_dma_tx_status;
	dd->device_issue_pending = stm32_dma_issue_pending;
	dd->device_prep_slave_sg = stm32_dma_prep_slave_sg;
	dd->device_prep_dma_cyclic = stm32_dma_prep_dma_cyclic;
	dd->device_config = stm32_dma_slave_config;
	dd->device_terminate_all = stm32_dma_terminate_all;
	dd->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
		BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
		BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
	dd->dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
		BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
		BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
	dd->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
	dd->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
	dd->dev = &pdev->dev;
	INIT_LIST_HEAD(&dd->channels);

	if (dmadev->mem2mem) {
		dma_cap_set(DMA_MEMCPY, dd->cap_mask);
		dd->device_prep_dma_memcpy = stm32_dma_prep_dma_memcpy;
		dd->directions |= BIT(DMA_MEM_TO_MEM);
	}

	for (i = 0; i < STM32_DMA_MAX_CHANNELS; i++) {
		chan = &dmadev->chan[i];
		chan->id = i;
		chan->vchan.desc_free = stm32_dma_desc_free;
		vchan_init(&chan->vchan, dd);
	}

	ret = dma_async_device_register(dd);
	if (ret)
		return ret;

	for (i = 0; i < STM32_DMA_MAX_CHANNELS; i++) {
		chan = &dmadev->chan[i];
		res = platform_get_resource(pdev, IORESOURCE_IRQ, i);
		if (!res) {
			ret = -EINVAL;
			dev_err(&pdev->dev, "No irq resource for chan %d\n", i);
			goto err_unregister;
		}
		chan->irq = res->start;
		ret = devm_request_irq(&pdev->dev, chan->irq,
				       stm32_dma_chan_irq, 0,
				       dev_name(chan2dev(chan)), chan);
		if (ret) {
			dev_err(&pdev->dev,
				"request_irq failed with err %d channel %d\n",
				ret, i);
			goto err_unregister;
		}
	}

	ret = of_dma_controller_register(pdev->dev.of_node,
					 stm32_dma_of_xlate, dmadev);
	if (ret < 0) {
		dev_err(&pdev->dev,
			"STM32 DMA DMA OF registration failed %d\n", ret);
		goto err_unregister;
	}

	platform_set_drvdata(pdev, dmadev);

	dev_info(&pdev->dev, "STM32 DMA driver registered\n");

	return 0;

err_unregister:
	dma_async_device_unregister(dd);

	return ret;
}

static struct platform_driver stm32_dma_driver = {
	.driver = {
		.name = "stm32-dma",
		.of_match_table = stm32_dma_of_match,
	},
};

static int __init stm32_dma_init(void)
{
	return platform_driver_probe(&stm32_dma_driver, stm32_dma_probe);
}
subsys_initcall(stm32_dma_init);
drivers/dma/tegra20-apb-dma.c (+40 -33)
···
 	spin_unlock_irqrestore(&tdc->lock, flags);
 
 	/* Allocate DMA desc */
-	dma_desc = kzalloc(sizeof(*dma_desc), GFP_ATOMIC);
+	dma_desc = kzalloc(sizeof(*dma_desc), GFP_NOWAIT);
 	if (!dma_desc) {
 		dev_err(tdc2dev(tdc), "dma_desc alloc failed\n");
 		return NULL;
···
 	}
 	spin_unlock_irqrestore(&tdc->lock, flags);
 
-	sg_req = kzalloc(sizeof(struct tegra_dma_sg_req), GFP_ATOMIC);
+	sg_req = kzalloc(sizeof(struct tegra_dma_sg_req), GFP_NOWAIT);
 	if (!sg_req)
 		dev_err(tdc2dev(tdc), "sg_req alloc failed\n");
 	return sg_req;
···
 
 	dma_cookie_init(&tdc->dma_chan);
 	tdc->config_init = false;
-	ret = clk_prepare_enable(tdma->dma_clk);
+
+	ret = pm_runtime_get_sync(tdma->dev);
 	if (ret < 0)
-		dev_err(tdc2dev(tdc), "clk_prepare_enable failed: %d\n", ret);
-	return ret;
+		return ret;
+
+	return 0;
 }
 
 static void tegra_dma_free_chan_resources(struct dma_chan *dc)
···
 		list_del(&sg_req->node);
 		kfree(sg_req);
 	}
-	clk_disable_unprepare(tdma->dma_clk);
+	pm_runtime_put(tdma->dev);
 
 	tdc->slave_id = 0;
 }
···
 	spin_lock_init(&tdma->global_lock);
 
 	pm_runtime_enable(&pdev->dev);
-	if (!pm_runtime_enabled(&pdev->dev)) {
+	if (!pm_runtime_enabled(&pdev->dev))
 		ret = tegra_dma_runtime_resume(&pdev->dev);
-		if (ret) {
-			dev_err(&pdev->dev, "dma_runtime_resume failed %d\n",
-				ret);
-			goto err_pm_disable;
-		}
-	}
+	else
+		ret = pm_runtime_get_sync(&pdev->dev);
 
-	/* Enable clock before accessing registers */
-	ret = clk_prepare_enable(tdma->dma_clk);
 	if (ret < 0) {
-		dev_err(&pdev->dev, "clk_prepare_enable failed: %d\n", ret);
-		goto err_pm_disable;
+		pm_runtime_disable(&pdev->dev);
+		return ret;
 	}
 
 	/* Reset DMA controller */
···
 	tdma_write(tdma, TEGRA_APBDMA_CONTROL, 0);
 	tdma_write(tdma, TEGRA_APBDMA_IRQ_MASK_SET, 0xFFFFFFFFul);
 
-	clk_disable_unprepare(tdma->dma_clk);
+	pm_runtime_put(&pdev->dev);
 
 	INIT_LIST_HEAD(&tdma->dma_dev.channels);
 	for (i = 0; i < cdata->nr_channels; i++) {
···
 		}
 		tdc->irq = res->start;
 		snprintf(tdc->name, sizeof(tdc->name), "apbdma.%d", i);
-		ret = devm_request_irq(&pdev->dev, tdc->irq,
-				tegra_dma_isr, 0, tdc->name, tdc);
+		ret = request_irq(tdc->irq, tegra_dma_isr, 0, tdc->name, tdc);
 		if (ret) {
 			dev_err(&pdev->dev,
 				"request_irq failed with err %d channel %d\n",
···
 err_irq:
 	while (--i >= 0) {
 		struct tegra_dma_channel *tdc = &tdma->channels[i];
+
+		free_irq(tdc->irq, tdc);
 		tasklet_kill(&tdc->tasklet);
 	}
 
-err_pm_disable:
 	pm_runtime_disable(&pdev->dev);
 	if (!pm_runtime_status_suspended(&pdev->dev))
 		tegra_dma_runtime_suspend(&pdev->dev);
···
 
 	for (i = 0; i < tdma->chip_data->nr_channels; ++i) {
 		tdc = &tdma->channels[i];
+		free_irq(tdc->irq, tdc);
 		tasklet_kill(&tdc->tasklet);
 	}
 
···
 
 static int tegra_dma_runtime_suspend(struct device *dev)
 {
-	struct platform_device *pdev = to_platform_device(dev);
-	struct tegra_dma *tdma = platform_get_drvdata(pdev);
+	struct tegra_dma *tdma = dev_get_drvdata(dev);
 
 	clk_disable_unprepare(tdma->dma_clk);
 	return 0;
···
 
 static int tegra_dma_runtime_resume(struct device *dev)
 {
-	struct platform_device *pdev = to_platform_device(dev);
-	struct tegra_dma *tdma = platform_get_drvdata(pdev);
+	struct tegra_dma *tdma = dev_get_drvdata(dev);
 	int ret;
 
 	ret = clk_prepare_enable(tdma->dma_clk);
···
 	int ret;
 
 	/* Enable clock before accessing register */
-	ret = tegra_dma_runtime_resume(dev);
+	ret = pm_runtime_get_sync(dev);
 	if (ret < 0)
 		return ret;
 
···
 		struct tegra_dma_channel *tdc = &tdma->channels[i];
 		struct tegra_dma_channel_regs *ch_reg = &tdc->channel_reg;
 
+		/* Only save the state of DMA channels that are in use */
+		if (!tdc->config_init)
+			continue;
+
 		ch_reg->csr = tdc_read(tdc, TEGRA_APBDMA_CHAN_CSR);
 		ch_reg->ahb_ptr = tdc_read(tdc, TEGRA_APBDMA_CHAN_AHBPTR);
 		ch_reg->apb_ptr = tdc_read(tdc, TEGRA_APBDMA_CHAN_APBPTR);
 		ch_reg->ahb_seq = tdc_read(tdc, TEGRA_APBDMA_CHAN_AHBSEQ);
 		ch_reg->apb_seq = tdc_read(tdc, TEGRA_APBDMA_CHAN_APBSEQ);
+		if (tdma->chip_data->support_separate_wcount_reg)
+			ch_reg->wcount = tdc_read(tdc,
+						  TEGRA_APBDMA_CHAN_WCOUNT);
 	}
 
 	/* Disable clock */
-	tegra_dma_runtime_suspend(dev);
+	pm_runtime_put(dev);
 	return 0;
 }
···
 	int ret;
 
 	/* Enable clock before accessing register */
-	ret = tegra_dma_runtime_resume(dev);
+	ret = pm_runtime_get_sync(dev);
 	if (ret < 0)
 		return ret;
 
···
 		struct tegra_dma_channel *tdc = &tdma->channels[i];
 		struct tegra_dma_channel_regs *ch_reg = &tdc->channel_reg;
 
+		/* Only restore the state of DMA channels that are in use */
+		if (!tdc->config_init)
+			continue;
+
+		if (tdma->chip_data->support_separate_wcount_reg)
+			tdc_write(tdc, TEGRA_APBDMA_CHAN_WCOUNT,
+				  ch_reg->wcount);
 		tdc_write(tdc, TEGRA_APBDMA_CHAN_APBSEQ, ch_reg->apb_seq);
 		tdc_write(tdc, TEGRA_APBDMA_CHAN_APBPTR, ch_reg->apb_ptr);
 		tdc_write(tdc, TEGRA_APBDMA_CHAN_AHBSEQ, ch_reg->ahb_seq);
···
 	}
 
 	/* Disable clock */
-	tegra_dma_runtime_suspend(dev);
+	pm_runtime_put(dev);
 	return 0;
 }
 #endif
 
 static const struct dev_pm_ops tegra_dma_dev_pm_ops = {
-#ifdef CONFIG_PM
-	.runtime_suspend = tegra_dma_runtime_suspend,
-	.runtime_resume = tegra_dma_runtime_resume,
-#endif
+	SET_RUNTIME_PM_OPS(tegra_dma_runtime_suspend, tegra_dma_runtime_resume,
+			   NULL)
 	SET_SYSTEM_SLEEP_PM_OPS(tegra_dma_pm_suspend, tegra_dma_pm_resume)
 };
drivers/dma/ti-dma-crossbar.c (+70 -11)
···
 #include <linux/init.h>
 #include <linux/list.h>
 #include <linux/io.h>
-#include <linux/idr.h>
 #include <linux/of_address.h>
 #include <linux/of_device.h>
 #include <linux/of_dma.h>
···
 	void __iomem *iomem;
 
 	struct dma_router dmarouter;
-	struct idr map_idr;
+	struct mutex mutex;
+	unsigned long *dma_inuse;
 
 	u16 safe_val; /* Value to rest the crossbar lines */
 	u32 xbar_requests; /* number of DMA requests connected to XBAR */
···
 		map->xbar_in, map->xbar_out);
 
 	ti_dra7_xbar_write(xbar->iomem, map->xbar_out, xbar->safe_val);
-	idr_remove(&xbar->map_idr, map->xbar_out);
+	mutex_lock(&xbar->mutex);
+	clear_bit(map->xbar_out, xbar->dma_inuse);
+	mutex_unlock(&xbar->mutex);
 	kfree(map);
 }
···
 		return ERR_PTR(-ENOMEM);
 	}
 
-	map->xbar_out = idr_alloc(&xbar->map_idr, NULL, 0, xbar->dma_requests,
-				  GFP_KERNEL);
+	mutex_lock(&xbar->mutex);
+	map->xbar_out = find_first_zero_bit(xbar->dma_inuse,
+					    xbar->dma_requests);
+	mutex_unlock(&xbar->mutex);
+	if (map->xbar_out == xbar->dma_requests) {
+		dev_err(&pdev->dev, "Run out of free DMA requests\n");
+		kfree(map);
+		return ERR_PTR(-ENOMEM);
+	}
+	set_bit(map->xbar_out, xbar->dma_inuse);
+
 	map->xbar_in = (u16)dma_spec->args[0];
 
 	dma_spec->args[0] = map->xbar_out + xbar->dma_offset;
···
 		.compatible = "ti,edma3",
 		.data = (void *)TI_XBAR_EDMA_OFFSET,
 	},
+	{
+		.compatible = "ti,edma3-tpcc",
+		.data = (void *)TI_XBAR_EDMA_OFFSET,
+	},
 	{},
 };
+
+static inline void ti_dra7_xbar_reserve(int offset, int len, unsigned long *p)
+{
+	for (; len > 0; len--)
+		set_bit(offset + (len - 1), p);
+}
 
 static int ti_dra7_xbar_probe(struct platform_device *pdev)
 {
···
 	const struct of_device_id *match;
 	struct device_node *dma_node;
 	struct ti_dra7_xbar_data *xbar;
+	struct property *prop;
 	struct resource *res;
 	u32 safe_val;
+	size_t sz;
 	void __iomem *iomem;
 	int i, ret;
 
···
 	xbar = devm_kzalloc(&pdev->dev, sizeof(*xbar), GFP_KERNEL);
 	if (!xbar)
 		return -ENOMEM;
-
-	idr_init(&xbar->map_idr);
 
 	dma_node = of_parse_phandle(node, "dma-masters", 0);
 	if (!dma_node) {
···
 	}
 	of_node_put(dma_node);
 
+	xbar->dma_inuse = devm_kcalloc(&pdev->dev,
+				       BITS_TO_LONGS(xbar->dma_requests),
+				       sizeof(unsigned long), GFP_KERNEL);
+	if (!xbar->dma_inuse)
+		return -ENOMEM;
+
 	if (of_property_read_u32(node, "dma-requests", &xbar->xbar_requests)) {
 		dev_info(&pdev->dev,
 			 "Missing XBAR input information, using %u.\n",
···
 
 	if (!of_property_read_u32(node, "ti,dma-safe-map", &safe_val))
 		xbar->safe_val = (u16)safe_val;
+
+
+	prop = of_find_property(node, "ti,reserved-dma-request-ranges", &sz);
+	if (prop) {
+		const char pname[] = "ti,reserved-dma-request-ranges";
+		u32 (*rsv_events)[2];
+		size_t nelm = sz / sizeof(*rsv_events);
+		int i;
+
+		if (!nelm)
+			return -EINVAL;
+
+		rsv_events = kcalloc(nelm, sizeof(*rsv_events), GFP_KERNEL);
+		if (!rsv_events)
+			return -ENOMEM;
+
+		ret = of_property_read_u32_array(node, pname, (u32 *)rsv_events,
+						 nelm * 2);
+		if (ret)
+			return ret;
+
+		for (i = 0; i < nelm; i++) {
+			ti_dra7_xbar_reserve(rsv_events[i][0], rsv_events[i][1],
+					     xbar->dma_inuse);
+		}
+		kfree(rsv_events);
+	}
 
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 	iomem = devm_ioremap_resource(&pdev->dev, res);
···
 	xbar->dmarouter.route_free = ti_dra7_xbar_free;
 	xbar->dma_offset = (u32)match->data;
 
+	mutex_init(&xbar->mutex);
 	platform_set_drvdata(pdev, xbar);
 
 	/* Reset the crossbar */
-	for (i = 0; i < xbar->dma_requests; i++)
-		ti_dra7_xbar_write(xbar->iomem, i, xbar->safe_val);
+	for (i = 0; i < xbar->dma_requests; i++) {
+		if (!test_bit(i, xbar->dma_inuse))
+			ti_dra7_xbar_write(xbar->iomem, i, xbar->safe_val);
+	}
 
 	ret = of_dma_router_register(node, ti_dra7_xbar_route_allocate,
 				     &xbar->dmarouter);
 	if (ret) {
 		/* Restore the defaults for the crossbar */
-		for (i = 0; i < xbar->dma_requests; i++)
-			ti_dra7_xbar_write(xbar->iomem, i, i);
+		for (i = 0; i < xbar->dma_requests; i++) {
+			if (!test_bit(i, xbar->dma_inuse))
+				ti_dra7_xbar_write(xbar->iomem, i, i);
+		}
 	}
 
 	return ret;
drivers/dma/virt-dma.c (+40 -6)
···
 	spin_lock_irqsave(&vc->lock, flags);
 	cookie = dma_cookie_assign(tx);
 
-	list_add_tail(&vd->node, &vc->desc_submitted);
+	list_move_tail(&vd->node, &vc->desc_submitted);
 	spin_unlock_irqrestore(&vc->lock, flags);
 
 	dev_dbg(vc->chan.device->dev, "vchan %p: txd %p[%x]: submitted\n",
···
 	return cookie;
 }
 EXPORT_SYMBOL_GPL(vchan_tx_submit);
+
+/**
+ * vchan_tx_desc_free - free a reusable descriptor
+ * @tx: the transfer
+ *
+ * This function frees a previously allocated reusable descriptor. The only
+ * other way is to clear the DMA_CTRL_REUSE flag and submit one last time the
+ * transfer.
+ *
+ * Returns 0 upon success
+ */
+int vchan_tx_desc_free(struct dma_async_tx_descriptor *tx)
+{
+	struct virt_dma_chan *vc = to_virt_chan(tx->chan);
+	struct virt_dma_desc *vd = to_virt_desc(tx);
+	unsigned long flags;
+
+	spin_lock_irqsave(&vc->lock, flags);
+	list_del(&vd->node);
+	spin_unlock_irqrestore(&vc->lock, flags);
+
+	dev_dbg(vc->chan.device->dev, "vchan %p: txd %p[%x]: freeing\n",
+		vc, vd, vd->tx.cookie);
+	vc->desc_free(vd);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(vchan_tx_desc_free);
 
 struct virt_dma_desc *vchan_find_desc(struct virt_dma_chan *vc,
 	dma_cookie_t cookie)
···
 		cb_data = vd->tx.callback_param;
 
 		list_del(&vd->node);
-
-		vc->desc_free(vd);
+		if (dmaengine_desc_test_reuse(&vd->tx))
+			list_add(&vd->node, &vc->desc_allocated);
+		else
+			vc->desc_free(vd);
 
 		if (cb)
 			cb(cb_data);
···
 	while (!list_empty(head)) {
 		struct virt_dma_desc *vd = list_first_entry(head,
 			struct virt_dma_desc, node);
-		list_del(&vd->node);
-		dev_dbg(vc->chan.device->dev, "txd %p: freeing\n", vd);
-		vc->desc_free(vd);
+		if (dmaengine_desc_test_reuse(&vd->tx)) {
+			list_move_tail(&vd->node, &vc->desc_allocated);
+		} else {
+			dev_dbg(vc->chan.device->dev, "txd %p: freeing\n", vd);
+			list_del(&vd->node);
+			vc->desc_free(vd);
+		}
 	}
 }
 EXPORT_SYMBOL_GPL(vchan_dma_desc_free_list);
···
 	dma_cookie_init(&vc->chan);
 
 	spin_lock_init(&vc->lock);
+	INIT_LIST_HEAD(&vc->desc_allocated);
 	INIT_LIST_HEAD(&vc->desc_submitted);
 	INIT_LIST_HEAD(&vc->desc_issued);
 	INIT_LIST_HEAD(&vc->desc_completed);
drivers/dma/virt-dma.h (+25)
···
 	spinlock_t lock;
 
 	/* protected by vc.lock */
+	struct list_head desc_allocated;
 	struct list_head desc_submitted;
 	struct list_head desc_issued;
 	struct list_head desc_completed;
···
 	struct virt_dma_desc *vd, unsigned long tx_flags)
 {
 	extern dma_cookie_t vchan_tx_submit(struct dma_async_tx_descriptor *);
+	extern int vchan_tx_desc_free(struct dma_async_tx_descriptor *);
+	unsigned long flags;
 
 	dma_async_tx_descriptor_init(&vd->tx, &vc->chan);
 	vd->tx.flags = tx_flags;
 	vd->tx.tx_submit = vchan_tx_submit;
+	vd->tx.desc_free = vchan_tx_desc_free;
+
+	spin_lock_irqsave(&vc->lock, flags);
+	list_add_tail(&vd->node, &vc->desc_allocated);
+	spin_unlock_irqrestore(&vc->lock, flags);
 
 	return &vd->tx;
 }
···
 static inline void vchan_get_all_descriptors(struct virt_dma_chan *vc,
 	struct list_head *head)
 {
+	list_splice_tail_init(&vc->desc_allocated, head);
 	list_splice_tail_init(&vc->desc_submitted, head);
 	list_splice_tail_init(&vc->desc_issued, head);
 	list_splice_tail_init(&vc->desc_completed, head);
···
 
 static inline void vchan_free_chan_resources(struct virt_dma_chan *vc)
 {
+	struct virt_dma_desc *vd;
 	unsigned long flags;
 	LIST_HEAD(head);
 
 	spin_lock_irqsave(&vc->lock, flags);
 	vchan_get_all_descriptors(vc, &head);
+	list_for_each_entry(vd, &head, node)
+		dmaengine_desc_clear_reuse(&vd->tx);
 	spin_unlock_irqrestore(&vc->lock, flags);
 
 	vchan_dma_desc_free_list(vc, &head);
+}
+
+/**
+ * vchan_synchronize() - synchronize callback execution to the current context
+ * @vc: virtual channel to synchronize
+ *
+ * Makes sure that all scheduled or active callbacks have finished running. For
+ * proper operation the caller has to ensure that no new callbacks are scheduled
+ * after the invocation of this function started.
+ */
+static inline void vchan_synchronize(struct virt_dma_chan *vc)
+{
+	tasklet_kill(&vc->task);
 }
 
 #endif
+3 -2
include/linux/dca.h
···
 
 struct dca_provider {
 	struct list_head	node;
-	struct dca_ops		*ops;
+	const struct dca_ops	*ops;
 	struct device		*cd;
 	int			 id;
 };
···
 	int	(*dev_managed)     (struct dca_provider *, struct device *);
 };
 
-struct dca_provider *alloc_dca_provider(struct dca_ops *ops, int priv_size);
+struct dca_provider *alloc_dca_provider(const struct dca_ops *ops,
+					int priv_size);
 void free_dca_provider(struct dca_provider *dca);
 int register_dca_provider(struct dca_provider *dca, struct device *dev);
 void unregister_dca_provider(struct dca_provider *dca, struct device *dev);
+138 -7
include/linux/dmaengine.h
···
 };
 
 /**
+ * struct dma_slave_map - associates slave device and its slave channel with
+ * parameter to be used by a filter function
+ * @devname: name of the device
+ * @slave: slave channel name
+ * @param: opaque parameter to pass to struct dma_filter.fn
+ */
+struct dma_slave_map {
+	const char *devname;
+	const char *slave;
+	void *param;
+};
+
+/**
+ * struct dma_filter - information for slave device/channel to filter_fn/param
+ * mapping
+ * @fn: filter function callback
+ * @mapcnt: number of slave device/channel in the map
+ * @map: array of channel to filter mapping data
+ */
+struct dma_filter {
+	dma_filter_fn fn;
+	int mapcnt;
+	const struct dma_slave_map *map;
+};
+
+/**
  * struct dma_device - info on the entity supplying DMA services
  * @chancnt: how many DMA channels are supported
  * @privatecnt: how many DMA channels are requested by dma_request_channel
  * @channels: the list of struct dma_chan
  * @global_node: list_head for global dma_device_list
+ * @filter: information for device/slave to filter function/param mapping
  * @cap_mask: one or more dma_capability flags
  * @max_xor: maximum number of xor sources, 0 if no capability
  * @max_pq: maximum number of PQ sources and PQ-continue capability
···
  *	paused. Returns 0 or an error code
  * @device_terminate_all: Aborts all transfers on a channel. Returns 0
  *	or an error code
+ * @device_synchronize: Synchronizes the termination of a transfer to the
+ *	current context.
  * @device_tx_status: poll for transaction completion, the optional
  *	txstate parameter can be supplied with a pointer to get a
  *	struct with auxiliary transfer status information, otherwise the call
  *	will just return a simple status code
  * @device_issue_pending: push pending transactions to hardware
+ * @descriptor_reuse: a submitted transfer can be resubmitted after completion
  */
 struct dma_device {
 
···
 	unsigned int privatecnt;
 	struct list_head channels;
 	struct list_head global_node;
+	struct dma_filter filter;
 	dma_cap_mask_t cap_mask;
 	unsigned short max_xor;
 	unsigned short max_pq;
···
 	u32 src_addr_widths;
 	u32 dst_addr_widths;
 	u32 directions;
+	bool descriptor_reuse;
 	enum dma_residue_granularity residue_granularity;
 
 	int (*device_alloc_chan_resources)(struct dma_chan *chan);
···
 	int (*device_pause)(struct dma_chan *chan);
 	int (*device_resume)(struct dma_chan *chan);
 	int (*device_terminate_all)(struct dma_chan *chan);
+	void (*device_synchronize)(struct dma_chan *chan);
 
 	enum dma_status (*device_tx_status)(struct dma_chan *chan,
 					    dma_cookie_t cookie,
···
 			src_sg, src_nents, flags);
 }
 
+/**
+ * dmaengine_terminate_all() - Terminate all active DMA transfers
+ * @chan: The channel for which to terminate the transfers
+ *
+ * This function is DEPRECATED, use either dmaengine_terminate_sync() or
+ * dmaengine_terminate_async() instead.
+ */
 static inline int dmaengine_terminate_all(struct dma_chan *chan)
 {
 	if (chan->device->device_terminate_all)
 		return chan->device->device_terminate_all(chan);
 
 	return -ENOSYS;
+}
+
+/**
+ * dmaengine_terminate_async() - Terminate all active DMA transfers
+ * @chan: The channel for which to terminate the transfers
+ *
+ * Calling this function will terminate all active and pending descriptors
+ * that have previously been submitted to the channel. It is not guaranteed
+ * though that the transfer for the active descriptor has stopped when the
+ * function returns. Furthermore it is possible the complete callback of a
+ * submitted transfer is still running when this function returns.
+ *
+ * dmaengine_synchronize() needs to be called before it is safe to free
+ * any memory that is accessed by previously submitted descriptors or before
+ * freeing any resources accessed from within the completion callback of any
+ * previously submitted descriptors.
+ *
+ * This function can be called from atomic context as well as from within a
+ * complete callback of a descriptor submitted on the same channel.
+ *
+ * If none of the two conditions above apply consider using
+ * dmaengine_terminate_sync() instead.
+ */
+static inline int dmaengine_terminate_async(struct dma_chan *chan)
+{
+	if (chan->device->device_terminate_all)
+		return chan->device->device_terminate_all(chan);
+
+	return -EINVAL;
+}
+
+/**
+ * dmaengine_synchronize() - Synchronize DMA channel termination
+ * @chan: The channel to synchronize
+ *
+ * Synchronizes the termination of the DMA channel to the current context. When
+ * this function returns it is guaranteed that all transfers for previously
+ * issued descriptors have stopped and it is safe to free the memory associated
+ * with them. Furthermore it is guaranteed that all complete callback functions
+ * for a previously submitted descriptor have finished running and it is safe to
+ * free resources accessed from within the complete callbacks.
+ *
+ * The behavior of this function is undefined if dma_async_issue_pending() has
+ * been called between dmaengine_terminate_async() and this function.
+ *
+ * This function must only be called from non-atomic context and must not be
+ * called from within a complete callback of a descriptor submitted on the same
+ * channel.
+ */
+static inline void dmaengine_synchronize(struct dma_chan *chan)
+{
+	might_sleep();
+
+	if (chan->device->device_synchronize)
+		chan->device->device_synchronize(chan);
+}
+
+/**
+ * dmaengine_terminate_sync() - Terminate all active DMA transfers
+ * @chan: The channel for which to terminate the transfers
+ *
+ * Calling this function will terminate all active and pending transfers
+ * that have previously been submitted to the channel. It is similar to
+ * dmaengine_terminate_async() but guarantees that the DMA transfer has actually
+ * stopped and that all complete callbacks have finished running when the
+ * function returns.
+ *
+ * This function must only be called from non-atomic context and must not be
+ * called from within a complete callback of a descriptor submitted on the same
+ * channel.
+ */
+static inline int dmaengine_terminate_sync(struct dma_chan *chan)
+{
+	int ret;
+
+	ret = dmaengine_terminate_async(chan);
+	if (ret)
+		return ret;
+
+	dmaengine_synchronize(chan);
+
+	return 0;
 }
 
 static inline int dmaengine_pause(struct dma_chan *chan)
···
 void dma_issue_pending_all(void);
 struct dma_chan *__dma_request_channel(const dma_cap_mask_t *mask,
 				       dma_filter_fn fn, void *fn_param);
-struct dma_chan *dma_request_slave_channel_reason(struct device *dev,
-						  const char *name);
 struct dma_chan *dma_request_slave_channel(struct device *dev, const char *name);
+
+struct dma_chan *dma_request_chan(struct device *dev, const char *name);
+struct dma_chan *dma_request_chan_by_mask(const dma_cap_mask_t *mask);
+
 void dma_release_channel(struct dma_chan *chan);
 int dma_get_slave_caps(struct dma_chan *chan, struct dma_slave_caps *caps);
 #else
···
 {
 	return NULL;
 }
-static inline struct dma_chan *dma_request_slave_channel_reason(
-					struct device *dev, const char *name)
-{
-	return ERR_PTR(-ENODEV);
-}
 static inline struct dma_chan *dma_request_slave_channel(struct device *dev,
 							 const char *name)
 {
 	return NULL;
 }
+static inline struct dma_chan *dma_request_chan(struct device *dev,
+						const char *name)
+{
+	return ERR_PTR(-ENODEV);
+}
+static inline struct dma_chan *dma_request_chan_by_mask(
+					const dma_cap_mask_t *mask)
+{
+	return ERR_PTR(-ENODEV);
+}
 static inline void dma_release_channel(struct dma_chan *chan)
 {
···
 	return -ENXIO;
 }
 #endif
+
+#define dma_request_slave_channel_reason(dev, name) dma_request_chan(dev, name)
 
 static inline int
 dmaengine_desc_set_reuse(struct dma_async_tx_descriptor *tx)
 {
+6
include/linux/omap-dma.h
···
 	u8 type;
 };
 
+#define SDMA_FILTER_PARAM(hw_req)	((int[]) { (hw_req) })
+
+struct dma_slave_map;
+
 /* System DMA platform data structure */
 struct omap_system_dma_plat_info {
 	const struct omap_dma_reg *reg_map;
···
 	void (*clear_dma)(int lch);
 	void (*dma_write)(u32 val, int reg, int lch);
 	u32 (*dma_read)(int reg, int lch);
+
+	const struct dma_slave_map *slave_map;
+	int slavecnt;
 };
 
 #ifdef CONFIG_ARCH_OMAP2PLUS
-103
include/linux/platform_data/dma-rcar-hpbdma.h
···
-/*
- * Copyright (C) 2011-2013 Renesas Electronics Corporation
- * Copyright (C) 2013 Cogent Embedded, Inc.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2
- * as published by the Free Software Foundation.
- */
-
-#ifndef __DMA_RCAR_HPBDMA_H
-#define __DMA_RCAR_HPBDMA_H
-
-#include <linux/bitops.h>
-#include <linux/types.h>
-
-/* Transmit sizes and respective register values */
-enum {
-	XMIT_SZ_8BIT	= 0,
-	XMIT_SZ_16BIT	= 1,
-	XMIT_SZ_32BIT	= 2,
-	XMIT_SZ_MAX
-};
-
-/* DMA control register (DCR) bits */
-#define HPB_DMAE_DCR_DTAMD		(1u << 26)
-#define HPB_DMAE_DCR_DTAC		(1u << 25)
-#define HPB_DMAE_DCR_DTAU		(1u << 24)
-#define HPB_DMAE_DCR_DTAU1		(1u << 23)
-#define HPB_DMAE_DCR_SWMD		(1u << 22)
-#define HPB_DMAE_DCR_BTMD		(1u << 21)
-#define HPB_DMAE_DCR_PKMD		(1u << 20)
-#define HPB_DMAE_DCR_CT			(1u << 18)
-#define HPB_DMAE_DCR_ACMD		(1u << 17)
-#define HPB_DMAE_DCR_DIP		(1u << 16)
-#define HPB_DMAE_DCR_SMDL		(1u << 13)
-#define HPB_DMAE_DCR_SPDAM		(1u << 12)
-#define HPB_DMAE_DCR_SDRMD_MASK		(3u << 10)
-#define HPB_DMAE_DCR_SDRMD_MOD		(0u << 10)
-#define HPB_DMAE_DCR_SDRMD_AUTO		(1u << 10)
-#define HPB_DMAE_DCR_SDRMD_TIMER	(2u << 10)
-#define HPB_DMAE_DCR_SPDS_MASK		(3u << 8)
-#define HPB_DMAE_DCR_SPDS_8BIT		(0u << 8)
-#define HPB_DMAE_DCR_SPDS_16BIT		(1u << 8)
-#define HPB_DMAE_DCR_SPDS_32BIT		(2u << 8)
-#define HPB_DMAE_DCR_DMDL		(1u << 5)
-#define HPB_DMAE_DCR_DPDAM		(1u << 4)
-#define HPB_DMAE_DCR_DDRMD_MASK		(3u << 2)
-#define HPB_DMAE_DCR_DDRMD_MOD		(0u << 2)
-#define HPB_DMAE_DCR_DDRMD_AUTO		(1u << 2)
-#define HPB_DMAE_DCR_DDRMD_TIMER	(2u << 2)
-#define HPB_DMAE_DCR_DPDS_MASK		(3u << 0)
-#define HPB_DMAE_DCR_DPDS_8BIT		(0u << 0)
-#define HPB_DMAE_DCR_DPDS_16BIT		(1u << 0)
-#define HPB_DMAE_DCR_DPDS_32BIT		(2u << 0)
-
-/* Asynchronous reset register (ASYNCRSTR) bits */
-#define HPB_DMAE_ASYNCRSTR_ASRST41	BIT(10)
-#define HPB_DMAE_ASYNCRSTR_ASRST40	BIT(9)
-#define HPB_DMAE_ASYNCRSTR_ASRST39	BIT(8)
-#define HPB_DMAE_ASYNCRSTR_ASRST27	BIT(7)
-#define HPB_DMAE_ASYNCRSTR_ASRST26	BIT(6)
-#define HPB_DMAE_ASYNCRSTR_ASRST25	BIT(5)
-#define HPB_DMAE_ASYNCRSTR_ASRST24	BIT(4)
-#define HPB_DMAE_ASYNCRSTR_ASRST23	BIT(3)
-#define HPB_DMAE_ASYNCRSTR_ASRST22	BIT(2)
-#define HPB_DMAE_ASYNCRSTR_ASRST21	BIT(1)
-#define HPB_DMAE_ASYNCRSTR_ASRST20	BIT(0)
-
-struct hpb_dmae_slave_config {
-	unsigned int	id;
-	dma_addr_t	addr;
-	u32		dcr;
-	u32		port;
-	u32		rstr;
-	u32		mdr;
-	u32		mdm;
-	u32		flags;
-#define	HPB_DMAE_SET_ASYNC_RESET	BIT(0)
-#define	HPB_DMAE_SET_ASYNC_MODE		BIT(1)
-	u32		dma_ch;
-};
-
-#define HPB_DMAE_CHANNEL(_irq, _s_id)	\
-{					\
-	.ch_irq	= _irq,			\
-	.s_id	= _s_id,		\
-}
-
-struct hpb_dmae_channel {
-	unsigned int	ch_irq;
-	unsigned int	s_id;
-};
-
-struct hpb_dmae_pdata {
-	const struct hpb_dmae_slave_config *slaves;
-	int num_slaves;
-	const struct hpb_dmae_channel *channels;
-	int num_channels;
-	const unsigned int ts_shift[XMIT_SZ_MAX];
-	int num_hw_channels;
-};
-
-#endif
+7
include/linux/platform_data/edma.h
···
 #define EDMA_CTLR(i)		((i) >> 16)
 #define EDMA_CHAN_SLOT(i)	((i) & 0xffff)
 
+#define EDMA_FILTER_PARAM(ctlr, chan)	((int[]) { EDMA_CTLR_CHAN(ctlr, chan) })
+
 struct edma_rsv_info {
 
 	const s16	(*rsv_chans)[2];
 	const s16	(*rsv_slots)[2];
 };
+
+struct dma_slave_map;
 
 /* platform_data for EDMA driver */
 struct edma_soc_info {
···
 	s8	(*queue_priority_mapping)[2];
 	const s16	(*xbar_chans)[2];
+
+	const struct dma_slave_map *slave_map;
+	int slavecnt;
 };
 
 #endif
+6 -3
sound/core/pcm_dmaengine.c
···
 		if (runtime->info & SNDRV_PCM_INFO_PAUSE)
 			dmaengine_pause(prtd->dma_chan);
 		else
-			dmaengine_terminate_all(prtd->dma_chan);
+			dmaengine_terminate_async(prtd->dma_chan);
 		break;
 	case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
 		dmaengine_pause(prtd->dma_chan);
 		break;
 	case SNDRV_PCM_TRIGGER_STOP:
-		dmaengine_terminate_all(prtd->dma_chan);
+		dmaengine_terminate_async(prtd->dma_chan);
 		break;
 	default:
 		return -EINVAL;
···
 {
 	struct dmaengine_pcm_runtime_data *prtd = substream_to_prtd(substream);
 
+	dmaengine_synchronize(prtd->dma_chan);
 	kfree(prtd);
 
 	return 0;
···
 {
 	struct dmaengine_pcm_runtime_data *prtd = substream_to_prtd(substream);
 
+	dmaengine_synchronize(prtd->dma_chan);
 	dma_release_channel(prtd->dma_chan);
+	kfree(prtd);
 
-	return snd_dmaengine_pcm_close(substream);
+	return 0;
 }
 EXPORT_SYMBOL_GPL(snd_dmaengine_pcm_close_release_chan);