Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma

Pull slave-dmaengine updates from Vinod Koul:
"Once you have some time from extended weekend celebrations please
consider pulling the following to get:
- Various fixes and PCI driver for dw_dmac by Andy
- DT binding for imx-dma by Markus & imx-sdma by Shawn
- DT fixes for dmaengine by Lars
- jz4740 dmac driver by Lars
- and various fixes across the drivers"

What "extended weekend celebrations"? I'm in the merge window, who has
time for extended celebrations..

* 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma: (40 commits)
DMA: shdma: add DT support
DMA: shdma: shdma_chan_filter() has to be in shdma-base.h
DMA: shdma: (cosmetic) don't re-calculate a pointer
dmaengine: at_hdmac: prepare clk before calling enable
dmaengine/trivial: at_hdmac: add curly brackets to if/else expressions
  dmaengine: at_hdmac: remove unused atc_cleanup_descriptors()
dmaengine: at_hdmac: add FIFO configuration parameter to DMA DT binding
ARM: at91: dt: add header to define at_hdmac configuration
MIPS: jz4740: Correct clock gate bit for DMA controller
MIPS: jz4740: Remove custom DMA API
MIPS: jz4740: Register jz4740 DMA device
dma: Add a jz4740 dmaengine driver
MIPS: jz4740: Acquire and enable DMA controller clock
dma: mmp_tdma: disable irq when disabling dma channel
dmaengine: PL08x: Avoid collisions with get_signal() macro
dmaengine: dw: select DW_DMAC_BIG_ENDIAN_IO automagically
dma: dw: add PCI part of the driver
dma: dw: split driver to library part and platform code
dma: move dw_dmac driver to an own directory
dw_dmac: don't check resource with devm_ioremap_resource
...

+1863 -800
+5 -2
Documentation/devicetree/bindings/dma/atmel-dma.txt
···
 1. A phandle pointing to the DMA controller.
 2. The memory interface (16 most significant bits), the peripheral interface
 (16 less significant bits).
-3. The peripheral identifier for the hardware handshaking interface. The
-identifier can be different for tx and rx.
+3. Parameters for the at91 DMA configuration register which are device
+dependent:
+  - bit 7-0: peripheral identifier for the hardware handshaking interface. The
+    identifier can be different for tx and rx.
+  - bit 11-8: FIFO configuration. 0 for half FIFO, 1 for ALAP, 2 for ASAP.
 
 Example:
 
+48
Documentation/devicetree/bindings/dma/fsl-imx-dma.txt
···
+* Freescale Direct Memory Access (DMA) Controller for i.MX
+
+This document will only describe differences to the generic DMA Controller and
+DMA request bindings as described in dma/dma.txt .
+
+* DMA controller
+
+Required properties:
+- compatible : Should be "fsl,<chip>-dma". chip can be imx1, imx21 or imx27
+- reg : Should contain DMA registers location and length
+- interrupts : First item should be DMA interrupt, second one is optional and
+  should contain DMA Error interrupt
+- #dma-cells : Has to be 1. imx-dma does not support anything else.
+
+Optional properties:
+- #dma-channels : Number of DMA channels supported. Should be 16.
+- #dma-requests : Number of DMA requests supported.
+
+Example:
+
+	dma: dma@10001000 {
+		compatible = "fsl,imx27-dma";
+		reg = <0x10001000 0x1000>;
+		interrupts = <32 33>;
+		#dma-cells = <1>;
+		#dma-channels = <16>;
+	};
+
+
+* DMA client
+
+Clients have to specify the DMA requests with phandles in a list.
+
+Required properties:
+- dmas: List of one or more DMA request specifiers. One DMA request specifier
+	consists of a phandle to the DMA controller followed by the integer
+	specifying the request line.
+- dma-names: List of string identifiers for the DMA requests. For the correct
+	names, have a look at the specific client driver.
+
+Example:
+
+	sdhci1: sdhci@10013000 {
+		...
+		dmas = <&dma 7>;
+		dma-names = "rx-tx";
+		...
+	};
+56
Documentation/devicetree/bindings/dma/fsl-imx-sdma.txt
···
 - compatible : Should be "fsl,<chip>-sdma"
 - reg : Should contain SDMA registers location and length
 - interrupts : Should contain SDMA interrupt
+- #dma-cells : Must be <3>.
+  The first cell specifies the DMA request/event ID. See details below
+  about the second and third cell.
 - fsl,sdma-ram-script-name : Should contain the full path of SDMA RAM
   scripts firmware
+
+The second cell of dma phandle specifies the peripheral type of DMA transfer.
+The full ID of peripheral types can be found below.
+
+	ID	transfer type
+	---------------------
+	0	MCU domain SSI
+	1	Shared SSI
+	2	MMC
+	3	SDHC
+	4	MCU domain UART
+	5	Shared UART
+	6	FIRI
+	7	MCU domain CSPI
+	8	Shared CSPI
+	9	SIM
+	10	ATA
+	11	CCM
+	12	External peripheral
+	13	Memory Stick Host Controller
+	14	Shared Memory Stick Host Controller
+	15	DSP
+	16	Memory
+	17	FIFO type Memory
+	18	SPDIF
+	19	IPU Memory
+	20	ASRC
+	21	ESAI
+
+The third cell specifies the transfer priority as below.
+
+	ID	transfer priority
+	-------------------------
+	0	High
+	1	Medium
+	2	Low
 
 Examples:
 
···
 	compatible = "fsl,imx51-sdma", "fsl,imx35-sdma";
 	reg = <0x83fb0000 0x4000>;
 	interrupts = <6>;
+	#dma-cells = <3>;
 	fsl,sdma-ram-script-name = "sdma-imx51.bin";
+};
+
+DMA clients connected to the i.MX SDMA controller must use the format
+described in the dma.txt file.
+
+Examples:
+
+ssi2: ssi@70014000 {
+	compatible = "fsl,imx51-ssi", "fsl,imx21-ssi";
+	reg = <0x70014000 0x4000>;
+	interrupts = <30>;
+	clocks = <&clks 49>;
+	dmas = <&sdma 24 1 0>,
+	       <&sdma 25 1 0>;
+	dma-names = "rx", "tx";
+	fsl,fifo-depth = <15>;
 };
+75
Documentation/devicetree/bindings/dma/shdma.txt
···
+* SHDMA Device Tree bindings
+
+Sh-/r-mobile and r-car systems often have multiple identical DMA controller
+instances, capable of serving any of a common set of DMA slave devices, using
+the same configuration. To describe this topology we require all compatible
+SHDMA DT nodes to be placed under a DMA multiplexer node. All such compatible
+DMAC instances have the same number of channels and use the same DMA
+descriptors. Therefore respective DMA DT bindings can also all be placed in the
+multiplexer node. Even if there is only one such DMAC instance on a system, it
+still has to be placed under such a multiplexer node.
+
+* DMA multiplexer
+
+Required properties:
+- compatible:	should be "renesas,shdma-mux"
+- #dma-cells:	should be <1>, see "dmas" property below
+
+Optional properties (currently unused):
+- dma-channels:	number of DMA channels
+- dma-requests:	number of DMA request signals
+
+* DMA controller
+
+Required properties:
+- compatible:	should be "renesas,shdma"
+
+Example:
+	dmac: dma-mux0 {
+		compatible = "renesas,shdma-mux";
+		#dma-cells = <1>;
+		dma-channels = <6>;
+		dma-requests = <256>;
+		reg = <0 0>;	/* Needed for AUXDATA */
+		#address-cells = <1>;
+		#size-cells = <1>;
+		ranges;
+
+		dma0: shdma@fe008020 {
+			compatible = "renesas,shdma";
+			reg = <0xfe008020 0x270>,
+			      <0xfe009000 0xc>;
+			interrupt-parent = <&gic>;
+			interrupts = <0 34 4
+				      0 28 4
+				      0 29 4
+				      0 30 4
+				      0 31 4
+				      0 32 4
+				      0 33 4>;
+			interrupt-names = "error",
+					  "ch0", "ch1", "ch2", "ch3",
+					  "ch4", "ch5";
+		};
+
+		dma1: shdma@fe018020 {
+			...
+		};
+
+		dma2: shdma@fe028020 {
+			...
+		};
+	};
+
+* DMA client
+
+Required properties:
+- dmas:	a list of <[DMA multiplexer phandle] [MID/RID value]> pairs,
+	where MID/RID values are fixed handles, specified in the SoC
+	manual
+- dma-names: a list of DMA channel names, one per "dmas" entry
+
+Example:
+	dmas = <&dmac 0xd1
+		&dmac 0xd2>;
+	dma-names = "tx", "rx";
+1 -2
MAINTAINERS
···
 M:	Viresh Kumar <viresh.linux@gmail.com>
 S:	Maintained
 F:	include/linux/dw_dmac.h
-F:	drivers/dma/dw_dmac_regs.h
-F:	drivers/dma/dw_dmac.c
+F:	drivers/dma/dw/
 
 SYNOPSYS DESIGNWARE MMC/SD/SDIO DRIVER
 M:	Seungwon Jeon <tgih.jun@samsung.com>
+2 -2
arch/arm/mach-lpc32xx/phy3250.c
···
 static struct pl08x_platform_data pl08x_pd = {
 	.slave_channels = &pl08x_slave_channels[0],
 	.num_slave_channels = ARRAY_SIZE(pl08x_slave_channels),
-	.get_signal = pl08x_get_signal,
-	.put_signal = pl08x_put_signal,
+	.get_xfer_signal = pl08x_get_signal,
+	.put_xfer_signal = pl08x_put_signal,
 	.lli_buses = PL08X_AHB1,
 	.mem_buses = PL08X_AHB1,
 };
+2 -2
arch/arm/mach-spear/spear3xx.c
···
 	},
 	.lli_buses = PL08X_AHB1,
 	.mem_buses = PL08X_AHB1,
-	.get_signal = pl080_get_signal,
-	.put_signal = pl080_put_signal,
+	.get_xfer_signal = pl080_get_signal,
+	.put_xfer_signal = pl080_put_signal,
 };
 
 /*
+2 -2
arch/arm/mach-spear/spear6xx.c
···
 	},
 	.lli_buses = PL08X_AHB1,
 	.mem_buses = PL08X_AHB1,
-	.get_signal = pl080_get_signal,
-	.put_signal = pl080_put_signal,
+	.get_xfer_signal = pl080_get_signal,
+	.put_xfer_signal = pl080_put_signal,
 	.slave_channels = spear600_dma_info,
 	.num_slave_channels = ARRAY_SIZE(spear600_dma_info),
 };
-56
arch/mips/include/asm/mach-jz4740/dma.h
···
 #ifndef __ASM_MACH_JZ4740_DMA_H__
 #define __ASM_MACH_JZ4740_DMA_H__
 
-struct jz4740_dma_chan;
-
 enum jz4740_dma_request_type {
 	JZ4740_DMA_TYPE_AUTO_REQUEST = 8,
 	JZ4740_DMA_TYPE_UART_TRANSMIT = 20,
···
 	JZ4740_DMA_TYPE_SADC = 29,
 	JZ4740_DMA_TYPE_SLCD = 30,
 };
-
-enum jz4740_dma_width {
-	JZ4740_DMA_WIDTH_32BIT = 0,
-	JZ4740_DMA_WIDTH_8BIT = 1,
-	JZ4740_DMA_WIDTH_16BIT = 2,
-};
-
-enum jz4740_dma_transfer_size {
-	JZ4740_DMA_TRANSFER_SIZE_4BYTE = 0,
-	JZ4740_DMA_TRANSFER_SIZE_1BYTE = 1,
-	JZ4740_DMA_TRANSFER_SIZE_2BYTE = 2,
-	JZ4740_DMA_TRANSFER_SIZE_16BYTE = 3,
-	JZ4740_DMA_TRANSFER_SIZE_32BYTE = 4,
-};
-
-enum jz4740_dma_flags {
-	JZ4740_DMA_SRC_AUTOINC = 0x2,
-	JZ4740_DMA_DST_AUTOINC = 0x1,
-};
-
-enum jz4740_dma_mode {
-	JZ4740_DMA_MODE_SINGLE = 0,
-	JZ4740_DMA_MODE_BLOCK = 1,
-};
-
-struct jz4740_dma_config {
-	enum jz4740_dma_width src_width;
-	enum jz4740_dma_width dst_width;
-	enum jz4740_dma_transfer_size transfer_size;
-	enum jz4740_dma_request_type request_type;
-	enum jz4740_dma_flags flags;
-	enum jz4740_dma_mode mode;
-};
-
-typedef void (*jz4740_dma_complete_callback_t)(struct jz4740_dma_chan *, int, void *);
-
-struct jz4740_dma_chan *jz4740_dma_request(void *dev, const char *name);
-void jz4740_dma_free(struct jz4740_dma_chan *dma);
-
-void jz4740_dma_configure(struct jz4740_dma_chan *dma,
-	const struct jz4740_dma_config *config);
-
-
-void jz4740_dma_enable(struct jz4740_dma_chan *dma);
-void jz4740_dma_disable(struct jz4740_dma_chan *dma);
-
-void jz4740_dma_set_src_addr(struct jz4740_dma_chan *dma, dma_addr_t src);
-void jz4740_dma_set_dst_addr(struct jz4740_dma_chan *dma, dma_addr_t dst);
-void jz4740_dma_set_transfer_count(struct jz4740_dma_chan *dma, uint32_t count);
-
-uint32_t jz4740_dma_get_residue(const struct jz4740_dma_chan *dma);
-
-void jz4740_dma_set_complete_cb(struct jz4740_dma_chan *dma,
-	jz4740_dma_complete_callback_t cb);
 
 #endif /* __ASM_JZ4740_DMA_H__ */
+1
arch/mips/include/asm/mach-jz4740/platform.h
···
 extern struct platform_device jz4740_adc_device;
 extern struct platform_device jz4740_wdt_device;
 extern struct platform_device jz4740_pwm_device;
+extern struct platform_device jz4740_dma_device;
 
 void jz4740_serial_device_register(void);
 
+1 -1
arch/mips/jz4740/Makefile
···
 
 # Object file lists.
 
-obj-y += prom.o irq.o time.o reset.o setup.o dma.o \
+obj-y += prom.o irq.o time.o reset.o setup.o \
 	gpio.o clock.o platform.o timer.o serial.o
 
 obj-$(CONFIG_DEBUG_FS) += clock-debugfs.o
+1
arch/mips/jz4740/board-qi_lb60.c
···
 	&jz4740_rtc_device,
 	&jz4740_adc_device,
 	&jz4740_pwm_device,
+	&jz4740_dma_device,
 	&qi_lb60_gpio_keys,
 	&qi_lb60_pwm_beeper,
 	&qi_lb60_charger_device,
+1 -1
arch/mips/jz4740/clock.c
···
 	[3] = {
 		.name = "dma",
 		.parent = &jz_clk_high_speed_peripheral.clk,
-		.gate_bit = JZ_CLOCK_GATE_UART0,
+		.gate_bit = JZ_CLOCK_GATE_DMAC,
 		.ops = &jz_clk_simple_ops,
 	},
 	[4] = {
-287
arch/mips/jz4740/dma.c
···
-/*
- *  Copyright (C) 2010, Lars-Peter Clausen <lars@metafoo.de>
- *  JZ4740 SoC DMA support
- *
- *  This program is free software; you can redistribute it and/or modify it
- *  under the terms of the GNU General Public License as published by the
- *  Free Software Foundation; either version 2 of the License, or (at your
- *  option) any later version.
- *
- *  You should have received a copy of the GNU General Public License along
- *  with this program; if not, write to the Free Software Foundation, Inc.,
- *  675 Mass Ave, Cambridge, MA 02139, USA.
- *
- */
-
-#include <linux/kernel.h>
-#include <linux/module.h>
-#include <linux/spinlock.h>
-#include <linux/interrupt.h>
-
-#include <linux/dma-mapping.h>
-#include <asm/mach-jz4740/dma.h>
-#include <asm/mach-jz4740/base.h>
-
-#define JZ_REG_DMA_SRC_ADDR(x)		(0x00 + (x) * 0x20)
-#define JZ_REG_DMA_DST_ADDR(x)		(0x04 + (x) * 0x20)
-#define JZ_REG_DMA_TRANSFER_COUNT(x)	(0x08 + (x) * 0x20)
-#define JZ_REG_DMA_REQ_TYPE(x)		(0x0C + (x) * 0x20)
-#define JZ_REG_DMA_STATUS_CTRL(x)	(0x10 + (x) * 0x20)
-#define JZ_REG_DMA_CMD(x)		(0x14 + (x) * 0x20)
-#define JZ_REG_DMA_DESC_ADDR(x)		(0x18 + (x) * 0x20)
-
-#define JZ_REG_DMA_CTRL			0x300
-#define JZ_REG_DMA_IRQ			0x304
-#define JZ_REG_DMA_DOORBELL		0x308
-#define JZ_REG_DMA_DOORBELL_SET		0x30C
-
-#define JZ_DMA_STATUS_CTRL_NO_DESC		BIT(31)
-#define JZ_DMA_STATUS_CTRL_DESC_INV		BIT(6)
-#define JZ_DMA_STATUS_CTRL_ADDR_ERR		BIT(4)
-#define JZ_DMA_STATUS_CTRL_TRANSFER_DONE	BIT(3)
-#define JZ_DMA_STATUS_CTRL_HALT			BIT(2)
-#define JZ_DMA_STATUS_CTRL_COUNT_TERMINATE	BIT(1)
-#define JZ_DMA_STATUS_CTRL_ENABLE		BIT(0)
-
-#define JZ_DMA_CMD_SRC_INC			BIT(23)
-#define JZ_DMA_CMD_DST_INC			BIT(22)
-#define JZ_DMA_CMD_RDIL_MASK			(0xf << 16)
-#define JZ_DMA_CMD_SRC_WIDTH_MASK		(0x3 << 14)
-#define JZ_DMA_CMD_DST_WIDTH_MASK		(0x3 << 12)
-#define JZ_DMA_CMD_INTERVAL_LENGTH_MASK		(0x7 << 8)
-#define JZ_DMA_CMD_BLOCK_MODE			BIT(7)
-#define JZ_DMA_CMD_DESC_VALID			BIT(4)
-#define JZ_DMA_CMD_DESC_VALID_MODE		BIT(3)
-#define JZ_DMA_CMD_VALID_IRQ_ENABLE		BIT(2)
-#define JZ_DMA_CMD_TRANSFER_IRQ_ENABLE		BIT(1)
-#define JZ_DMA_CMD_LINK_ENABLE			BIT(0)
-
-#define JZ_DMA_CMD_FLAGS_OFFSET 22
-#define JZ_DMA_CMD_RDIL_OFFSET 16
-#define JZ_DMA_CMD_SRC_WIDTH_OFFSET 14
-#define JZ_DMA_CMD_DST_WIDTH_OFFSET 12
-#define JZ_DMA_CMD_TRANSFER_SIZE_OFFSET 8
-#define JZ_DMA_CMD_MODE_OFFSET 7
-
-#define JZ_DMA_CTRL_PRIORITY_MASK	(0x3 << 8)
-#define JZ_DMA_CTRL_HALT		BIT(3)
-#define JZ_DMA_CTRL_ADDRESS_ERROR	BIT(2)
-#define JZ_DMA_CTRL_ENABLE		BIT(0)
-
-
-static void __iomem *jz4740_dma_base;
-static spinlock_t jz4740_dma_lock;
-
-static inline uint32_t jz4740_dma_read(size_t reg)
-{
-	return readl(jz4740_dma_base + reg);
-}
-
-static inline void jz4740_dma_write(size_t reg, uint32_t val)
-{
-	writel(val, jz4740_dma_base + reg);
-}
-
-static inline void jz4740_dma_write_mask(size_t reg, uint32_t val, uint32_t mask)
-{
-	uint32_t val2;
-	val2 = jz4740_dma_read(reg);
-	val2 &= ~mask;
-	val2 |= val;
-	jz4740_dma_write(reg, val2);
-}
-
-struct jz4740_dma_chan {
-	unsigned int id;
-	void *dev;
-	const char *name;
-
-	enum jz4740_dma_flags flags;
-	uint32_t transfer_shift;
-
-	jz4740_dma_complete_callback_t complete_cb;
-
-	unsigned used:1;
-};
-
-#define JZ4740_DMA_CHANNEL(_id) { .id = _id }
-
-struct jz4740_dma_chan jz4740_dma_channels[] = {
-	JZ4740_DMA_CHANNEL(0),
-	JZ4740_DMA_CHANNEL(1),
-	JZ4740_DMA_CHANNEL(2),
-	JZ4740_DMA_CHANNEL(3),
-	JZ4740_DMA_CHANNEL(4),
-	JZ4740_DMA_CHANNEL(5),
-};
-
-struct jz4740_dma_chan *jz4740_dma_request(void *dev, const char *name)
-{
-	unsigned int i;
-	struct jz4740_dma_chan *dma = NULL;
-
-	spin_lock(&jz4740_dma_lock);
-
-	for (i = 0; i < ARRAY_SIZE(jz4740_dma_channels); ++i) {
-		if (!jz4740_dma_channels[i].used) {
-			dma = &jz4740_dma_channels[i];
-			dma->used = 1;
-			break;
-		}
-	}
-
-	spin_unlock(&jz4740_dma_lock);
-
-	if (!dma)
-		return NULL;
-
-	dma->dev = dev;
-	dma->name = name;
-
-	return dma;
-}
-EXPORT_SYMBOL_GPL(jz4740_dma_request);
-
-void jz4740_dma_configure(struct jz4740_dma_chan *dma,
-	const struct jz4740_dma_config *config)
-{
-	uint32_t cmd;
-
-	switch (config->transfer_size) {
-	case JZ4740_DMA_TRANSFER_SIZE_2BYTE:
-		dma->transfer_shift = 1;
-		break;
-	case JZ4740_DMA_TRANSFER_SIZE_4BYTE:
-		dma->transfer_shift = 2;
-		break;
-	case JZ4740_DMA_TRANSFER_SIZE_16BYTE:
-		dma->transfer_shift = 4;
-		break;
-	case JZ4740_DMA_TRANSFER_SIZE_32BYTE:
-		dma->transfer_shift = 5;
-		break;
-	default:
-		dma->transfer_shift = 0;
-		break;
-	}
-
-	cmd = config->flags << JZ_DMA_CMD_FLAGS_OFFSET;
-	cmd |= config->src_width << JZ_DMA_CMD_SRC_WIDTH_OFFSET;
-	cmd |= config->dst_width << JZ_DMA_CMD_DST_WIDTH_OFFSET;
-	cmd |= config->transfer_size << JZ_DMA_CMD_TRANSFER_SIZE_OFFSET;
-	cmd |= config->mode << JZ_DMA_CMD_MODE_OFFSET;
-	cmd |= JZ_DMA_CMD_TRANSFER_IRQ_ENABLE;
-
-	jz4740_dma_write(JZ_REG_DMA_CMD(dma->id), cmd);
-	jz4740_dma_write(JZ_REG_DMA_STATUS_CTRL(dma->id), 0);
-	jz4740_dma_write(JZ_REG_DMA_REQ_TYPE(dma->id), config->request_type);
-}
-EXPORT_SYMBOL_GPL(jz4740_dma_configure);
-
-void jz4740_dma_set_src_addr(struct jz4740_dma_chan *dma, dma_addr_t src)
-{
-	jz4740_dma_write(JZ_REG_DMA_SRC_ADDR(dma->id), src);
-}
-EXPORT_SYMBOL_GPL(jz4740_dma_set_src_addr);
-
-void jz4740_dma_set_dst_addr(struct jz4740_dma_chan *dma, dma_addr_t dst)
-{
-	jz4740_dma_write(JZ_REG_DMA_DST_ADDR(dma->id), dst);
-}
-EXPORT_SYMBOL_GPL(jz4740_dma_set_dst_addr);
-
-void jz4740_dma_set_transfer_count(struct jz4740_dma_chan *dma, uint32_t count)
-{
-	count >>= dma->transfer_shift;
-	jz4740_dma_write(JZ_REG_DMA_TRANSFER_COUNT(dma->id), count);
-}
-EXPORT_SYMBOL_GPL(jz4740_dma_set_transfer_count);
-
-void jz4740_dma_set_complete_cb(struct jz4740_dma_chan *dma,
-	jz4740_dma_complete_callback_t cb)
-{
-	dma->complete_cb = cb;
-}
-EXPORT_SYMBOL_GPL(jz4740_dma_set_complete_cb);
-
-void jz4740_dma_free(struct jz4740_dma_chan *dma)
-{
-	dma->dev = NULL;
-	dma->complete_cb = NULL;
-	dma->used = 0;
-}
-EXPORT_SYMBOL_GPL(jz4740_dma_free);
-
-void jz4740_dma_enable(struct jz4740_dma_chan *dma)
-{
-	jz4740_dma_write_mask(JZ_REG_DMA_STATUS_CTRL(dma->id),
-			JZ_DMA_STATUS_CTRL_NO_DESC | JZ_DMA_STATUS_CTRL_ENABLE,
-			JZ_DMA_STATUS_CTRL_HALT | JZ_DMA_STATUS_CTRL_NO_DESC |
-			JZ_DMA_STATUS_CTRL_ENABLE);
-
-	jz4740_dma_write_mask(JZ_REG_DMA_CTRL,
-			JZ_DMA_CTRL_ENABLE,
-			JZ_DMA_CTRL_HALT | JZ_DMA_CTRL_ENABLE);
-}
-EXPORT_SYMBOL_GPL(jz4740_dma_enable);
-
-void jz4740_dma_disable(struct jz4740_dma_chan *dma)
-{
-	jz4740_dma_write_mask(JZ_REG_DMA_STATUS_CTRL(dma->id), 0,
-			JZ_DMA_STATUS_CTRL_ENABLE);
-}
-EXPORT_SYMBOL_GPL(jz4740_dma_disable);
-
-uint32_t jz4740_dma_get_residue(const struct jz4740_dma_chan *dma)
-{
-	uint32_t residue;
-	residue = jz4740_dma_read(JZ_REG_DMA_TRANSFER_COUNT(dma->id));
-	return residue << dma->transfer_shift;
-}
-EXPORT_SYMBOL_GPL(jz4740_dma_get_residue);
-
-static void jz4740_dma_chan_irq(struct jz4740_dma_chan *dma)
-{
-	(void) jz4740_dma_read(JZ_REG_DMA_STATUS_CTRL(dma->id));
-
-	jz4740_dma_write_mask(JZ_REG_DMA_STATUS_CTRL(dma->id), 0,
-		JZ_DMA_STATUS_CTRL_ENABLE | JZ_DMA_STATUS_CTRL_TRANSFER_DONE);
-
-	if (dma->complete_cb)
-		dma->complete_cb(dma, 0, dma->dev);
-}
-
-static irqreturn_t jz4740_dma_irq(int irq, void *dev_id)
-{
-	uint32_t irq_status;
-	unsigned int i;
-
-	irq_status = readl(jz4740_dma_base + JZ_REG_DMA_IRQ);
-
-	for (i = 0; i < 6; ++i) {
-		if (irq_status & (1 << i))
-			jz4740_dma_chan_irq(&jz4740_dma_channels[i]);
-	}
-
-	return IRQ_HANDLED;
-}
-
-static int jz4740_dma_init(void)
-{
-	unsigned int ret;
-
-	jz4740_dma_base = ioremap(JZ4740_DMAC_BASE_ADDR, 0x400);
-
-	if (!jz4740_dma_base)
-		return -EBUSY;
-
-	spin_lock_init(&jz4740_dma_lock);
-
-	ret = request_irq(JZ4740_IRQ_DMAC, jz4740_dma_irq, 0, "DMA", NULL);
-
-	if (ret)
-		printk(KERN_ERR "JZ4740 DMA: Failed to request irq: %d\n", ret);
-
-	return ret;
-}
-arch_initcall(jz4740_dma_init);
+21
arch/mips/jz4740/platform.c
···
 	.name = "jz4740-pwm",
 	.id   = -1,
 };
+
+/* DMA */
+static struct resource jz4740_dma_resources[] = {
+	{
+		.start	= JZ4740_DMAC_BASE_ADDR,
+		.end	= JZ4740_DMAC_BASE_ADDR + 0x400 - 1,
+		.flags	= IORESOURCE_MEM,
+	},
+	{
+		.start	= JZ4740_IRQ_DMAC,
+		.end	= JZ4740_IRQ_DMAC,
+		.flags	= IORESOURCE_IRQ,
+	},
+};
+
+struct platform_device jz4740_dma_device = {
+	.name		= "jz4740-dma",
+	.id		= -1,
+	.num_resources	= ARRAY_SIZE(jz4740_dma_resources),
+	.resource	= jz4740_dma_resources,
+};
+7 -19
drivers/dma/Kconfig
···
 	help
 	  Enable support for the Intel(R) IOP Series RAID engines.
 
-config DW_DMAC
-	tristate "Synopsys DesignWare AHB DMA support"
-	depends on GENERIC_HARDIRQS
-	select DMA_ENGINE
-	default y if CPU_AT32AP7000
-	help
-	  Support the Synopsys DesignWare AHB DMA controller. This
-	  can be integrated in chips such as the Atmel AT32ap7000.
-
-config DW_DMAC_BIG_ENDIAN_IO
-	bool "Use big endian I/O register access"
-	default y if AVR32
-	depends on DW_DMAC
-	help
-	  Say yes here to use big endian I/O access when reading and writing
-	  to the DMA controller registers. This is needed on some platforms,
-	  like the Atmel AVR32 architecture.
-
-	  If unsure, use the default setting.
+source "drivers/dma/dw/Kconfig"
 
 config AT_HDMAC
 	tristate "Atmel AHB DMA support"
···
 	select DMA_ENGINE
 	help
 	  Support the MMP PDMA engine for PXA and MMP platfrom.
+
+config DMA_JZ4740
+	tristate "JZ4740 DMA support"
+	depends on MACH_JZ4740
+	select DMA_ENGINE
+	select DMA_VIRTUAL_CHANNELS
 
 config DMA_ENGINE
 	bool
+2 -1
drivers/dma/Makefile
···
 obj-$(CONFIG_MPC512X_DMA) += mpc512x_dma.o
 obj-$(CONFIG_PPC_BESTCOMM) += bestcomm/
 obj-$(CONFIG_MV_XOR) += mv_xor.o
-obj-$(CONFIG_DW_DMAC) += dw_dmac.o
+obj-$(CONFIG_DW_DMAC_CORE) += dw/
 obj-$(CONFIG_AT_HDMAC) += at_hdmac.o
 obj-$(CONFIG_MX3_IPU) += ipu/
 obj-$(CONFIG_TXX9_DMAC) += txx9dmac.o
···
 obj-$(CONFIG_MMP_TDMA) += mmp_tdma.o
 obj-$(CONFIG_DMA_OMAP) += omap-dma.o
 obj-$(CONFIG_MMP_PDMA) += mmp_pdma.o
+obj-$(CONFIG_DMA_JZ4740) += dma-jz4740.o
+4 -4
drivers/dma/amba-pl08x.c
···
 	const struct pl08x_platform_data *pd = plchan->host->pd;
 	int ret;
 
-	if (plchan->mux_use++ == 0 && pd->get_signal) {
-		ret = pd->get_signal(plchan->cd);
+	if (plchan->mux_use++ == 0 && pd->get_xfer_signal) {
+		ret = pd->get_xfer_signal(plchan->cd);
 		if (ret < 0) {
 			plchan->mux_use = 0;
 			return ret;
···
 	if (plchan->signal >= 0) {
 		WARN_ON(plchan->mux_use == 0);
 
-		if (--plchan->mux_use == 0 && pd->put_signal) {
-			pd->put_signal(plchan->cd, plchan->signal);
+		if (--plchan->mux_use == 0 && pd->put_xfer_signal) {
+			pd->put_xfer_signal(plchan->cd, plchan->signal);
 			plchan->signal = -1;
 		}
 	}
+150 -63
drivers/dma/at_hdmac.c
··· 14 14 * found on AT91SAM9263. 15 15 */ 16 16 17 + #include <dt-bindings/dma/at91.h> 17 18 #include <linux/clk.h> 18 19 #include <linux/dmaengine.h> 19 20 #include <linux/dma-mapping.h> ··· 55 54 56 55 /* prototypes */ 57 56 static dma_cookie_t atc_tx_submit(struct dma_async_tx_descriptor *tx); 57 + static void atc_issue_pending(struct dma_chan *chan); 58 58 59 59 60 60 /*----------------------------------------------------------------------*/ ··· 232 230 vdbg_dump_regs(atchan); 233 231 } 234 232 233 + /* 234 + * atc_get_current_descriptors - 235 + * locate the descriptor which equal to physical address in DSCR 236 + * @atchan: the channel we want to start 237 + * @dscr_addr: physical descriptor address in DSCR 238 + */ 239 + static struct at_desc *atc_get_current_descriptors(struct at_dma_chan *atchan, 240 + u32 dscr_addr) 241 + { 242 + struct at_desc *desc, *_desc, *child, *desc_cur = NULL; 243 + 244 + list_for_each_entry_safe(desc, _desc, &atchan->active_list, desc_node) { 245 + if (desc->lli.dscr == dscr_addr) { 246 + desc_cur = desc; 247 + break; 248 + } 249 + 250 + list_for_each_entry(child, &desc->tx_list, desc_node) { 251 + if (child->lli.dscr == dscr_addr) { 252 + desc_cur = child; 253 + break; 254 + } 255 + } 256 + } 257 + 258 + return desc_cur; 259 + } 260 + 261 + /* 262 + * atc_get_bytes_left - 263 + * Get the number of bytes residue in dma buffer, 264 + * @chan: the channel we want to start 265 + */ 266 + static int atc_get_bytes_left(struct dma_chan *chan) 267 + { 268 + struct at_dma_chan *atchan = to_at_dma_chan(chan); 269 + struct at_dma *atdma = to_at_dma(chan->device); 270 + int chan_id = atchan->chan_common.chan_id; 271 + struct at_desc *desc_first = atc_first_active(atchan); 272 + struct at_desc *desc_cur; 273 + int ret = 0, count = 0; 274 + 275 + /* 276 + * Initialize necessary values in the first time. 277 + * remain_desc record remain desc length. 
278 + */ 279 + if (atchan->remain_desc == 0) 280 + /* First descriptor embedds the transaction length */ 281 + atchan->remain_desc = desc_first->len; 282 + 283 + /* 284 + * This happens when current descriptor transfer complete. 285 + * The residual buffer size should reduce current descriptor length. 286 + */ 287 + if (unlikely(test_bit(ATC_IS_BTC, &atchan->status))) { 288 + clear_bit(ATC_IS_BTC, &atchan->status); 289 + desc_cur = atc_get_current_descriptors(atchan, 290 + channel_readl(atchan, DSCR)); 291 + if (!desc_cur) { 292 + ret = -EINVAL; 293 + goto out; 294 + } 295 + atchan->remain_desc -= (desc_cur->lli.ctrla & ATC_BTSIZE_MAX) 296 + << (desc_first->tx_width); 297 + if (atchan->remain_desc < 0) { 298 + ret = -EINVAL; 299 + goto out; 300 + } else { 301 + ret = atchan->remain_desc; 302 + } 303 + } else { 304 + /* 305 + * Get residual bytes when current 306 + * descriptor transfer in progress. 307 + */ 308 + count = (channel_readl(atchan, CTRLA) & ATC_BTSIZE_MAX) 309 + << (desc_first->tx_width); 310 + ret = atchan->remain_desc - count; 311 + } 312 + /* 313 + * Check fifo empty. 
314 + */ 315 + if (!(dma_readl(atdma, CHSR) & AT_DMA_EMPT(chan_id))) 316 + atc_issue_pending(chan); 317 + 318 + out: 319 + return ret; 320 + } 321 + 235 322 /** 236 323 * atc_chain_complete - finish work for one transaction chain 237 324 * @atchan: channel we work on ··· 415 324 416 325 list_for_each_entry_safe(desc, _desc, &list, desc_node) 417 326 atc_chain_complete(atchan, desc); 418 - } 419 - 420 - /** 421 - * atc_cleanup_descriptors - cleanup up finished descriptors in active_list 422 - * @atchan: channel to be cleaned up 423 - * 424 - * Called with atchan->lock held and bh disabled 425 - */ 426 - static void atc_cleanup_descriptors(struct at_dma_chan *atchan) 427 - { 428 - struct at_desc *desc, *_desc; 429 - struct at_desc *child; 430 - 431 - dev_vdbg(chan2dev(&atchan->chan_common), "cleanup descriptors\n"); 432 - 433 - list_for_each_entry_safe(desc, _desc, &atchan->active_list, desc_node) { 434 - if (!(desc->lli.ctrla & ATC_DONE)) 435 - /* This one is currently in progress */ 436 - return; 437 - 438 - list_for_each_entry(child, &desc->tx_list, desc_node) 439 - if (!(child->lli.ctrla & ATC_DONE)) 440 - /* Currently in progress */ 441 - return; 442 - 443 - /* 444 - * No descriptors so far seem to be in progress, i.e. 445 - * this chain must be done. 
446 - */ 447 - atc_chain_complete(atchan, desc); 448 - } 449 327 } 450 328 451 329 /** ··· 556 496 /* Give information to tasklet */ 557 497 set_bit(ATC_IS_ERROR, &atchan->status); 558 498 } 499 + if (pending & AT_DMA_BTC(i)) 500 + set_bit(ATC_IS_BTC, &atchan->status); 559 501 tasklet_schedule(&atchan->tasklet); 560 502 ret = IRQ_HANDLED; 561 503 } ··· 677 615 /* First descriptor of the chain embedds additional information */ 678 616 first->txd.cookie = -EBUSY; 679 617 first->len = len; 618 + first->tx_width = src_width; 680 619 681 620 /* set end-of-link to the last link descriptor of list*/ 682 621 set_desc_eol(desc); ··· 824 761 /* First descriptor of the chain embedds additional information */ 825 762 first->txd.cookie = -EBUSY; 826 763 first->len = total_len; 764 + first->tx_width = reg_width; 827 765 828 766 /* first link descriptor of list is responsible of flags */ 829 767 first->txd.flags = flags; /* client is in control of this ack */ ··· 983 919 /* First descriptor of the chain embedds additional information */ 984 920 first->txd.cookie = -EBUSY; 985 921 first->len = buf_len; 922 + first->tx_width = reg_width; 986 923 987 924 return &first->txd; 988 925 ··· 1097 1032 struct dma_tx_state *txstate) 1098 1033 { 1099 1034 struct at_dma_chan *atchan = to_at_dma_chan(chan); 1100 - dma_cookie_t last_used; 1101 - dma_cookie_t last_complete; 1102 1035 unsigned long flags; 1103 1036 enum dma_status ret; 1037 + int bytes = 0; 1038 + 1039 + ret = dma_cookie_status(chan, cookie, txstate); 1040 + if (ret == DMA_SUCCESS) 1041 + return ret; 1042 + /* 1043 + * There's no point calculating the residue if there's 1044 + * no txstate to store the value. 
1045 + */ 1046 + if (!txstate) 1047 + return DMA_ERROR; 1104 1048 1105 1049 spin_lock_irqsave(&atchan->lock, flags); 1106 1050 1107 - ret = dma_cookie_status(chan, cookie, txstate); 1108 - if (ret != DMA_SUCCESS) { 1109 - atc_cleanup_descriptors(atchan); 1110 - 1111 - ret = dma_cookie_status(chan, cookie, txstate); 1112 - } 1113 - 1114 - last_complete = chan->completed_cookie; 1115 - last_used = chan->cookie; 1051 + /* Get number of bytes left in the active transactions */ 1052 + bytes = atc_get_bytes_left(chan); 1116 1053 1117 1054 spin_unlock_irqrestore(&atchan->lock, flags); 1118 1055 1119 - if (ret != DMA_SUCCESS) 1120 - dma_set_residue(txstate, atc_first_active(atchan)->len); 1056 + if (unlikely(bytes < 0)) { 1057 + dev_vdbg(chan2dev(chan), "get residual bytes error\n"); 1058 + return DMA_ERROR; 1059 + } else { 1060 + dma_set_residue(txstate, bytes); 1061 + } 1121 1062 1122 - if (atc_chan_is_paused(atchan)) 1123 - ret = DMA_PAUSED; 1124 - 1125 - dev_vdbg(chan2dev(chan), "tx_status %d: cookie = %d (d%d, u%d)\n", 1126 - ret, cookie, last_complete ? last_complete : 0, 1127 - last_used ? 
last_used : 0); 1063 + dev_vdbg(chan2dev(chan), "tx_status %d: cookie = %d residue = %d\n", 1064 + ret, cookie, bytes); 1128 1065 1129 1066 return ret; 1130 1067 } ··· 1187 1120 */ 1188 1121 BUG_ON(!atslave->dma_dev || atslave->dma_dev != atdma->dma_common.dev); 1189 1122 1190 - /* if cfg configuration specified take it instad of default */ 1123 + /* if cfg configuration specified take it instead of default */ 1191 1124 if (atslave->cfg) 1192 1125 cfg = atslave->cfg; 1193 1126 } ··· 1210 1143 1211 1144 spin_lock_irqsave(&atchan->lock, flags); 1212 1145 atchan->descs_allocated = i; 1146 + atchan->remain_desc = 0; 1213 1147 list_splice(&tmp_list, &atchan->free_list); 1214 1148 dma_cookie_init(chan); 1215 1149 spin_unlock_irqrestore(&atchan->lock, flags); ··· 1253 1185 list_splice_init(&atchan->free_list, &list); 1254 1186 atchan->descs_allocated = 0; 1255 1187 atchan->status = 0; 1188 + atchan->remain_desc = 0; 1256 1189 1257 1190 dev_vdbg(chan2dev(chan), "free_chan_resources: done\n"); 1258 1191 } ··· 1292 1223 atslave = devm_kzalloc(&dmac_pdev->dev, sizeof(*atslave), GFP_KERNEL); 1293 1224 if (!atslave) 1294 1225 return NULL; 1226 + 1227 + atslave->cfg = ATC_DST_H2SEL_HW | ATC_SRC_H2SEL_HW; 1295 1228 /* 1296 1229 * We can fill both SRC_PER and DST_PER, one of these fields will be 1297 1230 * ignored depending on DMA transfer direction. 1298 1231 */ 1299 - per_id = dma_spec->args[1]; 1300 - atslave->cfg = ATC_FIFOCFG_HALFFIFO | ATC_DST_H2SEL_HW 1301 - | ATC_SRC_H2SEL_HW | ATC_DST_PER(per_id) 1302 - | ATC_SRC_PER(per_id); 1232 + per_id = dma_spec->args[1] & AT91_DMA_CFG_PER_ID_MASK; 1233 + atslave->cfg |= ATC_DST_PER_MSB(per_id) | ATC_DST_PER(per_id) 1234 + | ATC_SRC_PER_MSB(per_id) | ATC_SRC_PER(per_id); 1235 + /* 1236 + * We have to translate the value we get from the device tree since 1237 + * the half FIFO configuration value had to be 0 to keep backward 1238 + * compatibility. 
1239 + */ 1240 + switch (dma_spec->args[1] & AT91_DMA_CFG_FIFOCFG_MASK) { 1241 + case AT91_DMA_CFG_FIFOCFG_ALAP: 1242 + atslave->cfg |= ATC_FIFOCFG_LARGESTBURST; 1243 + break; 1244 + case AT91_DMA_CFG_FIFOCFG_ASAP: 1245 + atslave->cfg |= ATC_FIFOCFG_ENOUGHSPACE; 1246 + break; 1247 + case AT91_DMA_CFG_FIFOCFG_HALF: 1248 + default: 1249 + atslave->cfg |= ATC_FIFOCFG_HALFFIFO; 1250 + } 1303 1251 atslave->dma_dev = &dmac_pdev->dev; 1304 1252 1305 1253 chan = dma_request_channel(mask, at_dma_filter, atslave); ··· 1460 1374 err = PTR_ERR(atdma->clk); 1461 1375 goto err_clk; 1462 1376 } 1463 - clk_enable(atdma->clk); 1377 + err = clk_prepare_enable(atdma->clk); 1378 + if (err) 1379 + goto err_clk_prepare; 1464 1380 1465 1381 /* force dma off, just in case */ 1466 1382 at_dma_off(atdma); ··· 1560 1472 dma_async_device_unregister(&atdma->dma_common); 1561 1473 dma_pool_destroy(atdma->dma_desc_pool); 1562 1474 err_pool_create: 1563 - platform_set_drvdata(pdev, NULL); 1564 1475 free_irq(platform_get_irq(pdev, 0), atdma); 1565 1476 err_irq: 1566 - clk_disable(atdma->clk); 1477 + clk_disable_unprepare(atdma->clk); 1478 + err_clk_prepare: 1567 1479 clk_put(atdma->clk); 1568 1480 err_clk: 1569 1481 iounmap(atdma->regs); ··· 1585 1497 dma_async_device_unregister(&atdma->dma_common); 1586 1498 1587 1499 dma_pool_destroy(atdma->dma_desc_pool); 1588 - platform_set_drvdata(pdev, NULL); 1589 1500 free_irq(platform_get_irq(pdev, 0), atdma); 1590 1501 1591 1502 list_for_each_entry_safe(chan, _chan, &atdma->dma_common.channels, ··· 1599 1512 list_del(&chan->device_node); 1600 1513 } 1601 1514 1602 - clk_disable(atdma->clk); 1515 + clk_disable_unprepare(atdma->clk); 1603 1516 clk_put(atdma->clk); 1604 1517 1605 1518 iounmap(atdma->regs); ··· 1618 1531 struct at_dma *atdma = platform_get_drvdata(pdev); 1619 1532 1620 1533 at_dma_off(platform_get_drvdata(pdev)); 1621 - clk_disable(atdma->clk); 1534 + clk_disable_unprepare(atdma->clk); 1622 1535 } 1623 1536 1624 1537 static int 
at_dma_prepare(struct device *dev) ··· 1675 1588 1676 1589 /* disable DMA controller */ 1677 1590 at_dma_off(atdma); 1678 - clk_disable(atdma->clk); 1591 + clk_disable_unprepare(atdma->clk); 1679 1592 return 0; 1680 1593 } 1681 1594 ··· 1705 1618 struct dma_chan *chan, *_chan; 1706 1619 1707 1620 /* bring back DMA controller */ 1708 - clk_enable(atdma->clk); 1621 + clk_prepare_enable(atdma->clk); 1709 1622 dma_writel(atdma, EN, AT_DMA_ENABLE); 1710 1623 1711 1624 /* clear any pending interrupt */
+5
drivers/dma/at_hdmac_regs.h
··· 182 182 * @txd: support for the async_tx api 183 183 * @desc_node: node on the channed descriptors list 184 184 * @len: total transaction bytecount 185 + * @tx_width: transfer width 185 186 */ 186 187 struct at_desc { 187 188 /* FIRST values the hardware uses */ ··· 193 192 struct dma_async_tx_descriptor txd; 194 193 struct list_head desc_node; 195 194 size_t len; 195 + u32 tx_width; 196 196 }; 197 197 198 198 static inline struct at_desc * ··· 213 211 enum atc_status { 214 212 ATC_IS_ERROR = 0, 215 213 ATC_IS_PAUSED = 1, 214 + ATC_IS_BTC = 2, 216 215 ATC_IS_CYCLIC = 24, 217 216 }; 218 217 ··· 231 228 * @save_cfg: configuration register that is saved on suspend/resume cycle 232 229 * @save_dscr: for cyclic operations, preserve next descriptor address in 233 230 * the cyclic list on suspend/resume cycle 231 + * @remain_desc: length of the remaining descriptor 234 232 * @dma_sconfig: configuration for slave transfers, passed via DMA_SLAVE_CONFIG 235 233 * @lock: serializes enqueue/dequeue operations to descriptors lists 236 234 * @active_list: list of descriptors dmaengine is being running on ··· 250 246 struct tasklet_struct tasklet; 251 247 u32 save_cfg; 252 248 u32 save_dscr; 249 + u32 remain_desc; 253 250 struct dma_slave_config dma_sconfig; 254 251 255 252 spinlock_t lock;
+617
drivers/dma/dma-jz4740.c
··· 1 + /* 2 + * Copyright (C) 2013, Lars-Peter Clausen <lars@metafoo.de> 3 + * JZ4740 DMAC support 4 + * 5 + * This program is free software; you can redistribute it and/or modify it 6 + * under the terms of the GNU General Public License as published by the 7 + * Free Software Foundation; either version 2 of the License, or (at your 8 + * option) any later version. 9 + * 10 + * You should have received a copy of the GNU General Public License along 11 + * with this program; if not, write to the Free Software Foundation, Inc., 12 + * 675 Mass Ave, Cambridge, MA 02139, USA. 13 + * 14 + */ 15 + 16 + #include <linux/dmaengine.h> 17 + #include <linux/dma-mapping.h> 18 + #include <linux/err.h> 19 + #include <linux/init.h> 20 + #include <linux/list.h> 21 + #include <linux/module.h> 22 + #include <linux/platform_device.h> 23 + #include <linux/slab.h> 24 + #include <linux/spinlock.h> 25 + #include <linux/irq.h> 26 + #include <linux/clk.h> 27 + 28 + #include <asm/mach-jz4740/dma.h> 29 + 30 + #include "virt-dma.h" 31 + 32 + #define JZ_DMA_NR_CHANS 6 33 + 34 + #define JZ_REG_DMA_SRC_ADDR(x) (0x00 + (x) * 0x20) 35 + #define JZ_REG_DMA_DST_ADDR(x) (0x04 + (x) * 0x20) 36 + #define JZ_REG_DMA_TRANSFER_COUNT(x) (0x08 + (x) * 0x20) 37 + #define JZ_REG_DMA_REQ_TYPE(x) (0x0C + (x) * 0x20) 38 + #define JZ_REG_DMA_STATUS_CTRL(x) (0x10 + (x) * 0x20) 39 + #define JZ_REG_DMA_CMD(x) (0x14 + (x) * 0x20) 40 + #define JZ_REG_DMA_DESC_ADDR(x) (0x18 + (x) * 0x20) 41 + 42 + #define JZ_REG_DMA_CTRL 0x300 43 + #define JZ_REG_DMA_IRQ 0x304 44 + #define JZ_REG_DMA_DOORBELL 0x308 45 + #define JZ_REG_DMA_DOORBELL_SET 0x30C 46 + 47 + #define JZ_DMA_STATUS_CTRL_NO_DESC BIT(31) 48 + #define JZ_DMA_STATUS_CTRL_DESC_INV BIT(6) 49 + #define JZ_DMA_STATUS_CTRL_ADDR_ERR BIT(4) 50 + #define JZ_DMA_STATUS_CTRL_TRANSFER_DONE BIT(3) 51 + #define JZ_DMA_STATUS_CTRL_HALT BIT(2) 52 + #define JZ_DMA_STATUS_CTRL_COUNT_TERMINATE BIT(1) 53 + #define JZ_DMA_STATUS_CTRL_ENABLE BIT(0) 54 + 55 + #define JZ_DMA_CMD_SRC_INC 
BIT(23) 56 + #define JZ_DMA_CMD_DST_INC BIT(22) 57 + #define JZ_DMA_CMD_RDIL_MASK (0xf << 16) 58 + #define JZ_DMA_CMD_SRC_WIDTH_MASK (0x3 << 14) 59 + #define JZ_DMA_CMD_DST_WIDTH_MASK (0x3 << 12) 60 + #define JZ_DMA_CMD_INTERVAL_LENGTH_MASK (0x7 << 8) 61 + #define JZ_DMA_CMD_BLOCK_MODE BIT(7) 62 + #define JZ_DMA_CMD_DESC_VALID BIT(4) 63 + #define JZ_DMA_CMD_DESC_VALID_MODE BIT(3) 64 + #define JZ_DMA_CMD_VALID_IRQ_ENABLE BIT(2) 65 + #define JZ_DMA_CMD_TRANSFER_IRQ_ENABLE BIT(1) 66 + #define JZ_DMA_CMD_LINK_ENABLE BIT(0) 67 + 68 + #define JZ_DMA_CMD_FLAGS_OFFSET 22 69 + #define JZ_DMA_CMD_RDIL_OFFSET 16 70 + #define JZ_DMA_CMD_SRC_WIDTH_OFFSET 14 71 + #define JZ_DMA_CMD_DST_WIDTH_OFFSET 12 72 + #define JZ_DMA_CMD_TRANSFER_SIZE_OFFSET 8 73 + #define JZ_DMA_CMD_MODE_OFFSET 7 74 + 75 + #define JZ_DMA_CTRL_PRIORITY_MASK (0x3 << 8) 76 + #define JZ_DMA_CTRL_HALT BIT(3) 77 + #define JZ_DMA_CTRL_ADDRESS_ERROR BIT(2) 78 + #define JZ_DMA_CTRL_ENABLE BIT(0) 79 + 80 + enum jz4740_dma_width { 81 + JZ4740_DMA_WIDTH_32BIT = 0, 82 + JZ4740_DMA_WIDTH_8BIT = 1, 83 + JZ4740_DMA_WIDTH_16BIT = 2, 84 + }; 85 + 86 + enum jz4740_dma_transfer_size { 87 + JZ4740_DMA_TRANSFER_SIZE_4BYTE = 0, 88 + JZ4740_DMA_TRANSFER_SIZE_1BYTE = 1, 89 + JZ4740_DMA_TRANSFER_SIZE_2BYTE = 2, 90 + JZ4740_DMA_TRANSFER_SIZE_16BYTE = 3, 91 + JZ4740_DMA_TRANSFER_SIZE_32BYTE = 4, 92 + }; 93 + 94 + enum jz4740_dma_flags { 95 + JZ4740_DMA_SRC_AUTOINC = 0x2, 96 + JZ4740_DMA_DST_AUTOINC = 0x1, 97 + }; 98 + 99 + enum jz4740_dma_mode { 100 + JZ4740_DMA_MODE_SINGLE = 0, 101 + JZ4740_DMA_MODE_BLOCK = 1, 102 + }; 103 + 104 + struct jz4740_dma_sg { 105 + dma_addr_t addr; 106 + unsigned int len; 107 + }; 108 + 109 + struct jz4740_dma_desc { 110 + struct virt_dma_desc vdesc; 111 + 112 + enum dma_transfer_direction direction; 113 + bool cyclic; 114 + 115 + unsigned int num_sgs; 116 + struct jz4740_dma_sg sg[]; 117 + }; 118 + 119 + struct jz4740_dmaengine_chan { 120 + struct virt_dma_chan vchan; 121 + unsigned int id; 122 + 123 + 
dma_addr_t fifo_addr; 124 + unsigned int transfer_shift; 125 + 126 + struct jz4740_dma_desc *desc; 127 + unsigned int next_sg; 128 + }; 129 + 130 + struct jz4740_dma_dev { 131 + struct dma_device ddev; 132 + void __iomem *base; 133 + struct clk *clk; 134 + 135 + struct jz4740_dmaengine_chan chan[JZ_DMA_NR_CHANS]; 136 + }; 137 + 138 + static struct jz4740_dma_dev *jz4740_dma_chan_get_dev( 139 + struct jz4740_dmaengine_chan *chan) 140 + { 141 + return container_of(chan->vchan.chan.device, struct jz4740_dma_dev, 142 + ddev); 143 + } 144 + 145 + static struct jz4740_dmaengine_chan *to_jz4740_dma_chan(struct dma_chan *c) 146 + { 147 + return container_of(c, struct jz4740_dmaengine_chan, vchan.chan); 148 + } 149 + 150 + static struct jz4740_dma_desc *to_jz4740_dma_desc(struct virt_dma_desc *vdesc) 151 + { 152 + return container_of(vdesc, struct jz4740_dma_desc, vdesc); 153 + } 154 + 155 + static inline uint32_t jz4740_dma_read(struct jz4740_dma_dev *dmadev, 156 + unsigned int reg) 157 + { 158 + return readl(dmadev->base + reg); 159 + } 160 + 161 + static inline void jz4740_dma_write(struct jz4740_dma_dev *dmadev, 162 + unsigned reg, uint32_t val) 163 + { 164 + writel(val, dmadev->base + reg); 165 + } 166 + 167 + static inline void jz4740_dma_write_mask(struct jz4740_dma_dev *dmadev, 168 + unsigned int reg, uint32_t val, uint32_t mask) 169 + { 170 + uint32_t tmp; 171 + 172 + tmp = jz4740_dma_read(dmadev, reg); 173 + tmp &= ~mask; 174 + tmp |= val; 175 + jz4740_dma_write(dmadev, reg, tmp); 176 + } 177 + 178 + static struct jz4740_dma_desc *jz4740_dma_alloc_desc(unsigned int num_sgs) 179 + { 180 + return kzalloc(sizeof(struct jz4740_dma_desc) + 181 + sizeof(struct jz4740_dma_sg) * num_sgs, GFP_ATOMIC); 182 + } 183 + 184 + static enum jz4740_dma_width jz4740_dma_width(enum dma_slave_buswidth width) 185 + { 186 + switch (width) { 187 + case DMA_SLAVE_BUSWIDTH_1_BYTE: 188 + return JZ4740_DMA_WIDTH_8BIT; 189 + case DMA_SLAVE_BUSWIDTH_2_BYTES: 190 + return 
JZ4740_DMA_WIDTH_16BIT; 191 + case DMA_SLAVE_BUSWIDTH_4_BYTES: 192 + return JZ4740_DMA_WIDTH_32BIT; 193 + default: 194 + return JZ4740_DMA_WIDTH_32BIT; 195 + } 196 + } 197 + 198 + static enum jz4740_dma_transfer_size jz4740_dma_maxburst(u32 maxburst) 199 + { 200 + if (maxburst <= 1) 201 + return JZ4740_DMA_TRANSFER_SIZE_1BYTE; 202 + else if (maxburst <= 3) 203 + return JZ4740_DMA_TRANSFER_SIZE_2BYTE; 204 + else if (maxburst <= 15) 205 + return JZ4740_DMA_TRANSFER_SIZE_4BYTE; 206 + else if (maxburst <= 31) 207 + return JZ4740_DMA_TRANSFER_SIZE_16BYTE; 208 + 209 + return JZ4740_DMA_TRANSFER_SIZE_32BYTE; 210 + } 211 + 212 + static int jz4740_dma_slave_config(struct dma_chan *c, 213 + const struct dma_slave_config *config) 214 + { 215 + struct jz4740_dmaengine_chan *chan = to_jz4740_dma_chan(c); 216 + struct jz4740_dma_dev *dmadev = jz4740_dma_chan_get_dev(chan); 217 + enum jz4740_dma_width src_width; 218 + enum jz4740_dma_width dst_width; 219 + enum jz4740_dma_transfer_size transfer_size; 220 + enum jz4740_dma_flags flags; 221 + uint32_t cmd; 222 + 223 + switch (config->direction) { 224 + case DMA_MEM_TO_DEV: 225 + flags = JZ4740_DMA_SRC_AUTOINC; 226 + transfer_size = jz4740_dma_maxburst(config->dst_maxburst); 227 + chan->fifo_addr = config->dst_addr; 228 + break; 229 + case DMA_DEV_TO_MEM: 230 + flags = JZ4740_DMA_DST_AUTOINC; 231 + transfer_size = jz4740_dma_maxburst(config->src_maxburst); 232 + chan->fifo_addr = config->src_addr; 233 + break; 234 + default: 235 + return -EINVAL; 236 + } 237 + 238 + src_width = jz4740_dma_width(config->src_addr_width); 239 + dst_width = jz4740_dma_width(config->dst_addr_width); 240 + 241 + switch (transfer_size) { 242 + case JZ4740_DMA_TRANSFER_SIZE_2BYTE: 243 + chan->transfer_shift = 1; 244 + break; 245 + case JZ4740_DMA_TRANSFER_SIZE_4BYTE: 246 + chan->transfer_shift = 2; 247 + break; 248 + case JZ4740_DMA_TRANSFER_SIZE_16BYTE: 249 + chan->transfer_shift = 4; 250 + break; 251 + case JZ4740_DMA_TRANSFER_SIZE_32BYTE: 252 + 
chan->transfer_shift = 5; 253 + break; 254 + default: 255 + chan->transfer_shift = 0; 256 + break; 257 + } 258 + 259 + cmd = flags << JZ_DMA_CMD_FLAGS_OFFSET; 260 + cmd |= src_width << JZ_DMA_CMD_SRC_WIDTH_OFFSET; 261 + cmd |= dst_width << JZ_DMA_CMD_DST_WIDTH_OFFSET; 262 + cmd |= transfer_size << JZ_DMA_CMD_TRANSFER_SIZE_OFFSET; 263 + cmd |= JZ4740_DMA_MODE_SINGLE << JZ_DMA_CMD_MODE_OFFSET; 264 + cmd |= JZ_DMA_CMD_TRANSFER_IRQ_ENABLE; 265 + 266 + jz4740_dma_write(dmadev, JZ_REG_DMA_CMD(chan->id), cmd); 267 + jz4740_dma_write(dmadev, JZ_REG_DMA_STATUS_CTRL(chan->id), 0); 268 + jz4740_dma_write(dmadev, JZ_REG_DMA_REQ_TYPE(chan->id), 269 + config->slave_id); 270 + 271 + return 0; 272 + } 273 + 274 + static int jz4740_dma_terminate_all(struct dma_chan *c) 275 + { 276 + struct jz4740_dmaengine_chan *chan = to_jz4740_dma_chan(c); 277 + struct jz4740_dma_dev *dmadev = jz4740_dma_chan_get_dev(chan); 278 + unsigned long flags; 279 + LIST_HEAD(head); 280 + 281 + spin_lock_irqsave(&chan->vchan.lock, flags); 282 + jz4740_dma_write_mask(dmadev, JZ_REG_DMA_STATUS_CTRL(chan->id), 0, 283 + JZ_DMA_STATUS_CTRL_ENABLE); 284 + chan->desc = NULL; 285 + vchan_get_all_descriptors(&chan->vchan, &head); 286 + spin_unlock_irqrestore(&chan->vchan.lock, flags); 287 + 288 + vchan_dma_desc_free_list(&chan->vchan, &head); 289 + 290 + return 0; 291 + } 292 + 293 + static int jz4740_dma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd, 294 + unsigned long arg) 295 + { 296 + struct dma_slave_config *config = (struct dma_slave_config *)arg; 297 + 298 + switch (cmd) { 299 + case DMA_SLAVE_CONFIG: 300 + return jz4740_dma_slave_config(chan, config); 301 + case DMA_TERMINATE_ALL: 302 + return jz4740_dma_terminate_all(chan); 303 + default: 304 + return -ENOSYS; 305 + } 306 + } 307 + 308 + static int jz4740_dma_start_transfer(struct jz4740_dmaengine_chan *chan) 309 + { 310 + struct jz4740_dma_dev *dmadev = jz4740_dma_chan_get_dev(chan); 311 + dma_addr_t src_addr, dst_addr; 312 + struct virt_dma_desc 
*vdesc; 313 + struct jz4740_dma_sg *sg; 314 + 315 + jz4740_dma_write_mask(dmadev, JZ_REG_DMA_STATUS_CTRL(chan->id), 0, 316 + JZ_DMA_STATUS_CTRL_ENABLE); 317 + 318 + if (!chan->desc) { 319 + vdesc = vchan_next_desc(&chan->vchan); 320 + if (!vdesc) 321 + return 0; 322 + chan->desc = to_jz4740_dma_desc(vdesc); 323 + chan->next_sg = 0; 324 + } 325 + 326 + if (chan->next_sg == chan->desc->num_sgs) 327 + chan->next_sg = 0; 328 + 329 + sg = &chan->desc->sg[chan->next_sg]; 330 + 331 + if (chan->desc->direction == DMA_MEM_TO_DEV) { 332 + src_addr = sg->addr; 333 + dst_addr = chan->fifo_addr; 334 + } else { 335 + src_addr = chan->fifo_addr; 336 + dst_addr = sg->addr; 337 + } 338 + jz4740_dma_write(dmadev, JZ_REG_DMA_SRC_ADDR(chan->id), src_addr); 339 + jz4740_dma_write(dmadev, JZ_REG_DMA_DST_ADDR(chan->id), dst_addr); 340 + jz4740_dma_write(dmadev, JZ_REG_DMA_TRANSFER_COUNT(chan->id), 341 + sg->len >> chan->transfer_shift); 342 + 343 + chan->next_sg++; 344 + 345 + jz4740_dma_write_mask(dmadev, JZ_REG_DMA_STATUS_CTRL(chan->id), 346 + JZ_DMA_STATUS_CTRL_NO_DESC | JZ_DMA_STATUS_CTRL_ENABLE, 347 + JZ_DMA_STATUS_CTRL_HALT | JZ_DMA_STATUS_CTRL_NO_DESC | 348 + JZ_DMA_STATUS_CTRL_ENABLE); 349 + 350 + jz4740_dma_write_mask(dmadev, JZ_REG_DMA_CTRL, 351 + JZ_DMA_CTRL_ENABLE, 352 + JZ_DMA_CTRL_HALT | JZ_DMA_CTRL_ENABLE); 353 + 354 + return 0; 355 + } 356 + 357 + static void jz4740_dma_chan_irq(struct jz4740_dmaengine_chan *chan) 358 + { 359 + spin_lock(&chan->vchan.lock); 360 + if (chan->desc) { 361 + if (chan->desc->cyclic) { 362 + vchan_cyclic_callback(&chan->desc->vdesc); 363 + } else { 364 + if (chan->next_sg == chan->desc->num_sgs) { 365 + vchan_cookie_complete(&chan->desc->vdesc); 366 + chan->desc = NULL; 367 + } 368 + } 369 + } 370 + jz4740_dma_start_transfer(chan); 371 + spin_unlock(&chan->vchan.lock); 372 + } 373 + 374 + static irqreturn_t jz4740_dma_irq(int irq, void *devid) 375 + { 376 + struct jz4740_dma_dev *dmadev = devid; 377 + uint32_t irq_status; 378 + 
unsigned int i; 379 + 380 + irq_status = readl(dmadev->base + JZ_REG_DMA_IRQ); 381 + 382 + for (i = 0; i < 6; ++i) { 383 + if (irq_status & (1 << i)) { 384 + jz4740_dma_write_mask(dmadev, 385 + JZ_REG_DMA_STATUS_CTRL(i), 0, 386 + JZ_DMA_STATUS_CTRL_ENABLE | 387 + JZ_DMA_STATUS_CTRL_TRANSFER_DONE); 388 + 389 + jz4740_dma_chan_irq(&dmadev->chan[i]); 390 + } 391 + } 392 + 393 + return IRQ_HANDLED; 394 + } 395 + 396 + static void jz4740_dma_issue_pending(struct dma_chan *c) 397 + { 398 + struct jz4740_dmaengine_chan *chan = to_jz4740_dma_chan(c); 399 + unsigned long flags; 400 + 401 + spin_lock_irqsave(&chan->vchan.lock, flags); 402 + if (vchan_issue_pending(&chan->vchan) && !chan->desc) 403 + jz4740_dma_start_transfer(chan); 404 + spin_unlock_irqrestore(&chan->vchan.lock, flags); 405 + } 406 + 407 + static struct dma_async_tx_descriptor *jz4740_dma_prep_slave_sg( 408 + struct dma_chan *c, struct scatterlist *sgl, 409 + unsigned int sg_len, enum dma_transfer_direction direction, 410 + unsigned long flags, void *context) 411 + { 412 + struct jz4740_dmaengine_chan *chan = to_jz4740_dma_chan(c); 413 + struct jz4740_dma_desc *desc; 414 + struct scatterlist *sg; 415 + unsigned int i; 416 + 417 + desc = jz4740_dma_alloc_desc(sg_len); 418 + if (!desc) 419 + return NULL; 420 + 421 + for_each_sg(sgl, sg, sg_len, i) { 422 + desc->sg[i].addr = sg_dma_address(sg); 423 + desc->sg[i].len = sg_dma_len(sg); 424 + } 425 + 426 + desc->num_sgs = sg_len; 427 + desc->direction = direction; 428 + desc->cyclic = false; 429 + 430 + return vchan_tx_prep(&chan->vchan, &desc->vdesc, flags); 431 + } 432 + 433 + static struct dma_async_tx_descriptor *jz4740_dma_prep_dma_cyclic( 434 + struct dma_chan *c, dma_addr_t buf_addr, size_t buf_len, 435 + size_t period_len, enum dma_transfer_direction direction, 436 + unsigned long flags, void *context) 437 + { 438 + struct jz4740_dmaengine_chan *chan = to_jz4740_dma_chan(c); 439 + struct jz4740_dma_desc *desc; 440 + unsigned int num_periods, i; 441 + 442 + 
if (buf_len % period_len) 443 + return NULL; 444 + 445 + num_periods = buf_len / period_len; 446 + 447 + desc = jz4740_dma_alloc_desc(num_periods); 448 + if (!desc) 449 + return NULL; 450 + 451 + for (i = 0; i < num_periods; i++) { 452 + desc->sg[i].addr = buf_addr; 453 + desc->sg[i].len = period_len; 454 + buf_addr += period_len; 455 + } 456 + 457 + desc->num_sgs = num_periods; 458 + desc->direction = direction; 459 + desc->cyclic = true; 460 + 461 + return vchan_tx_prep(&chan->vchan, &desc->vdesc, flags); 462 + } 463 + 464 + static size_t jz4740_dma_desc_residue(struct jz4740_dmaengine_chan *chan, 465 + struct jz4740_dma_desc *desc, unsigned int next_sg) 466 + { 467 + struct jz4740_dma_dev *dmadev = jz4740_dma_chan_get_dev(chan); 468 + unsigned int residue, count; 469 + unsigned int i; 470 + 471 + residue = 0; 472 + 473 + for (i = next_sg; i < desc->num_sgs; i++) 474 + residue += desc->sg[i].len; 475 + 476 + if (next_sg != 0) { 477 + count = jz4740_dma_read(dmadev, 478 + JZ_REG_DMA_TRANSFER_COUNT(chan->id)); 479 + residue += count << chan->transfer_shift; 480 + } 481 + 482 + return residue; 483 + } 484 + 485 + static enum dma_status jz4740_dma_tx_status(struct dma_chan *c, 486 + dma_cookie_t cookie, struct dma_tx_state *state) 487 + { 488 + struct jz4740_dmaengine_chan *chan = to_jz4740_dma_chan(c); 489 + struct virt_dma_desc *vdesc; 490 + enum dma_status status; 491 + unsigned long flags; 492 + 493 + status = dma_cookie_status(c, cookie, state); 494 + if (status == DMA_SUCCESS || !state) 495 + return status; 496 + 497 + spin_lock_irqsave(&chan->vchan.lock, flags); 498 + vdesc = vchan_find_desc(&chan->vchan, cookie); 499 + if (chan->desc && cookie == chan->desc->vdesc.tx.cookie) { 500 + state->residue = jz4740_dma_desc_residue(chan, chan->desc, 501 + chan->next_sg); 502 + } else if (vdesc) { 503 + state->residue = jz4740_dma_desc_residue(chan, 504 + to_jz4740_dma_desc(vdesc), 0); 505 + } else { 506 + state->residue = 0; 507 + } 508 + spin_unlock_irqrestore(&chan->vchan.lock, 
flags); 509 + 510 + return status; 511 + } 512 + 513 + static int jz4740_dma_alloc_chan_resources(struct dma_chan *c) 514 + { 515 + return 0; 516 + } 517 + 518 + static void jz4740_dma_free_chan_resources(struct dma_chan *c) 519 + { 520 + vchan_free_chan_resources(to_virt_chan(c)); 521 + } 522 + 523 + static void jz4740_dma_desc_free(struct virt_dma_desc *vdesc) 524 + { 525 + kfree(container_of(vdesc, struct jz4740_dma_desc, vdesc)); 526 + } 527 + 528 + static int jz4740_dma_probe(struct platform_device *pdev) 529 + { 530 + struct jz4740_dmaengine_chan *chan; 531 + struct jz4740_dma_dev *dmadev; 532 + struct dma_device *dd; 533 + unsigned int i; 534 + struct resource *res; 535 + int ret; 536 + int irq; 537 + 538 + dmadev = devm_kzalloc(&pdev->dev, sizeof(*dmadev), GFP_KERNEL); 539 + if (!dmadev) 540 + return -ENOMEM; 541 + 542 + dd = &dmadev->ddev; 543 + 544 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 545 + dmadev->base = devm_ioremap_resource(&pdev->dev, res); 546 + if (IS_ERR(dmadev->base)) 547 + return PTR_ERR(dmadev->base); 548 + 549 + dmadev->clk = clk_get(&pdev->dev, "dma"); 550 + if (IS_ERR(dmadev->clk)) 551 + return PTR_ERR(dmadev->clk); 552 + 553 + clk_prepare_enable(dmadev->clk); 554 + 555 + dma_cap_set(DMA_SLAVE, dd->cap_mask); 556 + dma_cap_set(DMA_CYCLIC, dd->cap_mask); 557 + dd->device_alloc_chan_resources = jz4740_dma_alloc_chan_resources; 558 + dd->device_free_chan_resources = jz4740_dma_free_chan_resources; 559 + dd->device_tx_status = jz4740_dma_tx_status; 560 + dd->device_issue_pending = jz4740_dma_issue_pending; 561 + dd->device_prep_slave_sg = jz4740_dma_prep_slave_sg; 562 + dd->device_prep_dma_cyclic = jz4740_dma_prep_dma_cyclic; 563 + dd->device_control = jz4740_dma_control; 564 + dd->dev = &pdev->dev; 565 + dd->chancnt = JZ_DMA_NR_CHANS; 566 + INIT_LIST_HEAD(&dd->channels); 567 + 568 + for (i = 0; i < dd->chancnt; i++) { 569 + chan = &dmadev->chan[i]; 570 + chan->id = i; 571 + chan->vchan.desc_free = jz4740_dma_desc_free; 572 + 
vchan_init(&chan->vchan, dd); 573 + } 574 + 575 + ret = dma_async_device_register(dd); 576 + if (ret) 577 + return ret; 578 + 579 + irq = platform_get_irq(pdev, 0); 580 + ret = request_irq(irq, jz4740_dma_irq, 0, dev_name(&pdev->dev), dmadev); 581 + if (ret) 582 + goto err_unregister; 583 + 584 + platform_set_drvdata(pdev, dmadev); 585 + 586 + return 0; 587 + 588 + err_unregister: 589 + dma_async_device_unregister(dd); 590 + return ret; 591 + } 592 + 593 + static int jz4740_dma_remove(struct platform_device *pdev) 594 + { 595 + struct jz4740_dma_dev *dmadev = platform_get_drvdata(pdev); 596 + int irq = platform_get_irq(pdev, 0); 597 + 598 + free_irq(irq, dmadev); 599 + dma_async_device_unregister(&dmadev->ddev); 600 + clk_disable_unprepare(dmadev->clk); 601 + 602 + return 0; 603 + } 604 + 605 + static struct platform_driver jz4740_dma_driver = { 606 + .probe = jz4740_dma_probe, 607 + .remove = jz4740_dma_remove, 608 + .driver = { 609 + .name = "jz4740-dma", 610 + .owner = THIS_MODULE, 611 + }, 612 + }; 613 + module_platform_driver(jz4740_dma_driver); 614 + 615 + MODULE_AUTHOR("Lars-Peter Clausen <lars@metafoo.de>"); 616 + MODULE_DESCRIPTION("JZ4740 DMA driver"); 617 + MODULE_LICENSE("GPL v2");
+29
drivers/dma/dw/Kconfig
··· 1 + # 2 + # DMA engine configuration for dw 3 + # 4 + 5 + config DW_DMAC_CORE 6 + tristate "Synopsys DesignWare AHB DMA support" 7 + depends on GENERIC_HARDIRQS 8 + select DMA_ENGINE 9 + 10 + config DW_DMAC 11 + tristate "Synopsys DesignWare AHB DMA platform driver" 12 + select DW_DMAC_CORE 13 + select DW_DMAC_BIG_ENDIAN_IO if AVR32 14 + default y if CPU_AT32AP7000 15 + help 16 + Support the Synopsys DesignWare AHB DMA controller. This 17 + can be integrated in chips such as the Atmel AT32ap7000. 18 + 19 + config DW_DMAC_PCI 20 + tristate "Synopsys DesignWare AHB DMA PCI driver" 21 + depends on PCI 22 + select DW_DMAC_CORE 23 + help 24 + Support the Synopsys DesignWare AHB DMA controller on the 25 + platforms that enumerate it as a PCI device. For example, 26 + Intel Medfield has integrated this GPDMA controller. 27 + 28 + config DW_DMAC_BIG_ENDIAN_IO 29 + bool
+8
drivers/dma/dw/Makefile
··· 1 + obj-$(CONFIG_DW_DMAC_CORE) += dw_dmac_core.o 2 + dw_dmac_core-objs := core.o 3 + 4 + obj-$(CONFIG_DW_DMAC) += dw_dmac.o 5 + dw_dmac-objs := platform.o 6 + 7 + obj-$(CONFIG_DW_DMAC_PCI) += dw_dmac_pci.o 8 + dw_dmac_pci-objs := pci.o
+70
drivers/dma/dw/internal.h
··· 1 + /* 2 + * Driver for the Synopsys DesignWare DMA Controller 3 + * 4 + * Copyright (C) 2013 Intel Corporation 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + */ 10 + 11 + #ifndef _DW_DMAC_INTERNAL_H 12 + #define _DW_DMAC_INTERNAL_H 13 + 14 + #include <linux/device.h> 15 + #include <linux/dw_dmac.h> 16 + 17 + #include "regs.h" 18 + 19 + /** 20 + * struct dw_dma_chip - representation of DesignWare DMA controller hardware 21 + * @dev: struct device of the DMA controller 22 + * @irq: irq line 23 + * @regs: memory mapped I/O space 24 + * @dw: struct dw_dma that is filled in by dw_dma_probe() 25 + */ 26 + struct dw_dma_chip { 27 + struct device *dev; 28 + int irq; 29 + void __iomem *regs; 30 + struct dw_dma *dw; 31 + }; 32 + 33 + /* Export to the platform drivers */ 34 + int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata); 35 + int dw_dma_remove(struct dw_dma_chip *chip); 36 + 37 + void dw_dma_shutdown(struct dw_dma_chip *chip); 38 + 39 + #ifdef CONFIG_PM_SLEEP 40 + 41 + int dw_dma_suspend(struct dw_dma_chip *chip); 42 + int dw_dma_resume(struct dw_dma_chip *chip); 43 + 44 + #endif /* CONFIG_PM_SLEEP */ 45 + 46 + /** 47 + * dwc_get_dms - get destination master 48 + * @slave: pointer to the custom slave configuration 49 + * 50 + * Returns destination master in the custom slave configuration if defined, or 51 + * default value otherwise. 52 + */ 53 + static inline unsigned int dwc_get_dms(struct dw_dma_slave *slave) 54 + { 55 + return slave ? slave->dst_master : 0; 56 + } 57 + 58 + /** 59 + * dwc_get_sms - get source master 60 + * @slave: pointer to the custom slave configuration 61 + * 62 + * Returns source master in the custom slave configuration if defined, or 63 + * default value otherwise. 
64 + */ 65 + static inline unsigned int dwc_get_sms(struct dw_dma_slave *slave) 66 + { 67 + return slave ? slave->src_master : 1; 68 + } 69 + 70 + #endif /* _DW_DMAC_INTERNAL_H */
+101
drivers/dma/dw/pci.c
··· 1 + /* 2 + * PCI driver for the Synopsys DesignWare DMA Controller 3 + * 4 + * Copyright (C) 2013 Intel Corporation 5 + * Author: Andy Shevchenko <andriy.shevchenko@linux.intel.com> 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License version 2 as 9 + * published by the Free Software Foundation. 10 + */ 11 + 12 + #include <linux/module.h> 13 + #include <linux/pci.h> 14 + #include <linux/device.h> 15 + 16 + #include "internal.h" 17 + 18 + static struct dw_dma_platform_data dw_pci_pdata = { 19 + .is_private = 1, 20 + .chan_allocation_order = CHAN_ALLOCATION_ASCENDING, 21 + .chan_priority = CHAN_PRIORITY_ASCENDING, 22 + }; 23 + 24 + static int dw_pci_probe(struct pci_dev *pdev, const struct pci_device_id *pid) 25 + { 26 + struct dw_dma_chip *chip; 27 + struct dw_dma_platform_data *pdata = (void *)pid->driver_data; 28 + int ret; 29 + 30 + ret = pcim_enable_device(pdev); 31 + if (ret) 32 + return ret; 33 + 34 + ret = pcim_iomap_regions(pdev, 1 << 0, pci_name(pdev)); 35 + if (ret) { 36 + dev_err(&pdev->dev, "I/O memory remapping failed\n"); 37 + return ret; 38 + } 39 + 40 + pci_set_master(pdev); 41 + pci_try_set_mwi(pdev); 42 + 43 + ret = pci_set_dma_mask(pdev, DMA_BIT_MASK(32)); 44 + if (ret) 45 + return ret; 46 + 47 + ret = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32)); 48 + if (ret) 49 + return ret; 50 + 51 + chip = devm_kzalloc(&pdev->dev, sizeof(*chip), GFP_KERNEL); 52 + if (!chip) 53 + return -ENOMEM; 54 + 55 + chip->dev = &pdev->dev; 56 + chip->regs = pcim_iomap_table(pdev)[0]; 57 + chip->irq = pdev->irq; 58 + 59 + ret = dw_dma_probe(chip, pdata); 60 + if (ret) 61 + return ret; 62 + 63 + pci_set_drvdata(pdev, chip); 64 + 65 + return 0; 66 + } 67 + 68 + static void dw_pci_remove(struct pci_dev *pdev) 69 + { 70 + struct dw_dma_chip *chip = pci_get_drvdata(pdev); 71 + int ret; 72 + 73 + ret = dw_dma_remove(chip); 74 + if (ret) 75 + dev_warn(&pdev->dev, "can't remove 
device properly: %d\n", ret); 76 + } 77 + 78 + static DEFINE_PCI_DEVICE_TABLE(dw_pci_id_table) = { 79 + /* Medfield */ 80 + { PCI_VDEVICE(INTEL, 0x0827), (kernel_ulong_t)&dw_pci_pdata }, 81 + { PCI_VDEVICE(INTEL, 0x0830), (kernel_ulong_t)&dw_pci_pdata }, 82 + 83 + /* BayTrail */ 84 + { PCI_VDEVICE(INTEL, 0x0f06), (kernel_ulong_t)&dw_pci_pdata }, 85 + { PCI_VDEVICE(INTEL, 0x0f40), (kernel_ulong_t)&dw_pci_pdata }, 86 + { } 87 + }; 88 + MODULE_DEVICE_TABLE(pci, dw_pci_id_table); 89 + 90 + static struct pci_driver dw_pci_driver = { 91 + .name = "dw_dmac_pci", 92 + .id_table = dw_pci_id_table, 93 + .probe = dw_pci_probe, 94 + .remove = dw_pci_remove, 95 + }; 96 + 97 + module_pci_driver(dw_pci_driver); 98 + 99 + MODULE_LICENSE("GPL v2"); 100 + MODULE_DESCRIPTION("Synopsys DesignWare DMA Controller PCI driver"); 101 + MODULE_AUTHOR("Andy Shevchenko <andriy.shevchenko@linux.intel.com>");
+317
drivers/dma/dw/platform.c
··· 1 + /* 2 + * Platform driver for the Synopsys DesignWare DMA Controller 3 + * 4 + * Copyright (C) 2007-2008 Atmel Corporation 5 + * Copyright (C) 2010-2011 ST Microelectronics 6 + * Copyright (C) 2013 Intel Corporation 7 + * 8 + * Some parts of this driver are derived from the original dw_dmac. 9 + * 10 + * This program is free software; you can redistribute it and/or modify 11 + * it under the terms of the GNU General Public License version 2 as 12 + * published by the Free Software Foundation. 13 + */ 14 + 15 + #include <linux/module.h> 16 + #include <linux/device.h> 17 + #include <linux/clk.h> 18 + #include <linux/platform_device.h> 19 + #include <linux/dmaengine.h> 20 + #include <linux/dma-mapping.h> 21 + #include <linux/of.h> 22 + #include <linux/of_dma.h> 23 + #include <linux/acpi.h> 24 + #include <linux/acpi_dma.h> 25 + 26 + #include "internal.h" 27 + 28 + struct dw_dma_of_filter_args { 29 + struct dw_dma *dw; 30 + unsigned int req; 31 + unsigned int src; 32 + unsigned int dst; 33 + }; 34 + 35 + static bool dw_dma_of_filter(struct dma_chan *chan, void *param) 36 + { 37 + struct dw_dma_chan *dwc = to_dw_dma_chan(chan); 38 + struct dw_dma_of_filter_args *fargs = param; 39 + 40 + /* Ensure the device matches our channel */ 41 + if (chan->device != &fargs->dw->dma) 42 + return false; 43 + 44 + dwc->request_line = fargs->req; 45 + dwc->src_master = fargs->src; 46 + dwc->dst_master = fargs->dst; 47 + 48 + return true; 49 + } 50 + 51 + static struct dma_chan *dw_dma_of_xlate(struct of_phandle_args *dma_spec, 52 + struct of_dma *ofdma) 53 + { 54 + struct dw_dma *dw = ofdma->of_dma_data; 55 + struct dw_dma_of_filter_args fargs = { 56 + .dw = dw, 57 + }; 58 + dma_cap_mask_t cap; 59 + 60 + if (dma_spec->args_count != 3) 61 + return NULL; 62 + 63 + fargs.req = dma_spec->args[0]; 64 + fargs.src = dma_spec->args[1]; 65 + fargs.dst = dma_spec->args[2]; 66 + 67 + if (WARN_ON(fargs.req >= DW_DMA_MAX_NR_REQUESTS || 68 + fargs.src >= dw->nr_masters || 69 + fargs.dst >= 
dw->nr_masters)) 70 + return NULL; 71 + 72 + dma_cap_zero(cap); 73 + dma_cap_set(DMA_SLAVE, cap); 74 + 75 + /* TODO: there should be a simpler way to do this */ 76 + return dma_request_channel(cap, dw_dma_of_filter, &fargs); 77 + } 78 + 79 + #ifdef CONFIG_ACPI 80 + static bool dw_dma_acpi_filter(struct dma_chan *chan, void *param) 81 + { 82 + struct dw_dma_chan *dwc = to_dw_dma_chan(chan); 83 + struct acpi_dma_spec *dma_spec = param; 84 + 85 + if (chan->device->dev != dma_spec->dev || 86 + chan->chan_id != dma_spec->chan_id) 87 + return false; 88 + 89 + dwc->request_line = dma_spec->slave_id; 90 + dwc->src_master = dwc_get_sms(NULL); 91 + dwc->dst_master = dwc_get_dms(NULL); 92 + 93 + return true; 94 + } 95 + 96 + static void dw_dma_acpi_controller_register(struct dw_dma *dw) 97 + { 98 + struct device *dev = dw->dma.dev; 99 + struct acpi_dma_filter_info *info; 100 + int ret; 101 + 102 + info = devm_kzalloc(dev, sizeof(*info), GFP_KERNEL); 103 + if (!info) 104 + return; 105 + 106 + dma_cap_zero(info->dma_cap); 107 + dma_cap_set(DMA_SLAVE, info->dma_cap); 108 + info->filter_fn = dw_dma_acpi_filter; 109 + 110 + ret = devm_acpi_dma_controller_register(dev, acpi_dma_simple_xlate, 111 + info); 112 + if (ret) 113 + dev_err(dev, "could not register acpi_dma_controller\n"); 114 + } 115 + #else /* !CONFIG_ACPI */ 116 + static inline void dw_dma_acpi_controller_register(struct dw_dma *dw) {} 117 + #endif /* !CONFIG_ACPI */ 118 + 119 + #ifdef CONFIG_OF 120 + static struct dw_dma_platform_data * 121 + dw_dma_parse_dt(struct platform_device *pdev) 122 + { 123 + struct device_node *np = pdev->dev.of_node; 124 + struct dw_dma_platform_data *pdata; 125 + u32 tmp, arr[4]; 126 + 127 + if (!np) { 128 + dev_err(&pdev->dev, "Missing DT data\n"); 129 + return NULL; 130 + } 131 + 132 + pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL); 133 + if (!pdata) 134 + return NULL; 135 + 136 + if (of_property_read_u32(np, "dma-channels", &pdata->nr_channels)) 137 + return NULL; 138 + 139 
+ if (of_property_read_bool(np, "is_private")) 140 + pdata->is_private = true; 141 + 142 + if (!of_property_read_u32(np, "chan_allocation_order", &tmp)) 143 + pdata->chan_allocation_order = (unsigned char)tmp; 144 + 145 + if (!of_property_read_u32(np, "chan_priority", &tmp)) 146 + pdata->chan_priority = tmp; 147 + 148 + if (!of_property_read_u32(np, "block_size", &tmp)) 149 + pdata->block_size = tmp; 150 + 151 + if (!of_property_read_u32(np, "dma-masters", &tmp)) { 152 + if (tmp > 4) 153 + return NULL; 154 + 155 + pdata->nr_masters = tmp; 156 + } 157 + 158 + if (!of_property_read_u32_array(np, "data_width", arr, 159 + pdata->nr_masters)) 160 + for (tmp = 0; tmp < pdata->nr_masters; tmp++) 161 + pdata->data_width[tmp] = arr[tmp]; 162 + 163 + return pdata; 164 + } 165 + #else 166 + static inline struct dw_dma_platform_data * 167 + dw_dma_parse_dt(struct platform_device *pdev) 168 + { 169 + return NULL; 170 + } 171 + #endif 172 + 173 + static int dw_probe(struct platform_device *pdev) 174 + { 175 + struct dw_dma_chip *chip; 176 + struct device *dev = &pdev->dev; 177 + struct resource *mem; 178 + struct dw_dma_platform_data *pdata; 179 + int err; 180 + 181 + chip = devm_kzalloc(dev, sizeof(*chip), GFP_KERNEL); 182 + if (!chip) 183 + return -ENOMEM; 184 + 185 + chip->irq = platform_get_irq(pdev, 0); 186 + if (chip->irq < 0) 187 + return chip->irq; 188 + 189 + mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); 190 + chip->regs = devm_ioremap_resource(dev, mem); 191 + if (IS_ERR(chip->regs)) 192 + return PTR_ERR(chip->regs); 193 + 194 + /* Apply default dma_mask if needed */ 195 + if (!dev->dma_mask) { 196 + dev->dma_mask = &dev->coherent_dma_mask; 197 + dev->coherent_dma_mask = DMA_BIT_MASK(32); 198 + } 199 + 200 + pdata = dev_get_platdata(dev); 201 + if (!pdata) 202 + pdata = dw_dma_parse_dt(pdev); 203 + 204 + chip->dev = dev; 205 + 206 + err = dw_dma_probe(chip, pdata); 207 + if (err) 208 + return err; 209 + 210 + platform_set_drvdata(pdev, chip); 211 + 212 + if 
(pdev->dev.of_node) { 213 + err = of_dma_controller_register(pdev->dev.of_node, 214 + dw_dma_of_xlate, chip->dw); 215 + if (err) 216 + dev_err(&pdev->dev, 217 + "could not register of_dma_controller\n"); 218 + } 219 + 220 + if (ACPI_HANDLE(&pdev->dev)) 221 + dw_dma_acpi_controller_register(chip->dw); 222 + 223 + return 0; 224 + } 225 + 226 + static int dw_remove(struct platform_device *pdev) 227 + { 228 + struct dw_dma_chip *chip = platform_get_drvdata(pdev); 229 + 230 + if (pdev->dev.of_node) 231 + of_dma_controller_free(pdev->dev.of_node); 232 + 233 + return dw_dma_remove(chip); 234 + } 235 + 236 + static void dw_shutdown(struct platform_device *pdev) 237 + { 238 + struct dw_dma_chip *chip = platform_get_drvdata(pdev); 239 + 240 + dw_dma_shutdown(chip); 241 + } 242 + 243 + #ifdef CONFIG_OF 244 + static const struct of_device_id dw_dma_of_id_table[] = { 245 + { .compatible = "snps,dma-spear1340" }, 246 + {} 247 + }; 248 + MODULE_DEVICE_TABLE(of, dw_dma_of_id_table); 249 + #endif 250 + 251 + #ifdef CONFIG_ACPI 252 + static const struct acpi_device_id dw_dma_acpi_id_table[] = { 253 + { "INTL9C60", 0 }, 254 + { } 255 + }; 256 + #endif 257 + 258 + #ifdef CONFIG_PM_SLEEP 259 + 260 + static int dw_suspend_noirq(struct device *dev) 261 + { 262 + struct platform_device *pdev = to_platform_device(dev); 263 + struct dw_dma_chip *chip = platform_get_drvdata(pdev); 264 + 265 + return dw_dma_suspend(chip); 266 + } 267 + 268 + static int dw_resume_noirq(struct device *dev) 269 + { 270 + struct platform_device *pdev = to_platform_device(dev); 271 + struct dw_dma_chip *chip = platform_get_drvdata(pdev); 272 + 273 + return dw_dma_resume(chip); 274 + } 275 + 276 + #else /* !CONFIG_PM_SLEEP */ 277 + 278 + #define dw_suspend_noirq NULL 279 + #define dw_resume_noirq NULL 280 + 281 + #endif /* !CONFIG_PM_SLEEP */ 282 + 283 + static const struct dev_pm_ops dw_dev_pm_ops = { 284 + .suspend_noirq = dw_suspend_noirq, 285 + .resume_noirq = dw_resume_noirq, 286 + .freeze_noirq = 
dw_suspend_noirq, 287 + .thaw_noirq = dw_resume_noirq, 288 + .restore_noirq = dw_resume_noirq, 289 + .poweroff_noirq = dw_suspend_noirq, 290 + }; 291 + 292 + static struct platform_driver dw_driver = { 293 + .probe = dw_probe, 294 + .remove = dw_remove, 295 + .shutdown = dw_shutdown, 296 + .driver = { 297 + .name = "dw_dmac", 298 + .pm = &dw_dev_pm_ops, 299 + .of_match_table = of_match_ptr(dw_dma_of_id_table), 300 + .acpi_match_table = ACPI_PTR(dw_dma_acpi_id_table), 301 + }, 302 + }; 303 + 304 + static int __init dw_init(void) 305 + { 306 + return platform_driver_register(&dw_driver); 307 + } 308 + subsys_initcall(dw_init); 309 + 310 + static void __exit dw_exit(void) 311 + { 312 + platform_driver_unregister(&dw_driver); 313 + } 314 + module_exit(dw_exit); 315 + 316 + MODULE_LICENSE("GPL v2"); 317 + MODULE_DESCRIPTION("Synopsys DesignWare DMA Controller platform driver");
+39 -281
drivers/dma/dw_dmac.c drivers/dma/dw/core.c
··· 3 3 * 4 4 * Copyright (C) 2007-2008 Atmel Corporation 5 5 * Copyright (C) 2010-2011 ST Microelectronics 6 + * Copyright (C) 2013 Intel Corporation 6 7 * 7 8 * This program is free software; you can redistribute it and/or modify 8 9 * it under the terms of the GNU General Public License version 2 as ··· 20 19 #include <linux/init.h> 21 20 #include <linux/interrupt.h> 22 21 #include <linux/io.h> 23 - #include <linux/of.h> 24 - #include <linux/of_dma.h> 25 22 #include <linux/mm.h> 26 23 #include <linux/module.h> 27 - #include <linux/platform_device.h> 28 24 #include <linux/slab.h> 29 - #include <linux/acpi.h> 30 - #include <linux/acpi_dma.h> 31 25 32 - #include "dw_dmac_regs.h" 33 - #include "dmaengine.h" 26 + #include "../dmaengine.h" 27 + #include "internal.h" 34 28 35 29 /* 36 30 * This supports the Synopsys "DesignWare AHB Central DMA Controller", ··· 36 40 * The driver has currently been tested only with the Atmel AT32AP7000, 37 41 * which does not support descriptor writeback. 38 42 */ 39 - 40 - static inline unsigned int dwc_get_dms(struct dw_dma_slave *slave) 41 - { 42 - return slave ? slave->dst_master : 0; 43 - } 44 - 45 - static inline unsigned int dwc_get_sms(struct dw_dma_slave *slave) 46 - { 47 - return slave ? 
slave->src_master : 1; 48 - } 49 43 50 44 static inline void dwc_set_masters(struct dw_dma_chan *dwc) 51 45 { ··· 542 556 543 557 /* --------------------- Cyclic DMA API extensions -------------------- */ 544 558 545 - inline dma_addr_t dw_dma_get_src_addr(struct dma_chan *chan) 559 + dma_addr_t dw_dma_get_src_addr(struct dma_chan *chan) 546 560 { 547 561 struct dw_dma_chan *dwc = to_dw_dma_chan(chan); 548 562 return channel_readl(dwc, SAR); 549 563 } 550 564 EXPORT_SYMBOL(dw_dma_get_src_addr); 551 565 552 - inline dma_addr_t dw_dma_get_dst_addr(struct dma_chan *chan) 566 + dma_addr_t dw_dma_get_dst_addr(struct dma_chan *chan) 553 567 { 554 568 struct dw_dma_chan *dwc = to_dw_dma_chan(chan); 555 569 return channel_readl(dwc, DAR); ··· 1211 1225 dev_vdbg(chan2dev(chan), "%s: done\n", __func__); 1212 1226 } 1213 1227 1214 - /*----------------------------------------------------------------------*/ 1215 - 1216 - struct dw_dma_of_filter_args { 1217 - struct dw_dma *dw; 1218 - unsigned int req; 1219 - unsigned int src; 1220 - unsigned int dst; 1221 - }; 1222 - 1223 - static bool dw_dma_of_filter(struct dma_chan *chan, void *param) 1224 - { 1225 - struct dw_dma_chan *dwc = to_dw_dma_chan(chan); 1226 - struct dw_dma_of_filter_args *fargs = param; 1227 - 1228 - /* Ensure the device matches our channel */ 1229 - if (chan->device != &fargs->dw->dma) 1230 - return false; 1231 - 1232 - dwc->request_line = fargs->req; 1233 - dwc->src_master = fargs->src; 1234 - dwc->dst_master = fargs->dst; 1235 - 1236 - return true; 1237 - } 1238 - 1239 - static struct dma_chan *dw_dma_of_xlate(struct of_phandle_args *dma_spec, 1240 - struct of_dma *ofdma) 1241 - { 1242 - struct dw_dma *dw = ofdma->of_dma_data; 1243 - struct dw_dma_of_filter_args fargs = { 1244 - .dw = dw, 1245 - }; 1246 - dma_cap_mask_t cap; 1247 - 1248 - if (dma_spec->args_count != 3) 1249 - return NULL; 1250 - 1251 - fargs.req = dma_spec->args[0]; 1252 - fargs.src = dma_spec->args[1]; 1253 - fargs.dst = dma_spec->args[2]; 
1254 - 1255 - if (WARN_ON(fargs.req >= DW_DMA_MAX_NR_REQUESTS || 1256 - fargs.src >= dw->nr_masters || 1257 - fargs.dst >= dw->nr_masters)) 1258 - return NULL; 1259 - 1260 - dma_cap_zero(cap); 1261 - dma_cap_set(DMA_SLAVE, cap); 1262 - 1263 - /* TODO: there should be a simpler way to do this */ 1264 - return dma_request_channel(cap, dw_dma_of_filter, &fargs); 1265 - } 1266 - 1267 - #ifdef CONFIG_ACPI 1268 - static bool dw_dma_acpi_filter(struct dma_chan *chan, void *param) 1269 - { 1270 - struct dw_dma_chan *dwc = to_dw_dma_chan(chan); 1271 - struct acpi_dma_spec *dma_spec = param; 1272 - 1273 - if (chan->device->dev != dma_spec->dev || 1274 - chan->chan_id != dma_spec->chan_id) 1275 - return false; 1276 - 1277 - dwc->request_line = dma_spec->slave_id; 1278 - dwc->src_master = dwc_get_sms(NULL); 1279 - dwc->dst_master = dwc_get_dms(NULL); 1280 - 1281 - return true; 1282 - } 1283 - 1284 - static void dw_dma_acpi_controller_register(struct dw_dma *dw) 1285 - { 1286 - struct device *dev = dw->dma.dev; 1287 - struct acpi_dma_filter_info *info; 1288 - int ret; 1289 - 1290 - info = devm_kzalloc(dev, sizeof(*info), GFP_KERNEL); 1291 - if (!info) 1292 - return; 1293 - 1294 - dma_cap_zero(info->dma_cap); 1295 - dma_cap_set(DMA_SLAVE, info->dma_cap); 1296 - info->filter_fn = dw_dma_acpi_filter; 1297 - 1298 - ret = devm_acpi_dma_controller_register(dev, acpi_dma_simple_xlate, 1299 - info); 1300 - if (ret) 1301 - dev_err(dev, "could not register acpi_dma_controller\n"); 1302 - } 1303 - #else /* !CONFIG_ACPI */ 1304 - static inline void dw_dma_acpi_controller_register(struct dw_dma *dw) {} 1305 - #endif /* !CONFIG_ACPI */ 1306 - 1307 1228 /* --------------------- Cyclic DMA API extensions -------------------- */ 1308 1229 1309 1230 /** ··· 1491 1598 dw->chan[i].initialized = false; 1492 1599 } 1493 1600 1494 - #ifdef CONFIG_OF 1495 - static struct dw_dma_platform_data * 1496 - dw_dma_parse_dt(struct platform_device *pdev) 1601 + int dw_dma_probe(struct dw_dma_chip *chip, struct 
dw_dma_platform_data *pdata) 1497 1602 { 1498 - struct device_node *np = pdev->dev.of_node; 1499 - struct dw_dma_platform_data *pdata; 1500 - u32 tmp, arr[4]; 1501 - 1502 - if (!np) { 1503 - dev_err(&pdev->dev, "Missing DT data\n"); 1504 - return NULL; 1505 - } 1506 - 1507 - pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL); 1508 - if (!pdata) 1509 - return NULL; 1510 - 1511 - if (of_property_read_u32(np, "dma-channels", &pdata->nr_channels)) 1512 - return NULL; 1513 - 1514 - if (of_property_read_bool(np, "is_private")) 1515 - pdata->is_private = true; 1516 - 1517 - if (!of_property_read_u32(np, "chan_allocation_order", &tmp)) 1518 - pdata->chan_allocation_order = (unsigned char)tmp; 1519 - 1520 - if (!of_property_read_u32(np, "chan_priority", &tmp)) 1521 - pdata->chan_priority = tmp; 1522 - 1523 - if (!of_property_read_u32(np, "block_size", &tmp)) 1524 - pdata->block_size = tmp; 1525 - 1526 - if (!of_property_read_u32(np, "dma-masters", &tmp)) { 1527 - if (tmp > 4) 1528 - return NULL; 1529 - 1530 - pdata->nr_masters = tmp; 1531 - } 1532 - 1533 - if (!of_property_read_u32_array(np, "data_width", arr, 1534 - pdata->nr_masters)) 1535 - for (tmp = 0; tmp < pdata->nr_masters; tmp++) 1536 - pdata->data_width[tmp] = arr[tmp]; 1537 - 1538 - return pdata; 1539 - } 1540 - #else 1541 - static inline struct dw_dma_platform_data * 1542 - dw_dma_parse_dt(struct platform_device *pdev) 1543 - { 1544 - return NULL; 1545 - } 1546 - #endif 1547 - 1548 - static int dw_probe(struct platform_device *pdev) 1549 - { 1550 - struct dw_dma_platform_data *pdata; 1551 - struct resource *io; 1552 1603 struct dw_dma *dw; 1553 1604 size_t size; 1554 - void __iomem *regs; 1555 1605 bool autocfg; 1556 1606 unsigned int dw_params; 1557 1607 unsigned int nr_channels; 1558 1608 unsigned int max_blk_size = 0; 1559 - int irq; 1560 1609 int err; 1561 1610 int i; 1562 1611 1563 - io = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1564 - if (!io) 1565 - return -EINVAL; 1566 - 1567 - irq = 
platform_get_irq(pdev, 0); 1568 - if (irq < 0) 1569 - return irq; 1570 - 1571 - regs = devm_ioremap_resource(&pdev->dev, io); 1572 - if (IS_ERR(regs)) 1573 - return PTR_ERR(regs); 1574 - 1575 - /* Apply default dma_mask if needed */ 1576 - if (!pdev->dev.dma_mask) { 1577 - pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask; 1578 - pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32); 1579 - } 1580 - 1581 - dw_params = dma_read_byaddr(regs, DW_PARAMS); 1612 + dw_params = dma_read_byaddr(chip->regs, DW_PARAMS); 1582 1613 autocfg = dw_params >> DW_PARAMS_EN & 0x1; 1583 1614 1584 - dev_dbg(&pdev->dev, "DW_PARAMS: 0x%08x\n", dw_params); 1585 - 1586 - pdata = dev_get_platdata(&pdev->dev); 1587 - if (!pdata) 1588 - pdata = dw_dma_parse_dt(pdev); 1615 + dev_dbg(chip->dev, "DW_PARAMS: 0x%08x\n", dw_params); 1589 1616 1590 1617 if (!pdata && autocfg) { 1591 - pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL); 1618 + pdata = devm_kzalloc(chip->dev, sizeof(*pdata), GFP_KERNEL); 1592 1619 if (!pdata) 1593 1620 return -ENOMEM; 1594 1621 ··· 1525 1712 nr_channels = pdata->nr_channels; 1526 1713 1527 1714 size = sizeof(struct dw_dma) + nr_channels * sizeof(struct dw_dma_chan); 1528 - dw = devm_kzalloc(&pdev->dev, size, GFP_KERNEL); 1715 + dw = devm_kzalloc(chip->dev, size, GFP_KERNEL); 1529 1716 if (!dw) 1530 1717 return -ENOMEM; 1531 1718 1532 - dw->clk = devm_clk_get(&pdev->dev, "hclk"); 1719 + dw->clk = devm_clk_get(chip->dev, "hclk"); 1533 1720 if (IS_ERR(dw->clk)) 1534 1721 return PTR_ERR(dw->clk); 1535 1722 clk_prepare_enable(dw->clk); 1536 1723 1537 - dw->regs = regs; 1724 + dw->regs = chip->regs; 1725 + chip->dw = dw; 1538 1726 1539 1727 /* Get hardware configuration parameters */ 1540 1728 if (autocfg) { ··· 1560 1746 /* Disable BLOCK interrupts as well */ 1561 1747 channel_clear_bit(dw, MASK.BLOCK, dw->all_chan_mask); 1562 1748 1563 - err = devm_request_irq(&pdev->dev, irq, dw_dma_interrupt, 0, 1749 + err = devm_request_irq(chip->dev, chip->irq, dw_dma_interrupt, 0, 
1564 1750 "dw_dmac", dw); 1565 1751 if (err) 1566 1752 return err; 1567 1753 1568 - platform_set_drvdata(pdev, dw); 1569 - 1570 1754 /* Create a pool of consistent memory blocks for hardware descriptors */ 1571 - dw->desc_pool = dmam_pool_create("dw_dmac_desc_pool", &pdev->dev, 1755 + dw->desc_pool = dmam_pool_create("dw_dmac_desc_pool", chip->dev, 1572 1756 sizeof(struct dw_desc), 4, 0); 1573 1757 if (!dw->desc_pool) { 1574 - dev_err(&pdev->dev, "No memory for descriptors dma pool\n"); 1758 + dev_err(chip->dev, "No memory for descriptors dma pool\n"); 1575 1759 return -ENOMEM; 1576 1760 } 1577 1761 ··· 1610 1798 /* Hardware configuration */ 1611 1799 if (autocfg) { 1612 1800 unsigned int dwc_params; 1801 + void __iomem *addr = chip->regs + r * sizeof(u32); 1613 1802 1614 - dwc_params = dma_read_byaddr(regs + r * sizeof(u32), 1615 - DWC_PARAMS); 1803 + dwc_params = dma_read_byaddr(addr, DWC_PARAMS); 1616 1804 1617 - dev_dbg(&pdev->dev, "DWC_PARAMS[%d]: 0x%08x\n", i, 1618 - dwc_params); 1805 + dev_dbg(chip->dev, "DWC_PARAMS[%d]: 0x%08x\n", i, 1806 + dwc_params); 1619 1807 1620 1808 /* Decode maximum block size for given channel. 
The 1621 1809 * stored 4 bit value represents blocks from 0x00 for 3 ··· 1646 1834 dma_cap_set(DMA_SLAVE, dw->dma.cap_mask); 1647 1835 if (pdata->is_private) 1648 1836 dma_cap_set(DMA_PRIVATE, dw->dma.cap_mask); 1649 - dw->dma.dev = &pdev->dev; 1837 + dw->dma.dev = chip->dev; 1650 1838 dw->dma.device_alloc_chan_resources = dwc_alloc_chan_resources; 1651 1839 dw->dma.device_free_chan_resources = dwc_free_chan_resources; 1652 1840 ··· 1660 1848 1661 1849 dma_writel(dw, CFG, DW_CFG_DMA_EN); 1662 1850 1663 - dev_info(&pdev->dev, "DesignWare DMA Controller, %d channels\n", 1851 + dev_info(chip->dev, "DesignWare DMA Controller, %d channels\n", 1664 1852 nr_channels); 1665 1853 1666 1854 dma_async_device_register(&dw->dma); 1667 1855 1668 - if (pdev->dev.of_node) { 1669 - err = of_dma_controller_register(pdev->dev.of_node, 1670 - dw_dma_of_xlate, dw); 1671 - if (err) 1672 - dev_err(&pdev->dev, 1673 - "could not register of_dma_controller\n"); 1674 - } 1675 - 1676 - if (ACPI_HANDLE(&pdev->dev)) 1677 - dw_dma_acpi_controller_register(dw); 1678 - 1679 1856 return 0; 1680 1857 } 1858 + EXPORT_SYMBOL_GPL(dw_dma_probe); 1681 1859 1682 - static int dw_remove(struct platform_device *pdev) 1860 + int dw_dma_remove(struct dw_dma_chip *chip) 1683 1861 { 1684 - struct dw_dma *dw = platform_get_drvdata(pdev); 1862 + struct dw_dma *dw = chip->dw; 1685 1863 struct dw_dma_chan *dwc, *_dwc; 1686 1864 1687 - if (pdev->dev.of_node) 1688 - of_dma_controller_free(pdev->dev.of_node); 1689 1865 dw_dma_off(dw); 1690 1866 dma_async_device_unregister(&dw->dma); 1691 1867 ··· 1687 1887 1688 1888 return 0; 1689 1889 } 1890 + EXPORT_SYMBOL_GPL(dw_dma_remove); 1690 1891 1691 - static void dw_shutdown(struct platform_device *pdev) 1892 + void dw_dma_shutdown(struct dw_dma_chip *chip) 1692 1893 { 1693 - struct dw_dma *dw = platform_get_drvdata(pdev); 1894 + struct dw_dma *dw = chip->dw; 1694 1895 1695 1896 dw_dma_off(dw); 1696 1897 clk_disable_unprepare(dw->clk); 1697 1898 } 1899 + 
EXPORT_SYMBOL_GPL(dw_dma_shutdown); 1698 1900 1699 - static int dw_suspend_noirq(struct device *dev) 1901 + #ifdef CONFIG_PM_SLEEP 1902 + 1903 + int dw_dma_suspend(struct dw_dma_chip *chip) 1700 1904 { 1701 - struct platform_device *pdev = to_platform_device(dev); 1702 - struct dw_dma *dw = platform_get_drvdata(pdev); 1905 + struct dw_dma *dw = chip->dw; 1703 1906 1704 1907 dw_dma_off(dw); 1705 1908 clk_disable_unprepare(dw->clk); 1706 1909 1707 1910 return 0; 1708 1911 } 1912 + EXPORT_SYMBOL_GPL(dw_dma_suspend); 1709 1913 1710 - static int dw_resume_noirq(struct device *dev) 1914 + int dw_dma_resume(struct dw_dma_chip *chip) 1711 1915 { 1712 - struct platform_device *pdev = to_platform_device(dev); 1713 - struct dw_dma *dw = platform_get_drvdata(pdev); 1916 + struct dw_dma *dw = chip->dw; 1714 1917 1715 1918 clk_prepare_enable(dw->clk); 1716 1919 dma_writel(dw, CFG, DW_CFG_DMA_EN); 1717 1920 1718 1921 return 0; 1719 1922 } 1923 + EXPORT_SYMBOL_GPL(dw_dma_resume); 1720 1924 1721 - static const struct dev_pm_ops dw_dev_pm_ops = { 1722 - .suspend_noirq = dw_suspend_noirq, 1723 - .resume_noirq = dw_resume_noirq, 1724 - .freeze_noirq = dw_suspend_noirq, 1725 - .thaw_noirq = dw_resume_noirq, 1726 - .restore_noirq = dw_resume_noirq, 1727 - .poweroff_noirq = dw_suspend_noirq, 1728 - }; 1729 - 1730 - #ifdef CONFIG_OF 1731 - static const struct of_device_id dw_dma_of_id_table[] = { 1732 - { .compatible = "snps,dma-spear1340" }, 1733 - {} 1734 - }; 1735 - MODULE_DEVICE_TABLE(of, dw_dma_of_id_table); 1736 - #endif 1737 - 1738 - #ifdef CONFIG_ACPI 1739 - static const struct acpi_device_id dw_dma_acpi_id_table[] = { 1740 - { "INTL9C60", 0 }, 1741 - { } 1742 - }; 1743 - #endif 1744 - 1745 - static struct platform_driver dw_driver = { 1746 - .probe = dw_probe, 1747 - .remove = dw_remove, 1748 - .shutdown = dw_shutdown, 1749 - .driver = { 1750 - .name = "dw_dmac", 1751 - .pm = &dw_dev_pm_ops, 1752 - .of_match_table = of_match_ptr(dw_dma_of_id_table), 1753 - .acpi_match_table = 
ACPI_PTR(dw_dma_acpi_id_table), 1754 - }, 1755 - }; 1756 - 1757 - static int __init dw_init(void) 1758 - { 1759 - return platform_driver_register(&dw_driver); 1760 - } 1761 - subsys_initcall(dw_init); 1762 - 1763 - static void __exit dw_exit(void) 1764 - { 1765 - platform_driver_unregister(&dw_driver); 1766 - } 1767 - module_exit(dw_exit); 1925 + #endif /* CONFIG_PM_SLEEP */ 1768 1926 1769 1927 MODULE_LICENSE("GPL v2"); 1770 - MODULE_DESCRIPTION("Synopsys DesignWare DMA Controller driver"); 1928 + MODULE_DESCRIPTION("Synopsys DesignWare DMA Controller core driver"); 1771 1929 MODULE_AUTHOR("Haavard Skinnemoen (Atmel)"); 1772 1930 MODULE_AUTHOR("Viresh Kumar <viresh.linux@gmail.com>");
+7
drivers/dma/dw_dmac_regs.h drivers/dma/dw/regs.h
··· 9 9 * published by the Free Software Foundation. 10 10 */ 11 11 12 + #include <linux/interrupt.h> 12 13 #include <linux/dmaengine.h> 13 14 #include <linux/dw_dmac.h> 14 15 ··· 100 99 /* top-level parameters */ 101 100 u32 DW_PARAMS; 102 101 }; 102 + 103 + /* 104 + * Big endian I/O access when reading and writing to the DMA controller 105 + * registers. This is needed on some platforms, like the Atmel AVR32 106 + * architecture. 107 + */ 103 108 104 109 #ifdef CONFIG_DW_DMAC_BIG_ENDIAN_IO 105 110 #define dma_readl_native ioread32be
+2 -3
drivers/dma/fsldma.c
··· 1368 1368 1369 1369 dma_set_mask(&(op->dev), DMA_BIT_MASK(36)); 1370 1370 1371 - dev_set_drvdata(&op->dev, fdev); 1371 + platform_set_drvdata(op, fdev); 1372 1372 1373 1373 /* 1374 1374 * We cannot use of_platform_bus_probe() because there is no ··· 1417 1417 struct fsldma_device *fdev; 1418 1418 unsigned int i; 1419 1419 1420 - fdev = dev_get_drvdata(&op->dev); 1420 + fdev = platform_get_drvdata(op); 1421 1421 dma_async_device_unregister(&fdev->common); 1422 1422 1423 1423 fsldma_free_irqs(fdev); ··· 1428 1428 } 1429 1429 1430 1430 iounmap(fdev->regs); 1431 - dev_set_drvdata(&op->dev, NULL); 1432 1431 kfree(fdev); 1433 1432 1434 1433 return 0;
+76 -1
drivers/dma/imx-dma.c
··· 27 27 #include <linux/clk.h> 28 28 #include <linux/dmaengine.h> 29 29 #include <linux/module.h> 30 + #include <linux/of_device.h> 31 + #include <linux/of_dma.h> 30 32 31 33 #include <asm/irq.h> 32 34 #include <linux/platform_data/dma-imx.h> ··· 188 186 enum imx_dma_type devtype; 189 187 }; 190 188 189 + struct imxdma_filter_data { 190 + struct imxdma_engine *imxdma; 191 + int request; 192 + }; 193 + 191 194 static struct platform_device_id imx_dma_devtype[] = { 192 195 { 193 196 .name = "imx1-dma", ··· 208 201 } 209 202 }; 210 203 MODULE_DEVICE_TABLE(platform, imx_dma_devtype); 204 + 205 + static const struct of_device_id imx_dma_of_dev_id[] = { 206 + { 207 + .compatible = "fsl,imx1-dma", 208 + .data = &imx_dma_devtype[IMX1_DMA], 209 + }, { 210 + .compatible = "fsl,imx21-dma", 211 + .data = &imx_dma_devtype[IMX21_DMA], 212 + }, { 213 + .compatible = "fsl,imx27-dma", 214 + .data = &imx_dma_devtype[IMX27_DMA], 215 + }, { 216 + /* sentinel */ 217 + } 218 + }; 219 + MODULE_DEVICE_TABLE(of, imx_dma_of_dev_id); 211 220 212 221 static inline int is_imx1_dma(struct imxdma_engine *imxdma) 213 222 { ··· 1019 996 spin_unlock_irqrestore(&imxdma->lock, flags); 1020 997 } 1021 998 999 + static bool imxdma_filter_fn(struct dma_chan *chan, void *param) 1000 + { 1001 + struct imxdma_filter_data *fdata = param; 1002 + struct imxdma_channel *imxdma_chan = to_imxdma_chan(chan); 1003 + 1004 + if (chan->device->dev != fdata->imxdma->dev) 1005 + return false; 1006 + 1007 + imxdma_chan->dma_request = fdata->request; 1008 + chan->private = NULL; 1009 + 1010 + return true; 1011 + } 1012 + 1013 + static struct dma_chan *imxdma_xlate(struct of_phandle_args *dma_spec, 1014 + struct of_dma *ofdma) 1015 + { 1016 + int count = dma_spec->args_count; 1017 + struct imxdma_engine *imxdma = ofdma->of_dma_data; 1018 + struct imxdma_filter_data fdata = { 1019 + .imxdma = imxdma, 1020 + }; 1021 + 1022 + if (count != 1) 1023 + return NULL; 1024 + 1025 + fdata.request = dma_spec->args[0]; 1026 + 1027 + 
return dma_request_channel(imxdma->dma_device.cap_mask, 1028 + imxdma_filter_fn, &fdata); 1029 + } 1030 + 1022 1031 static int __init imxdma_probe(struct platform_device *pdev) 1023 1032 { 1024 1033 struct imxdma_engine *imxdma; 1025 1034 struct resource *res; 1035 + const struct of_device_id *of_id; 1026 1036 int ret, i; 1027 1037 int irq, irq_err; 1038 + 1039 + of_id = of_match_device(imx_dma_of_dev_id, &pdev->dev); 1040 + if (of_id) 1041 + pdev->id_entry = of_id->data; 1028 1042 1029 1043 imxdma = devm_kzalloc(&pdev->dev, sizeof(*imxdma), GFP_KERNEL); 1030 1044 if (!imxdma) 1031 1045 return -ENOMEM; 1032 1046 1047 + imxdma->dev = &pdev->dev; 1033 1048 imxdma->devtype = pdev->id_entry->driver_data; 1034 1049 1035 1050 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); ··· 1172 1111 &imxdma->dma_device.channels); 1173 1112 } 1174 1113 1175 - imxdma->dev = &pdev->dev; 1176 1114 imxdma->dma_device.dev = &pdev->dev; 1177 1115 1178 1116 imxdma->dma_device.device_alloc_chan_resources = imxdma_alloc_chan_resources; ··· 1196 1136 goto err; 1197 1137 } 1198 1138 1139 + if (pdev->dev.of_node) { 1140 + ret = of_dma_controller_register(pdev->dev.of_node, 1141 + imxdma_xlate, imxdma); 1142 + if (ret) { 1143 + dev_err(&pdev->dev, "unable to register of_dma_controller\n"); 1144 + goto err_of_dma_controller; 1145 + } 1146 + } 1147 + 1199 1148 return 0; 1200 1149 1150 + err_of_dma_controller: 1151 + dma_async_device_unregister(&imxdma->dma_device); 1201 1152 err: 1202 1153 clk_disable_unprepare(imxdma->dma_ipg); 1203 1154 clk_disable_unprepare(imxdma->dma_ahb); ··· 1221 1150 1222 1151 dma_async_device_unregister(&imxdma->dma_device); 1223 1152 1153 + if (pdev->dev.of_node) 1154 + of_dma_controller_free(pdev->dev.of_node); 1155 + 1224 1156 clk_disable_unprepare(imxdma->dma_ipg); 1225 1157 clk_disable_unprepare(imxdma->dma_ahb); 1226 1158 ··· 1233 1159 static struct platform_driver imxdma_driver = { 1234 1160 .driver = { 1235 1161 .name = "imx-dma", 1162 + .of_match_table = 
imx_dma_of_dev_id, 1236 1163 }, 1237 1164 .id_table = imx_dma_devtype, 1238 1165 .remove = imxdma_remove,
+40
drivers/dma/imx-sdma.c
··· 36 36 #include <linux/dmaengine.h> 37 37 #include <linux/of.h> 38 38 #include <linux/of_device.h> 39 + #include <linux/of_dma.h> 39 40 40 41 #include <asm/irq.h> 41 42 #include <linux/platform_data/dma-imx-sdma.h> ··· 1297 1296 return ret; 1298 1297 } 1299 1298 1299 + static bool sdma_filter_fn(struct dma_chan *chan, void *fn_param) 1300 + { 1301 + struct imx_dma_data *data = fn_param; 1302 + 1303 + if (!imx_dma_is_general_purpose(chan)) 1304 + return false; 1305 + 1306 + chan->private = data; 1307 + 1308 + return true; 1309 + } 1310 + 1311 + static struct dma_chan *sdma_xlate(struct of_phandle_args *dma_spec, 1312 + struct of_dma *ofdma) 1313 + { 1314 + struct sdma_engine *sdma = ofdma->of_dma_data; 1315 + dma_cap_mask_t mask = sdma->dma_device.cap_mask; 1316 + struct imx_dma_data data; 1317 + 1318 + if (dma_spec->args_count != 3) 1319 + return NULL; 1320 + 1321 + data.dma_request = dma_spec->args[0]; 1322 + data.peripheral_type = dma_spec->args[1]; 1323 + data.priority = dma_spec->args[2]; 1324 + 1325 + return dma_request_channel(mask, sdma_filter_fn, &data); 1326 + } 1327 + 1300 1328 static int __init sdma_probe(struct platform_device *pdev) 1301 1329 { 1302 1330 const struct of_device_id *of_id = ··· 1473 1443 goto err_init; 1474 1444 } 1475 1445 1446 + if (np) { 1447 + ret = of_dma_controller_register(np, sdma_xlate, sdma); 1448 + if (ret) { 1449 + dev_err(&pdev->dev, "failed to register controller\n"); 1450 + goto err_register; 1451 + } 1452 + } 1453 + 1476 1454 dev_info(sdma->dev, "initialized\n"); 1477 1455 1478 1456 return 0; 1479 1457 1458 + err_register: 1459 + dma_async_device_unregister(&sdma->dma_device); 1480 1460 err_init: 1481 1461 kfree(sdma->script_addrs); 1482 1462 err_alloc:
+4
drivers/dma/mmp_tdma.c
··· 154 154 { 155 155 writel(readl(tdmac->reg_base + TDCR) & ~TDCR_CHANEN, 156 156 tdmac->reg_base + TDCR); 157 + 158 + /* disable irq */ 159 + writel(0, tdmac->reg_base + TDIMR); 160 + 157 161 tdmac->status = DMA_SUCCESS; 158 162 } 159 163
+1 -1
drivers/dma/mxs-dma.c
··· 693 693 return true; 694 694 } 695 695 696 - struct dma_chan *mxs_dma_xlate(struct of_phandle_args *dma_spec, 696 + static struct dma_chan *mxs_dma_xlate(struct of_phandle_args *dma_spec, 697 697 struct of_dma *ofdma) 698 698 { 699 699 struct mxs_dma_engine *mxs_dma = ofdma->of_dma_data;
+1 -16
drivers/dma/of-dma.c
··· 35 35 struct of_dma *ofdma; 36 36 37 37 list_for_each_entry(ofdma, &of_dma_list, of_dma_controllers) 38 - if ((ofdma->of_node == dma_spec->np) && 39 - (ofdma->of_dma_nbcells == dma_spec->args_count)) 38 + if (ofdma->of_node == dma_spec->np) 40 39 return ofdma; 41 40 42 41 pr_debug("%s: can't find DMA controller %s\n", __func__, ··· 63 64 void *data) 64 65 { 65 66 struct of_dma *ofdma; 66 - int nbcells; 67 - const __be32 *prop; 68 67 69 68 if (!np || !of_dma_xlate) { 70 69 pr_err("%s: not enough information provided\n", __func__); ··· 73 76 if (!ofdma) 74 77 return -ENOMEM; 75 78 76 - prop = of_get_property(np, "#dma-cells", NULL); 77 - if (prop) 78 - nbcells = be32_to_cpup(prop); 79 - 80 - if (!prop || !nbcells) { 81 - pr_err("%s: #dma-cells property is missing or invalid\n", 82 - __func__); 83 - kfree(ofdma); 84 - return -EINVAL; 85 - } 86 - 87 79 ofdma->of_node = np; 88 - ofdma->of_dma_nbcells = nbcells; 89 80 ofdma->of_dma_xlate = of_dma_xlate; 90 81 ofdma->of_dma_data = data; 91 82
+4 -25
drivers/dma/pl330.c
··· 157 157 #define PERIPH_REV_R0P0 0 158 158 #define PERIPH_REV_R1P0 1 159 159 #define PERIPH_REV_R1P1 2 160 - #define PCELL_ID 0xff0 161 160 162 161 #define CR0_PERIPH_REQ_SET (1 << 0) 163 162 #define CR0_BOOT_EN_SET (1 << 1) ··· 191 192 #define REVISION 0x0 192 193 #define INTEG_CFG 0x0 193 194 #define PERIPH_ID_VAL ((PART << 0) | (DESIGNER << 12)) 194 - 195 - #define PCELL_ID_VAL 0xb105f00d 196 195 197 196 #define PL330_STATE_STOPPED (1 << 0) 198 197 #define PL330_STATE_EXECUTING (1 << 1) ··· 289 292 /* Populated by the PL330 core driver for DMA API driver's info */ 290 293 struct pl330_config { 291 294 u32 periph_id; 292 - u32 pcell_id; 293 295 #define DMAC_MODE_NS (1 << 0) 294 296 unsigned int mode; 295 297 unsigned int data_bus_width:10; /* In number of bits */ ··· 501 505 /* Maximum possible events/irqs */ 502 506 int events[32]; 503 507 /* BUS address of MicroCode buffer */ 504 - u32 mcode_bus; 508 + dma_addr_t mcode_bus; 505 509 /* CPU address of MicroCode buffer */ 506 510 void *mcode_cpu; 507 511 /* List of all Channel threads */ ··· 644 648 struct pl330_dmac *pl330 = thrd->dmac; 645 649 646 650 return (pl330->pinfo->pcfg.mode & DMAC_MODE_NS) ? 
true : false; 647 - } 648 - 649 - static inline u32 get_id(struct pl330_info *pi, u32 off) 650 - { 651 - void __iomem *regs = pi->base; 652 - u32 id = 0; 653 - 654 - id |= (readb(regs + off + 0x0) << 0); 655 - id |= (readb(regs + off + 0x4) << 8); 656 - id |= (readb(regs + off + 0x8) << 16); 657 - id |= (readb(regs + off + 0xc) << 24); 658 - 659 - return id; 660 651 } 661 652 662 653 static inline u32 get_revision(u32 periph_id) ··· 1969 1986 pi->pcfg.num_events = val; 1970 1987 1971 1988 pi->pcfg.irq_ns = readl(regs + CR3); 1972 - 1973 - pi->pcfg.periph_id = get_id(pi, PERIPH_ID); 1974 - pi->pcfg.pcell_id = get_id(pi, PCELL_ID); 1975 1989 } 1976 1990 1977 1991 static inline void _reset_thread(struct pl330_thread *thrd) ··· 2078 2098 regs = pi->base; 2079 2099 2080 2100 /* Check if we can handle this DMAC */ 2081 - if ((get_id(pi, PERIPH_ID) & 0xfffff) != PERIPH_ID_VAL 2082 - || get_id(pi, PCELL_ID) != PCELL_ID_VAL) { 2083 - dev_err(pi->dev, "PERIPH_ID 0x%x, PCELL_ID 0x%x !\n", 2084 - get_id(pi, PERIPH_ID), get_id(pi, PCELL_ID)); 2101 + if ((pi->pcfg.periph_id & 0xfffff) != PERIPH_ID_VAL) { 2102 + dev_err(pi->dev, "PERIPH_ID 0x%x !\n", pi->pcfg.periph_id); 2085 2103 return -EINVAL; 2086 2104 } 2087 2105 ··· 2894 2916 if (ret) 2895 2917 return ret; 2896 2918 2919 + pi->pcfg.periph_id = adev->periphid; 2897 2920 ret = pl330_add(pi); 2898 2921 if (ret) 2899 2922 goto probe_err1;
+2 -3
drivers/dma/ppc4xx/adma.c
··· 4434 4434 adev->dev = &ofdev->dev; 4435 4435 adev->common.dev = &ofdev->dev; 4436 4436 INIT_LIST_HEAD(&adev->common.channels); 4437 - dev_set_drvdata(&ofdev->dev, adev); 4437 + platform_set_drvdata(ofdev, adev); 4438 4438 4439 4439 /* create a channel */ 4440 4440 chan = kzalloc(sizeof(*chan), GFP_KERNEL); ··· 4547 4547 */ 4548 4548 static int ppc440spe_adma_remove(struct platform_device *ofdev) 4549 4549 { 4550 - struct ppc440spe_adma_device *adev = dev_get_drvdata(&ofdev->dev); 4550 + struct ppc440spe_adma_device *adev = platform_get_drvdata(ofdev); 4551 4551 struct device_node *np = ofdev->dev.of_node; 4552 4552 struct resource res; 4553 4553 struct dma_chan *chan, *_chan; 4554 4554 struct ppc_dma_chan_ref *ref, *_ref; 4555 4555 struct ppc440spe_adma_chan *ppc440spe_chan; 4556 4556 4557 - dev_set_drvdata(&ofdev->dev, NULL); 4558 4557 if (adev->id < PPC440SPE_ADMA_ENGINES_NUM) 4559 4558 ppc440spe_adma_devices[adev->id] = -1; 4560 4559
+1 -1
drivers/dma/sh/Makefile
··· 1 - obj-$(CONFIG_SH_DMAE_BASE) += shdma-base.o 1 + obj-$(CONFIG_SH_DMAE_BASE) += shdma-base.o shdma-of.o 2 2 obj-$(CONFIG_SH_DMAE) += shdma.o 3 3 obj-$(CONFIG_SUDMAC) += sudmac.o
+20 -6
drivers/dma/sh/shdma-base.c
··· 175 175 { 176 176 struct shdma_dev *sdev = to_shdma_dev(schan->dma_chan.device); 177 177 const struct shdma_ops *ops = sdev->ops; 178 - int ret; 178 + int ret, match; 179 + 180 + if (schan->dev->of_node) { 181 + match = schan->hw_req; 182 + ret = ops->set_slave(schan, match, true); 183 + if (ret < 0) 184 + return ret; 185 + 186 + slave_id = schan->slave_id; 187 + } else { 188 + match = slave_id; 189 + } 179 190 180 191 if (slave_id < 0 || slave_id >= slave_num) 181 192 return -EINVAL; ··· 194 183 if (test_and_set_bit(slave_id, shdma_slave_used)) 195 184 return -EBUSY; 196 185 197 - ret = ops->set_slave(schan, slave_id, false); 186 + ret = ops->set_slave(schan, match, false); 198 187 if (ret < 0) { 199 188 clear_bit(slave_id, shdma_slave_used); 200 189 return ret; ··· 217 206 * services would have to provide their own filters, which first would check 218 207 * the device driver, similar to how other DMAC drivers, e.g., sa11x0-dma.c, do 219 208 * this, and only then, in case of a match, call this common filter. 209 + * NOTE 2: This filter function is also used in the DT case by shdma_of_xlate(). 210 + * In that case the MID-RID value is used for slave channel filtering and is 211 + * passed to this function in the "arg" parameter. 220 212 */ 221 213 bool shdma_chan_filter(struct dma_chan *chan, void *arg) 222 214 { 223 215 struct shdma_chan *schan = to_shdma_chan(chan); 224 216 struct shdma_dev *sdev = to_shdma_dev(schan->dma_chan.device); 225 217 const struct shdma_ops *ops = sdev->ops; 226 - int slave_id = (int)arg; 218 + int match = (int)arg; 227 219 int ret; 228 220 229 - if (slave_id < 0) 221 + if (match < 0) 230 222 /* No slave requested - arbitrary channel */ 231 223 return true; 232 224 233 - if (slave_id >= slave_num) 225 + if (!schan->dev->of_node && match >= slave_num) 234 226 return false; 235 227 236 - ret = ops->set_slave(schan, slave_id, true); 228 + ret = ops->set_slave(schan, match, true); 237 229 if (ret < 0) 238 230 return false; 239 231
+82
drivers/dma/sh/shdma-of.c
··· 1 + /* 2 + * SHDMA Device Tree glue 3 + * 4 + * Copyright (C) 2013 Renesas Electronics Inc. 5 + * Author: Guennadi Liakhovetski <g.liakhovetski@gmx.de> 6 + * 7 + * This is free software; you can redistribute it and/or modify 8 + * it under the terms of version 2 of the GNU General Public License as 9 + * published by the Free Software Foundation. 10 + */ 11 + 12 + #include <linux/dmaengine.h> 13 + #include <linux/module.h> 14 + #include <linux/of.h> 15 + #include <linux/of_dma.h> 16 + #include <linux/of_platform.h> 17 + #include <linux/platform_device.h> 18 + #include <linux/shdma-base.h> 19 + 20 + #define to_shdma_chan(c) container_of(c, struct shdma_chan, dma_chan) 21 + 22 + static struct dma_chan *shdma_of_xlate(struct of_phandle_args *dma_spec, 23 + struct of_dma *ofdma) 24 + { 25 + u32 id = dma_spec->args[0]; 26 + dma_cap_mask_t mask; 27 + struct dma_chan *chan; 28 + 29 + if (dma_spec->args_count != 1) 30 + return NULL; 31 + 32 + dma_cap_zero(mask); 33 + /* Only slave DMA channels can be allocated via DT */ 34 + dma_cap_set(DMA_SLAVE, mask); 35 + 36 + chan = dma_request_channel(mask, shdma_chan_filter, (void *)id); 37 + if (chan) 38 + to_shdma_chan(chan)->hw_req = id; 39 + 40 + return chan; 41 + } 42 + 43 + static int shdma_of_probe(struct platform_device *pdev) 44 + { 45 + const struct of_dev_auxdata *lookup = pdev->dev.platform_data; 46 + int ret; 47 + 48 + if (!lookup) 49 + return -EINVAL; 50 + 51 + ret = of_dma_controller_register(pdev->dev.of_node, 52 + shdma_of_xlate, pdev); 53 + if (ret < 0) 54 + return ret; 55 + 56 + ret = of_platform_populate(pdev->dev.of_node, NULL, lookup, &pdev->dev); 57 + if (ret < 0) 58 + of_dma_controller_free(pdev->dev.of_node); 59 + 60 + return ret; 61 + } 62 + 63 + static const struct of_device_id shdma_of_match[] = { 64 + { .compatible = "renesas,shdma-mux", }, 65 + { } 66 + }; 67 + MODULE_DEVICE_TABLE(of, shdma_of_match); 68 + 69 + static struct platform_driver shdma_of = { 70 + .driver = { 71 + .owner = THIS_MODULE, 
72 + .name = "shdma-of", 73 + .of_match_table = shdma_of_match, 74 + }, 75 + .probe = shdma_of_probe, 76 + }; 77 + 78 + module_platform_driver(shdma_of); 79 + 80 + MODULE_LICENSE("GPL v2"); 81 + MODULE_DESCRIPTION("SH-DMA driver DT glue"); 82 + MODULE_AUTHOR("Guennadi Liakhovetski <g.liakhovetski@gmx.de>");
+26 -7
drivers/dma/sh/shdma.c
··· 301 301 } 302 302 } 303 303 304 + /* 305 + * Find a slave channel configuration from the controller list by either a slave 306 + * ID in the non-DT case, or by a MID/RID value in the DT case 307 + */ 304 308 static const struct sh_dmae_slave_config *dmae_find_slave( 305 - struct sh_dmae_chan *sh_chan, int slave_id) 309 + struct sh_dmae_chan *sh_chan, int match) 306 310 { 307 311 struct sh_dmae_device *shdev = to_sh_dev(sh_chan); 308 312 struct sh_dmae_pdata *pdata = shdev->pdata; 309 313 const struct sh_dmae_slave_config *cfg; 310 314 int i; 311 315 312 - if (slave_id >= SH_DMA_SLAVE_NUMBER) 313 - return NULL; 316 + if (!sh_chan->shdma_chan.dev->of_node) { 317 + if (match >= SH_DMA_SLAVE_NUMBER) 318 + return NULL; 314 319 315 - for (i = 0, cfg = pdata->slave; i < pdata->slave_num; i++, cfg++) 316 - if (cfg->slave_id == slave_id) 317 - return cfg; 320 + for (i = 0, cfg = pdata->slave; i < pdata->slave_num; i++, cfg++) 321 + if (cfg->slave_id == match) 322 + return cfg; 323 + } else { 324 + for (i = 0, cfg = pdata->slave; i < pdata->slave_num; i++, cfg++) 325 + if (cfg->mid_rid == match) { 326 + sh_chan->shdma_chan.slave_id = cfg->slave_id; 327 + return cfg; 328 + } 329 + } 318 330 319 331 return NULL; 320 332 } ··· 741 729 goto eshdma; 742 730 743 731 /* platform data */ 744 - shdev->pdata = pdev->dev.platform_data; 732 + shdev->pdata = pdata; 745 733 746 734 if (pdata->chcr_offset) 747 735 shdev->chcr_offset = pdata->chcr_offset; ··· 932 920 return 0; 933 921 } 934 922 923 + static const struct of_device_id sh_dmae_of_match[] = { 924 + { .compatible = "renesas,shdma", }, 925 + { } 926 + }; 927 + MODULE_DEVICE_TABLE(of, sh_dmae_of_match); 928 + 935 929 static struct platform_driver sh_dmae_driver = { 936 930 .driver = { 937 931 .owner = THIS_MODULE, 938 932 .pm = &sh_dmae_pm, 939 933 .name = SH_DMAE_DRV_NAME, 934 + .of_match_table = sh_dmae_of_match, 940 935 }, 941 936 .remove = sh_dmae_remove, 942 937 .shutdown = sh_dmae_shutdown,
+17
drivers/dma/sirf-dma.c
··· 466 466 sirfsoc_dma_tx_status(struct dma_chan *chan, dma_cookie_t cookie, 467 467 struct dma_tx_state *txstate) 468 468 { 469 + struct sirfsoc_dma *sdma = dma_chan_to_sirfsoc_dma(chan); 469 470 struct sirfsoc_dma_chan *schan = dma_chan_to_sirfsoc_dma_chan(chan); 470 471 unsigned long flags; 471 472 enum dma_status ret; 473 + struct sirfsoc_dma_desc *sdesc; 474 + int cid = schan->chan.chan_id; 475 + unsigned long dma_pos; 476 + unsigned long dma_request_bytes; 477 + unsigned long residue; 472 478 473 479 spin_lock_irqsave(&schan->lock, flags); 480 + 481 + sdesc = list_first_entry(&schan->active, struct sirfsoc_dma_desc, 482 + node); 483 + dma_request_bytes = (sdesc->xlen + 1) * (sdesc->ylen + 1) * 484 + (sdesc->width * SIRFSOC_DMA_WORD_LEN); 485 + 474 486 ret = dma_cookie_status(chan, cookie, txstate); 487 + dma_pos = readl_relaxed(sdma->base + cid * 0x10 + SIRFSOC_DMA_CH_ADDR) 488 + << 2; 489 + residue = dma_request_bytes - (dma_pos - sdesc->addr); 490 + dma_set_residue(txstate, residue); 491 + 475 492 spin_unlock_irqrestore(&schan->lock, flags); 476 493 477 494 return ret;
+2 -1
drivers/dma/tegra20-apb-dma.c
··· 1191 1191 list_splice_init(&tdc->free_dma_desc, &dma_desc_list); 1192 1192 INIT_LIST_HEAD(&tdc->cb_desc); 1193 1193 tdc->config_init = false; 1194 + tdc->isr_handler = NULL; 1194 1195 spin_unlock_irqrestore(&tdc->lock, flags); 1195 1196 1196 1197 while (!list_empty(&dma_desc_list)) { ··· 1335 1334 if (ret) { 1336 1335 dev_err(&pdev->dev, 1337 1336 "request_irq failed with err %d channel %d\n", 1338 - i, ret); 1337 + ret, i); 1339 1338 goto err_irq; 1340 1339 } 1341 1340
-2
drivers/dma/timb_dma.c
··· 811 811 kfree(td); 812 812 release_mem_region(iomem->start, resource_size(iomem)); 813 813 814 - platform_set_drvdata(pdev, NULL); 815 - 816 814 dev_dbg(&pdev->dev, "Removed...\n"); 817 815 return 0; 818 816 }
+4 -4
include/linux/amba/pl08x.h
··· 76 76 * platform, all inclusive, including multiplexed channels. The available 77 77 * physical channels will be multiplexed around these signals as they are 78 78 * requested, just enumerate all possible channels. 79 - * @get_signal: request a physical signal to be used for a DMA transfer 79 + * @get_xfer_signal: request a physical signal to be used for a DMA transfer 80 80 * immediately: if there is some multiplexing or similar blocking the use 81 81 * of the channel the transfer can be denied by returning less than zero, 82 82 * else it returns the allocated signal number 83 - * @put_signal: indicate to the platform that this physical signal is not 83 + * @put_xfer_signal: indicate to the platform that this physical signal is not 84 84 * running any DMA transfer and multiplexing can be recycled 85 85 * @lli_buses: buses which LLIs can be fetched from: PL08X_AHB1 | PL08X_AHB2 86 86 * @mem_buses: buses which memory can be accessed from: PL08X_AHB1 | PL08X_AHB2 ··· 89 89 const struct pl08x_channel_data *slave_channels; 90 90 unsigned int num_slave_channels; 91 91 struct pl08x_channel_data memcpy_channel; 92 - int (*get_signal)(const struct pl08x_channel_data *); 93 - void (*put_signal)(const struct pl08x_channel_data *, int); 92 + int (*get_xfer_signal)(const struct pl08x_channel_data *); 93 + void (*put_xfer_signal)(const struct pl08x_channel_data *, int); 94 94 u8 lli_buses; 95 95 u8 mem_buses; 96 96 };
-1
include/linux/of_dma.h
··· 21 21 struct of_dma { 22 22 struct list_head of_dma_controllers; 23 23 struct device_node *of_node; 24 - int of_dma_nbcells; 25 24 struct dma_chan *(*of_dma_xlate) 26 25 (struct of_phandle_args *, struct of_dma *); 27 26 void *of_dma_data;
+4
include/linux/platform_data/dma-atmel.h
··· 35 35 36 36 37 37 /* Platform-configurable bits in CFG */ 38 + #define ATC_PER_MSB(h) ((0x30U & (h)) >> 4) /* Extract most significant bits of a handshaking identifier */ 39 + 38 40 #define ATC_SRC_PER(h) (0xFU & (h)) /* Channel src rq associated with periph handshaking ifc h */ 39 41 #define ATC_DST_PER(h) ((0xFU & (h)) << 4) /* Channel dst rq associated with periph handshaking ifc h */ 40 42 #define ATC_SRC_REP (0x1 << 8) /* Source Replay Mod */ 41 43 #define ATC_SRC_H2SEL (0x1 << 9) /* Source Handshaking Mod */ 42 44 #define ATC_SRC_H2SEL_SW (0x0 << 9) 43 45 #define ATC_SRC_H2SEL_HW (0x1 << 9) 46 + #define ATC_SRC_PER_MSB(h) (ATC_PER_MSB(h) << 10) /* Channel src rq (most significant bits) */ 44 47 #define ATC_DST_REP (0x1 << 12) /* Destination Replay Mod */ 45 48 #define ATC_DST_H2SEL (0x1 << 13) /* Destination Handshaking Mod */ 46 49 #define ATC_DST_H2SEL_SW (0x0 << 13) 47 50 #define ATC_DST_H2SEL_HW (0x1 << 13) 51 + #define ATC_DST_PER_MSB(h) (ATC_PER_MSB(h) << 14) /* Channel dst rq (most significant bits) */ 48 52 #define ATC_SOD (0x1 << 16) /* Stop On Done */ 49 53 #define ATC_LOCK_IF (0x1 << 20) /* Interface Lock */ 50 54 #define ATC_LOCK_B (0x1 << 21) /* AHB Bus Lock */
+2 -4
include/linux/platform_data/dma-imx.h
··· 60 60 61 61 static inline int imx_dma_is_general_purpose(struct dma_chan *chan) 62 62 { 63 - return strstr(dev_name(chan->device->dev), "sdma") || 64 - !strcmp(dev_name(chan->device->dev), "imx1-dma") || 65 - !strcmp(dev_name(chan->device->dev), "imx21-dma") || 66 - !strcmp(dev_name(chan->device->dev), "imx27-dma"); 63 + return !strcmp(chan->device->dev->driver->name, "imx-sdma") || 64 + !strcmp(chan->device->dev->driver->name, "imx-dma"); 67 65 } 68 66 69 67 #endif
-2
include/linux/sh_dma.h
··· 99 99 #define CHCR_TE 0x00000002 100 100 #define CHCR_IE 0x00000004 101 101 102 - bool shdma_chan_filter(struct dma_chan *chan, void *arg); 103 - 104 102 #endif
+3
include/linux/shdma-base.h
··· 68 68 int id; /* Raw id of this channel */ 69 69 int irq; /* Channel IRQ */ 70 70 int slave_id; /* Client ID for slave DMA */ 71 + int hw_req; /* DMA request line for slave DMA - same 72 + * as MID/RID, used with DT */ 71 73 enum shdma_pm_state pm_state; 72 74 }; 73 75 ··· 124 122 int shdma_init(struct device *dev, struct shdma_dev *sdev, 125 123 int chan_num); 126 124 void shdma_cleanup(struct shdma_dev *sdev); 125 + bool shdma_chan_filter(struct dma_chan *chan, void *arg); 127 126 128 127 #endif