Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'dmaengine-5.12-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine

Pull dmaengine updates from Vinod Koul:
"We have a couple of drivers removed, a new driver, a bunch of new device
support, and a few updates to drivers for this round.

New drivers/devices:
- Intel LGM SoC DMA driver
- Actions Semi S500 DMA controller
- Renesas r8a779a0 DMA controller
- Ingenic JZ4760(B) DMA controller
- Intel KeemBay AxiDMA controller

Removed:
- COH901318 DMA driver
- ZTE ZX DMA driver
- SiRFSoC DMA driver

Updates:
- mmp_pdma, mmp_tdma gained module support
- imx-sdma became modern and dropped platform data support
- dw-axi driver gained slave and cyclic dma support"

* tag 'dmaengine-5.12-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine: (58 commits)
dmaengine: dw-axi-dmac: remove redundant null check on desc
dmaengine: xilinx_dma: Alloc tx descriptors GFP_NOWAIT
dmaengine: dw-axi-dmac: Virtually split the linked-list
dmaengine: dw-axi-dmac: Set constraint to the Max segment size
dmaengine: dw-axi-dmac: Add Intel KeemBay AxiDMA BYTE and HALFWORD registers
dmaengine: dw-axi-dmac: Add Intel KeemBay AxiDMA handshake
dmaengine: dw-axi-dmac: Add Intel KeemBay AxiDMA support
dmaengine: drivers: Kconfig: add HAS_IOMEM dependency to DW_AXI_DMAC
dmaengine: dw-axi-dmac: Add Intel KeemBay DMA register fields
dt-binding: dma: dw-axi-dmac: Add support for Intel KeemBay AxiDMA
dmaengine: dw-axi-dmac: Support burst residue granularity
dmaengine: dw-axi-dmac: Support of_dma_controller_register()
dmaengine: dw-axi-dmac: Support device_prep_dma_cyclic()
dmaengine: dw-axi-dmac: Support device_prep_slave_sg
dmaengine: dw-axi-dmac: Add device_config operation
dmaengine: dw-axi-dmac: Add device_synchronize() callback
dmaengine: dw-axi-dmac: move dma_pool_create() to alloc_chan_resources()
dmaengine: dw-axi-dmac: simplify descriptor management
dt-bindings: dma: Add YAML schemas for dw-axi-dmac
dmaengine: ti: k3-psil: optimize struct psil_endpoint_config for size
...

+3018 -5988
+6
Documentation/admin-guide/kernel-parameters.txt
··· 1674 1674 In such case C2/C3 won't be used again. 1675 1675 idle=nomwait: Disable mwait for CPU C-states 1676 1676 1677 + idxd.sva= [HW] 1678 + Format: <bool> 1679 + Allow force disabling of Shared Virtual Memory (SVA) 1680 + support for the idxd driver. By default it is set to 1681 + true (1). 1682 + 1677 1683 ieee754= [MIPS] Select IEEE Std 754 conformance mode 1678 1684 Format: { strict | legacy | 2008 | relaxed } 1679 1685 Default: strict
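Since idxd.sva is documented above as a boolean that defaults to true (1), the override that matters in practice is forcing it off. A minimal command-line sketch (how the line gets into the bootloader configuration varies by distribution):

```
# Kernel command line fragment: force-disable Shared Virtual Memory
# support in the idxd driver
idxd.sva=0
```

If the parameter is exported with read permissions, the effective value can be read back after boot from /sys/module/idxd/parameters/sva, following the usual modname.param convention for module parameters.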
+2
Documentation/devicetree/bindings/dma/ingenic,dma.yaml
··· 17 17 enum: 18 18 - ingenic,jz4740-dma 19 19 - ingenic,jz4725b-dma 20 + - ingenic,jz4760-dma 21 + - ingenic,jz4760b-dma 20 22 - ingenic,jz4770-dma 21 23 - ingenic,jz4780-dma 22 24 - ingenic,x1000-dma
+116
Documentation/devicetree/bindings/dma/intel,ldma.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/dma/intel,ldma.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Lightning Mountain centralized DMA controllers. 8 + 9 + maintainers: 10 + - chuanhua.lei@intel.com 11 + - mallikarjunax.reddy@intel.com 12 + 13 + allOf: 14 + - $ref: "dma-controller.yaml#" 15 + 16 + properties: 17 + compatible: 18 + enum: 19 + - intel,lgm-cdma 20 + - intel,lgm-dma2tx 21 + - intel,lgm-dma1rx 22 + - intel,lgm-dma1tx 23 + - intel,lgm-dma0tx 24 + - intel,lgm-dma3 25 + - intel,lgm-toe-dma30 26 + - intel,lgm-toe-dma31 27 + 28 + reg: 29 + maxItems: 1 30 + 31 + "#dma-cells": 32 + const: 3 33 + description: 34 + The first cell is the peripheral's DMA request line. 35 + The second cell is the peripheral's (port) number corresponding to the channel. 36 + The third cell is the burst length of the channel. 37 + 38 + dma-channels: 39 + minimum: 1 40 + maximum: 16 41 + 42 + dma-channel-mask: 43 + maxItems: 1 44 + 45 + clocks: 46 + maxItems: 1 47 + 48 + resets: 49 + maxItems: 1 50 + 51 + reset-names: 52 + items: 53 + - const: ctrl 54 + 55 + interrupts: 56 + maxItems: 1 57 + 58 + intel,dma-poll-cnt: 59 + $ref: /schemas/types.yaml#/definitions/uint32 60 + description: 61 + DMA descriptor polling counter is used to control the poling mechanism 62 + for the descriptor fetching for all channels. 63 + 64 + intel,dma-byte-en: 65 + type: boolean 66 + description: 67 + DMA byte enable is only valid for DMA write(RX). 68 + Byte enable(1) means DMA write will be based on the number of dwords 69 + instead of the whole burst. 70 + 71 + intel,dma-drb: 72 + type: boolean 73 + description: 74 + DMA descriptor read back to make sure data and desc synchronization. 75 + 76 + intel,dma-dburst-wr: 77 + type: boolean 78 + description: 79 + Enable RX dynamic burst write. 
When it is enabled, the DMA does RX dynamic burst; 80 + if it is disabled, the DMA RX will still support programmable fixed burst size of 2,4,8,16. 81 + It only applies to RX DMA and memcopy DMA. 82 + 83 + required: 84 + - compatible 85 + - reg 86 + 87 + additionalProperties: false 88 + 89 + examples: 90 + - | 91 + dma0: dma-controller@e0e00000 { 92 + compatible = "intel,lgm-cdma"; 93 + reg = <0xe0e00000 0x1000>; 94 + #dma-cells = <3>; 95 + dma-channels = <16>; 96 + dma-channel-mask = <0xFFFF>; 97 + interrupt-parent = <&ioapic1>; 98 + interrupts = <82 1>; 99 + resets = <&rcu0 0x30 0>; 100 + reset-names = "ctrl"; 101 + clocks = <&cgu0 80>; 102 + intel,dma-poll-cnt = <4>; 103 + intel,dma-byte-en; 104 + intel,dma-drb; 105 + }; 106 + - | 107 + dma3: dma-controller@ec800000 { 108 + compatible = "intel,lgm-dma3"; 109 + reg = <0xec800000 0x1000>; 110 + clocks = <&cgu0 71>; 111 + resets = <&rcu0 0x10 9>; 112 + #dma-cells = <3>; 113 + intel,dma-poll-cnt = <16>; 114 + intel,dma-byte-en; 115 + intel,dma-dburst-wr; 116 + };
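The three #dma-cells described in the binding above (request line, peripheral port number, burst length) are what a client node passes in its dmas property. A hypothetical consumer sketch (the peripheral, request line 5, port 0, and burst length 8 are illustrative values, not taken from a real board DT):

```
uart0: serial@e0100000 {
        compatible = "...";
        dmas = <&dma0 5 0 8>;
        dma-names = "rx";
};
```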
+4 -3
Documentation/devicetree/bindings/dma/owl-dma.yaml
··· 8 8 9 9 description: | 10 10 The OWL DMA is a general-purpose direct memory access controller capable of 11 - supporting 10 and 12 independent DMA channels for S700 and S900 SoCs 12 - respectively. 11 + supporting 10 independent DMA channels for the Actions Semi S700 SoC and 12 12 + independent DMA channels for the S500 and S900 SoC variants. 13 13 14 14 maintainers: 15 15 - Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> ··· 20 20 properties: 21 21 compatible: 22 22 enum: 23 - - actions,s900-dma 23 + - actions,s500-dma 24 24 - actions,s700-dma 25 + - actions,s900-dma 25 26 26 27 reg: 27 28 maxItems: 1
+47 -27
Documentation/devicetree/bindings/dma/renesas,rcar-dmac.yaml
··· 14 14 15 15 properties: 16 16 compatible: 17 - items: 18 - - enum: 19 - - renesas,dmac-r8a7742 # RZ/G1H 20 - - renesas,dmac-r8a7743 # RZ/G1M 21 - - renesas,dmac-r8a7744 # RZ/G1N 22 - - renesas,dmac-r8a7745 # RZ/G1E 23 - - renesas,dmac-r8a77470 # RZ/G1C 24 - - renesas,dmac-r8a774a1 # RZ/G2M 25 - - renesas,dmac-r8a774b1 # RZ/G2N 26 - - renesas,dmac-r8a774c0 # RZ/G2E 27 - - renesas,dmac-r8a774e1 # RZ/G2H 28 - - renesas,dmac-r8a7790 # R-Car H2 29 - - renesas,dmac-r8a7791 # R-Car M2-W 30 - - renesas,dmac-r8a7792 # R-Car V2H 31 - - renesas,dmac-r8a7793 # R-Car M2-N 32 - - renesas,dmac-r8a7794 # R-Car E2 33 - - renesas,dmac-r8a7795 # R-Car H3 34 - - renesas,dmac-r8a7796 # R-Car M3-W 35 - - renesas,dmac-r8a77961 # R-Car M3-W+ 36 - - renesas,dmac-r8a77965 # R-Car M3-N 37 - - renesas,dmac-r8a77970 # R-Car V3M 38 - - renesas,dmac-r8a77980 # R-Car V3H 39 - - renesas,dmac-r8a77990 # R-Car E3 40 - - renesas,dmac-r8a77995 # R-Car D3 41 - - const: renesas,rcar-dmac 17 + oneOf: 18 + - items: 19 + - enum: 20 + - renesas,dmac-r8a7742 # RZ/G1H 21 + - renesas,dmac-r8a7743 # RZ/G1M 22 + - renesas,dmac-r8a7744 # RZ/G1N 23 + - renesas,dmac-r8a7745 # RZ/G1E 24 + - renesas,dmac-r8a77470 # RZ/G1C 25 + - renesas,dmac-r8a774a1 # RZ/G2M 26 + - renesas,dmac-r8a774b1 # RZ/G2N 27 + - renesas,dmac-r8a774c0 # RZ/G2E 28 + - renesas,dmac-r8a774e1 # RZ/G2H 29 + - renesas,dmac-r8a7790 # R-Car H2 30 + - renesas,dmac-r8a7791 # R-Car M2-W 31 + - renesas,dmac-r8a7792 # R-Car V2H 32 + - renesas,dmac-r8a7793 # R-Car M2-N 33 + - renesas,dmac-r8a7794 # R-Car E2 34 + - renesas,dmac-r8a7795 # R-Car H3 35 + - renesas,dmac-r8a7796 # R-Car M3-W 36 + - renesas,dmac-r8a77961 # R-Car M3-W+ 37 + - renesas,dmac-r8a77965 # R-Car M3-N 38 + - renesas,dmac-r8a77970 # R-Car V3M 39 + - renesas,dmac-r8a77980 # R-Car V3H 40 + - renesas,dmac-r8a77990 # R-Car E3 41 + - renesas,dmac-r8a77995 # R-Car D3 42 + - const: renesas,rcar-dmac 42 43 43 - reg: 44 - maxItems: 1 44 + - items: 45 + - const: renesas,dmac-r8a779a0 # R-Car V3U 
46 + 47 + reg: true 45 48 46 49 interrupts: 47 50 minItems: 9 ··· 112 109 - dma-channels 113 110 - power-domains 114 111 - resets 112 + 113 + if: 114 + properties: 115 + compatible: 116 + contains: 117 + enum: 118 + - renesas,dmac-r8a779a0 119 + then: 120 + properties: 121 + reg: 122 + items: 123 + - description: Base register block 124 + - description: Channel register block 125 + else: 126 + properties: 127 + reg: 128 + maxItems: 1 115 129 116 130 additionalProperties: false 117 131
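The new if/then clause above means an R-Car V3U (r8a779a0) node supplies two reg regions, base and channel register blocks, while every other variant keeps a single region. A hypothetical node sketch (addresses are placeholders, not real V3U register offsets):

```
dmac1: dma-controller@e7350000 {
        compatible = "renesas,dmac-r8a779a0";
        reg = <0xe7350000 0x1000>, <0xe7300000 0x10000>;
        ...
};
```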
-44
Documentation/devicetree/bindings/dma/sirfsoc-dma.txt
··· 1 - * CSR SiRFSoC DMA controller 2 - 3 - See dma.txt first 4 - 5 - Required properties: 6 - - compatible: Should be "sirf,prima2-dmac", "sirf,atlas7-dmac" or 7 - "sirf,atlas7-dmac-v2" 8 - - reg: Should contain DMA registers location and length. 9 - - interrupts: Should contain one interrupt shared by all channel 10 - - #dma-cells: must be <1>. used to represent the number of integer 11 - cells in the dmas property of client device. 12 - - clocks: clock required 13 - 14 - Example: 15 - 16 - Controller: 17 - dmac0: dma-controller@b00b0000 { 18 - compatible = "sirf,prima2-dmac"; 19 - reg = <0xb00b0000 0x10000>; 20 - interrupts = <12>; 21 - clocks = <&clks 24>; 22 - #dma-cells = <1>; 23 - }; 24 - 25 - 26 - Client: 27 - Fill the specific dma request line in dmas. In the below example, spi0 read 28 - channel request line is 9 of the 2nd dma controller, while write channel uses 29 - 4 of the 2nd dma controller; spi1 read channel request line is 12 of the 1st 30 - dma controller, while write channel uses 13 of the 1st dma controller: 31 - 32 - spi0: spi@b00d0000 { 33 - compatible = "sirf,prima2-spi"; 34 - dmas = <&dmac1 9>, 35 - <&dmac1 4>; 36 - dma-names = "rx", "tx"; 37 - }; 38 - 39 - spi1: spi@b0170000 { 40 - compatible = "sirf,prima2-spi"; 41 - dmas = <&dmac0 12>, 42 - <&dmac0 13>; 43 - dma-names = "rx", "tx"; 44 - };
-39
Documentation/devicetree/bindings/dma/snps,dw-axi-dmac.txt
··· 1 - Synopsys DesignWare AXI DMA Controller 2 - 3 - Required properties: 4 - - compatible: "snps,axi-dma-1.01a" 5 - - reg: Address range of the DMAC registers. This should include 6 - all of the per-channel registers. 7 - - interrupt: Should contain the DMAC interrupt number. 8 - - dma-channels: Number of channels supported by hardware. 9 - - snps,dma-masters: Number of AXI masters supported by the hardware. 10 - - snps,data-width: Maximum AXI data width supported by hardware. 11 - (0 - 8bits, 1 - 16bits, 2 - 32bits, ..., 6 - 512bits) 12 - - snps,priority: Priority of channel. Array size is equal to the number of 13 - dma-channels. Priority value must be programmed within [0:dma-channels-1] 14 - range. (0 - minimum priority) 15 - - snps,block-size: Maximum block size supported by the controller channel. 16 - Array size is equal to the number of dma-channels. 17 - 18 - Optional properties: 19 - - snps,axi-max-burst-len: Restrict master AXI burst length by value specified 20 - in this property. If this property is missing the maximum AXI burst length 21 - supported by DMAC is used. [1:256] 22 - 23 - Example: 24 - 25 - dmac: dma-controller@80000 { 26 - compatible = "snps,axi-dma-1.01a"; 27 - reg = <0x80000 0x400>; 28 - clocks = <&core_clk>, <&cfgr_clk>; 29 - clock-names = "core-clk", "cfgr-clk"; 30 - interrupt-parent = <&intc>; 31 - interrupts = <27>; 32 - 33 - dma-channels = <4>; 34 - snps,dma-masters = <2>; 35 - snps,data-width = <3>; 36 - snps,block-size = <4096 4096 4096 4096>; 37 - snps,priority = <0 1 2 3>; 38 - snps,axi-max-burst-len = <16>; 39 - };
+126
Documentation/devicetree/bindings/dma/snps,dw-axi-dmac.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/dma/snps,dw-axi-dmac.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Synopsys DesignWare AXI DMA Controller 8 + 9 + maintainers: 10 + - Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com> 11 + - Jee Heng Sia <jee.heng.sia@intel.com> 12 + 13 + description: 14 + Synopsys DesignWare AXI DMA Controller DT Binding 15 + 16 + allOf: 17 + - $ref: "dma-controller.yaml#" 18 + 19 + properties: 20 + compatible: 21 + enum: 22 + - snps,axi-dma-1.01a 23 + - intel,kmb-axi-dma 24 + 25 + reg: 26 + minItems: 1 27 + items: 28 + - description: Address range of the DMAC registers 29 + - description: Address range of the DMAC APB registers 30 + 31 + reg-names: 32 + items: 33 + - const: axidma_ctrl_regs 34 + - const: axidma_apb_regs 35 + 36 + interrupts: 37 + maxItems: 1 38 + 39 + clocks: 40 + items: 41 + - description: Bus Clock 42 + - description: Module Clock 43 + 44 + clock-names: 45 + items: 46 + - const: core-clk 47 + - const: cfgr-clk 48 + 49 + '#dma-cells': 50 + const: 1 51 + 52 + dma-channels: 53 + minimum: 1 54 + maximum: 8 55 + 56 + snps,dma-masters: 57 + description: | 58 + Number of AXI masters supported by the hardware. 59 + $ref: /schemas/types.yaml#/definitions/uint32 60 + enum: [1, 2] 61 + 62 + snps,data-width: 63 + description: | 64 + AXI data width supported by hardware. 65 + (0 - 8bits, 1 - 16bits, 2 - 32bits, ..., 6 - 512bits) 66 + $ref: /schemas/types.yaml#/definitions/uint32 67 + enum: [0, 1, 2, 3, 4, 5, 6] 68 + 69 + snps,priority: 70 + description: | 71 + Channel priority specifier associated with the DMA channels. 72 + $ref: /schemas/types.yaml#/definitions/uint32-array 73 + minItems: 1 74 + maxItems: 8 75 + 76 + snps,block-size: 77 + description: | 78 + Channel block size specifier associated with the DMA channels. 
79 + $ref: /schemas/types.yaml#/definitions/uint32-array 80 + minItems: 1 81 + maxItems: 8 82 + 83 + snps,axi-max-burst-len: 84 + description: | 85 + Restrict master AXI burst length by value specified in this property. 86 + If this property is missing the maximum AXI burst length supported by 87 + DMAC is used. 88 + $ref: /schemas/types.yaml#/definitions/uint32 89 + minimum: 1 90 + maximum: 256 91 + 92 + required: 93 + - compatible 94 + - reg 95 + - clocks 96 + - clock-names 97 + - interrupts 98 + - '#dma-cells' 99 + - dma-channels 100 + - snps,dma-masters 101 + - snps,data-width 102 + - snps,priority 103 + - snps,block-size 104 + 105 + additionalProperties: false 106 + 107 + examples: 108 + - | 109 + #include <dt-bindings/interrupt-controller/arm-gic.h> 110 + #include <dt-bindings/interrupt-controller/irq.h> 111 + /* example with snps,dw-axi-dmac */ 112 + dmac: dma-controller@80000 { 113 + compatible = "snps,axi-dma-1.01a"; 114 + reg = <0x80000 0x400>; 115 + clocks = <&core_clk>, <&cfgr_clk>; 116 + clock-names = "core-clk", "cfgr-clk"; 117 + interrupt-parent = <&intc>; 118 + interrupts = <27>; 119 + #dma-cells = <1>; 120 + dma-channels = <4>; 121 + snps,dma-masters = <2>; 122 + snps,data-width = <3>; 123 + snps,block-size = <4096 4096 4096 4096>; 124 + snps,priority = <0 1 2 3>; 125 + snps,axi-max-burst-len = <16>; 126 + };
-32
Documentation/devicetree/bindings/dma/ste-coh901318.txt
··· 1 - ST-Ericsson COH 901 318 DMA Controller 2 - 3 - This is a DMA controller which has begun as a fork of the 4 - ARM PL08x PrimeCell VHDL code. 5 - 6 - Required properties: 7 - - compatible: should be "stericsson,coh901318" 8 - - reg: register locations and length 9 - - interrupts: the single DMA IRQ 10 - - #dma-cells: must be set to <1>, as the channels on the 11 - COH 901 318 are simple and identified by a single number 12 - - dma-channels: the number of DMA channels handled 13 - 14 - Example: 15 - 16 - dmac: dma-controller@c00020000 { 17 - compatible = "stericsson,coh901318"; 18 - reg = <0xc0020000 0x1000>; 19 - interrupt-parent = <&vica>; 20 - interrupts = <2>; 21 - #dma-cells = <1>; 22 - dma-channels = <40>; 23 - }; 24 - 25 - Consumers example: 26 - 27 - uart0: serial@c0013000 { 28 - compatible = "..."; 29 - (...) 30 - dmas = <&dmac 17 &dmac 18>; 31 - dma-names = "tx", "rx"; 32 - };
-38
Documentation/devicetree/bindings/dma/zxdma.txt
··· 1 - * ZTE ZX296702 DMA controller 2 - 3 - Required properties: 4 - - compatible: Should be "zte,zx296702-dma" 5 - - reg: Should contain DMA registers location and length. 6 - - interrupts: Should contain one interrupt shared by all channel 7 - - #dma-cells: see dma.txt, should be 1, para number 8 - - dma-channels: physical channels supported 9 - - dma-requests: virtual channels supported, each virtual channel 10 - have specific request line 11 - - clocks: clock required 12 - 13 - Example: 14 - 15 - Controller: 16 - dma: dma-controller@09c00000{ 17 - compatible = "zte,zx296702-dma"; 18 - reg = <0x09c00000 0x1000>; 19 - clocks = <&topclk ZX296702_DMA_ACLK>; 20 - interrupts = <GIC_SPI 66 IRQ_TYPE_LEVEL_HIGH>; 21 - #dma-cells = <1>; 22 - dma-channels = <24>; 23 - dma-requests = <24>; 24 - }; 25 - 26 - Client: 27 - Use specific request line passing from dmax 28 - For example, spdif0 tx channel request line is 4 29 - spdif0: spdif0@b004000 { 30 - #sound-dai-cells = <0>; 31 - compatible = "zte,zx296702-spdif"; 32 - reg = <0x0b004000 0x1000>; 33 - clocks = <&lsp0clk ZX296702_SPDIF0_DIV>; 34 - clock-names = "tx"; 35 - interrupts = <GIC_SPI 21 IRQ_TYPE_LEVEL_HIGH>; 36 - dmas = <&dma 4>; 37 - dma-names = "tx"; 38 - }
+1 -3
MAINTAINERS
··· 2828 2828 W: http://sourceforge.net/projects/xscaleiop 2829 2829 F: Documentation/crypto/async-tx-api.rst 2830 2830 F: crypto/async_tx/ 2831 - F: drivers/dma/ 2832 2831 F: include/linux/async_tx.h 2833 - F: include/linux/dmaengine.h 2834 2832 2835 2833 AT24 EEPROM DRIVER 2836 2834 M: Bartosz Golaszewski <bgolaszewski@baylibre.com> ··· 5269 5271 F: Documentation/devicetree/bindings/dma/ 5270 5272 F: Documentation/driver-api/dmaengine/ 5271 5273 F: drivers/dma/ 5274 + F: include/linux/dma/ 5272 5275 F: include/linux/dmaengine.h 5273 5276 F: include/linux/of_dma.h 5274 5277 ··· 11613 11614 F: drivers/dma/at_hdmac_regs.h 11614 11615 F: drivers/dma/at_xdmac.c 11615 11616 F: include/dt-bindings/dma/at91.h 11616 - F: include/linux/platform_data/dma-atmel.h 11617 11617 11618 11618 MICROCHIP AT91 SERIAL DRIVER 11619 11619 M: Richard Genoud <richard.genoud@gmail.com>
+5 -25
drivers/dma/Kconfig
··· 124 124 has the capability to offload memcpy, xor and pq computation 125 125 for raid5/6. 126 126 127 - config COH901318 128 - bool "ST-Ericsson COH901318 DMA support" 129 - select DMA_ENGINE 130 - depends on ARCH_U300 || COMPILE_TEST 131 - help 132 - Enable support for ST-Ericsson COH 901 318 DMA. 133 - 134 127 config DMA_BCM2835 135 128 tristate "BCM2835 DMA engine support" 136 129 depends on ARCH_BCM2835 ··· 172 179 config DW_AXI_DMAC 173 180 tristate "Synopsys DesignWare AXI DMA support" 174 181 depends on OF || COMPILE_TEST 182 + depends on HAS_IOMEM 175 183 select DMA_ENGINE 176 184 select DMA_VIRTUAL_CHANNELS 177 185 help ··· 372 378 XDMAC device. 373 379 374 380 config MMP_PDMA 375 - bool "MMP PDMA support" 381 + tristate "MMP PDMA support" 376 382 depends on ARCH_MMP || ARCH_PXA || COMPILE_TEST 377 383 select DMA_ENGINE 378 384 help 379 385 Support the MMP PDMA engine for PXA and MMP platform. 380 386 381 387 config MMP_TDMA 382 - bool "MMP Two-Channel DMA support" 388 + tristate "MMP Two-Channel DMA support" 383 389 depends on ARCH_MMP || COMPILE_TEST 384 390 select DMA_ENGINE 385 391 select GENERIC_ALLOCATOR ··· 512 518 Some PLX ExpressLane PCI Switches support additional DMA engines. 513 519 These are exposed via extra functions on the switch's 514 520 upstream port. Each function exposes one DMA channel. 515 - 516 - config SIRF_DMA 517 - tristate "CSR SiRFprimaII/SiRFmarco DMA support" 518 - depends on ARCH_SIRF 519 - select DMA_ENGINE 520 - help 521 - Enable support for the CSR SiRFprimaII DMA engine. 522 521 523 522 config STE_DMA40 524 523 bool "ST-Ericsson DMA40 support" ··· 697 710 driver provides the dmaengine required by the DisplayPort subsystem 698 711 display driver. 699 712 700 - config ZX_DMA 701 - tristate "ZTE ZX DMA support" 702 - depends on ARCH_ZX || COMPILE_TEST 703 - select DMA_ENGINE 704 - select DMA_VIRTUAL_CHANNELS 705 - help 706 - Support the DMA engine for ZTE ZX family platform devices. 
707 - 708 - 709 713 # driver files 710 714 source "drivers/dma/bestcomm/Kconfig" 711 715 ··· 717 739 source "drivers/dma/ti/Kconfig" 718 740 719 741 source "drivers/dma/fsl-dpaa2-qdma/Kconfig" 742 + 743 + source "drivers/dma/lgm/Kconfig" 720 744 721 745 # clients 722 746 comment "DMA Clients"
+1 -3
drivers/dma/Makefile
··· 20 20 obj-$(CONFIG_AT_XDMAC) += at_xdmac.o 21 21 obj-$(CONFIG_AXI_DMAC) += dma-axi-dmac.o 22 22 obj-$(CONFIG_BCM_SBA_RAID) += bcm-sba-raid.o 23 - obj-$(CONFIG_COH901318) += coh901318.o coh901318_lli.o 24 23 obj-$(CONFIG_DMA_BCM2835) += bcm2835-dma.o 25 24 obj-$(CONFIG_DMA_JZ4780) += dma-jz4780.o 26 25 obj-$(CONFIG_DMA_SA11X0) += sa11x0-dma.o ··· 64 65 obj-$(CONFIG_PXA_DMA) += pxa_dma.o 65 66 obj-$(CONFIG_RENESAS_DMA) += sh/ 66 67 obj-$(CONFIG_SF_PDMA) += sf-pdma/ 67 - obj-$(CONFIG_SIRF_DMA) += sirf-dma.o 68 68 obj-$(CONFIG_STE_DMA40) += ste_dma40.o ste_dma40_ll.o 69 69 obj-$(CONFIG_STM32_DMA) += stm32-dma.o 70 70 obj-$(CONFIG_STM32_DMAMUX) += stm32-dmamux.o ··· 77 79 obj-$(CONFIG_UNIPHIER_MDMAC) += uniphier-mdmac.o 78 80 obj-$(CONFIG_UNIPHIER_XDMAC) += uniphier-xdmac.o 79 81 obj-$(CONFIG_XGENE_DMA) += xgene-dma.o 80 - obj-$(CONFIG_ZX_DMA) += zx_dma.o 81 82 obj-$(CONFIG_ST_FDMA) += st_fdma.o 82 83 obj-$(CONFIG_FSL_DPAA2_QDMA) += fsl-dpaa2-qdma/ 84 + obj-$(CONFIG_INTEL_LDMA) += lgm/ 83 85 84 86 obj-y += mediatek/ 85 87 obj-y += qcom/
+19
drivers/dma/at_hdmac.c
··· 54 54 MODULE_PARM_DESC(init_nr_desc_per_channel, 55 55 "initial descriptors per channel (default: 64)"); 56 56 57 + /** 58 + * struct at_dma_platform_data - Controller configuration parameters 59 + * @nr_channels: Number of channels supported by hardware (max 8) 60 + * @cap_mask: dma_capability flags supported by the platform 61 + */ 62 + struct at_dma_platform_data { 63 + unsigned int nr_channels; 64 + dma_cap_mask_t cap_mask; 65 + }; 66 + 67 + /** 68 + * struct at_dma_slave - Controller-specific information about a slave 69 + * @dma_dev: required DMA master device 70 + * @cfg: Platform-specific initializer for the CFG register 71 + */ 72 + struct at_dma_slave { 73 + struct device *dma_dev; 74 + u32 cfg; 75 + }; 57 76 58 77 /* prototypes */ 59 78 static dma_cookie_t atc_tx_submit(struct dma_async_tx_descriptor *tx);
+25 -3
drivers/dma/at_hdmac_regs.h
··· 7 7 #ifndef AT_HDMAC_REGS_H 8 8 #define AT_HDMAC_REGS_H 9 9 10 - #include <linux/platform_data/dma-atmel.h> 11 - 12 10 #define AT_DMA_MAX_NR_CHANNELS 8 13 11 14 12 ··· 146 148 #define ATC_AUTO (0x1 << 31) /* Auto multiple buffer tx enable */ 147 149 148 150 /* Bitfields in CFG */ 149 - /* are in at_hdmac.h */ 151 + #define ATC_PER_MSB(h) ((0x30U & (h)) >> 4) /* Extract most significant bits of a handshaking identifier */ 152 + 153 + #define ATC_SRC_PER(h) (0xFU & (h)) /* Channel src rq associated with periph handshaking ifc h */ 154 + #define ATC_DST_PER(h) ((0xFU & (h)) << 4) /* Channel dst rq associated with periph handshaking ifc h */ 155 + #define ATC_SRC_REP (0x1 << 8) /* Source Replay Mod */ 156 + #define ATC_SRC_H2SEL (0x1 << 9) /* Source Handshaking Mod */ 157 + #define ATC_SRC_H2SEL_SW (0x0 << 9) 158 + #define ATC_SRC_H2SEL_HW (0x1 << 9) 159 + #define ATC_SRC_PER_MSB(h) (ATC_PER_MSB(h) << 10) /* Channel src rq (most significant bits) */ 160 + #define ATC_DST_REP (0x1 << 12) /* Destination Replay Mod */ 161 + #define ATC_DST_H2SEL (0x1 << 13) /* Destination Handshaking Mod */ 162 + #define ATC_DST_H2SEL_SW (0x0 << 13) 163 + #define ATC_DST_H2SEL_HW (0x1 << 13) 164 + #define ATC_DST_PER_MSB(h) (ATC_PER_MSB(h) << 14) /* Channel dst rq (most significant bits) */ 165 + #define ATC_SOD (0x1 << 16) /* Stop On Done */ 166 + #define ATC_LOCK_IF (0x1 << 20) /* Interface Lock */ 167 + #define ATC_LOCK_B (0x1 << 21) /* AHB Bus Lock */ 168 + #define ATC_LOCK_IF_L (0x1 << 22) /* Master Interface Arbiter Lock */ 169 + #define ATC_LOCK_IF_L_CHUNK (0x0 << 22) 170 + #define ATC_LOCK_IF_L_BUFFER (0x1 << 22) 171 + #define ATC_AHB_PROT_MASK (0x7 << 24) /* AHB Protection */ 172 + #define ATC_FIFOCFG_MASK (0x3 << 28) /* FIFO Request Configuration */ 173 + #define ATC_FIFOCFG_LARGESTBURST (0x0 << 28) 174 + #define ATC_FIFOCFG_HALFFIFO (0x1 << 28) 175 + #define ATC_FIFOCFG_ENOUGHSPACE (0x2 << 28) 150 176 151 177 /* Bitfields in SPIP */ 152 178 #define ATC_SPIP_HOLE(x) (0xFFFFU & (x))
-2808
drivers/dma/coh901318.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-only 2 - /* 3 - * driver/dma/coh901318.c 4 - * 5 - * Copyright (C) 2007-2009 ST-Ericsson 6 - * DMA driver for COH 901 318 7 - * Author: Per Friden <per.friden@stericsson.com> 8 - */ 9 - 10 - #include <linux/init.h> 11 - #include <linux/module.h> 12 - #include <linux/kernel.h> /* printk() */ 13 - #include <linux/fs.h> /* everything... */ 14 - #include <linux/scatterlist.h> 15 - #include <linux/slab.h> /* kmalloc() */ 16 - #include <linux/dmaengine.h> 17 - #include <linux/platform_device.h> 18 - #include <linux/device.h> 19 - #include <linux/irqreturn.h> 20 - #include <linux/interrupt.h> 21 - #include <linux/io.h> 22 - #include <linux/uaccess.h> 23 - #include <linux/debugfs.h> 24 - #include <linux/platform_data/dma-coh901318.h> 25 - #include <linux/of_dma.h> 26 - 27 - #include "coh901318.h" 28 - #include "dmaengine.h" 29 - 30 - #define COH901318_MOD32_MASK (0x1F) 31 - #define COH901318_WORD_MASK (0xFFFFFFFF) 32 - /* INT_STATUS - Interrupt Status Registers 32bit (R/-) */ 33 - #define COH901318_INT_STATUS1 (0x0000) 34 - #define COH901318_INT_STATUS2 (0x0004) 35 - /* TC_INT_STATUS - Terminal Count Interrupt Status Registers 32bit (R/-) */ 36 - #define COH901318_TC_INT_STATUS1 (0x0008) 37 - #define COH901318_TC_INT_STATUS2 (0x000C) 38 - /* TC_INT_CLEAR - Terminal Count Interrupt Clear Registers 32bit (-/W) */ 39 - #define COH901318_TC_INT_CLEAR1 (0x0010) 40 - #define COH901318_TC_INT_CLEAR2 (0x0014) 41 - /* RAW_TC_INT_STATUS - Raw Term Count Interrupt Status Registers 32bit (R/-) */ 42 - #define COH901318_RAW_TC_INT_STATUS1 (0x0018) 43 - #define COH901318_RAW_TC_INT_STATUS2 (0x001C) 44 - /* BE_INT_STATUS - Bus Error Interrupt Status Registers 32bit (R/-) */ 45 - #define COH901318_BE_INT_STATUS1 (0x0020) 46 - #define COH901318_BE_INT_STATUS2 (0x0024) 47 - /* BE_INT_CLEAR - Bus Error Interrupt Clear Registers 32bit (-/W) */ 48 - #define COH901318_BE_INT_CLEAR1 (0x0028) 49 - #define COH901318_BE_INT_CLEAR2 (0x002C) 50 - /* 
RAW_BE_INT_STATUS - Raw Term Count Interrupt Status Registers 32bit (R/-) */ 51 - #define COH901318_RAW_BE_INT_STATUS1 (0x0030) 52 - #define COH901318_RAW_BE_INT_STATUS2 (0x0034) 53 - 54 - /* 55 - * CX_CFG - Channel Configuration Registers 32bit (R/W) 56 - */ 57 - #define COH901318_CX_CFG (0x0100) 58 - #define COH901318_CX_CFG_SPACING (0x04) 59 - /* Channel enable activates tha dma job */ 60 - #define COH901318_CX_CFG_CH_ENABLE (0x00000001) 61 - #define COH901318_CX_CFG_CH_DISABLE (0x00000000) 62 - /* Request Mode */ 63 - #define COH901318_CX_CFG_RM_MASK (0x00000006) 64 - #define COH901318_CX_CFG_RM_MEMORY_TO_MEMORY (0x0 << 1) 65 - #define COH901318_CX_CFG_RM_PRIMARY_TO_MEMORY (0x1 << 1) 66 - #define COH901318_CX_CFG_RM_MEMORY_TO_PRIMARY (0x1 << 1) 67 - #define COH901318_CX_CFG_RM_PRIMARY_TO_SECONDARY (0x3 << 1) 68 - #define COH901318_CX_CFG_RM_SECONDARY_TO_PRIMARY (0x3 << 1) 69 - /* Linked channel request field. RM must == 11 */ 70 - #define COH901318_CX_CFG_LCRF_SHIFT 3 71 - #define COH901318_CX_CFG_LCRF_MASK (0x000001F8) 72 - #define COH901318_CX_CFG_LCR_DISABLE (0x00000000) 73 - /* Terminal Counter Interrupt Request Mask */ 74 - #define COH901318_CX_CFG_TC_IRQ_ENABLE (0x00000200) 75 - #define COH901318_CX_CFG_TC_IRQ_DISABLE (0x00000000) 76 - /* Bus Error interrupt Mask */ 77 - #define COH901318_CX_CFG_BE_IRQ_ENABLE (0x00000400) 78 - #define COH901318_CX_CFG_BE_IRQ_DISABLE (0x00000000) 79 - 80 - /* 81 - * CX_STAT - Channel Status Registers 32bit (R/-) 82 - */ 83 - #define COH901318_CX_STAT (0x0200) 84 - #define COH901318_CX_STAT_SPACING (0x04) 85 - #define COH901318_CX_STAT_RBE_IRQ_IND (0x00000008) 86 - #define COH901318_CX_STAT_RTC_IRQ_IND (0x00000004) 87 - #define COH901318_CX_STAT_ACTIVE (0x00000002) 88 - #define COH901318_CX_STAT_ENABLED (0x00000001) 89 - 90 - /* 91 - * CX_CTRL - Channel Control Registers 32bit (R/W) 92 - */ 93 - #define COH901318_CX_CTRL (0x0400) 94 - #define COH901318_CX_CTRL_SPACING (0x10) 95 - /* Transfer Count Enable */ 96 - #define 
COH901318_CX_CTRL_TC_ENABLE (0x00001000) 97 - #define COH901318_CX_CTRL_TC_DISABLE (0x00000000) 98 - /* Transfer Count Value 0 - 4095 */ 99 - #define COH901318_CX_CTRL_TC_VALUE_MASK (0x00000FFF) 100 - /* Burst count */ 101 - #define COH901318_CX_CTRL_BURST_COUNT_MASK (0x0000E000) 102 - #define COH901318_CX_CTRL_BURST_COUNT_64_BYTES (0x7 << 13) 103 - #define COH901318_CX_CTRL_BURST_COUNT_48_BYTES (0x6 << 13) 104 - #define COH901318_CX_CTRL_BURST_COUNT_32_BYTES (0x5 << 13) 105 - #define COH901318_CX_CTRL_BURST_COUNT_16_BYTES (0x4 << 13) 106 - #define COH901318_CX_CTRL_BURST_COUNT_8_BYTES (0x3 << 13) 107 - #define COH901318_CX_CTRL_BURST_COUNT_4_BYTES (0x2 << 13) 108 - #define COH901318_CX_CTRL_BURST_COUNT_2_BYTES (0x1 << 13) 109 - #define COH901318_CX_CTRL_BURST_COUNT_1_BYTE (0x0 << 13) 110 - /* Source bus size */ 111 - #define COH901318_CX_CTRL_SRC_BUS_SIZE_MASK (0x00030000) 112 - #define COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS (0x2 << 16) 113 - #define COH901318_CX_CTRL_SRC_BUS_SIZE_16_BITS (0x1 << 16) 114 - #define COH901318_CX_CTRL_SRC_BUS_SIZE_8_BITS (0x0 << 16) 115 - /* Source address increment */ 116 - #define COH901318_CX_CTRL_SRC_ADDR_INC_ENABLE (0x00040000) 117 - #define COH901318_CX_CTRL_SRC_ADDR_INC_DISABLE (0x00000000) 118 - /* Destination Bus Size */ 119 - #define COH901318_CX_CTRL_DST_BUS_SIZE_MASK (0x00180000) 120 - #define COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS (0x2 << 19) 121 - #define COH901318_CX_CTRL_DST_BUS_SIZE_16_BITS (0x1 << 19) 122 - #define COH901318_CX_CTRL_DST_BUS_SIZE_8_BITS (0x0 << 19) 123 - /* Destination address increment */ 124 - #define COH901318_CX_CTRL_DST_ADDR_INC_ENABLE (0x00200000) 125 - #define COH901318_CX_CTRL_DST_ADDR_INC_DISABLE (0x00000000) 126 - /* Master Mode (Master2 is only connected to MSL) */ 127 - #define COH901318_CX_CTRL_MASTER_MODE_MASK (0x00C00000) 128 - #define COH901318_CX_CTRL_MASTER_MODE_M2R_M1W (0x3 << 22) 129 - #define COH901318_CX_CTRL_MASTER_MODE_M1R_M2W (0x2 << 22) 130 - #define 
#define COH901318_CX_CTRL_MASTER_MODE_M2RW	(0x1 << 22)
#define COH901318_CX_CTRL_MASTER_MODE_M1RW	(0x0 << 22)
/* Terminal Count flag to PER enable */
#define COH901318_CX_CTRL_TCP_ENABLE		(0x01000000)
#define COH901318_CX_CTRL_TCP_DISABLE		(0x00000000)
/* Terminal Count flags to CPU enable */
#define COH901318_CX_CTRL_TC_IRQ_ENABLE		(0x02000000)
#define COH901318_CX_CTRL_TC_IRQ_DISABLE	(0x00000000)
/* Hand shake to peripheral */
#define COH901318_CX_CTRL_HSP_ENABLE		(0x04000000)
#define COH901318_CX_CTRL_HSP_DISABLE		(0x00000000)
#define COH901318_CX_CTRL_HSS_ENABLE		(0x08000000)
#define COH901318_CX_CTRL_HSS_DISABLE		(0x00000000)
/* DMA mode */
#define COH901318_CX_CTRL_DDMA_MASK		(0x30000000)
#define COH901318_CX_CTRL_DDMA_LEGACY		(0x0 << 28)
#define COH901318_CX_CTRL_DDMA_DEMAND_DMA1	(0x1 << 28)
#define COH901318_CX_CTRL_DDMA_DEMAND_DMA2	(0x2 << 28)
/* Primary Request Data Destination */
#define COH901318_CX_CTRL_PRDD_MASK		(0x40000000)
#define COH901318_CX_CTRL_PRDD_DEST		(0x1 << 30)
#define COH901318_CX_CTRL_PRDD_SOURCE		(0x0 << 30)

/*
 * CX_SRC_ADDR - Channel Source Address Registers 32bit (R/W)
 */
#define COH901318_CX_SRC_ADDR			(0x0404)
#define COH901318_CX_SRC_ADDR_SPACING		(0x10)

/*
 * CX_DST_ADDR - Channel Destination Address Registers 32bit R/W
 */
#define COH901318_CX_DST_ADDR			(0x0408)
#define COH901318_CX_DST_ADDR_SPACING		(0x10)

/*
 * CX_LNK_ADDR - Channel Link Address Registers 32bit (R/W)
 */
#define COH901318_CX_LNK_ADDR			(0x040C)
#define COH901318_CX_LNK_ADDR_SPACING		(0x10)
#define COH901318_CX_LNK_LINK_IMMEDIATE		(0x00000001)

/**
 * struct coh901318_params - parameters for DMAC configuration
 * @config: DMA config register
 * @ctrl_lli_last: DMA control register for the last lli in the list
 * @ctrl_lli: DMA control register for an lli
 * @ctrl_lli_chained: DMA control register for a chained lli
 */
struct coh901318_params {
	u32 config;
	u32 ctrl_lli_last;
	u32 ctrl_lli;
	u32 ctrl_lli_chained;
};

/**
 * struct coh_dma_channel - dma channel base
 * @name: ascii name of dma channel
 * @number: channel id number
 * @desc_nbr_max: number of preallocated descriptors
 * @priority_high: prio of channel, 0 low otherwise high.
 * @param: configuration parameters
 */
struct coh_dma_channel {
	const char name[32];
	const int number;
	const int desc_nbr_max;
	const int priority_high;
	const struct coh901318_params param;
};

/**
 * struct powersave - DMA power save structure
 * @lock: lock protecting data in this struct
 * @started_channels: bit mask indicating active dma channels
 */
struct powersave {
	spinlock_t lock;
	u64 started_channels;
};

/* points out all dma slave channels.
 * Syntax is [A1, B1, A2, B2, .... ,-1,-1]
 * Select all channels from A to B, end of list is marked with -1,-1
 */
static int dma_slave_channels[] = {
	U300_DMA_MSL_TX_0, U300_DMA_SPI_RX,
	U300_DMA_UART1_TX, U300_DMA_UART1_RX, -1, -1};

/* points out all dma memcpy channels. */
static int dma_memcpy_channels[] = {
	U300_DMA_GENERAL_PURPOSE_0, U300_DMA_GENERAL_PURPOSE_8, -1, -1};

#define flags_memcpy_config	(COH901318_CX_CFG_CH_DISABLE | \
				COH901318_CX_CFG_RM_MEMORY_TO_MEMORY | \
				COH901318_CX_CFG_LCR_DISABLE | \
				COH901318_CX_CFG_TC_IRQ_ENABLE | \
				COH901318_CX_CFG_BE_IRQ_ENABLE)
#define flags_memcpy_lli_chained (COH901318_CX_CTRL_TC_ENABLE | \
				COH901318_CX_CTRL_BURST_COUNT_32_BYTES | \
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS | \
				COH901318_CX_CTRL_SRC_ADDR_INC_ENABLE | \
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS | \
				COH901318_CX_CTRL_DST_ADDR_INC_ENABLE | \
				COH901318_CX_CTRL_MASTER_MODE_M1RW | \
				COH901318_CX_CTRL_TCP_DISABLE | \
				COH901318_CX_CTRL_TC_IRQ_DISABLE | \
				COH901318_CX_CTRL_HSP_DISABLE | \
				COH901318_CX_CTRL_HSS_DISABLE | \
				COH901318_CX_CTRL_DDMA_LEGACY | \
				COH901318_CX_CTRL_PRDD_SOURCE)
#define flags_memcpy_lli	(COH901318_CX_CTRL_TC_ENABLE | \
				COH901318_CX_CTRL_BURST_COUNT_32_BYTES | \
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS | \
				COH901318_CX_CTRL_SRC_ADDR_INC_ENABLE | \
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS | \
				COH901318_CX_CTRL_DST_ADDR_INC_ENABLE | \
				COH901318_CX_CTRL_MASTER_MODE_M1RW | \
				COH901318_CX_CTRL_TCP_DISABLE | \
				COH901318_CX_CTRL_TC_IRQ_DISABLE | \
				COH901318_CX_CTRL_HSP_DISABLE | \
				COH901318_CX_CTRL_HSS_DISABLE | \
				COH901318_CX_CTRL_DDMA_LEGACY | \
				COH901318_CX_CTRL_PRDD_SOURCE)
#define flags_memcpy_lli_last	(COH901318_CX_CTRL_TC_ENABLE | \
				COH901318_CX_CTRL_BURST_COUNT_32_BYTES | \
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS | \
				COH901318_CX_CTRL_SRC_ADDR_INC_ENABLE | \
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS | \
				COH901318_CX_CTRL_DST_ADDR_INC_ENABLE | \
				COH901318_CX_CTRL_MASTER_MODE_M1RW | \
				COH901318_CX_CTRL_TCP_DISABLE | \
				COH901318_CX_CTRL_TC_IRQ_ENABLE | \
				COH901318_CX_CTRL_HSP_DISABLE | \
				COH901318_CX_CTRL_HSS_DISABLE | \
				COH901318_CX_CTRL_DDMA_LEGACY | \
				COH901318_CX_CTRL_PRDD_SOURCE)

static const struct coh_dma_channel chan_config[U300_DMA_CHANNELS] = {
	{
		.number = U300_DMA_MSL_TX_0,
		.name = "MSL TX 0",
		.priority_high = 0,
	},
	{
		.number = U300_DMA_MSL_TX_1,
		.name = "MSL TX 1",
		.priority_high = 0,
		.param.config = COH901318_CX_CFG_CH_DISABLE |
				COH901318_CX_CFG_LCR_DISABLE |
				COH901318_CX_CFG_TC_IRQ_ENABLE |
				COH901318_CX_CFG_BE_IRQ_ENABLE,
		.param.ctrl_lli_chained = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_32_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1R_M2W |
				COH901318_CX_CTRL_TCP_DISABLE |
				COH901318_CX_CTRL_TC_IRQ_DISABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY |
				COH901318_CX_CTRL_PRDD_SOURCE,
		.param.ctrl_lli = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_32_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1R_M2W |
				COH901318_CX_CTRL_TCP_ENABLE |
				COH901318_CX_CTRL_TC_IRQ_DISABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY |
				COH901318_CX_CTRL_PRDD_SOURCE,
		.param.ctrl_lli_last = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_32_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1R_M2W |
				COH901318_CX_CTRL_TCP_ENABLE |
				COH901318_CX_CTRL_TC_IRQ_ENABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY |
				COH901318_CX_CTRL_PRDD_SOURCE,
	},
	{
		.number = U300_DMA_MSL_TX_2,
		.name = "MSL TX 2",
		.priority_high = 0,
		.param.config = COH901318_CX_CFG_CH_DISABLE |
				COH901318_CX_CFG_LCR_DISABLE |
				COH901318_CX_CFG_TC_IRQ_ENABLE |
				COH901318_CX_CFG_BE_IRQ_ENABLE,
		.param.ctrl_lli_chained = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_32_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1R_M2W |
				COH901318_CX_CTRL_TCP_DISABLE |
				COH901318_CX_CTRL_TC_IRQ_DISABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY |
				COH901318_CX_CTRL_PRDD_SOURCE,
		.param.ctrl_lli = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_32_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1R_M2W |
				COH901318_CX_CTRL_TCP_ENABLE |
				COH901318_CX_CTRL_TC_IRQ_DISABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY |
				COH901318_CX_CTRL_PRDD_SOURCE,
		.param.ctrl_lli_last = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_32_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1R_M2W |
				COH901318_CX_CTRL_TCP_ENABLE |
				COH901318_CX_CTRL_TC_IRQ_ENABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY |
				COH901318_CX_CTRL_PRDD_SOURCE,
		.desc_nbr_max = 10,
	},
	{
		.number = U300_DMA_MSL_TX_3,
		.name = "MSL TX 3",
		.priority_high = 0,
		.param.config = COH901318_CX_CFG_CH_DISABLE |
				COH901318_CX_CFG_LCR_DISABLE |
				COH901318_CX_CFG_TC_IRQ_ENABLE |
				COH901318_CX_CFG_BE_IRQ_ENABLE,
		.param.ctrl_lli_chained = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_32_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1R_M2W |
				COH901318_CX_CTRL_TCP_DISABLE |
				COH901318_CX_CTRL_TC_IRQ_DISABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY |
				COH901318_CX_CTRL_PRDD_SOURCE,
		.param.ctrl_lli = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_32_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1R_M2W |
				COH901318_CX_CTRL_TCP_ENABLE |
				COH901318_CX_CTRL_TC_IRQ_DISABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY |
				COH901318_CX_CTRL_PRDD_SOURCE,
		.param.ctrl_lli_last = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_32_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1R_M2W |
				COH901318_CX_CTRL_TCP_ENABLE |
				COH901318_CX_CTRL_TC_IRQ_ENABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY |
				COH901318_CX_CTRL_PRDD_SOURCE,
	},
	{
		.number = U300_DMA_MSL_TX_4,
		.name = "MSL TX 4",
		.priority_high = 0,
		.param.config = COH901318_CX_CFG_CH_DISABLE |
				COH901318_CX_CFG_LCR_DISABLE |
				COH901318_CX_CFG_TC_IRQ_ENABLE |
				COH901318_CX_CFG_BE_IRQ_ENABLE,
		.param.ctrl_lli_chained = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_32_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1R_M2W |
				COH901318_CX_CTRL_TCP_DISABLE |
				COH901318_CX_CTRL_TC_IRQ_DISABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY |
				COH901318_CX_CTRL_PRDD_SOURCE,
		.param.ctrl_lli = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_32_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1R_M2W |
				COH901318_CX_CTRL_TCP_ENABLE |
				COH901318_CX_CTRL_TC_IRQ_DISABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY |
				COH901318_CX_CTRL_PRDD_SOURCE,
		.param.ctrl_lli_last = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_32_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1R_M2W |
				COH901318_CX_CTRL_TCP_ENABLE |
				COH901318_CX_CTRL_TC_IRQ_ENABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY |
				COH901318_CX_CTRL_PRDD_SOURCE,
	},
	{
		.number = U300_DMA_MSL_TX_5,
		.name = "MSL TX 5",
		.priority_high = 0,
	},
	{
		.number = U300_DMA_MSL_TX_6,
		.name = "MSL TX 6",
		.priority_high = 0,
	},
	{
		.number = U300_DMA_MSL_RX_0,
		.name = "MSL RX 0",
		.priority_high = 0,
	},
	{
		.number = U300_DMA_MSL_RX_1,
		.name = "MSL RX 1",
		.priority_high = 0,
		.param.config = COH901318_CX_CFG_CH_DISABLE |
				COH901318_CX_CFG_LCR_DISABLE |
				COH901318_CX_CFG_TC_IRQ_ENABLE |
				COH901318_CX_CFG_BE_IRQ_ENABLE,
		.param.ctrl_lli_chained = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_32_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M2R_M1W |
				COH901318_CX_CTRL_TCP_DISABLE |
				COH901318_CX_CTRL_TC_IRQ_DISABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_DEMAND_DMA1 |
				COH901318_CX_CTRL_PRDD_DEST,
		.param.ctrl_lli = 0,
		.param.ctrl_lli_last = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_32_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M2R_M1W |
				COH901318_CX_CTRL_TCP_DISABLE |
				COH901318_CX_CTRL_TC_IRQ_ENABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_DEMAND_DMA1 |
				COH901318_CX_CTRL_PRDD_DEST,
	},
	{
		.number = U300_DMA_MSL_RX_2,
		.name = "MSL RX 2",
		.priority_high = 0,
		.param.config = COH901318_CX_CFG_CH_DISABLE |
				COH901318_CX_CFG_LCR_DISABLE |
				COH901318_CX_CFG_TC_IRQ_ENABLE |
				COH901318_CX_CFG_BE_IRQ_ENABLE,
		.param.ctrl_lli_chained = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_32_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M2R_M1W |
				COH901318_CX_CTRL_TCP_DISABLE |
				COH901318_CX_CTRL_TC_IRQ_DISABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_DEMAND_DMA1 |
				COH901318_CX_CTRL_PRDD_DEST,
		.param.ctrl_lli = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_32_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M2R_M1W |
				COH901318_CX_CTRL_TCP_DISABLE |
				COH901318_CX_CTRL_TC_IRQ_ENABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_DEMAND_DMA1 |
				COH901318_CX_CTRL_PRDD_DEST,
		.param.ctrl_lli_last = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_32_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M2R_M1W |
				COH901318_CX_CTRL_TCP_DISABLE |
				COH901318_CX_CTRL_TC_IRQ_ENABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_DEMAND_DMA1 |
				COH901318_CX_CTRL_PRDD_DEST,
	},
	{
		.number = U300_DMA_MSL_RX_3,
		.name = "MSL RX 3",
		.priority_high = 0,
		.param.config = COH901318_CX_CFG_CH_DISABLE |
				COH901318_CX_CFG_LCR_DISABLE |
				COH901318_CX_CFG_TC_IRQ_ENABLE |
				COH901318_CX_CFG_BE_IRQ_ENABLE,
		.param.ctrl_lli_chained = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_32_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M2R_M1W |
				COH901318_CX_CTRL_TCP_DISABLE |
				COH901318_CX_CTRL_TC_IRQ_DISABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_DEMAND_DMA1 |
				COH901318_CX_CTRL_PRDD_DEST,
		.param.ctrl_lli = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_32_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M2R_M1W |
				COH901318_CX_CTRL_TCP_DISABLE |
				COH901318_CX_CTRL_TC_IRQ_ENABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_DEMAND_DMA1 |
				COH901318_CX_CTRL_PRDD_DEST,
		.param.ctrl_lli_last = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_32_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M2R_M1W |
				COH901318_CX_CTRL_TCP_DISABLE |
				COH901318_CX_CTRL_TC_IRQ_ENABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_DEMAND_DMA1 |
				COH901318_CX_CTRL_PRDD_DEST,
	},
	{
		.number = U300_DMA_MSL_RX_4,
		.name = "MSL RX 4",
		.priority_high = 0,
		.param.config = COH901318_CX_CFG_CH_DISABLE |
				COH901318_CX_CFG_LCR_DISABLE |
				COH901318_CX_CFG_TC_IRQ_ENABLE |
				COH901318_CX_CFG_BE_IRQ_ENABLE,
		.param.ctrl_lli_chained = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_32_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M2R_M1W |
				COH901318_CX_CTRL_TCP_DISABLE |
				COH901318_CX_CTRL_TC_IRQ_DISABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_DEMAND_DMA1 |
				COH901318_CX_CTRL_PRDD_DEST,
		.param.ctrl_lli = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_32_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M2R_M1W |
				COH901318_CX_CTRL_TCP_DISABLE |
				COH901318_CX_CTRL_TC_IRQ_ENABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_DEMAND_DMA1 |
				COH901318_CX_CTRL_PRDD_DEST,
		.param.ctrl_lli_last = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_32_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M2R_M1W |
				COH901318_CX_CTRL_TCP_DISABLE |
				COH901318_CX_CTRL_TC_IRQ_ENABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_DEMAND_DMA1 |
				COH901318_CX_CTRL_PRDD_DEST,
	},
	{
		.number = U300_DMA_MSL_RX_5,
		.name = "MSL RX 5",
		.priority_high = 0,
		.param.config = COH901318_CX_CFG_CH_DISABLE |
				COH901318_CX_CFG_LCR_DISABLE |
				COH901318_CX_CFG_TC_IRQ_ENABLE |
				COH901318_CX_CFG_BE_IRQ_ENABLE,
		.param.ctrl_lli_chained = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_32_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M2R_M1W |
				COH901318_CX_CTRL_TCP_DISABLE |
				COH901318_CX_CTRL_TC_IRQ_DISABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_DEMAND_DMA1 |
				COH901318_CX_CTRL_PRDD_DEST,
		.param.ctrl_lli = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_32_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M2R_M1W |
				COH901318_CX_CTRL_TCP_DISABLE |
				COH901318_CX_CTRL_TC_IRQ_ENABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_DEMAND_DMA1 |
				COH901318_CX_CTRL_PRDD_DEST,
		.param.ctrl_lli_last = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_32_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M2R_M1W |
				COH901318_CX_CTRL_TCP_DISABLE |
				COH901318_CX_CTRL_TC_IRQ_ENABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_DEMAND_DMA1 |
				COH901318_CX_CTRL_PRDD_DEST,
	},
	{
		.number = U300_DMA_MSL_RX_6,
		.name = "MSL RX 6",
		.priority_high = 0,
	},
	/*
	 * Don't set up device address, burst count or size of src
	 * or dst bus for this peripheral - handled by PrimeCell
	 * DMA extension.
	 */
	{
		.number = U300_DMA_MMCSD_RX_TX,
		.name = "MMCSD RX TX",
		.priority_high = 0,
		.param.config = COH901318_CX_CFG_CH_DISABLE |
				COH901318_CX_CFG_LCR_DISABLE |
				COH901318_CX_CFG_TC_IRQ_ENABLE |
				COH901318_CX_CFG_BE_IRQ_ENABLE,
		.param.ctrl_lli_chained = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1RW |
				COH901318_CX_CTRL_TCP_ENABLE |
				COH901318_CX_CTRL_TC_IRQ_DISABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY,
		.param.ctrl_lli = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1RW |
				COH901318_CX_CTRL_TCP_ENABLE |
				COH901318_CX_CTRL_TC_IRQ_DISABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY,
		.param.ctrl_lli_last = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1RW |
				COH901318_CX_CTRL_TCP_DISABLE |
				COH901318_CX_CTRL_TC_IRQ_ENABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY,

	},
	{
		.number = U300_DMA_MSPRO_TX,
		.name = "MSPRO TX",
		.priority_high = 0,
	},
	{
		.number = U300_DMA_MSPRO_RX,
		.name = "MSPRO RX",
		.priority_high = 0,
	},
	/*
	 * Don't set up device address, burst count or size of src
	 * or dst bus for this peripheral - handled by PrimeCell
	 * DMA extension.
	 */
	{
		.number = U300_DMA_UART0_TX,
		.name = "UART0 TX",
		.priority_high = 0,
		.param.config = COH901318_CX_CFG_CH_DISABLE |
				COH901318_CX_CFG_LCR_DISABLE |
				COH901318_CX_CFG_TC_IRQ_ENABLE |
				COH901318_CX_CFG_BE_IRQ_ENABLE,
		.param.ctrl_lli_chained = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1RW |
				COH901318_CX_CTRL_TCP_ENABLE |
				COH901318_CX_CTRL_TC_IRQ_DISABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY,
		.param.ctrl_lli = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1RW |
				COH901318_CX_CTRL_TCP_ENABLE |
				COH901318_CX_CTRL_TC_IRQ_ENABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY,
		.param.ctrl_lli_last = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1RW |
				COH901318_CX_CTRL_TCP_ENABLE |
				COH901318_CX_CTRL_TC_IRQ_ENABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY,
	},
	{
		.number = U300_DMA_UART0_RX,
		.name = "UART0 RX",
		.priority_high = 0,
		.param.config = COH901318_CX_CFG_CH_DISABLE |
				COH901318_CX_CFG_LCR_DISABLE |
				COH901318_CX_CFG_TC_IRQ_ENABLE |
				COH901318_CX_CFG_BE_IRQ_ENABLE,
		.param.ctrl_lli_chained = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1RW |
				COH901318_CX_CTRL_TCP_ENABLE |
				COH901318_CX_CTRL_TC_IRQ_DISABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY,
		.param.ctrl_lli = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1RW |
				COH901318_CX_CTRL_TCP_ENABLE |
				COH901318_CX_CTRL_TC_IRQ_ENABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY,
		.param.ctrl_lli_last = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1RW |
				COH901318_CX_CTRL_TCP_ENABLE |
				COH901318_CX_CTRL_TC_IRQ_ENABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY,
	},
	{
		.number = U300_DMA_APEX_TX,
		.name = "APEX TX",
		.priority_high = 0,
	},
	{
		.number = U300_DMA_APEX_RX,
		.name = "APEX RX",
		.priority_high = 0,
	},
	{
		.number = U300_DMA_PCM_I2S0_TX,
		.name = "PCM I2S0 TX",
		.priority_high = 1,
		.param.config = COH901318_CX_CFG_CH_DISABLE |
				COH901318_CX_CFG_LCR_DISABLE |
				COH901318_CX_CFG_TC_IRQ_ENABLE |
				COH901318_CX_CFG_BE_IRQ_ENABLE,
		.param.ctrl_lli_chained = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_16_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1RW |
				COH901318_CX_CTRL_TCP_DISABLE |
				COH901318_CX_CTRL_TC_IRQ_DISABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY |
				COH901318_CX_CTRL_PRDD_SOURCE,
		.param.ctrl_lli = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_16_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1RW |
				COH901318_CX_CTRL_TCP_ENABLE |
				COH901318_CX_CTRL_TC_IRQ_DISABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY |
				COH901318_CX_CTRL_PRDD_SOURCE,
		.param.ctrl_lli_last = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_16_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1RW |
				COH901318_CX_CTRL_TCP_ENABLE |
				COH901318_CX_CTRL_TC_IRQ_DISABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY |
				COH901318_CX_CTRL_PRDD_SOURCE,
	},
	{
		.number = U300_DMA_PCM_I2S0_RX,
		.name = "PCM I2S0 RX",
		.priority_high = 1,
		.param.config = COH901318_CX_CFG_CH_DISABLE |
				COH901318_CX_CFG_LCR_DISABLE |
				COH901318_CX_CFG_TC_IRQ_ENABLE |
				COH901318_CX_CFG_BE_IRQ_ENABLE,
		.param.ctrl_lli_chained = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_16_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1RW |
				COH901318_CX_CTRL_TCP_DISABLE |
				COH901318_CX_CTRL_TC_IRQ_DISABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY |
				COH901318_CX_CTRL_PRDD_DEST,
		.param.ctrl_lli = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_16_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1RW |
				COH901318_CX_CTRL_TCP_ENABLE |
				COH901318_CX_CTRL_TC_IRQ_DISABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY |
				COH901318_CX_CTRL_PRDD_DEST,
		.param.ctrl_lli_last = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_16_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1RW |
				COH901318_CX_CTRL_TCP_ENABLE |
				COH901318_CX_CTRL_TC_IRQ_ENABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY |
				COH901318_CX_CTRL_PRDD_DEST,
	},
	{
		.number = U300_DMA_PCM_I2S1_TX,
		.name = "PCM I2S1 TX",
		.priority_high = 1,
		.param.config = COH901318_CX_CFG_CH_DISABLE |
				COH901318_CX_CFG_LCR_DISABLE |
				COH901318_CX_CFG_TC_IRQ_ENABLE |
				COH901318_CX_CFG_BE_IRQ_ENABLE,
		.param.ctrl_lli_chained = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_16_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1RW |
				COH901318_CX_CTRL_TCP_DISABLE |
				COH901318_CX_CTRL_TC_IRQ_DISABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY |
				COH901318_CX_CTRL_PRDD_SOURCE,
		.param.ctrl_lli = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_16_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1RW |
				COH901318_CX_CTRL_TCP_ENABLE |
				COH901318_CX_CTRL_TC_IRQ_DISABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY |
				COH901318_CX_CTRL_PRDD_SOURCE,
		.param.ctrl_lli_last = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_16_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1RW |
				COH901318_CX_CTRL_TCP_ENABLE |
				COH901318_CX_CTRL_TC_IRQ_ENABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY |
				COH901318_CX_CTRL_PRDD_SOURCE,
	},
	{
		.number = U300_DMA_PCM_I2S1_RX,
		.name = "PCM I2S1 RX",
		.priority_high = 1,
		.param.config = COH901318_CX_CFG_CH_DISABLE |
				COH901318_CX_CFG_LCR_DISABLE |
				COH901318_CX_CFG_TC_IRQ_ENABLE |
				COH901318_CX_CFG_BE_IRQ_ENABLE,
		.param.ctrl_lli_chained = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_16_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1RW |
				COH901318_CX_CTRL_TCP_DISABLE |
				COH901318_CX_CTRL_TC_IRQ_DISABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY |
				COH901318_CX_CTRL_PRDD_DEST,
		.param.ctrl_lli = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_16_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1RW |
				COH901318_CX_CTRL_TCP_ENABLE |
				COH901318_CX_CTRL_TC_IRQ_DISABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY |
				COH901318_CX_CTRL_PRDD_DEST,
		.param.ctrl_lli_last = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_BURST_COUNT_16_BYTES |
				COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_SRC_ADDR_INC_DISABLE |
				COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS |
				COH901318_CX_CTRL_DST_ADDR_INC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1RW |
				COH901318_CX_CTRL_TCP_ENABLE |
				COH901318_CX_CTRL_TC_IRQ_ENABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY |
				COH901318_CX_CTRL_PRDD_DEST,
	},
	{
		.number = U300_DMA_XGAM_CDI,
		.name = "XGAM CDI",
		.priority_high = 0,
	},
	{
		.number = U300_DMA_XGAM_PDI,
		.name = "XGAM PDI",
		.priority_high = 0,
	},
	/*
	 * Don't set up device address, burst count or size of src
	 * or dst bus for this peripheral - handled by PrimeCell
	 * DMA extension.
	 */
	{
		.number = U300_DMA_SPI_TX,
		.name = "SPI TX",
		.priority_high = 0,
		.param.config = COH901318_CX_CFG_CH_DISABLE |
				COH901318_CX_CFG_LCR_DISABLE |
				COH901318_CX_CFG_TC_IRQ_ENABLE |
				COH901318_CX_CFG_BE_IRQ_ENABLE,
		.param.ctrl_lli_chained = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1RW |
				COH901318_CX_CTRL_TCP_DISABLE |
				COH901318_CX_CTRL_TC_IRQ_DISABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY,
		.param.ctrl_lli = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1RW |
				COH901318_CX_CTRL_TCP_DISABLE |
				COH901318_CX_CTRL_TC_IRQ_ENABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY,
		.param.ctrl_lli_last = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1RW |
				COH901318_CX_CTRL_TCP_DISABLE |
				COH901318_CX_CTRL_TC_IRQ_ENABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY,
	},
	{
		.number = U300_DMA_SPI_RX,
		.name = "SPI RX",
		.priority_high = 0,
		.param.config = COH901318_CX_CFG_CH_DISABLE |
				COH901318_CX_CFG_LCR_DISABLE |
				COH901318_CX_CFG_TC_IRQ_ENABLE |
				COH901318_CX_CFG_BE_IRQ_ENABLE,
		.param.ctrl_lli_chained = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1RW |
				COH901318_CX_CTRL_TCP_DISABLE |
				COH901318_CX_CTRL_TC_IRQ_DISABLE |
				COH901318_CX_CTRL_HSP_ENABLE |
				COH901318_CX_CTRL_HSS_DISABLE |
				COH901318_CX_CTRL_DDMA_LEGACY,
		.param.ctrl_lli = 0 |
				COH901318_CX_CTRL_TC_ENABLE |
				COH901318_CX_CTRL_MASTER_MODE_M1RW |
				COH901318_CX_CTRL_TCP_DISABLE |
				COH901318_CX_CTRL_TC_IRQ_ENABLE |
			COH901318_CX_CTRL_HSP_ENABLE |
			COH901318_CX_CTRL_HSS_DISABLE |
			COH901318_CX_CTRL_DDMA_LEGACY,
		.param.ctrl_lli_last = 0 |
			COH901318_CX_CTRL_TC_ENABLE |
			COH901318_CX_CTRL_MASTER_MODE_M1RW |
			COH901318_CX_CTRL_TCP_DISABLE |
			COH901318_CX_CTRL_TC_IRQ_ENABLE |
			COH901318_CX_CTRL_HSP_ENABLE |
			COH901318_CX_CTRL_HSS_DISABLE |
			COH901318_CX_CTRL_DDMA_LEGACY,

	},
	{
		.number = U300_DMA_GENERAL_PURPOSE_0,
		.name = "GENERAL 00",
		.priority_high = 0,

		.param.config = flags_memcpy_config,
		.param.ctrl_lli_chained = flags_memcpy_lli_chained,
		.param.ctrl_lli = flags_memcpy_lli,
		.param.ctrl_lli_last = flags_memcpy_lli_last,
	},
	{
		.number = U300_DMA_GENERAL_PURPOSE_1,
		.name = "GENERAL 01",
		.priority_high = 0,

		.param.config = flags_memcpy_config,
		.param.ctrl_lli_chained = flags_memcpy_lli_chained,
		.param.ctrl_lli = flags_memcpy_lli,
		.param.ctrl_lli_last = flags_memcpy_lli_last,
	},
	{
		.number = U300_DMA_GENERAL_PURPOSE_2,
		.name = "GENERAL 02",
		.priority_high = 0,

		.param.config = flags_memcpy_config,
		.param.ctrl_lli_chained = flags_memcpy_lli_chained,
		.param.ctrl_lli = flags_memcpy_lli,
		.param.ctrl_lli_last = flags_memcpy_lli_last,
	},
	{
		.number = U300_DMA_GENERAL_PURPOSE_3,
		.name = "GENERAL 03",
		.priority_high = 0,

		.param.config = flags_memcpy_config,
		.param.ctrl_lli_chained = flags_memcpy_lli_chained,
		.param.ctrl_lli = flags_memcpy_lli,
		.param.ctrl_lli_last = flags_memcpy_lli_last,
	},
	{
		.number = U300_DMA_GENERAL_PURPOSE_4,
		.name = "GENERAL 04",
		.priority_high = 0,

		.param.config = flags_memcpy_config,
		.param.ctrl_lli_chained = flags_memcpy_lli_chained,
		.param.ctrl_lli = flags_memcpy_lli,
		.param.ctrl_lli_last = flags_memcpy_lli_last,
	},
	{
		.number = U300_DMA_GENERAL_PURPOSE_5,
		.name = "GENERAL 05",
		.priority_high = 0,

		.param.config = flags_memcpy_config,
		.param.ctrl_lli_chained = flags_memcpy_lli_chained,
		.param.ctrl_lli = flags_memcpy_lli,
		.param.ctrl_lli_last = flags_memcpy_lli_last,
	},
	{
		.number = U300_DMA_GENERAL_PURPOSE_6,
		.name = "GENERAL 06",
		.priority_high = 0,

		.param.config = flags_memcpy_config,
		.param.ctrl_lli_chained = flags_memcpy_lli_chained,
		.param.ctrl_lli = flags_memcpy_lli,
		.param.ctrl_lli_last = flags_memcpy_lli_last,
	},
	{
		.number = U300_DMA_GENERAL_PURPOSE_7,
		.name = "GENERAL 07",
		.priority_high = 0,

		.param.config = flags_memcpy_config,
		.param.ctrl_lli_chained = flags_memcpy_lli_chained,
		.param.ctrl_lli = flags_memcpy_lli,
		.param.ctrl_lli_last = flags_memcpy_lli_last,
	},
	{
		.number = U300_DMA_GENERAL_PURPOSE_8,
		.name = "GENERAL 08",
		.priority_high = 0,

		.param.config = flags_memcpy_config,
		.param.ctrl_lli_chained = flags_memcpy_lli_chained,
		.param.ctrl_lli = flags_memcpy_lli,
		.param.ctrl_lli_last = flags_memcpy_lli_last,
	},
	{
		.number = U300_DMA_UART1_TX,
		.name = "UART1 TX",
		.priority_high = 0,
	},
	{
		.number = U300_DMA_UART1_RX,
		.name = "UART1 RX",
		.priority_high = 0,
	}
};

#define COHC_2_DEV(cohc) (&cohc->chan.dev->device)

#ifdef VERBOSE_DEBUG
#define COH_DBG(x) ({ if (1) x; 0; })
#else
#define COH_DBG(x) ({ if (0) x; 0; })
#endif

struct coh901318_desc {
	struct dma_async_tx_descriptor desc;
	struct list_head node;
	struct scatterlist *sg;
	unsigned int sg_len;
	struct coh901318_lli *lli;
	enum dma_transfer_direction dir;
	unsigned long flags;
	u32 head_config;
	u32 head_ctrl;
};

struct coh901318_base {
	struct device *dev;
	void __iomem *virtbase;
	unsigned int irq;
	struct coh901318_pool pool;
	struct powersave pm;
	struct dma_device dma_slave;
	struct dma_device dma_memcpy;
	struct coh901318_chan *chans;
};

struct coh901318_chan {
	spinlock_t lock;
	int allocated;
	int id;
	int stopped;

	struct work_struct free_work;
	struct dma_chan chan;

	struct tasklet_struct tasklet;

	struct list_head active;
	struct list_head queue;
	struct list_head free;

	unsigned long nbr_active_done;
	unsigned long busy;

	struct dma_slave_config config;
	u32 addr;
	u32 ctrl;

	struct coh901318_base *base;
};

static void coh901318_list_print(struct coh901318_chan *cohc,
				 struct coh901318_lli *lli)
{
	struct coh901318_lli *l = lli;
	int i = 0;

	while (l) {
		dev_vdbg(COHC_2_DEV(cohc), "i %d, lli %p, ctrl 0x%x, src %pad"
			 ", dst %pad, link %pad virt_link_addr 0x%p\n",
			 i, l, l->control, &l->src_addr, &l->dst_addr,
			 &l->link_addr, l->virt_link_addr);
		i++;
		l = l->virt_link_addr;
	}
}

#ifdef CONFIG_DEBUG_FS

#define COH901318_DEBUGFS_ASSIGN(x, y) (x = y)

static struct coh901318_base *debugfs_dma_base;
static struct dentry *dma_dentry;

static ssize_t coh901318_debugfs_read(struct file *file, char __user *buf,
				      size_t count, loff_t *f_pos)
{
	u64 started_channels = debugfs_dma_base->pm.started_channels;
	int pool_count = debugfs_dma_base->pool.debugfs_pool_counter;
	char *dev_buf;
	char *tmp;
	int ret;
	int i;

	dev_buf = kmalloc(4*1024, GFP_KERNEL);
	if (dev_buf == NULL)
		return -ENOMEM;
	tmp = dev_buf;

	tmp += sprintf(tmp, "DMA -- enabled dma channels\n");

	for (i = 0; i < U300_DMA_CHANNELS; i++) {
		if (started_channels & (1ULL << i))
			tmp += sprintf(tmp, "channel %d\n", i);
	}

	tmp += sprintf(tmp, "Pool alloc nbr %d\n", pool_count);

	ret = simple_read_from_buffer(buf, count, f_pos, dev_buf,
				      tmp - dev_buf);
	kfree(dev_buf);
	return ret;
}

static const struct file_operations coh901318_debugfs_status_operations = {
	.open		= simple_open,
	.read		= coh901318_debugfs_read,
	.llseek		= default_llseek,
};


static int __init init_coh901318_debugfs(void)
{

	dma_dentry = debugfs_create_dir("dma", NULL);

	debugfs_create_file("status", S_IFREG | S_IRUGO, dma_dentry, NULL,
			    &coh901318_debugfs_status_operations);
	return 0;
}

static void __exit exit_coh901318_debugfs(void)
{
	debugfs_remove_recursive(dma_dentry);
}

module_init(init_coh901318_debugfs);
module_exit(exit_coh901318_debugfs);
#else

#define COH901318_DEBUGFS_ASSIGN(x, y)

#endif /* CONFIG_DEBUG_FS */

static inline struct coh901318_chan *to_coh901318_chan(struct dma_chan *chan)
{
	return container_of(chan, struct coh901318_chan, chan);
}

static int coh901318_dma_set_runtimeconfig(struct dma_chan *chan,
					   struct dma_slave_config *config,
					   enum dma_transfer_direction direction);

static inline const struct coh901318_params *
cohc_chan_param(struct coh901318_chan *cohc)
{
	return &chan_config[cohc->id].param;
}

static inline const struct coh_dma_channel *
cohc_chan_conf(struct coh901318_chan *cohc)
{
	return &chan_config[cohc->id];
}

static void enable_powersave(struct coh901318_chan *cohc)
{
	unsigned long flags;
	struct powersave *pm = &cohc->base->pm;

	spin_lock_irqsave(&pm->lock, flags);

	pm->started_channels &= ~(1ULL << cohc->id);

	spin_unlock_irqrestore(&pm->lock, flags);
}
static void disable_powersave(struct coh901318_chan *cohc)
{
	unsigned long flags;
	struct powersave *pm = &cohc->base->pm;

	spin_lock_irqsave(&pm->lock, flags);

	pm->started_channels |= (1ULL << cohc->id);

	spin_unlock_irqrestore(&pm->lock, flags);
}

static inline int coh901318_set_ctrl(struct coh901318_chan *cohc, u32 control)
{
	int channel = cohc->id;
	void __iomem *virtbase = cohc->base->virtbase;

	writel(control,
	       virtbase + COH901318_CX_CTRL +
	       COH901318_CX_CTRL_SPACING * channel);
	return 0;
}

static inline int coh901318_set_conf(struct coh901318_chan *cohc, u32 conf)
{
	int channel = cohc->id;
	void __iomem *virtbase = cohc->base->virtbase;

	writel(conf,
	       virtbase + COH901318_CX_CFG +
	       COH901318_CX_CFG_SPACING*channel);
	return 0;
}


static int coh901318_start(struct coh901318_chan *cohc)
{
	u32 val;
	int channel = cohc->id;
	void __iomem *virtbase = cohc->base->virtbase;

	disable_powersave(cohc);

	val = readl(virtbase + COH901318_CX_CFG +
		    COH901318_CX_CFG_SPACING * channel);

	/* Enable channel */
	val |= COH901318_CX_CFG_CH_ENABLE;
	writel(val, virtbase + COH901318_CX_CFG +
	       COH901318_CX_CFG_SPACING * channel);

	return 0;
}

static int coh901318_prep_linked_list(struct coh901318_chan *cohc,
				      struct coh901318_lli *lli)
{
	int channel = cohc->id;
	void __iomem *virtbase = cohc->base->virtbase;

	BUG_ON(readl(virtbase + COH901318_CX_STAT +
		     COH901318_CX_STAT_SPACING*channel) &
	       COH901318_CX_STAT_ACTIVE);

	writel(lli->src_addr,
	       virtbase + COH901318_CX_SRC_ADDR +
	       COH901318_CX_SRC_ADDR_SPACING * channel);

	writel(lli->dst_addr, virtbase +
	       COH901318_CX_DST_ADDR +
	       COH901318_CX_DST_ADDR_SPACING * channel);

	writel(lli->link_addr, virtbase + COH901318_CX_LNK_ADDR +
	       COH901318_CX_LNK_ADDR_SPACING * channel);

	writel(lli->control, virtbase + COH901318_CX_CTRL +
	       COH901318_CX_CTRL_SPACING * channel);

	return 0;
}

static struct coh901318_desc *
coh901318_desc_get(struct coh901318_chan *cohc)
{
	struct coh901318_desc *desc;

	if (list_empty(&cohc->free)) {
		/* alloc new desc because we're out of used ones
		 * TODO: alloc a pile of descs instead of just one,
		 * avoid many small allocations.
		 */
		desc = kzalloc(sizeof(struct coh901318_desc), GFP_NOWAIT);
		if (desc == NULL)
			goto out;
		INIT_LIST_HEAD(&desc->node);
		dma_async_tx_descriptor_init(&desc->desc, &cohc->chan);
	} else {
		/* Reuse an old desc.
		 */
		desc = list_first_entry(&cohc->free,
					struct coh901318_desc,
					node);
		list_del(&desc->node);
		/* Initialize it a bit so it's not insane */
		desc->sg = NULL;
		desc->sg_len = 0;
		desc->desc.callback = NULL;
		desc->desc.callback_param = NULL;
	}

 out:
	return desc;
}

static void
coh901318_desc_free(struct coh901318_chan *cohc, struct coh901318_desc *cohd)
{
	list_add_tail(&cohd->node, &cohc->free);
}

/* call with irq lock held */
static void
coh901318_desc_submit(struct coh901318_chan *cohc, struct coh901318_desc *desc)
{
	list_add_tail(&desc->node, &cohc->active);
}

static struct coh901318_desc *
coh901318_first_active_get(struct coh901318_chan *cohc)
{
	return list_first_entry_or_null(&cohc->active, struct coh901318_desc,
					node);
}

static void
coh901318_desc_remove(struct coh901318_desc *cohd)
{
	list_del(&cohd->node);
}

static void
coh901318_desc_queue(struct coh901318_chan *cohc, struct coh901318_desc *desc)
{
	list_add_tail(&desc->node, &cohc->queue);
}

static struct coh901318_desc *
coh901318_first_queued(struct coh901318_chan *cohc)
{
	return list_first_entry_or_null(&cohc->queue, struct coh901318_desc,
					node);
}

static inline u32 coh901318_get_bytes_in_lli(struct coh901318_lli *in_lli)
{
	struct coh901318_lli *lli = in_lli;
	u32 bytes = 0;

	while (lli) {
		bytes += lli->control & COH901318_CX_CTRL_TC_VALUE_MASK;
		lli = lli->virt_link_addr;
	}
	return bytes;
}

/*
 * Get the number of bytes left to transfer on this channel,
 * it is unwise to call this before stopping the channel for
 * absolute measures, but for a rough guess you can still call
 * it.
 */
static u32 coh901318_get_bytes_left(struct dma_chan *chan)
{
	struct coh901318_chan *cohc = to_coh901318_chan(chan);
	struct coh901318_desc *cohd;
	struct list_head *pos;
	unsigned long flags;
	u32 left = 0;
	int i = 0;

	spin_lock_irqsave(&cohc->lock, flags);

	/*
	 * If there are many queued jobs, we iterate and add the
	 * size of them all. We take a special look on the first
	 * job though, since it is probably active.
	 */
	list_for_each(pos, &cohc->active) {
		/*
		 * The first job in the list will be working on the
		 * hardware. The job can be stopped but still active,
		 * so that the transfer counter is somewhere inside
		 * the buffer.
		 */
		cohd = list_entry(pos, struct coh901318_desc, node);

		if (i == 0) {
			struct coh901318_lli *lli;
			dma_addr_t ladd;

			/* Read current transfer count value */
			left = readl(cohc->base->virtbase +
				     COH901318_CX_CTRL +
				     COH901318_CX_CTRL_SPACING * cohc->id) &
				COH901318_CX_CTRL_TC_VALUE_MASK;

			/* See if the transfer is linked... */
			ladd = readl(cohc->base->virtbase +
				     COH901318_CX_LNK_ADDR +
				     COH901318_CX_LNK_ADDR_SPACING *
				     cohc->id) &
				~COH901318_CX_LNK_LINK_IMMEDIATE;
			/* Single transaction */
			if (!ladd)
				continue;

			/*
			 * Linked transaction, follow the lli, find the
			 * currently processing lli, and proceed to the next
			 */
			lli = cohd->lli;
			while (lli && lli->link_addr != ladd)
				lli = lli->virt_link_addr;

			if (lli)
				lli = lli->virt_link_addr;

			/*
			 * Follow remaining lli links around to count the total
			 * number of bytes left
			 */
			left += coh901318_get_bytes_in_lli(lli);
		} else {
			left += coh901318_get_bytes_in_lli(cohd->lli);
		}
		i++;
	}

	/* Also count bytes in the queued jobs */
	list_for_each(pos, &cohc->queue) {
		cohd = list_entry(pos, struct coh901318_desc, node);
		left += coh901318_get_bytes_in_lli(cohd->lli);
	}

	spin_unlock_irqrestore(&cohc->lock, flags);

	return left;
}

/*
 * Pauses a transfer without losing data. Enables power save.
 * Use this function in conjunction with coh901318_resume.
 */
static int coh901318_pause(struct dma_chan *chan)
{
	u32 val;
	unsigned long flags;
	struct coh901318_chan *cohc = to_coh901318_chan(chan);
	int channel = cohc->id;
	void __iomem *virtbase = cohc->base->virtbase;

	spin_lock_irqsave(&cohc->lock, flags);

	/* Disable channel in HW */
	val = readl(virtbase + COH901318_CX_CFG +
		    COH901318_CX_CFG_SPACING * channel);

	/* Stopping infinite transfer */
	if ((val & COH901318_CX_CTRL_TC_ENABLE) == 0 &&
	    (val & COH901318_CX_CFG_CH_ENABLE))
		cohc->stopped = 1;


	val &= ~COH901318_CX_CFG_CH_ENABLE;
	/* Enable twice, HW bug work around */
	writel(val, virtbase + COH901318_CX_CFG +
	       COH901318_CX_CFG_SPACING * channel);
	writel(val, virtbase + COH901318_CX_CFG +
	       COH901318_CX_CFG_SPACING * channel);

	/* Spin-wait for it to actually go inactive */
	while (readl(virtbase + COH901318_CX_STAT+COH901318_CX_STAT_SPACING *
		     channel) & COH901318_CX_STAT_ACTIVE)
		cpu_relax();

	/* Check if we stopped an active job */
	if ((readl(virtbase + COH901318_CX_CTRL+COH901318_CX_CTRL_SPACING *
		   channel) & COH901318_CX_CTRL_TC_VALUE_MASK) > 0)
		cohc->stopped = 1;

	enable_powersave(cohc);

	spin_unlock_irqrestore(&cohc->lock, flags);
	return 0;
}

/* Resumes a transfer that has been stopped via 300_dma_stop(..).
   Power save is handled.
*/
static int coh901318_resume(struct dma_chan *chan)
{
	u32 val;
	unsigned long flags;
	struct coh901318_chan *cohc = to_coh901318_chan(chan);
	int channel = cohc->id;

	spin_lock_irqsave(&cohc->lock, flags);

	disable_powersave(cohc);

	if (cohc->stopped) {
		/* Enable channel in HW */
		val = readl(cohc->base->virtbase + COH901318_CX_CFG +
			    COH901318_CX_CFG_SPACING * channel);

		val |= COH901318_CX_CFG_CH_ENABLE;

		writel(val, cohc->base->virtbase + COH901318_CX_CFG +
		       COH901318_CX_CFG_SPACING*channel);

		cohc->stopped = 0;
	}

	spin_unlock_irqrestore(&cohc->lock, flags);
	return 0;
}

bool coh901318_filter_id(struct dma_chan *chan, void *chan_id)
{
	unsigned long ch_nr = (unsigned long) chan_id;

	if (ch_nr == to_coh901318_chan(chan)->id)
		return true;

	return false;
}
EXPORT_SYMBOL(coh901318_filter_id);

struct coh901318_filter_args {
	struct coh901318_base *base;
	unsigned int ch_nr;
};

static bool coh901318_filter_base_and_id(struct dma_chan *chan, void *data)
{
	struct coh901318_filter_args *args = data;

	if (&args->base->dma_slave == chan->device &&
	    args->ch_nr == to_coh901318_chan(chan)->id)
		return true;

	return false;
}

static struct dma_chan *coh901318_xlate(struct of_phandle_args *dma_spec,
					struct of_dma *ofdma)
{
	struct coh901318_filter_args args = {
		.base = ofdma->of_dma_data,
		.ch_nr = dma_spec->args[0],
	};
	dma_cap_mask_t cap;
	dma_cap_zero(cap);
	dma_cap_set(DMA_SLAVE, cap);

	return dma_request_channel(cap, coh901318_filter_base_and_id, &args);
}
/*
 * DMA channel allocation
 */
static int coh901318_config(struct coh901318_chan *cohc,
			    struct coh901318_params *param)
{
	const struct coh901318_params *p;
	int channel = cohc->id;
	void __iomem *virtbase = cohc->base->virtbase;

	if (param)
		p = param;
	else
		p = cohc_chan_param(cohc);

	/* Clear any pending BE or TC interrupt */
	if (channel < 32) {
		writel(1 << channel, virtbase + COH901318_BE_INT_CLEAR1);
		writel(1 << channel, virtbase + COH901318_TC_INT_CLEAR1);
	} else {
		writel(1 << (channel - 32), virtbase +
		       COH901318_BE_INT_CLEAR2);
		writel(1 << (channel - 32), virtbase +
		       COH901318_TC_INT_CLEAR2);
	}

	coh901318_set_conf(cohc, p->config);
	coh901318_set_ctrl(cohc, p->ctrl_lli_last);

	return 0;
}

/* must lock when calling this function
 * start queued jobs, if any
 * TODO: start all queued jobs in one go
 *
 * Returns descriptor if queued job is started otherwise NULL.
 * If the queue is empty NULL is returned.
 */
static struct coh901318_desc *coh901318_queue_start(struct coh901318_chan *cohc)
{
	struct coh901318_desc *cohd;

	/*
	 * start queued jobs, if any
	 * TODO: transmit all queued jobs in one go
	 */
	cohd = coh901318_first_queued(cohc);

	if (cohd != NULL) {
		/* Remove from queue */
		coh901318_desc_remove(cohd);
		/* initiate DMA job */
		cohc->busy = 1;

		coh901318_desc_submit(cohc, cohd);

		/* Program the transaction head */
		coh901318_set_conf(cohc, cohd->head_config);
		coh901318_set_ctrl(cohc, cohd->head_ctrl);
		coh901318_prep_linked_list(cohc, cohd->lli);

		/* start dma job on this channel */
		coh901318_start(cohc);

	}

	return cohd;
}

/*
 * This tasklet is called from the interrupt handler to
 * handle each descriptor (DMA job) that is sent to a channel.
 */
static void dma_tasklet(struct tasklet_struct *t)
{
	struct coh901318_chan *cohc = from_tasklet(cohc, t, tasklet);
	struct coh901318_desc *cohd_fin;
	unsigned long flags;
	struct dmaengine_desc_callback cb;

	dev_vdbg(COHC_2_DEV(cohc), "[%s] chan_id %d"
		 " nbr_active_done %ld\n", __func__,
		 cohc->id, cohc->nbr_active_done);

	spin_lock_irqsave(&cohc->lock, flags);

	/* get first active descriptor entry from list */
	cohd_fin = coh901318_first_active_get(cohc);

	if (cohd_fin == NULL)
		goto err;

	/* locate callback to client */
	dmaengine_desc_get_callback(&cohd_fin->desc, &cb);

	/* sign this job as completed on the channel */
	dma_cookie_complete(&cohd_fin->desc);

	/* release the lli allocation and remove the descriptor */
	coh901318_lli_free(&cohc->base->pool, &cohd_fin->lli);

	/* return desc to free-list */
	coh901318_desc_remove(cohd_fin);
	coh901318_desc_free(cohc, cohd_fin);

	spin_unlock_irqrestore(&cohc->lock, flags);

	/* Call the callback when we're done */
	dmaengine_desc_callback_invoke(&cb, NULL);

	spin_lock_irqsave(&cohc->lock, flags);

	/*
	 * If another interrupt fired while the tasklet was scheduling,
	 * we don't get called twice, so we have this number of active
	 * counter that keep track of the number of IRQs expected to
	 * be handled for this channel. If there happen to be more than
	 * one IRQ to be ack:ed, we simply schedule this tasklet again.
	 */
	cohc->nbr_active_done--;
	if (cohc->nbr_active_done) {
		dev_dbg(COHC_2_DEV(cohc), "scheduling tasklet again, new IRQs "
			"came in while we were scheduling this tasklet\n");
		if (cohc_chan_conf(cohc)->priority_high)
			tasklet_hi_schedule(&cohc->tasklet);
		else
			tasklet_schedule(&cohc->tasklet);
	}

	spin_unlock_irqrestore(&cohc->lock, flags);

	return;

 err:
	spin_unlock_irqrestore(&cohc->lock, flags);
	dev_err(COHC_2_DEV(cohc), "[%s] No active dma desc\n", __func__);
}


/* called from interrupt context */
static void dma_tc_handle(struct coh901318_chan *cohc)
{
	/*
	 * If the channel is not allocated, then we shouldn't have
	 * any TC interrupts on it.
	 */
	if (!cohc->allocated) {
		dev_err(COHC_2_DEV(cohc), "spurious interrupt from "
			"unallocated channel\n");
		return;
	}

	/*
	 * When we reach this point, at least one queue item
	 * should have been moved over from cohc->queue to
	 * cohc->active and run to completion, that is why we're
	 * getting a terminal count interrupt is it not?
	 * If you get this BUG() the most probable cause is that
	 * the individual nodes in the lli chain have IRQ enabled,
	 * so check your platform config for lli chain ctrl.
	 */
	BUG_ON(list_empty(&cohc->active));

	cohc->nbr_active_done++;

	/*
	 * This attempt to take a job from cohc->queue, put it
	 * into cohc->active and start it.
	 */
	if (coh901318_queue_start(cohc) == NULL)
		cohc->busy = 0;

	/*
	 * This tasklet will remove items from cohc->active
	 * and thus terminates them.
	 */
	if (cohc_chan_conf(cohc)->priority_high)
		tasklet_hi_schedule(&cohc->tasklet);
	else
		tasklet_schedule(&cohc->tasklet);
}


static irqreturn_t dma_irq_handler(int irq, void *dev_id)
{
	u32 status1;
	u32 status2;
	int i;
	int ch;
	struct coh901318_base *base = dev_id;
	struct coh901318_chan *cohc;
	void __iomem *virtbase = base->virtbase;

	status1 = readl(virtbase + COH901318_INT_STATUS1);
	status2 = readl(virtbase + COH901318_INT_STATUS2);

	if (unlikely(status1 == 0 && status2 == 0)) {
		dev_warn(base->dev, "spurious DMA IRQ from no channel!\n");
		return IRQ_HANDLED;
	}

	/* TODO: consider handle IRQ in tasklet here to
	 * minimize interrupt latency */

	/* Check the first 32 DMA channels for IRQ */
	while (status1) {
		/* Find first bit set, return as a number.
		 */
		i = ffs(status1) - 1;
		ch = i;

		cohc = &base->chans[ch];
		spin_lock(&cohc->lock);

		/* Mask off this bit */
		status1 &= ~(1 << i);
		/* Check the individual channel bits */
		if (test_bit(i, virtbase + COH901318_BE_INT_STATUS1)) {
			dev_crit(COHC_2_DEV(cohc),
				 "DMA bus error on channel %d!\n", ch);
			BUG_ON(1);
			/* Clear BE interrupt */
			__set_bit(i, virtbase + COH901318_BE_INT_CLEAR1);
		} else {
			/* Caused by TC, really? */
			if (unlikely(!test_bit(i, virtbase +
					       COH901318_TC_INT_STATUS1))) {
				dev_warn(COHC_2_DEV(cohc),
					 "ignoring interrupt not caused by terminal count on channel %d\n", ch);
				/* Clear TC interrupt */
				BUG_ON(1);
				__set_bit(i, virtbase + COH901318_TC_INT_CLEAR1);
			} else {
				/* Enable powersave if transfer has finished */
				if (!(readl(virtbase + COH901318_CX_STAT +
					    COH901318_CX_STAT_SPACING*ch) &
				      COH901318_CX_STAT_ENABLED)) {
					enable_powersave(cohc);
				}

				/* Must clear TC interrupt before calling
				 * dma_tc_handle
				 * in case tc_handle initiate a new dma job
				 */
				__set_bit(i, virtbase + COH901318_TC_INT_CLEAR1);

				dma_tc_handle(cohc);
			}
		}
		spin_unlock(&cohc->lock);
	}

	/* Check the remaining 32 DMA channels for IRQ */
	while (status2) {
		/* Find first bit set, return as a number. */
		i = ffs(status2) - 1;
		ch = i + 32;
		cohc = &base->chans[ch];
		spin_lock(&cohc->lock);

		/* Mask off this bit */
		status2 &= ~(1 << i);
		/* Check the individual channel bits */
		if (test_bit(i, virtbase + COH901318_BE_INT_STATUS2)) {
			dev_crit(COHC_2_DEV(cohc),
				 "DMA bus error on channel %d!\n", ch);
			/* Clear BE interrupt */
			BUG_ON(1);
			__set_bit(i, virtbase + COH901318_BE_INT_CLEAR2);
		} else {
			/* Caused by TC, really? */
			if (unlikely(!test_bit(i, virtbase +
					       COH901318_TC_INT_STATUS2))) {
				dev_warn(COHC_2_DEV(cohc),
					 "ignoring interrupt not caused by terminal count on channel %d\n", ch);
				/* Clear TC interrupt */
				__set_bit(i, virtbase + COH901318_TC_INT_CLEAR2);
				BUG_ON(1);
			} else {
				/* Enable powersave if transfer has finished */
				if (!(readl(virtbase + COH901318_CX_STAT +
					    COH901318_CX_STAT_SPACING*ch) &
				      COH901318_CX_STAT_ENABLED)) {
					enable_powersave(cohc);
				}
				/* Must clear TC interrupt before calling
				 * dma_tc_handle
				 * in case tc_handle initiate a new dma job
				 */
				__set_bit(i, virtbase + COH901318_TC_INT_CLEAR2);

				dma_tc_handle(cohc);
			}
		}
		spin_unlock(&cohc->lock);
	}

	return IRQ_HANDLED;
}

static int coh901318_terminate_all(struct dma_chan *chan)
{
	unsigned long flags;
	struct coh901318_chan *cohc = to_coh901318_chan(chan);
	struct coh901318_desc *cohd;
	void __iomem *virtbase = cohc->base->virtbase;

	/* The remainder of this function terminates the transfer */
	coh901318_pause(chan);
	spin_lock_irqsave(&cohc->lock, flags);

	/* Clear any pending BE or TC interrupt */
	if (cohc->id < 32) {
		writel(1 << cohc->id, virtbase + COH901318_BE_INT_CLEAR1);
		writel(1 << cohc->id, virtbase + COH901318_TC_INT_CLEAR1);
	} else {
		writel(1 << (cohc->id - 32), virtbase +
		       COH901318_BE_INT_CLEAR2);
		writel(1 << (cohc->id - 32), virtbase +
		       COH901318_TC_INT_CLEAR2);
	}

	enable_powersave(cohc);

	while ((cohd = coh901318_first_active_get(cohc))) {
		/* release the lli allocation*/
		coh901318_lli_free(&cohc->base->pool, &cohd->lli);

		/* return desc to free-list */
		coh901318_desc_remove(cohd);
		coh901318_desc_free(cohc, cohd);
	}

	while ((cohd = coh901318_first_queued(cohc))) {
		/* release the lli allocation*/
		coh901318_lli_free(&cohc->base->pool, &cohd->lli);

		/* return desc to free-list */
		coh901318_desc_remove(cohd);
		coh901318_desc_free(cohc, cohd);
	}


	cohc->nbr_active_done = 0;
	cohc->busy = 0;

	spin_unlock_irqrestore(&cohc->lock, flags);

	return 0;
}

static int coh901318_alloc_chan_resources(struct dma_chan *chan)
{
	struct coh901318_chan *cohc = to_coh901318_chan(chan);
	unsigned long flags;

	dev_vdbg(COHC_2_DEV(cohc), "[%s] DMA channel %d\n",
		 __func__, cohc->id);

	if (chan->client_count > 1)
		return -EBUSY;

	spin_lock_irqsave(&cohc->lock, flags);

	coh901318_config(cohc, NULL);

	cohc->allocated = 1;
	dma_cookie_init(chan);

	spin_unlock_irqrestore(&cohc->lock, flags);

	return 1;
}

static void
coh901318_free_chan_resources(struct dma_chan *chan)
{
	struct coh901318_chan *cohc = to_coh901318_chan(chan);
	int channel = cohc->id;
	unsigned long flags;

	spin_lock_irqsave(&cohc->lock, flags);

	/* Disable HW */
	writel(0x00000000U, cohc->base->virtbase + COH901318_CX_CFG +
	       COH901318_CX_CFG_SPACING*channel);
writel(0x00000000U, cohc->base->virtbase + COH901318_CX_CTRL + 2184 - COH901318_CX_CTRL_SPACING*channel); 2185 - 2186 - cohc->allocated = 0; 2187 - 2188 - spin_unlock_irqrestore(&cohc->lock, flags); 2189 - 2190 - coh901318_terminate_all(chan); 2191 - } 2192 - 2193 - 2194 - static dma_cookie_t 2195 - coh901318_tx_submit(struct dma_async_tx_descriptor *tx) 2196 - { 2197 - struct coh901318_desc *cohd = container_of(tx, struct coh901318_desc, 2198 - desc); 2199 - struct coh901318_chan *cohc = to_coh901318_chan(tx->chan); 2200 - unsigned long flags; 2201 - dma_cookie_t cookie; 2202 - 2203 - spin_lock_irqsave(&cohc->lock, flags); 2204 - cookie = dma_cookie_assign(tx); 2205 - 2206 - coh901318_desc_queue(cohc, cohd); 2207 - 2208 - spin_unlock_irqrestore(&cohc->lock, flags); 2209 - 2210 - return cookie; 2211 - } 2212 - 2213 - static struct dma_async_tx_descriptor * 2214 - coh901318_prep_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src, 2215 - size_t size, unsigned long flags) 2216 - { 2217 - struct coh901318_lli *lli; 2218 - struct coh901318_desc *cohd; 2219 - unsigned long flg; 2220 - struct coh901318_chan *cohc = to_coh901318_chan(chan); 2221 - int lli_len; 2222 - u32 ctrl_last = cohc_chan_param(cohc)->ctrl_lli_last; 2223 - int ret; 2224 - 2225 - spin_lock_irqsave(&cohc->lock, flg); 2226 - 2227 - dev_vdbg(COHC_2_DEV(cohc), 2228 - "[%s] channel %d src %pad dest %pad size %zu\n", 2229 - __func__, cohc->id, &src, &dest, size); 2230 - 2231 - if (flags & DMA_PREP_INTERRUPT) 2232 - /* Trigger interrupt after last lli */ 2233 - ctrl_last |= COH901318_CX_CTRL_TC_IRQ_ENABLE; 2234 - 2235 - lli_len = size >> MAX_DMA_PACKET_SIZE_SHIFT; 2236 - if ((lli_len << MAX_DMA_PACKET_SIZE_SHIFT) < size) 2237 - lli_len++; 2238 - 2239 - lli = coh901318_lli_alloc(&cohc->base->pool, lli_len); 2240 - 2241 - if (lli == NULL) 2242 - goto err; 2243 - 2244 - ret = coh901318_lli_fill_memcpy( 2245 - &cohc->base->pool, lli, src, size, dest, 2246 - cohc_chan_param(cohc)->ctrl_lli_chained, 2247 
- ctrl_last); 2248 - if (ret) 2249 - goto err; 2250 - 2251 - COH_DBG(coh901318_list_print(cohc, lli)); 2252 - 2253 - /* Pick a descriptor to handle this transfer */ 2254 - cohd = coh901318_desc_get(cohc); 2255 - cohd->lli = lli; 2256 - cohd->flags = flags; 2257 - cohd->desc.tx_submit = coh901318_tx_submit; 2258 - 2259 - spin_unlock_irqrestore(&cohc->lock, flg); 2260 - 2261 - return &cohd->desc; 2262 - err: 2263 - spin_unlock_irqrestore(&cohc->lock, flg); 2264 - return NULL; 2265 - } 2266 - 2267 - static struct dma_async_tx_descriptor * 2268 - coh901318_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl, 2269 - unsigned int sg_len, enum dma_transfer_direction direction, 2270 - unsigned long flags, void *context) 2271 - { 2272 - struct coh901318_chan *cohc = to_coh901318_chan(chan); 2273 - struct coh901318_lli *lli; 2274 - struct coh901318_desc *cohd; 2275 - const struct coh901318_params *params; 2276 - struct scatterlist *sg; 2277 - int len = 0; 2278 - int size; 2279 - int i; 2280 - u32 ctrl_chained = cohc_chan_param(cohc)->ctrl_lli_chained; 2281 - u32 ctrl = cohc_chan_param(cohc)->ctrl_lli; 2282 - u32 ctrl_last = cohc_chan_param(cohc)->ctrl_lli_last; 2283 - u32 config; 2284 - unsigned long flg; 2285 - int ret; 2286 - 2287 - if (!sgl) 2288 - goto out; 2289 - if (sg_dma_len(sgl) == 0) 2290 - goto out; 2291 - 2292 - spin_lock_irqsave(&cohc->lock, flg); 2293 - 2294 - dev_vdbg(COHC_2_DEV(cohc), "[%s] sg_len %d dir %d\n", 2295 - __func__, sg_len, direction); 2296 - 2297 - if (flags & DMA_PREP_INTERRUPT) 2298 - /* Trigger interrupt after last lli */ 2299 - ctrl_last |= COH901318_CX_CTRL_TC_IRQ_ENABLE; 2300 - 2301 - params = cohc_chan_param(cohc); 2302 - config = params->config; 2303 - /* 2304 - * Add runtime-specific control on top, make 2305 - * sure the bits you set per peripheral channel are 2306 - * cleared in the default config from the platform. 
2307 - */ 2308 - ctrl_chained |= cohc->ctrl; 2309 - ctrl_last |= cohc->ctrl; 2310 - ctrl |= cohc->ctrl; 2311 - 2312 - if (direction == DMA_MEM_TO_DEV) { 2313 - u32 tx_flags = COH901318_CX_CTRL_PRDD_SOURCE | 2314 - COH901318_CX_CTRL_SRC_ADDR_INC_ENABLE; 2315 - 2316 - config |= COH901318_CX_CFG_RM_MEMORY_TO_PRIMARY; 2317 - ctrl_chained |= tx_flags; 2318 - ctrl_last |= tx_flags; 2319 - ctrl |= tx_flags; 2320 - } else if (direction == DMA_DEV_TO_MEM) { 2321 - u32 rx_flags = COH901318_CX_CTRL_PRDD_DEST | 2322 - COH901318_CX_CTRL_DST_ADDR_INC_ENABLE; 2323 - 2324 - config |= COH901318_CX_CFG_RM_PRIMARY_TO_MEMORY; 2325 - ctrl_chained |= rx_flags; 2326 - ctrl_last |= rx_flags; 2327 - ctrl |= rx_flags; 2328 - } else 2329 - goto err_direction; 2330 - 2331 - /* The dma only supports transmitting packages up to 2332 - * MAX_DMA_PACKET_SIZE. Calculate to total number of 2333 - * dma elemts required to send the entire sg list 2334 - */ 2335 - for_each_sg(sgl, sg, sg_len, i) { 2336 - unsigned int factor; 2337 - size = sg_dma_len(sg); 2338 - 2339 - if (size <= MAX_DMA_PACKET_SIZE) { 2340 - len++; 2341 - continue; 2342 - } 2343 - 2344 - factor = size >> MAX_DMA_PACKET_SIZE_SHIFT; 2345 - if ((factor << MAX_DMA_PACKET_SIZE_SHIFT) < size) 2346 - factor++; 2347 - 2348 - len += factor; 2349 - } 2350 - 2351 - pr_debug("Allocate %d lli:s for this transfer\n", len); 2352 - lli = coh901318_lli_alloc(&cohc->base->pool, len); 2353 - 2354 - if (lli == NULL) 2355 - goto err_dma_alloc; 2356 - 2357 - coh901318_dma_set_runtimeconfig(chan, &cohc->config, direction); 2358 - 2359 - /* initiate allocated lli list */ 2360 - ret = coh901318_lli_fill_sg(&cohc->base->pool, lli, sgl, sg_len, 2361 - cohc->addr, 2362 - ctrl_chained, 2363 - ctrl, 2364 - ctrl_last, 2365 - direction, COH901318_CX_CTRL_TC_IRQ_ENABLE); 2366 - if (ret) 2367 - goto err_lli_fill; 2368 - 2369 - 2370 - COH_DBG(coh901318_list_print(cohc, lli)); 2371 - 2372 - /* Pick a descriptor to handle this transfer */ 2373 - cohd = 
coh901318_desc_get(cohc); 2374 - cohd->head_config = config; 2375 - /* 2376 - * Set the default head ctrl for the channel to the one from the 2377 - * lli, things may have changed due to odd buffer alignment 2378 - * etc. 2379 - */ 2380 - cohd->head_ctrl = lli->control; 2381 - cohd->dir = direction; 2382 - cohd->flags = flags; 2383 - cohd->desc.tx_submit = coh901318_tx_submit; 2384 - cohd->lli = lli; 2385 - 2386 - spin_unlock_irqrestore(&cohc->lock, flg); 2387 - 2388 - return &cohd->desc; 2389 - err_lli_fill: 2390 - err_dma_alloc: 2391 - err_direction: 2392 - spin_unlock_irqrestore(&cohc->lock, flg); 2393 - out: 2394 - return NULL; 2395 - } 2396 - 2397 - static enum dma_status 2398 - coh901318_tx_status(struct dma_chan *chan, dma_cookie_t cookie, 2399 - struct dma_tx_state *txstate) 2400 - { 2401 - struct coh901318_chan *cohc = to_coh901318_chan(chan); 2402 - enum dma_status ret; 2403 - 2404 - ret = dma_cookie_status(chan, cookie, txstate); 2405 - if (ret == DMA_COMPLETE || !txstate) 2406 - return ret; 2407 - 2408 - dma_set_residue(txstate, coh901318_get_bytes_left(chan)); 2409 - 2410 - if (ret == DMA_IN_PROGRESS && cohc->stopped) 2411 - ret = DMA_PAUSED; 2412 - 2413 - return ret; 2414 - } 2415 - 2416 - static void 2417 - coh901318_issue_pending(struct dma_chan *chan) 2418 - { 2419 - struct coh901318_chan *cohc = to_coh901318_chan(chan); 2420 - unsigned long flags; 2421 - 2422 - spin_lock_irqsave(&cohc->lock, flags); 2423 - 2424 - /* 2425 - * Busy means that pending jobs are already being processed, 2426 - * and then there is no point in starting the queue: the 2427 - * terminal count interrupt on the channel will take the next 2428 - * job on the queue and execute it anyway. 
2429 - */ 2430 - if (!cohc->busy) 2431 - coh901318_queue_start(cohc); 2432 - 2433 - spin_unlock_irqrestore(&cohc->lock, flags); 2434 - } 2435 - 2436 - /* 2437 - * Here we wrap in the runtime dma control interface 2438 - */ 2439 - struct burst_table { 2440 - int burst_8bit; 2441 - int burst_16bit; 2442 - int burst_32bit; 2443 - u32 reg; 2444 - }; 2445 - 2446 - static const struct burst_table burst_sizes[] = { 2447 - { 2448 - .burst_8bit = 64, 2449 - .burst_16bit = 32, 2450 - .burst_32bit = 16, 2451 - .reg = COH901318_CX_CTRL_BURST_COUNT_64_BYTES, 2452 - }, 2453 - { 2454 - .burst_8bit = 48, 2455 - .burst_16bit = 24, 2456 - .burst_32bit = 12, 2457 - .reg = COH901318_CX_CTRL_BURST_COUNT_48_BYTES, 2458 - }, 2459 - { 2460 - .burst_8bit = 32, 2461 - .burst_16bit = 16, 2462 - .burst_32bit = 8, 2463 - .reg = COH901318_CX_CTRL_BURST_COUNT_32_BYTES, 2464 - }, 2465 - { 2466 - .burst_8bit = 16, 2467 - .burst_16bit = 8, 2468 - .burst_32bit = 4, 2469 - .reg = COH901318_CX_CTRL_BURST_COUNT_16_BYTES, 2470 - }, 2471 - { 2472 - .burst_8bit = 8, 2473 - .burst_16bit = 4, 2474 - .burst_32bit = 2, 2475 - .reg = COH901318_CX_CTRL_BURST_COUNT_8_BYTES, 2476 - }, 2477 - { 2478 - .burst_8bit = 4, 2479 - .burst_16bit = 2, 2480 - .burst_32bit = 1, 2481 - .reg = COH901318_CX_CTRL_BURST_COUNT_4_BYTES, 2482 - }, 2483 - { 2484 - .burst_8bit = 2, 2485 - .burst_16bit = 1, 2486 - .burst_32bit = 0, 2487 - .reg = COH901318_CX_CTRL_BURST_COUNT_2_BYTES, 2488 - }, 2489 - { 2490 - .burst_8bit = 1, 2491 - .burst_16bit = 0, 2492 - .burst_32bit = 0, 2493 - .reg = COH901318_CX_CTRL_BURST_COUNT_1_BYTE, 2494 - }, 2495 - }; 2496 - 2497 - static int coh901318_dma_set_runtimeconfig(struct dma_chan *chan, 2498 - struct dma_slave_config *config, 2499 - enum dma_transfer_direction direction) 2500 - { 2501 - struct coh901318_chan *cohc = to_coh901318_chan(chan); 2502 - dma_addr_t addr; 2503 - enum dma_slave_buswidth addr_width; 2504 - u32 maxburst; 2505 - u32 ctrl = 0; 2506 - int i = 0; 2507 - 2508 - /* We only support 
mem to per or per to mem transfers */ 2509 - if (direction == DMA_DEV_TO_MEM) { 2510 - addr = config->src_addr; 2511 - addr_width = config->src_addr_width; 2512 - maxburst = config->src_maxburst; 2513 - } else if (direction == DMA_MEM_TO_DEV) { 2514 - addr = config->dst_addr; 2515 - addr_width = config->dst_addr_width; 2516 - maxburst = config->dst_maxburst; 2517 - } else { 2518 - dev_err(COHC_2_DEV(cohc), "illegal channel mode\n"); 2519 - return -EINVAL; 2520 - } 2521 - 2522 - dev_dbg(COHC_2_DEV(cohc), "configure channel for %d byte transfers\n", 2523 - addr_width); 2524 - switch (addr_width) { 2525 - case DMA_SLAVE_BUSWIDTH_1_BYTE: 2526 - ctrl |= 2527 - COH901318_CX_CTRL_SRC_BUS_SIZE_8_BITS | 2528 - COH901318_CX_CTRL_DST_BUS_SIZE_8_BITS; 2529 - 2530 - while (i < ARRAY_SIZE(burst_sizes)) { 2531 - if (burst_sizes[i].burst_8bit <= maxburst) 2532 - break; 2533 - i++; 2534 - } 2535 - 2536 - break; 2537 - case DMA_SLAVE_BUSWIDTH_2_BYTES: 2538 - ctrl |= 2539 - COH901318_CX_CTRL_SRC_BUS_SIZE_16_BITS | 2540 - COH901318_CX_CTRL_DST_BUS_SIZE_16_BITS; 2541 - 2542 - while (i < ARRAY_SIZE(burst_sizes)) { 2543 - if (burst_sizes[i].burst_16bit <= maxburst) 2544 - break; 2545 - i++; 2546 - } 2547 - 2548 - break; 2549 - case DMA_SLAVE_BUSWIDTH_4_BYTES: 2550 - /* Direction doesn't matter here, it's 32/32 bits */ 2551 - ctrl |= 2552 - COH901318_CX_CTRL_SRC_BUS_SIZE_32_BITS | 2553 - COH901318_CX_CTRL_DST_BUS_SIZE_32_BITS; 2554 - 2555 - while (i < ARRAY_SIZE(burst_sizes)) { 2556 - if (burst_sizes[i].burst_32bit <= maxburst) 2557 - break; 2558 - i++; 2559 - } 2560 - 2561 - break; 2562 - default: 2563 - dev_err(COHC_2_DEV(cohc), 2564 - "bad runtimeconfig: alien address width\n"); 2565 - return -EINVAL; 2566 - } 2567 - 2568 - ctrl |= burst_sizes[i].reg; 2569 - dev_dbg(COHC_2_DEV(cohc), 2570 - "selected burst size %d bytes for address width %d bytes, maxburst %d\n", 2571 - burst_sizes[i].burst_8bit, addr_width, maxburst); 2572 - 2573 - cohc->addr = addr; 2574 - cohc->ctrl = ctrl; 2575 - 
2576 - return 0; 2577 - } 2578 - 2579 - static int coh901318_dma_slave_config(struct dma_chan *chan, 2580 - struct dma_slave_config *config) 2581 - { 2582 - struct coh901318_chan *cohc = to_coh901318_chan(chan); 2583 - 2584 - memcpy(&cohc->config, config, sizeof(*config)); 2585 - 2586 - return 0; 2587 - } 2588 - 2589 - static void coh901318_base_init(struct dma_device *dma, const int *pick_chans, 2590 - struct coh901318_base *base) 2591 - { 2592 - int chans_i; 2593 - int i = 0; 2594 - struct coh901318_chan *cohc; 2595 - 2596 - INIT_LIST_HEAD(&dma->channels); 2597 - 2598 - for (chans_i = 0; pick_chans[chans_i] != -1; chans_i += 2) { 2599 - for (i = pick_chans[chans_i]; i <= pick_chans[chans_i+1]; i++) { 2600 - cohc = &base->chans[i]; 2601 - 2602 - cohc->base = base; 2603 - cohc->chan.device = dma; 2604 - cohc->id = i; 2605 - 2606 - /* TODO: do we really need this lock if only one 2607 - * client is connected to each channel? 2608 - */ 2609 - 2610 - spin_lock_init(&cohc->lock); 2611 - 2612 - cohc->nbr_active_done = 0; 2613 - cohc->busy = 0; 2614 - INIT_LIST_HEAD(&cohc->free); 2615 - INIT_LIST_HEAD(&cohc->active); 2616 - INIT_LIST_HEAD(&cohc->queue); 2617 - 2618 - tasklet_setup(&cohc->tasklet, dma_tasklet); 2619 - 2620 - list_add_tail(&cohc->chan.device_node, 2621 - &dma->channels); 2622 - } 2623 - } 2624 - } 2625 - 2626 - static int __init coh901318_probe(struct platform_device *pdev) 2627 - { 2628 - int err = 0; 2629 - struct coh901318_base *base; 2630 - int irq; 2631 - struct resource *io; 2632 - 2633 - io = platform_get_resource(pdev, IORESOURCE_MEM, 0); 2634 - if (!io) 2635 - return -ENODEV; 2636 - 2637 - /* Map DMA controller registers to virtual memory */ 2638 - if (devm_request_mem_region(&pdev->dev, 2639 - io->start, 2640 - resource_size(io), 2641 - pdev->dev.driver->name) == NULL) 2642 - return -ENOMEM; 2643 - 2644 - base = devm_kzalloc(&pdev->dev, 2645 - ALIGN(sizeof(struct coh901318_base), 4) + 2646 - U300_DMA_CHANNELS * 2647 - sizeof(struct 
coh901318_chan), 2648 - GFP_KERNEL); 2649 - if (!base) 2650 - return -ENOMEM; 2651 - 2652 - base->chans = ((void *)base) + ALIGN(sizeof(struct coh901318_base), 4); 2653 - 2654 - base->virtbase = devm_ioremap(&pdev->dev, io->start, resource_size(io)); 2655 - if (!base->virtbase) 2656 - return -ENOMEM; 2657 - 2658 - base->dev = &pdev->dev; 2659 - spin_lock_init(&base->pm.lock); 2660 - base->pm.started_channels = 0; 2661 - 2662 - COH901318_DEBUGFS_ASSIGN(debugfs_dma_base, base); 2663 - 2664 - irq = platform_get_irq(pdev, 0); 2665 - if (irq < 0) 2666 - return irq; 2667 - 2668 - err = devm_request_irq(&pdev->dev, irq, dma_irq_handler, 0, 2669 - "coh901318", base); 2670 - if (err) 2671 - return err; 2672 - 2673 - base->irq = irq; 2674 - 2675 - err = coh901318_pool_create(&base->pool, &pdev->dev, 2676 - sizeof(struct coh901318_lli), 2677 - 32); 2678 - if (err) 2679 - return err; 2680 - 2681 - /* init channels for device transfers */ 2682 - coh901318_base_init(&base->dma_slave, dma_slave_channels, 2683 - base); 2684 - 2685 - dma_cap_zero(base->dma_slave.cap_mask); 2686 - dma_cap_set(DMA_SLAVE, base->dma_slave.cap_mask); 2687 - 2688 - base->dma_slave.device_alloc_chan_resources = coh901318_alloc_chan_resources; 2689 - base->dma_slave.device_free_chan_resources = coh901318_free_chan_resources; 2690 - base->dma_slave.device_prep_slave_sg = coh901318_prep_slave_sg; 2691 - base->dma_slave.device_tx_status = coh901318_tx_status; 2692 - base->dma_slave.device_issue_pending = coh901318_issue_pending; 2693 - base->dma_slave.device_config = coh901318_dma_slave_config; 2694 - base->dma_slave.device_pause = coh901318_pause; 2695 - base->dma_slave.device_resume = coh901318_resume; 2696 - base->dma_slave.device_terminate_all = coh901318_terminate_all; 2697 - base->dma_slave.dev = &pdev->dev; 2698 - 2699 - err = dma_async_device_register(&base->dma_slave); 2700 - 2701 - if (err) 2702 - goto err_register_slave; 2703 - 2704 - /* init channels for memcpy */ 2705 - 
coh901318_base_init(&base->dma_memcpy, dma_memcpy_channels, 2706 - base); 2707 - 2708 - dma_cap_zero(base->dma_memcpy.cap_mask); 2709 - dma_cap_set(DMA_MEMCPY, base->dma_memcpy.cap_mask); 2710 - 2711 - base->dma_memcpy.device_alloc_chan_resources = coh901318_alloc_chan_resources; 2712 - base->dma_memcpy.device_free_chan_resources = coh901318_free_chan_resources; 2713 - base->dma_memcpy.device_prep_dma_memcpy = coh901318_prep_memcpy; 2714 - base->dma_memcpy.device_tx_status = coh901318_tx_status; 2715 - base->dma_memcpy.device_issue_pending = coh901318_issue_pending; 2716 - base->dma_memcpy.device_config = coh901318_dma_slave_config; 2717 - base->dma_memcpy.device_pause = coh901318_pause; 2718 - base->dma_memcpy.device_resume = coh901318_resume; 2719 - base->dma_memcpy.device_terminate_all = coh901318_terminate_all; 2720 - base->dma_memcpy.dev = &pdev->dev; 2721 - /* 2722 - * This controller can only access address at even 32bit boundaries, 2723 - * i.e. 2^2 2724 - */ 2725 - base->dma_memcpy.copy_align = DMAENGINE_ALIGN_4_BYTES; 2726 - err = dma_async_device_register(&base->dma_memcpy); 2727 - 2728 - if (err) 2729 - goto err_register_memcpy; 2730 - 2731 - err = of_dma_controller_register(pdev->dev.of_node, coh901318_xlate, 2732 - base); 2733 - if (err) 2734 - goto err_register_of_dma; 2735 - 2736 - platform_set_drvdata(pdev, base); 2737 - dev_info(&pdev->dev, "Initialized COH901318 DMA on virtual base 0x%p\n", 2738 - base->virtbase); 2739 - 2740 - return err; 2741 - 2742 - err_register_of_dma: 2743 - dma_async_device_unregister(&base->dma_memcpy); 2744 - err_register_memcpy: 2745 - dma_async_device_unregister(&base->dma_slave); 2746 - err_register_slave: 2747 - coh901318_pool_destroy(&base->pool); 2748 - return err; 2749 - } 2750 - static void coh901318_base_remove(struct coh901318_base *base, const int *pick_chans) 2751 - { 2752 - int chans_i; 2753 - int i = 0; 2754 - struct coh901318_chan *cohc; 2755 - 2756 - for (chans_i = 0; pick_chans[chans_i] != -1; chans_i += 
2) { 2757 - for (i = pick_chans[chans_i]; i <= pick_chans[chans_i+1]; i++) { 2758 - cohc = &base->chans[i]; 2759 - 2760 - tasklet_kill(&cohc->tasklet); 2761 - } 2762 - } 2763 - 2764 - } 2765 - 2766 - static int coh901318_remove(struct platform_device *pdev) 2767 - { 2768 - struct coh901318_base *base = platform_get_drvdata(pdev); 2769 - 2770 - devm_free_irq(&pdev->dev, base->irq, base); 2771 - 2772 - coh901318_base_remove(base, dma_slave_channels); 2773 - coh901318_base_remove(base, dma_memcpy_channels); 2774 - 2775 - of_dma_controller_free(pdev->dev.of_node); 2776 - dma_async_device_unregister(&base->dma_memcpy); 2777 - dma_async_device_unregister(&base->dma_slave); 2778 - coh901318_pool_destroy(&base->pool); 2779 - return 0; 2780 - } 2781 - 2782 - static const struct of_device_id coh901318_dt_match[] = { 2783 - { .compatible = "stericsson,coh901318" }, 2784 - {}, 2785 - }; 2786 - 2787 - static struct platform_driver coh901318_driver = { 2788 - .remove = coh901318_remove, 2789 - .driver = { 2790 - .name = "coh901318", 2791 - .of_match_table = coh901318_dt_match, 2792 - }, 2793 - }; 2794 - 2795 - static int __init coh901318_init(void) 2796 - { 2797 - return platform_driver_probe(&coh901318_driver, coh901318_probe); 2798 - } 2799 - subsys_initcall(coh901318_init); 2800 - 2801 - static void __exit coh901318_exit(void) 2802 - { 2803 - platform_driver_unregister(&coh901318_driver); 2804 - } 2805 - module_exit(coh901318_exit); 2806 - 2807 - MODULE_LICENSE("GPL"); 2808 - MODULE_AUTHOR("Per Friden");
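The burst_sizes[] table and the linear scan in coh901318_dma_set_runtimeconfig() above implement a simple policy: walk the table from the largest burst count downward and stop at the first entry whose count, for the configured bus width, does not exceed the requested maxburst. A standalone userspace sketch of that selection follows; the reg values are placeholder encodings (not the real COH901318_CX_CTRL_BURST_COUNT_* bits), and unlike the driver's loop this sketch clamps to the last table entry instead of running past the end:

```c
#include <assert.h>
#include <stddef.h>

/* Mirror of the driver's burst_sizes[] table: burst counts per bus
 * width, ordered largest to smallest. reg is a placeholder encoding. */
struct burst_table {
        int burst_8bit;
        int burst_16bit;
        int burst_32bit;
        unsigned int reg;
};

static const struct burst_table burst_sizes[] = {
        { 64, 32, 16, 0x7 },
        { 48, 24, 12, 0x6 },
        { 32, 16,  8, 0x5 },
        { 16,  8,  4, 0x4 },
        {  8,  4,  2, 0x3 },
        {  4,  2,  1, 0x2 },
        {  2,  1,  0, 0x1 },
        {  1,  0,  0, 0x0 },
};

/* Linear scan, as in coh901318_dma_set_runtimeconfig(): pick the first
 * (i.e. largest) burst count that does not exceed maxburst.
 * width_bytes is 1, 2 or 4; falls back to the smallest entry. */
static unsigned int pick_burst_reg(int width_bytes, int maxburst)
{
        size_t i = 0;

        while (i < sizeof(burst_sizes) / sizeof(burst_sizes[0]) - 1) {
                int burst = width_bytes == 1 ? burst_sizes[i].burst_8bit :
                            width_bytes == 2 ? burst_sizes[i].burst_16bit :
                                               burst_sizes[i].burst_32bit;
                if (burst <= maxburst)
                        break;
                i++;
        }
        return burst_sizes[i].reg;
}
```

Note the table is shared across bus widths: a 64-byte burst is 64 beats at 8 bits but only 16 beats at 32 bits, which is why each row carries three counts.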
drivers/dma/coh901318.h (-141 lines):

/* SPDX-License-Identifier: GPL-2.0-only */
/*
 * Copyright (C) 2007-2013 ST-Ericsson
 * DMA driver for COH 901 318
 * Author: Per Friden <per.friden@stericsson.com>
 */

#ifndef COH901318_H
#define COH901318_H

#define MAX_DMA_PACKET_SIZE_SHIFT 11
#define MAX_DMA_PACKET_SIZE (1 << MAX_DMA_PACKET_SIZE_SHIFT)

struct device;

struct coh901318_pool {
        spinlock_t lock;
        struct dma_pool *dmapool;
        struct device *dev;

#ifdef CONFIG_DEBUG_FS
        int debugfs_pool_counter;
#endif
};

/**
 * struct coh901318_lli - linked list item for DMAC
 * @control: control settings for DMAC
 * @src_addr: transfer source address
 * @dst_addr: transfer destination address
 * @link_addr: physical address to next lli
 * @virt_link_addr: virtual address of next lli (only used by pool_free)
 * @phy_this: physical address of current lli (only used by pool_free)
 */
struct coh901318_lli {
        u32 control;
        dma_addr_t src_addr;
        dma_addr_t dst_addr;
        dma_addr_t link_addr;

        void *virt_link_addr;
        dma_addr_t phy_this;
};

/**
 * coh901318_pool_create() - Creates an dma pool for lli:s
 * @pool: pool handle
 * @dev: dma device
 * @lli_nbr: number of lli:s in the pool
 * @algin: address alignemtn of lli:s
 * returns 0 on success otherwise none zero
 */
int coh901318_pool_create(struct coh901318_pool *pool,
                          struct device *dev,
                          size_t lli_nbr, size_t align);

/**
 * coh901318_pool_destroy() - Destroys the dma pool
 * @pool: pool handle
 * returns 0 on success otherwise none zero
 */
int coh901318_pool_destroy(struct coh901318_pool *pool);

/**
 * coh901318_lli_alloc() - Allocates a linked list
 *
 * @pool: pool handle
 * @len: length to list
 * return: none NULL if success otherwise NULL
 */
struct coh901318_lli *
coh901318_lli_alloc(struct coh901318_pool *pool,
                    unsigned int len);

/**
 * coh901318_lli_free() - Returns the linked list items to the pool
 * @pool: pool handle
 * @lli: reference to lli pointer to be freed
 */
void coh901318_lli_free(struct coh901318_pool *pool,
                        struct coh901318_lli **lli);

/**
 * coh901318_lli_fill_memcpy() - Prepares the lli:s for dma memcpy
 * @pool: pool handle
 * @lli: allocated lli
 * @src: src address
 * @size: transfer size
 * @dst: destination address
 * @ctrl_chained: ctrl for chained lli
 * @ctrl_last: ctrl for the last lli
 * returns number of CPU interrupts for the lli, negative on error.
 */
int
coh901318_lli_fill_memcpy(struct coh901318_pool *pool,
                          struct coh901318_lli *lli,
                          dma_addr_t src, unsigned int size,
                          dma_addr_t dst, u32 ctrl_chained, u32 ctrl_last);

/**
 * coh901318_lli_fill_single() - Prepares the lli:s for dma single transfer
 * @pool: pool handle
 * @lli: allocated lli
 * @buf: transfer buffer
 * @size: transfer size
 * @dev_addr: address of periphal
 * @ctrl_chained: ctrl for chained lli
 * @ctrl_last: ctrl for the last lli
 * @dir: direction of transfer (to or from device)
 * returns number of CPU interrupts for the lli, negative on error.
 */
int
coh901318_lli_fill_single(struct coh901318_pool *pool,
                          struct coh901318_lli *lli,
                          dma_addr_t buf, unsigned int size,
                          dma_addr_t dev_addr, u32 ctrl_chained, u32 ctrl_last,
                          enum dma_transfer_direction dir);

/**
 * coh901318_lli_fill_single() - Prepares the lli:s for dma scatter list transfer
 * @pool: pool handle
 * @lli: allocated lli
 * @sg: scatter gather list
 * @nents: number of entries in sg
 * @dev_addr: address of periphal
 * @ctrl_chained: ctrl for chained lli
 * @ctrl: ctrl of middle lli
 * @ctrl_last: ctrl for the last lli
 * @dir: direction of transfer (to or from device)
 * @ctrl_irq_mask: ctrl mask for CPU interrupt
 * returns number of CPU interrupts for the lli, negative on error.
 */
int
coh901318_lli_fill_sg(struct coh901318_pool *pool,
                      struct coh901318_lli *lli,
                      struct scatterlist *sg, unsigned int nents,
                      dma_addr_t dev_addr, u32 ctrl_chained,
                      u32 ctrl, u32 ctrl_last,
                      enum dma_transfer_direction dir, u32 ctrl_irq_mask);

#endif /* COH901318_H */
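MAX_DMA_PACKET_SIZE above caps each linked-list item at 2048 bytes (1 << 11), so coh901318_prep_memcpy() computes the number of lli:s with a shift plus a remainder check, which is just a ceiling division. A minimal sketch of that computation:

```c
#include <assert.h>

#define MAX_DMA_PACKET_SIZE_SHIFT 11
#define MAX_DMA_PACKET_SIZE (1 << MAX_DMA_PACKET_SIZE_SHIFT)

/* Number of linked-list items needed for a transfer of `size` bytes,
 * computed the same way as coh901318_prep_memcpy(): shift down by the
 * packet-size exponent, then add one if any remainder bytes are left.
 * Equivalent to ceil(size / MAX_DMA_PACKET_SIZE). */
static int lli_count(unsigned long size)
{
        int lli_len = size >> MAX_DMA_PACKET_SIZE_SHIFT;

        if (((unsigned long)lli_len << MAX_DMA_PACKET_SIZE_SHIFT) < size)
                lli_len++;
        return lli_len;
}
```

The in-kernel equivalent of this idiom is DIV_ROUND_UP(size, MAX_DMA_PACKET_SIZE).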
drivers/dma/coh901318_lli.c (-313 lines):

// SPDX-License-Identifier: GPL-2.0-only
/*
 * driver/dma/coh901318_lli.c
 *
 * Copyright (C) 2007-2009 ST-Ericsson
 * Support functions for handling lli for dma
 * Author: Per Friden <per.friden@stericsson.com>
 */

#include <linux/spinlock.h>
#include <linux/memory.h>
#include <linux/gfp.h>
#include <linux/dmapool.h>
#include <linux/dmaengine.h>

#include "coh901318.h"

#if (defined(CONFIG_DEBUG_FS) && defined(CONFIG_U300_DEBUG))
#define DEBUGFS_POOL_COUNTER_RESET(pool) (pool->debugfs_pool_counter = 0)
#define DEBUGFS_POOL_COUNTER_ADD(pool, add) (pool->debugfs_pool_counter += add)
#else
#define DEBUGFS_POOL_COUNTER_RESET(pool)
#define DEBUGFS_POOL_COUNTER_ADD(pool, add)
#endif

static struct coh901318_lli *
coh901318_lli_next(struct coh901318_lli *data)
{
        if (data == NULL || data->link_addr == 0)
                return NULL;

        return (struct coh901318_lli *) data->virt_link_addr;
}

int coh901318_pool_create(struct coh901318_pool *pool,
                          struct device *dev,
                          size_t size, size_t align)
{
        spin_lock_init(&pool->lock);
        pool->dev = dev;
        pool->dmapool = dma_pool_create("lli_pool", dev, size, align, 0);

        DEBUGFS_POOL_COUNTER_RESET(pool);
        return 0;
}

int coh901318_pool_destroy(struct coh901318_pool *pool)
{

        dma_pool_destroy(pool->dmapool);
        return 0;
}

struct coh901318_lli *
coh901318_lli_alloc(struct coh901318_pool *pool, unsigned int len)
{
        int i;
        struct coh901318_lli *head;
        struct coh901318_lli *lli;
        struct coh901318_lli *lli_prev;
        dma_addr_t phy;

        if (len == 0)
                return NULL;

        spin_lock(&pool->lock);

        head = dma_pool_alloc(pool->dmapool, GFP_NOWAIT, &phy);

        if (head == NULL)
                goto err;

        DEBUGFS_POOL_COUNTER_ADD(pool, 1);

        lli = head;
        lli->phy_this = phy;
        lli->link_addr = 0x00000000;
        lli->virt_link_addr = NULL;

        for (i = 1; i < len; i++) {
                lli_prev = lli;

                lli = dma_pool_alloc(pool->dmapool, GFP_NOWAIT, &phy);

                if (lli == NULL)
                        goto err_clean_up;

                DEBUGFS_POOL_COUNTER_ADD(pool, 1);
                lli->phy_this = phy;
                lli->link_addr = 0x00000000;
                lli->virt_link_addr = NULL;

                lli_prev->link_addr = phy;
                lli_prev->virt_link_addr = lli;
        }

        spin_unlock(&pool->lock);

        return head;

 err:
        spin_unlock(&pool->lock);
        return NULL;

 err_clean_up:
        lli_prev->link_addr = 0x00000000U;
        spin_unlock(&pool->lock);
        coh901318_lli_free(pool, &head);
        return NULL;
}

void coh901318_lli_free(struct coh901318_pool *pool,
                        struct coh901318_lli **lli)
{
        struct coh901318_lli *l;
        struct coh901318_lli *next;

        if (lli == NULL)
                return;

        l = *lli;

        if (l == NULL)
                return;

        spin_lock(&pool->lock);

        while (l->link_addr) {
                next = l->virt_link_addr;
                dma_pool_free(pool->dmapool, l, l->phy_this);
                DEBUGFS_POOL_COUNTER_ADD(pool, -1);
                l = next;
        }
        dma_pool_free(pool->dmapool, l, l->phy_this);
        DEBUGFS_POOL_COUNTER_ADD(pool, -1);

        spin_unlock(&pool->lock);
        *lli = NULL;
}

int
coh901318_lli_fill_memcpy(struct coh901318_pool *pool,
                          struct coh901318_lli *lli,
                          dma_addr_t source, unsigned int size,
                          dma_addr_t destination, u32 ctrl_chained,
                          u32 ctrl_eom)
{
        int s = size;
        dma_addr_t src = source;
        dma_addr_t dst = destination;

        lli->src_addr = src;
        lli->dst_addr = dst;

        while (lli->link_addr) {
                lli->control = ctrl_chained | MAX_DMA_PACKET_SIZE;
                lli->src_addr = src;
                lli->dst_addr = dst;

                s -= MAX_DMA_PACKET_SIZE;
                lli = coh901318_lli_next(lli);

                src += MAX_DMA_PACKET_SIZE;
                dst += MAX_DMA_PACKET_SIZE;
        }

        lli->control = ctrl_eom | s;
        lli->src_addr = src;
        lli->dst_addr = dst;

        return 0;
}

int
coh901318_lli_fill_single(struct coh901318_pool *pool,
                          struct coh901318_lli *lli,
                          dma_addr_t buf, unsigned int size,
                          dma_addr_t dev_addr, u32 ctrl_chained, u32 ctrl_eom,
                          enum dma_transfer_direction dir)
{
        int s = size;
        dma_addr_t src;
        dma_addr_t dst;


        if (dir == DMA_MEM_TO_DEV) {
                src = buf;
                dst = dev_addr;

        } else if (dir == DMA_DEV_TO_MEM) {

                src = dev_addr;
                dst = buf;
        } else {
                return -EINVAL;
        }

        while (lli->link_addr) {
                size_t block_size = MAX_DMA_PACKET_SIZE;
                lli->control = ctrl_chained | MAX_DMA_PACKET_SIZE;

                /* If we are on the next-to-final block and there will
                 * be less than half a DMA packet left for the last
                 * block, then we want to make this block a little
                 * smaller to balance the sizes. This is meant to
                 * avoid too small transfers if the buffer size is
                 * (MAX_DMA_PACKET_SIZE*N + 1) */
                if (s < (MAX_DMA_PACKET_SIZE + MAX_DMA_PACKET_SIZE/2))
                        block_size = MAX_DMA_PACKET_SIZE/2;

                s -= block_size;
                lli->src_addr = src;
                lli->dst_addr = dst;

                lli = coh901318_lli_next(lli);

                if (dir == DMA_MEM_TO_DEV)
                        src += block_size;
                else if (dir == DMA_DEV_TO_MEM)
                        dst += block_size;
        }

        lli->control = ctrl_eom | s;
        lli->src_addr = src;
        lli->dst_addr = dst;

        return 0;
}

int
coh901318_lli_fill_sg(struct coh901318_pool *pool,
                      struct coh901318_lli *lli,
                      struct scatterlist *sgl, unsigned int nents,
                      dma_addr_t dev_addr, u32 ctrl_chained, u32 ctrl,
                      u32 ctrl_last,
                      enum dma_transfer_direction dir, u32 ctrl_irq_mask)
{
        int i;
        struct scatterlist *sg;
        u32 ctrl_sg;
        dma_addr_t src = 0;
        dma_addr_t dst = 0;
        u32 bytes_to_transfer;
        u32 elem_size;

        if (lli == NULL)
                goto err;

        spin_lock(&pool->lock);

        if (dir == DMA_MEM_TO_DEV)
                dst = dev_addr;
        else if (dir == DMA_DEV_TO_MEM)
                src = dev_addr;
        else
                goto err;

        for_each_sg(sgl, sg, nents, i) {
                if (sg_is_chain(sg)) {
                        /* sg continues to the next sg-element don't
                         * send ctrl_finish until the last
                         * sg-element in the chain
                         */
                        ctrl_sg = ctrl_chained;
                } else if (i == nents - 1)
                        ctrl_sg = ctrl_last;
                else
                        ctrl_sg = ctrl ? ctrl : ctrl_last;


                if (dir == DMA_MEM_TO_DEV)
                        /* increment source address */
                        src = sg_dma_address(sg);
                else
                        /* increment destination address */
                        dst = sg_dma_address(sg);

                bytes_to_transfer = sg_dma_len(sg);

                while (bytes_to_transfer) {
                        u32 val;

                        if (bytes_to_transfer > MAX_DMA_PACKET_SIZE) {
                                elem_size = MAX_DMA_PACKET_SIZE;
                                val = ctrl_chained;
                        } else {
                                elem_size = bytes_to_transfer;
                                val = ctrl_sg;
                        }

                        lli->control = val | elem_size;
                        lli->src_addr = src;
                        lli->dst_addr = dst;

                        if (dir == DMA_DEV_TO_MEM)
                                dst += elem_size;
                        else
                                src += elem_size;

                        BUG_ON(lli->link_addr & 3);

                        bytes_to_transfer -= elem_size;
                        lli = coh901318_lli_next(lli);
                }

        }
        spin_unlock(&pool->lock);

        return 0;
 err:
        spin_unlock(&pool->lock);
        return -EINVAL;
}
+14
drivers/dma/dma-jz4780.c
···
1004 1004 	JZ_SOC_DATA_BREAK_LINKS,
1005 1005 };
1006 1006
1007 + static const struct jz4780_dma_soc_data jz4760_dma_soc_data = {
1008 + 	.nb_channels = 5,
1009 + 	.transfer_ord_max = 6,
1010 + 	.flags = JZ_SOC_DATA_PER_CHAN_PM | JZ_SOC_DATA_NO_DCKES_DCKEC,
1011 + };
1012 +
1013 + static const struct jz4780_dma_soc_data jz4760b_dma_soc_data = {
1014 + 	.nb_channels = 5,
1015 + 	.transfer_ord_max = 6,
1016 + 	.flags = JZ_SOC_DATA_PER_CHAN_PM,
1017 + };
1018 +
1007 1019 static const struct jz4780_dma_soc_data jz4770_dma_soc_data = {
1008 1020 	.nb_channels = 6,
1009 1021 	.transfer_ord_max = 6,
···
1043 1031 static const struct of_device_id jz4780_dma_dt_match[] = {
1044 1032 	{ .compatible = "ingenic,jz4740-dma", .data = &jz4740_dma_soc_data },
1045 1033 	{ .compatible = "ingenic,jz4725b-dma", .data = &jz4725b_dma_soc_data },
1034 + 	{ .compatible = "ingenic,jz4760-dma", .data = &jz4760_dma_soc_data },
1035 + 	{ .compatible = "ingenic,jz4760b-dma", .data = &jz4760b_dma_soc_data },
1046 1036 	{ .compatible = "ingenic,jz4770-dma", .data = &jz4770_dma_soc_data },
1047 1037 	{ .compatible = "ingenic,jz4780-dma", .data = &jz4780_dma_soc_data },
1048 1038 	{ .compatible = "ingenic,x1000-dma", .data = &x1000_dma_soc_data },
+605 -93
drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c
··· 12 12 #include <linux/device.h> 13 13 #include <linux/dmaengine.h> 14 14 #include <linux/dmapool.h> 15 + #include <linux/dma-mapping.h> 15 16 #include <linux/err.h> 16 17 #include <linux/interrupt.h> 17 18 #include <linux/io.h> 19 + #include <linux/iopoll.h> 20 + #include <linux/io-64-nonatomic-lo-hi.h> 18 21 #include <linux/kernel.h> 19 22 #include <linux/module.h> 20 23 #include <linux/of.h> 24 + #include <linux/of_dma.h> 21 25 #include <linux/platform_device.h> 22 26 #include <linux/pm_runtime.h> 23 27 #include <linux/property.h> 28 + #include <linux/slab.h> 24 29 #include <linux/types.h> 25 30 26 31 #include "dw-axi-dmac.h" ··· 200 195 return dma_chan_name(&chan->vc.chan); 201 196 } 202 197 203 - static struct axi_dma_desc *axi_desc_get(struct axi_dma_chan *chan) 198 + static struct axi_dma_desc *axi_desc_alloc(u32 num) 204 199 { 205 - struct dw_axi_dma *dw = chan->chip->dw; 206 200 struct axi_dma_desc *desc; 201 + 202 + desc = kzalloc(sizeof(*desc), GFP_NOWAIT); 203 + if (!desc) 204 + return NULL; 205 + 206 + desc->hw_desc = kcalloc(num, sizeof(*desc->hw_desc), GFP_NOWAIT); 207 + if (!desc->hw_desc) { 208 + kfree(desc); 209 + return NULL; 210 + } 211 + 212 + return desc; 213 + } 214 + 215 + static struct axi_dma_lli *axi_desc_get(struct axi_dma_chan *chan, 216 + dma_addr_t *addr) 217 + { 218 + struct axi_dma_lli *lli; 207 219 dma_addr_t phys; 208 220 209 - desc = dma_pool_zalloc(dw->desc_pool, GFP_NOWAIT, &phys); 210 - if (unlikely(!desc)) { 221 + lli = dma_pool_zalloc(chan->desc_pool, GFP_NOWAIT, &phys); 222 + if (unlikely(!lli)) { 211 223 dev_err(chan2dev(chan), "%s: not enough descriptors available\n", 212 224 axi_chan_name(chan)); 213 225 return NULL; 214 226 } 215 227 216 228 atomic_inc(&chan->descs_allocated); 217 - INIT_LIST_HEAD(&desc->xfer_list); 218 - desc->vd.tx.phys = phys; 219 - desc->chan = chan; 229 + *addr = phys; 220 230 221 - return desc; 231 + return lli; 222 232 } 223 233 224 234 static void axi_desc_put(struct axi_dma_desc *desc) 225 
235 { 226 236 struct axi_dma_chan *chan = desc->chan; 227 - struct dw_axi_dma *dw = chan->chip->dw; 228 - struct axi_dma_desc *child, *_next; 229 - unsigned int descs_put = 0; 237 + int count = atomic_read(&chan->descs_allocated); 238 + struct axi_dma_hw_desc *hw_desc; 239 + int descs_put; 230 240 231 - list_for_each_entry_safe(child, _next, &desc->xfer_list, xfer_list) { 232 - list_del(&child->xfer_list); 233 - dma_pool_free(dw->desc_pool, child, child->vd.tx.phys); 234 - descs_put++; 241 + for (descs_put = 0; descs_put < count; descs_put++) { 242 + hw_desc = &desc->hw_desc[descs_put]; 243 + dma_pool_free(chan->desc_pool, hw_desc->lli, hw_desc->llp); 235 244 } 236 245 237 - dma_pool_free(dw->desc_pool, desc, desc->vd.tx.phys); 238 - descs_put++; 239 - 246 + kfree(desc->hw_desc); 247 + kfree(desc); 240 248 atomic_sub(descs_put, &chan->descs_allocated); 241 249 dev_vdbg(chan2dev(chan), "%s: %d descs put, %d still allocated\n", 242 250 axi_chan_name(chan), descs_put, ··· 266 248 struct dma_tx_state *txstate) 267 249 { 268 250 struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan); 269 - enum dma_status ret; 251 + struct virt_dma_desc *vdesc; 252 + enum dma_status status; 253 + u32 completed_length; 254 + unsigned long flags; 255 + u32 completed_blocks; 256 + size_t bytes = 0; 257 + u32 length; 258 + u32 len; 270 259 271 - ret = dma_cookie_status(dchan, cookie, txstate); 260 + status = dma_cookie_status(dchan, cookie, txstate); 261 + if (status == DMA_COMPLETE || !txstate) 262 + return status; 272 263 273 - if (chan->is_paused && ret == DMA_IN_PROGRESS) 274 - ret = DMA_PAUSED; 264 + spin_lock_irqsave(&chan->vc.lock, flags); 275 265 276 - return ret; 266 + vdesc = vchan_find_desc(&chan->vc, cookie); 267 + if (vdesc) { 268 + length = vd_to_axi_desc(vdesc)->length; 269 + completed_blocks = vd_to_axi_desc(vdesc)->completed_blocks; 270 + len = vd_to_axi_desc(vdesc)->hw_desc[0].len; 271 + completed_length = completed_blocks * len; 272 + bytes = length - completed_length; 
273 + } else { 274 + bytes = vd_to_axi_desc(vdesc)->length; 275 + } 276 + 277 + spin_unlock_irqrestore(&chan->vc.lock, flags); 278 + dma_set_residue(txstate, bytes); 279 + 280 + return status; 277 281 } 278 282 279 - static void write_desc_llp(struct axi_dma_desc *desc, dma_addr_t adr) 283 + static void write_desc_llp(struct axi_dma_hw_desc *desc, dma_addr_t adr) 280 284 { 281 - desc->lli.llp = cpu_to_le64(adr); 285 + desc->lli->llp = cpu_to_le64(adr); 282 286 } 283 287 284 288 static void write_chan_llp(struct axi_dma_chan *chan, dma_addr_t adr) ··· 308 268 axi_chan_iowrite64(chan, CH_LLP, adr); 309 269 } 310 270 271 + static void dw_axi_dma_set_byte_halfword(struct axi_dma_chan *chan, bool set) 272 + { 273 + u32 offset = DMAC_APB_BYTE_WR_CH_EN; 274 + u32 reg_width, val; 275 + 276 + if (!chan->chip->apb_regs) { 277 + dev_dbg(chan->chip->dev, "apb_regs not initialized\n"); 278 + return; 279 + } 280 + 281 + reg_width = __ffs(chan->config.dst_addr_width); 282 + if (reg_width == DWAXIDMAC_TRANS_WIDTH_16) 283 + offset = DMAC_APB_HALFWORD_WR_CH_EN; 284 + 285 + val = ioread32(chan->chip->apb_regs + offset); 286 + 287 + if (set) 288 + val |= BIT(chan->id); 289 + else 290 + val &= ~BIT(chan->id); 291 + 292 + iowrite32(val, chan->chip->apb_regs + offset); 293 + } 311 294 /* Called in chan locked context */ 312 295 static void axi_chan_block_xfer_start(struct axi_dma_chan *chan, 313 296 struct axi_dma_desc *first) ··· 356 293 priority << CH_CFG_H_PRIORITY_POS | 357 294 DWAXIDMAC_HS_SEL_HW << CH_CFG_H_HS_SEL_DST_POS | 358 295 DWAXIDMAC_HS_SEL_HW << CH_CFG_H_HS_SEL_SRC_POS); 296 + switch (chan->direction) { 297 + case DMA_MEM_TO_DEV: 298 + dw_axi_dma_set_byte_halfword(chan, true); 299 + reg |= (chan->config.device_fc ? 300 + DWAXIDMAC_TT_FC_MEM_TO_PER_DST : 301 + DWAXIDMAC_TT_FC_MEM_TO_PER_DMAC) 302 + << CH_CFG_H_TT_FC_POS; 303 + break; 304 + case DMA_DEV_TO_MEM: 305 + reg |= (chan->config.device_fc ? 
306 + DWAXIDMAC_TT_FC_PER_TO_MEM_SRC : 307 + DWAXIDMAC_TT_FC_PER_TO_MEM_DMAC) 308 + << CH_CFG_H_TT_FC_POS; 309 + break; 310 + default: 311 + break; 312 + } 359 313 axi_chan_iowrite32(chan, CH_CFG_H, reg); 360 314 361 - write_chan_llp(chan, first->vd.tx.phys | lms); 315 + write_chan_llp(chan, first->hw_desc[0].llp | lms); 362 316 363 317 irq_mask = DWAXIDMAC_IRQ_DMA_TRF | DWAXIDMAC_IRQ_ALL_ERR; 364 318 axi_chan_irq_sig_set(chan, irq_mask); ··· 413 333 spin_unlock_irqrestore(&chan->vc.lock, flags); 414 334 } 415 335 336 + static void dw_axi_dma_synchronize(struct dma_chan *dchan) 337 + { 338 + struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan); 339 + 340 + vchan_synchronize(&chan->vc); 341 + } 342 + 416 343 static int dma_chan_alloc_chan_resources(struct dma_chan *dchan) 417 344 { 418 345 struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan); ··· 431 344 return -EBUSY; 432 345 } 433 346 347 + /* LLI address must be aligned to a 64-byte boundary */ 348 + chan->desc_pool = dma_pool_create(dev_name(chan2dev(chan)), 349 + chan->chip->dev, 350 + sizeof(struct axi_dma_lli), 351 + 64, 0); 352 + if (!chan->desc_pool) { 353 + dev_err(chan2dev(chan), "No memory for descriptors\n"); 354 + return -ENOMEM; 355 + } 434 356 dev_vdbg(dchan2dev(dchan), "%s: allocating\n", axi_chan_name(chan)); 435 357 436 358 pm_runtime_get(chan->chip->dev); ··· 461 365 462 366 vchan_free_chan_resources(&chan->vc); 463 367 368 + dma_pool_destroy(chan->desc_pool); 369 + chan->desc_pool = NULL; 464 370 dev_vdbg(dchan2dev(dchan), 465 371 "%s: free resources, descriptor still allocated: %u\n", 466 372 axi_chan_name(chan), atomic_read(&chan->descs_allocated)); 467 373 468 374 pm_runtime_put(chan->chip->dev); 375 + } 376 + 377 + static void dw_axi_dma_set_hw_channel(struct axi_dma_chip *chip, 378 + u32 handshake_num, bool set) 379 + { 380 + unsigned long start = 0; 381 + unsigned long reg_value; 382 + unsigned long reg_mask; 383 + unsigned long reg_set; 384 + unsigned long mask; 385 + unsigned 
long val; 386 + 387 + if (!chip->apb_regs) { 388 + dev_dbg(chip->dev, "apb_regs not initialized\n"); 389 + return; 390 + } 391 + 392 + /* 393 + * An unused DMA channel has a default value of 0x3F. 394 + * Lock the DMA channel by assign a handshake number to the channel. 395 + * Unlock the DMA channel by assign 0x3F to the channel. 396 + */ 397 + if (set) { 398 + reg_set = UNUSED_CHANNEL; 399 + val = handshake_num; 400 + } else { 401 + reg_set = handshake_num; 402 + val = UNUSED_CHANNEL; 403 + } 404 + 405 + reg_value = lo_hi_readq(chip->apb_regs + DMAC_APB_HW_HS_SEL_0); 406 + 407 + for_each_set_clump8(start, reg_mask, &reg_value, 64) { 408 + if (reg_mask == reg_set) { 409 + mask = GENMASK_ULL(start + 7, start); 410 + reg_value &= ~mask; 411 + reg_value |= rol64(val, start); 412 + lo_hi_writeq(reg_value, 413 + chip->apb_regs + DMAC_APB_HW_HS_SEL_0); 414 + break; 415 + } 416 + } 469 417 } 470 418 471 419 /* ··· 518 378 * transfer and completes the DMA transfer operation at the end of current 519 379 * block transfer. 
520 380 */ 521 - static void set_desc_last(struct axi_dma_desc *desc) 381 + static void set_desc_last(struct axi_dma_hw_desc *desc) 522 382 { 523 383 u32 val; 524 384 525 - val = le32_to_cpu(desc->lli.ctl_hi); 385 + val = le32_to_cpu(desc->lli->ctl_hi); 526 386 val |= CH_CTL_H_LLI_LAST; 527 - desc->lli.ctl_hi = cpu_to_le32(val); 387 + desc->lli->ctl_hi = cpu_to_le32(val); 528 388 } 529 389 530 - static void write_desc_sar(struct axi_dma_desc *desc, dma_addr_t adr) 390 + static void write_desc_sar(struct axi_dma_hw_desc *desc, dma_addr_t adr) 531 391 { 532 - desc->lli.sar = cpu_to_le64(adr); 392 + desc->lli->sar = cpu_to_le64(adr); 533 393 } 534 394 535 - static void write_desc_dar(struct axi_dma_desc *desc, dma_addr_t adr) 395 + static void write_desc_dar(struct axi_dma_hw_desc *desc, dma_addr_t adr) 536 396 { 537 - desc->lli.dar = cpu_to_le64(adr); 397 + desc->lli->dar = cpu_to_le64(adr); 538 398 } 539 399 540 - static void set_desc_src_master(struct axi_dma_desc *desc) 400 + static void set_desc_src_master(struct axi_dma_hw_desc *desc) 541 401 { 542 402 u32 val; 543 403 544 404 /* Select AXI0 for source master */ 545 - val = le32_to_cpu(desc->lli.ctl_lo); 405 + val = le32_to_cpu(desc->lli->ctl_lo); 546 406 val &= ~CH_CTL_L_SRC_MAST; 547 - desc->lli.ctl_lo = cpu_to_le32(val); 407 + desc->lli->ctl_lo = cpu_to_le32(val); 548 408 } 549 409 550 - static void set_desc_dest_master(struct axi_dma_desc *desc) 410 + static void set_desc_dest_master(struct axi_dma_hw_desc *hw_desc, 411 + struct axi_dma_desc *desc) 551 412 { 552 413 u32 val; 553 414 554 415 /* Select AXI1 for source master if available */ 555 - val = le32_to_cpu(desc->lli.ctl_lo); 416 + val = le32_to_cpu(hw_desc->lli->ctl_lo); 556 417 if (desc->chan->chip->dw->hdata->nr_masters > 1) 557 418 val |= CH_CTL_L_DST_MAST; 558 419 else 559 420 val &= ~CH_CTL_L_DST_MAST; 560 421 561 - desc->lli.ctl_lo = cpu_to_le32(val); 422 + hw_desc->lli->ctl_lo = cpu_to_le32(val); 423 + } 424 + 425 + static int 
dw_axi_dma_set_hw_desc(struct axi_dma_chan *chan, 426 + struct axi_dma_hw_desc *hw_desc, 427 + dma_addr_t mem_addr, size_t len) 428 + { 429 + unsigned int data_width = BIT(chan->chip->dw->hdata->m_data_width); 430 + unsigned int reg_width; 431 + unsigned int mem_width; 432 + dma_addr_t device_addr; 433 + size_t axi_block_ts; 434 + size_t block_ts; 435 + u32 ctllo, ctlhi; 436 + u32 burst_len; 437 + 438 + axi_block_ts = chan->chip->dw->hdata->block_size[chan->id]; 439 + 440 + mem_width = __ffs(data_width | mem_addr | len); 441 + if (mem_width > DWAXIDMAC_TRANS_WIDTH_32) 442 + mem_width = DWAXIDMAC_TRANS_WIDTH_32; 443 + 444 + if (!IS_ALIGNED(mem_addr, 4)) { 445 + dev_err(chan->chip->dev, "invalid buffer alignment\n"); 446 + return -EINVAL; 447 + } 448 + 449 + switch (chan->direction) { 450 + case DMA_MEM_TO_DEV: 451 + reg_width = __ffs(chan->config.dst_addr_width); 452 + device_addr = chan->config.dst_addr; 453 + ctllo = reg_width << CH_CTL_L_DST_WIDTH_POS | 454 + mem_width << CH_CTL_L_SRC_WIDTH_POS | 455 + DWAXIDMAC_CH_CTL_L_NOINC << CH_CTL_L_DST_INC_POS | 456 + DWAXIDMAC_CH_CTL_L_INC << CH_CTL_L_SRC_INC_POS; 457 + block_ts = len >> mem_width; 458 + break; 459 + case DMA_DEV_TO_MEM: 460 + reg_width = __ffs(chan->config.src_addr_width); 461 + device_addr = chan->config.src_addr; 462 + ctllo = reg_width << CH_CTL_L_SRC_WIDTH_POS | 463 + mem_width << CH_CTL_L_DST_WIDTH_POS | 464 + DWAXIDMAC_CH_CTL_L_INC << CH_CTL_L_DST_INC_POS | 465 + DWAXIDMAC_CH_CTL_L_NOINC << CH_CTL_L_SRC_INC_POS; 466 + block_ts = len >> reg_width; 467 + break; 468 + default: 469 + return -EINVAL; 470 + } 471 + 472 + if (block_ts > axi_block_ts) 473 + return -EINVAL; 474 + 475 + hw_desc->lli = axi_desc_get(chan, &hw_desc->llp); 476 + if (unlikely(!hw_desc->lli)) 477 + return -ENOMEM; 478 + 479 + ctlhi = CH_CTL_H_LLI_VALID; 480 + 481 + if (chan->chip->dw->hdata->restrict_axi_burst_len) { 482 + burst_len = chan->chip->dw->hdata->axi_rw_burst_len; 483 + ctlhi |= CH_CTL_H_ARLEN_EN | CH_CTL_H_AWLEN_EN | 
484 + burst_len << CH_CTL_H_ARLEN_POS | 485 + burst_len << CH_CTL_H_AWLEN_POS; 486 + } 487 + 488 + hw_desc->lli->ctl_hi = cpu_to_le32(ctlhi); 489 + 490 + if (chan->direction == DMA_MEM_TO_DEV) { 491 + write_desc_sar(hw_desc, mem_addr); 492 + write_desc_dar(hw_desc, device_addr); 493 + } else { 494 + write_desc_sar(hw_desc, device_addr); 495 + write_desc_dar(hw_desc, mem_addr); 496 + } 497 + 498 + hw_desc->lli->block_ts_lo = cpu_to_le32(block_ts - 1); 499 + 500 + ctllo |= DWAXIDMAC_BURST_TRANS_LEN_4 << CH_CTL_L_DST_MSIZE_POS | 501 + DWAXIDMAC_BURST_TRANS_LEN_4 << CH_CTL_L_SRC_MSIZE_POS; 502 + hw_desc->lli->ctl_lo = cpu_to_le32(ctllo); 503 + 504 + set_desc_src_master(hw_desc); 505 + 506 + hw_desc->len = len; 507 + return 0; 508 + } 509 + 510 + static size_t calculate_block_len(struct axi_dma_chan *chan, 511 + dma_addr_t dma_addr, size_t buf_len, 512 + enum dma_transfer_direction direction) 513 + { 514 + u32 data_width, reg_width, mem_width; 515 + size_t axi_block_ts, block_len; 516 + 517 + axi_block_ts = chan->chip->dw->hdata->block_size[chan->id]; 518 + 519 + switch (direction) { 520 + case DMA_MEM_TO_DEV: 521 + data_width = BIT(chan->chip->dw->hdata->m_data_width); 522 + mem_width = __ffs(data_width | dma_addr | buf_len); 523 + if (mem_width > DWAXIDMAC_TRANS_WIDTH_32) 524 + mem_width = DWAXIDMAC_TRANS_WIDTH_32; 525 + 526 + block_len = axi_block_ts << mem_width; 527 + break; 528 + case DMA_DEV_TO_MEM: 529 + reg_width = __ffs(chan->config.src_addr_width); 530 + block_len = axi_block_ts << reg_width; 531 + break; 532 + default: 533 + block_len = 0; 534 + } 535 + 536 + return block_len; 537 + } 538 + 539 + static struct dma_async_tx_descriptor * 540 + dw_axi_dma_chan_prep_cyclic(struct dma_chan *dchan, dma_addr_t dma_addr, 541 + size_t buf_len, size_t period_len, 542 + enum dma_transfer_direction direction, 543 + unsigned long flags) 544 + { 545 + struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan); 546 + struct axi_dma_hw_desc *hw_desc = NULL; 547 + struct 
axi_dma_desc *desc = NULL; 548 + dma_addr_t src_addr = dma_addr; 549 + u32 num_periods, num_segments; 550 + size_t axi_block_len; 551 + u32 total_segments; 552 + u32 segment_len; 553 + unsigned int i; 554 + int status; 555 + u64 llp = 0; 556 + u8 lms = 0; /* Select AXI0 master for LLI fetching */ 557 + 558 + num_periods = buf_len / period_len; 559 + 560 + axi_block_len = calculate_block_len(chan, dma_addr, buf_len, direction); 561 + if (axi_block_len == 0) 562 + return NULL; 563 + 564 + num_segments = DIV_ROUND_UP(period_len, axi_block_len); 565 + segment_len = DIV_ROUND_UP(period_len, num_segments); 566 + 567 + total_segments = num_periods * num_segments; 568 + 569 + desc = axi_desc_alloc(total_segments); 570 + if (unlikely(!desc)) 571 + goto err_desc_get; 572 + 573 + chan->direction = direction; 574 + desc->chan = chan; 575 + chan->cyclic = true; 576 + desc->length = 0; 577 + desc->period_len = period_len; 578 + 579 + for (i = 0; i < total_segments; i++) { 580 + hw_desc = &desc->hw_desc[i]; 581 + 582 + status = dw_axi_dma_set_hw_desc(chan, hw_desc, src_addr, 583 + segment_len); 584 + if (status < 0) 585 + goto err_desc_get; 586 + 587 + desc->length += hw_desc->len; 588 + /* Set end-of-link to the linked descriptor, so that cyclic 589 + * callback function can be triggered during interrupt. 
590 + */ 591 + set_desc_last(hw_desc); 592 + 593 + src_addr += segment_len; 594 + } 595 + 596 + llp = desc->hw_desc[0].llp; 597 + 598 + /* Managed transfer list */ 599 + do { 600 + hw_desc = &desc->hw_desc[--total_segments]; 601 + write_desc_llp(hw_desc, llp | lms); 602 + llp = hw_desc->llp; 603 + } while (total_segments); 604 + 605 + dw_axi_dma_set_hw_channel(chan->chip, chan->hw_handshake_num, true); 606 + 607 + return vchan_tx_prep(&chan->vc, &desc->vd, flags); 608 + 609 + err_desc_get: 610 + if (desc) 611 + axi_desc_put(desc); 612 + 613 + return NULL; 614 + } 615 + 616 + static struct dma_async_tx_descriptor * 617 + dw_axi_dma_chan_prep_slave_sg(struct dma_chan *dchan, struct scatterlist *sgl, 618 + unsigned int sg_len, 619 + enum dma_transfer_direction direction, 620 + unsigned long flags, void *context) 621 + { 622 + struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan); 623 + struct axi_dma_hw_desc *hw_desc = NULL; 624 + struct axi_dma_desc *desc = NULL; 625 + u32 num_segments, segment_len; 626 + unsigned int loop = 0; 627 + struct scatterlist *sg; 628 + size_t axi_block_len; 629 + u32 len, num_sgs = 0; 630 + unsigned int i; 631 + dma_addr_t mem; 632 + int status; 633 + u64 llp = 0; 634 + u8 lms = 0; /* Select AXI0 master for LLI fetching */ 635 + 636 + if (unlikely(!is_slave_direction(direction) || !sg_len)) 637 + return NULL; 638 + 639 + mem = sg_dma_address(sgl); 640 + len = sg_dma_len(sgl); 641 + 642 + axi_block_len = calculate_block_len(chan, mem, len, direction); 643 + if (axi_block_len == 0) 644 + return NULL; 645 + 646 + for_each_sg(sgl, sg, sg_len, i) 647 + num_sgs += DIV_ROUND_UP(sg_dma_len(sg), axi_block_len); 648 + 649 + desc = axi_desc_alloc(num_sgs); 650 + if (unlikely(!desc)) 651 + goto err_desc_get; 652 + 653 + desc->chan = chan; 654 + desc->length = 0; 655 + chan->direction = direction; 656 + 657 + for_each_sg(sgl, sg, sg_len, i) { 658 + mem = sg_dma_address(sg); 659 + len = sg_dma_len(sg); 660 + num_segments = 
DIV_ROUND_UP(sg_dma_len(sg), axi_block_len); 661 + segment_len = DIV_ROUND_UP(sg_dma_len(sg), num_segments); 662 + 663 + do { 664 + hw_desc = &desc->hw_desc[loop++]; 665 + status = dw_axi_dma_set_hw_desc(chan, hw_desc, mem, segment_len); 666 + if (status < 0) 667 + goto err_desc_get; 668 + 669 + desc->length += hw_desc->len; 670 + len -= segment_len; 671 + mem += segment_len; 672 + } while (len >= segment_len); 673 + } 674 + 675 + /* Set end-of-link to the last link descriptor of list */ 676 + set_desc_last(&desc->hw_desc[num_sgs - 1]); 677 + 678 + /* Managed transfer list */ 679 + do { 680 + hw_desc = &desc->hw_desc[--num_sgs]; 681 + write_desc_llp(hw_desc, llp | lms); 682 + llp = hw_desc->llp; 683 + } while (num_sgs); 684 + 685 + dw_axi_dma_set_hw_channel(chan->chip, chan->hw_handshake_num, true); 686 + 687 + return vchan_tx_prep(&chan->vc, &desc->vd, flags); 688 + 689 + err_desc_get: 690 + if (desc) 691 + axi_desc_put(desc); 692 + 693 + return NULL; 562 694 } 563 695 564 696 static struct dma_async_tx_descriptor * 565 697 dma_chan_prep_dma_memcpy(struct dma_chan *dchan, dma_addr_t dst_adr, 566 698 dma_addr_t src_adr, size_t len, unsigned long flags) 567 699 { 568 - struct axi_dma_desc *first = NULL, *desc = NULL, *prev = NULL; 569 700 struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan); 570 701 size_t block_ts, max_block_ts, xfer_len; 571 - u32 xfer_width, reg; 702 + struct axi_dma_hw_desc *hw_desc = NULL; 703 + struct axi_dma_desc *desc = NULL; 704 + u32 xfer_width, reg, num; 705 + u64 llp = 0; 572 706 u8 lms = 0; /* Select AXI0 master for LLI fetching */ 573 707 574 708 dev_dbg(chan2dev(chan), "%s: memcpy: src: %pad dst: %pad length: %zd flags: %#lx", 575 709 axi_chan_name(chan), &src_adr, &dst_adr, len, flags); 576 710 577 711 max_block_ts = chan->chip->dw->hdata->block_size[chan->id]; 712 + xfer_width = axi_chan_get_xfer_width(chan, src_adr, dst_adr, len); 713 + num = DIV_ROUND_UP(len, max_block_ts << xfer_width); 714 + desc = axi_desc_alloc(num); 715 + 
if (unlikely(!desc)) 716 + goto err_desc_get; 578 717 718 + desc->chan = chan; 719 + num = 0; 720 + desc->length = 0; 579 721 while (len) { 580 722 xfer_len = len; 581 723 724 + hw_desc = &desc->hw_desc[num]; 582 725 /* 583 726 * Take care for the alignment. 584 727 * Actually source and destination widths can be different, but ··· 880 457 xfer_len = max_block_ts << xfer_width; 881 458 } 882 459 883 - desc = axi_desc_get(chan); 884 - if (unlikely(!desc)) 460 + hw_desc->lli = axi_desc_get(chan, &hw_desc->llp); 461 + if (unlikely(!hw_desc->lli)) 885 462 goto err_desc_get; 886 463 887 - write_desc_sar(desc, src_adr); 888 - write_desc_dar(desc, dst_adr); 889 - desc->lli.block_ts_lo = cpu_to_le32(block_ts - 1); 464 + write_desc_sar(hw_desc, src_adr); 465 + write_desc_dar(hw_desc, dst_adr); 466 + hw_desc->lli->block_ts_lo = cpu_to_le32(block_ts - 1); 890 467 891 468 reg = CH_CTL_H_LLI_VALID; 892 469 if (chan->chip->dw->hdata->restrict_axi_burst_len) { ··· 897 474 CH_CTL_H_AWLEN_EN | 898 475 burst_len << CH_CTL_H_AWLEN_POS); 899 476 } 900 - desc->lli.ctl_hi = cpu_to_le32(reg); 477 + hw_desc->lli->ctl_hi = cpu_to_le32(reg); 901 478 902 479 reg = (DWAXIDMAC_BURST_TRANS_LEN_4 << CH_CTL_L_DST_MSIZE_POS | 903 480 DWAXIDMAC_BURST_TRANS_LEN_4 << CH_CTL_L_SRC_MSIZE_POS | ··· 905 482 xfer_width << CH_CTL_L_SRC_WIDTH_POS | 906 483 DWAXIDMAC_CH_CTL_L_INC << CH_CTL_L_DST_INC_POS | 907 484 DWAXIDMAC_CH_CTL_L_INC << CH_CTL_L_SRC_INC_POS); 908 - desc->lli.ctl_lo = cpu_to_le32(reg); 485 + hw_desc->lli->ctl_lo = cpu_to_le32(reg); 909 486 910 - set_desc_src_master(desc); 911 - set_desc_dest_master(desc); 487 + set_desc_src_master(hw_desc); 488 + set_desc_dest_master(hw_desc, desc); 912 489 913 - /* Manage transfer list (xfer_list) */ 914 - if (!first) { 915 - first = desc; 916 - } else { 917 - list_add_tail(&desc->xfer_list, &first->xfer_list); 918 - write_desc_llp(prev, desc->vd.tx.phys | lms); 919 - } 920 - prev = desc; 921 - 490 + hw_desc->len = xfer_len; 491 + desc->length += 
hw_desc->len; 922 492 /* update the length and addresses for the next loop cycle */ 923 493 len -= xfer_len; 924 494 dst_adr += xfer_len; 925 495 src_adr += xfer_len; 496 + num++; 926 497 } 927 498 928 - /* Total len of src/dest sg == 0, so no descriptor were allocated */ 929 - if (unlikely(!first)) 930 - return NULL; 931 - 932 499 /* Set end-of-link to the last link descriptor of list */ 933 - set_desc_last(desc); 500 + set_desc_last(&desc->hw_desc[num - 1]); 501 + /* Managed transfer list */ 502 + do { 503 + hw_desc = &desc->hw_desc[--num]; 504 + write_desc_llp(hw_desc, llp | lms); 505 + llp = hw_desc->llp; 506 + } while (num); 934 507 935 - return vchan_tx_prep(&chan->vc, &first->vd, flags); 508 + return vchan_tx_prep(&chan->vc, &desc->vd, flags); 936 509 937 510 err_desc_get: 938 - if (first) 939 - axi_desc_put(first); 511 + if (desc) 512 + axi_desc_put(desc); 940 513 return NULL; 941 514 } 942 515 516 + static int dw_axi_dma_chan_slave_config(struct dma_chan *dchan, 517 + struct dma_slave_config *config) 518 + { 519 + struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan); 520 + 521 + memcpy(&chan->config, config, sizeof(*config)); 522 + 523 + return 0; 524 + } 525 + 943 526 static void axi_chan_dump_lli(struct axi_dma_chan *chan, 944 - struct axi_dma_desc *desc) 527 + struct axi_dma_hw_desc *desc) 945 528 { 946 529 dev_err(dchan2dev(&chan->vc.chan), 947 530 "SAR: 0x%llx DAR: 0x%llx LLP: 0x%llx BTS 0x%x CTL: 0x%x:%08x", 948 - le64_to_cpu(desc->lli.sar), 949 - le64_to_cpu(desc->lli.dar), 950 - le64_to_cpu(desc->lli.llp), 951 - le32_to_cpu(desc->lli.block_ts_lo), 952 - le32_to_cpu(desc->lli.ctl_hi), 953 - le32_to_cpu(desc->lli.ctl_lo)); 531 + le64_to_cpu(desc->lli->sar), 532 + le64_to_cpu(desc->lli->dar), 533 + le64_to_cpu(desc->lli->llp), 534 + le32_to_cpu(desc->lli->block_ts_lo), 535 + le32_to_cpu(desc->lli->ctl_hi), 536 + le32_to_cpu(desc->lli->ctl_lo)); 954 537 } 955 538 956 539 static void axi_chan_list_dump_lli(struct axi_dma_chan *chan, 957 540 struct 
axi_dma_desc *desc_head) 958 541 { 959 - struct axi_dma_desc *desc; 542 + int count = atomic_read(&chan->descs_allocated); 543 + int i; 960 544 961 - axi_chan_dump_lli(chan, desc_head); 962 - list_for_each_entry(desc, &desc_head->xfer_list, xfer_list) 963 - axi_chan_dump_lli(chan, desc); 545 + for (i = 0; i < count; i++) 546 + axi_chan_dump_lli(chan, &desc_head->hw_desc[i]); 964 547 } 965 548 966 549 static noinline void axi_chan_handle_err(struct axi_dma_chan *chan, u32 status) ··· 999 570 1000 571 static void axi_chan_block_xfer_complete(struct axi_dma_chan *chan) 1001 572 { 573 + int count = atomic_read(&chan->descs_allocated); 574 + struct axi_dma_hw_desc *hw_desc; 575 + struct axi_dma_desc *desc; 1002 576 struct virt_dma_desc *vd; 1003 577 unsigned long flags; 578 + u64 llp; 579 + int i; 1004 580 1005 581 spin_lock_irqsave(&chan->vc.lock, flags); 1006 582 if (unlikely(axi_chan_is_hw_enable(chan))) { ··· 1016 582 1017 583 /* The completed descriptor currently is in the head of vc list */ 1018 584 vd = vchan_next_desc(&chan->vc); 1019 - /* Remove the completed descriptor from issued list before completing */ 1020 - list_del(&vd->node); 1021 - vchan_cookie_complete(vd); 1022 585 1023 - /* Submit queued descriptors after processing the completed ones */ 1024 - axi_chan_start_first_queued(chan); 586 + if (chan->cyclic) { 587 + desc = vd_to_axi_desc(vd); 588 + if (desc) { 589 + llp = lo_hi_readq(chan->chan_regs + CH_LLP); 590 + for (i = 0; i < count; i++) { 591 + hw_desc = &desc->hw_desc[i]; 592 + if (hw_desc->llp == llp) { 593 + axi_chan_irq_clear(chan, hw_desc->lli->status_lo); 594 + hw_desc->lli->ctl_hi |= CH_CTL_H_LLI_VALID; 595 + desc->completed_blocks = i; 596 + 597 + if (((hw_desc->len * (i + 1)) % desc->period_len) == 0) 598 + vchan_cyclic_callback(vd); 599 + break; 600 + } 601 + } 602 + 603 + axi_chan_enable(chan); 604 + } 605 + } else { 606 + /* Remove the completed descriptor from issued list before completing */ 607 + list_del(&vd->node); 608 + 
vchan_cookie_complete(vd); 609 + 610 + /* Submit queued descriptors after processing the completed ones */ 611 + axi_chan_start_first_queued(chan); 612 + } 1025 613 1026 614 spin_unlock_irqrestore(&chan->vc.lock, flags); 1027 615 } ··· 1083 627 static int dma_chan_terminate_all(struct dma_chan *dchan) 1084 628 { 1085 629 struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan); 630 + u32 chan_active = BIT(chan->id) << DMAC_CHAN_EN_SHIFT; 1086 631 unsigned long flags; 632 + u32 val; 633 + int ret; 1087 634 LIST_HEAD(head); 1088 - 1089 - spin_lock_irqsave(&chan->vc.lock, flags); 1090 635 1091 636 axi_chan_disable(chan); 1092 637 638 + ret = readl_poll_timeout_atomic(chan->chip->regs + DMAC_CHEN, val, 639 + !(val & chan_active), 1000, 10000); 640 + if (ret == -ETIMEDOUT) 641 + dev_warn(dchan2dev(dchan), 642 + "%s failed to stop\n", axi_chan_name(chan)); 643 + 644 + if (chan->direction != DMA_MEM_TO_MEM) 645 + dw_axi_dma_set_hw_channel(chan->chip, 646 + chan->hw_handshake_num, false); 647 + if (chan->direction == DMA_MEM_TO_DEV) 648 + dw_axi_dma_set_byte_halfword(chan, false); 649 + 650 + spin_lock_irqsave(&chan->vc.lock, flags); 651 + 1093 652 vchan_get_all_descriptors(&chan->vc, &head); 1094 653 654 + chan->cyclic = false; 1095 655 spin_unlock_irqrestore(&chan->vc.lock, flags); 1096 656 1097 657 vchan_dma_desc_free_list(&chan->vc, &head); ··· 1218 746 return axi_dma_resume(chip); 1219 747 } 1220 748 749 + static struct dma_chan *dw_axi_dma_of_xlate(struct of_phandle_args *dma_spec, 750 + struct of_dma *ofdma) 751 + { 752 + struct dw_axi_dma *dw = ofdma->of_dma_data; 753 + struct axi_dma_chan *chan; 754 + struct dma_chan *dchan; 755 + 756 + dchan = dma_get_any_slave_channel(&dw->dma); 757 + if (!dchan) 758 + return NULL; 759 + 760 + chan = dchan_to_axi_dma_chan(dchan); 761 + chan->hw_handshake_num = dma_spec->args[0]; 762 + return dchan; 763 + } 764 + 1221 765 static int parse_device_properties(struct axi_dma_chip *chip) 1222 766 { 1223 767 struct device *dev = 
chip->dev; ··· 1304 816 1305 817 static int dw_probe(struct platform_device *pdev) 1306 818 { 819 + struct device_node *node = pdev->dev.of_node; 1307 820 struct axi_dma_chip *chip; 1308 821 struct resource *mem; 1309 822 struct dw_axi_dma *dw; ··· 1337 848 if (IS_ERR(chip->regs)) 1338 849 return PTR_ERR(chip->regs); 1339 850 851 + if (of_device_is_compatible(node, "intel,kmb-axi-dma")) { 852 + chip->apb_regs = devm_platform_ioremap_resource(pdev, 1); 853 + if (IS_ERR(chip->apb_regs)) 854 + return PTR_ERR(chip->apb_regs); 855 + } 856 + 1340 857 chip->core_clk = devm_clk_get(chip->dev, "core-clk"); 1341 858 if (IS_ERR(chip->core_clk)) 1342 859 return PTR_ERR(chip->core_clk); ··· 1365 870 if (ret) 1366 871 return ret; 1367 872 1368 - /* Lli address must be aligned to a 64-byte boundary */ 1369 - dw->desc_pool = dmam_pool_create(KBUILD_MODNAME, chip->dev, 1370 - sizeof(struct axi_dma_desc), 64, 0); 1371 - if (!dw->desc_pool) { 1372 - dev_err(chip->dev, "No memory for descriptors dma pool\n"); 1373 - return -ENOMEM; 1374 - } 1375 873 1376 874 INIT_LIST_HEAD(&dw->dma.channels); 1377 875 for (i = 0; i < hdata->nr_channels; i++) { ··· 1381 893 1382 894 /* Set capabilities */ 1383 895 dma_cap_set(DMA_MEMCPY, dw->dma.cap_mask); 896 + dma_cap_set(DMA_SLAVE, dw->dma.cap_mask); 897 + dma_cap_set(DMA_CYCLIC, dw->dma.cap_mask); 1384 898 1385 899 /* DMA capabilities */ 1386 900 dw->dma.chancnt = hdata->nr_channels; 1387 901 dw->dma.src_addr_widths = AXI_DMA_BUSWIDTHS; 1388 902 dw->dma.dst_addr_widths = AXI_DMA_BUSWIDTHS; 1389 903 dw->dma.directions = BIT(DMA_MEM_TO_MEM); 1390 - dw->dma.residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR; 904 + dw->dma.directions |= BIT(DMA_MEM_TO_DEV) | BIT(DMA_DEV_TO_MEM); 905 + dw->dma.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST; 1391 906 1392 907 dw->dma.dev = chip->dev; 1393 908 dw->dma.device_tx_status = dma_chan_tx_status; ··· 1403 912 dw->dma.device_free_chan_resources = dma_chan_free_chan_resources; 1404 913 1405 914 
dw->dma.device_prep_dma_memcpy = dma_chan_prep_dma_memcpy; 915 + dw->dma.device_synchronize = dw_axi_dma_synchronize; 916 + dw->dma.device_config = dw_axi_dma_chan_slave_config; 917 + dw->dma.device_prep_slave_sg = dw_axi_dma_chan_prep_slave_sg; 918 + dw->dma.device_prep_dma_cyclic = dw_axi_dma_chan_prep_cyclic; 1406 919 920 + /* 921 + * The Synopsys DesignWare AxiDMA datasheet states the maximum 922 + * number of supported blocks is 1024. The device register width 923 + * is 4 bytes, so set the constraint to 1024 * 4. 924 + */ 925 + dw->dma.dev->dma_parms = &dw->dma_parms; 926 + dma_set_max_seg_size(&pdev->dev, MAX_BLOCK_SIZE); 1407 927 platform_set_drvdata(pdev, chip); 1408 928 1409 929 pm_runtime_enable(chip->dev); ··· 1436 934 ret = dmaenginem_async_device_register(&dw->dma); 1437 935 if (ret) 1438 936 goto err_pm_disable; 937 + 938 + /* Register with OF helpers for DMA lookups */ 939 + ret = of_dma_controller_register(pdev->dev.of_node, 940 + dw_axi_dma_of_xlate, dw); 941 + if (ret < 0) 942 + dev_warn(&pdev->dev, 943 + "Failed to register OF DMA controller, fallback to MEM_TO_MEM mode\n"); 1439 944 1440 945 dev_info(chip->dev, "DesignWare AXI DMA Controller, %d channels\n", 1441 946 dw->hdata->nr_channels); ··· 1477 968 1478 969 devm_free_irq(chip->dev, chip->irq, chip); 1479 970 971 + of_dma_controller_free(chip->dev->of_node); 972 + 1480 973 list_for_each_entry_safe(chan, _chan, &dw->dma.channels, 1481 974 vc.chan.device_node) { 1482 975 list_del(&chan->vc.chan.device_node); ··· 1494 983 1495 984 static const struct of_device_id dw_dma_of_id_table[] = { 1496 985 { .compatible = "snps,axi-dma-1.01a" }, 986 + { .compatible = "intel,kmb-axi-dma" }, 1497 987 {} 1498 988 }; 1499 989 MODULE_DEVICE_TABLE(of, dw_dma_of_id_table);
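The segment-size comment above encodes a simple constraint: 1024 blocks at a 4-byte register width cap one hardware segment at 0x1000 bytes. A userspace sketch of that arithmetic and the resulting split count (the names are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative mirror of the driver's constraint: the AxiDMA supports
 * at most 1024 blocks, and the device register width is 4 bytes, so
 * the maximum DMA segment size is 1024 * 4 = 0x1000 bytes. */
#define AXI_DMA_MAX_BLOCKS 1024u
#define AXI_DMA_REG_WIDTH  4u
#define AXI_DMA_MAX_SEG    (AXI_DMA_MAX_BLOCKS * AXI_DMA_REG_WIDTH)

/* How many hardware segments a transfer of `len` bytes needs. */
static uint32_t axi_dma_nr_segs(uint32_t len)
{
	return (len + AXI_DMA_MAX_SEG - 1) / AXI_DMA_MAX_SEG;
}
```

This is why the probe path calls `dma_set_max_seg_size()`: clients then split larger buffers before handing them to the controller.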
+31 -3
drivers/dma/dw-axi-dmac/dw-axi-dmac.h
··· 37 37 struct axi_dma_chip *chip; 38 38 void __iomem *chan_regs; 39 39 u8 id; 40 + u8 hw_handshake_num; 40 41 atomic_t descs_allocated; 41 42 43 + struct dma_pool *desc_pool; 42 44 struct virt_dma_chan vc; 43 45 46 + struct axi_dma_desc *desc; 47 + struct dma_slave_config config; 48 + enum dma_transfer_direction direction; 49 + bool cyclic; 44 50 /* these other elements are all protected by vc.lock */ 45 51 bool is_paused; 46 52 }; ··· 54 48 struct dw_axi_dma { 55 49 struct dma_device dma; 56 50 struct dw_axi_dma_hcfg *hdata; 57 - struct dma_pool *desc_pool; 51 + struct device_dma_parameters dma_parms; 58 52 59 53 /* channels */ 60 54 struct axi_dma_chan *chan; ··· 64 58 struct device *dev; 65 59 int irq; 66 60 void __iomem *regs; 61 + void __iomem *apb_regs; 67 62 struct clk *core_clk; 68 63 struct clk *cfgr_clk; 69 64 struct dw_axi_dma *dw; ··· 87 80 __le32 reserved_hi; 88 81 }; 89 82 83 + struct axi_dma_hw_desc { 84 + struct axi_dma_lli *lli; 85 + dma_addr_t llp; 86 + u32 len; 87 + }; 88 + 90 89 struct axi_dma_desc { 91 - struct axi_dma_lli lli; 90 + struct axi_dma_hw_desc *hw_desc; 92 91 93 92 struct virt_dma_desc vd; 94 93 struct axi_dma_chan *chan; 95 - struct list_head xfer_list; 94 + u32 completed_blocks; 95 + u32 length; 96 + u32 period_len; 96 97 }; 97 98 98 99 static inline struct device *dchan2dev(struct dma_chan *dchan) ··· 172 157 #define CH_INTSIGNAL_ENA 0x090 /* R/W Chan Interrupt Signal Enable */ 173 158 #define CH_INTCLEAR 0x098 /* W Chan Interrupt Clear */ 174 159 160 + /* These Apb registers are used by Intel KeemBay SoC */ 161 + #define DMAC_APB_CFG 0x000 /* DMAC Apb Configuration Register */ 162 + #define DMAC_APB_STAT 0x004 /* DMAC Apb Status Register */ 163 + #define DMAC_APB_DEBUG_STAT_0 0x008 /* DMAC Apb Debug Status Register 0 */ 164 + #define DMAC_APB_DEBUG_STAT_1 0x00C /* DMAC Apb Debug Status Register 1 */ 165 + #define DMAC_APB_HW_HS_SEL_0 0x010 /* DMAC Apb HW HS register 0 */ 166 + #define DMAC_APB_HW_HS_SEL_1 0x014 /* DMAC Apb HW 
HS register 1 */ 167 + #define DMAC_APB_LPI 0x018 /* DMAC Apb Low Power Interface Reg */ 168 + #define DMAC_APB_BYTE_WR_CH_EN 0x01C /* DMAC Apb Byte Write Enable */ 169 + #define DMAC_APB_HALFWORD_WR_CH_EN 0x020 /* DMAC Halfword write enables */ 170 + 171 + #define UNUSED_CHANNEL 0x3F /* Set unused DMA channel to 0x3F */ 172 + #define MAX_BLOCK_SIZE 0x1000 /* 1024 blocks * 4 bytes data width */ 175 173 176 174 /* DMAC_CFG */ 177 175 #define DMAC_EN_POS 0
+6
drivers/dma/fsldma.c
··· 1214 1214 { 1215 1215 struct fsldma_device *fdev; 1216 1216 struct device_node *child; 1217 + unsigned int i; 1217 1218 int err; 1218 1219 1219 1220 fdev = kzalloc(sizeof(*fdev), GFP_KERNEL); ··· 1293 1292 return 0; 1294 1293 1295 1294 out_free_fdev: 1295 + for (i = 0; i < FSL_DMA_MAX_CHANS_PER_DEVICE; i++) { 1296 + if (fdev->chan[i]) 1297 + fsl_dma_chan_remove(fdev->chan[i]); 1298 + } 1296 1299 irq_dispose_mapping(fdev->irq); 1297 1300 iounmap(fdev->regs); 1298 1301 out_free: ··· 1319 1314 if (fdev->chan[i]) 1320 1315 fsl_dma_chan_remove(fdev->chan[i]); 1321 1316 } 1317 + irq_dispose_mapping(fdev->irq); 1322 1318 1323 1319 iounmap(fdev->regs); 1324 1320 kfree(fdev);
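The fsldma fix above completes the probe error path: channels created before the failure must be removed, and the IRQ mapping disposed on both the error and remove paths. The unwind idiom, sketched with a toy device (all names hypothetical; the caller zero-initializes the device so unallocated slots are NULL):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define MAX_CHANS 8

/* Toy model of probe-time unwind: on failure, every channel created
 * so far must be torn down, mirroring the out_free_fdev fix above. */
struct toy_dev {
	void *chan[MAX_CHANS];
};

static int toy_probe(struct toy_dev *d, int fail_at)
{
	int i;

	for (i = 0; i < MAX_CHANS; i++) {
		if (i == fail_at)
			goto out_free; /* simulate a chan-probe failure */
		d->chan[i] = malloc(16);
	}
	return 0;

out_free:
	for (i = 0; i < MAX_CHANS; i++) {
		free(d->chan[i]); /* free(NULL) is a no-op */
		d->chan[i] = NULL;
	}
	return -1;
}
```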
+11 -10
drivers/dma/hsu/pci.c
··· 26 26 static irqreturn_t hsu_pci_irq(int irq, void *dev) 27 27 { 28 28 struct hsu_dma_chip *chip = dev; 29 - struct pci_dev *pdev = to_pci_dev(chip->dev); 30 29 u32 dmaisr; 31 30 u32 status; 32 31 unsigned short i; 33 32 int ret = 0; 34 33 int err; 35 - 36 - /* 37 - * On Intel Tangier B0 and Anniedale the interrupt line, disregarding 38 - * to have different numbers, is shared between HSU DMA and UART IPs. 39 - * Thus on such SoCs we are expecting that IRQ handler is called in 40 - * UART driver only. 41 - */ 42 - if (pdev->device == PCI_DEVICE_ID_INTEL_MRFLD_HSU_DMA) 43 - return IRQ_HANDLED; 44 34 45 35 dmaisr = readl(chip->regs + HSU_PCI_DMAISR); 46 36 for (i = 0; i < chip->hsu->nr_channels; i++) { ··· 94 104 ret = request_irq(chip->irq, hsu_pci_irq, 0, "hsu_dma_pci", chip); 95 105 if (ret) 96 106 goto err_register_irq; 107 + 108 + /* 109 + * On Intel Tangier B0 and Anniedale the interrupt line, despite 110 + * having different numbers, is shared between the HSU DMA and UART 111 + * IPs. Thus on such SoCs we expect the IRQ handler to be invoked 112 + * by the UART driver only. Instead of handling the spurious 113 + * interrupt from HSU DMA here, wasting CPU time and delaying HSU 114 + * UART interrupt handling, disable the interrupt entirely. 115 + */ 116 + if (pdev->device == PCI_DEVICE_ID_INTEL_MRFLD_HSU_DMA) 117 + disable_irq_nosync(chip->irq); 97 118 98 119 pci_set_drvdata(pdev, chip); 99 120
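The handler above reads one status word and walks the channels. The dispatch pattern can be sketched in isolation; this is a loose model with hypothetical names, not the actual HSU register semantics:

```c
#include <assert.h>
#include <stdint.h>

/* Walk a per-channel interrupt status bitmap and invoke a handler for
 * each pending channel; returns how many were handled, which an ISR
 * would use to decide between IRQ_HANDLED and IRQ_NONE. */
static int dispatch_pending(uint32_t status, int nr_channels,
			    void (*handle)(int chan, void *ctx), void *ctx)
{
	int i, handled = 0;

	for (i = 0; i < nr_channels; i++) {
		if (status & (1u << i)) {
			handle(i, ctx);
			handled++;
		}
	}
	return handled;
}

/* Example handler: count invocations per channel in a caller array. */
static void record_chan(int chan, void *ctx)
{
	((int *)ctx)[chan]++;
}
```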
+1
drivers/dma/idxd/dma.c
··· 165 165 INIT_LIST_HEAD(&dma->channels); 166 166 dma->dev = &idxd->pdev->dev; 167 167 168 + dma_cap_set(DMA_PRIVATE, dma->cap_mask); 168 169 dma_cap_set(DMA_COMPLETION_NO_ORDER, dma->cap_mask); 169 170 dma->device_release = idxd_dma_release; 170 171
+8 -3
drivers/dma/idxd/init.c
··· 26 26 MODULE_LICENSE("GPL v2"); 27 27 MODULE_AUTHOR("Intel Corporation"); 28 28 29 + static bool sva = true; 30 + module_param(sva, bool, 0644); 31 + MODULE_PARM_DESC(sva, "Toggle SVA support on/off"); 32 + 29 33 #define DRV_NAME "idxd" 30 34 31 35 bool support_enqcmd; 32 36 33 37 static struct idr idxd_idrs[IDXD_TYPE_MAX]; 34 - static struct mutex idxd_idr_lock; 38 + static DEFINE_MUTEX(idxd_idr_lock); 35 39 36 40 static struct pci_device_id idxd_pci_tbl[] = { 37 41 /* DSA ver 1.0 platforms */ ··· 345 341 346 342 dev_dbg(dev, "IDXD reset complete\n"); 347 343 348 - if (IS_ENABLED(CONFIG_INTEL_IDXD_SVM)) { 344 + if (IS_ENABLED(CONFIG_INTEL_IDXD_SVM) && sva) { 349 345 rc = idxd_enable_system_pasid(idxd); 350 346 if (rc < 0) 351 347 dev_warn(dev, "Failed to enable PASID. No SVA support: %d\n", rc); 352 348 else 353 349 set_bit(IDXD_FLAG_PASID_ENABLED, &idxd->flags); 350 + } else if (!sva) { 351 + dev_warn(dev, "User forced SVA off via module param.\n"); 354 352 } 355 353 356 354 idxd_read_caps(idxd); ··· 553 547 else 554 548 support_enqcmd = true; 555 549 556 - mutex_init(&idxd_idr_lock); 557 550 for (i = 0; i < IDXD_TYPE_MAX; i++) 558 551 idr_init(&idxd_idrs[i]); 559 552
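The probe change above gates PASID enabling on both the build option and the new `sva` module parameter, which defaults to true. The decision reduces to a two-input AND, sketched here with an illustrative function name:

```c
#include <assert.h>
#include <stdbool.h>

/* PASID/SVA is enabled only when the kernel was built with
 * CONFIG_INTEL_IDXD_SVM *and* the `sva` module parameter is true;
 * since `sva` defaults to true, SVM-enabled builds keep it on unless
 * the user explicitly turns it off. */
static bool idxd_pasid_enabled(bool config_svm, bool sva_param)
{
	return config_svm && sva_param;
}
```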
+11 -35
drivers/dma/imx-sdma.c
··· 1952 1952 1953 1953 static int sdma_probe(struct platform_device *pdev) 1954 1954 { 1955 - const struct of_device_id *of_id = 1956 - of_match_device(sdma_dt_ids, &pdev->dev); 1957 1955 struct device_node *np = pdev->dev.of_node; 1958 1956 struct device_node *spba_bus; 1959 1957 const char *fw_name; ··· 1959 1961 int irq; 1960 1962 struct resource *iores; 1961 1963 struct resource spba_res; 1962 - struct sdma_platform_data *pdata = dev_get_platdata(&pdev->dev); 1963 1964 int i; 1964 1965 struct sdma_engine *sdma; 1965 1966 s32 *saddr_arr; 1966 - const struct sdma_driver_data *drvdata = NULL; 1967 - 1968 - drvdata = of_id->data; 1969 - if (!drvdata) { 1970 - dev_err(&pdev->dev, "unable to find driver data\n"); 1971 - return -EINVAL; 1972 - } 1973 1967 1974 1968 ret = dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); 1975 1969 if (ret) ··· 1974 1984 spin_lock_init(&sdma->channel_0_lock); 1975 1985 1976 1986 sdma->dev = &pdev->dev; 1977 - sdma->drvdata = drvdata; 1987 + sdma->drvdata = of_device_get_match_data(sdma->dev); 1978 1988 1979 1989 irq = platform_get_irq(pdev, 0); 1980 1990 if (irq < 0) ··· 2053 2063 2054 2064 if (sdma->drvdata->script_addrs) 2055 2065 sdma_add_scripts(sdma, sdma->drvdata->script_addrs); 2056 - if (pdata && pdata->script_addrs) 2057 - sdma_add_scripts(sdma, pdata->script_addrs); 2058 2066 2059 2067 sdma->dma_device.dev = &pdev->dev; 2060 2068 ··· 2098 2110 } 2099 2111 2100 2112 /* 2101 - * Kick off firmware loading as the very last step: 2102 - * attempt to load firmware only if we're not on the error path, because 2103 - * the firmware callback requires a fully functional and allocated sdma 2104 - * instance. 2113 + * Because the device tree does not encode the ROM script address, 2114 + * the RAM script in the firmware is mandatory for device tree 2115 + * probe; otherwise it fails.
2105 2116 */ 2106 - if (pdata) { 2107 - ret = sdma_get_firmware(sdma, pdata->fw_name); 2108 - if (ret) 2109 - dev_warn(&pdev->dev, "failed to get firmware from platform data\n"); 2117 + ret = of_property_read_string(np, "fsl,sdma-ram-script-name", 2118 + &fw_name); 2119 + if (ret) { 2120 + dev_warn(&pdev->dev, "failed to get firmware name\n"); 2110 2121 } else { 2111 - /* 2112 - * Because that device tree does not encode ROM script address, 2113 - * the RAM script in firmware is mandatory for device tree 2114 - * probe, otherwise it fails. 2115 - */ 2116 - ret = of_property_read_string(np, "fsl,sdma-ram-script-name", 2117 - &fw_name); 2118 - if (ret) { 2119 - dev_warn(&pdev->dev, "failed to get firmware name\n"); 2120 - } else { 2121 - ret = sdma_get_firmware(sdma, fw_name); 2122 - if (ret) 2123 - dev_warn(&pdev->dev, "failed to get firmware from device tree\n"); 2124 - } 2122 + ret = sdma_get_firmware(sdma, fw_name); 2123 + if (ret) 2124 + dev_warn(&pdev->dev, "failed to get firmware from device tree\n"); 2125 2125 } 2126 2126 2127 2127 return 0;
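With platform-data support dropped, the firmware name now comes solely from the `fsl,sdma-ram-script-name` DT property, and its absence is a warning rather than a fatal error. A toy model of that lookup-with-fallback (the property table type and the firmware filename are illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy stand-in for the DT string lookup in sdma_probe(): resolve the
 * RAM script firmware name from a property list; a missing property
 * returns NULL and the caller warns but continues probing. */
struct toy_prop {
	const char *name;
	const char *value;
};

static const char *toy_read_string(const struct toy_prop *props, size_t n,
				   const char *name)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (strcmp(props[i].name, name) == 0)
			return props[i].value;
	return NULL; /* property missing: probe continues without firmware */
}
```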
+10
drivers/dma/lgm/Kconfig
··· 1 + # SPDX-License-Identifier: GPL-2.0-only 2 + config INTEL_LDMA 3 + bool "Lightning Mountain centralized DMA controllers" 4 + depends on X86 || COMPILE_TEST 5 + select DMA_ENGINE 6 + select DMA_VIRTUAL_CHANNELS 7 + help 8 + Enable support for Intel Lightning Mountain SoC DMA controllers. 9 + These controllers provide DMA capabilities for a variety of on-chip 10 + devices such as HSNAND and GSWIP (Gigabit Switch IP).
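The LGM controller this option enables can advertise 36-bit addressing (`DMA_ID_AW_36B` in the driver below); a descriptor base is then programmed as a low 32-bit register plus a 4-bit MSB field. The split, sketched with illustrative helper names:

```c
#include <assert.h>
#include <stdint.h>

/* Split a 36-bit descriptor base the way the driver's
 * ldma_chan_desc_hw_cfg() does: the lower 32 bits go into one
 * register, the next 4 bits into a separate MSB field. */
static uint32_t desc_base_lo(uint64_t base)
{
	return (uint32_t)(base & 0xFFFFFFFFu);
}

static uint32_t desc_base_msb(uint64_t base)
{
	return (uint32_t)((base >> 32) & 0xFu); /* upper 4 of 36 bits */
}
```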
+2
drivers/dma/lgm/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + obj-$(CONFIG_INTEL_LDMA) += lgm-dma.o
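The lgm-dma.c driver below funnels nearly every register update through a read-modify-write helper that skips the write when no bits change. That pattern in isolation, against an in-memory "register" (a sketch; the real driver operates on MMIO under a spinlock):

```c
#include <assert.h>
#include <stdint.h>

/* Read-modify-write update of a 32-bit register: clear `mask`, merge
 * the masked `val`, and skip the store when nothing changes, mirroring
 * ldma_update_bits() but on plain memory instead of MMIO. Returns
 * whether a write happened, purely for testability of the sketch. */
static int update_bits(uint32_t *reg, uint32_t mask, uint32_t val)
{
	uint32_t old = *reg;
	uint32_t new = (old & ~mask) | (val & mask);

	if (new == old)
		return 0;
	*reg = new;
	return 1;
}
```

Skipping redundant writes matters on MMIO because every store crosses the bus; on plain memory it is merely a micro-optimization.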
+1739
drivers/dma/lgm/lgm-dma.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Lightning Mountain centralized DMA controller driver 4 + * 5 + * Copyright (c) 2016 - 2020 Intel Corporation. 6 + */ 7 + 8 + #include <linux/bitfield.h> 9 + #include <linux/clk.h> 10 + #include <linux/dma-mapping.h> 11 + #include <linux/dmapool.h> 12 + #include <linux/err.h> 13 + #include <linux/export.h> 14 + #include <linux/init.h> 15 + #include <linux/interrupt.h> 16 + #include <linux/iopoll.h> 17 + #include <linux/of_dma.h> 18 + #include <linux/of_irq.h> 19 + #include <linux/platform_device.h> 20 + #include <linux/reset.h> 21 + 22 + #include "../dmaengine.h" 23 + #include "../virt-dma.h" 24 + 25 + #define DRIVER_NAME "lgm-dma" 26 + 27 + #define DMA_ID 0x0008 28 + #define DMA_ID_REV GENMASK(7, 0) 29 + #define DMA_ID_PNR GENMASK(19, 16) 30 + #define DMA_ID_CHNR GENMASK(26, 20) 31 + #define DMA_ID_DW_128B BIT(27) 32 + #define DMA_ID_AW_36B BIT(28) 33 + #define DMA_VER32 0x32 34 + #define DMA_VER31 0x31 35 + #define DMA_VER22 0x0A 36 + 37 + #define DMA_CTRL 0x0010 38 + #define DMA_CTRL_RST BIT(0) 39 + #define DMA_CTRL_DSRAM_PATH BIT(1) 40 + #define DMA_CTRL_DBURST_WR BIT(3) 41 + #define DMA_CTRL_VLD_DF_ACK BIT(4) 42 + #define DMA_CTRL_CH_FL BIT(6) 43 + #define DMA_CTRL_DS_FOD BIT(7) 44 + #define DMA_CTRL_DRB BIT(8) 45 + #define DMA_CTRL_ENBE BIT(9) 46 + #define DMA_CTRL_DESC_TMOUT_CNT_V31 GENMASK(27, 16) 47 + #define DMA_CTRL_DESC_TMOUT_EN_V31 BIT(30) 48 + #define DMA_CTRL_PKTARB BIT(31) 49 + 50 + #define DMA_CPOLL 0x0014 51 + #define DMA_CPOLL_CNT GENMASK(15, 4) 52 + #define DMA_CPOLL_EN BIT(31) 53 + 54 + #define DMA_CS 0x0018 55 + #define DMA_CS_MASK GENMASK(5, 0) 56 + 57 + #define DMA_CCTRL 0x001C 58 + #define DMA_CCTRL_ON BIT(0) 59 + #define DMA_CCTRL_RST BIT(1) 60 + #define DMA_CCTRL_CH_POLL_EN BIT(2) 61 + #define DMA_CCTRL_CH_ABC BIT(3) /* Adaptive Burst Chop */ 62 + #define DMA_CDBA_MSB GENMASK(7, 4) 63 + #define DMA_CCTRL_DIR_TX BIT(8) 64 + #define DMA_CCTRL_CLASS GENMASK(11, 9) 65 + #define 
DMA_CCTRL_CLASSH GENMASK(19, 18) 66 + #define DMA_CCTRL_WR_NP_EN BIT(21) 67 + #define DMA_CCTRL_PDEN BIT(23) 68 + #define DMA_MAX_CLASS (SZ_32 - 1) 69 + 70 + #define DMA_CDBA 0x0020 71 + #define DMA_CDLEN 0x0024 72 + #define DMA_CIS 0x0028 73 + #define DMA_CIE 0x002C 74 + #define DMA_CI_EOP BIT(1) 75 + #define DMA_CI_DUR BIT(2) 76 + #define DMA_CI_DESCPT BIT(3) 77 + #define DMA_CI_CHOFF BIT(4) 78 + #define DMA_CI_RDERR BIT(5) 79 + #define DMA_CI_ALL \ 80 + (DMA_CI_EOP | DMA_CI_DUR | DMA_CI_DESCPT | DMA_CI_CHOFF | DMA_CI_RDERR) 81 + 82 + #define DMA_PS 0x0040 83 + #define DMA_PCTRL 0x0044 84 + #define DMA_PCTRL_RXBL16 BIT(0) 85 + #define DMA_PCTRL_TXBL16 BIT(1) 86 + #define DMA_PCTRL_RXBL GENMASK(3, 2) 87 + #define DMA_PCTRL_RXBL_8 3 88 + #define DMA_PCTRL_TXBL GENMASK(5, 4) 89 + #define DMA_PCTRL_TXBL_8 3 90 + #define DMA_PCTRL_PDEN BIT(6) 91 + #define DMA_PCTRL_RXBL32 BIT(7) 92 + #define DMA_PCTRL_RXENDI GENMASK(9, 8) 93 + #define DMA_PCTRL_TXENDI GENMASK(11, 10) 94 + #define DMA_PCTRL_TXBL32 BIT(15) 95 + #define DMA_PCTRL_MEM_FLUSH BIT(16) 96 + 97 + #define DMA_IRNEN1 0x00E8 98 + #define DMA_IRNCR1 0x00EC 99 + #define DMA_IRNEN 0x00F4 100 + #define DMA_IRNCR 0x00F8 101 + #define DMA_C_DP_TICK 0x100 102 + #define DMA_C_DP_TICK_TIKNARB GENMASK(15, 0) 103 + #define DMA_C_DP_TICK_TIKARB GENMASK(31, 16) 104 + 105 + #define DMA_C_HDRM 0x110 106 + /* 107 + * If header mode is set in the DMA descriptor: 108 + * If bit 30 is disabled, HDR_LEN must be configured according to channel 109 + * requirement. 110 + * If bit 30 is enabled (checksum with header mode), HDR_LEN does not need to 111 + * be configured.
It enables checksum for the switch 112 + * If header mode is not set in the DMA descriptor, 113 + * this register setting doesn't matter. 114 + */ 115 + #define DMA_C_HDRM_HDR_SUM BIT(30) 116 + 117 + #define DMA_C_BOFF 0x120 118 + #define DMA_C_BOFF_BOF_LEN GENMASK(7, 0) 119 + #define DMA_C_BOFF_EN BIT(31) 120 + 121 + #define DMA_ORRC 0x190 122 + #define DMA_ORRC_ORRCNT GENMASK(8, 4) 123 + #define DMA_ORRC_EN BIT(31) 124 + 125 + #define DMA_C_ENDIAN 0x200 126 + #define DMA_C_END_DATAENDI GENMASK(1, 0) 127 + #define DMA_C_END_DE_EN BIT(7) 128 + #define DMA_C_END_DESENDI GENMASK(9, 8) 129 + #define DMA_C_END_DES_EN BIT(16) 130 + 131 + /* DMA controller capability */ 132 + #define DMA_ADDR_36BIT BIT(0) 133 + #define DMA_DATA_128BIT BIT(1) 134 + #define DMA_CHAN_FLOW_CTL BIT(2) 135 + #define DMA_DESC_FOD BIT(3) 136 + #define DMA_DESC_IN_SRAM BIT(4) 137 + #define DMA_EN_BYTE_EN BIT(5) 138 + #define DMA_DBURST_WR BIT(6) 139 + #define DMA_VALID_DESC_FETCH_ACK BIT(7) 140 + #define DMA_DFT_DRB BIT(8) 141 + 142 + #define DMA_ORRC_MAX_CNT (SZ_32 - 1) 143 + #define DMA_DFT_POLL_CNT SZ_4 144 + #define DMA_DFT_BURST_V22 SZ_2 145 + #define DMA_BURSTL_8DW SZ_8 146 + #define DMA_BURSTL_16DW SZ_16 147 + #define DMA_BURSTL_32DW SZ_32 148 + #define DMA_DFT_BURST DMA_BURSTL_16DW 149 + #define DMA_MAX_DESC_NUM (SZ_8K - 1) 150 + #define DMA_CHAN_BOFF_MAX (SZ_256 - 1) 151 + #define DMA_DFT_ENDIAN 0 152 + 153 + #define DMA_DFT_DESC_TCNT 50 154 + #define DMA_HDR_LEN_MAX (SZ_16K - 1) 155 + 156 + /* DMA flags */ 157 + #define DMA_TX_CH BIT(0) 158 + #define DMA_RX_CH BIT(1) 159 + #define DEVICE_ALLOC_DESC BIT(2) 160 + #define CHAN_IN_USE BIT(3) 161 + #define DMA_HW_DESC BIT(4) 162 + 163 + /* Descriptor fields */ 164 + #define DESC_DATA_LEN GENMASK(15, 0) 165 + #define DESC_BYTE_OFF GENMASK(25, 23) 166 + #define DESC_EOP BIT(28) 167 + #define DESC_SOP BIT(29) 168 + #define DESC_C BIT(30) 169 + #define DESC_OWN BIT(31) 170 + 171 + #define DMA_CHAN_RST 1 172 + #define DMA_MAX_SIZE (BIT(16) - 1) 173 + 
#define MAX_LOWER_CHANS 32 174 + #define MASK_LOWER_CHANS GENMASK(4, 0) 175 + #define DMA_OWN 1 176 + #define HIGH_4_BITS GENMASK(3, 0) 177 + #define DMA_DFT_DESC_NUM 1 178 + #define DMA_PKT_DROP_DIS 0 179 + 180 + enum ldma_chan_on_off { 181 + DMA_CH_OFF = 0, 182 + DMA_CH_ON = 1, 183 + }; 184 + 185 + enum { 186 + DMA_TYPE_TX = 0, 187 + DMA_TYPE_RX, 188 + DMA_TYPE_MCPY, 189 + }; 190 + 191 + struct ldma_dev; 192 + struct ldma_port; 193 + 194 + struct ldma_chan { 195 + struct virt_dma_chan vchan; 196 + struct ldma_port *port; /* back pointer */ 197 + char name[8]; /* Channel name */ 198 + int nr; /* Channel id in hardware */ 199 + u32 flags; /* central way or channel based way */ 200 + enum ldma_chan_on_off onoff; 201 + dma_addr_t desc_phys; 202 + void *desc_base; /* Virtual address */ 203 + u32 desc_cnt; /* Number of descriptors */ 204 + int rst; 205 + u32 hdrm_len; 206 + bool hdrm_csum; 207 + u32 boff_len; 208 + u32 data_endian; 209 + u32 desc_endian; 210 + bool pden; 211 + bool desc_rx_np; 212 + bool data_endian_en; 213 + bool desc_endian_en; 214 + bool abc_en; 215 + bool desc_init; 216 + struct dma_pool *desc_pool; /* Descriptors pool */ 217 + u32 desc_num; 218 + struct dw2_desc_sw *ds; 219 + struct work_struct work; 220 + struct dma_slave_config config; 221 + }; 222 + 223 + struct ldma_port { 224 + struct ldma_dev *ldev; /* back pointer */ 225 + u32 portid; 226 + u32 rxbl; 227 + u32 txbl; 228 + u32 rxendi; 229 + u32 txendi; 230 + u32 pkt_drop; 231 + }; 232 + 233 + /* Instance specific data */ 234 + struct ldma_inst_data { 235 + bool desc_in_sram; 236 + bool chan_fc; 237 + bool desc_fod; /* Fetch On Demand */ 238 + bool valid_desc_fetch_ack; 239 + u32 orrc; /* Outstanding read count */ 240 + const char *name; 241 + u32 type; 242 + }; 243 + 244 + struct ldma_dev { 245 + struct device *dev; 246 + void __iomem *base; 247 + struct reset_control *rst; 248 + struct clk *core_clk; 249 + struct dma_device dma_dev; 250 + u32 ver; 251 + int irq; 252 + struct ldma_port 
*ports; 253 + struct ldma_chan *chans; /* channel list on this DMA or port */ 254 + spinlock_t dev_lock; /* Controller register exclusive */ 255 + u32 chan_nrs; 256 + u32 port_nrs; 257 + u32 channels_mask; 258 + u32 flags; 259 + u32 pollcnt; 260 + const struct ldma_inst_data *inst; 261 + struct workqueue_struct *wq; 262 + }; 263 + 264 + struct dw2_desc { 265 + u32 field; 266 + u32 addr; 267 + } __packed __aligned(8); 268 + 269 + struct dw2_desc_sw { 270 + struct virt_dma_desc vdesc; 271 + struct ldma_chan *chan; 272 + dma_addr_t desc_phys; 273 + size_t desc_cnt; 274 + size_t size; 275 + struct dw2_desc *desc_hw; 276 + }; 277 + 278 + static inline void 279 + ldma_update_bits(struct ldma_dev *d, u32 mask, u32 val, u32 ofs) 280 + { 281 + u32 old_val, new_val; 282 + 283 + old_val = readl(d->base + ofs); 284 + new_val = (old_val & ~mask) | (val & mask); 285 + 286 + if (new_val != old_val) 287 + writel(new_val, d->base + ofs); 288 + } 289 + 290 + static inline struct ldma_chan *to_ldma_chan(struct dma_chan *chan) 291 + { 292 + return container_of(chan, struct ldma_chan, vchan.chan); 293 + } 294 + 295 + static inline struct ldma_dev *to_ldma_dev(struct dma_device *dma_dev) 296 + { 297 + return container_of(dma_dev, struct ldma_dev, dma_dev); 298 + } 299 + 300 + static inline struct dw2_desc_sw *to_lgm_dma_desc(struct virt_dma_desc *vdesc) 301 + { 302 + return container_of(vdesc, struct dw2_desc_sw, vdesc); 303 + } 304 + 305 + static inline bool ldma_chan_tx(struct ldma_chan *c) 306 + { 307 + return !!(c->flags & DMA_TX_CH); 308 + } 309 + 310 + static inline bool ldma_chan_is_hw_desc(struct ldma_chan *c) 311 + { 312 + return !!(c->flags & DMA_HW_DESC); 313 + } 314 + 315 + static void ldma_dev_reset(struct ldma_dev *d) 316 + 317 + { 318 + unsigned long flags; 319 + 320 + spin_lock_irqsave(&d->dev_lock, flags); 321 + ldma_update_bits(d, DMA_CTRL_RST, DMA_CTRL_RST, DMA_CTRL); 322 + spin_unlock_irqrestore(&d->dev_lock, flags); 323 + } 324 + 325 + static void 
ldma_dev_pkt_arb_cfg(struct ldma_dev *d, bool enable) 326 + { 327 + unsigned long flags; 328 + u32 mask = DMA_CTRL_PKTARB; 329 + u32 val = enable ? DMA_CTRL_PKTARB : 0; 330 + 331 + spin_lock_irqsave(&d->dev_lock, flags); 332 + ldma_update_bits(d, mask, val, DMA_CTRL); 333 + spin_unlock_irqrestore(&d->dev_lock, flags); 334 + } 335 + 336 + static void ldma_dev_sram_desc_cfg(struct ldma_dev *d, bool enable) 337 + { 338 + unsigned long flags; 339 + u32 mask = DMA_CTRL_DSRAM_PATH; 340 + u32 val = enable ? DMA_CTRL_DSRAM_PATH : 0; 341 + 342 + spin_lock_irqsave(&d->dev_lock, flags); 343 + ldma_update_bits(d, mask, val, DMA_CTRL); 344 + spin_unlock_irqrestore(&d->dev_lock, flags); 345 + } 346 + 347 + static void ldma_dev_chan_flow_ctl_cfg(struct ldma_dev *d, bool enable) 348 + { 349 + unsigned long flags; 350 + u32 mask, val; 351 + 352 + if (d->inst->type != DMA_TYPE_TX) 353 + return; 354 + 355 + mask = DMA_CTRL_CH_FL; 356 + val = enable ? DMA_CTRL_CH_FL : 0; 357 + 358 + spin_lock_irqsave(&d->dev_lock, flags); 359 + ldma_update_bits(d, mask, val, DMA_CTRL); 360 + spin_unlock_irqrestore(&d->dev_lock, flags); 361 + } 362 + 363 + static void ldma_dev_global_polling_enable(struct ldma_dev *d) 364 + { 365 + unsigned long flags; 366 + u32 mask = DMA_CPOLL_EN | DMA_CPOLL_CNT; 367 + u32 val = DMA_CPOLL_EN; 368 + 369 + val |= FIELD_PREP(DMA_CPOLL_CNT, d->pollcnt); 370 + 371 + spin_lock_irqsave(&d->dev_lock, flags); 372 + ldma_update_bits(d, mask, val, DMA_CPOLL); 373 + spin_unlock_irqrestore(&d->dev_lock, flags); 374 + } 375 + 376 + static void ldma_dev_desc_fetch_on_demand_cfg(struct ldma_dev *d, bool enable) 377 + { 378 + unsigned long flags; 379 + u32 mask, val; 380 + 381 + if (d->inst->type == DMA_TYPE_MCPY) 382 + return; 383 + 384 + mask = DMA_CTRL_DS_FOD; 385 + val = enable ? 
DMA_CTRL_DS_FOD : 0; 386 + 387 + spin_lock_irqsave(&d->dev_lock, flags); 388 + ldma_update_bits(d, mask, val, DMA_CTRL); 389 + spin_unlock_irqrestore(&d->dev_lock, flags); 390 + } 391 + 392 + static void ldma_dev_byte_enable_cfg(struct ldma_dev *d, bool enable) 393 + { 394 + unsigned long flags; 395 + u32 mask = DMA_CTRL_ENBE; 396 + u32 val = enable ? DMA_CTRL_ENBE : 0; 397 + 398 + spin_lock_irqsave(&d->dev_lock, flags); 399 + ldma_update_bits(d, mask, val, DMA_CTRL); 400 + spin_unlock_irqrestore(&d->dev_lock, flags); 401 + } 402 + 403 + static void ldma_dev_orrc_cfg(struct ldma_dev *d) 404 + { 405 + unsigned long flags; 406 + u32 val = 0; 407 + u32 mask; 408 + 409 + if (d->inst->type == DMA_TYPE_RX) 410 + return; 411 + 412 + mask = DMA_ORRC_EN | DMA_ORRC_ORRCNT; 413 + if (d->inst->orrc > 0 && d->inst->orrc <= DMA_ORRC_MAX_CNT) 414 + val = DMA_ORRC_EN | FIELD_PREP(DMA_ORRC_ORRCNT, d->inst->orrc); 415 + 416 + spin_lock_irqsave(&d->dev_lock, flags); 417 + ldma_update_bits(d, mask, val, DMA_ORRC); 418 + spin_unlock_irqrestore(&d->dev_lock, flags); 419 + } 420 + 421 + static void ldma_dev_df_tout_cfg(struct ldma_dev *d, bool enable, int tcnt) 422 + { 423 + u32 mask = DMA_CTRL_DESC_TMOUT_CNT_V31; 424 + unsigned long flags; 425 + u32 val; 426 + 427 + if (enable) 428 + val = DMA_CTRL_DESC_TMOUT_EN_V31 | FIELD_PREP(DMA_CTRL_DESC_TMOUT_CNT_V31, tcnt); 429 + else 430 + val = 0; 431 + 432 + spin_lock_irqsave(&d->dev_lock, flags); 433 + ldma_update_bits(d, mask, val, DMA_CTRL); 434 + spin_unlock_irqrestore(&d->dev_lock, flags); 435 + } 436 + 437 + static void ldma_dev_dburst_wr_cfg(struct ldma_dev *d, bool enable) 438 + { 439 + unsigned long flags; 440 + u32 mask, val; 441 + 442 + if (d->inst->type != DMA_TYPE_RX && d->inst->type != DMA_TYPE_MCPY) 443 + return; 444 + 445 + mask = DMA_CTRL_DBURST_WR; 446 + val = enable ? 
DMA_CTRL_DBURST_WR : 0; 447 + 448 + spin_lock_irqsave(&d->dev_lock, flags); 449 + ldma_update_bits(d, mask, val, DMA_CTRL); 450 + spin_unlock_irqrestore(&d->dev_lock, flags); 451 + } 452 + 453 + static void ldma_dev_vld_fetch_ack_cfg(struct ldma_dev *d, bool enable) 454 + { 455 + unsigned long flags; 456 + u32 mask, val; 457 + 458 + if (d->inst->type != DMA_TYPE_TX) 459 + return; 460 + 461 + mask = DMA_CTRL_VLD_DF_ACK; 462 + val = enable ? DMA_CTRL_VLD_DF_ACK : 0; 463 + 464 + spin_lock_irqsave(&d->dev_lock, flags); 465 + ldma_update_bits(d, mask, val, DMA_CTRL); 466 + spin_unlock_irqrestore(&d->dev_lock, flags); 467 + } 468 + 469 + static void ldma_dev_drb_cfg(struct ldma_dev *d, int enable) 470 + { 471 + unsigned long flags; 472 + u32 mask = DMA_CTRL_DRB; 473 + u32 val = enable ? DMA_CTRL_DRB : 0; 474 + 475 + spin_lock_irqsave(&d->dev_lock, flags); 476 + ldma_update_bits(d, mask, val, DMA_CTRL); 477 + spin_unlock_irqrestore(&d->dev_lock, flags); 478 + } 479 + 480 + static int ldma_dev_cfg(struct ldma_dev *d) 481 + { 482 + bool enable; 483 + 484 + ldma_dev_pkt_arb_cfg(d, true); 485 + ldma_dev_global_polling_enable(d); 486 + 487 + enable = !!(d->flags & DMA_DFT_DRB); 488 + ldma_dev_drb_cfg(d, enable); 489 + 490 + enable = !!(d->flags & DMA_EN_BYTE_EN); 491 + ldma_dev_byte_enable_cfg(d, enable); 492 + 493 + enable = !!(d->flags & DMA_CHAN_FLOW_CTL); 494 + ldma_dev_chan_flow_ctl_cfg(d, enable); 495 + 496 + enable = !!(d->flags & DMA_DESC_FOD); 497 + ldma_dev_desc_fetch_on_demand_cfg(d, enable); 498 + 499 + enable = !!(d->flags & DMA_DESC_IN_SRAM); 500 + ldma_dev_sram_desc_cfg(d, enable); 501 + 502 + enable = !!(d->flags & DMA_DBURST_WR); 503 + ldma_dev_dburst_wr_cfg(d, enable); 504 + 505 + enable = !!(d->flags & DMA_VALID_DESC_FETCH_ACK); 506 + ldma_dev_vld_fetch_ack_cfg(d, enable); 507 + 508 + if (d->ver > DMA_VER22) { 509 + ldma_dev_orrc_cfg(d); 510 + ldma_dev_df_tout_cfg(d, true, DMA_DFT_DESC_TCNT); 511 + } 512 + 513 + dev_dbg(d->dev, "%s Controller 0x%08x 
configuration done\n", 514 + d->inst->name, readl(d->base + DMA_CTRL)); 515 + 516 + return 0; 517 + } 518 + 519 + static int ldma_chan_cctrl_cfg(struct ldma_chan *c, u32 val) 520 + { 521 + struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device); 522 + u32 class_low, class_high; 523 + unsigned long flags; 524 + u32 reg; 525 + 526 + spin_lock_irqsave(&d->dev_lock, flags); 527 + ldma_update_bits(d, DMA_CS_MASK, c->nr, DMA_CS); 528 + reg = readl(d->base + DMA_CCTRL); 529 + /* Read from hardware */ 530 + if (reg & DMA_CCTRL_DIR_TX) 531 + c->flags |= DMA_TX_CH; 532 + else 533 + c->flags |= DMA_RX_CH; 534 + 535 + /* Keep the class value unchanged */ 536 + class_low = FIELD_GET(DMA_CCTRL_CLASS, reg); 537 + class_high = FIELD_GET(DMA_CCTRL_CLASSH, reg); 538 + val &= ~DMA_CCTRL_CLASS; 539 + val |= FIELD_PREP(DMA_CCTRL_CLASS, class_low); 540 + val &= ~DMA_CCTRL_CLASSH; 541 + val |= FIELD_PREP(DMA_CCTRL_CLASSH, class_high); 542 + writel(val, d->base + DMA_CCTRL); 543 + spin_unlock_irqrestore(&d->dev_lock, flags); 544 + 545 + return 0; 546 + } 547 + 548 + static void ldma_chan_irq_init(struct ldma_chan *c) 549 + { 550 + struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device); 551 + unsigned long flags; 552 + u32 enofs, crofs; 553 + u32 cn_bit; 554 + 555 + if (c->nr < MAX_LOWER_CHANS) { 556 + enofs = DMA_IRNEN; 557 + crofs = DMA_IRNCR; 558 + } else { 559 + enofs = DMA_IRNEN1; 560 + crofs = DMA_IRNCR1; 561 + } 562 + 563 + cn_bit = BIT(c->nr & MASK_LOWER_CHANS); 564 + spin_lock_irqsave(&d->dev_lock, flags); 565 + ldma_update_bits(d, DMA_CS_MASK, c->nr, DMA_CS); 566 + 567 + /* Disable and clear all interrupts */ 568 + writel(0, d->base + DMA_CIE); 569 + writel(DMA_CI_ALL, d->base + DMA_CIS); 570 + 571 + ldma_update_bits(d, cn_bit, 0, enofs); 572 + writel(cn_bit, d->base + crofs); 573 + spin_unlock_irqrestore(&d->dev_lock, flags); 574 + } 575 + 576 + static void ldma_chan_set_class(struct ldma_chan *c, u32 val) 577 + { 578 + struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device); 
579 + u32 class_val; 580 + 581 + if (d->inst->type == DMA_TYPE_MCPY || val > DMA_MAX_CLASS) 582 + return; 583 + 584 + /* 3 bits low */ 585 + class_val = FIELD_PREP(DMA_CCTRL_CLASS, val & 0x7); 586 + /* 2 bits high */ 587 + class_val |= FIELD_PREP(DMA_CCTRL_CLASSH, (val >> 3) & 0x3); 588 + 589 + ldma_update_bits(d, DMA_CS_MASK, c->nr, DMA_CS); 590 + ldma_update_bits(d, DMA_CCTRL_CLASS | DMA_CCTRL_CLASSH, class_val, 591 + DMA_CCTRL); 592 + } 593 + 594 + static int ldma_chan_on(struct ldma_chan *c) 595 + { 596 + struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device); 597 + unsigned long flags; 598 + 599 + /* If descriptors are not configured, do not allow turning the channel on */ 600 + if (WARN_ON(!c->desc_init)) 601 + return -EINVAL; 602 + 603 + spin_lock_irqsave(&d->dev_lock, flags); 604 + ldma_update_bits(d, DMA_CS_MASK, c->nr, DMA_CS); 605 + ldma_update_bits(d, DMA_CCTRL_ON, DMA_CCTRL_ON, DMA_CCTRL); 606 + spin_unlock_irqrestore(&d->dev_lock, flags); 607 + 608 + c->onoff = DMA_CH_ON; 609 + 610 + return 0; 611 + } 612 + 613 + static int ldma_chan_off(struct ldma_chan *c) 614 + { 615 + struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device); 616 + unsigned long flags; 617 + u32 val; 618 + int ret; 619 + 620 + spin_lock_irqsave(&d->dev_lock, flags); 621 + ldma_update_bits(d, DMA_CS_MASK, c->nr, DMA_CS); 622 + ldma_update_bits(d, DMA_CCTRL_ON, 0, DMA_CCTRL); 623 + spin_unlock_irqrestore(&d->dev_lock, flags); 624 + 625 + ret = readl_poll_timeout_atomic(d->base + DMA_CCTRL, val, 626 + !(val & DMA_CCTRL_ON), 0, 10000); 627 + if (ret) 628 + return ret; 629 + 630 + c->onoff = DMA_CH_OFF; 631 + 632 + return 0; 633 + } 634 + 635 + static void ldma_chan_desc_hw_cfg(struct ldma_chan *c, dma_addr_t desc_base, 636 + int desc_num) 637 + { 638 + struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device); 639 + unsigned long flags; 640 + 641 + spin_lock_irqsave(&d->dev_lock, flags); 642 + ldma_update_bits(d, DMA_CS_MASK, c->nr, DMA_CS); 643 + writel(lower_32_bits(desc_base), d->base + DMA_CDBA); 
644 + 645 + /* Higher 4 bits of 36 bit addressing */ 646 + if (IS_ENABLED(CONFIG_64BIT)) { 647 + u32 hi = upper_32_bits(desc_base) & HIGH_4_BITS; 648 + 649 + ldma_update_bits(d, DMA_CDBA_MSB, 650 + FIELD_PREP(DMA_CDBA_MSB, hi), DMA_CCTRL); 651 + } 652 + writel(desc_num, d->base + DMA_CDLEN); 653 + spin_unlock_irqrestore(&d->dev_lock, flags); 654 + 655 + c->desc_init = true; 656 + } 657 + 658 + static struct dma_async_tx_descriptor * 659 + ldma_chan_desc_cfg(struct dma_chan *chan, dma_addr_t desc_base, int desc_num) 660 + { 661 + struct ldma_chan *c = to_ldma_chan(chan); 662 + struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device); 663 + struct dma_async_tx_descriptor *tx; 664 + struct dw2_desc_sw *ds; 665 + 666 + if (!desc_num) { 667 + dev_err(d->dev, "Channel %d must allocate descriptor first\n", 668 + c->nr); 669 + return NULL; 670 + } 671 + 672 + if (desc_num > DMA_MAX_DESC_NUM) { 673 + dev_err(d->dev, "Channel %d descriptor number out of range %d\n", 674 + c->nr, desc_num); 675 + return NULL; 676 + } 677 + 678 + ldma_chan_desc_hw_cfg(c, desc_base, desc_num); 679 + 680 + c->flags |= DMA_HW_DESC; 681 + c->desc_cnt = desc_num; 682 + c->desc_phys = desc_base; 683 + 684 + ds = kzalloc(sizeof(*ds), GFP_NOWAIT); 685 + if (!ds) 686 + return NULL; 687 + 688 + tx = &ds->vdesc.tx; 689 + dma_async_tx_descriptor_init(tx, chan); 690 + 691 + return tx; 692 + } 693 + 694 + static int ldma_chan_reset(struct ldma_chan *c) 695 + { 696 + struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device); 697 + unsigned long flags; 698 + u32 val; 699 + int ret; 700 + 701 + ret = ldma_chan_off(c); 702 + if (ret) 703 + return ret; 704 + 705 + spin_lock_irqsave(&d->dev_lock, flags); 706 + ldma_update_bits(d, DMA_CS_MASK, c->nr, DMA_CS); 707 + ldma_update_bits(d, DMA_CCTRL_RST, DMA_CCTRL_RST, DMA_CCTRL); 708 + spin_unlock_irqrestore(&d->dev_lock, flags); 709 + 710 + ret = readl_poll_timeout_atomic(d->base + DMA_CCTRL, val, 711 + !(val & DMA_CCTRL_RST), 0, 10000); 712 + if (ret) 713 + return ret; 
+
+ 	c->rst = 1;
+ 	c->desc_init = false;
+
+ 	return 0;
+ }
+
+ static void ldma_chan_byte_offset_cfg(struct ldma_chan *c, u32 boff_len)
+ {
+ 	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+ 	u32 mask = DMA_C_BOFF_EN | DMA_C_BOFF_BOF_LEN;
+ 	u32 val;
+
+ 	if (boff_len > 0 && boff_len <= DMA_CHAN_BOFF_MAX)
+ 		val = FIELD_PREP(DMA_C_BOFF_BOF_LEN, boff_len) | DMA_C_BOFF_EN;
+ 	else
+ 		val = 0;
+
+ 	ldma_update_bits(d, DMA_CS_MASK, c->nr, DMA_CS);
+ 	ldma_update_bits(d, mask, val, DMA_C_BOFF);
+ }
+
+ static void ldma_chan_data_endian_cfg(struct ldma_chan *c, bool enable,
+ 				      u32 endian_type)
+ {
+ 	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+ 	u32 mask = DMA_C_END_DE_EN | DMA_C_END_DATAENDI;
+ 	u32 val;
+
+ 	if (enable)
+ 		val = DMA_C_END_DE_EN | FIELD_PREP(DMA_C_END_DATAENDI, endian_type);
+ 	else
+ 		val = 0;
+
+ 	ldma_update_bits(d, DMA_CS_MASK, c->nr, DMA_CS);
+ 	ldma_update_bits(d, mask, val, DMA_C_ENDIAN);
+ }
+
+ static void ldma_chan_desc_endian_cfg(struct ldma_chan *c, bool enable,
+ 				      u32 endian_type)
+ {
+ 	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+ 	u32 mask = DMA_C_END_DES_EN | DMA_C_END_DESENDI;
+ 	u32 val;
+
+ 	if (enable)
+ 		val = DMA_C_END_DES_EN | FIELD_PREP(DMA_C_END_DESENDI, endian_type);
+ 	else
+ 		val = 0;
+
+ 	ldma_update_bits(d, DMA_CS_MASK, c->nr, DMA_CS);
+ 	ldma_update_bits(d, mask, val, DMA_C_ENDIAN);
+ }
+
+ static void ldma_chan_hdr_mode_cfg(struct ldma_chan *c, u32 hdr_len, bool csum)
+ {
+ 	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+ 	u32 mask, val;
+
+ 	/* NB, csum disabled, hdr length must be provided */
+ 	if (!csum && (!hdr_len || hdr_len > DMA_HDR_LEN_MAX))
+ 		return;
+
+ 	mask = DMA_C_HDRM_HDR_SUM;
+ 	val = DMA_C_HDRM_HDR_SUM;
+
+ 	if (!csum && hdr_len)
+ 		val = hdr_len;
+
+ 	ldma_update_bits(d, DMA_CS_MASK, c->nr, DMA_CS);
+ 	ldma_update_bits(d, mask, val, DMA_C_HDRM);
+ }
+
+ static void ldma_chan_rxwr_np_cfg(struct ldma_chan *c, bool enable)
+ {
+ 	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+ 	u32 mask, val;
+
+ 	/* Only valid for RX channel */
+ 	if (ldma_chan_tx(c))
+ 		return;
+
+ 	mask = DMA_CCTRL_WR_NP_EN;
+ 	val = enable ? DMA_CCTRL_WR_NP_EN : 0;
+
+ 	ldma_update_bits(d, DMA_CS_MASK, c->nr, DMA_CS);
+ 	ldma_update_bits(d, mask, val, DMA_CCTRL);
+ }
+
+ static void ldma_chan_abc_cfg(struct ldma_chan *c, bool enable)
+ {
+ 	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+ 	u32 mask, val;
+
+ 	if (d->ver < DMA_VER32 || ldma_chan_tx(c))
+ 		return;
+
+ 	mask = DMA_CCTRL_CH_ABC;
+ 	val = enable ? DMA_CCTRL_CH_ABC : 0;
+
+ 	ldma_update_bits(d, DMA_CS_MASK, c->nr, DMA_CS);
+ 	ldma_update_bits(d, mask, val, DMA_CCTRL);
+ }
+
+ static int ldma_port_cfg(struct ldma_port *p)
+ {
+ 	unsigned long flags;
+ 	struct ldma_dev *d;
+ 	u32 reg;
+
+ 	d = p->ldev;
+ 	reg = FIELD_PREP(DMA_PCTRL_TXENDI, p->txendi);
+ 	reg |= FIELD_PREP(DMA_PCTRL_RXENDI, p->rxendi);
+
+ 	if (d->ver == DMA_VER22) {
+ 		reg |= FIELD_PREP(DMA_PCTRL_TXBL, p->txbl);
+ 		reg |= FIELD_PREP(DMA_PCTRL_RXBL, p->rxbl);
+ 	} else {
+ 		reg |= FIELD_PREP(DMA_PCTRL_PDEN, p->pkt_drop);
+
+ 		if (p->txbl == DMA_BURSTL_32DW)
+ 			reg |= DMA_PCTRL_TXBL32;
+ 		else if (p->txbl == DMA_BURSTL_16DW)
+ 			reg |= DMA_PCTRL_TXBL16;
+ 		else
+ 			reg |= FIELD_PREP(DMA_PCTRL_TXBL, DMA_PCTRL_TXBL_8);
+
+ 		if (p->rxbl == DMA_BURSTL_32DW)
+ 			reg |= DMA_PCTRL_RXBL32;
+ 		else if (p->rxbl == DMA_BURSTL_16DW)
+ 			reg |= DMA_PCTRL_RXBL16;
+ 		else
+ 			reg |= FIELD_PREP(DMA_PCTRL_RXBL, DMA_PCTRL_RXBL_8);
+ 	}
+
+ 	spin_lock_irqsave(&d->dev_lock, flags);
+ 	writel(p->portid, d->base + DMA_PS);
+ 	writel(reg, d->base + DMA_PCTRL);
+ 	spin_unlock_irqrestore(&d->dev_lock, flags);
+
+ 	reg = readl(d->base + DMA_PCTRL); /* read back */
+ 	dev_dbg(d->dev, "Port Control 0x%08x configuration done\n", reg);
+
+ 	return 0;
+ }
+
+ static int ldma_chan_cfg(struct ldma_chan *c)
+ {
+ 	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+ 	unsigned long flags;
+ 	u32 reg;
+
+ 	reg = c->pden ? DMA_CCTRL_PDEN : 0;
+ 	reg |= c->onoff ? DMA_CCTRL_ON : 0;
+ 	reg |= c->rst ? DMA_CCTRL_RST : 0;
+
+ 	ldma_chan_cctrl_cfg(c, reg);
+ 	ldma_chan_irq_init(c);
+
+ 	if (d->ver <= DMA_VER22)
+ 		return 0;
+
+ 	spin_lock_irqsave(&d->dev_lock, flags);
+ 	ldma_chan_set_class(c, c->nr);
+ 	ldma_chan_byte_offset_cfg(c, c->boff_len);
+ 	ldma_chan_data_endian_cfg(c, c->data_endian_en, c->data_endian);
+ 	ldma_chan_desc_endian_cfg(c, c->desc_endian_en, c->desc_endian);
+ 	ldma_chan_hdr_mode_cfg(c, c->hdrm_len, c->hdrm_csum);
+ 	ldma_chan_rxwr_np_cfg(c, c->desc_rx_np);
+ 	ldma_chan_abc_cfg(c, c->abc_en);
+ 	spin_unlock_irqrestore(&d->dev_lock, flags);
+
+ 	if (ldma_chan_is_hw_desc(c))
+ 		ldma_chan_desc_hw_cfg(c, c->desc_phys, c->desc_cnt);
+
+ 	return 0;
+ }
+
+ static void ldma_dev_init(struct ldma_dev *d)
+ {
+ 	unsigned long ch_mask = (unsigned long)d->channels_mask;
+ 	struct ldma_port *p;
+ 	struct ldma_chan *c;
+ 	int i;
+ 	u32 j;
+
+ 	spin_lock_init(&d->dev_lock);
+ 	ldma_dev_reset(d);
+ 	ldma_dev_cfg(d);
+
+ 	/* DMA port initialization */
+ 	for (i = 0; i < d->port_nrs; i++) {
+ 		p = &d->ports[i];
+ 		ldma_port_cfg(p);
+ 	}
+
+ 	/* DMA channel initialization */
+ 	for_each_set_bit(j, &ch_mask, d->chan_nrs) {
+ 		c = &d->chans[j];
+ 		ldma_chan_cfg(c);
+ 	}
+ }
+
+ static int ldma_cfg_init(struct ldma_dev *d)
+ {
+ 	struct fwnode_handle *fwnode = dev_fwnode(d->dev);
+ 	struct ldma_port *p;
+ 	int i;
+
+ 	if (fwnode_property_read_bool(fwnode, "intel,dma-byte-en"))
+ 		d->flags |= DMA_EN_BYTE_EN;
+
+ 	if (fwnode_property_read_bool(fwnode, "intel,dma-dburst-wr"))
+ 		d->flags |= DMA_DBURST_WR;
+
+ 	if (fwnode_property_read_bool(fwnode, "intel,dma-drb"))
+ 		d->flags |= DMA_DFT_DRB;
+
+ 	if (fwnode_property_read_u32(fwnode, "intel,dma-poll-cnt",
+ 				     &d->pollcnt))
+ 		d->pollcnt = DMA_DFT_POLL_CNT;
+
+ 	if (d->inst->chan_fc)
+ 		d->flags |= DMA_CHAN_FLOW_CTL;
+
+ 	if (d->inst->desc_fod)
+ 		d->flags |= DMA_DESC_FOD;
+
+ 	if (d->inst->desc_in_sram)
+ 		d->flags |= DMA_DESC_IN_SRAM;
+
+ 	if (d->inst->valid_desc_fetch_ack)
+ 		d->flags |= DMA_VALID_DESC_FETCH_ACK;
+
+ 	if (d->ver > DMA_VER22) {
+ 		if (!d->port_nrs)
+ 			return -EINVAL;
+
+ 		for (i = 0; i < d->port_nrs; i++) {
+ 			p = &d->ports[i];
+ 			p->rxendi = DMA_DFT_ENDIAN;
+ 			p->txendi = DMA_DFT_ENDIAN;
+ 			p->rxbl = DMA_DFT_BURST;
+ 			p->txbl = DMA_DFT_BURST;
+ 			p->pkt_drop = DMA_PKT_DROP_DIS;
+ 		}
+ 	}
+
+ 	return 0;
+ }
+
+ static void dma_free_desc_resource(struct virt_dma_desc *vdesc)
+ {
+ 	struct dw2_desc_sw *ds = to_lgm_dma_desc(vdesc);
+ 	struct ldma_chan *c = ds->chan;
+
+ 	dma_pool_free(c->desc_pool, ds->desc_hw, ds->desc_phys);
+ 	kfree(ds);
+ }
+
+ static struct dw2_desc_sw *
+ dma_alloc_desc_resource(int num, struct ldma_chan *c)
+ {
+ 	struct device *dev = c->vchan.chan.device->dev;
+ 	struct dw2_desc_sw *ds;
+
+ 	if (num > c->desc_num) {
+ 		dev_err(dev, "sg num %d exceed max %d\n", num, c->desc_num);
+ 		return NULL;
+ 	}
+
+ 	ds = kzalloc(sizeof(*ds), GFP_NOWAIT);
+ 	if (!ds)
+ 		return NULL;
+
+ 	ds->chan = c;
+ 	ds->desc_hw = dma_pool_zalloc(c->desc_pool, GFP_ATOMIC,
+ 				      &ds->desc_phys);
+ 	if (!ds->desc_hw) {
+ 		dev_dbg(dev, "out of memory for link descriptor\n");
+ 		kfree(ds);
+ 		return NULL;
+ 	}
+ 	ds->desc_cnt = num;
+
+ 	return ds;
+ }
+
+ static void ldma_chan_irq_en(struct ldma_chan *c)
+ {
+ 	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+ 	unsigned long flags;
+
+ 	spin_lock_irqsave(&d->dev_lock, flags);
+ 	writel(c->nr, d->base + DMA_CS);
+ 	writel(DMA_CI_EOP, d->base + DMA_CIE);
+ 	writel(BIT(c->nr), d->base + DMA_IRNEN);
+ 	spin_unlock_irqrestore(&d->dev_lock, flags);
+ }
+
+ static void ldma_issue_pending(struct dma_chan *chan)
+ {
+ 	struct ldma_chan *c = to_ldma_chan(chan);
+ 	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+ 	unsigned long flags;
+
+ 	if (d->ver == DMA_VER22) {
+ 		spin_lock_irqsave(&c->vchan.lock, flags);
+ 		if (vchan_issue_pending(&c->vchan)) {
+ 			struct virt_dma_desc *vdesc;
+
+ 			/* Get the next descriptor */
+ 			vdesc = vchan_next_desc(&c->vchan);
+ 			if (!vdesc) {
+ 				c->ds = NULL;
+ 				spin_unlock_irqrestore(&c->vchan.lock, flags);
+ 				return;
+ 			}
+ 			list_del(&vdesc->node);
+ 			c->ds = to_lgm_dma_desc(vdesc);
+ 			ldma_chan_desc_hw_cfg(c, c->ds->desc_phys, c->ds->desc_cnt);
+ 			ldma_chan_irq_en(c);
+ 		}
+ 		spin_unlock_irqrestore(&c->vchan.lock, flags);
+ 	}
+ 	ldma_chan_on(c);
+ }
+
+ static void ldma_synchronize(struct dma_chan *chan)
+ {
+ 	struct ldma_chan *c = to_ldma_chan(chan);
+
+ 	/*
+ 	 * clear any pending work if any. In that
+ 	 * case the resource needs to be free here.
+ 	 */
+ 	cancel_work_sync(&c->work);
+ 	vchan_synchronize(&c->vchan);
+ 	if (c->ds)
+ 		dma_free_desc_resource(&c->ds->vdesc);
+ }
+
+ static int ldma_terminate_all(struct dma_chan *chan)
+ {
+ 	struct ldma_chan *c = to_ldma_chan(chan);
+ 	unsigned long flags;
+ 	LIST_HEAD(head);
+
+ 	spin_lock_irqsave(&c->vchan.lock, flags);
+ 	vchan_get_all_descriptors(&c->vchan, &head);
+ 	spin_unlock_irqrestore(&c->vchan.lock, flags);
+ 	vchan_dma_desc_free_list(&c->vchan, &head);
+
+ 	return ldma_chan_reset(c);
+ }
+
+ static int ldma_resume_chan(struct dma_chan *chan)
+ {
+ 	struct ldma_chan *c = to_ldma_chan(chan);
+
+ 	ldma_chan_on(c);
+
+ 	return 0;
+ }
+
+ static int ldma_pause_chan(struct dma_chan *chan)
+ {
+ 	struct ldma_chan *c = to_ldma_chan(chan);
+
+ 	return ldma_chan_off(c);
+ }
+
+ static enum dma_status
+ ldma_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
+ 	       struct dma_tx_state *txstate)
+ {
+ 	struct ldma_chan *c = to_ldma_chan(chan);
+ 	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+ 	enum dma_status status = DMA_COMPLETE;
+
+ 	if (d->ver == DMA_VER22)
+ 		status = dma_cookie_status(chan, cookie, txstate);
+
+ 	return status;
+ }
+
+ static void dma_chan_irq(int irq, void *data)
+ {
+ 	struct ldma_chan *c = data;
+ 	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+ 	u32 stat;
+
+ 	/* Disable channel interrupts */
+ 	writel(c->nr, d->base + DMA_CS);
+ 	stat = readl(d->base + DMA_CIS);
+ 	if (!stat)
+ 		return;
+
+ 	writel(readl(d->base + DMA_CIE) & ~DMA_CI_ALL, d->base + DMA_CIE);
+ 	writel(stat, d->base + DMA_CIS);
+ 	queue_work(d->wq, &c->work);
+ }
+
+ static irqreturn_t dma_interrupt(int irq, void *dev_id)
+ {
+ 	struct ldma_dev *d = dev_id;
+ 	struct ldma_chan *c;
+ 	unsigned long irncr;
+ 	u32 cid;
+
+ 	irncr = readl(d->base + DMA_IRNCR);
+ 	if (!irncr) {
+ 		dev_err(d->dev, "dummy interrupt\n");
+ 		return IRQ_NONE;
+ 	}
+
+ 	for_each_set_bit(cid, &irncr, d->chan_nrs) {
+ 		/* Mask */
+ 		writel(readl(d->base + DMA_IRNEN) & ~BIT(cid), d->base + DMA_IRNEN);
+ 		/* Ack */
+ 		writel(readl(d->base + DMA_IRNCR) | BIT(cid), d->base + DMA_IRNCR);
+
+ 		c = &d->chans[cid];
+ 		dma_chan_irq(irq, c);
+ 	}
+
+ 	return IRQ_HANDLED;
+ }
+
+ static void prep_slave_burst_len(struct ldma_chan *c)
+ {
+ 	struct ldma_port *p = c->port;
+ 	struct dma_slave_config *cfg = &c->config;
+
+ 	if (cfg->dst_maxburst)
+ 		cfg->src_maxburst = cfg->dst_maxburst;
+
+ 	/* TX and RX has the same burst length */
+ 	p->txbl = ilog2(cfg->src_maxburst);
+ 	p->rxbl = p->txbl;
+ }
+
+ static struct dma_async_tx_descriptor *
+ ldma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
+ 		   unsigned int sglen, enum dma_transfer_direction dir,
+ 		   unsigned long flags, void *context)
+ {
+ 	struct ldma_chan *c = to_ldma_chan(chan);
+ 	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+ 	size_t len, avail, total = 0;
+ 	struct dw2_desc *hw_ds;
+ 	struct dw2_desc_sw *ds;
+ 	struct scatterlist *sg;
+ 	int num = sglen, i;
+ 	dma_addr_t addr;
+
+ 	if (!sgl)
+ 		return NULL;
+
+ 	if (d->ver > DMA_VER22)
+ 		return ldma_chan_desc_cfg(chan, sgl->dma_address, sglen);
+
+ 	for_each_sg(sgl, sg, sglen, i) {
+ 		avail = sg_dma_len(sg);
+ 		if (avail > DMA_MAX_SIZE)
+ 			num += DIV_ROUND_UP(avail, DMA_MAX_SIZE) - 1;
+ 	}
+
+ 	ds = dma_alloc_desc_resource(num, c);
+ 	if (!ds)
+ 		return NULL;
+
+ 	c->ds = ds;
+
+ 	num = 0;
+ 	/* sop and eop has to be handled nicely */
+ 	for_each_sg(sgl, sg, sglen, i) {
+ 		addr = sg_dma_address(sg);
+ 		avail = sg_dma_len(sg);
+ 		total += avail;
+
+ 		do {
+ 			len = min_t(size_t, avail, DMA_MAX_SIZE);
+
+ 			hw_ds = &ds->desc_hw[num];
+ 			switch (sglen) {
+ 			case 1:
+ 				hw_ds->field &= ~DESC_SOP;
+ 				hw_ds->field |= FIELD_PREP(DESC_SOP, 1);
+
+ 				hw_ds->field &= ~DESC_EOP;
+ 				hw_ds->field |= FIELD_PREP(DESC_EOP, 1);
+ 				break;
+ 			default:
+ 				if (num == 0) {
+ 					hw_ds->field &= ~DESC_SOP;
+ 					hw_ds->field |= FIELD_PREP(DESC_SOP, 1);
+
+ 					hw_ds->field &= ~DESC_EOP;
+ 					hw_ds->field |= FIELD_PREP(DESC_EOP, 0);
+ 				} else if (num == (sglen - 1)) {
+ 					hw_ds->field &= ~DESC_SOP;
+ 					hw_ds->field |= FIELD_PREP(DESC_SOP, 0);
+ 					hw_ds->field &= ~DESC_EOP;
+ 					hw_ds->field |= FIELD_PREP(DESC_EOP, 1);
+ 				} else {
+ 					hw_ds->field &= ~DESC_SOP;
+ 					hw_ds->field |= FIELD_PREP(DESC_SOP, 0);
+
+ 					hw_ds->field &= ~DESC_EOP;
+ 					hw_ds->field |= FIELD_PREP(DESC_EOP, 0);
+ 				}
+ 				break;
+ 			}
+ 			/* Only 32 bit address supported */
+ 			hw_ds->addr = (u32)addr;
+
+ 			hw_ds->field &= ~DESC_DATA_LEN;
+ 			hw_ds->field |= FIELD_PREP(DESC_DATA_LEN, len);
+
+ 			hw_ds->field &= ~DESC_C;
+ 			hw_ds->field |= FIELD_PREP(DESC_C, 0);
+
+ 			hw_ds->field &= ~DESC_BYTE_OFF;
+ 			hw_ds->field |= FIELD_PREP(DESC_BYTE_OFF, addr & 0x3);
+
+ 			/* Ensure data ready before ownership change */
+ 			wmb();
+ 			hw_ds->field &= ~DESC_OWN;
+ 			hw_ds->field |= FIELD_PREP(DESC_OWN, DMA_OWN);
+
+ 			/* Ensure ownership changed before moving forward */
+ 			wmb();
+ 			num++;
+ 			addr += len;
+ 			avail -= len;
+ 		} while (avail);
+ 	}
+
+ 	ds->size = total;
+ 	prep_slave_burst_len(c);
+
+ 	return vchan_tx_prep(&c->vchan, &ds->vdesc, DMA_CTRL_ACK);
+ }
+
+ static int
+ ldma_slave_config(struct dma_chan *chan, struct dma_slave_config *cfg)
+ {
+ 	struct ldma_chan *c = to_ldma_chan(chan);
+
+ 	memcpy(&c->config, cfg, sizeof(c->config));
+
+ 	return 0;
+ }
+
+ static int ldma_alloc_chan_resources(struct dma_chan *chan)
+ {
+ 	struct ldma_chan *c = to_ldma_chan(chan);
+ 	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+ 	struct device *dev = c->vchan.chan.device->dev;
+ 	size_t desc_sz;
+
+ 	if (d->ver > DMA_VER22) {
+ 		c->flags |= CHAN_IN_USE;
+ 		return 0;
+ 	}
+
+ 	if (c->desc_pool)
+ 		return c->desc_num;
+
+ 	desc_sz = c->desc_num * sizeof(struct dw2_desc);
+ 	c->desc_pool = dma_pool_create(c->name, dev, desc_sz,
+ 				       __alignof__(struct dw2_desc), 0);
+
+ 	if (!c->desc_pool) {
+ 		dev_err(dev, "unable to allocate descriptor pool\n");
+ 		return -ENOMEM;
+ 	}
+
+ 	return c->desc_num;
+ }
+
+ static void ldma_free_chan_resources(struct dma_chan *chan)
+ {
+ 	struct ldma_chan *c = to_ldma_chan(chan);
+ 	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+
+ 	if (d->ver == DMA_VER22) {
+ 		dma_pool_destroy(c->desc_pool);
+ 		c->desc_pool = NULL;
+ 		vchan_free_chan_resources(to_virt_chan(chan));
+ 		ldma_chan_reset(c);
+ 	} else {
+ 		c->flags &= ~CHAN_IN_USE;
+ 	}
+ }
+
+ static void dma_work(struct work_struct *work)
+ {
+ 	struct ldma_chan *c = container_of(work, struct ldma_chan, work);
+ 	struct dma_async_tx_descriptor *tx = &c->ds->vdesc.tx;
+ 	struct virt_dma_chan *vc = &c->vchan;
+ 	struct dmaengine_desc_callback cb;
+ 	struct virt_dma_desc *vd, *_vd;
+ 	unsigned long flags;
+ 	LIST_HEAD(head);
+
+ 	spin_lock_irqsave(&c->vchan.lock, flags);
+ 	list_splice_tail_init(&vc->desc_completed, &head);
+ 	spin_unlock_irqrestore(&c->vchan.lock, flags);
+ 	dmaengine_desc_get_callback(tx, &cb);
+ 	dma_cookie_complete(tx);
+ 	dmaengine_desc_callback_invoke(&cb, NULL);
+
+ 	list_for_each_entry_safe(vd, _vd, &head, node) {
+ 		dmaengine_desc_get_callback(tx, &cb);
+ 		dma_cookie_complete(tx);
+ 		list_del(&vd->node);
+ 		dmaengine_desc_callback_invoke(&cb, NULL);
+
+ 		vchan_vdesc_fini(vd);
+ 	}
+ 	c->ds = NULL;
+ }
+
+ static void
+ update_burst_len_v22(struct ldma_chan *c, struct ldma_port *p, u32 burst)
+ {
+ 	if (ldma_chan_tx(c))
+ 		p->txbl = ilog2(burst);
+ 	else
+ 		p->rxbl = ilog2(burst);
+ }
+
+ static void
+ update_burst_len_v3X(struct ldma_chan *c, struct ldma_port *p, u32 burst)
+ {
+ 	if (ldma_chan_tx(c))
+ 		p->txbl = burst;
+ 	else
+ 		p->rxbl = burst;
+ }
+
+ static int
+ update_client_configs(struct of_dma *ofdma, struct of_phandle_args *spec)
+ {
+ 	struct ldma_dev *d = ofdma->of_dma_data;
+ 	u32 chan_id = spec->args[0];
+ 	u32 port_id = spec->args[1];
+ 	u32 burst = spec->args[2];
+ 	struct ldma_port *p;
+ 	struct ldma_chan *c;
+
+ 	if (chan_id >= d->chan_nrs || port_id >= d->port_nrs)
+ 		return 0;
+
+ 	p = &d->ports[port_id];
+ 	c = &d->chans[chan_id];
+ 	c->port = p;
+
+ 	if (d->ver == DMA_VER22)
+ 		update_burst_len_v22(c, p, burst);
+ 	else
+ 		update_burst_len_v3X(c, p, burst);
+
+ 	ldma_port_cfg(p);
+
+ 	return 1;
+ }
+
+ static struct dma_chan *ldma_xlate(struct of_phandle_args *spec,
+ 				   struct of_dma *ofdma)
+ {
+ 	struct ldma_dev *d = ofdma->of_dma_data;
+ 	u32 chan_id = spec->args[0];
+ 	int ret;
+
+ 	if (!spec->args_count)
+ 		return NULL;
+
+ 	/* if args_count is 1 driver use default settings */
+ 	if (spec->args_count > 1) {
+ 		ret = update_client_configs(ofdma, spec);
+ 		if (!ret)
+ 			return NULL;
+ 	}
+
+ 	return dma_get_slave_channel(&d->chans[chan_id].vchan.chan);
+ }
+
+ static void ldma_dma_init_v22(int i, struct ldma_dev *d)
+ {
+ 	struct ldma_chan *c;
+
+ 	c = &d->chans[i];
+ 	c->nr = i; /* Real channel number */
+ 	c->rst = DMA_CHAN_RST;
+ 	c->desc_num = DMA_DFT_DESC_NUM;
+ 	snprintf(c->name, sizeof(c->name), "chan%d", c->nr);
+ 	INIT_WORK(&c->work, dma_work);
+ 	c->vchan.desc_free = dma_free_desc_resource;
+ 	vchan_init(&c->vchan, &d->dma_dev);
+ }
+
+ static void ldma_dma_init_v3X(int i, struct ldma_dev *d)
+ {
+ 	struct ldma_chan *c;
+
+ 	c = &d->chans[i];
+ 	c->data_endian = DMA_DFT_ENDIAN;
+ 	c->desc_endian = DMA_DFT_ENDIAN;
+ 	c->data_endian_en = false;
+ 	c->desc_endian_en = false;
+ 	c->desc_rx_np = false;
+ 	c->flags |= DEVICE_ALLOC_DESC;
+ 	c->onoff = DMA_CH_OFF;
+ 	c->rst = DMA_CHAN_RST;
+ 	c->abc_en = true;
+ 	c->hdrm_csum = false;
+ 	c->boff_len = 0;
+ 	c->nr = i;
+ 	c->vchan.desc_free = dma_free_desc_resource;
+ 	vchan_init(&c->vchan, &d->dma_dev);
+ }
+
+ static int ldma_init_v22(struct ldma_dev *d, struct platform_device *pdev)
+ {
+ 	int ret;
+
+ 	ret = device_property_read_u32(d->dev, "dma-channels", &d->chan_nrs);
+ 	if (ret < 0) {
+ 		dev_err(d->dev, "unable to read dma-channels property\n");
+ 		return ret;
+ 	}
+
+ 	d->irq = platform_get_irq(pdev, 0);
+ 	if (d->irq < 0)
+ 		return d->irq;
+
+ 	ret = devm_request_irq(&pdev->dev, d->irq, dma_interrupt, 0,
+ 			       DRIVER_NAME, d);
+ 	if (ret)
+ 		return ret;
+
+ 	d->wq = alloc_ordered_workqueue("dma_wq", WQ_MEM_RECLAIM |
+ 					WQ_HIGHPRI);
+ 	if (!d->wq)
+ 		return -ENOMEM;
+
+ 	return 0;
+ }
+
+ static void ldma_clk_disable(void *data)
+ {
+ 	struct ldma_dev *d = data;
+
+ 	clk_disable_unprepare(d->core_clk);
+ 	reset_control_assert(d->rst);
+ }
+
+ static const struct ldma_inst_data dma0 = {
+ 	.name = "dma0",
+ 	.chan_fc = false,
+ 	.desc_fod = false,
+ 	.desc_in_sram = false,
+ 	.valid_desc_fetch_ack = false,
+ };
+
+ static const struct ldma_inst_data dma2tx = {
+ 	.name = "dma2tx",
+ 	.type = DMA_TYPE_TX,
+ 	.orrc = 16,
+ 	.chan_fc = true,
+ 	.desc_fod = true,
+ 	.desc_in_sram = true,
+ 	.valid_desc_fetch_ack = true,
+ };
+
+ static const struct ldma_inst_data dma1rx = {
+ 	.name = "dma1rx",
+ 	.type = DMA_TYPE_RX,
+ 	.orrc = 16,
+ 	.chan_fc = false,
+ 	.desc_fod = true,
+ 	.desc_in_sram = true,
+ 	.valid_desc_fetch_ack = false,
+ };
+
+ static const struct ldma_inst_data dma1tx = {
+ 	.name = "dma1tx",
+ 	.type = DMA_TYPE_TX,
+ 	.orrc = 16,
+ 	.chan_fc = true,
+ 	.desc_fod = true,
+ 	.desc_in_sram = true,
+ 	.valid_desc_fetch_ack = true,
+ };
+
+ static const struct ldma_inst_data dma0tx = {
+ 	.name = "dma0tx",
+ 	.type = DMA_TYPE_TX,
+ 	.orrc = 16,
+ 	.chan_fc = true,
+ 	.desc_fod = true,
+ 	.desc_in_sram = true,
+ 	.valid_desc_fetch_ack = true,
+ };
+
+ static const struct ldma_inst_data dma3 = {
+ 	.name = "dma3",
+ 	.type = DMA_TYPE_MCPY,
+ 	.orrc = 16,
+ 	.chan_fc = false,
+ 	.desc_fod = false,
+ 	.desc_in_sram = true,
+ 	.valid_desc_fetch_ack = false,
+ };
+
+ static const struct ldma_inst_data toe_dma30 = {
+ 	.name = "toe_dma30",
+ 	.type = DMA_TYPE_MCPY,
+ 	.orrc = 16,
+ 	.chan_fc = false,
+ 	.desc_fod = false,
+ 	.desc_in_sram = true,
+ 	.valid_desc_fetch_ack = true,
+ };
+
+ static const struct ldma_inst_data toe_dma31 = {
+ 	.name = "toe_dma31",
+ 	.type = DMA_TYPE_MCPY,
+ 	.orrc = 16,
+ 	.chan_fc = false,
+ 	.desc_fod = false,
+ 	.desc_in_sram = true,
+ 	.valid_desc_fetch_ack = true,
+ };
+
+ static const struct of_device_id intel_ldma_match[] = {
+ 	{ .compatible = "intel,lgm-cdma", .data = &dma0},
+ 	{ .compatible = "intel,lgm-dma2tx", .data = &dma2tx},
+ 	{ .compatible = "intel,lgm-dma1rx", .data = &dma1rx},
+ 	{ .compatible = "intel,lgm-dma1tx", .data = &dma1tx},
+ 	{ .compatible = "intel,lgm-dma0tx", .data = &dma0tx},
+ 	{ .compatible = "intel,lgm-dma3", .data = &dma3},
+ 	{ .compatible = "intel,lgm-toe-dma30", .data = &toe_dma30},
+ 	{ .compatible = "intel,lgm-toe-dma31", .data = &toe_dma31},
+ 	{}
+ };
+
+ static int intel_ldma_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+ 	struct dma_device *dma_dev;
+ 	unsigned long ch_mask;
+ 	struct ldma_chan *c;
+ 	struct ldma_port *p;
+ 	struct ldma_dev *d;
+ 	u32 id, bitn = 32, j;
+ 	int i, ret;
+
+ 	d = devm_kzalloc(dev, sizeof(*d), GFP_KERNEL);
+ 	if (!d)
+ 		return -ENOMEM;
+
+ 	/* Link controller to platform device */
+ 	d->dev = &pdev->dev;
+
+ 	d->inst = device_get_match_data(dev);
+ 	if (!d->inst) {
+ 		dev_err(dev, "No device match found\n");
+ 		return -ENODEV;
+ 	}
+
+ 	d->base = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(d->base))
+ 		return PTR_ERR(d->base);
+
+ 	/* Power up and reset the dma engine, some DMAs always on?? */
+ 	d->core_clk = devm_clk_get_optional(dev, NULL);
+ 	if (IS_ERR(d->core_clk))
+ 		return PTR_ERR(d->core_clk);
+ 	clk_prepare_enable(d->core_clk);
+
+ 	d->rst = devm_reset_control_get_optional(dev, NULL);
+ 	if (IS_ERR(d->rst))
+ 		return PTR_ERR(d->rst);
+ 	reset_control_deassert(d->rst);
+
+ 	ret = devm_add_action_or_reset(dev, ldma_clk_disable, d);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to devm_add_action_or_reset, %d\n", ret);
+ 		return ret;
+ 	}
+
+ 	id = readl(d->base + DMA_ID);
+ 	d->chan_nrs = FIELD_GET(DMA_ID_CHNR, id);
+ 	d->port_nrs = FIELD_GET(DMA_ID_PNR, id);
+ 	d->ver = FIELD_GET(DMA_ID_REV, id);
+
+ 	if (id & DMA_ID_AW_36B)
+ 		d->flags |= DMA_ADDR_36BIT;
+
+ 	if (IS_ENABLED(CONFIG_64BIT) && (id & DMA_ID_AW_36B))
+ 		bitn = 36;
+
+ 	if (id & DMA_ID_DW_128B)
+ 		d->flags |= DMA_DATA_128BIT;
+
+ 	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(bitn));
+ 	if (ret) {
+ 		dev_err(dev, "No usable DMA configuration\n");
+ 		return ret;
+ 	}
+
+ 	if (d->ver == DMA_VER22) {
+ 		ret = ldma_init_v22(d, pdev);
+ 		if (ret)
+ 			return ret;
+ 	}
+
+ 	ret = device_property_read_u32(dev, "dma-channel-mask", &d->channels_mask);
+ 	if (ret < 0)
+ 		d->channels_mask = GENMASK(d->chan_nrs - 1, 0);
+
+ 	dma_dev = &d->dma_dev;
+
+ 	dma_cap_zero(dma_dev->cap_mask);
+ 	dma_cap_set(DMA_SLAVE, dma_dev->cap_mask);
+
+ 	/* Channel initializations */
+ 	INIT_LIST_HEAD(&dma_dev->channels);
+
+ 	/* Port Initializations */
+ 	d->ports = devm_kcalloc(dev, d->port_nrs, sizeof(*p), GFP_KERNEL);
+ 	if (!d->ports)
+ 		return -ENOMEM;
+
+ 	/* Channels Initializations */
+ 	d->chans = devm_kcalloc(d->dev, d->chan_nrs, sizeof(*c), GFP_KERNEL);
+ 	if (!d->chans)
+ 		return -ENOMEM;
+
+ 	for (i = 0; i < d->port_nrs; i++) {
+ 		p = &d->ports[i];
+ 		p->portid = i;
+ 		p->ldev = d;
+ 	}
+
+ 	ret = ldma_cfg_init(d);
+ 	if (ret)
+ 		return ret;
+
+ 	dma_dev->dev = &pdev->dev;
+
+ 	ch_mask = (unsigned long)d->channels_mask;
+ 	for_each_set_bit(j, &ch_mask, d->chan_nrs) {
+ 		if (d->ver == DMA_VER22)
+ 			ldma_dma_init_v22(j, d);
+ 		else
+ 			ldma_dma_init_v3X(j, d);
+ 	}
+
+ 	dma_dev->device_alloc_chan_resources = ldma_alloc_chan_resources;
+ 	dma_dev->device_free_chan_resources = ldma_free_chan_resources;
+ 	dma_dev->device_terminate_all = ldma_terminate_all;
+ 	dma_dev->device_issue_pending = ldma_issue_pending;
+ 	dma_dev->device_tx_status = ldma_tx_status;
+ 	dma_dev->device_resume = ldma_resume_chan;
+ 	dma_dev->device_pause = ldma_pause_chan;
+ 	dma_dev->device_prep_slave_sg = ldma_prep_slave_sg;
+
+ 	if (d->ver == DMA_VER22) {
+ 		dma_dev->device_config = ldma_slave_config;
+ 		dma_dev->device_synchronize = ldma_synchronize;
+ 		dma_dev->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
+ 		dma_dev->dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
+ 		dma_dev->directions = BIT(DMA_MEM_TO_DEV) |
+ 				      BIT(DMA_DEV_TO_MEM);
+ 		dma_dev->residue_granularity =
+ 				      DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
+ 	}
+
+ 	platform_set_drvdata(pdev, d);
+
+ 	ldma_dev_init(d);
+
+ 	ret = dma_async_device_register(dma_dev);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to register slave DMA engine device\n");
+ 		return ret;
+ 	}
+
+ 	ret = of_dma_controller_register(pdev->dev.of_node, ldma_xlate, d);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to register of DMA controller\n");
+ 		dma_async_device_unregister(dma_dev);
+ 		return ret;
+ 	}
+
+ 	dev_info(dev, "Init done - rev: %x, ports: %d channels: %d\n", d->ver,
+ 		 d->port_nrs, d->chan_nrs);
+
+ 	return 0;
+ }
+
+ static struct platform_driver intel_ldma_driver = {
+ 	.probe = intel_ldma_probe,
+ 	.driver = {
+ 		.name = DRIVER_NAME,
+ 		.of_match_table = intel_ldma_match,
+ 	},
+ };
+
+ /*
+  * Perform this driver as device_initcall to make sure initialization happens
+  * before its DMA clients of some are platform specific and also to provide
+  * registered DMA channels and DMA capabilities to clients before their
+  * initialization.
+  */
+ static int __init intel_ldma_init(void)
+ {
+ 	return platform_driver_register(&intel_ldma_driver);
+ }
+
+ device_initcall(intel_ldma_init);
-14
drivers/dma/mmp_pdma.c
···
  #include <linux/of_device.h>
  #include <linux/of_dma.h>
  #include <linux/of.h>
- #include <linux/dma/mmp-pdma.h>

  #include "dmaengine.h"
···
  	.probe = mmp_pdma_probe,
  	.remove = mmp_pdma_remove,
  };
-
- bool mmp_pdma_filter_fn(struct dma_chan *chan, void *param)
- {
- 	struct mmp_pdma_chan *c = to_mmp_pdma_chan(chan);
-
- 	if (chan->device->dev->driver != &mmp_pdma_driver.driver)
- 		return false;
-
- 	c->drcmr = *(unsigned int *)param;
-
- 	return true;
- }
- EXPORT_SYMBOL_GPL(mmp_pdma_filter_fn);

  module_platform_driver(mmp_pdma_driver);
+3 -1
drivers/dma/owl-dma.c
···
  }

  static const struct of_device_id owl_dma_match[] = {
- 	{ .compatible = "actions,s900-dma", .data = (void *)S900_DMA,},
+ 	{ .compatible = "actions,s500-dma", .data = (void *)S900_DMA,},
  	{ .compatible = "actions,s700-dma", .data = (void *)S700_DMA,},
+ 	{ .compatible = "actions,s900-dma", .data = (void *)S900_DMA,},
  	{ /* sentinel */ },
  };
  MODULE_DEVICE_TABLE(of, owl_dma_match);
···
  	owl_dma_free(od);

  	clk_disable_unprepare(od->clk);
+ 	dma_pool_destroy(od->lli_pool);

  	return 0;
  }
+15 -14
drivers/dma/qcom/bam_dma.c
···
  		dev_err(bdev->dev, "num-ees unspecified in dt\n");
  	}

- 	bdev->bamclk = devm_clk_get(bdev->dev, "bam_clk");
- 	if (IS_ERR(bdev->bamclk)) {
- 		if (!bdev->controlled_remotely)
- 			return PTR_ERR(bdev->bamclk);
+ 	if (bdev->controlled_remotely)
+ 		bdev->bamclk = devm_clk_get_optional(bdev->dev, "bam_clk");
+ 	else
+ 		bdev->bamclk = devm_clk_get(bdev->dev, "bam_clk");

- 		bdev->bamclk = NULL;
- 	}
+ 	if (IS_ERR(bdev->bamclk))
+ 		return PTR_ERR(bdev->bamclk);

  	ret = clk_prepare_enable(bdev->bamclk);
  	if (ret) {
···
  	if (ret)
  		goto err_unregister_dma;

- 	if (bdev->controlled_remotely) {
+ 	if (!bdev->bamclk) {
  		pm_runtime_disable(&pdev->dev);
  		return 0;
  	}
···
  {
  	struct bam_device *bdev = dev_get_drvdata(dev);

- 	if (!bdev->controlled_remotely)
+ 	if (bdev->bamclk) {
  		pm_runtime_force_suspend(dev);
-
- 	clk_unprepare(bdev->bamclk);
+ 		clk_unprepare(bdev->bamclk);
+ 	}

  	return 0;
  }
···
  	struct bam_device *bdev = dev_get_drvdata(dev);
  	int ret;

- 	ret = clk_prepare(bdev->bamclk);
- 	if (ret)
- 		return ret;
+ 	if (bdev->bamclk) {
+ 		ret = clk_prepare(bdev->bamclk);
+ 		if (ret)
+ 			return ret;

- 	if (!bdev->controlled_remotely)
  		pm_runtime_force_resume(dev);
+ 	}

  	return 0;
  }
drivers/dma/qcom/gpi.c (+2 -2)
···
 	gpi_write_reg(gpii, addr, val);
 }
 
-static inline void
+static __always_inline void
 gpi_update_reg(struct gpii *gpii, u32 offset, u32 mask, u32 val)
 {
 	void __iomem *addr = gpii->regs + offset;
···
 
 		tre->dword[3] = u32_encode_bits(TRE_TYPE_DMA, TRE_FLAGS_TYPE);
 		tre->dword[3] |= u32_encode_bits(1, TRE_FLAGS_IEOT);
-	};
+	}
 
 	for (i = 0; i < tre_idx; i++)
 		dev_dbg(dev, "TRE:%d %x:%x:%x:%x\n", i, desc->tre[i].dword[0],
drivers/dma/sh/rcar-dmac.c (+79 -35)
···
  * struct rcar_dmac - R-Car Gen2 DMA Controller
  * @engine: base DMA engine object
  * @dev: the hardware device
- * @iomem: remapped I/O memory base
+ * @dmac_base: remapped base register block
+ * @chan_base: remapped channel register block (optional)
  * @n_channels: number of available channels
  * @channels: array of DMAC channels
  * @channels_mask: bitfield of which DMA channels are managed by this driver
···
 struct rcar_dmac {
 	struct dma_device engine;
 	struct device *dev;
-	void __iomem *iomem;
+	void __iomem *dmac_base;
+	void __iomem *chan_base;
 
 	unsigned int n_channels;
 	struct rcar_dmac_chan *channels;
···
 };
 
 #define to_rcar_dmac(d)		container_of(d, struct rcar_dmac, engine)
+
+#define for_each_rcar_dmac_chan(i, dmac, chan)						\
+	for (i = 0, chan = &(dmac)->channels[0]; i < (dmac)->n_channels; i++, chan++)	\
+		if (!((dmac)->channels_mask & BIT(i))) continue; else
 
 /*
  * struct rcar_dmac_of_data - This driver's OF data
···
 #define RCAR_DMAOR_PRI_ROUND_ROBIN	(3 << 8)
 #define RCAR_DMAOR_AE			(1 << 2)
 #define RCAR_DMAOR_DME			(1 << 0)
-#define RCAR_DMACHCLR			0x0080
+#define RCAR_DMACHCLR			0x0080	/* Not on R-Car V3U */
 #define RCAR_DMADPSEC			0x00a0
 
 #define RCAR_DMASAR			0x0000
···
 #define RCAR_DMAFIXDAR			0x0014
 #define RCAR_DMAFIXDPBASE		0x0060
 
+/* For R-Car V3U */
+#define RCAR_V3U_DMACHCLR		0x0100
+
 /* Hardcode the MEMCPY transfer size to 4 bytes. */
 #define RCAR_DMAC_MEMCPY_XFER_SIZE	4
···
 static void rcar_dmac_write(struct rcar_dmac *dmac, u32 reg, u32 data)
 {
 	if (reg == RCAR_DMAOR)
-		writew(data, dmac->iomem + reg);
+		writew(data, dmac->dmac_base + reg);
 	else
-		writel(data, dmac->iomem + reg);
+		writel(data, dmac->dmac_base + reg);
 }
 
 static u32 rcar_dmac_read(struct rcar_dmac *dmac, u32 reg)
 {
 	if (reg == RCAR_DMAOR)
-		return readw(dmac->iomem + reg);
+		return readw(dmac->dmac_base + reg);
 	else
-		return readl(dmac->iomem + reg);
+		return readl(dmac->dmac_base + reg);
 }
 
 static u32 rcar_dmac_chan_read(struct rcar_dmac_chan *chan, u32 reg)
···
 		writew(data, chan->iomem + reg);
 	else
 		writel(data, chan->iomem + reg);
+}
+
+static void rcar_dmac_chan_clear(struct rcar_dmac *dmac,
+				 struct rcar_dmac_chan *chan)
+{
+	if (dmac->chan_base)
+		rcar_dmac_chan_write(chan, RCAR_V3U_DMACHCLR, 1);
+	else
+		rcar_dmac_write(dmac, RCAR_DMACHCLR, BIT(chan->index));
+}
+
+static void rcar_dmac_chan_clear_all(struct rcar_dmac *dmac)
+{
+	struct rcar_dmac_chan *chan;
+	unsigned int i;
+
+	if (dmac->chan_base) {
+		for_each_rcar_dmac_chan(i, dmac, chan)
+			rcar_dmac_chan_write(chan, RCAR_V3U_DMACHCLR, 1);
+	} else {
+		rcar_dmac_write(dmac, RCAR_DMACHCLR, dmac->channels_mask);
+	}
 }
 
 /* -----------------------------------------------------------------------------
···
 	u16 dmaor;
 
 	/* Clear all channels and enable the DMAC globally. */
-	rcar_dmac_write(dmac, RCAR_DMACHCLR, dmac->channels_mask);
+	rcar_dmac_chan_clear_all(dmac);
 	rcar_dmac_write(dmac, RCAR_DMAOR,
 			RCAR_DMAOR_PRI_FIXED | RCAR_DMAOR_DME);
···
 static void rcar_dmac_stop_all_chan(struct rcar_dmac *dmac)
 {
+	struct rcar_dmac_chan *chan;
 	unsigned int i;
 
 	/* Stop all channels. */
-	for (i = 0; i < dmac->n_channels; ++i) {
-		struct rcar_dmac_chan *chan = &dmac->channels[i];
-
-		if (!(dmac->channels_mask & BIT(i)))
-			continue;
-
+	for_each_rcar_dmac_chan(i, dmac, chan) {
 		/* Stop and reinitialize the channel. */
 		spin_lock_irq(&chan->lock);
 		rcar_dmac_chan_halt(chan);
···
 	 * because channel is already stopped in error case.
 	 * We need to clear register and check DE bit as recovery.
 	 */
-	rcar_dmac_write(dmac, RCAR_DMACHCLR, 1 << chan->index);
+	rcar_dmac_chan_clear(dmac, chan);
 	rcar_dmac_chcr_de_barrier(chan);
 	reinit = true;
 	goto spin_lock_end;
···
  */
 
 static int rcar_dmac_chan_probe(struct rcar_dmac *dmac,
-				struct rcar_dmac_chan *rchan,
-				const struct rcar_dmac_of_data *data,
-				unsigned int index)
+				struct rcar_dmac_chan *rchan)
 {
 	struct platform_device *pdev = to_platform_device(dmac->dev);
 	struct dma_chan *chan = &rchan->chan;
···
 	char *irqname;
 	int ret;
 
-	rchan->index = index;
-	rchan->iomem = dmac->iomem + data->chan_offset_base +
-		       data->chan_offset_stride * index;
 	rchan->mid_rid = -EINVAL;
 
 	spin_lock_init(&rchan->lock);
···
 	INIT_LIST_HEAD(&rchan->desc.wait);
 
 	/* Request the channel interrupt. */
-	sprintf(pdev_irqname, "ch%u", index);
+	sprintf(pdev_irqname, "ch%u", rchan->index);
 	rchan->irq = platform_get_irq_byname(pdev, pdev_irqname);
 	if (rchan->irq < 0)
 		return -ENODEV;
 
 	irqname = devm_kasprintf(dmac->dev, GFP_KERNEL, "%s:%u",
-				 dev_name(dmac->dev), index);
+				 dev_name(dmac->dev), rchan->index);
 	if (!irqname)
 		return -ENOMEM;
···
 		DMA_SLAVE_BUSWIDTH_2_BYTES | DMA_SLAVE_BUSWIDTH_4_BYTES |
 		DMA_SLAVE_BUSWIDTH_8_BYTES | DMA_SLAVE_BUSWIDTH_16_BYTES |
 		DMA_SLAVE_BUSWIDTH_32_BYTES | DMA_SLAVE_BUSWIDTH_64_BYTES;
-	struct dma_device *engine;
-	struct rcar_dmac *dmac;
 	const struct rcar_dmac_of_data *data;
+	struct rcar_dmac_chan *chan;
+	struct dma_device *engine;
+	void __iomem *chan_base;
+	struct rcar_dmac *dmac;
 	unsigned int i;
 	int ret;
···
 		return -ENOMEM;
 
 	/* Request resources. */
-	dmac->iomem = devm_platform_ioremap_resource(pdev, 0);
-	if (IS_ERR(dmac->iomem))
-		return PTR_ERR(dmac->iomem);
+	dmac->dmac_base = devm_platform_ioremap_resource(pdev, 0);
+	if (IS_ERR(dmac->dmac_base))
+		return PTR_ERR(dmac->dmac_base);
+
+	if (!data->chan_offset_base) {
+		dmac->chan_base = devm_platform_ioremap_resource(pdev, 1);
+		if (IS_ERR(dmac->chan_base))
+			return PTR_ERR(dmac->chan_base);
+
+		chan_base = dmac->chan_base;
+	} else {
+		chan_base = dmac->dmac_base + data->chan_offset_base;
+	}
+
+	for_each_rcar_dmac_chan(i, dmac, chan) {
+		chan->index = i;
+		chan->iomem = chan_base + i * data->chan_offset_stride;
+	}
 
 	/* Enable runtime PM and initialize the device. */
 	pm_runtime_enable(&pdev->dev);
···
 	INIT_LIST_HEAD(&engine->channels);
 
-	for (i = 0; i < dmac->n_channels; ++i) {
-		if (!(dmac->channels_mask & BIT(i)))
-			continue;
-
-		ret = rcar_dmac_chan_probe(dmac, &dmac->channels[i], data, i);
+	for_each_rcar_dmac_chan(i, dmac, chan) {
+		ret = rcar_dmac_chan_probe(dmac, chan);
 		if (ret < 0)
 			goto error;
 	}
···
 static const struct rcar_dmac_of_data rcar_dmac_data = {
-	.chan_offset_base = 0x8000,
-	.chan_offset_stride = 0x80,
+	.chan_offset_base	= 0x8000,
+	.chan_offset_stride	= 0x80,
+};
+
+static const struct rcar_dmac_of_data rcar_v3u_dmac_data = {
+	.chan_offset_base	= 0x0,
+	.chan_offset_stride	= 0x1000,
 };
 
 static const struct of_device_id rcar_dmac_of_ids[] = {
 	{
 		.compatible = "renesas,rcar-dmac",
 		.data = &rcar_dmac_data,
+	}, {
+		.compatible = "renesas,dmac-r8a779a0",
+		.data = &rcar_v3u_dmac_data,
 	},
 	{ /* Sentinel */ }
 };
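The for_each_rcar_dmac_chan() macro introduced above ends in `if (...) continue; else`, a trick that lets the macro skip channels whose bit is clear in channels_mask while still composing safely with either a single statement or a braced body after it. The same shape in plain userspace C (for_each_masked_entry() and sum_masked() are hypothetical names for illustration):

```c
#include <assert.h>

/* Iterate over an array, skipping entries whose bit is clear in @mask.
 * The trailing "if (...) continue; else" makes the macro usable both
 * with a single statement and with a braced block, and keeps a dangling
 * "else" after the macro from binding to the hidden "if". */
#define for_each_masked_entry(i, arr, n, mask, ptr)				\
	for ((i) = 0, (ptr) = &(arr)[0]; (i) < (n); (i)++, (ptr)++)		\
		if (!((mask) & (1u << (i)))) continue; else

struct entry { int value; };

/* Sum the values of the entries selected by @mask. */
static int sum_masked(struct entry *entries, unsigned int n, unsigned int mask)
{
	struct entry *e;
	unsigned int i;
	int sum = 0;

	for_each_masked_entry(i, entries, n, mask, e)
		sum += e->value;

	return sum;
}
```

Folding the mask test into the iterator is what lets both rcar_dmac_stop_all_chan() and the probe loop in the diff drop their open-coded `if (!(dmac->channels_mask & BIT(i))) continue;` checks.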
drivers/dma/sirf-dma.c (-1170, file deleted; removed content below, truncated at the chunk boundary)

// SPDX-License-Identifier: GPL-2.0-or-later
/*
 * DMA controller driver for CSR SiRFprimaII
 *
 * Copyright (c) 2011 Cambridge Silicon Radio Limited, a CSR plc group company.
 */

#include <linux/module.h>
#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>
#include <linux/pm_runtime.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/slab.h>
#include <linux/of_irq.h>
#include <linux/of_address.h>
#include <linux/of_device.h>
#include <linux/of_platform.h>
#include <linux/clk.h>
#include <linux/of_dma.h>
#include <linux/sirfsoc_dma.h>

#include "dmaengine.h"

#define SIRFSOC_DMA_VER_A7V1			1
#define SIRFSOC_DMA_VER_A7V2			2
#define SIRFSOC_DMA_VER_A6			4

#define SIRFSOC_DMA_DESCRIPTORS			16
#define SIRFSOC_DMA_CHANNELS			16
#define SIRFSOC_DMA_TABLE_NUM			256

#define SIRFSOC_DMA_CH_ADDR			0x00
#define SIRFSOC_DMA_CH_XLEN			0x04
#define SIRFSOC_DMA_CH_YLEN			0x08
#define SIRFSOC_DMA_CH_CTRL			0x0C

#define SIRFSOC_DMA_WIDTH_0			0x100
#define SIRFSOC_DMA_CH_VALID			0x140
#define SIRFSOC_DMA_CH_INT			0x144
#define SIRFSOC_DMA_INT_EN			0x148
#define SIRFSOC_DMA_INT_EN_CLR			0x14C
#define SIRFSOC_DMA_CH_LOOP_CTRL		0x150
#define SIRFSOC_DMA_CH_LOOP_CTRL_CLR		0x154
#define SIRFSOC_DMA_WIDTH_ATLAS7		0x10
#define SIRFSOC_DMA_VALID_ATLAS7		0x14
#define SIRFSOC_DMA_INT_ATLAS7			0x18
#define SIRFSOC_DMA_INT_EN_ATLAS7		0x1c
#define SIRFSOC_DMA_LOOP_CTRL_ATLAS7		0x20
#define SIRFSOC_DMA_CUR_DATA_ADDR		0x34
#define SIRFSOC_DMA_MUL_ATLAS7			0x38
#define SIRFSOC_DMA_CH_LOOP_CTRL_ATLAS7		0x158
#define SIRFSOC_DMA_CH_LOOP_CTRL_CLR_ATLAS7	0x15C
#define SIRFSOC_DMA_IOBG_SCMD_EN		0x800
#define SIRFSOC_DMA_EARLY_RESP_SET		0x818
#define SIRFSOC_DMA_EARLY_RESP_CLR		0x81C

#define SIRFSOC_DMA_MODE_CTRL_BIT		4
#define SIRFSOC_DMA_DIR_CTRL_BIT		5
#define SIRFSOC_DMA_MODE_CTRL_BIT_ATLAS7	2
#define SIRFSOC_DMA_CHAIN_CTRL_BIT_ATLAS7	3
#define SIRFSOC_DMA_DIR_CTRL_BIT_ATLAS7		4
#define SIRFSOC_DMA_TAB_NUM_ATLAS7		7
#define SIRFSOC_DMA_CHAIN_INT_BIT_ATLAS7	5
#define SIRFSOC_DMA_CHAIN_FLAG_SHIFT_ATLAS7	25
#define SIRFSOC_DMA_CHAIN_ADDR_SHIFT		32

#define SIRFSOC_DMA_INT_FINI_INT_ATLAS7		BIT(0)
#define SIRFSOC_DMA_INT_CNT_INT_ATLAS7		BIT(1)
#define SIRFSOC_DMA_INT_PAU_INT_ATLAS7		BIT(2)
#define SIRFSOC_DMA_INT_LOOP_INT_ATLAS7		BIT(3)
#define SIRFSOC_DMA_INT_INV_INT_ATLAS7		BIT(4)
#define SIRFSOC_DMA_INT_END_INT_ATLAS7		BIT(5)
#define SIRFSOC_DMA_INT_ALL_ATLAS7		0x3F

/* xlen and dma_width register is in 4 bytes boundary */
#define SIRFSOC_DMA_WORD_LEN			4
#define SIRFSOC_DMA_XLEN_MAX_V1			0x800
#define SIRFSOC_DMA_XLEN_MAX_V2			0x1000

struct sirfsoc_dma_desc {
	struct dma_async_tx_descriptor	desc;
	struct list_head		node;

	/* SiRFprimaII 2D-DMA parameters */

	int		xlen;		/* DMA xlen */
	int		ylen;		/* DMA ylen */
	int		width;		/* DMA width */
	int		dir;
	bool		cyclic;		/* is loop DMA? */
	bool		chain;		/* is chain DMA? */
	u32		addr;		/* DMA buffer address */
	u64		chain_table[SIRFSOC_DMA_TABLE_NUM]; /* chain tbl */
};

struct sirfsoc_dma_chan {
	struct dma_chan		chan;
	struct list_head	free;
	struct list_head	prepared;
	struct list_head	queued;
	struct list_head	active;
	struct list_head	completed;
	unsigned long		happened_cyclic;
	unsigned long		completed_cyclic;

	/* Lock for this structure */
	spinlock_t		lock;

	int			mode;
};

struct sirfsoc_dma_regs {
	u32			ctrl[SIRFSOC_DMA_CHANNELS];
	u32			interrupt_en;
};

struct sirfsoc_dma {
	struct dma_device	dma;
	struct tasklet_struct	tasklet;
	struct sirfsoc_dma_chan	channels[SIRFSOC_DMA_CHANNELS];
	void __iomem		*base;
	int			irq;
	struct clk		*clk;
	int			type;
	void (*exec_desc)(struct sirfsoc_dma_desc *sdesc,
			  int cid, int burst_mode, void __iomem *base);
	struct sirfsoc_dma_regs	regs_save;
};

struct sirfsoc_dmadata {
	void (*exec)(struct sirfsoc_dma_desc *sdesc,
		     int cid, int burst_mode, void __iomem *base);
	int type;
};

enum sirfsoc_dma_chain_flag {
	SIRFSOC_DMA_CHAIN_NORMAL = 0x01,
	SIRFSOC_DMA_CHAIN_PAUSE = 0x02,
	SIRFSOC_DMA_CHAIN_LOOP = 0x03,
	SIRFSOC_DMA_CHAIN_END = 0x04
};

#define DRV_NAME	"sirfsoc_dma"

static int sirfsoc_dma_runtime_suspend(struct device *dev);

/* Convert struct dma_chan to struct sirfsoc_dma_chan */
static inline
struct sirfsoc_dma_chan *dma_chan_to_sirfsoc_dma_chan(struct dma_chan *c)
{
	return container_of(c, struct sirfsoc_dma_chan, chan);
}

/* Convert struct dma_chan to struct sirfsoc_dma */
static inline struct sirfsoc_dma *dma_chan_to_sirfsoc_dma(struct dma_chan *c)
{
	struct sirfsoc_dma_chan *schan = dma_chan_to_sirfsoc_dma_chan(c);
	return container_of(schan, struct sirfsoc_dma, channels[c->chan_id]);
}

static void sirfsoc_dma_execute_hw_a7v2(struct sirfsoc_dma_desc *sdesc,
		int cid, int burst_mode, void __iomem *base)
{
	if (sdesc->chain) {
		/* DMA v2 HW chain mode */
		writel_relaxed((sdesc->dir << SIRFSOC_DMA_DIR_CTRL_BIT_ATLAS7) |
			       (sdesc->chain <<
				SIRFSOC_DMA_CHAIN_CTRL_BIT_ATLAS7) |
			       (0x8 << SIRFSOC_DMA_TAB_NUM_ATLAS7) | 0x3,
			       base + SIRFSOC_DMA_CH_CTRL);
	} else {
		/* DMA v2 legacy mode */
		writel_relaxed(sdesc->xlen, base + SIRFSOC_DMA_CH_XLEN);
		writel_relaxed(sdesc->ylen, base + SIRFSOC_DMA_CH_YLEN);
		writel_relaxed(sdesc->width, base + SIRFSOC_DMA_WIDTH_ATLAS7);
		writel_relaxed((sdesc->width*((sdesc->ylen+1)>>1)),
				base + SIRFSOC_DMA_MUL_ATLAS7);
		writel_relaxed((sdesc->dir << SIRFSOC_DMA_DIR_CTRL_BIT_ATLAS7) |
			       (sdesc->chain <<
				SIRFSOC_DMA_CHAIN_CTRL_BIT_ATLAS7) |
			       0x3, base + SIRFSOC_DMA_CH_CTRL);
	}
	writel_relaxed(sdesc->chain ? SIRFSOC_DMA_INT_END_INT_ATLAS7 :
		       (SIRFSOC_DMA_INT_FINI_INT_ATLAS7 |
			SIRFSOC_DMA_INT_LOOP_INT_ATLAS7),
		       base + SIRFSOC_DMA_INT_EN_ATLAS7);
	writel(sdesc->addr, base + SIRFSOC_DMA_CH_ADDR);
	if (sdesc->cyclic)
		writel(0x10001, base + SIRFSOC_DMA_LOOP_CTRL_ATLAS7);
}

static void sirfsoc_dma_execute_hw_a7v1(struct sirfsoc_dma_desc *sdesc,
		int cid, int burst_mode, void __iomem *base)
{
	writel_relaxed(1, base + SIRFSOC_DMA_IOBG_SCMD_EN);
	writel_relaxed((1 << cid), base + SIRFSOC_DMA_EARLY_RESP_SET);
	writel_relaxed(sdesc->width, base + SIRFSOC_DMA_WIDTH_0 + cid * 4);
	writel_relaxed(cid | (burst_mode << SIRFSOC_DMA_MODE_CTRL_BIT) |
		       (sdesc->dir << SIRFSOC_DMA_DIR_CTRL_BIT),
		       base + cid * 0x10 + SIRFSOC_DMA_CH_CTRL);
	writel_relaxed(sdesc->xlen, base + cid * 0x10 + SIRFSOC_DMA_CH_XLEN);
	writel_relaxed(sdesc->ylen, base + cid * 0x10 + SIRFSOC_DMA_CH_YLEN);
	writel_relaxed(readl_relaxed(base + SIRFSOC_DMA_INT_EN) |
		       (1 << cid), base + SIRFSOC_DMA_INT_EN);
	writel(sdesc->addr >> 2, base + cid * 0x10 + SIRFSOC_DMA_CH_ADDR);
	if (sdesc->cyclic) {
		writel((1 << cid) | 1 << (cid + 16) |
		       readl_relaxed(base + SIRFSOC_DMA_CH_LOOP_CTRL_ATLAS7),
		       base + SIRFSOC_DMA_CH_LOOP_CTRL_ATLAS7);
	}

}

static void sirfsoc_dma_execute_hw_a6(struct sirfsoc_dma_desc *sdesc,
		int cid, int burst_mode, void __iomem *base)
{
	writel_relaxed(sdesc->width, base + SIRFSOC_DMA_WIDTH_0 + cid * 4);
	writel_relaxed(cid | (burst_mode << SIRFSOC_DMA_MODE_CTRL_BIT) |
		       (sdesc->dir << SIRFSOC_DMA_DIR_CTRL_BIT),
		       base + cid * 0x10 + SIRFSOC_DMA_CH_CTRL);
	writel_relaxed(sdesc->xlen, base + cid * 0x10 + SIRFSOC_DMA_CH_XLEN);
	writel_relaxed(sdesc->ylen, base + cid * 0x10 + SIRFSOC_DMA_CH_YLEN);
	writel_relaxed(readl_relaxed(base + SIRFSOC_DMA_INT_EN) |
		       (1 << cid), base + SIRFSOC_DMA_INT_EN);
	writel(sdesc->addr >> 2, base + cid * 0x10 + SIRFSOC_DMA_CH_ADDR);
	if (sdesc->cyclic) {
		writel((1 << cid) | 1 << (cid + 16) |
		       readl_relaxed(base + SIRFSOC_DMA_CH_LOOP_CTRL),
		       base + SIRFSOC_DMA_CH_LOOP_CTRL);
	}

}

/* Execute all queued DMA descriptors */
static void sirfsoc_dma_execute(struct sirfsoc_dma_chan *schan)
{
	struct sirfsoc_dma *sdma = dma_chan_to_sirfsoc_dma(&schan->chan);
	int cid = schan->chan.chan_id;
	struct sirfsoc_dma_desc *sdesc = NULL;
	void __iomem *base;

	/*
	 * lock has been held by functions calling this, so we don't hold
	 * lock again
	 */
	base = sdma->base;
	sdesc = list_first_entry(&schan->queued, struct sirfsoc_dma_desc,
				 node);
	/* Move the first queued descriptor to active list */
	list_move_tail(&sdesc->node, &schan->active);

	if (sdma->type == SIRFSOC_DMA_VER_A7V2)
		cid = 0;

	/* Start the DMA transfer */
	sdma->exec_desc(sdesc, cid, schan->mode, base);

	if (sdesc->cyclic)
		schan->happened_cyclic = schan->completed_cyclic = 0;
}

/* Interrupt handler */
static irqreturn_t sirfsoc_dma_irq(int irq, void *data)
{
	struct sirfsoc_dma *sdma = data;
	struct sirfsoc_dma_chan *schan;
	struct sirfsoc_dma_desc *sdesc = NULL;
	u32 is;
	bool chain;
	int ch;
	void __iomem *reg;

	switch (sdma->type) {
	case SIRFSOC_DMA_VER_A6:
	case SIRFSOC_DMA_VER_A7V1:
		is = readl(sdma->base + SIRFSOC_DMA_CH_INT);
		reg = sdma->base + SIRFSOC_DMA_CH_INT;
		while ((ch = fls(is) - 1) >= 0) {
			is &= ~(1 << ch);
			writel_relaxed(1 << ch, reg);
			schan = &sdma->channels[ch];
			spin_lock(&schan->lock);
			sdesc = list_first_entry(&schan->active,
						 struct sirfsoc_dma_desc, node);
			if (!sdesc->cyclic) {
				/* Execute queued descriptors */
				list_splice_tail_init(&schan->active,
						      &schan->completed);
				dma_cookie_complete(&sdesc->desc);
				if (!list_empty(&schan->queued))
					sirfsoc_dma_execute(schan);
			} else
				schan->happened_cyclic++;
			spin_unlock(&schan->lock);
		}
		break;

	case SIRFSOC_DMA_VER_A7V2:
		is = readl(sdma->base + SIRFSOC_DMA_INT_ATLAS7);

		reg = sdma->base + SIRFSOC_DMA_INT_ATLAS7;
		writel_relaxed(SIRFSOC_DMA_INT_ALL_ATLAS7, reg);
		schan = &sdma->channels[0];
		spin_lock(&schan->lock);
		sdesc = list_first_entry(&schan->active,
					 struct sirfsoc_dma_desc, node);
		if (!sdesc->cyclic) {
			chain = sdesc->chain;
			if ((chain && (is & SIRFSOC_DMA_INT_END_INT_ATLAS7)) ||
				(!chain &&
				(is & SIRFSOC_DMA_INT_FINI_INT_ATLAS7))) {
				/* Execute queued descriptors */
				list_splice_tail_init(&schan->active,
						      &schan->completed);
				dma_cookie_complete(&sdesc->desc);
				if (!list_empty(&schan->queued))
					sirfsoc_dma_execute(schan);
			}
		} else if (sdesc->cyclic && (is &
					SIRFSOC_DMA_INT_LOOP_INT_ATLAS7))
			schan->happened_cyclic++;

		spin_unlock(&schan->lock);
		break;

	default:
		break;
	}

	/* Schedule tasklet */
	tasklet_schedule(&sdma->tasklet);

	return IRQ_HANDLED;
}

/* process completed descriptors */
static void sirfsoc_dma_process_completed(struct sirfsoc_dma *sdma)
{
	dma_cookie_t last_cookie = 0;
	struct sirfsoc_dma_chan *schan;
	struct sirfsoc_dma_desc *sdesc;
	struct dma_async_tx_descriptor *desc;
	unsigned long flags;
	unsigned long happened_cyclic;
	LIST_HEAD(list);
	int i;

	for (i = 0; i < sdma->dma.chancnt; i++) {
		schan = &sdma->channels[i];

		/* Get all completed descriptors */
		spin_lock_irqsave(&schan->lock, flags);
		if (!list_empty(&schan->completed)) {
			list_splice_tail_init(&schan->completed, &list);
			spin_unlock_irqrestore(&schan->lock, flags);

			/* Execute callbacks and run dependencies */
			list_for_each_entry(sdesc, &list, node) {
				desc = &sdesc->desc;

				dmaengine_desc_get_callback_invoke(desc, NULL);
				last_cookie = desc->cookie;
				dma_run_dependencies(desc);
			}

			/* Free descriptors */
			spin_lock_irqsave(&schan->lock, flags);
			list_splice_tail_init(&list, &schan->free);
			schan->chan.completed_cookie = last_cookie;
			spin_unlock_irqrestore(&schan->lock, flags);
		} else {
			if (list_empty(&schan->active)) {
				spin_unlock_irqrestore(&schan->lock, flags);
				continue;
			}

			/* for cyclic channel, desc is always in active list */
			sdesc = list_first_entry(&schan->active,
				struct sirfsoc_dma_desc, node);

			/* cyclic DMA */
			happened_cyclic = schan->happened_cyclic;
			spin_unlock_irqrestore(&schan->lock, flags);

			desc = &sdesc->desc;
			while (happened_cyclic != schan->completed_cyclic) {
				dmaengine_desc_get_callback_invoke(desc, NULL);
				schan->completed_cyclic++;
			}
		}
	}
}

/* DMA Tasklet */
static void sirfsoc_dma_tasklet(struct tasklet_struct *t)
{
	struct sirfsoc_dma *sdma = from_tasklet(sdma, t, tasklet);

	sirfsoc_dma_process_completed(sdma);
}

/* Submit descriptor to hardware */
static dma_cookie_t sirfsoc_dma_tx_submit(struct dma_async_tx_descriptor *txd)
{
	struct sirfsoc_dma_chan *schan = dma_chan_to_sirfsoc_dma_chan(txd->chan);
	struct sirfsoc_dma_desc *sdesc;
	unsigned long flags;
	dma_cookie_t cookie;

	sdesc = container_of(txd, struct sirfsoc_dma_desc, desc);

	spin_lock_irqsave(&schan->lock, flags);

	/* Move descriptor to queue */
	list_move_tail(&sdesc->node, &schan->queued);

	cookie = dma_cookie_assign(txd);

	spin_unlock_irqrestore(&schan->lock, flags);

	return cookie;
}

static int sirfsoc_dma_slave_config(struct dma_chan *chan,
				    struct dma_slave_config *config)
{
	struct sirfsoc_dma_chan *schan = dma_chan_to_sirfsoc_dma_chan(chan);
	unsigned long flags;

	if ((config->src_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES) ||
	    (config->dst_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES))
		return -EINVAL;

	spin_lock_irqsave(&schan->lock, flags);
	schan->mode = (config->src_maxburst == 4 ? 1 : 0);
	spin_unlock_irqrestore(&schan->lock, flags);

	return 0;
}

static int sirfsoc_dma_terminate_all(struct dma_chan *chan)
{
	struct sirfsoc_dma_chan *schan = dma_chan_to_sirfsoc_dma_chan(chan);
	struct sirfsoc_dma *sdma = dma_chan_to_sirfsoc_dma(&schan->chan);
	int cid = schan->chan.chan_id;
	unsigned long flags;

	spin_lock_irqsave(&schan->lock, flags);

	switch (sdma->type) {
	case SIRFSOC_DMA_VER_A7V1:
		writel_relaxed(1 << cid, sdma->base + SIRFSOC_DMA_INT_EN_CLR);
		writel_relaxed(1 << cid, sdma->base + SIRFSOC_DMA_CH_INT);
		writel_relaxed((1 << cid) | 1 << (cid + 16),
			       sdma->base +
			       SIRFSOC_DMA_CH_LOOP_CTRL_CLR_ATLAS7);
		writel_relaxed(1 << cid, sdma->base + SIRFSOC_DMA_CH_VALID);
		break;
	case SIRFSOC_DMA_VER_A7V2:
		writel_relaxed(0, sdma->base + SIRFSOC_DMA_INT_EN_ATLAS7);
		writel_relaxed(SIRFSOC_DMA_INT_ALL_ATLAS7,
			       sdma->base + SIRFSOC_DMA_INT_ATLAS7);
		writel_relaxed(0, sdma->base + SIRFSOC_DMA_LOOP_CTRL_ATLAS7);
		writel_relaxed(0, sdma->base + SIRFSOC_DMA_VALID_ATLAS7);
		break;
	case SIRFSOC_DMA_VER_A6:
		writel_relaxed(readl_relaxed(sdma->base + SIRFSOC_DMA_INT_EN) &
			       ~(1 << cid), sdma->base + SIRFSOC_DMA_INT_EN);
		writel_relaxed(readl_relaxed(sdma->base +
					     SIRFSOC_DMA_CH_LOOP_CTRL) &
			       ~((1 << cid) | 1 << (cid + 16)),
			       sdma->base + SIRFSOC_DMA_CH_LOOP_CTRL);
		writel_relaxed(1 << cid, sdma->base + SIRFSOC_DMA_CH_VALID);
		break;
	default:
		break;
	}

	list_splice_tail_init(&schan->active, &schan->free);
	list_splice_tail_init(&schan->queued, &schan->free);

	spin_unlock_irqrestore(&schan->lock, flags);

	return 0;
}

static int sirfsoc_dma_pause_chan(struct dma_chan *chan)
{
	struct sirfsoc_dma_chan *schan = dma_chan_to_sirfsoc_dma_chan(chan);
	struct sirfsoc_dma *sdma = dma_chan_to_sirfsoc_dma(&schan->chan);
	int cid = schan->chan.chan_id;
	unsigned long flags;

	spin_lock_irqsave(&schan->lock, flags);

	switch (sdma->type) {
	case SIRFSOC_DMA_VER_A7V1:
		writel_relaxed((1 << cid) | 1 << (cid + 16),
			       sdma->base +
			       SIRFSOC_DMA_CH_LOOP_CTRL_CLR_ATLAS7);
		break;
	case SIRFSOC_DMA_VER_A7V2:
		writel_relaxed(0, sdma->base + SIRFSOC_DMA_LOOP_CTRL_ATLAS7);
		break;
	case SIRFSOC_DMA_VER_A6:
		writel_relaxed(readl_relaxed(sdma->base +
					     SIRFSOC_DMA_CH_LOOP_CTRL) &
			       ~((1 << cid) | 1 << (cid + 16)),
			       sdma->base + SIRFSOC_DMA_CH_LOOP_CTRL);
		break;

	default:
		break;
	}

	spin_unlock_irqrestore(&schan->lock, flags);

	return 0;
}

static int sirfsoc_dma_resume_chan(struct dma_chan *chan)
{
	struct sirfsoc_dma_chan *schan = dma_chan_to_sirfsoc_dma_chan(chan);
	struct sirfsoc_dma *sdma = dma_chan_to_sirfsoc_dma(&schan->chan);
	int cid = schan->chan.chan_id;
	unsigned long flags;

	spin_lock_irqsave(&schan->lock, flags);
	switch (sdma->type) {
	case SIRFSOC_DMA_VER_A7V1:
		writel_relaxed((1 << cid) | 1 << (cid + 16),
			       sdma->base + SIRFSOC_DMA_CH_LOOP_CTRL_ATLAS7);
		break;
	case SIRFSOC_DMA_VER_A7V2:
		writel_relaxed(0x10001,
			       sdma->base + SIRFSOC_DMA_LOOP_CTRL_ATLAS7);
		break;
	case SIRFSOC_DMA_VER_A6:
		writel_relaxed(readl_relaxed(sdma->base +
					     SIRFSOC_DMA_CH_LOOP_CTRL) |
			       ((1 << cid) | 1 << (cid + 16)),
			       sdma->base + SIRFSOC_DMA_CH_LOOP_CTRL);
		break;

	default:
		break;
	}

	spin_unlock_irqrestore(&schan->lock, flags);

	return 0;
}

/* Alloc channel resources */
static int sirfsoc_dma_alloc_chan_resources(struct dma_chan *chan)
{
	struct sirfsoc_dma *sdma = dma_chan_to_sirfsoc_dma(chan);
	struct sirfsoc_dma_chan *schan = dma_chan_to_sirfsoc_dma_chan(chan);
	struct sirfsoc_dma_desc *sdesc;
	unsigned long flags;
	LIST_HEAD(descs);
	int i;

	pm_runtime_get_sync(sdma->dma.dev);

	/* Alloc descriptors for this channel */
	for (i = 0; i < SIRFSOC_DMA_DESCRIPTORS; i++) {
		sdesc = kzalloc(sizeof(*sdesc), GFP_KERNEL);
		if (!sdesc) {
			dev_notice(sdma->dma.dev, "Memory allocation error. "
				"Allocated only %u descriptors\n", i);
			break;
		}

		dma_async_tx_descriptor_init(&sdesc->desc, chan);
		sdesc->desc.flags = DMA_CTRL_ACK;
		sdesc->desc.tx_submit = sirfsoc_dma_tx_submit;

		list_add_tail(&sdesc->node, &descs);
	}

	/* Return error only if no descriptors were allocated */
	if (i == 0)
		return -ENOMEM;

	spin_lock_irqsave(&schan->lock, flags);

	list_splice_tail_init(&descs, &schan->free);
	spin_unlock_irqrestore(&schan->lock, flags);

	return i;
}

/* Free channel resources */
static void sirfsoc_dma_free_chan_resources(struct dma_chan *chan)
{
	struct sirfsoc_dma_chan *schan = dma_chan_to_sirfsoc_dma_chan(chan);
	struct sirfsoc_dma *sdma = dma_chan_to_sirfsoc_dma(chan);
	struct sirfsoc_dma_desc *sdesc, *tmp;
	unsigned long flags;
	LIST_HEAD(descs);

	spin_lock_irqsave(&schan->lock, flags);

	/* Channel must be idle */
	BUG_ON(!list_empty(&schan->prepared));
	BUG_ON(!list_empty(&schan->queued));
	BUG_ON(!list_empty(&schan->active));
	BUG_ON(!list_empty(&schan->completed));

	/* Move data */
	list_splice_tail_init(&schan->free, &descs);

	spin_unlock_irqrestore(&schan->lock, flags);

	/* Free descriptors */
	list_for_each_entry_safe(sdesc, tmp, &descs, node)
		kfree(sdesc);

	pm_runtime_put(sdma->dma.dev);
}

/* Send pending descriptor to hardware */
static void sirfsoc_dma_issue_pending(struct dma_chan *chan)
{
	struct sirfsoc_dma_chan *schan = dma_chan_to_sirfsoc_dma_chan(chan);
	unsigned long flags;

	spin_lock_irqsave(&schan->lock, flags);

	if (list_empty(&schan->active) && !list_empty(&schan->queued))
		sirfsoc_dma_execute(schan);

	spin_unlock_irqrestore(&schan->lock, flags);
}

/* Check request completion status */
static enum dma_status
sirfsoc_dma_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
	struct dma_tx_state *txstate)
{
	struct sirfsoc_dma *sdma = dma_chan_to_sirfsoc_dma(chan);
	struct sirfsoc_dma_chan *schan = dma_chan_to_sirfsoc_dma_chan(chan);
	unsigned long flags;
	enum dma_status ret;
	struct sirfsoc_dma_desc *sdesc;
	int cid = schan->chan.chan_id;
	unsigned long dma_pos;
	unsigned long dma_request_bytes;
	unsigned long residue;

	spin_lock_irqsave(&schan->lock, flags);

	if (list_empty(&schan->active)) {
		ret = dma_cookie_status(chan, cookie, txstate);
		dma_set_residue(txstate, 0);
		spin_unlock_irqrestore(&schan->lock, flags);
		return ret;
	}
	sdesc = list_first_entry(&schan->active, struct sirfsoc_dma_desc, node);
	if (sdesc->cyclic)
		dma_request_bytes = (sdesc->xlen + 1) * (sdesc->ylen + 1) *
			(sdesc->width * SIRFSOC_DMA_WORD_LEN);
	else
		dma_request_bytes = sdesc->xlen * SIRFSOC_DMA_WORD_LEN;

	ret = dma_cookie_status(chan, cookie, txstate);

	if (sdma->type == SIRFSOC_DMA_VER_A7V2)
		cid = 0;

	if (sdma->type == SIRFSOC_DMA_VER_A7V2) {
		dma_pos = readl_relaxed(sdma->base + SIRFSOC_DMA_CUR_DATA_ADDR);
	} else {
		dma_pos = readl_relaxed(
			sdma->base + cid * 0x10 + SIRFSOC_DMA_CH_ADDR) << 2;
	}

	residue = dma_request_bytes - (dma_pos - sdesc->addr);
	dma_set_residue(txstate, residue);

	spin_unlock_irqrestore(&schan->lock, flags);

	return ret;
}

static struct dma_async_tx_descriptor *sirfsoc_dma_prep_interleaved(
	struct dma_chan *chan, struct dma_interleaved_template *xt,
	unsigned long flags)
{
	struct sirfsoc_dma *sdma = dma_chan_to_sirfsoc_dma(chan);
	struct sirfsoc_dma_chan *schan = dma_chan_to_sirfsoc_dma_chan(chan);
	struct sirfsoc_dma_desc *sdesc = NULL;
	unsigned long iflags;
	int ret;

	if ((xt->dir != DMA_MEM_TO_DEV) && (xt->dir != DMA_DEV_TO_MEM)) {
		ret = -EINVAL;
		goto err_dir;
	}

	/* Get free descriptor */
	spin_lock_irqsave(&schan->lock, iflags);
	if (!list_empty(&schan->free)) {
		sdesc = list_first_entry(&schan->free, struct sirfsoc_dma_desc,
			node);
		list_del(&sdesc->node);
	}
	spin_unlock_irqrestore(&schan->lock, iflags);

	if (!sdesc) {
		/* try to free completed descriptors */
		sirfsoc_dma_process_completed(sdma);
		ret = 0;
		goto no_desc;
	}

	/* Place descriptor in prepared list */
	spin_lock_irqsave(&schan->lock, iflags);

	/*
	 * Number of chunks in a frame can only be 1 for prima2
	 * and ylen (number of frame - 1) must be at least 0
	 */
	if ((xt->frame_size == 1) && (xt->numf > 0)) {
		sdesc->cyclic = 0;
		sdesc->xlen = xt->sgl[0].size / SIRFSOC_DMA_WORD_LEN;
		sdesc->width = (xt->sgl[0].size + xt->sgl[0].icg) /
				SIRFSOC_DMA_WORD_LEN;
		sdesc->ylen = xt->numf - 1;
		if (xt->dir == DMA_MEM_TO_DEV) {
			sdesc->addr = xt->src_start;
			sdesc->dir = 1;
		} else {
			sdesc->addr = xt->dst_start;
			sdesc->dir = 0;
		}

		list_add_tail(&sdesc->node, &schan->prepared);
	} else {
		pr_err("sirfsoc DMA Invalid xfer\n");
		ret = -EINVAL;
		goto err_xfer;
	}
	spin_unlock_irqrestore(&schan->lock, iflags);

	return &sdesc->desc;
err_xfer:
	spin_unlock_irqrestore(&schan->lock, iflags);
no_desc:
err_dir:
	return ERR_PTR(ret);
}

static struct dma_async_tx_descriptor *
sirfsoc_dma_prep_cyclic(struct dma_chan *chan, dma_addr_t addr,
	size_t buf_len, size_t period_len,
	enum dma_transfer_direction direction, unsigned long flags)
{
	struct sirfsoc_dma_chan *schan = dma_chan_to_sirfsoc_dma_chan(chan);
762 - struct sirfsoc_dma_desc *sdesc = NULL; 763 - unsigned long iflags; 764 - 765 - /* 766 - * we only support cycle transfer with 2 period 767 - * If the X-length is set to 0, it would be the loop mode. 768 - * The DMA address keeps increasing until reaching the end of a loop 769 - * area whose size is defined by (DMA_WIDTH x (Y_LENGTH + 1)). Then 770 - * the DMA address goes back to the beginning of this area. 771 - * In loop mode, the DMA data region is divided into two parts, BUFA 772 - * and BUFB. DMA controller generates interrupts twice in each loop: 773 - * when the DMA address reaches the end of BUFA or the end of the 774 - * BUFB 775 - */ 776 - if (buf_len != 2 * period_len) 777 - return ERR_PTR(-EINVAL); 778 - 779 - /* Get free descriptor */ 780 - spin_lock_irqsave(&schan->lock, iflags); 781 - if (!list_empty(&schan->free)) { 782 - sdesc = list_first_entry(&schan->free, struct sirfsoc_dma_desc, 783 - node); 784 - list_del(&sdesc->node); 785 - } 786 - spin_unlock_irqrestore(&schan->lock, iflags); 787 - 788 - if (!sdesc) 789 - return NULL; 790 - 791 - /* Place descriptor in prepared list */ 792 - spin_lock_irqsave(&schan->lock, iflags); 793 - sdesc->addr = addr; 794 - sdesc->cyclic = 1; 795 - sdesc->xlen = 0; 796 - sdesc->ylen = buf_len / SIRFSOC_DMA_WORD_LEN - 1; 797 - sdesc->width = 1; 798 - list_add_tail(&sdesc->node, &schan->prepared); 799 - spin_unlock_irqrestore(&schan->lock, iflags); 800 - 801 - return &sdesc->desc; 802 - } 803 - 804 - /* 805 - * The DMA controller consists of 16 independent DMA channels. 
806 - * Each channel is allocated to a different function 807 - */ 808 - bool sirfsoc_dma_filter_id(struct dma_chan *chan, void *chan_id) 809 - { 810 - unsigned int ch_nr = (unsigned int) chan_id; 811 - 812 - if (ch_nr == chan->chan_id + 813 - chan->device->dev_id * SIRFSOC_DMA_CHANNELS) 814 - return true; 815 - 816 - return false; 817 - } 818 - EXPORT_SYMBOL(sirfsoc_dma_filter_id); 819 - 820 - #define SIRFSOC_DMA_BUSWIDTHS \ 821 - (BIT(DMA_SLAVE_BUSWIDTH_UNDEFINED) | \ 822 - BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) | \ 823 - BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) | \ 824 - BIT(DMA_SLAVE_BUSWIDTH_4_BYTES) | \ 825 - BIT(DMA_SLAVE_BUSWIDTH_8_BYTES)) 826 - 827 - static struct dma_chan *of_dma_sirfsoc_xlate(struct of_phandle_args *dma_spec, 828 - struct of_dma *ofdma) 829 - { 830 - struct sirfsoc_dma *sdma = ofdma->of_dma_data; 831 - unsigned int request = dma_spec->args[0]; 832 - 833 - if (request >= SIRFSOC_DMA_CHANNELS) 834 - return NULL; 835 - 836 - return dma_get_slave_channel(&sdma->channels[request].chan); 837 - } 838 - 839 - static int sirfsoc_dma_probe(struct platform_device *op) 840 - { 841 - struct device_node *dn = op->dev.of_node; 842 - struct device *dev = &op->dev; 843 - struct dma_device *dma; 844 - struct sirfsoc_dma *sdma; 845 - struct sirfsoc_dma_chan *schan; 846 - struct sirfsoc_dmadata *data; 847 - struct resource res; 848 - ulong regs_start, regs_size; 849 - u32 id; 850 - int ret, i; 851 - 852 - sdma = devm_kzalloc(dev, sizeof(*sdma), GFP_KERNEL); 853 - if (!sdma) 854 - return -ENOMEM; 855 - 856 - data = (struct sirfsoc_dmadata *) 857 - (of_match_device(op->dev.driver->of_match_table, 858 - &op->dev)->data); 859 - sdma->exec_desc = data->exec; 860 - sdma->type = data->type; 861 - 862 - if (of_property_read_u32(dn, "cell-index", &id)) { 863 - dev_err(dev, "Fail to get DMAC index\n"); 864 - return -ENODEV; 865 - } 866 - 867 - sdma->irq = irq_of_parse_and_map(dn, 0); 868 - if (!sdma->irq) { 869 - dev_err(dev, "Error mapping IRQ!\n"); 870 - return -EINVAL; 871 - } 872 
- 873 - sdma->clk = devm_clk_get(dev, NULL); 874 - if (IS_ERR(sdma->clk)) { 875 - dev_err(dev, "failed to get a clock.\n"); 876 - return PTR_ERR(sdma->clk); 877 - } 878 - 879 - ret = of_address_to_resource(dn, 0, &res); 880 - if (ret) { 881 - dev_err(dev, "Error parsing memory region!\n"); 882 - goto irq_dispose; 883 - } 884 - 885 - regs_start = res.start; 886 - regs_size = resource_size(&res); 887 - 888 - sdma->base = devm_ioremap(dev, regs_start, regs_size); 889 - if (!sdma->base) { 890 - dev_err(dev, "Error mapping memory region!\n"); 891 - ret = -ENOMEM; 892 - goto irq_dispose; 893 - } 894 - 895 - ret = request_irq(sdma->irq, &sirfsoc_dma_irq, 0, DRV_NAME, sdma); 896 - if (ret) { 897 - dev_err(dev, "Error requesting IRQ!\n"); 898 - ret = -EINVAL; 899 - goto irq_dispose; 900 - } 901 - 902 - dma = &sdma->dma; 903 - dma->dev = dev; 904 - 905 - dma->device_alloc_chan_resources = sirfsoc_dma_alloc_chan_resources; 906 - dma->device_free_chan_resources = sirfsoc_dma_free_chan_resources; 907 - dma->device_issue_pending = sirfsoc_dma_issue_pending; 908 - dma->device_config = sirfsoc_dma_slave_config; 909 - dma->device_pause = sirfsoc_dma_pause_chan; 910 - dma->device_resume = sirfsoc_dma_resume_chan; 911 - dma->device_terminate_all = sirfsoc_dma_terminate_all; 912 - dma->device_tx_status = sirfsoc_dma_tx_status; 913 - dma->device_prep_interleaved_dma = sirfsoc_dma_prep_interleaved; 914 - dma->device_prep_dma_cyclic = sirfsoc_dma_prep_cyclic; 915 - dma->src_addr_widths = SIRFSOC_DMA_BUSWIDTHS; 916 - dma->dst_addr_widths = SIRFSOC_DMA_BUSWIDTHS; 917 - dma->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV); 918 - 919 - INIT_LIST_HEAD(&dma->channels); 920 - dma_cap_set(DMA_SLAVE, dma->cap_mask); 921 - dma_cap_set(DMA_CYCLIC, dma->cap_mask); 922 - dma_cap_set(DMA_INTERLEAVE, dma->cap_mask); 923 - dma_cap_set(DMA_PRIVATE, dma->cap_mask); 924 - 925 - for (i = 0; i < SIRFSOC_DMA_CHANNELS; i++) { 926 - schan = &sdma->channels[i]; 927 - 928 - schan->chan.device = dma; 929 - 
dma_cookie_init(&schan->chan); 930 - 931 - INIT_LIST_HEAD(&schan->free); 932 - INIT_LIST_HEAD(&schan->prepared); 933 - INIT_LIST_HEAD(&schan->queued); 934 - INIT_LIST_HEAD(&schan->active); 935 - INIT_LIST_HEAD(&schan->completed); 936 - 937 - spin_lock_init(&schan->lock); 938 - list_add_tail(&schan->chan.device_node, &dma->channels); 939 - } 940 - 941 - tasklet_setup(&sdma->tasklet, sirfsoc_dma_tasklet); 942 - 943 - /* Register DMA engine */ 944 - dev_set_drvdata(dev, sdma); 945 - 946 - ret = dma_async_device_register(dma); 947 - if (ret) 948 - goto free_irq; 949 - 950 - /* Device-tree DMA controller registration */ 951 - ret = of_dma_controller_register(dn, of_dma_sirfsoc_xlate, sdma); 952 - if (ret) { 953 - dev_err(dev, "failed to register DMA controller\n"); 954 - goto unreg_dma_dev; 955 - } 956 - 957 - pm_runtime_enable(&op->dev); 958 - dev_info(dev, "initialized SIRFSOC DMAC driver\n"); 959 - 960 - return 0; 961 - 962 - unreg_dma_dev: 963 - dma_async_device_unregister(dma); 964 - free_irq: 965 - free_irq(sdma->irq, sdma); 966 - irq_dispose: 967 - irq_dispose_mapping(sdma->irq); 968 - return ret; 969 - } 970 - 971 - static int sirfsoc_dma_remove(struct platform_device *op) 972 - { 973 - struct device *dev = &op->dev; 974 - struct sirfsoc_dma *sdma = dev_get_drvdata(dev); 975 - 976 - of_dma_controller_free(op->dev.of_node); 977 - dma_async_device_unregister(&sdma->dma); 978 - free_irq(sdma->irq, sdma); 979 - tasklet_kill(&sdma->tasklet); 980 - irq_dispose_mapping(sdma->irq); 981 - pm_runtime_disable(&op->dev); 982 - if (!pm_runtime_status_suspended(&op->dev)) 983 - sirfsoc_dma_runtime_suspend(&op->dev); 984 - 985 - return 0; 986 - } 987 - 988 - static int __maybe_unused sirfsoc_dma_runtime_suspend(struct device *dev) 989 - { 990 - struct sirfsoc_dma *sdma = dev_get_drvdata(dev); 991 - 992 - clk_disable_unprepare(sdma->clk); 993 - return 0; 994 - } 995 - 996 - static int __maybe_unused sirfsoc_dma_runtime_resume(struct device *dev) 997 - { 998 - struct sirfsoc_dma 
*sdma = dev_get_drvdata(dev); 999 - int ret; 1000 - 1001 - ret = clk_prepare_enable(sdma->clk); 1002 - if (ret < 0) { 1003 - dev_err(dev, "clk_enable failed: %d\n", ret); 1004 - return ret; 1005 - } 1006 - return 0; 1007 - } 1008 - 1009 - static int __maybe_unused sirfsoc_dma_pm_suspend(struct device *dev) 1010 - { 1011 - struct sirfsoc_dma *sdma = dev_get_drvdata(dev); 1012 - struct sirfsoc_dma_regs *save = &sdma->regs_save; 1013 - struct sirfsoc_dma_chan *schan; 1014 - int ch; 1015 - int ret; 1016 - int count; 1017 - u32 int_offset; 1018 - 1019 - /* 1020 - * if we were runtime-suspended before, resume to enable clock 1021 - * before accessing register 1022 - */ 1023 - if (pm_runtime_status_suspended(dev)) { 1024 - ret = sirfsoc_dma_runtime_resume(dev); 1025 - if (ret < 0) 1026 - return ret; 1027 - } 1028 - 1029 - if (sdma->type == SIRFSOC_DMA_VER_A7V2) { 1030 - count = 1; 1031 - int_offset = SIRFSOC_DMA_INT_EN_ATLAS7; 1032 - } else { 1033 - count = SIRFSOC_DMA_CHANNELS; 1034 - int_offset = SIRFSOC_DMA_INT_EN; 1035 - } 1036 - 1037 - /* 1038 - * DMA controller will lose all registers while suspending 1039 - * so we need to save registers for active channels 1040 - */ 1041 - for (ch = 0; ch < count; ch++) { 1042 - schan = &sdma->channels[ch]; 1043 - if (list_empty(&schan->active)) 1044 - continue; 1045 - save->ctrl[ch] = readl_relaxed(sdma->base + 1046 - ch * 0x10 + SIRFSOC_DMA_CH_CTRL); 1047 - } 1048 - save->interrupt_en = readl_relaxed(sdma->base + int_offset); 1049 - 1050 - /* Disable clock */ 1051 - sirfsoc_dma_runtime_suspend(dev); 1052 - 1053 - return 0; 1054 - } 1055 - 1056 - static int __maybe_unused sirfsoc_dma_pm_resume(struct device *dev) 1057 - { 1058 - struct sirfsoc_dma *sdma = dev_get_drvdata(dev); 1059 - struct sirfsoc_dma_regs *save = &sdma->regs_save; 1060 - struct sirfsoc_dma_desc *sdesc; 1061 - struct sirfsoc_dma_chan *schan; 1062 - int ch; 1063 - int ret; 1064 - int count; 1065 - u32 int_offset; 1066 - u32 width_offset; 1067 - 1068 - /* Enable 
clock before accessing register */ 1069 - ret = sirfsoc_dma_runtime_resume(dev); 1070 - if (ret < 0) 1071 - return ret; 1072 - 1073 - if (sdma->type == SIRFSOC_DMA_VER_A7V2) { 1074 - count = 1; 1075 - int_offset = SIRFSOC_DMA_INT_EN_ATLAS7; 1076 - width_offset = SIRFSOC_DMA_WIDTH_ATLAS7; 1077 - } else { 1078 - count = SIRFSOC_DMA_CHANNELS; 1079 - int_offset = SIRFSOC_DMA_INT_EN; 1080 - width_offset = SIRFSOC_DMA_WIDTH_0; 1081 - } 1082 - 1083 - writel_relaxed(save->interrupt_en, sdma->base + int_offset); 1084 - for (ch = 0; ch < count; ch++) { 1085 - schan = &sdma->channels[ch]; 1086 - if (list_empty(&schan->active)) 1087 - continue; 1088 - sdesc = list_first_entry(&schan->active, 1089 - struct sirfsoc_dma_desc, 1090 - node); 1091 - writel_relaxed(sdesc->width, 1092 - sdma->base + width_offset + ch * 4); 1093 - writel_relaxed(sdesc->xlen, 1094 - sdma->base + ch * 0x10 + SIRFSOC_DMA_CH_XLEN); 1095 - writel_relaxed(sdesc->ylen, 1096 - sdma->base + ch * 0x10 + SIRFSOC_DMA_CH_YLEN); 1097 - writel_relaxed(save->ctrl[ch], 1098 - sdma->base + ch * 0x10 + SIRFSOC_DMA_CH_CTRL); 1099 - if (sdma->type == SIRFSOC_DMA_VER_A7V2) { 1100 - writel_relaxed(sdesc->addr, 1101 - sdma->base + SIRFSOC_DMA_CH_ADDR); 1102 - } else { 1103 - writel_relaxed(sdesc->addr >> 2, 1104 - sdma->base + ch * 0x10 + SIRFSOC_DMA_CH_ADDR); 1105 - 1106 - } 1107 - } 1108 - 1109 - /* if we were runtime-suspended before, suspend again */ 1110 - if (pm_runtime_status_suspended(dev)) 1111 - sirfsoc_dma_runtime_suspend(dev); 1112 - 1113 - return 0; 1114 - } 1115 - 1116 - static const struct dev_pm_ops sirfsoc_dma_pm_ops = { 1117 - SET_RUNTIME_PM_OPS(sirfsoc_dma_runtime_suspend, sirfsoc_dma_runtime_resume, NULL) 1118 - SET_SYSTEM_SLEEP_PM_OPS(sirfsoc_dma_pm_suspend, sirfsoc_dma_pm_resume) 1119 - }; 1120 - 1121 - static struct sirfsoc_dmadata sirfsoc_dmadata_a6 = { 1122 - .exec = sirfsoc_dma_execute_hw_a6, 1123 - .type = SIRFSOC_DMA_VER_A6, 1124 - }; 1125 - 1126 - static struct sirfsoc_dmadata sirfsoc_dmadata_a7v1 
= { 1127 - .exec = sirfsoc_dma_execute_hw_a7v1, 1128 - .type = SIRFSOC_DMA_VER_A7V1, 1129 - }; 1130 - 1131 - static struct sirfsoc_dmadata sirfsoc_dmadata_a7v2 = { 1132 - .exec = sirfsoc_dma_execute_hw_a7v2, 1133 - .type = SIRFSOC_DMA_VER_A7V2, 1134 - }; 1135 - 1136 - static const struct of_device_id sirfsoc_dma_match[] = { 1137 - { .compatible = "sirf,prima2-dmac", .data = &sirfsoc_dmadata_a6,}, 1138 - { .compatible = "sirf,atlas7-dmac", .data = &sirfsoc_dmadata_a7v1,}, 1139 - { .compatible = "sirf,atlas7-dmac-v2", .data = &sirfsoc_dmadata_a7v2,}, 1140 - {}, 1141 - }; 1142 - MODULE_DEVICE_TABLE(of, sirfsoc_dma_match); 1143 - 1144 - static struct platform_driver sirfsoc_dma_driver = { 1145 - .probe = sirfsoc_dma_probe, 1146 - .remove = sirfsoc_dma_remove, 1147 - .driver = { 1148 - .name = DRV_NAME, 1149 - .pm = &sirfsoc_dma_pm_ops, 1150 - .of_match_table = sirfsoc_dma_match, 1151 - }, 1152 - }; 1153 - 1154 - static __init int sirfsoc_dma_init(void) 1155 - { 1156 - return platform_driver_register(&sirfsoc_dma_driver); 1157 - } 1158 - 1159 - static void __exit sirfsoc_dma_exit(void) 1160 - { 1161 - platform_driver_unregister(&sirfsoc_dma_driver); 1162 - } 1163 - 1164 - subsys_initcall(sirfsoc_dma_init); 1165 - module_exit(sirfsoc_dma_exit); 1166 - 1167 - MODULE_AUTHOR("Rongjun Ying <rongjun.ying@csr.com>"); 1168 - MODULE_AUTHOR("Barry Song <baohua.song@csr.com>"); 1169 - MODULE_DESCRIPTION("SIRFSOC DMA control driver"); 1170 - MODULE_LICENSE("GPL v2");
+1 -1
drivers/dma/ste_dma40.c
···
 		DB8500_DMA_MEMCPY_EV_5,
 };
 
-/* Default configuration for physcial memcpy */
+/* Default configuration for physical memcpy */
 static const struct stedma40_chan_cfg dma40_memcpy_conf_phy = {
 	.mode = STEDMA40_MODE_PHYSICAL,
 	.dir = DMA_MEM_TO_MEM,
+119 -12
drivers/dma/ti/k3-udma.c
···
 #define UDMA_FLAG_PDMA_ACC32	BIT(0)
 #define UDMA_FLAG_PDMA_BURST	BIT(1)
 #define UDMA_FLAG_TDTYPE	BIT(2)
+#define UDMA_FLAG_BURST_SIZE	BIT(3)
+#define UDMA_FLAGS_J7_CLASS	(UDMA_FLAG_PDMA_ACC32 |		\
+				 UDMA_FLAG_PDMA_BURST |		\
+				 UDMA_FLAG_TDTYPE |		\
+				 UDMA_FLAG_BURST_SIZE)
 
 struct udma_match_data {
 	enum k3_dma_type type;
···
 	bool enable_memcpy_support;
 	u32 flags;
 	u32 statictr_z_mask;
+	u8 burst_size[3];
 };
 
 struct udma_soc_data {
···
 		chan_dev->dma_coherent = false;
 		chan_dev->dma_parms = NULL;
 	}
+}
+
+static u8 udma_get_chan_tpl_index(struct udma_tpl *tpl_map, int chan_id)
+{
+	int i;
+
+	for (i = 0; i < tpl_map->levels; i++) {
+		if (chan_id >= tpl_map->start_idx[i])
+			return i;
+	}
+
+	return 0;
 }
 
 static void udma_reset_uchan(struct udma_chan *uc)
···
 	const struct ti_sci_rm_udmap_ops *tisci_ops = tisci_rm->tisci_udmap_ops;
 	struct udma_tchan *tchan = uc->tchan;
 	struct udma_rchan *rchan = uc->rchan;
-	int ret = 0;
+	u8 burst_size = 0;
+	int ret;
+	u8 tpl;
 
 	/* Non synchronized - mem to mem type of transfer */
 	int tc_ring = k3_ringacc_get_ring_id(tchan->tc_ring);
 	struct ti_sci_msg_rm_udmap_tx_ch_cfg req_tx = { 0 };
 	struct ti_sci_msg_rm_udmap_rx_ch_cfg req_rx = { 0 };
+
+	if (ud->match_data->flags & UDMA_FLAG_BURST_SIZE) {
+		tpl = udma_get_chan_tpl_index(&ud->tchan_tpl, tchan->id);
+
+		burst_size = ud->match_data->burst_size[tpl];
+	}
 
 	req_tx.valid_params = TISCI_UDMA_TCHAN_VALID_PARAMS;
 	req_tx.nav_id = tisci_rm->tisci_dev_id;
···
 	req_tx.tx_fetch_size = sizeof(struct cppi5_desc_hdr_t) >> 2;
 	req_tx.txcq_qnum = tc_ring;
 	req_tx.tx_atype = ud->atype;
+	if (burst_size) {
+		req_tx.valid_params |= TI_SCI_MSG_VALUE_RM_UDMAP_CH_BURST_SIZE_VALID;
+		req_tx.tx_burst_size = burst_size;
+	}
 
 	ret = tisci_ops->tx_ch_cfg(tisci_rm->tisci, &req_tx);
 	if (ret) {
···
 	req_rx.rxcq_qnum = tc_ring;
 	req_rx.rx_chan_type = TI_SCI_RM_UDMAP_CHAN_TYPE_3RDP_BCOPY_PBRR;
 	req_rx.rx_atype = ud->atype;
+	if (burst_size) {
+		req_rx.valid_params |= TI_SCI_MSG_VALUE_RM_UDMAP_CH_BURST_SIZE_VALID;
+		req_rx.rx_burst_size = burst_size;
+	}
 
 	ret = tisci_ops->rx_ch_cfg(tisci_rm->tisci, &req_rx);
 	if (ret)
···
 	const struct ti_sci_rm_udmap_ops *tisci_ops = tisci_rm->tisci_udmap_ops;
 	struct ti_sci_msg_rm_udmap_tx_ch_cfg req_tx = { 0 };
 	struct udma_bchan *bchan = uc->bchan;
-	int ret = 0;
+	u8 burst_size = 0;
+	int ret;
+	u8 tpl;
+
+	if (ud->match_data->flags & UDMA_FLAG_BURST_SIZE) {
+		tpl = udma_get_chan_tpl_index(&ud->bchan_tpl, bchan->id);
+
+		burst_size = ud->match_data->burst_size[tpl];
+	}
 
 	req_tx.valid_params = TISCI_BCDMA_BCHAN_VALID_PARAMS;
 	req_tx.nav_id = tisci_rm->tisci_dev_id;
 	req_tx.extended_ch_type = TI_SCI_RM_BCDMA_EXTENDED_CH_TYPE_BCHAN;
 	req_tx.index = bchan->id;
+	if (burst_size) {
+		req_tx.valid_params |= TI_SCI_MSG_VALUE_RM_UDMAP_CH_BURST_SIZE_VALID;
+		req_tx.tx_burst_size = burst_size;
+	}
 
 	ret = tisci_ops->tx_ch_cfg(tisci_rm->tisci, &req_tx);
 	if (ret)
···
 	int tc_ring = k3_ringacc_get_ring_id(tchan->tc_ring);
 	struct ti_sci_msg_rm_udmap_tx_ch_cfg req_tx = { 0 };
 	u32 mode, fetch_size;
-	int ret = 0;
+	int ret;
 
 	if (uc->config.pkt_mode) {
 		mode = TI_SCI_RM_UDMAP_CHAN_TYPE_PKT_PBRR;
···
 	const struct ti_sci_rm_udmap_ops *tisci_ops = tisci_rm->tisci_udmap_ops;
 	struct udma_tchan *tchan = uc->tchan;
 	struct ti_sci_msg_rm_udmap_tx_ch_cfg req_tx = { 0 };
-	int ret = 0;
+	int ret;
 
 	req_tx.valid_params = TISCI_BCDMA_TCHAN_VALID_PARAMS;
 	req_tx.nav_id = tisci_rm->tisci_dev_id;
···
 	struct ti_sci_msg_rm_udmap_rx_ch_cfg req_rx = { 0 };
 	struct ti_sci_msg_rm_udmap_flow_cfg flow_req = { 0 };
 	u32 mode, fetch_size;
-	int ret = 0;
+	int ret;
 
 	if (uc->config.pkt_mode) {
 		mode = TI_SCI_RM_UDMAP_CHAN_TYPE_PKT_PBRR;
···
 	const struct ti_sci_rm_udmap_ops *tisci_ops = tisci_rm->tisci_udmap_ops;
 	struct udma_rchan *rchan = uc->rchan;
 	struct ti_sci_msg_rm_udmap_rx_ch_cfg req_rx = { 0 };
-	int ret = 0;
+	int ret;
 
 	req_rx.valid_params = TISCI_BCDMA_RCHAN_VALID_PARAMS;
 	req_rx.nav_id = tisci_rm->tisci_dev_id;
···
 	const struct ti_sci_rm_udmap_ops *tisci_ops = tisci_rm->tisci_udmap_ops;
 	struct ti_sci_msg_rm_udmap_rx_ch_cfg req_rx = { 0 };
 	struct ti_sci_msg_rm_udmap_flow_cfg flow_req = { 0 };
-	int ret = 0;
+	int ret;
 
 	req_rx.valid_params = TISCI_BCDMA_RCHAN_VALID_PARAMS;
 	req_rx.nav_id = tisci_rm->tisci_dev_id;
···
 	.psil_base = 0x1000,
 	.enable_memcpy_support = true,
 	.statictr_z_mask = GENMASK(11, 0),
+	.burst_size = {
+		TI_SCI_RM_UDMAP_CHAN_BURST_SIZE_64_BYTES, /* Normal Channels */
+		TI_SCI_RM_UDMAP_CHAN_BURST_SIZE_64_BYTES, /* H Channels */
+		0, /* No UH Channels */
+	},
 };
 
 static struct udma_match_data am654_mcu_data = {
···
 	.psil_base = 0x6000,
 	.enable_memcpy_support = false,
 	.statictr_z_mask = GENMASK(11, 0),
+	.burst_size = {
+		TI_SCI_RM_UDMAP_CHAN_BURST_SIZE_64_BYTES, /* Normal Channels */
+		TI_SCI_RM_UDMAP_CHAN_BURST_SIZE_64_BYTES, /* H Channels */
+		0, /* No UH Channels */
+	},
 };
 
 static struct udma_match_data j721e_main_data = {
 	.type = DMA_TYPE_UDMA,
 	.psil_base = 0x1000,
 	.enable_memcpy_support = true,
-	.flags = UDMA_FLAG_PDMA_ACC32 | UDMA_FLAG_PDMA_BURST | UDMA_FLAG_TDTYPE,
+	.flags = UDMA_FLAGS_J7_CLASS,
 	.statictr_z_mask = GENMASK(23, 0),
+	.burst_size = {
+		TI_SCI_RM_UDMAP_CHAN_BURST_SIZE_64_BYTES, /* Normal Channels */
+		TI_SCI_RM_UDMAP_CHAN_BURST_SIZE_256_BYTES, /* H Channels */
+		TI_SCI_RM_UDMAP_CHAN_BURST_SIZE_256_BYTES, /* UH Channels */
+	},
 };
 
 static struct udma_match_data j721e_mcu_data = {
 	.type = DMA_TYPE_UDMA,
 	.psil_base = 0x6000,
 	.enable_memcpy_support = false, /* MEM_TO_MEM is slow via MCU UDMA */
-	.flags = UDMA_FLAG_PDMA_ACC32 | UDMA_FLAG_PDMA_BURST | UDMA_FLAG_TDTYPE,
+	.flags = UDMA_FLAGS_J7_CLASS,
 	.statictr_z_mask = GENMASK(23, 0),
+	.burst_size = {
+		TI_SCI_RM_UDMAP_CHAN_BURST_SIZE_64_BYTES, /* Normal Channels */
+		TI_SCI_RM_UDMAP_CHAN_BURST_SIZE_128_BYTES, /* H Channels */
+		0, /* No UH Channels */
+	},
 };
 
 static struct udma_match_data am64_bcdma_data = {
 	.type = DMA_TYPE_BCDMA,
 	.psil_base = 0x2000, /* for tchan and rchan, not applicable to bchan */
 	.enable_memcpy_support = true, /* Supported via bchan */
-	.flags = UDMA_FLAG_PDMA_ACC32 | UDMA_FLAG_PDMA_BURST | UDMA_FLAG_TDTYPE,
+	.flags = UDMA_FLAGS_J7_CLASS,
 	.statictr_z_mask = GENMASK(23, 0),
+	.burst_size = {
+		TI_SCI_RM_UDMAP_CHAN_BURST_SIZE_64_BYTES, /* Normal Channels */
+		0, /* No H Channels */
+		0, /* No UH Channels */
+	},
 };
 
 static struct udma_match_data am64_pktdma_data = {
 	.type = DMA_TYPE_PKTDMA,
 	.psil_base = 0x1000,
 	.enable_memcpy_support = false, /* PKTDMA does not support MEM_TO_MEM */
-	.flags = UDMA_FLAG_PDMA_ACC32 | UDMA_FLAG_PDMA_BURST | UDMA_FLAG_TDTYPE,
+	.flags = UDMA_FLAGS_J7_CLASS,
 	.statictr_z_mask = GENMASK(23, 0),
+	.burst_size = {
+		TI_SCI_RM_UDMAP_CHAN_BURST_SIZE_64_BYTES, /* Normal Channels */
+		0, /* No H Channels */
+		0, /* No UH Channels */
+	},
 };
 
 static const struct of_device_id udma_of_match[] = {
···
 		ud->bchan_cnt = BCDMA_CAP2_BCHAN_CNT(cap2);
 		ud->tchan_cnt = BCDMA_CAP2_TCHAN_CNT(cap2);
 		ud->rchan_cnt = BCDMA_CAP2_RCHAN_CNT(cap2);
+		ud->rflow_cnt = ud->rchan_cnt;
 		break;
 	case DMA_TYPE_PKTDMA:
 		cap4 = udma_read(ud->mmrs[MMR_GCFG], 0x30);
···
 }
 #endif /* CONFIG_DEBUG_FS */
 
+static enum dmaengine_alignment udma_get_copy_align(struct udma_dev *ud)
+{
+	const struct udma_match_data *match_data = ud->match_data;
+	u8 tpl;
+
+	if (!match_data->enable_memcpy_support)
+		return DMAENGINE_ALIGN_8_BYTES;
+
+	/* Get the highest TPL level the device supports for memcpy */
+	if (ud->bchan_cnt)
+		tpl = udma_get_chan_tpl_index(&ud->bchan_tpl, 0);
+	else if (ud->tchan_cnt)
+		tpl = udma_get_chan_tpl_index(&ud->tchan_tpl, 0);
+	else
+		return DMAENGINE_ALIGN_8_BYTES;
+
+	switch (match_data->burst_size[tpl]) {
+	case TI_SCI_RM_UDMAP_CHAN_BURST_SIZE_256_BYTES:
+		return DMAENGINE_ALIGN_256_BYTES;
+	case TI_SCI_RM_UDMAP_CHAN_BURST_SIZE_128_BYTES:
+		return DMAENGINE_ALIGN_128_BYTES;
+	case TI_SCI_RM_UDMAP_CHAN_BURST_SIZE_64_BYTES:
+		fallthrough;
+	default:
+		return DMAENGINE_ALIGN_64_BYTES;
+	}
+}
+
 #define TI_UDMAC_BUSWIDTHS	(BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) | \
 				 BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) | \
 				 BIT(DMA_SLAVE_BUSWIDTH_3_BYTES) | \
···
 	ud->ddev.dst_addr_widths = TI_UDMAC_BUSWIDTHS;
 	ud->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
 	ud->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
-	ud->ddev.copy_align = DMAENGINE_ALIGN_8_BYTES;
 	ud->ddev.desc_metadata_modes = DESC_METADATA_CLIENT |
 				       DESC_METADATA_ENGINE;
 	if (ud->match_data->enable_memcpy_support &&
···
 		init_completion(&uc->teardown_completed);
 		INIT_DELAYED_WORK(&uc->tx_drain.work, udma_check_tx_completion);
 	}
+
+	/* Configure the copy_align to the maximum burst size the device supports */
+	ud->ddev.copy_align = udma_get_copy_align(ud);
 
 	ret = dma_async_device_register(&ud->ddev);
 	if (ret) {
+1 -1
drivers/dma/xilinx/xilinx_dma.c
···
 {
 	struct xilinx_dma_tx_descriptor *desc;
 
-	desc = kzalloc(sizeof(*desc), GFP_KERNEL);
+	desc = kzalloc(sizeof(*desc), GFP_NOWAIT);
 	if (!desc)
 		return NULL;
 
-941
drivers/dma/zx_dma.c
···
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Copyright 2015 Linaro.
- */
-#include <linux/sched.h>
-#include <linux/device.h>
-#include <linux/dmaengine.h>
-#include <linux/dma-mapping.h>
-#include <linux/dmapool.h>
-#include <linux/init.h>
-#include <linux/interrupt.h>
-#include <linux/kernel.h>
-#include <linux/module.h>
-#include <linux/platform_device.h>
-#include <linux/slab.h>
-#include <linux/spinlock.h>
-#include <linux/of_device.h>
-#include <linux/of.h>
-#include <linux/clk.h>
-#include <linux/of_dma.h>
-
-#include "virt-dma.h"
-
-#define DRIVER_NAME		"zx-dma"
-#define DMA_ALIGN		4
-#define DMA_MAX_SIZE		(0x10000 - 512)
-#define LLI_BLOCK_SIZE		(4 * PAGE_SIZE)
-
-#define REG_ZX_SRC_ADDR			0x00
-#define REG_ZX_DST_ADDR			0x04
-#define REG_ZX_TX_X_COUNT		0x08
-#define REG_ZX_TX_ZY_COUNT		0x0c
-#define REG_ZX_SRC_ZY_STEP		0x10
-#define REG_ZX_DST_ZY_STEP		0x14
-#define REG_ZX_LLI_ADDR			0x1c
-#define REG_ZX_CTRL			0x20
-#define REG_ZX_TC_IRQ			0x800
-#define REG_ZX_SRC_ERR_IRQ		0x804
-#define REG_ZX_DST_ERR_IRQ		0x808
-#define REG_ZX_CFG_ERR_IRQ		0x80c
-#define REG_ZX_TC_IRQ_RAW		0x810
-#define REG_ZX_SRC_ERR_IRQ_RAW		0x814
-#define REG_ZX_DST_ERR_IRQ_RAW		0x818
-#define REG_ZX_CFG_ERR_IRQ_RAW		0x81c
-#define REG_ZX_STATUS			0x820
-#define REG_ZX_DMA_GRP_PRIO		0x824
-#define REG_ZX_DMA_ARB			0x828
-
-#define ZX_FORCE_CLOSE			BIT(31)
-#define ZX_DST_BURST_WIDTH(x)		(((x) & 0x7) << 13)
-#define ZX_MAX_BURST_LEN		16
-#define ZX_SRC_BURST_LEN(x)		(((x) & 0xf) << 9)
-#define ZX_SRC_BURST_WIDTH(x)		(((x) & 0x7) << 6)
-#define ZX_IRQ_ENABLE_ALL		(3 << 4)
-#define ZX_DST_FIFO_MODE		BIT(3)
-#define ZX_SRC_FIFO_MODE		BIT(2)
-#define ZX_SOFT_REQ			BIT(1)
-#define ZX_CH_ENABLE			BIT(0)
-
-#define ZX_DMA_BUSWIDTHS \
-	(BIT(DMA_SLAVE_BUSWIDTH_UNDEFINED) | \
-	BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) | \
-	BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) | \
-	BIT(DMA_SLAVE_BUSWIDTH_4_BYTES) | \
-	BIT(DMA_SLAVE_BUSWIDTH_8_BYTES))
-
-enum zx_dma_burst_width {
-	ZX_DMA_WIDTH_8BIT	= 0,
-	ZX_DMA_WIDTH_16BIT	= 1,
-	ZX_DMA_WIDTH_32BIT	= 2,
-	ZX_DMA_WIDTH_64BIT	= 3,
-};
-
-struct zx_desc_hw {
-	u32 saddr;
-	u32 daddr;
-	u32 src_x;
-	u32 src_zy;
-	u32 src_zy_step;
-	u32 dst_zy_step;
-	u32 reserved1;
-	u32 lli;
-	u32 ctr;
-	u32 reserved[7]; /* pack as hardware registers region size */
-} __aligned(32);
-
-struct zx_dma_desc_sw {
-	struct virt_dma_desc	vd;
-	dma_addr_t		desc_hw_lli;
-	size_t			desc_num;
-	size_t			size;
-	struct zx_desc_hw	*desc_hw;
-};
-
-struct zx_dma_phy;
-
-struct zx_dma_chan {
-	struct dma_slave_config slave_cfg;
-	int			id; /* Request phy chan id */
-	u32			ccfg;
-	u32			cyclic;
-	struct virt_dma_chan	vc;
-	struct zx_dma_phy	*phy;
-	struct list_head	node;
-	dma_addr_t		dev_addr;
-	enum dma_status		status;
-};
-
-struct zx_dma_phy {
-	u32			idx;
-	void __iomem		*base;
-	struct zx_dma_chan	*vchan;
-	struct zx_dma_desc_sw	*ds_run;
-	struct zx_dma_desc_sw	*ds_done;
-};
-
-struct zx_dma_dev {
-	struct dma_device	slave;
-	void __iomem		*base;
-	spinlock_t		lock; /* lock for ch and phy */
-	struct list_head	chan_pending;
-	struct zx_dma_phy	*phy;
-	struct zx_dma_chan	*chans;
-	struct clk		*clk;
-	struct dma_pool		*pool;
-	u32			dma_channels;
-	u32			dma_requests;
-	int			irq;
-};
-
-#define to_zx_dma(dmadev) container_of(dmadev, struct zx_dma_dev, slave)
-
-static struct zx_dma_chan *to_zx_chan(struct dma_chan *chan)
-{
-	return container_of(chan, struct zx_dma_chan, vc.chan);
-}
-
-static void zx_dma_terminate_chan(struct zx_dma_phy *phy, struct zx_dma_dev *d)
-{
-	u32 val = 0;
-
-	val = readl_relaxed(phy->base + REG_ZX_CTRL);
-	val &= ~ZX_CH_ENABLE;
-	val |= ZX_FORCE_CLOSE;
-	writel_relaxed(val, phy->base + REG_ZX_CTRL);
-
-	val = 0x1 << phy->idx;
-	writel_relaxed(val, d->base + REG_ZX_TC_IRQ_RAW);
-	writel_relaxed(val, d->base + REG_ZX_SRC_ERR_IRQ_RAW);
-	writel_relaxed(val, d->base + REG_ZX_DST_ERR_IRQ_RAW);
-	writel_relaxed(val, d->base + REG_ZX_CFG_ERR_IRQ_RAW);
-}
-
-static void zx_dma_set_desc(struct zx_dma_phy *phy, struct zx_desc_hw *hw)
-{
-	writel_relaxed(hw->saddr, phy->base + REG_ZX_SRC_ADDR);
-	writel_relaxed(hw->daddr, phy->base + REG_ZX_DST_ADDR);
-	writel_relaxed(hw->src_x, phy->base + REG_ZX_TX_X_COUNT);
-	writel_relaxed(0, phy->base + REG_ZX_TX_ZY_COUNT);
-	writel_relaxed(0, phy->base + REG_ZX_SRC_ZY_STEP);
-	writel_relaxed(0, phy->base + REG_ZX_DST_ZY_STEP);
-	writel_relaxed(hw->lli, phy->base + REG_ZX_LLI_ADDR);
-	writel_relaxed(hw->ctr, phy->base + REG_ZX_CTRL);
-}
-
-static u32 zx_dma_get_curr_lli(struct zx_dma_phy *phy)
-{
-	return readl_relaxed(phy->base + REG_ZX_LLI_ADDR);
-}
-
-static u32 zx_dma_get_chan_stat(struct zx_dma_dev *d)
-{
-	return readl_relaxed(d->base + REG_ZX_STATUS);
-}
-
-static void zx_dma_init_state(struct zx_dma_dev *d)
-{
-	/* set same priority */
-	writel_relaxed(0x0, d->base + REG_ZX_DMA_ARB);
-	/* clear all irq */
-	writel_relaxed(0xffffffff, d->base + REG_ZX_TC_IRQ_RAW);
-	writel_relaxed(0xffffffff, d->base + REG_ZX_SRC_ERR_IRQ_RAW);
-	writel_relaxed(0xffffffff, d->base + REG_ZX_DST_ERR_IRQ_RAW);
-	writel_relaxed(0xffffffff, d->base + REG_ZX_CFG_ERR_IRQ_RAW);
-}
-
-static int zx_dma_start_txd(struct zx_dma_chan *c)
-{
-	struct zx_dma_dev *d = to_zx_dma(c->vc.chan.device);
-	struct virt_dma_desc *vd = vchan_next_desc(&c->vc);
-
-	if (!c->phy)
-		return -EAGAIN;
-
-	if (BIT(c->phy->idx) & zx_dma_get_chan_stat(d))
-		return -EAGAIN;
-
-	if (vd) {
-		struct zx_dma_desc_sw *ds =
-			container_of(vd, struct zx_dma_desc_sw, vd);
-		/*
-		 * fetch and remove request from vc->desc_issued
-		 * so vc->desc_issued only contains desc pending
-		 */
-		list_del(&ds->vd.node);
-		c->phy->ds_run = ds;
-		c->phy->ds_done = NULL;
-		/* start dma */
-		zx_dma_set_desc(c->phy, ds->desc_hw);
-		return 0;
-	}
-	c->phy->ds_done = NULL;
-	c->phy->ds_run = NULL;
-	return -EAGAIN;
-}
-
-static void zx_dma_task(struct zx_dma_dev *d)
-{
-	struct zx_dma_phy *p;
-	struct zx_dma_chan *c, *cn;
-	unsigned pch, pch_alloc = 0;
-	unsigned long flags;
-
-	/* check new dma request of running channel in vc->desc_issued */
-	list_for_each_entry_safe(c, cn, &d->slave.channels,
-				 vc.chan.device_node) {
-		spin_lock_irqsave(&c->vc.lock, flags);
-		p = c->phy;
-		if (p && p->ds_done && zx_dma_start_txd(c)) {
-			/* No current txd associated with this channel */
-			dev_dbg(d->slave.dev, "pchan %u: free\n", p->idx);
-			/* Mark this channel free */
-			c->phy = NULL;
-			p->vchan = NULL;
-		}
-		spin_unlock_irqrestore(&c->vc.lock, flags);
-	}
-
-	/* check new channel request in d->chan_pending */
-	spin_lock_irqsave(&d->lock, flags);
-	while (!list_empty(&d->chan_pending)) {
-		c = list_first_entry(&d->chan_pending,
-				     struct zx_dma_chan, node);
-		p = &d->phy[c->id];
-		if (!p->vchan) {
-			/* remove from d->chan_pending */
-			list_del_init(&c->node);
-			pch_alloc |= 1 << c->id;
-			/* Mark this channel allocated */
-			p->vchan = c;
-			c->phy = p;
-		} else {
-			dev_dbg(d->slave.dev, "pchan %u: busy!\n", c->id);
-		}
-	}
-	spin_unlock_irqrestore(&d->lock, flags);
-
-	for (pch = 0; pch < d->dma_channels; pch++) {
-		if (pch_alloc & (1 << pch)) {
-			p =
&d->phy[pch]; 261 - c = p->vchan; 262 - if (c) { 263 - spin_lock_irqsave(&c->vc.lock, flags); 264 - zx_dma_start_txd(c); 265 - spin_unlock_irqrestore(&c->vc.lock, flags); 266 - } 267 - } 268 - } 269 - } 270 - 271 - static irqreturn_t zx_dma_int_handler(int irq, void *dev_id) 272 - { 273 - struct zx_dma_dev *d = (struct zx_dma_dev *)dev_id; 274 - struct zx_dma_phy *p; 275 - struct zx_dma_chan *c; 276 - u32 tc = readl_relaxed(d->base + REG_ZX_TC_IRQ); 277 - u32 serr = readl_relaxed(d->base + REG_ZX_SRC_ERR_IRQ); 278 - u32 derr = readl_relaxed(d->base + REG_ZX_DST_ERR_IRQ); 279 - u32 cfg = readl_relaxed(d->base + REG_ZX_CFG_ERR_IRQ); 280 - u32 i, irq_chan = 0, task = 0; 281 - 282 - while (tc) { 283 - i = __ffs(tc); 284 - tc &= ~BIT(i); 285 - p = &d->phy[i]; 286 - c = p->vchan; 287 - if (c) { 288 - spin_lock(&c->vc.lock); 289 - if (c->cyclic) { 290 - vchan_cyclic_callback(&p->ds_run->vd); 291 - } else { 292 - vchan_cookie_complete(&p->ds_run->vd); 293 - p->ds_done = p->ds_run; 294 - task = 1; 295 - } 296 - spin_unlock(&c->vc.lock); 297 - irq_chan |= BIT(i); 298 - } 299 - } 300 - 301 - if (serr || derr || cfg) 302 - dev_warn(d->slave.dev, "DMA ERR src 0x%x, dst 0x%x, cfg 0x%x\n", 303 - serr, derr, cfg); 304 - 305 - writel_relaxed(irq_chan, d->base + REG_ZX_TC_IRQ_RAW); 306 - writel_relaxed(serr, d->base + REG_ZX_SRC_ERR_IRQ_RAW); 307 - writel_relaxed(derr, d->base + REG_ZX_DST_ERR_IRQ_RAW); 308 - writel_relaxed(cfg, d->base + REG_ZX_CFG_ERR_IRQ_RAW); 309 - 310 - if (task) 311 - zx_dma_task(d); 312 - return IRQ_HANDLED; 313 - } 314 - 315 - static void zx_dma_free_chan_resources(struct dma_chan *chan) 316 - { 317 - struct zx_dma_chan *c = to_zx_chan(chan); 318 - struct zx_dma_dev *d = to_zx_dma(chan->device); 319 - unsigned long flags; 320 - 321 - spin_lock_irqsave(&d->lock, flags); 322 - list_del_init(&c->node); 323 - spin_unlock_irqrestore(&d->lock, flags); 324 - 325 - vchan_free_chan_resources(&c->vc); 326 - c->ccfg = 0; 327 - } 328 - 329 - static enum dma_status 
zx_dma_tx_status(struct dma_chan *chan, 330 - dma_cookie_t cookie, 331 - struct dma_tx_state *state) 332 - { 333 - struct zx_dma_chan *c = to_zx_chan(chan); 334 - struct zx_dma_phy *p; 335 - struct virt_dma_desc *vd; 336 - unsigned long flags; 337 - enum dma_status ret; 338 - size_t bytes = 0; 339 - 340 - ret = dma_cookie_status(&c->vc.chan, cookie, state); 341 - if (ret == DMA_COMPLETE || !state) 342 - return ret; 343 - 344 - spin_lock_irqsave(&c->vc.lock, flags); 345 - p = c->phy; 346 - ret = c->status; 347 - 348 - /* 349 - * If the cookie is on our issue queue, then the residue is 350 - * its total size. 351 - */ 352 - vd = vchan_find_desc(&c->vc, cookie); 353 - if (vd) { 354 - bytes = container_of(vd, struct zx_dma_desc_sw, vd)->size; 355 - } else if ((!p) || (!p->ds_run)) { 356 - bytes = 0; 357 - } else { 358 - struct zx_dma_desc_sw *ds = p->ds_run; 359 - u32 clli = 0, index = 0; 360 - 361 - bytes = 0; 362 - clli = zx_dma_get_curr_lli(p); 363 - index = (clli - ds->desc_hw_lli) / 364 - sizeof(struct zx_desc_hw) + 1; 365 - for (; index < ds->desc_num; index++) { 366 - bytes += ds->desc_hw[index].src_x; 367 - /* end of lli */ 368 - if (!ds->desc_hw[index].lli) 369 - break; 370 - } 371 - } 372 - spin_unlock_irqrestore(&c->vc.lock, flags); 373 - dma_set_residue(state, bytes); 374 - return ret; 375 - } 376 - 377 - static void zx_dma_issue_pending(struct dma_chan *chan) 378 - { 379 - struct zx_dma_chan *c = to_zx_chan(chan); 380 - struct zx_dma_dev *d = to_zx_dma(chan->device); 381 - unsigned long flags; 382 - int issue = 0; 383 - 384 - spin_lock_irqsave(&c->vc.lock, flags); 385 - /* add request to vc->desc_issued */ 386 - if (vchan_issue_pending(&c->vc)) { 387 - spin_lock(&d->lock); 388 - if (!c->phy && list_empty(&c->node)) { 389 - /* if new channel, add chan_pending */ 390 - list_add_tail(&c->node, &d->chan_pending); 391 - issue = 1; 392 - dev_dbg(d->slave.dev, "vchan %p: issued\n", &c->vc); 393 - } 394 - spin_unlock(&d->lock); 395 - } else { 396 - 
dev_dbg(d->slave.dev, "vchan %p: nothing to issue\n", &c->vc); 397 - } 398 - spin_unlock_irqrestore(&c->vc.lock, flags); 399 - 400 - if (issue) 401 - zx_dma_task(d); 402 - } 403 - 404 - static void zx_dma_fill_desc(struct zx_dma_desc_sw *ds, dma_addr_t dst, 405 - dma_addr_t src, size_t len, u32 num, u32 ccfg) 406 - { 407 - if ((num + 1) < ds->desc_num) 408 - ds->desc_hw[num].lli = ds->desc_hw_lli + (num + 1) * 409 - sizeof(struct zx_desc_hw); 410 - ds->desc_hw[num].saddr = src; 411 - ds->desc_hw[num].daddr = dst; 412 - ds->desc_hw[num].src_x = len; 413 - ds->desc_hw[num].ctr = ccfg; 414 - } 415 - 416 - static struct zx_dma_desc_sw *zx_alloc_desc_resource(int num, 417 - struct dma_chan *chan) 418 - { 419 - struct zx_dma_chan *c = to_zx_chan(chan); 420 - struct zx_dma_desc_sw *ds; 421 - struct zx_dma_dev *d = to_zx_dma(chan->device); 422 - int lli_limit = LLI_BLOCK_SIZE / sizeof(struct zx_desc_hw); 423 - 424 - if (num > lli_limit) { 425 - dev_dbg(chan->device->dev, "vch %p: sg num %d exceed max %d\n", 426 - &c->vc, num, lli_limit); 427 - return NULL; 428 - } 429 - 430 - ds = kzalloc(sizeof(*ds), GFP_ATOMIC); 431 - if (!ds) 432 - return NULL; 433 - 434 - ds->desc_hw = dma_pool_zalloc(d->pool, GFP_NOWAIT, &ds->desc_hw_lli); 435 - if (!ds->desc_hw) { 436 - dev_dbg(chan->device->dev, "vch %p: dma alloc fail\n", &c->vc); 437 - kfree(ds); 438 - return NULL; 439 - } 440 - ds->desc_num = num; 441 - return ds; 442 - } 443 - 444 - static enum zx_dma_burst_width zx_dma_burst_width(enum dma_slave_buswidth width) 445 - { 446 - switch (width) { 447 - case DMA_SLAVE_BUSWIDTH_1_BYTE: 448 - case DMA_SLAVE_BUSWIDTH_2_BYTES: 449 - case DMA_SLAVE_BUSWIDTH_4_BYTES: 450 - case DMA_SLAVE_BUSWIDTH_8_BYTES: 451 - return ffs(width) - 1; 452 - default: 453 - return ZX_DMA_WIDTH_32BIT; 454 - } 455 - } 456 - 457 - static int zx_pre_config(struct zx_dma_chan *c, enum dma_transfer_direction dir) 458 - { 459 - struct dma_slave_config *cfg = &c->slave_cfg; 460 - enum zx_dma_burst_width src_width; 
461 - enum zx_dma_burst_width dst_width; 462 - u32 maxburst = 0; 463 - 464 - switch (dir) { 465 - case DMA_MEM_TO_MEM: 466 - c->ccfg = ZX_CH_ENABLE | ZX_SOFT_REQ 467 - | ZX_SRC_BURST_LEN(ZX_MAX_BURST_LEN - 1) 468 - | ZX_SRC_BURST_WIDTH(ZX_DMA_WIDTH_32BIT) 469 - | ZX_DST_BURST_WIDTH(ZX_DMA_WIDTH_32BIT); 470 - break; 471 - case DMA_MEM_TO_DEV: 472 - c->dev_addr = cfg->dst_addr; 473 - /* dst len is calculated from src width, len and dst width. 474 - * We need make sure dst len not exceed MAX LEN. 475 - * Trailing single transaction that does not fill a full 476 - * burst also require identical src/dst data width. 477 - */ 478 - dst_width = zx_dma_burst_width(cfg->dst_addr_width); 479 - maxburst = cfg->dst_maxburst; 480 - maxburst = maxburst < ZX_MAX_BURST_LEN ? 481 - maxburst : ZX_MAX_BURST_LEN; 482 - c->ccfg = ZX_DST_FIFO_MODE | ZX_CH_ENABLE 483 - | ZX_SRC_BURST_LEN(maxburst - 1) 484 - | ZX_SRC_BURST_WIDTH(dst_width) 485 - | ZX_DST_BURST_WIDTH(dst_width); 486 - break; 487 - case DMA_DEV_TO_MEM: 488 - c->dev_addr = cfg->src_addr; 489 - src_width = zx_dma_burst_width(cfg->src_addr_width); 490 - maxburst = cfg->src_maxburst; 491 - maxburst = maxburst < ZX_MAX_BURST_LEN ? 
492 - maxburst : ZX_MAX_BURST_LEN; 493 - c->ccfg = ZX_SRC_FIFO_MODE | ZX_CH_ENABLE 494 - | ZX_SRC_BURST_LEN(maxburst - 1) 495 - | ZX_SRC_BURST_WIDTH(src_width) 496 - | ZX_DST_BURST_WIDTH(src_width); 497 - break; 498 - default: 499 - return -EINVAL; 500 - } 501 - return 0; 502 - } 503 - 504 - static struct dma_async_tx_descriptor *zx_dma_prep_memcpy( 505 - struct dma_chan *chan, dma_addr_t dst, dma_addr_t src, 506 - size_t len, unsigned long flags) 507 - { 508 - struct zx_dma_chan *c = to_zx_chan(chan); 509 - struct zx_dma_desc_sw *ds; 510 - size_t copy = 0; 511 - int num = 0; 512 - 513 - if (!len) 514 - return NULL; 515 - 516 - if (zx_pre_config(c, DMA_MEM_TO_MEM)) 517 - return NULL; 518 - 519 - num = DIV_ROUND_UP(len, DMA_MAX_SIZE); 520 - 521 - ds = zx_alloc_desc_resource(num, chan); 522 - if (!ds) 523 - return NULL; 524 - 525 - ds->size = len; 526 - num = 0; 527 - 528 - do { 529 - copy = min_t(size_t, len, DMA_MAX_SIZE); 530 - zx_dma_fill_desc(ds, dst, src, copy, num++, c->ccfg); 531 - 532 - src += copy; 533 - dst += copy; 534 - len -= copy; 535 - } while (len); 536 - 537 - c->cyclic = 0; 538 - ds->desc_hw[num - 1].lli = 0; /* end of link */ 539 - ds->desc_hw[num - 1].ctr |= ZX_IRQ_ENABLE_ALL; 540 - return vchan_tx_prep(&c->vc, &ds->vd, flags); 541 - } 542 - 543 - static struct dma_async_tx_descriptor *zx_dma_prep_slave_sg( 544 - struct dma_chan *chan, struct scatterlist *sgl, unsigned int sglen, 545 - enum dma_transfer_direction dir, unsigned long flags, void *context) 546 - { 547 - struct zx_dma_chan *c = to_zx_chan(chan); 548 - struct zx_dma_desc_sw *ds; 549 - size_t len, avail, total = 0; 550 - struct scatterlist *sg; 551 - dma_addr_t addr, src = 0, dst = 0; 552 - int num = sglen, i; 553 - 554 - if (!sgl) 555 - return NULL; 556 - 557 - if (zx_pre_config(c, dir)) 558 - return NULL; 559 - 560 - for_each_sg(sgl, sg, sglen, i) { 561 - avail = sg_dma_len(sg); 562 - if (avail > DMA_MAX_SIZE) 563 - num += DIV_ROUND_UP(avail, DMA_MAX_SIZE) - 1; 564 - } 565 - 566 - ds 
= zx_alloc_desc_resource(num, chan); 567 - if (!ds) 568 - return NULL; 569 - 570 - c->cyclic = 0; 571 - num = 0; 572 - for_each_sg(sgl, sg, sglen, i) { 573 - addr = sg_dma_address(sg); 574 - avail = sg_dma_len(sg); 575 - total += avail; 576 - 577 - do { 578 - len = min_t(size_t, avail, DMA_MAX_SIZE); 579 - 580 - if (dir == DMA_MEM_TO_DEV) { 581 - src = addr; 582 - dst = c->dev_addr; 583 - } else if (dir == DMA_DEV_TO_MEM) { 584 - src = c->dev_addr; 585 - dst = addr; 586 - } 587 - 588 - zx_dma_fill_desc(ds, dst, src, len, num++, c->ccfg); 589 - 590 - addr += len; 591 - avail -= len; 592 - } while (avail); 593 - } 594 - 595 - ds->desc_hw[num - 1].lli = 0; /* end of link */ 596 - ds->desc_hw[num - 1].ctr |= ZX_IRQ_ENABLE_ALL; 597 - ds->size = total; 598 - return vchan_tx_prep(&c->vc, &ds->vd, flags); 599 - } 600 - 601 - static struct dma_async_tx_descriptor *zx_dma_prep_dma_cyclic( 602 - struct dma_chan *chan, dma_addr_t dma_addr, size_t buf_len, 603 - size_t period_len, enum dma_transfer_direction dir, 604 - unsigned long flags) 605 - { 606 - struct zx_dma_chan *c = to_zx_chan(chan); 607 - struct zx_dma_desc_sw *ds; 608 - dma_addr_t src = 0, dst = 0; 609 - int num_periods = buf_len / period_len; 610 - int buf = 0, num = 0; 611 - 612 - if (period_len > DMA_MAX_SIZE) { 613 - dev_err(chan->device->dev, "maximum period size exceeded\n"); 614 - return NULL; 615 - } 616 - 617 - if (zx_pre_config(c, dir)) 618 - return NULL; 619 - 620 - ds = zx_alloc_desc_resource(num_periods, chan); 621 - if (!ds) 622 - return NULL; 623 - c->cyclic = 1; 624 - 625 - while (buf < buf_len) { 626 - if (dir == DMA_MEM_TO_DEV) { 627 - src = dma_addr; 628 - dst = c->dev_addr; 629 - } else if (dir == DMA_DEV_TO_MEM) { 630 - src = c->dev_addr; 631 - dst = dma_addr; 632 - } 633 - zx_dma_fill_desc(ds, dst, src, period_len, num++, 634 - c->ccfg | ZX_IRQ_ENABLE_ALL); 635 - dma_addr += period_len; 636 - buf += period_len; 637 - } 638 - 639 - ds->desc_hw[num - 1].lli = ds->desc_hw_lli; 640 - ds->size = 
buf_len; 641 - return vchan_tx_prep(&c->vc, &ds->vd, flags); 642 - } 643 - 644 - static int zx_dma_config(struct dma_chan *chan, 645 - struct dma_slave_config *cfg) 646 - { 647 - struct zx_dma_chan *c = to_zx_chan(chan); 648 - 649 - if (!cfg) 650 - return -EINVAL; 651 - 652 - memcpy(&c->slave_cfg, cfg, sizeof(*cfg)); 653 - 654 - return 0; 655 - } 656 - 657 - static int zx_dma_terminate_all(struct dma_chan *chan) 658 - { 659 - struct zx_dma_chan *c = to_zx_chan(chan); 660 - struct zx_dma_dev *d = to_zx_dma(chan->device); 661 - struct zx_dma_phy *p = c->phy; 662 - unsigned long flags; 663 - LIST_HEAD(head); 664 - 665 - dev_dbg(d->slave.dev, "vchan %p: terminate all\n", &c->vc); 666 - 667 - /* Prevent this channel being scheduled */ 668 - spin_lock(&d->lock); 669 - list_del_init(&c->node); 670 - spin_unlock(&d->lock); 671 - 672 - /* Clear the tx descriptor lists */ 673 - spin_lock_irqsave(&c->vc.lock, flags); 674 - vchan_get_all_descriptors(&c->vc, &head); 675 - if (p) { 676 - /* vchan is assigned to a pchan - stop the channel */ 677 - zx_dma_terminate_chan(p, d); 678 - c->phy = NULL; 679 - p->vchan = NULL; 680 - p->ds_run = NULL; 681 - p->ds_done = NULL; 682 - } 683 - spin_unlock_irqrestore(&c->vc.lock, flags); 684 - vchan_dma_desc_free_list(&c->vc, &head); 685 - 686 - return 0; 687 - } 688 - 689 - static int zx_dma_transfer_pause(struct dma_chan *chan) 690 - { 691 - struct zx_dma_chan *c = to_zx_chan(chan); 692 - u32 val = 0; 693 - 694 - val = readl_relaxed(c->phy->base + REG_ZX_CTRL); 695 - val &= ~ZX_CH_ENABLE; 696 - writel_relaxed(val, c->phy->base + REG_ZX_CTRL); 697 - 698 - return 0; 699 - } 700 - 701 - static int zx_dma_transfer_resume(struct dma_chan *chan) 702 - { 703 - struct zx_dma_chan *c = to_zx_chan(chan); 704 - u32 val = 0; 705 - 706 - val = readl_relaxed(c->phy->base + REG_ZX_CTRL); 707 - val |= ZX_CH_ENABLE; 708 - writel_relaxed(val, c->phy->base + REG_ZX_CTRL); 709 - 710 - return 0; 711 - } 712 - 713 - static void zx_dma_free_desc(struct 
virt_dma_desc *vd) 714 - { 715 - struct zx_dma_desc_sw *ds = 716 - container_of(vd, struct zx_dma_desc_sw, vd); 717 - struct zx_dma_dev *d = to_zx_dma(vd->tx.chan->device); 718 - 719 - dma_pool_free(d->pool, ds->desc_hw, ds->desc_hw_lli); 720 - kfree(ds); 721 - } 722 - 723 - static const struct of_device_id zx6702_dma_dt_ids[] = { 724 - { .compatible = "zte,zx296702-dma", }, 725 - {} 726 - }; 727 - MODULE_DEVICE_TABLE(of, zx6702_dma_dt_ids); 728 - 729 - static struct dma_chan *zx_of_dma_simple_xlate(struct of_phandle_args *dma_spec, 730 - struct of_dma *ofdma) 731 - { 732 - struct zx_dma_dev *d = ofdma->of_dma_data; 733 - unsigned int request = dma_spec->args[0]; 734 - struct dma_chan *chan; 735 - struct zx_dma_chan *c; 736 - 737 - if (request >= d->dma_requests) 738 - return NULL; 739 - 740 - chan = dma_get_any_slave_channel(&d->slave); 741 - if (!chan) { 742 - dev_err(d->slave.dev, "get channel fail in %s.\n", __func__); 743 - return NULL; 744 - } 745 - c = to_zx_chan(chan); 746 - c->id = request; 747 - dev_info(d->slave.dev, "zx_dma: pchan %u: alloc vchan %p\n", 748 - c->id, &c->vc); 749 - return chan; 750 - } 751 - 752 - static int zx_dma_probe(struct platform_device *op) 753 - { 754 - struct zx_dma_dev *d; 755 - int i, ret = 0; 756 - 757 - d = devm_kzalloc(&op->dev, sizeof(*d), GFP_KERNEL); 758 - if (!d) 759 - return -ENOMEM; 760 - 761 - d->base = devm_platform_ioremap_resource(op, 0); 762 - if (IS_ERR(d->base)) 763 - return PTR_ERR(d->base); 764 - 765 - of_property_read_u32((&op->dev)->of_node, 766 - "dma-channels", &d->dma_channels); 767 - of_property_read_u32((&op->dev)->of_node, 768 - "dma-requests", &d->dma_requests); 769 - if (!d->dma_requests || !d->dma_channels) 770 - return -EINVAL; 771 - 772 - d->clk = devm_clk_get(&op->dev, NULL); 773 - if (IS_ERR(d->clk)) { 774 - dev_err(&op->dev, "no dma clk\n"); 775 - return PTR_ERR(d->clk); 776 - } 777 - 778 - d->irq = platform_get_irq(op, 0); 779 - ret = devm_request_irq(&op->dev, d->irq, zx_dma_int_handler, 
780 - 0, DRIVER_NAME, d); 781 - if (ret) 782 - return ret; 783 - 784 - /* A DMA memory pool for LLIs, align on 32-byte boundary */ 785 - d->pool = dmam_pool_create(DRIVER_NAME, &op->dev, 786 - LLI_BLOCK_SIZE, 32, 0); 787 - if (!d->pool) 788 - return -ENOMEM; 789 - 790 - /* init phy channel */ 791 - d->phy = devm_kcalloc(&op->dev, 792 - d->dma_channels, sizeof(struct zx_dma_phy), GFP_KERNEL); 793 - if (!d->phy) 794 - return -ENOMEM; 795 - 796 - for (i = 0; i < d->dma_channels; i++) { 797 - struct zx_dma_phy *p = &d->phy[i]; 798 - 799 - p->idx = i; 800 - p->base = d->base + i * 0x40; 801 - } 802 - 803 - INIT_LIST_HEAD(&d->slave.channels); 804 - dma_cap_set(DMA_SLAVE, d->slave.cap_mask); 805 - dma_cap_set(DMA_MEMCPY, d->slave.cap_mask); 806 - dma_cap_set(DMA_CYCLIC, d->slave.cap_mask); 807 - dma_cap_set(DMA_PRIVATE, d->slave.cap_mask); 808 - d->slave.dev = &op->dev; 809 - d->slave.device_free_chan_resources = zx_dma_free_chan_resources; 810 - d->slave.device_tx_status = zx_dma_tx_status; 811 - d->slave.device_prep_dma_memcpy = zx_dma_prep_memcpy; 812 - d->slave.device_prep_slave_sg = zx_dma_prep_slave_sg; 813 - d->slave.device_prep_dma_cyclic = zx_dma_prep_dma_cyclic; 814 - d->slave.device_issue_pending = zx_dma_issue_pending; 815 - d->slave.device_config = zx_dma_config; 816 - d->slave.device_terminate_all = zx_dma_terminate_all; 817 - d->slave.device_pause = zx_dma_transfer_pause; 818 - d->slave.device_resume = zx_dma_transfer_resume; 819 - d->slave.copy_align = DMA_ALIGN; 820 - d->slave.src_addr_widths = ZX_DMA_BUSWIDTHS; 821 - d->slave.dst_addr_widths = ZX_DMA_BUSWIDTHS; 822 - d->slave.directions = BIT(DMA_MEM_TO_MEM) | BIT(DMA_MEM_TO_DEV) 823 - | BIT(DMA_DEV_TO_MEM); 824 - d->slave.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT; 825 - 826 - /* init virtual channel */ 827 - d->chans = devm_kcalloc(&op->dev, 828 - d->dma_requests, sizeof(struct zx_dma_chan), GFP_KERNEL); 829 - if (!d->chans) 830 - return -ENOMEM; 831 - 832 - for (i = 0; i < d->dma_requests; 
i++) { 833 - struct zx_dma_chan *c = &d->chans[i]; 834 - 835 - c->status = DMA_IN_PROGRESS; 836 - INIT_LIST_HEAD(&c->node); 837 - c->vc.desc_free = zx_dma_free_desc; 838 - vchan_init(&c->vc, &d->slave); 839 - } 840 - 841 - /* Enable clock before accessing registers */ 842 - ret = clk_prepare_enable(d->clk); 843 - if (ret < 0) { 844 - dev_err(&op->dev, "clk_prepare_enable failed: %d\n", ret); 845 - goto zx_dma_out; 846 - } 847 - 848 - zx_dma_init_state(d); 849 - 850 - spin_lock_init(&d->lock); 851 - INIT_LIST_HEAD(&d->chan_pending); 852 - platform_set_drvdata(op, d); 853 - 854 - ret = dma_async_device_register(&d->slave); 855 - if (ret) 856 - goto clk_dis; 857 - 858 - ret = of_dma_controller_register((&op->dev)->of_node, 859 - zx_of_dma_simple_xlate, d); 860 - if (ret) 861 - goto of_dma_register_fail; 862 - 863 - dev_info(&op->dev, "initialized\n"); 864 - return 0; 865 - 866 - of_dma_register_fail: 867 - dma_async_device_unregister(&d->slave); 868 - clk_dis: 869 - clk_disable_unprepare(d->clk); 870 - zx_dma_out: 871 - return ret; 872 - } 873 - 874 - static int zx_dma_remove(struct platform_device *op) 875 - { 876 - struct zx_dma_chan *c, *cn; 877 - struct zx_dma_dev *d = platform_get_drvdata(op); 878 - 879 - /* explictly free the irq */ 880 - devm_free_irq(&op->dev, d->irq, d); 881 - 882 - dma_async_device_unregister(&d->slave); 883 - of_dma_controller_free((&op->dev)->of_node); 884 - 885 - list_for_each_entry_safe(c, cn, &d->slave.channels, 886 - vc.chan.device_node) { 887 - list_del(&c->vc.chan.device_node); 888 - } 889 - clk_disable_unprepare(d->clk); 890 - 891 - return 0; 892 - } 893 - 894 - #ifdef CONFIG_PM_SLEEP 895 - static int zx_dma_suspend_dev(struct device *dev) 896 - { 897 - struct zx_dma_dev *d = dev_get_drvdata(dev); 898 - u32 stat = 0; 899 - 900 - stat = zx_dma_get_chan_stat(d); 901 - if (stat) { 902 - dev_warn(d->slave.dev, 903 - "chan %d is running fail to suspend\n", stat); 904 - return -1; 905 - } 906 - clk_disable_unprepare(d->clk); 907 - return 
0; 908 - } 909 - 910 - static int zx_dma_resume_dev(struct device *dev) 911 - { 912 - struct zx_dma_dev *d = dev_get_drvdata(dev); 913 - int ret = 0; 914 - 915 - ret = clk_prepare_enable(d->clk); 916 - if (ret < 0) { 917 - dev_err(d->slave.dev, "clk_prepare_enable failed: %d\n", ret); 918 - return ret; 919 - } 920 - zx_dma_init_state(d); 921 - return 0; 922 - } 923 - #endif 924 - 925 - static SIMPLE_DEV_PM_OPS(zx_dma_pmops, zx_dma_suspend_dev, zx_dma_resume_dev); 926 - 927 - static struct platform_driver zx_pdma_driver = { 928 - .driver = { 929 - .name = DRIVER_NAME, 930 - .pm = &zx_dma_pmops, 931 - .of_match_table = zx6702_dma_dt_ids, 932 - }, 933 - .probe = zx_dma_probe, 934 - .remove = zx_dma_remove, 935 - }; 936 - 937 - module_platform_driver(zx_pdma_driver); 938 - 939 - MODULE_DESCRIPTION("ZTE ZX296702 DMA Driver"); 940 - MODULE_AUTHOR("Jun Nie jun.nie@linaro.org"); 941 - MODULE_LICENSE("GPL v2");
+6 -7
include/linux/dma/k3-psil.h
···
42 42 /**
43 43  * struct psil_endpoint_config - PSI-L Endpoint configuration
44 44  * @ep_type: PSI-L endpoint type
   45 + * @channel_tpl: Desired throughput level for the channel
45 46  * @pkt_mode: If set, the channel must be in Packet mode, otherwise in
46 47  *            TR mode
47 48  * @notdpkt: TDCM must be suppressed on the TX channel
48 49  * @needs_epib: Endpoint needs EPIB
49    - * @psd_size: If set, PSdata is used by the endpoint
50    - * @channel_tpl: Desired throughput level for the channel
51 50  * @pdma_acc32: ACC32 must be enabled on the PDMA side
52 51  * @pdma_burst: BURST must be enabled on the PDMA side
   52 + * @psd_size: If set, PSdata is used by the endpoint
53 53  * @mapped_channel_id: PKTDMA thread to channel mapping for mapped channels.
54 54  *                    The thread must be serviced by the specified channel if
55 55  *                    mapped_channel_id is >= 0 in case of PKTDMA
···
62 62  */
63 63 struct psil_endpoint_config {
64 64 	enum psil_endpoint_type ep_type;
   65 +	enum udma_tp_level channel_tpl;
65 66
66 67 	unsigned pkt_mode:1;
67 68 	unsigned notdpkt:1;
68 69 	unsigned needs_epib:1;
69    -	u32 psd_size;
70    -	enum udma_tp_level channel_tpl;
71    -
72 70 	/* PDMA properties, valid for PSIL_EP_PDMA_* */
73 71 	unsigned pdma_acc32:1;
74 72 	unsigned pdma_burst:1;
75 73
   74 +	u32 psd_size;
76 75 	/* PKDMA mapped channel */
77    -	int mapped_channel_id;
   76 +	s16 mapped_channel_id;
78 77 	/* PKTDMA tflow and rflow ranges for mapped channel */
79 78 	u16 flow_start;
80 79 	u16 flow_num;
81    -	u16 default_flow_id;
   80 +	s16 default_flow_id;
82 81 };
83 82
84 83 int psil_set_new_ep_config(struct device *dev, const char *name,
-16
include/linux/dma/mmp-pdma.h
···
1 - /* SPDX-License-Identifier: GPL-2.0 */
2 - #ifndef _MMP_PDMA_H_
3 - #define _MMP_PDMA_H_
4 -
5 - struct dma_chan;
6 -
7 - #ifdef CONFIG_MMP_PDMA
8 - bool mmp_pdma_filter_fn(struct dma_chan *chan, void *param);
9 - #else
10 - static inline bool mmp_pdma_filter_fn(struct dma_chan *chan, void *param)
11 - {
12 -	return false;
13 - }
14 - #endif
15 -
16 - #endif /* _MMP_PDMA_H_ */
+2
include/linux/dmaengine.h
···
745 745 	DMAENGINE_ALIGN_16_BYTES = 4,
746 746 	DMAENGINE_ALIGN_32_BYTES = 5,
747 747 	DMAENGINE_ALIGN_64_BYTES = 6,
    748 +	DMAENGINE_ALIGN_128_BYTES = 7,
    749 +	DMAENGINE_ALIGN_256_BYTES = 8,
748 750 };
749 751
750 752 /**
-61
include/linux/platform_data/dma-atmel.h
···
1 - /* SPDX-License-Identifier: GPL-2.0-or-later */
2 - /*
3 -  * Header file for the Atmel AHB DMA Controller driver
4 -  *
5 -  * Copyright (C) 2008 Atmel Corporation
6 -  */
7 - #ifndef AT_HDMAC_H
8 - #define AT_HDMAC_H
9 -
10 - #include <linux/dmaengine.h>
11 -
12 - /**
13 -  * struct at_dma_platform_data - Controller configuration parameters
14 -  * @nr_channels: Number of channels supported by hardware (max 8)
15 -  * @cap_mask: dma_capability flags supported by the platform
16 -  */
17 - struct at_dma_platform_data {
18 -	unsigned int	nr_channels;
19 -	dma_cap_mask_t	cap_mask;
20 - };
21 -
22 - /**
23 -  * struct at_dma_slave - Controller-specific information about a slave
24 -  * @dma_dev: required DMA master device
25 -  * @cfg: Platform-specific initializer for the CFG register
26 -  */
27 - struct at_dma_slave {
28 -	struct device		*dma_dev;
29 -	u32			cfg;
30 - };
31 -
32 -
33 - /* Platform-configurable bits in CFG */
34 - #define ATC_PER_MSB(h)	((0x30U & (h)) >> 4)	/* Extract most significant bits of a handshaking identifier */
35 -
36 - #define ATC_SRC_PER(h)		(0xFU & (h))	/* Channel src rq associated with periph handshaking ifc h */
37 - #define ATC_DST_PER(h)		((0xFU & (h)) << 4)	/* Channel dst rq associated with periph handshaking ifc h */
38 - #define ATC_SRC_REP		(0x1 << 8)	/* Source Replay Mod */
39 - #define ATC_SRC_H2SEL		(0x1 << 9)	/* Source Handshaking Mod */
40 - #define ATC_SRC_H2SEL_SW	(0x0 << 9)
41 - #define ATC_SRC_H2SEL_HW	(0x1 << 9)
42 - #define ATC_SRC_PER_MSB(h)	(ATC_PER_MSB(h) << 10)	/* Channel src rq (most significant bits) */
43 - #define ATC_DST_REP		(0x1 << 12)	/* Destination Replay Mod */
44 - #define ATC_DST_H2SEL		(0x1 << 13)	/* Destination Handshaking Mod */
45 - #define ATC_DST_H2SEL_SW	(0x0 << 13)
46 - #define ATC_DST_H2SEL_HW	(0x1 << 13)
47 - #define ATC_DST_PER_MSB(h)	(ATC_PER_MSB(h) << 14)	/* Channel dst rq (most significant bits) */
48 - #define ATC_SOD			(0x1 << 16)	/* Stop On Done */
49 - #define ATC_LOCK_IF		(0x1 << 20)	/* Interface Lock */
50 - #define ATC_LOCK_B		(0x1 << 21)	/* AHB Bus Lock */
51 - #define ATC_LOCK_IF_L		(0x1 << 22)	/* Master Interface Arbiter Lock */
52 - #define ATC_LOCK_IF_L_CHUNK	(0x0 << 22)
53 - #define ATC_LOCK_IF_L_BUFFER	(0x1 << 22)
54 - #define ATC_AHB_PROT_MASK	(0x7 << 24)	/* AHB Protection */
55 - #define ATC_FIFOCFG_MASK	(0x3 << 28)	/* FIFO Request Configuration */
56 - #define ATC_FIFOCFG_LARGESTBURST	(0x0 << 28)
57 - #define ATC_FIFOCFG_HALFFIFO		(0x1 << 28)
58 - #define ATC_FIFOCFG_ENOUGHSPACE		(0x2 << 28)
59 -
60 -
61 - #endif /* AT_HDMAC_H */
-72
include/linux/platform_data/dma-coh901318.h
···
1 - /* SPDX-License-Identifier: GPL-2.0-only */
2 - /*
3 -  * Platform data for the COH901318 DMA controller
4 -  * Copyright (C) 2007-2013 ST-Ericsson
5 -  */
6 -
7 - #ifndef PLAT_COH901318_H
8 - #define PLAT_COH901318_H
9 -
10 - #ifdef CONFIG_COH901318
11 -
12 - /* We only support the U300 DMA channels */
13 - #define U300_DMA_MSL_TX_0		0
14 - #define U300_DMA_MSL_TX_1		1
15 - #define U300_DMA_MSL_TX_2		2
16 - #define U300_DMA_MSL_TX_3		3
17 - #define U300_DMA_MSL_TX_4		4
18 - #define U300_DMA_MSL_TX_5		5
19 - #define U300_DMA_MSL_TX_6		6
20 - #define U300_DMA_MSL_RX_0		7
21 - #define U300_DMA_MSL_RX_1		8
22 - #define U300_DMA_MSL_RX_2		9
23 - #define U300_DMA_MSL_RX_3		10
24 - #define U300_DMA_MSL_RX_4		11
25 - #define U300_DMA_MSL_RX_5		12
26 - #define U300_DMA_MSL_RX_6		13
27 - #define U300_DMA_MMCSD_RX_TX		14
28 - #define U300_DMA_MSPRO_TX		15
29 - #define U300_DMA_MSPRO_RX		16
30 - #define U300_DMA_UART0_TX		17
31 - #define U300_DMA_UART0_RX		18
32 - #define U300_DMA_APEX_TX		19
33 - #define U300_DMA_APEX_RX		20
34 - #define U300_DMA_PCM_I2S0_TX		21
35 - #define U300_DMA_PCM_I2S0_RX		22
36 - #define U300_DMA_PCM_I2S1_TX		23
37 - #define U300_DMA_PCM_I2S1_RX		24
38 - #define U300_DMA_XGAM_CDI		25
39 - #define U300_DMA_XGAM_PDI		26
40 - #define U300_DMA_SPI_TX			27
41 - #define U300_DMA_SPI_RX			28
42 - #define U300_DMA_GENERAL_PURPOSE_0	29
43 - #define U300_DMA_GENERAL_PURPOSE_1	30
44 - #define U300_DMA_GENERAL_PURPOSE_2	31
45 - #define U300_DMA_GENERAL_PURPOSE_3	32
46 - #define U300_DMA_GENERAL_PURPOSE_4	33
47 - #define U300_DMA_GENERAL_PURPOSE_5	34
48 - #define U300_DMA_GENERAL_PURPOSE_6	35
49 - #define U300_DMA_GENERAL_PURPOSE_7	36
50 - #define U300_DMA_GENERAL_PURPOSE_8	37
51 - #define U300_DMA_UART1_TX		38
52 - #define U300_DMA_UART1_RX		39
53 -
54 - #define U300_DMA_DEVICE_CHANNELS	32
55 - #define U300_DMA_CHANNELS		40
56 -
57 - /**
58 -  * coh901318_filter_id() - DMA channel filter function
59 -  * @chan: dma channel handle
60 -  * @chan_id: id of dma channel to be filter out
61 -  *
62 -  * In dma_request_channel() it specifies what channel id to be requested
63 -  */
64 - bool coh901318_filter_id(struct dma_chan *chan, void *chan_id);
65 - #else
66 - static inline bool coh901318_filter_id(struct dma_chan *chan, void *chan_id)
67 - {
68 -	return false;
69 - }
70 - #endif
71 -
72 - #endif /* PLAT_COH901318_H */
-11
include/linux/platform_data/dma-imx-sdma.h
···
57 57 	/* End of v4 array */
58 58 };
59 59
60 - /**
61 -  * struct sdma_platform_data - platform specific data for SDMA engine
62 -  *
63 -  * @fw_name		The firmware name
64 -  * @script_addrs	SDMA scripts addresses in SDMA ROM
65 -  */
66 - struct sdma_platform_data {
67 -	char *fw_name;
68 -	struct sdma_script_start_addrs *script_addrs;
69 - };
70 -
71 60 #endif /* __MACH_MXC_SDMA_H__ */
-7
include/linux/sirfsoc_dma.h
···
1 - /* SPDX-License-Identifier: GPL-2.0 */
2 - #ifndef _SIRFSOC_DMA_H_
3 - #define _SIRFSOC_DMA_H_
4 -
5 - bool sirfsoc_dma_filter_id(struct dma_chan *chan, void *chan_id);
6 -
7 - #endif