Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'next' of git://git.infradead.org/users/vkoul/slave-dma

Pull slave-dmaengine changes from Vinod Koul:
"This brings for slave dmaengine:

- Change dma notification flag to DMA_COMPLETE from DMA_SUCCESS as
dmaengine can only transfer and not verify validity of dma
transfers

- Bunch of fixes across drivers:

- cppi41 driver fixes from Daniel

- 8 channel freescale dma engine support and updated bindings from
Hongbo

- mxs-dma fixes and cleanup by Markus

- DMAengine updates from Dan:

- Bartlomiej and Dan finalized a rework of the dma address unmap
implementation.

- In the course of testing the unmap rework, a collection of
enhancements to dmatest fell out. Notably basic performance
statistics, and fixed / enhanced test control through new module
parameters 'run', 'wait', 'noverify', and 'verbose'. Thanks to
Andriy and Linus [Walleij] for their review.

- Testing the raid related corner cases of the unmap rework
triggered bugs in the recently added 16-source operation support
in the ioatdma driver.

- Some minor fixes / cleanups to mv_xor and ioatdma"

* 'next' of git://git.infradead.org/users/vkoul/slave-dma: (99 commits)
dma: mv_xor: Fix mis-usage of mmio 'base' and 'high_base' registers
dma: mv_xor: Remove unneeded NULL address check
ioat: fix ioat3_irq_reinit
ioat: kill msix_single_vector support
raid6test: add new corner case for ioatdma driver
ioatdma: clean up sed pool kmem_cache
ioatdma: fix selection of 16 vs 8 source path
ioatdma: fix sed pool selection
ioatdma: Fix bug in selftest after removal of DMA_MEMSET.
dmatest: verbose mode
dmatest: convert to dmaengine_unmap_data
dmatest: add a 'wait' parameter
dmatest: add basic performance metrics
dmatest: add support for skipping verification and random data setup
dmatest: use pseudo random numbers
dmatest: support xor-only, or pq-only channels in tests
dmatest: restore ability to start test at module load and init
dmatest: cleanup redundant "dmatest: " prefixes
dmatest: replace stored results mechanism, with uniform messages
Revert "dmatest: append verify result to results"
...

+1994 -2287
+1 -1
Documentation/devicetree/bindings/dma/atmel-dma.txt
···
 dependent:
 - bit 7-0: peripheral identifier for the hardware handshaking interface. The
   identifier can be different for tx and rx.
-- bit 11-8: FIFO configuration. 0 for half FIFO, 1 for ALAP, 1 for ASAP.
+- bit 11-8: FIFO configuration. 0 for half FIFO, 1 for ALAP, 2 for ASAP.
 
 Example:
 
+101 -37
Documentation/devicetree/bindings/powerpc/fsl/dma.txt
···
-* Freescale 83xx DMA Controller
+* Freescale DMA Controllers
 
-Freescale PowerPC 83xx have on chip general purpose DMA controllers.
+** Freescale Elo DMA Controller
+   This is a little-endian 4-channel DMA controller, used in Freescale mpc83xx
+   series chips such as mpc8315, mpc8349, mpc8379 etc.
 
 Required properties:
 
-- compatible        : compatible list, contains 2 entries, first is
-                      "fsl,CHIP-dma", where CHIP is the processor
-                      (mpc8349, mpc8360, etc.) and the second is
-                      "fsl,elo-dma"
-- reg               : <registers mapping for DMA general status reg>
-- ranges            : Should be defined as specified in 1) to describe the
-                      DMA controller channels.
+- compatible        : must include "fsl,elo-dma"
+- reg               : DMA General Status Register, i.e. DGSR which contains
+                      status for all the 4 DMA channels
+- ranges            : describes the mapping between the address space of the
+                      DMA channels and the address space of the DMA controller
 - cell-index        : controller index.  0 for controller @ 0x8100
-- interrupts        : <interrupt mapping for DMA IRQ>
+- interrupts        : interrupt specifier for DMA IRQ
 - interrupt-parent  : optional, if needed for interrupt mapping
 
-
 - DMA channel nodes:
-        - compatible        : compatible list, contains 2 entries, first is
-                              "fsl,CHIP-dma-channel", where CHIP is the processor
-                              (mpc8349, mpc8350, etc.) and the second is
-                              "fsl,elo-dma-channel". However, see note below.
-        - reg               : <registers mapping for channel>
-        - cell-index        : dma channel index starts at 0.
+        - compatible        : must include "fsl,elo-dma-channel"
+                              However, see note below.
+        - reg               : DMA channel specific registers
+        - cell-index        : DMA channel index starts at 0.
 
 Optional properties:
-        - interrupts        : <interrupt mapping for DMA channel IRQ>
-                              (on 83xx this is expected to be identical to
-                              the interrupts property of the parent node)
+        - interrupts        : interrupt specifier for DMA channel IRQ
+                              (on 83xx this is expected to be identical to
+                              the interrupts property of the parent node)
         - interrupt-parent  : optional, if needed for interrupt mapping
 
 Example:
···
        };
 };
 
-* Freescale 85xx/86xx DMA Controller
-
-Freescale PowerPC 85xx/86xx have on chip general purpose DMA controllers.
+** Freescale EloPlus DMA Controller
+   This is a 4-channel DMA controller with extended addresses and chaining,
+   mainly used in Freescale mpc85xx/86xx, Pxxx and BSC series chips, such as
+   mpc8540, mpc8641 p4080, bsc9131 etc.
 
 Required properties:
 
-- compatible        : compatible list, contains 2 entries, first is
-                      "fsl,CHIP-dma", where CHIP is the processor
-                      (mpc8540, mpc8540, etc.) and the second is
-                      "fsl,eloplus-dma"
-- reg               : <registers mapping for DMA general status reg>
+- compatible        : must include "fsl,eloplus-dma"
+- reg               : DMA General Status Register, i.e. DGSR which contains
+                      status for all the 4 DMA channels
 - cell-index        : controller index.  0 for controller @ 0x21000,
                       1 for controller @ 0xc000
-- ranges            : Should be defined as specified in 1) to describe the
-                      DMA controller channels.
+- ranges            : describes the mapping between the address space of the
+                      DMA channels and the address space of the DMA controller
 
 - DMA channel nodes:
-        - compatible        : compatible list, contains 2 entries, first is
-                              "fsl,CHIP-dma-channel", where CHIP is the processor
-                              (mpc8540, mpc8560, etc.) and the second is
-                              "fsl,eloplus-dma-channel". However, see note below.
-        - cell-index        : dma channel index starts at 0.
-        - reg               : <registers mapping for channel>
-        - interrupts        : <interrupt mapping for DMA channel IRQ>
+        - compatible        : must include "fsl,eloplus-dma-channel"
+                              However, see note below.
+        - cell-index        : DMA channel index starts at 0.
+        - reg               : DMA channel specific registers
+        - interrupts        : interrupt specifier for DMA channel IRQ
         - interrupt-parent  : optional, if needed for interrupt mapping
 
 Example:
···
                interrupts = <23 2>;
        };
 };
+
+** Freescale Elo3 DMA Controller
+   DMA controller which has same function as EloPlus except that Elo3 has 8
+   channels while EloPlus has only 4, it is used in Freescale Txxx and Bxxx
+   series chips, such as t1040, t4240, b4860.
+
+Required properties:
+
+- compatible        : must include "fsl,elo3-dma"
+- reg               : contains two entries for DMA General Status Registers,
+                      i.e. DGSR0 which includes status for channel 1~4, and
+                      DGSR1 for channel 5~8
+- ranges            : describes the mapping between the address space of the
+                      DMA channels and the address space of the DMA controller
+
+- DMA channel nodes:
+        - compatible        : must include "fsl,eloplus-dma-channel"
+        - reg               : DMA channel specific registers
+        - interrupts        : interrupt specifier for DMA channel IRQ
+        - interrupt-parent  : optional, if needed for interrupt mapping
+
+Example:
+dma@100300 {
+        #address-cells = <1>;
+        #size-cells = <1>;
+        compatible = "fsl,elo3-dma";
+        reg = <0x100300 0x4>,
+              <0x100600 0x4>;
+        ranges = <0x0 0x100100 0x500>;
+        dma-channel@0 {
+                compatible = "fsl,eloplus-dma-channel";
+                reg = <0x0 0x80>;
+                interrupts = <28 2 0 0>;
+        };
+        dma-channel@80 {
+                compatible = "fsl,eloplus-dma-channel";
+                reg = <0x80 0x80>;
+                interrupts = <29 2 0 0>;
+        };
+        dma-channel@100 {
+                compatible = "fsl,eloplus-dma-channel";
+                reg = <0x100 0x80>;
+                interrupts = <30 2 0 0>;
+        };
+        dma-channel@180 {
+                compatible = "fsl,eloplus-dma-channel";
+                reg = <0x180 0x80>;
+                interrupts = <31 2 0 0>;
+        };
+        dma-channel@300 {
+                compatible = "fsl,eloplus-dma-channel";
+                reg = <0x300 0x80>;
+                interrupts = <76 2 0 0>;
+        };
+        dma-channel@380 {
+                compatible = "fsl,eloplus-dma-channel";
+                reg = <0x380 0x80>;
+                interrupts = <77 2 0 0>;
+        };
+        dma-channel@400 {
+                compatible = "fsl,eloplus-dma-channel";
+                reg = <0x400 0x80>;
+                interrupts = <78 2 0 0>;
+        };
+        dma-channel@480 {
+                compatible = "fsl,eloplus-dma-channel";
+                reg = <0x480 0x80>;
+                interrupts = <79 2 0 0>;
+        };
+};
 
 Note on DMA channel compatible properties: The compatible property must say
 "fsl,elo-dma-channel" or "fsl,eloplus-dma-channel" to be used by the Elo DMA
+39 -29
Documentation/dmatest.txt
···
 
 Part 2 - When dmatest is built as a module...
 
-After mounting debugfs and loading the module, the /sys/kernel/debug/dmatest
-folder with nodes will be created. There are two important files located. First
-is the 'run' node that controls run and stop phases of the test, and the second
-one, 'results', is used to get the test case results.
-
-Note that in this case test will not run on load automatically.
-
 Example of usage:
+        % modprobe dmatest channel=dma0chan0 timeout=2000 iterations=1 run=1
+
+...or:
+        % modprobe dmatest
         % echo dma0chan0 > /sys/module/dmatest/parameters/channel
         % echo 2000 > /sys/module/dmatest/parameters/timeout
         % echo 1 > /sys/module/dmatest/parameters/iterations
-        % echo 1 > /sys/kernel/debug/dmatest/run
+        % echo 1 > /sys/module/dmatest/parameters/run
+
+...or on the kernel command line:
+
+        dmatest.channel=dma0chan0 dmatest.timeout=2000 dmatest.iterations=1 dmatest.run=1
 
 Hint: available channel list could be extracted by running the following
 command:
         % ls -1 /sys/class/dma/
 
-After a while you will start to get messages about current status or error like
-in the original code.
+Once started a message like "dmatest: Started 1 threads using dma0chan0" is
+emitted.  After that only test failure messages are reported until the test
+stops.
 
 Note that running a new test will not stop any in progress test.
 
-The following command should return actual state of the test.
-        % cat /sys/kernel/debug/dmatest/run
+The following command returns the state of the test.
+        % cat /sys/module/dmatest/parameters/run
 
-To wait for test done the user may perform a busy loop that checks the state.
+To wait for test completion userpace can poll 'run' until it is false, or use
+the wait parameter.  Specifying 'wait=1' when loading the module causes module
+initialization to pause until a test run has completed, while reading
+/sys/module/dmatest/parameters/wait waits for any running test to complete
+before returning.  For example, the following scripts wait for 42 tests
+to complete before exiting.  Note that if 'iterations' is set to 'infinite' then
+waiting is disabled.
 
-        % while [ $(cat /sys/kernel/debug/dmatest/run) = "Y" ]
-        > do
-        >       echo -n "."
-        >       sleep 1
-        > done
-        > echo
+Example:
+        % modprobe dmatest run=1 iterations=42 wait=1
+        % modprobe -r dmatest
+...or:
+        % modprobe dmatest run=1 iterations=42
+        % cat /sys/module/dmatest/parameters/wait
+        % modprobe -r dmatest
 
 Part 3 - When built-in in the kernel...
 
···
 
 Part 4 - Gathering the test results
 
-The module provides a storage for the test results in the memory. The gathered
-data could be used after test is done.
+Test results are printed to the kernel log buffer with the format:
 
-The special file 'results' in the debugfs represents gathered data of the in
-progress test. The messages collected are printed to the kernel log as well.
+"dmatest: result <channel>: <test id>: '<error msg>' with src_off=<val> dst_off=<val> len=<val> (<err code>)"
 
 Example of output:
-        % cat /sys/kernel/debug/dmatest/results
-        dma0chan0-copy0: #1: No errors with src_off=0x7bf dst_off=0x8ad len=0x3fea (0)
+        % dmesg | tail -n 1
+        dmatest: result dma0chan0-copy0: #1: No errors with src_off=0x7bf dst_off=0x8ad len=0x3fea (0)
 
 The message format is unified across the different types of errors.  A number in
 the parens represents additional information, e.g. error code, error counter,
-or status.
+or status.  A test thread also emits a summary line at completion listing the
+number of tests executed, number that failed, and a result code.
 
-Comparison between buffers is stored to the dedicated structure.
+Example:
+        % dmesg | tail -n 1
+        dmatest: dma0chan0-copy0: summary 1 test, 0 failures 1000 iops 100000 KB/s (0)
 
-Note that the verify result is now accessible only via file 'results' in the
-debugfs.
+The details of a data miscompare error are also emitted, but do not follow the
+above format.
+2 -2
arch/arm/common/edma.c
···
                                        BIT(slot));
                        if (edma_cc[ctlr]->intr_data[channel].callback)
                                edma_cc[ctlr]->intr_data[channel].callback(
-                                       channel, DMA_COMPLETE,
+                                       channel, EDMA_DMA_COMPLETE,
                                        edma_cc[ctlr]->intr_data[channel].data);
                }
        } while (sh_ipr);
···
                                                callback) {
                                        edma_cc[ctlr]->intr_data[k].
                                        callback(k,
-                                       DMA_CC_ERROR,
+                                       EDMA_DMA_CC_ERROR,
                                        edma_cc[ctlr]->intr_data
                                        [k].data);
                                }
-30
arch/arm/include/asm/hardware/iop3xx-adma.h
···
        return slot_cnt;
 }
 
-static inline int iop_desc_is_pq(struct iop_adma_desc_slot *desc)
-{
-       return 0;
-}
-
-static inline u32 iop_desc_get_dest_addr(struct iop_adma_desc_slot *desc,
-                                       struct iop_adma_chan *chan)
-{
-       union iop3xx_desc hw_desc = { .ptr = desc->hw_desc, };
-
-       switch (chan->device->id) {
-       case DMA0_ID:
-       case DMA1_ID:
-               return hw_desc.dma->dest_addr;
-       case AAU_ID:
-               return hw_desc.aau->dest_addr;
-       default:
-               BUG();
-       }
-       return 0;
-}
-
-
-static inline u32 iop_desc_get_qdest_addr(struct iop_adma_desc_slot *desc,
-                                       struct iop_adma_chan *chan)
-{
-       BUG();
-       return 0;
-}
-
 static inline u32 iop_desc_get_byte_count(struct iop_adma_desc_slot *desc,
                                        struct iop_adma_chan *chan)
 {
-4
arch/arm/include/asm/hardware/iop_adma.h
···
  * @slot_cnt: total slots used in an transaction (group of operations)
  * @slots_per_op: number of slots per operation
  * @idx: pool index
- * @unmap_src_cnt: number of xor sources
- * @unmap_len: transaction bytecount
  * @tx_list: list of descriptors that are associated with one operation
  * @async_tx: support for the async_tx api
  * @group_list: list of slots that make up a multi-descriptor transaction
···
        u16 slot_cnt;
        u16 slots_per_op;
        u16 idx;
-       u16 unmap_src_cnt;
-       size_t unmap_len;
        struct list_head tx_list;
        struct dma_async_tx_descriptor async_tx;
        union {
-26
arch/arm/mach-iop13xx/include/mach/adma.h
···
 #define iop_chan_pq_slot_count iop_chan_xor_slot_count
 #define iop_chan_pq_zero_sum_slot_count iop_chan_xor_slot_count
 
-static inline u32 iop_desc_get_dest_addr(struct iop_adma_desc_slot *desc,
-                                       struct iop_adma_chan *chan)
-{
-       struct iop13xx_adma_desc_hw *hw_desc = desc->hw_desc;
-       return hw_desc->dest_addr;
-}
-
-static inline u32 iop_desc_get_qdest_addr(struct iop_adma_desc_slot *desc,
-                                       struct iop_adma_chan *chan)
-{
-       struct iop13xx_adma_desc_hw *hw_desc = desc->hw_desc;
-       return hw_desc->q_dest_addr;
-}
-
 static inline u32 iop_desc_get_byte_count(struct iop_adma_desc_slot *desc,
                                        struct iop_adma_chan *chan)
 {
···
        u_desc_ctrl.field.p_xfer_dis = !!(flags & DMA_PREP_PQ_DISABLE_P);
        u_desc_ctrl.field.int_en = flags & DMA_PREP_INTERRUPT;
        hw_desc->desc_ctrl = u_desc_ctrl.value;
-}
-
-static inline int iop_desc_is_pq(struct iop_adma_desc_slot *desc)
-{
-       struct iop13xx_adma_desc_hw *hw_desc = desc->hw_desc;
-       union {
-               u32 value;
-               struct iop13xx_adma_desc_ctrl field;
-       } u_desc_ctrl;
-
-       u_desc_ctrl.value = hw_desc->desc_ctrl;
-       return u_desc_ctrl.field.pq_xfer_en;
 }
 
 static inline void
+2 -2
arch/powerpc/boot/dts/fsl/b4si-post.dtsi
···
                reg = <0xe2000 0x1000>;
        };
 
-/include/ "qoriq-dma-0.dtsi"
+/include/ "elo3-dma-0.dtsi"
        dma@100300 {
                fsl,iommu-parent = <&pamu0>;
                fsl,liodn-reg = <&guts 0x580>; /* DMA1LIODNR */
        };
 
-/include/ "qoriq-dma-1.dtsi"
+/include/ "elo3-dma-1.dtsi"
        dma@101300 {
                fsl,iommu-parent = <&pamu0>;
                fsl,liodn-reg = <&guts 0x584>; /* DMA2LIODNR */
+82
arch/powerpc/boot/dts/fsl/elo3-dma-0.dtsi
···
+/*
+ * QorIQ Elo3 DMA device tree stub [ controller @ offset 0x100000 ]
+ *
+ * Copyright 2013 Freescale Semiconductor Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+dma0: dma@100300 {
+        #address-cells = <1>;
+        #size-cells = <1>;
+        compatible = "fsl,elo3-dma";
+        reg = <0x100300 0x4>,
+              <0x100600 0x4>;
+        ranges = <0x0 0x100100 0x500>;
+        dma-channel@0 {
+                compatible = "fsl,eloplus-dma-channel";
+                reg = <0x0 0x80>;
+                interrupts = <28 2 0 0>;
+        };
+        dma-channel@80 {
+                compatible = "fsl,eloplus-dma-channel";
+                reg = <0x80 0x80>;
+                interrupts = <29 2 0 0>;
+        };
+        dma-channel@100 {
+                compatible = "fsl,eloplus-dma-channel";
+                reg = <0x100 0x80>;
+                interrupts = <30 2 0 0>;
+        };
+        dma-channel@180 {
+                compatible = "fsl,eloplus-dma-channel";
+                reg = <0x180 0x80>;
+                interrupts = <31 2 0 0>;
+        };
+        dma-channel@300 {
+                compatible = "fsl,eloplus-dma-channel";
+                reg = <0x300 0x80>;
+                interrupts = <76 2 0 0>;
+        };
+        dma-channel@380 {
+                compatible = "fsl,eloplus-dma-channel";
+                reg = <0x380 0x80>;
+                interrupts = <77 2 0 0>;
+        };
+        dma-channel@400 {
+                compatible = "fsl,eloplus-dma-channel";
+                reg = <0x400 0x80>;
+                interrupts = <78 2 0 0>;
+        };
+        dma-channel@480 {
+                compatible = "fsl,eloplus-dma-channel";
+                reg = <0x480 0x80>;
+                interrupts = <79 2 0 0>;
+        };
+};
+82
arch/powerpc/boot/dts/fsl/elo3-dma-1.dtsi
···
+/*
+ * QorIQ Elo3 DMA device tree stub [ controller @ offset 0x101000 ]
+ *
+ * Copyright 2013 Freescale Semiconductor Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+dma1: dma@101300 {
+        #address-cells = <1>;
+        #size-cells = <1>;
+        compatible = "fsl,elo3-dma";
+        reg = <0x101300 0x4>,
+              <0x101600 0x4>;
+        ranges = <0x0 0x101100 0x500>;
+        dma-channel@0 {
+                compatible = "fsl,eloplus-dma-channel";
+                reg = <0x0 0x80>;
+                interrupts = <32 2 0 0>;
+        };
+        dma-channel@80 {
+                compatible = "fsl,eloplus-dma-channel";
+                reg = <0x80 0x80>;
+                interrupts = <33 2 0 0>;
+        };
+        dma-channel@100 {
+                compatible = "fsl,eloplus-dma-channel";
+                reg = <0x100 0x80>;
+                interrupts = <34 2 0 0>;
+        };
+        dma-channel@180 {
+                compatible = "fsl,eloplus-dma-channel";
+                reg = <0x180 0x80>;
+                interrupts = <35 2 0 0>;
+        };
+        dma-channel@300 {
+                compatible = "fsl,eloplus-dma-channel";
+                reg = <0x300 0x80>;
+                interrupts = <80 2 0 0>;
+        };
+        dma-channel@380 {
+                compatible = "fsl,eloplus-dma-channel";
+                reg = <0x380 0x80>;
+                interrupts = <81 2 0 0>;
+        };
+        dma-channel@400 {
+                compatible = "fsl,eloplus-dma-channel";
+                reg = <0x400 0x80>;
+                interrupts = <82 2 0 0>;
+        };
+        dma-channel@480 {
+                compatible = "fsl,eloplus-dma-channel";
+                reg = <0x480 0x80>;
+                interrupts = <83 2 0 0>;
+        };
+};
+2 -2
arch/powerpc/boot/dts/fsl/t4240si-post.dtsi
···
                reg = <0xea000 0x4000>;
        };
 
-/include/ "qoriq-dma-0.dtsi"
-/include/ "qoriq-dma-1.dtsi"
+/include/ "elo3-dma-0.dtsi"
+/include/ "elo3-dma-1.dtsi"
 
 /include/ "qoriq-espi-0.dtsi"
        spi@110000 {
+19 -14
crypto/async_tx/async_memcpy.c
···
                                                      &dest, 1, &src, 1, len);
        struct dma_device *device = chan ? chan->device : NULL;
        struct dma_async_tx_descriptor *tx = NULL;
+       struct dmaengine_unmap_data *unmap = NULL;
 
-       if (device && is_dma_copy_aligned(device, src_offset, dest_offset, len)) {
-               dma_addr_t dma_dest, dma_src;
+       if (device)
+               unmap = dmaengine_get_unmap_data(device->dev, 2, GFP_NOIO);
+
+       if (unmap && is_dma_copy_aligned(device, src_offset, dest_offset, len)) {
                unsigned long dma_prep_flags = 0;
 
                if (submit->cb_fn)
                        dma_prep_flags |= DMA_PREP_INTERRUPT;
                if (submit->flags & ASYNC_TX_FENCE)
                        dma_prep_flags |= DMA_PREP_FENCE;
-               dma_dest = dma_map_page(device->dev, dest, dest_offset, len,
-                                       DMA_FROM_DEVICE);
 
-               dma_src = dma_map_page(device->dev, src, src_offset, len,
-                                      DMA_TO_DEVICE);
+               unmap->to_cnt = 1;
+               unmap->addr[0] = dma_map_page(device->dev, src, src_offset, len,
+                                             DMA_TO_DEVICE);
+               unmap->from_cnt = 1;
+               unmap->addr[1] = dma_map_page(device->dev, dest, dest_offset, len,
+                                             DMA_FROM_DEVICE);
+               unmap->len = len;
 
-               tx = device->device_prep_dma_memcpy(chan, dma_dest, dma_src,
-                                                   len, dma_prep_flags);
-               if (!tx) {
-                       dma_unmap_page(device->dev, dma_dest, len,
-                                      DMA_FROM_DEVICE);
-                       dma_unmap_page(device->dev, dma_src, len,
-                                      DMA_TO_DEVICE);
-               }
+               tx = device->device_prep_dma_memcpy(chan, unmap->addr[1],
+                                                   unmap->addr[0], len,
+                                                   dma_prep_flags);
        }
 
        if (tx) {
                pr_debug("%s: (async) len: %zu\n", __func__, len);
+
+               dma_set_unmap(tx, unmap);
                async_tx_submit(chan, tx, submit);
        } else {
                void *dest_buf, *src_buf;
···
 
                async_tx_sync_epilog(submit);
        }
+
+       dmaengine_unmap_put(unmap);
 
        return tx;
 }
+99 -75
crypto/async_tx/async_pq.c
··· 46 46 * do_async_gen_syndrome - asynchronously calculate P and/or Q 47 47 */ 48 48 static __async_inline struct dma_async_tx_descriptor * 49 - do_async_gen_syndrome(struct dma_chan *chan, struct page **blocks, 50 - const unsigned char *scfs, unsigned int offset, int disks, 51 - size_t len, dma_addr_t *dma_src, 49 + do_async_gen_syndrome(struct dma_chan *chan, 50 + const unsigned char *scfs, int disks, 51 + struct dmaengine_unmap_data *unmap, 52 + enum dma_ctrl_flags dma_flags, 52 53 struct async_submit_ctl *submit) 53 54 { 54 55 struct dma_async_tx_descriptor *tx = NULL; 55 56 struct dma_device *dma = chan->device; 56 - enum dma_ctrl_flags dma_flags = 0; 57 57 enum async_tx_flags flags_orig = submit->flags; 58 58 dma_async_tx_callback cb_fn_orig = submit->cb_fn; 59 59 dma_async_tx_callback cb_param_orig = submit->cb_param; 60 60 int src_cnt = disks - 2; 61 - unsigned char coefs[src_cnt]; 62 61 unsigned short pq_src_cnt; 63 62 dma_addr_t dma_dest[2]; 64 63 int src_off = 0; 65 - int idx; 66 - int i; 67 64 68 - /* DMAs use destinations as sources, so use BIDIRECTIONAL mapping */ 69 - if (P(blocks, disks)) 70 - dma_dest[0] = dma_map_page(dma->dev, P(blocks, disks), offset, 71 - len, DMA_BIDIRECTIONAL); 72 - else 73 - dma_flags |= DMA_PREP_PQ_DISABLE_P; 74 - if (Q(blocks, disks)) 75 - dma_dest[1] = dma_map_page(dma->dev, Q(blocks, disks), offset, 76 - len, DMA_BIDIRECTIONAL); 77 - else 78 - dma_flags |= DMA_PREP_PQ_DISABLE_Q; 79 - 80 - /* convert source addresses being careful to collapse 'empty' 81 - * sources and update the coefficients accordingly 82 - */ 83 - for (i = 0, idx = 0; i < src_cnt; i++) { 84 - if (blocks[i] == NULL) 85 - continue; 86 - dma_src[idx] = dma_map_page(dma->dev, blocks[i], offset, len, 87 - DMA_TO_DEVICE); 88 - coefs[idx] = scfs[i]; 89 - idx++; 90 - } 91 - src_cnt = idx; 65 + if (submit->flags & ASYNC_TX_FENCE) 66 + dma_flags |= DMA_PREP_FENCE; 92 67 93 68 while (src_cnt > 0) { 94 69 submit->flags = flags_orig; ··· 75 100 if (src_cnt > 
 pq_src_cnt) {
 			submit->flags &= ~ASYNC_TX_ACK;
 			submit->flags |= ASYNC_TX_FENCE;
-			dma_flags |= DMA_COMPL_SKIP_DEST_UNMAP;
 			submit->cb_fn = NULL;
 			submit->cb_param = NULL;
 		} else {
-			dma_flags &= ~DMA_COMPL_SKIP_DEST_UNMAP;
 			submit->cb_fn = cb_fn_orig;
 			submit->cb_param = cb_param_orig;
 			if (cb_fn_orig)
 				dma_flags |= DMA_PREP_INTERRUPT;
 		}
-		if (submit->flags & ASYNC_TX_FENCE)
-			dma_flags |= DMA_PREP_FENCE;
 
-		/* Since we have clobbered the src_list we are committed
-		 * to doing this asynchronously.  Drivers force forward
-		 * progress in case they can not provide a descriptor
+		/* Drivers force forward progress in case they can not provide
+		 * a descriptor
 		 */
 		for (;;) {
+			dma_dest[0] = unmap->addr[disks - 2];
+			dma_dest[1] = unmap->addr[disks - 1];
 			tx = dma->device_prep_dma_pq(chan, dma_dest,
-						     &dma_src[src_off],
+						     &unmap->addr[src_off],
 						     pq_src_cnt,
-						     &coefs[src_off], len,
+						     &scfs[src_off], unmap->len,
 						     dma_flags);
 			if (likely(tx))
 				break;
···
 			dma_async_issue_pending(chan);
 		}
 
+		dma_set_unmap(tx, unmap);
 		async_tx_submit(chan, tx, submit);
 		submit->depend_tx = tx;
···
  * set to NULL those buffers will be replaced with the raid6_zero_page
  * in the synchronous path and omitted in the hardware-asynchronous
  * path.
- *
- * 'blocks' note: if submit->scribble is NULL then the contents of
- * 'blocks' may be overwritten to perform address conversions
- * (dma_map_page() or page_address()).
  */
 struct dma_async_tx_descriptor *
 async_gen_syndrome(struct page **blocks, unsigned int offset, int disks,
···
 						      &P(blocks, disks), 2,
 						      blocks, src_cnt, len);
 	struct dma_device *device = chan ? chan->device : NULL;
-	dma_addr_t *dma_src = NULL;
+	struct dmaengine_unmap_data *unmap = NULL;
 
 	BUG_ON(disks > 255 || !(P(blocks, disks) || Q(blocks, disks)));
 
-	if (submit->scribble)
-		dma_src = submit->scribble;
-	else if (sizeof(dma_addr_t) <= sizeof(struct page *))
-		dma_src = (dma_addr_t *) blocks;
+	if (device)
+		unmap = dmaengine_get_unmap_data(device->dev, disks, GFP_NOIO);
 
-	if (dma_src && device &&
+	if (unmap &&
 	    (src_cnt <= dma_maxpq(device, 0) ||
 	     dma_maxpq(device, DMA_PREP_CONTINUE) > 0) &&
 	    is_dma_pq_aligned(device, offset, 0, len)) {
+		struct dma_async_tx_descriptor *tx;
+		enum dma_ctrl_flags dma_flags = 0;
+		unsigned char coefs[src_cnt];
+		int i, j;
+
 		/* run the p+q asynchronously */
 		pr_debug("%s: (async) disks: %d len: %zu\n",
 			 __func__, disks, len);
-		return do_async_gen_syndrome(chan, blocks, raid6_gfexp, offset,
-					     disks, len, dma_src, submit);
+
+		/* convert source addresses being careful to collapse 'empty'
+		 * sources and update the coefficients accordingly
+		 */
+		unmap->len = len;
+		for (i = 0, j = 0; i < src_cnt; i++) {
+			if (blocks[i] == NULL)
+				continue;
+			unmap->addr[j] = dma_map_page(device->dev, blocks[i], offset,
+						      len, DMA_TO_DEVICE);
+			coefs[j] = raid6_gfexp[i];
+			unmap->to_cnt++;
+			j++;
+		}
+
+		/*
+		 * DMAs use destinations as sources,
+		 * so use BIDIRECTIONAL mapping
+		 */
+		unmap->bidi_cnt++;
+		if (P(blocks, disks))
+			unmap->addr[j++] = dma_map_page(device->dev, P(blocks, disks),
+							offset, len, DMA_BIDIRECTIONAL);
+		else {
+			unmap->addr[j++] = 0;
+			dma_flags |= DMA_PREP_PQ_DISABLE_P;
+		}
+
+		unmap->bidi_cnt++;
+		if (Q(blocks, disks))
+			unmap->addr[j++] = dma_map_page(device->dev, Q(blocks, disks),
+							offset, len, DMA_BIDIRECTIONAL);
+		else {
+			unmap->addr[j++] = 0;
+			dma_flags |= DMA_PREP_PQ_DISABLE_Q;
+		}
+
+		tx = do_async_gen_syndrome(chan, coefs, j, unmap, dma_flags, submit);
+		dmaengine_unmap_put(unmap);
+		return tx;
 	}
+
+	dmaengine_unmap_put(unmap);
 
 	/* run the pq synchronously */
 	pr_debug("%s: (sync) disks: %d len: %zu\n", __func__, disks, len);
···
 	struct dma_async_tx_descriptor *tx;
 	unsigned char coefs[disks-2];
 	enum dma_ctrl_flags dma_flags = submit->cb_fn ? DMA_PREP_INTERRUPT : 0;
-	dma_addr_t *dma_src = NULL;
-	int src_cnt = 0;
+	struct dmaengine_unmap_data *unmap = NULL;
 
 	BUG_ON(disks < 4);
 
-	if (submit->scribble)
-		dma_src = submit->scribble;
-	else if (sizeof(dma_addr_t) <= sizeof(struct page *))
-		dma_src = (dma_addr_t *) blocks;
+	if (device)
+		unmap = dmaengine_get_unmap_data(device->dev, disks, GFP_NOIO);
 
-	if (dma_src && device && disks <= dma_maxpq(device, 0) &&
+	if (unmap && disks <= dma_maxpq(device, 0) &&
 	    is_dma_pq_aligned(device, offset, 0, len)) {
 		struct device *dev = device->dev;
-		dma_addr_t *pq = &dma_src[disks-2];
-		int i;
+		dma_addr_t pq[2];
+		int i, j = 0, src_cnt = 0;
 
 		pr_debug("%s: (async) disks: %d len: %zu\n",
 			 __func__, disks, len);
-		if (!P(blocks, disks))
+
+		unmap->len = len;
+		for (i = 0; i < disks-2; i++)
+			if (likely(blocks[i])) {
+				unmap->addr[j] = dma_map_page(dev, blocks[i],
+							      offset, len,
+							      DMA_TO_DEVICE);
+				coefs[j] = raid6_gfexp[i];
+				unmap->to_cnt++;
+				src_cnt++;
+				j++;
+			}
+
+		if (!P(blocks, disks)) {
+			pq[0] = 0;
 			dma_flags |= DMA_PREP_PQ_DISABLE_P;
-		else
+		} else {
 			pq[0] = dma_map_page(dev, P(blocks, disks),
 					     offset, len,
 					     DMA_TO_DEVICE);
-		if (!Q(blocks, disks))
+			unmap->addr[j++] = pq[0];
+			unmap->to_cnt++;
+		}
+		if (!Q(blocks, disks)) {
+			pq[1] = 0;
 			dma_flags |= DMA_PREP_PQ_DISABLE_Q;
-		else
+		} else {
 			pq[1] = dma_map_page(dev, Q(blocks, disks),
 					     offset, len,
 					     DMA_TO_DEVICE);
+			unmap->addr[j++] = pq[1];
+			unmap->to_cnt++;
+		}
 
 		if (submit->flags & ASYNC_TX_FENCE)
 			dma_flags |= DMA_PREP_FENCE;
-		for (i = 0; i < disks-2; i++)
-			if (likely(blocks[i])) {
-				dma_src[src_cnt] = dma_map_page(dev, blocks[i],
-								offset, len,
-								DMA_TO_DEVICE);
-				coefs[src_cnt] = raid6_gfexp[i];
-				src_cnt++;
-			}
-
 		for (;;) {
-			tx = device->device_prep_dma_pq_val(chan, pq, dma_src,
+			tx = device->device_prep_dma_pq_val(chan, pq,
+							    unmap->addr,
 							    src_cnt,
 							    coefs,
 							    len, pqres,
···
 			async_tx_quiesce(&submit->depend_tx);
 			dma_async_issue_pending(chan);
 		}
+
+		dma_set_unmap(tx, unmap);
 		async_tx_submit(chan, tx, submit);
 
 		return tx;
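The rework above replaces the per-descriptor `DMA_COMPL_SKIP_*_UNMAP` flags with a reference-counted `struct dmaengine_unmap_data`: the submitter takes one reference, `dma_set_unmap()` hands a second to the descriptor, and the pages are unmapped only when the last reference drops. A minimal userspace model of that lifetime (hypothetical names; the real kernel API uses `kref` and per-size mempools):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified model of the refcounted unmap bundle. */
struct unmap_data {
	int refcount;
	size_t len;
	int to_cnt;   /* sources mapped DMA_TO_DEVICE */
	int bidi_cnt; /* destinations mapped DMA_BIDIRECTIONAL */
	int unmapped; /* set once the buffers are released */
};

static void unmap_get(struct unmap_data *u)
{
	u->refcount++;
}

static void unmap_put(struct unmap_data *u)
{
	/* stands in for the dma_unmap_page() calls on last put */
	if (--u->refcount == 0)
		u->unmapped = 1;
}

/* Mirrors the async_gen_syndrome() flow: the allocation returns with
 * one reference, dma_set_unmap() takes the descriptor's reference,
 * the submitter drops its own, completion drops the last one. */
static int submit_and_complete(struct unmap_data *u)
{
	unmap_get(u); /* dmaengine_get_unmap_data() */
	unmap_get(u); /* dma_set_unmap() */
	unmap_put(u); /* submitter's dmaengine_unmap_put() */
	unmap_put(u); /* completion path's put */
	return u->unmapped;
}
```

The point of the extra descriptor reference is that the submitter can drop its own reference immediately after submit, regardless of whether the transfer has completed yet.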
+43 -18
crypto/async_tx/async_raid6_recov.c
···
 #include <linux/dma-mapping.h>
 #include <linux/raid/pq.h>
 #include <linux/async_tx.h>
+#include <linux/dmaengine.h>
 
 static struct dma_async_tx_descriptor *
 async_sum_product(struct page *dest, struct page **srcs, unsigned char *coef,
···
 	struct dma_chan *chan = async_tx_find_channel(submit, DMA_PQ,
 						      &dest, 1, srcs, 2, len);
 	struct dma_device *dma = chan ? chan->device : NULL;
+	struct dmaengine_unmap_data *unmap = NULL;
 	const u8 *amul, *bmul;
 	u8 ax, bx;
 	u8 *a, *b, *c;
 
-	if (dma) {
-		dma_addr_t dma_dest[2];
-		dma_addr_t dma_src[2];
+	if (dma)
+		unmap = dmaengine_get_unmap_data(dma->dev, 3, GFP_NOIO);
+
+	if (unmap) {
 		struct device *dev = dma->dev;
+		dma_addr_t pq[2];
 		struct dma_async_tx_descriptor *tx;
 		enum dma_ctrl_flags dma_flags = DMA_PREP_PQ_DISABLE_P;
 
 		if (submit->flags & ASYNC_TX_FENCE)
 			dma_flags |= DMA_PREP_FENCE;
-		dma_dest[1] = dma_map_page(dev, dest, 0, len, DMA_BIDIRECTIONAL);
-		dma_src[0] = dma_map_page(dev, srcs[0], 0, len, DMA_TO_DEVICE);
-		dma_src[1] = dma_map_page(dev, srcs[1], 0, len, DMA_TO_DEVICE);
-		tx = dma->device_prep_dma_pq(chan, dma_dest, dma_src, 2, coef,
+		unmap->addr[0] = dma_map_page(dev, srcs[0], 0, len, DMA_TO_DEVICE);
+		unmap->addr[1] = dma_map_page(dev, srcs[1], 0, len, DMA_TO_DEVICE);
+		unmap->to_cnt = 2;
+
+		unmap->addr[2] = dma_map_page(dev, dest, 0, len, DMA_BIDIRECTIONAL);
+		unmap->bidi_cnt = 1;
+		/* engine only looks at Q, but expects it to follow P */
+		pq[1] = unmap->addr[2];
+
+		unmap->len = len;
+		tx = dma->device_prep_dma_pq(chan, pq, unmap->addr, 2, coef,
 					     len, dma_flags);
 		if (tx) {
+			dma_set_unmap(tx, unmap);
 			async_tx_submit(chan, tx, submit);
+			dmaengine_unmap_put(unmap);
 			return tx;
 		}
 
 		/* could not get a descriptor, unmap and fall through to
 		 * the synchronous path
 		 */
-		dma_unmap_page(dev, dma_dest[1], len, DMA_BIDIRECTIONAL);
-		dma_unmap_page(dev, dma_src[0], len, DMA_TO_DEVICE);
-		dma_unmap_page(dev, dma_src[1], len, DMA_TO_DEVICE);
+		dmaengine_unmap_put(unmap);
 	}
 
 	/* run the operation synchronously */
···
 	struct dma_chan *chan = async_tx_find_channel(submit, DMA_PQ,
 						      &dest, 1, &src, 1, len);
 	struct dma_device *dma = chan ? chan->device : NULL;
+	struct dmaengine_unmap_data *unmap = NULL;
 	const u8 *qmul; /* Q multiplier table */
 	u8 *d, *s;
 
-	if (dma) {
+	if (dma)
+		unmap = dmaengine_get_unmap_data(dma->dev, 3, GFP_NOIO);
+
+	if (unmap) {
 		dma_addr_t dma_dest[2];
-		dma_addr_t dma_src[1];
 		struct device *dev = dma->dev;
 		struct dma_async_tx_descriptor *tx;
 		enum dma_ctrl_flags dma_flags = DMA_PREP_PQ_DISABLE_P;
 
 		if (submit->flags & ASYNC_TX_FENCE)
 			dma_flags |= DMA_PREP_FENCE;
-		dma_dest[1] = dma_map_page(dev, dest, 0, len, DMA_BIDIRECTIONAL);
-		dma_src[0] = dma_map_page(dev, src, 0, len, DMA_TO_DEVICE);
-		tx = dma->device_prep_dma_pq(chan, dma_dest, dma_src, 1, &coef,
-					     len, dma_flags);
+		unmap->addr[0] = dma_map_page(dev, src, 0, len, DMA_TO_DEVICE);
+		unmap->to_cnt++;
+		unmap->addr[1] = dma_map_page(dev, dest, 0, len, DMA_BIDIRECTIONAL);
+		dma_dest[1] = unmap->addr[1];
+		unmap->bidi_cnt++;
+		unmap->len = len;
+
+		/* this looks funny, but the engine looks for Q at
+		 * dma_dest[1] and ignores dma_dest[0] as a dest
+		 * due to DMA_PREP_PQ_DISABLE_P
+		 */
+		tx = dma->device_prep_dma_pq(chan, dma_dest, unmap->addr,
+					     1, &coef, len, dma_flags);
+
 		if (tx) {
+			dma_set_unmap(tx, unmap);
+			dmaengine_unmap_put(unmap);
 			async_tx_submit(chan, tx, submit);
 			return tx;
 		}
···
 		/* could not get a descriptor, unmap and fall through to
 		 * the synchronous path
 		 */
-		dma_unmap_page(dev, dma_dest[1], len, DMA_BIDIRECTIONAL);
-		dma_unmap_page(dev, dma_src[0], len, DMA_TO_DEVICE);
+		dmaengine_unmap_put(unmap);
 	}
 
 	/* no channel available, or failed to allocate a descriptor, so
+2 -2
crypto/async_tx/async_tx.c
···
 		}
 		device->device_issue_pending(chan);
 	} else {
-		if (dma_wait_for_async_tx(depend_tx) != DMA_SUCCESS)
+		if (dma_wait_for_async_tx(depend_tx) != DMA_COMPLETE)
 			panic("%s: DMA error waiting for depend_tx\n",
 			      __func__);
 		tx->tx_submit(tx);
···
 	 * we are referring to the correct operation
 	 */
 	BUG_ON(async_tx_test_ack(*tx));
-	if (dma_wait_for_async_tx(*tx) != DMA_SUCCESS)
+	if (dma_wait_for_async_tx(*tx) != DMA_COMPLETE)
 		panic("%s: DMA error waiting for transaction\n",
 		      __func__);
 	async_tx_ack(*tx);
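The `DMA_SUCCESS` to `DMA_COMPLETE` rename reflects that a dmaengine can only report that a transfer finished, not that the transferred data is valid. A toy model of the wait-until-complete pattern above (illustrative only, not the kernel API):

```c
#include <assert.h>

/* The dmaengine status values; the rename is DMA_SUCCESS -> DMA_COMPLETE. */
enum dma_status { DMA_COMPLETE, DMA_IN_PROGRESS, DMA_PAUSED, DMA_ERROR };

/* hypothetical poll: counts down a fake in-flight tick counter */
static enum dma_status poll_status(int *ticks_left)
{
	if (*ticks_left > 0) {
		(*ticks_left)--;
		return DMA_IN_PROGRESS;
	}
	return DMA_COMPLETE;
}

/* spin until the (fake) transfer reports completion; returns the
 * number of polls that saw DMA_IN_PROGRESS */
static int wait_for_tx(int ticks)
{
	int spins = 0;

	while (poll_status(&ticks) != DMA_COMPLETE)
		spins++;
	return spins;
}
```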
+66 -57
crypto/async_tx/async_xor.c
···
 
 /* do_async_xor - dma map the pages and perform the xor with an engine */
 static __async_inline struct dma_async_tx_descriptor *
-do_async_xor(struct dma_chan *chan, struct page *dest, struct page **src_list,
-	     unsigned int offset, int src_cnt, size_t len, dma_addr_t *dma_src,
+do_async_xor(struct dma_chan *chan, struct dmaengine_unmap_data *unmap,
 	     struct async_submit_ctl *submit)
 {
 	struct dma_device *dma = chan->device;
 	struct dma_async_tx_descriptor *tx = NULL;
-	int src_off = 0;
-	int i;
 	dma_async_tx_callback cb_fn_orig = submit->cb_fn;
 	void *cb_param_orig = submit->cb_param;
 	enum async_tx_flags flags_orig = submit->flags;
-	enum dma_ctrl_flags dma_flags;
-	int xor_src_cnt = 0;
-	dma_addr_t dma_dest;
-
-	/* map the dest bidrectional in case it is re-used as a source */
-	dma_dest = dma_map_page(dma->dev, dest, offset, len, DMA_BIDIRECTIONAL);
-	for (i = 0; i < src_cnt; i++) {
-		/* only map the dest once */
-		if (!src_list[i])
-			continue;
-		if (unlikely(src_list[i] == dest)) {
-			dma_src[xor_src_cnt++] = dma_dest;
-			continue;
-		}
-		dma_src[xor_src_cnt++] = dma_map_page(dma->dev, src_list[i], offset,
-						      len, DMA_TO_DEVICE);
-	}
-	src_cnt = xor_src_cnt;
+	enum dma_ctrl_flags dma_flags = 0;
+	int src_cnt = unmap->to_cnt;
+	int xor_src_cnt;
+	dma_addr_t dma_dest = unmap->addr[unmap->to_cnt];
+	dma_addr_t *src_list = unmap->addr;
 
 	while (src_cnt) {
+		dma_addr_t tmp;
+
 		submit->flags = flags_orig;
-		dma_flags = 0;
 		xor_src_cnt = min(src_cnt, (int)dma->max_xor);
-		/* if we are submitting additional xors, leave the chain open,
-		 * clear the callback parameters, and leave the destination
-		 * buffer mapped
+		/* if we are submitting additional xors, leave the chain open
+		 * and clear the callback parameters
 		 */
 		if (src_cnt > xor_src_cnt) {
 			submit->flags &= ~ASYNC_TX_ACK;
 			submit->flags |= ASYNC_TX_FENCE;
-			dma_flags = DMA_COMPL_SKIP_DEST_UNMAP;
 			submit->cb_fn = NULL;
 			submit->cb_param = NULL;
 		} else {
···
 			dma_flags |= DMA_PREP_INTERRUPT;
 		if (submit->flags & ASYNC_TX_FENCE)
 			dma_flags |= DMA_PREP_FENCE;
-		/* Since we have clobbered the src_list we are committed
-		 * to doing this asynchronously.  Drivers force forward progress
-		 * in case they can not provide a descriptor
+
+		/* Drivers force forward progress in case they can not provide a
+		 * descriptor
 		 */
-		tx = dma->device_prep_dma_xor(chan, dma_dest, &dma_src[src_off],
-					      xor_src_cnt, len, dma_flags);
+		tmp = src_list[0];
+		if (src_list > unmap->addr)
+			src_list[0] = dma_dest;
+		tx = dma->device_prep_dma_xor(chan, dma_dest, src_list,
+					      xor_src_cnt, unmap->len,
+					      dma_flags);
+		src_list[0] = tmp;
 
 		if (unlikely(!tx))
 			async_tx_quiesce(&submit->depend_tx);
···
 		while (unlikely(!tx)) {
 			dma_async_issue_pending(chan);
 			tx = dma->device_prep_dma_xor(chan, dma_dest,
-						      &dma_src[src_off],
-						      xor_src_cnt, len,
+						      src_list,
+						      xor_src_cnt, unmap->len,
 						      dma_flags);
 		}
 
+		dma_set_unmap(tx, unmap);
 		async_tx_submit(chan, tx, submit);
 		submit->depend_tx = tx;
 
 		if (src_cnt > xor_src_cnt) {
 			/* drop completed sources */
 			src_cnt -= xor_src_cnt;
-			src_off += xor_src_cnt;
-
 			/* use the intermediate result a source */
-			dma_src[--src_off] = dma_dest;
 			src_cnt++;
+			src_list += xor_src_cnt - 1;
 		} else
 			break;
 	}
···
 	struct dma_chan *chan = async_tx_find_channel(submit, DMA_XOR,
 						      &dest, 1, src_list,
 						      src_cnt, len);
-	dma_addr_t *dma_src = NULL;
+	struct dma_device *device = chan ? chan->device : NULL;
+	struct dmaengine_unmap_data *unmap = NULL;
 
 	BUG_ON(src_cnt <= 1);
 
-	if (submit->scribble)
-		dma_src = submit->scribble;
-	else if (sizeof(dma_addr_t) <= sizeof(struct page *))
-		dma_src = (dma_addr_t *) src_list;
+	if (device)
+		unmap = dmaengine_get_unmap_data(device->dev, src_cnt+1, GFP_NOIO);
 
-	if (dma_src && chan && is_dma_xor_aligned(chan->device, offset, 0, len)) {
+	if (unmap && is_dma_xor_aligned(device, offset, 0, len)) {
+		struct dma_async_tx_descriptor *tx;
+		int i, j;
+
 		/* run the xor asynchronously */
 		pr_debug("%s (async): len: %zu\n", __func__, len);
 
-		return do_async_xor(chan, dest, src_list, offset, src_cnt, len,
-				    dma_src, submit);
+		unmap->len = len;
+		for (i = 0, j = 0; i < src_cnt; i++) {
+			if (!src_list[i])
+				continue;
+			unmap->to_cnt++;
+			unmap->addr[j++] = dma_map_page(device->dev, src_list[i],
+							offset, len, DMA_TO_DEVICE);
+		}
+
+		/* map it bidirectional as it may be re-used as a source */
+		unmap->addr[j] = dma_map_page(device->dev, dest, offset, len,
+					      DMA_BIDIRECTIONAL);
+		unmap->bidi_cnt = 1;
+
+		tx = do_async_xor(chan, unmap, submit);
+		dmaengine_unmap_put(unmap);
+		return tx;
 	} else {
+		dmaengine_unmap_put(unmap);
 		/* run the xor synchronously */
 		pr_debug("%s (sync): len: %zu\n", __func__, len);
 		WARN_ONCE(chan, "%s: no space for dma address conversion\n",
···
 	struct dma_chan *chan = xor_val_chan(submit, dest, src_list, src_cnt, len);
 	struct dma_device *device = chan ? chan->device : NULL;
 	struct dma_async_tx_descriptor *tx = NULL;
-	dma_addr_t *dma_src = NULL;
+	struct dmaengine_unmap_data *unmap = NULL;
 
 	BUG_ON(src_cnt <= 1);
 
-	if (submit->scribble)
-		dma_src = submit->scribble;
-	else if (sizeof(dma_addr_t) <= sizeof(struct page *))
-		dma_src = (dma_addr_t *) src_list;
+	if (device)
+		unmap = dmaengine_get_unmap_data(device->dev, src_cnt, GFP_NOIO);
 
-	if (dma_src && device && src_cnt <= device->max_xor &&
+	if (unmap && src_cnt <= device->max_xor &&
 	    is_dma_xor_aligned(device, offset, 0, len)) {
 		unsigned long dma_prep_flags = 0;
 		int i;
···
 			dma_prep_flags |= DMA_PREP_INTERRUPT;
 		if (submit->flags & ASYNC_TX_FENCE)
 			dma_prep_flags |= DMA_PREP_FENCE;
-		for (i = 0; i < src_cnt; i++)
-			dma_src[i] = dma_map_page(device->dev, src_list[i],
-						  offset, len, DMA_TO_DEVICE);
 
-		tx = device->device_prep_dma_xor_val(chan, dma_src, src_cnt,
+		for (i = 0; i < src_cnt; i++) {
+			unmap->addr[i] = dma_map_page(device->dev, src_list[i],
+						      offset, len, DMA_TO_DEVICE);
+			unmap->to_cnt++;
+		}
+		unmap->len = len;
+
+		tx = device->device_prep_dma_xor_val(chan, unmap->addr, src_cnt,
 						     len, result,
 						     dma_prep_flags);
 		if (unlikely(!tx)) {
···
 			while (!tx) {
 				dma_async_issue_pending(chan);
 				tx = device->device_prep_dma_xor_val(chan,
-					dma_src, src_cnt, len, result,
+					unmap->addr, src_cnt, len, result,
 					dma_prep_flags);
 			}
 		}
-
+		dma_set_unmap(tx, unmap);
 		async_tx_submit(chan, tx, submit);
 	} else {
 		enum async_tx_flags flags_orig = submit->flags;
···
 		async_tx_sync_epilog(submit);
 		submit->flags = flags_orig;
 	}
+	dmaengine_unmap_put(unmap);
 
 	return tx;
 }
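The do_async_xor() loop above now walks `unmap->addr` in place: each round consumes up to `max_xor` sources, and the running destination is spliced back in as source zero of the next round (with the displaced address saved and restored via `tmp`). The arithmetic of that chunking can be modeled in plain C, with byte xor standing in for the offloaded operation (a sketch, not the kernel code):

```c
#include <assert.h>

/* Model of do_async_xor()'s chunking: an engine limited to max_xor
 * sources handles src_cnt inputs by xor-ing max_xor at a time and
 * feeding the intermediate result back in as one source slot of the
 * next round. */
static unsigned char chunked_xor(const unsigned char *src, int src_cnt,
				 int max_xor)
{
	unsigned char dest = 0;
	int off = 0;
	int have_partial = 0; /* prior round's result must be folded in */

	while (src_cnt) {
		int n = src_cnt < max_xor ? src_cnt : max_xor;
		unsigned char acc = have_partial ? dest : 0;
		/* in continuation rounds one source slot is the old dest */
		int take = have_partial ? n - 1 : n;
		int i;

		for (i = 0; i < take; i++)
			acc ^= src[off + i];
		off += take;
		dest = acc;

		src_cnt -= n;
		if (src_cnt) {
			src_cnt++; /* the intermediate result re-enters */
			have_partial = 1;
		}
	}
	return dest;
}
```

However the inputs are chunked, the result is the xor of all sources, which is why the destination must stay mapped (bidirectionally) across rounds.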
+9 -1
crypto/async_tx/raid6test.c
···
 #undef pr
 #define pr(fmt, args...) pr_info("raid6test: " fmt, ##args)
 
-#define NDISKS 16 /* Including P and Q */
+#define NDISKS 64 /* Including P and Q */
 
 static struct page *dataptrs[NDISKS];
 static addr_conv_t addr_conv[NDISKS];
···
 		err += test(11, &tests);
 		err += test(12, &tests);
 	}
+
+	/* the 24 disk case is special for ioatdma as it is the boundary point
+	 * at which it needs to switch from 8-source ops to 16-source
+	 * ops for continuation (assumes DMA_HAS_PQ_CONTINUE is not set)
+	 */
+	if (NDISKS > 24)
+		err += test(24, &tests);
+
 	err += test(NDISKS, &tests);
 
 	pr("\n");
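raid6test drives `async_gen_syndrome()` over these disk counts. For reference, the P/Q syndrome being verified reduces, per byte column, to xor parity plus a Horner evaluation in GF(2^8) (a simplified sketch, not the kernel's table-driven `raid6_gfexp` implementation):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Multiply by x (i.e. 2) in GF(2^8) with the RAID-6 polynomial 0x11d. */
static uint8_t gf_mul2(uint8_t v)
{
	return (uint8_t)((v << 1) ^ ((v & 0x80) ? 0x1d : 0));
}

/* Compute P (xor parity) and Q = sum(d[i] * 2^i) over n data disks,
 * one byte column at a time -- the per-byte view of what
 * async_gen_syndrome() offloads per page. Horner's rule: start from
 * the highest-numbered disk and multiply by 2 each step. */
static void gen_syndrome(const uint8_t *d, size_t n, uint8_t *p, uint8_t *q)
{
	uint8_t wp = 0, wq = 0;
	size_t i;

	for (i = n; i-- > 0; ) {
		wp ^= d[i];
		wq = gf_mul2(wq) ^ d[i];
	}
	*p = wp;
	*q = wq;
}
```

Two independent syndromes are what let RAID-6 recover from any two missing disks, which is exactly the property the recovery tests above exercise.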
+1 -2
drivers/ata/pata_arasan_cf.c
···
 	struct dma_async_tx_descriptor *tx;
 	struct dma_chan *chan = acdev->dma_chan;
 	dma_cookie_t cookie;
-	unsigned long flags = DMA_PREP_INTERRUPT | DMA_COMPL_SKIP_SRC_UNMAP |
-		DMA_COMPL_SKIP_DEST_UNMAP;
+	unsigned long flags = DMA_PREP_INTERRUPT;
 	int ret = 0;
 
 	tx = chan->device->device_prep_dma_memcpy(chan, dest, src, len, flags);
+5 -4
drivers/dma/Kconfig
···
 	  Support the Atmel AHB DMA controller.
 
 config FSL_DMA
-	tristate "Freescale Elo and Elo Plus DMA support"
+	tristate "Freescale Elo series DMA support"
 	depends on FSL_SOC
 	select DMA_ENGINE
 	select ASYNC_TX_ENABLE_CHANNEL_SWITCH
 	---help---
-	  Enable support for the Freescale Elo and Elo Plus DMA controllers.
-	  The Elo is the DMA controller on some 82xx and 83xx parts, and the
-	  Elo Plus is the DMA controller on 85xx and 86xx parts.
+	  Enable support for the Freescale Elo series DMA controllers.
+	  The Elo is the DMA controller on some mpc82xx and mpc83xx parts, the
+	  EloPlus is on mpc85xx and mpc86xx and Pxxx parts, and the Elo3 is on
+	  some Txxx and Bxxx parts.
 
 config MPC512X_DMA
 	tristate "Freescale MPC512x built-in DMA engine support"
+4 -35
drivers/dma/amba-pl08x.c
···
 	kfree(txd);
 }
 
-static void pl08x_unmap_buffers(struct pl08x_txd *txd)
-{
-	struct device *dev = txd->vd.tx.chan->device->dev;
-	struct pl08x_sg *dsg;
-
-	if (!(txd->vd.tx.flags & DMA_COMPL_SKIP_SRC_UNMAP)) {
-		if (txd->vd.tx.flags & DMA_COMPL_SRC_UNMAP_SINGLE)
-			list_for_each_entry(dsg, &txd->dsg_list, node)
-				dma_unmap_single(dev, dsg->src_addr, dsg->len,
-						 DMA_TO_DEVICE);
-		else {
-			list_for_each_entry(dsg, &txd->dsg_list, node)
-				dma_unmap_page(dev, dsg->src_addr, dsg->len,
-					       DMA_TO_DEVICE);
-		}
-	}
-	if (!(txd->vd.tx.flags & DMA_COMPL_SKIP_DEST_UNMAP)) {
-		if (txd->vd.tx.flags & DMA_COMPL_DEST_UNMAP_SINGLE)
-			list_for_each_entry(dsg, &txd->dsg_list, node)
-				dma_unmap_single(dev, dsg->dst_addr, dsg->len,
-						 DMA_FROM_DEVICE);
-		else
-			list_for_each_entry(dsg, &txd->dsg_list, node)
-				dma_unmap_page(dev, dsg->dst_addr, dsg->len,
-					       DMA_FROM_DEVICE);
-	}
-}
-
 static void pl08x_desc_free(struct virt_dma_desc *vd)
 {
 	struct pl08x_txd *txd = to_pl08x_txd(&vd->tx);
 	struct pl08x_dma_chan *plchan = to_pl08x_chan(vd->tx.chan);
 
-	if (!plchan->slave)
-		pl08x_unmap_buffers(txd);
-
+	dma_descriptor_unmap(txd);
 	if (!txd->done)
 		pl08x_release_mux(plchan);
···
 	size_t bytes = 0;
 
 	ret = dma_cookie_status(chan, cookie, txstate);
-	if (ret == DMA_SUCCESS)
+	if (ret == DMA_COMPLETE)
 		return ret;
 
 	/*
···
 
 	spin_lock_irqsave(&plchan->vc.lock, flags);
 	ret = dma_cookie_status(chan, cookie, txstate);
-	if (ret != DMA_SUCCESS) {
+	if (ret != DMA_COMPLETE) {
 		vd = vchan_find_desc(&plchan->vc, cookie);
 		if (vd) {
 			/* On the issued list, so hasn't been processed yet */
···
 	writel(0x000000FF, pl08x->base + PL080_ERR_CLEAR);
 	writel(0x000000FF, pl08x->base + PL080_TC_CLEAR);
 
-	ret = request_irq(adev->irq[0], pl08x_irq, IRQF_DISABLED,
-			  DRIVER_NAME, pl08x);
+	ret = request_irq(adev->irq[0], pl08x_irq, 0, DRIVER_NAME, pl08x);
 	if (ret) {
 		dev_err(&adev->dev, "%s failed to request interrupt %d\n",
 			__func__, adev->irq[0]);
+2 -26
drivers/dma/at_hdmac.c
···
 	/* move myself to free_list */
 	list_move(&desc->desc_node, &atchan->free_list);
 
-	/* unmap dma addresses (not on slave channels) */
-	if (!atchan->chan_common.private) {
-		struct device *parent = chan2parent(&atchan->chan_common);
-		if (!(txd->flags & DMA_COMPL_SKIP_DEST_UNMAP)) {
-			if (txd->flags & DMA_COMPL_DEST_UNMAP_SINGLE)
-				dma_unmap_single(parent,
-						 desc->lli.daddr,
-						 desc->len, DMA_FROM_DEVICE);
-			else
-				dma_unmap_page(parent,
-					       desc->lli.daddr,
-					       desc->len, DMA_FROM_DEVICE);
-		}
-		if (!(txd->flags & DMA_COMPL_SKIP_SRC_UNMAP)) {
-			if (txd->flags & DMA_COMPL_SRC_UNMAP_SINGLE)
-				dma_unmap_single(parent,
-						 desc->lli.saddr,
-						 desc->len, DMA_TO_DEVICE);
-			else
-				dma_unmap_page(parent,
-					       desc->lli.saddr,
-					       desc->len, DMA_TO_DEVICE);
-		}
-	}
-
+	dma_descriptor_unmap(txd);
 	/* for cyclic transfers,
 	 * no need to replay callback function while stopping */
 	if (!atc_chan_is_cyclic(atchan)) {
···
 	int bytes = 0;
 
 	ret = dma_cookie_status(chan, cookie, txstate);
-	if (ret == DMA_SUCCESS)
+	if (ret == DMA_COMPLETE)
 		return ret;
 	/*
 	 * There's no point calculating the residue if there's
+2 -2
drivers/dma/coh901318.c
···
 	enum dma_status ret;
 
 	ret = dma_cookie_status(chan, cookie, txstate);
-	if (ret == DMA_SUCCESS)
+	if (ret == DMA_COMPLETE)
 		return ret;
 
 	dma_set_residue(txstate, coh901318_get_bytes_left(chan));
···
 	if (irq < 0)
 		return irq;
 
-	err = devm_request_irq(&pdev->dev, irq, dma_irq_handler, IRQF_DISABLED,
+	err = devm_request_irq(&pdev->dev, irq, dma_irq_handler, 0,
 			       "coh901318", base);
 	if (err)
 		return err;
+107 -73
drivers/dma/cppi41.c
···
 	const struct chan_queues *queues_rx;
 	const struct chan_queues *queues_tx;
 	struct chan_queues td_queue;
+
+	/* context for suspend/resume */
+	unsigned int dma_tdfdq;
 };
 
 #define FIST_COMPLETION_QUEUE	93
···
 	return val & ((1 << (DESC_LENGTH_BITS_NUM + 1)) - 1);
 }
 
+static u32 cppi41_pop_desc(struct cppi41_dd *cdd, unsigned queue_num)
+{
+	u32 desc;
+
+	desc = cppi_readl(cdd->qmgr_mem + QMGR_QUEUE_D(queue_num));
+	desc &= ~0x1f;
+	return desc;
+}
+
 static irqreturn_t cppi41_irq(int irq, void *data)
 {
 	struct cppi41_dd *cdd = data;
···
 		q_num = __fls(val);
 		val &= ~(1 << q_num);
 		q_num += 32 * i;
-		desc = cppi_readl(cdd->qmgr_mem + QMGR_QUEUE_D(q_num));
-		desc &= ~0x1f;
+		desc = cppi41_pop_desc(cdd, q_num);
 		c = desc_to_chan(cdd, desc);
 		if (WARN_ON(!c)) {
 			pr_err("%s() q %d desc %08x\n", __func__,
···
 
 	/* lock */
 	ret = dma_cookie_status(chan, cookie, txstate);
-	if (txstate && ret == DMA_SUCCESS)
+	if (txstate && ret == DMA_COMPLETE)
 		txstate->residue = c->residue;
 	/* unlock */
 
···
 	d->pd0 = DESC_TYPE_TEARD << DESC_TYPE;
 }
 
-static u32 cppi41_pop_desc(struct cppi41_dd *cdd, unsigned queue_num)
-{
-	u32 desc;
-
-	desc = cppi_readl(cdd->qmgr_mem + QMGR_QUEUE_D(queue_num));
-	desc &= ~0x1f;
-	return desc;
-}
-
 static int cppi41_tear_down_chan(struct cppi41_channel *c)
 {
 	struct cppi41_dd *cdd = c->cdd;
···
 		c->td_retry = 100;
 	}
 
-	if (!c->td_seen) {
-		unsigned td_comp_queue;
+	if (!c->td_seen || !c->td_desc_seen) {
 
-		if (c->is_tx)
-			td_comp_queue = cdd->td_queue.complete;
-		else
-			td_comp_queue = c->q_comp_num;
+		desc_phys = cppi41_pop_desc(cdd, cdd->td_queue.complete);
+		if (!desc_phys)
+			desc_phys = cppi41_pop_desc(cdd, c->q_comp_num);
 
-		desc_phys = cppi41_pop_desc(cdd, td_comp_queue);
-		if (desc_phys) {
-			__iormb();
-
-			if (desc_phys == td_desc_phys) {
-				u32 pd0;
-				pd0 = td->pd0;
-				WARN_ON((pd0 >> DESC_TYPE) != DESC_TYPE_TEARD);
-				WARN_ON(!c->is_tx && !(pd0 & TD_DESC_IS_RX));
-				WARN_ON((pd0 & 0x1f) != c->port_num);
-			} else {
-				WARN_ON_ONCE(1);
-			}
-			c->td_seen = 1;
-		}
-	}
-	if (!c->td_desc_seen) {
-		desc_phys = cppi41_pop_desc(cdd, c->q_comp_num);
-		if (desc_phys) {
-			__iormb();
-			WARN_ON(c->desc_phys != desc_phys);
+		if (desc_phys == c->desc_phys) {
 			c->td_desc_seen = 1;
+
+		} else if (desc_phys == td_desc_phys) {
+			u32 pd0;
+
+			__iormb();
+			pd0 = td->pd0;
+			WARN_ON((pd0 >> DESC_TYPE) != DESC_TYPE_TEARD);
+			WARN_ON(!c->is_tx && !(pd0 & TD_DESC_IS_RX));
+			WARN_ON((pd0 & 0x1f) != c->port_num);
+			c->td_seen = 1;
+		} else if (desc_phys) {
+			WARN_ON_ONCE(1);
 		}
 	}
 	c->td_retry--;
···
 
 	WARN_ON(!c->td_retry);
 	if (!c->td_desc_seen) {
-		desc_phys = cppi_readl(cdd->qmgr_mem + QMGR_QUEUE_D(c->q_num));
+		desc_phys = cppi41_pop_desc(cdd, c->q_num);
 		WARN_ON(!desc_phys);
 	}
 
···
 	}
 }
 
-static int cppi41_add_chans(struct platform_device *pdev, struct cppi41_dd *cdd)
+static int cppi41_add_chans(struct device *dev, struct cppi41_dd *cdd)
 {
 	struct cppi41_channel *cchan;
 	int i;
 	int ret;
 	u32 n_chans;
 
-	ret = of_property_read_u32(pdev->dev.of_node, "#dma-channels",
+	ret = of_property_read_u32(dev->of_node, "#dma-channels",
 			&n_chans);
 	if (ret)
 		return ret;
···
 	return -ENOMEM;
 }
 
-static void purge_descs(struct platform_device *pdev, struct cppi41_dd *cdd)
+static void purge_descs(struct device *dev, struct cppi41_dd *cdd)
 {
 	unsigned int mem_decs;
 	int i;
···
 		cppi_writel(0, cdd->qmgr_mem + QMGR_MEMBASE(i));
 		cppi_writel(0, cdd->qmgr_mem + QMGR_MEMCTRL(i));
 
-		dma_free_coherent(&pdev->dev, mem_decs, cdd->cd,
+		dma_free_coherent(dev, mem_decs, cdd->cd,
 				cdd->descs_phys);
 	}
 }
···
 	cppi_writel(0, cdd->sched_mem + DMA_SCHED_CTRL);
 }
 
-static void deinit_cpii41(struct platform_device *pdev, struct cppi41_dd *cdd)
+static void deinit_cppi41(struct device *dev, struct cppi41_dd *cdd)
 {
 	disable_sched(cdd);
 
-	purge_descs(pdev, cdd);
+	purge_descs(dev, cdd);
 
 	cppi_writel(0, cdd->qmgr_mem + QMGR_LRAM0_BASE);
 	cppi_writel(0, cdd->qmgr_mem + QMGR_LRAM0_BASE);
-	dma_free_coherent(&pdev->dev, QMGR_SCRATCH_SIZE, cdd->qmgr_scratch,
+	dma_free_coherent(dev, QMGR_SCRATCH_SIZE, cdd->qmgr_scratch,
 			cdd->scratch_phys);
 }
 
-static int init_descs(struct platform_device *pdev, struct cppi41_dd *cdd)
+static int init_descs(struct device *dev, struct cppi41_dd *cdd)
 {
 	unsigned int desc_size;
 	unsigned int mem_decs;
···
 		reg |= ilog2(ALLOC_DECS_NUM) - 5;
 
 		BUILD_BUG_ON(DESCS_AREAS != 1);
-		cdd->cd = dma_alloc_coherent(&pdev->dev, mem_decs,
+		cdd->cd = dma_alloc_coherent(dev, mem_decs,
 				&cdd->descs_phys, GFP_KERNEL);
 		if (!cdd->cd)
 			return -ENOMEM;
···
 	cppi_writel(reg, cdd->sched_mem + DMA_SCHED_CTRL);
 }
 
-static int init_cppi41(struct platform_device *pdev, struct cppi41_dd *cdd)
+static int init_cppi41(struct device *dev, struct cppi41_dd *cdd)
 {
 	int ret;
 
 	BUILD_BUG_ON(QMGR_SCRATCH_SIZE > ((1 << 14) - 1));
-	cdd->qmgr_scratch = dma_alloc_coherent(&pdev->dev, QMGR_SCRATCH_SIZE,
+	cdd->qmgr_scratch = dma_alloc_coherent(dev, QMGR_SCRATCH_SIZE,
 			&cdd->scratch_phys, GFP_KERNEL);
 	if (!cdd->qmgr_scratch)
 		return -ENOMEM;
···
 	cppi_writel(QMGR_SCRATCH_SIZE, cdd->qmgr_mem + QMGR_LRAM_SIZE);
 	cppi_writel(0, cdd->qmgr_mem + QMGR_LRAM1_BASE);
 
-	ret = init_descs(pdev, cdd);
+	ret = init_descs(dev, cdd);
 	if (ret)
 		goto err_td;
 
···
 	init_sched(cdd);
 	return 0;
 err_td:
-	deinit_cpii41(pdev, cdd);
+	deinit_cppi41(dev, cdd);
 	return ret;
 }
···
 };
 MODULE_DEVICE_TABLE(of, cppi41_dma_ids);
 
-static const struct cppi_glue_infos *get_glue_info(struct platform_device *pdev)
+static const struct cppi_glue_infos *get_glue_info(struct device *dev)
 {
 	const struct of_device_id *of_id;
 
-	of_id = of_match_node(cppi41_dma_ids, pdev->dev.of_node);
+	of_id = of_match_node(cppi41_dma_ids, dev->of_node);
 	if (!of_id)
 		return NULL;
 	return of_id->data;
···
 static int cppi41_dma_probe(struct platform_device *pdev)
 {
 	struct cppi41_dd *cdd;
+	struct device *dev = &pdev->dev;
 	const struct cppi_glue_infos *glue_info;
 	int irq;
 	int ret;
 
-	glue_info = get_glue_info(pdev);
+	glue_info = get_glue_info(dev);
 	if (!glue_info)
 		return -EINVAL;
···
 	cdd->ddev.device_issue_pending = cppi41_dma_issue_pending;
 	cdd->ddev.device_prep_slave_sg = cppi41_dma_prep_slave_sg;
 	cdd->ddev.device_control = cppi41_dma_control;
-	cdd->ddev.dev = &pdev->dev;
+	cdd->ddev.dev = dev;
 	INIT_LIST_HEAD(&cdd->ddev.channels);
 	cpp41_dma_info.dma_cap = cdd->ddev.cap_mask;
 
-	cdd->usbss_mem = of_iomap(pdev->dev.of_node, 0);
-	cdd->ctrl_mem = of_iomap(pdev->dev.of_node, 1);
-	cdd->sched_mem = of_iomap(pdev->dev.of_node, 2);
-	cdd->qmgr_mem = of_iomap(pdev->dev.of_node, 3);
+	cdd->usbss_mem = of_iomap(dev->of_node, 0);
+	cdd->ctrl_mem = of_iomap(dev->of_node, 1);
+	cdd->sched_mem = of_iomap(dev->of_node, 2);
+	cdd->qmgr_mem = of_iomap(dev->of_node, 3);
 
 	if (!cdd->usbss_mem || !cdd->ctrl_mem || !cdd->sched_mem ||
 			!cdd->qmgr_mem) {
···
 		goto err_remap;
 	}
 
-	pm_runtime_enable(&pdev->dev);
-	ret = pm_runtime_get_sync(&pdev->dev);
-	if (ret)
+	pm_runtime_enable(dev);
+	ret = pm_runtime_get_sync(dev);
+	if (ret < 0)
 		goto err_get_sync;
 
 	cdd->queues_rx = glue_info->queues_rx;
 	cdd->queues_tx = glue_info->queues_tx;
 	cdd->td_queue = glue_info->td_queue;
 
-	ret = init_cppi41(pdev, cdd);
+	ret = init_cppi41(dev, cdd);
 	if (ret)
 		goto err_init_cppi;
 
-	ret = cppi41_add_chans(pdev, cdd);
+	ret = cppi41_add_chans(dev, cdd);
 	if (ret)
 		goto err_chans;
 
-	irq = irq_of_parse_and_map(pdev->dev.of_node, 0);
+	irq = irq_of_parse_and_map(dev->of_node, 0);
 	if (!irq)
 		goto err_irq;
 
 	cppi_writel(USBSS_IRQ_PD_COMP, cdd->usbss_mem + USBSS_IRQ_ENABLER);
 
 	ret = request_irq(irq, glue_info->isr, IRQF_SHARED,
-			dev_name(&pdev->dev), cdd);
+			dev_name(dev), cdd);
 	if (ret)
 		goto err_irq;
 	cdd->irq = irq;
···
 	if (ret)
 		goto err_dma_reg;
 
-	ret = of_dma_controller_register(pdev->dev.of_node,
+	ret = of_dma_controller_register(dev->of_node,
 			cppi41_dma_xlate, &cpp41_dma_info);
 	if (ret)
 		goto err_of;
···
 	cppi_writel(0, cdd->usbss_mem + USBSS_IRQ_CLEARR);
 	cleanup_chans(cdd);
 err_chans:
-	deinit_cpii41(pdev, cdd);
+	deinit_cppi41(dev, cdd);
 err_init_cppi:
-	pm_runtime_put(&pdev->dev);
+	pm_runtime_put(dev);
 err_get_sync:
-	pm_runtime_disable(&pdev->dev);
+	pm_runtime_disable(dev);
 	iounmap(cdd->usbss_mem);
 	iounmap(cdd->ctrl_mem);
 	iounmap(cdd->sched_mem);
···
 	cppi_writel(0, cdd->usbss_mem + USBSS_IRQ_CLEARR);
 	free_irq(cdd->irq, cdd);
 	cleanup_chans(cdd);
-	deinit_cpii41(pdev, cdd);
+	deinit_cppi41(&pdev->dev, cdd);
 	iounmap(cdd->usbss_mem);
 	iounmap(cdd->ctrl_mem);
 	iounmap(cdd->sched_mem);
···
 	return 0;
 }
 
+#ifdef CONFIG_PM_SLEEP
+static int cppi41_suspend(struct device *dev)
+{
+	struct cppi41_dd *cdd = dev_get_drvdata(dev);
+
+	cdd->dma_tdfdq = cppi_readl(cdd->ctrl_mem + DMA_TDFDQ);
+	cppi_writel(0, cdd->usbss_mem + USBSS_IRQ_CLEARR);
+	disable_sched(cdd);
+
+	return 0;
+}
+
+static int cppi41_resume(struct device *dev)
+{
+	struct cppi41_dd *cdd = dev_get_drvdata(dev);
+	struct cppi41_channel *c;
+	int i;
+
+	for (i = 0; i < DESCS_AREAS; i++)
+		cppi_writel(cdd->descs_phys, cdd->qmgr_mem + QMGR_MEMBASE(i));
+
+	list_for_each_entry(c, &cdd->ddev.channels, chan.device_node)
+		if (!c->is_tx)
+			cppi_writel(c->q_num, c->gcr_reg + RXHPCRA0);
+
+	init_sched(cdd);
+
+	cppi_writel(cdd->dma_tdfdq, cdd->ctrl_mem + DMA_TDFDQ);
+	cppi_writel(cdd->scratch_phys, cdd->qmgr_mem + QMGR_LRAM0_BASE);
+	cppi_writel(QMGR_SCRATCH_SIZE, cdd->qmgr_mem + QMGR_LRAM_SIZE);
+	cppi_writel(0, cdd->qmgr_mem + QMGR_LRAM1_BASE);
+
+	cppi_writel(USBSS_IRQ_PD_COMP, cdd->usbss_mem + USBSS_IRQ_ENABLER);
+
+	return 0;
+}
+#endif
+
+static SIMPLE_DEV_PM_OPS(cppi41_pm_ops, cppi41_suspend, cppi41_resume);
+
 static struct platform_driver cpp41_dma_driver = {
 	.probe = cppi41_dma_probe,
 	.remove = cppi41_dma_remove,
 	.driver = {
 		.name = "cppi41-dma-engine",
 		.owner = THIS_MODULE,
+		.pm = &cppi41_pm_ops,
 		.of_match_table = of_match_ptr(cppi41_dma_ids),
 	},
 };
+1 -1
drivers/dma/dma-jz4740.c
···
 	unsigned long flags;
 
 	status = dma_cookie_status(c, cookie, state);
-	if (status == DMA_SUCCESS || !state)
+	if (status == DMA_COMPLETE || !state)
 		return status;
 
 	spin_lock_irqsave(&chan->vchan.lock, flags);
+182 -88
drivers/dma/dmaengine.c
···
 #include <linux/acpi.h>
 #include <linux/acpi_dma.h>
 #include <linux/of_dma.h>
+#include <linux/mempool.h>
 
 static DEFINE_MUTEX(dma_list_mutex);
 static DEFINE_IDR(dma_idr);
···
 }
 EXPORT_SYMBOL(dma_async_device_unregister);
 
-/**
- * dma_async_memcpy_buf_to_buf - offloaded copy between virtual addresses
- * @chan: DMA channel to offload copy to
- * @dest: destination address (virtual)
- * @src: source address (virtual)
- * @len: length
- *
- * Both @dest and @src must be mappable to a bus address according to the
- * DMA mapping API rules for streaming mappings.
- * Both @dest and @src must stay memory resident (kernel memory or locked
- * user space pages).
- */
-dma_cookie_t
-dma_async_memcpy_buf_to_buf(struct dma_chan *chan, void *dest,
-			void *src, size_t len)
+struct dmaengine_unmap_pool {
+	struct kmem_cache *cache;
+	const char *name;
+	mempool_t *pool;
+	size_t size;
+};
+
+#define __UNMAP_POOL(x) { .size = x, .name = "dmaengine-unmap-" __stringify(x) }
+static struct dmaengine_unmap_pool unmap_pool[] = {
+	__UNMAP_POOL(2),
+	#if IS_ENABLED(CONFIG_ASYNC_TX_DMA)
+	__UNMAP_POOL(16),
+	__UNMAP_POOL(128),
+	__UNMAP_POOL(256),
+	#endif
+};
+
+static struct dmaengine_unmap_pool *__get_unmap_pool(int nr)
 {
-	struct dma_device *dev = chan->device;
-	struct dma_async_tx_descriptor *tx;
-	dma_addr_t dma_dest, dma_src;
-	dma_cookie_t cookie;
-	unsigned long flags;
+	int order = get_count_order(nr);
 
-	dma_src = dma_map_single(dev->dev, src, len, DMA_TO_DEVICE);
-	dma_dest = dma_map_single(dev->dev, dest, len, DMA_FROM_DEVICE);
-	flags = DMA_CTRL_ACK |
-		DMA_COMPL_SRC_UNMAP_SINGLE |
-		DMA_COMPL_DEST_UNMAP_SINGLE;
-	tx = dev->device_prep_dma_memcpy(chan, dma_dest, dma_src, len, flags);
+	switch (order) {
+	case 0 ... 1:
+		return &unmap_pool[0];
+	case 2 ... 4:
+		return &unmap_pool[1];
+	case 5 ... 7:
+		return &unmap_pool[2];
+	case 8:
+		return &unmap_pool[3];
+	default:
+		BUG();
+		return NULL;
+	}
+}
 
-	if (!tx) {
-		dma_unmap_single(dev->dev, dma_src, len, DMA_TO_DEVICE);
-		dma_unmap_single(dev->dev, dma_dest, len, DMA_FROM_DEVICE);
-		return -ENOMEM;
+static void dmaengine_unmap(struct kref *kref)
+{
+	struct dmaengine_unmap_data *unmap = container_of(kref, typeof(*unmap), kref);
+	struct device *dev = unmap->dev;
+	int cnt, i;
+
+	cnt = unmap->to_cnt;
+	for (i = 0; i < cnt; i++)
+		dma_unmap_page(dev, unmap->addr[i], unmap->len,
+			       DMA_TO_DEVICE);
+	cnt += unmap->from_cnt;
+	for (; i < cnt; i++)
+		dma_unmap_page(dev, unmap->addr[i], unmap->len,
+			       DMA_FROM_DEVICE);
+	cnt += unmap->bidi_cnt;
+	for (; i < cnt; i++) {
+		if (unmap->addr[i] == 0)
+			continue;
+		dma_unmap_page(dev, unmap->addr[i], unmap->len,
+			       DMA_BIDIRECTIONAL);
+	}
+	mempool_free(unmap, __get_unmap_pool(cnt)->pool);
+}
+
+void dmaengine_unmap_put(struct dmaengine_unmap_data *unmap)
+{
+	if (unmap)
+		kref_put(&unmap->kref, dmaengine_unmap);
+}
+EXPORT_SYMBOL_GPL(dmaengine_unmap_put);
+
+static void dmaengine_destroy_unmap_pool(void)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(unmap_pool); i++) {
+		struct dmaengine_unmap_pool *p = &unmap_pool[i];
+
+		if (p->pool)
+			mempool_destroy(p->pool);
+		p->pool = NULL;
+		if (p->cache)
+			kmem_cache_destroy(p->cache);
+		p->cache = NULL;
+	}
+}
+
+static int __init dmaengine_init_unmap_pool(void)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(unmap_pool); i++) {
+		struct dmaengine_unmap_pool *p = &unmap_pool[i];
+		size_t size;
+
+		size = sizeof(struct dmaengine_unmap_data) +
+		       sizeof(dma_addr_t) * p->size;
+
+		p->cache = kmem_cache_create(p->name, size, 0,
+					     SLAB_HWCACHE_ALIGN, NULL);
+		if (!p->cache)
+			break;
+		p->pool = mempool_create_slab_pool(1, p->cache);
+		if (!p->pool)
+			break;
 	}
 
-	tx->callback = NULL;
-	cookie = tx->tx_submit(tx);
+	if (i == ARRAY_SIZE(unmap_pool))
+		return 0;
 
-	preempt_disable();
-	__this_cpu_add(chan->local->bytes_transferred, len);
-	__this_cpu_inc(chan->local->memcpy_count);
-	preempt_enable();
-
-	return cookie;
+	dmaengine_destroy_unmap_pool();
+	return -ENOMEM;
 }
-EXPORT_SYMBOL(dma_async_memcpy_buf_to_buf);
 
-/**
- * dma_async_memcpy_buf_to_pg - offloaded copy from address to page
- * @chan: DMA channel to offload copy to
- * @page: destination page
- * @offset: offset in page to copy to
- * @kdata: source address (virtual)
- * @len: length
- *
- * Both @page/@offset and @kdata must be mappable to a bus address according
- * to the DMA mapping API rules for streaming mappings.
- * Both @page/@offset and @kdata must stay memory resident (kernel memory or
- * locked user space pages)
- */
-dma_cookie_t
-dma_async_memcpy_buf_to_pg(struct dma_chan *chan, struct page *page,
-			unsigned int offset, void *kdata, size_t len)
+struct dmaengine_unmap_data *
+dmaengine_get_unmap_data(struct device *dev, int nr, gfp_t flags)
 {
-	struct dma_device *dev = chan->device;
-	struct dma_async_tx_descriptor *tx;
-	dma_addr_t dma_dest, dma_src;
-	dma_cookie_t cookie;
-	unsigned long flags;
+	struct dmaengine_unmap_data *unmap;
 
-	dma_src = dma_map_single(dev->dev, kdata, len, DMA_TO_DEVICE);
-	dma_dest = dma_map_page(dev->dev, page, offset, len, DMA_FROM_DEVICE);
-	flags = DMA_CTRL_ACK | DMA_COMPL_SRC_UNMAP_SINGLE;
-	tx = dev->device_prep_dma_memcpy(chan, dma_dest, dma_src, len, flags);
+	unmap = mempool_alloc(__get_unmap_pool(nr)->pool, flags);
+	if (!unmap)
+		return NULL;
 
-	if (!tx) {
-		dma_unmap_single(dev->dev, dma_src, len, DMA_TO_DEVICE);
-		dma_unmap_page(dev->dev, dma_dest, len, DMA_FROM_DEVICE);
-		return -ENOMEM;
-	}
+	memset(unmap, 0, sizeof(*unmap));
+	kref_init(&unmap->kref);
+	unmap->dev = dev;
 
-	tx->callback = NULL;
-	cookie = tx->tx_submit(tx);
-
-	preempt_disable();
-	__this_cpu_add(chan->local->bytes_transferred, len);
-	__this_cpu_inc(chan->local->memcpy_count);
-	preempt_enable();
-
-	return cookie;
+	return unmap;
 }
-EXPORT_SYMBOL(dma_async_memcpy_buf_to_pg);
+EXPORT_SYMBOL(dmaengine_get_unmap_data);
 
 /**
  * dma_async_memcpy_pg_to_pg - offloaded copy from page to page
···
 {
 	struct dma_device *dev = chan->device;
 	struct dma_async_tx_descriptor *tx;
-	dma_addr_t dma_dest, dma_src;
+	struct dmaengine_unmap_data *unmap;
 	dma_cookie_t cookie;
 	unsigned long flags;
 
-	dma_src = dma_map_page(dev->dev, src_pg, src_off, len, DMA_TO_DEVICE);
-	dma_dest = dma_map_page(dev->dev, dest_pg, dest_off, len,
-				DMA_FROM_DEVICE);
+	unmap = dmaengine_get_unmap_data(dev->dev, 2, GFP_NOIO);
+	if (!unmap)
+		return -ENOMEM;
+
+	unmap->to_cnt = 1;
+	unmap->from_cnt = 1;
+	unmap->addr[0] = dma_map_page(dev->dev, src_pg, src_off, len,
+				      DMA_TO_DEVICE);
+	unmap->addr[1] = dma_map_page(dev->dev, dest_pg, dest_off, len,
+				      DMA_FROM_DEVICE);
+	unmap->len = len;
 	flags = DMA_CTRL_ACK;
-	tx = dev->device_prep_dma_memcpy(chan, dma_dest, dma_src, len, flags);
+	tx = dev->device_prep_dma_memcpy(chan, unmap->addr[1], unmap->addr[0],
+					 len, flags);
 
 	if (!tx) {
-		dma_unmap_page(dev->dev, dma_src, len, DMA_TO_DEVICE);
-		dma_unmap_page(dev->dev, dma_dest, len, DMA_FROM_DEVICE);
+		dmaengine_unmap_put(unmap);
 		return -ENOMEM;
 	}
 
-	tx->callback = NULL;
+	dma_set_unmap(tx, unmap);
 	cookie = tx->tx_submit(tx);
+	dmaengine_unmap_put(unmap);
 
 	preempt_disable();
 	__this_cpu_add(chan->local->bytes_transferred, len);
···
 	return cookie;
 }
 EXPORT_SYMBOL(dma_async_memcpy_pg_to_pg);
+
+/**
+ * dma_async_memcpy_buf_to_buf - offloaded copy between virtual addresses
+ * @chan: DMA channel to offload copy to
+ * @dest: destination address (virtual)
+ * @src: source address (virtual)
+ * @len: length
+ *
+ * Both @dest and @src must be mappable to a bus address according to the
+ * DMA mapping API rules for streaming mappings.
+ * Both @dest and @src must stay memory resident (kernel memory or locked
+ * user space pages).
+ */
+dma_cookie_t
+dma_async_memcpy_buf_to_buf(struct dma_chan *chan, void *dest,
+			    void *src, size_t len)
+{
+	return dma_async_memcpy_pg_to_pg(chan, virt_to_page(dest),
+					 (unsigned long) dest & ~PAGE_MASK,
+					 virt_to_page(src),
+					 (unsigned long) src & ~PAGE_MASK, len);
+}
+EXPORT_SYMBOL(dma_async_memcpy_buf_to_buf);
+
+/**
+ * dma_async_memcpy_buf_to_pg - offloaded copy from address to page
+ * @chan: DMA channel to offload copy to
+ * @page: destination page
+ * @offset: offset in page to copy to
+ * @kdata: source address (virtual)
+ * @len: length
+ *
+ * Both @page/@offset and @kdata must be mappable to a bus address according
+ * to the DMA mapping API rules for streaming mappings.
+ * Both @page/@offset and @kdata must stay memory resident (kernel memory or
+ * locked user space pages)
+ */
+dma_cookie_t
+dma_async_memcpy_buf_to_pg(struct dma_chan *chan, struct page *page,
+			   unsigned int offset, void *kdata, size_t len)
+{
+	return dma_async_memcpy_pg_to_pg(chan, page, offset,
+					 virt_to_page(kdata),
+					 (unsigned long) kdata & ~PAGE_MASK, len);
+}
+EXPORT_SYMBOL(dma_async_memcpy_buf_to_pg);
 
 void dma_async_tx_descriptor_init(struct dma_async_tx_descriptor *tx,
 				  struct dma_chan *chan)
···
 	unsigned long dma_sync_wait_timeout = jiffies + msecs_to_jiffies(5000);
 
 	if (!tx)
-		return DMA_SUCCESS;
+		return DMA_COMPLETE;
 
 	while (tx->cookie == -EBUSY) {
 		if (time_after_eq(jiffies, dma_sync_wait_timeout)) {
···
 
 static int __init dma_bus_init(void)
 {
+	int err = dmaengine_init_unmap_pool();
+
+	if (err)
+		return err;
 	return class_register(&dma_devclass);
 }
 arch_initcall(dma_bus_init);
+369 -582
drivers/dma/dmatest.c
···
  * it under the terms of the GNU General Public License version 2 as
  * published by the Free Software Foundation.
  */
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
 #include <linux/delay.h>
 #include <linux/dma-mapping.h>
 #include <linux/dmaengine.h>
···
 #include <linux/random.h>
 #include <linux/slab.h>
 #include <linux/wait.h>
-#include <linux/ctype.h>
-#include <linux/debugfs.h>
-#include <linux/uaccess.h>
-#include <linux/seq_file.h>
 
 static unsigned int test_buf_size = 16384;
 module_param(test_buf_size, uint, S_IRUGO | S_IWUSR);
···
 MODULE_PARM_DESC(timeout, "Transfer Timeout in msec (default: 3000), "
 		 "Pass -1 for infinite timeout");
 
-/* Maximum amount of mismatched bytes in buffer to print */
-#define MAX_ERROR_COUNT		32
+static bool noverify;
+module_param(noverify, bool, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(noverify, "Disable random data setup and verification");
 
-/*
- * Initialization patterns. All bytes in the source buffer has bit 7
- * set, all bytes in the destination buffer has bit 7 cleared.
- *
- * Bit 6 is set for all bytes which are to be copied by the DMA
- * engine. Bit 5 is set for all bytes which are to be overwritten by
- * the DMA engine.
- *
- * The remaining bits are the inverse of a counter which increments by
- * one for each byte address.
- */
-#define PATTERN_SRC		0x80
-#define PATTERN_DST		0x00
-#define PATTERN_COPY		0x40
-#define PATTERN_OVERWRITE	0x20
-#define PATTERN_COUNT_MASK	0x1f
-
-enum dmatest_error_type {
-	DMATEST_ET_OK,
-	DMATEST_ET_MAP_SRC,
-	DMATEST_ET_MAP_DST,
-	DMATEST_ET_PREP,
-	DMATEST_ET_SUBMIT,
-	DMATEST_ET_TIMEOUT,
-	DMATEST_ET_DMA_ERROR,
-	DMATEST_ET_DMA_IN_PROGRESS,
-	DMATEST_ET_VERIFY,
-	DMATEST_ET_VERIFY_BUF,
-};
-
-struct dmatest_verify_buffer {
-	unsigned int	index;
-	u8		expected;
-	u8		actual;
-};
-
-struct dmatest_verify_result {
-	unsigned int			error_count;
-	struct dmatest_verify_buffer	data[MAX_ERROR_COUNT];
-	u8				pattern;
-	bool				is_srcbuf;
-};
-
-struct dmatest_thread_result {
-	struct list_head	node;
-	unsigned int		n;
-	unsigned int		src_off;
-	unsigned int		dst_off;
-	unsigned int		len;
-	enum dmatest_error_type	type;
-	union {
-		unsigned long		data;
-		dma_cookie_t		cookie;
-		enum dma_status		status;
-		int			error;
-		struct dmatest_verify_result *vr;
-	};
-};
-
-struct dmatest_result {
-	struct list_head	node;
-	char			*name;
-	struct list_head	results;
-};
-
-struct dmatest_info;
-
-struct dmatest_thread {
-	struct list_head	node;
-	struct dmatest_info	*info;
-	struct task_struct	*task;
-	struct dma_chan		*chan;
-	u8			**srcs;
-	u8			**dsts;
-	enum dma_transaction_type type;
-	bool			done;
-};
-
-struct dmatest_chan {
-	struct list_head	node;
-	struct dma_chan		*chan;
-	struct list_head	threads;
-};
+static bool verbose;
+module_param(verbose, bool, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(verbose, "Enable \"success\" result messages (default: off)");
 
 /**
  * struct dmatest_params - test parameters.
···
 	unsigned int	xor_sources;
 	unsigned int	pq_sources;
 	int		timeout;
+	bool		noverify;
 };
 
 /**
···
  * @params:	test parameters
  * @lock:	access protection to the fields of this structure
  */
-struct dmatest_info {
+static struct dmatest_info {
 	/* Test parameters */
 	struct dmatest_params	params;
 
···
 	struct list_head	channels;
 	unsigned int		nr_channels;
 	struct mutex		lock;
-
-	/* debugfs related stuff */
-	struct dentry		*root;
-
-	/* Test results */
-	struct list_head	results;
-	struct mutex		results_lock;
+	bool			did_init;
+} test_info = {
+	.channels = LIST_HEAD_INIT(test_info.channels),
+	.lock = __MUTEX_INITIALIZER(test_info.lock),
 };
 
-static struct dmatest_info test_info;
+static int dmatest_run_set(const char *val, const struct kernel_param *kp);
+static int dmatest_run_get(char *val, const struct kernel_param *kp);
+static struct kernel_param_ops run_ops = {
+	.set = dmatest_run_set,
+	.get = dmatest_run_get,
+};
+static bool dmatest_run;
+module_param_cb(run, &run_ops, &dmatest_run, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(run, "Run the test (default: false)");
+
+/* Maximum amount of mismatched bytes in buffer to print */
+#define MAX_ERROR_COUNT		32
+
+/*
+ * Initialization patterns. All bytes in the source buffer has bit 7
+ * set, all bytes in the destination buffer has bit 7 cleared.
+ *
+ * Bit 6 is set for all bytes which are to be copied by the DMA
+ * engine. Bit 5 is set for all bytes which are to be overwritten by
+ * the DMA engine.
+ *
+ * The remaining bits are the inverse of a counter which increments by
+ * one for each byte address.
+ */
+#define PATTERN_SRC		0x80
+#define PATTERN_DST		0x00
+#define PATTERN_COPY		0x40
+#define PATTERN_OVERWRITE	0x20
+#define PATTERN_COUNT_MASK	0x1f
+
+struct dmatest_thread {
+	struct list_head	node;
+	struct dmatest_info	*info;
+	struct task_struct	*task;
+	struct dma_chan		*chan;
+	u8			**srcs;
+	u8			**dsts;
+	enum dma_transaction_type type;
+	bool			done;
+};
+
+struct dmatest_chan {
+	struct list_head	node;
+	struct dma_chan		*chan;
+	struct list_head	threads;
+};
+
+static DECLARE_WAIT_QUEUE_HEAD(thread_wait);
+static bool wait;
+
+static bool is_threaded_test_run(struct dmatest_info *info)
+{
+	struct dmatest_chan *dtc;
+
+	list_for_each_entry(dtc, &info->channels, node) {
+		struct dmatest_thread *thread;
+
+		list_for_each_entry(thread, &dtc->threads, node) {
+			if (!thread->done)
+				return true;
+		}
+	}
+
+	return false;
+}
+
+static int dmatest_wait_get(char *val, const struct kernel_param *kp)
+{
+	struct dmatest_info *info = &test_info;
+	struct dmatest_params *params = &info->params;
+
+	if (params->iterations)
+		wait_event(thread_wait, !is_threaded_test_run(info));
+	wait = true;
+	return param_get_bool(val, kp);
+}
+
+static struct kernel_param_ops wait_ops = {
+	.get = dmatest_wait_get,
+	.set = param_set_bool,
+};
+module_param_cb(wait, &wait_ops, &wait, S_IRUGO);
+MODULE_PARM_DESC(wait, "Wait for tests to complete (default: false)");
 
 static bool dmatest_match_channel(struct dmatest_params *params,
 		struct dma_chan *chan)
···
 {
 	unsigned long buf;
 
-	get_random_bytes(&buf, sizeof(buf));
+	prandom_bytes(&buf, sizeof(buf));
 	return buf;
 }
 
···
 	}
 }
 
-static unsigned int dmatest_verify(struct dmatest_verify_result *vr, u8 **bufs,
-		unsigned int start, unsigned int end, unsigned int counter,
-		u8 pattern, bool is_srcbuf)
+static void dmatest_mismatch(u8 actual, u8 pattern, unsigned int index,
+		unsigned int counter, bool is_srcbuf)
+{
+	u8		diff = actual ^ pattern;
+	u8		expected = pattern | (~counter & PATTERN_COUNT_MASK);
+	const char	*thread_name = current->comm;
+
+	if (is_srcbuf)
+		pr_warn("%s: srcbuf[0x%x] overwritten! Expected %02x, got %02x\n",
+			thread_name, index, expected, actual);
+	else if ((pattern & PATTERN_COPY)
+			&& (diff & (PATTERN_COPY | PATTERN_OVERWRITE)))
+		pr_warn("%s: dstbuf[0x%x] not copied! Expected %02x, got %02x\n",
+			thread_name, index, expected, actual);
+	else if (diff & PATTERN_SRC)
+		pr_warn("%s: dstbuf[0x%x] was copied! Expected %02x, got %02x\n",
+			thread_name, index, expected, actual);
+	else
+		pr_warn("%s: dstbuf[0x%x] mismatch! Expected %02x, got %02x\n",
+			thread_name, index, expected, actual);
+}
+
+static unsigned int dmatest_verify(u8 **bufs, unsigned int start,
+		unsigned int end, unsigned int counter, u8 pattern,
+		bool is_srcbuf)
 {
 	unsigned int i;
 	unsigned int error_count = 0;
···
 	u8 expected;
 	u8 *buf;
 	unsigned int counter_orig = counter;
-	struct dmatest_verify_buffer *vb;
 
 	for (; (buf = *bufs); bufs++) {
 		counter = counter_orig;
···
 			actual = buf[i];
 			expected = pattern | (~counter & PATTERN_COUNT_MASK);
 			if (actual != expected) {
-				if (error_count < MAX_ERROR_COUNT && vr) {
-					vb = &vr->data[error_count];
-					vb->index = i;
-					vb->expected = expected;
-					vb->actual = actual;
-				}
+				if (error_count < MAX_ERROR_COUNT)
+					dmatest_mismatch(actual, pattern, i,
+							 counter, is_srcbuf);
 				error_count++;
 			}
 			counter++;
···
 	}
 
 	if (error_count > MAX_ERROR_COUNT)
-		pr_warning("%s: %u errors suppressed\n",
+		pr_warn("%s: %u errors suppressed\n",
 			current->comm, error_count - MAX_ERROR_COUNT);
 
 	return error_count;
···
 	wake_up_all(done->wait);
 }
 
-static inline void unmap_src(struct device *dev, dma_addr_t *addr, size_t len,
-			     unsigned int count)
-{
-	while (count--)
-		dma_unmap_single(dev, addr[count], len, DMA_TO_DEVICE);
-}
-
-static inline void unmap_dst(struct device *dev, dma_addr_t *addr, size_t len,
-			     unsigned int count)
-{
-	while (count--)
-		dma_unmap_single(dev, addr[count], len, DMA_BIDIRECTIONAL);
-}
-
 static unsigned int min_odd(unsigned int x, unsigned int y)
 {
 	unsigned int val = min(x, y);
···
 	return val % 2 ? val : val - 1;
 }
 
-static char *verify_result_get_one(struct dmatest_verify_result *vr,
-		unsigned int i)
+static void result(const char *err, unsigned int n, unsigned int src_off,
+		unsigned int dst_off, unsigned int len, unsigned long data)
 {
-	struct dmatest_verify_buffer *vb = &vr->data[i];
-	u8 diff = vb->actual ^ vr->pattern;
-	static char buf[512];
-	char *msg;
-
-	if (vr->is_srcbuf)
-		msg = "srcbuf overwritten!";
-	else if ((vr->pattern & PATTERN_COPY)
-			&& (diff & (PATTERN_COPY | PATTERN_OVERWRITE)))
-		msg = "dstbuf not copied!";
-	else if (diff & PATTERN_SRC)
-		msg = "dstbuf was copied!";
-	else
-		msg = "dstbuf mismatch!";
-
-	snprintf(buf, sizeof(buf) - 1, "%s [0x%x] Expected %02x, got %02x", msg,
-		 vb->index, vb->expected, vb->actual);
-
-	return buf;
+	pr_info("%s: result #%u: '%s' with src_off=0x%x dst_off=0x%x len=0x%x (%lu)",
+		current->comm, n, err, src_off, dst_off, len, data);
 }
 
-static char *thread_result_get(const char *name,
-		struct dmatest_thread_result *tr)
+static void dbg_result(const char *err, unsigned int n, unsigned int src_off,
+		       unsigned int dst_off, unsigned int len,
+		       unsigned long data)
 {
-	static const char * const messages[] = {
-		[DMATEST_ET_OK]			= "No errors",
-		[DMATEST_ET_MAP_SRC]		= "src mapping error",
-		[DMATEST_ET_MAP_DST]		= "dst mapping error",
-		[DMATEST_ET_PREP]		= "prep error",
-		[DMATEST_ET_SUBMIT]		= "submit error",
-		[DMATEST_ET_TIMEOUT]		= "test timed out",
-		[DMATEST_ET_DMA_ERROR]		=
-			"got completion callback (DMA_ERROR)",
-		[DMATEST_ET_DMA_IN_PROGRESS]	=
-			"got completion callback (DMA_IN_PROGRESS)",
-		[DMATEST_ET_VERIFY]		= "errors",
-		[DMATEST_ET_VERIFY_BUF]		= "verify errors",
-	};
-	static char buf[512];
-
-	snprintf(buf, sizeof(buf) - 1,
-		 "%s: #%u: %s with src_off=0x%x dst_off=0x%x len=0x%x (%lu)",
-		 name, tr->n, messages[tr->type], tr->src_off, tr->dst_off,
-		 tr->len, tr->data);
-
-	return buf;
+	pr_debug("%s: result #%u: '%s' with src_off=0x%x dst_off=0x%x len=0x%x (%lu)",
+		 current->comm, n, err, src_off, dst_off, len, data);
 }
 
-static int thread_result_add(struct dmatest_info *info,
-		struct dmatest_result *r, enum dmatest_error_type type,
-		unsigned int n, unsigned int src_off, unsigned int dst_off,
-		unsigned int len, unsigned long data)
+#define verbose_result(err, n, src_off, dst_off, len, data) ({ \
+	if (verbose) \
+		result(err, n, src_off, dst_off, len, data); \
+	else \
+		dbg_result(err, n, src_off, dst_off, len, data); \
+})
+
+static unsigned long long dmatest_persec(s64 runtime, unsigned int val)
 {
-	struct dmatest_thread_result *tr;
+	unsigned long long per_sec = 1000000;
 
-	tr = kzalloc(sizeof(*tr), GFP_KERNEL);
-	if (!tr)
-		return -ENOMEM;
+	if (runtime <= 0)
+		return 0;
 
-	tr->type = type;
-	tr->n = n;
-	tr->src_off = src_off;
-	tr->dst_off = dst_off;
-	tr->len = len;
-	tr->data = data;
+	/* drop precision until runtime is 32-bits */
+	while (runtime > UINT_MAX) {
+		runtime >>= 1;
+		per_sec <<= 1;
+	}
 
-	mutex_lock(&info->results_lock);
-	list_add_tail(&tr->node, &r->results);
-	mutex_unlock(&info->results_lock);
-
-	if (tr->type == DMATEST_ET_OK)
-		pr_debug("%s\n", thread_result_get(r->name, tr));
-	else
-		pr_warn("%s\n", thread_result_get(r->name, tr));
-
-	return 0;
+	per_sec *= val;
+	do_div(per_sec, runtime);
+	return per_sec;
 }
 
-static unsigned int verify_result_add(struct dmatest_info *info,
-		struct dmatest_result *r, unsigned int n,
-		unsigned int src_off, unsigned int dst_off, unsigned int len,
-		u8 **bufs, int whence, unsigned int counter, u8 pattern,
-		bool is_srcbuf)
+static unsigned long long dmatest_KBs(s64 runtime, unsigned long long len)
 {
-	struct dmatest_verify_result *vr;
-	unsigned int error_count;
-	unsigned int buf_off = is_srcbuf ? src_off : dst_off;
-	unsigned int start, end;
-
-	if (whence < 0) {
-		start = 0;
-		end = buf_off;
-	} else if (whence > 0) {
-		start = buf_off + len;
-		end = info->params.buf_size;
-	} else {
-		start = buf_off;
-		end = buf_off + len;
-	}
-
-	vr = kmalloc(sizeof(*vr), GFP_KERNEL);
-	if (!vr) {
-		pr_warn("dmatest: No memory to store verify result\n");
-		return dmatest_verify(NULL, bufs, start, end, counter, pattern,
-				      is_srcbuf);
-	}
-
-	vr->pattern = pattern;
-	vr->is_srcbuf = is_srcbuf;
-
-	error_count = dmatest_verify(vr, bufs, start, end, counter, pattern,
-				     is_srcbuf);
-	if (error_count) {
-		vr->error_count = error_count;
-		thread_result_add(info, r, DMATEST_ET_VERIFY_BUF, n, src_off,
-				  dst_off, len, (unsigned long)vr);
-		return error_count;
-	}
-
-	kfree(vr);
-	return 0;
-}
-
-static void result_free(struct dmatest_info *info, const char *name)
-{
-	struct dmatest_result *r, *_r;
-
-	mutex_lock(&info->results_lock);
-	list_for_each_entry_safe(r, _r, &info->results, node) {
-		struct dmatest_thread_result *tr, *_tr;
-
-		if (name && strcmp(r->name, name))
-			continue;
-
-		list_for_each_entry_safe(tr, _tr, &r->results, node) {
-			if (tr->type == DMATEST_ET_VERIFY_BUF)
-				kfree(tr->vr);
-			list_del(&tr->node);
-			kfree(tr);
-		}
-
-		kfree(r->name);
-		list_del(&r->node);
-		kfree(r);
-	}
-
-	mutex_unlock(&info->results_lock);
-}
-
-static struct dmatest_result *result_init(struct dmatest_info *info,
-		const char *name)
-{
-	struct dmatest_result *r;
-
-	r = kzalloc(sizeof(*r), GFP_KERNEL);
-	if (r) {
-		r->name = kstrdup(name, GFP_KERNEL);
-		INIT_LIST_HEAD(&r->results);
-		mutex_lock(&info->results_lock);
-		list_add_tail(&r->node, &info->results);
-		mutex_unlock(&info->results_lock);
-	}
-	return r;
+	return dmatest_persec(runtime, len >> 10);
 }
 
 /*
···
 	struct dmatest_params	*params;
 	struct dma_chan		*chan;
 	struct dma_device	*dev;
-	const char		*thread_name;
 	unsigned int		src_off, dst_off, len;
 	unsigned int		error_count;
 	unsigned int		failed_tests = 0;
···
 	int			src_cnt;
 	int			dst_cnt;
 	int			i;
-	struct dmatest_result	*result;
+	ktime_t			ktime;
+	s64			runtime = 0;
+	unsigned long long	total_len = 0;
 
-	thread_name = current->comm;
 	set_freezable();
 
 	ret = -ENOMEM;
···
 	} else
 		goto err_thread_type;
 
-	result = result_init(info, thread_name);
-	if (!result)
-		goto err_srcs;
-
 	thread->srcs = kcalloc(src_cnt+1, sizeof(u8 *), GFP_KERNEL);
 	if (!thread->srcs)
 		goto err_srcs;
···
 	set_user_nice(current, 10);
 
 	/*
-	 * src buffers are freed by the DMAEngine code with dma_unmap_single()
-	 * dst buffers are freed by ourselves below
+	 * src and dst buffers are freed by ourselves below
 	 */
-	flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT
-	      | DMA_COMPL_SKIP_DEST_UNMAP | DMA_COMPL_SRC_UNMAP_SINGLE;
+	flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT;
 
+	ktime = ktime_get();
 	while (!kthread_should_stop()
 	       && !(params->iterations && total_tests >= params->iterations)) {
 		struct dma_async_tx_descriptor *tx = NULL;
-		dma_addr_t dma_srcs[src_cnt];
-		dma_addr_t dma_dsts[dst_cnt];
+		struct dmaengine_unmap_data *um;
+		dma_addr_t srcs[src_cnt];
+		dma_addr_t *dsts;
 		u8 align = 0;
 
 		total_tests++;
···
 			break;
 		}
 
-		len = dmatest_random() % params->buf_size + 1;
+		if (params->noverify) {
+			len = params->buf_size;
+			src_off = 0;
+			dst_off = 0;
+		} else {
+			len = dmatest_random() % params->buf_size + 1;
+			len = (len >> align) << align;
+			if (!len)
+				len = 1 << align;
+			src_off = dmatest_random() % (params->buf_size - len + 1);
+			dst_off = dmatest_random() % (params->buf_size - len + 1);
+
+			src_off = (src_off >> align) << align;
+			dst_off = (dst_off >> align) << align;
+
+			dmatest_init_srcs(thread->srcs, src_off, len,
+					  params->buf_size);
+			dmatest_init_dsts(thread->dsts, dst_off, len,
+					  params->buf_size);
+		}
+
 		len = (len >> align) << align;
 		if (!len)
 			len = 1 << align;
-		src_off = dmatest_random() % (params->buf_size - len + 1);
-		dst_off = dmatest_random() % (params->buf_size - len + 1);
+		total_len += len;
 
-		src_off = (src_off >> align) << align;
-		dst_off = (dst_off >> align) << align;
-
-		dmatest_init_srcs(thread->srcs, src_off, len, params->buf_size);
-		dmatest_init_dsts(thread->dsts, dst_off, len, params->buf_size);
-
+		um = dmaengine_get_unmap_data(dev->dev, src_cnt+dst_cnt,
+					      GFP_KERNEL);
+		if (!um) {
+			failed_tests++;
+			result("unmap data NULL", total_tests,
+			       src_off, dst_off, len, ret);
+			continue;
+		}
+
+		um->len = params->buf_size;
 		for (i = 0; i < src_cnt; i++) {
-			u8 *buf = thread->srcs[i] + src_off;
+			unsigned long buf = (unsigned long) thread->srcs[i];
+			struct page *pg = virt_to_page(buf);
+			unsigned pg_off = buf & ~PAGE_MASK;
 
-			dma_srcs[i] = dma_map_single(dev->dev, buf, len,
-						     DMA_TO_DEVICE);
-			ret = dma_mapping_error(dev->dev, dma_srcs[i]);
+			um->addr[i] = dma_map_page(dev->dev, pg, pg_off,
+						   um->len, DMA_TO_DEVICE);
+			srcs[i] = um->addr[i] + src_off;
+			ret = dma_mapping_error(dev->dev, um->addr[i]);
 			if (ret) {
-				unmap_src(dev->dev, dma_srcs, len, i);
-				thread_result_add(info, result,
-						  DMATEST_ET_MAP_SRC,
-						  total_tests, src_off, dst_off,
-						  len, ret);
+				dmaengine_unmap_put(um);
+				result("src mapping error", total_tests,
+				       src_off, dst_off, len, ret);
 				failed_tests++;
 				continue;
 			}
+			um->to_cnt++;
 		}
 		/* map with DMA_BIDIRECTIONAL to force writeback/invalidate */
+		dsts = &um->addr[src_cnt];
 		for (i = 0; i < dst_cnt; i++) {
-			dma_dsts[i] = dma_map_single(dev->dev, thread->dsts[i],
-						     params->buf_size,
-						     DMA_BIDIRECTIONAL);
-			ret = dma_mapping_error(dev->dev, dma_dsts[i]);
+			unsigned long buf = (unsigned long) thread->dsts[i];
+			struct page *pg = virt_to_page(buf);
+			unsigned pg_off = buf & ~PAGE_MASK;
+
+			dsts[i] = dma_map_page(dev->dev, pg, pg_off, um->len,
+					       DMA_BIDIRECTIONAL);
+			ret = dma_mapping_error(dev->dev, dsts[i]);
 			if (ret) {
-				unmap_src(dev->dev, dma_srcs, len, src_cnt);
-				unmap_dst(dev->dev, dma_dsts, params->buf_size,
-					  i);
-				thread_result_add(info, result,
-						  DMATEST_ET_MAP_DST,
-						  total_tests, src_off, dst_off,
-						  len, ret);
+				dmaengine_unmap_put(um);
+				result("dst mapping error", total_tests,
+				       src_off, dst_off, len, ret);
 				failed_tests++;
 				continue;
 			}
+			um->bidi_cnt++;
 		}
 
 		if (thread->type == DMA_MEMCPY)
 			tx = dev->device_prep_dma_memcpy(chan,
-							 dma_dsts[0] + dst_off,
-							 dma_srcs[0], len,
-							 flags);
+							 dsts[0] + dst_off,
+							 srcs[0], len, flags);
 		else if (thread->type == DMA_XOR)
 			tx = dev->device_prep_dma_xor(chan,
-						      dma_dsts[0] + dst_off,
-						      dma_srcs, src_cnt,
+						      dsts[0] + dst_off,
+						      srcs, src_cnt,
 						      len, flags);
 		else if (thread->type == DMA_PQ) {
 			dma_addr_t dma_pq[dst_cnt];
 
 			for (i = 0; i < dst_cnt; i++)
-				dma_pq[i] = dma_dsts[i] + dst_off;
-			tx = dev->device_prep_dma_pq(chan, dma_pq, dma_srcs,
+				dma_pq[i] = dsts[i] + dst_off;
+			tx = dev->device_prep_dma_pq(chan, dma_pq, srcs,
 						     src_cnt, pq_coefs,
 						     len, flags);
 		}
 
 		if (!tx) {
-			unmap_src(dev->dev, dma_srcs, len, src_cnt);
-			unmap_dst(dev->dev, dma_dsts, params->buf_size,
-				  dst_cnt);
-			thread_result_add(info, result, DMATEST_ET_PREP,
-					  total_tests, src_off, dst_off,
-					  len, 0);
+			dmaengine_unmap_put(um);
+			result("prep error", total_tests, src_off,
+			       dst_off, len, ret);
 			msleep(100);
 			failed_tests++;
 			continue;
···
 		cookie = tx->tx_submit(tx);
 
 		if (dma_submit_error(cookie)) {
-			thread_result_add(info, result, DMATEST_ET_SUBMIT,
-					  total_tests, src_off, dst_off,
-					  len, cookie);
+			dmaengine_unmap_put(um);
+			result("submit error", total_tests, src_off,
+			       dst_off, len, ret);
 			msleep(100);
 			failed_tests++;
 			continue;
···
 			 * free it this time?" dancing.  For now, just
 			 * leave it dangling.
 			 */
-			thread_result_add(info, result, DMATEST_ET_TIMEOUT,
-					  total_tests, src_off, dst_off,
-					  len, 0);
+			dmaengine_unmap_put(um);
+			result("test timed out", total_tests, src_off, dst_off,
+			       len, 0);
 			failed_tests++;
 			continue;
-		} else if (status != DMA_SUCCESS) {
-			enum dmatest_error_type type = (status == DMA_ERROR) ?
-				DMATEST_ET_DMA_ERROR : DMATEST_ET_DMA_IN_PROGRESS;
-			thread_result_add(info, result, type,
-					  total_tests, src_off, dst_off,
-					  len, status);
+		} else if (status != DMA_COMPLETE) {
+			dmaengine_unmap_put(um);
+			result(status == DMA_ERROR ?
+			       "completion error status" :
+			       "completion busy status", total_tests, src_off,
+			       dst_off, len, ret);
 			failed_tests++;
 			continue;
 		}
 
-		/* Unmap by myself (see DMA_COMPL_SKIP_DEST_UNMAP above) */
-		unmap_dst(dev->dev, dma_dsts, params->buf_size, dst_cnt);
+		dmaengine_unmap_put(um);
 
-		error_count = 0;
+		if (params->noverify) {
+			verbose_result("test passed", total_tests, src_off,
+				       dst_off, len, 0);
+			continue;
+		}
 
-		pr_debug("%s: verifying source buffer...\n", thread_name);
-		error_count += verify_result_add(info, result, total_tests,
-				src_off, dst_off, len, thread->srcs, -1,
+		pr_debug("%s: verifying source buffer...\n", current->comm);
+		error_count = dmatest_verify(thread->srcs, 0, src_off,
 				0, PATTERN_SRC, true);
-		error_count += verify_result_add(info, result, total_tests,
-				src_off, dst_off, len, thread->srcs, 0,
-				src_off, PATTERN_SRC | PATTERN_COPY, true);
-		error_count += verify_result_add(info, result, total_tests,
-				src_off, dst_off, len, thread->srcs, 1,
-				src_off + len, PATTERN_SRC, true);
+		error_count += dmatest_verify(thread->srcs, src_off,
+				src_off + len, src_off,
+				PATTERN_SRC | PATTERN_COPY, true);
+		error_count += dmatest_verify(thread->srcs, src_off + len,
+				params->buf_size, src_off + len,
+				PATTERN_SRC, true);
 
-		pr_debug("%s: verifying dest buffer...\n", thread_name);
-		error_count += verify_result_add(info, result, total_tests,
-				src_off, dst_off, len, thread->dsts, -1,
+		pr_debug("%s: verifying dest buffer...\n", current->comm);
+		error_count += dmatest_verify(thread->dsts, 0, dst_off,
 				0, PATTERN_DST, false);
-		error_count += verify_result_add(info, result, total_tests,
-				src_off, dst_off, len, thread->dsts, 0,
-				src_off, PATTERN_SRC | PATTERN_COPY, false);
-		error_count += verify_result_add(info, result, total_tests,
-
src_off, dst_off, len, thread->dsts, 1, 676 - dst_off + len, PATTERN_DST, false); 774 + error_count += dmatest_verify(thread->dsts, dst_off, 775 + dst_off + len, src_off, 776 + PATTERN_SRC | PATTERN_COPY, false); 777 + error_count += dmatest_verify(thread->dsts, dst_off + len, 778 + params->buf_size, dst_off + len, 779 + PATTERN_DST, false); 677 780 678 781 if (error_count) { 679 - thread_result_add(info, result, DMATEST_ET_VERIFY, 680 - total_tests, src_off, dst_off, 681 - len, error_count); 782 + result("data error", total_tests, src_off, dst_off, 783 + len, error_count); 682 784 failed_tests++; 683 785 } else { 684 - thread_result_add(info, result, DMATEST_ET_OK, 685 - total_tests, src_off, dst_off, 686 - len, 0); 786 + verbose_result("test passed", total_tests, src_off, 787 + dst_off, len, 0); 687 788 } 688 789 } 790 + runtime = ktime_us_delta(ktime_get(), ktime); 689 791 690 792 ret = 0; 691 793 for (i = 0; thread->dsts[i]; i++) ··· 700 802 err_srcs: 701 803 kfree(pq_coefs); 702 804 err_thread_type: 703 - pr_notice("%s: terminating after %u tests, %u failures (status %d)\n", 704 - thread_name, total_tests, failed_tests, ret); 805 + pr_info("%s: summary %u tests, %u failures %llu iops %llu KB/s (%d)\n", 806 + current->comm, total_tests, failed_tests, 807 + dmatest_persec(runtime, total_tests), 808 + dmatest_KBs(runtime, total_len), ret); 705 809 706 810 /* terminate all transfers on specified channels */ 707 811 if (ret) 708 812 dmaengine_terminate_all(chan); 709 813 710 814 thread->done = true; 711 - 712 - if (params->iterations > 0) 713 - while (!kthread_should_stop()) { 714 - DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wait_dmatest_exit); 715 - interruptible_sleep_on(&wait_dmatest_exit); 716 - } 815 + wake_up(&thread_wait); 717 816 718 817 return ret; 719 818 } ··· 723 828 724 829 list_for_each_entry_safe(thread, _thread, &dtc->threads, node) { 725 830 ret = kthread_stop(thread->task); 726 - pr_debug("dmatest: thread %s exited with status %d\n", 727 - 
thread->task->comm, ret); 831 + pr_debug("thread %s exited with status %d\n", 832 + thread->task->comm, ret); 728 833 list_del(&thread->node); 834 + put_task_struct(thread->task); 729 835 kfree(thread); 730 836 } 731 837 ··· 757 861 for (i = 0; i < params->threads_per_chan; i++) { 758 862 thread = kzalloc(sizeof(struct dmatest_thread), GFP_KERNEL); 759 863 if (!thread) { 760 - pr_warning("dmatest: No memory for %s-%s%u\n", 761 - dma_chan_name(chan), op, i); 762 - 864 + pr_warn("No memory for %s-%s%u\n", 865 + dma_chan_name(chan), op, i); 763 866 break; 764 867 } 765 868 thread->info = info; 766 869 thread->chan = dtc->chan; 767 870 thread->type = type; 768 871 smp_wmb(); 769 - thread->task = kthread_run(dmatest_func, thread, "%s-%s%u", 872 + thread->task = kthread_create(dmatest_func, thread, "%s-%s%u", 770 873 dma_chan_name(chan), op, i); 771 874 if (IS_ERR(thread->task)) { 772 - pr_warning("dmatest: Failed to run thread %s-%s%u\n", 773 - dma_chan_name(chan), op, i); 875 + pr_warn("Failed to create thread %s-%s%u\n", 876 + dma_chan_name(chan), op, i); 774 877 kfree(thread); 775 878 break; 776 879 } 777 880 778 881 /* srcbuf and dstbuf are allocated by the thread itself */ 779 - 882 + get_task_struct(thread->task); 780 883 list_add_tail(&thread->node, &dtc->threads); 884 + wake_up_process(thread->task); 781 885 } 782 886 783 887 return i; ··· 793 897 794 898 dtc = kmalloc(sizeof(struct dmatest_chan), GFP_KERNEL); 795 899 if (!dtc) { 796 - pr_warning("dmatest: No memory for %s\n", dma_chan_name(chan)); 900 + pr_warn("No memory for %s\n", dma_chan_name(chan)); 797 901 return -ENOMEM; 798 902 } 799 903 ··· 813 917 thread_count += cnt > 0 ? 
cnt : 0; 814 918 } 815 919 816 - pr_info("dmatest: Started %u threads using %s\n", 920 + pr_info("Started %u threads using %s\n", 817 921 thread_count, dma_chan_name(chan)); 818 922 819 923 list_add_tail(&dtc->node, &info->channels); ··· 833 937 return true; 834 938 } 835 939 836 - static int __run_threaded_test(struct dmatest_info *info) 940 + static void request_channels(struct dmatest_info *info, 941 + enum dma_transaction_type type) 837 942 { 838 943 dma_cap_mask_t mask; 839 - struct dma_chan *chan; 840 - struct dmatest_params *params = &info->params; 841 - int err = 0; 842 944 843 945 dma_cap_zero(mask); 844 - dma_cap_set(DMA_MEMCPY, mask); 946 + dma_cap_set(type, mask); 845 947 for (;;) { 948 + struct dmatest_params *params = &info->params; 949 + struct dma_chan *chan; 950 + 846 951 chan = dma_request_channel(mask, filter, params); 847 952 if (chan) { 848 - err = dmatest_add_channel(info, chan); 849 - if (err) { 953 + if (dmatest_add_channel(info, chan)) { 850 954 dma_release_channel(chan); 851 955 break; /* add_channel failed, punt */ 852 956 } ··· 856 960 info->nr_channels >= params->max_channels) 857 961 break; /* we have all we need */ 858 962 } 859 - return err; 860 963 } 861 964 862 - #ifndef MODULE 863 - static int run_threaded_test(struct dmatest_info *info) 864 - { 865 - int ret; 866 - 867 - mutex_lock(&info->lock); 868 - ret = __run_threaded_test(info); 869 - mutex_unlock(&info->lock); 870 - return ret; 871 - } 872 - #endif 873 - 874 - static void __stop_threaded_test(struct dmatest_info *info) 875 - { 876 - struct dmatest_chan *dtc, *_dtc; 877 - struct dma_chan *chan; 878 - 879 - list_for_each_entry_safe(dtc, _dtc, &info->channels, node) { 880 - list_del(&dtc->node); 881 - chan = dtc->chan; 882 - dmatest_cleanup_channel(dtc); 883 - pr_debug("dmatest: dropped channel %s\n", dma_chan_name(chan)); 884 - dma_release_channel(chan); 885 - } 886 - 887 - info->nr_channels = 0; 888 - } 889 - 890 - static void stop_threaded_test(struct dmatest_info *info) 
891 - { 892 - mutex_lock(&info->lock); 893 - __stop_threaded_test(info); 894 - mutex_unlock(&info->lock); 895 - } 896 - 897 - static int __restart_threaded_test(struct dmatest_info *info, bool run) 965 + static void run_threaded_test(struct dmatest_info *info) 898 966 { 899 967 struct dmatest_params *params = &info->params; 900 - 901 - /* Stop any running test first */ 902 - __stop_threaded_test(info); 903 - 904 - if (run == false) 905 - return 0; 906 - 907 - /* Clear results from previous run */ 908 - result_free(info, NULL); 909 968 910 969 /* Copy test parameters */ 911 970 params->buf_size = test_buf_size; ··· 872 1021 params->xor_sources = xor_sources; 873 1022 params->pq_sources = pq_sources; 874 1023 params->timeout = timeout; 1024 + params->noverify = noverify; 1025 + 1026 + request_channels(info, DMA_MEMCPY); 1027 + request_channels(info, DMA_XOR); 1028 + request_channels(info, DMA_PQ); 1029 + } 1030 + 1031 + static void stop_threaded_test(struct dmatest_info *info) 1032 + { 1033 + struct dmatest_chan *dtc, *_dtc; 1034 + struct dma_chan *chan; 1035 + 1036 + list_for_each_entry_safe(dtc, _dtc, &info->channels, node) { 1037 + list_del(&dtc->node); 1038 + chan = dtc->chan; 1039 + dmatest_cleanup_channel(dtc); 1040 + pr_debug("dropped channel %s\n", dma_chan_name(chan)); 1041 + dma_release_channel(chan); 1042 + } 1043 + 1044 + info->nr_channels = 0; 1045 + } 1046 + 1047 + static void restart_threaded_test(struct dmatest_info *info, bool run) 1048 + { 1049 + /* we might be called early to set run=, defer running until all 1050 + * parameters have been evaluated 1051 + */ 1052 + if (!info->did_init) 1053 + return; 1054 + 1055 + /* Stop any running test first */ 1056 + stop_threaded_test(info); 875 1057 876 1058 /* Run test with new parameters */ 877 - return __run_threaded_test(info); 1059 + run_threaded_test(info); 878 1060 } 879 1061 880 - static bool __is_threaded_test_run(struct dmatest_info *info) 1062 + static int dmatest_run_get(char *val, const struct 
kernel_param *kp) 881 1063 { 882 - struct dmatest_chan *dtc; 883 - 884 - list_for_each_entry(dtc, &info->channels, node) { 885 - struct dmatest_thread *thread; 886 - 887 - list_for_each_entry(thread, &dtc->threads, node) { 888 - if (!thread->done) 889 - return true; 890 - } 891 - } 892 - 893 - return false; 894 - } 895 - 896 - static ssize_t dtf_read_run(struct file *file, char __user *user_buf, 897 - size_t count, loff_t *ppos) 898 - { 899 - struct dmatest_info *info = file->private_data; 900 - char buf[3]; 1064 + struct dmatest_info *info = &test_info; 901 1065 902 1066 mutex_lock(&info->lock); 903 - 904 - if (__is_threaded_test_run(info)) { 905 - buf[0] = 'Y'; 1067 + if (is_threaded_test_run(info)) { 1068 + dmatest_run = true; 906 1069 } else { 907 - __stop_threaded_test(info); 908 - buf[0] = 'N'; 1070 + stop_threaded_test(info); 1071 + dmatest_run = false; 909 1072 } 1073 + mutex_unlock(&info->lock); 1074 + 1075 + return param_get_bool(val, kp); 1076 + } 1077 + 1078 + static int dmatest_run_set(const char *val, const struct kernel_param *kp) 1079 + { 1080 + struct dmatest_info *info = &test_info; 1081 + int ret; 1082 + 1083 + mutex_lock(&info->lock); 1084 + ret = param_set_bool(val, kp); 1085 + if (ret) { 1086 + mutex_unlock(&info->lock); 1087 + return ret; 1088 + } 1089 + 1090 + if (is_threaded_test_run(info)) 1091 + ret = -EBUSY; 1092 + else if (dmatest_run) 1093 + restart_threaded_test(info, dmatest_run); 910 1094 911 1095 mutex_unlock(&info->lock); 912 - buf[1] = '\n'; 913 - buf[2] = 0x00; 914 - return simple_read_from_buffer(user_buf, count, ppos, buf, 2); 915 - } 916 1096 917 - static ssize_t dtf_write_run(struct file *file, const char __user *user_buf, 918 - size_t count, loff_t *ppos) 919 - { 920 - struct dmatest_info *info = file->private_data; 921 - char buf[16]; 922 - bool bv; 923 - int ret = 0; 924 - 925 - if (copy_from_user(buf, user_buf, min(count, (sizeof(buf) - 1)))) 926 - return -EFAULT; 927 - 928 - if (strtobool(buf, &bv) == 0) { 929 - 
mutex_lock(&info->lock); 930 - 931 - if (__is_threaded_test_run(info)) 932 - ret = -EBUSY; 933 - else 934 - ret = __restart_threaded_test(info, bv); 935 - 936 - mutex_unlock(&info->lock); 937 - } 938 - 939 - return ret ? ret : count; 940 - } 941 - 942 - static const struct file_operations dtf_run_fops = { 943 - .read = dtf_read_run, 944 - .write = dtf_write_run, 945 - .open = simple_open, 946 - .llseek = default_llseek, 947 - }; 948 - 949 - static int dtf_results_show(struct seq_file *sf, void *data) 950 - { 951 - struct dmatest_info *info = sf->private; 952 - struct dmatest_result *result; 953 - struct dmatest_thread_result *tr; 954 - unsigned int i; 955 - 956 - mutex_lock(&info->results_lock); 957 - list_for_each_entry(result, &info->results, node) { 958 - list_for_each_entry(tr, &result->results, node) { 959 - seq_printf(sf, "%s\n", 960 - thread_result_get(result->name, tr)); 961 - if (tr->type == DMATEST_ET_VERIFY_BUF) { 962 - for (i = 0; i < tr->vr->error_count; i++) { 963 - seq_printf(sf, "\t%s\n", 964 - verify_result_get_one(tr->vr, i)); 965 - } 966 - } 967 - } 968 - } 969 - 970 - mutex_unlock(&info->results_lock); 971 - return 0; 972 - } 973 - 974 - static int dtf_results_open(struct inode *inode, struct file *file) 975 - { 976 - return single_open(file, dtf_results_show, inode->i_private); 977 - } 978 - 979 - static const struct file_operations dtf_results_fops = { 980 - .open = dtf_results_open, 981 - .read = seq_read, 982 - .llseek = seq_lseek, 983 - .release = single_release, 984 - }; 985 - 986 - static int dmatest_register_dbgfs(struct dmatest_info *info) 987 - { 988 - struct dentry *d; 989 - 990 - d = debugfs_create_dir("dmatest", NULL); 991 - if (IS_ERR(d)) 992 - return PTR_ERR(d); 993 - if (!d) 994 - goto err_root; 995 - 996 - info->root = d; 997 - 998 - /* Run or stop threaded test */ 999 - debugfs_create_file("run", S_IWUSR | S_IRUGO, info->root, info, 1000 - &dtf_run_fops); 1001 - 1002 - /* Results of test in progress */ 1003 - 
debugfs_create_file("results", S_IRUGO, info->root, info, 1004 - &dtf_results_fops); 1005 - 1006 - return 0; 1007 - 1008 - err_root: 1009 - pr_err("dmatest: Failed to initialize debugfs\n"); 1010 - return -ENOMEM; 1097 + return ret; 1011 1098 } 1012 1099 1013 1100 static int __init dmatest_init(void) 1014 1101 { 1015 1102 struct dmatest_info *info = &test_info; 1016 - int ret; 1103 + struct dmatest_params *params = &info->params; 1017 1104 1018 - memset(info, 0, sizeof(*info)); 1105 + if (dmatest_run) { 1106 + mutex_lock(&info->lock); 1107 + run_threaded_test(info); 1108 + mutex_unlock(&info->lock); 1109 + } 1019 1110 1020 - mutex_init(&info->lock); 1021 - INIT_LIST_HEAD(&info->channels); 1111 + if (params->iterations && wait) 1112 + wait_event(thread_wait, !is_threaded_test_run(info)); 1022 1113 1023 - mutex_init(&info->results_lock); 1024 - INIT_LIST_HEAD(&info->results); 1114 + /* module parameters are stable, inittime tests are started, 1115 + * let userspace take over 'run' control 1116 + */ 1117 + info->did_init = true; 1025 1118 1026 - ret = dmatest_register_dbgfs(info); 1027 - if (ret) 1028 - return ret; 1029 - 1030 - #ifdef MODULE 1031 1119 return 0; 1032 - #else 1033 - return run_threaded_test(info); 1034 - #endif 1035 1120 } 1036 1121 /* when compiled-in wait for drivers to load first */ 1037 1122 late_initcall(dmatest_init); ··· 976 1189 { 977 1190 struct dmatest_info *info = &test_info; 978 1191 979 - debugfs_remove_recursive(info->root); 1192 + mutex_lock(&info->lock); 980 1193 stop_threaded_test(info); 981 - result_free(info, NULL); 1194 + mutex_unlock(&info->lock); 982 1195 } 983 1196 module_exit(dmatest_exit); 984 1197
+3 -26
drivers/dma/dw/core.c
···
 {
 	return &chan->dev->device;
 }
-static struct device *chan2parent(struct dma_chan *chan)
-{
-	return chan->dev->device.parent;
-}
 
 static struct dw_desc *dwc_first_active(struct dw_dma_chan *dwc)
 {
···
 	list_splice_init(&desc->tx_list, &dwc->free_list);
 	list_move(&desc->desc_node, &dwc->free_list);
 
-	if (!is_slave_direction(dwc->direction)) {
-		struct device *parent = chan2parent(&dwc->chan);
-		if (!(txd->flags & DMA_COMPL_SKIP_DEST_UNMAP)) {
-			if (txd->flags & DMA_COMPL_DEST_UNMAP_SINGLE)
-				dma_unmap_single(parent, desc->lli.dar,
-					desc->total_len, DMA_FROM_DEVICE);
-			else
-				dma_unmap_page(parent, desc->lli.dar,
-					desc->total_len, DMA_FROM_DEVICE);
-		}
-		if (!(txd->flags & DMA_COMPL_SKIP_SRC_UNMAP)) {
-			if (txd->flags & DMA_COMPL_SRC_UNMAP_SINGLE)
-				dma_unmap_single(parent, desc->lli.sar,
-					desc->total_len, DMA_TO_DEVICE);
-			else
-				dma_unmap_page(parent, desc->lli.sar,
-					desc->total_len, DMA_TO_DEVICE);
-		}
-	}
-
+	dma_descriptor_unmap(txd);
 	spin_unlock_irqrestore(&dwc->lock, flags);
 
 	if (callback)
···
 	enum dma_status ret;
 
 	ret = dma_cookie_status(chan, cookie, txstate);
-	if (ret == DMA_SUCCESS)
+	if (ret == DMA_COMPLETE)
 		return ret;
 
 	dwc_scan_descriptors(to_dw_dma(chan->device), dwc);
 
 	ret = dma_cookie_status(chan, cookie, txstate);
-	if (ret != DMA_SUCCESS)
+	if (ret != DMA_COMPLETE)
 		dma_set_residue(txstate, dwc_get_residue(dwc));
 
 	if (dwc->paused && ret == DMA_IN_PROGRESS)
+285 -82
drivers/dma/edma.c
··· 46 46 #define EDMA_CHANS 64 47 47 #endif /* CONFIG_ARCH_DAVINCI_DA8XX */ 48 48 49 - /* Max of 16 segments per channel to conserve PaRAM slots */ 50 - #define MAX_NR_SG 16 49 + /* 50 + * Max of 20 segments per channel to conserve PaRAM slots 51 + * Also note that MAX_NR_SG should be atleast the no.of periods 52 + * that are required for ASoC, otherwise DMA prep calls will 53 + * fail. Today davinci-pcm is the only user of this driver and 54 + * requires atleast 17 slots, so we setup the default to 20. 55 + */ 56 + #define MAX_NR_SG 20 51 57 #define EDMA_MAX_SLOTS MAX_NR_SG 52 58 #define EDMA_DESCRIPTORS 16 53 59 54 60 struct edma_desc { 55 61 struct virt_dma_desc vdesc; 56 62 struct list_head node; 63 + int cyclic; 57 64 int absync; 58 65 int pset_nr; 59 66 int processed; ··· 174 167 * then setup a link to the dummy slot, this results in all future 175 168 * events being absorbed and that's OK because we're done 176 169 */ 177 - if (edesc->processed == edesc->pset_nr) 178 - edma_link(echan->slot[nslots-1], echan->ecc->dummy_slot); 170 + if (edesc->processed == edesc->pset_nr) { 171 + if (edesc->cyclic) 172 + edma_link(echan->slot[nslots-1], echan->slot[1]); 173 + else 174 + edma_link(echan->slot[nslots-1], 175 + echan->ecc->dummy_slot); 176 + } 179 177 180 178 edma_resume(echan->ch_num); 181 179 ··· 262 250 return ret; 263 251 } 264 252 253 + /* 254 + * A PaRAM set configuration abstraction used by other modes 255 + * @chan: Channel who's PaRAM set we're configuring 256 + * @pset: PaRAM set to initialize and setup. 
257 + * @src_addr: Source address of the DMA 258 + * @dst_addr: Destination address of the DMA 259 + * @burst: In units of dev_width, how much to send 260 + * @dev_width: How much is the dev_width 261 + * @dma_length: Total length of the DMA transfer 262 + * @direction: Direction of the transfer 263 + */ 264 + static int edma_config_pset(struct dma_chan *chan, struct edmacc_param *pset, 265 + dma_addr_t src_addr, dma_addr_t dst_addr, u32 burst, 266 + enum dma_slave_buswidth dev_width, unsigned int dma_length, 267 + enum dma_transfer_direction direction) 268 + { 269 + struct edma_chan *echan = to_edma_chan(chan); 270 + struct device *dev = chan->device->dev; 271 + int acnt, bcnt, ccnt, cidx; 272 + int src_bidx, dst_bidx, src_cidx, dst_cidx; 273 + int absync; 274 + 275 + acnt = dev_width; 276 + /* 277 + * If the maxburst is equal to the fifo width, use 278 + * A-synced transfers. This allows for large contiguous 279 + * buffer transfers using only one PaRAM set. 280 + */ 281 + if (burst == 1) { 282 + /* 283 + * For the A-sync case, bcnt and ccnt are the remainder 284 + * and quotient respectively of the division of: 285 + * (dma_length / acnt) by (SZ_64K -1). This is so 286 + * that in case bcnt over flows, we have ccnt to use. 287 + * Note: In A-sync tranfer only, bcntrld is used, but it 288 + * only applies for sg_dma_len(sg) >= SZ_64K. 289 + * In this case, the best way adopted is- bccnt for the 290 + * first frame will be the remainder below. Then for 291 + * every successive frame, bcnt will be SZ_64K-1. This 292 + * is assured as bcntrld = 0xffff in end of function. 293 + */ 294 + absync = false; 295 + ccnt = dma_length / acnt / (SZ_64K - 1); 296 + bcnt = dma_length / acnt - ccnt * (SZ_64K - 1); 297 + /* 298 + * If bcnt is non-zero, we have a remainder and hence an 299 + * extra frame to transfer, so increment ccnt. 
300 + */ 301 + if (bcnt) 302 + ccnt++; 303 + else 304 + bcnt = SZ_64K - 1; 305 + cidx = acnt; 306 + } else { 307 + /* 308 + * If maxburst is greater than the fifo address_width, 309 + * use AB-synced transfers where A count is the fifo 310 + * address_width and B count is the maxburst. In this 311 + * case, we are limited to transfers of C count frames 312 + * of (address_width * maxburst) where C count is limited 313 + * to SZ_64K-1. This places an upper bound on the length 314 + * of an SG segment that can be handled. 315 + */ 316 + absync = true; 317 + bcnt = burst; 318 + ccnt = dma_length / (acnt * bcnt); 319 + if (ccnt > (SZ_64K - 1)) { 320 + dev_err(dev, "Exceeded max SG segment size\n"); 321 + return -EINVAL; 322 + } 323 + cidx = acnt * bcnt; 324 + } 325 + 326 + if (direction == DMA_MEM_TO_DEV) { 327 + src_bidx = acnt; 328 + src_cidx = cidx; 329 + dst_bidx = 0; 330 + dst_cidx = 0; 331 + } else if (direction == DMA_DEV_TO_MEM) { 332 + src_bidx = 0; 333 + src_cidx = 0; 334 + dst_bidx = acnt; 335 + dst_cidx = cidx; 336 + } else { 337 + dev_err(dev, "%s: direction not implemented yet\n", __func__); 338 + return -EINVAL; 339 + } 340 + 341 + pset->opt = EDMA_TCC(EDMA_CHAN_SLOT(echan->ch_num)); 342 + /* Configure A or AB synchronized transfers */ 343 + if (absync) 344 + pset->opt |= SYNCDIM; 345 + 346 + pset->src = src_addr; 347 + pset->dst = dst_addr; 348 + 349 + pset->src_dst_bidx = (dst_bidx << 16) | src_bidx; 350 + pset->src_dst_cidx = (dst_cidx << 16) | src_cidx; 351 + 352 + pset->a_b_cnt = bcnt << 16 | acnt; 353 + pset->ccnt = ccnt; 354 + /* 355 + * Only time when (bcntrld) auto reload is required is for 356 + * A-sync case, and in this case, a requirement of reload value 357 + * of SZ_64K-1 only is assured. 'link' is initially set to NULL 358 + * and then later will be populated by edma_execute. 
359 + */ 360 + pset->link_bcntrld = 0xffffffff; 361 + return absync; 362 + } 363 + 265 364 static struct dma_async_tx_descriptor *edma_prep_slave_sg( 266 365 struct dma_chan *chan, struct scatterlist *sgl, 267 366 unsigned int sg_len, enum dma_transfer_direction direction, ··· 381 258 struct edma_chan *echan = to_edma_chan(chan); 382 259 struct device *dev = chan->device->dev; 383 260 struct edma_desc *edesc; 384 - dma_addr_t dev_addr; 261 + dma_addr_t src_addr = 0, dst_addr = 0; 385 262 enum dma_slave_buswidth dev_width; 386 263 u32 burst; 387 264 struct scatterlist *sg; 388 - int acnt, bcnt, ccnt, src, dst, cidx; 389 - int src_bidx, dst_bidx, src_cidx, dst_cidx; 390 - int i, nslots; 265 + int i, nslots, ret; 391 266 392 267 if (unlikely(!echan || !sgl || !sg_len)) 393 268 return NULL; 394 269 395 270 if (direction == DMA_DEV_TO_MEM) { 396 - dev_addr = echan->cfg.src_addr; 271 + src_addr = echan->cfg.src_addr; 397 272 dev_width = echan->cfg.src_addr_width; 398 273 burst = echan->cfg.src_maxburst; 399 274 } else if (direction == DMA_MEM_TO_DEV) { 400 - dev_addr = echan->cfg.dst_addr; 275 + dst_addr = echan->cfg.dst_addr; 401 276 dev_width = echan->cfg.dst_addr_width; 402 277 burst = echan->cfg.dst_maxburst; 403 278 } else { ··· 428 307 if (echan->slot[i] < 0) { 429 308 kfree(edesc); 430 309 dev_err(dev, "Failed to allocate slot\n"); 431 - kfree(edesc); 432 310 return NULL; 433 311 } 434 312 } ··· 435 315 436 316 /* Configure PaRAM sets for each SG */ 437 317 for_each_sg(sgl, sg, sg_len, i) { 318 + /* Get address for each SG */ 319 + if (direction == DMA_DEV_TO_MEM) 320 + dst_addr = sg_dma_address(sg); 321 + else 322 + src_addr = sg_dma_address(sg); 438 323 439 - acnt = dev_width; 440 - 441 - /* 442 - * If the maxburst is equal to the fifo width, use 443 - * A-synced transfers. This allows for large contiguous 444 - * buffer transfers using only one PaRAM set. 
445 - */ 446 - if (burst == 1) { 447 - edesc->absync = false; 448 - ccnt = sg_dma_len(sg) / acnt / (SZ_64K - 1); 449 - bcnt = sg_dma_len(sg) / acnt - ccnt * (SZ_64K - 1); 450 - if (bcnt) 451 - ccnt++; 452 - else 453 - bcnt = SZ_64K - 1; 454 - cidx = acnt; 455 - /* 456 - * If maxburst is greater than the fifo address_width, 457 - * use AB-synced transfers where A count is the fifo 458 - * address_width and B count is the maxburst. In this 459 - * case, we are limited to transfers of C count frames 460 - * of (address_width * maxburst) where C count is limited 461 - * to SZ_64K-1. This places an upper bound on the length 462 - * of an SG segment that can be handled. 463 - */ 464 - } else { 465 - edesc->absync = true; 466 - bcnt = burst; 467 - ccnt = sg_dma_len(sg) / (acnt * bcnt); 468 - if (ccnt > (SZ_64K - 1)) { 469 - dev_err(dev, "Exceeded max SG segment size\n"); 470 - kfree(edesc); 471 - return NULL; 472 - } 473 - cidx = acnt * bcnt; 324 + ret = edma_config_pset(chan, &edesc->pset[i], src_addr, 325 + dst_addr, burst, dev_width, 326 + sg_dma_len(sg), direction); 327 + if (ret < 0) { 328 + kfree(edesc); 329 + return NULL; 474 330 } 475 331 476 - if (direction == DMA_MEM_TO_DEV) { 477 - src = sg_dma_address(sg); 478 - dst = dev_addr; 479 - src_bidx = acnt; 480 - src_cidx = cidx; 481 - dst_bidx = 0; 482 - dst_cidx = 0; 483 - } else { 484 - src = dev_addr; 485 - dst = sg_dma_address(sg); 486 - src_bidx = 0; 487 - src_cidx = 0; 488 - dst_bidx = acnt; 489 - dst_cidx = cidx; 490 - } 491 - 492 - edesc->pset[i].opt = EDMA_TCC(EDMA_CHAN_SLOT(echan->ch_num)); 493 - /* Configure A or AB synchronized transfers */ 494 - if (edesc->absync) 495 - edesc->pset[i].opt |= SYNCDIM; 332 + edesc->absync = ret; 496 333 497 334 /* If this is the last in a current SG set of transactions, 498 335 enable interrupts so that next set is processed */ ··· 459 382 /* If this is the last set, enable completion interrupt flag */ 460 383 if (i == sg_len - 1) 461 384 edesc->pset[i].opt |= TCINTEN; 
385 + } 462 386 463 - edesc->pset[i].src = src; 464 - edesc->pset[i].dst = dst; 387 + return vchan_tx_prep(&echan->vchan, &edesc->vdesc, tx_flags); 388 + } 465 389 466 - edesc->pset[i].src_dst_bidx = (dst_bidx << 16) | src_bidx; 467 - edesc->pset[i].src_dst_cidx = (dst_cidx << 16) | src_cidx; 390 + static struct dma_async_tx_descriptor *edma_prep_dma_cyclic( 391 + struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len, 392 + size_t period_len, enum dma_transfer_direction direction, 393 + unsigned long tx_flags, void *context) 394 + { 395 + struct edma_chan *echan = to_edma_chan(chan); 396 + struct device *dev = chan->device->dev; 397 + struct edma_desc *edesc; 398 + dma_addr_t src_addr, dst_addr; 399 + enum dma_slave_buswidth dev_width; 400 + u32 burst; 401 + int i, ret, nslots; 468 402 469 - edesc->pset[i].a_b_cnt = bcnt << 16 | acnt; 470 - edesc->pset[i].ccnt = ccnt; 471 - edesc->pset[i].link_bcntrld = 0xffffffff; 403 + if (unlikely(!echan || !buf_len || !period_len)) 404 + return NULL; 472 405 406 + if (direction == DMA_DEV_TO_MEM) { 407 + src_addr = echan->cfg.src_addr; 408 + dst_addr = buf_addr; 409 + dev_width = echan->cfg.src_addr_width; 410 + burst = echan->cfg.src_maxburst; 411 + } else if (direction == DMA_MEM_TO_DEV) { 412 + src_addr = buf_addr; 413 + dst_addr = echan->cfg.dst_addr; 414 + dev_width = echan->cfg.dst_addr_width; 415 + burst = echan->cfg.dst_maxburst; 416 + } else { 417 + dev_err(dev, "%s: bad direction?\n", __func__); 418 + return NULL; 419 + } 420 + 421 + if (dev_width == DMA_SLAVE_BUSWIDTH_UNDEFINED) { 422 + dev_err(dev, "Undefined slave buswidth\n"); 423 + return NULL; 424 + } 425 + 426 + if (unlikely(buf_len % period_len)) { 427 + dev_err(dev, "Period should be multiple of Buffer length\n"); 428 + return NULL; 429 + } 430 + 431 + nslots = (buf_len / period_len) + 1; 432 + 433 + /* 434 + * Cyclic DMA users such as audio cannot tolerate delays introduced 435 + * by cases where the number of periods is more than the maximum 436 + * 
+ * number of SGs the EDMA driver can handle at a time. For DMA types
+ * such as Slave SGs, such delays are tolerable and synchronized,
+ * but the synchronization is difficult to achieve with Cyclic and
+ * cannot be guaranteed, so we error out early.
+ */
+ if (nslots > MAX_NR_SG)
+     return NULL;
+
+ edesc = kzalloc(sizeof(*edesc) + nslots *
+     sizeof(edesc->pset[0]), GFP_ATOMIC);
+ if (!edesc) {
+     dev_dbg(dev, "Failed to allocate a descriptor\n");
+     return NULL;
+ }
+
+ edesc->cyclic = 1;
+ edesc->pset_nr = nslots;
+
+ dev_dbg(dev, "%s: nslots=%d\n", __func__, nslots);
+ dev_dbg(dev, "%s: period_len=%d\n", __func__, period_len);
+ dev_dbg(dev, "%s: buf_len=%d\n", __func__, buf_len);
+
+ for (i = 0; i < nslots; i++) {
+     /* Allocate a PaRAM slot, if needed */
+     if (echan->slot[i] < 0) {
+         echan->slot[i] =
+             edma_alloc_slot(EDMA_CTLR(echan->ch_num),
+                     EDMA_SLOT_ANY);
+         if (echan->slot[i] < 0) {
+             dev_err(dev, "Failed to allocate slot\n");
+             return NULL;
+         }
+     }
+
+     if (i == nslots - 1) {
+         memcpy(&edesc->pset[i], &edesc->pset[0],
+                sizeof(edesc->pset[0]));
+         break;
+     }
+
+     ret = edma_config_pset(chan, &edesc->pset[i], src_addr,
+                dst_addr, burst, dev_width, period_len,
+                direction);
+     if (ret < 0)
+         return NULL;
+
+     if (direction == DMA_DEV_TO_MEM)
+         dst_addr += period_len;
+     else
+         src_addr += period_len;
+
+     dev_dbg(dev, "%s: Configure period %d of buf:\n", __func__, i);
+     dev_dbg(dev,
+         "\n pset[%d]:\n"
+         " chnum\t%d\n"
+         " slot\t%d\n"
+         " opt\t%08x\n"
+         " src\t%08x\n"
+         " dst\t%08x\n"
+         " abcnt\t%08x\n"
+         " ccnt\t%08x\n"
+         " bidx\t%08x\n"
+         " cidx\t%08x\n"
+         " lkrld\t%08x\n",
+         i, echan->ch_num, echan->slot[i],
+         edesc->pset[i].opt,
+         edesc->pset[i].src,
+         edesc->pset[i].dst,
+         edesc->pset[i].a_b_cnt,
+         edesc->pset[i].ccnt,
+         edesc->pset[i].src_dst_bidx,
+         edesc->pset[i].src_dst_cidx,
+         edesc->pset[i].link_bcntrld);
+
+     edesc->absync = ret;
+
+     /*
+      * Enable interrupts for every period because callback
+      * has to be called for every period.
+      */
+     edesc->pset[i].opt |= TCINTEN;
  }

  return vchan_tx_prep(&echan->vchan, &edesc->vdesc, tx_flags);
···
  unsigned long flags;
  struct edmacc_param p;

- /* Pause the channel */
- edma_pause(echan->ch_num);
+ edesc = echan->edesc;
+
+ /* Pause the channel for non-cyclic */
+ if (!edesc || (edesc && !edesc->cyclic))
+     edma_pause(echan->ch_num);

  switch (ch_status) {
- case DMA_COMPLETE:
+ case EDMA_DMA_COMPLETE:
      spin_lock_irqsave(&echan->vchan.lock, flags);

-     edesc = echan->edesc;
      if (edesc) {
-         if (edesc->processed == edesc->pset_nr) {
+         if (edesc->cyclic) {
+             vchan_cyclic_callback(&edesc->vdesc);
+         } else if (edesc->processed == edesc->pset_nr) {
              dev_dbg(dev, "Transfer complete, stopping channel %d\n", ch_num);
              edma_stop(echan->ch_num);
              vchan_cookie_complete(&edesc->vdesc);
+             edma_execute(echan);
          } else {
              dev_dbg(dev, "Intermediate transfer complete on channel %d\n", ch_num);
+             edma_execute(echan);
          }
-
-         edma_execute(echan);
      }

      spin_unlock_irqrestore(&echan->vchan.lock, flags);

      break;
- case DMA_CC_ERROR:
+ case EDMA_DMA_CC_ERROR:
      spin_lock_irqsave(&echan->vchan.lock, flags);

      edma_read_slot(EDMA_CHAN_SLOT(echan->slot[0]), &p);
···
  unsigned long flags;

  ret = dma_cookie_status(chan, cookie, txstate);
- if (ret == DMA_SUCCESS || !txstate)
+ if (ret == DMA_COMPLETE || !txstate)
      return ret;

  spin_lock_irqsave(&echan->vchan.lock, flags);
···
              struct device *dev)
  {
      dma->device_prep_slave_sg = edma_prep_slave_sg;
+     dma->device_prep_dma_cyclic = edma_prep_dma_cyclic;
      dma->device_alloc_chan_resources = edma_alloc_chan_resources;
      dma->device_free_chan_resources = edma_free_chan_resources;
      dma->device_issue_pending = edma_issue_pending;
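The cyclic prep loop above advances the memory-side address by `period_len` for each period while the device-side address stays fixed, and the last PaRAM slot duplicates the first so the transfer loops forever. The per-period address arithmetic can be sketched standalone (a hypothetical helper, not part of the driver):

```c
#include <assert.h>

typedef unsigned int u32;

/* cyclic_period_addr() mirrors the "dst_addr += period_len" /
 * "src_addr += period_len" step above: period i of a cyclic buffer
 * starting at 'buf' lives at buf + i * period_len. */
static u32 cyclic_period_addr(u32 buf, u32 period_len, unsigned int i)
{
    return buf + i * period_len;
}
```

For DMA_DEV_TO_MEM the helper models the destination side; for DMA_MEM_TO_DEV, the source side.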
+1 -29
drivers/dma/ep93xx_dma.c
···
      spin_unlock_irqrestore(&edmac->lock, flags);
  }

- static void ep93xx_dma_unmap_buffers(struct ep93xx_dma_desc *desc)
- {
-     struct device *dev = desc->txd.chan->device->dev;
-
-     if (!(desc->txd.flags & DMA_COMPL_SKIP_SRC_UNMAP)) {
-         if (desc->txd.flags & DMA_COMPL_SRC_UNMAP_SINGLE)
-             dma_unmap_single(dev, desc->src_addr, desc->size,
-                      DMA_TO_DEVICE);
-         else
-             dma_unmap_page(dev, desc->src_addr, desc->size,
-                        DMA_TO_DEVICE);
-     }
-     if (!(desc->txd.flags & DMA_COMPL_SKIP_DEST_UNMAP)) {
-         if (desc->txd.flags & DMA_COMPL_DEST_UNMAP_SINGLE)
-             dma_unmap_single(dev, desc->dst_addr, desc->size,
-                      DMA_FROM_DEVICE);
-         else
-             dma_unmap_page(dev, desc->dst_addr, desc->size,
-                        DMA_FROM_DEVICE);
-     }
- }
-
  static void ep93xx_dma_tasklet(unsigned long data)
  {
      struct ep93xx_dma_chan *edmac = (struct ep93xx_dma_chan *)data;
···
      /* Now we can release all the chained descriptors */
      list_for_each_entry_safe(desc, d, &list, node) {
-         /*
-          * For the memcpy channels the API requires us to unmap the
-          * buffers unless requested otherwise.
-          */
-         if (!edmac->chan.private)
-             ep93xx_dma_unmap_buffers(desc);
-
+         dma_descriptor_unmap(&desc->txd);
          ep93xx_dma_desc_put(edmac, desc);
      }
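The per-driver unmap code deleted here moves into the core's `dma_descriptor_unmap()`. Under the reworked scheme the unmap data records how many addresses were mapped in each direction; a standalone illustration of that bookkeeping (my own sketch of the direction lookup, not the kernel API) looks like:

```c
#include <assert.h>

enum unmap_dir_kind { UNMAP_TO_DEVICE, UNMAP_FROM_DEVICE, UNMAP_BIDIRECTIONAL };

/* Assuming addresses are packed as [to-device..][from-device..][bidi..],
 * the unmap direction for entry i falls out of two comparisons. */
static enum unmap_dir_kind unmap_dir(unsigned int to_cnt,
                                     unsigned int from_cnt,
                                     unsigned int i)
{
    if (i < to_cnt)
        return UNMAP_TO_DEVICE;       /* buffer the device read from */
    if (i < to_cnt + from_cnt)
        return UNMAP_FROM_DEVICE;     /* buffer the device wrote to */
    return UNMAP_BIDIRECTIONAL;       /* e.g. xor/pq validate destinations */
}
```

This is why drivers like ep93xx can shrink to a single `dma_descriptor_unmap()` call: the direction decisions no longer depend on per-descriptor `DMA_COMPL_*` flags.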
+7 -19
drivers/dma/fsldma.c
···
      /* Run any dependencies */
      dma_run_dependencies(txd);

-     /* Unmap the dst buffer, if requested */
-     if (!(txd->flags & DMA_COMPL_SKIP_DEST_UNMAP)) {
-         if (txd->flags & DMA_COMPL_DEST_UNMAP_SINGLE)
-             dma_unmap_single(dev, dst, len, DMA_FROM_DEVICE);
-         else
-             dma_unmap_page(dev, dst, len, DMA_FROM_DEVICE);
-     }
-
-     /* Unmap the src buffer, if requested */
-     if (!(txd->flags & DMA_COMPL_SKIP_SRC_UNMAP)) {
-         if (txd->flags & DMA_COMPL_SRC_UNMAP_SINGLE)
-             dma_unmap_single(dev, src, len, DMA_TO_DEVICE);
-         else
-             dma_unmap_page(dev, src, len, DMA_TO_DEVICE);
-     }
-
+     dma_descriptor_unmap(txd);
  #ifdef FSL_DMA_LD_DEBUG
      chan_dbg(chan, "LD %p free\n", desc);
  #endif
···
      WARN_ON(fdev->feature != chan->feature);

      chan->dev = fdev->dev;
-     chan->id = ((res.start - 0x100) & 0xfff) >> 7;
+     chan->id = (res.start & 0xfff) < 0x300 ?
+            ((res.start - 0x100) & 0xfff) >> 7 :
+            ((res.start - 0x200) & 0xfff) >> 7;
      if (chan->id >= FSL_DMA_MAX_CHANS_PER_DEVICE) {
          dev_err(fdev->dev, "too many channels for device\n");
          err = -EINVAL;
···
  }

  static const struct of_device_id fsldma_of_ids[] = {
+     { .compatible = "fsl,elo3-dma", },
      { .compatible = "fsl,eloplus-dma", },
      { .compatible = "fsl,elo-dma", },
      {}
···
  static __init int fsldma_init(void)
  {
-     pr_info("Freescale Elo / Elo Plus DMA driver\n");
+     pr_info("Freescale Elo series DMA driver\n");
      return platform_driver_register(&fsldma_of_driver);
  }
···
  subsys_initcall(fsldma_init);
  module_exit(fsldma_exit);

- MODULE_DESCRIPTION("Freescale Elo / Elo Plus DMA driver");
+ MODULE_DESCRIPTION("Freescale Elo series DMA driver");
  MODULE_LICENSE("GPL");
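The new `chan->id` ternary covers the 8-channel Elo3 layout: channel register blocks are 0x80 apart, and the arithmetic implies a second bank whose first block sits past offset 0x300 (apparently at 0x400), offset back by 0x200 instead of 0x100. The mapping can be checked standalone:

```c
#include <assert.h>

typedef unsigned int u32;

/* Same expression as the fsldma probe change above, lifted out for
 * inspection: map a channel register block offset to a channel id. */
static int fsl_chan_id(u32 start)
{
    return (start & 0xfff) < 0x300 ?
           (int)(((start - 0x100) & 0xfff) >> 7) :
           (int)(((start - 0x200) & 0xfff) >> 7);
}
```

So the first bank (0x100, 0x180, 0x200, 0x280) yields ids 0-3 and the second bank continues at 4, which is why `FSL_DMA_MAX_CHANS_PER_DEVICE` grows to 8 in fsldma.h below.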
+1 -1
drivers/dma/fsldma.h
···
  };

  struct fsldma_chan;
- #define FSL_DMA_MAX_CHANS_PER_DEVICE 4
+ #define FSL_DMA_MAX_CHANS_PER_DEVICE 8

  struct fsldma_device {
      void __iomem *regs; /* DGSR register base */
+24 -18
drivers/dma/imx-dma.c
··· 572 572 573 573 imx_dmav1_writel(imxdma, d->len, DMA_CNTR(imxdmac->channel)); 574 574 575 - dev_dbg(imxdma->dev, "%s channel: %d dest=0x%08x src=0x%08x " 576 - "dma_length=%d\n", __func__, imxdmac->channel, 577 - d->dest, d->src, d->len); 575 + dev_dbg(imxdma->dev, 576 + "%s channel: %d dest=0x%08llx src=0x%08llx dma_length=%zu\n", 577 + __func__, imxdmac->channel, 578 + (unsigned long long)d->dest, 579 + (unsigned long long)d->src, d->len); 578 580 579 581 break; 580 582 /* Cyclic transfer is the same as slave_sg with special sg configuration. */ ··· 588 586 imx_dmav1_writel(imxdma, imxdmac->ccr_from_device, 589 587 DMA_CCR(imxdmac->channel)); 590 588 591 - dev_dbg(imxdma->dev, "%s channel: %d sg=%p sgcount=%d " 592 - "total length=%d dev_addr=0x%08x (dev2mem)\n", 593 - __func__, imxdmac->channel, d->sg, d->sgcount, 594 - d->len, imxdmac->per_address); 589 + dev_dbg(imxdma->dev, 590 + "%s channel: %d sg=%p sgcount=%d total length=%zu dev_addr=0x%08llx (dev2mem)\n", 591 + __func__, imxdmac->channel, 592 + d->sg, d->sgcount, d->len, 593 + (unsigned long long)imxdmac->per_address); 595 594 } else if (d->direction == DMA_MEM_TO_DEV) { 596 595 imx_dmav1_writel(imxdma, imxdmac->per_address, 597 596 DMA_DAR(imxdmac->channel)); 598 597 imx_dmav1_writel(imxdma, imxdmac->ccr_to_device, 599 598 DMA_CCR(imxdmac->channel)); 600 599 601 - dev_dbg(imxdma->dev, "%s channel: %d sg=%p sgcount=%d " 602 - "total length=%d dev_addr=0x%08x (mem2dev)\n", 603 - __func__, imxdmac->channel, d->sg, d->sgcount, 604 - d->len, imxdmac->per_address); 600 + dev_dbg(imxdma->dev, 601 + "%s channel: %d sg=%p sgcount=%d total length=%zu dev_addr=0x%08llx (mem2dev)\n", 602 + __func__, imxdmac->channel, 603 + d->sg, d->sgcount, d->len, 604 + (unsigned long long)imxdmac->per_address); 605 605 } else { 606 606 dev_err(imxdma->dev, "%s channel: %d bad dma mode\n", 607 607 __func__, imxdmac->channel); ··· 775 771 desc->desc.tx_submit = imxdma_tx_submit; 776 772 /* txd.flags will be overwritten in prep 
funcs */ 777 773 desc->desc.flags = DMA_CTRL_ACK; 778 - desc->status = DMA_SUCCESS; 774 + desc->status = DMA_COMPLETE; 779 775 780 776 list_add_tail(&desc->node, &imxdmac->ld_free); 781 777 imxdmac->descs_allocated++; ··· 874 870 int i; 875 871 unsigned int periods = buf_len / period_len; 876 872 877 - dev_dbg(imxdma->dev, "%s channel: %d buf_len=%d period_len=%d\n", 873 + dev_dbg(imxdma->dev, "%s channel: %d buf_len=%zu period_len=%zu\n", 878 874 __func__, imxdmac->channel, buf_len, period_len); 879 875 880 876 if (list_empty(&imxdmac->ld_free) || ··· 930 926 struct imxdma_engine *imxdma = imxdmac->imxdma; 931 927 struct imxdma_desc *desc; 932 928 933 - dev_dbg(imxdma->dev, "%s channel: %d src=0x%x dst=0x%x len=%d\n", 934 - __func__, imxdmac->channel, src, dest, len); 929 + dev_dbg(imxdma->dev, "%s channel: %d src=0x%llx dst=0x%llx len=%zu\n", 930 + __func__, imxdmac->channel, (unsigned long long)src, 931 + (unsigned long long)dest, len); 935 932 936 933 if (list_empty(&imxdmac->ld_free) || 937 934 imxdma_chan_is_doing_cyclic(imxdmac)) ··· 961 956 struct imxdma_engine *imxdma = imxdmac->imxdma; 962 957 struct imxdma_desc *desc; 963 958 964 - dev_dbg(imxdma->dev, "%s channel: %d src_start=0x%x dst_start=0x%x\n" 965 - " src_sgl=%s dst_sgl=%s numf=%d frame_size=%d\n", __func__, 966 - imxdmac->channel, xt->src_start, xt->dst_start, 959 + dev_dbg(imxdma->dev, "%s channel: %d src_start=0x%llx dst_start=0x%llx\n" 960 + " src_sgl=%s dst_sgl=%s numf=%zu frame_size=%zu\n", __func__, 961 + imxdmac->channel, (unsigned long long)xt->src_start, 962 + (unsigned long long) xt->dst_start, 967 963 xt->src_sgl ? "true" : "false", xt->dst_sgl ? "true" : "false", 968 964 xt->numf, xt->frame_size); 969 965
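The imx-dma cleanup above is all about format specifiers: `dma_addr_t` can be 32 or 64 bits depending on configuration, so the fixed messages cast it to `unsigned long long` and print with `%llx`, while `size_t` lengths use `%zu`. The same pattern can be exercised host-side with `snprintf` (the helper name is illustrative):

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Format an address/length pair the way the fixed dev_dbg() calls do:
 * cast the possibly-32-bit address up to unsigned long long for %llx,
 * and let %zu handle size_t directly. */
static const char *fmt_dma_dbg(unsigned long long addr, size_t len)
{
    static char buf[64];

    snprintf(buf, sizeof(buf), "dest=0x%08llx dma_length=%zu", addr, len);
    return buf;
}
```

Without the cast, passing a 64-bit `dma_addr_t` to `%x` is undefined behavior on varargs promotion, which is exactly what these patches eliminate.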
+5 -5
drivers/dma/imx-sdma.c
···
      if (error)
          sdmac->status = DMA_ERROR;
      else
-         sdmac->status = DMA_SUCCESS;
+         sdmac->status = DMA_COMPLETE;

      dma_cookie_complete(&sdmac->desc);
      if (sdmac->desc.callback)
···
          param &= ~BD_CONT;
      }

-     dev_dbg(sdma->dev, "entry %d: count: %d dma: 0x%08x %s%s\n",
-         i, count, sg->dma_address,
+     dev_dbg(sdma->dev, "entry %d: count: %d dma: %#llx %s%s\n",
+         i, count, (u64)sg->dma_address,
          param & BD_WRAP ? "wrap" : "",
          param & BD_INTR ? " intr" : "");
···
      if (i + 1 == num_periods)
          param |= BD_WRAP;

-     dev_dbg(sdma->dev, "entry %d: count: %d dma: 0x%08x %s%s\n",
-         i, period_len, dma_addr,
+     dev_dbg(sdma->dev, "entry %d: count: %d dma: %#llx %s%s\n",
+         i, period_len, (u64)dma_addr,
          param & BD_WRAP ? "wrap" : "",
          param & BD_INTR ? " intr" : "");
+2 -2
drivers/dma/intel_mid_dma.c
···
          callback_txd(param_txd);
      }
      if (midc->raw_tfr) {
-         desc->status = DMA_SUCCESS;
+         desc->status = DMA_COMPLETE;
          if (desc->lli != NULL) {
              pci_pool_free(desc->lli_pool, desc->lli,
                        desc->lli_phys);
···
      enum dma_status ret;

      ret = dma_cookie_status(chan, cookie, txstate);
-     if (ret != DMA_SUCCESS) {
+     if (ret != DMA_COMPLETE) {
          spin_lock_bh(&midc->lock);
          midc_scan_descriptors(to_middma_device(chan->device), midc);
          spin_unlock_bh(&midc->lock);
+8 -45
drivers/dma/ioat/dma.c
···
      writew(IOAT_CHANCTRL_RUN, ioat->base.reg_base + IOAT_CHANCTRL_OFFSET);
  }

- void ioat_dma_unmap(struct ioat_chan_common *chan, enum dma_ctrl_flags flags,
-             size_t len, struct ioat_dma_descriptor *hw)
- {
-     struct pci_dev *pdev = chan->device->pdev;
-     size_t offset = len - hw->size;
-
-     if (!(flags & DMA_COMPL_SKIP_DEST_UNMAP))
-         ioat_unmap(pdev, hw->dst_addr - offset, len,
-                PCI_DMA_FROMDEVICE, flags, 1);
-
-     if (!(flags & DMA_COMPL_SKIP_SRC_UNMAP))
-         ioat_unmap(pdev, hw->src_addr - offset, len,
-                PCI_DMA_TODEVICE, flags, 0);
- }
-
  dma_addr_t ioat_get_current_completion(struct ioat_chan_common *chan)
  {
      dma_addr_t phys_complete;
···
          dump_desc_dbg(ioat, desc);
          if (tx->cookie) {
              dma_cookie_complete(tx);
-             ioat_dma_unmap(chan, tx->flags, desc->len, desc->hw);
+             dma_descriptor_unmap(tx);
              ioat->active -= desc->hw->tx_cnt;
              if (tx->callback) {
                  tx->callback(tx->callback_param);
···
      enum dma_status ret;

      ret = dma_cookie_status(c, cookie, txstate);
-     if (ret == DMA_SUCCESS)
+     if (ret == DMA_COMPLETE)
          return ret;

      device->cleanup_fn((unsigned long) c);
···
      dma_src = dma_map_single(dev, src, IOAT_TEST_SIZE, DMA_TO_DEVICE);
      dma_dest = dma_map_single(dev, dest, IOAT_TEST_SIZE, DMA_FROM_DEVICE);
-     flags = DMA_COMPL_SKIP_SRC_UNMAP | DMA_COMPL_SKIP_DEST_UNMAP |
-         DMA_PREP_INTERRUPT;
+     flags = DMA_PREP_INTERRUPT;
      tx = device->common.device_prep_dma_memcpy(dma_chan, dma_dest, dma_src,
                             IOAT_TEST_SIZE, flags);
      if (!tx) {
···
      if (tmo == 0 ||
          dma->device_tx_status(dma_chan, cookie, NULL)
-                     != DMA_SUCCESS) {
+                     != DMA_COMPLETE) {
          dev_err(dev, "Self-test copy timed out, disabling\n");
          err = -ENODEV;
          goto unmap_dma;
···
  module_param_string(ioat_interrupt_style, ioat_interrupt_style,
              sizeof(ioat_interrupt_style), 0644);
  MODULE_PARM_DESC(ioat_interrupt_style,
-          "set ioat interrupt style: msix (default), "
-          "msix-single-vector, msi, intx)");
+          "set ioat interrupt style: msix (default), msi, intx");

  /**
   * ioat_dma_setup_interrupts - setup interrupt handler
···
      if (!strcmp(ioat_interrupt_style, "msix"))
          goto msix;
-     if (!strcmp(ioat_interrupt_style, "msix-single-vector"))
-         goto msix_single_vector;
      if (!strcmp(ioat_interrupt_style, "msi"))
          goto msi;
      if (!strcmp(ioat_interrupt_style, "intx"))
···
          device->msix_entries[i].entry = i;

      err = pci_enable_msix(pdev, device->msix_entries, msixcnt);
-     if (err < 0)
+     if (err)
          goto msi;
-     if (err > 0)
-         goto msix_single_vector;

      for (i = 0; i < msixcnt; i++) {
          msix = &device->msix_entries[i];
···
              chan = ioat_chan_by_index(device, j);
              devm_free_irq(dev, msix->vector, chan);
          }
-         goto msix_single_vector;
+         goto msi;
      }
      intrctrl |= IOAT_INTRCTRL_MSIX_VECTOR_CONTROL;
      device->irq_mode = IOAT_MSIX;
-     goto done;
-
- msix_single_vector:
-     msix = &device->msix_entries[0];
-     msix->entry = 0;
-     err = pci_enable_msix(pdev, device->msix_entries, 1);
-     if (err)
-         goto msi;
-
-     err = devm_request_irq(dev, msix->vector, ioat_dma_do_interrupt, 0,
-                    "ioat-msix", device);
-     if (err) {
-         pci_disable_msix(pdev);
-         goto msi;
-     }
-     device->irq_mode = IOAT_MSIX_SINGLE;
      goto done;

  msi:
···
          pci_disable_msi(pdev);
          goto intx;
      }
-     device->irq_mode = IOAT_MSIX;
+     device->irq_mode = IOAT_MSI;
      goto done;

  intx:
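With the msix-single-vector path gone, the interrupt setup degrades along a simple chain: try MSI-X, fall back to MSI, and finally to the legacy INTx line, which always exists. A toy model of that selection (the enable predicates stand in for `pci_enable_msix`/`pci_enable_msi` succeeding; this is an illustration, not the driver code):

```c
#include <assert.h>

enum irq_mode { MODE_MSIX, MODE_MSI, MODE_INTX };

/* Mirrors the msix -> msi -> intx goto chain in ioat_dma_setup_interrupts():
 * the flags say whether each enable step would have succeeded. */
static enum irq_mode pick_irq_mode(int msix_ok, int msi_ok)
{
    if (msix_ok)
        return MODE_MSIX;   /* one vector per channel */
    if (msi_ok)
        return MODE_MSI;    /* single message-signaled interrupt */
    return MODE_INTX;       /* legacy line interrupt, last resort */
}
```

Note the fix at the end of the real setup path: a successful MSI fallback now records `IOAT_MSI` rather than the old, wrong `IOAT_MSIX`.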
-14
drivers/dma/ioat/dma.h
···
  enum ioat_irq_mode {
      IOAT_NOIRQ = 0,
      IOAT_MSIX,
-     IOAT_MSIX_SINGLE,
      IOAT_MSI,
      IOAT_INTX
  };
···
      struct pci_pool *completion_pool;
  #define MAX_SED_POOLS 5
      struct dma_pool *sed_hw_pool[MAX_SED_POOLS];
-     struct kmem_cache *sed_pool;
      struct dma_device common;
      u8 version;
      struct msix_entry msix_entries[4];
···
      return !!err;
  }

- static inline void ioat_unmap(struct pci_dev *pdev, dma_addr_t addr, size_t len,
-                   int direction, enum dma_ctrl_flags flags, bool dst)
- {
-     if ((dst && (flags & DMA_COMPL_DEST_UNMAP_SINGLE)) ||
-         (!dst && (flags & DMA_COMPL_SRC_UNMAP_SINGLE)))
-         pci_unmap_single(pdev, addr, len, direction);
-     else
-         pci_unmap_page(pdev, addr, len, direction);
- }
-
  int ioat_probe(struct ioatdma_device *device);
  int ioat_register(struct ioatdma_device *device);
  int ioat1_dma_probe(struct ioatdma_device *dev, int dca);
···
            struct ioat_chan_common *chan, int idx);
  enum dma_status ioat_dma_tx_status(struct dma_chan *c, dma_cookie_t cookie,
                     struct dma_tx_state *txstate);
- void ioat_dma_unmap(struct ioat_chan_common *chan, enum dma_ctrl_flags flags,
-             size_t len, struct ioat_dma_descriptor *hw);
  bool ioat_cleanup_preamble(struct ioat_chan_common *chan,
                 dma_addr_t *phys_complete);
  void ioat_kobject_add(struct ioatdma_device *device, struct kobj_type *type);
+1 -1
drivers/dma/ioat/dma_v2.c
···
          tx = &desc->txd;
          dump_desc_dbg(ioat, desc);
          if (tx->cookie) {
-             ioat_dma_unmap(chan, tx->flags, desc->len, desc->hw);
+             dma_descriptor_unmap(tx);
              dma_cookie_complete(tx);
              if (tx->callback) {
                  tx->callback(tx->callback_param);
-1
drivers/dma/ioat/dma_v2.h
···

  int ioat2_dma_probe(struct ioatdma_device *dev, int dca);
  int ioat3_dma_probe(struct ioatdma_device *dev, int dca);
- void ioat3_dma_remove(struct ioatdma_device *dev);
  struct dca_provider *ioat2_dca_init(struct pci_dev *pdev, void __iomem *iobase);
  struct dca_provider *ioat3_dca_init(struct pci_dev *pdev, void __iomem *iobase);
  int ioat2_check_space_lock(struct ioat2_dma_chan *ioat, int num_descs);
+47 -276
drivers/dma/ioat/dma_v3.c
··· 67 67 #include "dma.h" 68 68 #include "dma_v2.h" 69 69 70 + extern struct kmem_cache *ioat3_sed_cache; 71 + 70 72 /* ioat hardware assumes at least two sources for raid operations */ 71 73 #define src_cnt_to_sw(x) ((x) + 2) 72 74 #define src_cnt_to_hw(x) ((x) - 2) ··· 89 87 static const u8 pq16_idx_to_field[] = { 1, 4, 1, 2, 3, 4, 5, 6, 7, 90 88 0, 1, 2, 3, 4, 5, 6 }; 91 89 92 - /* 93 - * technically sources 1 and 2 do not require SED, but the op will have 94 - * at least 9 descriptors so that's irrelevant. 95 - */ 96 - static const u8 pq16_idx_to_sed[] = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 97 - 1, 1, 1, 1, 1, 1, 1 }; 98 - 99 90 static void ioat3_eh(struct ioat2_dma_chan *ioat); 100 - 101 - static dma_addr_t xor_get_src(struct ioat_raw_descriptor *descs[2], int idx) 102 - { 103 - struct ioat_raw_descriptor *raw = descs[xor_idx_to_desc >> idx & 1]; 104 - 105 - return raw->field[xor_idx_to_field[idx]]; 106 - } 107 91 108 92 static void xor_set_src(struct ioat_raw_descriptor *descs[2], 109 93 dma_addr_t addr, u32 offset, int idx) ··· 121 133 122 134 raw->field[pq_idx_to_field[idx]] = addr + offset; 123 135 pq->coef[idx] = coef; 124 - } 125 - 126 - static int sed_get_pq16_pool_idx(int src_cnt) 127 - { 128 - 129 - return pq16_idx_to_sed[src_cnt]; 130 136 } 131 137 132 138 static bool is_jf_ioat(struct pci_dev *pdev) ··· 254 272 struct ioat_sed_ent *sed; 255 273 gfp_t flags = __GFP_ZERO | GFP_ATOMIC; 256 274 257 - sed = kmem_cache_alloc(device->sed_pool, flags); 275 + sed = kmem_cache_alloc(ioat3_sed_cache, flags); 258 276 if (!sed) 259 277 return NULL; 260 278 ··· 262 280 sed->hw = dma_pool_alloc(device->sed_hw_pool[hw_pool], 263 281 flags, &sed->dma); 264 282 if (!sed->hw) { 265 - kmem_cache_free(device->sed_pool, sed); 283 + kmem_cache_free(ioat3_sed_cache, sed); 266 284 return NULL; 267 285 } 268 286 ··· 275 293 return; 276 294 277 295 dma_pool_free(device->sed_hw_pool[sed->hw_pool], sed->hw, sed->dma); 278 - kmem_cache_free(device->sed_pool, sed); 279 - } 280 - 281 - 
static void ioat3_dma_unmap(struct ioat2_dma_chan *ioat, 282 - struct ioat_ring_ent *desc, int idx) 283 - { 284 - struct ioat_chan_common *chan = &ioat->base; 285 - struct pci_dev *pdev = chan->device->pdev; 286 - size_t len = desc->len; 287 - size_t offset = len - desc->hw->size; 288 - struct dma_async_tx_descriptor *tx = &desc->txd; 289 - enum dma_ctrl_flags flags = tx->flags; 290 - 291 - switch (desc->hw->ctl_f.op) { 292 - case IOAT_OP_COPY: 293 - if (!desc->hw->ctl_f.null) /* skip 'interrupt' ops */ 294 - ioat_dma_unmap(chan, flags, len, desc->hw); 295 - break; 296 - case IOAT_OP_XOR_VAL: 297 - case IOAT_OP_XOR: { 298 - struct ioat_xor_descriptor *xor = desc->xor; 299 - struct ioat_ring_ent *ext; 300 - struct ioat_xor_ext_descriptor *xor_ex = NULL; 301 - int src_cnt = src_cnt_to_sw(xor->ctl_f.src_cnt); 302 - struct ioat_raw_descriptor *descs[2]; 303 - int i; 304 - 305 - if (src_cnt > 5) { 306 - ext = ioat2_get_ring_ent(ioat, idx + 1); 307 - xor_ex = ext->xor_ex; 308 - } 309 - 310 - if (!(flags & DMA_COMPL_SKIP_SRC_UNMAP)) { 311 - descs[0] = (struct ioat_raw_descriptor *) xor; 312 - descs[1] = (struct ioat_raw_descriptor *) xor_ex; 313 - for (i = 0; i < src_cnt; i++) { 314 - dma_addr_t src = xor_get_src(descs, i); 315 - 316 - ioat_unmap(pdev, src - offset, len, 317 - PCI_DMA_TODEVICE, flags, 0); 318 - } 319 - 320 - /* dest is a source in xor validate operations */ 321 - if (xor->ctl_f.op == IOAT_OP_XOR_VAL) { 322 - ioat_unmap(pdev, xor->dst_addr - offset, len, 323 - PCI_DMA_TODEVICE, flags, 1); 324 - break; 325 - } 326 - } 327 - 328 - if (!(flags & DMA_COMPL_SKIP_DEST_UNMAP)) 329 - ioat_unmap(pdev, xor->dst_addr - offset, len, 330 - PCI_DMA_FROMDEVICE, flags, 1); 331 - break; 332 - } 333 - case IOAT_OP_PQ_VAL: 334 - case IOAT_OP_PQ: { 335 - struct ioat_pq_descriptor *pq = desc->pq; 336 - struct ioat_ring_ent *ext; 337 - struct ioat_pq_ext_descriptor *pq_ex = NULL; 338 - int src_cnt = src_cnt_to_sw(pq->ctl_f.src_cnt); 339 - struct ioat_raw_descriptor *descs[2]; 
340 - int i; 341 - 342 - if (src_cnt > 3) { 343 - ext = ioat2_get_ring_ent(ioat, idx + 1); 344 - pq_ex = ext->pq_ex; 345 - } 346 - 347 - /* in the 'continue' case don't unmap the dests as sources */ 348 - if (dmaf_p_disabled_continue(flags)) 349 - src_cnt--; 350 - else if (dmaf_continue(flags)) 351 - src_cnt -= 3; 352 - 353 - if (!(flags & DMA_COMPL_SKIP_SRC_UNMAP)) { 354 - descs[0] = (struct ioat_raw_descriptor *) pq; 355 - descs[1] = (struct ioat_raw_descriptor *) pq_ex; 356 - for (i = 0; i < src_cnt; i++) { 357 - dma_addr_t src = pq_get_src(descs, i); 358 - 359 - ioat_unmap(pdev, src - offset, len, 360 - PCI_DMA_TODEVICE, flags, 0); 361 - } 362 - 363 - /* the dests are sources in pq validate operations */ 364 - if (pq->ctl_f.op == IOAT_OP_XOR_VAL) { 365 - if (!(flags & DMA_PREP_PQ_DISABLE_P)) 366 - ioat_unmap(pdev, pq->p_addr - offset, 367 - len, PCI_DMA_TODEVICE, flags, 0); 368 - if (!(flags & DMA_PREP_PQ_DISABLE_Q)) 369 - ioat_unmap(pdev, pq->q_addr - offset, 370 - len, PCI_DMA_TODEVICE, flags, 0); 371 - break; 372 - } 373 - } 374 - 375 - if (!(flags & DMA_COMPL_SKIP_DEST_UNMAP)) { 376 - if (!(flags & DMA_PREP_PQ_DISABLE_P)) 377 - ioat_unmap(pdev, pq->p_addr - offset, len, 378 - PCI_DMA_BIDIRECTIONAL, flags, 1); 379 - if (!(flags & DMA_PREP_PQ_DISABLE_Q)) 380 - ioat_unmap(pdev, pq->q_addr - offset, len, 381 - PCI_DMA_BIDIRECTIONAL, flags, 1); 382 - } 383 - break; 384 - } 385 - case IOAT_OP_PQ_16S: 386 - case IOAT_OP_PQ_VAL_16S: { 387 - struct ioat_pq_descriptor *pq = desc->pq; 388 - int src_cnt = src16_cnt_to_sw(pq->ctl_f.src_cnt); 389 - struct ioat_raw_descriptor *descs[4]; 390 - int i; 391 - 392 - /* in the 'continue' case don't unmap the dests as sources */ 393 - if (dmaf_p_disabled_continue(flags)) 394 - src_cnt--; 395 - else if (dmaf_continue(flags)) 396 - src_cnt -= 3; 397 - 398 - if (!(flags & DMA_COMPL_SKIP_SRC_UNMAP)) { 399 - descs[0] = (struct ioat_raw_descriptor *)pq; 400 - descs[1] = (struct ioat_raw_descriptor *)(desc->sed->hw); 401 - descs[2] = 
(struct ioat_raw_descriptor *)(&desc->sed->hw->b[0]); 402 - for (i = 0; i < src_cnt; i++) { 403 - dma_addr_t src = pq16_get_src(descs, i); 404 - 405 - ioat_unmap(pdev, src - offset, len, 406 - PCI_DMA_TODEVICE, flags, 0); 407 - } 408 - 409 - /* the dests are sources in pq validate operations */ 410 - if (pq->ctl_f.op == IOAT_OP_XOR_VAL) { 411 - if (!(flags & DMA_PREP_PQ_DISABLE_P)) 412 - ioat_unmap(pdev, pq->p_addr - offset, 413 - len, PCI_DMA_TODEVICE, 414 - flags, 0); 415 - if (!(flags & DMA_PREP_PQ_DISABLE_Q)) 416 - ioat_unmap(pdev, pq->q_addr - offset, 417 - len, PCI_DMA_TODEVICE, 418 - flags, 0); 419 - break; 420 - } 421 - } 422 - 423 - if (!(flags & DMA_COMPL_SKIP_DEST_UNMAP)) { 424 - if (!(flags & DMA_PREP_PQ_DISABLE_P)) 425 - ioat_unmap(pdev, pq->p_addr - offset, len, 426 - PCI_DMA_BIDIRECTIONAL, flags, 1); 427 - if (!(flags & DMA_PREP_PQ_DISABLE_Q)) 428 - ioat_unmap(pdev, pq->q_addr - offset, len, 429 - PCI_DMA_BIDIRECTIONAL, flags, 1); 430 - } 431 - break; 432 - } 433 - default: 434 - dev_err(&pdev->dev, "%s: unknown op type: %#x\n", 435 - __func__, desc->hw->ctl_f.op); 436 - } 296 + kmem_cache_free(ioat3_sed_cache, sed); 437 297 } 438 298 439 299 static bool desc_has_ext(struct ioat_ring_ent *desc) ··· 401 577 tx = &desc->txd; 402 578 if (tx->cookie) { 403 579 dma_cookie_complete(tx); 404 - ioat3_dma_unmap(ioat, desc, idx + i); 580 + dma_descriptor_unmap(tx); 405 581 if (tx->callback) { 406 582 tx->callback(tx->callback_param); 407 583 tx->callback = NULL; ··· 631 807 enum dma_status ret; 632 808 633 809 ret = dma_cookie_status(c, cookie, txstate); 634 - if (ret == DMA_SUCCESS) 810 + if (ret == DMA_COMPLETE) 635 811 return ret; 636 812 637 813 ioat3_cleanup(ioat); ··· 953 1129 u8 op; 954 1130 int i, s, idx, num_descs; 955 1131 956 - /* this function only handles src_cnt 9 - 16 */ 957 - BUG_ON(src_cnt < 9); 958 - 959 1132 /* this function is only called with 9-16 sources */ 960 1133 op = result ? 
IOAT_OP_PQ_VAL_16S : IOAT_OP_PQ_16S; 961 1134 ··· 980 1159 981 1160 descs[0] = (struct ioat_raw_descriptor *) pq; 982 1161 983 - desc->sed = ioat3_alloc_sed(device, 984 - sed_get_pq16_pool_idx(src_cnt)); 1162 + desc->sed = ioat3_alloc_sed(device, (src_cnt-2) >> 3); 985 1163 if (!desc->sed) { 986 1164 dev_err(to_dev(chan), 987 1165 "%s: no free sed entries\n", __func__); ··· 1038 1218 return &desc->txd; 1039 1219 } 1040 1220 1221 + static int src_cnt_flags(unsigned int src_cnt, unsigned long flags) 1222 + { 1223 + if (dmaf_p_disabled_continue(flags)) 1224 + return src_cnt + 1; 1225 + else if (dmaf_continue(flags)) 1226 + return src_cnt + 3; 1227 + else 1228 + return src_cnt; 1229 + } 1230 + 1041 1231 static struct dma_async_tx_descriptor * 1042 1232 ioat3_prep_pq(struct dma_chan *chan, dma_addr_t *dst, dma_addr_t *src, 1043 1233 unsigned int src_cnt, const unsigned char *scf, size_t len, 1044 1234 unsigned long flags) 1045 1235 { 1046 - struct dma_device *dma = chan->device; 1047 - 1048 1236 /* specify valid address for disabled result */ 1049 1237 if (flags & DMA_PREP_PQ_DISABLE_P) 1050 1238 dst[0] = dst[1]; ··· 1072 1244 single_source_coef[0] = scf[0]; 1073 1245 single_source_coef[1] = 0; 1074 1246 1075 - return (src_cnt > 8) && (dma->max_pq > 8) ? 1247 + return src_cnt_flags(src_cnt, flags) > 8 ? 1076 1248 __ioat3_prep_pq16_lock(chan, NULL, dst, single_source, 1077 1249 2, single_source_coef, len, 1078 1250 flags) : ··· 1080 1252 single_source_coef, len, flags); 1081 1253 1082 1254 } else { 1083 - return (src_cnt > 8) && (dma->max_pq > 8) ? 1255 + return src_cnt_flags(src_cnt, flags) > 8 ? 
1084 1256 __ioat3_prep_pq16_lock(chan, NULL, dst, src, src_cnt, 1085 1257 scf, len, flags) : 1086 1258 __ioat3_prep_pq_lock(chan, NULL, dst, src, src_cnt, ··· 1093 1265 unsigned int src_cnt, const unsigned char *scf, size_t len, 1094 1266 enum sum_check_flags *pqres, unsigned long flags) 1095 1267 { 1096 - struct dma_device *dma = chan->device; 1097 - 1098 1268 /* specify valid address for disabled result */ 1099 1269 if (flags & DMA_PREP_PQ_DISABLE_P) 1100 1270 pq[0] = pq[1]; ··· 1104 1278 */ 1105 1279 *pqres = 0; 1106 1280 1107 - return (src_cnt > 8) && (dma->max_pq > 8) ? 1281 + return src_cnt_flags(src_cnt, flags) > 8 ? 1108 1282 __ioat3_prep_pq16_lock(chan, pqres, pq, src, src_cnt, scf, len, 1109 1283 flags) : 1110 1284 __ioat3_prep_pq_lock(chan, pqres, pq, src, src_cnt, scf, len, ··· 1115 1289 ioat3_prep_pqxor(struct dma_chan *chan, dma_addr_t dst, dma_addr_t *src, 1116 1290 unsigned int src_cnt, size_t len, unsigned long flags) 1117 1291 { 1118 - struct dma_device *dma = chan->device; 1119 1292 unsigned char scf[src_cnt]; 1120 1293 dma_addr_t pq[2]; 1121 1294 ··· 1123 1298 flags |= DMA_PREP_PQ_DISABLE_Q; 1124 1299 pq[1] = dst; /* specify valid address for disabled result */ 1125 1300 1126 - return (src_cnt > 8) && (dma->max_pq > 8) ? 1301 + return src_cnt_flags(src_cnt, flags) > 8 ? 1127 1302 __ioat3_prep_pq16_lock(chan, NULL, pq, src, src_cnt, scf, len, 1128 1303 flags) : 1129 1304 __ioat3_prep_pq_lock(chan, NULL, pq, src, src_cnt, scf, len, ··· 1135 1310 unsigned int src_cnt, size_t len, 1136 1311 enum sum_check_flags *result, unsigned long flags) 1137 1312 { 1138 - struct dma_device *dma = chan->device; 1139 1313 unsigned char scf[src_cnt]; 1140 1314 dma_addr_t pq[2]; 1141 1315 ··· 1148 1324 flags |= DMA_PREP_PQ_DISABLE_Q; 1149 1325 pq[1] = pq[0]; /* specify valid address for disabled result */ 1150 1326 1151 - 1152 - return (src_cnt > 8) && (dma->max_pq > 8) ? 1327 + return src_cnt_flags(src_cnt, flags) > 8 ? 
1153 1328 __ioat3_prep_pq16_lock(chan, result, pq, &src[1], src_cnt - 1, 1154 1329 scf, len, flags) : 1155 1330 __ioat3_prep_pq_lock(chan, result, pq, &src[1], src_cnt - 1, ··· 1267 1444 DMA_TO_DEVICE); 1268 1445 tx = dma->device_prep_dma_xor(dma_chan, dest_dma, dma_srcs, 1269 1446 IOAT_NUM_SRC_TEST, PAGE_SIZE, 1270 - DMA_PREP_INTERRUPT | 1271 - DMA_COMPL_SKIP_SRC_UNMAP | 1272 - DMA_COMPL_SKIP_DEST_UNMAP); 1447 + DMA_PREP_INTERRUPT); 1273 1448 1274 1449 if (!tx) { 1275 1450 dev_err(dev, "Self-test xor prep failed\n"); ··· 1289 1468 1290 1469 tmo = wait_for_completion_timeout(&cmp, msecs_to_jiffies(3000)); 1291 1470 1292 - if (dma->device_tx_status(dma_chan, cookie, NULL) != DMA_SUCCESS) { 1471 + if (dma->device_tx_status(dma_chan, cookie, NULL) != DMA_COMPLETE) { 1293 1472 dev_err(dev, "Self-test xor timed out\n"); 1294 1473 err = -ENODEV; 1295 1474 goto dma_unmap; ··· 1328 1507 DMA_TO_DEVICE); 1329 1508 tx = dma->device_prep_dma_xor_val(dma_chan, dma_srcs, 1330 1509 IOAT_NUM_SRC_TEST + 1, PAGE_SIZE, 1331 - &xor_val_result, DMA_PREP_INTERRUPT | 1332 - DMA_COMPL_SKIP_SRC_UNMAP | 1333 - DMA_COMPL_SKIP_DEST_UNMAP); 1510 + &xor_val_result, DMA_PREP_INTERRUPT); 1334 1511 if (!tx) { 1335 1512 dev_err(dev, "Self-test zero prep failed\n"); 1336 1513 err = -ENODEV; ··· 1349 1530 1350 1531 tmo = wait_for_completion_timeout(&cmp, msecs_to_jiffies(3000)); 1351 1532 1352 - if (dma->device_tx_status(dma_chan, cookie, NULL) != DMA_SUCCESS) { 1533 + if (dma->device_tx_status(dma_chan, cookie, NULL) != DMA_COMPLETE) { 1353 1534 dev_err(dev, "Self-test validate timed out\n"); 1354 1535 err = -ENODEV; 1355 1536 goto dma_unmap; ··· 1364 1545 goto free_resources; 1365 1546 } 1366 1547 1548 + memset(page_address(dest), 0, PAGE_SIZE); 1549 + 1367 1550 /* test for non-zero parity sum */ 1368 1551 op = IOAT_OP_XOR_VAL; 1369 1552 ··· 1375 1554 DMA_TO_DEVICE); 1376 1555 tx = dma->device_prep_dma_xor_val(dma_chan, dma_srcs, 1377 1556 IOAT_NUM_SRC_TEST + 1, PAGE_SIZE, 1378 - &xor_val_result, 
DMA_PREP_INTERRUPT | 1379 - DMA_COMPL_SKIP_SRC_UNMAP | 1380 - DMA_COMPL_SKIP_DEST_UNMAP); 1557 + &xor_val_result, DMA_PREP_INTERRUPT); 1381 1558 if (!tx) { 1382 1559 dev_err(dev, "Self-test 2nd zero prep failed\n"); 1383 1560 err = -ENODEV; ··· 1396 1577 1397 1578 tmo = wait_for_completion_timeout(&cmp, msecs_to_jiffies(3000)); 1398 1579 1399 - if (dma->device_tx_status(dma_chan, cookie, NULL) != DMA_SUCCESS) { 1580 + if (dma->device_tx_status(dma_chan, cookie, NULL) != DMA_COMPLETE) { 1400 1581 dev_err(dev, "Self-test 2nd validate timed out\n"); 1401 1582 err = -ENODEV; 1402 1583 goto dma_unmap; ··· 1449 1630 1450 1631 static int ioat3_irq_reinit(struct ioatdma_device *device) 1451 1632 { 1452 - int msixcnt = device->common.chancnt; 1453 1633 struct pci_dev *pdev = device->pdev; 1454 - int i; 1455 - struct msix_entry *msix; 1456 - struct ioat_chan_common *chan; 1457 - int err = 0; 1634 + int irq = pdev->irq, i; 1635 + 1636 + if (!is_bwd_ioat(pdev)) 1637 + return 0; 1458 1638 1459 1639 switch (device->irq_mode) { 1460 1640 case IOAT_MSIX: 1641 + for (i = 0; i < device->common.chancnt; i++) { 1642 + struct msix_entry *msix = &device->msix_entries[i]; 1643 + struct ioat_chan_common *chan; 1461 1644 1462 - for (i = 0; i < msixcnt; i++) { 1463 - msix = &device->msix_entries[i]; 1464 1645 chan = ioat_chan_by_index(device, i); 1465 1646 devm_free_irq(&pdev->dev, msix->vector, chan); 1466 1647 } 1467 1648 1468 1649 pci_disable_msix(pdev); 1469 1650 break; 1470 - 1471 - case IOAT_MSIX_SINGLE: 1472 - msix = &device->msix_entries[0]; 1473 - chan = ioat_chan_by_index(device, 0); 1474 - devm_free_irq(&pdev->dev, msix->vector, chan); 1475 - pci_disable_msix(pdev); 1476 - break; 1477 - 1478 1651 case IOAT_MSI: 1479 - chan = ioat_chan_by_index(device, 0); 1480 - devm_free_irq(&pdev->dev, pdev->irq, chan); 1481 1652 pci_disable_msi(pdev); 1482 - break; 1483 - 1653 + /* fall through */ 1484 1654 case IOAT_INTX: 1485 - chan = ioat_chan_by_index(device, 0); 1486 - 
devm_free_irq(&pdev->dev, pdev->irq, chan); 1655 + devm_free_irq(&pdev->dev, irq, device); 1487 1656 break; 1488 - 1489 1657 default: 1490 1658 return 0; 1491 1659 } 1492 - 1493 1660 device->irq_mode = IOAT_NOIRQ; 1494 1661 1495 - err = ioat_dma_setup_interrupts(device); 1496 - 1497 - return err; 1662 + return ioat_dma_setup_interrupts(device); 1498 1663 } 1499 1664 1500 1665 static int ioat3_reset_hw(struct ioat_chan_common *chan) ··· 1521 1718 } 1522 1719 1523 1720 err = ioat2_reset_sync(chan, msecs_to_jiffies(200)); 1524 - if (err) { 1525 - dev_err(&pdev->dev, "Failed to reset!\n"); 1526 - return err; 1527 - } 1528 - 1529 - if (device->irq_mode != IOAT_NOIRQ && is_bwd_ioat(pdev)) 1721 + if (!err) 1530 1722 err = ioat3_irq_reinit(device); 1723 + 1724 + if (err) 1725 + dev_err(&pdev->dev, "Failed to reset: %d\n", err); 1531 1726 1532 1727 return err; 1533 1728 } ··· 1636 1835 char pool_name[14]; 1637 1836 int i; 1638 1837 1639 - /* allocate sw descriptor pool for SED */ 1640 - device->sed_pool = kmem_cache_create("ioat_sed", 1641 - sizeof(struct ioat_sed_ent), 0, 0, NULL); 1642 - if (!device->sed_pool) 1643 - return -ENOMEM; 1644 - 1645 1838 for (i = 0; i < MAX_SED_POOLS; i++) { 1646 1839 snprintf(pool_name, 14, "ioat_hw%d_sed", i); 1647 1840 1648 1841 /* allocate SED DMA pool */ 1649 - device->sed_hw_pool[i] = dma_pool_create(pool_name, 1842 + device->sed_hw_pool[i] = dmam_pool_create(pool_name, 1650 1843 &pdev->dev, 1651 1844 SED_SIZE * (i + 1), 64, 0); 1652 1845 if (!device->sed_hw_pool[i]) 1653 - goto sed_pool_cleanup; 1846 + return -ENOMEM; 1654 1847 1655 1848 } 1656 1849 } ··· 1670 1875 device->dca = ioat3_dca_init(pdev, device->reg_base); 1671 1876 1672 1877 return 0; 1673 - 1674 - sed_pool_cleanup: 1675 - if (device->sed_pool) { 1676 - int i; 1677 - kmem_cache_destroy(device->sed_pool); 1678 - 1679 - for (i = 0; i < MAX_SED_POOLS; i++) 1680 - if (device->sed_hw_pool[i]) 1681 - dma_pool_destroy(device->sed_hw_pool[i]); 1682 - } 1683 - 1684 - return -ENOMEM; 
1685 - } 1686 - 1687 - void ioat3_dma_remove(struct ioatdma_device *device) 1688 - { 1689 - if (device->sed_pool) { 1690 - int i; 1691 - kmem_cache_destroy(device->sed_pool); 1692 - 1693 - for (i = 0; i < MAX_SED_POOLS; i++) 1694 - if (device->sed_hw_pool[i]) 1695 - dma_pool_destroy(device->sed_hw_pool[i]); 1696 - } 1697 1878 }
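The reset-path rework above collapses two separate error branches into one: the IRQ reinit runs only when the reset itself succeeded, and a single report covers either failure. A minimal sketch of that control flow, with stub step functions standing in for ioat2_reset_sync() and ioat3_irq_reinit() (the stubs and their error values are illustrative, not the real driver internals):

```c
#include <assert.h>
#include <stdio.h>

static int reset_sync_ok;   /* stub outcome for the first step */
static int irq_reinit_ok;   /* stub outcome for the second step */

static int reset_sync(void) { return reset_sync_ok ? 0 : -5; }
static int irq_reinit(void) { return irq_reinit_ok ? 0 : -22; }

static int reset_hw(void)
{
    int err = reset_sync();

    if (!err)               /* only reinit IRQs after a clean reset */
        err = irq_reinit();

    if (err)                /* one report covers either failure point */
        fprintf(stderr, "Failed to reset: %d\n", err);

    return err;
}
```

The same shape appears in the hunk: `if (!err) err = ioat3_irq_reinit(device); if (err) dev_err(...)`.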
+15 -5
drivers/dma/ioat/pci.c
··· 123 123 MODULE_PARM_DESC(ioat_dca_enabled, "control support of dca service (default: 1)"); 124 124 125 125 struct kmem_cache *ioat2_cache; 126 + struct kmem_cache *ioat3_sed_cache; 126 127 127 128 #define DRV_NAME "ioatdma" 128 129 ··· 208 207 if (!device) 209 208 return; 210 209 211 - if (device->version >= IOAT_VER_3_0) 212 - ioat3_dma_remove(device); 213 - 214 210 dev_err(&pdev->dev, "Removing dma and dca services\n"); 215 211 if (device->dca) { 216 212 unregister_dca_provider(device->dca, &pdev->dev); ··· 219 221 220 222 static int __init ioat_init_module(void) 221 223 { 222 - int err; 224 + int err = -ENOMEM; 223 225 224 226 pr_info("%s: Intel(R) QuickData Technology Driver %s\n", 225 227 DRV_NAME, IOAT_DMA_VERSION); ··· 229 231 if (!ioat2_cache) 230 232 return -ENOMEM; 231 233 234 + ioat3_sed_cache = KMEM_CACHE(ioat_sed_ent, 0); 235 + if (!ioat3_sed_cache) 236 + goto err_ioat2_cache; 237 + 232 238 err = pci_register_driver(&ioat_pci_driver); 233 239 if (err) 234 - kmem_cache_destroy(ioat2_cache); 240 + goto err_ioat3_cache; 241 + 242 + return 0; 243 + 244 + err_ioat3_cache: 245 + kmem_cache_destroy(ioat3_sed_cache); 246 + 247 + err_ioat2_cache: 248 + kmem_cache_destroy(ioat2_cache); 235 249 236 250 return err; 237 251 }
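The pci.c change above extends ioat_init_module() with a second kmem cache and the classic goto error ladder: each successfully created resource gains an unwind label, and a failure jumps to the label that tears down everything created so far, in reverse order. A compilable sketch of the pattern, using stub allocators rather than the real kmem_cache calls:

```c
#include <assert.h>

static int live_caches;   /* resources currently allocated */
static int fail_at;       /* which step should fail (0 = none) */

static int create_cache(int step)
{
    if (step == fail_at)
        return -1;
    live_caches++;
    return 0;
}

static void destroy_cache(void) { live_caches--; }

static int init_module_sketch(void)
{
    int err = -12;              /* -ENOMEM, as in the hunk above */

    if (create_cache(1))
        return err;             /* nothing to unwind yet */
    if (create_cache(2))
        goto err_first;
    if (create_cache(3))
        goto err_second;
    return 0;

err_second:
    destroy_cache();            /* undo step 2 */
err_first:
    destroy_cache();            /* undo step 1 */
    return err;
}
```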
+11 -102
drivers/dma/iop-adma.c
··· 61 61 } 62 62 } 63 63 64 - static void 65 - iop_desc_unmap(struct iop_adma_chan *iop_chan, struct iop_adma_desc_slot *desc) 66 - { 67 - struct dma_async_tx_descriptor *tx = &desc->async_tx; 68 - struct iop_adma_desc_slot *unmap = desc->group_head; 69 - struct device *dev = &iop_chan->device->pdev->dev; 70 - u32 len = unmap->unmap_len; 71 - enum dma_ctrl_flags flags = tx->flags; 72 - u32 src_cnt; 73 - dma_addr_t addr; 74 - dma_addr_t dest; 75 - 76 - src_cnt = unmap->unmap_src_cnt; 77 - dest = iop_desc_get_dest_addr(unmap, iop_chan); 78 - if (!(flags & DMA_COMPL_SKIP_DEST_UNMAP)) { 79 - enum dma_data_direction dir; 80 - 81 - if (src_cnt > 1) /* is xor? */ 82 - dir = DMA_BIDIRECTIONAL; 83 - else 84 - dir = DMA_FROM_DEVICE; 85 - 86 - dma_unmap_page(dev, dest, len, dir); 87 - } 88 - 89 - if (!(flags & DMA_COMPL_SKIP_SRC_UNMAP)) { 90 - while (src_cnt--) { 91 - addr = iop_desc_get_src_addr(unmap, iop_chan, src_cnt); 92 - if (addr == dest) 93 - continue; 94 - dma_unmap_page(dev, addr, len, DMA_TO_DEVICE); 95 - } 96 - } 97 - desc->group_head = NULL; 98 - } 99 - 100 - static void 101 - iop_desc_unmap_pq(struct iop_adma_chan *iop_chan, struct iop_adma_desc_slot *desc) 102 - { 103 - struct dma_async_tx_descriptor *tx = &desc->async_tx; 104 - struct iop_adma_desc_slot *unmap = desc->group_head; 105 - struct device *dev = &iop_chan->device->pdev->dev; 106 - u32 len = unmap->unmap_len; 107 - enum dma_ctrl_flags flags = tx->flags; 108 - u32 src_cnt = unmap->unmap_src_cnt; 109 - dma_addr_t pdest = iop_desc_get_dest_addr(unmap, iop_chan); 110 - dma_addr_t qdest = iop_desc_get_qdest_addr(unmap, iop_chan); 111 - int i; 112 - 113 - if (tx->flags & DMA_PREP_CONTINUE) 114 - src_cnt -= 3; 115 - 116 - if (!(flags & DMA_COMPL_SKIP_DEST_UNMAP) && !desc->pq_check_result) { 117 - dma_unmap_page(dev, pdest, len, DMA_BIDIRECTIONAL); 118 - dma_unmap_page(dev, qdest, len, DMA_BIDIRECTIONAL); 119 - } 120 - 121 - if (!(flags & DMA_COMPL_SKIP_SRC_UNMAP)) { 122 - dma_addr_t addr; 123 - 124 - for 
(i = 0; i < src_cnt; i++) { 125 - addr = iop_desc_get_src_addr(unmap, iop_chan, i); 126 - dma_unmap_page(dev, addr, len, DMA_TO_DEVICE); 127 - } 128 - if (desc->pq_check_result) { 129 - dma_unmap_page(dev, pdest, len, DMA_TO_DEVICE); 130 - dma_unmap_page(dev, qdest, len, DMA_TO_DEVICE); 131 - } 132 - } 133 - 134 - desc->group_head = NULL; 135 - } 136 - 137 - 138 64 static dma_cookie_t 139 65 iop_adma_run_tx_complete_actions(struct iop_adma_desc_slot *desc, 140 66 struct iop_adma_chan *iop_chan, dma_cookie_t cookie) ··· 78 152 if (tx->callback) 79 153 tx->callback(tx->callback_param); 80 154 81 - /* unmap dma addresses 82 - * (unmap_single vs unmap_page?) 83 - */ 84 - if (desc->group_head && desc->unmap_len) { 85 - if (iop_desc_is_pq(desc)) 86 - iop_desc_unmap_pq(iop_chan, desc); 87 - else 88 - iop_desc_unmap(iop_chan, desc); 89 - } 155 + dma_descriptor_unmap(tx); 156 + if (desc->group_head) 157 + desc->group_head = NULL; 90 158 } 91 159 92 160 /* run dependent operations */ ··· 511 591 if (sw_desc) { 512 592 grp_start = sw_desc->group_head; 513 593 iop_desc_init_interrupt(grp_start, iop_chan); 514 - grp_start->unmap_len = 0; 515 594 sw_desc->async_tx.flags = flags; 516 595 } 517 596 spin_unlock_bh(&iop_chan->lock); ··· 542 623 iop_desc_set_byte_count(grp_start, iop_chan, len); 543 624 iop_desc_set_dest_addr(grp_start, iop_chan, dma_dest); 544 625 iop_desc_set_memcpy_src_addr(grp_start, dma_src); 545 - sw_desc->unmap_src_cnt = 1; 546 - sw_desc->unmap_len = len; 547 626 sw_desc->async_tx.flags = flags; 548 627 } 549 628 spin_unlock_bh(&iop_chan->lock); ··· 574 657 iop_desc_init_xor(grp_start, src_cnt, flags); 575 658 iop_desc_set_byte_count(grp_start, iop_chan, len); 576 659 iop_desc_set_dest_addr(grp_start, iop_chan, dma_dest); 577 - sw_desc->unmap_src_cnt = src_cnt; 578 - sw_desc->unmap_len = len; 579 660 sw_desc->async_tx.flags = flags; 580 661 while (src_cnt--) 581 662 iop_desc_set_xor_src_addr(grp_start, src_cnt, ··· 609 694 grp_start->xor_check_result = result; 
610 695 pr_debug("\t%s: grp_start->xor_check_result: %p\n", 611 696 __func__, grp_start->xor_check_result); 612 - sw_desc->unmap_src_cnt = src_cnt; 613 - sw_desc->unmap_len = len; 614 697 sw_desc->async_tx.flags = flags; 615 698 while (src_cnt--) 616 699 iop_desc_set_zero_sum_src_addr(grp_start, src_cnt, ··· 661 748 dst[0] = dst[1] & 0x7; 662 749 663 750 iop_desc_set_pq_addr(g, dst); 664 - sw_desc->unmap_src_cnt = src_cnt; 665 - sw_desc->unmap_len = len; 666 751 sw_desc->async_tx.flags = flags; 667 752 for (i = 0; i < src_cnt; i++) 668 753 iop_desc_set_pq_src_addr(g, i, src[i], scf[i]); ··· 715 804 g->pq_check_result = pqres; 716 805 pr_debug("\t%s: g->pq_check_result: %p\n", 717 806 __func__, g->pq_check_result); 718 - sw_desc->unmap_src_cnt = src_cnt+2; 719 - sw_desc->unmap_len = len; 720 807 sw_desc->async_tx.flags = flags; 721 808 while (src_cnt--) 722 809 iop_desc_set_pq_zero_sum_src_addr(g, src_cnt, ··· 773 864 int ret; 774 865 775 866 ret = dma_cookie_status(chan, cookie, txstate); 776 - if (ret == DMA_SUCCESS) 867 + if (ret == DMA_COMPLETE) 777 868 return ret; 778 869 779 870 iop_adma_slot_cleanup(iop_chan); ··· 892 983 msleep(1); 893 984 894 985 if (iop_adma_status(dma_chan, cookie, NULL) != 895 - DMA_SUCCESS) { 986 + DMA_COMPLETE) { 896 987 dev_err(dma_chan->device->dev, 897 988 "Self-test copy timed out, disabling\n"); 898 989 err = -ENODEV; ··· 992 1083 msleep(8); 993 1084 994 1085 if (iop_adma_status(dma_chan, cookie, NULL) != 995 - DMA_SUCCESS) { 1086 + DMA_COMPLETE) { 996 1087 dev_err(dma_chan->device->dev, 997 1088 "Self-test xor timed out, disabling\n"); 998 1089 err = -ENODEV; ··· 1038 1129 iop_adma_issue_pending(dma_chan); 1039 1130 msleep(8); 1040 1131 1041 - if (iop_adma_status(dma_chan, cookie, NULL) != DMA_SUCCESS) { 1132 + if (iop_adma_status(dma_chan, cookie, NULL) != DMA_COMPLETE) { 1042 1133 dev_err(dma_chan->device->dev, 1043 1134 "Self-test zero sum timed out, disabling\n"); 1044 1135 err = -ENODEV; ··· 1067 1158 
iop_adma_issue_pending(dma_chan); 1068 1159 msleep(8); 1069 1160 1070 - if (iop_adma_status(dma_chan, cookie, NULL) != DMA_SUCCESS) { 1161 + if (iop_adma_status(dma_chan, cookie, NULL) != DMA_COMPLETE) { 1071 1162 dev_err(dma_chan->device->dev, 1072 1163 "Self-test non-zero sum timed out, disabling\n"); 1073 1164 err = -ENODEV; ··· 1163 1254 msleep(8); 1164 1255 1165 1256 if (iop_adma_status(dma_chan, cookie, NULL) != 1166 - DMA_SUCCESS) { 1257 + DMA_COMPLETE) { 1167 1258 dev_err(dev, "Self-test pq timed out, disabling\n"); 1168 1259 err = -ENODEV; 1169 1260 goto free_resources; ··· 1200 1291 msleep(8); 1201 1292 1202 1293 if (iop_adma_status(dma_chan, cookie, NULL) != 1203 - DMA_SUCCESS) { 1294 + DMA_COMPLETE) { 1204 1295 dev_err(dev, "Self-test pq-zero-sum timed out, disabling\n"); 1205 1296 err = -ENODEV; 1206 1297 goto free_resources; ··· 1232 1323 msleep(8); 1233 1324 1234 1325 if (iop_adma_status(dma_chan, cookie, NULL) != 1235 - DMA_SUCCESS) { 1326 + DMA_COMPLETE) { 1236 1327 dev_err(dev, "Self-test !pq-zero-sum timed out, disabling\n"); 1237 1328 err = -ENODEV; 1238 1329 goto free_resources;
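The DMA_SUCCESS to DMA_COMPLETE rename running through this file reflects what the engine can actually promise: that the transfer finished, not that the data arrived intact. A small sketch of the self-test polling pattern above, with an illustrative enum and stub status variable (not the real dmaengine definitions):

```c
#include <assert.h>

enum dma_status_sketch {
    DMA_COMPLETE_SKETCH,     /* transfer finished; says nothing about data */
    DMA_IN_PROGRESS_SKETCH,
    DMA_ERROR_SKETCH,
};

static enum dma_status_sketch chan_status;

/* Self-test pattern from the hunks above: after issuing the transfer and
 * sleeping, anything other than "complete" is treated as a timeout. */
static int selftest_wait(void)
{
    if (chan_status != DMA_COMPLETE_SKETCH)
        return -19;          /* -ENODEV, as the self-tests return */
    return 0;
}
```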
+4 -2
drivers/dma/ipu/ipu_idmac.c
··· 1232 1232 desc = list_entry(ichan->queue.next, struct idmac_tx_desc, list); 1233 1233 descnew = desc; 1234 1234 1235 - dev_dbg(dev, "IDMAC irq %d, dma 0x%08x, next dma 0x%08x, current %d, curbuf 0x%08x\n", 1236 - irq, sg_dma_address(*sg), sgnext ? sg_dma_address(sgnext) : 0, ichan->active_buffer, curbuf); 1235 + dev_dbg(dev, "IDMAC irq %d, dma %#llx, next dma %#llx, current %d, curbuf %#x\n", 1236 + irq, (u64)sg_dma_address(*sg), 1237 + sgnext ? (u64)sg_dma_address(sgnext) : 0, 1238 + ichan->active_buffer, curbuf); 1237 1239 1238 1240 /* Find the descriptor of sgnext */ 1239 1241 sgnew = idmac_sg_next(ichan, &descnew, *sg);
+2 -2
drivers/dma/k3dma.c
··· 344 344 size_t bytes = 0; 345 345 346 346 ret = dma_cookie_status(&c->vc.chan, cookie, state); 347 - if (ret == DMA_SUCCESS) 347 + if (ret == DMA_COMPLETE) 348 348 return ret; 349 349 350 350 spin_lock_irqsave(&c->vc.lock, flags); ··· 693 693 694 694 irq = platform_get_irq(op, 0); 695 695 ret = devm_request_irq(&op->dev, irq, 696 - k3_dma_int_handler, IRQF_DISABLED, DRIVER_NAME, d); 696 + k3_dma_int_handler, 0, DRIVER_NAME, d); 697 697 if (ret) 698 698 return ret; 699 699
+3 -4
drivers/dma/mmp_pdma.c
··· 798 798 * move the descriptors to a temporary list so we can drop 799 799 * the lock during the entire cleanup operation 800 800 */ 801 - list_del(&desc->node); 802 - list_add(&desc->node, &chain_cleanup); 801 + list_move(&desc->node, &chain_cleanup); 803 802 804 803 /* 805 804 * Look for the first list entry which has the ENDIRQEN flag ··· 862 863 863 864 if (irq) { 864 865 ret = devm_request_irq(pdev->dev, irq, 865 - mmp_pdma_chan_handler, IRQF_DISABLED, "pdma", phy); 866 + mmp_pdma_chan_handler, 0, "pdma", phy); 866 867 if (ret) { 867 868 dev_err(pdev->dev, "channel request irq fail!\n"); 868 869 return ret; ··· 969 970 /* all chan share one irq, demux inside */ 970 971 irq = platform_get_irq(op, 0); 971 972 ret = devm_request_irq(pdev->dev, irq, 972 - mmp_pdma_int_handler, IRQF_DISABLED, "pdma", pdev); 973 + mmp_pdma_int_handler, 0, "pdma", pdev); 973 974 if (ret) 974 975 return ret; 975 976 }
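The mmp_pdma cleanup above replaces an open-coded list_del() + list_add() pair with list_move(), which does the same thing as one primitive. A from-scratch miniature of the kernel-style doubly linked list showing the equivalence (this is a sketch, not the `<linux/list.h>` implementation):

```c
#include <assert.h>

struct list_head { struct list_head *prev, *next; };

static void list_init(struct list_head *h) { h->prev = h->next = h; }

static void list_del_sketch(struct list_head *e)
{
    e->prev->next = e->next;
    e->next->prev = e->prev;
}

static void list_add_sketch(struct list_head *e, struct list_head *head)
{
    e->next = head->next;
    e->prev = head;
    head->next->prev = e;
    head->next = e;
}

/* list_move(): delete from wherever it is, re-add at the new head */
static void list_move_sketch(struct list_head *e, struct list_head *head)
{
    list_del_sketch(e);
    list_add_sketch(e, head);
}

static int list_len(const struct list_head *h)
{
    int n = 0;
    const struct list_head *p;

    for (p = h->next; p != h; p = p->next)
        n++;
    return n;
}
```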
+34 -6
drivers/dma/mmp_tdma.c
··· 62 62 #define TDCR_BURSTSZ_16B (0x3 << 6) 63 63 #define TDCR_BURSTSZ_32B (0x6 << 6) 64 64 #define TDCR_BURSTSZ_64B (0x7 << 6) 65 + #define TDCR_BURSTSZ_SQU_1B (0x5 << 6) 66 + #define TDCR_BURSTSZ_SQU_2B (0x6 << 6) 67 + #define TDCR_BURSTSZ_SQU_4B (0x0 << 6) 68 + #define TDCR_BURSTSZ_SQU_8B (0x1 << 6) 69 + #define TDCR_BURSTSZ_SQU_16B (0x3 << 6) 65 70 #define TDCR_BURSTSZ_SQU_32B (0x7 << 6) 66 71 #define TDCR_BURSTSZ_128B (0x5 << 6) 67 72 #define TDCR_DSTDIR_MSK (0x3 << 4) /* Dst Direction */ ··· 163 158 /* disable irq */ 164 159 writel(0, tdmac->reg_base + TDIMR); 165 160 166 - tdmac->status = DMA_SUCCESS; 161 + tdmac->status = DMA_COMPLETE; 167 162 } 168 163 169 164 static void mmp_tdma_resume_chan(struct mmp_tdma_chan *tdmac) ··· 233 228 return -EINVAL; 234 229 } 235 230 } else if (tdmac->type == PXA910_SQU) { 236 - tdcr |= TDCR_BURSTSZ_SQU_32B; 237 231 tdcr |= TDCR_SSPMOD; 232 + 233 + switch (tdmac->burst_sz) { 234 + case 1: 235 + tdcr |= TDCR_BURSTSZ_SQU_1B; 236 + break; 237 + case 2: 238 + tdcr |= TDCR_BURSTSZ_SQU_2B; 239 + break; 240 + case 4: 241 + tdcr |= TDCR_BURSTSZ_SQU_4B; 242 + break; 243 + case 8: 244 + tdcr |= TDCR_BURSTSZ_SQU_8B; 245 + break; 246 + case 16: 247 + tdcr |= TDCR_BURSTSZ_SQU_16B; 248 + break; 249 + case 32: 250 + tdcr |= TDCR_BURSTSZ_SQU_32B; 251 + break; 252 + default: 253 + dev_err(tdmac->dev, "mmp_tdma: unknown burst size.\n"); 254 + return -EINVAL; 255 + } 238 256 } 239 257 240 258 writel(tdcr, tdmac->reg_base + TDCR); ··· 352 324 353 325 if (tdmac->irq) { 354 326 ret = devm_request_irq(tdmac->dev, tdmac->irq, 355 - mmp_tdma_chan_handler, IRQF_DISABLED, "tdma", tdmac); 327 + mmp_tdma_chan_handler, 0, "tdma", tdmac); 356 328 if (ret) 357 329 return ret; 358 330 } ··· 393 365 int num_periods = buf_len / period_len; 394 366 int i = 0, buf = 0; 395 367 396 - if (tdmac->status != DMA_SUCCESS) 368 + if (tdmac->status != DMA_COMPLETE) 397 369 return NULL; 398 370 399 371 if (period_len > TDMA_MAX_XFER_BYTES) { ··· 527 499 tdmac->idx = 
idx; 528 500 tdmac->type = type; 529 501 tdmac->reg_base = (unsigned long)tdev->base + idx * 4; 530 - tdmac->status = DMA_SUCCESS; 502 + tdmac->status = DMA_COMPLETE; 531 503 tdev->tdmac[tdmac->idx] = tdmac; 532 504 tasklet_init(&tdmac->tasklet, dma_do_tasklet, (unsigned long)tdmac); 533 505 ··· 582 554 if (irq_num != chan_num) { 583 555 irq = platform_get_irq(pdev, 0); 584 556 ret = devm_request_irq(&pdev->dev, irq, 585 - mmp_tdma_int_handler, IRQF_DISABLED, "tdma", tdev); 557 + mmp_tdma_int_handler, 0, "tdma", tdev); 586 558 if (ret) 587 559 return ret; 588 560 }
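The mmp_tdma change above stops hard-coding a 32-byte burst for PXA910_SQU and maps the configured burst size to register bits instead. The mapping can be sketched directly from the TDCR_BURSTSZ_SQU_* values in the diff; note the SQU encoding differs from the non-SQU one (4 bytes maps to 0x0 << 6, not 0x5 << 6):

```c
#include <assert.h>

#define TDCR_BURSTSZ_SQU_1B  (0x5 << 6)
#define TDCR_BURSTSZ_SQU_2B  (0x6 << 6)
#define TDCR_BURSTSZ_SQU_4B  (0x0 << 6)
#define TDCR_BURSTSZ_SQU_8B  (0x1 << 6)
#define TDCR_BURSTSZ_SQU_16B (0x3 << 6)
#define TDCR_BURSTSZ_SQU_32B (0x7 << 6)

/* Returns the TDCR bits for a burst size, or -1 for unsupported sizes
 * (the driver returns -EINVAL in that case). */
static int squ_burst_bits(unsigned int burst_sz)
{
    switch (burst_sz) {
    case 1:  return TDCR_BURSTSZ_SQU_1B;
    case 2:  return TDCR_BURSTSZ_SQU_2B;
    case 4:  return TDCR_BURSTSZ_SQU_4B;
    case 8:  return TDCR_BURSTSZ_SQU_8B;
    case 16: return TDCR_BURSTSZ_SQU_16B;
    case 32: return TDCR_BURSTSZ_SQU_32B;
    default: return -1;
    }
}
```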
+7 -51
drivers/dma/mv_xor.c
··· 60 60 return hw_desc->phy_dest_addr; 61 61 } 62 62 63 - static u32 mv_desc_get_src_addr(struct mv_xor_desc_slot *desc, 64 - int src_idx) 65 - { 66 - struct mv_xor_desc *hw_desc = desc->hw_desc; 67 - return hw_desc->phy_src_addr[mv_phy_src_idx(src_idx)]; 68 - } 69 - 70 - 71 63 static void mv_desc_set_byte_count(struct mv_xor_desc_slot *desc, 72 64 u32 byte_count) 73 65 { ··· 270 278 desc->async_tx.callback( 271 279 desc->async_tx.callback_param); 272 280 273 - /* unmap dma addresses 274 - * (unmap_single vs unmap_page?) 275 - */ 276 - if (desc->group_head && desc->unmap_len) { 277 - struct mv_xor_desc_slot *unmap = desc->group_head; 278 - struct device *dev = mv_chan_to_devp(mv_chan); 279 - u32 len = unmap->unmap_len; 280 - enum dma_ctrl_flags flags = desc->async_tx.flags; 281 - u32 src_cnt; 282 - dma_addr_t addr; 283 - dma_addr_t dest; 284 - 285 - src_cnt = unmap->unmap_src_cnt; 286 - dest = mv_desc_get_dest_addr(unmap); 287 - if (!(flags & DMA_COMPL_SKIP_DEST_UNMAP)) { 288 - enum dma_data_direction dir; 289 - 290 - if (src_cnt > 1) /* is xor ? 
*/ 291 - dir = DMA_BIDIRECTIONAL; 292 - else 293 - dir = DMA_FROM_DEVICE; 294 - dma_unmap_page(dev, dest, len, dir); 295 - } 296 - 297 - if (!(flags & DMA_COMPL_SKIP_SRC_UNMAP)) { 298 - while (src_cnt--) { 299 - addr = mv_desc_get_src_addr(unmap, 300 - src_cnt); 301 - if (addr == dest) 302 - continue; 303 - dma_unmap_page(dev, addr, len, 304 - DMA_TO_DEVICE); 305 - } 306 - } 281 + dma_descriptor_unmap(&desc->async_tx); 282 + if (desc->group_head) 307 283 desc->group_head = NULL; 308 - } 309 284 } 310 285 311 286 /* run dependent operations */ ··· 708 749 enum dma_status ret; 709 750 710 751 ret = dma_cookie_status(chan, cookie, txstate); 711 - if (ret == DMA_SUCCESS) { 752 + if (ret == DMA_COMPLETE) { 712 753 mv_xor_clean_completed_slots(mv_chan); 713 754 return ret; 714 755 } ··· 833 874 msleep(1); 834 875 835 876 if (mv_xor_status(dma_chan, cookie, NULL) != 836 - DMA_SUCCESS) { 877 + DMA_COMPLETE) { 837 878 dev_err(dma_chan->device->dev, 838 879 "Self-test copy timed out, disabling\n"); 839 880 err = -ENODEV; ··· 927 968 msleep(8); 928 969 929 970 if (mv_xor_status(dma_chan, cookie, NULL) != 930 - DMA_SUCCESS) { 971 + DMA_COMPLETE) { 931 972 dev_err(dma_chan->device->dev, 932 973 "Self-test xor timed out, disabling\n"); 933 974 err = -ENODEV; ··· 1035 1076 } 1036 1077 1037 1078 mv_chan->mmr_base = xordev->xor_base; 1038 - if (!mv_chan->mmr_base) { 1039 - ret = -ENOMEM; 1040 - goto err_free_dma; 1041 - } 1079 + mv_chan->mmr_high_base = xordev->xor_high_base; 1042 1080 tasklet_init(&mv_chan->irq_tasklet, mv_xor_tasklet, (unsigned long) 1043 1081 mv_chan); 1044 1082 ··· 1094 1138 mv_xor_conf_mbus_windows(struct mv_xor_device *xordev, 1095 1139 const struct mbus_dram_target_info *dram) 1096 1140 { 1097 - void __iomem *base = xordev->xor_base; 1141 + void __iomem *base = xordev->xor_high_base; 1098 1142 u32 win_enable = 0; 1099 1143 int i; 1100 1144
+13 -12
drivers/dma/mv_xor.h
··· 34 34 #define XOR_OPERATION_MODE_MEMCPY 2 35 35 #define XOR_DESCRIPTOR_SWAP BIT(14) 36 36 37 - #define XOR_CURR_DESC(chan) (chan->mmr_base + 0x210 + (chan->idx * 4)) 38 - #define XOR_NEXT_DESC(chan) (chan->mmr_base + 0x200 + (chan->idx * 4)) 39 - #define XOR_BYTE_COUNT(chan) (chan->mmr_base + 0x220 + (chan->idx * 4)) 40 - #define XOR_DEST_POINTER(chan) (chan->mmr_base + 0x2B0 + (chan->idx * 4)) 41 - #define XOR_BLOCK_SIZE(chan) (chan->mmr_base + 0x2C0 + (chan->idx * 4)) 42 - #define XOR_INIT_VALUE_LOW(chan) (chan->mmr_base + 0x2E0) 43 - #define XOR_INIT_VALUE_HIGH(chan) (chan->mmr_base + 0x2E4) 37 + #define XOR_CURR_DESC(chan) (chan->mmr_high_base + 0x10 + (chan->idx * 4)) 38 + #define XOR_NEXT_DESC(chan) (chan->mmr_high_base + 0x00 + (chan->idx * 4)) 39 + #define XOR_BYTE_COUNT(chan) (chan->mmr_high_base + 0x20 + (chan->idx * 4)) 40 + #define XOR_DEST_POINTER(chan) (chan->mmr_high_base + 0xB0 + (chan->idx * 4)) 41 + #define XOR_BLOCK_SIZE(chan) (chan->mmr_high_base + 0xC0 + (chan->idx * 4)) 42 + #define XOR_INIT_VALUE_LOW(chan) (chan->mmr_high_base + 0xE0) 43 + #define XOR_INIT_VALUE_HIGH(chan) (chan->mmr_high_base + 0xE4) 44 44 45 45 #define XOR_CONFIG(chan) (chan->mmr_base + 0x10 + (chan->idx * 4)) 46 46 #define XOR_ACTIVATION(chan) (chan->mmr_base + 0x20 + (chan->idx * 4)) ··· 50 50 #define XOR_ERROR_ADDR(chan) (chan->mmr_base + 0x60) 51 51 #define XOR_INTR_MASK_VALUE 0x3F5 52 52 53 - #define WINDOW_BASE(w) (0x250 + ((w) << 2)) 54 - #define WINDOW_SIZE(w) (0x270 + ((w) << 2)) 55 - #define WINDOW_REMAP_HIGH(w) (0x290 + ((w) << 2)) 56 - #define WINDOW_BAR_ENABLE(chan) (0x240 + ((chan) << 2)) 57 - #define WINDOW_OVERRIDE_CTRL(chan) (0x2A0 + ((chan) << 2)) 53 + #define WINDOW_BASE(w) (0x50 + ((w) << 2)) 54 + #define WINDOW_SIZE(w) (0x70 + ((w) << 2)) 55 + #define WINDOW_REMAP_HIGH(w) (0x90 + ((w) << 2)) 56 + #define WINDOW_BAR_ENABLE(chan) (0x40 + ((chan) << 2)) 57 + #define WINDOW_OVERRIDE_CTRL(chan) (0xA0 + ((chan) << 2)) 58 58 59 59 struct mv_xor_device { 60 
60 void __iomem *xor_base; ··· 82 82 int pending; 83 83 spinlock_t lock; /* protects the descriptor slot pool */ 84 84 void __iomem *mmr_base; 85 + void __iomem *mmr_high_base; 85 86 unsigned int idx; 86 87 int irq; 87 88 enum dma_transaction_type current_type;
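The mv_xor.h hunk above rebases a group of register macros from `mmr_base + 0x2XX` to `mmr_high_base + 0xXX`, so the two mmio windows no longer have to be adjacent. A sketch of the arithmetic: on a hypothetical layout where the high window sits 0x200 past the low window (an assumption made only for this check; the point of the fix is that real platforms need not look like this), both encodings name the same register:

```c
#include <assert.h>
#include <stdint.h>

#define HIGH_WINDOW_GAP 0x200u   /* hypothetical contiguous layout */

static uintptr_t old_curr_desc(uintptr_t base, unsigned int idx)
{
    return base + 0x210 + idx * 4;        /* pre-fix XOR_CURR_DESC */
}

static uintptr_t new_curr_desc(uintptr_t high_base, unsigned int idx)
{
    return high_base + 0x10 + idx * 4;    /* post-fix XOR_CURR_DESC */
}
```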
+137 -39
drivers/dma/mxs-dma.c
··· 27 27 #include <linux/of.h> 28 28 #include <linux/of_device.h> 29 29 #include <linux/of_dma.h> 30 + #include <linux/list.h> 30 31 31 32 #include <asm/irq.h> 32 33 ··· 58 57 (((dma_is_apbh(d) && apbh_is_old(d)) ? 0x050 : 0x110) + (n) * 0x70) 59 58 #define HW_APBHX_CHn_SEMA(d, n) \ 60 59 (((dma_is_apbh(d) && apbh_is_old(d)) ? 0x080 : 0x140) + (n) * 0x70) 60 + #define HW_APBHX_CHn_BAR(d, n) \ 61 + (((dma_is_apbh(d) && apbh_is_old(d)) ? 0x070 : 0x130) + (n) * 0x70) 62 + #define HW_APBX_CHn_DEBUG1(d, n) (0x150 + (n) * 0x70) 61 63 62 64 /* 63 65 * ccw bits definitions ··· 119 115 int desc_count; 120 116 enum dma_status status; 121 117 unsigned int flags; 118 + bool reset; 122 119 #define MXS_DMA_SG_LOOP (1 << 0) 120 + #define MXS_DMA_USE_SEMAPHORE (1 << 1) 123 121 }; 124 122 125 123 #define MXS_DMA_CHANNELS 16 ··· 207 201 struct mxs_dma_engine *mxs_dma = mxs_chan->mxs_dma; 208 202 int chan_id = mxs_chan->chan.chan_id; 209 203 210 - if (dma_is_apbh(mxs_dma) && apbh_is_old(mxs_dma)) 204 + /* 205 + * mxs dma channel resets can cause a channel stall. To recover from a 206 + * channel stall, we have to reset the whole DMA engine. To avoid this, 207 + * we use cyclic DMA with semaphores, that are enhanced in 208 + * mxs_dma_int_handler. To reset the channel, we can simply stop writing 209 + * into the semaphore counter. 
210 + */ 211 + if (mxs_chan->flags & MXS_DMA_USE_SEMAPHORE && 212 + mxs_chan->flags & MXS_DMA_SG_LOOP) { 213 + mxs_chan->reset = true; 214 + } else if (dma_is_apbh(mxs_dma) && apbh_is_old(mxs_dma)) { 211 215 writel(1 << (chan_id + BP_APBH_CTRL0_RESET_CHANNEL), 212 216 mxs_dma->base + HW_APBHX_CTRL0 + STMP_OFFSET_REG_SET); 213 - else 217 + } else { 218 + unsigned long elapsed = 0; 219 + const unsigned long max_wait = 50000; /* 50ms */ 220 + void __iomem *reg_dbg1 = mxs_dma->base + 221 + HW_APBX_CHn_DEBUG1(mxs_dma, chan_id); 222 + 223 + /* 224 + * On i.MX28 APBX, the DMA channel can stop working if we reset 225 + * the channel while it is in READ_FLUSH (0x08) state. 226 + * We wait here until we leave the state. Then we trigger the 227 + * reset. Waiting a maximum of 50ms, the kernel shouldn't crash 228 + * because of this. 229 + */ 230 + while ((readl(reg_dbg1) & 0xf) == 0x8 && elapsed < max_wait) { 231 + udelay(100); 232 + elapsed += 100; 233 + } 234 + 235 + if (elapsed >= max_wait) 236 + dev_err(&mxs_chan->mxs_dma->pdev->dev, 237 + "Failed waiting for the DMA channel %d to leave state READ_FLUSH, trying to reset channel in READ_FLUSH state now\n", 238 + chan_id); 239 + 214 240 writel(1 << (chan_id + BP_APBHX_CHANNEL_CTRL_RESET_CHANNEL), 215 241 mxs_dma->base + HW_APBHX_CHANNEL_CTRL + STMP_OFFSET_REG_SET); 242 + } 243 + 244 + mxs_chan->status = DMA_COMPLETE; 216 245 } 217 246 218 247 static void mxs_dma_enable_chan(struct mxs_dma_chan *mxs_chan) ··· 260 219 mxs_dma->base + HW_APBHX_CHn_NXTCMDAR(mxs_dma, chan_id)); 261 220 262 221 /* write 1 to SEMA to kick off the channel */ 263 - writel(1, mxs_dma->base + HW_APBHX_CHn_SEMA(mxs_dma, chan_id)); 222 + if (mxs_chan->flags & MXS_DMA_USE_SEMAPHORE && 223 + mxs_chan->flags & MXS_DMA_SG_LOOP) { 224 + /* A cyclic DMA consists of at least 2 segments, so initialize 225 + * the semaphore with 2 so we have enough time to add 1 to the 226 + * semaphore if we need to */ 227 + writel(2, mxs_dma->base + HW_APBHX_CHn_SEMA(mxs_dma, 
chan_id)); 228 + } else { 229 + writel(1, mxs_dma->base + HW_APBHX_CHn_SEMA(mxs_dma, chan_id)); 230 + } 231 + mxs_chan->reset = false; 264 232 } 265 233 266 234 static void mxs_dma_disable_chan(struct mxs_dma_chan *mxs_chan) 267 235 { 268 - mxs_chan->status = DMA_SUCCESS; 236 + mxs_chan->status = DMA_COMPLETE; 269 237 } 270 238 271 239 static void mxs_dma_pause_chan(struct mxs_dma_chan *mxs_chan) ··· 322 272 mxs_chan->desc.callback(mxs_chan->desc.callback_param); 323 273 } 324 274 275 + static int mxs_dma_irq_to_chan(struct mxs_dma_engine *mxs_dma, int irq) 276 + { 277 + int i; 278 + 279 + for (i = 0; i != mxs_dma->nr_channels; ++i) 280 + if (mxs_dma->mxs_chans[i].chan_irq == irq) 281 + return i; 282 + 283 + return -EINVAL; 284 + } 285 + 325 286 static irqreturn_t mxs_dma_int_handler(int irq, void *dev_id) 326 287 { 327 288 struct mxs_dma_engine *mxs_dma = dev_id; 328 - u32 stat1, stat2; 289 + struct mxs_dma_chan *mxs_chan; 290 + u32 completed; 291 + u32 err; 292 + int chan = mxs_dma_irq_to_chan(mxs_dma, irq); 293 + 294 + if (chan < 0) 295 + return IRQ_NONE; 329 296 330 297 /* completion status */ 331 - stat1 = readl(mxs_dma->base + HW_APBHX_CTRL1); 332 - stat1 &= MXS_DMA_CHANNELS_MASK; 333 - writel(stat1, mxs_dma->base + HW_APBHX_CTRL1 + STMP_OFFSET_REG_CLR); 298 + completed = readl(mxs_dma->base + HW_APBHX_CTRL1); 299 + completed = (completed >> chan) & 0x1; 300 + 301 + /* Clear interrupt */ 302 + writel((1 << chan), 303 + mxs_dma->base + HW_APBHX_CTRL1 + STMP_OFFSET_REG_CLR); 334 304 335 305 /* error status */ 336 - stat2 = readl(mxs_dma->base + HW_APBHX_CTRL2); 337 - writel(stat2, mxs_dma->base + HW_APBHX_CTRL2 + STMP_OFFSET_REG_CLR); 306 + err = readl(mxs_dma->base + HW_APBHX_CTRL2); 307 + err &= (1 << (MXS_DMA_CHANNELS + chan)) | (1 << chan); 308 + 309 + /* 310 + * error status bit is in the upper 16 bits, error irq bit in the lower 311 + * 16 bits. 
We transform it into a simpler error code: 312 + * err: 0x00 = no error, 0x01 = TERMINATION, 0x02 = BUS_ERROR 313 + */ 314 + err = (err >> (MXS_DMA_CHANNELS + chan)) + (err >> chan); 315 + 316 + /* Clear error irq */ 317 + writel((1 << chan), 318 + mxs_dma->base + HW_APBHX_CTRL2 + STMP_OFFSET_REG_CLR); 338 319 339 320 /* 340 321 * When both completion and error of termination bits set at the 341 322 * same time, we do not take it as an error. IOW, it only becomes 342 - * an error we need to handle here in case of either it's (1) a bus 343 - * error or (2) a termination error with no completion. 323 + * an error we need to handle here in case of either it's a bus 324 + * error or a termination error with no completion. 0x01 is termination 325 + * error, so we can subtract err & completed to get the real error case. 344 326 */ 345 - stat2 = ((stat2 >> MXS_DMA_CHANNELS) & stat2) | /* (1) */ 346 - (~(stat2 >> MXS_DMA_CHANNELS) & stat2 & ~stat1); /* (2) */ 327 + err -= err & completed; 347 328 348 - /* combine error and completion status for checking */ 349 - stat1 = (stat2 << MXS_DMA_CHANNELS) | stat1; 350 - while (stat1) { 351 - int channel = fls(stat1) - 1; 352 - struct mxs_dma_chan *mxs_chan = 353 - &mxs_dma->mxs_chans[channel % MXS_DMA_CHANNELS]; 329 + mxs_chan = &mxs_dma->mxs_chans[chan]; 354 330 355 - if (channel >= MXS_DMA_CHANNELS) { 356 - dev_dbg(mxs_dma->dma_device.dev, 357 - "%s: error in channel %d\n", __func__, 358 - channel - MXS_DMA_CHANNELS); 359 - mxs_chan->status = DMA_ERROR; 360 - mxs_dma_reset_chan(mxs_chan); 331 + if (err) { 332 + dev_dbg(mxs_dma->dma_device.dev, 333 + "%s: error in channel %d\n", __func__, 334 + chan); 335 + mxs_chan->status = DMA_ERROR; 336 + mxs_dma_reset_chan(mxs_chan); 337 + } else if (mxs_chan->status != DMA_COMPLETE) { 338 + if (mxs_chan->flags & MXS_DMA_SG_LOOP) { 339 + mxs_chan->status = DMA_IN_PROGRESS; 340 + if (mxs_chan->flags & MXS_DMA_USE_SEMAPHORE) 341 + writel(1, mxs_dma->base + 342 + HW_APBHX_CHn_SEMA(mxs_dma, 
chan)); 361 343 } else { 362 - if (mxs_chan->flags & MXS_DMA_SG_LOOP) 363 - mxs_chan->status = DMA_IN_PROGRESS; 364 - else 365 - mxs_chan->status = DMA_SUCCESS; 344 + mxs_chan->status = DMA_COMPLETE; 366 345 } 367 - 368 - stat1 &= ~(1 << channel); 369 - 370 - if (mxs_chan->status == DMA_SUCCESS) 371 - dma_cookie_complete(&mxs_chan->desc); 372 - 373 - /* schedule tasklet on this channel */ 374 - tasklet_schedule(&mxs_chan->tasklet); 375 346 } 347 + 348 + if (mxs_chan->status == DMA_COMPLETE) { 349 + if (mxs_chan->reset) 350 + return IRQ_HANDLED; 351 + dma_cookie_complete(&mxs_chan->desc); 352 + } 353 + 354 + /* schedule tasklet on this channel */ 355 + tasklet_schedule(&mxs_chan->tasklet); 376 356 377 357 return IRQ_HANDLED; 378 358 } ··· 603 523 604 524 mxs_chan->status = DMA_IN_PROGRESS; 605 525 mxs_chan->flags |= MXS_DMA_SG_LOOP; 526 + mxs_chan->flags |= MXS_DMA_USE_SEMAPHORE; 606 527 607 528 if (num_periods > NUM_CCW) { 608 529 dev_err(mxs_dma->dma_device.dev, ··· 635 554 ccw->bits |= CCW_IRQ; 636 555 ccw->bits |= CCW_HALT_ON_TERM; 637 556 ccw->bits |= CCW_TERM_FLUSH; 557 + ccw->bits |= CCW_DEC_SEM; 638 558 ccw->bits |= BF_CCW(direction == DMA_DEV_TO_MEM ? 
639 559 MXS_DMA_CMD_WRITE : MXS_DMA_CMD_READ, COMMAND); 640 560 ··· 681 599 dma_cookie_t cookie, struct dma_tx_state *txstate) 682 600 { 683 601 struct mxs_dma_chan *mxs_chan = to_mxs_dma_chan(chan); 602 + struct mxs_dma_engine *mxs_dma = mxs_chan->mxs_dma; 603 + u32 residue = 0; 684 604 685 - dma_set_tx_state(txstate, chan->completed_cookie, chan->cookie, 0); 605 + if (mxs_chan->status == DMA_IN_PROGRESS && 606 + mxs_chan->flags & MXS_DMA_SG_LOOP) { 607 + struct mxs_dma_ccw *last_ccw; 608 + u32 bar; 609 + 610 + last_ccw = &mxs_chan->ccw[mxs_chan->desc_count - 1]; 611 + residue = last_ccw->xfer_bytes + last_ccw->bufaddr; 612 + 613 + bar = readl(mxs_dma->base + 614 + HW_APBHX_CHn_BAR(mxs_dma, chan->chan_id)); 615 + residue -= bar; 616 + } 617 + 618 + dma_set_tx_state(txstate, chan->completed_cookie, chan->cookie, 619 + residue); 686 620 687 621 return mxs_chan->status; 688 622 }
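The mxs-dma tx_status hunk above adds a real residue for cyclic transfers: the end address of the last control word (bufaddr + xfer_bytes) minus the hardware's current bus address register (BAR) gives the bytes still outstanding. A sketch with a stripped-down stand-in for the real mxs_dma_ccw struct:

```c
#include <assert.h>
#include <stdint.h>

struct ccw_sketch {
    uint32_t bufaddr;        /* DMA address this segment starts at */
    uint32_t xfer_bytes;     /* length of this segment */
};

/* residue = end of the last segment minus the current hardware position,
 * mirroring "residue = last_ccw->xfer_bytes + last_ccw->bufaddr - bar" */
static uint32_t cyclic_residue(const struct ccw_sketch *last_ccw, uint32_t bar)
{
    return last_ccw->xfer_bytes + last_ccw->bufaddr - bar;
}
```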
+1 -1
drivers/dma/omap-dma.c
··· 248 248 unsigned long flags; 249 249 250 250 ret = dma_cookie_status(chan, cookie, txstate); 251 - if (ret == DMA_SUCCESS || !txstate) 251 + if (ret == DMA_COMPLETE || !txstate) 252 252 return ret; 253 253 254 254 spin_lock_irqsave(&c->vc.lock, flags);
+16 -16
drivers/dma/pl330.c
··· 2268 2268 list_move_tail(&desc->node, &pch->dmac->desc_pool); 2269 2269 } 2270 2270 2271 + dma_descriptor_unmap(&desc->txd); 2272 + 2271 2273 if (callback) { 2272 2274 spin_unlock_irqrestore(&pch->lock, flags); 2273 2275 callback(callback_param); ··· 2316 2314 return false; 2317 2315 2318 2316 peri_id = chan->private; 2319 - return *peri_id == (unsigned)param; 2317 + return *peri_id == (unsigned long)param; 2320 2318 } 2321 2319 EXPORT_SYMBOL(pl330_filter); 2322 2320 ··· 2928 2926 2929 2927 amba_set_drvdata(adev, pdmac); 2930 2928 2931 - irq = adev->irq[0]; 2932 - ret = request_irq(irq, pl330_irq_handler, 0, 2933 - dev_name(&adev->dev), pi); 2934 - if (ret) 2935 - return ret; 2929 + for (i = 0; i < AMBA_NR_IRQS; i++) { 2930 + irq = adev->irq[i]; 2931 + if (irq) { 2932 + ret = devm_request_irq(&adev->dev, irq, 2933 + pl330_irq_handler, 0, 2934 + dev_name(&adev->dev), pi); 2935 + if (ret) 2936 + return ret; 2937 + } else { 2938 + break; 2939 + } 2940 + } 2936 2941 2937 2942 pi->pcfg.periph_id = adev->periphid; 2938 2943 ret = pl330_add(pi); 2939 2944 if (ret) 2940 - goto probe_err1; 2945 + return ret; 2941 2946 2942 2947 INIT_LIST_HEAD(&pdmac->desc_pool); 2943 2948 spin_lock_init(&pdmac->pool_lock); ··· 3042 3033 3043 3034 return 0; 3044 3035 probe_err3: 3045 - amba_set_drvdata(adev, NULL); 3046 - 3047 3036 /* Idle the DMAC */ 3048 3037 list_for_each_entry_safe(pch, _p, &pdmac->ddma.channels, 3049 3038 chan.device_node) { ··· 3055 3048 } 3056 3049 probe_err2: 3057 3050 pl330_del(pi); 3058 - probe_err1: 3059 - free_irq(irq, pi); 3060 3051 3061 3052 return ret; 3062 3053 } ··· 3064 3059 struct dma_pl330_dmac *pdmac = amba_get_drvdata(adev); 3065 3060 struct dma_pl330_chan *pch, *_p; 3066 3061 struct pl330_info *pi; 3067 - int irq; 3068 3062 3069 3063 if (!pdmac) 3070 3064 return 0; ··· 3072 3068 of_dma_controller_free(adev->dev.of_node); 3073 3069 3074 3070 dma_async_device_unregister(&pdmac->ddma); 3075 - amba_set_drvdata(adev, NULL); 3076 3071 3077 3072 /* Idle 
the DMAC */ 3078 3073 list_for_each_entry_safe(pch, _p, &pdmac->ddma.channels, ··· 3088 3085 pi = &pdmac->pif; 3089 3086 3090 3087 pl330_del(pi); 3091 - 3092 - irq = adev->irq[0]; 3093 - free_irq(irq, pi); 3094 3088 3095 3089 return 0; 3096 3090 }
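The pl330 probe change above stops requesting only `adev->irq[0]` and instead walks the whole AMBA irq array, requesting every populated entry and stopping at the first zero. A sketch of that loop with a stub request function and an illustrative array size (the real bound is AMBA_NR_IRQS):

```c
#include <assert.h>

#define NR_IRQS_SKETCH 9

static int requested;        /* how many irqs were hooked up */

static int request_irq_sketch(int irq)
{
    (void)irq;               /* a real handler registration goes here */
    requested++;
    return 0;
}

static int request_all_irqs(const int *irqs)
{
    int i, ret;

    for (i = 0; i < NR_IRQS_SKETCH; i++) {
        if (!irqs[i])
            break;           /* a zero entry ends the list */
        ret = request_irq_sketch(irqs[i]);
        if (ret)
            return ret;      /* devm-managed irqs need no manual unwind */
    }
    return 0;
}
```

Because the hunk also switches to devm_request_irq(), the probe_err1 free_irq() unwind label and the free_irq() call in remove become unnecessary, which is why they are deleted above.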
+2 -270
drivers/dma/ppc4xx/adma.c
···
 }
 
 /**
- * ppc440spe_desc_get_src_addr - extract the source address from the descriptor
- */
-static u32 ppc440spe_desc_get_src_addr(struct ppc440spe_adma_desc_slot *desc,
-                struct ppc440spe_adma_chan *chan, int src_idx)
-{
-    struct dma_cdb *dma_hw_desc;
-    struct xor_cb *xor_hw_desc;
-
-    switch (chan->device->id) {
-    case PPC440SPE_DMA0_ID:
-    case PPC440SPE_DMA1_ID:
-        dma_hw_desc = desc->hw_desc;
-        /* May have 0, 1, 2, or 3 sources */
-        switch (dma_hw_desc->opc) {
-        case DMA_CDB_OPC_NO_OP:
-        case DMA_CDB_OPC_DFILL128:
-            return 0;
-        case DMA_CDB_OPC_DCHECK128:
-            if (unlikely(src_idx)) {
-                printk(KERN_ERR "%s: try to get %d source for"
-                    " DCHECK128\n", __func__, src_idx);
-                BUG();
-            }
-            return le32_to_cpu(dma_hw_desc->sg1l);
-        case DMA_CDB_OPC_MULTICAST:
-        case DMA_CDB_OPC_MV_SG1_SG2:
-            if (unlikely(src_idx > 2)) {
-                printk(KERN_ERR "%s: try to get %d source from"
-                    " DMA descr\n", __func__, src_idx);
-                BUG();
-            }
-            if (src_idx) {
-                if (le32_to_cpu(dma_hw_desc->sg1u) &
-                    DMA_CUED_XOR_WIN_MSK) {
-                    u8 region;
-
-                    if (src_idx == 1)
-                        return le32_to_cpu(
-                            dma_hw_desc->sg1l) +
-                            desc->unmap_len;
-
-                    region = (le32_to_cpu(
-                        dma_hw_desc->sg1u)) >>
-                        DMA_CUED_REGION_OFF;
-
-                    region &= DMA_CUED_REGION_MSK;
-                    switch (region) {
-                    case DMA_RXOR123:
-                        return le32_to_cpu(
-                            dma_hw_desc->sg1l) +
-                            (desc->unmap_len << 1);
-                    case DMA_RXOR124:
-                        return le32_to_cpu(
-                            dma_hw_desc->sg1l) +
-                            (desc->unmap_len * 3);
-                    case DMA_RXOR125:
-                        return le32_to_cpu(
-                            dma_hw_desc->sg1l) +
-                            (desc->unmap_len << 2);
-                    default:
-                        printk(KERN_ERR
-                            "%s: try to"
-                            " get src3 for region %02x"
-                            "PPC440SPE_DESC_RXOR12?\n",
-                            __func__, region);
-                        BUG();
-                    }
-                } else {
-                    printk(KERN_ERR
-                        "%s: try to get %d"
-                        " source for non-cued descr\n",
-                        __func__, src_idx);
-                    BUG();
-                }
-            }
-            return le32_to_cpu(dma_hw_desc->sg1l);
-        default:
-            printk(KERN_ERR "%s: unknown OPC 0x%02x\n",
-                __func__, dma_hw_desc->opc);
-            BUG();
-        }
-        return le32_to_cpu(dma_hw_desc->sg1l);
-    case PPC440SPE_XOR_ID:
-        /* May have up to 16 sources */
-        xor_hw_desc = desc->hw_desc;
-        return xor_hw_desc->ops[src_idx].l;
-    }
-    return 0;
-}
-
-/**
- * ppc440spe_desc_get_dest_addr - extract the destination address from the
- * descriptor
- */
-static u32 ppc440spe_desc_get_dest_addr(struct ppc440spe_adma_desc_slot *desc,
-                struct ppc440spe_adma_chan *chan, int idx)
-{
-    struct dma_cdb *dma_hw_desc;
-    struct xor_cb *xor_hw_desc;
-
-    switch (chan->device->id) {
-    case PPC440SPE_DMA0_ID:
-    case PPC440SPE_DMA1_ID:
-        dma_hw_desc = desc->hw_desc;
-
-        if (likely(!idx))
-            return le32_to_cpu(dma_hw_desc->sg2l);
-        return le32_to_cpu(dma_hw_desc->sg3l);
-    case PPC440SPE_XOR_ID:
-        xor_hw_desc = desc->hw_desc;
-        return xor_hw_desc->cbtal;
-    }
-    return 0;
-}
-
-/**
- * ppc440spe_desc_get_src_num - extract the number of source addresses from
- * the descriptor
- */
-static u32 ppc440spe_desc_get_src_num(struct ppc440spe_adma_desc_slot *desc,
-                struct ppc440spe_adma_chan *chan)
-{
-    struct dma_cdb *dma_hw_desc;
-    struct xor_cb *xor_hw_desc;
-
-    switch (chan->device->id) {
-    case PPC440SPE_DMA0_ID:
-    case PPC440SPE_DMA1_ID:
-        dma_hw_desc = desc->hw_desc;
-
-        switch (dma_hw_desc->opc) {
-        case DMA_CDB_OPC_NO_OP:
-        case DMA_CDB_OPC_DFILL128:
-            return 0;
-        case DMA_CDB_OPC_DCHECK128:
-            return 1;
-        case DMA_CDB_OPC_MV_SG1_SG2:
-        case DMA_CDB_OPC_MULTICAST:
-            /*
-             * Only for RXOR operations we have more than
-             * one source
-             */
-            if (le32_to_cpu(dma_hw_desc->sg1u) &
-                DMA_CUED_XOR_WIN_MSK) {
-                /* RXOR op, there are 2 or 3 sources */
-                if (((le32_to_cpu(dma_hw_desc->sg1u) >>
-                    DMA_CUED_REGION_OFF) &
-                    DMA_CUED_REGION_MSK) == DMA_RXOR12) {
-                    /* RXOR 1-2 */
-                    return 2;
-                } else {
-                    /* RXOR 1-2-3/1-2-4/1-2-5 */
-                    return 3;
-                }
-            }
-            return 1;
-        default:
-            printk(KERN_ERR "%s: unknown OPC 0x%02x\n",
-                __func__, dma_hw_desc->opc);
-            BUG();
-        }
-    case PPC440SPE_XOR_ID:
-        /* up to 16 sources */
-        xor_hw_desc = desc->hw_desc;
-        return xor_hw_desc->cbc & XOR_CDCR_OAC_MSK;
-    default:
-        BUG();
-    }
-    return 0;
-}
-
-/**
- * ppc440spe_desc_get_dst_num - get the number of destination addresses in
- * this descriptor
- */
-static u32 ppc440spe_desc_get_dst_num(struct ppc440spe_adma_desc_slot *desc,
-                struct ppc440spe_adma_chan *chan)
-{
-    struct dma_cdb *dma_hw_desc;
-
-    switch (chan->device->id) {
-    case PPC440SPE_DMA0_ID:
-    case PPC440SPE_DMA1_ID:
-        /* May be 1 or 2 destinations */
-        dma_hw_desc = desc->hw_desc;
-        switch (dma_hw_desc->opc) {
-        case DMA_CDB_OPC_NO_OP:
-        case DMA_CDB_OPC_DCHECK128:
-            return 0;
-        case DMA_CDB_OPC_MV_SG1_SG2:
-        case DMA_CDB_OPC_DFILL128:
-            return 1;
-        case DMA_CDB_OPC_MULTICAST:
-            if (desc->dst_cnt == 2)
-                return 2;
-            else
-                return 1;
-        default:
-            printk(KERN_ERR "%s: unknown OPC 0x%02x\n",
-                __func__, dma_hw_desc->opc);
-            BUG();
-        }
-    case PPC440SPE_XOR_ID:
-        /* Always only 1 destination */
-        return 1;
-    default:
-        BUG();
-    }
-    return 0;
-}
-
-/**
  * ppc440spe_desc_get_link - get the address of the descriptor that
  * follows this one
  */
···
 }
 }
 
-static void ppc440spe_adma_unmap(struct ppc440spe_adma_chan *chan,
-                 struct ppc440spe_adma_desc_slot *desc)
-{
-    u32 src_cnt, dst_cnt;
-    dma_addr_t addr;
-
-    /*
-     * get the number of sources & destination
-     * included in this descriptor and unmap
-     * them all
-     */
-    src_cnt = ppc440spe_desc_get_src_num(desc, chan);
-    dst_cnt = ppc440spe_desc_get_dst_num(desc, chan);
-
-    /* unmap destinations */
-    if (!(desc->async_tx.flags & DMA_COMPL_SKIP_DEST_UNMAP)) {
-        while (dst_cnt--) {
-            addr = ppc440spe_desc_get_dest_addr(
-                desc, chan, dst_cnt);
-            dma_unmap_page(chan->device->dev,
-                    addr, desc->unmap_len,
-                    DMA_FROM_DEVICE);
-        }
-    }
-
-    /* unmap sources */
-    if (!(desc->async_tx.flags & DMA_COMPL_SKIP_SRC_UNMAP)) {
-        while (src_cnt--) {
-            addr = ppc440spe_desc_get_src_addr(
-                desc, chan, src_cnt);
-            dma_unmap_page(chan->device->dev,
-                    addr, desc->unmap_len,
-                    DMA_TO_DEVICE);
-        }
-    }
-}
-
 /**
  * ppc440spe_adma_run_tx_complete_actions - call functions to be called
  * upon completion
···
             desc->async_tx.callback(
                 desc->async_tx.callback_param);
 
-        /* unmap dma addresses
-         * (unmap_single vs unmap_page?)
-         *
-         * actually, ppc's dma_unmap_page() functions are empty, so
-         * the following code is just for the sake of completeness
-         */
-        if (chan && chan->needs_unmap && desc->group_head &&
-            desc->unmap_len) {
-            struct ppc440spe_adma_desc_slot *unmap =
-                desc->group_head;
-            /* assume 1 slot per op always */
-            u32 slot_count = unmap->slot_cnt;
-
-            /* Run through the group list and unmap addresses */
-            for (i = 0; i < slot_count; i++) {
-                BUG_ON(!unmap);
-                ppc440spe_adma_unmap(chan, unmap);
-                unmap = unmap->hw_next;
-            }
-        }
+        dma_descriptor_unmap(&desc->async_tx);
     }
 
     /* run dependent operations */
···
 
     ppc440spe_chan = to_ppc440spe_adma_chan(chan);
     ret = dma_cookie_status(chan, cookie, txstate);
-    if (ret == DMA_SUCCESS)
+    if (ret == DMA_COMPLETE)
         return ret;
 
     ppc440spe_adma_slot_cleanup(ppc440spe_chan);
drivers/dma/sa11x0-dma.c (+1 -1)

···
     enum dma_status ret;
 
     ret = dma_cookie_status(&c->vc.chan, cookie, state);
-    if (ret == DMA_SUCCESS)
+    if (ret == DMA_COMPLETE)
         return ret;
 
     if (!state)
drivers/dma/sh/shdma-base.c (+1 -1)

···
      * If we don't find cookie on the queue, it has been aborted and we have
      * to report error
      */
-    if (status != DMA_SUCCESS) {
+    if (status != DMA_COMPLETE) {
         struct shdma_desc *sdesc;
         status = DMA_ERROR;
         list_for_each_entry(sdesc, &schan->ld_queue, node)
drivers/dma/sh/shdmac.c (+2 -2)

···
 static int sh_dmae_probe(struct platform_device *pdev)
 {
     const struct sh_dmae_pdata *pdata;
-    unsigned long irqflags = IRQF_DISABLED,
+    unsigned long irqflags = 0,
         chan_flag[SH_DMAE_MAX_CHANNELS] = {};
     int errirq, chan_irq[SH_DMAE_MAX_CHANNELS];
     int err, i, irq_cnt = 0, irqres = 0, irq_cap = 0;
···
                     IORESOURCE_IRQ_SHAREABLE)
                 chan_flag[irq_cnt] = IRQF_SHARED;
             else
-                chan_flag[irq_cnt] = IRQF_DISABLED;
+                chan_flag[irq_cnt] = 0;
             dev_dbg(&pdev->dev,
                 "Found IRQ %d for channel %d\n",
                 i, irq_cnt);
drivers/dma/ste_dma40.c (+4 -3)

···
 #include <linux/platform_device.h>
 #include <linux/clk.h>
 #include <linux/delay.h>
+#include <linux/log2.h>
 #include <linux/pm.h>
 #include <linux/pm_runtime.h>
 #include <linux/err.h>
···
     }
 
     ret = dma_cookie_status(chan, cookie, txstate);
-    if (ret != DMA_SUCCESS)
+    if (ret != DMA_COMPLETE)
         dma_set_residue(txstate, stedma40_residue(chan));
 
     if (d40_is_paused(d40c))
···
         src_addr_width >  DMA_SLAVE_BUSWIDTH_8_BYTES   ||
         dst_addr_width <= DMA_SLAVE_BUSWIDTH_UNDEFINED ||
         dst_addr_width >  DMA_SLAVE_BUSWIDTH_8_BYTES   ||
-        ((src_addr_width > 1) && (src_addr_width & 1)) ||
-        ((dst_addr_width > 1) && (dst_addr_width & 1)))
+        !is_power_of_2(src_addr_width) ||
+        !is_power_of_2(dst_addr_width))
         return -EINVAL;
 
     cfg->src_info.data_width = src_addr_width;
drivers/dma/tegra20-apb-dma.c (+3 -3)

···
 
     list_del(&sgreq->node);
     if (sgreq->last_sg) {
-        dma_desc->dma_status = DMA_SUCCESS;
+        dma_desc->dma_status = DMA_COMPLETE;
         dma_cookie_complete(&dma_desc->txd);
         if (!dma_desc->cb_count)
             list_add_tail(&dma_desc->cb_node, &tdc->cb_desc);
···
     unsigned int residual;
 
     ret = dma_cookie_status(dc, cookie, txstate);
-    if (ret == DMA_SUCCESS)
+    if (ret == DMA_COMPLETE)
         return ret;
 
     spin_lock_irqsave(&tdc->lock, flags);
···
     return &dma_desc->txd;
 }
 
-struct dma_async_tx_descriptor *tegra_dma_prep_dma_cyclic(
+static struct dma_async_tx_descriptor *tegra_dma_prep_dma_cyclic(
     struct dma_chan *dc, dma_addr_t buf_addr, size_t buf_len,
     size_t period_len, enum dma_transfer_direction direction,
     unsigned long flags, void *context)
drivers/dma/timb_dma.c (+1 -36)

···
     return done;
 }
 
-static void __td_unmap_desc(struct timb_dma_chan *td_chan, const u8 *dma_desc,
-    bool single)
-{
-    dma_addr_t addr;
-    int len;
-
-    addr = (dma_desc[7] << 24) | (dma_desc[6] << 16) | (dma_desc[5] << 8) |
-        dma_desc[4];
-
-    len = (dma_desc[3] << 8) | dma_desc[2];
-
-    if (single)
-        dma_unmap_single(chan2dev(&td_chan->chan), addr, len,
-            DMA_TO_DEVICE);
-    else
-        dma_unmap_page(chan2dev(&td_chan->chan), addr, len,
-            DMA_TO_DEVICE);
-}
-
-static void __td_unmap_descs(struct timb_dma_desc *td_desc, bool single)
-{
-    struct timb_dma_chan *td_chan = container_of(td_desc->txd.chan,
-        struct timb_dma_chan, chan);
-    u8 *descs;
-
-    for (descs = td_desc->desc_list; ; descs += TIMB_DMA_DESC_SIZE) {
-        __td_unmap_desc(td_chan, descs, single);
-        if (descs[0] & 0x02)
-            break;
-    }
-}
-
 static int td_fill_desc(struct timb_dma_chan *td_chan, u8 *dma_desc,
     struct scatterlist *sg, bool last)
 {
···
 
     list_move(&td_desc->desc_node, &td_chan->free_list);
 
-    if (!(txd->flags & DMA_COMPL_SKIP_SRC_UNMAP))
-        __td_unmap_descs(td_desc,
-            txd->flags & DMA_COMPL_SRC_UNMAP_SINGLE);
-
+    dma_descriptor_unmap(txd);
     /*
      * The API requires that no submissions are done from a
      * callback, so we don't need to drop the lock here
drivers/dma/txx9dmac.c (+3 -26)

···
     list_splice_init(&desc->tx_list, &dc->free_list);
     list_move(&desc->desc_node, &dc->free_list);
 
-    if (!ds) {
-        dma_addr_t dmaaddr;
-        if (!(txd->flags & DMA_COMPL_SKIP_DEST_UNMAP)) {
-            dmaaddr = is_dmac64(dc) ?
-                desc->hwdesc.DAR : desc->hwdesc32.DAR;
-            if (txd->flags & DMA_COMPL_DEST_UNMAP_SINGLE)
-                dma_unmap_single(chan2parent(&dc->chan),
-                    dmaaddr, desc->len, DMA_FROM_DEVICE);
-            else
-                dma_unmap_page(chan2parent(&dc->chan),
-                    dmaaddr, desc->len, DMA_FROM_DEVICE);
-        }
-        if (!(txd->flags & DMA_COMPL_SKIP_SRC_UNMAP)) {
-            dmaaddr = is_dmac64(dc) ?
-                desc->hwdesc.SAR : desc->hwdesc32.SAR;
-            if (txd->flags & DMA_COMPL_SRC_UNMAP_SINGLE)
-                dma_unmap_single(chan2parent(&dc->chan),
-                    dmaaddr, desc->len, DMA_TO_DEVICE);
-            else
-                dma_unmap_page(chan2parent(&dc->chan),
-                    dmaaddr, desc->len, DMA_TO_DEVICE);
-        }
-    }
-
+    dma_descriptor_unmap(txd);
     /*
      * The API requires that no submissions are done from a
      * callback, so we don't need to drop the lock here
···
     enum dma_status ret;
 
     ret = dma_cookie_status(chan, cookie, txstate);
-    if (ret == DMA_SUCCESS)
-        return DMA_SUCCESS;
+    if (ret == DMA_COMPLETE)
+        return DMA_COMPLETE;
 
     spin_lock_bh(&dc->lock);
     txx9dmac_scan_descriptors(dc);
+1 -2
drivers/media/platform/m2m-deinterlace.c
··· 341 341 ctx->xt->dir = DMA_MEM_TO_MEM; 342 342 ctx->xt->src_sgl = false; 343 343 ctx->xt->dst_sgl = true; 344 - flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT | 345 - DMA_COMPL_SKIP_DEST_UNMAP | DMA_COMPL_SKIP_SRC_UNMAP; 344 + flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT; 346 345 347 346 tx = dmadev->device_prep_interleaved_dma(chan, ctx->xt, flags); 348 347 if (tx == NULL) {
drivers/media/platform/timblogiw.c (+1 -1)

···
 
     desc = dmaengine_prep_slave_sg(fh->chan,
         buf->sg, sg_elems, DMA_DEV_TO_MEM,
-        DMA_PREP_INTERRUPT | DMA_COMPL_SKIP_SRC_UNMAP);
+        DMA_PREP_INTERRUPT);
     if (!desc) {
         spin_lock_irq(&fh->queue_lock);
         list_del_init(&vb->queue);
drivers/misc/carma/carma-fpga.c (+1 -2)

···
     struct dma_async_tx_descriptor *tx;
     dma_cookie_t cookie;
     dma_addr_t dst, src;
-    unsigned long dma_flags = DMA_COMPL_SKIP_DEST_UNMAP |
-                  DMA_COMPL_SKIP_SRC_UNMAP;
+    unsigned long dma_flags = 0;
 
     dst_sg = buf->vb.sglist;
     dst_nents = buf->vb.sglen;
drivers/mtd/nand/atmel_nand.c (+1 -2)

···
 
     dma_dev = host->dma_chan->device;
 
-    flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT | DMA_COMPL_SKIP_SRC_UNMAP |
-        DMA_COMPL_SKIP_DEST_UNMAP;
+    flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT;
 
     phys_addr = dma_map_single(dma_dev->dev, p, len, dir);
     if (dma_mapping_error(dma_dev->dev, phys_addr)) {
drivers/mtd/nand/fsmc_nand.c (-2)

···
     dma_dev = chan->device;
     dma_addr = dma_map_single(dma_dev->dev, buffer, len, direction);
 
-    flags |= DMA_COMPL_SKIP_SRC_UNMAP | DMA_COMPL_SKIP_DEST_UNMAP;
-
     if (direction == DMA_TO_DEVICE) {
         dma_src = dma_addr;
         dma_dst = host->data_pa;
drivers/net/ethernet/micrel/ks8842.c (+2 -4)

···
         sg_dma_len(&ctl->sg) += 4 - sg_dma_len(&ctl->sg) % 4;
 
     ctl->adesc = dmaengine_prep_slave_sg(ctl->chan,
-        &ctl->sg, 1, DMA_MEM_TO_DEV,
-        DMA_PREP_INTERRUPT | DMA_COMPL_SKIP_SRC_UNMAP);
+        &ctl->sg, 1, DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT);
     if (!ctl->adesc)
         return NETDEV_TX_BUSY;
 
···
     sg_dma_len(sg) = DMA_BUFFER_SIZE;
 
     ctl->adesc = dmaengine_prep_slave_sg(ctl->chan,
-        sg, 1, DMA_DEV_TO_MEM,
-        DMA_PREP_INTERRUPT | DMA_COMPL_SKIP_SRC_UNMAP);
+        sg, 1, DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT);
 
     if (!ctl->adesc)
         goto out;
drivers/ntb/ntb_transport.c (+56 -30)

···
     struct dma_chan *chan = qp->dma_chan;
     struct dma_device *device;
     size_t pay_off, buff_off;
-    dma_addr_t src, dest;
+    struct dmaengine_unmap_data *unmap;
     dma_cookie_t cookie;
     void *buf = entry->buf;
-    unsigned long flags;
 
     entry->len = len;
···
         goto err;
 
     if (len < copy_bytes)
-        goto err1;
+        goto err_wait;
 
     device = chan->device;
     pay_off = (size_t) offset & ~PAGE_MASK;
     buff_off = (size_t) buf & ~PAGE_MASK;
 
     if (!is_dma_copy_aligned(device, pay_off, buff_off, len))
-        goto err1;
+        goto err_wait;
 
-    dest = dma_map_single(device->dev, buf, len, DMA_FROM_DEVICE);
-    if (dma_mapping_error(device->dev, dest))
-        goto err1;
+    unmap = dmaengine_get_unmap_data(device->dev, 2, GFP_NOWAIT);
+    if (!unmap)
+        goto err_wait;
 
-    src = dma_map_single(device->dev, offset, len, DMA_TO_DEVICE);
-    if (dma_mapping_error(device->dev, src))
-        goto err2;
+    unmap->len = len;
+    unmap->addr[0] = dma_map_page(device->dev, virt_to_page(offset),
+                      pay_off, len, DMA_TO_DEVICE);
+    if (dma_mapping_error(device->dev, unmap->addr[0]))
+        goto err_get_unmap;
 
-    flags = DMA_COMPL_DEST_UNMAP_SINGLE | DMA_COMPL_SRC_UNMAP_SINGLE |
-        DMA_PREP_INTERRUPT;
-    txd = device->device_prep_dma_memcpy(chan, dest, src, len, flags);
+    unmap->to_cnt = 1;
+
+    unmap->addr[1] = dma_map_page(device->dev, virt_to_page(buf),
+                      buff_off, len, DMA_FROM_DEVICE);
+    if (dma_mapping_error(device->dev, unmap->addr[1]))
+        goto err_get_unmap;
+
+    unmap->from_cnt = 1;
+
+    txd = device->device_prep_dma_memcpy(chan, unmap->addr[1],
+                         unmap->addr[0], len,
+                         DMA_PREP_INTERRUPT);
     if (!txd)
-        goto err3;
+        goto err_get_unmap;
 
     txd->callback = ntb_rx_copy_callback;
     txd->callback_param = entry;
+    dma_set_unmap(txd, unmap);
 
     cookie = dmaengine_submit(txd);
     if (dma_submit_error(cookie))
-        goto err3;
+        goto err_set_unmap;
+
+    dmaengine_unmap_put(unmap);
 
     qp->last_cookie = cookie;
···
 
     return;
 
-err3:
-    dma_unmap_single(device->dev, src, len, DMA_TO_DEVICE);
-err2:
-    dma_unmap_single(device->dev, dest, len, DMA_FROM_DEVICE);
-err1:
+err_set_unmap:
+    dmaengine_unmap_put(unmap);
+err_get_unmap:
+    dmaengine_unmap_put(unmap);
+err_wait:
     /* If the callbacks come out of order, the writing of the index to the
      * last completed will be out of order.  This may result in the
      * receive stalling forever.
···
     struct dma_chan *chan = qp->dma_chan;
     struct dma_device *device;
     size_t dest_off, buff_off;
-    dma_addr_t src, dest;
+    struct dmaengine_unmap_data *unmap;
+    dma_addr_t dest;
     dma_cookie_t cookie;
     void __iomem *offset;
     size_t len = entry->len;
     void *buf = entry->buf;
-    unsigned long flags;
 
     offset = qp->tx_mw + qp->tx_max_frame * qp->tx_index;
     hdr = offset + qp->tx_max_frame - sizeof(struct ntb_payload_header);
···
     if (!is_dma_copy_aligned(device, buff_off, dest_off, len))
         goto err;
 
-    src = dma_map_single(device->dev, buf, len, DMA_TO_DEVICE);
-    if (dma_mapping_error(device->dev, src))
+    unmap = dmaengine_get_unmap_data(device->dev, 1, GFP_NOWAIT);
+    if (!unmap)
         goto err;
 
-    flags = DMA_COMPL_SRC_UNMAP_SINGLE | DMA_PREP_INTERRUPT;
-    txd = device->device_prep_dma_memcpy(chan, dest, src, len, flags);
+    unmap->len = len;
+    unmap->addr[0] = dma_map_page(device->dev, virt_to_page(buf),
+                      buff_off, len, DMA_TO_DEVICE);
+    if (dma_mapping_error(device->dev, unmap->addr[0]))
+        goto err_get_unmap;
+
+    unmap->to_cnt = 1;
+
+    txd = device->device_prep_dma_memcpy(chan, dest, unmap->addr[0], len,
+                         DMA_PREP_INTERRUPT);
     if (!txd)
-        goto err1;
+        goto err_get_unmap;
 
     txd->callback = ntb_tx_copy_callback;
     txd->callback_param = entry;
+    dma_set_unmap(txd, unmap);
 
     cookie = dmaengine_submit(txd);
     if (dma_submit_error(cookie))
-        goto err1;
+        goto err_set_unmap;
+
+    dmaengine_unmap_put(unmap);
 
     dma_async_issue_pending(chan);
     qp->tx_async++;
 
     return;
-err1:
-    dma_unmap_single(device->dev, src, len, DMA_TO_DEVICE);
+err_set_unmap:
+    dmaengine_unmap_put(unmap);
+err_get_unmap:
+    dmaengine_unmap_put(unmap);
 err:
     ntb_memcpy_tx(entry, offset);
     qp->tx_memcpy++;
drivers/spi/spi-dw-mid.c (+2 -2)

···
                 &dws->tx_sgl,
                 1,
                 DMA_MEM_TO_DEV,
-                DMA_PREP_INTERRUPT | DMA_COMPL_SKIP_DEST_UNMAP);
+                DMA_PREP_INTERRUPT);
     txdesc->callback = dw_spi_dma_done;
     txdesc->callback_param = dws;
 
···
                 &dws->rx_sgl,
                 1,
                 DMA_DEV_TO_MEM,
-                DMA_PREP_INTERRUPT | DMA_COMPL_SKIP_DEST_UNMAP);
+                DMA_PREP_INTERRUPT);
     rxdesc->callback = dw_spi_dma_done;
     rxdesc->callback_param = dws;
 
drivers/tty/serial/sh-sci.c (+1 -1)

···
     desc = s->desc_rx[new];
 
     if (dma_async_is_tx_complete(s->chan_rx, s->active_rx, NULL, NULL) !=
-        DMA_SUCCESS) {
+        DMA_COMPLETE) {
         /* Handle incomplete DMA receive */
         struct dma_chan *chan = s->chan_rx;
         struct shdma_desc *sh_desc = container_of(desc,
include/linux/dmaengine.h (+56 -20)

···
 
 /**
  * enum dma_status - DMA transaction status
- * @DMA_SUCCESS: transaction completed successfully
+ * @DMA_COMPLETE: transaction completed
  * @DMA_IN_PROGRESS: transaction not yet processed
  * @DMA_PAUSED: transaction is paused
  * @DMA_ERROR: transaction failed
  */
 enum dma_status {
-    DMA_SUCCESS,
+    DMA_COMPLETE,
     DMA_IN_PROGRESS,
     DMA_PAUSED,
     DMA_ERROR,
···
  * @DMA_CTRL_ACK - if clear, the descriptor cannot be reused until the client
  *  acknowledges receipt, i.e. has has a chance to establish any dependency
  *  chains
- * @DMA_COMPL_SKIP_SRC_UNMAP - set to disable dma-unmapping the source buffer(s)
- * @DMA_COMPL_SKIP_DEST_UNMAP - set to disable dma-unmapping the destination(s)
- * @DMA_COMPL_SRC_UNMAP_SINGLE - set to do the source dma-unmapping as single
- *  (if not set, do the source dma-unmapping as page)
- * @DMA_COMPL_DEST_UNMAP_SINGLE - set to do the destination dma-unmapping as single
- *  (if not set, do the destination dma-unmapping as page)
  * @DMA_PREP_PQ_DISABLE_P - prevent generation of P while generating Q
  * @DMA_PREP_PQ_DISABLE_Q - prevent generation of Q while generating P
  * @DMA_PREP_CONTINUE - indicate to a driver that it is reusing buffers as
···
 enum dma_ctrl_flags {
     DMA_PREP_INTERRUPT = (1 << 0),
     DMA_CTRL_ACK = (1 << 1),
-    DMA_COMPL_SKIP_SRC_UNMAP = (1 << 2),
-    DMA_COMPL_SKIP_DEST_UNMAP = (1 << 3),
-    DMA_COMPL_SRC_UNMAP_SINGLE = (1 << 4),
-    DMA_COMPL_DEST_UNMAP_SINGLE = (1 << 5),
-    DMA_PREP_PQ_DISABLE_P = (1 << 6),
-    DMA_PREP_PQ_DISABLE_Q = (1 << 7),
-    DMA_PREP_CONTINUE = (1 << 8),
-    DMA_PREP_FENCE = (1 << 9),
+    DMA_PREP_PQ_DISABLE_P = (1 << 2),
+    DMA_PREP_PQ_DISABLE_Q = (1 << 3),
+    DMA_PREP_CONTINUE = (1 << 4),
+    DMA_PREP_FENCE = (1 << 5),
 };
 
 /**
···
 typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);
 
 typedef void (*dma_async_tx_callback)(void *dma_async_param);
+
+struct dmaengine_unmap_data {
+    u8 to_cnt;
+    u8 from_cnt;
+    u8 bidi_cnt;
+    struct device *dev;
+    struct kref kref;
+    size_t len;
+    dma_addr_t addr[0];
+};
+
 /**
  * struct dma_async_tx_descriptor - async transaction descriptor
  * ---dma generic offload fields---
···
     dma_cookie_t (*tx_submit)(struct dma_async_tx_descriptor *tx);
     dma_async_tx_callback callback;
     void *callback_param;
+    struct dmaengine_unmap_data *unmap;
 #ifdef CONFIG_ASYNC_TX_ENABLE_CHANNEL_SWITCH
     struct dma_async_tx_descriptor *next;
     struct dma_async_tx_descriptor *parent;
     spinlock_t lock;
 #endif
 };
+
+#ifdef CONFIG_DMA_ENGINE
+static inline void dma_set_unmap(struct dma_async_tx_descriptor *tx,
+                 struct dmaengine_unmap_data *unmap)
+{
+    kref_get(&unmap->kref);
+    tx->unmap = unmap;
+}
+
+struct dmaengine_unmap_data *
+dmaengine_get_unmap_data(struct device *dev, int nr, gfp_t flags);
+void dmaengine_unmap_put(struct dmaengine_unmap_data *unmap);
+#else
+static inline void dma_set_unmap(struct dma_async_tx_descriptor *tx,
+                 struct dmaengine_unmap_data *unmap)
+{
+}
+static inline struct dmaengine_unmap_data *
+dmaengine_get_unmap_data(struct device *dev, int nr, gfp_t flags)
+{
+    return NULL;
+}
+static inline void dmaengine_unmap_put(struct dmaengine_unmap_data *unmap)
+{
+}
+#endif
+
+static inline void dma_descriptor_unmap(struct dma_async_tx_descriptor *tx)
+{
+    if (tx->unmap) {
+        dmaengine_unmap_put(tx->unmap);
+        tx->unmap = NULL;
+    }
+}
 
 #ifndef CONFIG_ASYNC_TX_ENABLE_CHANNEL_SWITCH
 static inline void txd_lock(struct dma_async_tx_descriptor *txd)
···
 {
     if (last_complete <= last_used) {
         if ((cookie <= last_complete) || (cookie > last_used))
-            return DMA_SUCCESS;
+            return DMA_COMPLETE;
     } else {
         if ((cookie <= last_complete) && (cookie > last_used))
-            return DMA_SUCCESS;
+            return DMA_COMPLETE;
     }
     return DMA_IN_PROGRESS;
 }
···
 }
 static inline enum dma_status dma_sync_wait(struct dma_chan *chan, dma_cookie_t cookie)
 {
-    return DMA_SUCCESS;
+    return DMA_COMPLETE;
 }
 static inline enum dma_status dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx)
 {
-    return DMA_SUCCESS;
+    return DMA_COMPLETE;
 }
 static inline void dma_issue_pending_all(void)
 {
+4 -4
include/linux/platform_data/edma.h
··· 67 67 #define ITCCHEN BIT(23) 68 68 69 69 /*ch_status paramater of callback function possible values*/ 70 - #define DMA_COMPLETE 1 71 - #define DMA_CC_ERROR 2 72 - #define DMA_TC1_ERROR 3 73 - #define DMA_TC2_ERROR 4 70 + #define EDMA_DMA_COMPLETE 1 71 + #define EDMA_DMA_CC_ERROR 2 72 + #define EDMA_DMA_TC1_ERROR 3 73 + #define EDMA_DMA_TC2_ERROR 4 74 74 75 75 enum address_mode { 76 76 INCR = 0,
net/ipv4/tcp.c (+2 -2)

···
         do {
             if (dma_async_is_tx_complete(tp->ucopy.dma_chan,
                              last_issued, &done,
-                             &used) == DMA_SUCCESS) {
+                             &used) == DMA_COMPLETE) {
                 /* Safe to free early-copied skbs now */
                 __skb_queue_purge(&sk->sk_async_wait_queue);
                 break;
···
                 struct sk_buff *skb;
                 while ((skb = skb_peek(&sk->sk_async_wait_queue)) &&
                        (dma_async_is_complete(skb->dma_cookie, done,
                               used) == DMA_COMPLETE)) {
                     __skb_dequeue(&sk->sk_async_wait_queue);
                     kfree_skb(skb);
                 }
sound/soc/davinci/davinci-pcm.c (+1 -1)

···
     print_buf_info(prtd->ram_channel, "i ram_channel");
     pr_debug("davinci_pcm: link=%d, status=0x%x\n", link, ch_status);
 
-    if (unlikely(ch_status != DMA_COMPLETE))
+    if (unlikely(ch_status != EDMA_DMA_COMPLETE))
         return;
 
     if (snd_pcm_running(substream)) {