Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx

* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx: (33 commits)
x86: poll waiting for I/OAT DMA channel status
maintainers: add dma engine tree details
dmaengine: add TODO items for future work on dma drivers
dmaengine: Add API documentation for slave dma usage
dmaengine/dw_dmac: Update maintainer-ship
dmaengine: move link order
dmaengine/dw_dmac: implement pause and resume in dwc_control
dmaengine/dw_dmac: Replace spin_lock* with irqsave variants and enable submission from callback
dmaengine/dw_dmac: Divide one sg to many desc, if sg len is greater than DWC_MAX_COUNT
dmaengine/dw_dmac: set residue as total len in dwc_tx_status if status is !DMA_SUCCESS
dmaengine/dw_dmac: don't call callback routine in case dmaengine_terminate_all() is called
dmaengine: at_hdmac: pause: no need to wait for FIFO empty
pch_dma: modify pci device table definition
pch_dma: Support new device ML7223 IOH
pch_dma: Support I2S for ML7213 IOH
pch_dma: Fix DMA setting issue
pch_dma: modify for checkpatch
pch_dma: fix dma direction issue for ML7213 IOH video-in
dmaengine: at_hdmac: use descriptor chaining help function
dmaengine: at_hdmac: implement pause and resume in atc_control
...

Fix up trivial conflict in drivers/dma/dw_dmac.c

+739 -249
+96 -1
Documentation/dmaengine.txt
···
- See Documentation/crypto/async-tx-api.txt
+ 			DMA Engine API Guide
+ 			====================
+
+ 		 Vinod Koul <vinod dot koul at intel.com>
+
+ NOTE: For DMA Engine usage in async_tx please see:
+ 	Documentation/crypto/async-tx-api.txt
+
+
+ Below is a guide for device driver writers on how to use the Slave-DMA API of
+ the DMA Engine. This is applicable only for slave DMA usage.
+
+ Slave DMA usage consists of the following steps:
+ 1. Allocate a DMA slave channel
+ 2. Set slave and controller specific parameters
+ 3. Get a descriptor for the transaction
+ 4. Submit the transaction and wait for callback notification
+
+ 1. Allocate a DMA slave channel
+ Channel allocation is slightly different in the slave DMA context: client
+ drivers typically need a channel from a particular DMA controller only, and in
+ some cases even a specific channel is desired. To request a channel, the
+ dma_request_channel() API is used.
+
+ Interface:
+ struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
+ 		dma_filter_fn filter_fn,
+ 		void *filter_param);
+ where dma_filter_fn is defined as:
+ typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);
+
+ When the optional 'filter_fn' parameter is set to NULL, dma_request_channel
+ simply returns the first channel that satisfies the capability mask. Otherwise,
+ when the mask parameter is insufficient for specifying the necessary channel,
+ the filter_fn routine can be used to select from the available channels in the
+ system. The filter_fn routine is called once for each free channel in the
+ system. Upon seeing a suitable channel, filter_fn returns true, which flags
+ that channel to be the return value from dma_request_channel. A channel
+ allocated via this interface is exclusive to the caller until
+ dma_release_channel() is called.
+
+ 2. Set slave and controller specific parameters
+ The next step is always to pass some specific information to the DMA driver.
+ Most of the generic information which a slave DMA can use is in struct
+ dma_slave_config. It allows the clients to specify DMA direction, DMA
+ addresses, bus widths, DMA burst lengths etc. If a DMA controller has more
+ parameters to be sent, it should embed struct dma_slave_config in its
+ controller specific structure. That gives flexibility to the client to pass
+ more parameters, if required.
+
+ Interface:
+ int dmaengine_slave_config(struct dma_chan *chan,
+ 		struct dma_slave_config *config)
+
+ 3. Get a descriptor for the transaction
+ For slave usage the various modes of slave transfers supported by the
+ DMA-engine are:
+ slave_sg	- DMA a list of scatter gather buffers from/to a peripheral
+ dma_cyclic	- Perform a cyclic DMA operation from/to a peripheral till the
+ 		  operation is explicitly stopped.
+ A non-NULL return from these transfer APIs represents a "descriptor" for the
+ given transaction.
+
+ Interface:
+ struct dma_async_tx_descriptor *(*chan->device->device_prep_dma_sg)(
+ 		struct dma_chan *chan,
+ 		struct scatterlist *dst_sg, unsigned int dst_nents,
+ 		struct scatterlist *src_sg, unsigned int src_nents,
+ 		unsigned long flags);
+ struct dma_async_tx_descriptor *(*chan->device->device_prep_dma_cyclic)(
+ 		struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
+ 		size_t period_len, enum dma_data_direction direction);
+
+ 4. Submit the transaction and wait for callback notification
+ To schedule the transaction for execution by the DMA device, the "descriptor"
+ returned in (3) above needs to be submitted.
+ To tell the dma driver that a transaction is ready to be serviced, the
+ descriptor->submit() callback needs to be invoked. This chains the descriptor
+ to the pending queue.
+ The transactions in the pending queue can be activated by calling the
+ issue_pending API. If the channel is idle then the first transaction in the
+ queue is started and subsequent ones are queued up.
+ On completion of the DMA operation the next in queue is submitted and a
+ tasklet is triggered. The tasklet then calls the client driver completion
+ callback routine for notification, if set.
+
+ Interface:
+ void dma_async_issue_pending(struct dma_chan *chan);
+
+ ==============================================================================
+
+ Additional usage notes for dma driver writers
+ 1/ Although the DMA engine framework specifies that completion callback
+ routines cannot submit any new operations, for slave DMA the subsequent
+ transaction may not be available for submission prior to the callback routine
+ being called. This requirement is relaxed for DMA-slave devices, but they
+ should take care to drop the spin-lock they might be holding before calling
+ the callback routine.
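Put together, the four steps above might look roughly like this in a client driver. This is a sketch only, not code from the commit: `my_filter`, `my_start_rx`, `my_rx_complete`, and the `struct my_dev` fields are hypothetical names; only the dmaengine calls themselves come from the API described in the guide.

```
/* Sketch of a slave-DMA client following steps 1-4 above. */
static bool my_filter(struct dma_chan *chan, void *param)
{
    /* hypothetical: match the one channel our platform data names */
    return chan->private == param;
}

static int my_start_rx(struct my_dev *md)    /* hypothetical device struct */
{
    dma_cap_mask_t mask;
    struct dma_chan *chan;
    struct dma_async_tx_descriptor *desc;

    /* 1. Allocate a DMA slave channel */
    dma_cap_zero(mask);
    dma_cap_set(DMA_SLAVE, mask);
    chan = dma_request_channel(mask, my_filter, md->slave_data);
    if (!chan)
        return -ENODEV;

    /* 2. Set slave and controller specific parameters */
    dmaengine_slave_config(chan, &md->slave_config);

    /* 3. Get a descriptor for the transaction */
    desc = chan->device->device_prep_slave_sg(chan, md->sgl, md->sg_len,
                                              DMA_FROM_DEVICE,
                                              DMA_PREP_INTERRUPT);
    if (!desc) {
        dma_release_channel(chan);
        return -ENOMEM;
    }

    /* 4. Submit the transaction, then kick the pending queue */
    desc->callback = my_rx_complete;    /* hypothetical completion hook */
    desc->callback_param = md;
    desc->tx_submit(desc);
    dma_async_issue_pending(chan);
    return 0;
}
```

The channel remains exclusive to this client until `dma_release_channel()` is called, per step 1 of the guide.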
+9
MAINTAINERS
···
  S:	Supported
  F:	drivers/dma/
  F:	include/linux/dma*
+ T:	git git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx.git
+ T:	git git://git.infradead.org/users/vkoul/slave-dma.git (slave-dma)

  DME1737 HARDWARE MONITOR DRIVER
  M:	Juerg Haefliger <juergh@gmail.com>
···
  L:	linux-serial@vger.kernel.org
  S:	Maintained
  F:	drivers/tty/serial
+
+ SYNOPSYS DESIGNWARE DMAC DRIVER
+ M:	Viresh Kumar <viresh.kumar@st.com>
+ S:	Maintained
+ F:	include/linux/dw_dmac.h
+ F:	drivers/dma/dw_dmac_regs.h
+ F:	drivers/dma/dw_dmac.c

  TIMEKEEPING, NTP
  M:	John Stultz <johnstul@us.ibm.com>
+3 -1
drivers/Makefile
···
  # was used and do nothing if so
  obj-$(CONFIG_PNP)		+= pnp/
  obj-$(CONFIG_ARM_AMBA)	+= amba/
+ # Many drivers will want to use DMA so this has to be made available
+ # really early.
+ obj-$(CONFIG_DMA_ENGINE)	+= dma/

  obj-$(CONFIG_VIRTIO)		+= virtio/
  obj-$(CONFIG_XEN)		+= xen/
···
  obj-y				+= lguest/
  obj-$(CONFIG_CPU_FREQ)	+= cpufreq/
  obj-$(CONFIG_CPU_IDLE)	+= cpuidle/
- obj-$(CONFIG_DMA_ENGINE)	+= dma/
  obj-$(CONFIG_MMC)		+= mmc/
  obj-$(CONFIG_MEMSTICK)	+= memstick/
  obj-y				+= leds/
+7 -5
drivers/dma/Kconfig
···
  	  platform_data for a dma-pl330 device.

  config PCH_DMA
- 	tristate "Intel EG20T PCH / OKI SEMICONDUCTOR ML7213 IOH DMA support"
+ 	tristate "Intel EG20T PCH / OKI Semi IOH(ML7213/ML7223) DMA support"
  	depends on PCI && X86
  	select DMA_ENGINE
  	help
  	  Enable support for Intel EG20T PCH DMA engine.

- 	  This driver also can be used for OKI SEMICONDUCTOR ML7213 IOH(Input/
- 	  Output Hub) which is for IVI(In-Vehicle Infotainment) use.
- 	  ML7213 is companion chip for Intel Atom E6xx series.
- 	  ML7213 is completely compatible for Intel EG20T PCH.
+ 	  This driver can also be used for the OKI SEMICONDUCTOR IOH(Input/
+ 	  Output Hub) devices ML7213 and ML7223.
+ 	  The ML7213 IOH is for IVI(In-Vehicle Infotainment) use and the
+ 	  ML7223 IOH is for MP(Media Phone) use.
+ 	  ML7213/ML7223 are companion chips for the Intel Atom E6xx series
+ 	  and are fully compatible with the Intel EG20T PCH.

  config IMX_SDMA
  	tristate "i.MX SDMA support"
+14
drivers/dma/TODO
···
+ TODO for slave dma
+
+ 1. Move remaining drivers to use the new slave interface
+ 2. Remove the old slave pointer mechanism
+ 3. Make issue_pending start the transaction in the drivers below:
+ 	- mpc512x_dma
+ 	- imx-dma
+ 	- imx-sdma
+ 	- mxs-dma.c
+ 	- dw_dmac
+ 	- intel_mid_dma
+ 	- ste_dma40
+ 4. Check other subsystems for dma drivers and merge/move to dmaengine
+ 5. Remove dma_slave_config's dma direction.
+296 -90
drivers/dma/at_hdmac.c
···

  #define	ATC_DEFAULT_CFG		(ATC_FIFOCFG_HALFFIFO)
  #define	ATC_DEFAULT_CTRLA	(0)
- #define	ATC_DEFAULT_CTRLB	(ATC_SIF(0) \
- 				|ATC_DIF(1))
+ #define	ATC_DEFAULT_CTRLB	(ATC_SIF(AT_DMA_MEM_IF) \
+ 				|ATC_DIF(AT_DMA_MEM_IF))

  /*
   * Initial number of descriptors to allocate for each channel. This could
···
  }

  /**
+  * atc_desc_chain - build chain adding a descriptor
+  * @first: address of first descriptor of the chain
+  * @prev: address of previous descriptor of the chain
+  * @desc: descriptor to queue
+  *
+  * Called from prep_* functions
+  */
+ static void atc_desc_chain(struct at_desc **first, struct at_desc **prev,
+ 			   struct at_desc *desc)
+ {
+ 	if (!(*first)) {
+ 		*first = desc;
+ 	} else {
+ 		/* inform the HW lli about chaining */
+ 		(*prev)->lli.dscr = desc->txd.phys;
+ 		/* insert the link descriptor to the LD ring */
+ 		list_add_tail(&desc->desc_node,
+ 				&(*first)->tx_list);
+ 	}
+ 	*prev = desc;
+ }
+
+ /**
   * atc_assign_cookie - compute and assign new cookie
   * @atchan: channel we work on
   * @desc: descriptor to assign cookie for
···
  static void
  atc_chain_complete(struct at_dma_chan *atchan, struct at_desc *desc)
  {
- 	dma_async_tx_callback		callback;
- 	void				*param;
  	struct dma_async_tx_descriptor	*txd = &desc->txd;

  	dev_vdbg(chan2dev(&atchan->chan_common),
  		"descriptor %u complete\n", txd->cookie);

  	atchan->completed_cookie = txd->cookie;
- 	callback = txd->callback;
- 	param = txd->callback_param;

  	/* move children to free_list */
  	list_splice_init(&desc->tx_list, &atchan->free_list);
···
  		}
  	}

- 	/*
- 	 * The API requires that no submissions are done from a
- 	 * callback, so we don't need to drop the lock here
- 	 */
- 	if (callback)
- 		callback(param);
+ 	/* for cyclic transfers,
+ 	 * no need to replay callback function while stopping */
+ 	if (!test_bit(ATC_IS_CYCLIC, &atchan->status)) {
+ 		dma_async_tx_callback	callback = txd->callback;
+ 		void			*param = txd->callback_param;
+
+ 		/*
+ 		 * The API requires that no submissions are done from a
+ 		 * callback, so we don't need to drop the lock here
+ 		 */
+ 		if (callback)
+ 			callback(param);
+ 	}

  	dma_run_dependencies(txd);
  }
···
  	atc_chain_complete(atchan, bad_desc);
  }

+ /**
+  * atc_handle_cyclic - at the end of a period, run callback function
+  * @atchan: channel used for cyclic operations
+  *
+  * Called with atchan->lock held and bh disabled
+  */
+ static void atc_handle_cyclic(struct at_dma_chan *atchan)
+ {
+ 	struct at_desc			*first = atc_first_active(atchan);
+ 	struct dma_async_tx_descriptor	*txd = &first->txd;
+ 	dma_async_tx_callback		callback = txd->callback;
+ 	void				*param = txd->callback_param;
+
+ 	dev_vdbg(chan2dev(&atchan->chan_common),
+ 			"new cyclic period llp 0x%08x\n",
+ 			channel_readl(atchan, DSCR));
+
+ 	if (callback)
+ 		callback(param);
+ }

  /*--  IRQ & Tasklet  ---------------------------------------------------*/
···
  {
  	struct at_dma_chan *atchan = (struct at_dma_chan *)data;

- 	/* Channel cannot be enabled here */
- 	if (atc_chan_is_enabled(atchan)) {
- 		dev_err(chan2dev(&atchan->chan_common),
- 			"BUG: channel enabled in tasklet\n");
- 		return;
- 	}
-
  	spin_lock(&atchan->lock);
- 	if (test_and_clear_bit(0, &atchan->error_status))
+ 	if (test_and_clear_bit(ATC_IS_ERROR, &atchan->status))
  		atc_handle_error(atchan);
+ 	else if (test_bit(ATC_IS_CYCLIC, &atchan->status))
+ 		atc_handle_cyclic(atchan);
  	else
  		atc_advance_work(atchan);
···
  	for (i = 0; i < atdma->dma_common.chancnt; i++) {
  		atchan = &atdma->chan[i];
- 		if (pending & (AT_DMA_CBTC(i) | AT_DMA_ERR(i))) {
+ 		if (pending & (AT_DMA_BTC(i) | AT_DMA_ERR(i))) {
  			if (pending & AT_DMA_ERR(i)) {
  				/* Disable channel on AHB error */
- 				dma_writel(atdma, CHDR, atchan->mask);
+ 				dma_writel(atdma, CHDR,
+ 					AT_DMA_RES(i) | atchan->mask);
  				/* Give information to tasklet */
- 				set_bit(0, &atchan->error_status);
+ 				set_bit(ATC_IS_ERROR, &atchan->status);
  			}
  			tasklet_schedule(&atchan->tasklet);
  			ret = IRQ_HANDLED;
···
  	}

  	ctrla = ATC_DEFAULT_CTRLA;
- 	ctrlb = ATC_DEFAULT_CTRLB
+ 	ctrlb = ATC_DEFAULT_CTRLB | ATC_IEN
  		| ATC_SRC_ADDR_MODE_INCR
  		| ATC_DST_ADDR_MODE_INCR
  		| ATC_FC_MEM2MEM;
···
  		desc->txd.cookie = 0;

- 		if (!first) {
- 			first = desc;
- 		} else {
- 			/* inform the HW lli about chaining */
- 			prev->lli.dscr = desc->txd.phys;
- 			/* insert the link descriptor to the LD ring */
- 			list_add_tail(&desc->desc_node,
- 					&first->tx_list);
- 		}
- 		prev = desc;
+ 		atc_desc_chain(&first, &prev, desc);
  	}

  	/* First descriptor of the chain embeds additional information */
···
  	struct scatterlist	*sg;
  	size_t			total_len = 0;

- 	dev_vdbg(chan2dev(chan), "prep_slave_sg: %s f0x%lx\n",
+ 	dev_vdbg(chan2dev(chan), "prep_slave_sg (%d): %s f0x%lx\n",
+ 			sg_len,
  			direction == DMA_TO_DEVICE ? "TO DEVICE" : "FROM DEVICE",
  			flags);
···
  	reg_width = atslave->reg_width;

  	ctrla = ATC_DEFAULT_CTRLA | atslave->ctrla;
- 	ctrlb = ATC_DEFAULT_CTRLB | ATC_IEN;
+ 	ctrlb = ATC_IEN;

  	switch (direction) {
  	case DMA_TO_DEVICE:
  		ctrla |=  ATC_DST_WIDTH(reg_width);
  		ctrlb |=  ATC_DST_ADDR_MODE_FIXED
  			| ATC_SRC_ADDR_MODE_INCR
- 			| ATC_FC_MEM2PER;
+ 			| ATC_FC_MEM2PER
+ 			| ATC_SIF(AT_DMA_MEM_IF) | ATC_DIF(AT_DMA_PER_IF);
  		reg = atslave->tx_reg;
  		for_each_sg(sgl, sg, sg_len, i) {
  			struct at_desc	*desc;
···
  				| len >> mem_width;
  			desc->lli.ctrlb = ctrlb;

- 			if (!first) {
- 				first = desc;
- 			} else {
- 				/* inform the HW lli about chaining */
- 				prev->lli.dscr = desc->txd.phys;
- 				/* insert the link descriptor to the LD ring */
- 				list_add_tail(&desc->desc_node,
- 						&first->tx_list);
- 			}
- 			prev = desc;
+ 			atc_desc_chain(&first, &prev, desc);
  			total_len += len;
  		}
  		break;
···
  		ctrla |=  ATC_SRC_WIDTH(reg_width);
  		ctrlb |=  ATC_DST_ADDR_MODE_INCR
  			| ATC_SRC_ADDR_MODE_FIXED
- 			| ATC_FC_PER2MEM;
+ 			| ATC_FC_PER2MEM
+ 			| ATC_SIF(AT_DMA_PER_IF) | ATC_DIF(AT_DMA_MEM_IF);

  		reg = atslave->rx_reg;
  		for_each_sg(sgl, sg, sg_len, i) {
···
  				| len >> reg_width;
  			desc->lli.ctrlb = ctrlb;

- 			if (!first) {
- 				first = desc;
- 			} else {
- 				/* inform the HW lli about chaining */
- 				prev->lli.dscr = desc->txd.phys;
- 				/* insert the link descriptor to the LD ring */
- 				list_add_tail(&desc->desc_node,
- 						&first->tx_list);
- 			}
- 			prev = desc;
+ 			atc_desc_chain(&first, &prev, desc);
  			total_len += len;
  		}
  		break;
···
  	return NULL;
  }

+ /**
+  * atc_dma_cyclic_check_values
+  * Check for too big/unaligned periods and unaligned DMA buffer
+  */
+ static int
+ atc_dma_cyclic_check_values(unsigned int reg_width, dma_addr_t buf_addr,
+ 		size_t period_len, enum dma_data_direction direction)
+ {
+ 	if (period_len > (ATC_BTSIZE_MAX << reg_width))
+ 		goto err_out;
+ 	if (unlikely(period_len & ((1 << reg_width) - 1)))
+ 		goto err_out;
+ 	if (unlikely(buf_addr & ((1 << reg_width) - 1)))
+ 		goto err_out;
+ 	if (unlikely(!(direction & (DMA_TO_DEVICE | DMA_FROM_DEVICE))))
+ 		goto err_out;
+
+ 	return 0;
+
+ err_out:
+ 	return -EINVAL;
+ }
+
+ /**
+  * atc_dma_cyclic_fill_desc - Fill one period descriptor
+  */
+ static int
+ atc_dma_cyclic_fill_desc(struct at_dma_slave *atslave, struct at_desc *desc,
+ 		unsigned int period_index, dma_addr_t buf_addr,
+ 		size_t period_len, enum dma_data_direction direction)
+ {
+ 	u32		ctrla;
+ 	unsigned int	reg_width = atslave->reg_width;
+
+ 	/* prepare common CTRLA value */
+ 	ctrla =   ATC_DEFAULT_CTRLA | atslave->ctrla
+ 		| ATC_DST_WIDTH(reg_width)
+ 		| ATC_SRC_WIDTH(reg_width)
+ 		| period_len >> reg_width;
+
+ 	switch (direction) {
+ 	case DMA_TO_DEVICE:
+ 		desc->lli.saddr = buf_addr + (period_len * period_index);
+ 		desc->lli.daddr = atslave->tx_reg;
+ 		desc->lli.ctrla = ctrla;
+ 		desc->lli.ctrlb = ATC_DST_ADDR_MODE_FIXED
+ 				| ATC_SRC_ADDR_MODE_INCR
+ 				| ATC_FC_MEM2PER
+ 				| ATC_SIF(AT_DMA_MEM_IF)
+ 				| ATC_DIF(AT_DMA_PER_IF);
+ 		break;
+
+ 	case DMA_FROM_DEVICE:
+ 		desc->lli.saddr = atslave->rx_reg;
+ 		desc->lli.daddr = buf_addr + (period_len * period_index);
+ 		desc->lli.ctrla = ctrla;
+ 		desc->lli.ctrlb = ATC_DST_ADDR_MODE_INCR
+ 				| ATC_SRC_ADDR_MODE_FIXED
+ 				| ATC_FC_PER2MEM
+ 				| ATC_SIF(AT_DMA_PER_IF)
+ 				| ATC_DIF(AT_DMA_MEM_IF);
+ 		break;
+
+ 	default:
+ 		return -EINVAL;
+ 	}
+
+ 	return 0;
+ }
+
+ /**
+  * atc_prep_dma_cyclic - prepare the cyclic DMA transfer
+  * @chan: the DMA channel to prepare
+  * @buf_addr: physical DMA address where the buffer starts
+  * @buf_len: total number of bytes for the entire buffer
+  * @period_len: number of bytes for each period
+  * @direction: transfer direction, to or from device
+  */
+ static struct dma_async_tx_descriptor *
+ atc_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
+ 		size_t period_len, enum dma_data_direction direction)
+ {
+ 	struct at_dma_chan	*atchan = to_at_dma_chan(chan);
+ 	struct at_dma_slave	*atslave = chan->private;
+ 	struct at_desc		*first = NULL;
+ 	struct at_desc		*prev = NULL;
+ 	unsigned long		was_cyclic;
+ 	unsigned int		periods = buf_len / period_len;
+ 	unsigned int		i;
+
+ 	dev_vdbg(chan2dev(chan), "prep_dma_cyclic: %s buf@0x%08x - %d (%d/%d)\n",
+ 			direction == DMA_TO_DEVICE ? "TO DEVICE" : "FROM DEVICE",
+ 			buf_addr,
+ 			periods, buf_len, period_len);
+
+ 	if (unlikely(!atslave || !buf_len || !period_len)) {
+ 		dev_dbg(chan2dev(chan), "prep_dma_cyclic: length is zero!\n");
+ 		return NULL;
+ 	}
+
+ 	was_cyclic = test_and_set_bit(ATC_IS_CYCLIC, &atchan->status);
+ 	if (was_cyclic) {
+ 		dev_dbg(chan2dev(chan), "prep_dma_cyclic: channel in use!\n");
+ 		return NULL;
+ 	}
+
+ 	/* Check for too big/unaligned periods and unaligned DMA buffer */
+ 	if (atc_dma_cyclic_check_values(atslave->reg_width, buf_addr,
+ 			period_len, direction))
+ 		goto err_out;
+
+ 	/* build cyclic linked list */
+ 	for (i = 0; i < periods; i++) {
+ 		struct at_desc	*desc;
+
+ 		desc = atc_desc_get(atchan);
+ 		if (!desc)
+ 			goto err_desc_get;
+
+ 		if (atc_dma_cyclic_fill_desc(atslave, desc, i, buf_addr,
+ 						period_len, direction))
+ 			goto err_desc_get;
+
+ 		atc_desc_chain(&first, &prev, desc);
+ 	}
+
+ 	/* lets make a cyclic list */
+ 	prev->lli.dscr = first->txd.phys;
+
+ 	/* First descriptor of the chain embeds additional information */
+ 	first->txd.cookie = -EBUSY;
+ 	first->len = buf_len;
+
+ 	return &first->txd;
+
+ err_desc_get:
+ 	dev_err(chan2dev(chan), "not enough descriptors available\n");
+ 	atc_desc_put(atchan, first);
+ err_out:
+ 	clear_bit(ATC_IS_CYCLIC, &atchan->status);
+ 	return NULL;
+ }
+
+
  static int atc_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
  		       unsigned long arg)
  {
  	struct at_dma_chan	*atchan = to_at_dma_chan(chan);
  	struct at_dma		*atdma = to_at_dma(chan->device);
- 	struct at_desc		*desc, *_desc;
+ 	int			chan_id = atchan->chan_common.chan_id;
+
  	LIST_HEAD(list);

- 	/* Only supports DMA_TERMINATE_ALL */
- 	if (cmd != DMA_TERMINATE_ALL)
+ 	dev_vdbg(chan2dev(chan), "atc_control (%d)\n", cmd);
+
+ 	if (cmd == DMA_PAUSE) {
+ 		spin_lock_bh(&atchan->lock);
+
+ 		dma_writel(atdma, CHER, AT_DMA_SUSP(chan_id));
+ 		set_bit(ATC_IS_PAUSED, &atchan->status);
+
+ 		spin_unlock_bh(&atchan->lock);
+ 	} else if (cmd == DMA_RESUME) {
+ 		if (!test_bit(ATC_IS_PAUSED, &atchan->status))
+ 			return 0;
+
+ 		spin_lock_bh(&atchan->lock);
+
+ 		dma_writel(atdma, CHDR, AT_DMA_RES(chan_id));
+ 		clear_bit(ATC_IS_PAUSED, &atchan->status);
+
+ 		spin_unlock_bh(&atchan->lock);
+ 	} else if (cmd == DMA_TERMINATE_ALL) {
+ 		struct at_desc	*desc, *_desc;
+ 		/*
+ 		 * This is only called when something went wrong elsewhere, so
+ 		 * we don't really care about the data. Just disable the
+ 		 * channel. We still have to poll the channel enable bit due
+ 		 * to AHB/HSB limitations.
+ 		 */
+ 		spin_lock_bh(&atchan->lock);
+
+ 		/* disabling channel: must also remove suspend state */
+ 		dma_writel(atdma, CHDR, AT_DMA_RES(chan_id) | atchan->mask);
+
+ 		/* confirm that this channel is disabled */
+ 		while (dma_readl(atdma, CHSR) & atchan->mask)
+ 			cpu_relax();
+
+ 		/* active_list entries will end up before queued entries */
+ 		list_splice_init(&atchan->queue, &list);
+ 		list_splice_init(&atchan->active_list, &list);
+
+ 		/* Flush all pending and queued descriptors */
+ 		list_for_each_entry_safe(desc, _desc, &list, desc_node)
+ 			atc_chain_complete(atchan, desc);
+
+ 		clear_bit(ATC_IS_PAUSED, &atchan->status);
+ 		/* if channel dedicated to cyclic operations, free it */
+ 		clear_bit(ATC_IS_CYCLIC, &atchan->status);
+
+ 		spin_unlock_bh(&atchan->lock);
+ 	} else {
  		return -ENXIO;
-
- 	/*
- 	 * This is only called when something went wrong elsewhere, so
- 	 * we don't really care about the data. Just disable the
- 	 * channel. We still have to poll the channel enable bit due
- 	 * to AHB/HSB limitations.
- 	 */
- 	spin_lock_bh(&atchan->lock);
-
- 	dma_writel(atdma, CHDR, atchan->mask);
-
- 	/* confirm that this channel is disabled */
- 	while (dma_readl(atdma, CHSR) & atchan->mask)
- 		cpu_relax();
-
- 	/* active_list entries will end up before queued entries */
- 	list_splice_init(&atchan->queue, &list);
- 	list_splice_init(&atchan->active_list, &list);
-
- 	/* Flush all pending and queued descriptors */
- 	list_for_each_entry_safe(desc, _desc, &list, desc_node)
- 		atc_chain_complete(atchan, desc);
-
- 	spin_unlock_bh(&atchan->lock);
+ 	}

  	return 0;
  }
···
  	spin_unlock_bh(&atchan->lock);

- 	dma_set_tx_state(txstate, last_complete, last_used, 0);
- 	dev_vdbg(chan2dev(chan), "tx_status: %d (d%d, u%d)\n",
- 		 cookie, last_complete ? last_complete : 0,
+ 	if (ret != DMA_SUCCESS)
+ 		dma_set_tx_state(txstate, last_complete, last_used,
+ 			atc_first_active(atchan)->len);
+ 	else
+ 		dma_set_tx_state(txstate, last_complete, last_used, 0);
+
+ 	if (test_bit(ATC_IS_PAUSED, &atchan->status))
+ 		ret = DMA_PAUSED;
+
+ 	dev_vdbg(chan2dev(chan), "tx_status %d: cookie = %d (d%d, u%d)\n",
+ 		 ret, cookie, last_complete ? last_complete : 0,
  		 last_used ? last_used : 0);

  	return ret;
···
  	struct at_dma_chan	*atchan = to_at_dma_chan(chan);

  	dev_vdbg(chan2dev(chan), "issue_pending\n");
+
+ 	/* Not needed for cyclic transfers */
+ 	if (test_bit(ATC_IS_CYCLIC, &atchan->status))
+ 		return;

  	spin_lock_bh(&atchan->lock);
  	if (!atc_chan_is_enabled(atchan)) {
···
  	}
  	list_splice_init(&atchan->free_list, &list);
  	atchan->descs_allocated = 0;
+ 	atchan->status = 0;

  	dev_vdbg(chan2dev(chan), "free_chan_resources: done\n");
  }
···
  	if (dma_has_cap(DMA_MEMCPY, atdma->dma_common.cap_mask))
  		atdma->dma_common.device_prep_dma_memcpy = atc_prep_dma_memcpy;

- 	if (dma_has_cap(DMA_SLAVE, atdma->dma_common.cap_mask)) {
+ 	if (dma_has_cap(DMA_SLAVE, atdma->dma_common.cap_mask))
  		atdma->dma_common.device_prep_slave_sg = atc_prep_slave_sg;
+
+ 	if (dma_has_cap(DMA_CYCLIC, atdma->dma_common.cap_mask))
+ 		atdma->dma_common.device_prep_dma_cyclic = atc_prep_dma_cyclic;
+
+ 	if (dma_has_cap(DMA_SLAVE, atdma->dma_common.cap_mask) ||
+ 	    dma_has_cap(DMA_CYCLIC, atdma->dma_common.cap_mask))
  		atdma->dma_common.device_control = atc_control;
- 	}

  	dma_writel(atdma, EN, AT_DMA_ENABLE);
+25 -5
drivers/dma/at_hdmac_regs.h
···
  /* Bitfields in CTRLB */
  #define	ATC_SIF(i)	(0x3 & (i))	/* Src tx done via AHB-Lite Interface i */
  #define	ATC_DIF(i)	((0x3 & (i)) << 4)	/* Dst tx done via AHB-Lite Interface i */
+ /* Specify AHB interfaces */
+ #define AT_DMA_MEM_IF	0 /* interface 0 as memory interface */
+ #define AT_DMA_PER_IF	1 /* interface 1 as peripheral interface */
+
  #define	ATC_SRC_PIP	(0x1 << 8)	/* Source Picture-in-Picture enabled */
  #define	ATC_DST_PIP	(0x1 << 12)	/* Destination Picture-in-Picture enabled */
  #define	ATC_SRC_DSCR_DIS	(0x1 << 16)	/* Src Descriptor fetch disable */
···
  /*--  Channels  --------------------------------------------------------*/

  /**
+  * atc_status - information bits stored in channel status flag
+  *
+  * Manipulated with atomic operations.
+  */
+ enum atc_status {
+ 	ATC_IS_ERROR = 0,
+ 	ATC_IS_PAUSED = 1,
+ 	ATC_IS_CYCLIC = 24,
+ };
+
+ /**
   * struct at_dma_chan - internal representation of an Atmel HDMAC channel
   * @chan_common: common dmaengine channel object members
   * @device: parent device
   * @ch_regs: memory mapped register base
   * @mask: channel index in a mask
-  * @error_status: transmit error status information from irq handler
+  * @status: transmit status information from irq/prep* functions
   *	to tasklet (use atomic operations)
   * @tasklet: bottom half to finish transaction work
   * @lock: serializes enqueue/dequeue operations to descriptors lists
···
  	struct at_dma		*device;
  	void __iomem		*ch_regs;
  	u8			mask;
- 	unsigned long		error_status;
+ 	unsigned long		status;
  	struct tasklet_struct	tasklet;

  	spinlock_t		lock;
···
  	struct at_dma	*atdma = to_at_dma(atchan->chan_common.device);
  	u32		ebci;

- 	/* enable interrupts on buffer chain completion & error */
- 	ebci = AT_DMA_CBTC(atchan->chan_common.chan_id)
+ 	/* enable interrupts on buffer transfer completion & error */
+ 	ebci = AT_DMA_BTC(atchan->chan_common.chan_id)
  		| AT_DMA_ERR(atchan->chan_common.chan_id);
  	if (on)
  		dma_writel(atdma, EBCIER, ebci);
···
   */
  static void set_desc_eol(struct at_desc *desc)
  {
- 	desc->lli.ctrlb |= ATC_SRC_DSCR_DIS | ATC_DST_DSCR_DIS;
+ 	u32 ctrlb = desc->lli.ctrlb;
+
+ 	ctrlb &= ~ATC_IEN;
+ 	ctrlb |= ATC_SRC_DSCR_DIS | ATC_DST_DSCR_DIS;
+
+ 	desc->lli.ctrlb = ctrlb;
  	desc->lli.dscr = 0;
  }
+1 -1
drivers/dma/coh901318.c
···
  {
  	return platform_driver_probe(&coh901318_driver, coh901318_probe);
  }
- arch_initcall(coh901318_init);
+ subsys_initcall(coh901318_init);

  void __exit coh901318_exit(void)
  {
+188 -98
drivers/dma/dw_dmac.c
···
   * AVR32 systems.)
   *
   * Copyright (C) 2007-2008 Atmel Corporation
+  * Copyright (C) 2010-2011 ST Microelectronics
   *
   * This program is free software; you can redistribute it and/or modify
   * it under the terms of the GNU General Public License version 2 as
···
  	struct dw_desc *desc, *_desc;
  	struct dw_desc *ret = NULL;
  	unsigned int i = 0;
+ 	unsigned long flags;

- 	spin_lock_bh(&dwc->lock);
+ 	spin_lock_irqsave(&dwc->lock, flags);
  	list_for_each_entry_safe(desc, _desc, &dwc->free_list, desc_node) {
  		if (async_tx_test_ack(&desc->txd)) {
  			list_del(&desc->desc_node);
···
  		dev_dbg(chan2dev(&dwc->chan), "desc %p not ACKed\n", desc);
  		i++;
  	}
- 	spin_unlock_bh(&dwc->lock);
+ 	spin_unlock_irqrestore(&dwc->lock, flags);

  	dev_vdbg(chan2dev(&dwc->chan), "scanned %u descriptors on freelist\n", i);
···
   */
  static void dwc_desc_put(struct dw_dma_chan *dwc, struct dw_desc *desc)
  {
+ 	unsigned long flags;
+
  	if (desc) {
  		struct dw_desc *child;

  		dwc_sync_desc_for_cpu(dwc, desc);

- 		spin_lock_bh(&dwc->lock);
+ 		spin_lock_irqsave(&dwc->lock, flags);
  		list_for_each_entry(child, &desc->tx_list, desc_node)
  			dev_vdbg(chan2dev(&dwc->chan),
  					"moving child desc %p to freelist\n",
···
  		list_splice_init(&desc->tx_list, &dwc->free_list);
  		dev_vdbg(chan2dev(&dwc->chan), "moving desc %p to freelist\n", desc);
  		list_add(&desc->desc_node, &dwc->free_list);
- 		spin_unlock_bh(&dwc->lock);
+ 		spin_unlock_irqrestore(&dwc->lock, flags);
  	}
  }
···
  /*----------------------------------------------------------------------*/

  static void
- dwc_descriptor_complete(struct dw_dma_chan *dwc, struct dw_desc *desc)
+ dwc_descriptor_complete(struct dw_dma_chan *dwc, struct dw_desc *desc,
+ 		bool callback_required)
  {
- 	dma_async_tx_callback		callback;
- 	void				*param;
+ 	dma_async_tx_callback		callback = NULL;
+ 	void				*param = NULL;
  	struct dma_async_tx_descriptor	*txd = &desc->txd;
  	struct dw_desc			*child;
+ 	unsigned long			flags;

  	dev_vdbg(chan2dev(&dwc->chan), "descriptor %u complete\n", txd->cookie);

+ 	spin_lock_irqsave(&dwc->lock, flags);
  	dwc->completed = txd->cookie;
- 	callback = txd->callback;
- 	param = txd->callback_param;
+ 	if (callback_required) {
+ 		callback = txd->callback;
+ 		param = txd->callback_param;
+ 	}

  	dwc_sync_desc_for_cpu(dwc, desc);
···
  		}
  	}

- 	/*
- 	 * The API requires that no submissions are done from a
- 	 * callback, so we don't need to drop the lock here
- 	 */
- 	if (callback)
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+
+ 	if (callback_required && callback)
  		callback(param);
  }
···
  {
  	struct dw_desc *desc, *_desc;
  	LIST_HEAD(list);
+ 	unsigned long flags;

+ 	spin_lock_irqsave(&dwc->lock, flags);
  	if (dma_readl(dw, CH_EN) & dwc->mask) {
  		dev_err(chan2dev(&dwc->chan),
  			"BUG: XFER bit set, but channel not idle!\n");
···
  		dwc_dostart(dwc, dwc_first_active(dwc));
  	}

+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+
  	list_for_each_entry_safe(desc, _desc, &list, desc_node)
- 		dwc_descriptor_complete(dwc, desc);
+ 		dwc_descriptor_complete(dwc, desc, true);
  }

  static void dwc_scan_descriptors(struct dw_dma *dw, struct dw_dma_chan *dwc)
···
  	struct dw_desc *desc, *_desc;
  	struct dw_desc *child;
  	u32 status_xfer;
+ 	unsigned long flags;

+ 	spin_lock_irqsave(&dwc->lock, flags);
  	/*
  	 * Clear block interrupt flag before scanning so that we don't
  	 * miss any, and read LLP before RAW_XFER to ensure it is
···
  	if (status_xfer & dwc->mask) {
  		/* Everything we've submitted is done */
  		dma_writel(dw, CLEAR.XFER, dwc->mask);
+ 		spin_unlock_irqrestore(&dwc->lock, flags);
+
  		dwc_complete_all(dw, dwc);
  		return;
  	}

- 	if (list_empty(&dwc->active_list))
+ 	if (list_empty(&dwc->active_list)) {
+ 		spin_unlock_irqrestore(&dwc->lock, flags);
  		return;
+ 	}

  	dev_vdbg(chan2dev(&dwc->chan), "scan_descriptors: llp=0x%x\n", llp);

  	list_for_each_entry_safe(desc, _desc, &dwc->active_list, desc_node) {
- 		if (desc->lli.llp == llp)
- 			/* This one is currently in progress */
+ 		/* check first descriptors addr */
+ 		if (desc->txd.phys == llp) {
+ 			spin_unlock_irqrestore(&dwc->lock, flags);
  			return;
+ 		}
+
+ 		/* check first descriptors llp */
+ 		if (desc->lli.llp == llp) {
+ 			/* This one is currently in progress */
+ 			spin_unlock_irqrestore(&dwc->lock, flags);
+ 			return;
+ 		}

  		list_for_each_entry(child, &desc->tx_list, desc_node)
- 			if (child->lli.llp == llp)
+ 			if (child->lli.llp == llp) {
  				/* Currently in progress */
+ 				spin_unlock_irqrestore(&dwc->lock, flags);
  				return;
+ 			}

  		/*
  		 * No descriptors so far seem to be in progress, i.e.
  		 * this one must be done.
  		 */
- 		dwc_descriptor_complete(dwc, desc);
+ 		spin_unlock_irqrestore(&dwc->lock, flags);
+ 		dwc_descriptor_complete(dwc, desc, true);
+ 		spin_lock_irqsave(&dwc->lock, flags);
  	}

  	dev_err(chan2dev(&dwc->chan),
···
  		list_move(dwc->queue.next, &dwc->active_list);
  		dwc_dostart(dwc, dwc_first_active(dwc));
  	}
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
  }

  static void dwc_dump_lli(struct dw_dma_chan *dwc, struct dw_lli *lli)
···
  {
  	struct dw_desc *bad_desc;
  	struct dw_desc *child;
+ 	unsigned long flags;

  	dwc_scan_descriptors(dw, dwc);
+
+ 	spin_lock_irqsave(&dwc->lock, flags);

  	/*
  	 * The descriptor currently at the head of the active list is
···
  	list_for_each_entry(child, &bad_desc->tx_list, desc_node)
  		dwc_dump_lli(dwc, &child->lli);

+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+
  	/* Pretend the descriptor completed successfully */
- 	dwc_descriptor_complete(dwc, bad_desc);
+ 	dwc_descriptor_complete(dwc, bad_desc, true);
  }

  /* --------------------- Cyclic DMA API extensions -------------------- */
···
  static void dwc_handle_cyclic(struct dw_dma *dw, struct dw_dma_chan *dwc,
  		u32 status_block, u32 status_err, u32 status_xfer)
  {
+ 	unsigned long flags;
+
  	if (status_block & dwc->mask) {
  		void (*callback)(void *param);
  		void *callback_param;
···
  		callback = dwc->cdesc->period_callback;
  		callback_param = dwc->cdesc->period_callback_param;
- 		if (callback) {
- 			spin_unlock(&dwc->lock);
+
+ 		if (callback)
  			callback(callback_param);
- 			spin_lock(&dwc->lock);
- 		}
  	}

  	/*
···
  		dev_err(chan2dev(&dwc->chan), "cyclic DMA unexpected %s "
  				"interrupt, stopping DMA transfer\n",
  				status_xfer ?
"xfer" : "error"); 433 + 434 + spin_lock_irqsave(&dwc->lock, flags); 435 + 469 436 dev_err(chan2dev(&dwc->chan), 470 437 " SAR: 0x%x DAR: 0x%x LLP: 0x%x CTL: 0x%x:%08x\n", 471 438 channel_readl(dwc, SAR), ··· 492 453 493 454 for (i = 0; i < dwc->cdesc->periods; i++) 494 455 dwc_dump_lli(dwc, &dwc->cdesc->desc[i]->lli); 456 + 457 + spin_unlock_irqrestore(&dwc->lock, flags); 495 458 } 496 459 } 497 460 ··· 517 476 518 477 for (i = 0; i < dw->dma.chancnt; i++) { 519 478 dwc = &dw->chan[i]; 520 - spin_lock(&dwc->lock); 521 479 if (test_bit(DW_DMA_IS_CYCLIC, &dwc->flags)) 522 480 dwc_handle_cyclic(dw, dwc, status_block, status_err, 523 481 status_xfer); ··· 524 484 dwc_handle_error(dw, dwc); 525 485 else if ((status_block | status_xfer) & (1 << i)) 526 486 dwc_scan_descriptors(dw, dwc); 527 - spin_unlock(&dwc->lock); 528 487 } 529 488 530 489 /* ··· 578 539 struct dw_desc *desc = txd_to_dw_desc(tx); 579 540 struct dw_dma_chan *dwc = to_dw_dma_chan(tx->chan); 580 541 dma_cookie_t cookie; 542 + unsigned long flags; 581 543 582 - spin_lock_bh(&dwc->lock); 544 + spin_lock_irqsave(&dwc->lock, flags); 583 545 cookie = dwc_assign_cookie(dwc, desc); 584 546 585 547 /* ··· 600 560 list_add_tail(&desc->desc_node, &dwc->queue); 601 561 } 602 562 603 - spin_unlock_bh(&dwc->lock); 563 + spin_unlock_irqrestore(&dwc->lock, flags); 604 564 605 565 return cookie; 606 566 } ··· 729 689 reg = dws->tx_reg; 730 690 for_each_sg(sgl, sg, sg_len, i) { 731 691 struct dw_desc *desc; 732 - u32 len; 733 - u32 mem; 734 - 735 - desc = dwc_desc_get(dwc); 736 - if (!desc) { 737 - dev_err(chan2dev(chan), 738 - "not enough descriptors available\n"); 739 - goto err_desc_get; 740 - } 692 + u32 len, dlen, mem; 741 693 742 694 mem = sg_phys(sg); 743 695 len = sg_dma_len(sg); ··· 737 705 if (unlikely(mem & 3 || len & 3)) 738 706 mem_width = 0; 739 707 708 + slave_sg_todev_fill_desc: 709 + desc = dwc_desc_get(dwc); 710 + if (!desc) { 711 + dev_err(chan2dev(chan), 712 + "not enough descriptors available\n"); 
713 + goto err_desc_get; 714 + } 715 + 740 716 desc->lli.sar = mem; 741 717 desc->lli.dar = reg; 742 718 desc->lli.ctllo = ctllo | DWC_CTLL_SRC_WIDTH(mem_width); 743 - desc->lli.ctlhi = len >> mem_width; 719 + if ((len >> mem_width) > DWC_MAX_COUNT) { 720 + dlen = DWC_MAX_COUNT << mem_width; 721 + mem += dlen; 722 + len -= dlen; 723 + } else { 724 + dlen = len; 725 + len = 0; 726 + } 727 + 728 + desc->lli.ctlhi = dlen >> mem_width; 744 729 745 730 if (!first) { 746 731 first = desc; ··· 771 722 &first->tx_list); 772 723 } 773 724 prev = desc; 774 - total_len += len; 725 + total_len += dlen; 726 + 727 + if (len) 728 + goto slave_sg_todev_fill_desc; 775 729 } 776 730 break; 777 731 case DMA_FROM_DEVICE: ··· 787 735 reg = dws->rx_reg; 788 736 for_each_sg(sgl, sg, sg_len, i) { 789 737 struct dw_desc *desc; 790 - u32 len; 791 - u32 mem; 792 - 793 - desc = dwc_desc_get(dwc); 794 - if (!desc) { 795 - dev_err(chan2dev(chan), 796 - "not enough descriptors available\n"); 797 - goto err_desc_get; 798 - } 738 + u32 len, dlen, mem; 799 739 800 740 mem = sg_phys(sg); 801 741 len = sg_dma_len(sg); ··· 795 751 if (unlikely(mem & 3 || len & 3)) 796 752 mem_width = 0; 797 753 754 + slave_sg_fromdev_fill_desc: 755 + desc = dwc_desc_get(dwc); 756 + if (!desc) { 757 + dev_err(chan2dev(chan), 758 + "not enough descriptors available\n"); 759 + goto err_desc_get; 760 + } 761 + 798 762 desc->lli.sar = reg; 799 763 desc->lli.dar = mem; 800 764 desc->lli.ctllo = ctllo | DWC_CTLL_DST_WIDTH(mem_width); 801 - desc->lli.ctlhi = len >> reg_width; 765 + if ((len >> reg_width) > DWC_MAX_COUNT) { 766 + dlen = DWC_MAX_COUNT << reg_width; 767 + mem += dlen; 768 + len -= dlen; 769 + } else { 770 + dlen = len; 771 + len = 0; 772 + } 773 + desc->lli.ctlhi = dlen >> reg_width; 802 774 803 775 if (!first) { 804 776 first = desc; ··· 828 768 &first->tx_list); 829 769 } 830 770 prev = desc; 831 - total_len += len; 771 + total_len += dlen; 772 + 773 + if (len) 774 + goto slave_sg_fromdev_fill_desc; 832 775 } 
833 776 break; 834 777 default: ··· 862 799 struct dw_dma_chan *dwc = to_dw_dma_chan(chan); 863 800 struct dw_dma *dw = to_dw_dma(chan->device); 864 801 struct dw_desc *desc, *_desc; 802 + unsigned long flags; 803 + u32 cfglo; 865 804 LIST_HEAD(list); 866 805 867 - /* Only supports DMA_TERMINATE_ALL */ 868 - if (cmd != DMA_TERMINATE_ALL) 806 + if (cmd == DMA_PAUSE) { 807 + spin_lock_irqsave(&dwc->lock, flags); 808 + 809 + cfglo = channel_readl(dwc, CFG_LO); 810 + channel_writel(dwc, CFG_LO, cfglo | DWC_CFGL_CH_SUSP); 811 + while (!(channel_readl(dwc, CFG_LO) & DWC_CFGL_FIFO_EMPTY)) 812 + cpu_relax(); 813 + 814 + dwc->paused = true; 815 + spin_unlock_irqrestore(&dwc->lock, flags); 816 + } else if (cmd == DMA_RESUME) { 817 + if (!dwc->paused) 818 + return 0; 819 + 820 + spin_lock_irqsave(&dwc->lock, flags); 821 + 822 + cfglo = channel_readl(dwc, CFG_LO); 823 + channel_writel(dwc, CFG_LO, cfglo & ~DWC_CFGL_CH_SUSP); 824 + dwc->paused = false; 825 + 826 + spin_unlock_irqrestore(&dwc->lock, flags); 827 + } else if (cmd == DMA_TERMINATE_ALL) { 828 + spin_lock_irqsave(&dwc->lock, flags); 829 + 830 + channel_clear_bit(dw, CH_EN, dwc->mask); 831 + while (dma_readl(dw, CH_EN) & dwc->mask) 832 + cpu_relax(); 833 + 834 + dwc->paused = false; 835 + 836 + /* active_list entries will end up before queued entries */ 837 + list_splice_init(&dwc->queue, &list); 838 + list_splice_init(&dwc->active_list, &list); 839 + 840 + spin_unlock_irqrestore(&dwc->lock, flags); 841 + 842 + /* Flush all pending and queued descriptors */ 843 + list_for_each_entry_safe(desc, _desc, &list, desc_node) 844 + dwc_descriptor_complete(dwc, desc, false); 845 + } else 869 846 return -ENXIO; 870 - 871 - /* 872 - * This is only called when something went wrong elsewhere, so 873 - * we don't really care about the data. Just disable the 874 - * channel. We still have to poll the channel enable bit due 875 - * to AHB/HSB limitations. 
876 - */ 877 - spin_lock_bh(&dwc->lock); 878 - 879 - channel_clear_bit(dw, CH_EN, dwc->mask); 880 - 881 - while (dma_readl(dw, CH_EN) & dwc->mask) 882 - cpu_relax(); 883 - 884 - /* active_list entries will end up before queued entries */ 885 - list_splice_init(&dwc->queue, &list); 886 - list_splice_init(&dwc->active_list, &list); 887 - 888 - spin_unlock_bh(&dwc->lock); 889 - 890 - /* Flush all pending and queued descriptors */ 891 - list_for_each_entry_safe(desc, _desc, &list, desc_node) 892 - dwc_descriptor_complete(dwc, desc); 893 847 894 848 return 0; 895 849 } ··· 926 846 927 847 ret = dma_async_is_complete(cookie, last_complete, last_used); 928 848 if (ret != DMA_SUCCESS) { 929 - spin_lock_bh(&dwc->lock); 930 849 dwc_scan_descriptors(to_dw_dma(chan->device), dwc); 931 - spin_unlock_bh(&dwc->lock); 932 850 933 851 last_complete = dwc->completed; 934 852 last_used = chan->cookie; ··· 934 856 ret = dma_async_is_complete(cookie, last_complete, last_used); 935 857 } 936 858 937 - dma_set_tx_state(txstate, last_complete, last_used, 0); 859 + if (ret != DMA_SUCCESS) 860 + dma_set_tx_state(txstate, last_complete, last_used, 861 + dwc_first_active(dwc)->len); 862 + else 863 + dma_set_tx_state(txstate, last_complete, last_used, 0); 864 + 865 + if (dwc->paused) 866 + return DMA_PAUSED; 938 867 939 868 return ret; 940 869 } ··· 950 865 { 951 866 struct dw_dma_chan *dwc = to_dw_dma_chan(chan); 952 867 953 - spin_lock_bh(&dwc->lock); 954 868 if (!list_empty(&dwc->queue)) 955 869 dwc_scan_descriptors(to_dw_dma(chan->device), dwc); 956 - spin_unlock_bh(&dwc->lock); 957 870 } 958 871 959 872 static int dwc_alloc_chan_resources(struct dma_chan *chan) ··· 963 880 int i; 964 881 u32 cfghi; 965 882 u32 cfglo; 883 + unsigned long flags; 966 884 967 885 dev_vdbg(chan2dev(chan), "alloc_chan_resources\n"); 968 886 ··· 1001 917 * doesn't mean what you think it means), and status writeback. 
1002 918 */ 1003 919 1004 - spin_lock_bh(&dwc->lock); 920 + spin_lock_irqsave(&dwc->lock, flags); 1005 921 i = dwc->descs_allocated; 1006 922 while (dwc->descs_allocated < NR_DESCS_PER_CHANNEL) { 1007 - spin_unlock_bh(&dwc->lock); 923 + spin_unlock_irqrestore(&dwc->lock, flags); 1008 924 1009 925 desc = kzalloc(sizeof(struct dw_desc), GFP_KERNEL); 1010 926 if (!desc) { 1011 927 dev_info(chan2dev(chan), 1012 928 "only allocated %d descriptors\n", i); 1013 - spin_lock_bh(&dwc->lock); 929 + spin_lock_irqsave(&dwc->lock, flags); 1014 930 break; 1015 931 } 1016 932 ··· 1022 938 sizeof(desc->lli), DMA_TO_DEVICE); 1023 939 dwc_desc_put(dwc, desc); 1024 940 1025 - spin_lock_bh(&dwc->lock); 941 + spin_lock_irqsave(&dwc->lock, flags); 1026 942 i = ++dwc->descs_allocated; 1027 943 } 1028 944 ··· 1031 947 channel_set_bit(dw, MASK.BLOCK, dwc->mask); 1032 948 channel_set_bit(dw, MASK.ERROR, dwc->mask); 1033 949 1034 - spin_unlock_bh(&dwc->lock); 950 + spin_unlock_irqrestore(&dwc->lock, flags); 1035 951 1036 952 dev_dbg(chan2dev(chan), 1037 953 "alloc_chan_resources allocated %d descriptors\n", i); ··· 1044 960 struct dw_dma_chan *dwc = to_dw_dma_chan(chan); 1045 961 struct dw_dma *dw = to_dw_dma(chan->device); 1046 962 struct dw_desc *desc, *_desc; 963 + unsigned long flags; 1047 964 LIST_HEAD(list); 1048 965 1049 966 dev_dbg(chan2dev(chan), "free_chan_resources (descs allocated=%u)\n", ··· 1055 970 BUG_ON(!list_empty(&dwc->queue)); 1056 971 BUG_ON(dma_readl(to_dw_dma(chan->device), CH_EN) & dwc->mask); 1057 972 1058 - spin_lock_bh(&dwc->lock); 973 + spin_lock_irqsave(&dwc->lock, flags); 1059 974 list_splice_init(&dwc->free_list, &list); 1060 975 dwc->descs_allocated = 0; 1061 976 ··· 1064 979 channel_clear_bit(dw, MASK.BLOCK, dwc->mask); 1065 980 channel_clear_bit(dw, MASK.ERROR, dwc->mask); 1066 981 1067 - spin_unlock_bh(&dwc->lock); 982 + spin_unlock_irqrestore(&dwc->lock, flags); 1068 983 1069 984 list_for_each_entry_safe(desc, _desc, &list, desc_node) { 1070 985 
dev_vdbg(chan2dev(chan), " freeing descriptor %p\n", desc); ··· 1089 1004 { 1090 1005 struct dw_dma_chan *dwc = to_dw_dma_chan(chan); 1091 1006 struct dw_dma *dw = to_dw_dma(dwc->chan.device); 1007 + unsigned long flags; 1092 1008 1093 1009 if (!test_bit(DW_DMA_IS_CYCLIC, &dwc->flags)) { 1094 1010 dev_err(chan2dev(&dwc->chan), "missing prep for cyclic DMA\n"); 1095 1011 return -ENODEV; 1096 1012 } 1097 1013 1098 - spin_lock(&dwc->lock); 1014 + spin_lock_irqsave(&dwc->lock, flags); 1099 1015 1100 1016 /* assert channel is idle */ 1101 1017 if (dma_readl(dw, CH_EN) & dwc->mask) { ··· 1109 1023 channel_readl(dwc, LLP), 1110 1024 channel_readl(dwc, CTL_HI), 1111 1025 channel_readl(dwc, CTL_LO)); 1112 - spin_unlock(&dwc->lock); 1026 + spin_unlock_irqrestore(&dwc->lock, flags); 1113 1027 return -EBUSY; 1114 1028 } 1115 1029 ··· 1124 1038 1125 1039 channel_set_bit(dw, CH_EN, dwc->mask); 1126 1040 1127 - spin_unlock(&dwc->lock); 1041 + spin_unlock_irqrestore(&dwc->lock, flags); 1128 1042 1129 1043 return 0; 1130 1044 } ··· 1140 1054 { 1141 1055 struct dw_dma_chan *dwc = to_dw_dma_chan(chan); 1142 1056 struct dw_dma *dw = to_dw_dma(dwc->chan.device); 1057 + unsigned long flags; 1143 1058 1144 - spin_lock(&dwc->lock); 1059 + spin_lock_irqsave(&dwc->lock, flags); 1145 1060 1146 1061 channel_clear_bit(dw, CH_EN, dwc->mask); 1147 1062 while (dma_readl(dw, CH_EN) & dwc->mask) 1148 1063 cpu_relax(); 1149 1064 1150 - spin_unlock(&dwc->lock); 1065 + spin_unlock_irqrestore(&dwc->lock, flags); 1151 1066 } 1152 1067 EXPORT_SYMBOL(dw_dma_cyclic_stop); 1153 1068 ··· 1177 1090 unsigned int reg_width; 1178 1091 unsigned int periods; 1179 1092 unsigned int i; 1093 + unsigned long flags; 1180 1094 1181 - spin_lock_bh(&dwc->lock); 1095 + spin_lock_irqsave(&dwc->lock, flags); 1182 1096 if (!list_empty(&dwc->queue) || !list_empty(&dwc->active_list)) { 1183 - spin_unlock_bh(&dwc->lock); 1097 + spin_unlock_irqrestore(&dwc->lock, flags); 1184 1098 dev_dbg(chan2dev(&dwc->chan), 1185 1099 "queue 
and/or active list are not empty\n"); 1186 1100 return ERR_PTR(-EBUSY); 1187 1101 } 1188 1102 1189 1103 was_cyclic = test_and_set_bit(DW_DMA_IS_CYCLIC, &dwc->flags); 1190 - spin_unlock_bh(&dwc->lock); 1104 + spin_unlock_irqrestore(&dwc->lock, flags); 1191 1105 if (was_cyclic) { 1192 1106 dev_dbg(chan2dev(&dwc->chan), 1193 1107 "channel already prepared for cyclic DMA\n"); ··· 1302 1214 struct dw_dma *dw = to_dw_dma(dwc->chan.device); 1303 1215 struct dw_cyclic_desc *cdesc = dwc->cdesc; 1304 1216 int i; 1217 + unsigned long flags; 1305 1218 1306 1219 dev_dbg(chan2dev(&dwc->chan), "cyclic free\n"); 1307 1220 1308 1221 if (!cdesc) 1309 1222 return; 1310 1223 1311 - spin_lock_bh(&dwc->lock); 1224 + spin_lock_irqsave(&dwc->lock, flags); 1312 1225 1313 1226 channel_clear_bit(dw, CH_EN, dwc->mask); 1314 1227 while (dma_readl(dw, CH_EN) & dwc->mask) ··· 1319 1230 dma_writel(dw, CLEAR.ERROR, dwc->mask); 1320 1231 dma_writel(dw, CLEAR.XFER, dwc->mask); 1321 1232 1322 - spin_unlock_bh(&dwc->lock); 1233 + spin_unlock_irqrestore(&dwc->lock, flags); 1323 1234 1324 1235 for (i = 0; i < cdesc->periods; i++) 1325 1236 dwc_desc_put(dwc, cdesc->desc[i]); ··· 1576 1487 MODULE_LICENSE("GPL v2"); 1577 1488 MODULE_DESCRIPTION("Synopsys DesignWare DMA Controller driver"); 1578 1489 MODULE_AUTHOR("Haavard Skinnemoen (Atmel)"); 1490 + MODULE_AUTHOR("Viresh Kumar <viresh.kumar@st.com>");
+2
drivers/dma/dw_dmac_regs.h
··· 2 2 * Driver for the Synopsys DesignWare AHB DMA Controller 3 3 * 4 4 * Copyright (C) 2005-2007 Atmel Corporation 5 + * Copyright (C) 2010-2011 ST Microelectronics 5 6 * 6 7 * This program is free software; you can redistribute it and/or modify 7 8 * it under the terms of the GNU General Public License version 2 as ··· 139 138 void __iomem *ch_regs; 140 139 u8 mask; 141 140 u8 priority; 141 + bool paused; 142 142 143 143 spinlock_t lock; 144 144
+13 -4
drivers/dma/intel_mid_dma.c
··· 1292 1292 if (err) 1293 1293 goto err_dma; 1294 1294 1295 - pm_runtime_set_active(&pdev->dev); 1296 - pm_runtime_enable(&pdev->dev); 1295 + pm_runtime_put_noidle(&pdev->dev); 1297 1296 pm_runtime_allow(&pdev->dev); 1298 1297 return 0; 1299 1298 ··· 1321 1322 static void __devexit intel_mid_dma_remove(struct pci_dev *pdev) 1322 1323 { 1323 1324 struct middma_device *device = pci_get_drvdata(pdev); 1325 + 1326 + pm_runtime_get_noresume(&pdev->dev); 1327 + pm_runtime_forbid(&pdev->dev); 1324 1328 middma_shutdown(pdev); 1325 1329 pci_dev_put(pdev); 1326 1330 kfree(device); ··· 1387 1385 static int dma_runtime_suspend(struct device *dev) 1388 1386 { 1389 1387 struct pci_dev *pci_dev = to_pci_dev(dev); 1390 - return dma_suspend(pci_dev, PMSG_SUSPEND); 1388 + struct middma_device *device = pci_get_drvdata(pci_dev); 1389 + 1390 + device->state = SUSPENDED; 1391 + return 0; 1391 1392 } 1392 1393 1393 1394 static int dma_runtime_resume(struct device *dev) 1394 1395 { 1395 1396 struct pci_dev *pci_dev = to_pci_dev(dev); 1396 - return dma_resume(pci_dev); 1397 + struct middma_device *device = pci_get_drvdata(pci_dev); 1398 + 1399 + device->state = RUNNING; 1400 + iowrite32(REG_BIT0, device->dma_base + DMA_CFG); 1401 + return 0; 1397 1402 } 1398 1403 1399 1404 static int dma_runtime_idle(struct device *dev)
+6 -2
drivers/dma/ioat/dma_v2.c
··· 508 508 struct ioat_ring_ent **ring; 509 509 u64 status; 510 510 int order; 511 + int i = 0; 511 512 512 513 /* have we already been set up? */ 513 514 if (ioat->ring) ··· 549 548 ioat2_start_null_desc(ioat); 550 549 551 550 /* check that we got off the ground */ 552 - udelay(5); 553 - status = ioat_chansts(chan); 551 + do { 552 + udelay(1); 553 + status = ioat_chansts(chan); 554 + } while (i++ < 20 && !is_ioat_active(status) && !is_ioat_idle(status)); 555 + 554 556 if (is_ioat_active(status) || is_ioat_idle(status)) { 555 557 set_bit(IOAT_RUN, &chan->state); 556 558 return 1 << ioat->alloc_order;
+3 -3
drivers/dma/iop-adma.c
··· 619 619 620 620 if (unlikely(!len)) 621 621 return NULL; 622 - BUG_ON(unlikely(len > IOP_ADMA_MAX_BYTE_COUNT)); 622 + BUG_ON(len > IOP_ADMA_MAX_BYTE_COUNT); 623 623 624 624 dev_dbg(iop_chan->device->common.dev, "%s len: %u\n", 625 625 __func__, len); ··· 652 652 653 653 if (unlikely(!len)) 654 654 return NULL; 655 - BUG_ON(unlikely(len > IOP_ADMA_MAX_BYTE_COUNT)); 655 + BUG_ON(len > IOP_ADMA_MAX_BYTE_COUNT); 656 656 657 657 dev_dbg(iop_chan->device->common.dev, "%s len: %u\n", 658 658 __func__, len); ··· 686 686 687 687 if (unlikely(!len)) 688 688 return NULL; 689 - BUG_ON(unlikely(len > IOP_ADMA_XOR_MAX_BYTE_COUNT)); 689 + BUG_ON(len > IOP_ADMA_XOR_MAX_BYTE_COUNT); 690 690 691 691 dev_dbg(iop_chan->device->common.dev, 692 692 "%s src_cnt: %d len: %u flags: %lx\n",
+3 -3
drivers/dma/mv_xor.c
··· 671 671 if (unlikely(len < MV_XOR_MIN_BYTE_COUNT)) 672 672 return NULL; 673 673 674 - BUG_ON(unlikely(len > MV_XOR_MAX_BYTE_COUNT)); 674 + BUG_ON(len > MV_XOR_MAX_BYTE_COUNT); 675 675 676 676 spin_lock_bh(&mv_chan->lock); 677 677 slot_cnt = mv_chan_memcpy_slot_count(len); ··· 710 710 if (unlikely(len < MV_XOR_MIN_BYTE_COUNT)) 711 711 return NULL; 712 712 713 - BUG_ON(unlikely(len > MV_XOR_MAX_BYTE_COUNT)); 713 + BUG_ON(len > MV_XOR_MAX_BYTE_COUNT); 714 714 715 715 spin_lock_bh(&mv_chan->lock); 716 716 slot_cnt = mv_chan_memset_slot_count(len); ··· 744 744 if (unlikely(len < MV_XOR_MIN_BYTE_COUNT)) 745 745 return NULL; 746 746 747 - BUG_ON(unlikely(len > MV_XOR_MAX_BYTE_COUNT)); 747 + BUG_ON(len > MV_XOR_MAX_BYTE_COUNT); 748 748 749 749 dev_dbg(mv_chan->device->common.dev, 750 750 "%s src_cnt: %d len: dest %x %u flags: %ld\n",
+66 -30
drivers/dma/pch_dma.c
··· 77 77 u32 dma_ctl0; 78 78 u32 dma_ctl1; 79 79 u32 dma_ctl2; 80 - u32 reserved1; 80 + u32 dma_ctl3; 81 81 u32 dma_sts0; 82 82 u32 dma_sts1; 83 - u32 reserved2; 83 + u32 dma_sts2; 84 84 u32 reserved3; 85 85 struct pch_dma_desc_regs desc[MAX_CHAN_NR]; 86 86 }; ··· 130 130 #define PCH_DMA_CTL0 0x00 131 131 #define PCH_DMA_CTL1 0x04 132 132 #define PCH_DMA_CTL2 0x08 133 + #define PCH_DMA_CTL3 0x0C 133 134 #define PCH_DMA_STS0 0x10 134 135 #define PCH_DMA_STS1 0x14 135 136 ··· 139 138 #define dma_writel(pd, name, val) \ 140 139 writel((val), (pd)->membase + PCH_DMA_##name) 141 140 142 - static inline struct pch_dma_desc *to_pd_desc(struct dma_async_tx_descriptor *txd) 141 + static inline 142 + struct pch_dma_desc *to_pd_desc(struct dma_async_tx_descriptor *txd) 143 143 { 144 144 return container_of(txd, struct pch_dma_desc, txd); 145 145 } ··· 165 163 return chan->dev->device.parent; 166 164 } 167 165 168 - static inline struct pch_dma_desc *pdc_first_active(struct pch_dma_chan *pd_chan) 166 + static inline 167 + struct pch_dma_desc *pdc_first_active(struct pch_dma_chan *pd_chan) 169 168 { 170 169 return list_first_entry(&pd_chan->active_list, 171 170 struct pch_dma_desc, desc_node); 172 171 } 173 172 174 - static inline struct pch_dma_desc *pdc_first_queued(struct pch_dma_chan *pd_chan) 173 + static inline 174 + struct pch_dma_desc *pdc_first_queued(struct pch_dma_chan *pd_chan) 175 175 { 176 176 return list_first_entry(&pd_chan->queue, 177 177 struct pch_dma_desc, desc_node); ··· 203 199 struct pch_dma *pd = to_pd(chan->device); 204 200 u32 val; 205 201 206 - val = dma_readl(pd, CTL0); 202 + if (chan->chan_id < 8) { 203 + val = dma_readl(pd, CTL0); 207 204 208 - if (pd_chan->dir == DMA_TO_DEVICE) 209 - val |= 0x1 << (DMA_CTL0_BITS_PER_CH * chan->chan_id + 210 - DMA_CTL0_DIR_SHIFT_BITS); 211 - else 212 - val &= ~(0x1 << (DMA_CTL0_BITS_PER_CH * chan->chan_id + 213 - DMA_CTL0_DIR_SHIFT_BITS)); 205 + if (pd_chan->dir == DMA_TO_DEVICE) 206 + val |= 0x1 << 
(DMA_CTL0_BITS_PER_CH * chan->chan_id + 207 + DMA_CTL0_DIR_SHIFT_BITS); 208 + else 209 + val &= ~(0x1 << (DMA_CTL0_BITS_PER_CH * chan->chan_id + 210 + DMA_CTL0_DIR_SHIFT_BITS)); 214 211 215 - dma_writel(pd, CTL0, val); 212 + dma_writel(pd, CTL0, val); 213 + } else { 214 + int ch = chan->chan_id - 8; /* ch8-->0 ch9-->1 ... ch11->3 */ 215 + val = dma_readl(pd, CTL3); 216 + 217 + if (pd_chan->dir == DMA_TO_DEVICE) 218 + val |= 0x1 << (DMA_CTL0_BITS_PER_CH * ch + 219 + DMA_CTL0_DIR_SHIFT_BITS); 220 + else 221 + val &= ~(0x1 << (DMA_CTL0_BITS_PER_CH * ch + 222 + DMA_CTL0_DIR_SHIFT_BITS)); 223 + 224 + dma_writel(pd, CTL3, val); 225 + } 216 226 217 227 dev_dbg(chan2dev(chan), "pdc_set_dir: chan %d -> %x\n", 218 228 chan->chan_id, val); ··· 237 219 struct pch_dma *pd = to_pd(chan->device); 238 220 u32 val; 239 221 240 - val = dma_readl(pd, CTL0); 222 + if (chan->chan_id < 8) { 223 + val = dma_readl(pd, CTL0); 241 224 242 - val &= ~(DMA_CTL0_MODE_MASK_BITS << 243 - (DMA_CTL0_BITS_PER_CH * chan->chan_id)); 244 - val |= mode << (DMA_CTL0_BITS_PER_CH * chan->chan_id); 225 + val &= ~(DMA_CTL0_MODE_MASK_BITS << 226 + (DMA_CTL0_BITS_PER_CH * chan->chan_id)); 227 + val |= mode << (DMA_CTL0_BITS_PER_CH * chan->chan_id); 245 228 246 - dma_writel(pd, CTL0, val); 229 + dma_writel(pd, CTL0, val); 230 + } else { 231 + int ch = chan->chan_id - 8; /* ch8-->0 ch9-->1 ... 
ch11->3 */ 232 + 233 + val = dma_readl(pd, CTL3); 234 + 235 + val &= ~(DMA_CTL0_MODE_MASK_BITS << 236 + (DMA_CTL0_BITS_PER_CH * ch)); 237 + val |= mode << (DMA_CTL0_BITS_PER_CH * ch); 238 + 239 + dma_writel(pd, CTL3, val); 240 + 241 + } 247 242 248 243 dev_dbg(chan2dev(chan), "pdc_set_mode: chan %d -> %x\n", 249 244 chan->chan_id, val); ··· 282 251 283 252 static void pdc_dostart(struct pch_dma_chan *pd_chan, struct pch_dma_desc* desc) 284 253 { 285 - struct pch_dma *pd = to_pd(pd_chan->chan.device); 286 - u32 val; 287 - 288 254 if (!pdc_is_idle(pd_chan)) { 289 255 dev_err(chan2dev(&pd_chan->chan), 290 256 "BUG: Attempt to start non-idle channel\n"); ··· 307 279 channel_writel(pd_chan, NEXT, desc->txd.phys); 308 280 pdc_set_mode(&pd_chan->chan, DMA_CTL0_SG); 309 281 } 310 - 311 - val = dma_readl(pd, CTL2); 312 - val |= 1 << (DMA_CTL2_START_SHIFT_BITS + pd_chan->chan.chan_id); 313 - dma_writel(pd, CTL2, val); 314 282 } 315 283 316 284 static void pdc_chain_complete(struct pch_dma_chan *pd_chan, ··· 427 403 { 428 404 struct pch_dma_desc *desc, *_d; 429 405 struct pch_dma_desc *ret = NULL; 430 - int i; 406 + int i = 0; 431 407 432 408 spin_lock(&pd_chan->lock); 433 409 list_for_each_entry_safe(desc, _d, &pd_chan->free_list, desc_node) { ··· 502 478 spin_unlock_bh(&pd_chan->lock); 503 479 504 480 pdc_enable_irq(chan, 1); 505 - pdc_set_dir(chan); 506 481 507 482 return pd_chan->descs_allocated; 508 483 } ··· 583 560 reg = pd_slave->tx_reg; 584 561 else 585 562 return NULL; 563 + 564 + pd_chan->dir = direction; 565 + pdc_set_dir(chan); 586 566 587 567 for_each_sg(sgl, sg, sg_len, i) { 588 568 desc = pdc_desc_get(pd_chan); ··· 729 703 pd->regs.dma_ctl0 = dma_readl(pd, CTL0); 730 704 pd->regs.dma_ctl1 = dma_readl(pd, CTL1); 731 705 pd->regs.dma_ctl2 = dma_readl(pd, CTL2); 706 + pd->regs.dma_ctl3 = dma_readl(pd, CTL3); 732 707 733 708 list_for_each_entry_safe(chan, _c, &pd->dma.channels, device_node) { 734 709 pd_chan = to_pd_chan(chan); ··· 752 725 dma_writel(pd, CTL0, 
pd->regs.dma_ctl0); 753 726 dma_writel(pd, CTL1, pd->regs.dma_ctl1); 754 727 dma_writel(pd, CTL2, pd->regs.dma_ctl2); 728 + dma_writel(pd, CTL3, pd->regs.dma_ctl3); 755 729 756 730 list_for_each_entry_safe(chan, _c, &pd->dma.channels, device_node) { 757 731 pd_chan = to_pd_chan(chan); ··· 878 850 879 851 pd_chan->membase = &regs->desc[i]; 880 852 881 - pd_chan->dir = (i % 2) ? DMA_FROM_DEVICE : DMA_TO_DEVICE; 882 - 883 853 spin_lock_init(&pd_chan->lock); 884 854 885 855 INIT_LIST_HEAD(&pd_chan->active_list); ··· 955 929 #define PCI_DEVICE_ID_ML7213_DMA1_8CH 0x8026 956 930 #define PCI_DEVICE_ID_ML7213_DMA2_8CH 0x802B 957 931 #define PCI_DEVICE_ID_ML7213_DMA3_4CH 0x8034 932 + #define PCI_DEVICE_ID_ML7213_DMA4_12CH 0x8032 933 + #define PCI_DEVICE_ID_ML7223_DMA1_4CH 0x800B 934 + #define PCI_DEVICE_ID_ML7223_DMA2_4CH 0x800E 935 + #define PCI_DEVICE_ID_ML7223_DMA3_4CH 0x8017 936 + #define PCI_DEVICE_ID_ML7223_DMA4_4CH 0x803B 958 937 959 - static const struct pci_device_id pch_dma_id_table[] = { 938 + DEFINE_PCI_DEVICE_TABLE(pch_dma_id_table) = { 960 939 { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_EG20T_PCH_DMA_8CH), 8 }, 961 940 { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_EG20T_PCH_DMA_4CH), 4 }, 962 941 { PCI_VDEVICE(ROHM, PCI_DEVICE_ID_ML7213_DMA1_8CH), 8}, /* UART Video */ 963 942 { PCI_VDEVICE(ROHM, PCI_DEVICE_ID_ML7213_DMA2_8CH), 8}, /* PCMIF SPI */ 964 943 { PCI_VDEVICE(ROHM, PCI_DEVICE_ID_ML7213_DMA3_4CH), 4}, /* FPGA */ 944 + { PCI_VDEVICE(ROHM, PCI_DEVICE_ID_ML7213_DMA4_12CH), 12}, /* I2S */ 945 + { PCI_VDEVICE(ROHM, PCI_DEVICE_ID_ML7223_DMA1_4CH), 4}, /* UART */ 946 + { PCI_VDEVICE(ROHM, PCI_DEVICE_ID_ML7223_DMA2_4CH), 4}, /* Video SPI */ 947 + { PCI_VDEVICE(ROHM, PCI_DEVICE_ID_ML7223_DMA3_4CH), 4}, /* Security */ 948 + { PCI_VDEVICE(ROHM, PCI_DEVICE_ID_ML7223_DMA4_4CH), 4}, /* FPGA */ 965 949 { 0, }, 966 950 }; 967 951
+4 -4
drivers/dma/ppc4xx/adma.c
··· 2313 2313 if (unlikely(!len)) 2314 2314 return NULL; 2315 2315 2316 - BUG_ON(unlikely(len > PPC440SPE_ADMA_DMA_MAX_BYTE_COUNT)); 2316 + BUG_ON(len > PPC440SPE_ADMA_DMA_MAX_BYTE_COUNT); 2317 2317 2318 2318 spin_lock_bh(&ppc440spe_chan->lock); 2319 2319 ··· 2354 2354 if (unlikely(!len)) 2355 2355 return NULL; 2356 2356 2357 - BUG_ON(unlikely(len > PPC440SPE_ADMA_DMA_MAX_BYTE_COUNT)); 2357 + BUG_ON(len > PPC440SPE_ADMA_DMA_MAX_BYTE_COUNT); 2358 2358 2359 2359 spin_lock_bh(&ppc440spe_chan->lock); 2360 2360 ··· 2397 2397 dma_dest, dma_src, src_cnt)); 2398 2398 if (unlikely(!len)) 2399 2399 return NULL; 2400 - BUG_ON(unlikely(len > PPC440SPE_ADMA_XOR_MAX_BYTE_COUNT)); 2400 + BUG_ON(len > PPC440SPE_ADMA_XOR_MAX_BYTE_COUNT); 2401 2401 2402 2402 dev_dbg(ppc440spe_chan->device->common.dev, 2403 2403 "ppc440spe adma%d: %s src_cnt: %d len: %u int_en: %d\n", ··· 2887 2887 ADMA_LL_DBG(prep_dma_pq_dbg(ppc440spe_chan->device->id, 2888 2888 dst, src, src_cnt)); 2889 2889 BUG_ON(!len); 2890 - BUG_ON(unlikely(len > PPC440SPE_ADMA_XOR_MAX_BYTE_COUNT)); 2890 + BUG_ON(len > PPC440SPE_ADMA_XOR_MAX_BYTE_COUNT); 2891 2891 BUG_ON(!src_cnt); 2892 2892 2893 2893 if (src_cnt == 1 && dst[1] == src[0]) {
+2 -2
drivers/dma/ste_dma40.c
··· 1829 1829 { 1830 1830 struct stedma40_platform_data *plat = chan->base->plat_data; 1831 1831 struct stedma40_chan_cfg *cfg = &chan->dma_cfg; 1832 - dma_addr_t addr; 1832 + dma_addr_t addr = 0; 1833 1833 1834 1834 if (chan->runtime_addr) 1835 1835 return chan->runtime_addr; ··· 2962 2962 { 2963 2963 return platform_driver_probe(&d40_driver, d40_probe); 2964 2964 } 2965 - arch_initcall(stedma40_init); 2965 + subsys_initcall(stedma40_init);
+1
include/linux/dw_dmac.h
··· 3 3 * AVR32 systems.) 4 4 * 5 5 * Copyright (C) 2007 Atmel Corporation 6 + * Copyright (C) 2010-2011 ST Microelectronics 6 7 * 7 8 * This program is free software; you can redistribute it and/or modify 8 9 * it under the terms of the GNU General Public License version 2 as