Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'dmaengine-5.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine

Pull dmaengine updates from Vinod Koul:
"This time we have bunch of driver updates and some new device support.

New support:
- Document the RZ/V2L and RZ/G2UL DMA bindings
- TI AM62x k3-udma and k3-psil support

Updates:
- YAML conversion of the MediaTek UART APDMA binding
- Removal of DMA-32 fallback configuration for various drivers
- imx-sdma updates for channel restart"

* tag 'dmaengine-5.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine: (23 commits)
dmaengine: hisi_dma: fix MSI allocate fail when reload hisi_dma
dmaengine: dw-axi-dmac: cleanup comments
dmaengine: fsl-dpaa2-qdma: Drop comma after SoC match table sentinel
dt-bindings: dma: Convert mtk-uart-apdma to DT schema
dmaengine: ppc4xx: Make use of the helper macro LIST_HEAD()
dmaengine: idxd: Remove useless DMA-32 fallback configuration
dmaengine: qcom_hidma: Remove useless DMA-32 fallback configuration
dmaengine: sh: Kconfig: Add ARCH_R9A07G054 dependency for RZ_DMAC config option
dmaengine: ti: k3-psil: Add AM62x PSIL and PDMA data
dmaengine: ti: k3-udma: Add AM62x DMSS support
dmaengine: ti: cleanup comments
dmaengine: imx-sdma: clean up some inconsistent indenting
dmaengine: Revert "dmaengine: shdma: Fix runtime PM imbalance on error"
dmaengine: idxd: restore traffic class defaults after wq reset
dmaengine: altera-msgdma: Remove useless DMA-32 fallback configuration
dmaengine: stm32-dma: set dma_device max_sg_burst
dmaengine: imx-sdma: fix cyclic buffer race condition
dmaengine: imx-sdma: restart cyclic channel if needed
dmaengine: ioat: Remove useless DMA-32 fallback configuration
dmaengine: ptdma: handle the cases based on DMA is complete
...

+382 -102
+122
Documentation/devicetree/bindings/dma/mediatek,uart-dma.yaml
···
+ # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+ %YAML 1.2
+ ---
+ $id: http://devicetree.org/schemas/dma/mediatek,uart-dma.yaml#
+ $schema: http://devicetree.org/meta-schemas/core.yaml#
+ 
+ title: MediaTek UART APDMA controller
+ 
+ maintainers:
+   - Long Cheng <long.cheng@mediatek.com>
+ 
+ description: |
+   The MediaTek UART APDMA controller provides DMA capabilities
+   for the UART peripheral bus.
+ 
+ allOf:
+   - $ref: "dma-controller.yaml#"
+ 
+ properties:
+   compatible:
+     oneOf:
+       - items:
+           - enum:
+               - mediatek,mt2712-uart-dma
+               - mediatek,mt8516-uart-dma
+           - const: mediatek,mt6577-uart-dma
+       - enum:
+           - mediatek,mt6577-uart-dma
+ 
+   reg:
+     minItems: 1
+     maxItems: 16
+ 
+   interrupts:
+     description: |
+       TX, RX interrupt lines for each UART APDMA channel
+     minItems: 1
+     maxItems: 16
+ 
+   clocks:
+     description: Must contain one entry for the APDMA main clock
+     maxItems: 1
+ 
+   clock-names:
+     const: apdma
+ 
+   "#dma-cells":
+     const: 1
+     description: |
+       The first cell specifies the UART APDMA channel number
+ 
+   dma-requests:
+     description: |
+       Number of virtual channels of the UART APDMA controller
+     maximum: 16
+ 
+   mediatek,dma-33bits:
+     type: boolean
+     description: Enable 33-bits UART APDMA support
+ 
+ required:
+   - compatible
+   - reg
+   - interrupts
+ 
+ additionalProperties: false
+ 
+ if:
+   not:
+     required:
+       - dma-requests
+ then:
+   properties:
+     interrupts:
+       maxItems: 8
+     reg:
+       maxItems: 8
+ 
+ examples:
+   - |
+     #include <dt-bindings/interrupt-controller/arm-gic.h>
+     #include <dt-bindings/clock/mt2712-clk.h>
+     soc {
+         #address-cells = <2>;
+         #size-cells = <2>;
+ 
+         apdma: dma-controller@11000400 {
+             compatible = "mediatek,mt2712-uart-dma",
+                          "mediatek,mt6577-uart-dma";
+             reg = <0 0x11000400 0 0x80>,
+                   <0 0x11000480 0 0x80>,
+                   <0 0x11000500 0 0x80>,
+                   <0 0x11000580 0 0x80>,
+                   <0 0x11000600 0 0x80>,
+                   <0 0x11000680 0 0x80>,
+                   <0 0x11000700 0 0x80>,
+                   <0 0x11000780 0 0x80>,
+                   <0 0x11000800 0 0x80>,
+                   <0 0x11000880 0 0x80>,
+                   <0 0x11000900 0 0x80>,
+                   <0 0x11000980 0 0x80>;
+             interrupts = <GIC_SPI 103 IRQ_TYPE_LEVEL_LOW>,
+                          <GIC_SPI 104 IRQ_TYPE_LEVEL_LOW>,
+                          <GIC_SPI 105 IRQ_TYPE_LEVEL_LOW>,
+                          <GIC_SPI 106 IRQ_TYPE_LEVEL_LOW>,
+                          <GIC_SPI 107 IRQ_TYPE_LEVEL_LOW>,
+                          <GIC_SPI 108 IRQ_TYPE_LEVEL_LOW>,
+                          <GIC_SPI 109 IRQ_TYPE_LEVEL_LOW>,
+                          <GIC_SPI 110 IRQ_TYPE_LEVEL_LOW>,
+                          <GIC_SPI 111 IRQ_TYPE_LEVEL_LOW>,
+                          <GIC_SPI 112 IRQ_TYPE_LEVEL_LOW>,
+                          <GIC_SPI 113 IRQ_TYPE_LEVEL_LOW>,
+                          <GIC_SPI 114 IRQ_TYPE_LEVEL_LOW>;
+             dma-requests = <12>;
+             clocks = <&pericfg CLK_PERI_AP_DMA>;
+             clock-names = "apdma";
+             mediatek,dma-33bits;
+             #dma-cells = <1>;
+         };
+     };
+ 
+ ...
-56
Documentation/devicetree/bindings/dma/mtk-uart-apdma.txt
···
- * Mediatek UART APDMA Controller
- 
- Required properties:
- - compatible should contain:
-   * "mediatek,mt2712-uart-dma" for MT2712 compatible APDMA
-   * "mediatek,mt6577-uart-dma" for MT6577 and all of the above
-   * "mediatek,mt8516-uart-dma", "mediatek,mt6577" for MT8516 SoC
- 
- - reg: The base address of the APDMA register bank.
- 
- - interrupts: A single interrupt specifier.
-   One interrupt per dma-requests, or 8 if no dma-requests property is present
- 
- - dma-requests: The number of DMA channels
- 
- - clocks : Must contain an entry for each entry in clock-names.
-   See ../clocks/clock-bindings.txt for details.
- - clock-names: The APDMA clock for register accesses
- 
- - mediatek,dma-33bits: Present if the DMA requires support
- 
- Examples:
- 
- 	apdma: dma-controller@11000400 {
- 		compatible = "mediatek,mt2712-uart-dma",
- 			     "mediatek,mt6577-uart-dma";
- 		reg = <0 0x11000400 0 0x80>,
- 		      <0 0x11000480 0 0x80>,
- 		      <0 0x11000500 0 0x80>,
- 		      <0 0x11000580 0 0x80>,
- 		      <0 0x11000600 0 0x80>,
- 		      <0 0x11000680 0 0x80>,
- 		      <0 0x11000700 0 0x80>,
- 		      <0 0x11000780 0 0x80>,
- 		      <0 0x11000800 0 0x80>,
- 		      <0 0x11000880 0 0x80>,
- 		      <0 0x11000900 0 0x80>,
- 		      <0 0x11000980 0 0x80>;
- 		interrupts = <GIC_SPI 103 IRQ_TYPE_LEVEL_LOW>,
- 			     <GIC_SPI 104 IRQ_TYPE_LEVEL_LOW>,
- 			     <GIC_SPI 105 IRQ_TYPE_LEVEL_LOW>,
- 			     <GIC_SPI 106 IRQ_TYPE_LEVEL_LOW>,
- 			     <GIC_SPI 107 IRQ_TYPE_LEVEL_LOW>,
- 			     <GIC_SPI 108 IRQ_TYPE_LEVEL_LOW>,
- 			     <GIC_SPI 109 IRQ_TYPE_LEVEL_LOW>,
- 			     <GIC_SPI 110 IRQ_TYPE_LEVEL_LOW>,
- 			     <GIC_SPI 111 IRQ_TYPE_LEVEL_LOW>,
- 			     <GIC_SPI 112 IRQ_TYPE_LEVEL_LOW>,
- 			     <GIC_SPI 113 IRQ_TYPE_LEVEL_LOW>,
- 			     <GIC_SPI 114 IRQ_TYPE_LEVEL_LOW>;
- 		dma-requests = <12>;
- 		clocks = <&pericfg CLK_PERI_AP_DMA>;
- 		clock-names = "apdma";
- 		mediatek,dma-33bits;
- 		#dma-cells = <1>;
- 	};
+3 -1
Documentation/devicetree/bindings/dma/renesas,rz-dmac.yaml
···
  $id: http://devicetree.org/schemas/dma/renesas,rz-dmac.yaml#
  $schema: http://devicetree.org/meta-schemas/core.yaml#
 
- title: Renesas RZ/G2L DMA Controller
+ title: Renesas RZ/{G2L,G2UL,V2L} DMA Controller
 
  maintainers:
    - Biju Das <biju.das.jz@bp.renesas.com>
···
  compatible:
    items:
      - enum:
+         - renesas,r9a07g043-dmac # RZ/G2UL
          - renesas,r9a07g044-dmac # RZ/G2{L,LC}
+         - renesas,r9a07g054-dmac # RZ/V2L
      - const: renesas,rz-dmac
 
  reg:
+1 -3
drivers/dma/altera-msgdma.c
···
  	ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
  	if (ret) {
  		dev_warn(&pdev->dev, "unable to set coherent mask to 64");
- 		ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
- 		if (ret)
- 			goto fail;
+ 		goto fail;
  	}
 
  	msgdma_reset(mdev);
+4 -4
drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c
···
- // SPDX-License-Identifier: GPL-2.0
+ // SPDX-License-Identifier: GPL-2.0
  // (C) 2017-2018 Synopsys, Inc. (www.synopsys.com)
 
  /*
···
  /*
   * The set of bus widths supported by the DMA controller. DW AXI DMAC supports
   * master data bus width up to 512 bits (for both AXI master interfaces), but
-  * it depends on IP block configurarion.
+  * it depends on IP block configuration.
   */
  #define AXI_DMA_BUSWIDTHS		\
  	(DMA_SLAVE_BUSWIDTH_1_BYTE |	\
···
  	u32 status, i;
 
- 	/* Disable DMAC inerrupts. We'll enable them after processing chanels */
+ 	/* Disable DMAC interrupts. We'll enable them after processing channels */
  	axi_dma_irq_disable(chip);
 
- 	/* Poll, clear and process every chanel interrupt status */
+ 	/* Poll, clear and process every channel interrupt status */
  	for (i = 0; i < dw->hdata->nr_channels; i++) {
  		chan = &dw->chan[i];
  		status = axi_chan_irq_read(chan);
+1 -1
drivers/dma/dw-axi-dmac/dw-axi-dmac.h
···
- // SPDX-License-Identifier: GPL-2.0
+ /* SPDX-License-Identifier: GPL-2.0 */
  // (C) 2017-2018 Synopsys, Inc. (www.synopsys.com)
 
  /*
+1 -1
drivers/dma/fsl-dpaa2-qdma/dpaa2-qdma.h
···
 
  static struct soc_device_attribute soc_fixup_tuning[] = {
  	{ .family = "QorIQ LX2160A"},
- 	{ },
+ 	{ /* sentinel */ }
  };
 
  /* FD pool size: one FD + 3 Frame list + 2 source/destination descriptor */
+1 -1
drivers/dma/hisi_dma.c
···
  #define HISI_DMA_MODE			0x217c
  #define HISI_DMA_OFFSET			0x100
 
- #define HISI_DMA_MSI_NUM		30
+ #define HISI_DMA_MSI_NUM		32
  #define HISI_DMA_CHAN_NUM		30
  #define HISI_DMA_Q_DEPTH_VAL		1024
+7 -2
drivers/dma/idxd/device.c
···
  		group->use_rdbuf_limit = false;
  		group->rdbufs_allowed = 0;
  		group->rdbufs_reserved = 0;
- 		group->tc_a = -1;
- 		group->tc_b = -1;
+ 		if (idxd->hw.version < DEVICE_VERSION_2 && !tc_override) {
+ 			group->tc_a = 1;
+ 			group->tc_b = 1;
+ 		} else {
+ 			group->tc_a = -1;
+ 			group->tc_b = -1;
+ 		}
  	}
  }
-2
drivers/dma/idxd/init.c
···
  	dev_dbg(dev, "Set DMA masks\n");
  	rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
  	if (rc)
- 		rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
- 	if (rc)
  		goto err;
 
  	dev_dbg(dev, "Set PCI master\n");
+19 -3
drivers/dma/imx-sdma.c
···
  	return 0;
  }
 
+ static int is_sdma_channel_enabled(struct sdma_engine *sdma, int channel)
+ {
+ 	return !!(readl(sdma->regs + SDMA_H_STATSTOP) & BIT(channel));
+ }
+ 
  static void sdma_enable_channel(struct sdma_engine *sdma, int channel)
  {
  	writel(BIT(channel), sdma->regs + SDMA_H_START);
···
  		 */
 
  		desc->chn_real_count = bd->mode.count;
- 		bd->mode.status |= BD_DONE;
  		bd->mode.count = desc->period_len;
  		desc->buf_ptail = desc->buf_tail;
  		desc->buf_tail = (desc->buf_tail + 1) % desc->num_bd;
···
  		dmaengine_desc_get_callback_invoke(&desc->vd.tx, NULL);
  		spin_lock(&sdmac->vc.lock);
 
+ 		/* Assign buffer ownership to SDMA */
+ 		bd->mode.status |= BD_DONE;
+ 
  		if (error)
  			sdmac->status = old_status;
  	}
+ 
+ 	/*
+ 	 * SDMA stops cyclic channel when DMA request triggers a channel and no SDMA
+ 	 * owned buffer is available (i.e. BD_DONE was set too late).
+ 	 */
+ 	if (!is_sdma_channel_enabled(sdmac->sdma, sdmac->channel)) {
+ 		dev_warn(sdmac->sdma->dev, "restart cyclic channel %d\n", sdmac->channel);
+ 		sdma_enable_channel(sdmac->sdma, sdmac->channel);
+ 	}
  }
···
  	for (i = 0; i < sdmac->desc->num_bd; i++) {
  		bd = &sdmac->desc->bd[i];
 
- 		 if (bd->mode.status & (BD_DONE | BD_RROR))
+ 		if (bd->mode.status & (BD_DONE | BD_RROR))
  			error = -EIO;
- 		 sdmac->desc->chn_real_count += bd->mode.count;
+ 		sdmac->desc->chn_real_count += bd->mode.count;
  	}
 
  	if (error)
-2
drivers/dma/ioat/init.c
···
 
  	err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
  	if (err)
- 		err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
- 	if (err)
  		return err;
 
  	device = alloc_ioatdma(pdev, iomap[IOAT_MMIO_BAR]);
+1 -1
drivers/dma/ppc4xx/adma.c
···
  {
  	struct ppc440spe_adma_desc_slot *iter = NULL, *_iter;
  	struct ppc440spe_adma_desc_slot *alloc_start = NULL;
- 	struct list_head chain = LIST_HEAD_INIT(chain);
  	int slots_found, retry = 0;
+ 	LIST_HEAD(chain);
 
 
  	BUG_ON(!num_slots || !slots_per_op);
+16 -6
drivers/dma/ptdma/ptdma-dmaengine.c
···
  	spin_lock_irqsave(&chan->vc.lock, flags);
 
  	if (desc) {
- 		if (desc->status != DMA_ERROR)
- 			desc->status = DMA_COMPLETE;
- 
- 		dma_cookie_complete(tx_desc);
- 		dma_descriptor_unmap(tx_desc);
- 		list_del(&desc->vd.node);
+ 		if (desc->status != DMA_COMPLETE) {
+ 			if (desc->status != DMA_ERROR)
+ 				desc->status = DMA_COMPLETE;
+ 
+ 			dma_cookie_complete(tx_desc);
+ 			dma_descriptor_unmap(tx_desc);
+ 			list_del(&desc->vd.node);
+ 		} else {
+ 			/* Don't handle it twice */
+ 			tx_desc = NULL;
+ 		}
  	}
 
  	desc = pt_next_dma_desc(chan);
···
  	struct pt_dma_chan *chan = to_pt_chan(dma_chan);
  	struct pt_dma_desc *desc;
  	unsigned long flags;
+ 	bool engine_is_idle = true;
 
  	spin_lock_irqsave(&chan->vc.lock, flags);
+ 
+ 	desc = pt_next_dma_desc(chan);
+ 	if (desc)
+ 		engine_is_idle = false;
 
  	vchan_issue_pending(&chan->vc);
···
  	spin_unlock_irqrestore(&chan->vc.lock, flags);
 
  	/* If there was nothing active, start processing */
- 	if (desc)
+ 	if (engine_is_idle)
  		pt_cmd_callback(desc, 0);
  }
+1 -3
drivers/dma/qcom/hidma.c
···
  	rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
  	if (rc) {
  		dev_warn(&pdev->dev, "unable to set coherent mask to 64");
- 		rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
- 		if (rc)
- 			goto dmafree;
+ 		goto dmafree;
  	}
 
  	dmadev->lldev = hidma_ll_init(dmadev->ddev.dev,
+3 -3
drivers/dma/sh/Kconfig
···
  	  SoCs.
 
  config RZ_DMAC
- 	tristate "Renesas RZ/G2L DMA Controller"
- 	depends on ARCH_R9A07G044 || COMPILE_TEST
+ 	tristate "Renesas RZ/{G2L,V2L} DMA Controller"
+ 	depends on ARCH_R9A07G044 || ARCH_R9A07G054 || COMPILE_TEST
  	select RENESAS_DMA
  	select DMA_VIRTUAL_CHANNELS
  	help
  	  This driver supports the general purpose DMA controller found in the
- 	  Renesas RZ/G2L SoC variants.
+ 	  Renesas RZ/{G2L,V2L} SoC variants.
+1 -3
drivers/dma/sh/shdma-base.c
···
  	ret = pm_runtime_get(schan->dev);
 
  	spin_unlock_irq(&schan->chan_lock);
- 	if (ret < 0) {
+ 	if (ret < 0)
  		dev_err(schan->dev, "%s(): GET = %d\n", __func__, ret);
- 		pm_runtime_put(schan->dev);
- 	}
 
  	pm_runtime_barrier(schan->dev);
+1
drivers/dma/stm32-dma.c
···
  	dd->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
  	dd->copy_align = DMAENGINE_ALIGN_32_BYTES;
  	dd->max_burst = STM32_DMA_MAX_BURST;
+ 	dd->max_sg_burst = STM32_DMA_ALIGNED_MAX_DATA_ITEMS;
  	dd->descriptor_reuse = true;
  	dd->dev = &pdev->dev;
  	INIT_LIST_HEAD(&dd->channels);
+2 -1
drivers/dma/ti/Makefile
···
  		   k3-psil-j721e.o \
  		   k3-psil-j7200.o \
  		   k3-psil-am64.o \
- 		   k3-psil-j721s2.o
+ 		   k3-psil-j721s2.o \
+ 		   k3-psil-am62.o
  obj-$(CONFIG_TI_DMA_CROSSBAR) += dma-crossbar.o
+3 -3
drivers/dma/ti/cppi41.c
···
  		val = cppi_readl(cdd->qmgr_mem + QMGR_PEND(i));
  		if (i == QMGR_PENDING_SLOT_Q(first_completion_queue) && val) {
  			u32 mask;
- 			/* set corresponding bit for completetion Q 93 */
+ 			/* set corresponding bit for completion Q 93 */
  			mask = 1 << QMGR_PENDING_BIT_Q(first_completion_queue);
  			/* not set all bits for queues less than Q 93 */
  			mask--;
···
  	 * transfer descriptor followed by TD descriptor. Waiting seems not to
  	 * cause any difference.
  	 * RX seems to be thrown out right away. However once the TearDown
-  	 * descriptor gets through we are done. If we have seens the transfer
+ 	 * descriptor gets through we are done. If we have seen the transfer
  	 * descriptor before the TD we fetch it from enqueue, it has to be
  	 * there waiting for us.
  	 */
···
  	struct cppi41_channel *cc, *_ct;
 
  	/*
-  	 * channels might still be in the pendling list if
+ 	 * channels might still be in the pending list if
  	 * cppi41_dma_issue_pending() is called after
  	 * cppi41_runtime_suspend() is called
  	 */
+5 -5
drivers/dma/ti/edma.c
···
 
  /*
   * Max of 20 segments per channel to conserve PaRAM slots
-  * Also note that MAX_NR_SG should be atleast the no.of periods
+  * Also note that MAX_NR_SG should be at least the no.of periods
   * that are required for ASoC, otherwise DMA prep calls will
   * fail. Today davinci-pcm is the only user of this driver and
-  * requires atleast 17 slots, so we setup the default to 20.
+  * requires at least 17 slots, so we setup the default to 20.
   */
  #define MAX_NR_SG		20
  #define EDMA_MAX_SLOTS		MAX_NR_SG
···
  			 * and quotient respectively of the division of:
  			 * (dma_length / acnt) by (SZ_64K -1). This is so
  			 * that in case bcnt over flows, we have ccnt to use.
-  			 * Note: In A-sync tranfer only, bcntrld is used, but it
+ 			 * Note: In A-sync transfer only, bcntrld is used, but it
  			 * only applies for sg_dma_len(sg) >= SZ_64K.
  			 * In this case, the best way adopted is- bccnt for the
  			 * first frame will be the remainder below. Then for
···
  		 * slot2: the remaining amount of data after slot1.
  		 * ACNT = full_length - length1, length2 = ACNT
  		 *
-  		 * When the full_length is multibple of 32767 one slot can be
+ 		 * When the full_length is a multiple of 32767 one slot can be
  		 * used to complete the transfer.
  		 */
  		width = array_size;
···
   * This limit exists to avoid a possible infinite loop when waiting for proof
   * that a particular transfer is completed. This limit can be hit if there
   * are large bursts to/from slow devices or the CPU is never able to catch
-  * the DMA hardware idle. On an AM335x transfering 48 bytes from the UART
+  * the DMA hardware idle. On an AM335x transferring 48 bytes from the UART
   * RX-FIFO, as many as 55 loops have been seen.
   */
  #define EDMA_MAX_TR_WAIT_LOOPS 1000
+186
drivers/dma/ti/k3-psil-am62.c
···
+ // SPDX-License-Identifier: GPL-2.0
+ /*
+  * Copyright (C) 2022 Texas Instruments Incorporated - https://www.ti.com
+  */
+ 
+ #include <linux/kernel.h>
+ 
+ #include "k3-psil-priv.h"
+ 
+ #define PSIL_PDMA_XY_PKT(x)				\
+ 	{						\
+ 		.thread_id = x,				\
+ 		.ep_config = {				\
+ 			.ep_type = PSIL_EP_PDMA_XY,	\
+ 			.mapped_channel_id = -1,	\
+ 			.default_flow_id = -1,		\
+ 			.pkt_mode = 1,			\
+ 		},					\
+ 	}
+ 
+ #define PSIL_ETHERNET(x, ch, flow_base, flow_cnt)	\
+ 	{						\
+ 		.thread_id = x,				\
+ 		.ep_config = {				\
+ 			.ep_type = PSIL_EP_NATIVE,	\
+ 			.pkt_mode = 1,			\
+ 			.needs_epib = 1,		\
+ 			.psd_size = 16,			\
+ 			.mapped_channel_id = ch,	\
+ 			.flow_start = flow_base,	\
+ 			.flow_num = flow_cnt,		\
+ 			.default_flow_id = flow_base,	\
+ 		},					\
+ 	}
+ 
+ #define PSIL_SAUL(x, ch, flow_base, flow_cnt, default_flow, tx)	\
+ 	{							\
+ 		.thread_id = x,					\
+ 		.ep_config = {					\
+ 			.ep_type = PSIL_EP_NATIVE,		\
+ 			.pkt_mode = 1,				\
+ 			.needs_epib = 1,			\
+ 			.psd_size = 64,				\
+ 			.mapped_channel_id = ch,		\
+ 			.flow_start = flow_base,		\
+ 			.flow_num = flow_cnt,			\
+ 			.default_flow_id = default_flow,	\
+ 			.notdpkt = tx,				\
+ 		},						\
+ 	}
+ 
+ #define PSIL_PDMA_MCASP(x)				\
+ 	{						\
+ 		.thread_id = x,				\
+ 		.ep_config = {				\
+ 			.ep_type = PSIL_EP_PDMA_XY,	\
+ 			.pdma_acc32 = 1,		\
+ 			.pdma_burst = 1,		\
+ 		},					\
+ 	}
+ 
+ #define PSIL_CSI2RX(x)					\
+ 	{						\
+ 		.thread_id = x,				\
+ 		.ep_config = {				\
+ 			.ep_type = PSIL_EP_NATIVE,	\
+ 		},					\
+ 	}
+ 
+ /* PSI-L source thread IDs, used for RX (DMA_DEV_TO_MEM) */
+ static struct psil_ep am62_src_ep_map[] = {
+ 	/* SAUL */
+ 	PSIL_SAUL(0x7500, 20, 35, 8, 35, 0),
+ 	PSIL_SAUL(0x7501, 21, 35, 8, 36, 0),
+ 	PSIL_SAUL(0x7502, 22, 43, 8, 43, 0),
+ 	PSIL_SAUL(0x7503, 23, 43, 8, 44, 0),
+ 	/* PDMA_MAIN0 - SPI0-3 */
+ 	PSIL_PDMA_XY_PKT(0x4302),
+ 	PSIL_PDMA_XY_PKT(0x4303),
+ 	PSIL_PDMA_XY_PKT(0x4304),
+ 	PSIL_PDMA_XY_PKT(0x4305),
+ 	PSIL_PDMA_XY_PKT(0x4306),
+ 	PSIL_PDMA_XY_PKT(0x4307),
+ 	PSIL_PDMA_XY_PKT(0x4308),
+ 	PSIL_PDMA_XY_PKT(0x4309),
+ 	PSIL_PDMA_XY_PKT(0x430a),
+ 	PSIL_PDMA_XY_PKT(0x430b),
+ 	PSIL_PDMA_XY_PKT(0x430c),
+ 	PSIL_PDMA_XY_PKT(0x430d),
+ 	/* PDMA_MAIN1 - UART0-6 */
+ 	PSIL_PDMA_XY_PKT(0x4400),
+ 	PSIL_PDMA_XY_PKT(0x4401),
+ 	PSIL_PDMA_XY_PKT(0x4402),
+ 	PSIL_PDMA_XY_PKT(0x4403),
+ 	PSIL_PDMA_XY_PKT(0x4404),
+ 	PSIL_PDMA_XY_PKT(0x4405),
+ 	PSIL_PDMA_XY_PKT(0x4406),
+ 	/* PDMA_MAIN2 - MCASP0-2 */
+ 	PSIL_PDMA_MCASP(0x4500),
+ 	PSIL_PDMA_MCASP(0x4501),
+ 	PSIL_PDMA_MCASP(0x4502),
+ 	/* CPSW3G */
+ 	PSIL_ETHERNET(0x4600, 19, 19, 16),
+ 	/* CSI2RX */
+ 	PSIL_CSI2RX(0x4700),
+ 	PSIL_CSI2RX(0x4701),
+ 	PSIL_CSI2RX(0x4702),
+ 	PSIL_CSI2RX(0x4703),
+ 	PSIL_CSI2RX(0x4704),
+ 	PSIL_CSI2RX(0x4705),
+ 	PSIL_CSI2RX(0x4706),
+ 	PSIL_CSI2RX(0x4707),
+ 	PSIL_CSI2RX(0x4708),
+ 	PSIL_CSI2RX(0x4709),
+ 	PSIL_CSI2RX(0x470a),
+ 	PSIL_CSI2RX(0x470b),
+ 	PSIL_CSI2RX(0x470c),
+ 	PSIL_CSI2RX(0x470d),
+ 	PSIL_CSI2RX(0x470e),
+ 	PSIL_CSI2RX(0x470f),
+ 	PSIL_CSI2RX(0x4710),
+ 	PSIL_CSI2RX(0x4711),
+ 	PSIL_CSI2RX(0x4712),
+ 	PSIL_CSI2RX(0x4713),
+ 	PSIL_CSI2RX(0x4714),
+ 	PSIL_CSI2RX(0x4715),
+ 	PSIL_CSI2RX(0x4716),
+ 	PSIL_CSI2RX(0x4717),
+ 	PSIL_CSI2RX(0x4718),
+ 	PSIL_CSI2RX(0x4719),
+ 	PSIL_CSI2RX(0x471a),
+ 	PSIL_CSI2RX(0x471b),
+ 	PSIL_CSI2RX(0x471c),
+ 	PSIL_CSI2RX(0x471d),
+ 	PSIL_CSI2RX(0x471e),
+ 	PSIL_CSI2RX(0x471f),
+ };
+ 
+ /* PSI-L destination thread IDs, used for TX (DMA_MEM_TO_DEV) */
+ static struct psil_ep am62_dst_ep_map[] = {
+ 	/* SAUL */
+ 	PSIL_SAUL(0xf500, 27, 83, 8, 83, 1),
+ 	PSIL_SAUL(0xf501, 28, 91, 8, 91, 1),
+ 	/* PDMA_MAIN0 - SPI0-3 */
+ 	PSIL_PDMA_XY_PKT(0xc302),
+ 	PSIL_PDMA_XY_PKT(0xc303),
+ 	PSIL_PDMA_XY_PKT(0xc304),
+ 	PSIL_PDMA_XY_PKT(0xc305),
+ 	PSIL_PDMA_XY_PKT(0xc306),
+ 	PSIL_PDMA_XY_PKT(0xc307),
+ 	PSIL_PDMA_XY_PKT(0xc308),
+ 	PSIL_PDMA_XY_PKT(0xc309),
+ 	PSIL_PDMA_XY_PKT(0xc30a),
+ 	PSIL_PDMA_XY_PKT(0xc30b),
+ 	PSIL_PDMA_XY_PKT(0xc30c),
+ 	PSIL_PDMA_XY_PKT(0xc30d),
+ 	/* PDMA_MAIN1 - UART0-6 */
+ 	PSIL_PDMA_XY_PKT(0xc400),
+ 	PSIL_PDMA_XY_PKT(0xc401),
+ 	PSIL_PDMA_XY_PKT(0xc402),
+ 	PSIL_PDMA_XY_PKT(0xc403),
+ 	PSIL_PDMA_XY_PKT(0xc404),
+ 	PSIL_PDMA_XY_PKT(0xc405),
+ 	PSIL_PDMA_XY_PKT(0xc406),
+ 	/* PDMA_MAIN2 - MCASP0-2 */
+ 	PSIL_PDMA_MCASP(0xc500),
+ 	PSIL_PDMA_MCASP(0xc501),
+ 	PSIL_PDMA_MCASP(0xc502),
+ 	/* CPSW3G */
+ 	PSIL_ETHERNET(0xc600, 19, 19, 8),
+ 	PSIL_ETHERNET(0xc601, 20, 27, 8),
+ 	PSIL_ETHERNET(0xc602, 21, 35, 8),
+ 	PSIL_ETHERNET(0xc603, 22, 43, 8),
+ 	PSIL_ETHERNET(0xc604, 23, 51, 8),
+ 	PSIL_ETHERNET(0xc605, 24, 59, 8),
+ 	PSIL_ETHERNET(0xc606, 25, 67, 8),
+ 	PSIL_ETHERNET(0xc607, 26, 75, 8),
+ };
+ 
+ struct psil_ep_map am62_ep_map = {
+ 	.name = "am62",
+ 	.src = am62_src_ep_map,
+ 	.src_count = ARRAY_SIZE(am62_src_ep_map),
+ 	.dst = am62_dst_ep_map,
+ 	.dst_count = ARRAY_SIZE(am62_dst_ep_map),
+ };
+1
drivers/dma/ti/k3-psil-priv.h
···
  extern struct psil_ep_map j7200_ep_map;
  extern struct psil_ep_map am64_ep_map;
  extern struct psil_ep_map j721s2_ep_map;
+ extern struct psil_ep_map am62_ep_map;
 
  #endif /* K3_PSIL_PRIV_H_ */
+1
drivers/dma/ti/k3-psil.c
···
  	{ .family = "J7200", .data = &j7200_ep_map },
  	{ .family = "AM64X", .data = &am64_ep_map },
  	{ .family = "J721S2", .data = &j721s2_ep_map },
+ 	{ .family = "AM62X", .data = &am62_ep_map },
  	{ /* sentinel */ }
  };
+1
drivers/dma/ti/k3-udma.c
···
  	{ .family = "J7200", .data = &j7200_soc_data },
  	{ .family = "AM64X", .data = &am64_soc_data },
  	{ .family = "J721S2", .data = &j721e_soc_data},
+ 	{ .family = "AM62X", .data = &am64_soc_data },
  	{ /* sentinel */ }
  };
+1 -1
drivers/dma/ti/omap-dma.c
···
   * A source-synchronised channel is one where the fetching of data is
   * under control of the device. In other words, a device-to-memory
   * transfer. So, a destination-synchronised channel (which would be a
-  * memory-to-device transfer) undergoes an abort if the the CCR_ENABLE
+  * memory-to-device transfer) undergoes an abort if the CCR_ENABLE
   * bit is cleared.
   * From 16.1.4.20.4.6.2 Abort: "If an abort trigger occurs, the channel
   * aborts immediately after completion of current read/write