Merge tag 'dma-mapping-4.14' of git://git.infradead.org/users/hch/dma-mapping

Pull dma-mapping updates from Christoph Hellwig:

- removal of the old dma_alloc_noncoherent interface (a conversion sketch
  follows this list)

- remove unused flags to dma_declare_coherent_memory

- restrict OF DMA configuration to specific physical busses

- use the iommu mailing list for dma-mapping questions and patches
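
For drivers still on the old interface, the conversion is mechanical: pass
DMA_ATTR_NON_CONSISTENT to the attrs-aware helpers. A minimal sketch of the
pattern applied throughout this series (foo_alloc_ring/foo_free_ring and the
device pointer are illustrative names, not from this merge):

    #include <linux/dma-mapping.h>

    static void *foo_alloc_ring(struct device *dev, size_t size,
                                dma_addr_t *handle)
    {
            /* was: dma_alloc_noncoherent(dev, size, handle, GFP_KERNEL) */
            return dma_alloc_attrs(dev, size, handle, GFP_KERNEL,
                                   DMA_ATTR_NON_CONSISTENT);
    }

    static void foo_free_ring(struct device *dev, size_t size, void *vaddr,
                              dma_addr_t handle)
    {
            /* was: dma_free_noncoherent(dev, size, vaddr, handle) */
            dma_free_attrs(dev, size, vaddr, handle, DMA_ATTR_NON_CONSISTENT);
    }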

* tag 'dma-mapping-4.14' of git://git.infradead.org/users/hch/dma-mapping:
dma-coherent: fix dma_declare_coherent_memory() logic error
ARM: imx: mx31moboard: Remove unused 'dma' variable
dma-coherent: remove an unused variable
MAINTAINERS: use the iommu list for the dma-mapping subsystem
dma-coherent: remove the DMA_MEMORY_MAP and DMA_MEMORY_IO flags
dma-coherent: remove the DMA_MEMORY_INCLUDES_CHILDREN flag
of: restrict DMA configuration
dma-mapping: remove dma_alloc_noncoherent and dma_free_noncoherent
i825xx: switch to dma_alloc_attrs
au1000_eth: switch to dma_alloc_attrs
sgiseeq: switch to dma_alloc_attrs
dma-mapping: reduce dma_mapping_error inline bloat

+156 -210
+17 -38
Documentation/DMA-API.txt
···
 ::

     void *
-    dma_alloc_noncoherent(struct device *dev, size_t size,
-                          dma_addr_t *dma_handle, gfp_t flag)
+    dma_alloc_attrs(struct device *dev, size_t size, dma_addr_t *dma_handle,
+                    gfp_t flag, unsigned long attrs)

-Identical to dma_alloc_coherent() except that the platform will
-choose to return either consistent or non-consistent memory as it sees
-fit. By using this API, you are guaranteeing to the platform that you
-have all the correct and necessary sync points for this memory in the
-driver should it choose to return non-consistent memory.
+Identical to dma_alloc_coherent() except that when the
+DMA_ATTR_NON_CONSISTENT flag is passed in the attrs argument, the
+platform will choose to return either consistent or non-consistent memory
+as it sees fit. By using this API, you are guaranteeing to the platform
+that you have all the correct and necessary sync points for this memory
+in the driver should it choose to return non-consistent memory.

 Note: where the platform can return consistent memory, it will
 guarantee that the sync points become nops.
···
 ::

     void
-    dma_free_noncoherent(struct device *dev, size_t size, void *cpu_addr,
-                         dma_addr_t dma_handle)
+    dma_free_attrs(struct device *dev, size_t size, void *cpu_addr,
+                   dma_addr_t dma_handle, unsigned long attrs)

-Free memory allocated by the nonconsistent API. All parameters must
-be identical to those passed in (and returned by
-dma_alloc_noncoherent()).
+Free memory allocated by dma_alloc_attrs(). All common parameters must
+be identical to those passed to dma_free_coherent(), and the attrs
+argument must be identical to the attrs passed to dma_alloc_attrs().

 ::
···
     dma_cache_sync(struct device *dev, void *vaddr, size_t size,
                    enum dma_data_direction direction)

-Do a partial sync of memory that was allocated by
-dma_alloc_noncoherent(), starting at virtual address vaddr and
+Do a partial sync of memory that was allocated by dma_alloc_attrs() with
+the DMA_ATTR_NON_CONSISTENT flag, starting at virtual address vaddr and
 continuing on for size. Again, you *must* observe the cache line
 boundaries when doing this.
···

 flags can be ORed together and are:

-- DMA_MEMORY_MAP - request that the memory returned from
-  dma_alloc_coherent() be directly writable.
-
-- DMA_MEMORY_IO - request that the memory returned from
-  dma_alloc_coherent() be addressable using read()/write()/memcpy_toio() etc.
-
-One or both of these flags must be present.
-
-- DMA_MEMORY_INCLUDES_CHILDREN - make the declared memory be allocated by
-  dma_alloc_coherent of any child devices of this one (for memory residing
-  on a bridge).
-
 - DMA_MEMORY_EXCLUSIVE - only allocate memory from the declared regions.
   Do not allow dma_alloc_coherent() to fall back to system memory when
   it's out of memory in the declared region.

-The return value will be either DMA_MEMORY_MAP or DMA_MEMORY_IO and
-must correspond to a passed in flag (i.e. no returning DMA_MEMORY_IO
-if only DMA_MEMORY_MAP were passed in) for success or zero for
-failure.
-
-Note, for DMA_MEMORY_IO returns, all subsequent memory returned by
-dma_alloc_coherent() may no longer be accessed directly, but instead
-must be accessed using the correct bus functions. If your driver
-isn't prepared to handle this contingency, it should not specify
-DMA_MEMORY_IO in the input flags.
-
-As a simplification for the platforms, only **one** such region of
+As a simplification for the platforms, only *one* such region of
 memory may be declared per device.

 For reasons of efficiency, most platforms choose to track the declared
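
To illustrate the documented contract above, here is a hedged sketch of a
driver using possibly non-consistent memory (foo_prepare_buffer and its
arguments are assumed names, not part of this merge):

    #include <linux/dma-mapping.h>
    #include <linux/string.h>

    static void *foo_prepare_buffer(struct device *dev, size_t len,
                                    dma_addr_t *handle)
    {
            void *vaddr;

            vaddr = dma_alloc_attrs(dev, len, handle, GFP_KERNEL,
                                    DMA_ATTR_NON_CONSISTENT);
            if (!vaddr)
                    return NULL;

            memset(vaddr, 0, len);
            /*
             * The memory may be non-consistent: flush CPU writes before
             * the device looks at the buffer.
             */
            dma_cache_sync(dev, vaddr, len, DMA_TO_DEVICE);
            return vaddr;
    }

If the platform returned consistent memory instead, the dma_cache_sync()
call degenerates to a nop, as the documentation notes.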
+1 -1
MAINTAINERS
···
 M: Christoph Hellwig <hch@lst.de>
 M: Marek Szyprowski <m.szyprowski@samsung.com>
 R: Robin Murphy <robin.murphy@arm.com>
-L: linux-kernel@vger.kernel.org
+L: iommu@lists.linux-foundation.org
 T: git git://git.infradead.org/users/hch/dma-mapping.git
 W: http://git.infradead.org/users/hch/dma-mapping.git
 S: Supported
+17 -27
arch/arm/mach-imx/mach-imx27_visstrim_m10.c
···
 static void __init visstrim_analog_camera_init(void)
 {
 	struct platform_device *pdev;
-	int dma;

 	gpio_set_value(TVP5150_PWDN, 1);
 	ndelay(1);
···
 	if (IS_ERR(pdev))
 		return;

-	dma = dma_declare_coherent_memory(&pdev->dev,
-			mx2_camera_base, mx2_camera_base,
-			MX2_CAMERA_BUF_SIZE,
-			DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE);
-	if (!(dma & DMA_MEMORY_MAP))
-		return;
+	dma_declare_coherent_memory(&pdev->dev, mx2_camera_base,
+				    mx2_camera_base, MX2_CAMERA_BUF_SIZE,
+				    DMA_MEMORY_EXCLUSIVE);
 }

 static void __init visstrim_reserve(void)
···
 static void __init visstrim_coda_init(void)
 {
 	struct platform_device *pdev;
-	int dma;

 	pdev = imx27_add_coda();
-	dma = dma_declare_coherent_memory(&pdev->dev,
-			mx2_camera_base + MX2_CAMERA_BUF_SIZE,
-			mx2_camera_base + MX2_CAMERA_BUF_SIZE,
-			MX2_CAMERA_BUF_SIZE,
-			DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE);
-	if (!(dma & DMA_MEMORY_MAP))
-		return;
+	dma_declare_coherent_memory(&pdev->dev,
+				    mx2_camera_base + MX2_CAMERA_BUF_SIZE,
+				    mx2_camera_base + MX2_CAMERA_BUF_SIZE,
+				    MX2_CAMERA_BUF_SIZE,
+				    DMA_MEMORY_EXCLUSIVE);
 }

 /* DMA deinterlace */
···
 {
 	int ret = -ENOMEM;
 	struct platform_device *pdev = &visstrim_deinterlace;
-	int dma;

 	ret = platform_device_register(pdev);

-	dma = dma_declare_coherent_memory(&pdev->dev,
-			mx2_camera_base + 2 * MX2_CAMERA_BUF_SIZE,
-			mx2_camera_base + 2 * MX2_CAMERA_BUF_SIZE,
-			MX2_CAMERA_BUF_SIZE,
-			DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE);
-	if (!(dma & DMA_MEMORY_MAP))
-		return;
+	dma_declare_coherent_memory(&pdev->dev,
+				    mx2_camera_base + 2 * MX2_CAMERA_BUF_SIZE,
+				    mx2_camera_base + 2 * MX2_CAMERA_BUF_SIZE,
+				    MX2_CAMERA_BUF_SIZE,
+				    DMA_MEMORY_EXCLUSIVE);
 }

 /* Emma-PrP for format conversion */
 static void __init visstrim_emmaprp_init(void)
 {
 	struct platform_device *pdev;
-	int dma;
+	int ret;

 	pdev = imx27_add_mx2_emmaprp();
 	if (IS_ERR(pdev))
···
 	 * Use the same memory area as the analog camera since both
 	 * devices are, by nature, exclusive.
 	 */
-	dma = dma_declare_coherent_memory(&pdev->dev,
+	ret = dma_declare_coherent_memory(&pdev->dev,
 			mx2_camera_base, mx2_camera_base,
 			MX2_CAMERA_BUF_SIZE,
-			DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE);
-	if (!(dma & DMA_MEMORY_MAP))
+			DMA_MEMORY_EXCLUSIVE);
+	if (ret)
 		pr_err("Failed to declare memory for emmaprp\n");
 }
+6 -6
arch/arm/mach-imx/mach-mx31moboard.c
···

 static int __init mx31moboard_init_cam(void)
 {
-	int dma, ret = -ENOMEM;
+	int ret;
 	struct platform_device *pdev;

 	imx31_add_ipu_core();
···
 	if (IS_ERR(pdev))
 		return PTR_ERR(pdev);

-	dma = dma_declare_coherent_memory(&pdev->dev,
-			mx3_camera_base, mx3_camera_base,
-			MX3_CAMERA_BUF_SIZE,
-			DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE);
-	if (!(dma & DMA_MEMORY_MAP))
+	ret = dma_declare_coherent_memory(&pdev->dev,
+					  mx3_camera_base, mx3_camera_base,
+					  MX3_CAMERA_BUF_SIZE,
+					  DMA_MEMORY_EXCLUSIVE);
+	if (ret)
 		goto err;

 	ret = platform_device_add(pdev);
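
The board-file changes above follow from the new calling convention:
dma_declare_coherent_memory() now returns 0 or a negative errno instead of
a DMA_MEMORY_MAP/DMA_MEMORY_IO bitmask. A sketch of the new pattern
(foo_declare_sram and its arguments are made up for illustration):

    #include <linux/dma-mapping.h>
    #include <linux/platform_device.h>

    static int __init foo_declare_sram(struct platform_device *pdev,
                                       phys_addr_t base, size_t size)
    {
            int ret;

            ret = dma_declare_coherent_memory(&pdev->dev, base, base, size,
                                              DMA_MEMORY_EXCLUSIVE);
            if (ret)
                    dev_err(&pdev->dev,
                            "coherent pool declaration failed: %d\n", ret);
            return ret;
    }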
+1 -1
arch/metag/include/asm/dma-mapping.h
···
 }

 /*
- * dma_alloc_noncoherent() returns non-cacheable memory, so there's no need to
+ * dma_alloc_attrs() always returns non-cacheable memory, so there's no need to
  * do any flushing here.
  */
 static inline void
+1 -1
arch/nios2/include/asm/dma-mapping.h
···
 }

 /*
- * dma_alloc_noncoherent() returns non-cacheable memory, so there's no need to
+ * dma_alloc_attrs() always returns non-cacheable memory, so there's no need to
  * do any flushing here.
  */
 static inline void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
+1 -2
arch/sh/drivers/pci/fixups-dreamcast.c
···
 		res.end = GAPSPCI_DMA_BASE + GAPSPCI_DMA_SIZE - 1;
 		res.flags = IORESOURCE_MEM;
 		pcibios_resource_to_bus(dev->bus, &region, &res);
-		BUG_ON(!dma_declare_coherent_memory(&dev->dev,
+		BUG_ON(dma_declare_coherent_memory(&dev->dev,
 						res.start,
 						region.start,
 						resource_size(&res),
-						DMA_MEMORY_MAP |
 						DMA_MEMORY_EXCLUSIVE));
 		break;
 	default:
+2 -2
arch/tile/include/asm/dma-mapping.h
···
 int dma_set_mask(struct device *dev, u64 mask);

 /*
- * dma_alloc_noncoherent() is #defined to return coherent memory,
- * so there's no need to do any flushing here.
+ * dma_alloc_attrs() always returns non-cacheable memory, so there's no need to
+ * do any flushing here.
  */
 static inline void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
 				  enum dma_data_direction direction)
+38 -47
drivers/base/dma-coherent.c
···
 	return mem->device_base;
 }

-static bool dma_init_coherent_memory(
+static int dma_init_coherent_memory(
 	phys_addr_t phys_addr, dma_addr_t device_addr, size_t size, int flags,
 	struct dma_coherent_mem **mem)
 {
···
 	void __iomem *mem_base = NULL;
 	int pages = size >> PAGE_SHIFT;
 	int bitmap_size = BITS_TO_LONGS(pages) * sizeof(long);
+	int ret;

-	if ((flags & (DMA_MEMORY_MAP | DMA_MEMORY_IO)) == 0)
-		goto out;
-	if (!size)
+	if (!size) {
+		ret = -EINVAL;
 		goto out;
+	}

-	if (flags & DMA_MEMORY_MAP)
-		mem_base = memremap(phys_addr, size, MEMREMAP_WC);
-	else
-		mem_base = ioremap(phys_addr, size);
-	if (!mem_base)
+	mem_base = memremap(phys_addr, size, MEMREMAP_WC);
+	if (!mem_base) {
+		ret = -EINVAL;
 		goto out;
-
+	}
 	dma_mem = kzalloc(sizeof(struct dma_coherent_mem), GFP_KERNEL);
-	if (!dma_mem)
+	if (!dma_mem) {
+		ret = -ENOMEM;
 		goto out;
+	}
 	dma_mem->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
-	if (!dma_mem->bitmap)
+	if (!dma_mem->bitmap) {
+		ret = -ENOMEM;
 		goto out;
+	}

 	dma_mem->virt_base = mem_base;
 	dma_mem->device_base = device_addr;
···
 	spin_lock_init(&dma_mem->spinlock);

 	*mem = dma_mem;
-	return true;
+	return 0;

 out:
 	kfree(dma_mem);
-	if (mem_base) {
-		if (flags & DMA_MEMORY_MAP)
-			memunmap(mem_base);
-		else
-			iounmap(mem_base);
-	}
-	return false;
+	if (mem_base)
+		memunmap(mem_base);
+	return ret;
 }

 static void dma_release_coherent_memory(struct dma_coherent_mem *mem)
···
 	if (!mem)
 		return;

-	if (mem->flags & DMA_MEMORY_MAP)
-		memunmap(mem->virt_base);
-	else
-		iounmap(mem->virt_base);
+	memunmap(mem->virt_base);
 	kfree(mem->bitmap);
 	kfree(mem);
 }
···
 		return -EBUSY;

 	dev->dma_mem = mem;
-	/* FIXME: this routine just ignores DMA_MEMORY_INCLUDES_CHILDREN */
-
 	return 0;
 }
···
 	dma_addr_t device_addr, size_t size, int flags)
 {
 	struct dma_coherent_mem *mem;
+	int ret;

-	if (!dma_init_coherent_memory(phys_addr, device_addr, size, flags,
-				      &mem))
-		return 0;
+	ret = dma_init_coherent_memory(phys_addr, device_addr, size, flags, &mem);
+	if (ret)
+		return ret;

-	if (dma_assign_coherent_memory(dev, mem) == 0)
-		return flags & DMA_MEMORY_MAP ? DMA_MEMORY_MAP : DMA_MEMORY_IO;
-
-	dma_release_coherent_memory(mem);
-	return 0;
+	ret = dma_assign_coherent_memory(dev, mem);
+	if (ret)
+		dma_release_coherent_memory(mem);
+	return ret;
 }
 EXPORT_SYMBOL(dma_declare_coherent_memory);
···
 	int order = get_order(size);
 	unsigned long flags;
 	int pageno;
-	int dma_memory_map;
 	void *ret;

 	spin_lock_irqsave(&mem->spinlock, flags);
···
 	 */
 	*dma_handle = mem->device_base + (pageno << PAGE_SHIFT);
 	ret = mem->virt_base + (pageno << PAGE_SHIFT);
-	dma_memory_map = (mem->flags & DMA_MEMORY_MAP);
 	spin_unlock_irqrestore(&mem->spinlock, flags);
-	if (dma_memory_map)
-		memset(ret, 0, size);
-	else
-		memset_io(ret, 0, size);
-
+	memset(ret, 0, size);
 	return ret;
-
 err:
 	spin_unlock_irqrestore(&mem->spinlock, flags);
 	return NULL;
···
 static int rmem_dma_device_init(struct reserved_mem *rmem, struct device *dev)
 {
 	struct dma_coherent_mem *mem = rmem->priv;
+	int ret;

-	if (!mem &&
-	    !dma_init_coherent_memory(rmem->base, rmem->base, rmem->size,
-				      DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE,
-				      &mem)) {
+	if (!mem)
+		return -ENODEV;
+
+	ret = dma_init_coherent_memory(rmem->base, rmem->base, rmem->size,
+				       DMA_MEMORY_EXCLUSIVE, &mem);
+
+	if (ret) {
 		pr_err("Reserved memory: failed to init DMA memory pool at %pa, size %ld MiB\n",
 		       &rmem->base, (unsigned long)rmem->size / SZ_1M);
-		return -ENODEV;
+		return ret;
 	}
 	mem->use_dev_dma_pfn_offset = true;
 	rmem->priv = mem;
+2 -5
drivers/base/dma-mapping.c
···

 	rc = dma_declare_coherent_memory(dev, phys_addr, device_addr, size,
 					 flags);
-	if (rc) {
+	if (!rc)
 		devres_add(dev, res);
-		rc = 0;
-	} else {
+	else
 		devres_free(res);
-		rc = -ENOMEM;
-	}

 	return rc;
 }
-3
drivers/char/virtio_console.c
···
 	 * device is created by remoteproc, the DMA memory is
 	 * associated with the grandparent device:
 	 * vdev => rproc => platform-dev.
-	 * The code here would have been less quirky if
-	 * DMA_MEMORY_INCLUDES_CHILDREN had been supported
-	 * in dma-coherent.c
 	 */
 	if (!vq->vdev->dev.parent || !vq->vdev->dev.parent->parent)
 		goto free_buf;
+2 -3
drivers/media/platform/soc_camera/sh_mobile_ceu_camera.c
···
 	err = dma_declare_coherent_memory(&pdev->dev, res->start,
 					  res->start,
 					  resource_size(res),
-					  DMA_MEMORY_MAP |
 					  DMA_MEMORY_EXCLUSIVE);
-	if (!err) {
+	if (err) {
 		dev_err(&pdev->dev, "Unable to declare CEU memory.\n");
-		return -ENXIO;
+		return err;
 	}

 	pcdev->video_limit = resource_size(res);
+10 -8
drivers/net/ethernet/amd/au1000_eth.c
···
 	/* Allocate the data buffers
 	 * Snooping works fine with eth on all au1xxx
 	 */
-	aup->vaddr = (u32)dma_alloc_noncoherent(NULL, MAX_BUF_SIZE *
-						(NUM_TX_BUFFS + NUM_RX_BUFFS),
-						&aup->dma_addr, 0);
+	aup->vaddr = (u32)dma_alloc_attrs(NULL, MAX_BUF_SIZE *
+					  (NUM_TX_BUFFS + NUM_RX_BUFFS),
+					  &aup->dma_addr, 0,
+					  DMA_ATTR_NON_CONSISTENT);
 	if (!aup->vaddr) {
 		dev_err(&pdev->dev, "failed to allocate data buffers\n");
 		err = -ENOMEM;
···
 err_remap2:
 	iounmap(aup->mac);
 err_remap1:
-	dma_free_noncoherent(NULL, MAX_BUF_SIZE * (NUM_TX_BUFFS + NUM_RX_BUFFS),
-			(void *)aup->vaddr, aup->dma_addr);
+	dma_free_attrs(NULL, MAX_BUF_SIZE * (NUM_TX_BUFFS + NUM_RX_BUFFS),
+		       (void *)aup->vaddr, aup->dma_addr,
+		       DMA_ATTR_NON_CONSISTENT);
 err_vaddr:
 	free_netdev(dev);
 err_alloc:
···
 	if (aup->tx_db_inuse[i])
 		au1000_ReleaseDB(aup, aup->tx_db_inuse[i]);

-	dma_free_noncoherent(NULL, MAX_BUF_SIZE *
-			(NUM_TX_BUFFS + NUM_RX_BUFFS),
-			(void *)aup->vaddr, aup->dma_addr);
+	dma_free_attrs(NULL, MAX_BUF_SIZE * (NUM_TX_BUFFS + NUM_RX_BUFFS),
+		       (void *)aup->vaddr, aup->dma_addr,
+		       DMA_ATTR_NON_CONSISTENT);

 	iounmap(aup->macdma);
 	iounmap(aup->mac);
+2 -4
drivers/net/ethernet/i825xx/lasi_82596.c
···

 #define OPT_SWAP_PORT	0x0001	/* Need to wordswp on the MPU port */

-#define DMA_ALLOC	dma_alloc_noncoherent
-#define DMA_FREE	dma_free_noncoherent
 #define DMA_WBACK(ndev, addr, len) \
 	do { dma_cache_sync((ndev)->dev.parent, (void *)addr, len, DMA_TO_DEVICE); } while (0)
···
 	struct i596_private *lp = netdev_priv(dev);

 	unregister_netdev (dev);
-	DMA_FREE(&pdev->dev, sizeof(struct i596_private),
-		 (void *)lp->dma, lp->dma_addr);
+	dma_free_attrs(&pdev->dev, sizeof(struct i596_private), lp->dma,
+		       lp->dma_addr, DMA_ATTR_NON_CONSISTENT);
 	free_netdev (dev);
 	return 0;
 }
+5 -4
drivers/net/ethernet/i825xx/lib82596.c
···
 	if (!dev->base_addr || !dev->irq)
 		return -ENODEV;

-	dma = (struct i596_dma *) DMA_ALLOC(dev->dev.parent,
-		sizeof(struct i596_dma), &lp->dma_addr, GFP_KERNEL);
+	dma = dma_alloc_attrs(dev->dev.parent, sizeof(struct i596_dma),
+			      &lp->dma_addr, GFP_KERNEL,
+			      DMA_ATTR_NON_CONSISTENT);
 	if (!dma) {
 		printk(KERN_ERR "%s: Couldn't get shared memory\n", __FILE__);
 		return -ENOMEM;
···

 	i = register_netdev(dev);
 	if (i) {
-		DMA_FREE(dev->dev.parent, sizeof(struct i596_dma),
-			 (void *)dma, lp->dma_addr);
+		dma_free_attrs(dev->dev.parent, sizeof(struct i596_dma),
+			       dma, lp->dma_addr, DMA_ATTR_NON_CONSISTENT);
 		return i;
 	}
+2 -4
drivers/net/ethernet/i825xx/sni_82596.c
···

 static const char sni_82596_string[] = "snirm_82596";

-#define DMA_ALLOC	dma_alloc_coherent
-#define DMA_FREE	dma_free_coherent
 #define DMA_WBACK(priv, addr, len)	do { } while (0)
 #define DMA_INV(priv, addr, len)	do { } while (0)
 #define DMA_WBACK_INV(priv, addr, len)	do { } while (0)
···
 	struct i596_private *lp = netdev_priv(dev);

 	unregister_netdev(dev);
-	DMA_FREE(dev->dev.parent, sizeof(struct i596_private),
-		 lp->dma, lp->dma_addr);
+	dma_free_attrs(dev->dev.parent, sizeof(struct i596_private), lp->dma,
+		       lp->dma_addr, DMA_ATTR_NON_CONSISTENT);
 	iounmap(lp->ca);
 	iounmap(lp->mpu_port);
 	free_netdev (dev);
+4 -4
drivers/net/ethernet/seeq/sgiseeq.c
···
 	sp = netdev_priv(dev);

 	/* Make private data page aligned */
-	sr = dma_alloc_noncoherent(&pdev->dev, sizeof(*sp->srings),
-				&sp->srings_dma, GFP_KERNEL);
+	sr = dma_alloc_attrs(&pdev->dev, sizeof(*sp->srings), &sp->srings_dma,
+			     GFP_KERNEL, DMA_ATTR_NON_CONSISTENT);
 	if (!sr) {
 		printk(KERN_ERR "Sgiseeq: Page alloc failed, aborting.\n");
 		err = -ENOMEM;
···
 	struct sgiseeq_private *sp = netdev_priv(dev);

 	unregister_netdev(dev);
-	dma_free_noncoherent(&pdev->dev, sizeof(*sp->srings), sp->srings,
-			     sp->srings_dma);
+	dma_free_attrs(&pdev->dev, sizeof(*sp->srings), sp->srings,
+		       sp->srings_dma, DMA_ATTR_NON_CONSISTENT);
 	free_netdev(dev);

 	return 0;
+32 -16
drivers/of/device.c
···
 #include <linux/module.h>
 #include <linux/mod_devicetable.h>
 #include <linux/slab.h>
+#include <linux/pci.h>
+#include <linux/platform_device.h>
+#include <linux/amba/bus.h>

 #include <asm/errno.h>
 #include "of_private.h"
···
  */
 int of_dma_configure(struct device *dev, struct device_node *np)
 {
-	u64 dma_addr, paddr, size;
+	u64 dma_addr, paddr, size = 0;
 	int ret;
 	bool coherent;
 	unsigned long offset;
 	const struct iommu_ops *iommu;
 	u64 mask;

-	/*
-	 * Set default coherent_dma_mask to 32 bit.  Drivers are expected to
-	 * setup the correct supported mask.
-	 */
-	if (!dev->coherent_dma_mask)
-		dev->coherent_dma_mask = DMA_BIT_MASK(32);
-
-	/*
-	 * Set it to coherent_dma_mask by default if the architecture
-	 * code has not set it.
-	 */
-	if (!dev->dma_mask)
-		dev->dma_mask = &dev->coherent_dma_mask;
-
 	ret = of_dma_get_range(np, &dma_addr, &paddr, &size);
 	if (ret < 0) {
+		/*
+		 * For legacy reasons, we have to assume some devices need
+		 * DMA configuration regardless of whether "dma-ranges" is
+		 * correctly specified or not.
+		 */
+		if (!dev_is_pci(dev) &&
+#ifdef CONFIG_ARM_AMBA
+		    dev->bus != &amba_bustype &&
+#endif
+		    dev->bus != &platform_bus_type)
+			return ret == -ENODEV ? 0 : ret;
+
 		dma_addr = offset = 0;
-		size = max(dev->coherent_dma_mask, dev->coherent_dma_mask + 1);
 	} else {
 		offset = PFN_DOWN(paddr - dma_addr);
···
 		}
 		dev_dbg(dev, "dma_pfn_offset(%#08lx)\n", offset);
 	}
+
+	/*
+	 * Set default coherent_dma_mask to 32 bit.  Drivers are expected to
+	 * setup the correct supported mask.
+	 */
+	if (!dev->coherent_dma_mask)
+		dev->coherent_dma_mask = DMA_BIT_MASK(32);
+	/*
+	 * Set it to coherent_dma_mask by default if the architecture
+	 * code has not set it.
+	 */
+	if (!dev->dma_mask)
+		dev->dma_mask = &dev->coherent_dma_mask;
+
+	if (!size)
+		size = max(dev->coherent_dma_mask, dev->coherent_dma_mask + 1);

 	dev->dma_pfn_offset = offset;
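
The effect of the restriction is that only PCI, AMBA and platform devices
get implicit DMA setup when "dma-ranges" is absent or broken. An
illustrative helper mirroring the check added above (the function name is
made up; it is not part of this merge):

    #include <linux/pci.h>
    #include <linux/platform_device.h>
    #include <linux/amba/bus.h>

    static bool foo_bus_wants_dma_configure(struct device *dev)
    {
            if (dev_is_pci(dev))
                    return true;
    #ifdef CONFIG_ARM_AMBA
            if (dev->bus == &amba_bustype)
                    return true;
    #endif
            return dev->bus == &platform_bus_type;
    }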
+1 -2
drivers/scsi/NCR_Q720.c
···
 	}

 	if (dma_declare_coherent_memory(dev, base_addr, base_addr,
-					mem_size, DMA_MEMORY_MAP)
-			!= DMA_MEMORY_MAP) {
+					mem_size, 0)) {
 		printk(KERN_ERR "NCR_Q720: DMA declare memory failed\n");
 		goto out_release_region;
 	}
+3 -4
drivers/usb/host/ohci-sm501.c
···
 	 * regular memory.  The HCD_LOCAL_MEM flag does just that.
 	 */

-	if (!dma_declare_coherent_memory(dev, mem->start,
+	retval = dma_declare_coherent_memory(dev, mem->start,
 					 mem->start - mem->parent->start,
 					 resource_size(mem),
-					 DMA_MEMORY_MAP |
-					 DMA_MEMORY_EXCLUSIVE)) {
+					 DMA_MEMORY_EXCLUSIVE);
+	if (retval) {
 		dev_err(dev, "cannot declare coherent memory\n");
-		retval = -ENXIO;
 		goto err1;
 	}
+3 -6
drivers/usb/host/ohci-tmio.c
···
 		goto err_ioremap_regs;
 	}

-	if (!dma_declare_coherent_memory(&dev->dev, sram->start,
-				sram->start,
-				resource_size(sram),
-				DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE)) {
-		ret = -EBUSY;
+	ret = dma_declare_coherent_memory(&dev->dev, sram->start, sram->start,
+				resource_size(sram), DMA_MEMORY_EXCLUSIVE);
+	if (ret)
 		goto err_dma_declare;
-	}

 	if (cell->enable) {
 		ret = cell->enable(dev);
+6 -22
include/linux/dma-mapping.h
···
 	return dma_free_attrs(dev, size, cpu_addr, dma_handle, 0);
 }

-static inline void *dma_alloc_noncoherent(struct device *dev, size_t size,
-		dma_addr_t *dma_handle, gfp_t gfp)
-{
-	return dma_alloc_attrs(dev, size, dma_handle, gfp,
-			       DMA_ATTR_NON_CONSISTENT);
-}
-
-static inline void dma_free_noncoherent(struct device *dev, size_t size,
-		void *cpu_addr, dma_addr_t dma_handle)
-{
-	dma_free_attrs(dev, size, cpu_addr, dma_handle,
-		       DMA_ATTR_NON_CONSISTENT);
-}
-
 static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
 {
-	debug_dma_mapping_error(dev, dma_addr);
+	const struct dma_map_ops *ops = get_dma_ops(dev);

-	if (get_dma_ops(dev)->mapping_error)
-		return get_dma_ops(dev)->mapping_error(dev, dma_addr);
+	debug_dma_mapping_error(dev, dma_addr);
+	if (ops->mapping_error)
+		return ops->mapping_error(dev, dma_addr);
 	return 0;
 }
···
 #endif

 /* flags for the coherent memory api */
-#define DMA_MEMORY_MAP			0x01
-#define DMA_MEMORY_IO			0x02
-#define DMA_MEMORY_INCLUDES_CHILDREN	0x04
-#define DMA_MEMORY_EXCLUSIVE		0x08
+#define DMA_MEMORY_EXCLUSIVE		0x01

 #ifdef CONFIG_HAVE_GENERIC_DMA_COHERENT
 int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
···
 dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
 			    dma_addr_t device_addr, size_t size, int flags)
 {
-	return 0;
+	return -ENOSYS;
 }

 static inline void
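
The dma_mapping_error() rework only caches the get_dma_ops() result so the
inline expands to a single ops lookup instead of two; callers are
unchanged. A typical caller, for illustration (foo_map_tx is an assumed
name, not from this merge):

    #include <linux/dma-mapping.h>

    static int foo_map_tx(struct device *dev, void *data, size_t len,
                          dma_addr_t *mapping)
    {
            *mapping = dma_map_single(dev, data, len, DMA_TO_DEVICE);
            /* must be checked before handing the address to hardware */
            if (dma_mapping_error(dev, *mapping))
                    return -ENOMEM;
            return 0;
    }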