
Merge branch 'for-linus-dma-masks' of git://git.linaro.org/people/rmk/linux-arm

Pull DMA mask updates from Russell King:
"This series cleans up the handling of DMA masks in a lot of drivers,
fixing some bugs as we go.

Some of the more serious errors include:
- drivers which only set their coherent DMA mask if the attempt to
set the streaming mask fails.
- drivers which test for a NULL dma mask pointer, and then set the
dma mask pointer to a location in their module .data section -
which will cause problems if the module is reloaded.

To counter these, I have introduced two helper functions:
- dma_set_mask_and_coherent() takes care of setting both the
streaming and coherent masks at the same time, with the correct
error handling as specified by the API.
- dma_coerce_mask_and_coherent() which resolves the problem of
drivers forcefully setting DMA masks. This is more a marker for
future work to further clean these locations up - the code which
creates the devices really should be initialising these, but to fix
that in one go along with this change could potentially be very
disruptive.

The last thing this series does is prise away some of Linux's addiction
to "DMA addresses are physical addresses and RAM always starts at
zero". We have ARM LPAE systems where all system memory is above 4GB
physical, hence having DMA masks interpreted by (eg) the block layers
as describing physical addresses in the range 0..DMAMASK fails on
these platforms. Santosh Shilimkar addresses this in this series; the
patches were copied to the appropriate people multiple times but were
ignored.

Fixing this also gets rid of some ARM weirdness in the setup of the
max*pfn variables, and brings ARM into line with every other Linux
architecture as far as those go"

* 'for-linus-dma-masks' of git://git.linaro.org/people/rmk/linux-arm: (52 commits)
ARM: 7805/1: mm: change max*pfn to include the physical offset of memory
ARM: 7797/1: mmc: Use dma_max_pfn(dev) helper for bounce_limit calculations
ARM: 7796/1: scsi: Use dma_max_pfn(dev) helper for bounce_limit calculations
ARM: 7795/1: mm: dma-mapping: Add dma_max_pfn(dev) helper function
ARM: 7794/1: block: Rename parameter dma_mask to max_addr for blk_queue_bounce_limit()
ARM: DMA-API: better handing of DMA masks for coherent allocations
ARM: 7857/1: dma: imx-sdma: setup dma mask
DMA-API: firmware/google/gsmi.c: avoid direct access to DMA masks
DMA-API: dcdbas: update DMA mask handing
DMA-API: dma: edma.c: no need to explicitly initialize DMA masks
DMA-API: usb: musb: use platform_device_register_full() to avoid directly messing with dma masks
DMA-API: crypto: remove last references to 'static struct device *dev'
DMA-API: crypto: fix ixp4xx crypto platform device support
DMA-API: others: use dma_set_coherent_mask()
DMA-API: staging: use dma_set_coherent_mask()
DMA-API: usb: use new dma_coerce_mask_and_coherent()
DMA-API: usb: use dma_set_coherent_mask()
DMA-API: parport: parport_pc.c: use dma_coerce_mask_and_coherent()
DMA-API: net: octeon: use dma_coerce_mask_and_coherent()
DMA-API: net: nxp/lpc_eth: use dma_coerce_mask_and_coherent()
...

+447 -448
+22 -15
Documentation/DMA-API-HOWTO.txt
··· 101 101 because this shows that you did think about these issues wrt. your 102 102 device. 103 103 104 - The query is performed via a call to dma_set_mask(): 104 + The query is performed via a call to dma_set_mask_and_coherent(): 105 105 106 - int dma_set_mask(struct device *dev, u64 mask); 106 + int dma_set_mask_and_coherent(struct device *dev, u64 mask); 107 107 108 - The query for consistent allocations is performed via a call to 109 - dma_set_coherent_mask(): 108 + which will query the mask for both streaming and coherent APIs together. 109 + If you have some special requirements, then the following two separate 110 + queries can be used instead: 110 111 111 - int dma_set_coherent_mask(struct device *dev, u64 mask); 112 + The query for streaming mappings is performed via a call to 113 + dma_set_mask(): 114 + 115 + int dma_set_mask(struct device *dev, u64 mask); 116 + 117 + The query for consistent allocations is performed via a call 118 + to dma_set_coherent_mask(): 119 + 120 + int dma_set_coherent_mask(struct device *dev, u64 mask); 112 121 113 122 Here, dev is a pointer to the device struct of your device, and mask 114 123 is a bit mask describing which bits of an address your device ··· 146 137 147 138 The standard 32-bit addressing device would do something like this: 148 139 149 - if (dma_set_mask(dev, DMA_BIT_MASK(32))) { 140 + if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) { 150 141 printk(KERN_WARNING 151 142 "mydev: No suitable DMA available.\n"); 152 143 goto ignore_this_device; ··· 180 171 181 172 int using_dac, consistent_using_dac; 182 173 183 - if (!dma_set_mask(dev, DMA_BIT_MASK(64))) { 174 + if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) { 184 175 using_dac = 1; 185 176 consistent_using_dac = 1; 186 - dma_set_coherent_mask(dev, DMA_BIT_MASK(64)); 187 - } else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) { 177 + } else if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) { 188 178 using_dac = 0; 189 179 consistent_using_dac = 0; 
190 - dma_set_coherent_mask(dev, DMA_BIT_MASK(32)); 191 180 } else { 192 181 printk(KERN_WARNING 193 182 "mydev: No suitable DMA available.\n"); 194 183 goto ignore_this_device; 195 184 } 196 185 197 - dma_set_coherent_mask() will always be able to set the same or a 198 - smaller mask as dma_set_mask(). However for the rare case that a 186 + The coherent coherent mask will always be able to set the same or a 187 + smaller mask as the streaming mask. However for the rare case that a 199 188 device driver only uses consistent allocations, one would have to 200 189 check the return value from dma_set_coherent_mask(). 201 190 ··· 206 199 goto ignore_this_device; 207 200 } 208 201 209 - When dma_set_mask() is successful, and returns zero, the kernel saves 210 - away this mask you have provided. The kernel will use this 211 - information later when you make DMA mappings. 202 + When dma_set_mask() or dma_set_mask_and_coherent() is successful, and 203 + returns zero, the kernel saves away this mask you have provided. The 204 + kernel will use this information later when you make DMA mappings. 212 205 213 206 There is a case which we are aware of at this time, which is worth 214 207 mentioning in this documentation. If your device supports multiple
+8
Documentation/DMA-API.txt
··· 142 142 driver writers. 143 143 144 144 int 145 + dma_set_mask_and_coherent(struct device *dev, u64 mask) 146 + 147 + Checks to see if the mask is possible and updates the device 148 + streaming and coherent DMA mask parameters if it is. 149 + 150 + Returns: 0 if successful and a negative error if not. 151 + 152 + int 145 153 dma_set_mask(struct device *dev, u64 mask) 146 154 147 155 Checks to see if the mask is possible and updates the device
+8
arch/arm/include/asm/dma-mapping.h
··· 64 64 { 65 65 return (dma_addr_t)__virt_to_bus((unsigned long)(addr)); 66 66 } 67 + 67 68 #else 68 69 static inline dma_addr_t pfn_to_dma(struct device *dev, unsigned long pfn) 69 70 { ··· 86 85 return __arch_virt_to_dma(dev, addr); 87 86 } 88 87 #endif 88 + 89 + /* The ARM override for dma_max_pfn() */ 90 + static inline unsigned long dma_max_pfn(struct device *dev) 91 + { 92 + return PHYS_PFN_OFFSET + dma_to_pfn(dev, *dev->dma_mask); 93 + } 94 + #define dma_max_pfn(dev) dma_max_pfn(dev) 89 95 90 96 /* 91 97 * DMA errors are defined by all-bits-set in the DMA address.
+45 -6
arch/arm/mm/dma-mapping.c
··· 159 159 160 160 static u64 get_coherent_dma_mask(struct device *dev) 161 161 { 162 - u64 mask = (u64)arm_dma_limit; 162 + u64 mask = (u64)DMA_BIT_MASK(32); 163 163 164 164 if (dev) { 165 165 mask = dev->coherent_dma_mask; ··· 173 173 return 0; 174 174 } 175 175 176 - if ((~mask) & (u64)arm_dma_limit) { 177 - dev_warn(dev, "coherent DMA mask %#llx is smaller " 178 - "than system GFP_DMA mask %#llx\n", 179 - mask, (u64)arm_dma_limit); 176 + /* 177 + * If the mask allows for more memory than we can address, 178 + * and we actually have that much memory, then fail the 179 + * allocation. 180 + */ 181 + if (sizeof(mask) != sizeof(dma_addr_t) && 182 + mask > (dma_addr_t)~0 && 183 + dma_to_pfn(dev, ~0) > arm_dma_pfn_limit) { 184 + dev_warn(dev, "Coherent DMA mask %#llx is larger than dma_addr_t allows\n", 185 + mask); 186 + dev_warn(dev, "Driver did not use or check the return value from dma_set_coherent_mask()?\n"); 187 + return 0; 188 + } 189 + 190 + /* 191 + * Now check that the mask, when translated to a PFN, 192 + * fits within the allowable addresses which we can 193 + * allocate. 194 + */ 195 + if (dma_to_pfn(dev, mask) < arm_dma_pfn_limit) { 196 + dev_warn(dev, "Coherent DMA mask %#llx (pfn %#lx-%#lx) covers a smaller range of system memory than the DMA zone pfn 0x0-%#lx\n", 197 + mask, 198 + dma_to_pfn(dev, 0), dma_to_pfn(dev, mask) + 1, 199 + arm_dma_pfn_limit + 1); 180 200 return 0; 181 201 } 182 202 } ··· 1027 1007 */ 1028 1008 int dma_supported(struct device *dev, u64 mask) 1029 1009 { 1030 - if (mask < (u64)arm_dma_limit) 1010 + unsigned long limit; 1011 + 1012 + /* 1013 + * If the mask allows for more memory than we can address, 1014 + * and we actually have that much memory, then we must 1015 + * indicate that DMA to this device is not supported. 
1016 + */ 1017 + if (sizeof(mask) != sizeof(dma_addr_t) && 1018 + mask > (dma_addr_t)~0 && 1019 + dma_to_pfn(dev, ~0) > arm_dma_pfn_limit) 1031 1020 return 0; 1021 + 1022 + /* 1023 + * Translate the device's DMA mask to a PFN limit. This 1024 + * PFN number includes the page which we can DMA to. 1025 + */ 1026 + limit = dma_to_pfn(dev, mask); 1027 + 1028 + if (limit < arm_dma_pfn_limit) 1029 + return 0; 1030 + 1032 1031 return 1; 1033 1032 } 1034 1033 EXPORT_SYMBOL(dma_supported);
+6 -6
arch/arm/mm/init.c
··· 209 209 * so a successful GFP_DMA allocation will always satisfy this. 210 210 */ 211 211 phys_addr_t arm_dma_limit; 212 + unsigned long arm_dma_pfn_limit; 212 213 213 214 static void __init arm_adjust_dma_zone(unsigned long *size, unsigned long *hole, 214 215 unsigned long dma_size) ··· 232 231 arm_dma_limit = PHYS_OFFSET + arm_dma_zone_size - 1; 233 232 } else 234 233 arm_dma_limit = 0xffffffff; 234 + arm_dma_pfn_limit = arm_dma_limit >> PAGE_SHIFT; 235 235 #endif 236 236 } 237 237 ··· 420 418 * This doesn't seem to be used by the Linux memory manager any 421 419 * more, but is used by ll_rw_block. If we can get rid of it, we 422 420 * also get rid of some of the stuff above as well. 423 - * 424 - * Note: max_low_pfn and max_pfn reflect the number of _pages_ in 425 - * the system, not the maximum PFN. 426 421 */ 427 - max_low_pfn = max_low - PHYS_PFN_OFFSET; 428 - max_pfn = max_high - PHYS_PFN_OFFSET; 422 + min_low_pfn = min; 423 + max_low_pfn = max_low; 424 + max_pfn = max_high; 429 425 } 430 426 431 427 /* ··· 529 529 static void __init free_highpages(void) 530 530 { 531 531 #ifdef CONFIG_HIGHMEM 532 - unsigned long max_low = max_low_pfn + PHYS_PFN_OFFSET; 532 + unsigned long max_low = max_low_pfn; 533 533 struct memblock_region *mem, *res; 534 534 535 535 /* set highmem page free */
+2
arch/arm/mm/mm.h
··· 81 81 82 82 #ifdef CONFIG_ZONE_DMA 83 83 extern phys_addr_t arm_dma_limit; 84 + extern unsigned long arm_dma_pfn_limit; 84 85 #else 85 86 #define arm_dma_limit ((phys_addr_t)~0) 87 + #define arm_dma_pfn_limit (~0ul >> PAGE_SHIFT) 86 88 #endif 87 89 88 90 extern phys_addr_t arm_lowmem_limit;
+1 -2
arch/powerpc/kernel/vio.c
··· 1419 1419 1420 1420 /* needed to ensure proper operation of coherent allocations 1421 1421 * later, in case driver doesn't set it explicitly */ 1422 - dma_set_mask(&viodev->dev, DMA_BIT_MASK(64)); 1423 - dma_set_coherent_mask(&viodev->dev, DMA_BIT_MASK(64)); 1422 + dma_set_mask_and_coherent(&viodev->dev, DMA_BIT_MASK(64)); 1424 1423 } 1425 1424 1426 1425 /* register with generic device framework */
+4 -4
block/blk-settings.c
··· 195 195 /** 196 196 * blk_queue_bounce_limit - set bounce buffer limit for queue 197 197 * @q: the request queue for the device 198 - * @dma_mask: the maximum address the device can handle 198 + * @max_addr: the maximum address the device can handle 199 199 * 200 200 * Description: 201 201 * Different hardware can have different requirements as to what pages 202 202 * it can do I/O directly to. A low level driver can call 203 203 * blk_queue_bounce_limit to have lower memory pages allocated as bounce 204 - * buffers for doing I/O to pages residing above @dma_mask. 204 + * buffers for doing I/O to pages residing above @max_addr. 205 205 **/ 206 - void blk_queue_bounce_limit(struct request_queue *q, u64 dma_mask) 206 + void blk_queue_bounce_limit(struct request_queue *q, u64 max_addr) 207 207 { 208 - unsigned long b_pfn = dma_mask >> PAGE_SHIFT; 208 + unsigned long b_pfn = max_addr >> PAGE_SHIFT; 209 209 int dma = 0; 210 210 211 211 q->bounce_gfp = GFP_NOIO;
+1 -5
drivers/amba/bus.c
··· 552 552 if (!dev) 553 553 return ERR_PTR(-ENOMEM); 554 554 555 - dev->dma_mask = dma_mask; 556 555 dev->dev.coherent_dma_mask = dma_mask; 557 556 dev->irq[0] = irq1; 558 557 dev->irq[1] = irq2; ··· 618 619 dev_set_name(&dev->dev, "%s", name); 619 620 dev->dev.release = amba_device_release; 620 621 dev->dev.bus = &amba_bustype; 621 - dev->dev.dma_mask = &dev->dma_mask; 622 + dev->dev.dma_mask = &dev->dev.coherent_dma_mask; 622 623 dev->res.name = dev_name(&dev->dev); 623 624 } 624 625 ··· 661 662 { 662 663 amba_device_initialize(dev, dev->dev.init_name); 663 664 dev->dev.init_name = NULL; 664 - 665 - if (!dev->dev.coherent_dma_mask && dev->dma_mask) 666 - dev_warn(&dev->dev, "coherent dma mask is unset\n"); 667 665 668 666 return amba_device_add(dev, parent); 669 667 }
+4 -1
drivers/ata/pata_ixp4xx_cf.c
··· 144 144 struct ata_host *host; 145 145 struct ata_port *ap; 146 146 struct ixp4xx_pata_data *data = dev_get_platdata(&pdev->dev); 147 + int ret; 147 148 148 149 cs0 = platform_get_resource(pdev, IORESOURCE_MEM, 0); 149 150 cs1 = platform_get_resource(pdev, IORESOURCE_MEM, 1); ··· 158 157 return -ENOMEM; 159 158 160 159 /* acquire resources and fill host */ 161 - pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32); 160 + ret = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32)); 161 + if (ret) 162 + return ret; 162 163 163 164 data->cs0 = devm_ioremap(&pdev->dev, cs0->start, 0x1000); 164 165 data->cs1 = devm_ioremap(&pdev->dev, cs1->start, 0x1000);
+3 -2
drivers/ata/pata_octeon_cf.c
··· 1014 1014 } 1015 1015 cf_port->c0 = ap->ioaddr.ctl_addr; 1016 1016 1017 - pdev->dev.coherent_dma_mask = DMA_BIT_MASK(64); 1018 - pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask; 1017 + rv = dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)); 1018 + if (rv) 1019 + return rv; 1019 1020 1020 1021 ata_port_desc(ap, "cmd %p ctl %p", base, ap->ioaddr.ctl_addr); 1021 1022
+4 -6
drivers/block/nvme-core.c
··· 1949 1949 if (pci_request_selected_regions(pdev, bars, "nvme")) 1950 1950 goto disable_pci; 1951 1951 1952 - if (!dma_set_mask(&pdev->dev, DMA_BIT_MASK(64))) 1953 - dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64)); 1954 - else if (!dma_set_mask(&pdev->dev, DMA_BIT_MASK(32))) 1955 - dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32)); 1956 - else 1957 - goto disable_pci; 1952 + if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) && 1953 + dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))) 1954 + goto disable; 1958 1955 1959 1956 pci_set_drvdata(pdev, dev); 1960 1957 dev->bar = ioremap(pci_resource_start(pdev, 0), 8192); ··· 2165 2168 2166 2169 INIT_LIST_HEAD(&dev->namespaces); 2167 2170 dev->pci_dev = pdev; 2171 + 2168 2172 result = nvme_set_instance(dev); 2169 2173 if (result) 2170 2174 goto free;
+24 -24
drivers/crypto/ixp4xx_crypto.c
··· 218 218 219 219 static int support_aes = 1; 220 220 221 - static void dev_release(struct device *dev) 222 - { 223 - return; 224 - } 225 - 226 221 #define DRIVER_NAME "ixp4xx_crypto" 227 - static struct platform_device pseudo_dev = { 228 - .name = DRIVER_NAME, 229 - .id = 0, 230 - .num_resources = 0, 231 - .dev = { 232 - .coherent_dma_mask = DMA_BIT_MASK(32), 233 - .release = dev_release, 234 - } 235 - }; 236 222 237 - static struct device *dev = &pseudo_dev.dev; 223 + static struct platform_device *pdev; 238 224 239 225 static inline dma_addr_t crypt_virt2phys(struct crypt_ctl *virt) 240 226 { ··· 249 263 250 264 static int setup_crypt_desc(void) 251 265 { 266 + struct device *dev = &pdev->dev; 252 267 BUILD_BUG_ON(sizeof(struct crypt_ctl) != 64); 253 268 crypt_virt = dma_alloc_coherent(dev, 254 269 NPE_QLEN * sizeof(struct crypt_ctl), ··· 350 363 351 364 static void one_packet(dma_addr_t phys) 352 365 { 366 + struct device *dev = &pdev->dev; 353 367 struct crypt_ctl *crypt; 354 368 struct ixp_ctx *ctx; 355 369 int failed; ··· 420 432 tasklet_schedule(&crypto_done_tasklet); 421 433 } 422 434 423 - static int init_ixp_crypto(void) 435 + static int init_ixp_crypto(struct device *dev) 424 436 { 425 437 int ret = -ENODEV; 426 438 u32 msg[2] = { 0, 0 }; ··· 507 519 return ret; 508 520 } 509 521 510 - static void release_ixp_crypto(void) 522 + static void release_ixp_crypto(struct device *dev) 511 523 { 512 524 qmgr_disable_irq(RECV_QID); 513 525 tasklet_kill(&crypto_done_tasklet); ··· 874 886 enum dma_data_direction src_direction = DMA_BIDIRECTIONAL; 875 887 struct ablk_ctx *req_ctx = ablkcipher_request_ctx(req); 876 888 struct buffer_desc src_hook; 889 + struct device *dev = &pdev->dev; 877 890 gfp_t flags = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ? 
878 891 GFP_KERNEL : GFP_ATOMIC; 879 892 ··· 999 1010 unsigned int cryptlen; 1000 1011 struct buffer_desc *buf, src_hook; 1001 1012 struct aead_ctx *req_ctx = aead_request_ctx(req); 1013 + struct device *dev = &pdev->dev; 1002 1014 gfp_t flags = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ? 1003 1015 GFP_KERNEL : GFP_ATOMIC; 1004 1016 ··· 1408 1418 } }; 1409 1419 1410 1420 #define IXP_POSTFIX "-ixp4xx" 1421 + 1422 + static const struct platform_device_info ixp_dev_info __initdata = { 1423 + .name = DRIVER_NAME, 1424 + .id = 0, 1425 + .dma_mask = DMA_BIT_MASK(32), 1426 + }; 1427 + 1411 1428 static int __init ixp_module_init(void) 1412 1429 { 1413 1430 int num = ARRAY_SIZE(ixp4xx_algos); 1414 - int i,err ; 1431 + int i, err ; 1415 1432 1416 - if (platform_device_register(&pseudo_dev)) 1417 - return -ENODEV; 1433 + pdev = platform_device_register_full(&ixp_dev_info); 1434 + if (IS_ERR(pdev)) 1435 + return PTR_ERR(pdev); 1436 + 1437 + dev = &pdev->dev; 1418 1438 1419 1439 spin_lock_init(&desc_lock); 1420 1440 spin_lock_init(&emerg_lock); 1421 1441 1422 - err = init_ixp_crypto(); 1442 + err = init_ixp_crypto(&pdev->dev); 1423 1443 if (err) { 1424 - platform_device_unregister(&pseudo_dev); 1444 + platform_device_unregister(pdev); 1425 1445 return err; 1426 1446 } 1427 1447 for (i=0; i< num; i++) { ··· 1495 1495 if (ixp4xx_algos[i].registered) 1496 1496 crypto_unregister_alg(&ixp4xx_algos[i].crypto); 1497 1497 } 1498 - release_ixp_crypto(); 1499 - platform_device_unregister(&pseudo_dev); 1498 + release_ixp_crypto(&pdev->dev); 1499 + platform_device_unregister(pdev); 1500 1500 } 1501 1501 1502 1502 module_init(ixp_module_init);
+5
drivers/dma/amba-pl08x.c
··· 2055 2055 if (ret) 2056 2056 return ret; 2057 2057 2058 + /* Ensure that we can do DMA */ 2059 + ret = dma_set_mask_and_coherent(&adev->dev, DMA_BIT_MASK(32)); 2060 + if (ret) 2061 + goto out_no_pl08x; 2062 + 2058 2063 /* Create the driver state holder */ 2059 2064 pl08x = kzalloc(sizeof(*pl08x), GFP_KERNEL); 2060 2065 if (!pl08x) {
+3 -5
drivers/dma/dw/platform.c
··· 191 191 if (IS_ERR(chip->regs)) 192 192 return PTR_ERR(chip->regs); 193 193 194 - /* Apply default dma_mask if needed */ 195 - if (!dev->dma_mask) { 196 - dev->dma_mask = &dev->coherent_dma_mask; 197 - dev->coherent_dma_mask = DMA_BIT_MASK(32); 198 - } 194 + err = dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); 195 + if (err) 196 + return err; 199 197 200 198 pdata = dev_get_platdata(dev); 201 199 if (!pdata)
+6 -4
drivers/dma/edma.c
··· 634 634 struct edma_cc *ecc; 635 635 int ret; 636 636 637 + ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); 638 + if (ret) 639 + return ret; 640 + 637 641 ecc = devm_kzalloc(&pdev->dev, sizeof(*ecc), GFP_KERNEL); 638 642 if (!ecc) { 639 643 dev_err(&pdev->dev, "Can't allocate controller\n"); ··· 709 705 static const struct platform_device_info edma_dev_info0 = { 710 706 .name = "edma-dma-engine", 711 707 .id = 0, 708 + .dma_mask = DMA_BIT_MASK(32), 712 709 }; 713 710 714 711 static const struct platform_device_info edma_dev_info1 = { 715 712 .name = "edma-dma-engine", 716 713 .id = 1, 714 + .dma_mask = DMA_BIT_MASK(32), 717 715 }; 718 716 719 717 static int edma_init(void) ··· 729 723 ret = PTR_ERR(pdev0); 730 724 goto out; 731 725 } 732 - pdev0->dev.dma_mask = &pdev0->dev.coherent_dma_mask; 733 - pdev0->dev.coherent_dma_mask = DMA_BIT_MASK(32); 734 726 } 735 727 736 728 if (EDMA_CTLRS == 2) { ··· 738 734 platform_device_unregister(pdev0); 739 735 ret = PTR_ERR(pdev1); 740 736 } 741 - pdev1->dev.dma_mask = &pdev1->dev.coherent_dma_mask; 742 - pdev1->dev.coherent_dma_mask = DMA_BIT_MASK(32); 743 737 } 744 738 745 739 out:
+4
drivers/dma/imx-sdma.c
··· 1432 1432 return -EINVAL; 1433 1433 } 1434 1434 1435 + ret = dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); 1436 + if (ret) 1437 + return ret; 1438 + 1435 1439 sdma = kzalloc(sizeof(*sdma), GFP_KERNEL); 1436 1440 if (!sdma) 1437 1441 return -ENOMEM;
+4
drivers/dma/pl330.c
··· 2903 2903 2904 2904 pdat = dev_get_platdata(&adev->dev); 2905 2905 2906 + ret = dma_set_mask_and_coherent(&adev->dev, DMA_BIT_MASK(32)); 2907 + if (ret) 2908 + return ret; 2909 + 2906 2910 /* Allocate a new DMAC and its Channels */ 2907 2911 pdmac = devm_kzalloc(&adev->dev, sizeof(*pdmac), GFP_KERNEL); 2908 2912 if (!pdmac) {
+19 -13
drivers/firmware/dcdbas.c
··· 545 545 host_control_action = HC_ACTION_NONE; 546 546 host_control_smi_type = HC_SMITYPE_NONE; 547 547 548 + dcdbas_pdev = dev; 549 + 548 550 /* 549 551 * BIOS SMI calls require buffer addresses be in 32-bit address space. 550 552 * This is done by setting the DMA mask below. 551 553 */ 552 - dcdbas_pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32); 553 - dcdbas_pdev->dev.dma_mask = &dcdbas_pdev->dev.coherent_dma_mask; 554 + error = dma_set_coherent_mask(&dcdbas_pdev->dev, DMA_BIT_MASK(32)); 555 + if (error) 556 + return error; 554 557 555 558 error = sysfs_create_group(&dev->dev.kobj, &dcdbas_attr_group); 556 559 if (error) ··· 584 581 .remove = dcdbas_remove, 585 582 }; 586 583 584 + static const struct platform_device_info dcdbas_dev_info __initdata = { 585 + .name = DRIVER_NAME, 586 + .id = -1, 587 + .dma_mask = DMA_BIT_MASK(32), 588 + }; 589 + 590 + static struct platform_device *dcdbas_pdev_reg; 591 + 587 592 /** 588 593 * dcdbas_init: initialize driver 589 594 */ ··· 603 592 if (error) 604 593 return error; 605 594 606 - dcdbas_pdev = platform_device_alloc(DRIVER_NAME, -1); 607 - if (!dcdbas_pdev) { 608 - error = -ENOMEM; 595 + dcdbas_pdev_reg = platform_device_register_full(&dcdbas_dev_info); 596 + if (IS_ERR(dcdbas_pdev_reg)) { 597 + error = PTR_ERR(dcdbas_pdev_reg); 609 598 goto err_unregister_driver; 610 599 } 611 600 612 - error = platform_device_add(dcdbas_pdev); 613 - if (error) 614 - goto err_free_device; 615 - 616 601 return 0; 617 602 618 - err_free_device: 619 - platform_device_put(dcdbas_pdev); 620 603 err_unregister_driver: 621 604 platform_driver_unregister(&dcdbas_driver); 622 605 return error; ··· 633 628 * all sysfs attributes belonging to this module have been 634 629 * released. 635 630 */ 636 - smi_data_buf_free(); 637 - platform_device_unregister(dcdbas_pdev); 631 + if (dcdbas_pdev) 632 + smi_data_buf_free(); 633 + platform_device_unregister(dcdbas_pdev_reg); 638 634 platform_driver_unregister(&dcdbas_driver); 639 635 } 640 636
+8 -5
drivers/firmware/google/gsmi.c
··· 764 764 static struct kobject *gsmi_kobj; 765 765 static struct efivars efivars; 766 766 767 + static const struct platform_device_info gsmi_dev_info = { 768 + .name = "gsmi", 769 + .id = -1, 770 + /* SMI callbacks require 32bit addresses */ 771 + .dma_mask = DMA_BIT_MASK(32), 772 + }; 773 + 767 774 static __init int gsmi_init(void) 768 775 { 769 776 unsigned long flags; ··· 783 776 gsmi_dev.smi_cmd = acpi_gbl_FADT.smi_command; 784 777 785 778 /* register device */ 786 - gsmi_dev.pdev = platform_device_register_simple("gsmi", -1, NULL, 0); 779 + gsmi_dev.pdev = platform_device_register_full(&gsmi_dev_info); 787 780 if (IS_ERR(gsmi_dev.pdev)) { 788 781 printk(KERN_ERR "gsmi: unable to register platform device\n"); 789 782 return PTR_ERR(gsmi_dev.pdev); ··· 792 785 /* SMI access needs to be serialized */ 793 786 spin_lock_init(&gsmi_dev.lock); 794 787 795 - /* SMI callbacks require 32bit addresses */ 796 - gsmi_dev.pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32); 797 - gsmi_dev.pdev->dev.dma_mask = 798 - &gsmi_dev.pdev->dev.coherent_dma_mask; 799 788 ret = -ENOMEM; 800 789 gsmi_dev.dma_pool = dma_pool_create("gsmi", &gsmi_dev.pdev->dev, 801 790 GSMI_BUF_SIZE, GSMI_BUF_ALIGN, 0);
+5 -1
drivers/gpu/drm/exynos/exynos_drm_drv.c
··· 286 286 287 287 static int exynos_drm_platform_probe(struct platform_device *pdev) 288 288 { 289 - pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32); 289 + int ret; 290 + 291 + ret = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32)); 292 + if (ret) 293 + return ret; 290 294 291 295 return drm_platform_init(&exynos_drm_driver, pdev); 292 296 }
+3 -2
drivers/gpu/drm/omapdrm/omap_dmm_tiler.c
··· 664 664 } 665 665 666 666 /* set dma mask for device */ 667 - /* NOTE: this is a workaround for the hwmod not initializing properly */ 668 - dev->dev.coherent_dma_mask = DMA_BIT_MASK(32); 667 + ret = dma_set_coherent_mask(&dev->dev, DMA_BIT_MASK(32)); 668 + if (ret) 669 + goto fail; 669 670 670 671 omap_dmm->dummy_pa = page_to_phys(omap_dmm->dummy_page); 671 672
+3 -3
drivers/media/platform/omap3isp/isp.c
··· 2182 2182 isp->pdata = pdata; 2183 2183 isp->ref_count = 0; 2184 2184 2185 - isp->raw_dmamask = DMA_BIT_MASK(32); 2186 - isp->dev->dma_mask = &isp->raw_dmamask; 2187 - isp->dev->coherent_dma_mask = DMA_BIT_MASK(32); 2185 + ret = dma_coerce_mask_and_coherent(isp->dev, DMA_BIT_MASK(32)); 2186 + if (ret) 2187 + return ret; 2188 2188 2189 2189 platform_set_drvdata(pdev, isp); 2190 2190
-3
drivers/media/platform/omap3isp/isp.h
··· 152 152 * @mmio_base_phys: Array with physical L4 bus addresses for ISP register 153 153 * regions. 154 154 * @mmio_size: Array with ISP register regions size in bytes. 155 - * @raw_dmamask: Raw DMA mask 156 155 * @stat_lock: Spinlock for handling statistics 157 156 * @isp_mutex: Mutex for serializing requests to ISP. 158 157 * @crashed: Bitmask of crashed entities (indexed by entity ID) ··· 188 189 void __iomem *mmio_base[OMAP3_ISP_IOMEM_LAST]; 189 190 unsigned long mmio_base_phys[OMAP3_ISP_IOMEM_LAST]; 190 191 resource_size_t mmio_size[OMAP3_ISP_IOMEM_LAST]; 191 - 192 - u64 raw_dmamask; 193 192 194 193 /* ISP Obj */ 195 194 spinlock_t stat_lock; /* common lock for statistic drivers */
+2 -1
drivers/mmc/card/queue.c
··· 15 15 #include <linux/freezer.h> 16 16 #include <linux/kthread.h> 17 17 #include <linux/scatterlist.h> 18 + #include <linux/dma-mapping.h> 18 19 19 20 #include <linux/mmc/card.h> 20 21 #include <linux/mmc/host.h> ··· 197 196 struct mmc_queue_req *mqrq_prev = &mq->mqrq[1]; 198 197 199 198 if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask) 200 - limit = *mmc_dev(host)->dma_mask; 199 + limit = dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT; 201 200 202 201 mq->card = card; 203 202 mq->queue = blk_init_queue(mmc_request_fn, lock);
+3 -2
drivers/mmc/host/sdhci-acpi.c
··· 310 310 dma_mask = DMA_BIT_MASK(32); 311 311 } 312 312 313 - dev->dma_mask = &dev->coherent_dma_mask; 314 - dev->coherent_dma_mask = dma_mask; 313 + err = dma_coerce_mask_and_coherent(dev, dma_mask); 314 + if (err) 315 + goto err_free; 315 316 } 316 317 317 318 if (c->slot) {
+1 -2
drivers/net/ethernet/broadcom/b44.c
··· 2193 2193 goto err_out_free_dev; 2194 2194 } 2195 2195 2196 - if (dma_set_mask(sdev->dma_dev, DMA_BIT_MASK(30)) || 2197 - dma_set_coherent_mask(sdev->dma_dev, DMA_BIT_MASK(30))) { 2196 + if (dma_set_mask_and_coherent(sdev->dma_dev, DMA_BIT_MASK(30))) { 2198 2197 dev_err(sdev->dev, 2199 2198 "Required 30BIT DMA mask unsupported by the system\n"); 2200 2199 goto err_out_powerdown;
+2 -6
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
··· 12140 12140 { 12141 12141 struct device *dev = &bp->pdev->dev; 12142 12142 12143 - if (dma_set_mask(dev, DMA_BIT_MASK(64)) == 0) { 12144 - if (dma_set_coherent_mask(dev, DMA_BIT_MASK(64)) != 0) { 12145 - dev_err(dev, "dma_set_coherent_mask failed, aborting\n"); 12146 - return -EIO; 12147 - } 12148 - } else if (dma_set_mask(dev, DMA_BIT_MASK(32)) != 0) { 12143 + if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)) != 0 && 12144 + dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)) != 0) { 12149 12145 dev_err(dev, "System does not support DMA, aborting\n"); 12150 12146 return -EIO; 12151 12147 }
+4 -9
drivers/net/ethernet/brocade/bna/bnad.c
··· 3299 3299 err = pci_request_regions(pdev, BNAD_NAME); 3300 3300 if (err) 3301 3301 goto disable_device; 3302 - if (!dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)) && 3303 - !dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64))) { 3302 + if (!dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64))) { 3304 3303 *using_dac = true; 3305 3304 } else { 3306 - err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32)); 3307 - if (err) { 3308 - err = dma_set_coherent_mask(&pdev->dev, 3309 - DMA_BIT_MASK(32)); 3310 - if (err) 3311 - goto release_regions; 3312 - } 3305 + err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); 3306 + if (err) 3307 + goto release_regions; 3313 3308 *using_dac = false; 3314 3309 } 3315 3310 pci_set_master(pdev);
+2 -10
drivers/net/ethernet/emulex/benet/be_main.c
··· 4487 4487 adapter->netdev = netdev; 4488 4488 SET_NETDEV_DEV(netdev, &pdev->dev); 4489 4489 4490 - status = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)); 4490 + status = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)); 4491 4491 if (!status) { 4492 - status = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64)); 4493 - if (status < 0) { 4494 - dev_err(&pdev->dev, "dma_set_coherent_mask failed\n"); 4495 - goto free_netdev; 4496 - } 4497 4492 netdev->features |= NETIF_F_HIGHDMA; 4498 4493 } else { 4499 - status = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32)); 4500 - if (!status) 4501 - status = dma_set_coherent_mask(&pdev->dev, 4502 - DMA_BIT_MASK(32)); 4494 + status = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); 4503 4495 if (status) { 4504 4496 dev_err(&pdev->dev, "Could not set PCI DMA Mask\n"); 4505 4497 goto free_netdev;
+2 -7
drivers/net/ethernet/intel/e1000/e1000_main.c
··· 1018 1018 */ 1019 1019 pci_using_dac = 0; 1020 1020 if ((hw->bus_type == e1000_bus_type_pcix) && 1021 - !dma_set_mask(&pdev->dev, DMA_BIT_MASK(64))) { 1022 - /* according to DMA-API-HOWTO, coherent calls will always 1023 - * succeed if the set call did 1024 - */ 1025 - dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64)); 1021 + !dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64))) { 1026 1022 pci_using_dac = 1; 1027 1023 } else { 1028 - err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32)); 1024 + err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); 1029 1025 if (err) { 1030 1026 pr_err("No usable DMA config, aborting\n"); 1031 1027 goto err_dma; 1032 1028 } 1033 - dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32)); 1034 1029 } 1035 1030 1036 1031 netdev->netdev_ops = &e1000_netdev_ops;
+6 -12
drivers/net/ethernet/intel/e1000e/netdev.c
··· 6553 6553 return err; 6554 6554 6555 6555 pci_using_dac = 0; 6556 - err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)); 6556 + err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)); 6557 6557 if (!err) { 6558 - err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64)); 6559 - if (!err) 6560 - pci_using_dac = 1; 6558 + pci_using_dac = 1; 6561 6559 } else { 6562 - err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32)); 6560 + err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); 6563 6561 if (err) { 6564 - err = dma_set_coherent_mask(&pdev->dev, 6565 - DMA_BIT_MASK(32)); 6566 - if (err) { 6567 - dev_err(&pdev->dev, 6568 - "No usable DMA configuration, aborting\n"); 6569 - goto err_dma; 6570 - } 6562 + dev_err(&pdev->dev, 6563 + "No usable DMA configuration, aborting\n"); 6564 + goto err_dma; 6571 6565 } 6572 6566 } 6573 6567
+6 -12
drivers/net/ethernet/intel/igb/igb_main.c
···
                 return err;

         pci_using_dac = 0;
-        err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
+        err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
         if (!err) {
-                err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64));
-                if (!err)
-                        pci_using_dac = 1;
+                pci_using_dac = 1;
         } else {
-                err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
+                err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
                 if (err) {
-                        err = dma_set_coherent_mask(&pdev->dev,
-                                                    DMA_BIT_MASK(32));
-                        if (err) {
-                                dev_err(&pdev->dev,
-                                        "No usable DMA configuration, aborting\n");
-                                goto err_dma;
-                        }
+                        dev_err(&pdev->dev,
+                                "No usable DMA configuration, aborting\n");
+                        goto err_dma;
                 }
         }
+6 -12
drivers/net/ethernet/intel/igbvf/netdev.c
···
                 return err;

         pci_using_dac = 0;
-        err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
+        err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
         if (!err) {
-                err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64));
-                if (!err)
-                        pci_using_dac = 1;
+                pci_using_dac = 1;
         } else {
-                err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
+                err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
                 if (err) {
-                        err = dma_set_coherent_mask(&pdev->dev,
-                                                    DMA_BIT_MASK(32));
-                        if (err) {
-                                dev_err(&pdev->dev, "No usable DMA "
-                                        "configuration, aborting\n");
-                                goto err_dma;
-                        }
+                        dev_err(&pdev->dev, "No usable DMA "
+                                "configuration, aborting\n");
+                        goto err_dma;
                 }
         }
+5 -11
drivers/net/ethernet/intel/ixgb/ixgb_main.c
···
                 return err;

         pci_using_dac = 0;
-        err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
+        err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
         if (!err) {
-                err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64));
-                if (!err)
-                        pci_using_dac = 1;
+                pci_using_dac = 1;
         } else {
-                err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
+                err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
                 if (err) {
-                        err = dma_set_coherent_mask(&pdev->dev,
-                                                    DMA_BIT_MASK(32));
-                        if (err) {
-                                pr_err("No usable DMA configuration, aborting\n");
-                                goto err_dma_mask;
-                        }
+                        pr_err("No usable DMA configuration, aborting\n");
+                        goto err_dma_mask;
                 }
         }
+5 -10
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
···
         if (err)
                 return err;

-        if (!dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)) &&
-            !dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64))) {
+        if (!dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64))) {
                 pci_using_dac = 1;
         } else {
-                err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
+                err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
                 if (err) {
-                        err = dma_set_coherent_mask(&pdev->dev,
-                                                    DMA_BIT_MASK(32));
-                        if (err) {
-                                dev_err(&pdev->dev,
-                                        "No usable DMA configuration, aborting\n");
-                                goto err_dma;
-                        }
+                        dev_err(&pdev->dev,
+                                "No usable DMA configuration, aborting\n");
+                        goto err_dma;
                 }
                 pci_using_dac = 0;
         }
+5 -10
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
···
         if (err)
                 return err;

-        if (!dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)) &&
-            !dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64))) {
+        if (!dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64))) {
                 pci_using_dac = 1;
         } else {
-                err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
+                err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
                 if (err) {
-                        err = dma_set_coherent_mask(&pdev->dev,
-                                                    DMA_BIT_MASK(32));
-                        if (err) {
-                                dev_err(&pdev->dev, "No usable DMA "
-                                        "configuration, aborting\n");
-                                goto err_dma;
-                        }
+                        dev_err(&pdev->dev, "No usable DMA "
+                                "configuration, aborting\n");
+                        goto err_dma;
                 }
                 pci_using_dac = 0;
         }
+4 -2
drivers/net/ethernet/nxp/lpc_eth.c
···
         }

         if (pldat->dma_buff_base_v == 0) {
-                pldat->pdev->dev.coherent_dma_mask = 0xFFFFFFFF;
-                pldat->pdev->dev.dma_mask = &pldat->pdev->dev.coherent_dma_mask;
+                ret = dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+                if (ret)
+                        goto err_out_free_irq;
+
                 pldat->dma_buff_size = PAGE_ALIGN(pldat->dma_buff_size);

                 /* Allocate a chunk of memory for the DMA ethernet buffers
+3 -2
drivers/net/ethernet/octeon/octeon_mgmt.c
···

         p->phy_np = of_parse_phandle(pdev->dev.of_node, "phy-handle", 0);

-        pdev->dev.coherent_dma_mask = DMA_BIT_MASK(64);
-        pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask;
+        result = dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
+        if (result)
+                goto err;

         netif_carrier_off(netdev);
         result = register_netdev(netdev);
+1 -11
drivers/net/ethernet/sfc/efx.c
···
          */
         while (dma_mask > 0x7fffffffUL) {
                 if (dma_supported(&pci_dev->dev, dma_mask)) {
-                        rc = dma_set_mask(&pci_dev->dev, dma_mask);
+                        rc = dma_set_mask_and_coherent(&pci_dev->dev, dma_mask);
                         if (rc == 0)
                                 break;
                 }
···
         }
         netif_dbg(efx, probe, efx->net_dev,
                   "using DMA mask %llx\n", (unsigned long long) dma_mask);
-        rc = dma_set_coherent_mask(&pci_dev->dev, dma_mask);
-        if (rc) {
-                /* dma_set_coherent_mask() is not *allowed* to
-                 * fail with a mask that dma_set_mask() accepted,
-                 * but just in case...
-                 */
-                netif_err(efx, probe, efx->net_dev,
-                          "failed to set consistent DMA mask\n");
-                goto fail2;
-        }

         efx->membase_phys = pci_resource_start(efx->pci_dev, EFX_MEM_BAR);
         rc = pci_request_region(pci_dev, EFX_MEM_BAR, "sfc");
+3 -6
drivers/net/wireless/b43/dma.c
···
         /* Try to set the DMA mask. If it fails, try falling back to a
          * lower mask, as we can always also support a lower one. */
         while (1) {
-                err = dma_set_mask(dev->dev->dma_dev, mask);
-                if (!err) {
-                        err = dma_set_coherent_mask(dev->dev->dma_dev, mask);
-                        if (!err)
-                                break;
-                }
+                err = dma_set_mask_and_coherent(dev->dev->dma_dev, mask);
+                if (!err)
+                        break;
                 if (mask == DMA_BIT_MASK(64)) {
                         mask = DMA_BIT_MASK(32);
                         fallback = true;
+3 -6
drivers/net/wireless/b43legacy/dma.c
···
         /* Try to set the DMA mask. If it fails, try falling back to a
          * lower mask, as we can always also support a lower one. */
         while (1) {
-                err = dma_set_mask(dev->dev->dma_dev, mask);
-                if (!err) {
-                        err = dma_set_coherent_mask(dev->dev->dma_dev, mask);
-                        if (!err)
-                                break;
-                }
+                err = dma_set_mask_and_coherent(dev->dev->dma_dev, mask);
+                if (!err)
+                        break;
                 if (mask == DMA_BIT_MASK(64)) {
                         mask = DMA_BIT_MASK(32);
                         fallback = true;
-3
drivers/of/platform.c
···
         else
                 of_device_make_bus_id(&dev->dev);

-        /* setup amba-specific device info */
-        dev->dma_mask = ~0;
-
         /* Allow the HW Peripheral ID to be overridden */
         prop = of_get_property(node, "arm,primecell-periphid", NULL);
         if (prop)
+6 -2
drivers/parport/parport_pc.c
···
         struct resource *ECR_res = NULL;
         struct resource *EPP_res = NULL;
         struct platform_device *pdev = NULL;
+        int ret;

         if (!dev) {
                 /* We need a physical device to attach to, but none was
···
                         return NULL;
                 dev = &pdev->dev;

-                dev->coherent_dma_mask = DMA_BIT_MASK(24);
-                dev->dma_mask = &dev->coherent_dma_mask;
+                ret = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(24));
+                if (ret) {
+                        dev_err(dev, "Unable to set coherent dma mask: disabling DMA\n");
+                        dma = PARPORT_DMA_NONE;
+                }
         }

         ops = kmalloc(sizeof(struct parport_operations), GFP_KERNEL);
+1 -1
drivers/scsi/scsi_lib.c
···

         host_dev = scsi_get_device(shost);
         if (host_dev && host_dev->dma_mask)
-                bounce_limit = *host_dev->dma_mask;
+                bounce_limit = dma_max_pfn(host_dev) << PAGE_SHIFT;

         return bounce_limit;
 }
+3 -2
drivers/staging/dwc2/platform.c
···
          */
         if (!dev->dev.dma_mask)
                 dev->dev.dma_mask = &dev->dev.coherent_dma_mask;
-        if (!dev->dev.coherent_dma_mask)
-                dev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
+        retval = dma_set_coherent_mask(&dev->dev, DMA_BIT_MASK(32));
+        if (retval)
+                return retval;

         irq = platform_get_irq(dev, 0);
         if (irq < 0) {
+2 -15
drivers/staging/et131x/et131x.c
···
         pci_set_master(pdev);

         /* Check the DMA addressing support of this device */
-        if (!dma_set_mask(&pdev->dev, DMA_BIT_MASK(64))) {
-                rc = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64));
-                if (rc < 0) {
-                        dev_err(&pdev->dev,
-                                "Unable to obtain 64 bit DMA for consistent allocations\n");
-                        goto err_release_res;
-                }
-        } else if (!dma_set_mask(&pdev->dev, DMA_BIT_MASK(32))) {
-                rc = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
-                if (rc < 0) {
-                        dev_err(&pdev->dev,
-                                "Unable to obtain 32 bit DMA for consistent allocations\n");
-                        goto err_release_res;
-                }
-        } else {
+        if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) &&
+            dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))) {
                 dev_err(&pdev->dev, "No usable DMA addressing method\n");
                 rc = -EIO;
                 goto err_release_res;
+6 -2
drivers/staging/imx-drm/imx-drm-core.c
···

 static int imx_drm_platform_probe(struct platform_device *pdev)
 {
+        int ret;
+
+        ret = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
+        if (ret)
+                return ret;
+
         imx_drm_device->dev = &pdev->dev;

         return drm_platform_init(&imx_drm_driver, pdev);
···
                 ret = PTR_ERR(imx_drm_pdev);
                 goto err_pdev;
         }
-
-        imx_drm_pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32),

         ret = platform_driver_register(&imx_drm_pdrv);
         if (ret)
+3 -1
drivers/staging/imx-drm/ipuv3-crtc.c
···
         if (!pdata)
                 return -EINVAL;

-        pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
+        ret = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
+        if (ret)
+                return ret;

         ipu_crtc = devm_kzalloc(&pdev->dev, sizeof(*ipu_crtc), GFP_KERNEL);
         if (!ipu_crtc)
+1 -4
drivers/staging/media/dt3155v4l/dt3155v4l.c
···
         int err;
         struct dt3155_priv *pd;

-        err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
-        if (err)
-                return -ENODEV;
-        err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
+        err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
         if (err)
                 return -ENODEV;
         pd = kzalloc(sizeof(*pd), GFP_KERNEL);
+3 -4
drivers/usb/chipidea/ci_hdrc_imx.c
···

         pdata.phy = data->phy;

-        if (!pdev->dev.dma_mask)
-                pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask;
-        if (!pdev->dev.coherent_dma_mask)
-                pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
+        ret = dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+        if (ret)
+                goto err_clk;

         if (data->usbmisc_data) {
                 ret = imx_usbmisc_init(data->usbmisc_data);
+3 -4
drivers/usb/dwc3/dwc3-exynos.c
···
          * Since shared usb code relies on it, set it here for now.
          * Once we move to full device tree support this will vanish off.
          */
-        if (!dev->dma_mask)
-                dev->dma_mask = &dev->coherent_dma_mask;
-        if (!dev->coherent_dma_mask)
-                dev->coherent_dma_mask = DMA_BIT_MASK(32);
+        ret = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(32));
+        if (ret)
+                goto err1;

         platform_set_drvdata(pdev, exynos);
+3 -1
drivers/usb/gadget/lpc32xx_udc.c
···
                 udc->isp1301_i2c_client->addr);

         pdev->dev.dma_mask = &lpc32xx_usbd_dmamask;
-        pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
+        retval = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
+        if (retval)
+                goto resource_fail;

         udc->board = &lpc32xx_usbddata;
+1 -2
drivers/usb/host/bcma-hcd.c
···

         /* TODO: Probably need checks here; is the core connected? */

-        if (dma_set_mask(dev->dma_dev, DMA_BIT_MASK(32)) ||
-            dma_set_coherent_mask(dev->dma_dev, DMA_BIT_MASK(32)))
+        if (dma_set_mask_and_coherent(dev->dma_dev, DMA_BIT_MASK(32)))
                 return -EOPNOTSUPP;

         usb_dev = kzalloc(sizeof(struct bcma_hcd_device), GFP_KERNEL);
+3 -4
drivers/usb/host/ehci-atmel.c
···
          * Since shared usb code relies on it, set it here for now.
          * Once we have dma capability bindings this can go away.
          */
-        if (!pdev->dev.dma_mask)
-                pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask;
-        if (!pdev->dev.coherent_dma_mask)
-                pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
+        retval = dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+        if (retval)
+                goto fail_create_hcd;

         hcd = usb_create_hcd(driver, &pdev->dev, dev_name(&pdev->dev));
         if (!hcd) {
+3 -4
drivers/usb/host/ehci-exynos.c
···
          * Since shared usb code relies on it, set it here for now.
          * Once we move to full device tree support this will vanish off.
          */
-        if (!pdev->dev.dma_mask)
-                pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask;
-        if (!pdev->dev.coherent_dma_mask)
-                pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
+        err = dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+        if (err)
+                return err;

         exynos_setup_vbus_gpio(pdev);
+3 -1
drivers/usb/host/ehci-octeon.c
···
          * We can DMA from anywhere. But the descriptors must be in
          * the lower 4GB.
          */
-        pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
         pdev->dev.dma_mask = &ehci_octeon_dma_mask;
+        ret = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
+        if (ret)
+                return ret;

         hcd = usb_create_hcd(&ehci_octeon_hc_driver, &pdev->dev, "octeon");
         if (!hcd)
+5 -5
drivers/usb/host/ehci-omap.c
···
         struct resource *res;
         struct usb_hcd *hcd;
         void __iomem *regs;
-        int ret = -ENODEV;
+        int ret;
         int irq;
         int i;
         struct omap_hcd *omap;
···
          * Since shared usb code relies on it, set it here for now.
          * Once we have dma capability bindings this can go away.
          */
-        if (!dev->dma_mask)
-                dev->dma_mask = &dev->coherent_dma_mask;
-        if (!dev->coherent_dma_mask)
-                dev->coherent_dma_mask = DMA_BIT_MASK(32);
+        ret = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(32));
+        if (ret)
+                return ret;

+        ret = -ENODEV;
         hcd = usb_create_hcd(&ehci_omap_hc_driver, dev,
                         dev_name(dev));
         if (!hcd) {
+3 -4
drivers/usb/host/ehci-orion.c
···
          * set. Since shared usb code relies on it, set it here for
          * now. Once we have dma capability bindings this can go away.
          */
-        if (!pdev->dev.dma_mask)
-                pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask;
-        if (!pdev->dev.coherent_dma_mask)
-                pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
+        err = dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+        if (err)
+                goto err1;

         if (!request_mem_region(res->start, resource_size(res),
                                 ehci_orion_hc_driver.description)) {
+5 -5
drivers/usb/host/ehci-platform.c
···
         struct resource *res_mem;
         struct usb_ehci_pdata *pdata;
         int irq;
-        int err = -ENOMEM;
+        int err;

         if (usb_disabled())
                 return -ENODEV;
···
          */
         if (!dev_get_platdata(&dev->dev))
                 dev->dev.platform_data = &ehci_platform_defaults;
-        if (!dev->dev.dma_mask)
-                dev->dev.dma_mask = &dev->dev.coherent_dma_mask;
-        if (!dev->dev.coherent_dma_mask)
-                dev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
+
+        err = dma_coerce_mask_and_coherent(&dev->dev, DMA_BIT_MASK(32));
+        if (err)
+                return err;

         pdata = dev_get_platdata(&dev->dev);
+3 -4
drivers/usb/host/ehci-spear.c
···
          * Since shared usb code relies on it, set it here for now.
          * Once we have dma capability bindings this can go away.
          */
-        if (!pdev->dev.dma_mask)
-                pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask;
-        if (!pdev->dev.coherent_dma_mask)
-                pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
+        retval = dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+        if (retval)
+                goto fail;

         usbh_clk = devm_clk_get(&pdev->dev, NULL);
         if (IS_ERR(usbh_clk)) {
+3 -4
drivers/usb/host/ehci-tegra.c
···
          * Since shared usb code relies on it, set it here for now.
          * Once we have dma capability bindings this can go away.
          */
-        if (!pdev->dev.dma_mask)
-                pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask;
-        if (!pdev->dev.coherent_dma_mask)
-                pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
+        err = dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+        if (err)
+                return err;

         hcd = usb_create_hcd(&tegra_ehci_hc_driver, &pdev->dev,
                              dev_name(&pdev->dev));
+4 -5
drivers/usb/host/ohci-at91.c
···
 static int ohci_at91_of_init(struct platform_device *pdev)
 {
         struct device_node *np = pdev->dev.of_node;
-        int i, gpio;
+        int i, gpio, ret;
         enum of_gpio_flags flags;
         struct at91_usbh_data *pdata;
         u32 ports;
···
          * Since shared usb code relies on it, set it here for now.
          * Once we have dma capability bindings this can go away.
          */
-        if (!pdev->dev.dma_mask)
-                pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask;
-        if (!pdev->dev.coherent_dma_mask)
-                pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
+        ret = dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+        if (ret)
+                return ret;

         pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL);
         if (!pdata)
+3 -4
drivers/usb/host/ohci-exynos.c
···
          * Since shared usb code relies on it, set it here for now.
          * Once we move to full device tree support this will vanish off.
          */
-        if (!pdev->dev.dma_mask)
-                pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask;
-        if (!pdev->dev.coherent_dma_mask)
-                pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
+        err = dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+        if (err)
+                return err;

         hcd = usb_create_hcd(&exynos_ohci_hc_driver,
                              &pdev->dev, dev_name(&pdev->dev));
+3 -2
drivers/usb/host/ohci-nxp.c
···
                 return -EPROBE_DEFER;
         }

-        pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
-        pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask;
+        ret = dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+        if (ret)
+                goto fail_disable;

         dev_dbg(&pdev->dev, "%s: " DRIVER_DESC " (nxp)\n", hcd_name);
         if (usb_disabled()) {
+3 -2
drivers/usb/host/ohci-octeon.c
···
         }

         /* Ohci is a 32-bit device. */
-        pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
-        pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask;
+        ret = dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+        if (ret)
+                return ret;

         hcd = usb_create_hcd(&ohci_octeon_hc_driver, &pdev->dev, "octeon");
         if (!hcd)
+5 -5
drivers/usb/host/ohci-omap3.c
···
         struct usb_hcd *hcd = NULL;
         void __iomem *regs = NULL;
         struct resource *res;
-        int ret = -ENODEV;
+        int ret;
         int irq;

         if (usb_disabled())
···
          * Since shared usb code relies on it, set it here for now.
          * Once we have dma capability bindings this can go away.
          */
-        if (!dev->dma_mask)
-                dev->dma_mask = &dev->coherent_dma_mask;
-        if (!dev->coherent_dma_mask)
-                dev->coherent_dma_mask = DMA_BIT_MASK(32);
+        ret = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(32));
+        if (ret)
+                goto err_io;

+        ret = -ENODEV;
         hcd = usb_create_hcd(&ohci_omap3_hc_driver, dev,
                         dev_name(dev));
         if (!hcd) {
+4 -4
drivers/usb/host/ohci-pxa27x.c
···
         struct device_node *np = pdev->dev.of_node;
         struct pxaohci_platform_data *pdata;
         u32 tmp;
+        int ret;

         if (!np)
                 return 0;
···
          * Since shared usb code relies on it, set it here for now.
          * Once we have dma capability bindings this can go away.
          */
-        if (!pdev->dev.dma_mask)
-                pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask;
-        if (!pdev->dev.coherent_dma_mask)
-                pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
+        ret = dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+        if (ret)
+                return ret;

         pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL);
         if (!pdata)
+6
drivers/usb/host/ohci-sa1111.c
···
         if (usb_disabled())
                 return -ENODEV;

+        /*
+         * We don't call dma_set_mask_and_coherent() here because the
+         * DMA mask has already been appropriately set up by the core
+         * SA-1111 bus code (which includes bug workarounds.)
+         */
+
         hcd = usb_create_hcd(&ohci_sa1111_hc_driver, &dev->dev, "sa1111");
         if (!hcd)
                 return -ENOMEM;
+3 -4
drivers/usb/host/ohci-spear.c
···
          * Since shared usb code relies on it, set it here for now.
          * Once we have dma capability bindings this can go away.
          */
-        if (!pdev->dev.dma_mask)
-                pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask;
-        if (!pdev->dev.coherent_dma_mask)
-                pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
+        retval = dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+        if (retval)
+                goto fail;

         usbh_clk = devm_clk_get(&pdev->dev, NULL);
         if (IS_ERR(usbh_clk)) {
+1 -2
drivers/usb/host/ssb-hcd.c
···

         /* TODO: Probably need checks here; is the core connected? */

-        if (dma_set_mask(dev->dma_dev, DMA_BIT_MASK(32)) ||
-            dma_set_coherent_mask(dev->dma_dev, DMA_BIT_MASK(32)))
+        if (dma_set_mask_and_coherent(dev->dma_dev, DMA_BIT_MASK(32)))
                 return -EOPNOTSUPP;

         usb_dev = kzalloc(sizeof(struct ssb_hcd_device), GFP_KERNEL);
+3 -4
drivers/usb/host/uhci-platform.c
···
          * Since shared usb code relies on it, set it here for now.
          * Once we have dma capability bindings this can go away.
          */
-        if (!pdev->dev.dma_mask)
-                pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask;
-        if (!pdev->dev.coherent_dma_mask)
-                pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
+        ret = dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+        if (ret)
+                return ret;

         hcd = usb_create_hcd(&uhci_platform_hc_driver, &pdev->dev,
                              pdev->name);
+5
drivers/video/amba-clcd.c
···
  *
  * ARM PrimeCell PL110 Color LCD Controller
  */
+#include <linux/dma-mapping.h>
 #include <linux/module.h>
 #include <linux/kernel.h>
 #include <linux/errno.h>
···

         if (!board)
                 return -EINVAL;
+
+        ret = dma_set_mask_and_coherent(&dev->dev, DMA_BIT_MASK(32));
+        if (ret)
+                goto out;

         ret = amba_request_regions(dev, NULL);
         if (ret) {
-2
include/linux/amba/bus.h
···
         struct device           dev;
         struct resource         res;
         struct clk              *pclk;
-        u64                     dma_mask;
         unsigned int            periphid;
         unsigned int            irq[AMBA_NR_IRQS];
 };
···
 struct amba_device name##_device = {                            \
         .dev = __AMBA_DEV(busid, data, ~0ULL),                  \
         .res = DEFINE_RES_MEM(base, SZ_4K),                     \
-        .dma_mask = ~0ULL,                                      \
         .irq = irqs,                                            \
         .periphid = id,                                         \
 }
+31
include/linux/dma-mapping.h
···
 }
 #endif

+/*
+ * Set both the DMA mask and the coherent DMA mask to the same thing.
+ * Note that we don't check the return value from dma_set_coherent_mask()
+ * as the DMA API guarantees that the coherent DMA mask can be set to
+ * the same or smaller than the streaming DMA mask.
+ */
+static inline int dma_set_mask_and_coherent(struct device *dev, u64 mask)
+{
+        int rc = dma_set_mask(dev, mask);
+        if (rc == 0)
+                dma_set_coherent_mask(dev, mask);
+        return rc;
+}
+
+/*
+ * Similar to the above, except it deals with the case where the device
+ * does not have dev->dma_mask appropriately setup.
+ */
+static inline int dma_coerce_mask_and_coherent(struct device *dev, u64 mask)
+{
+        dev->dma_mask = &dev->coherent_dma_mask;
+        return dma_set_mask_and_coherent(dev, mask);
+}
+
 extern u64 dma_get_required_mask(struct device *dev);

 static inline unsigned int dma_get_max_seg_size(struct device *dev)
···
         } else
                 return -EIO;
 }
+
+#ifndef dma_max_pfn
+static inline unsigned long dma_max_pfn(struct device *dev)
+{
+        return *dev->dma_mask >> PAGE_SHIFT;
+}
+#endif

 static inline void *dma_zalloc_coherent(struct device *dev, size_t size,
                                         dma_addr_t *dma_handle, gfp_t flag)
+4 -6
sound/arm/pxa2xx-pcm.c
···
  */

 #include <linux/module.h>
+#include <linux/dma-mapping.h>
 #include <linux/dmaengine.h>

 #include <sound/core.h>
···
         .mmap           = pxa2xx_pcm_mmap,
 };

-static u64 pxa2xx_pcm_dmamask = 0xffffffff;
-
 int pxa2xx_pcm_new(struct snd_card *card, struct pxa2xx_pcm_client *client,
         struct snd_pcm **rpcm)
 {
···
         pcm->private_data = client;
         pcm->private_free = pxa2xx_pcm_free_dma_buffers;

-        if (!card->dev->dma_mask)
-                card->dev->dma_mask = &pxa2xx_pcm_dmamask;
-        if (!card->dev->coherent_dma_mask)
-                card->dev->coherent_dma_mask = 0xffffffff;
+        ret = dma_coerce_mask_and_coherent(card->dev, DMA_BIT_MASK(32));
+        if (ret)
+                goto out;

         if (play) {
                 int stream = SNDRV_PCM_STREAM_PLAYBACK;
+4 -7
sound/soc/atmel/atmel-pcm.c
···
 }
 EXPORT_SYMBOL_GPL(atmel_pcm_mmap);

-static u64 atmel_pcm_dmamask = DMA_BIT_MASK(32);
-
 int atmel_pcm_new(struct snd_soc_pcm_runtime *rtd)
 {
         struct snd_card *card = rtd->card->snd_card;
         struct snd_pcm *pcm = rtd->pcm;
-        int ret = 0;
+        int ret;

-        if (!card->dev->dma_mask)
-                card->dev->dma_mask = &atmel_pcm_dmamask;
-        if (!card->dev->coherent_dma_mask)
-                card->dev->coherent_dma_mask = DMA_BIT_MASK(32);
+        ret = dma_coerce_mask_and_coherent(card->dev, DMA_BIT_MASK(32));
+        if (ret)
+                return ret;

         if (pcm->streams[SNDRV_PCM_STREAM_PLAYBACK].substream) {
                 pr_debug("atmel-pcm: allocating PCM playback DMA buffer\n");
+4 -7
sound/soc/blackfin/bf5xx-ac97-pcm.c
···
         }
 }

-static u64 bf5xx_pcm_dmamask = DMA_BIT_MASK(32);
-
 static int bf5xx_pcm_ac97_new(struct snd_soc_pcm_runtime *rtd)
 {
         struct snd_card *card = rtd->card->snd_card;
         struct snd_pcm *pcm = rtd->pcm;
-        int ret = 0;
+        int ret;

         pr_debug("%s enter\n", __func__);
-        if (!card->dev->dma_mask)
-                card->dev->dma_mask = &bf5xx_pcm_dmamask;
-        if (!card->dev->coherent_dma_mask)
-                card->dev->coherent_dma_mask = DMA_BIT_MASK(32);
+        ret = dma_coerce_mask_and_coherent(card->dev, DMA_BIT_MASK(32));
+        if (ret)
+                return ret;

         if (pcm->streams[SNDRV_PCM_STREAM_PLAYBACK].substream) {
                 ret = bf5xx_pcm_preallocate_dma_buffer(pcm,
+4 -6
sound/soc/blackfin/bf5xx-i2s-pcm.c
···
         .silence        = bf5xx_pcm_silence,
 };

-static u64 bf5xx_pcm_dmamask = DMA_BIT_MASK(32);
-
 static int bf5xx_pcm_i2s_new(struct snd_soc_pcm_runtime *rtd)
 {
         struct snd_card *card = rtd->card->snd_card;
         size_t size = bf5xx_pcm_hardware.buffer_bytes_max;
+        int ret;

         pr_debug("%s enter\n", __func__);
-        if (!card->dev->dma_mask)
-                card->dev->dma_mask = &bf5xx_pcm_dmamask;
-        if (!card->dev->coherent_dma_mask)
-                card->dev->coherent_dma_mask = DMA_BIT_MASK(32);
+        ret = dma_coerce_mask_and_coherent(card->dev, DMA_BIT_MASK(32));
+        if (ret)
+                return ret;

         return snd_pcm_lib_preallocate_pages_for_all(rtd->pcm,
                 SNDRV_DMA_TYPE_DEV, card->dev, size, size);
+3 -6
sound/soc/davinci/davinci-pcm.c
···
         }
 }

-static u64 davinci_pcm_dmamask = DMA_BIT_MASK(32);
-
 static int davinci_pcm_new(struct snd_soc_pcm_runtime *rtd)
 {
         struct snd_card *card = rtd->card->snd_card;
         struct snd_pcm *pcm = rtd->pcm;
         int ret;

-        if (!card->dev->dma_mask)
-                card->dev->dma_mask = &davinci_pcm_dmamask;
-        if (!card->dev->coherent_dma_mask)
-                card->dev->coherent_dma_mask = DMA_BIT_MASK(32);
+        ret = dma_coerce_mask_and_coherent(card->dev, DMA_BIT_MASK(32));
+        if (ret)
+                return ret;

         if (pcm->streams[SNDRV_PCM_STREAM_PLAYBACK].substream) {
                 ret = davinci_pcm_preallocate_dma_buffer(pcm,
+3 -6
sound/soc/fsl/fsl_dma.c
···
 {
         struct snd_card *card = rtd->card->snd_card;
         struct snd_pcm *pcm = rtd->pcm;
-        static u64 fsl_dma_dmamask = DMA_BIT_MASK(36);
         int ret;

-        if (!card->dev->dma_mask)
-                card->dev->dma_mask = &fsl_dma_dmamask;
-
-        if (!card->dev->coherent_dma_mask)
-                card->dev->coherent_dma_mask = fsl_dma_dmamask;
+        ret = dma_coerce_mask_and_coherent(card->dev, DMA_BIT_MASK(36));
+        if (ret)
+                return ret;

         /* Some codecs have separate DAIs for playback and capture, so we
          * should allocate a DMA buffer only for the streams that are valid.
+5 -7
sound/soc/fsl/imx-pcm-fiq.c
···
         return 0;
 }

-static u64 imx_pcm_dmamask = DMA_BIT_MASK(32);
-
 static int imx_pcm_new(struct snd_soc_pcm_runtime *rtd)
 {
         struct snd_card *card = rtd->card->snd_card;
         struct snd_pcm *pcm = rtd->pcm;
-        int ret = 0;
+        int ret;

-        if (!card->dev->dma_mask)
-                card->dev->dma_mask = &imx_pcm_dmamask;
-        if (!card->dev->coherent_dma_mask)
-                card->dev->coherent_dma_mask = DMA_BIT_MASK(32);
+        ret = dma_coerce_mask_and_coherent(card->dev, DMA_BIT_MASK(32));
+        if (ret)
+                return ret;
+
         if (pcm->streams[SNDRV_PCM_STREAM_PLAYBACK].substream) {
                 ret = imx_pcm_preallocate_dma_buffer(pcm,
                         SNDRV_PCM_STREAM_PLAYBACK);
+4 -6
sound/soc/fsl/mpc5200_dma.c
···
         .hw_params      = psc_dma_hw_params,
 };

-static u64 psc_dma_dmamask = DMA_BIT_MASK(32);
 static int psc_dma_new(struct snd_soc_pcm_runtime *rtd)
 {
         struct snd_card *card = rtd->card->snd_card;
···
         struct snd_pcm *pcm = rtd->pcm;
         struct psc_dma *psc_dma = snd_soc_dai_get_drvdata(rtd->cpu_dai);
         size_t size = psc_dma_hardware.buffer_bytes_max;
-        int rc = 0;
+        int rc;

         dev_dbg(rtd->platform->dev, "psc_dma_new(card=%p, dai=%p, pcm=%p)\n",
                 card, dai, pcm);

-        if (!card->dev->dma_mask)
-                card->dev->dma_mask = &psc_dma_dmamask;
-        if (!card->dev->coherent_dma_mask)
-                card->dev->coherent_dma_mask = DMA_BIT_MASK(32);
+        rc = dma_coerce_mask_and_coherent(card->dev, DMA_BIT_MASK(32));
+        if (rc)
+                return rc;

         if (pcm->streams[SNDRV_PCM_STREAM_PLAYBACK].substream) {
                 rc = snd_dma_alloc_pages(SNDRV_DMA_TYPE_DEV, pcm->card->dev,
sound/soc/jz4740/jz4740-pcm.c (+4 -8)
@@ -297,19 +297,15 @@
 	}
 }
 
-static u64 jz4740_pcm_dmamask = DMA_BIT_MASK(32);
-
 static int jz4740_pcm_new(struct snd_soc_pcm_runtime *rtd)
 {
 	struct snd_card *card = rtd->card->snd_card;
 	struct snd_pcm *pcm = rtd->pcm;
-	int ret = 0;
+	int ret;
 
-	if (!card->dev->dma_mask)
-		card->dev->dma_mask = &jz4740_pcm_dmamask;
-
-	if (!card->dev->coherent_dma_mask)
-		card->dev->coherent_dma_mask = DMA_BIT_MASK(32);
+	ret = dma_coerce_mask_and_coherent(card->dev, DMA_BIT_MASK(32));
+	if (ret)
+		return ret;
 
 	if (pcm->streams[SNDRV_PCM_STREAM_PLAYBACK].substream) {
 		ret = jz4740_pcm_preallocate_dma_buffer(pcm,
sound/soc/kirkwood/kirkwood-dma.c (+3 -6)
@@ -57,8 +57,6 @@
 	.fifo_size	= 0,
 };
 
-static u64 kirkwood_dma_dmamask = DMA_BIT_MASK(32);
-
 static irqreturn_t kirkwood_dma_irq(int irq, void *dev_id)
 {
 	struct kirkwood_dma_data *priv = dev_id;
@@ -288,10 +286,9 @@
 	struct snd_pcm *pcm = rtd->pcm;
 	int ret;
 
-	if (!card->dev->dma_mask)
-		card->dev->dma_mask = &kirkwood_dma_dmamask;
-	if (!card->dev->coherent_dma_mask)
-		card->dev->coherent_dma_mask = DMA_BIT_MASK(32);
+	ret = dma_coerce_mask_and_coherent(card->dev, DMA_BIT_MASK(32));
+	if (ret)
+		return ret;
 
 	if (pcm->streams[SNDRV_PCM_STREAM_PLAYBACK].substream) {
 		ret = kirkwood_dma_preallocate_dma_buffer(pcm,
sound/soc/nuc900/nuc900-pcm.c (+4 -5)
@@ -314,16 +314,15 @@
 	snd_pcm_lib_preallocate_free_for_all(pcm);
 }
 
-static u64 nuc900_pcm_dmamask = DMA_BIT_MASK(32);
 static int nuc900_dma_new(struct snd_soc_pcm_runtime *rtd)
 {
 	struct snd_card *card = rtd->card->snd_card;
 	struct snd_pcm *pcm = rtd->pcm;
+	int ret;
 
-	if (!card->dev->dma_mask)
-		card->dev->dma_mask = &nuc900_pcm_dmamask;
-	if (!card->dev->coherent_dma_mask)
-		card->dev->coherent_dma_mask = DMA_BIT_MASK(32);
+	ret = dma_coerce_mask_and_coherent(card->dev, DMA_BIT_MASK(32));
+	if (ret)
+		return ret;
 
 	snd_pcm_lib_preallocate_pages_for_all(pcm, SNDRV_DMA_TYPE_DEV,
 		card->dev, 4 * 1024, (4 * 1024) - 1);
sound/soc/omap/omap-pcm.c (+4 -7)
@@ -156,8 +156,6 @@
 	.mmap		= omap_pcm_mmap,
 };
 
-static u64 omap_pcm_dmamask = DMA_BIT_MASK(64);
-
 static int omap_pcm_preallocate_dma_buffer(struct snd_pcm *pcm,
 	int stream)
 {
@@ -200,12 +198,11 @@
 {
 	struct snd_card *card = rtd->card->snd_card;
 	struct snd_pcm *pcm = rtd->pcm;
-	int ret = 0;
+	int ret;
 
-	if (!card->dev->dma_mask)
-		card->dev->dma_mask = &omap_pcm_dmamask;
-	if (!card->dev->coherent_dma_mask)
-		card->dev->coherent_dma_mask = DMA_BIT_MASK(64);
+	ret = dma_coerce_mask_and_coherent(card->dev, DMA_BIT_MASK(64));
+	if (ret)
+		return ret;
 
 	if (pcm->streams[SNDRV_PCM_STREAM_PLAYBACK].substream) {
 		ret = omap_pcm_preallocate_dma_buffer(pcm,
+4 -7
sound/soc/pxa/pxa2xx-pcm.c
··· 87 87 .mmap = pxa2xx_pcm_mmap, 88 88 }; 89 89 90 - static u64 pxa2xx_pcm_dmamask = DMA_BIT_MASK(32); 91 - 92 90 static int pxa2xx_soc_pcm_new(struct snd_soc_pcm_runtime *rtd) 93 91 { 94 92 struct snd_card *card = rtd->card->snd_card; 95 93 struct snd_pcm *pcm = rtd->pcm; 96 - int ret = 0; 94 + int ret; 97 95 98 - if (!card->dev->dma_mask) 99 - card->dev->dma_mask = &pxa2xx_pcm_dmamask; 100 - if (!card->dev->coherent_dma_mask) 101 - card->dev->coherent_dma_mask = DMA_BIT_MASK(32); 96 + ret = dma_coerce_mask_and_coherent(card->dev, DMA_BIT_MASK(32)); 97 + if (ret) 98 + return ret; 102 99 103 100 if (pcm->streams[SNDRV_PCM_STREAM_PLAYBACK].substream) { 104 101 ret = pxa2xx_pcm_preallocate_dma_buffer(pcm,
sound/soc/s6000/s6000-pcm.c (+3 -6)
@@ -445,8 +445,6 @@
 	snd_pcm_lib_preallocate_free_for_all(pcm);
 }
 
-static u64 s6000_pcm_dmamask = DMA_BIT_MASK(32);
-
 static int s6000_pcm_new(struct snd_soc_pcm_runtime *runtime)
 {
 	struct snd_card *card = runtime->card->snd_card;
@@ -455,10 +453,9 @@
 	params = snd_soc_dai_get_dma_data(runtime->cpu_dai,
 		pcm->streams[SNDRV_PCM_STREAM_PLAYBACK].substream);
 
-	if (!card->dev->dma_mask)
-		card->dev->dma_mask = &s6000_pcm_dmamask;
-	if (!card->dev->coherent_dma_mask)
-		card->dev->coherent_dma_mask = DMA_BIT_MASK(32);
+	res = dma_coerce_mask_and_coherent(card->dev, DMA_BIT_MASK(32));
+	if (res)
+		return res;
 
 	if (params->dma_in) {
 		s6dmac_disable_chan(DMA_MASK_DMAC(params->dma_in),
sound/soc/samsung/dma.c (+4 -7)
@@ -406,20 +406,17 @@
 	}
 }
 
-static u64 dma_mask = DMA_BIT_MASK(32);
-
 static int dma_new(struct snd_soc_pcm_runtime *rtd)
 {
 	struct snd_card *card = rtd->card->snd_card;
 	struct snd_pcm *pcm = rtd->pcm;
-	int ret = 0;
+	int ret;
 
 	pr_debug("Entered %s\n", __func__);
 
-	if (!card->dev->dma_mask)
-		card->dev->dma_mask = &dma_mask;
-	if (!card->dev->coherent_dma_mask)
-		card->dev->coherent_dma_mask = DMA_BIT_MASK(32);
+	ret = dma_coerce_mask_and_coherent(card->dev, DMA_BIT_MASK(32));
+	if (ret)
+		return ret;
 
 	if (pcm->streams[SNDRV_PCM_STREAM_PLAYBACK].substream) {
 		ret = preallocate_dma_buffer(pcm,
sound/soc/samsung/idma.c (+4 -7)
@@ -383,18 +383,15 @@
 	return 0;
 }
 
-static u64 idma_mask = DMA_BIT_MASK(32);
-
 static int idma_new(struct snd_soc_pcm_runtime *rtd)
 {
 	struct snd_card *card = rtd->card->snd_card;
 	struct snd_pcm *pcm = rtd->pcm;
-	int ret = 0;
+	int ret;
 
-	if (!card->dev->dma_mask)
-		card->dev->dma_mask = &idma_mask;
-	if (!card->dev->coherent_dma_mask)
-		card->dev->coherent_dma_mask = DMA_BIT_MASK(32);
+	ret = dma_coerce_mask_and_coherent(card->dev, DMA_BIT_MASK(32));
+	if (ret)
+		return ret;
 
 	if (pcm->streams[SNDRV_PCM_STREAM_PLAYBACK].substream) {
 		ret = preallocate_idma_buffer(pcm,