Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

DMA-API: Clarify physical/bus address distinction

The DMA-API documentation sometimes refers to "physical addresses" when it
really means "bus addresses." Sometimes these are identical, but they may
be different if the bridge leading to the bus performs address translation.
Update the documentation to use "bus address" when appropriate.

Also, consistently capitalize "DMA", use parens with function names, use
dev_printk() in examples, and reword a few sections for clarity.

No functional change; documentation changes only.

Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: James Bottomley <jbottomley@Parallels.com>
Acked-by: Randy Dunlap <rdunlap@infradead.org>

+203 -137
+123 -69
Documentation/DMA-API-HOWTO.txt
···
 with example pseudo-code. For a concise description of the API, see
 DMA-API.txt.

-Most of the 64bit platforms have special hardware that translates bus
-addresses (DMA addresses) into physical addresses. This is similar to
-how page tables and/or a TLB translates virtual addresses to physical
-addresses on a CPU. This is needed so that e.g. PCI devices can
-access with a Single Address Cycle (32bit DMA address) any page in the
-64bit physical address space. Previously in Linux those 64bit
-platforms had to set artificial limits on the maximum RAM size in the
-system, so that the virt_to_bus() static scheme works (the DMA address
-translation tables were simply filled on bootup to map each bus
-address to the physical page __pa(bus_to_virt())).
+CPU and DMA addresses
+
+There are several kinds of addresses involved in the DMA API, and it's
+important to understand the differences.
+
+The kernel normally uses virtual addresses. Any address returned by
+kmalloc(), vmalloc(), and similar interfaces is a virtual address and can
+be stored in a "void *".
+
+The virtual memory system (TLB, page tables, etc.) translates virtual
+addresses to CPU physical addresses, which are stored as "phys_addr_t" or
+"resource_size_t". The kernel manages device resources like registers as
+physical addresses. These are the addresses in /proc/iomem. The physical
+address is not directly useful to a driver; it must use ioremap() to map
+the space and produce a virtual address.
+
+I/O devices use a third kind of address: a "bus address" or "DMA address".
+If a device has registers at an MMIO address, or if it performs DMA to read
+or write system memory, the addresses used by the device are bus addresses.
+In some systems, bus addresses are identical to CPU physical addresses, but
+in general they are not. IOMMUs and host bridges can produce arbitrary
+mappings between physical and bus addresses.
+
+Here's a picture and some examples:
+
+               CPU                  CPU                  Bus
+             Virtual              Physical             Address
+             Address              Address               Space
+              Space                Space
+
+            +-------+             +------+             +------+
+            |       |             |MMIO  |   Offset    |      |
+            |       |  Virtual    |Space |   applied   |      |
+          C +-------+ --------> B +------+ ----------> +------+ A
+            |       |  mapping    |      |   by host   |      |
+  +-----+   |       |             |      |   bridge    |      |   +--------+
+  |     |   |       |             +------+             |      |   |        |
+  | CPU |   |       |             | RAM  |             |      |   | Device |
+  |     |   |       |             |      |             |      |   |        |
+  +-----+   +-------+             +------+             +------+   +--------+
+            |       |  Virtual    |Buffer|   Mapping   |      |
+          X +-------+ --------> Y +------+ <---------- +------+ Z
+            |       |  mapping    | RAM  |   by IOMMU
+            |       |             |      |
+            |       |             |      |
+            +-------+             +------+
+
+During the enumeration process, the kernel learns about I/O devices and
+their MMIO space and the host bridges that connect them to the system. For
+example, if a PCI device has a BAR, the kernel reads the bus address (A)
+from the BAR and converts it to a CPU physical address (B). The address B
+is stored in a struct resource and usually exposed via /proc/iomem. When a
+driver claims a device, it typically uses ioremap() to map physical address
+B at a virtual address (C). It can then use, e.g., ioread32(C), to access
+the device registers at bus address A.
+
+If the device supports DMA, the driver sets up a buffer using kmalloc() or
+a similar interface, which returns a virtual address (X). The virtual
+memory system maps X to a physical address (Y) in system RAM. The driver
+can use virtual address X to access the buffer, but the device itself
+cannot because DMA doesn't go through the CPU virtual memory system.
+
+In some simple systems, the device can do DMA directly to physical address
+Y. But in many others, there is IOMMU hardware that translates bus
+addresses to physical addresses, e.g., it translates Z to Y. This is part
+of the reason for the DMA API: the driver can give a virtual address X to
+an interface like dma_map_single(), which sets up any required IOMMU
+mapping and returns the bus address Z. The driver then tells the device to
+do DMA to Z, and the IOMMU maps it to the buffer at address Y in system
+RAM.

 So that Linux can use the dynamic DMA mapping, it needs some help from the
 drivers, namely it has to take into account that DMA addresses should be
···
 hardware exists.

 Note that the DMA API works with any bus independent of the underlying
-microprocessor architecture. You should use the DMA API rather than
-the bus specific DMA API (e.g. pci_dma_*).
+microprocessor architecture. You should use the DMA API rather than the
+bus-specific DMA API, i.e., use the dma_map_*() interfaces rather than the
+pci_map_*() interfaces.

 First of all, you should make sure

 #include <linux/dma-mapping.h>

-is in your driver. This file will obtain for you the definition of the
-dma_addr_t (which can hold any valid DMA address for the platform)
-type which should be used everywhere you hold a DMA (bus) address
-returned from the DMA mapping functions.
+is in your driver, which provides the definition of dma_addr_t. This type
+can hold any valid DMA or bus address for the platform and should be used
+everywhere you hold a DMA address returned from the DMA mapping functions.

 What memory is DMA'able?
···
 is a bit mask describing which bits of an address your device
 supports. It returns zero if your card can perform DMA properly on
 the machine given the address mask you provided. In general, the
-device struct of your device is embedded in the bus specific device
-struct of your device. For example, a pointer to the device struct of
-your PCI device is pdev->dev (pdev is a pointer to the PCI device
+device struct of your device is embedded in the bus-specific device
+struct of your device. For example, &pdev->dev is a pointer to the
+device struct of a PCI device (pdev is a pointer to the PCI device
 struct of your device).

 If it returns non-zero, your device cannot perform DMA properly on
···
 The standard 32-bit addressing device would do something like this:

 	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
-		printk(KERN_WARNING
-		       "mydev: No suitable DMA available.\n");
+		dev_warn(dev, "mydev: No suitable DMA available\n");
 		goto ignore_this_device;
 	}
···
 	} else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) {
 		using_dac = 0;
 	} else {
-		printk(KERN_WARNING
-		       "mydev: No suitable DMA available.\n");
+		dev_warn(dev, "mydev: No suitable DMA available\n");
 		goto ignore_this_device;
 	}
···
 		using_dac = 0;
 		consistent_using_dac = 0;
 	} else {
-		printk(KERN_WARNING
-		       "mydev: No suitable DMA available.\n");
+		dev_warn(dev, "mydev: No suitable DMA available\n");
 		goto ignore_this_device;
 	}
···
 address you might do something like:

 	if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
-		printk(KERN_WARNING
-		       "mydev: 24-bit DMA addressing not available.\n");
+		dev_warn(dev, "mydev: 24-bit DMA addressing not available\n");
 		goto ignore_this_device;
 	}
···
 		card->playback_enabled = 1;
 	} else {
 		card->playback_enabled = 0;
-		printk(KERN_WARNING "%s: Playback disabled due to DMA limitations.\n",
+		dev_warn(dev, "%s: Playback disabled due to DMA limitations\n",
 		       card->name);
 	}
 	if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) {
 		card->record_enabled = 1;
 	} else {
 		card->record_enabled = 0;
-		printk(KERN_WARNING "%s: Record disabled due to DMA limitations.\n",
+		dev_warn(dev, "%s: Record disabled due to DMA limitations\n",
 		       card->name);
 	}
···
 Size is the length of the region you want to allocate, in bytes.

 This routine will allocate RAM for that region, so it acts similarly to
-__get_free_pages (but takes size instead of a page order). If your
+__get_free_pages() (but takes size instead of a page order). If your
 driver needs regions sized smaller than a page, you may prefer using
 the dma_pool interface, described below.
···
 dma_set_coherent_mask(). This is true of the dma_pool interface as
 well.

-dma_alloc_coherent returns two values: the virtual address which you
+dma_alloc_coherent() returns two values: the virtual address which you
 can use to access it from the CPU and dma_handle which you pass to the
 card.

-The cpu return address and the DMA bus master address are both
+The CPU virtual address and the DMA bus address are both
 guaranteed to be aligned to the smallest PAGE_SIZE order which
 is greater than or equal to the requested size. This invariant
 exists (for example) to guarantee that if you allocate a chunk
···
 	dma_free_coherent(dev, size, cpu_addr, dma_handle);

 where dev, size are the same as in the above call and cpu_addr and
-dma_handle are the values dma_alloc_coherent returned to you.
+dma_handle are the values dma_alloc_coherent() returned to you.
 This function may not be called in interrupt context.

 If your driver needs lots of smaller memory regions, you can write
-custom code to subdivide pages returned by dma_alloc_coherent,
+custom code to subdivide pages returned by dma_alloc_coherent(),
 or you can use the dma_pool API to do that. A dma_pool is like
-a kmem_cache, but it uses dma_alloc_coherent not __get_free_pages.
+a kmem_cache, but it uses dma_alloc_coherent(), not __get_free_pages().
 Also, it understands common hardware constraints for alignment,
 like queue heads needing to be aligned on N byte boundaries.
···
 power of two). If your device has no boundary crossing restrictions,
 pass 0 for alloc; passing 4096 says memory allocated from this pool
 must not cross 4KByte boundaries (but at that time it may be better to
-go for dma_alloc_coherent directly instead).
+use dma_alloc_coherent() directly instead).

-Allocate memory from a dma pool like this:
+Allocate memory from a DMA pool like this:

 	cpu_addr = dma_pool_alloc(pool, flags, &dma_handle);

 flags are SLAB_KERNEL if blocking is permitted (not in_interrupt nor
-holding SMP locks), SLAB_ATOMIC otherwise. Like dma_alloc_coherent,
+holding SMP locks), SLAB_ATOMIC otherwise. Like dma_alloc_coherent(),
 this returns two values, cpu_addr and dma_handle.

 Free memory that was allocated from a dma_pool like this:

 	dma_pool_free(pool, cpu_addr, dma_handle);

-where pool is what you passed to dma_pool_alloc, and cpu_addr and
-dma_handle are the values dma_pool_alloc returned. This function
+where pool is what you passed to dma_pool_alloc(), and cpu_addr and
+dma_handle are the values dma_pool_alloc() returned. This function
 may be called in interrupt context.

 Destroy a dma_pool by calling:

 	dma_pool_destroy(pool);

-Make sure you've called dma_pool_free for all memory allocated
+Make sure you've called dma_pool_free() for all memory allocated
 from a pool before you destroy the pool. This function may not
 be called in interrupt context.
···
 DMA_FROM_DEVICE
 DMA_NONE

-One should provide the exact DMA direction if you know it.
+You should provide the exact DMA direction if you know it.

 DMA_TO_DEVICE means "from main memory to the device"
 DMA_FROM_DEVICE means "from the device to main memory"
···
 	dma_unmap_single(dev, dma_handle, size, direction);

 You should call dma_mapping_error() as dma_map_single() could fail and return
-error. Not all dma implementations support dma_mapping_error() interface.
+error. Not all DMA implementations support the dma_mapping_error() interface.
 However, it is a good practice to call dma_mapping_error() interface, which
 will invoke the generic mapping error check interface. Doing so will ensure
-that the mapping code will work correctly on all dma implementations without
+that the mapping code will work correctly on all DMA implementations without
 any dependency on the specifics of the underlying implementation. Using the
 returned address without checking for errors could result in failures ranging
 from panics to silent data corruption. A couple of examples of incorrect ways
-to check for errors that make assumptions about the underlying dma
+to check for errors that make assumptions about the underlying DMA
 implementation are as follows and these are applicable to dma_map_page() as
 well.
···
 		goto map_error;
 	}

-You should call dma_unmap_single when the DMA activity is finished, e.g.
+You should call dma_unmap_single() when the DMA activity is finished, e.g.,
 from the interrupt which told you that the DMA transfer is done.

-Using cpu pointers like this for single mappings has a disadvantage,
+Using cpu pointers like this for single mappings has a disadvantage:
 you cannot reference HIGHMEM memory in this way. Thus, there is a
-map/unmap interface pair akin to dma_{map,unmap}_single. These
+map/unmap interface pair akin to dma_{map,unmap}_single(). These
 interfaces deal with page/offset pairs instead of cpu pointers.
 Specifically:
···
 You should call dma_mapping_error() as dma_map_page() could fail and return
 error as outlined under the dma_map_single() discussion.

-You should call dma_unmap_page when the DMA activity is finished, e.g.
+You should call dma_unmap_page() when the DMA activity is finished, e.g.,
 from the interrupt which told you that the DMA transfer is done.

 With scatterlists, you map a region gathered from several regions by:
···
 it should _NOT_ be the 'count' value _returned_ from the
 dma_map_sg call.

-Every dma_map_{single,sg} call should have its dma_unmap_{single,sg}
-counterpart, because the bus address space is a shared resource (although
-in some ports the mapping is per each BUS so less devices contend for the
-same bus address space) and you could render the machine unusable by eating
-all bus addresses.
+Every dma_map_{single,sg}() call should have its dma_unmap_{single,sg}()
+counterpart, because the bus address space is a shared resource and
+you could render the machine unusable by consuming all bus addresses.

 If you need to use the same streaming DMA region multiple times and touch
 the data in between the DMA transfers, the buffer needs to be synced
-properly in order for the cpu and device to see the most uptodate and
+properly in order for the cpu and device to see the most up-to-date and
 correct copy of the DMA buffer.

-So, firstly, just map it with dma_map_{single,sg}, and after each DMA
+So, firstly, just map it with dma_map_{single,sg}(), and after each DMA
 transfer call either:

 	dma_sync_single_for_cpu(dev, dma_handle, size, direction);
···
 as appropriate.

 After the last DMA transfer call one of the DMA unmap routines
-dma_unmap_{single,sg}. If you don't touch the data from the first dma_map_*
-call till dma_unmap_*, then you don't have to call the dma_sync_*
-routines at all.
+dma_unmap_{single,sg}(). If you don't touch the data from the first
+dma_map_*() call till dma_unmap_*(), then you don't have to call the
+dma_sync_*() routines at all.

 Here is pseudo code which shows a situation in which you would need
 to use the dma_sync_*() interfaces.
···
 		}
 	}

-Drivers converted fully to this interface should not use virt_to_bus any
-longer, nor should they use bus_to_virt. Some drivers have to be changed a
-little bit, because there is no longer an equivalent to bus_to_virt in the
+Drivers converted fully to this interface should not use virt_to_bus() any
+longer, nor should they use bus_to_virt(). Some drivers have to be changed a
+little bit, because there is no longer an equivalent to bus_to_virt() in the
 dynamic DMA mapping scheme - you have to always store the DMA addresses
-returned by the dma_alloc_coherent, dma_pool_alloc, and dma_map_single
-calls (dma_map_sg stores them in the scatterlist itself if the platform
+returned by the dma_alloc_coherent(), dma_pool_alloc(), and dma_map_single()
+calls (dma_map_sg() stores them in the scatterlist itself if the platform
 supports dynamic DMA mapping in hardware) in your driver structures and/or
 in the card registers.
···
 DMA address space is limited on some architectures and an allocation
 failure can be determined by:

 - checking if dma_alloc_coherent() returns NULL or dma_map_sg returns 0

-- checking the returned dma_addr_t of dma_map_single and dma_map_page
+- checking the dma_addr_t returned from dma_map_single() and dma_map_page()
  by using dma_mapping_error():

 	dma_addr_t dma_handle;
···
 		dma_unmap_single(array[i].dma_addr);
 	}

-Networking drivers must call dev_kfree_skb to free the socket buffer
+Networking drivers must call dev_kfree_skb() to free the socket buffer
 and return NETDEV_TX_OK if the DMA mapping fails on the transmit hook
 (ndo_start_xmit). This means that the socket buffer is just dropped in
 the failure case.
···
 	DEFINE_DMA_UNMAP_LEN(len);
 };

-2) Use dma_unmap_{addr,len}_set to set these values.
+2) Use dma_unmap_{addr,len}_set() to set these values.
 Example, before:

 	ringp->mapping = FOO;
···
 	dma_unmap_addr_set(ringp, mapping, FOO);
 	dma_unmap_len_set(ringp, len, BAR);

-3) Use dma_unmap_{addr,len} to access these values.
+3) Use dma_unmap_{addr,len}() to access these values.
 Example, before:

 	dma_unmap_single(dev, ringp->mapping, ringp->len,
+71 -66
Documentation/DMA-API.txt
···
 James E.J. Bottomley <James.Bottomley@HansenPartnership.com>

 This document describes the DMA API. For a more gentle introduction
-of the API (and actual examples) see
-Documentation/DMA-API-HOWTO.txt.
+of the API (and actual examples), see Documentation/DMA-API-HOWTO.txt.

-This API is split into two pieces. Part I describes the API. Part II
-describes the extensions to the API for supporting non-consistent
-memory machines. Unless you know that your driver absolutely has to
-support non-consistent platforms (this is usually only legacy
-platforms) you should only use the API described in part I.
+This API is split into two pieces. Part I describes the basic API.
+Part II describes extensions for supporting non-consistent memory
+machines. Unless you know that your driver absolutely has to support
+non-consistent platforms (this is usually only legacy platforms) you
+should only use the API described in part I.

 Part I - dma_ API
 -------------------------------------

-To get the dma_ API, you must #include <linux/dma-mapping.h>
+To get the dma_ API, you must #include <linux/dma-mapping.h>. This
+provides dma_addr_t and the interfaces described below.

+A dma_addr_t can hold any valid DMA or bus address for the platform. It
+can be given to a device to use as a DMA source or target. A cpu cannot
+reference a dma_addr_t directly because there may be translation between
+its physical address space and the bus address space.

-Part Ia - Using large dma-coherent buffers
+Part Ia - Using large DMA-coherent buffers
 ------------------------------------------

 void *
···
 devices to read that memory.)

 This routine allocates a region of <size> bytes of consistent memory.
-It also returns a <dma_handle> which may be cast to an unsigned
-integer the same width as the bus and used as the physical address
-base of the region.

-Returns: a pointer to the allocated region (in the processor's virtual
+It returns a pointer to the allocated region (in the processor's virtual
 address space) or NULL if the allocation failed.
+
+It also returns a <dma_handle> which may be cast to an unsigned integer the
+same width as the bus and given to the device as the bus address base of
+the region.

 Note: consistent memory can be expensive on some platforms, and the
 minimum allocation length may be as big as a page, so you should
 consolidate your requests for consistent memory as much as possible.
 The simplest way to do that is to use the dma_pool calls (see below).

-The flag parameter (dma_alloc_coherent only) allows the caller to
-specify the GFP_ flags (see kmalloc) for the allocation (the
+The flag parameter (dma_alloc_coherent() only) allows the caller to
+specify the GFP_ flags (see kmalloc()) for the allocation (the
 implementation may choose to ignore flags that affect the location of
 the returned memory, like GFP_DMA).
···
 dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
 		  dma_addr_t dma_handle)

-Free the region of consistent memory you previously allocated. dev,
-size and dma_handle must all be the same as those passed into the
-consistent allocate. cpu_addr must be the virtual address returned by
-the consistent allocate.
+Free a region of consistent memory you previously allocated. dev,
+size and dma_handle must all be the same as those passed into
+dma_alloc_coherent(). cpu_addr must be the virtual address returned by
+the dma_alloc_coherent().

 Note that unlike their sibling allocation calls, these routines
 may only be called with IRQs enabled.


-Part Ib - Using small dma-coherent buffers
+Part Ib - Using small DMA-coherent buffers
 ------------------------------------------

 To get this part of the dma_ API, you must #include <linux/dmapool.h>

-Many drivers need lots of small dma-coherent memory regions for DMA
+Many drivers need lots of small DMA-coherent memory regions for DMA
 descriptors or I/O buffers. Rather than allocating in units of a page
 or more using dma_alloc_coherent(), you can use DMA pools. These work
-much like a struct kmem_cache, except that they use the dma-coherent allocator,
+much like a struct kmem_cache, except that they use the DMA-coherent allocator,
 not __get_free_pages(). Also, they understand common hardware constraints
 for alignment, like queue heads needing to be aligned on N-byte boundaries.
···
 dma_pool_create(const char *name, struct device *dev,
 		size_t size, size_t align, size_t alloc);

-The pool create() routines initialize a pool of dma-coherent buffers
+dma_pool_create() initializes a pool of DMA-coherent buffers
 for use with a given device. It must be called in a context which
 can sleep.
···
 void *dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags,
 		     dma_addr_t *dma_handle);

-This allocates memory from the pool; the returned memory will meet the size
-and alignment requirements specified at creation time. Pass GFP_ATOMIC to
-prevent blocking, or if it's permitted (not in_interrupt, not holding SMP locks),
-pass GFP_KERNEL to allow blocking. Like dma_alloc_coherent(), this returns
-two values: an address usable by the cpu, and the dma address usable by the
-pool's device.
+This allocates memory from the pool; the returned memory will meet the
+size and alignment requirements specified at creation time. Pass
+GFP_ATOMIC to prevent blocking, or if it's permitted (not
+in_interrupt, not holding SMP locks), pass GFP_KERNEL to allow
+blocking. Like dma_alloc_coherent(), this returns two values: an
+address usable by the cpu, and the DMA address usable by the pool's
+device.


 void dma_pool_free(struct dma_pool *pool, void *vaddr,
 		   dma_addr_t addr);

 This puts memory back into the pool. The pool is what was passed to
-the pool allocation routine; the cpu (vaddr) and dma addresses are what
+dma_pool_alloc(); the cpu (vaddr) and DMA addresses are what
 were returned when that routine allocated the memory being freed.


 void dma_pool_destroy(struct dma_pool *pool);

-The pool destroy() routines free the resources of the pool. They must be
+dma_pool_destroy() frees the resources of the pool. It must be
 called in a context which can sleep. Make sure you've freed all allocated
 memory back to the pool before you destroy it.
···
 	enum dma_data_direction direction)

 Maps a piece of processor virtual memory so it can be accessed by the
-device and returns the physical handle of the memory.
+device and returns the bus address of the memory.

-The direction for both api's may be converted freely by casting.
+The direction for both APIs may be converted freely by casting.
 However the dma_ API uses a strongly typed enumerator for its
 direction:

 DMA_TO_DEVICE		data is going from the memory to the device
 DMA_FROM_DEVICE	data is coming from the device to the memory
 DMA_BIDIRECTIONAL	direction isn't known

-Notes: Not all memory regions in a machine can be mapped by this
-API. Further, regions that appear to be physically contiguous in
-kernel virtual space may not be contiguous as physical memory. Since
-this API does not provide any scatter/gather capability, it will fail
-if the user tries to map a non-physically contiguous piece of memory.
-For this reason, it is recommended that memory mapped by this API be
-obtained only from sources which guarantee it to be physically contiguous
-(like kmalloc).
+Notes: Not all memory regions in a machine can be mapped by this API.
+Further, contiguous kernel virtual space may not be contiguous as
+physical memory. Since this API does not provide any scatter/gather
+capability, it will fail if the user tries to map a non-physically
+contiguous piece of memory. For this reason, memory to be mapped by
+this API should be obtained from sources which guarantee it to be
+physically contiguous (like kmalloc).

-Further, the physical address of the memory must be within the
-dma_mask of the device (the dma_mask represents a bit mask of the
-addressable region for the device. I.e., if the physical address of
-the memory anded with the dma_mask is still equal to the physical
-address, then the device can perform DMA to the memory). In order to
+Further, the bus address of the memory must be within the
+dma_mask of the device (the dma_mask is a bit mask of the
+addressable region for the device, i.e., if the bus address of
+the memory ANDed with the dma_mask is still equal to the bus
+address, then the device can perform DMA to the memory). To
 ensure that the memory allocated by kmalloc is within the dma_mask,
 the driver may specify various platform-dependent flags to restrict
-the physical memory range of the allocation (e.g. on x86, GFP_DMA
-guarantees to be within the first 16Mb of available physical memory,
+the bus address range of the allocation (e.g., on x86, GFP_DMA
+guarantees to be within the first 16MB of available bus addresses,
 as required by ISA devices).

 Note also that the above constraints on physical contiguity and
 dma_mask may not apply if the platform has an IOMMU (a device which
-supplies a physical to virtual mapping between the I/O memory bus and
-the device). However, to be portable, device driver writers may *not*
-assume that such an IOMMU exists.
+maps an I/O bus address to a physical memory address). However, to be
+portable, device driver writers may *not* assume that such an IOMMU
+exists.

 Warnings: Memory coherency operates at a granularity called the cache
 line width. In order for memory mapped by this API to operate
···
 int
 dma_mapping_error(struct device *dev, dma_addr_t dma_addr)

-In some circumstances dma_map_single and dma_map_page will fail to create
+In some circumstances dma_map_single() and dma_map_page() will fail to create
 a mapping. A driver can check for these errors by testing the returned
-dma address with dma_mapping_error(). A non-zero return value means the mapping
+DMA address with dma_mapping_error(). A non-zero return value means the mapping
 could not be created and the driver should take appropriate action (e.g.
 reduce current DMA mapping usage or delay and try again later).

 int
 dma_map_sg(struct device *dev, struct scatterlist *sg,
 	   int nents, enum dma_data_direction direction)

-Returns: the number of physical segments mapped (this may be shorter
+Returns: the number of bus address segments mapped (this may be shorter
 than <nents> passed in if some elements of the scatter/gather list are
 physically or virtually adjacent and an IOMMU maps them with a single
 entry).

 Please note that the sg cannot be mapped again if it has been mapped once.
 The mapping process is allowed to destroy information in the sg.

-As with the other mapping interfaces, dma_map_sg can fail. When it
+As with the other mapping interfaces, dma_map_sg() can fail. When it
 does, 0 is returned and a driver must take appropriate action. It is
 critical that the driver do something, in the case of a block driver
 aborting the request or even oopsing is better than doing nothing and
···
 API.

 Note: <nents> must be the number you passed in, *not* the number of
-physical entries returned.
+bus address entries returned.

 void
 dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size,
···
 without the _attrs suffixes, except that they pass an optional
 struct dma_attrs*.

-struct dma_attrs encapsulates a set of "dma attributes". For the
+struct dma_attrs encapsulates a set of "DMA attributes". For the
 definition of struct dma_attrs see linux/dma-attrs.h.

-The interpretation of dma attributes is architecture-specific, and
+The interpretation of DMA attributes is architecture-specific, and
 each attribute should be documented in Documentation/DMA-attributes.txt.

 If struct dma_attrs* is NULL, the semantics of each of these
···
 guarantee that the sync points become nops.

 Warning: Handling non-consistent memory is a real pain. You should
-only ever use this API if you positively know your driver will be
+only use this API if you positively know your driver will be
 required to work on one of the rare (usually non-PCI) architectures
 that simply cannot make consistent memory.
···
 	dma_addr_t device_addr, size_t size, int
 	flags)

-Declare region of memory to be handed out by dma_alloc_coherent when
+Declare region of memory to be handed out by dma_alloc_coherent() when
 it's asked for coherent memory for this device.

 bus_addr is the physical address to which the memory is currently
 assigned in the bus responding region (this will be used by the
 platform to perform the mapping).

-device_addr is the physical address the device needs to be programmed
+device_addr is the bus address the device needs to be programmed
 with actually to address this memory (this will be handed out as the
 dma_addr_t in dma_alloc_coherent()).

 size is the size of the area (must be multiples of PAGE_SIZE).

-flags can be or'd together and are:
+flags can be ORed together and are:

 DMA_MEMORY_MAP - request that the memory returned from
 dma_alloc_coherent() be directly writable.

 DMA_MEMORY_IO - request that the memory returned from
-dma_alloc_coherent() be addressable using read/write/memcpy_toio etc.
+dma_alloc_coherent() be addressable using read()/write()/memcpy_toio() etc.

 One or both of these flags must be present.
···
 Part III - Debug drivers use of the DMA-API
 -------------------------------------------

-The DMA-API as described above as some constraints. DMA addresses must be
+The DMA-API as described above has some constraints. DMA addresses must be
 released with the corresponding function with the same size for example. With
 the advent of hardware IOMMUs it becomes more and more important that drivers
 do not violate those constraints. In the worst case such a violation can
···
 void debug_dmap_mapping_error(struct device *dev, dma_addr_t dma_addr);

 dma-debug interface debug_dma_mapping_error() to debug drivers that fail
-to check dma mapping errors on addresses returned by dma_map_single() and
+to check DMA mapping errors on addresses returned by dma_map_single() and
 dma_map_page() interfaces. This interface clears a flag set by
 debug_dma_map_page() to indicate that dma_mapping_error() has been called by
 the driver. When driver does unmap, debug_dma_unmap() checks the flag and if
 this flag is still set, prints warning message that includes call trace that
 leads up to the unmap. This interface can be called from dma_mapping_error()
-routines to enable dma mapping error check debugging.
+routines to enable DMA mapping error check debugging.
+2 -2
Documentation/DMA-ISA-LPC.txt
··· 16 16 #include <asm/dma.h> 17 17 18 18 The first is the generic DMA API used to convert virtual addresses to 19 - physical addresses (see Documentation/DMA-API.txt for details). 19 + bus addresses (see Documentation/DMA-API.txt for details). 20 20 21 21 The second contains the routines specific to ISA DMA transfers. Since 22 22 this is not present on all platforms make sure you construct your ··· 50 50 Part III - Address translation 51 51 ------------------------------ 52 52 53 - To translate the virtual address to a physical use the normal DMA 53 + To translate the virtual address to a bus address, use the normal DMA 54 54 API. Do _not_ use isa_virt_to_phys() even though it does the same 55 55 thing. The reason for this is that the function isa_virt_to_phys() 56 56 will require a Kconfig dependency to ISA, not just ISA_DMA_API which
+6
include/linux/dma-mapping.h
··· 8 8 #include <linux/dma-direction.h> 9 9 #include <linux/scatterlist.h> 10 10 11 + /* 12 + * A dma_addr_t can hold any valid DMA or bus address for the platform. 13 + * It can be given to a device to use as a DMA source or target. A CPU cannot 14 + * reference a dma_addr_t directly because there may be translation between 15 + * its physical address space and the bus address space. 16 + */ 11 17 struct dma_map_ops { 12 18 void* (*alloc)(struct device *dev, size_t size, 13 19 dma_addr_t *dma_handle, gfp_t gfp,
+1
include/linux/types.h
··· 142 142 #define pgoff_t unsigned long 143 143 #endif 144 144 145 + /* A dma_addr_t can hold any valid DMA or bus address for the platform */ 145 146 #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT 146 147 typedef u64 dma_addr_t; 147 148 #else