			Dynamic DMA mapping
			===================

		 David S. Miller <davem@redhat.com>
		 Richard Henderson <rth@cygnus.com>
		  Jakub Jelinek <jakub@redhat.com>

This document describes the DMA mapping system in terms of the pci_
API.  For a similar API that works for generic devices, see
DMA-API.txt.

Most of the 64bit platforms have special hardware that translates bus
addresses (DMA addresses) into physical addresses.  This is similar to
how page tables and/or a TLB translate virtual addresses to physical
addresses on a CPU.  This is needed so that e.g. PCI devices can
access any page in the 64bit physical address space with a Single
Address Cycle (32bit DMA address).  Previously in Linux those 64bit
platforms had to set artificial limits on the maximum RAM size in the
system, so that the virt_to_bus() static scheme worked (the DMA address
translation tables were simply filled on bootup to map each bus
address to the physical page __pa(bus_to_virt())).

So that Linux can use the dynamic DMA mapping, it needs some help from the
drivers, namely it has to take into account that DMA addresses should be
mapped only for the time they are actually used and unmapped after the DMA
transfer.

Of course, the following API will work even on platforms where no such
hardware exists; see e.g. include/asm-i386/pci.h for how it is implemented
on top of the virt_to_bus interface.

First of all, you should make sure

	#include <linux/pci.h>

is in your driver.  This file provides the definition of the dma_addr_t
type (which can hold any valid DMA address for the platform), which
should be used everywhere you hold a DMA (bus) address returned from
the DMA mapping functions.

			What memory is DMA'able?

The first piece of information you must know is what kernel memory can
be used with the DMA mapping facilities.  There has been an unwritten
set of rules regarding this, and this text is an attempt to finally
write them down.

If you acquired your memory via the page allocator
(i.e. __get_free_page*()) or the generic memory allocators
(i.e. kmalloc() or kmem_cache_alloc()) then you may DMA to/from
that memory using the addresses returned from those routines.

This means specifically that you may _not_ use the memory/addresses
returned from vmalloc() for DMA.  It is possible to DMA to the
_underlying_ memory mapped into a vmalloc() area, but this requires
walking page tables to get the physical addresses, and then
translating each of those pages back to a kernel address using
something like __va().  [ EDIT: Update this when we integrate
Gerd Knorr's generic code which does this. ]

This rule also means that you may not use kernel image addresses
(i.e. items in the kernel's data/text/bss segments, or your driver's)
nor may you use kernel stack addresses for DMA.  Both of these items
might be mapped somewhere entirely different from the rest of physical
memory.

Also, this means that you cannot take the return of a kmap()
call and DMA to/from that.  This is similar to vmalloc().

What about block I/O and networking buffers?  The block I/O and
networking subsystems make sure that the buffers they use are valid
for you to DMA from/to.
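To make the rules above concrete, the sketch below places a valid
buffer source next to the invalid ones.  The routine is invented for
this document and is not a kernel interface:

	#include <linux/slab.h>

	/* Hypothetical example routine, not a real kernel API. */
	static void *mydev_alloc_dma_safe_buffer(size_t size)
	{
		/*
		 * OK: kmalloc() memory may be handed to the DMA
		 * mapping functions described below.
		 */
		void *buf = kmalloc(size, GFP_KERNEL);

		/*
		 * NOT OK: a vmalloc() return value, a kmap() return
		 * value, an on-stack array, or a pointer into the
		 * kernel image must never be used for DMA.
		 */

		return buf;
	}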
			DMA addressing limitations

Does your device have any DMA addressing limitations?  For example, is
your device only capable of driving the low order 24-bits of address
on the PCI bus for SAC DMA transfers?  If so, you need to inform the
PCI layer of this fact.

By default, the kernel assumes that your device can address the full
32-bits in a SAC cycle.  For a 64-bit DAC capable device, this needs
to be increased.  And for a device with limitations, as discussed in
the previous paragraph, it needs to be decreased.

pci_alloc_consistent() by default will return 32-bit DMA addresses.
The PCI-X specification requires PCI-X devices to support 64-bit
addressing (DAC) for all transactions.  And at least one platform (SGI
SN2) requires 64-bit consistent allocations to operate correctly when
the IO bus is in PCI-X mode.  Therefore, as with pci_set_dma_mask(),
it's good practice to call pci_set_consistent_dma_mask() to set the
appropriate mask even if your device only supports 32-bit DMA
(default) and especially if it's a PCI-X device.

For correct operation, you must interrogate the PCI layer in your
device probe routine to see if the PCI controller on the machine can
properly support the DMA addressing limitation your device has.  It is
good style to do this even if your device holds the default setting,
because this shows that you did think about these issues wrt. your
device.

The query is performed via a call to pci_set_dma_mask():

	int pci_set_dma_mask(struct pci_dev *pdev, u64 device_mask);

The query for consistent allocations is performed via a call to
pci_set_consistent_dma_mask():

	int pci_set_consistent_dma_mask(struct pci_dev *pdev, u64 device_mask);

Here, pdev is a pointer to the PCI device struct of your device, and
device_mask is a bit mask describing which bits of a PCI address your
device supports.  It returns zero if your card can perform DMA
properly on the machine given the address mask you provided.

If it returns non-zero, your device cannot perform DMA properly on
this platform, and attempting to do so will result in undefined
behavior.  You must either use a different mask, or not use DMA.

This means that in the failure case, you have three options:

1) Use another DMA mask, if possible (see below).
2) Use some non-DMA mode for data transfer, if possible.
3) Ignore this device and do not initialize it.

It is recommended that your driver print a KERN_WARNING message when
you end up performing either #2 or #3.  In this manner, if a user
of your driver reports that performance is bad or that the device is not
even detected, you can ask them for the kernel messages to find out
exactly why.

The standard 32-bit addressing PCI device would do something like
this:

	if (pci_set_dma_mask(pdev, DMA_32BIT_MASK)) {
		printk(KERN_WARNING
		       "mydev: No suitable DMA available.\n");
		goto ignore_this_device;
	}

Another common scenario is a 64-bit capable device.  The approach
here is to try for 64-bit DAC addressing, but back down to a
32-bit mask should that fail.  The PCI platform code may fail the
64-bit mask not because the platform is not capable of 64-bit
addressing.  Rather, it may fail in this case simply because
32-bit SAC addressing is done more efficiently than DAC addressing.
Sparc64 is one platform which behaves in this way.

Here is how you would handle a 64-bit capable device which can drive
all 64-bits when accessing streaming DMA:

	int using_dac;

	if (!pci_set_dma_mask(pdev, DMA_64BIT_MASK)) {
		using_dac = 1;
	} else if (!pci_set_dma_mask(pdev, DMA_32BIT_MASK)) {
		using_dac = 0;
	} else {
		printk(KERN_WARNING
		       "mydev: No suitable DMA available.\n");
		goto ignore_this_device;
	}

If a card is capable of using 64-bit consistent allocations as well,
the case would look like this:

	int using_dac, consistent_using_dac;

	if (!pci_set_dma_mask(pdev, DMA_64BIT_MASK)) {
		using_dac = 1;
		consistent_using_dac = 1;
		pci_set_consistent_dma_mask(pdev, DMA_64BIT_MASK);
	} else if (!pci_set_dma_mask(pdev, DMA_32BIT_MASK)) {
		using_dac = 0;
		consistent_using_dac = 0;
		pci_set_consistent_dma_mask(pdev, DMA_32BIT_MASK);
	} else {
		printk(KERN_WARNING
		       "mydev: No suitable DMA available.\n");
		goto ignore_this_device;
	}

pci_set_consistent_dma_mask() will always be able to set the same or a
smaller mask than pci_set_dma_mask().  However, for the rare case that a
device driver only uses consistent allocations, one would have to
check the return value from pci_set_consistent_dma_mask().

If your 64-bit device is going to be an enormous consumer of DMA
mappings, this can be problematic since the DMA mappings are a
finite resource on many platforms.  Please see the "DAC Addressing
for Address Space Hungry Devices" section near the end of this
document for how to handle this case.

Finally, if your device can only drive the low 24-bits of
address during PCI bus mastering you might do something like:

	if (pci_set_dma_mask(pdev, 0x00ffffff)) {
		printk(KERN_WARNING
		       "mydev: 24-bit DMA addressing not available.\n");
		goto ignore_this_device;
	}

When pci_set_dma_mask() is successful, and returns zero, the PCI layer
saves away the mask you have provided.  The PCI layer will use this
information later when you make DMA mappings.

There is a case which we are aware of at this time, which is worth
mentioning in this documentation.  If your device supports multiple
functions (for example a sound card provides playback and record
functions) and the various different functions have _different_
DMA addressing limitations, you may wish to probe each mask and
only provide the functionality which the machine can handle.  It
is important that the last call to pci_set_dma_mask() be for the
most specific mask.

Here is pseudo-code showing how this might be done:

	#define PLAYBACK_ADDRESS_BITS	DMA_32BIT_MASK
	#define RECORD_ADDRESS_BITS	0x00ffffff

	struct my_sound_card *card;
	struct pci_dev *pdev;

	...
	if (!pci_set_dma_mask(pdev, PLAYBACK_ADDRESS_BITS)) {
		card->playback_enabled = 1;
	} else {
		card->playback_enabled = 0;
		printk(KERN_WARNING "%s: Playback disabled due to DMA limitations.\n",
		       card->name);
	}
	if (!pci_set_dma_mask(pdev, RECORD_ADDRESS_BITS)) {
		card->record_enabled = 1;
	} else {
		card->record_enabled = 0;
		printk(KERN_WARNING "%s: Record disabled due to DMA limitations.\n",
		       card->name);
	}

A sound card was used as an example here because this genre of PCI
devices seems to be littered with ISA chips given a PCI front end,
and thus retaining the 16MB DMA addressing limitations of ISA.
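The mask-probing examples in this section all follow a "widest mask
first, then fall back" pattern.  If your driver supports several
masks, that pattern can be factored into a small helper.  The sketch
below is purely illustrative; the helper name and mask table are made
up and are not a kernel interface:

	/*
	 * Hypothetical helper: try masks from widest to narrowest and
	 * return the first one the platform accepts, or 0 on failure.
	 */
	static u64 mydev_pick_dma_mask(struct pci_dev *pdev)
	{
		static const u64 masks[] = {
			DMA_64BIT_MASK,
			DMA_32BIT_MASK,
			0		/* terminator */
		};
		int i;

		for (i = 0; masks[i]; i++)
			if (!pci_set_dma_mask(pdev, masks[i]))
				return masks[i];

		return 0;	/* no usable mask; disable DMA */
	}

Note that the mask the PCI layer remembers is the one from the last
successful pci_set_dma_mask() call, which is exactly what this loop
leaves in place.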
			Types of DMA mappings

There are two types of DMA mappings:

- Consistent DMA mappings which are usually mapped at driver
  initialization, unmapped at the end and for which the hardware should
  guarantee that the device and the CPU can access the data
  in parallel and will see updates made by each other without any
  explicit software flushing.

  Think of "consistent" as "synchronous" or "coherent".

  The current default is to return consistent memory in the low 32
  bits of the PCI bus space.  However, for future compatibility you
  should set the consistent mask even if this default is fine for your
  driver.

  Good examples of what to use consistent mappings for are:

	- Network card DMA ring descriptors.
	- SCSI adapter mailbox command data structures.
	- Device firmware microcode executed out of
	  main memory.

  The invariant these examples all require is that any CPU store
  to memory is immediately visible to the device, and vice
  versa.  Consistent mappings guarantee this.

  IMPORTANT: Consistent DMA memory does not preclude the usage of
             proper memory barriers.  The CPU may reorder stores to
             consistent memory just as it may normal memory.  Example:
             if it is important for the device to see the first word
             of a descriptor updated before the second, you must do
             something like:

		desc->word0 = address;
		wmb();
		desc->word1 = DESC_VALID;

             in order to get correct behavior on all platforms.

- Streaming DMA mappings which are usually mapped for one DMA transfer,
  unmapped right after it (unless you use pci_dma_sync_* below) and for which
  hardware can optimize for sequential accesses.

  Think of "streaming" as "asynchronous" or "outside the coherency
  domain".

  Good examples of what to use streaming mappings for are:

	- Networking buffers transmitted/received by a device.
	- Filesystem buffers written/read by a SCSI device.

  The interfaces for using this type of mapping were designed in
  such a way that an implementation can make whatever performance
  optimizations the hardware allows.  To this end, when using
  such mappings you must be explicit about what you want to happen.

Neither type of DMA mapping has alignment restrictions that come
from PCI, although some devices may have such restrictions.
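Returning to the barrier note above, here is a slightly fuller sketch
of publishing a descriptor that lives in consistent memory.  The
descriptor layout, flag value, and function are invented for
illustration only:

	#define DESC_VALID	0x80000000	/* invented ownership bit */

	/* Hypothetical descriptor living in consistent memory. */
	struct mydev_desc {
		u32 word0;	/* buffer DMA address */
		u32 word1;	/* flags; DESC_VALID hands it to the device */
	};

	static void mydev_publish_desc(struct mydev_desc *desc, u32 address)
	{
		desc->word0 = address;
		wmb();		/* order the address before the VALID bit */
		desc->word1 = DESC_VALID;
	}

Without the wmb(), the CPU would be free to reorder the two stores and
the device could see a valid descriptor whose address word is still
stale.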
			Using Consistent DMA mappings

To allocate and map large (PAGE_SIZE or so) consistent DMA regions,
you should do:

	dma_addr_t dma_handle;

	cpu_addr = pci_alloc_consistent(dev, size, &dma_handle);

where dev is a struct pci_dev *.  You should pass NULL for PCI-like buses
where devices don't have a struct pci_dev (like ISA, EISA).  This may be
called in interrupt context.

This argument is needed because the DMA translations may be bus
specific (and are often private to the bus to which the device is
attached).

Size is the length of the region you want to allocate, in bytes.

This routine will allocate RAM for that region, so it acts similarly to
__get_free_pages (but takes size instead of a page order).  If your
driver needs regions sized smaller than a page, you may prefer using
the pci_pool interface, described below.

The consistent DMA mapping interfaces, for non-NULL dev, will by
default return a DMA address which is SAC (Single Address Cycle)
addressable.  Even if the device indicates (via the PCI dma mask) that it
may address the upper 32-bits and thus perform DAC cycles, consistent
allocation will only return > 32-bit PCI addresses for DMA if the
consistent DMA mask has been explicitly changed via
pci_set_consistent_dma_mask().  This is true of the pci_pool interface
as well.

pci_alloc_consistent returns two values: the virtual address which you
can use to access it from the CPU and dma_handle which you pass to the
card.

The cpu return address and the DMA bus master address are both
guaranteed to be aligned to the smallest PAGE_SIZE order which
is greater than or equal to the requested size.  This invariant
exists (for example) to guarantee that if you allocate a chunk
which is smaller than or equal to 64 kilobytes, the extent of the
buffer you receive will not cross a 64K boundary.

To unmap and free such a DMA region, you call:

	pci_free_consistent(dev, size, cpu_addr, dma_handle);

where dev and size are the same as in the above call and cpu_addr and
dma_handle are the values pci_alloc_consistent returned to you.
This function may not be called in interrupt context.
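Putting the two calls together, a driver typically allocates its
consistent region once at probe time and frees it at remove time.
A minimal sketch, with invented names (my_ring, RING_BYTES) and the
surrounding probe logic omitted:

	#include <linux/pci.h>

	#define RING_BYTES	4096	/* made-up ring size */

	struct my_ring {
		void *cpu_addr;		/* for the CPU */
		dma_addr_t dma_handle;	/* for the card */
	};

	static int my_ring_alloc(struct pci_dev *pdev, struct my_ring *ring)
	{
		ring->cpu_addr = pci_alloc_consistent(pdev, RING_BYTES,
						      &ring->dma_handle);
		if (!ring->cpu_addr)
			return -ENOMEM;
		return 0;
	}

	static void my_ring_free(struct pci_dev *pdev, struct my_ring *ring)
	{
		/* Must not be called from interrupt context. */
		pci_free_consistent(pdev, RING_BYTES,
				    ring->cpu_addr, ring->dma_handle);
	}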
If your driver needs lots of smaller memory regions, you can write
custom code to subdivide pages returned by pci_alloc_consistent,
or you can use the pci_pool API to do that.  A pci_pool is like
a kmem_cache, but it uses pci_alloc_consistent, not __get_free_pages.
Also, it understands common hardware constraints for alignment,
like queue heads needing to be aligned on N byte boundaries.

Create a pci_pool like this:

	struct pci_pool *pool;

	pool = pci_pool_create(name, dev, size, align, alloc);

The "name" is for diagnostics (like a kmem_cache name); dev and size
are as above.  The device's hardware alignment requirement for this
type of data is "align" (which is expressed in bytes, and must be a
power of two).  If your device has no boundary crossing restrictions,
pass 0 for alloc; passing 4096 says memory allocated from this pool
must not cross 4KByte boundaries (but at that point it may be better
to use pci_alloc_consistent directly instead).

Allocate memory from a pci_pool like this:

	cpu_addr = pci_pool_alloc(pool, flags, &dma_handle);

flags are SLAB_KERNEL if blocking is permitted (not in_interrupt nor
holding SMP locks), SLAB_ATOMIC otherwise.  Like pci_alloc_consistent,
this returns two values, cpu_addr and dma_handle.

Free memory that was allocated from a pci_pool like this:

	pci_pool_free(pool, cpu_addr, dma_handle);

where pool is what you passed to pci_pool_alloc, and cpu_addr and
dma_handle are the values pci_pool_alloc returned.  This function
may be called in interrupt context.

Destroy a pci_pool by calling:

	pci_pool_destroy(pool);

Make sure you've called pci_pool_free for all memory allocated
from a pool before you destroy the pool.  This function may not
be called in interrupt context.
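The fragments above combine into the following lifecycle.  The pool
name, element size, and alignment are invented for illustration, and
pdev is assumed to be your struct pci_dev pointer:

	struct pci_pool *pool;
	dma_addr_t dma_handle;
	void *cpu_addr;

	/*
	 * 64-byte command blocks, aligned to 16 bytes, with no
	 * boundary crossing restriction.
	 */
	pool = pci_pool_create("mydev_cmds", pdev, 64, 16, 0);
	if (!pool)
		goto fail;

	cpu_addr = pci_pool_alloc(pool, SLAB_KERNEL, &dma_handle);
	if (!cpu_addr)
		goto fail_pool;

	/* ... hand dma_handle to the card, use cpu_addr from the CPU ... */

	pci_pool_free(pool, cpu_addr, dma_handle);
	pci_pool_destroy(pool);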
			DMA Direction

The interfaces described in subsequent portions of this document
take a DMA direction argument, which is an integer and takes on
one of the following values:

	PCI_DMA_BIDIRECTIONAL
	PCI_DMA_TODEVICE
	PCI_DMA_FROMDEVICE
	PCI_DMA_NONE

You should provide the exact DMA direction if you know it.

PCI_DMA_TODEVICE means "from main memory to the PCI device".
PCI_DMA_FROMDEVICE means "from the PCI device to main memory".
It is the direction in which the data moves during the DMA
transfer.

You are _strongly_ encouraged to specify this as precisely
as you possibly can.

If you absolutely cannot know the direction of the DMA transfer,
specify PCI_DMA_BIDIRECTIONAL.  It means that the DMA can go in
either direction.  The platform guarantees that you may legally
specify this, and that it will work, but this may be at the
cost of performance, for example.

The value PCI_DMA_NONE is to be used for debugging.  One can
hold this in a data structure before you come to know the
precise direction, and this will help catch cases where your
direction tracking logic has failed to set things up properly.

Another advantage of specifying this value precisely (outside of
potential platform-specific optimizations of such) is for debugging.
Some platforms actually have a write permission boolean with which DMA
mappings can be marked, much like page protections in the user
program address space.  Such platforms can and do report errors in the
kernel logs when the PCI controller hardware detects violation of the
permission setting.

Only streaming mappings specify a direction; consistent mappings
implicitly have a direction attribute setting of
PCI_DMA_BIDIRECTIONAL.

The SCSI subsystem provides mechanisms for you to easily obtain
the direction to use, in the SCSI command:

	scsi_to_pci_dma_dir(SCSI_DIRECTION)

Where SCSI_DIRECTION is obtained from the 'sc_data_direction'
member of the SCSI command your driver is working on.  The
interface mentioned above returns a value suitable for passing
into the streaming DMA mapping interfaces below.

For networking drivers, it's a rather simple affair.  For transmit
packets, map/unmap them with the PCI_DMA_TODEVICE direction
specifier.  For receive packets, just the opposite: map/unmap them
with the PCI_DMA_FROMDEVICE direction specifier.

			Using Streaming DMA mappings

The streaming DMA mapping routines can be called from interrupt
context.  There are two versions of each map/unmap, one which will
map/unmap a single memory region, and one which will map/unmap a
scatterlist.

To map a single region, you do:

	struct pci_dev *pdev = mydev->pdev;
	dma_addr_t dma_handle;
	void *addr = buffer->ptr;
	size_t size = buffer->len;

	dma_handle = pci_map_single(pdev, addr, size, direction);

and to unmap it:

	pci_unmap_single(pdev, dma_handle, size, direction);

You should call pci_unmap_single when the DMA activity is finished, e.g.
from the interrupt which told you that the DMA transfer is done.

Using cpu pointers like this for single mappings has a disadvantage:
you cannot reference HIGHMEM memory in this way.  Thus, there is a
map/unmap interface pair akin to pci_{map,unmap}_single.  These
interfaces deal with page/offset pairs instead of cpu pointers.
Specifically:

	struct pci_dev *pdev = mydev->pdev;
	dma_addr_t dma_handle;
	struct page *page = buffer->page;
	unsigned long offset = buffer->offset;
	size_t size = buffer->len;

	dma_handle = pci_map_page(pdev, page, offset, size, direction);

	...

	pci_unmap_page(pdev, dma_handle, size, direction);

Here, "offset" means byte offset within the given page.
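As a concrete example of picking the direction for the single-mapping
interfaces just shown, here is a sketch of a network driver's transmit
path.  The mydev structure and helpers are invented:

	#include <linux/pci.h>
	#include <linux/skbuff.h>

	struct mydev {
		struct pci_dev *pdev;
		/* ... */
	};

	/* Transmit: data flows from main memory to the device. */
	static dma_addr_t mydev_map_tx(struct mydev *mp, struct sk_buff *skb)
	{
		return pci_map_single(mp->pdev, skb->data, skb->len,
				      PCI_DMA_TODEVICE);
	}

	/* TX-completion interrupt: unmap with the same size/direction. */
	static void mydev_unmap_tx(struct mydev *mp, dma_addr_t mapping,
				   size_t len)
	{
		pci_unmap_single(mp->pdev, mapping, len, PCI_DMA_TODEVICE);
	}

A receive path is the mirror image, using PCI_DMA_FROMDEVICE for both
the map and the unmap.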
With scatterlists, you map a region gathered from several regions by:

	int i, count = pci_map_sg(pdev, sglist, nents, direction);
	struct scatterlist *sg;

	for (i = 0, sg = sglist; i < count; i++, sg++) {
		hw_address[i] = sg_dma_address(sg);
		hw_len[i] = sg_dma_len(sg);
	}

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. if DMA mapping is done with PAGE_SIZE granularity, any
consecutive sglist entries can be merged into one provided the first one
ends and the second one starts on a page boundary - in fact this is a huge
advantage for cards which either cannot do scatter-gather or have a very
limited number of scatter-gather entries) and returns the actual number
of sg entries it mapped them to.  On failure 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

To unmap a scatterlist, just call:

	pci_unmap_sg(pdev, sglist, nents, direction);

Again, make sure DMA activity has already finished.

PLEASE NOTE:  The 'nents' argument to the pci_unmap_sg call must be
	      the _same_ one you passed into the pci_map_sg call,
	      it should _NOT_ be the 'count' value _returned_ from the
	      pci_map_sg call.

Every pci_map_{single,sg} call should have its pci_unmap_{single,sg}
counterpart, because the bus address space is a shared resource (although
in some ports the mapping is per each BUS so fewer devices contend for the
same bus address space) and you could render the machine unusable by eating
up all bus addresses.
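Since pci_map_sg() returns 0 on failure, the mapping attempt should be
checked before any entries are handed to the hardware.  A brief
sketch, with the error label and surrounding code invented:

	int count;

	count = pci_map_sg(pdev, sglist, nents, PCI_DMA_TODEVICE);
	if (count == 0)
		goto out_err;	/* nothing was mapped; do not unmap */

	/*
	 * Program the device using sg_dma_address()/sg_dma_len()
	 * on the first 'count' entries ...
	 */

	/*
	 * When the transfer completes, unmap with the original
	 * nents, never with 'count':
	 */
	pci_unmap_sg(pdev, sglist, nents, PCI_DMA_TODEVICE);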
If you need to use the same streaming DMA region multiple times and touch
the data in between the DMA transfers, the buffer needs to be synced
properly in order for the cpu and device to see the most up-to-date and
correct copy of the DMA buffer.

So, firstly, just map it with pci_map_{single,sg}, and after each DMA
transfer call either:

	pci_dma_sync_single_for_cpu(pdev, dma_handle, size, direction);

or:

	pci_dma_sync_sg_for_cpu(pdev, sglist, nents, direction);

as appropriate.

Then, if you wish to let the device get at the DMA area again,
finish accessing the data with the cpu, and then before actually
giving the buffer to the hardware call either:

	pci_dma_sync_single_for_device(pdev, dma_handle, size, direction);

or:

	pci_dma_sync_sg_for_device(pdev, sglist, nents, direction);

as appropriate.

After the last DMA transfer call one of the DMA unmap routines
pci_unmap_{single,sg}.  If you don't touch the data from the first pci_map_*
call till pci_unmap_*, then you don't have to call the pci_dma_sync_*
routines at all.

Here is pseudo code which shows a situation in which you would need
to use the pci_dma_sync_*() interfaces.

	my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len)
	{
		dma_addr_t mapping;

		mapping = pci_map_single(cp->pdev, buffer, len, PCI_DMA_FROMDEVICE);

		cp->rx_buf = buffer;
		cp->rx_len = len;
		cp->rx_dma = mapping;

		give_rx_buf_to_card(cp);
	}

	...

	my_card_interrupt_handler(int irq, void *devid, struct pt_regs *regs)
	{
		struct my_card *cp = devid;

		...
		if (read_card_status(cp) == RX_BUF_TRANSFERRED) {
			struct my_card_header *hp;

			/* Examine the header to see if we wish
			 * to accept the data.  But synchronize
			 * the DMA transfer with the CPU first
			 * so that we see updated contents.
			 */
			pci_dma_sync_single_for_cpu(cp->pdev, cp->rx_dma,
						    cp->rx_len,
						    PCI_DMA_FROMDEVICE);

			/* Now it is safe to examine the buffer. */
			hp = (struct my_card_header *) cp->rx_buf;
			if (header_is_ok(hp)) {
				pci_unmap_single(cp->pdev, cp->rx_dma, cp->rx_len,
						 PCI_DMA_FROMDEVICE);
				pass_to_upper_layers(cp->rx_buf);
				make_and_setup_new_rx_buf(cp);
			} else {
				/* Just sync the buffer and give it back
				 * to the card.
				 */
				pci_dma_sync_single_for_device(cp->pdev,
							       cp->rx_dma,
							       cp->rx_len,
							       PCI_DMA_FROMDEVICE);
				give_rx_buf_to_card(cp);
			}
		}
	}

Drivers converted fully to this interface should not use virt_to_bus any
longer, nor should they use bus_to_virt.  Some drivers have to be changed a
little bit, because there is no longer an equivalent to bus_to_virt in the
dynamic DMA mapping scheme - you have to always store the DMA addresses
returned by the pci_alloc_consistent, pci_pool_alloc, and pci_map_single
calls (pci_map_sg stores them in the scatterlist itself if the platform
supports dynamic DMA mapping in hardware) in your driver structures and/or
in the card registers.

All PCI drivers should be using these interfaces with no exceptions.
It is planned to completely remove virt_to_bus() and bus_to_virt() as
they are entirely deprecated.  Some ports already do not provide these
as it is impossible to correctly support them.
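In practice, the "store the DMA addresses" rule above means your
per-buffer bookkeeping keeps both addresses side by side, as in this
invented receive-buffer structure:

	struct my_rx_buf {
		void *cpu_addr;		/* what the CPU dereferences */
		dma_addr_t dma_addr;	/* what you program into the card */
		size_t len;
	};

There is deliberately no way to recover cpu_addr from dma_addr in the
dynamic scheme; the days of bus_to_virt() are over.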
		64-bit DMA and DAC cycle support

Do you understand all of the text above?  Great, then you already
know how to use 64-bit DMA addressing under Linux.  Simply make
the appropriate pci_set_dma_mask() calls based upon your card's
capabilities, then use the mapping APIs above.

It is that simple.

Well, not for some odd devices.  See the next section for information
about that.

		DAC Addressing for Address Space Hungry Devices

There exists a class of devices which do not mesh well with the PCI
DMA mapping API.  By definition these "mappings" are a finite
resource.  The number of total available mappings per bus is platform
specific, but there will always be a reasonable amount.

What is "reasonable"?  Reasonable means that networking and block I/O
devices need not worry about using too many mappings.

As an example of a problematic device, consider compute cluster cards.
They can potentially need to access gigabytes of memory at once via
DMA.  Dynamic mappings are unsuitable for this kind of access pattern.

To this end we've provided a small API by which a device driver
may use DAC cycles to directly address all of physical memory.
Not all platforms support this, but most do.  It is easy to determine
whether the platform will work properly at probe time.

First, understand that there may be a SEVERE performance penalty for
using these interfaces on some platforms.  Therefore, you MUST only
use these interfaces if it is absolutely required.  99% of devices can
use the normal APIs without any problems.

Note that for streaming type mappings you must either use these
interfaces, or the dynamic mapping interfaces above.  You may not mix
usage of both for the same device.  Such an act is illegal and is
guaranteed to put a banana in your tailpipe.

However, consistent mappings may in fact be used in conjunction with
these interfaces.  Remember that, as defined, consistent mappings are
always going to be SAC addressable.

The first thing your driver needs to do is query the PCI platform
layer with your device's DAC addressing capabilities:

	int pci_dac_set_dma_mask(struct pci_dev *pdev, u64 mask);

This routine behaves identically to pci_set_dma_mask.  You may not
use the following interfaces if this routine fails.

Next, DMA addresses using this API are kept track of using the
dma64_addr_t type.  It is guaranteed to be big enough to hold any
DAC address the platform layer will give to you from the following
routines.  If you have consistent mappings as well, you still
use plain dma_addr_t to keep track of those.

All mappings obtained here will be direct.  The mappings are not
translated, and this is the purpose of this dialect of the DMA API.

All routines work with page/offset pairs.  This is the _ONLY_ way to
portably refer to any piece of memory.  If you have a cpu pointer
(which may be validly DMA'd too) you may easily obtain the page
and offset using something like this:

	struct page *page = virt_to_page(ptr);
	unsigned long offset = offset_in_page(ptr);

Here are the interfaces:

	dma64_addr_t pci_dac_page_to_dma(struct pci_dev *pdev,
					 struct page *page,
					 unsigned long offset,
					 int direction);

The DAC address for the tuple PAGE/OFFSET is returned.  The direction
argument is the same as for pci_{map,unmap}_single().  The same rules
for cpu/device access apply here as for the streaming mapping
interfaces.  To reiterate:

	The cpu may touch the buffer before pci_dac_page_to_dma.
	The device may touch the buffer after pci_dac_page_to_dma
	is made, but the cpu may NOT.

When the DMA transfer is complete, invoke:

	void pci_dac_dma_sync_single_for_cpu(struct pci_dev *pdev,
					     dma64_addr_t dma_addr,
					     size_t len, int direction);

This must be done before the CPU looks at the buffer again.
This interface behaves identically to pci_dma_sync_{single,sg}_for_cpu().

And likewise, if you wish to let the device get back at the buffer after
the cpu has read/written it, invoke:

	void pci_dac_dma_sync_single_for_device(struct pci_dev *pdev,
						dma64_addr_t dma_addr,
						size_t len, int direction);

before letting the device access the DMA area again.

If you need to get back to the PAGE/OFFSET tuple from a dma64_addr_t
the following interfaces are provided:

	struct page *pci_dac_dma_to_page(struct pci_dev *pdev,
					 dma64_addr_t dma_addr);
	unsigned long pci_dac_dma_to_offset(struct pci_dev *pdev,
					    dma64_addr_t dma_addr);

This is possible with the DAC interfaces purely because they are
not translated in any way.
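Tying those pieces together, a driver using this dialect might do
something like the following at probe and transfer time.  Everything
except the pci_dac_* calls themselves is invented for illustration:

	/* Probe time: bail out to the normal API if DAC is unusable. */
	if (pci_dac_set_dma_mask(pdev, DMA_64BIT_MASK))
		goto no_dac;

	/* Transfer time: obtain a direct DAC address for one page. */
	{
		struct page *page = virt_to_page(ptr);
		unsigned long offset = offset_in_page(ptr);
		dma64_addr_t dma_addr;

		dma_addr = pci_dac_page_to_dma(pdev, page, offset,
					       PCI_DMA_FROMDEVICE);

		/* ... program dma_addr into the card and start DMA ... */

		pci_dac_dma_sync_single_for_cpu(pdev, dma_addr, PAGE_SIZE,
						PCI_DMA_FROMDEVICE);

		/* Now the CPU may look at the page. */
	}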
		Optimizing Unmap State Space Consumption

On many platforms, pci_unmap_{single,page}() is simply a nop.
Therefore, keeping track of the mapping address and length is a waste
of space.  Instead of filling your drivers up with ifdefs and the like
to "work around" this (which would defeat the whole purpose of a
portable API) the following facilities are provided.

Actually, instead of describing the macros one by one, we'll
transform some example code.

1) Use DECLARE_PCI_UNMAP_{ADDR,LEN} in state saving structures.
   Example, before:

	struct ring_state {
		struct sk_buff *skb;
		dma_addr_t mapping;
		__u32 len;
	};

   after:

	struct ring_state {
		struct sk_buff *skb;
		DECLARE_PCI_UNMAP_ADDR(mapping)
		DECLARE_PCI_UNMAP_LEN(len)
	};

   NOTE: DO NOT put a semicolon at the end of the DECLARE_*()
         macro.

2) Use pci_unmap_{addr,len}_set to set these values.
   Example, before:

	ringp->mapping = FOO;
	ringp->len = BAR;

   after:

	pci_unmap_addr_set(ringp, mapping, FOO);
	pci_unmap_len_set(ringp, len, BAR);

3) Use pci_unmap_{addr,len} to access these values.
   Example, before:

	pci_unmap_single(pdev, ringp->mapping, ringp->len,
			 PCI_DMA_FROMDEVICE);

   after:

	pci_unmap_single(pdev,
			 pci_unmap_addr(ringp, mapping),
			 pci_unmap_len(ringp, len),
			 PCI_DMA_FROMDEVICE);

It really should be self-explanatory.  We treat the ADDR and LEN
separately, because it is possible for an implementation to only
need the address in order to perform the unmap operation.

		Platform Issues

If you are just writing drivers for Linux and do not maintain
an architecture port for the kernel, you can safely skip down
to "Closing".

1) Struct scatterlist requirements.

   Struct scatterlist must contain, at a minimum, the following
   members:

	struct page *page;
	unsigned int offset;
	unsigned int length;

   The base address is specified by a "page+offset" pair.

   Previous versions of struct scatterlist contained a "void *address"
   field that was sometimes used instead of page+offset.  As of Linux
   2.5, page+offset is always used, and the "address" field has been
   deleted.

2) More to come...

		Handling Errors

DMA address space is limited on some architectures and an allocation
failure can be determined by:

- checking if pci_alloc_consistent returns NULL or pci_map_sg returns 0

- checking the returned dma_addr_t of pci_map_single and pci_map_page
  by using pci_dma_mapping_error():

	dma_addr_t dma_handle;

	dma_handle = pci_map_single(pdev, addr, size, direction);
	if (pci_dma_mapping_error(dma_handle)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
	}

		Closing

This document, and the API itself, would not be in its current
form without the feedback and suggestions from numerous individuals.
We would like to specifically mention, in no particular order, the
following people:

	Russell King <rmk@arm.linux.org.uk>
	Leo Dagum <dagum@barrel.engr.sgi.com>
	Ralf Baechle <ralf@oss.sgi.com>
	Grant Grundler <grundler@cup.hp.com>
	Jay Estabrook <Jay.Estabrook@compaq.com>
	Thomas Sailer <sailer@ife.ee.ethz.ch>
	Andrea Arcangeli <andrea@suse.de>
	Jens Axboe <axboe@suse.de>
	David Mosberger-Tang <davidm@hpl.hp.com>