============================================
Dynamic DMA mapping using the generic device
============================================

:Author: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>

This document describes the DMA API. For a more gentle introduction
to the API (and actual examples), see Documentation/DMA-API-HOWTO.txt.

This API is split into two pieces. Part I describes the basic API.
Part II describes extensions for supporting non-consistent memory
machines. Unless you know that your driver absolutely has to support
non-consistent platforms (this is usually only legacy platforms) you
should only use the API described in Part I.

Part I - dma_API
----------------

To get the dma_API, you must #include <linux/dma-mapping.h>. This
provides dma_addr_t and the interfaces described below.

A dma_addr_t can hold any valid DMA address for the platform. It can be
given to a device to use as a DMA source or target. A CPU cannot reference
a dma_addr_t directly because there may be translation between its physical
address space and the DMA address space.

Part Ia - Using large DMA-coherent buffers
------------------------------------------

::

    void *
    dma_alloc_coherent(struct device *dev, size_t size,
                       dma_addr_t *dma_handle, gfp_t flag)

Consistent memory is memory for which a write by either the device or
the processor can immediately be read by the processor or device
without having to worry about caching effects. (You may however need
to make sure to flush the processor's write buffers before telling
devices to read that memory.)

This routine allocates a region of <size> bytes of consistent memory.

It returns a pointer to the allocated region (in the processor's virtual
address space) or NULL if the allocation failed.

It also returns a <dma_handle> which may be cast to an unsigned integer the
same width as the bus and given to the device as the DMA address base of
the region.

Note: consistent memory can be expensive on some platforms, and the
minimum allocation length may be as big as a page, so you should
consolidate your requests for consistent memory as much as possible.
The simplest way to do that is to use the dma_pool calls (see below).

The flag parameter (dma_alloc_coherent() only) allows the caller to
specify the ``GFP_`` flags (see kmalloc()) for the allocation (the
implementation may choose to ignore flags that affect the location of
the returned memory, like GFP_DMA).

::

    void *
    dma_zalloc_coherent(struct device *dev, size_t size,
                        dma_addr_t *dma_handle, gfp_t flag)

Wraps dma_alloc_coherent() and also zeroes the returned memory if the
allocation attempt succeeded.

::

    void
    dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
                      dma_addr_t dma_handle)

Free a region of consistent memory you previously allocated. dev,
size and dma_handle must all be the same as those passed into
dma_alloc_coherent(). cpu_addr must be the virtual address returned by
dma_alloc_coherent().

Note that unlike their sibling allocation calls, these routines
may only be called with IRQs enabled.
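
For illustration, here is a minimal sketch of the allocate/free lifecycle
(the ``struct my_dev`` driver state, its field names and the ring size are
hypothetical, not part of the API)::

    #include <linux/dma-mapping.h>

    #define MY_RING_BYTES 4096          /* hypothetical ring size */

    struct my_dev {                     /* hypothetical driver state */
        struct device *dev;
        void *ring_cpu;                 /* CPU virtual address of the ring */
        dma_addr_t ring_dma;            /* DMA address for the device */
    };

    static int my_ring_alloc(struct my_dev *md)
    {
        /* GFP_KERNEL: process context, sleeping is allowed */
        md->ring_cpu = dma_alloc_coherent(md->dev, MY_RING_BYTES,
                                          &md->ring_dma, GFP_KERNEL);
        if (!md->ring_cpu)
            return -ENOMEM;
        /* md->ring_dma would now be programmed into a device register */
        return 0;
    }

    static void my_ring_free(struct my_dev *md)
    {
        /* dev, size and dma_handle must match the allocation exactly */
        dma_free_coherent(md->dev, MY_RING_BYTES,
                          md->ring_cpu, md->ring_dma);
    }

Because dma_free_coherent() may only be called with IRQs enabled, teardown
like my_ring_free() must not run from interrupt context.
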
Part Ib - Using small DMA-coherent buffers
------------------------------------------

To get this part of the dma_API, you must #include <linux/dmapool.h>.

Many drivers need lots of small DMA-coherent memory regions for DMA
descriptors or I/O buffers. Rather than allocating in units of a page
or more using dma_alloc_coherent(), you can use DMA pools. These work
much like a struct kmem_cache, except that they use the DMA-coherent
allocator, not __get_free_pages(). Also, they understand common
hardware constraints for alignment, like queue heads needing to be
aligned on N-byte boundaries.

::

    struct dma_pool *
    dma_pool_create(const char *name, struct device *dev,
                    size_t size, size_t align, size_t alloc);

dma_pool_create() initializes a pool of DMA-coherent buffers
for use with a given device. It must be called in a context which
can sleep.

The "name" is for diagnostics (like a struct kmem_cache name); dev and size
are like what you'd pass to dma_alloc_coherent(). The device's hardware
alignment requirement for this type of data is "align" (which is expressed
in bytes, and must be a power of two). If your device has no boundary
crossing restrictions, pass 0 for alloc; passing 4096 says memory allocated
from this pool must not cross 4KByte boundaries.

::

    void *
    dma_pool_zalloc(struct dma_pool *pool, gfp_t mem_flags,
                    dma_addr_t *handle)

Wraps dma_pool_alloc() and also zeroes the returned memory if the
allocation attempt succeeded.

::

    void *
    dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags,
                   dma_addr_t *dma_handle);

This allocates memory from the pool; the returned memory will meet the
size and alignment requirements specified at creation time. Pass
GFP_ATOMIC to prevent blocking, or, if blocking is permitted (not
in_interrupt, not holding SMP locks), pass GFP_KERNEL to allow
blocking. Like dma_alloc_coherent(), this returns two values: an
address usable by the CPU, and the DMA address usable by the pool's
device.

::

    void
    dma_pool_free(struct dma_pool *pool, void *vaddr,
                  dma_addr_t addr);

This puts memory back into the pool. The pool is what was passed to
dma_pool_alloc(); the CPU (vaddr) and DMA addresses are what
were returned when that routine allocated the memory being freed.

::

    void
    dma_pool_destroy(struct dma_pool *pool);

dma_pool_destroy() frees the resources of the pool. It must be
called in a context which can sleep. Make sure you've freed all allocated
memory back to the pool before you destroy it.
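
As an illustrative sketch (the pool name, descriptor size and alignment are
hypothetical), a driver might carve fixed-size hardware descriptors out of a
pool like this::

    #include <linux/dmapool.h>

    struct dma_pool *pool;
    void *desc_cpu;
    dma_addr_t desc_dma;

    /* 64-byte descriptors, 64-byte aligned, never crossing 4K */
    pool = dma_pool_create("mydev_desc", dev, 64, 64, 4096);
    if (!pool)
        return -ENOMEM;

    desc_cpu = dma_pool_zalloc(pool, GFP_KERNEL, &desc_dma);
    if (!desc_cpu) {
        dma_pool_destroy(pool);
        return -ENOMEM;
    }

    /* ... hand desc_dma to the hardware, fill desc_cpu from the CPU ... */

    dma_pool_free(pool, desc_cpu, desc_dma);
    dma_pool_destroy(pool);
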
Part Ic - DMA addressing limitations
------------------------------------

::

    int
    dma_set_mask_and_coherent(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
streaming and coherent DMA mask parameters if it is.

Returns: 0 if successful and a negative error if not.

::

    int
    dma_set_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
streaming DMA mask parameter if it is.

Returns: 0 if successful and a negative error if not.

::

    int
    dma_set_coherent_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
coherent DMA mask parameter if it is.

Returns: 0 if successful and a negative error if not.

::

    u64
    dma_get_required_mask(struct device *dev)

This API returns the mask that the platform requires to
operate efficiently. Usually this means the returned mask
is the minimum required to cover all of memory. Examining the
required mask gives drivers with variable descriptor sizes the
opportunity to use smaller descriptors as necessary.

Requesting the required mask does not alter the current mask. If you
wish to take advantage of it, you should issue a dma_set_mask()
call to set the mask to the value returned.
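
For illustration, here is a hedged sketch of a probe-time mask setup that
uses dma_get_required_mask() to choose between two hypothetical descriptor
formats (the my_use_*_descriptors() helpers are invented for this example)::

    #include <linux/dma-mapping.h>

    static int my_setup_dma_masks(struct device *dev)
    {
        /* If all memory is reachable through 32 bits, a narrow mask
         * (and the hypothetical small descriptor format) is enough. */
        if (dma_get_required_mask(dev) <= DMA_BIT_MASK(32) &&
            dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)) == 0) {
            my_use_small_descriptors();     /* hypothetical helper */
            return 0;
        }

        /* Otherwise ask for full 64-bit addressing */
        if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)))
            return -EIO;                    /* no usable mask */

        my_use_large_descriptors();         /* hypothetical helper */
        return 0;
    }
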
Part Id - Streaming DMA mappings
--------------------------------

::

    dma_addr_t
    dma_map_single(struct device *dev, void *cpu_addr, size_t size,
                   enum dma_data_direction direction)

Maps a piece of processor virtual memory so it can be accessed by the
device and returns the DMA address of the memory.

The direction may be converted freely by casting; however, the dma_API
uses a strongly typed enumerator for its direction:

======================= =============================================
DMA_NONE                no direction (used for debugging)
DMA_TO_DEVICE           data is going from the memory to the device
DMA_FROM_DEVICE         data is coming from the device to the memory
DMA_BIDIRECTIONAL       direction isn't known
======================= =============================================

.. note::

    Not all memory regions in a machine can be mapped by this API.
    Further, contiguous kernel virtual space may not be contiguous as
    physical memory. Since this API does not provide any scatter/gather
    capability, it will fail if the user tries to map a non-physically
    contiguous piece of memory. For this reason, memory to be mapped by
    this API should be obtained from sources which guarantee it to be
    physically contiguous (like kmalloc).

    Further, the DMA address of the memory must be within the
    dma_mask of the device (the dma_mask is a bit mask of the
    addressable region for the device, i.e., if the DMA address of
    the memory ANDed with the dma_mask is still equal to the DMA
    address, then the device can perform DMA to the memory). To
    ensure that the memory allocated by kmalloc is within the dma_mask,
    the driver may specify various platform-dependent flags to restrict
    the DMA address range of the allocation (e.g., on x86, GFP_DMA
    guarantees to be within the first 16MB of available DMA addresses,
    as required by ISA devices).

    Note also that the above constraints on physical contiguity and
    dma_mask may not apply if the platform has an IOMMU (a device which
    maps an I/O DMA address to a physical memory address). However, to be
    portable, device driver writers may *not* assume that such an IOMMU
    exists.

.. warning::

    Memory coherency operates at a granularity called the cache
    line width. In order for memory mapped by this API to operate
    correctly, the mapped region must begin exactly on a cache line
    boundary and end exactly on one (to prevent two separately mapped
    regions from sharing a single cache line). Since the cache line size
    may not be known at compile time, the API will not enforce this
    requirement. Therefore, it is recommended that driver writers who
    don't take special care to determine the cache line size at run time
    only map virtual regions that begin and end on page boundaries (which
    are guaranteed also to be cache line boundaries).

    DMA_TO_DEVICE synchronisation must be done after the last modification
    of the memory region by the software and before it is handed off to
    the device. Once this primitive is used, memory covered by this
    primitive should be treated as read-only by the device. If the device
    may write to it at any point, it should be DMA_BIDIRECTIONAL (see
    below).

    DMA_FROM_DEVICE synchronisation must be done before the driver
    accesses data that may be changed by the device. This memory should
    be treated as read-only by the driver. If the driver needs to write
    to it at any point, it should be DMA_BIDIRECTIONAL (see below).

    DMA_BIDIRECTIONAL requires special handling: it means that the driver
    isn't sure if the memory was modified before being handed off to the
    device and also isn't sure if the device will also modify it. Thus,
    you must always sync bidirectional memory twice: once before the
    memory is handed off to the device (to make sure all memory changes
    are flushed from the processor) and once before the data may be
    accessed after being used by the device (to make sure any processor
    cache lines are updated with data that the device may have changed).

::

    void
    dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
                     enum dma_data_direction direction)

Unmaps the region previously mapped. All parameters must be identical
to those passed to (and returned by) the mapping API.

::

    dma_addr_t
    dma_map_page(struct device *dev, struct page *page,
                 unsigned long offset, size_t size,
                 enum dma_data_direction direction)

    void
    dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
                   enum dma_data_direction direction)

API for mapping and unmapping pages. All the notes and warnings
for the other mapping APIs apply here. Also, although the <offset>
and <size> parameters are provided to do partial page mapping, it is
recommended that you never use these unless you really know what the
cache width is.

::

    dma_addr_t
    dma_map_resource(struct device *dev, phys_addr_t phys_addr, size_t size,
                     enum dma_data_direction dir, unsigned long attrs)

    void
    dma_unmap_resource(struct device *dev, dma_addr_t addr, size_t size,
                       enum dma_data_direction dir, unsigned long attrs)

API for mapping and unmapping MMIO resources. All the notes and
warnings for the other mapping APIs apply here. The API should only be
used to map device MMIO resources; mapping of RAM is not permitted.

::

    int
    dma_mapping_error(struct device *dev, dma_addr_t dma_addr)

In some circumstances dma_map_single(), dma_map_page() and dma_map_resource()
will fail to create a mapping. A driver can check for these errors by testing
the returned DMA address with dma_mapping_error(). A non-zero return value
means the mapping could not be created and the driver should take appropriate
action (e.g. reduce current DMA mapping usage or delay and try again later).
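
To tie these together, here is a hedged sketch of a single streaming
transfer (dev, buf and len are assumed to exist, with buf obtained from
kmalloc() so it is physically contiguous)::

    dma_addr_t dma_addr;

    /* CPU writes the data first, then maps the buffer for the device */
    dma_addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
    if (dma_mapping_error(dev, dma_addr))
        return -ENOMEM;    /* e.g. reduce usage, or retry later */

    /* ... program dma_addr into the device and run the transfer ... */

    /* Unmap with the same dev, size and direction as the mapping */
    dma_unmap_single(dev, dma_addr, len, DMA_TO_DEVICE);
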
::

    int
    dma_map_sg(struct device *dev, struct scatterlist *sg,
               int nents, enum dma_data_direction direction)

Returns: the number of DMA address segments mapped (this may be shorter
than <nents> passed in if some elements of the scatter/gather list are
physically or virtually adjacent and an IOMMU maps them with a single
entry).

Please note that the sg cannot be mapped again if it has been mapped once.
The mapping process is allowed to destroy information in the sg.

As with the other mapping interfaces, dma_map_sg() can fail. When it
does, 0 is returned and the driver must take appropriate action. It is
critical that the driver do something; in the case of a block driver,
aborting the request or even oopsing is better than doing nothing and
corrupting the filesystem.

With scatterlists, you use the resulting mapping like this::

    int i, count = dma_map_sg(dev, sglist, nents, direction);
    struct scatterlist *sg;

    for_each_sg(sglist, sg, count, i) {
        hw_address[i] = sg_dma_address(sg);
        hw_len[i] = sg_dma_len(sg);
    }

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. with an IOMMU, or if several pages just happen to be
physically contiguous) and returns the actual number of sg entries it
mapped them to. On failure, 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

::

    void
    dma_unmap_sg(struct device *dev, struct scatterlist *sg,
                 int nents, enum dma_data_direction direction)

Unmap the previously mapped scatter/gather list. All parameters must be
the same as those passed into the scatter/gather mapping API.

Note: <nents> must be the number you passed in, *not* the number of
DMA address entries returned.

::

    void
    dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
                            size_t size,
                            enum dma_data_direction direction)

    void
    dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
                               size_t size,
                               enum dma_data_direction direction)

    void
    dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
                        int nents,
                        enum dma_data_direction direction)

    void
    dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
                           int nents,
                           enum dma_data_direction direction)

Synchronise a single contiguous or scatter/gather mapping for the CPU
and device. With the sync_sg API, all the parameters must be the same
as those passed into the scatter/gather mapping API. With the sync_single
API, you can use dma_handle and size parameters that aren't identical to
those passed into the single mapping API to do a partial sync.

.. note::

    You must do this:

    - Before reading values that have been written by DMA from the device
      (use the DMA_FROM_DEVICE direction)
    - After writing values that will be written to the device using DMA
      (use the DMA_TO_DEVICE direction)
    - Before *and* after handing memory to the device if the memory is
      DMA_BIDIRECTIONAL

See also dma_map_single().
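
As a hedged sketch of these rules (dev, dma_addr, buf and len are assumed
to come from an earlier dma_map_single() call with DMA_FROM_DEVICE, and
my_process_packet() is a hypothetical consumer), a receive path that keeps
a long-lived mapping might look like::

    /* The device has DMA'd new data into the buffer and raised an
     * interrupt; give the buffer back to the CPU before reading it. */
    dma_sync_single_for_cpu(dev, dma_addr, len, DMA_FROM_DEVICE);

    my_process_packet(buf, len);        /* hypothetical consumer */

    /* Hand the buffer back to the device for the next transfer */
    dma_sync_single_for_device(dev, dma_addr, len, DMA_FROM_DEVICE);
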
::

    dma_addr_t
    dma_map_single_attrs(struct device *dev, void *cpu_addr, size_t size,
                         enum dma_data_direction dir,
                         unsigned long attrs)

    void
    dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr,
                           size_t size, enum dma_data_direction dir,
                           unsigned long attrs)

    int
    dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
                     int nents, enum dma_data_direction dir,
                     unsigned long attrs)

    void
    dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
                       int nents, enum dma_data_direction dir,
                       unsigned long attrs)

The four functions above are just like the counterpart functions
without the _attrs suffixes, except that they pass an optional set of
DMA attributes.

The interpretation of DMA attributes is architecture-specific, and
each attribute should be documented in Documentation/DMA-attributes.txt.

If attrs is 0, the semantics of each of these functions
is identical to those of the corresponding function
without the _attrs suffix. As a result dma_map_single_attrs()
can generally replace dma_map_single(), etc.

As an example of the use of the ``*_attrs`` functions, here's how
you could pass an attribute DMA_ATTR_FOO when mapping memory
for DMA::

    #include <linux/dma-mapping.h>
    /* DMA_ATTR_FOO should be defined in linux/dma-mapping.h and
     * documented in Documentation/DMA-attributes.txt */
    ...

    unsigned long attr = 0;
    attr |= DMA_ATTR_FOO;
    ....
    n = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, attr);
    ....

Architectures that care about DMA_ATTR_FOO would check for its
presence in their implementations of the mapping and unmapping
routines, e.g.::

    int whizco_dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
                                int nents, enum dma_data_direction dir,
                                unsigned long attrs)
    {
        ....
        if (attrs & DMA_ATTR_FOO)
            /* twizzle the frobnozzle */
        ....
    }


Part II - Advanced dma usage
----------------------------

Warning: These pieces of the DMA API should not be used in the
majority of cases, since they cater for unlikely corner cases that
don't belong in usual drivers.

If you don't understand how cache line coherency works between a
processor and an I/O device, you should not be using this part of the
API at all.

::

    void *
    dma_alloc_noncoherent(struct device *dev, size_t size,
                          dma_addr_t *dma_handle, gfp_t flag)

Identical to dma_alloc_coherent() except that the platform will
choose to return either consistent or non-consistent memory as it sees
fit. By using this API, you are guaranteeing to the platform that you
have all the correct and necessary sync points for this memory in the
driver should it choose to return non-consistent memory.

Note: where the platform can return consistent memory, it will
guarantee that the sync points become nops.

Warning: Handling non-consistent memory is a real pain. You should
only use this API if you positively know your driver will be
required to work on one of the rare (usually non-PCI) architectures
that simply cannot make consistent memory.

::

    void
    dma_free_noncoherent(struct device *dev, size_t size, void *cpu_addr,
                         dma_addr_t dma_handle)

Free memory allocated by the noncoherent API. All parameters must be
identical to those passed in (and returned by) dma_alloc_noncoherent().

::

    int
    dma_get_cache_alignment(void)

Returns the processor cache alignment. This is the absolute minimum
alignment *and* width that you must observe when either mapping
memory or doing partial flushes.

.. note::

    This API may return a number *larger* than the actual cache
    line, but it will guarantee that one or more cache lines fit exactly
    into the width returned by this call. It will also always be a power
    of two for easy alignment.

::

    void
    dma_cache_sync(struct device *dev, void *vaddr, size_t size,
                   enum dma_data_direction direction)

Do a partial sync of memory that was allocated by
dma_alloc_noncoherent(), starting at virtual address vaddr and
continuing on for size. Again, you *must* observe the cache line
boundaries when doing this.
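
A hedged sketch of the non-consistent lifecycle (the size and transfer
direction are chosen purely for illustration): every hand-off of the buffer
between CPU and device must be bracketed by dma_cache_sync()::

    void *vaddr;
    dma_addr_t dma_handle;

    vaddr = dma_alloc_noncoherent(dev, PAGE_SIZE, &dma_handle, GFP_KERNEL);
    if (!vaddr)
        return -ENOMEM;

    /* CPU fills the buffer ... */
    memset(vaddr, 0, PAGE_SIZE);
    /* ... then pushes it out of the CPU caches toward the device */
    dma_cache_sync(dev, vaddr, PAGE_SIZE, DMA_TO_DEVICE);

    /* ... device reads the buffer via dma_handle ... */

    dma_free_noncoherent(dev, PAGE_SIZE, vaddr, dma_handle);

On platforms that returned consistent memory, the dma_cache_sync() call
above is guaranteed to be a nop.
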
::

    int
    dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
                                dma_addr_t device_addr, size_t size,
                                int flags)

Declare a region of memory to be handed out by dma_alloc_coherent() when
it's asked for coherent memory for this device.

phys_addr is the CPU physical address to which the memory is currently
assigned (this will be ioremapped so the CPU can access the region).

device_addr is the DMA address the device needs to be programmed
with to actually address this memory (this will be handed out as the
dma_addr_t in dma_alloc_coherent()).

size is the size of the area (must be a multiple of PAGE_SIZE).

flags can be ORed together and are:

- DMA_MEMORY_MAP - request that the memory returned from
  dma_alloc_coherent() be directly writable.

- DMA_MEMORY_IO - request that the memory returned from
  dma_alloc_coherent() be addressable using read()/write()/memcpy_toio() etc.

One or both of these flags must be present.

- DMA_MEMORY_INCLUDES_CHILDREN - make the declared memory be allocated by
  dma_alloc_coherent() of any child devices of this one (for memory residing
  on a bridge).

- DMA_MEMORY_EXCLUSIVE - only allocate memory from the declared regions.
  Do not allow dma_alloc_coherent() to fall back to system memory when
  it's out of memory in the declared region.

The return value will be either DMA_MEMORY_MAP or DMA_MEMORY_IO and
must correspond to a passed in flag (i.e. no returning DMA_MEMORY_IO
if only DMA_MEMORY_MAP was passed in) for success, or zero for
failure.

Note, for DMA_MEMORY_IO returns, all subsequent memory returned by
dma_alloc_coherent() may no longer be accessed directly, but instead
must be accessed using the correct bus functions. If your driver
isn't prepared to handle this contingency, it should not specify
DMA_MEMORY_IO in the input flags.

As a simplification for the platforms, only **one** such region of
memory may be declared per device.

For reasons of efficiency, most platforms choose to track the declared
region only at the granularity of a page. For smaller allocations,
you should use the dma_pool() API.

::

    void
    dma_release_declared_memory(struct device *dev)

Remove the memory region previously declared from the system. This
API performs *no* in-use checking for this region and will return
unconditionally, having removed all the required structures. It is the
driver's job to ensure that no parts of this memory region are
currently in use.
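
As a hedged sketch (the addresses and size of the device-local memory
window are invented for illustration), a driver for a device with on-board
memory behind a bridge might declare it at probe time::

    /* Hypothetical 1 MiB of memory on the device, visible to the CPU
     * at LOCAL_PHYS and to the device itself at device address 0 */
    #define LOCAL_PHYS  0xf0000000
    #define LOCAL_SIZE  (1024 * 1024)

    int rc;

    rc = dma_declare_coherent_memory(dev, LOCAL_PHYS, 0, LOCAL_SIZE,
                                     DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE);
    if (rc != DMA_MEMORY_MAP)
        return -ENODEV;    /* we asked for (only) directly writable memory */

    /* dma_alloc_coherent() on this device now hands out pieces of the
     * declared region; on teardown, release it again: */
    dma_release_declared_memory(dev);
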
::

    void *
    dma_mark_declared_memory_occupied(struct device *dev,
                                      dma_addr_t device_addr, size_t size)

This is used to occupy specific regions of the declared space
(dma_alloc_coherent() will hand out the first free region it finds).

device_addr is the *device* address of the region requested.

size is the size (and should be a page-sized multiple).

The return value will be either a pointer to the processor virtual
address of the memory, or an error (via PTR_ERR()) if any part of the
region is occupied.

Part III - Debugging driver use of the DMA-API
----------------------------------------------

The DMA-API as described above has some constraints. DMA addresses must be
released with the corresponding function and with the same size, for example.
With the advent of hardware IOMMUs it becomes more and more important that
drivers do not violate those constraints. In the worst case such a violation
can result in data corruption, up to and including destroyed filesystems.

To debug drivers and find bugs in the usage of the DMA-API, checking code can
be compiled into the kernel which will tell the developer about those
violations. If your architecture supports it, you can select the "Enable
debugging of DMA-API usage" option in your kernel configuration. Enabling this
option has a performance impact. Do not enable it in production kernels.

If you boot the resulting kernel, it will contain code which does some
bookkeeping about what DMA memory was allocated for which device. If this
code detects an error it prints a warning message with some details into
your kernel log. An example warning message may look like this::

    WARNING: at /data2/repos/linux-2.6-iommu/lib/dma-debug.c:448
        check_unmap+0x203/0x490()
    Hardware name:
    forcedeth 0000:00:08.0: DMA-API: device driver frees DMA memory with wrong
        function [device address=0x00000000640444be] [size=66 bytes] [mapped as
        single] [unmapped as page]
    Modules linked in: nfsd exportfs bridge stp llc r8169
    Pid: 0, comm: swapper Tainted: G        W  2.6.28-dmatest-09289-g8bb99c0 #1
    Call Trace:
    <IRQ>  [<ffffffff80240b22>] warn_slowpath+0xf2/0x130
    [<ffffffff80647b70>] _spin_unlock+0x10/0x30
    [<ffffffff80537e75>] usb_hcd_link_urb_to_ep+0x75/0xc0
    [<ffffffff80647c22>] _spin_unlock_irqrestore+0x12/0x40
    [<ffffffff8055347f>] ohci_urb_enqueue+0x19f/0x7c0
    [<ffffffff80252f96>] queue_work+0x56/0x60
    [<ffffffff80237e10>] enqueue_task_fair+0x20/0x50
    [<ffffffff80539279>] usb_hcd_submit_urb+0x379/0xbc0
    [<ffffffff803b78c3>] cpumask_next_and+0x23/0x40
    [<ffffffff80235177>] find_busiest_group+0x207/0x8a0
    [<ffffffff8064784f>] _spin_lock_irqsave+0x1f/0x50
    [<ffffffff803c7ea3>] check_unmap+0x203/0x490
    [<ffffffff803c8259>] debug_dma_unmap_page+0x49/0x50
    [<ffffffff80485f26>] nv_tx_done_optimized+0xc6/0x2c0
    [<ffffffff80486c13>] nv_nic_irq_optimized+0x73/0x2b0
    [<ffffffff8026df84>] handle_IRQ_event+0x34/0x70
    [<ffffffff8026ffe9>] handle_edge_irq+0xc9/0x150
    [<ffffffff8020e3ab>] do_IRQ+0xcb/0x1c0
    [<ffffffff8020c093>] ret_from_intr+0x0/0xa
    <EOI> <4>---[ end trace f6435a98e2a38c0e ]---

The driver developer can find the driver and the device, including a
stacktrace of the DMA-API call which caused this warning.

By default only the first error will result in a warning message. All other
errors will only be counted silently.
This limitation exists to prevent the code from flooding your kernel log. To
support debugging a device driver, this can be disabled via debugfs. See the
debugfs interface documentation below for details.

The debugfs directory for the DMA-API debugging code is called dma-api/. In
this directory the following files can currently be found:

=============================== ===============================================
dma-api/all_errors              This file contains a numeric value. If this
                                value is not equal to zero the debugging code
                                will print a warning for every error it finds
                                into the kernel log. Be careful with this
                                option, as it can easily flood your logs.

dma-api/disabled                This read-only file contains the character 'Y'
                                if the debugging code is disabled. This can
                                happen when it runs out of memory or if it was
                                disabled at boot time.

dma-api/error_count             This file is read-only and shows the total
                                number of errors found.

dma-api/num_errors              The number in this file shows how many
                                warnings will be printed to the kernel log
                                before it stops. This number is initialized to
                                one at system boot and can be set by writing
                                into this file.

dma-api/min_free_entries        This read-only file can be read to get the
                                minimum number of free dma_debug_entries the
                                allocator has ever seen. If this value goes
                                down to zero the code will disable itself
                                because it is no longer reliable.

dma-api/num_free_entries        The current number of free dma_debug_entries
                                in the allocator.

dma-api/driver-filter           You can write a name of a driver into this
                                file to limit the debug output to requests
                                from that particular driver. Write an empty
                                string to that file to disable the filter and
                                see all errors again.
=============================== ===============================================

If you have this code compiled into your kernel it will be enabled by default.
If you want to boot without the bookkeeping anyway you can provide
'dma_debug=off' as a boot parameter. This will disable DMA-API debugging.
Notice that you cannot enable it again at runtime. You have to reboot to do
so.

If you want to see debug messages only for a special device driver you can
specify the dma_debug_driver=<drivername> parameter. This will enable the
driver filter at boot time. The debug code will only print errors for that
driver afterwards. This filter can be disabled or changed later using debugfs.

When the code disables itself at runtime, this is most likely because it ran
out of dma_debug_entries. These entries are preallocated at boot. The number
of preallocated entries is defined per architecture. If it is too low for you,
boot with 'dma_debug_entries=<your_desired_number>' to overwrite the
architectural default.

::

    void
    debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr);

dma-debug provides debug_dma_mapping_error() to debug drivers that fail
to check for DMA mapping errors on addresses returned by dma_map_single()
and dma_map_page() interfaces. This interface clears a flag set by
debug_dma_map_page() to indicate that dma_mapping_error() has been called by
the driver. When the driver does the unmap, debug_dma_unmap() checks the flag
and, if it is still set, prints a warning message that includes the call trace
leading up to the unmap.
This interface can be called from dma_mapping_error() routines to
enable DMA mapping error check debugging.