		Dynamic DMA mapping using the generic device
		============================================

	James E.J. Bottomley <James.Bottomley@HansenPartnership.com>

This document describes the DMA API.  For a gentler introduction to the
API (and actual examples), see Documentation/DMA-API-HOWTO.txt.

This API is split into two pieces.  Part I describes the basic API.
Part II describes extensions for supporting non-consistent memory
machines.  Unless you know that your driver absolutely has to support
non-consistent platforms (this is usually only legacy platforms) you
should only use the API described in part I.

Part I - dma_ API
-------------------------------------

To get the dma_ API, you must #include <linux/dma-mapping.h>.  This
provides dma_addr_t and the interfaces described below.

A dma_addr_t can hold any valid DMA address for the platform.  It can be
given to a device to use as a DMA source or target.  A CPU cannot reference
a dma_addr_t directly because there may be translation between its physical
address space and the DMA address space.

Part Ia - Using large DMA-coherent buffers
------------------------------------------

void *
dma_alloc_coherent(struct device *dev, size_t size,
		   dma_addr_t *dma_handle, gfp_t flag)

Consistent memory is memory for which a write by either the device or
the processor can immediately be read by the processor or device
without having to worry about caching effects.  (You may however need
to make sure to flush the processor's write buffers before telling
devices to read that memory.)

This routine allocates a region of <size> bytes of consistent memory.

It returns a pointer to the allocated region (in the processor's virtual
address space) or NULL if the allocation failed.

It also returns a <dma_handle> which may be cast to an unsigned integer the
same width as the bus and given to the device as the DMA address base of
the region.

Note: consistent memory can be expensive on some platforms, and the
minimum allocation length may be as big as a page, so you should
consolidate your requests for consistent memory as much as possible.
The simplest way to do that is to use the dma_pool calls (see below).

The flag parameter (dma_alloc_coherent() only) allows the caller to
specify the GFP_ flags (see kmalloc()) for the allocation (the
implementation may choose to ignore flags that affect the location of
the returned memory, like GFP_DMA).

void *
dma_zalloc_coherent(struct device *dev, size_t size,
		    dma_addr_t *dma_handle, gfp_t flag)

Wraps dma_alloc_coherent() and also zeroes the returned memory if the
allocation attempt succeeded.

void
dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
		  dma_addr_t dma_handle)

Free a region of consistent memory you previously allocated.  dev,
size and dma_handle must all be the same as those passed into
dma_alloc_coherent().  cpu_addr must be the virtual address returned by
dma_alloc_coherent().

Note that unlike their sibling allocation calls, these routines
may only be called with IRQs enabled.
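
For example, a driver might allocate a descriptor ring at probe time
and free it again at remove time.  The following is only a sketch (the
device pointer, the ring variables and the 4096-byte size are made up
for illustration):

	#include <linux/dma-mapping.h>

	void *ring_cpu;		/* CPU virtual address */
	dma_addr_t ring_dma;	/* DMA address for the device */

	ring_cpu = dma_alloc_coherent(dev, 4096, &ring_dma, GFP_KERNEL);
	if (!ring_cpu)
		return -ENOMEM;

	/* ... program ring_dma into the device's ring base register ... */

	dma_free_coherent(dev, 4096, ring_cpu, ring_dma);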

Part Ib - Using small DMA-coherent buffers
------------------------------------------

To get this part of the dma_ API, you must #include <linux/dmapool.h>.

Many drivers need lots of small DMA-coherent memory regions for DMA
descriptors or I/O buffers.  Rather than allocating in units of a page
or more using dma_alloc_coherent(), you can use DMA pools.  These work
much like a struct kmem_cache, except that they use the DMA-coherent
allocator, not __get_free_pages().  Also, they understand common
hardware constraints for alignment, like queue heads needing to be
aligned on N-byte boundaries.


	struct dma_pool *
	dma_pool_create(const char *name, struct device *dev,
			size_t size, size_t align, size_t alloc);

dma_pool_create() initializes a pool of DMA-coherent buffers
for use with a given device.  It must be called in a context which
can sleep.

The "name" is for diagnostics (like a struct kmem_cache name); dev and size
are like what you'd pass to dma_alloc_coherent().  The device's hardware
alignment requirement for this type of data is "align" (which is expressed
in bytes, and must be a power of two).  If your device has no boundary
crossing restrictions, pass 0 for alloc; passing 4096 says memory allocated
from this pool must not cross 4KByte boundaries.


	void *dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags,
			     dma_addr_t *dma_handle);

This allocates memory from the pool; the returned memory will meet the
size and alignment requirements specified at creation time.  Pass
GFP_ATOMIC to prevent blocking, or if it's permitted (not
in_interrupt, not holding SMP locks), pass GFP_KERNEL to allow
blocking.  Like dma_alloc_coherent(), this returns two values: an
address usable by the CPU, and the DMA address usable by the pool's
device.


	void *dma_pool_zalloc(struct dma_pool *pool, gfp_t mem_flags,
			      dma_addr_t *handle)

Wraps dma_pool_alloc() and also zeroes the returned memory if the
allocation attempt succeeded.


	void dma_pool_free(struct dma_pool *pool, void *vaddr,
			   dma_addr_t addr);

This puts memory back into the pool.  The pool is what was passed to
dma_pool_alloc(); the CPU (vaddr) and DMA addresses are what
were returned when that routine allocated the memory being freed.


	void dma_pool_destroy(struct dma_pool *pool);

dma_pool_destroy() frees the resources of the pool.  It must be
called in a context which can sleep.  Make sure you've freed all allocated
memory back to the pool before you destroy it.
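
To illustrate the lifecycle, a driver managing a pool of (hypothetical)
64-byte, 16-byte-aligned descriptors might look roughly like this:

	struct dma_pool *pool;
	void *desc;
	dma_addr_t desc_dma;

	/* 64-byte buffers, 16-byte aligned, no boundary restriction */
	pool = dma_pool_create("example_descs", dev, 64, 16, 0);
	if (!pool)
		return -ENOMEM;

	desc = dma_pool_alloc(pool, GFP_KERNEL, &desc_dma);
	if (!desc) {
		dma_pool_destroy(pool);
		return -ENOMEM;
	}

	/* ... hand desc_dma to the device, access desc from the CPU ... */

	dma_pool_free(pool, desc, desc_dma);
	dma_pool_destroy(pool);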

Part Ic - DMA addressing limitations
------------------------------------

int
dma_supported(struct device *dev, u64 mask)

Checks to see if the device can support DMA to the memory described by
mask.

Returns: 1 if it can and 0 if it can't.

Notes: This routine merely tests to see if the mask is possible.  It
won't change the current mask settings.  It is intended more as an
internal API for use by the platform than an external API for use by
driver writers.

int
dma_set_mask_and_coherent(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
streaming and coherent DMA mask parameters if it is.

Returns: 0 if successful and a negative error if not.

int
dma_set_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.

int
dma_set_coherent_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.

u64
dma_get_required_mask(struct device *dev)

This API returns the mask that the platform requires to
operate efficiently.  Usually this means the returned mask
is the minimum required to cover all of memory.  Examining the
required mask gives drivers with variable descriptor sizes the
opportunity to use smaller descriptors as necessary.

Requesting the required mask does not alter the current mask.  If you
wish to take advantage of it, you should issue a dma_set_mask()
call to set the mask to the value returned.
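
As an example of how these calls are typically combined, a driver that
prefers 64-bit addressing but can live with 32 bits might do the
following in its probe routine (a sketch; the fallback policy is up to
the driver):

	#include <linux/dma-mapping.h>

	/* Try 64-bit DMA first, fall back to 32-bit. */
	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)) &&
	    dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
		dev_warn(dev, "no suitable DMA addressing available\n");
		return -ENODEV;
	}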


Part Id - Streaming DMA mappings
--------------------------------

dma_addr_t
dma_map_single(struct device *dev, void *cpu_addr, size_t size,
	       enum dma_data_direction direction)

Maps a piece of processor virtual memory so it can be accessed by the
device and returns the DMA address of the memory.

The dma_ API uses a strongly typed enumerator for its direction:

DMA_NONE		no direction (used for debugging)
DMA_TO_DEVICE		data is going from the memory to the device
DMA_FROM_DEVICE		data is coming from the device to the memory
DMA_BIDIRECTIONAL	direction isn't known

Notes: Not all memory regions in a machine can be mapped by this API.
Further, contiguous kernel virtual space may not be contiguous in
physical memory.  Since this API does not provide any scatter/gather
capability, it will fail if the user tries to map a non-physically
contiguous piece of memory.  For this reason, memory to be mapped by
this API should be obtained from sources which guarantee it to be
physically contiguous (like kmalloc).

Further, the DMA address of the memory must be within the
dma_mask of the device (the dma_mask is a bit mask of the
addressable region for the device, i.e., if the DMA address of
the memory ANDed with the dma_mask is still equal to the DMA
address, then the device can perform DMA to the memory).  To
ensure that the memory allocated by kmalloc is within the dma_mask,
the driver may specify various platform-dependent flags to restrict
the DMA address range of the allocation (e.g., on x86, GFP_DMA
guarantees that the allocation is within the first 16MB of available
DMA addresses, as required by ISA devices).

Note also that the above constraints on physical contiguity and
dma_mask may not apply if the platform has an IOMMU (a device which
maps an I/O DMA address to a physical memory address).  However, to be
portable, device driver writers may *not* assume that such an IOMMU
exists.

Warnings: Memory coherency operates at a granularity called the cache
line width.  In order for memory mapped by this API to operate
correctly, the mapped region must begin exactly on a cache line
boundary and end exactly on one (to prevent two separately mapped
regions from sharing a single cache line).  Since the cache line size
may not be known at compile time, the API will not enforce this
requirement.  Therefore, it is recommended that driver writers who
don't take special care to determine the cache line size at run time
only map virtual regions that begin and end on page boundaries (which
are guaranteed also to be cache line boundaries).

DMA_TO_DEVICE synchronisation must be done after the last modification
of the memory region by the software and before it is handed off to
the device.  Once this primitive is used, memory covered by this
primitive should be treated as read-only by the device.  If the device
may write to it at any point, it should be DMA_BIDIRECTIONAL (see
below).

DMA_FROM_DEVICE synchronisation must be done before the driver
accesses data that may be changed by the device.  This memory should
be treated as read-only by the driver.  If the driver needs to write
to it at any point, it should be DMA_BIDIRECTIONAL (see below).

DMA_BIDIRECTIONAL requires special handling: it means that the driver
isn't sure if the memory was modified before being handed off to the
device and also isn't sure if the device will also modify it.  Thus,
you must always sync bidirectional memory twice: once before the
memory is handed off to the device (to make sure all memory changes
are flushed from the processor) and once before the data may be
accessed after being used by the device (to make sure any processor
cache lines are updated with data that the device may have changed).

void
dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
		 enum dma_data_direction direction)

Unmaps the region previously mapped.  All the parameters passed in
must be identical to those passed in (and returned) by the mapping
API.
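
Tying the above together, a transmit path might look roughly like this
(a sketch only; buf and len are made up, and dma_mapping_error() is
described below):

	dma_addr_t dma_handle;

	/* buf came from kmalloc(), so it is physically contiguous */
	dma_handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma_handle))
		return -ENOMEM;		/* or back off and retry later */

	/* ... hand dma_handle to the device and wait for completion ... */

	dma_unmap_single(dev, dma_handle, len, DMA_TO_DEVICE);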

dma_addr_t
dma_map_page(struct device *dev, struct page *page,
	     unsigned long offset, size_t size,
	     enum dma_data_direction direction)
void
dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
	       enum dma_data_direction direction)

API for mapping and unmapping for pages.  All the notes and warnings
for the other mapping APIs apply here.  Also, although the <offset>
and <size> parameters are provided to do partial page mapping, it is
recommended that you never use these unless you really know what the
cache width is.

int
dma_mapping_error(struct device *dev, dma_addr_t dma_addr)

In some circumstances dma_map_single() and dma_map_page() will fail to create
a mapping.  A driver can check for these errors by testing the returned
DMA address with dma_mapping_error().  A non-zero return value means the
mapping could not be created and the driver should take appropriate action
(e.g. reduce current DMA mapping usage or delay and try again later).

	int
	dma_map_sg(struct device *dev, struct scatterlist *sg,
		   int nents, enum dma_data_direction direction)

Returns: the number of DMA address segments mapped (this may be shorter
than <nents> passed in if some elements of the scatter/gather list are
physically or virtually adjacent and an IOMMU maps them with a single
entry).

Please note that the sg cannot be mapped again if it has been mapped once.
The mapping process is allowed to destroy information in the sg.

As with the other mapping interfaces, dma_map_sg() can fail.  When it
does, 0 is returned and a driver must take appropriate action.  It is
critical that the driver do something; in the case of a block driver,
aborting the request or even oopsing is better than doing nothing and
corrupting the filesystem.

With scatterlists, you use the resulting mapping like this:

	int i, count = dma_map_sg(dev, sglist, nents, direction);
	struct scatterlist *sg;

	for_each_sg(sglist, sg, count, i) {
		hw_address[i] = sg_dma_address(sg);
		hw_len[i] = sg_dma_len(sg);
	}

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. with an IOMMU, or if several pages just happen to be
physically contiguous) and returns the actual number of sg entries it
mapped them to.  On failure, 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

	void
	dma_unmap_sg(struct device *dev, struct scatterlist *sg,
		     int nhwentries, enum dma_data_direction direction)

Unmap the previously mapped scatter/gather list.  All the parameters
must be the same as those passed into the scatter/gather mapping
API.

Note: <nents> must be the number you passed in, *not* the number of
DMA address entries returned.

void
dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size,
			enum dma_data_direction direction)
void
dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
			   size_t size, enum dma_data_direction direction)
void
dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems,
		    enum dma_data_direction direction)
void
dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nelems,
		       enum dma_data_direction direction)

Synchronise a single contiguous or scatter/gather mapping for the CPU
and device.  With the sync_sg API, all the parameters must be the same
as those passed into the scatter/gather mapping API.  With the sync_single
API, you can use dma_handle and size parameters that aren't identical to
those passed into the single mapping API to do a partial sync.

Notes: You must do this:

- Before reading values that have been written by DMA from the device
  (use the DMA_FROM_DEVICE direction)
- After writing values that will be written to the device using DMA
  (use the DMA_TO_DEVICE direction)
- Before *and* after handing memory to the device if the memory is
  DMA_BIDIRECTIONAL

See also dma_map_single().
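
For example, a driver that keeps a receive buffer mapped and reuses it
for several transfers might bracket each transfer with sync calls like
this (a sketch; buf, len and the device handshaking are made up):

	dma_handle = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);

	/* for each reuse of the buffer: */
	dma_sync_single_for_device(dev, dma_handle, len, DMA_FROM_DEVICE);
	/* ... tell the device to DMA into the buffer and wait ... */
	dma_sync_single_for_cpu(dev, dma_handle, len, DMA_FROM_DEVICE);
	/* ... the CPU may now read buf safely ... */

	dma_unmap_single(dev, dma_handle, len, DMA_FROM_DEVICE);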

dma_addr_t
dma_map_single_attrs(struct device *dev, void *cpu_addr, size_t size,
		     enum dma_data_direction dir,
		     struct dma_attrs *attrs)

void
dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr,
		       size_t size, enum dma_data_direction dir,
		       struct dma_attrs *attrs)

int
dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
		 int nents, enum dma_data_direction dir,
		 struct dma_attrs *attrs)

void
dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
		   int nents, enum dma_data_direction dir,
		   struct dma_attrs *attrs)

The four functions above are just like the counterpart functions
without the _attrs suffixes, except that they pass an optional
struct dma_attrs*.

struct dma_attrs encapsulates a set of "DMA attributes".  For the
definition of struct dma_attrs see linux/dma-attrs.h.

The interpretation of DMA attributes is architecture-specific, and
each attribute should be documented in Documentation/DMA-attributes.txt.

If struct dma_attrs* is NULL, the semantics of each of these
functions is identical to those of the corresponding function
without the _attrs suffix.  As a result dma_map_single_attrs()
can generally replace dma_map_single(), etc.

As an example of the use of the *_attrs functions, here's how
you could pass an attribute DMA_ATTR_FOO when mapping memory
for DMA:

#include <linux/dma-attrs.h>
/* DMA_ATTR_FOO should be defined in linux/dma-attrs.h and
 * documented in Documentation/DMA-attributes.txt */
...

	DEFINE_DMA_ATTRS(attrs);
	dma_set_attr(DMA_ATTR_FOO, &attrs);
	....
	n = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, &attrs);
	....

Architectures that care about DMA_ATTR_FOO would check for its
presence in their implementations of the mapping and unmapping
routines, e.g.:

void whizco_dma_map_sg_attrs(struct device *dev, dma_addr_t dma_addr,
			     size_t size, enum dma_data_direction dir,
			     struct dma_attrs *attrs)
{
	....
	int foo = dma_get_attr(DMA_ATTR_FOO, attrs);
	....
	if (foo)
		/* twizzle the frobnozzle */
	....
}


Part II - Advanced dma_ usage
-----------------------------

Warning: These pieces of the DMA API should not be used in the
majority of cases, since they cater for unlikely corner cases that
don't belong in usual drivers.

If you don't understand how cache line coherency works between a
processor and an I/O device, you should not be using this part of the
API at all.

void *
dma_alloc_noncoherent(struct device *dev, size_t size,
		      dma_addr_t *dma_handle, gfp_t flag)

Identical to dma_alloc_coherent() except that the platform will
choose to return either consistent or non-consistent memory as it sees
fit.  By using this API, you are guaranteeing to the platform that you
have all the correct and necessary sync points for this memory in the
driver should it choose to return non-consistent memory.

Note: where the platform can return consistent memory, it will
guarantee that the sync points become nops.

Warning: Handling non-consistent memory is a real pain.  You should
only use this API if you positively know your driver will be
required to work on one of the rare (usually non-PCI) architectures
that simply cannot make consistent memory.

void
dma_free_noncoherent(struct device *dev, size_t size, void *cpu_addr,
		     dma_addr_t dma_handle)

Free memory allocated by the non-consistent API.  All parameters must
be identical to those passed in (and returned by
dma_alloc_noncoherent()).

int
dma_get_cache_alignment(void)

Returns the processor cache alignment.  This is the absolute minimum
alignment *and* width that you must observe when either mapping
memory or doing partial flushes.

Notes: This API may return a number *larger* than the actual cache
line, but it will guarantee that one or more cache lines fit exactly
into the width returned by this call.  It will also always be a power
of two for easy alignment.

void
dma_cache_sync(struct device *dev, void *vaddr, size_t size,
	       enum dma_data_direction direction)

Do a partial sync of memory that was allocated by
dma_alloc_noncoherent(), starting at virtual address vaddr and
continuing on for size.  Again, you *must* observe the cache line
boundaries when doing this.
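
To sketch how these pieces fit together (here the buffer is written by
the CPU and then read by the device; the names are illustrative only):

	void *vaddr;
	dma_addr_t dma_handle;

	vaddr = dma_alloc_noncoherent(dev, size, &dma_handle, GFP_KERNEL);
	if (!vaddr)
		return -ENOMEM;

	memset(vaddr, 0, size);		/* CPU fills the buffer */
	dma_cache_sync(dev, vaddr, size, DMA_TO_DEVICE);

	/* ... the device may now read the buffer via dma_handle ... */

	dma_free_noncoherent(dev, size, vaddr, dma_handle);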

int
dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
			    dma_addr_t device_addr, size_t size, int flags)

Declare region of memory to be handed out by dma_alloc_coherent() when
it's asked for coherent memory for this device.

phys_addr is the CPU physical address to which the memory is currently
assigned (this will be ioremapped so the CPU can access the region).

device_addr is the DMA address the device needs to be programmed
with to actually address this memory (this will be handed out as the
dma_addr_t in dma_alloc_coherent()).

size is the size of the area (must be a multiple of PAGE_SIZE).

flags can be ORed together and are:

DMA_MEMORY_MAP - request that the memory returned from
dma_alloc_coherent() be directly writable.

DMA_MEMORY_IO - request that the memory returned from
dma_alloc_coherent() be addressable using read()/write()/memcpy_toio() etc.

One or both of these flags must be present.

DMA_MEMORY_INCLUDES_CHILDREN - make the declared memory be allocated by
dma_alloc_coherent of any child devices of this one (for memory residing
on a bridge).

DMA_MEMORY_EXCLUSIVE - only allocate memory from the declared regions.
Do not allow dma_alloc_coherent() to fall back to system memory when
it's out of memory in the declared region.

The return value will be either DMA_MEMORY_MAP or DMA_MEMORY_IO and
must correspond to a passed in flag (i.e. no returning DMA_MEMORY_IO
if only DMA_MEMORY_MAP were passed in) for success or zero for
failure.

Note, for DMA_MEMORY_IO returns, all subsequent memory returned by
dma_alloc_coherent() may no longer be accessed directly, but instead
must be accessed using the correct bus functions.  If your driver
isn't prepared to handle this contingency, it should not specify
DMA_MEMORY_IO in the input flags.

As a simplification for the platforms, only *one* such region of
memory may be declared per device.

For reasons of efficiency, most platforms choose to track the declared
region only at the granularity of a page.  For smaller allocations,
you should use the dma_pool() API.
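
A declaration might look roughly as follows (a sketch; the physical
address, device address and 64KB size are hypothetical, and the zero
return value indicates failure as described above):

	/* 64KB of device-local memory at a hypothetical physical address */
	if (!dma_declare_coherent_memory(dev, 0xf8000000, 0, 0x10000,
					 DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE))
		return -ENODEV;

	/*
	 * dma_alloc_coherent() for this device now allocates from the
	 * declared region; dma_release_declared_memory() (see below)
	 * undoes the declaration at teardown time.
	 */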

void
dma_release_declared_memory(struct device *dev)

Remove the memory region previously declared from the system.  This
API performs *no* in-use checking for this region and will return
unconditionally having removed all the required structures.  It is the
driver's job to ensure that no parts of this memory region are
currently in use.

void *
dma_mark_declared_memory_occupied(struct device *dev,
				  dma_addr_t device_addr, size_t size)

This is used to occupy specific regions of the declared space
(dma_alloc_coherent() will hand out the first free region it finds).

device_addr is the *device* address of the region requested.

size is the size (and should be a page-sized multiple).

The return value will be either a pointer to the processor virtual
address of the memory, or an error (via PTR_ERR()) if any part of the
region is occupied.

Part III - Debug drivers' use of the DMA-API
--------------------------------------------

The DMA-API as described above has some constraints.  DMA addresses must be
released with the corresponding function with the same size, for example.
With the advent of hardware IOMMUs it becomes more and more important that
drivers do not violate those constraints.  In the worst case such a violation
can result in data corruption up to destroyed filesystems.

To debug drivers and find bugs in the usage of the DMA-API, checking code can
be compiled into the kernel which will tell the developer about those
violations.  If your architecture supports it, you can select the "Enable
debugging of DMA-API usage" option in your kernel configuration.  Enabling
this option has a performance impact.  Do not enable it in production
kernels.

If you boot the resulting kernel, it will contain code which does some
bookkeeping about what DMA memory was allocated for which device.  If this
code detects an error it prints a warning message with some details into
your kernel log.  An example warning message may look like this:

------------[ cut here ]------------
WARNING: at /data2/repos/linux-2.6-iommu/lib/dma-debug.c:448
	check_unmap+0x203/0x490()
Hardware name:
forcedeth 0000:00:08.0: DMA-API: device driver frees DMA memory with wrong
	function [device address=0x00000000640444be] [size=66 bytes] [mapped as
single] [unmapped as page]
Modules linked in: nfsd exportfs bridge stp llc r8169
Pid: 0, comm: swapper Tainted: G        W  2.6.28-dmatest-09289-g8bb99c0 #1
Call Trace:
 <IRQ>  [<ffffffff80240b22>] warn_slowpath+0xf2/0x130
 [<ffffffff80647b70>] _spin_unlock+0x10/0x30
 [<ffffffff80537e75>] usb_hcd_link_urb_to_ep+0x75/0xc0
 [<ffffffff80647c22>] _spin_unlock_irqrestore+0x12/0x40
 [<ffffffff8055347f>] ohci_urb_enqueue+0x19f/0x7c0
 [<ffffffff80252f96>] queue_work+0x56/0x60
 [<ffffffff80237e10>] enqueue_task_fair+0x20/0x50
 [<ffffffff80539279>] usb_hcd_submit_urb+0x379/0xbc0
 [<ffffffff803b78c3>] cpumask_next_and+0x23/0x40
 [<ffffffff80235177>] find_busiest_group+0x207/0x8a0
 [<ffffffff8064784f>] _spin_lock_irqsave+0x1f/0x50
 [<ffffffff803c7ea3>] check_unmap+0x203/0x490
 [<ffffffff803c8259>] debug_dma_unmap_page+0x49/0x50
 [<ffffffff80485f26>] nv_tx_done_optimized+0xc6/0x2c0
 [<ffffffff80486c13>] nv_nic_irq_optimized+0x73/0x2b0
 [<ffffffff8026df84>] handle_IRQ_event+0x34/0x70
 [<ffffffff8026ffe9>] handle_edge_irq+0xc9/0x150
 [<ffffffff8020e3ab>] do_IRQ+0xcb/0x1c0
 [<ffffffff8020c093>] ret_from_intr+0x0/0xa
 <EOI> <4>---[ end trace f6435a98e2a38c0e ]---

The driver developer can find the driver and the device including a stacktrace
of the DMA-API call which caused this warning.

By default only the first error will result in a warning message.  All other
errors will only be silently counted.  This limitation exists to prevent the
code from flooding your kernel log.  To support debugging a device driver,
this can be disabled via debugfs.  See the debugfs interface documentation
below for details.

The debugfs directory for the DMA-API debugging code is called dma-api/.  In
this directory the following files can currently be found:

	dma-api/all_errors	This file contains a numeric value.  If this
				value is not equal to zero the debugging code
				will print a warning for every error it finds
				into the kernel log.  Be careful with this
				option, as it can easily flood your logs.

	dma-api/disabled	This read-only file contains the character 'Y'
				if the debugging code is disabled.  This can
				happen when it runs out of memory or if it was
				disabled at boot time.

	dma-api/error_count	This file is read-only and shows the total
				number of errors found.

	dma-api/num_errors	The number in this file shows how many
				warnings will be printed to the kernel log
				before it stops.  This number is initialized
				to one at system boot and can be set by
				writing into this file.
This number is initialized to 663 one at system boot and be set by writing into 664 this file 665 666 dma-api/min_free_entries 667 This read-only file can be read to get the 668 minimum number of free dma_debug_entries the 669 allocator has ever seen. If this value goes 670 down to zero the code will disable itself 671 because it is not longer reliable. 672 673 dma-api/num_free_entries 674 The current number of free dma_debug_entries 675 in the allocator. 676 677 dma-api/driver-filter 678 You can write a name of a driver into this file 679 to limit the debug output to requests from that 680 particular driver. Write an empty string to 681 that file to disable the filter and see 682 all errors again. 683 684If you have this code compiled into your kernel it will be enabled by default. 685If you want to boot without the bookkeeping anyway you can provide 686'dma_debug=off' as a boot parameter. This will disable DMA-API debugging. 687Notice that you can not enable it again at runtime. You have to reboot to do 688so. 689 690If you want to see debug messages only for a special device driver you can 691specify the dma_debug_driver=<drivername> parameter. This will enable the 692driver filter at boot time. The debug code will only print errors for that 693driver afterwards. This filter can be disabled or changed later using debugfs. 694 695When the code disables itself at runtime this is most likely because it ran 696out of dma_debug_entries. These entries are preallocated at boot. The number 697of preallocated entries is defined per architecture. If it is too low for you 698boot with 'dma_debug_entries=<your_desired_number>' to overwrite the 699architectural default. 700 701void debug_dmap_mapping_error(struct device *dev, dma_addr_t dma_addr); 702 703dma-debug interface debug_dma_mapping_error() to debug drivers that fail 704to check DMA mapping errors on addresses returned by dma_map_single() and 705dma_map_page() interfaces. This interface clears a flag set by 706debug_dma_map_page() to indicate that dma_mapping_error() has been called by 707the driver. When driver does unmap, debug_dma_unmap() checks the flag and if 708this flag is still set, prints warning message that includes call trace that 709leads up to the unmap. This interface can be called from dma_mapping_error() 710routines to enable DMA mapping error check debugging. 711