Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'iommu-updates-v3.8' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu

Pull IOMMU updates from Joerg Roedel:
"A few new features this merge window. The most important one is
probably that dma-debug now warns if a DMA handle is not checked with
dma_mapping_error by the device driver. This requires minor changes
to some architectures which make use of dma-debug. Most of these
changes have the respective Acks from the arch maintainers.

Besides that, there are updates to the AMD IOMMU driver to refactor
the IOMMU groups support and to make sure it does not trigger a
hardware erratum.

The OMAP changes (for which I pulled in a branch from Tony Lindgren's
tree) have a conflict in linux-next with the arm-soc tree. The
conflict is in the file arch/arm/mach-omap2/clock44xx_data.c, which is
deleted in the arm-soc tree. It is safe to delete the file too to
resolve the conflict. Similar changes are done in the arm-soc tree in
the common clock framework migration. A missing hunk from the patch
in the IOMMU tree will be submitted as a separate patch when the
merge window is closed."

* tag 'iommu-updates-v3.8' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (29 commits)
ARM: dma-mapping: support debug_dma_mapping_error
ARM: OMAP4: hwmod data: ipu and dsp to use parent clocks instead of leaf clocks
iommu/omap: Adapt to runtime pm
iommu/omap: Migrate to hwmod framework
iommu/omap: Keep mmu enabled when requested
iommu/omap: Remove redundant clock handling on ISR
iommu/amd: Remove obsolete comment
iommu/amd: Don't use 512GB pages
iommu/tegra: smmu: Move bus_set_iommu after probe for multi arch
iommu/tegra: gart: Move bus_set_iommu after probe for multi arch
iommu/tegra: smmu: Remove unnecessary PTC/TLB flush all
tile: dma_debug: add debug_dma_mapping_error support
sh: dma_debug: add debug_dma_mapping_error support
powerpc: dma_debug: add debug_dma_mapping_error support
mips: dma_debug: add debug_dma_mapping_error support
microblaze: dma-mapping: support debug_dma_mapping_error
ia64: dma_debug: add debug_dma_mapping_error support
c6x: dma_debug: add debug_dma_mapping_error support
ARM64: dma_debug: add debug_dma_mapping_error support
intel-iommu: Prevent devices with RMRRs from being placed into SI Domain
...

+490 -285
+126
Documentation/DMA-API-HOWTO.txt
···
 		size_t size = buffer->len;
 
 		dma_handle = dma_map_single(dev, addr, size, direction);
+		if (dma_mapping_error(dma_handle)) {
+			/*
+			 * reduce current DMA mapping usage,
+			 * delay and try again later or
+			 * reset driver.
+			 */
+			goto map_error_handling;
+		}
 
 and to unmap it:
 
 		dma_unmap_single(dev, dma_handle, size, direction);
+
+You should call dma_mapping_error() as dma_map_single() could fail and return
+error. Not all dma implementations support dma_mapping_error() interface.
+However, it is a good practice to call dma_mapping_error() interface, which
+will invoke the generic mapping error check interface. Doing so will ensure
+that the mapping code will work correctly on all dma implementations without
+any dependency on the specifics of the underlying implementation. Using the
+returned address without checking for errors could result in failures ranging
+from panics to silent data corruption. Couple of example of incorrect ways to
+check for errors that make assumptions about the underlying dma implementation
+are as follows and these are applicable to dma_map_page() as well.
+
+Incorrect example 1:
+	dma_addr_t dma_handle;
+
+	dma_handle = dma_map_single(dev, addr, size, direction);
+	if ((dma_handle & 0xffff != 0) || (dma_handle >= 0x1000000)) {
+		goto map_error;
+	}
+
+Incorrect example 2:
+	dma_addr_t dma_handle;
+
+	dma_handle = dma_map_single(dev, addr, size, direction);
+	if (dma_handle == DMA_ERROR_CODE) {
+		goto map_error;
+	}
 
 You should call dma_unmap_single when the DMA activity is finished, e.g.
 from the interrupt which told you that the DMA transfer is done.
···
 		size_t size = buffer->len;
 
 		dma_handle = dma_map_page(dev, page, offset, size, direction);
+		if (dma_mapping_error(dma_handle)) {
+			/*
+			 * reduce current DMA mapping usage,
+			 * delay and try again later or
+			 * reset driver.
+			 */
+			goto map_error_handling;
+		}
 
 		...
 
 		dma_unmap_page(dev, dma_handle, size, direction);
 
 Here, "offset" means byte offset within the given page.
+
+You should call dma_mapping_error() as dma_map_page() could fail and return
+error as outlined under the dma_map_single() discussion.
+
+You should call dma_unmap_page when the DMA activity is finished, e.g.
+from the interrupt which told you that the DMA transfer is done.
 
 With scatterlists, you map a region gathered from several regions by:
···
 	dma_addr_t mapping;
 
 	mapping = dma_map_single(cp->dev, buffer, len, DMA_FROM_DEVICE);
+	if (dma_mapping_error(dma_handle)) {
+		/*
+		 * reduce current DMA mapping usage,
+		 * delay and try again later or
+		 * reset driver.
+		 */
+		goto map_error_handling;
+	}
 
 	cp->rx_buf = buffer;
 	cp->rx_len = len;
···
 	 * delay and try again later or
 	 * reset driver.
 	 */
+	goto map_error_handling;
+}
+
+- unmap pages that are already mapped, when mapping error occurs in the middle
+  of a multiple page mapping attempt. These example are applicable to
+  dma_map_page() as well.
+
+Example 1:
+	dma_addr_t dma_handle1;
+	dma_addr_t dma_handle2;
+
+	dma_handle1 = dma_map_single(dev, addr, size, direction);
+	if (dma_mapping_error(dev, dma_handle1)) {
+		/*
+		 * reduce current DMA mapping usage,
+		 * delay and try again later or
+		 * reset driver.
+		 */
+		goto map_error_handling1;
+	}
+	dma_handle2 = dma_map_single(dev, addr, size, direction);
+	if (dma_mapping_error(dev, dma_handle2)) {
+		/*
+		 * reduce current DMA mapping usage,
+		 * delay and try again later or
+		 * reset driver.
+		 */
+		goto map_error_handling2;
+	}
+
+	...
+
+	map_error_handling2:
+		dma_unmap_single(dma_handle1);
+	map_error_handling1:
+
+Example 2: (if buffers are allocated a loop, unmap all mapped buffers when
+	    mapping error is detected in the middle)
+
+	dma_addr_t dma_addr;
+	dma_addr_t array[DMA_BUFFERS];
+	int save_index = 0;
+
+	for (i = 0; i < DMA_BUFFERS; i++) {
+
+		...
+
+		dma_addr = dma_map_single(dev, addr, size, direction);
+		if (dma_mapping_error(dev, dma_addr)) {
+			/*
+			 * reduce current DMA mapping usage,
+			 * delay and try again later or
+			 * reset driver.
+			 */
+			goto map_error_handling;
+		}
+		array[i].dma_addr = dma_addr;
+		save_index++;
+	}
+
+	...
+
+	map_error_handling:
+
+	for (i = 0; i < save_index; i++) {
+
+		...
+
+		dma_unmap_single(array[i].dma_addr);
 	}
 
 Networking drivers must call dev_kfree_skb to free the socket buffer
+12
Documentation/DMA-API.txt
···
 of preallocated entries is defined per architecture. If it is too low for you
 boot with 'dma_debug_entries=<your_desired_number>' to overwrite the
 architectural default.
+
+void debug_dmap_mapping_error(struct device *dev, dma_addr_t dma_addr);
+
+dma-debug interface debug_dma_mapping_error() to debug drivers that fail
+to check dma mapping errors on addresses returned by dma_map_single() and
+dma_map_page() interfaces. This interface clears a flag set by
+debug_dma_map_page() to indicate that dma_mapping_error() has been called by
+the driver. When driver does unmap, debug_dma_unmap() checks the flag and if
+this flag is still set, prints warning message that includes call trace that
+leads up to the unmap. This interface can be called from dma_mapping_error()
+routines to enable dma mapping error check debugging.
+
+1
arch/arm/include/asm/dma-mapping.h
···
  */
 static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
 {
+	debug_dma_mapping_error(dev, dma_addr);
 	return dma_addr == DMA_ERROR_CODE;
 }
 
+1 -1
arch/arm/mach-omap2/devices.c
···
 };
 
 static struct omap_iommu_arch_data omap3_isp_iommu = {
-	.name = "isp",
+	.name = "mmu_isp",
 };
 
 int omap3_init_camera(struct isp_platform_data *pdata)
+38 -131
arch/arm/mach-omap2/omap-iommu.c
···
 
 #include <linux/module.h>
 #include <linux/platform_device.h>
+#include <linux/err.h>
+#include <linux/slab.h>
 
 #include <linux/platform_data/iommu-omap.h>
+#include <plat/omap_hwmod.h>
+#include <plat/omap_device.h>
 
-#include "soc.h"
-#include "common.h"
+static int __init omap_iommu_dev_init(struct omap_hwmod *oh, void *unused)
+{
+	struct platform_device *pdev;
+	struct iommu_platform_data *pdata;
+	struct omap_mmu_dev_attr *a = (struct omap_mmu_dev_attr *)oh->dev_attr;
+	static int i;
 
-struct iommu_device {
-	resource_size_t base;
-	int irq;
-	struct iommu_platform_data pdata;
-	struct resource res[2];
-};
-static struct iommu_device *devices;
-static int num_iommu_devices;
+	pdata = kzalloc(sizeof(*pdata), GFP_KERNEL);
+	if (!pdata)
+		return -ENOMEM;
 
-#ifdef CONFIG_ARCH_OMAP3
-static struct iommu_device omap3_devices[] = {
-	{
-		.base = 0x480bd400,
-		.irq = 24 + OMAP_INTC_START,
-		.pdata = {
-			.name = "isp",
-			.nr_tlb_entries = 8,
-			.clk_name = "cam_ick",
-			.da_start = 0x0,
-			.da_end = 0xFFFFF000,
-		},
-	},
-#if defined(CONFIG_OMAP_IOMMU_IVA2)
-	{
-		.base = 0x5d000000,
-		.irq = 28 + OMAP_INTC_START,
-		.pdata = {
-			.name = "iva2",
-			.nr_tlb_entries = 32,
-			.clk_name = "iva2_ck",
-			.da_start = 0x11000000,
-			.da_end = 0xFFFFF000,
-		},
-	},
-#endif
-};
-#define NR_OMAP3_IOMMU_DEVICES ARRAY_SIZE(omap3_devices)
-static struct platform_device *omap3_iommu_pdev[NR_OMAP3_IOMMU_DEVICES];
-#else
-#define omap3_devices		NULL
-#define NR_OMAP3_IOMMU_DEVICES	0
-#define omap3_iommu_pdev	NULL
-#endif
+	pdata->name = oh->name;
+	pdata->nr_tlb_entries = a->nr_tlb_entries;
+	pdata->da_start = a->da_start;
+	pdata->da_end = a->da_end;
 
-#ifdef CONFIG_ARCH_OMAP4
-static struct iommu_device omap4_devices[] = {
-	{
-		.base = OMAP4_MMU1_BASE,
-		.irq = 100 + OMAP44XX_IRQ_GIC_START,
-		.pdata = {
-			.name = "ducati",
-			.nr_tlb_entries = 32,
-			.clk_name = "ipu_fck",
-			.da_start = 0x0,
-			.da_end = 0xFFFFF000,
-		},
-	},
-	{
-		.base = OMAP4_MMU2_BASE,
-		.irq = 28 + OMAP44XX_IRQ_GIC_START,
-		.pdata = {
-			.name = "tesla",
-			.nr_tlb_entries = 32,
-			.clk_name = "dsp_fck",
-			.da_start = 0x0,
-			.da_end = 0xFFFFF000,
-		},
-	},
-};
-#define NR_OMAP4_IOMMU_DEVICES ARRAY_SIZE(omap4_devices)
-static struct platform_device *omap4_iommu_pdev[NR_OMAP4_IOMMU_DEVICES];
-#else
-#define omap4_devices		NULL
-#define NR_OMAP4_IOMMU_DEVICES	0
-#define omap4_iommu_pdev	NULL
-#endif
+	if (oh->rst_lines_cnt == 1) {
+		pdata->reset_name = oh->rst_lines->name;
+		pdata->assert_reset = omap_device_assert_hardreset;
+		pdata->deassert_reset = omap_device_deassert_hardreset;
+	}
 
-static struct platform_device **omap_iommu_pdev;
+	pdev = omap_device_build("omap-iommu", i, oh, pdata, sizeof(*pdata),
+				 NULL, 0, 0);
+
+	kfree(pdata);
+
+	if (IS_ERR(pdev)) {
+		pr_err("%s: device build err: %ld\n", __func__, PTR_ERR(pdev));
+		return PTR_ERR(pdev);
+	}
+
+	i++;
+
+	return 0;
+}
 
 static int __init omap_iommu_init(void)
 {
-	int i, err;
-	struct resource res[] = {
-		{ .flags = IORESOURCE_MEM },
-		{ .flags = IORESOURCE_IRQ },
-	};
-
-	if (cpu_is_omap34xx()) {
-		devices = omap3_devices;
-		omap_iommu_pdev = omap3_iommu_pdev;
-		num_iommu_devices = NR_OMAP3_IOMMU_DEVICES;
-	} else if (cpu_is_omap44xx()) {
-		devices = omap4_devices;
-		omap_iommu_pdev = omap4_iommu_pdev;
-		num_iommu_devices = NR_OMAP4_IOMMU_DEVICES;
-	} else
-		return -ENODEV;
-
-	for (i = 0; i < num_iommu_devices; i++) {
-		struct platform_device *pdev;
-		const struct iommu_device *d = &devices[i];
-
-		pdev = platform_device_alloc("omap-iommu", i);
-		if (!pdev) {
-			err = -ENOMEM;
-			goto err_out;
-		}
-
-		res[0].start = d->base;
-		res[0].end = d->base + MMU_REG_SIZE - 1;
-		res[1].start = res[1].end = d->irq;
-
-		err = platform_device_add_resources(pdev, res,
-						    ARRAY_SIZE(res));
-		if (err)
-			goto err_out;
-		err = platform_device_add_data(pdev, &d->pdata,
-					       sizeof(d->pdata));
-		if (err)
-			goto err_out;
-		err = platform_device_add(pdev);
-		if (err)
-			goto err_out;
-		omap_iommu_pdev[i] = pdev;
-	}
-	return 0;
-
-err_out:
-	while (i--)
-		platform_device_put(omap_iommu_pdev[i]);
-	return err;
+	return omap_hwmod_for_each_by_class("mmu", omap_iommu_dev_init, NULL);
 }
 /* must be ready before omap3isp is probed */
 subsys_initcall(omap_iommu_init);
 
 static void __exit omap_iommu_exit(void)
 {
-	int i;
-
-	for (i = 0; i < num_iommu_devices; i++)
-		platform_device_unregister(omap_iommu_pdev[i]);
+	/* Do nothing */
 }
 module_exit(omap_iommu_exit);
 
+2 -2
arch/arm/mach-omap2/omap_hwmod_44xx_data.c
···
 	.mpu_irqs	= omap44xx_dsp_irqs,
 	.rst_lines	= omap44xx_dsp_resets,
 	.rst_lines_cnt	= ARRAY_SIZE(omap44xx_dsp_resets),
-	.main_clk	= "dsp_fck",
+	.main_clk	= "dpll_iva_m4x2_ck",
 	.prcm = {
 		.omap4 = {
 			.clkctrl_offs = OMAP4_CM_TESLA_TESLA_CLKCTRL_OFFSET,
···
 	.mpu_irqs	= omap44xx_ipu_irqs,
 	.rst_lines	= omap44xx_ipu_resets,
 	.rst_lines_cnt	= ARRAY_SIZE(omap44xx_ipu_resets),
-	.main_clk	= "ipu_fck",
+	.main_clk	= "ducati_clk_mux_ck",
 	.prcm = {
 		.omap4 = {
 			.clkctrl_offs = OMAP4_CM_DUCATI_DUCATI_CLKCTRL_OFFSET,
+1
arch/arm64/include/asm/dma-mapping.h
···
 static inline int dma_mapping_error(struct device *dev, dma_addr_t dev_addr)
 {
 	struct dma_map_ops *ops = get_dma_ops(dev);
+	debug_dma_mapping_error(dev, dev_addr);
 	return ops->mapping_error(dev, dev_addr);
 }
 
+1
arch/c6x/include/asm/dma-mapping.h
···
  */
 static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
 {
+	debug_dma_mapping_error(dev, dma_addr);
 	return dma_addr == ~0;
 }
 
+1
arch/ia64/include/asm/dma-mapping.h
···
 static inline int dma_mapping_error(struct device *dev, dma_addr_t daddr)
 {
 	struct dma_map_ops *ops = platform_dma_get_ops(dev);
+	debug_dma_mapping_error(dev, daddr);
 	return ops->mapping_error(dev, daddr);
 }
 
+2
arch/microblaze/include/asm/dma-mapping.h
···
 static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
 {
 	struct dma_map_ops *ops = get_dma_ops(dev);
+
+	debug_dma_mapping_error(dev, dma_addr);
 	if (ops->mapping_error)
 		return ops->mapping_error(dev, dma_addr);
 
+2
arch/mips/include/asm/dma-mapping.h
···
 static inline int dma_mapping_error(struct device *dev, u64 mask)
 {
 	struct dma_map_ops *ops = get_dma_ops(dev);
+
+	debug_dma_mapping_error(dev, mask);
 	return ops->mapping_error(dev, mask);
 }
 
+1
arch/powerpc/include/asm/dma-mapping.h
···
 {
 	struct dma_map_ops *dma_ops = get_dma_ops(dev);
 
+	debug_dma_mapping_error(dev, dma_addr);
 	if (dma_ops->mapping_error)
 		return dma_ops->mapping_error(dev, dma_addr);
 
+1
arch/sh/include/asm/dma-mapping.h
···
 {
 	struct dma_map_ops *ops = get_dma_ops(dev);
 
+	debug_dma_mapping_error(dev, dma_addr);
 	if (ops->mapping_error)
 		return ops->mapping_error(dev, dma_addr);
 
+1
arch/sparc/include/asm/dma-mapping.h
···
 
 static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
 {
+	debug_dma_mapping_error(dev, dma_addr);
 	return (dma_addr == DMA_ERROR_CODE);
 }
 
+1
arch/tile/include/asm/dma-mapping.h
···
 static inline int
 dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
 {
+	debug_dma_mapping_error(dev, dma_addr);
 	return get_dma_ops(dev)->mapping_error(dev, dma_addr);
 }
 
+1
arch/x86/include/asm/dma-mapping.h
···
 static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
 {
 	struct dma_map_ops *ops = get_dma_ops(dev);
+	debug_dma_mapping_error(dev, dma_addr);
 	if (ops->mapping_error)
 		return ops->mapping_error(dev, dma_addr);
 
+155 -65
drivers/iommu/amd_iommu.c
···
  * physically contiguous memory regions it is mapping into page sizes
  * that we support.
  *
- * Traditionally the IOMMU core just handed us the mappings directly,
- * after making sure the size is an order of a 4KiB page and that the
- * mapping has natural alignment.
- *
- * To retain this behavior, we currently advertise that we support
- * all page sizes that are an order of 4KiB.
- *
- * If at some point we'd like to utilize the IOMMU core's new behavior,
- * we could change this to advertise the real page sizes we support.
+ * 512GB Pages are not supported due to a hardware bug
  */
-#define AMD_IOMMU_PGSIZES	(~0xFFFUL)
+#define AMD_IOMMU_PGSIZES	((~0xFFFUL) & ~(2ULL << 38))
 
 static DEFINE_RWLOCK(amd_iommu_devtable_lock);
 
···
 	spin_lock_irqsave(&dev_data_list_lock, flags);
 	list_del(&dev_data->dev_data_list);
 	spin_unlock_irqrestore(&dev_data_list_lock, flags);
+
+	if (dev_data->group)
+		iommu_group_put(dev_data->group);
 
 	kfree(dev_data);
 }
···
 	*from = to;
 }
 
+static struct pci_bus *find_hosted_bus(struct pci_bus *bus)
+{
+	while (!bus->self) {
+		if (!pci_is_root_bus(bus))
+			bus = bus->parent;
+		else
+			return ERR_PTR(-ENODEV);
+	}
+
+	return bus;
+}
+
 #define REQ_ACS_FLAGS	(PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF)
+
+static struct pci_dev *get_isolation_root(struct pci_dev *pdev)
+{
+	struct pci_dev *dma_pdev = pdev;
+
+	/* Account for quirked devices */
+	swap_pci_ref(&dma_pdev, pci_get_dma_source(dma_pdev));
+
+	/*
+	 * If it's a multifunction device that does not support our
+	 * required ACS flags, add to the same group as function 0.
+	 */
+	if (dma_pdev->multifunction &&
+	    !pci_acs_enabled(dma_pdev, REQ_ACS_FLAGS))
+		swap_pci_ref(&dma_pdev,
+			     pci_get_slot(dma_pdev->bus,
+					  PCI_DEVFN(PCI_SLOT(dma_pdev->devfn),
+					  0)));
+
+	/*
+	 * Devices on the root bus go through the iommu.  If that's not us,
+	 * find the next upstream device and test ACS up to the root bus.
+	 * Finding the next device may require skipping virtual buses.
+	 */
+	while (!pci_is_root_bus(dma_pdev->bus)) {
+		struct pci_bus *bus = find_hosted_bus(dma_pdev->bus);
+		if (IS_ERR(bus))
+			break;
+
+		if (pci_acs_path_enabled(bus->self, NULL, REQ_ACS_FLAGS))
+			break;
+
+		swap_pci_ref(&dma_pdev, pci_dev_get(bus->self));
+	}
+
+	return dma_pdev;
+}
+
+static int use_pdev_iommu_group(struct pci_dev *pdev, struct device *dev)
+{
+	struct iommu_group *group = iommu_group_get(&pdev->dev);
+	int ret;
+
+	if (!group) {
+		group = iommu_group_alloc();
+		if (IS_ERR(group))
+			return PTR_ERR(group);
+
+		WARN_ON(&pdev->dev != dev);
+	}
+
+	ret = iommu_group_add_device(group, dev);
+	iommu_group_put(group);
+	return ret;
+}
+
+static int use_dev_data_iommu_group(struct iommu_dev_data *dev_data,
+				    struct device *dev)
+{
+	if (!dev_data->group) {
+		struct iommu_group *group = iommu_group_alloc();
+		if (IS_ERR(group))
+			return PTR_ERR(group);
+
+		dev_data->group = group;
+	}
+
+	return iommu_group_add_device(dev_data->group, dev);
+}
+
+static int init_iommu_group(struct device *dev)
+{
+	struct iommu_dev_data *dev_data;
+	struct iommu_group *group;
+	struct pci_dev *dma_pdev;
+	int ret;
+
+	group = iommu_group_get(dev);
+	if (group) {
+		iommu_group_put(group);
+		return 0;
+	}
+
+	dev_data = find_dev_data(get_device_id(dev));
+	if (!dev_data)
+		return -ENOMEM;
+
+	if (dev_data->alias_data) {
+		u16 alias;
+		struct pci_bus *bus;
+
+		if (dev_data->alias_data->group)
+			goto use_group;
+
+		/*
+		 * If the alias device exists, it's effectively just a first
+		 * level quirk for finding the DMA source.
+		 */
+		alias = amd_iommu_alias_table[dev_data->devid];
+		dma_pdev = pci_get_bus_and_slot(alias >> 8, alias & 0xff);
+		if (dma_pdev) {
+			dma_pdev = get_isolation_root(dma_pdev);
+			goto use_pdev;
+		}
+
+		/*
+		 * If the alias is virtual, try to find a parent device
+		 * and test whether the IOMMU group is actualy rooted above
+		 * the alias.  Be careful to also test the parent device if
+		 * we think the alias is the root of the group.
+		 */
+		bus = pci_find_bus(0, alias >> 8);
+		if (!bus)
+			goto use_group;
+
+		bus = find_hosted_bus(bus);
+		if (IS_ERR(bus) || !bus->self)
+			goto use_group;
+
+		dma_pdev = get_isolation_root(pci_dev_get(bus->self));
+		if (dma_pdev != bus->self || (dma_pdev->multifunction &&
+		    !pci_acs_enabled(dma_pdev, REQ_ACS_FLAGS)))
+			goto use_pdev;
+
+		pci_dev_put(dma_pdev);
+		goto use_group;
+	}
+
+	dma_pdev = get_isolation_root(pci_dev_get(to_pci_dev(dev)));
+use_pdev:
+	ret = use_pdev_iommu_group(dma_pdev, dev);
+	pci_dev_put(dma_pdev);
+	return ret;
+use_group:
+	return use_dev_data_iommu_group(dev_data->alias_data, dev);
+}
 
 static int iommu_init_device(struct device *dev)
 {
-	struct pci_dev *dma_pdev = NULL, *pdev = to_pci_dev(dev);
+	struct pci_dev *pdev = to_pci_dev(dev);
 	struct iommu_dev_data *dev_data;
-	struct iommu_group *group;
 	u16 alias;
 	int ret;
 
···
 			return -ENOTSUPP;
 		}
 		dev_data->alias_data = alias_data;
-
-		dma_pdev = pci_get_bus_and_slot(alias >> 8, alias & 0xff);
 	}
 
-	if (dma_pdev == NULL)
-		dma_pdev = pci_dev_get(pdev);
-
-	/* Account for quirked devices */
-	swap_pci_ref(&dma_pdev, pci_get_dma_source(dma_pdev));
-
-	/*
-	 * If it's a multifunction device that does not support our
-	 * required ACS flags, add to the same group as function 0.
-	 */
-	if (dma_pdev->multifunction &&
-	    !pci_acs_enabled(dma_pdev, REQ_ACS_FLAGS))
-		swap_pci_ref(&dma_pdev,
-			     pci_get_slot(dma_pdev->bus,
-					  PCI_DEVFN(PCI_SLOT(dma_pdev->devfn),
-					  0)));
-
-	/*
-	 * Devices on the root bus go through the iommu.  If that's not us,
-	 * find the next upstream device and test ACS up to the root bus.
-	 * Finding the next device may require skipping virtual buses.
-	 */
-	while (!pci_is_root_bus(dma_pdev->bus)) {
-		struct pci_bus *bus = dma_pdev->bus;
-
-		while (!bus->self) {
-			if (!pci_is_root_bus(bus))
-				bus = bus->parent;
-			else
-				goto root_bus;
-		}
-
-		if (pci_acs_path_enabled(bus->self, NULL, REQ_ACS_FLAGS))
-			break;
-
-		swap_pci_ref(&dma_pdev, pci_dev_get(bus->self));
-	}
-
-root_bus:
-	group = iommu_group_get(&dma_pdev->dev);
-	pci_dev_put(dma_pdev);
-	if (!group) {
-		group = iommu_group_alloc();
-		if (IS_ERR(group))
-			return PTR_ERR(group);
-	}
-
-	ret = iommu_group_add_device(group, dev);
-
-	iommu_group_put(group);
-
+	ret = init_iommu_group(dev);
 	if (ret)
 		return ret;
 
+1
drivers/iommu/amd_iommu_types.h
···
 	struct iommu_dev_data *alias_data;/* The alias dev_data */
 	struct protection_domain *domain; /* Domain the device is bound to */
 	atomic_t bind;			  /* Domain attach reference count */
+	struct iommu_group *group;	  /* IOMMU group for virtual aliases */
 	u16 devid;			  /* PCI Device ID */
 	bool iommu_v2;			  /* Device can make use of IOMMUv2 */
 	bool passthrough;		  /* Default for device is pt_domain */
+31
drivers/iommu/intel-iommu.c
···
 	return 0;
 }
 
+static bool device_has_rmrr(struct pci_dev *dev)
+{
+	struct dmar_rmrr_unit *rmrr;
+	int i;
+
+	for_each_rmrr_units(rmrr) {
+		for (i = 0; i < rmrr->devices_cnt; i++) {
+			/*
+			 * Return TRUE if this RMRR contains the device that
+			 * is passed in.
+			 */
+			if (rmrr->devices[i] == dev)
+				return true;
+		}
+	}
+	return false;
+}
+
 static int iommu_should_identity_map(struct pci_dev *pdev, int startup)
 {
+
+	/*
+	 * We want to prevent any device associated with an RMRR from
+	 * getting placed into the SI Domain. This is done because
+	 * problems exist when devices are moved in and out of domains
+	 * and their respective RMRR info is lost. We exempt USB devices
+	 * from this process due to their usage of RMRRs that are known
+	 * to not be needed after BIOS hand-off to OS.
+	 */
+	if (device_has_rmrr(pdev) &&
+	    (pdev->class >> 8) != PCI_CLASS_SERIAL_USB)
+		return 0;
+
 	if ((iommu_identity_mapping & IDENTMAP_AZALIA) && IS_AZALIA(pdev))
 		return 1;
 
+37 -31
drivers/iommu/omap-iommu.c
···
 #include <linux/slab.h>
 #include <linux/interrupt.h>
 #include <linux/ioport.h>
-#include <linux/clk.h>
 #include <linux/platform_device.h>
 #include <linux/iommu.h>
 #include <linux/omap-iommu.h>
 #include <linux/mutex.h>
 #include <linux/spinlock.h>
 #include <linux/io.h>
+#include <linux/pm_runtime.h>
 
 #include <asm/cacheflush.h>
 
···
 static int iommu_enable(struct omap_iommu *obj)
 {
 	int err;
+	struct platform_device *pdev = to_platform_device(obj->dev);
+	struct iommu_platform_data *pdata = pdev->dev.platform_data;
 
-	if (!obj)
+	if (!obj || !pdata)
 		return -EINVAL;
 
 	if (!arch_iommu)
 		return -ENODEV;
 
-	clk_enable(obj->clk);
+	if (pdata->deassert_reset) {
+		err = pdata->deassert_reset(pdev, pdata->reset_name);
+		if (err) {
+			dev_err(obj->dev, "deassert_reset failed: %d\n", err);
+			return err;
+		}
+	}
+
+	pm_runtime_get_sync(obj->dev);
 
 	err = arch_iommu->enable(obj);
 
-	clk_disable(obj->clk);
 	return err;
 }
 
 static void iommu_disable(struct omap_iommu *obj)
 {
-	if (!obj)
-		return;
+	struct platform_device *pdev = to_platform_device(obj->dev);
+	struct iommu_platform_data *pdata = pdev->dev.platform_data;
 
-	clk_enable(obj->clk);
+	if (!obj || !pdata)
+		return;
 
 	arch_iommu->disable(obj);
 
-	clk_disable(obj->clk);
+	pm_runtime_put_sync(obj->dev);
+
+	if (pdata->assert_reset)
+		pdata->assert_reset(pdev, pdata->reset_name);
 }
 
 /*
···
 	if (!obj || !obj->nr_tlb_entries || !e)
 		return -EINVAL;
 
-	clk_enable(obj->clk);
+	pm_runtime_get_sync(obj->dev);
 
 	iotlb_lock_get(obj, &l);
 	if (l.base == obj->nr_tlb_entries) {
···
 
 	cr = iotlb_alloc_cr(obj, e);
 	if (IS_ERR(cr)) {
-		clk_disable(obj->clk);
+		pm_runtime_put_sync(obj->dev);
 		return PTR_ERR(cr);
 	}
 
···
 	l.vict = l.base;
 	iotlb_lock_set(obj, &l);
 out:
-	clk_disable(obj->clk);
+	pm_runtime_put_sync(obj->dev);
 	return err;
 }
 
···
 	int i;
 	struct cr_regs cr;
 
-	clk_enable(obj->clk);
+	pm_runtime_get_sync(obj->dev);
 
 	for_each_iotlb_cr(obj, obj->nr_tlb_entries, i, cr) {
 		u32 start;
···
 			iommu_write_reg(obj, 1, MMU_FLUSH_ENTRY);
 		}
 	}
-	clk_disable(obj->clk);
+	pm_runtime_put_sync(obj->dev);
 
 	if (i == obj->nr_tlb_entries)
 		dev_dbg(obj->dev, "%s: no page for %08x\n", __func__, da);
···
 {
 	struct iotlb_lock l;
 
-	clk_enable(obj->clk);
+	pm_runtime_get_sync(obj->dev);
 
 	l.base = 0;
 	l.vict = 0;
···
 
 	iommu_write_reg(obj, 1, MMU_GFLUSH);
 
-	clk_disable(obj->clk);
+	pm_runtime_put_sync(obj->dev);
 }
 
 #if defined(CONFIG_OMAP_IOMMU_DEBUG) || defined(CONFIG_OMAP_IOMMU_DEBUG_MODULE)
···
 	if (!obj || !buf)
 		return -EINVAL;
 
-	clk_enable(obj->clk);
+	pm_runtime_get_sync(obj->dev);
 
 	bytes = arch_iommu->dump_ctx(obj, buf, bytes);
 
-	clk_disable(obj->clk);
+	pm_runtime_put_sync(obj->dev);
 
 	return bytes;
 }
···
 	struct cr_regs tmp;
 	struct cr_regs *p = crs;
 
-	clk_enable(obj->clk);
+	pm_runtime_get_sync(obj->dev);
 	iotlb_lock_get(obj, &saved);
 
 	for_each_iotlb_cr(obj, num, i, tmp) {
···
 	}
 
 	iotlb_lock_set(obj, &saved);
-	clk_disable(obj->clk);
+	pm_runtime_put_sync(obj->dev);
 
 	return p - crs;
 }
···
 	if (!obj->refcount)
 		return IRQ_NONE;
 
-	clk_enable(obj->clk);
 	errs = iommu_report_fault(obj, &da);
-	clk_disable(obj->clk);
 	if (errs == 0)
 		return IRQ_HANDLED;
 
···
 	struct resource *res;
 	struct iommu_platform_data *pdata = pdev->dev.platform_data;
 
-	if (pdev->num_resources != 2)
-		return -EINVAL;
-
 	obj = kzalloc(sizeof(*obj) + MMU_REG_SIZE, GFP_KERNEL);
 	if (!obj)
 		return -ENOMEM;
-
-	obj->clk = clk_get(&pdev->dev, pdata->clk_name);
-	if (IS_ERR(obj->clk))
-		goto err_clk;
 
 	obj->nr_tlb_entries = pdata->nr_tlb_entries;
 	obj->name = pdata->name;
···
 		goto err_irq;
 	platform_set_drvdata(pdev, obj);
 
+	pm_runtime_irq_safe(obj->dev);
+	pm_runtime_enable(obj->dev);
+
 	dev_info(&pdev->dev, "%s registered\n", obj->name);
 	return 0;
 
···
 err_ioremap:
 	release_mem_region(res->start, resource_size(res));
 err_mem:
-	clk_put(obj->clk);
-err_clk:
 	kfree(obj);
 	return err;
 }
···
 	release_mem_region(res->start, resource_size(res));
 	iounmap(obj->regbase);
 
-	clk_put(obj->clk);
+	pm_runtime_disable(obj->dev);
+
 	dev_info(&pdev->dev, "%s removed\n", obj->name);
 	kfree(obj);
 	return 0;
-3
drivers/iommu/omap-iommu.h
···
 struct omap_iommu {
 	const char	*name;
 	struct module	*owner;
-	struct clk	*clk;
 	void __iomem	*regbase;
 	struct device	*dev;
 	void		*isr_priv;
···
  * MMU Register offsets
  */
 #define MMU_REVISION		0x00
-#define MMU_SYSCONFIG		0x10
-#define MMU_SYSSTATUS		0x14
 #define MMU_IRQSTATUS		0x18
 #define MMU_IRQENABLE		0x1c
 #define MMU_WALKING_ST		0x40
drivers/iommu/omap-iommu2.c (-36)

···
  */
 #define IOMMU_ARCH_VERSION	0x00000011
 
-/* SYSCONF */
-#define MMU_SYS_IDLE_SHIFT	3
-#define MMU_SYS_IDLE_FORCE	(0 << MMU_SYS_IDLE_SHIFT)
-#define MMU_SYS_IDLE_NONE	(1 << MMU_SYS_IDLE_SHIFT)
-#define MMU_SYS_IDLE_SMART	(2 << MMU_SYS_IDLE_SHIFT)
-#define MMU_SYS_IDLE_MASK	(3 << MMU_SYS_IDLE_SHIFT)
-
-#define MMU_SYS_SOFTRESET	(1 << 1)
-#define MMU_SYS_AUTOIDLE	1
-
-/* SYSSTATUS */
-#define MMU_SYS_RESETDONE	1
-
 /* IRQSTATUS & IRQENABLE */
 #define MMU_IRQ_MULTIHITFAULT	(1 << 4)
 #define MMU_IRQ_TABLEWALKFAULT	(1 << 3)
···
 static int omap2_iommu_enable(struct omap_iommu *obj)
 {
 	u32 l, pa;
-	unsigned long timeout;
 
 	if (!obj->iopgd || !IS_ALIGNED((u32)obj->iopgd, SZ_16K))
 		return -EINVAL;
···
 	if (!IS_ALIGNED(pa, SZ_16K))
 		return -EINVAL;
 
-	iommu_write_reg(obj, MMU_SYS_SOFTRESET, MMU_SYSCONFIG);
-
-	timeout = jiffies + msecs_to_jiffies(20);
-	do {
-		l = iommu_read_reg(obj, MMU_SYSSTATUS);
-		if (l & MMU_SYS_RESETDONE)
-			break;
-	} while (!time_after(jiffies, timeout));
-
-	if (!(l & MMU_SYS_RESETDONE)) {
-		dev_err(obj->dev, "can't take mmu out of reset\n");
-		return -ENODEV;
-	}
-
 	l = iommu_read_reg(obj, MMU_REVISION);
 	dev_info(obj->dev, "%s: version %d.%d\n", obj->name,
 		 (l >> 4) & 0xf, l & 0xf);
-
-	l = iommu_read_reg(obj, MMU_SYSCONFIG);
-	l &= ~MMU_SYS_IDLE_MASK;
-	l |= (MMU_SYS_IDLE_SMART | MMU_SYS_AUTOIDLE);
-	iommu_write_reg(obj, l, MMU_SYSCONFIG);
 
 	iommu_write_reg(obj, pa, MMU_TTB);
···
 
 	l &= ~MMU_CNTL_MASK;
 	iommu_write_reg(obj, l, MMU_CNTL);
-	iommu_write_reg(obj, MMU_SYS_IDLE_FORCE, MMU_SYSCONFIG);
 
 	dev_dbg(obj->dev, "%s is shutting down\n", obj->name);
 }
···
 	char *p = buf;
 
 	pr_reg(REVISION);
-	pr_reg(SYSCONFIG);
-	pr_reg(SYSSTATUS);
 	pr_reg(IRQSTATUS);
 	pr_reg(IRQENABLE);
 	pr_reg(WALKING_ST);
drivers/iommu/tegra-gart.c (+1 -1)

···
 	do_gart_setup(gart, NULL);
 
 	gart_handle = gart;
+	bus_set_iommu(&platform_bus_type, &gart_iommu_ops);
 	return 0;
 
 fail:
···
 
 static int __devinit tegra_gart_init(void)
 {
-	bus_set_iommu(&platform_bus_type, &gart_iommu_ops);
 	return platform_driver_register(&tegra_gart_driver);
 }
 
drivers/iommu/tegra-smmu.c (+2 -4)

···
 	*pte = _PTE_VACANT(iova);
 	FLUSH_CPU_DCACHE(pte, page, sizeof(*pte));
 	flush_ptc_and_tlb(as->smmu, as, iova, pte, page, 0);
-	if (!--(*count)) {
+	if (!--(*count))
 		free_ptbl(as, iova);
-		smmu_flush_regs(as->smmu, 0);
-	}
 }
 
 static void __smmu_iommu_map_pfn(struct smmu_as *as, dma_addr_t iova,
···
 
 	smmu_debugfs_create(smmu);
 	smmu_handle = smmu;
+	bus_set_iommu(&platform_bus_type, &smmu_iommu_ops);
 	return 0;
 }
 
···
 
 static int __devinit tegra_smmu_init(void)
 {
-	bus_set_iommu(&platform_bus_type, &smmu_iommu_ops);
 	return platform_driver_register(&tegra_smmu_driver);
 }
 
include/linux/dma-debug.h (+7)

···
 				 int direction, dma_addr_t dma_addr,
 				 bool map_single);
 
+extern void debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr);
+
 extern void debug_dma_unmap_page(struct device *dev, dma_addr_t addr,
 				 size_t size, int direction, bool map_single);
 
···
 					    size_t offset, size_t size,
 					    int direction, dma_addr_t dma_addr,
 					    bool map_single)
 {
 }
 
+static inline void debug_dma_mapping_error(struct device *dev,
+					   dma_addr_t dma_addr)
+{
+}
+
include/linux/platform_data/iommu-omap.h (+7 -2)

···
  * published by the Free Software Foundation.
  */
 
+#include <linux/platform_device.h>
+
 #define MMU_REG_SIZE		256
 
 /**
···
 
 struct iommu_platform_data {
 	const char *name;
-	const char *clk_name;
-	const int nr_tlb_entries;
+	const char *reset_name;
+	int nr_tlb_entries;
 	u32 da_start;
 	u32 da_end;
+
+	int (*assert_reset)(struct platform_device *pdev, const char *name);
+	int (*deassert_reset)(struct platform_device *pdev, const char *name);
 };
lib/dma-debug.c (+57 -9)

···
 	dma_debug_coherent,
 };
 
+enum map_err_types {
+	MAP_ERR_CHECK_NOT_APPLICABLE,
+	MAP_ERR_NOT_CHECKED,
+	MAP_ERR_CHECKED,
+};
+
 #define DMA_DEBUG_STACKTRACE_ENTRIES 5
 
 struct dma_debug_entry {
···
 	int		 direction;
 	int		 sg_call_ents;
 	int		 sg_mapped_ents;
+	enum map_err_types  map_err_type;
 #ifdef CONFIG_STACKTRACE
 	struct		 stack_trace	stacktrace;
 	unsigned long	 st_entries[DMA_DEBUG_STACKTRACE_ENTRIES];
···
 static struct device_driver *current_driver    __read_mostly;
 
 static DEFINE_RWLOCK(driver_name_lock);
+
+static const char *const maperr2str[] = {
+	[MAP_ERR_CHECK_NOT_APPLICABLE] = "dma map error check not applicable",
+	[MAP_ERR_NOT_CHECKED] = "dma map error not checked",
+	[MAP_ERR_CHECKED] = "dma map error checked",
+};
 
 static const char *type2name[4] = { "single", "page",
 				    "scather-gather", "coherent" };
···
 	list_for_each_entry(entry, &bucket->list, list) {
 		if (!dev || dev == entry->dev) {
 			dev_info(entry->dev,
-				 "%s idx %d P=%Lx D=%Lx L=%Lx %s\n",
+				 "%s idx %d P=%Lx D=%Lx L=%Lx %s %s\n",
 				 type2name[entry->type], idx,
 				 (unsigned long long)entry->paddr,
 				 entry->dev_addr, entry->size,
-				 dir2name[entry->direction]);
+				 dir2name[entry->direction],
+				 maperr2str[entry->map_err_type]);
 		}
 	}
 
···
 	struct hash_bucket *bucket;
 	unsigned long flags;
 
-	if (dma_mapping_error(ref->dev, ref->dev_addr)) {
-		err_printk(ref->dev, NULL, "DMA-API: device driver tries "
-			   "to free an invalid DMA memory address\n");
-		return;
-	}
-
 	bucket = get_hash_bucket(ref, &flags);
 	entry = bucket_find_exact(bucket, ref);
 
 	if (!entry) {
+		if (dma_mapping_error(ref->dev, ref->dev_addr)) {
+			err_printk(ref->dev, NULL,
+				   "DMA-API: device driver tries "
+				   "to free an invalid DMA memory address\n");
+			return;
+		}
 		err_printk(ref->dev, NULL, "DMA-API: device driver tries "
 			   "to free DMA memory it has not allocated "
 			   "[device address=0x%016llx] [size=%llu bytes]\n",
···
 			   ref->dev_addr, ref->size,
 			   dir2name[entry->direction],
 			   dir2name[ref->direction]);
+	}
+
+	if (entry->map_err_type == MAP_ERR_NOT_CHECKED) {
+		err_printk(ref->dev, entry,
+			   "DMA-API: device driver failed to check map error"
+			   "[device address=0x%016llx] [size=%llu bytes] "
+			   "[mapped as %s]",
+			   ref->dev_addr, ref->size,
+			   type2name[entry->type]);
 	}
 
 	hash_bucket_del(entry);
···
 	if (unlikely(global_disable))
 		return;
 
-	if (unlikely(dma_mapping_error(dev, dma_addr)))
+	if (dma_mapping_error(dev, dma_addr))
 		return;
 
 	entry = dma_entry_alloc();
···
 	entry->dev_addr  = dma_addr;
 	entry->size      = size;
 	entry->direction = direction;
+	entry->map_err_type = MAP_ERR_NOT_CHECKED;
 
 	if (map_single)
 		entry->type = dma_debug_single;
···
 	add_dma_entry(entry);
 }
 EXPORT_SYMBOL(debug_dma_map_page);
+
+void debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
+{
+	struct dma_debug_entry ref;
+	struct dma_debug_entry *entry;
+	struct hash_bucket *bucket;
+	unsigned long flags;
+
+	if (unlikely(global_disable))
+		return;
+
+	ref.dev = dev;
+	ref.dev_addr = dma_addr;
+	bucket = get_hash_bucket(&ref, &flags);
+	entry = bucket_find_exact(bucket, &ref);
+
+	if (!entry)
+		goto out;
+
+	entry->map_err_type = MAP_ERR_CHECKED;
+out:
+	put_hash_bucket(bucket, &flags);
+}
+EXPORT_SYMBOL(debug_dma_mapping_error);
 
 void debug_dma_unmap_page(struct device *dev, dma_addr_t addr,
 			  size_t size, int direction, bool map_single)