Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'iommu-updates-v4.20' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu

Pull IOMMU updates from Joerg Roedel:

- Debugfs support for the Intel VT-d driver.

When enabled, the driver exposes some of its internal data
structures to user space for debugging purposes.

- The ARM-SMMU driver now uses the generic deferred flushing and fast-path
iova allocation code.

This is expected to be a major performance improvement, as the new
allocation path scales much better.

- Support for r8a7744 in the Renesas iommu driver

- A couple of minor fixes and improvements all over the place
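The deferred-flushing ("lazy") mode mentioned above trades per-unmap TLB invalidations for batched ones, at the cost of a window of reduced isolation. A toy model makes the saving concrete; all `toy_*` names here are illustrative sketches, not kernel APIs:

```c
#include <assert.h>

/* Toy model of deferred ("lazy") TLB invalidation: freed IOVA ranges
 * are parked in a small queue and only released for reuse after one
 * batched TLB invalidation, instead of one synchronous TLBI per unmap.
 * Illustrative only -- not the kernel's flush-queue implementation. */
#define TOY_FQ_SIZE 8

struct toy_flush_queue {
	unsigned long pending[TOY_FQ_SIZE]; /* IOVAs awaiting invalidation */
	int n_pending;
	int tlb_flushes; /* how many (expensive) TLBIs were issued */
};

static void toy_fq_flush(struct toy_flush_queue *fq)
{
	/* One TLBI covers every queued range; the IOVAs may then be reused. */
	fq->tlb_flushes++;
	fq->n_pending = 0;
}

static void toy_fq_unmap(struct toy_flush_queue *fq, unsigned long iova)
{
	if (fq->n_pending == TOY_FQ_SIZE)
		toy_fq_flush(fq);
	fq->pending[fq->n_pending++] = iova;
}
```

In strict mode, 16 unmaps would mean 16 synchronous TLBIs; here the first full batch triggers a single TLBI and the remainder wait for the next batch (in the real code, also a timeout).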

* tag 'iommu-updates-v4.20' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (39 commits)
iommu/arm-smmu-v3: Remove unnecessary wrapper function
iommu/arm-smmu-v3: Add SPDX header
iommu/amd: Add default branch in amd_iommu_capable()
dt-bindings: iommu: ipmmu-vmsa: Add r8a7744 support
iommu/amd: Move iommu_init_pci() to .init section
iommu/arm-smmu: Support non-strict mode
iommu/io-pgtable-arm-v7s: Add support for non-strict mode
iommu/arm-smmu-v3: Add support for non-strict mode
iommu/io-pgtable-arm: Add support for non-strict mode
iommu: Add "iommu.strict" command line option
iommu/dma: Add support for non-strict mode
iommu/arm-smmu: Ensure that page-table updates are visible before TLBI
iommu/arm-smmu-v3: Implement flush_iotlb_all hook
iommu/arm-smmu-v3: Avoid back-to-back CMD_SYNC operations
iommu/arm-smmu-v3: Fix unexpected CMD_SYNC timeout
iommu/io-pgtable-arm: Fix race handling in split_blk_unmap()
iommu/arm-smmu-v3: Fix a couple of minor comment typos
iommu: Fix a typo
iommu: Remove .domain_{get,set}_windows
iommu: Tidy up window attributes
...

+986 -356
+12
Documentation/admin-guide/kernel-parameters.txt
··· 1759 1759 nobypass [PPC/POWERNV] 1760 1760 Disable IOMMU bypass, using IOMMU for PCI devices. 1761 1761 1762 + iommu.strict= [ARM64] Configure TLB invalidation behaviour 1763 + Format: { "0" | "1" } 1764 + 0 - Lazy mode. 1765 + Request that DMA unmap operations use deferred 1766 + invalidation of hardware TLBs, for increased 1767 + throughput at the cost of reduced device isolation. 1768 + Will fall back to strict mode if not supported by 1769 + the relevant IOMMU driver. 1770 + 1 - Strict mode (default). 1771 + DMA unmap operations invalidate IOMMU hardware TLBs 1772 + synchronously. 1773 + 1762 1774 iommu.passthrough= 1763 1775 [ARM64] Configure DMA to bypass the IOMMU by default. 1764 1776 Format: { "0" | "1" }
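The fallback behaviour documented above (a requested lazy mode reverts to strict when the IOMMU driver cannot support it) can be sketched as a small decision helper. This is an illustrative sketch, not the kernel's actual parameter parser:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Resolve the effective invalidation mode from an "iommu.strict="
 * value: "1" (or no option) means strict, "0" requests lazy mode,
 * which falls back to strict when the driver cannot defer
 * invalidations. Hypothetical helper for illustration. */
static bool resolve_iommu_strict(const char *opt, bool driver_supports_lazy)
{
	bool want_strict = true; /* strict is the default */

	if (opt && strcmp(opt, "0") == 0)
		want_strict = false;

	/* Lazy mode requested but unavailable: fall back to strict. */
	if (!want_strict && !driver_supports_lazy)
		want_strict = true;

	return want_strict;
}
```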
+1
Documentation/devicetree/bindings/iommu/renesas,ipmmu-vmsa.txt
··· 12 12 13 13 - "renesas,ipmmu-r8a73a4" for the R8A73A4 (R-Mobile APE6) IPMMU. 14 14 - "renesas,ipmmu-r8a7743" for the R8A7743 (RZ/G1M) IPMMU. 15 + - "renesas,ipmmu-r8a7744" for the R8A7744 (RZ/G1N) IPMMU. 15 16 - "renesas,ipmmu-r8a7745" for the R8A7745 (RZ/G1E) IPMMU. 16 17 - "renesas,ipmmu-r8a7790" for the R8A7790 (R-Car H2) IPMMU. 17 18 - "renesas,ipmmu-r8a7791" for the R8A7791 (R-Car M2-W) IPMMU.
+39
Documentation/devicetree/bindings/misc/fsl,qoriq-mc.txt
··· 9 9 such as network interfaces, crypto accelerator instances, L2 switches, 10 10 etc. 11 11 12 + For an overview of the DPAA2 architecture and fsl-mc bus see: 13 + Documentation/networking/dpaa2/overview.rst 14 + 15 + As described in the above overview, all DPAA2 objects in a DPRC share the 16 + same hardware "isolation context" and a 10-bit value called an ICID 17 + (isolation context id) is expressed by the hardware to identify 18 + the requester. 19 + 20 + The generic 'iommus' property is insufficient to describe the relationship 21 + between ICIDs and IOMMUs, so an iommu-map property is used to define 22 + the set of possible ICIDs under a root DPRC and how they map to 23 + an IOMMU. 24 + 25 + For generic IOMMU bindings, see 26 + Documentation/devicetree/bindings/iommu/iommu.txt. 27 + 28 + For arm-smmu binding, see: 29 + Documentation/devicetree/bindings/iommu/arm,smmu.txt. 30 + 12 31 Required properties: 13 32 14 33 - compatible ··· 107 88 Value type: <phandle> 108 89 Definition: Specifies the phandle to the PHY device node associated 109 90 with the this dpmac. 91 + Optional properties: 92 + 93 + - iommu-map: Maps an ICID to an IOMMU and associated iommu-specifier 94 + data. 95 + 96 + The property is an arbitrary number of tuples of 97 + (icid-base,iommu,iommu-base,length). 98 + 99 + Any ICID i in the interval [icid-base, icid-base + length) is 100 + associated with the listed IOMMU, with the iommu-specifier 101 + (i - icid-base + iommu-base). 110 102 111 103 Example: 104 + 105 + smmu: iommu@5000000 { 106 + compatible = "arm,mmu-500"; 107 + #iommu-cells = <1>; 108 + stream-match-mask = <0x7C00>; 109 + ... 
110 + }; 112 111 113 112 fsl_mc: fsl-mc@80c000000 { 114 113 compatible = "fsl,qoriq-mc"; 115 114 reg = <0x00000008 0x0c000000 0 0x40>, /* MC portal base */ 116 115 <0x00000000 0x08340000 0 0x40000>; /* MC control reg */ 117 116 msi-parent = <&its>; 117 + /* define map for ICIDs 23-64 */ 118 + iommu-map = <23 &smmu 23 41>; 118 119 #address-cells = <3>; 119 120 #size-cells = <1>; 120 121
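The iommu-map translation rule above — ICID i in [icid-base, icid-base + length) maps to iommu-specifier (i - icid-base + iommu-base) — is simple arithmetic. A hypothetical helper (not a kernel function) makes it concrete:

```c
#include <assert.h>

/* One iommu-map tuple: (icid-base, iommu, iommu-base, length).
 * The iommu phandle is omitted here for simplicity. */
struct iommu_map_entry {
	unsigned int icid_base;
	unsigned int iommu_base;
	unsigned int length;
};

/* Return the iommu-specifier for @icid, or -1 if no entry covers it.
 * Illustrative sketch of the binding's translation rule. */
static int icid_to_specifier(const struct iommu_map_entry *map,
			     int n_entries, unsigned int icid)
{
	for (int i = 0; i < n_entries; i++) {
		if (icid >= map[i].icid_base &&
		    icid < map[i].icid_base + map[i].length)
			return icid - map[i].icid_base + map[i].iommu_base;
	}
	return -1;
}
```

With the example `iommu-map = <23 &smmu 23 41>;`, ICIDs 23 through 63 translate identically to SMMU stream IDs, and anything outside that window has no mapping.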
+6 -1
arch/arm64/boot/dts/freescale/fsl-ls208xa.dtsi
··· 148 148 #address-cells = <2>; 149 149 #size-cells = <2>; 150 150 ranges; 151 + dma-ranges = <0x0 0x0 0x0 0x0 0x10000 0x00000000>; 151 152 152 153 clockgen: clocking@1300000 { 153 154 compatible = "fsl,ls2080a-clockgen"; ··· 322 321 reg = <0x00000008 0x0c000000 0 0x40>, /* MC portal base */ 323 322 <0x00000000 0x08340000 0 0x40000>; /* MC control reg */ 324 323 msi-parent = <&its>; 324 + iommu-map = <0 &smmu 0 0>; /* This is fixed-up by u-boot */ 325 + dma-coherent; 325 326 #address-cells = <3>; 326 327 #size-cells = <1>; 327 328 ··· 427 424 compatible = "arm,mmu-500"; 428 425 reg = <0 0x5000000 0 0x800000>; 429 426 #global-interrupts = <12>; 427 + #iommu-cells = <1>; 428 + stream-match-mask = <0x7C00>; 429 + dma-coherent; 430 430 interrupts = <0 13 4>, /* global secure fault */ 431 431 <0 14 4>, /* combined secure interrupt */ 432 432 <0 15 4>, /* global non-secure fault */ ··· 472 466 <0 204 4>, <0 205 4>, 473 467 <0 206 4>, <0 207 4>, 474 468 <0 208 4>, <0 209 4>; 475 - mmu-masters = <&fsl_mc 0x300 0>; 476 469 }; 477 470 478 471 dspi: dspi@2100000 {
+5 -5
arch/arm64/mm/dma-mapping.c
··· 712 712 if (is_device_dma_coherent(dev)) 713 713 return; 714 714 715 - phys = iommu_iova_to_phys(iommu_get_domain_for_dev(dev), dev_addr); 715 + phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dev_addr); 716 716 __dma_unmap_area(phys_to_virt(phys), size, dir); 717 717 } 718 718 ··· 725 725 if (is_device_dma_coherent(dev)) 726 726 return; 727 727 728 - phys = iommu_iova_to_phys(iommu_get_domain_for_dev(dev), dev_addr); 728 + phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dev_addr); 729 729 __dma_map_area(phys_to_virt(phys), size, dir); 730 730 } 731 731 ··· 738 738 int prot = dma_info_to_prot(dir, coherent, attrs); 739 739 dma_addr_t dev_addr = iommu_dma_map_page(dev, page, offset, size, prot); 740 740 741 - if (!iommu_dma_mapping_error(dev, dev_addr) && 742 - (attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0) 743 - __iommu_sync_single_for_device(dev, dev_addr, size, dir); 741 + if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC) && 742 + !iommu_dma_mapping_error(dev, dev_addr)) 743 + __dma_map_area(page_address(page) + offset, size, dir); 744 744 745 745 return dev_addr; 746 746 }
+2
arch/x86/include/asm/irq_remapping.h
··· 45 45 46 46 #ifdef CONFIG_IRQ_REMAP 47 47 48 + extern raw_spinlock_t irq_2_ir_lock; 49 + 48 50 extern bool irq_remapping_cap(enum irq_remap_cap cap); 49 51 extern void set_irq_remapping_broken(void); 50 52 extern int irq_remapping_prepare(void);
+12 -4
drivers/bus/fsl-mc/fsl-mc-bus.c
··· 127 127 return 0; 128 128 } 129 129 130 + static int fsl_mc_dma_configure(struct device *dev) 131 + { 132 + struct device *dma_dev = dev; 133 + 134 + while (dev_is_fsl_mc(dma_dev)) 135 + dma_dev = dma_dev->parent; 136 + 137 + return of_dma_configure(dev, dma_dev->of_node, 0); 138 + } 139 + 130 140 static ssize_t modalias_show(struct device *dev, struct device_attribute *attr, 131 141 char *buf) 132 142 { ··· 158 148 .name = "fsl-mc", 159 149 .match = fsl_mc_bus_match, 160 150 .uevent = fsl_mc_bus_uevent, 151 + .dma_configure = fsl_mc_dma_configure, 161 152 .dev_groups = fsl_mc_dev_groups, 162 153 }; 163 154 EXPORT_SYMBOL_GPL(fsl_mc_bus_type); ··· 632 621 mc_dev->icid = parent_mc_dev->icid; 633 622 mc_dev->dma_mask = FSL_MC_DEFAULT_DMA_MASK; 634 623 mc_dev->dev.dma_mask = &mc_dev->dma_mask; 624 + mc_dev->dev.coherent_dma_mask = mc_dev->dma_mask; 635 625 dev_set_msi_domain(&mc_dev->dev, 636 626 dev_get_msi_domain(&parent_mc_dev->dev)); 637 627 } ··· 649 637 if (error < 0) 650 638 goto error_cleanup_dev; 651 639 } 652 - 653 - /* Objects are coherent, unless 'no shareability' flag set. */ 654 - if (!(obj_desc->flags & FSL_MC_OBJ_FLAG_NO_MEM_SHAREABILITY)) 655 - arch_setup_dma_ops(&mc_dev->dev, 0, 0, NULL, true); 656 640 657 641 /* 658 642 * The device-specific probe callback will get invoked by device_add()
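The fsl_mc_dma_configure() added above walks up the parent chain until it leaves the fsl-mc bus, then takes DMA configuration from that ancestor's of_node (the bus controller actually described in the device tree). The walk can be sketched with toy types; these are illustrative, not the kernel's struct device:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-in for struct device, for illustration only. */
struct toy_device {
	struct toy_device *parent;
	int is_fsl_mc; /* stand-in for dev_is_fsl_mc() */
};

/* Climb past all fsl-mc devices to the node that carries the DMA
 * configuration (e.g. the dma-ranges/iommu-map in the device tree). */
static struct toy_device *toy_find_dma_dev(struct toy_device *dev)
{
	while (dev->is_fsl_mc)
		dev = dev->parent;
	return dev;
}
```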
+13
drivers/iommu/Kconfig
··· 186 186 and include PCI device scope covered by these DMA 187 187 remapping devices. 188 188 189 + config INTEL_IOMMU_DEBUGFS 190 + bool "Export Intel IOMMU internals in Debugfs" 191 + depends on INTEL_IOMMU && IOMMU_DEBUGFS 192 + help 193 + !!!WARNING!!! 194 + 195 + DO NOT ENABLE THIS OPTION UNLESS YOU REALLY KNOW WHAT YOU ARE DOING!!! 196 + 197 + Expose Intel IOMMU internals in Debugfs. 198 + 199 + This option is -NOT- intended for production environments, and should 200 + only be enabled for debugging Intel IOMMU. 201 + 189 202 config INTEL_IOMMU_SVM 190 203 bool "Support for Shared Virtual Memory with Intel IOMMU" 191 204 depends on INTEL_IOMMU && X86
+1
drivers/iommu/Makefile
··· 17 17 obj-$(CONFIG_ARM_SMMU_V3) += arm-smmu-v3.o 18 18 obj-$(CONFIG_DMAR_TABLE) += dmar.o 19 19 obj-$(CONFIG_INTEL_IOMMU) += intel-iommu.o intel-pasid.o 20 + obj-$(CONFIG_INTEL_IOMMU_DEBUGFS) += intel-iommu-debugfs.o 20 21 obj-$(CONFIG_INTEL_IOMMU_SVM) += intel-svm.o 21 22 obj-$(CONFIG_IPMMU_VMSA) += ipmmu-vmsa.o 22 23 obj-$(CONFIG_IRQ_REMAP) += intel_irq_remapping.o irq_remapping.o
+2
drivers/iommu/amd_iommu.c
··· 3083 3083 return (irq_remapping_enabled == 1); 3084 3084 case IOMMU_CAP_NOEXEC: 3085 3085 return false; 3086 + default: 3087 + break; 3086 3088 } 3087 3089 3088 3090 return false;
+1 -1
drivers/iommu/amd_iommu_init.c
··· 1719 1719 NULL, 1720 1720 }; 1721 1721 1722 - static int iommu_init_pci(struct amd_iommu *iommu) 1722 + static int __init iommu_init_pci(struct amd_iommu *iommu) 1723 1723 { 1724 1724 int cap_ptr = iommu->cap_ptr; 1725 1725 u32 range, misc, low, high;
+87 -53
drivers/iommu/arm-smmu-v3.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * IOMMU API for ARM architected SMMUv3 implementations. 3 - * 4 - * This program is free software; you can redistribute it and/or modify 5 - * it under the terms of the GNU General Public License version 2 as 6 - * published by the Free Software Foundation. 7 - * 8 - * This program is distributed in the hope that it will be useful, 9 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 10 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 11 - * GNU General Public License for more details. 12 - * 13 - * You should have received a copy of the GNU General Public License 14 - * along with this program. If not, see <http://www.gnu.org/licenses/>. 15 4 * 16 5 * Copyright (C) 2015 ARM Limited 17 6 * ··· 556 567 557 568 int gerr_irq; 558 569 int combined_irq; 559 - atomic_t sync_nr; 570 + u32 sync_nr; 571 + u8 prev_cmd_opcode; 560 572 561 573 unsigned long ias; /* IPA */ 562 574 unsigned long oas; /* PA */ ··· 601 611 struct mutex init_mutex; /* Protects smmu pointer */ 602 612 603 613 struct io_pgtable_ops *pgtbl_ops; 614 + bool non_strict; 604 615 605 616 enum arm_smmu_domain_stage stage; 606 617 union { ··· 699 708 } 700 709 701 710 /* 702 - * Wait for the SMMU to consume items. If drain is true, wait until the queue 711 + * Wait for the SMMU to consume items. If sync is true, wait until the queue 703 712 * is empty. Otherwise, wait until there is at least one free slot. 
704 713 */ 705 714 static int queue_poll_cons(struct arm_smmu_queue *q, bool sync, bool wfe) ··· 892 901 struct arm_smmu_queue *q = &smmu->cmdq.q; 893 902 bool wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV); 894 903 904 + smmu->prev_cmd_opcode = FIELD_GET(CMDQ_0_OP, cmd[0]); 905 + 895 906 while (queue_insert_raw(q, cmd) == -ENOSPC) { 896 907 if (queue_poll_cons(q, false, wfe)) 897 908 dev_err_ratelimited(smmu->dev, "CMDQ timeout\n"); ··· 941 948 struct arm_smmu_cmdq_ent ent = { 942 949 .opcode = CMDQ_OP_CMD_SYNC, 943 950 .sync = { 944 - .msidata = atomic_inc_return_relaxed(&smmu->sync_nr), 945 951 .msiaddr = virt_to_phys(&smmu->sync_count), 946 952 }, 947 953 }; 948 954 949 - arm_smmu_cmdq_build_cmd(cmd, &ent); 950 - 951 955 spin_lock_irqsave(&smmu->cmdq.lock, flags); 952 - arm_smmu_cmdq_insert_cmd(smmu, cmd); 956 + 957 + /* Piggy-back on the previous command if it's a SYNC */ 958 + if (smmu->prev_cmd_opcode == CMDQ_OP_CMD_SYNC) { 959 + ent.sync.msidata = smmu->sync_nr; 960 + } else { 961 + ent.sync.msidata = ++smmu->sync_nr; 962 + arm_smmu_cmdq_build_cmd(cmd, &ent); 963 + arm_smmu_cmdq_insert_cmd(smmu, cmd); 964 + } 965 + 953 966 spin_unlock_irqrestore(&smmu->cmdq.lock, flags); 954 967 955 968 return __arm_smmu_sync_poll_msi(smmu, ent.sync.msidata); ··· 1371 1372 } 1372 1373 1373 1374 /* IO_PGTABLE API */ 1374 - static void __arm_smmu_tlb_sync(struct arm_smmu_device *smmu) 1375 - { 1376 - arm_smmu_cmdq_issue_sync(smmu); 1377 - } 1378 - 1379 1375 static void arm_smmu_tlb_sync(void *cookie) 1380 1376 { 1381 1377 struct arm_smmu_domain *smmu_domain = cookie; 1382 - __arm_smmu_tlb_sync(smmu_domain->smmu); 1378 + 1379 + arm_smmu_cmdq_issue_sync(smmu_domain->smmu); 1383 1380 } 1384 1381 1385 1382 static void arm_smmu_tlb_inv_context(void *cookie) ··· 1393 1398 cmd.tlbi.vmid = smmu_domain->s2_cfg.vmid; 1394 1399 } 1395 1400 1401 + /* 1402 + * NOTE: when io-pgtable is in non-strict mode, we may get here with 1403 + * PTEs previously cleared by unmaps on the current CPU not 
yet visible 1404 + * to the SMMU. We are relying on the DSB implicit in queue_inc_prod() 1405 + * to guarantee those are observed before the TLBI. Do be careful, 007. 1406 + */ 1396 1407 arm_smmu_cmdq_issue_cmd(smmu, &cmd); 1397 - __arm_smmu_tlb_sync(smmu); 1408 + arm_smmu_cmdq_issue_sync(smmu); 1398 1409 } 1399 1410 1400 1411 static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size, ··· 1625 1624 if (smmu->features & ARM_SMMU_FEAT_COHERENCY) 1626 1625 pgtbl_cfg.quirks = IO_PGTABLE_QUIRK_NO_DMA; 1627 1626 1627 + if (smmu_domain->non_strict) 1628 + pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT; 1629 + 1628 1630 pgtbl_ops = alloc_io_pgtable_ops(fmt, &pgtbl_cfg, smmu_domain); 1629 1631 if (!pgtbl_ops) 1630 1632 return -ENOMEM; ··· 1776 1772 return ops->unmap(ops, iova, size); 1777 1773 } 1778 1774 1775 + static void arm_smmu_flush_iotlb_all(struct iommu_domain *domain) 1776 + { 1777 + struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); 1778 + 1779 + if (smmu_domain->smmu) 1780 + arm_smmu_tlb_inv_context(smmu_domain); 1781 + } 1782 + 1779 1783 static void arm_smmu_iotlb_sync(struct iommu_domain *domain) 1780 1784 { 1781 1785 struct arm_smmu_device *smmu = to_smmu_domain(domain)->smmu; 1782 1786 1783 1787 if (smmu) 1784 - __arm_smmu_tlb_sync(smmu); 1788 + arm_smmu_cmdq_issue_sync(smmu); 1785 1789 } 1786 1790 1787 1791 static phys_addr_t ··· 1929 1917 { 1930 1918 struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); 1931 1919 1932 - if (domain->type != IOMMU_DOMAIN_UNMANAGED) 1933 - return -EINVAL; 1934 - 1935 - switch (attr) { 1936 - case DOMAIN_ATTR_NESTING: 1937 - *(int *)data = (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED); 1938 - return 0; 1920 + switch (domain->type) { 1921 + case IOMMU_DOMAIN_UNMANAGED: 1922 + switch (attr) { 1923 + case DOMAIN_ATTR_NESTING: 1924 + *(int *)data = (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED); 1925 + return 0; 1926 + default: 1927 + return -ENODEV; 1928 + } 1929 + break; 1930 + case 
IOMMU_DOMAIN_DMA: 1931 + switch (attr) { 1932 + case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE: 1933 + *(int *)data = smmu_domain->non_strict; 1934 + return 0; 1935 + default: 1936 + return -ENODEV; 1937 + } 1938 + break; 1939 1939 default: 1940 - return -ENODEV; 1940 + return -EINVAL; 1941 1941 } 1942 1942 } 1943 1943 ··· 1959 1935 int ret = 0; 1960 1936 struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); 1961 1937 1962 - if (domain->type != IOMMU_DOMAIN_UNMANAGED) 1963 - return -EINVAL; 1964 - 1965 1938 mutex_lock(&smmu_domain->init_mutex); 1966 1939 1967 - switch (attr) { 1968 - case DOMAIN_ATTR_NESTING: 1969 - if (smmu_domain->smmu) { 1970 - ret = -EPERM; 1971 - goto out_unlock; 1940 + switch (domain->type) { 1941 + case IOMMU_DOMAIN_UNMANAGED: 1942 + switch (attr) { 1943 + case DOMAIN_ATTR_NESTING: 1944 + if (smmu_domain->smmu) { 1945 + ret = -EPERM; 1946 + goto out_unlock; 1947 + } 1948 + 1949 + if (*(int *)data) 1950 + smmu_domain->stage = ARM_SMMU_DOMAIN_NESTED; 1951 + else 1952 + smmu_domain->stage = ARM_SMMU_DOMAIN_S1; 1953 + break; 1954 + default: 1955 + ret = -ENODEV; 1972 1956 } 1973 - 1974 - if (*(int *)data) 1975 - smmu_domain->stage = ARM_SMMU_DOMAIN_NESTED; 1976 - else 1977 - smmu_domain->stage = ARM_SMMU_DOMAIN_S1; 1978 - 1957 + break; 1958 + case IOMMU_DOMAIN_DMA: 1959 + switch(attr) { 1960 + case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE: 1961 + smmu_domain->non_strict = *(int *)data; 1962 + break; 1963 + default: 1964 + ret = -ENODEV; 1965 + } 1979 1966 break; 1980 1967 default: 1981 - ret = -ENODEV; 1968 + ret = -EINVAL; 1982 1969 } 1983 1970 1984 1971 out_unlock: ··· 2034 1999 .attach_dev = arm_smmu_attach_dev, 2035 2000 .map = arm_smmu_map, 2036 2001 .unmap = arm_smmu_unmap, 2037 - .flush_iotlb_all = arm_smmu_iotlb_sync, 2002 + .flush_iotlb_all = arm_smmu_flush_iotlb_all, 2038 2003 .iotlb_sync = arm_smmu_iotlb_sync, 2039 2004 .iova_to_phys = arm_smmu_iova_to_phys, 2040 2005 .add_device = arm_smmu_add_device, ··· 2215 2180 { 2216 2181 int ret; 2217 
2182 2218 - atomic_set(&smmu->sync_nr, 0); 2219 2183 ret = arm_smmu_init_queues(smmu); 2220 2184 if (ret) 2221 2185 return ret; ··· 2387 2353 irq = smmu->combined_irq; 2388 2354 if (irq) { 2389 2355 /* 2390 - * Cavium ThunderX2 implementation doesn't not support unique 2391 - * irq lines. Use single irq line for all the SMMUv3 interrupts. 2356 + * Cavium ThunderX2 implementation doesn't support unique irq 2357 + * lines. Use a single irq line for all the SMMUv3 interrupts. 2392 2358 */ 2393 2359 ret = devm_request_threaded_irq(smmu->dev, irq, 2394 2360 arm_smmu_combined_irq_handler,
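The CMD_SYNC piggy-backing introduced in arm_smmu_cmdq_issue_sync() above can be modelled with a toy command queue: when the last queued command was already a CMD_SYNC, a new sync request simply waits on that command's sequence number instead of queueing another one. Names here are illustrative, not the driver's:

```c
#include <assert.h>

enum toy_opcode { TOY_OP_TLBI, TOY_OP_SYNC };

struct toy_cmdq {
	enum toy_opcode prev_op;   /* opcode of the last queued command */
	unsigned int sync_nr;      /* last issued sync sequence number */
	unsigned int cmds_issued;  /* commands actually placed on the queue */
};

static void toy_issue_cmd(struct toy_cmdq *q, enum toy_opcode op)
{
	q->prev_op = op;
	q->cmds_issued++;
}

/* Return the sequence number the caller should poll for. */
static unsigned int toy_issue_sync(struct toy_cmdq *q)
{
	if (q->prev_op == TOY_OP_SYNC)
		return q->sync_nr;	/* piggy-back on the previous SYNC */

	q->sync_nr++;
	toy_issue_cmd(q, TOY_OP_SYNC);
	return q->sync_nr;
}
```

Back-to-back sync requests thus cost one queued CMD_SYNC instead of two, which is exactly the redundancy the "Avoid back-to-back CMD_SYNC operations" commit removes.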
+79 -27
drivers/iommu/arm-smmu.c
··· 52 52 #include <linux/spinlock.h> 53 53 54 54 #include <linux/amba/bus.h> 55 + #include <linux/fsl/mc.h> 55 56 56 57 #include "io-pgtable.h" 57 58 #include "arm-smmu-regs.h" ··· 247 246 const struct iommu_gather_ops *tlb_ops; 248 247 struct arm_smmu_cfg cfg; 249 248 enum arm_smmu_domain_stage stage; 249 + bool non_strict; 250 250 struct mutex init_mutex; /* Protects smmu pointer */ 251 251 spinlock_t cb_lock; /* Serialises ATS1* ops and TLB syncs */ 252 252 struct iommu_domain domain; ··· 449 447 struct arm_smmu_cfg *cfg = &smmu_domain->cfg; 450 448 void __iomem *base = ARM_SMMU_CB(smmu_domain->smmu, cfg->cbndx); 451 449 452 - writel_relaxed(cfg->asid, base + ARM_SMMU_CB_S1_TLBIASID); 450 + /* 451 + * NOTE: this is not a relaxed write; it needs to guarantee that PTEs 452 + * cleared by the current CPU are visible to the SMMU before the TLBI. 453 + */ 454 + writel(cfg->asid, base + ARM_SMMU_CB_S1_TLBIASID); 453 455 arm_smmu_tlb_sync_context(cookie); 454 456 } 455 457 ··· 463 457 struct arm_smmu_device *smmu = smmu_domain->smmu; 464 458 void __iomem *base = ARM_SMMU_GR0(smmu); 465 459 466 - writel_relaxed(smmu_domain->cfg.vmid, base + ARM_SMMU_GR0_TLBIVMID); 460 + /* NOTE: see above */ 461 + writel(smmu_domain->cfg.vmid, base + ARM_SMMU_GR0_TLBIVMID); 467 462 arm_smmu_tlb_sync_global(smmu); 468 463 } 469 464 ··· 475 468 struct arm_smmu_cfg *cfg = &smmu_domain->cfg; 476 469 bool stage1 = cfg->cbar != CBAR_TYPE_S2_TRANS; 477 470 void __iomem *reg = ARM_SMMU_CB(smmu_domain->smmu, cfg->cbndx); 471 + 472 + if (smmu_domain->smmu->features & ARM_SMMU_FEAT_COHERENT_WALK) 473 + wmb(); 478 474 479 475 if (stage1) { 480 476 reg += leaf ? 
ARM_SMMU_CB_S1_TLBIVAL : ARM_SMMU_CB_S1_TLBIVA; ··· 519 509 { 520 510 struct arm_smmu_domain *smmu_domain = cookie; 521 511 void __iomem *base = ARM_SMMU_GR0(smmu_domain->smmu); 512 + 513 + if (smmu_domain->smmu->features & ARM_SMMU_FEAT_COHERENT_WALK) 514 + wmb(); 522 515 523 516 writel_relaxed(smmu_domain->cfg.vmid, base + ARM_SMMU_GR0_TLBIVMID); 524 517 } ··· 875 862 876 863 if (smmu->features & ARM_SMMU_FEAT_COHERENT_WALK) 877 864 pgtbl_cfg.quirks = IO_PGTABLE_QUIRK_NO_DMA; 865 + 866 + if (smmu_domain->non_strict) 867 + pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT; 878 868 879 869 smmu_domain->smmu = smmu; 880 870 pgtbl_ops = alloc_io_pgtable_ops(fmt, &pgtbl_cfg, smmu_domain); ··· 1268 1252 return ops->unmap(ops, iova, size); 1269 1253 } 1270 1254 1255 + static void arm_smmu_flush_iotlb_all(struct iommu_domain *domain) 1256 + { 1257 + struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); 1258 + 1259 + if (smmu_domain->tlb_ops) 1260 + smmu_domain->tlb_ops->tlb_flush_all(smmu_domain); 1261 + } 1262 + 1271 1263 static void arm_smmu_iotlb_sync(struct iommu_domain *domain) 1272 1264 { 1273 1265 struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); ··· 1483 1459 1484 1460 if (dev_is_pci(dev)) 1485 1461 group = pci_device_group(dev); 1462 + else if (dev_is_fsl_mc(dev)) 1463 + group = fsl_mc_device_group(dev); 1486 1464 else 1487 1465 group = generic_device_group(dev); 1488 1466 ··· 1496 1470 { 1497 1471 struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); 1498 1472 1499 - if (domain->type != IOMMU_DOMAIN_UNMANAGED) 1500 - return -EINVAL; 1501 - 1502 - switch (attr) { 1503 - case DOMAIN_ATTR_NESTING: 1504 - *(int *)data = (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED); 1505 - return 0; 1473 + switch(domain->type) { 1474 + case IOMMU_DOMAIN_UNMANAGED: 1475 + switch (attr) { 1476 + case DOMAIN_ATTR_NESTING: 1477 + *(int *)data = (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED); 1478 + return 0; 1479 + default: 1480 + return -ENODEV; 1481 + } 
1482 + break; 1483 + case IOMMU_DOMAIN_DMA: 1484 + switch (attr) { 1485 + case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE: 1486 + *(int *)data = smmu_domain->non_strict; 1487 + return 0; 1488 + default: 1489 + return -ENODEV; 1490 + } 1491 + break; 1506 1492 default: 1507 - return -ENODEV; 1493 + return -EINVAL; 1508 1494 } 1509 1495 } 1510 1496 ··· 1526 1488 int ret = 0; 1527 1489 struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); 1528 1490 1529 - if (domain->type != IOMMU_DOMAIN_UNMANAGED) 1530 - return -EINVAL; 1531 - 1532 1491 mutex_lock(&smmu_domain->init_mutex); 1533 1492 1534 - switch (attr) { 1535 - case DOMAIN_ATTR_NESTING: 1536 - if (smmu_domain->smmu) { 1537 - ret = -EPERM; 1538 - goto out_unlock; 1493 + switch(domain->type) { 1494 + case IOMMU_DOMAIN_UNMANAGED: 1495 + switch (attr) { 1496 + case DOMAIN_ATTR_NESTING: 1497 + if (smmu_domain->smmu) { 1498 + ret = -EPERM; 1499 + goto out_unlock; 1500 + } 1501 + 1502 + if (*(int *)data) 1503 + smmu_domain->stage = ARM_SMMU_DOMAIN_NESTED; 1504 + else 1505 + smmu_domain->stage = ARM_SMMU_DOMAIN_S1; 1506 + break; 1507 + default: 1508 + ret = -ENODEV; 1539 1509 } 1540 - 1541 - if (*(int *)data) 1542 - smmu_domain->stage = ARM_SMMU_DOMAIN_NESTED; 1543 - else 1544 - smmu_domain->stage = ARM_SMMU_DOMAIN_S1; 1545 - 1510 + break; 1511 + case IOMMU_DOMAIN_DMA: 1512 + switch (attr) { 1513 + case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE: 1514 + smmu_domain->non_strict = *(int *)data; 1515 + break; 1516 + default: 1517 + ret = -ENODEV; 1518 + } 1546 1519 break; 1547 1520 default: 1548 - ret = -ENODEV; 1521 + ret = -EINVAL; 1549 1522 } 1550 - 1551 1523 out_unlock: 1552 1524 mutex_unlock(&smmu_domain->init_mutex); 1553 1525 return ret; ··· 1610 1562 .attach_dev = arm_smmu_attach_dev, 1611 1563 .map = arm_smmu_map, 1612 1564 .unmap = arm_smmu_unmap, 1613 - .flush_iotlb_all = arm_smmu_iotlb_sync, 1565 + .flush_iotlb_all = arm_smmu_flush_iotlb_all, 1614 1566 .iotlb_sync = arm_smmu_iotlb_sync, 1615 1567 .iova_to_phys = 
arm_smmu_iova_to_phys, 1616 1568 .add_device = arm_smmu_add_device, ··· 2083 2035 pci_request_acs(); 2084 2036 bus_set_iommu(&pci_bus_type, &arm_smmu_ops); 2085 2037 } 2038 + #endif 2039 + #ifdef CONFIG_FSL_MC_BUS 2040 + if (!iommu_present(&fsl_mc_bus_type)) 2041 + bus_set_iommu(&fsl_mc_bus_type, &arm_smmu_ops); 2086 2042 #endif 2087 2043 } 2088 2044
+43 -12
drivers/iommu/dma-iommu.c
··· 55 55 }; 56 56 struct list_head msi_page_list; 57 57 spinlock_t msi_lock; 58 + 59 + /* Domain for flush queue callback; NULL if flush queue not in use */ 60 + struct iommu_domain *fq_domain; 58 61 }; 59 62 60 63 static inline size_t cookie_msi_granule(struct iommu_dma_cookie *cookie) ··· 260 257 return ret; 261 258 } 262 259 260 + static void iommu_dma_flush_iotlb_all(struct iova_domain *iovad) 261 + { 262 + struct iommu_dma_cookie *cookie; 263 + struct iommu_domain *domain; 264 + 265 + cookie = container_of(iovad, struct iommu_dma_cookie, iovad); 266 + domain = cookie->fq_domain; 267 + /* 268 + * The IOMMU driver supporting DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE 269 + * implies that ops->flush_iotlb_all must be non-NULL. 270 + */ 271 + domain->ops->flush_iotlb_all(domain); 272 + } 273 + 263 274 /** 264 275 * iommu_dma_init_domain - Initialise a DMA mapping domain 265 276 * @domain: IOMMU domain previously prepared by iommu_get_dma_cookie() ··· 292 275 struct iommu_dma_cookie *cookie = domain->iova_cookie; 293 276 struct iova_domain *iovad = &cookie->iovad; 294 277 unsigned long order, base_pfn, end_pfn; 278 + int attr; 295 279 296 280 if (!cookie || cookie->type != IOMMU_DMA_IOVA_COOKIE) 297 281 return -EINVAL; ··· 326 308 } 327 309 328 310 init_iova_domain(iovad, 1UL << order, base_pfn); 311 + 312 + if (!cookie->fq_domain && !iommu_domain_get_attr(domain, 313 + DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE, &attr) && attr) { 314 + cookie->fq_domain = domain; 315 + init_iova_flush_queue(iovad, iommu_dma_flush_iotlb_all, NULL); 316 + } 317 + 329 318 if (!dev) 330 319 return 0; 331 320 ··· 418 393 /* The MSI case is only ever cleaning up its most recent allocation */ 419 394 if (cookie->type == IOMMU_DMA_MSI_COOKIE) 420 395 cookie->msi_iova -= size; 396 + else if (cookie->fq_domain) /* non-strict mode */ 397 + queue_iova(iovad, iova_pfn(iovad, iova), 398 + size >> iova_shift(iovad), 0); 421 399 else 422 400 free_iova_fast(iovad, iova_pfn(iovad, iova), 423 401 size >> 
iova_shift(iovad)); ··· 436 408 dma_addr -= iova_off; 437 409 size = iova_align(iovad, size + iova_off); 438 410 439 - WARN_ON(iommu_unmap(domain, dma_addr, size) != size); 411 + WARN_ON(iommu_unmap_fast(domain, dma_addr, size) != size); 412 + if (!cookie->fq_domain) 413 + iommu_tlb_sync(domain); 440 414 iommu_dma_free_iova(cookie, dma_addr, size); 441 415 } 442 416 ··· 521 491 void iommu_dma_free(struct device *dev, struct page **pages, size_t size, 522 492 dma_addr_t *handle) 523 493 { 524 - __iommu_dma_unmap(iommu_get_domain_for_dev(dev), *handle, size); 494 + __iommu_dma_unmap(iommu_get_dma_domain(dev), *handle, size); 525 495 __iommu_dma_free_pages(pages, PAGE_ALIGN(size) >> PAGE_SHIFT); 526 496 *handle = IOMMU_MAPPING_ERROR; 527 497 } ··· 548 518 unsigned long attrs, int prot, dma_addr_t *handle, 549 519 void (*flush_page)(struct device *, const void *, phys_addr_t)) 550 520 { 551 - struct iommu_domain *domain = iommu_get_domain_for_dev(dev); 521 + struct iommu_domain *domain = iommu_get_dma_domain(dev); 552 522 struct iommu_dma_cookie *cookie = domain->iova_cookie; 553 523 struct iova_domain *iovad = &cookie->iovad; 554 524 struct page **pages; ··· 636 606 } 637 607 638 608 static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys, 639 - size_t size, int prot) 609 + size_t size, int prot, struct iommu_domain *domain) 640 610 { 641 - struct iommu_domain *domain = iommu_get_domain_for_dev(dev); 642 611 struct iommu_dma_cookie *cookie = domain->iova_cookie; 643 612 size_t iova_off = 0; 644 613 dma_addr_t iova; ··· 661 632 dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page, 662 633 unsigned long offset, size_t size, int prot) 663 634 { 664 - return __iommu_dma_map(dev, page_to_phys(page) + offset, size, prot); 635 + return __iommu_dma_map(dev, page_to_phys(page) + offset, size, prot, 636 + iommu_get_dma_domain(dev)); 665 637 } 666 638 667 639 void iommu_dma_unmap_page(struct device *dev, dma_addr_t handle, size_t size, 668 640 enum 
dma_data_direction dir, unsigned long attrs) 669 641 { 670 - __iommu_dma_unmap(iommu_get_domain_for_dev(dev), handle, size); 642 + __iommu_dma_unmap(iommu_get_dma_domain(dev), handle, size); 671 643 } 672 644 673 645 /* ··· 756 726 int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg, 757 727 int nents, int prot) 758 728 { 759 - struct iommu_domain *domain = iommu_get_domain_for_dev(dev); 729 + struct iommu_domain *domain = iommu_get_dma_domain(dev); 760 730 struct iommu_dma_cookie *cookie = domain->iova_cookie; 761 731 struct iova_domain *iovad = &cookie->iovad; 762 732 struct scatterlist *s, *prev = NULL; ··· 841 811 sg = tmp; 842 812 } 843 813 end = sg_dma_address(sg) + sg_dma_len(sg); 844 - __iommu_dma_unmap(iommu_get_domain_for_dev(dev), start, end - start); 814 + __iommu_dma_unmap(iommu_get_dma_domain(dev), start, end - start); 845 815 } 846 816 847 817 dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys, 848 818 size_t size, enum dma_data_direction dir, unsigned long attrs) 849 819 { 850 820 return __iommu_dma_map(dev, phys, size, 851 - dma_info_to_prot(dir, false, attrs) | IOMMU_MMIO); 821 + dma_info_to_prot(dir, false, attrs) | IOMMU_MMIO, 822 + iommu_get_dma_domain(dev)); 852 823 } 853 824 854 825 void iommu_dma_unmap_resource(struct device *dev, dma_addr_t handle, 855 826 size_t size, enum dma_data_direction dir, unsigned long attrs) 856 827 { 857 - __iommu_dma_unmap(iommu_get_domain_for_dev(dev), handle, size); 828 + __iommu_dma_unmap(iommu_get_dma_domain(dev), handle, size); 858 829 } 859 830 860 831 int iommu_dma_mapping_error(struct device *dev, dma_addr_t dma_addr) ··· 881 850 if (!msi_page) 882 851 return NULL; 883 852 884 - iova = __iommu_dma_map(dev, msi_addr, size, prot); 853 + iova = __iommu_dma_map(dev, msi_addr, size, prot, domain); 885 854 if (iommu_dma_mapping_error(dev, iova)) 886 855 goto out_free_page; 887 856
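The granule rounding in __iommu_dma_unmap() above (`dma_addr -= iova_off; size = iova_align(iovad, size + iova_off);`) widens a request to whole IOVA granules: the start address is rounded down and the length rounded up. A minimal sketch of that arithmetic, assuming a power-of-two granule (toy types, not the kernel's struct iova_domain):

```c
#include <assert.h>

struct toy_iovad {
	unsigned long granule; /* e.g. 0x1000 for 4 KiB */
};

static unsigned long toy_iova_offset(const struct toy_iovad *d,
				     unsigned long iova)
{
	return iova & (d->granule - 1);
}

static unsigned long toy_iova_align(const struct toy_iovad *d,
				    unsigned long size)
{
	return (size + d->granule - 1) & ~(d->granule - 1);
}

/* Compute the granule-aligned region actually unmapped. */
static void toy_unmap_extent(const struct toy_iovad *d,
			     unsigned long dma_addr, unsigned long size,
			     unsigned long *start, unsigned long *len)
{
	unsigned long off = toy_iova_offset(d, dma_addr);

	*start = dma_addr - off;
	*len = toy_iova_align(d, size + off);
}
```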
+58 -61
drivers/iommu/fsl_pamu_domain.c
··· 814 814 return 0; 815 815 } 816 816 817 + static int fsl_pamu_set_windows(struct iommu_domain *domain, u32 w_count) 818 + { 819 + struct fsl_dma_domain *dma_domain = to_fsl_dma_domain(domain); 820 + unsigned long flags; 821 + int ret; 822 + 823 + spin_lock_irqsave(&dma_domain->domain_lock, flags); 824 + /* Ensure domain is inactive i.e. DMA should be disabled for the domain */ 825 + if (dma_domain->enabled) { 826 + pr_debug("Can't set geometry attributes as domain is active\n"); 827 + spin_unlock_irqrestore(&dma_domain->domain_lock, flags); 828 + return -EBUSY; 829 + } 830 + 831 + /* Ensure that the geometry has been set for the domain */ 832 + if (!dma_domain->geom_size) { 833 + pr_debug("Please configure geometry before setting the number of windows\n"); 834 + spin_unlock_irqrestore(&dma_domain->domain_lock, flags); 835 + return -EINVAL; 836 + } 837 + 838 + /* 839 + * Ensure we have valid window count i.e. it should be less than 840 + * maximum permissible limit and should be a power of two. 841 + */ 842 + if (w_count > pamu_get_max_subwin_cnt() || !is_power_of_2(w_count)) { 843 + pr_debug("Invalid window count\n"); 844 + spin_unlock_irqrestore(&dma_domain->domain_lock, flags); 845 + return -EINVAL; 846 + } 847 + 848 + ret = pamu_set_domain_geometry(dma_domain, &domain->geometry, 849 + w_count > 1 ? 
w_count : 0); 850 + if (!ret) { 851 + kfree(dma_domain->win_arr); 852 + dma_domain->win_arr = kcalloc(w_count, 853 + sizeof(*dma_domain->win_arr), 854 + GFP_ATOMIC); 855 + if (!dma_domain->win_arr) { 856 + spin_unlock_irqrestore(&dma_domain->domain_lock, flags); 857 + return -ENOMEM; 858 + } 859 + dma_domain->win_cnt = w_count; 860 + } 861 + spin_unlock_irqrestore(&dma_domain->domain_lock, flags); 862 + 863 + return ret; 864 + } 865 + 817 866 static int fsl_pamu_set_domain_attr(struct iommu_domain *domain, 818 867 enum iommu_attr attr_type, void *data) 819 868 { ··· 878 829 break; 879 830 case DOMAIN_ATTR_FSL_PAMU_ENABLE: 880 831 ret = configure_domain_dma_state(dma_domain, *(int *)data); 832 + break; 833 + case DOMAIN_ATTR_WINDOWS: 834 + ret = fsl_pamu_set_windows(domain, *(u32 *)data); 881 835 break; 882 836 default: 883 837 pr_debug("Unsupported attribute type\n"); ··· 907 855 break; 908 856 case DOMAIN_ATTR_FSL_PAMUV1: 909 857 *(int *)data = DOMAIN_ATTR_FSL_PAMUV1; 858 + break; 859 + case DOMAIN_ATTR_WINDOWS: 860 + *(u32 *)data = dma_domain->win_cnt; 910 861 break; 911 862 default: 912 863 pr_debug("Unsupported attribute type\n"); ··· 971 916 static struct iommu_group *get_pci_device_group(struct pci_dev *pdev) 972 917 { 973 918 struct pci_controller *pci_ctl; 974 - bool pci_endpt_partioning; 919 + bool pci_endpt_partitioning; 975 920 struct iommu_group *group = NULL; 976 921 977 922 pci_ctl = pci_bus_to_host(pdev->bus); 978 - pci_endpt_partioning = check_pci_ctl_endpt_part(pci_ctl); 923 + pci_endpt_partitioning = check_pci_ctl_endpt_part(pci_ctl); 979 924 /* We can partition PCIe devices so assign device group to the device */ 980 - if (pci_endpt_partioning) { 925 + if (pci_endpt_partitioning) { 981 926 group = pci_device_group(&pdev->dev); 982 927 983 928 /* ··· 1049 994 iommu_group_remove_device(dev); 1050 995 } 1051 996 1052 - static int fsl_pamu_set_windows(struct iommu_domain *domain, u32 w_count) 1053 - { 1054 - struct fsl_dma_domain *dma_domain = 
to_fsl_dma_domain(domain); 1055 - unsigned long flags; 1056 - int ret; 1057 - 1058 - spin_lock_irqsave(&dma_domain->domain_lock, flags); 1059 - /* Ensure domain is inactive i.e. DMA should be disabled for the domain */ 1060 - if (dma_domain->enabled) { 1061 - pr_debug("Can't set geometry attributes as domain is active\n"); 1062 - spin_unlock_irqrestore(&dma_domain->domain_lock, flags); 1063 - return -EBUSY; 1064 - } 1065 - 1066 - /* Ensure that the geometry has been set for the domain */ 1067 - if (!dma_domain->geom_size) { 1068 - pr_debug("Please configure geometry before setting the number of windows\n"); 1069 - spin_unlock_irqrestore(&dma_domain->domain_lock, flags); 1070 - return -EINVAL; 1071 - } 1072 - 1073 - /* 1074 - * Ensure we have valid window count i.e. it should be less than 1075 - * maximum permissible limit and should be a power of two. 1076 - */ 1077 - if (w_count > pamu_get_max_subwin_cnt() || !is_power_of_2(w_count)) { 1078 - pr_debug("Invalid window count\n"); 1079 - spin_unlock_irqrestore(&dma_domain->domain_lock, flags); 1080 - return -EINVAL; 1081 - } 1082 - 1083 - ret = pamu_set_domain_geometry(dma_domain, &domain->geometry, 1084 - w_count > 1 ? 
w_count : 0); 1085 - if (!ret) { 1086 - kfree(dma_domain->win_arr); 1087 - dma_domain->win_arr = kcalloc(w_count, 1088 - sizeof(*dma_domain->win_arr), 1089 - GFP_ATOMIC); 1090 - if (!dma_domain->win_arr) { 1091 - spin_unlock_irqrestore(&dma_domain->domain_lock, flags); 1092 - return -ENOMEM; 1093 - } 1094 - dma_domain->win_cnt = w_count; 1095 - } 1096 - spin_unlock_irqrestore(&dma_domain->domain_lock, flags); 1097 - 1098 - return ret; 1099 - } 1100 - 1101 - static u32 fsl_pamu_get_windows(struct iommu_domain *domain) 1102 - { 1103 - struct fsl_dma_domain *dma_domain = to_fsl_dma_domain(domain); 1104 - 1105 - return dma_domain->win_cnt; 1106 - } 1107 - 1108 997 static const struct iommu_ops fsl_pamu_ops = { 1109 998 .capable = fsl_pamu_capable, 1110 999 .domain_alloc = fsl_pamu_domain_alloc, ··· 1057 1058 .detach_dev = fsl_pamu_detach_device, 1058 1059 .domain_window_enable = fsl_pamu_window_enable, 1059 1060 .domain_window_disable = fsl_pamu_window_disable, 1060 - .domain_get_windows = fsl_pamu_get_windows, 1061 - .domain_set_windows = fsl_pamu_set_windows, 1062 1061 .iova_to_phys = fsl_pamu_iova_to_phys, 1063 1062 .domain_set_attr = fsl_pamu_set_domain_attr, 1064 1063 .domain_get_attr = fsl_pamu_get_domain_attr,
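The window-count validation moved above in fsl_pamu_set_windows() requires the count to be a power of two and no larger than the PAMU's subwindow limit. A minimal userspace sketch of that check, with a hypothetical MAX_SUBWIN constant standing in for pamu_get_max_subwin_cnt():

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for pamu_get_max_subwin_cnt() */
#define MAX_SUBWIN 16U

/* Mirrors the kernel's is_power_of_2() helper */
static bool is_power_of_2(unsigned int n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

/* True when w_count is an acceptable PAMU window count */
static bool valid_window_count(unsigned int w_count)
{
	return w_count <= MAX_SUBWIN && is_power_of_2(w_count);
}
```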
+314
drivers/iommu/intel-iommu-debugfs.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright © 2018 Intel Corporation. 4 + * 5 + * Authors: Gayatri Kammela <gayatri.kammela@intel.com> 6 + * Sohil Mehta <sohil.mehta@intel.com> 7 + * Jacob Pan <jacob.jun.pan@linux.intel.com> 8 + */ 9 + 10 + #include <linux/debugfs.h> 11 + #include <linux/dmar.h> 12 + #include <linux/intel-iommu.h> 13 + #include <linux/pci.h> 14 + 15 + #include <asm/irq_remapping.h> 16 + 17 + struct iommu_regset { 18 + int offset; 19 + const char *regs; 20 + }; 21 + 22 + #define IOMMU_REGSET_ENTRY(_reg_) \ 23 + { DMAR_##_reg_##_REG, __stringify(_reg_) } 24 + static const struct iommu_regset iommu_regs[] = { 25 + IOMMU_REGSET_ENTRY(VER), 26 + IOMMU_REGSET_ENTRY(CAP), 27 + IOMMU_REGSET_ENTRY(ECAP), 28 + IOMMU_REGSET_ENTRY(GCMD), 29 + IOMMU_REGSET_ENTRY(GSTS), 30 + IOMMU_REGSET_ENTRY(RTADDR), 31 + IOMMU_REGSET_ENTRY(CCMD), 32 + IOMMU_REGSET_ENTRY(FSTS), 33 + IOMMU_REGSET_ENTRY(FECTL), 34 + IOMMU_REGSET_ENTRY(FEDATA), 35 + IOMMU_REGSET_ENTRY(FEADDR), 36 + IOMMU_REGSET_ENTRY(FEUADDR), 37 + IOMMU_REGSET_ENTRY(AFLOG), 38 + IOMMU_REGSET_ENTRY(PMEN), 39 + IOMMU_REGSET_ENTRY(PLMBASE), 40 + IOMMU_REGSET_ENTRY(PLMLIMIT), 41 + IOMMU_REGSET_ENTRY(PHMBASE), 42 + IOMMU_REGSET_ENTRY(PHMLIMIT), 43 + IOMMU_REGSET_ENTRY(IQH), 44 + IOMMU_REGSET_ENTRY(IQT), 45 + IOMMU_REGSET_ENTRY(IQA), 46 + IOMMU_REGSET_ENTRY(ICS), 47 + IOMMU_REGSET_ENTRY(IRTA), 48 + IOMMU_REGSET_ENTRY(PQH), 49 + IOMMU_REGSET_ENTRY(PQT), 50 + IOMMU_REGSET_ENTRY(PQA), 51 + IOMMU_REGSET_ENTRY(PRS), 52 + IOMMU_REGSET_ENTRY(PECTL), 53 + IOMMU_REGSET_ENTRY(PEDATA), 54 + IOMMU_REGSET_ENTRY(PEADDR), 55 + IOMMU_REGSET_ENTRY(PEUADDR), 56 + IOMMU_REGSET_ENTRY(MTRRCAP), 57 + IOMMU_REGSET_ENTRY(MTRRDEF), 58 + IOMMU_REGSET_ENTRY(MTRR_FIX64K_00000), 59 + IOMMU_REGSET_ENTRY(MTRR_FIX16K_80000), 60 + IOMMU_REGSET_ENTRY(MTRR_FIX16K_A0000), 61 + IOMMU_REGSET_ENTRY(MTRR_FIX4K_C0000), 62 + IOMMU_REGSET_ENTRY(MTRR_FIX4K_C8000), 63 + IOMMU_REGSET_ENTRY(MTRR_FIX4K_D0000), 64 + 
IOMMU_REGSET_ENTRY(MTRR_FIX4K_D8000), 65 + IOMMU_REGSET_ENTRY(MTRR_FIX4K_E0000), 66 + IOMMU_REGSET_ENTRY(MTRR_FIX4K_E8000), 67 + IOMMU_REGSET_ENTRY(MTRR_FIX4K_F0000), 68 + IOMMU_REGSET_ENTRY(MTRR_FIX4K_F8000), 69 + IOMMU_REGSET_ENTRY(MTRR_PHYSBASE0), 70 + IOMMU_REGSET_ENTRY(MTRR_PHYSMASK0), 71 + IOMMU_REGSET_ENTRY(MTRR_PHYSBASE1), 72 + IOMMU_REGSET_ENTRY(MTRR_PHYSMASK1), 73 + IOMMU_REGSET_ENTRY(MTRR_PHYSBASE2), 74 + IOMMU_REGSET_ENTRY(MTRR_PHYSMASK2), 75 + IOMMU_REGSET_ENTRY(MTRR_PHYSBASE3), 76 + IOMMU_REGSET_ENTRY(MTRR_PHYSMASK3), 77 + IOMMU_REGSET_ENTRY(MTRR_PHYSBASE4), 78 + IOMMU_REGSET_ENTRY(MTRR_PHYSMASK4), 79 + IOMMU_REGSET_ENTRY(MTRR_PHYSBASE5), 80 + IOMMU_REGSET_ENTRY(MTRR_PHYSMASK5), 81 + IOMMU_REGSET_ENTRY(MTRR_PHYSBASE6), 82 + IOMMU_REGSET_ENTRY(MTRR_PHYSMASK6), 83 + IOMMU_REGSET_ENTRY(MTRR_PHYSBASE7), 84 + IOMMU_REGSET_ENTRY(MTRR_PHYSMASK7), 85 + IOMMU_REGSET_ENTRY(MTRR_PHYSBASE8), 86 + IOMMU_REGSET_ENTRY(MTRR_PHYSMASK8), 87 + IOMMU_REGSET_ENTRY(MTRR_PHYSBASE9), 88 + IOMMU_REGSET_ENTRY(MTRR_PHYSMASK9), 89 + IOMMU_REGSET_ENTRY(VCCAP), 90 + IOMMU_REGSET_ENTRY(VCMD), 91 + IOMMU_REGSET_ENTRY(VCRSP), 92 + }; 93 + 94 + static int iommu_regset_show(struct seq_file *m, void *unused) 95 + { 96 + struct dmar_drhd_unit *drhd; 97 + struct intel_iommu *iommu; 98 + unsigned long flag; 99 + int i, ret = 0; 100 + u64 value; 101 + 102 + rcu_read_lock(); 103 + for_each_active_iommu(iommu, drhd) { 104 + if (!drhd->reg_base_addr) { 105 + seq_puts(m, "IOMMU: Invalid base address\n"); 106 + ret = -EINVAL; 107 + goto out; 108 + } 109 + 110 + seq_printf(m, "IOMMU: %s Register Base Address: %llx\n", 111 + iommu->name, drhd->reg_base_addr); 112 + seq_puts(m, "Name\t\t\tOffset\t\tContents\n"); 113 + /* 114 + * Publish the contents of the 64-bit hardware registers 115 + * by adding the offset to the pointer (virtual address). 
116 + */ 117 + raw_spin_lock_irqsave(&iommu->register_lock, flag); 118 + for (i = 0 ; i < ARRAY_SIZE(iommu_regs); i++) { 119 + value = dmar_readq(iommu->reg + iommu_regs[i].offset); 120 + seq_printf(m, "%-16s\t0x%02x\t\t0x%016llx\n", 121 + iommu_regs[i].regs, iommu_regs[i].offset, 122 + value); 123 + } 124 + raw_spin_unlock_irqrestore(&iommu->register_lock, flag); 125 + seq_putc(m, '\n'); 126 + } 127 + out: 128 + rcu_read_unlock(); 129 + 130 + return ret; 131 + } 132 + DEFINE_SHOW_ATTRIBUTE(iommu_regset); 133 + 134 + static void ctx_tbl_entry_show(struct seq_file *m, struct intel_iommu *iommu, 135 + int bus) 136 + { 137 + struct context_entry *context; 138 + int devfn; 139 + 140 + seq_printf(m, " Context Table Entries for Bus: %d\n", bus); 141 + seq_puts(m, " Entry\tB:D.F\tHigh\tLow\n"); 142 + 143 + for (devfn = 0; devfn < 256; devfn++) { 144 + context = iommu_context_addr(iommu, bus, devfn, 0); 145 + if (!context) 146 + return; 147 + 148 + if (!context_present(context)) 149 + continue; 150 + 151 + seq_printf(m, " %-5d\t%02x:%02x.%x\t%-6llx\t%llx\n", devfn, 152 + bus, PCI_SLOT(devfn), PCI_FUNC(devfn), 153 + context[0].hi, context[0].lo); 154 + } 155 + } 156 + 157 + static void root_tbl_entry_show(struct seq_file *m, struct intel_iommu *iommu) 158 + { 159 + unsigned long flags; 160 + int bus; 161 + 162 + spin_lock_irqsave(&iommu->lock, flags); 163 + seq_printf(m, "IOMMU %s: Root Table Address:%llx\n", iommu->name, 164 + (u64)virt_to_phys(iommu->root_entry)); 165 + seq_puts(m, "Root Table Entries:\n"); 166 + 167 + for (bus = 0; bus < 256; bus++) { 168 + if (!(iommu->root_entry[bus].lo & 1)) 169 + continue; 170 + 171 + seq_printf(m, " Bus: %d H: %llx L: %llx\n", bus, 172 + iommu->root_entry[bus].hi, 173 + iommu->root_entry[bus].lo); 174 + 175 + ctx_tbl_entry_show(m, iommu, bus); 176 + seq_putc(m, '\n'); 177 + } 178 + spin_unlock_irqrestore(&iommu->lock, flags); 179 + } 180 + 181 + static int dmar_translation_struct_show(struct seq_file *m, void *unused) 182 + { 183 + 
struct dmar_drhd_unit *drhd; 184 + struct intel_iommu *iommu; 185 + 186 + rcu_read_lock(); 187 + for_each_active_iommu(iommu, drhd) { 188 + root_tbl_entry_show(m, iommu); 189 + seq_putc(m, '\n'); 190 + } 191 + rcu_read_unlock(); 192 + 193 + return 0; 194 + } 195 + DEFINE_SHOW_ATTRIBUTE(dmar_translation_struct); 196 + 197 + #ifdef CONFIG_IRQ_REMAP 198 + static void ir_tbl_remap_entry_show(struct seq_file *m, 199 + struct intel_iommu *iommu) 200 + { 201 + struct irte *ri_entry; 202 + unsigned long flags; 203 + int idx; 204 + 205 + seq_puts(m, " Entry SrcID DstID Vct IRTE_high\t\tIRTE_low\n"); 206 + 207 + raw_spin_lock_irqsave(&irq_2_ir_lock, flags); 208 + for (idx = 0; idx < INTR_REMAP_TABLE_ENTRIES; idx++) { 209 + ri_entry = &iommu->ir_table->base[idx]; 210 + if (!ri_entry->present || ri_entry->p_pst) 211 + continue; 212 + 213 + seq_printf(m, " %-5d %02x:%02x.%01x %08x %02x %016llx\t%016llx\n", 214 + idx, PCI_BUS_NUM(ri_entry->sid), 215 + PCI_SLOT(ri_entry->sid), PCI_FUNC(ri_entry->sid), 216 + ri_entry->dest_id, ri_entry->vector, 217 + ri_entry->high, ri_entry->low); 218 + } 219 + raw_spin_unlock_irqrestore(&irq_2_ir_lock, flags); 220 + } 221 + 222 + static void ir_tbl_posted_entry_show(struct seq_file *m, 223 + struct intel_iommu *iommu) 224 + { 225 + struct irte *pi_entry; 226 + unsigned long flags; 227 + int idx; 228 + 229 + seq_puts(m, " Entry SrcID PDA_high PDA_low Vct IRTE_high\t\tIRTE_low\n"); 230 + 231 + raw_spin_lock_irqsave(&irq_2_ir_lock, flags); 232 + for (idx = 0; idx < INTR_REMAP_TABLE_ENTRIES; idx++) { 233 + pi_entry = &iommu->ir_table->base[idx]; 234 + if (!pi_entry->present || !pi_entry->p_pst) 235 + continue; 236 + 237 + seq_printf(m, " %-5d %02x:%02x.%01x %08x %08x %02x %016llx\t%016llx\n", 238 + idx, PCI_BUS_NUM(pi_entry->sid), 239 + PCI_SLOT(pi_entry->sid), PCI_FUNC(pi_entry->sid), 240 + pi_entry->pda_h, pi_entry->pda_l << 6, 241 + pi_entry->vector, pi_entry->high, 242 + pi_entry->low); 243 + } 244 + raw_spin_unlock_irqrestore(&irq_2_ir_lock, 
flags); 245 + } 246 + 247 + /* 248 + * For active IOMMUs go through the Interrupt remapping 249 + * table and print valid entries in a table format for 250 + * Remapped and Posted Interrupts. 251 + */ 252 + static int ir_translation_struct_show(struct seq_file *m, void *unused) 253 + { 254 + struct dmar_drhd_unit *drhd; 255 + struct intel_iommu *iommu; 256 + u64 irta; 257 + 258 + rcu_read_lock(); 259 + for_each_active_iommu(iommu, drhd) { 260 + if (!ecap_ir_support(iommu->ecap)) 261 + continue; 262 + 263 + seq_printf(m, "Remapped Interrupt supported on IOMMU: %s\n", 264 + iommu->name); 265 + 266 + if (iommu->ir_table) { 267 + irta = virt_to_phys(iommu->ir_table->base); 268 + seq_printf(m, " IR table address:%llx\n", irta); 269 + ir_tbl_remap_entry_show(m, iommu); 270 + } else { 271 + seq_puts(m, "Interrupt Remapping is not enabled\n"); 272 + } 273 + seq_putc(m, '\n'); 274 + } 275 + 276 + seq_puts(m, "****\n\n"); 277 + 278 + for_each_active_iommu(iommu, drhd) { 279 + if (!cap_pi_support(iommu->cap)) 280 + continue; 281 + 282 + seq_printf(m, "Posted Interrupt supported on IOMMU: %s\n", 283 + iommu->name); 284 + 285 + if (iommu->ir_table) { 286 + irta = virt_to_phys(iommu->ir_table->base); 287 + seq_printf(m, " IR table address:%llx\n", irta); 288 + ir_tbl_posted_entry_show(m, iommu); 289 + } else { 290 + seq_puts(m, "Interrupt Remapping is not enabled\n"); 291 + } 292 + seq_putc(m, '\n'); 293 + } 294 + rcu_read_unlock(); 295 + 296 + return 0; 297 + } 298 + DEFINE_SHOW_ATTRIBUTE(ir_translation_struct); 299 + #endif 300 + 301 + void __init intel_iommu_debugfs_init(void) 302 + { 303 + struct dentry *intel_iommu_debug = debugfs_create_dir("intel", 304 + iommu_debugfs_dir); 305 + 306 + debugfs_create_file("iommu_regset", 0444, intel_iommu_debug, NULL, 307 + &iommu_regset_fops); 308 + debugfs_create_file("dmar_translation_struct", 0444, intel_iommu_debug, 309 + NULL, &dmar_translation_struct_fops); 310 + #ifdef CONFIG_IRQ_REMAP 311 + 
debugfs_create_file("ir_translation_struct", 0444, intel_iommu_debug, 312 + NULL, &ir_translation_struct_fops); 313 + #endif 314 + }
+4 -28
drivers/iommu/intel-iommu.c
··· 185 185 static int force_on = 0; 186 186 int intel_iommu_tboot_noforce; 187 187 188 - /* 189 - * 0: Present 190 - * 1-11: Reserved 191 - * 12-63: Context Ptr (12 - (haw-1)) 192 - * 64-127: Reserved 193 - */ 194 - struct root_entry { 195 - u64 lo; 196 - u64 hi; 197 - }; 198 188 #define ROOT_ENTRY_NR (VTD_PAGE_SIZE/sizeof(struct root_entry)) 199 189 200 190 /* ··· 210 220 211 221 return re->hi & VTD_PAGE_MASK; 212 222 } 213 - /* 214 - * low 64 bits: 215 - * 0: present 216 - * 1: fault processing disable 217 - * 2-3: translation type 218 - * 12-63: address space root 219 - * high 64 bits: 220 - * 0-2: address width 221 - * 3-6: aval 222 - * 8-23: domain id 223 - */ 224 - struct context_entry { 225 - u64 lo; 226 - u64 hi; 227 - }; 228 223 229 224 static inline void context_clear_pasid_enable(struct context_entry *context) 230 225 { ··· 236 261 return (context->lo & 1); 237 262 } 238 263 239 - static inline bool context_present(struct context_entry *context) 264 + bool context_present(struct context_entry *context) 240 265 { 241 266 return context_pasid_enabled(context) ? 242 267 __context_present(context) : ··· 763 788 domain->iommu_superpage = domain_update_iommu_superpage(NULL); 764 789 } 765 790 766 - static inline struct context_entry *iommu_context_addr(struct intel_iommu *iommu, 767 - u8 bus, u8 devfn, int alloc) 791 + struct context_entry *iommu_context_addr(struct intel_iommu *iommu, u8 bus, 792 + u8 devfn, int alloc) 768 793 { 769 794 struct root_entry *root = &iommu->root_entry[bus]; 770 795 struct context_entry *context; ··· 4835 4860 cpuhp_setup_state(CPUHP_IOMMU_INTEL_DEAD, "iommu/intel:dead", NULL, 4836 4861 intel_iommu_cpu_dead); 4837 4862 intel_iommu_enabled = 1; 4863 + intel_iommu_debugfs_init(); 4838 4864 4839 4865 return 0; 4840 4866
+1 -1
drivers/iommu/intel_irq_remapping.c
··· 76 76 * in single-threaded environment with interrupt disabled, so no need to tabke 77 77 * the dmar_global_lock. 78 78 */ 79 - static DEFINE_RAW_SPINLOCK(irq_2_ir_lock); 79 + DEFINE_RAW_SPINLOCK(irq_2_ir_lock); 80 80 static const struct irq_domain_ops intel_ir_domain_ops; 81 81 82 82 static void iommu_disable_irq_remapping(struct intel_iommu *iommu);
+10 -1
drivers/iommu/io-pgtable-arm-v7s.c
··· 587 587 } 588 588 589 589 io_pgtable_tlb_add_flush(&data->iop, iova, size, size, true); 590 + io_pgtable_tlb_sync(&data->iop); 590 591 return size; 591 592 } 592 593 ··· 643 642 io_pgtable_tlb_sync(iop); 644 643 ptep = iopte_deref(pte[i], lvl); 645 644 __arm_v7s_free_table(ptep, lvl + 1, data); 645 + } else if (iop->cfg.quirks & IO_PGTABLE_QUIRK_NON_STRICT) { 646 + /* 647 + * Order the PTE update against queueing the IOVA, to 648 + * guarantee that a flush callback from a different CPU 649 + * has observed it before the TLBIALL can be issued. 650 + */ 651 + smp_wmb(); 646 652 } else { 647 653 io_pgtable_tlb_add_flush(iop, iova, blk_size, 648 654 blk_size, true); ··· 720 712 IO_PGTABLE_QUIRK_NO_PERMS | 721 713 IO_PGTABLE_QUIRK_TLBI_ON_MAP | 722 714 IO_PGTABLE_QUIRK_ARM_MTK_4GB | 723 - IO_PGTABLE_QUIRK_NO_DMA)) 715 + IO_PGTABLE_QUIRK_NO_DMA | 716 + IO_PGTABLE_QUIRK_NON_STRICT)) 724 717 return NULL; 725 718 726 719 /* If ARM_MTK_4GB is enabled, the NO_PERMS is also expected. */
+16 -7
drivers/iommu/io-pgtable-arm.c
··· 574 574 return 0; 575 575 576 576 tablep = iopte_deref(pte, data); 577 + } else if (unmap_idx >= 0) { 578 + io_pgtable_tlb_add_flush(&data->iop, iova, size, size, true); 579 + io_pgtable_tlb_sync(&data->iop); 580 + return size; 577 581 } 578 582 579 - if (unmap_idx < 0) 580 - return __arm_lpae_unmap(data, iova, size, lvl, tablep); 581 - 582 - io_pgtable_tlb_add_flush(&data->iop, iova, size, size, true); 583 - return size; 583 + return __arm_lpae_unmap(data, iova, size, lvl, tablep); 584 584 } 585 585 586 586 static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data, ··· 610 610 io_pgtable_tlb_sync(iop); 611 611 ptep = iopte_deref(pte, data); 612 612 __arm_lpae_free_pgtable(data, lvl + 1, ptep); 613 + } else if (iop->cfg.quirks & IO_PGTABLE_QUIRK_NON_STRICT) { 614 + /* 615 + * Order the PTE update against queueing the IOVA, to 616 + * guarantee that a flush callback from a different CPU 617 + * has observed it before the TLBIALL can be issued. 618 + */ 619 + smp_wmb(); 613 620 } else { 614 621 io_pgtable_tlb_add_flush(iop, iova, size, size, true); 615 622 } ··· 779 772 u64 reg; 780 773 struct arm_lpae_io_pgtable *data; 781 774 782 - if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_NS | IO_PGTABLE_QUIRK_NO_DMA)) 775 + if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_NS | IO_PGTABLE_QUIRK_NO_DMA | 776 + IO_PGTABLE_QUIRK_NON_STRICT)) 783 777 return NULL; 784 778 785 779 data = arm_lpae_alloc_pgtable(cfg); ··· 872 864 struct arm_lpae_io_pgtable *data; 873 865 874 866 /* The NS quirk doesn't apply at stage 2 */ 875 - if (cfg->quirks & ~IO_PGTABLE_QUIRK_NO_DMA) 867 + if (cfg->quirks & ~(IO_PGTABLE_QUIRK_NO_DMA | 868 + IO_PGTABLE_QUIRK_NON_STRICT)) 876 869 return NULL; 877 870 878 871 data = arm_lpae_alloc_pgtable(cfg);
+5
drivers/iommu/io-pgtable.h
··· 71 71 * be accessed by a fully cache-coherent IOMMU or CPU (e.g. for a 72 72 * software-emulated IOMMU), such that pagetable updates need not 73 73 * be treated as explicit DMA data. 74 + * 75 + * IO_PGTABLE_QUIRK_NON_STRICT: Skip issuing synchronous leaf TLBIs 76 + * on unmap, for DMA domains using the flush queue mechanism for 77 + * delayed invalidation. 74 78 */ 75 79 #define IO_PGTABLE_QUIRK_ARM_NS BIT(0) 76 80 #define IO_PGTABLE_QUIRK_NO_PERMS BIT(1) 77 81 #define IO_PGTABLE_QUIRK_TLBI_ON_MAP BIT(2) 78 82 #define IO_PGTABLE_QUIRK_ARM_MTK_4GB BIT(3) 79 83 #define IO_PGTABLE_QUIRK_NO_DMA BIT(4) 84 + #define IO_PGTABLE_QUIRK_NON_STRICT BIT(5) 80 85 unsigned long quirks; 81 86 unsigned long pgsize_bitmap; 82 87 unsigned int ias;
+37 -21
drivers/iommu/iommu.c
··· 32 32 #include <linux/pci.h> 33 33 #include <linux/bitops.h> 34 34 #include <linux/property.h> 35 + #include <linux/fsl/mc.h> 35 36 #include <trace/events/iommu.h> 36 37 37 38 static struct kset *iommu_group_kset; ··· 42 41 #else 43 42 static unsigned int iommu_def_domain_type = IOMMU_DOMAIN_DMA; 44 43 #endif 44 + static bool iommu_dma_strict __read_mostly = true; 45 45 46 46 struct iommu_callback_data { 47 47 const struct iommu_ops *ops; ··· 132 130 return 0; 133 131 } 134 132 early_param("iommu.passthrough", iommu_set_def_domain_type); 133 + 134 + static int __init iommu_dma_setup(char *str) 135 + { 136 + return kstrtobool(str, &iommu_dma_strict); 137 + } 138 + early_param("iommu.strict", iommu_dma_setup); 135 139 136 140 static ssize_t iommu_group_attr_show(struct kobject *kobj, 137 141 struct attribute *__attr, char *buf) ··· 1032 1024 return iommu_group_alloc(); 1033 1025 } 1034 1026 1027 + /* Get the IOMMU group for device on fsl-mc bus */ 1028 + struct iommu_group *fsl_mc_device_group(struct device *dev) 1029 + { 1030 + struct device *cont_dev = fsl_mc_cont_dev(dev); 1031 + struct iommu_group *group; 1032 + 1033 + group = iommu_group_get(cont_dev); 1034 + if (!group) 1035 + group = iommu_group_alloc(); 1036 + return group; 1037 + } 1038 + 1035 1039 /** 1036 1040 * iommu_group_get_for_dev - Find or create the IOMMU group for a device 1037 1041 * @dev: target device ··· 1092 1072 group->default_domain = dom; 1093 1073 if (!group->domain) 1094 1074 group->domain = dom; 1075 + 1076 + if (dom && !iommu_dma_strict) { 1077 + int attr = 1; 1078 + iommu_domain_set_attr(dom, 1079 + DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE, 1080 + &attr); 1081 + } 1095 1082 } 1096 1083 1097 1084 ret = iommu_group_add_device(group, dev); ··· 1443 1416 EXPORT_SYMBOL_GPL(iommu_get_domain_for_dev); 1444 1417 1445 1418 /* 1446 - * IOMMU groups are really the natrual working unit of the IOMMU, but 1419 + * For IOMMU_DOMAIN_DMA implementations which already provide their own 1420 + * guarantees 
that the group and its default domain are valid and correct. 1421 + */ 1422 + struct iommu_domain *iommu_get_dma_domain(struct device *dev) 1423 + { 1424 + return dev->iommu_group->default_domain; 1425 + } 1426 + 1427 + /* 1428 + * IOMMU groups are really the natural working unit of the IOMMU, but 1447 1429 * the IOMMU API works on domains and devices. Bridge that gap by 1448 1430 * iterating over the devices in a group. Ideally we'd have a single 1449 1431 * device which represents the requestor ID of the group, but we also ··· 1832 1796 struct iommu_domain_geometry *geometry; 1833 1797 bool *paging; 1834 1798 int ret = 0; 1835 - u32 *count; 1836 1799 1837 1800 switch (attr) { 1838 1801 case DOMAIN_ATTR_GEOMETRY: ··· 1842 1807 case DOMAIN_ATTR_PAGING: 1843 1808 paging = data; 1844 1809 *paging = (domain->pgsize_bitmap != 0UL); 1845 - break; 1846 - case DOMAIN_ATTR_WINDOWS: 1847 - count = data; 1848 - 1849 - if (domain->ops->domain_get_windows != NULL) 1850 - *count = domain->ops->domain_get_windows(domain); 1851 - else 1852 - ret = -ENODEV; 1853 - 1854 1810 break; 1855 1811 default: 1856 1812 if (!domain->ops->domain_get_attr) ··· 1858 1832 enum iommu_attr attr, void *data) 1859 1833 { 1860 1834 int ret = 0; 1861 - u32 *count; 1862 1835 1863 1836 switch (attr) { 1864 - case DOMAIN_ATTR_WINDOWS: 1865 - count = data; 1866 - 1867 - if (domain->ops->domain_set_windows != NULL) 1868 - ret = domain->ops->domain_set_windows(domain, *count); 1869 - else 1870 - ret = -ENODEV; 1871 - 1872 - break; 1873 1837 default: 1874 1838 if (domain->ops->domain_set_attr == NULL) 1875 1839 return -EINVAL;
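The new iommu.strict= early param above is parsed with kstrtobool(). A simplified userspace approximation of that parsing (the kernel helper also accepts "on"/"off"; only the forms documented in kernel-parameters.txt plus y/n are sketched here):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Simplified model of kstrtobool() as used by the iommu.strict=
 * early param: "1"/"y" selects strict (synchronous) invalidation,
 * "0"/"n" selects lazy (deferred) invalidation.
 */
static int parse_iommu_strict(const char *str, bool *strict)
{
	if (!str)
		return -1;
	switch (str[0]) {
	case '1':
	case 'y':
	case 'Y':
		*strict = true;
		return 0;
	case '0':
	case 'n':
	case 'N':
		*strict = false;
		return 0;
	default:
		return -1;
	}
}
```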
+15 -7
drivers/iommu/iova.c
··· 56 56 iovad->granule = granule; 57 57 iovad->start_pfn = start_pfn; 58 58 iovad->dma_32bit_pfn = 1UL << (32 - iova_shift(iovad)); 59 + iovad->max32_alloc_size = iovad->dma_32bit_pfn; 59 60 iovad->flush_cb = NULL; 60 61 iovad->fq = NULL; 61 62 iovad->anchor.pfn_lo = iovad->anchor.pfn_hi = IOVA_ANCHOR; ··· 140 139 141 140 cached_iova = rb_entry(iovad->cached32_node, struct iova, node); 142 141 if (free->pfn_hi < iovad->dma_32bit_pfn && 143 - free->pfn_lo >= cached_iova->pfn_lo) 142 + free->pfn_lo >= cached_iova->pfn_lo) { 144 143 iovad->cached32_node = rb_next(&free->node); 144 + iovad->max32_alloc_size = iovad->dma_32bit_pfn; 145 + } 145 146 146 147 cached_iova = rb_entry(iovad->cached_node, struct iova, node); 147 148 if (free->pfn_lo >= cached_iova->pfn_lo) ··· 193 190 194 191 /* Walk the tree backwards */ 195 192 spin_lock_irqsave(&iovad->iova_rbtree_lock, flags); 193 + if (limit_pfn <= iovad->dma_32bit_pfn && 194 + size >= iovad->max32_alloc_size) 195 + goto iova32_full; 196 + 196 197 curr = __get_cached_rbnode(iovad, limit_pfn); 197 198 curr_iova = rb_entry(curr, struct iova, node); 198 199 do { ··· 207 200 curr_iova = rb_entry(curr, struct iova, node); 208 201 } while (curr && new_pfn <= curr_iova->pfn_hi); 209 202 210 - if (limit_pfn < size || new_pfn < iovad->start_pfn) { 211 - spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags); 212 - return -ENOMEM; 213 - } 203 + if (limit_pfn < size || new_pfn < iovad->start_pfn) 204 + goto iova32_full; 214 205 215 206 /* pfn_lo will point to size aligned address if size_aligned is set */ 216 207 new->pfn_lo = new_pfn; ··· 219 214 __cached_rbnode_insert_update(iovad, new); 220 215 221 216 spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags); 222 - 223 - 224 217 return 0; 218 + 219 + iova32_full: 220 + iovad->max32_alloc_size = size; 221 + spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags); 222 + return -ENOMEM; 225 223 } 226 224 227 225 static struct kmem_cache *iova_cache;
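The iova.c hunk above adds max32_alloc_size: the size of the first failed 32-bit allocation is cached so later requests at least that large fail immediately without walking the rbtree, and the cache is reset whenever space below the 32-bit boundary is freed. A toy sketch of just that fail-fast bookkeeping (the rbtree walk is replaced by a caller-supplied outcome flag, purely for illustration):

```c
#include <assert.h>
#include <stdbool.h>

#define DMA_32BIT_PFN (1UL << 20)	/* arbitrary stand-in limit */

static unsigned long max32_alloc_size = DMA_32BIT_PFN;

/* Hypothetical allocator: rbtree_has_room models the real tree walk */
static bool try_alloc32(unsigned long size, bool rbtree_has_room)
{
	/* Fail fast: a request >= the last failed size cannot fit either */
	if (size >= max32_alloc_size)
		return false;
	if (!rbtree_has_room) {
		max32_alloc_size = size;	/* remember the failed size */
		return false;
	}
	return true;
}

/* Freeing below the 32-bit boundary makes larger requests worth retrying */
static void free_iova32(void)
{
	max32_alloc_size = DMA_32BIT_PFN;
}
```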
+1 -4
drivers/iommu/ipmmu-vmsa.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * IPMMU VMSA 3 4 * 4 5 * Copyright (C) 2014 Renesas Electronics Corporation 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License as published by 8 - * the Free Software Foundation; version 2 of the License. 9 6 */ 10 7 11 8 #include <linux/bitmap.h>
+22 -3
drivers/iommu/of_iommu.c
··· 24 24 #include <linux/of_iommu.h> 25 25 #include <linux/of_pci.h> 26 26 #include <linux/slab.h> 27 + #include <linux/fsl/mc.h> 27 28 28 29 #define NO_IOMMU 1 29 30 ··· 133 132 struct of_phandle_args iommu_spec = { .args_count = 1 }; 134 133 int err; 135 134 136 - err = of_pci_map_rid(info->np, alias, "iommu-map", 137 - "iommu-map-mask", &iommu_spec.np, 138 - iommu_spec.args); 135 + err = of_map_rid(info->np, alias, "iommu-map", "iommu-map-mask", 136 + &iommu_spec.np, iommu_spec.args); 139 137 if (err) 140 138 return err == -ENODEV ? NO_IOMMU : err; 141 139 142 140 err = of_iommu_xlate(info->dev, &iommu_spec); 141 + of_node_put(iommu_spec.np); 142 + return err; 143 + } 144 + 145 + static int of_fsl_mc_iommu_init(struct fsl_mc_device *mc_dev, 146 + struct device_node *master_np) 147 + { 148 + struct of_phandle_args iommu_spec = { .args_count = 1 }; 149 + int err; 150 + 151 + err = of_map_rid(master_np, mc_dev->icid, "iommu-map", 152 + "iommu-map-mask", &iommu_spec.np, 153 + iommu_spec.args); 154 + if (err) 155 + return err == -ENODEV ? NO_IOMMU : err; 156 + 157 + err = of_iommu_xlate(&mc_dev->dev, &iommu_spec); 143 158 of_node_put(iommu_spec.np); 144 159 return err; 145 160 } ··· 191 174 192 175 err = pci_for_each_dma_alias(to_pci_dev(dev), 193 176 of_pci_iommu_init, &info); 177 + } else if (dev_is_fsl_mc(dev)) { 178 + err = of_fsl_mc_iommu_init(to_fsl_mc_device(dev), master_np); 194 179 } else { 195 180 struct of_phandle_args iommu_spec; 196 181 int idx = 0;
+102
drivers/of/base.c
··· 2045 2045 2046 2046 return cache_level; 2047 2047 } 2048 + 2049 + /** 2050 + * of_map_rid - Translate a requester ID through a downstream mapping. 2051 + * @np: root complex device node. 2052 + * @rid: device requester ID to map. 2053 + * @map_name: property name of the map to use. 2054 + * @map_mask_name: optional property name of the mask to use. 2055 + * @target: optional pointer to a target device node. 2056 + * @id_out: optional pointer to receive the translated ID. 2057 + * 2058 + * Given a device requester ID, look up the appropriate implementation-defined 2059 + * platform ID and/or the target device which receives transactions on that 2060 + * ID, as per the "iommu-map" and "msi-map" bindings. Either of @target or 2061 + * @id_out may be NULL if only the other is required. If @target points to 2062 + * a non-NULL device node pointer, only entries targeting that node will be 2063 + * matched; if it points to a NULL value, it will receive the device node of 2064 + * the first matching target phandle, with a reference held. 2065 + * 2066 + * Return: 0 on success or a standard error code on failure. 2067 + */ 2068 + int of_map_rid(struct device_node *np, u32 rid, 2069 + const char *map_name, const char *map_mask_name, 2070 + struct device_node **target, u32 *id_out) 2071 + { 2072 + u32 map_mask, masked_rid; 2073 + int map_len; 2074 + const __be32 *map = NULL; 2075 + 2076 + if (!np || !map_name || (!target && !id_out)) 2077 + return -EINVAL; 2078 + 2079 + map = of_get_property(np, map_name, &map_len); 2080 + if (!map) { 2081 + if (target) 2082 + return -ENODEV; 2083 + /* Otherwise, no map implies no translation */ 2084 + *id_out = rid; 2085 + return 0; 2086 + } 2087 + 2088 + if (!map_len || map_len % (4 * sizeof(*map))) { 2089 + pr_err("%pOF: Error: Bad %s length: %d\n", np, 2090 + map_name, map_len); 2091 + return -EINVAL; 2092 + } 2093 + 2094 + /* The default is to select all bits. 
*/ 2095 + map_mask = 0xffffffff; 2096 + 2097 + /* 2098 + * Can be overridden by "{iommu,msi}-map-mask" property. 2099 + * If of_property_read_u32() fails, the default is used. 2100 + */ 2101 + if (map_mask_name) 2102 + of_property_read_u32(np, map_mask_name, &map_mask); 2103 + 2104 + masked_rid = map_mask & rid; 2105 + for ( ; map_len > 0; map_len -= 4 * sizeof(*map), map += 4) { 2106 + struct device_node *phandle_node; 2107 + u32 rid_base = be32_to_cpup(map + 0); 2108 + u32 phandle = be32_to_cpup(map + 1); 2109 + u32 out_base = be32_to_cpup(map + 2); 2110 + u32 rid_len = be32_to_cpup(map + 3); 2111 + 2112 + if (rid_base & ~map_mask) { 2113 + pr_err("%pOF: Invalid %s translation - %s-mask (0x%x) ignores rid-base (0x%x)\n", 2114 + np, map_name, map_name, 2115 + map_mask, rid_base); 2116 + return -EFAULT; 2117 + } 2118 + 2119 + if (masked_rid < rid_base || masked_rid >= rid_base + rid_len) 2120 + continue; 2121 + 2122 + phandle_node = of_find_node_by_phandle(phandle); 2123 + if (!phandle_node) 2124 + return -ENODEV; 2125 + 2126 + if (target) { 2127 + if (*target) 2128 + of_node_put(phandle_node); 2129 + else 2130 + *target = phandle_node; 2131 + 2132 + if (*target != phandle_node) 2133 + continue; 2134 + } 2135 + 2136 + if (id_out) 2137 + *id_out = masked_rid - rid_base + out_base; 2138 + 2139 + pr_debug("%pOF: %s, using mask %08x, rid-base: %08x, out-base: %08x, length: %08x, rid: %08x -> %08x\n", 2140 + np, map_name, map_mask, rid_base, out_base, 2141 + rid_len, rid, masked_rid - rid_base + out_base); 2142 + return 0; 2143 + } 2144 + 2145 + pr_err("%pOF: Invalid %s translation - no match for rid 0x%x on %pOF\n", 2146 + np, map_name, rid, target && *target ? *target : NULL); 2147 + return -EFAULT; 2148 + } 2149 + EXPORT_SYMBOL_GPL(of_map_rid);
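The core of of_map_rid() above is masking the requester ID and translating it through (rid-base, phandle, out-base, length) tuples. A standalone sketch of that translation step with a hypothetical map (phandle lookup and target matching omitted; -1 models the -EFAULT "no match" case):

```c
#include <assert.h>
#include <stdint.h>

struct rid_map_entry {
	uint32_t rid_base;
	uint32_t out_base;
	uint32_t length;
};

/*
 * Minimal model of the of_map_rid() arithmetic: apply the map mask,
 * find the covering entry, and offset into the output ID space.
 */
static int64_t map_rid(uint32_t rid, uint32_t mask,
		       const struct rid_map_entry *map, int nentries)
{
	uint32_t masked = rid & mask;
	int i;

	for (i = 0; i < nentries; i++) {
		if (masked >= map[i].rid_base &&
		    masked < map[i].rid_base + map[i].length)
			return (int64_t)masked - map[i].rid_base +
			       map[i].out_base;
	}
	return -1;	/* no match: of_map_rid() returns -EFAULT */
}
```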
+2 -3
drivers/of/irq.c
··· 22 22 #include <linux/module.h> 23 23 #include <linux/of.h> 24 24 #include <linux/of_irq.h> 25 - #include <linux/of_pci.h> 26 25 #include <linux/string.h> 27 26 #include <linux/slab.h> 28 27 ··· 587 588 * "msi-map" property. 588 589 */ 589 590 for (parent_dev = dev; parent_dev; parent_dev = parent_dev->parent) 590 - if (!of_pci_map_rid(parent_dev->of_node, rid_in, "msi-map", 591 - "msi-map-mask", np, &rid_out)) 591 + if (!of_map_rid(parent_dev->of_node, rid_in, "msi-map", 592 + "msi-map-mask", np, &rid_out)) 592 593 break; 593 594 return rid_out; 594 595 }
-101
drivers/pci/of.c
··· 355 355 EXPORT_SYMBOL_GPL(devm_of_pci_get_host_bridge_resources); 356 356 #endif /* CONFIG_OF_ADDRESS */ 357 357 358 - /** 359 - * of_pci_map_rid - Translate a requester ID through a downstream mapping. 360 - * @np: root complex device node. 361 - * @rid: PCI requester ID to map. 362 - * @map_name: property name of the map to use. 363 - * @map_mask_name: optional property name of the mask to use. 364 - * @target: optional pointer to a target device node. 365 - * @id_out: optional pointer to receive the translated ID. 366 - * 367 - * Given a PCI requester ID, look up the appropriate implementation-defined 368 - * platform ID and/or the target device which receives transactions on that 369 - * ID, as per the "iommu-map" and "msi-map" bindings. Either of @target or 370 - * @id_out may be NULL if only the other is required. If @target points to 371 - * a non-NULL device node pointer, only entries targeting that node will be 372 - * matched; if it points to a NULL value, it will receive the device node of 373 - * the first matching target phandle, with a reference held. 374 - * 375 - * Return: 0 on success or a standard error code on failure. 376 - */ 377 - int of_pci_map_rid(struct device_node *np, u32 rid, 378 - const char *map_name, const char *map_mask_name, 379 - struct device_node **target, u32 *id_out) 380 - { 381 - u32 map_mask, masked_rid; 382 - int map_len; 383 - const __be32 *map = NULL; 384 - 385 - if (!np || !map_name || (!target && !id_out)) 386 - return -EINVAL; 387 - 388 - map = of_get_property(np, map_name, &map_len); 389 - if (!map) { 390 - if (target) 391 - return -ENODEV; 392 - /* Otherwise, no map implies no translation */ 393 - *id_out = rid; 394 - return 0; 395 - } 396 - 397 - if (!map_len || map_len % (4 * sizeof(*map))) { 398 - pr_err("%pOF: Error: Bad %s length: %d\n", np, 399 - map_name, map_len); 400 - return -EINVAL; 401 - } 402 - 403 - /* The default is to select all bits. */ 404 - map_mask = 0xffffffff; 405 - 406 - /* 407 - * Can be overridden by "{iommu,msi}-map-mask" property. 408 - * If of_property_read_u32() fails, the default is used. 409 - */ 410 - if (map_mask_name) 411 - of_property_read_u32(np, map_mask_name, &map_mask); 412 - 413 - masked_rid = map_mask & rid; 414 - for ( ; map_len > 0; map_len -= 4 * sizeof(*map), map += 4) { 415 - struct device_node *phandle_node; 416 - u32 rid_base = be32_to_cpup(map + 0); 417 - u32 phandle = be32_to_cpup(map + 1); 418 - u32 out_base = be32_to_cpup(map + 2); 419 - u32 rid_len = be32_to_cpup(map + 3); 420 - 421 - if (rid_base & ~map_mask) { 422 - pr_err("%pOF: Invalid %s translation - %s-mask (0x%x) ignores rid-base (0x%x)\n", 423 - np, map_name, map_name, 424 - map_mask, rid_base); 425 - return -EFAULT; 426 - } 427 - 428 - if (masked_rid < rid_base || masked_rid >= rid_base + rid_len) 429 - continue; 430 - 431 - phandle_node = of_find_node_by_phandle(phandle); 432 - if (!phandle_node) 433 - return -ENODEV; 434 - 435 - if (target) { 436 - if (*target) 437 - of_node_put(phandle_node); 438 - else 439 - *target = phandle_node; 440 - 441 - if (*target != phandle_node) 442 - continue; 443 - } 444 - 445 - if (id_out) 446 - *id_out = masked_rid - rid_base + out_base; 447 - 448 - pr_debug("%pOF: %s, using mask %08x, rid-base: %08x, out-base: %08x, length: %08x, rid: %08x -> %08x\n", 449 - np, map_name, map_mask, rid_base, out_base, 450 - rid_len, rid, masked_rid - rid_base + out_base); 451 - return 0; 452 - } 453 - 454 - pr_err("%pOF: Invalid %s translation - no match for rid 0x%x on %pOF\n", 455 - np, map_name, rid, target && *target ? *target : NULL); 456 - return -EFAULT; 457 - } 458 - 459 358 #if IS_ENABLED(CONFIG_OF_IRQ) 460 359 /** 461 360 * of_irq_parse_pci - Resolve the interrupt for a PCI device
+8
include/linux/fsl/mc.h
··· 351 351 #define dev_is_fsl_mc(_dev) (0) 352 352 #endif 353 353 354 + /* Macro to check if a device is a container device */ 355 + #define fsl_mc_is_cont_dev(_dev) (to_fsl_mc_device(_dev)->flags & \ 356 + FSL_MC_IS_DPRC) 357 + 358 + /* Macro to get the container device of a MC device */ 359 + #define fsl_mc_cont_dev(_dev) (fsl_mc_is_cont_dev(_dev) ? \ 360 + (_dev) : (_dev)->parent) 361 + 354 362 /* 355 363 * module_fsl_mc_driver() - Helper macro for drivers that don't do 356 364 * anything special in module init/exit. This eliminates a lot of
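The two new macros encode a simple rule: a DPRC (container) device resolves to itself, while any other MC device resolves to its parent, which is its container. A hedged standalone sketch of that resolution (toy types; the real macros operate on `struct fsl_mc_device` and the `FSL_MC_IS_DPRC` flag):

```c
#include <assert.h>
#include <stddef.h>
#include <stdbool.h>

#define TOY_FSL_MC_IS_DPRC 0x1	/* flag bit, mirroring FSL_MC_IS_DPRC */

struct toy_mc_device {
	unsigned int flags;
	struct toy_mc_device *parent;
};

/* A device is a container iff it carries the DPRC flag. */
static bool toy_is_cont_dev(const struct toy_mc_device *dev)
{
	return dev->flags & TOY_FSL_MC_IS_DPRC;
}

/* Containers resolve to themselves; children resolve to their parent. */
static struct toy_mc_device *toy_cont_dev(struct toy_mc_device *dev)
{
	return toy_is_cont_dev(dev) ? dev : dev->parent;
}
```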
+72
include/linux/intel-iommu.h
··· 72 72 #define DMAR_PEDATA_REG 0xe4 /* Page request event interrupt data register */ 73 73 #define DMAR_PEADDR_REG 0xe8 /* Page request event interrupt addr register */ 74 74 #define DMAR_PEUADDR_REG 0xec /* Page request event Upper address register */ 75 + #define DMAR_MTRRCAP_REG 0x100 /* MTRR capability register */ 76 + #define DMAR_MTRRDEF_REG 0x108 /* MTRR default type register */ 77 + #define DMAR_MTRR_FIX64K_00000_REG 0x120 /* MTRR Fixed range registers */ 78 + #define DMAR_MTRR_FIX16K_80000_REG 0x128 79 + #define DMAR_MTRR_FIX16K_A0000_REG 0x130 80 + #define DMAR_MTRR_FIX4K_C0000_REG 0x138 81 + #define DMAR_MTRR_FIX4K_C8000_REG 0x140 82 + #define DMAR_MTRR_FIX4K_D0000_REG 0x148 83 + #define DMAR_MTRR_FIX4K_D8000_REG 0x150 84 + #define DMAR_MTRR_FIX4K_E0000_REG 0x158 85 + #define DMAR_MTRR_FIX4K_E8000_REG 0x160 86 + #define DMAR_MTRR_FIX4K_F0000_REG 0x168 87 + #define DMAR_MTRR_FIX4K_F8000_REG 0x170 88 + #define DMAR_MTRR_PHYSBASE0_REG 0x180 /* MTRR Variable range registers */ 89 + #define DMAR_MTRR_PHYSMASK0_REG 0x188 90 + #define DMAR_MTRR_PHYSBASE1_REG 0x190 91 + #define DMAR_MTRR_PHYSMASK1_REG 0x198 92 + #define DMAR_MTRR_PHYSBASE2_REG 0x1a0 93 + #define DMAR_MTRR_PHYSMASK2_REG 0x1a8 94 + #define DMAR_MTRR_PHYSBASE3_REG 0x1b0 95 + #define DMAR_MTRR_PHYSMASK3_REG 0x1b8 96 + #define DMAR_MTRR_PHYSBASE4_REG 0x1c0 97 + #define DMAR_MTRR_PHYSMASK4_REG 0x1c8 98 + #define DMAR_MTRR_PHYSBASE5_REG 0x1d0 99 + #define DMAR_MTRR_PHYSMASK5_REG 0x1d8 100 + #define DMAR_MTRR_PHYSBASE6_REG 0x1e0 101 + #define DMAR_MTRR_PHYSMASK6_REG 0x1e8 102 + #define DMAR_MTRR_PHYSBASE7_REG 0x1f0 103 + #define DMAR_MTRR_PHYSMASK7_REG 0x1f8 104 + #define DMAR_MTRR_PHYSBASE8_REG 0x200 105 + #define DMAR_MTRR_PHYSMASK8_REG 0x208 106 + #define DMAR_MTRR_PHYSBASE9_REG 0x210 107 + #define DMAR_MTRR_PHYSMASK9_REG 0x218 108 + #define DMAR_VCCAP_REG 0xe00 /* Virtual command capability register */ 109 + #define DMAR_VCMD_REG 0xe10 /* Virtual command register */ 110 + #define DMAR_VCRSP_REG 0xe20 /* Virtual command response register */ 75 111 76 112 #define OFFSET_STRIDE (9) 77 113 ··· 425 389 struct pasid_state_entry; 426 390 struct page_req_dsc; 427 391 392 + /* 393 + * 0: Present 394 + * 1-11: Reserved 395 + * 12-63: Context Ptr (12 - (haw-1)) 396 + * 64-127: Reserved 397 + */ 398 + struct root_entry { 399 + u64 lo; 400 + u64 hi; 401 + }; 402 + 403 + /* 404 + * low 64 bits: 405 + * 0: present 406 + * 1: fault processing disable 407 + * 2-3: translation type 408 + * 12-63: address space root 409 + * high 64 bits: 410 + * 0-2: address width 411 + * 3-6: aval 412 + * 8-23: domain id 413 + */ 414 + struct context_entry { 415 + u64 lo; 416 + u64 hi; 417 + }; 418 + 428 419 struct dmar_domain { 429 420 int nid; /* node id */ 430 421 ··· 621 558 extern struct intel_iommu *intel_svm_device_to_iommu(struct device *dev); 622 559 #endif 623 560 561 + #ifdef CONFIG_INTEL_IOMMU_DEBUGFS 562 + void intel_iommu_debugfs_init(void); 563 + #else 564 + static inline void intel_iommu_debugfs_init(void) {} 565 + #endif /* CONFIG_INTEL_IOMMU_DEBUGFS */ 566 + 624 567 extern const struct attribute_group *intel_iommu_groups[]; 568 + bool context_present(struct context_entry *context); 569 + struct context_entry *iommu_context_addr(struct intel_iommu *iommu, u8 bus, 570 + u8 devfn, int alloc); 625 571 626 572 #endif
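Per the bit layout in the comments above, the newly exported `context_present()` check reduces to testing the Present bit (bit 0) of the entry's low 64 bits. A standalone sketch of that test (toy names; the kernel's helper operates on the real `struct context_entry` from the hunk above):

```c
#include <assert.h>
#include <stdint.h>
#include <stdbool.h>

/* Mirrors the two-u64 layout of struct context_entry shown above. */
struct toy_context_entry {
	uint64_t lo;
	uint64_t hi;
};

/* Bit 0 of the low word is the Present bit. */
static bool toy_context_present(const struct toy_context_entry *ce)
{
	return ce->lo & 1ULL;
}
```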
+4 -6
include/linux/iommu.h
··· 124 124 DOMAIN_ATTR_FSL_PAMU_ENABLE, 125 125 DOMAIN_ATTR_FSL_PAMUV1, 126 126 DOMAIN_ATTR_NESTING, /* two stages of translation */ 127 + DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE, 127 128 DOMAIN_ATTR_MAX, 128 129 }; 129 130 ··· 182 181 * @apply_resv_region: Temporary helper call-back for iova reserved ranges 183 182 * @domain_window_enable: Configure and enable a particular window for a domain 184 183 * @domain_window_disable: Disable a particular window for a domain 185 - * @domain_set_windows: Set the number of windows for a domain 186 - * @domain_get_windows: Return the number of windows for a domain 187 184 * @of_xlate: add OF master IDs to iommu grouping 188 185 * @pgsize_bitmap: bitmap of all possible supported page sizes 189 186 */ ··· 222 223 int (*domain_window_enable)(struct iommu_domain *domain, u32 wnd_nr, 223 224 phys_addr_t paddr, u64 size, int prot); 224 225 void (*domain_window_disable)(struct iommu_domain *domain, u32 wnd_nr); 225 - /* Set the number of windows per domain */ 226 - int (*domain_set_windows)(struct iommu_domain *domain, u32 w_count); 227 - /* Get the number of windows per domain */ 228 - u32 (*domain_get_windows)(struct iommu_domain *domain); 229 226 230 227 int (*of_xlate)(struct device *dev, struct of_phandle_args *args); 231 228 bool (*is_attach_deferred)(struct iommu_domain *domain, struct device *dev); ··· 288 293 extern void iommu_detach_device(struct iommu_domain *domain, 289 294 struct device *dev); 290 295 extern struct iommu_domain *iommu_get_domain_for_dev(struct device *dev); 296 + extern struct iommu_domain *iommu_get_dma_domain(struct device *dev); 291 297 extern int iommu_map(struct iommu_domain *domain, unsigned long iova, 292 298 phys_addr_t paddr, size_t size, int prot); 293 299 extern size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova, ··· 373 377 extern struct iommu_group *pci_device_group(struct device *dev); 374 378 /* Generic device grouping function */ 375 379 extern struct iommu_group *generic_device_group(struct device *dev); 380 + /* FSL-MC device grouping function */ 381 + struct iommu_group *fsl_mc_device_group(struct device *dev); 376 382 377 383 /** 378 384 * struct iommu_fwspec - per-device IOMMU instance data
+1
include/linux/iova.h
··· 75 75 unsigned long granule; /* pfn granularity for this domain */ 76 76 unsigned long start_pfn; /* Lower limit for this domain */ 77 77 unsigned long dma_32bit_pfn; 78 + unsigned long max32_alloc_size; /* Size of last failed allocation */ 78 79 struct iova anchor; /* rbtree lookup anchor */ 79 80 struct iova_rcache rcaches[IOVA_RANGE_CACHE_MAX_SIZE]; /* IOVA range caches */ 80 81
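The new `max32_alloc_size` field caches the size of the last allocation that failed below `dma_32bit_pfn`, so equal-or-larger requests can bail out without re-walking the allocation tree; freeing space makes those sizes viable again. The allocator logic itself is not part of this header hunk, so the following is only a plausible sketch of the fast-fail policy under assumed semantics (toy struct and helper names, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>

struct toy_iovad {
	unsigned long dma_32bit_pfn;
	unsigned long max32_alloc_size;	/* size of last failed 32-bit alloc */
};

/* Is a 32-bit-limited allocation of @size pfns worth attempting? */
static bool toy_alloc_may_succeed(const struct toy_iovad *iovad,
				  unsigned long size, unsigned long limit_pfn)
{
	/* Above the 32-bit window the cached failure size does not apply. */
	return limit_pfn > iovad->dma_32bit_pfn ||
	       size < iovad->max32_alloc_size;
}

/* Record a failed 32-bit allocation so larger requests fail fast. */
static void toy_record_alloc_failure(struct toy_iovad *iovad,
				     unsigned long size)
{
	iovad->max32_alloc_size = size;
}

/* Freed space may satisfy previously failing sizes; reset the cache. */
static void toy_record_free(struct toy_iovad *iovad)
{
	iovad->max32_alloc_size = iovad->dma_32bit_pfn + 1;
}
```

The reset value (one past `dma_32bit_pfn`) acts as "no known failure", since no request within the 32-bit window can be that large.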
+11
include/linux/of.h
··· 550 550 551 551 extern int of_cpu_node_to_id(struct device_node *np); 552 552 553 + int of_map_rid(struct device_node *np, u32 rid, 554 + const char *map_name, const char *map_mask_name, 555 + struct device_node **target, u32 *id_out); 556 + 553 557 #else /* CONFIG_OF */ 554 558 555 559 static inline void of_core_init(void) ··· 954 950 static inline int of_cpu_node_to_id(struct device_node *np) 955 951 { 956 952 return -ENODEV; 953 + } 954 + 955 + static inline int of_map_rid(struct device_node *np, u32 rid, 956 + const char *map_name, const char *map_mask_name, 957 + struct device_node **target, u32 *id_out) 958 + { 959 + return -EINVAL; 957 960 } 958 961 959 962 #define of_match_ptr(_ptr) NULL
-10
include/linux/of_pci.h
··· 14 14 unsigned int devfn); 15 15 int of_pci_get_devfn(struct device_node *np); 16 16 void of_pci_check_probe_only(void); 17 - int of_pci_map_rid(struct device_node *np, u32 rid, 18 - const char *map_name, const char *map_mask_name, 19 - struct device_node **target, u32 *id_out); 20 17 #else 21 18 static inline struct device_node *of_pci_find_child_device(struct device_node *parent, 22 19 unsigned int devfn) ··· 22 25 } 23 26 24 27 static inline int of_pci_get_devfn(struct device_node *np) 25 - { 26 - return -EINVAL; 27 - } 28 - 29 - static inline int of_pci_map_rid(struct device_node *np, u32 rid, 30 - const char *map_name, const char *map_mask_name, 31 - struct device_node **target, u32 *id_out) 32 28 { 33 29 return -EINVAL; 34 30 }