Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'iommu-updates-v3.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu

Pull IOMMU updates from Joerg Roedel:
"The most important part of these updates is the IOMMU groups code
enhancement written by Alex Williamson. It abstracts the problem that
a given hardware IOMMU can't isolate any given device from any other
device (e.g. 32 bit PCI devices can't usually be isolated). Devices
that can't be isolated are grouped together. This code is required
for the upcoming VFIO framework.

Another IOMMU-API change written by me is the introduction of domain
attributes. This makes it easier to handle GART-like IOMMUs with the
IOMMU-API because now the start-address and the size of the domain
address space can be queried.

Besides that there are a few cleanups and fixes for the NVidia Tegra
IOMMU drivers and the reworked init-code for the AMD IOMMU. The
latter is from my patch-set to support interrupt remapping. The rest
of this patch-set requires x86 changes which are not mergeable yet. So
full support for interrupt remapping with AMD IOMMUs will come in a
future merge window."
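
The grouping API described in the message above is consumed by drivers roughly as follows. This is a minimal kernel-context sketch, not code from this merge: `claim_device_group()` is a hypothetical helper, while `iommu_group_get()`, `iommu_group_id()` and `iommu_group_put()` are the interfaces the IOMMU groups code provides (the group id is the same integer that names the group's directory under /sys/kernel/iommu_groups).

```c
/*
 * Sketch only (kernel context, not a standalone program): how a
 * driver such as the upcoming VFIO framework can look up the group
 * a device belongs to.  Error handling is abbreviated.
 */
#include <linux/iommu.h>

static int claim_device_group(struct device *dev)	/* hypothetical */
{
	struct iommu_group *group;
	int id;

	/* Every device behind an IOMMU belongs to a group; devices the
	 * hardware cannot isolate from each other share one group. */
	group = iommu_group_get(dev);
	if (!group)
		return -ENODEV;	/* no IOMMU, or device not translated */

	/* Matches the /sys/kernel/iommu_groups/<id> directory name */
	id = iommu_group_id(group);
	dev_info(dev, "member of iommu group %d\n", id);

	iommu_group_put(group);
	return 0;
}
```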

* tag 'iommu-updates-v3.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (33 commits)
iommu/amd: Fix hotplug with iommu=pt
iommu/amd: Add missing spin_lock initialization
iommu/amd: Convert iommu initialization to state machine
iommu/amd: Introduce amd_iommu_init_dma routine
iommu/amd: Move unmap_flush message to amd_iommu_init_dma_ops()
iommu/amd: Split enable_iommus() routine
iommu/amd: Introduce early_amd_iommu_init routine
iommu/amd: Move informational prinks out of iommu_enable
iommu/amd: Split out PCI related parts of IOMMU initialization
iommu/amd: Use acpi_get_table instead of acpi_table_parse
iommu/amd: Fix sparse warnings
iommu/tegra: Don't call alloc_pdir with as->lock
iommu/tegra: smmu: Fix unsleepable memory allocation at alloc_pdir()
iommu/tegra: smmu: Remove unnecessary sanity check at alloc_pdir()
iommu/exynos: Implement DOMAIN_ATTR_GEOMETRY attribute
iommu/tegra: Implement DOMAIN_ATTR_GEOMETRY attribute
iommu/msm: Implement DOMAIN_ATTR_GEOMETRY attribute
iommu/omap: Implement DOMAIN_ATTR_GEOMETRY attribute
iommu/vt-d: Implement DOMAIN_ATTR_GEOMETRY attribute
iommu/amd: Implement DOMAIN_ATTR_GEOMETRY attribute
...

+1533 -487
+14
Documentation/ABI/testing/sysfs-kernel-iommu_groups
···
+What:		/sys/kernel/iommu_groups/
+Date:		May 2012
+KernelVersion:	v3.5
+Contact:	Alex Williamson <alex.williamson@redhat.com>
+Description:	/sys/kernel/iommu_groups/ contains a number of sub-
+		directories, each representing an IOMMU group. The
+		name of the sub-directory matches the iommu_group_id()
+		for the group, which is an integer value. Within each
+		subdirectory is another directory named "devices" with
+		links to the sysfs devices contained in this group.
+		The group directory also optionally contains a "name"
+		file if the IOMMU driver has chosen to register a more
+		common name for the group.
+Users:
+21
Documentation/devicetree/bindings/iommu/nvidia,tegra30-smmu.txt
···
+NVIDIA Tegra 30 IOMMU H/W, SMMU (System Memory Management Unit)
+
+Required properties:
+- compatible : "nvidia,tegra30-smmu"
+- reg : Should contain 3 register banks(address and length) for each
+  of the SMMU register blocks.
+- interrupts : Should contain MC General interrupt.
+- nvidia,#asids : # of ASIDs
+- dma-window : IOVA start address and length.
+- nvidia,ahb : phandle to the ahb bus connected to SMMU.
+
+Example:
+	smmu {
+		compatible = "nvidia,tegra30-smmu";
+		reg = <0x7000f010 0x02c
+		       0x7000f1f0 0x010
+		       0x7000f228 0x05c>;
+		nvidia,#asids = <4>;		/* # of ASIDs */
+		dma-window = <0 0x40000000>;	/* IOVA start & length */
+		nvidia,ahb = <&ahb>;
+	};
-1
Documentation/kernel-parameters.txt
···
 		forcesac
 		soft
 		pt		[x86, IA-64]
-		group_mf	[x86, IA-64]
 
 
 	io7=		[HW] IO7 for Marvel based alpha systems
-2
arch/ia64/include/asm/iommu.h
···
 extern int force_iommu, no_iommu;
 extern int iommu_pass_through;
 extern int iommu_detected;
-extern int iommu_group_mf;
 #else
 #define iommu_pass_through	(0)
 #define no_iommu		(1)
 #define iommu_detected		(0)
-#define iommu_group_mf		(0)
 #endif
 extern void iommu_dma_init(void);
 extern void machvec_init(const char *name);
-1
arch/ia64/kernel/pci-dma.c
···
 #endif
 
 int iommu_pass_through;
-int iommu_group_mf;
 
 /* Dummy device used for NULL arguments (normally ISA). Better would
    be probably a smaller DMA mask, but this is bug-to-bug compatible
-1
arch/x86/include/asm/iommu.h
···
 extern int force_iommu, no_iommu;
 extern int iommu_detected;
 extern int iommu_pass_through;
-extern int iommu_group_mf;
 
 /* 10 seconds */
 #define DMAR_OPERATION_TIMEOUT ((cycles_t) tsc_khz*10*1000)
-11
arch/x86/kernel/pci-dma.c
···
  */
 int iommu_pass_through __read_mostly;
 
-/*
- * Group multi-function PCI devices into a single device-group for the
- * iommu_device_group interface. This tells the iommu driver to pretend
- * it cannot distinguish between functions of a device, exposing only one
- * group for the device. Useful for disallowing use of individual PCI
- * functions from userspace drivers.
- */
-int iommu_group_mf __read_mostly;
-
 extern struct iommu_table_entry __iommu_table[], __iommu_table_end[];
 
 /* Dummy device used for NULL arguments (normally ISA). */
···
 #endif
 	if (!strncmp(p, "pt", 2))
 		iommu_pass_through = 1;
-	if (!strncmp(p, "group_mf", 8))
-		iommu_group_mf = 1;
 
 	gart_parse_options(p);
 
+5 -1
drivers/iommu/Kconfig
···
 
 if IOMMU_SUPPORT
 
+config OF_IOMMU
+	def_bool y
+	depends on OF
+
 # MSM IOMMU support
 config MSM_IOMMU
 	bool "MSM IOMMU Support"
···
 
 config TEGRA_IOMMU_SMMU
 	bool "Tegra SMMU IOMMU Support"
-	depends on ARCH_TEGRA_3x_SOC
+	depends on ARCH_TEGRA_3x_SOC && TEGRA_AHB
 	select IOMMU_API
 	help
 	  Enables support for remapping discontiguous physical memory
+1
drivers/iommu/Makefile
···
 obj-$(CONFIG_IOMMU_API) += iommu.o
+obj-$(CONFIG_OF_IOMMU)	+= of_iommu.o
 obj-$(CONFIG_MSM_IOMMU) += msm_iommu.o msm_iommu_dev.o
 obj-$(CONFIG_AMD_IOMMU) += amd_iommu.o amd_iommu_init.o
 obj-$(CONFIG_AMD_IOMMU_V2) += amd_iommu_v2.o
+71 -28
drivers/iommu/amd_iommu.c
···
 	return true;
 }
 
+static void swap_pci_ref(struct pci_dev **from, struct pci_dev *to)
+{
+	pci_dev_put(*from);
+	*from = to;
+}
+
+#define REQ_ACS_FLAGS	(PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF)
+
 static int iommu_init_device(struct device *dev)
 {
-	struct pci_dev *pdev = to_pci_dev(dev);
+	struct pci_dev *dma_pdev, *pdev = to_pci_dev(dev);
 	struct iommu_dev_data *dev_data;
+	struct iommu_group *group;
 	u16 alias;
+	int ret;
 
 	if (dev->archdata.iommu)
 		return 0;
···
 			return -ENOTSUPP;
 		}
 		dev_data->alias_data = alias_data;
+
+		dma_pdev = pci_get_bus_and_slot(alias >> 8, alias & 0xff);
+	} else
+		dma_pdev = pci_dev_get(pdev);
+
+	swap_pci_ref(&dma_pdev, pci_get_dma_source(dma_pdev));
+
+	if (dma_pdev->multifunction &&
+	    !pci_acs_enabled(dma_pdev, REQ_ACS_FLAGS))
+		swap_pci_ref(&dma_pdev,
+			     pci_get_slot(dma_pdev->bus,
+					  PCI_DEVFN(PCI_SLOT(dma_pdev->devfn),
+					  0)));
+
+	while (!pci_is_root_bus(dma_pdev->bus)) {
+		if (pci_acs_path_enabled(dma_pdev->bus->self,
+					 NULL, REQ_ACS_FLAGS))
+			break;
+
+		swap_pci_ref(&dma_pdev, pci_dev_get(dma_pdev->bus->self));
 	}
+
+	group = iommu_group_get(&dma_pdev->dev);
+	pci_dev_put(dma_pdev);
+	if (!group) {
+		group = iommu_group_alloc();
+		if (IS_ERR(group))
+			return PTR_ERR(group);
+	}
+
+	ret = iommu_group_add_device(group, dev);
+
+	iommu_group_put(group);
+
+	if (ret)
+		return ret;
 
 	if (pci_iommuv2_capable(pdev)) {
 		struct amd_iommu *iommu;
···
 
 static void iommu_uninit_device(struct device *dev)
 {
+	iommu_group_remove_device(dev);
+
 	/*
 	 * Nothing to do here - we keep dev_data around for unplugged devices
 	 * and reuse it when the device is re-plugged - not doing so would
···
 DECLARE_STATS_COUNTER(invalidate_iotlb);
 DECLARE_STATS_COUNTER(invalidate_iotlb_all);
 DECLARE_STATS_COUNTER(pri_requests);
-
 
 static struct dentry *stats_dir;
 static struct dentry *de_fflush;
···
 /* FIXME: Move this to PCI code */
 #define PCI_PRI_TLP_OFF		(1 << 15)
 
-bool pci_pri_tlp_required(struct pci_dev *pdev)
+static bool pci_pri_tlp_required(struct pci_dev *pdev)
 {
 	u16 status;
 	int pos;
···
 
 		iommu_init_device(dev);
 
+		/*
+		 * dev_data is still NULL and
+		 * got initialized in iommu_init_device
+		 */
+		dev_data = get_dev_data(dev);
+
+		if (iommu_pass_through || dev_data->iommu_v2) {
+			dev_data->passthrough = true;
+			attach_device(dev, pt_domain);
+			break;
+		}
+
 		domain = domain_for_device(dev);
 
 		/* allocate a protection domain if a device is added */
···
 
 		dev_data = get_dev_data(dev);
 
-		if (!dev_data->passthrough)
-			dev->archdata.dma_ops = &amd_iommu_dma_ops;
-		else
-			dev->archdata.dma_ops = &nommu_dma_ops;
+		dev->archdata.dma_ops = &amd_iommu_dma_ops;
 
 		break;
 	case BUS_NOTIFY_DEL_DEVICE:
···
 
 	amd_iommu_stats_init();
 
+	if (amd_iommu_unmap_flush)
+		pr_info("AMD-Vi: IO/TLB flush on unmap enabled\n");
+	else
+		pr_info("AMD-Vi: Lazy IO/TLB flushing enabled\n");
+
 	return 0;
 
 free_domains:
···
 	domain->iommu_domain = dom;
 
 	dom->priv = domain;
+
+	dom->geometry.aperture_start = 0;
+	dom->geometry.aperture_end   = ~0ULL;
+	dom->geometry.force_aperture = true;
 
 	return 0;
 
···
 	return 0;
 }
 
-static int amd_iommu_device_group(struct device *dev, unsigned int *groupid)
-{
-	struct iommu_dev_data *dev_data = dev->archdata.iommu;
-	struct pci_dev *pdev = to_pci_dev(dev);
-	u16 devid;
-
-	if (!dev_data)
-		return -ENODEV;
-
-	if (pdev->is_virtfn || !iommu_group_mf)
-		devid = dev_data->devid;
-	else
-		devid = calc_devid(pdev->bus->number,
-				   PCI_DEVFN(PCI_SLOT(pdev->devfn), 0));
-
-	*groupid = amd_iommu_alias_table[devid];
-
-	return 0;
-}
-
 static struct iommu_ops amd_iommu_ops = {
 	.domain_init = amd_iommu_domain_init,
 	.domain_destroy = amd_iommu_domain_destroy,
···
 	.unmap = amd_iommu_unmap,
 	.iova_to_phys = amd_iommu_iova_to_phys,
 	.domain_has_cap = amd_iommu_domain_has_cap,
-	.device_group = amd_iommu_device_group,
 	.pgsize_bitmap	= AMD_IOMMU_PGSIZES,
 };
 
+336 -233
drivers/iommu/amd_iommu_init.c
···
 #include <linux/msi.h>
 #include <linux/amd-iommu.h>
 #include <linux/export.h>
+#include <linux/acpi.h>
+#include <acpi/acpi.h>
 #include <asm/pci-direct.h>
 #include <asm/iommu.h>
 #include <asm/gart.h>
···
 
 bool amd_iommu_dump;
 
-static int __initdata amd_iommu_detected;
+static bool amd_iommu_detected;
 static bool __initdata amd_iommu_disabled;
 
 u16 amd_iommu_last_bdf;			/* largest PCI device id we have
···
 bool amd_iommu_v2_present __read_mostly;
 
 bool amd_iommu_force_isolation __read_mostly;
-
-/*
- * The ACPI table parsing functions set this variable on an error
- */
-static int __initdata amd_iommu_init_err;
 
 /*
  * List of protection domains - used during resume
···
 static u32 alias_table_size;	/* size of the alias table */
 static u32 rlookup_table_size;	/* size if the rlookup table */
 
-/*
- * This function flushes all internal caches of
- * the IOMMU used by this driver.
- */
-extern void iommu_flush_all_caches(struct amd_iommu *iommu);
+enum iommu_init_state {
+	IOMMU_START_STATE,
+	IOMMU_IVRS_DETECTED,
+	IOMMU_ACPI_FINISHED,
+	IOMMU_ENABLED,
+	IOMMU_PCI_INIT,
+	IOMMU_INTERRUPTS_EN,
+	IOMMU_DMA_OPS,
+	IOMMU_INITIALIZED,
+	IOMMU_NOT_FOUND,
+	IOMMU_INIT_ERROR,
+};
+
+static enum iommu_init_state init_state = IOMMU_START_STATE;
 
 static int amd_iommu_enable_interrupts(void);
+static int __init iommu_go_to_state(enum iommu_init_state state);
 
 static inline void update_last_devid(u16 devid)
 {
···
 /* Function to enable the hardware */
 static void iommu_enable(struct amd_iommu *iommu)
 {
-	static const char * const feat_str[] = {
-		"PreF", "PPR", "X2APIC", "NX", "GT", "[5]",
-		"IA", "GA", "HE", "PC", NULL
-	};
-	int i;
-
-	printk(KERN_INFO "AMD-Vi: Enabling IOMMU at %s cap 0x%hx",
-	       dev_name(&iommu->dev->dev), iommu->cap_ptr);
-
-	if (iommu->cap & (1 << IOMMU_CAP_EFR)) {
-		printk(KERN_CONT " extended features: ");
-		for (i = 0; feat_str[i]; ++i)
-			if (iommu_feature(iommu, (1ULL << i)))
-				printk(KERN_CONT " %s", feat_str[i]);
-	}
-	printk(KERN_CONT "\n");
-
 	iommu_feature_enable(iommu, CONTROL_IOMMU_EN);
 }
 
···
 * mapping and unmapping functions for the IOMMU MMIO space. Each AMD IOMMU in
 * the system has one.
 */
-static u8 * __init iommu_map_mmio_space(u64 address)
+static u8 __iomem * __init iommu_map_mmio_space(u64 address)
 {
 	if (!request_mem_region(address, MMIO_REGION_LENGTH, "amd_iommu")) {
 		pr_err("AMD-Vi: Can not reserve memory region %llx for mmio\n",
···
 		return NULL;
 	}
 
-	return ioremap_nocache(address, MMIO_REGION_LENGTH);
+	return (u8 __iomem *)ioremap_nocache(address, MMIO_REGION_LENGTH);
 }
 
 static void __init iommu_unmap_mmio_space(struct amd_iommu *iommu)
···
 	 */
 	for (i = 0; i < table->length; ++i)
 		checksum += p[i];
-	if (checksum != 0) {
+	if (checksum != 0)
 		/* ACPI table corrupt */
-		amd_iommu_init_err = -ENODEV;
-		return 0;
-	}
+		return -ENODEV;
 
 	p += IVRS_HEADER_LENGTH;
 
···
 }
 
 /*
- * This function reads some important data from the IOMMU PCI space and
- * initializes the driver data structure with it. It reads the hardware
- * capabilities and the first/last device entries
- */
-static void __init init_iommu_from_pci(struct amd_iommu *iommu)
-{
-	int cap_ptr = iommu->cap_ptr;
-	u32 range, misc, low, high;
-	int i, j;
-
-	pci_read_config_dword(iommu->dev, cap_ptr + MMIO_CAP_HDR_OFFSET,
-			      &iommu->cap);
-	pci_read_config_dword(iommu->dev, cap_ptr + MMIO_RANGE_OFFSET,
-			      &range);
-	pci_read_config_dword(iommu->dev, cap_ptr + MMIO_MISC_OFFSET,
-			      &misc);
-
-	iommu->first_device = calc_devid(MMIO_GET_BUS(range),
-					 MMIO_GET_FD(range));
-	iommu->last_device = calc_devid(MMIO_GET_BUS(range),
-					MMIO_GET_LD(range));
-	iommu->evt_msi_num = MMIO_MSI_NUM(misc);
-
-	if (!(iommu->cap & (1 << IOMMU_CAP_IOTLB)))
-		amd_iommu_iotlb_sup = false;
-
-	/* read extended feature bits */
-	low  = readl(iommu->mmio_base + MMIO_EXT_FEATURES);
-	high = readl(iommu->mmio_base + MMIO_EXT_FEATURES + 4);
-
-	iommu->features = ((u64)high << 32) | low;
-
-	if (iommu_feature(iommu, FEATURE_GT)) {
-		int glxval;
-		u32 pasids;
-		u64 shift;
-
-		shift   = iommu->features & FEATURE_PASID_MASK;
-		shift >>= FEATURE_PASID_SHIFT;
-		pasids  = (1 << shift);
-
-		amd_iommu_max_pasids = min(amd_iommu_max_pasids, pasids);
-
-		glxval   = iommu->features & FEATURE_GLXVAL_MASK;
-		glxval >>= FEATURE_GLXVAL_SHIFT;
-
-		if (amd_iommu_max_glx_val == -1)
-			amd_iommu_max_glx_val = glxval;
-		else
-			amd_iommu_max_glx_val = min(amd_iommu_max_glx_val, glxval);
-	}
-
-	if (iommu_feature(iommu, FEATURE_GT) &&
-	    iommu_feature(iommu, FEATURE_PPR)) {
-		iommu->is_iommu_v2   = true;
-		amd_iommu_v2_present = true;
-	}
-
-	if (!is_rd890_iommu(iommu->dev))
-		return;
-
-	/*
-	 * Some rd890 systems may not be fully reconfigured by the BIOS, so
-	 * it's necessary for us to store this information so it can be
-	 * reprogrammed on resume
-	 */
-
-	pci_read_config_dword(iommu->dev, iommu->cap_ptr + 4,
-			      &iommu->stored_addr_lo);
-	pci_read_config_dword(iommu->dev, iommu->cap_ptr + 8,
-			      &iommu->stored_addr_hi);
-
-	/* Low bit locks writes to configuration space */
-	iommu->stored_addr_lo &= ~1;
-
-	for (i = 0; i < 6; i++)
-		for (j = 0; j < 0x12; j++)
-			iommu->stored_l1[i][j] = iommu_read_l1(iommu, i, j);
-
-	for (i = 0; i < 0x83; i++)
-		iommu->stored_l2[i] = iommu_read_l2(iommu, i);
-}
-
-/*
 * Takes a pointer to an AMD IOMMU entry in the ACPI table and
 * initializes the hardware and our data structures with it.
 */
···
 	/*
 	 * Copy data from ACPI table entry to the iommu struct
 	 */
-	iommu->dev = pci_get_bus_and_slot(PCI_BUS(h->devid), h->devid & 0xff);
-	if (!iommu->dev)
-		return 1;
-
-	iommu->root_pdev = pci_get_bus_and_slot(iommu->dev->bus->number,
-						PCI_DEVFN(0, 0));
-
+	iommu->devid   = h->devid;
 	iommu->cap_ptr = h->cap_ptr;
 	iommu->pci_seg = h->pci_seg;
 	iommu->mmio_phys = h->mmio_phys;
···
 
 	iommu->int_enabled = false;
 
-	init_iommu_from_pci(iommu);
 	init_iommu_from_acpi(iommu, h);
 	init_iommu_devices(iommu);
 
-	if (iommu_feature(iommu, FEATURE_PPR)) {
-		iommu->ppr_log = alloc_ppr_log(iommu);
-		if (!iommu->ppr_log)
-			return -ENOMEM;
-	}
-
-	if (iommu->cap & (1UL << IOMMU_CAP_NPCACHE))
-		amd_iommu_np_cache = true;
-
-	return pci_enable_device(iommu->dev);
+	return 0;
 }
 
 /*
···
 			    h->mmio_phys);
 
 			iommu = kzalloc(sizeof(struct amd_iommu), GFP_KERNEL);
-			if (iommu == NULL) {
-				amd_iommu_init_err = -ENOMEM;
-				return 0;
-			}
+			if (iommu == NULL)
+				return -ENOMEM;
 
 			ret = init_iommu_one(iommu, h);
-			if (ret) {
-				amd_iommu_init_err = ret;
-				return 0;
-			}
+			if (ret)
+				return ret;
 			break;
 		default:
 			break;
···
 	WARN_ON(p != end);
 
 	return 0;
+}
+
+static int iommu_init_pci(struct amd_iommu *iommu)
+{
+	int cap_ptr = iommu->cap_ptr;
+	u32 range, misc, low, high;
+
+	iommu->dev = pci_get_bus_and_slot(PCI_BUS(iommu->devid),
+					  iommu->devid & 0xff);
+	if (!iommu->dev)
+		return -ENODEV;
+
+	pci_read_config_dword(iommu->dev, cap_ptr + MMIO_CAP_HDR_OFFSET,
+			      &iommu->cap);
+	pci_read_config_dword(iommu->dev, cap_ptr + MMIO_RANGE_OFFSET,
+			      &range);
+	pci_read_config_dword(iommu->dev, cap_ptr + MMIO_MISC_OFFSET,
+			      &misc);
+
+	iommu->first_device = calc_devid(MMIO_GET_BUS(range),
+					 MMIO_GET_FD(range));
+	iommu->last_device = calc_devid(MMIO_GET_BUS(range),
+					MMIO_GET_LD(range));
+
+	if (!(iommu->cap & (1 << IOMMU_CAP_IOTLB)))
+		amd_iommu_iotlb_sup = false;
+
+	/* read extended feature bits */
+	low  = readl(iommu->mmio_base + MMIO_EXT_FEATURES);
+	high = readl(iommu->mmio_base + MMIO_EXT_FEATURES + 4);
+
+	iommu->features = ((u64)high << 32) | low;
+
+	if (iommu_feature(iommu, FEATURE_GT)) {
+		int glxval;
+		u32 pasids;
+		u64 shift;
+
+		shift   = iommu->features & FEATURE_PASID_MASK;
+		shift >>= FEATURE_PASID_SHIFT;
+		pasids  = (1 << shift);
+
+		amd_iommu_max_pasids = min(amd_iommu_max_pasids, pasids);
+
+		glxval   = iommu->features & FEATURE_GLXVAL_MASK;
+		glxval >>= FEATURE_GLXVAL_SHIFT;
+
+		if (amd_iommu_max_glx_val == -1)
+			amd_iommu_max_glx_val = glxval;
+		else
+			amd_iommu_max_glx_val = min(amd_iommu_max_glx_val, glxval);
+	}
+
+	if (iommu_feature(iommu, FEATURE_GT) &&
+	    iommu_feature(iommu, FEATURE_PPR)) {
+		iommu->is_iommu_v2   = true;
+		amd_iommu_v2_present = true;
+	}
+
+	if (iommu_feature(iommu, FEATURE_PPR)) {
+		iommu->ppr_log = alloc_ppr_log(iommu);
+		if (!iommu->ppr_log)
+			return -ENOMEM;
+	}
+
+	if (iommu->cap & (1UL << IOMMU_CAP_NPCACHE))
+		amd_iommu_np_cache = true;
+
+	if (is_rd890_iommu(iommu->dev)) {
+		int i, j;
+
+		iommu->root_pdev = pci_get_bus_and_slot(iommu->dev->bus->number,
+							PCI_DEVFN(0, 0));
+
+		/*
+		 * Some rd890 systems may not be fully reconfigured by the
+		 * BIOS, so it's necessary for us to store this information so
+		 * it can be reprogrammed on resume
+		 */
+		pci_read_config_dword(iommu->dev, iommu->cap_ptr + 4,
+				      &iommu->stored_addr_lo);
+		pci_read_config_dword(iommu->dev, iommu->cap_ptr + 8,
+				      &iommu->stored_addr_hi);
+
+		/* Low bit locks writes to configuration space */
+		iommu->stored_addr_lo &= ~1;
+
+		for (i = 0; i < 6; i++)
+			for (j = 0; j < 0x12; j++)
+				iommu->stored_l1[i][j] = iommu_read_l1(iommu, i, j);
+
+		for (i = 0; i < 0x83; i++)
+			iommu->stored_l2[i] = iommu_read_l2(iommu, i);
+	}
+
+	return pci_enable_device(iommu->dev);
+}
+
+static void print_iommu_info(void)
+{
+	static const char * const feat_str[] = {
+		"PreF", "PPR", "X2APIC", "NX", "GT", "[5]",
+		"IA", "GA", "HE", "PC"
+	};
+	struct amd_iommu *iommu;
+
+	for_each_iommu(iommu) {
+		int i;
+
+		pr_info("AMD-Vi: Found IOMMU at %s cap 0x%hx\n",
+			dev_name(&iommu->dev->dev), iommu->cap_ptr);
+
+		if (iommu->cap & (1 << IOMMU_CAP_EFR)) {
+			pr_info("AMD-Vi: Extended features: ");
+			for (i = 0; ARRAY_SIZE(feat_str); ++i) {
+				if (iommu_feature(iommu, (1ULL << i)))
+					pr_cont(" %s", feat_str[i]);
+			}
+		}
+		pr_cont("\n");
+	}
+}
+
+static int __init amd_iommu_init_pci(void)
+{
+	struct amd_iommu *iommu;
+	int ret = 0;
+
+	for_each_iommu(iommu) {
+		ret = iommu_init_pci(iommu);
+		if (ret)
+			break;
+	}
+
+	/* Make sure ACS will be enabled */
+	pci_request_acs();
+
+	ret = amd_iommu_init_devices();
+
+	print_iommu_info();
+
+	return ret;
 }
 
 /****************************************************************************
···
 /* called for unity map ACPI definition */
 static int __init init_unity_map_range(struct ivmd_header *m)
 {
-	struct unity_map_entry *e = 0;
+	struct unity_map_entry *e = NULL;
 	char *s;
 
 	e = kzalloc(sizeof(*e), GFP_KERNEL);
···
 * This function finally enables all IOMMUs found in the system after
 * they have been initialized
 */
-static void enable_iommus(void)
+static void early_enable_iommus(void)
 {
 	struct amd_iommu *iommu;
 
···
 		iommu_set_device_table(iommu);
 		iommu_enable_command_buffer(iommu);
 		iommu_enable_event_buffer(iommu);
-		iommu_enable_ppr_log(iommu);
-		iommu_enable_gt(iommu);
 		iommu_set_exclusion_range(iommu);
 		iommu_enable(iommu);
 		iommu_flush_all_caches(iommu);
 	}
+}
+
+static void enable_iommus_v2(void)
+{
+	struct amd_iommu *iommu;
+
+	for_each_iommu(iommu) {
+		iommu_enable_ppr_log(iommu);
+		iommu_enable_gt(iommu);
+	}
+}
+
+static void enable_iommus(void)
+{
+	early_enable_iommus();
+
+	enable_iommus_v2();
 }
 
 static void disable_iommus(void)
···
 * After everything is set up the IOMMUs are enabled and the necessary
 * hotplug and suspend notifiers are registered.
 */
-int __init amd_iommu_init_hardware(void)
+static int __init early_amd_iommu_init(void)
 {
+	struct acpi_table_header *ivrs_base;
+	acpi_size ivrs_size;
+	acpi_status status;
 	int i, ret = 0;
 
 	if (!amd_iommu_detected)
 		return -ENODEV;
 
-	if (amd_iommu_dev_table != NULL) {
-		/* Hardware already initialized */
-		return 0;
+	status = acpi_get_table_with_size("IVRS", 0, &ivrs_base, &ivrs_size);
+	if (status == AE_NOT_FOUND)
+		return -ENODEV;
+	else if (ACPI_FAILURE(status)) {
+		const char *err = acpi_format_exception(status);
+		pr_err("AMD-Vi: IVRS table error: %s\n", err);
+		return -EINVAL;
 	}
 
 	/*
···
 	 * we need to handle. Upon this information the shared data
 	 * structures for the IOMMUs in the system will be allocated
 	 */
-	if (acpi_table_parse("IVRS", find_last_devid_acpi) != 0)
-		return -ENODEV;
-
-	ret = amd_iommu_init_err;
+	ret = find_last_devid_acpi(ivrs_base);
 	if (ret)
 		goto out;
 
···
 	amd_iommu_alias_table = (void *)__get_free_pages(GFP_KERNEL,
 			get_order(alias_table_size));
 	if (amd_iommu_alias_table == NULL)
-		goto free;
+		goto out;
 
 	/* IOMMU rlookup table - find the IOMMU for a specific device */
 	amd_iommu_rlookup_table = (void *)__get_free_pages(
 			GFP_KERNEL | __GFP_ZERO,
 			get_order(rlookup_table_size));
 	if (amd_iommu_rlookup_table == NULL)
-		goto free;
+		goto out;
 
 	amd_iommu_pd_alloc_bitmap = (void *)__get_free_pages(
 					    GFP_KERNEL | __GFP_ZERO,
 					    get_order(MAX_DOMAIN_ID/8));
 	if (amd_iommu_pd_alloc_bitmap == NULL)
-		goto free;
+		goto out;
 
 	/* init the device table */
 	init_device_table();
···
 	 * now the data structures are allocated and basically initialized
 	 * start the real acpi table scan
 	 */
-	ret = -ENODEV;
-	if (acpi_table_parse("IVRS", init_iommu_all) != 0)
-		goto free;
-
-	if (amd_iommu_init_err) {
-		ret = amd_iommu_init_err;
-		goto free;
-	}
-
-	if (acpi_table_parse("IVRS", init_memory_definitions) != 0)
-		goto free;
-
-	if (amd_iommu_init_err) {
-		ret = amd_iommu_init_err;
-		goto free;
-	}
-
-	ret = amd_iommu_init_devices();
+	ret = init_iommu_all(ivrs_base);
 	if (ret)
-		goto free;
+		goto out;
 
-	enable_iommus();
-
-	amd_iommu_init_notifier();
-
-	register_syscore_ops(&amd_iommu_syscore_ops);
+	ret = init_memory_definitions(ivrs_base);
+	if (ret)
+		goto out;
 
 out:
-	return ret;
-
-free:
-	free_on_init_error();
+	/* Don't leak any ACPI memory */
+	early_acpi_os_unmap_memory((char __iomem *)ivrs_base, ivrs_size);
+	ivrs_base = NULL;
 
 	return ret;
 }
···
 	return ret;
 }
 
-/*
- * This is the core init function for AMD IOMMU hardware in the system.
- * This function is called from the generic x86 DMA layer initialization
- * code.
- *
- * The function calls amd_iommu_init_hardware() to setup and enable the
- * IOMMU hardware if this has not happened yet. After that the driver
- * registers for the DMA-API and for the IOMMU-API as necessary.
- */
-static int __init amd_iommu_init(void)
+static bool detect_ivrs(void)
 {
-	int ret = 0;
+	struct acpi_table_header *ivrs_base;
+	acpi_size ivrs_size;
+	acpi_status status;
 
-	ret = amd_iommu_init_hardware();
-	if (ret)
-		goto out;
+	status = acpi_get_table_with_size("IVRS", 0, &ivrs_base, &ivrs_size);
+	if (status == AE_NOT_FOUND)
+		return false;
+	else if (ACPI_FAILURE(status)) {
+		const char *err = acpi_format_exception(status);
+		pr_err("AMD-Vi: IVRS table error: %s\n", err);
+		return false;
+	}
 
-	ret = amd_iommu_enable_interrupts();
-	if (ret)
-		goto free;
+	early_acpi_os_unmap_memory((char __iomem *)ivrs_base, ivrs_size);
+
+	return true;
+}
+
+static int amd_iommu_init_dma(void)
+{
+	int ret;
 
 	if (iommu_pass_through)
 		ret = amd_iommu_init_passthrough();
···
 		ret = amd_iommu_init_dma_ops();
 
 	if (ret)
-		goto free;
+		return ret;
 
 	amd_iommu_init_api();
 
-	x86_platform.iommu_shutdown = disable_iommus;
+	amd_iommu_init_notifier();
 
-	if (iommu_pass_through)
-		goto out;
+	return 0;
+}
 
-	if (amd_iommu_unmap_flush)
-		printk(KERN_INFO "AMD-Vi: IO/TLB flush on unmap enabled\n");
-	else
-		printk(KERN_INFO "AMD-Vi: Lazy IO/TLB flushing enabled\n");
+/****************************************************************************
+ *
+ * AMD IOMMU Initialization State Machine
+ *
+ ****************************************************************************/
 
-out:
+static int __init state_next(void)
+{
+	int ret = 0;
+
+	switch (init_state) {
+	case IOMMU_START_STATE:
+		if (!detect_ivrs()) {
+			init_state = IOMMU_NOT_FOUND;
+			ret = -ENODEV;
+		} else {
+			init_state = IOMMU_IVRS_DETECTED;
+		}
+		break;
+	case IOMMU_IVRS_DETECTED:
+		ret = early_amd_iommu_init();
+		init_state = ret ? IOMMU_INIT_ERROR : IOMMU_ACPI_FINISHED;
+		break;
+	case IOMMU_ACPI_FINISHED:
+		early_enable_iommus();
+		register_syscore_ops(&amd_iommu_syscore_ops);
+		x86_platform.iommu_shutdown = disable_iommus;
+		init_state = IOMMU_ENABLED;
+		break;
+	case IOMMU_ENABLED:
+		ret = amd_iommu_init_pci();
+		init_state = ret ? IOMMU_INIT_ERROR : IOMMU_PCI_INIT;
+		enable_iommus_v2();
+		break;
+	case IOMMU_PCI_INIT:
+		ret = amd_iommu_enable_interrupts();
+		init_state = ret ? IOMMU_INIT_ERROR : IOMMU_INTERRUPTS_EN;
+		break;
+	case IOMMU_INTERRUPTS_EN:
+		ret = amd_iommu_init_dma();
+		init_state = ret ? IOMMU_INIT_ERROR : IOMMU_DMA_OPS;
+		break;
+	case IOMMU_DMA_OPS:
+		init_state = IOMMU_INITIALIZED;
+		break;
+	case IOMMU_INITIALIZED:
+		/* Nothing to do */
+		break;
+	case IOMMU_NOT_FOUND:
+	case IOMMU_INIT_ERROR:
+		/* Error states => do nothing */
+		ret = -EINVAL;
+		break;
+	default:
+		/* Unknown state */
+		BUG();
+	}
+
 	return ret;
+}
 
-free:
-	disable_iommus();
+static int __init iommu_go_to_state(enum iommu_init_state state)
+{
+	int ret = 0;
 
-	free_on_init_error();
+	while (init_state != state) {
+		ret = state_next();
+		if (init_state == IOMMU_NOT_FOUND ||
+		    init_state == IOMMU_INIT_ERROR)
+			break;
+	}
 
-	goto out;
+	return ret;
+}
+
+
+
+/*
+ * This is the core init function for AMD IOMMU hardware in the system.
+ * This function is called from the generic x86 DMA layer initialization
+ * code.
+ */
+static int __init amd_iommu_init(void)
+{
+	int ret;
+
+	ret = iommu_go_to_state(IOMMU_INITIALIZED);
+	if (ret) {
+		disable_iommus();
+		free_on_init_error();
+	}
+
+	return ret;
 }
 
 /****************************************************************************
···
 * IOMMUs
 *
 ****************************************************************************/
-static int __init early_amd_iommu_detect(struct acpi_table_header *table)
-{
-	return 0;
-}
-
 int __init amd_iommu_detect(void)
 {
+	int ret;
+
 	if (no_iommu || (iommu_detected && !gart_iommu_aperture))
 		return -ENODEV;
 
 	if (amd_iommu_disabled)
 		return -ENODEV;
 
-	if (acpi_table_parse("IVRS", early_amd_iommu_detect) == 0) {
-		iommu_detected = 1;
-		amd_iommu_detected = 1;
-		x86_init.iommu.iommu_init = amd_iommu_init;
+	ret = iommu_go_to_state(IOMMU_IVRS_DETECTED);
+	if (ret)
+		return ret;
 
-		/* Make sure ACS will be enabled */
-		pci_request_acs();
-		return 1;
-	}
-	return -ENODEV;
+	amd_iommu_detected = true;
+	iommu_detected = 1;
+	x86_init.iommu.iommu_init = amd_iommu_init;
+
+	return 0;
 }
 
 /****************************************************************************
···
 
 IOMMU_INIT_FINISH(amd_iommu_detect,
 		  gart_iommu_hole_init,
-		  0,
-		  0);
+		  NULL,
+		  NULL);
 
 bool amd_iommu_v2_supported(void)
 {
+10 -3
drivers/iommu/amd_iommu_types.h
···
  /* physical address of MMIO space */
  u64 mmio_phys;
  /* virtual address of MMIO space */
- u8 *mmio_base;
+ u8 __iomem *mmio_base;

  /* capabilities of that IOMMU read from ACPI */
  u32 cap;
···
  /* IOMMUv2 */
  bool is_iommu_v2;
+
+ /* PCI device id of the IOMMU device */
+ u16 devid;

  /*
   * Capability pointer. There could be more than one IOMMU per PCI
···
  u32 evt_buf_size;
  /* event buffer virtual address */
  u8 *evt_buf;
- /* MSI number for event interrupt */
- u16 evt_msi_num;

  /* Base of the PPR log, if present */
  u8 *ppr_log;
···
  /* Max levels of glxval supported */
  extern int amd_iommu_max_glx_val;
+
+ /*
+  * This function flushes all internal caches of
+  * the IOMMU used by this driver.
+  */
+ extern void iommu_flush_all_caches(struct amd_iommu *iommu);

  /* takes bus and device/function and returns the device id
   * FIXME: should that be in generic PCI code? */
+3 -1
drivers/iommu/amd_iommu_v2.c
···
  u16 flags;
  };

- struct device_state **state_table;
+ static struct device_state **state_table;
  static spinlock_t state_lock;

  /* List and lock for all pasid_states */
···
  atomic_set(&pasid_state->count, 1);
  init_waitqueue_head(&pasid_state->wq);
+ spin_lock_init(&pasid_state->lock);
+
  pasid_state->task = task;
  pasid_state->mm = get_task_mm(task);
  pasid_state->device_state = dev_state;
+4
drivers/iommu/exynos-iommu.c
···
  spin_lock_init(&priv->pgtablelock);
  INIT_LIST_HEAD(&priv->clients);

+ dom->geometry.aperture_start = 0;
+ dom->geometry.aperture_end = ~0UL;
+ dom->geometry.force_aperture = true;
+
  domain->priv = priv;
  return 0;
+59 -36
drivers/iommu/intel-iommu.c
···
  domain_update_iommu_cap(dmar_domain);
  domain->priv = dmar_domain;

+ domain->geometry.aperture_start = 0;
+ domain->geometry.aperture_end = __DOMAIN_MAX_ADDR(dmar_domain->gaw);
+ domain->geometry.force_aperture = true;
+
  return 0;
  }
···
  return 0;
  }

- /*
-  * Group numbers are arbitrary. Device with the same group number
-  * indicate the iommu cannot differentiate between them. To avoid
-  * tracking used groups we just use the seg|bus|devfn of the lowest
-  * level we're able to differentiate devices
-  */
- static int intel_iommu_device_group(struct device *dev, unsigned int *groupid)
+ static void swap_pci_ref(struct pci_dev **from, struct pci_dev *to)
+ {
+     pci_dev_put(*from);
+     *from = to;
+ }
+
+ #define REQ_ACS_FLAGS (PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF)
+
+ static int intel_iommu_add_device(struct device *dev)
  {
      struct pci_dev *pdev = to_pci_dev(dev);
-     struct pci_dev *bridge;
-     union {
-         struct {
-             u8 devfn;
-             u8 bus;
-             u16 segment;
-         } pci;
-         u32 group;
-     } id;
+     struct pci_dev *bridge, *dma_pdev;
+     struct iommu_group *group;
+     int ret;

-     if (iommu_no_mapping(dev))
-         return -ENODEV;
-
-     id.pci.segment = pci_domain_nr(pdev->bus);
-     id.pci.bus = pdev->bus->number;
-     id.pci.devfn = pdev->devfn;
-
-     if (!device_to_iommu(id.pci.segment, id.pci.bus, id.pci.devfn))
+     if (!device_to_iommu(pci_domain_nr(pdev->bus),
+                          pdev->bus->number, pdev->devfn))
          return -ENODEV;

      bridge = pci_find_upstream_pcie_bridge(pdev);
      if (bridge) {
-         if (pci_is_pcie(bridge)) {
-             id.pci.bus = bridge->subordinate->number;
-             id.pci.devfn = 0;
-         } else {
-             id.pci.bus = bridge->bus->number;
-             id.pci.devfn = bridge->devfn;
-         }
+         if (pci_is_pcie(bridge))
+             dma_pdev = pci_get_domain_bus_and_slot(
+                                 pci_domain_nr(pdev->bus),
+                                 bridge->subordinate->number, 0);
+         else
+             dma_pdev = pci_dev_get(bridge);
+     } else
+         dma_pdev = pci_dev_get(pdev);
+
+     swap_pci_ref(&dma_pdev, pci_get_dma_source(dma_pdev));
+
+     if (dma_pdev->multifunction &&
+         !pci_acs_enabled(dma_pdev, REQ_ACS_FLAGS))
+         swap_pci_ref(&dma_pdev,
+                      pci_get_slot(dma_pdev->bus,
+                                   PCI_DEVFN(PCI_SLOT(dma_pdev->devfn),
+                                             0)));
+
+     while (!pci_is_root_bus(dma_pdev->bus)) {
+         if (pci_acs_path_enabled(dma_pdev->bus->self,
+                                  NULL, REQ_ACS_FLAGS))
+             break;
+
+         swap_pci_ref(&dma_pdev, pci_dev_get(dma_pdev->bus->self));
      }

-     if (!pdev->is_virtfn && iommu_group_mf)
-         id.pci.devfn = PCI_DEVFN(PCI_SLOT(id.pci.devfn), 0);
+     group = iommu_group_get(&dma_pdev->dev);
+     pci_dev_put(dma_pdev);
+     if (!group) {
+         group = iommu_group_alloc();
+         if (IS_ERR(group))
+             return PTR_ERR(group);
+     }

-     *groupid = id.group;
+     ret = iommu_group_add_device(group, dev);

-     return 0;
+     iommu_group_put(group);
+     return ret;
+ }
+
+ static void intel_iommu_remove_device(struct device *dev)
+ {
+     iommu_group_remove_device(dev);
  }

  static struct iommu_ops intel_iommu_ops = {
···
      .unmap = intel_iommu_unmap,
      .iova_to_phys = intel_iommu_iova_to_phys,
      .domain_has_cap = intel_iommu_domain_has_cap,
-     .device_group = intel_iommu_device_group,
+     .add_device = intel_iommu_add_device,
+     .remove_device = intel_iommu_remove_device,
      .pgsize_bitmap = INTEL_IOMMU_PGSIZES,
  };
+585 -34
drivers/iommu/iommu.c
···
  #include <linux/slab.h>
  #include <linux/errno.h>
  #include <linux/iommu.h>
+ #include <linux/idr.h>
+ #include <linux/notifier.h>
+ #include <linux/err.h>

- static ssize_t show_iommu_group(struct device *dev,
-                                 struct device_attribute *attr, char *buf)
+ static struct kset *iommu_group_kset;
+ static struct ida iommu_group_ida;
+ static struct mutex iommu_group_mutex;
+
+ struct iommu_group {
+     struct kobject kobj;
+     struct kobject *devices_kobj;
+     struct list_head devices;
+     struct mutex mutex;
+     struct blocking_notifier_head notifier;
+     void *iommu_data;
+     void (*iommu_data_release)(void *iommu_data);
+     char *name;
+     int id;
+ };
+
+ struct iommu_device {
+     struct list_head list;
+     struct device *dev;
+     char *name;
+ };
+
+ struct iommu_group_attribute {
+     struct attribute attr;
+     ssize_t (*show)(struct iommu_group *group, char *buf);
+     ssize_t (*store)(struct iommu_group *group,
+                      const char *buf, size_t count);
+ };
+
+ #define IOMMU_GROUP_ATTR(_name, _mode, _show, _store)           \
+ struct iommu_group_attribute iommu_group_attr_##_name =         \
+     __ATTR(_name, _mode, _show, _store)
+
+ #define to_iommu_group_attr(_attr)                              \
+     container_of(_attr, struct iommu_group_attribute, attr)
+ #define to_iommu_group(_kobj)                                   \
+     container_of(_kobj, struct iommu_group, kobj)
+
+ static ssize_t iommu_group_attr_show(struct kobject *kobj,
+                                      struct attribute *__attr, char *buf)
  {
-     unsigned int groupid;
+     struct iommu_group_attribute *attr = to_iommu_group_attr(__attr);
+     struct iommu_group *group = to_iommu_group(kobj);
+     ssize_t ret = -EIO;

-     if (iommu_device_group(dev, &groupid))
-         return 0;
-
-     return sprintf(buf, "%u", groupid);
+     if (attr->show)
+         ret = attr->show(group, buf);
+     return ret;
  }
- static DEVICE_ATTR(iommu_group, S_IRUGO, show_iommu_group, NULL);
+
+ static ssize_t iommu_group_attr_store(struct kobject *kobj,
+                                       struct attribute *__attr,
+                                       const char *buf, size_t count)
+ {
+     struct iommu_group_attribute *attr = to_iommu_group_attr(__attr);
+     struct iommu_group *group = to_iommu_group(kobj);
+     ssize_t ret = -EIO;
+
+     if (attr->store)
+         ret = attr->store(group, buf, count);
+     return ret;
+ }
+
+ static const struct sysfs_ops iommu_group_sysfs_ops = {
+     .show = iommu_group_attr_show,
+     .store = iommu_group_attr_store,
+ };
+
+ static int iommu_group_create_file(struct iommu_group *group,
+                                    struct iommu_group_attribute *attr)
+ {
+     return sysfs_create_file(&group->kobj, &attr->attr);
+ }
+
+ static void iommu_group_remove_file(struct iommu_group *group,
+                                     struct iommu_group_attribute *attr)
+ {
+     sysfs_remove_file(&group->kobj, &attr->attr);
+ }
+
+ static ssize_t iommu_group_show_name(struct iommu_group *group, char *buf)
+ {
+     return sprintf(buf, "%s\n", group->name);
+ }
+
+ static IOMMU_GROUP_ATTR(name, S_IRUGO, iommu_group_show_name, NULL);
+
+ static void iommu_group_release(struct kobject *kobj)
+ {
+     struct iommu_group *group = to_iommu_group(kobj);
+
+     if (group->iommu_data_release)
+         group->iommu_data_release(group->iommu_data);
+
+     mutex_lock(&iommu_group_mutex);
+     ida_remove(&iommu_group_ida, group->id);
+     mutex_unlock(&iommu_group_mutex);
+
+     kfree(group->name);
+     kfree(group);
+ }
+
+ static struct kobj_type iommu_group_ktype = {
+     .sysfs_ops = &iommu_group_sysfs_ops,
+     .release = iommu_group_release,
+ };
+
+ /**
+  * iommu_group_alloc - Allocate a new group
+  * @name: Optional name to associate with group, visible in sysfs
+  *
+  * This function is called by an iommu driver to allocate a new iommu
+  * group. The iommu group represents the minimum granularity of the iommu.
+  * Upon successful return, the caller holds a reference to the supplied
+  * group in order to hold the group until devices are added. Use
+  * iommu_group_put() to release this extra reference count, allowing the
+  * group to be automatically reclaimed once it has no devices or external
+  * references.
+  */
+ struct iommu_group *iommu_group_alloc(void)
+ {
+     struct iommu_group *group;
+     int ret;
+
+     group = kzalloc(sizeof(*group), GFP_KERNEL);
+     if (!group)
+         return ERR_PTR(-ENOMEM);
+
+     group->kobj.kset = iommu_group_kset;
+     mutex_init(&group->mutex);
+     INIT_LIST_HEAD(&group->devices);
+     BLOCKING_INIT_NOTIFIER_HEAD(&group->notifier);
+
+     mutex_lock(&iommu_group_mutex);
+
+ again:
+     if (unlikely(0 == ida_pre_get(&iommu_group_ida, GFP_KERNEL))) {
+         kfree(group);
+         mutex_unlock(&iommu_group_mutex);
+         return ERR_PTR(-ENOMEM);
+     }
+
+     if (-EAGAIN == ida_get_new(&iommu_group_ida, &group->id))
+         goto again;
+
+     mutex_unlock(&iommu_group_mutex);
+
+     ret = kobject_init_and_add(&group->kobj, &iommu_group_ktype,
+                                NULL, "%d", group->id);
+     if (ret) {
+         mutex_lock(&iommu_group_mutex);
+         ida_remove(&iommu_group_ida, group->id);
+         mutex_unlock(&iommu_group_mutex);
+         kfree(group);
+         return ERR_PTR(ret);
+     }
+
+     group->devices_kobj = kobject_create_and_add("devices", &group->kobj);
+     if (!group->devices_kobj) {
+         kobject_put(&group->kobj); /* triggers .release & free */
+         return ERR_PTR(-ENOMEM);
+     }
+
+     /*
+      * The devices_kobj holds a reference on the group kobject, so
+      * as long as that exists so will the group. We can therefore
+      * use the devices_kobj for reference counting.
+      */
+     kobject_put(&group->kobj);
+
+     return group;
+ }
+ EXPORT_SYMBOL_GPL(iommu_group_alloc);
+
+ /**
+  * iommu_group_get_iommudata - retrieve iommu_data registered for a group
+  * @group: the group
+  *
+  * iommu drivers can store data in the group for use when doing iommu
+  * operations. This function provides a way to retrieve it. Caller
+  * should hold a group reference.
+  */
+ void *iommu_group_get_iommudata(struct iommu_group *group)
+ {
+     return group->iommu_data;
+ }
+ EXPORT_SYMBOL_GPL(iommu_group_get_iommudata);
+
+ /**
+  * iommu_group_set_iommudata - set iommu_data for a group
+  * @group: the group
+  * @iommu_data: new data
+  * @release: release function for iommu_data
+  *
+  * iommu drivers can store data in the group for use when doing iommu
+  * operations. This function provides a way to set the data after
+  * the group has been allocated. Caller should hold a group reference.
+  */
+ void iommu_group_set_iommudata(struct iommu_group *group, void *iommu_data,
+                                void (*release)(void *iommu_data))
+ {
+     group->iommu_data = iommu_data;
+     group->iommu_data_release = release;
+ }
+ EXPORT_SYMBOL_GPL(iommu_group_set_iommudata);
+
+ /**
+  * iommu_group_set_name - set name for a group
+  * @group: the group
+  * @name: name
+  *
+  * Allow iommu driver to set a name for a group. When set it will
+  * appear in a name attribute file under the group in sysfs.
+  */
+ int iommu_group_set_name(struct iommu_group *group, const char *name)
+ {
+     int ret;
+
+     if (group->name) {
+         iommu_group_remove_file(group, &iommu_group_attr_name);
+         kfree(group->name);
+         group->name = NULL;
+         if (!name)
+             return 0;
+     }
+
+     group->name = kstrdup(name, GFP_KERNEL);
+     if (!group->name)
+         return -ENOMEM;
+
+     ret = iommu_group_create_file(group, &iommu_group_attr_name);
+     if (ret) {
+         kfree(group->name);
+         group->name = NULL;
+         return ret;
+     }
+
+     return 0;
+ }
+ EXPORT_SYMBOL_GPL(iommu_group_set_name);
+
+ /**
+  * iommu_group_add_device - add a device to an iommu group
+  * @group: the group into which to add the device (reference should be held)
+  * @dev: the device
+  *
+  * This function is called by an iommu driver to add a device into a
+  * group. Adding a device increments the group reference count.
+  */
+ int iommu_group_add_device(struct iommu_group *group, struct device *dev)
+ {
+     int ret, i = 0;
+     struct iommu_device *device;
+
+     device = kzalloc(sizeof(*device), GFP_KERNEL);
+     if (!device)
+         return -ENOMEM;
+
+     device->dev = dev;
+
+     ret = sysfs_create_link(&dev->kobj, &group->kobj, "iommu_group");
+     if (ret) {
+         kfree(device);
+         return ret;
+     }
+
+     device->name = kasprintf(GFP_KERNEL, "%s", kobject_name(&dev->kobj));
+ rename:
+     if (!device->name) {
+         sysfs_remove_link(&dev->kobj, "iommu_group");
+         kfree(device);
+         return -ENOMEM;
+     }
+
+     ret = sysfs_create_link_nowarn(group->devices_kobj,
+                                    &dev->kobj, device->name);
+     if (ret) {
+         kfree(device->name);
+         if (ret == -EEXIST && i >= 0) {
+             /*
+              * Account for the slim chance of collision
+              * and append an instance to the name.
+              */
+             device->name = kasprintf(GFP_KERNEL, "%s.%d",
+                                      kobject_name(&dev->kobj), i++);
+             goto rename;
+         }
+
+         sysfs_remove_link(&dev->kobj, "iommu_group");
+         kfree(device);
+         return ret;
+     }
+
+     kobject_get(group->devices_kobj);
+
+     dev->iommu_group = group;
+
+     mutex_lock(&group->mutex);
+     list_add_tail(&device->list, &group->devices);
+     mutex_unlock(&group->mutex);
+
+     /* Notify any listeners about change to group. */
+     blocking_notifier_call_chain(&group->notifier,
+                                  IOMMU_GROUP_NOTIFY_ADD_DEVICE, dev);
+     return 0;
+ }
+ EXPORT_SYMBOL_GPL(iommu_group_add_device);
+
+ /**
+  * iommu_group_remove_device - remove a device from its current group
+  * @dev: device to be removed
+  *
+  * This function is called by an iommu driver to remove the device from
+  * its current group. This decrements the iommu group reference count.
+  */
+ void iommu_group_remove_device(struct device *dev)
+ {
+     struct iommu_group *group = dev->iommu_group;
+     struct iommu_device *tmp_device, *device = NULL;
+
+     /* Pre-notify listeners that a device is being removed. */
+     blocking_notifier_call_chain(&group->notifier,
+                                  IOMMU_GROUP_NOTIFY_DEL_DEVICE, dev);
+
+     mutex_lock(&group->mutex);
+     list_for_each_entry(tmp_device, &group->devices, list) {
+         if (tmp_device->dev == dev) {
+             device = tmp_device;
+             list_del(&device->list);
+             break;
+         }
+     }
+     mutex_unlock(&group->mutex);
+
+     if (!device)
+         return;
+
+     sysfs_remove_link(group->devices_kobj, device->name);
+     sysfs_remove_link(&dev->kobj, "iommu_group");
+
+     kfree(device->name);
+     kfree(device);
+     dev->iommu_group = NULL;
+     kobject_put(group->devices_kobj);
+ }
+ EXPORT_SYMBOL_GPL(iommu_group_remove_device);
+
+ /**
+  * iommu_group_for_each_dev - iterate over each device in the group
+  * @group: the group
+  * @data: caller opaque data to be passed to callback function
+  * @fn: caller supplied callback function
+  *
+  * This function is called by group users to iterate over group devices.
+  * Callers should hold a reference count to the group during callback.
+  * The group->mutex is held across callbacks, which will block calls to
+  * iommu_group_add/remove_device.
+  */
+ int iommu_group_for_each_dev(struct iommu_group *group, void *data,
+                              int (*fn)(struct device *, void *))
+ {
+     struct iommu_device *device;
+     int ret = 0;
+
+     mutex_lock(&group->mutex);
+     list_for_each_entry(device, &group->devices, list) {
+         ret = fn(device->dev, data);
+         if (ret)
+             break;
+     }
+     mutex_unlock(&group->mutex);
+     return ret;
+ }
+ EXPORT_SYMBOL_GPL(iommu_group_for_each_dev);
+
+ /**
+  * iommu_group_get - Return the group for a device and increment reference
+  * @dev: get the group that this device belongs to
+  *
+  * This function is called by iommu drivers and users to get the group
+  * for the specified device. If found, the group is returned and the group
+  * reference is incremented, else NULL.
+  */
+ struct iommu_group *iommu_group_get(struct device *dev)
+ {
+     struct iommu_group *group = dev->iommu_group;
+
+     if (group)
+         kobject_get(group->devices_kobj);
+
+     return group;
+ }
+ EXPORT_SYMBOL_GPL(iommu_group_get);
+
+ /**
+  * iommu_group_put - Decrement group reference
+  * @group: the group to use
+  *
+  * This function is called by iommu drivers and users to release the
+  * iommu group. Once the reference count is zero, the group is released.
+  */
+ void iommu_group_put(struct iommu_group *group)
+ {
+     if (group)
+         kobject_put(group->devices_kobj);
+ }
+ EXPORT_SYMBOL_GPL(iommu_group_put);
+
+ /**
+  * iommu_group_register_notifier - Register a notifier for group changes
+  * @group: the group to watch
+  * @nb: notifier block to signal
+  *
+  * This function allows iommu group users to track changes in a group.
+  * See include/linux/iommu.h for actions sent via this notifier. Caller
+  * should hold a reference to the group throughout notifier registration.
+  */
+ int iommu_group_register_notifier(struct iommu_group *group,
+                                   struct notifier_block *nb)
+ {
+     return blocking_notifier_chain_register(&group->notifier, nb);
+ }
+ EXPORT_SYMBOL_GPL(iommu_group_register_notifier);
+
+ /**
+  * iommu_group_unregister_notifier - Unregister a notifier
+  * @group: the group to watch
+  * @nb: notifier block to signal
+  *
+  * Unregister a previously registered group notifier block.
+  */
+ int iommu_group_unregister_notifier(struct iommu_group *group,
+                                     struct notifier_block *nb)
+ {
+     return blocking_notifier_chain_unregister(&group->notifier, nb);
+ }
+ EXPORT_SYMBOL_GPL(iommu_group_unregister_notifier);
+
+ /**
+  * iommu_group_id - Return ID for a group
+  * @group: the group to ID
+  *
+  * Return the unique ID for the group matching the sysfs group number.
+  */
+ int iommu_group_id(struct iommu_group *group)
+ {
+     return group->id;
+ }
+ EXPORT_SYMBOL_GPL(iommu_group_id);

  static int add_iommu_group(struct device *dev, void *data)
  {
-     unsigned int groupid;
+     struct iommu_ops *ops = data;

-     if (iommu_device_group(dev, &groupid) == 0)
-         return device_create_file(dev, &dev_attr_iommu_group);
+     if (!ops->add_device)
+         return -ENODEV;

-     return 0;
- }
+     WARN_ON(dev->iommu_group);

- static int remove_iommu_group(struct device *dev)
- {
-     unsigned int groupid;
-
-     if (iommu_device_group(dev, &groupid) == 0)
-         device_remove_file(dev, &dev_attr_iommu_group);
+     ops->add_device(dev);

      return 0;
  }

- static int iommu_device_notifier(struct notifier_block *nb,
-                                  unsigned long action, void *data)
+ static int iommu_bus_notifier(struct notifier_block *nb,
+                               unsigned long action, void *data)
  {
      struct device *dev = data;
+     struct iommu_ops *ops = dev->bus->iommu_ops;
+     struct iommu_group *group;
+     unsigned long group_action = 0;

-     if (action == BUS_NOTIFY_ADD_DEVICE)
-         return add_iommu_group(dev, NULL);
-     else if (action == BUS_NOTIFY_DEL_DEVICE)
-         return remove_iommu_group(dev);
+     /*
+      * ADD/DEL call into iommu driver ops if provided, which may
+      * result in ADD/DEL notifiers to group->notifier
+      */
+     if (action == BUS_NOTIFY_ADD_DEVICE) {
+         if (ops->add_device)
+             return ops->add_device(dev);
+     } else if (action == BUS_NOTIFY_DEL_DEVICE) {
+         if (ops->remove_device && dev->iommu_group) {
+             ops->remove_device(dev);
+             return 0;
+         }
+     }

+     /*
+      * Remaining BUS_NOTIFYs get filtered and republished to the
+      * group, if anyone is listening
+      */
+     group = iommu_group_get(dev);
+     if (!group)
+         return 0;
+
+     switch (action) {
+     case BUS_NOTIFY_BIND_DRIVER:
+         group_action = IOMMU_GROUP_NOTIFY_BIND_DRIVER;
+         break;
+     case BUS_NOTIFY_BOUND_DRIVER:
+         group_action = IOMMU_GROUP_NOTIFY_BOUND_DRIVER;
+         break;
+     case BUS_NOTIFY_UNBIND_DRIVER:
+         group_action = IOMMU_GROUP_NOTIFY_UNBIND_DRIVER;
+         break;
+     case BUS_NOTIFY_UNBOUND_DRIVER:
+         group_action = IOMMU_GROUP_NOTIFY_UNBOUND_DRIVER;
+         break;
+     }
+
+     if (group_action)
+         blocking_notifier_call_chain(&group->notifier,
+                                      group_action, dev);
+
+     iommu_group_put(group);
      return 0;
  }

- static struct notifier_block iommu_device_nb = {
-     .notifier_call = iommu_device_notifier,
+ static struct notifier_block iommu_bus_nb = {
+     .notifier_call = iommu_bus_notifier,
  };

  static void iommu_bus_init(struct bus_type *bus, struct iommu_ops *ops)
  {
-     bus_register_notifier(bus, &iommu_device_nb);
-     bus_for_each_dev(bus, NULL, NULL, add_iommu_group);
+     bus_register_notifier(bus, &iommu_bus_nb);
+     bus_for_each_dev(bus, NULL, ops, add_iommu_group);
  }

  /**
···
      domain->ops->detach_dev(domain, dev);
  }
  EXPORT_SYMBOL_GPL(iommu_detach_device);
+
+ /*
+  * IOMMU groups are really the natural working unit of the IOMMU, but
+  * the IOMMU API works on domains and devices. Bridge that gap by
+  * iterating over the devices in a group. Ideally we'd have a single
+  * device which represents the requestor ID of the group, but we also
+  * allow IOMMU drivers to create policy defined minimum sets, where
+  * the physical hardware may be able to distinguish members, but we
+  * wish to group them at a higher level (ex. untrusted multi-function
+  * PCI devices). Thus we attach each device.
+  */
+ static int iommu_group_do_attach_device(struct device *dev, void *data)
+ {
+     struct iommu_domain *domain = data;
+
+     return iommu_attach_device(domain, dev);
+ }
+
+ int iommu_attach_group(struct iommu_domain *domain, struct iommu_group *group)
+ {
+     return iommu_group_for_each_dev(group, domain,
+                                     iommu_group_do_attach_device);
+ }
+ EXPORT_SYMBOL_GPL(iommu_attach_group);
+
+ static int iommu_group_do_detach_device(struct device *dev, void *data)
+ {
+     struct iommu_domain *domain = data;
+
+     iommu_detach_device(domain, dev);
+
+     return 0;
+ }
+
+ void iommu_detach_group(struct iommu_domain *domain, struct iommu_group *group)
+ {
+     iommu_group_for_each_dev(group, domain, iommu_group_do_detach_device);
+ }
+ EXPORT_SYMBOL_GPL(iommu_detach_group);

  phys_addr_t iommu_iova_to_phys(struct iommu_domain *domain,
                                 unsigned long iova)
···
  }
  EXPORT_SYMBOL_GPL(iommu_unmap);

- int iommu_device_group(struct device *dev, unsigned int *groupid)
+ static int __init iommu_init(void)
  {
-     if (iommu_present(dev->bus) && dev->bus->iommu_ops->device_group)
-         return dev->bus->iommu_ops->device_group(dev, groupid);
+     iommu_group_kset = kset_create_and_add("iommu_groups",
+                                            NULL, kernel_kobj);
+     ida_init(&iommu_group_ida);
+     mutex_init(&iommu_group_mutex);

-     return -ENODEV;
+     BUG_ON(!iommu_group_kset);
+
+     return 0;
  }
- EXPORT_SYMBOL_GPL(iommu_device_group);
+ subsys_initcall(iommu_init);
+
+ int iommu_domain_get_attr(struct iommu_domain *domain,
+                           enum iommu_attr attr, void *data)
+ {
+     struct iommu_domain_geometry *geometry;
+     int ret = 0;
+
+     switch (attr) {
+     case DOMAIN_ATTR_GEOMETRY:
+         geometry = data;
+         *geometry = domain->geometry;
+
+         break;
+     default:
+         if (!domain->ops->domain_get_attr)
+             return -EINVAL;
+
+         ret = domain->ops->domain_get_attr(domain, attr, data);
+     }
+
+     return ret;
+ }
+ EXPORT_SYMBOL_GPL(iommu_domain_get_attr);
+
+ int iommu_domain_set_attr(struct iommu_domain *domain,
+                           enum iommu_attr attr, void *data)
+ {
+     if (!domain->ops->domain_set_attr)
+         return -EINVAL;
+
+     return domain->ops->domain_set_attr(domain, attr, data);
+ }
+ EXPORT_SYMBOL_GPL(iommu_domain_set_attr);
+5
drivers/iommu/irq_remapping.c
···
  #include <linux/kernel.h>
  #include <linux/string.h>
+ #include <linux/cpumask.h>
  #include <linux/errno.h>
+ #include <linux/msi.h>
+
+ #include <asm/hw_irq.h>
+ #include <asm/irq_remapping.h>

  #include "irq_remapping.h"
+5
drivers/iommu/msm_iommu.c
···
  memset(priv->pgtable, 0, SZ_16K);
  domain->priv = priv;
+
+ domain->geometry.aperture_start = 0;
+ domain->geometry.aperture_end = (1ULL << 32) - 1;
+ domain->geometry.force_aperture = true;
+
  return 0;

  fail_nomem:
+90
drivers/iommu/of_iommu.c
···
+ /*
+  * OF helpers for IOMMU
+  *
+  * Copyright (c) 2012, NVIDIA CORPORATION. All rights reserved.
+  *
+  * This program is free software; you can redistribute it and/or modify it
+  * under the terms and conditions of the GNU General Public License,
+  * version 2, as published by the Free Software Foundation.
+  *
+  * This program is distributed in the hope it will be useful, but WITHOUT
+  * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+  * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+  * more details.
+  *
+  * You should have received a copy of the GNU General Public License along with
+  * this program; if not, write to the Free Software Foundation, Inc.,
+  * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+  */
+
+ #include <linux/export.h>
+ #include <linux/limits.h>
+ #include <linux/of.h>
+
+ /**
+  * of_get_dma_window - Parse *dma-window property and return 0 if found.
+  *
+  * @dn: device node
+  * @prefix: prefix for property name if any
+  * @index: index to start to parse
+  * @busno: Returns busno if supported. Otherwise pass NULL
+  * @addr: Returns address that DMA starts
+  * @size: Returns the range that DMA can handle
+  *
+  * This supports different formats flexibly. "prefix" can be
+  * configured if any. "busno" and "index" are optionally
+  * specified. Set 0 (or NULL) if not used.
+  */
+ int of_get_dma_window(struct device_node *dn, const char *prefix, int index,
+                       unsigned long *busno, dma_addr_t *addr, size_t *size)
+ {
+     const __be32 *dma_window, *end;
+     int bytes, cur_index = 0;
+     char propname[NAME_MAX], addrname[NAME_MAX], sizename[NAME_MAX];
+
+     if (!dn || !addr || !size)
+         return -EINVAL;
+
+     if (!prefix)
+         prefix = "";
+
+     snprintf(propname, sizeof(propname), "%sdma-window", prefix);
+     snprintf(addrname, sizeof(addrname), "%s#dma-address-cells", prefix);
+     snprintf(sizename, sizeof(sizename), "%s#dma-size-cells", prefix);
+
+     dma_window = of_get_property(dn, propname, &bytes);
+     if (!dma_window)
+         return -ENODEV;
+     end = dma_window + bytes / sizeof(*dma_window);
+
+     while (dma_window < end) {
+         u32 cells;
+         const void *prop;
+
+         /* busno is one cell if supported */
+         if (busno)
+             *busno = be32_to_cpup(dma_window++);
+
+         prop = of_get_property(dn, addrname, NULL);
+         if (!prop)
+             prop = of_get_property(dn, "#address-cells", NULL);
+
+         cells = prop ? be32_to_cpup(prop) : of_n_addr_cells(dn);
+         if (!cells)
+             return -EINVAL;
+         *addr = of_read_number(dma_window, cells);
+         dma_window += cells;
+
+         prop = of_get_property(dn, sizename, NULL);
+         cells = prop ? be32_to_cpup(prop) : of_n_size_cells(dn);
+         if (!cells)
+             return -EINVAL;
+         *size = of_read_number(dma_window, cells);
+         dma_window += cells;
+
+         if (cur_index++ == index)
+             break;
+     }
+     return 0;
+ }
+ EXPORT_SYMBOL_GPL(of_get_dma_window);
+4
drivers/iommu/omap-iommu.c
···
1148 1148 
1149 1149 	domain->priv = omap_domain;
1150 1150 
1151 + 	domain->geometry.aperture_start = 0;
1152 + 	domain->geometry.aperture_end   = (1ULL << 32) - 1;
1153 + 	domain->geometry.force_aperture = true;
1154 + 
1151 1155 	return 0;
1152 1156 
1153 1157 fail_nomem:
+5
drivers/iommu/tegra-gart.c
···
165 165 		return -EINVAL;
166 166 	domain->priv = gart;
167 167 
168 + 	domain->geometry.aperture_start = gart->iovmm_base;
169 + 	domain->geometry.aperture_end   = gart->iovmm_base +
170 + 					gart->page_count * GART_PAGE_SIZE - 1;
171 + 	domain->geometry.force_aperture = true;
172 + 
168 173 	client = devm_kzalloc(gart->dev, sizeof(*c), GFP_KERNEL);
169 174 	if (!client)
170 175 		return -ENOMEM;
+155 -132
drivers/iommu/tegra-smmu.c
···
30 30 #include <linux/sched.h>
31 31 #include <linux/iommu.h>
32 32 #include <linux/io.h>
33 + #include <linux/of.h>
34 + #include <linux/of_iommu.h>
33 35 
34 36 #include <asm/page.h>
35 37 #include <asm/cacheflush.h>
36 38 
37 39 #include <mach/iomap.h>
38 40 #include <mach/smmu.h>
41 + #include <mach/tegra-ahb.h>
39 42 
40 43 /* bitmap of the page sizes currently supported */
41 44 #define SMMU_IOMMU_PGSIZES	(SZ_4K)
···
114 111 
115 112 #define SMMU_PDE_NEXT_SHIFT		28
116 113 
117 - /* AHB Arbiter Registers */
118 - #define AHB_XBAR_CTRL				0xe0
119 - #define AHB_XBAR_CTRL_SMMU_INIT_DONE_DONE	1
120 - #define AHB_XBAR_CTRL_SMMU_INIT_DONE_SHIFT	17
121 - 
122 - #define SMMU_NUM_ASIDS				4
123 114 #define SMMU_TLB_FLUSH_VA_SECTION__MASK		0xffc00000
124 115 #define SMMU_TLB_FLUSH_VA_SECTION__SHIFT	12 /* right shift */
125 116 #define SMMU_TLB_FLUSH_VA_GROUP__MASK		0xffffc000
···
133 136 
134 137 #define SMMU_PAGE_SHIFT 12
135 138 #define SMMU_PAGE_SIZE	(1 << SMMU_PAGE_SHIFT)
139 + #define SMMU_PAGE_MASK	((1 << SMMU_PAGE_SHIFT) - 1)
136 140 
137 141 #define SMMU_PDIR_COUNT	1024
138 142 #define SMMU_PDIR_SIZE	(sizeof(unsigned long) * SMMU_PDIR_COUNT)
···
174 176 #define SMMU_ASID_ENABLE(asid)	((asid) | (1 << 31))
175 177 #define SMMU_ASID_DISABLE	0
176 178 #define SMMU_ASID_ASID(n)	((n) & ~SMMU_ASID_ENABLE(0))
179 + 
180 + #define NUM_SMMU_REG_BANKS	3
177 181 
178 182 #define smmu_client_enable_hwgrp(c, m)	smmu_client_set_hwgrp(c, m, 1)
179 183 #define smmu_client_disable_hwgrp(c)	smmu_client_set_hwgrp(c, 0, 0)
···
235 235  * Per SMMU device - IOMMU device
236 236  */
237 237 struct smmu_device {
238 - 	void __iomem	*regs, *regs_ahbarb;
238 + 	void __iomem	*regs[NUM_SMMU_REG_BANKS];
239 239 	unsigned long	iovmm_base;	/* remappable base address */
240 240 	unsigned long	page_count;	/* total remappable size */
241 241 	spinlock_t	lock;
242 242 	char		*name;
243 243 	struct device	*dev;
244 - 	int		num_as;
245 - 	struct smmu_as	*as;		/* Run-time allocated array */
246 244 	struct page *avp_vector_page;	/* dummy page shared by all AS's */
247 245 
248 246 	/*
···
250 252 	unsigned long translation_enable_1;
251 253 	unsigned long translation_enable_2;
252 254 	unsigned long asid_security;
255 + 
256 + 	struct device_node *ahb;
257 + 
258 + 	int		num_as;
259 + 	struct smmu_as	as[0];		/* Run-time allocated array */
253 260 };
254 261 
255 262 static struct smmu_device *smmu_handle; /* unique for a system */
256 263 
257 264 /*
258 - * SMMU/AHB register accessors
265 + * SMMU register accessors
259 266  */
260 267 static inline u32 smmu_read(struct smmu_device *smmu, size_t offs)
261 268 {
262 - 	return readl(smmu->regs + offs);
263 - }
264 - static inline void smmu_write(struct smmu_device *smmu, u32 val, size_t offs)
265 - {
266 - 	writel(val, smmu->regs + offs);
269 + 	BUG_ON(offs < 0x10);
270 + 	if (offs < 0x3c)
271 + 		return readl(smmu->regs[0] + offs - 0x10);
272 + 	BUG_ON(offs < 0x1f0);
273 + 	if (offs < 0x200)
274 + 		return readl(smmu->regs[1] + offs - 0x1f0);
275 + 	BUG_ON(offs < 0x228);
276 + 	if (offs < 0x284)
277 + 		return readl(smmu->regs[2] + offs - 0x228);
278 + 	BUG();
267 279 }
268 280 
269 - static inline u32 ahb_read(struct smmu_device *smmu, size_t offs)
281 + static inline void smmu_write(struct smmu_device *smmu, u32 val, size_t offs)
270 282 {
271 - 	return readl(smmu->regs_ahbarb + offs);
272 - }
273 - static inline void ahb_write(struct smmu_device *smmu, u32 val, size_t offs)
274 - {
275 - 	writel(val, smmu->regs_ahbarb + offs);
283 + 	BUG_ON(offs < 0x10);
284 + 	if (offs < 0x3c) {
285 + 		writel(val, smmu->regs[0] + offs - 0x10);
286 + 		return;
287 + 	}
288 + 	BUG_ON(offs < 0x1f0);
289 + 	if (offs < 0x200) {
290 + 		writel(val, smmu->regs[1] + offs - 0x1f0);
291 + 		return;
292 + 	}
293 + 	BUG_ON(offs < 0x228);
294 + 	if (offs < 0x284) {
295 + 		writel(val, smmu->regs[2] + offs - 0x228);
296 + 		return;
297 + 	}
298 + 	BUG();
276 299 }
277 300 
278 301 #define VA_PAGE_TO_PA(va, page)	\
···
389 370 	FLUSH_SMMU_REGS(smmu);
390 371 }
391 372 
392 - static void smmu_setup_regs(struct smmu_device *smmu)
373 + static int smmu_setup_regs(struct smmu_device *smmu)
393 374 {
394 375 	int i;
395 376 	u32 val;
···
417 398 
418 399 	smmu_flush_regs(smmu, 1);
419 400 
420 - 	val = ahb_read(smmu, AHB_XBAR_CTRL);
421 - 	val |= AHB_XBAR_CTRL_SMMU_INIT_DONE_DONE <<
422 - 		AHB_XBAR_CTRL_SMMU_INIT_DONE_SHIFT;
423 - 	ahb_write(smmu, val, AHB_XBAR_CTRL);
401 + 	return tegra_ahb_enable_smmu(smmu->ahb);
424 402 }
425 403 
426 404 static void flush_ptc_and_tlb(struct smmu_device *smmu,
···
553 537 #endif
554 538 
555 539 /*
556 - * Caller must lock/unlock as
540 + * Caller must not hold as->lock
557 541  */
558 542 static int alloc_pdir(struct smmu_as *as)
559 543 {
560 - 	unsigned long *pdir;
561 - 	int pdn;
544 + 	unsigned long *pdir, flags;
545 + 	int pdn, err = 0;
562 546 	u32 val;
563 547 	struct smmu_device *smmu = as->smmu;
548 + 	struct page *page;
549 + 	unsigned int *cnt;
564 550 
565 - 	if (as->pdir_page)
566 - 		return 0;
551 + 	/*
552 + 	 * do the allocation, then grab as->lock
553 + 	 */
554 + 	cnt = devm_kzalloc(smmu->dev,
555 + 			   sizeof(cnt[0]) * SMMU_PDIR_COUNT,
556 + 			   GFP_KERNEL);
557 + 	page = alloc_page(GFP_KERNEL | __GFP_DMA);
567 558 
568 - 	as->pte_count = devm_kzalloc(smmu->dev,
569 - 		     sizeof(as->pte_count[0]) * SMMU_PDIR_COUNT, GFP_ATOMIC);
570 - 	if (!as->pte_count) {
571 - 		dev_err(smmu->dev,
572 - 			"failed to allocate smmu_device PTE cunters\n");
573 - 		return -ENOMEM;
559 + 	spin_lock_irqsave(&as->lock, flags);
560 + 
561 + 	if (as->pdir_page) {
562 + 		/* We raced, free the redundant */
563 + 		err = -EAGAIN;
564 + 		goto err_out;
574 565 	}
575 - 	as->pdir_page = alloc_page(GFP_ATOMIC | __GFP_DMA);
576 - 	if (!as->pdir_page) {
577 - 		dev_err(smmu->dev,
578 - 			"failed to allocate smmu_device page directory\n");
579 - 		devm_kfree(smmu->dev, as->pte_count);
580 - 		as->pte_count = NULL;
581 - 		return -ENOMEM;
566 + 
567 + 	if (!page || !cnt) {
568 + 		dev_err(smmu->dev, "failed to allocate at %s\n", __func__);
569 + 		err = -ENOMEM;
570 + 		goto err_out;
582 571 	}
572 + 
573 + 	as->pdir_page = page;
574 + 	as->pte_count = cnt;
575 + 
583 576 	SetPageReserved(as->pdir_page);
584 577 	pdir = page_address(as->pdir_page);
585 578 
···
604 579 	smmu_write(smmu, val, SMMU_TLB_FLUSH);
605 580 	FLUSH_SMMU_REGS(as->smmu);
606 581 
582 + 	spin_unlock_irqrestore(&as->lock, flags);
583 + 
607 584 	return 0;
585 + 
586 + err_out:
587 + 	spin_unlock_irqrestore(&as->lock, flags);
588 + 
589 + 	devm_kfree(smmu->dev, cnt);
590 + 	if (page)
591 + 		__free_page(page);
592 + 	return err;
608 593 }
609 594 
610 595 static void __smmu_iommu_unmap(struct smmu_as *as, dma_addr_t iova)
···
806 771 
807 772 static int smmu_iommu_domain_init(struct iommu_domain *domain)
808 773 {
809 - 	int i;
774 + 	int i, err = -ENODEV;
810 775 	unsigned long flags;
811 776 	struct smmu_as *as;
812 777 	struct smmu_device *smmu = smmu_handle;
813 778 
814 779 	/* Look for a free AS with lock held */
815 780 	for (i = 0; i < smmu->num_as; i++) {
816 - 		struct smmu_as *tmp = &smmu->as[i];
817 - 
818 - 		spin_lock_irqsave(&tmp->lock, flags);
819 - 		if (!tmp->pdir_page) {
820 - 			as = tmp;
821 - 			goto found;
781 + 		as = &smmu->as[i];
782 + 		if (!as->pdir_page) {
783 + 			err = alloc_pdir(as);
784 + 			if (!err)
785 + 				goto found;
822 786 		}
823 - 		spin_unlock_irqrestore(&tmp->lock, flags);
787 + 		if (err != -EAGAIN)
788 + 			break;
824 789 	}
825 - 	dev_err(smmu->dev, "no free AS\n");
826 - 	return -ENODEV;
790 + 	if (i == smmu->num_as)
791 + 		dev_err(smmu->dev, "no free AS\n");
792 + 	return err;
827 793 
828 794 found:
829 - 	if (alloc_pdir(as) < 0)
830 - 		goto err_alloc_pdir;
831 - 
832 - 	spin_lock(&smmu->lock);
795 + 	spin_lock_irqsave(&smmu->lock, flags);
833 796 
834 797 	/* Update PDIR register */
835 798 	smmu_write(smmu, SMMU_PTB_ASID_CUR(as->asid), SMMU_PTB_ASID);
···
835 802 		 SMMU_MK_PDIR(as->pdir_page, as->pdir_attr), SMMU_PTB_DATA);
836 803 	FLUSH_SMMU_REGS(smmu);
837 804 
838 - 	spin_unlock(&smmu->lock);
805 + 	spin_unlock_irqrestore(&smmu->lock, flags);
839 806 
840 - 	spin_unlock_irqrestore(&as->lock, flags);
841 807 	domain->priv = as;
842 808 
843 - 	dev_dbg(smmu->dev, "smmu_as@%p\n", as);
844 - 	return 0;
809 + 	domain->geometry.aperture_start = smmu->iovmm_base;
810 + 	domain->geometry.aperture_end   = smmu->iovmm_base +
811 + 		smmu->page_count * SMMU_PAGE_SIZE - 1;
812 + 	domain->geometry.force_aperture = true;
845 813 
846 - err_alloc_pdir:
847 - 	spin_unlock_irqrestore(&as->lock, flags);
848 - 	return -ENODEV;
814 + 	dev_dbg(smmu->dev, "smmu_as@%p\n", as);
815 + 
816 + 	return 0;
849 817 }
850 818 
851 819 static void smmu_iommu_domain_destroy(struct iommu_domain *domain)
···
907 873 {
908 874 	struct smmu_device *smmu = dev_get_drvdata(dev);
909 875 	unsigned long flags;
876 + 	int err;
910 877 
911 878 	spin_lock_irqsave(&smmu->lock, flags);
912 - 	smmu_setup_regs(smmu);
879 + 	err = smmu_setup_regs(smmu);
913 880 	spin_unlock_irqrestore(&smmu->lock, flags);
914 - 	return 0;
881 + 	return err;
915 882 }
916 883 
917 884 static int tegra_smmu_probe(struct platform_device *pdev)
918 885 {
919 886 	struct smmu_device *smmu;
920 - 	struct resource *regs, *regs2, *window;
921 887 	struct device *dev = &pdev->dev;
922 - 	int i, err = 0;
888 + 	int i, asids, err = 0;
889 + 	dma_addr_t uninitialized_var(base);
890 + 	size_t bytes, uninitialized_var(size);
923 891 
924 892 	if (smmu_handle)
925 893 		return -EIO;
926 894 
927 895 	BUILD_BUG_ON(PAGE_SHIFT != SMMU_PAGE_SHIFT);
928 896 
929 - 	regs = platform_get_resource(pdev, IORESOURCE_MEM, 0);
930 - 	regs2 = platform_get_resource(pdev, IORESOURCE_MEM, 1);
931 - 	window = platform_get_resource(pdev, IORESOURCE_MEM, 2);
932 - 	if (!regs || !regs2 || !window) {
933 - 		dev_err(dev, "No SMMU resources\n");
897 + 	if (of_property_read_u32(dev->of_node, "nvidia,#asids", &asids))
934 898 		return -ENODEV;
935 - 	}
936 899 
937 - 	smmu = devm_kzalloc(dev, sizeof(*smmu), GFP_KERNEL);
900 + 	bytes = sizeof(*smmu) + asids * sizeof(*smmu->as);
901 + 	smmu = devm_kzalloc(dev, bytes, GFP_KERNEL);
938 902 	if (!smmu) {
939 903 		dev_err(dev, "failed to allocate smmu_device\n");
940 904 		return -ENOMEM;
941 905 	}
942 906 
943 - 	smmu->dev = dev;
944 - 	smmu->num_as = SMMU_NUM_ASIDS;
945 - 	smmu->iovmm_base = (unsigned long)window->start;
946 - 	smmu->page_count = resource_size(window) >> SMMU_PAGE_SHIFT;
947 - 	smmu->regs = devm_ioremap(dev, regs->start, resource_size(regs));
948 - 	smmu->regs_ahbarb = devm_ioremap(dev, regs2->start,
949 - 					 resource_size(regs2));
950 - 	if (!smmu->regs || !smmu->regs_ahbarb) {
951 - 		dev_err(dev, "failed to remap SMMU registers\n");
952 - 		err = -ENXIO;
953 - 		goto fail;
907 + 	for (i = 0; i < ARRAY_SIZE(smmu->regs); i++) {
908 + 		struct resource *res;
909 + 
910 + 		res = platform_get_resource(pdev, IORESOURCE_MEM, i);
911 + 		if (!res)
912 + 			return -ENODEV;
913 + 		smmu->regs[i] = devm_request_and_ioremap(&pdev->dev, res);
914 + 		if (!smmu->regs[i])
915 + 			return -EBUSY;
954 916 	}
917 + 
918 + 	err = of_get_dma_window(dev->of_node, NULL, 0, NULL, &base, &size);
919 + 	if (err)
920 + 		return -ENODEV;
921 + 
922 + 	if (size & SMMU_PAGE_MASK)
923 + 		return -EINVAL;
924 + 
925 + 	size >>= SMMU_PAGE_SHIFT;
926 + 	if (!size)
927 + 		return -EINVAL;
928 + 
929 + 	smmu->ahb = of_parse_phandle(dev->of_node, "nvidia,ahb", 0);
930 + 	if (!smmu->ahb)
931 + 		return -ENODEV;
932 + 
933 + 	smmu->dev = dev;
934 + 	smmu->num_as = asids;
935 + 	smmu->iovmm_base = base;
936 + 	smmu->page_count = size;
955 937 
956 938 	smmu->translation_enable_0 = ~0;
957 939 	smmu->translation_enable_1 = ~0;
958 940 	smmu->translation_enable_2 = ~0;
959 941 	smmu->asid_security = 0;
960 - 
961 - 	smmu->as = devm_kzalloc(dev,
962 - 			sizeof(smmu->as[0]) * smmu->num_as, GFP_KERNEL);
963 - 	if (!smmu->as) {
964 - 		dev_err(dev, "failed to allocate smmu_as\n");
965 - 		err = -ENOMEM;
966 - 		goto fail;
967 - 	}
968 942 
969 943 	for (i = 0; i < smmu->num_as; i++) {
970 944 		struct smmu_as *as = &smmu->as[i];
···
987 945 		INIT_LIST_HEAD(&as->client);
988 946 	}
989 947 	spin_lock_init(&smmu->lock);
990 - 	smmu_setup_regs(smmu);
948 + 	err = smmu_setup_regs(smmu);
949 + 	if (err)
950 + 		return err;
991 951 	platform_set_drvdata(pdev, smmu);
992 952 
993 953 	smmu->avp_vector_page = alloc_page(GFP_KERNEL);
994 954 	if (!smmu->avp_vector_page)
995 - 		goto fail;
955 + 		return -ENOMEM;
996 956 
997 957 	smmu_handle = smmu;
998 958 	return 0;
999 - 
1000 - fail:
1001 - 	if (smmu->avp_vector_page)
1002 - 		__free_page(smmu->avp_vector_page);
1003 - 	if (smmu->regs)
1004 - 		devm_iounmap(dev, smmu->regs);
1005 - 	if (smmu->regs_ahbarb)
1006 - 		devm_iounmap(dev, smmu->regs_ahbarb);
1007 - 	if (smmu && smmu->as) {
1008 - 		for (i = 0; i < smmu->num_as; i++) {
1009 - 			if (smmu->as[i].pdir_page) {
1010 - 				ClearPageReserved(smmu->as[i].pdir_page);
1011 - 				__free_page(smmu->as[i].pdir_page);
1012 - 			}
1013 - 		}
1014 - 		devm_kfree(dev, smmu->as);
1015 - 	}
1016 - 	devm_kfree(dev, smmu);
1017 - 	return err;
1018 959 }
1019 960 
1020 961 static int tegra_smmu_remove(struct platform_device *pdev)
1021 962 {
1022 963 	struct smmu_device *smmu = platform_get_drvdata(pdev);
1023 - 	struct device *dev = smmu->dev;
964 + 	int i;
1024 965 
1025 966 	smmu_write(smmu, SMMU_CONFIG_DISABLE, SMMU_CONFIG);
1026 - 	platform_set_drvdata(pdev, NULL);
1027 - 	if (smmu->as) {
1028 - 		int i;
1029 - 
1030 - 		for (i = 0; i < smmu->num_as; i++)
1031 - 			free_pdir(&smmu->as[i]);
1032 - 		devm_kfree(dev, smmu->as);
1033 - 	}
1034 - 	if (smmu->avp_vector_page)
1035 - 		__free_page(smmu->avp_vector_page);
1036 - 	if (smmu->regs)
1037 - 		devm_iounmap(dev, smmu->regs);
1038 - 	if (smmu->regs_ahbarb)
1039 - 		devm_iounmap(dev, smmu->regs_ahbarb);
1040 - 	devm_kfree(dev, smmu);
967 + 	for (i = 0; i < smmu->num_as; i++)
968 + 		free_pdir(&smmu->as[i]);
969 + 	__free_page(smmu->avp_vector_page);
1041 970 	smmu_handle = NULL;
1042 971 	return 0;
1043 972 }
···
1018 1005 	.resume = tegra_smmu_resume,
1019 1006 };
1020 1007 
1008 + #ifdef CONFIG_OF
1009 + static struct of_device_id tegra_smmu_of_match[] __devinitdata = {
1010 + 	{ .compatible	= "nvidia,tegra30-smmu", },
1011 + 	{ },
1012 + };
1013 + MODULE_DEVICE_TABLE(of, tegra_smmu_of_match);
1014 + #endif
1015 + 
1021 1016 static struct platform_driver tegra_smmu_driver = {
1022 1017 	.probe		= tegra_smmu_probe,
1023 1018 	.remove		= tegra_smmu_remove,
···
1033 1012 		.owner	= THIS_MODULE,
1034 1013 		.name	= "tegra-smmu",
1035 1014 		.pm	= &tegra_smmu_pm_ops,
1015 + 		.of_match_table = of_match_ptr(tegra_smmu_of_match),
1036 1016 	},
1037 1017 };
···
1053 1031 
1054 1032 MODULE_DESCRIPTION("IOMMU API for SMMU in Tegra30");
1055 1033 MODULE_AUTHOR("Hiroshi DOYU <hdoyu@nvidia.com>");
1034 + MODULE_ALIAS("platform:tegra-smmu");
1056 1035 MODULE_LICENSE("GPL v2");
+2
include/linux/device.h
···
36 36 struct bus_type;
37 37 struct device_node;
38 38 struct iommu_ops;
39 + struct iommu_group;
39 40 
40 41 struct bus_attribute {
41 42 	struct attribute	attr;
···
688 687 	const struct attribute_group **groups;	/* optional groups */
689 688 
690 689 	void	(*release)(struct device *dev);
690 + 	struct iommu_group	*iommu_group;
691 691 };
692 692 
693 693 /* Get the wakeup routines, which depend on struct device */
+137 -3
include/linux/iommu.h
···
26 26 #define IOMMU_CACHE	(4) /* DMA cache coherency */
27 27 
28 28 struct iommu_ops;
29 + struct iommu_group;
29 30 struct bus_type;
30 31 struct device;
31 32 struct iommu_domain;
···
38 37 typedef int (*iommu_fault_handler_t)(struct iommu_domain *,
39 38 			struct device *, unsigned long, int, void *);
40 39 
40 + struct iommu_domain_geometry {
41 + 	dma_addr_t aperture_start; /* First address that can be mapped    */
42 + 	dma_addr_t aperture_end;   /* Last address that can be mapped     */
43 + 	bool force_aperture;       /* DMA only allowed in mappable range? */
44 + };
45 + 
41 46 struct iommu_domain {
42 47 	struct iommu_ops *ops;
43 48 	void *priv;
44 49 	iommu_fault_handler_t handler;
45 50 	void *handler_token;
51 + 	struct iommu_domain_geometry geometry;
46 52 };
47 53 
48 54 #define IOMMU_CAP_CACHE_COHERENCY	0x1
49 55 #define IOMMU_CAP_INTR_REMAP		0x2	/* isolates device intrs */
56 + 
57 + enum iommu_attr {
58 + 	DOMAIN_ATTR_MAX,
59 + 	DOMAIN_ATTR_GEOMETRY,
60 + };
50 61 
51 62 #ifdef CONFIG_IOMMU_API
52 63 
···
72 59  * @unmap: unmap a physically contiguous memory region from an iommu domain
73 60  * @iova_to_phys: translate iova to physical address
74 61  * @domain_has_cap: domain capabilities query
75 - * @commit: commit iommu domain
62 + * @add_device: add device to iommu grouping
63 + * @remove_device: remove device from iommu grouping
64 + * @domain_get_attr: Query domain attributes
65 + * @domain_set_attr: Change domain attributes
76 66  * @pgsize_bitmap: bitmap of supported page sizes
77 67  */
78 68 struct iommu_ops {
···
91 75 				    unsigned long iova);
92 76 	int (*domain_has_cap)(struct iommu_domain *domain,
93 77 			      unsigned long cap);
78 + 	int (*add_device)(struct device *dev);
79 + 	void (*remove_device)(struct device *dev);
94 80 	int (*device_group)(struct device *dev, unsigned int *groupid);
81 + 	int (*domain_get_attr)(struct iommu_domain *domain,
82 + 			       enum iommu_attr attr, void *data);
83 + 	int (*domain_set_attr)(struct iommu_domain *domain,
84 + 			       enum iommu_attr attr, void *data);
95 85 	unsigned long pgsize_bitmap;
96 86 };
87 + 
88 + #define IOMMU_GROUP_NOTIFY_ADD_DEVICE		1 /* Device added */
89 + #define IOMMU_GROUP_NOTIFY_DEL_DEVICE		2 /* Pre Device removed */
90 + #define IOMMU_GROUP_NOTIFY_BIND_DRIVER		3 /* Pre Driver bind */
91 + #define IOMMU_GROUP_NOTIFY_BOUND_DRIVER		4 /* Post Driver bind */
92 + #define IOMMU_GROUP_NOTIFY_UNBIND_DRIVER	5 /* Pre Driver unbind */
93 + #define IOMMU_GROUP_NOTIFY_UNBOUND_DRIVER	6 /* Post Driver unbind */
97 94 
98 95 extern int bus_set_iommu(struct bus_type *bus, struct iommu_ops *ops);
99 96 extern bool iommu_present(struct bus_type *bus);
···
126 97 				unsigned long cap);
127 98 extern void iommu_set_fault_handler(struct iommu_domain *domain,
128 99 			iommu_fault_handler_t handler, void *token);
129 - extern int iommu_device_group(struct device *dev, unsigned int *groupid);
100 + 
101 + extern int iommu_attach_group(struct iommu_domain *domain,
102 + 			      struct iommu_group *group);
103 + extern void iommu_detach_group(struct iommu_domain *domain,
104 + 			       struct iommu_group *group);
105 + extern struct iommu_group *iommu_group_alloc(void);
106 + extern void *iommu_group_get_iommudata(struct iommu_group *group);
107 + extern void iommu_group_set_iommudata(struct iommu_group *group,
108 + 				      void *iommu_data,
109 + 				      void (*release)(void *iommu_data));
110 + extern int iommu_group_set_name(struct iommu_group *group, const char *name);
111 + extern int iommu_group_add_device(struct iommu_group *group,
112 + 				  struct device *dev);
113 + extern void iommu_group_remove_device(struct device *dev);
114 + extern int iommu_group_for_each_dev(struct iommu_group *group, void *data,
115 + 				    int (*fn)(struct device *, void *));
116 + extern struct iommu_group *iommu_group_get(struct device *dev);
117 + extern void iommu_group_put(struct iommu_group *group);
118 + extern int iommu_group_register_notifier(struct iommu_group *group,
119 + 					 struct notifier_block *nb);
120 + extern int iommu_group_unregister_notifier(struct iommu_group *group,
121 + 					   struct notifier_block *nb);
122 + extern int iommu_group_id(struct iommu_group *group);
123 + 
124 + extern int iommu_domain_get_attr(struct iommu_domain *domain, enum iommu_attr,
125 + 				 void *data);
126 + extern int iommu_domain_set_attr(struct iommu_domain *domain, enum iommu_attr,
127 + 				 void *data);
130 128 
131 129 /**
132 130  * report_iommu_fault() - report about an IOMMU fault to the IOMMU framework
···
198 142 #else /* CONFIG_IOMMU_API */
199 143 
200 144 struct iommu_ops {};
145 + struct iommu_group {};
201 146 
202 147 static inline bool iommu_present(struct bus_type *bus)
203 148 {
···
254 197 {
255 198 }
256 199 
257 - static inline int iommu_device_group(struct device *dev, unsigned int *groupid)
200 + int iommu_attach_group(struct iommu_domain *domain, struct iommu_group *group)
258 201 {
259 202 	return -ENODEV;
203 + }
204 + 
205 + void iommu_detach_group(struct iommu_domain *domain, struct iommu_group *group)
206 + {
207 + }
208 + 
209 + struct iommu_group *iommu_group_alloc(void)
210 + {
211 + 	return ERR_PTR(-ENODEV);
212 + }
213 + 
214 + void *iommu_group_get_iommudata(struct iommu_group *group)
215 + {
216 + 	return NULL;
217 + }
218 + 
219 + void iommu_group_set_iommudata(struct iommu_group *group, void *iommu_data,
220 + 			       void (*release)(void *iommu_data))
221 + {
222 + }
223 + 
224 + int iommu_group_set_name(struct iommu_group *group, const char *name)
225 + {
226 + 	return -ENODEV;
227 + }
228 + 
229 + int iommu_group_add_device(struct iommu_group *group, struct device *dev)
230 + {
231 + 	return -ENODEV;
232 + }
233 + 
234 + void iommu_group_remove_device(struct device *dev)
235 + {
236 + }
237 + 
238 + int iommu_group_for_each_dev(struct iommu_group *group, void *data,
239 + 			     int (*fn)(struct device *, void *))
240 + {
241 + 	return -ENODEV;
242 + }
243 + 
244 + struct iommu_group *iommu_group_get(struct device *dev)
245 + {
246 + 	return NULL;
247 + }
248 + 
249 + void iommu_group_put(struct iommu_group *group)
250 + {
251 + }
252 + 
253 + int iommu_group_register_notifier(struct iommu_group *group,
254 + 				  struct notifier_block *nb)
255 + {
256 + 	return -ENODEV;
257 + }
258 + 
259 + int iommu_group_unregister_notifier(struct iommu_group *group,
260 + 				    struct notifier_block *nb)
261 + {
262 + 	return 0;
263 + }
264 + 
265 + int iommu_group_id(struct iommu_group *group)
266 + {
267 + 	return -ENODEV;
268 + }
269 + 
270 + static inline int iommu_domain_get_attr(struct iommu_domain *domain,
271 + 					enum iommu_attr attr, void *data)
272 + {
273 + 	return -EINVAL;
274 + }
275 + 
276 + static inline int iommu_domain_set_attr(struct iommu_domain *domain,
277 + 					enum iommu_attr attr, void *data)
278 + {
279 + 	return -EINVAL;
260 280 }
261 281 
262 282 #endif /* CONFIG_IOMMU_API */
+21
include/linux/of_iommu.h
···
1 + #ifndef __OF_IOMMU_H
2 + #define __OF_IOMMU_H
3 + 
4 + #ifdef CONFIG_OF_IOMMU
5 + 
6 + extern int of_get_dma_window(struct device_node *dn, const char *prefix,
7 + 			     int index, unsigned long *busno, dma_addr_t *addr,
8 + 			     size_t *size);
9 + 
10 + #else
11 + 
12 + static inline int of_get_dma_window(struct device_node *dn, const char *prefix,
13 + 				    int index, unsigned long *busno,
14 + 				    dma_addr_t *addr, size_t *size)
15 + {
16 + 	return -EINVAL;
17 + }
18 + 
19 + #endif /* CONFIG_OF_IOMMU */
20 + 
21 + #endif /* __OF_IOMMU_H */
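For context, a device-tree node along the lines of what the reworked tegra_smmu_probe() consumes: three `reg` entries (one per register bank, matching NUM_SMMU_REG_BANKS and the 0x10/0x1f0/0x228 bank offsets), `nvidia,#asids`, a `dma-window` parsed by of_get_dma_window(), and the `nvidia,ahb` phandle. The exact addresses and the `&ahb` label are illustrative, not taken from this merge.

```
smmu: smmu@7000f010 {
	compatible = "nvidia,tegra30-smmu";
	reg = <0x7000f010 0x02c		/* bank 0: MC config */
	       0x7000f1f0 0x010		/* bank 1 */
	       0x7000f228 0x05c>;	/* bank 2 */
	nvidia,#asids = <4>;		/* read into smmu->num_as */
	dma-window = <0x0 0x40000000>;	/* IOVA start, length */
	nvidia,ahb = <&ahb>;		/* for tegra_ahb_enable_smmu() */
};
```

With `#address-cells`/`#size-cells` of 1, of_get_dma_window() decodes the `dma-window` above into base 0x0 and size 1 GiB, which the probe then checks against SMMU_PAGE_MASK before converting to a page count.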