···
-* Mediatek IOMMU Architecture Implementation
-
-  Some Mediatek SOCs contain a Multimedia Memory Management Unit (M4U), and
-this M4U have two generations of HW architecture. Generation one uses flat
-pagetable, and only supports 4K size page mapping. Generation two uses the
-ARM Short-Descriptor translation table format for address translation.
-
-  About the M4U Hardware Block Diagram, please check below:
-
-              EMI (External Memory Interface)
-               |
-              m4u (Multimedia Memory Management Unit)
-               |
-          +--------+
-          |        |
-      gals0-rx   gals1-rx    (Global Async Local Sync rx)
-          |        |
-          |        |
-      gals0-tx   gals1-tx    (Global Async Local Sync tx)
-          |        |            Some SoCs may have GALS.
-          +--------+
-               |
-           SMI Common(Smart Multimedia Interface Common)
-               |
-       +----------------+-------
-       |                |
-       |             gals-rx        There may be GALS in some larbs.
-       |                |
-       |                |
-       |             gals-tx
-       |                |
-   SMI larb0        SMI larb1   ... SoCs have several SMI local arbiter(larb).
-   (display)         (vdec)
-       |                |
-       |                |
- +-----+-----+     +----+----+
- |     |     |     |    |    |
- |     |     |...  |    |    |  ... There are different ports in each larb.
- |     |     |     |    |    |
-OVL0 RDMA0 WDMA0  MC   PP   VLD
-
-  As above, The Multimedia HW will go through SMI and M4U while it
-access EMI. SMI is a bridge between m4u and the Multimedia HW. It contain
-smi local arbiter and smi common. It will control whether the Multimedia
-HW should go though the m4u for translation or bypass it and talk
-directly with EMI. And also SMI help control the power domain and clocks for
-each local arbiter.
-  Normally we specify a local arbiter(larb) for each multimedia HW
-like display, video decode, and camera. And there are different ports
-in each larb. Take a example, There are many ports like MC, PP, VLD in the
-video decode local arbiter, all these ports are according to the video HW.
-  In some SoCs, there may be a GALS(Global Async Local Sync) module between
-smi-common and m4u, and additional GALS module between smi-larb and
-smi-common. GALS can been seen as a "asynchronous fifo" which could help
-synchronize for the modules in different clock frequency.
-
-Required properties:
-- compatible : must be one of the following string:
-        "mediatek,mt2701-m4u" for mt2701 which uses generation one m4u HW.
-        "mediatek,mt2712-m4u" for mt2712 which uses generation two m4u HW.
-        "mediatek,mt6779-m4u" for mt6779 which uses generation two m4u HW.
-        "mediatek,mt7623-m4u", "mediatek,mt2701-m4u" for mt7623 which uses
-                                                     generation one m4u HW.
-        "mediatek,mt8167-m4u" for mt8167 which uses generation two m4u HW.
-        "mediatek,mt8173-m4u" for mt8173 which uses generation two m4u HW.
-        "mediatek,mt8183-m4u" for mt8183 which uses generation two m4u HW.
-- reg : m4u register base and size.
-- interrupts : the interrupt of m4u.
-- clocks : must contain one entry for each clock-names.
-- clock-names : Only 1 optional clock:
-  - "bclk": the block clock of m4u.
-  Here is the list which require this "bclk":
-  - mt2701, mt2712, mt7623 and mt8173.
-  Note that m4u use the EMI clock which always has been enabled before kernel
-  if there is no this "bclk".
-- mediatek,larbs : List of phandle to the local arbiters in the current Socs.
-        Refer to bindings/memory-controllers/mediatek,smi-larb.txt. It must sort
-        according to the local arbiter index, like larb0, larb1, larb2...
-- #iommu-cells : must be 1. This is the mtk_m4u_id according to the HW.
-        Specifies the mtk_m4u_id as defined in
-        dt-binding/memory/mt2701-larb-port.h for mt2701, mt7623
-        dt-binding/memory/mt2712-larb-port.h for mt2712,
-        dt-binding/memory/mt6779-larb-port.h for mt6779,
-        dt-binding/memory/mt8167-larb-port.h for mt8167,
-        dt-binding/memory/mt8173-larb-port.h for mt8173, and
-        dt-binding/memory/mt8183-larb-port.h for mt8183.
-
-Example:
-        iommu: iommu@10205000 {
-                compatible = "mediatek,mt8173-m4u";
-                reg = <0 0x10205000 0 0x1000>;
-                interrupts = <GIC_SPI 139 IRQ_TYPE_LEVEL_LOW>;
-                clocks = <&infracfg CLK_INFRA_M4U>;
-                clock-names = "bclk";
-                mediatek,larbs = <&larb0 &larb1 &larb2 &larb3 &larb4 &larb5>;
-                #iommu-cells = <1>;
-        };
-
-Example for a client device:
-        display {
-                compatible = "mediatek,mt8173-disp";
-                iommus = <&iommu M4U_PORT_DISP_OVL0>,
-                         <&iommu M4U_PORT_DISP_RDMA0>;
-                ...
-        };
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/iommu/mediatek,iommu.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: MediaTek IOMMU Architecture Implementation
+
+maintainers:
+  - Yong Wu <yong.wu@mediatek.com>
+
+description: |+
+  Some MediaTek SOCs contain a Multimedia Memory Management Unit (M4U). This
+  M4U has two generations of HW architecture: generation one uses a flat
+  pagetable and only supports 4K page mappings; generation two uses the
+  ARM Short-Descriptor translation table format for address translation.
+
+  The M4U hardware block diagram is shown below:
+
+                EMI (External Memory Interface)
+                 |
+                m4u (Multimedia Memory Management Unit)
+                 |
+            +--------+
+            |        |
+        gals0-rx   gals1-rx    (Global Async Local Sync rx)
+            |        |
+            |        |
+        gals0-tx   gals1-tx    (Global Async Local Sync tx)
+            |        |            Some SoCs may have GALS.
+            +--------+
+                 |
+             SMI Common(Smart Multimedia Interface Common)
+                 |
+         +----------------+-------
+         |                |
+         |             gals-rx        There may be GALS in some larbs.
+         |                |
+         |                |
+         |             gals-tx
+         |                |
+     SMI larb0        SMI larb1   ... SoCs have several SMI local arbiter(larb).
+     (display)         (vdec)
+         |                |
+         |                |
+   +-----+-----+     +----+----+
+   |     |     |     |    |    |
+   |     |     |...  |    |    |  ... There are different ports in each larb.
+   |     |     |     |    |    |
+  OVL0 RDMA0 WDMA0  MC   PP   VLD
+
+  As shown above, the multimedia HW goes through the SMI and M4U when it
+  accesses EMI. The SMI is a bridge between the m4u and the multimedia HW: it
+  contains the SMI local arbiters and SMI common, and it controls whether the
+  multimedia HW goes through the m4u for translation or bypasses it and talks
+  directly to EMI. The SMI also helps control the power domain and clocks for
+  each local arbiter.
+
+  Normally we specify a local arbiter (larb) for each multimedia HW block,
+  such as display, video decode, and camera, and there are different ports in
+  each larb. For example, the video decode local arbiter has ports like MC,
+  PP and VLD, each corresponding to a function of the video HW.
+
+  In some SoCs there may be a GALS (Global Async Local Sync) module between
+  smi-common and the m4u, and an additional GALS module between smi-larb and
+  smi-common. A GALS can be seen as an asynchronous FIFO that helps
+  synchronize modules running at different clock frequencies.
+
+properties:
+  compatible:
+    oneOf:
+      - enum:
+          - mediatek,mt2701-m4u  # generation one
+          - mediatek,mt2712-m4u  # generation two
+          - mediatek,mt6779-m4u  # generation two
+          - mediatek,mt8167-m4u  # generation two
+          - mediatek,mt8173-m4u  # generation two
+          - mediatek,mt8183-m4u  # generation two
+          - mediatek,mt8192-m4u  # generation two
+
+      - description: mt7623 generation one
+        items:
+          - const: mediatek,mt7623-m4u
+          - const: mediatek,mt2701-m4u
+
+  reg:
+    maxItems: 1
+
+  interrupts:
+    maxItems: 1
+
+  clocks:
+    items:
+      - description: bclk is the block clock.
+
+  clock-names:
+    items:
+      - const: bclk
+
+  mediatek,larbs:
+    $ref: /schemas/types.yaml#/definitions/phandle-array
+    minItems: 1
+    maxItems: 32
+    description: |
+      List of phandles to the local arbiters in the current SoC.
+      Refer to bindings/memory-controllers/mediatek,smi-larb.yaml. It must be
+      sorted by local arbiter index, i.e. larb0, larb1, larb2...
+
+  '#iommu-cells':
+    const: 1
+    description: |
+      This is the mtk_m4u_id according to the HW. Specifies the mtk_m4u_id as
+      defined in
+      dt-binding/memory/mt2701-larb-port.h for mt2701 and mt7623,
+      dt-binding/memory/mt2712-larb-port.h for mt2712,
+      dt-binding/memory/mt6779-larb-port.h for mt6779,
+      dt-binding/memory/mt8167-larb-port.h for mt8167,
+      dt-binding/memory/mt8173-larb-port.h for mt8173,
+      dt-binding/memory/mt8183-larb-port.h for mt8183,
+      dt-binding/memory/mt8192-larb-port.h for mt8192.
+
+  power-domains:
+    maxItems: 1
+
+required:
+  - compatible
+  - reg
+  - interrupts
+  - mediatek,larbs
+  - '#iommu-cells'
+
+allOf:
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
+              - mediatek,mt2701-m4u
+              - mediatek,mt2712-m4u
+              - mediatek,mt8173-m4u
+              - mediatek,mt8192-m4u
+
+    then:
+      required:
+        - clocks
+
+  - if:
+      properties:
+        compatible:
+          enum:
+            - mediatek,mt8192-m4u
+
+    then:
+      required:
+        - power-domains
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/clock/mt8173-clk.h>
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+
+    iommu: iommu@10205000 {
+        compatible = "mediatek,mt8173-m4u";
+        reg = <0x10205000 0x1000>;
+        interrupts = <GIC_SPI 139 IRQ_TYPE_LEVEL_LOW>;
+        clocks = <&infracfg CLK_INFRA_M4U>;
+        clock-names = "bclk";
+        mediatek,larbs = <&larb0 &larb1 &larb2
+                          &larb3 &larb4 &larb5>;
+        #iommu-cells = <1>;
+    };
+
+  - |
+    #include <dt-bindings/memory/mt8173-larb-port.h>
+
+    /* Example for a client device */
+    display {
+        compatible = "mediatek,mt8173-disp";
+        iommus = <&iommu M4U_PORT_DISP_OVL0>,
+                 <&iommu M4U_PORT_DISP_RDMA0>;
+    };
+9
MAINTAINERS
···
 F:	Documentation/devicetree/bindings/i2c/i2c-mt65xx.txt
 F:	drivers/i2c/busses/i2c-mt65xx.c
 
+MEDIATEK IOMMU DRIVER
+M:	Yong Wu <yong.wu@mediatek.com>
+L:	iommu@lists.linux-foundation.org
+L:	linux-mediatek@lists.infradead.org (moderated for non-subscribers)
+S:	Supported
+F:	Documentation/devicetree/bindings/iommu/mediatek*
+F:	drivers/iommu/mtk_iommu*
+F:	include/dt-bindings/memory/mt*-port.h
+
 MEDIATEK JPEG DRIVER
 M:	Rick Chang <rick.chang@mediatek.com>
 M:	Bin Liu <bin.liu@mediatek.com>
+1
drivers/iommu/amd/Kconfig
···
 	select IOMMU_API
 	select IOMMU_IOVA
 	select IOMMU_DMA
+	select IOMMU_IO_PGTABLE
 	depends on X86_64 && PCI && ACPI && HAVE_CMPXCHG_DOUBLE
 	help
 	  With this option you can enable support for AMD IOMMU hardware in
drivers/iommu/amd/amd_iommu_types.h
···
 #include <linux/spinlock.h>
 #include <linux/pci.h>
 #include <linux/irqreturn.h>
+#include <linux/io-pgtable.h>
 
 /*
  * Maximum number of IOMMUs supported
···
 
 #define GA_GUEST_NR		0x1
 
+#define IOMMU_IN_ADDR_BIT_SIZE  52
+#define IOMMU_OUT_ADDR_BIT_SIZE 52
+
+/*
+ * This bitmap is used to advertise the page sizes our hardware support
+ * to the IOMMU core, which will then use this information to split
+ * physically contiguous memory regions it is mapping into page sizes
+ * that we support.
+ *
+ * 512GB Pages are not supported due to a hardware bug
+ */
+#define AMD_IOMMU_PGSIZES	((~0xFFFUL) & ~(2ULL << 38))
+
 /* Bit value definition for dte irq remapping fields*/
 #define DTE_IRQ_PHYS_ADDR_MASK	(((1ULL << 45)-1) << 6)
 #define DTE_IRQ_REMAP_INTCTL_MASK	(0x3ULL << 60)
···
 
 #define AMD_IOMMU_FLAG_TRANS_PRE_ENABLED	(1 << 0)
 
+#define io_pgtable_to_data(x) \
+	container_of((x), struct amd_io_pgtable, iop)
+
+#define io_pgtable_ops_to_data(x) \
+	io_pgtable_to_data(io_pgtable_ops_to_pgtable(x))
+
+#define io_pgtable_ops_to_domain(x) \
+	container_of(io_pgtable_ops_to_data(x), \
+		     struct protection_domain, iop)
+
+#define io_pgtable_cfg_to_data(x) \
+	container_of((x), struct amd_io_pgtable, pgtbl_cfg)
+
+struct amd_io_pgtable {
+	struct io_pgtable_cfg	pgtbl_cfg;
+	struct io_pgtable	iop;
+	int			mode;
+	u64			*root;
+	atomic64_t		pt_root;	/* pgtable root and pgtable mode */
+};
+
 /*
  * This structure contains generic data for  IOMMU protection domains
  * independent of their use.
···
 	struct list_head dev_list; /* List of all devices in this domain */
 	struct iommu_domain domain; /* generic domain handle used by
 				       iommu core code */
+	struct amd_io_pgtable iop;
 	spinlock_t lock;	/* mostly used to lock the page table*/
 	u16 id;			/* the domain id written to the device table */
-	atomic64_t pt_root;	/* pgtable root and pgtable mode */
 	int glx;		/* Number of levels for GCR3 table */
 	u64 *gcr3_tbl;		/* Guest CR3 table */
 	unsigned long flags;	/* flags to find out type of domain */
 	unsigned dev_cnt;	/* devices assigned to this domain */
 	unsigned dev_iommu[MAX_IOMMUS]; /* per-IOMMU reference count */
-};
-
-/* For decocded pt_root */
-struct domain_pgtable {
-	int mode;
-	u64 *root;
 };
 
 /*
+39-15
drivers/iommu/amd/init.c
···
 #include <linux/acpi.h>
 #include <linux/list.h>
 #include <linux/bitmap.h>
+#include <linux/delay.h>
 #include <linux/slab.h>
 #include <linux/syscore_ops.h>
 #include <linux/interrupt.h>
···
 bool amd_iommu_dump;
 bool amd_iommu_irq_remap __read_mostly;
 
+enum io_pgtable_fmt amd_iommu_pgtable = AMD_IOMMU_V1;
+
 int amd_iommu_guest_ir = AMD_IOMMU_GUEST_IR_VAPIC;
 static int amd_iommu_xt_mode = IRQ_REMAP_XAPIC_MODE;
···
 static int amd_iommu_enable_interrupts(void);
 static int __init iommu_go_to_state(enum iommu_init_state state);
 static void init_device_table_dma(void);
+static int iommu_pc_get_set_reg(struct amd_iommu *iommu, u8 bank, u8 cntr,
+				u8 fxn, u64 *value, bool is_write);
 
 static bool amd_iommu_pre_enabled = true;
···
 	return 0;
 }
 
-static int iommu_pc_get_set_reg(struct amd_iommu *iommu, u8 bank, u8 cntr,
-				u8 fxn, u64 *value, bool is_write);
-
-static void init_iommu_perf_ctr(struct amd_iommu *iommu)
+static void __init init_iommu_perf_ctr(struct amd_iommu *iommu)
 {
+	int retry;
 	struct pci_dev *pdev = iommu->dev;
-	u64 val = 0xabcd, val2 = 0, save_reg = 0;
+	u64 val = 0xabcd, val2 = 0, save_reg, save_src;
 
 	if (!iommu_feature(iommu, FEATURE_PC))
 		return;
···
 	amd_iommu_pc_present = true;
 
 	/* save the value to restore, if writable */
-	if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, false))
+	if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, false) ||
+	    iommu_pc_get_set_reg(iommu, 0, 0, 8, &save_src, false))
+		goto pc_false;
+
+	/*
+	 * Disable power gating by programing the performance counter
+	 * source to 20 (i.e. counts the reads and writes from/to IOMMU
+	 * Reserved Register [MMIO Offset 1FF8h] that are ignored.),
+	 * which never get incremented during this init phase.
+	 * (Note: The event is also deprecated.)
+	 */
+	val = 20;
+	if (iommu_pc_get_set_reg(iommu, 0, 0, 8, &val, true))
 		goto pc_false;
 
 	/* Check if the performance counters can be written to */
-	if ((iommu_pc_get_set_reg(iommu, 0, 0, 0, &val, true)) ||
-	    (iommu_pc_get_set_reg(iommu, 0, 0, 0, &val2, false)) ||
-	    (val != val2))
-		goto pc_false;
+	val = 0xabcd;
+	for (retry = 5; retry; retry--) {
+		if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &val, true) ||
+		    iommu_pc_get_set_reg(iommu, 0, 0, 0, &val2, false) ||
+		    val2)
+			break;
+
+		/* Wait about 20 msec for power gating to disable and retry. */
+		msleep(20);
+	}
 
 	/* restore */
-	if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, true))
+	if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, true) ||
+	    iommu_pc_get_set_reg(iommu, 0, 0, 8, &save_src, true))
+		goto pc_false;
+
+	if (val != val2)
 		goto pc_false;
 
 	pci_info(pdev, "IOMMU performance counters supported\n");
···
 	struct pci_dev *pdev = iommu->dev;
 	int i;
 
-	pci_info(pdev, "Found IOMMU cap 0x%hx\n", iommu->cap_ptr);
+	pci_info(pdev, "Found IOMMU cap 0x%x\n", iommu->cap_ptr);
 
 	if (iommu->cap & (1 << IOMMU_CAP_EFR)) {
 		pci_info(pdev, "Extended features (%#llx):",
···
 static int __init amd_iommu_init_pci(void)
 {
 	struct amd_iommu *iommu;
-	int ret = 0;
+	int ret;
 
 	for_each_iommu(iommu) {
 		ret = iommu_init_pci(iommu);
···
 static int __init early_amd_iommu_init(void)
 {
 	struct acpi_table_header *ivrs_base;
+	int i, remap_cache_sz, ret;
 	acpi_status status;
-	int i, remap_cache_sz, ret = 0;
 	u32 pci_id;
 
 	if (!amd_iommu_detected)
···
 out:
 	/* Don't leak any ACPI memory */
 	acpi_put_table(ivrs_base);
-	ivrs_base = NULL;
 
 	return ret;
 }
+558
drivers/iommu/amd/io_pgtable.c
···
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * CPU-agnostic AMD IO page table allocator.
+ *
+ * Copyright (C) 2020 Advanced Micro Devices, Inc.
+ * Author: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
+ */
+
+#define pr_fmt(fmt)     "AMD-Vi: " fmt
+#define dev_fmt(fmt)    pr_fmt(fmt)
+
+#include <linux/atomic.h>
+#include <linux/bitops.h>
+#include <linux/io-pgtable.h>
+#include <linux/kernel.h>
+#include <linux/sizes.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+#include <linux/dma-mapping.h>
+
+#include <asm/barrier.h>
+
+#include "amd_iommu_types.h"
+#include "amd_iommu.h"
+
+static void v1_tlb_flush_all(void *cookie)
+{
+}
+
+static void v1_tlb_flush_walk(unsigned long iova, size_t size,
+			      size_t granule, void *cookie)
+{
+}
+
+static void v1_tlb_add_page(struct iommu_iotlb_gather *gather,
+			    unsigned long iova, size_t granule,
+			    void *cookie)
+{
+}
+
+static const struct iommu_flush_ops v1_flush_ops = {
+	.tlb_flush_all	= v1_tlb_flush_all,
+	.tlb_flush_walk = v1_tlb_flush_walk,
+	.tlb_add_page	= v1_tlb_add_page,
+};
+
+/*
+ * Helper function to get the first pte of a large mapping
+ */
+static u64 *first_pte_l7(u64 *pte, unsigned long *page_size,
+			 unsigned long *count)
+{
+	unsigned long pte_mask, pg_size, cnt;
+	u64 *fpte;
+
+	pg_size = PTE_PAGE_SIZE(*pte);
+	cnt     = PAGE_SIZE_PTE_COUNT(pg_size);
+	pte_mask = ~((cnt << 3) - 1);
+	fpte     = (u64 *)(((unsigned long)pte) & pte_mask);
+
+	if (page_size)
+		*page_size = pg_size;
+
+	if (count)
+		*count = cnt;
+
+	return fpte;
+}
+
+/****************************************************************************
+ *
+ * The functions below are used the create the page table mappings for
+ * unity mapped regions.
+ *
+ ****************************************************************************/
+
+static void free_page_list(struct page *freelist)
+{
+	while (freelist != NULL) {
+		unsigned long p = (unsigned long)page_address(freelist);
+
+		freelist = freelist->freelist;
+		free_page(p);
+	}
+}
+
+static struct page *free_pt_page(unsigned long pt, struct page *freelist)
+{
+	struct page *p = virt_to_page((void *)pt);
+
+	p->freelist = freelist;
+
+	return p;
+}
+
+#define DEFINE_FREE_PT_FN(LVL, FN)						\
+static struct page *free_pt_##LVL (unsigned long __pt, struct page *freelist)	\
+{										\
+	unsigned long p;							\
+	u64 *pt;								\
+	int i;									\
+										\
+	pt = (u64 *)__pt;							\
+										\
+	for (i = 0; i < 512; ++i) {						\
+		/* PTE present? */						\
+		if (!IOMMU_PTE_PRESENT(pt[i]))					\
+			continue;						\
+										\
+		/* Large PTE? */						\
+		if (PM_PTE_LEVEL(pt[i]) == 0 ||					\
+		    PM_PTE_LEVEL(pt[i]) == 7)					\
+			continue;						\
+										\
+		p = (unsigned long)IOMMU_PTE_PAGE(pt[i]);			\
+		freelist = FN(p, freelist);					\
+	}									\
+										\
+	return free_pt_page((unsigned long)pt, freelist);			\
+}
+
+DEFINE_FREE_PT_FN(l2, free_pt_page)
+DEFINE_FREE_PT_FN(l3, free_pt_l2)
+DEFINE_FREE_PT_FN(l4, free_pt_l3)
+DEFINE_FREE_PT_FN(l5, free_pt_l4)
+DEFINE_FREE_PT_FN(l6, free_pt_l5)
+
+static struct page *free_sub_pt(unsigned long root, int mode,
+				struct page *freelist)
+{
+	switch (mode) {
+	case PAGE_MODE_NONE:
+	case PAGE_MODE_7_LEVEL:
+		break;
+	case PAGE_MODE_1_LEVEL:
+		freelist = free_pt_page(root, freelist);
+		break;
+	case PAGE_MODE_2_LEVEL:
+		freelist = free_pt_l2(root, freelist);
+		break;
+	case PAGE_MODE_3_LEVEL:
+		freelist = free_pt_l3(root, freelist);
+		break;
+	case PAGE_MODE_4_LEVEL:
+		freelist = free_pt_l4(root, freelist);
+		break;
+	case PAGE_MODE_5_LEVEL:
+		freelist = free_pt_l5(root, freelist);
+		break;
+	case PAGE_MODE_6_LEVEL:
+		freelist = free_pt_l6(root, freelist);
+		break;
+	default:
+		BUG();
+	}
+
+	return freelist;
+}
+
+void amd_iommu_domain_set_pgtable(struct protection_domain *domain,
+				  u64 *root, int mode)
+{
+	u64 pt_root;
+
+	/* lowest 3 bits encode pgtable mode */
+	pt_root = mode & 7;
+	pt_root |= (u64)root;
+
+	amd_iommu_domain_set_pt_root(domain, pt_root);
+}
+
+/*
+ * This function is used to add another level to an IO page table. Adding
+ * another level increases the size of the address space by 9 bits to a size up
+ * to 64 bits.
+ */
+static bool increase_address_space(struct protection_domain *domain,
+				   unsigned long address,
+				   gfp_t gfp)
+{
+	unsigned long flags;
+	bool ret = true;
+	u64 *pte;
+
+	spin_lock_irqsave(&domain->lock, flags);
+
+	if (address <= PM_LEVEL_SIZE(domain->iop.mode))
+		goto out;
+
+	ret = false;
+	if (WARN_ON_ONCE(domain->iop.mode == PAGE_MODE_6_LEVEL))
+		goto out;
+
+	pte = (void *)get_zeroed_page(gfp);
+	if (!pte)
+		goto out;
+
+	*pte = PM_LEVEL_PDE(domain->iop.mode, iommu_virt_to_phys(domain->iop.root));
+
+	domain->iop.root = pte;
+	domain->iop.mode += 1;
+	amd_iommu_update_and_flush_device_table(domain);
+	amd_iommu_domain_flush_complete(domain);
+
+	/*
+	 * Device Table needs to be updated and flushed before the new root can
+	 * be published.
+	 */
+	amd_iommu_domain_set_pgtable(domain, pte, domain->iop.mode);
+
+	ret = true;
+
+out:
+	spin_unlock_irqrestore(&domain->lock, flags);
+
+	return ret;
+}
+
+static u64 *alloc_pte(struct protection_domain *domain,
+		      unsigned long address,
+		      unsigned long page_size,
+		      u64 **pte_page,
+		      gfp_t gfp,
+		      bool *updated)
+{
+	int level, end_lvl;
+	u64 *pte, *page;
+
+	BUG_ON(!is_power_of_2(page_size));
+
+	while (address > PM_LEVEL_SIZE(domain->iop.mode)) {
+		/*
+		 * Return an error if there is no memory to update the
+		 * page-table.
+		 */
+		if (!increase_address_space(domain, address, gfp))
+			return NULL;
+	}
+
+
+	level   = domain->iop.mode - 1;
+	pte     = &domain->iop.root[PM_LEVEL_INDEX(level, address)];
+	address = PAGE_SIZE_ALIGN(address, page_size);
+	end_lvl = PAGE_SIZE_LEVEL(page_size);
+
+	while (level > end_lvl) {
+		u64 __pte, __npte;
+		int pte_level;
+
+		__pte     = *pte;
+		pte_level = PM_PTE_LEVEL(__pte);
+
+		/*
+		 * If we replace a series of large PTEs, we need
+		 * to tear down all of them.
+		 */
+		if (IOMMU_PTE_PRESENT(__pte) &&
+		    pte_level == PAGE_MODE_7_LEVEL) {
+			unsigned long count, i;
+			u64 *lpte;
+
+			lpte = first_pte_l7(pte, NULL, &count);
+
+			/*
+			 * Unmap the replicated PTEs that still match the
+			 * original large mapping
+			 */
+			for (i = 0; i < count; ++i)
+				cmpxchg64(&lpte[i], __pte, 0ULL);
+
+			*updated = true;
+			continue;
+		}
+
+		if (!IOMMU_PTE_PRESENT(__pte) ||
+		    pte_level == PAGE_MODE_NONE) {
+			page = (u64 *)get_zeroed_page(gfp);
+
+			if (!page)
+				return NULL;
+
+			__npte = PM_LEVEL_PDE(level, iommu_virt_to_phys(page));
+
+			/* pte could have been changed somewhere. */
+			if (cmpxchg64(pte, __pte, __npte) != __pte)
+				free_page((unsigned long)page);
+			else if (IOMMU_PTE_PRESENT(__pte))
+				*updated = true;
+
+			continue;
+		}
+
+		/* No level skipping support yet */
+		if (pte_level != level)
+			return NULL;
+
+		level -= 1;
+
+		pte = IOMMU_PTE_PAGE(__pte);
+
+		if (pte_page && level == end_lvl)
+			*pte_page = pte;
+
+		pte = &pte[PM_LEVEL_INDEX(level, address)];
+	}
+
+	return pte;
+}
+
+/*
+ * This function checks if there is a PTE for a given dma address. If
+ * there is one, it returns the pointer to it.
+ */
+static u64 *fetch_pte(struct amd_io_pgtable *pgtable,
+		      unsigned long address,
+		      unsigned long *page_size)
+{
+	int level;
+	u64 *pte;
+
+	*page_size = 0;
+
+	if (address > PM_LEVEL_SIZE(pgtable->mode))
+		return NULL;
+
+	level	   = pgtable->mode - 1;
+	pte	   = &pgtable->root[PM_LEVEL_INDEX(level, address)];
+	*page_size = PTE_LEVEL_PAGE_SIZE(level);
+
+	while (level > 0) {
+
+		/* Not Present */
+		if (!IOMMU_PTE_PRESENT(*pte))
+			return NULL;
+
+		/* Large PTE */
+		if (PM_PTE_LEVEL(*pte) == 7 ||
+		    PM_PTE_LEVEL(*pte) == 0)
+			break;
+
+		/* No level skipping support yet */
+		if (PM_PTE_LEVEL(*pte) != level)
+			return NULL;
+
+		level -= 1;
+
+		/* Walk to the next level */
+		pte = IOMMU_PTE_PAGE(*pte);
+		pte = &pte[PM_LEVEL_INDEX(level, address)];
+		*page_size = PTE_LEVEL_PAGE_SIZE(level);
+	}
+
+	/*
+	 * If we have a series of large PTEs, make
+	 * sure to return a pointer to the first one.
+	 */
+	if (PM_PTE_LEVEL(*pte) == PAGE_MODE_7_LEVEL)
+		pte = first_pte_l7(pte, page_size, NULL);
+
+	return pte;
+}
+
+static struct page *free_clear_pte(u64 *pte, u64 pteval, struct page *freelist)
+{
+	unsigned long pt;
+	int mode;
+
+	while (cmpxchg64(pte, pteval, 0) != pteval) {
+		pr_warn("AMD-Vi: IOMMU pte changed since we read it\n");
+		pteval = *pte;
+	}
+
+	if (!IOMMU_PTE_PRESENT(pteval))
+		return freelist;
+
+	pt   = (unsigned long)IOMMU_PTE_PAGE(pteval);
+	mode = IOMMU_PTE_MODE(pteval);
+
+	return free_sub_pt(pt, mode, freelist);
+}
+
+/*
+ * Generic mapping functions. It maps a physical address into a DMA
+ * address space. It allocates the page table pages if necessary.
+ * In the future it can be extended to a generic mapping function
+ * supporting all features of AMD IOMMU page tables like level skipping
+ * and full 64 bit address spaces.
+ */
+static int iommu_v1_map_page(struct io_pgtable_ops *ops, unsigned long iova,
+			     phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
+{
+	struct protection_domain *dom = io_pgtable_ops_to_domain(ops);
+	struct page *freelist = NULL;
+	bool updated = false;
+	u64 __pte, *pte;
+	int ret, i, count;
+
+	BUG_ON(!IS_ALIGNED(iova, size));
+	BUG_ON(!IS_ALIGNED(paddr, size));
+
+	ret = -EINVAL;
+	if (!(prot & IOMMU_PROT_MASK))
+		goto out;
+
+	count = PAGE_SIZE_PTE_COUNT(size);
+	pte   = alloc_pte(dom, iova, size, NULL, gfp, &updated);
+
+	ret = -ENOMEM;
+	if (!pte)
+		goto out;
+
+	for (i = 0; i < count; ++i)
+		freelist = free_clear_pte(&pte[i], pte[i], freelist);
+
+	if (freelist != NULL)
+		updated = true;
+
+	if (count > 1) {
+		__pte = PAGE_SIZE_PTE(__sme_set(paddr), size);
+		__pte |= PM_LEVEL_ENC(7) | IOMMU_PTE_PR | IOMMU_PTE_FC;
+	} else
+		__pte = __sme_set(paddr) | IOMMU_PTE_PR | IOMMU_PTE_FC;
+
+	if (prot & IOMMU_PROT_IR)
+		__pte |= IOMMU_PTE_IR;
+	if (prot & IOMMU_PROT_IW)
+		__pte |= IOMMU_PTE_IW;
+
+	for (i = 0; i < count; ++i)
+		pte[i] = __pte;
+
+	ret = 0;
+
+out:
+	if (updated) {
+		unsigned long flags;
+
+		spin_lock_irqsave(&dom->lock, flags);
+		/*
+		 * Flush domain TLB(s) and wait for completion. Any Device-Table
+		 * Updates and flushing already happened in
+		 * increase_address_space().
+		 */
+		amd_iommu_domain_flush_tlb_pde(dom);
+		amd_iommu_domain_flush_complete(dom);
+		spin_unlock_irqrestore(&dom->lock, flags);
+	}
+
+	/* Everything flushed out, free pages now */
+	free_page_list(freelist);
+
+	return ret;
+}
+
+static unsigned long iommu_v1_unmap_page(struct io_pgtable_ops *ops,
+					 unsigned long iova,
+					 size_t size,
+					 struct iommu_iotlb_gather *gather)
+{
+	struct amd_io_pgtable *pgtable = io_pgtable_ops_to_data(ops);
+	unsigned long long unmapped;
+	unsigned long unmap_size;
+	u64 *pte;
+
+	BUG_ON(!is_power_of_2(size));
+
+	unmapped = 0;
+
+	while (unmapped < size) {
+		pte = fetch_pte(pgtable, iova, &unmap_size);
+		if (pte) {
+			int i, count;
+
+			count = PAGE_SIZE_PTE_COUNT(unmap_size);
+			for (i = 0; i < count; i++)
+				pte[i] = 0ULL;
+		}
+
+		iova = (iova & ~(unmap_size - 1)) + unmap_size;
+		unmapped += unmap_size;
+	}
+
+	BUG_ON(unmapped && !is_power_of_2(unmapped));
+
+	return unmapped;
+}
+
+static phys_addr_t iommu_v1_iova_to_phys(struct io_pgtable_ops *ops, unsigned long iova)
+{
+	struct amd_io_pgtable *pgtable = io_pgtable_ops_to_data(ops);
+	unsigned long offset_mask, pte_pgsize;
+	u64 *pte, __pte;
+
+	if (pgtable->mode == PAGE_MODE_NONE)
+		return iova;
+
+	pte = fetch_pte(pgtable, iova, &pte_pgsize);
+
+	if (!pte || !IOMMU_PTE_PRESENT(*pte))
+		return 0;
+
+	offset_mask = pte_pgsize - 1;
+	__pte	    = __sme_clr(*pte & PM_ADDR_MASK);
+
+	return (__pte & ~offset_mask) | (iova & offset_mask);
+}
+
+/*
+ * ----------------------------------------------------
+ */
+static void v1_free_pgtable(struct io_pgtable *iop)
+{
+	struct amd_io_pgtable *pgtable = container_of(iop, struct amd_io_pgtable, iop);
+	struct protection_domain *dom;
+	struct page *freelist = NULL;
+	unsigned long root;
+
+	if (pgtable->mode == PAGE_MODE_NONE)
+		return;
+
+	dom = container_of(pgtable, struct protection_domain, iop);
+
+	/* Update data structure */
+	amd_iommu_domain_clr_pt_root(dom);
+
+	/* Make changes visible to IOMMUs */
+	amd_iommu_domain_update(dom);
+
+	/* Page-table is not visible to IOMMU anymore, so free it */
+	BUG_ON(pgtable->mode < PAGE_MODE_NONE ||
+	       pgtable->mode > PAGE_MODE_6_LEVEL);
+
+	root = (unsigned long)pgtable->root;
+	freelist = free_sub_pt(root, pgtable->mode, freelist);
+
+	free_page_list(freelist);
+}
+
+static struct io_pgtable *v1_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
+{
+	struct amd_io_pgtable *pgtable = io_pgtable_cfg_to_data(cfg);
+
+	cfg->pgsize_bitmap  = AMD_IOMMU_PGSIZES,
+	cfg->ias	    = IOMMU_IN_ADDR_BIT_SIZE,
+	cfg->oas	    = IOMMU_OUT_ADDR_BIT_SIZE,
+	cfg->tlb	    = &v1_flush_ops;
+
+	pgtable->iop.ops.map	      = iommu_v1_map_page;
+	pgtable->iop.ops.unmap	      = iommu_v1_unmap_page;
+	pgtable->iop.ops.iova_to_phys = iommu_v1_iova_to_phys;
+
+	return &pgtable->iop;
+}
+
+struct io_pgtable_init_fns io_pgtable_amd_iommu_v1_init_fns = {
+	.alloc	= v1_alloc_pgtable,
+	.free	= v1_free_pgtable,
+};
drivers/iommu/amd/iommu.c (+76, -596)
···
 #include <linux/irqdomain.h>
 #include <linux/percpu.h>
 #include <linux/iova.h>
+#include <linux/io-pgtable.h>
 #include <asm/irq_remapping.h>
 #include <asm/io_apic.h>
 #include <asm/apic.h>
···
 #define MSI_RANGE_END		(0xfeefffff)
 #define HT_RANGE_START		(0xfd00000000ULL)
 #define HT_RANGE_END		(0xffffffffffULL)
-
-/*
- * This bitmap is used to advertise the page sizes our hardware support
- * to the IOMMU core, which will then use this information to split
- * physically contiguous memory regions it is mapping into page sizes
- * that we support.
- *
- * 512GB Pages are not supported due to a hardware bug
- */
-#define AMD_IOMMU_PGSIZES	((~0xFFFUL) & ~(2ULL << 38))
 
 #define DEFAULT_PGTABLE_LEVEL	PAGE_MODE_3_LEVEL
···
 
 struct kmem_cache *amd_iommu_irq_cache;
 
-static void update_domain(struct protection_domain *domain);
 static void detach_device(struct device *dev);
-static void update_and_flush_device_table(struct protection_domain *domain,
-					  struct domain_pgtable *pgtable);
 
 /****************************************************************************
  *
···
 static struct protection_domain *to_pdomain(struct iommu_domain *dom)
 {
 	return container_of(dom, struct protection_domain, domain);
-}
-
-static void amd_iommu_domain_get_pgtable(struct protection_domain *domain,
-					 struct domain_pgtable *pgtable)
-{
-	u64 pt_root = atomic64_read(&domain->pt_root);
-
-	pgtable->root = (u64 *)(pt_root & PAGE_MASK);
-	pgtable->mode = pt_root & 7; /* lowest 3 bits encode pgtable mode */
-}
-
-static void amd_iommu_domain_set_pt_root(struct protection_domain *domain, u64 root)
-{
-	atomic64_set(&domain->pt_root, root);
-}
-
-static void amd_iommu_domain_clr_pt_root(struct protection_domain *domain)
-{
-	amd_iommu_domain_set_pt_root(domain, 0);
-}
-
-static void amd_iommu_domain_set_pgtable(struct protection_domain *domain,
-					 u64 *root, int mode)
-{
-	u64 pt_root;
-
-	/* lowest 3 bits encode pgtable mode */
-	pt_root = mode & 7;
-	pt_root |= (u64)root;
-
-	amd_iommu_domain_set_pt_root(domain, pt_root);
 }
 
 static struct iommu_dev_data *alloc_dev_data(u16 devid)
···
 	 * We keep dev_data around for unplugged devices and reuse it when the
 	 * device is re-plugged - not doing so would introduce a ton of races.
 	 */
-}
-
-/*
- * Helper function to get the first pte of a large mapping
- */
-static u64 *first_pte_l7(u64 *pte, unsigned long *page_size,
-			 unsigned long *count)
-{
-	unsigned long pte_mask, pg_size, cnt;
-	u64 *fpte;
-
-	pg_size  = PTE_PAGE_SIZE(*pte);
-	cnt      = PAGE_SIZE_PTE_COUNT(pg_size);
-	pte_mask = ~((cnt << 3) - 1);
-	fpte     = (u64 *)(((unsigned long)pte) & pte_mask);
-
-	if (page_size)
-		*page_size = pg_size;
-
-	if (count)
-		*count = cnt;
-
-	return fpte;
 }
 
 /****************************************************************************
···
 }
 
 /* Flush the whole IO/TLB for a given protection domain - including PDE */
-static void domain_flush_tlb_pde(struct protection_domain *domain)
+void amd_iommu_domain_flush_tlb_pde(struct protection_domain *domain)
 {
 	__domain_flush_pages(domain, 0, CMD_INV_IOMMU_ALL_PAGES_ADDRESS, 1);
 }
 
-static void domain_flush_complete(struct protection_domain *domain)
+void amd_iommu_domain_flush_complete(struct protection_domain *domain)
 {
 	int i;
···
 
 	spin_lock_irqsave(&domain->lock, flags);
 	domain_flush_pages(domain, iova, size);
-	domain_flush_complete(domain);
+	amd_iommu_domain_flush_complete(domain);
 	spin_unlock_irqrestore(&domain->lock, flags);
 	}
 }
···
 
 	list_for_each_entry(dev_data, &domain->dev_list, list)
 		device_flush_dte(dev_data);
-}
-
-/****************************************************************************
- *
- * The functions below are used the create the page table mappings for
- * unity mapped regions.
- *
- ****************************************************************************/
-
-static void free_page_list(struct page *freelist)
-{
-	while (freelist != NULL) {
-		unsigned long p = (unsigned long)page_address(freelist);
-		freelist = freelist->freelist;
-		free_page(p);
-	}
-}
-
-static struct page *free_pt_page(unsigned long pt, struct page *freelist)
-{
-	struct page *p = virt_to_page((void *)pt);
-
-	p->freelist = freelist;
-
-	return p;
-}
-
-#define DEFINE_FREE_PT_FN(LVL, FN)						\
-static struct page *free_pt_##LVL (unsigned long __pt, struct page *freelist)	\
-{										\
-	unsigned long p;							\
-	u64 *pt;								\
-	int i;									\
-										\
-	pt = (u64 *)__pt;							\
-										\
-	for (i = 0; i < 512; ++i) {						\
-		/* PTE present? */						\
-		if (!IOMMU_PTE_PRESENT(pt[i]))					\
-			continue;						\
-										\
-		/* Large PTE? */						\
-		if (PM_PTE_LEVEL(pt[i]) == 0 ||					\
-		    PM_PTE_LEVEL(pt[i]) == 7)					\
-			continue;						\
-										\
-		p = (unsigned long)IOMMU_PTE_PAGE(pt[i]);			\
-		freelist = FN(p, freelist);					\
-	}									\
-										\
-	return free_pt_page((unsigned long)pt, freelist);			\
-}
-
-DEFINE_FREE_PT_FN(l2, free_pt_page)
-DEFINE_FREE_PT_FN(l3, free_pt_l2)
-DEFINE_FREE_PT_FN(l4, free_pt_l3)
-DEFINE_FREE_PT_FN(l5, free_pt_l4)
-DEFINE_FREE_PT_FN(l6, free_pt_l5)
-
-static struct page *free_sub_pt(unsigned long root, int mode,
-				struct page *freelist)
-{
-	switch (mode) {
-	case PAGE_MODE_NONE:
-	case PAGE_MODE_7_LEVEL:
-		break;
-	case PAGE_MODE_1_LEVEL:
-		freelist = free_pt_page(root, freelist);
-		break;
-	case PAGE_MODE_2_LEVEL:
-		freelist = free_pt_l2(root, freelist);
-		break;
-	case PAGE_MODE_3_LEVEL:
-		freelist = free_pt_l3(root, freelist);
-		break;
-	case PAGE_MODE_4_LEVEL:
-		freelist = free_pt_l4(root, freelist);
-		break;
-	case PAGE_MODE_5_LEVEL:
-		freelist = free_pt_l5(root, freelist);
-		break;
-	case PAGE_MODE_6_LEVEL:
-		freelist = free_pt_l6(root, freelist);
-		break;
-	default:
-		BUG();
-	}
-
-	return freelist;
-}
-
-static void free_pagetable(struct domain_pgtable *pgtable)
-{
-	struct page *freelist = NULL;
-	unsigned long root;
-
-	if (pgtable->mode == PAGE_MODE_NONE)
-		return;
-
-	BUG_ON(pgtable->mode < PAGE_MODE_NONE ||
-	       pgtable->mode > PAGE_MODE_6_LEVEL);
-
-	root = (unsigned long)pgtable->root;
-	freelist = free_sub_pt(root, pgtable->mode, freelist);
-
-	free_page_list(freelist);
-}
-
-/*
- * This function is used to add another level to an IO page table. Adding
- * another level increases the size of the address space by 9 bits to a size up
- * to 64 bits.
- */
-static bool increase_address_space(struct protection_domain *domain,
-				   unsigned long address,
-				   gfp_t gfp)
-{
-	struct domain_pgtable pgtable;
-	unsigned long flags;
-	bool ret = true;
-	u64 *pte;
-
-	spin_lock_irqsave(&domain->lock, flags);
-
-	amd_iommu_domain_get_pgtable(domain, &pgtable);
-
-	if (address <= PM_LEVEL_SIZE(pgtable.mode))
-		goto out;
-
-	ret = false;
-	if (WARN_ON_ONCE(pgtable.mode == PAGE_MODE_6_LEVEL))
-		goto out;
-
-	pte = (void *)get_zeroed_page(gfp);
-	if (!pte)
-		goto out;
-
-	*pte = PM_LEVEL_PDE(pgtable.mode, iommu_virt_to_phys(pgtable.root));
-
-	pgtable.root  = pte;
-	pgtable.mode += 1;
-	update_and_flush_device_table(domain, &pgtable);
-	domain_flush_complete(domain);
-
-	/*
-	 * Device Table needs to be updated and flushed before the new root can
-	 * be published.
-	 */
-	amd_iommu_domain_set_pgtable(domain, pte, pgtable.mode);
-
-	ret = true;
-
-out:
-	spin_unlock_irqrestore(&domain->lock, flags);
-
-	return ret;
-}
-
-static u64 *alloc_pte(struct protection_domain *domain,
-		      unsigned long address,
-		      unsigned long page_size,
-		      u64 **pte_page,
-		      gfp_t gfp,
-		      bool *updated)
-{
-	struct domain_pgtable pgtable;
-	int level, end_lvl;
-	u64 *pte, *page;
-
-	BUG_ON(!is_power_of_2(page_size));
-
-	amd_iommu_domain_get_pgtable(domain, &pgtable);
-
-	while (address > PM_LEVEL_SIZE(pgtable.mode)) {
-		/*
-		 * Return an error if there is no memory to update the
-		 * page-table.
-		 */
-		if (!increase_address_space(domain, address, gfp))
-			return NULL;
-
-		/* Read new values to check if update was successful */
-		amd_iommu_domain_get_pgtable(domain, &pgtable);
-	}
-
-
-	level   = pgtable.mode - 1;
-	pte     = &pgtable.root[PM_LEVEL_INDEX(level, address)];
-	address = PAGE_SIZE_ALIGN(address, page_size);
-	end_lvl = PAGE_SIZE_LEVEL(page_size);
-
-	while (level > end_lvl) {
-		u64 __pte, __npte;
-		int pte_level;
-
-		__pte     = *pte;
-		pte_level = PM_PTE_LEVEL(__pte);
-
-		/*
-		 * If we replace a series of large PTEs, we need
-		 * to tear down all of them.
-		 */
-		if (IOMMU_PTE_PRESENT(__pte) &&
-		    pte_level == PAGE_MODE_7_LEVEL) {
-			unsigned long count, i;
-			u64 *lpte;
-
-			lpte = first_pte_l7(pte, NULL, &count);
-
-			/*
-			 * Unmap the replicated PTEs that still match the
-			 * original large mapping
-			 */
-			for (i = 0; i < count; ++i)
-				cmpxchg64(&lpte[i], __pte, 0ULL);
-
-			*updated = true;
-			continue;
-		}
-
-		if (!IOMMU_PTE_PRESENT(__pte) ||
-		    pte_level == PAGE_MODE_NONE) {
-			page = (u64 *)get_zeroed_page(gfp);
-
-			if (!page)
-				return NULL;
-
-			__npte = PM_LEVEL_PDE(level, iommu_virt_to_phys(page));
-
-			/* pte could have been changed somewhere. */
-			if (cmpxchg64(pte, __pte, __npte) != __pte)
-				free_page((unsigned long)page);
-			else if (IOMMU_PTE_PRESENT(__pte))
-				*updated = true;
-
-			continue;
-		}
-
-		/* No level skipping support yet */
-		if (pte_level != level)
-			return NULL;
-
-		level -= 1;
-
-		pte = IOMMU_PTE_PAGE(__pte);
-
-		if (pte_page && level == end_lvl)
-			*pte_page = pte;
-
-		pte = &pte[PM_LEVEL_INDEX(level, address)];
-	}
-
-	return pte;
-}
-
-/*
- * This function checks if there is a PTE for a given dma address. If
- * there is one, it returns the pointer to it.
- */
-static u64 *fetch_pte(struct protection_domain *domain,
-		      unsigned long address,
-		      unsigned long *page_size)
-{
-	struct domain_pgtable pgtable;
-	int level;
-	u64 *pte;
-
-	*page_size = 0;
-
-	amd_iommu_domain_get_pgtable(domain, &pgtable);
-
-	if (address > PM_LEVEL_SIZE(pgtable.mode))
-		return NULL;
-
-	level	   =  pgtable.mode - 1;
-	pte	   = &pgtable.root[PM_LEVEL_INDEX(level, address)];
-	*page_size =  PTE_LEVEL_PAGE_SIZE(level);
-
-	while (level > 0) {
-
-		/* Not Present */
-		if (!IOMMU_PTE_PRESENT(*pte))
-			return NULL;
-
-		/* Large PTE */
-		if (PM_PTE_LEVEL(*pte) == 7 ||
-		    PM_PTE_LEVEL(*pte) == 0)
-			break;
-
-		/* No level skipping support yet */
-		if (PM_PTE_LEVEL(*pte) != level)
-			return NULL;
-
-		level -= 1;
-
-		/* Walk to the next level */
-		pte	   = IOMMU_PTE_PAGE(*pte);
-		pte	   = &pte[PM_LEVEL_INDEX(level, address)];
-		*page_size = PTE_LEVEL_PAGE_SIZE(level);
-	}
-
-	/*
-	 * If we have a series of large PTEs, make
-	 * sure to return a pointer to the first one.
-	 */
-	if (PM_PTE_LEVEL(*pte) == PAGE_MODE_7_LEVEL)
-		pte = first_pte_l7(pte, page_size, NULL);
-
-	return pte;
-}
-
-static struct page *free_clear_pte(u64 *pte, u64 pteval, struct page *freelist)
-{
-	unsigned long pt;
-	int mode;
-
-	while (cmpxchg64(pte, pteval, 0) != pteval) {
-		pr_warn("AMD-Vi: IOMMU pte changed since we read it\n");
-		pteval = *pte;
-	}
-
-	if (!IOMMU_PTE_PRESENT(pteval))
-		return freelist;
-
-	pt   = (unsigned long)IOMMU_PTE_PAGE(pteval);
-	mode = IOMMU_PTE_MODE(pteval);
-
-	return free_sub_pt(pt, mode, freelist);
-}
-
-/*
- * Generic mapping functions. It maps a physical address into a DMA
- * address space. It allocates the page table pages if necessary.
- * In the future it can be extended to a generic mapping function
- * supporting all features of AMD IOMMU page tables like level skipping
- * and full 64 bit address spaces.
- */
-static int iommu_map_page(struct protection_domain *dom,
-			  unsigned long bus_addr,
-			  unsigned long phys_addr,
-			  unsigned long page_size,
-			  int prot,
-			  gfp_t gfp)
-{
-	struct page *freelist = NULL;
-	bool updated = false;
-	u64 __pte, *pte;
-	int ret, i, count;
-
-	BUG_ON(!IS_ALIGNED(bus_addr, page_size));
-	BUG_ON(!IS_ALIGNED(phys_addr, page_size));
-
-	ret = -EINVAL;
-	if (!(prot & IOMMU_PROT_MASK))
-		goto out;
-
-	count = PAGE_SIZE_PTE_COUNT(page_size);
-	pte   = alloc_pte(dom, bus_addr, page_size, NULL, gfp, &updated);
-
-	ret = -ENOMEM;
-	if (!pte)
-		goto out;
-
-	for (i = 0; i < count; ++i)
-		freelist = free_clear_pte(&pte[i], pte[i], freelist);
-
-	if (freelist != NULL)
-		updated = true;
-
-	if (count > 1) {
-		__pte = PAGE_SIZE_PTE(__sme_set(phys_addr), page_size);
-		__pte |= PM_LEVEL_ENC(7) | IOMMU_PTE_PR | IOMMU_PTE_FC;
-	} else
-		__pte = __sme_set(phys_addr) | IOMMU_PTE_PR | IOMMU_PTE_FC;
-
-	if (prot & IOMMU_PROT_IR)
-		__pte |= IOMMU_PTE_IR;
-	if (prot & IOMMU_PROT_IW)
-		__pte |= IOMMU_PTE_IW;
-
-	for (i = 0; i < count; ++i)
-		pte[i] = __pte;
-
-	ret = 0;
-
-out:
-	if (updated) {
-		unsigned long flags;
-
-		spin_lock_irqsave(&dom->lock, flags);
-		/*
-		 * Flush domain TLB(s) and wait for completion. Any Device-Table
-		 * Updates and flushing already happened in
-		 * increase_address_space().
-		 */
-		domain_flush_tlb_pde(dom);
-		domain_flush_complete(dom);
-		spin_unlock_irqrestore(&dom->lock, flags);
-	}
-
-	/* Everything flushed out, free pages now */
-	free_page_list(freelist);
-
-	return ret;
-}
-
-static unsigned long iommu_unmap_page(struct protection_domain *dom,
-				      unsigned long bus_addr,
-				      unsigned long page_size)
-{
-	unsigned long long unmapped;
-	unsigned long unmap_size;
-	u64 *pte;
-
-	BUG_ON(!is_power_of_2(page_size));
-
-	unmapped = 0;
-
-	while (unmapped < page_size) {
-
-		pte = fetch_pte(dom, bus_addr, &unmap_size);
-
-		if (pte) {
-			int i, count;
-
-			count = PAGE_SIZE_PTE_COUNT(unmap_size);
-			for (i = 0; i < count; i++)
-				pte[i] = 0ULL;
-		}
-
-		bus_addr  = (bus_addr & ~(unmap_size - 1)) + unmap_size;
-		unmapped += unmap_size;
-	}
-
-	BUG_ON(unmapped && !is_power_of_2(unmapped));
-
-	return unmapped;
 }
 
 /****************************************************************************
···
 }
 
 static void set_dte_entry(u16 devid, struct protection_domain *domain,
-			  struct domain_pgtable *pgtable,
 			  bool ats, bool ppr)
 {
 	u64 pte_root = 0;
 	u64 flags = 0;
 	u32 old_domid;
 
-	if (pgtable->mode != PAGE_MODE_NONE)
-		pte_root = iommu_virt_to_phys(pgtable->root);
+	if (domain->iop.mode != PAGE_MODE_NONE)
+		pte_root = iommu_virt_to_phys(domain->iop.root);
 
-	pte_root |= (pgtable->mode & DEV_ENTRY_MODE_MASK)
+	pte_root |= (domain->iop.mode & DEV_ENTRY_MODE_MASK)
 		    << DEV_ENTRY_MODE_SHIFT;
 	pte_root |= DTE_FLAG_IR | DTE_FLAG_IW | DTE_FLAG_V | DTE_FLAG_TV;
···
 static void do_attach(struct iommu_dev_data *dev_data,
 		      struct protection_domain *domain)
 {
-	struct domain_pgtable pgtable;
 	struct amd_iommu *iommu;
 	bool ats;
···
 	domain->dev_cnt                 += 1;
 
 	/* Update device table */
-	amd_iommu_domain_get_pgtable(domain, &pgtable);
-	set_dte_entry(dev_data->devid, domain, &pgtable,
+	set_dte_entry(dev_data->devid, domain,
 		      ats, dev_data->iommu_v2);
 	clone_aliases(dev_data->pdev);
···
 	device_flush_dte(dev_data);
 
 	/* Flush IOTLB */
-	domain_flush_tlb_pde(domain);
+	amd_iommu_domain_flush_tlb_pde(domain);
 
 	/* Wait for the flushes to finish */
-	domain_flush_complete(domain);
+	amd_iommu_domain_flush_complete(domain);
 
 	/* decrease reference counters - needs to happen after the flushes */
 	domain->dev_iommu[iommu->index] -= 1;
···
 	 * left the caches in the IOMMU dirty. So we have to flush
 	 * here to evict all dirty stuff.
 	 */
-	domain_flush_tlb_pde(domain);
+	amd_iommu_domain_flush_tlb_pde(domain);
 
-	domain_flush_complete(domain);
+	amd_iommu_domain_flush_complete(domain);
 
 out:
 	spin_unlock(&dev_data->lock);
···
  *
  *****************************************************************************/
 
-static void update_device_table(struct protection_domain *domain,
-				struct domain_pgtable *pgtable)
+static void update_device_table(struct protection_domain *domain)
 {
 	struct iommu_dev_data *dev_data;
 
 	list_for_each_entry(dev_data, &domain->dev_list, list) {
-		set_dte_entry(dev_data->devid, domain, pgtable,
+		set_dte_entry(dev_data->devid, domain,
 			      dev_data->ats.enabled, dev_data->iommu_v2);
 		clone_aliases(dev_data->pdev);
 	}
 }
 
-static void update_and_flush_device_table(struct protection_domain *domain,
-					  struct domain_pgtable *pgtable)
+void amd_iommu_update_and_flush_device_table(struct protection_domain *domain)
 {
-	update_device_table(domain, pgtable);
+	update_device_table(domain);
 	domain_flush_devices(domain);
 }
 
-static void update_domain(struct protection_domain *domain)
+void amd_iommu_domain_update(struct protection_domain *domain)
 {
-	struct domain_pgtable pgtable;
-
 	/* Update device table */
-	amd_iommu_domain_get_pgtable(domain, &pgtable);
-	update_and_flush_device_table(domain, &pgtable);
+	amd_iommu_update_and_flush_device_table(domain);
 
 	/* Flush domain TLB(s) and wait for completion */
-	domain_flush_tlb_pde(domain);
-	domain_flush_complete(domain);
+	amd_iommu_domain_flush_tlb_pde(domain);
+	amd_iommu_domain_flush_complete(domain);
 }
 
 int __init amd_iommu_init_api(void)
···
 
 static void protection_domain_free(struct protection_domain *domain)
 {
-	struct domain_pgtable pgtable;
-
 	if (!domain)
 		return;
 
 	if (domain->id)
 		domain_id_free(domain->id);
 
-	amd_iommu_domain_get_pgtable(domain, &pgtable);
-	amd_iommu_domain_clr_pt_root(domain);
-	free_pagetable(&pgtable);
+	if (domain->iop.pgtbl_cfg.tlb)
+		free_io_pgtable_ops(&domain->iop.iop.ops);
 
 	kfree(domain);
 }
 
-static int protection_domain_init(struct protection_domain *domain, int mode)
+static int protection_domain_init_v1(struct protection_domain *domain, int mode)
 {
 	u64 *pt_root = NULL;
···
 	return 0;
 }
 
-static struct protection_domain *protection_domain_alloc(int mode)
+static struct protection_domain *protection_domain_alloc(unsigned int type)
 {
+	struct io_pgtable_ops *pgtbl_ops;
 	struct protection_domain *domain;
+	int pgtable = amd_iommu_pgtable;
+	int mode = DEFAULT_PGTABLE_LEVEL;
+	int ret;
 
 	domain = kzalloc(sizeof(*domain), GFP_KERNEL);
 	if (!domain)
 		return NULL;
 
-	if (protection_domain_init(domain, mode))
+	/*
+	 * Force IOMMU v1 page table when iommu=pt and
+	 * when allocating domain for pass-through devices.
+	 */
+	if (type == IOMMU_DOMAIN_IDENTITY) {
+		pgtable = AMD_IOMMU_V1;
+		mode = PAGE_MODE_NONE;
+	} else if (type == IOMMU_DOMAIN_UNMANAGED) {
+		pgtable = AMD_IOMMU_V1;
+	}
+
+	switch (pgtable) {
+	case AMD_IOMMU_V1:
+		ret = protection_domain_init_v1(domain, mode);
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+	if (ret)
+		goto out_err;
+
+	pgtbl_ops = alloc_io_pgtable_ops(pgtable, &domain->iop.pgtbl_cfg, domain);
+	if (!pgtbl_ops)
 		goto out_err;
 
 	return domain;
-
 out_err:
 	kfree(domain);
-
 	return NULL;
 }
 
 static struct iommu_domain *amd_iommu_domain_alloc(unsigned type)
 {
 	struct protection_domain *domain;
-	int mode = DEFAULT_PGTABLE_LEVEL;
 
-	if (type == IOMMU_DOMAIN_IDENTITY)
-		mode = PAGE_MODE_NONE;
-
-	domain = protection_domain_alloc(mode);
+	domain = protection_domain_alloc(type);
 	if (!domain)
 		return NULL;
···
 			 gfp_t gfp)
 {
 	struct protection_domain *domain = to_pdomain(dom);
-	struct domain_pgtable pgtable;
+	struct io_pgtable_ops *ops = &domain->iop.iop.ops;
 	int prot = 0;
-	int ret;
+	int ret = -EINVAL;
 
-	amd_iommu_domain_get_pgtable(domain, &pgtable);
-	if (pgtable.mode == PAGE_MODE_NONE)
+	if ((amd_iommu_pgtable == AMD_IOMMU_V1) &&
+	    (domain->iop.mode == PAGE_MODE_NONE))
 		return -EINVAL;
 
 	if (iommu_prot & IOMMU_READ)
···
 	if (iommu_prot & IOMMU_WRITE)
 		prot |= IOMMU_PROT_IW;
 
-	ret = iommu_map_page(domain, iova, paddr, page_size, prot, gfp);
-
-	domain_flush_np_cache(domain, iova, page_size);
+	if (ops->map) {
+		ret = ops->map(ops, iova, paddr, page_size, prot, gfp);
+		domain_flush_np_cache(domain, iova, page_size);
+	}
 
 	return ret;
 }
···
 			      struct iommu_iotlb_gather *gather)
 {
 	struct protection_domain *domain = to_pdomain(dom);
-	struct domain_pgtable pgtable;
+	struct io_pgtable_ops *ops = &domain->iop.iop.ops;
 
-	amd_iommu_domain_get_pgtable(domain, &pgtable);
-	if (pgtable.mode == PAGE_MODE_NONE)
+	if ((amd_iommu_pgtable == AMD_IOMMU_V1) &&
+	    (domain->iop.mode == PAGE_MODE_NONE))
 		return 0;
 
-	return iommu_unmap_page(domain, iova, page_size);
+	return (ops->unmap) ? ops->unmap(ops, iova, page_size, gather) : 0;
 }
 
 static phys_addr_t amd_iommu_iova_to_phys(struct iommu_domain *dom,
 					  dma_addr_t iova)
 {
 	struct protection_domain *domain = to_pdomain(dom);
-	unsigned long offset_mask, pte_pgsize;
-	struct domain_pgtable pgtable;
-	u64 *pte, __pte;
+	struct io_pgtable_ops *ops = &domain->iop.iop.ops;
 
-	amd_iommu_domain_get_pgtable(domain, &pgtable);
-	if (pgtable.mode == PAGE_MODE_NONE)
-		return iova;
-
-	pte = fetch_pte(domain, iova, &pte_pgsize);
-
-	if (!pte || !IOMMU_PTE_PRESENT(*pte))
-		return 0;
-
-	offset_mask = pte_pgsize - 1;
-	__pte	    = __sme_clr(*pte & PM_ADDR_MASK);
-
-	return (__pte & ~offset_mask) | (iova & offset_mask);
+	return ops->iova_to_phys(ops, iova);
 }
 
 static bool amd_iommu_capable(enum iommu_cap cap)
···
 	unsigned long flags;
 
 	spin_lock_irqsave(&dom->lock, flags);
-	domain_flush_tlb_pde(dom);
-	domain_flush_complete(dom);
+	amd_iommu_domain_flush_tlb_pde(dom);
+	amd_iommu_domain_flush_complete(dom);
 	spin_unlock_irqrestore(&dom->lock, flags);
 }
···
 void amd_iommu_domain_direct_map(struct iommu_domain *dom)
 {
 	struct protection_domain *domain = to_pdomain(dom);
-	struct domain_pgtable pgtable;
 	unsigned long flags;
 
 	spin_lock_irqsave(&domain->lock, flags);
 
-	/* First save pgtable configuration*/
-	amd_iommu_domain_get_pgtable(domain, &pgtable);
-
-	/* Remove page-table from domain */
-	amd_iommu_domain_clr_pt_root(domain);
-
-	/* Make changes visible to IOMMUs */
-	update_domain(domain);
-
-	/* Page-table is not visible to IOMMU anymore, so free it */
-	free_pagetable(&pgtable);
+	if (domain->iop.pgtbl_cfg.tlb)
+		free_io_pgtable_ops(&domain->iop.iop.ops);
 
 	spin_unlock_irqrestore(&domain->lock, flags);
 }
···
 	domain->glx      = levels;
 	domain->flags   |= PD_IOMMUV2_MASK;
 
-	update_domain(domain);
+	amd_iommu_domain_update(domain);
 
 	ret = 0;
···
 	}
 
 	/* Wait until IOMMU TLB flushes are complete */
-	domain_flush_complete(domain);
+	amd_iommu_domain_flush_complete(domain);
 
 	/* Now flush device TLBs */
 	list_for_each_entry(dev_data, &domain->dev_list, list) {
···
 	}
 
 	/* Wait until all device TLBs are flushed */
-	domain_flush_complete(domain);
+	amd_iommu_domain_flush_complete(domain);
 
 	ret = 0;
···
 static int __set_gcr3(struct protection_domain *domain, u32 pasid,
 		      unsigned long cr3)
 {
-	struct domain_pgtable pgtable;
 	u64 *pte;
 
-	amd_iommu_domain_get_pgtable(domain, &pgtable);
-	if (pgtable.mode != PAGE_MODE_NONE)
+	if (domain->iop.mode != PAGE_MODE_NONE)
 		return -EINVAL;
 
 	pte = __get_gcr3_pte(domain->gcr3_tbl, domain->glx, pasid, true);
···
 
 static int __clear_gcr3(struct protection_domain *domain, u32 pasid)
 {
-	struct domain_pgtable pgtable;
 	u64 *pte;
 
-	amd_iommu_domain_get_pgtable(domain, &pgtable);
-	if (pgtable.mode != PAGE_MODE_NONE)
+	if (domain->iop.mode != PAGE_MODE_NONE)
 		return -EINVAL;
 
 	pte = __get_gcr3_pte(domain->gcr3_tbl, domain->glx, pasid, false);
drivers/iommu/intel/dmar.c
···
 #include <linux/limits.h>
 #include <asm/irq_remapping.h>
 #include <asm/iommu_table.h>
+#include <trace/events/intel_iommu.h>
 
 #include "../irq_remapping.h"
···
 	struct acpi_dmar_reserved_memory *rmrr;
 	struct acpi_dmar_atsr *atsr;
 	struct acpi_dmar_rhsa *rhsa;
+	struct acpi_dmar_satc *satc;
 
 	switch (header->type) {
 	case ACPI_DMAR_TYPE_HARDWARE_UNIT:
···
 	case ACPI_DMAR_TYPE_NAMESPACE:
 		/* We don't print this here because we need to sanity-check
 		   it first. So print it in dmar_parse_one_andd() instead. */
+		break;
+	case ACPI_DMAR_TYPE_SATC:
+		satc = container_of(header, struct acpi_dmar_satc, header);
+		pr_info("SATC flags: 0x%x\n", satc->flags);
 		break;
 	}
 }
···
 		.cb[ACPI_DMAR_TYPE_ROOT_ATS] = &dmar_parse_one_atsr,
 		.cb[ACPI_DMAR_TYPE_HARDWARE_AFFINITY] = &dmar_parse_one_rhsa,
 		.cb[ACPI_DMAR_TYPE_NAMESPACE] = &dmar_parse_one_andd,
+		.cb[ACPI_DMAR_TYPE_SATC] = &dmar_parse_one_satc,
 	};
 
 	/*
···
 		offset = ((index + i) % QI_LENGTH) << shift;
 		memcpy(qi->desc + offset, &desc[i], 1 << shift);
 		qi->desc_status[(index + i) % QI_LENGTH] = QI_IN_USE;
+		trace_qi_submit(iommu, desc[i].qw0, desc[i].qw1,
+				desc[i].qw2, desc[i].qw3);
 	}
 	qi->desc_status[wait_index] = QI_IN_USE;
···
 #define DMAR_DSM_FUNC_DRHD		1
 #define DMAR_DSM_FUNC_ATSR		2
 #define DMAR_DSM_FUNC_RHSA		3
+#define DMAR_DSM_FUNC_SATC		4
 
 static inline bool dmar_detect_dsm(acpi_handle handle, int func)
 {
···
 		[DMAR_DSM_FUNC_DRHD] = ACPI_DMAR_TYPE_HARDWARE_UNIT,
 		[DMAR_DSM_FUNC_ATSR] = ACPI_DMAR_TYPE_ROOT_ATS,
 		[DMAR_DSM_FUNC_RHSA] = ACPI_DMAR_TYPE_HARDWARE_AFFINITY,
+		[DMAR_DSM_FUNC_SATC] = ACPI_DMAR_TYPE_SATC,
 	};
 
 	if (!dmar_detect_dsm(handle, func))
drivers/iommu/intel/iommu.c (+172, -107)
···
 #include <asm/irq_remapping.h>
 #include <asm/cacheflush.h>
 #include <asm/iommu.h>
-#include <trace/events/intel_iommu.h>
 
 #include "../irq_remapping.h"
 #include "pasid.h"
+#include "cap_audit.h"
 
 #define ROOT_SIZE		VTD_PAGE_SIZE
 #define CONTEXT_SIZE		VTD_PAGE_SIZE
···
 	u8 include_all:1;		/* include all ports */
 };
 
+struct dmar_satc_unit {
+	struct list_head list;		/* list of SATC units */
+	struct acpi_dmar_header *hdr;	/* ACPI header */
+	struct dmar_dev_scope *devices;	/* target devices */
+	struct intel_iommu *iommu;	/* the corresponding iommu */
+	int devices_cnt;		/* target device count */
+	u8 atc_required:1;		/* ATS is required */
+};
+
 static LIST_HEAD(dmar_atsr_units);
 static LIST_HEAD(dmar_rmrr_units);
+static LIST_HEAD(dmar_satc_units);
 
 #define for_each_rmrr_units(rmrr) \
 	list_for_each_entry(rmrr, &dmar_rmrr_units, list)
···
 
 		domain_flush_cache(domain, tmp_page, VTD_PAGE_SIZE);
 		pteval = ((uint64_t)virt_to_dma_pfn(tmp_page) << VTD_PAGE_SHIFT) | DMA_PTE_READ | DMA_PTE_WRITE;
-		if (domain_use_first_level(domain))
+		if (domain_use_first_level(domain)) {
 			pteval |= DMA_FL_PTE_XD | DMA_FL_PTE_US;
+			if (domain->domain.type == IOMMU_DOMAIN_DMA)
+				pteval |= DMA_FL_PTE_ACCESS;
+		}
 		if (cmpxchg64(&pte->val, 0ULL, pteval))
 			/* Someone else set it while we were thinking; use theirs. */
 			free_pgtable_page(tmp_page);
···
  */
 static bool first_level_by_default(void)
 {
-	struct dmar_drhd_unit *drhd;
-	struct intel_iommu *iommu;
-	static int first_level_support = -1;
-
-	if (likely(first_level_support != -1))
-		return first_level_support;
-
-	first_level_support = 1;
-
-	rcu_read_lock();
-	for_each_active_iommu(iommu, drhd) {
-		if (!sm_supported(iommu) || !ecap_flts(iommu->ecap)) {
-			first_level_support = 0;
-			break;
-		}
-	}
-	rcu_read_unlock();
-
-	return first_level_support;
+	return scalable_mode_support() && intel_cap_flts_sanity();
 }
 
 static struct dmar_domain *alloc_domain(int flags)
···
 __domain_mapping(struct dmar_domain *domain, unsigned long iov_pfn,
 		 unsigned long phys_pfn, unsigned long nr_pages, int prot)
 {
-	struct dma_pte *first_pte = NULL, *pte = NULL;
 	unsigned int largepage_lvl = 0;
 	unsigned long lvl_pages = 0;
+	struct dma_pte *pte = NULL;
 	phys_addr_t pteval;
 	u64 attr;
 
···
 		return -EINVAL;
 
 	attr = prot & (DMA_PTE_READ | DMA_PTE_WRITE | DMA_PTE_SNP);
-	if (domain_use_first_level(domain))
+	if (domain_use_first_level(domain)) {
 		attr |= DMA_FL_PTE_PRESENT | DMA_FL_PTE_XD | DMA_FL_PTE_US;
+
+		if (domain->domain.type == IOMMU_DOMAIN_DMA) {
+			attr |= DMA_FL_PTE_ACCESS;
+			if (prot & DMA_PTE_WRITE)
+				attr |= DMA_FL_PTE_DIRTY;
+		}
+	}
 
 	pteval = ((phys_addr_t)phys_pfn << VTD_PAGE_SHIFT) | attr;
 
···
 			largepage_lvl = hardware_largepage_caps(domain, iov_pfn,
 					phys_pfn, nr_pages);
 
-			first_pte = pte = pfn_to_dma_pte(domain, iov_pfn, &largepage_lvl);
+			pte = pfn_to_dma_pte(domain, iov_pfn, &largepage_lvl);
 			if (!pte)
 				return -ENOMEM;
 			/* It is large page*/
···
 		 * recalculate 'pte' and switch back to smaller pages for the
 		 * end of the mapping, if the trailing size is not enough to
 		 * use another superpage (i.e. nr_pages < lvl_pages).
+		 *
+		 * We leave clflush for the leaf pte changes to iotlb_sync_map()
+		 * callback.
 		 */
 		pte++;
 		if (!nr_pages || first_pte_in_page(pte) ||
-		    (largepage_lvl > 1 && nr_pages < lvl_pages)) {
-			domain_flush_cache(domain, first_pte,
-					   (void *)pte - (void *)first_pte);
+		    (largepage_lvl > 1 && nr_pages < lvl_pages))
 			pte = NULL;
-		}
-	}
-
-	return 0;
-}
-
-static int
-domain_mapping(struct dmar_domain *domain, unsigned long iov_pfn,
-	       unsigned long phys_pfn, unsigned long nr_pages, int prot)
-{
-	int iommu_id, ret;
-	struct intel_iommu *iommu;
-
-	/* Do the real mapping first */
-	ret = __domain_mapping(domain, iov_pfn, phys_pfn, nr_pages, prot);
-	if (ret)
-		return ret;
-
-	for_each_domain_iommu(iommu_id, domain) {
-		iommu = g_iommus[iommu_id];
-		__mapping_notify_one(iommu, domain, iov_pfn, nr_pages);
 	}
 
 	return 0;
···
 		goto error;
 	}
 
+	ret = intel_cap_audit(CAP_AUDIT_STATIC_DMAR, NULL);
+	if (ret)
+		goto free_iommu;
+
 	for_each_iommu(iommu, drhd) {
 		if (drhd->ignored) {
 			iommu_disable_translation(iommu);
···
 	return 0;
 }
 
+static struct dmar_satc_unit *dmar_find_satc(struct acpi_dmar_satc *satc)
+{
+	struct dmar_satc_unit *satcu;
+	struct acpi_dmar_satc *tmp;
+
+	list_for_each_entry_rcu(satcu, &dmar_satc_units, list,
+				dmar_rcu_check()) {
+		tmp = (struct acpi_dmar_satc *)satcu->hdr;
+		if (satc->segment != tmp->segment)
+			continue;
+		if (satc->header.length != tmp->header.length)
+			continue;
+		if (memcmp(satc, tmp, satc->header.length) == 0)
+			return satcu;
+	}
+
+	return NULL;
+}
+
+int dmar_parse_one_satc(struct acpi_dmar_header *hdr, void *arg)
+{
+	struct acpi_dmar_satc *satc;
+	struct dmar_satc_unit *satcu;
+
+	if (system_state >= SYSTEM_RUNNING && !intel_iommu_enabled)
+		return 0;
+
+	satc = container_of(hdr, struct acpi_dmar_satc, header);
+	satcu = dmar_find_satc(satc);
+	if (satcu)
+		return 0;
+
+	satcu = kzalloc(sizeof(*satcu) + hdr->length, GFP_KERNEL);
+	if (!satcu)
+		return -ENOMEM;
+
+	satcu->hdr = (void *)(satcu + 1);
+	memcpy(satcu->hdr, hdr, hdr->length);
+	satcu->atc_required = satc->flags & 0x1;
+	satcu->devices = dmar_alloc_dev_scope((void *)(satc + 1),
+					      (void *)satc + satc->header.length,
+					      &satcu->devices_cnt);
+	if (satcu->devices_cnt && !satcu->devices) {
+		kfree(satcu);
+		return -ENOMEM;
+	}
+	list_add_rcu(&satcu->list, &dmar_satc_units);
+
+	return 0;
+}
+
 static int intel_iommu_add(struct dmar_drhd_unit *dmaru)
 {
 	int sp, ret;
···
 
 	if (g_iommus[iommu->seq_id])
 		return 0;
+
+	ret = intel_cap_audit(CAP_AUDIT_HOTPLUG_DMAR, iommu);
+	if (ret)
+		goto out;
 
 	if (hw_pass_through && !ecap_pass_through(iommu->ecap)) {
 		pr_warn("%s: Doesn't support hardware pass through.\n",
···
 {
 	struct dmar_rmrr_unit *rmrru, *rmrr_n;
 	struct dmar_atsr_unit *atsru, *atsr_n;
+	struct dmar_satc_unit *satcu, *satc_n;
 
 	list_for_each_entry_safe(rmrru, rmrr_n, &dmar_rmrr_units, list) {
 		list_del(&rmrru->list);
···
 	list_for_each_entry_safe(atsru, atsr_n, &dmar_atsr_units, list) {
 		list_del(&atsru->list);
 		intel_iommu_free_atsr(atsru);
+	}
+	list_for_each_entry_safe(satcu, satc_n, &dmar_satc_units, list) {
+		list_del(&satcu->list);
+		dmar_free_dev_scope(&satcu->devices, &satcu->devices_cnt);
+		kfree(satcu);
 	}
 }
···
 	int ret;
 	struct dmar_rmrr_unit *rmrru;
 	struct dmar_atsr_unit *atsru;
+	struct dmar_satc_unit *satcu;
 	struct acpi_dmar_atsr *atsr;
 	struct acpi_dmar_reserved_memory *rmrr;
+	struct acpi_dmar_satc *satc;
 
 	if (!intel_iommu_enabled && system_state >= SYSTEM_RUNNING)
 		return 0;
···
 		} else if (info->event == BUS_NOTIFY_REMOVED_DEVICE) {
 			if (dmar_remove_dev_scope(info, atsr->segment,
 					atsru->devices, atsru->devices_cnt))
+				break;
+		}
+	}
+	list_for_each_entry(satcu, &dmar_satc_units, list) {
+		satc = container_of(satcu->hdr, struct acpi_dmar_satc, header);
+		if (info->event == BUS_NOTIFY_ADD_DEVICE) {
+			ret = dmar_insert_dev_scope(info, (void *)(satc + 1),
+					(void *)satc + satc->header.length,
+					satc->segment, satcu->devices,
+					satcu->devices_cnt);
+			if (ret > 0)
+				break;
+			else if (ret < 0)
+				return ret;
+		} else if (info->event == BUS_NOTIFY_REMOVED_DEVICE) {
+			if (dmar_remove_dev_scope(info, satc->segment,
+					satcu->devices, satcu->devices_cnt))
 				break;
 		}
 	}
···
 
 	if (list_empty(&dmar_atsr_units))
 		pr_info("No ATSR found\n");
+
+	if (list_empty(&dmar_satc_units))
+		pr_info("No SATC found\n");
 
 	if (dmar_map_gfx)
 		intel_iommu_gfx_mapped = 1;
···
 	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
 	u64 max_addr;
 	int prot = 0;
-	int ret;
 
 	if (iommu_prot & IOMMU_READ)
 		prot |= DMA_PTE_READ;
···
 	/* Round up size to next multiple of PAGE_SIZE, if it and
 	   the low bits of hpa would take us onto the next page */
 	size = aligned_nrpages(hpa, size);
-	ret = domain_mapping(dmar_domain, iova >> VTD_PAGE_SHIFT,
-			     hpa >> VTD_PAGE_SHIFT, size, prot);
-	return ret;
+	return __domain_mapping(dmar_domain, iova >> VTD_PAGE_SHIFT,
+				hpa >> VTD_PAGE_SHIFT, size, prot);
 }
 
 static size_t intel_iommu_unmap(struct iommu_domain *domain,
···
 						VTD_PAGE_SHIFT) - 1));
 
 	return phys;
-}
-
-static inline bool scalable_mode_support(void)
-{
-	struct dmar_drhd_unit *drhd;
-	struct intel_iommu *iommu;
-	bool ret = true;
-
-	rcu_read_lock();
-	for_each_active_iommu(iommu, drhd) {
-		if (!sm_supported(iommu)) {
-			ret = false;
-			break;
-		}
-	}
-	rcu_read_unlock();
-
-	return ret;
-}
-
-static inline bool iommu_pasid_support(void)
-{
-	struct dmar_drhd_unit *drhd;
-	struct intel_iommu *iommu;
-	bool ret = true;
-
-	rcu_read_lock();
-	for_each_active_iommu(iommu, drhd) {
-		if (!pasid_supported(iommu)) {
-			ret = false;
-			break;
-		}
-	}
-	rcu_read_unlock();
-
-	return ret;
-}
-
-static inline bool nested_mode_support(void)
-{
-	struct dmar_drhd_unit *drhd;
-	struct intel_iommu *iommu;
-	bool ret = true;
-
-	rcu_read_lock();
-	for_each_active_iommu(iommu, drhd) {
-		if (!sm_supported(iommu) || !ecap_nest(iommu->ecap)) {
-			ret = false;
-			break;
-		}
-	}
-	rcu_read_unlock();
-
-	return ret;
 }
 
 static bool intel_iommu_capable(enum iommu_cap cap)
···
 	int ret;
 
 	if (!dev_is_pci(dev) || dmar_disabled ||
-	    !scalable_mode_support() || !iommu_pasid_support())
+	    !scalable_mode_support() || !pasid_mode_support())
 		return false;
 
 	ret = pci_pasid_features(to_pci_dev(dev));
···
 	return false;
 }
 
+static void clflush_sync_map(struct dmar_domain *domain, unsigned long clf_pfn,
+			     unsigned long clf_pages)
+{
+	struct dma_pte *first_pte = NULL, *pte = NULL;
+	unsigned long lvl_pages = 0;
+	int level = 0;
+
+	while (clf_pages > 0) {
+		if (!pte) {
+			level = 0;
+			pte = pfn_to_dma_pte(domain, clf_pfn, &level);
+			if (WARN_ON(!pte))
+				return;
+			first_pte = pte;
+			lvl_pages = lvl_to_nr_pages(level);
+		}
+
+		if (WARN_ON(!lvl_pages || clf_pages < lvl_pages))
+			return;
+
+		clf_pages -= lvl_pages;
+		clf_pfn += lvl_pages;
+		pte++;
+
+		if (!clf_pages || first_pte_in_page(pte) ||
+		    (level > 1 && clf_pages < lvl_pages)) {
+			domain_flush_cache(domain, first_pte,
+					   (void *)pte - (void *)first_pte);
+			pte = NULL;
+		}
+	}
+}
+
+static void intel_iommu_iotlb_sync_map(struct iommu_domain *domain,
+				       unsigned long iova, size_t size)
+{
+	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
+	unsigned long pages = aligned_nrpages(iova, size);
+	unsigned long pfn = iova >> VTD_PAGE_SHIFT;
+	struct intel_iommu *iommu;
+	int iommu_id;
+
+	if (!dmar_domain->iommu_coherency)
+		clflush_sync_map(dmar_domain, pfn, pages);
+
+	for_each_domain_iommu(iommu_id, dmar_domain) {
+		iommu = g_iommus[iommu_id];
+		__mapping_notify_one(iommu, dmar_domain, pfn, pages);
+	}
+}
+
 const struct iommu_ops intel_iommu_ops = {
 	.capable		= intel_iommu_capable,
 	.domain_alloc		= intel_iommu_domain_alloc,
···
 	.aux_detach_dev		= intel_iommu_aux_detach_device,
 	.aux_get_pasid		= intel_iommu_aux_get_pasid,
 	.map			= intel_iommu_map,
+	.iotlb_sync_map		= intel_iommu_iotlb_sync_map,
 	.unmap			= intel_iommu_unmap,
 	.flush_iotlb_all	= intel_flush_iotlb_all,
 	.iotlb_sync		= intel_iommu_tlb_sync,
drivers/iommu/intel/irq_remapping.c (+8)
···
 #include <asm/pci-direct.h>
 
 #include "../irq_remapping.h"
+#include "cap_audit.h"
 
 enum irq_mode {
 	IRQ_REMAPPING,
···
 	if (dmar_table_init() < 0)
 		return -ENODEV;
 
+	if (intel_cap_audit(CAP_AUDIT_STATIC_IRQR, NULL))
+		goto error;
+
 	if (!dmar_ir_support())
 		return -ENODEV;
 
···
 {
 	int ret;
 	int eim = x2apic_enabled();
+
+	ret = intel_cap_audit(CAP_AUDIT_HOTPLUG_IRQR, iommu);
+	if (ret)
+		return ret;
 
 	if (eim && !ecap_eim_support(iommu->ecap)) {
 		pr_info("DRHD %Lx: EIM not supported by DRHD, ecap %Lx\n",
···
 
 config ARM_SMMU_V3_PMU
 	tristate "ARM SMMUv3 Performance Monitors Extension"
-	depends on ARM64 && ACPI && ARM_SMMU_V3
+	depends on ARM64 && ACPI
 	help
 	  Provides support for the ARM SMMUv3 Performance Monitor Counter
 	  Groups (PMCG), which provide monitoring of transactions passing
···
 	ARM_64_LPAE_S2,
 	ARM_V7S,
 	ARM_MALI_LPAE,
+	AMD_IOMMU_V1,
 	IO_PGTABLE_NUM_FMTS,
 };
 
···
  * hardware which does not implement the permissions of a given
  * format, and/or requires some format-specific default value.
  *
- * IO_PGTABLE_QUIRK_TLBI_ON_MAP: If the format forbids caching invalid
- *	(unmapped) entries but the hardware might do so anyway, perform
- *	TLB maintenance when mapping as well as when unmapping.
- *
  * IO_PGTABLE_QUIRK_ARM_MTK_EXT: (ARM v7s format) MediaTek IOMMUs extend
- *	to support up to 34 bits PA where the bit32 and bit33 are
- *	encoded in the bit9 and bit4 of the PTE respectively.
+ *	to support up to 35 bits PA where the bit32, bit33 and bit34 are
+ *	encoded in the bit9, bit4 and bit5 of the PTE respectively.
  *
  * IO_PGTABLE_QUIRK_NON_STRICT: Skip issuing synchronous leaf TLBIs
  *	on unmap, for DMA domains using the flush queue mechanism for
···
 	 */
 	#define IO_PGTABLE_QUIRK_ARM_NS		BIT(0)
 	#define IO_PGTABLE_QUIRK_NO_PERMS	BIT(1)
-	#define IO_PGTABLE_QUIRK_TLBI_ON_MAP	BIT(2)
 	#define IO_PGTABLE_QUIRK_ARM_MTK_EXT	BIT(3)
 	#define IO_PGTABLE_QUIRK_NON_STRICT	BIT(4)
 	#define IO_PGTABLE_QUIRK_ARM_TTBR1	BIT(5)
···
 
 static inline void io_pgtable_tlb_flush_all(struct io_pgtable *iop)
 {
-	iop->cfg.tlb->tlb_flush_all(iop->cookie);
+	if (iop->cfg.tlb && iop->cfg.tlb->tlb_flush_all)
+		iop->cfg.tlb->tlb_flush_all(iop->cookie);
 }
 
 static inline void
 io_pgtable_tlb_flush_walk(struct io_pgtable *iop, unsigned long iova,
 			  size_t size, size_t granule)
 {
-	iop->cfg.tlb->tlb_flush_walk(iova, size, granule, iop->cookie);
+	if (iop->cfg.tlb && iop->cfg.tlb->tlb_flush_walk)
+		iop->cfg.tlb->tlb_flush_walk(iova, size, granule, iop->cookie);
 }
 
 static inline void
···
 			struct iommu_iotlb_gather *gather, unsigned long iova,
 			size_t granule)
 {
-	if (iop->cfg.tlb->tlb_add_page)
+	if (iop->cfg.tlb && iop->cfg.tlb->tlb_add_page)
 		iop->cfg.tlb->tlb_add_page(gather, iova, granule, iop->cookie);
 }
 
···
 extern struct io_pgtable_init_fns io_pgtable_arm_64_lpae_s2_init_fns;
 extern struct io_pgtable_init_fns io_pgtable_arm_v7s_init_fns;
 extern struct io_pgtable_init_fns io_pgtable_arm_mali_lpae_init_fns;
+extern struct io_pgtable_init_fns io_pgtable_amd_iommu_v1_init_fns;
 
 #endif /* __IO_PGTABLE_H */
include/linux/iommu.h (+5 -16)
···
  * struct iommu_iotlb_gather - Range information for a pending IOTLB flush
  *
  * @start: IOVA representing the start of the range to be flushed
- * @end: IOVA representing the end of the range to be flushed (exclusive)
+ * @end: IOVA representing the end of the range to be flushed (inclusive)
  * @pgsize: The interval at which to perform the flush
  *
  * This structure is intended to be updated by multiple calls to the
···
 	size_t (*unmap)(struct iommu_domain *domain, unsigned long iova,
 		     size_t size, struct iommu_iotlb_gather *iotlb_gather);
 	void (*flush_iotlb_all)(struct iommu_domain *domain);
-	void (*iotlb_sync_map)(struct iommu_domain *domain);
+	void (*iotlb_sync_map)(struct iommu_domain *domain, unsigned long iova,
+			       size_t size);
 	void (*iotlb_sync)(struct iommu_domain *domain,
 			   struct iommu_iotlb_gather *iotlb_gather);
 	phys_addr_t (*iova_to_phys)(struct iommu_domain *domain, dma_addr_t iova);
···
 void iommu_device_sysfs_remove(struct iommu_device *iommu);
 int iommu_device_link(struct iommu_device *iommu, struct device *link);
 void iommu_device_unlink(struct iommu_device *iommu, struct device *link);
+int iommu_deferred_attach(struct device *dev, struct iommu_domain *domain);
 
 static inline void __iommu_device_set_ops(struct iommu_device *iommu,
 					  const struct iommu_ops *ops)
···
 extern int iommu_domain_window_enable(struct iommu_domain *domain, u32 wnd_nr,
 				      phys_addr_t offset, u64 size,
 				      int prot);
-extern void iommu_domain_window_disable(struct iommu_domain *domain, u32 wnd_nr);
 
 extern int report_iommu_fault(struct iommu_domain *domain, struct device *dev,
 			      unsigned long iova, int flags);
···
 					       struct iommu_iotlb_gather *gather,
 					       unsigned long iova, size_t size)
 {
-	unsigned long start = iova, end = start + size;
+	unsigned long start = iova, end = start + size - 1;
 
 	/*
 	 * If the new page is disjoint from the current range or is mapped at
···
 int iommu_probe_device(struct device *dev);
 void iommu_release_device(struct device *dev);
 
-bool iommu_dev_has_feature(struct device *dev, enum iommu_dev_features f);
 int iommu_dev_enable_feature(struct device *dev, enum iommu_dev_features f);
 int iommu_dev_disable_feature(struct device *dev, enum iommu_dev_features f);
 bool iommu_dev_feature_enabled(struct device *dev, enum iommu_dev_features f);
···
 					     u64 size, int prot)
 {
 	return -ENODEV;
-}
-
-static inline void iommu_domain_window_disable(struct iommu_domain *domain,
-					       u32 wnd_nr)
-{
 }
 
 static inline phys_addr_t iommu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
···
 const struct iommu_ops *iommu_ops_from_fwnode(struct fwnode_handle *fwnode)
 {
 	return NULL;
-}
-
-static inline bool
-iommu_dev_has_feature(struct device *dev, enum iommu_dev_features feat)
-{
-	return false;
 }
 
 static inline bool
include/linux/iova.h (-12)
···
 			      unsigned long limit_pfn, bool flush_rcache);
 struct iova *reserve_iova(struct iova_domain *iovad, unsigned long pfn_lo,
 			  unsigned long pfn_hi);
-void copy_reserved_iova(struct iova_domain *from, struct iova_domain *to);
 void init_iova_domain(struct iova_domain *iovad, unsigned long granule,
 		      unsigned long start_pfn);
-bool has_iova_flush_queue(struct iova_domain *iovad);
 int init_iova_flush_queue(struct iova_domain *iovad,
 			  iova_flush_cb flush_cb, iova_entry_dtor entry_dtor);
 struct iova *find_iova(struct iova_domain *iovad, unsigned long pfn);
···
 	return NULL;
 }
 
-static inline void copy_reserved_iova(struct iova_domain *from,
-				      struct iova_domain *to)
-{
-}
-
 static inline void init_iova_domain(struct iova_domain *iovad,
 				    unsigned long granule,
 				    unsigned long start_pfn)
 {
-}
-
-static inline bool has_iova_flush_queue(struct iova_domain *iovad)
-{
-	return false;
 }
 
 static inline int init_iova_flush_queue(struct iova_domain *iovad,