Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'iommu-updates-v4.6' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu

Pull IOMMU updates from Joerg Roedel:

- updates for the Exynos IOMMU driver to make use of default domains
and to add support for the SYSMMU v5

- new Mediatek IOMMU driver

- support for the ARMv7 short descriptor format in the io-pgtable code

- default domain support for the ARM SMMU

- a couple of other small fixes all over the place

* tag 'iommu-updates-v4.6' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (41 commits)
iommu/ipmmu-vmsa: Add r8a7795 DT binding
iommu/mediatek: Check for NULL instead of IS_ERR()
iommu/io-pgtable-armv7s: Fix kmem_cache_alloc() flags
iommu/mediatek: Fix handling of of_count_phandle_with_args result
iommu/dma: Fix NEED_SG_DMA_LENGTH dependency
iommu/mediatek: Mark PM functions as __maybe_unused
iommu/mediatek: Select ARM_DMA_USE_IOMMU
iommu/exynos: Use proper readl/writel register interface
iommu/exynos: Pointers are not physical addresses
dts: mt8173: Add iommu/smi nodes for mt8173
iommu/mediatek: Add mt8173 IOMMU driver
memory: mediatek: Add SMI driver
dt-bindings: mediatek: Add smi dts binding
dt-bindings: iommu: Add binding for mediatek IOMMU
iommu/ipmmu-vmsa: Use ARCH_RENESAS
iommu/exynos: Support multiple attach_device calls
iommu/exynos: Add Maintainers entry for Exynos SYSMMU driver
iommu/exynos: Add support for v5 SYSMMU
iommu/exynos: Update device tree documentation
iommu/exynos: Add support for SYSMMU controller with bogus version reg
...

+2957 -418
+68
Documentation/devicetree/bindings/iommu/mediatek,iommu.txt
···
 1 + * Mediatek IOMMU Architecture Implementation
 2 +
 3 + Some Mediatek SoCs contain a Multimedia Memory Management Unit (M4U) which
 4 + uses the ARM Short-Descriptor translation table format for address translation.
 5 +
 6 + For the M4U hardware block diagram, please check below:
 7 +
 8 +               EMI (External Memory Interface)
 9 +                |
10 +               m4u (Multimedia Memory Management Unit)
11 +                |
12 +               SMI Common (Smart Multimedia Interface Common)
13 +                |
14 +        +----------------+-------
15 +        |                |
16 +        |                |
17 +    SMI larb0        SMI larb1   ...  SoCs have several SMI local arbiters (larb).
18 +    (display)         (vdec)
19 +        |                |
20 +        |                |
21 +  +-----+-----+     +----+----+
22 +  |     |     |     |    |    |
23 +  |     |     |...  |    |    |  ... There are different ports in each larb.
24 +  |     |     |     |    |    |
25 + OVL0 RDMA0 WDMA0   MC   PP  VLD
26 +
27 + As shown above, the multimedia HW goes through SMI and M4U when it
28 + accesses EMI. SMI is a bridge between the m4u and the multimedia HW. It
29 + contains the SMI local arbiters and SMI common, and it controls whether the
30 + multimedia HW should go through the m4u for translation or bypass it and
31 + talk directly to EMI. SMI also helps control the power domain and clocks
32 + for each local arbiter.
33 + Normally we specify a local arbiter (larb) for each multimedia HW block
34 + like display, video decode, and camera, and there are different ports
35 + in each larb. For example, the video decode local arbiter has ports like
36 + MC, PP and VLD, all corresponding to the video HW.
37 +
38 + Required properties:
39 + - compatible : must be "mediatek,mt8173-m4u".
40 + - reg : m4u register base and size.
41 + - interrupts : the interrupt of the m4u.
42 + - clocks : must contain one entry for each entry in clock-names.
43 + - clock-names : must be "bclk", the block clock of the m4u.
44 + - mediatek,larbs : list of phandles to the local arbiters in the current SoC.
45 +   Refer to bindings/memory-controllers/mediatek,smi-larb.txt. It must be
46 +   sorted according to the local arbiter index, like larb0, larb1, larb2...
47 + - #iommu-cells : must be 1. This is the mtk_m4u_id according to the HW.
48 +   Specifies the mtk_m4u_id as defined in
49 +   dt-bindings/memory/mt8173-larb-port.h.
50 +
51 + Example:
52 +	iommu: iommu@10205000 {
53 +		compatible = "mediatek,mt8173-m4u";
54 +		reg = <0 0x10205000 0 0x1000>;
55 +		interrupts = <GIC_SPI 139 IRQ_TYPE_LEVEL_LOW>;
56 +		clocks = <&infracfg CLK_INFRA_M4U>;
57 +		clock-names = "bclk";
58 +		mediatek,larbs = <&larb0 &larb1 &larb2 &larb3 &larb4 &larb5>;
59 +		#iommu-cells = <1>;
60 +	};
61 +
62 + Example for a client device:
63 +	display {
64 +		compatible = "mediatek,mt8173-disp";
65 +		iommus = <&iommu M4U_PORT_DISP_OVL0>,
66 +			 <&iommu M4U_PORT_DISP_RDMA0>;
67 +		...
68 +	};
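On the consumer side, the one-cell specifier shown above is what the driver recovers through the generic OF parsing helpers. A minimal, hypothetical sketch (the helper name below is illustrative, not taken from mtk_iommu.c; of_parse_phandle_with_args() is the real API):

	#include <linux/of.h>

	/*
	 * Hypothetical helper: recover the mtk_m4u_id (e.g. M4U_PORT_DISP_OVL0)
	 * from the index'th "iommus" entry of a client node.
	 */
	static int example_get_m4u_id(struct device_node *client, int index,
				      u32 *m4u_id)
	{
		struct of_phandle_args args;
		int ret;

		/* "#iommu-cells = <1>" in the m4u node => args.args_count == 1 */
		ret = of_parse_phandle_with_args(client, "iommus", "#iommu-cells",
						 index, &args);
		if (ret)
			return ret;

		*m4u_id = args.args[0];
		of_node_put(args.np);	/* drop the reference on the m4u node */
		return 0;
	}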
+13 -2
Documentation/devicetree/bindings/iommu/renesas,ipmmu-vmsa.txt
··· 7 7 8 8 Required Properties: 9 9 10 - - compatible: Must contain SoC-specific and generic entries from below. 10 + - compatible: Must contain SoC-specific and generic entry below in case 11 + the device is compatible with the R-Car Gen2 VMSA-compatible IPMMU. 11 12 12 13 - "renesas,ipmmu-r8a73a4" for the R8A73A4 (R-Mobile APE6) IPMMU. 13 14 - "renesas,ipmmu-r8a7790" for the R8A7790 (R-Car H2) IPMMU. 14 15 - "renesas,ipmmu-r8a7791" for the R8A7791 (R-Car M2-W) IPMMU. 15 16 - "renesas,ipmmu-r8a7793" for the R8A7793 (R-Car M2-N) IPMMU. 16 17 - "renesas,ipmmu-r8a7794" for the R8A7794 (R-Car E2) IPMMU. 18 + - "renesas,ipmmu-r8a7795" for the R8A7795 (R-Car H3) IPMMU. 17 19 - "renesas,ipmmu-vmsa" for generic R-Car Gen2 VMSA-compatible IPMMU. 18 20 19 21 - reg: Base address and size of the IPMMU registers. 20 22 - interrupts: Specifiers for the MMU fault interrupts. For instances that 21 23 support secure mode two interrupts must be specified, for non-secure and 22 24 secure mode, in that order. For instances that don't support secure mode a 23 - single interrupt must be specified. 25 + single interrupt must be specified. Not required for cache IPMMUs. 24 26 25 27 - #iommu-cells: Must be 1. 28 + 29 + Optional properties: 30 + 31 + - renesas,ipmmu-main: reference to the main IPMMU instance in two cells. 32 + The first cell is a phandle to the main IPMMU and the second cell is 33 + the interrupt bit number associated with the particular cache IPMMU device. 34 + The interrupt bit number needs to match the main IPMMU IMSSTR register. 35 + Only used by cache IPMMU instances. 36 + 26 37 27 38 Each bus master connected to an IPMMU must reference the IPMMU in its device 28 39 node with the following property:
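Since renesas,ipmmu-main has a fixed two-cell layout (phandle plus interrupt bit) with no #...-cells companion property, a consumer would parse it with the fixed-args OF helper. A hedged sketch (the function name is illustrative, not the driver's):

	#include <linux/of.h>

	static int example_get_main_ipmmu(struct device_node *cache_np,
					  struct device_node **main_np,
					  u32 *imsstr_bit)
	{
		struct of_phandle_args args;
		int ret;

		/* exactly one argument cell follows the phandle */
		ret = of_parse_phandle_with_fixed_args(cache_np,
						       "renesas,ipmmu-main",
						       1, 0, &args);
		if (ret)
			return ret;

		*main_np = args.np;		/* the main IPMMU instance */
		*imsstr_bit = args.args[0];	/* bit number in IMSSTR */
		return 0;
	}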
+10 -12
Documentation/devicetree/bindings/iommu/samsung,sysmmu.txt
··· 23 23 for window 1, 2 and 3. 24 24 * M2M Scalers and G2D in Exynos5420 has one System MMU on the read channel and 25 25 the other System MMU on the write channel. 26 - The drivers must consider how to handle those System MMUs. One of the idea is 27 - to implement child devices or sub-devices which are the client devices of the 28 - System MMU. 29 26 30 - Note: 31 - The current DT binding for the Exynos System MMU is incomplete. 32 - The following properties can be removed or changed, if found incompatible with 33 - the "Generic IOMMU Binding" support for attaching devices to the IOMMU. 27 + For information on assigning a System MMU controller to its peripheral devices, 28 + see generic IOMMU bindings. 34 29 35 30 Required properties: 36 31 - compatible: Should be "samsung,exynos-sysmmu" 37 32 - reg: A tuple of base address and size of System MMU registers. 33 + - #iommu-cells: Should be <0>. 38 34 - interrupt-parent: The phandle of the interrupt controller of System MMU 39 35 - interrupts: An interrupt specifier for interrupt signal of System MMU, 40 36 according to the format defined by a particular interrupt 41 37 controller. 42 - - clock-names: Should be "sysmmu" if the System MMU is needed to gate its clock. 38 + - clock-names: Should be "sysmmu" or a pair of "aclk" and "pclk" to gate 39 + SYSMMU core clocks. 43 40 Optional "master" if the clock to the System MMU is gated by 44 - another gate clock other than "sysmmu". 45 - Exynos4 SoCs, there needs no "master" clock. 46 - Exynos5 SoCs, some System MMUs must have "master" clocks. 47 - - clocks: Required if the System MMU is needed to gate its clock. 41 + another core's gate clock (usually the main gate clock 42 + of the peripheral device this SYSMMU belongs to). 43 + - clocks: Phandles for respective clocks described by clock-names. 48 44 - power-domains: Required if the System MMU is needed to gate its power. 49 45 Please refer to the following document: 50 46 Documentation/devicetree/bindings/power/pd-samsung.txt ··· 53 57 power-domains = <&pd_gsc>; 54 58 clocks = <&clock CLK_GSCL0>; 55 59 clock-names = "gscl"; 60 + iommus = <&sysmmu_gsc0>; 56 61 }; 57 62 58 63 sysmmu_gsc0: sysmmu@13E80000 { ··· 64 67 clock-names = "sysmmu", "master"; 65 68 clocks = <&clock CLK_SMMU_GSCL0>, <&clock CLK_GSCL0>; 66 69 power-domains = <&pd_gsc>; 70 + #iommu-cells = <0>; 67 71 };
+24
Documentation/devicetree/bindings/memory-controllers/mediatek,smi-common.txt
···
 1 + SMI (Smart Multimedia Interface) Common
 2 +
 3 + For the hardware block diagram, please check bindings/iommu/mediatek,iommu.txt
 4 +
 5 + Required properties:
 6 + - compatible : must be "mediatek,mt8173-smi-common"
 7 + - reg : the register and size of the SMI block.
 8 + - power-domains : a phandle to the power domain of this local arbiter.
 9 + - clocks : must contain an entry for each entry in clock-names.
10 + - clock-names : must contain 2 entries, as follows:
11 +   - "apb" : Advanced Peripheral Bus clock. It is the clock for setting
12 +     the registers.
13 +   - "smi" : the clock for transferring data and commands.
14 +   They may be the same if both source clocks are the same.
15 +
16 + Example:
17 +	smi_common: smi@14022000 {
18 +		compatible = "mediatek,mt8173-smi-common";
19 +		reg = <0 0x14022000 0 0x1000>;
20 +		power-domains = <&scpsys MT8173_POWER_DOMAIN_MM>;
21 +		clocks = <&mmsys CLK_MM_SMI_COMMON>,
22 +			 <&mmsys CLK_MM_SMI_COMMON>;
23 +		clock-names = "apb", "smi";
24 +	};
+25
Documentation/devicetree/bindings/memory-controllers/mediatek,smi-larb.txt
···
 1 + SMI (Smart Multimedia Interface) Local Arbiter
 2 +
 3 + For the hardware block diagram, please check bindings/iommu/mediatek,iommu.txt
 4 +
 5 + Required properties:
 6 + - compatible : must be "mediatek,mt8173-smi-larb"
 7 + - reg : the register and size of this local arbiter.
 8 + - mediatek,smi : a phandle to the smi_common node.
 9 + - power-domains : a phandle to the power domain of this local arbiter.
10 + - clocks : must contain an entry for each entry in clock-names.
11 + - clock-names : must contain 2 entries, as follows:
12 +   - "apb" : Advanced Peripheral Bus clock. It is the clock for setting
13 +     the registers.
14 +   - "smi" : the clock for transferring data and commands.
15 +
16 + Example:
17 +	larb1: larb@16010000 {
18 +		compatible = "mediatek,mt8173-smi-larb";
19 +		reg = <0 0x16010000 0 0x1000>;
20 +		mediatek,smi = <&smi_common>;
21 +		power-domains = <&scpsys MT8173_POWER_DOMAIN_VDEC>;
22 +		clocks = <&vdecsys CLK_VDEC_CKEN>,
23 +			 <&vdecsys CLK_VDEC_LARB_CKEN>;
24 +		clock-names = "apb", "smi";
25 +	};
+8
MAINTAINERS
··· 1838 1838 1839 1839 ARM SMMU DRIVERS 1840 1840 M: Will Deacon <will.deacon@arm.com> 1841 + R: Robin Murphy <robin.murphy@arm.com> 1841 1842 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 1842 1843 S: Maintained 1843 1844 F: drivers/iommu/arm-smmu.c 1844 1845 F: drivers/iommu/arm-smmu-v3.c 1845 1846 F: drivers/iommu/io-pgtable-arm.c 1847 + F: drivers/iommu/io-pgtable-arm-v7s.c 1846 1848 1847 1849 ARM64 PORT (AARCH64 ARCHITECTURE) 1848 1850 M: Catalin Marinas <catalin.marinas@arm.com> ··· 4363 4361 L: dri-devel@lists.freedesktop.org 4364 4362 S: Maintained 4365 4363 F: drivers/gpu/drm/exynos/exynos_dp* 4364 + 4365 + EXYNOS SYSMMU (IOMMU) driver 4366 + M: Marek Szyprowski <m.szyprowski@samsung.com> 4367 + L: iommu@lists.linux-foundation.org 4368 + S: Maintained 4369 + F: drivers/iommu/exynos-iommu.c 4366 4370 4367 4371 EXYNOS MIPI DISPLAY DRIVERS 4368 4372 M: Inki Dae <inki.dae@samsung.com>
+81
arch/arm64/boot/dts/mediatek/mt8173.dtsi
··· 14 14 #include <dt-bindings/clock/mt8173-clk.h> 15 15 #include <dt-bindings/interrupt-controller/irq.h> 16 16 #include <dt-bindings/interrupt-controller/arm-gic.h> 17 + #include <dt-bindings/memory/mt8173-larb-port.h> 17 18 #include <dt-bindings/phy/phy.h> 18 19 #include <dt-bindings/power/mt8173-power.h> 19 20 #include <dt-bindings/reset/mt8173-resets.h> ··· 276 275 #interrupt-cells = <3>; 277 276 interrupt-parent = <&gic>; 278 277 reg = <0 0x10200620 0 0x20>; 278 + }; 279 + 280 + iommu: iommu@10205000 { 281 + compatible = "mediatek,mt8173-m4u"; 282 + reg = <0 0x10205000 0 0x1000>; 283 + interrupts = <GIC_SPI 139 IRQ_TYPE_LEVEL_LOW>; 284 + clocks = <&infracfg CLK_INFRA_M4U>; 285 + clock-names = "bclk"; 286 + mediatek,larbs = <&larb0 &larb1 &larb2 287 + &larb3 &larb4 &larb5>; 288 + #iommu-cells = <1>; 279 289 }; 280 290 281 291 efuse: efuse@10206000 { ··· 617 605 status = "disabled"; 618 606 }; 619 607 608 + larb0: larb@14021000 { 609 + compatible = "mediatek,mt8173-smi-larb"; 610 + reg = <0 0x14021000 0 0x1000>; 611 + mediatek,smi = <&smi_common>; 612 + power-domains = <&scpsys MT8173_POWER_DOMAIN_MM>; 613 + clocks = <&mmsys CLK_MM_SMI_LARB0>, 614 + <&mmsys CLK_MM_SMI_LARB0>; 615 + clock-names = "apb", "smi"; 616 + }; 617 + 618 + smi_common: smi@14022000 { 619 + compatible = "mediatek,mt8173-smi-common"; 620 + reg = <0 0x14022000 0 0x1000>; 621 + power-domains = <&scpsys MT8173_POWER_DOMAIN_MM>; 622 + clocks = <&mmsys CLK_MM_SMI_COMMON>, 623 + <&mmsys CLK_MM_SMI_COMMON>; 624 + clock-names = "apb", "smi"; 625 + }; 626 + 627 + larb4: larb@14027000 { 628 + compatible = "mediatek,mt8173-smi-larb"; 629 + reg = <0 0x14027000 0 0x1000>; 630 + mediatek,smi = <&smi_common>; 631 + power-domains = <&scpsys MT8173_POWER_DOMAIN_MM>; 632 + clocks = <&mmsys CLK_MM_SMI_LARB4>, 633 + <&mmsys CLK_MM_SMI_LARB4>; 634 + clock-names = "apb", "smi"; 635 + }; 636 + 620 637 imgsys: clock-controller@15000000 { 621 638 compatible = "mediatek,mt8173-imgsys", "syscon"; 622 639 reg = <0 0x15000000 0 0x1000>; 623 640 #clock-cells = <1>; 641 + }; 642 + 643 + larb2: larb@15001000 { 644 + compatible = "mediatek,mt8173-smi-larb"; 645 + reg = <0 0x15001000 0 0x1000>; 646 + mediatek,smi = <&smi_common>; 647 + power-domains = <&scpsys MT8173_POWER_DOMAIN_ISP>; 648 + clocks = <&imgsys CLK_IMG_LARB2_SMI>, 649 + <&imgsys CLK_IMG_LARB2_SMI>; 650 + clock-names = "apb", "smi"; 624 651 }; 625 652 626 653 vdecsys: clock-controller@16000000 { ··· 668 617 #clock-cells = <1>; 669 618 }; 670 619 620 + larb1: larb@16010000 { 621 + compatible = "mediatek,mt8173-smi-larb"; 622 + reg = <0 0x16010000 0 0x1000>; 623 + mediatek,smi = <&smi_common>; 624 + power-domains = <&scpsys MT8173_POWER_DOMAIN_VDEC>; 625 + clocks = <&vdecsys CLK_VDEC_CKEN>, 626 + <&vdecsys CLK_VDEC_LARB_CKEN>; 627 + clock-names = "apb", "smi"; 628 + }; 629 + 671 630 vencsys: clock-controller@18000000 { 672 631 compatible = "mediatek,mt8173-vencsys", "syscon"; 673 632 reg = <0 0x18000000 0 0x1000>; 674 633 #clock-cells = <1>; 675 634 }; 676 635 636 + larb3: larb@18001000 { 637 + compatible = "mediatek,mt8173-smi-larb"; 638 + reg = <0 0x18001000 0 0x1000>; 639 + mediatek,smi = <&smi_common>; 640 + power-domains = <&scpsys MT8173_POWER_DOMAIN_VENC>; 641 + clocks = <&vencsys CLK_VENC_CKE1>, 642 + <&vencsys CLK_VENC_CKE0>; 643 + clock-names = "apb", "smi"; 644 + }; 645 + 677 646 vencltsys: clock-controller@19000000 { 678 647 compatible = "mediatek,mt8173-vencltsys", "syscon"; 679 648 reg = <0 0x19000000 0 0x1000>; 680 649 #clock-cells = <1>; 650 + }; 651 + 652 + larb5: 
larb@19001000 { 653 + compatible = "mediatek,mt8173-smi-larb"; 654 + reg = <0 0x19001000 0 0x1000>; 655 + mediatek,smi = <&smi_common>; 656 + power-domains = <&scpsys MT8173_POWER_DOMAIN_VENC_LT>; 657 + clocks = <&vencltsys CLK_VENCLT_CKE1>, 658 + <&vencltsys CLK_VENCLT_CKE0>; 659 + clock-names = "apb", "smi"; 681 660 }; 682 661 }; 683 662 };
+39 -3
drivers/iommu/Kconfig
··· 39 39 40 40 If unsure, say N here. 41 41 42 + config IOMMU_IO_PGTABLE_ARMV7S 43 + bool "ARMv7/v8 Short Descriptor Format" 44 + select IOMMU_IO_PGTABLE 45 + depends on HAS_DMA && (ARM || ARM64 || COMPILE_TEST) 46 + help 47 + Enable support for the ARM Short-descriptor pagetable format. 48 + This supports 32-bit virtual and physical addresses mapped using 49 + 2-level tables with 4KB pages/1MB sections, and contiguous entries 50 + for 64KB pages/16MB supersections if indicated by the IOMMU driver. 51 + 52 + config IOMMU_IO_PGTABLE_ARMV7S_SELFTEST 53 + bool "ARMv7s selftests" 54 + depends on IOMMU_IO_PGTABLE_ARMV7S 55 + help 56 + Enable self-tests for ARMv7s page table allocator. This performs 57 + a series of page-table consistency checks during boot. 58 + 59 + If unsure, say N here. 60 + 42 61 endmenu 43 62 44 63 config IOMMU_IOVA ··· 70 51 # IOMMU-agnostic DMA-mapping layer 71 52 config IOMMU_DMA 72 53 bool 73 - depends on NEED_SG_DMA_LENGTH 74 54 select IOMMU_API 75 55 select IOMMU_IOVA 56 + select NEED_SG_DMA_LENGTH 76 57 77 58 config FSL_PAMU 78 59 bool "Freescale IOMMU support" ··· 262 243 263 244 config EXYNOS_IOMMU 264 245 bool "Exynos IOMMU Support" 265 - depends on ARCH_EXYNOS && ARM && MMU 246 + depends on ARCH_EXYNOS && MMU 266 247 select IOMMU_API 267 248 select ARM_DMA_USE_IOMMU 268 249 help ··· 285 266 config IPMMU_VMSA 286 267 bool "Renesas VMSA-compatible IPMMU" 287 268 depends on ARM_LPAE 288 - depends on ARCH_SHMOBILE || COMPILE_TEST 269 + depends on ARCH_RENESAS || COMPILE_TEST 289 270 select IOMMU_API 290 271 select IOMMU_IO_PGTABLE_LPAE 291 272 select ARM_DMA_USE_IOMMU ··· 336 317 select IOMMU_API 337 318 help 338 319 Support for the IOMMU API for s390 PCI devices. 320 + 321 + config MTK_IOMMU 322 + bool "MTK IOMMU Support" 323 + depends on ARM || ARM64 324 + depends on ARCH_MEDIATEK || COMPILE_TEST 325 + select ARM_DMA_USE_IOMMU 326 + select IOMMU_API 327 + select IOMMU_DMA 328 + select IOMMU_IO_PGTABLE_ARMV7S 329 + select MEMORY 330 + select MTK_SMI 331 + help 332 + Support for the M4U on certain Mediatek SOCs. M4U is MultiMedia 333 + Memory Management Unit. This option enables remapping of DMA memory 334 + accesses for the multimedia subsystem. 335 + 336 + If unsure, say N here. 339 337 340 338 endif # IOMMU_SUPPORT
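For reference, a config fragment that turns the new options on; MTK_IOMMU selects most of its dependencies itself, so only the driver symbol (and optionally the selftests) needs enabling explicitly:

	CONFIG_MTK_IOMMU=y
	# pulled in automatically via select, shown only for clarity:
	# CONFIG_IOMMU_IO_PGTABLE_ARMV7S=y
	# optional boot-time page-table consistency checks:
	CONFIG_IOMMU_IO_PGTABLE_ARMV7S_SELFTEST=y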
+2
drivers/iommu/Makefile
··· 3 3 obj-$(CONFIG_IOMMU_API) += iommu-sysfs.o 4 4 obj-$(CONFIG_IOMMU_DMA) += dma-iommu.o 5 5 obj-$(CONFIG_IOMMU_IO_PGTABLE) += io-pgtable.o 6 + obj-$(CONFIG_IOMMU_IO_PGTABLE_ARMV7S) += io-pgtable-arm-v7s.o 6 7 obj-$(CONFIG_IOMMU_IO_PGTABLE_LPAE) += io-pgtable-arm.o 7 8 obj-$(CONFIG_IOMMU_IOVA) += iova.o 8 9 obj-$(CONFIG_OF_IOMMU) += of_iommu.o ··· 17 16 obj-$(CONFIG_INTEL_IOMMU_SVM) += intel-svm.o 18 17 obj-$(CONFIG_IPMMU_VMSA) += ipmmu-vmsa.o 19 18 obj-$(CONFIG_IRQ_REMAP) += intel_irq_remapping.o irq_remapping.o 19 + obj-$(CONFIG_MTK_IOMMU) += mtk_iommu.o 20 20 obj-$(CONFIG_OMAP_IOMMU) += omap-iommu.o 21 21 obj-$(CONFIG_OMAP_IOMMU_DEBUG) += omap-iommu-debug.o 22 22 obj-$(CONFIG_ROCKCHIP_IOMMU) += rockchip-iommu.o
+27 -23
drivers/iommu/arm-smmu-v3.c
··· 21 21 */ 22 22 23 23 #include <linux/delay.h> 24 + #include <linux/dma-iommu.h> 24 25 #include <linux/err.h> 25 26 #include <linux/interrupt.h> 26 27 #include <linux/iommu.h> ··· 1397 1396 { 1398 1397 struct arm_smmu_domain *smmu_domain; 1399 1398 1400 - if (type != IOMMU_DOMAIN_UNMANAGED) 1399 + if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA) 1401 1400 return NULL; 1402 1401 1403 1402 /* ··· 1408 1407 smmu_domain = kzalloc(sizeof(*smmu_domain), GFP_KERNEL); 1409 1408 if (!smmu_domain) 1410 1409 return NULL; 1410 + 1411 + if (type == IOMMU_DOMAIN_DMA && 1412 + iommu_get_dma_cookie(&smmu_domain->domain)) { 1413 + kfree(smmu_domain); 1414 + return NULL; 1415 + } 1411 1416 1412 1417 mutex_init(&smmu_domain->init_mutex); 1413 1418 spin_lock_init(&smmu_domain->pgtbl_lock); ··· 1443 1436 struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); 1444 1437 struct arm_smmu_device *smmu = smmu_domain->smmu; 1445 1438 1439 + iommu_put_dma_cookie(domain); 1446 1440 free_io_pgtable_ops(smmu_domain->pgtbl_ops); 1447 1441 1448 1442 /* Free the CD and ASID, if we allocated them */ ··· 1638 1630 return 0; 1639 1631 } 1640 1632 1633 + static void arm_smmu_detach_dev(struct device *dev) 1634 + { 1635 + struct arm_smmu_group *smmu_group = arm_smmu_group_get(dev); 1636 + 1637 + smmu_group->ste.bypass = true; 1638 + if (IS_ERR_VALUE(arm_smmu_install_ste_for_group(smmu_group))) 1639 + dev_warn(dev, "failed to install bypass STE\n"); 1640 + 1641 + smmu_group->domain = NULL; 1642 + } 1643 + 1641 1644 static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev) 1642 1645 { 1643 1646 int ret = 0; ··· 1661 1642 1662 1643 /* Already attached to a different domain? */ 1663 1644 if (smmu_group->domain && smmu_group->domain != smmu_domain) 1664 - return -EEXIST; 1645 + arm_smmu_detach_dev(dev); 1665 1646 1666 1647 smmu = smmu_group->smmu; 1667 1648 mutex_lock(&smmu_domain->init_mutex); ··· 1687 1668 goto out_unlock; 1688 1669 1689 1670 smmu_group->domain = smmu_domain; 1690 - smmu_group->ste.bypass = false; 1671 + 1672 + /* 1673 + * FIXME: This should always be "false" once we have IOMMU-backed 1674 + * DMA ops for all devices behind the SMMU. 1675 + */ 1676 + smmu_group->ste.bypass = domain->type == IOMMU_DOMAIN_DMA; 1691 1677 1692 1678 ret = arm_smmu_install_ste_for_group(smmu_group); 1693 1679 if (IS_ERR_VALUE(ret)) ··· 1701 1677 out_unlock: 1702 1678 mutex_unlock(&smmu_domain->init_mutex); 1703 1679 return ret; 1704 - } 1705 - 1706 - static void arm_smmu_detach_dev(struct iommu_domain *domain, struct device *dev) 1707 - { 1708 - struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); 1709 - struct arm_smmu_group *smmu_group = arm_smmu_group_get(dev); 1710 - 1711 - BUG_ON(!smmu_domain); 1712 - BUG_ON(!smmu_group); 1713 - 1714 - mutex_lock(&smmu_domain->init_mutex); 1715 - BUG_ON(smmu_group->domain != smmu_domain); 1716 - 1717 - smmu_group->ste.bypass = true; 1718 - if (IS_ERR_VALUE(arm_smmu_install_ste_for_group(smmu_group))) 1719 - dev_warn(dev, "failed to install bypass STE\n"); 1720 - 1721 - smmu_group->domain = NULL; 1722 - mutex_unlock(&smmu_domain->init_mutex); 1723 1680 } 1724 1681 1725 1682 static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova, ··· 1940 1935 .domain_alloc = arm_smmu_domain_alloc, 1941 1936 .domain_free = arm_smmu_domain_free, 1942 1937 .attach_dev = arm_smmu_attach_dev, 1943 - .detach_dev = arm_smmu_detach_dev, 1944 1938 .map = arm_smmu_map, 1945 1939 .unmap = arm_smmu_unmap, 1946 1940 .iova_to_phys = arm_smmu_iova_to_phys,
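The cookie calls added above follow the standard pattern for a driver gaining IOMMU_DOMAIN_DMA support; a stripped-down sketch of that pattern (driver-specific domain state omitted, so this is illustrative rather than the SMMUv3 code itself):

	#include <linux/dma-iommu.h>
	#include <linux/iommu.h>
	#include <linux/slab.h>

	static struct iommu_domain *example_domain_alloc(unsigned type)
	{
		struct iommu_domain *dom;

		if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA)
			return NULL;

		dom = kzalloc(sizeof(*dom), GFP_KERNEL);
		if (!dom)
			return NULL;

		/* DMA domains carry IOVA allocator state (the "cookie") */
		if (type == IOMMU_DOMAIN_DMA && iommu_get_dma_cookie(dom)) {
			kfree(dom);
			return NULL;
		}
		return dom;
	}

	static void example_domain_free(struct iommu_domain *dom)
	{
		iommu_put_dma_cookie(dom);	/* no-op for non-DMA domains */
		kfree(dom);
	}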
+50 -29
drivers/iommu/arm-smmu.c
··· 29 29 #define pr_fmt(fmt) "arm-smmu: " fmt 30 30 31 31 #include <linux/delay.h> 32 + #include <linux/dma-iommu.h> 32 33 #include <linux/dma-mapping.h> 33 34 #include <linux/err.h> 34 35 #include <linux/interrupt.h> ··· 168 167 #define S2CR_TYPE_BYPASS (1 << S2CR_TYPE_SHIFT) 169 168 #define S2CR_TYPE_FAULT (2 << S2CR_TYPE_SHIFT) 170 169 170 + #define S2CR_PRIVCFG_SHIFT 24 171 + #define S2CR_PRIVCFG_UNPRIV (2 << S2CR_PRIVCFG_SHIFT) 172 + 171 173 /* Context bank attribute registers */ 172 174 #define ARM_SMMU_GR1_CBAR(n) (0x0 + ((n) << 2)) 173 175 #define CBAR_VMID_SHIFT 0 ··· 261 257 #define FSYNR0_WNR (1 << 4) 262 258 263 259 static int force_stage; 264 - module_param_named(force_stage, force_stage, int, S_IRUGO); 260 + module_param(force_stage, int, S_IRUGO); 265 261 MODULE_PARM_DESC(force_stage, 266 262 "Force SMMU mappings to be installed at a particular stage of translation. A value of '1' or '2' forces the corresponding stage. All other values are ignored (i.e. no stage is forced). Note that selecting a specific stage will disable support for nested translation."); 263 + static bool disable_bypass; 264 + module_param(disable_bypass, bool, S_IRUGO); 265 + MODULE_PARM_DESC(disable_bypass, 266 + "Disable bypass streams such that incoming transactions from devices that are not attached to an iommu domain will report an abort back to the device and will not be allowed to pass through the SMMU."); 267 267 268 268 enum arm_smmu_arch_version { 269 269 ARM_SMMU_V1 = 1, ··· 971 963 { 972 964 struct arm_smmu_domain *smmu_domain; 973 965 974 - if (type != IOMMU_DOMAIN_UNMANAGED) 966 + if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA) 975 967 return NULL; 976 968 /* 977 969 * Allocate the domain and initialise some of its data structures. ··· 981 973 smmu_domain = kzalloc(sizeof(*smmu_domain), GFP_KERNEL); 982 974 if (!smmu_domain) 983 975 return NULL; 976 + 977 + if (type == IOMMU_DOMAIN_DMA && 978 + iommu_get_dma_cookie(&smmu_domain->domain)) { 979 + kfree(smmu_domain); 980 + return NULL; 981 + } 984 982 985 983 mutex_init(&smmu_domain->init_mutex); 986 984 spin_lock_init(&smmu_domain->pgtbl_lock); ··· 1002 988 * Free the domain resources. We assume that all devices have 1003 989 * already been detached. 1004 990 */ 991 + iommu_put_dma_cookie(domain); 1005 992 arm_smmu_destroy_domain_context(domain); 1006 993 kfree(smmu_domain); 1007 994 } ··· 1094 1079 if (ret) 1095 1080 return ret == -EEXIST ? 0 : ret; 1096 1081 1082 + /* 1083 + * FIXME: This won't be needed once we have IOMMU-backed DMA ops 1084 + * for all devices behind the SMMU. 1085 + */ 1086 + if (smmu_domain->domain.type == IOMMU_DOMAIN_DMA) 1087 + return 0; 1088 + 1097 1089 for (i = 0; i < cfg->num_streamids; ++i) { 1098 1090 u32 idx, s2cr; 1099 1091 1100 1092 idx = cfg->smrs ? cfg->smrs[i].idx : cfg->streamids[i]; 1101 - s2cr = S2CR_TYPE_TRANS | 1093 + s2cr = S2CR_TYPE_TRANS | S2CR_PRIVCFG_UNPRIV | 1102 1094 (smmu_domain->cfg.cbndx << S2CR_CBNDX_SHIFT); 1103 1095 writel_relaxed(s2cr, gr0_base + ARM_SMMU_GR0_S2CR(idx)); 1104 1096 } ··· 1130 1108 */ 1131 1109 for (i = 0; i < cfg->num_streamids; ++i) { 1132 1110 u32 idx = cfg->smrs ? cfg->smrs[i].idx : cfg->streamids[i]; 1111 + u32 reg = disable_bypass ? 
S2CR_TYPE_FAULT : S2CR_TYPE_BYPASS; 1133 1112 1134 - writel_relaxed(S2CR_TYPE_BYPASS, 1135 - gr0_base + ARM_SMMU_GR0_S2CR(idx)); 1113 + writel_relaxed(reg, gr0_base + ARM_SMMU_GR0_S2CR(idx)); 1136 1114 } 1137 1115 1138 1116 arm_smmu_master_free_smrs(smmu, cfg); 1117 + } 1118 + 1119 + static void arm_smmu_detach_dev(struct device *dev, 1120 + struct arm_smmu_master_cfg *cfg) 1121 + { 1122 + struct iommu_domain *domain = dev->archdata.iommu; 1123 + struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); 1124 + 1125 + dev->archdata.iommu = NULL; 1126 + arm_smmu_domain_remove_master(smmu_domain, cfg); 1139 1127 } 1140 1128 1141 1129 static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev) ··· 1159 1127 if (!smmu) { 1160 1128 dev_err(dev, "cannot attach to SMMU, is it on the same bus?\n"); 1161 1129 return -ENXIO; 1162 - } 1163 - 1164 - if (dev->archdata.iommu) { 1165 - dev_err(dev, "already attached to IOMMU domain\n"); 1166 - return -EEXIST; 1167 1130 } 1168 1131 1169 1132 /* Ensure that the domain is finalised */ ··· 1182 1155 if (!cfg) 1183 1156 return -ENODEV; 1184 1157 1158 + /* Detach the dev from its current domain */ 1159 + if (dev->archdata.iommu) 1160 + arm_smmu_detach_dev(dev, cfg); 1161 + 1185 1162 ret = arm_smmu_domain_add_master(smmu_domain, cfg); 1186 1163 if (!ret) 1187 1164 dev->archdata.iommu = domain; 1188 1165 return ret; 1189 - } 1190 - 1191 - static void arm_smmu_detach_dev(struct iommu_domain *domain, struct device *dev) 1192 - { 1193 - struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); 1194 - struct arm_smmu_master_cfg *cfg; 1195 - 1196 - cfg = find_smmu_master_cfg(dev); 1197 - if (!cfg) 1198 - return; 1199 - 1200 - dev->archdata.iommu = NULL; 1201 - arm_smmu_domain_remove_master(smmu_domain, cfg); 1202 1166 } 1203 1167 1204 1168 static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova, ··· 1467 1449 .domain_alloc = arm_smmu_domain_alloc, 1468 1450 .domain_free = arm_smmu_domain_free, 1469 1451 .attach_dev = arm_smmu_attach_dev, 1470 - .detach_dev = arm_smmu_detach_dev, 1471 1452 .map = arm_smmu_map, 1472 1453 .unmap = arm_smmu_unmap, 1473 1454 .map_sg = default_iommu_map_sg, ··· 1490 1473 reg = readl_relaxed(ARM_SMMU_GR0_NS(smmu) + ARM_SMMU_GR0_sGFSR); 1491 1474 writel(reg, ARM_SMMU_GR0_NS(smmu) + ARM_SMMU_GR0_sGFSR); 1492 1475 1493 - /* Mark all SMRn as invalid and all S2CRn as bypass */ 1476 + /* Mark all SMRn as invalid and all S2CRn as bypass unless overridden */ 1477 + reg = disable_bypass ? S2CR_TYPE_FAULT : S2CR_TYPE_BYPASS; 1494 1478 for (i = 0; i < smmu->num_mapping_groups; ++i) { 1495 1479 writel_relaxed(0, gr0_base + ARM_SMMU_GR0_SMR(i)); 1496 - writel_relaxed(S2CR_TYPE_BYPASS, 1497 - gr0_base + ARM_SMMU_GR0_S2CR(i)); 1480 + writel_relaxed(reg, gr0_base + ARM_SMMU_GR0_S2CR(i)); 1498 1481 } 1499 1482 1500 1483 /* Make sure all context banks are disabled and clear CB_FSR */ ··· 1516 1499 /* Disable TLB broadcasting. */ 1517 1500 reg |= (sCR0_VMIDPNE | sCR0_PTM); 1518 1501 1519 - /* Enable client access, but bypass when no mapping is found */ 1520 - reg &= ~(sCR0_CLIENTPD | sCR0_USFCFG); 1502 + /* Enable client access, handling unmatched streams as appropriate */ 1503 + reg &= ~sCR0_CLIENTPD; 1504 + if (disable_bypass) 1505 + reg |= sCR0_USFCFG; 1506 + else 1507 + reg &= ~sCR0_USFCFG; 1521 1508 1522 1509 /* Disable forced broadcasting */ 1523 1510 reg &= ~sCR0_FB;
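Both knobs are plain module parameters, so with the driver built in they are set from the kernel command line, e.g.:

	arm-smmu.disable_bypass=1 arm-smmu.force_stage=2

With disable_bypass set, transactions from stream IDs that are not attached to any domain are made to fault (S2CR_TYPE_FAULT, and sCR0_USFCFG for unmatched streams) instead of passing through the SMMU untranslated.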
+362 -250
drivers/iommu/exynos-iommu.c
··· 1 - /* linux/drivers/iommu/exynos_iommu.c 2 - * 3 - * Copyright (c) 2011 Samsung Electronics Co., Ltd. 1 + /* 2 + * Copyright (c) 2011,2016 Samsung Electronics Co., Ltd. 4 3 * http://www.samsung.com 5 4 * 6 5 * This program is free software; you can redistribute it and/or modify ··· 24 25 #include <linux/platform_device.h> 25 26 #include <linux/pm_runtime.h> 26 27 #include <linux/slab.h> 27 - 28 - #include <asm/cacheflush.h> 29 - #include <asm/dma-iommu.h> 30 - #include <asm/pgtable.h> 28 + #include <linux/dma-iommu.h> 31 29 32 30 typedef u32 sysmmu_iova_t; 33 31 typedef u32 sysmmu_pte_t; ··· 54 58 #define lv2ent_small(pent) ((*(pent) & 2) == 2) 55 59 #define lv2ent_large(pent) ((*(pent) & 3) == 1) 56 60 57 - static u32 sysmmu_page_offset(sysmmu_iova_t iova, u32 size) 58 - { 59 - return iova & (size - 1); 60 - } 61 + /* 62 + * v1.x - v3.x SYSMMU supports 32bit physical and 32bit virtual address spaces 63 + * v5.0 introduced support for 36bit physical address space by shifting 64 + * all page entry values by 4 bits. 65 + * All SYSMMU controllers in the system support the address spaces of the same 66 + * size, so PG_ENT_SHIFT can be initialized on first SYSMMU probe to proper 67 + * value (0 or 4). 68 + */ 69 + static short PG_ENT_SHIFT = -1; 70 + #define SYSMMU_PG_ENT_SHIFT 0 71 + #define SYSMMU_V5_PG_ENT_SHIFT 4 61 72 62 - #define section_phys(sent) (*(sent) & SECT_MASK) 63 - #define section_offs(iova) sysmmu_page_offset((iova), SECT_SIZE) 64 - #define lpage_phys(pent) (*(pent) & LPAGE_MASK) 65 - #define lpage_offs(iova) sysmmu_page_offset((iova), LPAGE_SIZE) 66 - #define spage_phys(pent) (*(pent) & SPAGE_MASK) 67 - #define spage_offs(iova) sysmmu_page_offset((iova), SPAGE_SIZE) 73 + #define sect_to_phys(ent) (((phys_addr_t) ent) << PG_ENT_SHIFT) 74 + #define section_phys(sent) (sect_to_phys(*(sent)) & SECT_MASK) 75 + #define section_offs(iova) (iova & (SECT_SIZE - 1)) 76 + #define lpage_phys(pent) (sect_to_phys(*(pent)) & LPAGE_MASK) 77 + #define lpage_offs(iova) (iova & (LPAGE_SIZE - 1)) 78 + #define spage_phys(pent) (sect_to_phys(*(pent)) & SPAGE_MASK) 79 + #define spage_offs(iova) (iova & (SPAGE_SIZE - 1)) 68 80 69 81 #define NUM_LV1ENTRIES 4096 70 82 #define NUM_LV2ENTRIES (SECT_SIZE / SPAGE_SIZE) ··· 87 83 return (iova >> SPAGE_ORDER) & (NUM_LV2ENTRIES - 1); 88 84 } 89 85 86 + #define LV1TABLE_SIZE (NUM_LV1ENTRIES * sizeof(sysmmu_pte_t)) 90 87 #define LV2TABLE_SIZE (NUM_LV2ENTRIES * sizeof(sysmmu_pte_t)) 91 88 92 89 #define SPAGES_PER_LPAGE (LPAGE_SIZE / SPAGE_SIZE) 90 + #define lv2table_base(sent) (sect_to_phys(*(sent) & 0xFFFFFFC0)) 93 91 94 - #define lv2table_base(sent) (*(sent) & 0xFFFFFC00) 95 - 96 - #define mk_lv1ent_sect(pa) ((pa) | 2) 97 - #define mk_lv1ent_page(pa) ((pa) | 1) 98 - #define mk_lv2ent_lpage(pa) ((pa) | 1) 99 - #define mk_lv2ent_spage(pa) ((pa) | 2) 92 + #define mk_lv1ent_sect(pa) ((pa >> PG_ENT_SHIFT) | 2) 93 + #define mk_lv1ent_page(pa) ((pa >> PG_ENT_SHIFT) | 1) 94 + #define mk_lv2ent_lpage(pa) ((pa >> PG_ENT_SHIFT) | 1) 95 + #define mk_lv2ent_spage(pa) ((pa >> PG_ENT_SHIFT) | 2) 100 96 101 97 #define CTRL_ENABLE 0x5 102 98 #define CTRL_BLOCK 0x7 ··· 104 100 105 101 #define CFG_LRU 0x1 106 102 #define CFG_QOS(n) ((n & 0xF) << 7) 107 - #define CFG_MASK 0x0150FFFF /* Selecting bit 0-15, 20, 22 and 24 */ 108 103 #define CFG_ACGEN (1 << 24) /* System MMU 3.3 only */ 109 104 #define CFG_SYSSEL (1 << 22) /* System MMU 3.2 only */ 110 105 #define CFG_FLPDCACHE (1 << 20) /* System MMU 3.2+ only */ 111 106 107 + /* common registers */ 112 108 #define REG_MMU_CTRL 
0x000 113 109 #define REG_MMU_CFG 0x004 114 110 #define REG_MMU_STATUS 0x008 111 + #define REG_MMU_VERSION 0x034 112 + 113 + #define MMU_MAJ_VER(val) ((val) >> 7) 114 + #define MMU_MIN_VER(val) ((val) & 0x7F) 115 + #define MMU_RAW_VER(reg) (((reg) >> 21) & ((1 << 11) - 1)) /* 11 bits */ 116 + 117 + #define MAKE_MMU_VER(maj, min) ((((maj) & 0xF) << 7) | ((min) & 0x7F)) 118 + 119 + /* v1.x - v3.x registers */ 115 120 #define REG_MMU_FLUSH 0x00C 116 121 #define REG_MMU_FLUSH_ENTRY 0x010 117 122 #define REG_PT_BASE_ADDR 0x014 ··· 132 119 #define REG_AR_FAULT_ADDR 0x02C 133 120 #define REG_DEFAULT_SLAVE_ADDR 0x030 134 121 135 - #define REG_MMU_VERSION 0x034 136 - 137 - #define MMU_MAJ_VER(val) ((val) >> 7) 138 - #define MMU_MIN_VER(val) ((val) & 0x7F) 139 - #define MMU_RAW_VER(reg) (((reg) >> 21) & ((1 << 11) - 1)) /* 11 bits */ 140 - 141 - #define MAKE_MMU_VER(maj, min) ((((maj) & 0xF) << 7) | ((min) & 0x7F)) 142 - 143 - #define REG_PB0_SADDR 0x04C 144 - #define REG_PB0_EADDR 0x050 145 - #define REG_PB1_SADDR 0x054 146 - #define REG_PB1_EADDR 0x058 122 + /* v5.x registers */ 123 + #define REG_V5_PT_BASE_PFN 0x00C 124 + #define REG_V5_MMU_FLUSH_ALL 0x010 125 + #define REG_V5_MMU_FLUSH_ENTRY 0x014 126 + #define REG_V5_INT_STATUS 0x060 127 + #define REG_V5_INT_CLEAR 0x064 128 + #define REG_V5_FAULT_AR_VA 0x070 129 + #define REG_V5_FAULT_AW_VA 0x080 147 130 148 131 #define has_sysmmu(dev) (dev->archdata.iommu != NULL) 149 132 133 + static struct device *dma_dev; 150 134 static struct kmem_cache *lv2table_kmem_cache; 151 135 static sysmmu_pte_t *zero_lv2_table; 152 136 #define ZERO_LV2LINK mk_lv1ent_page(virt_to_phys(zero_lv2_table)) ··· 159 149 lv2table_base(sent)) + lv2ent_offset(iova); 160 150 } 161 151 162 - enum exynos_sysmmu_inttype { 163 - SYSMMU_PAGEFAULT, 164 - SYSMMU_AR_MULTIHIT, 165 - SYSMMU_AW_MULTIHIT, 166 - SYSMMU_BUSERROR, 167 - SYSMMU_AR_SECURITY, 168 - SYSMMU_AR_ACCESS, 169 - SYSMMU_AW_SECURITY, 170 - SYSMMU_AW_PROTECTION, /* 7 */ 171 - SYSMMU_FAULT_UNKNOWN, 172 - SYSMMU_FAULTS_NUM 152 + /* 153 + * IOMMU fault information register 154 + */ 155 + struct sysmmu_fault_info { 156 + unsigned int bit; /* bit number in STATUS register */ 157 + unsigned short addr_reg; /* register to read VA fault address */ 158 + const char *name; /* human readable fault name */ 159 + unsigned int type; /* fault type for report_iommu_fault */ 173 160 }; 174 161 175 - static unsigned short fault_reg_offset[SYSMMU_FAULTS_NUM] = { 176 - REG_PAGE_FAULT_ADDR, 177 - REG_AR_FAULT_ADDR, 178 - REG_AW_FAULT_ADDR, 179 - REG_DEFAULT_SLAVE_ADDR, 180 - REG_AR_FAULT_ADDR, 181 - REG_AR_FAULT_ADDR, 182 - REG_AW_FAULT_ADDR, 183 - REG_AW_FAULT_ADDR 162 + static const struct sysmmu_fault_info sysmmu_faults[] = { 163 + { 0, REG_PAGE_FAULT_ADDR, "PAGE", IOMMU_FAULT_READ }, 164 + { 1, REG_AR_FAULT_ADDR, "AR MULTI-HIT", IOMMU_FAULT_READ }, 165 + { 2, REG_AW_FAULT_ADDR, "AW MULTI-HIT", IOMMU_FAULT_WRITE }, 166 + { 3, REG_DEFAULT_SLAVE_ADDR, "BUS ERROR", IOMMU_FAULT_READ }, 167 + { 4, REG_AR_FAULT_ADDR, "AR SECURITY PROTECTION", IOMMU_FAULT_READ }, 168 + { 5, REG_AR_FAULT_ADDR, "AR ACCESS PROTECTION", IOMMU_FAULT_READ }, 169 + { 6, REG_AW_FAULT_ADDR, "AW SECURITY PROTECTION", IOMMU_FAULT_WRITE }, 170 + { 7, REG_AW_FAULT_ADDR, "AW ACCESS PROTECTION", IOMMU_FAULT_WRITE }, 184 171 }; 185 172 186 - static char *sysmmu_fault_name[SYSMMU_FAULTS_NUM] = { 187 - "PAGE FAULT", 188 - "AR MULTI-HIT FAULT", 189 - "AW MULTI-HIT FAULT", 190 - "BUS ERROR", 191 - "AR SECURITY PROTECTION FAULT", 192 - "AR ACCESS PROTECTION FAULT", 193 - "AW 
SECURITY PROTECTION FAULT", 194 - "AW ACCESS PROTECTION FAULT", 195 - "UNKNOWN FAULT" 173 + static const struct sysmmu_fault_info sysmmu_v5_faults[] = { 174 + { 0, REG_V5_FAULT_AR_VA, "AR PTW", IOMMU_FAULT_READ }, 175 + { 1, REG_V5_FAULT_AR_VA, "AR PAGE", IOMMU_FAULT_READ }, 176 + { 2, REG_V5_FAULT_AR_VA, "AR MULTI-HIT", IOMMU_FAULT_READ }, 177 + { 3, REG_V5_FAULT_AR_VA, "AR ACCESS PROTECTION", IOMMU_FAULT_READ }, 178 + { 4, REG_V5_FAULT_AR_VA, "AR SECURITY PROTECTION", IOMMU_FAULT_READ }, 179 + { 16, REG_V5_FAULT_AW_VA, "AW PTW", IOMMU_FAULT_WRITE }, 180 + { 17, REG_V5_FAULT_AW_VA, "AW PAGE", IOMMU_FAULT_WRITE }, 181 + { 18, REG_V5_FAULT_AW_VA, "AW MULTI-HIT", IOMMU_FAULT_WRITE }, 182 + { 19, REG_V5_FAULT_AW_VA, "AW ACCESS PROTECTION", IOMMU_FAULT_WRITE }, 183 + { 20, REG_V5_FAULT_AW_VA, "AW SECURITY PROTECTION", IOMMU_FAULT_WRITE }, 196 184 }; 197 185 198 186 /* ··· 201 193 */ 202 194 struct exynos_iommu_owner { 203 195 struct list_head controllers; /* list of sysmmu_drvdata.owner_node */ 196 + struct iommu_domain *domain; /* domain this device is attached */ 204 197 }; 205 198 206 199 /* ··· 230 221 struct device *master; /* master device (owner) */ 231 222 void __iomem *sfrbase; /* our registers */ 232 223 struct clk *clk; /* SYSMMU's clock */ 224 + struct clk *aclk; /* SYSMMU's aclk clock */ 225 + struct clk *pclk; /* SYSMMU's pclk clock */ 233 226 struct clk *clk_master; /* master's device clock */ 234 227 int activations; /* number of calls to sysmmu_enable */ 235 228 spinlock_t lock; /* lock for modyfying state */ ··· 266 255 return data->activations > 0; 267 256 } 268 257 269 - static void sysmmu_unblock(void __iomem *sfrbase) 258 + static void sysmmu_unblock(struct sysmmu_drvdata *data) 270 259 { 271 - __raw_writel(CTRL_ENABLE, sfrbase + REG_MMU_CTRL); 260 + writel(CTRL_ENABLE, data->sfrbase + REG_MMU_CTRL); 272 261 } 273 262 274 - static bool sysmmu_block(void __iomem *sfrbase) 263 + static bool sysmmu_block(struct sysmmu_drvdata *data) 275 264 { 276 265 int i = 120; 277 266 278 - __raw_writel(CTRL_BLOCK, sfrbase + REG_MMU_CTRL); 279 - while ((i > 0) && !(__raw_readl(sfrbase + REG_MMU_STATUS) & 1)) 267 + writel(CTRL_BLOCK, data->sfrbase + REG_MMU_CTRL); 268 + while ((i > 0) && !(readl(data->sfrbase + REG_MMU_STATUS) & 1)) 280 269 --i; 281 270 282 - if (!(__raw_readl(sfrbase + REG_MMU_STATUS) & 1)) { 283 - sysmmu_unblock(sfrbase); 271 + if (!(readl(data->sfrbase + REG_MMU_STATUS) & 1)) { 272 + sysmmu_unblock(data); 284 273 return false; 285 274 } 286 275 287 276 return true; 288 277 } 289 278 290 - static void __sysmmu_tlb_invalidate(void __iomem *sfrbase) 279 + static void __sysmmu_tlb_invalidate(struct sysmmu_drvdata *data) 291 280 { 292 - __raw_writel(0x1, sfrbase + REG_MMU_FLUSH); 281 + if (MMU_MAJ_VER(data->version) < 5) 282 + writel(0x1, data->sfrbase + REG_MMU_FLUSH); 283 + else 284 + writel(0x1, data->sfrbase + REG_V5_MMU_FLUSH_ALL); 293 285 } 294 286 295 - static void __sysmmu_tlb_invalidate_entry(void __iomem *sfrbase, 287 + static void __sysmmu_tlb_invalidate_entry(struct sysmmu_drvdata *data, 296 288 sysmmu_iova_t iova, unsigned int num_inv) 297 289 { 298 290 unsigned int i; 299 291 300 292 for (i = 0; i < num_inv; i++) { 301 - __raw_writel((iova & SPAGE_MASK) | 1, 302 - sfrbase + REG_MMU_FLUSH_ENTRY); 293 + if (MMU_MAJ_VER(data->version) < 5) 294 + writel((iova & SPAGE_MASK) | 1, 295 + data->sfrbase + REG_MMU_FLUSH_ENTRY); 296 + else 297 + writel((iova & SPAGE_MASK) | 1, 298 + data->sfrbase + REG_V5_MMU_FLUSH_ENTRY); 303 299 iova += SPAGE_SIZE; 304 300 } 305 301 } 
306 302 307 - static void __sysmmu_set_ptbase(void __iomem *sfrbase, 308 - phys_addr_t pgd) 303 + static void __sysmmu_set_ptbase(struct sysmmu_drvdata *data, phys_addr_t pgd) 309 304 { 310 - __raw_writel(pgd, sfrbase + REG_PT_BASE_ADDR); 305 + if (MMU_MAJ_VER(data->version) < 5) 306 + writel(pgd, data->sfrbase + REG_PT_BASE_ADDR); 307 + else 308 + writel(pgd >> PAGE_SHIFT, 309 + data->sfrbase + REG_V5_PT_BASE_PFN); 311 310 312 - __sysmmu_tlb_invalidate(sfrbase); 311 + __sysmmu_tlb_invalidate(data); 313 312 } 314 313 315 - static void show_fault_information(const char *name, 316 - enum exynos_sysmmu_inttype itype, 317 - phys_addr_t pgtable_base, sysmmu_iova_t fault_addr) 314 + static void __sysmmu_get_version(struct sysmmu_drvdata *data) 315 + { 316 + u32 ver; 317 + 318 + clk_enable(data->clk_master); 319 + clk_enable(data->clk); 320 + clk_enable(data->pclk); 321 + clk_enable(data->aclk); 322 + 323 + ver = readl(data->sfrbase + REG_MMU_VERSION); 324 + 325 + /* controllers on some SoCs don't report proper version */ 326 + if (ver == 0x80000001u) 327 + data->version = MAKE_MMU_VER(1, 0); 328 + else 329 + data->version = MMU_RAW_VER(ver); 330 + 331 + dev_dbg(data->sysmmu, "hardware version: %d.%d\n", 332 + MMU_MAJ_VER(data->version), MMU_MIN_VER(data->version)); 333 + 334 + clk_disable(data->aclk); 335 + clk_disable(data->pclk); 336 + clk_disable(data->clk); 337 + clk_disable(data->clk_master); 338 + } 339 + 340 + static void show_fault_information(struct sysmmu_drvdata *data, 341 + const struct sysmmu_fault_info *finfo, 342 + sysmmu_iova_t fault_addr) 318 343 { 319 344 sysmmu_pte_t *ent; 320 345 321 - if ((itype >= SYSMMU_FAULTS_NUM) || (itype < SYSMMU_PAGEFAULT)) 322 - itype = SYSMMU_FAULT_UNKNOWN; 323 - 324 - pr_err("%s occurred at %#x by %s(Page table base: %pa)\n", 325 - sysmmu_fault_name[itype], fault_addr, name, &pgtable_base); 326 - 327 - ent = section_entry(phys_to_virt(pgtable_base), fault_addr); 328 - pr_err("\tLv1 entry: %#x\n", *ent); 329 - 346 + dev_err(data->sysmmu, "%s FAULT occurred at %#x (page table base: %pa)\n", 347 + finfo->name, fault_addr, &data->pgtable); 348 + ent = section_entry(phys_to_virt(data->pgtable), fault_addr); 349 + dev_err(data->sysmmu, "\tLv1 entry: %#x\n", *ent); 330 350 if (lv1ent_page(ent)) { 331 351 ent = page_entry(ent, fault_addr); 332 - pr_err("\t Lv2 entry: %#x\n", *ent); 352 + dev_err(data->sysmmu, "\t Lv2 entry: %#x\n", *ent); 333 353 } 334 354 } 335 355 ··· 368 326 { 369 327 /* SYSMMU is in blocked state when interrupt occurred. 
*/ 370 328 struct sysmmu_drvdata *data = dev_id; 371 - enum exynos_sysmmu_inttype itype; 372 - sysmmu_iova_t addr = -1; 329 + const struct sysmmu_fault_info *finfo; 330 + unsigned int i, n, itype; 331 + sysmmu_iova_t fault_addr = -1; 332 + unsigned short reg_status, reg_clear; 373 333 int ret = -ENOSYS; 374 334 375 335 WARN_ON(!is_sysmmu_active(data)); 376 336 377 - spin_lock(&data->lock); 378 - 379 - if (!IS_ERR(data->clk_master)) 380 - clk_enable(data->clk_master); 381 - 382 - itype = (enum exynos_sysmmu_inttype) 383 - __ffs(__raw_readl(data->sfrbase + REG_INT_STATUS)); 384 - if (WARN_ON(!((itype >= 0) && (itype < SYSMMU_FAULT_UNKNOWN)))) 385 - itype = SYSMMU_FAULT_UNKNOWN; 386 - else 387 - addr = __raw_readl(data->sfrbase + fault_reg_offset[itype]); 388 - 389 - if (itype == SYSMMU_FAULT_UNKNOWN) { 390 - pr_err("%s: Fault is not occurred by System MMU '%s'!\n", 391 - __func__, dev_name(data->sysmmu)); 392 - pr_err("%s: Please check if IRQ is correctly configured.\n", 393 - __func__); 394 - BUG(); 337 + if (MMU_MAJ_VER(data->version) < 5) { 338 + reg_status = REG_INT_STATUS; 339 + reg_clear = REG_INT_CLEAR; 340 + finfo = sysmmu_faults; 341 + n = ARRAY_SIZE(sysmmu_faults); 395 342 } else { 396 - unsigned int base = 397 - __raw_readl(data->sfrbase + REG_PT_BASE_ADDR); 398 - show_fault_information(dev_name(data->sysmmu), 399 - itype, base, addr); 400 - if (data->domain) 401 - ret = report_iommu_fault(&data->domain->domain, 402 - data->master, addr, itype); 343 + reg_status = REG_V5_INT_STATUS; 344 + reg_clear = REG_V5_INT_CLEAR; 345 + finfo = sysmmu_v5_faults; 346 + n = ARRAY_SIZE(sysmmu_v5_faults); 403 347 } 404 348 349 + spin_lock(&data->lock); 350 + 351 + clk_enable(data->clk_master); 352 + 353 + itype = __ffs(readl(data->sfrbase + reg_status)); 354 + for (i = 0; i < n; i++, finfo++) 355 + if (finfo->bit == itype) 356 + break; 357 + /* unknown/unsupported fault */ 358 + BUG_ON(i == n); 359 + 360 + /* print debug message */ 361 + fault_addr = readl(data->sfrbase + finfo->addr_reg); 362 + show_fault_information(data, finfo, fault_addr); 363 + 364 + if (data->domain) 365 + ret = report_iommu_fault(&data->domain->domain, 366 + data->master, fault_addr, finfo->type); 405 367 /* fault is not recovered by fault handler */ 406 368 BUG_ON(ret != 0); 407 369 408 - __raw_writel(1 << itype, data->sfrbase + REG_INT_CLEAR); 370 + writel(1 << itype, data->sfrbase + reg_clear); 409 371 410 - sysmmu_unblock(data->sfrbase); 372 + sysmmu_unblock(data); 411 373 412 - if (!IS_ERR(data->clk_master)) 413 - clk_disable(data->clk_master); 374 + clk_disable(data->clk_master); 414 375 415 376 spin_unlock(&data->lock); 416 377 ··· 422 377 423 378 static void __sysmmu_disable_nocount(struct sysmmu_drvdata *data) 424 379 { 425 - if (!IS_ERR(data->clk_master)) 426 - clk_enable(data->clk_master); 380 + clk_enable(data->clk_master); 427 381 428 - __raw_writel(CTRL_DISABLE, data->sfrbase + REG_MMU_CTRL); 429 - __raw_writel(0, data->sfrbase + REG_MMU_CFG); 382 + writel(CTRL_DISABLE, data->sfrbase + REG_MMU_CTRL); 383 + writel(0, data->sfrbase + REG_MMU_CFG); 430 384 385 + clk_disable(data->aclk); 386 + clk_disable(data->pclk); 431 387 clk_disable(data->clk); 432 - if (!IS_ERR(data->clk_master)) 433 - clk_disable(data->clk_master); 388 + clk_disable(data->clk_master); 434 389 } 435 390 436 391 static bool __sysmmu_disable(struct sysmmu_drvdata *data) ··· 461 416 462 417 static void __sysmmu_init_config(struct sysmmu_drvdata *data) 463 418 { 464 - unsigned int cfg = CFG_LRU | CFG_QOS(15); 465 - unsigned int ver; 419 + 
unsigned int cfg; 466 420 467 - ver = MMU_RAW_VER(__raw_readl(data->sfrbase + REG_MMU_VERSION)); 468 - if (MMU_MAJ_VER(ver) == 3) { 469 - if (MMU_MIN_VER(ver) >= 2) { 470 - cfg |= CFG_FLPDCACHE; 471 - if (MMU_MIN_VER(ver) == 3) { 472 - cfg |= CFG_ACGEN; 473 - cfg &= ~CFG_LRU; 474 - } else { 475 - cfg |= CFG_SYSSEL; 476 - } 477 - } 478 - } 421 + if (data->version <= MAKE_MMU_VER(3, 1)) 422 + cfg = CFG_LRU | CFG_QOS(15); 423 + else if (data->version <= MAKE_MMU_VER(3, 2)) 424 + cfg = CFG_LRU | CFG_QOS(15) | CFG_FLPDCACHE | CFG_SYSSEL; 425 + else 426 + cfg = CFG_QOS(15) | CFG_FLPDCACHE | CFG_ACGEN; 479 427 480 - __raw_writel(cfg, data->sfrbase + REG_MMU_CFG); 481 - data->version = ver; 428 + writel(cfg, data->sfrbase + REG_MMU_CFG); 482 429 } 483 430 484 431 static void __sysmmu_enable_nocount(struct sysmmu_drvdata *data) 485 432 { 486 - if (!IS_ERR(data->clk_master)) 487 - clk_enable(data->clk_master); 433 + clk_enable(data->clk_master); 488 434 clk_enable(data->clk); 435 + clk_enable(data->pclk); 436 + clk_enable(data->aclk); 489 437 490 - __raw_writel(CTRL_BLOCK, data->sfrbase + REG_MMU_CTRL); 438 + writel(CTRL_BLOCK, data->sfrbase + REG_MMU_CTRL); 491 439 492 440 __sysmmu_init_config(data); 493 441 494 - __sysmmu_set_ptbase(data->sfrbase, data->pgtable); 442 + __sysmmu_set_ptbase(data, data->pgtable); 495 443 496 - __raw_writel(CTRL_ENABLE, data->sfrbase + REG_MMU_CTRL); 444 + writel(CTRL_ENABLE, data->sfrbase + REG_MMU_CTRL); 497 445 498 - if (!IS_ERR(data->clk_master)) 499 - clk_disable(data->clk_master); 446 + clk_disable(data->clk_master); 500 447 } 501 448 502 449 static int __sysmmu_enable(struct sysmmu_drvdata *data, phys_addr_t pgtable, ··· 519 482 return ret; 520 483 } 521 484 522 - static void __sysmmu_tlb_invalidate_flpdcache(struct sysmmu_drvdata *data, 523 - sysmmu_iova_t iova) 524 - { 525 - if (data->version == MAKE_MMU_VER(3, 3)) 526 - __raw_writel(iova | 0x1, data->sfrbase + REG_MMU_FLUSH_ENTRY); 527 - } 528 - 529 485 static void sysmmu_tlb_invalidate_flpdcache(struct sysmmu_drvdata *data, 530 486 sysmmu_iova_t iova) 531 487 { 532 488 unsigned long flags; 533 489 534 - if (!IS_ERR(data->clk_master)) 535 - clk_enable(data->clk_master); 490 + clk_enable(data->clk_master); 536 491 537 492 spin_lock_irqsave(&data->lock, flags); 538 - if (is_sysmmu_active(data)) 539 - __sysmmu_tlb_invalidate_flpdcache(data, iova); 493 + if (is_sysmmu_active(data)) { 494 + if (data->version >= MAKE_MMU_VER(3, 3)) 495 + __sysmmu_tlb_invalidate_entry(data, iova, 1); 496 + } 540 497 spin_unlock_irqrestore(&data->lock, flags); 541 498 542 - if (!IS_ERR(data->clk_master)) 543 - clk_disable(data->clk_master); 499 + clk_disable(data->clk_master); 544 500 } 545 501 546 502 static void sysmmu_tlb_invalidate_entry(struct sysmmu_drvdata *data, ··· 545 515 if (is_sysmmu_active(data)) { 546 516 unsigned int num_inv = 1; 547 517 548 - if (!IS_ERR(data->clk_master)) 549 - clk_enable(data->clk_master); 518 + clk_enable(data->clk_master); 550 519 551 520 /* 552 521 * L2TLB invalidation required ··· 560 531 if (MMU_MAJ_VER(data->version) == 2) 561 532 num_inv = min_t(unsigned int, size / PAGE_SIZE, 64); 562 533 563 - if (sysmmu_block(data->sfrbase)) { 564 - __sysmmu_tlb_invalidate_entry( 565 - data->sfrbase, iova, num_inv); 566 - sysmmu_unblock(data->sfrbase); 534 + if (sysmmu_block(data)) { 535 + __sysmmu_tlb_invalidate_entry(data, iova, num_inv); 536 + sysmmu_unblock(data); 567 537 } 568 - if (!IS_ERR(data->clk_master)) 569 - clk_disable(data->clk_master); 538 + clk_disable(data->clk_master); 570 539 } else { 
571 540 dev_dbg(data->master, 572 541 "disabled. Skipping TLB invalidation @ %#x\n", iova); ··· 602 575 } 603 576 604 577 data->clk = devm_clk_get(dev, "sysmmu"); 605 - if (IS_ERR(data->clk)) { 606 - dev_err(dev, "Failed to get clock!\n"); 607 - return PTR_ERR(data->clk); 608 - } else { 578 + if (!IS_ERR(data->clk)) { 609 579 ret = clk_prepare(data->clk); 610 580 if (ret) { 611 581 dev_err(dev, "Failed to prepare clk\n"); 612 582 return ret; 613 583 } 584 + } else { 585 + data->clk = NULL; 586 + } 587 + 588 + data->aclk = devm_clk_get(dev, "aclk"); 589 + if (!IS_ERR(data->aclk)) { 590 + ret = clk_prepare(data->aclk); 591 + if (ret) { 592 + dev_err(dev, "Failed to prepare aclk\n"); 593 + return ret; 594 + } 595 + } else { 596 + data->aclk = NULL; 597 + } 598 + 599 + data->pclk = devm_clk_get(dev, "pclk"); 600 + if (!IS_ERR(data->pclk)) { 601 + ret = clk_prepare(data->pclk); 602 + if (ret) { 603 + dev_err(dev, "Failed to prepare pclk\n"); 604 + return ret; 605 + } 606 + } else { 607 + data->pclk = NULL; 608 + } 609 + 610 + if (!data->clk && (!data->aclk || !data->pclk)) { 611 + dev_err(dev, "Failed to get device clock(s)!\n"); 612 + return -ENOSYS; 614 613 } 615 614 616 615 data->clk_master = devm_clk_get(dev, "master"); 617 616 if (!IS_ERR(data->clk_master)) { 618 617 ret = clk_prepare(data->clk_master); 619 618 if (ret) { 620 - clk_unprepare(data->clk); 621 619 dev_err(dev, "Failed to prepare master's clk\n"); 622 620 return ret; 623 621 } 622 + } else { 623 + data->clk_master = NULL; 624 624 } 625 625 626 626 data->sysmmu = dev; 627 627 spin_lock_init(&data->lock); 628 628 629 629 platform_set_drvdata(pdev, data); 630 + 631 + __sysmmu_get_version(data); 632 + if (PG_ENT_SHIFT < 0) { 633 + if (MMU_MAJ_VER(data->version) < 5) 634 + PG_ENT_SHIFT = SYSMMU_PG_ENT_SHIFT; 635 + else 636 + PG_ENT_SHIFT = SYSMMU_V5_PG_ENT_SHIFT; 637 + } 630 638 631 639 pm_runtime_enable(dev); 632 640 ··· 712 650 } 713 651 }; 714 652 715 - static inline void pgtable_flush(void *vastart, void *vaend) 653 + static inline void update_pte(sysmmu_pte_t *ent, sysmmu_pte_t val) 716 654 { 717 - dmac_flush_range(vastart, vaend); 718 - outer_flush_range(virt_to_phys(vastart), 719 - virt_to_phys(vaend)); 655 + dma_sync_single_for_cpu(dma_dev, virt_to_phys(ent), sizeof(*ent), 656 + DMA_TO_DEVICE); 657 + *ent = val; 658 + dma_sync_single_for_device(dma_dev, virt_to_phys(ent), sizeof(*ent), 659 + DMA_TO_DEVICE); 720 660 } 721 661 722 662 static struct iommu_domain *exynos_iommu_domain_alloc(unsigned type) 723 663 { 724 664 struct exynos_iommu_domain *domain; 665 + dma_addr_t handle; 725 666 int i; 726 667 727 - if (type != IOMMU_DOMAIN_UNMANAGED) 728 - return NULL; 668 + /* Check if correct PTE offsets are initialized */ 669 + BUG_ON(PG_ENT_SHIFT < 0 || !dma_dev); 729 670 730 671 domain = kzalloc(sizeof(*domain), GFP_KERNEL); 731 672 if (!domain) 732 673 return NULL; 733 674 675 + if (type == IOMMU_DOMAIN_DMA) { 676 + if (iommu_get_dma_cookie(&domain->domain) != 0) 677 + goto err_pgtable; 678 + } else if (type != IOMMU_DOMAIN_UNMANAGED) { 679 + goto err_pgtable; 680 + } 681 + 734 682 domain->pgtable = (sysmmu_pte_t *)__get_free_pages(GFP_KERNEL, 2); 735 683 if (!domain->pgtable) 736 - goto err_pgtable; 684 + goto err_dma_cookie; 737 685 738 686 domain->lv2entcnt = (short *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, 1); 739 687 if (!domain->lv2entcnt) ··· 761 689 domain->pgtable[i + 7] = ZERO_LV2LINK; 762 690 } 763 691 764 - pgtable_flush(domain->pgtable, domain->pgtable + NUM_LV1ENTRIES); 692 + handle = dma_map_single(dma_dev, 
domain->pgtable, LV1TABLE_SIZE, 693 + DMA_TO_DEVICE); 694 + /* For mapping page table entries we rely on dma == phys */ 695 + BUG_ON(handle != virt_to_phys(domain->pgtable)); 765 696 766 697 spin_lock_init(&domain->lock); 767 698 spin_lock_init(&domain->pgtablelock); ··· 778 703 779 704 err_counter: 780 705 free_pages((unsigned long)domain->pgtable, 2); 706 + err_dma_cookie: 707 + if (type == IOMMU_DOMAIN_DMA) 708 + iommu_put_dma_cookie(&domain->domain); 781 709 err_pgtable: 782 710 kfree(domain); 783 711 return NULL; ··· 805 727 806 728 spin_unlock_irqrestore(&domain->lock, flags); 807 729 730 + if (iommu_domain->type == IOMMU_DOMAIN_DMA) 731 + iommu_put_dma_cookie(iommu_domain); 732 + 733 + dma_unmap_single(dma_dev, virt_to_phys(domain->pgtable), LV1TABLE_SIZE, 734 + DMA_TO_DEVICE); 735 + 808 736 for (i = 0; i < NUM_LV1ENTRIES; i++) 809 - if (lv1ent_page(domain->pgtable + i)) 737 + if (lv1ent_page(domain->pgtable + i)) { 738 + phys_addr_t base = lv2table_base(domain->pgtable + i); 739 + 740 + dma_unmap_single(dma_dev, base, LV2TABLE_SIZE, 741 + DMA_TO_DEVICE); 810 742 kmem_cache_free(lv2table_kmem_cache, 811 - phys_to_virt(lv2table_base(domain->pgtable + i))); 743 + phys_to_virt(base)); 744 + } 812 745 813 746 free_pages((unsigned long)domain->pgtable, 2); 814 747 free_pages((unsigned long)domain->lv2entcnt, 1); 815 748 kfree(domain); 749 + } 750 + 751 + static void exynos_iommu_detach_device(struct iommu_domain *iommu_domain, 752 + struct device *dev) 753 + { 754 + struct exynos_iommu_owner *owner = dev->archdata.iommu; 755 + struct exynos_iommu_domain *domain = to_exynos_domain(iommu_domain); 756 + phys_addr_t pagetable = virt_to_phys(domain->pgtable); 757 + struct sysmmu_drvdata *data, *next; 758 + unsigned long flags; 759 + bool found = false; 760 + 761 + if (!has_sysmmu(dev) || owner->domain != iommu_domain) 762 + return; 763 + 764 + spin_lock_irqsave(&domain->lock, flags); 765 + list_for_each_entry_safe(data, next, &domain->clients, domain_node) { 766 + if (data->master == dev) { 767 + if (__sysmmu_disable(data)) { 768 + data->master = NULL; 769 + list_del_init(&data->domain_node); 770 + } 771 + pm_runtime_put(data->sysmmu); 772 + found = true; 773 + } 774 + } 775 + spin_unlock_irqrestore(&domain->lock, flags); 776 + 777 + owner->domain = NULL; 778 + 779 + if (found) 780 + dev_dbg(dev, "%s: Detached IOMMU with pgtable %pa\n", 781 + __func__, &pagetable); 782 + else 783 + dev_err(dev, "%s: No IOMMU is attached\n", __func__); 816 784 } 817 785 818 786 static int exynos_iommu_attach_device(struct iommu_domain *iommu_domain, ··· 873 749 874 750 if (!has_sysmmu(dev)) 875 751 return -ENODEV; 752 + 753 + if (owner->domain) 754 + exynos_iommu_detach_device(owner->domain, dev); 876 755 877 756 list_for_each_entry(data, &owner->controllers, owner_node) { 878 757 pm_runtime_get_sync(data->sysmmu); ··· 895 768 return ret; 896 769 } 897 770 771 + owner->domain = iommu_domain; 898 772 dev_dbg(dev, "%s: Attached IOMMU with pgtable %pa %s\n", 899 773 __func__, &pagetable, (ret == 0) ? 
"" : ", again"); 900 774 901 775 return ret; 902 - } 903 - 904 - static void exynos_iommu_detach_device(struct iommu_domain *iommu_domain, 905 - struct device *dev) 906 - { 907 - struct exynos_iommu_domain *domain = to_exynos_domain(iommu_domain); 908 - phys_addr_t pagetable = virt_to_phys(domain->pgtable); 909 - struct sysmmu_drvdata *data, *next; 910 - unsigned long flags; 911 - bool found = false; 912 - 913 - if (!has_sysmmu(dev)) 914 - return; 915 - 916 - spin_lock_irqsave(&domain->lock, flags); 917 - list_for_each_entry_safe(data, next, &domain->clients, domain_node) { 918 - if (data->master == dev) { 919 - if (__sysmmu_disable(data)) { 920 - data->master = NULL; 921 - list_del_init(&data->domain_node); 922 - } 923 - pm_runtime_put(data->sysmmu); 924 - found = true; 925 - } 926 - } 927 - spin_unlock_irqrestore(&domain->lock, flags); 928 - 929 - if (found) 930 - dev_dbg(dev, "%s: Detached IOMMU with pgtable %pa\n", 931 - __func__, &pagetable); 932 - else 933 - dev_err(dev, "%s: No IOMMU is attached\n", __func__); 934 776 } 935 777 936 778 static sysmmu_pte_t *alloc_lv2entry(struct exynos_iommu_domain *domain, ··· 915 819 bool need_flush_flpd_cache = lv1ent_zero(sent); 916 820 917 821 pent = kmem_cache_zalloc(lv2table_kmem_cache, GFP_ATOMIC); 918 - BUG_ON((unsigned int)pent & (LV2TABLE_SIZE - 1)); 822 + BUG_ON((uintptr_t)pent & (LV2TABLE_SIZE - 1)); 919 823 if (!pent) 920 824 return ERR_PTR(-ENOMEM); 921 825 922 - *sent = mk_lv1ent_page(virt_to_phys(pent)); 826 + update_pte(sent, mk_lv1ent_page(virt_to_phys(pent))); 923 827 kmemleak_ignore(pent); 924 828 *pgcounter = NUM_LV2ENTRIES; 925 - pgtable_flush(pent, pent + NUM_LV2ENTRIES); 926 - pgtable_flush(sent, sent + 1); 829 + dma_map_single(dma_dev, pent, LV2TABLE_SIZE, DMA_TO_DEVICE); 927 830 928 831 /* 929 832 * If pre-fetched SLPD is a faulty SLPD in zero_l2_table, ··· 975 880 *pgcnt = 0; 976 881 } 977 882 978 - *sent = mk_lv1ent_sect(paddr); 979 - 980 - pgtable_flush(sent, sent + 1); 883 + update_pte(sent, mk_lv1ent_sect(paddr)); 981 884 982 885 spin_lock(&domain->lock); 983 886 if (lv1ent_page_zero(sent)) { ··· 999 906 if (WARN_ON(!lv2ent_fault(pent))) 1000 907 return -EADDRINUSE; 1001 908 1002 - *pent = mk_lv2ent_spage(paddr); 1003 - pgtable_flush(pent, pent + 1); 909 + update_pte(pent, mk_lv2ent_spage(paddr)); 1004 910 *pgcnt -= 1; 1005 911 } else { /* size == LPAGE_SIZE */ 1006 912 int i; 913 + dma_addr_t pent_base = virt_to_phys(pent); 1007 914 915 + dma_sync_single_for_cpu(dma_dev, pent_base, 916 + sizeof(*pent) * SPAGES_PER_LPAGE, 917 + DMA_TO_DEVICE); 1008 918 for (i = 0; i < SPAGES_PER_LPAGE; i++, pent++) { 1009 919 if (WARN_ON(!lv2ent_fault(pent))) { 1010 920 if (i > 0) ··· 1017 921 1018 922 *pent = mk_lv2ent_lpage(paddr); 1019 923 } 1020 - pgtable_flush(pent - SPAGES_PER_LPAGE, pent); 924 + dma_sync_single_for_device(dma_dev, pent_base, 925 + sizeof(*pent) * SPAGES_PER_LPAGE, 926 + DMA_TO_DEVICE); 1021 927 *pgcnt -= SPAGES_PER_LPAGE; 1022 928 } 1023 929 ··· 1129 1031 } 1130 1032 1131 1033 /* workaround for h/w bug in System MMU v3.3 */ 1132 - *ent = ZERO_LV2LINK; 1133 - pgtable_flush(ent, ent + 1); 1034 + update_pte(ent, ZERO_LV2LINK); 1134 1035 size = SECT_SIZE; 1135 1036 goto done; 1136 1037 } ··· 1150 1053 } 1151 1054 1152 1055 if (lv2ent_small(ent)) { 1153 - *ent = 0; 1056 + update_pte(ent, 0); 1154 1057 size = SPAGE_SIZE; 1155 - pgtable_flush(ent, ent + 1); 1156 1058 domain->lv2entcnt[lv1ent_offset(iova)] += 1; 1157 1059 goto done; 1158 1060 } ··· 1162 1066 goto err; 1163 1067 } 1164 1068 1069 + 
dma_sync_single_for_cpu(dma_dev, virt_to_phys(ent), 1070 + sizeof(*ent) * SPAGES_PER_LPAGE, 1071 + DMA_TO_DEVICE); 1165 1072 memset(ent, 0, sizeof(*ent) * SPAGES_PER_LPAGE); 1166 - pgtable_flush(ent, ent + SPAGES_PER_LPAGE); 1167 - 1073 + dma_sync_single_for_device(dma_dev, virt_to_phys(ent), 1074 + sizeof(*ent) * SPAGES_PER_LPAGE, 1075 + DMA_TO_DEVICE); 1168 1076 size = LPAGE_SIZE; 1169 1077 domain->lv2entcnt[lv1ent_offset(iova)] += SPAGES_PER_LPAGE; 1170 1078 done: ··· 1214 1114 return phys; 1215 1115 } 1216 1116 1117 + static struct iommu_group *get_device_iommu_group(struct device *dev) 1118 + { 1119 + struct iommu_group *group; 1120 + 1121 + group = iommu_group_get(dev); 1122 + if (!group) 1123 + group = iommu_group_alloc(); 1124 + 1125 + return group; 1126 + } 1127 + 1217 1128 static int exynos_iommu_add_device(struct device *dev) 1218 1129 { 1219 1130 struct iommu_group *group; 1220 - int ret; 1221 1131 1222 1132 if (!has_sysmmu(dev)) 1223 1133 return -ENODEV; 1224 1134 1225 - group = iommu_group_get(dev); 1135 + group = iommu_group_get_for_dev(dev); 1226 1136 1227 - if (!group) { 1228 - group = iommu_group_alloc(); 1229 - if (IS_ERR(group)) { 1230 - dev_err(dev, "Failed to allocate IOMMU group\n"); 1231 - return PTR_ERR(group); 1232 - } 1233 - } 1137 + if (IS_ERR(group)) 1138 + return PTR_ERR(group); 1234 1139 1235 - ret = iommu_group_add_device(group, dev); 1236 1140 iommu_group_put(group); 1237 1141 1238 - return ret; 1142 + return 0; 1239 1143 } 1240 1144 1241 1145 static void exynos_iommu_remove_device(struct device *dev) ··· 1286 1182 .unmap = exynos_iommu_unmap, 1287 1183 .map_sg = default_iommu_map_sg, 1288 1184 .iova_to_phys = exynos_iommu_iova_to_phys, 1185 + .device_group = get_device_iommu_group, 1289 1186 .add_device = exynos_iommu_add_device, 1290 1187 .remove_device = exynos_iommu_remove_device, 1291 1188 .pgsize_bitmap = SECT_SIZE | LPAGE_SIZE | SPAGE_SIZE, ··· 1349 1244 pdev = of_platform_device_create(np, NULL, platform_bus_type.dev_root); 1350 1245 if (IS_ERR(pdev)) 1351 1246 return PTR_ERR(pdev); 1247 + 1248 + /* 1249 + * use the first registered sysmmu device for performing 1250 + * dma mapping operations on iommu page tables (cpu cache flush) 1251 + */ 1252 + if (!dma_dev) 1253 + dma_dev = &pdev->dev; 1352 1254 1353 1255 of_iommu_set_ops(np, &exynos_iommu_ops); 1354 1256 return 0;
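Note on the exynos rework above: the driver now does its page-table cache maintenance through the DMA API against dma_dev (the first probed SYSMMU device, captured at the end of the hunk) instead of the old open-coded pgtable_flush(), and the BUG_ON(handle != virt_to_phys(domain->pgtable)) pins down the dma == phys identity that the lv2table_base()/phys_to_virt() round-trips depend on. The update_pte() helper these hunks call is defined outside the lines shown; a minimal sketch of what such a helper has to do, assuming the same dma_dev convention:

static void update_pte(sysmmu_pte_t *ent, sysmmu_pte_t val)
{
        /* reclaim the cacheline before the CPU rewrites the entry... */
        dma_sync_single_for_cpu(dma_dev, virt_to_phys(ent), sizeof(*ent),
                                DMA_TO_DEVICE);
        *ent = val;
        /* ...then hand it back so the SYSMMU walker sees the update */
        dma_sync_single_for_device(dma_dev, virt_to_phys(ent), sizeof(*ent),
                                   DMA_TO_DEVICE);
}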
+846
drivers/iommu/io-pgtable-arm-v7s.c
··· 1 + /* 2 + * CPU-agnostic ARM page table allocator. 3 + * 4 + * ARMv7 Short-descriptor format, supporting 5 + * - Basic memory attributes 6 + * - Simplified access permissions (AP[2:1] model) 7 + * - Backwards-compatible TEX remap 8 + * - Large pages/supersections (if indicated by the caller) 9 + * 10 + * Not supporting: 11 + * - Legacy access permissions (AP[2:0] model) 12 + * 13 + * Almost certainly never supporting: 14 + * - PXN 15 + * - Domains 16 + * 17 + * This program is free software; you can redistribute it and/or modify 18 + * it under the terms of the GNU General Public License version 2 as 19 + * published by the Free Software Foundation. 20 + * 21 + * This program is distributed in the hope that it will be useful, 22 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 23 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 24 + * GNU General Public License for more details. 25 + * 26 + * You should have received a copy of the GNU General Public License 27 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 28 + * 29 + * Copyright (C) 2014-2015 ARM Limited 30 + * Copyright (c) 2014-2015 MediaTek Inc. 31 + */ 32 + 33 + #define pr_fmt(fmt) "arm-v7s io-pgtable: " fmt 34 + 35 + #include <linux/dma-mapping.h> 36 + #include <linux/gfp.h> 37 + #include <linux/iommu.h> 38 + #include <linux/kernel.h> 39 + #include <linux/kmemleak.h> 40 + #include <linux/sizes.h> 41 + #include <linux/slab.h> 42 + #include <linux/types.h> 43 + 44 + #include <asm/barrier.h> 45 + 46 + #include "io-pgtable.h" 47 + 48 + /* Struct accessors */ 49 + #define io_pgtable_to_data(x) \ 50 + container_of((x), struct arm_v7s_io_pgtable, iop) 51 + 52 + #define io_pgtable_ops_to_data(x) \ 53 + io_pgtable_to_data(io_pgtable_ops_to_pgtable(x)) 54 + 55 + /* 56 + * We have 32 bits total; 12 bits resolved at level 1, 8 bits at level 2, 57 + * and 12 bits in a page. With some carefully-chosen coefficients we can 58 + * hide the ugly inconsistencies behind these macros and at least let the 59 + * rest of the code pretend to be somewhat sane. 60 + */ 61 + #define ARM_V7S_ADDR_BITS 32 62 + #define _ARM_V7S_LVL_BITS(lvl) (16 - (lvl) * 4) 63 + #define ARM_V7S_LVL_SHIFT(lvl) (ARM_V7S_ADDR_BITS - (4 + 8 * (lvl))) 64 + #define ARM_V7S_TABLE_SHIFT 10 65 + 66 + #define ARM_V7S_PTES_PER_LVL(lvl) (1 << _ARM_V7S_LVL_BITS(lvl)) 67 + #define ARM_V7S_TABLE_SIZE(lvl) \ 68 + (ARM_V7S_PTES_PER_LVL(lvl) * sizeof(arm_v7s_iopte)) 69 + 70 + #define ARM_V7S_BLOCK_SIZE(lvl) (1UL << ARM_V7S_LVL_SHIFT(lvl)) 71 + #define ARM_V7S_LVL_MASK(lvl) ((u32)(~0U << ARM_V7S_LVL_SHIFT(lvl))) 72 + #define ARM_V7S_TABLE_MASK ((u32)(~0U << ARM_V7S_TABLE_SHIFT)) 73 + #define _ARM_V7S_IDX_MASK(lvl) (ARM_V7S_PTES_PER_LVL(lvl) - 1) 74 + #define ARM_V7S_LVL_IDX(addr, lvl) ({ \ 75 + int _l = lvl; \ 76 + ((u32)(addr) >> ARM_V7S_LVL_SHIFT(_l)) & _ARM_V7S_IDX_MASK(_l); \ 77 + }) 78 + 79 + /* 80 + * Large page/supersection entries are effectively a block of 16 page/section 81 + * entries, along the lines of the LPAE contiguous hint, but all with the 82 + * same output address. For want of a better common name we'll call them 83 + * "contiguous" versions of their respective page/section entries here, but 84 + * noting the distinction (WRT to TLB maintenance) that they represent *one* 85 + * entry repeated 16 times, not 16 separate entries (as in the LPAE case). 
86 + */ 87 + #define ARM_V7S_CONT_PAGES 16 88 + 89 + /* PTE type bits: these are all mixed up with XN/PXN bits in most cases */ 90 + #define ARM_V7S_PTE_TYPE_TABLE 0x1 91 + #define ARM_V7S_PTE_TYPE_PAGE 0x2 92 + #define ARM_V7S_PTE_TYPE_CONT_PAGE 0x1 93 + 94 + #define ARM_V7S_PTE_IS_VALID(pte) (((pte) & 0x3) != 0) 95 + #define ARM_V7S_PTE_IS_TABLE(pte, lvl) (lvl == 1 && ((pte) & ARM_V7S_PTE_TYPE_TABLE)) 96 + 97 + /* Page table bits */ 98 + #define ARM_V7S_ATTR_XN(lvl) BIT(4 * (2 - (lvl))) 99 + #define ARM_V7S_ATTR_B BIT(2) 100 + #define ARM_V7S_ATTR_C BIT(3) 101 + #define ARM_V7S_ATTR_NS_TABLE BIT(3) 102 + #define ARM_V7S_ATTR_NS_SECTION BIT(19) 103 + 104 + #define ARM_V7S_CONT_SECTION BIT(18) 105 + #define ARM_V7S_CONT_PAGE_XN_SHIFT 15 106 + 107 + /* 108 + * The attribute bits are consistently ordered*, but occupy bits [17:10] of 109 + * a level 1 PTE vs. bits [11:4] at level 2. Thus we define the individual 110 + * fields relative to that 8-bit block, plus a total shift relative to the PTE. 111 + */ 112 + #define ARM_V7S_ATTR_SHIFT(lvl) (16 - (lvl) * 6) 113 + 114 + #define ARM_V7S_ATTR_MASK 0xff 115 + #define ARM_V7S_ATTR_AP0 BIT(0) 116 + #define ARM_V7S_ATTR_AP1 BIT(1) 117 + #define ARM_V7S_ATTR_AP2 BIT(5) 118 + #define ARM_V7S_ATTR_S BIT(6) 119 + #define ARM_V7S_ATTR_NG BIT(7) 120 + #define ARM_V7S_TEX_SHIFT 2 121 + #define ARM_V7S_TEX_MASK 0x7 122 + #define ARM_V7S_ATTR_TEX(val) (((val) & ARM_V7S_TEX_MASK) << ARM_V7S_TEX_SHIFT) 123 + 124 + /* *well, except for TEX on level 2 large pages, of course :( */ 125 + #define ARM_V7S_CONT_PAGE_TEX_SHIFT 6 126 + #define ARM_V7S_CONT_PAGE_TEX_MASK (ARM_V7S_TEX_MASK << ARM_V7S_CONT_PAGE_TEX_SHIFT) 127 + 128 + /* Simplified access permissions */ 129 + #define ARM_V7S_PTE_AF ARM_V7S_ATTR_AP0 130 + #define ARM_V7S_PTE_AP_UNPRIV ARM_V7S_ATTR_AP1 131 + #define ARM_V7S_PTE_AP_RDONLY ARM_V7S_ATTR_AP2 132 + 133 + /* Register bits */ 134 + #define ARM_V7S_RGN_NC 0 135 + #define ARM_V7S_RGN_WBWA 1 136 + #define ARM_V7S_RGN_WT 2 137 + #define ARM_V7S_RGN_WB 3 138 + 139 + #define ARM_V7S_PRRR_TYPE_DEVICE 1 140 + #define ARM_V7S_PRRR_TYPE_NORMAL 2 141 + #define ARM_V7S_PRRR_TR(n, type) (((type) & 0x3) << ((n) * 2)) 142 + #define ARM_V7S_PRRR_DS0 BIT(16) 143 + #define ARM_V7S_PRRR_DS1 BIT(17) 144 + #define ARM_V7S_PRRR_NS0 BIT(18) 145 + #define ARM_V7S_PRRR_NS1 BIT(19) 146 + #define ARM_V7S_PRRR_NOS(n) BIT((n) + 24) 147 + 148 + #define ARM_V7S_NMRR_IR(n, attr) (((attr) & 0x3) << ((n) * 2)) 149 + #define ARM_V7S_NMRR_OR(n, attr) (((attr) & 0x3) << ((n) * 2 + 16)) 150 + 151 + #define ARM_V7S_TTBR_S BIT(1) 152 + #define ARM_V7S_TTBR_NOS BIT(5) 153 + #define ARM_V7S_TTBR_ORGN_ATTR(attr) (((attr) & 0x3) << 3) 154 + #define ARM_V7S_TTBR_IRGN_ATTR(attr) \ 155 + ((((attr) & 0x1) << 6) | (((attr) & 0x2) >> 1)) 156 + 157 + #define ARM_V7S_TCR_PD1 BIT(5) 158 + 159 + typedef u32 arm_v7s_iopte; 160 + 161 + static bool selftest_running; 162 + 163 + struct arm_v7s_io_pgtable { 164 + struct io_pgtable iop; 165 + 166 + arm_v7s_iopte *pgd; 167 + struct kmem_cache *l2_tables; 168 + }; 169 + 170 + static dma_addr_t __arm_v7s_dma_addr(void *pages) 171 + { 172 + return (dma_addr_t)virt_to_phys(pages); 173 + } 174 + 175 + static arm_v7s_iopte *iopte_deref(arm_v7s_iopte pte, int lvl) 176 + { 177 + if (ARM_V7S_PTE_IS_TABLE(pte, lvl)) 178 + pte &= ARM_V7S_TABLE_MASK; 179 + else 180 + pte &= ARM_V7S_LVL_MASK(lvl); 181 + return phys_to_virt(pte); 182 + } 183 + 184 + static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp, 185 + struct arm_v7s_io_pgtable *data) 186 + { 187 + struct device 
*dev = data->iop.cfg.iommu_dev; 188 + dma_addr_t dma; 189 + size_t size = ARM_V7S_TABLE_SIZE(lvl); 190 + void *table = NULL; 191 + 192 + if (lvl == 1) 193 + table = (void *)__get_dma_pages(__GFP_ZERO, get_order(size)); 194 + else if (lvl == 2) 195 + table = kmem_cache_zalloc(data->l2_tables, gfp | GFP_DMA); 196 + if (table && !selftest_running) { 197 + dma = dma_map_single(dev, table, size, DMA_TO_DEVICE); 198 + if (dma_mapping_error(dev, dma)) 199 + goto out_free; 200 + /* 201 + * We depend on the IOMMU being able to work with any physical 202 + * address directly, so if the DMA layer suggests otherwise by 203 + * translating or truncating them, that bodes very badly... 204 + */ 205 + if (dma != virt_to_phys(table)) 206 + goto out_unmap; 207 + } 208 + kmemleak_ignore(table); 209 + return table; 210 + 211 + out_unmap: 212 + dev_err(dev, "Cannot accommodate DMA translation for IOMMU page tables\n"); 213 + dma_unmap_single(dev, dma, size, DMA_TO_DEVICE); 214 + out_free: 215 + if (lvl == 1) 216 + free_pages((unsigned long)table, get_order(size)); 217 + else 218 + kmem_cache_free(data->l2_tables, table); 219 + return NULL; 220 + } 221 + 222 + static void __arm_v7s_free_table(void *table, int lvl, 223 + struct arm_v7s_io_pgtable *data) 224 + { 225 + struct device *dev = data->iop.cfg.iommu_dev; 226 + size_t size = ARM_V7S_TABLE_SIZE(lvl); 227 + 228 + if (!selftest_running) 229 + dma_unmap_single(dev, __arm_v7s_dma_addr(table), size, 230 + DMA_TO_DEVICE); 231 + if (lvl == 1) 232 + free_pages((unsigned long)table, get_order(size)); 233 + else 234 + kmem_cache_free(data->l2_tables, table); 235 + } 236 + 237 + static void __arm_v7s_pte_sync(arm_v7s_iopte *ptep, int num_entries, 238 + struct io_pgtable_cfg *cfg) 239 + { 240 + if (selftest_running) 241 + return; 242 + 243 + dma_sync_single_for_device(cfg->iommu_dev, __arm_v7s_dma_addr(ptep), 244 + num_entries * sizeof(*ptep), DMA_TO_DEVICE); 245 + } 246 + static void __arm_v7s_set_pte(arm_v7s_iopte *ptep, arm_v7s_iopte pte, 247 + int num_entries, struct io_pgtable_cfg *cfg) 248 + { 249 + int i; 250 + 251 + for (i = 0; i < num_entries; i++) 252 + ptep[i] = pte; 253 + 254 + __arm_v7s_pte_sync(ptep, num_entries, cfg); 255 + } 256 + 257 + static arm_v7s_iopte arm_v7s_prot_to_pte(int prot, int lvl, 258 + struct io_pgtable_cfg *cfg) 259 + { 260 + bool ap = !(cfg->quirks & IO_PGTABLE_QUIRK_NO_PERMS); 261 + arm_v7s_iopte pte = ARM_V7S_ATTR_NG | ARM_V7S_ATTR_S | 262 + ARM_V7S_ATTR_TEX(1); 263 + 264 + if (ap) { 265 + pte |= ARM_V7S_PTE_AF | ARM_V7S_PTE_AP_UNPRIV; 266 + if (!(prot & IOMMU_WRITE)) 267 + pte |= ARM_V7S_PTE_AP_RDONLY; 268 + } 269 + pte <<= ARM_V7S_ATTR_SHIFT(lvl); 270 + 271 + if ((prot & IOMMU_NOEXEC) && ap) 272 + pte |= ARM_V7S_ATTR_XN(lvl); 273 + if (prot & IOMMU_CACHE) 274 + pte |= ARM_V7S_ATTR_B | ARM_V7S_ATTR_C; 275 + 276 + return pte; 277 + } 278 + 279 + static int arm_v7s_pte_to_prot(arm_v7s_iopte pte, int lvl) 280 + { 281 + int prot = IOMMU_READ; 282 + 283 + if (pte & (ARM_V7S_PTE_AP_RDONLY << ARM_V7S_ATTR_SHIFT(lvl))) 284 + prot |= IOMMU_WRITE; 285 + if (pte & ARM_V7S_ATTR_C) 286 + prot |= IOMMU_CACHE; 287 + 288 + return prot; 289 + } 290 + 291 + static arm_v7s_iopte arm_v7s_pte_to_cont(arm_v7s_iopte pte, int lvl) 292 + { 293 + if (lvl == 1) { 294 + pte |= ARM_V7S_CONT_SECTION; 295 + } else if (lvl == 2) { 296 + arm_v7s_iopte xn = pte & ARM_V7S_ATTR_XN(lvl); 297 + arm_v7s_iopte tex = pte & ARM_V7S_CONT_PAGE_TEX_MASK; 298 + 299 + pte ^= xn | tex | ARM_V7S_PTE_TYPE_PAGE; 300 + pte |= (xn << ARM_V7S_CONT_PAGE_XN_SHIFT) | 301 + (tex << 
ARM_V7S_CONT_PAGE_TEX_SHIFT) | 302 + ARM_V7S_PTE_TYPE_CONT_PAGE; 303 + } 304 + return pte; 305 + } 306 + 307 + static arm_v7s_iopte arm_v7s_cont_to_pte(arm_v7s_iopte pte, int lvl) 308 + { 309 + if (lvl == 1) { 310 + pte &= ~ARM_V7S_CONT_SECTION; 311 + } else if (lvl == 2) { 312 + arm_v7s_iopte xn = pte & BIT(ARM_V7S_CONT_PAGE_XN_SHIFT); 313 + arm_v7s_iopte tex = pte & (ARM_V7S_CONT_PAGE_TEX_MASK << 314 + ARM_V7S_CONT_PAGE_TEX_SHIFT); 315 + 316 + pte ^= xn | tex | ARM_V7S_PTE_TYPE_CONT_PAGE; 317 + pte |= (xn >> ARM_V7S_CONT_PAGE_XN_SHIFT) | 318 + (tex >> ARM_V7S_CONT_PAGE_TEX_SHIFT) | 319 + ARM_V7S_PTE_TYPE_PAGE; 320 + } 321 + return pte; 322 + } 323 + 324 + static bool arm_v7s_pte_is_cont(arm_v7s_iopte pte, int lvl) 325 + { 326 + if (lvl == 1 && !ARM_V7S_PTE_IS_TABLE(pte, lvl)) 327 + return pte & ARM_V7S_CONT_SECTION; 328 + else if (lvl == 2) 329 + return !(pte & ARM_V7S_PTE_TYPE_PAGE); 330 + return false; 331 + } 332 + 333 + static int __arm_v7s_unmap(struct arm_v7s_io_pgtable *, unsigned long, 334 + size_t, int, arm_v7s_iopte *); 335 + 336 + static int arm_v7s_init_pte(struct arm_v7s_io_pgtable *data, 337 + unsigned long iova, phys_addr_t paddr, int prot, 338 + int lvl, int num_entries, arm_v7s_iopte *ptep) 339 + { 340 + struct io_pgtable_cfg *cfg = &data->iop.cfg; 341 + arm_v7s_iopte pte = arm_v7s_prot_to_pte(prot, lvl, cfg); 342 + int i; 343 + 344 + for (i = 0; i < num_entries; i++) 345 + if (ARM_V7S_PTE_IS_TABLE(ptep[i], lvl)) { 346 + /* 347 + * We need to unmap and free the old table before 348 + * overwriting it with a block entry. 349 + */ 350 + arm_v7s_iopte *tblp; 351 + size_t sz = ARM_V7S_BLOCK_SIZE(lvl); 352 + 353 + tblp = ptep - ARM_V7S_LVL_IDX(iova, lvl); 354 + if (WARN_ON(__arm_v7s_unmap(data, iova + i * sz, 355 + sz, lvl, tblp) != sz)) 356 + return -EINVAL; 357 + } else if (ptep[i]) { 358 + /* We require an unmap first */ 359 + WARN_ON(!selftest_running); 360 + return -EEXIST; 361 + } 362 + 363 + pte |= ARM_V7S_PTE_TYPE_PAGE; 364 + if (lvl == 1 && (cfg->quirks & IO_PGTABLE_QUIRK_ARM_NS)) 365 + pte |= ARM_V7S_ATTR_NS_SECTION; 366 + 367 + if (num_entries > 1) 368 + pte = arm_v7s_pte_to_cont(pte, lvl); 369 + 370 + pte |= paddr & ARM_V7S_LVL_MASK(lvl); 371 + 372 + __arm_v7s_set_pte(ptep, pte, num_entries, cfg); 373 + return 0; 374 + } 375 + 376 + static int __arm_v7s_map(struct arm_v7s_io_pgtable *data, unsigned long iova, 377 + phys_addr_t paddr, size_t size, int prot, 378 + int lvl, arm_v7s_iopte *ptep) 379 + { 380 + struct io_pgtable_cfg *cfg = &data->iop.cfg; 381 + arm_v7s_iopte pte, *cptep; 382 + int num_entries = size >> ARM_V7S_LVL_SHIFT(lvl); 383 + 384 + /* Find our entry at the current level */ 385 + ptep += ARM_V7S_LVL_IDX(iova, lvl); 386 + 387 + /* If we can install a leaf entry at this level, then do so */ 388 + if (num_entries) 389 + return arm_v7s_init_pte(data, iova, paddr, prot, 390 + lvl, num_entries, ptep); 391 + 392 + /* We can't allocate tables at the final level */ 393 + if (WARN_ON(lvl == 2)) 394 + return -EINVAL; 395 + 396 + /* Grab a pointer to the next level */ 397 + pte = *ptep; 398 + if (!pte) { 399 + cptep = __arm_v7s_alloc_table(lvl + 1, GFP_ATOMIC, data); 400 + if (!cptep) 401 + return -ENOMEM; 402 + 403 + pte = virt_to_phys(cptep) | ARM_V7S_PTE_TYPE_TABLE; 404 + if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_NS) 405 + pte |= ARM_V7S_ATTR_NS_TABLE; 406 + 407 + __arm_v7s_set_pte(ptep, pte, 1, cfg); 408 + } else { 409 + cptep = iopte_deref(pte, lvl); 410 + } 411 + 412 + /* Rinse, repeat */ 413 + return __arm_v7s_map(data, iova, paddr, size, prot, lvl + 1, 
cptep); 414 + } 415 + 416 + static int arm_v7s_map(struct io_pgtable_ops *ops, unsigned long iova, 417 + phys_addr_t paddr, size_t size, int prot) 418 + { 419 + struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(ops); 420 + struct io_pgtable *iop = &data->iop; 421 + int ret; 422 + 423 + /* If no access, then nothing to do */ 424 + if (!(prot & (IOMMU_READ | IOMMU_WRITE))) 425 + return 0; 426 + 427 + ret = __arm_v7s_map(data, iova, paddr, size, prot, 1, data->pgd); 428 + /* 429 + * Synchronise all PTE updates for the new mapping before there's 430 + * a chance for anything to kick off a table walk for the new iova. 431 + */ 432 + if (iop->cfg.quirks & IO_PGTABLE_QUIRK_TLBI_ON_MAP) { 433 + io_pgtable_tlb_add_flush(iop, iova, size, 434 + ARM_V7S_BLOCK_SIZE(2), false); 435 + io_pgtable_tlb_sync(iop); 436 + } else { 437 + wmb(); 438 + } 439 + 440 + return ret; 441 + } 442 + 443 + static void arm_v7s_free_pgtable(struct io_pgtable *iop) 444 + { 445 + struct arm_v7s_io_pgtable *data = io_pgtable_to_data(iop); 446 + int i; 447 + 448 + for (i = 0; i < ARM_V7S_PTES_PER_LVL(1); i++) { 449 + arm_v7s_iopte pte = data->pgd[i]; 450 + 451 + if (ARM_V7S_PTE_IS_TABLE(pte, 1)) 452 + __arm_v7s_free_table(iopte_deref(pte, 1), 2, data); 453 + } 454 + __arm_v7s_free_table(data->pgd, 1, data); 455 + kmem_cache_destroy(data->l2_tables); 456 + kfree(data); 457 + } 458 + 459 + static void arm_v7s_split_cont(struct arm_v7s_io_pgtable *data, 460 + unsigned long iova, int idx, int lvl, 461 + arm_v7s_iopte *ptep) 462 + { 463 + struct io_pgtable *iop = &data->iop; 464 + arm_v7s_iopte pte; 465 + size_t size = ARM_V7S_BLOCK_SIZE(lvl); 466 + int i; 467 + 468 + ptep -= idx & (ARM_V7S_CONT_PAGES - 1); 469 + pte = arm_v7s_cont_to_pte(*ptep, lvl); 470 + for (i = 0; i < ARM_V7S_CONT_PAGES; i++) { 471 + ptep[i] = pte; 472 + pte += size; 473 + } 474 + 475 + __arm_v7s_pte_sync(ptep, ARM_V7S_CONT_PAGES, &iop->cfg); 476 + 477 + size *= ARM_V7S_CONT_PAGES; 478 + io_pgtable_tlb_add_flush(iop, iova, size, size, true); 479 + io_pgtable_tlb_sync(iop); 480 + } 481 + 482 + static int arm_v7s_split_blk_unmap(struct arm_v7s_io_pgtable *data, 483 + unsigned long iova, size_t size, 484 + arm_v7s_iopte *ptep) 485 + { 486 + unsigned long blk_start, blk_end, blk_size; 487 + phys_addr_t blk_paddr; 488 + arm_v7s_iopte table = 0; 489 + int prot = arm_v7s_pte_to_prot(*ptep, 1); 490 + 491 + blk_size = ARM_V7S_BLOCK_SIZE(1); 492 + blk_start = iova & ARM_V7S_LVL_MASK(1); 493 + blk_end = blk_start + ARM_V7S_BLOCK_SIZE(1); 494 + blk_paddr = *ptep & ARM_V7S_LVL_MASK(1); 495 + 496 + for (; blk_start < blk_end; blk_start += size, blk_paddr += size) { 497 + arm_v7s_iopte *tablep; 498 + 499 + /* Unmap! 
*/ 500 + if (blk_start == iova) 501 + continue; 502 + 503 + /* __arm_v7s_map expects a pointer to the start of the table */ 504 + tablep = &table - ARM_V7S_LVL_IDX(blk_start, 1); 505 + if (__arm_v7s_map(data, blk_start, blk_paddr, size, prot, 1, 506 + tablep) < 0) { 507 + if (table) { 508 + /* Free the table we allocated */ 509 + tablep = iopte_deref(table, 1); 510 + __arm_v7s_free_table(tablep, 2, data); 511 + } 512 + return 0; /* Bytes unmapped */ 513 + } 514 + } 515 + 516 + __arm_v7s_set_pte(ptep, table, 1, &data->iop.cfg); 517 + iova &= ~(blk_size - 1); 518 + io_pgtable_tlb_add_flush(&data->iop, iova, blk_size, blk_size, true); 519 + return size; 520 + } 521 + 522 + static int __arm_v7s_unmap(struct arm_v7s_io_pgtable *data, 523 + unsigned long iova, size_t size, int lvl, 524 + arm_v7s_iopte *ptep) 525 + { 526 + arm_v7s_iopte pte[ARM_V7S_CONT_PAGES]; 527 + struct io_pgtable *iop = &data->iop; 528 + int idx, i = 0, num_entries = size >> ARM_V7S_LVL_SHIFT(lvl); 529 + 530 + /* Something went horribly wrong and we ran out of page table */ 531 + if (WARN_ON(lvl > 2)) 532 + return 0; 533 + 534 + idx = ARM_V7S_LVL_IDX(iova, lvl); 535 + ptep += idx; 536 + do { 537 + if (WARN_ON(!ARM_V7S_PTE_IS_VALID(ptep[i]))) 538 + return 0; 539 + pte[i] = ptep[i]; 540 + } while (++i < num_entries); 541 + 542 + /* 543 + * If we've hit a contiguous 'large page' entry at this level, it 544 + * needs splitting first, unless we're unmapping the whole lot. 545 + */ 546 + if (num_entries <= 1 && arm_v7s_pte_is_cont(pte[0], lvl)) 547 + arm_v7s_split_cont(data, iova, idx, lvl, ptep); 548 + 549 + /* If the size matches this level, we're in the right place */ 550 + if (num_entries) { 551 + size_t blk_size = ARM_V7S_BLOCK_SIZE(lvl); 552 + 553 + __arm_v7s_set_pte(ptep, 0, num_entries, &iop->cfg); 554 + 555 + for (i = 0; i < num_entries; i++) { 556 + if (ARM_V7S_PTE_IS_TABLE(pte[i], lvl)) { 557 + /* Also flush any partial walks */ 558 + io_pgtable_tlb_add_flush(iop, iova, blk_size, 559 + ARM_V7S_BLOCK_SIZE(lvl + 1), false); 560 + io_pgtable_tlb_sync(iop); 561 + ptep = iopte_deref(pte[i], lvl); 562 + __arm_v7s_free_table(ptep, lvl + 1, data); 563 + } else { 564 + io_pgtable_tlb_add_flush(iop, iova, blk_size, 565 + blk_size, true); 566 + } 567 + iova += blk_size; 568 + } 569 + return size; 570 + } else if (lvl == 1 && !ARM_V7S_PTE_IS_TABLE(pte[0], lvl)) { 571 + /* 572 + * Insert a table at the next level to map the old region, 573 + * minus the part we want to unmap 574 + */ 575 + return arm_v7s_split_blk_unmap(data, iova, size, ptep); 576 + } 577 + 578 + /* Keep on walkin' */ 579 + ptep = iopte_deref(pte[0], lvl); 580 + return __arm_v7s_unmap(data, iova, size, lvl + 1, ptep); 581 + } 582 + 583 + static int arm_v7s_unmap(struct io_pgtable_ops *ops, unsigned long iova, 584 + size_t size) 585 + { 586 + struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(ops); 587 + size_t unmapped; 588 + 589 + unmapped = __arm_v7s_unmap(data, iova, size, 1, data->pgd); 590 + if (unmapped) 591 + io_pgtable_tlb_sync(&data->iop); 592 + 593 + return unmapped; 594 + } 595 + 596 + static phys_addr_t arm_v7s_iova_to_phys(struct io_pgtable_ops *ops, 597 + unsigned long iova) 598 + { 599 + struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(ops); 600 + arm_v7s_iopte *ptep = data->pgd, pte; 601 + int lvl = 0; 602 + u32 mask; 603 + 604 + do { 605 + pte = ptep[ARM_V7S_LVL_IDX(iova, ++lvl)]; 606 + ptep = iopte_deref(pte, lvl); 607 + } while (ARM_V7S_PTE_IS_TABLE(pte, lvl)); 608 + 609 + if (!ARM_V7S_PTE_IS_VALID(pte)) 610 + return 0; 611 + 612 + 
mask = ARM_V7S_LVL_MASK(lvl); 613 + if (arm_v7s_pte_is_cont(pte, lvl)) 614 + mask *= ARM_V7S_CONT_PAGES; 615 + return (pte & mask) | (iova & ~mask); 616 + } 617 + 618 + static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg, 619 + void *cookie) 620 + { 621 + struct arm_v7s_io_pgtable *data; 622 + 623 + if (cfg->ias > ARM_V7S_ADDR_BITS || cfg->oas > ARM_V7S_ADDR_BITS) 624 + return NULL; 625 + 626 + if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_NS | 627 + IO_PGTABLE_QUIRK_NO_PERMS | 628 + IO_PGTABLE_QUIRK_TLBI_ON_MAP)) 629 + return NULL; 630 + 631 + data = kmalloc(sizeof(*data), GFP_KERNEL); 632 + if (!data) 633 + return NULL; 634 + 635 + data->l2_tables = kmem_cache_create("io-pgtable_armv7s_l2", 636 + ARM_V7S_TABLE_SIZE(2), 637 + ARM_V7S_TABLE_SIZE(2), 638 + SLAB_CACHE_DMA, NULL); 639 + if (!data->l2_tables) 640 + goto out_free_data; 641 + 642 + data->iop.ops = (struct io_pgtable_ops) { 643 + .map = arm_v7s_map, 644 + .unmap = arm_v7s_unmap, 645 + .iova_to_phys = arm_v7s_iova_to_phys, 646 + }; 647 + 648 + /* We have to do this early for __arm_v7s_alloc_table to work... */ 649 + data->iop.cfg = *cfg; 650 + 651 + /* 652 + * Unless the IOMMU driver indicates supersection support by 653 + * having SZ_16M set in the initial bitmap, they won't be used. 654 + */ 655 + cfg->pgsize_bitmap &= SZ_4K | SZ_64K | SZ_1M | SZ_16M; 656 + 657 + /* TCR: T0SZ=0, disable TTBR1 */ 658 + cfg->arm_v7s_cfg.tcr = ARM_V7S_TCR_PD1; 659 + 660 + /* 661 + * TEX remap: the indices used map to the closest equivalent types 662 + * under the non-TEX-remap interpretation of those attribute bits, 663 + * excepting various implementation-defined aspects of shareability. 664 + */ 665 + cfg->arm_v7s_cfg.prrr = ARM_V7S_PRRR_TR(1, ARM_V7S_PRRR_TYPE_DEVICE) | 666 + ARM_V7S_PRRR_TR(4, ARM_V7S_PRRR_TYPE_NORMAL) | 667 + ARM_V7S_PRRR_TR(7, ARM_V7S_PRRR_TYPE_NORMAL) | 668 + ARM_V7S_PRRR_DS0 | ARM_V7S_PRRR_DS1 | 669 + ARM_V7S_PRRR_NS1 | ARM_V7S_PRRR_NOS(7); 670 + cfg->arm_v7s_cfg.nmrr = ARM_V7S_NMRR_IR(7, ARM_V7S_RGN_WBWA) | 671 + ARM_V7S_NMRR_OR(7, ARM_V7S_RGN_WBWA); 672 + 673 + /* Looking good; allocate a pgd */ 674 + data->pgd = __arm_v7s_alloc_table(1, GFP_KERNEL, data); 675 + if (!data->pgd) 676 + goto out_free_data; 677 + 678 + /* Ensure the empty pgd is visible before any actual TTBR write */ 679 + wmb(); 680 + 681 + /* TTBRs */ 682 + cfg->arm_v7s_cfg.ttbr[0] = virt_to_phys(data->pgd) | 683 + ARM_V7S_TTBR_S | ARM_V7S_TTBR_NOS | 684 + ARM_V7S_TTBR_IRGN_ATTR(ARM_V7S_RGN_WBWA) | 685 + ARM_V7S_TTBR_ORGN_ATTR(ARM_V7S_RGN_WBWA); 686 + cfg->arm_v7s_cfg.ttbr[1] = 0; 687 + return &data->iop; 688 + 689 + out_free_data: 690 + kmem_cache_destroy(data->l2_tables); 691 + kfree(data); 692 + return NULL; 693 + } 694 + 695 + struct io_pgtable_init_fns io_pgtable_arm_v7s_init_fns = { 696 + .alloc = arm_v7s_alloc_pgtable, 697 + .free = arm_v7s_free_pgtable, 698 + }; 699 + 700 + #ifdef CONFIG_IOMMU_IO_PGTABLE_ARMV7S_SELFTEST 701 + 702 + static struct io_pgtable_cfg *cfg_cookie; 703 + 704 + static void dummy_tlb_flush_all(void *cookie) 705 + { 706 + WARN_ON(cookie != cfg_cookie); 707 + } 708 + 709 + static void dummy_tlb_add_flush(unsigned long iova, size_t size, 710 + size_t granule, bool leaf, void *cookie) 711 + { 712 + WARN_ON(cookie != cfg_cookie); 713 + WARN_ON(!(size & cfg_cookie->pgsize_bitmap)); 714 + } 715 + 716 + static void dummy_tlb_sync(void *cookie) 717 + { 718 + WARN_ON(cookie != cfg_cookie); 719 + } 720 + 721 + static struct iommu_gather_ops dummy_tlb_ops = { 722 + .tlb_flush_all = dummy_tlb_flush_all, 723 + 
.tlb_add_flush = dummy_tlb_add_flush, 724 + .tlb_sync = dummy_tlb_sync, 725 + }; 726 + 727 + #define __FAIL(ops) ({ \ 728 + WARN(1, "selftest: test failed\n"); \ 729 + selftest_running = false; \ 730 + -EFAULT; \ 731 + }) 732 + 733 + static int __init arm_v7s_do_selftests(void) 734 + { 735 + struct io_pgtable_ops *ops; 736 + struct io_pgtable_cfg cfg = { 737 + .tlb = &dummy_tlb_ops, 738 + .oas = 32, 739 + .ias = 32, 740 + .quirks = IO_PGTABLE_QUIRK_ARM_NS, 741 + .pgsize_bitmap = SZ_4K | SZ_64K | SZ_1M | SZ_16M, 742 + }; 743 + unsigned int iova, size, iova_start; 744 + unsigned int i, loopnr = 0; 745 + 746 + selftest_running = true; 747 + 748 + cfg_cookie = &cfg; 749 + 750 + ops = alloc_io_pgtable_ops(ARM_V7S, &cfg, &cfg); 751 + if (!ops) { 752 + pr_err("selftest: failed to allocate io pgtable ops\n"); 753 + return -EINVAL; 754 + } 755 + 756 + /* 757 + * Initial sanity checks. 758 + * Empty page tables shouldn't provide any translations. 759 + */ 760 + if (ops->iova_to_phys(ops, 42)) 761 + return __FAIL(ops); 762 + 763 + if (ops->iova_to_phys(ops, SZ_1G + 42)) 764 + return __FAIL(ops); 765 + 766 + if (ops->iova_to_phys(ops, SZ_2G + 42)) 767 + return __FAIL(ops); 768 + 769 + /* 770 + * Distinct mappings of different granule sizes. 771 + */ 772 + iova = 0; 773 + i = find_first_bit(&cfg.pgsize_bitmap, BITS_PER_LONG); 774 + while (i != BITS_PER_LONG) { 775 + size = 1UL << i; 776 + if (ops->map(ops, iova, iova, size, IOMMU_READ | 777 + IOMMU_WRITE | 778 + IOMMU_NOEXEC | 779 + IOMMU_CACHE)) 780 + return __FAIL(ops); 781 + 782 + /* Overlapping mappings */ 783 + if (!ops->map(ops, iova, iova + size, size, 784 + IOMMU_READ | IOMMU_NOEXEC)) 785 + return __FAIL(ops); 786 + 787 + if (ops->iova_to_phys(ops, iova + 42) != (iova + 42)) 788 + return __FAIL(ops); 789 + 790 + iova += SZ_16M; 791 + i++; 792 + i = find_next_bit(&cfg.pgsize_bitmap, BITS_PER_LONG, i); 793 + loopnr++; 794 + } 795 + 796 + /* Partial unmap */ 797 + i = 1; 798 + size = 1UL << __ffs(cfg.pgsize_bitmap); 799 + while (i < loopnr) { 800 + iova_start = i * SZ_16M; 801 + if (ops->unmap(ops, iova_start + size, size) != size) 802 + return __FAIL(ops); 803 + 804 + /* Remap of partial unmap */ 805 + if (ops->map(ops, iova_start + size, size, size, IOMMU_READ)) 806 + return __FAIL(ops); 807 + 808 + if (ops->iova_to_phys(ops, iova_start + size + 42) 809 + != (size + 42)) 810 + return __FAIL(ops); 811 + i++; 812 + } 813 + 814 + /* Full unmap */ 815 + iova = 0; 816 + i = find_first_bit(&cfg.pgsize_bitmap, BITS_PER_LONG); 817 + while (i != BITS_PER_LONG) { 818 + size = 1UL << i; 819 + 820 + if (ops->unmap(ops, iova, size) != size) 821 + return __FAIL(ops); 822 + 823 + if (ops->iova_to_phys(ops, iova + 42)) 824 + return __FAIL(ops); 825 + 826 + /* Remap full block */ 827 + if (ops->map(ops, iova, iova, size, IOMMU_WRITE)) 828 + return __FAIL(ops); 829 + 830 + if (ops->iova_to_phys(ops, iova + 42) != (iova + 42)) 831 + return __FAIL(ops); 832 + 833 + iova += SZ_16M; 834 + i++; 835 + i = find_next_bit(&cfg.pgsize_bitmap, BITS_PER_LONG, i); 836 + } 837 + 838 + free_io_pgtable_ops(ops); 839 + 840 + selftest_running = false; 841 + 842 + pr_info("self test ok\n"); 843 + return 0; 844 + } 845 + subsys_initcall(arm_v7s_do_selftests); 846 + #endif
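The "carefully-chosen coefficients" in the level macros above are easier to trust once evaluated: level 1 resolves 12 bits (4096 entries, the 16KB pgd that __arm_v7s_alloc_table() gets from __get_dma_pages()) and level 2 resolves 8 bits (256 entries, the 1KB tables from the kmem_cache), with shifts of 20 and 12 matching 1MB sections and 4KB pages. A standalone sanity check of that arithmetic, with the two macros copied from the file:

#include <assert.h>

#define _ARM_V7S_LVL_BITS(lvl)  (16 - (lvl) * 4)
#define ARM_V7S_LVL_SHIFT(lvl)  (32 - (4 + 8 * (lvl)))

int main(void)
{
        assert(_ARM_V7S_LVL_BITS(1) == 12);     /* 4096 level-1 entries */
        assert(_ARM_V7S_LVL_BITS(2) == 8);      /* 256 level-2 entries */
        assert(ARM_V7S_LVL_SHIFT(1) == 20);     /* 1MB sections */
        assert(ARM_V7S_LVL_SHIFT(2) == 12);     /* 4KB pages */
        return 0;
}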
+20 -14
drivers/iommu/io-pgtable-arm.c
··· 446 446 unsigned long blk_start, blk_end; 447 447 phys_addr_t blk_paddr; 448 448 arm_lpae_iopte table = 0; 449 - struct io_pgtable_cfg *cfg = &data->iop.cfg; 450 449 451 450 blk_start = iova & ~(blk_size - 1); 452 451 blk_end = blk_start + blk_size; ··· 471 472 } 472 473 } 473 474 474 - __arm_lpae_set_pte(ptep, table, cfg); 475 + __arm_lpae_set_pte(ptep, table, &data->iop.cfg); 475 476 iova &= ~(blk_size - 1); 476 - cfg->tlb->tlb_add_flush(iova, blk_size, blk_size, true, data->iop.cookie); 477 + io_pgtable_tlb_add_flush(&data->iop, iova, blk_size, blk_size, true); 477 478 return size; 478 479 } 479 480 ··· 482 483 arm_lpae_iopte *ptep) 483 484 { 484 485 arm_lpae_iopte pte; 485 - const struct iommu_gather_ops *tlb = data->iop.cfg.tlb; 486 - void *cookie = data->iop.cookie; 486 + struct io_pgtable *iop = &data->iop; 487 487 size_t blk_size = ARM_LPAE_BLOCK_SIZE(lvl, data); 488 488 489 489 /* Something went horribly wrong and we ran out of page table */ ··· 496 498 497 499 /* If the size matches this level, we're in the right place */ 498 500 if (size == blk_size) { 499 - __arm_lpae_set_pte(ptep, 0, &data->iop.cfg); 501 + __arm_lpae_set_pte(ptep, 0, &iop->cfg); 500 502 501 503 if (!iopte_leaf(pte, lvl)) { 502 504 /* Also flush any partial walks */ 503 - tlb->tlb_add_flush(iova, size, ARM_LPAE_GRANULE(data), 504 - false, cookie); 505 - tlb->tlb_sync(cookie); 505 + io_pgtable_tlb_add_flush(iop, iova, size, 506 + ARM_LPAE_GRANULE(data), false); 507 + io_pgtable_tlb_sync(iop); 506 508 ptep = iopte_deref(pte, data); 507 509 __arm_lpae_free_pgtable(data, lvl + 1, ptep); 508 510 } else { 509 - tlb->tlb_add_flush(iova, size, size, true, cookie); 511 + io_pgtable_tlb_add_flush(iop, iova, size, size, true); 510 512 } 511 513 512 514 return size; ··· 530 532 { 531 533 size_t unmapped; 532 534 struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops); 533 - struct io_pgtable *iop = &data->iop; 534 535 arm_lpae_iopte *ptep = data->pgd; 535 536 int lvl = ARM_LPAE_START_LVL(data); 536 537 537 538 unmapped = __arm_lpae_unmap(data, iova, size, lvl, ptep); 538 539 if (unmapped) 539 - iop->cfg.tlb->tlb_sync(iop->cookie); 540 + io_pgtable_tlb_sync(&data->iop); 540 541 541 542 return unmapped; 542 543 } ··· 659 662 arm_64_lpae_alloc_pgtable_s1(struct io_pgtable_cfg *cfg, void *cookie) 660 663 { 661 664 u64 reg; 662 - struct arm_lpae_io_pgtable *data = arm_lpae_alloc_pgtable(cfg); 665 + struct arm_lpae_io_pgtable *data; 663 666 667 + if (cfg->quirks & ~IO_PGTABLE_QUIRK_ARM_NS) 668 + return NULL; 669 + 670 + data = arm_lpae_alloc_pgtable(cfg); 664 671 if (!data) 665 672 return NULL; 666 673 ··· 747 746 arm_64_lpae_alloc_pgtable_s2(struct io_pgtable_cfg *cfg, void *cookie) 748 747 { 749 748 u64 reg, sl; 750 - struct arm_lpae_io_pgtable *data = arm_lpae_alloc_pgtable(cfg); 749 + struct arm_lpae_io_pgtable *data; 751 750 751 + /* The NS quirk doesn't apply at stage 2 */ 752 + if (cfg->quirks) 753 + return NULL; 754 + 755 + data = arm_lpae_alloc_pgtable(cfg); 752 756 if (!data) 753 757 return NULL; 754 758
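Besides moving to the io_pgtable_tlb_*() accessors (defined in io-pgtable.h below), the LPAE allocators now validate cfg->quirks up front: stage 1 accepts only IO_PGTABLE_QUIRK_ARM_NS, stage 2 accepts none. An illustrative caller, not from the patch, showing the new failure mode (dummy_tlb_ops stands in for a no-op iommu_gather_ops like the v7s selftest's):

static struct io_pgtable_ops *try_s2_with_ns_quirk(void *cookie)
{
        struct io_pgtable_cfg cfg = {
                .quirks         = IO_PGTABLE_QUIRK_ARM_NS, /* stage-1 only */
                .pgsize_bitmap  = SZ_4K | SZ_2M | SZ_1G,
                .ias            = 40,
                .oas            = 40,
                .tlb            = &dummy_tlb_ops,
        };

        /* Returns NULL: arm_64_lpae_alloc_pgtable_s2() rejects any quirk */
        return alloc_io_pgtable_ops(ARM_64_LPAE_S2, &cfg, cookie);
}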
+4 -1
drivers/iommu/io-pgtable.c
··· 33 33 [ARM_64_LPAE_S1] = &io_pgtable_arm_64_lpae_s1_init_fns, 34 34 [ARM_64_LPAE_S2] = &io_pgtable_arm_64_lpae_s2_init_fns, 35 35 #endif 36 + #ifdef CONFIG_IOMMU_IO_PGTABLE_ARMV7S 37 + [ARM_V7S] = &io_pgtable_arm_v7s_init_fns, 38 + #endif 36 39 }; 37 40 38 41 struct io_pgtable_ops *alloc_io_pgtable_ops(enum io_pgtable_fmt fmt, ··· 75 72 return; 76 73 77 74 iop = container_of(ops, struct io_pgtable, ops); 78 - iop->cfg.tlb->tlb_flush_all(iop->cookie); 75 + io_pgtable_tlb_flush_all(iop); 79 76 io_pgtable_init_table[iop->fmt]->free(iop); 80 77 }
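For context, the dispatch this table feeds (reconstructed here for illustration, not part of the hunk) indexes io_pgtable_init_table by format. A format left out of the build, e.g. ARM_V7S without CONFIG_IOMMU_IO_PGTABLE_ARMV7S, leaves a NULL slot, so allocation fails cleanly at runtime rather than at link time:

struct io_pgtable_ops *alloc_io_pgtable_ops(enum io_pgtable_fmt fmt,
                                            struct io_pgtable_cfg *cfg,
                                            void *cookie)
{
        struct io_pgtable *iop;
        struct io_pgtable_init_fns *fns;

        if (fmt >= IO_PGTABLE_NUM_FMTS)
                return NULL;

        fns = io_pgtable_init_table[fmt];
        if (!fns)
                return NULL;            /* format not built in */

        iop = fns->alloc(cfg, cookie);
        if (!iop)
                return NULL;

        iop->fmt = fmt;
        iop->cookie = cookie;
        iop->cfg = *cfg;

        return &iop->ops;
}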
+51 -2
drivers/iommu/io-pgtable.h
··· 1 1 #ifndef __IO_PGTABLE_H 2 2 #define __IO_PGTABLE_H 3 + #include <linux/bitops.h> 3 4 4 5 /* 5 6 * Public API for use by IOMMU drivers ··· 10 9 ARM_32_LPAE_S2, 11 10 ARM_64_LPAE_S1, 12 11 ARM_64_LPAE_S2, 12 + ARM_V7S, 13 13 IO_PGTABLE_NUM_FMTS, 14 14 }; 15 15 ··· 47 45 * page table walker. 48 46 */ 49 47 struct io_pgtable_cfg { 50 - #define IO_PGTABLE_QUIRK_ARM_NS (1 << 0) /* Set NS bit in PTEs */ 51 - int quirks; 48 + /* 49 + * IO_PGTABLE_QUIRK_ARM_NS: (ARM formats) Set NS and NSTABLE bits in 50 + * stage 1 PTEs, for hardware which insists on validating them 51 + * even in non-secure state where they should normally be ignored. 52 + * 53 + * IO_PGTABLE_QUIRK_NO_PERMS: Ignore the IOMMU_READ, IOMMU_WRITE and 54 + * IOMMU_NOEXEC flags and map everything with full access, for 55 + * hardware which does not implement the permissions of a given 56 + * format, and/or requires some format-specific default value. 57 + * 58 + * IO_PGTABLE_QUIRK_TLBI_ON_MAP: If the format forbids caching invalid 59 + * (unmapped) entries but the hardware might do so anyway, perform 60 + * TLB maintenance when mapping as well as when unmapping. 61 + */ 62 + #define IO_PGTABLE_QUIRK_ARM_NS BIT(0) 63 + #define IO_PGTABLE_QUIRK_NO_PERMS BIT(1) 64 + #define IO_PGTABLE_QUIRK_TLBI_ON_MAP BIT(2) 65 + unsigned long quirks; 52 66 unsigned long pgsize_bitmap; 53 67 unsigned int ias; 54 68 unsigned int oas; ··· 83 65 u64 vttbr; 84 66 u64 vtcr; 85 67 } arm_lpae_s2_cfg; 68 + 69 + struct { 70 + u32 ttbr[2]; 71 + u32 tcr; 72 + u32 nmrr; 73 + u32 prrr; 74 + } arm_v7s_cfg; 86 75 }; 87 76 }; 88 77 ··· 146 121 * @fmt: The page table format. 147 122 * @cookie: An opaque token provided by the IOMMU driver and passed back to 148 123 * any callback routines. 124 + * @tlb_sync_pending: Private flag for optimising out redundant syncs. 149 125 * @cfg: A copy of the page table configuration. 150 126 * @ops: The page table operations in use for this set of page tables. 151 127 */ 152 128 struct io_pgtable { 153 129 enum io_pgtable_fmt fmt; 154 130 void *cookie; 131 + bool tlb_sync_pending; 155 132 struct io_pgtable_cfg cfg; 156 133 struct io_pgtable_ops ops; 157 134 }; 158 135 159 136 #define io_pgtable_ops_to_pgtable(x) container_of((x), struct io_pgtable, ops) 137 + 138 + static inline void io_pgtable_tlb_flush_all(struct io_pgtable *iop) 139 + { 140 + iop->cfg.tlb->tlb_flush_all(iop->cookie); 141 + iop->tlb_sync_pending = true; 142 + } 143 + 144 + static inline void io_pgtable_tlb_add_flush(struct io_pgtable *iop, 145 + unsigned long iova, size_t size, size_t granule, bool leaf) 146 + { 147 + iop->cfg.tlb->tlb_add_flush(iova, size, granule, leaf, iop->cookie); 148 + iop->tlb_sync_pending = true; 149 + } 150 + 151 + static inline void io_pgtable_tlb_sync(struct io_pgtable *iop) 152 + { 153 + if (iop->tlb_sync_pending) { 154 + iop->cfg.tlb->tlb_sync(iop->cookie); 155 + iop->tlb_sync_pending = false; 156 + } 157 + } 160 158 161 159 /** 162 160 * struct io_pgtable_init_fns - Alloc/free a set of page tables for a ··· 197 149 extern struct io_pgtable_init_fns io_pgtable_arm_32_lpae_s2_init_fns; 198 150 extern struct io_pgtable_init_fns io_pgtable_arm_64_lpae_s1_init_fns; 199 151 extern struct io_pgtable_init_fns io_pgtable_arm_64_lpae_s2_init_fns; 152 + extern struct io_pgtable_init_fns io_pgtable_arm_v7s_init_fns; 200 153 201 154 #endif /* __IO_PGTABLE_H */
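The tlb_sync_pending flag makes back-to-back syncs collapse: only a sync that follows a queued flush reaches the driver's tlb_sync callback. A minimal usage sketch (the example_* name is illustrative, not from the patch):

static void example_flush_then_sync(struct io_pgtable *iop, unsigned long iova)
{
        io_pgtable_tlb_add_flush(iop, iova, SZ_4K, SZ_4K, true);
        io_pgtable_tlb_sync(iop);   /* reaches cfg.tlb->tlb_sync(cookie) */
        io_pgtable_tlb_sync(iop);   /* elided: nothing pending any more */
}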
+2 -1
drivers/iommu/iommu.c
··· 1314 1314 unsigned long orig_iova = iova; 1315 1315 unsigned int min_pagesz; 1316 1316 size_t orig_size = size; 1317 + phys_addr_t orig_paddr = paddr; 1317 1318 int ret = 0; 1318 1319 1319 1320 if (unlikely(domain->ops->map == NULL || ··· 1359 1358 if (ret) 1360 1359 iommu_unmap(domain, orig_iova, orig_size - size); 1361 1360 else 1362 - trace_map(orig_iova, paddr, orig_size); 1361 + trace_map(orig_iova, orig_paddr, orig_size); 1363 1362 1364 1363 return ret; 1365 1364 }
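Context for this one-liner: inside iommu_map(), iova, paddr and size all advance chunk by chunk (loop shape sketched below for illustration), so the old trace_map(orig_iova, paddr, orig_size) logged the physical address one past the end of the mapping. Only the saved orig_* copies are meaningful after the loop:

        while (size) {
                size_t pgsize = iommu_pgsize(domain, iova | paddr, size);

                ret = domain->ops->map(domain, iova, paddr, pgsize, prot);
                if (ret)
                        break;

                iova += pgsize;
                paddr += pgsize;        /* why orig_paddr is snapshotted */
                size -= pgsize;
        }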
+736
drivers/iommu/mtk_iommu.c
··· 1 + /* 2 + * Copyright (c) 2015-2016 MediaTek Inc. 3 + * Author: Yong Wu <yong.wu@mediatek.com> 4 + * 5 + * This program is free software; you can redistribute it and/or modify 6 + * it under the terms of the GNU General Public License version 2 as 7 + * published by the Free Software Foundation. 8 + * 9 + * This program is distributed in the hope that it will be useful, 10 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 + * GNU General Public License for more details. 13 + */ 14 + #include <linux/bug.h> 15 + #include <linux/clk.h> 16 + #include <linux/component.h> 17 + #include <linux/device.h> 18 + #include <linux/dma-iommu.h> 19 + #include <linux/err.h> 20 + #include <linux/interrupt.h> 21 + #include <linux/io.h> 22 + #include <linux/iommu.h> 23 + #include <linux/iopoll.h> 24 + #include <linux/list.h> 25 + #include <linux/of_address.h> 26 + #include <linux/of_iommu.h> 27 + #include <linux/of_irq.h> 28 + #include <linux/of_platform.h> 29 + #include <linux/platform_device.h> 30 + #include <linux/slab.h> 31 + #include <linux/spinlock.h> 32 + #include <asm/barrier.h> 33 + #include <dt-bindings/memory/mt8173-larb-port.h> 34 + #include <soc/mediatek/smi.h> 35 + 36 + #include "io-pgtable.h" 37 + 38 + #define REG_MMU_PT_BASE_ADDR 0x000 39 + 40 + #define REG_MMU_INVALIDATE 0x020 41 + #define F_ALL_INVLD 0x2 42 + #define F_MMU_INV_RANGE 0x1 43 + 44 + #define REG_MMU_INVLD_START_A 0x024 45 + #define REG_MMU_INVLD_END_A 0x028 46 + 47 + #define REG_MMU_INV_SEL 0x038 48 + #define F_INVLD_EN0 BIT(0) 49 + #define F_INVLD_EN1 BIT(1) 50 + 51 + #define REG_MMU_STANDARD_AXI_MODE 0x048 52 + #define REG_MMU_DCM_DIS 0x050 53 + 54 + #define REG_MMU_CTRL_REG 0x110 55 + #define F_MMU_PREFETCH_RT_REPLACE_MOD BIT(4) 56 + #define F_MMU_TF_PROTECT_SEL(prot) (((prot) & 0x3) << 5) 57 + 58 + #define REG_MMU_IVRP_PADDR 0x114 59 + #define F_MMU_IVRP_PA_SET(pa) ((pa) >> 1) 60 + 61 + #define REG_MMU_INT_CONTROL0 0x120 62 + #define F_L2_MULIT_HIT_EN BIT(0) 63 + #define F_TABLE_WALK_FAULT_INT_EN BIT(1) 64 + #define F_PREETCH_FIFO_OVERFLOW_INT_EN BIT(2) 65 + #define F_MISS_FIFO_OVERFLOW_INT_EN BIT(3) 66 + #define F_PREFETCH_FIFO_ERR_INT_EN BIT(5) 67 + #define F_MISS_FIFO_ERR_INT_EN BIT(6) 68 + #define F_INT_CLR_BIT BIT(12) 69 + 70 + #define REG_MMU_INT_MAIN_CONTROL 0x124 71 + #define F_INT_TRANSLATION_FAULT BIT(0) 72 + #define F_INT_MAIN_MULTI_HIT_FAULT BIT(1) 73 + #define F_INT_INVALID_PA_FAULT BIT(2) 74 + #define F_INT_ENTRY_REPLACEMENT_FAULT BIT(3) 75 + #define F_INT_TLB_MISS_FAULT BIT(4) 76 + #define F_INT_MISS_TRANSACTION_FIFO_FAULT BIT(5) 77 + #define F_INT_PRETETCH_TRANSATION_FIFO_FAULT BIT(6) 78 + 79 + #define REG_MMU_CPE_DONE 0x12C 80 + 81 + #define REG_MMU_FAULT_ST1 0x134 82 + 83 + #define REG_MMU_FAULT_VA 0x13c 84 + #define F_MMU_FAULT_VA_MSK 0xfffff000 85 + #define F_MMU_FAULT_VA_WRITE_BIT BIT(1) 86 + #define F_MMU_FAULT_VA_LAYER_BIT BIT(0) 87 + 88 + #define REG_MMU_INVLD_PA 0x140 89 + #define REG_MMU_INT_ID 0x150 90 + #define F_MMU0_INT_ID_LARB_ID(a) (((a) >> 7) & 0x7) 91 + #define F_MMU0_INT_ID_PORT_ID(a) (((a) >> 2) & 0x1f) 92 + 93 + #define MTK_PROTECT_PA_ALIGN 128 94 + 95 + struct mtk_iommu_suspend_reg { 96 + u32 standard_axi_mode; 97 + u32 dcm_dis; 98 + u32 ctrl_reg; 99 + u32 int_control0; 100 + u32 int_main_control; 101 + }; 102 + 103 + struct mtk_iommu_client_priv { 104 + struct list_head client; 105 + unsigned int mtk_m4u_id; 106 + struct device *m4udev; 107 + }; 108 + 109 + struct mtk_iommu_domain { 110 + 
spinlock_t pgtlock; /* lock for page table */ 111 + 112 + struct io_pgtable_cfg cfg; 113 + struct io_pgtable_ops *iop; 114 + 115 + struct iommu_domain domain; 116 + }; 117 + 118 + struct mtk_iommu_data { 119 + void __iomem *base; 120 + int irq; 121 + struct device *dev; 122 + struct clk *bclk; 123 + phys_addr_t protect_base; /* protect memory base */ 124 + struct mtk_iommu_suspend_reg reg; 125 + struct mtk_iommu_domain *m4u_dom; 126 + struct iommu_group *m4u_group; 127 + struct mtk_smi_iommu smi_imu; /* SMI larb iommu info */ 128 + }; 129 + 130 + static struct iommu_ops mtk_iommu_ops; 131 + 132 + static struct mtk_iommu_domain *to_mtk_domain(struct iommu_domain *dom) 133 + { 134 + return container_of(dom, struct mtk_iommu_domain, domain); 135 + } 136 + 137 + static void mtk_iommu_tlb_flush_all(void *cookie) 138 + { 139 + struct mtk_iommu_data *data = cookie; 140 + 141 + writel_relaxed(F_INVLD_EN1 | F_INVLD_EN0, data->base + REG_MMU_INV_SEL); 142 + writel_relaxed(F_ALL_INVLD, data->base + REG_MMU_INVALIDATE); 143 + wmb(); /* Make sure the tlb flush all done */ 144 + } 145 + 146 + static void mtk_iommu_tlb_add_flush_nosync(unsigned long iova, size_t size, 147 + size_t granule, bool leaf, 148 + void *cookie) 149 + { 150 + struct mtk_iommu_data *data = cookie; 151 + 152 + writel_relaxed(F_INVLD_EN1 | F_INVLD_EN0, data->base + REG_MMU_INV_SEL); 153 + 154 + writel_relaxed(iova, data->base + REG_MMU_INVLD_START_A); 155 + writel_relaxed(iova + size - 1, data->base + REG_MMU_INVLD_END_A); 156 + writel_relaxed(F_MMU_INV_RANGE, data->base + REG_MMU_INVALIDATE); 157 + } 158 + 159 + static void mtk_iommu_tlb_sync(void *cookie) 160 + { 161 + struct mtk_iommu_data *data = cookie; 162 + int ret; 163 + u32 tmp; 164 + 165 + ret = readl_poll_timeout_atomic(data->base + REG_MMU_CPE_DONE, tmp, 166 + tmp != 0, 10, 100000); 167 + if (ret) { 168 + dev_warn(data->dev, 169 + "Partial TLB flush timed out, falling back to full flush\n"); 170 + mtk_iommu_tlb_flush_all(cookie); 171 + } 172 + /* Clear the CPE status */ 173 + writel_relaxed(0, data->base + REG_MMU_CPE_DONE); 174 + } 175 + 176 + static const struct iommu_gather_ops mtk_iommu_gather_ops = { 177 + .tlb_flush_all = mtk_iommu_tlb_flush_all, 178 + .tlb_add_flush = mtk_iommu_tlb_add_flush_nosync, 179 + .tlb_sync = mtk_iommu_tlb_sync, 180 + }; 181 + 182 + static irqreturn_t mtk_iommu_isr(int irq, void *dev_id) 183 + { 184 + struct mtk_iommu_data *data = dev_id; 185 + struct mtk_iommu_domain *dom = data->m4u_dom; 186 + u32 int_state, regval, fault_iova, fault_pa; 187 + unsigned int fault_larb, fault_port; 188 + bool layer, write; 189 + 190 + /* Read error info from registers */ 191 + int_state = readl_relaxed(data->base + REG_MMU_FAULT_ST1); 192 + fault_iova = readl_relaxed(data->base + REG_MMU_FAULT_VA); 193 + layer = fault_iova & F_MMU_FAULT_VA_LAYER_BIT; 194 + write = fault_iova & F_MMU_FAULT_VA_WRITE_BIT; 195 + fault_iova &= F_MMU_FAULT_VA_MSK; 196 + fault_pa = readl_relaxed(data->base + REG_MMU_INVLD_PA); 197 + regval = readl_relaxed(data->base + REG_MMU_INT_ID); 198 + fault_larb = F_MMU0_INT_ID_LARB_ID(regval); 199 + fault_port = F_MMU0_INT_ID_PORT_ID(regval); 200 + 201 + if (report_iommu_fault(&dom->domain, data->dev, fault_iova, 202 + write ? IOMMU_FAULT_WRITE : IOMMU_FAULT_READ)) { 203 + dev_err_ratelimited( 204 + data->dev, 205 + "fault type=0x%x iova=0x%x pa=0x%x larb=%d port=%d layer=%d %s\n", 206 + int_state, fault_iova, fault_pa, fault_larb, fault_port, 207 + layer, write ? 
"write" : "read"); 208 + } 209 + 210 + /* Interrupt clear */ 211 + regval = readl_relaxed(data->base + REG_MMU_INT_CONTROL0); 212 + regval |= F_INT_CLR_BIT; 213 + writel_relaxed(regval, data->base + REG_MMU_INT_CONTROL0); 214 + 215 + mtk_iommu_tlb_flush_all(data); 216 + 217 + return IRQ_HANDLED; 218 + } 219 + 220 + static void mtk_iommu_config(struct mtk_iommu_data *data, 221 + struct device *dev, bool enable) 222 + { 223 + struct mtk_iommu_client_priv *head, *cur, *next; 224 + struct mtk_smi_larb_iommu *larb_mmu; 225 + unsigned int larbid, portid; 226 + 227 + head = dev->archdata.iommu; 228 + list_for_each_entry_safe(cur, next, &head->client, client) { 229 + larbid = MTK_M4U_TO_LARB(cur->mtk_m4u_id); 230 + portid = MTK_M4U_TO_PORT(cur->mtk_m4u_id); 231 + larb_mmu = &data->smi_imu.larb_imu[larbid]; 232 + 233 + dev_dbg(dev, "%s iommu port: %d\n", 234 + enable ? "enable" : "disable", portid); 235 + 236 + if (enable) 237 + larb_mmu->mmu |= MTK_SMI_MMU_EN(portid); 238 + else 239 + larb_mmu->mmu &= ~MTK_SMI_MMU_EN(portid); 240 + } 241 + } 242 + 243 + static int mtk_iommu_domain_finalise(struct mtk_iommu_data *data) 244 + { 245 + struct mtk_iommu_domain *dom = data->m4u_dom; 246 + 247 + spin_lock_init(&dom->pgtlock); 248 + 249 + dom->cfg = (struct io_pgtable_cfg) { 250 + .quirks = IO_PGTABLE_QUIRK_ARM_NS | 251 + IO_PGTABLE_QUIRK_NO_PERMS | 252 + IO_PGTABLE_QUIRK_TLBI_ON_MAP, 253 + .pgsize_bitmap = mtk_iommu_ops.pgsize_bitmap, 254 + .ias = 32, 255 + .oas = 32, 256 + .tlb = &mtk_iommu_gather_ops, 257 + .iommu_dev = data->dev, 258 + }; 259 + 260 + dom->iop = alloc_io_pgtable_ops(ARM_V7S, &dom->cfg, data); 261 + if (!dom->iop) { 262 + dev_err(data->dev, "Failed to alloc io pgtable\n"); 263 + return -EINVAL; 264 + } 265 + 266 + /* Update our support page sizes bitmap */ 267 + mtk_iommu_ops.pgsize_bitmap = dom->cfg.pgsize_bitmap; 268 + 269 + writel(data->m4u_dom->cfg.arm_v7s_cfg.ttbr[0], 270 + data->base + REG_MMU_PT_BASE_ADDR); 271 + return 0; 272 + } 273 + 274 + static struct iommu_domain *mtk_iommu_domain_alloc(unsigned type) 275 + { 276 + struct mtk_iommu_domain *dom; 277 + 278 + if (type != IOMMU_DOMAIN_DMA) 279 + return NULL; 280 + 281 + dom = kzalloc(sizeof(*dom), GFP_KERNEL); 282 + if (!dom) 283 + return NULL; 284 + 285 + if (iommu_get_dma_cookie(&dom->domain)) { 286 + kfree(dom); 287 + return NULL; 288 + } 289 + 290 + dom->domain.geometry.aperture_start = 0; 291 + dom->domain.geometry.aperture_end = DMA_BIT_MASK(32); 292 + dom->domain.geometry.force_aperture = true; 293 + 294 + return &dom->domain; 295 + } 296 + 297 + static void mtk_iommu_domain_free(struct iommu_domain *domain) 298 + { 299 + iommu_put_dma_cookie(domain); 300 + kfree(to_mtk_domain(domain)); 301 + } 302 + 303 + static int mtk_iommu_attach_device(struct iommu_domain *domain, 304 + struct device *dev) 305 + { 306 + struct mtk_iommu_domain *dom = to_mtk_domain(domain); 307 + struct mtk_iommu_client_priv *priv = dev->archdata.iommu; 308 + struct mtk_iommu_data *data; 309 + int ret; 310 + 311 + if (!priv) 312 + return -ENODEV; 313 + 314 + data = dev_get_drvdata(priv->m4udev); 315 + if (!data->m4u_dom) { 316 + data->m4u_dom = dom; 317 + ret = mtk_iommu_domain_finalise(data); 318 + if (ret) { 319 + data->m4u_dom = NULL; 320 + return ret; 321 + } 322 + } else if (data->m4u_dom != dom) { 323 + /* All the client devices should be in the same m4u domain */ 324 + dev_err(dev, "try to attach into the error iommu domain\n"); 325 + return -EPERM; 326 + } 327 + 328 + mtk_iommu_config(data, dev, true); 329 + return 0; 330 + } 331 + 332 + 
static void mtk_iommu_detach_device(struct iommu_domain *domain, 333 + struct device *dev) 334 + { 335 + struct mtk_iommu_client_priv *priv = dev->archdata.iommu; 336 + struct mtk_iommu_data *data; 337 + 338 + if (!priv) 339 + return; 340 + 341 + data = dev_get_drvdata(priv->m4udev); 342 + mtk_iommu_config(data, dev, false); 343 + } 344 + 345 + static int mtk_iommu_map(struct iommu_domain *domain, unsigned long iova, 346 + phys_addr_t paddr, size_t size, int prot) 347 + { 348 + struct mtk_iommu_domain *dom = to_mtk_domain(domain); 349 + unsigned long flags; 350 + int ret; 351 + 352 + spin_lock_irqsave(&dom->pgtlock, flags); 353 + ret = dom->iop->map(dom->iop, iova, paddr, size, prot); 354 + spin_unlock_irqrestore(&dom->pgtlock, flags); 355 + 356 + return ret; 357 + } 358 + 359 + static size_t mtk_iommu_unmap(struct iommu_domain *domain, 360 + unsigned long iova, size_t size) 361 + { 362 + struct mtk_iommu_domain *dom = to_mtk_domain(domain); 363 + unsigned long flags; 364 + size_t unmapsz; 365 + 366 + spin_lock_irqsave(&dom->pgtlock, flags); 367 + unmapsz = dom->iop->unmap(dom->iop, iova, size); 368 + spin_unlock_irqrestore(&dom->pgtlock, flags); 369 + 370 + return unmapsz; 371 + } 372 + 373 + static phys_addr_t mtk_iommu_iova_to_phys(struct iommu_domain *domain, 374 + dma_addr_t iova) 375 + { 376 + struct mtk_iommu_domain *dom = to_mtk_domain(domain); 377 + unsigned long flags; 378 + phys_addr_t pa; 379 + 380 + spin_lock_irqsave(&dom->pgtlock, flags); 381 + pa = dom->iop->iova_to_phys(dom->iop, iova); 382 + spin_unlock_irqrestore(&dom->pgtlock, flags); 383 + 384 + return pa; 385 + } 386 + 387 + static int mtk_iommu_add_device(struct device *dev) 388 + { 389 + struct iommu_group *group; 390 + 391 + if (!dev->archdata.iommu) /* Not a iommu client device */ 392 + return -ENODEV; 393 + 394 + group = iommu_group_get_for_dev(dev); 395 + if (IS_ERR(group)) 396 + return PTR_ERR(group); 397 + 398 + iommu_group_put(group); 399 + return 0; 400 + } 401 + 402 + static void mtk_iommu_remove_device(struct device *dev) 403 + { 404 + struct mtk_iommu_client_priv *head, *cur, *next; 405 + 406 + head = dev->archdata.iommu; 407 + if (!head) 408 + return; 409 + 410 + list_for_each_entry_safe(cur, next, &head->client, client) { 411 + list_del(&cur->client); 412 + kfree(cur); 413 + } 414 + kfree(head); 415 + dev->archdata.iommu = NULL; 416 + 417 + iommu_group_remove_device(dev); 418 + } 419 + 420 + static struct iommu_group *mtk_iommu_device_group(struct device *dev) 421 + { 422 + struct mtk_iommu_data *data; 423 + struct mtk_iommu_client_priv *priv; 424 + 425 + priv = dev->archdata.iommu; 426 + if (!priv) 427 + return ERR_PTR(-ENODEV); 428 + 429 + /* All the client devices are in the same m4u iommu-group */ 430 + data = dev_get_drvdata(priv->m4udev); 431 + if (!data->m4u_group) { 432 + data->m4u_group = iommu_group_alloc(); 433 + if (IS_ERR(data->m4u_group)) 434 + dev_err(dev, "Failed to allocate M4U IOMMU group\n"); 435 + } 436 + return data->m4u_group; 437 + } 438 + 439 + static int mtk_iommu_of_xlate(struct device *dev, struct of_phandle_args *args) 440 + { 441 + struct mtk_iommu_client_priv *head, *priv, *next; 442 + struct platform_device *m4updev; 443 + 444 + if (args->args_count != 1) { 445 + dev_err(dev, "invalid #iommu-cells(%d) property for IOMMU\n", 446 + args->args_count); 447 + return -EINVAL; 448 + } 449 + 450 + if (!dev->archdata.iommu) { 451 + /* Get the m4u device */ 452 + m4updev = of_find_device_by_node(args->np); 453 + of_node_put(args->np); 454 + if (WARN_ON(!m4updev)) 455 + return 
-EINVAL; 456 + 457 + head = kzalloc(sizeof(*head), GFP_KERNEL); 458 + if (!head) 459 + return -ENOMEM; 460 + 461 + dev->archdata.iommu = head; 462 + INIT_LIST_HEAD(&head->client); 463 + head->m4udev = &m4updev->dev; 464 + } else { 465 + head = dev->archdata.iommu; 466 + } 467 + 468 + priv = kzalloc(sizeof(*priv), GFP_KERNEL); 469 + if (!priv) 470 + goto err_free_mem; 471 + 472 + priv->mtk_m4u_id = args->args[0]; 473 + list_add_tail(&priv->client, &head->client); 474 + 475 + return 0; 476 + 477 + err_free_mem: 478 + list_for_each_entry_safe(priv, next, &head->client, client) 479 + kfree(priv); 480 + kfree(head); 481 + dev->archdata.iommu = NULL; 482 + return -ENOMEM; 483 + } 484 + 485 + static struct iommu_ops mtk_iommu_ops = { 486 + .domain_alloc = mtk_iommu_domain_alloc, 487 + .domain_free = mtk_iommu_domain_free, 488 + .attach_dev = mtk_iommu_attach_device, 489 + .detach_dev = mtk_iommu_detach_device, 490 + .map = mtk_iommu_map, 491 + .unmap = mtk_iommu_unmap, 492 + .map_sg = default_iommu_map_sg, 493 + .iova_to_phys = mtk_iommu_iova_to_phys, 494 + .add_device = mtk_iommu_add_device, 495 + .remove_device = mtk_iommu_remove_device, 496 + .device_group = mtk_iommu_device_group, 497 + .of_xlate = mtk_iommu_of_xlate, 498 + .pgsize_bitmap = SZ_4K | SZ_64K | SZ_1M | SZ_16M, 499 + }; 500 + 501 + static int mtk_iommu_hw_init(const struct mtk_iommu_data *data) 502 + { 503 + u32 regval; 504 + int ret; 505 + 506 + ret = clk_prepare_enable(data->bclk); 507 + if (ret) { 508 + dev_err(data->dev, "Failed to enable iommu bclk(%d)\n", ret); 509 + return ret; 510 + } 511 + 512 + regval = F_MMU_PREFETCH_RT_REPLACE_MOD | 513 + F_MMU_TF_PROTECT_SEL(2); 514 + writel_relaxed(regval, data->base + REG_MMU_CTRL_REG); 515 + 516 + regval = F_L2_MULIT_HIT_EN | 517 + F_TABLE_WALK_FAULT_INT_EN | 518 + F_PREETCH_FIFO_OVERFLOW_INT_EN | 519 + F_MISS_FIFO_OVERFLOW_INT_EN | 520 + F_PREFETCH_FIFO_ERR_INT_EN | 521 + F_MISS_FIFO_ERR_INT_EN; 522 + writel_relaxed(regval, data->base + REG_MMU_INT_CONTROL0); 523 + 524 + regval = F_INT_TRANSLATION_FAULT | 525 + F_INT_MAIN_MULTI_HIT_FAULT | 526 + F_INT_INVALID_PA_FAULT | 527 + F_INT_ENTRY_REPLACEMENT_FAULT | 528 + F_INT_TLB_MISS_FAULT | 529 + F_INT_MISS_TRANSACTION_FIFO_FAULT | 530 + F_INT_PRETETCH_TRANSATION_FIFO_FAULT; 531 + writel_relaxed(regval, data->base + REG_MMU_INT_MAIN_CONTROL); 532 + 533 + writel_relaxed(F_MMU_IVRP_PA_SET(data->protect_base), 534 + data->base + REG_MMU_IVRP_PADDR); 535 + 536 + writel_relaxed(0, data->base + REG_MMU_DCM_DIS); 537 + writel_relaxed(0, data->base + REG_MMU_STANDARD_AXI_MODE); 538 + 539 + if (devm_request_irq(data->dev, data->irq, mtk_iommu_isr, 0, 540 + dev_name(data->dev), (void *)data)) { 541 + writel_relaxed(0, data->base + REG_MMU_PT_BASE_ADDR); 542 + clk_disable_unprepare(data->bclk); 543 + dev_err(data->dev, "Failed @ IRQ-%d Request\n", data->irq); 544 + return -ENODEV; 545 + } 546 + 547 + return 0; 548 + } 549 + 550 + static int compare_of(struct device *dev, void *data) 551 + { 552 + return dev->of_node == data; 553 + } 554 + 555 + static int mtk_iommu_bind(struct device *dev) 556 + { 557 + struct mtk_iommu_data *data = dev_get_drvdata(dev); 558 + 559 + return component_bind_all(dev, &data->smi_imu); 560 + } 561 + 562 + static void mtk_iommu_unbind(struct device *dev) 563 + { 564 + struct mtk_iommu_data *data = dev_get_drvdata(dev); 565 + 566 + component_unbind_all(dev, &data->smi_imu); 567 + } 568 + 569 + static const struct component_master_ops mtk_iommu_com_ops = { 570 + .bind = mtk_iommu_bind, 571 + .unbind = mtk_iommu_unbind, 
572 + }; 573 + 574 + static int mtk_iommu_probe(struct platform_device *pdev) 575 + { 576 + struct mtk_iommu_data *data; 577 + struct device *dev = &pdev->dev; 578 + struct resource *res; 579 + struct component_match *match = NULL; 580 + void *protect; 581 + int i, larb_nr, ret; 582 + 583 + data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL); 584 + if (!data) 585 + return -ENOMEM; 586 + data->dev = dev; 587 + 588 + /* Protect memory. HW will access here while translation fault.*/ 589 + protect = devm_kzalloc(dev, MTK_PROTECT_PA_ALIGN * 2, GFP_KERNEL); 590 + if (!protect) 591 + return -ENOMEM; 592 + data->protect_base = ALIGN(virt_to_phys(protect), MTK_PROTECT_PA_ALIGN); 593 + 594 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 595 + data->base = devm_ioremap_resource(dev, res); 596 + if (IS_ERR(data->base)) 597 + return PTR_ERR(data->base); 598 + 599 + data->irq = platform_get_irq(pdev, 0); 600 + if (data->irq < 0) 601 + return data->irq; 602 + 603 + data->bclk = devm_clk_get(dev, "bclk"); 604 + if (IS_ERR(data->bclk)) 605 + return PTR_ERR(data->bclk); 606 + 607 + larb_nr = of_count_phandle_with_args(dev->of_node, 608 + "mediatek,larbs", NULL); 609 + if (larb_nr < 0) 610 + return larb_nr; 611 + data->smi_imu.larb_nr = larb_nr; 612 + 613 + for (i = 0; i < larb_nr; i++) { 614 + struct device_node *larbnode; 615 + struct platform_device *plarbdev; 616 + 617 + larbnode = of_parse_phandle(dev->of_node, "mediatek,larbs", i); 618 + if (!larbnode) 619 + return -EINVAL; 620 + 621 + if (!of_device_is_available(larbnode)) 622 + continue; 623 + 624 + plarbdev = of_find_device_by_node(larbnode); 625 + of_node_put(larbnode); 626 + if (!plarbdev) { 627 + plarbdev = of_platform_device_create( 628 + larbnode, NULL, 629 + platform_bus_type.dev_root); 630 + if (!plarbdev) 631 + return -EPROBE_DEFER; 632 + } 633 + data->smi_imu.larb_imu[i].dev = &plarbdev->dev; 634 + 635 + component_match_add(dev, &match, compare_of, larbnode); 636 + } 637 + 638 + platform_set_drvdata(pdev, data); 639 + 640 + ret = mtk_iommu_hw_init(data); 641 + if (ret) 642 + return ret; 643 + 644 + if (!iommu_present(&platform_bus_type)) 645 + bus_set_iommu(&platform_bus_type, &mtk_iommu_ops); 646 + 647 + return component_master_add_with_match(dev, &mtk_iommu_com_ops, match); 648 + } 649 + 650 + static int mtk_iommu_remove(struct platform_device *pdev) 651 + { 652 + struct mtk_iommu_data *data = platform_get_drvdata(pdev); 653 + 654 + if (iommu_present(&platform_bus_type)) 655 + bus_set_iommu(&platform_bus_type, NULL); 656 + 657 + free_io_pgtable_ops(data->m4u_dom->iop); 658 + clk_disable_unprepare(data->bclk); 659 + devm_free_irq(&pdev->dev, data->irq, data); 660 + component_master_del(&pdev->dev, &mtk_iommu_com_ops); 661 + return 0; 662 + } 663 + 664 + static int __maybe_unused mtk_iommu_suspend(struct device *dev) 665 + { 666 + struct mtk_iommu_data *data = dev_get_drvdata(dev); 667 + struct mtk_iommu_suspend_reg *reg = &data->reg; 668 + void __iomem *base = data->base; 669 + 670 + reg->standard_axi_mode = readl_relaxed(base + 671 + REG_MMU_STANDARD_AXI_MODE); 672 + reg->dcm_dis = readl_relaxed(base + REG_MMU_DCM_DIS); 673 + reg->ctrl_reg = readl_relaxed(base + REG_MMU_CTRL_REG); 674 + reg->int_control0 = readl_relaxed(base + REG_MMU_INT_CONTROL0); 675 + reg->int_main_control = readl_relaxed(base + REG_MMU_INT_MAIN_CONTROL); 676 + return 0; 677 + } 678 + 679 + static int __maybe_unused mtk_iommu_resume(struct device *dev) 680 + { 681 + struct mtk_iommu_data *data = dev_get_drvdata(dev); 682 + struct mtk_iommu_suspend_reg *reg = 
&data->reg; 683 + void __iomem *base = data->base; 684 + 685 + writel_relaxed(data->m4u_dom->cfg.arm_v7s_cfg.ttbr[0], 686 + base + REG_MMU_PT_BASE_ADDR); 687 + writel_relaxed(reg->standard_axi_mode, 688 + base + REG_MMU_STANDARD_AXI_MODE); 689 + writel_relaxed(reg->dcm_dis, base + REG_MMU_DCM_DIS); 690 + writel_relaxed(reg->ctrl_reg, base + REG_MMU_CTRL_REG); 691 + writel_relaxed(reg->int_control0, base + REG_MMU_INT_CONTROL0); 692 + writel_relaxed(reg->int_main_control, base + REG_MMU_INT_MAIN_CONTROL); 693 + writel_relaxed(F_MMU_IVRP_PA_SET(data->protect_base), 694 + base + REG_MMU_IVRP_PADDR); 695 + return 0; 696 + } 697 + 698 + const struct dev_pm_ops mtk_iommu_pm_ops = { 699 + SET_SYSTEM_SLEEP_PM_OPS(mtk_iommu_suspend, mtk_iommu_resume) 700 + }; 701 + 702 + static const struct of_device_id mtk_iommu_of_ids[] = { 703 + { .compatible = "mediatek,mt8173-m4u", }, 704 + {} 705 + }; 706 + 707 + static struct platform_driver mtk_iommu_driver = { 708 + .probe = mtk_iommu_probe, 709 + .remove = mtk_iommu_remove, 710 + .driver = { 711 + .name = "mtk-iommu", 712 + .of_match_table = mtk_iommu_of_ids, 713 + .pm = &mtk_iommu_pm_ops, 714 + } 715 + }; 716 + 717 + static int mtk_iommu_init_fn(struct device_node *np) 718 + { 719 + int ret; 720 + struct platform_device *pdev; 721 + 722 + pdev = of_platform_device_create(np, NULL, platform_bus_type.dev_root); 723 + if (!pdev) 724 + return -ENOMEM; 725 + 726 + ret = platform_driver_register(&mtk_iommu_driver); 727 + if (ret) { 728 + pr_err("%s: Failed to register driver\n", __func__); 729 + return ret; 730 + } 731 + 732 + of_iommu_set_ops(np, &mtk_iommu_ops); 733 + return 0; 734 + } 735 + 736 + IOMMU_OF_DECLARE(mtkm4u, "mediatek,mt8173-m4u", mtk_iommu_init_fn);
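
With mtk_iommu_ops installed on the platform bus and of_xlate recording each client's mtk_m4u_id, client drivers never talk to the M4U directly: they allocate through the regular DMA API and receive IOVAs that the M4U translates. Below is a minimal sketch of such a client, assuming its DT node carries an iommus property and that the device's DMA ops have been routed through the IOMMU; mtk_disp_probe and the 4 KiB buffer are hypothetical, only the DMA API calls are real.

#include <linux/dma-mapping.h>
#include <linux/platform_device.h>
#include <linux/sizes.h>

/* Hypothetical client probe: the buffer lands behind the M4U, so the
 * address handed to the hardware is an IOVA, not a physical address. */
static int mtk_disp_probe(struct platform_device *pdev)
{
	dma_addr_t iova;
	void *vaddr;

	vaddr = dma_alloc_coherent(&pdev->dev, SZ_4K, &iova, GFP_KERNEL);
	if (!vaddr)
		return -ENOMEM;

	/* ... program the display engine with 'iova' ... */

	dma_free_coherent(&pdev->dev, SZ_4K, vaddr, iova);
	return 0;
}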
+1
drivers/iommu/of_iommu.c
··· 110 110 if (WARN_ON(!iommu)) 111 111 return; 112 112 113 + of_node_get(np); 113 114 INIT_LIST_HEAD(&iommu->list); 114 115 iommu->np = np; 115 116 iommu->ops = ops;
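
The one-line hunk above takes a reference because the entry caches the device_node pointer in a list that outlives the caller's own reference. A sketch of the general refcounting pattern this follows, with hypothetical names (my_entry, my_cache_node):

#include <linux/of.h>
#include <linux/slab.h>

struct my_entry {
	struct device_node *np;
};

static struct my_entry *my_cache_node(struct device_node *np)
{
	struct my_entry *e = kzalloc(sizeof(*e), GFP_KERNEL);

	if (!e)
		return NULL;
	e->np = of_node_get(np);	/* pin the node for the entry's lifetime */
	return e;
}

static void my_release_entry(struct my_entry *e)
{
	of_node_put(e->np);		/* drop the reference taken above */
	kfree(e);
}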
+137 -81
drivers/iommu/rockchip-iommu.c
··· 86 86 87 87 struct rk_iommu { 88 88 struct device *dev; 89 - void __iomem *base; 89 + void __iomem **bases; 90 + int num_mmu; 90 91 int irq; 91 92 struct list_head node; /* entry in rk_iommu_domain.iommus */ 92 93 struct iommu_domain *domain; /* domain to which iommu is attached */ ··· 272 271 return (u32)(iova & RK_IOVA_PAGE_MASK) >> RK_IOVA_PAGE_SHIFT; 273 272 } 274 273 275 - static u32 rk_iommu_read(struct rk_iommu *iommu, u32 offset) 274 + static u32 rk_iommu_read(void __iomem *base, u32 offset) 276 275 { 277 - return readl(iommu->base + offset); 276 + return readl(base + offset); 278 277 } 279 278 280 - static void rk_iommu_write(struct rk_iommu *iommu, u32 offset, u32 value) 279 + static void rk_iommu_write(void __iomem *base, u32 offset, u32 value) 281 280 { 282 - writel(value, iommu->base + offset); 281 + writel(value, base + offset); 283 282 } 284 283 285 284 static void rk_iommu_command(struct rk_iommu *iommu, u32 command) 286 285 { 287 - writel(command, iommu->base + RK_MMU_COMMAND); 286 + int i; 287 + 288 + for (i = 0; i < iommu->num_mmu; i++) 289 + writel(command, iommu->bases[i] + RK_MMU_COMMAND); 288 290 } 289 291 292 + static void rk_iommu_base_command(void __iomem *base, u32 command) 293 + { 294 + writel(command, base + RK_MMU_COMMAND); 295 + } 290 296 static void rk_iommu_zap_lines(struct rk_iommu *iommu, dma_addr_t iova, 291 297 size_t size) 292 298 { 299 + int i; 300 + 293 301 dma_addr_t iova_end = iova + size; 294 302 /* 295 303 * TODO(djkurtz): Figure out when it is more efficient to shootdown the 296 304 * entire iotlb rather than iterate over individual iovas. 297 305 */ 298 - for (; iova < iova_end; iova += SPAGE_SIZE) 299 - rk_iommu_write(iommu, RK_MMU_ZAP_ONE_LINE, iova); 306 + for (i = 0; i < iommu->num_mmu; i++) 307 + for (; iova < iova_end; iova += SPAGE_SIZE) 308 + rk_iommu_write(iommu->bases[i], RK_MMU_ZAP_ONE_LINE, iova); 300 309 } 301 310 302 311 static bool rk_iommu_is_stall_active(struct rk_iommu *iommu) 303 312 { 304 - return rk_iommu_read(iommu, RK_MMU_STATUS) & RK_MMU_STATUS_STALL_ACTIVE; 313 + bool active = true; 314 + int i; 315 + 316 + for (i = 0; i < iommu->num_mmu; i++) 317 + active &= rk_iommu_read(iommu->bases[i], RK_MMU_STATUS) & 318 + RK_MMU_STATUS_STALL_ACTIVE; 319 + 320 + return active; 305 321 } 306 322 307 323 static bool rk_iommu_is_paging_enabled(struct rk_iommu *iommu) 308 324 { 309 - return rk_iommu_read(iommu, RK_MMU_STATUS) & 310 - RK_MMU_STATUS_PAGING_ENABLED; 325 + bool enable = true; 326 + int i; 327 + 328 + for (i = 0; i < iommu->num_mmu; i++) 329 + enable &= rk_iommu_read(iommu->bases[i], RK_MMU_STATUS) & 330 + RK_MMU_STATUS_PAGING_ENABLED; 331 + 332 + return enable; 311 333 } 312 334 313 335 static int rk_iommu_enable_stall(struct rk_iommu *iommu) 314 336 { 315 - int ret; 337 + int ret, i; 316 338 317 339 if (rk_iommu_is_stall_active(iommu)) 318 340 return 0; ··· 348 324 349 325 ret = rk_wait_for(rk_iommu_is_stall_active(iommu), 1); 350 326 if (ret) 351 - dev_err(iommu->dev, "Enable stall request timed out, status: %#08x\n", 352 - rk_iommu_read(iommu, RK_MMU_STATUS)); 327 + for (i = 0; i < iommu->num_mmu; i++) 328 + dev_err(iommu->dev, "Enable stall request timed out, status: %#08x\n", 329 + rk_iommu_read(iommu->bases[i], RK_MMU_STATUS)); 353 330 354 331 return ret; 355 332 } 356 333 357 334 static int rk_iommu_disable_stall(struct rk_iommu *iommu) 358 335 { 359 - int ret; 336 + int ret, i; 360 337 361 338 if (!rk_iommu_is_stall_active(iommu)) 362 339 return 0; ··· 366 341 367 342 ret = 
rk_wait_for(!rk_iommu_is_stall_active(iommu), 1); 368 343 if (ret) 369 - dev_err(iommu->dev, "Disable stall request timed out, status: %#08x\n", 370 - rk_iommu_read(iommu, RK_MMU_STATUS)); 344 + for (i = 0; i < iommu->num_mmu; i++) 345 + dev_err(iommu->dev, "Disable stall request timed out, status: %#08x\n", 346 + rk_iommu_read(iommu->bases[i], RK_MMU_STATUS)); 371 347 372 348 return ret; 373 349 } 374 350 375 351 static int rk_iommu_enable_paging(struct rk_iommu *iommu) 376 352 { 377 - int ret; 353 + int ret, i; 378 354 379 355 if (rk_iommu_is_paging_enabled(iommu)) 380 356 return 0; ··· 384 358 385 359 ret = rk_wait_for(rk_iommu_is_paging_enabled(iommu), 1); 386 360 if (ret) 387 - dev_err(iommu->dev, "Enable paging request timed out, status: %#08x\n", 388 - rk_iommu_read(iommu, RK_MMU_STATUS)); 361 + for (i = 0; i < iommu->num_mmu; i++) 362 + dev_err(iommu->dev, "Enable paging request timed out, status: %#08x\n", 363 + rk_iommu_read(iommu->bases[i], RK_MMU_STATUS)); 389 364 390 365 return ret; 391 366 } 392 367 393 368 static int rk_iommu_disable_paging(struct rk_iommu *iommu) 394 369 { 395 - int ret; 370 + int ret, i; 396 371 397 372 if (!rk_iommu_is_paging_enabled(iommu)) 398 373 return 0; ··· 402 375 403 376 ret = rk_wait_for(!rk_iommu_is_paging_enabled(iommu), 1); 404 377 if (ret) 405 - dev_err(iommu->dev, "Disable paging request timed out, status: %#08x\n", 406 - rk_iommu_read(iommu, RK_MMU_STATUS)); 378 + for (i = 0; i < iommu->num_mmu; i++) 379 + dev_err(iommu->dev, "Disable paging request timed out, status: %#08x\n", 380 + rk_iommu_read(iommu->bases[i], RK_MMU_STATUS)); 407 381 408 382 return ret; 409 383 } 410 384 411 385 static int rk_iommu_force_reset(struct rk_iommu *iommu) 412 386 { 413 - int ret; 387 + int ret, i; 414 388 u32 dte_addr; 415 389 416 390 /* 417 391 * Check if register DTE_ADDR is working by writing DTE_ADDR_DUMMY 418 392 * and verifying that upper 5 nybbles are read back. 419 393 */ 420 - rk_iommu_write(iommu, RK_MMU_DTE_ADDR, DTE_ADDR_DUMMY); 394 + for (i = 0; i < iommu->num_mmu; i++) { 395 + rk_iommu_write(iommu->bases[i], RK_MMU_DTE_ADDR, DTE_ADDR_DUMMY); 421 396 422 - dte_addr = rk_iommu_read(iommu, RK_MMU_DTE_ADDR); 423 - if (dte_addr != (DTE_ADDR_DUMMY & RK_DTE_PT_ADDRESS_MASK)) { 424 - dev_err(iommu->dev, "Error during raw reset. MMU_DTE_ADDR is not functioning\n"); 425 - return -EFAULT; 397 + dte_addr = rk_iommu_read(iommu->bases[i], RK_MMU_DTE_ADDR); 398 + if (dte_addr != (DTE_ADDR_DUMMY & RK_DTE_PT_ADDRESS_MASK)) { 399 + dev_err(iommu->dev, "Error during raw reset. 
MMU_DTE_ADDR is not functioning\n"); 400 + return -EFAULT; 401 + } 426 402 } 427 403 428 404 rk_iommu_command(iommu, RK_MMU_CMD_FORCE_RESET); 429 405 430 - ret = rk_wait_for(rk_iommu_read(iommu, RK_MMU_DTE_ADDR) == 0x00000000, 431 - FORCE_RESET_TIMEOUT); 432 - if (ret) 433 - dev_err(iommu->dev, "FORCE_RESET command timed out\n"); 406 + for (i = 0; i < iommu->num_mmu; i++) { 407 + ret = rk_wait_for(rk_iommu_read(iommu->bases[i], RK_MMU_DTE_ADDR) == 0x00000000, 408 + FORCE_RESET_TIMEOUT); 409 + if (ret) { 410 + dev_err(iommu->dev, "FORCE_RESET command timed out\n"); 411 + return ret; 412 + } 413 + } 434 414 435 - return ret; 415 + return 0; 436 416 } 437 417 438 - static void log_iova(struct rk_iommu *iommu, dma_addr_t iova) 418 + static void log_iova(struct rk_iommu *iommu, int index, dma_addr_t iova) 439 419 { 420 + void __iomem *base = iommu->bases[index]; 440 421 u32 dte_index, pte_index, page_offset; 441 422 u32 mmu_dte_addr; 442 423 phys_addr_t mmu_dte_addr_phys, dte_addr_phys; ··· 460 425 pte_index = rk_iova_pte_index(iova); 461 426 page_offset = rk_iova_page_offset(iova); 462 427 463 - mmu_dte_addr = rk_iommu_read(iommu, RK_MMU_DTE_ADDR); 428 + mmu_dte_addr = rk_iommu_read(base, RK_MMU_DTE_ADDR); 464 429 mmu_dte_addr_phys = (phys_addr_t)mmu_dte_addr; 465 430 466 431 dte_addr_phys = mmu_dte_addr_phys + (4 * dte_index); ··· 495 460 u32 status; 496 461 u32 int_status; 497 462 dma_addr_t iova; 463 + irqreturn_t ret = IRQ_NONE; 464 + int i; 498 465 499 - int_status = rk_iommu_read(iommu, RK_MMU_INT_STATUS); 500 - if (int_status == 0) 501 - return IRQ_NONE; 466 + for (i = 0; i < iommu->num_mmu; i++) { 467 + int_status = rk_iommu_read(iommu->bases[i], RK_MMU_INT_STATUS); 468 + if (int_status == 0) 469 + continue; 502 470 503 - iova = rk_iommu_read(iommu, RK_MMU_PAGE_FAULT_ADDR); 471 + ret = IRQ_HANDLED; 472 + iova = rk_iommu_read(iommu->bases[i], RK_MMU_PAGE_FAULT_ADDR); 504 473 505 - if (int_status & RK_MMU_IRQ_PAGE_FAULT) { 506 - int flags; 474 + if (int_status & RK_MMU_IRQ_PAGE_FAULT) { 475 + int flags; 507 476 508 - status = rk_iommu_read(iommu, RK_MMU_STATUS); 509 - flags = (status & RK_MMU_STATUS_PAGE_FAULT_IS_WRITE) ? 510 - IOMMU_FAULT_WRITE : IOMMU_FAULT_READ; 477 + status = rk_iommu_read(iommu->bases[i], RK_MMU_STATUS); 478 + flags = (status & RK_MMU_STATUS_PAGE_FAULT_IS_WRITE) ? 479 + IOMMU_FAULT_WRITE : IOMMU_FAULT_READ; 511 480 512 - dev_err(iommu->dev, "Page fault at %pad of type %s\n", 513 - &iova, 514 - (flags == IOMMU_FAULT_WRITE) ? "write" : "read"); 481 + dev_err(iommu->dev, "Page fault at %pad of type %s\n", 482 + &iova, 483 + (flags == IOMMU_FAULT_WRITE) ? "write" : "read"); 515 484 516 - log_iova(iommu, iova); 485 + log_iova(iommu, i, iova); 517 486 518 - /* 519 - * Report page fault to any installed handlers. 520 - * Ignore the return code, though, since we always zap cache 521 - * and clear the page fault anyway. 522 - */ 523 - if (iommu->domain) 524 - report_iommu_fault(iommu->domain, iommu->dev, iova, 525 - flags); 526 - else 527 - dev_err(iommu->dev, "Page fault while iommu not attached to domain?\n"); 487 + /* 488 + * Report page fault to any installed handlers. 489 + * Ignore the return code, though, since we always zap cache 490 + * and clear the page fault anyway. 
491 + */ 492 + if (iommu->domain) 493 + report_iommu_fault(iommu->domain, iommu->dev, iova, 494 + flags); 495 + else 496 + dev_err(iommu->dev, "Page fault while iommu not attached to domain?\n"); 528 497 529 - rk_iommu_command(iommu, RK_MMU_CMD_ZAP_CACHE); 530 - rk_iommu_command(iommu, RK_MMU_CMD_PAGE_FAULT_DONE); 498 + rk_iommu_base_command(iommu->bases[i], RK_MMU_CMD_ZAP_CACHE); 499 + rk_iommu_base_command(iommu->bases[i], RK_MMU_CMD_PAGE_FAULT_DONE); 500 + } 501 + 502 + if (int_status & RK_MMU_IRQ_BUS_ERROR) 503 + dev_err(iommu->dev, "BUS_ERROR occurred at %pad\n", &iova); 504 + 505 + if (int_status & ~RK_MMU_IRQ_MASK) 506 + dev_err(iommu->dev, "unexpected int_status: %#08x\n", 507 + int_status); 508 + 509 + rk_iommu_write(iommu->bases[i], RK_MMU_INT_CLEAR, int_status); 531 510 } 532 511 533 - if (int_status & RK_MMU_IRQ_BUS_ERROR) 534 - dev_err(iommu->dev, "BUS_ERROR occurred at %pad\n", &iova); 535 - 536 - if (int_status & ~RK_MMU_IRQ_MASK) 537 - dev_err(iommu->dev, "unexpected int_status: %#08x\n", 538 - int_status); 539 - 540 - rk_iommu_write(iommu, RK_MMU_INT_CLEAR, int_status); 541 - 542 - return IRQ_HANDLED; 512 + return ret; 543 513 } 544 514 545 515 static phys_addr_t rk_iommu_iova_to_phys(struct iommu_domain *domain, ··· 786 746 struct rk_iommu *iommu; 787 747 struct rk_iommu_domain *rk_domain = to_rk_domain(domain); 788 748 unsigned long flags; 789 - int ret; 749 + int ret, i; 790 750 phys_addr_t dte_addr; 791 751 792 752 /* ··· 813 773 return ret; 814 774 815 775 dte_addr = virt_to_phys(rk_domain->dt); 816 - rk_iommu_write(iommu, RK_MMU_DTE_ADDR, dte_addr); 817 - rk_iommu_command(iommu, RK_MMU_CMD_ZAP_CACHE); 818 - rk_iommu_write(iommu, RK_MMU_INT_MASK, RK_MMU_IRQ_MASK); 776 + for (i = 0; i < iommu->num_mmu; i++) { 777 + rk_iommu_write(iommu->bases[i], RK_MMU_DTE_ADDR, dte_addr); 778 + rk_iommu_command(iommu->bases[i], RK_MMU_CMD_ZAP_CACHE); 779 + rk_iommu_write(iommu->bases[i], RK_MMU_INT_MASK, RK_MMU_IRQ_MASK); 780 + } 819 781 820 782 ret = rk_iommu_enable_paging(iommu); 821 783 if (ret) ··· 840 798 struct rk_iommu *iommu; 841 799 struct rk_iommu_domain *rk_domain = to_rk_domain(domain); 842 800 unsigned long flags; 801 + int i; 843 802 844 803 /* Allow 'virtual devices' (eg drm) to detach from domain */ 845 804 iommu = rk_iommu_from_dev(dev); ··· 854 811 /* Ignore error while disabling, just keep going */ 855 812 rk_iommu_enable_stall(iommu); 856 813 rk_iommu_disable_paging(iommu); 857 - rk_iommu_write(iommu, RK_MMU_INT_MASK, 0); 858 - rk_iommu_write(iommu, RK_MMU_DTE_ADDR, 0); 814 + for (i = 0; i < iommu->num_mmu; i++) { 815 + rk_iommu_write(iommu->bases[i], RK_MMU_INT_MASK, 0); 816 + rk_iommu_write(iommu->bases[i], RK_MMU_DTE_ADDR, 0); 817 + } 859 818 rk_iommu_disable_stall(iommu); 860 819 861 820 devm_free_irq(dev, iommu->irq, iommu); ··· 1033 988 struct device *dev = &pdev->dev; 1034 989 struct rk_iommu *iommu; 1035 990 struct resource *res; 991 + int i; 1036 992 1037 993 iommu = devm_kzalloc(dev, sizeof(*iommu), GFP_KERNEL); 1038 994 if (!iommu) ··· 1041 995 1042 996 platform_set_drvdata(pdev, iommu); 1043 997 iommu->dev = dev; 998 + iommu->num_mmu = 0; 999 + iommu->bases = devm_kzalloc(dev, sizeof(*iommu->bases) * iommu->num_mmu, 1000 + GFP_KERNEL); 1001 + if (!iommu->bases) 1002 + return -ENOMEM; 1044 1003 1045 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1046 - iommu->base = devm_ioremap_resource(&pdev->dev, res); 1047 - if (IS_ERR(iommu->base)) 1048 - return PTR_ERR(iommu->base); 1004 + for (i = 0; i < pdev->num_resources; i++) { 1005 + res = 
platform_get_resource(pdev, IORESOURCE_MEM, i); 1006 + iommu->bases[i] = devm_ioremap_resource(&pdev->dev, res); 1007 + if (IS_ERR(iommu->bases[i])) 1008 + continue; 1009 + iommu->num_mmu++; 1010 + } 1011 + if (iommu->num_mmu == 0) 1012 + return PTR_ERR(iommu->bases[0]); 1049 1013 1050 1014 iommu->irq = platform_get_irq(pdev, 0); 1051 1015 if (iommu->irq < 0) {
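
One subtlety in the probe hunk above: iommu->bases is allocated while iommu->num_mmu is still zero, so devm_kzalloc is asked for zero bytes before the loop stores into iommu->bases[i], and the num_mmu == 0 path returns PTR_ERR() of whatever sits in slot 0. A sketch of a safer shape, sizing the array by pdev->num_resources; rk_iommu_map_bases is a hypothetical helper, not the driver's code:

#include <linux/err.h>
#include <linux/io.h>
#include <linux/platform_device.h>

/* Hypothetical helper: map every MEM resource and compact the usable
 * bases, sizing the array by the resource count rather than by
 * num_mmu (which is still zero at allocation time). */
static int rk_iommu_map_bases(struct platform_device *pdev,
			      struct rk_iommu *iommu)
{
	struct resource *res;
	void __iomem *base;
	int i;

	iommu->bases = devm_kzalloc(&pdev->dev,
				    sizeof(*iommu->bases) * pdev->num_resources,
				    GFP_KERNEL);
	if (!iommu->bases)
		return -ENOMEM;

	for (i = 0; i < pdev->num_resources; i++) {
		res = platform_get_resource(pdev, IORESOURCE_MEM, i);
		base = devm_ioremap_resource(&pdev->dev, res);
		if (IS_ERR(base))
			continue;
		iommu->bases[iommu->num_mmu++] = base;
	}

	return iommu->num_mmu ? 0 : -ENXIO;
}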
+8
drivers/memory/Kconfig
··· 114 114 the Ingenic JZ4780. This controller is used to handle external 115 115 memory devices such as NAND and SRAM. 116 116 117 + config MTK_SMI 118 + bool 119 + depends on ARCH_MEDIATEK || COMPILE_TEST 120 + help 121 + This driver is for the Memory Controller module in MediaTek SoCs. 122 + It mainly helps to enable/disable the iommu and to control the 123 + power domain and clocks for each local arbiter. 124 + 117 125 source "drivers/memory/tegra/Kconfig" 118 126 119 127 endif
+1
drivers/memory/Makefile
··· 15 15 obj-$(CONFIG_MVEBU_DEVBUS) += mvebu-devbus.o 16 16 obj-$(CONFIG_TEGRA20_MC) += tegra20-mc.o 17 17 obj-$(CONFIG_JZ4780_NEMC) += jz4780-nemc.o 18 + obj-$(CONFIG_MTK_SMI) += mtk-smi.o 18 19 19 20 obj-$(CONFIG_TEGRA_MC) += tegra/
+273
drivers/memory/mtk-smi.c
··· 1 + /* 2 + * Copyright (c) 2015-2016 MediaTek Inc. 3 + * Author: Yong Wu <yong.wu@mediatek.com> 4 + * 5 + * This program is free software; you can redistribute it and/or modify 6 + * it under the terms of the GNU General Public License version 2 as 7 + * published by the Free Software Foundation. 8 + * 9 + * This program is distributed in the hope that it will be useful, 10 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 + * GNU General Public License for more details. 13 + */ 14 + #include <linux/clk.h> 15 + #include <linux/component.h> 16 + #include <linux/device.h> 17 + #include <linux/err.h> 18 + #include <linux/io.h> 19 + #include <linux/of.h> 20 + #include <linux/of_platform.h> 21 + #include <linux/platform_device.h> 22 + #include <linux/pm_runtime.h> 23 + #include <soc/mediatek/smi.h> 24 + 25 + #define SMI_LARB_MMU_EN 0xf00 26 + 27 + struct mtk_smi { 28 + struct device *dev; 29 + struct clk *clk_apb, *clk_smi; 30 + }; 31 + 32 + struct mtk_smi_larb { /* larb: local arbiter */ 33 + struct mtk_smi smi; 34 + void __iomem *base; 35 + struct device *smi_common_dev; 36 + u32 *mmu; 37 + }; 38 + 39 + static int mtk_smi_enable(const struct mtk_smi *smi) 40 + { 41 + int ret; 42 + 43 + ret = pm_runtime_get_sync(smi->dev); 44 + if (ret < 0) 45 + return ret; 46 + 47 + ret = clk_prepare_enable(smi->clk_apb); 48 + if (ret) 49 + goto err_put_pm; 50 + 51 + ret = clk_prepare_enable(smi->clk_smi); 52 + if (ret) 53 + goto err_disable_apb; 54 + 55 + return 0; 56 + 57 + err_disable_apb: 58 + clk_disable_unprepare(smi->clk_apb); 59 + err_put_pm: 60 + pm_runtime_put_sync(smi->dev); 61 + return ret; 62 + } 63 + 64 + static void mtk_smi_disable(const struct mtk_smi *smi) 65 + { 66 + clk_disable_unprepare(smi->clk_smi); 67 + clk_disable_unprepare(smi->clk_apb); 68 + pm_runtime_put_sync(smi->dev); 69 + } 70 + 71 + int mtk_smi_larb_get(struct device *larbdev) 72 + { 73 + struct mtk_smi_larb *larb = dev_get_drvdata(larbdev); 74 + struct mtk_smi *common = dev_get_drvdata(larb->smi_common_dev); 75 + int ret; 76 + 77 + /* Enable the smi-common's power and clocks */ 78 + ret = mtk_smi_enable(common); 79 + if (ret) 80 + return ret; 81 + 82 + /* Enable the larb's power and clocks */ 83 + ret = mtk_smi_enable(&larb->smi); 84 + if (ret) { 85 + mtk_smi_disable(common); 86 + return ret; 87 + } 88 + 89 + /* Configure the iommu info for this larb */ 90 + writel(*larb->mmu, larb->base + SMI_LARB_MMU_EN); 91 + 92 + return 0; 93 + } 94 + 95 + void mtk_smi_larb_put(struct device *larbdev) 96 + { 97 + struct mtk_smi_larb *larb = dev_get_drvdata(larbdev); 98 + struct mtk_smi *common = dev_get_drvdata(larb->smi_common_dev); 99 + 100 + /* 101 + * Don't de-configure the iommu info for this larb since there may be 102 + * several modules in this larb. 103 + * The iommu info will be reset after power off. 104 + */ 105 + 106 + mtk_smi_disable(&larb->smi); 107 + mtk_smi_disable(common); 108 + } 109 + 110 + static int 111 + mtk_smi_larb_bind(struct device *dev, struct device *master, void *data) 112 + { 113 + struct mtk_smi_larb *larb = dev_get_drvdata(dev); 114 + struct mtk_smi_iommu *smi_iommu = data; 115 + unsigned int i; 116 + 117 + for (i = 0; i < smi_iommu->larb_nr; i++) { 118 + if (dev == smi_iommu->larb_imu[i].dev) { 119 + /* The 'mmu' may be updated in iommu-attach/detach. 
*/ 120 + larb->mmu = &smi_iommu->larb_imu[i].mmu; 121 + return 0; 122 + } 123 + } 124 + return -ENODEV; 125 + } 126 + 127 + static void 128 + mtk_smi_larb_unbind(struct device *dev, struct device *master, void *data) 129 + { 130 + /* Do nothing as the iommu is always enabled. */ 131 + } 132 + 133 + static const struct component_ops mtk_smi_larb_component_ops = { 134 + .bind = mtk_smi_larb_bind, 135 + .unbind = mtk_smi_larb_unbind, 136 + }; 137 + 138 + static int mtk_smi_larb_probe(struct platform_device *pdev) 139 + { 140 + struct mtk_smi_larb *larb; 141 + struct resource *res; 142 + struct device *dev = &pdev->dev; 143 + struct device_node *smi_node; 144 + struct platform_device *smi_pdev; 145 + 146 + if (!dev->pm_domain) 147 + return -EPROBE_DEFER; 148 + 149 + larb = devm_kzalloc(dev, sizeof(*larb), GFP_KERNEL); 150 + if (!larb) 151 + return -ENOMEM; 152 + 153 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 154 + larb->base = devm_ioremap_resource(dev, res); 155 + if (IS_ERR(larb->base)) 156 + return PTR_ERR(larb->base); 157 + 158 + larb->smi.clk_apb = devm_clk_get(dev, "apb"); 159 + if (IS_ERR(larb->smi.clk_apb)) 160 + return PTR_ERR(larb->smi.clk_apb); 161 + 162 + larb->smi.clk_smi = devm_clk_get(dev, "smi"); 163 + if (IS_ERR(larb->smi.clk_smi)) 164 + return PTR_ERR(larb->smi.clk_smi); 165 + larb->smi.dev = dev; 166 + 167 + smi_node = of_parse_phandle(dev->of_node, "mediatek,smi", 0); 168 + if (!smi_node) 169 + return -EINVAL; 170 + 171 + smi_pdev = of_find_device_by_node(smi_node); 172 + of_node_put(smi_node); 173 + if (smi_pdev) { 174 + larb->smi_common_dev = &smi_pdev->dev; 175 + } else { 176 + dev_err(dev, "Failed to get the smi_common device\n"); 177 + return -EINVAL; 178 + } 179 + 180 + pm_runtime_enable(dev); 181 + platform_set_drvdata(pdev, larb); 182 + return component_add(dev, &mtk_smi_larb_component_ops); 183 + } 184 + 185 + static int mtk_smi_larb_remove(struct platform_device *pdev) 186 + { 187 + pm_runtime_disable(&pdev->dev); 188 + component_del(&pdev->dev, &mtk_smi_larb_component_ops); 189 + return 0; 190 + } 191 + 192 + static const struct of_device_id mtk_smi_larb_of_ids[] = { 193 + { .compatible = "mediatek,mt8173-smi-larb",}, 194 + {} 195 + }; 196 + 197 + static struct platform_driver mtk_smi_larb_driver = { 198 + .probe = mtk_smi_larb_probe, 199 + .remove = mtk_smi_larb_remove, 200 + .driver = { 201 + .name = "mtk-smi-larb", 202 + .of_match_table = mtk_smi_larb_of_ids, 203 + } 204 + }; 205 + 206 + static int mtk_smi_common_probe(struct platform_device *pdev) 207 + { 208 + struct device *dev = &pdev->dev; 209 + struct mtk_smi *common; 210 + 211 + if (!dev->pm_domain) 212 + return -EPROBE_DEFER; 213 + 214 + common = devm_kzalloc(dev, sizeof(*common), GFP_KERNEL); 215 + if (!common) 216 + return -ENOMEM; 217 + common->dev = dev; 218 + 219 + common->clk_apb = devm_clk_get(dev, "apb"); 220 + if (IS_ERR(common->clk_apb)) 221 + return PTR_ERR(common->clk_apb); 222 + 223 + common->clk_smi = devm_clk_get(dev, "smi"); 224 + if (IS_ERR(common->clk_smi)) 225 + return PTR_ERR(common->clk_smi); 226 + 227 + pm_runtime_enable(dev); 228 + platform_set_drvdata(pdev, common); 229 + return 0; 230 + } 231 + 232 + static int mtk_smi_common_remove(struct platform_device *pdev) 233 + { 234 + pm_runtime_disable(&pdev->dev); 235 + return 0; 236 + } 237 + 238 + static const struct of_device_id mtk_smi_common_of_ids[] = { 239 + { .compatible = "mediatek,mt8173-smi-common", }, 240 + {} 241 + }; 242 + 243 + static struct platform_driver mtk_smi_common_driver = { 244 + .probe = 
mtk_smi_common_probe, 245 + .remove = mtk_smi_common_remove, 246 + .driver = { 247 + .name = "mtk-smi-common", 248 + .of_match_table = mtk_smi_common_of_ids, 249 + } 250 + }; 251 + 252 + static int __init mtk_smi_init(void) 253 + { 254 + int ret; 255 + 256 + ret = platform_driver_register(&mtk_smi_common_driver); 257 + if (ret != 0) { 258 + pr_err("Failed to register SMI driver\n"); 259 + return ret; 260 + } 261 + 262 + ret = platform_driver_register(&mtk_smi_larb_driver); 263 + if (ret != 0) { 264 + pr_err("Failed to register SMI-LARB driver\n"); 265 + goto err_unreg_smi; 266 + } 267 + return ret; 268 + 269 + err_unreg_smi: 270 + platform_driver_unregister(&mtk_smi_common_driver); 271 + return ret; 272 + } 273 + subsys_initcall(mtk_smi_init);
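
The get/put pair exported above is meant to bracket any hardware activity behind a larb: smi-common is powered first, then the larb, and the MMU_EN mask is rewritten on every get because it is lost on power-off. A sketch of a consumer under assumed names (larbdev would typically be looked up from a phandle; the start/stop helpers are stubs):

#include <soc/mediatek/smi.h>

static void mtk_disp_start_hw(void) { /* program the engine (stub) */ }
static void mtk_disp_stop_hw(void)  { /* quiesce the engine (stub) */ }

static int mtk_disp_enable(struct device *larbdev)
{
	int ret;

	/* Powers up smi-common, then the larb, then writes SMI_LARB_MMU_EN. */
	ret = mtk_smi_larb_get(larbdev);
	if (ret)
		return ret;

	mtk_disp_start_hw();
	return 0;
}

static void mtk_disp_disable(struct device *larbdev)
{
	mtk_disp_stop_hw();
	/* Reverse order: larb first, then smi-common. */
	mtk_smi_larb_put(larbdev);
}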
+111
include/dt-bindings/memory/mt8173-larb-port.h
··· 1 + /* 2 + * Copyright (c) 2015-2016 MediaTek Inc. 3 + * Author: Yong Wu <yong.wu@mediatek.com> 4 + * 5 + * This program is free software; you can redistribute it and/or modify 6 + * it under the terms of the GNU General Public License version 2 as 7 + * published by the Free Software Foundation. 8 + * 9 + * This program is distributed in the hope that it will be useful, 10 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 + * GNU General Public License for more details. 13 + */ 14 + #ifndef __DTS_IOMMU_PORT_MT8173_H 15 + #define __DTS_IOMMU_PORT_MT8173_H 16 + 17 + #define MTK_M4U_ID(larb, port) (((larb) << 5) | (port)) 18 + /* Local arbiter ID */ 19 + #define MTK_M4U_TO_LARB(id) (((id) >> 5) & 0x7) 20 + /* PortID within the local arbiter */ 21 + #define MTK_M4U_TO_PORT(id) ((id) & 0x1f) 22 + 23 + #define M4U_LARB0_ID 0 24 + #define M4U_LARB1_ID 1 25 + #define M4U_LARB2_ID 2 26 + #define M4U_LARB3_ID 3 27 + #define M4U_LARB4_ID 4 28 + #define M4U_LARB5_ID 5 29 + 30 + /* larb0 */ 31 + #define M4U_PORT_DISP_OVL0 MTK_M4U_ID(M4U_LARB0_ID, 0) 32 + #define M4U_PORT_DISP_RDMA0 MTK_M4U_ID(M4U_LARB0_ID, 1) 33 + #define M4U_PORT_DISP_WDMA0 MTK_M4U_ID(M4U_LARB0_ID, 2) 34 + #define M4U_PORT_DISP_OD_R MTK_M4U_ID(M4U_LARB0_ID, 3) 35 + #define M4U_PORT_DISP_OD_W MTK_M4U_ID(M4U_LARB0_ID, 4) 36 + #define M4U_PORT_MDP_RDMA0 MTK_M4U_ID(M4U_LARB0_ID, 5) 37 + #define M4U_PORT_MDP_WDMA MTK_M4U_ID(M4U_LARB0_ID, 6) 38 + #define M4U_PORT_MDP_WROT0 MTK_M4U_ID(M4U_LARB0_ID, 7) 39 + 40 + /* larb1 */ 41 + #define M4U_PORT_HW_VDEC_MC_EXT MTK_M4U_ID(M4U_LARB1_ID, 0) 42 + #define M4U_PORT_HW_VDEC_PP_EXT MTK_M4U_ID(M4U_LARB1_ID, 1) 43 + #define M4U_PORT_HW_VDEC_UFO_EXT MTK_M4U_ID(M4U_LARB1_ID, 2) 44 + #define M4U_PORT_HW_VDEC_VLD_EXT MTK_M4U_ID(M4U_LARB1_ID, 3) 45 + #define M4U_PORT_HW_VDEC_VLD2_EXT MTK_M4U_ID(M4U_LARB1_ID, 4) 46 + #define M4U_PORT_HW_VDEC_AVC_MV_EXT MTK_M4U_ID(M4U_LARB1_ID, 5) 47 + #define M4U_PORT_HW_VDEC_PRED_RD_EXT MTK_M4U_ID(M4U_LARB1_ID, 6) 48 + #define M4U_PORT_HW_VDEC_PRED_WR_EXT MTK_M4U_ID(M4U_LARB1_ID, 7) 49 + #define M4U_PORT_HW_VDEC_PPWRAP_EXT MTK_M4U_ID(M4U_LARB1_ID, 8) 50 + #define M4U_PORT_HW_VDEC_TILE MTK_M4U_ID(M4U_LARB1_ID, 9) 51 + 52 + /* larb2 */ 53 + #define M4U_PORT_IMGO MTK_M4U_ID(M4U_LARB2_ID, 0) 54 + #define M4U_PORT_RRZO MTK_M4U_ID(M4U_LARB2_ID, 1) 55 + #define M4U_PORT_AAO MTK_M4U_ID(M4U_LARB2_ID, 2) 56 + #define M4U_PORT_LCSO MTK_M4U_ID(M4U_LARB2_ID, 3) 57 + #define M4U_PORT_ESFKO MTK_M4U_ID(M4U_LARB2_ID, 4) 58 + #define M4U_PORT_IMGO_D MTK_M4U_ID(M4U_LARB2_ID, 5) 59 + #define M4U_PORT_LSCI MTK_M4U_ID(M4U_LARB2_ID, 6) 60 + #define M4U_PORT_LSCI_D MTK_M4U_ID(M4U_LARB2_ID, 7) 61 + #define M4U_PORT_BPCI MTK_M4U_ID(M4U_LARB2_ID, 8) 62 + #define M4U_PORT_BPCI_D MTK_M4U_ID(M4U_LARB2_ID, 9) 63 + #define M4U_PORT_UFDI MTK_M4U_ID(M4U_LARB2_ID, 10) 64 + #define M4U_PORT_IMGI MTK_M4U_ID(M4U_LARB2_ID, 11) 65 + #define M4U_PORT_IMG2O MTK_M4U_ID(M4U_LARB2_ID, 12) 66 + #define M4U_PORT_IMG3O MTK_M4U_ID(M4U_LARB2_ID, 13) 67 + #define M4U_PORT_VIPI MTK_M4U_ID(M4U_LARB2_ID, 14) 68 + #define M4U_PORT_VIP2I MTK_M4U_ID(M4U_LARB2_ID, 15) 69 + #define M4U_PORT_VIP3I MTK_M4U_ID(M4U_LARB2_ID, 16) 70 + #define M4U_PORT_LCEI MTK_M4U_ID(M4U_LARB2_ID, 17) 71 + #define M4U_PORT_RB MTK_M4U_ID(M4U_LARB2_ID, 18) 72 + #define M4U_PORT_RP MTK_M4U_ID(M4U_LARB2_ID, 19) 73 + #define M4U_PORT_WR MTK_M4U_ID(M4U_LARB2_ID, 20) 74 + 75 + /* larb3 */ 76 + #define M4U_PORT_VENC_RCPU MTK_M4U_ID(M4U_LARB3_ID, 0) 77 + #define 
M4U_PORT_VENC_REC MTK_M4U_ID(M4U_LARB3_ID, 1) 78 + #define M4U_PORT_VENC_BSDMA MTK_M4U_ID(M4U_LARB3_ID, 2) 79 + #define M4U_PORT_VENC_SV_COMV MTK_M4U_ID(M4U_LARB3_ID, 3) 80 + #define M4U_PORT_VENC_RD_COMV MTK_M4U_ID(M4U_LARB3_ID, 4) 81 + #define M4U_PORT_JPGENC_RDMA MTK_M4U_ID(M4U_LARB3_ID, 5) 82 + #define M4U_PORT_JPGENC_BSDMA MTK_M4U_ID(M4U_LARB3_ID, 6) 83 + #define M4U_PORT_JPGDEC_WDMA MTK_M4U_ID(M4U_LARB3_ID, 7) 84 + #define M4U_PORT_JPGDEC_BSDMA MTK_M4U_ID(M4U_LARB3_ID, 8) 85 + #define M4U_PORT_VENC_CUR_LUMA MTK_M4U_ID(M4U_LARB3_ID, 9) 86 + #define M4U_PORT_VENC_CUR_CHROMA MTK_M4U_ID(M4U_LARB3_ID, 10) 87 + #define M4U_PORT_VENC_REF_LUMA MTK_M4U_ID(M4U_LARB3_ID, 11) 88 + #define M4U_PORT_VENC_REF_CHROMA MTK_M4U_ID(M4U_LARB3_ID, 12) 89 + #define M4U_PORT_VENC_NBM_RDMA MTK_M4U_ID(M4U_LARB3_ID, 13) 90 + #define M4U_PORT_VENC_NBM_WDMA MTK_M4U_ID(M4U_LARB3_ID, 14) 91 + 92 + /* larb4 */ 93 + #define M4U_PORT_DISP_OVL1 MTK_M4U_ID(M4U_LARB4_ID, 0) 94 + #define M4U_PORT_DISP_RDMA1 MTK_M4U_ID(M4U_LARB4_ID, 1) 95 + #define M4U_PORT_DISP_RDMA2 MTK_M4U_ID(M4U_LARB4_ID, 2) 96 + #define M4U_PORT_DISP_WDMA1 MTK_M4U_ID(M4U_LARB4_ID, 3) 97 + #define M4U_PORT_MDP_RDMA1 MTK_M4U_ID(M4U_LARB4_ID, 4) 98 + #define M4U_PORT_MDP_WROT1 MTK_M4U_ID(M4U_LARB4_ID, 5) 99 + 100 + /* larb5 */ 101 + #define M4U_PORT_VENC_RCPU_SET2 MTK_M4U_ID(M4U_LARB5_ID, 0) 102 + #define M4U_PORT_VENC_REC_FRM_SET2 MTK_M4U_ID(M4U_LARB5_ID, 1) 103 + #define M4U_PORT_VENC_REF_LUMA_SET2 MTK_M4U_ID(M4U_LARB5_ID, 2) 104 + #define M4U_PORT_VENC_REC_CHROMA_SET2 MTK_M4U_ID(M4U_LARB5_ID, 3) 105 + #define M4U_PORT_VENC_BSDMA_SET2 MTK_M4U_ID(M4U_LARB5_ID, 4) 106 + #define M4U_PORT_VENC_CUR_LUMA_SET2 MTK_M4U_ID(M4U_LARB5_ID, 5) 107 + #define M4U_PORT_VENC_CUR_CHROMA_SET2 MTK_M4U_ID(M4U_LARB5_ID, 6) 108 + #define M4U_PORT_VENC_RD_COMA_SET2 MTK_M4U_ID(M4U_LARB5_ID, 7) 109 + #define M4U_PORT_VENC_SV_COMA_SET2 MTK_M4U_ID(M4U_LARB5_ID, 8) 110 + 111 + #endif
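
A worked example of the packing above: the larb index lives in bits [7:5] and the port index in bits [4:0], so larb1/port1 (M4U_PORT_HW_VDEC_PP_EXT) encodes to (1 << 5) | 1 = 33 and decodes back to larb 1, port 1. A small compile-checkable sketch; mt8173_id_example is hypothetical:

#include <dt-bindings/memory/mt8173-larb-port.h>
#include <linux/types.h>

static bool mt8173_id_example(void)
{
	u32 id = MTK_M4U_ID(M4U_LARB1_ID, 1);	/* (1 << 5) | 1 == 33 */

	return id == M4U_PORT_HW_VDEC_PP_EXT &&
	       MTK_M4U_TO_LARB(id) == 1 &&	/* (33 >> 5) & 0x7 */
	       MTK_M4U_TO_PORT(id) == 1;	/* 33 & 0x1f */
}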
+58
include/soc/mediatek/smi.h
··· 1 + /* 2 + * Copyright (c) 2015-2016 MediaTek Inc. 3 + * Author: Yong Wu <yong.wu@mediatek.com> 4 + * 5 + * This program is free software; you can redistribute it and/or modify 6 + * it under the terms of the GNU General Public License version 2 as 7 + * published by the Free Software Foundation. 8 + * 9 + * This program is distributed in the hope that it will be useful, 10 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 + * GNU General Public License for more details. 13 + */ 14 + #ifndef MTK_IOMMU_SMI_H 15 + #define MTK_IOMMU_SMI_H 16 + 17 + #include <linux/bitops.h> 18 + #include <linux/device.h> 19 + 20 + #ifdef CONFIG_MTK_SMI 21 + 22 + #define MTK_LARB_NR_MAX 8 23 + 24 + #define MTK_SMI_MMU_EN(port) BIT(port) 25 + 26 + struct mtk_smi_larb_iommu { 27 + struct device *dev; 28 + unsigned int mmu; 29 + }; 30 + 31 + struct mtk_smi_iommu { 32 + unsigned int larb_nr; 33 + struct mtk_smi_larb_iommu larb_imu[MTK_LARB_NR_MAX]; 34 + }; 35 + 36 + /* 37 + * mtk_smi_larb_get: Enable the power domain and clocks for this local arbiter. 38 + * It also initializes some basic settings (like the iommu). 39 + * mtk_smi_larb_put: Disable the power domain and clocks for this local arbiter. 40 + * Both should be called in a non-atomic context. 41 + * 42 + * Returns 0 if successful, negative on failure. 43 + */ 44 + int mtk_smi_larb_get(struct device *larbdev); 45 + void mtk_smi_larb_put(struct device *larbdev); 46 + 47 + #else 48 + 49 + static inline int mtk_smi_larb_get(struct device *larbdev) 50 + { 51 + return 0; 52 + } 53 + 54 + static inline void mtk_smi_larb_put(struct device *larbdev) { } 55 + 56 + #endif 57 + 58 + #endif
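
The larb_imu[].mmu words are the shared state between the two drivers: the IOMMU master is expected to flip a bit per port around attach/detach (cf. the "may be updated in iommu-attach/detach" comment in mtk-smi.c), and the next mtk_smi_larb_get() flushes the accumulated mask into SMI_LARB_MMU_EN. A sketch of the producer side under an assumed name; mtk_m4u_config_port is hypothetical and the real M4U driver's equivalent may differ:

#include <linux/types.h>
#include <soc/mediatek/smi.h>

static void mtk_m4u_config_port(struct mtk_smi_iommu *imu,
				unsigned int larbid, unsigned int portid,
				bool enable)
{
	/* Accumulate per-port enables; written to the larb on the next get. */
	if (enable)
		imu->larb_imu[larbid].mmu |= MTK_SMI_MMU_EN(portid);
	else
		imu->larb_imu[larbid].mmu &= ~MTK_SMI_MMU_EN(portid);
}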