Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'dma-mapping-4.18' of git://git.infradead.org/users/hch/dma-mapping

Pull dma-mapping updates from Christoph Hellwig:

- replace the force_dma flag with a dma_configure bus method. (Nipun
Gupta, although one patch is incorrectly attributed to me due to a
git rebase bug)

- use GFP_DMA32 more aggressively in dma-direct. (Takashi Iwai)

- remove PCI_DMA_BUS_IS_PHYS and rely on the dma-mapping API to do the
right thing for bounce buffering.

- move dma-debug initialization to common code, and apply a few
cleanups to the dma-debug code.

- cleanup the Kconfig mess around swiotlb selection

- swiotlb comment fixup (Yisheng Xie)

- a trivial swiotlb fix. (Dan Carpenter)

- support swiotlb on RISC-V. (based on a patch from Palmer Dabbelt)

- add a new generic dma-noncoherent dma_map_ops implementation and use
it for arc, c6x and nds32.

- improve scatterlist validity checking in dma-debug. (Robin Murphy)

- add a struct device quirk to limit the dma-mask to 32-bit due to
bridge/system issues, and switch x86 to use it instead of a local
hack for VIA bridges.

- handle devices without a dma_mask more gracefully in the dma-direct
code.
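The generic dma-noncoherent conversion above boils down to this: each architecture supplies only its cache-maintenance hooks (arch_sync_dma_for_device / arch_sync_dma_for_cpu) and common code provides the full dma_map_ops. A minimal user-space sketch of that split follows; the `dma_noncoherent_map`/`dma_noncoherent_unmap` names and the counter-based stubs are illustrative stand-ins, not the kernel's actual implementation:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

enum dma_data_direction { DMA_TO_DEVICE, DMA_FROM_DEVICE };

/* Hypothetical stand-ins for the per-arch cache primitives; counters
 * let us observe that the generic layer invokes them. */
static int wback_calls, inval_calls;

static void arch_sync_dma_for_device(uintptr_t paddr, size_t size,
                                     enum dma_data_direction dir)
{
    (void)paddr; (void)size; (void)dir;
    wback_calls++;          /* e.g. write CPU caches back to memory */
}

static void arch_sync_dma_for_cpu(uintptr_t paddr, size_t size,
                                  enum dma_data_direction dir)
{
    (void)paddr; (void)size; (void)dir;
    inval_calls++;          /* e.g. invalidate stale CPU cache lines */
}

/* Generic layer: the same two functions can serve every noncoherent
 * architecture, because only the cache maintenance is arch-specific. */
static uintptr_t dma_noncoherent_map(uintptr_t paddr, size_t size,
                                     enum dma_data_direction dir)
{
    arch_sync_dma_for_device(paddr, size, dir);
    return paddr;           /* dma-direct style: DMA address == physical */
}

static void dma_noncoherent_unmap(uintptr_t paddr, size_t size,
                                  enum dma_data_direction dir)
{
    if (dir == DMA_FROM_DEVICE)
        arch_sync_dma_for_cpu(paddr, size, dir);
}
```

The arc and c6x conversions in this series follow exactly this shape: their dma.c files shrink to an alloc/free pair plus the two sync hooks.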

* tag 'dma-mapping-4.18' of git://git.infradead.org/users/hch/dma-mapping: (48 commits)
dma-direct: don't crash on device without dma_mask
nds32: use generic dma_noncoherent_ops
nds32: implement the unmap_sg DMA operation
nds32: consolidate DMA cache maintainance routines
x86/pci-dma: switch the VIA 32-bit DMA quirk to use the struct device flag
x86/pci-dma: remove the explicit nodac and allowdac option
x86/pci-dma: remove the experimental forcesac boot option
Documentation/x86: remove a stray reference to pci-nommu.c
core, dma-direct: add a flag 32-bit dma limits
dma-mapping: remove unused gfp_t parameter to arch_dma_alloc_attrs
dma-debug: check scatterlist segments
c6x: use generic dma_noncoherent_ops
arc: use generic dma_noncoherent_ops
arc: fix arc_dma_{map,unmap}_page
arc: fix arc_dma_sync_sg_for_{cpu,device}
arc: simplify arc_dma_sync_single_for_{cpu,device}
dma-mapping: provide a generic dma-noncoherent implementation
dma-mapping: simplify Kconfig dependencies
riscv: add swiotlb support
riscv: only enable ZONE_DMA32 for 64-bit
...
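For reference, the GFP_DMA32 change above is about choosing an allocation zone from the device's coherent DMA mask up front rather than retrying after a failed allocation. A standalone sketch of that zone-selection idea; the flag values and the `dma_direct_gfp_flags` helper are hypothetical simplifications of what dma-direct's allocator does:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-ins for the kernel's zone flags. */
#define GFP_KERNEL 0x0u
#define GFP_DMA    0x1u
#define GFP_DMA32  0x2u

#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

/* Rough sketch: masks that fit in the ISA-style 24-bit zone use
 * ZONE_DMA, masks up to 32 bits use ZONE_DMA32, and full 64-bit
 * masks may allocate from any zone. */
static unsigned int dma_direct_gfp_flags(uint64_t coherent_mask)
{
    if (coherent_mask <= DMA_BIT_MASK(24))
        return GFP_DMA;
    if (coherent_mask <= DMA_BIT_MASK(32))
        return GFP_DMA32;
    return GFP_KERNEL;
}
```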

+625 -1393
-1
Documentation/admin-guide/kernel-parameters.txt
···
 		nopanic
 		merge
 		nomerge
-		forcesac
 		soft
 		pt		[x86, IA-64]
 		nobypass	[PPC/POWERNV]
-31
Documentation/features/io/dma-api-debug/arch-support.txt
···
-#
-# Feature name:          dma-api-debug
-#         Kconfig:       HAVE_DMA_API_DEBUG
-#         description:   arch supports DMA debug facilities
-#
-    -----------------------
-    |         arch |status|
-    -----------------------
-    |       alpha: | TODO |
-    |         arc: | TODO |
-    |         arm: |  ok  |
-    |       arm64: |  ok  |
-    |         c6x: |  ok  |
-    |       h8300: | TODO |
-    |     hexagon: | TODO |
-    |        ia64: |  ok  |
-    |        m68k: | TODO |
-    |  microblaze: |  ok  |
-    |        mips: |  ok  |
-    |       nios2: | TODO |
-    |    openrisc: | TODO |
-    |      parisc: | TODO |
-    |     powerpc: |  ok  |
-    |        s390: |  ok  |
-    |          sh: |  ok  |
-    |       sparc: |  ok  |
-    |          um: | TODO |
-    |   unicore32: | TODO |
-    |         x86: |  ok  |
-    |      xtensa: |  ok  |
-    -----------------------
+3 -10
Documentation/x86/x86_64/boot-options.txt
···
 
 IOMMU (input/output memory management unit)
 
-Currently four x86-64 PCI-DMA mapping implementations exist:
+Multiple x86-64 PCI-DMA mapping implementations exist, for example:
 
-1. <arch/x86_64/kernel/pci-nommu.c>: use no hardware/software IOMMU at all
+1. <lib/dma-direct.c>: use no hardware/software IOMMU at all
    (e.g. because you have < 3 GB memory).
    Kernel boot message: "PCI-DMA: Disabling IOMMU"
···
    Kernel boot message: "PCI-DMA: Using Calgary IOMMU"
 
 iommu=[<size>][,noagp][,off][,force][,noforce][,leak[=<nr_of_leak_pages>]
-	[,memaper[=<order>]][,merge][,forcesac][,fullflush][,nomerge]
+	[,memaper[=<order>]][,merge][,fullflush][,nomerge]
 	[,noaperture][,calgary]
 
 General iommu options:
···
 		(experimental).
 nomerge		Don't do scatter-gather (SG) merging.
 noaperture	Ask the IOMMU not to touch the aperture for AGP.
-forcesac	Force single-address cycle (SAC) mode for masks <40bits
-		(experimental).
 noagp		Don't initialize the AGP driver and use full aperture.
-allowdac	Allow double-address cycle (DAC) mode, i.e. DMA >4GB.
-		DAC is used with 32-bit PCI to push a 64-bit address in
-		two cycles. When off all DMA over >4GB is forced through
-		an IOMMU or software bounce buffering.
-nodac		Forbid DAC mode, i.e. DMA >4GB.
 panic		Always panic when IOMMU overflows.
 calgary		Use the Calgary IOMMU if it is available
+2
MAINTAINERS
···
 S:	Supported
 F:	lib/dma-debug.c
 F:	lib/dma-direct.c
+F:	lib/dma-noncoherent.c
 F:	lib/dma-virt.c
 F:	drivers/base/dma-mapping.c
 F:	drivers/base/dma-coherent.c
 F:	include/asm-generic/dma-mapping.h
 F:	include/linux/dma-direct.h
 F:	include/linux/dma-mapping.h
+F:	include/linux/dma-noncoherent.h
 
 DME1737 HARDWARE MONITOR DRIVER
 M:	Juerg Haefliger <juergh@gmail.com>
-3
arch/Kconfig
···
 	  The <linux/clk.h> calls support software clock gating and
 	  thus are a key power management tool on many systems.
 
-config HAVE_DMA_API_DEBUG
-	bool
-
 config HAVE_HW_BREAKPOINT
 	bool
 	depends on PERF_EVENTS
+2 -12
arch/alpha/Kconfig
···
 	select HAVE_OPROFILE
 	select HAVE_PCSPKR_PLATFORM
 	select HAVE_PERF_EVENTS
+	select NEED_DMA_MAP_STATE
+	select NEED_SG_DMA_LENGTH
 	select VIRT_TO_BUS
 	select GENERIC_IRQ_PROBE
 	select AUTO_IRQ_AFFINITY if SMP
···
 config ZONE_DMA
 	bool
 	default y
-
-config ARCH_DMA_ADDR_T_64BIT
-	def_bool y
-
-config NEED_DMA_MAP_STATE
-	def_bool y
-
-config NEED_SG_DMA_LENGTH
-	def_bool y
 
 config GENERIC_ISA_DMA
 	bool
···
 	default y
 
 config PCI_SYSCALL
-	def_bool PCI
-
-config IOMMU_HELPER
 	def_bool PCI
 
 config ALPHA_NONAME
-5
arch/alpha/include/asm/pci.h
···
 
 /* IOMMU controls. */
 
-/* The PCI address space does not equal the physical memory address space.
-   The networking and block device layers use this boolean for bounce buffer
-   decisions.  */
-#define PCI_DMA_BUS_IS_PHYS  0
-
 /* TODO: integrate with include/asm-generic/pci.h ? */
 static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
 {
+5 -6
arch/arc/Kconfig
···
 config ARC
 	def_bool y
 	select ARC_TIMERS
+	select ARCH_HAS_SYNC_DMA_FOR_CPU
+	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
 	select ARCH_HAS_SG_CHAIN
 	select ARCH_SUPPORTS_ATOMIC_RMW if ARC_HAS_LLSC
 	select BUILDTIME_EXTABLE_SORT
 	select CLONE_BACKWARDS
 	select COMMON_CLK
+	select DMA_NONCOHERENT_OPS
+	select DMA_NONCOHERENT_MMAP
 	select GENERIC_ATOMIC64 if !ISA_ARCV2 || !(ARC_HAS_LL64 && ARC_HAS_LLSC)
 	select GENERIC_CLOCKEVENTS
 	select GENERIC_FIND_FIRST_BIT
···
 	default n
 	depends on ISA_ARCV2
 	select HIGHMEM
+	select PHYS_ADDR_T_64BIT
 	help
 	  Enable access to physical memory beyond 4G, only supported on
 	  ARC cores with 40 bit Physical Addressing support
-
-config ARCH_PHYS_ADDR_T_64BIT
-	def_bool ARC_HAS_PAE40
-
-config ARCH_DMA_ADDR_T_64BIT
-	bool
 
 config ARC_KVADDR_SIZE
 	int "Kernel Virtual Address Space size (MB)"
+1
arch/arc/include/asm/Kbuild
···
 generic-y += bugs.h
 generic-y += device.h
 generic-y += div64.h
+generic-y += dma-mapping.h
 generic-y += emergency-restart.h
 generic-y += extable.h
 generic-y += fb.h
-21
arch/arc/include/asm/dma-mapping.h
···
-/*
- * DMA Mapping glue for ARC
- *
- * Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#ifndef ASM_ARC_DMA_MAPPING_H
-#define ASM_ARC_DMA_MAPPING_H
-
-extern const struct dma_map_ops arc_dma_ops;
-
-static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
-{
-	return &arc_dma_ops;
-}
-
-#endif
-6
arch/arc/include/asm/pci.h
···
 #define PCIBIOS_MIN_MEM 0x100000
 
 #define pcibios_assign_all_busses()	1
-/*
- * The PCI address space does equal the physical memory address space.
- * The networking and block device layers use this boolean for bounce
- * buffer decisions.
- */
-#define PCI_DMA_BUS_IS_PHYS	1
 
 #endif /* __KERNEL__ */
 
+13 -149
arch/arc/mm/dma.c
···
  * The default DMA address == Phy address which is 0x8000_0000 based.
  */
 
-#include <linux/dma-mapping.h>
+#include <linux/dma-noncoherent.h>
 #include <asm/cache.h>
 #include <asm/cacheflush.h>
 
-
-static void *arc_dma_alloc(struct device *dev, size_t size,
-		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
+void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
+		gfp_t gfp, unsigned long attrs)
 {
 	unsigned long order = get_order(size);
 	struct page *page;
···
 	return kvaddr;
 }
 
-static void arc_dma_free(struct device *dev, size_t size, void *vaddr,
+void arch_dma_free(struct device *dev, size_t size, void *vaddr,
 		dma_addr_t dma_handle, unsigned long attrs)
 {
 	phys_addr_t paddr = dma_handle;
···
 	__free_pages(page, get_order(size));
 }
 
-static int arc_dma_mmap(struct device *dev, struct vm_area_struct *vma,
-			void *cpu_addr, dma_addr_t dma_addr, size_t size,
-			unsigned long attrs)
+int arch_dma_mmap(struct device *dev, struct vm_area_struct *vma,
+		void *cpu_addr, dma_addr_t dma_addr, size_t size,
+		unsigned long attrs)
 {
 	unsigned long user_count = vma_pages(vma);
 	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
···
 	return ret;
 }
 
-/*
- * streaming DMA Mapping API...
- * CPU accesses page via normal paddr, thus needs to explicitly made
- * consistent before each use
- */
-static void _dma_cache_sync(phys_addr_t paddr, size_t size,
-		enum dma_data_direction dir)
+void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
+		size_t size, enum dma_data_direction dir)
 {
-	switch (dir) {
-	case DMA_FROM_DEVICE:
-		dma_cache_inv(paddr, size);
-		break;
-	case DMA_TO_DEVICE:
-		dma_cache_wback(paddr, size);
-		break;
-	case DMA_BIDIRECTIONAL:
-		dma_cache_wback_inv(paddr, size);
-		break;
-	default:
-		pr_err("Invalid DMA dir [%d] for OP @ %pa[p]\n", dir, &paddr);
-	}
+	dma_cache_wback(paddr, size);
 }
 
-/*
- * arc_dma_map_page - map a portion of a page for streaming DMA
- *
- * Ensure that any data held in the cache is appropriately discarded
- * or written back.
- *
- * The device owns this memory once this call has completed.  The CPU
- * can regain ownership by calling dma_unmap_page().
- *
- * Note: while it takes struct page as arg, caller can "abuse" it to pass
- * a region larger than PAGE_SIZE, provided it is physically contiguous
- * and this still works correctly
- */
-static dma_addr_t arc_dma_map_page(struct device *dev, struct page *page,
-		unsigned long offset, size_t size, enum dma_data_direction dir,
-		unsigned long attrs)
+void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
+		size_t size, enum dma_data_direction dir)
 {
-	phys_addr_t paddr = page_to_phys(page) + offset;
-
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-		_dma_cache_sync(paddr, size, dir);
-
-	return paddr;
+	dma_cache_inv(paddr, size);
 }
-
-/*
- * arc_dma_unmap_page - unmap a buffer previously mapped through dma_map_page()
- *
- * After this call, reads by the CPU to the buffer are guaranteed to see
- * whatever the device wrote there.
- *
- * Note: historically this routine was not implemented for ARC
- */
-static void arc_dma_unmap_page(struct device *dev, dma_addr_t handle,
-		size_t size, enum dma_data_direction dir,
-		unsigned long attrs)
-{
-	phys_addr_t paddr = handle;
-
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-		_dma_cache_sync(paddr, size, dir);
-}
-
-static int arc_dma_map_sg(struct device *dev, struct scatterlist *sg,
-		int nents, enum dma_data_direction dir, unsigned long attrs)
-{
-	struct scatterlist *s;
-	int i;
-
-	for_each_sg(sg, s, nents, i)
-		s->dma_address = dma_map_page(dev, sg_page(s), s->offset,
-					      s->length, dir);
-
-	return nents;
-}
-
-static void arc_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
-		int nents, enum dma_data_direction dir,
-		unsigned long attrs)
-{
-	struct scatterlist *s;
-	int i;
-
-	for_each_sg(sg, s, nents, i)
-		arc_dma_unmap_page(dev, sg_dma_address(s), sg_dma_len(s), dir,
-				   attrs);
-}
-
-static void arc_dma_sync_single_for_cpu(struct device *dev,
-		dma_addr_t dma_handle, size_t size, enum dma_data_direction dir)
-{
-	_dma_cache_sync(dma_handle, size, DMA_FROM_DEVICE);
-}
-
-static void arc_dma_sync_single_for_device(struct device *dev,
-		dma_addr_t dma_handle, size_t size, enum dma_data_direction dir)
-{
-	_dma_cache_sync(dma_handle, size, DMA_TO_DEVICE);
-}
-
-static void arc_dma_sync_sg_for_cpu(struct device *dev,
-		struct scatterlist *sglist, int nelems,
-		enum dma_data_direction dir)
-{
-	int i;
-	struct scatterlist *sg;
-
-	for_each_sg(sglist, sg, nelems, i)
-		_dma_cache_sync(sg_phys(sg), sg->length, dir);
-}
-
-static void arc_dma_sync_sg_for_device(struct device *dev,
-		struct scatterlist *sglist, int nelems,
-		enum dma_data_direction dir)
-{
-	int i;
-	struct scatterlist *sg;
-
-	for_each_sg(sglist, sg, nelems, i)
-		_dma_cache_sync(sg_phys(sg), sg->length, dir);
-}
-
-static int arc_dma_supported(struct device *dev, u64 dma_mask)
-{
-	/* Support 32 bit DMA mask exclusively */
-	return dma_mask == DMA_BIT_MASK(32);
-}
-
-const struct dma_map_ops arc_dma_ops = {
-	.alloc			= arc_dma_alloc,
-	.free			= arc_dma_free,
-	.mmap			= arc_dma_mmap,
-	.map_page		= arc_dma_map_page,
-	.unmap_page		= arc_dma_unmap_page,
-	.map_sg			= arc_dma_map_sg,
-	.unmap_sg		= arc_dma_unmap_sg,
-	.sync_single_for_device	= arc_dma_sync_single_for_device,
-	.sync_single_for_cpu	= arc_dma_sync_single_for_cpu,
-	.sync_sg_for_cpu	= arc_dma_sync_sg_for_cpu,
-	.sync_sg_for_device	= arc_dma_sync_sg_for_device,
-	.dma_supported		= arc_dma_supported,
-};
-EXPORT_SYMBOL(arc_dma_ops);
+2 -13
arch/arm/Kconfig
···
 	select HAVE_CONTEXT_TRACKING
 	select HAVE_C_RECORDMCOUNT
 	select HAVE_DEBUG_KMEMLEAK
-	select HAVE_DMA_API_DEBUG
 	select HAVE_DMA_CONTIGUOUS if MMU
 	select HAVE_DYNAMIC_FTRACE if (!XIP_KERNEL) && !CPU_ENDIAN_BE32 && MMU
 	select HAVE_DYNAMIC_FTRACE_WITH_REGS if HAVE_DYNAMIC_FTRACE
···
 	select HAVE_VIRT_CPU_ACCOUNTING_GEN
 	select IRQ_FORCED_THREADING
 	select MODULES_USE_ELF_REL
+	select NEED_DMA_MAP_STATE
 	select NO_BOOTMEM
 	select OF_EARLY_FLATTREE if OF
 	select OF_RESERVED_MEM if OF
···
 
 config ARM_HAS_SG_CHAIN
 	select ARCH_HAS_SG_CHAIN
-	bool
-
-config NEED_SG_DMA_LENGTH
 	bool
 
 config ARM_DMA_USE_IOMMU
···
 
 config ZONE_DMA
 	bool
-
-config NEED_DMA_MAP_STATE
-	def_bool y
 
 config ARCH_SUPPORTS_UPROBES
 	def_bool y
···
 	  and the task is only allowed to execute a few safe syscalls
 	  defined by each seccomp mode.
 
-config SWIOTLB
-	def_bool y
-
-config IOMMU_HELPER
-	def_bool SWIOTLB
-
 config PARAVIRT
 	bool "Enable paravirtualization code"
 	help
···
 	depends on MMU
 	select ARCH_DMA_ADDR_T_64BIT
 	select ARM_PSCI
+	select SWIOTLB
 	select SWIOTLB_XEN
 	select PARAVIRT
 	help
-7
arch/arm/include/asm/pci.h
···
 }
 #endif /* CONFIG_PCI_DOMAINS */
 
-/*
- * The PCI address space does equal the physical memory address space.
- * The networking and block device layers use this boolean for bounce
- * buffer decisions.
- */
-#define PCI_DMA_BUS_IS_PHYS     (1)
-
 #define HAVE_PCI_MMAP
 #define ARCH_GENERIC_PCI_MMAP_RESOURCE
+1 -1
arch/arm/kernel/setup.c
···
 		else
 			size -= aligned_start - start;
 
-#ifndef CONFIG_ARCH_PHYS_ADDR_T_64BIT
+#ifndef CONFIG_PHYS_ADDR_T_64BIT
 		if (aligned_start > ULONG_MAX) {
 			pr_crit("Ignoring memory at 0x%08llx outside 32-bit physical address space\n",
 				(long long)start);
-1
arch/arm/mach-axxia/Kconfig
···
 config ARCH_AXXIA
 	bool "LSI Axxia platforms"
 	depends on ARCH_MULTI_V7 && ARM_LPAE
-	select ARCH_DMA_ADDR_T_64BIT
 	select ARM_AMBA
 	select ARM_GIC
 	select ARM_TIMER_SP804
-1
arch/arm/mach-bcm/Kconfig
···
 	select BRCMSTB_L2_IRQ
 	select BCM7120_L2_IRQ
 	select ARCH_HAS_HOLES_MEMORYMODEL
-	select ARCH_DMA_ADDR_T_64BIT if ARM_LPAE
 	select ZONE_DMA if ARM_LPAE
 	select SOC_BRCMSTB
 	select SOC_BUS
-1
arch/arm/mach-exynos/Kconfig
···
 	bool "SAMSUNG EXYNOS5440"
 	default y
 	depends on ARCH_EXYNOS5
-	select ARCH_DMA_ADDR_T_64BIT if ARM_LPAE
 	select HAVE_ARM_ARCH_TIMER
 	select AUTO_ZRELADDR
 	select PINCTRL_EXYNOS5440
-1
arch/arm/mach-highbank/Kconfig
···
 config ARCH_HIGHBANK
 	bool "Calxeda ECX-1000/2000 (Highbank/Midway)"
 	depends on ARCH_MULTI_V7
-	select ARCH_DMA_ADDR_T_64BIT if ARM_LPAE
 	select ARCH_HAS_HOLES_MEMORYMODEL
 	select ARCH_SUPPORTS_BIG_ENDIAN
 	select ARM_AMBA
-1
arch/arm/mach-rockchip/Kconfig
···
 	depends on ARCH_MULTI_V7
 	select PINCTRL
 	select PINCTRL_ROCKCHIP
-	select ARCH_DMA_ADDR_T_64BIT if ARM_LPAE
 	select ARCH_HAS_RESET_CONTROLLER
 	select ARM_AMBA
 	select ARM_GIC
-1
arch/arm/mach-shmobile/Kconfig
···
 menuconfig ARCH_RENESAS
 	bool "Renesas ARM SoCs"
 	depends on ARCH_MULTI_V7 && MMU
-	select ARCH_DMA_ADDR_T_64BIT if ARM_LPAE
 	select ARCH_SHMOBILE
 	select ARM_GIC
 	select GPIOLIB
-1
arch/arm/mach-tegra/Kconfig
···
 	select RESET_CONTROLLER
 	select SOC_BUS
 	select ZONE_DMA if ARM_LPAE
-	select ARCH_DMA_ADDR_T_64BIT if ARM_LPAE
 	help
 	  This enables support for NVIDIA Tegra based systems.
+1 -6
arch/arm/mm/Kconfig
···
 	bool "Support for the Large Physical Address Extension"
 	depends on MMU && CPU_32v7 && !CPU_32v6 && !CPU_32v5 && \
 		!CPU_32v4 && !CPU_32v3
+	select PHYS_ADDR_T_64BIT
 	help
 	  Say Y if you have an ARMv7 processor supporting the LPAE page
 	  table format and you would like to access memory beyond the
···
 config ARM_PV_FIXUP
 	def_bool y
 	depends on ARM_LPAE && ARM_PATCH_PHYS_VIRT && ARCH_KEYSTONE
-
-config ARCH_PHYS_ADDR_T_64BIT
-	def_bool ARM_LPAE
-
-config ARCH_DMA_ADDR_T_64BIT
-	bool
 
 config ARM_THUMB
 	bool "Support Thumb user binaries" if !CPU_THUMBONLY && EXPERT
-9
arch/arm/mm/dma-mapping-nommu.c
···
 void arch_teardown_dma_ops(struct device *dev)
 {
 }
-
-#define PREALLOC_DMA_DEBUG_ENTRIES	4096
-
-static int __init dma_debug_do_init(void)
-{
-	dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
-	return 0;
-}
-core_initcall(dma_debug_do_init);
-9
arch/arm/mm/dma-mapping.c
···
 	return __dma_supported(dev, mask, false);
 }
 
-#define PREALLOC_DMA_DEBUG_ENTRIES	4096
-
-static int __init dma_debug_do_init(void)
-{
-	dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
-	return 0;
-}
-core_initcall(dma_debug_do_init);
-
 #ifdef CONFIG_ARM_DMA_USE_IOMMU
 
 static int __dma_info_to_prot(enum dma_data_direction dir, unsigned long attrs)
+3 -19
arch/arm64/Kconfig
···
 	select HAVE_CONTEXT_TRACKING
 	select HAVE_DEBUG_BUGVERBOSE
 	select HAVE_DEBUG_KMEMLEAK
-	select HAVE_DMA_API_DEBUG
 	select HAVE_DMA_CONTIGUOUS
 	select HAVE_DYNAMIC_FTRACE
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS
···
 	select IRQ_FORCED_THREADING
 	select MODULES_USE_ELF_RELA
 	select MULTI_IRQ_HANDLER
+	select NEED_DMA_MAP_STATE
+	select NEED_SG_DMA_LENGTH
 	select NO_BOOTMEM
 	select OF
 	select OF_EARLY_FLATTREE
···
 	select POWER_SUPPLY
 	select REFCOUNT_FULL
 	select SPARSE_IRQ
+	select SWIOTLB
 	select SYSCTL_EXCEPTION_TRACE
 	select THREAD_INFO_IN_TASK
 	help
 	  ARM 64-bit (AArch64) Linux support.
 
 config 64BIT
-	def_bool y
-
-config ARCH_PHYS_ADDR_T_64BIT
 	def_bool y
 
 config MMU
···
 config HAVE_GENERIC_GUP
 	def_bool y
 
-config ARCH_DMA_ADDR_T_64BIT
-	def_bool y
-
-config NEED_DMA_MAP_STATE
-	def_bool y
-
-config NEED_SG_DMA_LENGTH
-	def_bool y
-
 config SMP
 	def_bool y
-
-config SWIOTLB
-	def_bool y
-
-config IOMMU_HELPER
-	def_bool SWIOTLB
 
 config KERNEL_MODE_NEON
 	def_bool y
-5
arch/arm64/include/asm/pci.h
···
 #define pcibios_assign_all_busses() \
 	(pci_has_flag(PCI_REASSIGN_ALL_BUS))
 
-/*
- * PCI address space differs from physical memory address space
- */
-#define PCI_DMA_BUS_IS_PHYS	(0)
-
 #define ARCH_GENERIC_PCI_MMAP_RESOURCE	1
 
 extern int isa_dma_bridge_buggy;
-10
arch/arm64/mm/dma-mapping.c
···
 }
 arch_initcall(arm64_dma_init);
 
-#define PREALLOC_DMA_DEBUG_ENTRIES	4096
-
-static int __init dma_debug_do_init(void)
-{
-	dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
-	return 0;
-}
-fs_initcall(dma_debug_do_init);
-
-
 #ifdef CONFIG_IOMMU_DMA
 #include <linux/dma-iommu.h>
 #include <linux/platform_device.h>
+3 -1
arch/c6x/Kconfig
···
 
 config C6X
 	def_bool y
+	select ARCH_HAS_SYNC_DMA_FOR_CPU
+	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
 	select CLKDEV_LOOKUP
+	select DMA_NONCOHERENT_OPS
 	select GENERIC_ATOMIC64
 	select GENERIC_IRQ_SHOW
 	select HAVE_ARCH_TRACEHOOK
-	select HAVE_DMA_API_DEBUG
 	select HAVE_MEMBLOCK
 	select SPARSE_IRQ
 	select IRQ_DOMAIN
+1
arch/c6x/include/asm/Kbuild
···
 generic-y += device.h
 generic-y += div64.h
 generic-y += dma.h
+generic-y += dma-mapping.h
 generic-y += emergency-restart.h
 generic-y += exec.h
 generic-y += extable.h
-28
arch/c6x/include/asm/dma-mapping.h
···
-/*
- *  Port on Texas Instruments TMS320C6x architecture
- *
- *  Copyright (C) 2004, 2009, 2010, 2011 Texas Instruments Incorporated
- *  Author: Aurelien Jacquiot <aurelien.jacquiot@ti.com>
- *
- *  This program is free software; you can redistribute it and/or modify
- *  it under the terms of the GNU General Public License version 2 as
- *  published by the Free Software Foundation.
- *
- */
-#ifndef _ASM_C6X_DMA_MAPPING_H
-#define _ASM_C6X_DMA_MAPPING_H
-
-extern const struct dma_map_ops c6x_dma_ops;
-
-static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
-{
-	return &c6x_dma_ops;
-}
-
-extern void coherent_mem_init(u32 start, u32 size);
-void *c6x_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
-		gfp_t gfp, unsigned long attrs);
-void c6x_dma_free(struct device *dev, size_t size, void *vaddr,
-		dma_addr_t dma_handle, unsigned long attrs);
-
-#endif	/* _ASM_C6X_DMA_MAPPING_H */
+2
arch/c6x/include/asm/setup.h
···
 extern void machine_init(unsigned long dt_ptr);
 extern void time_init(void);
 
+extern void coherent_mem_init(u32 start, u32 size);
+
 #endif /* !__ASSEMBLY__ */
 #endif /* _ASM_C6X_SETUP_H */
+1 -1
arch/c6x/kernel/Makefile
···
 obj-y := process.o traps.o irq.o signal.o ptrace.o
 obj-y += setup.o sys_c6x.o time.o devicetree.o
 obj-y += switch_to.o entry.o vectors.o c6x_ksyms.o
-obj-y += soc.o dma.o
+obj-y += soc.o
 
 obj-$(CONFIG_MODULES) += module.o
-149
arch/c6x/kernel/dma.c
···
-/*
- *  Copyright (C) 2011 Texas Instruments Incorporated
- *  Author: Mark Salter <msalter@redhat.com>
- *
- *  This program is free software; you can redistribute it and/or modify
- *  it under the terms of the GNU General Public License version 2 as
- *  published by the Free Software Foundation.
- */
-#include <linux/module.h>
-#include <linux/dma-mapping.h>
-#include <linux/mm.h>
-#include <linux/mm_types.h>
-#include <linux/scatterlist.h>
-
-#include <asm/cacheflush.h>
-
-static void c6x_dma_sync(dma_addr_t handle, size_t size,
-			 enum dma_data_direction dir)
-{
-	unsigned long paddr = handle;
-
-	BUG_ON(!valid_dma_direction(dir));
-
-	switch (dir) {
-	case DMA_FROM_DEVICE:
-		L2_cache_block_invalidate(paddr, paddr + size);
-		break;
-	case DMA_TO_DEVICE:
-		L2_cache_block_writeback(paddr, paddr + size);
-		break;
-	case DMA_BIDIRECTIONAL:
-		L2_cache_block_writeback_invalidate(paddr, paddr + size);
-		break;
-	default:
-		break;
-	}
-}
-
-static dma_addr_t c6x_dma_map_page(struct device *dev, struct page *page,
-		unsigned long offset, size_t size, enum dma_data_direction dir,
-		unsigned long attrs)
-{
-	dma_addr_t handle = virt_to_phys(page_address(page) + offset);
-
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-		c6x_dma_sync(handle, size, dir);
-
-	return handle;
-}
-
-static void c6x_dma_unmap_page(struct device *dev, dma_addr_t handle,
-		size_t size, enum dma_data_direction dir, unsigned long attrs)
-{
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-		c6x_dma_sync(handle, size, dir);
-}
-
-static int c6x_dma_map_sg(struct device *dev, struct scatterlist *sglist,
-		int nents, enum dma_data_direction dir, unsigned long attrs)
-{
-	struct scatterlist *sg;
-	int i;
-
-	for_each_sg(sglist, sg, nents, i) {
-		sg->dma_address = sg_phys(sg);
-		if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-			c6x_dma_sync(sg->dma_address, sg->length, dir);
-	}
-
-	return nents;
-}
-
-static void c6x_dma_unmap_sg(struct device *dev, struct scatterlist *sglist,
-		int nents, enum dma_data_direction dir, unsigned long attrs)
-{
-	struct scatterlist *sg;
-	int i;
-
-	if (attrs & DMA_ATTR_SKIP_CPU_SYNC)
-		return;
-
-	for_each_sg(sglist, sg, nents, i)
-		c6x_dma_sync(sg_dma_address(sg), sg->length, dir);
-}
-
-static void c6x_dma_sync_single_for_cpu(struct device *dev, dma_addr_t handle,
-		size_t size, enum dma_data_direction dir)
-{
-	c6x_dma_sync(handle, size, dir);
-
-}
-
-static void c6x_dma_sync_single_for_device(struct device *dev,
-		dma_addr_t handle, size_t size, enum dma_data_direction dir)
-{
-	c6x_dma_sync(handle, size, dir);
-
-}
-
-static void c6x_dma_sync_sg_for_cpu(struct device *dev,
-		struct scatterlist *sglist, int nents,
-		enum dma_data_direction dir)
-{
-	struct scatterlist *sg;
-	int i;
-
-	for_each_sg(sglist, sg, nents, i)
-		c6x_dma_sync_single_for_cpu(dev, sg_dma_address(sg),
-					    sg->length, dir);
-
-}
-
-static void c6x_dma_sync_sg_for_device(struct device *dev,
-		struct scatterlist *sglist, int nents,
-		enum dma_data_direction dir)
-{
-	struct scatterlist *sg;
-	int i;
-
-	for_each_sg(sglist, sg, nents, i)
-		c6x_dma_sync_single_for_device(dev, sg_dma_address(sg),
-					       sg->length, dir);
-
-}
-
-const struct dma_map_ops c6x_dma_ops = {
-	.alloc			= c6x_dma_alloc,
-	.free			= c6x_dma_free,
-	.map_page		= c6x_dma_map_page,
-	.unmap_page		= c6x_dma_unmap_page,
-	.map_sg			= c6x_dma_map_sg,
-	.unmap_sg		= c6x_dma_unmap_sg,
-	.sync_single_for_device	= c6x_dma_sync_single_for_device,
-	.sync_single_for_cpu	= c6x_dma_sync_single_for_cpu,
-	.sync_sg_for_device	= c6x_dma_sync_sg_for_device,
-	.sync_sg_for_cpu	= c6x_dma_sync_sg_for_cpu,
-};
-EXPORT_SYMBOL(c6x_dma_ops);
-
-/* Number of entries preallocated for DMA-API debugging */
-#define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16)
-
-static int __init dma_init(void)
-{
-	dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
-
-	return 0;
-}
-fs_initcall(dma_init);
+37 -3
arch/c6x/mm/dma-coherent.c
···
 #include <linux/bitops.h>
 #include <linux/module.h>
 #include <linux/interrupt.h>
-#include <linux/dma-mapping.h>
+#include <linux/dma-noncoherent.h>
 #include <linux/memblock.h>
 
+#include <asm/cacheflush.h>
 #include <asm/page.h>
+#include <asm/setup.h>
 
 /*
  * DMA coherent memory management, can be redefined using the memdma=
···
  * Allocate DMA coherent memory space and return both the kernel
  * virtual and DMA address for that space.
  */
-void *c6x_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
+void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
 		gfp_t gfp, unsigned long attrs)
 {
 	u32 paddr;
···
 /*
  * Free DMA coherent memory as defined by the above mapping.
  */
-void c6x_dma_free(struct device *dev, size_t size, void *vaddr,
+void arch_dma_free(struct device *dev, size_t size, void *vaddr,
 		dma_addr_t dma_handle, unsigned long attrs)
 {
 	int order;
···
 
 	dma_bitmap = phys_to_virt(bitmap_phys);
 	memset(dma_bitmap, 0, dma_pages * PAGE_SIZE);
+}
+
+static void c6x_dma_sync(struct device *dev, phys_addr_t paddr, size_t size,
+		enum dma_data_direction dir)
+{
+	BUG_ON(!valid_dma_direction(dir));
+
+	switch (dir) {
+	case DMA_FROM_DEVICE:
+		L2_cache_block_invalidate(paddr, paddr + size);
+		break;
+	case DMA_TO_DEVICE:
+		L2_cache_block_writeback(paddr, paddr + size);
+		break;
+	case DMA_BIDIRECTIONAL:
+		L2_cache_block_writeback_invalidate(paddr, paddr + size);
+		break;
+	default:
+		break;
+	}
+}
+
+void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
+		size_t size, enum dma_data_direction dir)
+{
+	return c6x_dma_sync(dev, paddr, size, dir);
+}
+
+void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
+		size_t size, enum dma_data_direction dir)
+{
+	return c6x_dma_sync(dev, paddr, size, dir);
 }
-2
arch/h8300/include/asm/pci.h
···
 	/* We don't do dynamic PCI IRQ allocation */
 }
 
-#define PCI_DMA_BUS_IS_PHYS	(1)
-
 #endif /* _ASM_H8300_PCI_H */
arch/hexagon/Kconfig (+1 -3)
···
	select GENERIC_IRQ_SHOW
	select HAVE_ARCH_KGDB
	select HAVE_ARCH_TRACEHOOK
+	select NEED_SG_DMA_LENGTH
	select NO_IOPORT_MAP
	select GENERIC_IOMAP
	select GENERIC_SMP_IDLE_THREAD
···
 # Use the generic interrupt handling code in kernel/irq/:
 #
 config GENERIC_IRQ_PROBE
-	def_bool y
-
-config NEED_SG_DMA_LENGTH
	def_bool y
 
 config RWSEM_GENERIC_SPINLOCK
arch/hexagon/kernel/dma.c (-1)
···
	.sync_single_for_cpu = hexagon_sync_single_for_cpu,
	.sync_single_for_device = hexagon_sync_single_for_device,
	.mapping_error	= hexagon_mapping_error,
-	.is_phys	= 1,
 };
 
 void __init hexagon_dma_init(void)
arch/ia64/Kconfig (+2 -21)
···
	select HAVE_FUNCTION_TRACER
	select TTY
	select HAVE_ARCH_TRACEHOOK
-	select HAVE_DMA_API_DEBUG
	select HAVE_MEMBLOCK
	select HAVE_MEMBLOCK_NODE_MAP
	select HAVE_VIRT_CPU_ACCOUNTING
···
	select MODULES_USE_ELF_RELA
	select ARCH_USE_CMPXCHG_LOCKREF
	select HAVE_ARCH_AUDITSYSCALL
+	select NEED_DMA_MAP_STATE
+	select NEED_SG_DMA_LENGTH
	default y
	help
	  The Itanium Processor Family is Intel's 64-bit successor to
···
 config MMU
	bool
	default y
-
-config ARCH_DMA_ADDR_T_64BIT
-	def_bool y
-
-config NEED_DMA_MAP_STATE
-	def_bool y
-
-config NEED_SG_DMA_LENGTH
-	def_bool y
-
-config SWIOTLB
-	bool
 
 config STACKTRACE_SUPPORT
	def_bool y
···
	bool "generic"
	select NUMA
	select ACPI_NUMA
-	select DMA_DIRECT_OPS
	select SWIOTLB
	select PCI_MSI
	help
···
 
 config IA64_DIG
	bool "DIG-compliant"
-	select DMA_DIRECT_OPS
	select SWIOTLB
 
 config IA64_DIG_VTD
···
 
 config IA64_HP_ZX1_SWIOTLB
	bool "HP-zx1/sx1000 with software I/O TLB"
-	select DMA_DIRECT_OPS
	select SWIOTLB
	help
	  Build a kernel that runs on HP zx1 and sx1000 systems even when they
···
	bool "SGI-UV"
	select NUMA
	select ACPI_NUMA
-	select DMA_DIRECT_OPS
	select SWIOTLB
	help
	  Selecting this option will optimize the kernel for use on UV based
···
 
 config IA64_HP_SIM
	bool "Ski-simulator"
-	select DMA_DIRECT_OPS
	select SWIOTLB
	depends on !PM
 
···
 source "crypto/Kconfig"
 
 source "lib/Kconfig"
-
-config IOMMU_HELPER
-	def_bool (IA64_HP_ZX1 || IA64_HP_ZX1_SWIOTLB || IA64_GENERIC || SWIOTLB)
arch/ia64/hp/common/sba_iommu.c (-3)
···
	ioc_resource_init(ioc);
	ioc_sac_init(ioc);
 
-	if ((long) ~iovp_mask > (long) ia64_max_iommu_merge_mask)
-		ia64_max_iommu_merge_mask = ~iovp_mask;
-
	printk(KERN_INFO PFX
		"%s %d.%d HPA 0x%lx IOVA space %dMb at 0x%lx\n",
		ioc->name, (ioc->rev >> 4) & 0xF, ioc->rev & 0xF,
arch/ia64/include/asm/pci.h (-17)
···
 #define PCIBIOS_MIN_IO		0x1000
 #define PCIBIOS_MIN_MEM		0x10000000
 
-/*
- * PCI_DMA_BUS_IS_PHYS should be set to 1 if there is _necessarily_ a direct
- * correspondence between device bus addresses and CPU physical addresses.
- * Platforms with a hardware I/O MMU _must_ turn this off to suppress the
- * bounce buffer handling code in the block and network device layers.
- * Platforms with separate bus address spaces _must_ turn this off and provide
- * a device DMA mapping implementation that takes care of the necessary
- * address translation.
- *
- * For now, the ia64 platforms which may have separate/multiple bus address
- * spaces all have I/O MMUs which support the merging of physically
- * discontiguous buffers, so we can use that as the sole factor to determine
- * the setting of PCI_DMA_BUS_IS_PHYS.
- */
-extern unsigned long ia64_max_iommu_merge_mask;
-#define PCI_DMA_BUS_IS_PHYS	(ia64_max_iommu_merge_mask == ~0UL)
-
 #define HAVE_PCI_MMAP
 #define ARCH_GENERIC_PCI_MMAP_RESOURCE
 #define arch_can_pci_mmap_wc()	1
arch/ia64/kernel/dma-mapping.c (-10)
···
 const struct dma_map_ops *dma_ops;
 EXPORT_SYMBOL(dma_ops);
 
-#define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16)
-
-static int __init dma_init(void)
-{
-	dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
-
-	return 0;
-}
-fs_initcall(dma_init);
-
 const struct dma_map_ops *dma_get_ops(struct device *dev)
 {
	return dma_ops;
arch/ia64/kernel/setup.c (-12)
···
 unsigned long ia64_cache_stride_shift = ~0;
 
 /*
- * The merge_mask variable needs to be set to (max(iommu_page_size(iommu)) - 1). This
- * mask specifies a mask of address bits that must be 0 in order for two buffers to be
- * mergeable by the I/O MMU (i.e., the end address of the first buffer and the start
- * address of the second buffer must be aligned to (merge_mask+1) in order to be
- * mergeable). By default, we assume there is no I/O MMU which can merge physically
- * discontiguous buffers, so we set the merge_mask to ~0UL, which corresponds to a iommu
- * page-size of 2^64.
- */
-unsigned long ia64_max_iommu_merge_mask = ~0UL;
-EXPORT_SYMBOL(ia64_max_iommu_merge_mask);
-
-/*
  * We use a special marker for the end of memory and it uses the extra (+1) slot
  */
 struct rsvd_region rsvd_region[IA64_MAX_RSVD_REGIONS + 1] __initdata;
arch/ia64/sn/kernel/io_common.c (-5)
···
	tioca_init_provider();
	tioce_init_provider();
 
-	/*
-	 * This is needed to avoid bounce limit checks in the blk layer
-	 */
-	ia64_max_iommu_merge_mask = ~PAGE_MASK;
-
	sn_irq_lh_init();
	INIT_LIST_HEAD(&sn_sysdata_list);
	sn_init_cpei_timer();
arch/m68k/include/asm/pci.h (-6)
···
 
 #include <asm-generic/pci.h>
 
-/* The PCI address space does equal the physical memory
- * address space. The networking and block device layers use
- * this boolean for bounce buffer decisions.
- */
-#define PCI_DMA_BUS_IS_PHYS	(1)
-
 #define	pcibios_assign_all_busses()	1
 
 #define	PCIBIOS_MIN_IO		0x00000100
arch/microblaze/Kconfig (-1)
···
	select HAVE_ARCH_HASH
	select HAVE_ARCH_KGDB
	select HAVE_DEBUG_KMEMLEAK
-	select HAVE_DMA_API_DEBUG
	select HAVE_DYNAMIC_FTRACE
	select HAVE_FTRACE_MCOUNT_RECORD
	select HAVE_FUNCTION_GRAPH_TRACER
arch/microblaze/include/asm/pci.h (-6)
···
 
 #define HAVE_PCI_LEGACY	1
 
-/* The PCI address space does equal the physical memory
- * address space (no IOMMU). The IDE and SCSI device layers use
- * this boolean for bounce buffer decisions.
- */
-#define PCI_DMA_BUS_IS_PHYS	(1)
-
 extern void pcibios_claim_one_bus(struct pci_bus *b);
 
 extern void pcibios_finish_adding_to_bus(struct pci_bus *bus);
arch/microblaze/kernel/dma.c (-11)
···
	.sync_sg_for_device = dma_nommu_sync_sg_for_device,
 };
 EXPORT_SYMBOL(dma_nommu_ops);
-
-/* Number of entries preallocated for DMA-API debugging */
-#define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16)
-
-static int __init dma_init(void)
-{
-	dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
-
-	return 0;
-}
-fs_initcall(dma_init);
arch/mips/Kconfig (+7 -15)
···
	select HAVE_C_RECORDMCOUNT
	select HAVE_DEBUG_KMEMLEAK
	select HAVE_DEBUG_STACKOVERFLOW
-	select HAVE_DMA_API_DEBUG
	select HAVE_DMA_CONTIGUOUS
	select HAVE_DYNAMIC_FTRACE
	select HAVE_EXIT_THREAD
···
 
 config MIPS_ALCHEMY
	bool "Alchemy processor based machines"
-	select ARCH_PHYS_ADDR_T_64BIT
+	select PHYS_ADDR_T_64BIT
	select CEVT_R4K
	select CSRC_R4K
	select IRQ_MIPS_CPU
···
	bool "Cavium Networks Octeon SoC based boards"
	select CEVT_R4K
	select ARCH_HAS_PHYS_TO_DMA
-	select ARCH_PHYS_ADDR_T_64BIT
+	select PHYS_ADDR_T_64BIT
	select DMA_COHERENT
	select SYS_SUPPORTS_64BIT_KERNEL
	select SYS_SUPPORTS_BIG_ENDIAN
···
	select MIPS_NR_CPU_NR_MAP_1024
	select BUILTIN_DTB
	select MTD_COMPLEX_MAPPINGS
+	select SWIOTLB
	select SYS_SUPPORTS_RELOCATABLE
	help
	  This option supports all of the Octeon reference boards from Cavium
···
	select SWAP_IO_SPACE
	select SYS_SUPPORTS_32BIT_KERNEL
	select SYS_SUPPORTS_64BIT_KERNEL
-	select ARCH_PHYS_ADDR_T_64BIT
+	select PHYS_ADDR_T_64BIT
	select SYS_SUPPORTS_BIG_ENDIAN
	select SYS_SUPPORTS_HIGHMEM
	select DMA_COHERENT
···
	select HW_HAS_PCI
	select SYS_SUPPORTS_32BIT_KERNEL
	select SYS_SUPPORTS_64BIT_KERNEL
-	select ARCH_PHYS_ADDR_T_64BIT
+	select PHYS_ADDR_T_64BIT
	select GPIOLIB
	select SYS_SUPPORTS_BIG_ENDIAN
	select SYS_SUPPORTS_LITTLE_ENDIAN
···
 config FW_CFE
	bool
 
-config ARCH_DMA_ADDR_T_64BIT
-	def_bool (HIGHMEM && ARCH_PHYS_ADDR_T_64BIT) || 64BIT
-
 config ARCH_SUPPORTS_UPROBES
	bool
 
···
 config DMA_NONCOHERENT
	bool
	select NEED_DMA_MAP_STATE
-
-config NEED_DMA_MAP_STATE
-	bool
 
 config SYS_HAS_EARLY_PRINTK
	bool
···
	select MIPS_PGD_C0_CONTEXT
	select MIPS_L1_CACHE_SHIFT_6
	select GPIOLIB
+	select SWIOTLB
	help
	  The Loongson 3 processor implements the MIPS64R2 instruction
	  set with many extensions.
···
	depends on SYS_SUPPORTS_HIGHMEM
	select XPA
	select HIGHMEM
-	select ARCH_PHYS_ADDR_T_64BIT
+	select PHYS_ADDR_T_64BIT
	default n
	help
	  Choose this option if you want to enable the Extended Physical
···
	depends on CPU_SB1 && CPU_SB1_PASS_2
	default y
 
-
-config ARCH_PHYS_ADDR_T_64BIT
-	bool
 
 choice
	prompt "SmartMIPS or microMIPS ASE support"
arch/mips/cavium-octeon/Kconfig (-12)
···
	help
	  Lock the kernel's implementation of memcpy() into L2.
 
-config IOMMU_HELPER
-	bool
-
-config NEED_SG_DMA_LENGTH
-	bool
-
-config SWIOTLB
-	def_bool y
-	select DMA_DIRECT_OPS
-	select IOMMU_HELPER
-	select NEED_SG_DMA_LENGTH
-
 config OCTEON_ILM
	tristate "Module to measure interrupt latency using Octeon CIU Timer"
	help
arch/mips/include/asm/pci.h (-7)
···
 #include <linux/string.h>
 #include <asm/io.h>
 
-/*
- * The PCI address space does equal the physical memory address space.
- * The networking and block device layers use this boolean for bounce
- * buffer decisions.
- */
-#define PCI_DMA_BUS_IS_PHYS	(1)
-
 #ifdef CONFIG_PCI_DOMAINS_GENERIC
 static inline int pci_proc_domain(struct pci_bus *bus)
 {
arch/mips/loongson64/Kconfig (-15)
···
	default y
	depends on EARLY_PRINTK || SERIAL_8250
 
-config IOMMU_HELPER
-	bool
-
-config NEED_SG_DMA_LENGTH
-	bool
-
-config SWIOTLB
-	bool "Soft IOMMU Support for All-Memory DMA"
-	default y
-	depends on CPU_LOONGSON3
-	select DMA_DIRECT_OPS
-	select IOMMU_HELPER
-	select NEED_SG_DMA_LENGTH
-	select NEED_DMA_MAP_STATE
-
 config PHYS48_TO_HT40
	bool
	default y if CPU_LOONGSON3
arch/mips/mm/dma-default.c (-10)
···
 
 const struct dma_map_ops *mips_dma_map_ops = &mips_default_dma_map_ops;
 EXPORT_SYMBOL(mips_dma_map_ops);
-
-#define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16)
-
-static int __init mips_dma_init(void)
-{
-	dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
-
-	return 0;
-}
-fs_initcall(mips_dma_init);
arch/mips/netlogic/Kconfig (-6)
···
 config NLM_COMMON
	bool
 
-config IOMMU_HELPER
-	bool
-
-config NEED_SG_DMA_LENGTH
-	bool
-
 endif
arch/nds32/Kconfig (+3)
···
 
 config NDS32
	def_bool y
+	select ARCH_HAS_SYNC_DMA_FOR_CPU
+	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
	select ARCH_WANT_FRAME_POINTERS if FTRACE
	select CLKSRC_MMIO
	select CLONE_BACKWARDS
	select COMMON_CLK
+	select DMA_NONCOHERENT_OPS
	select GENERIC_ASHLDI3
	select GENERIC_ASHRDI3
	select GENERIC_LSHRDI3
arch/nds32/include/asm/Kbuild (+1)
···
 generic-y += device.h
 generic-y += div64.h
 generic-y += dma.h
+generic-y += dma-mapping.h
 generic-y += emergency-restart.h
 generic-y += errno.h
 generic-y += exec.h
arch/nds32/include/asm/dma-mapping.h (-14)
-// SPDX-License-Identifier: GPL-2.0
-// Copyright (C) 2005-2017 Andes Technology Corporation
-
-#ifndef ASMNDS32_DMA_MAPPING_H
-#define ASMNDS32_DMA_MAPPING_H
-
-extern struct dma_map_ops nds32_dma_ops;
-
-static inline struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
-{
-	return &nds32_dma_ops;
-}
-
-#endif
arch/nds32/kernel/dma.c (+56 -142)
···
 
 #include <linux/types.h>
 #include <linux/mm.h>
-#include <linux/export.h>
 #include <linux/string.h>
-#include <linux/scatterlist.h>
-#include <linux/dma-mapping.h>
+#include <linux/dma-noncoherent.h>
 #include <linux/io.h>
 #include <linux/cache.h>
 #include <linux/highmem.h>
 #include <linux/slab.h>
 #include <asm/cacheflush.h>
 #include <asm/tlbflush.h>
-#include <asm/dma-mapping.h>
 #include <asm/proc-fns.h>
 
 /*
···
  */
 static pte_t *consistent_pte;
 static DEFINE_RAW_SPINLOCK(consistent_lock);
-
-enum master_type {
-	FOR_CPU = 0,
-	FOR_DEVICE = 1,
-};
 
 /*
  * VM region handling support.
···
	return c;
 }
 
-/* FIXME: attrs is not used. */
-static void *nds32_dma_alloc_coherent(struct device *dev, size_t size,
-				      dma_addr_t * handle, gfp_t gfp,
-				      unsigned long attrs)
+void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
+		gfp_t gfp, unsigned long attrs)
 {
	struct page *page;
	struct arch_vm_region *c;
···
	return NULL;
 }
 
-static void nds32_dma_free(struct device *dev, size_t size, void *cpu_addr,
-			   dma_addr_t handle, unsigned long attrs)
+void arch_dma_free(struct device *dev, size_t size, void *cpu_addr,
+		dma_addr_t handle, unsigned long attrs)
 {
	struct arch_vm_region *c;
	unsigned long flags, addr;
···
 }
 
 core_initcall(consistent_init);
-static void consistent_sync(void *vaddr, size_t size, int direction, int master_type);
-static dma_addr_t nds32_dma_map_page(struct device *dev, struct page *page,
-				     unsigned long offset, size_t size,
-				     enum dma_data_direction dir,
-				     unsigned long attrs)
+
+static inline void cache_op(phys_addr_t paddr, size_t size,
+		void (*fn)(unsigned long start, unsigned long end))
 {
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-		consistent_sync((void *)(page_address(page) + offset), size, dir, FOR_DEVICE);
-	return page_to_phys(page) + offset;
-}
+	struct page *page = pfn_to_page(paddr >> PAGE_SHIFT);
+	unsigned offset = paddr & ~PAGE_MASK;
+	size_t left = size;
+	unsigned long start;
 
-static void nds32_dma_unmap_page(struct device *dev, dma_addr_t handle,
-				 size_t size, enum dma_data_direction dir,
-				 unsigned long attrs)
-{
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-		consistent_sync(phys_to_virt(handle), size, dir, FOR_CPU);
-}
+	do {
+		size_t len = left;
 
-/*
- * Make an area consistent for devices.
- */
-static void consistent_sync(void *vaddr, size_t size, int direction, int master_type)
-{
-	unsigned long start = (unsigned long)vaddr;
-	unsigned long end = start + size;
-
-	if (master_type == FOR_CPU) {
-		switch (direction) {
-		case DMA_TO_DEVICE:
-			break;
-		case DMA_FROM_DEVICE:
-		case DMA_BIDIRECTIONAL:
-			cpu_dma_inval_range(start, end);
-			break;
-		default:
-			BUG();
-		}
-	} else {
-		/* FOR_DEVICE */
-		switch (direction) {
-		case DMA_FROM_DEVICE:
-			break;
-		case DMA_TO_DEVICE:
-		case DMA_BIDIRECTIONAL:
-			cpu_dma_wb_range(start, end);
-			break;
-		default:
-			BUG();
-		}
-	}
-}
-
-static int nds32_dma_map_sg(struct device *dev, struct scatterlist *sg,
-			    int nents, enum dma_data_direction dir,
-			    unsigned long attrs)
-{
-	int i;
-
-	for (i = 0; i < nents; i++, sg++) {
-		void *virt;
-		unsigned long pfn;
-		struct page *page = sg_page(sg);
-
-		sg->dma_address = sg_phys(sg);
-		pfn = page_to_pfn(page) + sg->offset / PAGE_SIZE;
-		page = pfn_to_page(pfn);
		if (PageHighMem(page)) {
-			virt = kmap_atomic(page);
-			consistent_sync(virt, sg->length, dir, FOR_CPU);
-			kunmap_atomic(virt);
+			void *addr;
+
+			if (offset + len > PAGE_SIZE) {
+				if (offset >= PAGE_SIZE) {
+					page += offset >> PAGE_SHIFT;
+					offset &= ~PAGE_MASK;
+				}
+				len = PAGE_SIZE - offset;
+			}
+
+			addr = kmap_atomic(page);
+			start = (unsigned long)(addr + offset);
+			fn(start, start + len);
+			kunmap_atomic(addr);
		} else {
-			if (sg->offset > PAGE_SIZE)
-				panic("sg->offset:%08x > PAGE_SIZE\n",
-				      sg->offset);
-			virt = page_address(page) + sg->offset;
-			consistent_sync(virt, sg->length, dir, FOR_CPU);
+			start = (unsigned long)phys_to_virt(paddr);
+			fn(start, start + size);
		}
-	}
-	return nents;
+		offset = 0;
+		page++;
+		left -= len;
+	} while (left);
 }
 
-static void nds32_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
-			       int nhwentries, enum dma_data_direction dir,
-			       unsigned long attrs)
+void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
+		size_t size, enum dma_data_direction dir)
 {
-}
-
-static void
-nds32_dma_sync_single_for_cpu(struct device *dev, dma_addr_t handle,
-			      size_t size, enum dma_data_direction dir)
-{
-	consistent_sync((void *)phys_to_virt(handle), size, dir, FOR_CPU);
-}
-
-static void
-nds32_dma_sync_single_for_device(struct device *dev, dma_addr_t handle,
-				 size_t size, enum dma_data_direction dir)
-{
-	consistent_sync((void *)phys_to_virt(handle), size, dir, FOR_DEVICE);
-}
-
-static void
-nds32_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nents,
-			  enum dma_data_direction dir)
-{
-	int i;
-
-	for (i = 0; i < nents; i++, sg++) {
-		char *virt =
-		    page_address((struct page *)sg->page_link) + sg->offset;
-		consistent_sync(virt, sg->length, dir, FOR_CPU);
+	switch (dir) {
+	case DMA_FROM_DEVICE:
+		break;
+	case DMA_TO_DEVICE:
+	case DMA_BIDIRECTIONAL:
+		cache_op(paddr, size, cpu_dma_wb_range);
+		break;
+	default:
+		BUG();
	}
 }
 
-static void
-nds32_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
-			     int nents, enum dma_data_direction dir)
+void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
+		size_t size, enum dma_data_direction dir)
 {
-	int i;
-
-	for (i = 0; i < nents; i++, sg++) {
-		char *virt =
-		    page_address((struct page *)sg->page_link) + sg->offset;
-		consistent_sync(virt, sg->length, dir, FOR_DEVICE);
+	switch (dir) {
+	case DMA_TO_DEVICE:
+		break;
+	case DMA_FROM_DEVICE:
+	case DMA_BIDIRECTIONAL:
+		cache_op(paddr, size, cpu_dma_inval_range);
+		break;
+	default:
+		BUG();
	}
 }
-
-struct dma_map_ops nds32_dma_ops = {
-	.alloc = nds32_dma_alloc_coherent,
-	.free = nds32_dma_free,
-	.map_page = nds32_dma_map_page,
-	.unmap_page = nds32_dma_unmap_page,
-	.map_sg = nds32_dma_map_sg,
-	.unmap_sg = nds32_dma_unmap_sg,
-	.sync_single_for_device = nds32_dma_sync_single_for_device,
-	.sync_single_for_cpu = nds32_dma_sync_single_for_cpu,
-	.sync_sg_for_cpu = nds32_dma_sync_sg_for_cpu,
-	.sync_sg_for_device = nds32_dma_sync_sg_for_device,
-};
-
-EXPORT_SYMBOL(nds32_dma_ops);
arch/openrisc/kernel/dma.c (-11)
···
	.sync_single_for_device = or1k_sync_single_for_device,
 };
 EXPORT_SYMBOL(or1k_dma_map_ops);
-
-/* Number of entries preallocated for DMA-API debugging */
-#define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16)
-
-static int __init dma_init(void)
-{
-	dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
-
-	return 0;
-}
-fs_initcall(dma_init);
arch/parisc/Kconfig (+2 -6)
···
	select GENERIC_CLOCKEVENTS
	select ARCH_NO_COHERENT_DMA_MMAP
	select CPU_NO_EFFICIENT_FFS
+	select NEED_DMA_MAP_STATE
+	select NEED_SG_DMA_LENGTH
 
	help
	  The PA-RISC microprocessor is designed by Hewlett-Packard and used
···
	bool
 
 config STACKTRACE_SUPPORT
-	def_bool y
-
-config NEED_DMA_MAP_STATE
-	def_bool y
-
-config NEED_SG_DMA_LENGTH
	def_bool y
 
 config ISA_DMA_API
arch/parisc/include/asm/pci.h (-23)
···
 #endif	/* !CONFIG_64BIT */
 
 /*
- * If the PCI device's view of memory is the same as the CPU's view of memory,
- * PCI_DMA_BUS_IS_PHYS is true. The networking and block device layers use
- * this boolean for bounce buffer decisions.
- */
-#ifdef CONFIG_PA20
-/* All PA-2.0 machines have an IOMMU. */
-#define PCI_DMA_BUS_IS_PHYS	0
-#define parisc_has_iommu()	do { } while (0)
-#else
-
-#if defined(CONFIG_IOMMU_CCIO) || defined(CONFIG_IOMMU_SBA)
-extern int parisc_bus_is_phys;	/* in arch/parisc/kernel/setup.c */
-#define PCI_DMA_BUS_IS_PHYS	parisc_bus_is_phys
-#define parisc_has_iommu()	do { parisc_bus_is_phys = 0; } while (0)
-#else
-#define PCI_DMA_BUS_IS_PHYS	1
-#define parisc_has_iommu()	do { } while (0)
-#endif
-
-#endif	/* !CONFIG_PA20 */
-
-
-/*
 ** Most PCI devices (eg Tulip, NCR720) also export the same registers
 ** to both MMIO and I/O port space. Due to poor performance of I/O Port
 ** access under HP PCI bus adapters, strongly recommend the use of MMIO
arch/parisc/kernel/setup.c (-5)
···
 struct proc_dir_entry * proc_gsc_root __read_mostly = NULL;
 struct proc_dir_entry * proc_mckinley_root __read_mostly = NULL;
 
-#if !defined(CONFIG_PA20) && (defined(CONFIG_IOMMU_CCIO) || defined(CONFIG_IOMMU_SBA))
-int parisc_bus_is_phys __read_mostly = 1;	/* Assume no IOMMU is present */
-EXPORT_SYMBOL(parisc_bus_is_phys);
-#endif
-
 void __init setup_cmdline(char **cmdline_p)
 {
	extern unsigned int boot_args[];
arch/powerpc/Kconfig (+2 -23)
···
	bool
	default y if PPC64
 
-config ARCH_PHYS_ADDR_T_64BIT
-	def_bool PPC64 || PHYS_64BIT
-
-config ARCH_DMA_ADDR_T_64BIT
-	def_bool ARCH_PHYS_ADDR_T_64BIT
-
 config MMU
	bool
	default y
···
	select HAVE_CONTEXT_TRACKING		if PPC64
	select HAVE_DEBUG_KMEMLEAK
	select HAVE_DEBUG_STACKOVERFLOW
-	select HAVE_DMA_API_DEBUG
	select HAVE_DYNAMIC_FTRACE
	select HAVE_DYNAMIC_FTRACE_WITH_REGS	if MPROFILE_KERNEL
	select HAVE_EBPF_JIT			if PPC64
···
	select HAVE_SYSCALL_TRACEPOINTS
	select HAVE_VIRT_CPU_ACCOUNTING
	select HAVE_IRQ_TIME_ACCOUNTING
+	select IOMMU_HELPER			if PPC64
	select IRQ_DOMAIN
	select IRQ_FORCED_THREADING
	select MODULES_USE_ELF_RELA
+	select NEED_SG_DMA_LENGTH
	select NO_BOOTMEM
	select OF
	select OF_EARLY_FLATTREE
···
 config MPROFILE_KERNEL
	depends on PPC64 && CPU_LITTLE_ENDIAN
	def_bool !DISABLE_MPROFILE_KERNEL
-
-config IOMMU_HELPER
-	def_bool PPC64
-
-config SWIOTLB
-	bool "SWIOTLB support"
-	default n
-	select IOMMU_HELPER
-	---help---
-	  Support for IO bounce buffering for systems without an IOMMU.
-	  This allows us to DMA to the full physical address space on
-	  platforms where the size of a physical address is larger
-	  than the bus address. Not all platforms support this.
 
 config HOTPLUG_CPU
	bool "Support for enabling/disabling CPUs"
···
 
 config NEED_DMA_MAP_STATE
	def_bool (PPC64 || NOT_COHERENT_CACHE)
-
-config NEED_SG_DMA_LENGTH
-	def_bool y
 
 config GENERIC_ISA_DMA
	bool
arch/powerpc/include/asm/pci.h (-18)
···
 
 #define HAVE_PCI_LEGACY	1
 
-#ifdef CONFIG_PPC64
-
-/* The PCI address space does not equal the physical memory address
- * space (we have an IOMMU). The IDE and SCSI device layers use
- * this boolean for bounce buffer decisions.
- */
-#define PCI_DMA_BUS_IS_PHYS	(0)
-
-#else /* 32-bit */
-
-/* The PCI address space does equal the physical memory
- * address space (no IOMMU). The IDE and SCSI device layers use
- * this boolean for bounce buffer decisions.
- */
-#define PCI_DMA_BUS_IS_PHYS	(1)
-
-#endif /* CONFIG_PPC64 */
-
 extern void pcibios_claim_one_bus(struct pci_bus *b);
 
 extern void pcibios_finish_adding_to_bus(struct pci_bus *bus);
arch/powerpc/kernel/dma.c (-3)
···
 }
 EXPORT_SYMBOL(dma_set_coherent_mask);
 
-#define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16)
-
 int dma_set_mask(struct device *dev, u64 dma_mask)
 {
	if (ppc_md.dma_set_mask)
···
 
 static int __init dma_init(void)
 {
-	dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
 #ifdef CONFIG_PCI
	dma_debug_add_bus(&pci_bus_type);
 #endif
arch/powerpc/platforms/Kconfig.cputype (+1)
···
 config PHYS_64BIT
	bool 'Large physical address support' if E500 || PPC_86xx
	depends on (44x || E500 || PPC_86xx) && !PPC_83xx && !PPC_82xx
+	select PHYS_ADDR_T_64BIT
	---help---
	  This option enables kernel support for larger than 32-bit physical
	  addresses. This feature may not be available on all cores.
arch/riscv/Kconfig (+10 -34)
···
 # see Documentation/kbuild/kconfig-language.txt.
 #
 
+config 64BIT
+	bool
+
+config 32BIT
+	bool
+
 config RISCV
	def_bool y
+	# even on 32-bit, physical (and DMA) addresses are > 32-bits
+	select PHYS_ADDR_T_64BIT
	select OF
	select OF_EARLY_FLATTREE
	select OF_IRQ
···
	select GENERIC_ATOMIC64 if !64BIT || !RISCV_ISA_A
	select HAVE_MEMBLOCK
	select HAVE_MEMBLOCK_NODE_MAP
-	select HAVE_DMA_API_DEBUG
	select HAVE_DMA_CONTIGUOUS
	select HAVE_GENERIC_DMA_COHERENT
	select IRQ_DOMAIN
···
 config MMU
	def_bool y
 
-# even on 32-bit, physical (and DMA) addresses are > 32-bits
-config ARCH_PHYS_ADDR_T_64BIT
-	def_bool y
-
 config ZONE_DMA32
	bool
-	default y
-
-config ARCH_DMA_ADDR_T_64BIT
-	def_bool y
+	default y if 64BIT
 
 config PAGE_OFFSET
	hex
···
 
 config ARCH_RV32I
	bool "RV32I"
-	select CPU_SUPPORTS_32BIT_KERNEL
	select 32BIT
	select GENERIC_ASHLDI3
	select GENERIC_ASHRDI3
···
 
 config ARCH_RV64I
	bool "RV64I"
-	select CPU_SUPPORTS_64BIT_KERNEL
	select 64BIT
	select HAVE_FUNCTION_TRACER
	select HAVE_FUNCTION_GRAPH_TRACER
	select HAVE_FTRACE_MCOUNT_RECORD
	select HAVE_DYNAMIC_FTRACE
	select HAVE_DYNAMIC_FTRACE_WITH_REGS
+	select SWIOTLB
 
 endchoice
 
···
	depends on SMP
	default "8"
 
-config CPU_SUPPORTS_32BIT_KERNEL
-	bool
-config CPU_SUPPORTS_64BIT_KERNEL
-	bool
-
 choice
	prompt "CPU Tuning"
	default TUNE_GENERIC
···
 endmenu
 
 menu "Kernel type"
-
-choice
-	prompt "Kernel code model"
-	default 64BIT
-
-config 32BIT
-	bool "32-bit kernel"
-	depends on CPU_SUPPORTS_32BIT_KERNEL
-	help
-	  Select this option to build a 32-bit kernel.
-
-config 64BIT
-	bool "64-bit kernel"
-	depends on CPU_SUPPORTS_64BIT_KERNEL
-	help
-	  Select this option to build a 64-bit kernel.
-
-endchoice
 
 source "mm/Kconfig"
 
arch/riscv/include/asm/dma-mapping.h (+15)
+// SPDX-License-Identifier: GPL-2.0
+#ifndef _RISCV_ASM_DMA_MAPPING_H
+#define _RISCV_ASM_DMA_MAPPING_H 1
+
+#ifdef CONFIG_SWIOTLB
+#include <linux/swiotlb.h>
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
+{
+	return &swiotlb_dma_ops;
+}
+#else
+#include <asm-generic/dma-mapping.h>
+#endif /* CONFIG_SWIOTLB */
+
+#endif /* _RISCV_ASM_DMA_MAPPING_H */
arch/riscv/include/asm/pci.h (-3)
···
 /* RISC-V shim does not initialize PCI bus */
 #define pcibios_assign_all_busses() 1
 
-/* We do not have an IOMMU */
-#define PCI_DMA_BUS_IS_PHYS 1
-
 extern int isa_dma_bridge_buggy;
 
 #ifdef CONFIG_PCI
arch/riscv/kernel/setup.c (+2)
···
 #include <linux/of_fdt.h>
 #include <linux/of_platform.h>
 #include <linux/sched/task.h>
+#include <linux/swiotlb.h>
 
 #include <asm/setup.h>
 #include <asm/sections.h>
···
	setup_bootmem();
	paging_init();
	unflatten_device_tree();
+	swiotlb_init(1);
 
 #ifdef CONFIG_SMP
	setup_smp();
arch/s390/Kconfig (+4 -13)
···
 config GENERIC_BUG_RELATIVE_POINTERS
	def_bool y
 
-config ARCH_DMA_ADDR_T_64BIT
-	def_bool y
-
 config GENERIC_LOCKBREAK
	def_bool y if SMP && PREEMPT
 
···
	select HAVE_CMPXCHG_LOCAL
	select HAVE_COPY_THREAD_TLS
	select HAVE_DEBUG_KMEMLEAK
-	select HAVE_DMA_API_DEBUG
	select HAVE_DMA_CONTIGUOUS
	select DMA_DIRECT_OPS
	select HAVE_DYNAMIC_FTRACE
···
 menuconfig PCI
	bool "PCI support"
	select PCI_MSI
+	select IOMMU_HELPER
	select IOMMU_SUPPORT
+	select NEED_DMA_MAP_STATE
+	select NEED_SG_DMA_LENGTH
+
	help
	  Enable PCI support.
 
···
	def_bool PCI
 
 config HAS_IOMEM
-	def_bool PCI
-
-config IOMMU_HELPER
-	def_bool PCI
-
-config NEED_SG_DMA_LENGTH
-	def_bool PCI
-
-config NEED_DMA_MAP_STATE
	def_bool PCI
 
 config CHSC_SCH
arch/s390/include/asm/pci.h (-2)
···
 #ifndef __ASM_S390_PCI_H
 #define __ASM_S390_PCI_H
 
-/* must be set before including asm-generic/pci.h */
-#define PCI_DMA_BUS_IS_PHYS	(0)
 /* must be set before including pci_clp.h */
 #define PCI_BAR_COUNT	6
 
arch/s390/pci/pci_dma.c (-11)
···
	kmem_cache_destroy(dma_region_table_cache);
 }
 
-#define PREALLOC_DMA_DEBUG_ENTRIES	(1 << 16)
-
-static int __init dma_debug_do_init(void)
-{
-	dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
-	return 0;
-}
-fs_initcall(dma_debug_do_init);
-
 const struct dma_map_ops s390_pci_dma_ops = {
	.alloc		= s390_dma_alloc,
	.free		= s390_dma_free,
···
	.map_page	= s390_dma_map_pages,
	.unmap_page	= s390_dma_unmap_pages,
	.mapping_error	= s390_mapping_error,
-	/* if we support direct DMA this must be conditional */
-	.is_phys	= 0,
	/* dma_supported is unconditionally true without a callback */
 };
 EXPORT_SYMBOL_GPL(s390_pci_dma_ops);
+3 -7
arch/sh/Kconfig
···
14 14 select HAVE_OPROFILE
15 15 select HAVE_GENERIC_DMA_COHERENT
16 16 select HAVE_ARCH_TRACEHOOK
17 - select HAVE_DMA_API_DEBUG
18 17 select HAVE_PERF_EVENTS
19 18 select HAVE_DEBUG_BUGVERBOSE
20 19 select ARCH_HAVE_CUSTOM_GPIO_H
···
50 51 select HAVE_ARCH_AUDITSYSCALL
51 52 select HAVE_FUTEX_CMPXCHG if FUTEX
52 53 select HAVE_NMI
54 + select NEED_DMA_MAP_STATE
55 + select NEED_SG_DMA_LENGTH
56 +
53 57 help
54 58 The SuperH is a RISC processor targeted for use in embedded systems
55 59 and consumer electronics; it was also used in the Sega Dreamcast
···
162 160
163 161 config DMA_NONCOHERENT
164 162 def_bool !DMA_COHERENT
165 -
166 - config NEED_DMA_MAP_STATE
167 - def_bool DMA_NONCOHERENT
168 -
169 - config NEED_SG_DMA_LENGTH
170 - def_bool y
171 163
172 164 config PGTABLE_LEVELS
173 165 default 3 if X2TLB
-6
arch/sh/include/asm/pci.h
···
71 71 * SuperH has everything mapped statically like x86.
72 72 */
73 73
74 - /* The PCI address space does equal the physical memory
75 - * address space. The networking and block device layers use
76 - * this boolean for bounce buffer decisions.
77 - */
78 - #define PCI_DMA_BUS_IS_PHYS (dma_ops->is_phys)
79 -
80 74 #ifdef CONFIG_PCI
81 75 /*
82 76 * None of the SH PCI controllers support MWI, it is always treated as a
-1
arch/sh/kernel/dma-nommu.c
···
78 78 .sync_single_for_device = nommu_sync_single_for_device,
79 79 .sync_sg_for_device = nommu_sync_sg_for_device,
80 80 #endif
81 - .is_phys = 1,
82 81 };
83 82
84 83 void __init no_iommu_init(void)
-9
arch/sh/mm/consistent.c
···
20 20 #include <asm/cacheflush.h>
21 21 #include <asm/addrspace.h>
22 22
23 - #define PREALLOC_DMA_DEBUG_ENTRIES 4096
24 -
25 23 const struct dma_map_ops *dma_ops;
26 24 EXPORT_SYMBOL(dma_ops);
27 -
28 - static int __init dma_init(void)
29 - {
30 - dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
31 - return 0;
32 - }
33 - fs_initcall(dma_init);
34 25
35 26 void *dma_generic_alloc_coherent(struct device *dev, size_t size,
36 27 dma_addr_t *dma_handle, gfp_t gfp,
+3 -15
arch/sparc/Kconfig
···
25 25 select RTC_CLASS
26 26 select RTC_DRV_M48T59
27 27 select RTC_SYSTOHC
28 - select HAVE_DMA_API_DEBUG
29 28 select HAVE_ARCH_JUMP_LABEL if SPARC64
30 29 select GENERIC_IRQ_SHOW
31 30 select ARCH_WANT_IPC_PARSE_VERSION
···
43 44 select ARCH_HAS_SG_CHAIN
44 45 select CPU_NO_EFFICIENT_FFS
45 46 select LOCKDEP_SMALL if LOCKDEP
47 + select NEED_DMA_MAP_STATE
48 + select NEED_SG_DMA_LENGTH
46 49
47 50 config SPARC32
48 51 def_bool !64BIT
···
68 67 select HAVE_SYSCALL_TRACEPOINTS
69 68 select HAVE_CONTEXT_TRACKING
70 69 select HAVE_DEBUG_KMEMLEAK
70 + select IOMMU_HELPER
71 71 select SPARSE_IRQ
72 72 select RTC_DRV_CMOS
73 73 select RTC_DRV_BQ4802
···
101 99 def_bool y
102 100
103 101 config ARCH_ATU
104 - bool
105 - default y if SPARC64
106 -
107 - config ARCH_DMA_ADDR_T_64BIT
108 - bool
109 - default y if ARCH_ATU
110 -
111 - config IOMMU_HELPER
112 102 bool
113 103 default y if SPARC64
114 104
···
139 145 config ZONE_DMA
140 146 bool
141 147 default y if SPARC32
142 -
143 - config NEED_DMA_MAP_STATE
144 - def_bool y
145 -
146 - config NEED_SG_DMA_LENGTH
147 - def_bool y
148 148
149 149 config GENERIC_ISA_DMA
150 150 bool
+1 -1
arch/sparc/include/asm/iommu_64.h
···
17 17 #define IOPTE_WRITE 0x0000000000000002UL
18 18
19 19 #define IOMMU_NUM_CTXS 4096
20 - #include <linux/iommu-common.h>
20 + #include <asm/iommu-common.h>
21 21
22 22 struct iommu_arena {
23 23 unsigned long *map;
-4
arch/sparc/include/asm/pci_32.h
···
17 17
18 18 #define PCI_IRQ_NONE 0xffffffff
19 19
20 - /* Dynamic DMA mapping stuff.
21 - */
22 - #define PCI_DMA_BUS_IS_PHYS (0)
23 -
24 20 #endif /* __KERNEL__ */
25 21
26 22 #ifndef CONFIG_LEON_PCI
-6
arch/sparc/include/asm/pci_64.h
···
17 17
18 18 #define PCI_IRQ_NONE 0xffffffff
19 19
20 - /* The PCI address space does not equal the physical memory
21 - * address space. The networking and block device layers use
22 - * this boolean for bounce buffer decisions.
23 - */
24 - #define PCI_DMA_BUS_IS_PHYS (0)
25 -
26 20 /* PCI IOMMU mapping bypass support. */
27 21
28 22 /* PCI 64-bit addressing works for all slots on all controller
+1 -3
arch/sparc/kernel/Makefile
···
59 59
60 60 obj-$(CONFIG_SPARC64) += reboot.o
61 61 obj-$(CONFIG_SPARC64) += sysfs.o
62 - obj-$(CONFIG_SPARC64) += iommu.o
62 + obj-$(CONFIG_SPARC64) += iommu.o iommu-common.o
63 63 obj-$(CONFIG_SPARC64) += central.o
64 64 obj-$(CONFIG_SPARC64) += starfire.o
65 65 obj-$(CONFIG_SPARC64) += power.o
···
73 73 obj-$(CONFIG_SPARC64) += pcr.o
74 74 obj-$(CONFIG_SPARC64) += nmi.o
75 75 obj-$(CONFIG_SPARC64_SMP) += cpumap.o
76 -
77 - obj-y += dma.o
78 76
79 77 obj-$(CONFIG_PCIC_PCI) += pcic.o
80 78 obj-$(CONFIG_LEON_PCI) += leon_pci.o
-13
arch/sparc/kernel/dma.c
···
1 - // SPDX-License-Identifier: GPL-2.0
2 - #include <linux/kernel.h>
3 - #include <linux/dma-mapping.h>
4 - #include <linux/dma-debug.h>
5 -
6 - #define PREALLOC_DMA_DEBUG_ENTRIES (1 << 15)
7 -
8 - static int __init dma_init(void)
9 - {
10 - dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
11 - return 0;
12 - }
13 - fs_initcall(dma_init);
+1 -1
arch/sparc/kernel/iommu.c
···
14 14 #include <linux/errno.h>
15 15 #include <linux/iommu-helper.h>
16 16 #include <linux/bitmap.h>
17 - #include <linux/iommu-common.h>
17 + #include <asm/iommu-common.h>
18 18
19 19 #ifdef CONFIG_PCI
20 20 #include <linux/pci.h>
+1 -1
arch/sparc/kernel/ldc.c
···
16 16 #include <linux/list.h>
17 17 #include <linux/init.h>
18 18 #include <linux/bitmap.h>
19 - #include <linux/iommu-common.h>
19 + #include <asm/iommu-common.h>
20 20
21 21 #include <asm/hypervisor.h>
22 22 #include <asm/iommu.h>
+1 -1
arch/sparc/kernel/pci_sun4v.c
···
16 16 #include <linux/export.h>
17 17 #include <linux/log2.h>
18 18 #include <linux/of_device.h>
19 - #include <linux/iommu-common.h>
19 + #include <asm/iommu-common.h>
20 20
21 21 #include <asm/iommu.h>
22 22 #include <asm/irq.h>
+2 -3
arch/unicore32/Kconfig
···
19 19 select ARCH_WANT_FRAME_POINTERS
20 20 select GENERIC_IOMAP
21 21 select MODULES_USE_ELF_REL
22 + select NEED_DMA_MAP_STATE
23 + select SWIOTLB
22 24 help
23 25 UniCore-32 is 32-bit Instruction Set Architecture,
24 26 including a series of low-power-consumption RISC chip
···
62 60
63 61 config ZONE_DMA
64 62 def_bool y
65 -
66 - config NEED_DMA_MAP_STATE
67 - def_bool y
68 63
69 64 source "init/Kconfig"
70 65
-11
arch/unicore32/mm/Kconfig
···
39 39 default y
40 40 help
41 41 Say Y here to disable the TLB single entry operations.
42 -
43 - config SWIOTLB
44 - def_bool y
45 - select DMA_DIRECT_OPS
46 -
47 - config IOMMU_HELPER
48 - def_bool SWIOTLB
49 -
50 - config NEED_SG_DMA_LENGTH
51 - def_bool SWIOTLB
52 -
+6 -30
arch/x86/Kconfig
···
28 28 select ARCH_USE_CMPXCHG_LOCKREF
29 29 select HAVE_ARCH_SOFT_DIRTY
30 30 select MODULES_USE_ELF_RELA
31 + select NEED_DMA_MAP_STATE
32 + select SWIOTLB
31 33 select X86_DEV_DMA_OPS
32 34 select ARCH_HAS_SYSCALL_WRAPPER
33 35
···
136 134 select HAVE_C_RECORDMCOUNT
137 135 select HAVE_DEBUG_KMEMLEAK
138 136 select HAVE_DEBUG_STACKOVERFLOW
139 - select HAVE_DMA_API_DEBUG
140 137 select HAVE_DMA_CONTIGUOUS
141 138 select HAVE_DYNAMIC_FTRACE
142 139 select HAVE_DYNAMIC_FTRACE_WITH_REGS
···
185 184 select HAVE_UNSTABLE_SCHED_CLOCK
186 185 select HAVE_USER_RETURN_NOTIFIER
187 186 select IRQ_FORCED_THREADING
187 + select NEED_SG_DMA_LENGTH
188 188 select PCI_LOCKLESS_CONFIG
189 189 select PERF_EVENTS
190 190 select RTC_LIB
···
237 235
238 236 config SBUS
239 237 bool
240 -
241 - config NEED_DMA_MAP_STATE
242 - def_bool y
243 - depends on X86_64 || INTEL_IOMMU || DMA_API_DEBUG || SWIOTLB
244 -
245 - config NEED_SG_DMA_LENGTH
246 - def_bool y
247 238
248 239 config GENERIC_ISA_DMA
249 240 def_bool y
···
870 875
871 876 config GART_IOMMU
872 877 bool "Old AMD GART IOMMU support"
878 + select IOMMU_HELPER
873 879 select SWIOTLB
874 880 depends on X86_64 && PCI && AMD_NB
875 881 ---help---
···
892 896
893 897 config CALGARY_IOMMU
894 898 bool "IBM Calgary IOMMU support"
899 + select IOMMU_HELPER
895 900 select SWIOTLB
896 901 depends on X86_64 && PCI
897 902 ---help---
···
919 922 used even if it exists. If you choose 'n' and would like to use
920 923 Calgary anyway, pass 'iommu=calgary' on the kernel command line.
921 924 If unsure, say Y.
922 -
923 - # need this always selected by IOMMU for the VIA workaround
924 - config SWIOTLB
925 - def_bool y if X86_64
926 - ---help---
927 - Support for software bounce buffers used on x86-64 systems
928 - which don't have a hardware IOMMU. Using this PCI devices
929 - which can only access 32-bits of memory can be used on systems
930 - with more than 3 GB of memory.
931 - If unsure, say Y.
932 -
933 - config IOMMU_HELPER
934 - def_bool y
935 - depends on CALGARY_IOMMU || GART_IOMMU || SWIOTLB || AMD_IOMMU
936 925
937 926 config MAXSMP
938 927 bool "Enable Maximum number of SMP Processors and NUMA Nodes"
···
1441 1458 config X86_PAE
1442 1459 bool "PAE (Physical Address Extension) Support"
1443 1460 depends on X86_32 && !HIGHMEM4G
1461 + select PHYS_ADDR_T_64BIT
1444 1462 select SWIOTLB
1445 1463 ---help---
1446 1464 PAE is required for NX support, and furthermore enables
···
1468 1484 information.
1469 1485
1470 1486 Say N if unsure.
1471 -
1472 - config ARCH_PHYS_ADDR_T_64BIT
1473 - def_bool y
1474 - depends on X86_64 || X86_PAE
1475 -
1476 - config ARCH_DMA_ADDR_T_64BIT
1477 - def_bool y
1478 - depends on X86_64 || HIGHMEM64G
1479 1487
1480 1488 config X86_DIRECT_GBPAGES
1481 1489 def_bool y
+1 -4
arch/x86/include/asm/dma-mapping.h
···
30 30 return dma_ops;
31 31 }
32 32
33 - int arch_dma_supported(struct device *dev, u64 mask);
34 - #define arch_dma_supported arch_dma_supported
35 -
36 - bool arch_dma_alloc_attrs(struct device **dev, gfp_t *gfp);
33 + bool arch_dma_alloc_attrs(struct device **dev);
37 34 #define arch_dma_alloc_attrs arch_dma_alloc_attrs
38 35
39 36 #endif
-3
arch/x86/include/asm/pci.h
···
117 117 #define native_setup_msi_irqs NULL
118 118 #define native_teardown_msi_irq NULL
119 119 #endif
120 -
121 - #define PCI_DMA_BUS_IS_PHYS (dma_ops->is_phys)
122 -
123 120 #endif /* __KERNEL__ */
124 121
125 122 #ifdef CONFIG_X86_64
+14 -44
arch/x86/kernel/pci-dma.c
···
15 15 #include <asm/x86_init.h>
16 16 #include <asm/iommu_table.h>
17 17
18 - static int forbid_dac __read_mostly;
18 + static bool disable_dac_quirk __read_mostly;
19 19
20 20 const struct dma_map_ops *dma_ops = &dma_direct_ops;
21 21 EXPORT_SYMBOL(dma_ops);
22 -
23 - static int iommu_sac_force __read_mostly;
24 22
25 23 #ifdef CONFIG_IOMMU_DEBUG
26 24 int panic_on_overflow __read_mostly = 1;
···
53 55 };
54 56 EXPORT_SYMBOL(x86_dma_fallback_dev);
55 57
56 - /* Number of entries preallocated for DMA-API debugging */
57 - #define PREALLOC_DMA_DEBUG_ENTRIES 65536
58 -
59 58 void __init pci_iommu_alloc(void)
60 59 {
61 60 struct iommu_table_entry *p;
···
71 76 }
72 77 }
73 78
74 - bool arch_dma_alloc_attrs(struct device **dev, gfp_t *gfp)
79 + bool arch_dma_alloc_attrs(struct device **dev)
75 80 {
76 81 if (!*dev)
77 82 *dev = &x86_dma_fallback_dev;
···
120 125 if (!strncmp(p, "nomerge", 7))
121 126 iommu_merge = 0;
122 127 if (!strncmp(p, "forcesac", 8))
123 - iommu_sac_force = 1;
128 + pr_warn("forcesac option ignored.\n");
124 129 if (!strncmp(p, "allowdac", 8))
125 - forbid_dac = 0;
130 + pr_warn("allowdac option ignored.\n");
126 131 if (!strncmp(p, "nodac", 5))
127 - forbid_dac = 1;
132 + pr_warn("nodac option ignored.\n");
128 133 if (!strncmp(p, "usedac", 6)) {
129 - forbid_dac = -1;
134 + disable_dac_quirk = true;
130 135 return 1;
131 136 }
132 137 #ifdef CONFIG_SWIOTLB
···
151 156 }
152 157 early_param("iommu", iommu_setup);
153 158
154 - int arch_dma_supported(struct device *dev, u64 mask)
155 - {
156 - #ifdef CONFIG_PCI
157 - if (mask > 0xffffffff && forbid_dac > 0) {
158 - dev_info(dev, "PCI: Disallowing DAC for device\n");
159 - return 0;
160 - }
161 - #endif
162 -
163 - /* Tell the device to use SAC when IOMMU force is on. This
164 - allows the driver to use cheaper accesses in some cases.
165 -
166 - Problem with this is that if we overflow the IOMMU area and
167 - return DAC as fallback address the device may not handle it
168 - correctly.
169 -
170 - As a special case some controllers have a 39bit address
171 - mode that is as efficient as 32bit (aic79xx). Don't force
172 - SAC for these. Assume all masks <= 40 bits are of this
173 - type. Normally this doesn't make any difference, but gives
174 - more gentle handling of IOMMU overflow. */
175 - if (iommu_sac_force && (mask >= DMA_BIT_MASK(40))) {
176 - dev_info(dev, "Force SAC with mask %Lx\n", mask);
177 - return 0;
178 - }
179 -
180 - return 1;
181 - }
182 - EXPORT_SYMBOL(arch_dma_supported);
183 -
184 159 static int __init pci_iommu_init(void)
185 160 {
186 161 struct iommu_table_entry *p;
187 - dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
188 162
189 163 #ifdef CONFIG_PCI
190 164 dma_debug_add_bus(&pci_bus_type);
···
173 209 #ifdef CONFIG_PCI
174 210 /* Many VIA bridges seem to corrupt data for DAC. Disable it here */
175 211
212 + static int via_no_dac_cb(struct pci_dev *pdev, void *data)
213 + {
214 + pdev->dev.dma_32bit_limit = true;
215 + return 0;
216 + }
217 +
176 218 static void via_no_dac(struct pci_dev *dev)
177 219 {
178 - if (forbid_dac == 0) {
220 + if (!disable_dac_quirk) {
179 221 dev_info(&dev->dev, "disabling DAC on VIA PCI bridge\n");
180 - forbid_dac = 1;
222 + pci_walk_bus(dev->subordinate, via_no_dac_cb, NULL);
181 223 }
182 224 }
183 225 DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_VIA, PCI_ANY_ID,
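The VIA quirk above no longer flips a global `forbid_dac`; it records the limitation per device via the new `dma_32bit_limit` flag, and the generic mask check can then refuse wide masks. A minimal userspace sketch of that gating logic — `fake_device` and `fake_dma_supported` are illustrative stand-ins, not the kernel API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* DMA_BIT_MASK as defined in linux/dma-mapping.h. */
#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

/* Hypothetical stand-in for struct device, reduced to the two fields the
 * check needs; dma_32bit_limit is what a bridge quirk such as
 * via_no_dac_cb() would set on every device behind the bridge. */
struct fake_device {
	uint64_t dma_mask;
	bool dma_32bit_limit;
};

/* Sketch of the check: a request for a mask wider than 32 bits is
 * refused when a bridge quirk has flagged the device. */
static bool fake_dma_supported(const struct fake_device *dev, uint64_t mask)
{
	if (mask > DMA_BIT_MASK(32) && dev->dma_32bit_limit)
		return false;
	return true;
}
```

The advantage over the old global is that only devices behind the broken bridge are restricted; everything else on the system can still use DAC addressing.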
-1
arch/xtensa/Kconfig
···
19 19 select HAVE_ARCH_KASAN if MMU
20 20 select HAVE_CC_STACKPROTECTOR
21 21 select HAVE_DEBUG_KMEMLEAK
22 - select HAVE_DMA_API_DEBUG
23 22 select HAVE_DMA_CONTIGUOUS
24 23 select HAVE_EXIT_THREAD
25 24 select HAVE_FUNCTION_TRACER
-2
arch/xtensa/include/asm/pci.h
···
42 42 * decisions.
43 43 */
44 44
45 - #define PCI_DMA_BUS_IS_PHYS (1)
46 -
47 45 /* Tell PCI code what kind of PCI resource mappings we support */
48 46 #define HAVE_PCI_MMAP 1
49 47 #define ARCH_GENERIC_PCI_MMAP_RESOURCE 1
-9
arch/xtensa/kernel/pci-dma.c
···
261 261 .mapping_error = xtensa_dma_mapping_error,
262 262 };
263 263 EXPORT_SYMBOL(xtensa_dma_map_ops);
264 -
265 - #define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16)
266 -
267 - static int __init xtensa_dma_init(void)
268 - {
269 - dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
270 - return 0;
271 - }
272 - fs_initcall(xtensa_dma_init);
+4 -1
drivers/amba/bus.c
···
20 20 #include <linux/sizes.h>
21 21 #include <linux/limits.h>
22 22 #include <linux/clk/clk-conf.h>
23 + #include <linux/platform_device.h>
23 24
24 25 #include <asm/irq.h>
25 26
···
194 193 /*
195 194 * Primecells are part of the Advanced Microcontroller Bus Architecture,
196 195 * so we call the bus "amba".
196 + * The DMA configuration for the platform and AMBA buses is the same, so
197 + + * we reuse the platform bus's DMA configuration routine here.
197 198 */
198 199 struct bus_type amba_bustype = {
199 200 .name = "amba",
200 201 .dev_groups = amba_dev_groups,
201 202 .match = amba_match,
202 203 .uevent = amba_uevent,
204 + .dma_configure = platform_dma_configure,
203 205 .pm = &amba_pm,
204 - .force_dma = true,
205 206 };
206 207
207 208 static int __init amba_init(void)
+4 -27
drivers/base/dma-mapping.c
···
329 329 #endif
330 330
331 331 /*
332 - * Common configuration to enable DMA API use for a device
332 + * enables DMA API use for a device
333 333 */
334 - #include <linux/pci.h>
335 -
336 334 int dma_configure(struct device *dev)
337 335 {
338 - struct device *bridge = NULL, *dma_dev = dev;
339 - enum dev_dma_attr attr;
340 - int ret = 0;
341 -
342 - if (dev_is_pci(dev)) {
343 - bridge = pci_get_host_bridge_device(to_pci_dev(dev));
344 - dma_dev = bridge;
345 - if (IS_ENABLED(CONFIG_OF) && dma_dev->parent &&
346 - dma_dev->parent->of_node)
347 - dma_dev = dma_dev->parent;
348 - }
349 -
350 - if (dma_dev->of_node) {
351 - ret = of_dma_configure(dev, dma_dev->of_node);
352 - } else if (has_acpi_companion(dma_dev)) {
353 - attr = acpi_get_dma_attr(to_acpi_device_node(dma_dev->fwnode));
354 - if (attr != DEV_DMA_NOT_SUPPORTED)
355 - ret = acpi_dma_configure(dev, attr);
356 - }
357 -
358 - if (bridge)
359 - pci_put_host_bridge_device(bridge);
360 -
361 - return ret;
336 + if (dev->bus->dma_configure)
337 + return dev->bus->dma_configure(dev);
338 + return 0;
362 339 }
363 340
364 341 void dma_deconfigure(struct device *dev)
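This hunk is the core of the force_dma rework: the driver core stops special-casing PCI, OF and ACPI and simply delegates to an optional per-bus method. A hedged, compilable sketch of the same dispatch shape — the `fake_*` structures mirror but do not reproduce the kernel's `struct bus_type`/`struct device`:

```c
#include <assert.h>
#include <stddef.h>

struct fake_dev;

/* Illustrative bus type carrying the new optional dma_configure hook. */
struct fake_bus {
	int (*dma_configure)(struct fake_dev *dev);
};

struct fake_dev {
	const struct fake_bus *bus;
	int configured;	/* set by the bus hook for demonstration */
};

/* Like the rewritten dma_configure(): call the bus method if the bus
 * provides one, otherwise succeed without doing anything. */
static int fake_dma_configure(struct fake_dev *dev)
{
	if (dev->bus->dma_configure)
		return dev->bus->dma_configure(dev);
	return 0;
}

/* A bus implementation analogous to platform_dma_configure(). */
static int fake_bus_dma_configure(struct fake_dev *dev)
{
	dev->configured = 1;
	return 0;
}
```

The design choice mirrors the patch series: knowledge of how to find firmware DMA descriptions moves out of common code and into each bus (platform, AMBA, PCI, host1x), so buses that never want automatic DMA setup simply leave the hook NULL.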
+17 -1
drivers/base/platform.c
···
1130 1130
1131 1131 #endif /* CONFIG_HIBERNATE_CALLBACKS */
1132 1132
1133 + int platform_dma_configure(struct device *dev)
1134 + {
1135 + enum dev_dma_attr attr;
1136 + int ret = 0;
1137 +
1138 + if (dev->of_node) {
1139 + ret = of_dma_configure(dev, dev->of_node, true);
1140 + } else if (has_acpi_companion(dev)) {
1141 + attr = acpi_get_dma_attr(to_acpi_device_node(dev->fwnode));
1142 + if (attr != DEV_DMA_NOT_SUPPORTED)
1143 + ret = acpi_dma_configure(dev, attr);
1144 + }
1145 +
1146 + return ret;
1147 + }
1148 +
1133 1149 static const struct dev_pm_ops platform_dev_pm_ops = {
1134 1150 .runtime_suspend = pm_generic_runtime_suspend,
1135 1151 .runtime_resume = pm_generic_runtime_resume,
···
1157 1141 .dev_groups = platform_dev_groups,
1158 1142 .match = platform_match,
1159 1143 .uevent = platform_uevent,
1144 + .dma_configure = platform_dma_configure,
1160 1145 .pm = &platform_dev_pm_ops,
1161 - .force_dma = true,
1162 1146 };
1163 1147 EXPORT_SYMBOL_GPL(platform_bus_type);
1164 1148
+1 -1
drivers/bcma/main.c
···
207 207
208 208 core->irq = bcma_of_get_irq(parent, core, 0);
209 209
210 - of_dma_configure(&core->dev, node);
210 + of_dma_configure(&core->dev, node, false);
211 211 }
212 212
213 213 unsigned int bcma_core_irq(struct bcma_device *core, int num)
+1 -1
drivers/dma/qcom/hidma_mgmt.c
···
398 398 }
399 399 of_node_get(child);
400 400 new_pdev->dev.of_node = child;
401 - of_dma_configure(&new_pdev->dev, child);
401 + of_dma_configure(&new_pdev->dev, child, true);
402 402 /*
403 403 * It is assumed that calling of_msi_configure is safe on
404 404 * platforms with or without MSI support.
+7 -2
drivers/gpu/host1x/bus.c
···
314 314 return strcmp(dev_name(dev), drv->name) == 0;
315 315 }
316 316
317 + static int host1x_dma_configure(struct device *dev)
318 + {
319 + return of_dma_configure(dev, dev->of_node, true);
320 + }
321 +
317 322 static const struct dev_pm_ops host1x_device_pm_ops = {
318 323 .suspend = pm_generic_suspend,
319 324 .resume = pm_generic_resume,
···
331 326 struct bus_type host1x_bus_type = {
332 327 .name = "host1x",
333 328 .match = host1x_device_match,
329 + .dma_configure = host1x_dma_configure,
334 330 .pm = &host1x_device_pm_ops,
335 - .force_dma = true,
336 331 };
337 332
338 333 static void __host1x_device_del(struct host1x_device *device)
···
421 416 device->dev.bus = &host1x_bus_type;
422 417 device->dev.parent = host1x->dev;
423 418
424 - of_dma_configure(&device->dev, host1x->dev->of_node);
419 + of_dma_configure(&device->dev, host1x->dev->of_node, true);
425 420
426 421 err = host1x_device_parse_dt(device, driver);
427 422 if (err < 0) {
-2
drivers/ide/ide-dma.c
···
180 180 void ide_dma_off_quietly(ide_drive_t *drive)
181 181 {
182 182 drive->dev_flags &= ~IDE_DFLAG_USING_DMA;
183 - ide_toggle_bounce(drive, 0);
184 183
185 184 drive->hwif->dma_ops->dma_host_set(drive, 0);
186 185 }
···
210 211 void ide_dma_on(ide_drive_t *drive)
211 212 {
212 213 drive->dev_flags |= IDE_DFLAG_USING_DMA;
213 - ide_toggle_bounce(drive, 1);
214 214
215 215 drive->hwif->dma_ops->dma_host_set(drive, 1);
216 216 }
-26
drivers/ide/ide-lib.c
···
6 6 #include <linux/ide.h>
7 7 #include <linux/bitops.h>
8 8
9 - /**
10 - * ide_toggle_bounce - handle bounce buffering
11 - * @drive: drive to update
12 - * @on: on/off boolean
13 - *
14 - * Enable or disable bounce buffering for the device. Drives move
15 - * between PIO and DMA and that changes the rules we need.
16 - */
17 -
18 - void ide_toggle_bounce(ide_drive_t *drive, int on)
19 - {
20 - u64 addr = BLK_BOUNCE_HIGH; /* dma64_addr_t */
21 -
22 - if (!PCI_DMA_BUS_IS_PHYS) {
23 - addr = BLK_BOUNCE_ANY;
24 - } else if (on && drive->media == ide_disk) {
25 - struct device *dev = drive->hwif->dev;
26 -
27 - if (dev && dev->dma_mask)
28 - addr = *dev->dma_mask;
29 - }
30 -
31 - if (drive->queue)
32 - blk_queue_bounce_limit(drive->queue, addr);
33 - }
34 -
35 9 u64 ide_get_lba_addr(struct ide_cmd *cmd, int lba48)
36 10 {
37 11 struct ide_taskfile *tf = &cmd->tf;
+1 -5
drivers/ide/ide-probe.c
···
796 796 * This will be fixed once we teach pci_map_sg() about our boundary
797 797 * requirements, hopefully soon. *FIXME*
798 798 */
799 - if (!PCI_DMA_BUS_IS_PHYS)
800 - max_sg_entries >>= 1;
799 + max_sg_entries >>= 1;
801 800 #endif /* CONFIG_PCI */
802 801
803 802 blk_queue_max_segments(q, max_sg_entries);
804 803
805 804 /* assign drive queue */
806 805 drive->queue = q;
807 -
808 - /* needs drive->queue to be set */
809 - ide_toggle_bounce(drive, 1);
810 806
811 807 return 0;
812 808 }
+1
drivers/iommu/Kconfig
···
146 146 select DMA_DIRECT_OPS
147 147 select IOMMU_API
148 148 select IOMMU_IOVA
149 + select NEED_DMA_MAP_STATE
149 150 select DMAR_TABLE
150 151 help
151 152 DMA remapping (DMAR) devices support enables independent address
+2 -3
drivers/net/ethernet/sfc/efx.c
···
1289 1289
1290 1290 pci_set_master(pci_dev);
1291 1291
1292 - /* Set the PCI DMA mask. Try all possibilities from our
1293 - * genuine mask down to 32 bits, because some architectures
1294 - * (e.g. x86_64 with iommu_sac_force set) will allow 40 bit
1292 + /* Set the PCI DMA mask. Try all possibilities from our genuine mask
1293 + * down to 32 bits, because some architectures will allow 40 bit
1295 1294 * masks even though they reject 46 bit masks.
1296 1295 */
1297 1296 while (dma_mask > 0x7fffffffUL) {
+2 -3
drivers/net/ethernet/sfc/falcon/efx.c
···
1242 1242
1243 1243 pci_set_master(pci_dev);
1244 1244
1245 - /* Set the PCI DMA mask. Try all possibilities from our
1246 - * genuine mask down to 32 bits, because some architectures
1247 - * (e.g. x86_64 with iommu_sac_force set) will allow 40 bit
1245 + /* Set the PCI DMA mask. Try all possibilities from our genuine mask
1246 + * down to 32 bits, because some architectures will allow 40 bit
1248 1247 * masks even though they reject 46 bit masks.
1249 1248 */
1250 1249 while (dma_mask > 0x7fffffffUL) {
+4 -2
drivers/of/device.c
···
76 76 * of_dma_configure - Setup DMA configuration
77 77 * @dev: Device to apply DMA configuration
78 78 * @np: Pointer to OF node having DMA configuration
79 + * @force_dma: Whether device is to be set up by of_dma_configure() even if
80 + * DMA capability is not explicitly described by firmware.
79 81 *
80 82 * Try to get the device's DMA configuration from DT and update it
81 83 * accordingly.
···
86 84 * can use a platform bus notifier and handle BUS_NOTIFY_ADD_DEVICE events
87 85 * to fix up DMA configuration.
88 86 */
89 - int of_dma_configure(struct device *dev, struct device_node *np)
87 + int of_dma_configure(struct device *dev, struct device_node *np, bool force_dma)
90 88 {
91 89 u64 dma_addr, paddr, size = 0;
92 90 int ret;
···
102 100 * DMA configuration regardless of whether "dma-ranges" is
103 101 * correctly specified or not.
104 102 */
105 - if (!dev->bus->force_dma)
103 + if (!force_dma)
106 104 return ret == -ENODEV ? 0 : ret;
107 105
108 106 dma_addr = offset = 0;
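With `force_dma` now a per-call argument, each caller decides whether missing DT information is fatal, a silent skip, or a fall-through to defaults. A minimal sketch of that decision — the function name and the `configured` out-parameter are illustrative, not the kernel signature:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* dt_ret models the result of parsing "dma-ranges": 0 on success,
 * -ENODEV when the property is absent, another negative errno on a
 * malformed property. force_dma is what platform/PCI pass as true and
 * bcma passes as false. */
static int fake_of_dma_configure(int dt_ret, bool force_dma, bool *configured)
{
	*configured = false;

	if (dt_ret < 0 && !force_dma)
		/* Absent DT info is not an error for opt-in callers;
		 * they just leave the device unconfigured. */
		return dt_ret == -ENODEV ? 0 : dt_ret;

	/* Forced callers proceed with parsed or default settings. */
	*configured = true;
	return 0;
}
```

This is why the bcma hunk below passes `false` (it only wants DMA setup when firmware describes it) while hidma, host1x and the reserved-memory code pass `true`.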
+1 -1
drivers/of/of_reserved_mem.c
···
353 353 /* ensure that dma_ops is set for virtual devices
354 354 * using reserved memory
355 355 */
356 - of_dma_configure(dev, np);
356 + of_dma_configure(dev, np, true);
357 357
358 358 dev_info(dev, "assigned reserved memory node %s\n", rmem->name);
359 359 } else {
-5
drivers/parisc/Kconfig
···
103 103 depends on PCI_LBA
104 104 default PCI_LBA
105 105
106 - config IOMMU_HELPER
107 - bool
108 - depends on IOMMU_SBA || IOMMU_CCIO
109 - default y
110 -
111 106 source "drivers/pcmcia/Kconfig"
112 107
113 108 endmenu
-2
drivers/parisc/ccio-dma.c
···
1570 1570 }
1571 1571 #endif
1572 1572 ioc_count++;
1573 -
1574 - parisc_has_iommu();
1575 1573 return 0;
1576 1574 }
1577 1575
-2
drivers/parisc/sba_iommu.c
···
1989 1989 proc_create_single("sba_iommu", 0, root, sba_proc_info);
1990 1990 proc_create_single("sba_iommu-bitmap", 0, root, sba_proc_bitmap_info);
1991 1991 #endif
1992 -
1993 - parisc_has_iommu();
1994 1992 return 0;
1995 1993 }
1996 1994
-4
drivers/pci/Kconfig
···
5 5
6 6 source "drivers/pci/pcie/Kconfig"
7 7
8 - config PCI_BUS_ADDR_T_64BIT
9 - def_bool y if (ARCH_DMA_ADDR_T_64BIT || 64BIT)
10 - depends on PCI
11 -
12 8 config PCI_MSI
13 9 bool "Message Signaled Interrupts (MSI and MSI-X)"
14 10 depends on PCI
+2 -2
drivers/pci/bus.c
···
120 120 EXPORT_SYMBOL_GPL(devm_request_pci_bus_resources);
121 121
122 122 static struct pci_bus_region pci_32_bit = {0, 0xffffffffULL};
123 - #ifdef CONFIG_PCI_BUS_ADDR_T_64BIT
123 + #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
124 124 static struct pci_bus_region pci_64_bit = {0,
125 125 (pci_bus_addr_t) 0xffffffffffffffffULL};
126 126 static struct pci_bus_region pci_high = {(pci_bus_addr_t) 0x100000000ULL,
···
230 230 resource_size_t),
231 231 void *alignf_data)
232 232 {
233 - #ifdef CONFIG_PCI_BUS_ADDR_T_64BIT
233 + #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
234 234 int rc;
235 235
236 236 if (res->flags & IORESOURCE_MEM_64) {
+32 -1
drivers/pci/pci-driver.c
···
16 16 #include <linux/pm_runtime.h>
17 17 #include <linux/suspend.h>
18 18 #include <linux/kexec.h>
19 + #include <linux/of_device.h>
20 + #include <linux/acpi.h>
19 21 #include "pci.h"
20 22 #include "pcie/portdrv.h"
21 23
···
1579 1577 return pci_num_vf(to_pci_dev(dev));
1580 1578 }
1581 1579
1580 + /**
1581 + * pci_dma_configure - Setup DMA configuration
1582 + * @dev: ptr to dev structure
1583 + *
1584 + * Function to update a PCI device's DMA configuration using the same
1585 + * info from the OF node or ACPI node of the host bridge's parent (if any).
1586 + */
1587 + static int pci_dma_configure(struct device *dev)
1588 + {
1589 + struct device *bridge;
1590 + int ret = 0;
1591 +
1592 + bridge = pci_get_host_bridge_device(to_pci_dev(dev));
1593 +
1594 + if (IS_ENABLED(CONFIG_OF) && bridge->parent &&
1595 + bridge->parent->of_node) {
1596 + ret = of_dma_configure(dev, bridge->parent->of_node, true);
1597 + } else if (has_acpi_companion(bridge)) {
1598 + struct acpi_device *adev = to_acpi_device_node(bridge->fwnode);
1599 + enum dev_dma_attr attr = acpi_get_dma_attr(adev);
1600 +
1601 + if (attr != DEV_DMA_NOT_SUPPORTED)
1602 + ret = acpi_dma_configure(dev, attr);
1603 + }
1604 +
1605 + pci_put_host_bridge_device(bridge);
1606 + return ret;
1607 + }
1608 +
1582 1609 struct bus_type pci_bus_type = {
1583 1610 .name = "pci",
1584 1611 .match = pci_bus_match,
···
1620 1589 .drv_groups = pci_drv_groups,
1621 1590 .pm = PCI_PM_OPS_PTR,
1622 1591 .num_vf = pci_bus_num_vf,
1623 - .force_dma = true,
1592 + .dma_configure = pci_dma_configure,
1624 1593 };
1625 1594 EXPORT_SYMBOL(pci_bus_type);
1626 1595
+2 -22
drivers/scsi/scsi_lib.c
···
2149 2149 return blk_mq_map_queues(set);
2150 2150 }
2151 2151
2152 - static u64 scsi_calculate_bounce_limit(struct Scsi_Host *shost)
2153 - {
2154 - struct device *host_dev;
2155 - u64 bounce_limit = 0xffffffff;
2156 -
2157 - if (shost->unchecked_isa_dma)
2158 - return BLK_BOUNCE_ISA;
2159 - /*
2160 - * Platforms with virtual-DMA translation
2161 - * hardware have no practical limit.
2162 - */
2163 - if (!PCI_DMA_BUS_IS_PHYS)
2164 - return BLK_BOUNCE_ANY;
2165 -
2166 - host_dev = scsi_get_device(shost);
2167 - if (host_dev && host_dev->dma_mask)
2168 - bounce_limit = (u64)dma_max_pfn(host_dev) << PAGE_SHIFT;
2169 -
2170 - return bounce_limit;
2171 - }
2172 -
2173 2152 void __scsi_init_queue(struct Scsi_Host *shost, struct request_queue *q)
2174 2153 {
2175 2154 struct device *dev = shost->dma_dev;
···
2168 2189 }
2169 2190
2170 2191 blk_queue_max_hw_sectors(q, shost->max_sectors);
2171 - blk_queue_bounce_limit(q, scsi_calculate_bounce_limit(shost));
2192 + if (shost->unchecked_isa_dma)
2193 + blk_queue_bounce_limit(q, BLK_BOUNCE_ISA);
2172 2194 blk_queue_segment_boundary(q, shost->dma_boundary);
2173 2195 dma_set_seg_boundary(dev, shost->dma_boundary);
2174 2196
+9
include/asm-generic/dma-mapping.h
···
4 4
5 5 static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
6 6 {
7 + /*
8 + * Use the non-coherent ops if available. If an architecture wants a
9 + * more fine-grained selection of operations it will have to implement
10 + * get_arch_dma_ops itself or use the per-device dma_ops.
11 + */
12 + #ifdef CONFIG_DMA_NONCOHERENT_OPS
13 + return &dma_noncoherent_ops;
14 + #else
7 15 return &dma_direct_ops;
16 + #endif
8 17 }
9 18
10 19 #endif /* _ASM_GENERIC_DMA_MAPPING_H */
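The asm-generic hunk above selects the architecture-wide default ops at compile time from one Kconfig symbol. A toy model of the same pattern — the `fake_*` names are simplified stand-ins, and the test below assumes the program is built without `-DCONFIG_DMA_NONCOHERENT_OPS`:

```c
#include <assert.h>
#include <string.h>

/* Simplified stand-in for struct dma_map_ops: just a name tag. */
struct fake_dma_map_ops {
	const char *name;
};

static const struct fake_dma_map_ops fake_dma_direct_ops = { "dma_direct_ops" };
static const struct fake_dma_map_ops fake_dma_noncoherent_ops = { "dma_noncoherent_ops" };

/* One preprocessor switch picks the default for the whole build, exactly
 * the shape of the new get_arch_dma_ops(). Compile with
 * -DCONFIG_DMA_NONCOHERENT_OPS to flip it. */
static const struct fake_dma_map_ops *fake_get_arch_dma_ops(void)
{
#ifdef CONFIG_DMA_NONCOHERENT_OPS
	return &fake_dma_noncoherent_ops;
#else
	return &fake_dma_direct_ops;
#endif
}
```

Architectures like arc, c6x and nds32 get the non-coherent default merely by selecting the Kconfig option; no per-arch get_arch_dma_ops implementation is needed.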
-8
include/asm-generic/pci.h
···
14 14 }
15 15 #endif /* HAVE_ARCH_PCI_GET_LEGACY_IDE_IRQ */
16 16
17 - /*
18 - * By default, assume that no iommu is in use and that the PCI
19 - * space is mapped to address physical 0.
20 - */
21 - #ifndef PCI_DMA_BUS_IS_PHYS
22 - #define PCI_DMA_BUS_IS_PHYS (1)
23 - #endif
24 -
25 17 #endif /* _ASM_GENERIC_PCI_H */
+7 -4
include/linux/device.h
···
88 88 * @resume: Called to bring a device on this bus out of sleep mode.
89 89 * @num_vf: Called to find out how many virtual functions a device on this
90 90 * bus supports.
91 + * @dma_configure: Called to setup DMA configuration on a device on
92 + * this bus.
91 93 * @pm: Power management operations of this bus, callback the specific
92 94 * device driver's pm-ops.
93 95 * @iommu_ops: IOMMU specific operations for this bus, used to attach IOMMU
···
98 96 * @p: The private data of the driver core, only the driver core can
99 97 * touch this.
100 98 * @lock_key: Lock class key for use by the lock validator
101 - * @force_dma: Assume devices on this bus should be set up by dma_configure()
102 - * even if DMA capability is not explicitly described by firmware.
103 99 *
104 100 * A bus is a channel between the processor and one or more devices. For the
105 101 * purposes of the device model, all devices are connected via a bus, even if
···
130 130
131 131 int (*num_vf)(struct device *dev);
132 132
133 + int (*dma_configure)(struct device *dev);
134 +
133 135 const struct dev_pm_ops *pm;
134 136
135 137 const struct iommu_ops *iommu_ops;
136 138
137 139 struct subsys_private *p;
138 140 struct lock_class_key lock_key;
139 -
140 - bool force_dma;
141 141 };
142 142
143 143 extern int __must_check bus_register(struct bus_type *bus);
···
904 904 * @offline: Set after successful invocation of bus type's .offline().
905 905 * @of_node_reused: Set if the device-tree node is shared with an ancestor
906 906 * device.
907 + * @dma_32bit_limit: bridge limited to 32bit DMA even if the device itself
908 + * indicates support for a higher limit in the dma_mask field.
907 909 *
908 910 * At the lowest level, every device in a Linux system is represented by an
909 911 * instance of struct device. The device structure contains the information
···
994 992 bool offline_disabled:1;
995 993 bool offline:1;
996 994 bool of_node_reused:1;
995 + bool dma_32bit_limit:1;
997 996 };
998 997
999 998 static inline struct device *kobj_to_dev(struct kobject *kobj)
-6
include/linux/dma-debug.h
···
30 30
31 31 extern void dma_debug_add_bus(struct bus_type *bus);
32 32
33 - extern void dma_debug_init(u32 num_entries);
34 -
35 33 extern int dma_debug_resize_entries(u32 num_entries);
36 34
37 35 extern void debug_dma_map_page(struct device *dev, struct page *page,
···
95 97 #else /* CONFIG_DMA_API_DEBUG */
96 98
97 99 static inline void dma_debug_add_bus(struct bus_type *bus)
98 100 {
99 101 }
100 -
101 - static inline void dma_debug_init(u32 num_entries)
102 - {
103 - }
104 102
+6 -1
include/linux/dma-direct.h
···
59 59 gfp_t gfp, unsigned long attrs);
60 60 void dma_direct_free(struct device *dev, size_t size, void *cpu_addr,
61 61 dma_addr_t dma_addr, unsigned long attrs);
62 + dma_addr_t dma_direct_map_page(struct device *dev, struct page *page,
63 + unsigned long offset, size_t size, enum dma_data_direction dir,
64 + unsigned long attrs);
65 + int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, int nents,
66 + enum dma_data_direction dir, unsigned long attrs);
62 67 int dma_direct_supported(struct device *dev, u64 mask);
63 -
68 + int dma_direct_mapping_error(struct device *dev, dma_addr_t dma_addr);
64 69 #endif /* _LINUX_DMA_DIRECT_H */
+4 -15
include/linux/dma-mapping.h
··· 133 133 #ifdef ARCH_HAS_DMA_GET_REQUIRED_MASK 134 134 u64 (*get_required_mask)(struct device *dev); 135 135 #endif 136 - int is_phys; 137 136 }; 138 137 139 138 extern const struct dma_map_ops dma_direct_ops; 139 + extern const struct dma_map_ops dma_noncoherent_ops; 140 140 extern const struct dma_map_ops dma_virt_ops; 141 141 142 142 #define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL<<(n))-1)) ··· 502 502 #define dma_get_sgtable(d, t, v, h, s) dma_get_sgtable_attrs(d, t, v, h, s, 0) 503 503 504 504 #ifndef arch_dma_alloc_attrs 505 - #define arch_dma_alloc_attrs(dev, flag) (true) 505 + #define arch_dma_alloc_attrs(dev) (true) 506 506 #endif 507 507 508 508 static inline void *dma_alloc_attrs(struct device *dev, size_t size, ··· 521 521 /* let the implementation decide on the zone to allocate from: */ 522 522 flag &= ~(__GFP_DMA | __GFP_DMA32 | __GFP_HIGHMEM); 523 523 524 - if (!arch_dma_alloc_attrs(&dev, &flag)) 524 + if (!arch_dma_alloc_attrs(&dev)) 525 525 return NULL; 526 526 if (!ops->alloc) 527 527 return NULL; ··· 572 572 return 0; 573 573 } 574 574 575 - /* 576 - * This is a hack for the legacy x86 forbid_dac and iommu_sac_force. Please 577 - * don't use this in new code. 
578 - */ 579 - #ifndef arch_dma_supported 580 - #define arch_dma_supported(dev, mask) (1) 581 - #endif 582 - 583 575 static inline void dma_check_mask(struct device *dev, u64 mask) 584 576 { 585 577 if (sme_active() && (mask < (((u64)sme_get_me_mask() << 1) - 1))) ··· 584 592 585 593 if (!ops) 586 594 return 0; 587 - if (!arch_dma_supported(dev, mask)) 588 - return 0; 589 - 590 595 if (!ops->dma_supported) 591 596 return 1; 592 597 return ops->dma_supported(dev, mask); ··· 828 839 #define dma_mmap_writecombine dma_mmap_wc 829 840 #endif 830 841 831 - #if defined(CONFIG_NEED_DMA_MAP_STATE) || defined(CONFIG_DMA_API_DEBUG) 842 + #ifdef CONFIG_NEED_DMA_MAP_STATE 832 843 #define DEFINE_DMA_UNMAP_ADDR(ADDR_NAME) dma_addr_t ADDR_NAME 833 844 #define DEFINE_DMA_UNMAP_LEN(LEN_NAME) __u32 LEN_NAME 834 845 #define dma_unmap_addr(PTR, ADDR_NAME) ((PTR)->ADDR_NAME)
+47
include/linux/dma-noncoherent.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef _LINUX_DMA_NONCOHERENT_H 3 + #define _LINUX_DMA_NONCOHERENT_H 1 4 + 5 + #include <linux/dma-mapping.h> 6 + 7 + void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle, 8 + gfp_t gfp, unsigned long attrs); 9 + void arch_dma_free(struct device *dev, size_t size, void *cpu_addr, 10 + dma_addr_t dma_addr, unsigned long attrs); 11 + 12 + #ifdef CONFIG_DMA_NONCOHERENT_MMAP 13 + int arch_dma_mmap(struct device *dev, struct vm_area_struct *vma, 14 + void *cpu_addr, dma_addr_t dma_addr, size_t size, 15 + unsigned long attrs); 16 + #else 17 + #define arch_dma_mmap NULL 18 + #endif /* CONFIG_DMA_NONCOHERENT_MMAP */ 19 + 20 + #ifdef CONFIG_DMA_NONCOHERENT_CACHE_SYNC 21 + void arch_dma_cache_sync(struct device *dev, void *vaddr, size_t size, 22 + enum dma_data_direction direction); 23 + #else 24 + #define arch_dma_cache_sync NULL 25 + #endif /* CONFIG_DMA_NONCOHERENT_CACHE_SYNC */ 26 + 27 + #ifdef CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE 28 + void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr, 29 + size_t size, enum dma_data_direction dir); 30 + #else 31 + static inline void arch_sync_dma_for_device(struct device *dev, 32 + phys_addr_t paddr, size_t size, enum dma_data_direction dir) 33 + { 34 + } 35 + #endif /* ARCH_HAS_SYNC_DMA_FOR_DEVICE */ 36 + 37 + #ifdef CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU 38 + void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr, 39 + size_t size, enum dma_data_direction dir); 40 + #else 41 + static inline void arch_sync_dma_for_cpu(struct device *dev, 42 + phys_addr_t paddr, size_t size, enum dma_data_direction dir) 43 + { 44 + } 45 + #endif /* ARCH_HAS_SYNC_DMA_FOR_CPU */ 46 + 47 + #endif /* _LINUX_DMA_NONCOHERENT_H */
-2
include/linux/ide.h
··· 1508 1508 hwif->hwif_data = data; 1509 1509 } 1510 1510 1511 - extern void ide_toggle_bounce(ide_drive_t *drive, int on); 1512 - 1513 1511 u64 ide_get_lba_addr(struct ide_cmd *, int); 1514 1512 u8 ide_dump_status(ide_drive_t *, const char *, u8); 1515 1513
include/linux/iommu-common.h → arch/sparc/include/asm/iommu-common.h
+10 -3
include/linux/iommu-helper.h
··· 2 2 #ifndef _LINUX_IOMMU_HELPER_H 3 3 #define _LINUX_IOMMU_HELPER_H 4 4 5 + #include <linux/bug.h> 5 6 #include <linux/kernel.h> 6 7 7 8 static inline unsigned long iommu_device_max_index(unsigned long size, ··· 15 14 return size; 16 15 } 17 16 18 - extern int iommu_is_span_boundary(unsigned int index, unsigned int nr, 19 - unsigned long shift, 20 - unsigned long boundary_size); 17 + static inline int iommu_is_span_boundary(unsigned int index, unsigned int nr, 18 + unsigned long shift, unsigned long boundary_size) 19 + { 20 + BUG_ON(!is_power_of_2(boundary_size)); 21 + 22 + shift = (shift + index) & (boundary_size - 1); 23 + return shift + nr > boundary_size; 24 + } 25 + 21 26 extern unsigned long iommu_area_alloc(unsigned long *map, unsigned long size, 22 27 unsigned long start, unsigned int nr, 23 28 unsigned long shift,
+6 -2
include/linux/of_device.h
··· 55 55 return of_node_get(cpu_dev->of_node); 56 56 } 57 57 58 - int of_dma_configure(struct device *dev, struct device_node *np); 58 + int of_dma_configure(struct device *dev, 59 + struct device_node *np, 60 + bool force_dma); 59 61 void of_dma_deconfigure(struct device *dev); 60 62 #else /* CONFIG_OF */ 61 63 ··· 107 105 return NULL; 108 106 } 109 107 110 - static inline int of_dma_configure(struct device *dev, struct device_node *np) 108 + static inline int of_dma_configure(struct device *dev, 109 + struct device_node *np, 110 + bool force_dma) 111 111 { 112 112 return 0; 113 113 }
+1 -1
include/linux/pci.h
··· 670 670 int raw_pci_write(unsigned int domain, unsigned int bus, unsigned int devfn, 671 671 int reg, int len, u32 val); 672 672 673 - #ifdef CONFIG_PCI_BUS_ADDR_T_64BIT 673 + #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT 674 674 typedef u64 pci_bus_addr_t; 675 675 #else 676 676 typedef u32 pci_bus_addr_t;
+2
include/linux/platform_device.h
··· 356 356 #define platform_pm_restore NULL 357 357 #endif 358 358 359 + extern int platform_dma_configure(struct device *dev); 360 + 359 361 #ifdef CONFIG_PM_SLEEP 360 362 #define USE_PLATFORM_PM_SLEEP_OPS \ 361 363 .suspend = platform_pm_suspend, \
+39 -4
lib/Kconfig
··· 429 429 bool 430 430 default n 431 431 432 + config NEED_SG_DMA_LENGTH 433 + bool 434 + 435 + config NEED_DMA_MAP_STATE 436 + bool 437 + 438 + config ARCH_DMA_ADDR_T_64BIT 439 + def_bool 64BIT || PHYS_ADDR_T_64BIT 440 + 441 + config IOMMU_HELPER 442 + bool 443 + 444 + config ARCH_HAS_SYNC_DMA_FOR_DEVICE 445 + bool 446 + 447 + config ARCH_HAS_SYNC_DMA_FOR_CPU 448 + bool 449 + select NEED_DMA_MAP_STATE 450 + 432 451 config DMA_DIRECT_OPS 433 452 bool 434 - depends on HAS_DMA && (!64BIT || ARCH_DMA_ADDR_T_64BIT) 435 - default n 453 + depends on HAS_DMA 454 + 455 + config DMA_NONCOHERENT_OPS 456 + bool 457 + depends on HAS_DMA 458 + select DMA_DIRECT_OPS 459 + 460 + config DMA_NONCOHERENT_MMAP 461 + bool 462 + depends on DMA_NONCOHERENT_OPS 463 + 464 + config DMA_NONCOHERENT_CACHE_SYNC 465 + bool 466 + depends on DMA_NONCOHERENT_OPS 436 467 437 468 config DMA_VIRT_OPS 438 469 bool 439 - depends on HAS_DMA && (!64BIT || ARCH_DMA_ADDR_T_64BIT) 440 - default n 470 + depends on HAS_DMA 471 + 472 + config SWIOTLB 473 + bool 474 + select DMA_DIRECT_OPS 475 + select NEED_DMA_MAP_STATE 441 476 442 477 config CHECK_SIGNATURE 443 478 bool
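All of the new symbols above are select-only, with no user prompt. An architecture wanting the generic noncoherent ops would wire them up roughly as below (the arch name is hypothetical; the exact set of `select`s depends on which cache-maintenance hooks the arch implements, as arc, c6x and nds32 do in this series):

```kconfig
config MYARCH
	# hypothetical architecture fragment, for illustration only
	select DMA_NONCOHERENT_OPS
	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
	select ARCH_HAS_SYNC_DMA_FOR_CPU
```

`DMA_NONCOHERENT_OPS` in turn selects `DMA_DIRECT_OPS`, so the arch gets the direct-mapping paths plus its own cache maintenance.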
+18 -1
lib/Kconfig.debug
··· 1634 1634 1635 1635 config DMA_API_DEBUG 1636 1636 bool "Enable debugging of DMA-API usage" 1637 - depends on HAVE_DMA_API_DEBUG 1637 + select NEED_DMA_MAP_STATE 1638 1638 help 1639 1639 Enable this option to debug the use of the DMA API by device drivers. 1640 1640 With this option you will be able to detect common bugs in device ··· 1648 1648 1649 1649 This option causes a performance degradation. Use only if you want to 1650 1650 debug device drivers and dma interactions. 1651 + 1652 + If unsure, say N. 1653 + 1654 + config DMA_API_DEBUG_SG 1655 + bool "Debug DMA scatter-gather usage" 1656 + default y 1657 + depends on DMA_API_DEBUG 1658 + help 1659 + Perform extra checking that callers of dma_map_sg() have respected the 1660 + appropriate segment length/boundary limits for the given device when 1661 + preparing DMA scatterlists. 1662 + 1663 + This is particularly likely to have been overlooked in cases where the 1664 + dma_map_sg() API is used for general bulk mapping of pages rather than 1665 + preparing literal scatter-gather descriptors, where there is a risk of 1666 + unexpected behaviour from DMA API implementations if the scatterlist 1667 + is technically out-of-spec. 1651 1668 1652 1669 If unsure, say N. 1653 1670
+2 -1
lib/Makefile
··· 30 30 lib-$(CONFIG_MMU) += ioremap.o 31 31 lib-$(CONFIG_SMP) += cpumask.o 32 32 lib-$(CONFIG_DMA_DIRECT_OPS) += dma-direct.o 33 + lib-$(CONFIG_DMA_NONCOHERENT_OPS) += dma-noncoherent.o 33 34 lib-$(CONFIG_DMA_VIRT_OPS) += dma-virt.o 34 35 35 36 lib-y += kobject.o klist.o ··· 148 147 obj-$(CONFIG_AUDIT_COMPAT_GENERIC) += compat_audit.o 149 148 150 149 obj-$(CONFIG_SWIOTLB) += swiotlb.o 151 - obj-$(CONFIG_IOMMU_HELPER) += iommu-helper.o iommu-common.o 150 + obj-$(CONFIG_IOMMU_HELPER) += iommu-helper.o 152 151 obj-$(CONFIG_FAULT_INJECTION) += fault-inject.o 153 152 obj-$(CONFIG_NOTIFIER_ERROR_INJECTION) += notifier-error-inject.o 154 153 obj-$(CONFIG_PM_NOTIFIER_ERROR_INJECT) += pm-notifier-error-inject.o
+43 -22
lib/dma-debug.c
··· 41 41 #define HASH_FN_SHIFT 13 42 42 #define HASH_FN_MASK (HASH_SIZE - 1) 43 43 44 + /* allow architectures to override this if absolutely required */ 45 + #ifndef PREALLOC_DMA_DEBUG_ENTRIES 46 + #define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16) 47 + #endif 48 + 44 49 enum { 45 50 dma_debug_single, 46 51 dma_debug_page, ··· 132 127 static u32 nr_total_entries; 133 128 134 129 /* number of preallocated entries requested by kernel cmdline */ 135 - static u32 req_entries; 130 + static u32 nr_prealloc_entries = PREALLOC_DMA_DEBUG_ENTRIES; 136 131 137 132 /* debugfs dentry's for the stuff above */ 138 133 static struct dentry *dma_debug_dent __read_mostly; ··· 444 439 spin_unlock_irqrestore(&bucket->lock, flags); 445 440 } 446 441 } 447 - EXPORT_SYMBOL(debug_dma_dump_mappings); 448 442 449 443 /* 450 444 * For each mapping (initial cacheline in the case of ··· 752 748 753 749 return ret; 754 750 } 755 - EXPORT_SYMBOL(dma_debug_resize_entries); 756 751 757 752 /* 758 753 * DMA-API debugging init code ··· 1007 1004 bus_register_notifier(bus, nb); 1008 1005 } 1009 1006 1010 - /* 1011 - * Let the architectures decide how many entries should be preallocated. 
1012 - */ 1013 - void dma_debug_init(u32 num_entries) 1007 + static int dma_debug_init(void) 1014 1008 { 1015 1009 int i; 1016 1010 ··· 1015 1015 * called to set dma_debug_initialized 1016 1016 */ 1017 1017 if (global_disable) 1018 - return; 1018 + return 0; 1019 1019 1020 1020 for (i = 0; i < HASH_SIZE; ++i) { 1021 1021 INIT_LIST_HEAD(&dma_entry_hash[i].list); ··· 1026 1026 pr_err("DMA-API: error creating debugfs entries - disabling\n"); 1027 1027 global_disable = true; 1028 1028 1029 - return; 1029 + return 0; 1030 1030 } 1031 1031 1032 - if (req_entries) 1033 - num_entries = req_entries; 1034 - 1035 - if (prealloc_memory(num_entries) != 0) { 1032 + if (prealloc_memory(nr_prealloc_entries) != 0) { 1036 1033 pr_err("DMA-API: debugging out of memory error - disabled\n"); 1037 1034 global_disable = true; 1038 1035 1039 - return; 1036 + return 0; 1040 1037 } 1041 1038 1042 1039 nr_total_entries = num_free_entries; ··· 1041 1044 dma_debug_initialized = true; 1042 1045 1043 1046 pr_info("DMA-API: debugging enabled by kernel config\n"); 1047 + return 0; 1044 1048 } 1049 + core_initcall(dma_debug_init); 1045 1050 1046 1051 static __init int dma_debug_cmdline(char *str) 1047 1052 { ··· 1060 1061 1061 1062 static __init int dma_debug_entries_cmdline(char *str) 1062 1063 { 1063 - int res; 1064 - 1065 1064 if (!str) 1066 1065 return -EINVAL; 1067 - 1068 - res = get_option(&str, &req_entries); 1069 - 1070 - if (!res) 1071 - req_entries = 0; 1072 - 1066 + if (!get_option(&str, &nr_prealloc_entries)) 1067 + nr_prealloc_entries = PREALLOC_DMA_DEBUG_ENTRIES; 1073 1068 return 0; 1074 1069 } 1075 1070 ··· 1286 1293 put_hash_bucket(bucket, &flags); 1287 1294 } 1288 1295 1296 + static void check_sg_segment(struct device *dev, struct scatterlist *sg) 1297 + { 1298 + #ifdef CONFIG_DMA_API_DEBUG_SG 1299 + unsigned int max_seg = dma_get_max_seg_size(dev); 1300 + u64 start, end, boundary = dma_get_seg_boundary(dev); 1301 + 1302 + /* 1303 + * Either the driver forgot to set dma_parms 
appropriately, or 1304 + * whoever generated the list forgot to check them. 1305 + */ 1306 + if (sg->length > max_seg) 1307 + err_printk(dev, NULL, "DMA-API: mapping sg segment longer than device claims to support [len=%u] [max=%u]\n", 1308 + sg->length, max_seg); 1309 + /* 1310 + * In some cases this could potentially be the DMA API 1311 + * implementation's fault, but it would usually imply that 1312 + * the scatterlist was built inappropriately to begin with. 1313 + */ 1314 + start = sg_dma_address(sg); 1315 + end = start + sg_dma_len(sg) - 1; 1316 + if ((start ^ end) & ~boundary) 1317 + err_printk(dev, NULL, "DMA-API: mapping sg segment across boundary [start=0x%016llx] [end=0x%016llx] [boundary=0x%016llx]\n", 1318 + start, end, boundary); 1319 + #endif 1320 + } 1321 + 1289 1322 void debug_dma_map_page(struct device *dev, struct page *page, size_t offset, 1290 1323 size_t size, int direction, dma_addr_t dma_addr, 1291 1324 bool map_single) ··· 1441 1422 if (!PageHighMem(sg_page(s))) { 1442 1423 check_for_illegal_area(dev, sg_virt(s), sg_dma_len(s)); 1443 1424 } 1425 + 1426 + check_sg_segment(dev, s); 1444 1427 1445 1428 add_dma_entry(entry); 1446 1429 }
+24 -5
lib/dma-direct.c
··· 34 34 const char *caller) 35 35 { 36 36 if (unlikely(dev && !dma_capable(dev, dma_addr, size))) { 37 + if (!dev->dma_mask) { 38 + dev_err(dev, 39 + "%s: call on device without dma_mask\n", 40 + caller); 41 + return false; 42 + } 43 + 37 44 if (*dev->dma_mask >= DMA_BIT_MASK(32)) { 38 45 dev_err(dev, 39 46 "%s: overflow %pad+%zu of device mask %llx\n", ··· 91 84 __free_pages(page, page_order); 92 85 page = NULL; 93 86 87 + if (IS_ENABLED(CONFIG_ZONE_DMA32) && 88 + dev->coherent_dma_mask < DMA_BIT_MASK(64) && 89 + !(gfp & (GFP_DMA32 | GFP_DMA))) { 90 + gfp |= GFP_DMA32; 91 + goto again; 92 + } 93 + 94 94 if (IS_ENABLED(CONFIG_ZONE_DMA) && 95 95 dev->coherent_dma_mask < DMA_BIT_MASK(32) && 96 96 !(gfp & GFP_DMA)) { ··· 135 121 free_pages((unsigned long)cpu_addr, page_order); 136 122 } 137 123 138 - static dma_addr_t dma_direct_map_page(struct device *dev, struct page *page, 124 + dma_addr_t dma_direct_map_page(struct device *dev, struct page *page, 139 125 unsigned long offset, size_t size, enum dma_data_direction dir, 140 126 unsigned long attrs) 141 127 { ··· 146 132 return dma_addr; 147 133 } 148 134 149 - static int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, 150 - int nents, enum dma_data_direction dir, unsigned long attrs) 135 + int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, int nents, 136 + enum dma_data_direction dir, unsigned long attrs) 151 137 { 152 138 int i; 153 139 struct scatterlist *sg; ··· 179 165 if (mask < DMA_BIT_MASK(32)) 180 166 return 0; 181 167 #endif 168 + /* 169 + * Various PCI/PCIe bridges have broken support for > 32bit DMA even 170 + * if the device itself might support it. 
171 + */ 172 + if (dev->dma_32bit_limit && mask > DMA_BIT_MASK(32)) 173 + return 0; 182 174 return 1; 183 175 } 184 176 185 - static int dma_direct_mapping_error(struct device *dev, dma_addr_t dma_addr) 177 + int dma_direct_mapping_error(struct device *dev, dma_addr_t dma_addr) 186 178 { 187 179 return dma_addr == DIRECT_MAPPING_ERROR; 188 180 } ··· 200 180 .map_sg = dma_direct_map_sg, 201 181 .dma_supported = dma_direct_supported, 202 182 .mapping_error = dma_direct_mapping_error, 203 - .is_phys = 1, 204 183 }; 205 184 EXPORT_SYMBOL(dma_direct_ops);
+102
lib/dma-noncoherent.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (C) 2018 Christoph Hellwig. 4 + * 5 + * DMA operations that map physical memory directly without providing cache 6 + * coherence. 7 + */ 8 + #include <linux/export.h> 9 + #include <linux/mm.h> 10 + #include <linux/dma-direct.h> 11 + #include <linux/dma-noncoherent.h> 12 + #include <linux/scatterlist.h> 13 + 14 + static void dma_noncoherent_sync_single_for_device(struct device *dev, 15 + dma_addr_t addr, size_t size, enum dma_data_direction dir) 16 + { 17 + arch_sync_dma_for_device(dev, dma_to_phys(dev, addr), size, dir); 18 + } 19 + 20 + static void dma_noncoherent_sync_sg_for_device(struct device *dev, 21 + struct scatterlist *sgl, int nents, enum dma_data_direction dir) 22 + { 23 + struct scatterlist *sg; 24 + int i; 25 + 26 + for_each_sg(sgl, sg, nents, i) 27 + arch_sync_dma_for_device(dev, sg_phys(sg), sg->length, dir); 28 + } 29 + 30 + static dma_addr_t dma_noncoherent_map_page(struct device *dev, struct page *page, 31 + unsigned long offset, size_t size, enum dma_data_direction dir, 32 + unsigned long attrs) 33 + { 34 + dma_addr_t addr; 35 + 36 + addr = dma_direct_map_page(dev, page, offset, size, dir, attrs); 37 + if (!dma_mapping_error(dev, addr) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) 38 + arch_sync_dma_for_device(dev, page_to_phys(page) + offset, 39 + size, dir); 40 + return addr; 41 + } 42 + 43 + static int dma_noncoherent_map_sg(struct device *dev, struct scatterlist *sgl, 44 + int nents, enum dma_data_direction dir, unsigned long attrs) 45 + { 46 + nents = dma_direct_map_sg(dev, sgl, nents, dir, attrs); 47 + if (nents > 0 && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) 48 + dma_noncoherent_sync_sg_for_device(dev, sgl, nents, dir); 49 + return nents; 50 + } 51 + 52 + #ifdef CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU 53 + static void dma_noncoherent_sync_single_for_cpu(struct device *dev, 54 + dma_addr_t addr, size_t size, enum dma_data_direction dir) 55 + { 56 + arch_sync_dma_for_cpu(dev, dma_to_phys(dev, 
addr), size, dir); 57 + } 58 + 59 + static void dma_noncoherent_sync_sg_for_cpu(struct device *dev, 60 + struct scatterlist *sgl, int nents, enum dma_data_direction dir) 61 + { 62 + struct scatterlist *sg; 63 + int i; 64 + 65 + for_each_sg(sgl, sg, nents, i) 66 + arch_sync_dma_for_cpu(dev, sg_phys(sg), sg->length, dir); 67 + } 68 + 69 + static void dma_noncoherent_unmap_page(struct device *dev, dma_addr_t addr, 70 + size_t size, enum dma_data_direction dir, unsigned long attrs) 71 + { 72 + if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC)) 73 + dma_noncoherent_sync_single_for_cpu(dev, addr, size, dir); 74 + } 75 + 76 + static void dma_noncoherent_unmap_sg(struct device *dev, struct scatterlist *sgl, 77 + int nents, enum dma_data_direction dir, unsigned long attrs) 78 + { 79 + if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC)) 80 + dma_noncoherent_sync_sg_for_cpu(dev, sgl, nents, dir); 81 + } 82 + #endif 83 + 84 + const struct dma_map_ops dma_noncoherent_ops = { 85 + .alloc = arch_dma_alloc, 86 + .free = arch_dma_free, 87 + .mmap = arch_dma_mmap, 88 + .sync_single_for_device = dma_noncoherent_sync_single_for_device, 89 + .sync_sg_for_device = dma_noncoherent_sync_sg_for_device, 90 + .map_page = dma_noncoherent_map_page, 91 + .map_sg = dma_noncoherent_map_sg, 92 + #ifdef CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU 93 + .sync_single_for_cpu = dma_noncoherent_sync_single_for_cpu, 94 + .sync_sg_for_cpu = dma_noncoherent_sync_sg_for_cpu, 95 + .unmap_page = dma_noncoherent_unmap_page, 96 + .unmap_sg = dma_noncoherent_unmap_sg, 97 + #endif 98 + .dma_supported = dma_direct_supported, 99 + .mapping_error = dma_direct_mapping_error, 100 + .cache_sync = arch_dma_cache_sync, 101 + }; 102 + EXPORT_SYMBOL(dma_noncoherent_ops);
+1 -4
lib/iommu-common.c → arch/sparc/kernel/iommu-common.c
··· 8 8 #include <linux/bitmap.h> 9 9 #include <linux/bug.h> 10 10 #include <linux/iommu-helper.h> 11 - #include <linux/iommu-common.h> 12 11 #include <linux/dma-mapping.h> 13 12 #include <linux/hash.h> 13 + #include <asm/iommu-common.h> 14 14 15 15 static unsigned long iommu_large_alloc = 15; 16 16 ··· 93 93 p->hint = p->start; 94 94 p->end = num_entries; 95 95 } 96 - EXPORT_SYMBOL(iommu_tbl_pool_init); 97 96 98 97 unsigned long iommu_tbl_range_alloc(struct device *dev, 99 98 struct iommu_map_table *iommu, ··· 223 224 224 225 return n; 225 226 } 226 - EXPORT_SYMBOL(iommu_tbl_range_alloc); 227 227 228 228 static struct iommu_pool *get_pool(struct iommu_map_table *tbl, 229 229 unsigned long entry) ··· 262 264 bitmap_clear(iommu->map, entry, npages); 263 265 spin_unlock_irqrestore(&(pool->lock), flags); 264 266 } 265 - EXPORT_SYMBOL(iommu_tbl_range_free);
+1 -13
lib/iommu-helper.c
··· 3 3 * IOMMU helper functions for the free area management 4 4 */ 5 5 6 - #include <linux/export.h> 7 6 #include <linux/bitmap.h> 8 - #include <linux/bug.h> 9 - 10 - int iommu_is_span_boundary(unsigned int index, unsigned int nr, 11 - unsigned long shift, 12 - unsigned long boundary_size) 13 - { 14 - BUG_ON(!is_power_of_2(boundary_size)); 15 - 16 - shift = (shift + index) & (boundary_size - 1); 17 - return shift + nr > boundary_size; 18 - } 7 + #include <linux/iommu-helper.h> 19 8 20 9 unsigned long iommu_area_alloc(unsigned long *map, unsigned long size, 21 10 unsigned long start, unsigned int nr, ··· 27 38 } 28 39 return -1; 29 40 } 30 - EXPORT_SYMBOL(iommu_area_alloc);
+3 -8
lib/swiotlb.c
··· 593 593 } 594 594 595 595 /* 596 - * Allocates bounce buffer and returns its kernel virtual address. 596 + * Allocates bounce buffer and returns its physical address. 597 597 */ 598 - 599 598 static phys_addr_t 600 599 map_single(struct device *hwdev, phys_addr_t phys, size_t size, 601 600 enum dma_data_direction dir, unsigned long attrs) ··· 613 614 } 614 615 615 616 /* 616 - * dma_addr is the kernel virtual address of the bounce buffer to unmap. 617 + * tlb_addr is the physical address of the bounce buffer to unmap. 617 618 */ 618 619 void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr, 619 620 size_t size, enum dma_data_direction dir, ··· 691 692 } 692 693 } 693 694 694 - #ifdef CONFIG_DMA_DIRECT_OPS 695 695 static inline bool dma_coherent_ok(struct device *dev, dma_addr_t addr, 696 696 size_t size) 697 697 { ··· 725 727 726 728 out_unmap: 727 729 dev_warn(dev, "hwdev DMA mask = 0x%016Lx, dev_addr = 0x%016Lx\n", 728 - (unsigned long long)(dev ? dev->coherent_dma_mask : 0), 730 + (unsigned long long)dev->coherent_dma_mask, 729 731 (unsigned long long)*dma_handle); 730 732 731 733 /* ··· 762 764 DMA_ATTR_SKIP_CPU_SYNC); 763 765 return true; 764 766 } 765 - #endif 766 767 767 768 static void 768 769 swiotlb_full(struct device *dev, size_t size, enum dma_data_direction dir, ··· 1042 1045 return __phys_to_dma(hwdev, io_tlb_end - 1) <= mask; 1043 1046 } 1044 1047 1045 - #ifdef CONFIG_DMA_DIRECT_OPS 1046 1048 void *swiotlb_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle, 1047 1049 gfp_t gfp, unsigned long attrs) 1048 1050 { ··· 1085 1089 .unmap_page = swiotlb_unmap_page, 1086 1090 .dma_supported = dma_direct_supported, 1087 1091 }; 1088 - #endif /* CONFIG_DMA_DIRECT_OPS */
+1 -1
mm/Kconfig
··· 266 266 bool 267 267 268 268 config PHYS_ADDR_T_64BIT 269 - def_bool 64BIT || ARCH_PHYS_ADDR_T_64BIT 269 + def_bool 64BIT 270 270 271 271 config BOUNCE 272 272 bool "Enable bounce buffers"
+1 -19
net/core/dev.c
··· 2884 2884 EXPORT_SYMBOL(netdev_rx_csum_fault); 2885 2885 #endif 2886 2886 2887 - /* Actually, we should eliminate this check as soon as we know, that: 2888 - * 1. IOMMU is present and allows to map all the memory. 2889 - * 2. No high memory really exists on this machine. 2890 - */ 2891 - 2887 + /* XXX: check that highmem exists at all on the given machine. */ 2892 2888 static int illegal_highdma(struct net_device *dev, struct sk_buff *skb) 2893 2889 { 2894 2890 #ifdef CONFIG_HIGHMEM ··· 2895 2899 skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; 2896 2900 2897 2901 if (PageHighMem(skb_frag_page(frag))) 2898 - return 1; 2899 - } 2900 - } 2901 - 2902 - if (PCI_DMA_BUS_IS_PHYS) { 2903 - struct device *pdev = dev->dev.parent; 2904 - 2905 - if (!pdev) 2906 - return 0; 2907 - for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { 2908 - skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; 2909 - dma_addr_t addr = page_to_phys(skb_frag_page(frag)); 2910 - 2911 - if (!pdev->dma_mask || addr + PAGE_SIZE - 1 > *pdev->dma_mask) 2912 2902 return 1; 2913 2903 } 2914 2904 }
-2
tools/virtio/linux/dma-mapping.h
··· 6 6 # error Virtio userspace code does not support CONFIG_HAS_DMA 7 7 #endif 8 8 9 - #define PCI_DMA_BUS_IS_PHYS 1 10 - 11 9 enum dma_data_direction { 12 10 DMA_BIDIRECTIONAL = 0, 13 11 DMA_TO_DEVICE = 1,