Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

PCI: remove pci_dac_dma_... APIs

Based on replies to a respective query, remove the pci_dac_dma_...() APIs
(except for pci_dac_dma_supported() on Alpha, where this function is used
in non-DAC PCI DMA code).

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Cc: Andi Kleen <ak@suse.de>
Cc: Jesse Barnes <jesse.barnes@intel.com>
Cc: Christoph Hellwig <hch@infradead.org>
Acked-by: David Miller <davem@davemloft.net>
Cc: Jeff Garzik <jeff@garzik.org>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>


Authored by Jan Beulich, committed by Greg Kroah-Hartman.
caa51716 b7b095c1

+7 -464
-103
Documentation/DMA-mapping.txt
···
  Well, not for some odd devices. See the next section for information
  about that.

- DAC Addressing for Address Space Hungry Devices
-
- There exists a class of devices which do not mesh well with the PCI
- DMA mapping API.  By definition these "mappings" are a finite
- resource.  The number of total available mappings per bus is platform
- specific, but there will always be a reasonable amount.
-
- What is "reasonable"?  Reasonable means that networking and block I/O
- devices need not worry about using too many mappings.
-
- As an example of a problematic device, consider compute cluster cards.
- They can potentially need to access gigabytes of memory at once via
- DMA.  Dynamic mappings are unsuitable for this kind of access pattern.
-
- To this end we've provided a small API by which a device driver
- may use DAC cycles to directly address all of physical memory.
- Not all platforms support this, but most do.  It is easy to determine
- whether the platform will work properly at probe time.
-
- First, understand that there may be a SEVERE performance penalty for
- using these interfaces on some platforms.  Therefore, you MUST only
- use these interfaces if it is absolutely required.  %99 of devices can
- use the normal APIs without any problems.
-
- Note that for streaming type mappings you must either use these
- interfaces, or the dynamic mapping interfaces above.  You may not mix
- usage of both for the same device.  Such an act is illegal and is
- guaranteed to put a banana in your tailpipe.
-
- However, consistent mappings may in fact be used in conjunction with
- these interfaces.  Remember that, as defined, consistent mappings are
- always going to be SAC addressable.
-
- The first thing your driver needs to do is query the PCI platform
- layer if it is capable of handling your devices DAC addressing
- capabilities:
-
- 	int pci_dac_dma_supported(struct pci_dev *hwdev, u64 mask);
-
- You may not use the following interfaces if this routine fails.
-
- Next, DMA addresses using this API are kept track of using the
- dma64_addr_t type.  It is guaranteed to be big enough to hold any
- DAC address the platform layer will give to you from the following
- routines.  If you have consistent mappings as well, you still
- use plain dma_addr_t to keep track of those.
-
- All mappings obtained here will be direct.  The mappings are not
- translated, and this is the purpose of this dialect of the DMA API.
-
- All routines work with page/offset pairs.  This is the _ONLY_ way to
- portably refer to any piece of memory.  If you have a cpu pointer
- (which may be validly DMA'd too) you may easily obtain the page
- and offset using something like this:
-
- 	struct page *page = virt_to_page(ptr);
- 	unsigned long offset = offset_in_page(ptr);
-
- Here are the interfaces:
-
- 	dma64_addr_t pci_dac_page_to_dma(struct pci_dev *pdev,
- 					 struct page *page,
- 					 unsigned long offset,
- 					 int direction);
-
- The DAC address for the tuple PAGE/OFFSET are returned.  The direction
- argument is the same as for pci_{map,unmap}_single().  The same rules
- for cpu/device access apply here as for the streaming mapping
- interfaces.  To reiterate:
-
- 	The cpu may touch the buffer before pci_dac_page_to_dma.
- 	The device may touch the buffer after pci_dac_page_to_dma
- 	is made, but the cpu may NOT.
-
- When the DMA transfer is complete, invoke:
-
- 	void pci_dac_dma_sync_single_for_cpu(struct pci_dev *pdev,
- 					     dma64_addr_t dma_addr,
- 					     size_t len, int direction);
-
- This must be done before the CPU looks at the buffer again.
- This interface behaves identically to pci_dma_sync_{single,sg}_for_cpu().
-
- And likewise, if you wish to let the device get back at the buffer after
- the cpu has read/written it, invoke:
-
- 	void pci_dac_dma_sync_single_for_device(struct pci_dev *pdev,
- 						dma64_addr_t dma_addr,
- 						size_t len, int direction);
-
- before letting the device access the DMA area again.
-
- If you need to get back to the PAGE/OFFSET tuple from a dma64_addr_t
- the following interfaces are provided:
-
- 	struct page *pci_dac_dma_to_page(struct pci_dev *pdev,
- 					 dma64_addr_t dma_addr);
- 	unsigned long pci_dac_dma_to_offset(struct pci_dev *pdev,
- 					    dma64_addr_t dma_addr);
-
- This is possible with the DAC interfaces purely because they are
- not translated in any way.
-
  Optimizing Unmap State Space Consumption

  On many platforms, pci_unmap_{single,page}() is simply a nop.
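The removed documentation's page/offset split is pure mask arithmetic: virt_to_page() selects the page and offset_in_page() keeps the low bits. Those are kernel-only helpers, so the sketch below models the same arithmetic in userspace on raw addresses, with a hypothetical 4 KiB page size; the `model_` names are illustrative stand-ins, not kernel API.

```c
#include <stdint.h>

/* Userspace model of the page/offset decomposition described in the
 * (removed) DMA-mapping.txt text.  MODEL_PAGE_SIZE assumes 4 KiB pages;
 * virt_to_page()/offset_in_page() exist only in the kernel, so plain
 * addresses stand in for struct page pointers here. */
#define MODEL_PAGE_SIZE 4096UL
#define MODEL_PAGE_MASK (~(MODEL_PAGE_SIZE - 1))

/* Analogue of virt_to_page(): the page-aligned base of an address. */
static uintptr_t model_page_base(uintptr_t addr)
{
	return addr & MODEL_PAGE_MASK;
}

/* Analogue of offset_in_page(): the byte offset within the page. */
static unsigned long model_offset_in_page(uintptr_t addr)
{
	return addr & ~MODEL_PAGE_MASK;
}
```

Because the two results partition the address bits, base plus offset always reconstructs the original pointer, which is what made the untranslated DAC mapping reversible.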
+4 -26
arch/alpha/kernel/pci_iommu.c
···
  		p[i] = 0;
  }

+ /* True if the machine supports DAC addressing, and DEV can
+    make use of it given MASK.  */
+ static int pci_dac_dma_supported(struct pci_dev *hwdev, u64 mask);
+
  /* Map a single buffer of the indicated size for PCI DMA in streaming
     mode.  The 32-bit PCI bus mastering address to use is returned.
     Once the device is given the dma address, the device owns this memory
···
  /* True if the machine supports DAC addressing, and DEV can
     make use of it given MASK.  */

- int
+ static int
  pci_dac_dma_supported(struct pci_dev *dev, u64 mask)
  {
  	dma64_addr_t dac_offset = alpha_mv.pci_dac_offset;
···

  	return ok;
  }
- EXPORT_SYMBOL(pci_dac_dma_supported);
-
- dma64_addr_t
- pci_dac_page_to_dma(struct pci_dev *pdev, struct page *page,
- 		    unsigned long offset, int direction)
- {
- 	return (alpha_mv.pci_dac_offset
- 		+ __pa(page_address(page))
- 		+ (dma64_addr_t) offset);
- }
- EXPORT_SYMBOL(pci_dac_page_to_dma);
-
- struct page *
- pci_dac_dma_to_page(struct pci_dev *pdev, dma64_addr_t dma_addr)
- {
- 	unsigned long paddr = (dma_addr & PAGE_MASK) - alpha_mv.pci_dac_offset;
- 	return virt_to_page(__va(paddr));
- }
- EXPORT_SYMBOL(pci_dac_dma_to_page);
-
- unsigned long
- pci_dac_dma_to_offset(struct pci_dev *pdev, dma64_addr_t dma_addr)
- {
- 	return (dma_addr & ~PAGE_MASK);
- }
- EXPORT_SYMBOL(pci_dac_dma_to_offset);

  /* Helper for generic DMA-mapping functions. */
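The removed Alpha helpers are a direct-mapped window: mapping adds a fixed DAC offset to the physical address, and unmapping subtracts it back out. The userspace sketch below reproduces that round trip; `DAC_OFFSET` is a made-up stand-in for `alpha_mv.pci_dac_offset`, the raw physical address replaces `__pa(page_address(page))`, and 4 KiB pages are assumed.

```c
#include <stdint.h>

/* Model of the removed Alpha pci_dac_* arithmetic.  DAC_OFFSET and the
 * page size are hypothetical illustration values, not real hardware
 * constants. */
#define DAC_OFFSET 0x4000000000ULL	/* stand-in for alpha_mv.pci_dac_offset */
#define PG_MASK    (~0xFFFULL)		/* 4 KiB pages assumed */

/* ~ pci_dac_page_to_dma(): offset the physical address into the window. */
static uint64_t dac_map(uint64_t paddr, unsigned long offset)
{
	return DAC_OFFSET + paddr + offset;
}

/* ~ pci_dac_dma_to_page() core: mask off the in-page bits, then undo
 * the window offset to recover the page's physical address. */
static uint64_t dac_to_paddr(uint64_t dac)
{
	return (dac & PG_MASK) - DAC_OFFSET;
}

/* ~ pci_dac_dma_to_offset(): the in-page bits pass through untouched. */
static unsigned long dac_to_offset(uint64_t dac)
{
	return (unsigned long)(dac & ~PG_MASK);
}
```

Because the mapping is untranslated, no state is kept anywhere: the reverse functions recover page and offset from the DAC address alone, which is exactly why the documentation above could promise `pci_dac_dma_to_page()`/`pci_dac_dma_to_offset()`.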
+1 -1
arch/mips/pci/Makefile
···
  # Makefile for the PCI specific kernel interface routines under Linux.
  #

- obj-y	+= pci.o pci-dac.o
+ obj-y	+= pci.o

  #
  # PCI bus host bridge specific code
-79
arch/mips/pci/pci-dac.c
···
- /*
-  * This file is subject to the terms and conditions of the GNU General Public
-  * License.  See the file "COPYING" in the main directory of this archive
-  * for more details.
-  *
-  * Copyright (C) 2000  Ani Joshi <ajoshi@unixbox.com>
-  * Copyright (C) 2000, 2001, 06  Ralf Baechle <ralf@linux-mips.org>
-  * swiped from i386, and cloned for MIPS by Geert, polished by Ralf.
-  */
-
- #include <linux/types.h>
- #include <linux/dma-mapping.h>
- #include <linux/mm.h>
- #include <linux/module.h>
- #include <linux/string.h>
-
- #include <asm/cache.h>
- #include <asm/io.h>
-
- #include <dma-coherence.h>
-
- #include <linux/pci.h>
-
- dma64_addr_t pci_dac_page_to_dma(struct pci_dev *pdev,
- 	struct page *page, unsigned long offset, int direction)
- {
- 	struct device *dev = &pdev->dev;
-
- 	BUG_ON(direction == DMA_NONE);
-
- 	if (!plat_device_is_coherent(dev)) {
- 		unsigned long addr;
-
- 		addr = (unsigned long) page_address(page) + offset;
- 		dma_cache_wback_inv(addr, PAGE_SIZE);
- 	}
-
- 	return plat_map_dma_mem_page(dev, page) + offset;
- }
-
- EXPORT_SYMBOL(pci_dac_page_to_dma);
-
- struct page *pci_dac_dma_to_page(struct pci_dev *pdev,
- 	dma64_addr_t dma_addr)
- {
- 	return pfn_to_page(plat_dma_addr_to_phys(dma_addr) >> PAGE_SHIFT);
- }
-
- EXPORT_SYMBOL(pci_dac_dma_to_page);
-
- unsigned long pci_dac_dma_to_offset(struct pci_dev *pdev,
- 	dma64_addr_t dma_addr)
- {
- 	return dma_addr & ~PAGE_MASK;
- }
-
- EXPORT_SYMBOL(pci_dac_dma_to_offset);
-
- void pci_dac_dma_sync_single_for_cpu(struct pci_dev *pdev,
- 	dma64_addr_t dma_addr, size_t len, int direction)
- {
- 	BUG_ON(direction == PCI_DMA_NONE);
-
- 	if (!plat_device_is_coherent(&pdev->dev))
- 		dma_cache_wback_inv(dma_addr + PAGE_OFFSET, len);
- }
-
- EXPORT_SYMBOL(pci_dac_dma_sync_single_for_cpu);
-
- void pci_dac_dma_sync_single_for_device(struct pci_dev *pdev,
- 	dma64_addr_t dma_addr, size_t len, int direction)
- {
- 	BUG_ON(direction == PCI_DMA_NONE);
-
- 	if (!plat_device_is_coherent(&pdev->dev))
- 		dma_cache_wback_inv(dma_addr + PAGE_OFFSET, len);
- }
-
- EXPORT_SYMBOL(pci_dac_dma_sync_single_for_device);
+1 -2
arch/x86_64/kernel/pci-dma.c
···
  int iommu_bio_merge __read_mostly = 0;
  EXPORT_SYMBOL(iommu_bio_merge);

- int iommu_sac_force __read_mostly = 0;
- EXPORT_SYMBOL(iommu_sac_force);
+ static int iommu_sac_force __read_mostly = 0;

  int no_iommu __read_mostly;
  #ifdef CONFIG_IOMMU_DEBUG
-24
include/asm-alpha/pci.h
···

  extern int pci_dma_supported(struct pci_dev *hwdev, u64 mask);

- /* True if the machine supports DAC addressing, and DEV can
-    make use of it given MASK.  */
- extern int pci_dac_dma_supported(struct pci_dev *hwdev, u64 mask);
-
- /* Convert to/from DAC dma address and struct page.  */
- extern dma64_addr_t pci_dac_page_to_dma(struct pci_dev *, struct page *,
- 					unsigned long, int);
- extern struct page *pci_dac_dma_to_page(struct pci_dev *, dma64_addr_t);
- extern unsigned long pci_dac_dma_to_offset(struct pci_dev *, dma64_addr_t);
-
- static inline void
- pci_dac_dma_sync_single_for_cpu(struct pci_dev *pdev, dma64_addr_t dma_addr,
- 				size_t len, int direction)
- {
- 	/* Nothing to do. */
- }
-
- static inline void
- pci_dac_dma_sync_single_for_device(struct pci_dev *pdev, dma64_addr_t dma_addr,
- 				   size_t len, int direction)
- {
- 	/* Nothing to do. */
- }
-
  #ifdef CONFIG_PCI
  static inline void pci_dma_burst_advice(struct pci_dev *pdev,
  					enum pci_dma_burst_strategy *strat,
-5
include/asm-arm/pci.h
···
  #define PCI_DMA_BUS_IS_PHYS     (0)

  /*
-  * We don't support DAC DMA cycles.
-  */
- #define pci_dac_dma_supported(pci_dev, mask)	(0)
-
- /*
   * Whether pci_unmap_{single,page} is a nop depends upon the
   * configuration.
   */
-32
include/asm-cris/pci.h
···
  #define pci_unmap_len(PTR, LEN_NAME)		(0)
  #define pci_unmap_len_set(PTR, LEN_NAME, VAL)	do { } while (0)

- /* This is always fine. */
- #define pci_dac_dma_supported(pci_dev, mask)	(1)
-
- static inline dma64_addr_t
- pci_dac_page_to_dma(struct pci_dev *pdev, struct page *page, unsigned long offset, int direction)
- {
- 	return ((dma64_addr_t) page_to_phys(page) +
- 		(dma64_addr_t) offset);
- }
-
- static inline struct page *
- pci_dac_dma_to_page(struct pci_dev *pdev, dma64_addr_t dma_addr)
- {
- 	return pfn_to_page(dma_addr >> PAGE_SHIFT);
- }
-
- static inline unsigned long
- pci_dac_dma_to_offset(struct pci_dev *pdev, dma64_addr_t dma_addr)
- {
- 	return (dma_addr & ~PAGE_MASK);
- }
-
- static inline void
- pci_dac_dma_sync_single_for_cpu(struct pci_dev *pdev, dma64_addr_t dma_addr, size_t len, int direction)
- {
- }
-
- static inline void
- pci_dac_dma_sync_single_for_device(struct pci_dev *pdev, dma64_addr_t dma_addr, size_t len, int direction)
- {
- }
-
  #define HAVE_PCI_MMAP
  extern int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,
  			       enum pci_mmap_state mmap_state, int write_combine);
-3
include/asm-frv/pci.h
···
  extern void pci_free_consistent(struct pci_dev *hwdev, size_t size,
  				void *vaddr, dma_addr_t dma_handle);

- /* This is always fine. */
- #define pci_dac_dma_supported(pci_dev, mask)	(1)
-
  /* Return the index of the PCI controller for device PDEV. */
  #define pci_controller_num(PDEV)	(0)
-33
include/asm-i386/pci.h
···
  #define pci_unmap_len(PTR, LEN_NAME)		(0)
  #define pci_unmap_len_set(PTR, LEN_NAME, VAL)	do { } while (0)

- /* This is always fine. */
- #define pci_dac_dma_supported(pci_dev, mask)	(1)
-
- static inline dma64_addr_t
- pci_dac_page_to_dma(struct pci_dev *pdev, struct page *page, unsigned long offset, int direction)
- {
- 	return ((dma64_addr_t) page_to_phys(page) +
- 		(dma64_addr_t) offset);
- }
-
- static inline struct page *
- pci_dac_dma_to_page(struct pci_dev *pdev, dma64_addr_t dma_addr)
- {
- 	return pfn_to_page(dma_addr >> PAGE_SHIFT);
- }
-
- static inline unsigned long
- pci_dac_dma_to_offset(struct pci_dev *pdev, dma64_addr_t dma_addr)
- {
- 	return (dma_addr & ~PAGE_MASK);
- }
-
- static inline void
- pci_dac_dma_sync_single_for_cpu(struct pci_dev *pdev, dma64_addr_t dma_addr, size_t len, int direction)
- {
- }
-
- static inline void
- pci_dac_dma_sync_single_for_device(struct pci_dev *pdev, dma64_addr_t dma_addr, size_t len, int direction)
- {
- 	flush_write_buffers();
- }
-
  #define HAVE_PCI_MMAP
  extern int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,
  			       enum pci_mmap_state mmap_state, int write_combine);
-8
include/asm-ia64/pci.h
···
  #define pci_unmap_len_set(PTR, LEN_NAME, VAL)	\
  	(((PTR)->LEN_NAME) = (VAL))

- /* The ia64 platform always supports 64-bit addressing. */
- #define pci_dac_dma_supported(pci_dev, mask)		(1)
- #define pci_dac_page_to_dma(dev,pg,off,dir)		((dma_addr_t) page_to_bus(pg) + (off))
- #define pci_dac_dma_to_page(dev,dma_addr)		(virt_to_page(bus_to_virt(dma_addr)))
- #define pci_dac_dma_to_offset(dev,dma_addr)		offset_in_page(dma_addr)
- #define pci_dac_dma_sync_single_for_cpu(dev,dma_addr,len,dir)	do { } while (0)
- #define pci_dac_dma_sync_single_for_device(dev,dma_addr,len,dir)	do { mb(); } while (0)
-
  #ifdef CONFIG_PCI
  static inline void pci_dma_burst_advice(struct pci_dev *pdev,
  					enum pci_dma_burst_strategy *strat,
-6
include/asm-m68knommu/pci.h
···
  	return 1;
  }

- /*
-  * Not supporting more than 32-bit PCI bus addresses now, but
-  * must satisfy references to this function.  Change if needed.
-  */
- #define pci_dac_dma_supported(pci_dev, mask)	(0)
-
  #endif /* CONFIG_COMEMPCI */

  #endif /* M68KNOMMU_PCI_H */
-14
include/asm-mips/pci.h
···

  #endif /* CONFIG_DMA_NEED_PCI_MAP_STATE */

- /* This is always fine. */
- #define pci_dac_dma_supported(pci_dev, mask)	(1)
-
- extern dma64_addr_t pci_dac_page_to_dma(struct pci_dev *pdev,
- 	struct page *page, unsigned long offset, int direction);
- extern struct page *pci_dac_dma_to_page(struct pci_dev *pdev,
- 	dma64_addr_t dma_addr);
- extern unsigned long pci_dac_dma_to_offset(struct pci_dev *pdev,
- 	dma64_addr_t dma_addr);
- extern void pci_dac_dma_sync_single_for_cpu(struct pci_dev *pdev,
- 	dma64_addr_t dma_addr, size_t len, int direction);
- extern void pci_dac_dma_sync_single_for_device(struct pci_dev *pdev,
- 	dma64_addr_t dma_addr, size_t len, int direction);
-
  #ifdef CONFIG_PCI
  static inline void pci_dma_burst_advice(struct pci_dev *pdev,
  					enum pci_dma_burst_strategy *strat,
-3
include/asm-parisc/pci.h
···
  #define PCIBIOS_MIN_IO          0x10
  #define PCIBIOS_MIN_MEM         0x1000 /* NBPG - but pci/setup-res.c dies */

- /* Don't support DAC yet. */
- #define pci_dac_dma_supported(pci_dev, mask)	(0)
-
  /* export the pci_ DMA API in terms of the dma_ one */
  #include <asm-generic/pci-dma-compat.h>
-1
include/asm-powerpc/dma-mapping.h
···
  	void		(*unmap_sg)(struct device *dev, struct scatterlist *sg,
  				int nents, enum dma_data_direction direction);
  	int		(*dma_supported)(struct device *dev, u64 mask);
- 	int		(*dac_dma_supported)(struct device *dev, u64 mask);
  	int		(*set_dma_mask)(struct device *dev, u64 dma_mask);
  };
-18
include/asm-powerpc/pci.h
···
  extern void set_pci_dma_ops(struct dma_mapping_ops *dma_ops);
  extern struct dma_mapping_ops *get_pci_dma_ops(void);

- /* For DAC DMA, we currently don't support it by default, but
-  * we let 64-bit platforms override this.
-  */
- static inline int pci_dac_dma_supported(struct pci_dev *hwdev, u64 mask)
- {
- 	struct dma_mapping_ops *d = get_pci_dma_ops();
-
- 	if (d && d->dac_dma_supported)
- 		return d->dac_dma_supported(&hwdev->dev, mask);
- 	return 0;
- }
-
  static inline void pci_dma_burst_advice(struct pci_dev *pdev,
  					enum pci_dma_burst_strategy *strat,
  					unsigned long *strategy_parameter)
···
  	*strategy_parameter = ~0UL;
  }
  #endif
-
- /*
-  * At present there are very few 32-bit PPC machines that can have
-  * memory above the 4GB point, and we don't support that.
-  */
- #define pci_dac_dma_supported(pci_dev, mask)	(0)

  /* Return the index of the PCI controller for device PDEV. */
  #define pci_domain_nr(bus) ((struct pci_controller *)(bus)->sysdata)->index
-6
include/asm-ppc/pci.h
···
  }
  #endif

- /*
-  * At present there are very few 32-bit PPC machines that can have
-  * memory above the 4GB point, and we don't support that.
-  */
- #define pci_dac_dma_supported(pci_dev, mask)	(0)
-
  /* Return the index of the PCI controller for device PDEV. */
  #define pci_domain_nr(bus) ((struct pci_controller *)(bus)->sysdata)->index
-5
include/asm-sh/pci.h
···
  #define pci_unmap_len_set(PTR, LEN_NAME, VAL)	do { } while (0)
  #endif

- /* Not supporting more than 32-bit PCI bus addresses now, but
-  * must satisfy references to this function.  Change if needed.
-  */
- #define pci_dac_dma_supported(pci_dev, mask)	(0)
-
  #ifdef CONFIG_PCI
  static inline void pci_dma_burst_advice(struct pci_dev *pdev,
  					enum pci_dma_burst_strategy *strat,
-5
include/asm-sh64/pci.h
···
  #define pci_unmap_len_set(PTR, LEN_NAME, VAL)	do { } while (0)
  #endif

- /* Not supporting more than 32-bit PCI bus addresses now, but
-  * must satisfy references to this function.  Change if needed.
-  */
- #define pci_dac_dma_supported(pci_dev, mask)	(0)
-
  /* These macros should be used after a pci_map_sg call has been done
   * to get bus addresses of each of the SG entries and their lengths.
   * You should only work with the number of sg entries pci_map_sg
-2
include/asm-sparc/pci.h
···
  	return 1;
  }

- #define pci_dac_dma_supported(dev, mask)	(0)
-
  #ifdef CONFIG_PCI
  static inline void pci_dma_burst_advice(struct pci_dev *pdev,
  					enum pci_dma_burst_strategy *strat,
-43
include/asm-sparc64/pci.h
···
  #define PCI64_REQUIRED_MASK	(~(dma64_addr_t)0)
  #define PCI64_ADDR_BASE		0xfffc000000000000UL

- /* Usage of the pci_dac_foo interfaces is only valid if this
-  * test passes.
-  */
- #define pci_dac_dma_supported(pci_dev, mask) \
- 	((((mask) & PCI64_REQUIRED_MASK) == PCI64_REQUIRED_MASK) ? 1 : 0)
-
- static inline dma64_addr_t
- pci_dac_page_to_dma(struct pci_dev *pdev, struct page *page, unsigned long offset, int direction)
- {
- 	return (PCI64_ADDR_BASE +
- 		__pa(page_address(page)) + offset);
- }
-
- static inline struct page *
- pci_dac_dma_to_page(struct pci_dev *pdev, dma64_addr_t dma_addr)
- {
- 	unsigned long paddr = (dma_addr & PAGE_MASK) - PCI64_ADDR_BASE;
-
- 	return virt_to_page(__va(paddr));
- }
-
- static inline unsigned long
- pci_dac_dma_to_offset(struct pci_dev *pdev, dma64_addr_t dma_addr)
- {
- 	return (dma_addr & ~PAGE_MASK);
- }
-
- static inline void
- pci_dac_dma_sync_single_for_cpu(struct pci_dev *pdev, dma64_addr_t dma_addr, size_t len, int direction)
- {
- 	/* DAC cycle addressing does not make use of the
- 	 * PCI controller's streaming cache, so nothing to do.
- 	 */
- }
-
- static inline void
- pci_dac_dma_sync_single_for_device(struct pci_dev *pdev, dma64_addr_t dma_addr, size_t len, int direction)
- {
- 	/* DAC cycle addressing does not make use of the
- 	 * PCI controller's streaming cache, so nothing to do.
- 	 */
- }
-
  #define PCI_DMA_ERROR_CODE		(~(dma_addr_t)0x0)

  static inline int pci_dma_mapping_error(dma_addr_t dma_addr)
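Since `PCI64_REQUIRED_MASK` is all ones, the removed sparc64 support test reduces to "did the driver offer a full 64-bit DMA mask?" and any narrower mask fails. A userspace restatement of that check, with `uint64_t` standing in for the kernel's `dma64_addr_t`:

```c
#include <stdint.h>

/* Restatement of the removed sparc64 pci_dac_dma_supported() test.
 * PCI64_REQUIRED_MASK is all ones, exactly as in the removed header,
 * so only a full 64-bit mask can satisfy the comparison. */
#define PCI64_REQUIRED_MASK (~(uint64_t)0)

static int dac_supported(uint64_t mask)
{
	return ((mask & PCI64_REQUIRED_MASK) == PCI64_REQUIRED_MASK) ? 1 : 0;
}
```

For example, a 32-bit mask (`0xffffffff`) masks down to itself, which is not equal to the all-ones requirement, so DAC was refused for such devices.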
-1
include/asm-v850/rte_cb.h
···
  /* As we don't really support PCI DMA to cpu memory, and use bounce-buffers
     instead, perversely enough, this becomes always true! */
  # define pci_dma_supported(dev, mask)		1
- # define pci_dac_dma_supported(dev, mask)	0
  # define pcibios_assign_all_busses()		1

  #endif /* CONFIG_RTE_MB_A_PCI */
-40
include/asm-x86_64/pci.h
···

  #if defined(CONFIG_IOMMU) || defined(CONFIG_CALGARY_IOMMU)

- /*
-  * x86-64 always supports DAC, but sometimes it is useful to force
-  * devices through the IOMMU to get automatic sg list merging.
-  * Optional right now.
-  */
- extern int iommu_sac_force;
- #define pci_dac_dma_supported(pci_dev, mask)	(!iommu_sac_force)
-
  #define DECLARE_PCI_UNMAP_ADDR(ADDR_NAME)	\
  	dma_addr_t ADDR_NAME;
  #define DECLARE_PCI_UNMAP_LEN(LEN_NAME)		\
···
  #else
  /* No IOMMU */

- #define pci_dac_dma_supported(pci_dev, mask)	1
-
  #define DECLARE_PCI_UNMAP_ADDR(ADDR_NAME)
  #define DECLARE_PCI_UNMAP_LEN(LEN_NAME)
  #define pci_unmap_addr(PTR, ADDR_NAME)		(0)
···
  #endif

  #include <asm-generic/pci-dma-compat.h>
-
- static inline dma64_addr_t
- pci_dac_page_to_dma(struct pci_dev *pdev, struct page *page, unsigned long offset, int direction)
- {
- 	return ((dma64_addr_t) page_to_phys(page) +
- 		(dma64_addr_t) offset);
- }
-
- static inline struct page *
- pci_dac_dma_to_page(struct pci_dev *pdev, dma64_addr_t dma_addr)
- {
- 	return virt_to_page(__va(dma_addr));
- }
-
- static inline unsigned long
- pci_dac_dma_to_offset(struct pci_dev *pdev, dma64_addr_t dma_addr)
- {
- 	return (dma_addr & ~PAGE_MASK);
- }
-
- static inline void
- pci_dac_dma_sync_single_for_cpu(struct pci_dev *pdev, dma64_addr_t dma_addr, size_t len, int direction)
- {
- }
-
- static inline void
- pci_dac_dma_sync_single_for_device(struct pci_dev *pdev, dma64_addr_t dma_addr, size_t len, int direction)
- {
- 	flush_write_buffers();
- }

  #ifdef CONFIG_PCI
  static inline void pci_dma_burst_advice(struct pci_dev *pdev,
-3
include/asm-xtensa/pci.h
···
  #define pci_ubnmap_len(PTR, LEN_NAME)		(0)
  #define pci_unmap_len_set(PTR, LEN_NAME, VAL)	do { } while (0)

- /* We cannot access memory above 4GB */
- #define pci_dac_dma_supported(pci_dev, mask)	(0)
-
  /* Map a range of PCI memory or I/O space for a device into user space */
  int pci_mmap_page_range(struct pci_dev *pdev, struct vm_area_struct *vma,
  	enum pci_mmap_state mmap_state, int write_combine);