Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branches 'pci/aspm', 'pci/enumeration', 'pci/hotplug', 'pci/misc', 'pci/msi', 'pci/resource' and 'pci/virtualization' into next

* pci/aspm:
PCI/ASPM: Simplify Clock Power Management setting
PCI: Use dev->has_secondary_link to find downstream PCIe links
PCI/ASPM: Use dev->has_secondary_link to find downstream links
PCI: Add dev->has_secondary_link to track downstream PCIe links
PCI/ASPM: Remove redundant PCIe port type checking
PCI/ASPM: Drop __pci_disable_link_state() useless "force" parameter

* pci/enumeration:
PCI: Remove unused pci_scan_bus_parented()
xen/pcifront: Don't use deprecated function pci_scan_bus_parented()
PCI: designware: Use pci_scan_root_bus() for simplicity
PCI: tegra: Remove tegra_pcie_scan_bus()
PCI: mvebu: Remove mvebu_pcie_scan_bus()

* pci/hotplug:
PCI: pciehp: Wait for hotplug command completion where necessary
PCI: Propagate the "ignore hotplug" setting to parent
ACPI / hotplug / PCI: Check ignore_hotplug for all downstream devices
PCI: pciehp: Drop pointless label from pciehp_probe()
PCI: pciehp: Drop pointless ACPI-based "slot detection" check

* pci/misc:
PCI: Remove unused pci_dma_burst_advice()
PCI: Remove unused pcibios_select_root() (again)
PCI: Remove unnecessary #includes of <asm/pci.h>
PCI: Include <linux/pci.h>, not <asm/pci.h>

* pci/msi:
PCI/MSI: Remove unused pci_msi_off()
PCI/MSI: Drop pci_msi_off() calls from quirks
ntb: Drop pci_msi_off() call during probe
virtio_pci: drop pci_msi_off() call during probe
PCI/MSI: Disable MSI at enumeration even if kernel doesn't support MSI
PCI/MSI: Export pci_msi_set_enable(), pci_msix_clear_and_set_ctrl()
PCI/MSI: Rename msi_set_enable(), msix_clear_and_set_ctrl()

* pci/resource:
PCI: Add pci_bus_addr_t

* pci/virtualization:
ACPI / PCI: Account for ARI in _PRT lookups
PCI: Move pci_ari_enabled() to global header
PCI: Add function 1 DMA alias quirk for Marvell 9120
PCI: Add ACS quirks for Intel 9-series PCH root ports

+250 -692
+17 -12
Documentation/DMA-API-HOWTO.txt
··· 25 25 address is not directly useful to a driver; it must use ioremap() to map 26 26 the space and produce a virtual address. 27 27 28 - I/O devices use a third kind of address: a "bus address" or "DMA address". 29 - If a device has registers at an MMIO address, or if it performs DMA to read 30 - or write system memory, the addresses used by the device are bus addresses. 31 - In some systems, bus addresses are identical to CPU physical addresses, but 32 - in general they are not. IOMMUs and host bridges can produce arbitrary 28 + I/O devices use a third kind of address: a "bus address". If a device has 29 + registers at an MMIO address, or if it performs DMA to read or write system 30 + memory, the addresses used by the device are bus addresses. In some 31 + systems, bus addresses are identical to CPU physical addresses, but in 32 + general they are not. IOMMUs and host bridges can produce arbitrary 33 33 mappings between physical and bus addresses. 34 + 35 + From a device's point of view, DMA uses the bus address space, but it may 36 + be restricted to a subset of that space. For example, even if a system 37 + supports 64-bit addresses for main memory and PCI BARs, it may use an IOMMU 38 + so devices only need to use 32-bit DMA addresses. 34 39 35 40 Here's a picture and some examples: 36 41 ··· 77 72 cannot because DMA doesn't go through the CPU virtual memory system. 78 73 79 74 In some simple systems, the device can do DMA directly to physical address 80 - Y. But in many others, there is IOMMU hardware that translates bus 75 + Y. But in many others, there is IOMMU hardware that translates DMA 81 76 addresses to physical addresses, e.g., it translates Z to Y. This is part 82 77 of the reason for the DMA API: the driver can give a virtual address X to 83 78 an interface like dma_map_single(), which sets up any required IOMMU 84 - mapping and returns the bus address Z. The driver then tells the device to 79 + mapping and returns the DMA address Z. The driver then tells the device to 85 80 do DMA to Z, and the IOMMU maps it to the buffer at address Y in system 86 81 RAM. 87 82 ··· 103 98 #include <linux/dma-mapping.h> 104 99 105 100 is in your driver, which provides the definition of dma_addr_t. This type 106 - can hold any valid DMA or bus address for the platform and should be used 101 + can hold any valid DMA address for the platform and should be used 107 102 everywhere you hold a DMA address returned from the DMA mapping functions. 108 103 109 104 What memory is DMA'able? ··· 321 316 Think of "consistent" as "synchronous" or "coherent". 322 317 323 318 The current default is to return consistent memory in the low 32 324 - bits of the bus space. However, for future compatibility you should 319 + bits of the DMA space. However, for future compatibility you should 325 320 set the consistent mask even if this default is fine for your 326 321 driver. 327 322 ··· 408 403 can use to access it from the CPU and dma_handle which you pass to the 409 404 card. 410 405 411 - The CPU virtual address and the DMA bus address are both 406 + The CPU virtual address and the DMA address are both 412 407 guaranteed to be aligned to the smallest PAGE_SIZE order which 413 408 is greater than or equal to the requested size. This invariant 414 409 exists (for example) to guarantee that if you allocate a chunk ··· 650 645 dma_map_sg call. 651 646 652 647 Every dma_map_{single,sg}() call should have its dma_unmap_{single,sg}() 653 - counterpart, because the bus address space is a shared resource and 654 - you could render the machine unusable by consuming all bus addresses. 648 + counterpart, because the DMA address space is a shared resource and 649 + you could render the machine unusable by consuming all DMA addresses. 655 650 656 651 If you need to use the same streaming DMA region multiple times and touch 657 652 the data in between the DMA transfers, the buffer needs to be synced
+15 -15
Documentation/DMA-API.txt
··· 18 18 To get the dma_ API, you must #include <linux/dma-mapping.h>. This 19 19 provides dma_addr_t and the interfaces described below. 20 20 21 - A dma_addr_t can hold any valid DMA or bus address for the platform. It 22 - can be given to a device to use as a DMA source or target. A CPU cannot 23 - reference a dma_addr_t directly because there may be translation between 24 - its physical address space and the bus address space. 21 + A dma_addr_t can hold any valid DMA address for the platform. It can be 22 + given to a device to use as a DMA source or target. A CPU cannot reference 23 + a dma_addr_t directly because there may be translation between its physical 24 + address space and the DMA address space. 25 25 26 26 Part Ia - Using large DMA-coherent buffers 27 27 ------------------------------------------ ··· 42 42 address space) or NULL if the allocation failed. 43 43 44 44 It also returns a <dma_handle> which may be cast to an unsigned integer the 45 - same width as the bus and given to the device as the bus address base of 45 + same width as the bus and given to the device as the DMA address base of 46 46 the region. 47 47 48 48 Note: consistent memory can be expensive on some platforms, and the ··· 193 193 enum dma_data_direction direction) 194 194 195 195 Maps a piece of processor virtual memory so it can be accessed by the 196 - device and returns the bus address of the memory. 196 + device and returns the DMA address of the memory. 197 197 198 198 The direction for both APIs may be converted freely by casting. 199 199 However the dma_ API uses a strongly typed enumerator for its ··· 212 212 this API should be obtained from sources which guarantee it to be 213 213 physically contiguous (like kmalloc). 
214 214 215 - Further, the bus address of the memory must be within the 215 + Further, the DMA address of the memory must be within the 216 216 dma_mask of the device (the dma_mask is a bit mask of the 217 - addressable region for the device, i.e., if the bus address of 218 - the memory ANDed with the dma_mask is still equal to the bus 217 + addressable region for the device, i.e., if the DMA address of 218 + the memory ANDed with the dma_mask is still equal to the DMA 219 219 address, then the device can perform DMA to the memory). To 220 220 ensure that the memory allocated by kmalloc is within the dma_mask, 221 221 the driver may specify various platform-dependent flags to restrict 222 - the bus address range of the allocation (e.g., on x86, GFP_DMA 223 - guarantees to be within the first 16MB of available bus addresses, 222 + the DMA address range of the allocation (e.g., on x86, GFP_DMA 223 + guarantees to be within the first 16MB of available DMA addresses, 224 224 as required by ISA devices). 225 225 226 226 Note also that the above constraints on physical contiguity and 227 227 dma_mask may not apply if the platform has an IOMMU (a device which 228 - maps an I/O bus address to a physical memory address). However, to be 228 + maps an I/O DMA address to a physical memory address). However, to be 229 229 portable, device driver writers may *not* assume that such an IOMMU 230 230 exists. 231 231 ··· 296 296 dma_map_sg(struct device *dev, struct scatterlist *sg, 297 297 int nents, enum dma_data_direction direction) 298 298 299 - Returns: the number of bus address segments mapped (this may be shorter 299 + Returns: the number of DMA address segments mapped (this may be shorter 300 300 than <nents> passed in if some elements of the scatter/gather list are 301 301 physically or virtually adjacent and an IOMMU maps them with a single 302 302 entry). ··· 340 340 API. 
341 341 342 342 Note: <nents> must be the number you passed in, *not* the number of 343 - bus address entries returned. 343 + DMA address entries returned. 344 344 345 345 void 346 346 dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size, ··· 507 507 phys_addr is the CPU physical address to which the memory is currently 508 508 assigned (this will be ioremapped so the CPU can access the region). 509 509 510 - device_addr is the bus address the device needs to be programmed 510 + device_addr is the DMA address the device needs to be programmed 511 511 with to actually address this memory (this will be handed out as the 512 512 dma_addr_t in dma_alloc_coherent()). 513 513
-16
arch/alpha/include/asm/pci.h
··· 71 71 /* implement the pci_ DMA API in terms of the generic device dma_ one */ 72 72 #include <asm-generic/pci-dma-compat.h> 73 73 74 - static inline void pci_dma_burst_advice(struct pci_dev *pdev, 75 - enum pci_dma_burst_strategy *strat, 76 - unsigned long *strategy_parameter) 77 - { 78 - unsigned long cacheline_size; 79 - u8 byte; 80 - 81 - pci_read_config_byte(pdev, PCI_CACHE_LINE_SIZE, &byte); 82 - if (byte == 0) 83 - cacheline_size = 1024; 84 - else 85 - cacheline_size = (int) byte * 4; 86 - 87 - *strat = PCI_DMA_BURST_BOUNDARY; 88 - *strategy_parameter = cacheline_size; 89 - } 90 74 #endif 91 75 92 76 /* TODO: integrate with include/asm-generic/pci.h ? */
-1
arch/alpha/kernel/core_irongate.c
··· 22 22 #include <linux/bootmem.h> 23 23 24 24 #include <asm/ptrace.h> 25 - #include <asm/pci.h> 26 25 #include <asm/cacheflush.h> 27 26 #include <asm/tlbflush.h> 28 27
-1
arch/alpha/kernel/sys_eiger.c
··· 22 22 #include <asm/irq.h> 23 23 #include <asm/mmu_context.h> 24 24 #include <asm/io.h> 25 - #include <asm/pci.h> 26 25 #include <asm/pgtable.h> 27 26 #include <asm/core_tsunami.h> 28 27 #include <asm/hwrpb.h>
-1
arch/alpha/kernel/sys_nautilus.c
··· 39 39 #include <asm/irq.h> 40 40 #include <asm/mmu_context.h> 41 41 #include <asm/io.h> 42 - #include <asm/pci.h> 43 42 #include <asm/pgtable.h> 44 43 #include <asm/core_irongate.h> 45 44 #include <asm/hwrpb.h>
-10
arch/arm/include/asm/pci.h
··· 31 31 */ 32 32 #define PCI_DMA_BUS_IS_PHYS (1) 33 33 34 - #ifdef CONFIG_PCI 35 - static inline void pci_dma_burst_advice(struct pci_dev *pdev, 36 - enum pci_dma_burst_strategy *strat, 37 - unsigned long *strategy_parameter) 38 - { 39 - *strat = PCI_DMA_BURST_INFINITY; 40 - *strategy_parameter = ~0UL; 41 - } 42 - #endif 43 - 44 34 #define HAVE_PCI_MMAP 45 35 extern int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma, 46 36 enum pci_mmap_state mmap_state, int write_combine);
-10
arch/frv/include/asm/pci.h
··· 41 41 /* Return the index of the PCI controller for device PDEV. */ 42 42 #define pci_controller_num(PDEV) (0) 43 43 44 - #ifdef CONFIG_PCI 45 - static inline void pci_dma_burst_advice(struct pci_dev *pdev, 46 - enum pci_dma_burst_strategy *strat, 47 - unsigned long *strategy_parameter) 48 - { 49 - *strat = PCI_DMA_BURST_INFINITY; 50 - *strategy_parameter = ~0UL; 51 - } 52 - #endif 53 - 54 44 /* 55 45 * These are pretty much arbitrary with the CoMEM implementation. 56 46 * We have the whole address space to ourselves.
-32
arch/ia64/include/asm/pci.h
··· 52 52 53 53 #include <asm-generic/pci-dma-compat.h> 54 54 55 - #ifdef CONFIG_PCI 56 - static inline void pci_dma_burst_advice(struct pci_dev *pdev, 57 - enum pci_dma_burst_strategy *strat, 58 - unsigned long *strategy_parameter) 59 - { 60 - unsigned long cacheline_size; 61 - u8 byte; 62 - 63 - pci_read_config_byte(pdev, PCI_CACHE_LINE_SIZE, &byte); 64 - if (byte == 0) 65 - cacheline_size = 1024; 66 - else 67 - cacheline_size = (int) byte * 4; 68 - 69 - *strat = PCI_DMA_BURST_MULTIPLE; 70 - *strategy_parameter = cacheline_size; 71 - } 72 - #endif 73 - 74 55 #define HAVE_PCI_MMAP 75 56 extern int pci_mmap_page_range (struct pci_dev *dev, struct vm_area_struct *vma, 76 57 enum pci_mmap_state mmap_state, int write_combine); ··· 87 106 static inline int pci_proc_domain(struct pci_bus *bus) 88 107 { 89 108 return (pci_domain_nr(bus) != 0); 90 - } 91 - 92 - static inline struct resource * 93 - pcibios_select_root(struct pci_dev *pdev, struct resource *res) 94 - { 95 - struct resource *root = NULL; 96 - 97 - if (res->flags & IORESOURCE_IO) 98 - root = &ioport_resource; 99 - if (res->flags & IORESOURCE_MEM) 100 - root = &iomem_resource; 101 - 102 - return root; 103 109 } 104 110 105 111 #define HAVE_ARCH_PCI_GET_LEGACY_IDE_IRQ
-23
arch/microblaze/include/asm/pci.h
··· 44 44 */ 45 45 #define pcibios_assign_all_busses() 0 46 46 47 - #ifdef CONFIG_PCI 48 - static inline void pci_dma_burst_advice(struct pci_dev *pdev, 49 - enum pci_dma_burst_strategy *strat, 50 - unsigned long *strategy_parameter) 51 - { 52 - *strat = PCI_DMA_BURST_INFINITY; 53 - *strategy_parameter = ~0UL; 54 - } 55 - #endif 56 - 57 47 extern int pci_domain_nr(struct pci_bus *bus); 58 48 59 49 /* Decide whether to display the domain number in /proc */ ··· 72 82 * this boolean for bounce buffer decisions. 73 83 */ 74 84 #define PCI_DMA_BUS_IS_PHYS (1) 75 - 76 - static inline struct resource *pcibios_select_root(struct pci_dev *pdev, 77 - struct resource *res) 78 - { 79 - struct resource *root = NULL; 80 - 81 - if (res->flags & IORESOURCE_IO) 82 - root = &ioport_resource; 83 - if (res->flags & IORESOURCE_MEM) 84 - root = &iomem_resource; 85 - 86 - return root; 87 - } 88 85 89 86 extern void pcibios_claim_one_bus(struct pci_bus *b); 90 87
-10
arch/mips/include/asm/pci.h
··· 113 113 */ 114 114 extern unsigned int PCI_DMA_BUS_IS_PHYS; 115 115 116 - #ifdef CONFIG_PCI 117 - static inline void pci_dma_burst_advice(struct pci_dev *pdev, 118 - enum pci_dma_burst_strategy *strat, 119 - unsigned long *strategy_parameter) 120 - { 121 - *strat = PCI_DMA_BURST_INFINITY; 122 - *strategy_parameter = ~0UL; 123 - } 124 - #endif 125 - 126 116 #ifdef CONFIG_PCI_DOMAINS 127 117 #define pci_domain_nr(bus) ((struct pci_controller *)(bus)->sysdata)->index 128 118
-1
arch/mips/pci/fixup-cobalt.c
··· 13 13 #include <linux/kernel.h> 14 14 #include <linux/init.h> 15 15 16 - #include <asm/pci.h> 17 16 #include <asm/io.h> 18 17 #include <asm/gt64120.h> 19 18
-1
arch/mips/pci/ops-mace.c
··· 8 8 #include <linux/kernel.h> 9 9 #include <linux/pci.h> 10 10 #include <linux/types.h> 11 - #include <asm/pci.h> 12 11 #include <asm/ip32/mace.h> 13 12 14 13 #if 0
-1
arch/mips/pci/pci-lantiq.c
··· 20 20 #include <linux/of_irq.h> 21 21 #include <linux/of_pci.h> 22 22 23 - #include <asm/pci.h> 24 23 #include <asm/gpio.h> 25 24 #include <asm/addrspace.h> 26 25
-13
arch/mn10300/include/asm/pci.h
··· 83 83 /* implement the pci_ DMA API in terms of the generic device dma_ one */ 84 84 #include <asm-generic/pci-dma-compat.h> 85 85 86 - static inline struct resource * 87 - pcibios_select_root(struct pci_dev *pdev, struct resource *res) 88 - { 89 - struct resource *root = NULL; 90 - 91 - if (res->flags & IORESOURCE_IO) 92 - root = &ioport_resource; 93 - if (res->flags & IORESOURCE_MEM) 94 - root = &iomem_resource; 95 - 96 - return root; 97 - } 98 - 99 86 static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel) 100 87 { 101 88 return channel ? 15 : 14;
-19
arch/parisc/include/asm/pci.h
··· 196 196 /* export the pci_ DMA API in terms of the dma_ one */ 197 197 #include <asm-generic/pci-dma-compat.h> 198 198 199 - #ifdef CONFIG_PCI 200 - static inline void pci_dma_burst_advice(struct pci_dev *pdev, 201 - enum pci_dma_burst_strategy *strat, 202 - unsigned long *strategy_parameter) 203 - { 204 - unsigned long cacheline_size; 205 - u8 byte; 206 - 207 - pci_read_config_byte(pdev, PCI_CACHE_LINE_SIZE, &byte); 208 - if (byte == 0) 209 - cacheline_size = 1024; 210 - else 211 - cacheline_size = (int) byte * 4; 212 - 213 - *strat = PCI_DMA_BURST_MULTIPLE; 214 - *strategy_parameter = cacheline_size; 215 - } 216 - #endif 217 - 218 199 static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel) 219 200 { 220 201 return channel ? 15 : 14;
-30
arch/powerpc/include/asm/pci.h
··· 71 71 */ 72 72 #define PCI_DISABLE_MWI 73 73 74 - #ifdef CONFIG_PCI 75 - static inline void pci_dma_burst_advice(struct pci_dev *pdev, 76 - enum pci_dma_burst_strategy *strat, 77 - unsigned long *strategy_parameter) 78 - { 79 - unsigned long cacheline_size; 80 - u8 byte; 81 - 82 - pci_read_config_byte(pdev, PCI_CACHE_LINE_SIZE, &byte); 83 - if (byte == 0) 84 - cacheline_size = 1024; 85 - else 86 - cacheline_size = (int) byte * 4; 87 - 88 - *strat = PCI_DMA_BURST_MULTIPLE; 89 - *strategy_parameter = cacheline_size; 90 - } 91 - #endif 92 - 93 - #else /* 32-bit */ 94 - 95 - #ifdef CONFIG_PCI 96 - static inline void pci_dma_burst_advice(struct pci_dev *pdev, 97 - enum pci_dma_burst_strategy *strat, 98 - unsigned long *strategy_parameter) 99 - { 100 - *strat = PCI_DMA_BURST_INFINITY; 101 - *strategy_parameter = ~0UL; 102 - } 103 - #endif 104 74 #endif /* CONFIG_PPC64 */ 105 75 106 76 extern int pci_domain_nr(struct pci_bus *bus);
-1
arch/powerpc/kernel/prom.c
··· 46 46 #include <asm/mmu.h> 47 47 #include <asm/paca.h> 48 48 #include <asm/pgtable.h> 49 - #include <asm/pci.h> 50 49 #include <asm/iommu.h> 51 50 #include <asm/btext.h> 52 51 #include <asm/sections.h>
-1
arch/powerpc/kernel/prom_init.c
··· 37 37 #include <asm/smp.h> 38 38 #include <asm/mmu.h> 39 39 #include <asm/pgtable.h> 40 - #include <asm/pci.h> 41 40 #include <asm/iommu.h> 42 41 #include <asm/btext.h> 43 42 #include <asm/sections.h>
+1 -1
arch/powerpc/platforms/52xx/mpc52xx_pci.c
··· 12 12 13 13 #undef DEBUG 14 14 15 - #include <asm/pci.h> 15 + #include <linux/pci.h> 16 16 #include <asm/mpc52xx.h> 17 17 #include <asm/delay.h> 18 18 #include <asm/machdep.h>
+1 -1
arch/s390/kernel/suspend.c
··· 9 9 #include <linux/pfn.h> 10 10 #include <linux/suspend.h> 11 11 #include <linux/mm.h> 12 + #include <linux/pci.h> 12 13 #include <asm/ctl_reg.h> 13 14 #include <asm/ipl.h> 14 15 #include <asm/cio.h> 15 - #include <asm/pci.h> 16 16 #include <asm/sections.h> 17 17 #include "entry.h" 18 18
-1
arch/sh/drivers/pci/ops-sh5.c
··· 18 18 #include <linux/delay.h> 19 19 #include <linux/types.h> 20 20 #include <linux/irq.h> 21 - #include <asm/pci.h> 22 21 #include <asm/io.h> 23 22 #include "pci-sh5.h" 24 23
-1
arch/sh/drivers/pci/pci-sh5.c
··· 20 20 #include <linux/types.h> 21 21 #include <linux/irq.h> 22 22 #include <cpu/irq.h> 23 - #include <asm/pci.h> 24 23 #include <asm/io.h> 25 24 #include "pci-sh5.h" 26 25
-18
arch/sh/include/asm/pci.h
··· 86 86 * direct memory write. 87 87 */ 88 88 #define PCI_DISABLE_MWI 89 - 90 - static inline void pci_dma_burst_advice(struct pci_dev *pdev, 91 - enum pci_dma_burst_strategy *strat, 92 - unsigned long *strategy_parameter) 93 - { 94 - unsigned long cacheline_size; 95 - u8 byte; 96 - 97 - pci_read_config_byte(pdev, PCI_CACHE_LINE_SIZE, &byte); 98 - 99 - if (byte == 0) 100 - cacheline_size = L1_CACHE_BYTES; 101 - else 102 - cacheline_size = byte << 2; 103 - 104 - *strat = PCI_DMA_BURST_MULTIPLE; 105 - *strategy_parameter = cacheline_size; 106 - } 107 89 #endif 108 90 109 91 /* Board-specific fixup routines. */
-10
arch/sparc/include/asm/pci_32.h
··· 22 22 23 23 struct pci_dev; 24 24 25 - #ifdef CONFIG_PCI 26 - static inline void pci_dma_burst_advice(struct pci_dev *pdev, 27 - enum pci_dma_burst_strategy *strat, 28 - unsigned long *strategy_parameter) 29 - { 30 - *strat = PCI_DMA_BURST_INFINITY; 31 - *strategy_parameter = ~0UL; 32 - } 33 - #endif 34 - 35 25 #endif /* __KERNEL__ */ 36 26 37 27 #ifndef CONFIG_LEON_PCI
-19
arch/sparc/include/asm/pci_64.h
··· 31 31 #define PCI64_REQUIRED_MASK (~(u64)0) 32 32 #define PCI64_ADDR_BASE 0xfffc000000000000UL 33 33 34 - #ifdef CONFIG_PCI 35 - static inline void pci_dma_burst_advice(struct pci_dev *pdev, 36 - enum pci_dma_burst_strategy *strat, 37 - unsigned long *strategy_parameter) 38 - { 39 - unsigned long cacheline_size; 40 - u8 byte; 41 - 42 - pci_read_config_byte(pdev, PCI_CACHE_LINE_SIZE, &byte); 43 - if (byte == 0) 44 - cacheline_size = 1024; 45 - else 46 - cacheline_size = (int) byte * 4; 47 - 48 - *strat = PCI_DMA_BURST_BOUNDARY; 49 - *strategy_parameter = cacheline_size; 50 - } 51 - #endif 52 - 53 34 /* Return the index of the PCI controller for device PDEV. */ 54 35 55 36 int pci_domain_nr(struct pci_bus *bus);
-10
arch/unicore32/include/asm/pci.h
··· 18 18 #include <asm-generic/pci.h> 19 19 #include <mach/hardware.h> /* for PCIBIOS_MIN_* */ 20 20 21 - #ifdef CONFIG_PCI 22 - static inline void pci_dma_burst_advice(struct pci_dev *pdev, 23 - enum pci_dma_burst_strategy *strat, 24 - unsigned long *strategy_parameter) 25 - { 26 - *strat = PCI_DMA_BURST_INFINITY; 27 - *strategy_parameter = ~0UL; 28 - } 29 - #endif 30 - 31 21 #define HAVE_PCI_MMAP 32 22 extern int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma, 33 23 enum pci_mmap_state mmap_state, int write_combine);
-7
arch/x86/include/asm/pci.h
··· 80 80 81 81 #ifdef CONFIG_PCI 82 82 extern void early_quirks(void); 83 - static inline void pci_dma_burst_advice(struct pci_dev *pdev, 84 - enum pci_dma_burst_strategy *strat, 85 - unsigned long *strategy_parameter) 86 - { 87 - *strat = PCI_DMA_BURST_INFINITY; 88 - *strategy_parameter = ~0UL; 89 - } 90 83 #else 91 84 static inline void early_quirks(void) { } 92 85 #endif
-1
arch/x86/kernel/x86_init.c
··· 11 11 #include <asm/bios_ebda.h> 12 12 #include <asm/paravirt.h> 13 13 #include <asm/pci_x86.h> 14 - #include <asm/pci.h> 15 14 #include <asm/mpspec.h> 16 15 #include <asm/setup.h> 17 16 #include <asm/apic.h>
+1 -1
drivers/acpi/pci_irq.c
··· 163 163 { 164 164 int segment = pci_domain_nr(dev->bus); 165 165 int bus = dev->bus->number; 166 - int device = PCI_SLOT(dev->devfn); 166 + int device = pci_ari_enabled(dev->bus) ? 0 : PCI_SLOT(dev->devfn); 167 167 struct acpi_prt_entry *entry; 168 168 169 169 if (((prt->address >> 16) & 0xffff) != device ||
-1
drivers/net/ethernet/sun/cassini.c
··· 3058 3058 /* setup core arbitration weight register */ 3059 3059 writel(CAWR_RR_DIS, cp->regs + REG_CAWR); 3060 3060 3061 - /* XXX Use pci_dma_burst_advice() */ 3062 3061 #if !defined(CONFIG_SPARC64) && !defined(CONFIG_ALPHA) 3063 3062 /* set the infinite burst register for chips that don't have 3064 3063 * pci issues.
-2
drivers/ntb/ntb_hw.c
··· 1313 1313 struct pci_dev *pdev = ndev->pdev; 1314 1314 int rc; 1315 1315 1316 - pci_msi_off(pdev); 1317 - 1318 1316 /* Verify intx is enabled */ 1319 1317 pci_intx(pdev, 1); 1320 1318
+4
drivers/pci/Kconfig
··· 1 1 # 2 2 # PCI configuration 3 3 # 4 + config PCI_BUS_ADDR_T_64BIT 5 + def_bool y if (ARCH_DMA_ADDR_T_64BIT || 64BIT) 6 + depends on PCI 7 + 4 8 config PCI_MSI 5 9 bool "Message Signaled Interrupts (MSI and MSI-X)" 6 10 depends on PCI
+5 -5
drivers/pci/bus.c
··· 92 92 } 93 93 94 94 static struct pci_bus_region pci_32_bit = {0, 0xffffffffULL}; 95 - #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT 95 + #ifdef CONFIG_PCI_BUS_ADDR_T_64BIT 96 96 static struct pci_bus_region pci_64_bit = {0, 97 - (dma_addr_t) 0xffffffffffffffffULL}; 98 - static struct pci_bus_region pci_high = {(dma_addr_t) 0x100000000ULL, 99 - (dma_addr_t) 0xffffffffffffffffULL}; 97 + (pci_bus_addr_t) 0xffffffffffffffffULL}; 98 + static struct pci_bus_region pci_high = {(pci_bus_addr_t) 0x100000000ULL, 99 + (pci_bus_addr_t) 0xffffffffffffffffULL}; 100 100 #endif 101 101 102 102 /* ··· 200 200 resource_size_t), 201 201 void *alignf_data) 202 202 { 203 - #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT 203 + #ifdef CONFIG_PCI_BUS_ADDR_T_64BIT 204 204 int rc; 205 205 206 206 if (res->flags & IORESOURCE_MEM_64) {
+1 -17
drivers/pci/host/pci-mvebu.c
··· 751 751 return 1; 752 752 } 753 753 754 - static struct pci_bus *mvebu_pcie_scan_bus(int nr, struct pci_sys_data *sys) 755 - { 756 - struct mvebu_pcie *pcie = sys_to_pcie(sys); 757 - struct pci_bus *bus; 758 - 759 - bus = pci_create_root_bus(&pcie->pdev->dev, sys->busnr, 760 - &mvebu_pcie_ops, sys, &sys->resources); 761 - if (!bus) 762 - return NULL; 763 - 764 - pci_scan_child_bus(bus); 765 - 766 - return bus; 767 - } 768 - 769 754 static resource_size_t mvebu_pcie_align_resource(struct pci_dev *dev, 770 755 const struct resource *res, 771 756 resource_size_t start, ··· 794 809 hw.nr_controllers = 1; 795 810 hw.private_data = (void **)&pcie; 796 811 hw.setup = mvebu_pcie_setup; 797 - hw.scan = mvebu_pcie_scan_bus; 798 812 hw.map_irq = of_irq_parse_and_map_pci; 799 813 hw.ops = &mvebu_pcie_ops; 800 814 hw.align_resource = mvebu_pcie_align_resource; 801 815 802 - pci_common_init(&hw); 816 + pci_common_init_dev(&pcie->pdev->dev, &hw); 803 817 } 804 818 805 819 /*
-16
drivers/pci/host/pci-tegra.c
··· 630 630 return irq; 631 631 } 632 632 633 - static struct pci_bus *tegra_pcie_scan_bus(int nr, struct pci_sys_data *sys) 634 - { 635 - struct tegra_pcie *pcie = sys_to_pcie(sys); 636 - struct pci_bus *bus; 637 - 638 - bus = pci_create_root_bus(pcie->dev, sys->busnr, &tegra_pcie_ops, sys, 639 - &sys->resources); 640 - if (!bus) 641 - return NULL; 642 - 643 - pci_scan_child_bus(bus); 644 - 645 - return bus; 646 - } 647 - 648 633 static irqreturn_t tegra_pcie_isr(int irq, void *arg) 649 634 { 650 635 const char *err_msg[] = { ··· 1816 1831 hw.private_data = (void **)&pcie; 1817 1832 hw.setup = tegra_pcie_setup; 1818 1833 hw.map_irq = tegra_pcie_map_irq; 1819 - hw.scan = tegra_pcie_scan_bus; 1820 1834 hw.ops = &tegra_pcie_ops; 1821 1835 1822 1836 pci_common_init_dev(pcie->dev, &hw);
+1 -3
drivers/pci/host/pcie-designware.c
··· 728 728 struct pcie_port *pp = sys_to_pcie(sys); 729 729 730 730 pp->root_bus_nr = sys->busnr; 731 - bus = pci_create_root_bus(pp->dev, sys->busnr, 731 + bus = pci_scan_root_bus(pp->dev, sys->busnr, 732 732 &dw_pcie_ops, sys, &sys->resources); 733 733 if (!bus) 734 734 return NULL; 735 - 736 - pci_scan_child_bus(bus); 737 735 738 736 if (bus && pp->ops->scan_bus) 739 737 pp->ops->scan_bus(pp);
-3
drivers/pci/hotplug/Makefile
··· 61 61 pciehp_ctrl.o \ 62 62 pciehp_pci.o \ 63 63 pciehp_hpc.o 64 - ifdef CONFIG_ACPI 65 - pciehp-objs += pciehp_acpi.o 66 - endif 67 64 68 65 shpchp-objs := shpchp_core.o \ 69 66 shpchp_ctrl.o \
+2 -3
drivers/pci/hotplug/acpiphp_glue.c
··· 632 632 { 633 633 struct acpi_device *adev = ACPI_COMPANION(&dev->dev); 634 634 struct pci_bus *bus = dev->subordinate; 635 - bool alive = false; 635 + bool alive = dev->ignore_hotplug; 636 636 637 637 if (adev) { 638 638 acpi_status status; 639 639 unsigned long long sta; 640 640 641 641 status = acpi_evaluate_integer(adev->handle, "_STA", NULL, &sta); 642 - alive = (ACPI_SUCCESS(status) && device_status_valid(sta)) 643 - || dev->ignore_hotplug; 642 + alive = alive || (ACPI_SUCCESS(status) && device_status_valid(sta)); 644 643 } 645 644 if (!alive) 646 645 alive = pci_device_is_present(dev);
-17
drivers/pci/hotplug/pciehp.h
··· 167 167 return hotplug_slot_name(slot->hotplug_slot); 168 168 } 169 169 170 - #ifdef CONFIG_ACPI 171 - #include <linux/pci-acpi.h> 172 - 173 - void __init pciehp_acpi_slot_detection_init(void); 174 - int pciehp_acpi_slot_detection_check(struct pci_dev *dev); 175 - 176 - static inline void pciehp_firmware_init(void) 177 - { 178 - pciehp_acpi_slot_detection_init(); 179 - } 180 - #else 181 - #define pciehp_firmware_init() do {} while (0) 182 - static inline int pciehp_acpi_slot_detection_check(struct pci_dev *dev) 183 - { 184 - return 0; 185 - } 186 - #endif /* CONFIG_ACPI */ 187 170 #endif /* _PCIEHP_H */
-137
drivers/pci/hotplug/pciehp_acpi.c
··· 1 - /* 2 - * ACPI related functions for PCI Express Hot Plug driver. 3 - * 4 - * Copyright (C) 2008 Kenji Kaneshige 5 - * Copyright (C) 2008 Fujitsu Limited. 6 - * 7 - * All rights reserved. 8 - * 9 - * This program is free software; you can redistribute it and/or modify 10 - * it under the terms of the GNU General Public License as published by 11 - * the Free Software Foundation; either version 2 of the License, or (at 12 - * your option) any later version. 13 - * 14 - * This program is distributed in the hope that it will be useful, but 15 - * WITHOUT ANY WARRANTY; without even the implied warranty of 16 - * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or 17 - * NON INFRINGEMENT. See the GNU General Public License for more 18 - * details. 19 - * 20 - * You should have received a copy of the GNU General Public License 21 - * along with this program; if not, write to the Free Software 22 - * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 23 - * 24 - */ 25 - 26 - #include <linux/acpi.h> 27 - #include <linux/pci.h> 28 - #include <linux/pci_hotplug.h> 29 - #include <linux/slab.h> 30 - #include <linux/module.h> 31 - #include "pciehp.h" 32 - 33 - #define PCIEHP_DETECT_PCIE (0) 34 - #define PCIEHP_DETECT_ACPI (1) 35 - #define PCIEHP_DETECT_AUTO (2) 36 - #define PCIEHP_DETECT_DEFAULT PCIEHP_DETECT_AUTO 37 - 38 - struct dummy_slot { 39 - u32 number; 40 - struct list_head list; 41 - }; 42 - 43 - static int slot_detection_mode; 44 - static char *pciehp_detect_mode; 45 - module_param(pciehp_detect_mode, charp, 0444); 46 - MODULE_PARM_DESC(pciehp_detect_mode, 47 - "Slot detection mode: pcie, acpi, auto\n" 48 - " pcie - Use PCIe based slot detection\n" 49 - " acpi - Use ACPI for slot detection\n" 50 - " auto(default) - Auto select mode. Use acpi option if duplicate\n" 51 - " slot ids are found. Otherwise, use pcie option\n"); 52 - 53 - int pciehp_acpi_slot_detection_check(struct pci_dev *dev) 54 - { 55 - if (slot_detection_mode != PCIEHP_DETECT_ACPI) 56 - return 0; 57 - if (acpi_pci_detect_ejectable(ACPI_HANDLE(&dev->dev))) 58 - return 0; 59 - return -ENODEV; 60 - } 61 - 62 - static int __init parse_detect_mode(void) 63 - { 64 - if (!pciehp_detect_mode) 65 - return PCIEHP_DETECT_DEFAULT; 66 - if (!strcmp(pciehp_detect_mode, "pcie")) 67 - return PCIEHP_DETECT_PCIE; 68 - if (!strcmp(pciehp_detect_mode, "acpi")) 69 - return PCIEHP_DETECT_ACPI; 70 - if (!strcmp(pciehp_detect_mode, "auto")) 71 - return PCIEHP_DETECT_AUTO; 72 - warn("bad specifier '%s' for pciehp_detect_mode. Use default\n", 73 - pciehp_detect_mode); 74 - return PCIEHP_DETECT_DEFAULT; 75 - } 76 - 77 - static int __initdata dup_slot_id; 78 - static int __initdata acpi_slot_detected; 79 - static struct list_head __initdata dummy_slots = LIST_HEAD_INIT(dummy_slots); 80 - 81 - /* Dummy driver for duplicate name detection */ 82 - static int __init dummy_probe(struct pcie_device *dev) 83 - { 84 - u32 slot_cap; 85 - acpi_handle handle; 86 - struct dummy_slot *slot, *tmp; 87 - struct pci_dev *pdev = dev->port; 88 - 89 - pcie_capability_read_dword(pdev, PCI_EXP_SLTCAP, &slot_cap); 90 - slot = kzalloc(sizeof(*slot), GFP_KERNEL); 91 - if (!slot) 92 - return -ENOMEM; 93 - slot->number = (slot_cap & PCI_EXP_SLTCAP_PSN) >> 19; 94 - list_for_each_entry(tmp, &dummy_slots, list) { 95 - if (tmp->number == slot->number) 96 - dup_slot_id++; 97 - } 98 - list_add_tail(&slot->list, &dummy_slots); 99 - handle = ACPI_HANDLE(&pdev->dev); 100 - if (!acpi_slot_detected && acpi_pci_detect_ejectable(handle)) 101 - acpi_slot_detected = 1; 102 - return -ENODEV; /* dummy driver always returns error */ 103 - } 104 - 105 - static struct pcie_port_service_driver __initdata dummy_driver = { 106 - .name = "pciehp_dummy", 107 - .port_type = PCIE_ANY_PORT, 108 - .service = PCIE_PORT_SERVICE_HP, 109 - .probe = dummy_probe, 110 - };
111 - 112 - static int __init select_detection_mode(void) 113 - { 114 - struct dummy_slot *slot, *tmp; 115 - 116 - if (pcie_port_service_register(&dummy_driver)) 117 - return PCIEHP_DETECT_ACPI; 118 - pcie_port_service_unregister(&dummy_driver); 119 - list_for_each_entry_safe(slot, tmp, &dummy_slots, list) { 120 - list_del(&slot->list); 121 - kfree(slot); 122 - } 123 - if (acpi_slot_detected && dup_slot_id) 124 - return PCIEHP_DETECT_ACPI; 125 - return PCIEHP_DETECT_PCIE; 126 - } 127 - 128 - void __init pciehp_acpi_slot_detection_init(void) 129 - { 130 - slot_detection_mode = parse_detect_mode(); 131 - if (slot_detection_mode != PCIEHP_DETECT_AUTO) 132 - goto out; 133 - slot_detection_mode = select_detection_mode(); 134 - out: 135 - if (slot_detection_mode == PCIEHP_DETECT_ACPI) 136 - info("Using ACPI for slot detection.\n"); 137 - }
+5 -10
drivers/pci/hotplug/pciehp_core.c
··· 248 248 struct slot *slot; 249 249 u8 occupied, poweron; 250 250 251 - if (pciehp_force) 252 - dev_info(&dev->device, 253 - "Bypassing BIOS check for pciehp use on %s\n", 254 - pci_name(dev->port)); 255 - else if (pciehp_acpi_slot_detection_check(dev->port)) 256 - goto err_out_none; 251 + /* If this is not a "hotplug" service, we have no business here. */ 252 + if (dev->service != PCIE_PORT_SERVICE_HP) 253 + return -ENODEV; 257 254 258 255 if (!dev->port->subordinate) { 259 256 /* Can happen if we run out of bus numbers during probe */ 260 257 dev_err(&dev->device, 261 258 "Hotplug bridge without secondary bus, ignoring\n"); 262 - goto err_out_none; 259 + return -ENODEV; 263 260 } 264 261 265 262 ctrl = pcie_init(dev); 266 263 if (!ctrl) { 267 264 dev_err(&dev->device, "Controller initialization failed\n"); 268 - goto err_out_none; 265 + return -ENODEV; 269 266 } 270 267 set_service_data(dev, ctrl); 271 268 ··· 302 305 cleanup_slot(ctrl); 303 306 err_out_release_ctlr: 304 307 pciehp_release_ctrl(ctrl); 305 - err_out_none: 306 308 return -ENODEV; 307 309 } 308 310 ··· 362 366 { 363 367 int retval = 0; 364 368 365 - pciehp_firmware_init(); 366 369 retval = pcie_port_service_register(&hpdriver_portdrv); 367 370 dbg("pcie_port_service_register = %d\n", retval); 368 371 info(DRIVER_DESC " version: " DRIVER_VERSION "\n");
+38 -14
drivers/pci/hotplug/pciehp_hpc.c
··· 176 176 jiffies_to_msecs(jiffies - ctrl->cmd_started)); 177 177 } 178 178 179 - /** 180 - * pcie_write_cmd - Issue controller command 181 - * @ctrl: controller to which the command is issued 182 - * @cmd: command value written to slot control register 183 - * @mask: bitmask of slot control register to be modified 184 - */ 185 - static void pcie_write_cmd(struct controller *ctrl, u16 cmd, u16 mask) 179 + static void pcie_do_write_cmd(struct controller *ctrl, u16 cmd, 180 + u16 mask, bool wait) 186 181 { 187 182 struct pci_dev *pdev = ctrl_dev(ctrl); 188 183 u16 slot_ctrl; 189 184 190 185 mutex_lock(&ctrl->ctrl_lock); 191 186 192 - /* Wait for any previous command that might still be in progress */ 187 + /* 188 + * Always wait for any previous command that might still be in progress 189 + */ 193 190 pcie_wait_cmd(ctrl); 194 191 195 192 pcie_capability_read_word(pdev, PCI_EXP_SLTCTL, &slot_ctrl); ··· 198 201 ctrl->cmd_started = jiffies; 199 202 ctrl->slot_ctrl = slot_ctrl; 200 203 204 + /* 205 + * Optionally wait for the hardware to be ready for a new command, 206 + * indicating completion of the above issued command. 
207 + */ 208 + if (wait) 209 + pcie_wait_cmd(ctrl); 210 + 201 211 mutex_unlock(&ctrl->ctrl_lock); 212 + } 213 + 214 + /** 215 + * pcie_write_cmd - Issue controller command 216 + * @ctrl: controller to which the command is issued 217 + * @cmd: command value written to slot control register 218 + * @mask: bitmask of slot control register to be modified 219 + */ 220 + static void pcie_write_cmd(struct controller *ctrl, u16 cmd, u16 mask) 221 + { 222 + pcie_do_write_cmd(ctrl, cmd, mask, true); 223 + } 224 + 225 + /* Same as above without waiting for the hardware to latch */ 226 + static void pcie_write_cmd_nowait(struct controller *ctrl, u16 cmd, u16 mask) 227 + { 228 + pcie_do_write_cmd(ctrl, cmd, mask, false); 202 229 } 203 230 204 231 bool pciehp_check_link_active(struct controller *ctrl) ··· 443 422 default: 444 423 return; 445 424 } 446 - pcie_write_cmd(ctrl, slot_cmd, PCI_EXP_SLTCTL_AIC); 425 + pcie_write_cmd_nowait(ctrl, slot_cmd, PCI_EXP_SLTCTL_AIC); 447 426 ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__, 448 427 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, slot_cmd); 449 428 } ··· 455 434 if (!PWR_LED(ctrl)) 456 435 return; 457 436 458 - pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_PWR_IND_ON, PCI_EXP_SLTCTL_PIC); 437 + pcie_write_cmd_nowait(ctrl, PCI_EXP_SLTCTL_PWR_IND_ON, 438 + PCI_EXP_SLTCTL_PIC); 459 439 ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__, 460 440 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, 461 441 PCI_EXP_SLTCTL_PWR_IND_ON); ··· 469 447 if (!PWR_LED(ctrl)) 470 448 return; 471 449 472 - pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF, PCI_EXP_SLTCTL_PIC); 450 + pcie_write_cmd_nowait(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF, 451 + PCI_EXP_SLTCTL_PIC); 473 452 ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__, 474 453 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, 475 454 PCI_EXP_SLTCTL_PWR_IND_OFF); ··· 483 460 if (!PWR_LED(ctrl)) 484 461 return; 485 462 486 - pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_PWR_IND_BLINK, 
PCI_EXP_SLTCTL_PIC); 463 + pcie_write_cmd_nowait(ctrl, PCI_EXP_SLTCTL_PWR_IND_BLINK, 464 + PCI_EXP_SLTCTL_PIC); 487 465 ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__, 488 466 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, 489 467 PCI_EXP_SLTCTL_PWR_IND_BLINK); ··· 637 613 PCI_EXP_SLTCTL_HPIE | PCI_EXP_SLTCTL_CCIE | 638 614 PCI_EXP_SLTCTL_DLLSCE); 639 615 640 - pcie_write_cmd(ctrl, cmd, mask); 616 + pcie_write_cmd_nowait(ctrl, cmd, mask); 641 617 ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__, 642 618 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, cmd); 643 619 } ··· 688 664 pci_reset_bridge_secondary_bus(ctrl->pcie->port); 689 665 690 666 pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, stat_mask); 691 - pcie_write_cmd(ctrl, ctrl_mask, ctrl_mask); 667 + pcie_write_cmd_nowait(ctrl, ctrl_mask, ctrl_mask); 692 668 ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__, 693 669 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, ctrl_mask); 694 670 if (pciehp_poll_mode)
+10 -43
drivers/pci/msi.c
··· 185 185 return default_restore_msi_irqs(dev); 186 186 } 187 187 188 - static void msi_set_enable(struct pci_dev *dev, int enable) 189 - { 190 - u16 control; 191 - 192 - pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &control); 193 - control &= ~PCI_MSI_FLAGS_ENABLE; 194 - if (enable) 195 - control |= PCI_MSI_FLAGS_ENABLE; 196 - pci_write_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, control); 197 - } 198 - 199 - static void msix_clear_and_set_ctrl(struct pci_dev *dev, u16 clear, u16 set) 200 - { 201 - u16 ctrl; 202 - 203 - pci_read_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, &ctrl); 204 - ctrl &= ~clear; 205 - ctrl |= set; 206 - pci_write_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, ctrl); 207 - } 208 - 209 188 static inline __attribute_const__ u32 msi_mask(unsigned x) 210 189 { 211 190 /* Don't shift by >= width of type */ ··· 431 452 entry = irq_get_msi_desc(dev->irq); 432 453 433 454 pci_intx_for_msi(dev, 0); 434 - msi_set_enable(dev, 0); 455 + pci_msi_set_enable(dev, 0); 435 456 arch_restore_msi_irqs(dev); 436 457 437 458 pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &control); ··· 452 473 453 474 /* route the table */ 454 475 pci_intx_for_msi(dev, 0); 455 - msix_clear_and_set_ctrl(dev, 0, 476 + pci_msix_clear_and_set_ctrl(dev, 0, 456 477 PCI_MSIX_FLAGS_ENABLE | PCI_MSIX_FLAGS_MASKALL); 457 478 458 479 arch_restore_msi_irqs(dev); 459 480 list_for_each_entry(entry, &dev->msi_list, list) 460 481 msix_mask_irq(entry, entry->masked); 461 482 462 - msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL, 0); 483 + pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL, 0); 463 484 } 464 485 465 486 void pci_restore_msi_state(struct pci_dev *dev) ··· 626 647 int ret; 627 648 unsigned mask; 628 649 629 - msi_set_enable(dev, 0); /* Disable MSI during set up */ 650 + pci_msi_set_enable(dev, 0); /* Disable MSI during set up */ 630 651 631 652 entry = msi_setup_entry(dev, nvec); 632 653 if (!entry) ··· 662 683 663 684 /* Set MSI enabled bits 
*/ 664 685 pci_intx_for_msi(dev, 0); 665 - msi_set_enable(dev, 1); 686 + pci_msi_set_enable(dev, 1); 666 687 dev->msi_enabled = 1; 667 688 668 689 dev->irq = entry->irq; ··· 754 775 void __iomem *base; 755 776 756 777 /* Ensure MSI-X is disabled while it is set up */ 757 - msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_ENABLE, 0); 778 + pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_ENABLE, 0); 758 779 759 780 pci_read_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, &control); 760 781 /* Request & Map MSI-X table region */ ··· 780 801 * MSI-X registers. We need to mask all the vectors to prevent 781 802 * interrupts coming in before they're fully set up. 782 803 */ 783 - msix_clear_and_set_ctrl(dev, 0, 804 + pci_msix_clear_and_set_ctrl(dev, 0, 784 805 PCI_MSIX_FLAGS_MASKALL | PCI_MSIX_FLAGS_ENABLE); 785 806 786 807 msix_program_entries(dev, entries); ··· 793 814 pci_intx_for_msi(dev, 0); 794 815 dev->msix_enabled = 1; 795 816 796 - msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL, 0); 817 + pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL, 0); 797 818 798 819 return 0; 799 820 ··· 898 919 BUG_ON(list_empty(&dev->msi_list)); 899 920 desc = list_first_entry(&dev->msi_list, struct msi_desc, list); 900 921 901 - msi_set_enable(dev, 0); 922 + pci_msi_set_enable(dev, 0); 902 923 pci_intx_for_msi(dev, 1); 903 924 dev->msi_enabled = 0; 904 925 ··· 1006 1027 __pci_msix_desc_mask_irq(entry, 1); 1007 1028 } 1008 1029 1009 - msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_ENABLE, 0); 1030 + pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_ENABLE, 0); 1010 1031 pci_intx_for_msi(dev, 1); 1011 1032 dev->msix_enabled = 0; 1012 1033 } ··· 1041 1062 void pci_msi_init_pci_dev(struct pci_dev *dev) 1042 1063 { 1043 1064 INIT_LIST_HEAD(&dev->msi_list); 1044 - 1045 - /* Disable the msi hardware to avoid screaming interrupts 1046 - * during boot. This is the power on reset default so 1047 - * usually this should be a noop. 
1048 - */ 1049 - dev->msi_cap = pci_find_capability(dev, PCI_CAP_ID_MSI); 1050 - if (dev->msi_cap) 1051 - msi_set_enable(dev, 0); 1052 - 1053 - dev->msix_cap = pci_find_capability(dev, PCI_CAP_ID_MSIX); 1054 - if (dev->msix_cap) 1055 - msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_ENABLE, 0); 1056 1065 } 1057 1066 1058 1067 /**
+11 -33
drivers/pci/pci.c
··· 3101 3101 } 3102 3102 EXPORT_SYMBOL_GPL(pci_check_and_unmask_intx); 3103 3103 3104 - /** 3105 - * pci_msi_off - disables any MSI or MSI-X capabilities 3106 - * @dev: the PCI device to operate on 3107 - * 3108 - * If you want to use MSI, see pci_enable_msi() and friends. 3109 - * This is a lower-level primitive that allows us to disable 3110 - * MSI operation at the device level. 3111 - */ 3112 - void pci_msi_off(struct pci_dev *dev) 3113 - { 3114 - int pos; 3115 - u16 control; 3116 - 3117 - /* 3118 - * This looks like it could go in msi.c, but we need it even when 3119 - * CONFIG_PCI_MSI=n. For the same reason, we can't use 3120 - * dev->msi_cap or dev->msix_cap here. 3121 - */ 3122 - pos = pci_find_capability(dev, PCI_CAP_ID_MSI); 3123 - if (pos) { 3124 - pci_read_config_word(dev, pos + PCI_MSI_FLAGS, &control); 3125 - control &= ~PCI_MSI_FLAGS_ENABLE; 3126 - pci_write_config_word(dev, pos + PCI_MSI_FLAGS, control); 3127 - } 3128 - pos = pci_find_capability(dev, PCI_CAP_ID_MSIX); 3129 - if (pos) { 3130 - pci_read_config_word(dev, pos + PCI_MSIX_FLAGS, &control); 3131 - control &= ~PCI_MSIX_FLAGS_ENABLE; 3132 - pci_write_config_word(dev, pos + PCI_MSIX_FLAGS, control); 3133 - } 3134 - } 3135 - EXPORT_SYMBOL_GPL(pci_msi_off); 3136 - 3137 3104 int pci_set_dma_max_seg_size(struct pci_dev *dev, unsigned int size) 3138 3105 { 3139 3106 return dma_set_max_seg_size(&dev->dev, size); ··· 4290 4323 return pci_bus_read_dev_vendor_id(pdev->bus, pdev->devfn, &v, 0); 4291 4324 } 4292 4325 EXPORT_SYMBOL_GPL(pci_device_is_present); 4326 + 4327 + void pci_ignore_hotplug(struct pci_dev *dev) 4328 + { 4329 + struct pci_dev *bridge = dev->bus->self; 4330 + 4331 + dev->ignore_hotplug = 1; 4332 + /* Propagate the "ignore hotplug" setting to the parent bridge. 
*/ 4333 + if (bridge) 4334 + bridge->ignore_hotplug = 1; 4335 + } 4336 + EXPORT_SYMBOL_GPL(pci_ignore_hotplug); 4293 4337 4294 4338 #define RESOURCE_ALIGNMENT_PARAM_SIZE COMMAND_LINE_SIZE 4295 4339 static char resource_alignment_param[RESOURCE_ALIGNMENT_PARAM_SIZE] = {0};
+21 -11
drivers/pci/pci.h
··· 146 146 static inline void pci_msi_init_pci_dev(struct pci_dev *dev) { } 147 147 #endif 148 148 149 + static inline void pci_msi_set_enable(struct pci_dev *dev, int enable) 150 + { 151 + u16 control; 152 + 153 + pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &control); 154 + control &= ~PCI_MSI_FLAGS_ENABLE; 155 + if (enable) 156 + control |= PCI_MSI_FLAGS_ENABLE; 157 + pci_write_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, control); 158 + } 159 + 160 + static inline void pci_msix_clear_and_set_ctrl(struct pci_dev *dev, u16 clear, u16 set) 161 + { 162 + u16 ctrl; 163 + 164 + pci_read_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, &ctrl); 165 + ctrl &= ~clear; 166 + ctrl |= set; 167 + pci_write_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, ctrl); 168 + } 169 + 149 170 void pci_realloc_get_opt(char *); 150 171 151 172 static inline int pci_no_d1d2(struct pci_dev *dev) ··· 236 215 struct list_head *realloc_head, 237 216 struct list_head *fail_head); 238 217 bool pci_bus_clip_resource(struct pci_dev *dev, int idx); 239 - 240 - /** 241 - * pci_ari_enabled - query ARI forwarding status 242 - * @bus: the PCI bus 243 - * 244 - * Returns 1 if ARI forwarding is enabled, or 0 if not enabled; 245 - */ 246 - static inline int pci_ari_enabled(struct pci_bus *bus) 247 - { 248 - return bus->self && bus->self->ari_enabled; 249 - } 250 218 251 219 void pci_reassigndev_resource_alignment(struct pci_dev *dev); 252 220 void pci_disable_bridge_window(struct pci_dev *dev);
+1 -2
drivers/pci/pcie/aer/aerdrv_core.c
··· 425 425 426 426 if (driver && driver->reset_link) { 427 427 status = driver->reset_link(udev); 428 - } else if (pci_pcie_type(udev) == PCI_EXP_TYPE_DOWNSTREAM || 429 - pci_pcie_type(udev) == PCI_EXP_TYPE_ROOT_PORT) { 428 + } else if (udev->has_secondary_link) { 430 429 status = default_reset_link(udev); 431 430 } else { 432 431 dev_printk(KERN_DEBUG, &dev->dev,
+23 -34
drivers/pci/pcie/aspm.c
··· 127 127 { 128 128 struct pci_dev *child; 129 129 struct pci_bus *linkbus = link->pdev->subordinate; 130 + u32 val = enable ? PCI_EXP_LNKCTL_CLKREQ_EN : 0; 130 131 131 - list_for_each_entry(child, &linkbus->devices, bus_list) { 132 - if (enable) 133 - pcie_capability_set_word(child, PCI_EXP_LNKCTL, 134 - PCI_EXP_LNKCTL_CLKREQ_EN); 135 - else 136 - pcie_capability_clear_word(child, PCI_EXP_LNKCTL, 137 - PCI_EXP_LNKCTL_CLKREQ_EN); 138 - } 132 + list_for_each_entry(child, &linkbus->devices, bus_list) 133 + pcie_capability_clear_and_set_word(child, PCI_EXP_LNKCTL, 134 + PCI_EXP_LNKCTL_CLKREQ_EN, 135 + val); 139 136 link->clkpm_enabled = !!enable; 140 137 } 141 138 ··· 522 525 INIT_LIST_HEAD(&link->children); 523 526 INIT_LIST_HEAD(&link->link); 524 527 link->pdev = pdev; 525 - if (pci_pcie_type(pdev) == PCI_EXP_TYPE_DOWNSTREAM) { 528 + if (pci_pcie_type(pdev) != PCI_EXP_TYPE_ROOT_PORT) { 526 529 struct pcie_link_state *parent; 527 530 parent = pdev->bus->parent->self->link_state; 528 531 if (!parent) { ··· 556 559 if (!aspm_support_enabled) 557 560 return; 558 561 559 - if (!pci_is_pcie(pdev) || pdev->link_state) 562 + if (pdev->link_state) 560 563 return; 561 - if (pci_pcie_type(pdev) != PCI_EXP_TYPE_ROOT_PORT && 562 - pci_pcie_type(pdev) != PCI_EXP_TYPE_DOWNSTREAM) 564 + 565 + /* 566 + * We allocate pcie_link_state for the component on the upstream 567 + * end of a Link, so there's nothing to do unless this device has a 568 + * Link on its secondary side. 
569 + */ 570 + if (!pdev->has_secondary_link) 563 571 return; 564 572 565 573 /* VIA has a strange chipset, root port is under a bridge */ ··· 677 675 { 678 676 struct pcie_link_state *link = pdev->link_state; 679 677 680 - if (aspm_disabled || !pci_is_pcie(pdev) || !link) 681 - return; 682 - if ((pci_pcie_type(pdev) != PCI_EXP_TYPE_ROOT_PORT) && 683 - (pci_pcie_type(pdev) != PCI_EXP_TYPE_DOWNSTREAM)) 678 + if (aspm_disabled || !link) 684 679 return; 685 680 /* 686 681 * Devices changed PM state, we should recheck if latency ··· 695 696 { 696 697 struct pcie_link_state *link = pdev->link_state; 697 698 698 - if (aspm_disabled || !pci_is_pcie(pdev) || !link) 699 + if (aspm_disabled || !link) 699 700 return; 700 701 701 702 if (aspm_policy != POLICY_POWERSAVE) 702 - return; 703 - 704 - if ((pci_pcie_type(pdev) != PCI_EXP_TYPE_ROOT_PORT) && 705 - (pci_pcie_type(pdev) != PCI_EXP_TYPE_DOWNSTREAM)) 706 703 return; 707 704 708 705 down_read(&pci_bus_sem); ··· 709 714 up_read(&pci_bus_sem); 710 715 } 711 716 712 - static void __pci_disable_link_state(struct pci_dev *pdev, int state, bool sem, 713 - bool force) 717 + static void __pci_disable_link_state(struct pci_dev *pdev, int state, bool sem) 714 718 { 715 719 struct pci_dev *parent = pdev->bus->self; 716 720 struct pcie_link_state *link; ··· 717 723 if (!pci_is_pcie(pdev)) 718 724 return; 719 725 720 - if (pci_pcie_type(pdev) == PCI_EXP_TYPE_ROOT_PORT || 721 - pci_pcie_type(pdev) == PCI_EXP_TYPE_DOWNSTREAM) 726 + if (pdev->has_secondary_link) 722 727 parent = pdev; 723 728 if (!parent || !parent->link_state) 724 729 return; ··· 730 737 * a similar mechanism using "PciASPMOptOut", which is also 731 738 * ignored in this situation. 
732 739 */ 733 - if (aspm_disabled && !force) { 740 + if (aspm_disabled) { 734 741 dev_warn(&pdev->dev, "can't disable ASPM; OS doesn't have ASPM control\n"); 735 742 return; 736 743 } ··· 756 763 757 764 void pci_disable_link_state_locked(struct pci_dev *pdev, int state) 758 765 { 759 - __pci_disable_link_state(pdev, state, false, false); 766 + __pci_disable_link_state(pdev, state, false); 760 767 } 761 768 EXPORT_SYMBOL(pci_disable_link_state_locked); 762 769 ··· 771 778 */ 772 779 void pci_disable_link_state(struct pci_dev *pdev, int state) 773 780 { 774 - __pci_disable_link_state(pdev, state, true, false); 781 + __pci_disable_link_state(pdev, state, true); 775 782 } 776 783 EXPORT_SYMBOL(pci_disable_link_state); 777 784 ··· 900 907 { 901 908 struct pcie_link_state *link_state = pdev->link_state; 902 909 903 - if (!pci_is_pcie(pdev) || 904 - (pci_pcie_type(pdev) != PCI_EXP_TYPE_ROOT_PORT && 905 - pci_pcie_type(pdev) != PCI_EXP_TYPE_DOWNSTREAM) || !link_state) 910 + if (!link_state) 906 911 return; 907 912 908 913 if (link_state->aspm_support) ··· 915 924 { 916 925 struct pcie_link_state *link_state = pdev->link_state; 917 926 918 - if (!pci_is_pcie(pdev) || 919 - (pci_pcie_type(pdev) != PCI_EXP_TYPE_ROOT_PORT && 920 - pci_pcie_type(pdev) != PCI_EXP_TYPE_DOWNSTREAM) || !link_state) 927 + if (!link_state) 921 928 return; 922 929 923 930 if (link_state->aspm_support)
+43 -26
drivers/pci/probe.c
··· 254 254 } 255 255 256 256 if (res->flags & IORESOURCE_MEM_64) { 257 - if ((sizeof(dma_addr_t) < 8 || sizeof(resource_size_t) < 8) && 258 - sz64 > 0x100000000ULL) { 257 + if ((sizeof(pci_bus_addr_t) < 8 || sizeof(resource_size_t) < 8) 258 + && sz64 > 0x100000000ULL) { 259 259 res->flags |= IORESOURCE_UNSET | IORESOURCE_DISABLED; 260 260 res->start = 0; 261 261 res->end = 0; ··· 264 264 goto out; 265 265 } 266 266 267 - if ((sizeof(dma_addr_t) < 8) && l) { 267 + if ((sizeof(pci_bus_addr_t) < 8) && l) { 268 268 /* Above 32-bit boundary; try to reallocate */ 269 269 res->flags |= IORESOURCE_UNSET; 270 270 res->start = 0; ··· 399 399 struct pci_dev *dev = child->self; 400 400 u16 mem_base_lo, mem_limit_lo; 401 401 u64 base64, limit64; 402 - dma_addr_t base, limit; 402 + pci_bus_addr_t base, limit; 403 403 struct pci_bus_region region; 404 404 struct resource *res; 405 405 ··· 426 426 } 427 427 } 428 428 429 - base = (dma_addr_t) base64; 430 - limit = (dma_addr_t) limit64; 429 + base = (pci_bus_addr_t) base64; 430 + limit = (pci_bus_addr_t) limit64; 431 431 432 432 if (base != base64) { 433 433 dev_err(&dev->dev, "can't handle bridge window above 4GB (bus address %#010llx)\n", ··· 973 973 { 974 974 int pos; 975 975 u16 reg16; 976 + int type; 977 + struct pci_dev *parent; 976 978 977 979 pos = pci_find_capability(pdev, PCI_CAP_ID_EXP); 978 980 if (!pos) ··· 984 982 pdev->pcie_flags_reg = reg16; 985 983 pci_read_config_word(pdev, pos + PCI_EXP_DEVCAP, &reg16); 986 984 pdev->pcie_mpss = reg16 & PCI_EXP_DEVCAP_PAYLOAD; 985 + 986 + /* 987 + * A Root Port is always the upstream end of a Link. No PCIe 988 + * component has two Links. Two Links are connected by a Switch 989 + * that has a Port on each Link and internal logic to connect the 990 + * two Ports. 
991 + */ 992 + type = pci_pcie_type(pdev); 993 + if (type == PCI_EXP_TYPE_ROOT_PORT) 994 + pdev->has_secondary_link = 1; 995 + else if (type == PCI_EXP_TYPE_UPSTREAM || 996 + type == PCI_EXP_TYPE_DOWNSTREAM) { 997 + parent = pci_upstream_bridge(pdev); 998 + if (!parent->has_secondary_link) 999 + pdev->has_secondary_link = 1; 1000 + } 987 1001 } 988 1002 989 1003 void set_pcie_hotplug_bridge(struct pci_dev *pdev) ··· 1103 1085 1104 1086 #define LEGACY_IO_RESOURCE (IORESOURCE_IO | IORESOURCE_PCI_FIXED) 1105 1087 1088 + static void pci_msi_setup_pci_dev(struct pci_dev *dev) 1089 + { 1090 + /* 1091 + * Disable the MSI hardware to avoid screaming interrupts 1092 + * during boot. This is the power on reset default so 1093 + * usually this should be a noop. 1094 + */ 1095 + dev->msi_cap = pci_find_capability(dev, PCI_CAP_ID_MSI); 1096 + if (dev->msi_cap) 1097 + pci_msi_set_enable(dev, 0); 1098 + 1099 + dev->msix_cap = pci_find_capability(dev, PCI_CAP_ID_MSIX); 1100 + if (dev->msix_cap) 1101 + pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_ENABLE, 0); 1102 + } 1103 + 1106 1104 /** 1107 1105 * pci_setup_device - fill in class and map information of a device 1108 1106 * @dev: the device structure to fill ··· 1173 1139 1174 1140 /* "Unknown power state" */ 1175 1141 dev->current_state = PCI_UNKNOWN; 1142 + 1143 + pci_msi_setup_pci_dev(dev); 1176 1144 1177 1145 /* Early fixups, before probing the BARs */ 1178 1146 pci_fixup_device(pci_fixup_early, dev); ··· 1647 1611 return 0; 1648 1612 if (pci_pcie_type(parent) == PCI_EXP_TYPE_ROOT_PORT) 1649 1613 return 1; 1650 - if (pci_pcie_type(parent) == PCI_EXP_TYPE_DOWNSTREAM && 1614 + if (parent->has_secondary_link && 1651 1615 !pci_has_flag(PCI_SCAN_ALL_PCIE_DEVS)) 1652 1616 return 1; 1653 1617 return 0; ··· 2129 2093 return b; 2130 2094 } 2131 2095 EXPORT_SYMBOL(pci_scan_root_bus); 2132 - 2133 - /* Deprecated; use pci_scan_root_bus() instead */ 2134 - struct pci_bus *pci_scan_bus_parented(struct device *parent, 2135 - int bus, 
struct pci_ops *ops, void *sysdata) 2136 - { 2137 - LIST_HEAD(resources); 2138 - struct pci_bus *b; 2139 - 2140 - pci_add_resource(&resources, &ioport_resource); 2141 - pci_add_resource(&resources, &iomem_resource); 2142 - pci_add_resource(&resources, &busn_resource); 2143 - b = pci_create_root_bus(parent, bus, ops, sysdata, &resources); 2144 - if (b) 2145 - pci_scan_child_bus(b); 2146 - else 2147 - pci_free_resource_list(&resources); 2148 - return b; 2149 - } 2150 - EXPORT_SYMBOL(pci_scan_bus_parented); 2151 2096 2152 2097 struct pci_bus *pci_scan_bus(int bus, struct pci_ops *ops, 2153 2098 void *sysdata)
+4 -2
drivers/pci/quirks.c
··· 1600 1600 1601 1601 static void quirk_pcie_mch(struct pci_dev *pdev) 1602 1602 { 1603 - pci_msi_off(pdev); 1604 1603 pdev->no_msi = 1; 1605 1604 } 1606 1605 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_E7520_MCH, quirk_pcie_mch); ··· 1613 1614 */ 1614 1615 static void quirk_pcie_pxh(struct pci_dev *dev) 1615 1616 { 1616 - pci_msi_off(dev); 1617 1617 dev->no_msi = 1; 1618 1618 dev_warn(&dev->dev, "PXH quirk detected; SHPC device MSI disabled\n"); 1619 1619 } ··· 3570 3572 * SKUs this function is not present, making this a ghost requester. 3571 3573 * https://bugzilla.kernel.org/show_bug.cgi?id=42679 3572 3574 */ 3575 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9120, 3576 + quirk_dma_func1_alias); 3573 3577 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9123, 3574 3578 quirk_dma_func1_alias); 3575 3579 /* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c14 */ ··· 3740 3740 /* Wellsburg (X99) PCH */ 3741 3741 0x8d10, 0x8d11, 0x8d12, 0x8d13, 0x8d14, 0x8d15, 0x8d16, 0x8d17, 3742 3742 0x8d18, 0x8d19, 0x8d1a, 0x8d1b, 0x8d1c, 0x8d1d, 0x8d1e, 3743 + /* Lynx Point (9 series) PCH */ 3744 + 0x8c90, 0x8c92, 0x8c94, 0x8c96, 0x8c98, 0x8c9a, 0x8c9c, 0x8c9e, 3743 3745 }; 3744 3746 3745 3747 static bool pci_quirk_intel_pch_acs_match(struct pci_dev *dev)
+1 -2
drivers/pci/vc.c
··· 108 108 struct pci_dev *link = NULL; 109 109 110 110 /* Enable VCs from the downstream device */ 111 - if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT || 112 - pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM) 111 + if (!dev->has_secondary_link) 113 112 return; 114 113 115 114 ctrl_pos = pos + PCI_VC_RES_CTRL + (res * PCI_CAP_VC_PER_VC_SIZEOF);
+13 -3
drivers/pci/xen-pcifront.c
··· 446 446 unsigned int domain, unsigned int bus) 447 447 { 448 448 struct pci_bus *b; 449 + LIST_HEAD(resources); 449 450 struct pcifront_sd *sd = NULL; 450 451 struct pci_bus_entry *bus_entry = NULL; 451 452 int err = 0; 453 + static struct resource busn_res = { 454 + .start = 0, 455 + .end = 255, 456 + .flags = IORESOURCE_BUS, 457 + }; 452 458 453 459 #ifndef CONFIG_PCI_DOMAINS 454 460 if (domain != 0) { ··· 476 470 err = -ENOMEM; 477 471 goto err_out; 478 472 } 473 + pci_add_resource(&resources, &ioport_resource); 474 + pci_add_resource(&resources, &iomem_resource); 475 + pci_add_resource(&resources, &busn_res); 479 476 pcifront_init_sd(sd, domain, bus, pdev); 480 477 481 478 pci_lock_rescan_remove(); 482 479 483 - b = pci_scan_bus_parented(&pdev->xdev->dev, bus, 484 - &pcifront_bus_ops, sd); 480 + b = pci_scan_root_bus(&pdev->xdev->dev, bus, 481 + &pcifront_bus_ops, sd, &resources); 485 482 if (!b) { 486 483 dev_err(&pdev->xdev->dev, 487 484 "Error creating PCI Frontend Bus!\n"); 488 485 err = -ENOMEM; 489 486 pci_unlock_rescan_remove(); 487 + pci_free_resource_list(&resources); 490 488 goto err_out; 491 489 } 492 490 ··· 498 488 499 489 list_add(&bus_entry->list, &pdev->root_buses); 500 490 501 - /* pci_scan_bus_parented skips devices which do not have a have 491 + /* pci_scan_root_bus skips devices which do not have a 502 492 * devfn==0. The pcifront_scan_bus enumerates all devfn. */ 503 493 err = pcifront_scan_bus(pdev, domain, bus, b); 504 494
-3
drivers/virtio/virtio_pci_common.c
··· 501 501 INIT_LIST_HEAD(&vp_dev->virtqueues); 502 502 spin_lock_init(&vp_dev->lock); 503 503 504 - /* Disable MSI/MSIX to bring device to a known good state. */ 505 - pci_msi_off(pci_dev); 506 - 507 504 /* enable the device */ 508 505 rc = pci_enable_device(pci_dev); 509 506 if (rc)
-13
include/asm-generic/pci.h
··· 6 6 #ifndef _ASM_GENERIC_PCI_H 7 7 #define _ASM_GENERIC_PCI_H 8 8 9 - static inline struct resource * 10 - pcibios_select_root(struct pci_dev *pdev, struct resource *res) 11 - { 12 - struct resource *root = NULL; 13 - 14 - if (res->flags & IORESOURCE_IO) 15 - root = &ioport_resource; 16 - if (res->flags & IORESOURCE_MEM) 17 - root = &iomem_resource; 18 - 19 - return root; 20 - } 21 - 22 9 #ifndef HAVE_ARCH_PCI_GET_LEGACY_IDE_IRQ 23 10 static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel) 24 11 {
+22 -22
include/linux/pci.h
··· 355 355 unsigned int broken_intx_masking:1; 356 356 unsigned int io_window_1k:1; /* Intel P2P bridge 1K I/O windows */ 357 357 unsigned int irq_managed:1; 358 + unsigned int has_secondary_link:1; 358 359 pci_dev_flags_t dev_flags; 359 360 atomic_t enable_cnt; /* pci_enable_device has been called */ 360 361 ··· 578 577 int raw_pci_write(unsigned int domain, unsigned int bus, unsigned int devfn, 579 578 int reg, int len, u32 val); 580 579 580 + #ifdef CONFIG_PCI_BUS_ADDR_T_64BIT 581 + typedef u64 pci_bus_addr_t; 582 + #else 583 + typedef u32 pci_bus_addr_t; 584 + #endif 585 + 581 586 struct pci_bus_region { 582 - dma_addr_t start; 583 - dma_addr_t end; 587 + pci_bus_addr_t start; 588 + pci_bus_addr_t end; 584 589 }; 585 590 586 591 struct pci_dynids { ··· 780 773 void pcibios_scan_specific_bus(int busn); 781 774 struct pci_bus *pci_find_bus(int domain, int busnr); 782 775 void pci_bus_add_devices(const struct pci_bus *bus); 783 - struct pci_bus *pci_scan_bus_parented(struct device *parent, int bus, 784 - struct pci_ops *ops, void *sysdata); 785 776 struct pci_bus *pci_scan_bus(int bus, struct pci_ops *ops, void *sysdata); 786 777 struct pci_bus *pci_create_root_bus(struct device *parent, int bus, 787 778 struct pci_ops *ops, void *sysdata, ··· 979 974 bool pci_intx_mask_supported(struct pci_dev *dev); 980 975 bool pci_check_and_mask_intx(struct pci_dev *dev); 981 976 bool pci_check_and_unmask_intx(struct pci_dev *dev); 982 - void pci_msi_off(struct pci_dev *dev); 983 977 int pci_set_dma_max_seg_size(struct pci_dev *dev, unsigned int size); 984 978 int pci_set_dma_seg_boundary(struct pci_dev *dev, unsigned long mask); 985 979 int pci_wait_for_pending(struct pci_dev *dev, int pos, u16 mask); ··· 1010 1006 int __must_check pci_reassign_resource(struct pci_dev *dev, int i, resource_size_t add_size, resource_size_t align); 1011 1007 int pci_select_bars(struct pci_dev *dev, unsigned long flags); 1012 1008 bool pci_device_is_present(struct pci_dev *pdev); 1009 + void 
pci_ignore_hotplug(struct pci_dev *dev); 1013 1010 1014 1011 /* ROM control related routines */ 1015 1012 int pci_enable_rom(struct pci_dev *pdev); ··· 1047 1042 bool pci_dev_run_wake(struct pci_dev *dev); 1048 1043 bool pci_check_pme_status(struct pci_dev *dev); 1049 1044 void pci_pme_wakeup_bus(struct pci_bus *bus); 1050 - 1051 - static inline void pci_ignore_hotplug(struct pci_dev *dev) 1052 - { 1053 - dev->ignore_hotplug = 1; 1054 - } 1055 1045 1056 1046 static inline int pci_enable_wake(struct pci_dev *dev, pci_power_t state, 1057 1047 bool enable) ··· 1128 1128 1129 1129 int pci_remap_iospace(const struct resource *res, phys_addr_t phys_addr); 1130 1130 1131 - static inline dma_addr_t pci_bus_address(struct pci_dev *pdev, int bar) 1131 + static inline pci_bus_addr_t pci_bus_address(struct pci_dev *pdev, int bar) 1132 1132 { 1133 1133 struct pci_bus_region region; 1134 1134 ··· 1196 1196 #define pci_pool_destroy(pool) dma_pool_destroy(pool) 1197 1197 #define pci_pool_alloc(pool, flags, handle) dma_pool_alloc(pool, flags, handle) 1198 1198 #define pci_pool_free(pool, vaddr, addr) dma_pool_free(pool, vaddr, addr) 1199 - 1200 - enum pci_dma_burst_strategy { 1201 - PCI_DMA_BURST_INFINITY, /* make bursts as large as possible, 1202 - strategy_parameter is N/A */ 1203 - PCI_DMA_BURST_BOUNDARY, /* disconnect at every strategy_parameter 1204 - byte boundaries */ 1205 - PCI_DMA_BURST_MULTIPLE, /* disconnect at some multiple of 1206 - strategy_parameter byte boundaries */ 1207 - }; 1208 1199 1209 1200 struct msix_entry { 1210 1201 u32 vector; /* kernel uses to write allocated vector */ ··· 1420 1429 static inline int pci_request_regions(struct pci_dev *dev, const char *res_name) 1421 1430 { return -EIO; } 1422 1431 static inline void pci_release_regions(struct pci_dev *dev) { } 1423 - 1424 - #define pci_dma_burst_advice(pdev, strat, strategy_parameter) do { } while (0) 1425 1432 1426 1433 static inline void pci_block_cfg_access(struct pci_dev *dev) { } 1427 1434 static 
inline int pci_block_cfg_access_in_atomic(struct pci_dev *dev) ··· 1893 1904 static inline bool pci_is_dev_assigned(struct pci_dev *pdev) 1894 1905 { 1895 1906 return (pdev->dev_flags & PCI_DEV_FLAGS_ASSIGNED) == PCI_DEV_FLAGS_ASSIGNED; 1907 + } 1908 + 1909 + /** 1910 + * pci_ari_enabled - query ARI forwarding status 1911 + * @bus: the PCI bus 1912 + * 1913 + * Returns true if ARI forwarding is enabled. 1914 + */ 1915 + static inline bool pci_ari_enabled(struct pci_bus *bus) 1916 + { 1917 + return bus->self && bus->self->ari_enabled; 1896 1918 } 1897 1919 #endif /* LINUX_PCI_H */
+10 -2
include/linux/types.h
··· 139 139 */ 140 140 #define pgoff_t unsigned long 141 141 142 - /* A dma_addr_t can hold any valid DMA or bus address for the platform */ 142 + /* 143 + * A dma_addr_t can hold any valid DMA address, i.e., any address returned 144 + * by the DMA API. 145 + * 146 + * If the DMA API only uses 32-bit addresses, dma_addr_t need only be 32 147 + * bits wide. Bus addresses, e.g., PCI BARs, may be wider than 32 bits, 148 + * but drivers do memory-mapped I/O to ioremapped kernel virtual addresses, 149 + * so they don't care about the size of the actual bus addresses. 150 + */ 143 151 #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT 144 152 typedef u64 dma_addr_t; 145 153 #else 146 154 typedef u32 dma_addr_t; 147 - #endif /* dma_addr_t */ 155 + #endif 148 156 149 157 typedef unsigned __bitwise__ gfp_t; 150 158 typedef unsigned __bitwise__ fmode_t;