
Merge tag 'pci-v4.2-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:
"PCI changes for the v4.2 merge window:

Enumeration
- Move pci_ari_enabled() to global header (Alex Williamson)
- Account for ARI in _PRT lookups (Alex Williamson)
- Remove unused pci_scan_bus_parented() (Yijing Wang)

Resource management
- Use host bridge _CRS info on systems with >32 bit addressing (Bjorn Helgaas)
- Use host bridge _CRS info on Foxconn K8M890-8237A (Bjorn Helgaas)
- Fix pci_address_to_pio() conversion of CPU address to I/O port (Zhichang Yuan)
- Add pci_bus_addr_t (Yinghai Lu)

PCI device hotplug
- Wait for pciehp command completion where necessary (Alex Williamson)
- Drop pointless ACPI-based "slot detection" check (Rafael J. Wysocki)
- Check ignore_hotplug for all downstream devices (Rafael J. Wysocki)
- Propagate the "ignore hotplug" setting to parent (Rafael J. Wysocki)
- Inline pciehp "handle event" functions into the ISR (Bjorn Helgaas)
- Clean up pciehp debug logging (Bjorn Helgaas)

Power management
- Remove redundant PCIe port type checking (Yijing Wang)
- Add dev->has_secondary_link to track downstream PCIe links (Yijing Wang)
- Use dev->has_secondary_link to find downstream links for ASPM (Yijing Wang)
- Drop __pci_disable_link_state() useless "force" parameter (Bjorn Helgaas)
- Simplify Clock Power Management setting (Bjorn Helgaas)

Virtualization
- Add ACS quirks for Intel 9-series PCH root ports (Alex Williamson)
- Add function 1 DMA alias quirk for Marvell 9120 (Sakari Ailus)

MSI
- Disable MSI at enumeration even if kernel doesn't support MSI (Michael S. Tsirkin)
- Remove unused pci_msi_off() (Bjorn Helgaas)
- Rename msi_set_enable(), msix_clear_and_set_ctrl() (Michael S. Tsirkin)
- Export pci_msi_set_enable(), pci_msix_clear_and_set_ctrl() (Michael S. Tsirkin)
- Drop pci_msi_off() calls during probe (Michael S. Tsirkin)

APM X-Gene host bridge driver
- Add APM X-Gene v1 PCIe MSI/MSIX termination driver (Duc Dang)
- Add APM X-Gene PCIe MSI DTS nodes (Duc Dang)
- Disable Configuration Request Retry Status for v1 silicon (Duc Dang)
- Allow config access to Root Port even when link is down (Duc Dang)

Broadcom iProc host bridge driver
- Allow override of device tree IRQ mapping function (Hauke Mehrtens)
- Add BCMA PCIe driver (Hauke Mehrtens)
- Directly add PCI resources (Hauke Mehrtens)
- Free resource list after registration (Hauke Mehrtens)

Freescale i.MX6 host bridge driver
- Add speed change timeout message (Troy Kisky)
- Rename imx6_pcie_start_link() to imx6_pcie_establish_link() (Bjorn Helgaas)

Freescale Layerscape host bridge driver
- Use dw_pcie_link_up() consistently (Bjorn Helgaas)
- Factor out ls_pcie_establish_link() (Bjorn Helgaas)

Marvell MVEBU host bridge driver
- Remove mvebu_pcie_scan_bus() (Yijing Wang)

NVIDIA Tegra host bridge driver
- Remove tegra_pcie_scan_bus() (Yijing Wang)

Synopsys DesignWare host bridge driver
- Consolidate outbound iATU programming functions (Jisheng Zhang)
- Use iATU0 for cfg and IO, iATU1 for MEM (Jisheng Zhang)
- Add support for x8 links (Zhou Wang)
- Wait for link to come up with consistent style (Bjorn Helgaas)
- Use pci_scan_root_bus() for simplicity (Yijing Wang)

TI DRA7xx host bridge driver
- Use dw_pcie_link_up() consistently (Bjorn Helgaas)

Miscellaneous
- Include <linux/pci.h>, not <asm/pci.h> (Bjorn Helgaas)
- Remove unnecessary #includes of <asm/pci.h> (Bjorn Helgaas)
- Remove unused pcibios_select_root() (again) (Bjorn Helgaas)
- Remove unused pci_dma_burst_advice() (Bjorn Helgaas)
- xen/pcifront: Don't use deprecated function pci_scan_bus_parented() (Arnd Bergmann)"

* tag 'pci-v4.2-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (58 commits)
PCI: pciehp: Inline the "handle event" functions into the ISR
PCI: pciehp: Rename queue_interrupt_event() to pciehp_queue_interrupt_event()
PCI: pciehp: Make queue_interrupt_event() void
PCI: xgene: Allow config access to Root Port even when link is down
PCI: xgene: Disable Configuration Request Retry Status for v1 silicon
PCI: pciehp: Clean up debug logging
x86/PCI: Use host bridge _CRS info on systems with >32 bit addressing
PCI: imx6: Add #define PCIE_RC_LCSR
PCI: imx6: Use "u32", not "uint32_t"
PCI: Remove unused pci_scan_bus_parented()
xen/pcifront: Don't use deprecated function pci_scan_bus_parented()
PCI: imx6: Add speed change timeout message
PCI/ASPM: Simplify Clock Power Management setting
PCI: designware: Wait for link to come up with consistent style
PCI: layerscape: Factor out ls_pcie_establish_link()
PCI: layerscape: Use dw_pcie_link_up() consistently
PCI: dra7xx: Use dw_pcie_link_up() consistently
x86/PCI: Use host bridge _CRS info on Foxconn K8M890-8237A
PCI: pciehp: Wait for hotplug command completion where necessary
PCI: Remove unused pci_dma_burst_advice()
...

+1396 -1125
+17 -12
Documentation/DMA-API-HOWTO.txt
··· 25 25 address is not directly useful to a driver; it must use ioremap() to map 26 26 the space and produce a virtual address. 27 27 28 - I/O devices use a third kind of address: a "bus address" or "DMA address". 29 - If a device has registers at an MMIO address, or if it performs DMA to read 30 - or write system memory, the addresses used by the device are bus addresses. 31 - In some systems, bus addresses are identical to CPU physical addresses, but 32 - in general they are not. IOMMUs and host bridges can produce arbitrary 28 + I/O devices use a third kind of address: a "bus address". If a device has 29 + registers at an MMIO address, or if it performs DMA to read or write system 30 + memory, the addresses used by the device are bus addresses. In some 31 + systems, bus addresses are identical to CPU physical addresses, but in 32 + general they are not. IOMMUs and host bridges can produce arbitrary 33 33 mappings between physical and bus addresses. 34 + 35 + From a device's point of view, DMA uses the bus address space, but it may 36 + be restricted to a subset of that space. For example, even if a system 37 + supports 64-bit addresses for main memory and PCI BARs, it may use an IOMMU 38 + so devices only need to use 32-bit DMA addresses. 34 39 35 40 Here's a picture and some examples: 36 41 ··· 77 72 cannot because DMA doesn't go through the CPU virtual memory system. 78 73 79 74 In some simple systems, the device can do DMA directly to physical address 80 - Y. But in many others, there is IOMMU hardware that translates bus 75 + Y. But in many others, there is IOMMU hardware that translates DMA 81 76 addresses to physical addresses, e.g., it translates Z to Y. This is part 82 77 of the reason for the DMA API: the driver can give a virtual address X to 83 78 an interface like dma_map_single(), which sets up any required IOMMU 84 - mapping and returns the bus address Z. The driver then tells the device to 79 + mapping and returns the DMA address Z. The driver then tells the device to 85 80 do DMA to Z, and the IOMMU maps it to the buffer at address Y in system 86 81 RAM. 87 82 ··· 103 98 #include <linux/dma-mapping.h> 104 99 105 100 is in your driver, which provides the definition of dma_addr_t. This type 106 - can hold any valid DMA or bus address for the platform and should be used 101 + can hold any valid DMA address for the platform and should be used 107 102 everywhere you hold a DMA address returned from the DMA mapping functions. 108 103 109 104 What memory is DMA'able? ··· 321 316 Think of "consistent" as "synchronous" or "coherent". 322 317 323 318 The current default is to return consistent memory in the low 32 324 - bits of the bus space. However, for future compatibility you should 319 + bits of the DMA space. However, for future compatibility you should 325 320 set the consistent mask even if this default is fine for your 326 321 driver. 327 322 ··· 408 403 can use to access it from the CPU and dma_handle which you pass to the 409 404 card. 410 405 411 - The CPU virtual address and the DMA bus address are both 406 + The CPU virtual address and the DMA address are both 412 407 guaranteed to be aligned to the smallest PAGE_SIZE order which 413 408 is greater than or equal to the requested size. This invariant 414 409 exists (for example) to guarantee that if you allocate a chunk ··· 650 645 dma_map_sg call. 651 646 652 647 Every dma_map_{single,sg}() call should have its dma_unmap_{single,sg}() 653 - counterpart, because the bus address space is a shared resource and 654 - you could render the machine unusable by consuming all bus addresses. 648 + counterpart, because the DMA address space is a shared resource and 649 + you could render the machine unusable by consuming all DMA addresses. 655 650 656 651 If you need to use the same streaming DMA region multiple times and touch 657 652 the data in between the DMA transfers, the buffer needs to be synced
+15 -15
Documentation/DMA-API.txt
··· 18 18 To get the dma_ API, you must #include <linux/dma-mapping.h>. This 19 19 provides dma_addr_t and the interfaces described below. 20 20 21 - A dma_addr_t can hold any valid DMA or bus address for the platform. It 22 - can be given to a device to use as a DMA source or target. A CPU cannot 23 - reference a dma_addr_t directly because there may be translation between 24 - its physical address space and the bus address space. 21 + A dma_addr_t can hold any valid DMA address for the platform. It can be 22 + given to a device to use as a DMA source or target. A CPU cannot reference 23 + a dma_addr_t directly because there may be translation between its physical 24 + address space and the DMA address space. 25 25 26 26 Part Ia - Using large DMA-coherent buffers 27 27 ------------------------------------------ ··· 42 42 address space) or NULL if the allocation failed. 43 43 44 44 It also returns a <dma_handle> which may be cast to an unsigned integer the 45 - same width as the bus and given to the device as the bus address base of 45 + same width as the bus and given to the device as the DMA address base of 46 46 the region. 47 47 48 48 Note: consistent memory can be expensive on some platforms, and the ··· 193 193 enum dma_data_direction direction) 194 194 195 195 Maps a piece of processor virtual memory so it can be accessed by the 196 - device and returns the bus address of the memory. 196 + device and returns the DMA address of the memory. 197 197 198 198 The direction for both APIs may be converted freely by casting. 199 199 However the dma_ API uses a strongly typed enumerator for its ··· 212 212 this API should be obtained from sources which guarantee it to be 213 213 physically contiguous (like kmalloc). 214 214 215 - Further, the bus address of the memory must be within the 215 + Further, the DMA address of the memory must be within the 216 216 dma_mask of the device (the dma_mask is a bit mask of the 217 - addressable region for the device, i.e., if the bus address of 218 - the memory ANDed with the dma_mask is still equal to the bus 217 + addressable region for the device, i.e., if the DMA address of 218 + the memory ANDed with the dma_mask is still equal to the DMA 219 219 address, then the device can perform DMA to the memory). To 220 220 ensure that the memory allocated by kmalloc is within the dma_mask, 221 221 the driver may specify various platform-dependent flags to restrict 222 - the bus address range of the allocation (e.g., on x86, GFP_DMA 223 - guarantees to be within the first 16MB of available bus addresses, 222 + the DMA address range of the allocation (e.g., on x86, GFP_DMA 223 + guarantees to be within the first 16MB of available DMA addresses, 224 224 as required by ISA devices). 225 225 226 226 Note also that the above constraints on physical contiguity and 227 227 dma_mask may not apply if the platform has an IOMMU (a device which 228 - maps an I/O bus address to a physical memory address). However, to be 228 + maps an I/O DMA address to a physical memory address). However, to be 229 229 portable, device driver writers may *not* assume that such an IOMMU 230 230 exists. 231 231 ··· 296 296 dma_map_sg(struct device *dev, struct scatterlist *sg, 297 297 int nents, enum dma_data_direction direction) 298 298 299 - Returns: the number of bus address segments mapped (this may be shorter 299 + Returns: the number of DMA address segments mapped (this may be shorter 300 300 than <nents> passed in if some elements of the scatter/gather list are 301 301 physically or virtually adjacent and an IOMMU maps them with a single 302 302 entry). ··· 340 340 API. 341 341 342 342 Note: <nents> must be the number you passed in, *not* the number of 343 - bus address entries returned. 343 + DMA address entries returned. 344 344 345 345 void 346 346 dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size, ··· 507 507 phys_addr is the CPU physical address to which the memory is currently 508 508 assigned (this will be ioremapped so the CPU can access the region). 509 509 510 - device_addr is the bus address the device needs to be programmed 510 + device_addr is the DMA address the device needs to be programmed 511 511 with to actually address this memory (this will be handed out as the 512 512 dma_addr_t in dma_alloc_coherent()). 513 513
+68
Documentation/devicetree/bindings/pci/xgene-pci-msi.txt
··· 1 + * AppliedMicro X-Gene v1 PCIe MSI controller 2 + 3 + Required properties: 4 + 5 + - compatible: should be "apm,xgene1-msi" to identify 6 + X-Gene v1 PCIe MSI controller block. 7 + - msi-controller: indicates that this is X-Gene v1 PCIe MSI controller node 8 + - reg: physical base address (0x79000000) and length (0x900000) for controller 9 + registers. These registers include the MSI termination address and data 10 + registers as well as the MSI interrupt status registers. 11 + - reg-names: not required 12 + - interrupts: A list of 16 interrupt outputs of the controller, starting from 13 + interrupt number 0x10 to 0x1f. 14 + - interrupt-names: not required 15 + 16 + Each PCIe node needs to have property msi-parent that points to msi controller node 17 + 18 + Examples: 19 + 20 + SoC DTSI: 21 + 22 + + MSI node: 23 + msi@79000000 { 24 + compatible = "apm,xgene1-msi"; 25 + msi-controller; 26 + reg = <0x00 0x79000000 0x0 0x900000>; 27 + interrupts = <0x0 0x10 0x4> 28 + <0x0 0x11 0x4> 29 + <0x0 0x12 0x4> 30 + <0x0 0x13 0x4> 31 + <0x0 0x14 0x4> 32 + <0x0 0x15 0x4> 33 + <0x0 0x16 0x4> 34 + <0x0 0x17 0x4> 35 + <0x0 0x18 0x4> 36 + <0x0 0x19 0x4> 37 + <0x0 0x1a 0x4> 38 + <0x0 0x1b 0x4> 39 + <0x0 0x1c 0x4> 40 + <0x0 0x1d 0x4> 41 + <0x0 0x1e 0x4> 42 + <0x0 0x1f 0x4>; 43 + }; 44 + 45 + + PCIe controller node with msi-parent property pointing to MSI node: 46 + pcie0: pcie@1f2b0000 { 47 + status = "disabled"; 48 + device_type = "pci"; 49 + compatible = "apm,xgene-storm-pcie", "apm,xgene-pcie"; 50 + #interrupt-cells = <1>; 51 + #size-cells = <2>; 52 + #address-cells = <3>; 53 + reg = < 0x00 0x1f2b0000 0x0 0x00010000 /* Controller registers */ 54 + 0xe0 0xd0000000 0x0 0x00040000>; /* PCI config space */ 55 + reg-names = "csr", "cfg"; 56 + ranges = <0x01000000 0x00 0x00000000 0xe0 0x10000000 0x00 0x00010000 /* io */ 57 + 0x02000000 0x00 0x80000000 0xe1 0x80000000 0x00 0x80000000>; /* mem */ 58 + dma-ranges = <0x42000000 0x80 0x00000000 0x80 0x00000000 0x00 0x80000000 59 + 0x42000000 0x00 0x00000000 0x00 0x00000000 0x80 0x00000000>; 60 + interrupt-map-mask = <0x0 0x0 0x0 0x7>; 61 + interrupt-map = <0x0 0x0 0x0 0x1 &gic 0x0 0xc2 0x1 62 + 0x0 0x0 0x0 0x2 &gic 0x0 0xc3 0x1 63 + 0x0 0x0 0x0 0x3 &gic 0x0 0xc4 0x1 64 + 0x0 0x0 0x0 0x4 &gic 0x0 0xc5 0x1>; 65 + dma-coherent; 66 + clocks = <&pcie0clk 0>; 67 + msi-parent= <&msi>; 68 + };
+8
MAINTAINERS
··· 7611 7611 S: Maintained 7612 7612 F: drivers/pci/host/*spear* 7613 7613 7614 + PCI MSI DRIVER FOR APPLIEDMICRO XGENE 7615 + M: Duc Dang <dhdang@apm.com> 7616 + L: linux-pci@vger.kernel.org 7617 + L: linux-arm-kernel@lists.infradead.org 7618 + S: Maintained 7619 + F: Documentation/devicetree/bindings/pci/xgene-pci-msi.txt 7620 + F: drivers/pci/host/pci-xgene-msi.c 7621 + 7614 7622 PCMCIA SUBSYSTEM 7615 7623 P: Linux PCMCIA Team 7616 7624 L: linux-pcmcia@lists.infradead.org
-16
arch/alpha/include/asm/pci.h
··· 71 71 /* implement the pci_ DMA API in terms of the generic device dma_ one */ 72 72 #include <asm-generic/pci-dma-compat.h> 73 73 74 - static inline void pci_dma_burst_advice(struct pci_dev *pdev, 75 - enum pci_dma_burst_strategy *strat, 76 - unsigned long *strategy_parameter) 77 - { 78 - unsigned long cacheline_size; 79 - u8 byte; 80 - 81 - pci_read_config_byte(pdev, PCI_CACHE_LINE_SIZE, &byte); 82 - if (byte == 0) 83 - cacheline_size = 1024; 84 - else 85 - cacheline_size = (int) byte * 4; 86 - 87 - *strat = PCI_DMA_BURST_BOUNDARY; 88 - *strategy_parameter = cacheline_size; 89 - } 90 74 #endif 91 75 92 76 /* TODO: integrate with include/asm-generic/pci.h ? */
-1
arch/alpha/kernel/core_irongate.c
··· 22 22 #include <linux/bootmem.h> 23 23 24 24 #include <asm/ptrace.h> 25 - #include <asm/pci.h> 26 25 #include <asm/cacheflush.h> 27 26 #include <asm/tlbflush.h> 28 27
-1
arch/alpha/kernel/sys_eiger.c
··· 22 22 #include <asm/irq.h> 23 23 #include <asm/mmu_context.h> 24 24 #include <asm/io.h> 25 - #include <asm/pci.h> 26 25 #include <asm/pgtable.h> 27 26 #include <asm/core_tsunami.h> 28 27 #include <asm/hwrpb.h>
-1
arch/alpha/kernel/sys_nautilus.c
··· 39 39 #include <asm/irq.h> 40 40 #include <asm/mmu_context.h> 41 41 #include <asm/io.h> 42 - #include <asm/pci.h> 43 42 #include <asm/pgtable.h> 44 43 #include <asm/core_irongate.h> 45 44 #include <asm/hwrpb.h>
-10
arch/arm/include/asm/pci.h
··· 31 31 */ 32 32 #define PCI_DMA_BUS_IS_PHYS (1) 33 33 34 - #ifdef CONFIG_PCI 35 - static inline void pci_dma_burst_advice(struct pci_dev *pdev, 36 - enum pci_dma_burst_strategy *strat, 37 - unsigned long *strategy_parameter) 38 - { 39 - *strat = PCI_DMA_BURST_INFINITY; 40 - *strategy_parameter = ~0UL; 41 - } 42 - #endif 43 - 44 34 #define HAVE_PCI_MMAP 45 35 extern int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma, 46 36 enum pci_mmap_state mmap_state, int write_combine);
+27
arch/arm64/boot/dts/apm/apm-storm.dtsi
··· 374 374 }; 375 375 }; 376 376 377 + msi: msi@79000000 { 378 + compatible = "apm,xgene1-msi"; 379 + msi-controller; 380 + reg = <0x00 0x79000000 0x0 0x900000>; 381 + interrupts = < 0x0 0x10 0x4 382 + 0x0 0x11 0x4 383 + 0x0 0x12 0x4 384 + 0x0 0x13 0x4 385 + 0x0 0x14 0x4 386 + 0x0 0x15 0x4 387 + 0x0 0x16 0x4 388 + 0x0 0x17 0x4 389 + 0x0 0x18 0x4 390 + 0x0 0x19 0x4 391 + 0x0 0x1a 0x4 392 + 0x0 0x1b 0x4 393 + 0x0 0x1c 0x4 394 + 0x0 0x1d 0x4 395 + 0x0 0x1e 0x4 396 + 0x0 0x1f 0x4>; 397 + }; 398 + 377 399 pcie0: pcie@1f2b0000 { 378 400 status = "disabled"; 379 401 device_type = "pci"; ··· 417 395 0x0 0x0 0x0 0x4 &gic 0x0 0xc5 0x1>; 418 396 dma-coherent; 419 397 clocks = <&pcie0clk 0>; 398 + msi-parent = <&msi>; 420 399 }; 421 400 422 401 pcie1: pcie@1f2c0000 { ··· 441 418 0x0 0x0 0x0 0x4 &gic 0x0 0xcb 0x1>; 442 419 dma-coherent; 443 420 clocks = <&pcie1clk 0>; 421 + msi-parent = <&msi>; 444 422 }; 445 423 446 424 pcie2: pcie@1f2d0000 { ··· 465 441 0x0 0x0 0x0 0x4 &gic 0x0 0xd1 0x1>; 466 442 dma-coherent; 467 443 clocks = <&pcie2clk 0>; 444 + msi-parent = <&msi>; 468 445 }; 469 446 470 447 pcie3: pcie@1f500000 { ··· 489 464 0x0 0x0 0x0 0x4 &gic 0x0 0xd7 0x1>; 490 465 dma-coherent; 491 466 clocks = <&pcie3clk 0>; 467 + msi-parent = <&msi>; 492 468 }; 493 469 494 470 pcie4: pcie@1f510000 { ··· 513 487 0x0 0x0 0x0 0x4 &gic 0x0 0xdd 0x1>; 514 488 dma-coherent; 515 489 clocks = <&pcie4clk 0>; 490 + msi-parent = <&msi>; 516 491 }; 517 492 518 493 serial0: serial@1c020000 {
-10
arch/frv/include/asm/pci.h
··· 41 41 /* Return the index of the PCI controller for device PDEV. */ 42 42 #define pci_controller_num(PDEV) (0) 43 43 44 - #ifdef CONFIG_PCI 45 - static inline void pci_dma_burst_advice(struct pci_dev *pdev, 46 - enum pci_dma_burst_strategy *strat, 47 - unsigned long *strategy_parameter) 48 - { 49 - *strat = PCI_DMA_BURST_INFINITY; 50 - *strategy_parameter = ~0UL; 51 - } 52 - #endif 53 - 54 44 /* 55 45 * These are pretty much arbitrary with the CoMEM implementation. 56 46 * We have the whole address space to ourselves.
-32
arch/ia64/include/asm/pci.h
··· 52 52 53 53 #include <asm-generic/pci-dma-compat.h> 54 54 55 - #ifdef CONFIG_PCI 56 - static inline void pci_dma_burst_advice(struct pci_dev *pdev, 57 - enum pci_dma_burst_strategy *strat, 58 - unsigned long *strategy_parameter) 59 - { 60 - unsigned long cacheline_size; 61 - u8 byte; 62 - 63 - pci_read_config_byte(pdev, PCI_CACHE_LINE_SIZE, &byte); 64 - if (byte == 0) 65 - cacheline_size = 1024; 66 - else 67 - cacheline_size = (int) byte * 4; 68 - 69 - *strat = PCI_DMA_BURST_MULTIPLE; 70 - *strategy_parameter = cacheline_size; 71 - } 72 - #endif 73 - 74 55 #define HAVE_PCI_MMAP 75 56 extern int pci_mmap_page_range (struct pci_dev *dev, struct vm_area_struct *vma, 76 57 enum pci_mmap_state mmap_state, int write_combine); ··· 87 106 static inline int pci_proc_domain(struct pci_bus *bus) 88 107 { 89 108 return (pci_domain_nr(bus) != 0); 90 - } 91 - 92 - static inline struct resource * 93 - pcibios_select_root(struct pci_dev *pdev, struct resource *res) 94 - { 95 - struct resource *root = NULL; 96 - 97 - if (res->flags & IORESOURCE_IO) 98 - root = &ioport_resource; 99 - if (res->flags & IORESOURCE_MEM) 100 - root = &iomem_resource; 101 - 102 - return root; 103 109 } 104 110 105 111 #define HAVE_ARCH_PCI_GET_LEGACY_IDE_IRQ
-23
arch/microblaze/include/asm/pci.h
··· 44 44 */ 45 45 #define pcibios_assign_all_busses() 0 46 46 47 - #ifdef CONFIG_PCI 48 - static inline void pci_dma_burst_advice(struct pci_dev *pdev, 49 - enum pci_dma_burst_strategy *strat, 50 - unsigned long *strategy_parameter) 51 - { 52 - *strat = PCI_DMA_BURST_INFINITY; 53 - *strategy_parameter = ~0UL; 54 - } 55 - #endif 56 - 57 47 extern int pci_domain_nr(struct pci_bus *bus); 58 48 59 49 /* Decide whether to display the domain number in /proc */ ··· 72 82 * this boolean for bounce buffer decisions. 73 83 */ 74 84 #define PCI_DMA_BUS_IS_PHYS (1) 75 - 76 - static inline struct resource *pcibios_select_root(struct pci_dev *pdev, 77 - struct resource *res) 78 - { 79 - struct resource *root = NULL; 80 - 81 - if (res->flags & IORESOURCE_IO) 82 - root = &ioport_resource; 83 - if (res->flags & IORESOURCE_MEM) 84 - root = &iomem_resource; 85 - 86 - return root; 87 - } 88 85 89 86 extern void pcibios_claim_one_bus(struct pci_bus *b); 90 87
-10
arch/mips/include/asm/pci.h
··· 113 113 */ 114 114 extern unsigned int PCI_DMA_BUS_IS_PHYS; 115 115 116 - #ifdef CONFIG_PCI 117 - static inline void pci_dma_burst_advice(struct pci_dev *pdev, 118 - enum pci_dma_burst_strategy *strat, 119 - unsigned long *strategy_parameter) 120 - { 121 - *strat = PCI_DMA_BURST_INFINITY; 122 - *strategy_parameter = ~0UL; 123 - } 124 - #endif 125 - 126 116 #ifdef CONFIG_PCI_DOMAINS 127 117 #define pci_domain_nr(bus) ((struct pci_controller *)(bus)->sysdata)->index 128 118
-1
arch/mips/pci/fixup-cobalt.c
··· 13 13 #include <linux/kernel.h> 14 14 #include <linux/init.h> 15 15 16 - #include <asm/pci.h> 17 16 #include <asm/io.h> 18 17 #include <asm/gt64120.h> 19 18
-1
arch/mips/pci/ops-mace.c
··· 8 8 #include <linux/kernel.h> 9 9 #include <linux/pci.h> 10 10 #include <linux/types.h> 11 - #include <asm/pci.h> 12 11 #include <asm/ip32/mace.h> 13 12 14 13 #if 0
-1
arch/mips/pci/pci-lantiq.c
··· 20 20 #include <linux/of_irq.h> 21 21 #include <linux/of_pci.h> 22 22 23 - #include <asm/pci.h> 24 23 #include <asm/gpio.h> 25 24 #include <asm/addrspace.h> 26 25
-13
arch/mn10300/include/asm/pci.h
··· 83 83 /* implement the pci_ DMA API in terms of the generic device dma_ one */ 84 84 #include <asm-generic/pci-dma-compat.h> 85 85 86 - static inline struct resource * 87 - pcibios_select_root(struct pci_dev *pdev, struct resource *res) 88 - { 89 - struct resource *root = NULL; 90 - 91 - if (res->flags & IORESOURCE_IO) 92 - root = &ioport_resource; 93 - if (res->flags & IORESOURCE_MEM) 94 - root = &iomem_resource; 95 - 96 - return root; 97 - } 98 - 99 86 static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel) 100 87 { 101 88 return channel ? 15 : 14;
-19
arch/parisc/include/asm/pci.h
··· 196 196 /* export the pci_ DMA API in terms of the dma_ one */ 197 197 #include <asm-generic/pci-dma-compat.h> 198 198 199 - #ifdef CONFIG_PCI 200 - static inline void pci_dma_burst_advice(struct pci_dev *pdev, 201 - enum pci_dma_burst_strategy *strat, 202 - unsigned long *strategy_parameter) 203 - { 204 - unsigned long cacheline_size; 205 - u8 byte; 206 - 207 - pci_read_config_byte(pdev, PCI_CACHE_LINE_SIZE, &byte); 208 - if (byte == 0) 209 - cacheline_size = 1024; 210 - else 211 - cacheline_size = (int) byte * 4; 212 - 213 - *strat = PCI_DMA_BURST_MULTIPLE; 214 - *strategy_parameter = cacheline_size; 215 - } 216 - #endif 217 - 218 199 static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel) 219 200 { 220 201 return channel ? 15 : 14;
-30
arch/powerpc/include/asm/pci.h
··· 71 71 */ 72 72 #define PCI_DISABLE_MWI 73 73 74 - #ifdef CONFIG_PCI 75 - static inline void pci_dma_burst_advice(struct pci_dev *pdev, 76 - enum pci_dma_burst_strategy *strat, 77 - unsigned long *strategy_parameter) 78 - { 79 - unsigned long cacheline_size; 80 - u8 byte; 81 - 82 - pci_read_config_byte(pdev, PCI_CACHE_LINE_SIZE, &byte); 83 - if (byte == 0) 84 - cacheline_size = 1024; 85 - else 86 - cacheline_size = (int) byte * 4; 87 - 88 - *strat = PCI_DMA_BURST_MULTIPLE; 89 - *strategy_parameter = cacheline_size; 90 - } 91 - #endif 92 - 93 - #else /* 32-bit */ 94 - 95 - #ifdef CONFIG_PCI 96 - static inline void pci_dma_burst_advice(struct pci_dev *pdev, 97 - enum pci_dma_burst_strategy *strat, 98 - unsigned long *strategy_parameter) 99 - { 100 - *strat = PCI_DMA_BURST_INFINITY; 101 - *strategy_parameter = ~0UL; 102 - } 103 - #endif 104 74 #endif /* CONFIG_PPC64 */ 105 75 106 76 extern int pci_domain_nr(struct pci_bus *bus);
-1
arch/powerpc/kernel/prom.c
··· 46 46 #include <asm/mmu.h> 47 47 #include <asm/paca.h> 48 48 #include <asm/pgtable.h> 49 - #include <asm/pci.h> 50 49 #include <asm/iommu.h> 51 50 #include <asm/btext.h> 52 51 #include <asm/sections.h>
-1
arch/powerpc/kernel/prom_init.c
··· 37 37 #include <asm/smp.h> 38 38 #include <asm/mmu.h> 39 39 #include <asm/pgtable.h> 40 - #include <asm/pci.h> 41 40 #include <asm/iommu.h> 42 41 #include <asm/btext.h> 43 42 #include <asm/sections.h>
+1 -1
arch/powerpc/platforms/52xx/mpc52xx_pci.c
··· 12 12 13 13 #undef DEBUG 14 14 15 - #include <asm/pci.h> 15 + #include <linux/pci.h> 16 16 #include <asm/mpc52xx.h> 17 17 #include <asm/delay.h> 18 18 #include <asm/machdep.h>
+1 -1
arch/s390/kernel/suspend.c
··· 9 9 #include <linux/pfn.h> 10 10 #include <linux/suspend.h> 11 11 #include <linux/mm.h> 12 + #include <linux/pci.h> 12 13 #include <asm/ctl_reg.h> 13 14 #include <asm/ipl.h> 14 15 #include <asm/cio.h> 15 - #include <asm/pci.h> 16 16 #include <asm/sections.h> 17 17 #include "entry.h" 18 18
-1
arch/sh/drivers/pci/ops-sh5.c
··· 18 18 #include <linux/delay.h> 19 19 #include <linux/types.h> 20 20 #include <linux/irq.h> 21 - #include <asm/pci.h> 22 21 #include <asm/io.h> 23 22 #include "pci-sh5.h" 24 23
-1
arch/sh/drivers/pci/pci-sh5.c
··· 20 20 #include <linux/types.h> 21 21 #include <linux/irq.h> 22 22 #include <cpu/irq.h> 23 - #include <asm/pci.h> 24 23 #include <asm/io.h> 25 24 #include "pci-sh5.h" 26 25
-18
arch/sh/include/asm/pci.h
··· 86 86 * direct memory write. 87 87 */ 88 88 #define PCI_DISABLE_MWI 89 - 90 - static inline void pci_dma_burst_advice(struct pci_dev *pdev, 91 - enum pci_dma_burst_strategy *strat, 92 - unsigned long *strategy_parameter) 93 - { 94 - unsigned long cacheline_size; 95 - u8 byte; 96 - 97 - pci_read_config_byte(pdev, PCI_CACHE_LINE_SIZE, &byte); 98 - 99 - if (byte == 0) 100 - cacheline_size = L1_CACHE_BYTES; 101 - else 102 - cacheline_size = byte << 2; 103 - 104 - *strat = PCI_DMA_BURST_MULTIPLE; 105 - *strategy_parameter = cacheline_size; 106 - } 107 89 #endif 108 90 109 91 /* Board-specific fixup routines. */
-10
arch/sparc/include/asm/pci_32.h
··· 22 22 23 23 struct pci_dev; 24 24 25 - #ifdef CONFIG_PCI 26 - static inline void pci_dma_burst_advice(struct pci_dev *pdev, 27 - enum pci_dma_burst_strategy *strat, 28 - unsigned long *strategy_parameter) 29 - { 30 - *strat = PCI_DMA_BURST_INFINITY; 31 - *strategy_parameter = ~0UL; 32 - } 33 - #endif 34 - 35 25 #endif /* __KERNEL__ */ 36 26 37 27 #ifndef CONFIG_LEON_PCI
-19
arch/sparc/include/asm/pci_64.h
··· 31 31 #define PCI64_REQUIRED_MASK (~(u64)0) 32 32 #define PCI64_ADDR_BASE 0xfffc000000000000UL 33 33 34 - #ifdef CONFIG_PCI 35 - static inline void pci_dma_burst_advice(struct pci_dev *pdev, 36 - enum pci_dma_burst_strategy *strat, 37 - unsigned long *strategy_parameter) 38 - { 39 - unsigned long cacheline_size; 40 - u8 byte; 41 - 42 - pci_read_config_byte(pdev, PCI_CACHE_LINE_SIZE, &byte); 43 - if (byte == 0) 44 - cacheline_size = 1024; 45 - else 46 - cacheline_size = (int) byte * 4; 47 - 48 - *strat = PCI_DMA_BURST_BOUNDARY; 49 - *strategy_parameter = cacheline_size; 50 - } 51 - #endif 52 - 53 34 /* Return the index of the PCI controller for device PDEV. */ 54 35 55 36 int pci_domain_nr(struct pci_bus *bus);
-10
arch/unicore32/include/asm/pci.h
··· 18 18 #include <asm-generic/pci.h> 19 19 #include <mach/hardware.h> /* for PCIBIOS_MIN_* */ 20 20 21 - #ifdef CONFIG_PCI 22 - static inline void pci_dma_burst_advice(struct pci_dev *pdev, 23 - enum pci_dma_burst_strategy *strat, 24 - unsigned long *strategy_parameter) 25 - { 26 - *strat = PCI_DMA_BURST_INFINITY; 27 - *strategy_parameter = ~0UL; 28 - } 29 - #endif 30 - 31 21 #define HAVE_PCI_MMAP 32 22 extern int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma, 33 23 enum pci_mmap_state mmap_state, int write_combine);
-7
arch/x86/include/asm/pci.h
··· 80 80 81 81 #ifdef CONFIG_PCI 82 82 extern void early_quirks(void); 83 - static inline void pci_dma_burst_advice(struct pci_dev *pdev, 84 - enum pci_dma_burst_strategy *strat, 85 - unsigned long *strategy_parameter) 86 - { 87 - *strat = PCI_DMA_BURST_INFINITY; 88 - *strategy_parameter = ~0UL; 89 - } 90 83 #else 91 84 static inline void early_quirks(void) { } 92 85 #endif
-1
arch/x86/kernel/x86_init.c
··· 11 11 #include <asm/bios_ebda.h> 12 12 #include <asm/paravirt.h> 13 13 #include <asm/pci_x86.h> 14 - #include <asm/pci.h> 15 14 #include <asm/mpspec.h> 16 15 #include <asm/setup.h> 17 16 #include <asm/apic.h>
+15 -2
arch/x86/pci/acpi.c
···
 81  81              DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies, LTD"),
 82  82          },
 83  83      },
     84  +   /* https://bugs.launchpad.net/ubuntu/+source/alsa-driver/+bug/931368 */
     85  +   /* https://bugs.launchpad.net/ubuntu/+source/alsa-driver/+bug/1033299 */
     86  +   {
     87  +       .callback = set_use_crs,
     88  +       .ident = "Foxconn K8M890-8237A",
     89  +       .matches = {
     90  +           DMI_MATCH(DMI_BOARD_VENDOR, "Foxconn"),
     91  +           DMI_MATCH(DMI_BOARD_NAME, "K8M890-8237A"),
     92  +           DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies, LTD"),
     93  +       },
     94  +   },
 84  95
 85  96  /* Now for the blacklist.. */
 86  97
···
132 121  {
133 122      int year;
134 123
135      -  if (dmi_get_date(DMI_BIOS_DATE, &year, NULL, NULL) && year < 2008)
136      -      pci_use_crs = false;
    124  +  if (dmi_get_date(DMI_BIOS_DATE, &year, NULL, NULL) && year < 2008) {
    125  +      if (iomem_resource.end <= 0xffffffff)
    126  +          pci_use_crs = false;
    127  +  }
137 128
138 129      dmi_check_system(pci_crs_quirks);
139 130
+1 -1
drivers/acpi/pci_irq.c
···
163 163  {
164 164      int segment = pci_domain_nr(dev->bus);
165 165      int bus = dev->bus->number;
166      -  int device = PCI_SLOT(dev->devfn);
    166  +  int device = pci_ari_enabled(dev->bus) ? 0 : PCI_SLOT(dev->devfn);
167 167      struct acpi_prt_entry *entry;
168 168
169 169      if (((prt->address >> 16) & 0xffff) != device ||
-1
drivers/net/ethernet/sun/cassini.c
··· 3058 3058 /* setup core arbitration weight register */ 3059 3059 writel(CAWR_RR_DIS, cp->regs + REG_CAWR); 3060 3060 3061 - /* XXX Use pci_dma_burst_advice() */ 3062 3061 #if !defined(CONFIG_SPARC64) && !defined(CONFIG_ALPHA) 3063 3062 /* set the infinite burst register for chips that don't have 3064 3063 * pci issues.
-2
drivers/ntb/ntb_hw.c
··· 1313 1313 struct pci_dev *pdev = ndev->pdev; 1314 1314 int rc; 1315 1315 1316 - pci_msi_off(pdev); 1317 - 1318 1316 /* Verify intx is enabled */ 1319 1317 pci_intx(pdev, 1); 1320 1318
+1 -1
drivers/of/address.c
··· 765 765 spin_lock(&io_range_lock); 766 766 list_for_each_entry(res, &io_range_list, list) { 767 767 if (address >= res->start && address < res->start + res->size) { 768 - addr = res->start - address + offset; 768 + addr = address - res->start + offset; 769 769 break; 770 770 } 771 771 offset += res->size;
+4
drivers/pci/Kconfig
··· 1 1 # 2 2 # PCI configuration 3 3 # 4 + config PCI_BUS_ADDR_T_64BIT 5 + def_bool y if (ARCH_DMA_ADDR_T_64BIT || 64BIT) 6 + depends on PCI 7 + 4 8 config PCI_MSI 5 9 bool "Message Signaled Interrupts (MSI and MSI-X)" 6 10 depends on PCI
+5 -5
drivers/pci/bus.c
··· 92 92 } 93 93 94 94 static struct pci_bus_region pci_32_bit = {0, 0xffffffffULL}; 95 - #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT 95 + #ifdef CONFIG_PCI_BUS_ADDR_T_64BIT 96 96 static struct pci_bus_region pci_64_bit = {0, 97 - (dma_addr_t) 0xffffffffffffffffULL}; 98 - static struct pci_bus_region pci_high = {(dma_addr_t) 0x100000000ULL, 99 - (dma_addr_t) 0xffffffffffffffffULL}; 97 + (pci_bus_addr_t) 0xffffffffffffffffULL}; 98 + static struct pci_bus_region pci_high = {(pci_bus_addr_t) 0x100000000ULL, 99 + (pci_bus_addr_t) 0xffffffffffffffffULL}; 100 100 #endif 101 101 102 102 /* ··· 200 200 resource_size_t), 201 201 void *alignf_data) 202 202 { 203 - #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT 203 + #ifdef CONFIG_PCI_BUS_ADDR_T_64BIT 204 204 int rc; 205 205 206 206 if (res->flags & IORESOURCE_MEM_64) {
+20
drivers/pci/host/Kconfig
··· 89 89 depends on ARCH_XGENE 90 90 depends on OF 91 91 select PCIEPORTBUS 92 + select PCI_MSI_IRQ_DOMAIN if PCI_MSI 92 93 help 93 94 Say Y here if you want internal PCI support on APM X-Gene SoC. 94 95 There are 5 internal PCIe ports available. Each port is GEN3 capable 95 96 and has varied lane widths from x1 to x8. 97 + 98 + config PCI_XGENE_MSI 99 + bool "X-Gene v1 PCIe MSI feature" 100 + depends on PCI_XGENE && PCI_MSI 101 + default y 102 + help 103 + Say Y here if you want PCIe MSI support for the APM X-Gene v1 SoC. 104 + This MSI driver supports 5 PCIe ports on the APM X-Gene v1 SoC. 96 105 97 106 config PCI_LAYERSCAPE 98 107 bool "Freescale Layerscape PCIe controller" ··· 133 124 help 134 125 Say Y here if you want to use the Broadcom iProc PCIe controller 135 126 through the generic platform bus interface 127 + 128 + config PCIE_IPROC_BCMA 129 + bool "Broadcom iProc PCIe BCMA bus driver" 130 + depends on ARCH_BCM_IPROC || (ARM && COMPILE_TEST) 131 + select PCIE_IPROC 132 + select BCMA 133 + select PCI_DOMAINS 134 + default ARCH_BCM_5301X 135 + help 136 + Say Y here if you want to use the Broadcom iProc PCIe controller 137 + through the BCMA bus interface 136 138 137 139 endmenu
+2
drivers/pci/host/Makefile
··· 11 11 obj-$(CONFIG_PCI_KEYSTONE) += pci-keystone-dw.o pci-keystone.o 12 12 obj-$(CONFIG_PCIE_XILINX) += pcie-xilinx.o 13 13 obj-$(CONFIG_PCI_XGENE) += pci-xgene.o 14 + obj-$(CONFIG_PCI_XGENE_MSI) += pci-xgene-msi.o 14 15 obj-$(CONFIG_PCI_LAYERSCAPE) += pci-layerscape.o 15 16 obj-$(CONFIG_PCI_VERSATILE) += pci-versatile.o 16 17 obj-$(CONFIG_PCIE_IPROC) += pcie-iproc.o 17 18 obj-$(CONFIG_PCIE_IPROC_PLATFORM) += pcie-iproc-platform.o 19 + obj-$(CONFIG_PCIE_IPROC_BCMA) += pcie-iproc-bcma.o
+7 -12
drivers/pci/host/pci-dra7xx.c
··· 93 93 94 94 static int dra7xx_pcie_establish_link(struct pcie_port *pp) 95 95 { 96 - u32 reg; 97 - unsigned int retries = 1000; 98 96 struct dra7xx_pcie *dra7xx = to_dra7xx_pcie(pp); 97 + u32 reg; 98 + unsigned int retries; 99 99 100 100 if (dw_pcie_link_up(pp)) { 101 101 dev_err(pp->dev, "link is already up\n"); ··· 106 106 reg |= LTSSM_EN; 107 107 dra7xx_pcie_writel(dra7xx, PCIECTRL_DRA7XX_CONF_DEVICE_CMD, reg); 108 108 109 - while (retries--) { 110 - reg = dra7xx_pcie_readl(dra7xx, PCIECTRL_DRA7XX_CONF_PHY_CS); 111 - if (reg & LINK_UP) 112 - break; 109 + for (retries = 0; retries < 1000; retries++) { 110 + if (dw_pcie_link_up(pp)) 111 + return 0; 113 112 usleep_range(10, 20); 114 113 } 115 114 116 - if (retries == 0) { 117 - dev_err(pp->dev, "link is not up\n"); 118 - return -ETIMEDOUT; 119 - } 120 - 121 - return 0; 115 + dev_err(pp->dev, "link is not up\n"); 116 + return -EINVAL; 122 117 } 123 118 124 119 static void dra7xx_pcie_enable_interrupts(struct pcie_port *pp)
+15 -19
drivers/pci/host/pci-exynos.c
··· 316 316 317 317 static int exynos_pcie_establish_link(struct pcie_port *pp) 318 318 { 319 - u32 val; 320 - int count = 0; 321 319 struct exynos_pcie *exynos_pcie = to_exynos_pcie(pp); 320 + u32 val; 321 + unsigned int retries; 322 322 323 323 if (dw_pcie_link_up(pp)) { 324 324 dev_err(pp->dev, "Link already up\n"); ··· 357 357 PCIE_APP_LTSSM_ENABLE); 358 358 359 359 /* check if the link is up or not */ 360 - while (!dw_pcie_link_up(pp)) { 361 - mdelay(100); 362 - count++; 363 - if (count == 10) { 364 - while (exynos_phy_readl(exynos_pcie, 365 - PCIE_PHY_PLL_LOCKED) == 0) { 366 - val = exynos_blk_readl(exynos_pcie, 367 - PCIE_PHY_PLL_LOCKED); 368 - dev_info(pp->dev, "PLL Locked: 0x%x\n", val); 369 - } 370 - /* power off phy */ 371 - exynos_pcie_power_off_phy(pp); 372 - 373 - dev_err(pp->dev, "PCIe Link Fail\n"); 374 - return -EINVAL; 360 + for (retries = 0; retries < 10; retries++) { 361 + if (dw_pcie_link_up(pp)) { 362 + dev_info(pp->dev, "Link up\n"); 363 + return 0; 375 364 } 365 + mdelay(100); 376 366 } 377 367 378 - dev_info(pp->dev, "Link up\n"); 368 + while (exynos_phy_readl(exynos_pcie, PCIE_PHY_PLL_LOCKED) == 0) { 369 + val = exynos_blk_readl(exynos_pcie, PCIE_PHY_PLL_LOCKED); 370 + dev_info(pp->dev, "PLL Locked: 0x%x\n", val); 371 + } 372 + /* power off phy */ 373 + exynos_pcie_power_off_phy(pp); 379 374 380 - return 0; 375 + dev_err(pp->dev, "PCIe Link Fail\n"); 376 + return -EINVAL; 381 377 } 382 378 383 379 static void exynos_pcie_clear_irq_pulse(struct pcie_port *pp)
+49 -39
drivers/pci/host/pci-imx6.c
··· 47 47 #define PCIE_RC_LCR_MAX_LINK_SPEEDS_GEN2 0x2 48 48 #define PCIE_RC_LCR_MAX_LINK_SPEEDS_MASK 0xf 49 49 50 + #define PCIE_RC_LCSR 0x80 51 + 50 52 /* PCIe Port Logic registers (memory-mapped) */ 51 53 #define PL_OFFSET 0x700 52 54 #define PCIE_PL_PFLR (PL_OFFSET + 0x08) ··· 337 335 338 336 static int imx6_pcie_wait_for_link(struct pcie_port *pp) 339 337 { 340 - int count = 200; 338 + unsigned int retries; 341 339 342 - while (!dw_pcie_link_up(pp)) { 340 + for (retries = 0; retries < 200; retries++) { 341 + if (dw_pcie_link_up(pp)) 342 + return 0; 343 343 usleep_range(100, 1000); 344 - if (--count) 345 - continue; 346 - 347 - dev_err(pp->dev, "phy link never came up\n"); 348 - dev_dbg(pp->dev, "DEBUG_R0: 0x%08x, DEBUG_R1: 0x%08x\n", 349 - readl(pp->dbi_base + PCIE_PHY_DEBUG_R0), 350 - readl(pp->dbi_base + PCIE_PHY_DEBUG_R1)); 351 - return -EINVAL; 352 344 } 353 345 354 - return 0; 346 + dev_err(pp->dev, "phy link never came up\n"); 347 + dev_dbg(pp->dev, "DEBUG_R0: 0x%08x, DEBUG_R1: 0x%08x\n", 348 + readl(pp->dbi_base + PCIE_PHY_DEBUG_R0), 349 + readl(pp->dbi_base + PCIE_PHY_DEBUG_R1)); 350 + return -EINVAL; 351 + } 352 + 353 + static int imx6_pcie_wait_for_speed_change(struct pcie_port *pp) 354 + { 355 + u32 tmp; 356 + unsigned int retries; 357 + 358 + for (retries = 0; retries < 200; retries++) { 359 + tmp = readl(pp->dbi_base + PCIE_LINK_WIDTH_SPEED_CONTROL); 360 + /* Test if the speed change finished. 
*/ 361 + if (!(tmp & PORT_LOGIC_SPEED_CHANGE)) 362 + return 0; 363 + usleep_range(100, 1000); 364 + } 365 + 366 + dev_err(pp->dev, "Speed change timeout\n"); 367 + return -EINVAL; 355 368 } 356 369 357 370 static irqreturn_t imx6_pcie_msi_handler(int irq, void *arg) ··· 376 359 return dw_handle_msi_irq(pp); 377 360 } 378 361 379 - static int imx6_pcie_start_link(struct pcie_port *pp) 362 + static int imx6_pcie_establish_link(struct pcie_port *pp) 380 363 { 381 364 struct imx6_pcie *imx6_pcie = to_imx6_pcie(pp); 382 - uint32_t tmp; 383 - int ret, count; 365 + u32 tmp; 366 + int ret; 384 367 385 368 /* 386 369 * Force Gen1 operation when starting the link. In case the link is ··· 414 397 tmp |= PORT_LOGIC_SPEED_CHANGE; 415 398 writel(tmp, pp->dbi_base + PCIE_LINK_WIDTH_SPEED_CONTROL); 416 399 417 - count = 200; 418 - while (count--) { 419 - tmp = readl(pp->dbi_base + PCIE_LINK_WIDTH_SPEED_CONTROL); 420 - /* Test if the speed change finished. */ 421 - if (!(tmp & PORT_LOGIC_SPEED_CHANGE)) 422 - break; 423 - usleep_range(100, 1000); 400 + ret = imx6_pcie_wait_for_speed_change(pp); 401 + if (ret) { 402 + dev_err(pp->dev, "Failed to bring link up!\n"); 403 + return ret; 424 404 } 425 405 426 406 /* Make sure link training is finished as well! 
*/ 427 - if (count) 428 - ret = imx6_pcie_wait_for_link(pp); 429 - else 430 - ret = -EINVAL; 431 - 407 + ret = imx6_pcie_wait_for_link(pp); 432 408 if (ret) { 433 409 dev_err(pp->dev, "Failed to bring link up!\n"); 434 - } else { 435 - tmp = readl(pp->dbi_base + 0x80); 436 - dev_dbg(pp->dev, "Link up, Gen=%i\n", (tmp >> 16) & 0xf); 410 + return ret; 437 411 } 438 412 439 - return ret; 413 + tmp = readl(pp->dbi_base + PCIE_RC_LCSR); 414 + dev_dbg(pp->dev, "Link up, Gen=%i\n", (tmp >> 16) & 0xf); 415 + return 0; 440 416 } 441 417 442 418 static void imx6_pcie_host_init(struct pcie_port *pp) ··· 442 432 443 433 dw_pcie_setup_rc(pp); 444 434 445 - imx6_pcie_start_link(pp); 435 + imx6_pcie_establish_link(pp); 446 436 447 437 if (IS_ENABLED(CONFIG_PCI_MSI)) 448 438 dw_pcie_msi_init(pp); ··· 450 440 451 441 static void imx6_pcie_reset_phy(struct pcie_port *pp) 452 442 { 453 - uint32_t temp; 443 + u32 tmp; 454 444 455 - pcie_phy_read(pp->dbi_base, PHY_RX_OVRD_IN_LO, &temp); 456 - temp |= (PHY_RX_OVRD_IN_LO_RX_DATA_EN | 457 - PHY_RX_OVRD_IN_LO_RX_PLL_EN); 458 - pcie_phy_write(pp->dbi_base, PHY_RX_OVRD_IN_LO, temp); 445 + pcie_phy_read(pp->dbi_base, PHY_RX_OVRD_IN_LO, &tmp); 446 + tmp |= (PHY_RX_OVRD_IN_LO_RX_DATA_EN | 447 + PHY_RX_OVRD_IN_LO_RX_PLL_EN); 448 + pcie_phy_write(pp->dbi_base, PHY_RX_OVRD_IN_LO, tmp); 459 449 460 450 usleep_range(2000, 3000); 461 451 462 - pcie_phy_read(pp->dbi_base, PHY_RX_OVRD_IN_LO, &temp); 463 - temp &= ~(PHY_RX_OVRD_IN_LO_RX_DATA_EN | 452 + pcie_phy_read(pp->dbi_base, PHY_RX_OVRD_IN_LO, &tmp); 453 + tmp &= ~(PHY_RX_OVRD_IN_LO_RX_DATA_EN | 464 454 PHY_RX_OVRD_IN_LO_RX_PLL_EN); 465 - pcie_phy_write(pp->dbi_base, PHY_RX_OVRD_IN_LO, temp); 455 + pcie_phy_write(pp->dbi_base, PHY_RX_OVRD_IN_LO, tmp); 466 456 } 467 457 468 458 static int imx6_pcie_link_up(struct pcie_port *pp)
+7 -9
drivers/pci/host/pci-keystone.c
··· 88 88 static int ks_pcie_establish_link(struct keystone_pcie *ks_pcie) 89 89 { 90 90 struct pcie_port *pp = &ks_pcie->pp; 91 - int count = 200; 91 + unsigned int retries; 92 92 93 93 dw_pcie_setup_rc(pp); 94 94 ··· 99 99 100 100 ks_dw_pcie_initiate_link_train(ks_pcie); 101 101 /* check if the link is up or not */ 102 - while (!dw_pcie_link_up(pp)) { 102 + for (retries = 0; retries < 200; retries++) { 103 + if (dw_pcie_link_up(pp)) 104 + return 0; 103 105 usleep_range(100, 1000); 104 - if (--count) { 105 - ks_dw_pcie_initiate_link_train(ks_pcie); 106 - continue; 107 - } 108 - dev_err(pp->dev, "phy link never came up\n"); 109 - return -EINVAL; 106 + ks_dw_pcie_initiate_link_train(ks_pcie); 110 107 } 111 108 112 - return 0; 109 + dev_err(pp->dev, "phy link never came up\n"); 110 + return -EINVAL; 113 111 } 114 112 115 113 static void ks_pcie_msi_irq_handler(unsigned int irq, struct irq_desc *desc)
+15 -10
drivers/pci/host/pci-layerscape.c
··· 62 62 return 1; 63 63 } 64 64 65 + static int ls_pcie_establish_link(struct pcie_port *pp) 66 + { 67 + unsigned int retries; 68 + 69 + for (retries = 0; retries < 200; retries++) { 70 + if (dw_pcie_link_up(pp)) 71 + return 0; 72 + usleep_range(100, 1000); 73 + } 74 + 75 + dev_err(pp->dev, "phy link never came up\n"); 76 + return -EINVAL; 77 + } 78 + 65 79 static void ls_pcie_host_init(struct pcie_port *pp) 66 80 { 67 81 struct ls_pcie *pcie = to_ls_pcie(pp); 68 - int count = 0; 69 82 u32 val; 70 83 71 84 dw_pcie_setup_rc(pp); 72 - 73 - while (!ls_pcie_link_up(pp)) { 74 - usleep_range(100, 1000); 75 - count++; 76 - if (count >= 200) { 77 - dev_err(pp->dev, "phy link never came up\n"); 78 - return; 79 - } 80 - } 85 + ls_pcie_establish_link(pp); 81 86 82 87 /* 83 88 * LS1021A Workaround for internal TKT228622
+1 -17
drivers/pci/host/pci-mvebu.c
··· 751 751 return 1; 752 752 } 753 753 754 - static struct pci_bus *mvebu_pcie_scan_bus(int nr, struct pci_sys_data *sys) 755 - { 756 - struct mvebu_pcie *pcie = sys_to_pcie(sys); 757 - struct pci_bus *bus; 758 - 759 - bus = pci_create_root_bus(&pcie->pdev->dev, sys->busnr, 760 - &mvebu_pcie_ops, sys, &sys->resources); 761 - if (!bus) 762 - return NULL; 763 - 764 - pci_scan_child_bus(bus); 765 - 766 - return bus; 767 - } 768 - 769 754 static resource_size_t mvebu_pcie_align_resource(struct pci_dev *dev, 770 755 const struct resource *res, 771 756 resource_size_t start, ··· 794 809 hw.nr_controllers = 1; 795 810 hw.private_data = (void **)&pcie; 796 811 hw.setup = mvebu_pcie_setup; 797 - hw.scan = mvebu_pcie_scan_bus; 798 812 hw.map_irq = of_irq_parse_and_map_pci; 799 813 hw.ops = &mvebu_pcie_ops; 800 814 hw.align_resource = mvebu_pcie_align_resource; 801 815 802 - pci_common_init(&hw); 816 + pci_common_init_dev(&pcie->pdev->dev, &hw); 803 817 } 804 818 805 819 /*
-16
drivers/pci/host/pci-tegra.c
··· 630 630 return irq; 631 631 } 632 632 633 - static struct pci_bus *tegra_pcie_scan_bus(int nr, struct pci_sys_data *sys) 634 - { 635 - struct tegra_pcie *pcie = sys_to_pcie(sys); 636 - struct pci_bus *bus; 637 - 638 - bus = pci_create_root_bus(pcie->dev, sys->busnr, &tegra_pcie_ops, sys, 639 - &sys->resources); 640 - if (!bus) 641 - return NULL; 642 - 643 - pci_scan_child_bus(bus); 644 - 645 - return bus; 646 - } 647 - 648 633 static irqreturn_t tegra_pcie_isr(int irq, void *arg) 649 634 { 650 635 const char *err_msg[] = { ··· 1816 1831 hw.private_data = (void **)&pcie; 1817 1832 hw.setup = tegra_pcie_setup; 1818 1833 hw.map_irq = tegra_pcie_map_irq; 1819 - hw.scan = tegra_pcie_scan_bus; 1820 1834 hw.ops = &tegra_pcie_ops; 1821 1835 1822 1836 pci_common_init_dev(pcie->dev, &hw);
+596
drivers/pci/host/pci-xgene-msi.c
··· 1 + /* 2 + * APM X-Gene MSI Driver 3 + * 4 + * Copyright (c) 2014, Applied Micro Circuits Corporation 5 + * Author: Tanmay Inamdar <tinamdar@apm.com> 6 + * Duc Dang <dhdang@apm.com> 7 + * 8 + * This program is free software; you can redistribute it and/or modify it 9 + * under the terms of the GNU General Public License as published by the 10 + * Free Software Foundation; either version 2 of the License, or (at your 11 + * option) any later version. 12 + * 13 + * This program is distributed in the hope that it will be useful, 14 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 15 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 16 + * GNU General Public License for more details. 17 + */ 18 + #include <linux/cpu.h> 19 + #include <linux/interrupt.h> 20 + #include <linux/module.h> 21 + #include <linux/msi.h> 22 + #include <linux/of_irq.h> 23 + #include <linux/irqchip/chained_irq.h> 24 + #include <linux/pci.h> 25 + #include <linux/platform_device.h> 26 + #include <linux/of_pci.h> 27 + 28 + #define MSI_IR0 0x000000 29 + #define MSI_INT0 0x800000 30 + #define IDX_PER_GROUP 8 31 + #define IRQS_PER_IDX 16 32 + #define NR_HW_IRQS 16 33 + #define NR_MSI_VEC (IDX_PER_GROUP * IRQS_PER_IDX * NR_HW_IRQS) 34 + 35 + struct xgene_msi_group { 36 + struct xgene_msi *msi; 37 + int gic_irq; 38 + u32 msi_grp; 39 + }; 40 + 41 + struct xgene_msi { 42 + struct device_node *node; 43 + struct msi_controller mchip; 44 + struct irq_domain *domain; 45 + u64 msi_addr; 46 + void __iomem *msi_regs; 47 + unsigned long *bitmap; 48 + struct mutex bitmap_lock; 49 + struct xgene_msi_group *msi_groups; 50 + int num_cpus; 51 + }; 52 + 53 + /* Global data */ 54 + static struct xgene_msi xgene_msi_ctrl; 55 + 56 + static struct irq_chip xgene_msi_top_irq_chip = { 57 + .name = "X-Gene1 MSI", 58 + .irq_enable = pci_msi_unmask_irq, 59 + .irq_disable = pci_msi_mask_irq, 60 + .irq_mask = pci_msi_mask_irq, 61 + .irq_unmask = pci_msi_unmask_irq, 62 + }; 63 + 64 + static struct 
msi_domain_info xgene_msi_domain_info = { 65 + .flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS | 66 + MSI_FLAG_PCI_MSIX), 67 + .chip = &xgene_msi_top_irq_chip, 68 + }; 69 + 70 + /* 71 + * X-Gene v1 has 16 groups of MSI termination registers MSInIRx, where 72 + * n is group number (0..F), x is index of registers in each group (0..7) 73 + * The register layout is as follows: 74 + * MSI0IR0 base_addr 75 + * MSI0IR1 base_addr + 0x10000 76 + * ... ... 77 + * MSI0IR6 base_addr + 0x60000 78 + * MSI0IR7 base_addr + 0x70000 79 + * MSI1IR0 base_addr + 0x80000 80 + * MSI1IR1 base_addr + 0x90000 81 + * ... ... 82 + * MSI1IR7 base_addr + 0xF0000 83 + * MSI2IR0 base_addr + 0x100000 84 + * ... ... 85 + * MSIFIR0 base_addr + 0x780000 86 + * MSIFIR1 base_addr + 0x790000 87 + * ... ... 88 + * MSIFIR7 base_addr + 0x7F0000 89 + * MSIINT0 base_addr + 0x800000 90 + * MSIINT1 base_addr + 0x810000 91 + * ... ... 92 + * MSIINTF base_addr + 0x8F0000 93 + * 94 + * Each index register supports 16 MSI vectors (0..15) to generate interrupt. 95 + * There are total 16 GIC IRQs assigned for these 16 groups of MSI termination 96 + * registers. 97 + * 98 + * Each MSI termination group has 1 MSIINTn register (n is 0..15) to indicate 99 + * the MSI pending status caused by 1 of its 8 index registers. 100 + */ 101 + 102 + /* MSInIRx read helper */ 103 + static u32 xgene_msi_ir_read(struct xgene_msi *msi, 104 + u32 msi_grp, u32 msir_idx) 105 + { 106 + return readl_relaxed(msi->msi_regs + MSI_IR0 + 107 + (msi_grp << 19) + (msir_idx << 16)); 108 + } 109 + 110 + /* MSIINTn read helper */ 111 + static u32 xgene_msi_int_read(struct xgene_msi *msi, u32 msi_grp) 112 + { 113 + return readl_relaxed(msi->msi_regs + MSI_INT0 + (msi_grp << 16)); 114 + } 115 + 116 + /* 117 + * With 2048 MSI vectors supported, the MSI message can be constructed using 118 + * following scheme: 119 + * - Divide into 8 256-vector groups 120 + * Group 0: 0-255 121 + * Group 1: 256-511 122 + * Group 2: 512-767 123 + * ... 
124 + * Group 7: 1792-2047 125 + * - Each 256-vector group is divided into 16 16-vector groups 126 + * As an example: 16 16-vector groups for 256-vector group 0-255 is 127 + * Group 0: 0-15 128 + * Group 1: 16-31 129 + * ... 130 + * Group 15: 240-255 131 + * - The termination address of MSI vector in 256-vector group n and 16-vector 132 + * group x is the address of MSIxIRn 133 + * - The data for MSI vector in 16-vector group x is x 134 + */ 135 + static u32 hwirq_to_reg_set(unsigned long hwirq) 136 + { 137 + return (hwirq / (NR_HW_IRQS * IRQS_PER_IDX)); 138 + } 139 + 140 + static u32 hwirq_to_group(unsigned long hwirq) 141 + { 142 + return (hwirq % NR_HW_IRQS); 143 + } 144 + 145 + static u32 hwirq_to_msi_data(unsigned long hwirq) 146 + { 147 + return ((hwirq / NR_HW_IRQS) % IRQS_PER_IDX); 148 + } 149 + 150 + static void xgene_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) 151 + { 152 + struct xgene_msi *msi = irq_data_get_irq_chip_data(data); 153 + u32 reg_set = hwirq_to_reg_set(data->hwirq); 154 + u32 group = hwirq_to_group(data->hwirq); 155 + u64 target_addr = msi->msi_addr + (((8 * group) + reg_set) << 16); 156 + 157 + msg->address_hi = upper_32_bits(target_addr); 158 + msg->address_lo = lower_32_bits(target_addr); 159 + msg->data = hwirq_to_msi_data(data->hwirq); 160 + } 161 + 162 + /* 163 + * X-Gene v1 only has 16 MSI GIC IRQs for 2048 MSI vectors. To maintain 164 + * the expected behaviour of .set_affinity for each MSI interrupt, the 16 165 + * MSI GIC IRQs are statically allocated to 8 X-Gene v1 cores (2 GIC IRQs 166 + * for each core). The MSI vector is moved from 1 MSI GIC IRQ to another 167 + * MSI GIC IRQ to steer its MSI interrupt to the correct X-Gene v1 core. As a 168 + * consequence, the total MSI vectors that X-Gene v1 supports will be 169 + * reduced to 256 (2048/8) vectors.
170 + */ 171 + static int hwirq_to_cpu(unsigned long hwirq) 172 + { 173 + return (hwirq % xgene_msi_ctrl.num_cpus); 174 + } 175 + 176 + static unsigned long hwirq_to_canonical_hwirq(unsigned long hwirq) 177 + { 178 + return (hwirq - hwirq_to_cpu(hwirq)); 179 + } 180 + 181 + static int xgene_msi_set_affinity(struct irq_data *irqdata, 182 + const struct cpumask *mask, bool force) 183 + { 184 + int target_cpu = cpumask_first(mask); 185 + int curr_cpu; 186 + 187 + curr_cpu = hwirq_to_cpu(irqdata->hwirq); 188 + if (curr_cpu == target_cpu) 189 + return IRQ_SET_MASK_OK_DONE; 190 + 191 + /* Update MSI number to target the new CPU */ 192 + irqdata->hwirq = hwirq_to_canonical_hwirq(irqdata->hwirq) + target_cpu; 193 + 194 + return IRQ_SET_MASK_OK; 195 + } 196 + 197 + static struct irq_chip xgene_msi_bottom_irq_chip = { 198 + .name = "MSI", 199 + .irq_set_affinity = xgene_msi_set_affinity, 200 + .irq_compose_msi_msg = xgene_compose_msi_msg, 201 + }; 202 + 203 + static int xgene_irq_domain_alloc(struct irq_domain *domain, unsigned int virq, 204 + unsigned int nr_irqs, void *args) 205 + { 206 + struct xgene_msi *msi = domain->host_data; 207 + int msi_irq; 208 + 209 + mutex_lock(&msi->bitmap_lock); 210 + 211 + msi_irq = bitmap_find_next_zero_area(msi->bitmap, NR_MSI_VEC, 0, 212 + msi->num_cpus, 0); 213 + if (msi_irq < NR_MSI_VEC) 214 + bitmap_set(msi->bitmap, msi_irq, msi->num_cpus); 215 + else 216 + msi_irq = -ENOSPC; 217 + 218 + mutex_unlock(&msi->bitmap_lock); 219 + 220 + if (msi_irq < 0) 221 + return msi_irq; 222 + 223 + irq_domain_set_info(domain, virq, msi_irq, 224 + &xgene_msi_bottom_irq_chip, domain->host_data, 225 + handle_simple_irq, NULL, NULL); 226 + set_irq_flags(virq, IRQF_VALID); 227 + 228 + return 0; 229 + } 230 + 231 + static void xgene_irq_domain_free(struct irq_domain *domain, 232 + unsigned int virq, unsigned int nr_irqs) 233 + { 234 + struct irq_data *d = irq_domain_get_irq_data(domain, virq); 235 + struct xgene_msi *msi = irq_data_get_irq_chip_data(d); 236 + 
u32 hwirq; 237 + 238 + mutex_lock(&msi->bitmap_lock); 239 + 240 + hwirq = hwirq_to_canonical_hwirq(d->hwirq); 241 + bitmap_clear(msi->bitmap, hwirq, msi->num_cpus); 242 + 243 + mutex_unlock(&msi->bitmap_lock); 244 + 245 + irq_domain_free_irqs_parent(domain, virq, nr_irqs); 246 + } 247 + 248 + static const struct irq_domain_ops msi_domain_ops = { 249 + .alloc = xgene_irq_domain_alloc, 250 + .free = xgene_irq_domain_free, 251 + }; 252 + 253 + static int xgene_allocate_domains(struct xgene_msi *msi) 254 + { 255 + msi->domain = irq_domain_add_linear(NULL, NR_MSI_VEC, 256 + &msi_domain_ops, msi); 257 + if (!msi->domain) 258 + return -ENOMEM; 259 + 260 + msi->mchip.domain = pci_msi_create_irq_domain(msi->mchip.of_node, 261 + &xgene_msi_domain_info, 262 + msi->domain); 263 + 264 + if (!msi->mchip.domain) { 265 + irq_domain_remove(msi->domain); 266 + return -ENOMEM; 267 + } 268 + 269 + return 0; 270 + } 271 + 272 + static void xgene_free_domains(struct xgene_msi *msi) 273 + { 274 + if (msi->mchip.domain) 275 + irq_domain_remove(msi->mchip.domain); 276 + if (msi->domain) 277 + irq_domain_remove(msi->domain); 278 + } 279 + 280 + static int xgene_msi_init_allocator(struct xgene_msi *xgene_msi) 281 + { 282 + int size = BITS_TO_LONGS(NR_MSI_VEC) * sizeof(long); 283 + 284 + xgene_msi->bitmap = kzalloc(size, GFP_KERNEL); 285 + if (!xgene_msi->bitmap) 286 + return -ENOMEM; 287 + 288 + mutex_init(&xgene_msi->bitmap_lock); 289 + 290 + xgene_msi->msi_groups = kcalloc(NR_HW_IRQS, 291 + sizeof(struct xgene_msi_group), 292 + GFP_KERNEL); 293 + if (!xgene_msi->msi_groups) 294 + return -ENOMEM; 295 + 296 + return 0; 297 + } 298 + 299 + static void xgene_msi_isr(unsigned int irq, struct irq_desc *desc) 300 + { 301 + struct irq_chip *chip = irq_desc_get_chip(desc); 302 + struct xgene_msi_group *msi_groups; 303 + struct xgene_msi *xgene_msi; 304 + unsigned int virq; 305 + int msir_index, msir_val, hw_irq; 306 + u32 intr_index, grp_select, msi_grp; 307 + 308 + chained_irq_enter(chip, desc); 
309 + 310 + msi_groups = irq_desc_get_handler_data(desc); 311 + xgene_msi = msi_groups->msi; 312 + msi_grp = msi_groups->msi_grp; 313 + 314 + /* 315 + * MSIINTn (n is 0..F) indicates if there is a pending MSI interrupt. 316 + * If bit x of this register is set (x is 0..7), one or more interrupts 317 + * corresponding to MSInIRx is set. 318 + */ 319 + grp_select = xgene_msi_int_read(xgene_msi, msi_grp); 320 + while (grp_select) { 321 + msir_index = ffs(grp_select) - 1; 322 + /* 323 + * Calculate MSInIRx address to read to check for interrupts 324 + * (refer to termination address and data assignment 325 + * described in xgene_compose_msi_msg() ) 326 + */ 327 + msir_val = xgene_msi_ir_read(xgene_msi, msi_grp, msir_index); 328 + while (msir_val) { 329 + intr_index = ffs(msir_val) - 1; 330 + /* 331 + * Calculate MSI vector number (refer to the termination 332 + * address and data assignment described in 333 + * xgene_compose_msi_msg function) 334 + */ 335 + hw_irq = (((msir_index * IRQS_PER_IDX) + intr_index) * 336 + NR_HW_IRQS) + msi_grp; 337 + /* 338 + * As we have multiple hw_irqs that map to a single MSI, 339 + * always look up the virq using the hw_irq as seen from 340 + * CPU0 341 + */ 342 + hw_irq = hwirq_to_canonical_hwirq(hw_irq); 343 + virq = irq_find_mapping(xgene_msi->domain, hw_irq); 344 + WARN_ON(!virq); 345 + if (virq != 0) 346 + generic_handle_irq(virq); 347 + msir_val &= ~(1 << intr_index); 348 + } 349 + grp_select &= ~(1 << msir_index); 350 + 351 + if (!grp_select) { 352 + /* 353 + * We handled all the interrupts that happened in this group, 354 + * resample this group MSI_INTx register in case 355 + * something else has been made pending in the meantime 356 + */ 357 + grp_select = xgene_msi_int_read(xgene_msi, msi_grp); 358 + } 359 + } 360 + 361 + chained_irq_exit(chip, desc); 362 + } 363 + 364 + static int xgene_msi_remove(struct platform_device *pdev) 365 + { 366 + int virq, i; 367 + struct xgene_msi *msi = platform_get_drvdata(pdev); 368 + 369 + for (i = 0; i
< NR_HW_IRQS; i++) { 370 + virq = msi->msi_groups[i].gic_irq; 371 + if (virq != 0) { 372 + irq_set_chained_handler(virq, NULL); 373 + irq_set_handler_data(virq, NULL); 374 + } 375 + } 376 + kfree(msi->msi_groups); 377 + 378 + kfree(msi->bitmap); 379 + msi->bitmap = NULL; 380 + 381 + xgene_free_domains(msi); 382 + 383 + return 0; 384 + } 385 + 386 + static int xgene_msi_hwirq_alloc(unsigned int cpu) 387 + { 388 + struct xgene_msi *msi = &xgene_msi_ctrl; 389 + struct xgene_msi_group *msi_group; 390 + cpumask_var_t mask; 391 + int i; 392 + int err; 393 + 394 + for (i = cpu; i < NR_HW_IRQS; i += msi->num_cpus) { 395 + msi_group = &msi->msi_groups[i]; 396 + if (!msi_group->gic_irq) 397 + continue; 398 + 399 + irq_set_chained_handler(msi_group->gic_irq, 400 + xgene_msi_isr); 401 + err = irq_set_handler_data(msi_group->gic_irq, msi_group); 402 + if (err) { 403 + pr_err("failed to register GIC IRQ handler\n"); 404 + return -EINVAL; 405 + } 406 + /* 407 + * Statically allocate MSI GIC IRQs to each CPU core. 408 + * With 8-core X-Gene v1, 2 MSI GIC IRQs are allocated 409 + * to each core. 
410 + */ 411 + if (alloc_cpumask_var(&mask, GFP_KERNEL)) { 412 + cpumask_clear(mask); 413 + cpumask_set_cpu(cpu, mask); 414 + err = irq_set_affinity(msi_group->gic_irq, mask); 415 + if (err) 416 + pr_err("failed to set affinity for GIC IRQ"); 417 + free_cpumask_var(mask); 418 + } else { 419 + pr_err("failed to alloc CPU mask for affinity\n"); 420 + err = -EINVAL; 421 + } 422 + 423 + if (err) { 424 + irq_set_chained_handler(msi_group->gic_irq, NULL); 425 + irq_set_handler_data(msi_group->gic_irq, NULL); 426 + return err; 427 + } 428 + } 429 + 430 + return 0; 431 + } 432 + 433 + static void xgene_msi_hwirq_free(unsigned int cpu) 434 + { 435 + struct xgene_msi *msi = &xgene_msi_ctrl; 436 + struct xgene_msi_group *msi_group; 437 + int i; 438 + 439 + for (i = cpu; i < NR_HW_IRQS; i += msi->num_cpus) { 440 + msi_group = &msi->msi_groups[i]; 441 + if (!msi_group->gic_irq) 442 + continue; 443 + 444 + irq_set_chained_handler(msi_group->gic_irq, NULL); 445 + irq_set_handler_data(msi_group->gic_irq, NULL); 446 + } 447 + } 448 + 449 + static int xgene_msi_cpu_callback(struct notifier_block *nfb, 450 + unsigned long action, void *hcpu) 451 + { 452 + unsigned cpu = (unsigned long)hcpu; 453 + 454 + switch (action) { 455 + case CPU_ONLINE: 456 + case CPU_ONLINE_FROZEN: 457 + xgene_msi_hwirq_alloc(cpu); 458 + break; 459 + case CPU_DEAD: 460 + case CPU_DEAD_FROZEN: 461 + xgene_msi_hwirq_free(cpu); 462 + break; 463 + default: 464 + break; 465 + } 466 + 467 + return NOTIFY_OK; 468 + } 469 + 470 + static struct notifier_block xgene_msi_cpu_notifier = { 471 + .notifier_call = xgene_msi_cpu_callback, 472 + }; 473 + 474 + static const struct of_device_id xgene_msi_match_table[] = { 475 + {.compatible = "apm,xgene1-msi"}, 476 + {}, 477 + }; 478 + 479 + static int xgene_msi_probe(struct platform_device *pdev) 480 + { 481 + struct resource *res; 482 + int rc, irq_index; 483 + struct xgene_msi *xgene_msi; 484 + unsigned int cpu; 485 + int virt_msir; 486 + u32 msi_val, msi_idx; 487 + 488 + 
xgene_msi = &xgene_msi_ctrl; 489 + 490 + platform_set_drvdata(pdev, xgene_msi); 491 + 492 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 493 + xgene_msi->msi_regs = devm_ioremap_resource(&pdev->dev, res); 494 + if (IS_ERR(xgene_msi->msi_regs)) { 495 + dev_err(&pdev->dev, "no reg space\n"); 496 + rc = -EINVAL; 497 + goto error; 498 + } 499 + xgene_msi->msi_addr = res->start; 500 + 501 + xgene_msi->num_cpus = num_possible_cpus(); 502 + 503 + rc = xgene_msi_init_allocator(xgene_msi); 504 + if (rc) { 505 + dev_err(&pdev->dev, "Error allocating MSI bitmap\n"); 506 + goto error; 507 + } 508 + 509 + rc = xgene_allocate_domains(xgene_msi); 510 + if (rc) { 511 + dev_err(&pdev->dev, "Failed to allocate MSI domain\n"); 512 + goto error; 513 + } 514 + 515 + for (irq_index = 0; irq_index < NR_HW_IRQS; irq_index++) { 516 + virt_msir = platform_get_irq(pdev, irq_index); 517 + if (virt_msir < 0) { 518 + dev_err(&pdev->dev, "Cannot translate IRQ index %d\n", 519 + irq_index); 520 + rc = -EINVAL; 521 + goto error; 522 + } 523 + xgene_msi->msi_groups[irq_index].gic_irq = virt_msir; 524 + xgene_msi->msi_groups[irq_index].msi_grp = irq_index; 525 + xgene_msi->msi_groups[irq_index].msi = xgene_msi; 526 + } 527 + 528 + /* 529 + * MSInIRx registers are read-to-clear; before registering 530 + * interrupt handlers, read all of them to clear spurious 531 + * interrupts that may occur before the driver is probed. 
532 + */ 533 + for (irq_index = 0; irq_index < NR_HW_IRQS; irq_index++) { 534 + for (msi_idx = 0; msi_idx < IDX_PER_GROUP; msi_idx++) 535 + msi_val = xgene_msi_ir_read(xgene_msi, irq_index, 536 + msi_idx); 537 + /* Read MSIINTn to confirm */ 538 + msi_val = xgene_msi_int_read(xgene_msi, irq_index); 539 + if (msi_val) { 540 + dev_err(&pdev->dev, "Failed to clear spurious IRQ\n"); 541 + rc = -EINVAL; 542 + goto error; 543 + } 544 + } 545 + 546 + cpu_notifier_register_begin(); 547 + 548 + for_each_online_cpu(cpu) 549 + if (xgene_msi_hwirq_alloc(cpu)) { 550 + dev_err(&pdev->dev, "failed to register MSI handlers\n"); 551 + cpu_notifier_register_done(); 552 + goto error; 553 + } 554 + 555 + rc = __register_hotcpu_notifier(&xgene_msi_cpu_notifier); 556 + if (rc) { 557 + dev_err(&pdev->dev, "failed to add CPU MSI notifier\n"); 558 + cpu_notifier_register_done(); 559 + goto error; 560 + } 561 + 562 + cpu_notifier_register_done(); 563 + 564 + xgene_msi->mchip.of_node = pdev->dev.of_node; 565 + rc = of_pci_msi_chip_add(&xgene_msi->mchip); 566 + if (rc) { 567 + dev_err(&pdev->dev, "failed to add MSI controller chip\n"); 568 + goto error_notifier; 569 + } 570 + 571 + dev_info(&pdev->dev, "APM X-Gene PCIe MSI driver loaded\n"); 572 + 573 + return 0; 574 + 575 + error_notifier: 576 + unregister_hotcpu_notifier(&xgene_msi_cpu_notifier); 577 + error: 578 + xgene_msi_remove(pdev); 579 + return rc; 580 + } 581 + 582 + static struct platform_driver xgene_msi_driver = { 583 + .driver = { 584 + .name = "xgene-msi", 585 + .owner = THIS_MODULE, 586 + .of_match_table = xgene_msi_match_table, 587 + }, 588 + .probe = xgene_msi_probe, 589 + .remove = xgene_msi_remove, 590 + }; 591 + 592 + static int __init xgene_pcie_msi_init(void) 593 + { 594 + return platform_driver_register(&xgene_msi_driver); 595 + } 596 + subsys_initcall(xgene_pcie_msi_init);
+62 -4
drivers/pci/host/pci-xgene.c
··· 59 59 #define SZ_1T (SZ_1G*1024ULL) 60 60 #define PIPE_PHY_RATE_RD(src) ((0xc000 & (u32)(src)) >> 0xe) 61 61 62 + #define ROOT_CAP_AND_CTRL 0x5C 63 + 64 + /* PCIe IP version */ 65 + #define XGENE_PCIE_IP_VER_UNKN 0 66 + #define XGENE_PCIE_IP_VER_1 1 67 + 62 68 struct xgene_pcie_port { 63 69 struct device_node *node; 64 70 struct device *dev; ··· 73 67 void __iomem *cfg_base; 74 68 unsigned long cfg_addr; 75 69 bool link_up; 70 + u32 version; 76 71 }; 77 72 78 73 static inline u32 pcie_bar_low_val(u32 addr, u32 flags) ··· 137 130 static void __iomem *xgene_pcie_map_bus(struct pci_bus *bus, unsigned int devfn, 138 131 int offset) 139 132 { 140 - struct xgene_pcie_port *port = bus->sysdata; 141 - 142 - if ((pci_is_root_bus(bus) && devfn != 0) || !port->link_up || 133 + if ((pci_is_root_bus(bus) && devfn != 0) || 143 134 xgene_pcie_hide_rc_bars(bus, offset)) 144 135 return NULL; 145 136 ··· 145 140 return xgene_pcie_get_cfg_base(bus) + offset; 146 141 } 147 142 143 + static int xgene_pcie_config_read32(struct pci_bus *bus, unsigned int devfn, 144 + int where, int size, u32 *val) 145 + { 146 + struct xgene_pcie_port *port = bus->sysdata; 147 + 148 + if (pci_generic_config_read32(bus, devfn, where & ~0x3, 4, val) != 149 + PCIBIOS_SUCCESSFUL) 150 + return PCIBIOS_DEVICE_NOT_FOUND; 151 + 152 + /* 153 + * The v1 controller has a bug in its Configuration Request 154 + * Retry Status (CRS) logic: when CRS is enabled and we read the 155 + * Vendor and Device ID of a non-existent device, the controller 156 + * fabricates return data of 0xFFFF0001 ("device exists but is not 157 + * ready") instead of 0xFFFFFFFF ("device does not exist"). This 158 + * causes the PCI core to retry the read until it times out. 159 + * Avoid this by not claiming to support CRS. 
160 + */ 161 + if (pci_is_root_bus(bus) && (port->version == XGENE_PCIE_IP_VER_1) && 162 + ((where & ~0x3) == ROOT_CAP_AND_CTRL)) 163 + *val &= ~(PCI_EXP_RTCAP_CRSVIS << 16); 164 + 165 + if (size <= 2) 166 + *val = (*val >> (8 * (where & 3))) & ((1 << (size * 8)) - 1); 167 + 168 + return PCIBIOS_SUCCESSFUL; 169 + } 170 + 148 171 static struct pci_ops xgene_pcie_ops = { 149 172 .map_bus = xgene_pcie_map_bus, 150 - .read = pci_generic_config_read32, 173 + .read = xgene_pcie_config_read32, 151 174 .write = pci_generic_config_write32, 152 175 }; 153 176 ··· 501 468 return 0; 502 469 } 503 470 471 + static int xgene_pcie_msi_enable(struct pci_bus *bus) 472 + { 473 + struct device_node *msi_node; 474 + 475 + msi_node = of_parse_phandle(bus->dev.of_node, 476 + "msi-parent", 0); 477 + if (!msi_node) 478 + return -ENODEV; 479 + 480 + bus->msi = of_pci_find_msi_chip_by_node(msi_node); 481 + if (!bus->msi) 482 + return -ENODEV; 483 + 484 + bus->msi->dev = &bus->dev; 485 + return 0; 486 + } 487 + 504 488 static int xgene_pcie_probe_bridge(struct platform_device *pdev) 505 489 { 506 490 struct device_node *dn = pdev->dev.of_node; ··· 532 482 return -ENOMEM; 533 483 port->node = of_node_get(pdev->dev.of_node); 534 484 port->dev = &pdev->dev; 485 + 486 + port->version = XGENE_PCIE_IP_VER_UNKN; 487 + if (of_device_is_compatible(port->node, "apm,xgene-pcie")) 488 + port->version = XGENE_PCIE_IP_VER_1; 535 489 536 490 ret = xgene_pcie_map_reg(port, pdev); 537 491 if (ret) ··· 557 503 &xgene_pcie_ops, port, &res); 558 504 if (!bus) 559 505 return -ENOMEM; 506 + 507 + if (IS_ENABLED(CONFIG_PCI_MSI)) 508 + if (xgene_pcie_msi_enable(bus)) 509 + dev_info(port->dev, "failed to enable MSI\n"); 560 510 561 511 pci_scan_child_bus(bus); 562 512 pci_assign_unassigned_bus_resources(bus);
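The new `xgene_pcie_config_read32()` above always issues an aligned 32-bit read and then synthesizes 1- and 2-byte results by shifting and masking. A self-contained sketch of that extraction step (the helper name is illustrative; the shift/mask expression matches the hunk):

```c
#include <assert.h>
#include <stdint.h>

/* A controller that only supports 32-bit config accesses can still
 * serve byte and word reads: fetch the aligned dword, then shift and
 * mask out the requested bytes. */
static uint32_t extract_cfg_bytes(uint32_t dword, int where, int size)
{
    /* size == 4 must bypass the mask: (1 << 32) would be undefined,
     * which is why the driver guards with "if (size <= 2)" */
    if (size > 2)
        return dword;
    return (dword >> (8 * (where & 3))) & ((1u << (size * 8)) - 1);
}
```

For a dword `0x11223344` (little-endian bytes 0x44 0x33 0x22 0x11), a 1-byte read at offset 0 yields `0x44` and a 2-byte read at offset 2 yields `0x1122`.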
+69 -85
drivers/pci/host/pcie-designware.c
··· 31 31 #define PORT_LINK_MODE_1_LANES (0x1 << 16) 32 32 #define PORT_LINK_MODE_2_LANES (0x3 << 16) 33 33 #define PORT_LINK_MODE_4_LANES (0x7 << 16) 34 + #define PORT_LINK_MODE_8_LANES (0xf << 16) 34 35 35 36 #define PCIE_LINK_WIDTH_SPEED_CONTROL 0x80C 36 37 #define PORT_LOGIC_SPEED_CHANGE (0x1 << 17) ··· 39 38 #define PORT_LOGIC_LINK_WIDTH_1_LANES (0x1 << 8) 40 39 #define PORT_LOGIC_LINK_WIDTH_2_LANES (0x2 << 8) 41 40 #define PORT_LOGIC_LINK_WIDTH_4_LANES (0x4 << 8) 41 + #define PORT_LOGIC_LINK_WIDTH_8_LANES (0x8 << 8) 42 42 43 43 #define PCIE_MSI_ADDR_LO 0x820 44 44 #define PCIE_MSI_ADDR_HI 0x824 ··· 150 148 size, val); 151 149 152 150 return ret; 151 + } 152 + 153 + static void dw_pcie_prog_outbound_atu(struct pcie_port *pp, int index, 154 + int type, u64 cpu_addr, u64 pci_addr, u32 size) 155 + { 156 + dw_pcie_writel_rc(pp, PCIE_ATU_REGION_OUTBOUND | index, 157 + PCIE_ATU_VIEWPORT); 158 + dw_pcie_writel_rc(pp, lower_32_bits(cpu_addr), PCIE_ATU_LOWER_BASE); 159 + dw_pcie_writel_rc(pp, upper_32_bits(cpu_addr), PCIE_ATU_UPPER_BASE); 160 + dw_pcie_writel_rc(pp, lower_32_bits(cpu_addr + size - 1), 161 + PCIE_ATU_LIMIT); 162 + dw_pcie_writel_rc(pp, lower_32_bits(pci_addr), PCIE_ATU_LOWER_TARGET); 163 + dw_pcie_writel_rc(pp, upper_32_bits(pci_addr), PCIE_ATU_UPPER_TARGET); 164 + dw_pcie_writel_rc(pp, type, PCIE_ATU_CR1); 165 + dw_pcie_writel_rc(pp, PCIE_ATU_ENABLE, PCIE_ATU_CR2); 153 166 } 154 167 155 168 static struct irq_chip dw_msi_irq_chip = { ··· 510 493 if (pp->ops->host_init) 511 494 pp->ops->host_init(pp); 512 495 496 + if (!pp->ops->rd_other_conf) 497 + dw_pcie_prog_outbound_atu(pp, PCIE_ATU_REGION_INDEX1, 498 + PCIE_ATU_TYPE_MEM, pp->mem_mod_base, 499 + pp->mem_bus_addr, pp->mem_size); 500 + 513 501 dw_pcie_wr_own_conf(pp, PCI_BASE_ADDRESS_0, 4, 0); 514 502 515 503 /* program correct class for RC */ ··· 537 515 return 0; 538 516 } 539 517 540 - static void dw_pcie_prog_viewport_cfg0(struct pcie_port *pp, u32 busdev) 541 - { 542 - /* Program viewport 0 : 
OUTBOUND : CFG0 */ 543 - dw_pcie_writel_rc(pp, PCIE_ATU_REGION_OUTBOUND | PCIE_ATU_REGION_INDEX0, 544 - PCIE_ATU_VIEWPORT); 545 - dw_pcie_writel_rc(pp, pp->cfg0_mod_base, PCIE_ATU_LOWER_BASE); 546 - dw_pcie_writel_rc(pp, (pp->cfg0_mod_base >> 32), PCIE_ATU_UPPER_BASE); 547 - dw_pcie_writel_rc(pp, pp->cfg0_mod_base + pp->cfg0_size - 1, 548 - PCIE_ATU_LIMIT); 549 - dw_pcie_writel_rc(pp, busdev, PCIE_ATU_LOWER_TARGET); 550 - dw_pcie_writel_rc(pp, 0, PCIE_ATU_UPPER_TARGET); 551 - dw_pcie_writel_rc(pp, PCIE_ATU_TYPE_CFG0, PCIE_ATU_CR1); 552 - dw_pcie_writel_rc(pp, PCIE_ATU_ENABLE, PCIE_ATU_CR2); 553 - } 554 - 555 - static void dw_pcie_prog_viewport_cfg1(struct pcie_port *pp, u32 busdev) 556 - { 557 - /* Program viewport 1 : OUTBOUND : CFG1 */ 558 - dw_pcie_writel_rc(pp, PCIE_ATU_REGION_OUTBOUND | PCIE_ATU_REGION_INDEX1, 559 - PCIE_ATU_VIEWPORT); 560 - dw_pcie_writel_rc(pp, PCIE_ATU_TYPE_CFG1, PCIE_ATU_CR1); 561 - dw_pcie_writel_rc(pp, pp->cfg1_mod_base, PCIE_ATU_LOWER_BASE); 562 - dw_pcie_writel_rc(pp, (pp->cfg1_mod_base >> 32), PCIE_ATU_UPPER_BASE); 563 - dw_pcie_writel_rc(pp, pp->cfg1_mod_base + pp->cfg1_size - 1, 564 - PCIE_ATU_LIMIT); 565 - dw_pcie_writel_rc(pp, busdev, PCIE_ATU_LOWER_TARGET); 566 - dw_pcie_writel_rc(pp, 0, PCIE_ATU_UPPER_TARGET); 567 - dw_pcie_writel_rc(pp, PCIE_ATU_ENABLE, PCIE_ATU_CR2); 568 - } 569 - 570 - static void dw_pcie_prog_viewport_mem_outbound(struct pcie_port *pp) 571 - { 572 - /* Program viewport 0 : OUTBOUND : MEM */ 573 - dw_pcie_writel_rc(pp, PCIE_ATU_REGION_OUTBOUND | PCIE_ATU_REGION_INDEX0, 574 - PCIE_ATU_VIEWPORT); 575 - dw_pcie_writel_rc(pp, PCIE_ATU_TYPE_MEM, PCIE_ATU_CR1); 576 - dw_pcie_writel_rc(pp, pp->mem_mod_base, PCIE_ATU_LOWER_BASE); 577 - dw_pcie_writel_rc(pp, (pp->mem_mod_base >> 32), PCIE_ATU_UPPER_BASE); 578 - dw_pcie_writel_rc(pp, pp->mem_mod_base + pp->mem_size - 1, 579 - PCIE_ATU_LIMIT); 580 - dw_pcie_writel_rc(pp, pp->mem_bus_addr, PCIE_ATU_LOWER_TARGET); 581 - dw_pcie_writel_rc(pp, 
upper_32_bits(pp->mem_bus_addr), 582 - PCIE_ATU_UPPER_TARGET); 583 - dw_pcie_writel_rc(pp, PCIE_ATU_ENABLE, PCIE_ATU_CR2); 584 - } 585 - 586 - static void dw_pcie_prog_viewport_io_outbound(struct pcie_port *pp) 587 - { 588 - /* Program viewport 1 : OUTBOUND : IO */ 589 - dw_pcie_writel_rc(pp, PCIE_ATU_REGION_OUTBOUND | PCIE_ATU_REGION_INDEX1, 590 - PCIE_ATU_VIEWPORT); 591 - dw_pcie_writel_rc(pp, PCIE_ATU_TYPE_IO, PCIE_ATU_CR1); 592 - dw_pcie_writel_rc(pp, pp->io_mod_base, PCIE_ATU_LOWER_BASE); 593 - dw_pcie_writel_rc(pp, (pp->io_mod_base >> 32), PCIE_ATU_UPPER_BASE); 594 - dw_pcie_writel_rc(pp, pp->io_mod_base + pp->io_size - 1, 595 - PCIE_ATU_LIMIT); 596 - dw_pcie_writel_rc(pp, pp->io_bus_addr, PCIE_ATU_LOWER_TARGET); 597 - dw_pcie_writel_rc(pp, upper_32_bits(pp->io_bus_addr), 598 - PCIE_ATU_UPPER_TARGET); 599 - dw_pcie_writel_rc(pp, PCIE_ATU_ENABLE, PCIE_ATU_CR2); 600 - } 601 - 602 518 static int dw_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus, 603 519 u32 devfn, int where, int size, u32 *val) 604 520 { 605 - int ret = PCIBIOS_SUCCESSFUL; 606 - u32 address, busdev; 521 + int ret, type; 522 + u32 address, busdev, cfg_size; 523 + u64 cpu_addr; 524 + void __iomem *va_cfg_base; 607 525 608 526 busdev = PCIE_ATU_BUS(bus->number) | PCIE_ATU_DEV(PCI_SLOT(devfn)) | 609 527 PCIE_ATU_FUNC(PCI_FUNC(devfn)); 610 528 address = where & ~0x3; 611 529 612 530 if (bus->parent->number == pp->root_bus_nr) { 613 - dw_pcie_prog_viewport_cfg0(pp, busdev); 614 - ret = dw_pcie_cfg_read(pp->va_cfg0_base + address, where, size, 615 - val); 616 - dw_pcie_prog_viewport_mem_outbound(pp); 531 + type = PCIE_ATU_TYPE_CFG0; 532 + cpu_addr = pp->cfg0_mod_base; 533 + cfg_size = pp->cfg0_size; 534 + va_cfg_base = pp->va_cfg0_base; 617 535 } else { 618 - dw_pcie_prog_viewport_cfg1(pp, busdev); 619 - ret = dw_pcie_cfg_read(pp->va_cfg1_base + address, where, size, 620 - val); 621 - dw_pcie_prog_viewport_io_outbound(pp); 536 + type = PCIE_ATU_TYPE_CFG1; 537 + cpu_addr = 
pp->cfg1_mod_base; 538 + cfg_size = pp->cfg1_size; 539 + va_cfg_base = pp->va_cfg1_base; 622 540 } 541 + 542 + dw_pcie_prog_outbound_atu(pp, PCIE_ATU_REGION_INDEX0, 543 + type, cpu_addr, 544 + busdev, cfg_size); 545 + ret = dw_pcie_cfg_read(va_cfg_base + address, where, size, val); 546 + dw_pcie_prog_outbound_atu(pp, PCIE_ATU_REGION_INDEX0, 547 + PCIE_ATU_TYPE_IO, pp->io_mod_base, 548 + pp->io_bus_addr, pp->io_size); 623 549 624 550 return ret; 625 551 } ··· 575 605 static int dw_pcie_wr_other_conf(struct pcie_port *pp, struct pci_bus *bus, 576 606 u32 devfn, int where, int size, u32 val) 577 607 { 578 - int ret = PCIBIOS_SUCCESSFUL; 579 - u32 address, busdev; 608 + int ret, type; 609 + u32 address, busdev, cfg_size; 610 + u64 cpu_addr; 611 + void __iomem *va_cfg_base; 580 612 581 613 busdev = PCIE_ATU_BUS(bus->number) | PCIE_ATU_DEV(PCI_SLOT(devfn)) | 582 614 PCIE_ATU_FUNC(PCI_FUNC(devfn)); 583 615 address = where & ~0x3; 584 616 585 617 if (bus->parent->number == pp->root_bus_nr) { 586 - dw_pcie_prog_viewport_cfg0(pp, busdev); 587 - ret = dw_pcie_cfg_write(pp->va_cfg0_base + address, where, size, 588 - val); 589 - dw_pcie_prog_viewport_mem_outbound(pp); 618 + type = PCIE_ATU_TYPE_CFG0; 619 + cpu_addr = pp->cfg0_mod_base; 620 + cfg_size = pp->cfg0_size; 621 + va_cfg_base = pp->va_cfg0_base; 590 622 } else { 591 - dw_pcie_prog_viewport_cfg1(pp, busdev); 592 - ret = dw_pcie_cfg_write(pp->va_cfg1_base + address, where, size, 593 - val); 594 - dw_pcie_prog_viewport_io_outbound(pp); 623 + type = PCIE_ATU_TYPE_CFG1; 624 + cpu_addr = pp->cfg1_mod_base; 625 + cfg_size = pp->cfg1_size; 626 + va_cfg_base = pp->va_cfg1_base; 595 627 } 628 + 629 + dw_pcie_prog_outbound_atu(pp, PCIE_ATU_REGION_INDEX0, 630 + type, cpu_addr, 631 + busdev, cfg_size); 632 + ret = dw_pcie_cfg_write(va_cfg_base + address, where, size, val); 633 + dw_pcie_prog_outbound_atu(pp, PCIE_ATU_REGION_INDEX0, 634 + PCIE_ATU_TYPE_IO, pp->io_mod_base, 635 + pp->io_bus_addr, pp->io_size); 596 636 597 637 return 
ret; 598 638 } ··· 708 728 struct pcie_port *pp = sys_to_pcie(sys); 709 729 710 730 pp->root_bus_nr = sys->busnr; 711 - bus = pci_create_root_bus(pp->dev, sys->busnr, 731 + bus = pci_scan_root_bus(pp->dev, sys->busnr, 712 732 &dw_pcie_ops, sys, &sys->resources); 713 733 if (!bus) 714 734 return NULL; 715 - 716 - pci_scan_child_bus(bus); 717 735 718 736 if (bus && pp->ops->scan_bus) 719 737 pp->ops->scan_bus(pp); ··· 756 778 case 4: 757 779 val |= PORT_LINK_MODE_4_LANES; 758 780 break; 781 + case 8: 782 + val |= PORT_LINK_MODE_8_LANES; 783 + break; 759 784 } 760 785 dw_pcie_writel_rc(pp, val, PCIE_PORT_LINK_CONTROL); 761 786 ··· 774 793 break; 775 794 case 4: 776 795 val |= PORT_LOGIC_LINK_WIDTH_4_LANES; 796 + break; 797 + case 8: 798 + val |= PORT_LOGIC_LINK_WIDTH_8_LANES; 777 799 break; 778 800 } 779 801 dw_pcie_writel_rc(pp, val, PCIE_LINK_WIDTH_SPEED_CONTROL);
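The unified `dw_pcie_prog_outbound_atu()` above replaces four near-identical viewport helpers; it splits 64-bit CPU and PCI addresses into the 32-bit LOWER/UPPER ATU registers and computes the limit as `base + size - 1`. A sketch of that address arithmetic, with the split helpers written out:

```c
#include <assert.h>
#include <stdint.h>

/* Split helpers, as used when programming the 32-bit ATU registers */
static uint32_t lower_32_bits(uint64_t v) { return (uint32_t)v; }
static uint32_t upper_32_bits(uint64_t v) { return (uint32_t)(v >> 32); }

/* The low half of the window's last byte, as written to PCIE_ATU_LIMIT */
static uint32_t atu_limit_lo(uint64_t cpu_addr, uint32_t size)
{
    return lower_32_bits(cpu_addr + size - 1);
}
```

For a window at CPU address `0x1_8000_0000` of 64 KiB, the lower base is `0x80000000`, the upper base is `0x1`, and the limit register gets `0x8000ffff`.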
+110
drivers/pci/host/pcie-iproc-bcma.c
··· 1 + /* 2 + * Copyright (C) 2015 Broadcom Corporation 3 + * Copyright (C) 2015 Hauke Mehrtens <hauke@hauke-m.de> 4 + * 5 + * This program is free software; you can redistribute it and/or 6 + * modify it under the terms of the GNU General Public License as 7 + * published by the Free Software Foundation version 2. 8 + * 9 + * This program is distributed "as is" WITHOUT ANY WARRANTY of any 10 + * kind, whether express or implied; without even the implied warranty 11 + * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 + * GNU General Public License for more details. 13 + */ 14 + 15 + #include <linux/kernel.h> 16 + #include <linux/pci.h> 17 + #include <linux/module.h> 18 + #include <linux/slab.h> 19 + #include <linux/phy/phy.h> 20 + #include <linux/bcma/bcma.h> 21 + #include <linux/ioport.h> 22 + 23 + #include "pcie-iproc.h" 24 + 25 + 26 + /* NS: CLASS field is R/O, and set to wrong 0x200 value */ 27 + static void bcma_pcie2_fixup_class(struct pci_dev *dev) 28 + { 29 + dev->class = PCI_CLASS_BRIDGE_PCI << 8; 30 + } 31 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_BROADCOM, 0x8011, bcma_pcie2_fixup_class); 32 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_BROADCOM, 0x8012, bcma_pcie2_fixup_class); 33 + 34 + static int iproc_pcie_bcma_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 35 + { 36 + struct pci_sys_data *sys = dev->sysdata; 37 + struct iproc_pcie *pcie = sys->private_data; 38 + struct bcma_device *bdev = container_of(pcie->dev, struct bcma_device, dev); 39 + 40 + return bcma_core_irq(bdev, 5); 41 + } 42 + 43 + static int iproc_pcie_bcma_probe(struct bcma_device *bdev) 44 + { 45 + struct iproc_pcie *pcie; 46 + LIST_HEAD(res); 47 + struct resource res_mem; 48 + int ret; 49 + 50 + pcie = devm_kzalloc(&bdev->dev, sizeof(*pcie), GFP_KERNEL); 51 + if (!pcie) 52 + return -ENOMEM; 53 + 54 + pcie->dev = &bdev->dev; 55 + bcma_set_drvdata(bdev, pcie); 56 + 57 + pcie->base = bdev->io_addr; 58 + 59 + res_mem.start = bdev->addr_s[0]; 60 + res_mem.end = 
bdev->addr_s[0] + SZ_128M - 1; 61 + res_mem.name = "PCIe MEM space"; 62 + res_mem.flags = IORESOURCE_MEM; 63 + pci_add_resource(&res, &res_mem); 64 + 65 + pcie->map_irq = iproc_pcie_bcma_map_irq; 66 + 67 + ret = iproc_pcie_setup(pcie, &res); 68 + if (ret) 69 + dev_err(pcie->dev, "PCIe controller setup failed\n"); 70 + 71 + pci_free_resource_list(&res); 72 + 73 + return ret; 74 + } 75 + 76 + static void iproc_pcie_bcma_remove(struct bcma_device *bdev) 77 + { 78 + struct iproc_pcie *pcie = bcma_get_drvdata(bdev); 79 + 80 + iproc_pcie_remove(pcie); 81 + } 82 + 83 + static const struct bcma_device_id iproc_pcie_bcma_table[] = { 84 + BCMA_CORE(BCMA_MANUF_BCM, BCMA_CORE_NS_PCIEG2, BCMA_ANY_REV, BCMA_ANY_CLASS), 85 + {}, 86 + }; 87 + MODULE_DEVICE_TABLE(bcma, iproc_pcie_bcma_table); 88 + 89 + static struct bcma_driver iproc_pcie_bcma_driver = { 90 + .name = KBUILD_MODNAME, 91 + .id_table = iproc_pcie_bcma_table, 92 + .probe = iproc_pcie_bcma_probe, 93 + .remove = iproc_pcie_bcma_remove, 94 + }; 95 + 96 + static int __init iproc_pcie_bcma_init(void) 97 + { 98 + return bcma_driver_register(&iproc_pcie_bcma_driver); 99 + } 100 + module_init(iproc_pcie_bcma_init); 101 + 102 + static void __exit iproc_pcie_bcma_exit(void) 103 + { 104 + bcma_driver_unregister(&iproc_pcie_bcma_driver); 105 + } 106 + module_exit(iproc_pcie_bcma_exit); 107 + 108 + MODULE_AUTHOR("Hauke Mehrtens"); 109 + MODULE_DESCRIPTION("Broadcom iProc PCIe BCMA driver"); 110 + MODULE_LICENSE("GPL v2");
+6 -6
drivers/pci/host/pcie-iproc-platform.c
··· 69 69 return ret; 70 70 } 71 71 72 - pcie->resources = &res; 72 + pcie->map_irq = of_irq_parse_and_map_pci; 73 73 74 - ret = iproc_pcie_setup(pcie); 75 - if (ret) { 74 + ret = iproc_pcie_setup(pcie, &res); 75 + if (ret) 76 76 dev_err(pcie->dev, "PCIe controller setup failed\n"); 77 - return ret; 78 - } 79 77 80 - return 0; 78 + pci_free_resource_list(&res); 79 + 80 + return ret; 81 81 } 82 82 83 83 static int iproc_pcie_pltfm_remove(struct platform_device *pdev)
+3 -3
drivers/pci/host/pcie-iproc.c
··· 183 183 writel(SYS_RC_INTX_MASK, pcie->base + SYS_RC_INTX_EN); 184 184 } 185 185 186 - int iproc_pcie_setup(struct iproc_pcie *pcie) 186 + int iproc_pcie_setup(struct iproc_pcie *pcie, struct list_head *res) 187 187 { 188 188 int ret; 189 189 struct pci_bus *bus; ··· 211 211 pcie->sysdata.private_data = pcie; 212 212 213 213 bus = pci_create_root_bus(pcie->dev, 0, &iproc_pcie_ops, 214 - &pcie->sysdata, pcie->resources); 214 + &pcie->sysdata, res); 215 215 if (!bus) { 216 216 dev_err(pcie->dev, "unable to create PCI root bus\n"); 217 217 ret = -ENOMEM; ··· 229 229 230 230 pci_scan_child_bus(bus); 231 231 pci_assign_unassigned_bus_resources(bus); 232 - pci_fixup_irqs(pci_common_swizzle, of_irq_parse_and_map_pci); 232 + pci_fixup_irqs(pci_common_swizzle, pcie->map_irq); 233 233 pci_bus_add_devices(bus); 234 234 235 235 return 0;
+2 -2
drivers/pci/host/pcie-iproc.h
··· 29 29 struct iproc_pcie { 30 30 struct device *dev; 31 31 void __iomem *base; 32 - struct list_head *resources; 33 32 struct pci_sys_data sysdata; 34 33 struct pci_bus *root_bus; 35 34 struct phy *phy; 36 35 int irqs[IPROC_PCIE_MAX_NUM_IRQS]; 36 + int (*map_irq)(const struct pci_dev *, u8, u8); 37 37 }; 38 38 39 - int iproc_pcie_setup(struct iproc_pcie *pcie); 39 + int iproc_pcie_setup(struct iproc_pcie *pcie, struct list_head *res); 40 40 int iproc_pcie_remove(struct iproc_pcie *pcie); 41 41 42 42 #endif /* _PCIE_IPROC_H */
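The `pcie-iproc.h` change above moves the IRQ mapping into a `map_irq` callback stored in `struct iproc_pcie`, so the BCMA and platform front ends can each inject their own mapper while the core stays generic. A minimal sketch of that callback-injection pattern (all names and values here are illustrative, not the driver's real signatures):

```c
#include <assert.h>

/* Core structure carries a front-end-supplied IRQ mapper */
struct fake_pcie {
    int (*map_irq)(int slot, int pin);
};

/* Platform variant: hypothetical mapping derived from the pin */
static int platform_map_irq(int slot, int pin) { return 32 + pin; }

/* BCMA variant: everything routed to one fixed line, as with
 * bcma_core_irq(bdev, 5) in the new front end */
static int bcma_map_irq(int slot, int pin) { return 5; }

/* Core code just calls whatever was injected */
static int fixup_irq(struct fake_pcie *pcie, int slot, int pin)
{
    return pcie->map_irq(slot, pin);
}
```

The same core `fixup_irq()` then yields different IRQs depending on which front end populated the structure.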
+8 -9
drivers/pci/host/pcie-spear13xx.c
··· 146 146 static int spear13xx_pcie_establish_link(struct pcie_port *pp) 147 147 { 148 148 u32 val; 149 - int count = 0; 150 149 struct spear13xx_pcie *spear13xx_pcie = to_spear13xx_pcie(pp); 151 150 struct pcie_app_reg *app_reg = spear13xx_pcie->app_base; 152 151 u32 exp_cap_off = EXP_CAP_ID_OFFSET; 152 + unsigned int retries; 153 153 154 154 if (dw_pcie_link_up(pp)) { 155 155 dev_err(pp->dev, "link already up\n"); ··· 201 201 &app_reg->app_ctrl_0); 202 202 203 203 /* check if the link is up or not */ 204 - while (!dw_pcie_link_up(pp)) { 205 - mdelay(100); 206 - count++; 207 - if (count == 10) { 208 - dev_err(pp->dev, "link Fail\n"); 209 - return -EINVAL; 204 + for (retries = 0; retries < 10; retries++) { 205 + if (dw_pcie_link_up(pp)) { 206 + dev_info(pp->dev, "link up\n"); 207 + return 0; 210 208 } 209 + mdelay(100); 211 210 } 212 - dev_info(pp->dev, "link up\n"); 213 211 214 - return 0; 212 + dev_err(pp->dev, "link Fail\n"); 213 + return -EINVAL; 215 214 } 216 215 217 216 static irqreturn_t spear13xx_pcie_irq_handler(int irq, void *arg)
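The spear13xx hunk above rewrites an open-coded retry counter into a bounded `for` loop with the success path inside the loop and the failure path after it. A sketch of that control-flow shape, with a stubbed link check standing in for `dw_pcie_link_up()` (the stub and retry budget are assumptions for illustration):

```c
#include <assert.h>
#include <stdbool.h>

/* Stub: pretend the link trains after this many polls */
static int polls_until_up = 3;

static bool fake_link_up(void)
{
    return --polls_until_up <= 0;
}

/* Bounded polling loop, in the shape the patch introduces */
static int establish_link(unsigned int max_retries)
{
    unsigned int retries;

    for (retries = 0; retries < max_retries; retries++) {
        if (fake_link_up())
            return 0;    /* link up: report success inside the loop */
        /* mdelay(100) would go here in the driver */
    }
    return -1;           /* budget exhausted: link failed to come up */
}
```

With a budget of 10 polls, a link that trains on the third poll succeeds; one that never trains falls through to the error return, matching the restructured driver logic.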
-3
drivers/pci/hotplug/Makefile
··· 61 61 pciehp_ctrl.o \ 62 62 pciehp_pci.o \ 63 63 pciehp_hpc.o 64 - ifdef CONFIG_ACPI 65 - pciehp-objs += pciehp_acpi.o 66 - endif 67 64 68 65 shpchp-objs := shpchp_core.o \ 69 66 shpchp_ctrl.o \
+2 -3
drivers/pci/hotplug/acpiphp_glue.c
··· 632 632 { 633 633 struct acpi_device *adev = ACPI_COMPANION(&dev->dev); 634 634 struct pci_bus *bus = dev->subordinate; 635 - bool alive = false; 635 + bool alive = dev->ignore_hotplug; 636 636 637 637 if (adev) { 638 638 acpi_status status; 639 639 unsigned long long sta; 640 640 641 641 status = acpi_evaluate_integer(adev->handle, "_STA", NULL, &sta); 642 - alive = (ACPI_SUCCESS(status) && device_status_valid(sta)) 643 - || dev->ignore_hotplug; 642 + alive = alive || (ACPI_SUCCESS(status) && device_status_valid(sta)); 644 643 } 645 644 if (!alive) 646 645 alive = pci_device_is_present(dev);
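The acpiphp change above hoists `dev->ignore_hotplug` out of the ACPI-companion branch, so it is honored even when the device has no ACPI companion at all. A minimal before/after model of that logic (the struct and field names are hypothetical stand-ins for the real device state):

```c
#include <assert.h>
#include <stdbool.h>

struct fake_dev {
    bool has_companion;   /* stands in for ACPI_COMPANION() != NULL */
    bool sta_ok;          /* what the _STA evaluation would report */
    bool ignore_hotplug;
};

/* Old logic: ignore_hotplug only consulted inside the ACPI branch */
static bool alive_old(const struct fake_dev *d)
{
    bool alive = false;

    if (d->has_companion)
        alive = d->sta_ok || d->ignore_hotplug;
    return alive;
}

/* New logic: ignore_hotplug seeds the result unconditionally */
static bool alive_new(const struct fake_dev *d)
{
    bool alive = d->ignore_hotplug;

    if (d->has_companion)
        alive = alive || d->sta_ok;
    return alive;
}
```

The difference shows up for a device with `ignore_hotplug` set but no ACPI companion: the old logic reported it dead, the new logic keeps it alive.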
+1 -22
drivers/pci/hotplug/pciehp.h
··· 132 132 133 133 int pciehp_sysfs_enable_slot(struct slot *slot); 134 134 int pciehp_sysfs_disable_slot(struct slot *slot); 135 - u8 pciehp_handle_attention_button(struct slot *p_slot); 136 - u8 pciehp_handle_switch_change(struct slot *p_slot); 137 - u8 pciehp_handle_presence_change(struct slot *p_slot); 138 - u8 pciehp_handle_power_fault(struct slot *p_slot); 139 - void pciehp_handle_linkstate_change(struct slot *p_slot); 135 + void pciehp_queue_interrupt_event(struct slot *slot, u32 event_type); 140 136 int pciehp_configure_device(struct slot *p_slot); 141 137 int pciehp_unconfigure_device(struct slot *p_slot); 142 138 void pciehp_queue_pushbutton_work(struct work_struct *work); ··· 163 167 return hotplug_slot_name(slot->hotplug_slot); 164 168 } 165 169 166 - #ifdef CONFIG_ACPI 167 - #include <linux/pci-acpi.h> 168 - 169 - void __init pciehp_acpi_slot_detection_init(void); 170 - int pciehp_acpi_slot_detection_check(struct pci_dev *dev); 171 - 172 - static inline void pciehp_firmware_init(void) 173 - { 174 - pciehp_acpi_slot_detection_init(); 175 - } 176 - #else 177 - #define pciehp_firmware_init() do {} while (0) 178 - static inline int pciehp_acpi_slot_detection_check(struct pci_dev *dev) 179 - { 180 - return 0; 181 - } 182 - #endif /* CONFIG_ACPI */ 183 170 #endif /* _PCIEHP_H */
-137
drivers/pci/hotplug/pciehp_acpi.c
··· 1 - /* 2 - * ACPI related functions for PCI Express Hot Plug driver. 3 - * 4 - * Copyright (C) 2008 Kenji Kaneshige 5 - * Copyright (C) 2008 Fujitsu Limited. 6 - * 7 - * All rights reserved. 8 - * 9 - * This program is free software; you can redistribute it and/or modify 10 - * it under the terms of the GNU General Public License as published by 11 - * the Free Software Foundation; either version 2 of the License, or (at 12 - * your option) any later version. 13 - * 14 - * This program is distributed in the hope that it will be useful, but 15 - * WITHOUT ANY WARRANTY; without even the implied warranty of 16 - * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or 17 - * NON INFRINGEMENT. See the GNU General Public License for more 18 - * details. 19 - * 20 - * You should have received a copy of the GNU General Public License 21 - * along with this program; if not, write to the Free Software 22 - * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 23 - * 24 - */ 25 - 26 - #include <linux/acpi.h> 27 - #include <linux/pci.h> 28 - #include <linux/pci_hotplug.h> 29 - #include <linux/slab.h> 30 - #include <linux/module.h> 31 - #include "pciehp.h" 32 - 33 - #define PCIEHP_DETECT_PCIE (0) 34 - #define PCIEHP_DETECT_ACPI (1) 35 - #define PCIEHP_DETECT_AUTO (2) 36 - #define PCIEHP_DETECT_DEFAULT PCIEHP_DETECT_AUTO 37 - 38 - struct dummy_slot { 39 - u32 number; 40 - struct list_head list; 41 - }; 42 - 43 - static int slot_detection_mode; 44 - static char *pciehp_detect_mode; 45 - module_param(pciehp_detect_mode, charp, 0444); 46 - MODULE_PARM_DESC(pciehp_detect_mode, 47 - "Slot detection mode: pcie, acpi, auto\n" 48 - " pcie - Use PCIe based slot detection\n" 49 - " acpi - Use ACPI for slot detection\n" 50 - " auto(default) - Auto select mode. Use acpi option if duplicate\n" 51 - " slot ids are found. 
Otherwise, use pcie option\n"); 52 - 53 - int pciehp_acpi_slot_detection_check(struct pci_dev *dev) 54 - { 55 - if (slot_detection_mode != PCIEHP_DETECT_ACPI) 56 - return 0; 57 - if (acpi_pci_detect_ejectable(ACPI_HANDLE(&dev->dev))) 58 - return 0; 59 - return -ENODEV; 60 - } 61 - 62 - static int __init parse_detect_mode(void) 63 - { 64 - if (!pciehp_detect_mode) 65 - return PCIEHP_DETECT_DEFAULT; 66 - if (!strcmp(pciehp_detect_mode, "pcie")) 67 - return PCIEHP_DETECT_PCIE; 68 - if (!strcmp(pciehp_detect_mode, "acpi")) 69 - return PCIEHP_DETECT_ACPI; 70 - if (!strcmp(pciehp_detect_mode, "auto")) 71 - return PCIEHP_DETECT_AUTO; 72 - warn("bad specifier '%s' for pciehp_detect_mode. Use default\n", 73 - pciehp_detect_mode); 74 - return PCIEHP_DETECT_DEFAULT; 75 - } 76 - 77 - static int __initdata dup_slot_id; 78 - static int __initdata acpi_slot_detected; 79 - static struct list_head __initdata dummy_slots = LIST_HEAD_INIT(dummy_slots); 80 - 81 - /* Dummy driver for duplicate name detection */ 82 - static int __init dummy_probe(struct pcie_device *dev) 83 - { 84 - u32 slot_cap; 85 - acpi_handle handle; 86 - struct dummy_slot *slot, *tmp; 87 - struct pci_dev *pdev = dev->port; 88 - 89 - pcie_capability_read_dword(pdev, PCI_EXP_SLTCAP, &slot_cap); 90 - slot = kzalloc(sizeof(*slot), GFP_KERNEL); 91 - if (!slot) 92 - return -ENOMEM; 93 - slot->number = (slot_cap & PCI_EXP_SLTCAP_PSN) >> 19; 94 - list_for_each_entry(tmp, &dummy_slots, list) { 95 - if (tmp->number == slot->number) 96 - dup_slot_id++; 97 - } 98 - list_add_tail(&slot->list, &dummy_slots); 99 - handle = ACPI_HANDLE(&pdev->dev); 100 - if (!acpi_slot_detected && acpi_pci_detect_ejectable(handle)) 101 - acpi_slot_detected = 1; 102 - return -ENODEV; /* dummy driver always returns error */ 103 - } 104 - 105 - static struct pcie_port_service_driver __initdata dummy_driver = { 106 - .name = "pciehp_dummy", 107 - .port_type = PCIE_ANY_PORT, 108 - .service = PCIE_PORT_SERVICE_HP, 109 - .probe = dummy_probe, 110 - }; 
111 - 112 - static int __init select_detection_mode(void) 113 - { 114 - struct dummy_slot *slot, *tmp; 115 - 116 - if (pcie_port_service_register(&dummy_driver)) 117 - return PCIEHP_DETECT_ACPI; 118 - pcie_port_service_unregister(&dummy_driver); 119 - list_for_each_entry_safe(slot, tmp, &dummy_slots, list) { 120 - list_del(&slot->list); 121 - kfree(slot); 122 - } 123 - if (acpi_slot_detected && dup_slot_id) 124 - return PCIEHP_DETECT_ACPI; 125 - return PCIEHP_DETECT_PCIE; 126 - } 127 - 128 - void __init pciehp_acpi_slot_detection_init(void) 129 - { 130 - slot_detection_mode = parse_detect_mode(); 131 - if (slot_detection_mode != PCIEHP_DETECT_AUTO) 132 - goto out; 133 - slot_detection_mode = select_detection_mode(); 134 - out: 135 - if (slot_detection_mode == PCIEHP_DETECT_ACPI) 136 - info("Using ACPI for slot detection.\n"); 137 - }
+8 -46
drivers/pci/hotplug/pciehp_core.c
··· 77 77 */ 78 78 static void release_slot(struct hotplug_slot *hotplug_slot) 79 79 { 80 - struct slot *slot = hotplug_slot->private; 81 - 82 - ctrl_dbg(slot->ctrl, "%s: physical_slot = %s\n", 83 - __func__, hotplug_slot_name(hotplug_slot)); 84 - 85 80 kfree(hotplug_slot->ops); 86 81 kfree(hotplug_slot->info); 87 82 kfree(hotplug_slot); ··· 124 129 slot->hotplug_slot = hotplug; 125 130 snprintf(name, SLOT_NAME_SIZE, "%u", PSN(ctrl)); 126 131 127 - ctrl_dbg(ctrl, "Registering domain:bus:dev=%04x:%02x:00 sun=%x\n", 128 - pci_domain_nr(ctrl->pcie->port->subordinate), 129 - ctrl->pcie->port->subordinate->number, PSN(ctrl)); 130 132 retval = pci_hp_register(hotplug, 131 133 ctrl->pcie->port->subordinate, 0, name); 132 134 if (retval) 133 - ctrl_err(ctrl, 134 - "pci_hp_register failed with error %d\n", retval); 135 + ctrl_err(ctrl, "pci_hp_register failed: error %d\n", retval); 135 136 out: 136 137 if (retval) { 137 138 kfree(ops); ··· 149 158 { 150 159 struct slot *slot = hotplug_slot->private; 151 160 152 - ctrl_dbg(slot->ctrl, "%s: physical_slot = %s\n", 153 - __func__, slot_name(slot)); 154 - 155 161 pciehp_set_attention_status(slot, status); 156 162 return 0; 157 163 } ··· 158 170 { 159 171 struct slot *slot = hotplug_slot->private; 160 172 161 - ctrl_dbg(slot->ctrl, "%s: physical_slot = %s\n", 162 - __func__, slot_name(slot)); 163 - 164 173 return pciehp_sysfs_enable_slot(slot); 165 174 } 166 175 ··· 166 181 { 167 182 struct slot *slot = hotplug_slot->private; 168 183 169 - ctrl_dbg(slot->ctrl, "%s: physical_slot = %s\n", 170 - __func__, slot_name(slot)); 171 - 172 184 return pciehp_sysfs_disable_slot(slot); 173 185 } 174 186 175 187 static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value) 176 188 { 177 189 struct slot *slot = hotplug_slot->private; 178 - 179 - ctrl_dbg(slot->ctrl, "%s: physical_slot = %s\n", 180 - __func__, slot_name(slot)); 181 190 182 191 pciehp_get_power_status(slot, value); 183 192 return 0; ··· 181 202 { 182 203 struct slot 
*slot = hotplug_slot->private; 183 204 184 - ctrl_dbg(slot->ctrl, "%s: physical_slot = %s\n", 185 - __func__, slot_name(slot)); 186 - 187 205 pciehp_get_attention_status(slot, value); 188 206 return 0; 189 207 } ··· 188 212 static int get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value) 189 213 { 190 214 struct slot *slot = hotplug_slot->private; 191 - 192 - ctrl_dbg(slot->ctrl, "%s: physical_slot = %s\n", 193 - __func__, slot_name(slot)); 194 215 195 216 pciehp_get_latch_status(slot, value); 196 217 return 0; ··· 197 224 { 198 225 struct slot *slot = hotplug_slot->private; 199 226 200 - ctrl_dbg(slot->ctrl, "%s: physical_slot = %s\n", 201 - __func__, slot_name(slot)); 202 - 203 227 pciehp_get_adapter_status(slot, value); 204 228 return 0; 205 229 } ··· 204 234 static int reset_slot(struct hotplug_slot *hotplug_slot, int probe) 205 235 { 206 236 struct slot *slot = hotplug_slot->private; 207 - 208 - ctrl_dbg(slot->ctrl, "%s: physical_slot = %s\n", 209 - __func__, slot_name(slot)); 210 237 211 238 return pciehp_reset_slot(slot, probe); 212 239 } ··· 215 248 struct slot *slot; 216 249 u8 occupied, poweron; 217 250 218 - if (pciehp_force) 219 - dev_info(&dev->device, 220 - "Bypassing BIOS check for pciehp use on %s\n", 221 - pci_name(dev->port)); 222 - else if (pciehp_acpi_slot_detection_check(dev->port)) 223 - goto err_out_none; 251 + /* If this is not a "hotplug" service, we have no business here. 
*/ 252 + if (dev->service != PCIE_PORT_SERVICE_HP) 253 + return -ENODEV; 224 254 225 255 if (!dev->port->subordinate) { 226 256 /* Can happen if we run out of bus numbers during probe */ 227 257 dev_err(&dev->device, 228 258 "Hotplug bridge without secondary bus, ignoring\n"); 229 - goto err_out_none; 259 + return -ENODEV; 230 260 } 231 261 232 262 ctrl = pcie_init(dev); 233 263 if (!ctrl) { 234 264 dev_err(&dev->device, "Controller initialization failed\n"); 235 - goto err_out_none; 265 + return -ENODEV; 236 266 } 237 267 set_service_data(dev, ctrl); 238 268 ··· 239 275 if (rc == -EBUSY) 240 276 ctrl_warn(ctrl, "Slot already registered by another hotplug driver\n"); 241 277 else 242 - ctrl_err(ctrl, "Slot initialization failed\n"); 278 + ctrl_err(ctrl, "Slot initialization failed (%d)\n", rc); 243 279 goto err_out_release_ctlr; 244 280 } 245 281 246 282 /* Enable events after we have setup the data structures */ 247 283 rc = pcie_init_notification(ctrl); 248 284 if (rc) { 249 - ctrl_err(ctrl, "Notification initialization failed\n"); 285 + ctrl_err(ctrl, "Notification initialization failed (%d)\n", rc); 250 286 goto err_out_free_ctrl_slot; 251 287 } 252 288 ··· 269 305 cleanup_slot(ctrl); 270 306 err_out_release_ctlr: 271 307 pciehp_release_ctrl(ctrl); 272 - err_out_none: 273 308 return -ENODEV; 274 309 } 275 310 ··· 329 366 { 330 367 int retval = 0; 331 368 332 - pciehp_firmware_init(); 333 369 retval = pcie_port_service_register(&hpdriver_portdrv); 334 370 dbg("pcie_port_service_register = %d\n", retval); 335 371 info(DRIVER_DESC " version: " DRIVER_VERSION "\n");
+13 -141
drivers/pci/hotplug/pciehp_ctrl.c
··· 37 37 38 38 static void interrupt_event_handler(struct work_struct *work); 39 39 40 - static int queue_interrupt_event(struct slot *p_slot, u32 event_type) 40 + void pciehp_queue_interrupt_event(struct slot *p_slot, u32 event_type) 41 41 { 42 42 struct event_info *info; 43 43 44 44 info = kmalloc(sizeof(*info), GFP_ATOMIC); 45 - if (!info) 46 - return -ENOMEM; 45 + if (!info) { 46 + ctrl_err(p_slot->ctrl, "dropped event %d (ENOMEM)\n", event_type); 47 + return; 48 + } 47 49 50 + INIT_WORK(&info->work, interrupt_event_handler); 48 51 info->event_type = event_type; 49 52 info->p_slot = p_slot; 50 - INIT_WORK(&info->work, interrupt_event_handler); 51 - 52 53 queue_work(p_slot->wq, &info->work); 53 - 54 - return 0; 55 - } 56 - 57 - u8 pciehp_handle_attention_button(struct slot *p_slot) 58 - { 59 - u32 event_type; 60 - struct controller *ctrl = p_slot->ctrl; 61 - 62 - /* Attention Button Change */ 63 - ctrl_dbg(ctrl, "Attention button interrupt received\n"); 64 - 65 - /* 66 - * Button pressed - See if need to TAKE ACTION!!! 
67 - */ 68 - ctrl_info(ctrl, "Button pressed on Slot(%s)\n", slot_name(p_slot)); 69 - event_type = INT_BUTTON_PRESS; 70 - 71 - queue_interrupt_event(p_slot, event_type); 72 - 73 - return 0; 74 - } 75 - 76 - u8 pciehp_handle_switch_change(struct slot *p_slot) 77 - { 78 - u8 getstatus; 79 - u32 event_type; 80 - struct controller *ctrl = p_slot->ctrl; 81 - 82 - /* Switch Change */ 83 - ctrl_dbg(ctrl, "Switch interrupt received\n"); 84 - 85 - pciehp_get_latch_status(p_slot, &getstatus); 86 - if (getstatus) { 87 - /* 88 - * Switch opened 89 - */ 90 - ctrl_info(ctrl, "Latch open on Slot(%s)\n", slot_name(p_slot)); 91 - event_type = INT_SWITCH_OPEN; 92 - } else { 93 - /* 94 - * Switch closed 95 - */ 96 - ctrl_info(ctrl, "Latch close on Slot(%s)\n", slot_name(p_slot)); 97 - event_type = INT_SWITCH_CLOSE; 98 - } 99 - 100 - queue_interrupt_event(p_slot, event_type); 101 - 102 - return 1; 103 - } 104 - 105 - u8 pciehp_handle_presence_change(struct slot *p_slot) 106 - { 107 - u32 event_type; 108 - u8 presence_save; 109 - struct controller *ctrl = p_slot->ctrl; 110 - 111 - /* Presence Change */ 112 - ctrl_dbg(ctrl, "Presence/Notify input change\n"); 113 - 114 - /* Switch is open, assume a presence change 115 - * Save the presence state 116 - */ 117 - pciehp_get_adapter_status(p_slot, &presence_save); 118 - if (presence_save) { 119 - /* 120 - * Card Present 121 - */ 122 - ctrl_info(ctrl, "Card present on Slot(%s)\n", slot_name(p_slot)); 123 - event_type = INT_PRESENCE_ON; 124 - } else { 125 - /* 126 - * Not Present 127 - */ 128 - ctrl_info(ctrl, "Card not present on Slot(%s)\n", 129 - slot_name(p_slot)); 130 - event_type = INT_PRESENCE_OFF; 131 - } 132 - 133 - queue_interrupt_event(p_slot, event_type); 134 - 135 - return 1; 136 - } 137 - 138 - u8 pciehp_handle_power_fault(struct slot *p_slot) 139 - { 140 - u32 event_type; 141 - struct controller *ctrl = p_slot->ctrl; 142 - 143 - /* power fault */ 144 - ctrl_dbg(ctrl, "Power fault interrupt received\n"); 145 - ctrl_err(ctrl, 
"Power fault on slot %s\n", slot_name(p_slot)); 146 - event_type = INT_POWER_FAULT; 147 - ctrl_info(ctrl, "Power fault bit %x set\n", 0); 148 - queue_interrupt_event(p_slot, event_type); 149 - 150 - return 1; 151 - } 152 - 153 - void pciehp_handle_linkstate_change(struct slot *p_slot) 154 - { 155 - u32 event_type; 156 - struct controller *ctrl = p_slot->ctrl; 157 - 158 - /* Link Status Change */ 159 - ctrl_dbg(ctrl, "Data Link Layer State change\n"); 160 - 161 - if (pciehp_check_link_active(ctrl)) { 162 - ctrl_info(ctrl, "slot(%s): Link Up event\n", 163 - slot_name(p_slot)); 164 - event_type = INT_LINK_UP; 165 - } else { 166 - ctrl_info(ctrl, "slot(%s): Link Down event\n", 167 - slot_name(p_slot)); 168 - event_type = INT_LINK_DOWN; 169 - } 170 - 171 - queue_interrupt_event(p_slot, event_type); 172 54 } 173 55 174 56 /* The following routines constitute the bulk of the ··· 180 298 181 299 switch (info->req) { 182 300 case DISABLE_REQ: 183 - ctrl_dbg(p_slot->ctrl, 184 - "Disabling domain:bus:device=%04x:%02x:00\n", 185 - pci_domain_nr(p_slot->ctrl->pcie->port->subordinate), 186 - p_slot->ctrl->pcie->port->subordinate->number); 187 301 mutex_lock(&p_slot->hotplug_lock); 188 302 pciehp_disable_slot(p_slot); 189 303 mutex_unlock(&p_slot->hotplug_lock); ··· 188 310 mutex_unlock(&p_slot->lock); 189 311 break; 190 312 case ENABLE_REQ: 191 - ctrl_dbg(p_slot->ctrl, 192 - "Enabling domain:bus:device=%04x:%02x:00\n", 193 - pci_domain_nr(p_slot->ctrl->pcie->port->subordinate), 194 - p_slot->ctrl->pcie->port->subordinate->number); 195 313 mutex_lock(&p_slot->hotplug_lock); 196 314 ret = pciehp_enable_slot(p_slot); 197 315 mutex_unlock(&p_slot->hotplug_lock); ··· 290 416 ctrl_info(ctrl, "Button ignore on Slot(%s)\n", slot_name(p_slot)); 291 417 break; 292 418 default: 293 - ctrl_warn(ctrl, "Not a valid state\n"); 419 + ctrl_warn(ctrl, "ignoring invalid state %#x\n", p_slot->state); 294 420 break; 295 421 } 296 422 } ··· 381 507 } 382 508 break; 383 509 default: 384 - 
ctrl_err(ctrl, "Not a valid state on slot(%s)\n", 385 - slot_name(p_slot)); 510 + ctrl_err(ctrl, "ignoring invalid state %#x on slot(%s)\n", 511 + p_slot->state, slot_name(p_slot)); 386 512 kfree(info); 387 513 break; 388 514 } ··· 406 532 pciehp_green_led_off(p_slot); 407 533 break; 408 534 case INT_PRESENCE_ON: 409 - ctrl_dbg(ctrl, "Surprise Insertion\n"); 410 535 handle_surprise_event(p_slot); 411 536 break; 412 537 case INT_PRESENCE_OFF: ··· 413 540 * Regardless of surprise capability, we need to 414 541 * definitely remove a card that has been pulled out! 415 542 */ 416 - ctrl_dbg(ctrl, "Surprise Removal\n"); 417 543 handle_surprise_event(p_slot); 418 544 break; 419 545 case INT_LINK_UP: ··· 519 647 slot_name(p_slot)); 520 648 break; 521 649 default: 522 - ctrl_err(ctrl, "Not a valid state on slot %s\n", 523 - slot_name(p_slot)); 650 + ctrl_err(ctrl, "invalid state %#x on slot %s\n", 651 + p_slot->state, slot_name(p_slot)); 524 652 break; 525 653 } 526 654 mutex_unlock(&p_slot->lock); ··· 554 682 slot_name(p_slot)); 555 683 break; 556 684 default: 557 - ctrl_err(ctrl, "Not a valid state on slot %s\n", 558 - slot_name(p_slot)); 685 + ctrl_err(ctrl, "invalid state %#x on slot %s\n", 686 + p_slot->state, slot_name(p_slot)); 559 687 break; 560 688 } 561 689 mutex_unlock(&p_slot->lock);
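The hunk above collapses the per-event `pciehp_handle_*` helpers into a single `pciehp_queue_interrupt_event()` that allocates an event, queues the work item, and on allocation failure logs and drops the event rather than returning an error the interrupt path could not act on anyway. A minimal userspace sketch of that fire-and-forget queueing pattern (all names here are illustrative stand-ins, not the kernel API):

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the pciehp event types. */
enum event_type { INT_BUTTON_PRESS, INT_LINK_UP, INT_LINK_DOWN };

struct event_info {
	enum event_type type;
	struct event_info *next;
};

static struct event_info *queue_head;

/*
 * Fire-and-forget queueing in the style of pciehp_queue_interrupt_event():
 * the caller (an interrupt handler) cannot usefully propagate -ENOMEM,
 * so a failed allocation is logged and the event is simply dropped.
 */
static void queue_interrupt_event(enum event_type type)
{
	struct event_info *info = malloc(sizeof(*info));

	if (!info) {
		fprintf(stderr, "dropped event %d (ENOMEM)\n", type);
		return;
	}
	info->type = type;
	info->next = queue_head;
	queue_head = info;
}

static int pending_events(void)
{
	int n = 0;

	for (struct event_info *e = queue_head; e; e = e->next)
		n++;
	return n;
}
```

The void return type is the point of the refactor: the old `queue_interrupt_event()` returned `-ENOMEM`, but every caller ignored it.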
+78 -67
drivers/pci/hotplug/pciehp_hpc.c
··· 176 176 jiffies_to_msecs(jiffies - ctrl->cmd_started)); 177 177 } 178 178 179 - /** 180 - * pcie_write_cmd - Issue controller command 181 - * @ctrl: controller to which the command is issued 182 - * @cmd: command value written to slot control register 183 - * @mask: bitmask of slot control register to be modified 184 - */ 185 - static void pcie_write_cmd(struct controller *ctrl, u16 cmd, u16 mask) 179 + static void pcie_do_write_cmd(struct controller *ctrl, u16 cmd, 180 + u16 mask, bool wait) 186 181 { 187 182 struct pci_dev *pdev = ctrl_dev(ctrl); 188 183 u16 slot_ctrl; 189 184 190 185 mutex_lock(&ctrl->ctrl_lock); 191 186 192 - /* Wait for any previous command that might still be in progress */ 187 + /* 188 + * Always wait for any previous command that might still be in progress 189 + */ 193 190 pcie_wait_cmd(ctrl); 194 191 195 192 pcie_capability_read_word(pdev, PCI_EXP_SLTCTL, &slot_ctrl); ··· 198 201 ctrl->cmd_started = jiffies; 199 202 ctrl->slot_ctrl = slot_ctrl; 200 203 204 + /* 205 + * Optionally wait for the hardware to be ready for a new command, 206 + * indicating completion of the above issued command. 
207 + */ 208 + if (wait) 209 + pcie_wait_cmd(ctrl); 210 + 201 211 mutex_unlock(&ctrl->ctrl_lock); 212 + } 213 + 214 + /** 215 + * pcie_write_cmd - Issue controller command 216 + * @ctrl: controller to which the command is issued 217 + * @cmd: command value written to slot control register 218 + * @mask: bitmask of slot control register to be modified 219 + */ 220 + static void pcie_write_cmd(struct controller *ctrl, u16 cmd, u16 mask) 221 + { 222 + pcie_do_write_cmd(ctrl, cmd, mask, true); 223 + } 224 + 225 + /* Same as above without waiting for the hardware to latch */ 226 + static void pcie_write_cmd_nowait(struct controller *ctrl, u16 cmd, u16 mask) 227 + { 228 + pcie_do_write_cmd(ctrl, cmd, mask, false); 202 229 } 203 230 204 231 bool pciehp_check_link_active(struct controller *ctrl) ··· 312 291 ctrl_dbg(ctrl, "%s: lnk_status = %x\n", __func__, lnk_status); 313 292 if ((lnk_status & PCI_EXP_LNKSTA_LT) || 314 293 !(lnk_status & PCI_EXP_LNKSTA_NLW)) { 315 - ctrl_err(ctrl, "Link Training Error occurs\n"); 294 + ctrl_err(ctrl, "link training error: status %#06x\n", 295 + lnk_status); 316 296 return -1; 317 297 } 318 298 ··· 444 422 default: 445 423 return; 446 424 } 447 - pcie_write_cmd(ctrl, slot_cmd, PCI_EXP_SLTCTL_AIC); 425 + pcie_write_cmd_nowait(ctrl, slot_cmd, PCI_EXP_SLTCTL_AIC); 448 426 ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__, 449 427 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, slot_cmd); 450 428 } ··· 456 434 if (!PWR_LED(ctrl)) 457 435 return; 458 436 459 - pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_PWR_IND_ON, PCI_EXP_SLTCTL_PIC); 437 + pcie_write_cmd_nowait(ctrl, PCI_EXP_SLTCTL_PWR_IND_ON, 438 + PCI_EXP_SLTCTL_PIC); 460 439 ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__, 461 440 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, 462 441 PCI_EXP_SLTCTL_PWR_IND_ON); ··· 470 447 if (!PWR_LED(ctrl)) 471 448 return; 472 449 473 - pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF, PCI_EXP_SLTCTL_PIC); 450 + pcie_write_cmd_nowait(ctrl, 
PCI_EXP_SLTCTL_PWR_IND_OFF, 451 + PCI_EXP_SLTCTL_PIC); 474 452 ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__, 475 453 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, 476 454 PCI_EXP_SLTCTL_PWR_IND_OFF); ··· 484 460 if (!PWR_LED(ctrl)) 485 461 return; 486 462 487 - pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_PWR_IND_BLINK, PCI_EXP_SLTCTL_PIC); 463 + pcie_write_cmd_nowait(ctrl, PCI_EXP_SLTCTL_PWR_IND_BLINK, 464 + PCI_EXP_SLTCTL_PIC); 488 465 ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__, 489 466 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, 490 467 PCI_EXP_SLTCTL_PWR_IND_BLINK); ··· 535 510 struct pci_dev *dev; 536 511 struct slot *slot = ctrl->slot; 537 512 u16 detected, intr_loc; 513 + u8 open, present; 514 + bool link; 538 515 539 516 /* 540 517 * In order to guarantee that all interrupt events are ··· 559 532 intr_loc); 560 533 } while (detected); 561 534 562 - ctrl_dbg(ctrl, "%s: intr_loc %x\n", __func__, intr_loc); 535 + ctrl_dbg(ctrl, "pending interrupts %#06x from Slot Status\n", intr_loc); 563 536 564 537 /* Check Command Complete Interrupt Pending */ 565 538 if (intr_loc & PCI_EXP_SLTSTA_CC) { ··· 582 555 return IRQ_HANDLED; 583 556 584 557 /* Check MRL Sensor Changed */ 585 - if (intr_loc & PCI_EXP_SLTSTA_MRLSC) 586 - pciehp_handle_switch_change(slot); 558 + if (intr_loc & PCI_EXP_SLTSTA_MRLSC) { 559 + pciehp_get_latch_status(slot, &open); 560 + ctrl_info(ctrl, "Latch %s on Slot(%s)\n", 561 + open ? "open" : "close", slot_name(slot)); 562 + pciehp_queue_interrupt_event(slot, open ? 
INT_SWITCH_OPEN : 563 + INT_SWITCH_CLOSE); 564 + } 587 565 588 566 /* Check Attention Button Pressed */ 589 - if (intr_loc & PCI_EXP_SLTSTA_ABP) 590 - pciehp_handle_attention_button(slot); 567 + if (intr_loc & PCI_EXP_SLTSTA_ABP) { 568 + ctrl_info(ctrl, "Button pressed on Slot(%s)\n", 569 + slot_name(slot)); 570 + pciehp_queue_interrupt_event(slot, INT_BUTTON_PRESS); 571 + } 591 572 592 573 /* Check Presence Detect Changed */ 593 - if (intr_loc & PCI_EXP_SLTSTA_PDC) 594 - pciehp_handle_presence_change(slot); 574 + if (intr_loc & PCI_EXP_SLTSTA_PDC) { 575 + pciehp_get_adapter_status(slot, &present); 576 + ctrl_info(ctrl, "Card %spresent on Slot(%s)\n", 577 + present ? "" : "not ", slot_name(slot)); 578 + pciehp_queue_interrupt_event(slot, present ? INT_PRESENCE_ON : 579 + INT_PRESENCE_OFF); 580 + } 595 581 596 582 /* Check Power Fault Detected */ 597 583 if ((intr_loc & PCI_EXP_SLTSTA_PFD) && !ctrl->power_fault_detected) { 598 584 ctrl->power_fault_detected = 1; 599 - pciehp_handle_power_fault(slot); 585 + ctrl_err(ctrl, "Power fault on slot %s\n", slot_name(slot)); 586 + pciehp_queue_interrupt_event(slot, INT_POWER_FAULT); 600 587 } 601 588 602 - if (intr_loc & PCI_EXP_SLTSTA_DLLSC) 603 - pciehp_handle_linkstate_change(slot); 589 + if (intr_loc & PCI_EXP_SLTSTA_DLLSC) { 590 + link = pciehp_check_link_active(ctrl); 591 + ctrl_info(ctrl, "slot(%s): Link %s event\n", 592 + slot_name(slot), link ? "Up" : "Down"); 593 + pciehp_queue_interrupt_event(slot, link ? 
INT_LINK_UP : 594 + INT_LINK_DOWN); 595 + } 604 596 605 597 return IRQ_HANDLED; 606 598 } ··· 659 613 PCI_EXP_SLTCTL_HPIE | PCI_EXP_SLTCTL_CCIE | 660 614 PCI_EXP_SLTCTL_DLLSCE); 661 615 662 - pcie_write_cmd(ctrl, cmd, mask); 616 + pcie_write_cmd_nowait(ctrl, cmd, mask); 663 617 ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__, 664 618 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, cmd); 665 619 } ··· 710 664 pci_reset_bridge_secondary_bus(ctrl->pcie->port); 711 665 712 666 pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, stat_mask); 713 - pcie_write_cmd(ctrl, ctrl_mask, ctrl_mask); 667 + pcie_write_cmd_nowait(ctrl, ctrl_mask, ctrl_mask); 714 668 ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__, 715 669 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, ctrl_mask); 716 670 if (pciehp_poll_mode) ··· 770 724 771 725 static inline void dbg_ctrl(struct controller *ctrl) 772 726 { 773 - int i; 774 - u16 reg16; 775 727 struct pci_dev *pdev = ctrl->pcie->port; 728 + u16 reg16; 776 729 777 730 if (!pciehp_debug) 778 731 return; 779 732 780 - ctrl_info(ctrl, "Hotplug Controller:\n"); 781 - ctrl_info(ctrl, " Seg/Bus/Dev/Func/IRQ : %s IRQ %d\n", 782 - pci_name(pdev), pdev->irq); 783 - ctrl_info(ctrl, " Vendor ID : 0x%04x\n", pdev->vendor); 784 - ctrl_info(ctrl, " Device ID : 0x%04x\n", pdev->device); 785 - ctrl_info(ctrl, " Subsystem ID : 0x%04x\n", 786 - pdev->subsystem_device); 787 - ctrl_info(ctrl, " Subsystem Vendor ID : 0x%04x\n", 788 - pdev->subsystem_vendor); 789 - ctrl_info(ctrl, " PCIe Cap offset : 0x%02x\n", 790 - pci_pcie_cap(pdev)); 791 - for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) { 792 - if (!pci_resource_len(pdev, i)) 793 - continue; 794 - ctrl_info(ctrl, " PCI resource [%d] : %pR\n", 795 - i, &pdev->resource[i]); 796 - } 797 733 ctrl_info(ctrl, "Slot Capabilities : 0x%08x\n", ctrl->slot_cap); 798 - ctrl_info(ctrl, " Physical Slot Number : %d\n", PSN(ctrl)); 799 - ctrl_info(ctrl, " Attention Button : %3s\n", 800 - ATTN_BUTTN(ctrl) ? 
"yes" : "no"); 801 - ctrl_info(ctrl, " Power Controller : %3s\n", 802 - POWER_CTRL(ctrl) ? "yes" : "no"); 803 - ctrl_info(ctrl, " MRL Sensor : %3s\n", 804 - MRL_SENS(ctrl) ? "yes" : "no"); 805 - ctrl_info(ctrl, " Attention Indicator : %3s\n", 806 - ATTN_LED(ctrl) ? "yes" : "no"); 807 - ctrl_info(ctrl, " Power Indicator : %3s\n", 808 - PWR_LED(ctrl) ? "yes" : "no"); 809 - ctrl_info(ctrl, " Hot-Plug Surprise : %3s\n", 810 - HP_SUPR_RM(ctrl) ? "yes" : "no"); 811 - ctrl_info(ctrl, " EMI Present : %3s\n", 812 - EMI(ctrl) ? "yes" : "no"); 813 - ctrl_info(ctrl, " Command Completed : %3s\n", 814 - NO_CMD_CMPL(ctrl) ? "no" : "yes"); 815 734 pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &reg16); 816 735 ctrl_info(ctrl, "Slot Status : 0x%04x\n", reg16); 817 736 pcie_capability_read_word(pdev, PCI_EXP_SLTCTL, &reg16); ··· 805 794 806 795 /* Check if Data Link Layer Link Active Reporting is implemented */ 807 796 pcie_capability_read_dword(pdev, PCI_EXP_LNKCAP, &link_cap); 808 - if (link_cap & PCI_EXP_LNKCAP_DLLLARC) { 809 - ctrl_dbg(ctrl, "Link Active Reporting supported\n"); 797 + if (link_cap & PCI_EXP_LNKCAP_DLLLARC) 810 798 ctrl->link_active_reporting = 1; 811 - } 812 799 813 800 /* Clear all remaining event bits in Slot Status register */ 814 801 pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, ··· 814 805 PCI_EXP_SLTSTA_MRLSC | PCI_EXP_SLTSTA_PDC | 815 806 PCI_EXP_SLTSTA_CC | PCI_EXP_SLTSTA_DLLSC); 816 807 817 - ctrl_info(ctrl, "Slot #%d AttnBtn%c AttnInd%c PwrInd%c PwrCtrl%c MRL%c Interlock%c NoCompl%c LLActRep%c\n", 808 + ctrl_info(ctrl, "Slot #%d AttnBtn%c PwrCtrl%c MRL%c AttnInd%c PwrInd%c HotPlug%c Surprise%c Interlock%c NoCompl%c LLActRep%c\n", 818 809 (slot_cap & PCI_EXP_SLTCAP_PSN) >> 19, 819 810 FLAG(slot_cap, PCI_EXP_SLTCAP_ABP), 820 - FLAG(slot_cap, PCI_EXP_SLTCAP_AIP), 821 - FLAG(slot_cap, PCI_EXP_SLTCAP_PIP), 822 811 FLAG(slot_cap, PCI_EXP_SLTCAP_PCP), 823 812 FLAG(slot_cap, PCI_EXP_SLTCAP_MRLSP), 813 + FLAG(slot_cap, PCI_EXP_SLTCAP_AIP), 814 + 
FLAG(slot_cap, PCI_EXP_SLTCAP_PIP), 815 + FLAG(slot_cap, PCI_EXP_SLTCAP_HPC), 816 + FLAG(slot_cap, PCI_EXP_SLTCAP_HPS), 824 817 FLAG(slot_cap, PCI_EXP_SLTCAP_EIP), 825 818 FLAG(slot_cap, PCI_EXP_SLTCAP_NCCS), 826 819 FLAG(link_cap, PCI_EXP_LNKCAP_DLLLARC));
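The pciehp_hpc.c hunks split command issue into a core `pcie_do_write_cmd()` with a `wait` flag, plus two thin wrappers: `pcie_write_cmd()` (wait for the hardware to latch) and `pcie_write_cmd_nowait()` (for slow indicator/LED writes where completion can be deferred to the next command). A simplified sketch of that wrapper pattern, with the register and completion tracking simulated rather than read from real config space:

```c
#include <stdbool.h>

/* Simulated Slot Control register and completion state (illustration only). */
static unsigned short slot_ctrl;
static bool cmd_completed;

static void wait_cmd(void)
{
	cmd_completed = true;	/* real code polls/sleeps on Command Completed */
}

/*
 * Core writer in the style of pcie_do_write_cmd(): always drain any
 * previous command first, do a read-modify-write of the register, then
 * optionally wait for the newly issued command to complete.
 */
static void do_write_cmd(unsigned short cmd, unsigned short mask, bool wait)
{
	wait_cmd();		/* previous command, if any */
	cmd_completed = false;

	slot_ctrl &= ~mask;
	slot_ctrl |= (cmd & mask);

	if (wait)
		wait_cmd();	/* caller needs completion before returning */
}

static void write_cmd(unsigned short cmd, unsigned short mask)
{
	do_write_cmd(cmd, mask, true);
}

/* Same as above, but lets the command complete asynchronously. */
static void write_cmd_nowait(unsigned short cmd, unsigned short mask)
{
	do_write_cmd(cmd, mask, false);
}
```

This matches how the hunks convert the indicator helpers (`pciehp_set_attention_status()`, `pciehp_green_led_*()`) to the `_nowait` variant while keeping the blocking behavior where callers depend on it.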
+10 -43
drivers/pci/msi.c
··· 185 185 return default_restore_msi_irqs(dev); 186 186 } 187 187 188 - static void msi_set_enable(struct pci_dev *dev, int enable) 189 - { 190 - u16 control; 191 - 192 - pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &control); 193 - control &= ~PCI_MSI_FLAGS_ENABLE; 194 - if (enable) 195 - control |= PCI_MSI_FLAGS_ENABLE; 196 - pci_write_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, control); 197 - } 198 - 199 - static void msix_clear_and_set_ctrl(struct pci_dev *dev, u16 clear, u16 set) 200 - { 201 - u16 ctrl; 202 - 203 - pci_read_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, &ctrl); 204 - ctrl &= ~clear; 205 - ctrl |= set; 206 - pci_write_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, ctrl); 207 - } 208 - 209 188 static inline __attribute_const__ u32 msi_mask(unsigned x) 210 189 { 211 190 /* Don't shift by >= width of type */ ··· 431 452 entry = irq_get_msi_desc(dev->irq); 432 453 433 454 pci_intx_for_msi(dev, 0); 434 - msi_set_enable(dev, 0); 455 + pci_msi_set_enable(dev, 0); 435 456 arch_restore_msi_irqs(dev); 436 457 437 458 pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &control); ··· 452 473 453 474 /* route the table */ 454 475 pci_intx_for_msi(dev, 0); 455 - msix_clear_and_set_ctrl(dev, 0, 476 + pci_msix_clear_and_set_ctrl(dev, 0, 456 477 PCI_MSIX_FLAGS_ENABLE | PCI_MSIX_FLAGS_MASKALL); 457 478 458 479 arch_restore_msi_irqs(dev); 459 480 list_for_each_entry(entry, &dev->msi_list, list) 460 481 msix_mask_irq(entry, entry->masked); 461 482 462 - msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL, 0); 483 + pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL, 0); 463 484 } 464 485 465 486 void pci_restore_msi_state(struct pci_dev *dev) ··· 626 647 int ret; 627 648 unsigned mask; 628 649 629 - msi_set_enable(dev, 0); /* Disable MSI during set up */ 650 + pci_msi_set_enable(dev, 0); /* Disable MSI during set up */ 630 651 631 652 entry = msi_setup_entry(dev, nvec); 632 653 if (!entry) ··· 662 683 663 684 /* Set MSI enabled bits 
*/ 664 685 pci_intx_for_msi(dev, 0); 665 - msi_set_enable(dev, 1); 686 + pci_msi_set_enable(dev, 1); 666 687 dev->msi_enabled = 1; 667 688 668 689 dev->irq = entry->irq; ··· 754 775 void __iomem *base; 755 776 756 777 /* Ensure MSI-X is disabled while it is set up */ 757 - msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_ENABLE, 0); 778 + pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_ENABLE, 0); 758 779 759 780 pci_read_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, &control); 760 781 /* Request & Map MSI-X table region */ ··· 780 801 * MSI-X registers. We need to mask all the vectors to prevent 781 802 * interrupts coming in before they're fully set up. 782 803 */ 783 - msix_clear_and_set_ctrl(dev, 0, 804 + pci_msix_clear_and_set_ctrl(dev, 0, 784 805 PCI_MSIX_FLAGS_MASKALL | PCI_MSIX_FLAGS_ENABLE); 785 806 786 807 msix_program_entries(dev, entries); ··· 793 814 pci_intx_for_msi(dev, 0); 794 815 dev->msix_enabled = 1; 795 816 796 - msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL, 0); 817 + pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL, 0); 797 818 798 819 return 0; 799 820 ··· 898 919 BUG_ON(list_empty(&dev->msi_list)); 899 920 desc = list_first_entry(&dev->msi_list, struct msi_desc, list); 900 921 901 - msi_set_enable(dev, 0); 922 + pci_msi_set_enable(dev, 0); 902 923 pci_intx_for_msi(dev, 1); 903 924 dev->msi_enabled = 0; 904 925 ··· 1006 1027 __pci_msix_desc_mask_irq(entry, 1); 1007 1028 } 1008 1029 1009 - msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_ENABLE, 0); 1030 + pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_ENABLE, 0); 1010 1031 pci_intx_for_msi(dev, 1); 1011 1032 dev->msix_enabled = 0; 1012 1033 } ··· 1041 1062 void pci_msi_init_pci_dev(struct pci_dev *dev) 1042 1063 { 1043 1064 INIT_LIST_HEAD(&dev->msi_list); 1044 - 1045 - /* Disable the msi hardware to avoid screaming interrupts 1046 - * during boot. This is the power on reset default so 1047 - * usually this should be a noop. 
1048 - */ 1049 - dev->msi_cap = pci_find_capability(dev, PCI_CAP_ID_MSI); 1050 - if (dev->msi_cap) 1051 - msi_set_enable(dev, 0); 1052 - 1053 - dev->msix_cap = pci_find_capability(dev, PCI_CAP_ID_MSIX); 1054 - if (dev->msix_cap) 1055 - msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_ENABLE, 0); 1056 1065 } 1057 1066 1058 1067 /**
+11 -33
drivers/pci/pci.c
··· 3101 3101 } 3102 3102 EXPORT_SYMBOL_GPL(pci_check_and_unmask_intx); 3103 3103 3104 - /** 3105 - * pci_msi_off - disables any MSI or MSI-X capabilities 3106 - * @dev: the PCI device to operate on 3107 - * 3108 - * If you want to use MSI, see pci_enable_msi() and friends. 3109 - * This is a lower-level primitive that allows us to disable 3110 - * MSI operation at the device level. 3111 - */ 3112 - void pci_msi_off(struct pci_dev *dev) 3113 - { 3114 - int pos; 3115 - u16 control; 3116 - 3117 - /* 3118 - * This looks like it could go in msi.c, but we need it even when 3119 - * CONFIG_PCI_MSI=n. For the same reason, we can't use 3120 - * dev->msi_cap or dev->msix_cap here. 3121 - */ 3122 - pos = pci_find_capability(dev, PCI_CAP_ID_MSI); 3123 - if (pos) { 3124 - pci_read_config_word(dev, pos + PCI_MSI_FLAGS, &control); 3125 - control &= ~PCI_MSI_FLAGS_ENABLE; 3126 - pci_write_config_word(dev, pos + PCI_MSI_FLAGS, control); 3127 - } 3128 - pos = pci_find_capability(dev, PCI_CAP_ID_MSIX); 3129 - if (pos) { 3130 - pci_read_config_word(dev, pos + PCI_MSIX_FLAGS, &control); 3131 - control &= ~PCI_MSIX_FLAGS_ENABLE; 3132 - pci_write_config_word(dev, pos + PCI_MSIX_FLAGS, control); 3133 - } 3134 - } 3135 - EXPORT_SYMBOL_GPL(pci_msi_off); 3136 - 3137 3104 int pci_set_dma_max_seg_size(struct pci_dev *dev, unsigned int size) 3138 3105 { 3139 3106 return dma_set_max_seg_size(&dev->dev, size); ··· 4290 4323 return pci_bus_read_dev_vendor_id(pdev->bus, pdev->devfn, &v, 0); 4291 4324 } 4292 4325 EXPORT_SYMBOL_GPL(pci_device_is_present); 4326 + 4327 + void pci_ignore_hotplug(struct pci_dev *dev) 4328 + { 4329 + struct pci_dev *bridge = dev->bus->self; 4330 + 4331 + dev->ignore_hotplug = 1; 4332 + /* Propagate the "ignore hotplug" setting to the parent bridge. 
*/ 4333 + if (bridge) 4334 + bridge->ignore_hotplug = 1; 4335 + } 4336 + EXPORT_SYMBOL_GPL(pci_ignore_hotplug); 4293 4337 4294 4338 #define RESOURCE_ALIGNMENT_PARAM_SIZE COMMAND_LINE_SIZE 4295 4339 static char resource_alignment_param[RESOURCE_ALIGNMENT_PARAM_SIZE] = {0};
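The new `pci_ignore_hotplug()` above propagates the flag to the parent bridge because hotplug events for a device are reported on, and handled by, the port above it. A self-contained sketch of that propagation with minimal stand-in structures (not the real `struct pci_dev`):

```c
#include <stddef.h>

/* Minimal stand-ins for struct pci_bus / struct pci_dev (illustration). */
struct pci_dev;
struct pci_bus { struct pci_dev *self; };	/* bridge leading to this bus */
struct pci_dev { struct pci_bus *bus; int ignore_hotplug; };

/*
 * In the style of pci_ignore_hotplug(): mark both the device and the
 * bridge above it, since the hotplug controller sits on the bridge.
 */
static void ignore_hotplug(struct pci_dev *dev)
{
	struct pci_dev *bridge = dev->bus->self;

	dev->ignore_hotplug = 1;
	if (bridge)
		bridge->ignore_hotplug = 1;
}
```

Without the bridge propagation, a pciehp check on the downstream port would not see the flag set on the child device, which is the gap the "Propagate the 'ignore hotplug' setting to parent" change closes.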
+21 -11
drivers/pci/pci.h
··· 146 146 static inline void pci_msi_init_pci_dev(struct pci_dev *dev) { } 147 147 #endif 148 148 149 + static inline void pci_msi_set_enable(struct pci_dev *dev, int enable) 150 + { 151 + u16 control; 152 + 153 + pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &control); 154 + control &= ~PCI_MSI_FLAGS_ENABLE; 155 + if (enable) 156 + control |= PCI_MSI_FLAGS_ENABLE; 157 + pci_write_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, control); 158 + } 159 + 160 + static inline void pci_msix_clear_and_set_ctrl(struct pci_dev *dev, u16 clear, u16 set) 161 + { 162 + u16 ctrl; 163 + 164 + pci_read_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, &ctrl); 165 + ctrl &= ~clear; 166 + ctrl |= set; 167 + pci_write_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, ctrl); 168 + } 169 + 149 170 void pci_realloc_get_opt(char *); 150 171 151 172 static inline int pci_no_d1d2(struct pci_dev *dev) ··· 236 215 struct list_head *realloc_head, 237 216 struct list_head *fail_head); 238 217 bool pci_bus_clip_resource(struct pci_dev *dev, int idx); 239 - 240 - /** 241 - * pci_ari_enabled - query ARI forwarding status 242 - * @bus: the PCI bus 243 - * 244 - * Returns 1 if ARI forwarding is enabled, or 0 if not enabled; 245 - */ 246 - static inline int pci_ari_enabled(struct pci_bus *bus) 247 - { 248 - return bus->self && bus->self->ari_enabled; 249 - } 250 218 251 219 void pci_reassigndev_resource_alignment(struct pci_dev *dev); 252 220 void pci_disable_bridge_window(struct pci_dev *dev);
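The helpers moved into pci.h follow the kernel's clear-then-set read-modify-write idiom for config-space control words: clear the requested bits first, then set the requested bits, so one call can flip bits in both directions. A standalone sketch against a simulated register (the real helpers go through `pci_read_config_word()`/`pci_write_config_word()`):

```c
#include <stdint.h>

/* Simulated MSI-X Message Control word (illustration only). */
static uint16_t msix_ctrl;

#define MSIX_FLAGS_ENABLE  0x8000
#define MSIX_FLAGS_MASKALL 0x4000

/*
 * Read-modify-write in the style of pci_msix_clear_and_set_ctrl():
 * clear before set, so "set" wins when a bit appears in both masks.
 */
static void clear_and_set_ctrl(uint16_t clear, uint16_t set)
{
	uint16_t ctrl = msix_ctrl;

	ctrl &= ~clear;
	ctrl |= set;
	msix_ctrl = ctrl;
}
```

The same shape covers `pci_msi_set_enable()`, which is just this helper specialized to the single `PCI_MSI_FLAGS_ENABLE` bit.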
+1 -2
drivers/pci/pcie/aer/aerdrv_core.c
··· 425 425 426 426 if (driver && driver->reset_link) { 427 427 status = driver->reset_link(udev); 428 - } else if (pci_pcie_type(udev) == PCI_EXP_TYPE_DOWNSTREAM || 429 - pci_pcie_type(udev) == PCI_EXP_TYPE_ROOT_PORT) { 428 + } else if (udev->has_secondary_link) { 430 429 status = default_reset_link(udev); 431 430 } else { 432 431 dev_printk(KERN_DEBUG, &dev->dev,
+23 -34
drivers/pci/pcie/aspm.c
··· 127 127 { 128 128 struct pci_dev *child; 129 129 struct pci_bus *linkbus = link->pdev->subordinate; 130 + u32 val = enable ? PCI_EXP_LNKCTL_CLKREQ_EN : 0; 130 131 131 - list_for_each_entry(child, &linkbus->devices, bus_list) { 132 - if (enable) 133 - pcie_capability_set_word(child, PCI_EXP_LNKCTL, 134 - PCI_EXP_LNKCTL_CLKREQ_EN); 135 - else 136 - pcie_capability_clear_word(child, PCI_EXP_LNKCTL, 137 - PCI_EXP_LNKCTL_CLKREQ_EN); 138 - } 132 + list_for_each_entry(child, &linkbus->devices, bus_list) 133 + pcie_capability_clear_and_set_word(child, PCI_EXP_LNKCTL, 134 + PCI_EXP_LNKCTL_CLKREQ_EN, 135 + val); 139 136 link->clkpm_enabled = !!enable; 140 137 } 141 138 ··· 522 525 INIT_LIST_HEAD(&link->children); 523 526 INIT_LIST_HEAD(&link->link); 524 527 link->pdev = pdev; 525 - if (pci_pcie_type(pdev) == PCI_EXP_TYPE_DOWNSTREAM) { 528 + if (pci_pcie_type(pdev) != PCI_EXP_TYPE_ROOT_PORT) { 526 529 struct pcie_link_state *parent; 527 530 parent = pdev->bus->parent->self->link_state; 528 531 if (!parent) { ··· 556 559 if (!aspm_support_enabled) 557 560 return; 558 561 559 - if (!pci_is_pcie(pdev) || pdev->link_state) 562 + if (pdev->link_state) 560 563 return; 561 - if (pci_pcie_type(pdev) != PCI_EXP_TYPE_ROOT_PORT && 562 - pci_pcie_type(pdev) != PCI_EXP_TYPE_DOWNSTREAM) 564 + 565 + /* 566 + * We allocate pcie_link_state for the component on the upstream 567 + * end of a Link, so there's nothing to do unless this device has a 568 + * Link on its secondary side. 
569 + */ 570 + if (!pdev->has_secondary_link) 563 571 return; 564 572 565 573 /* VIA has a strange chipset, root port is under a bridge */ ··· 677 675 { 678 676 struct pcie_link_state *link = pdev->link_state; 679 677 680 - if (aspm_disabled || !pci_is_pcie(pdev) || !link) 681 - return; 682 - if ((pci_pcie_type(pdev) != PCI_EXP_TYPE_ROOT_PORT) && 683 - (pci_pcie_type(pdev) != PCI_EXP_TYPE_DOWNSTREAM)) 678 + if (aspm_disabled || !link) 684 679 return; 685 680 /* 686 681 * Devices changed PM state, we should recheck if latency ··· 695 696 { 696 697 struct pcie_link_state *link = pdev->link_state; 697 698 698 - if (aspm_disabled || !pci_is_pcie(pdev) || !link) 699 + if (aspm_disabled || !link) 699 700 return; 700 701 701 702 if (aspm_policy != POLICY_POWERSAVE) 702 - return; 703 - 704 - if ((pci_pcie_type(pdev) != PCI_EXP_TYPE_ROOT_PORT) && 705 - (pci_pcie_type(pdev) != PCI_EXP_TYPE_DOWNSTREAM)) 706 703 return; 707 704 708 705 down_read(&pci_bus_sem); ··· 709 714 up_read(&pci_bus_sem); 710 715 } 711 716 712 - static void __pci_disable_link_state(struct pci_dev *pdev, int state, bool sem, 713 - bool force) 717 + static void __pci_disable_link_state(struct pci_dev *pdev, int state, bool sem) 714 718 { 715 719 struct pci_dev *parent = pdev->bus->self; 716 720 struct pcie_link_state *link; ··· 717 723 if (!pci_is_pcie(pdev)) 718 724 return; 719 725 720 - if (pci_pcie_type(pdev) == PCI_EXP_TYPE_ROOT_PORT || 721 - pci_pcie_type(pdev) == PCI_EXP_TYPE_DOWNSTREAM) 726 + if (pdev->has_secondary_link) 722 727 parent = pdev; 723 728 if (!parent || !parent->link_state) 724 729 return; ··· 730 737 * a similar mechanism using "PciASPMOptOut", which is also 731 738 * ignored in this situation. 
732 739 */ 733 - if (aspm_disabled && !force) { 740 + if (aspm_disabled) { 734 741 dev_warn(&pdev->dev, "can't disable ASPM; OS doesn't have ASPM control\n"); 735 742 return; 736 743 } ··· 756 763 757 764 void pci_disable_link_state_locked(struct pci_dev *pdev, int state) 758 765 { 759 - __pci_disable_link_state(pdev, state, false, false); 766 + __pci_disable_link_state(pdev, state, false); 760 767 } 761 768 EXPORT_SYMBOL(pci_disable_link_state_locked); 762 769 ··· 771 778 */ 772 779 void pci_disable_link_state(struct pci_dev *pdev, int state) 773 780 { 774 - __pci_disable_link_state(pdev, state, true, false); 781 + __pci_disable_link_state(pdev, state, true); 775 782 } 776 783 EXPORT_SYMBOL(pci_disable_link_state); 777 784 ··· 900 907 { 901 908 struct pcie_link_state *link_state = pdev->link_state; 902 909 903 - if (!pci_is_pcie(pdev) || 904 - (pci_pcie_type(pdev) != PCI_EXP_TYPE_ROOT_PORT && 905 - pci_pcie_type(pdev) != PCI_EXP_TYPE_DOWNSTREAM) || !link_state) 910 + if (!link_state) 906 911 return; 907 912 908 913 if (link_state->aspm_support) ··· 915 924 { 916 925 struct pcie_link_state *link_state = pdev->link_state; 917 926 918 - if (!pci_is_pcie(pdev) || 919 - (pci_pcie_type(pdev) != PCI_EXP_TYPE_ROOT_PORT && 920 - pci_pcie_type(pdev) != PCI_EXP_TYPE_DOWNSTREAM) || !link_state) 927 + if (!link_state) 921 928 return; 922 929 923 930 if (link_state->aspm_support)
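The Clock PM simplification at the top of the aspm.c hunks replaces a per-device if/else between `pcie_capability_set_word()` and `pcie_capability_clear_word()` with a single computed value applied through one clear-and-set call. A small sketch of that consolidation over a simulated bus of three Link Control registers (sizes and constants illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

#define LNKCTL_CLKREQ_EN 0x0100

/* Simulated Link Control registers for the devices on one link. */
static uint16_t lnkctl[3];

static void clear_and_set_word(uint16_t *reg, uint16_t clear, uint16_t set)
{
	*reg = (*reg & ~clear) | set;
}

/*
 * In the style of the reworked pcie_set_clkpm_nocheck(): compute the
 * target bit value once, then apply the identical clear-and-set to
 * every device, instead of branching per device.
 */
static void set_clkpm(bool enable)
{
	uint16_t val = enable ? LNKCTL_CLKREQ_EN : 0;

	for (int i = 0; i < 3; i++)
		clear_and_set_word(&lnkctl[i], LNKCTL_CLKREQ_EN, val);
}
```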
+43 -26
drivers/pci/probe.c
···
 	}

 	if (res->flags & IORESOURCE_MEM_64) {
-		if ((sizeof(dma_addr_t) < 8 || sizeof(resource_size_t) < 8) &&
-		    sz64 > 0x100000000ULL) {
+		if ((sizeof(pci_bus_addr_t) < 8 || sizeof(resource_size_t) < 8)
+		    && sz64 > 0x100000000ULL) {
 			res->flags |= IORESOURCE_UNSET | IORESOURCE_DISABLED;
 			res->start = 0;
 			res->end = 0;
···
 			goto out;
 		}

-		if ((sizeof(dma_addr_t) < 8) && l) {
+		if ((sizeof(pci_bus_addr_t) < 8) && l) {
 			/* Above 32-bit boundary; try to reallocate */
 			res->flags |= IORESOURCE_UNSET;
 			res->start = 0;
···
 	struct pci_dev *dev = child->self;
 	u16 mem_base_lo, mem_limit_lo;
 	u64 base64, limit64;
-	dma_addr_t base, limit;
+	pci_bus_addr_t base, limit;
 	struct pci_bus_region region;
 	struct resource *res;
···
 		}
 	}

-	base = (dma_addr_t) base64;
-	limit = (dma_addr_t) limit64;
+	base = (pci_bus_addr_t) base64;
+	limit = (pci_bus_addr_t) limit64;

 	if (base != base64) {
 		dev_err(&dev->dev, "can't handle bridge window above 4GB (bus address %#010llx)\n",
···
 {
 	int pos;
 	u16 reg16;
+	int type;
+	struct pci_dev *parent;

 	pos = pci_find_capability(pdev, PCI_CAP_ID_EXP);
 	if (!pos)
···
 	pdev->pcie_flags_reg = reg16;
 	pci_read_config_word(pdev, pos + PCI_EXP_DEVCAP, &reg16);
 	pdev->pcie_mpss = reg16 & PCI_EXP_DEVCAP_PAYLOAD;
+
+	/*
+	 * A Root Port is always the upstream end of a Link.  No PCIe
+	 * component has two Links.  Two Links are connected by a Switch
+	 * that has a Port on each Link and internal logic to connect the
+	 * two Ports.
+	 */
+	type = pci_pcie_type(pdev);
+	if (type == PCI_EXP_TYPE_ROOT_PORT)
+		pdev->has_secondary_link = 1;
+	else if (type == PCI_EXP_TYPE_UPSTREAM ||
+		 type == PCI_EXP_TYPE_DOWNSTREAM) {
+		parent = pci_upstream_bridge(pdev);
+		if (!parent->has_secondary_link)
+			pdev->has_secondary_link = 1;
+	}
 }

 void set_pcie_hotplug_bridge(struct pci_dev *pdev)
···

 #define LEGACY_IO_RESOURCE	(IORESOURCE_IO | IORESOURCE_PCI_FIXED)

+static void pci_msi_setup_pci_dev(struct pci_dev *dev)
+{
+	/*
+	 * Disable the MSI hardware to avoid screaming interrupts
+	 * during boot.  This is the power on reset default so
+	 * usually this should be a noop.
+	 */
+	dev->msi_cap = pci_find_capability(dev, PCI_CAP_ID_MSI);
+	if (dev->msi_cap)
+		pci_msi_set_enable(dev, 0);
+
+	dev->msix_cap = pci_find_capability(dev, PCI_CAP_ID_MSIX);
+	if (dev->msix_cap)
+		pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_ENABLE, 0);
+}
+
 /**
  * pci_setup_device - fill in class and map information of a device
  * @dev: the device structure to fill
···

 	/* "Unknown power state" */
 	dev->current_state = PCI_UNKNOWN;
+
+	pci_msi_setup_pci_dev(dev);

 	/* Early fixups, before probing the BARs */
 	pci_fixup_device(pci_fixup_early, dev);
···
 		return 0;
 	if (pci_pcie_type(parent) == PCI_EXP_TYPE_ROOT_PORT)
 		return 1;
-	if (pci_pcie_type(parent) == PCI_EXP_TYPE_DOWNSTREAM &&
+	if (parent->has_secondary_link &&
 	    !pci_has_flag(PCI_SCAN_ALL_PCIE_DEVS))
 		return 1;
 	return 0;
···
 	return b;
 }
 EXPORT_SYMBOL(pci_scan_root_bus);
-
-/* Deprecated; use pci_scan_root_bus() instead */
-struct pci_bus *pci_scan_bus_parented(struct device *parent,
-		int bus, struct pci_ops *ops, void *sysdata)
-{
-	LIST_HEAD(resources);
-	struct pci_bus *b;
-
-	pci_add_resource(&resources, &ioport_resource);
-	pci_add_resource(&resources, &iomem_resource);
-	pci_add_resource(&resources, &busn_resource);
-	b = pci_create_root_bus(parent, bus, ops, sysdata, &resources);
-	if (b)
-		pci_scan_child_bus(b);
-	else
-		pci_free_resource_list(&resources);
-	return b;
-}
-EXPORT_SYMBOL(pci_scan_bus_parented);

 struct pci_bus *pci_scan_bus(int bus, struct pci_ops *ops,
 					void *sysdata)
drivers/pci/quirks.c (+4 -2)

···

 static void quirk_pcie_mch(struct pci_dev *pdev)
 {
-	pci_msi_off(pdev);
 	pdev->no_msi = 1;
 }
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_E7520_MCH, quirk_pcie_mch);
···
  */
 static void quirk_pcie_pxh(struct pci_dev *dev)
 {
-	pci_msi_off(dev);
 	dev->no_msi = 1;
 	dev_warn(&dev->dev, "PXH quirk detected; SHPC device MSI disabled\n");
 }
···
  * SKUs this function is not present, making this a ghost requester.
  * https://bugzilla.kernel.org/show_bug.cgi?id=42679
  */
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9120,
+			 quirk_dma_func1_alias);
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9123,
 			 quirk_dma_func1_alias);
 /* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c14 */
···
 	/* Wellsburg (X99) PCH */
 	0x8d10, 0x8d11, 0x8d12, 0x8d13, 0x8d14, 0x8d15, 0x8d16, 0x8d17,
 	0x8d18, 0x8d19, 0x8d1a, 0x8d1b, 0x8d1c, 0x8d1d, 0x8d1e,
+	/* Lynx Point (9 series) PCH */
+	0x8c90, 0x8c92, 0x8c94, 0x8c96, 0x8c98, 0x8c9a, 0x8c9c, 0x8c9e,
 };

 static bool pci_quirk_intel_pch_acs_match(struct pci_dev *dev)
drivers/pci/vc.c (+1 -2)

···
 	struct pci_dev *link = NULL;

 	/* Enable VCs from the downstream device */
-	if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT ||
-	    pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM)
+	if (!dev->has_secondary_link)
 		return;

 	ctrl_pos = pos + PCI_VC_RES_CTRL + (res * PCI_CAP_VC_PER_VC_SIZEOF);
drivers/pci/xen-pcifront.c (+13 -3)

···
 			 unsigned int domain, unsigned int bus)
 {
 	struct pci_bus *b;
+	LIST_HEAD(resources);
 	struct pcifront_sd *sd = NULL;
 	struct pci_bus_entry *bus_entry = NULL;
 	int err = 0;
+	static struct resource busn_res = {
+		.start = 0,
+		.end = 255,
+		.flags = IORESOURCE_BUS,
+	};

 #ifndef CONFIG_PCI_DOMAINS
 	if (domain != 0) {
···
 		err = -ENOMEM;
 		goto err_out;
 	}
+	pci_add_resource(&resources, &ioport_resource);
+	pci_add_resource(&resources, &iomem_resource);
+	pci_add_resource(&resources, &busn_res);
 	pcifront_init_sd(sd, domain, bus, pdev);

 	pci_lock_rescan_remove();

-	b = pci_scan_bus_parented(&pdev->xdev->dev, bus,
-				  &pcifront_bus_ops, sd);
+	b = pci_scan_root_bus(&pdev->xdev->dev, bus,
+			      &pcifront_bus_ops, sd, &resources);
 	if (!b) {
 		dev_err(&pdev->xdev->dev,
 			"Error creating PCI Frontend Bus!\n");
 		err = -ENOMEM;
 		pci_unlock_rescan_remove();
+		pci_free_resource_list(&resources);
 		goto err_out;
 	}
···

 	list_add(&bus_entry->list, &pdev->root_buses);

-	/* pci_scan_bus_parented skips devices which do not have a have
+	/* pci_scan_root_bus skips devices which do not have a
 	 * devfn==0. The pcifront_scan_bus enumerates all devfn. */
 	err = pcifront_scan_bus(pdev, domain, bus, b);
drivers/virtio/virtio_pci_common.c (-3)

···
 	INIT_LIST_HEAD(&vp_dev->virtqueues);
 	spin_lock_init(&vp_dev->lock);

-	/* Disable MSI/MSIX to bring device to a known good state. */
-	pci_msi_off(pci_dev);
-
 	/* enable the device */
 	rc = pci_enable_device(pci_dev);
 	if (rc)
include/asm-generic/pci.h (-13)

···
 #ifndef _ASM_GENERIC_PCI_H
 #define _ASM_GENERIC_PCI_H

-static inline struct resource *
-pcibios_select_root(struct pci_dev *pdev, struct resource *res)
-{
-	struct resource *root = NULL;
-
-	if (res->flags & IORESOURCE_IO)
-		root = &ioport_resource;
-	if (res->flags & IORESOURCE_MEM)
-		root = &iomem_resource;
-
-	return root;
-}
-
 #ifndef HAVE_ARCH_PCI_GET_LEGACY_IDE_IRQ
 static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
 {
include/linux/pci.h (+22 -22)

···
 	unsigned int	broken_intx_masking:1;
 	unsigned int	io_window_1k:1;	/* Intel P2P bridge 1K I/O windows */
 	unsigned int	irq_managed:1;
+	unsigned int	has_secondary_link:1;
 	pci_dev_flags_t dev_flags;
 	atomic_t	enable_cnt;	/* pci_enable_device has been called */

···
 int raw_pci_write(unsigned int domain, unsigned int bus, unsigned int devfn,
 		  int reg, int len, u32 val);

+#ifdef CONFIG_PCI_BUS_ADDR_T_64BIT
+typedef u64 pci_bus_addr_t;
+#else
+typedef u32 pci_bus_addr_t;
+#endif
+
 struct pci_bus_region {
-	dma_addr_t start;
-	dma_addr_t end;
+	pci_bus_addr_t start;
+	pci_bus_addr_t end;
 };

 struct pci_dynids {
···
 void pcibios_scan_specific_bus(int busn);
 struct pci_bus *pci_find_bus(int domain, int busnr);
 void pci_bus_add_devices(const struct pci_bus *bus);
-struct pci_bus *pci_scan_bus_parented(struct device *parent, int bus,
-				      struct pci_ops *ops, void *sysdata);
 struct pci_bus *pci_scan_bus(int bus, struct pci_ops *ops, void *sysdata);
 struct pci_bus *pci_create_root_bus(struct device *parent, int bus,
 				    struct pci_ops *ops, void *sysdata,
···
 bool pci_intx_mask_supported(struct pci_dev *dev);
 bool pci_check_and_mask_intx(struct pci_dev *dev);
 bool pci_check_and_unmask_intx(struct pci_dev *dev);
-void pci_msi_off(struct pci_dev *dev);
 int pci_set_dma_max_seg_size(struct pci_dev *dev, unsigned int size);
 int pci_set_dma_seg_boundary(struct pci_dev *dev, unsigned long mask);
 int pci_wait_for_pending(struct pci_dev *dev, int pos, u16 mask);
···
 int __must_check pci_reassign_resource(struct pci_dev *dev, int i, resource_size_t add_size, resource_size_t align);
 int pci_select_bars(struct pci_dev *dev, unsigned long flags);
 bool pci_device_is_present(struct pci_dev *pdev);
+void pci_ignore_hotplug(struct pci_dev *dev);

 /* ROM control related routines */
 int pci_enable_rom(struct pci_dev *pdev);
···
 bool pci_dev_run_wake(struct pci_dev *dev);
 bool pci_check_pme_status(struct pci_dev *dev);
 void pci_pme_wakeup_bus(struct pci_bus *bus);
-
-static inline void pci_ignore_hotplug(struct pci_dev *dev)
-{
-	dev->ignore_hotplug = 1;
-}

 static inline int pci_enable_wake(struct pci_dev *dev, pci_power_t state,
 				  bool enable)
···

 int pci_remap_iospace(const struct resource *res, phys_addr_t phys_addr);

-static inline dma_addr_t pci_bus_address(struct pci_dev *pdev, int bar)
+static inline pci_bus_addr_t pci_bus_address(struct pci_dev *pdev, int bar)
 {
 	struct pci_bus_region region;

···
 #define	pci_pool_destroy(pool) dma_pool_destroy(pool)
 #define	pci_pool_alloc(pool, flags, handle) dma_pool_alloc(pool, flags, handle)
 #define	pci_pool_free(pool, vaddr, addr) dma_pool_free(pool, vaddr, addr)
-
-enum pci_dma_burst_strategy {
-	PCI_DMA_BURST_INFINITY,	/* make bursts as large as possible,
-				   strategy_parameter is N/A */
-	PCI_DMA_BURST_BOUNDARY, /* disconnect at every strategy_parameter
-				   byte boundaries */
-	PCI_DMA_BURST_MULTIPLE, /* disconnect at some multiple of
-				   strategy_parameter byte boundaries */
-};

 struct msix_entry {
 	u32	vector;	/* kernel uses to write allocated vector */
···
 static inline int pci_request_regions(struct pci_dev *dev, const char *res_name)
 { return -EIO; }
 static inline void pci_release_regions(struct pci_dev *dev) { }
-
-#define pci_dma_burst_advice(pdev, strat, strategy_parameter) do { } while (0)

 static inline void pci_block_cfg_access(struct pci_dev *dev) { }
 static inline int pci_block_cfg_access_in_atomic(struct pci_dev *dev)
···
 static inline bool pci_is_dev_assigned(struct pci_dev *pdev)
 {
 	return (pdev->dev_flags & PCI_DEV_FLAGS_ASSIGNED) == PCI_DEV_FLAGS_ASSIGNED;
+}
+
+/**
+ * pci_ari_enabled - query ARI forwarding status
+ * @bus: the PCI bus
+ *
+ * Returns true if ARI forwarding is enabled.
+ */
+static inline bool pci_ari_enabled(struct pci_bus *bus)
+{
+	return bus->self && bus->self->ari_enabled;
 }
 #endif /* LINUX_PCI_H */
include/linux/types.h (+10 -2)

···
  */
 #define pgoff_t unsigned long

-/* A dma_addr_t can hold any valid DMA or bus address for the platform */
+/*
+ * A dma_addr_t can hold any valid DMA address, i.e., any address returned
+ * by the DMA API.
+ *
+ * If the DMA API only uses 32-bit addresses, dma_addr_t need only be 32
+ * bits wide.  Bus addresses, e.g., PCI BARs, may be wider than 32 bits,
+ * but drivers do memory-mapped I/O to ioremapped kernel virtual addresses,
+ * so they don't care about the size of the actual bus addresses.
+ */
 #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
 typedef u64 dma_addr_t;
 #else
 typedef u32 dma_addr_t;
-#endif /* dma_addr_t */
+#endif

 typedef unsigned __bitwise__ gfp_t;
 typedef unsigned __bitwise__ fmode_t;