
Merge tag 'pci-v4.14-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:

- add enhanced Downstream Port Containment support, which prints more
details about Root Port Programmed I/O errors (Dongdong Liu)

- add Layerscape ls1088a and ls2088a support (Hou Zhiqiang)

- add MediaTek MT2712 and MT7622 support (Ryder Lee)

- add MediaTek MT2712 and MT7622 MSI support (Honghui Zhang)

- add Qualcomm IPQ8074 support (Varadarajan Narayanan)

- add R-Car r8a7743/5 device tree support (Biju Das)

- add Rockchip per-lane PHY support for better power management (Shawn
Lin)

- fix IRQ mapping for hot-added devices by replacing the
pci_fixup_irqs() boot-time design with a host bridge hook called at
probe-time (Lorenzo Pieralisi, Matthew Minter)

- fix a race when enabling two devices that results in the upstream
bridge not being enabled correctly (Srinath Mannam)

- fix pciehp power fault infinite loop (Keith Busch)

- fix SHPC bridge MSI hotplug events by enabling bus mastering
(Aleksandr Bezzubikov)

- fix a VFIO issue by correcting PCIe capability sizes (Alex
Williamson)

- fix an INTD issue on Xilinx and possibly other drivers by unifying
INTx IRQ domain support (Paul Burton)

- avoid IOMMU stalls by marking AMD Stoney GPU ATS as broken (Joerg
Roedel)

- allow APM X-Gene device assignment to guests by adding an ACS quirk
(Feng Kan)

- fix driver crashes by disabling Extended Tags on Broadcom HT2100
(Extended Tags support is required for PCIe Receivers but not
Requesters, and we now enable them by default when Requesters support
them) (Sinan Kaya)

- fix MSIs for devices that use phantom RIDs for DMA by assuming MSIs
use the real Requester ID (not a phantom RID) (Robin Murphy)

- prevent assignment of Intel VMD children to guests (which may be
supported eventually, but isn't yet) by not associating an IOMMU with
them (Jon Derrick)

- fix Intel VMD suspend/resume by releasing IRQs on suspend (Scott
Bauer)

- fix a Function-Level Reset issue with Intel 750 NVMe by waiting
longer (up to 60sec instead of 1sec) for the device to become ready
(Sinan Kaya)

- fix a Function-Level Reset issue on iProc Stingray by working around
hardware defects in the CRS implementation (Oza Pawandeep)

- fix an issue with Intel NVMe P3700 after an iProc reset by adding a
delay during shutdown (Oza Pawandeep)

- fix a Microsoft Hyper-V lockdep issue by polling instead of blocking
in compose_msi_msg() (Stephen Hemminger)

- fix a wireless LAN driver timeout by clearing DesignWare MSI
interrupt status after it is handled, not before (Faiz Abbas)

- fix DesignWare ATU enable checking (Jisheng Zhang)

- reduce Layerscape dependencies on the bootloader by doing more
initialization in the driver (Hou Zhiqiang)

- improve Intel VMD performance by allowing allocation of more IRQ
vectors than present CPUs (Keith Busch)

- improve endpoint framework support for initial DMA mask, different
BAR sizes, configurable page sizes, MSI, test driver, etc (Kishon
Vijay Abraham I, Stan Drozd)

- rework CRS support to add periodic messages while we poll during
enumeration and after Function-Level Reset and prepare for possible
other uses of CRS (Sinan Kaya)

- clean up Root Port AER handling by removing unnecessary code and
moving error handler methods to struct pcie_port_service_driver
(Christoph Hellwig)

- clean up error handling paths in various drivers (Bjorn Andersson,
Fabio Estevam, Gustavo A. R. Silva, Harunobu Kurokawa, Jeffy Chen,
Lorenzo Pieralisi, Sergei Shtylyov)

- clean up SR-IOV resource handling by disabling VF decoding before
updating the corresponding resource structs (Gavin Shan)

- clean up DesignWare-based drivers by unifying quirks to update Class
Code and Interrupt Pin and related handling of write-protected
registers (Hou Zhiqiang)

- clean up by adding empty generic pcibios_align_resource() and
pcibios_fixup_bus() and removing empty arch-specific implementations
(Palmer Dabbelt)

- request exclusive reset control for several drivers to allow cleanup
elsewhere (Philipp Zabel)

- constify various structures (Arvind Yadav, Bhumika Goyal)

- convert from full_name() to %pOF (Rob Herring)

- remove unused variables from iProc, HiSi, Altera, Keystone (Shawn
Lin)

* tag 'pci-v4.14-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (170 commits)
PCI: xgene: Clean up whitespace
PCI: xgene: Define XGENE_PCI_EXP_CAP and use generic PCI_EXP_RTCTL offset
PCI: xgene: Fix platform_get_irq() error handling
PCI: xilinx-nwl: Fix platform_get_irq() error handling
PCI: rockchip: Fix platform_get_irq() error handling
PCI: altera: Fix platform_get_irq() error handling
PCI: spear13xx: Fix platform_get_irq() error handling
PCI: artpec6: Fix platform_get_irq() error handling
PCI: armada8k: Fix platform_get_irq() error handling
PCI: dra7xx: Fix platform_get_irq() error handling
PCI: exynos: Fix platform_get_irq() error handling
PCI: iproc: Clean up whitespace
PCI: iproc: Rename PCI_EXP_CAP to IPROC_PCI_EXP_CAP
PCI: iproc: Add 500ms delay during device shutdown
PCI: Fix typos and whitespace errors
PCI: Remove unused "res" variable from pci_resource_io()
PCI: Correct kernel-doc of pci_vpd_srdt_size(), pci_vpd_srdt_tag()
PCI/AER: Reformat AER register definitions
iommu/vt-d: Prevent VMD child devices from being remapping targets
x86/PCI: Use is_vmd() rather than relying on the domain number
...

+3434 -1637
+1 -1
CREDITS
··· 2090 2090 2091 2091 N: Mohit Kumar 2092 2092 D: ST Microelectronics SPEAr13xx PCI host bridge driver 2093 - D: Synopsys Designware PCI host bridge driver 2093 + D: Synopsys DesignWare PCI host bridge driver 2094 2094 2095 2095 N: Gabor Kuti 2096 2096 E: seasons@falcon.sch.bme.hu
+3 -3
Documentation/devicetree/bindings/pci/83xx-512x-pci.txt
··· 1 1 * Freescale 83xx and 512x PCI bridges 2 2 3 - Freescale 83xx and 512x SOCs include the same pci bridge core. 3 + Freescale 83xx and 512x SOCs include the same PCI bridge core. 4 4 5 5 83xx/512x specific notes: 6 6 - reg: should contain two address length tuples 7 - The first is for the internal pci bridge registers 8 - The second is for the pci config space access registers 7 + The first is for the internal PCI bridge registers 8 + The second is for the PCI config space access registers 9 9 10 10 Example (MPC8313ERDB) 11 11 pci0: pci@e0008500 {
+9 -9
Documentation/devicetree/bindings/pci/altera-pcie.txt
··· 7 7 "Txs": TX slave port region 8 8 "Cra": Control register access region 9 9 - interrupt-parent: interrupt source phandle. 10 - - interrupts: specifies the interrupt source of the parent interrupt controller. 11 - The format of the interrupt specifier depends on the parent interrupt 12 - controller. 10 + - interrupts: specifies the interrupt source of the parent interrupt 11 + controller. The format of the interrupt specifier depends 12 + on the parent interrupt controller. 13 13 - device_type: must be "pci" 14 14 - #address-cells: set to <3> 15 - - #size-cells: set to <2> 15 + - #size-cells: set to <2> 16 16 - #interrupt-cells: set to <1> 17 - - ranges: describes the translation of addresses for root ports and standard 18 - PCI regions. 17 + - ranges: describes the translation of addresses for root ports and 18 + standard PCI regions. 19 19 - interrupt-map-mask and interrupt-map: standard PCI properties to define the 20 20 mapping of the PCIe interface to interrupt numbers. 21 21 22 22 Optional properties: 23 - - msi-parent: Link to the hardware entity that serves as the MSI controller for this PCIe 24 - controller. 23 + - msi-parent: Link to the hardware entity that serves as the MSI controller 24 + for this PCIe controller. 25 25 - bus-range: PCI bus numbers covered 26 26 27 27 Example ··· 45 45 <0 0 0 3 &pcie_0 3>, 46 46 <0 0 0 4 &pcie_0 4>; 47 47 ranges = <0x82000000 0x00000000 0x00000000 0xc0000000 0x00000000 0x10000000 48 - 0x82000000 0x00000000 0x10000000 0xd0000000 0x00000000 0x10000000>; 48 + 0x82000000 0x00000000 0x10000000 0xd0000000 0x00000000 0x10000000>; 49 49 };
+1 -1
Documentation/devicetree/bindings/pci/axis,artpec6-pcie.txt
··· 6 6 Required properties: 7 7 - compatible: "axis,artpec6-pcie", "snps,dw-pcie" 8 8 - reg: base addresses and lengths of the PCIe controller (DBI), 9 - the phy controller, and configuration address space. 9 + the PHY controller, and configuration address space. 10 10 - reg-names: Must include the following entries: 11 11 - "dbi" 12 12 - "phy"
+11 -13
Documentation/devicetree/bindings/pci/designware-pcie.txt
··· 1 - * Synopsys Designware PCIe interface 1 + * Synopsys DesignWare PCIe interface 2 2 3 3 Required properties: 4 4 - compatible: should contain "snps,dw-pcie" to identify the core. ··· 17 17 properties to define the mapping of the PCIe interface to interrupt 18 18 numbers. 19 19 EP mode: 20 - - num-ib-windows: number of inbound address translation 21 - windows 22 - - num-ob-windows: number of outbound address translation 23 - windows 20 + - num-ib-windows: number of inbound address translation windows 21 + - num-ob-windows: number of outbound address translation windows 24 22 25 23 Optional properties: 26 24 - num-lanes: number of lanes to use (this property should be specified unless 27 25 the link is brought already up in BIOS) 28 - - reset-gpio: gpio pin number of power good signal 26 + - reset-gpio: GPIO pin number of power good signal 29 27 - clocks: Must contain an entry for each entry in clock-names. 30 28 See ../clocks/clock-bindings.txt for details. 31 29 - clock-names: Must include the following entries: 32 30 - "pcie" 33 31 - "pcie_bus" 34 32 RC mode: 35 - - num-viewport: number of view ports configured in 36 - hardware. If a platform does not specify it, the driver assumes 2. 37 - - bus-range: PCI bus numbers covered (it is recommended 38 - for new devicetrees to specify this property, to keep backwards 39 - compatibility a range of 0x00-0xff is assumed if not present) 33 + - num-viewport: number of view ports configured in hardware. If a platform 34 + does not specify it, the driver assumes 2. 35 + - bus-range: PCI bus numbers covered (it is recommended for new devicetrees 36 + to specify this property, to keep backwards compatibility a range of 37 + 0x00-0xff is assumed if not present) 38 + 40 39 EP mode: 41 - - max-functions: maximum number of functions that can be 42 - configured 40 + - max-functions: maximum number of functions that can be configured 43 41 44 42 Example configuration: 45 43
+1 -1
Documentation/devicetree/bindings/pci/fsl,imx6q-pcie.txt
··· 1 1 * Freescale i.MX6 PCIe interface 2 2 3 - This PCIe host controller is based on the Synopsis Designware PCIe IP 3 + This PCIe host controller is based on the Synopsys DesignWare PCIe IP 4 4 and thus inherits all the common properties defined in designware-pcie.txt. 5 5 6 6 Required properties:
+2 -2
Documentation/devicetree/bindings/pci/hisilicon-pcie.txt
··· 1 1 HiSilicon Hip05 and Hip06 PCIe host bridge DT description 2 2 3 - HiSilicon PCIe host controller is based on Designware PCI core. 4 - It shares common functions with PCIe Designware core driver and inherits 3 + HiSilicon PCIe host controller is based on the Synopsys DesignWare PCI core. 4 + It shares common functions with the PCIe DesignWare core driver and inherits 5 5 common properties defined in 6 6 Documentation/devicetree/bindings/pci/designware-pci.txt. 7 7
+4 -4
Documentation/devicetree/bindings/pci/kirin-pcie.txt
··· 1 1 HiSilicon Kirin SoCs PCIe host DT description 2 2 3 - Kirin PCIe host controller is based on Designware PCI core. 4 - It shares common functions with PCIe Designware core driver 5 - and inherits common properties defined in 3 + Kirin PCIe host controller is based on the Synopsys DesignWare PCI core. 4 + It shares common functions with the PCIe DesignWare core driver and 5 + inherits common properties defined in 6 6 Documentation/devicetree/bindings/pci/designware-pci.txt. 7 7 8 8 Additional properties are described here: ··· 16 16 "apb": apb Ctrl register defined by Kirin; 17 17 "phy": apb PHY register defined by Kirin; 18 18 "config": PCIe configuration space registers. 19 - - reset-gpios: The gpio to generate PCIe perst assert and deassert signal. 19 + - reset-gpios: The GPIO to generate PCIe PERST# assert and deassert signal. 20 20 21 21 Optional properties: 22 22
+3 -1
Documentation/devicetree/bindings/pci/layerscape-pci.txt
··· 15 15 - compatible: should contain the platform identifier such as: 16 16 "fsl,ls1021a-pcie", "snps,dw-pcie" 17 17 "fsl,ls2080a-pcie", "fsl,ls2085a-pcie", "snps,dw-pcie" 18 + "fsl,ls2088a-pcie" 19 + "fsl,ls1088a-pcie" 18 20 "fsl,ls1046a-pcie" 19 - - reg: base addresses and lengths of the PCIe controller 21 + - reg: base addresses and lengths of the PCIe controller register blocks. 20 22 - interrupts: A list of interrupt outputs of the controller. Must contain an 21 23 entry for each entry in the interrupt-names property. 22 24 - interrupt-names: Must include the following entries:
-130
Documentation/devicetree/bindings/pci/mediatek,mt7623-pcie.txt
··· 1 - MediaTek Gen2 PCIe controller which is available on MT7623 series SoCs 2 - 3 - PCIe subsys supports single root complex (RC) with 3 Root Ports. Each root 4 - ports supports a Gen2 1-lane Link and has PIPE interface to PHY. 5 - 6 - Required properties: 7 - - compatible: Should contain "mediatek,mt7623-pcie". 8 - - device_type: Must be "pci" 9 - - reg: Base addresses and lengths of the PCIe controller. 10 - - #address-cells: Address representation for root ports (must be 3) 11 - - #size-cells: Size representation for root ports (must be 2) 12 - - #interrupt-cells: Size representation for interrupts (must be 1) 13 - - interrupt-map-mask and interrupt-map: Standard PCI IRQ mapping properties 14 - Please refer to the standard PCI bus binding document for a more detailed 15 - explanation. 16 - - clocks: Must contain an entry for each entry in clock-names. 17 - See ../clocks/clock-bindings.txt for details. 18 - - clock-names: Must include the following entries: 19 - - free_ck :for reference clock of PCIe subsys 20 - - sys_ck0 :for clock of Port0 21 - - sys_ck1 :for clock of Port1 22 - - sys_ck2 :for clock of Port2 23 - - resets: Must contain an entry for each entry in reset-names. 24 - See ../reset/reset.txt for details. 25 - - reset-names: Must include the following entries: 26 - - pcie-rst0 :port0 reset 27 - - pcie-rst1 :port1 reset 28 - - pcie-rst2 :port2 reset 29 - - phys: List of PHY specifiers (used by generic PHY framework). 30 - - phy-names : Must be "pcie-phy0", "pcie-phy1", "pcie-phyN".. based on the 31 - number of PHYs as specified in *phys* property. 32 - - power-domains: A phandle and power domain specifier pair to the power domain 33 - which is responsible for collapsing and restoring power to the peripheral. 34 - - bus-range: Range of bus numbers associated with this controller. 35 - - ranges: Ranges for the PCI memory and I/O regions. 
36 - 37 - In addition, the device tree node must have sub-nodes describing each 38 - PCIe port interface, having the following mandatory properties: 39 - 40 - Required properties: 41 - - device_type: Must be "pci" 42 - - reg: Only the first four bytes are used to refer to the correct bus number 43 - and device number. 44 - - #address-cells: Must be 3 45 - - #size-cells: Must be 2 46 - - #interrupt-cells: Must be 1 47 - - interrupt-map-mask and interrupt-map: Standard PCI IRQ mapping properties 48 - Please refer to the standard PCI bus binding document for a more detailed 49 - explanation. 50 - - ranges: Sub-ranges distributed from the PCIe controller node. An empty 51 - property is sufficient. 52 - - num-lanes: Number of lanes to use for this port. 53 - 54 - Examples: 55 - 56 - hifsys: syscon@1a000000 { 57 - compatible = "mediatek,mt7623-hifsys", 58 - "mediatek,mt2701-hifsys", 59 - "syscon"; 60 - reg = <0 0x1a000000 0 0x1000>; 61 - #clock-cells = <1>; 62 - #reset-cells = <1>; 63 - }; 64 - 65 - pcie: pcie-controller@1a140000 { 66 - compatible = "mediatek,mt7623-pcie"; 67 - device_type = "pci"; 68 - reg = <0 0x1a140000 0 0x1000>, /* PCIe shared registers */ 69 - <0 0x1a142000 0 0x1000>, /* Port0 registers */ 70 - <0 0x1a143000 0 0x1000>, /* Port1 registers */ 71 - <0 0x1a144000 0 0x1000>; /* Port2 registers */ 72 - #address-cells = <3>; 73 - #size-cells = <2>; 74 - #interrupt-cells = <1>; 75 - interrupt-map-mask = <0xf800 0 0 0>; 76 - interrupt-map = <0x0000 0 0 0 &sysirq GIC_SPI 193 IRQ_TYPE_LEVEL_LOW>, 77 - <0x0800 0 0 0 &sysirq GIC_SPI 194 IRQ_TYPE_LEVEL_LOW>, 78 - <0x1000 0 0 0 &sysirq GIC_SPI 195 IRQ_TYPE_LEVEL_LOW>; 79 - clocks = <&topckgen CLK_TOP_ETHIF_SEL>, 80 - <&hifsys CLK_HIFSYS_PCIE0>, 81 - <&hifsys CLK_HIFSYS_PCIE1>, 82 - <&hifsys CLK_HIFSYS_PCIE2>; 83 - clock-names = "free_ck", "sys_ck0", "sys_ck1", "sys_ck2"; 84 - resets = <&hifsys MT2701_HIFSYS_PCIE0_RST>, 85 - <&hifsys MT2701_HIFSYS_PCIE1_RST>, 86 - <&hifsys MT2701_HIFSYS_PCIE2_RST>; 87 - 
reset-names = "pcie-rst0", "pcie-rst1", "pcie-rst2"; 88 - phys = <&pcie0_phy>, <&pcie1_phy>, <&pcie2_phy>; 89 - phy-names = "pcie-phy0", "pcie-phy1", "pcie-phy2"; 90 - power-domains = <&scpsys MT2701_POWER_DOMAIN_HIF>; 91 - bus-range = <0x00 0xff>; 92 - ranges = <0x81000000 0 0x1a160000 0 0x1a160000 0 0x00010000 /* I/O space */ 93 - 0x83000000 0 0x60000000 0 0x60000000 0 0x10000000>; /* memory space */ 94 - 95 - pcie@0,0 { 96 - device_type = "pci"; 97 - reg = <0x0000 0 0 0 0>; 98 - #address-cells = <3>; 99 - #size-cells = <2>; 100 - #interrupt-cells = <1>; 101 - interrupt-map-mask = <0 0 0 0>; 102 - interrupt-map = <0 0 0 0 &sysirq GIC_SPI 193 IRQ_TYPE_LEVEL_LOW>; 103 - ranges; 104 - num-lanes = <1>; 105 - }; 106 - 107 - pcie@1,0 { 108 - device_type = "pci"; 109 - reg = <0x0800 0 0 0 0>; 110 - #address-cells = <3>; 111 - #size-cells = <2>; 112 - #interrupt-cells = <1>; 113 - interrupt-map-mask = <0 0 0 0>; 114 - interrupt-map = <0 0 0 0 &sysirq GIC_SPI 194 IRQ_TYPE_LEVEL_LOW>; 115 - ranges; 116 - num-lanes = <1>; 117 - }; 118 - 119 - pcie@2,0 { 120 - device_type = "pci"; 121 - reg = <0x1000 0 0 0 0>; 122 - #address-cells = <3>; 123 - #size-cells = <2>; 124 - #interrupt-cells = <1>; 125 - interrupt-map-mask = <0 0 0 0>; 126 - interrupt-map = <0 0 0 0 &sysirq GIC_SPI 195 IRQ_TYPE_LEVEL_LOW>; 127 - ranges; 128 - num-lanes = <1>; 129 - }; 130 - };
+284
Documentation/devicetree/bindings/pci/mediatek-pcie.txt
··· 1 + MediaTek Gen2 PCIe controller 2 + 3 + Required properties: 4 + - compatible: Should contain one of the following strings: 5 + "mediatek,mt2701-pcie" 6 + "mediatek,mt2712-pcie" 7 + "mediatek,mt7622-pcie" 8 + "mediatek,mt7623-pcie" 9 + - device_type: Must be "pci" 10 + - reg: Base addresses and lengths of the PCIe subsys and root ports. 11 + - reg-names: Names of the above areas to use during resource lookup. 12 + - #address-cells: Address representation for root ports (must be 3) 13 + - #size-cells: Size representation for root ports (must be 2) 14 + - clocks: Must contain an entry for each entry in clock-names. 15 + See ../clocks/clock-bindings.txt for details. 16 + - clock-names: 17 + Mandatory entries: 18 + - sys_ckN :transaction layer and data link layer clock 19 + Required entries for MT2701/MT7623: 20 + - free_ck :for reference clock of PCIe subsys 21 + Required entries for MT2712/MT7622: 22 + - ahb_ckN :AHB slave interface operating clock for CSR access and RC 23 + initiated MMIO access 24 + Required entries for MT7622: 25 + - axi_ckN :application layer MMIO channel operating clock 26 + - aux_ckN :pe2_mac_bridge and pe2_mac_core operating clock when 27 + pcie_mac_ck/pcie_pipe_ck is turned off 28 + - obff_ckN :OBFF functional block operating clock 29 + - pipe_ckN :LTSSM and PHY/MAC layer operating clock 30 + where N starting from 0 to one less than the number of root ports. 31 + - phys: List of PHY specifiers (used by generic PHY framework). 32 + - phy-names : Must be "pcie-phy0", "pcie-phy1", "pcie-phyN".. based on the 33 + number of PHYs as specified in *phys* property. 34 + - power-domains: A phandle and power domain specifier pair to the power domain 35 + which is responsible for collapsing and restoring power to the peripheral. 36 + - bus-range: Range of bus numbers associated with this controller. 37 + - ranges: Ranges for the PCI memory and I/O regions. 
38 + 39 + Required properties for MT7623/MT2701: 40 + - #interrupt-cells: Size representation for interrupts (must be 1) 41 + - interrupt-map-mask and interrupt-map: Standard PCI IRQ mapping properties 42 + Please refer to the standard PCI bus binding document for a more detailed 43 + explanation. 44 + - resets: Must contain an entry for each entry in reset-names. 45 + See ../reset/reset.txt for details. 46 + - reset-names: Must be "pcie-rst0", "pcie-rst1", "pcie-rstN".. based on the 47 + number of root ports. 48 + 49 + Required properties for MT2712/MT7622: 50 + -interrupts: A list of interrupt outputs of the controller, must have one 51 + entry for each PCIe port 52 + 53 + In addition, the device tree node must have sub-nodes describing each 54 + PCIe port interface, having the following mandatory properties: 55 + 56 + Required properties: 57 + - device_type: Must be "pci" 58 + - reg: Only the first four bytes are used to refer to the correct bus number 59 + and device number. 60 + - #address-cells: Must be 3 61 + - #size-cells: Must be 2 62 + - #interrupt-cells: Must be 1 63 + - interrupt-map-mask and interrupt-map: Standard PCI IRQ mapping properties 64 + Please refer to the standard PCI bus binding document for a more detailed 65 + explanation. 66 + - ranges: Sub-ranges distributed from the PCIe controller node. An empty 67 + property is sufficient. 68 + - num-lanes: Number of lanes to use for this port. 
69 + 70 + Examples for MT7623: 71 + 72 + hifsys: syscon@1a000000 { 73 + compatible = "mediatek,mt7623-hifsys", 74 + "mediatek,mt2701-hifsys", 75 + "syscon"; 76 + reg = <0 0x1a000000 0 0x1000>; 77 + #clock-cells = <1>; 78 + #reset-cells = <1>; 79 + }; 80 + 81 + pcie: pcie-controller@1a140000 { 82 + compatible = "mediatek,mt7623-pcie"; 83 + device_type = "pci"; 84 + reg = <0 0x1a140000 0 0x1000>, /* PCIe shared registers */ 85 + <0 0x1a142000 0 0x1000>, /* Port0 registers */ 86 + <0 0x1a143000 0 0x1000>, /* Port1 registers */ 87 + <0 0x1a144000 0 0x1000>; /* Port2 registers */ 88 + reg-names = "subsys", "port0", "port1", "port2"; 89 + #address-cells = <3>; 90 + #size-cells = <2>; 91 + #interrupt-cells = <1>; 92 + interrupt-map-mask = <0xf800 0 0 0>; 93 + interrupt-map = <0x0000 0 0 0 &sysirq GIC_SPI 193 IRQ_TYPE_LEVEL_LOW>, 94 + <0x0800 0 0 0 &sysirq GIC_SPI 194 IRQ_TYPE_LEVEL_LOW>, 95 + <0x1000 0 0 0 &sysirq GIC_SPI 195 IRQ_TYPE_LEVEL_LOW>; 96 + clocks = <&topckgen CLK_TOP_ETHIF_SEL>, 97 + <&hifsys CLK_HIFSYS_PCIE0>, 98 + <&hifsys CLK_HIFSYS_PCIE1>, 99 + <&hifsys CLK_HIFSYS_PCIE2>; 100 + clock-names = "free_ck", "sys_ck0", "sys_ck1", "sys_ck2"; 101 + resets = <&hifsys MT2701_HIFSYS_PCIE0_RST>, 102 + <&hifsys MT2701_HIFSYS_PCIE1_RST>, 103 + <&hifsys MT2701_HIFSYS_PCIE2_RST>; 104 + reset-names = "pcie-rst0", "pcie-rst1", "pcie-rst2"; 105 + phys = <&pcie0_phy PHY_TYPE_PCIE>, <&pcie1_phy PHY_TYPE_PCIE>, 106 + <&pcie2_phy PHY_TYPE_PCIE>; 107 + phy-names = "pcie-phy0", "pcie-phy1", "pcie-phy2"; 108 + power-domains = <&scpsys MT2701_POWER_DOMAIN_HIF>; 109 + bus-range = <0x00 0xff>; 110 + ranges = <0x81000000 0 0x1a160000 0 0x1a160000 0 0x00010000 /* I/O space */ 111 + 0x83000000 0 0x60000000 0 0x60000000 0 0x10000000>; /* memory space */ 112 + 113 + pcie@0,0 { 114 + device_type = "pci"; 115 + reg = <0x0000 0 0 0 0>; 116 + #address-cells = <3>; 117 + #size-cells = <2>; 118 + #interrupt-cells = <1>; 119 + interrupt-map-mask = <0 0 0 0>; 120 + interrupt-map = <0 0 0 0 &sysirq 
GIC_SPI 193 IRQ_TYPE_LEVEL_LOW>; 121 + ranges; 122 + num-lanes = <1>; 123 + }; 124 + 125 + pcie@1,0 { 126 + device_type = "pci"; 127 + reg = <0x0800 0 0 0 0>; 128 + #address-cells = <3>; 129 + #size-cells = <2>; 130 + #interrupt-cells = <1>; 131 + interrupt-map-mask = <0 0 0 0>; 132 + interrupt-map = <0 0 0 0 &sysirq GIC_SPI 194 IRQ_TYPE_LEVEL_LOW>; 133 + ranges; 134 + num-lanes = <1>; 135 + }; 136 + 137 + pcie@2,0 { 138 + device_type = "pci"; 139 + reg = <0x1000 0 0 0 0>; 140 + #address-cells = <3>; 141 + #size-cells = <2>; 142 + #interrupt-cells = <1>; 143 + interrupt-map-mask = <0 0 0 0>; 144 + interrupt-map = <0 0 0 0 &sysirq GIC_SPI 195 IRQ_TYPE_LEVEL_LOW>; 145 + ranges; 146 + num-lanes = <1>; 147 + }; 148 + }; 149 + 150 + Examples for MT2712: 151 + pcie: pcie@11700000 { 152 + compatible = "mediatek,mt2712-pcie"; 153 + device_type = "pci"; 154 + reg = <0 0x11700000 0 0x1000>, 155 + <0 0x112ff000 0 0x1000>; 156 + reg-names = "port0", "port1"; 157 + #address-cells = <3>; 158 + #size-cells = <2>; 159 + interrupts = <GIC_SPI 115 IRQ_TYPE_LEVEL_HIGH>, 160 + <GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>; 161 + clocks = <&topckgen CLK_TOP_PE2_MAC_P0_SEL>, 162 + <&topckgen CLK_TOP_PE2_MAC_P1_SEL>, 163 + <&pericfg CLK_PERI_PCIE0>, 164 + <&pericfg CLK_PERI_PCIE1>; 165 + clock-names = "sys_ck0", "sys_ck1", "ahb_ck0", "ahb_ck1"; 166 + phys = <&pcie0_phy PHY_TYPE_PCIE>, <&pcie1_phy PHY_TYPE_PCIE>; 167 + phy-names = "pcie-phy0", "pcie-phy1"; 168 + bus-range = <0x00 0xff>; 169 + ranges = <0x82000000 0 0x20000000 0x0 0x20000000 0 0x10000000>; 170 + 171 + pcie0: pcie@0,0 { 172 + device_type = "pci"; 173 + reg = <0x0000 0 0 0 0>; 174 + #address-cells = <3>; 175 + #size-cells = <2>; 176 + #interrupt-cells = <1>; 177 + ranges; 178 + num-lanes = <1>; 179 + interrupt-map-mask = <0 0 0 7>; 180 + interrupt-map = <0 0 0 1 &pcie_intc0 0>, 181 + <0 0 0 2 &pcie_intc0 1>, 182 + <0 0 0 3 &pcie_intc0 2>, 183 + <0 0 0 4 &pcie_intc0 3>; 184 + pcie_intc0: interrupt-controller { 185 + interrupt-controller; 
186 + #address-cells = <0>; 187 + #interrupt-cells = <1>; 188 + }; 189 + }; 190 + 191 + pcie1: pcie@1,0 { 192 + device_type = "pci"; 193 + reg = <0x0800 0 0 0 0>; 194 + #address-cells = <3>; 195 + #size-cells = <2>; 196 + #interrupt-cells = <1>; 197 + ranges; 198 + num-lanes = <1>; 199 + interrupt-map-mask = <0 0 0 7>; 200 + interrupt-map = <0 0 0 1 &pcie_intc1 0>, 201 + <0 0 0 2 &pcie_intc1 1>, 202 + <0 0 0 3 &pcie_intc1 2>, 203 + <0 0 0 4 &pcie_intc1 3>; 204 + pcie_intc1: interrupt-controller { 205 + interrupt-controller; 206 + #address-cells = <0>; 207 + #interrupt-cells = <1>; 208 + }; 209 + }; 210 + }; 211 + 212 + Examples for MT7622: 213 + pcie: pcie@1a140000 { 214 + compatible = "mediatek,mt7622-pcie"; 215 + device_type = "pci"; 216 + reg = <0 0x1a140000 0 0x1000>, 217 + <0 0x1a143000 0 0x1000>, 218 + <0 0x1a145000 0 0x1000>; 219 + reg-names = "subsys", "port0", "port1"; 220 + #address-cells = <3>; 221 + #size-cells = <2>; 222 + interrupts = <GIC_SPI 228 IRQ_TYPE_LEVEL_LOW>, 223 + <GIC_SPI 229 IRQ_TYPE_LEVEL_LOW>; 224 + clocks = <&pciesys CLK_PCIE_P0_MAC_EN>, 225 + <&pciesys CLK_PCIE_P1_MAC_EN>, 226 + <&pciesys CLK_PCIE_P0_AHB_EN>, 227 + <&pciesys CLK_PCIE_P1_AHB_EN>, 228 + <&pciesys CLK_PCIE_P0_AUX_EN>, 229 + <&pciesys CLK_PCIE_P1_AUX_EN>, 230 + <&pciesys CLK_PCIE_P0_AXI_EN>, 231 + <&pciesys CLK_PCIE_P1_AXI_EN>, 232 + <&pciesys CLK_PCIE_P0_OBFF_EN>, 233 + <&pciesys CLK_PCIE_P1_OBFF_EN>, 234 + <&pciesys CLK_PCIE_P0_PIPE_EN>, 235 + <&pciesys CLK_PCIE_P1_PIPE_EN>; 236 + clock-names = "sys_ck0", "sys_ck1", "ahb_ck0", "ahb_ck1", 237 + "aux_ck0", "aux_ck1", "axi_ck0", "axi_ck1", 238 + "obff_ck0", "obff_ck1", "pipe_ck0", "pipe_ck1"; 239 + phys = <&pcie0_phy PHY_TYPE_PCIE>, <&pcie1_phy PHY_TYPE_PCIE>; 240 + phy-names = "pcie-phy0", "pcie-phy1"; 241 + power-domains = <&scpsys MT7622_POWER_DOMAIN_HIF0>; 242 + bus-range = <0x00 0xff>; 243 + ranges = <0x82000000 0 0x20000000 0x0 0x20000000 0 0x10000000>; 244 + 245 + pcie0: pcie@0,0 { 246 + device_type = "pci"; 247 + 
reg = <0x0000 0 0 0 0>; 248 + #address-cells = <3>; 249 + #size-cells = <2>; 250 + #interrupt-cells = <1>; 251 + ranges; 252 + num-lanes = <1>; 253 + interrupt-map-mask = <0 0 0 7>; 254 + interrupt-map = <0 0 0 1 &pcie_intc0 0>, 255 + <0 0 0 2 &pcie_intc0 1>, 256 + <0 0 0 3 &pcie_intc0 2>, 257 + <0 0 0 4 &pcie_intc0 3>; 258 + pcie_intc0: interrupt-controller { 259 + interrupt-controller; 260 + #address-cells = <0>; 261 + #interrupt-cells = <1>; 262 + }; 263 + }; 264 + 265 + pcie1: pcie@1,0 { 266 + device_type = "pci"; 267 + reg = <0x0800 0 0 0 0>; 268 + #address-cells = <3>; 269 + #size-cells = <2>; 270 + #interrupt-cells = <1>; 271 + ranges; 272 + num-lanes = <1>; 273 + interrupt-map-mask = <0 0 0 7>; 274 + interrupt-map = <0 0 0 1 &pcie_intc1 0>, 275 + <0 0 0 2 &pcie_intc1 1>, 276 + <0 0 0 3 &pcie_intc1 2>, 277 + <0 0 0 4 &pcie_intc1 3>; 278 + pcie_intc1: interrupt-controller { 279 + interrupt-controller; 280 + #address-cells = <0>; 281 + #interrupt-cells = <1>; 282 + }; 283 + }; 284 + };
+1 -1
Documentation/devicetree/bindings/pci/mvebu-pci.txt
··· 77 77 - marvell,pcie-lane: the physical PCIe lane number, for ports having 78 78 multiple lanes. If this property is not found, we assume that the 79 79 value is 0. 80 - - reset-gpios: optional gpio to PERST# 80 + - reset-gpios: optional GPIO to PERST# 81 81 - reset-delay-us: delay in us to wait after reset de-assertion, if not 82 82 specified will default to 100ms, as required by the PCIe specification. 83 83
+1 -1
Documentation/devicetree/bindings/pci/pci-armada8k.txt
··· 1 1 * Marvell Armada 7K/8K PCIe interface 2 2 3 - This PCIe host controller is based on the Synopsis Designware PCIe IP 3 + This PCIe host controller is based on the Synopsys DesignWare PCIe IP 4 4 and thus inherits all the common properties defined in designware-pcie.txt. 5 5 6 6 Required properties:
+7 -8
Documentation/devicetree/bindings/pci/pci-keystone.txt
··· 1 1 TI Keystone PCIe interface 2 2 3 - Keystone PCI host Controller is based on Designware PCI h/w version 3.65. 4 - It shares common functions with PCIe Designware core driver and inherit 5 - common properties defined in 3 + Keystone PCI host Controller is based on the Synopsys DesignWare PCI 4 + hardware version 3.65. It shares common functions with the PCIe DesignWare 5 + core driver and inherits common properties defined in 6 6 Documentation/devicetree/bindings/pci/designware-pci.txt 7 7 8 8 Please refer to Documentation/devicetree/bindings/pci/designware-pci.txt 9 - for the details of Designware DT bindings. Additional properties are 9 + for the details of DesignWare DT bindings. Additional properties are 10 10 described here as well as properties that are not applicable. 11 11 12 12 Required Properties:- ··· 52 52 }; 53 53 54 54 Optional properties:- 55 - phys: phandle to Generic Keystone SerDes phy for PCI 56 - phy-names: name of the Generic Keystine SerDes phy for PCI 55 + phys: phandle to generic Keystone SerDes PHY for PCI 56 + phy-names: name of the generic Keystone SerDes PHY for PCI 57 57 - If boot loader already does PCI link establishment, then phys and 58 58 phy-names shouldn't be present. 59 59 interrupts: platform interrupt for error interrupts. 60 60 61 - Designware DT Properties not applicable for Keystone PCI 61 + DesignWare DT Properties not applicable for Keystone PCI 62 62 63 63 1. pcie_bus clock-names not used. Instead, a phandle to phys is used. 64 -
+5 -2
Documentation/devicetree/bindings/pci/pci-rcar-gen2.txt
··· 6 6 OHCI and EHCI controllers. 7 7 8 8 Required properties: 9 - - compatible: "renesas,pci-r8a7790" for the R8A7790 SoC; 9 + - compatible: "renesas,pci-r8a7743" for the R8A7743 SoC; 10 + "renesas,pci-r8a7745" for the R8A7745 SoC; 11 + "renesas,pci-r8a7790" for the R8A7790 SoC; 10 12 "renesas,pci-r8a7791" for the R8A7791 SoC; 11 13 "renesas,pci-r8a7793" for the R8A7793 SoC; 12 14 "renesas,pci-r8a7794" for the R8A7794 SoC; 13 - "renesas,pci-rcar-gen2" for a generic R-Car Gen2 compatible device 15 + "renesas,pci-rcar-gen2" for a generic R-Car Gen2 or 16 + RZ/G1 compatible device. 14 17 15 18 16 19 When compatible with the generic version, nodes must list the
+25 -2
Documentation/devicetree/bindings/pci/qcom,pcie.txt
··· 9 9 - "qcom,pcie-apq8084" for apq8084 10 10 - "qcom,pcie-msm8996" for msm8996 or apq8096 11 11 - "qcom,pcie-ipq4019" for ipq4019 12 + - "qcom,pcie-ipq8074" for ipq8074 12 13 13 14 - reg: 14 15 Usage: required ··· 21 20 Value type: <stringlist> 22 21 Definition: Must include the following entries 23 22 - "parf" Qualcomm specific registers 24 - - "dbi" Designware PCIe registers 23 + - "dbi" DesignWare PCIe registers 25 24 - "elbi" External local bus interface registers 26 25 - "config" PCIe configuration space 27 26 ··· 106 105 - "bus_master" Master AXI clock 107 106 - "bus_slave" Slave AXI clock 108 107 108 + - clock-names: 109 + Usage: required for ipq8074 110 + Value type: <stringlist> 111 + Definition: Should contain the following entries 112 + - "iface" PCIe to SysNOC BIU clock 113 + - "axi_m" AXI Master clock 114 + - "axi_s" AXI Slave clock 115 + - "ahb" AHB clock 116 + - "aux" Auxiliary clock 117 + 109 118 - resets: 110 119 Usage: required 111 120 Value type: <prop-encoded-array> ··· 155 144 - "ahb" AHB reset 156 145 - "phy_ahb" PHY AHB reset 157 146 147 + - reset-names: 148 + Usage: required for ipq8074 149 + Value type: <stringlist> 150 + Definition: Should contain the following entries 151 + - "pipe" PIPE reset 152 + - "sleep" Sleep reset 153 + - "sticky" Core Sticky reset 154 + - "axi_m" AXI Master reset 155 + - "axi_s" AXI Slave reset 156 + - "ahb" AHB Reset 157 + - "axi_m_sticky" AXI Master Sticky reset 158 + 158 159 - power-domains: 159 160 Usage: required for apq8084 and msm8996/apq8096 160 161 Value type: <prop-encoded-array> ··· 203 180 - <name>-gpios: 204 181 Usage: optional 205 182 Value type: <prop-encoded-array> 206 - Definition: List of phandle and gpio specifier pairs. Should contain 183 + Definition: List of phandle and GPIO specifier pairs. Should contain 207 184 - "perst-gpios" PCIe endpoint reset signal line 208 185 - "wake-gpios" PCIe endpoint wake signal line 209 186
+1 -1
Documentation/devicetree/bindings/pci/ralink,rt3883-pci.txt
···
 - interrupt-map: standard PCI properties to define the mapping of the
   PCI interface to interrupt numbers.

-The PCI host bridge node migh have additional sub-nodes representing
+The PCI host bridge node might have additional sub-nodes representing
 the onboard PCI devices/PCI slots. Each such sub-node must have the
 following mandatory properties:
+3 -4
Documentation/devicetree/bindings/pci/rcar-pci.txt
···
       SoC-specific version corresponding to the platform first
       followed by the generic version.

-- reg: base address and length of the pcie controller registers.
+- reg: base address and length of the PCIe controller registers.
 - #address-cells: set to <3>
 - #size-cells: set to <2>
 - bus-range: PCI bus numbers covered
···
   source for hardware related interrupts (e.g. link speed change).
 - #interrupt-cells: set to <1>
 - interrupt-map-mask and interrupt-map: standard PCI properties
-  to define the mapping of the PCIe interface to interrupt
-  numbers.
+  to define the mapping of the PCIe interface to interrupt numbers.
 - clocks: from common clock binding: clock specifiers for the PCIe controller
   and PCIe bus clocks.
 - clock-names: from common clock binding: should be "pcie" and "pcie_bus".

 Example:

-SoC specific DT Entry:
+SoC-specific DT Entry:

 	pcie: pcie@fe000000 {
 		compatible = "renesas,pcie-r8a7791", "renesas,pcie-rcar-gen2";
+25 -3
Documentation/devicetree/bindings/pci/rockchip-pcie.txt
···
 		- "pm"
 - msi-map: Maps a Requester ID to an MSI controller and associated
 	msi-specifier data. See ./pci-msi.txt
-- phys: From PHY bindings: Phandle for the Generic PHY for PCIe.
-- phy-names: MUST be "pcie-phy".
 - interrupts: Three interrupt entries must be specified.
 - interrupt-names: Must include the following names
 	- "sys"
···
 	interrupt source. The value must be 1.
 - interrupt-map-mask and interrupt-map: standard PCI properties

+Required properties for legacy PHY model (deprecated):
+- phys: From PHY bindings: Phandle for the Generic PHY for PCIe.
+- phy-names: MUST be "pcie-phy".
+
+Required properties for per-lane PHY model (preferred):
+- phys: Must contain an phandle to a PHY for each entry in phy-names.
+- phy-names: Must include 4 entries for all 4 lanes even if some of
+  them won't be used for your cases. Entries are of the form "pcie-phy-N":
+  where N ranges from 0 to 3.
+  (see example below and you MUST also refer to ../phy/rockchip-pcie-phy.txt
+  for changing the #phy-cells of phy node to support it)
+
 Optional Property:
 - aspm-no-l0s: RC won't support ASPM L0s. This property is needed if
 	using 24MHz OSC for RC's PHY.
-- ep-gpios: contain the entry for pre-reset gpio
+- ep-gpios: contain the entry for pre-reset GPIO
 - num-lanes: number of lanes to use
+- vpcie12v-supply: The phandle to the 12v regulator to use for PCIe.
 - vpcie3v3-supply: The phandle to the 3.3v regulator to use for PCIe.
 - vpcie1v8-supply: The phandle to the 1.8v regulator to use for PCIe.
 - vpcie0v9-supply: The phandle to the 0.9v regulator to use for PCIe.
···
 			 <&cru SRST_PCIE_PM>, <&cru SRST_P_PCIE>, <&cru SRST_A_PCIE>;
 		reset-names = "core", "mgmt", "mgmt-sticky", "pipe",
 			      "pm", "pclk", "aclk";
+		/* deprecated legacy PHY model */
 		phys = <&pcie_phy>;
 		phy-names = "pcie-phy";
 		pinctrl-names = "default";
···
 			#address-cells = <0>;
 			#interrupt-cells = <1>;
 		};
+	};
+
+	pcie0: pcie@f8000000 {
+		...
+
+		/* preferred per-lane PHY model */
+		phys = <&pcie_phy 0>, <&pcie_phy 1>, <&pcie_phy 2>, <&pcie_phy 3>;
+		phy-names = "pcie-phy-0", "pcie-phy-1", "pcie-phy-2", "pcie-phy-3";
+
+		...
 	};
+11 -11
Documentation/devicetree/bindings/pci/samsung,exynos5440-pcie.txt
···
 * Samsung Exynos 5440 PCIe interface

-This PCIe host controller is based on the Synopsis Designware PCIe IP
+This PCIe host controller is based on the Synopsys DesignWare PCIe IP
 and thus inherits all the common properties defined in designware-pcie.txt.

 Required properties:
 - compatible: "samsung,exynos5440-pcie"
-- reg: base addresses and lengths of the pcie controller,
-	the phy controller, additional register for the phy controller.
-	(Registers for the phy controller are DEPRECATED.
+- reg: base addresses and lengths of the PCIe controller,
+	the PHY controller, additional register for the PHY controller.
+	(Registers for the PHY controller are DEPRECATED.
 	 Use the PHY framework.)
 - reg-names : First name should be set to "elbi".
-	And use the "config" instead of getting the confgiruation address space
+	And use the "config" instead of getting the configuration address space
 	from "ranges".
-	NOTE: When use the "config" property, reg-names must be set.
+	NOTE: When using the "config" property, reg-names must be set.
 - interrupts: A list of interrupt outputs for level interrupt,
 	pulse interrupt, special interrupt.
-- phys: From PHY binding. Phandle for the Generic PHY.
+- phys: From PHY binding. Phandle for the generic PHY.
 	Refer to Documentation/devicetree/bindings/phy/samsung-phy.txt

-Other common properties refer to
-	Documentation/devicetree/binding/pci/designware-pcie.txt
+For other common properties, refer to
+	Documentation/devicetree/bindings/pci/designware-pcie.txt

 Example:

-SoC specific DT Entry:
+SoC-specific DT Entry:

 	pcie@290000 {
 		compatible = "samsung,exynos5440-pcie", "snps,dw-pcie";
···
 		...
 	};

-Board specific DT Entry:
+Board-specific DT Entry:

 	pcie@290000 {
 		reset-gpio = <&pin_ctrl 5 0>;
+3 -3
Documentation/devicetree/bindings/pci/spear13xx-pcie.txt
···
 SPEAr13XX PCIe DT detail:
 ================================

-SPEAr13XX uses synopsis designware PCIe controller and ST MiPHY as phy
+SPEAr13XX uses the Synopsys DesignWare PCIe controller and ST MiPHY as PHY
 controller.

 Required properties:
-- compatible : should be "st,spear1340-pcie", "snps,dw-pcie".
-- phys : phandle to phy node associated with pcie controller
+- compatible	: should be "st,spear1340-pcie", "snps,dw-pcie".
+- phys		: phandle to PHY node associated with PCIe controller
 - phy-names	: must be "pcie-phy"
 - All other definitions as per generic PCI bindings
+4 -4
Documentation/devicetree/bindings/pci/ti-pci.txt
···
 TI PCI Controllers

-PCIe Designware Controller
+PCIe DesignWare Controller
 - compatible:	Should be "ti,dra7-pcie" for RC
 		Should be "ti,dra7-pcie-ep" for EP
 - phys : list of PHY specifiers (used by generic PHY framework)
···
 HOST MODE
 =========
 - reg : Two register ranges as listed in the reg-names property
-- reg-names : The first entry must be "ti-conf" for the TI specific registers
+- reg-names : The first entry must be "ti-conf" for the TI-specific registers
 	      The second entry must be "rc-dbics" for the DesignWare PCIe
 	      registers
 	      The third entry must be "config" for the PCIe configuration space
···
 DEVICE MODE
 ===========
 - reg : Four register ranges as listed in the reg-names property
-- reg-names : "ti-conf" for the TI specific registers
+- reg-names : "ti-conf" for the TI-specific registers
 	      "ep_dbics" for the standard configuration registers as
 	      they are locally accessed within the DIF CS space
 	      "ep_dbics2" for the standard configuration registers as
···
   access.

 Optional Property:
-- gpios : Should be added if a gpio line is required to drive PERST# line
+- gpios : Should be added if a GPIO line is required to drive PERST# line

 NOTE: Two DT nodes may be added for each PCI controller; one for host
 mode and another for device mode. So in order for PCI to
+1 -1
Documentation/devicetree/bindings/pci/versatile.txt
···
 Required properties:
 - compatible: should contain "arm,versatile-pci" to identify the Versatile PCI
   controller.
-- reg: base addresses and lengths of the pci controller. There must be 3
+- reg: base addresses and lengths of the PCI controller. There must be 3
   entries:
 	- Versatile-specific registers
 	- Self Config space
+3 -2
Documentation/devicetree/bindings/pci/xgene-pci-msi.txt
···
 - compatible: should be "apm,xgene1-msi" to identify
 	      X-Gene v1 PCIe MSI controller block.
-- msi-controller: indicates that this is X-Gene v1 PCIe MSI controller node
+- msi-controller: indicates that this is an X-Gene v1 PCIe MSI controller node
 - reg: physical base address (0x79000000) and length (0x900000) for controller
   registers. These registers include the MSI termination address and data
   registers as well as the MSI interrupt status registers.
···
   interrupt number 0x10 to 0x1f.
 - interrupt-names: not required

-Each PCIe node needs to have property msi-parent that points to msi controller node
+Each PCIe node needs to have property msi-parent that points to an MSI
+controller node

 Examples:
+4 -4
Documentation/devicetree/bindings/pci/xgene-pci.txt
···
   property.
 - reg-names: Must include the following entries:
   "csr": controller configuration registers.
-  "cfg": pcie configuration space registers.
+  "cfg": PCIe configuration space registers.
 - #address-cells: set to <3>
 - #size-cells: set to <2>
 - ranges: ranges for the outbound memory, I/O regions.
···
 Optional properties:
 - status: Either "ok" or "disabled".
-- dma-coherent: Present if dma operations are coherent
+- dma-coherent: Present if DMA operations are coherent

 Example:

-SoC specific DT Entry:
+SoC-specific DT Entry:

 	pcie0: pcie@1f2b0000 {
 		status = "disabled";
···
 	};


-Board specific DT Entry:
+Board-specific DT Entry:
 	&pcie0 {
 		status = "ok";
 	};
+4 -3
Documentation/devicetree/bindings/pci/xilinx-nwl-pcie.txt
···
 - device_type: must be "pci"
 - interrupts: Should contain NWL PCIe interrupt
 - interrupt-names: Must include the following entries:
-	"msi1, msi0": interrupt asserted when MSI is received
+	"msi1, msi0": interrupt asserted when an MSI is received
 	"intx": interrupt asserted when a legacy interrupt is received
-	"misc": interrupt asserted when miscellaneous is received
+	"misc": interrupt asserted when miscellaneous interrupt is received
 - interrupt-map-mask and interrupt-map: standard PCI properties to define the
   mapping of the PCI interface to interrupt numbers.
 - ranges: ranges for the PCI memory regions (I/O space region is not
···
   detailed explanation
 - msi-controller: indicates that this is MSI controller node
 - msi-parent: MSI parent of the root complex itself
-- legacy-interrupt-controller: Interrupt controller device node for Legacy interrupts
+- legacy-interrupt-controller: Interrupt controller device node for Legacy
+  interrupts
 - interrupt-controller: identifies the node as an interrupt controller
 - #interrupt-cells: should be set to 1
 - #address-cells: specifies the number of cells needed to encode an
+6 -1
Documentation/devicetree/bindings/phy/rockchip-pcie-phy.txt
···
 Required properties:
 - compatible: rockchip,rk3399-pcie-phy
-- #phy-cells: must be 0
 - clocks: Must contain an entry in clock-names.
 	See ../clocks/clock-bindings.txt for details.
 - clock-names: Must be "refclk"
 - resets: Must contain an entry in reset-names.
 	See ../reset/reset.txt for details.
 - reset-names: Must be "phy"
+
+Required properties for legacy PHY mode (deprecated):
+- #phy-cells: must be 0
+
+Required properties for per-lane PHY mode (preferred):
+- #phy-cells: must be 1

 Example:
+2 -1
MAINTAINERS
···
 PCI DRIVER FOR INTEL VOLUME MANAGEMENT DEVICE (VMD)
 M:	Keith Busch <keith.busch@intel.com>
+M:	Jonathan Derrick <jonathan.derrick@intel.com>
 L:	linux-pci@vger.kernel.org
 S:	Supported
 F:	drivers/pci/host/vmd.c
···
 S:	Maintained
 F:	drivers/pci/dwc/pci-exynos.c

-PCI DRIVER FOR SYNOPSIS DESIGNWARE
+PCI DRIVER FOR SYNOPSYS DESIGNWARE
 M:	Jingoo Han <jingoohan1@gmail.com>
 M:	Joao Pinto <Joao.Pinto@synopsys.com>
 L:	linux-pci@vger.kernel.org
+20 -7
arch/alpha/kernel/pci.c
···
 {
 	struct pci_controller *hose;
 	struct list_head resources;
+	struct pci_host_bridge *bridge;
 	struct pci_bus *bus;
-	int next_busno;
+	int ret, next_busno;
 	int need_domain_info = 0;
 	u32 pci_mem_end;
 	u32 sg_base;
···
 		pci_add_resource_offset(&resources, hose->mem_space,
 					hose->mem_space->start);

-		bus = pci_scan_root_bus(NULL, next_busno, alpha_mv.pci_ops,
-					hose, &resources);
-		if (!bus)
+		bridge = pci_alloc_host_bridge(0);
+		if (!bridge)
 			continue;
-		hose->bus = bus;
+
+		list_splice_init(&resources, &bridge->windows);
+		bridge->dev.parent = NULL;
+		bridge->sysdata = hose;
+		bridge->busnr = next_busno;
+		bridge->ops = alpha_mv.pci_ops;
+		bridge->swizzle_irq = alpha_mv.pci_swizzle;
+		bridge->map_irq = alpha_mv.pci_map_irq;
+
+		ret = pci_scan_root_bus_bridge(bridge);
+		if (ret) {
+			pci_free_host_bridge(bridge);
+			continue;
+		}
+
+		bus = hose->bus = bridge->bus;
 		hose->need_domain_info = need_domain_info;
 		next_busno = bus->busn_res.end + 1;
 		/* Don't allow 8-bit bus number overflow inside the hose -
···
 	pcibios_claim_console_setup();

 	pci_assign_unassigned_resources();
-	pci_fixup_irqs(alpha_mv.pci_swizzle, alpha_mv.pci_map_irq);
 	for (hose = hose_head; hose; hose = hose->next) {
 		bus = hose->bus;
 		if (bus)
 			pci_bus_add_devices(bus);
 	}
 }
-

 struct pci_controller * __init
 alloc_pci_controller(void)
+28 -5
arch/alpha/kernel/sys_nautilus.c
···
 	.name	= "Irongate PCI MEM",
 	.flags	= IORESOURCE_MEM,
 };
+static struct resource busn_resource = {
+	.name	= "PCI busn",
+	.start	= 0,
+	.end	= 255,
+	.flags	= IORESOURCE_BUS,
+};

 void __init
 nautilus_init_pci(void)
 {
 	struct pci_controller *hose = hose_head;
+	struct pci_host_bridge *bridge;
 	struct pci_bus *bus;
 	struct pci_dev *irongate;
 	unsigned long bus_align, bus_size, pci_mem;
 	unsigned long memtop = max_low_pfn << PAGE_SHIFT;
+	int ret;

-	/* Scan our single hose.  */
-	bus = pci_scan_bus(0, alpha_mv.pci_ops, hose);
-	if (!bus)
+	bridge = pci_alloc_host_bridge(0);
+	if (!bridge)
 		return;

-	hose->bus = bus;
+	pci_add_resource(&bridge->windows, &ioport_resource);
+	pci_add_resource(&bridge->windows, &iomem_resource);
+	pci_add_resource(&bridge->windows, &busn_resource);
+	bridge->dev.parent = NULL;
+	bridge->sysdata = hose;
+	bridge->busnr = 0;
+	bridge->ops = alpha_mv.pci_ops;
+	bridge->swizzle_irq = alpha_mv.pci_swizzle;
+	bridge->map_irq = alpha_mv.pci_map_irq;
+
+	/* Scan our single hose.  */
+	ret = pci_scan_root_bus_bridge(bridge);
+	if (ret) {
+		pci_free_host_bridge(bridge);
+		return;
+	}
+
+	bus = hose->bus = bridge->bus;
 	pcibios_claim_one_bus(bus);

 	irongate = pci_get_bus_and_slot(0, 0);
···
 	/* pci_common_swizzle() relies on bus->self being NULL
 	   for the root bus, so just clear it. */
 	bus->self = NULL;
-	pci_fixup_irqs(alpha_mv.pci_swizzle, alpha_mv.pci_map_irq);
 	pci_bus_add_devices(bus);
 }
-1
arch/arc/kernel/Makefile
···
 obj-y	+= signal.o traps.o sys.o troubleshoot.o stacktrace.o disasm.o
 obj-$(CONFIG_ISA_ARCOMPACT)		+= entry-compact.o intc-compact.o
 obj-$(CONFIG_ISA_ARCV2)			+= entry-arcv2.o intc-arcv2.o
-obj-$(CONFIG_PCI)			+= pcibios.o

 obj-$(CONFIG_MODULES)			+= arcksyms.o module.o
 obj-$(CONFIG_SMP)			+= smp.o
-22
arch/arc/kernel/pcibios.c
···
-/*
- * Copyright (C) 2014-2015 Synopsys, Inc. (www.synopsys.com)
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#include <linux/pci.h>
-
-/*
- * We don't have to worry about legacy ISA devices, so nothing to do here
- */
-resource_size_t pcibios_align_resource(void *data, const struct resource *res,
-				resource_size_t size, resource_size_t align)
-{
-	return res->start;
-}
-
-void pcibios_fixup_bus(struct pci_bus *bus)
-{
-}
+5 -3
arch/arm64/boot/dts/rockchip/rk3399.dtsi
···
 		linux,pci-domain = <0>;
 		max-link-speed = <1>;
 		msi-map = <0x0 &its 0x0 0x1000>;
-		phys = <&pcie_phy>;
-		phy-names = "pcie-phy";
+		phys = <&pcie_phy 0>, <&pcie_phy 1>,
+		       <&pcie_phy 2>, <&pcie_phy 3>;
+		phy-names = "pcie-phy-0", "pcie-phy-1",
+			    "pcie-phy-2", "pcie-phy-3";
 		ranges = <0x83000000 0x0 0xfa000000 0x0 0xfa000000 0x0 0x1e00000
 			  0x81000000 0x0 0xfbe00000 0x0 0xfbe00000 0x0 0x100000>;
 		resets = <&cru SRST_PCIE_CORE>, <&cru SRST_PCIE_MGMT>,
···
 		compatible = "rockchip,rk3399-pcie-phy";
 		clocks = <&cru SCLK_PCIEPHY_REF>;
 		clock-names = "refclk";
-		#phy-cells = <0>;
+		#phy-cells = <1>;
 		resets = <&cru SRST_PCIEPHY>;
 		reset-names = "phy";
 		status = "disabled";
-17
arch/arm64/kernel/pci.c
···
 #include <linux/pci-ecam.h>
 #include <linux/slab.h>

-/*
- * Called after each bus is probed, but before its children are examined
- */
-void pcibios_fixup_bus(struct pci_bus *bus)
-{
-	/* nothing to do, expected to be removed in the future */
-}
-
-/*
- * We don't have to worry about legacy ISA devices, so nothing to do here
- */
-resource_size_t pcibios_align_resource(void *data, const struct resource *res,
-				resource_size_t size, resource_size_t align)
-{
-	return res->start;
-}
-
 #ifdef CONFIG_ACPI
 /*
  * Try to assign the IRQ number when probing a new device
-4
arch/cris/arch-v32/drivers/pci/bios.c
···
 #include <linux/kernel.h>
 #include <hwregs/intr_vect.h>

-void pcibios_fixup_bus(struct pci_bus *b)
-{
-}
-
 void pcibios_set_master(struct pci_dev *dev)
 {
 	u8 lat;
-7
arch/ia64/pci/pci.c
···
 	acpi_pci_irq_disable(dev);
 }

-resource_size_t
-pcibios_align_resource (void *data, const struct resource *res,
-		        resource_size_t size, resource_size_t align)
-{
-	return res->start;
-}
-
 /**
  * ia64_pci_get_legacy_mem - generic legacy mem routine
  * @bus: bus to get legacy memory base address for
+32 -4
arch/m68k/coldfire/pci.c
···
 	.flags	= IORESOURCE_IO,
 };

+static struct resource busn_resource = {
+	.name	= "PCI busn",
+	.start	= 0,
+	.end	= 255,
+	.flags	= IORESOURCE_BUS,
+};
+
 /*
  * Interrupt mapping and setting.
  */
···
 static int __init mcf_pci_init(void)
 {
+	struct pci_host_bridge *bridge;
+	int ret;
+
+	bridge = pci_alloc_host_bridge(0);
+	if (!bridge)
+		return -ENOMEM;
+
 	pr_info("ColdFire: PCI bus initialization...\n");

 	/* Reset the external PCI bus */
···
 	set_current_state(TASK_UNINTERRUPTIBLE);
 	schedule_timeout(msecs_to_jiffies(200));

-	rootbus = pci_scan_bus(0, &mcf_pci_ops, NULL);
-	if (!rootbus)
-		return -ENODEV;
+
+	pci_add_resource(&bridge->windows, &ioport_resource);
+	pci_add_resource(&bridge->windows, &iomem_resource);
+	pci_add_resource(&bridge->windows, &busn_resource);
+	bridge->dev.parent = NULL;
+	bridge->sysdata = NULL;
+	bridge->busnr = 0;
+	bridge->ops = &mcf_pci_ops;
+	bridge->swizzle_irq = pci_common_swizzle;
+	bridge->map_irq = mcf_pci_map_irq;
+
+	ret = pci_scan_root_bus_bridge(bridge);
+	if (ret) {
+		pci_free_host_bridge(bridge);
+		return ret;
+	}
+
+	rootbus = bridge->bus;

 	rootbus->resource[0] = &mcf_pci_io;
 	rootbus->resource[1] = &mcf_pci_mem;

-	pci_fixup_irqs(pci_common_swizzle, mcf_pci_map_irq);
 	pci_bus_size_bridges(rootbus);
 	pci_bus_assign_resources(rootbus);
 	pci_bus_add_devices(rootbus);
-3
arch/microblaze/include/asm/pci.h
···
 #define HAVE_ARCH_PCI_RESOURCE_TO_USER

-extern void pcibios_setup_bus_devices(struct pci_bus *bus);
-extern void pcibios_setup_bus_self(struct pci_bus *bus);
-
 /* This part of code was originally in xilinx-pci.h */
 #ifdef CONFIG_PCI_XILINX
 extern void __init xilinx_pci_init(void);
-145
arch/microblaze/pci/pci-common.c
···
 }
 DECLARE_PCI_FIXUP_HEADER(PCI_ANY_ID, PCI_ANY_ID, pcibios_fixup_resources);

-/* This function tries to figure out if a bridge resource has been initialized
- * by the firmware or not. It doesn't have to be absolutely bullet proof, but
- * things go more smoothly when it gets it right. It should covers cases such
- * as Apple "closed" bridge resources and bare-metal pSeries unassigned bridges
- */
-static int pcibios_uninitialized_bridge_resource(struct pci_bus *bus,
-						 struct resource *res)
-{
-	struct pci_controller *hose = pci_bus_to_host(bus);
-	struct pci_dev *dev = bus->self;
-	resource_size_t offset;
-	u16 command;
-	int i;
-
-	/* Job is a bit different between memory and IO */
-	if (res->flags & IORESOURCE_MEM) {
-		/* If the BAR is non-0 (res != pci_mem_offset) then it's
-		 * probably been initialized by somebody
-		 */
-		if (res->start != hose->pci_mem_offset)
-			return 0;
-
-		/* The BAR is 0, let's check if memory decoding is enabled on
-		 * the bridge. If not, we consider it unassigned
-		 */
-		pci_read_config_word(dev, PCI_COMMAND, &command);
-		if ((command & PCI_COMMAND_MEMORY) == 0)
-			return 1;
-
-		/* Memory decoding is enabled and the BAR is 0. If any of
-		 * the bridge resources covers that starting address (0 then
-		 * it's good enough for us for memory
-		 */
-		for (i = 0; i < 3; i++) {
-			if ((hose->mem_resources[i].flags & IORESOURCE_MEM) &&
-			    hose->mem_resources[i].start == hose->pci_mem_offset)
-				return 0;
-		}
-
-		/* Well, it starts at 0 and we know it will collide so we may as
-		 * well consider it as unassigned. That covers the Apple case.
-		 */
-		return 1;
-	} else {
-		/* If the BAR is non-0, then we consider it assigned */
-		offset = (unsigned long)hose->io_base_virt - _IO_BASE;
-		if (((res->start - offset) & 0xfffffffful) != 0)
-			return 0;
-
-		/* Here, we are a bit different than memory as typically IO
-		 * space starting at low addresses -is- valid. What we do
-		 * instead if that we consider as unassigned anything that
-		 * doesn't have IO enabled in the PCI command register,
-		 * and that's it.
-		 */
-		pci_read_config_word(dev, PCI_COMMAND, &command);
-		if (command & PCI_COMMAND_IO)
-			return 0;
-
-		/* It's starting at 0 and IO is disabled in the bridge, consider
-		 * it unassigned
-		 */
-		return 1;
-	}
-}
-
-/* Fixup resources of a PCI<->PCI bridge */
-static void pcibios_fixup_bridge(struct pci_bus *bus)
-{
-	struct resource *res;
-	int i;
-
-	struct pci_dev *dev = bus->self;
-
-	pci_bus_for_each_resource(bus, res, i) {
-		if (!res)
-			continue;
-		if (!res->flags)
-			continue;
-		if (i >= 3 && bus->self->transparent)
-			continue;
-
-		pr_debug("PCI:%s Bus rsrc %d %016llx-%016llx [%x] fixup...\n",
-			 pci_name(dev), i,
-			 (unsigned long long)res->start,
-			 (unsigned long long)res->end,
-			 (unsigned int)res->flags);
-
-		/* Try to detect uninitialized P2P bridge resources,
-		 * and clear them out so they get re-assigned later
-		 */
-		if (pcibios_uninitialized_bridge_resource(bus, res)) {
-			res->flags = 0;
-			pr_debug("PCI:%s (unassigned)\n",
-				 pci_name(dev));
-		} else {
-			pr_debug("PCI:%s %016llx-%016llx\n",
-				 pci_name(dev),
-				 (unsigned long long)res->start,
-				 (unsigned long long)res->end);
-		}
-	}
-}
-
-void pcibios_setup_bus_self(struct pci_bus *bus)
-{
-	/* Fix up the bus resources for P2P bridges */
-	if (bus->self != NULL)
-		pcibios_fixup_bridge(bus);
-}
-
-void pcibios_setup_bus_devices(struct pci_bus *bus)
-{
-	struct pci_dev *dev;
-
-	pr_debug("PCI: Fixup bus devices %d (%s)\n",
-		 bus->number, bus->self ? pci_name(bus->self) : "PHB");
-
-	list_for_each_entry(dev, &bus->devices, bus_list) {
-		/* Setup OF node pointer in archdata */
-		dev->dev.of_node = pci_device_to_OF_node(dev);
-
-		/* Fixup NUMA node as it may not be setup yet by the generic
-		 * code and is needed by the DMA init
-		 */
-		set_dev_node(&dev->dev, pcibus_to_node(dev->bus));
-
-		/* Read default IRQs and fixup if necessary */
-		dev->irq = of_irq_parse_and_map_pci(dev, 0, 0);
-	}
-}
-
-void pcibios_fixup_bus(struct pci_bus *bus)
-{
-	/* nothing to do */
-}
-EXPORT_SYMBOL(pcibios_fixup_bus);
-
 /*
  * We need to avoid collisions with `mirrored' VGA ports
  * and other strange ISA hardware, so we always want the
···
  * but we want to try to avoid allocating at 0x2900-0x2bff
  * which might have be mirrored at 0x0100-0x03ff..
  */
-resource_size_t pcibios_align_resource(void *data, const struct resource *res,
-				resource_size_t size, resource_size_t align)
-{
-	return res->start;
-}
-EXPORT_SYMBOL(pcibios_align_resource);
-
 int pcibios_add_device(struct pci_dev *dev)
 {
 	dev->irq = of_irq_parse_and_map_pci(dev, 0, 0);
+20 -10
arch/mips/pci/pci-legacy.c
···
 	static int need_domain_info;
 	LIST_HEAD(resources);
 	struct pci_bus *bus;
+	struct pci_host_bridge *bridge;
+	int ret;
+
+	bridge = pci_alloc_host_bridge(0);
+	if (!bridge)
+		return;

 	if (hose->get_busno && pci_has_flag(PCI_PROBE_ONLY))
 		next_busno = (*hose->get_busno)();
···
 	pci_add_resource_offset(&resources,
 				hose->io_resource, hose->io_offset);
 	pci_add_resource(&resources, hose->busn_resource);
-	bus = pci_scan_root_bus(NULL, next_busno, hose->pci_ops, hose,
-				&resources);
-	hose->bus = bus;
+	list_splice_init(&resources, &bridge->windows);
+	bridge->dev.parent = NULL;
+	bridge->sysdata = hose;
+	bridge->busnr = next_busno;
+	bridge->ops = hose->pci_ops;
+	bridge->swizzle_irq = pci_common_swizzle;
+	bridge->map_irq = pcibios_map_irq;
+	ret = pci_scan_root_bus_bridge(bridge);
+	if (ret) {
+		pci_free_host_bridge(bridge);
+		return;
+	}
+
+	hose->bus = bus = bridge->bus;

 	need_domain_info = need_domain_info || pci_domain_nr(bus);
 	set_pci_need_domain_info(hose, need_domain_info);
-
-	if (!bus) {
-		pci_free_resource_list(&resources);
-		return;
-	}

 	next_busno = bus->busn_res.end + 1;
 	/* Don't allow 8-bit bus number overflow inside the hose -
···
 	/* Scan all of the recorded PCI controllers.  */
 	list_for_each_entry(hose, &controllers, list)
 		pcibios_scanbus(hose);
-
-	pci_fixup_irqs(pci_common_swizzle, pcibios_map_irq);

 	pci_initialized = 1;
-4
arch/s390/pci/pci.c
···
 	return rc;
 }

-void pcibios_fixup_bus(struct pci_bus *bus)
-{
-}
-
 resource_size_t pcibios_align_resource(void *data, const struct resource *res,
 				       resource_size_t size,
 				       resource_size_t align)
+1 -1
arch/sh/drivers/pci/fixups-cayman.c
···
 #include <cpu/irq.h>
 #include "pci-sh5.h"

-int __init pcibios_map_platform_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+int pcibios_map_platform_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 {
 	int result = -1;
+1 -1
arch/sh/drivers/pci/fixups-dreamcast.c
···
 }
 DECLARE_PCI_FIXUP_HEADER(PCI_ANY_ID, PCI_ANY_ID, gapspci_fixup_resources);

-int __init pcibios_map_platform_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+int pcibios_map_platform_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 {
 	/*
 	 * The interrupt routing semantics here are quite trivial.
+1 -1
arch/sh/drivers/pci/fixups-r7780rp.c
···
 #include <linux/sh_intc.h>
 #include "pci-sh4.h"

-int __init pcibios_map_platform_irq(const struct pci_dev *pdev, u8 slot, u8 pin)
+int pcibios_map_platform_irq(const struct pci_dev *pdev, u8 slot, u8 pin)
 {
 	return evt2irq(0xa20) + slot;
 }
+3 -3
arch/sh/drivers/pci/fixups-rts7751r2d.c
···
 #define PCIMCR_MRSET_OFF	0xBFFFFFFF
 #define PCIMCR_RFSH_OFF		0xFFFFFFFB

-static u8 rts7751r2d_irq_tab[] __initdata = {
+static u8 rts7751r2d_irq_tab[] = {
 	IRQ_PCI_INTA,
 	IRQ_PCI_INTB,
 	IRQ_PCI_INTC,
 	IRQ_PCI_INTD,
 };

-static char lboxre2_irq_tab[] __initdata = {
+static char lboxre2_irq_tab[] = {
 	IRQ_ETH0, IRQ_ETH1, IRQ_INTA, IRQ_INTD,
 };

-int __init pcibios_map_platform_irq(const struct pci_dev *pdev, u8 slot, u8 pin)
+int pcibios_map_platform_irq(const struct pci_dev *pdev, u8 slot, u8 pin)
 {
 	if (mach_is_lboxre2())
 		return lboxre2_irq_tab[slot];
+2 -2
arch/sh/drivers/pci/fixups-sdk7780.c
···
 #define IRQ_INTD	evt2irq(0xa80)

 /* IDSEL [16][17][18][19][20][21][22][23][24][25][26][27][28][29][30][31] */
-static char sdk7780_irq_tab[4][16] __initdata = {
+static char sdk7780_irq_tab[4][16] = {
 	/* INTA */
 	{ IRQ_INTA, IRQ_INTD, IRQ_INTC, IRQ_INTD, -1, -1, -1, -1, -1, -1,
 	  -1, -1, -1, -1, -1, -1 },
···
 	  -1, -1, -1 },
 };

-int __init pcibios_map_platform_irq(const struct pci_dev *pdev, u8 slot, u8 pin)
+int pcibios_map_platform_irq(const struct pci_dev *pdev, u8 slot, u8 pin)
 {
 	return sdk7780_irq_tab[pin-1][slot];
 }
+1 -1
arch/sh/drivers/pci/fixups-se7751.c
···
 #include <linux/sh_intc.h>
 #include "pci-sh4.h"

-int __init pcibios_map_platform_irq(const struct pci_dev *, u8 slot, u8 pin)
+int pcibios_map_platform_irq(const struct pci_dev *, u8 slot, u8 pin)
 {
 	switch (slot) {
 	case 0: return evt2irq(0x3a0);
+1 -1
arch/sh/drivers/pci/fixups-sh03.c
···
 #include <linux/pci.h>
 #include <linux/sh_intc.h>

-int __init pcibios_map_platform_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+int pcibios_map_platform_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 {
 	int irq;
+1 -1
arch/sh/drivers/pci/fixups-snapgear.c
···
 #include <linux/sh_intc.h>
 #include "pci-sh4.h"

-int __init pcibios_map_platform_irq(const struct pci_dev *pdev, u8 slot, u8 pin)
+int pcibios_map_platform_irq(const struct pci_dev *pdev, u8 slot, u8 pin)
 {
 	int irq = -1;
+2 -2
arch/sh/drivers/pci/fixups-titan.c
··· 19 19 #include <mach/titan.h> 20 20 #include "pci-sh4.h" 21 21 22 - static char titan_irq_tab[] __initdata = { 22 + static char titan_irq_tab[] = { 23 23 TITAN_IRQ_WAN, 24 24 TITAN_IRQ_LAN, 25 25 TITAN_IRQ_MPCIA, ··· 27 27 TITAN_IRQ_USB, 28 28 }; 29 29 30 - int __init pcibios_map_platform_irq(const struct pci_dev *pdev, u8 slot, u8 pin) 30 + int pcibios_map_platform_irq(const struct pci_dev *pdev, u8 slot, u8 pin) 31 31 { 32 32 int irq = titan_irq_tab[slot]; 33 33
+25 -24
arch/sh/drivers/pci/pci.c
··· 39 39 LIST_HEAD(resources); 40 40 struct resource *res; 41 41 resource_size_t offset; 42 - int i; 43 - struct pci_bus *bus; 42 + int i, ret; 43 + struct pci_host_bridge *bridge; 44 + 45 + bridge = pci_alloc_host_bridge(0); 46 + if (!bridge) 47 + return; 44 48 45 49 for (i = 0; i < hose->nr_resources; i++) { 46 50 res = hose->resources + i; ··· 56 52 pci_add_resource_offset(&resources, res, offset); 57 53 } 58 54 59 - bus = pci_scan_root_bus(NULL, next_busno, hose->pci_ops, hose, 60 - &resources); 61 - hose->bus = bus; 55 + list_splice_init(&resources, &bridge->windows); 56 + bridge->dev.parent = NULL; 57 + bridge->sysdata = hose; 58 + bridge->busnr = next_busno; 59 + bridge->ops = hose->pci_ops; 60 + bridge->swizzle_irq = pci_common_swizzle; 61 + bridge->map_irq = pcibios_map_platform_irq; 62 + 63 + ret = pci_scan_root_bus_bridge(bridge); 64 + if (ret) { 65 + pci_free_host_bridge(bridge); 66 + return; 67 + } 68 + 69 + hose->bus = bridge->bus; 62 70 63 71 need_domain_info = need_domain_info || hose->index; 64 72 hose->need_domain_info = need_domain_info; 65 73 66 - if (!bus) { 67 - pci_free_resource_list(&resources); 68 - return; 69 - } 70 - 71 - next_busno = bus->busn_res.end + 1; 74 + next_busno = hose->bus->busn_res.end + 1; 72 75 /* Don't allow 8-bit bus number overflow inside the hose - 73 76 reserve some space for bridges. 
*/ 74 77 if (next_busno > 224) { ··· 83 72 need_domain_info = 1; 84 73 } 85 74 86 - pci_bus_size_bridges(bus); 87 - pci_bus_assign_resources(bus); 88 - pci_bus_add_devices(bus); 75 + pci_bus_size_bridges(hose->bus); 76 + pci_bus_assign_resources(hose->bus); 77 + pci_bus_add_devices(hose->bus); 89 78 } 90 79 91 80 /* ··· 155 144 for (hose = hose_head; hose; hose = hose->next) 156 145 pcibios_scanbus(hose); 157 146 158 - pci_fixup_irqs(pci_common_swizzle, pcibios_map_platform_irq); 159 - 160 147 dma_debug_add_bus(&pci_bus_type); 161 148 162 149 pci_initialized = 1; ··· 162 153 return 0; 163 154 } 164 155 subsys_initcall(pcibios_init); 165 - 166 - /* 167 - * Called after each bus is probed, but before its children 168 - * are examined. 169 - */ 170 - void pcibios_fixup_bus(struct pci_bus *bus) 171 - { 172 - } 173 156 174 157 /* 175 158 * We need to avoid collisions with `mirrored' VGA ports
+1 -1
arch/sh/drivers/pci/pcie-sh7786.c
··· 467 467 return 0; 468 468 } 469 469 470 - int __init pcibios_map_platform_irq(const struct pci_dev *pdev, u8 slot, u8 pin) 470 + int pcibios_map_platform_irq(const struct pci_dev *pdev, u8 slot, u8 pin) 471 471 { 472 472 return evt2irq(0xae0); 473 473 }
+18 -12
arch/sparc/kernel/leon_pci.c
··· 25 25 { 26 26 LIST_HEAD(resources); 27 27 struct pci_bus *root_bus; 28 + struct pci_host_bridge *bridge; 29 + int ret; 30 + 31 + bridge = pci_alloc_host_bridge(0); 32 + if (!bridge) 33 + return; 28 34 29 35 pci_add_resource_offset(&resources, &info->io_space, 30 36 info->io_space.start - 0x1000); ··· 38 32 info->busn.flags = IORESOURCE_BUS; 39 33 pci_add_resource(&resources, &info->busn); 40 34 41 - root_bus = pci_scan_root_bus(&ofdev->dev, 0, info->ops, info, 42 - &resources); 43 - if (!root_bus) { 44 - pci_free_resource_list(&resources); 35 + list_splice_init(&resources, &bridge->windows); 36 + bridge->dev.parent = &ofdev->dev; 37 + bridge->sysdata = info; 38 + bridge->busnr = 0; 39 + bridge->ops = info->ops; 40 + bridge->swizzle_irq = pci_common_swizzle; 41 + bridge->map_irq = info->map_irq; 42 + 43 + ret = pci_scan_root_bus_bridge(bridge); 44 + if (ret) { 45 + pci_free_host_bridge(bridge); 45 46 return; 46 47 } 47 48 48 - /* Setup IRQs of all devices using custom routines */ 49 - pci_fixup_irqs(pci_common_swizzle, info->map_irq); 49 + root_bus = bridge->bus; 50 50 51 51 /* Assign devices with resources */ 52 52 pci_assign_unassigned_resources(); ··· 105 93 cmd); 106 94 } 107 95 } 108 - } 109 - 110 - resource_size_t pcibios_align_resource(void *data, const struct resource *res, 111 - resource_size_t size, resource_size_t align) 112 - { 113 - return res->start; 114 96 }
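The same conversion recurs across these arch files: allocate a struct pci_host_bridge, fill in the windows, sysdata, busnr, ops, and the new swizzle_irq/map_irq hooks, call pci_scan_root_bus_bridge(), and free the bridge only on failure. The control flow can be exercised outside the kernel with mock types (every name below is a hypothetical stand-in, not the real kernel API):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical mock; the real struct pci_host_bridge is far richer. */
struct mock_bridge {
	void *sysdata;
	int busnr;
};

static struct mock_bridge *mock_alloc_bridge(void)
{
	return calloc(1, sizeof(struct mock_bridge));
}

static void mock_free_bridge(struct mock_bridge *b)
{
	free(b);
}

/* Pretend root-bus scan: reject bus numbers an 8-bit busnr can't hold. */
static int mock_scan_root_bus_bridge(struct mock_bridge *b)
{
	return (b->busnr < 0 || b->busnr > 255) ? -1 : 0;
}

/*
 * The probe shape shared by the sh/sparc/tile/unicore32 hunks:
 * allocate, populate, scan, and free the bridge only on the error
 * path -- on success it stays alive as part of the registered bus.
 */
static int mock_probe_host(void *sysdata, int busnr)
{
	struct mock_bridge *bridge = mock_alloc_bridge();
	int ret;

	if (!bridge)
		return -1;

	bridge->sysdata = sysdata;
	bridge->busnr = busnr;

	ret = mock_scan_root_bus_bridge(bridge);
	if (ret) {
		mock_free_bridge(bridge);
		return ret;
	}
	return 0;
}
```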
-10
arch/sparc/kernel/pci.c
··· 690 690 return bus; 691 691 } 692 692 693 - void pcibios_fixup_bus(struct pci_bus *pbus) 694 - { 695 - } 696 - 697 - resource_size_t pcibios_align_resource(void *data, const struct resource *res, 698 - resource_size_t size, resource_size_t align) 699 - { 700 - return res->start; 701 - } 702 - 703 693 int pcibios_enable_device(struct pci_dev *dev, int mask) 704 694 { 705 695 u16 cmd, oldcmd;
-6
arch/sparc/kernel/pcic.c
··· 746 746 } 747 747 #endif 748 748 749 - resource_size_t pcibios_align_resource(void *data, const struct resource *res, 750 - resource_size_t size, resource_size_t align) 751 - { 752 - return res->start; 753 - } 754 - 755 749 int pcibios_enable_device(struct pci_dev *pdev, int mask) 756 750 { 757 751 return 0;
+16 -23
arch/tile/kernel/pci.c
··· 67 67 68 68 69 69 /* 70 - * We don't need to worry about the alignment of resources. 71 - */ 72 - resource_size_t pcibios_align_resource(void *data, const struct resource *res, 73 - resource_size_t size, resource_size_t align) 74 - { 75 - return res->start; 76 - } 77 - EXPORT_SYMBOL(pcibios_align_resource); 78 - 79 - /* 80 70 * Open a FD to the hypervisor PCI device. 81 71 * 82 72 * controller_id is the controller number, config type is 0 or 1 for ··· 264 274 */ 265 275 int __init pcibios_init(void) 266 276 { 277 + struct pci_host_bridge *bridge; 267 278 int i; 268 279 269 280 pr_info("PCI: Probing PCI hardware\n"); ··· 297 306 298 307 pci_add_resource(&resources, &ioport_resource); 299 308 pci_add_resource(&resources, &iomem_resource); 300 - bus = pci_scan_root_bus(NULL, 0, controller->ops, 301 - controller, &resources); 309 + 310 + bridge = pci_alloc_host_bridge(0); 311 + if (!bridge) 312 + break; 313 + 314 + list_splice_init(&resources, &bridge->windows); 315 + bridge->dev.parent = NULL; 316 + bridge->sysdata = controller; 317 + bridge->busnr = 0; 318 + bridge->ops = controller->ops; 319 + bridge->swizzle_irq = pci_common_swizzle; 320 + bridge->map_irq = tile_map_irq; 321 + 322 + pci_scan_root_bus_bridge(bridge); 323 + bus = bridge->bus; 302 324 controller->root_bus = bus; 303 325 controller->last_busno = bus->busn_res.end; 304 326 } 305 327 } 306 - 307 - /* Do machine dependent PCI interrupt routing */ 308 - pci_fixup_irqs(pci_common_swizzle, tile_map_irq); 309 328 310 329 /* 311 330 * This comes from the generic Linux PCI driver. ··· 369 368 return 0; 370 369 } 371 370 subsys_initcall(pcibios_init); 372 - 373 - /* 374 - * No bus fixups needed. 375 - */ 376 - void pcibios_fixup_bus(struct pci_bus *bus) 377 - { 378 - /* Nothing needs to be done. */ 379 - } 380 371 381 372 void pcibios_set_master(struct pci_dev *dev) 382 373 {
+16 -19
arch/tile/kernel/pci_gx.c
··· 108 108 /* Mask of CPUs that should receive PCIe interrupts. */ 109 109 static struct cpumask intr_cpus_map; 110 110 111 - /* We don't need to worry about the alignment of resources. */ 112 - resource_size_t pcibios_align_resource(void *data, const struct resource *res, 113 - resource_size_t size, 114 - resource_size_t align) 115 - { 116 - return res->start; 117 - } 118 - EXPORT_SYMBOL(pcibios_align_resource); 119 - 120 111 /* 121 112 * Pick a CPU to receive and handle the PCIe interrupts, based on the IRQ #. 122 113 * For now, we simply send interrupts to non-dataplane CPUs. ··· 660 669 resource_size_t offset; 661 670 LIST_HEAD(resources); 662 671 int next_busno; 672 + struct pci_host_bridge *bridge; 663 673 int i; 664 674 665 675 tile_pci_init(); ··· 873 881 controller->mem_offset); 874 882 pci_add_resource(&resources, &controller->io_space); 875 883 controller->first_busno = next_busno; 876 - bus = pci_scan_root_bus(NULL, next_busno, controller->ops, 877 - controller, &resources); 884 + 885 + bridge = pci_alloc_host_bridge(0); 886 + if (!bridge) 887 + break; 888 + 889 + list_splice_init(&resources, &bridge->windows); 890 + bridge->dev.parent = NULL; 891 + bridge->sysdata = controller; 892 + bridge->busnr = next_busno; 893 + bridge->ops = controller->ops; 894 + bridge->swizzle_irq = pci_common_swizzle; 895 + bridge->map_irq = tile_map_irq; 896 + 897 + pci_scan_root_bus_bridge(bridge); 898 + bus = bridge->bus; 878 899 controller->root_bus = bus; 879 900 next_busno = bus->busn_res.end + 1; 880 901 } 881 - 882 - /* Do machine dependent PCI interrupt routing */ 883 - pci_fixup_irqs(pci_common_swizzle, tile_map_irq); 884 902 885 903 /* 886 904 * This comes from the generic Linux PCI driver. ··· 1039 1037 return 0; 1040 1038 } 1041 1039 subsys_initcall(pcibios_init); 1042 - 1043 - /* No bus fixups needed. */ 1044 - void pcibios_fixup_bus(struct pci_bus *bus) 1045 - { 1046 - } 1047 1040 1048 1041 /* Process any "pci=" kernel boot arguments. 
*/ 1049 1042 char *__init pcibios_setup(char *str)
+31 -4
arch/unicore32/kernel/pci.c
··· 101 101 writel(readl(PCIBRI_CMD) | PCIBRI_CMD_IO | PCIBRI_CMD_MEM, PCIBRI_CMD); 102 102 } 103 103 104 - static int __init pci_puv3_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 104 + static int pci_puv3_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 105 105 { 106 106 if (dev->bus->number == 0) { 107 107 #ifdef CONFIG_ARCH_FPGA /* 4 pci slots */ ··· 252 252 } 253 253 EXPORT_SYMBOL(pcibios_fixup_bus); 254 254 255 + static struct resource busn_resource = { 256 + .name = "PCI busn", 257 + .start = 0, 258 + .end = 255, 259 + .flags = IORESOURCE_BUS, 260 + }; 261 + 255 262 static int __init pci_common_init(void) 256 263 { 257 264 struct pci_bus *puv3_bus; 265 + struct pci_host_bridge *bridge; 266 + int ret; 267 + 268 + bridge = pci_alloc_host_bridge(0); 269 + if (!bridge) 270 + return -ENOMEM; 258 271 259 272 pci_puv3_preinit(); 260 273 261 - puv3_bus = pci_scan_bus(0, &pci_puv3_ops, NULL); 274 + pci_add_resource(&bridge->windows, &ioport_resource); 275 + pci_add_resource(&bridge->windows, &iomem_resource); 276 + pci_add_resource(&bridge->windows, &busn_resource); 277 + bridge->sysdata = NULL; 278 + bridge->busnr = 0; 279 + bridge->ops = &pci_puv3_ops; 280 + bridge->swizzle_irq = pci_common_swizzle; 281 + bridge->map_irq = pci_puv3_map_irq; 282 + 283 + /* Scan our single hose. */ 284 + ret = pci_scan_root_bus_bridge(bridge); 285 + if (ret) { 286 + pci_free_host_bridge(bridge); 287 + return ret; 288 + } 289 + 290 + puv3_bus = bridge->bus; 262 291 263 292 if (!puv3_bus) 264 293 panic("PCI: unable to scan bus!"); 265 - 266 - pci_fixup_irqs(pci_common_swizzle, pci_puv3_map_irq); 267 294 268 295 pci_bus_size_bridges(puv3_bus); 269 296 pci_bus_assign_resources(puv3_bus);
+17
arch/x86/pci/fixup.c
··· 618 618 dev_info(dev, "can't work around MacBook Pro poweroff issue\n"); 619 619 } 620 620 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x8c10, quirk_apple_mbp_poweroff); 621 + 622 + /* 623 + * VMD-enabled root ports will change the source ID for all messages 624 + * to the VMD device. Rather than doing device matching with the source 625 + * ID, the AER driver should traverse the child device tree, reading 626 + * AER registers to find the faulting device. 627 + */ 628 + static void quirk_no_aersid(struct pci_dev *pdev) 629 + { 630 + /* VMD Domain */ 631 + if (is_vmd(pdev->bus)) 632 + pdev->bus->bus_flags |= PCI_BUS_FLAGS_NO_AERSID; 633 + } 634 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2030, quirk_no_aersid); 635 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2031, quirk_no_aersid); 636 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2032, quirk_no_aersid); 637 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2033, quirk_no_aersid);
+7
drivers/iommu/intel-iommu.c
··· 901 901 struct pci_dev *pf_pdev; 902 902 903 903 pdev = to_pci_dev(dev); 904 + 905 + #ifdef CONFIG_X86 906 + /* VMD child devices currently cannot be handled individually */ 907 + if (is_vmd(pdev->bus)) 908 + return NULL; 909 + #endif 910 + 904 911 /* VFs aren't listed in scope tables; we need to look up 905 912 * the PF instead to find the IOMMU. */ 906 913 pf_pdev = pci_physfn(pdev);
+109 -23
drivers/misc/pci_endpoint_test.c
··· 72 72 73 73 #define to_endpoint_test(priv) container_of((priv), struct pci_endpoint_test, \ 74 74 miscdev) 75 + 76 + static bool no_msi; 77 + module_param(no_msi, bool, 0444); 78 + MODULE_PARM_DESC(no_msi, "Disable MSI interrupt in pci_endpoint_test"); 79 + 75 80 enum pci_barno { 76 81 BAR_0, 77 82 BAR_1, ··· 95 90 /* mutex to protect the ioctls */ 96 91 struct mutex mutex; 97 92 struct miscdevice miscdev; 93 + enum pci_barno test_reg_bar; 94 + size_t alignment; 98 95 }; 99 96 100 - static int bar_size[] = { 4, 512, 1024, 16384, 131072, 1048576 }; 97 + struct pci_endpoint_test_data { 98 + enum pci_barno test_reg_bar; 99 + size_t alignment; 100 + bool no_msi; 101 + }; 101 102 102 103 static inline u32 pci_endpoint_test_readl(struct pci_endpoint_test *test, 103 104 u32 offset) ··· 152 141 int j; 153 142 u32 val; 154 143 int size; 144 + struct pci_dev *pdev = test->pdev; 155 145 156 146 if (!test->bar[barno]) 157 147 return false; 158 148 159 - size = bar_size[barno]; 149 + size = pci_resource_len(pdev, barno); 150 + 151 + if (barno == test->test_reg_bar) 152 + size = 0x4; 160 153 161 154 for (j = 0; j < size; j += 4) 162 155 pci_endpoint_test_bar_writel(test, barno, j, 0xA0A0A0A0); ··· 217 202 dma_addr_t dst_phys_addr; 218 203 struct pci_dev *pdev = test->pdev; 219 204 struct device *dev = &pdev->dev; 205 + void *orig_src_addr; 206 + dma_addr_t orig_src_phys_addr; 207 + void *orig_dst_addr; 208 + dma_addr_t orig_dst_phys_addr; 209 + size_t offset; 210 + size_t alignment = test->alignment; 220 211 u32 src_crc32; 221 212 u32 dst_crc32; 222 213 223 - src_addr = dma_alloc_coherent(dev, size, &src_phys_addr, GFP_KERNEL); 224 - if (!src_addr) { 214 + orig_src_addr = dma_alloc_coherent(dev, size + alignment, 215 + &orig_src_phys_addr, GFP_KERNEL); 216 + if (!orig_src_addr) { 225 217 dev_err(dev, "failed to allocate source buffer\n"); 226 218 ret = false; 227 219 goto err; 220 + } 221 + 222 + if (alignment && !IS_ALIGNED(orig_src_phys_addr, alignment)) { 223 + 
src_phys_addr = PTR_ALIGN(orig_src_phys_addr, alignment); 224 + offset = src_phys_addr - orig_src_phys_addr; 225 + src_addr = orig_src_addr + offset; 226 + } else { 227 + src_phys_addr = orig_src_phys_addr; 228 + src_addr = orig_src_addr; 228 229 } 229 230 230 231 pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_LOWER_SRC_ADDR, ··· 252 221 get_random_bytes(src_addr, size); 253 222 src_crc32 = crc32_le(~0, src_addr, size); 254 223 255 - dst_addr = dma_alloc_coherent(dev, size, &dst_phys_addr, GFP_KERNEL); 256 - if (!dst_addr) { 224 + orig_dst_addr = dma_alloc_coherent(dev, size + alignment, 225 + &orig_dst_phys_addr, GFP_KERNEL); 226 + if (!orig_dst_addr) { 257 227 dev_err(dev, "failed to allocate destination address\n"); 258 228 ret = false; 259 - goto err_src_addr; 229 + goto err_orig_src_addr; 230 + } 231 + 232 + if (alignment && !IS_ALIGNED(orig_dst_phys_addr, alignment)) { 233 + dst_phys_addr = PTR_ALIGN(orig_dst_phys_addr, alignment); 234 + offset = dst_phys_addr - orig_dst_phys_addr; 235 + dst_addr = orig_dst_addr + offset; 236 + } else { 237 + dst_phys_addr = orig_dst_phys_addr; 238 + dst_addr = orig_dst_addr; 260 239 } 261 240 262 241 pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_LOWER_DST_ADDR, ··· 286 245 if (dst_crc32 == src_crc32) 287 246 ret = true; 288 247 289 - dma_free_coherent(dev, size, dst_addr, dst_phys_addr); 248 + dma_free_coherent(dev, size + alignment, orig_dst_addr, 249 + orig_dst_phys_addr); 290 250 291 - err_src_addr: 292 - dma_free_coherent(dev, size, src_addr, src_phys_addr); 251 + err_orig_src_addr: 252 + dma_free_coherent(dev, size + alignment, orig_src_addr, 253 + orig_src_phys_addr); 293 254 294 255 err: 295 256 return ret; ··· 305 262 dma_addr_t phys_addr; 306 263 struct pci_dev *pdev = test->pdev; 307 264 struct device *dev = &pdev->dev; 265 + void *orig_addr; 266 + dma_addr_t orig_phys_addr; 267 + size_t offset; 268 + size_t alignment = test->alignment; 308 269 u32 crc32; 309 270 310 - addr = dma_alloc_coherent(dev, size, 
&phys_addr, GFP_KERNEL); 311 - if (!addr) { 271 + orig_addr = dma_alloc_coherent(dev, size + alignment, &orig_phys_addr, 272 + GFP_KERNEL); 273 + if (!orig_addr) { 312 274 dev_err(dev, "failed to allocate address\n"); 313 275 ret = false; 314 276 goto err; 277 + } 278 + 279 + if (alignment && !IS_ALIGNED(orig_phys_addr, alignment)) { 280 + phys_addr = PTR_ALIGN(orig_phys_addr, alignment); 281 + offset = phys_addr - orig_phys_addr; 282 + addr = orig_addr + offset; 283 + } else { 284 + phys_addr = orig_phys_addr; 285 + addr = orig_addr; 315 286 } 316 287 317 288 get_random_bytes(addr, size); ··· 350 293 if (reg & STATUS_READ_SUCCESS) 351 294 ret = true; 352 295 353 - dma_free_coherent(dev, size, addr, phys_addr); 296 + dma_free_coherent(dev, size + alignment, orig_addr, orig_phys_addr); 354 297 355 298 err: 356 299 return ret; ··· 363 306 dma_addr_t phys_addr; 364 307 struct pci_dev *pdev = test->pdev; 365 308 struct device *dev = &pdev->dev; 309 + void *orig_addr; 310 + dma_addr_t orig_phys_addr; 311 + size_t offset; 312 + size_t alignment = test->alignment; 366 313 u32 crc32; 367 314 368 - addr = dma_alloc_coherent(dev, size, &phys_addr, GFP_KERNEL); 369 - if (!addr) { 315 + orig_addr = dma_alloc_coherent(dev, size + alignment, &orig_phys_addr, 316 + GFP_KERNEL); 317 + if (!orig_addr) { 370 318 dev_err(dev, "failed to allocate destination address\n"); 371 319 ret = false; 372 320 goto err; 321 + } 322 + 323 + if (alignment && !IS_ALIGNED(orig_phys_addr, alignment)) { 324 + phys_addr = PTR_ALIGN(orig_phys_addr, alignment); 325 + offset = phys_addr - orig_phys_addr; 326 + addr = orig_addr + offset; 327 + } else { 328 + phys_addr = orig_phys_addr; 329 + addr = orig_addr; 373 330 } 374 331 375 332 pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_LOWER_DST_ADDR, ··· 402 331 if (crc32 == pci_endpoint_test_readl(test, PCI_ENDPOINT_TEST_CHECKSUM)) 403 332 ret = true; 404 333 405 - dma_free_coherent(dev, size, addr, phys_addr); 334 + dma_free_coherent(dev, size + alignment, 
orig_addr, orig_phys_addr); 406 335 err: 407 336 return ret; 408 337 } ··· 454 383 { 455 384 int i; 456 385 int err; 457 - int irq; 386 + int irq = 0; 458 387 int id; 459 388 char name[20]; 460 389 enum pci_barno bar; 461 390 void __iomem *base; 462 391 struct device *dev = &pdev->dev; 463 392 struct pci_endpoint_test *test; 393 + struct pci_endpoint_test_data *data; 394 + enum pci_barno test_reg_bar = BAR_0; 464 395 struct miscdevice *misc_device; 465 396 466 397 if (pci_is_bridge(pdev)) ··· 472 399 if (!test) 473 400 return -ENOMEM; 474 401 402 + test->test_reg_bar = 0; 403 + test->alignment = 0; 475 404 test->pdev = pdev; 405 + 406 + data = (struct pci_endpoint_test_data *)ent->driver_data; 407 + if (data) { 408 + test_reg_bar = data->test_reg_bar; 409 + test->alignment = data->alignment; 410 + no_msi = data->no_msi; 411 + } 412 + 476 413 init_completion(&test->irq_raised); 477 414 mutex_init(&test->mutex); 478 415 ··· 500 417 501 418 pci_set_master(pdev); 502 419 503 - irq = pci_alloc_irq_vectors(pdev, 1, 32, PCI_IRQ_MSI); 504 - if (irq < 0) 505 - dev_err(dev, "failed to get MSI interrupts\n"); 420 + if (!no_msi) { 421 + irq = pci_alloc_irq_vectors(pdev, 1, 32, PCI_IRQ_MSI); 422 + if (irq < 0) 423 + dev_err(dev, "failed to get MSI interrupts\n"); 424 + } 506 425 507 426 err = devm_request_irq(dev, pdev->irq, pci_endpoint_test_irqhandler, 508 427 IRQF_SHARED, DRV_MODULE_NAME, test); ··· 526 441 base = pci_ioremap_bar(pdev, bar); 527 442 if (!base) { 528 443 dev_err(dev, "failed to read BAR%d\n", bar); 529 - WARN_ON(bar == BAR_0); 444 + WARN_ON(bar == test_reg_bar); 530 445 } 531 446 test->bar[bar] = base; 532 447 } 533 448 534 - test->base = test->bar[0]; 449 + test->base = test->bar[test_reg_bar]; 535 450 if (!test->base) { 536 - dev_err(dev, "Cannot perform PCI test without BAR0\n"); 451 + dev_err(dev, "Cannot perform PCI test without BAR%d\n", 452 + test_reg_bar); 537 453 goto err_iounmap; 538 454 } 539 455
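The new alignment handling in pci_endpoint_test over-allocates each DMA buffer by `alignment` bytes, rounds the bus address up to the requested boundary, and advances the CPU pointer by the same offset. The arithmetic can be checked in isolation (ALIGN_UP and IS_ALIGNED below are local stand-ins for the kernel's PTR_ALIGN/IS_ALIGNED macros; power-of-two alignments assumed):

```c
#include <assert.h>
#include <stdint.h>

/* Round x up to the next multiple of a (a must be a power of two). */
#define ALIGN_UP(x, a)   (((x) + ((a) - 1)) & ~((uint64_t)(a) - 1))
#define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)

struct aligned_buf {
	uint64_t phys;   /* aligned bus address handed to the endpoint  */
	uint64_t offset; /* how far the CPU pointer must be advanced    */
};

/* Mirrors the driver's per-buffer adjustment after dma_alloc_coherent():
 * if the returned address already satisfies the alignment (or none was
 * requested), it is used as-is; otherwise both the bus address and the
 * CPU pointer move forward by the same offset. */
static struct aligned_buf align_buf(uint64_t orig_phys, uint64_t alignment)
{
	struct aligned_buf b = { orig_phys, 0 };

	if (alignment && !IS_ALIGNED(orig_phys, alignment)) {
		b.phys = ALIGN_UP(orig_phys, alignment);
		b.offset = b.phys - orig_phys;
	}
	return b;
}
```

Because the allocation is `size + alignment` bytes, the worst-case offset of `alignment - 1` still leaves `size` usable bytes past the aligned address, and the original pointer/address pair is what gets passed back to dma_free_coherent().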
+6 -6
drivers/pci/dwc/Kconfig
··· 25 25 work either as EP or RC. In order to enable host-specific features 26 26 PCI_DRA7XX_HOST must be selected and in order to enable device- 27 27 specific features PCI_DRA7XX_EP must be selected. This uses 28 - the Designware core. 28 + the DesignWare core. 29 29 30 30 if PCI_DRA7XX 31 31 ··· 97 97 select PCIE_DW_HOST 98 98 help 99 99 Say Y here if you want to enable PCI controller support on Keystone 100 - SoCs. The PCI controller on Keystone is based on Designware hardware 101 - and therefore the driver re-uses the Designware core functions to 100 + SoCs. The PCI controller on Keystone is based on DesignWare hardware 101 + and therefore the driver re-uses the DesignWare core functions to 102 102 implement the driver. 103 103 104 104 config PCI_LAYERSCAPE ··· 132 132 select PCIE_DW_HOST 133 133 help 134 134 Say Y here to enable PCIe controller support on Qualcomm SoCs. The 135 - PCIe controller uses the Designware core plus Qualcomm-specific 135 + PCIe controller uses the DesignWare core plus Qualcomm-specific 136 136 hardware wrappers. 137 137 138 138 config PCIE_ARMADA_8K ··· 145 145 help 146 146 Say Y here if you want to enable PCIe controller support on 147 147 Armada-8K SoCs. The PCIe controller on Armada-8K is based on 148 - Designware hardware and therefore the driver re-uses the 149 - Designware core functions to implement the driver. 148 + DesignWare hardware and therefore the driver re-uses the 149 + DesignWare core functions to implement the driver. 150 150 151 151 config PCIE_ARTPEC6 152 152 bool "Axis ARTPEC-6 PCIe controller"
+20 -6
drivers/pci/dwc/pci-dra7xx.c
··· 195 195 dra7xx_pcie_enable_msi_interrupts(dra7xx); 196 196 } 197 197 198 - static void dra7xx_pcie_host_init(struct pcie_port *pp) 198 + static int dra7xx_pcie_host_init(struct pcie_port *pp) 199 199 { 200 200 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 201 201 struct dra7xx_pcie *dra7xx = to_dra7xx_pcie(pci); ··· 206 206 dw_pcie_wait_for_link(pci); 207 207 dw_pcie_msi_init(pp); 208 208 dra7xx_pcie_enable_interrupts(dra7xx); 209 + 210 + return 0; 209 211 } 210 212 211 213 static const struct dw_pcie_host_ops dra7xx_pcie_host_ops = { ··· 240 238 return -ENODEV; 241 239 } 242 240 243 - dra7xx->irq_domain = irq_domain_add_linear(pcie_intc_node, 4, 241 + dra7xx->irq_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX, 244 242 &intx_domain_ops, pp); 245 243 if (!dra7xx->irq_domain) { 246 244 dev_err(dev, "Failed to get a INTx IRQ domain\n"); ··· 276 274 277 275 return IRQ_HANDLED; 278 276 } 279 - 280 277 281 278 static irqreturn_t dra7xx_pcie_irq_handler(int irq, void *arg) 282 279 { ··· 336 335 return IRQ_HANDLED; 337 336 } 338 337 338 + static void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar) 339 + { 340 + u32 reg; 341 + 342 + reg = PCI_BASE_ADDRESS_0 + (4 * bar); 343 + dw_pcie_writel_dbi2(pci, reg, 0x0); 344 + dw_pcie_writel_dbi(pci, reg, 0x0); 345 + } 346 + 339 347 static void dra7xx_pcie_ep_init(struct dw_pcie_ep *ep) 340 348 { 341 349 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 342 350 struct dra7xx_pcie *dra7xx = to_dra7xx_pcie(pci); 351 + enum pci_barno bar; 352 + 353 + for (bar = BAR_0; bar <= BAR_5; bar++) 354 + dw_pcie_ep_reset_bar(pci, bar); 343 355 344 356 dra7xx_pcie_enable_wrapper_interrupts(dra7xx); 345 357 } ··· 449 435 pp->irq = platform_get_irq(pdev, 1); 450 436 if (pp->irq < 0) { 451 437 dev_err(dev, "missing IRQ resource\n"); 452 - return -EINVAL; 438 + return pp->irq; 453 439 } 454 440 455 441 ret = devm_request_irq(dev, pp->irq, dra7xx_pcie_msi_irq_handler, ··· 630 616 631 617 irq = platform_get_irq(pdev, 0); 632 
618 if (irq < 0) { 633 - dev_err(dev, "missing IRQ resource\n"); 634 - return -EINVAL; 619 + dev_err(dev, "missing IRQ resource: %d\n", irq); 620 + return irq; 635 621 } 636 622 637 623 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ti_conf");
+7 -5
drivers/pci/dwc/pci-exynos.c
··· 581 581 return 0; 582 582 } 583 583 584 - static void exynos_pcie_host_init(struct pcie_port *pp) 584 + static int exynos_pcie_host_init(struct pcie_port *pp) 585 585 { 586 586 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 587 587 struct exynos_pcie *ep = to_exynos_pcie(pci); 588 588 589 589 exynos_pcie_establish_link(ep); 590 590 exynos_pcie_enable_interrupts(ep); 591 + 592 + return 0; 591 593 } 592 594 593 595 static const struct dw_pcie_host_ops exynos_pcie_host_ops = { ··· 607 605 int ret; 608 606 609 607 pp->irq = platform_get_irq(pdev, 1); 610 - if (!pp->irq) { 608 + if (pp->irq < 0) { 611 609 dev_err(dev, "failed to get irq\n"); 612 - return -ENODEV; 610 + return pp->irq; 613 611 } 614 612 ret = devm_request_irq(dev, pp->irq, exynos_pcie_irq_handler, 615 613 IRQF_SHARED, "exynos-pcie", ep); ··· 620 618 621 619 if (IS_ENABLED(CONFIG_PCI_MSI)) { 622 620 pp->msi_irq = platform_get_irq(pdev, 0); 623 - if (!pp->msi_irq) { 621 + if (pp->msi_irq < 0) { 624 622 dev_err(dev, "failed to get msi irq\n"); 625 - return -ENODEV; 623 + return pp->msi_irq; 626 624 } 627 625 628 626 ret = devm_request_irq(dev, pp->msi_irq,
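The exynos and dra7xx IRQ hunks fix two related mistakes: platform_get_irq() reports failure with a negative errno (never 0), so the check must be `irq < 0`, and that errno should be propagated rather than flattened to -ENODEV or -EINVAL. A minimal sketch of the corrected pattern (fake_get_irq is a made-up stand-in for platform_get_irq):

```c
#include <assert.h>
#include <errno.h>

/* Made-up stand-in for platform_get_irq(): positive IRQ number on
 * success, negative errno on failure. */
static int fake_get_irq(int present)
{
	return present ? 42 : -ENXIO;
}

/*
 * Corrected pattern from the hunks above: test with `irq < 0` (a zero
 * check misses negative errnos entirely) and hand the provider's error
 * code back to the caller unchanged.
 */
static int request_test_irq(int present)
{
	int irq = fake_get_irq(present);

	if (irq < 0)
		return irq;	/* propagate, e.g. -ENXIO */

	/* devm_request_irq(...) would go here in the real driver */
	return irq;
}
```

Propagating the original errno also preserves values with special meaning to the driver core, which a hard-coded -EINVAL would destroy.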
+7 -4
drivers/pci/dwc/pci-imx6.c
··· 636 636 return ret; 637 637 } 638 638 639 - static void imx6_pcie_host_init(struct pcie_port *pp) 639 + static int imx6_pcie_host_init(struct pcie_port *pp) 640 640 { 641 641 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 642 642 struct imx6_pcie *imx6_pcie = to_imx6_pcie(pci); ··· 649 649 650 650 if (IS_ENABLED(CONFIG_PCI_MSI)) 651 651 dw_pcie_msi_init(pp); 652 + 653 + return 0; 652 654 } 653 655 654 656 static int imx6_pcie_link_up(struct dw_pcie *pci) ··· 780 778 } 781 779 break; 782 780 case IMX7D: 783 - imx6_pcie->pciephy_reset = devm_reset_control_get(dev, 784 - "pciephy"); 781 + imx6_pcie->pciephy_reset = devm_reset_control_get_exclusive(dev, 782 + "pciephy"); 785 783 if (IS_ERR(imx6_pcie->pciephy_reset)) { 786 784 dev_err(dev, "Failed to get PCIEPHY reset control\n"); 787 785 return PTR_ERR(imx6_pcie->pciephy_reset); 788 786 } 789 787 790 - imx6_pcie->apps_reset = devm_reset_control_get(dev, "apps"); 788 + imx6_pcie->apps_reset = devm_reset_control_get_exclusive(dev, 789 + "apps"); 791 790 if (IS_ERR(imx6_pcie->apps_reset)) { 792 791 dev_err(dev, "Failed to get PCIE APPS reset control\n"); 793 792 return PTR_ERR(imx6_pcie->apps_reset);
+3 -11
drivers/pci/dwc/pci-keystone-dw.c
··· 1 1 /* 2 - * Designware application register space functions for Keystone PCI controller 2 + * DesignWare application register space functions for Keystone PCI controller 3 3 * 4 4 * Copyright (C) 2013-2014 Texas Instruments., Ltd. 5 5 * http://www.ti.com ··· 168 168 169 169 static void ks_dw_pcie_msi_irq_mask(struct irq_data *d) 170 170 { 171 - struct keystone_pcie *ks_pcie; 172 171 struct msi_desc *msi; 173 172 struct pcie_port *pp; 174 - struct dw_pcie *pci; 175 173 u32 offset; 176 174 177 175 msi = irq_data_get_msi_desc(d); 178 176 pp = (struct pcie_port *) msi_desc_to_pci_sysdata(msi); 179 - pci = to_dw_pcie_from_pp(pp); 180 - ks_pcie = to_keystone_pcie(pci); 181 177 offset = d->irq - irq_linear_revmap(pp->irq_domain, 0); 182 178 183 179 /* Mask the end point if PVM implemented */ ··· 187 191 188 192 static void ks_dw_pcie_msi_irq_unmask(struct irq_data *d) 189 193 { 190 - struct keystone_pcie *ks_pcie; 191 194 struct msi_desc *msi; 192 195 struct pcie_port *pp; 193 - struct dw_pcie *pci; 194 196 u32 offset; 195 197 196 198 msi = irq_data_get_msi_desc(d); 197 199 pp = (struct pcie_port *) msi_desc_to_pci_sysdata(msi); 198 - pci = to_dw_pcie_from_pp(pp); 199 - ks_pcie = to_keystone_pcie(pci); 200 200 offset = d->irq - irq_linear_revmap(pp->irq_domain, 0); 201 201 202 202 /* Mask the end point if PVM implemented */ ··· 251 259 { 252 260 int i; 253 261 254 - for (i = 0; i < MAX_LEGACY_IRQS; i++) 262 + for (i = 0; i < PCI_NUM_INTX; i++) 255 263 ks_dw_app_writel(ks_pcie, IRQ_ENABLE_SET + (i << 4), 0x1); 256 264 } 257 265 ··· 557 565 /* Create legacy IRQ domain */ 558 566 ks_pcie->legacy_irq_domain = 559 567 irq_domain_add_linear(ks_pcie->legacy_intc_np, 560 - MAX_LEGACY_IRQS, 568 + PCI_NUM_INTX, 561 569 &ks_dw_pcie_legacy_irq_domain_ops, 562 570 NULL); 563 571 if (!ks_pcie->legacy_irq_domain) {
+4 -6
drivers/pci/dwc/pci-keystone.c
··· 32 32 33 33 #define DRIVER_NAME "keystone-pcie" 34 34 35 - /* driver specific constants */ 36 - #define MAX_MSI_HOST_IRQS 8 37 - #define MAX_LEGACY_HOST_IRQS 4 38 - 39 35 /* DEV_STAT_CTRL */ 40 36 #define PCIE_CAP_BASE 0x70 41 37 ··· 169 173 170 174 if (legacy) { 171 175 np_temp = &ks_pcie->legacy_intc_np; 172 - max_host_irqs = MAX_LEGACY_HOST_IRQS; 176 + max_host_irqs = PCI_NUM_INTX; 173 177 host_irqs = &ks_pcie->legacy_host_irqs[0]; 174 178 } else { 175 179 np_temp = &ks_pcie->msi_intc_np; ··· 257 261 return 0; 258 262 } 259 263 260 - static void __init ks_pcie_host_init(struct pcie_port *pp) 264 + static int __init ks_pcie_host_init(struct pcie_port *pp) 261 265 { 262 266 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 263 267 struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); ··· 285 289 */ 286 290 hook_fault_code(17, keystone_pcie_fault, SIGBUS, 0, 287 291 "Asynchronous external abort"); 292 + 293 + return 0; 288 294 } 289 295 290 296 static const struct dw_pcie_host_ops keystone_pcie_host_ops = {
+1 -3
drivers/pci/dwc/pci-keystone.h
··· 12 12 * published by the Free Software Foundation. 13 13 */ 14 14 15 - #define MAX_LEGACY_IRQS 4 16 15 #define MAX_MSI_HOST_IRQS 8 17 - #define MAX_LEGACY_HOST_IRQS 4 18 16 19 17 struct keystone_pcie { 20 18 struct dw_pcie *pci; ··· 20 22 /* PCI Device ID */ 21 23 u32 device_id; 22 24 int num_legacy_host_irqs; 23 - int legacy_host_irqs[MAX_LEGACY_HOST_IRQS]; 25 + int legacy_host_irqs[PCI_NUM_INTX]; 24 26 struct device_node *legacy_intc_np; 25 27 26 28 int num_msi_host_irqs;
+62 -40
drivers/pci/dwc/pci-layerscape.c
···
 
 /* PEX Internal Configuration Registers */
 #define PCIE_STRFMR1 0x71c /* Symbol Timer & Filter Mask Register1 */
-#define PCIE_DBI_RO_WR_EN 0x8bc /* DBI Read-Only Write Enable Register */
+
+#define PCIE_IATU_NUM 6
 
 struct ls_pcie_drvdata {
     u32 lut_offset;
···
     iowrite8(PCI_HEADER_TYPE_BRIDGE, pci->dbi_base + PCI_HEADER_TYPE);
 }
 
-/* Fix class value */
-static void ls_pcie_fix_class(struct ls_pcie *pcie)
-{
-    struct dw_pcie *pci = pcie->pci;
-
-    iowrite16(PCI_CLASS_BRIDGE_PCI, pci->dbi_base + PCI_CLASS_DEVICE);
-}
-
 /* Drop MSG TLP except for Vendor MSG */
 static void ls_pcie_drop_msg_tlp(struct ls_pcie *pcie)
 {
···
     val = ioread32(pci->dbi_base + PCIE_STRFMR1);
     val &= 0xDFFFFFFF;
     iowrite32(val, pci->dbi_base + PCIE_STRFMR1);
+}
+
+static void ls_pcie_disable_outbound_atus(struct ls_pcie *pcie)
+{
+    int i;
+
+    for (i = 0; i < PCIE_IATU_NUM; i++)
+        dw_pcie_disable_atu(pcie->pci, DW_PCIE_REGION_OUTBOUND, i);
 }
 
 static int ls1021_pcie_link_up(struct dw_pcie *pci)
···
     return 1;
 }
 
-static void ls1021_pcie_host_init(struct pcie_port *pp)
-{
-    struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
-    struct ls_pcie *pcie = to_ls_pcie(pci);
-    struct device *dev = pci->dev;
-    u32 index[2];
-
-    pcie->scfg = syscon_regmap_lookup_by_phandle(dev->of_node,
-                                                 "fsl,pcie-scfg");
-    if (IS_ERR(pcie->scfg)) {
-        dev_err(dev, "No syscfg phandle specified\n");
-        pcie->scfg = NULL;
-        return;
-    }
-
-    if (of_property_read_u32_array(dev->of_node,
-                                   "fsl,pcie-scfg", index, 2)) {
-        pcie->scfg = NULL;
-        return;
-    }
-    pcie->index = index[1];
-
-    dw_pcie_setup_rc(pp);
-
-    ls_pcie_drop_msg_tlp(pcie);
-}
-
 static int ls_pcie_link_up(struct dw_pcie *pci)
 {
     struct ls_pcie *pcie = to_ls_pcie(pci);
···
     return 1;
 }
 
-static void ls_pcie_host_init(struct pcie_port *pp)
+static int ls_pcie_host_init(struct pcie_port *pp)
 {
     struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
     struct ls_pcie *pcie = to_ls_pcie(pci);
 
-    iowrite32(1, pci->dbi_base + PCIE_DBI_RO_WR_EN);
-    ls_pcie_fix_class(pcie);
+    /*
+     * Disable outbound windows configured by the bootloader to avoid
+     * one transaction hitting multiple outbound windows.
+     * dw_pcie_setup_rc() will reconfigure the outbound windows.
+     */
+    ls_pcie_disable_outbound_atus(pcie);
+
+    dw_pcie_dbi_ro_wr_en(pci);
     ls_pcie_clear_multifunction(pcie);
+    dw_pcie_dbi_ro_wr_dis(pci);
+
     ls_pcie_drop_msg_tlp(pcie);
-    iowrite32(0, pci->dbi_base + PCIE_DBI_RO_WR_EN);
+
+    dw_pcie_setup_rc(pp);
+
+    return 0;
+}
+
+static int ls1021_pcie_host_init(struct pcie_port *pp)
+{
+    struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+    struct ls_pcie *pcie = to_ls_pcie(pci);
+    struct device *dev = pci->dev;
+    u32 index[2];
+    int ret;
+
+    pcie->scfg = syscon_regmap_lookup_by_phandle(dev->of_node,
+                                                 "fsl,pcie-scfg");
+    if (IS_ERR(pcie->scfg)) {
+        ret = PTR_ERR(pcie->scfg);
+        dev_err(dev, "No syscfg phandle specified\n");
+        pcie->scfg = NULL;
+        return ret;
+    }
+
+    if (of_property_read_u32_array(dev->of_node,
+                                   "fsl,pcie-scfg", index, 2)) {
+        pcie->scfg = NULL;
+        return -EINVAL;
+    }
+    pcie->index = index[1];
+
+    return ls_pcie_host_init(pp);
 }
 
 static int ls_pcie_msi_host_init(struct pcie_port *pp,
···
     .dw_pcie_ops = &dw_ls_pcie_ops,
 };
 
+static struct ls_pcie_drvdata ls2088_drvdata = {
+    .lut_offset = 0x80000,
+    .ltssm_shift = 0,
+    .lut_dbg = 0x407fc,
+    .ops = &ls_pcie_host_ops,
+    .dw_pcie_ops = &dw_ls_pcie_ops,
+};
+
 static const struct of_device_id ls_pcie_of_match[] = {
     { .compatible = "fsl,ls1021a-pcie", .data = &ls1021_drvdata },
     { .compatible = "fsl,ls1043a-pcie", .data = &ls1043_drvdata },
     { .compatible = "fsl,ls1046a-pcie", .data = &ls1046_drvdata },
     { .compatible = "fsl,ls2080a-pcie", .data = &ls2080_drvdata },
     { .compatible = "fsl,ls2085a-pcie", .data = &ls2080_drvdata },
+    { .compatible = "fsl,ls2088a-pcie", .data = &ls2088_drvdata },
+    { .compatible = "fsl,ls1088a-pcie", .data = &ls2088_drvdata },
     { },
 };
···
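The Layerscape change above exists because a bootloader may leave some of the six outbound iATU windows programmed, so one transaction can match several windows at once; the driver now clears all of them before `dw_pcie_setup_rc()` reprograms what it needs. A minimal user-space sketch of that loop, with a mock in place of `dw_pcie_disable_atu()` (the mock names are illustrative, not kernel API):

```c
#include <assert.h>

#define PCIE_IATU_NUM 6

/* Mock enable flags for the six outbound iATU windows; a bootloader
 * may have left any subset of them programmed. */
static int outbound_enabled[PCIE_IATU_NUM];

/* Stand-in for dw_pcie_disable_atu(pci, DW_PCIE_REGION_OUTBOUND, i). */
static void mock_disable_atu(int index)
{
    outbound_enabled[index] = 0;
}

/* Same loop shape as ls_pcie_disable_outbound_atus(): walk every
 * window unconditionally so no stale translation survives. */
static void disable_outbound_atus(void)
{
    int i;

    for (i = 0; i < PCIE_IATU_NUM; i++)
        mock_disable_atu(i);
}
```

Disabling unconditionally (rather than probing which windows are enabled) keeps the init path independent of whatever state the bootloader left behind.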
+8 -4
drivers/pci/dwc/pcie-armada8k.c
···
         dev_err(pci->dev, "Link not up after reconfiguration\n");
 }
 
-static void armada8k_pcie_host_init(struct pcie_port *pp)
+static int armada8k_pcie_host_init(struct pcie_port *pp)
 {
     struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
     struct armada8k_pcie *pcie = to_armada8k_pcie(pci);
 
     dw_pcie_setup_rc(pp);
     armada8k_pcie_establish_link(pcie);
+
+    return 0;
 }
 
 static irqreturn_t armada8k_pcie_irq_handler(int irq, void *arg)
···
     pp->ops = &armada8k_pcie_host_ops;
 
     pp->irq = platform_get_irq(pdev, 0);
-    if (!pp->irq) {
+    if (pp->irq < 0) {
         dev_err(dev, "failed to get irq for port\n");
-        return -ENODEV;
+        return pp->irq;
     }
 
     ret = devm_request_irq(dev, pp->irq, armada8k_pcie_irq_handler,
···
     if (IS_ERR(pcie->clk))
         return PTR_ERR(pcie->clk);
 
-    clk_prepare_enable(pcie->clk);
+    ret = clk_prepare_enable(pcie->clk);
+    if (ret)
+        return ret;
 
     /* Get the dw-pcie unit configuration/control registers base. */
     base = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ctrl");
drivers/pci/dwc/pcie-artpec6.c (+5 -9)
···
     artpec6_pcie_writel(artpec6_pcie, PCIECFG, val);
     usleep_range(100, 200);
 
-    /*
-     * Enable writing to config regs. This is required as the Synopsys
-     * driver changes the class code. That register needs DBI write enable.
-     */
-    dw_pcie_writel_dbi(pci, MISC_CONTROL_1_OFF, DBI_RO_WR_EN);
-
     /* setup root complex */
     dw_pcie_setup_rc(pp);
···
         dw_pcie_msi_init(pp);
 }
 
-static void artpec6_pcie_host_init(struct pcie_port *pp)
+static int artpec6_pcie_host_init(struct pcie_port *pp)
 {
     struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
     struct artpec6_pcie *artpec6_pcie = to_artpec6_pcie(pci);
 
     artpec6_pcie_establish_link(artpec6_pcie);
     artpec6_pcie_enable_interrupts(artpec6_pcie);
+
+    return 0;
 }
 
 static const struct dw_pcie_host_ops artpec6_pcie_host_ops = {
···
 
     if (IS_ENABLED(CONFIG_PCI_MSI)) {
         pp->msi_irq = platform_get_irq_byname(pdev, "msi");
-        if (pp->msi_irq <= 0) {
+        if (pp->msi_irq < 0) {
             dev_err(dev, "failed to get MSI irq\n");
-            return -ENODEV;
+            return pp->msi_irq;
         }
 
         ret = devm_request_irq(dev, pp->msi_irq,
drivers/pci/dwc/pcie-designware-ep.c (+3 -6)
···
 /**
- * Synopsys Designware PCIe Endpoint controller driver
+ * Synopsys DesignWare PCIe Endpoint controller driver
  *
  * Copyright (C) 2017 Texas Instruments
  * Author: Kishon Vijay Abraham I <kishon@ti.com>
···
 {
     int ret;
     void *addr;
-    enum pci_barno bar;
     struct pci_epc *epc;
     struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
     struct device *dev = pci->dev;
···
         return -ENOMEM;
     ep->outbound_addr = addr;
 
-    for (bar = BAR_0; bar <= BAR_5; bar++)
-        dw_pcie_ep_reset_bar(pci, bar);
-
     if (ep->ops->ep_init)
         ep->ops->ep_init(ep);
···
     if (ret < 0)
         epc->max_functions = 1;
 
-    ret = pci_epc_mem_init(epc, ep->phys_base, ep->addr_size);
+    ret = __pci_epc_mem_init(epc, ep->phys_base, ep->addr_size,
+                             ep->page_size);
     if (ret < 0) {
         dev_err(dev, "Failed to initialize address space\n");
         return ret;
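Passing `ep->page_size` into the endpoint memory-space init above reflects that outbound iATU mappings are granular: an allocation from the endpoint address space has to be rounded up to the controller's page size. A sketch of that rounding, assuming a power-of-two page size (the helper name is illustrative, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

/* Round an allocation request up to the iATU page size. Assumes
 * page_size is a power of two, so the usual mask trick applies. */
static size_t align_to_page(size_t size, size_t page_size)
{
    return (size + page_size - 1) & ~(page_size - 1);
}
```

An allocator built on this never hands out a region smaller than one translation window, which is why the page size must be known at `__pci_epc_mem_init()` time rather than per-allocation.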
drivers/pci/dwc/pcie-designware-host.c (+13 -4)
···
 /*
- * Synopsys Designware PCIe host controller driver
+ * Synopsys DesignWare PCIe host controller driver
  *
  * Copyright (C) 2013 Samsung Electronics Co., Ltd.
  * http://www.samsung.com
···
     while ((pos = find_next_bit((unsigned long *) &val, 32,
                                 pos)) != 32) {
         irq = irq_find_mapping(pp->irq_domain, i * 32 + pos);
+        generic_handle_irq(irq);
         dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_STATUS + i * 12,
                             4, 1 << pos);
-        generic_handle_irq(irq);
         pos++;
     }
 }
···
         }
     }
 
-    if (pp->ops->host_init)
-        pp->ops->host_init(pp);
+    if (pp->ops->host_init) {
+        ret = pp->ops->host_init(pp);
+        if (ret)
+            goto error;
+    }
 
     pp->root_bus_nr = pp->busn->start;
···
     dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_1, 0x00000000);
 
     /* setup interrupt pins */
+    dw_pcie_dbi_ro_wr_en(pci);
     val = dw_pcie_readl_dbi(pci, PCI_INTERRUPT_LINE);
     val &= 0xffff00ff;
     val |= 0x00000100;
     dw_pcie_writel_dbi(pci, PCI_INTERRUPT_LINE, val);
+    dw_pcie_dbi_ro_wr_dis(pci);
 
     /* setup bus numbers */
     val = dw_pcie_readl_dbi(pci, PCI_PRIMARY_BUS);
···
 
     dw_pcie_wr_own_conf(pp, PCI_BASE_ADDRESS_0, 4, 0);
 
+    /* Enable write permission for the DBI read-only register */
+    dw_pcie_dbi_ro_wr_en(pci);
     /* program correct class for RC */
     dw_pcie_wr_own_conf(pp, PCI_CLASS_DEVICE, 2, PCI_CLASS_BRIDGE_PCI);
+    /* Better disable write permission right after the update */
+    dw_pcie_dbi_ro_wr_dis(pci);
 
     dw_pcie_rd_own_conf(pp, PCIE_LINK_WIDTH_SPEED_CONTROL, 4, &val);
     val |= PORT_LOGIC_SPEED_CHANGE;
drivers/pci/dwc/pcie-designware-plat.c (+3 -1)
···
     return dw_handle_msi_irq(pp);
 }
 
-static void dw_plat_pcie_host_init(struct pcie_port *pp)
+static int dw_plat_pcie_host_init(struct pcie_port *pp)
 {
     struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
···
 
     if (IS_ENABLED(CONFIG_PCI_MSI))
         dw_pcie_msi_init(pp);
+
+    return 0;
 }
 
 static const struct dw_pcie_host_ops dw_plat_pcie_host_ops = {
drivers/pci/dwc/pcie-designware.c (+8 -6)
···
 /*
- * Synopsys Designware PCIe host controller driver
+ * Synopsys DesignWare PCIe host controller driver
  *
  * Copyright (C) 2013 Samsung Electronics Co., Ltd.
  * http://www.samsung.com
···
     dw_pcie_writel_dbi(pci, offset + reg, val);
 }
 
-void dw_pcie_prog_outbound_atu_unroll(struct dw_pcie *pci, int index, int type,
-                                      u64 cpu_addr, u64 pci_addr, u32 size)
+static void dw_pcie_prog_outbound_atu_unroll(struct dw_pcie *pci, int index,
+                                             int type, u64 cpu_addr,
+                                             u64 pci_addr, u32 size)
 {
     u32 retries, val;
···
      */
     for (retries = 0; retries < LINK_WAIT_MAX_IATU_RETRIES; retries++) {
         val = dw_pcie_readl_dbi(pci, PCIE_ATU_CR2);
-        if (val == PCIE_ATU_ENABLE)
+        if (val & PCIE_ATU_ENABLE)
             return;
 
         usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX);
···
     dw_pcie_writel_dbi(pci, offset + reg, val);
 }
 
-int dw_pcie_prog_inbound_atu_unroll(struct dw_pcie *pci, int index, int bar,
-                                    u64 cpu_addr, enum dw_pcie_as_type as_type)
+static int dw_pcie_prog_inbound_atu_unroll(struct dw_pcie *pci, int index,
+                                           int bar, u64 cpu_addr,
+                                           enum dw_pcie_as_type as_type)
 {
     int type;
     u32 retries, val;
drivers/pci/dwc/pcie-designware.h (+28 -2)
···
 /*
- * Synopsys Designware PCIe host controller driver
+ * Synopsys DesignWare PCIe host controller driver
  *
  * Copyright (C) 2013 Samsung Electronics Co., Ltd.
  * http://www.samsung.com
···
 #define PCIE_ATU_FUNC(x)        (((x) & 0x7) << 16)
 #define PCIE_ATU_UPPER_TARGET       0x91C
 
+#define PCIE_MISC_CONTROL_1_OFF     0x8BC
+#define PCIE_DBI_RO_WR_EN       (0x1 << 0)
+
 /*
  * iATU Unroll-specific register definitions
  * From 4.80 core version the address translation will be made by unroll
···
                  unsigned int devfn, int where, int size, u32 *val);
     int (*wr_other_conf)(struct pcie_port *pp, struct pci_bus *bus,
                  unsigned int devfn, int where, int size, u32 val);
-    void (*host_init)(struct pcie_port *pp);
+    int (*host_init)(struct pcie_port *pp);
     void (*msi_set_irq)(struct pcie_port *pp, int irq);
     void (*msi_clear_irq)(struct pcie_port *pp, int irq);
     phys_addr_t (*get_msi_addr)(struct pcie_port *pp);
···
     struct dw_pcie_ep_ops *ops;
     phys_addr_t phys_base;
     size_t addr_size;
+    size_t page_size;
     u8 bar_to_atu[6];
     phys_addr_t *outbound_addr;
     unsigned long ib_window_map;
···
 static inline u32 dw_pcie_readl_dbi2(struct dw_pcie *pci, u32 reg)
 {
     return __dw_pcie_read_dbi(pci, pci->dbi_base2, reg, 0x4);
+}
+
+static inline void dw_pcie_dbi_ro_wr_en(struct dw_pcie *pci)
+{
+    u32 reg;
+    u32 val;
+
+    reg = PCIE_MISC_CONTROL_1_OFF;
+    val = dw_pcie_readl_dbi(pci, reg);
+    val |= PCIE_DBI_RO_WR_EN;
+    dw_pcie_writel_dbi(pci, reg, val);
+}
+
+static inline void dw_pcie_dbi_ro_wr_dis(struct dw_pcie *pci)
+{
+    u32 reg;
+    u32 val;
+
+    reg = PCIE_MISC_CONTROL_1_OFF;
+    val = dw_pcie_readl_dbi(pci, reg);
+    val &= ~PCIE_DBI_RO_WR_EN;
+    dw_pcie_writel_dbi(pci, reg, val);
 }
 
 #ifdef CONFIG_PCIE_DW_HOST
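The new `dw_pcie_dbi_ro_wr_en()`/`dw_pcie_dbi_ro_wr_dis()` helpers above are a read-modify-write of a single bit in MISC_CONTROL_1, so the other bits of the register survive the toggle — unlike the raw `iowrite32(1, ...)` / `iowrite32(0, ...)` they replace in pci-layerscape.c. A user-space sketch of the same pattern against a mock DBI register file (the mock accessors stand in for `dw_pcie_readl_dbi()`/`dw_pcie_writel_dbi()`):

```c
#include <assert.h>
#include <stdint.h>

#define PCIE_MISC_CONTROL_1_OFF 0x8BC
#define PCIE_DBI_RO_WR_EN       (0x1 << 0)

/* Mock DBI register file, indexed by 32-bit word. */
static uint32_t mock_dbi[0x1000];

static uint32_t readl_dbi(uint32_t reg)
{
    return mock_dbi[reg / 4];
}

static void writel_dbi(uint32_t reg, uint32_t val)
{
    mock_dbi[reg / 4] = val;
}

/* Read-modify-write: set only the write-enable bit. */
static void dbi_ro_wr_en(void)
{
    writel_dbi(PCIE_MISC_CONTROL_1_OFF,
               readl_dbi(PCIE_MISC_CONTROL_1_OFF) | PCIE_DBI_RO_WR_EN);
}

/* Read-modify-write: clear only the write-enable bit. */
static void dbi_ro_wr_dis(void)
{
    writel_dbi(PCIE_MISC_CONTROL_1_OFF,
               readl_dbi(PCIE_MISC_CONTROL_1_OFF) & ~PCIE_DBI_RO_WR_EN);
}
```

Starting from a register value of 0x10, enable yields 0x11 and disable restores 0x10; a blind write of 0 or 1 would have clobbered the 0x10 bit.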
drivers/pci/dwc/pcie-hisi.c (+1 -4)
···
     return hisi_pcie->soc_ops->hisi_pcie_link_up(hisi_pcie);
 }
 
-static struct dw_pcie_host_ops hisi_pcie_host_ops = {
+static const struct dw_pcie_host_ops hisi_pcie_host_ops = {
     .rd_own_conf = hisi_pcie_cfg_read,
     .wr_own_conf = hisi_pcie_cfg_write,
 };
···
     struct dw_pcie *pci;
     struct hisi_pcie *hisi_pcie;
     struct resource *reg;
-    struct device_driver *driver;
     int ret;
 
     hisi_pcie = devm_kzalloc(dev, sizeof(*hisi_pcie), GFP_KERNEL);
···
 
     pci->dev = dev;
     pci->ops = &dw_pcie_ops;
-
-    driver = dev->driver;
 
     hisi_pcie->pci = pci;
drivers/pci/dwc/pcie-kirin.c (+4 -2)
···
     return 0;
 }
 
-static void kirin_pcie_host_init(struct pcie_port *pp)
+static int kirin_pcie_host_init(struct pcie_port *pp)
 {
     kirin_pcie_establish_link(pp);
+
+    return 0;
 }
 
 static struct dw_pcie_ops kirin_dw_pcie_ops = {
···
     .link_up = kirin_pcie_link_up,
 };
 
-static struct dw_pcie_host_ops kirin_pcie_host_ops = {
+static const struct dw_pcie_host_ops kirin_pcie_host_ops = {
     .rd_own_conf = kirin_pcie_rd_own_conf,
     .wr_own_conf = kirin_pcie_wr_own_conf,
     .host_init = kirin_pcie_host_init,
drivers/pci/dwc/pcie-qcom.c (+319 -90)
···
 #include "pcie-designware.h"
 
 #define PCIE20_PARF_SYS_CTRL            0x00
+#define MST_WAKEUP_EN               BIT(13)
+#define SLV_WAKEUP_EN               BIT(12)
+#define MSTR_ACLK_CGC_DIS           BIT(10)
+#define SLV_ACLK_CGC_DIS            BIT(9)
+#define CORE_CLK_CGC_DIS            BIT(6)
+#define AUX_PWR_DET             BIT(4)
+#define L23_CLK_RMV_DIS             BIT(2)
+#define L1_CLK_RMV_DIS              BIT(1)
+
+#define PCIE20_COMMAND_STATUS           0x04
+#define CMD_BME_VAL             0x4
+#define PCIE20_DEVICE_CONTROL2_STATUS2      0x98
+#define PCIE_CAP_CPL_TIMEOUT_DISABLE        0x10
+
 #define PCIE20_PARF_PHY_CTRL            0x40
 #define PCIE20_PARF_PHY_REFCLK          0x4C
 #define PCIE20_PARF_DBI_BASE_ADDR       0x168
···
 #define CFG_BRIDGE_SB_INIT          BIT(0)
 
 #define PCIE20_CAP              0x70
+#define PCIE20_CAP_LINK_CAPABILITIES        (PCIE20_CAP + 0xC)
+#define PCIE20_CAP_ACTIVE_STATE_LINK_PM_SUPPORT (BIT(10) | BIT(11))
+#define PCIE20_CAP_LINK_1           (PCIE20_CAP + 0x14)
+#define PCIE_CAP_LINK1_VAL          0x2FD7F
+
+#define PCIE20_PARF_Q2A_FLUSH           0x1AC
+
+#define PCIE20_MISC_CONTROL_1_REG       0x8BC
+#define DBI_RO_WR_EN                1
 
 #define PERST_DELAY_US              1000
 
-struct qcom_pcie_resources_v0 {
+#define PCIE20_v3_PARF_SLV_ADDR_SPACE_SIZE  0x358
+#define SLV_ADDR_SPACE_SZ           0x10000000
+
+struct qcom_pcie_resources_2_1_0 {
     struct clk *iface_clk;
     struct clk *core_clk;
     struct clk *phy_clk;
···
     struct regulator *vdda_refclk;
 };
 
-struct qcom_pcie_resources_v1 {
+struct qcom_pcie_resources_1_0_0 {
     struct clk *iface;
     struct clk *aux;
     struct clk *master_bus;
···
     struct regulator *vdda;
 };
 
-struct qcom_pcie_resources_v2 {
+struct qcom_pcie_resources_2_3_2 {
     struct clk *aux_clk;
     struct clk *master_clk;
     struct clk *slave_clk;
···
     struct clk *pipe_clk;
 };
 
-struct qcom_pcie_resources_v3 {
+struct qcom_pcie_resources_2_4_0 {
     struct clk *aux_clk;
     struct clk *master_clk;
     struct clk *slave_clk;
···
     struct reset_control *phy_ahb_reset;
 };
 
+struct qcom_pcie_resources_2_3_3 {
+    struct clk *iface;
+    struct clk *axi_m_clk;
+    struct clk *axi_s_clk;
+    struct clk *ahb_clk;
+    struct clk *aux_clk;
+    struct reset_control *rst[7];
+};
+
 union qcom_pcie_resources {
-    struct qcom_pcie_resources_v0 v0;
-    struct qcom_pcie_resources_v1 v1;
-    struct qcom_pcie_resources_v2 v2;
-    struct qcom_pcie_resources_v3 v3;
+    struct qcom_pcie_resources_1_0_0 v1_0_0;
+    struct qcom_pcie_resources_2_1_0 v2_1_0;
+    struct qcom_pcie_resources_2_3_2 v2_3_2;
+    struct qcom_pcie_resources_2_3_3 v2_3_3;
+    struct qcom_pcie_resources_2_4_0 v2_4_0;
 };
 
 struct qcom_pcie;
···
     int (*init)(struct qcom_pcie *pcie);
     int (*post_init)(struct qcom_pcie *pcie);
     void (*deinit)(struct qcom_pcie *pcie);
+    void (*post_deinit)(struct qcom_pcie *pcie);
     void (*ltssm_enable)(struct qcom_pcie *pcie);
 };
···
 
 static void qcom_ep_reset_assert(struct qcom_pcie *pcie)
 {
-    gpiod_set_value(pcie->reset, 1);
+    gpiod_set_value_cansleep(pcie->reset, 1);
     usleep_range(PERST_DELAY_US, PERST_DELAY_US + 500);
 }
 
 static void qcom_ep_reset_deassert(struct qcom_pcie *pcie)
 {
-    gpiod_set_value(pcie->reset, 0);
+    gpiod_set_value_cansleep(pcie->reset, 0);
     usleep_range(PERST_DELAY_US, PERST_DELAY_US + 500);
 }
···
     return dw_pcie_wait_for_link(pci);
 }
 
-static void qcom_pcie_v0_v1_ltssm_enable(struct qcom_pcie *pcie)
+static void qcom_pcie_2_1_0_ltssm_enable(struct qcom_pcie *pcie)
 {
     u32 val;
···
     writel(val, pcie->elbi + PCIE20_ELBI_SYS_CTRL);
 }
 
-static int qcom_pcie_get_resources_v0(struct qcom_pcie *pcie)
+static int qcom_pcie_get_resources_2_1_0(struct qcom_pcie *pcie)
 {
-    struct qcom_pcie_resources_v0 *res = &pcie->res.v0;
+    struct qcom_pcie_resources_2_1_0 *res = &pcie->res.v2_1_0;
     struct dw_pcie *pci = pcie->pci;
     struct device *dev = pci->dev;
···
     if (IS_ERR(res->phy_clk))
         return PTR_ERR(res->phy_clk);
 
-    res->pci_reset = devm_reset_control_get(dev, "pci");
+    res->pci_reset = devm_reset_control_get_exclusive(dev, "pci");
     if (IS_ERR(res->pci_reset))
         return PTR_ERR(res->pci_reset);
 
-    res->axi_reset = devm_reset_control_get(dev, "axi");
+    res->axi_reset = devm_reset_control_get_exclusive(dev, "axi");
     if (IS_ERR(res->axi_reset))
         return PTR_ERR(res->axi_reset);
 
-    res->ahb_reset = devm_reset_control_get(dev, "ahb");
+    res->ahb_reset = devm_reset_control_get_exclusive(dev, "ahb");
     if (IS_ERR(res->ahb_reset))
         return PTR_ERR(res->ahb_reset);
 
-    res->por_reset = devm_reset_control_get(dev, "por");
+    res->por_reset = devm_reset_control_get_exclusive(dev, "por");
     if (IS_ERR(res->por_reset))
         return PTR_ERR(res->por_reset);
 
-    res->phy_reset = devm_reset_control_get(dev, "phy");
+    res->phy_reset = devm_reset_control_get_exclusive(dev, "phy");
     return PTR_ERR_OR_ZERO(res->phy_reset);
 }
 
-static void qcom_pcie_deinit_v0(struct qcom_pcie *pcie)
+static void qcom_pcie_deinit_2_1_0(struct qcom_pcie *pcie)
 {
-    struct qcom_pcie_resources_v0 *res = &pcie->res.v0;
+    struct qcom_pcie_resources_2_1_0 *res = &pcie->res.v2_1_0;
 
     reset_control_assert(res->pci_reset);
     reset_control_assert(res->axi_reset);
···
     regulator_disable(res->vdda_refclk);
 }
 
-static int qcom_pcie_init_v0(struct qcom_pcie *pcie)
+static int qcom_pcie_init_2_1_0(struct qcom_pcie *pcie)
 {
-    struct qcom_pcie_resources_v0 *res = &pcie->res.v0;
+    struct qcom_pcie_resources_2_1_0 *res = &pcie->res.v2_1_0;
     struct dw_pcie *pci = pcie->pci;
     struct device *dev = pci->dev;
     u32 val;
···
     return ret;
 }
 
-static int qcom_pcie_get_resources_v1(struct qcom_pcie *pcie)
+static int qcom_pcie_get_resources_1_0_0(struct qcom_pcie *pcie)
 {
-    struct qcom_pcie_resources_v1 *res = &pcie->res.v1;
+    struct qcom_pcie_resources_1_0_0 *res = &pcie->res.v1_0_0;
     struct dw_pcie *pci = pcie->pci;
     struct device *dev = pci->dev;
···
     if (IS_ERR(res->slave_bus))
         return PTR_ERR(res->slave_bus);
 
-    res->core = devm_reset_control_get(dev, "core");
+    res->core = devm_reset_control_get_exclusive(dev, "core");
     return PTR_ERR_OR_ZERO(res->core);
 }
 
-static void qcom_pcie_deinit_v1(struct qcom_pcie *pcie)
+static void qcom_pcie_deinit_1_0_0(struct qcom_pcie *pcie)
 {
-    struct qcom_pcie_resources_v1 *res = &pcie->res.v1;
+    struct qcom_pcie_resources_1_0_0 *res = &pcie->res.v1_0_0;
 
     reset_control_assert(res->core);
     clk_disable_unprepare(res->slave_bus);
···
     regulator_disable(res->vdda);
 }
 
-static int qcom_pcie_init_v1(struct qcom_pcie *pcie)
+static int qcom_pcie_init_1_0_0(struct qcom_pcie *pcie)
 {
-    struct qcom_pcie_resources_v1 *res = &pcie->res.v1;
+    struct qcom_pcie_resources_1_0_0 *res = &pcie->res.v1_0_0;
     struct dw_pcie *pci = pcie->pci;
     struct device *dev = pci->dev;
     int ret;
···
     return ret;
 }
 
-static void qcom_pcie_v2_ltssm_enable(struct qcom_pcie *pcie)
+static void qcom_pcie_2_3_2_ltssm_enable(struct qcom_pcie *pcie)
 {
     u32 val;
···
     writel(val, pcie->parf + PCIE20_PARF_LTSSM);
 }
 
-static int qcom_pcie_get_resources_v2(struct qcom_pcie *pcie)
+static int qcom_pcie_get_resources_2_3_2(struct qcom_pcie *pcie)
 {
-    struct qcom_pcie_resources_v2 *res = &pcie->res.v2;
+    struct qcom_pcie_resources_2_3_2 *res = &pcie->res.v2_3_2;
     struct dw_pcie *pci = pcie->pci;
     struct device *dev = pci->dev;
···
     return PTR_ERR_OR_ZERO(res->pipe_clk);
 }
 
-static void qcom_pcie_deinit_v2(struct qcom_pcie *pcie)
+static void qcom_pcie_deinit_2_3_2(struct qcom_pcie *pcie)
 {
-    struct qcom_pcie_resources_v2 *res = &pcie->res.v2;
+    struct qcom_pcie_resources_2_3_2 *res = &pcie->res.v2_3_2;
 
-    clk_disable_unprepare(res->pipe_clk);
     clk_disable_unprepare(res->slave_clk);
     clk_disable_unprepare(res->master_clk);
     clk_disable_unprepare(res->cfg_clk);
     clk_disable_unprepare(res->aux_clk);
 }
 
-static int qcom_pcie_init_v2(struct qcom_pcie *pcie)
+static void qcom_pcie_post_deinit_2_3_2(struct qcom_pcie *pcie)
 {
-    struct qcom_pcie_resources_v2 *res = &pcie->res.v2;
+    struct qcom_pcie_resources_2_3_2 *res = &pcie->res.v2_3_2;
+
+    clk_disable_unprepare(res->pipe_clk);
+}
+
+static int qcom_pcie_init_2_3_2(struct qcom_pcie *pcie)
+{
+    struct qcom_pcie_resources_2_3_2 *res = &pcie->res.v2_3_2;
     struct dw_pcie *pci = pcie->pci;
     struct device *dev = pci->dev;
     u32 val;
···
     return ret;
 }
 
-static int qcom_pcie_post_init_v2(struct qcom_pcie *pcie)
+static int qcom_pcie_post_init_2_3_2(struct qcom_pcie *pcie)
 {
-    struct qcom_pcie_resources_v2 *res = &pcie->res.v2;
+    struct qcom_pcie_resources_2_3_2 *res = &pcie->res.v2_3_2;
     struct dw_pcie *pci = pcie->pci;
     struct device *dev = pci->dev;
     int ret;
···
     return 0;
 }
 
-static int qcom_pcie_get_resources_v3(struct qcom_pcie *pcie)
+static int qcom_pcie_get_resources_2_4_0(struct qcom_pcie *pcie)
 {
-    struct qcom_pcie_resources_v3 *res = &pcie->res.v3;
+    struct qcom_pcie_resources_2_4_0 *res = &pcie->res.v2_4_0;
     struct dw_pcie *pci = pcie->pci;
     struct device *dev = pci->dev;
···
     if (IS_ERR(res->slave_clk))
         return PTR_ERR(res->slave_clk);
 
-    res->axi_m_reset = devm_reset_control_get(dev, "axi_m");
+    res->axi_m_reset = devm_reset_control_get_exclusive(dev, "axi_m");
     if (IS_ERR(res->axi_m_reset))
         return PTR_ERR(res->axi_m_reset);
 
-    res->axi_s_reset = devm_reset_control_get(dev, "axi_s");
+    res->axi_s_reset = devm_reset_control_get_exclusive(dev, "axi_s");
     if (IS_ERR(res->axi_s_reset))
         return PTR_ERR(res->axi_s_reset);
 
-    res->pipe_reset = devm_reset_control_get(dev, "pipe");
+    res->pipe_reset = devm_reset_control_get_exclusive(dev, "pipe");
     if (IS_ERR(res->pipe_reset))
         return PTR_ERR(res->pipe_reset);
 
-    res->axi_m_vmid_reset = devm_reset_control_get(dev, "axi_m_vmid");
+    res->axi_m_vmid_reset = devm_reset_control_get_exclusive(dev,
+                                 "axi_m_vmid");
     if (IS_ERR(res->axi_m_vmid_reset))
         return PTR_ERR(res->axi_m_vmid_reset);
 
-    res->axi_s_xpu_reset = devm_reset_control_get(dev, "axi_s_xpu");
+    res->axi_s_xpu_reset = devm_reset_control_get_exclusive(dev,
+                                "axi_s_xpu");
     if (IS_ERR(res->axi_s_xpu_reset))
         return PTR_ERR(res->axi_s_xpu_reset);
 
-    res->parf_reset = devm_reset_control_get(dev, "parf");
+    res->parf_reset = devm_reset_control_get_exclusive(dev, "parf");
     if (IS_ERR(res->parf_reset))
         return PTR_ERR(res->parf_reset);
 
-    res->phy_reset = devm_reset_control_get(dev, "phy");
+    res->phy_reset = devm_reset_control_get_exclusive(dev, "phy");
     if (IS_ERR(res->phy_reset))
         return PTR_ERR(res->phy_reset);
 
-    res->axi_m_sticky_reset = devm_reset_control_get(dev, "axi_m_sticky");
+    res->axi_m_sticky_reset = devm_reset_control_get_exclusive(dev,
+                                   "axi_m_sticky");
     if (IS_ERR(res->axi_m_sticky_reset))
         return PTR_ERR(res->axi_m_sticky_reset);
 
-    res->pipe_sticky_reset = devm_reset_control_get(dev, "pipe_sticky");
+    res->pipe_sticky_reset = devm_reset_control_get_exclusive(dev,
+                                  "pipe_sticky");
     if (IS_ERR(res->pipe_sticky_reset))
         return PTR_ERR(res->pipe_sticky_reset);
 
-    res->pwr_reset = devm_reset_control_get(dev, "pwr");
+    res->pwr_reset = devm_reset_control_get_exclusive(dev, "pwr");
     if (IS_ERR(res->pwr_reset))
         return PTR_ERR(res->pwr_reset);
 
-    res->ahb_reset = devm_reset_control_get(dev, "ahb");
+    res->ahb_reset = devm_reset_control_get_exclusive(dev, "ahb");
     if (IS_ERR(res->ahb_reset))
         return PTR_ERR(res->ahb_reset);
 
-    res->phy_ahb_reset = devm_reset_control_get(dev, "phy_ahb");
+    res->phy_ahb_reset = devm_reset_control_get_exclusive(dev, "phy_ahb");
     if (IS_ERR(res->phy_ahb_reset))
         return PTR_ERR(res->phy_ahb_reset);
 
     return 0;
 }
 
-static void qcom_pcie_deinit_v3(struct qcom_pcie *pcie)
+static void qcom_pcie_deinit_2_4_0(struct qcom_pcie *pcie)
 {
-    struct qcom_pcie_resources_v3 *res = &pcie->res.v3;
+    struct qcom_pcie_resources_2_4_0 *res = &pcie->res.v2_4_0;
 
     reset_control_assert(res->axi_m_reset);
     reset_control_assert(res->axi_s_reset);
···
     clk_disable_unprepare(res->slave_clk);
 }
 
-static int qcom_pcie_init_v3(struct qcom_pcie *pcie)
+static int qcom_pcie_init_2_4_0(struct qcom_pcie *pcie)
 {
-    struct qcom_pcie_resources_v3 *res = &pcie->res.v3;
+    struct qcom_pcie_resources_2_4_0 *res = &pcie->res.v2_4_0;
     struct dw_pcie *pci = pcie->pci;
     struct device *dev = pci->dev;
     u32 val;
···
     return ret;
 }
 
+static int qcom_pcie_get_resources_2_3_3(struct qcom_pcie *pcie)
+{
+    struct qcom_pcie_resources_2_3_3 *res = &pcie->res.v2_3_3;
+    struct dw_pcie *pci = pcie->pci;
+    struct device *dev = pci->dev;
+    int i;
+    const char *rst_names[] = { "axi_m", "axi_s", "pipe",
+                                "axi_m_sticky", "sticky",
+                                "ahb", "sleep", };
+
+    res->iface = devm_clk_get(dev, "iface");
+    if (IS_ERR(res->iface))
+        return PTR_ERR(res->iface);
+
+    res->axi_m_clk = devm_clk_get(dev, "axi_m");
+    if (IS_ERR(res->axi_m_clk))
+        return PTR_ERR(res->axi_m_clk);
+
+    res->axi_s_clk = devm_clk_get(dev, "axi_s");
+    if (IS_ERR(res->axi_s_clk))
+        return PTR_ERR(res->axi_s_clk);
+
+    res->ahb_clk = devm_clk_get(dev, "ahb");
+    if (IS_ERR(res->ahb_clk))
+        return PTR_ERR(res->ahb_clk);
+
+    res->aux_clk = devm_clk_get(dev, "aux");
+    if (IS_ERR(res->aux_clk))
+        return PTR_ERR(res->aux_clk);
+
+    for (i = 0; i < ARRAY_SIZE(rst_names); i++) {
+        res->rst[i] = devm_reset_control_get(dev, rst_names[i]);
+        if (IS_ERR(res->rst[i]))
+            return PTR_ERR(res->rst[i]);
+    }
+
+    return 0;
+}
+
+static void qcom_pcie_deinit_2_3_3(struct qcom_pcie *pcie)
+{
+    struct qcom_pcie_resources_2_3_3 *res = &pcie->res.v2_3_3;
+
+    clk_disable_unprepare(res->iface);
+    clk_disable_unprepare(res->axi_m_clk);
+    clk_disable_unprepare(res->axi_s_clk);
+    clk_disable_unprepare(res->ahb_clk);
+    clk_disable_unprepare(res->aux_clk);
+}
+
+static int qcom_pcie_init_2_3_3(struct qcom_pcie *pcie)
+{
+    struct qcom_pcie_resources_2_3_3 *res = &pcie->res.v2_3_3;
+    struct dw_pcie *pci = pcie->pci;
+    struct device *dev = pci->dev;
+    int i, ret;
+    u32 val;
+
+    for (i = 0; i < ARRAY_SIZE(res->rst); i++) {
+        ret = reset_control_assert(res->rst[i]);
+        if (ret) {
+            dev_err(dev, "reset #%d assert failed (%d)\n", i, ret);
+            return ret;
+        }
+    }
+
+    usleep_range(2000, 2500);
+
+    for (i = 0; i < ARRAY_SIZE(res->rst); i++) {
+        ret = reset_control_deassert(res->rst[i]);
+        if (ret) {
+            dev_err(dev, "reset #%d deassert failed (%d)\n", i,
+                ret);
+            return ret;
+        }
+    }
+
+    /*
+     * Don't have a way to see if the reset has completed.
+     * Wait for some time.
+     */
+    usleep_range(2000, 2500);
+
+    ret = clk_prepare_enable(res->iface);
+    if (ret) {
+        dev_err(dev, "cannot prepare/enable core clock\n");
+        goto err_clk_iface;
+    }
+
+    ret = clk_prepare_enable(res->axi_m_clk);
+    if (ret) {
+        dev_err(dev, "cannot prepare/enable core clock\n");
+        goto err_clk_axi_m;
+    }
+
+    ret = clk_prepare_enable(res->axi_s_clk);
+    if (ret) {
+        dev_err(dev, "cannot prepare/enable axi slave clock\n");
+        goto err_clk_axi_s;
+    }
+
+    ret = clk_prepare_enable(res->ahb_clk);
+    if (ret) {
+        dev_err(dev, "cannot prepare/enable ahb clock\n");
+        goto err_clk_ahb;
+    }
+
+    ret = clk_prepare_enable(res->aux_clk);
+    if (ret) {
+        dev_err(dev, "cannot prepare/enable aux clock\n");
+        goto err_clk_aux;
+    }
+
+    writel(SLV_ADDR_SPACE_SZ,
+           pcie->parf + PCIE20_v3_PARF_SLV_ADDR_SPACE_SIZE);
+
+    val = readl(pcie->parf + PCIE20_PARF_PHY_CTRL);
+    val &= ~BIT(0);
+    writel(val, pcie->parf + PCIE20_PARF_PHY_CTRL);
+
+    writel(0, pcie->parf + PCIE20_PARF_DBI_BASE_ADDR);
+
+    writel(MST_WAKEUP_EN | SLV_WAKEUP_EN | MSTR_ACLK_CGC_DIS
+        | SLV_ACLK_CGC_DIS | CORE_CLK_CGC_DIS |
+        AUX_PWR_DET | L23_CLK_RMV_DIS | L1_CLK_RMV_DIS,
+        pcie->parf + PCIE20_PARF_SYS_CTRL);
+    writel(0, pcie->parf + PCIE20_PARF_Q2A_FLUSH);
+
+    writel(CMD_BME_VAL, pci->dbi_base + PCIE20_COMMAND_STATUS);
+    writel(DBI_RO_WR_EN, pci->dbi_base + PCIE20_MISC_CONTROL_1_REG);
+    writel(PCIE_CAP_LINK1_VAL, pci->dbi_base + PCIE20_CAP_LINK_1);
+
+    val = readl(pci->dbi_base + PCIE20_CAP_LINK_CAPABILITIES);
+    val &= ~PCIE20_CAP_ACTIVE_STATE_LINK_PM_SUPPORT;
+    writel(val, pci->dbi_base + PCIE20_CAP_LINK_CAPABILITIES);
+
+    writel(PCIE_CAP_CPL_TIMEOUT_DISABLE, pci->dbi_base +
+        PCIE20_DEVICE_CONTROL2_STATUS2);
+
+    return 0;
+
+err_clk_aux:
+    clk_disable_unprepare(res->ahb_clk);
+err_clk_ahb:
+    clk_disable_unprepare(res->axi_s_clk);
+err_clk_axi_s:
+    clk_disable_unprepare(res->axi_m_clk);
+err_clk_axi_m:
+    clk_disable_unprepare(res->iface);
+err_clk_iface:
+    /*
+     * Not checking for failure, will anyway return
+     * the original failure in 'ret'.
+     */
+    for (i = 0; i < ARRAY_SIZE(res->rst); i++)
+        reset_control_assert(res->rst[i]);
+
+    return ret;
+}
+
 static int qcom_pcie_link_up(struct dw_pcie *pci)
 {
     u16 val = readw(pci->dbi_base + PCIE20_CAP + PCI_EXP_LNKSTA);
···
     return !!(val & PCI_EXP_LNKSTA_DLLLA);
 }
 
-static void qcom_pcie_host_init(struct pcie_port *pp)
+static int qcom_pcie_host_init(struct pcie_port *pp)
 {
     struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
     struct qcom_pcie *pcie = to_qcom_pcie(pci);
···
 
     ret = pcie->ops->init(pcie);
     if (ret)
-        goto err_deinit;
+        return ret;
 
     ret = phy_power_on(pcie->phy);
     if (ret)
         goto err_deinit;
 
-    if (pcie->ops->post_init)
-        pcie->ops->post_init(pcie);
+    if (pcie->ops->post_init) {
+        ret = pcie->ops->post_init(pcie);
+        if (ret)
+            goto err_disable_phy;
+    }
 
     dw_pcie_setup_rc(pp);
···
     if (ret)
         goto err;
 
-    return;
+    return 0;
 err:
     qcom_ep_reset_assert(pcie);
+    if (pcie->ops->post_deinit)
+        pcie->ops->post_deinit(pcie);
+err_disable_phy:
     phy_power_off(pcie->phy);
 err_deinit:
     pcie->ops->deinit(pcie);
+
+    return ret;
 }
 
 static int qcom_pcie_rd_own_conf(struct pcie_port *pp, int where, int size,
···
     .rd_own_conf = qcom_pcie_rd_own_conf,
 };
 
-static const struct qcom_pcie_ops ops_v0 = {
-    .get_resources = qcom_pcie_get_resources_v0,
-    .init = qcom_pcie_init_v0,
-    .deinit = qcom_pcie_deinit_v0,
-    .ltssm_enable = qcom_pcie_v0_v1_ltssm_enable,
+/* Qcom IP rev.: 2.1.0  Synopsys IP rev.: 4.01a */
+static const struct qcom_pcie_ops ops_2_1_0 = {
+    .get_resources = qcom_pcie_get_resources_2_1_0,
+    .init = qcom_pcie_init_2_1_0,
+    .deinit = qcom_pcie_deinit_2_1_0,
+    .ltssm_enable = qcom_pcie_2_1_0_ltssm_enable,
 };
 
-static const struct qcom_pcie_ops ops_v1 = {
-    .get_resources = qcom_pcie_get_resources_v1,
-    .init = qcom_pcie_init_v1,
-    .deinit = qcom_pcie_deinit_v1,
-    .ltssm_enable = qcom_pcie_v0_v1_ltssm_enable,
+/* Qcom IP rev.: 1.0.0  Synopsys IP rev.: 4.11a */
+static const struct qcom_pcie_ops ops_1_0_0 = {
+    .get_resources = qcom_pcie_get_resources_1_0_0,
+    .init = qcom_pcie_init_1_0_0,
+    .deinit = qcom_pcie_deinit_1_0_0,
+    .ltssm_enable = qcom_pcie_2_1_0_ltssm_enable,
 };
 
-static const struct qcom_pcie_ops ops_v2 = {
-    .get_resources = qcom_pcie_get_resources_v2,
-    .init = qcom_pcie_init_v2,
-    .post_init = qcom_pcie_post_init_v2,
-    .deinit = qcom_pcie_deinit_v2,
-    .ltssm_enable = qcom_pcie_v2_ltssm_enable,
+/* Qcom IP rev.: 2.3.2  Synopsys IP rev.: 4.21a */
+static const struct qcom_pcie_ops ops_2_3_2 = {
+    .get_resources = qcom_pcie_get_resources_2_3_2,
+    .init = qcom_pcie_init_2_3_2,
+    .post_init = qcom_pcie_post_init_2_3_2,
+    .deinit = qcom_pcie_deinit_2_3_2,
+    .post_deinit = qcom_pcie_post_deinit_2_3_2,
+    .ltssm_enable = qcom_pcie_2_3_2_ltssm_enable,
+};
+
+/* Qcom IP rev.: 2.4.0  Synopsys IP rev.: 4.20a */
+static const struct qcom_pcie_ops ops_2_4_0 = {
+    .get_resources = qcom_pcie_get_resources_2_4_0,
+    .init = qcom_pcie_init_2_4_0,
+    .deinit = qcom_pcie_deinit_2_4_0,
+    .ltssm_enable = qcom_pcie_2_3_2_ltssm_enable,
+};
+
+/* Qcom IP rev.: 2.3.3  Synopsys IP rev.: 4.30a */
+static const struct qcom_pcie_ops ops_2_3_3 = {
+    .get_resources = qcom_pcie_get_resources_2_3_3,
+    .init = qcom_pcie_init_2_3_3,
+    .deinit = qcom_pcie_deinit_2_3_3,
+    .ltssm_enable = qcom_pcie_2_3_2_ltssm_enable,
 };
 
 static const struct dw_pcie_ops dw_pcie_ops = {
     .link_up = qcom_pcie_link_up,
-};
-
-static const struct qcom_pcie_ops ops_v3 = {
-    .get_resources = qcom_pcie_get_resources_v3,
-    .init = qcom_pcie_init_v3,
-    .deinit = qcom_pcie_deinit_v3,
-    .ltssm_enable = qcom_pcie_v2_ltssm_enable,
 };
 
 static int qcom_pcie_probe(struct platform_device *pdev)
···
 }
 
 static const struct of_device_id qcom_pcie_match[] = {
-    { .compatible = "qcom,pcie-ipq8064", .data = &ops_v0 },
-    { .compatible = "qcom,pcie-apq8064", .data = &ops_v0 },
-    { .compatible = "qcom,pcie-apq8084", .data = &ops_v1 },
-    { .compatible = "qcom,pcie-msm8996", .data = &ops_v2 },
-    { .compatible = "qcom,pcie-ipq4019", .data = &ops_v3 },
+    { .compatible = "qcom,pcie-apq8084", .data = &ops_1_0_0 },
+    { .compatible = "qcom,pcie-ipq8064", .data = &ops_2_1_0 },
+    { .compatible = "qcom,pcie-apq8064", .data = &ops_2_1_0 },
+    { .compatible = "qcom,pcie-msm8996", .data = &ops_2_3_2 },
+    { .compatible = "qcom,pcie-ipq8074", .data = &ops_2_3_3 },
+    { .compatible =
"qcom,pcie-ipq4019", .data = &ops_2_4_0 }, 1305 1078 { } 1306 1079 }; 1307 1080
+5 -3
drivers/pci/dwc/pcie-spear13xx.c
···
 	return 0;
 }
 
-static void spear13xx_pcie_host_init(struct pcie_port *pp)
+static int spear13xx_pcie_host_init(struct pcie_port *pp)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
 	struct spear13xx_pcie *spear13xx_pcie = to_spear13xx_pcie(pci);
 
 	spear13xx_pcie_establish_link(spear13xx_pcie);
 	spear13xx_pcie_enable_interrupts(spear13xx_pcie);
+
+	return 0;
 }
 
 static const struct dw_pcie_host_ops spear13xx_pcie_host_ops = {
···
 	int ret;
 
 	pp->irq = platform_get_irq(pdev, 0);
-	if (!pp->irq) {
+	if (pp->irq < 0) {
 		dev_err(dev, "failed to get irq\n");
-		return -ENODEV;
+		return pp->irq;
 	}
 	ret = devm_request_irq(dev, pp->irq, spear13xx_pcie_irq_handler,
 			       IRQF_SHARED | IRQF_NO_THREAD,
+63 -36
drivers/pci/endpoint/functions/pci-epf-test.c
···
 struct pci_epf_test {
 	void	*reg[6];
 	struct pci_epf	*epf;
+	enum pci_barno	test_reg_bar;
+	bool	linkup_notifier;
 	struct delayed_work	cmd_handler;
 };
 
···
 	.interrupt_pin = PCI_INTERRUPT_INTA,
 };
 
-static int bar_size[] = { 512, 1024, 16384, 131072, 1048576 };
+struct pci_epf_test_data {
+	enum pci_barno	test_reg_bar;
+	bool	linkup_notifier;
+};
+
+static int bar_size[] = { 512, 512, 1024, 16384, 131072, 1048576 };
 
 static int pci_epf_test_copy(struct pci_epf_test *epf_test)
 {
···
 	struct pci_epf *epf = epf_test->epf;
 	struct device *dev = &epf->dev;
 	struct pci_epc *epc = epf->epc;
-	struct pci_epf_test_reg *reg = epf_test->reg[0];
+	enum pci_barno test_reg_bar = epf_test->test_reg_bar;
+	struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar];
 
 	src_addr = pci_epc_mem_alloc_addr(epc, &src_phys_addr, reg->size);
 	if (!src_addr) {
···
 	struct pci_epf *epf = epf_test->epf;
 	struct device *dev = &epf->dev;
 	struct pci_epc *epc = epf->epc;
-	struct pci_epf_test_reg *reg = epf_test->reg[0];
+	enum pci_barno test_reg_bar = epf_test->test_reg_bar;
+	struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar];
 
 	src_addr = pci_epc_mem_alloc_addr(epc, &phys_addr, reg->size);
 	if (!src_addr) {
···
 	struct pci_epf *epf = epf_test->epf;
 	struct device *dev = &epf->dev;
 	struct pci_epc *epc = epf->epc;
-	struct pci_epf_test_reg *reg = epf_test->reg[0];
+	enum pci_barno test_reg_bar = epf_test->test_reg_bar;
+	struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar];
 
 	dst_addr = pci_epc_mem_alloc_addr(epc, &phys_addr, reg->size);
 	if (!dst_addr) {
···
 	u8 msi_count;
 	struct pci_epf *epf = epf_test->epf;
 	struct pci_epc *epc = epf->epc;
-	struct pci_epf_test_reg *reg = epf_test->reg[0];
+	enum pci_barno test_reg_bar = epf_test->test_reg_bar;
+	struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar];
 
 	reg->status |= STATUS_IRQ_RAISED;
 	msi_count = pci_epc_get_msi(epc);
···
 	int ret;
 	u8 irq;
 	u8 msi_count;
+	u32 command;
 	struct pci_epf_test *epf_test = container_of(work, struct pci_epf_test,
 						     cmd_handler.work);
 	struct pci_epf *epf = epf_test->epf;
 	struct pci_epc *epc = epf->epc;
-	struct pci_epf_test_reg *reg = epf_test->reg[0];
+	enum pci_barno test_reg_bar = epf_test->test_reg_bar;
+	struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar];
 
-	if (!reg->command)
+	command = reg->command;
+	if (!command)
 		goto reset_handler;
 
-	if (reg->command & COMMAND_RAISE_LEGACY_IRQ) {
+	reg->command = 0;
+	reg->status = 0;
+
+	if (command & COMMAND_RAISE_LEGACY_IRQ) {
 		reg->status = STATUS_IRQ_RAISED;
 		pci_epc_raise_irq(epc, PCI_EPC_IRQ_LEGACY, 0);
 		goto reset_handler;
 	}
 
-	if (reg->command & COMMAND_WRITE) {
+	if (command & COMMAND_WRITE) {
 		ret = pci_epf_test_write(epf_test);
 		if (ret)
 			reg->status |= STATUS_WRITE_FAIL;
···
 		goto reset_handler;
 	}
 
-	if (reg->command & COMMAND_READ) {
+	if (command & COMMAND_READ) {
 		ret = pci_epf_test_read(epf_test);
 		if (!ret)
 			reg->status |= STATUS_READ_SUCCESS;
···
 		goto reset_handler;
 	}
 
-	if (reg->command & COMMAND_COPY) {
+	if (command & COMMAND_COPY) {
 		ret = pci_epf_test_copy(epf_test);
 		if (!ret)
 			reg->status |= STATUS_COPY_SUCCESS;
···
 		goto reset_handler;
 	}
 
-	if (reg->command & COMMAND_RAISE_MSI_IRQ) {
+	if (command & COMMAND_RAISE_MSI_IRQ) {
 		msi_count = pci_epc_get_msi(epc);
-		irq = (reg->command & MSI_NUMBER_MASK) >> MSI_NUMBER_SHIFT;
+		irq = (command & MSI_NUMBER_MASK) >> MSI_NUMBER_SHIFT;
 		if (irq > msi_count || msi_count <= 0)
 			goto reset_handler;
 		reg->status = STATUS_IRQ_RAISED;
···
 	}
 
 reset_handler:
-	reg->command = 0;
-
 	queue_delayed_work(kpcitest_workqueue, &epf_test->cmd_handler,
 			   msecs_to_jiffies(1));
 }
···
 	struct pci_epc *epc = epf->epc;
 	struct device *dev = &epf->dev;
 	struct pci_epf_test *epf_test = epf_get_drvdata(epf);
+	enum pci_barno test_reg_bar = epf_test->test_reg_bar;
 
 	flags = PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_32;
 	if (sizeof(dma_addr_t) == 0x8)
···
 		if (ret) {
 			pci_epf_free_space(epf, epf_test->reg[bar], bar);
 			dev_err(dev, "failed to set BAR%d\n", bar);
-			if (bar == BAR_0)
+			if (bar == test_reg_bar)
 				return ret;
 		}
 	}
···
 	struct device *dev = &epf->dev;
 	void *base;
 	int bar;
+	enum pci_barno test_reg_bar = epf_test->test_reg_bar;
 
 	base = pci_epf_alloc_space(epf, sizeof(struct pci_epf_test_reg),
-				   BAR_0);
+				   test_reg_bar);
 	if (!base) {
 		dev_err(dev, "failed to allocated register space\n");
 		return -ENOMEM;
 	}
-	epf_test->reg[0] = base;
+	epf_test->reg[test_reg_bar] = base;
 
-	for (bar = BAR_1; bar <= BAR_5; bar++) {
-		base = pci_epf_alloc_space(epf, bar_size[bar - 1], bar);
+	for (bar = BAR_0; bar <= BAR_5; bar++) {
+		if (bar == test_reg_bar)
+			continue;
+		base = pci_epf_alloc_space(epf, bar_size[bar], bar);
 		if (!base)
 			dev_err(dev, "failed to allocate space for BAR%d\n",
 				bar);
···
 static int pci_epf_test_bind(struct pci_epf *epf)
 {
 	int ret;
+	struct pci_epf_test *epf_test = epf_get_drvdata(epf);
 	struct pci_epf_header *header = epf->header;
 	struct pci_epc *epc = epf->epc;
 	struct device *dev = &epf->dev;
···
 	if (ret)
 		return ret;
 
+	if (!epf_test->linkup_notifier)
+		queue_work(kpcitest_workqueue, &epf_test->cmd_handler.work);
+
 	return 0;
 }
+
+static const struct pci_epf_device_id pci_epf_test_ids[] = {
+	{
+		.name = "pci_epf_test",
+	},
+	{},
+};
 
 static int pci_epf_test_probe(struct pci_epf *epf)
 {
 	struct pci_epf_test *epf_test;
 	struct device *dev = &epf->dev;
+	const struct pci_epf_device_id *match;
+	struct pci_epf_test_data *data;
+	enum pci_barno test_reg_bar = BAR_0;
+	bool linkup_notifier = true;
+
+	match = pci_epf_match_device(pci_epf_test_ids, epf);
+	data = (struct pci_epf_test_data *)match->driver_data;
+	if (data) {
+		test_reg_bar = data->test_reg_bar;
+		linkup_notifier = data->linkup_notifier;
+	}
 
 	epf_test = devm_kzalloc(dev, sizeof(*epf_test), GFP_KERNEL);
 	if (!epf_test)
···
 
 	epf->header = &test_header;
 	epf_test->epf = epf;
+	epf_test->test_reg_bar = test_reg_bar;
+	epf_test->linkup_notifier = linkup_notifier;
 
 	INIT_DELAYED_WORK(&epf_test->cmd_handler, pci_epf_test_cmd_handler);
 
 	epf_set_drvdata(epf, epf_test);
-	return 0;
-}
-
-static int pci_epf_test_remove(struct pci_epf *epf)
-{
-	struct pci_epf_test *epf_test = epf_get_drvdata(epf);
-
-	kfree(epf_test);
 	return 0;
 }
 
···
 	.linkup = pci_epf_test_linkup,
 };
 
-static const struct pci_epf_device_id pci_epf_test_ids[] = {
-	{
-		.name = "pci_epf_test",
-	},
-	{},
-};
-
 static struct pci_epf_driver test_driver = {
 	.driver.name = "pci_epf_test",
 	.probe = pci_epf_test_probe,
-	.remove = pci_epf_test_remove,
 	.id_table = pci_epf_test_ids,
 	.ops = &ops,
 	.owner = THIS_MODULE,
+9 -2
drivers/pci/endpoint/pci-epc-core.c
···
 #include <linux/dma-mapping.h>
 #include <linux/slab.h>
 #include <linux/module.h>
+#include <linux/of_device.h>
 
 #include <linux/pci-epc.h>
 #include <linux/pci-epf.h>
···
 int pci_epc_add_epf(struct pci_epc *epc, struct pci_epf *epf)
 {
 	unsigned long flags;
+	struct device *dev = epc->dev.parent;
 
 	if (epf->epc)
 		return -EBUSY;
···
 		return -EINVAL;
 
 	epf->epc = epc;
-	dma_set_coherent_mask(&epf->dev, epc->dev.coherent_dma_mask);
-	epf->dev.dma_mask = epc->dev.dma_mask;
+	if (dev->of_node) {
+		of_dma_configure(&epf->dev, dev->of_node);
+	} else {
+		dma_set_coherent_mask(&epf->dev, epc->dev.coherent_dma_mask);
+		epf->dev.dma_mask = epc->dev.dma_mask;
+	}
 
 	spin_lock_irqsave(&epc->lock, flags);
 	list_add_tail(&epf->list, &epc->pci_epf);
···
 	dma_set_coherent_mask(&epc->dev, dev->coherent_dma_mask);
 	epc->dev.class = pci_epc_class;
 	epc->dev.dma_mask = dev->dma_mask;
+	epc->dev.parent = dev;
 	epc->ops = ops;
 
 	ret = dev_set_name(&epc->dev, "%s", dev_name(dev));
+50 -9
drivers/pci/endpoint/pci-epc-mem.c
···
 #include <linux/pci-epc.h>
 
 /**
- * pci_epc_mem_init() - initialize the pci_epc_mem structure
+ * pci_epc_mem_get_order() - determine the allocation order of a memory size
+ * @mem: address space of the endpoint controller
+ * @size: the size for which to get the order
+ *
+ * Reimplement get_order() for mem->page_size since the generic get_order
+ * always gets order with a constant PAGE_SIZE.
+ */
+static int pci_epc_mem_get_order(struct pci_epc_mem *mem, size_t size)
+{
+	int order;
+	unsigned int page_shift = ilog2(mem->page_size);
+
+	size--;
+	size >>= page_shift;
+#if BITS_PER_LONG == 32
+	order = fls(size);
+#else
+	order = fls64(size);
+#endif
+	return order;
+}
+
+/**
+ * __pci_epc_mem_init() - initialize the pci_epc_mem structure
  * @epc: the EPC device that invoked pci_epc_mem_init
  * @phys_base: the physical address of the base
  * @size: the size of the address space
+ * @page_size: size of each page
  *
  * Invoke to initialize the pci_epc_mem structure used by the
  * endpoint functions to allocate mapped PCI address.
  */
-int pci_epc_mem_init(struct pci_epc *epc, phys_addr_t phys_base, size_t size)
+int __pci_epc_mem_init(struct pci_epc *epc, phys_addr_t phys_base, size_t size,
+		       size_t page_size)
 {
 	int ret;
 	struct pci_epc_mem *mem;
 	unsigned long *bitmap;
-	int pages = size >> PAGE_SHIFT;
-	int bitmap_size = BITS_TO_LONGS(pages) * sizeof(long);
+	unsigned int page_shift;
+	int pages;
+	int bitmap_size;
+
+	if (page_size < PAGE_SIZE)
+		page_size = PAGE_SIZE;
+
+	page_shift = ilog2(page_size);
+	pages = size >> page_shift;
+	bitmap_size = BITS_TO_LONGS(pages) * sizeof(long);
 
 	mem = kzalloc(sizeof(*mem), GFP_KERNEL);
 	if (!mem) {
···
 
 	mem->bitmap = bitmap;
 	mem->phys_base = phys_base;
+	mem->page_size = page_size;
 	mem->pages = pages;
 	mem->size = size;
 
···
 err:
 	return ret;
 }
-EXPORT_SYMBOL_GPL(pci_epc_mem_init);
+EXPORT_SYMBOL_GPL(__pci_epc_mem_init);
 
 /**
  * pci_epc_mem_exit() - cleanup the pci_epc_mem structure
···
 	int pageno;
 	void __iomem *virt_addr;
 	struct pci_epc_mem *mem = epc->mem;
-	int order = get_order(size);
+	unsigned int page_shift = ilog2(mem->page_size);
+	int order;
+
+	size = ALIGN(size, mem->page_size);
+	order = pci_epc_mem_get_order(mem, size);
 
 	pageno = bitmap_find_free_region(mem->bitmap, mem->pages, order);
 	if (pageno < 0)
 		return NULL;
 
-	*phys_addr = mem->phys_base + (pageno << PAGE_SHIFT);
+	*phys_addr = mem->phys_base + (pageno << page_shift);
 	virt_addr = ioremap(*phys_addr, size);
 	if (!virt_addr)
 		bitmap_release_region(mem->bitmap, pageno, order);
···
 			     void __iomem *virt_addr, size_t size)
 {
 	int pageno;
-	int order = get_order(size);
 	struct pci_epc_mem *mem = epc->mem;
+	unsigned int page_shift = ilog2(mem->page_size);
+	int order;
 
 	iounmap(virt_addr);
-	pageno = (phys_addr - mem->phys_base) >> PAGE_SHIFT;
+	pageno = (phys_addr - mem->phys_base) >> page_shift;
+	size = ALIGN(size, mem->page_size);
+	order = pci_epc_mem_get_order(mem, size);
 	bitmap_release_region(mem->bitmap, pageno, order);
 }
 EXPORT_SYMBOL_GPL(pci_epc_mem_free_addr);
+21 -4
drivers/pci/endpoint/pci-epf-core.c
···
 #include <linux/pci-ep-cfs.h>
 
 static struct bus_type pci_epf_bus_type;
-static struct device_type pci_epf_type;
+static const struct device_type pci_epf_type;
 
 /**
  * pci_epf_linkup() - Notify the function driver that EPC device has
···
 }
 EXPORT_SYMBOL_GPL(pci_epf_create);
 
+const struct pci_epf_device_id *
+pci_epf_match_device(const struct pci_epf_device_id *id, struct pci_epf *epf)
+{
+	if (!id || !epf)
+		return NULL;
+
+	while (*id->name) {
+		if (strcmp(epf->name, id->name) == 0)
+			return id;
+		id++;
+	}
+
+	return NULL;
+}
+EXPORT_SYMBOL_GPL(pci_epf_match_device);
+
 static void pci_epf_dev_release(struct device *dev)
 {
 	struct pci_epf *epf = to_pci_epf(dev);
···
 	kfree(epf);
 }
 
-static struct device_type pci_epf_type = {
+static const struct device_type pci_epf_type = {
 	.release = pci_epf_dev_release,
 };
 
···
 
 static int pci_epf_device_remove(struct device *dev)
 {
-	int ret;
+	int ret = 0;
 	struct pci_epf *epf = to_pci_epf(dev);
 	struct pci_epf_driver *driver = to_pci_epf_driver(dev->driver);
 
-	ret = driver->remove(epf);
+	if (driver->remove)
+		ret = driver->remove(epf);
 	epf->driver = NULL;
 
 	return ret;
+3 -4
drivers/pci/host/Kconfig
···
 
 config PCIE_XILINX
 	bool "Xilinx AXI PCIe host bridge support"
-	depends on ARCH_ZYNQ || MICROBLAZE
+	depends on ARCH_ZYNQ || MICROBLAZE || (MIPS && PCI_DRIVERS_GENERIC)
 	help
 	  Say 'Y' here if you want kernel to support the Xilinx AXI PCIe
 	  Host Bridge driver.
···
 
 config PCIE_MEDIATEK
 	bool "MediaTek PCIe controller"
-	depends on ARM && (ARCH_MEDIATEK || COMPILE_TEST)
+	depends on (ARM || ARM64) && (ARCH_MEDIATEK || COMPILE_TEST)
 	depends on OF
 	depends on PCI
 	select PCIEPORTBUS
 	help
 	  Say Y here if you want to enable PCIe controller support on
-	  MT7623 series SoCs.  There is one single root complex with 3 root
-	  ports available.  Each port supports Gen2 lane x1.
+	  MediaTek SoCs.
 
 config PCIE_TANGO_SMP8759
 	bool "Tango SMP8759 PCIe controller (DANGEROUS)"
+2 -3
drivers/pci/host/pci-aardvark.c
···
 #define LINK_WAIT_USLEEP_MIN		90000
 #define LINK_WAIT_USLEEP_MAX		100000
 
-#define LEGACY_IRQ_NUM			4
 #define MSI_IRQ_NUM			32
 
 struct advk_pcie {
···
 	irq_chip->irq_unmask = advk_pcie_irq_unmask;
 
 	pcie->irq_domain =
-		irq_domain_add_linear(pcie_intc_node, LEGACY_IRQ_NUM,
+		irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX,
 				      &advk_pcie_irq_domain_ops, pcie);
 	if (!pcie->irq_domain) {
 		dev_err(dev, "Failed to get a INTx IRQ domain\n");
···
 		advk_pcie_handle_msi(pcie);
 
 	/* Process legacy interrupts */
-	for (i = 0; i < LEGACY_IRQ_NUM; i++) {
+	for (i = 0; i < PCI_NUM_INTX; i++) {
 		if (!(status & PCIE_ISR0_INTX_ASSERT(i)))
 			continue;
 
+3 -3
drivers/pci/host/pci-ftpci100.c
···
 
 	/* All PCI IRQs cascade off this one */
 	irq = of_irq_get(intc, 0);
-	if (!irq) {
+	if (irq <= 0) {
 		dev_err(p->dev, "failed to get parent IRQ\n");
-		return -EINVAL;
+		return irq ?: -EINVAL;
 	}
 
-	p->irqdomain = irq_domain_add_linear(intc, 4,
+	p->irqdomain = irq_domain_add_linear(intc, PCI_NUM_INTX,
 					     &faraday_pci_irqdomain_ops, p);
 	if (!p->irqdomain) {
 		dev_err(p->dev, "failed to create Gemini PCI IRQ domain\n");
+7 -1
drivers/pci/host/pci-hyperv.c
···
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/pci.h>
+#include <linux/delay.h>
 #include <linux/semaphore.h>
 #include <linux/irqdomain.h>
 #include <asm/irqdomain.h>
···
 		goto free_int_desc;
 	}
 
-	wait_for_completion(&comp.comp_pkt.host_event);
+	/*
+	 * Since this function is called with IRQ locks held, can't
+	 * do normal wait for completion; instead poll.
+	 */
+	while (!try_wait_for_completion(&comp.comp_pkt.host_event))
+		udelay(100);
 
 	if (comp.comp_pkt.completion_status < 0) {
 		dev_err(&hbus->hdev->device,
+5 -6
drivers/pci/host/pci-mvebu.c
···
 	port->pcie = pcie;
 
 	if (of_property_read_u32(child, "marvell,pcie-port", &port->port)) {
-		dev_warn(dev, "ignoring %s, missing pcie-port property\n",
-			 of_node_full_name(child));
+		dev_warn(dev, "ignoring %pOF, missing pcie-port property\n",
+			 child);
 		goto skip;
 	}
 
···
 	}
 
 	if (flags & OF_GPIO_ACTIVE_LOW) {
-		dev_info(dev, "%s: reset gpio is active low\n",
-			 of_node_full_name(child));
+		dev_info(dev, "%pOF: reset gpio is active low\n",
+			 child);
 		gpio_flags = GPIOF_ACTIVE_LOW |
 			     GPIOF_OUT_INIT_LOW;
 	} else {
···
  */
 static void mvebu_pcie_powerdown(struct mvebu_pcie_port *port)
 {
-	if (port->reset_gpio)
-		gpiod_set_value_cansleep(port->reset_gpio, 1);
+	gpiod_set_value_cansleep(port->reset_gpio, 1);
 
 	clk_disable_unprepare(port->clk);
 }
+4 -5
drivers/pci/host/pci-tegra.c
···
 {
 	struct device *dev = pcie->dev;
 
-	pcie->pex_rst = devm_reset_control_get(dev, "pex");
+	pcie->pex_rst = devm_reset_control_get_exclusive(dev, "pex");
 	if (IS_ERR(pcie->pex_rst))
 		return PTR_ERR(pcie->pex_rst);
 
-	pcie->afi_rst = devm_reset_control_get(dev, "afi");
+	pcie->afi_rst = devm_reset_control_get_exclusive(dev, "afi");
 	if (IS_ERR(pcie->afi_rst))
 		return PTR_ERR(pcie->afi_rst);
 
-	pcie->pcie_xrst = devm_reset_control_get(dev, "pcie_x");
+	pcie->pcie_xrst = devm_reset_control_get_exclusive(dev, "pcie_x");
 	if (IS_ERR(pcie->pcie_xrst))
 		return PTR_ERR(pcie->pcie_xrst);
 
···
 		pcie->num_supplies = 2;
 
 	if (pcie->num_supplies == 0) {
-		dev_err(dev, "device %s not supported in legacy mode\n",
-			np->full_name);
+		dev_err(dev, "device %pOF not supported in legacy mode\n", np);
 		return -ENODEV;
 	}
 
+1 -1
drivers/pci/host/pci-xgene-msi.c
···
 		if (virt_msir < 0) {
 			dev_err(&pdev->dev, "Cannot translate IRQ index %d\n",
 				irq_index);
-			rc = -EINVAL;
+			rc = virt_msir;
 			goto error;
 		}
 		xgene_msi->msi_groups[irq_index].gic_irq = virt_msir;
+20 -21
drivers/pci/host/pci-xgene.c
···
 #define SZ_1T				(SZ_1G*1024ULL)
 #define PIPE_PHY_RATE_RD(src)		((0xc000 & (u32)(src)) >> 0xe)
 
-#define ROOT_CAP_AND_CTRL		0x5C
+#define XGENE_V1_PCI_EXP_CAP		0x40
 
 /* PCIe IP version */
 #define XGENE_PCIE_IP_VER_UNKN		0
···
 }
 
 static void __iomem *xgene_pcie_map_bus(struct pci_bus *bus, unsigned int devfn,
-				      int offset)
+					int offset)
 {
 	if ((pci_is_root_bus(bus) && devfn != 0) ||
 	    xgene_pcie_hide_rc_bars(bus, offset))
···
 	 * Avoid this by not claiming to support CRS.
 	 */
 	if (pci_is_root_bus(bus) && (port->version == XGENE_PCIE_IP_VER_1) &&
-	    ((where & ~0x3) == ROOT_CAP_AND_CTRL))
+	    ((where & ~0x3) == XGENE_V1_PCI_EXP_CAP + PCI_EXP_RTCTL))
 		*val &= ~(PCI_EXP_RTCAP_CRSVIS << 16);
 
 	if (size <= 2)
···
 }
 
 struct pci_ecam_ops xgene_v1_pcie_ecam_ops = {
-	.bus_shift = 16,
-	.init = xgene_v1_pcie_ecam_init,
-	.pci_ops = {
-		.map_bus = xgene_pcie_map_bus,
-		.read = xgene_pcie_config_read32,
-		.write = pci_generic_config_write,
+	.bus_shift	= 16,
+	.init		= xgene_v1_pcie_ecam_init,
+	.pci_ops	= {
+		.map_bus	= xgene_pcie_map_bus,
+		.read		= xgene_pcie_config_read32,
+		.write		= pci_generic_config_write,
 	}
 };
 
···
 }
 
 struct pci_ecam_ops xgene_v2_pcie_ecam_ops = {
-	.bus_shift = 16,
-	.init = xgene_v2_pcie_ecam_init,
-	.pci_ops = {
-		.map_bus = xgene_pcie_map_bus,
-		.read = xgene_pcie_config_read32,
-		.write = pci_generic_config_write,
+	.bus_shift	= 16,
+	.init		= xgene_v2_pcie_ecam_init,
+	.pci_ops	= {
+		.map_bus	= xgene_pcie_map_bus,
+		.read		= xgene_pcie_config_read32,
+		.write		= pci_generic_config_write,
 	}
 };
 #endif
···
 }
 
 static void xgene_pcie_linkup(struct xgene_pcie_port *port,
-			   u32 *lanes, u32 *speed)
+			      u32 *lanes, u32 *speed)
 {
 	u32 val32;
 
···
 		xgene_pcie_writel(port, i, 0);
 }
 
-static int xgene_pcie_setup(struct xgene_pcie_port *port,
-			    struct list_head *res,
+static int xgene_pcie_setup(struct xgene_pcie_port *port, struct list_head *res,
 			    resource_size_t io_base)
 {
 	struct device *dev = port->dev;
···
 
 static struct platform_driver xgene_pcie_driver = {
 	.driver = {
-		   .name = "xgene-pcie",
-		   .of_match_table = of_match_ptr(xgene_pcie_match_table),
-		   .suppress_bind_attrs = true,
+		.name = "xgene-pcie",
+		.of_match_table = of_match_ptr(xgene_pcie_match_table),
+		.suppress_bind_attrs = true,
 	},
 	.probe = xgene_pcie_probe_bridge,
 };
+2 -4
drivers/pci/host/pcie-altera-msi.c
···
 	struct irq_chip *chip = irq_desc_get_chip(desc);
 	struct altera_msi *msi;
 	unsigned long status;
-	u32 num_of_vectors;
 	u32 bit;
 	u32 virq;
 
 	chained_irq_enter(chip, desc);
 	msi = irq_desc_get_handler_data(desc);
-	num_of_vectors = msi->num_of_vectors;
 
 	while ((status = msi_readl(msi, MSI_STATUS)) != 0) {
 		for_each_set_bit(bit, &status, msi->num_of_vectors) {
···
 		return ret;
 
 	msi->irq = platform_get_irq(pdev, 0);
-	if (msi->irq <= 0) {
+	if (msi->irq < 0) {
 		dev_err(&pdev->dev, "failed to map IRQ: %d\n", msi->irq);
-		ret = -ENODEV;
+		ret = msi->irq;
 		goto err;
 	}
 
+6 -7
drivers/pci/host/pcie-altera.c
···
 #define LINK_UP_TIMEOUT			HZ
 #define LINK_RETRAIN_TIMEOUT		HZ
 
-#define INTX_NUM			4
-
 #define DWORD_MASK			3
 
 struct altera_pcie {
···
 
 static const struct irq_domain_ops intx_domain_ops = {
 	.map = altera_pcie_intx_map,
+	.xlate = pci_irqd_intx_xlate,
 };
 
 static void altera_pcie_isr(struct irq_desc *desc)
···
 
 	while ((status = cra_readl(pcie, P2A_INT_STATUS)
 		& P2A_INT_STS_ALL) != 0) {
-		for_each_set_bit(bit, &status, INTX_NUM) {
+		for_each_set_bit(bit, &status, PCI_NUM_INTX) {
 			/* clear interrupts */
 			cra_writel(pcie, 1 << bit, P2A_INT_STATUS);
 
-			virq = irq_find_mapping(pcie->irq_domain, bit + 1);
+			virq = irq_find_mapping(pcie->irq_domain, bit);
 			if (virq)
 				generic_handle_irq(virq);
 			else
···
 	struct device_node *node = dev->of_node;
 
 	/* Setup INTx */
-	pcie->irq_domain = irq_domain_add_linear(node, INTX_NUM + 1,
+	pcie->irq_domain = irq_domain_add_linear(node, PCI_NUM_INTX,
 					&intx_domain_ops, pcie);
 	if (!pcie->irq_domain) {
 		dev_err(dev, "Failed to get a INTx IRQ domain\n");
···
 
 	/* setup IRQ */
 	pcie->irq = platform_get_irq(pdev, 0);
-	if (pcie->irq <= 0) {
+	if (pcie->irq < 0) {
 		dev_err(dev, "failed to get IRQ: %d\n", pcie->irq);
-		return -EINVAL;
+		return pcie->irq;
 	}
 
 	irq_set_chained_handler_and_data(pcie->irq, altera_pcie_isr, pcie);
-2
drivers/pci/host/pcie-iproc-msi.c
···
 	struct irq_chip *chip = irq_desc_get_chip(desc);
 	struct iproc_msi_grp *grp;
 	struct iproc_msi *msi;
-	struct iproc_pcie *pcie;
 	u32 eq, head, tail, nr_events;
 	unsigned long hwirq;
 	int virq;
···
 
 	grp = irq_desc_get_handler_data(desc);
 	msi = grp->msi;
-	pcie = msi->pcie;
 	eq = grp->eq;
 
 	/*
+8
drivers/pci/host/pcie-iproc-platform.c
···
 	return iproc_pcie_remove(pcie);
 }
 
+static void iproc_pcie_pltfm_shutdown(struct platform_device *pdev)
+{
+	struct iproc_pcie *pcie = platform_get_drvdata(pdev);
+
+	iproc_pcie_shutdown(pcie);
+}
+
 static struct platform_driver iproc_pcie_pltfm_driver = {
 	.driver = {
 		.name = "iproc-pcie",
···
 	},
 	.probe = iproc_pcie_pltfm_probe,
 	.remove = iproc_pcie_pltfm_remove,
+	.shutdown = iproc_pcie_pltfm_shutdown,
 };
 module_platform_driver(iproc_pcie_pltfm_driver);
 
+243 -143
drivers/pci/host/pcie-iproc.c
···
 #include "pcie-iproc.h"

-#define EP_PERST_SOURCE_SELECT_SHIFT 2
-#define EP_PERST_SOURCE_SELECT BIT(EP_PERST_SOURCE_SELECT_SHIFT)
-#define EP_MODE_SURVIVE_PERST_SHIFT 1
-#define EP_MODE_SURVIVE_PERST BIT(EP_MODE_SURVIVE_PERST_SHIFT)
-#define RC_PCIE_RST_OUTPUT_SHIFT 0
-#define RC_PCIE_RST_OUTPUT BIT(RC_PCIE_RST_OUTPUT_SHIFT)
-#define PAXC_RESET_MASK 0x7f
+#define EP_PERST_SOURCE_SELECT_SHIFT	2
+#define EP_PERST_SOURCE_SELECT		BIT(EP_PERST_SOURCE_SELECT_SHIFT)
+#define EP_MODE_SURVIVE_PERST_SHIFT	1
+#define EP_MODE_SURVIVE_PERST		BIT(EP_MODE_SURVIVE_PERST_SHIFT)
+#define RC_PCIE_RST_OUTPUT_SHIFT	0
+#define RC_PCIE_RST_OUTPUT		BIT(RC_PCIE_RST_OUTPUT_SHIFT)
+#define PAXC_RESET_MASK			0x7f

-#define GIC_V3_CFG_SHIFT 0
-#define GIC_V3_CFG BIT(GIC_V3_CFG_SHIFT)
+#define GIC_V3_CFG_SHIFT		0
+#define GIC_V3_CFG			BIT(GIC_V3_CFG_SHIFT)

-#define MSI_ENABLE_CFG_SHIFT 0
-#define MSI_ENABLE_CFG BIT(MSI_ENABLE_CFG_SHIFT)
+#define MSI_ENABLE_CFG_SHIFT		0
+#define MSI_ENABLE_CFG			BIT(MSI_ENABLE_CFG_SHIFT)

-#define CFG_IND_ADDR_MASK 0x00001ffc
+#define CFG_IND_ADDR_MASK		0x00001ffc

-#define CFG_ADDR_BUS_NUM_SHIFT 20
-#define CFG_ADDR_BUS_NUM_MASK 0x0ff00000
-#define CFG_ADDR_DEV_NUM_SHIFT 15
-#define CFG_ADDR_DEV_NUM_MASK 0x000f8000
-#define CFG_ADDR_FUNC_NUM_SHIFT 12
-#define CFG_ADDR_FUNC_NUM_MASK 0x00007000
-#define CFG_ADDR_REG_NUM_SHIFT 2
-#define CFG_ADDR_REG_NUM_MASK 0x00000ffc
-#define CFG_ADDR_CFG_TYPE_SHIFT 0
-#define CFG_ADDR_CFG_TYPE_MASK 0x00000003
+#define CFG_ADDR_BUS_NUM_SHIFT		20
+#define CFG_ADDR_BUS_NUM_MASK		0x0ff00000
+#define CFG_ADDR_DEV_NUM_SHIFT		15
+#define CFG_ADDR_DEV_NUM_MASK		0x000f8000
+#define CFG_ADDR_FUNC_NUM_SHIFT		12
+#define CFG_ADDR_FUNC_NUM_MASK		0x00007000
+#define CFG_ADDR_REG_NUM_SHIFT		2
+#define CFG_ADDR_REG_NUM_MASK		0x00000ffc
+#define CFG_ADDR_CFG_TYPE_SHIFT		0
+#define CFG_ADDR_CFG_TYPE_MASK		0x00000003

-#define SYS_RC_INTX_MASK 0xf
+#define SYS_RC_INTX_MASK		0xf

-#define PCIE_PHYLINKUP_SHIFT 3
-#define PCIE_PHYLINKUP BIT(PCIE_PHYLINKUP_SHIFT)
-#define PCIE_DL_ACTIVE_SHIFT 2
-#define PCIE_DL_ACTIVE BIT(PCIE_DL_ACTIVE_SHIFT)
+#define PCIE_PHYLINKUP_SHIFT		3
+#define PCIE_PHYLINKUP			BIT(PCIE_PHYLINKUP_SHIFT)
+#define PCIE_DL_ACTIVE_SHIFT		2
+#define PCIE_DL_ACTIVE			BIT(PCIE_DL_ACTIVE_SHIFT)

-#define APB_ERR_EN_SHIFT 0
-#define APB_ERR_EN BIT(APB_ERR_EN_SHIFT)
+#define APB_ERR_EN_SHIFT		0
+#define APB_ERR_EN			BIT(APB_ERR_EN_SHIFT)
+
+#define CFG_RETRY_STATUS		0xffff0001
+#define CFG_RETRY_STATUS_TIMEOUT_US	500000 /* 500 milliseconds */

 /* derive the enum index of the outbound/inbound mapping registers */
-#define MAP_REG(base_reg, index) ((base_reg) + (index) * 2)
+#define MAP_REG(base_reg, index)	((base_reg) + (index) * 2)

 /*
  * Maximum number of outbound mapping window sizes that can be supported by any
  * OARR/OMAP mapping pair
  */
-#define MAX_NUM_OB_WINDOW_SIZES 4
+#define MAX_NUM_OB_WINDOW_SIZES		4

-#define OARR_VALID_SHIFT 0
-#define OARR_VALID BIT(OARR_VALID_SHIFT)
-#define OARR_SIZE_CFG_SHIFT 1
+#define OARR_VALID_SHIFT		0
+#define OARR_VALID			BIT(OARR_VALID_SHIFT)
+#define OARR_SIZE_CFG_SHIFT		1

 /*
  * Maximum number of inbound mapping region sizes that can be supported by an
  * IARR
  */
-#define MAX_NUM_IB_REGION_SIZES 9
+#define MAX_NUM_IB_REGION_SIZES		9

-#define IMAP_VALID_SHIFT 0
-#define IMAP_VALID BIT(IMAP_VALID_SHIFT)
+#define IMAP_VALID_SHIFT		0
+#define IMAP_VALID			BIT(IMAP_VALID_SHIFT)

-#define PCI_EXP_CAP 0xac
+#define IPROC_PCI_EXP_CAP		0xac

-#define IPROC_PCIE_REG_INVALID 0xffff
+#define IPROC_PCIE_REG_INVALID		0xffff

 /**
101 * iProc PCIe outbound mapping controller specific parameters ··· 307 304 308 305 /* iProc PCIe PAXB BCMA registers */ 309 306 static const u16 iproc_pcie_reg_paxb_bcma[] = { 310 - [IPROC_PCIE_CLK_CTRL] = 0x000, 311 - [IPROC_PCIE_CFG_IND_ADDR] = 0x120, 312 - [IPROC_PCIE_CFG_IND_DATA] = 0x124, 313 - [IPROC_PCIE_CFG_ADDR] = 0x1f8, 314 - [IPROC_PCIE_CFG_DATA] = 0x1fc, 315 - [IPROC_PCIE_INTX_EN] = 0x330, 316 - [IPROC_PCIE_LINK_STATUS] = 0xf0c, 307 + [IPROC_PCIE_CLK_CTRL] = 0x000, 308 + [IPROC_PCIE_CFG_IND_ADDR] = 0x120, 309 + [IPROC_PCIE_CFG_IND_DATA] = 0x124, 310 + [IPROC_PCIE_CFG_ADDR] = 0x1f8, 311 + [IPROC_PCIE_CFG_DATA] = 0x1fc, 312 + [IPROC_PCIE_INTX_EN] = 0x330, 313 + [IPROC_PCIE_LINK_STATUS] = 0xf0c, 317 314 }; 318 315 319 316 /* iProc PCIe PAXB registers */ 320 317 static const u16 iproc_pcie_reg_paxb[] = { 321 - [IPROC_PCIE_CLK_CTRL] = 0x000, 322 - [IPROC_PCIE_CFG_IND_ADDR] = 0x120, 323 - [IPROC_PCIE_CFG_IND_DATA] = 0x124, 324 - [IPROC_PCIE_CFG_ADDR] = 0x1f8, 325 - [IPROC_PCIE_CFG_DATA] = 0x1fc, 326 - [IPROC_PCIE_INTX_EN] = 0x330, 327 - [IPROC_PCIE_OARR0] = 0xd20, 328 - [IPROC_PCIE_OMAP0] = 0xd40, 329 - [IPROC_PCIE_OARR1] = 0xd28, 330 - [IPROC_PCIE_OMAP1] = 0xd48, 331 - [IPROC_PCIE_LINK_STATUS] = 0xf0c, 332 - [IPROC_PCIE_APB_ERR_EN] = 0xf40, 318 + [IPROC_PCIE_CLK_CTRL] = 0x000, 319 + [IPROC_PCIE_CFG_IND_ADDR] = 0x120, 320 + [IPROC_PCIE_CFG_IND_DATA] = 0x124, 321 + [IPROC_PCIE_CFG_ADDR] = 0x1f8, 322 + [IPROC_PCIE_CFG_DATA] = 0x1fc, 323 + [IPROC_PCIE_INTX_EN] = 0x330, 324 + [IPROC_PCIE_OARR0] = 0xd20, 325 + [IPROC_PCIE_OMAP0] = 0xd40, 326 + [IPROC_PCIE_OARR1] = 0xd28, 327 + [IPROC_PCIE_OMAP1] = 0xd48, 328 + [IPROC_PCIE_LINK_STATUS] = 0xf0c, 329 + [IPROC_PCIE_APB_ERR_EN] = 0xf40, 333 330 }; 334 331 335 332 /* iProc PCIe PAXB v2 registers */ 336 333 static const u16 iproc_pcie_reg_paxb_v2[] = { 337 - [IPROC_PCIE_CLK_CTRL] = 0x000, 338 - [IPROC_PCIE_CFG_IND_ADDR] = 0x120, 339 - [IPROC_PCIE_CFG_IND_DATA] = 0x124, 340 - [IPROC_PCIE_CFG_ADDR] = 0x1f8, 341 - 
[IPROC_PCIE_CFG_DATA] = 0x1fc, 342 - [IPROC_PCIE_INTX_EN] = 0x330, 343 - [IPROC_PCIE_OARR0] = 0xd20, 344 - [IPROC_PCIE_OMAP0] = 0xd40, 345 - [IPROC_PCIE_OARR1] = 0xd28, 346 - [IPROC_PCIE_OMAP1] = 0xd48, 347 - [IPROC_PCIE_OARR2] = 0xd60, 348 - [IPROC_PCIE_OMAP2] = 0xd68, 349 - [IPROC_PCIE_OARR3] = 0xdf0, 350 - [IPROC_PCIE_OMAP3] = 0xdf8, 351 - [IPROC_PCIE_IARR0] = 0xd00, 352 - [IPROC_PCIE_IMAP0] = 0xc00, 353 - [IPROC_PCIE_IARR2] = 0xd10, 354 - [IPROC_PCIE_IMAP2] = 0xcc0, 355 - [IPROC_PCIE_IARR3] = 0xe00, 356 - [IPROC_PCIE_IMAP3] = 0xe08, 357 - [IPROC_PCIE_IARR4] = 0xe68, 358 - [IPROC_PCIE_IMAP4] = 0xe70, 359 - [IPROC_PCIE_LINK_STATUS] = 0xf0c, 360 - [IPROC_PCIE_APB_ERR_EN] = 0xf40, 334 + [IPROC_PCIE_CLK_CTRL] = 0x000, 335 + [IPROC_PCIE_CFG_IND_ADDR] = 0x120, 336 + [IPROC_PCIE_CFG_IND_DATA] = 0x124, 337 + [IPROC_PCIE_CFG_ADDR] = 0x1f8, 338 + [IPROC_PCIE_CFG_DATA] = 0x1fc, 339 + [IPROC_PCIE_INTX_EN] = 0x330, 340 + [IPROC_PCIE_OARR0] = 0xd20, 341 + [IPROC_PCIE_OMAP0] = 0xd40, 342 + [IPROC_PCIE_OARR1] = 0xd28, 343 + [IPROC_PCIE_OMAP1] = 0xd48, 344 + [IPROC_PCIE_OARR2] = 0xd60, 345 + [IPROC_PCIE_OMAP2] = 0xd68, 346 + [IPROC_PCIE_OARR3] = 0xdf0, 347 + [IPROC_PCIE_OMAP3] = 0xdf8, 348 + [IPROC_PCIE_IARR0] = 0xd00, 349 + [IPROC_PCIE_IMAP0] = 0xc00, 350 + [IPROC_PCIE_IARR2] = 0xd10, 351 + [IPROC_PCIE_IMAP2] = 0xcc0, 352 + [IPROC_PCIE_IARR3] = 0xe00, 353 + [IPROC_PCIE_IMAP3] = 0xe08, 354 + [IPROC_PCIE_IARR4] = 0xe68, 355 + [IPROC_PCIE_IMAP4] = 0xe70, 356 + [IPROC_PCIE_LINK_STATUS] = 0xf0c, 357 + [IPROC_PCIE_APB_ERR_EN] = 0xf40, 361 358 }; 362 359 363 360 /* iProc PCIe PAXC v1 registers */ 364 361 static const u16 iproc_pcie_reg_paxc[] = { 365 - [IPROC_PCIE_CLK_CTRL] = 0x000, 366 - [IPROC_PCIE_CFG_IND_ADDR] = 0x1f0, 367 - [IPROC_PCIE_CFG_IND_DATA] = 0x1f4, 368 - [IPROC_PCIE_CFG_ADDR] = 0x1f8, 369 - [IPROC_PCIE_CFG_DATA] = 0x1fc, 362 + [IPROC_PCIE_CLK_CTRL] = 0x000, 363 + [IPROC_PCIE_CFG_IND_ADDR] = 0x1f0, 364 + [IPROC_PCIE_CFG_IND_DATA] = 0x1f4, 365 + [IPROC_PCIE_CFG_ADDR] = 
0x1f8, 366 + [IPROC_PCIE_CFG_DATA] = 0x1fc, 370 367 }; 371 368 372 369 /* iProc PCIe PAXC v2 registers */ 373 370 static const u16 iproc_pcie_reg_paxc_v2[] = { 374 - [IPROC_PCIE_MSI_GIC_MODE] = 0x050, 375 - [IPROC_PCIE_MSI_BASE_ADDR] = 0x074, 376 - [IPROC_PCIE_MSI_WINDOW_SIZE] = 0x078, 377 - [IPROC_PCIE_MSI_ADDR_LO] = 0x07c, 378 - [IPROC_PCIE_MSI_ADDR_HI] = 0x080, 379 - [IPROC_PCIE_MSI_EN_CFG] = 0x09c, 380 - [IPROC_PCIE_CFG_IND_ADDR] = 0x1f0, 381 - [IPROC_PCIE_CFG_IND_DATA] = 0x1f4, 382 - [IPROC_PCIE_CFG_ADDR] = 0x1f8, 383 - [IPROC_PCIE_CFG_DATA] = 0x1fc, 371 + [IPROC_PCIE_MSI_GIC_MODE] = 0x050, 372 + [IPROC_PCIE_MSI_BASE_ADDR] = 0x074, 373 + [IPROC_PCIE_MSI_WINDOW_SIZE] = 0x078, 374 + [IPROC_PCIE_MSI_ADDR_LO] = 0x07c, 375 + [IPROC_PCIE_MSI_ADDR_HI] = 0x080, 376 + [IPROC_PCIE_MSI_EN_CFG] = 0x09c, 377 + [IPROC_PCIE_CFG_IND_ADDR] = 0x1f0, 378 + [IPROC_PCIE_CFG_IND_DATA] = 0x1f4, 379 + [IPROC_PCIE_CFG_ADDR] = 0x1f8, 380 + [IPROC_PCIE_CFG_DATA] = 0x1fc, 384 381 }; 385 382 386 383 static inline struct iproc_pcie *iproc_data(struct pci_bus *bus) ··· 451 448 } 452 449 } 453 450 451 + static void __iomem *iproc_pcie_map_ep_cfg_reg(struct iproc_pcie *pcie, 452 + unsigned int busno, 453 + unsigned int slot, 454 + unsigned int fn, 455 + int where) 456 + { 457 + u16 offset; 458 + u32 val; 459 + 460 + /* EP device access */ 461 + val = (busno << CFG_ADDR_BUS_NUM_SHIFT) | 462 + (slot << CFG_ADDR_DEV_NUM_SHIFT) | 463 + (fn << CFG_ADDR_FUNC_NUM_SHIFT) | 464 + (where & CFG_ADDR_REG_NUM_MASK) | 465 + (1 & CFG_ADDR_CFG_TYPE_MASK); 466 + 467 + iproc_pcie_write_reg(pcie, IPROC_PCIE_CFG_ADDR, val); 468 + offset = iproc_pcie_reg_offset(pcie, IPROC_PCIE_CFG_DATA); 469 + 470 + if (iproc_pcie_reg_is_invalid(offset)) 471 + return NULL; 472 + 473 + return (pcie->base + offset); 474 + } 475 + 476 + static unsigned int iproc_pcie_cfg_retry(void __iomem *cfg_data_p) 477 + { 478 + int timeout = CFG_RETRY_STATUS_TIMEOUT_US; 479 + unsigned int data; 480 + 481 + /* 482 + * As per PCIe spec r3.1, sec 
2.3.2, CRS Software Visibility only 483 + * affects config reads of the Vendor ID. For config writes or any 484 + * other config reads, the Root may automatically reissue the 485 + * configuration request again as a new request. 486 + * 487 + * For config reads, this hardware returns CFG_RETRY_STATUS data 488 + * when it receives a CRS completion, regardless of the address of 489 + * the read or the CRS Software Visibility Enable bit. As a 490 + * partial workaround for this, we retry in software any read that 491 + * returns CFG_RETRY_STATUS. 492 + * 493 + * Note that a non-Vendor ID config register may have a value of 494 + * CFG_RETRY_STATUS. If we read that, we can't distinguish it from 495 + * a CRS completion, so we will incorrectly retry the read and 496 + * eventually return the wrong data (0xffffffff). 497 + */ 498 + data = readl(cfg_data_p); 499 + while (data == CFG_RETRY_STATUS && timeout--) { 500 + udelay(1); 501 + data = readl(cfg_data_p); 502 + } 503 + 504 + if (data == CFG_RETRY_STATUS) 505 + data = 0xffffffff; 506 + 507 + return data; 508 + } 509 + 510 + static int iproc_pcie_config_read(struct pci_bus *bus, unsigned int devfn, 511 + int where, int size, u32 *val) 512 + { 513 + struct iproc_pcie *pcie = iproc_data(bus); 514 + unsigned int slot = PCI_SLOT(devfn); 515 + unsigned int fn = PCI_FUNC(devfn); 516 + unsigned int busno = bus->number; 517 + void __iomem *cfg_data_p; 518 + unsigned int data; 519 + int ret; 520 + 521 + /* root complex access */ 522 + if (busno == 0) { 523 + ret = pci_generic_config_read32(bus, devfn, where, size, val); 524 + if (ret != PCIBIOS_SUCCESSFUL) 525 + return ret; 526 + 527 + /* Don't advertise CRS SV support */ 528 + if ((where & ~0x3) == IPROC_PCI_EXP_CAP + PCI_EXP_RTCTL) 529 + *val &= ~(PCI_EXP_RTCAP_CRSVIS << 16); 530 + return PCIBIOS_SUCCESSFUL; 531 + } 532 + 533 + cfg_data_p = iproc_pcie_map_ep_cfg_reg(pcie, busno, slot, fn, where); 534 + 535 + if (!cfg_data_p) 536 + return PCIBIOS_DEVICE_NOT_FOUND; 537 + 538 + 
data = iproc_pcie_cfg_retry(cfg_data_p); 539 + 540 + *val = data; 541 + if (size <= 2) 542 + *val = (data >> (8 * (where & 3))) & ((1 << (size * 8)) - 1); 543 + 544 + return PCIBIOS_SUCCESSFUL; 545 + } 546 + 454 547 /** 455 548 * Note access to the configuration registers are protected at the higher layer 456 549 * by 'pci_lock' in drivers/pci/access.c 457 550 */ 458 551 static void __iomem *iproc_pcie_map_cfg_bus(struct iproc_pcie *pcie, 459 - int busno, 460 - unsigned int devfn, 552 + int busno, unsigned int devfn, 461 553 int where) 462 554 { 463 555 unsigned slot = PCI_SLOT(devfn); 464 556 unsigned fn = PCI_FUNC(devfn); 465 - u32 val; 466 557 u16 offset; 467 558 468 559 /* root complex access */ ··· 581 484 if (slot > 0) 582 485 return NULL; 583 486 584 - /* EP device access */ 585 - val = (busno << CFG_ADDR_BUS_NUM_SHIFT) | 586 - (slot << CFG_ADDR_DEV_NUM_SHIFT) | 587 - (fn << CFG_ADDR_FUNC_NUM_SHIFT) | 588 - (where & CFG_ADDR_REG_NUM_MASK) | 589 - (1 & CFG_ADDR_CFG_TYPE_MASK); 590 - iproc_pcie_write_reg(pcie, IPROC_PCIE_CFG_ADDR, val); 591 - offset = iproc_pcie_reg_offset(pcie, IPROC_PCIE_CFG_DATA); 592 - if (iproc_pcie_reg_is_invalid(offset)) 593 - return NULL; 594 - else 595 - return (pcie->base + offset); 487 + return iproc_pcie_map_ep_cfg_reg(pcie, busno, slot, fn, where); 596 488 } 597 489 598 490 static void __iomem *iproc_pcie_bus_map_cfg_bus(struct pci_bus *bus, ··· 640 554 int where, int size, u32 *val) 641 555 { 642 556 int ret; 557 + struct iproc_pcie *pcie = iproc_data(bus); 643 558 644 559 iproc_pcie_apb_err_disable(bus, true); 645 - ret = pci_generic_config_read32(bus, devfn, where, size, val); 560 + if (pcie->type == IPROC_PCIE_PAXB_V2) 561 + ret = iproc_pcie_config_read(bus, devfn, where, size, val); 562 + else 563 + ret = pci_generic_config_read32(bus, devfn, where, size, val); 646 564 iproc_pcie_apb_err_disable(bus, false); 647 565 648 566 return ret; ··· 670 580 .write = iproc_pcie_config_write32, 671 581 }; 672 582 673 - static void 
iproc_pcie_reset(struct iproc_pcie *pcie) 583 + static void iproc_pcie_perst_ctrl(struct iproc_pcie *pcie, bool assert) 674 584 { 675 585 u32 val; 676 586 ··· 682 592 if (pcie->ep_is_internal) 683 593 return; 684 594 685 - /* 686 - * Select perst_b signal as reset source. Put the device into reset, 687 - * and then bring it out of reset 688 - */ 689 - val = iproc_pcie_read_reg(pcie, IPROC_PCIE_CLK_CTRL); 690 - val &= ~EP_PERST_SOURCE_SELECT & ~EP_MODE_SURVIVE_PERST & 691 - ~RC_PCIE_RST_OUTPUT; 692 - iproc_pcie_write_reg(pcie, IPROC_PCIE_CLK_CTRL, val); 693 - udelay(250); 694 - 695 - val |= RC_PCIE_RST_OUTPUT; 696 - iproc_pcie_write_reg(pcie, IPROC_PCIE_CLK_CTRL, val); 697 - msleep(100); 595 + if (assert) { 596 + val = iproc_pcie_read_reg(pcie, IPROC_PCIE_CLK_CTRL); 597 + val &= ~EP_PERST_SOURCE_SELECT & ~EP_MODE_SURVIVE_PERST & 598 + ~RC_PCIE_RST_OUTPUT; 599 + iproc_pcie_write_reg(pcie, IPROC_PCIE_CLK_CTRL, val); 600 + udelay(250); 601 + } else { 602 + val = iproc_pcie_read_reg(pcie, IPROC_PCIE_CLK_CTRL); 603 + val |= RC_PCIE_RST_OUTPUT; 604 + iproc_pcie_write_reg(pcie, IPROC_PCIE_CLK_CTRL, val); 605 + msleep(100); 606 + } 698 607 } 608 + 609 + int iproc_pcie_shutdown(struct iproc_pcie *pcie) 610 + { 611 + iproc_pcie_perst_ctrl(pcie, true); 612 + msleep(500); 613 + 614 + return 0; 615 + } 616 + EXPORT_SYMBOL_GPL(iproc_pcie_shutdown); 699 617 700 618 static int iproc_pcie_check_link(struct iproc_pcie *pcie) 701 619 { 702 620 struct device *dev = pcie->dev; 703 621 u32 hdr_type, link_ctrl, link_status, class, val; 704 - u16 pos = PCI_EXP_CAP; 705 622 bool link_is_active = false; 706 623 707 624 /* ··· 725 628 } 726 629 727 630 /* make sure we are not in EP mode */ 728 - iproc_pci_raw_config_read32(pcie, 0, PCI_HEADER_TYPE, 1, &hdr_type); 631 + iproc_pci_raw_config_read32(pcie, 0, PCI_HEADER_TYPE, 1, &hdr_type); 729 632 if ((hdr_type & 0x7f) != PCI_HEADER_TYPE_BRIDGE) { 730 633 dev_err(dev, "in EP mode, hdr=%#02x\n", hdr_type); 731 634 return -EFAULT; 732 635 } 733 
636 734 637 /* force class to PCI_CLASS_BRIDGE_PCI (0x0604) */ 735 - #define PCI_BRIDGE_CTRL_REG_OFFSET 0x43c 736 - #define PCI_CLASS_BRIDGE_MASK 0xffff00 737 - #define PCI_CLASS_BRIDGE_SHIFT 8 638 + #define PCI_BRIDGE_CTRL_REG_OFFSET 0x43c 639 + #define PCI_CLASS_BRIDGE_MASK 0xffff00 640 + #define PCI_CLASS_BRIDGE_SHIFT 8 738 641 iproc_pci_raw_config_read32(pcie, 0, PCI_BRIDGE_CTRL_REG_OFFSET, 739 642 4, &class); 740 643 class &= ~PCI_CLASS_BRIDGE_MASK; ··· 743 646 4, class); 744 647 745 648 /* check link status to see if link is active */ 746 - iproc_pci_raw_config_read32(pcie, 0, pos + PCI_EXP_LNKSTA, 649 + iproc_pci_raw_config_read32(pcie, 0, IPROC_PCI_EXP_CAP + PCI_EXP_LNKSTA, 747 650 2, &link_status); 748 651 if (link_status & PCI_EXP_LNKSTA_NLW) 749 652 link_is_active = true; 750 653 751 654 if (!link_is_active) { 752 655 /* try GEN 1 link speed */ 753 - #define PCI_TARGET_LINK_SPEED_MASK 0xf 754 - #define PCI_TARGET_LINK_SPEED_GEN2 0x2 755 - #define PCI_TARGET_LINK_SPEED_GEN1 0x1 656 + #define PCI_TARGET_LINK_SPEED_MASK 0xf 657 + #define PCI_TARGET_LINK_SPEED_GEN2 0x2 658 + #define PCI_TARGET_LINK_SPEED_GEN1 0x1 756 659 iproc_pci_raw_config_read32(pcie, 0, 757 - pos + PCI_EXP_LNKCTL2, 4, 758 - &link_ctrl); 660 + IPROC_PCI_EXP_CAP + PCI_EXP_LNKCTL2, 661 + 4, &link_ctrl); 759 662 if ((link_ctrl & PCI_TARGET_LINK_SPEED_MASK) == 760 663 PCI_TARGET_LINK_SPEED_GEN2) { 761 664 link_ctrl &= ~PCI_TARGET_LINK_SPEED_MASK; 762 665 link_ctrl |= PCI_TARGET_LINK_SPEED_GEN1; 763 666 iproc_pci_raw_config_write32(pcie, 0, 764 - pos + PCI_EXP_LNKCTL2, 765 - 4, link_ctrl); 667 + IPROC_PCI_EXP_CAP + PCI_EXP_LNKCTL2, 668 + 4, link_ctrl); 766 669 msleep(100); 767 670 768 671 iproc_pci_raw_config_read32(pcie, 0, 769 - pos + PCI_EXP_LNKSTA, 770 - 2, &link_status); 672 + IPROC_PCI_EXP_CAP + PCI_EXP_LNKSTA, 673 + 2, &link_status); 771 674 if (link_status & PCI_EXP_LNKSTA_NLW) 772 675 link_is_active = true; 773 676 } ··· 1320 1223 pcie->ib.nr_regions = ARRAY_SIZE(paxb_v2_ib_map); 1321 
1224 pcie->ib_map = paxb_v2_ib_map; 1322 1225 pcie->need_msi_steer = true; 1226 + dev_warn(dev, "reads of config registers that contain %#x return incorrect data\n", 1227 + CFG_RETRY_STATUS); 1323 1228 break; 1324 1229 case IPROC_PCIE_PAXC: 1325 1230 regs = iproc_pcie_reg_paxc; ··· 1385 1286 goto err_exit_phy; 1386 1287 } 1387 1288 1388 - iproc_pcie_reset(pcie); 1289 + iproc_pcie_perst_ctrl(pcie, true); 1290 + iproc_pcie_perst_ctrl(pcie, false); 1389 1291 1390 1292 if (pcie->need_ob_cfg) { 1391 1293 ret = iproc_pcie_map_ranges(pcie, res);
drivers/pci/host/pcie-iproc.h (+1)
··· 110 110 111 111 int iproc_pcie_setup(struct iproc_pcie *pcie, struct list_head *res); 112 112 int iproc_pcie_remove(struct iproc_pcie *pcie); 113 + int iproc_pcie_shutdown(struct iproc_pcie *pcie); 113 114 114 115 #ifdef CONFIG_PCIE_IPROC_MSI 115 116 int iproc_msi_init(struct iproc_pcie *pcie, struct device_node *node);
drivers/pci/host/pcie-mediatek.c (+690 -68)
··· 3 3 * 4 4 * Copyright (c) 2017 MediaTek Inc. 5 5 * Author: Ryder Lee <ryder.lee@mediatek.com> 6 + * Honghui Zhang <honghui.zhang@mediatek.com> 6 7 * 7 8 * This program is free software; you can redistribute it and/or modify 8 9 * it under the terms of the GNU General Public License version 2 as ··· 17 16 18 17 #include <linux/clk.h> 19 18 #include <linux/delay.h> 19 + #include <linux/iopoll.h> 20 + #include <linux/irq.h> 21 + #include <linux/irqdomain.h> 20 22 #include <linux/kernel.h> 21 23 #include <linux/of_address.h> 22 24 #include <linux/of_pci.h> ··· 67 63 #define PCIE_FC_CREDIT_MASK (GENMASK(31, 31) | GENMASK(28, 16)) 68 64 #define PCIE_FC_CREDIT_VAL(x) ((x) << 16) 69 65 66 + /* PCIe V2 share registers */ 67 + #define PCIE_SYS_CFG_V2 0x0 68 + #define PCIE_CSR_LTSSM_EN(x) BIT(0 + (x) * 8) 69 + #define PCIE_CSR_ASPM_L1_EN(x) BIT(1 + (x) * 8) 70 + 71 + /* PCIe V2 per-port registers */ 72 + #define PCIE_MSI_VECTOR 0x0c0 73 + #define PCIE_INT_MASK 0x420 74 + #define INTX_MASK GENMASK(19, 16) 75 + #define INTX_SHIFT 16 76 + #define PCIE_INT_STATUS 0x424 77 + #define MSI_STATUS BIT(23) 78 + #define PCIE_IMSI_STATUS 0x42c 79 + #define PCIE_IMSI_ADDR 0x430 80 + #define MSI_MASK BIT(23) 81 + #define MTK_MSI_IRQS_NUM 32 82 + 83 + #define PCIE_AHB_TRANS_BASE0_L 0x438 84 + #define PCIE_AHB_TRANS_BASE0_H 0x43c 85 + #define AHB2PCIE_SIZE(x) ((x) & GENMASK(4, 0)) 86 + #define PCIE_AXI_WINDOW0 0x448 87 + #define WIN_ENABLE BIT(7) 88 + 89 + /* PCIe V2 configuration transaction header */ 90 + #define PCIE_CFG_HEADER0 0x460 91 + #define PCIE_CFG_HEADER1 0x464 92 + #define PCIE_CFG_HEADER2 0x468 93 + #define PCIE_CFG_WDATA 0x470 94 + #define PCIE_APP_TLP_REQ 0x488 95 + #define PCIE_CFG_RDATA 0x48c 96 + #define APP_CFG_REQ BIT(0) 97 + #define APP_CPL_STATUS GENMASK(7, 5) 98 + 99 + #define CFG_WRRD_TYPE_0 4 100 + #define CFG_WR_FMT 2 101 + #define CFG_RD_FMT 0 102 + 103 + #define CFG_DW0_LENGTH(length) ((length) & GENMASK(9, 0)) 104 + #define CFG_DW0_TYPE(type) (((type) << 24) 
& GENMASK(28, 24)) 105 + #define CFG_DW0_FMT(fmt) (((fmt) << 29) & GENMASK(31, 29)) 106 + #define CFG_DW2_REGN(regn) ((regn) & GENMASK(11, 2)) 107 + #define CFG_DW2_FUN(fun) (((fun) << 16) & GENMASK(18, 16)) 108 + #define CFG_DW2_DEV(dev) (((dev) << 19) & GENMASK(23, 19)) 109 + #define CFG_DW2_BUS(bus) (((bus) << 24) & GENMASK(31, 24)) 110 + #define CFG_HEADER_DW0(type, fmt) \ 111 + (CFG_DW0_LENGTH(1) | CFG_DW0_TYPE(type) | CFG_DW0_FMT(fmt)) 112 + #define CFG_HEADER_DW1(where, size) \ 113 + (GENMASK(((size) - 1), 0) << ((where) & 0x3)) 114 + #define CFG_HEADER_DW2(regn, fun, dev, bus) \ 115 + (CFG_DW2_REGN(regn) | CFG_DW2_FUN(fun) | \ 116 + CFG_DW2_DEV(dev) | CFG_DW2_BUS(bus)) 117 + 118 + #define PCIE_RST_CTRL 0x510 119 + #define PCIE_PHY_RSTB BIT(0) 120 + #define PCIE_PIPE_SRSTB BIT(1) 121 + #define PCIE_MAC_SRSTB BIT(2) 122 + #define PCIE_CRSTB BIT(3) 123 + #define PCIE_PERSTB BIT(8) 124 + #define PCIE_LINKDOWN_RST_EN GENMASK(15, 13) 125 + #define PCIE_LINK_STATUS_V2 0x804 126 + #define PCIE_PORT_LINKUP_V2 BIT(10) 127 + 128 + struct mtk_pcie_port; 129 + 130 + /** 131 + * struct mtk_pcie_soc - differentiate between host generations 132 + * @has_msi: whether this host supports MSI interrupts or not 133 + * @ops: pointer to configuration access functions 134 + * @startup: pointer to controller setting functions 135 + * @setup_irq: pointer to initialize IRQ functions 136 + */ 137 + struct mtk_pcie_soc { 138 + bool has_msi; 139 + struct pci_ops *ops; 140 + int (*startup)(struct mtk_pcie_port *port); 141 + int (*setup_irq)(struct mtk_pcie_port *port, struct device_node *node); 142 + }; 143 + 70 144 /** 71 145 * struct mtk_pcie_port - PCIe port information 72 146 * @base: IO mapped register base 73 147 * @list: port list 74 148 * @pcie: pointer to PCIe host info 75 149 * @reset: pointer to port reset control 76 - * @sys_ck: pointer to bus clock 77 - * @phy: pointer to phy control block 150 + * @sys_ck: pointer to transaction/data link layer clock 151 + * @ahb_ck: 
pointer to AHB slave interface operating clock for CSR access 152 + * and RC initiated MMIO access 153 + * @axi_ck: pointer to application layer MMIO channel operating clock 154 + * @aux_ck: pointer to pe2_mac_bridge and pe2_mac_core operating clock 155 + * when pcie_mac_ck/pcie_pipe_ck is turned off 156 + * @obff_ck: pointer to OBFF functional block operating clock 157 + * @pipe_ck: pointer to LTSSM and PHY/MAC layer operating clock 158 + * @phy: pointer to PHY control block 78 159 * @lane: lane count 79 - * @index: port index 160 + * @slot: port slot 161 + * @irq_domain: legacy INTx IRQ domain 162 + * @msi_domain: MSI IRQ domain 163 + * @msi_irq_in_use: bit map for assigned MSI IRQ 80 164 */ 81 165 struct mtk_pcie_port { 82 166 void __iomem *base; ··· 172 80 struct mtk_pcie *pcie; 173 81 struct reset_control *reset; 174 82 struct clk *sys_ck; 83 + struct clk *ahb_ck; 84 + struct clk *axi_ck; 85 + struct clk *aux_ck; 86 + struct clk *obff_ck; 87 + struct clk *pipe_ck; 175 88 struct phy *phy; 176 89 u32 lane; 177 - u32 index; 90 + u32 slot; 91 + struct irq_domain *irq_domain; 92 + struct irq_domain *msi_domain; 93 + DECLARE_BITMAP(msi_irq_in_use, MTK_MSI_IRQS_NUM); 178 94 }; 179 95 180 96 /** ··· 196 96 * @busn: bus range 197 97 * @offset: IO / Memory offset 198 98 * @ports: pointer to PCIe port information 99 + * @soc: pointer to SoC-dependent operations 199 100 */ 200 101 struct mtk_pcie { 201 102 struct device *dev; ··· 212 111 resource_size_t io; 213 112 } offset; 214 113 struct list_head ports; 114 + const struct mtk_pcie_soc *soc; 215 115 }; 216 - 217 - static inline bool mtk_pcie_link_up(struct mtk_pcie_port *port) 218 - { 219 - return !!(readl(port->base + PCIE_LINK_STATUS) & PCIE_PORT_LINKUP); 220 - } 221 116 222 117 static void mtk_pcie_subsys_powerdown(struct mtk_pcie *pcie) 223 118 { ··· 243 146 244 147 list_for_each_entry_safe(port, tmp, &pcie->ports, list) { 245 148 phy_power_off(port->phy); 149 + phy_exit(port->phy); 150 + 
clk_disable_unprepare(port->pipe_ck); 151 + clk_disable_unprepare(port->obff_ck); 152 + clk_disable_unprepare(port->axi_ck); 153 + clk_disable_unprepare(port->aux_ck); 154 + clk_disable_unprepare(port->ahb_ck); 246 155 clk_disable_unprepare(port->sys_ck); 247 156 mtk_pcie_port_free(port); 248 157 } ··· 256 153 mtk_pcie_subsys_powerdown(pcie); 257 154 } 258 155 156 + static int mtk_pcie_check_cfg_cpld(struct mtk_pcie_port *port) 157 + { 158 + u32 val; 159 + int err; 160 + 161 + err = readl_poll_timeout_atomic(port->base + PCIE_APP_TLP_REQ, val, 162 + !(val & APP_CFG_REQ), 10, 163 + 100 * USEC_PER_MSEC); 164 + if (err) 165 + return PCIBIOS_SET_FAILED; 166 + 167 + if (readl(port->base + PCIE_APP_TLP_REQ) & APP_CPL_STATUS) 168 + return PCIBIOS_SET_FAILED; 169 + 170 + return PCIBIOS_SUCCESSFUL; 171 + } 172 + 173 + static int mtk_pcie_hw_rd_cfg(struct mtk_pcie_port *port, u32 bus, u32 devfn, 174 + int where, int size, u32 *val) 175 + { 176 + u32 tmp; 177 + 178 + /* Write PCIe configuration transaction header for Cfgrd */ 179 + writel(CFG_HEADER_DW0(CFG_WRRD_TYPE_0, CFG_RD_FMT), 180 + port->base + PCIE_CFG_HEADER0); 181 + writel(CFG_HEADER_DW1(where, size), port->base + PCIE_CFG_HEADER1); 182 + writel(CFG_HEADER_DW2(where, PCI_FUNC(devfn), PCI_SLOT(devfn), bus), 183 + port->base + PCIE_CFG_HEADER2); 184 + 185 + /* Trigger h/w to transmit Cfgrd TLP */ 186 + tmp = readl(port->base + PCIE_APP_TLP_REQ); 187 + tmp |= APP_CFG_REQ; 188 + writel(tmp, port->base + PCIE_APP_TLP_REQ); 189 + 190 + /* Check completion status */ 191 + if (mtk_pcie_check_cfg_cpld(port)) 192 + return PCIBIOS_SET_FAILED; 193 + 194 + /* Read cpld payload of Cfgrd */ 195 + *val = readl(port->base + PCIE_CFG_RDATA); 196 + 197 + if (size == 1) 198 + *val = (*val >> (8 * (where & 3))) & 0xff; 199 + else if (size == 2) 200 + *val = (*val >> (8 * (where & 3))) & 0xffff; 201 + 202 + return PCIBIOS_SUCCESSFUL; 203 + } 204 + 205 + static int mtk_pcie_hw_wr_cfg(struct mtk_pcie_port *port, u32 bus, u32 devfn, 206 + 
int where, int size, u32 val) 207 + { 208 + /* Write PCIe configuration transaction header for Cfgwr */ 209 + writel(CFG_HEADER_DW0(CFG_WRRD_TYPE_0, CFG_WR_FMT), 210 + port->base + PCIE_CFG_HEADER0); 211 + writel(CFG_HEADER_DW1(where, size), port->base + PCIE_CFG_HEADER1); 212 + writel(CFG_HEADER_DW2(where, PCI_FUNC(devfn), PCI_SLOT(devfn), bus), 213 + port->base + PCIE_CFG_HEADER2); 214 + 215 + /* Write Cfgwr data */ 216 + val = val << 8 * (where & 3); 217 + writel(val, port->base + PCIE_CFG_WDATA); 218 + 219 + /* Trigger h/w to transmit Cfgwr TLP */ 220 + val = readl(port->base + PCIE_APP_TLP_REQ); 221 + val |= APP_CFG_REQ; 222 + writel(val, port->base + PCIE_APP_TLP_REQ); 223 + 224 + /* Check completion status */ 225 + return mtk_pcie_check_cfg_cpld(port); 226 + } 227 + 228 + static struct mtk_pcie_port *mtk_pcie_find_port(struct pci_bus *bus, 229 + unsigned int devfn) 230 + { 231 + struct mtk_pcie *pcie = bus->sysdata; 232 + struct mtk_pcie_port *port; 233 + 234 + list_for_each_entry(port, &pcie->ports, list) 235 + if (port->slot == PCI_SLOT(devfn)) 236 + return port; 237 + 238 + return NULL; 239 + } 240 + 241 + static int mtk_pcie_config_read(struct pci_bus *bus, unsigned int devfn, 242 + int where, int size, u32 *val) 243 + { 244 + struct mtk_pcie_port *port; 245 + u32 bn = bus->number; 246 + int ret; 247 + 248 + port = mtk_pcie_find_port(bus, devfn); 249 + if (!port) { 250 + *val = ~0; 251 + return PCIBIOS_DEVICE_NOT_FOUND; 252 + } 253 + 254 + ret = mtk_pcie_hw_rd_cfg(port, bn, devfn, where, size, val); 255 + if (ret) 256 + *val = ~0; 257 + 258 + return ret; 259 + } 260 + 261 + static int mtk_pcie_config_write(struct pci_bus *bus, unsigned int devfn, 262 + int where, int size, u32 val) 263 + { 264 + struct mtk_pcie_port *port; 265 + u32 bn = bus->number; 266 + 267 + port = mtk_pcie_find_port(bus, devfn); 268 + if (!port) 269 + return PCIBIOS_DEVICE_NOT_FOUND; 270 + 271 + return mtk_pcie_hw_wr_cfg(port, bn, devfn, where, size, val); 272 + } 273 + 274 + static 
struct pci_ops mtk_pcie_ops_v2 = { 275 + .read = mtk_pcie_config_read, 276 + .write = mtk_pcie_config_write, 277 + }; 278 + 279 + static int mtk_pcie_startup_port_v2(struct mtk_pcie_port *port) 280 + { 281 + struct mtk_pcie *pcie = port->pcie; 282 + struct resource *mem = &pcie->mem; 283 + u32 val; 284 + size_t size; 285 + int err; 286 + 287 + /* MT7622 platforms need to enable LTSSM and ASPM from PCIe subsys */ 288 + if (pcie->base) { 289 + val = readl(pcie->base + PCIE_SYS_CFG_V2); 290 + val |= PCIE_CSR_LTSSM_EN(port->slot) | 291 + PCIE_CSR_ASPM_L1_EN(port->slot); 292 + writel(val, pcie->base + PCIE_SYS_CFG_V2); 293 + } 294 + 295 + /* Assert all reset signals */ 296 + writel(0, port->base + PCIE_RST_CTRL); 297 + 298 + /* 299 + * Enable PCIe link down reset, if link status changed from link up to 300 + * link down, this will reset MAC control registers and configuration 301 + * space. 302 + */ 303 + writel(PCIE_LINKDOWN_RST_EN, port->base + PCIE_RST_CTRL); 304 + 305 + /* De-assert PHY, PE, PIPE, MAC and configuration reset */ 306 + val = readl(port->base + PCIE_RST_CTRL); 307 + val |= PCIE_PHY_RSTB | PCIE_PERSTB | PCIE_PIPE_SRSTB | 308 + PCIE_MAC_SRSTB | PCIE_CRSTB; 309 + writel(val, port->base + PCIE_RST_CTRL); 310 + 311 + /* 100ms timeout value should be enough for Gen1/2 training */ 312 + err = readl_poll_timeout(port->base + PCIE_LINK_STATUS_V2, val, 313 + !!(val & PCIE_PORT_LINKUP_V2), 20, 314 + 100 * USEC_PER_MSEC); 315 + if (err) 316 + return -ETIMEDOUT; 317 + 318 + /* Set INTx mask */ 319 + val = readl(port->base + PCIE_INT_MASK); 320 + val &= ~INTX_MASK; 321 + writel(val, port->base + PCIE_INT_MASK); 322 + 323 + /* Set AHB to PCIe translation windows */ 324 + size = mem->end - mem->start; 325 + val = lower_32_bits(mem->start) | AHB2PCIE_SIZE(fls(size)); 326 + writel(val, port->base + PCIE_AHB_TRANS_BASE0_L); 327 + 328 + val = upper_32_bits(mem->start); 329 + writel(val, port->base + PCIE_AHB_TRANS_BASE0_H); 330 + 331 + /* Set PCIe to AXI translation 
memory space.*/ 332 + val = fls(0xffffffff) | WIN_ENABLE; 333 + writel(val, port->base + PCIE_AXI_WINDOW0); 334 + 335 + return 0; 336 + } 337 + 338 + static int mtk_pcie_msi_alloc(struct mtk_pcie_port *port) 339 + { 340 + int msi; 341 + 342 + msi = find_first_zero_bit(port->msi_irq_in_use, MTK_MSI_IRQS_NUM); 343 + if (msi < MTK_MSI_IRQS_NUM) 344 + set_bit(msi, port->msi_irq_in_use); 345 + else 346 + return -ENOSPC; 347 + 348 + return msi; 349 + } 350 + 351 + static void mtk_pcie_msi_free(struct mtk_pcie_port *port, unsigned long hwirq) 352 + { 353 + clear_bit(hwirq, port->msi_irq_in_use); 354 + } 355 + 356 + static int mtk_pcie_msi_setup_irq(struct msi_controller *chip, 357 + struct pci_dev *pdev, struct msi_desc *desc) 358 + { 359 + struct mtk_pcie_port *port; 360 + struct msi_msg msg; 361 + unsigned int irq; 362 + int hwirq; 363 + phys_addr_t msg_addr; 364 + 365 + port = mtk_pcie_find_port(pdev->bus, pdev->devfn); 366 + if (!port) 367 + return -EINVAL; 368 + 369 + hwirq = mtk_pcie_msi_alloc(port); 370 + if (hwirq < 0) 371 + return hwirq; 372 + 373 + irq = irq_create_mapping(port->msi_domain, hwirq); 374 + if (!irq) { 375 + mtk_pcie_msi_free(port, hwirq); 376 + return -EINVAL; 377 + } 378 + 379 + chip->dev = &pdev->dev; 380 + 381 + irq_set_msi_desc(irq, desc); 382 + 383 + /* MT2712/MT7622 only support 32-bit MSI addresses */ 384 + msg_addr = virt_to_phys(port->base + PCIE_MSI_VECTOR); 385 + msg.address_hi = 0; 386 + msg.address_lo = lower_32_bits(msg_addr); 387 + msg.data = hwirq; 388 + 389 + pci_write_msi_msg(irq, &msg); 390 + 391 + return 0; 392 + } 393 + 394 + static void mtk_msi_teardown_irq(struct msi_controller *chip, unsigned int irq) 395 + { 396 + struct pci_dev *pdev = to_pci_dev(chip->dev); 397 + struct irq_data *d = irq_get_irq_data(irq); 398 + irq_hw_number_t hwirq = irqd_to_hwirq(d); 399 + struct mtk_pcie_port *port; 400 + 401 + port = mtk_pcie_find_port(pdev->bus, pdev->devfn); 402 + if (!port) 403 + return; 404 + 405 + irq_dispose_mapping(irq); 406 
+	mtk_pcie_msi_free(port, hwirq);
+}
+
+static struct msi_controller mtk_pcie_msi_chip = {
+	.setup_irq = mtk_pcie_msi_setup_irq,
+	.teardown_irq = mtk_msi_teardown_irq,
+};
+
+static struct irq_chip mtk_msi_irq_chip = {
+	.name = "MTK PCIe MSI",
+	.irq_enable = pci_msi_unmask_irq,
+	.irq_disable = pci_msi_mask_irq,
+	.irq_mask = pci_msi_mask_irq,
+	.irq_unmask = pci_msi_unmask_irq,
+};
+
+static int mtk_pcie_msi_map(struct irq_domain *domain, unsigned int irq,
+			    irq_hw_number_t hwirq)
+{
+	irq_set_chip_and_handler(irq, &mtk_msi_irq_chip, handle_simple_irq);
+	irq_set_chip_data(irq, domain->host_data);
+
+	return 0;
+}
+
+static const struct irq_domain_ops msi_domain_ops = {
+	.map = mtk_pcie_msi_map,
+};
+
+static void mtk_pcie_enable_msi(struct mtk_pcie_port *port)
+{
+	u32 val;
+	phys_addr_t msg_addr;
+
+	msg_addr = virt_to_phys(port->base + PCIE_MSI_VECTOR);
+	val = lower_32_bits(msg_addr);
+	writel(val, port->base + PCIE_IMSI_ADDR);
+
+	val = readl(port->base + PCIE_INT_MASK);
+	val &= ~MSI_MASK;
+	writel(val, port->base + PCIE_INT_MASK);
+}
+
+static int mtk_pcie_intx_map(struct irq_domain *domain, unsigned int irq,
+			     irq_hw_number_t hwirq)
+{
+	irq_set_chip_and_handler(irq, &dummy_irq_chip, handle_simple_irq);
+	irq_set_chip_data(irq, domain->host_data);
+
+	return 0;
+}
+
+static const struct irq_domain_ops intx_domain_ops = {
+	.map = mtk_pcie_intx_map,
+};
+
+static int mtk_pcie_init_irq_domain(struct mtk_pcie_port *port,
+				    struct device_node *node)
+{
+	struct device *dev = port->pcie->dev;
+	struct device_node *pcie_intc_node;
+
+	/* Setup INTx */
+	pcie_intc_node = of_get_next_child(node, NULL);
+	if (!pcie_intc_node) {
+		dev_err(dev, "no PCIe Intc node found\n");
+		return -ENODEV;
+	}
+
+	port->irq_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX,
+						 &intx_domain_ops, port);
+	if (!port->irq_domain) {
+		dev_err(dev, "failed to get INTx IRQ domain\n");
+		return -ENODEV;
+	}
+
+	if (IS_ENABLED(CONFIG_PCI_MSI)) {
+		port->msi_domain = irq_domain_add_linear(node, MTK_MSI_IRQS_NUM,
+							 &msi_domain_ops,
+							 &mtk_pcie_msi_chip);
+		if (!port->msi_domain) {
+			dev_err(dev, "failed to create MSI IRQ domain\n");
+			return -ENODEV;
+		}
+		mtk_pcie_enable_msi(port);
+	}
+
+	return 0;
+}
+
+static irqreturn_t mtk_pcie_intr_handler(int irq, void *data)
+{
+	struct mtk_pcie_port *port = (struct mtk_pcie_port *)data;
+	unsigned long status;
+	u32 virq;
+	u32 bit = INTX_SHIFT;
+
+	while ((status = readl(port->base + PCIE_INT_STATUS)) & INTX_MASK) {
+		for_each_set_bit_from(bit, &status, PCI_NUM_INTX + INTX_SHIFT) {
+			/* Clear the INTx */
+			writel(1 << bit, port->base + PCIE_INT_STATUS);
+			virq = irq_find_mapping(port->irq_domain,
+						bit - INTX_SHIFT);
+			generic_handle_irq(virq);
+		}
+	}
+
+	if (IS_ENABLED(CONFIG_PCI_MSI)) {
+		while ((status = readl(port->base + PCIE_INT_STATUS)) & MSI_STATUS) {
+			unsigned long imsi_status;
+
+			while ((imsi_status = readl(port->base + PCIE_IMSI_STATUS))) {
+				for_each_set_bit(bit, &imsi_status, MTK_MSI_IRQS_NUM) {
+					/* Clear the MSI */
+					writel(1 << bit, port->base + PCIE_IMSI_STATUS);
+					virq = irq_find_mapping(port->msi_domain, bit);
+					generic_handle_irq(virq);
+				}
+			}
+			/* Clear MSI interrupt status */
+			writel(MSI_STATUS, port->base + PCIE_INT_STATUS);
+		}
+	}
+
+	return IRQ_HANDLED;
+}
+
+static int mtk_pcie_setup_irq(struct mtk_pcie_port *port,
+			      struct device_node *node)
+{
+	struct mtk_pcie *pcie = port->pcie;
+	struct device *dev = pcie->dev;
+	struct platform_device *pdev = to_platform_device(dev);
+	int err, irq;
+
+	irq = platform_get_irq(pdev, port->slot);
+	err = devm_request_irq(dev, irq, mtk_pcie_intr_handler,
+			       IRQF_SHARED, "mtk-pcie", port);
+	if (err) {
+		dev_err(dev, "unable to request IRQ %d\n", irq);
+		return err;
+	}
+
+	err = mtk_pcie_init_irq_domain(port, node);
+	if (err) {
+		dev_err(dev, "failed to init PCIe IRQ domain\n");
+		return err;
+	}
+
+	return 0;
+}
+
 static void __iomem *mtk_pcie_map_bus(struct pci_bus *bus,
				       unsigned int devfn, int where)
 {
-	struct pci_host_bridge *host = pci_find_host_bridge(bus);
-	struct mtk_pcie *pcie = pci_host_bridge_priv(host);
+	struct mtk_pcie *pcie = bus->sysdata;

	writel(PCIE_CONF_ADDR(where, PCI_FUNC(devfn), PCI_SLOT(devfn),
			      bus->number), pcie->base + PCIE_CFG_ADDR);
···
	.write = pci_generic_config_write,
 };

-static void mtk_pcie_configure_rc(struct mtk_pcie_port *port)
+static int mtk_pcie_startup_port(struct mtk_pcie_port *port)
 {
	struct mtk_pcie *pcie = port->pcie;
-	u32 func = PCI_FUNC(port->index << 3);
-	u32 slot = PCI_SLOT(port->index << 3);
+	u32 func = PCI_FUNC(port->slot << 3);
+	u32 slot = PCI_SLOT(port->slot << 3);
	u32 val;
+	int err;
+
+	/* assert port PERST_N */
+	val = readl(pcie->base + PCIE_SYS_CFG);
+	val |= PCIE_PORT_PERST(port->slot);
+	writel(val, pcie->base + PCIE_SYS_CFG);
+
+	/* de-assert port PERST_N */
+	val = readl(pcie->base + PCIE_SYS_CFG);
+	val &= ~PCIE_PORT_PERST(port->slot);
+	writel(val, pcie->base + PCIE_SYS_CFG);
+
+	/* 100ms timeout value should be enough for Gen1/2 training */
+	err = readl_poll_timeout(port->base + PCIE_LINK_STATUS, val,
+				 !!(val & PCIE_PORT_LINKUP), 20,
+				 100 * USEC_PER_MSEC);
+	if (err)
+		return -ETIMEDOUT;

	/* enable interrupt */
	val = readl(pcie->base + PCIE_INT_ENABLE);
-	val |= PCIE_PORT_INT_EN(port->index);
+	val |= PCIE_PORT_INT_EN(port->slot);
	writel(val, pcie->base + PCIE_INT_ENABLE);

	/* map to all DDR region. We need to set it before cfg operation. */
···
	writel(PCIE_CONF_ADDR(PCIE_FTS_NUM, func, slot, 0),
	       pcie->base + PCIE_CFG_ADDR);
	writel(val, pcie->base + PCIE_CFG_DATA);
+
+	return 0;
 }

-static void mtk_pcie_assert_ports(struct mtk_pcie_port *port)
+static void mtk_pcie_enable_port(struct mtk_pcie_port *port)
 {
	struct mtk_pcie *pcie = port->pcie;
-	u32 val;
-
-	/* assert port PERST_N */
-	val = readl(pcie->base + PCIE_SYS_CFG);
-	val |= PCIE_PORT_PERST(port->index);
-	writel(val, pcie->base + PCIE_SYS_CFG);
-
-	/* de-assert port PERST_N */
-	val = readl(pcie->base + PCIE_SYS_CFG);
-	val &= ~PCIE_PORT_PERST(port->index);
-	writel(val, pcie->base + PCIE_SYS_CFG);
-
-	/* PCIe v2.0 need at least 100ms delay to train from Gen1 to Gen2 */
-	msleep(100);
-}
-
-static void mtk_pcie_enable_ports(struct mtk_pcie_port *port)
-{
-	struct device *dev = port->pcie->dev;
+	struct device *dev = pcie->dev;
	int err;

	err = clk_prepare_enable(port->sys_ck);
	if (err) {
-		dev_err(dev, "failed to enable port%d clock\n", port->index);
+		dev_err(dev, "failed to enable sys_ck%d clock\n", port->slot);
		goto err_sys_clk;
+	}
+
+	err = clk_prepare_enable(port->ahb_ck);
+	if (err) {
+		dev_err(dev, "failed to enable ahb_ck%d\n", port->slot);
+		goto err_ahb_clk;
+	}
+
+	err = clk_prepare_enable(port->aux_ck);
+	if (err) {
+		dev_err(dev, "failed to enable aux_ck%d\n", port->slot);
+		goto err_aux_clk;
+	}
+
+	err = clk_prepare_enable(port->axi_ck);
+	if (err) {
+		dev_err(dev, "failed to enable axi_ck%d\n", port->slot);
+		goto err_axi_clk;
+	}
+
+	err = clk_prepare_enable(port->obff_ck);
+	if (err) {
+		dev_err(dev, "failed to enable obff_ck%d\n", port->slot);
+		goto err_obff_clk;
+	}
+
+	err = clk_prepare_enable(port->pipe_ck);
+	if (err) {
+		dev_err(dev, "failed to enable pipe_ck%d\n", port->slot);
+		goto err_pipe_clk;
	}

	reset_control_assert(port->reset);
	reset_control_deassert(port->reset);

+	err = phy_init(port->phy);
+	if (err) {
+		dev_err(dev, "failed to initialize port%d phy\n", port->slot);
+		goto err_phy_init;
+	}
+
	err = phy_power_on(port->phy);
	if (err) {
-		dev_err(dev, "failed to power on port%d phy\n", port->index);
+		dev_err(dev, "failed to power on port%d phy\n", port->slot);
		goto err_phy_on;
	}

-	mtk_pcie_assert_ports(port);
-
-	/* if link up, then setup root port configuration space */
-	if (mtk_pcie_link_up(port)) {
-		mtk_pcie_configure_rc(port);
+	if (!pcie->soc->startup(port))
		return;
-	}

-	dev_info(dev, "Port%d link down\n", port->index);
+	dev_info(dev, "Port%d link down\n", port->slot);

	phy_power_off(port->phy);
 err_phy_on:
+	phy_exit(port->phy);
+err_phy_init:
+	clk_disable_unprepare(port->pipe_ck);
+err_pipe_clk:
+	clk_disable_unprepare(port->obff_ck);
+err_obff_clk:
+	clk_disable_unprepare(port->axi_ck);
+err_axi_clk:
+	clk_disable_unprepare(port->aux_ck);
+err_aux_clk:
+	clk_disable_unprepare(port->ahb_ck);
+err_ahb_clk:
	clk_disable_unprepare(port->sys_ck);
 err_sys_clk:
	mtk_pcie_port_free(port);
 }

-static int mtk_pcie_parse_ports(struct mtk_pcie *pcie,
-				struct device_node *node,
-				int index)
+static int mtk_pcie_parse_port(struct mtk_pcie *pcie,
+			       struct device_node *node,
+			       int slot)
 {
	struct mtk_pcie_port *port;
	struct resource *regs;
···
		return err;
	}

-	regs = platform_get_resource(pdev, IORESOURCE_MEM, index + 1);
+	snprintf(name, sizeof(name), "port%d", slot);
+	regs = platform_get_resource_byname(pdev, IORESOURCE_MEM, name);
	port->base = devm_ioremap_resource(dev, regs);
	if (IS_ERR(port->base)) {
-		dev_err(dev, "failed to map port%d base\n", index);
+		dev_err(dev, "failed to map port%d base\n", slot);
		return PTR_ERR(port->base);
	}

-	snprintf(name, sizeof(name), "sys_ck%d", index);
+	snprintf(name, sizeof(name), "sys_ck%d", slot);
	port->sys_ck = devm_clk_get(dev, name);
	if (IS_ERR(port->sys_ck)) {
-		dev_err(dev, "failed to get port%d clock\n", index);
+		dev_err(dev, "failed to get sys_ck%d clock\n", slot);
		return PTR_ERR(port->sys_ck);
	}

-	snprintf(name, sizeof(name), "pcie-rst%d", index);
-	port->reset = devm_reset_control_get_optional(dev, name);
+	/* sys_ck might be divided into the following parts in some chips */
+	snprintf(name, sizeof(name), "ahb_ck%d", slot);
+	port->ahb_ck = devm_clk_get(dev, name);
+	if (IS_ERR(port->ahb_ck)) {
+		if (PTR_ERR(port->ahb_ck) == -EPROBE_DEFER)
+			return -EPROBE_DEFER;
+
+		port->ahb_ck = NULL;
+	}
+
+	snprintf(name, sizeof(name), "axi_ck%d", slot);
+	port->axi_ck = devm_clk_get(dev, name);
+	if (IS_ERR(port->axi_ck)) {
+		if (PTR_ERR(port->axi_ck) == -EPROBE_DEFER)
+			return -EPROBE_DEFER;
+
+		port->axi_ck = NULL;
+	}
+
+	snprintf(name, sizeof(name), "aux_ck%d", slot);
+	port->aux_ck = devm_clk_get(dev, name);
+	if (IS_ERR(port->aux_ck)) {
+		if (PTR_ERR(port->aux_ck) == -EPROBE_DEFER)
+			return -EPROBE_DEFER;
+
+		port->aux_ck = NULL;
+	}
+
+	snprintf(name, sizeof(name), "obff_ck%d", slot);
+	port->obff_ck = devm_clk_get(dev, name);
+	if (IS_ERR(port->obff_ck)) {
+		if (PTR_ERR(port->obff_ck) == -EPROBE_DEFER)
+			return -EPROBE_DEFER;
+
+		port->obff_ck = NULL;
+	}
+
+	snprintf(name, sizeof(name), "pipe_ck%d", slot);
+	port->pipe_ck = devm_clk_get(dev, name);
+	if (IS_ERR(port->pipe_ck)) {
+		if (PTR_ERR(port->pipe_ck) == -EPROBE_DEFER)
+			return -EPROBE_DEFER;
+
+		port->pipe_ck = NULL;
+	}
+
+	snprintf(name, sizeof(name), "pcie-rst%d", slot);
+	port->reset = devm_reset_control_get_optional_exclusive(dev, name);
	if (PTR_ERR(port->reset) == -EPROBE_DEFER)
		return PTR_ERR(port->reset);

	/* some platforms may use default PHY setting */
-	snprintf(name, sizeof(name), "pcie-phy%d", index);
+	snprintf(name, sizeof(name), "pcie-phy%d", slot);
	port->phy = devm_phy_optional_get(dev, name);
	if (IS_ERR(port->phy))
		return PTR_ERR(port->phy);

-	port->index = index;
+	port->slot = slot;
	port->pcie = pcie;
+
+	if (pcie->soc->setup_irq) {
+		err = pcie->soc->setup_irq(port, node);
+		if (err)
+			return err;
+	}

	INIT_LIST_HEAD(&port->list);
	list_add_tail(&port->list, &pcie->ports);
···
	struct resource *regs;
	int err;

-	/* get shared registers */
-	regs = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	pcie->base = devm_ioremap_resource(dev, regs);
-	if (IS_ERR(pcie->base)) {
-		dev_err(dev, "failed to map shared register\n");
-		return PTR_ERR(pcie->base);
+	/* get shared registers, which are optional */
+	regs = platform_get_resource_byname(pdev, IORESOURCE_MEM, "subsys");
+	if (regs) {
+		pcie->base = devm_ioremap_resource(dev, regs);
+		if (IS_ERR(pcie->base)) {
+			dev_err(dev, "failed to map shared register\n");
+			return PTR_ERR(pcie->base);
+		}
	}

	pcie->free_ck = devm_clk_get(dev, "free_ck");
···
	}

	for_each_available_child_of_node(node, child) {
-		int index;
+		int slot;

		err = of_pci_get_devfn(child);
		if (err < 0) {
···
			return err;
		}

-		index = PCI_SLOT(err);
+		slot = PCI_SLOT(err);

-		err = mtk_pcie_parse_ports(pcie, child, index);
+		err = mtk_pcie_parse_port(pcie, child, slot);
		if (err)
			return err;
	}
···

	/* enable each port, and then check link status */
	list_for_each_entry_safe(port, tmp, &pcie->ports, list)
-		mtk_pcie_enable_ports(port);
+		mtk_pcie_enable_port(port);

	/* power down PCIe subsys if slots are all empty (link down) */
	if (list_empty(&pcie->ports))
···

	host->busnr = pcie->busn.start;
	host->dev.parent = pcie->dev;
-	host->ops = &mtk_pcie_ops;
+	host->ops = pcie->soc->ops;
	host->map_irq = of_irq_parse_and_map_pci;
	host->swizzle_irq = pci_common_swizzle;
+	host->sysdata = pcie;
+	if (IS_ENABLED(CONFIG_PCI_MSI) && pcie->soc->has_msi)
+		host->msi = &mtk_pcie_msi_chip;

	err = pci_scan_root_bus_bridge(host);
	if (err < 0)
···
	pcie = pci_host_bridge_priv(host);

	pcie->dev = dev;
+	pcie->soc = of_device_get_match_data(dev);
	platform_set_drvdata(pdev, pcie);
	INIT_LIST_HEAD(&pcie->ports);
···
	return err;
 }

+static const struct mtk_pcie_soc mtk_pcie_soc_v1 = {
+	.ops = &mtk_pcie_ops,
+	.startup = mtk_pcie_startup_port,
+};
+
+static const struct mtk_pcie_soc mtk_pcie_soc_v2 = {
+	.has_msi = true,
+	.ops = &mtk_pcie_ops_v2,
+	.startup = mtk_pcie_startup_port_v2,
+	.setup_irq = mtk_pcie_setup_irq,
+};
+
 static const struct of_device_id mtk_pcie_ids[] = {
-	{ .compatible = "mediatek,mt7623-pcie"},
-	{ .compatible = "mediatek,mt2701-pcie"},
+	{ .compatible = "mediatek,mt2701-pcie", .data = &mtk_pcie_soc_v1 },
+	{ .compatible = "mediatek,mt7623-pcie", .data = &mtk_pcie_soc_v1 },
+	{ .compatible = "mediatek,mt2712-pcie", .data = &mtk_pcie_soc_v2 },
+	{ .compatible = "mediatek,mt7622-pcie", .data = &mtk_pcie_soc_v2 },
	{},
+6 -6
drivers/pci/host/pcie-rcar.c
···
	bridge->msi = &pcie->msi.chip;

	ret = pci_scan_root_bus_bridge(bridge);
-	if (ret < 0) {
-		kfree(bridge);
+	if (ret < 0)
		return ret;
-	}

	bus = bridge->bus;
···

	return 0;

-err_free_bridge:
-	pci_free_host_bridge(bridge);
-
 err_pm_put:
	pm_runtime_put(dev);

 err_pm_disable:
	pm_runtime_disable(dev);
+
+err_free_bridge:
+	pci_free_host_bridge(bridge);
+	pci_free_resource_list(&pcie->resources);
+
	return err;
 }
+282 -144
drivers/pci/host/pcie-rockchip.c
···
 * Author: Shawn Lin <shawn.lin@rock-chips.com>
 *         Wenrui Li <wenrui.li@rock-chips.com>
 *
- * Bits taken from Synopsys Designware Host controller driver and
+ * Bits taken from Synopsys DesignWare Host controller driver and
 * ARM PCI Host generic driver.
 *
 * This program is free software: you can redistribute it and/or modify
···
 * (at your option) any later version.
 */

+#include <linux/bitrev.h>
 #include <linux/clk.h>
 #include <linux/delay.h>
 #include <linux/gpio/consumer.h>
···
 #define HIWORD_UPDATE_BIT(val)	HIWORD_UPDATE(val, val)

 #define ENCODE_LANES(x)		((((x) >> 1) & 3) << 4)
+#define MAX_LANE_NUM		4

 #define PCIE_CLIENT_BASE	0x0
 #define PCIE_CLIENT_CONFIG	(PCIE_CLIENT_BASE + 0x00)
···
 #define PCIE_CORE_TXCREDIT_CFG1_MUI_SHIFT	16
 #define PCIE_CORE_TXCREDIT_CFG1_MUI_ENCODE(x) \
		(((x) >> 3) << PCIE_CORE_TXCREDIT_CFG1_MUI_SHIFT)
+#define PCIE_CORE_LANE_MAP		(PCIE_CORE_CTRL_MGMT_BASE + 0x200)
+#define PCIE_CORE_LANE_MAP_MASK		0x0000000f
+#define PCIE_CORE_LANE_MAP_REVERSE	BIT(16)
 #define PCIE_CORE_INT_STATUS		(PCIE_CORE_CTRL_MGMT_BASE + 0x20c)
 #define PCIE_CORE_INT_PRFPE		BIT(0)
 #define PCIE_CORE_INT_CRFPE		BIT(1)
···
 struct rockchip_pcie {
	void __iomem *reg_base;		/* DT axi-base */
	void __iomem *apb_base;		/* DT apb-base */
-	struct phy *phy;
+	bool legacy_phy;
+	struct phy *phys[MAX_LANE_NUM];
	struct reset_control *core_rst;
	struct reset_control *mgmt_rst;
	struct reset_control *mgmt_sticky_rst;
···
	struct clk *aclk_perf_pcie;
	struct clk *hclk_pcie;
	struct clk *clk_pcie_pm;
+	struct regulator *vpcie12v; /* 12V power supply */
	struct regulator *vpcie3v3; /* 3.3V power supply */
	struct regulator *vpcie1v8; /* 1.8V power supply */
	struct regulator *vpcie0v9; /* 0.9V power supply */
	struct gpio_desc *ep_gpio;
	u32 lanes;
+	u8 lanes_map;
	u8 root_bus_nr;
	int link_gen;
	struct device *dev;
···
		return 0;

	return 1;
+}
+
+static u8 rockchip_pcie_lane_map(struct rockchip_pcie *rockchip)
+{
+	u32 val;
+	u8 map;
+
+	if (rockchip->legacy_phy)
+		return GENMASK(MAX_LANE_NUM - 1, 0);
+
+	val = rockchip_pcie_read(rockchip, PCIE_CORE_LANE_MAP);
+	map = val & PCIE_CORE_LANE_MAP_MASK;
+
+	/* The link may be using a reverse-indexed mapping. */
+	if (val & PCIE_CORE_LANE_MAP_REVERSE)
+		map = bitrev8(map) >> 4;
+
+	return map;
 }

 static int rockchip_pcie_rd_own_conf(struct rockchip_pcie *rockchip,
···
 static int rockchip_pcie_init_port(struct rockchip_pcie *rockchip)
 {
	struct device *dev = rockchip->dev;
-	int err;
+	int err, i;
	u32 status;

-	gpiod_set_value(rockchip->ep_gpio, 0);
+	gpiod_set_value_cansleep(rockchip->ep_gpio, 0);

	err = reset_control_assert(rockchip->aclk_rst);
	if (err) {
···
		return err;
	}

-	err = phy_init(rockchip->phy);
-	if (err < 0) {
-		dev_err(dev, "fail to init phy, err %d\n", err);
-		return err;
+	for (i = 0; i < MAX_LANE_NUM; i++) {
+		err = phy_init(rockchip->phys[i]);
+		if (err) {
+			dev_err(dev, "init phy%d err %d\n", i, err);
+			goto err_exit_phy;
+		}
	}

	err = reset_control_assert(rockchip->core_rst);
	if (err) {
		dev_err(dev, "assert core_rst err %d\n", err);
-		return err;
+		goto err_exit_phy;
	}

	err = reset_control_assert(rockchip->mgmt_rst);
	if (err) {
		dev_err(dev, "assert mgmt_rst err %d\n", err);
-		return err;
+		goto err_exit_phy;
	}

	err = reset_control_assert(rockchip->mgmt_sticky_rst);
	if (err) {
		dev_err(dev, "assert mgmt_sticky_rst err %d\n", err);
-		return err;
+		goto err_exit_phy;
	}

	err = reset_control_assert(rockchip->pipe_rst);
	if (err) {
		dev_err(dev, "assert pipe_rst err %d\n", err);
-		return err;
+		goto err_exit_phy;
	}

	udelay(10);
···
	err = reset_control_deassert(rockchip->pm_rst);
	if (err) {
		dev_err(dev, "deassert pm_rst err %d\n", err);
-		return err;
+		goto err_exit_phy;
	}

	err = reset_control_deassert(rockchip->aclk_rst);
	if (err) {
		dev_err(dev, "deassert aclk_rst err %d\n", err);
-		return err;
+		goto err_exit_phy;
	}

	err = reset_control_deassert(rockchip->pclk_rst);
	if (err) {
		dev_err(dev, "deassert pclk_rst err %d\n", err);
-		return err;
+		goto err_exit_phy;
	}

	if (rockchip->link_gen == 2)
···
			      PCIE_CLIENT_MODE_RC,
			      PCIE_CLIENT_CONFIG);

-	err = phy_power_on(rockchip->phy);
-	if (err) {
-		dev_err(dev, "fail to power on phy, err %d\n", err);
-		return err;
+	for (i = 0; i < MAX_LANE_NUM; i++) {
+		err = phy_power_on(rockchip->phys[i]);
+		if (err) {
+			dev_err(dev, "power on phy%d err %d\n", i, err);
+			goto err_power_off_phy;
+		}
	}

	/*
···
	err = reset_control_deassert(rockchip->mgmt_sticky_rst);
	if (err) {
		dev_err(dev, "deassert mgmt_sticky_rst err %d\n", err);
-		return err;
+		goto err_power_off_phy;
	}

	err = reset_control_deassert(rockchip->core_rst);
	if (err) {
		dev_err(dev, "deassert core_rst err %d\n", err);
-		return err;
+		goto err_power_off_phy;
	}

	err = reset_control_deassert(rockchip->mgmt_rst);
	if (err) {
		dev_err(dev, "deassert mgmt_rst err %d\n", err);
-		return err;
+		goto err_power_off_phy;
	}

	err = reset_control_deassert(rockchip->pipe_rst);
	if (err) {
		dev_err(dev, "deassert pipe_rst err %d\n", err);
-		return err;
+		goto err_power_off_phy;
	}

	/* Fix the transmitted FTS count desired to exit from L0s. */
···
	rockchip_pcie_write(rockchip, PCIE_CLIENT_LINK_TRAIN_ENABLE,
			    PCIE_CLIENT_CONFIG);

-	gpiod_set_value(rockchip->ep_gpio, 1);
+	gpiod_set_value_cansleep(rockchip->ep_gpio, 1);

	/* 500ms timeout value should be enough for Gen1/2 training */
	err = readl_poll_timeout(rockchip->apb_base + PCIE_CLIENT_BASIC_STATUS1,
···
				 500 * USEC_PER_MSEC);
	if (err) {
		dev_err(dev, "PCIe link training gen1 timeout!\n");
-		return -ETIMEDOUT;
+		goto err_power_off_phy;
	}

	if (rockchip->link_gen == 2) {
···
	status = 0x1 << ((status & PCIE_CORE_PL_CONF_LANE_MASK) >>
			 PCIE_CORE_PL_CONF_LANE_SHIFT);
	dev_dbg(dev, "current link width is x%d\n", status);
+
+	/* Power off unused lane(s) */
+	rockchip->lanes_map = rockchip_pcie_lane_map(rockchip);
+	for (i = 0; i < MAX_LANE_NUM; i++) {
+		if (!(rockchip->lanes_map & BIT(i))) {
+			dev_dbg(dev, "idling lane %d\n", i);
+			phy_power_off(rockchip->phys[i]);
+		}
+	}

	rockchip_pcie_write(rockchip, ROCKCHIP_VENDOR_ID,
			    PCIE_CORE_CONFIG_VENDOR);
···
	rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_DCSR);

	return 0;
+err_power_off_phy:
+	while (i--)
+		phy_power_off(rockchip->phys[i]);
+	i = MAX_LANE_NUM;
+err_exit_phy:
+	while (i--)
+		phy_exit(rockchip->phys[i]);
+	return err;
+}
+
+static void rockchip_pcie_deinit_phys(struct rockchip_pcie *rockchip)
+{
+	int i;
+
+	for (i = 0; i < MAX_LANE_NUM; i++) {
+		/* inactive lanes are already powered off */
+		if (rockchip->lanes_map & BIT(i))
+			phy_power_off(rockchip->phys[i]);
+		phy_exit(rockchip->phys[i]);
+	}
 }

 static irqreturn_t rockchip_pcie_subsys_irq_handler(int irq, void *arg)
···
	chained_irq_exit(chip, desc);
 }

+static int rockchip_pcie_get_phys(struct rockchip_pcie *rockchip)
+{
+	struct device *dev = rockchip->dev;
+	struct phy *phy;
+	char *name;
+	u32 i;
+
+	phy = devm_phy_get(dev, "pcie-phy");
+	if (!IS_ERR(phy)) {
+		rockchip->legacy_phy = true;
+		rockchip->phys[0] = phy;
+		dev_warn(dev, "legacy phy model is deprecated!\n");
+		return 0;
+	}
+
+	if (PTR_ERR(phy) == -EPROBE_DEFER)
+		return PTR_ERR(phy);
+
+	dev_dbg(dev, "missing legacy phy; search for per-lane PHY\n");
+
+	for (i = 0; i < MAX_LANE_NUM; i++) {
+		name = kasprintf(GFP_KERNEL, "pcie-phy-%u", i);
+		if (!name)
+			return -ENOMEM;
+
+		phy = devm_of_phy_get(dev, dev->of_node, name);
+		kfree(name);
+
+		if (IS_ERR(phy)) {
+			if (PTR_ERR(phy) != -EPROBE_DEFER)
+				dev_err(dev, "missing phy for lane %d: %ld\n",
+					i, PTR_ERR(phy));
+			return PTR_ERR(phy);
+		}
+
+		rockchip->phys[i] = phy;
+	}
+
+	return 0;
+}
+
+static int rockchip_pcie_setup_irq(struct rockchip_pcie *rockchip)
+{
+	int irq, err;
+	struct device *dev = rockchip->dev;
+	struct platform_device *pdev = to_platform_device(dev);
+
+	irq = platform_get_irq_byname(pdev, "sys");
+	if (irq < 0) {
+		dev_err(dev, "missing sys IRQ resource\n");
+		return irq;
+	}
+
+	err = devm_request_irq(dev, irq, rockchip_pcie_subsys_irq_handler,
+			       IRQF_SHARED, "pcie-sys", rockchip);
+	if (err) {
+		dev_err(dev, "failed to request PCIe subsystem IRQ\n");
+		return err;
+	}
+
+	irq = platform_get_irq_byname(pdev, "legacy");
+	if (irq < 0) {
+		dev_err(dev, "missing legacy IRQ resource\n");
+		return irq;
+	}
+
+	irq_set_chained_handler_and_data(irq,
+					 rockchip_pcie_legacy_int_handler,
+					 rockchip);
+
+	irq = platform_get_irq_byname(pdev, "client");
+	if (irq < 0) {
+		dev_err(dev, "missing client IRQ resource\n");
+		return irq;
+	}
+
+	err = devm_request_irq(dev, irq, rockchip_pcie_client_irq_handler,
+			       IRQF_SHARED, "pcie-client", rockchip);
+	if (err) {
+		dev_err(dev, "failed to request PCIe client IRQ\n");
+		return err;
+	}
+
+	return 0;
+}

 /**
 * rockchip_pcie_parse_dt - Parse Device Tree
···
	struct platform_device *pdev = to_platform_device(dev);
	struct device_node *node = dev->of_node;
	struct resource *regs;
-	int irq;
	int err;

	regs = platform_get_resource_byname(pdev,
···
	if (IS_ERR(rockchip->apb_base))
		return PTR_ERR(rockchip->apb_base);

-	rockchip->phy = devm_phy_get(dev, "pcie-phy");
-	if (IS_ERR(rockchip->phy)) {
-		if (PTR_ERR(rockchip->phy) != -EPROBE_DEFER)
-			dev_err(dev, "missing phy\n");
-		return PTR_ERR(rockchip->phy);
-	}
+	err = rockchip_pcie_get_phys(rockchip);
+	if (err)
+		return err;

	rockchip->lanes = 1;
	err = of_property_read_u32(node, "num-lanes", &rockchip->lanes);
···
	if (rockchip->link_gen < 0 || rockchip->link_gen > 2)
		rockchip->link_gen = 2;

-	rockchip->core_rst = devm_reset_control_get(dev, "core");
+	rockchip->core_rst = devm_reset_control_get_exclusive(dev, "core");
	if (IS_ERR(rockchip->core_rst)) {
		if (PTR_ERR(rockchip->core_rst) != -EPROBE_DEFER)
			dev_err(dev, "missing core reset property in node\n");
		return PTR_ERR(rockchip->core_rst);
	}

-	rockchip->mgmt_rst = devm_reset_control_get(dev, "mgmt");
+	rockchip->mgmt_rst = devm_reset_control_get_exclusive(dev, "mgmt");
	if (IS_ERR(rockchip->mgmt_rst)) {
		if (PTR_ERR(rockchip->mgmt_rst) != -EPROBE_DEFER)
			dev_err(dev, "missing mgmt reset property in node\n");
		return PTR_ERR(rockchip->mgmt_rst);
	}

-	rockchip->mgmt_sticky_rst = devm_reset_control_get(dev, "mgmt-sticky");
+	rockchip->mgmt_sticky_rst = devm_reset_control_get_exclusive(dev,
+								     "mgmt-sticky");
	if (IS_ERR(rockchip->mgmt_sticky_rst)) {
		if (PTR_ERR(rockchip->mgmt_sticky_rst) != -EPROBE_DEFER)
			dev_err(dev, "missing mgmt-sticky reset property in node\n");
		return PTR_ERR(rockchip->mgmt_sticky_rst);
	}

-	rockchip->pipe_rst = devm_reset_control_get(dev, "pipe");
+	rockchip->pipe_rst = devm_reset_control_get_exclusive(dev, "pipe");
	if (IS_ERR(rockchip->pipe_rst)) {
		if (PTR_ERR(rockchip->pipe_rst) != -EPROBE_DEFER)
			dev_err(dev, "missing pipe reset property in node\n");
		return PTR_ERR(rockchip->pipe_rst);
	}

-	rockchip->pm_rst = devm_reset_control_get(dev, "pm");
+	rockchip->pm_rst = devm_reset_control_get_exclusive(dev, "pm");
	if (IS_ERR(rockchip->pm_rst)) {
		if (PTR_ERR(rockchip->pm_rst) != -EPROBE_DEFER)
			dev_err(dev, "missing pm reset property in node\n");
		return PTR_ERR(rockchip->pm_rst);
	}

-	rockchip->pclk_rst = devm_reset_control_get(dev, "pclk");
+	rockchip->pclk_rst = devm_reset_control_get_exclusive(dev, "pclk");
	if (IS_ERR(rockchip->pclk_rst)) {
		if (PTR_ERR(rockchip->pclk_rst) != -EPROBE_DEFER)
			dev_err(dev, "missing pclk reset property in node\n");
		return PTR_ERR(rockchip->pclk_rst);
	}

-	rockchip->aclk_rst = devm_reset_control_get(dev, "aclk");
+	rockchip->aclk_rst = devm_reset_control_get_exclusive(dev, "aclk");
	if (IS_ERR(rockchip->aclk_rst)) {
		if (PTR_ERR(rockchip->aclk_rst) != -EPROBE_DEFER)
			dev_err(dev, "missing aclk reset property in node\n");
···
		return PTR_ERR(rockchip->clk_pcie_pm);
	}

-	irq = platform_get_irq_byname(pdev, "sys");
-	if (irq < 0) {
-		dev_err(dev, "missing sys IRQ resource\n");
-		return -EINVAL;
-	}
-
-	err = devm_request_irq(dev, irq, rockchip_pcie_subsys_irq_handler,
-			       IRQF_SHARED, "pcie-sys", rockchip);
-	if (err) {
-		dev_err(dev, "failed to request PCIe subsystem IRQ\n");
+	err = rockchip_pcie_setup_irq(rockchip);
+	if (err)
		return err;
-	}

-	irq = platform_get_irq_byname(pdev, "legacy");
-	if (irq < 0) {
-		dev_err(dev, "missing legacy IRQ resource\n");
-		return -EINVAL;
-	}
-
-	irq_set_chained_handler_and_data(irq,
-					 rockchip_pcie_legacy_int_handler,
-					 rockchip);
-
-	irq = platform_get_irq_byname(pdev, "client");
-	if (irq < 0) {
-		dev_err(dev, "missing client IRQ resource\n");
-		return -EINVAL;
-	}
-
-	err = devm_request_irq(dev, irq, rockchip_pcie_client_irq_handler,
-			       IRQF_SHARED, "pcie-client", rockchip);
-	if (err) {
-		dev_err(dev, "failed to request PCIe client IRQ\n");
-		return err;
+	rockchip->vpcie12v = devm_regulator_get_optional(dev, "vpcie12v");
+	if (IS_ERR(rockchip->vpcie12v)) {
+		if (PTR_ERR(rockchip->vpcie12v) == -EPROBE_DEFER)
+			return -EPROBE_DEFER;
+		dev_info(dev, "no vpcie12v regulator found\n");
	}

	rockchip->vpcie3v3 = devm_regulator_get_optional(dev, "vpcie3v3");
···
	struct device *dev = rockchip->dev;
	int err;

+	if (!IS_ERR(rockchip->vpcie12v)) {
+		err = regulator_enable(rockchip->vpcie12v);
+		if (err) {
+			dev_err(dev, "fail to enable vpcie12v regulator\n");
+			goto err_out;
+		}
+	}
+
	if (!IS_ERR(rockchip->vpcie3v3)) {
		err = regulator_enable(rockchip->vpcie3v3);
		if (err) {
			dev_err(dev, "fail to enable vpcie3v3 regulator\n");
-			goto err_out;
+			goto err_disable_12v;
		}
	}
···
 err_disable_3v3:
	if (!IS_ERR(rockchip->vpcie3v3))
		regulator_disable(rockchip->vpcie3v3);
+err_disable_12v:
+	if (!IS_ERR(rockchip->vpcie12v))
+		regulator_disable(rockchip->vpcie12v);
 err_out:
	return err;
 }
···
		return -EINVAL;
	}

-	rockchip->irq_domain = irq_domain_add_linear(intc, 4,
+	rockchip->irq_domain = irq_domain_add_linear(intc, PCI_NUM_INTX,
						     &intx_domain_ops, rockchip);
	if (!rockchip->irq_domain) {
		dev_err(dev, "failed to get a INTx IRQ domain\n");
···
	return 0;
 }

+static int rockchip_pcie_enable_clocks(struct rockchip_pcie *rockchip)
+{
+	struct device *dev = rockchip->dev;
+	int err;
+
+	err = clk_prepare_enable(rockchip->aclk_pcie);
+	if (err) {
+		dev_err(dev, "unable to enable aclk_pcie clock\n");
+		return err;
+	}
+
+	err = clk_prepare_enable(rockchip->aclk_perf_pcie);
+	if (err) {
+		dev_err(dev, "unable to enable aclk_perf_pcie clock\n");
+		goto err_aclk_perf_pcie;
+	}
+
+	err = clk_prepare_enable(rockchip->hclk_pcie);
+	if (err) {
+		dev_err(dev, "unable to enable hclk_pcie clock\n");
+		goto err_hclk_pcie;
+	}
+
+	err = clk_prepare_enable(rockchip->clk_pcie_pm);
+	if (err) {
+		dev_err(dev, "unable to enable clk_pcie_pm clock\n");
+		goto err_clk_pcie_pm;
+	}
+
+	return 0;
+
+err_clk_pcie_pm:
+	clk_disable_unprepare(rockchip->hclk_pcie);
+err_hclk_pcie:
+	clk_disable_unprepare(rockchip->aclk_perf_pcie);
+err_aclk_perf_pcie:
+	clk_disable_unprepare(rockchip->aclk_pcie);
+	return err;
+}
+
+static void rockchip_pcie_disable_clocks(void *data)
+{
+	struct rockchip_pcie *rockchip = data;
+
+	clk_disable_unprepare(rockchip->clk_pcie_pm);
+	clk_disable_unprepare(rockchip->hclk_pcie);
+	clk_disable_unprepare(rockchip->aclk_perf_pcie);
+	clk_disable_unprepare(rockchip->aclk_pcie);
+}
+
 static int __maybe_unused rockchip_pcie_suspend_noirq(struct device *dev)
 {
	struct rockchip_pcie *rockchip = dev_get_drvdata(dev);
···
		return ret;
	}

-	phy_power_off(rockchip->phy);
-	phy_exit(rockchip->phy);
+	rockchip_pcie_deinit_phys(rockchip);

-	clk_disable_unprepare(rockchip->clk_pcie_pm);
-	clk_disable_unprepare(rockchip->hclk_pcie);
-	clk_disable_unprepare(rockchip->aclk_perf_pcie);
-	clk_disable_unprepare(rockchip->aclk_pcie);
+	rockchip_pcie_disable_clocks(rockchip);

	if (!IS_ERR(rockchip->vpcie0v9))
		regulator_disable(rockchip->vpcie0v9);
···
		}
	}

-	err = clk_prepare_enable(rockchip->clk_pcie_pm);
+	err = rockchip_pcie_enable_clocks(rockchip);
	if (err)
-		goto err_pcie_pm;
-
-	err = clk_prepare_enable(rockchip->hclk_pcie);
-	if (err)
-		goto err_hclk_pcie;
-
-	err = clk_prepare_enable(rockchip->aclk_perf_pcie);
-	if (err)
-		goto err_aclk_perf_pcie;
-
-	err = clk_prepare_enable(rockchip->aclk_pcie);
-	if (err)
-		goto err_aclk_pcie;
+		goto err_disable_0v9;

	err = rockchip_pcie_init_port(rockchip);
	if (err)
···

	err = rockchip_pcie_cfg_atu(rockchip);
	if (err)
-		goto err_pcie_resume;
+		goto err_err_deinit_port;

	/* Need this to enter L1 again */
	rockchip_pcie_update_txcredit_mui(rockchip);
···

	return 0;

+err_err_deinit_port:
+	rockchip_pcie_deinit_phys(rockchip);
 err_pcie_resume:
-	clk_disable_unprepare(rockchip->aclk_pcie);
-err_aclk_pcie:
-	clk_disable_unprepare(rockchip->aclk_perf_pcie);
-err_aclk_perf_pcie:
-	clk_disable_unprepare(rockchip->hclk_pcie);
-err_hclk_pcie:
-	clk_disable_unprepare(rockchip->clk_pcie_pm);
-err_pcie_pm:
+	rockchip_pcie_disable_clocks(rockchip);
+err_disable_0v9:
+	if (!IS_ERR(rockchip->vpcie0v9))
+		regulator_disable(rockchip->vpcie0v9);
	return err;
 }
···
	if (err)
		return err;

-	err = clk_prepare_enable(rockchip->aclk_pcie);
-	if (err) {
-		dev_err(dev, "unable to enable aclk_pcie clock\n");
-		goto err_aclk_pcie;
-	}
-
-	err = clk_prepare_enable(rockchip->aclk_perf_pcie);
-	if (err) {
-		dev_err(dev, "unable to enable aclk_perf_pcie clock\n");
-		goto err_aclk_perf_pcie;
-	}
-
-	err = clk_prepare_enable(rockchip->hclk_pcie);
-	if (err) {
-		dev_err(dev, "unable to enable hclk_pcie clock\n");
-		goto err_hclk_pcie;
-	}
-
-	err = clk_prepare_enable(rockchip->clk_pcie_pm);
-	if (err) {
-		dev_err(dev, "unable to enable hclk_pcie clock\n");
-		goto err_pcie_pm;
-	}
+	err = rockchip_pcie_enable_clocks(rockchip);
+	if (err)
+		return err;

	err = rockchip_pcie_set_vpcie(rockchip);
	if (err) {
···

	err = rockchip_pcie_init_irq_domain(rockchip);
	if (err < 0)
-		goto err_vpcie;
+		goto err_deinit_port;

	err = of_pci_get_host_bridge_resources(dev->of_node, 0, 0xff,
					       &res, &io_base);
	if (err)
-		goto err_vpcie;
+		goto err_remove_irq_domain;

	err = devm_request_pci_bus_resources(dev, &res);
	if (err)
···

	err = rockchip_pcie_cfg_atu(rockchip);
	if (err)
1608 - goto err_free_res; 1469 + goto err_unmap_iospace; 1609 1470 1610 1471 rockchip->msg_region = devm_ioremap(dev, rockchip->msg_bus_addr, SZ_1M); 1611 1472 if (!rockchip->msg_region) { 1612 1473 err = -ENOMEM; 1613 - goto err_free_res; 1474 + goto err_unmap_iospace; 1614 1475 } 1615 1476 1616 1477 list_splice_init(&res, &bridge->windows); ··· 1623 1484 1624 1485 err = pci_scan_root_bus_bridge(bridge); 1625 1486 if (err < 0) 1626 - goto err_free_res; 1487 + goto err_unmap_iospace; 1627 1488 1628 1489 bus = bridge->bus; 1629 1490 ··· 1637 1498 pci_bus_add_devices(bus); 1638 1499 return 0; 1639 1500 1501 + err_unmap_iospace: 1502 + pci_unmap_iospace(rockchip->io); 1640 1503 err_free_res: 1641 1504 pci_free_resource_list(&res); 1505 + err_remove_irq_domain: 1506 + irq_domain_remove(rockchip->irq_domain); 1507 + err_deinit_port: 1508 + rockchip_pcie_deinit_phys(rockchip); 1642 1509 err_vpcie: 1510 + if (!IS_ERR(rockchip->vpcie12v)) 1511 + regulator_disable(rockchip->vpcie12v); 1643 1512 if (!IS_ERR(rockchip->vpcie3v3)) 1644 1513 regulator_disable(rockchip->vpcie3v3); 1645 1514 if (!IS_ERR(rockchip->vpcie1v8)) ··· 1655 1508 if (!IS_ERR(rockchip->vpcie0v9)) 1656 1509 regulator_disable(rockchip->vpcie0v9); 1657 1510 err_set_vpcie: 1658 - clk_disable_unprepare(rockchip->clk_pcie_pm); 1659 - err_pcie_pm: 1660 - clk_disable_unprepare(rockchip->hclk_pcie); 1661 - err_hclk_pcie: 1662 - clk_disable_unprepare(rockchip->aclk_perf_pcie); 1663 - err_aclk_perf_pcie: 1664 - clk_disable_unprepare(rockchip->aclk_pcie); 1665 - err_aclk_pcie: 1511 + rockchip_pcie_disable_clocks(rockchip); 1666 1512 return err; 1667 1513 } 1668 1514 ··· 1669 1529 pci_unmap_iospace(rockchip->io); 1670 1530 irq_domain_remove(rockchip->irq_domain); 1671 1531 1672 - phy_power_off(rockchip->phy); 1673 - phy_exit(rockchip->phy); 1532 + rockchip_pcie_deinit_phys(rockchip); 1674 1533 1675 - clk_disable_unprepare(rockchip->clk_pcie_pm); 1676 - clk_disable_unprepare(rockchip->hclk_pcie); 1677 - 
clk_disable_unprepare(rockchip->aclk_perf_pcie); 1678 - clk_disable_unprepare(rockchip->aclk_pcie); 1534 + rockchip_pcie_disable_clocks(rockchip); 1679 1535 1536 + if (!IS_ERR(rockchip->vpcie12v)) 1537 + regulator_disable(rockchip->vpcie12v); 1680 1538 if (!IS_ERR(rockchip->vpcie3v3)) 1681 1539 regulator_disable(rockchip->vpcie3v3); 1682 1540 if (!IS_ERR(rockchip->vpcie1v8))
+5 -6
drivers/pci/host/pcie-xilinx-nwl.c
··· 133 133 #define CFG_DMA_REG_BAR GENMASK(2, 0) 134 134 135 135 #define INT_PCI_MSI_NR (2 * 32) 136 - #define INTX_NUM 4 137 136 138 137 /* Readin the PS_LINKUP */ 139 138 #define PS_LINKUP_OFFSET 0x00000238 ··· 333 334 334 335 while ((status = nwl_bridge_readl(pcie, MSGF_LEG_STATUS) & 335 336 MSGF_LEG_SR_MASKALL) != 0) { 336 - for_each_set_bit(bit, &status, INTX_NUM) { 337 - virq = irq_find_mapping(pcie->legacy_irq_domain, 338 - bit + 1); 337 + for_each_set_bit(bit, &status, PCI_NUM_INTX) { 338 + virq = irq_find_mapping(pcie->legacy_irq_domain, bit); 339 339 if (virq) 340 340 generic_handle_irq(virq); 341 341 } ··· 434 436 435 437 static const struct irq_domain_ops legacy_domain_ops = { 436 438 .map = nwl_legacy_map, 439 + .xlate = pci_irqd_intx_xlate, 437 440 }; 438 441 439 442 #ifdef CONFIG_PCI_MSI ··· 558 559 } 559 560 560 561 pcie->legacy_irq_domain = irq_domain_add_linear(legacy_intc_node, 561 - INTX_NUM, 562 + PCI_NUM_INTX, 562 563 &legacy_domain_ops, 563 564 pcie); 564 565 ··· 812 813 pcie->irq_intx = platform_get_irq_byname(pdev, "intx"); 813 814 if (pcie->irq_intx < 0) { 814 815 dev_err(dev, "failed to get intx IRQ %d\n", pcie->irq_intx); 815 - return -EINVAL; 816 + return pcie->irq_intx; 816 817 } 817 818 818 819 irq_set_chained_handler_and_data(pcie->irq_intx,
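The nwl change above sizes the INTx domain with PCI_NUM_INTX and wires in `pci_irqd_intx_xlate`, so status bits now map to 0-based hwirqs and the old `bit + 1` lookup disappears. A minimal user-space sketch of that off-by-one; the 1-based DT encoding of INTA..INTD and the 0-based linear hwirq space are the assumptions being illustrated, not driver API:

```c
#include <assert.h>

#define PCI_NUM_INTX 4

/*
 * Stand-in for the pci_irqd_intx_xlate idea: device tree interrupt specs
 * encode INTA..INTD as 1..4, while a linear IRQ domain of PCI_NUM_INTX
 * entries uses hwirqs 0..3, so the translation subtracts one exactly once.
 */
static int intx_xlate(unsigned int intspec)
{
	return (int)(intspec - 1); /* INTA (1) -> hwirq 0 ... INTD (4) -> hwirq 3 */
}

/* With 0-based hwirqs, a status-bit handler looks up `bit`, not `bit + 1`. */
static int hwirq_for_status_bit(unsigned int bit)
{
	return (int)bit; /* status bit 0 == INTA == hwirq 0 */
}
```

Doing the `- 1` once at mapping time means every other consumer (the chained handler's `for_each_set_bit()` loop included) can treat the four pins as plain 0..3.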
+24 -40
drivers/pci/host/pcie-xilinx.c
··· 5 5 * 6 6 * Based on the Tegra PCIe driver 7 7 * 8 - * Bits taken from Synopsys Designware Host controller driver and 8 + * Bits taken from Synopsys DesignWare Host controller driver and 9 9 * ARM PCI Host generic driver. 10 10 * 11 11 * This program is free software: you can redistribute it and/or modify ··· 60 60 #define XILINX_PCIE_INTR_MST_SLVERR BIT(27) 61 61 #define XILINX_PCIE_INTR_MST_ERRP BIT(28) 62 62 #define XILINX_PCIE_IMR_ALL_MASK 0x1FF30FED 63 + #define XILINX_PCIE_IMR_ENABLE_MASK 0x1FF30F0D 63 64 #define XILINX_PCIE_IDR_ALL_MASK 0xFFFFFFFF 64 65 65 66 /* Root Port Error FIFO Read Register definitions */ ··· 370 369 /* INTx IRQ Domain operations */ 371 370 static const struct irq_domain_ops intx_domain_ops = { 372 371 .map = xilinx_pcie_intx_map, 372 + .xlate = pci_irqd_intx_xlate, 373 373 }; 374 374 375 375 /* PCIe HW Functions */ ··· 386 384 { 387 385 struct xilinx_pcie_port *port = (struct xilinx_pcie_port *)data; 388 386 struct device *dev = port->dev; 389 - u32 val, mask, status, msi_data; 387 + u32 val, mask, status; 390 388 391 389 /* Read interrupt decode and mask registers */ 392 390 val = pcie_read(port, XILINX_PCIE_REG_IDR); ··· 426 424 xilinx_pcie_clear_err_interrupts(port); 427 425 } 428 426 429 - if (status & XILINX_PCIE_INTR_INTX) { 430 - /* INTx interrupt received */ 427 + if (status & (XILINX_PCIE_INTR_INTX | XILINX_PCIE_INTR_MSI)) { 431 428 val = pcie_read(port, XILINX_PCIE_REG_RPIFR1); 432 429 433 430 /* Check whether interrupt valid */ ··· 435 434 goto error; 436 435 } 437 436 438 - if (!(val & XILINX_PCIE_RPIFR1_MSI_INTR)) { 439 - /* Clear interrupt FIFO register 1 */ 440 - pcie_write(port, XILINX_PCIE_RPIFR1_ALL_MASK, 441 - XILINX_PCIE_REG_RPIFR1); 442 - 443 - /* Handle INTx Interrupt */ 444 - val = ((val & XILINX_PCIE_RPIFR1_INTR_MASK) >> 445 - XILINX_PCIE_RPIFR1_INTR_SHIFT) + 1; 446 - generic_handle_irq(irq_find_mapping(port->leg_domain, 447 - val)); 448 - } 449 - } 450 - 451 - if (status & XILINX_PCIE_INTR_MSI) { 452 - /* 
MSI Interrupt */ 453 - val = pcie_read(port, XILINX_PCIE_REG_RPIFR1); 454 - 455 - if (!(val & XILINX_PCIE_RPIFR1_INTR_VALID)) { 456 - dev_warn(dev, "RP Intr FIFO1 read error\n"); 457 - goto error; 458 - } 459 - 437 + /* Decode the IRQ number */ 460 438 if (val & XILINX_PCIE_RPIFR1_MSI_INTR) { 461 - msi_data = pcie_read(port, XILINX_PCIE_REG_RPIFR2) & 462 - XILINX_PCIE_RPIFR2_MSG_DATA; 463 - 464 - /* Clear interrupt FIFO register 1 */ 465 - pcie_write(port, XILINX_PCIE_RPIFR1_ALL_MASK, 466 - XILINX_PCIE_REG_RPIFR1); 467 - 468 - if (IS_ENABLED(CONFIG_PCI_MSI)) { 469 - /* Handle MSI Interrupt */ 470 - generic_handle_irq(msi_data); 471 - } 439 + val = pcie_read(port, XILINX_PCIE_REG_RPIFR2) & 440 + XILINX_PCIE_RPIFR2_MSG_DATA; 441 + } else { 442 + val = (val & XILINX_PCIE_RPIFR1_INTR_MASK) >> 443 + XILINX_PCIE_RPIFR1_INTR_SHIFT; 444 + val = irq_find_mapping(port->leg_domain, val); 472 445 } 446 + 447 + /* Clear interrupt FIFO register 1 */ 448 + pcie_write(port, XILINX_PCIE_RPIFR1_ALL_MASK, 449 + XILINX_PCIE_REG_RPIFR1); 450 + 451 + /* Handle the interrupt */ 452 + if (IS_ENABLED(CONFIG_PCI_MSI) || 453 + !(val & XILINX_PCIE_RPIFR1_MSI_INTR)) 454 + generic_handle_irq(val); 473 455 } 474 456 475 457 if (status & XILINX_PCIE_INTR_SLV_UNSUPP) ··· 508 524 return -ENODEV; 509 525 } 510 526 511 - port->leg_domain = irq_domain_add_linear(pcie_intc_node, 4, 527 + port->leg_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX, 512 528 &intx_domain_ops, 513 529 port); 514 530 if (!port->leg_domain) { ··· 555 571 XILINX_PCIE_IMR_ALL_MASK, 556 572 XILINX_PCIE_REG_IDR); 557 573 558 - /* Enable all interrupts */ 559 - pcie_write(port, XILINX_PCIE_IMR_ALL_MASK, XILINX_PCIE_REG_IMR); 574 + /* Enable all interrupts we handle */ 575 + pcie_write(port, XILINX_PCIE_IMR_ENABLE_MASK, XILINX_PCIE_REG_IMR); 560 576 561 577 /* Enable the Bridge enable bit */ 562 578 pcie_write(port, pcie_read(port, XILINX_PCIE_REG_RPSC) |
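The pcie-xilinx.c rework above folds the two near-identical INTx and MSI FIFO-read blocks into one decode path with a single FIFO clear. A stand-alone sketch of that unified decode; the bit positions are modeled on the RPIFR1 layout but are assumptions for this sketch, as is passing the second register's message data in as a parameter:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative RPIFR1-style layout (assumed values, not the hardware's). */
#define RPIFR1_INTR_VALID  (1u << 31)
#define RPIFR1_MSI_INTR    (1u << 30)
#define RPIFR1_INTR_MASK   (0x3u << 27)
#define RPIFR1_INTR_SHIFT  27

/*
 * One decode path for both interrupt sources, as in the merged handler:
 * an MSI entry carries its IRQ number in the message-data register
 * (simulated here by rpifr2_msg_data), while an INTx entry encodes the
 * pin in a 2-bit field of the first FIFO register.
 */
static uint32_t decode_irq(uint32_t rpifr1, uint32_t rpifr2_msg_data)
{
	if (rpifr1 & RPIFR1_MSI_INTR)
		return rpifr2_msg_data;
	return (rpifr1 & RPIFR1_INTR_MASK) >> RPIFR1_INTR_SHIFT;
}
```

Because both branches converge before the FIFO is cleared, the patch only needs one `pcie_write(..., RPIFR1)` instead of the duplicated clear-and-handle blocks of the old code.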
+17 -2
drivers/pci/host/vmd.c
··· 183 183 int i, best = 1; 184 184 unsigned long flags; 185 185 186 - if (!desc->msi_attrib.is_msix || vmd->msix_count == 1) 186 + if (pci_is_bridge(msi_desc_to_pci_dev(desc)) || vmd->msix_count == 1) 187 187 return &vmd->irqs[0]; 188 188 189 189 raw_spin_lock_irqsave(&list_lock, flags); ··· 697 697 return -ENODEV; 698 698 699 699 vmd->msix_count = pci_alloc_irq_vectors(dev, 1, vmd->msix_count, 700 - PCI_IRQ_MSIX | PCI_IRQ_AFFINITY); 700 + PCI_IRQ_MSIX); 701 701 if (vmd->msix_count < 0) 702 702 return vmd->msix_count; 703 703 ··· 755 755 static int vmd_suspend(struct device *dev) 756 756 { 757 757 struct pci_dev *pdev = to_pci_dev(dev); 758 + struct vmd_dev *vmd = pci_get_drvdata(pdev); 759 + int i; 760 + 761 + for (i = 0; i < vmd->msix_count; i++) 762 + devm_free_irq(dev, pci_irq_vector(pdev, i), &vmd->irqs[i]); 758 763 759 764 pci_save_state(pdev); 760 765 return 0; ··· 768 763 static int vmd_resume(struct device *dev) 769 764 { 770 765 struct pci_dev *pdev = to_pci_dev(dev); 766 + struct vmd_dev *vmd = pci_get_drvdata(pdev); 767 + int err, i; 768 + 769 + for (i = 0; i < vmd->msix_count; i++) { 770 + err = devm_request_irq(dev, pci_irq_vector(pdev, i), 771 + vmd_irq, IRQF_NO_THREAD, 772 + "vmd", &vmd->irqs[i]); 773 + if (err) 774 + return err; 775 + } 771 776 772 777 pci_restore_state(pdev); 773 778 return 0;
+1 -1
drivers/pci/hotplug/cpcihp_zt5550.c
··· 280 280 } 281 281 282 282 283 - static struct pci_device_id zt5550_hc_pci_tbl[] = { 283 + static const struct pci_device_id zt5550_hc_pci_tbl[] = { 284 284 { PCI_VENDOR_ID_ZIATECH, PCI_DEVICE_ID_ZIATECH_5550_HC, PCI_ANY_ID, PCI_ANY_ID, }, 285 285 { 0, } 286 286 };
+1 -1
drivers/pci/hotplug/cpqphp_core.c
··· 1417 1417 iounmap(smbios_start); 1418 1418 } 1419 1419 1420 - static struct pci_device_id hpcd_pci_tbl[] = { 1420 + static const struct pci_device_id hpcd_pci_tbl[] = { 1421 1421 { 1422 1422 /* handle any PCI Hotplug controller */ 1423 1423 .class = ((PCI_CLASS_SYSTEM_PCI_HOTPLUG << 8) | 0x00),
+1 -1
drivers/pci/hotplug/ibmphp_core.c
··· 852 852 u8 speed; 853 853 u8 cmd = 0x0; 854 854 int retval; 855 - static struct pci_device_id ciobx[] = { 855 + static const struct pci_device_id ciobx[] = { 856 856 { PCI_DEVICE(PCI_VENDOR_ID_SERVERWORKS, 0x0101) }, 857 857 { }, 858 858 };
+1 -1
drivers/pci/hotplug/ibmphp_ebda.c
··· 1153 1153 } 1154 1154 } 1155 1155 1156 - static struct pci_device_id id_table[] = { 1156 + static const struct pci_device_id id_table[] = { 1157 1157 { 1158 1158 .vendor = PCI_VENDOR_ID_IBM, 1159 1159 .device = HPC_DEVICE_ID,
+8
drivers/pci/hotplug/pciehp_hpc.c
··· 586 586 events = status & (PCI_EXP_SLTSTA_ABP | PCI_EXP_SLTSTA_PFD | 587 587 PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_CC | 588 588 PCI_EXP_SLTSTA_DLLSC); 589 + 590 + /* 591 + * If we've already reported a power fault, don't report it again 592 + * until we've done something to handle it. 593 + */ 594 + if (ctrl->power_fault_detected) 595 + events &= ~PCI_EXP_SLTSTA_PFD; 596 + 589 597 if (!events) 590 598 return IRQ_NONE; 591 599
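The pciehp hunk above keeps a stuck PCI_EXP_SLTSTA_PFD bit from re-raising the same event on every interrupt, which previously looped forever. A small sketch of the latch-and-mask idea; the status value is illustrative and the latch is set inline here for brevity (in the driver, `power_fault_detected` is set later, in the event handler):

```c
#include <assert.h>
#include <stdbool.h>

#define SLTSTA_PFD 0x0002u /* "power fault detected" bit; illustrative value */

struct ctrl {
	bool power_fault_detected;
};

/*
 * Report a power fault once, then suppress further PFD events until
 * something handles the fault and clears the latch; otherwise a
 * persistent fault keeps re-triggering the ISR indefinitely.
 */
static unsigned int filter_events(struct ctrl *ctrl, unsigned int events)
{
	if (ctrl->power_fault_detected)
		events &= ~SLTSTA_PFD;
	else if (events & SLTSTA_PFD)
		ctrl->power_fault_detected = true; /* latch on first report */
	return events;
}
```

Other event bits pass through untouched, so masking the repeat fault never hides an attention-button press or presence change that arrives in the same status read.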
+2 -2
drivers/pci/hotplug/pnv_php.c
··· 163 163 of_node_put(dn); 164 164 refcount = kref_read(&dn->kobj.kref); 165 165 if (refcount != 1) 166 - pr_warn("Invalid refcount %d on <%s>\n", 167 - refcount, of_node_full_name(dn)); 166 + pr_warn("Invalid refcount %d on <%pOF>\n", 167 + refcount, dn); 168 168 169 169 of_detach_node(dn); 170 170 }
+2 -2
drivers/pci/hotplug/rpadlpar_core.c
··· 150 150 /* Add EADS device to PHB bus, adding new entry to bus->devices */ 151 151 dev = of_create_pci_dev(dn, phb->bus, pdn->devfn); 152 152 if (!dev) { 153 - printk(KERN_ERR "%s: failed to create pci dev for %s\n", 154 - __func__, dn->full_name); 153 + printk(KERN_ERR "%s: failed to create pci dev for %pOF\n", 154 + __func__, dn); 155 155 return; 156 156 } 157 157
+1 -1
drivers/pci/hotplug/rpadlpar_sysfs.c
··· 102 102 NULL, 103 103 }; 104 104 105 - static struct attribute_group dlpar_attr_group = { 105 + static const struct attribute_group dlpar_attr_group = { 106 106 .attrs = default_attrs, 107 107 }; 108 108
+1 -1
drivers/pci/hotplug/rpaphp_core.c
··· 318 318 if (!is_php_dn(dn, &indexes, &names, &types, &power_domains)) 319 319 return 0; 320 320 321 - dbg("Entry %s: dn->full_name=%s\n", __func__, dn->full_name); 321 + dbg("Entry %s: dn=%pOF\n", __func__, dn); 322 322 323 323 /* register PCI devices */ 324 324 name = (char *) &names[1];
+2 -2
drivers/pci/hotplug/rpaphp_pci.c
··· 95 95 96 96 bus = pci_find_bus_by_node(slot->dn); 97 97 if (!bus) { 98 - err("%s: no pci_bus for dn %s\n", __func__, slot->dn->full_name); 98 + err("%s: no pci_bus for dn %pOF\n", __func__, slot->dn); 99 99 return -EINVAL; 100 100 } 101 101 ··· 125 125 126 126 if (rpaphp_debug) { 127 127 struct pci_dev *dev; 128 - dbg("%s: pci_devs of slot[%s]\n", __func__, slot->dn->full_name); 128 + dbg("%s: pci_devs of slot[%pOF]\n", __func__, slot->dn); 129 129 list_for_each_entry(dev, &bus->devices, bus_list) 130 130 dbg("\t%s\n", pci_name(dev)); 131 131 }
+2 -2
drivers/pci/hotplug/rpaphp_slot.c
··· 122 122 int retval; 123 123 int slotno = -1; 124 124 125 - dbg("%s registering slot:path[%s] index[%x], name[%s] pdomain[%x] type[%d]\n", 126 - __func__, slot->dn->full_name, slot->index, slot->name, 125 + dbg("%s registering slot:path[%pOF] index[%x], name[%s] pdomain[%x] type[%d]\n", 126 + __func__, slot->dn, slot->index, slot->name, 127 127 slot->power_domain, slot->type); 128 128 129 129 /* should not try to register the same slot twice */
+1 -1
drivers/pci/hotplug/shpchp_core.c
··· 351 351 kfree(ctrl); 352 352 } 353 353 354 - static struct pci_device_id shpcd_pci_tbl[] = { 354 + static const struct pci_device_id shpcd_pci_tbl[] = { 355 355 {PCI_DEVICE_CLASS(((PCI_CLASS_BRIDGE_PCI << 8) | 0x00), ~0)}, 356 356 { /* end: all zeroes */ } 357 357 };
+2
drivers/pci/hotplug/shpchp_hpc.c
··· 1062 1062 if (rc) { 1063 1063 ctrl_info(ctrl, "Can't get msi for the hotplug controller\n"); 1064 1064 ctrl_info(ctrl, "Use INTx for the hotplug controller\n"); 1065 + } else { 1066 + pci_set_master(pdev); 1065 1067 } 1066 1068 1067 1069 rc = request_irq(ctrl->pci_dev->irq, shpc_isr, IRQF_SHARED,
+4 -3
drivers/pci/iov.c
··· 331 331 while (i--) 332 332 pci_iov_remove_virtfn(dev, i, 0); 333 333 334 - pcibios_sriov_disable(dev); 335 334 err_pcibios: 336 335 iov->ctrl &= ~(PCI_SRIOV_CTRL_VFE | PCI_SRIOV_CTRL_MSE); 337 336 pci_cfg_access_lock(dev); 338 337 pci_write_config_word(dev, iov->pos + PCI_SRIOV_CTRL, iov->ctrl); 339 338 ssleep(1); 340 339 pci_cfg_access_unlock(dev); 340 + 341 + pcibios_sriov_disable(dev); 341 342 342 343 if (iov->link != dev->devfn) 343 344 sysfs_remove_link(&dev->dev.kobj, "dep_link"); ··· 358 357 for (i = 0; i < iov->num_VFs; i++) 359 358 pci_iov_remove_virtfn(dev, i, 0); 360 359 361 - pcibios_sriov_disable(dev); 362 - 363 360 iov->ctrl &= ~(PCI_SRIOV_CTRL_VFE | PCI_SRIOV_CTRL_MSE); 364 361 pci_cfg_access_lock(dev); 365 362 pci_write_config_word(dev, iov->pos + PCI_SRIOV_CTRL, iov->ctrl); 366 363 ssleep(1); 367 364 pci_cfg_access_unlock(dev); 365 + 366 + pcibios_sriov_disable(dev); 368 367 369 368 if (iov->link != dev->devfn) 370 369 sysfs_remove_link(&dev->dev.kobj, "dep_link");
+22 -5
drivers/pci/msi.c
··· 1451 1451 } 1452 1452 EXPORT_SYMBOL_GPL(pci_msi_create_irq_domain); 1453 1453 1454 + /* 1455 + * Users of the generic MSI infrastructure expect a device to have a single ID, 1456 + * so with DMA aliases we have to pick the least-worst compromise. Devices with 1457 + * DMA phantom functions tend to still emit MSIs from the real function number, 1458 + * so we ignore those and only consider topological aliases where either the 1459 + * alias device or RID appears on a different bus number. We also make the 1460 + * reasonable assumption that bridges are walked in an upstream direction (so 1461 + * the last one seen wins), and the much braver assumption that the most likely 1462 + * case is that of PCI->PCIe so we should always use the alias RID. This echoes 1463 + * the logic from intel_irq_remapping's set_msi_sid(), which presumably works 1464 + * well enough in practice; in the face of the horrible PCIe<->PCI-X conditions 1465 + * for taking ownership all we can really do is close our eyes and hope... 1466 + */ 1454 1467 static int get_msi_id_cb(struct pci_dev *pdev, u16 alias, void *data) 1455 1468 { 1456 1469 u32 *pa = data; 1470 + u8 bus = PCI_BUS_NUM(*pa); 1457 1471 1458 - *pa = alias; 1472 + if (pdev->bus->number != bus || PCI_BUS_NUM(alias) != bus) 1473 + *pa = alias; 1474 + 1459 1475 return 0; 1460 1476 } 1477 + 1461 1478 /** 1462 1479 * pci_msi_domain_get_msi_rid - Get the MSI requester id (RID) 1463 1480 * @domain: The interrupt domain ··· 1488 1471 u32 pci_msi_domain_get_msi_rid(struct irq_domain *domain, struct pci_dev *pdev) 1489 1472 { 1490 1473 struct device_node *of_node; 1491 - u32 rid = 0; 1474 + u32 rid = PCI_DEVID(pdev->bus->number, pdev->devfn); 1492 1475 1493 1476 pci_for_each_dma_alias(pdev, get_msi_id_cb, &rid); 1494 1477 ··· 1504 1487 * @pdev: The PCI device 1505 1488 * 1506 1489 * Use the firmware data to find a device-specific MSI domain 1507 - * (i.e. not one that is ste as a default). 1490 + * (i.e. not one that is set as a default). 
1508 1491 * 1509 - * Returns: The coresponding MSI domain or NULL if none has been found. 1492 + * Returns: The corresponding MSI domain or NULL if none has been found. 1510 1493 */ 1511 1494 struct irq_domain *pci_msi_get_device_domain(struct pci_dev *pdev) 1512 1495 { 1513 1496 struct irq_domain *dom; 1514 - u32 rid = 0; 1497 + u32 rid = PCI_DEVID(pdev->bus->number, pdev->devfn); 1515 1498 1516 1499 pci_for_each_dma_alias(pdev, get_msi_id_cb, &rid); 1517 1500 dom = of_msi_map_get_device_domain(&pdev->dev, rid);
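The msi.c change above seeds the walk with the device's real RID and lets only topological (cross-bus) aliases override it, so same-bus phantom functions are ignored. A self-contained sketch of that selection rule; `consider_alias()` plays the role of the `pci_for_each_dma_alias()` callback, with `walker_bus` standing in for `pdev->bus->number` at each step of the walk:

```c
#include <assert.h>
#include <stdint.h>

#define PCI_DEVID(bus, devfn) ((uint32_t)(((uint32_t)(bus) << 8) | (devfn)))
#define PCI_BUS_NUM(rid)      (((rid) >> 8) & 0xffu)

/*
 * Let an alias win only when it crosses a bus boundary relative to the
 * RID accumulated so far: devices with phantom functions still emit MSIs
 * from the real requester ID, so a same-bus alias must not override it,
 * while a bridge-generated alias on another bus should (last one wins,
 * walking upstream).
 */
static void consider_alias(uint32_t *rid, uint8_t walker_bus, uint32_t alias)
{
	uint8_t bus = (uint8_t)PCI_BUS_NUM(*rid);

	if (walker_bus != bus || PCI_BUS_NUM(alias) != bus)
		*rid = alias;
}
```

Starting `rid` from `PCI_DEVID(bus, devfn)` instead of 0 (the other half of the patch) is what makes "no topological alias found" fall back to the real RID rather than to a bogus zero.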
+2 -2
drivers/pci/pci-label.c
··· 123 123 NULL, 124 124 }; 125 125 126 - static struct attribute_group smbios_attr_group = { 126 + static const struct attribute_group smbios_attr_group = { 127 127 .attrs = smbios_attributes, 128 128 .is_visible = smbios_instance_string_exist, 129 129 }; ··· 260 260 NULL, 261 261 }; 262 262 263 - static struct attribute_group acpi_attr_group = { 263 + static const struct attribute_group acpi_attr_group = { 264 264 .attrs = acpi_attributes, 265 265 .is_visible = acpi_index_string_exist, 266 266 };
+9 -12
drivers/pci/pci-sysfs.c
··· 556 556 struct pci_dev *pdev = to_pci_dev(dev); 557 557 struct device_node *np = pci_device_to_OF_node(pdev); 558 558 559 - if (np == NULL || np->full_name == NULL) 559 + if (np == NULL) 560 560 return 0; 561 - return sprintf(buf, "%s", np->full_name); 561 + return sprintf(buf, "%pOF", np); 562 562 } 563 563 static DEVICE_ATTR_RO(devspec); 564 564 #endif ··· 1211 1211 { 1212 1212 struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj)); 1213 1213 int bar = (unsigned long)attr->private; 1214 - struct resource *res; 1215 1214 unsigned long port = off; 1216 - 1217 - res = &pdev->resource[bar]; 1218 1215 1219 1216 port += pci_resource_start(pdev, bar); 1220 1217 ··· 1428 1431 return count; 1429 1432 } 1430 1433 1431 - static struct bin_attribute pci_config_attr = { 1434 + static const struct bin_attribute pci_config_attr = { 1432 1435 .attr = { 1433 1436 .name = "config", 1434 1437 .mode = S_IRUGO | S_IWUSR, ··· 1438 1441 .write = pci_write_config, 1439 1442 }; 1440 1443 1441 - static struct bin_attribute pcie_config_attr = { 1444 + static const struct bin_attribute pcie_config_attr = { 1442 1445 .attr = { 1443 1446 .name = "config", 1444 1447 .mode = S_IRUGO | S_IWUSR, ··· 1732 1735 NULL, 1733 1736 }; 1734 1737 1735 - static struct attribute_group pci_dev_hp_attr_group = { 1738 + static const struct attribute_group pci_dev_hp_attr_group = { 1736 1739 .attrs = pci_dev_hp_attrs, 1737 1740 .is_visible = pci_dev_hp_attrs_are_visible, 1738 1741 }; ··· 1756 1759 return a->mode; 1757 1760 } 1758 1761 1759 - static struct attribute_group sriov_dev_attr_group = { 1762 + static const struct attribute_group sriov_dev_attr_group = { 1760 1763 .attrs = sriov_dev_attrs, 1761 1764 .is_visible = sriov_attrs_are_visible, 1762 1765 }; 1763 1766 #endif /* CONFIG_PCI_IOV */ 1764 1767 1765 - static struct attribute_group pci_dev_attr_group = { 1768 + static const struct attribute_group pci_dev_attr_group = { 1766 1769 .attrs = pci_dev_dev_attrs, 1767 1770 .is_visible = 
pci_dev_attrs_are_visible, 1768 1771 }; 1769 1772 1770 - static struct attribute_group pci_bridge_attr_group = { 1773 + static const struct attribute_group pci_bridge_attr_group = { 1771 1774 .attrs = pci_bridge_attrs, 1772 1775 .is_visible = pci_bridge_attrs_are_visible, 1773 1776 }; 1774 1777 1775 - static struct attribute_group pcie_dev_attr_group = { 1778 + static const struct attribute_group pcie_dev_attr_group = { 1776 1779 .attrs = pcie_dev_attrs, 1777 1780 .is_visible = pcie_dev_attrs_are_visible, 1778 1781 };
+53 -20
drivers/pci/pci.c
··· 52 52 static LIST_HEAD(pci_pme_list); 53 53 static DEFINE_MUTEX(pci_pme_list_mutex); 54 54 static DECLARE_DELAYED_WORK(pci_pme_work, pci_pme_list_scan); 55 + static DEFINE_MUTEX(pci_bridge_mutex); 55 56 56 57 struct pci_pme_device { 57 58 struct list_head list; ··· 893 892 * -EINVAL if the requested state is invalid. 894 893 * -EIO if device does not support PCI PM or its PM capabilities register has a 895 894 * wrong version, or device doesn't support the requested state. 895 + * 0 if the transition is to D1 or D2 but D1 and D2 are not supported. 896 896 * 0 if device already is in the requested state. 897 + * 0 if the transition is to D3 but D3 is not supported. 897 898 * 0 if device's power state has been successfully changed. 898 899 */ 899 900 int pci_set_power_state(struct pci_dev *dev, pci_power_t state) ··· 1351 1348 if (bridge) 1352 1349 pci_enable_bridge(bridge); 1353 1350 1351 + /* 1352 + * Hold pci_bridge_mutex to prevent a race when enabling two 1353 + * devices below the bridge simultaneously. The race may cause a 1354 + * PCI_COMMAND_MEMORY update to be lost (see changelog). 
1355 + */ 1356 + mutex_lock(&pci_bridge_mutex); 1354 1357 if (pci_is_enabled(dev)) { 1355 1358 if (!dev->is_busmaster) 1356 1359 pci_set_master(dev); 1357 - return; 1360 + goto end; 1358 1361 } 1359 1362 1360 1363 retval = pci_enable_device(dev); ··· 1368 1359 dev_err(&dev->dev, "Error enabling bridge (%d), continuing\n", 1369 1360 retval); 1370 1361 pci_set_master(dev); 1362 + end: 1363 + mutex_unlock(&pci_bridge_mutex); 1371 1364 } 1372 1365 1373 1366 static int pci_enable_device_flags(struct pci_dev *dev, unsigned long flags) ··· 1394 1383 return 0; /* already enabled */ 1395 1384 1396 1385 bridge = pci_upstream_bridge(dev); 1397 - if (bridge) 1386 + if (bridge && !pci_is_enabled(bridge)) 1398 1387 pci_enable_bridge(bridge); 1399 1388 1400 1389 /* only skip sriov related */ ··· 3829 3818 } 3830 3819 EXPORT_SYMBOL(pci_wait_for_pending_transaction); 3831 3820 3832 - /* 3833 - * We should only need to wait 100ms after FLR, but some devices take longer. 3834 - * Wait for up to 1000ms for config space to return something other than -1. 3835 - * Intel IGD requires this when an LCD panel is attached. We read the 2nd 3836 - * dword because VFs don't implement the 1st dword. 3837 - */ 3838 3821 static void pci_flr_wait(struct pci_dev *dev) 3839 3822 { 3840 - int i = 0; 3823 + int delay = 1, timeout = 60000; 3841 3824 u32 id; 3842 3825 3843 - do { 3844 - msleep(100); 3845 - pci_read_config_dword(dev, PCI_COMMAND, &id); 3846 - } while (i++ < 10 && id == ~0); 3826 + /* 3827 + * Per PCIe r3.1, sec 6.6.2, a device must complete an FLR within 3828 + * 100ms, but may silently discard requests while the FLR is in 3829 + * progress. Wait 100ms before trying to access the device. 
3830 + */ 3831 + msleep(100); 3847 3832 3848 - if (id == ~0) 3849 - dev_warn(&dev->dev, "Failed to return from FLR\n"); 3850 - else if (i > 1) 3851 - dev_info(&dev->dev, "Required additional %dms to return from FLR\n", 3852 - (i - 1) * 100); 3833 + /* 3834 + * After 100ms, the device should not silently discard config 3835 + * requests, but it may still indicate that it needs more time by 3836 + * responding to them with CRS completions. The Root Port will 3837 + * generally synthesize ~0 data to complete the read (except when 3838 + * CRS SV is enabled and the read was for the Vendor ID; in that 3839 + * case it synthesizes 0x0001 data). 3840 + * 3841 + * Wait for the device to return a non-CRS completion. Read the 3842 + * Command register instead of Vendor ID so we don't have to 3843 + * contend with the CRS SV value. 3844 + */ 3845 + pci_read_config_dword(dev, PCI_COMMAND, &id); 3846 + while (id == ~0) { 3847 + if (delay > timeout) { 3848 + dev_warn(&dev->dev, "not ready %dms after FLR; giving up\n", 3849 + 100 + delay - 1); 3850 + return; 3851 + } 3852 + 3853 + if (delay > 1000) 3854 + dev_info(&dev->dev, "not ready %dms after FLR; waiting\n", 3855 + 100 + delay - 1); 3856 + 3857 + msleep(delay); 3858 + delay *= 2; 3859 + pci_read_config_dword(dev, PCI_COMMAND, &id); 3860 + } 3861 + 3862 + if (delay > 1000) 3863 + dev_info(&dev->dev, "ready %dms after FLR\n", 100 + delay - 1); 3853 3864 } 3854 3865 3855 3866 /** ··· 5438 5405 use_dt_domains = 0; 5439 5406 domain = pci_get_new_domain_nr(); 5440 5407 } else { 5441 - dev_err(parent, "Node %s has inconsistent \"linux,pci-domain\" property in DT\n", 5442 - parent->of_node->full_name); 5408 + dev_err(parent, "Node %pOF has inconsistent \"linux,pci-domain\" property in DT\n", 5409 + parent->of_node); 5443 5410 domain = -1; 5444 5411 } 5445 5412
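The pci_flr_wait() rework above replaces ten fixed 100ms polls with one 100ms grace period followed by exponentially growing delays, stretching the budget to roughly a minute for devices that answer with CRS for a long time. A tiny arithmetic model of that timing (no config-space access; the 100ms/60s constants mirror the patch, the simulation itself is illustrative):

```c
#include <assert.h>

/*
 * Total ms the wait loop would sleep before a device that becomes ready
 * at `ready_at_ms`, or -1 if the retry budget is exhausted: one mandatory
 * 100ms sleep, then delays of 1, 2, 4, ... ms until a delay would exceed
 * the 60s cap.
 */
static int flr_wait_total(int ready_at_ms)
{
	int delay = 1, timeout = 60000;
	int waited = 100; /* mandatory post-FLR grace period */

	while (waited < ready_at_ms) {
		if (delay > timeout)
			return -1; /* give up, as the patch's dev_warn() path does */
		waited += delay;
		delay *= 2;
	}
	return waited;
}
```

The geometric growth keeps the common case cheap (a device ready at the 100ms mark costs exactly one sleep and one read) while bounding the number of config reads for a slow device to about sixteen, versus the six hundred a fixed 100ms poll would need to cover the same 60s window.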
+1
drivers/pci/pci.h
··· 235 235 pci_bar_mem64, /* A 64-bit memory BAR */ 236 236 }; 237 237 238 + int pci_configure_extended_tags(struct pci_dev *dev, void *ign); 238 239 bool pci_bus_read_dev_vendor_id(struct pci_bus *bus, int devfn, u32 *pl, 239 240 int crs_timeout); 240 241 int pci_setup_device(struct pci_dev *dev);
+1 -24
drivers/pci/pcie/aer/aerdrv.c
··· 32 32 33 33 static int aer_probe(struct pcie_device *dev); 34 34 static void aer_remove(struct pcie_device *dev); 35 - static pci_ers_result_t aer_error_detected(struct pci_dev *dev, 36 - enum pci_channel_state error); 37 35 static void aer_error_resume(struct pci_dev *dev); 38 36 static pci_ers_result_t aer_root_reset(struct pci_dev *dev); 39 - 40 - static const struct pci_error_handlers aer_error_handlers = { 41 - .error_detected = aer_error_detected, 42 - .resume = aer_error_resume, 43 - }; 44 37 45 38 static struct pcie_port_service_driver aerdriver = { 46 39 .name = "aer", ··· 42 49 43 50 .probe = aer_probe, 44 51 .remove = aer_remove, 45 - 46 - .err_handler = &aer_error_handlers, 47 - 52 + .error_resume = aer_error_resume, 48 53 .reset_link = aer_root_reset, 49 54 }; 50 55 ··· 338 347 pci_write_config_dword(dev, pos + PCI_ERR_ROOT_COMMAND, reg32); 339 348 340 349 return PCI_ERS_RESULT_RECOVERED; 341 - } 342 - 343 - /** 344 - * aer_error_detected - update severity status 345 - * @dev: pointer to Root Port's pci_dev data structure 346 - * @error: error severity being notified by port bus 347 - * 348 - * Invoked by Port Bus driver during error recovery. 349 - */ 350 - static pci_ers_result_t aer_error_detected(struct pci_dev *dev, 351 - enum pci_channel_state error) 352 - { 353 - /* Root Port has no impact. Always recovers. */ 354 - return PCI_ERS_RESULT_CAN_RECOVER; 355 350 } 356 351 357 352 /**
+2 -2
drivers/pci/pcie/aer/aerdrv_core.c
··· 5 5 * License. See the file "COPYING" in the main directory of this archive 6 6 * for more details. 7 7 * 8 - * This file implements the core part of PCI-Express AER. When an pci-express 8 + * This file implements the core part of PCIe AER. When a PCIe 9 9 * error is delivered, an error message will be collected and printed to 10 10 * console, then, an error recovery procedure will be executed by following 11 - * the pci error recovery rules. 11 + * the PCI error recovery rules. 12 12 * 13 13 * Copyright (C) 2006 Intel Corp. 14 14 * Tom Long Nguyen (tom.l.nguyen@intel.com)
+177 -10
drivers/pci/pcie/pcie-dpc.c
··· 16 16 #include <linux/pcieport_if.h> 17 17 #include "../pci.h" 18 18 19 + struct rp_pio_header_log_regs { 20 + u32 dw0; 21 + u32 dw1; 22 + u32 dw2; 23 + u32 dw3; 24 + }; 25 + 26 + struct dpc_rp_pio_regs { 27 + u32 status; 28 + u32 mask; 29 + u32 severity; 30 + u32 syserror; 31 + u32 exception; 32 + 33 + struct rp_pio_header_log_regs header_log; 34 + u32 impspec_log; 35 + u32 tlp_prefix_log[4]; 36 + u32 log_size; 37 + u16 first_error; 38 + }; 39 + 19 40 struct dpc_dev { 20 41 struct pcie_device *dev; 21 42 struct work_struct work; 22 43 int cap_pos; 23 44 bool rp; 45 + u32 rp_pio_status; 46 + }; 47 + 48 + static const char * const rp_pio_error_string[] = { 49 + "Configuration Request received UR Completion", /* Bit Position 0 */ 50 + "Configuration Request received CA Completion", /* Bit Position 1 */ 51 + "Configuration Request Completion Timeout", /* Bit Position 2 */ 52 + NULL, 53 + NULL, 54 + NULL, 55 + NULL, 56 + NULL, 57 + "I/O Request received UR Completion", /* Bit Position 8 */ 58 + "I/O Request received CA Completion", /* Bit Position 9 */ 59 + "I/O Request Completion Timeout", /* Bit Position 10 */ 60 + NULL, 61 + NULL, 62 + NULL, 63 + NULL, 64 + NULL, 65 + "Memory Request received UR Completion", /* Bit Position 16 */ 66 + "Memory Request received CA Completion", /* Bit Position 17 */ 67 + "Memory Request Completion Timeout", /* Bit Position 18 */ 24 68 }; 25 69 26 70 static int dpc_wait_rp_inactive(struct dpc_dev *dpc) 27 71 { 28 72 unsigned long timeout = jiffies + HZ; 29 73 struct pci_dev *pdev = dpc->dev->port; 74 + struct device *dev = &dpc->dev->device; 30 75 u16 status; 31 76 32 77 pci_read_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_STATUS, &status); ··· 81 36 pci_read_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_STATUS, &status); 82 37 } 83 38 if (status & PCI_EXP_DPC_RP_BUSY) { 84 - dev_warn(&pdev->dev, "DPC root port still busy\n"); 39 + dev_warn(dev, "DPC root port still busy\n"); 85 40 return -EBUSY; 86 41 } 87 42 return 0; 88 43 } 89 
44 90 - static void dpc_wait_link_inactive(struct pci_dev *pdev) 45 + static void dpc_wait_link_inactive(struct dpc_dev *dpc) 91 46 { 92 47 unsigned long timeout = jiffies + HZ; 48 + struct pci_dev *pdev = dpc->dev->port; 49 + struct device *dev = &dpc->dev->device; 93 50 u16 lnk_status; 94 51 95 52 pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status); ··· 101 54 pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status); 102 55 } 103 56 if (lnk_status & PCI_EXP_LNKSTA_DLLLA) 104 - dev_warn(&pdev->dev, "Link state not disabled for DPC event\n"); 57 + dev_warn(dev, "Link state not disabled for DPC event\n"); 105 58 } 106 59 107 60 static void interrupt_event_handler(struct work_struct *work) ··· 123 76 } 124 77 pci_unlock_rescan_remove(); 125 78 126 - dpc_wait_link_inactive(pdev); 79 + dpc_wait_link_inactive(dpc); 127 80 if (dpc->rp && dpc_wait_rp_inactive(dpc)) 128 81 return; 82 + if (dpc->rp && dpc->rp_pio_status) { 83 + pci_write_config_dword(pdev, 84 + dpc->cap_pos + PCI_EXP_DPC_RP_PIO_STATUS, 85 + dpc->rp_pio_status); 86 + dpc->rp_pio_status = 0; 87 + } 88 + 129 89 pci_write_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_STATUS, 130 90 PCI_EXP_DPC_STATUS_TRIGGER | PCI_EXP_DPC_STATUS_INTERRUPT); 91 + } 92 + 93 + static void dpc_rp_pio_print_tlp_header(struct device *dev, 94 + struct rp_pio_header_log_regs *t) 95 + { 96 + dev_err(dev, "TLP Header: %#010x %#010x %#010x %#010x\n", 97 + t->dw0, t->dw1, t->dw2, t->dw3); 98 + } 99 + 100 + static void dpc_rp_pio_print_error(struct dpc_dev *dpc, 101 + struct dpc_rp_pio_regs *rp_pio) 102 + { 103 + struct device *dev = &dpc->dev->device; 104 + int i; 105 + u32 status; 106 + 107 + dev_err(dev, "rp_pio_status: %#010x, rp_pio_mask: %#010x\n", 108 + rp_pio->status, rp_pio->mask); 109 + 110 + dev_err(dev, "RP PIO severity=%#010x, syserror=%#010x, exception=%#010x\n", 111 + rp_pio->severity, rp_pio->syserror, rp_pio->exception); 112 + 113 + status = (rp_pio->status & ~rp_pio->mask); 114 + 115 + for (i = 0; i < 
ARRAY_SIZE(rp_pio_error_string); i++) { 116 + if (!(status & (1 << i))) 117 + continue; 118 + 119 + dev_err(dev, "[%2d] %s%s\n", i, rp_pio_error_string[i], 120 + rp_pio->first_error == i ? " (First)" : ""); 121 + } 122 + 123 + dpc_rp_pio_print_tlp_header(dev, &rp_pio->header_log); 124 + if (rp_pio->log_size == 4) 125 + return; 126 + dev_err(dev, "RP PIO ImpSpec Log %#010x\n", rp_pio->impspec_log); 127 + 128 + for (i = 0; i < rp_pio->log_size - 5; i++) 129 + dev_err(dev, "TLP Prefix Header: dw%d, %#010x\n", i, 130 + rp_pio->tlp_prefix_log[i]); 131 + } 132 + 133 + static void dpc_rp_pio_get_info(struct dpc_dev *dpc, 134 + struct dpc_rp_pio_regs *rp_pio) 135 + { 136 + struct pci_dev *pdev = dpc->dev->port; 137 + struct device *dev = &dpc->dev->device; 138 + int i; 139 + u16 cap; 140 + u16 status; 141 + 142 + pci_read_config_dword(pdev, dpc->cap_pos + PCI_EXP_DPC_RP_PIO_STATUS, 143 + &rp_pio->status); 144 + pci_read_config_dword(pdev, dpc->cap_pos + PCI_EXP_DPC_RP_PIO_MASK, 145 + &rp_pio->mask); 146 + 147 + pci_read_config_dword(pdev, dpc->cap_pos + PCI_EXP_DPC_RP_PIO_SEVERITY, 148 + &rp_pio->severity); 149 + pci_read_config_dword(pdev, dpc->cap_pos + PCI_EXP_DPC_RP_PIO_SYSERROR, 150 + &rp_pio->syserror); 151 + pci_read_config_dword(pdev, dpc->cap_pos + PCI_EXP_DPC_RP_PIO_EXCEPTION, 152 + &rp_pio->exception); 153 + 154 + /* Get First Error Pointer */ 155 + pci_read_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_STATUS, &status); 156 + rp_pio->first_error = (status & 0x1f00) >> 8; 157 + 158 + pci_read_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CAP, &cap); 159 + rp_pio->log_size = (cap & PCI_EXP_DPC_RP_PIO_LOG_SIZE) >> 8; 160 + if (rp_pio->log_size < 4 || rp_pio->log_size > 9) { 161 + dev_err(dev, "RP PIO log size %u is invalid\n", 162 + rp_pio->log_size); 163 + return; 164 + } 165 + 166 + pci_read_config_dword(pdev, 167 + dpc->cap_pos + PCI_EXP_DPC_RP_PIO_HEADER_LOG, 168 + &rp_pio->header_log.dw0); 169 + pci_read_config_dword(pdev, 170 + dpc->cap_pos + 
PCI_EXP_DPC_RP_PIO_HEADER_LOG + 4, 171 + &rp_pio->header_log.dw1); 172 + pci_read_config_dword(pdev, 173 + dpc->cap_pos + PCI_EXP_DPC_RP_PIO_HEADER_LOG + 8, 174 + &rp_pio->header_log.dw2); 175 + pci_read_config_dword(pdev, 176 + dpc->cap_pos + PCI_EXP_DPC_RP_PIO_HEADER_LOG + 12, 177 + &rp_pio->header_log.dw3); 178 + if (rp_pio->log_size == 4) 179 + return; 180 + 181 + pci_read_config_dword(pdev, 182 + dpc->cap_pos + PCI_EXP_DPC_RP_PIO_IMPSPEC_LOG, 183 + &rp_pio->impspec_log); 184 + for (i = 0; i < rp_pio->log_size - 5; i++) 185 + pci_read_config_dword(pdev, 186 + dpc->cap_pos + PCI_EXP_DPC_RP_PIO_TLPPREFIX_LOG, 187 + &rp_pio->tlp_prefix_log[i]); 188 + } 189 + 190 + static void dpc_process_rp_pio_error(struct dpc_dev *dpc) 191 + { 192 + struct dpc_rp_pio_regs rp_pio_regs; 193 + 194 + dpc_rp_pio_get_info(dpc, &rp_pio_regs); 195 + dpc_rp_pio_print_error(dpc, &rp_pio_regs); 196 + 197 + dpc->rp_pio_status = rp_pio_regs.status; 131 198 } 132 199 133 200 static irqreturn_t dpc_irq(int irq, void *context) 134 201 { 135 202 struct dpc_dev *dpc = (struct dpc_dev *)context; 136 203 struct pci_dev *pdev = dpc->dev->port; 204 + struct device *dev = &dpc->dev->device; 137 205 u16 status, source; 138 206 139 207 pci_read_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_STATUS, &status); ··· 257 95 if (!status || status == (u16)(~0)) 258 96 return IRQ_NONE; 259 97 260 - dev_info(&dpc->dev->device, "DPC containment event, status:%#06x source:%#06x\n", 98 + dev_info(dev, "DPC containment event, status:%#06x source:%#06x\n", 261 99 status, source); 262 100 263 101 if (status & PCI_EXP_DPC_STATUS_TRIGGER) { 264 102 u16 reason = (status >> 1) & 0x3; 265 103 u16 ext_reason = (status >> 5) & 0x3; 266 104 267 - dev_warn(&dpc->dev->device, "DPC %s detected, remove downstream devices\n", 105 + dev_warn(dev, "DPC %s detected, remove downstream devices\n", 268 106 (reason == 0) ? "unmasked uncorrectable error" : 269 107 (reason == 1) ? "ERR_NONFATAL" : 270 108 (reason == 2) ? 
"ERR_FATAL" : 271 109 (ext_reason == 0) ? "RP PIO error" : 272 110 (ext_reason == 1) ? "software trigger" : 273 111 "reserved error"); 112 + /* show RP PIO error detail information */ 113 + if (reason == 3 && ext_reason == 0) 114 + dpc_process_rp_pio_error(dpc); 115 + 274 116 schedule_work(&dpc->work); 275 117 } 276 118 return IRQ_HANDLED; ··· 285 119 { 286 120 struct dpc_dev *dpc; 287 121 struct pci_dev *pdev = dev->port; 122 + struct device *device = &dev->device; 288 123 int status; 289 124 u16 ctl, cap; 290 125 291 - dpc = devm_kzalloc(&dev->device, sizeof(*dpc), GFP_KERNEL); 126 + dpc = devm_kzalloc(device, sizeof(*dpc), GFP_KERNEL); 292 127 if (!dpc) 293 128 return -ENOMEM; 294 129 ··· 298 131 INIT_WORK(&dpc->work, interrupt_event_handler); 299 132 set_service_data(dev, dpc); 300 133 301 - status = devm_request_irq(&dev->device, dev->irq, dpc_irq, IRQF_SHARED, 134 + status = devm_request_irq(device, dev->irq, dpc_irq, IRQF_SHARED, 302 135 "pcie-dpc", dpc); 303 136 if (status) { 304 - dev_warn(&dev->device, "request IRQ%d failed: %d\n", dev->irq, 137 + dev_warn(device, "request IRQ%d failed: %d\n", dev->irq, 305 138 status); 306 139 return status; 307 140 } ··· 314 147 ctl = (ctl & 0xfff4) | PCI_EXP_DPC_CTL_EN_NONFATAL | PCI_EXP_DPC_CTL_INT_EN; 315 148 pci_write_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CTL, ctl); 316 149 317 - dev_info(&dev->device, "DPC error containment capabilities: Int Msg #%d, RPExt%c PoisonedTLP%c SwTrigger%c RP PIO Log %d, DL_ActiveErr%c\n", 150 + dev_info(device, "DPC error containment capabilities: Int Msg #%d, RPExt%c PoisonedTLP%c SwTrigger%c RP PIO Log %d, DL_ActiveErr%c\n", 318 151 cap & 0xf, FLAG(cap, PCI_EXP_DPC_CAP_RP_EXT), 319 152 FLAG(cap, PCI_EXP_DPC_CAP_POISONED_TLP), 320 153 FLAG(cap, PCI_EXP_DPC_CAP_SW_TRIGGER), (cap >> 8) & 0xf,
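The RP PIO decode added to pcie-dpc.c above masks the status register against the mask register, walks the set bits through a sparse string table (reserved bit positions stay NULL), and flags the entry named by the First Error Pointer. A minimal standalone sketch of that pattern, in plain userspace C with hypothetical names rather than the kernel helpers:

```c
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

/* Sparse table indexed by bit position, mirroring rp_pio_error_string;
 * reserved bit positions are left NULL by the designated initializers. */
static const char *const rp_pio_error[] = {
    [0]  = "Configuration Request received UR Completion",
    [1]  = "Configuration Request received CA Completion",
    [2]  = "Configuration Request Completion Timeout",
    [8]  = "I/O Request received UR Completion",
    [9]  = "I/O Request received CA Completion",
    [10] = "I/O Request Completion Timeout",
    [16] = "Memory Request received UR Completion",
    [17] = "Memory Request received CA Completion",
    [18] = "Memory Request Completion Timeout",
};

/* Print every unmasked, implemented error bit that is set, flagging the
 * one named by the First Error Pointer; returns how many were reported. */
static int decode_rp_pio(uint32_t status, uint32_t mask, unsigned first_err)
{
    uint32_t active = status & ~mask;
    int reported = 0;

    for (size_t i = 0; i < sizeof(rp_pio_error) / sizeof(rp_pio_error[0]); i++) {
        if (!(active & (1u << i)) || !rp_pio_error[i])
            continue;
        printf("[%2zu] %s%s\n", i, rp_pio_error[i],
               first_err == i ? " (First)" : "");
        reported++;
    }
    return reported;
}
```

The NULL entries matter: a reserved bit that happens to be set must be skipped rather than dereferenced, which is why the kernel loop checks the table entry as well as the bit.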
+6 -99
drivers/pci/pcie/portdrv_pci.c
··· 21 21 22 22 #include "../pci.h" 23 23 #include "portdrv.h" 24 - #include "aer/aerdrv.h" 25 24 26 25 /* If this switch is set, PCIe port native services should not be enabled. */ 27 26 bool pcie_ports_disabled; ··· 176 177 pcie_port_device_remove(dev); 177 178 } 178 179 179 - static int error_detected_iter(struct device *device, void *data) 180 - { 181 - struct pcie_device *pcie_device; 182 - struct pcie_port_service_driver *driver; 183 - struct aer_broadcast_data *result_data; 184 - pci_ers_result_t status; 185 - 186 - result_data = (struct aer_broadcast_data *) data; 187 - 188 - if (device->bus == &pcie_port_bus_type && device->driver) { 189 - driver = to_service_driver(device->driver); 190 - if (!driver || 191 - !driver->err_handler || 192 - !driver->err_handler->error_detected) 193 - return 0; 194 - 195 - pcie_device = to_pcie_device(device); 196 - 197 - /* Forward error detected message to service drivers */ 198 - status = driver->err_handler->error_detected( 199 - pcie_device->port, 200 - result_data->state); 201 - result_data->result = 202 - merge_result(result_data->result, status); 203 - } 204 - 205 - return 0; 206 - } 207 - 208 180 static pci_ers_result_t pcie_portdrv_error_detected(struct pci_dev *dev, 209 181 enum pci_channel_state error) 210 182 { 211 - struct aer_broadcast_data data = {error, PCI_ERS_RESULT_CAN_RECOVER}; 212 - 213 - /* get true return value from &data */ 214 - device_for_each_child(&dev->dev, &data, error_detected_iter); 215 - return data.result; 216 - } 217 - 218 - static int mmio_enabled_iter(struct device *device, void *data) 219 - { 220 - struct pcie_device *pcie_device; 221 - struct pcie_port_service_driver *driver; 222 - pci_ers_result_t status, *result; 223 - 224 - result = (pci_ers_result_t *) data; 225 - 226 - if (device->bus == &pcie_port_bus_type && device->driver) { 227 - driver = to_service_driver(device->driver); 228 - if (driver && 229 - driver->err_handler && 230 - driver->err_handler->mmio_enabled) { 231 - 
pcie_device = to_pcie_device(device); 232 - 233 - /* Forward error message to service drivers */ 234 - status = driver->err_handler->mmio_enabled( 235 - pcie_device->port); 236 - *result = merge_result(*result, status); 237 - } 238 - } 239 - 240 - return 0; 183 + /* Root Port has no impact. Always recovers. */ 184 + return PCI_ERS_RESULT_CAN_RECOVER; 241 185 } 242 186 243 187 static pci_ers_result_t pcie_portdrv_mmio_enabled(struct pci_dev *dev) 244 188 { 245 - pci_ers_result_t status = PCI_ERS_RESULT_RECOVERED; 246 - 247 - /* get true return value from &status */ 248 - device_for_each_child(&dev->dev, &status, mmio_enabled_iter); 249 - return status; 250 - } 251 - 252 - static int slot_reset_iter(struct device *device, void *data) 253 - { 254 - struct pcie_device *pcie_device; 255 - struct pcie_port_service_driver *driver; 256 - pci_ers_result_t status, *result; 257 - 258 - result = (pci_ers_result_t *) data; 259 - 260 - if (device->bus == &pcie_port_bus_type && device->driver) { 261 - driver = to_service_driver(device->driver); 262 - if (driver && 263 - driver->err_handler && 264 - driver->err_handler->slot_reset) { 265 - pcie_device = to_pcie_device(device); 266 - 267 - /* Forward error message to service drivers */ 268 - status = driver->err_handler->slot_reset( 269 - pcie_device->port); 270 - *result = merge_result(*result, status); 271 - } 272 - } 273 - 274 - return 0; 189 + return PCI_ERS_RESULT_RECOVERED; 275 190 } 276 191 277 192 static pci_ers_result_t pcie_portdrv_slot_reset(struct pci_dev *dev) 278 193 { 279 - pci_ers_result_t status = PCI_ERS_RESULT_RECOVERED; 280 - 281 194 /* If fatal, restore cfg space for possible link reset at upstream */ 282 195 if (dev->error_state == pci_channel_io_frozen) { 283 196 dev->state_saved = true; ··· 198 287 pci_enable_pcie_error_reporting(dev); 199 288 } 200 289 201 - /* get true return value from &status */ 202 - device_for_each_child(&dev->dev, &status, slot_reset_iter); 203 - return status; 290 + return 
PCI_ERS_RESULT_RECOVERED; 204 291 } 205 292 206 293 static int resume_iter(struct device *device, void *data) ··· 208 299 209 300 if (device->bus == &pcie_port_bus_type && device->driver) { 210 301 driver = to_service_driver(device->driver); 211 - if (driver && 212 - driver->err_handler && 213 - driver->err_handler->resume) { 302 + if (driver && driver->error_resume) { 214 303 pcie_device = to_pcie_device(device); 215 304 216 305 /* Forward error message to service drivers */ 217 - driver->err_handler->resume(pcie_device->port); 306 + driver->error_resume(pcie_device->port); 218 307 } 219 308 } 220 309
+96 -31
drivers/pci/probe.c
··· 1745 1745 */ 1746 1746 } 1747 1747 1748 - static void pci_configure_extended_tags(struct pci_dev *dev) 1748 + int pci_configure_extended_tags(struct pci_dev *dev, void *ign) 1749 1749 { 1750 - u32 dev_cap; 1750 + struct pci_host_bridge *host; 1751 + u32 cap; 1752 + u16 ctl; 1751 1753 int ret; 1752 1754 1753 1755 if (!pci_is_pcie(dev)) 1754 - return; 1756 + return 0; 1755 1757 1756 - ret = pcie_capability_read_dword(dev, PCI_EXP_DEVCAP, &dev_cap); 1758 + ret = pcie_capability_read_dword(dev, PCI_EXP_DEVCAP, &cap); 1757 1759 if (ret) 1758 - return; 1760 + return 0; 1759 1761 1760 - if (dev_cap & PCI_EXP_DEVCAP_EXT_TAG) 1762 + if (!(cap & PCI_EXP_DEVCAP_EXT_TAG)) 1763 + return 0; 1764 + 1765 + ret = pcie_capability_read_word(dev, PCI_EXP_DEVCTL, &ctl); 1766 + if (ret) 1767 + return 0; 1768 + 1769 + host = pci_find_host_bridge(dev->bus); 1770 + if (!host) 1771 + return 0; 1772 + 1773 + /* 1774 + * If some device in the hierarchy doesn't handle Extended Tags 1775 + * correctly, make sure they're disabled. 
1776 + */ 1777 + if (host->no_ext_tags) { 1778 + if (ctl & PCI_EXP_DEVCTL_EXT_TAG) { 1779 + dev_info(&dev->dev, "disabling Extended Tags\n"); 1780 + pcie_capability_clear_word(dev, PCI_EXP_DEVCTL, 1781 + PCI_EXP_DEVCTL_EXT_TAG); 1782 + } 1783 + return 0; 1784 + } 1785 + 1786 + if (!(ctl & PCI_EXP_DEVCTL_EXT_TAG)) { 1787 + dev_info(&dev->dev, "enabling Extended Tags\n"); 1761 1788 pcie_capability_set_word(dev, PCI_EXP_DEVCTL, 1762 1789 PCI_EXP_DEVCTL_EXT_TAG); 1790 + } 1791 + return 0; 1763 1792 } 1764 1793 1765 1794 /** ··· 1839 1810 int ret; 1840 1811 1841 1812 pci_configure_mps(dev); 1842 - pci_configure_extended_tags(dev); 1813 + pci_configure_extended_tags(dev, NULL); 1843 1814 pci_configure_relaxed_ordering(dev); 1844 1815 1845 1816 memset(&hpp, 0, sizeof(hpp)); ··· 1896 1867 } 1897 1868 EXPORT_SYMBOL(pci_alloc_dev); 1898 1869 1899 - bool pci_bus_read_dev_vendor_id(struct pci_bus *bus, int devfn, u32 *l, 1900 - int crs_timeout) 1870 + static bool pci_bus_crs_vendor_id(u32 l) 1871 + { 1872 + return (l & 0xffff) == 0x0001; 1873 + } 1874 + 1875 + static bool pci_bus_wait_crs(struct pci_bus *bus, int devfn, u32 *l, 1876 + int timeout) 1901 1877 { 1902 1878 int delay = 1; 1903 1879 1880 + if (!pci_bus_crs_vendor_id(*l)) 1881 + return true; /* not a CRS completion */ 1882 + 1883 + if (!timeout) 1884 + return false; /* CRS, but caller doesn't want to wait */ 1885 + 1886 + /* 1887 + * We got the reserved Vendor ID that indicates a completion with 1888 + * Configuration Request Retry Status (CRS). Retry until we get a 1889 + * valid Vendor ID or we time out. 
1890 + */ 1891 + while (pci_bus_crs_vendor_id(*l)) { 1892 + if (delay > timeout) { 1893 + pr_warn("pci %04x:%02x:%02x.%d: not ready after %dms; giving up\n", 1894 + pci_domain_nr(bus), bus->number, 1895 + PCI_SLOT(devfn), PCI_FUNC(devfn), delay - 1); 1896 + 1897 + return false; 1898 + } 1899 + if (delay >= 1000) 1900 + pr_info("pci %04x:%02x:%02x.%d: not ready after %dms; waiting\n", 1901 + pci_domain_nr(bus), bus->number, 1902 + PCI_SLOT(devfn), PCI_FUNC(devfn), delay - 1); 1903 + 1904 + msleep(delay); 1905 + delay *= 2; 1906 + 1907 + if (pci_bus_read_config_dword(bus, devfn, PCI_VENDOR_ID, l)) 1908 + return false; 1909 + } 1910 + 1911 + if (delay >= 1000) 1912 + pr_info("pci %04x:%02x:%02x.%d: ready after %dms\n", 1913 + pci_domain_nr(bus), bus->number, 1914 + PCI_SLOT(devfn), PCI_FUNC(devfn), delay - 1); 1915 + 1916 + return true; 1917 + } 1918 + 1919 + bool pci_bus_read_dev_vendor_id(struct pci_bus *bus, int devfn, u32 *l, 1920 + int timeout) 1921 + { 1904 1922 if (pci_bus_read_config_dword(bus, devfn, PCI_VENDOR_ID, l)) 1905 1923 return false; 1906 1924 ··· 1956 1880 *l == 0x0000ffff || *l == 0xffff0000) 1957 1881 return false; 1958 1882 1959 - /* 1960 - * Configuration Request Retry Status. Some root ports return the 1961 - * actual device ID instead of the synthetic ID (0xFFFF) required 1962 - * by the PCIe spec. Ignore the device ID and only check for 1963 - * (vendor id == 1). 1964 - */ 1965 - while ((*l & 0xffff) == 0x0001) { 1966 - if (!crs_timeout) 1967 - return false; 1968 - 1969 - msleep(delay); 1970 - delay *= 2; 1971 - if (pci_bus_read_config_dword(bus, devfn, PCI_VENDOR_ID, l)) 1972 - return false; 1973 - /* Card hasn't responded in 60 seconds? Must be stuck. 
*/ 1974 - if (delay > crs_timeout) { 1975 - printk(KERN_WARNING "pci %04x:%02x:%02x.%d: not responding\n", 1976 - pci_domain_nr(bus), bus->number, PCI_SLOT(devfn), 1977 - PCI_FUNC(devfn)); 1978 - return false; 1979 - } 1980 - } 1883 + if (pci_bus_crs_vendor_id(*l)) 1884 + return pci_bus_wait_crs(bus, devfn, l, timeout); 1981 1885 1982 1886 return true; 1983 1887 } ··· 2386 2330 pci_walk_bus(bus, pcie_bus_configure_set, &smpss); 2387 2331 } 2388 2332 EXPORT_SYMBOL_GPL(pcie_bus_configure_settings); 2333 + 2334 + /* 2335 + * Called after each bus is probed, but before its children are examined. This 2336 + * is marked as __weak because multiple architectures define it. 2337 + */ 2338 + void __weak pcibios_fixup_bus(struct pci_bus *bus) 2339 + { 2340 + /* nothing to do, expected to be removed in the future */ 2341 + } 2389 2342 2390 2343 unsigned int pci_scan_child_bus(struct pci_bus *bus) 2391 2344 {
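The probe.c rework above factors the CRS handling into `pci_bus_crs_vendor_id()` plus `pci_bus_wait_crs()`: retry with exponentially growing delays while the reserved Vendor ID 0x0001 comes back, and give up once the delay exceeds the timeout. A sketch of that loop (userspace C; `read_id` stands in for the config-space read, and the demo IDs are made up):

```c
#include <stdbool.h>
#include <stdint.h>

/* A CRS completion is signalled by the reserved Vendor ID 0x0001; the
 * device ID bits are ignored because some Root Ports return real ones. */
static bool crs_vendor_id(uint32_t l)
{
    return (l & 0xffff) == 0x0001;
}

/* Double the delay each attempt; give up once it exceeds the timeout. */
static bool wait_crs(uint32_t (*read_id)(void), uint32_t *l, int timeout_ms)
{
    int delay = 1;

    if (!crs_vendor_id(*l))
        return true;            /* not a CRS completion */
    if (!timeout_ms)
        return false;           /* CRS, but caller doesn't want to wait */

    while (crs_vendor_id(*l)) {
        if (delay > timeout_ms)
            return false;       /* not ready; giving up */
        /* msleep(delay) would go here in the kernel */
        delay *= 2;
        *l = read_id();
    }
    return true;
}

/* Demo stub: CRS on the first read, then a valid (hypothetical) ID. */
static int demo_reads;
static uint32_t demo_read_id(void)
{
    return ++demo_reads < 2 ? 0x00000001u : 0x12348086u;
}

/* Start from a CRS Vendor ID and wait; returns the final ID, 0 on failure. */
static uint32_t demo_wait(void)
{
    uint32_t id = 0x00000001u;

    demo_reads = 0;
    return wait_crs(demo_read_id, &id, 60000) ? id : 0;
}
```

Doubling from 1 ms gives roughly sixteen reads before a 60-second timeout, which is why the refactored kernel code can afford per-attempt "not ready" messages only once the delay passes a second.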
+46 -18
drivers/pci/quirks.c
··· 2062 2062 2063 2063 /* 2064 2064 * The 82575 and 82598 may experience data corruption issues when transitioning 2065 - * out of L0S. To prevent this we need to disable L0S on the pci-e link 2065 + * out of L0S. To prevent this we need to disable L0S on the PCIe link. 2066 2066 */ 2067 2067 static void quirk_disable_aspm_l0s(struct pci_dev *dev) 2068 2068 { ··· 4227 4227 return acs_flags ? 0 : 1; 4228 4228 } 4229 4229 4230 + static int pci_quirk_xgene_acs(struct pci_dev *dev, u16 acs_flags) 4231 + { 4232 + /* 4233 + * X-Gene root matching this quirk do not allow peer-to-peer 4234 + * transactions with others, allowing masking out these bits as if they 4235 + * were unimplemented in the ACS capability. 4236 + */ 4237 + acs_flags &= ~(PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF); 4238 + 4239 + return acs_flags ? 0 : 1; 4240 + } 4241 + 4230 4242 /* 4231 4243 * Many Intel PCH root ports do provide ACS-like features to disable peer 4232 4244 * transactions and validate bus numbers in requests, but do not provide an ··· 4487 4475 { 0x10df, 0x720, pci_quirk_mf_endpoint_acs }, /* Emulex Skyhawk-R */ 4488 4476 /* Cavium ThunderX */ 4489 4477 { PCI_VENDOR_ID_CAVIUM, PCI_ANY_ID, pci_quirk_cavium_acs }, 4478 + /* APM X-Gene */ 4479 + { PCI_VENDOR_ID_AMCC, 0xE004, pci_quirk_xgene_acs }, 4490 4480 { 0 } 4491 4481 }; 4492 4482 ··· 4761 4747 } 4762 4748 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x443, quirk_intel_qat_vf_cap); 4763 4749 4764 - /* 4765 - * VMD-enabled root ports will change the source ID for all messages 4766 - * to the VMD device. Rather than doing device matching with the source 4767 - * ID, the AER driver should traverse the child device tree, reading 4768 - * AER registers to find the faulting device. 
4769 - */ 4770 - static void quirk_no_aersid(struct pci_dev *pdev) 4771 - { 4772 - /* VMD Domain */ 4773 - if (pdev->bus->sysdata && pci_domain_nr(pdev->bus) >= 0x10000) 4774 - pdev->bus->bus_flags |= PCI_BUS_FLAGS_NO_AERSID; 4775 - } 4776 - DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2030, quirk_no_aersid); 4777 - DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2031, quirk_no_aersid); 4778 - DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2032, quirk_no_aersid); 4779 - DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2033, quirk_no_aersid); 4780 - 4781 4750 /* FLR may cause some 82579 devices to hang. */ 4782 4751 static void quirk_intel_no_flr(struct pci_dev *dev) 4783 4752 { ··· 4768 4771 } 4769 4772 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1502, quirk_intel_no_flr); 4770 4773 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1503, quirk_intel_no_flr); 4774 + 4775 + static void quirk_no_ext_tags(struct pci_dev *pdev) 4776 + { 4777 + struct pci_host_bridge *bridge = pci_find_host_bridge(pdev->bus); 4778 + 4779 + if (!bridge) 4780 + return; 4781 + 4782 + bridge->no_ext_tags = 1; 4783 + dev_info(&pdev->dev, "disabling Extended Tags (this device can't handle them)\n"); 4784 + 4785 + pci_walk_bus(bridge->bus, pci_configure_extended_tags, NULL); 4786 + } 4787 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0140, quirk_no_ext_tags); 4788 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0142, quirk_no_ext_tags); 4789 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0144, quirk_no_ext_tags); 4790 + 4791 + #ifdef CONFIG_PCI_ATS 4792 + /* 4793 + * Some devices have a broken ATS implementation causing IOMMU stalls. 4794 + * Don't use ATS for those devices. 
4795 + */ 4796 + static void quirk_no_ats(struct pci_dev *pdev) 4797 + { 4798 + dev_info(&pdev->dev, "disabling ATS (broken on this device)\n"); 4799 + pdev->ats_cap = 0; 4800 + } 4801 + 4802 + /* AMD Stoney platform GPU */ 4803 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x98e4, quirk_no_ats); 4804 + #endif /* CONFIG_PCI_ATS */
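The ACS quirks above all share one shape: clear the requested capability bits that the hardware's isolation behavior effectively satisfies, then report ACS as present only if nothing requested remains uncovered. A sketch of the X-Gene variant (userspace C; the flag values are the standard PCI ACS capability bits):

```c
#include <stdint.h>

/* ACS capability flags, per the PCI spec: Source Validation, Request
 * Redirect, Completion Redirect, Upstream Forwarding. */
#define ACS_SV 0x01
#define ACS_RR 0x04
#define ACS_CR 0x08
#define ACS_UF 0x10

/* Sketch of pci_quirk_xgene_acs(): the root port does not allow
 * peer-to-peer transactions, so SV/RR/CR/UF can be masked out as if
 * implemented; return 1 (ACS satisfied) only if no requested flag
 * remains unhandled. */
static int xgene_acs_enabled(uint16_t acs_flags)
{
    acs_flags &= (uint16_t)~(ACS_SV | ACS_RR | ACS_CR | ACS_UF);
    return acs_flags ? 0 : 1;
}
```

This is what lets the IOMMU layer place each device behind such a port in its own isolation group, which is the prerequisite for assigning it to a guest.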
+1 -31
drivers/pci/setup-irq.c
··· 17 17 #include <linux/cache.h> 18 18 #include "pci.h" 19 19 20 - void __weak pcibios_update_irq(struct pci_dev *dev, int irq) 21 - { 22 - dev_dbg(&dev->dev, "assigning IRQ %02d\n", irq); 23 - pci_write_config_byte(dev, PCI_INTERRUPT_LINE, irq); 24 - } 25 - 26 20 void pci_assign_irq(struct pci_dev *dev) 27 21 { 28 22 u8 pin; ··· 59 65 60 66 /* Always tell the device, so the driver knows what is 61 67 the real IRQ to use; the device does not use it. */ 62 - pcibios_update_irq(dev, irq); 68 + pci_write_config_byte(dev, PCI_INTERRUPT_LINE, irq); 63 69 } 64 - 65 - void pci_fixup_irqs(u8 (*swizzle)(struct pci_dev *, u8 *), 66 - int (*map_irq)(const struct pci_dev *, u8, u8)) 67 - { 68 - /* 69 - * Implement pci_fixup_irqs() through pci_assign_irq(). 70 - * This code should be remove eventually, it is a wrapper 71 - * around pci_assign_irq() interface to keep current 72 - * pci_fixup_irqs() behaviour unchanged on architecture 73 - * code still relying on its interface. 74 - */ 75 - struct pci_dev *dev = NULL; 76 - struct pci_host_bridge *hbrg = NULL; 77 - 78 - for_each_pci_dev(dev) { 79 - hbrg = pci_find_host_bridge(dev->bus); 80 - hbrg->swizzle_irq = swizzle; 81 - hbrg->map_irq = map_irq; 82 - pci_assign_irq(dev); 83 - hbrg->swizzle_irq = NULL; 84 - hbrg->map_irq = NULL; 85 - } 86 - } 87 - EXPORT_SYMBOL_GPL(pci_fixup_irqs);
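With `pci_fixup_irqs()` gone, IRQ mapping happens per-device at probe time via the host bridge's `swizzle_irq`/`map_irq` hooks, so hot-added devices get mapped too. The common swizzle those hooks build on is the standard PCI-to-PCI bridge pin rotation; a sketch:

```c
#include <stdint.h>

/* Standard INTx swizzle at a PCI-to-PCI bridge: each crossing rotates
 * the pin (1..4 = INTA..INTD) by the device number, per the PCI spec,
 * so interrupt load spreads across the four lines. */
static uint8_t swizzle_pin(uint8_t slot, uint8_t pin)
{
    return (((pin - 1) + slot) % 4) + 1;
}
```

Applied from the device up to the root, this yields the effective pin that `map_irq` then translates into a Linux IRQ number.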
+13
drivers/pci/setup-res.c
··· 234 234 return 0; 235 235 } 236 236 237 + /* 238 + * We don't have to worry about legacy ISA devices, so nothing to do here. 239 + * This is marked as __weak because multiple architectures define it; it should 240 + * eventually go away. 241 + */ 242 + resource_size_t __weak pcibios_align_resource(void *data, 243 + const struct resource *res, 244 + resource_size_t size, 245 + resource_size_t align) 246 + { 247 + return res->start; 248 + } 249 + 237 250 static int __pci_assign_resource(struct pci_bus *bus, struct pci_dev *dev, 238 251 int resno, resource_size_t size, resource_size_t align) 239 252 {
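The `__weak` default added above relies on ELF weak symbols: generic code supplies a fallback, and any architecture that defines the same function with normal (strong) linkage silently wins at link time. A compiler-level sketch of the mechanism (gcc/clang on ELF; the function name is made up for the demo):

```c
/* Sketch of the __weak pattern behind pcibios_align_resource: the
 * generic file ships a do-nothing default that an architecture can
 * replace with a strong definition of the same symbol. */
__attribute__((weak)) long align_resource(long start, long align)
{
    (void)align;            /* generic code: nothing to do */
    return start;
}

/* A hypothetical arch override would simply be a non-weak definition:
 *   long align_resource(long start, long align)
 *   { return (start + align - 1) & ~(align - 1); }
 */
```

With no strong definition present, the weak body is used, which is exactly the "nothing to do, should eventually go away" situation the comment in the diff describes.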
+117 -14
drivers/phy/rockchip/phy-rockchip-pcie.c
··· 73 73 struct rockchip_pcie_phy { 74 74 struct rockchip_pcie_data *phy_data; 75 75 struct regmap *reg_base; 76 + struct phy_pcie_instance { 77 + struct phy *phy; 78 + u32 index; 79 + } phys[PHY_MAX_LANE_NUM]; 80 + struct mutex pcie_mutex; 76 81 struct reset_control *phy_rst; 77 82 struct clk *clk_pciephy_ref; 83 + int pwr_cnt; 84 + int init_cnt; 78 85 }; 86 + 87 + static struct rockchip_pcie_phy *to_pcie_phy(struct phy_pcie_instance *inst) 88 + { 89 + return container_of(inst, struct rockchip_pcie_phy, 90 + phys[inst->index]); 91 + } 92 + 93 + static struct phy *rockchip_pcie_phy_of_xlate(struct device *dev, 94 + struct of_phandle_args *args) 95 + { 96 + struct rockchip_pcie_phy *rk_phy = dev_get_drvdata(dev); 97 + 98 + if (args->args_count == 0) 99 + return rk_phy->phys[0].phy; 100 + 101 + if (WARN_ON(args->args[0] >= PHY_MAX_LANE_NUM)) 102 + return ERR_PTR(-ENODEV); 103 + 104 + return rk_phy->phys[args->args[0]].phy; 105 + } 106 + 79 107 80 108 static inline void phy_wr_cfg(struct rockchip_pcie_phy *rk_phy, 81 109 u32 addr, u32 data) ··· 144 116 145 117 static int rockchip_pcie_phy_power_off(struct phy *phy) 146 118 { 147 - struct rockchip_pcie_phy *rk_phy = phy_get_drvdata(phy); 119 + struct phy_pcie_instance *inst = phy_get_drvdata(phy); 120 + struct rockchip_pcie_phy *rk_phy = to_pcie_phy(inst); 148 121 int err = 0; 122 + 123 + mutex_lock(&rk_phy->pcie_mutex); 124 + 125 + regmap_write(rk_phy->reg_base, 126 + rk_phy->phy_data->pcie_laneoff, 127 + HIWORD_UPDATE(PHY_LANE_IDLE_OFF, 128 + PHY_LANE_IDLE_MASK, 129 + PHY_LANE_IDLE_A_SHIFT + inst->index)); 130 + 131 + if (--rk_phy->pwr_cnt) 132 + goto err_out; 149 133 150 134 err = reset_control_assert(rk_phy->phy_rst); 151 135 if (err) { 152 136 dev_err(&phy->dev, "assert phy_rst err %d\n", err); 153 - return err; 137 + goto err_restore; 154 138 } 155 139 140 + err_out: 141 + mutex_unlock(&rk_phy->pcie_mutex); 156 142 return 0; 143 + 144 + err_restore: 145 + rk_phy->pwr_cnt++; 146 + regmap_write(rk_phy->reg_base, 
147 + rk_phy->phy_data->pcie_laneoff, 148 + HIWORD_UPDATE(!PHY_LANE_IDLE_OFF, 149 + PHY_LANE_IDLE_MASK, 150 + PHY_LANE_IDLE_A_SHIFT + inst->index)); 151 + mutex_unlock(&rk_phy->pcie_mutex); 152 + return err; 157 153 } 158 154 159 155 static int rockchip_pcie_phy_power_on(struct phy *phy) 160 156 { 161 - struct rockchip_pcie_phy *rk_phy = phy_get_drvdata(phy); 157 + struct phy_pcie_instance *inst = phy_get_drvdata(phy); 158 + struct rockchip_pcie_phy *rk_phy = to_pcie_phy(inst); 162 159 int err = 0; 163 160 u32 status; 164 161 unsigned long timeout; 165 162 163 + mutex_lock(&rk_phy->pcie_mutex); 164 + 165 + if (rk_phy->pwr_cnt++) 166 + goto err_out; 167 + 166 168 err = reset_control_deassert(rk_phy->phy_rst); 167 169 if (err) { 168 170 dev_err(&phy->dev, "deassert phy_rst err %d\n", err); 169 - return err; 171 + goto err_pwr_cnt; 170 172 } 171 173 172 174 regmap_write(rk_phy->reg_base, rk_phy->phy_data->pcie_conf, 173 175 HIWORD_UPDATE(PHY_CFG_PLL_LOCK, 174 176 PHY_CFG_ADDR_MASK, 175 177 PHY_CFG_ADDR_SHIFT)); 178 + 179 + regmap_write(rk_phy->reg_base, 180 + rk_phy->phy_data->pcie_laneoff, 181 + HIWORD_UPDATE(!PHY_LANE_IDLE_OFF, 182 + PHY_LANE_IDLE_MASK, 183 + PHY_LANE_IDLE_A_SHIFT + inst->index)); 176 184 177 185 /* 178 186 * No documented timeout value for phy operation below, ··· 278 214 goto err_pll_lock; 279 215 } 280 216 217 + err_out: 218 + mutex_unlock(&rk_phy->pcie_mutex); 281 219 return 0; 282 220 283 221 err_pll_lock: 284 222 reset_control_assert(rk_phy->phy_rst); 223 + err_pwr_cnt: 224 + rk_phy->pwr_cnt--; 225 + mutex_unlock(&rk_phy->pcie_mutex); 285 226 return err; 286 227 } 287 228 288 229 static int rockchip_pcie_phy_init(struct phy *phy) 289 230 { 290 - struct rockchip_pcie_phy *rk_phy = phy_get_drvdata(phy); 231 + struct phy_pcie_instance *inst = phy_get_drvdata(phy); 232 + struct rockchip_pcie_phy *rk_phy = to_pcie_phy(inst); 291 233 int err = 0; 234 + 235 + mutex_lock(&rk_phy->pcie_mutex); 236 + 237 + if (rk_phy->init_cnt++) 238 + goto err_out; 292 
239 293 240 err = clk_prepare_enable(rk_phy->clk_pciephy_ref); 294 241 if (err) { ··· 313 238 goto err_reset; 314 239 } 315 240 316 - return err; 241 + err_out: 242 + mutex_unlock(&rk_phy->pcie_mutex); 243 + return 0; 317 244 318 245 err_reset: 246 + 319 247 clk_disable_unprepare(rk_phy->clk_pciephy_ref); 320 248 err_refclk: 249 + rk_phy->init_cnt--; 250 + mutex_unlock(&rk_phy->pcie_mutex); 321 251 return err; 322 252 } 323 253 324 254 static int rockchip_pcie_phy_exit(struct phy *phy) 325 255 { 326 - struct rockchip_pcie_phy *rk_phy = phy_get_drvdata(phy); 256 + struct phy_pcie_instance *inst = phy_get_drvdata(phy); 257 + struct rockchip_pcie_phy *rk_phy = to_pcie_phy(inst); 258 + 259 + mutex_lock(&rk_phy->pcie_mutex); 260 + 261 + if (--rk_phy->init_cnt) 262 + goto err_init_cnt; 327 263 328 264 clk_disable_unprepare(rk_phy->clk_pciephy_ref); 329 265 266 + err_init_cnt: 267 + mutex_unlock(&rk_phy->pcie_mutex); 330 268 return 0; 331 269 } 332 270 ··· 371 283 { 372 284 struct device *dev = &pdev->dev; 373 285 struct rockchip_pcie_phy *rk_phy; 374 - struct phy *generic_phy; 375 286 struct phy_provider *phy_provider; 376 287 struct regmap *grf; 377 288 const struct of_device_id *of_id; 289 + int i; 290 + u32 phy_num; 378 291 379 292 grf = syscon_node_to_regmap(dev->parent->of_node); 380 293 if (IS_ERR(grf)) { ··· 394 305 rk_phy->phy_data = (struct rockchip_pcie_data *)of_id->data; 395 306 rk_phy->reg_base = grf; 396 307 308 + mutex_init(&rk_phy->pcie_mutex); 309 + 397 310 rk_phy->phy_rst = devm_reset_control_get(dev, "phy"); 398 311 if (IS_ERR(rk_phy->phy_rst)) { 399 312 if (PTR_ERR(rk_phy->phy_rst) != -EPROBE_DEFER) ··· 410 319 return PTR_ERR(rk_phy->clk_pciephy_ref); 411 320 } 412 321 413 - generic_phy = devm_phy_create(dev, dev->of_node, &ops); 414 - if (IS_ERR(generic_phy)) { 415 - dev_err(dev, "failed to create PHY\n"); 416 - return PTR_ERR(generic_phy); 322 + /* parse #phy-cells to see if it's legacy PHY model */ 323 + if (of_property_read_u32(dev->of_node, 
"#phy-cells", &phy_num)) 324 + return -ENOENT; 325 + 326 + phy_num = (phy_num == 0) ? 1 : PHY_MAX_LANE_NUM; 327 + dev_dbg(dev, "phy number is %d\n", phy_num); 328 + 329 + for (i = 0; i < phy_num; i++) { 330 + rk_phy->phys[i].phy = devm_phy_create(dev, dev->of_node, &ops); 331 + if (IS_ERR(rk_phy->phys[i].phy)) { 332 + dev_err(dev, "failed to create PHY%d\n", i); 333 + return PTR_ERR(rk_phy->phys[i].phy); 334 + } 335 + rk_phy->phys[i].index = i; 336 + phy_set_drvdata(rk_phy->phys[i].phy, &rk_phy->phys[i]); 417 337 } 418 338 419 - phy_set_drvdata(generic_phy, rk_phy); 420 - phy_provider = devm_of_phy_provider_register(dev, of_phy_simple_xlate); 339 + platform_set_drvdata(pdev, rk_phy); 340 + phy_provider = devm_of_phy_provider_register(dev, 341 + rockchip_pcie_phy_of_xlate); 421 342 422 343 return PTR_ERR_OR_ZERO(phy_provider); 423 344 }
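The per-lane Rockchip PHY rework means several `phy` instances now share one refclk and one reset, so the init/power paths are guarded by `pcie_mutex` and reference counts: only the 0→1 transition touches hardware and only the 1→0 transition tears it down. A single-threaded model of just the counting (the kernel additionally holds the mutex around each transition):

```c
/* Model of the shared-resource refcounting: 'enables' counts how many
 * times the clock was actually enabled, which should stay at one no
 * matter how many lanes come and go. */
struct shared_phy {
    int init_cnt;
    int enables;
};

static void phy_init(struct shared_phy *p)
{
    if (p->init_cnt++)
        return;             /* another lane already initialized it */
    p->enables++;           /* clk_prepare_enable() would go here */
}

static void phy_exit(struct shared_phy *p)
{
    if (--p->init_cnt)
        return;             /* other lanes still active */
    p->enables--;           /* clk_disable_unprepare() would go here */
}

/* Two lanes init, then exit; returns enables mid-sequence and at the
 * end, packed as two decimal digits for easy checking. */
static int demo_sequence(void)
{
    struct shared_phy p = { 0, 0 };

    phy_init(&p);           /* lane 0: enables hardware */
    phy_init(&p);           /* lane 1: refcount only */
    int mid = p.enables;
    phy_exit(&p);           /* lane 1 leaves: still enabled */
    phy_exit(&p);           /* lane 0 leaves: disabled */
    return mid * 10 + p.enables;
}
```

The same shape explains the `err_restore` path in `rockchip_pcie_phy_power_off()` above: if the shared reset fails to assert, the count is incremented back so a later caller still sees a consistent state.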
+2 -3
include/linux/aer.h
··· 39 39 }; 40 40 41 41 #if defined(CONFIG_PCIEAER) 42 - /* pci-e port driver needs this function to enable aer */ 42 + /* PCIe port driver needs this function to enable AER */ 43 43 int pci_enable_pcie_error_reporting(struct pci_dev *dev); 44 44 int pci_disable_pcie_error_reporting(struct pci_dev *dev); 45 45 int pci_cleanup_aer_uncorrect_error_status(struct pci_dev *dev); ··· 67 67 struct aer_capability_regs *aer); 68 68 int cper_severity_to_aer(int cper_severity); 69 69 void aer_recover_queue(int domain, unsigned int bus, unsigned int devfn, 70 - int severity, 71 - struct aer_capability_regs *aer_regs); 70 + int severity, struct aer_capability_regs *aer_regs); 72 71 #endif //_AER_H_ 73 72
+7 -1
include/linux/pci-epc.h
··· 62 62 * @size: the size of the PCI address space 63 63 * @bitmap: bitmap to manage the PCI address space 64 64 * @pages: number of bits representing the address region 65 + * @page_size: size of each page 65 66 */ 66 67 struct pci_epc_mem { 67 68 phys_addr_t phys_base; 68 69 size_t size; 69 70 unsigned long *bitmap; 71 + size_t page_size; 70 72 int pages; 71 73 }; 72 74 ··· 99 97 __pci_epc_create((dev), (ops), THIS_MODULE) 100 98 #define devm_pci_epc_create(dev, ops) \ 101 99 __devm_pci_epc_create((dev), (ops), THIS_MODULE) 100 + 101 + #define pci_epc_mem_init(epc, phys_addr, size) \ 102 + __pci_epc_mem_init((epc), (phys_addr), (size), PAGE_SIZE) 102 103 103 104 static inline void epc_set_drvdata(struct pci_epc *epc, void *data) 104 105 { ··· 140 135 struct pci_epc *pci_epc_get(const char *epc_name); 141 136 void pci_epc_put(struct pci_epc *epc); 142 137 143 - int pci_epc_mem_init(struct pci_epc *epc, phys_addr_t phys_addr, size_t size); 138 + int __pci_epc_mem_init(struct pci_epc *epc, phys_addr_t phys_addr, size_t size, 139 + size_t page_size); 144 140 void pci_epc_mem_exit(struct pci_epc *epc); 145 141 void __iomem *pci_epc_mem_alloc_addr(struct pci_epc *epc, 146 142 phys_addr_t *phys_addr, size_t size);
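With the new `page_size` field, `pci_epc_mem_init()` becomes a `PAGE_SIZE` wrapper around `__pci_epc_mem_init()`, and the outbound window is carved into `page_size` chunks tracked one-bit-per-page in the bitmap. A sketch of the round-up and first-fit allocation that such a bitmap supports (userspace C with a fixed 64-page window; names and sizes are illustrative, not the EPC API):

```c
#include <stdint.h>
#include <string.h>

#define MEM_PAGES 64   /* sketch: fixed window of 64 pages */

/* Model of the EPC address window: one bitmap bit per page_size chunk. */
struct epc_mem {
    uint64_t base;
    size_t page_size;
    unsigned char bitmap[MEM_PAGES / 8];
};

static void epc_mem_init(struct epc_mem *m, uint64_t base, size_t page_size)
{
    m->base = base;
    m->page_size = page_size;
    memset(m->bitmap, 0, sizeof(m->bitmap));
}

/* First-fit: round size up to whole pages, find a free run, mark it,
 * and return the physical address (0 on failure). */
static uint64_t epc_mem_alloc(struct epc_mem *m, size_t size)
{
    size_t pages = (size + m->page_size - 1) / m->page_size;

    for (size_t start = 0; start + pages <= MEM_PAGES; start++) {
        size_t n;

        for (n = 0; n < pages; n++)
            if (m->bitmap[(start + n) / 8] & (1 << ((start + n) % 8)))
                break;
        if (n < pages)
            continue;       /* run not free; try the next start */
        for (n = 0; n < pages; n++)
            m->bitmap[(start + n) / 8] |= 1 << ((start + n) % 8);
        return m->base + start * m->page_size;
    }
    return 0;
}

/* Demo: a 6000-byte request rounds up to two 4 KiB pages, so the next
 * allocation lands two pages in. */
static int demo_alloc(void)
{
    struct epc_mem m;
    epc_mem_init(&m, 0x80000000ULL, 4096);
    uint64_t a = epc_mem_alloc(&m, 6000);
    uint64_t b = epc_mem_alloc(&m, 100);
    return a == 0x80000000ULL && b == 0x80000000ULL + 2 * 4096;
}
```

Making the page size a parameter is the point of the header change: an endpoint controller whose outbound ATU has a coarser granularity than `PAGE_SIZE` can pass its own value to `__pci_epc_mem_init()`.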
+3 -8
include/linux/pci-epf.h
···
 #include <linux/device.h>
 #include <linux/mod_devicetable.h>
+#include <linux/pci.h>
 
 struct pci_epf;
-
-enum pci_interrupt_pin {
-	PCI_INTERRUPT_UNKNOWN,
-	PCI_INTERRUPT_INTA,
-	PCI_INTERRUPT_INTB,
-	PCI_INTERRUPT_INTC,
-	PCI_INTERRUPT_INTD,
-};
 
 enum pci_barno {
 	BAR_0,
···
 	return dev_get_drvdata(&epf->dev);
 }
 
+const struct pci_epf_device_id *
+pci_epf_match_device(const struct pci_epf_device_id *id, struct pci_epf *epf);
 struct pci_epf *pci_epf_create(const char *name);
 void pci_epf_destroy(struct pci_epf *epf);
 int __pci_epf_register_driver(struct pci_epf_driver *driver,
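The new `pci_epf_match_device()` export lets an endpoint-function driver look up the id-table entry that matched its function. The matching itself is a name walk over a sentinel-terminated table; the following standalone sketch (simplified structure and helper name are ours, not the kernel's) shows the shape of that lookup:

```c
#include <stddef.h>
#include <string.h>

/* Simplified stand-in for struct pci_epf_device_id: match by name only. */
struct epf_device_id {
	const char *name;
};

/* Walk the table until the empty-name sentinel; return the matching
 * entry or NULL, mirroring what pci_epf_match_device() returns. */
static const struct epf_device_id *
epf_match(const struct epf_device_id *id, const char *epf_name)
{
	while (id && id->name && id->name[0]) {
		if (strcmp(id->name, epf_name) == 0)
			return id;
		id++;
	}
	return NULL;
}

/* Example table, terminated by an empty-name sentinel entry. */
static const struct epf_device_id sample_ids[] = {
	{ .name = "pci_epf_test" },
	{ .name = "" },
};
```
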
+57 -5
include/linux/pci.h
···
 	DEVICE_COUNT_RESOURCE = PCI_NUM_RESOURCES,
 };
 
+/**
+ * enum pci_interrupt_pin - PCI INTx interrupt values
+ * @PCI_INTERRUPT_UNKNOWN: Unknown or unassigned interrupt
+ * @PCI_INTERRUPT_INTA: PCI INTA pin
+ * @PCI_INTERRUPT_INTB: PCI INTB pin
+ * @PCI_INTERRUPT_INTC: PCI INTC pin
+ * @PCI_INTERRUPT_INTD: PCI INTD pin
+ *
+ * Corresponds to values for legacy PCI INTx interrupts, as can be found in the
+ * PCI_INTERRUPT_PIN register.
+ */
+enum pci_interrupt_pin {
+	PCI_INTERRUPT_UNKNOWN,
+	PCI_INTERRUPT_INTA,
+	PCI_INTERRUPT_INTB,
+	PCI_INTERRUPT_INTC,
+	PCI_INTERRUPT_INTD,
+};
+
+/* The number of legacy PCI INTx interrupts */
+#define PCI_NUM_INTX	4
+
 /*
  * pci_power_t values must match the bits in the Capabilities PME_Support
  * and Control/Status PowerState fields in the Power Management capability.
···
 	void		*release_data;
 	struct msi_controller *msi;
 	unsigned int	ignore_reset_delay:1;	/* for entire hierarchy */
+	unsigned int	no_ext_tags:1;		/* no Extended Tags */
 	/* Resource alignment requirements */
 	resource_size_t (*align_resource)(struct pci_dev *dev,
 			const struct resource *res,
···
 resource_size_t pcibios_align_resource(void *, const struct resource *,
 				resource_size_t,
 				resource_size_t);
-void pcibios_update_irq(struct pci_dev *, int irq);
 
 /* Weak but can be overriden by arch */
 void pci_fixup_cardbus(struct pci_bus *);
···
 void pci_assign_unassigned_root_bus_resources(struct pci_bus *bus);
 void pdev_enable_device(struct pci_dev *);
 int pci_enable_resources(struct pci_dev *, int mask);
-void pci_fixup_irqs(u8 (*)(struct pci_dev *, u8 *),
-		    int (*)(const struct pci_dev *, u8, u8));
 void pci_assign_irq(struct pci_dev *dev);
 struct resource *pci_find_resource(struct pci_dev *dev, struct resource *res);
 #define HAVE_PCI_REQ_REGIONS	2
···
 {
 	return pci_alloc_irq_vectors_affinity(dev, min_vecs, max_vecs, flags,
 					      NULL);
+}
+
+/**
+ * pci_irqd_intx_xlate() - Translate PCI INTx value to an IRQ domain hwirq
+ * @d: the INTx IRQ domain
+ * @node: the DT node for the device whose interrupt we're translating
+ * @intspec: the interrupt specifier data from the DT
+ * @intsize: the number of entries in @intspec
+ * @out_hwirq: pointer at which to write the hwirq number
+ * @out_type: pointer at which to write the interrupt type
+ *
+ * Translate a PCI INTx interrupt number from device tree in the range 1-4, as
+ * stored in the standard PCI_INTERRUPT_PIN register, to a value in the range
+ * 0-3 suitable for use in a 4 entry IRQ domain. That is, subtract one from the
+ * INTx value to obtain the hwirq number.
+ *
+ * Returns 0 on success, or -EINVAL if the interrupt specifier is out of range.
+ */
+static inline int pci_irqd_intx_xlate(struct irq_domain *d,
+				      struct device_node *node,
+				      const u32 *intspec,
+				      unsigned int intsize,
+				      unsigned long *out_hwirq,
+				      unsigned int *out_type)
+{
+	const u32 intx = intspec[0];
+
+	if (intx < PCI_INTERRUPT_INTA || intx > PCI_INTERRUPT_INTD)
+		return -EINVAL;
+
+	*out_hwirq = intx - PCI_INTERRUPT_INTA;
+	return 0;
 }
 
 #ifdef CONFIG_PCIEPORTBUS
···
 /**
  * pci_vpd_srdt_size - Extracts the Small Resource Data Type length
- * @lrdt: Pointer to the beginning of the Small Resource Data Type tag
+ * @srdt: Pointer to the beginning of the Small Resource Data Type tag
  *
  * Returns the extracted Small Resource Data Type length.
  */
···
 /**
  * pci_vpd_srdt_tag - Extracts the Small Resource Data Type Tag Item
- * @lrdt: Pointer to the beginning of the Small Resource Data Type tag
+ * @srdt: Pointer to the beginning of the Small Resource Data Type tag
  *
  * Returns the extracted Small Resource Data Type Tag Item.
  */
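The arithmetic behind the new `pci_irqd_intx_xlate()` helper is simple but easy to get off-by-one: device tree encodes INTA..INTD as 1..4 (the `PCI_INTERRUPT_PIN` convention), while a 4-entry INTx IRQ domain indexes hwirqs 0..3. A standalone sketch of that translation (kernel types and the `irq_domain` plumbing omitted; `-1` stands in for `-EINVAL`):

```c
/* Values from the PCI_INTERRUPT_PIN convention: INTA..INTD are 1..4. */
#define PCI_INTERRUPT_INTA	1
#define PCI_INTERRUPT_INTD	4

/* Return the 0-based hwirq for a 1-based INTx value, or -1 if the
 * value is out of range (the kernel helper returns -EINVAL). */
static long intx_to_hwirq(unsigned int intx)
{
	if (intx < PCI_INTERRUPT_INTA || intx > PCI_INTERRUPT_INTD)
		return -1;
	return (long)(intx - PCI_INTERRUPT_INTA);
}
```

Host-bridge drivers can plug the real helper straight into their `struct irq_domain_ops` as the `.xlate` callback, which is what the unified INTx support in this series does.
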
+3 -3
include/linux/pcieport_if.h
···
 	dev->priv_data = data;
 }
 
-static inline void* get_service_data(struct pcie_device *dev)
+static inline void *get_service_data(struct pcie_device *dev)
 {
 	return dev->priv_data;
 }
···
 	int (*suspend) (struct pcie_device *dev);
 	int (*resume) (struct pcie_device *dev);
 
-	/* Service Error Recovery Handler */
-	const struct pci_error_handlers *err_handler;
+	/* Device driver may resume normal operations */
+	void (*error_resume)(struct pci_dev *dev);
 
 	/* Link Reset Capability - AER service driver specific */
 	pci_ers_result_t (*reset_link) (struct pci_dev *dev);
+24 -18
include/uapi/linux/pci_regs.h
···
 #define  PCI_EXP_DEVSTA_URD	0x0008	/* Unsupported Request Detected */
 #define  PCI_EXP_DEVSTA_AUXPD	0x0010	/* AUX Power Detected */
 #define  PCI_EXP_DEVSTA_TRPND	0x0020	/* Transactions Pending */
+#define PCI_CAP_EXP_RC_ENDPOINT_SIZEOF_V1	12	/* v1 endpoints without link end here */
 #define PCI_EXP_LNKCAP		12	/* Link Capabilities */
 #define  PCI_EXP_LNKCAP_SLS	0x0000000f /* Supported Link Speeds */
 #define  PCI_EXP_LNKCAP_SLS_2_5GB 0x00000001 /* LNKCAP2 SLS Vector bit 0 */
···
 #define  PCI_EXP_LNKSTA_DLLLA	0x2000	/* Data Link Layer Link Active */
 #define  PCI_EXP_LNKSTA_LBMS	0x4000	/* Link Bandwidth Management Status */
 #define  PCI_EXP_LNKSTA_LABS	0x8000	/* Link Autonomous Bandwidth Status */
-#define PCI_CAP_EXP_ENDPOINT_SIZEOF_V1	20	/* v1 endpoints end here */
+#define PCI_CAP_EXP_ENDPOINT_SIZEOF_V1	20	/* v1 endpoints with link end here */
 #define PCI_EXP_SLTCAP		20	/* Slot Capabilities */
 #define  PCI_EXP_SLTCAP_ABP	0x00000001 /* Attention Button Present */
 #define  PCI_EXP_SLTCAP_PCP	0x00000002 /* Power Controller Present */
···
 #define  PCI_EXP_DEVCTL2_OBFF_MSGB_EN	0x4000	/* Enable OBFF Message type B */
 #define  PCI_EXP_DEVCTL2_OBFF_WAKE_EN	0x6000	/* OBFF using WAKE# signaling */
 #define PCI_EXP_DEVSTA2		42	/* Device Status 2 */
-#define PCI_CAP_EXP_ENDPOINT_SIZEOF_V2	44	/* v2 endpoints end here */
+#define PCI_CAP_EXP_RC_ENDPOINT_SIZEOF_V2	44	/* v2 endpoints without link end here */
 #define PCI_EXP_LNKCAP2		44	/* Link Capabilities 2 */
 #define  PCI_EXP_LNKCAP2_SLS_2_5GB	0x00000002 /* Supported Speed 2.5GT/s */
 #define  PCI_EXP_LNKCAP2_SLS_5_0GB	0x00000004 /* Supported Speed 5.0GT/s */
···
 #define  PCI_EXP_LNKCAP2_CROSSLINK	0x00000100 /* Crosslink supported */
 #define PCI_EXP_LNKCTL2		48	/* Link Control 2 */
 #define PCI_EXP_LNKSTA2		50	/* Link Status 2 */
+#define PCI_CAP_EXP_ENDPOINT_SIZEOF_V2	52	/* v2 endpoints with link end here */
 #define PCI_EXP_SLTCAP2		52	/* Slot Capabilities 2 */
 #define PCI_EXP_SLTCTL2		56	/* Slot Control 2 */
 #define PCI_EXP_SLTSTA2		58	/* Slot Status 2 */
···
 #define  PCI_ERR_CAP_ECRC_CHKE	0x00000100	/* ECRC Check Enable */
 #define PCI_ERR_HEADER_LOG	28	/* Header Log Register (16 bytes) */
 #define PCI_ERR_ROOT_COMMAND	44	/* Root Error Command */
-/* Correctable Err Reporting Enable */
-#define PCI_ERR_ROOT_CMD_COR_EN		0x00000001
-/* Non-fatal Err Reporting Enable */
-#define PCI_ERR_ROOT_CMD_NONFATAL_EN	0x00000002
-/* Fatal Err Reporting Enable */
-#define PCI_ERR_ROOT_CMD_FATAL_EN	0x00000004
+#define  PCI_ERR_ROOT_CMD_COR_EN	0x00000001 /* Correctable Err Reporting Enable */
+#define  PCI_ERR_ROOT_CMD_NONFATAL_EN	0x00000002 /* Non-Fatal Err Reporting Enable */
+#define  PCI_ERR_ROOT_CMD_FATAL_EN	0x00000004 /* Fatal Err Reporting Enable */
 #define PCI_ERR_ROOT_STATUS	48
-#define PCI_ERR_ROOT_COR_RCV		0x00000001 /* ERR_COR Received */
-/* Multi ERR_COR Received */
-#define PCI_ERR_ROOT_MULTI_COR_RCV	0x00000002
-/* ERR_FATAL/NONFATAL Received */
-#define PCI_ERR_ROOT_UNCOR_RCV		0x00000004
-/* Multi ERR_FATAL/NONFATAL Received */
-#define PCI_ERR_ROOT_MULTI_UNCOR_RCV	0x00000008
-#define PCI_ERR_ROOT_FIRST_FATAL	0x00000010 /* First Fatal */
-#define PCI_ERR_ROOT_NONFATAL_RCV	0x00000020 /* Non-Fatal Received */
-#define PCI_ERR_ROOT_FATAL_RCV		0x00000040 /* Fatal Received */
+#define  PCI_ERR_ROOT_COR_RCV		0x00000001 /* ERR_COR Received */
+#define  PCI_ERR_ROOT_MULTI_COR_RCV	0x00000002 /* Multiple ERR_COR */
+#define  PCI_ERR_ROOT_UNCOR_RCV		0x00000004 /* ERR_FATAL/NONFATAL */
+#define  PCI_ERR_ROOT_MULTI_UNCOR_RCV	0x00000008 /* Multiple FATAL/NONFATAL */
+#define  PCI_ERR_ROOT_FIRST_FATAL	0x00000010 /* First UNC is Fatal */
+#define  PCI_ERR_ROOT_NONFATAL_RCV	0x00000020 /* Non-Fatal Received */
+#define  PCI_ERR_ROOT_FATAL_RCV		0x00000040 /* Fatal Received */
 #define PCI_ERR_ROOT_ERR_SRC	52	/* Error Source Identification */
 
 /* Virtual Channel */
···
 #define PCI_EXP_DPC_CAP_RP_EXT		0x20	/* Root Port Extensions for DPC */
 #define PCI_EXP_DPC_CAP_POISONED_TLP	0x40	/* Poisoned TLP Egress Blocking Supported */
 #define PCI_EXP_DPC_CAP_SW_TRIGGER	0x80	/* Software Triggering Supported */
+#define PCI_EXP_DPC_RP_PIO_LOG_SIZE	0xF00	/* RP PIO log size */
 #define PCI_EXP_DPC_CAP_DL_ACTIVE	0x1000	/* ERR_COR signal on DL_Active supported */
 
 #define PCI_EXP_DPC_CTL			6	/* DPC control */
···
 #define PCI_EXP_DPC_RP_BUSY		0x10	/* Root Port Busy */
 
 #define PCI_EXP_DPC_SOURCE_ID		10	/* DPC Source Identifier */
+
+#define PCI_EXP_DPC_RP_PIO_STATUS	 0x0C	/* RP PIO Status */
+#define PCI_EXP_DPC_RP_PIO_MASK		 0x10	/* RP PIO MASK */
+#define PCI_EXP_DPC_RP_PIO_SEVERITY	 0x14	/* RP PIO Severity */
+#define PCI_EXP_DPC_RP_PIO_SYSERROR	 0x18	/* RP PIO SysError */
+#define PCI_EXP_DPC_RP_PIO_EXCEPTION	 0x1C	/* RP PIO Exception */
+#define PCI_EXP_DPC_RP_PIO_HEADER_LOG	 0x20	/* RP PIO Header Log */
+#define PCI_EXP_DPC_RP_PIO_IMPSPEC_LOG	 0x30	/* RP PIO ImpSpec Log */
+#define PCI_EXP_DPC_RP_PIO_TLPPREFIX_LOG 0x34	/* RP PIO TLP Prefix Log */
 
 /* Precision Time Measurement */
 #define PCI_PTM_CAP			0x04	    /* PTM Capability */
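The new RP PIO definitions feed the enhanced DPC error reporting mentioned in the pull summary: the DPC driver can mask the capability register with `PCI_EXP_DPC_RP_PIO_LOG_SIZE` to learn how many DWORDs of RP PIO log the Root Port implements, then dump the status/mask/severity and header-log registers at the offsets above. A minimal sketch of the mask extraction (the shift of 8 is inferred from the `0xF00` mask, bits 11:8; the helper name is ours):

```c
#include <stdint.h>

/* Bits 11:8 of the DPC Capability register hold the RP PIO log size. */
#define PCI_EXP_DPC_RP_PIO_LOG_SIZE	0xF00

/* Extract the number of RP PIO log DWORDs from a DPC Capability value. */
static unsigned int dpc_rp_pio_log_size(uint16_t dpc_cap)
{
	return (dpc_cap & PCI_EXP_DPC_RP_PIO_LOG_SIZE) >> 8;
}
```
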
+1
tools/pci/pcitest.c
···
 		"\t-D <dev>		PCI endpoint test device {default: /dev/pci-endpoint-test.0}\n"
 		"\t-b <bar num>	BAR test (bar number between 0..5)\n"
 		"\t-m <msi num>	MSI test (msi number between 1..32)\n"
+		"\t-l		Legacy IRQ test\n"
 		"\t-r			Read buffer test\n"
 		"\t-w			Write buffer test\n"
 		"\t-c			Copy buffer test\n"